Oct 11 07:35:57 localhost kernel: Linux version 5.14.0-621.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025
Oct 11 07:35:57 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 11 07:35:57 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 11 07:35:57 localhost kernel: BIOS-provided physical RAM map:
Oct 11 07:35:57 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 11 07:35:57 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 11 07:35:57 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 11 07:35:57 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 11 07:35:57 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 11 07:35:57 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 11 07:35:57 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 11 07:35:57 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 11 07:35:57 localhost kernel: NX (Execute Disable) protection: active
Oct 11 07:35:57 localhost kernel: APIC: Static calls initialized
Oct 11 07:35:57 localhost kernel: SMBIOS 2.8 present.
Oct 11 07:35:57 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 11 07:35:57 localhost kernel: Hypervisor detected: KVM
Oct 11 07:35:57 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 11 07:35:57 localhost kernel: kvm-clock: using sched offset of 3969803700 cycles
Oct 11 07:35:57 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 11 07:35:57 localhost kernel: tsc: Detected 2800.000 MHz processor
Oct 11 07:35:57 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 11 07:35:57 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 11 07:35:57 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 11 07:35:57 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 11 07:35:57 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 11 07:35:57 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 11 07:35:57 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 11 07:35:57 localhost kernel: Using GB pages for direct mapping
Oct 11 07:35:57 localhost kernel: RAMDISK: [mem 0x2d858000-0x32c23fff]
Oct 11 07:35:57 localhost kernel: ACPI: Early table checksum verification disabled
Oct 11 07:35:57 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 11 07:35:57 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 07:35:57 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 07:35:57 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 07:35:57 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 11 07:35:57 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 07:35:57 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 11 07:35:57 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 11 07:35:57 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 11 07:35:57 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 11 07:35:57 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 11 07:35:57 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 11 07:35:57 localhost kernel: No NUMA configuration found
Oct 11 07:35:57 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 11 07:35:57 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 11 07:35:57 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 11 07:35:57 localhost kernel: Zone ranges:
Oct 11 07:35:57 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 11 07:35:57 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 11 07:35:57 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 11 07:35:57 localhost kernel:   Device   empty
Oct 11 07:35:57 localhost kernel: Movable zone start for each node
Oct 11 07:35:57 localhost kernel: Early memory node ranges
Oct 11 07:35:57 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 11 07:35:57 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 11 07:35:57 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 11 07:35:57 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 11 07:35:57 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 11 07:35:57 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 11 07:35:57 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 11 07:35:57 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 11 07:35:57 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 11 07:35:57 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 11 07:35:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 11 07:35:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 11 07:35:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 11 07:35:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 11 07:35:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 11 07:35:57 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 11 07:35:57 localhost kernel: TSC deadline timer available
Oct 11 07:35:57 localhost kernel: CPU topo: Max. logical packages:   8
Oct 11 07:35:57 localhost kernel: CPU topo: Max. logical dies:       8
Oct 11 07:35:57 localhost kernel: CPU topo: Max. dies per package:   1
Oct 11 07:35:57 localhost kernel: CPU topo: Max. threads per core:   1
Oct 11 07:35:57 localhost kernel: CPU topo: Num. cores per package:     1
Oct 11 07:35:57 localhost kernel: CPU topo: Num. threads per package:   1
Oct 11 07:35:57 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 11 07:35:57 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 11 07:35:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 11 07:35:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 11 07:35:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 11 07:35:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 11 07:35:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 11 07:35:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 11 07:35:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 11 07:35:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 11 07:35:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 11 07:35:57 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 11 07:35:57 localhost kernel: Booting paravirtualized kernel on KVM
Oct 11 07:35:57 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 11 07:35:57 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 11 07:35:57 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 11 07:35:57 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 11 07:35:57 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 11 07:35:57 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 11 07:35:57 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 11 07:35:57 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64", will be passed to user space.
Oct 11 07:35:57 localhost kernel: random: crng init done
Oct 11 07:35:57 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 11 07:35:57 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 11 07:35:57 localhost kernel: Fallback order for Node 0: 0 
Oct 11 07:35:57 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 11 07:35:57 localhost kernel: Policy zone: Normal
Oct 11 07:35:57 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 11 07:35:57 localhost kernel: software IO TLB: area num 8.
Oct 11 07:35:57 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 11 07:35:57 localhost kernel: ftrace: allocating 49162 entries in 193 pages
Oct 11 07:35:57 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 11 07:35:57 localhost kernel: Dynamic Preempt: voluntary
Oct 11 07:35:57 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 11 07:35:57 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 11 07:35:57 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 11 07:35:57 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 11 07:35:57 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 11 07:35:57 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 11 07:35:57 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 11 07:35:57 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 11 07:35:57 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 11 07:35:57 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 11 07:35:57 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 11 07:35:57 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 11 07:35:57 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 11 07:35:57 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 11 07:35:57 localhost kernel: Console: colour VGA+ 80x25
Oct 11 07:35:57 localhost kernel: printk: console [ttyS0] enabled
Oct 11 07:35:57 localhost kernel: ACPI: Core revision 20230331
Oct 11 07:35:57 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 11 07:35:57 localhost kernel: x2apic enabled
Oct 11 07:35:57 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 11 07:35:57 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 11 07:35:57 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct 11 07:35:57 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 11 07:35:57 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 11 07:35:57 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 11 07:35:57 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 11 07:35:57 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 11 07:35:57 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 11 07:35:57 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 11 07:35:57 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 11 07:35:57 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 11 07:35:57 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 11 07:35:57 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 11 07:35:57 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 11 07:35:57 localhost kernel: x86/bugs: return thunk changed
Oct 11 07:35:57 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 11 07:35:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 11 07:35:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 11 07:35:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 11 07:35:57 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 11 07:35:57 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 11 07:35:57 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 11 07:35:57 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 11 07:35:57 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 11 07:35:57 localhost kernel: landlock: Up and running.
Oct 11 07:35:57 localhost kernel: Yama: becoming mindful.
Oct 11 07:35:57 localhost kernel: SELinux:  Initializing.
Oct 11 07:35:57 localhost kernel: LSM support for eBPF active
Oct 11 07:35:57 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 11 07:35:57 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 11 07:35:57 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 11 07:35:57 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 11 07:35:57 localhost kernel: ... version:                0
Oct 11 07:35:57 localhost kernel: ... bit width:              48
Oct 11 07:35:57 localhost kernel: ... generic registers:      6
Oct 11 07:35:57 localhost kernel: ... value mask:             0000ffffffffffff
Oct 11 07:35:57 localhost kernel: ... max period:             00007fffffffffff
Oct 11 07:35:57 localhost kernel: ... fixed-purpose events:   0
Oct 11 07:35:57 localhost kernel: ... event mask:             000000000000003f
Oct 11 07:35:57 localhost kernel: signal: max sigframe size: 1776
Oct 11 07:35:57 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 11 07:35:57 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 11 07:35:57 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 11 07:35:57 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 11 07:35:57 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 11 07:35:57 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 11 07:35:57 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct 11 07:35:57 localhost kernel: node 0 deferred pages initialised in 9ms
Oct 11 07:35:57 localhost kernel: Memory: 7765960K/8388068K available (16384K kernel code, 5784K rwdata, 13864K rodata, 4188K init, 7196K bss, 616204K reserved, 0K cma-reserved)
Oct 11 07:35:57 localhost kernel: devtmpfs: initialized
Oct 11 07:35:57 localhost kernel: x86/mm: Memory block size: 128MB
Oct 11 07:35:57 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 11 07:35:57 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 11 07:35:57 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 11 07:35:57 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 11 07:35:57 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 11 07:35:57 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 11 07:35:57 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 11 07:35:57 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 11 07:35:57 localhost kernel: audit: type=2000 audit(1760168155.621:1): state=initialized audit_enabled=0 res=1
Oct 11 07:35:57 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 11 07:35:57 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 11 07:35:57 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 11 07:35:57 localhost kernel: cpuidle: using governor menu
Oct 11 07:35:57 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 11 07:35:57 localhost kernel: PCI: Using configuration type 1 for base access
Oct 11 07:35:57 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 11 07:35:57 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 11 07:35:57 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 11 07:35:57 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 11 07:35:57 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 11 07:35:57 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 11 07:35:57 localhost kernel: Demotion targets for Node 0: null
Oct 11 07:35:57 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 11 07:35:57 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 11 07:35:57 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 11 07:35:57 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 11 07:35:57 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 11 07:35:57 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 11 07:35:57 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 11 07:35:57 localhost kernel: ACPI: Interpreter enabled
Oct 11 07:35:57 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 11 07:35:57 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 11 07:35:57 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 11 07:35:57 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 11 07:35:57 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 11 07:35:57 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 11 07:35:57 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [3] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [4] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [5] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [6] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [7] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [8] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [9] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [10] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [11] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [12] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [13] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [14] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [15] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [16] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [17] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [18] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [19] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [20] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [21] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [22] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [23] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [24] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [25] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [26] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [27] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [28] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [29] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [30] registered
Oct 11 07:35:57 localhost kernel: acpiphp: Slot [31] registered
Oct 11 07:35:57 localhost kernel: PCI host bridge to bus 0000:00
Oct 11 07:35:57 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 11 07:35:57 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 11 07:35:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 11 07:35:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 11 07:35:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 11 07:35:57 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 11 07:35:57 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 11 07:35:57 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 11 07:35:57 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 11 07:35:57 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 11 07:35:57 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 11 07:35:57 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 11 07:35:57 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 11 07:35:57 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 11 07:35:57 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 11 07:35:57 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 11 07:35:57 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 11 07:35:57 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 11 07:35:57 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 11 07:35:57 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 11 07:35:57 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 11 07:35:57 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 11 07:35:57 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 11 07:35:57 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 11 07:35:57 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 11 07:35:57 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 11 07:35:57 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 11 07:35:57 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 11 07:35:57 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 11 07:35:57 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 11 07:35:57 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 11 07:35:57 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 11 07:35:57 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 11 07:35:57 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 11 07:35:57 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 11 07:35:57 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 11 07:35:57 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 11 07:35:57 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 11 07:35:57 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 11 07:35:57 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 11 07:35:57 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 11 07:35:57 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 11 07:35:57 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 11 07:35:57 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 11 07:35:57 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 11 07:35:57 localhost kernel: iommu: Default domain type: Translated
Oct 11 07:35:57 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 11 07:35:57 localhost kernel: SCSI subsystem initialized
Oct 11 07:35:57 localhost kernel: ACPI: bus type USB registered
Oct 11 07:35:57 localhost kernel: usbcore: registered new interface driver usbfs
Oct 11 07:35:57 localhost kernel: usbcore: registered new interface driver hub
Oct 11 07:35:57 localhost kernel: usbcore: registered new device driver usb
Oct 11 07:35:57 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 11 07:35:57 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 11 07:35:57 localhost kernel: PTP clock support registered
Oct 11 07:35:57 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 11 07:35:57 localhost kernel: NetLabel: Initializing
Oct 11 07:35:57 localhost kernel: NetLabel:  domain hash size = 128
Oct 11 07:35:57 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 11 07:35:57 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 11 07:35:57 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 11 07:35:57 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 11 07:35:57 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 11 07:35:57 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 11 07:35:57 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 11 07:35:57 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 11 07:35:57 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 11 07:35:57 localhost kernel: vgaarb: loaded
Oct 11 07:35:57 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 11 07:35:57 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 11 07:35:57 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 11 07:35:57 localhost kernel: pnp: PnP ACPI init
Oct 11 07:35:57 localhost kernel: pnp 00:03: [dma 2]
Oct 11 07:35:57 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 11 07:35:57 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 11 07:35:57 localhost kernel: NET: Registered PF_INET protocol family
Oct 11 07:35:57 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 11 07:35:57 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 11 07:35:57 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 11 07:35:57 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 11 07:35:57 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 11 07:35:57 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 11 07:35:57 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 11 07:35:57 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 11 07:35:57 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 11 07:35:57 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 11 07:35:57 localhost kernel: NET: Registered PF_XDP protocol family
Oct 11 07:35:57 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 11 07:35:57 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 11 07:35:57 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 11 07:35:57 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 11 07:35:57 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 11 07:35:57 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 11 07:35:57 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 11 07:35:57 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 11 07:35:57 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 100214 usecs
Oct 11 07:35:57 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 11 07:35:57 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 11 07:35:57 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 11 07:35:57 localhost kernel: ACPI: bus type thunderbolt registered
Oct 11 07:35:57 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 11 07:35:57 localhost kernel: Initialise system trusted keyrings
Oct 11 07:35:57 localhost kernel: Key type blacklist registered
Oct 11 07:35:57 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 11 07:35:57 localhost kernel: zbud: loaded
Oct 11 07:35:57 localhost kernel: integrity: Platform Keyring initialized
Oct 11 07:35:57 localhost kernel: integrity: Machine keyring initialized
Oct 11 07:35:57 localhost kernel: Freeing initrd memory: 85808K
Oct 11 07:35:57 localhost kernel: NET: Registered PF_ALG protocol family
Oct 11 07:35:57 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 11 07:35:57 localhost kernel: Key type asymmetric registered
Oct 11 07:35:57 localhost kernel: Asymmetric key parser 'x509' registered
Oct 11 07:35:57 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 11 07:35:57 localhost kernel: io scheduler mq-deadline registered
Oct 11 07:35:57 localhost kernel: io scheduler kyber registered
Oct 11 07:35:57 localhost kernel: io scheduler bfq registered
Oct 11 07:35:57 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 11 07:35:57 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 11 07:35:57 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 11 07:35:57 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 11 07:35:57 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 11 07:35:57 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 11 07:35:57 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 11 07:35:57 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 11 07:35:57 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 11 07:35:57 localhost kernel: Non-volatile memory driver v1.3
Oct 11 07:35:57 localhost kernel: rdac: device handler registered
Oct 11 07:35:57 localhost kernel: hp_sw: device handler registered
Oct 11 07:35:57 localhost kernel: emc: device handler registered
Oct 11 07:35:57 localhost kernel: alua: device handler registered
Oct 11 07:35:57 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 11 07:35:57 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 11 07:35:57 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 11 07:35:57 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 11 07:35:57 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 11 07:35:57 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 11 07:35:57 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 11 07:35:57 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-621.el9.x86_64 uhci_hcd
Oct 11 07:35:57 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 11 07:35:57 localhost kernel: hub 1-0:1.0: USB hub found
Oct 11 07:35:57 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 11 07:35:57 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 11 07:35:57 localhost kernel: usbserial: USB Serial support registered for generic
Oct 11 07:35:57 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 11 07:35:57 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 11 07:35:57 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 11 07:35:57 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 11 07:35:57 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 11 07:35:57 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 11 07:35:57 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-11T07:35:56 UTC (1760168156)
Oct 11 07:35:57 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 11 07:35:57 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 11 07:35:57 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 11 07:35:57 localhost kernel: usbcore: registered new interface driver usbhid
Oct 11 07:35:57 localhost kernel: usbhid: USB HID core driver
Oct 11 07:35:57 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 11 07:35:57 localhost kernel: Initializing XFRM netlink socket
Oct 11 07:35:57 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 11 07:35:57 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 11 07:35:57 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 11 07:35:57 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 11 07:35:57 localhost kernel: Segment Routing with IPv6
Oct 11 07:35:57 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 11 07:35:57 localhost kernel: mpls_gso: MPLS GSO support
Oct 11 07:35:57 localhost kernel: IPI shorthand broadcast: enabled
Oct 11 07:35:57 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 11 07:35:57 localhost kernel: AES CTR mode by8 optimization enabled
Oct 11 07:35:57 localhost kernel: sched_clock: Marking stable (1309001600, 139560770)->(1523055570, -74493200)
Oct 11 07:35:57 localhost kernel: registered taskstats version 1
Oct 11 07:35:57 localhost kernel: Loading compiled-in X.509 certificates
Oct 11 07:35:57 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 11 07:35:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 11 07:35:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 11 07:35:57 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 11 07:35:57 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 11 07:35:57 localhost kernel: Demotion targets for Node 0: null
Oct 11 07:35:57 localhost kernel: page_owner is disabled
Oct 11 07:35:57 localhost kernel: Key type .fscrypt registered
Oct 11 07:35:57 localhost kernel: Key type fscrypt-provisioning registered
Oct 11 07:35:57 localhost kernel: Key type big_key registered
Oct 11 07:35:57 localhost kernel: Key type encrypted registered
Oct 11 07:35:57 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 11 07:35:57 localhost kernel: Loading compiled-in module X.509 certificates
Oct 11 07:35:57 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 11 07:35:57 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 11 07:35:57 localhost kernel: ima: No architecture policies found
Oct 11 07:35:57 localhost kernel: evm: Initialising EVM extended attributes:
Oct 11 07:35:57 localhost kernel: evm: security.selinux
Oct 11 07:35:57 localhost kernel: evm: security.SMACK64 (disabled)
Oct 11 07:35:57 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 11 07:35:57 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 11 07:35:57 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 11 07:35:57 localhost kernel: evm: security.apparmor (disabled)
Oct 11 07:35:57 localhost kernel: evm: security.ima
Oct 11 07:35:57 localhost kernel: evm: security.capability
Oct 11 07:35:57 localhost kernel: evm: HMAC attrs: 0x1
Oct 11 07:35:57 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 11 07:35:57 localhost kernel: Running certificate verification RSA selftest
Oct 11 07:35:57 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 11 07:35:57 localhost kernel: Running certificate verification ECDSA selftest
Oct 11 07:35:57 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 11 07:35:57 localhost kernel: clk: Disabling unused clocks
Oct 11 07:35:57 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 11 07:35:57 localhost kernel: Freeing unused kernel image (initmem) memory: 4188K
Oct 11 07:35:57 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 11 07:35:57 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 472K
Oct 11 07:35:57 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 11 07:35:57 localhost kernel: Run /init as init process
Oct 11 07:35:57 localhost kernel:   with arguments:
Oct 11 07:35:57 localhost kernel:     /init
Oct 11 07:35:57 localhost kernel:   with environment:
Oct 11 07:35:57 localhost kernel:     HOME=/
Oct 11 07:35:57 localhost kernel:     TERM=linux
Oct 11 07:35:57 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64
Oct 11 07:35:57 localhost systemd[1]: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 11 07:35:57 localhost systemd[1]: Detected virtualization kvm.
Oct 11 07:35:57 localhost systemd[1]: Detected architecture x86-64.
Oct 11 07:35:57 localhost systemd[1]: Running in initrd.
Oct 11 07:35:57 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 11 07:35:57 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 11 07:35:57 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 11 07:35:57 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 11 07:35:57 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 11 07:35:57 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 11 07:35:57 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 11 07:35:57 localhost systemd[1]: No hostname configured, using default hostname.
Oct 11 07:35:57 localhost systemd[1]: Hostname set to <localhost>.
Oct 11 07:35:57 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 11 07:35:57 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 11 07:35:57 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 11 07:35:57 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 11 07:35:57 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 11 07:35:57 localhost systemd[1]: Reached target Local File Systems.
Oct 11 07:35:57 localhost systemd[1]: Reached target Path Units.
Oct 11 07:35:57 localhost systemd[1]: Reached target Slice Units.
Oct 11 07:35:57 localhost systemd[1]: Reached target Swaps.
Oct 11 07:35:57 localhost systemd[1]: Reached target Timer Units.
Oct 11 07:35:57 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 11 07:35:57 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 11 07:35:57 localhost systemd[1]: Listening on Journal Socket.
Oct 11 07:35:57 localhost systemd[1]: Listening on udev Control Socket.
Oct 11 07:35:57 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 11 07:35:57 localhost systemd[1]: Reached target Socket Units.
Oct 11 07:35:57 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 11 07:35:57 localhost systemd[1]: Starting Journal Service...
Oct 11 07:35:57 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 11 07:35:57 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 11 07:35:57 localhost systemd[1]: Starting Create System Users...
Oct 11 07:35:57 localhost systemd[1]: Starting Setup Virtual Console...
Oct 11 07:35:57 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 11 07:35:57 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 11 07:35:57 localhost systemd[1]: Finished Create System Users.
Oct 11 07:35:57 localhost systemd-journald[306]: Journal started
Oct 11 07:35:57 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/441009aeb83147c3ab1edc607f9feb6d) is 8.0M, max 153.6M, 145.6M free.
Oct 11 07:35:57 localhost systemd-sysusers[309]: Creating group 'users' with GID 100.
Oct 11 07:35:57 localhost systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Oct 11 07:35:57 localhost systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 11 07:35:57 localhost systemd[1]: Started Journal Service.
Oct 11 07:35:57 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 11 07:35:57 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 11 07:35:57 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 11 07:35:57 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 11 07:35:57 localhost systemd[1]: Finished Setup Virtual Console.
Oct 11 07:35:57 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 11 07:35:57 localhost systemd[1]: Starting dracut cmdline hook...
Oct 11 07:35:57 localhost dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Oct 11 07:35:57 localhost dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 11 07:35:58 localhost systemd[1]: Finished dracut cmdline hook.
Oct 11 07:35:58 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 11 07:35:58 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 11 07:35:58 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 11 07:35:58 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 11 07:35:58 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 11 07:35:58 localhost kernel: RPC: Registered udp transport module.
Oct 11 07:35:58 localhost kernel: RPC: Registered tcp transport module.
Oct 11 07:35:58 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 11 07:35:58 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 11 07:35:58 localhost rpc.statd[442]: Version 2.5.4 starting
Oct 11 07:35:58 localhost rpc.statd[442]: Initializing NSM state
Oct 11 07:35:58 localhost rpc.idmapd[447]: Setting log level to 0
Oct 11 07:35:58 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 11 07:35:58 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 11 07:35:58 localhost systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Oct 11 07:35:58 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 11 07:35:58 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 11 07:35:58 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 11 07:35:58 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 11 07:35:58 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 11 07:35:58 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 11 07:35:58 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 11 07:35:58 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 11 07:35:58 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 11 07:35:58 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 11 07:35:58 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 11 07:35:58 localhost systemd[1]: Reached target Network.
Oct 11 07:35:58 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 11 07:35:58 localhost systemd[1]: Starting dracut initqueue hook...
Oct 11 07:35:58 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 11 07:35:58 localhost systemd[1]: Reached target System Initialization.
Oct 11 07:35:58 localhost systemd[1]: Reached target Basic System.
Oct 11 07:35:58 localhost kernel: libata version 3.00 loaded.
Oct 11 07:35:58 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 11 07:35:58 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 11 07:35:58 localhost kernel: scsi host0: ata_piix
Oct 11 07:35:58 localhost kernel: scsi host1: ata_piix
Oct 11 07:35:58 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 11 07:35:58 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 11 07:35:58 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 11 07:35:58 localhost kernel:  vda: vda1
Oct 11 07:35:58 localhost kernel: ata1: found unknown device (class 0)
Oct 11 07:35:58 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 11 07:35:58 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 11 07:35:58 localhost systemd-udevd[491]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 07:35:58 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 11 07:35:58 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 11 07:35:58 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 11 07:35:58 localhost systemd[1]: Found device /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 11 07:35:58 localhost systemd[1]: Reached target Initrd Root Device.
Oct 11 07:35:59 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 11 07:35:59 localhost systemd[1]: Finished dracut initqueue hook.
Oct 11 07:35:59 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 11 07:35:59 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 11 07:35:59 localhost systemd[1]: Reached target Remote File Systems.
Oct 11 07:35:59 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 11 07:35:59 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 11 07:35:59 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3...
Oct 11 07:35:59 localhost systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Oct 11 07:35:59 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 11 07:35:59 localhost systemd[1]: Mounting /sysroot...
Oct 11 07:35:59 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 11 07:35:59 localhost kernel: XFS (vda1): Mounting V5 Filesystem 9839e2e1-98a2-4594-b609-79d514deb0a3
Oct 11 07:35:59 localhost kernel: XFS (vda1): Ending clean mount
Oct 11 07:35:59 localhost systemd[1]: Mounted /sysroot.
Oct 11 07:35:59 localhost systemd[1]: Reached target Initrd Root File System.
Oct 11 07:35:59 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 11 07:35:59 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 11 07:35:59 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 11 07:35:59 localhost systemd[1]: Reached target Initrd File Systems.
Oct 11 07:35:59 localhost systemd[1]: Reached target Initrd Default Target.
Oct 11 07:35:59 localhost systemd[1]: Starting dracut mount hook...
Oct 11 07:35:59 localhost systemd[1]: Finished dracut mount hook.
Oct 11 07:35:59 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 11 07:35:59 localhost rpc.idmapd[447]: exiting on signal 15
Oct 11 07:35:59 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 11 07:35:59 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 11 07:35:59 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 11 07:36:00 localhost systemd[1]: Stopped target Network.
Oct 11 07:36:00 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 11 07:36:00 localhost systemd[1]: Stopped target Timer Units.
Oct 11 07:36:00 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 11 07:36:00 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 11 07:36:00 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 11 07:36:00 localhost systemd[1]: Stopped target Basic System.
Oct 11 07:36:00 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 11 07:36:00 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 11 07:36:00 localhost systemd[1]: Stopped target Path Units.
Oct 11 07:36:00 localhost systemd[1]: Stopped target Remote File Systems.
Oct 11 07:36:00 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 11 07:36:00 localhost systemd[1]: Stopped target Slice Units.
Oct 11 07:36:00 localhost systemd[1]: Stopped target Socket Units.
Oct 11 07:36:00 localhost systemd[1]: Stopped target System Initialization.
Oct 11 07:36:00 localhost systemd[1]: Stopped target Local File Systems.
Oct 11 07:36:00 localhost systemd[1]: Stopped target Swaps.
Oct 11 07:36:00 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Stopped dracut mount hook.
Oct 11 07:36:00 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 11 07:36:00 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 11 07:36:00 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 11 07:36:00 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 11 07:36:00 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 11 07:36:00 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 11 07:36:00 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 11 07:36:00 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 11 07:36:00 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 11 07:36:00 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 11 07:36:00 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 11 07:36:00 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 11 07:36:00 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Closed udev Control Socket.
Oct 11 07:36:00 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Closed udev Kernel Socket.
Oct 11 07:36:00 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 11 07:36:00 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 11 07:36:00 localhost systemd[1]: Starting Cleanup udev Database...
Oct 11 07:36:00 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 11 07:36:00 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 11 07:36:00 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Stopped Create System Users.
Oct 11 07:36:00 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 11 07:36:00 localhost systemd[1]: Finished Cleanup udev Database.
Oct 11 07:36:00 localhost systemd[1]: Reached target Switch Root.
Oct 11 07:36:00 localhost systemd[1]: Starting Switch Root...
Oct 11 07:36:00 localhost systemd[1]: Switching root.
Oct 11 07:36:00 localhost systemd-journald[306]: Journal stopped
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.135 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.135 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.135 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.135 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.136 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.137 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.138 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.138 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.138 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.138 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.138 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.138 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.138 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.138 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.139 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.140 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.141 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.141 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.141 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.141 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.141 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.141 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.141 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.141 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.142 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.143 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.143 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.143 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.143 162815 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.143 162815 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.143 162815 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.143 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.143 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.144 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.145 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.146 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.146 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.146 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.146 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.146 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.146 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.146 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.147 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.147 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.147 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.147 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.147 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.147 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.147 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.147 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.148 162815 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.148 162815 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.160 162815 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.161 162815 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.161 162815 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.161 162815 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.162 162815 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 11 08:23:15 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.176 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name bec9a5e2-82b0-42f0-811d-08d245f1dc66 (UUID: bec9a5e2-82b0-42f0-811d-08d245f1dc66) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.207 162815 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.208 162815 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.208 162815 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.208 162815 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.211 162815 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.218 162815 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.231 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'bec9a5e2-82b0-42f0-811d-08d245f1dc66'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], external_ids={}, name=bec9a5e2-82b0-42f0-811d-08d245f1dc66, nb_cfg_timestamp=1760170926930, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.232 162815 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f6582ad0f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.233 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.233 162815 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.233 162815 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.234 162815 INFO oslo_service.service [-] Starting 1 workers
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.237 162815 DEBUG oslo_service.service [-] Started child 162924 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.240 162815 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmplkui4esc/privsep.sock']
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.241 162924 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-497889'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.265 162924 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.266 162924 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.266 162924 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.269 162924 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.276 162924 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.281 162924 INFO eventlet.wsgi.server [-] (162924) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 11 08:23:15 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.902 162815 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.903 162815 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmplkui4esc/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.770 162929 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.775 162929 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.777 162929 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.777 162929 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162929
Oct 11 08:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:15.907 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[02689a7c-be2c-4dd7-a7a8-e7844b2f5bfe]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:23:16 compute-0 ceph-mon[74313]: pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:16.439 162929 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:23:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:16.439 162929 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:23:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:16.439 162929 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:23:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:16.986 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa0224e-81b9-4492-9fc5-a436096e91ea]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:23:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:16.989 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, column=external_ids, values=({'neutron:ovn-metadata-id': '9f3e73f3-77a0-5f6f-87b5-2f64343bab3a'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.006 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.012 162815 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.013 162815 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.013 162815 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.013 162815 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.013 162815 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.013 162815 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.014 162815 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.014 162815 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.014 162815 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.014 162815 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.015 162815 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.015 162815 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.015 162815 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.015 162815 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.016 162815 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.016 162815 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.016 162815 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.016 162815 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.017 162815 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.017 162815 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.017 162815 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.017 162815 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.017 162815 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.018 162815 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.018 162815 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.018 162815 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.019 162815 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.019 162815 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.019 162815 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.019 162815 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.019 162815 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.020 162815 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.020 162815 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.020 162815 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.020 162815 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.021 162815 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.021 162815 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.021 162815 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.022 162815 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.022 162815 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.022 162815 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.022 162815 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.023 162815 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.023 162815 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.023 162815 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.023 162815 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.023 162815 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.024 162815 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.024 162815 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.024 162815 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.024 162815 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.025 162815 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.025 162815 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.025 162815 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.025 162815 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.025 162815 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.026 162815 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.026 162815 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.026 162815 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.026 162815 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.027 162815 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.027 162815 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.027 162815 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.027 162815 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.028 162815 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.028 162815 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.028 162815 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.028 162815 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.028 162815 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.029 162815 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.029 162815 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.029 162815 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.029 162815 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.030 162815 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.030 162815 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.030 162815 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.030 162815 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.030 162815 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.031 162815 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.031 162815 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.031 162815 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.031 162815 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.032 162815 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.032 162815 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.032 162815 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.032 162815 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.032 162815 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.033 162815 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.033 162815 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.033 162815 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.033 162815 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.034 162815 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.034 162815 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.034 162815 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.034 162815 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.034 162815 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.035 162815 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.035 162815 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.035 162815 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.035 162815 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.036 162815 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.036 162815 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.036 162815 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.036 162815 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.037 162815 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.037 162815 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.037 162815 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.038 162815 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.038 162815 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.038 162815 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.039 162815 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.039 162815 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.039 162815 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.040 162815 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.040 162815 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.040 162815 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.040 162815 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.040 162815 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.041 162815 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.041 162815 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.041 162815 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.042 162815 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.042 162815 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.042 162815 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.042 162815 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.043 162815 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.043 162815 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.043 162815 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.043 162815 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.044 162815 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.044 162815 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.044 162815 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.044 162815 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.044 162815 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.045 162815 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.045 162815 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.045 162815 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.046 162815 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.046 162815 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.046 162815 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.046 162815 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.047 162815 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.047 162815 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.047 162815 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.047 162815 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.047 162815 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.048 162815 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.048 162815 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.048 162815 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.048 162815 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.049 162815 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.049 162815 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.049 162815 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.049 162815 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.049 162815 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.050 162815 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.050 162815 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.050 162815 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.050 162815 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.050 162815 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.051 162815 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.051 162815 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.051 162815 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.051 162815 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.052 162815 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.052 162815 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.052 162815 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.052 162815 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.052 162815 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.053 162815 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.053 162815 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.053 162815 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.053 162815 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.054 162815 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.054 162815 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.054 162815 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.054 162815 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.055 162815 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.055 162815 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.055 162815 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.055 162815 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.055 162815 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.056 162815 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.056 162815 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.056 162815 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.057 162815 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.057 162815 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.057 162815 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.057 162815 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.057 162815 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.058 162815 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.058 162815 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.058 162815 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.058 162815 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.059 162815 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.059 162815 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.059 162815 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.059 162815 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.059 162815 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.059 162815 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.059 162815 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.060 162815 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.060 162815 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.060 162815 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.060 162815 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.060 162815 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.060 162815 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.060 162815 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.060 162815 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.061 162815 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.061 162815 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.061 162815 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.061 162815 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.061 162815 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.061 162815 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.061 162815 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.062 162815 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.062 162815 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.062 162815 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.062 162815 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.062 162815 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.062 162815 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.062 162815 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.063 162815 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.063 162815 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.063 162815 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.063 162815 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.063 162815 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.063 162815 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.063 162815 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.063 162815 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.064 162815 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.064 162815 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.064 162815 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.064 162815 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.064 162815 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.064 162815 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.064 162815 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.064 162815 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.065 162815 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.065 162815 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.065 162815 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.065 162815 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.065 162815 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.065 162815 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.065 162815 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.066 162815 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.066 162815 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.066 162815 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.066 162815 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.066 162815 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.066 162815 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.066 162815 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.067 162815 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.067 162815 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.067 162815 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.067 162815 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.067 162815 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.067 162815 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.067 162815 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.068 162815 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.068 162815 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.068 162815 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.068 162815 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.068 162815 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.068 162815 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.068 162815 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.068 162815 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.069 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.069 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.069 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.069 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.069 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.069 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.070 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.070 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.070 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.070 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.070 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.070 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.070 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.071 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.071 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.071 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.071 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.071 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.071 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.071 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.071 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.072 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.072 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.072 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.072 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.072 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.072 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.072 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.073 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.073 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.073 162815 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.073 162815 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.073 162815 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.073 162815 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.073 162815 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:23:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:23:17.074 162815 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 11 08:23:17 compute-0 ceph-mon[74313]: pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:23:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:19 compute-0 sshd-session[162934]: Accepted publickey for zuul from 192.168.122.30 port 45906 ssh2: ECDSA SHA256:KKxgUhG08XzjYMLOyvbR+tXItyOnGoLl6Fn32NV5afE
Oct 11 08:23:19 compute-0 systemd-logind[819]: New session 50 of user zuul.
Oct 11 08:23:19 compute-0 systemd[1]: Started Session 50 of User zuul.
Oct 11 08:23:19 compute-0 sshd-session[162934]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 08:23:20 compute-0 ceph-mon[74313]: pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:20 compute-0 python3.9[163087]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 08:23:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:21 compute-0 ceph-mon[74313]: pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:21 compute-0 sudo[163241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rndnxbuhjupsbiomokhmdsoimoxrnedx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171000.835907-34-137476219726962/AnsiballZ_command.py'
Oct 11 08:23:21 compute-0 sudo[163241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:21 compute-0 python3.9[163243]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:23:21 compute-0 sudo[163241]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:23:22 compute-0 sudo[163406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjaoxnqkqadfdtnjovwrnjlwpmisgkjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171002.2158945-45-191049581649828/AnsiballZ_systemd_service.py'
Oct 11 08:23:22 compute-0 sudo[163406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:23 compute-0 python3.9[163408]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 08:23:23 compute-0 systemd[1]: Reloading.
Oct 11 08:23:23 compute-0 systemd-rc-local-generator[163431]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:23:23 compute-0 systemd-sysv-generator[163439]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:23:23 compute-0 sudo[163406]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:24 compute-0 ceph-mon[74313]: pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:24 compute-0 python3.9[163593]: ansible-ansible.builtin.service_facts Invoked
Oct 11 08:23:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:23:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:23:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:23:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:23:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:23:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:23:24 compute-0 network[163610]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 08:23:24 compute-0 network[163611]: 'network-scripts' will be removed from distribution in near future.
Oct 11 08:23:24 compute-0 network[163612]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 08:23:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:25 compute-0 ceph-mon[74313]: pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:27 compute-0 ceph-mon[74313]: pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:23:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:29 compute-0 ceph-mon[74313]: pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:29 compute-0 sudo[163875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwqpntwyfpxjothxgamvxlivrllrjnbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171008.665326-64-198971586020642/AnsiballZ_systemd_service.py'
Oct 11 08:23:29 compute-0 sudo[163875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:29 compute-0 python3.9[163877]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:23:29 compute-0 sudo[163875]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:30 compute-0 sudo[164028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cndwgrbzyuuhhvzfmtmucwimqvtzpgiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171009.6689577-64-216822334406614/AnsiballZ_systemd_service.py'
Oct 11 08:23:30 compute-0 sudo[164028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:30 compute-0 python3.9[164030]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:23:30 compute-0 sudo[164028]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:30 compute-0 sudo[164181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecjqsgtzwcygfllctkvgjqetmmrnmrkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171010.5944872-64-206546637306703/AnsiballZ_systemd_service.py'
Oct 11 08:23:31 compute-0 sudo[164181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:31 compute-0 ceph-mon[74313]: pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:31 compute-0 python3.9[164183]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:23:31 compute-0 sudo[164181]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:31 compute-0 sudo[164345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pocuiomdmsqtdoilifukwcbspvnjfkpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171011.5249155-64-270807892379970/AnsiballZ_systemd_service.py'
Oct 11 08:23:31 compute-0 sudo[164345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:32 compute-0 podman[164308]: 2025-10-11 08:23:32.008197164 +0000 UTC m=+0.159198682 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:23:32 compute-0 python3.9[164349]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:23:32 compute-0 sudo[164345]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:23:32 compute-0 sudo[164513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnkpldemicbunwzbezmskazgvbwjjpqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171012.4726305-64-44998345181189/AnsiballZ_systemd_service.py'
Oct 11 08:23:32 compute-0 sudo[164513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:23:33 compute-0 ceph-mon[74313]: pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:23:33 compute-0 python3.9[164515]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:23:33 compute-0 sudo[164513]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:34 compute-0 sudo[164666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stckycrrrqhfogobjsiazytmpxoxigjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171013.6707861-64-158959394527037/AnsiballZ_systemd_service.py'
Oct 11 08:23:34 compute-0 sudo[164666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:34 compute-0 python3.9[164668]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:23:34 compute-0 sudo[164666]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:23:35 compute-0 sudo[164819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeafwphsfsskgymunareefnnntwhdyii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171014.6541307-64-43492286201901/AnsiballZ_systemd_service.py'
Oct 11 08:23:35 compute-0 sudo[164819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:35 compute-0 ceph-mon[74313]: pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:23:35 compute-0 python3.9[164821]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:23:35 compute-0 sudo[164819]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:36 compute-0 sudo[164972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uygfeaxgklybiqindilngxbbvpmhdjdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171015.773424-116-81522948409039/AnsiballZ_file.py'
Oct 11 08:23:36 compute-0 sudo[164972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:36 compute-0 python3.9[164974]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:23:36 compute-0 sudo[164972]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:23:36 compute-0 sudo[165124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chswjrpnxbnrlhxlzcaeobycgnnvabxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171016.705926-116-25814869889523/AnsiballZ_file.py'
Oct 11 08:23:36 compute-0 sudo[165124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:37 compute-0 ceph-mon[74313]: pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:23:37 compute-0 python3.9[165126]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:23:37 compute-0 sudo[165124]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:23:37 compute-0 sudo[165276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtghurckufoqwdqtzrggyvzevfzwlrmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171017.4080055-116-144094441456005/AnsiballZ_file.py'
Oct 11 08:23:37 compute-0 sudo[165276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:38 compute-0 python3.9[165278]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:23:38 compute-0 sudo[165276]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:38 compute-0 sudo[165428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bypjzelwdzdgjgryuytazbkbzygmplqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171018.2572956-116-4121854104500/AnsiballZ_file.py'
Oct 11 08:23:38 compute-0 sudo[165428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:38 compute-0 python3.9[165430]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:23:38 compute-0 sudo[165428]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:23:39 compute-0 ceph-mon[74313]: pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:23:39 compute-0 sudo[165580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfkyoiozmdwoexijjyeuebixhdtgltso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171019.073366-116-243056904255476/AnsiballZ_file.py'
Oct 11 08:23:39 compute-0 sudo[165580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:39 compute-0 python3.9[165582]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:23:39 compute-0 sudo[165580]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:40 compute-0 sudo[165732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vylsklqyrbmsfmdqtuyagszbppifbuum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171019.82704-116-242856864645718/AnsiballZ_file.py'
Oct 11 08:23:40 compute-0 sudo[165732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:40 compute-0 python3.9[165734]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:23:40 compute-0 sudo[165732]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:40 compute-0 sudo[165884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhkzrpiqarumszcqyhehfmeeqvztwyfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171020.5684588-116-89572605944570/AnsiballZ_file.py'
Oct 11 08:23:40 compute-0 sudo[165884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:23:41 compute-0 ceph-mon[74313]: pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:23:41 compute-0 python3.9[165886]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:23:41 compute-0 sudo[165884]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:41 compute-0 sudo[166036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idfigimcmykfwczyuvrurtvnyytgfxlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171021.466896-166-161261331176132/AnsiballZ_file.py'
Oct 11 08:23:41 compute-0 sudo[166036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:42 compute-0 python3.9[166038]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:23:42 compute-0 sudo[166036]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:42 compute-0 sudo[166188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddybgupckppbtgkfjhvatcgrtalsancc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171022.277759-166-195060990505694/AnsiballZ_file.py'
Oct 11 08:23:42 compute-0 sudo[166188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:23:42 compute-0 python3.9[166190]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:23:42 compute-0 sudo[166188]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:23:43 compute-0 ceph-mon[74313]: pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:23:43 compute-0 sudo[166340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxjzojywsvtpxoytxphnojwjmbfgflkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171023.0233352-166-215622379626213/AnsiballZ_file.py'
Oct 11 08:23:43 compute-0 sudo[166340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:43 compute-0 python3.9[166342]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:23:43 compute-0 sudo[166340]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:43 compute-0 podman[166344]: 2025-10-11 08:23:43.779736914 +0000 UTC m=+0.085162721 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 11 08:23:44 compute-0 sudo[166511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zowlewkqvkdqittpwzxpsnbsvizdhapo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171023.8392963-166-237780039210204/AnsiballZ_file.py'
Oct 11 08:23:44 compute-0 sudo[166511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:44 compute-0 python3.9[166513]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:23:44 compute-0 sudo[166511]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:45 compute-0 sudo[166663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oscfpmezqzxkobjdqjtxegltbfofqour ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171024.677115-166-77888561739256/AnsiballZ_file.py'
Oct 11 08:23:45 compute-0 sudo[166663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:45 compute-0 ceph-mon[74313]: pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:45 compute-0 python3.9[166665]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:23:45 compute-0 sudo[166663]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:45 compute-0 sudo[166815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crsqfnpbzroertdpccrcaoknxmtdkfgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171025.5004883-166-263149525639/AnsiballZ_file.py'
Oct 11 08:23:45 compute-0 sudo[166815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:46 compute-0 python3.9[166817]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:23:46 compute-0 sudo[166815]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:46 compute-0 sudo[166967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acrjkpdscffhbigbcmeowgemzdkduzze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171026.337974-166-271564782380060/AnsiballZ_file.py'
Oct 11 08:23:46 compute-0 sudo[166967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:46 compute-0 python3.9[166969]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:23:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:47 compute-0 sudo[166967]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:47 compute-0 ceph-mon[74313]: pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:47 compute-0 sudo[167119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztqnwjkvnmmmzleebpwfxuxopqgkjkgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171027.2762105-217-187265773438338/AnsiballZ_command.py'
Oct 11 08:23:47 compute-0 sudo[167119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:23:47 compute-0 python3.9[167121]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:23:47 compute-0 sudo[167119]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:48 compute-0 python3.9[167273]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 08:23:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:49 compute-0 ceph-mon[74313]: pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:49 compute-0 sudo[167423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhssqqszhjvfsjmnekvbmwqvtctlskqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171029.2095616-235-223124134452132/AnsiballZ_systemd_service.py'
Oct 11 08:23:49 compute-0 sudo[167423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:50 compute-0 python3.9[167425]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 08:23:50 compute-0 systemd[1]: Reloading.
Oct 11 08:23:50 compute-0 systemd-sysv-generator[167455]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:23:50 compute-0 systemd-rc-local-generator[167449]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:23:50 compute-0 sudo[167423]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:50 compute-0 sudo[167611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbhsampwsqvbrbwkheuzixgskcxltssa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171030.5845335-243-188672482467701/AnsiballZ_command.py'
Oct 11 08:23:50 compute-0 sudo[167611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:51 compute-0 ceph-mon[74313]: pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:51 compute-0 python3.9[167613]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:23:51 compute-0 sudo[167611]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:51 compute-0 sudo[167764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldvrzaqnluvobbmgnidblquiwunfcdyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171031.431822-243-85305014552785/AnsiballZ_command.py'
Oct 11 08:23:51 compute-0 sudo[167764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:52 compute-0 python3.9[167766]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:23:52 compute-0 sudo[167764]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:52 compute-0 sudo[167917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgywjywvkhmslqvwelrdalrpwpmnkotq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171032.2757666-243-80211725481366/AnsiballZ_command.py'
Oct 11 08:23:52 compute-0 sudo[167917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:23:52 compute-0 python3.9[167919]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:23:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:53 compute-0 sudo[167917]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:53 compute-0 ceph-mon[74313]: pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:53 compute-0 sudo[168025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:23:53 compute-0 sudo[168025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:23:53 compute-0 sudo[168025]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:53 compute-0 sudo[168109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnkeybnbntcabbmluakkbxdcpiukufiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171033.1917017-243-196700764019936/AnsiballZ_command.py'
Oct 11 08:23:53 compute-0 sudo[168109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:53 compute-0 sudo[168080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:23:53 compute-0 sudo[168080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:23:53 compute-0 sudo[168080]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:53 compute-0 sudo[168123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:23:53 compute-0 sudo[168123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:23:53 compute-0 sudo[168123]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:53 compute-0 python3.9[168120]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:23:53 compute-0 sudo[168148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 08:23:53 compute-0 sudo[168148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:23:53 compute-0 sudo[168109]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:54 compute-0 sudo[168400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzdtynjyghdmvmkrjkcuzksrbhmtxrej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171034.0793905-243-122929696769811/AnsiballZ_command.py'
Oct 11 08:23:54 compute-0 sudo[168400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:54 compute-0 podman[168382]: 2025-10-11 08:23:54.578115015 +0000 UTC m=+0.104523109 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 08:23:54 compute-0 podman[168382]: 2025-10-11 08:23:54.701474624 +0000 UTC m=+0.227882678 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:23:54
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.control', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'backups']
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:23:54 compute-0 python3.9[168409]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:23:54 compute-0 sudo[168400]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:23:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:23:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:55 compute-0 ceph-mon[74313]: pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:55 compute-0 sudo[168670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdzuhyuselwmuigxbiaqemuekcldbiii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171035.022469-243-95581823282270/AnsiballZ_command.py'
Oct 11 08:23:55 compute-0 sudo[168670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:55 compute-0 python3.9[168675]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:23:55 compute-0 sudo[168148]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:23:55 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:23:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:23:55 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:23:55 compute-0 sudo[168670]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:55 compute-0 sudo[168708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:23:55 compute-0 sudo[168708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:23:55 compute-0 sudo[168708]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:55 compute-0 sudo[168757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:23:55 compute-0 sudo[168757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:23:55 compute-0 sudo[168757]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:56 compute-0 sudo[168809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:23:56 compute-0 sudo[168809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:23:56 compute-0 sudo[168809]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:56 compute-0 sudo[168859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:23:56 compute-0 sudo[168859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:23:56 compute-0 sudo[168965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmibwdvnwgphxkiwuxzbcmoawsukmdxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171035.9270797-243-249511405386492/AnsiballZ_command.py'
Oct 11 08:23:56 compute-0 sudo[168965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:56 compute-0 python3.9[168971]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:23:56 compute-0 sudo[168965]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:56 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:23:56 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:23:56 compute-0 sudo[168859]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:23:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:23:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:23:56 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:23:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:23:56 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:23:56 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev bd212914-65b2-4aa1-a709-ad4a37d23c86 does not exist
Oct 11 08:23:56 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9460f98c-bb92-4225-aec7-727922ea6339 does not exist
Oct 11 08:23:56 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 18a86277-9beb-410d-bd76-c0e8ff387f6b does not exist
Oct 11 08:23:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:23:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:23:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:23:56 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:23:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:23:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:23:56 compute-0 sudo[169018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:23:56 compute-0 sudo[169018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:23:56 compute-0 sudo[169018]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:57 compute-0 sudo[169043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:23:57 compute-0 sudo[169043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:23:57 compute-0 sudo[169043]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:57 compute-0 sudo[169088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:23:57 compute-0 sudo[169088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:23:57 compute-0 sudo[169088]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:57 compute-0 sudo[169144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:23:57 compute-0 sudo[169144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:23:57 compute-0 sudo[169283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fspwfszuhwuekhgshitqqoqflvwlenpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171037.0622869-297-14411521791/AnsiballZ_getent.py'
Oct 11 08:23:57 compute-0 sudo[169283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:57 compute-0 podman[169285]: 2025-10-11 08:23:57.701433698 +0000 UTC m=+0.071555240 container create 7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_antonelli, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 08:23:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:23:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:23:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:23:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:23:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:23:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:23:57 compute-0 ceph-mon[74313]: pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:57 compute-0 systemd[1]: Started libpod-conmon-7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a.scope.
Oct 11 08:23:57 compute-0 podman[169285]: 2025-10-11 08:23:57.669550371 +0000 UTC m=+0.039672413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:23:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:23:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:23:57 compute-0 podman[169285]: 2025-10-11 08:23:57.821982497 +0000 UTC m=+0.192104089 container init 7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_antonelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 08:23:57 compute-0 podman[169285]: 2025-10-11 08:23:57.835148256 +0000 UTC m=+0.205269788 container start 7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_antonelli, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 11 08:23:57 compute-0 podman[169285]: 2025-10-11 08:23:57.839202982 +0000 UTC m=+0.209324574 container attach 7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_antonelli, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:23:57 compute-0 sweet_antonelli[169302]: 167 167
Oct 11 08:23:57 compute-0 systemd[1]: libpod-7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a.scope: Deactivated successfully.
Oct 11 08:23:57 compute-0 podman[169285]: 2025-10-11 08:23:57.842921479 +0000 UTC m=+0.213043001 container died 7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_antonelli, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct 11 08:23:57 compute-0 python3.9[169287]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 11 08:23:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-be13aa91c9a5b7ead75b543058011fca715d57382fd93933d48088a8cb3c1134-merged.mount: Deactivated successfully.
Oct 11 08:23:57 compute-0 sudo[169283]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:57 compute-0 podman[169285]: 2025-10-11 08:23:57.899396004 +0000 UTC m=+0.269517546 container remove 7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 08:23:57 compute-0 systemd[1]: libpod-conmon-7f6848da0dad0b0275ed1be89bc4e2c2dd37fc5ca9ddf460b0c8aa7d43621b4a.scope: Deactivated successfully.
Oct 11 08:23:58 compute-0 podman[169351]: 2025-10-11 08:23:58.098971536 +0000 UTC m=+0.059742360 container create 354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:23:58 compute-0 systemd[1]: Started libpod-conmon-354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f.scope.
Oct 11 08:23:58 compute-0 podman[169351]: 2025-10-11 08:23:58.083495281 +0000 UTC m=+0.044266115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:23:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf038e0523ac45a13bf877b5298ddb732481d6f6e6483b17e0088955e6c635f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf038e0523ac45a13bf877b5298ddb732481d6f6e6483b17e0088955e6c635f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf038e0523ac45a13bf877b5298ddb732481d6f6e6483b17e0088955e6c635f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf038e0523ac45a13bf877b5298ddb732481d6f6e6483b17e0088955e6c635f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf038e0523ac45a13bf877b5298ddb732481d6f6e6483b17e0088955e6c635f2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:23:58 compute-0 podman[169351]: 2025-10-11 08:23:58.208681873 +0000 UTC m=+0.169452697 container init 354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_aryabhata, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:23:58 compute-0 podman[169351]: 2025-10-11 08:23:58.219523735 +0000 UTC m=+0.180294599 container start 354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 08:23:58 compute-0 podman[169351]: 2025-10-11 08:23:58.223435547 +0000 UTC m=+0.184206481 container attach 354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_aryabhata, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 08:23:58 compute-0 sudo[169497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcpjvcisnonimdqxmdjizhionrfaghzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171038.084021-305-108193333623022/AnsiballZ_group.py'
Oct 11 08:23:58 compute-0 sudo[169497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:58 compute-0 python3.9[169499]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 08:23:58 compute-0 groupadd[169501]: group added to /etc/group: name=libvirt, GID=42473
Oct 11 08:23:58 compute-0 groupadd[169501]: group added to /etc/gshadow: name=libvirt
Oct 11 08:23:58 compute-0 groupadd[169501]: new group: name=libvirt, GID=42473
Oct 11 08:23:58 compute-0 sudo[169497]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:59 compute-0 ceph-mon[74313]: pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:23:59 compute-0 intelligent_aryabhata[169415]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:23:59 compute-0 intelligent_aryabhata[169415]: --> relative data size: 1.0
Oct 11 08:23:59 compute-0 intelligent_aryabhata[169415]: --> All data devices are unavailable
Oct 11 08:23:59 compute-0 systemd[1]: libpod-354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f.scope: Deactivated successfully.
Oct 11 08:23:59 compute-0 systemd[1]: libpod-354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f.scope: Consumed 1.228s CPU time.
Oct 11 08:23:59 compute-0 podman[169351]: 2025-10-11 08:23:59.517304245 +0000 UTC m=+1.478075109 container died 354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:23:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf038e0523ac45a13bf877b5298ddb732481d6f6e6483b17e0088955e6c635f2-merged.mount: Deactivated successfully.
Oct 11 08:23:59 compute-0 podman[169351]: 2025-10-11 08:23:59.594040573 +0000 UTC m=+1.554811407 container remove 354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_aryabhata, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 08:23:59 compute-0 systemd[1]: libpod-conmon-354a9f307aef4930636bc24f65e918eb3b614988d2cade95ad36ce974bb3535f.scope: Deactivated successfully.
Oct 11 08:23:59 compute-0 sudo[169144]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:59 compute-0 sudo[169641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:23:59 compute-0 sudo[169641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:23:59 compute-0 sudo[169641]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:59 compute-0 sudo[169690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:23:59 compute-0 sudo[169690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:23:59 compute-0 sudo[169690]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:59 compute-0 sudo[169741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujzyclqghcugtctxxwjxeflrtzvuvvec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171039.2304866-313-202285102488786/AnsiballZ_user.py'
Oct 11 08:23:59 compute-0 sudo[169741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:23:59 compute-0 sudo[169742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:23:59 compute-0 sudo[169742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:23:59 compute-0 sudo[169742]: pam_unix(sudo:session): session closed for user root
Oct 11 08:23:59 compute-0 sudo[169769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:23:59 compute-0 sudo[169769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:24:00 compute-0 python3.9[169756]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 11 08:24:00 compute-0 useradd[169795]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Oct 11 08:24:00 compute-0 sudo[169741]: pam_unix(sudo:session): session closed for user root
Oct 11 08:24:00 compute-0 podman[169866]: 2025-10-11 08:24:00.438424037 +0000 UTC m=+0.071219791 container create 94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 08:24:00 compute-0 systemd[1]: Started libpod-conmon-94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c.scope.
Oct 11 08:24:00 compute-0 podman[169866]: 2025-10-11 08:24:00.40865908 +0000 UTC m=+0.041454904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:24:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:24:00 compute-0 podman[169866]: 2025-10-11 08:24:00.547493255 +0000 UTC m=+0.180289069 container init 94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:24:00 compute-0 podman[169866]: 2025-10-11 08:24:00.560652253 +0000 UTC m=+0.193447977 container start 94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 08:24:00 compute-0 podman[169866]: 2025-10-11 08:24:00.565916925 +0000 UTC m=+0.198712679 container attach 94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 08:24:00 compute-0 practical_chandrasekhar[169883]: 167 167
Oct 11 08:24:00 compute-0 systemd[1]: libpod-94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c.scope: Deactivated successfully.
Oct 11 08:24:00 compute-0 podman[169866]: 2025-10-11 08:24:00.572390571 +0000 UTC m=+0.205186355 container died 94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:24:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a978568275c391bdbe3ba1850bc715e779d26569e39fe87ad757cd950809cc98-merged.mount: Deactivated successfully.
Oct 11 08:24:00 compute-0 podman[169866]: 2025-10-11 08:24:00.631358598 +0000 UTC m=+0.264154352 container remove 94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:24:00 compute-0 systemd[1]: libpod-conmon-94d634d33b3fa55f41e57727e23851bffd467d3896d930d4295b6ee04acf1e8c.scope: Deactivated successfully.
Oct 11 08:24:00 compute-0 podman[169959]: 2025-10-11 08:24:00.85664466 +0000 UTC m=+0.062906301 container create 4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 08:24:00 compute-0 systemd[1]: Started libpod-conmon-4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e.scope.
Oct 11 08:24:00 compute-0 podman[169959]: 2025-10-11 08:24:00.824185436 +0000 UTC m=+0.030447127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:24:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:24:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de6e85ec19679a308e38c8d48f9b35b98cea92bcd9d3ac663692f79d442e143/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:24:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de6e85ec19679a308e38c8d48f9b35b98cea92bcd9d3ac663692f79d442e143/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:24:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de6e85ec19679a308e38c8d48f9b35b98cea92bcd9d3ac663692f79d442e143/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de6e85ec19679a308e38c8d48f9b35b98cea92bcd9d3ac663692f79d442e143/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:24:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:01 compute-0 podman[169959]: 2025-10-11 08:24:01.024838849 +0000 UTC m=+0.231100450 container init 4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 08:24:01 compute-0 podman[169959]: 2025-10-11 08:24:01.039525952 +0000 UTC m=+0.245787543 container start 4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_antonelli, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:24:01 compute-0 podman[169959]: 2025-10-11 08:24:01.043556438 +0000 UTC m=+0.249818039 container attach 4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:24:01 compute-0 ceph-mon[74313]: pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:01 compute-0 sudo[170054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqzhuczvwxzrujfthhqijpjrlqrxtdcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171040.6516325-324-58612072455548/AnsiballZ_setup.py'
Oct 11 08:24:01 compute-0 sudo[170054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:24:01 compute-0 python3.9[170056]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 08:24:01 compute-0 sudo[170054]: pam_unix(sudo:session): session closed for user root
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]: {
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:     "0": [
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:         {
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "devices": [
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "/dev/loop3"
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             ],
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "lv_name": "ceph_lv0",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "lv_size": "21470642176",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "name": "ceph_lv0",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "tags": {
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.cluster_name": "ceph",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.crush_device_class": "",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.encrypted": "0",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.osd_id": "0",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.type": "block",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.vdo": "0"
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             },
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "type": "block",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "vg_name": "ceph_vg0"
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:         }
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:     ],
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:     "1": [
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:         {
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "devices": [
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "/dev/loop4"
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             ],
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "lv_name": "ceph_lv1",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "lv_size": "21470642176",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "name": "ceph_lv1",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "tags": {
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.cluster_name": "ceph",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.crush_device_class": "",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.encrypted": "0",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.osd_id": "1",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.type": "block",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.vdo": "0"
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             },
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "type": "block",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "vg_name": "ceph_vg1"
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:         }
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:     ],
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:     "2": [
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:         {
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "devices": [
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "/dev/loop5"
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             ],
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "lv_name": "ceph_lv2",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "lv_size": "21470642176",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "name": "ceph_lv2",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "tags": {
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.cluster_name": "ceph",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.crush_device_class": "",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.encrypted": "0",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.osd_id": "2",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.type": "block",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:                 "ceph.vdo": "0"
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             },
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "type": "block",
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:             "vg_name": "ceph_vg2"
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:         }
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]:     ]
Oct 11 08:24:01 compute-0 priceless_antonelli[170003]: }
Oct 11 08:24:01 compute-0 systemd[1]: libpod-4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e.scope: Deactivated successfully.
Oct 11 08:24:01 compute-0 podman[170081]: 2025-10-11 08:24:01.971496856 +0000 UTC m=+0.045929382 container died 4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 08:24:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3de6e85ec19679a308e38c8d48f9b35b98cea92bcd9d3ac663692f79d442e143-merged.mount: Deactivated successfully.
Oct 11 08:24:02 compute-0 podman[170081]: 2025-10-11 08:24:02.054619208 +0000 UTC m=+0.129051754 container remove 4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_antonelli, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:24:02 compute-0 systemd[1]: libpod-conmon-4d94ff61dc615f8d38cfae300cdad27c2b5b6b2ebe7909b2a2bdd92b0670378e.scope: Deactivated successfully.
Oct 11 08:24:02 compute-0 sudo[169769]: pam_unix(sudo:session): session closed for user root
Oct 11 08:24:02 compute-0 sudo[170194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgnipiexhkerhivlddtidhioejoyqzli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171040.6516325-324-58612072455548/AnsiballZ_dnf.py'
Oct 11 08:24:02 compute-0 sudo[170194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:24:02 compute-0 sudo[170150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:24:02 compute-0 sudo[170150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:24:02 compute-0 sudo[170150]: pam_unix(sudo:session): session closed for user root
Oct 11 08:24:02 compute-0 podman[170131]: 2025-10-11 08:24:02.261376167 +0000 UTC m=+0.138234918 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, tcib_managed=true)
Oct 11 08:24:02 compute-0 sudo[170209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:24:02 compute-0 sudo[170209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:24:02 compute-0 sudo[170209]: pam_unix(sudo:session): session closed for user root
Oct 11 08:24:02 compute-0 sudo[170236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:24:02 compute-0 sudo[170236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:24:02 compute-0 sudo[170236]: pam_unix(sudo:session): session closed for user root
Oct 11 08:24:02 compute-0 sudo[170261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:24:02 compute-0 sudo[170261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:24:02 compute-0 python3.9[170205]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 08:24:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:24:02 compute-0 podman[170328]: 2025-10-11 08:24:02.939710734 +0000 UTC m=+0.070011285 container create 2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hawking, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:24:02 compute-0 systemd[1]: Started libpod-conmon-2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41.scope.
Oct 11 08:24:03 compute-0 podman[170328]: 2025-10-11 08:24:02.912049958 +0000 UTC m=+0.042350569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:24:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:24:03 compute-0 podman[170328]: 2025-10-11 08:24:03.047279179 +0000 UTC m=+0.177579730 container init 2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hawking, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:24:03 compute-0 podman[170328]: 2025-10-11 08:24:03.056236727 +0000 UTC m=+0.186537278 container start 2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:24:03 compute-0 podman[170328]: 2025-10-11 08:24:03.060633373 +0000 UTC m=+0.190933974 container attach 2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hawking, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:24:03 compute-0 competent_hawking[170344]: 167 167
Oct 11 08:24:03 compute-0 systemd[1]: libpod-2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41.scope: Deactivated successfully.
Oct 11 08:24:03 compute-0 conmon[170344]: conmon 2f7be35e793e509888c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41.scope/container/memory.events
Oct 11 08:24:03 compute-0 podman[170328]: 2025-10-11 08:24:03.067224933 +0000 UTC m=+0.197525504 container died 2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hawking, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:24:03 compute-0 ceph-mon[74313]: pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a788126162f8251dd8057bed0fbf59842994cfaf5cc40c3ffedbe2d6c25779a8-merged.mount: Deactivated successfully.
Oct 11 08:24:03 compute-0 podman[170328]: 2025-10-11 08:24:03.13732123 +0000 UTC m=+0.267621791 container remove 2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hawking, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 08:24:03 compute-0 systemd[1]: libpod-conmon-2f7be35e793e509888c495985a00413ca505d9fe0eccad8738e52b2d27fa0e41.scope: Deactivated successfully.
Oct 11 08:24:03 compute-0 podman[170368]: 2025-10-11 08:24:03.377691926 +0000 UTC m=+0.057067593 container create f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 08:24:03 compute-0 systemd[1]: Started libpod-conmon-f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de.scope.
Oct 11 08:24:03 compute-0 podman[170368]: 2025-10-11 08:24:03.353476969 +0000 UTC m=+0.032852706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:24:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd207b9f692c79af67c3129f2a4cd4139ac0ce401cc85dbaef735fd9190866f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd207b9f692c79af67c3129f2a4cd4139ac0ce401cc85dbaef735fd9190866f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd207b9f692c79af67c3129f2a4cd4139ac0ce401cc85dbaef735fd9190866f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd207b9f692c79af67c3129f2a4cd4139ac0ce401cc85dbaef735fd9190866f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:24:03 compute-0 podman[170368]: 2025-10-11 08:24:03.506115661 +0000 UTC m=+0.185491348 container init f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:24:03 compute-0 podman[170368]: 2025-10-11 08:24:03.517976062 +0000 UTC m=+0.197351749 container start f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_babbage, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 08:24:03 compute-0 podman[170368]: 2025-10-11 08:24:03.527879947 +0000 UTC m=+0.207255624 container attach f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_babbage, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:24:04 compute-0 admiring_babbage[170384]: {
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:         "osd_id": 2,
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:         "type": "bluestore"
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:     },
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:         "osd_id": 0,
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:         "type": "bluestore"
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:     },
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:         "osd_id": 1,
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:         "type": "bluestore"
Oct 11 08:24:04 compute-0 admiring_babbage[170384]:     }
Oct 11 08:24:04 compute-0 admiring_babbage[170384]: }
Oct 11 08:24:04 compute-0 systemd[1]: libpod-f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de.scope: Deactivated successfully.
Oct 11 08:24:04 compute-0 systemd[1]: libpod-f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de.scope: Consumed 1.144s CPU time.
Oct 11 08:24:04 compute-0 podman[170368]: 2025-10-11 08:24:04.653514313 +0000 UTC m=+1.332890040 container died f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_babbage, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 08:24:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffd207b9f692c79af67c3129f2a4cd4139ac0ce401cc85dbaef735fd9190866f-merged.mount: Deactivated successfully.
Oct 11 08:24:04 compute-0 podman[170368]: 2025-10-11 08:24:04.733559046 +0000 UTC m=+1.412934743 container remove f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 08:24:04 compute-0 systemd[1]: libpod-conmon-f1675d4970417cc75971bc559e27b86cac1456ac7a4adfd6df9fd647f25573de.scope: Deactivated successfully.
Oct 11 08:24:04 compute-0 sudo[170261]: pam_unix(sudo:session): session closed for user root
Oct 11 08:24:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:24:04 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:24:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:24:04 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev b67c40d2-f1f4-4504-a3b9-4cbe597cf5e7 does not exist
Oct 11 08:24:04 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 59809dd6-81d9-4067-881b-b2380aa72a00 does not exist
Oct 11 08:24:04 compute-0 sudo[170437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:24:04 compute-0 sudo[170437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:24:04 compute-0 sudo[170437]: pam_unix(sudo:session): session closed for user root
Oct 11 08:24:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:05 compute-0 sudo[170462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:24:05 compute-0 sudo[170462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:24:05 compute-0 sudo[170462]: pam_unix(sudo:session): session closed for user root
Oct 11 08:24:05 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:24:05 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:24:05 compute-0 ceph-mon[74313]: pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:07 compute-0 ceph-mon[74313]: pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:24:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:09 compute-0 ceph-mon[74313]: pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:11 compute-0 ceph-mon[74313]: pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:24:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:13 compute-0 ceph-mon[74313]: pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:14 compute-0 podman[170663]: 2025-10-11 08:24:14.807752131 +0000 UTC m=+0.097433685 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:24:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:15 compute-0 ceph-mon[74313]: pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:24:15.151 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:24:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:24:15.152 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:24:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:24:15.152 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:24:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:17 compute-0 ceph-mon[74313]: pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:24:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:19 compute-0 ceph-mon[74313]: pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:21 compute-0 ceph-mon[74313]: pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:24:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:23 compute-0 ceph-mon[74313]: pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:24:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:24:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:24:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:24:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:24:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:24:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:25 compute-0 ceph-mon[74313]: pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:27 compute-0 ceph-mon[74313]: pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:24:28 compute-0 kernel: SELinux:  Converting 2767 SID table entries...
Oct 11 08:24:28 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 08:24:28 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 11 08:24:28 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 08:24:28 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 11 08:24:28 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 08:24:28 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 08:24:28 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 08:24:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:29 compute-0 ceph-mon[74313]: pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:31 compute-0 ceph-mon[74313]: pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:32 compute-0 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct 11 08:24:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:24:32 compute-0 podman[170697]: 2025-10-11 08:24:32.81120339 +0000 UTC m=+0.110440592 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:24:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:33 compute-0 ceph-mon[74313]: pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:35 compute-0 ceph-mon[74313]: pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:37 compute-0 ceph-mon[74313]: pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:24:38 compute-0 kernel: SELinux:  Converting 2767 SID table entries...
Oct 11 08:24:38 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 08:24:38 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 11 08:24:38 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 08:24:38 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 11 08:24:38 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 08:24:38 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 08:24:38 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 08:24:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:39 compute-0 ceph-mon[74313]: pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:41 compute-0 ceph-mon[74313]: pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.491782) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171081491860, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2038, "num_deletes": 251, "total_data_size": 3509069, "memory_usage": 3570288, "flush_reason": "Manual Compaction"}
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171081635765, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3433926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9724, "largest_seqno": 11761, "table_properties": {"data_size": 3424665, "index_size": 5883, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17735, "raw_average_key_size": 19, "raw_value_size": 3406330, "raw_average_value_size": 3730, "num_data_blocks": 267, "num_entries": 913, "num_filter_entries": 913, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170849, "oldest_key_time": 1760170849, "file_creation_time": 1760171081, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 144114 microseconds, and 8089 cpu microseconds.
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.635892) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3433926 bytes OK
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.635915) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.648447) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.648492) EVENT_LOG_v1 {"time_micros": 1760171081648481, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.648522) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3500576, prev total WAL file size 3500576, number of live WAL files 2.
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.649612) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3353KB)], [26(5990KB)]
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171081649685, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9568448, "oldest_snapshot_seqno": -1}
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3702 keys, 7899439 bytes, temperature: kUnknown
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171081716200, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7899439, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7871124, "index_size": 17965, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88915, "raw_average_key_size": 24, "raw_value_size": 7800721, "raw_average_value_size": 2107, "num_data_blocks": 778, "num_entries": 3702, "num_filter_entries": 3702, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760171081, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.716548) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7899439 bytes
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.727971) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.6 rd, 118.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.9 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4216, records dropped: 514 output_compression: NoCompression
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.727994) EVENT_LOG_v1 {"time_micros": 1760171081727982, "job": 10, "event": "compaction_finished", "compaction_time_micros": 66647, "compaction_time_cpu_micros": 17677, "output_level": 6, "num_output_files": 1, "total_output_size": 7899439, "num_input_records": 4216, "num_output_records": 3702, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171081728613, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171081729585, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.649531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.729691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.729699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.729701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.729705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:24:41 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:24:41.729708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:24:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:24:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:43 compute-0 ceph-mon[74313]: pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:45 compute-0 ceph-mon[74313]: pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:45 compute-0 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 11 08:24:45 compute-0 podman[170728]: 2025-10-11 08:24:45.816277904 +0000 UTC m=+0.090385896 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 11 08:24:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:47 compute-0 ceph-mon[74313]: pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:24:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:49 compute-0 ceph-mon[74313]: pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:51 compute-0 ceph-mon[74313]: pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:24:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:53 compute-0 ceph-mon[74313]: pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:24:54
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'volumes', 'images', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'backups', '.rgw.root']
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:24:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:24:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:55 compute-0 ceph-mon[74313]: pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:57 compute-0 ceph-mon[74313]: pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:24:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:24:59 compute-0 ceph-mon[74313]: pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:01 compute-0 ceph-mon[74313]: pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:25:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:03 compute-0 ceph-mon[74313]: pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:03 compute-0 podman[175826]: 2025-10-11 08:25:03.816797458 +0000 UTC m=+0.107932149 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:25:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:25:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:05 compute-0 ceph-mon[74313]: pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:05 compute-0 sudo[176454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:25:05 compute-0 sudo[176454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:05 compute-0 sudo[176454]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:05 compute-0 sudo[176522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:25:05 compute-0 sudo[176522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:05 compute-0 sudo[176522]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:05 compute-0 sudo[176588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:25:05 compute-0 sudo[176588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:05 compute-0 sudo[176588]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:05 compute-0 sudo[176655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:25:05 compute-0 sudo[176655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:05 compute-0 sudo[176655]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:25:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:25:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:25:06 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:25:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:25:06 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:25:06 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 883367fd-d821-48d6-bf4c-8b78a2bb8bec does not exist
Oct 11 08:25:06 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 57b8757a-241f-472f-a562-1b5f6a0c3c65 does not exist
Oct 11 08:25:06 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev d9286945-1626-445a-9ccf-c97379c4caf8 does not exist
Oct 11 08:25:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:25:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:25:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:25:06 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:25:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:25:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:25:06 compute-0 sudo[177058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:25:06 compute-0 sudo[177058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:06 compute-0 sudo[177058]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:25:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:25:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:25:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:25:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:25:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:25:06 compute-0 sudo[177130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:25:06 compute-0 sudo[177130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:06 compute-0 sudo[177130]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:06 compute-0 sudo[177202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:25:06 compute-0 sudo[177202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:06 compute-0 sudo[177202]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:06 compute-0 sudo[177260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:25:06 compute-0 sudo[177260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:06 compute-0 podman[177525]: 2025-10-11 08:25:06.816499694 +0000 UTC m=+0.071164723 container create a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:25:06 compute-0 systemd[1]: Started libpod-conmon-a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5.scope.
Oct 11 08:25:06 compute-0 podman[177525]: 2025-10-11 08:25:06.778889624 +0000 UTC m=+0.033554703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:25:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:25:06 compute-0 podman[177525]: 2025-10-11 08:25:06.918356674 +0000 UTC m=+0.173021693 container init a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:25:06 compute-0 podman[177525]: 2025-10-11 08:25:06.932388054 +0000 UTC m=+0.187053083 container start a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:25:06 compute-0 podman[177525]: 2025-10-11 08:25:06.937567466 +0000 UTC m=+0.192232475 container attach a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:25:06 compute-0 ecstatic_saha[177596]: 167 167
Oct 11 08:25:06 compute-0 systemd[1]: libpod-a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5.scope: Deactivated successfully.
Oct 11 08:25:06 compute-0 podman[177525]: 2025-10-11 08:25:06.952097651 +0000 UTC m=+0.206762680 container died a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:25:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d6a1b06de49393411fbaf2c8077798e75551af77ac300513065bb979466decf-merged.mount: Deactivated successfully.
Oct 11 08:25:07 compute-0 podman[177525]: 2025-10-11 08:25:07.010907201 +0000 UTC m=+0.265572220 container remove a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:25:07 compute-0 systemd[1]: libpod-conmon-a932a7fd7e2a24f7a5b5192f58e1edf5187cdc20007a50737c99d0edc5ffffe5.scope: Deactivated successfully.
Oct 11 08:25:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:07 compute-0 ceph-mon[74313]: pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:07 compute-0 podman[177710]: 2025-10-11 08:25:07.248949525 +0000 UTC m=+0.078852138 container create 3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Oct 11 08:25:07 compute-0 systemd[1]: Started libpod-conmon-3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54.scope.
Oct 11 08:25:07 compute-0 podman[177710]: 2025-10-11 08:25:07.218372611 +0000 UTC m=+0.048275264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:25:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d588e7f93943ed30843b50531fa30c0e2e11917cfc92a501ebe83884febfc359/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d588e7f93943ed30843b50531fa30c0e2e11917cfc92a501ebe83884febfc359/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d588e7f93943ed30843b50531fa30c0e2e11917cfc92a501ebe83884febfc359/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d588e7f93943ed30843b50531fa30c0e2e11917cfc92a501ebe83884febfc359/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d588e7f93943ed30843b50531fa30c0e2e11917cfc92a501ebe83884febfc359/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:25:07 compute-0 podman[177710]: 2025-10-11 08:25:07.370506701 +0000 UTC m=+0.200409365 container init 3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 08:25:07 compute-0 podman[177710]: 2025-10-11 08:25:07.388045425 +0000 UTC m=+0.217948038 container start 3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 08:25:07 compute-0 podman[177710]: 2025-10-11 08:25:07.395582925 +0000 UTC m=+0.225485528 container attach 3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 08:25:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:25:08 compute-0 awesome_leavitt[177793]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:25:08 compute-0 awesome_leavitt[177793]: --> relative data size: 1.0
Oct 11 08:25:08 compute-0 awesome_leavitt[177793]: --> All data devices are unavailable
Oct 11 08:25:08 compute-0 systemd[1]: libpod-3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54.scope: Deactivated successfully.
Oct 11 08:25:08 compute-0 conmon[177793]: conmon 3f7bda73a4ce0091f333 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54.scope/container/memory.events
Oct 11 08:25:08 compute-0 systemd[1]: libpod-3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54.scope: Consumed 1.164s CPU time.
Oct 11 08:25:08 compute-0 podman[177710]: 2025-10-11 08:25:08.666867136 +0000 UTC m=+1.496769749 container died 3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 08:25:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d588e7f93943ed30843b50531fa30c0e2e11917cfc92a501ebe83884febfc359-merged.mount: Deactivated successfully.
Oct 11 08:25:08 compute-0 podman[177710]: 2025-10-11 08:25:08.741238272 +0000 UTC m=+1.571140855 container remove 3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 08:25:08 compute-0 systemd[1]: libpod-conmon-3f7bda73a4ce0091f33310a165c1e385a644a6af66484528012c1007517cdb54.scope: Deactivated successfully.
Oct 11 08:25:08 compute-0 sudo[177260]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:08 compute-0 sudo[178462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:25:08 compute-0 sudo[178462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:08 compute-0 sudo[178462]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:08 compute-0 sudo[178527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:25:08 compute-0 sudo[178527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:08 compute-0 sudo[178527]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:09 compute-0 sudo[178588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:25:09 compute-0 sudo[178588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:09 compute-0 sudo[178588]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:09 compute-0 ceph-mon[74313]: pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:09 compute-0 sudo[178652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:25:09 compute-0 sudo[178652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:09 compute-0 podman[178896]: 2025-10-11 08:25:09.559942343 +0000 UTC m=+0.069648399 container create 5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 11 08:25:09 compute-0 systemd[1]: Started libpod-conmon-5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866.scope.
Oct 11 08:25:09 compute-0 podman[178896]: 2025-10-11 08:25:09.531535382 +0000 UTC m=+0.041241488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:25:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:25:09 compute-0 podman[178896]: 2025-10-11 08:25:09.674067612 +0000 UTC m=+0.183773728 container init 5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:25:09 compute-0 podman[178896]: 2025-10-11 08:25:09.688575006 +0000 UTC m=+0.198281042 container start 5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:25:09 compute-0 podman[178896]: 2025-10-11 08:25:09.692963654 +0000 UTC m=+0.202669730 container attach 5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 08:25:09 compute-0 youthful_ritchie[178974]: 167 167
Oct 11 08:25:09 compute-0 systemd[1]: libpod-5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866.scope: Deactivated successfully.
Oct 11 08:25:09 compute-0 podman[178896]: 2025-10-11 08:25:09.69896374 +0000 UTC m=+0.208669806 container died 5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-438ce47867b0cc540f5bad6d733492a1e1c86ace3b813f73faf348e794272713-merged.mount: Deactivated successfully.
Oct 11 08:25:09 compute-0 podman[178896]: 2025-10-11 08:25:09.75194566 +0000 UTC m=+0.261651686 container remove 5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:25:09 compute-0 systemd[1]: libpod-conmon-5bab73d9c71f30af052ce95e981729640fd5a4d7b051025e3ddd9b0480589866.scope: Deactivated successfully.
Oct 11 08:25:09 compute-0 podman[179114]: 2025-10-11 08:25:09.982807604 +0000 UTC m=+0.053765254 container create 6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_easley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 08:25:10 compute-0 systemd[1]: Started libpod-conmon-6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523.scope.
Oct 11 08:25:10 compute-0 podman[179114]: 2025-10-11 08:25:09.961269874 +0000 UTC m=+0.032227554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:25:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be2bddb51f3edeb01d5309b13b72e03ec4e667795e3faec5d987bc7afef75012/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be2bddb51f3edeb01d5309b13b72e03ec4e667795e3faec5d987bc7afef75012/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be2bddb51f3edeb01d5309b13b72e03ec4e667795e3faec5d987bc7afef75012/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be2bddb51f3edeb01d5309b13b72e03ec4e667795e3faec5d987bc7afef75012/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:25:10 compute-0 podman[179114]: 2025-10-11 08:25:10.09889305 +0000 UTC m=+0.169850790 container init 6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_easley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:25:10 compute-0 podman[179114]: 2025-10-11 08:25:10.11392431 +0000 UTC m=+0.184881960 container start 6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_easley, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 08:25:10 compute-0 podman[179114]: 2025-10-11 08:25:10.117758712 +0000 UTC m=+0.188716442 container attach 6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_easley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 08:25:10 compute-0 interesting_easley[179187]: {
Oct 11 08:25:10 compute-0 interesting_easley[179187]:     "0": [
Oct 11 08:25:10 compute-0 interesting_easley[179187]:         {
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "devices": [
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "/dev/loop3"
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             ],
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "lv_name": "ceph_lv0",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "lv_size": "21470642176",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "name": "ceph_lv0",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "tags": {
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.cluster_name": "ceph",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.crush_device_class": "",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.encrypted": "0",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.osd_id": "0",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.type": "block",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.vdo": "0"
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             },
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "type": "block",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "vg_name": "ceph_vg0"
Oct 11 08:25:10 compute-0 interesting_easley[179187]:         }
Oct 11 08:25:10 compute-0 interesting_easley[179187]:     ],
Oct 11 08:25:10 compute-0 interesting_easley[179187]:     "1": [
Oct 11 08:25:10 compute-0 interesting_easley[179187]:         {
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "devices": [
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "/dev/loop4"
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             ],
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "lv_name": "ceph_lv1",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "lv_size": "21470642176",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "name": "ceph_lv1",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "tags": {
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.cluster_name": "ceph",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.crush_device_class": "",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.encrypted": "0",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.osd_id": "1",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.type": "block",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.vdo": "0"
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             },
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "type": "block",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "vg_name": "ceph_vg1"
Oct 11 08:25:10 compute-0 interesting_easley[179187]:         }
Oct 11 08:25:10 compute-0 interesting_easley[179187]:     ],
Oct 11 08:25:10 compute-0 interesting_easley[179187]:     "2": [
Oct 11 08:25:10 compute-0 interesting_easley[179187]:         {
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "devices": [
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "/dev/loop5"
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             ],
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "lv_name": "ceph_lv2",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "lv_size": "21470642176",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "name": "ceph_lv2",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "tags": {
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.cluster_name": "ceph",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.crush_device_class": "",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.encrypted": "0",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.osd_id": "2",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.type": "block",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:                 "ceph.vdo": "0"
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             },
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "type": "block",
Oct 11 08:25:10 compute-0 interesting_easley[179187]:             "vg_name": "ceph_vg2"
Oct 11 08:25:10 compute-0 interesting_easley[179187]:         }
Oct 11 08:25:10 compute-0 interesting_easley[179187]:     ]
Oct 11 08:25:10 compute-0 interesting_easley[179187]: }
Oct 11 08:25:10 compute-0 systemd[1]: libpod-6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523.scope: Deactivated successfully.
Oct 11 08:25:10 compute-0 podman[179114]: 2025-10-11 08:25:10.909031841 +0000 UTC m=+0.979989521 container died 6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_easley, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 08:25:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-be2bddb51f3edeb01d5309b13b72e03ec4e667795e3faec5d987bc7afef75012-merged.mount: Deactivated successfully.
Oct 11 08:25:10 compute-0 podman[179114]: 2025-10-11 08:25:10.980403119 +0000 UTC m=+1.051360759 container remove 6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:25:10 compute-0 systemd[1]: libpod-conmon-6043939bdb5a3e4aec50d761094e8cd4da806b0e628edf7e1b8003471be52523.scope: Deactivated successfully.
Oct 11 08:25:11 compute-0 sudo[178652]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:11 compute-0 sudo[179619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:25:11 compute-0 sudo[179619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:11 compute-0 sudo[179619]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:11 compute-0 ceph-mon[74313]: pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:11 compute-0 sudo[179678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:25:11 compute-0 sudo[179678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:11 compute-0 sudo[179678]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:11 compute-0 sudo[179749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:25:11 compute-0 sudo[179749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:11 compute-0 sudo[179749]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:11 compute-0 sudo[179813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:25:11 compute-0 sudo[179813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:11 compute-0 podman[180053]: 2025-10-11 08:25:11.826229264 +0000 UTC m=+0.043939177 container create 0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 08:25:11 compute-0 systemd[1]: Started libpod-conmon-0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd.scope.
Oct 11 08:25:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:25:11 compute-0 podman[180053]: 2025-10-11 08:25:11.807248029 +0000 UTC m=+0.024957982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:25:11 compute-0 podman[180053]: 2025-10-11 08:25:11.918606575 +0000 UTC m=+0.136316508 container init 0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:25:11 compute-0 podman[180053]: 2025-10-11 08:25:11.929860765 +0000 UTC m=+0.147570678 container start 0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:25:11 compute-0 podman[180053]: 2025-10-11 08:25:11.933274254 +0000 UTC m=+0.150984167 container attach 0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 08:25:11 compute-0 relaxed_volhard[180112]: 167 167
Oct 11 08:25:11 compute-0 systemd[1]: libpod-0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd.scope: Deactivated successfully.
Oct 11 08:25:11 compute-0 podman[180053]: 2025-10-11 08:25:11.939995521 +0000 UTC m=+0.157705434 container died 0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:25:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-0002bc3405b4241ab6d554aa2694c6bae0111e3c1295ffdf3fbcac2ecb372ec4-merged.mount: Deactivated successfully.
Oct 11 08:25:11 compute-0 podman[180053]: 2025-10-11 08:25:11.980101714 +0000 UTC m=+0.197811627 container remove 0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 08:25:11 compute-0 systemd[1]: libpod-conmon-0958dcb0022bc6e63027a2aa1da1a5f9de4ddab98e498fa331e57490ad6513fd.scope: Deactivated successfully.
Oct 11 08:25:12 compute-0 podman[180243]: 2025-10-11 08:25:12.197297279 +0000 UTC m=+0.069565097 container create f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamport, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 11 08:25:12 compute-0 systemd[1]: Started libpod-conmon-f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8.scope.
Oct 11 08:25:12 compute-0 podman[180243]: 2025-10-11 08:25:12.159787021 +0000 UTC m=+0.032054849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:25:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb61b4497cb2e828644f8aae182b6424738d059b439731b2b8c4eac2308814c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb61b4497cb2e828644f8aae182b6424738d059b439731b2b8c4eac2308814c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb61b4497cb2e828644f8aae182b6424738d059b439731b2b8c4eac2308814c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb61b4497cb2e828644f8aae182b6424738d059b439731b2b8c4eac2308814c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:25:12 compute-0 podman[180243]: 2025-10-11 08:25:12.31322521 +0000 UTC m=+0.185493058 container init f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamport, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 08:25:12 compute-0 podman[180243]: 2025-10-11 08:25:12.374974917 +0000 UTC m=+0.247242775 container start f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 08:25:12 compute-0 podman[180243]: 2025-10-11 08:25:12.378930302 +0000 UTC m=+0.251198150 container attach f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 11 08:25:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:25:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:13 compute-0 ceph-mon[74313]: pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]: {
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:         "osd_id": 2,
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:         "type": "bluestore"
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:     },
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:         "osd_id": 0,
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:         "type": "bluestore"
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:     },
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:         "osd_id": 1,
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:         "type": "bluestore"
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]:     }
Oct 11 08:25:13 compute-0 peaceful_lamport[180311]: }
Oct 11 08:25:13 compute-0 podman[180243]: 2025-10-11 08:25:13.4487341 +0000 UTC m=+1.321001928 container died f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamport, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:25:13 compute-0 systemd[1]: libpod-f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8.scope: Deactivated successfully.
Oct 11 08:25:13 compute-0 systemd[1]: libpod-f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8.scope: Consumed 1.074s CPU time.
Oct 11 08:25:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fb61b4497cb2e828644f8aae182b6424738d059b439731b2b8c4eac2308814c-merged.mount: Deactivated successfully.
Oct 11 08:25:13 compute-0 podman[180243]: 2025-10-11 08:25:13.533030616 +0000 UTC m=+1.405298464 container remove f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lamport, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:25:13 compute-0 systemd[1]: libpod-conmon-f686bb263578154b59b2ff4875050baad95229990844e4f39b20401c36f836b8.scope: Deactivated successfully.
Oct 11 08:25:13 compute-0 sudo[179813]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:25:13 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:25:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:25:13 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:25:13 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev f92228bd-de32-4a67-a35d-89b70af86737 does not exist
Oct 11 08:25:13 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev e06179c5-dfef-429f-b5dc-21a03c28dc52 does not exist
Oct 11 08:25:13 compute-0 sudo[180995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:25:13 compute-0 sudo[180995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:13 compute-0 sudo[180995]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:13 compute-0 sudo[181074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:25:13 compute-0 sudo[181074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:25:13 compute-0 sudo[181074]: pam_unix(sudo:session): session closed for user root
Oct 11 08:25:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:25:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:25:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:25:15.153 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:25:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:25:15.155 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:25:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:25:15.155 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:25:15 compute-0 ceph-mon[74313]: pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:16 compute-0 podman[182538]: 2025-10-11 08:25:16.805582284 +0000 UTC m=+0.091608151 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 11 08:25:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:17 compute-0 ceph-mon[74313]: pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:25:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:19 compute-0 ceph-mon[74313]: pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:21 compute-0 ceph-mon[74313]: pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:25:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:23 compute-0 ceph-mon[74313]: pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:25:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:25:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:25:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:25:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:25:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:25:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:25 compute-0 ceph-mon[74313]: pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:27 compute-0 ceph-mon[74313]: pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:25:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:29 compute-0 ceph-mon[74313]: pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:31 compute-0 ceph-mon[74313]: pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:25:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:33 compute-0 ceph-mon[74313]: pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:34 compute-0 podman[188417]: 2025-10-11 08:25:34.826094225 +0000 UTC m=+0.118601100 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Oct 11 08:25:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:35 compute-0 ceph-mon[74313]: pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:37 compute-0 ceph-mon[74313]: pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:25:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:39 compute-0 ceph-mon[74313]: pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:41 compute-0 ceph-mon[74313]: pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:25:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:43 compute-0 ceph-mon[74313]: pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:43 compute-0 kernel: SELinux:  Converting 2768 SID table entries...
Oct 11 08:25:43 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 11 08:25:43 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 11 08:25:43 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 11 08:25:43 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 11 08:25:43 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 11 08:25:43 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 11 08:25:43 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 11 08:25:44 compute-0 groupadd[188457]: group added to /etc/group: name=dnsmasq, GID=991
Oct 11 08:25:44 compute-0 groupadd[188457]: group added to /etc/gshadow: name=dnsmasq
Oct 11 08:25:44 compute-0 groupadd[188457]: new group: name=dnsmasq, GID=991
Oct 11 08:25:45 compute-0 useradd[188464]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 11 08:25:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:45 compute-0 dbus-broker-launch[792]: Noticed file-system modification, trigger reload.
Oct 11 08:25:45 compute-0 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 11 08:25:45 compute-0 dbus-broker-launch[792]: Noticed file-system modification, trigger reload.
Oct 11 08:25:45 compute-0 ceph-mon[74313]: pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:46 compute-0 groupadd[188477]: group added to /etc/group: name=clevis, GID=990
Oct 11 08:25:46 compute-0 groupadd[188477]: group added to /etc/gshadow: name=clevis
Oct 11 08:25:46 compute-0 groupadd[188477]: new group: name=clevis, GID=990
Oct 11 08:25:46 compute-0 useradd[188484]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 11 08:25:46 compute-0 usermod[188494]: add 'clevis' to group 'tss'
Oct 11 08:25:46 compute-0 usermod[188494]: add 'clevis' to shadow group 'tss'
Oct 11 08:25:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:47 compute-0 ceph-mon[74313]: pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:47 compute-0 podman[188508]: 2025-10-11 08:25:47.215314823 +0000 UTC m=+0.103047181 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 08:25:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:25:48 compute-0 polkitd[6218]: Reloading rules
Oct 11 08:25:48 compute-0 polkitd[6218]: Collecting garbage unconditionally...
Oct 11 08:25:48 compute-0 polkitd[6218]: Loading rules from directory /etc/polkit-1/rules.d
Oct 11 08:25:48 compute-0 polkitd[6218]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 11 08:25:48 compute-0 polkitd[6218]: Finished loading, compiling and executing 4 rules
Oct 11 08:25:48 compute-0 polkitd[6218]: Reloading rules
Oct 11 08:25:48 compute-0 polkitd[6218]: Collecting garbage unconditionally...
Oct 11 08:25:48 compute-0 polkitd[6218]: Loading rules from directory /etc/polkit-1/rules.d
Oct 11 08:25:48 compute-0 polkitd[6218]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 11 08:25:48 compute-0 polkitd[6218]: Finished loading, compiling and executing 4 rules
Oct 11 08:25:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:49 compute-0 ceph-mon[74313]: pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:50 compute-0 groupadd[188701]: group added to /etc/group: name=ceph, GID=167
Oct 11 08:25:50 compute-0 groupadd[188701]: group added to /etc/gshadow: name=ceph
Oct 11 08:25:50 compute-0 groupadd[188701]: new group: name=ceph, GID=167
Oct 11 08:25:50 compute-0 useradd[188707]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 11 08:25:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:51 compute-0 ceph-mon[74313]: pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:25:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:53 compute-0 ceph-mon[74313]: pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:53 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Oct 11 08:25:53 compute-0 sshd[1004]: Received signal 15; terminating.
Oct 11 08:25:53 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Oct 11 08:25:53 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Oct 11 08:25:53 compute-0 systemd[1]: sshd.service: Consumed 2.890s CPU time, no IO.
Oct 11 08:25:53 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Oct 11 08:25:53 compute-0 systemd[1]: Stopping sshd-keygen.target...
Oct 11 08:25:53 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 08:25:53 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 08:25:53 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 11 08:25:53 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct 11 08:25:53 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct 11 08:25:53 compute-0 sshd[189332]: Server listening on 0.0.0.0 port 22.
Oct 11 08:25:53 compute-0 sshd[189332]: Server listening on :: port 22.
Oct 11 08:25:53 compute-0 systemd[1]: Started OpenSSH server daemon.
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:25:54
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.meta', 'backups', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.log', 'volumes']
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:25:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:25:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:55 compute-0 ceph-mon[74313]: pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:55 compute-0 ceph-mgr[74605]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2898047278
Oct 11 08:25:56 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 08:25:56 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 08:25:56 compute-0 systemd[1]: Reloading.
Oct 11 08:25:56 compute-0 systemd-rc-local-generator[189590]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:25:56 compute-0 systemd-sysv-generator[189593]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:25:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:57 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 08:25:57 compute-0 ceph-mon[74313]: pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:25:58 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 11 08:25:58 compute-0 PackageKit[191236]: daemon start
Oct 11 08:25:59 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 11 08:25:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:59 compute-0 ceph-mon[74313]: pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:25:59 compute-0 sudo[170194]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:00 compute-0 sudo[192507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ookajmtzvpvnxqnyknwmxfkgzjvvvsfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171159.658934-336-132809868112243/AnsiballZ_systemd.py'
Oct 11 08:26:00 compute-0 sudo[192507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:00 compute-0 python3.9[192530]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 08:26:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:01 compute-0 ceph-mon[74313]: pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:01 compute-0 systemd[1]: Reloading.
Oct 11 08:26:02 compute-0 systemd-rc-local-generator[193694]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:26:02 compute-0 systemd-sysv-generator[193698]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:26:02 compute-0 sudo[192507]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:26:02 compute-0 sudo[194410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnincsaalnmdljfkfsoirspcshasmaup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171162.4933577-336-107481283402667/AnsiballZ_systemd.py'
Oct 11 08:26:02 compute-0 sudo[194410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:03 compute-0 python3.9[194436]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 08:26:03 compute-0 ceph-mon[74313]: pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:03 compute-0 systemd[1]: Reloading.
Oct 11 08:26:03 compute-0 systemd-rc-local-generator[194785]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:26:03 compute-0 systemd-sysv-generator[194801]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:26:03 compute-0 sudo[194410]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:04 compute-0 sudo[195497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hosiyolofijljcibydilhzotspgpkviu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171163.7633717-336-79213860418640/AnsiballZ_systemd.py'
Oct 11 08:26:04 compute-0 sudo[195497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:26:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:26:04 compute-0 python3.9[195517]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 08:26:04 compute-0 systemd[1]: Reloading.
Oct 11 08:26:04 compute-0 systemd-rc-local-generator[195880]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:26:04 compute-0 systemd-sysv-generator[195886]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:26:04 compute-0 sudo[195497]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:05 compute-0 podman[196039]: 2025-10-11 08:26:05.036341701 +0000 UTC m=+0.148848215 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:26:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:05 compute-0 auditd[700]: Audit daemon rotating log files
Oct 11 08:26:05 compute-0 ceph-mon[74313]: pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:05 compute-0 sudo[196571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exxegbfglcddhdnvnkdcdqskjmywcfkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171165.090196-336-272433569324603/AnsiballZ_systemd.py'
Oct 11 08:26:05 compute-0 sudo[196571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:05 compute-0 python3.9[196600]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 08:26:05 compute-0 systemd[1]: Reloading.
Oct 11 08:26:06 compute-0 systemd-rc-local-generator[196981]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:26:06 compute-0 systemd-sysv-generator[196987]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:26:06 compute-0 sudo[196571]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:06 compute-0 sudo[197786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skwstkmfobvsokxlskaaidaihzhgufyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171166.5233066-365-114246076900727/AnsiballZ_systemd.py'
Oct 11 08:26:06 compute-0 sudo[197786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:07 compute-0 ceph-mon[74313]: pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:07 compute-0 python3.9[197798]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:07 compute-0 systemd[1]: Reloading.
Oct 11 08:26:07 compute-0 systemd-rc-local-generator[198154]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:26:07 compute-0 systemd-sysv-generator[198158]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:26:07 compute-0 sudo[197786]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:26:08 compute-0 sudo[198895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqdvzpyknfvwsrvfndlbibppvwpwubyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171167.868255-365-107652964391254/AnsiballZ_systemd.py'
Oct 11 08:26:08 compute-0 sudo[198895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:08 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 08:26:08 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 08:26:08 compute-0 systemd[1]: man-db-cache-update.service: Consumed 14.849s CPU time.
Oct 11 08:26:08 compute-0 systemd[1]: run-r6ee3f6c33fb64d0fbf675f39ad61da22.service: Deactivated successfully.
Oct 11 08:26:08 compute-0 python3.9[198919]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:09 compute-0 ceph-mon[74313]: pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:09 compute-0 systemd[1]: Reloading.
Oct 11 08:26:09 compute-0 systemd-sysv-generator[198996]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:26:09 compute-0 systemd-rc-local-generator[198993]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:26:10 compute-0 sudo[198895]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:10 compute-0 sudo[199150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmhruljnnmzexzgysfsfbhwkqdmjqajz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171170.2415173-365-12772101757639/AnsiballZ_systemd.py'
Oct 11 08:26:10 compute-0 sudo[199150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:10 compute-0 python3.9[199152]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:11 compute-0 systemd[1]: Reloading.
Oct 11 08:26:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:11 compute-0 ceph-mon[74313]: pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:11 compute-0 systemd-rc-local-generator[199182]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:26:11 compute-0 systemd-sysv-generator[199185]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:26:11 compute-0 sudo[199150]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:12 compute-0 sudo[199341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlaozqdzgzjyqhufhvdnlnfyfltqkzfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171171.614886-365-105354791138464/AnsiballZ_systemd.py'
Oct 11 08:26:12 compute-0 sudo[199341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:12 compute-0 python3.9[199343]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:12 compute-0 sudo[199341]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:26:13 compute-0 sudo[199496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpxdjtcumduwtcxbovjyhuucypvazufc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171172.635983-365-162399713461401/AnsiballZ_systemd.py'
Oct 11 08:26:13 compute-0 sudo[199496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:13 compute-0 ceph-mon[74313]: pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:13 compute-0 python3.9[199498]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:13 compute-0 systemd[1]: Reloading.
Oct 11 08:26:13 compute-0 systemd-sysv-generator[199529]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:26:13 compute-0 systemd-rc-local-generator[199525]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:26:13 compute-0 sudo[199496]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:13 compute-0 sudo[199537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:26:13 compute-0 sudo[199537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:13 compute-0 sudo[199537]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:13 compute-0 sudo[199567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:26:13 compute-0 sudo[199567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:13 compute-0 sudo[199567]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:14 compute-0 sudo[199611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:26:14 compute-0 sudo[199611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:14 compute-0 sudo[199611]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:14 compute-0 sudo[199657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:26:14 compute-0 sudo[199657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:14 compute-0 sudo[199801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muydtobnceiedovawcewtptukahtohbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171174.11093-401-68779001421106/AnsiballZ_systemd.py'
Oct 11 08:26:14 compute-0 sudo[199801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:14 compute-0 sudo[199657]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:26:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:26:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:26:14 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:26:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:26:14 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:26:14 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9964f7e2-b646-42e4-84a6-b296fdf29598 does not exist
Oct 11 08:26:14 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev bf347f46-5543-4216-9383-807a511d2cdd does not exist
Oct 11 08:26:14 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev c5c5f727-6d4b-468f-87c5-71a0456bf10f does not exist
Oct 11 08:26:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:26:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:26:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:26:14 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:26:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:26:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:26:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:26:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:26:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:26:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:26:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:26:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:26:14 compute-0 python3.9[199805]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 11 08:26:14 compute-0 sudo[199820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:26:14 compute-0 sudo[199820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:14 compute-0 sudo[199820]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:15 compute-0 systemd[1]: Reloading.
Oct 11 08:26:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:15 compute-0 systemd-rc-local-generator[199898]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:26:15 compute-0 systemd-sysv-generator[199902]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:26:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:26:15.155 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:26:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:26:15.156 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:26:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:26:15.157 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:26:15 compute-0 sudo[199847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:26:15 compute-0 sudo[199847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:15 compute-0 sudo[199847]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:15 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 11 08:26:15 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 11 08:26:15 compute-0 sudo[199801]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:15 compute-0 sudo[199910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:26:15 compute-0 sudo[199910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:15 compute-0 sudo[199910]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:15 compute-0 sudo[199936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:26:15 compute-0 sudo[199936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:15 compute-0 ceph-mon[74313]: pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:16 compute-0 podman[200100]: 2025-10-11 08:26:16.049864111 +0000 UTC m=+0.068707428 container create 1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 08:26:16 compute-0 podman[200100]: 2025-10-11 08:26:16.020064419 +0000 UTC m=+0.038907776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:26:16 compute-0 systemd[1]: Started libpod-conmon-1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866.scope.
Oct 11 08:26:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:26:16 compute-0 podman[200100]: 2025-10-11 08:26:16.177509282 +0000 UTC m=+0.196352649 container init 1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:26:16 compute-0 sudo[200169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwglnbcrzoyspojtvvzhnpiitvzjrkph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171175.700744-409-53938465961216/AnsiballZ_systemd.py'
Oct 11 08:26:16 compute-0 podman[200100]: 2025-10-11 08:26:16.193145354 +0000 UTC m=+0.211988661 container start 1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 11 08:26:16 compute-0 sudo[200169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:16 compute-0 podman[200100]: 2025-10-11 08:26:16.198159209 +0000 UTC m=+0.217002586 container attach 1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:26:16 compute-0 modest_booth[200149]: 167 167
Oct 11 08:26:16 compute-0 systemd[1]: libpod-1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866.scope: Deactivated successfully.
Oct 11 08:26:16 compute-0 podman[200100]: 2025-10-11 08:26:16.203584876 +0000 UTC m=+0.222428183 container died 1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca8daa62f5b2277c06d470e28e600c92bb0fee6d04bfb5954f8146cb84958b6e-merged.mount: Deactivated successfully.
Oct 11 08:26:16 compute-0 podman[200100]: 2025-10-11 08:26:16.265591859 +0000 UTC m=+0.284435176 container remove 1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 08:26:16 compute-0 systemd[1]: libpod-conmon-1a8e960e6425ac992e89c5d0a403e3dbe53f984e1705324c7e2c22761a8b4866.scope: Deactivated successfully.
Oct 11 08:26:16 compute-0 python3.9[200173]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:16 compute-0 podman[200192]: 2025-10-11 08:26:16.509850611 +0000 UTC m=+0.059265935 container create 082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_montalcini, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 08:26:16 compute-0 systemd[1]: Started libpod-conmon-082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac.scope.
Oct 11 08:26:16 compute-0 podman[200192]: 2025-10-11 08:26:16.486477785 +0000 UTC m=+0.035893109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:26:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fb3c57c5bce9c1c71852580b70ec56c657f27802c86cfec46d6279309cddf2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fb3c57c5bce9c1c71852580b70ec56c657f27802c86cfec46d6279309cddf2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fb3c57c5bce9c1c71852580b70ec56c657f27802c86cfec46d6279309cddf2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fb3c57c5bce9c1c71852580b70ec56c657f27802c86cfec46d6279309cddf2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fb3c57c5bce9c1c71852580b70ec56c657f27802c86cfec46d6279309cddf2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:26:16 compute-0 podman[200192]: 2025-10-11 08:26:16.615273859 +0000 UTC m=+0.164689153 container init 082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_montalcini, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:26:16 compute-0 podman[200192]: 2025-10-11 08:26:16.630264063 +0000 UTC m=+0.179679397 container start 082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Oct 11 08:26:16 compute-0 podman[200192]: 2025-10-11 08:26:16.634632759 +0000 UTC m=+0.184048053 container attach 082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:26:16 compute-0 sudo[200169]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:17 compute-0 ceph-mon[74313]: pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:17 compute-0 sudo[200371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqkgltrtdhgbnxagexoylpspwwmaixqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171176.8598855-409-179184511390583/AnsiballZ_systemd.py'
Oct 11 08:26:17 compute-0 sudo[200371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:17 compute-0 podman[200375]: 2025-10-11 08:26:17.483948457 +0000 UTC m=+0.109569439 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:26:17 compute-0 python3.9[200376]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:17 compute-0 elastic_montalcini[200210]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:26:17 compute-0 elastic_montalcini[200210]: --> relative data size: 1.0
Oct 11 08:26:17 compute-0 elastic_montalcini[200210]: --> All data devices are unavailable
Oct 11 08:26:17 compute-0 sudo[200371]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:26:17 compute-0 systemd[1]: libpod-082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac.scope: Deactivated successfully.
Oct 11 08:26:17 compute-0 systemd[1]: libpod-082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac.scope: Consumed 1.100s CPU time.
Oct 11 08:26:17 compute-0 podman[200192]: 2025-10-11 08:26:17.807764841 +0000 UTC m=+1.357180165 container died 082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 08:26:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8fb3c57c5bce9c1c71852580b70ec56c657f27802c86cfec46d6279309cddf2-merged.mount: Deactivated successfully.
Oct 11 08:26:17 compute-0 podman[200192]: 2025-10-11 08:26:17.875271643 +0000 UTC m=+1.424686927 container remove 082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_montalcini, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:26:17 compute-0 systemd[1]: libpod-conmon-082c2156b4ecde2b2c1771ea19136513cb0dce5dfcc2f3cb18dd5604d06ef1ac.scope: Deactivated successfully.
Oct 11 08:26:17 compute-0 sudo[199936]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:17 compute-0 sudo[200449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:26:17 compute-0 sudo[200449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:17 compute-0 sudo[200449]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:18 compute-0 sudo[200497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:26:18 compute-0 sudo[200497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:18 compute-0 sudo[200497]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:18 compute-0 sudo[200551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:26:18 compute-0 sudo[200551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:18 compute-0 sudo[200551]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:18 compute-0 sudo[200599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:26:18 compute-0 sudo[200599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:18 compute-0 sudo[200674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcynsddnlyjhwssoeuhzufonpqrwyfrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171177.986633-409-276101225251270/AnsiballZ_systemd.py'
Oct 11 08:26:18 compute-0 sudo[200674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:18 compute-0 python3.9[200683]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:18 compute-0 podman[200717]: 2025-10-11 08:26:18.752395315 +0000 UTC m=+0.069901672 container create b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:26:18 compute-0 systemd[1]: Started libpod-conmon-b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2.scope.
Oct 11 08:26:18 compute-0 podman[200717]: 2025-10-11 08:26:18.72145217 +0000 UTC m=+0.038958587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:26:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:26:18 compute-0 podman[200717]: 2025-10-11 08:26:18.874865726 +0000 UTC m=+0.192372123 container init b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 08:26:18 compute-0 podman[200717]: 2025-10-11 08:26:18.888578023 +0000 UTC m=+0.206084390 container start b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 08:26:18 compute-0 podman[200717]: 2025-10-11 08:26:18.893097074 +0000 UTC m=+0.210603501 container attach b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 08:26:18 compute-0 exciting_mccarthy[200736]: 167 167
Oct 11 08:26:18 compute-0 systemd[1]: libpod-b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2.scope: Deactivated successfully.
Oct 11 08:26:18 compute-0 podman[200717]: 2025-10-11 08:26:18.899634753 +0000 UTC m=+0.217141120 container died b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 08:26:18 compute-0 sudo[200674]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-57088c6405620078587aa0184a2e35980e087fc0246251744b6edf7adfdc3994-merged.mount: Deactivated successfully.
Oct 11 08:26:18 compute-0 podman[200717]: 2025-10-11 08:26:18.94899292 +0000 UTC m=+0.266499247 container remove b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:26:18 compute-0 systemd[1]: libpod-conmon-b6d26c2f50f04c04976f31a6845ad34bbc1c9617ed05b6ba0ad588e5d519f1f2.scope: Deactivated successfully.
Oct 11 08:26:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:19 compute-0 ceph-mon[74313]: pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:19 compute-0 podman[200797]: 2025-10-11 08:26:19.171612677 +0000 UTC m=+0.069901532 container create 19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:26:19 compute-0 podman[200797]: 2025-10-11 08:26:19.14542791 +0000 UTC m=+0.043716855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:26:19 compute-0 systemd[1]: Started libpod-conmon-19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8.scope.
Oct 11 08:26:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89f6557004dfec1988db132ba044fdc74c5726d7cc93d3a7242d8277d95e637/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89f6557004dfec1988db132ba044fdc74c5726d7cc93d3a7242d8277d95e637/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89f6557004dfec1988db132ba044fdc74c5726d7cc93d3a7242d8277d95e637/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89f6557004dfec1988db132ba044fdc74c5726d7cc93d3a7242d8277d95e637/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:26:19 compute-0 podman[200797]: 2025-10-11 08:26:19.302299816 +0000 UTC m=+0.200588771 container init 19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 08:26:19 compute-0 podman[200797]: 2025-10-11 08:26:19.316377213 +0000 UTC m=+0.214666108 container start 19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:26:19 compute-0 podman[200797]: 2025-10-11 08:26:19.320663657 +0000 UTC m=+0.218952592 container attach 19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 08:26:19 compute-0 sudo[200931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwiovsfooauqdkremhgeibmphuxchosl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171179.1158023-409-7905920086025/AnsiballZ_systemd.py'
Oct 11 08:26:19 compute-0 sudo[200931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:19 compute-0 python3.9[200933]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:20 compute-0 sudo[200931]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:20 compute-0 competent_chaum[200853]: {
Oct 11 08:26:20 compute-0 competent_chaum[200853]:     "0": [
Oct 11 08:26:20 compute-0 competent_chaum[200853]:         {
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "devices": [
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "/dev/loop3"
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             ],
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "lv_name": "ceph_lv0",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "lv_size": "21470642176",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "name": "ceph_lv0",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "tags": {
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.cluster_name": "ceph",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.crush_device_class": "",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.encrypted": "0",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.osd_id": "0",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.type": "block",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.vdo": "0"
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             },
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "type": "block",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "vg_name": "ceph_vg0"
Oct 11 08:26:20 compute-0 competent_chaum[200853]:         }
Oct 11 08:26:20 compute-0 competent_chaum[200853]:     ],
Oct 11 08:26:20 compute-0 competent_chaum[200853]:     "1": [
Oct 11 08:26:20 compute-0 competent_chaum[200853]:         {
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "devices": [
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "/dev/loop4"
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             ],
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "lv_name": "ceph_lv1",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "lv_size": "21470642176",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "name": "ceph_lv1",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "tags": {
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.cluster_name": "ceph",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.crush_device_class": "",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.encrypted": "0",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.osd_id": "1",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.type": "block",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.vdo": "0"
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             },
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "type": "block",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "vg_name": "ceph_vg1"
Oct 11 08:26:20 compute-0 competent_chaum[200853]:         }
Oct 11 08:26:20 compute-0 competent_chaum[200853]:     ],
Oct 11 08:26:20 compute-0 competent_chaum[200853]:     "2": [
Oct 11 08:26:20 compute-0 competent_chaum[200853]:         {
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "devices": [
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "/dev/loop5"
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             ],
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "lv_name": "ceph_lv2",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "lv_size": "21470642176",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "name": "ceph_lv2",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "tags": {
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.cluster_name": "ceph",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.crush_device_class": "",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.encrypted": "0",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.osd_id": "2",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.type": "block",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:                 "ceph.vdo": "0"
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             },
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "type": "block",
Oct 11 08:26:20 compute-0 competent_chaum[200853]:             "vg_name": "ceph_vg2"
Oct 11 08:26:20 compute-0 competent_chaum[200853]:         }
Oct 11 08:26:20 compute-0 competent_chaum[200853]:     ]
Oct 11 08:26:20 compute-0 competent_chaum[200853]: }
Oct 11 08:26:20 compute-0 systemd[1]: libpod-19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8.scope: Deactivated successfully.
Oct 11 08:26:20 compute-0 podman[200797]: 2025-10-11 08:26:20.130265166 +0000 UTC m=+1.028554031 container died 19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 08:26:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e89f6557004dfec1988db132ba044fdc74c5726d7cc93d3a7242d8277d95e637-merged.mount: Deactivated successfully.
Oct 11 08:26:20 compute-0 podman[200797]: 2025-10-11 08:26:20.219928929 +0000 UTC m=+1.118217814 container remove 19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:26:20 compute-0 systemd[1]: libpod-conmon-19a7d7037a1553d29e8d20749f9aae55c571686bcfdfa6219bc977a3e5f5c3d8.scope: Deactivated successfully.
Oct 11 08:26:20 compute-0 sudo[200599]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:20 compute-0 sudo[201016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:26:20 compute-0 sudo[201016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:20 compute-0 sudo[201016]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:20 compute-0 sudo[201075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:26:20 compute-0 sudo[201075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:20 compute-0 sudo[201075]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:20 compute-0 sudo[201105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:26:20 compute-0 sudo[201105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:20 compute-0 sudo[201105]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:20 compute-0 sudo[201154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:26:20 compute-0 sudo[201154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:20 compute-0 sudo[201203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfxxyajzsoyzgtbstxqjnkjhkuvzxfic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171180.2358618-409-56315306420686/AnsiballZ_systemd.py'
Oct 11 08:26:20 compute-0 sudo[201203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:20 compute-0 python3.9[201207]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:21 compute-0 sudo[201203]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:21 compute-0 podman[201252]: 2025-10-11 08:26:21.116773721 +0000 UTC m=+0.059103680 container create f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 08:26:21 compute-0 ceph-mon[74313]: pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:21 compute-0 systemd[1]: Started libpod-conmon-f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df.scope.
Oct 11 08:26:21 compute-0 podman[201252]: 2025-10-11 08:26:21.085086725 +0000 UTC m=+0.027416704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:26:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:26:21 compute-0 podman[201252]: 2025-10-11 08:26:21.220283214 +0000 UTC m=+0.162613183 container init f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 08:26:21 compute-0 podman[201252]: 2025-10-11 08:26:21.230987144 +0000 UTC m=+0.173317113 container start f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:26:21 compute-0 podman[201252]: 2025-10-11 08:26:21.235066772 +0000 UTC m=+0.177396731 container attach f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 08:26:21 compute-0 xenodochial_carver[201276]: 167 167
Oct 11 08:26:21 compute-0 systemd[1]: libpod-f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df.scope: Deactivated successfully.
Oct 11 08:26:21 compute-0 podman[201252]: 2025-10-11 08:26:21.242064664 +0000 UTC m=+0.184394623 container died f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:26:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e431252c652aea722080ec89640261392f51f3e7bb34eff2449af3f4c53f162-merged.mount: Deactivated successfully.
Oct 11 08:26:21 compute-0 podman[201252]: 2025-10-11 08:26:21.294326395 +0000 UTC m=+0.236656344 container remove f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 08:26:21 compute-0 systemd[1]: libpod-conmon-f6172ca743f71d632a7b10deb062f6297f85784e9599d03f33031e7998b0f6df.scope: Deactivated successfully.
Oct 11 08:26:21 compute-0 podman[201380]: 2025-10-11 08:26:21.570438759 +0000 UTC m=+0.091768344 container create 04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 08:26:21 compute-0 podman[201380]: 2025-10-11 08:26:21.521187885 +0000 UTC m=+0.042517570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:26:21 compute-0 systemd[1]: Started libpod-conmon-04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5.scope.
Oct 11 08:26:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2034ff69a4e7666bf61e6e2d3b74a01958efc8ad35c823f46c11f273eb6b8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2034ff69a4e7666bf61e6e2d3b74a01958efc8ad35c823f46c11f273eb6b8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2034ff69a4e7666bf61e6e2d3b74a01958efc8ad35c823f46c11f273eb6b8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa2034ff69a4e7666bf61e6e2d3b74a01958efc8ad35c823f46c11f273eb6b8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:26:21 compute-0 podman[201380]: 2025-10-11 08:26:21.695436094 +0000 UTC m=+0.216765749 container init 04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 08:26:21 compute-0 podman[201380]: 2025-10-11 08:26:21.706916936 +0000 UTC m=+0.228246541 container start 04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:26:21 compute-0 podman[201380]: 2025-10-11 08:26:21.710667784 +0000 UTC m=+0.231997449 container attach 04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:26:21 compute-0 sudo[201460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiqmdcgitgkqsoyquqdfzsaocmljcndv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171181.307003-409-98071184729313/AnsiballZ_systemd.py'
Oct 11 08:26:21 compute-0 sudo[201460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:22 compute-0 python3.9[201463]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:22 compute-0 sudo[201460]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:22 compute-0 sweet_diffie[201430]: {
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:         "osd_id": 2,
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:         "type": "bluestore"
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:     },
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:         "osd_id": 0,
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:         "type": "bluestore"
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:     },
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:         "osd_id": 1,
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:         "type": "bluestore"
Oct 11 08:26:22 compute-0 sweet_diffie[201430]:     }
Oct 11 08:26:22 compute-0 sweet_diffie[201430]: }
Oct 11 08:26:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:26:22 compute-0 sudo[201644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdmiurxnopoafshwerqgswaitezuvcue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171182.3902314-409-211416707658310/AnsiballZ_systemd.py'
Oct 11 08:26:22 compute-0 sudo[201644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:22 compute-0 systemd[1]: libpod-04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5.scope: Deactivated successfully.
Oct 11 08:26:22 compute-0 podman[201380]: 2025-10-11 08:26:22.835030916 +0000 UTC m=+1.356360551 container died 04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 08:26:22 compute-0 systemd[1]: libpod-04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5.scope: Consumed 1.128s CPU time.
Oct 11 08:26:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa2034ff69a4e7666bf61e6e2d3b74a01958efc8ad35c823f46c11f273eb6b8a-merged.mount: Deactivated successfully.
Oct 11 08:26:22 compute-0 podman[201380]: 2025-10-11 08:26:22.916891963 +0000 UTC m=+1.438221558 container remove 04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:26:22 compute-0 systemd[1]: libpod-conmon-04040285a03d4a7fe86a42018c47a0b0faa8e1be5afbefac270c36abfba5e3d5.scope: Deactivated successfully.
Oct 11 08:26:22 compute-0 sudo[201154]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:26:22 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:26:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:26:22 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:26:22 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 36abbe34-fe35-40b7-a58c-bf511ee84431 does not exist
Oct 11 08:26:22 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a7fa4941-155f-4fe9-a516-1eb3113cf502 does not exist
Oct 11 08:26:23 compute-0 sudo[201660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:26:23 compute-0 sudo[201660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:23 compute-0 sudo[201660]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:23 compute-0 python3.9[201646]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:23 compute-0 sudo[201685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:26:23 compute-0 sudo[201685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:26:23 compute-0 sudo[201685]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:23 compute-0 sudo[201644]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:23 compute-0 sudo[201862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smrnywifumvtfbnoidacnmjjwbcddvro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171183.4274201-409-151967281126112/AnsiballZ_systemd.py'
Oct 11 08:26:23 compute-0 sudo[201862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:23 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:26:23 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:26:23 compute-0 ceph-mon[74313]: pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:24 compute-0 python3.9[201864]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:24 compute-0 sudo[201862]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:26:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:26:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:26:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:26:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:26:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:26:24 compute-0 sudo[202017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auhtlkikypnryyqdfocdrlbjucsnsltc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171184.4044104-409-186530751191093/AnsiballZ_systemd.py'
Oct 11 08:26:24 compute-0 sudo[202017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:25 compute-0 python3.9[202019]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:25 compute-0 ceph-mon[74313]: pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:25 compute-0 sudo[202017]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:25 compute-0 sudo[202172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shgdoalxzirugakcshdzchxijbxdifwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171185.4464793-409-163209511049756/AnsiballZ_systemd.py'
Oct 11 08:26:25 compute-0 sudo[202172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:26 compute-0 python3.9[202174]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:26 compute-0 sudo[202172]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:26 compute-0 sudo[202327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhjkupvoqvyygfeniusfegiglqxedsru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171186.4767087-409-196657726490532/AnsiballZ_systemd.py'
Oct 11 08:26:26 compute-0 sudo[202327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:27 compute-0 ceph-mon[74313]: pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:27 compute-0 python3.9[202329]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:27 compute-0 sudo[202327]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:26:27 compute-0 sudo[202482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opjhkomjfiixrztebvkagrbgqvcehstg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171187.5234058-409-14117894097344/AnsiballZ_systemd.py'
Oct 11 08:26:27 compute-0 sudo[202482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:28 compute-0 python3.9[202484]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:28 compute-0 sudo[202482]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:29 compute-0 sudo[202637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqycevfasrtfnmvyxuhgmicuueovpovp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171188.5907164-409-250364435781355/AnsiballZ_systemd.py'
Oct 11 08:26:29 compute-0 sudo[202637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:29 compute-0 ceph-mon[74313]: pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:29 compute-0 python3.9[202639]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:29 compute-0 sudo[202637]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:30 compute-0 sudo[202792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxbohmzplzpczfbpjbsunkofglzqyoat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171189.6494753-409-80896504995480/AnsiballZ_systemd.py'
Oct 11 08:26:30 compute-0 sudo[202792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:30 compute-0 python3.9[202794]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 11 08:26:30 compute-0 sudo[202792]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:31 compute-0 ceph-mon[74313]: pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:31 compute-0 sudo[202947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfzmgayvdddbvcsbinympwvdzkhdilve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171191.0392754-511-55168033419222/AnsiballZ_file.py'
Oct 11 08:26:31 compute-0 sudo[202947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:31 compute-0 python3.9[202949]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:26:31 compute-0 sudo[202947]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:32 compute-0 sudo[203099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cajljbahdglzafpqxjosjdttxjxclaou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171191.9787087-511-218199969200413/AnsiballZ_file.py'
Oct 11 08:26:32 compute-0 sudo[203099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:32 compute-0 python3.9[203101]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:26:32 compute-0 sudo[203099]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:26:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:33 compute-0 ceph-mon[74313]: pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:33 compute-0 sudo[203251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtrywnjjpmmroxmtbpfymifyukzozvmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171192.7727437-511-146346376663864/AnsiballZ_file.py'
Oct 11 08:26:33 compute-0 sudo[203251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:33 compute-0 python3.9[203253]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:26:33 compute-0 sudo[203251]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:33 compute-0 sudo[203403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plxoehnhksdwkzcmjhficgalfoovosvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171193.5638008-511-129803030905555/AnsiballZ_file.py'
Oct 11 08:26:33 compute-0 sudo[203403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:34 compute-0 python3.9[203405]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:26:34 compute-0 sudo[203403]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:34 compute-0 sudo[203555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsmttwteoxkmpjgykugfmiudoywbnvtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171194.3719654-511-75403966266792/AnsiballZ_file.py'
Oct 11 08:26:34 compute-0 sudo[203555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:35 compute-0 python3.9[203557]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:26:35 compute-0 sudo[203555]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:35 compute-0 ceph-mon[74313]: pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:35 compute-0 sudo[203723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbhehusppsipimbipagiklshmnayzvtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171195.219136-511-171581333272276/AnsiballZ_file.py'
Oct 11 08:26:35 compute-0 sudo[203723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:35 compute-0 podman[203681]: 2025-10-11 08:26:35.705350683 +0000 UTC m=+0.141032109 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 08:26:35 compute-0 python3.9[203728]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:26:35 compute-0 sudo[203723]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:36 compute-0 sudo[203886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeblcawfpyvonjvfyeimnkyxkiognofd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171196.1097093-554-194864933559665/AnsiballZ_stat.py'
Oct 11 08:26:36 compute-0 sudo[203886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:36 compute-0 python3.9[203888]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:26:36 compute-0 sudo[203886]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:37 compute-0 ceph-mon[74313]: pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:37 compute-0 sudo[204011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubvhsxqezvjhoryopueqrqnmaeudsbsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171196.1097093-554-194864933559665/AnsiballZ_copy.py'
Oct 11 08:26:37 compute-0 sudo[204011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:37 compute-0 python3.9[204013]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760171196.1097093-554-194864933559665/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:26:37 compute-0 sudo[204011]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:38 compute-0 sudo[204163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhvgnekadgofczafbtrtrrvbgkdyxiod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171197.9724848-554-91649675375093/AnsiballZ_stat.py'
Oct 11 08:26:38 compute-0 sudo[204163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:38 compute-0 python3.9[204165]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:26:38 compute-0 sudo[204163]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:39 compute-0 sudo[204288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpbtgryxvlsjlecoqfhswvqvegxvjfjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171197.9724848-554-91649675375093/AnsiballZ_copy.py'
Oct 11 08:26:39 compute-0 sudo[204288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:39 compute-0 ceph-mon[74313]: pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:39 compute-0 python3.9[204290]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760171197.9724848-554-91649675375093/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:39 compute-0 sudo[204288]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:40 compute-0 sudo[204440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaihfgnjptdgtyjfqxbtzcbbtxynadvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171199.557333-554-26072867298799/AnsiballZ_stat.py'
Oct 11 08:26:40 compute-0 sudo[204440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:40 compute-0 python3.9[204442]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:26:40 compute-0 sudo[204440]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:40 compute-0 sudo[204565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwzofpytslkxclkkekqmzfetshpgzbyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171199.557333-554-26072867298799/AnsiballZ_copy.py'
Oct 11 08:26:40 compute-0 sudo[204565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:41 compute-0 ceph-mon[74313]: pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:41 compute-0 python3.9[204567]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760171199.557333-554-26072867298799/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:41 compute-0 sudo[204565]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:41 compute-0 sudo[204717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgnbammbkkueiygwyfcchumrhhurpgoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171201.4470196-554-103478801935860/AnsiballZ_stat.py'
Oct 11 08:26:41 compute-0 sudo[204717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:42 compute-0 python3.9[204719]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:26:42 compute-0 sudo[204717]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:42 compute-0 sudo[204842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgjfnsdgzvizozrdeixgwvkmdzusohmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171201.4470196-554-103478801935860/AnsiballZ_copy.py'
Oct 11 08:26:42 compute-0 sudo[204842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:26:42 compute-0 python3.9[204844]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760171201.4470196-554-103478801935860/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:42 compute-0 sudo[204842]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:43 compute-0 ceph-mon[74313]: pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:43 compute-0 sudo[204994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pappbsdduqfjapgthlwdgsqjjtafewrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171203.0568814-554-47111044612848/AnsiballZ_stat.py'
Oct 11 08:26:43 compute-0 sudo[204994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:43 compute-0 python3.9[204996]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:26:43 compute-0 sudo[204994]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:44 compute-0 sudo[205119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzlflehslbsyoksnivbdsyabbhuwqoir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171203.0568814-554-47111044612848/AnsiballZ_copy.py'
Oct 11 08:26:44 compute-0 sudo[205119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:44 compute-0 python3.9[205121]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760171203.0568814-554-47111044612848/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:44 compute-0 sudo[205119]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:45 compute-0 sudo[205271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qstauxrtnhtgzeorksngeazsnknblbus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171204.6573875-554-180382546408933/AnsiballZ_stat.py'
Oct 11 08:26:45 compute-0 sudo[205271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:45 compute-0 ceph-mon[74313]: pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:45 compute-0 python3.9[205273]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:26:45 compute-0 sudo[205271]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:45 compute-0 sudo[205396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irnmogmwwtcjdtorgyhfmuwvilsovmke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171204.6573875-554-180382546408933/AnsiballZ_copy.py'
Oct 11 08:26:45 compute-0 sudo[205396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:46 compute-0 python3.9[205398]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760171204.6573875-554-180382546408933/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:46 compute-0 sudo[205396]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:46 compute-0 sudo[205548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjcuvskmuiafochjncotditxkorfldry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171206.2742188-554-113360488149126/AnsiballZ_stat.py'
Oct 11 08:26:46 compute-0 sudo[205548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:46 compute-0 python3.9[205550]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:26:46 compute-0 sudo[205548]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:47 compute-0 ceph-mon[74313]: pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:47 compute-0 sudo[205671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwjtehaaxuionnqtukprrvltahrcpfpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171206.2742188-554-113360488149126/AnsiballZ_copy.py'
Oct 11 08:26:47 compute-0 sudo[205671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:47 compute-0 python3.9[205673]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760171206.2742188-554-113360488149126/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:47 compute-0 sudo[205671]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:47 compute-0 podman[205674]: 2025-10-11 08:26:47.65568066 +0000 UTC m=+0.079257013 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 11 08:26:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:26:48 compute-0 sudo[205843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzedlmizxlxwqhphjyguhendppebubfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171207.7554858-554-156885925612333/AnsiballZ_stat.py'
Oct 11 08:26:48 compute-0 sudo[205843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:48 compute-0 python3.9[205845]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:26:48 compute-0 sudo[205843]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:48 compute-0 sudo[205968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iewubuyoyutrdulvapzouqppooyjnqyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171207.7554858-554-156885925612333/AnsiballZ_copy.py'
Oct 11 08:26:48 compute-0 sudo[205968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:49 compute-0 python3.9[205970]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760171207.7554858-554-156885925612333/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:49 compute-0 ceph-mon[74313]: pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:49 compute-0 sudo[205968]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:49 compute-0 sudo[206120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btinfhhxieqgkvlssupgbktcfrsjbine ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171209.3922555-667-168414151038260/AnsiballZ_command.py'
Oct 11 08:26:49 compute-0 sudo[206120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:50 compute-0 python3.9[206122]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 11 08:26:50 compute-0 sudo[206120]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:50 compute-0 sudo[206273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lehyzgeiiuwjbjguockssqroflnjmhnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171210.3260202-676-126522557445908/AnsiballZ_file.py'
Oct 11 08:26:50 compute-0 sudo[206273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:50 compute-0 python3.9[206275]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:50 compute-0 sudo[206273]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:51 compute-0 ceph-mon[74313]: pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:51 compute-0 sudo[206425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olvtnkirfakhgucsbnadcthjfzfwpxsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171211.1694825-676-130609296935135/AnsiballZ_file.py'
Oct 11 08:26:51 compute-0 sudo[206425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:51 compute-0 python3.9[206427]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:51 compute-0 sudo[206425]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:52 compute-0 sudo[206577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nifgxjinhnjoseufyrfwanrwlrdbmdvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171211.9735758-676-108443723598169/AnsiballZ_file.py'
Oct 11 08:26:52 compute-0 sudo[206577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:52 compute-0 python3.9[206579]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:52 compute-0 sudo[206577]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:26:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:53 compute-0 sudo[206729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jinvdsjglzkfklnrpkrphsgfdciqeoqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171212.765266-676-278708974466476/AnsiballZ_file.py'
Oct 11 08:26:53 compute-0 sudo[206729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:53 compute-0 ceph-mon[74313]: pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:53 compute-0 python3.9[206731]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:53 compute-0 sudo[206729]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:53 compute-0 sudo[206881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcpdmabfgoxyybsrdzjhqgwbepjvwdrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171213.5785153-676-188604628170039/AnsiballZ_file.py'
Oct 11 08:26:53 compute-0 sudo[206881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:54 compute-0 python3.9[206883]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:54 compute-0 sudo[206881]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:26:54
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'backups', 'vms', '.mgr', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control']
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:26:54 compute-0 sudo[207033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htgbjnhozvuutyznquvxkpdmbwxjtsiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171214.376763-676-224550020929607/AnsiballZ_file.py'
Oct 11 08:26:54 compute-0 sudo[207033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:26:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:26:55 compute-0 python3.9[207035]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:55 compute-0 sudo[207033]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:55 compute-0 ceph-mon[74313]: pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:55 compute-0 sudo[207185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnznodbpjovedzhdwwncwmukmybaixfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171215.204701-676-22027138848079/AnsiballZ_file.py'
Oct 11 08:26:55 compute-0 sudo[207185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:55 compute-0 python3.9[207187]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:55 compute-0 sudo[207185]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:56 compute-0 sudo[207337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jycslevvtdtkkcohlmgsqbzivmydeisj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171216.1223252-676-3349073050753/AnsiballZ_file.py'
Oct 11 08:26:56 compute-0 sudo[207337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:56 compute-0 python3.9[207339]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:56 compute-0 sudo[207337]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:57 compute-0 ceph-mon[74313]: pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:57 compute-0 sudo[207489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjbfymjoxrfkykgoonsojmsmrndjczse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171216.953404-676-87507841021725/AnsiballZ_file.py'
Oct 11 08:26:57 compute-0 sudo[207489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:57 compute-0 python3.9[207491]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:57 compute-0 sudo[207489]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:26:58 compute-0 sudo[207641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrvzumvjifmagjvfecjuslyrfqdzwqgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171217.775047-676-194580008937853/AnsiballZ_file.py'
Oct 11 08:26:58 compute-0 sudo[207641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:58 compute-0 python3.9[207643]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:58 compute-0 sudo[207641]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:59 compute-0 sudo[207793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuvwcxxbnkusvtqlzprdafvksjdhxwdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171218.5534792-676-141575840618709/AnsiballZ_file.py'
Oct 11 08:26:59 compute-0 sudo[207793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:26:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:59 compute-0 ceph-mon[74313]: pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:26:59 compute-0 python3.9[207795]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:26:59 compute-0 sudo[207793]: pam_unix(sudo:session): session closed for user root
Oct 11 08:26:59 compute-0 sudo[207945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlwftuehhhvotpifxboioywdledfkqjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171219.4867685-676-66721847904102/AnsiballZ_file.py'
Oct 11 08:26:59 compute-0 sudo[207945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:00 compute-0 python3.9[207947]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:00 compute-0 sudo[207945]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:00 compute-0 sudo[208097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jelbhuvboghqdfskomznwgyzmfzjfgqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171220.2611513-676-25521909946481/AnsiballZ_file.py'
Oct 11 08:27:00 compute-0 sudo[208097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:00 compute-0 python3.9[208099]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:00 compute-0 sudo[208097]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:01 compute-0 ceph-mon[74313]: pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:01 compute-0 sudo[208249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfitbdjkstgcgkevulnkjxklhlvouilf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171221.0724318-676-271284050482259/AnsiballZ_file.py'
Oct 11 08:27:01 compute-0 sudo[208249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:01 compute-0 python3.9[208251]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:01 compute-0 sudo[208249]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:02 compute-0 sudo[208401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knfdkcxfanzjbjjkimxzignopafipzkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171221.9284294-775-269471614739624/AnsiballZ_stat.py'
Oct 11 08:27:02 compute-0 sudo[208401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:02 compute-0 python3.9[208403]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:02 compute-0 sudo[208401]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:27:03 compute-0 sudo[208524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waimvfhbepuehzrrxrwtetmnxwaxxjqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171221.9284294-775-269471614739624/AnsiballZ_copy.py'
Oct 11 08:27:03 compute-0 sudo[208524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:03 compute-0 ceph-mon[74313]: pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:03 compute-0 python3.9[208526]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171221.9284294-775-269471614739624/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:03 compute-0 sudo[208524]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:03 compute-0 sudo[208676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weqnkbguslmkxbymxxhubhqbicvbjeox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171223.4792318-775-176658747803651/AnsiballZ_stat.py'
Oct 11 08:27:03 compute-0 sudo[208676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:04 compute-0 python3.9[208678]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:04 compute-0 sudo[208676]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:27:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:27:04 compute-0 sudo[208799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arfsjqihvueuahdonrhqcqifkpebjcpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171223.4792318-775-176658747803651/AnsiballZ_copy.py'
Oct 11 08:27:04 compute-0 sudo[208799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:04 compute-0 python3.9[208801]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171223.4792318-775-176658747803651/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:04 compute-0 sudo[208799]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:05 compute-0 ceph-mon[74313]: pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:05 compute-0 sudo[208951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbscxtkwtyraakzjfnvtrpfnfufnzhzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171225.046138-775-67219452377892/AnsiballZ_stat.py'
Oct 11 08:27:05 compute-0 sudo[208951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:05 compute-0 python3.9[208953]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:05 compute-0 sudo[208951]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:06 compute-0 sudo[209084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfgirezgtnpkbfctpirqfrvtgncnzyib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171225.046138-775-67219452377892/AnsiballZ_copy.py'
Oct 11 08:27:06 compute-0 sudo[209084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:06 compute-0 podman[209048]: 2025-10-11 08:27:06.271127718 +0000 UTC m=+0.148244478 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Oct 11 08:27:06 compute-0 python3.9[209090]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171225.046138-775-67219452377892/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:06 compute-0 sudo[209084]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:07 compute-0 sudo[209249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhdeufjqhfmgydsiokdltftzqrxtqrcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171226.6717944-775-251791200899765/AnsiballZ_stat.py'
Oct 11 08:27:07 compute-0 sudo[209249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:07 compute-0 ceph-mon[74313]: pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:07 compute-0 python3.9[209251]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:07 compute-0 sudo[209249]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:27:07 compute-0 sudo[209372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zshmuvmdwztnueiqkmrtggxxnyearpfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171226.6717944-775-251791200899765/AnsiballZ_copy.py'
Oct 11 08:27:07 compute-0 sudo[209372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:08 compute-0 python3.9[209374]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171226.6717944-775-251791200899765/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:08 compute-0 sudo[209372]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:08 compute-0 sudo[209524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncauxaunzpibsuqbtaymnwcpxthwtoks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171228.2590163-775-60781284299053/AnsiballZ_stat.py'
Oct 11 08:27:08 compute-0 sudo[209524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:08 compute-0 python3.9[209526]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:08 compute-0 sudo[209524]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:09 compute-0 ceph-mon[74313]: pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:09 compute-0 sudo[209647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znjvpqjvquhyhddwqtzejtgkkioafvpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171228.2590163-775-60781284299053/AnsiballZ_copy.py'
Oct 11 08:27:09 compute-0 sudo[209647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:09 compute-0 python3.9[209649]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171228.2590163-775-60781284299053/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:09 compute-0 sudo[209647]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:10 compute-0 sudo[209799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mibagjvwfjppzrhaqowlemkflemnlbiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171229.944545-775-254370766992314/AnsiballZ_stat.py'
Oct 11 08:27:10 compute-0 sudo[209799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:10 compute-0 python3.9[209801]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:10 compute-0 sudo[209799]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:11 compute-0 sudo[209922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrgmkdstjwputtcrbljxolknzkndyzvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171229.944545-775-254370766992314/AnsiballZ_copy.py'
Oct 11 08:27:11 compute-0 sudo[209922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:11 compute-0 ceph-mon[74313]: pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:11 compute-0 python3.9[209924]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171229.944545-775-254370766992314/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:11 compute-0 sudo[209922]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:12 compute-0 sudo[210074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gubostkmeafpgoebobuusbtogujhuypd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171231.6060023-775-233063784969844/AnsiballZ_stat.py'
Oct 11 08:27:12 compute-0 sudo[210074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:12 compute-0 python3.9[210076]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:12 compute-0 sudo[210074]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:12 compute-0 sudo[210197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrgowgufshmtypwerpiygtzrykugnrrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171231.6060023-775-233063784969844/AnsiballZ_copy.py'
Oct 11 08:27:12 compute-0 sudo[210197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:27:13 compute-0 python3.9[210199]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171231.6060023-775-233063784969844/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:13 compute-0 sudo[210197]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:13 compute-0 ceph-mon[74313]: pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:13 compute-0 sudo[210349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsbjqbwvdyqliglzkwcfellyfqmgpkor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171233.274518-775-79186126086940/AnsiballZ_stat.py'
Oct 11 08:27:13 compute-0 sudo[210349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:13 compute-0 python3.9[210351]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:13 compute-0 sudo[210349]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:14 compute-0 sudo[210472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnbzblldkgptjwqmvovwkygnahpxnech ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171233.274518-775-79186126086940/AnsiballZ_copy.py'
Oct 11 08:27:14 compute-0 sudo[210472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:14 compute-0 python3.9[210474]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171233.274518-775-79186126086940/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:14 compute-0 sudo[210472]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:27:15.157 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:27:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:27:15.157 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:27:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:27:15.158 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:27:15 compute-0 ceph-mon[74313]: pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:15 compute-0 sudo[210624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqhyvumjpsvzonpmskngouwrnbbxerdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171234.8783908-775-61529651540859/AnsiballZ_stat.py'
Oct 11 08:27:15 compute-0 sudo[210624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:15 compute-0 python3.9[210626]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:15 compute-0 sudo[210624]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:16 compute-0 sudo[210747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzcwvjmxcvpnqoogoyaruzagxrmzduva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171234.8783908-775-61529651540859/AnsiballZ_copy.py'
Oct 11 08:27:16 compute-0 sudo[210747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:16 compute-0 python3.9[210749]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171234.8783908-775-61529651540859/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:16 compute-0 sudo[210747]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:16 compute-0 sudo[210899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuodxrrbzzbpvuayxglgyucuvteakpyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171236.4811127-775-94497765831696/AnsiballZ_stat.py'
Oct 11 08:27:16 compute-0 sudo[210899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:17 compute-0 python3.9[210901]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:17 compute-0 sudo[210899]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:17 compute-0 sudo[211022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekcnjximtftfjubbowgafxkdmuwfvsac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171236.4811127-775-94497765831696/AnsiballZ_copy.py'
Oct 11 08:27:17 compute-0 sudo[211022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:17 compute-0 python3.9[211024]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171236.4811127-775-94497765831696/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:27:17 compute-0 sudo[211022]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:18 compute-0 ceph-mon[74313]: pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:18 compute-0 sudo[211187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pifjwqxkgiaejdguvgbihxggkgvalluj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171238.0171764-775-52338772292211/AnsiballZ_stat.py'
Oct 11 08:27:18 compute-0 sudo[211187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:18 compute-0 podman[211148]: 2025-10-11 08:27:18.412508688 +0000 UTC m=+0.073952400 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 08:27:18 compute-0 python3.9[211195]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:18 compute-0 sudo[211187]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:19 compute-0 sudo[211316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fezhckduqzskiegtxztnolipayogrxvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171238.0171764-775-52338772292211/AnsiballZ_copy.py'
Oct 11 08:27:19 compute-0 sudo[211316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:19 compute-0 python3.9[211318]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171238.0171764-775-52338772292211/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:19 compute-0 sudo[211316]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:19 compute-0 sudo[211468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnsltjxlotipxblkbpvhgnekegtsusra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171239.5173194-775-190524431362542/AnsiballZ_stat.py'
Oct 11 08:27:19 compute-0 sudo[211468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:20 compute-0 python3.9[211470]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:20 compute-0 sudo[211468]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:20 compute-0 ceph-mon[74313]: pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:20 compute-0 sudo[211591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvdtdzwhpmhrkmfazktuvvpejxbvnrcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171239.5173194-775-190524431362542/AnsiballZ_copy.py'
Oct 11 08:27:20 compute-0 sudo[211591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:20 compute-0 python3.9[211593]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171239.5173194-775-190524431362542/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:20 compute-0 sudo[211591]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:21 compute-0 sudo[211743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqrldpzeewlvgprqgfivlfdiwmwnzgnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171241.0570507-775-57145224136841/AnsiballZ_stat.py'
Oct 11 08:27:21 compute-0 sudo[211743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:21 compute-0 python3.9[211745]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:21 compute-0 sudo[211743]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:22 compute-0 sudo[211866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abzeivhcgyuznycebjrzcfmocwqzngkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171241.0570507-775-57145224136841/AnsiballZ_copy.py'
Oct 11 08:27:22 compute-0 sudo[211866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:22 compute-0 ceph-mon[74313]: pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:22 compute-0 python3.9[211868]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171241.0570507-775-57145224136841/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:22 compute-0 sudo[211866]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:27:23 compute-0 sudo[212018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyrtvqduzpjygbmwhchyjteznbypxxoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171242.6043742-775-170105886017473/AnsiballZ_stat.py'
Oct 11 08:27:23 compute-0 sudo[212018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:23 compute-0 python3.9[212020]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:23 compute-0 sudo[212018]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:23 compute-0 sudo[212021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:27:23 compute-0 sudo[212021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:23 compute-0 sudo[212021]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:23 compute-0 sudo[212049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:27:23 compute-0 sudo[212049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:23 compute-0 sudo[212049]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:23 compute-0 sudo[212098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:27:23 compute-0 sudo[212098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:23 compute-0 sudo[212098]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:23 compute-0 sudo[212144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:27:23 compute-0 sudo[212144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:23 compute-0 sudo[212253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdkbgjohzdugeuhcnirtekwrppqbafxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171242.6043742-775-170105886017473/AnsiballZ_copy.py'
Oct 11 08:27:23 compute-0 sudo[212253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:23 compute-0 python3.9[212257]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171242.6043742-775-170105886017473/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:24 compute-0 sudo[212253]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:24 compute-0 sudo[212144]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:27:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:27:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:27:24 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:27:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:27:24 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:27:24 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 7fa81eb5-7a76-4d63-8200-6dff16728af0 does not exist
Oct 11 08:27:24 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ca367e0e-b32d-4eba-b3b4-44b24be56c13 does not exist
Oct 11 08:27:24 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 46c42ac4-4e36-4308-91e9-deb561f4e405 does not exist
Oct 11 08:27:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:27:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:27:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:27:24 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:27:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:27:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:27:24 compute-0 ceph-mon[74313]: pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:24 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:27:24 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:27:24 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:27:24 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:27:24 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:27:24 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:27:24 compute-0 sudo[212299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:27:24 compute-0 sudo[212299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:24 compute-0 sudo[212299]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:24 compute-0 sudo[212325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:27:24 compute-0 sudo[212325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:24 compute-0 sudo[212325]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:24 compute-0 sudo[212378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:27:24 compute-0 sudo[212378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:24 compute-0 sudo[212378]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:24 compute-0 sudo[212433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:27:24 compute-0 sudo[212433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:27:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:27:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:27:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:27:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:27:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:27:24 compute-0 python3.9[212524]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:27:25 compute-0 podman[212568]: 2025-10-11 08:27:25.006267838 +0000 UTC m=+0.071933431 container create c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 08:27:25 compute-0 systemd[1]: Started libpod-conmon-c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f.scope.
Oct 11 08:27:25 compute-0 podman[212568]: 2025-10-11 08:27:24.974595632 +0000 UTC m=+0.040261285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:27:25 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:27:25 compute-0 podman[212568]: 2025-10-11 08:27:25.109144283 +0000 UTC m=+0.174809906 container init c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 08:27:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:25 compute-0 podman[212568]: 2025-10-11 08:27:25.124120986 +0000 UTC m=+0.189786569 container start c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:27:25 compute-0 podman[212568]: 2025-10-11 08:27:25.128479292 +0000 UTC m=+0.194144875 container attach c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:27:25 compute-0 nostalgic_perlman[212609]: 167 167
Oct 11 08:27:25 compute-0 systemd[1]: libpod-c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f.scope: Deactivated successfully.
Oct 11 08:27:25 compute-0 podman[212568]: 2025-10-11 08:27:25.133028153 +0000 UTC m=+0.198693776 container died c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-57833386b3307bb388273bfea463b57765fa668bd1e9ff890fef288e5ccbb2ad-merged.mount: Deactivated successfully.
Oct 11 08:27:25 compute-0 podman[212568]: 2025-10-11 08:27:25.19068382 +0000 UTC m=+0.256349383 container remove c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_perlman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:27:25 compute-0 systemd[1]: libpod-conmon-c94a2bf43c051137f67bb5ab578a3d41076dd09ed95b1bcb1985f77008332c5f.scope: Deactivated successfully.
Oct 11 08:27:25 compute-0 podman[212685]: 2025-10-11 08:27:25.434356776 +0000 UTC m=+0.066713850 container create d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:27:25 compute-0 systemd[1]: Started libpod-conmon-d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7.scope.
Oct 11 08:27:25 compute-0 podman[212685]: 2025-10-11 08:27:25.406630115 +0000 UTC m=+0.038987239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:27:25 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1814c00ad2842a1281dbc01403f2643bf8336db48a55bcf22a5a9f2c1dbd00e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1814c00ad2842a1281dbc01403f2643bf8336db48a55bcf22a5a9f2c1dbd00e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1814c00ad2842a1281dbc01403f2643bf8336db48a55bcf22a5a9f2c1dbd00e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1814c00ad2842a1281dbc01403f2643bf8336db48a55bcf22a5a9f2c1dbd00e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1814c00ad2842a1281dbc01403f2643bf8336db48a55bcf22a5a9f2c1dbd00e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:27:25 compute-0 podman[212685]: 2025-10-11 08:27:25.547135377 +0000 UTC m=+0.179492511 container init d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:27:25 compute-0 podman[212685]: 2025-10-11 08:27:25.563974014 +0000 UTC m=+0.196331088 container start d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 08:27:25 compute-0 podman[212685]: 2025-10-11 08:27:25.568919577 +0000 UTC m=+0.201276721 container attach d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:27:25 compute-0 sudo[212780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eunreluqhblvpdcfbxiehkqbimglmrht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171245.18961-981-130398830528573/AnsiballZ_seboolean.py'
Oct 11 08:27:25 compute-0 sudo[212780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:26 compute-0 python3.9[212782]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 11 08:27:26 compute-0 ceph-mon[74313]: pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:26 compute-0 vigorous_burnell[212702]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:27:26 compute-0 vigorous_burnell[212702]: --> relative data size: 1.0
Oct 11 08:27:26 compute-0 vigorous_burnell[212702]: --> All data devices are unavailable
Oct 11 08:27:26 compute-0 systemd[1]: libpod-d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7.scope: Deactivated successfully.
Oct 11 08:27:26 compute-0 dbus-broker-launch[809]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 11 08:27:26 compute-0 systemd[1]: libpod-d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7.scope: Consumed 1.171s CPU time.
Oct 11 08:27:26 compute-0 podman[212685]: 2025-10-11 08:27:26.809388826 +0000 UTC m=+1.441745920 container died d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 08:27:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1814c00ad2842a1281dbc01403f2643bf8336db48a55bcf22a5a9f2c1dbd00e-merged.mount: Deactivated successfully.
Oct 11 08:27:26 compute-0 podman[212685]: 2025-10-11 08:27:26.935559475 +0000 UTC m=+1.567916529 container remove d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:27:26 compute-0 systemd[1]: libpod-conmon-d1c8d064afefa80b63b7dd491b7c590e91dd0c8ef71d9c3fffe39bdde19cebb7.scope: Deactivated successfully.
Oct 11 08:27:27 compute-0 sudo[212433]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:27 compute-0 sudo[212824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:27:27 compute-0 sudo[212824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:27 compute-0 sudo[212824]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:27 compute-0 sudo[212849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:27:27 compute-0 sudo[212849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:27 compute-0 sudo[212849]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:27 compute-0 sudo[212874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:27:27 compute-0 sudo[212874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:27 compute-0 sudo[212874]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:27 compute-0 sudo[212780]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:27 compute-0 sudo[212899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:27:27 compute-0 sudo[212899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:27:27 compute-0 podman[213071]: 2025-10-11 08:27:27.844730944 +0000 UTC m=+0.073180817 container create 61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 08:27:27 compute-0 systemd[1]: Started libpod-conmon-61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356.scope.
Oct 11 08:27:27 compute-0 podman[213071]: 2025-10-11 08:27:27.811909415 +0000 UTC m=+0.040359298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:27:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:27:27 compute-0 sudo[213131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdymnmaivxyegbbjcaoiyhqoepzarkhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171247.5227697-989-145462579323876/AnsiballZ_copy.py'
Oct 11 08:27:27 compute-0 sudo[213131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:27 compute-0 podman[213071]: 2025-10-11 08:27:27.963041944 +0000 UTC m=+0.191491827 container init 61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 08:27:27 compute-0 podman[213071]: 2025-10-11 08:27:27.972538768 +0000 UTC m=+0.200988611 container start 61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 08:27:27 compute-0 podman[213071]: 2025-10-11 08:27:27.976865073 +0000 UTC m=+0.205314986 container attach 61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 08:27:27 compute-0 magical_mclaren[213130]: 167 167
Oct 11 08:27:27 compute-0 systemd[1]: libpod-61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356.scope: Deactivated successfully.
Oct 11 08:27:27 compute-0 podman[213071]: 2025-10-11 08:27:27.981874758 +0000 UTC m=+0.210324591 container died 61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 08:27:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-56275893a094fae51c73af8befef1d575caa2921308be59ae1b8f410f96a0561-merged.mount: Deactivated successfully.
Oct 11 08:27:28 compute-0 podman[213071]: 2025-10-11 08:27:28.027635141 +0000 UTC m=+0.256084974 container remove 61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:27:28 compute-0 systemd[1]: libpod-conmon-61e1403a6ddb15124cdc03e9957f648bf7d56099404dc767c36e9ef301f16356.scope: Deactivated successfully.
Oct 11 08:27:28 compute-0 python3.9[213135]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:28 compute-0 sudo[213131]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:28 compute-0 podman[213158]: 2025-10-11 08:27:28.244471211 +0000 UTC m=+0.062357214 container create 8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 08:27:28 compute-0 ceph-mon[74313]: pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:28 compute-0 systemd[1]: Started libpod-conmon-8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa.scope.
Oct 11 08:27:28 compute-0 podman[213158]: 2025-10-11 08:27:28.225488752 +0000 UTC m=+0.043374745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:27:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a93f08318ebf4582dc5994914aa7a65950b7589beee5825acc0a682fe7612d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a93f08318ebf4582dc5994914aa7a65950b7589beee5825acc0a682fe7612d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a93f08318ebf4582dc5994914aa7a65950b7589beee5825acc0a682fe7612d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95a93f08318ebf4582dc5994914aa7a65950b7589beee5825acc0a682fe7612d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:27:28 compute-0 podman[213158]: 2025-10-11 08:27:28.371420322 +0000 UTC m=+0.189306345 container init 8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:27:28 compute-0 podman[213158]: 2025-10-11 08:27:28.383429379 +0000 UTC m=+0.201315372 container start 8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 08:27:28 compute-0 podman[213158]: 2025-10-11 08:27:28.388002412 +0000 UTC m=+0.205888415 container attach 8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 08:27:28 compute-0 sudo[213329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adykyxsvdujrgiaxmmxpmtxwzemivrmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171248.3915584-989-176272374478573/AnsiballZ_copy.py'
Oct 11 08:27:28 compute-0 sudo[213329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:28 compute-0 python3.9[213331]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:29 compute-0 sudo[213329]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]: {
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:     "0": [
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:         {
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "devices": [
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "/dev/loop3"
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             ],
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "lv_name": "ceph_lv0",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "lv_size": "21470642176",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "name": "ceph_lv0",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "tags": {
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.cluster_name": "ceph",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.crush_device_class": "",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.encrypted": "0",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.osd_id": "0",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.type": "block",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.vdo": "0"
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             },
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "type": "block",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "vg_name": "ceph_vg0"
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:         }
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:     ],
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:     "1": [
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:         {
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "devices": [
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "/dev/loop4"
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             ],
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "lv_name": "ceph_lv1",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "lv_size": "21470642176",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "name": "ceph_lv1",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "tags": {
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.cluster_name": "ceph",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.crush_device_class": "",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.encrypted": "0",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.osd_id": "1",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.type": "block",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.vdo": "0"
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             },
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "type": "block",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "vg_name": "ceph_vg1"
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:         }
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:     ],
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:     "2": [
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:         {
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "devices": [
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "/dev/loop5"
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             ],
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "lv_name": "ceph_lv2",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "lv_size": "21470642176",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "name": "ceph_lv2",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "tags": {
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.cluster_name": "ceph",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.crush_device_class": "",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.encrypted": "0",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.osd_id": "2",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.type": "block",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:                 "ceph.vdo": "0"
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             },
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "type": "block",
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:             "vg_name": "ceph_vg2"
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:         }
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]:     ]
Oct 11 08:27:29 compute-0 interesting_lamarr[213199]: }
Oct 11 08:27:29 compute-0 systemd[1]: libpod-8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa.scope: Deactivated successfully.
Oct 11 08:27:29 compute-0 podman[213158]: 2025-10-11 08:27:29.193086841 +0000 UTC m=+1.010972834 container died 8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 08:27:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-95a93f08318ebf4582dc5994914aa7a65950b7589beee5825acc0a682fe7612d-merged.mount: Deactivated successfully.
Oct 11 08:27:29 compute-0 podman[213158]: 2025-10-11 08:27:29.282111555 +0000 UTC m=+1.099997538 container remove 8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 11 08:27:29 compute-0 systemd[1]: libpod-conmon-8eced79336be683dd3def2011572e63859482f9008534f079974feb55057fcfa.scope: Deactivated successfully.
Oct 11 08:27:29 compute-0 sudo[212899]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:29 compute-0 sudo[213425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:27:29 compute-0 sudo[213425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:29 compute-0 sudo[213425]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:29 compute-0 sudo[213477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:27:29 compute-0 sudo[213477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:29 compute-0 sudo[213477]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:29 compute-0 sudo[213528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:27:29 compute-0 sudo[213528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:29 compute-0 sudo[213528]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:29 compute-0 sudo[213571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coxyxgzftegktmosxnmrxcpydrrwtxtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171249.2150576-989-73363366745092/AnsiballZ_copy.py'
Oct 11 08:27:29 compute-0 sudo[213571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:29 compute-0 sudo[213576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:27:29 compute-0 sudo[213576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:29 compute-0 python3.9[213575]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:29 compute-0 sudo[213571]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:30 compute-0 podman[213691]: 2025-10-11 08:27:30.170085352 +0000 UTC m=+0.073340982 container create a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_jones, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:27:30 compute-0 systemd[1]: Started libpod-conmon-a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc.scope.
Oct 11 08:27:30 compute-0 podman[213691]: 2025-10-11 08:27:30.135648446 +0000 UTC m=+0.038904126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:27:30 compute-0 ceph-mon[74313]: pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:30 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:27:30 compute-0 podman[213691]: 2025-10-11 08:27:30.278997411 +0000 UTC m=+0.182253101 container init a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_jones, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:27:30 compute-0 podman[213691]: 2025-10-11 08:27:30.289672589 +0000 UTC m=+0.192928199 container start a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_jones, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:27:30 compute-0 podman[213691]: 2025-10-11 08:27:30.293296754 +0000 UTC m=+0.196552404 container attach a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_jones, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:27:30 compute-0 distracted_jones[213750]: 167 167
Oct 11 08:27:30 compute-0 systemd[1]: libpod-a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc.scope: Deactivated successfully.
Oct 11 08:27:30 compute-0 podman[213691]: 2025-10-11 08:27:30.298202536 +0000 UTC m=+0.201458176 container died a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_jones, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-17e167e00dd6a533f98760deb4d2886dfc77ede951cfce61206ccf5e0e58b006-merged.mount: Deactivated successfully.
Oct 11 08:27:30 compute-0 podman[213691]: 2025-10-11 08:27:30.360106516 +0000 UTC m=+0.263362156 container remove a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_jones, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 08:27:30 compute-0 systemd[1]: libpod-conmon-a140391039848c5bbeb9a9f63ac2e59d1998c6b23c44f3bf506a59b7667fabbc.scope: Deactivated successfully.
Oct 11 08:27:30 compute-0 sudo[213824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkunsdfxmxukdphitmftcdvnmxqstynz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171250.0492606-989-141784251399517/AnsiballZ_copy.py'
Oct 11 08:27:30 compute-0 sudo[213824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:30 compute-0 podman[213832]: 2025-10-11 08:27:30.588929113 +0000 UTC m=+0.062811418 container create 1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:27:30 compute-0 podman[213832]: 2025-10-11 08:27:30.567638497 +0000 UTC m=+0.041520832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:27:30 compute-0 systemd[1]: Started libpod-conmon-1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c.scope.
Oct 11 08:27:30 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a435232cf9687cb9a3d42a0906a904207273d0ff94939affe8a6d9cd8a0030d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a435232cf9687cb9a3d42a0906a904207273d0ff94939affe8a6d9cd8a0030d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a435232cf9687cb9a3d42a0906a904207273d0ff94939affe8a6d9cd8a0030d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a435232cf9687cb9a3d42a0906a904207273d0ff94939affe8a6d9cd8a0030d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:27:30 compute-0 python3.9[213827]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:30 compute-0 podman[213832]: 2025-10-11 08:27:30.749029872 +0000 UTC m=+0.222912267 container init 1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 08:27:30 compute-0 podman[213832]: 2025-10-11 08:27:30.762914584 +0000 UTC m=+0.236796909 container start 1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:27:30 compute-0 podman[213832]: 2025-10-11 08:27:30.76797498 +0000 UTC m=+0.241857355 container attach 1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:27:30 compute-0 sudo[213824]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:31 compute-0 sudo[214003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfadhhcezmqkfywiwausikpopxhmhfzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171250.9462147-989-16162226886399/AnsiballZ_copy.py'
Oct 11 08:27:31 compute-0 sudo[214003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:31 compute-0 python3.9[214005]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:31 compute-0 sudo[214003]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:31 compute-0 bold_herschel[213849]: {
Oct 11 08:27:31 compute-0 bold_herschel[213849]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:27:31 compute-0 bold_herschel[213849]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:27:31 compute-0 bold_herschel[213849]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:27:31 compute-0 bold_herschel[213849]:         "osd_id": 2,
Oct 11 08:27:31 compute-0 bold_herschel[213849]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:27:31 compute-0 bold_herschel[213849]:         "type": "bluestore"
Oct 11 08:27:31 compute-0 bold_herschel[213849]:     },
Oct 11 08:27:31 compute-0 bold_herschel[213849]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:27:31 compute-0 bold_herschel[213849]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:27:31 compute-0 bold_herschel[213849]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:27:31 compute-0 bold_herschel[213849]:         "osd_id": 0,
Oct 11 08:27:31 compute-0 bold_herschel[213849]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:27:31 compute-0 bold_herschel[213849]:         "type": "bluestore"
Oct 11 08:27:31 compute-0 bold_herschel[213849]:     },
Oct 11 08:27:31 compute-0 bold_herschel[213849]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:27:31 compute-0 bold_herschel[213849]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:27:31 compute-0 bold_herschel[213849]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:27:31 compute-0 bold_herschel[213849]:         "osd_id": 1,
Oct 11 08:27:31 compute-0 bold_herschel[213849]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:27:31 compute-0 bold_herschel[213849]:         "type": "bluestore"
Oct 11 08:27:31 compute-0 bold_herschel[213849]:     }
Oct 11 08:27:31 compute-0 bold_herschel[213849]: }
Oct 11 08:27:31 compute-0 systemd[1]: libpod-1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c.scope: Deactivated successfully.
Oct 11 08:27:31 compute-0 systemd[1]: libpod-1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c.scope: Consumed 1.182s CPU time.
Oct 11 08:27:31 compute-0 podman[213832]: 2025-10-11 08:27:31.946023223 +0000 UTC m=+1.419905568 container died 1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 08:27:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a435232cf9687cb9a3d42a0906a904207273d0ff94939affe8a6d9cd8a0030d4-merged.mount: Deactivated successfully.
Oct 11 08:27:32 compute-0 podman[213832]: 2025-10-11 08:27:32.036655723 +0000 UTC m=+1.510538048 container remove 1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_herschel, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 08:27:32 compute-0 systemd[1]: libpod-conmon-1e40bbf269c8674e9ab6edb708d9b16454ae2520e53d0e2b3af5bf6a33face7c.scope: Deactivated successfully.
Oct 11 08:27:32 compute-0 sudo[213576]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:27:32 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:27:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:27:32 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:27:32 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a2d185e5-9e30-48b8-bcc9-37ac1d5a604a does not exist
Oct 11 08:27:32 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 6207cc91-c19a-4cdf-ad6c-f4daf5656d99 does not exist
Oct 11 08:27:32 compute-0 sudo[214147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:27:32 compute-0 sudo[214147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:32 compute-0 sudo[214147]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:32 compute-0 sudo[214196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:27:32 compute-0 sudo[214196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:27:32 compute-0 sudo[214196]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:32 compute-0 sudo[214245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skwthpomdrasqtgbtgzyutebuqlseevp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171251.8617952-1025-270461995078982/AnsiballZ_copy.py'
Oct 11 08:27:32 compute-0 sudo[214245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:32 compute-0 python3.9[214249]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:32 compute-0 sudo[214245]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:27:33 compute-0 sudo[214399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvwexldirzjnrsxzxxqnmtrgmjtixvkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171252.6881-1025-169874545324626/AnsiballZ_copy.py'
Oct 11 08:27:33 compute-0 sudo[214399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:33 compute-0 ceph-mon[74313]: pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:33 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:27:33 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:27:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:33 compute-0 python3.9[214401]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:33 compute-0 sudo[214399]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:33 compute-0 sudo[214551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzgbwautztkbatkdodaozcknjctdxynh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171253.4865131-1025-41491611252133/AnsiballZ_copy.py'
Oct 11 08:27:33 compute-0 sudo[214551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:34 compute-0 python3.9[214553]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:34 compute-0 sudo[214551]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:34 compute-0 sudo[214703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kinhjwzyuugfxotjgvbjnrroekiogrqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171254.27721-1025-239222536099020/AnsiballZ_copy.py'
Oct 11 08:27:34 compute-0 sudo[214703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:34 compute-0 python3.9[214705]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:34 compute-0 sudo[214703]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:35 compute-0 ceph-mon[74313]: pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:35 compute-0 sudo[214855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msemxzevqdjkqwftbqvcejairsteozkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171255.1241045-1025-47081561518927/AnsiballZ_copy.py'
Oct 11 08:27:35 compute-0 sudo[214855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:35 compute-0 python3.9[214857]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:35 compute-0 sudo[214855]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:36 compute-0 sudo[215019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfqlmiicvulpqssuxoxpxftwiunudikc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171256.0090168-1061-18159663791091/AnsiballZ_systemd.py'
Oct 11 08:27:36 compute-0 sudo[215019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:36 compute-0 podman[214981]: 2025-10-11 08:27:36.595241786 +0000 UTC m=+0.147819485 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 11 08:27:36 compute-0 python3.9[215026]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 08:27:36 compute-0 systemd[1]: Reloading.
Oct 11 08:27:36 compute-0 systemd-rc-local-generator[215062]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:27:36 compute-0 systemd-sysv-generator[215067]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:27:37 compute-0 ceph-mon[74313]: pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:37 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Oct 11 08:27:37 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Oct 11 08:27:37 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 11 08:27:37 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 11 08:27:37 compute-0 systemd[1]: Starting libvirt logging daemon...
Oct 11 08:27:37 compute-0 systemd[1]: Started libvirt logging daemon.
Oct 11 08:27:37 compute-0 sudo[215019]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:27:38 compute-0 sudo[215226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsryptnqfoizmfhjojowskstzwwnzqcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171257.6694121-1061-245083315331642/AnsiballZ_systemd.py'
Oct 11 08:27:38 compute-0 sudo[215226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:38 compute-0 ceph-mon[74313]: pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:38 compute-0 python3.9[215228]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 08:27:38 compute-0 systemd[1]: Reloading.
Oct 11 08:27:38 compute-0 systemd-rc-local-generator[215257]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:27:38 compute-0 systemd-sysv-generator[215261]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:27:38 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 11 08:27:38 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 11 08:27:38 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 11 08:27:38 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 11 08:27:38 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 11 08:27:38 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 11 08:27:38 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 11 08:27:38 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 11 08:27:38 compute-0 sudo[215226]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:39 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 11 08:27:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:39 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 11 08:27:39 compute-0 sudo[215446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbmwxjrfnyuamunuhlxoakcehuibvgfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171259.1586235-1061-187678331232018/AnsiballZ_systemd.py'
Oct 11 08:27:39 compute-0 sudo[215446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:39 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 11 08:27:39 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 11 08:27:39 compute-0 python3.9[215450]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 08:27:39 compute-0 systemd[1]: Reloading.
Oct 11 08:27:40 compute-0 systemd-rc-local-generator[215482]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:27:40 compute-0 systemd-sysv-generator[215485]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:27:40 compute-0 ceph-mon[74313]: pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:40 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 11 08:27:40 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 11 08:27:40 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 11 08:27:40 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 11 08:27:40 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 11 08:27:40 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 11 08:27:40 compute-0 sudo[215446]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:40 compute-0 setroubleshoot[215316]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l fe00e702-c632-4d13-93c6-b24e5c622602
Oct 11 08:27:40 compute-0 setroubleshoot[215316]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Oct 11 08:27:40 compute-0 setroubleshoot[215316]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l fe00e702-c632-4d13-93c6-b24e5c622602
Oct 11 08:27:40 compute-0 setroubleshoot[215316]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
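[Annotation] The `ausearch -m avc` step suggested above prints raw AVC records. A minimal sketch of pulling the denied permission, command, and source context out of one such record; the sample record below is illustrative, shaped after this virtlogd `dac_read_search` denial, not copied from the host's audit log:

```python
import re

def parse_avc(record: str) -> dict:
    # Extract the denied permission set, the offending command, and the
    # source SELinux context from a raw AVC record line.
    m = re.search(r"denied\s+\{ (?P<perms>[^}]+)\}", record)
    fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', record))
    return {
        "perms": m.group("perms").split() if m else [],
        "comm": fields.get("comm", "").strip('"'),
        "scontext": fields.get("scontext", ""),
    }

# Illustrative AVC record (timestamps/pids are made up for the example).
sample = ('type=AVC msg=audit(1760171260.123:4567): avc:  denied '
          '{ dac_read_search } for  pid=215300 comm="virtlogd" '
          'capability=2  scontext=system_u:system_r:virtlogd_t:s0 '
          'tcontext=system_u:system_r:virtlogd_t:s0 tclass=capability '
          'permissive=0')
print(parse_avc(sample))
```

If a PATH record accompanies the AVC, the same field-splitting approach recovers the offending file path for the ownership/permission check the plugin recommends.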
                                                  
Oct 11 08:27:41 compute-0 sudo[215662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcxhpeijtxrspoovzgitadufjltbalpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171260.619711-1061-2660572774785/AnsiballZ_systemd.py'
Oct 11 08:27:41 compute-0 sudo[215662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:41 compute-0 python3.9[215664]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 08:27:41 compute-0 systemd[1]: Reloading.
Oct 11 08:27:41 compute-0 systemd-rc-local-generator[215687]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:27:41 compute-0 systemd-sysv-generator[215694]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:27:41 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Oct 11 08:27:41 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 11 08:27:41 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 11 08:27:41 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 11 08:27:41 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 11 08:27:41 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 11 08:27:41 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 11 08:27:41 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 11 08:27:41 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 11 08:27:41 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 11 08:27:41 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 11 08:27:41 compute-0 systemd[1]: Started libvirt QEMU daemon.
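[Annotation] The virtqemud restart sequence above was driven by Ansible's systemd module (see the `ansible-ansible.builtin.systemd Invoked with ...` record). Reconstructed from the logged module arguments, the task would look roughly like the fragment below; the actual playbook/role is not shown in this log:

```yaml
- name: Restart virtqemud
  become: true
  ansible.builtin.systemd:
    name: virtqemud.service
    state: restarted
    daemon_reload: true   # triggers the "systemd[1]: Reloading." line
    scope: system
```

`daemon_reload: true` explains why the generators (rc-local, sysv) run again before the socket units and virtqemud itself come up.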
Oct 11 08:27:41 compute-0 sudo[215662]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:42 compute-0 ceph-mon[74313]: pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:42 compute-0 sudo[215876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbdbjjzvtoptkjaxhlamcjrxuretvigh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171262.1343105-1061-221182395537366/AnsiballZ_systemd.py'
Oct 11 08:27:42 compute-0 sudo[215876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:27:42 compute-0 python3.9[215878]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 08:27:42 compute-0 systemd[1]: Reloading.
Oct 11 08:27:43 compute-0 systemd-rc-local-generator[215901]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:27:43 compute-0 systemd-sysv-generator[215906]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:27:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:43 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Oct 11 08:27:43 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Oct 11 08:27:43 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 11 08:27:43 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 11 08:27:43 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 11 08:27:43 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 11 08:27:43 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 11 08:27:43 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 11 08:27:43 compute-0 sudo[215876]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:44 compute-0 sudo[216085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqddgakttfsqvqcfpatvbcmvokcwcrfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171263.7901828-1098-208616370609973/AnsiballZ_file.py'
Oct 11 08:27:44 compute-0 sudo[216085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:44 compute-0 ceph-mon[74313]: pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:44 compute-0 python3.9[216087]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:44 compute-0 sudo[216085]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:45 compute-0 sudo[216237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yomowuumqyiuuihrtfbcvajxzkotltde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171264.6484523-1106-79341700010140/AnsiballZ_find.py'
Oct 11 08:27:45 compute-0 sudo[216237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:45 compute-0 python3.9[216239]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 08:27:45 compute-0 sudo[216237]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:45 compute-0 sudo[216389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tntvqskiqvwpdygayyejtkmhaiyjnmqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171265.5008643-1114-9042625513254/AnsiballZ_command.py'
Oct 11 08:27:45 compute-0 sudo[216389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:46 compute-0 python3.9[216391]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
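[Annotation] The shell pipeline above echoes the cluster name and extracts the cluster fsid from ceph.conf (`awk -F '=' '/fsid/ {print $2}' ... | xargs`). A Python equivalent of the fsid extraction; the sample file contents are illustrative, except for the fsid, which matches the one used later in this log:

```python
def extract_fsid(conf_text: str) -> str:
    # Mirror: awk -F '=' '/fsid/ {print $2}' ceph.conf | xargs
    # (first line containing "fsid"; strip() plays the role of xargs)
    for line in conf_text.splitlines():
        if "fsid" in line and "=" in line:
            return line.split("=", 1)[1].strip()
    raise ValueError("no fsid line found")

sample = """\
[global]
fsid = 33219f8b-dc38-5a8f-a577-8ccc4b37190a
mon_host = 192.0.2.10
"""
print(extract_fsid(sample))
```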
Oct 11 08:27:46 compute-0 sudo[216389]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:46 compute-0 ceph-mon[74313]: pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:46 compute-0 python3.9[216545]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 08:27:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:27:48 compute-0 python3.9[216695]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:48 compute-0 ceph-mon[74313]: pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:48 compute-0 podman[216790]: 2025-10-11 08:27:48.581458981 +0000 UTC m=+0.084565426 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:27:48 compute-0 python3.9[216827]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171267.417348-1133-51141290879582/.source.xml follow=False _original_basename=secret.xml.j2 checksum=b56ed023dc31f704f6fbc855b5c0dab0643174c8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:49 compute-0 sudo[216986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhcufjonhptrvewjpmimeuxznmfgsyhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171268.9952378-1148-129271498249655/AnsiballZ_command.py'
Oct 11 08:27:49 compute-0 sudo[216986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:49 compute-0 python3.9[216988]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 33219f8b-dc38-5a8f-a577-8ccc4b37190a
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
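[Annotation] The `virsh secret-undefine` / `secret-define --file /tmp/secret.xml` pair above re-registers the Ceph key under the fsid-derived UUID. The actual secret.xml.j2 template (copied at 08:27:48) is not shown in the log, so the element layout below is an assumption based on libvirt's conventional ceph-usage secret shape:

```python
import xml.etree.ElementTree as ET

def build_ceph_secret_xml(uuid: str, usage_name: str) -> str:
    # Typical libvirt secret definition for a Ceph client key; the
    # ephemeral/private attributes and <usage type="ceph"> element are
    # libvirt conventions, not taken from the (unseen) template.
    secret = ET.Element("secret", ephemeral="no", private="no")
    ET.SubElement(secret, "uuid").text = uuid
    usage = ET.SubElement(secret, "usage", type="ceph")
    ET.SubElement(usage, "name").text = usage_name
    return ET.tostring(secret, encoding="unicode")

xml = build_ceph_secret_xml("33219f8b-dc38-5a8f-a577-8ccc4b37190a",
                            "client.openstack secret")
print(xml)
```

The key value itself is set separately (`virsh secret-set-value`-style), which is consistent with the KEY environment variable passed to the later command at 08:27:52.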
Oct 11 08:27:49 compute-0 polkitd[6218]: Registered Authentication Agent for unix-process:216990:311437 (system bus name :1.3039 [/usr/bin/pkttyagent --process 216990 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 11 08:27:49 compute-0 polkitd[6218]: Unregistered Authentication Agent for unix-process:216990:311437 (system bus name :1.3039, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 11 08:27:49 compute-0 polkitd[6218]: Registered Authentication Agent for unix-process:216989:311436 (system bus name :1.3040 [/usr/bin/pkttyagent --process 216989 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 11 08:27:49 compute-0 polkitd[6218]: Unregistered Authentication Agent for unix-process:216989:311436 (system bus name :1.3040, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 11 08:27:49 compute-0 sudo[216986]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:50 compute-0 ceph-mon[74313]: pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:50 compute-0 python3.9[217150]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:50 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 11 08:27:50 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.027s CPU time.
Oct 11 08:27:50 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 11 08:27:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:51 compute-0 sudo[217300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvbtmsutypctclsnutfkilhyguaksosl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171271.0102286-1164-267202748806267/AnsiballZ_command.py'
Oct 11 08:27:51 compute-0 sudo[217300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:51 compute-0 sudo[217300]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:52 compute-0 ceph-mon[74313]: pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:52 compute-0 sudo[217453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwsdimzcgwqcdraktmbmtpckqrqnymea ; FSID=33219f8b-dc38-5a8f-a577-8ccc4b37190a KEY=AQC/EOpoAAAAABAAbjRLTNBPTj6MQLghfQxMew== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171271.9761324-1172-27687821657804/AnsiballZ_command.py'
Oct 11 08:27:52 compute-0 sudo[217453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:52 compute-0 polkitd[6218]: Registered Authentication Agent for unix-process:217456:311739 (system bus name :1.3043 [/usr/bin/pkttyagent --process 217456 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 11 08:27:52 compute-0 polkitd[6218]: Unregistered Authentication Agent for unix-process:217456:311739 (system bus name :1.3043, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 11 08:27:52 compute-0 sudo[217453]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:27:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:53 compute-0 sudo[217611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwhhscafzeffgnuzpvvbjwmfmlvacmvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171273.066379-1180-197332468368696/AnsiballZ_copy.py'
Oct 11 08:27:53 compute-0 sudo[217611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:53 compute-0 python3.9[217613]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:53 compute-0 sudo[217611]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:54 compute-0 ceph-mon[74313]: pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:54 compute-0 sudo[217763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvbvehrvdrhrwhzxumbnqyrjtyymfqje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171273.8905687-1188-64374561409026/AnsiballZ_stat.py'
Oct 11 08:27:54 compute-0 sudo[217763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:54 compute-0 python3.9[217765]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:54 compute-0 sudo[217763]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:27:54
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', 'backups', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'vms', '.rgw.root']
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:27:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:27:55 compute-0 sudo[217886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqzfhtzilmybbhkezqnkyuuyoktgdsaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171273.8905687-1188-64374561409026/AnsiballZ_copy.py'
Oct 11 08:27:55 compute-0 sudo[217886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:55 compute-0 python3.9[217888]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171273.8905687-1188-64374561409026/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:55 compute-0 sudo[217886]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:56 compute-0 sudo[218038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tclrywnmxdczbzjuhubgmtefaxecfvah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171275.6455288-1204-160903456365537/AnsiballZ_file.py'
Oct 11 08:27:56 compute-0 sudo[218038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:56 compute-0 ceph-mon[74313]: pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:56 compute-0 python3.9[218040]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:56 compute-0 sudo[218038]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:56 compute-0 sudo[218190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajxcirthrrshcgbaddwtywyydcwrqkgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171276.563571-1212-234284925246305/AnsiballZ_stat.py'
Oct 11 08:27:56 compute-0 sudo[218190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:57 compute-0 python3.9[218192]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:57 compute-0 sudo[218190]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:57 compute-0 sudo[218268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxnsxayiwkqflbzilwbularlurgbgmff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171276.563571-1212-234284925246305/AnsiballZ_file.py'
Oct 11 08:27:57 compute-0 sudo[218268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:57 compute-0 python3.9[218270]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:57 compute-0 sudo[218268]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:27:58 compute-0 sudo[218420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttkzqegwiokddsfhdlccvyxnmsoqhvfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171277.8597684-1224-264368285891350/AnsiballZ_stat.py'
Oct 11 08:27:58 compute-0 sudo[218420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:58 compute-0 ceph-mon[74313]: pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:58 compute-0 python3.9[218422]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:58 compute-0 sudo[218420]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:58 compute-0 sudo[218498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmochrikoslnjxohrtqgytmkuadzonun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171277.8597684-1224-264368285891350/AnsiballZ_file.py'
Oct 11 08:27:58 compute-0 sudo[218498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:58 compute-0 python3.9[218500]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.g6qg7uc3 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:27:58 compute-0 sudo[218498]: pam_unix(sudo:session): session closed for user root
Oct 11 08:27:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:27:59 compute-0 sudo[218650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upudstkzdhihblayoqjustvvtguilnfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171279.1143794-1236-131409840226499/AnsiballZ_stat.py'
Oct 11 08:27:59 compute-0 sudo[218650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:27:59 compute-0 python3.9[218652]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:27:59 compute-0 sudo[218650]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:00 compute-0 sudo[218728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpdmjzzbjeuhrbscasoopddytbvekaxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171279.1143794-1236-131409840226499/AnsiballZ_file.py'
Oct 11 08:28:00 compute-0 sudo[218728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:00 compute-0 python3.9[218730]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:28:00 compute-0 ceph-mon[74313]: pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:00 compute-0 sudo[218728]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:00 compute-0 sudo[218880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xelnmbpnjgfarzgzdckuyglggsakvype ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171280.5276484-1249-266229198354118/AnsiballZ_command.py'
Oct 11 08:28:00 compute-0 sudo[218880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:01 compute-0 python3.9[218882]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
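[Annotation] `nft -j list ruleset` emits the whole ruleset as one JSON document whose `nftables` array holds one-key objects (`metainfo`, `table`, `chain`, `rule`, ...). A minimal sketch of walking that output; the sample document is illustrative, not this host's actual ruleset:

```python
import json

sample = """
{"nftables": [
  {"metainfo": {"version": "1.0.4", "json_schema_version": 1}},
  {"table": {"family": "inet", "name": "filter", "handle": 1}},
  {"chain": {"family": "inet", "table": "filter", "name": "input",
             "handle": 1, "type": "filter", "hook": "input",
             "prio": 0, "policy": "accept"}}
]}
"""

def list_chains(ruleset_json: str):
    # Keep only the entries whose single key is "chain".
    doc = json.loads(ruleset_json)
    return [obj["chain"]["name"]
            for obj in doc["nftables"] if "chain" in obj]

print(list_chains(sample))  # -> ['input']
```

The edpm_nftables role invoked just below consumes this kind of snapshot to decide which jump chains and rule files to (re)generate.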
Oct 11 08:28:01 compute-0 sudo[218880]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:01 compute-0 anacron[27972]: Job `cron.weekly' started
Oct 11 08:28:01 compute-0 anacron[27972]: Job `cron.weekly' terminated
Oct 11 08:28:01 compute-0 sudo[219035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htnejoitccuisigjhrbcjavxoiunvgpo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760171281.3642054-1257-112160502196434/AnsiballZ_edpm_nftables_from_files.py'
Oct 11 08:28:01 compute-0 sudo[219035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:02 compute-0 python3[219037]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 11 08:28:02 compute-0 sudo[219035]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:02 compute-0 ceph-mon[74313]: pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:02 compute-0 sudo[219187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bledqntqlnwtbzfganmayijfmyunoxsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171282.3565161-1265-141020923627479/AnsiballZ_stat.py'
Oct 11 08:28:02 compute-0 sudo[219187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:28:02 compute-0 python3.9[219189]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:28:02 compute-0 sudo[219187]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:03 compute-0 sudo[219265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fykobkrghtdqfmehqxgmtcivlqfyqdkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171282.3565161-1265-141020923627479/AnsiballZ_file.py'
Oct 11 08:28:03 compute-0 sudo[219265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:03 compute-0 python3.9[219267]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:28:03 compute-0 sudo[219265]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:28:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:28:04 compute-0 sudo[219417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vywuemdcuucanachlcaurhwmulkfxeqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171283.740831-1277-246133502812441/AnsiballZ_stat.py'
Oct 11 08:28:04 compute-0 sudo[219417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:04 compute-0 ceph-mon[74313]: pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:04 compute-0 python3.9[219419]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:28:04 compute-0 sudo[219417]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:04 compute-0 sudo[219495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pswxwmyoxpcoxeeopyyljcpvznfobuyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171283.740831-1277-246133502812441/AnsiballZ_file.py'
Oct 11 08:28:04 compute-0 sudo[219495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:04 compute-0 python3.9[219497]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:28:05 compute-0 sudo[219495]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:05 compute-0 sudo[219647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bedylqatpyqzqufurwopnvbniogkucva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171285.2214074-1289-137592346066566/AnsiballZ_stat.py'
Oct 11 08:28:05 compute-0 sudo[219647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:05 compute-0 python3.9[219649]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:28:05 compute-0 sudo[219647]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:06 compute-0 sudo[219725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgeperwzfwuvpsrlaayddrkuyqodhmgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171285.2214074-1289-137592346066566/AnsiballZ_file.py'
Oct 11 08:28:06 compute-0 sudo[219725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:06 compute-0 ceph-mon[74313]: pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:06 compute-0 python3.9[219727]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:28:06 compute-0 sudo[219725]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:06 compute-0 podman[219775]: 2025-10-11 08:28:06.846246689 +0000 UTC m=+0.143842212 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:28:07 compute-0 sudo[219904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkqumrkmbzbuhtkykptjyzoxufrqwgwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171286.663834-1301-162297893741232/AnsiballZ_stat.py'
Oct 11 08:28:07 compute-0 sudo[219904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:07 compute-0 python3.9[219906]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:28:07 compute-0 sudo[219904]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:07 compute-0 sudo[219982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzxbshyrtlpxbblbrtjpxzbbospohogs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171286.663834-1301-162297893741232/AnsiballZ_file.py'
Oct 11 08:28:07 compute-0 sudo[219982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.828172) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171287828233, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1799, "num_deletes": 250, "total_data_size": 3029473, "memory_usage": 3062648, "flush_reason": "Manual Compaction"}
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171287839466, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1706193, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11762, "largest_seqno": 13560, "table_properties": {"data_size": 1700337, "index_size": 2931, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14635, "raw_average_key_size": 20, "raw_value_size": 1687431, "raw_average_value_size": 2317, "num_data_blocks": 136, "num_entries": 728, "num_filter_entries": 728, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760171083, "oldest_key_time": 1760171083, "file_creation_time": 1760171287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 11328 microseconds, and 4294 cpu microseconds.
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.839516) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1706193 bytes OK
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.839537) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.840912) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.840926) EVENT_LOG_v1 {"time_micros": 1760171287840921, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.840947) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3021862, prev total WAL file size 3021862, number of live WAL files 2.
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.841849) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1666KB)], [29(7714KB)]
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171287841919, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9605632, "oldest_snapshot_seqno": -1}
Oct 11 08:28:07 compute-0 python3.9[219984]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4015 keys, 7581083 bytes, temperature: kUnknown
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171287894850, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7581083, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7552542, "index_size": 17423, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95500, "raw_average_key_size": 23, "raw_value_size": 7478419, "raw_average_value_size": 1862, "num_data_blocks": 758, "num_entries": 4015, "num_filter_entries": 4015, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760171287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.895127) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7581083 bytes
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.896368) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.2 rd, 143.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.5 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(10.1) write-amplify(4.4) OK, records in: 4430, records dropped: 415 output_compression: NoCompression
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.896398) EVENT_LOG_v1 {"time_micros": 1760171287896384, "job": 12, "event": "compaction_finished", "compaction_time_micros": 53015, "compaction_time_cpu_micros": 33300, "output_level": 6, "num_output_files": 1, "total_output_size": 7581083, "num_input_records": 4430, "num_output_records": 4015, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171287897043, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171287899291, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.841719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.899385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.899391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.899393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.899395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:28:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:28:07.899396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:28:07 compute-0 sudo[219982]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:08 compute-0 sudo[220134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljrkmkbuzpsekdjfaddeevztwpjpghos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171288.1122963-1313-256563557952893/AnsiballZ_stat.py'
Oct 11 08:28:08 compute-0 sudo[220134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:08 compute-0 ceph-mon[74313]: pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:08 compute-0 python3.9[220136]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:28:08 compute-0 sudo[220134]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:09 compute-0 sudo[220259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smrcxzxwxtsslvmibyileuznvbntrcmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171288.1122963-1313-256563557952893/AnsiballZ_copy.py'
Oct 11 08:28:09 compute-0 sudo[220259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:09 compute-0 python3.9[220261]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760171288.1122963-1313-256563557952893/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:28:09 compute-0 sudo[220259]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:10 compute-0 sudo[220411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdtyjlsjnqhskzcthgswnfckbjuykrbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171289.8745258-1328-270230787635538/AnsiballZ_file.py'
Oct 11 08:28:10 compute-0 sudo[220411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:10 compute-0 python3.9[220413]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:28:10 compute-0 sudo[220411]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:10 compute-0 ceph-mon[74313]: pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:11 compute-0 sudo[220563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrrcuaxnnbfndcaywkdsxhxnbrcumrvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171290.696893-1336-33118700524028/AnsiballZ_command.py'
Oct 11 08:28:11 compute-0 sudo[220563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:11 compute-0 python3.9[220565]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:28:11 compute-0 sudo[220563]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:12 compute-0 sudo[220718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euurzzvrnubuiazicwtxzopeurkgvelu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171291.60524-1344-65349588077719/AnsiballZ_blockinfile.py'
Oct 11 08:28:12 compute-0 sudo[220718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:12 compute-0 python3.9[220720]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:28:12 compute-0 sudo[220718]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:28:12 compute-0 ceph-mon[74313]: pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:13 compute-0 sudo[220870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teltddrrucirbvtkggpnrtewpfvjjinf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171292.6865408-1353-16961491966889/AnsiballZ_command.py'
Oct 11 08:28:13 compute-0 sudo[220870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:13 compute-0 python3.9[220872]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:28:13 compute-0 sudo[220870]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:13 compute-0 sudo[221023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxaeqojrrhujbcunellowsiinyytijat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171293.6113203-1361-122748169089972/AnsiballZ_stat.py'
Oct 11 08:28:13 compute-0 sudo[221023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:14 compute-0 python3.9[221025]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:28:14 compute-0 sudo[221023]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:14 compute-0 ceph-mon[74313]: pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:14 compute-0 sudo[221177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azaeoesafhcvertxczucogkvtmlrlfvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171294.4780478-1369-183567297224018/AnsiballZ_command.py'
Oct 11 08:28:14 compute-0 sudo[221177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:15 compute-0 python3.9[221179]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:28:15 compute-0 sudo[221177]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:28:15.158 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:28:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:28:15.158 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:28:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:28:15.158 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:28:15 compute-0 sudo[221332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoxlltxrusvwdwxdjoackofjcpbyphhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171295.361084-1377-15543438758106/AnsiballZ_file.py'
Oct 11 08:28:15 compute-0 sudo[221332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:15 compute-0 python3.9[221334]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:28:15 compute-0 sudo[221332]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:16 compute-0 sudo[221484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvfypvjwnpshqlweuffqnrrogimnjnpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171296.184386-1385-12432590233/AnsiballZ_stat.py'
Oct 11 08:28:16 compute-0 sudo[221484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:16 compute-0 python3.9[221486]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:28:16 compute-0 sudo[221484]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:16 compute-0 ceph-mon[74313]: pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:17 compute-0 sudo[221607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eihwpiucbismernnxmqciukefrcwdxos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171296.184386-1385-12432590233/AnsiballZ_copy.py'
Oct 11 08:28:17 compute-0 sudo[221607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:17 compute-0 python3.9[221609]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171296.184386-1385-12432590233/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:28:17 compute-0 sudo[221607]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:28:18 compute-0 sudo[221759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okrsxjvwqxtoqickgliemhwbuvoogcgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171297.773118-1400-261364558454995/AnsiballZ_stat.py'
Oct 11 08:28:18 compute-0 sudo[221759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:18 compute-0 python3.9[221761]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:28:18 compute-0 sudo[221759]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:18 compute-0 podman[221833]: 2025-10-11 08:28:18.792244014 +0000 UTC m=+0.089790826 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:28:18 compute-0 sudo[221902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdfagbymmmzyhgwubddtydczhgapindr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171297.773118-1400-261364558454995/AnsiballZ_copy.py'
Oct 11 08:28:18 compute-0 sudo[221902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:18 compute-0 ceph-mon[74313]: pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:19 compute-0 python3.9[221904]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171297.773118-1400-261364558454995/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:28:19 compute-0 sudo[221902]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:19 compute-0 sudo[222054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xputcgdjgwaplzscwobaslqexghlfafx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171299.2835908-1415-150476464040750/AnsiballZ_stat.py'
Oct 11 08:28:19 compute-0 sudo[222054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:19 compute-0 python3.9[222056]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:28:19 compute-0 sudo[222054]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:20 compute-0 sudo[222177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaegjjddpvnbpunhhpirhkyflwtrzval ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171299.2835908-1415-150476464040750/AnsiballZ_copy.py'
Oct 11 08:28:20 compute-0 sudo[222177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:20 compute-0 python3.9[222179]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171299.2835908-1415-150476464040750/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:28:20 compute-0 sudo[222177]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:20 compute-0 ceph-mon[74313]: pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:21 compute-0 sudo[222329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oflzqxwlqknngyxtkrepnhkqjjbogzjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171300.925316-1430-10785776014137/AnsiballZ_systemd.py'
Oct 11 08:28:21 compute-0 sudo[222329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:21 compute-0 python3.9[222331]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:28:21 compute-0 systemd[1]: Reloading.
Oct 11 08:28:21 compute-0 systemd-rc-local-generator[222358]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:28:21 compute-0 systemd-sysv-generator[222361]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:28:22 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Oct 11 08:28:22 compute-0 sudo[222329]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:28:22 compute-0 sudo[222520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvfnvwszwayhngicbrsstfjmsnrjuduc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171302.5125206-1438-278585741491845/AnsiballZ_systemd.py'
Oct 11 08:28:22 compute-0 sudo[222520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:22 compute-0 ceph-mon[74313]: pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:23 compute-0 python3.9[222522]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 11 08:28:23 compute-0 systemd[1]: Reloading.
Oct 11 08:28:23 compute-0 systemd-rc-local-generator[222550]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:28:23 compute-0 systemd-sysv-generator[222554]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:28:24 compute-0 systemd[1]: Reloading.
Oct 11 08:28:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:28:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:28:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:28:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:28:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:28:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:28:24 compute-0 systemd-rc-local-generator[222588]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:28:24 compute-0 systemd-sysv-generator[222593]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:28:24 compute-0 ceph-mon[74313]: pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:25 compute-0 sudo[222520]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:25 compute-0 sshd-session[162937]: Connection closed by 192.168.122.30 port 45906
Oct 11 08:28:25 compute-0 sshd-session[162934]: pam_unix(sshd:session): session closed for user zuul
Oct 11 08:28:25 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Oct 11 08:28:25 compute-0 systemd[1]: session-50.scope: Consumed 4min 16.415s CPU time.
Oct 11 08:28:25 compute-0 systemd-logind[819]: Session 50 logged out. Waiting for processes to exit.
Oct 11 08:28:25 compute-0 systemd-logind[819]: Removed session 50.
Oct 11 08:28:26 compute-0 ceph-mon[74313]: pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:28:28 compute-0 ceph-mon[74313]: pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:30 compute-0 sshd-session[222619]: Accepted publickey for zuul from 192.168.122.30 port 51504 ssh2: ECDSA SHA256:KKxgUhG08XzjYMLOyvbR+tXItyOnGoLl6Fn32NV5afE
Oct 11 08:28:30 compute-0 ceph-mon[74313]: pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:30 compute-0 systemd-logind[819]: New session 51 of user zuul.
Oct 11 08:28:30 compute-0 systemd[1]: Started Session 51 of User zuul.
Oct 11 08:28:30 compute-0 sshd-session[222619]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 08:28:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:32 compute-0 python3.9[222772]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 08:28:32 compute-0 sudo[222777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:28:32 compute-0 sudo[222777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:32 compute-0 sudo[222777]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:32 compute-0 sudo[222802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:28:32 compute-0 sudo[222802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:32 compute-0 sudo[222802]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:32 compute-0 sudo[222827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:28:32 compute-0 sudo[222827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:32 compute-0 sudo[222827]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:32 compute-0 sudo[222876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:28:32 compute-0 sudo[222876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:28:32 compute-0 ceph-mon[74313]: pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:33 compute-0 sudo[222876]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:28:33 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:28:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:28:33 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:28:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:28:33 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:28:33 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 42cc0e00-67ba-49dd-8b5c-41a39a043931 does not exist
Oct 11 08:28:33 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ab224b1c-ec78-4866-bfcf-7261c46de3e3 does not exist
Oct 11 08:28:33 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ac890ab1-9f1c-40c7-825d-b2f848b733c3 does not exist
Oct 11 08:28:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:28:33 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:28:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:28:33 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:28:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:28:33 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:28:33 compute-0 sudo[223011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:28:33 compute-0 sudo[223011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:33 compute-0 sudo[223011]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:33 compute-0 sudo[223099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaguozofiqybtdfvgjzktuhewfkfopaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171312.9407651-34-94533092714700/AnsiballZ_file.py'
Oct 11 08:28:33 compute-0 sudo[223099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:33 compute-0 sudo[223066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:28:33 compute-0 sudo[223066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:33 compute-0 sudo[223066]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:33 compute-0 sudo[223110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:28:33 compute-0 sudo[223110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:33 compute-0 sudo[223110]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:33 compute-0 sudo[223135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:28:33 compute-0 sudo[223135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:33 compute-0 python3.9[223107]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:28:33 compute-0 sudo[223099]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:33 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:28:33 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:28:33 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:28:33 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:28:33 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:28:33 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:28:34 compute-0 podman[223277]: 2025-10-11 08:28:34.16630885 +0000 UTC m=+0.067096003 container create 30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 08:28:34 compute-0 systemd[1]: Started libpod-conmon-30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852.scope.
Oct 11 08:28:34 compute-0 podman[223277]: 2025-10-11 08:28:34.135130402 +0000 UTC m=+0.035917615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:28:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:28:34 compute-0 podman[223277]: 2025-10-11 08:28:34.28095029 +0000 UTC m=+0.181737483 container init 30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:28:34 compute-0 podman[223277]: 2025-10-11 08:28:34.292870363 +0000 UTC m=+0.193657516 container start 30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 08:28:34 compute-0 podman[223277]: 2025-10-11 08:28:34.297343712 +0000 UTC m=+0.198130865 container attach 30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:28:34 compute-0 thirsty_wilson[223321]: 167 167
Oct 11 08:28:34 compute-0 systemd[1]: libpod-30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852.scope: Deactivated successfully.
Oct 11 08:28:34 compute-0 podman[223277]: 2025-10-11 08:28:34.303465268 +0000 UTC m=+0.204252411 container died 30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:28:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-78789da50e0528f404f2401fb2c40fde8a434080ccbafa1ab052f0135d282463-merged.mount: Deactivated successfully.
Oct 11 08:28:34 compute-0 podman[223277]: 2025-10-11 08:28:34.362076986 +0000 UTC m=+0.262864139 container remove 30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:28:34 compute-0 systemd[1]: libpod-conmon-30686c8c1a7294c3e598efa147d0bfb0eedb263df3750ec9bd9e0d92124d9852.scope: Deactivated successfully.
Oct 11 08:28:34 compute-0 sudo[223385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cekcjirzkmcvavslrblwrhqzkmfmzcoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171313.969813-34-143296601510960/AnsiballZ_file.py'
Oct 11 08:28:34 compute-0 sudo[223385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:34 compute-0 podman[223393]: 2025-10-11 08:28:34.604718581 +0000 UTC m=+0.060468712 container create 6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 08:28:34 compute-0 python3.9[223387]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:28:34 compute-0 sudo[223385]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:34 compute-0 podman[223393]: 2025-10-11 08:28:34.575772778 +0000 UTC m=+0.031522969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:28:34 compute-0 systemd[1]: Started libpod-conmon-6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754.scope.
Oct 11 08:28:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9096da77454e0f901002b97534d359267fb2b3710b8d19d1da71bd82db63360d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9096da77454e0f901002b97534d359267fb2b3710b8d19d1da71bd82db63360d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9096da77454e0f901002b97534d359267fb2b3710b8d19d1da71bd82db63360d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9096da77454e0f901002b97534d359267fb2b3710b8d19d1da71bd82db63360d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9096da77454e0f901002b97534d359267fb2b3710b8d19d1da71bd82db63360d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:28:34 compute-0 podman[223393]: 2025-10-11 08:28:34.744904747 +0000 UTC m=+0.200654928 container init 6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:28:34 compute-0 podman[223393]: 2025-10-11 08:28:34.76410797 +0000 UTC m=+0.219858111 container start 6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 08:28:34 compute-0 podman[223393]: 2025-10-11 08:28:34.768371913 +0000 UTC m=+0.224122114 container attach 6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:28:34 compute-0 ceph-mon[74313]: pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:35 compute-0 sudo[223564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfmgcrafubfgbgxdslstfubdrsfwzkpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171314.838647-34-125237909014876/AnsiballZ_file.py'
Oct 11 08:28:35 compute-0 sudo[223564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:35 compute-0 python3.9[223566]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:28:35 compute-0 sudo[223564]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:35 compute-0 angry_lovelace[223411]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:28:35 compute-0 angry_lovelace[223411]: --> relative data size: 1.0
Oct 11 08:28:35 compute-0 angry_lovelace[223411]: --> All data devices are unavailable
Oct 11 08:28:35 compute-0 systemd[1]: libpod-6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754.scope: Deactivated successfully.
Oct 11 08:28:35 compute-0 podman[223393]: 2025-10-11 08:28:35.863355567 +0000 UTC m=+1.319105698 container died 6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:28:35 compute-0 systemd[1]: libpod-6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754.scope: Consumed 1.013s CPU time.
Oct 11 08:28:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9096da77454e0f901002b97534d359267fb2b3710b8d19d1da71bd82db63360d-merged.mount: Deactivated successfully.
Oct 11 08:28:35 compute-0 podman[223393]: 2025-10-11 08:28:35.930016185 +0000 UTC m=+1.385766296 container remove 6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 08:28:35 compute-0 systemd[1]: libpod-conmon-6806cef7266a40b837d8060a6eb0c245ec50f6ee5f1154c192499e70b359a754.scope: Deactivated successfully.
Oct 11 08:28:35 compute-0 sudo[223135]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:36 compute-0 sudo[223760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-givnencsavoqpertzrnasnbbpnmvqito ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171315.668779-34-101953531647786/AnsiballZ_file.py'
Oct 11 08:28:36 compute-0 sudo[223760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:36 compute-0 sudo[223747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:28:36 compute-0 sudo[223747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:36 compute-0 sudo[223747]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:36 compute-0 sudo[223780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:28:36 compute-0 sudo[223780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:36 compute-0 sudo[223780]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:36 compute-0 sudo[223805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:28:36 compute-0 sudo[223805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:36 compute-0 sudo[223805]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:36 compute-0 python3.9[223777]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 08:28:36 compute-0 sudo[223760]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:36 compute-0 sudo[223830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:28:36 compute-0 sudo[223830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:36 compute-0 podman[223995]: 2025-10-11 08:28:36.739893991 +0000 UTC m=+0.062170871 container create 0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:28:36 compute-0 systemd[1]: Started libpod-conmon-0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937.scope.
Oct 11 08:28:36 compute-0 podman[223995]: 2025-10-11 08:28:36.714530941 +0000 UTC m=+0.036807841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:28:36 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:28:36 compute-0 podman[223995]: 2025-10-11 08:28:36.837969184 +0000 UTC m=+0.160246154 container init 0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 08:28:36 compute-0 sudo[224064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yviyreybbqixpmzfnhbccgidmhklugew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171316.4163268-34-173609406520146/AnsiballZ_file.py'
Oct 11 08:28:36 compute-0 podman[223995]: 2025-10-11 08:28:36.850019231 +0000 UTC m=+0.172296111 container start 0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 08:28:36 compute-0 sudo[224064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:36 compute-0 podman[223995]: 2025-10-11 08:28:36.854205252 +0000 UTC m=+0.176482222 container attach 0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:28:36 compute-0 confident_bardeen[224046]: 167 167
Oct 11 08:28:36 compute-0 systemd[1]: libpod-0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937.scope: Deactivated successfully.
Oct 11 08:28:36 compute-0 podman[223995]: 2025-10-11 08:28:36.860517513 +0000 UTC m=+0.182794423 container died 0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-34977f0978772871fdc9210f84bd32f55dd7e915a017ebe3e0545d1590f4c6b6-merged.mount: Deactivated successfully.
Oct 11 08:28:36 compute-0 podman[223995]: 2025-10-11 08:28:36.937448058 +0000 UTC m=+0.259724968 container remove 0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 08:28:36 compute-0 systemd[1]: libpod-conmon-0759dd61a2a8c546ac4daad99822f01dce0213430ac2a007dcd4f4efdd3d8937.scope: Deactivated successfully.
Oct 11 08:28:36 compute-0 ceph-mon[74313]: pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:37 compute-0 podman[224070]: 2025-10-11 08:28:37.092746399 +0000 UTC m=+0.189439195 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 08:28:37 compute-0 python3.9[224068]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:28:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:37 compute-0 sudo[224064]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:37 compute-0 podman[224115]: 2025-10-11 08:28:37.201350526 +0000 UTC m=+0.057547388 container create 8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 08:28:37 compute-0 systemd[1]: Started libpod-conmon-8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134.scope.
Oct 11 08:28:37 compute-0 podman[224115]: 2025-10-11 08:28:37.179411334 +0000 UTC m=+0.035608276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:28:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7bfe8b2818632421ed295fbbf65cf14a97751e17a5ca3282dbf2fc62bf674f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7bfe8b2818632421ed295fbbf65cf14a97751e17a5ca3282dbf2fc62bf674f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7bfe8b2818632421ed295fbbf65cf14a97751e17a5ca3282dbf2fc62bf674f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7bfe8b2818632421ed295fbbf65cf14a97751e17a5ca3282dbf2fc62bf674f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:28:37 compute-0 podman[224115]: 2025-10-11 08:28:37.332719738 +0000 UTC m=+0.188916640 container init 8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:28:37 compute-0 podman[224115]: 2025-10-11 08:28:37.345620889 +0000 UTC m=+0.201817741 container start 8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:28:37 compute-0 podman[224115]: 2025-10-11 08:28:37.348870163 +0000 UTC m=+0.205067115 container attach 8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 08:28:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:28:37 compute-0 sudo[224286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzgywwbzgjhtpmnspdllvrktlzjstrgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171317.4135807-70-242597387509731/AnsiballZ_stat.py'
Oct 11 08:28:37 compute-0 sudo[224286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]: {
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:     "0": [
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:         {
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "devices": [
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "/dev/loop3"
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             ],
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "lv_name": "ceph_lv0",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "lv_size": "21470642176",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "name": "ceph_lv0",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "tags": {
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.cluster_name": "ceph",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.crush_device_class": "",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.encrypted": "0",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.osd_id": "0",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.type": "block",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.vdo": "0"
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             },
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "type": "block",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "vg_name": "ceph_vg0"
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:         }
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:     ],
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:     "1": [
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:         {
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "devices": [
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "/dev/loop4"
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             ],
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "lv_name": "ceph_lv1",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "lv_size": "21470642176",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "name": "ceph_lv1",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "tags": {
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.cluster_name": "ceph",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.crush_device_class": "",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.encrypted": "0",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.osd_id": "1",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.type": "block",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.vdo": "0"
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             },
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "type": "block",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "vg_name": "ceph_vg1"
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:         }
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:     ],
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:     "2": [
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:         {
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "devices": [
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "/dev/loop5"
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             ],
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "lv_name": "ceph_lv2",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "lv_size": "21470642176",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "name": "ceph_lv2",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "tags": {
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.cluster_name": "ceph",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.crush_device_class": "",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.encrypted": "0",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.osd_id": "2",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.type": "block",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:                 "ceph.vdo": "0"
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             },
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "type": "block",
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:             "vg_name": "ceph_vg2"
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:         }
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]:     ]
Oct 11 08:28:38 compute-0 compassionate_dijkstra[224156]: }
Oct 11 08:28:38 compute-0 systemd[1]: libpod-8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134.scope: Deactivated successfully.
Oct 11 08:28:38 compute-0 conmon[224156]: conmon 8bdcbce48bfd5d17130e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134.scope/container/memory.events
Oct 11 08:28:38 compute-0 podman[224115]: 2025-10-11 08:28:38.097276859 +0000 UTC m=+0.953473711 container died 8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:28:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f7bfe8b2818632421ed295fbbf65cf14a97751e17a5ca3282dbf2fc62bf674f-merged.mount: Deactivated successfully.
Oct 11 08:28:38 compute-0 podman[224115]: 2025-10-11 08:28:38.158534833 +0000 UTC m=+1.014731685 container remove 8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct 11 08:28:38 compute-0 systemd[1]: libpod-conmon-8bdcbce48bfd5d17130e409f4942a05091384d5bd95b93d3ecc3afe819ce6134.scope: Deactivated successfully.
Oct 11 08:28:38 compute-0 sudo[223830]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:38 compute-0 python3.9[224290]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:28:38 compute-0 sudo[224304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:28:38 compute-0 sudo[224304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:38 compute-0 sudo[224304]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:38 compute-0 sudo[224286]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:38 compute-0 sudo[224331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:28:38 compute-0 sudo[224331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:38 compute-0 sudo[224331]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:38 compute-0 sudo[224365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:28:38 compute-0 sudo[224365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:38 compute-0 sudo[224365]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:38 compute-0 sudo[224405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:28:38 compute-0 sudo[224405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:38 compute-0 podman[224522]: 2025-10-11 08:28:38.912982963 +0000 UTC m=+0.048394474 container create 19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Oct 11 08:28:38 compute-0 systemd[1]: Started libpod-conmon-19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b.scope.
Oct 11 08:28:38 compute-0 ceph-mon[74313]: pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:28:38 compute-0 podman[224522]: 2025-10-11 08:28:38.89273671 +0000 UTC m=+0.028148221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:28:38 compute-0 podman[224522]: 2025-10-11 08:28:38.998447213 +0000 UTC m=+0.133858724 container init 19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:28:39 compute-0 podman[224522]: 2025-10-11 08:28:39.010790479 +0000 UTC m=+0.146201960 container start 19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 08:28:39 compute-0 podman[224522]: 2025-10-11 08:28:39.014568047 +0000 UTC m=+0.149979568 container attach 19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct 11 08:28:39 compute-0 flamboyant_lederberg[224539]: 167 167
Oct 11 08:28:39 compute-0 systemd[1]: libpod-19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b.scope: Deactivated successfully.
Oct 11 08:28:39 compute-0 conmon[224539]: conmon 19639dfb423063c9e312 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b.scope/container/memory.events
Oct 11 08:28:39 compute-0 podman[224522]: 2025-10-11 08:28:39.0195204 +0000 UTC m=+0.154931881 container died 19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 11 08:28:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-b839a9853b45f090f9934db823397080d3910d020e5be87c305fe059d534bd08-merged.mount: Deactivated successfully.
Oct 11 08:28:39 compute-0 podman[224522]: 2025-10-11 08:28:39.06850561 +0000 UTC m=+0.203917131 container remove 19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lederberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 08:28:39 compute-0 systemd[1]: libpod-conmon-19639dfb423063c9e312e30ce48c42dcf1d03203038ae6df53f0f7c7efd89b7b.scope: Deactivated successfully.
Oct 11 08:28:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:39 compute-0 podman[224597]: 2025-10-11 08:28:39.299074868 +0000 UTC m=+0.071659724 container create bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:28:39 compute-0 systemd[1]: Started libpod-conmon-bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f.scope.
Oct 11 08:28:39 compute-0 sudo[224651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urkkxxlbohvywkdomzkucdwfxqdjeful ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171318.5271149-78-202761824454995/AnsiballZ_systemd.py'
Oct 11 08:28:39 compute-0 podman[224597]: 2025-10-11 08:28:39.270005481 +0000 UTC m=+0.042590427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:28:39 compute-0 sudo[224651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2343a6c67a37335b6509748c84b37e6098a35891cce33932bbc7c911168d311/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2343a6c67a37335b6509748c84b37e6098a35891cce33932bbc7c911168d311/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2343a6c67a37335b6509748c84b37e6098a35891cce33932bbc7c911168d311/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2343a6c67a37335b6509748c84b37e6098a35891cce33932bbc7c911168d311/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:28:39 compute-0 podman[224597]: 2025-10-11 08:28:39.417205399 +0000 UTC m=+0.189790285 container init bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 08:28:39 compute-0 podman[224597]: 2025-10-11 08:28:39.432593032 +0000 UTC m=+0.205177938 container start bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 08:28:39 compute-0 podman[224597]: 2025-10-11 08:28:39.437082151 +0000 UTC m=+0.209667047 container attach bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 08:28:39 compute-0 python3.9[224656]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:28:39 compute-0 systemd[1]: Reloading.
Oct 11 08:28:39 compute-0 systemd-rc-local-generator[224687]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:28:39 compute-0 systemd-sysv-generator[224690]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:28:40 compute-0 sudo[224651]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:40 compute-0 musing_pasteur[224652]: {
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:         "osd_id": 2,
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:         "type": "bluestore"
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:     },
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:         "osd_id": 0,
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:         "type": "bluestore"
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:     },
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:         "osd_id": 1,
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:         "type": "bluestore"
Oct 11 08:28:40 compute-0 musing_pasteur[224652]:     }
Oct 11 08:28:40 compute-0 musing_pasteur[224652]: }
Oct 11 08:28:40 compute-0 systemd[1]: libpod-bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f.scope: Deactivated successfully.
Oct 11 08:28:40 compute-0 systemd[1]: libpod-bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f.scope: Consumed 1.071s CPU time.
Oct 11 08:28:40 compute-0 podman[224597]: 2025-10-11 08:28:40.503965675 +0000 UTC m=+1.276550581 container died bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 08:28:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2343a6c67a37335b6509748c84b37e6098a35891cce33932bbc7c911168d311-merged.mount: Deactivated successfully.
Oct 11 08:28:40 compute-0 podman[224597]: 2025-10-11 08:28:40.585521013 +0000 UTC m=+1.358105899 container remove bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 08:28:40 compute-0 systemd[1]: libpod-conmon-bb646aac68e70e458994ae0197495159514221938ce88aacb5bec75f3d2de08f.scope: Deactivated successfully.
Oct 11 08:28:40 compute-0 sudo[224405]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:28:40 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:28:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:28:40 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:28:40 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 48031b39-0a0f-49bc-8804-0181a2d0d867 does not exist
Oct 11 08:28:40 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9e97f9d2-f503-446e-86cf-7d92074cef0b does not exist
Oct 11 08:28:40 compute-0 sudo[224813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:28:40 compute-0 sudo[224813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:40 compute-0 sudo[224813]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:40 compute-0 sudo[224859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:28:40 compute-0 sudo[224859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:28:40 compute-0 sudo[224859]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:40 compute-0 sudo[224936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxhqazomyddpafbdgjsoftizfzqxingp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171320.4059353-86-273457172898999/AnsiballZ_service_facts.py'
Oct 11 08:28:40 compute-0 sudo[224936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:40 compute-0 ceph-mon[74313]: pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:40 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:28:40 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:28:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:41 compute-0 python3.9[224938]: ansible-ansible.builtin.service_facts Invoked
Oct 11 08:28:41 compute-0 network[224955]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 08:28:41 compute-0 network[224956]: 'network-scripts' will be removed from distribution in near future.
Oct 11 08:28:41 compute-0 network[224957]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 08:28:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:28:42 compute-0 ceph-mon[74313]: pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:45 compute-0 ceph-mon[74313]: pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:46 compute-0 sudo[224936]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:47 compute-0 ceph-mon[74313]: pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:47 compute-0 sudo[225229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucjkwsexoahborjwpkxzlqkecqzewhtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171327.0086906-94-98680565733653/AnsiballZ_systemd.py'
Oct 11 08:28:47 compute-0 sudo[225229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:47 compute-0 python3.9[225231]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:28:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:28:48 compute-0 systemd[1]: Reloading.
Oct 11 08:28:48 compute-0 systemd-sysv-generator[225260]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:28:48 compute-0 systemd-rc-local-generator[225256]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:28:49 compute-0 ceph-mon[74313]: pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:49 compute-0 sudo[225229]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:49 compute-0 podman[225270]: 2025-10-11 08:28:49.291366447 +0000 UTC m=+0.095632604 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 11 08:28:50 compute-0 python3.9[225440]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:28:50 compute-0 sudo[225590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djotxhpahcvztjmsoaptlfsxxxgntxjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171330.3055892-111-168593837261073/AnsiballZ_podman_container.py'
Oct 11 08:28:50 compute-0 sudo[225590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:51 compute-0 ceph-mon[74313]: pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:51 compute-0 python3.9[225592]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 11 08:28:52 compute-0 podman[225605]: 2025-10-11 08:28:52.707645679 +0000 UTC m=+1.452646342 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 11 08:28:52 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 08:28:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:28:52 compute-0 podman[225664]: 2025-10-11 08:28:52.908659316 +0000 UTC m=+0.061415769 container create 974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:28:52 compute-0 NetworkManager[44960]: <info>  [1760171332.9400] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/21)
Oct 11 08:28:52 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 11 08:28:52 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 11 08:28:52 compute-0 kernel: veth0: entered allmulticast mode
Oct 11 08:28:52 compute-0 kernel: veth0: entered promiscuous mode
Oct 11 08:28:52 compute-0 podman[225664]: 2025-10-11 08:28:52.8771742 +0000 UTC m=+0.029930653 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 11 08:28:52 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 11 08:28:52 compute-0 kernel: podman0: port 1(veth0) entered forwarding state
Oct 11 08:28:52 compute-0 NetworkManager[44960]: <info>  [1760171332.9797] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/22)
Oct 11 08:28:52 compute-0 NetworkManager[44960]: <info>  [1760171332.9835] device (veth0): carrier: link connected
Oct 11 08:28:52 compute-0 NetworkManager[44960]: <info>  [1760171332.9845] device (podman0): carrier: link connected
Oct 11 08:28:52 compute-0 systemd-udevd[225694]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:28:53 compute-0 systemd-udevd[225697]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:28:53 compute-0 NetworkManager[44960]: <info>  [1760171333.0293] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:28:53 compute-0 NetworkManager[44960]: <info>  [1760171333.0329] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:28:53 compute-0 NetworkManager[44960]: <info>  [1760171333.0357] device (podman0): Activation: starting connection 'podman0' (f56496b4-d1b3-4fc8-b162-b99153c010b8)
Oct 11 08:28:53 compute-0 ceph-mon[74313]: pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:53 compute-0 NetworkManager[44960]: <info>  [1760171333.0369] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 11 08:28:53 compute-0 NetworkManager[44960]: <info>  [1760171333.0384] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 11 08:28:53 compute-0 NetworkManager[44960]: <info>  [1760171333.0395] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 11 08:28:53 compute-0 NetworkManager[44960]: <info>  [1760171333.0404] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 11 08:28:53 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 11 08:28:53 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 11 08:28:53 compute-0 NetworkManager[44960]: <info>  [1760171333.0834] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 11 08:28:53 compute-0 NetworkManager[44960]: <info>  [1760171333.0846] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 11 08:28:53 compute-0 NetworkManager[44960]: <info>  [1760171333.0876] device (podman0): Activation: successful, device activated.
Oct 11 08:28:53 compute-0 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 11 08:28:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:53 compute-0 systemd[1]: Started libpod-conmon-974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34.scope.
Oct 11 08:28:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:28:53 compute-0 podman[225664]: 2025-10-11 08:28:53.430139199 +0000 UTC m=+0.582895702 container init 974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:28:53 compute-0 podman[225664]: 2025-10-11 08:28:53.446297244 +0000 UTC m=+0.599053687 container start 974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:28:53 compute-0 podman[225664]: 2025-10-11 08:28:53.4523993 +0000 UTC m=+0.605155803 container attach 974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 11 08:28:53 compute-0 iscsid_config[225822]: iqn.1994-05.com.redhat:37f734a867d1
Oct 11 08:28:53 compute-0 systemd[1]: libpod-974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34.scope: Deactivated successfully.
Oct 11 08:28:53 compute-0 podman[225664]: 2025-10-11 08:28:53.456682073 +0000 UTC m=+0.609438516 container died 974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 11 08:28:53 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 11 08:28:53 compute-0 kernel: veth0 (unregistering): left allmulticast mode
Oct 11 08:28:53 compute-0 kernel: veth0 (unregistering): left promiscuous mode
Oct 11 08:28:53 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 11 08:28:53 compute-0 NetworkManager[44960]: <info>  [1760171333.5170] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:28:53 compute-0 systemd[1]: run-netns-netns\x2d602a4fb2\x2d095e\x2d26fb\x2d2b94\x2d033eeb81c571.mount: Deactivated successfully.
Oct 11 08:28:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34-userdata-shm.mount: Deactivated successfully.
Oct 11 08:28:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fe1fc2e701b77a548eeb9e38929048b6526f604f4a4e6fe28763cd760b11888-merged.mount: Deactivated successfully.
Oct 11 08:28:53 compute-0 podman[225664]: 2025-10-11 08:28:53.957501351 +0000 UTC m=+1.110257794 container remove 974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 08:28:53 compute-0 systemd[1]: libpod-conmon-974a768f94d5d782e9620e035ff75920f4d9520cae2e6765dfbc015d83002a34.scope: Deactivated successfully.
Oct 11 08:28:53 compute-0 python3.9[225592]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f /usr/sbin/iscsi-iname
Oct 11 08:28:54 compute-0 python3.9[225592]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                             DEPRECATED command:
                                             It is recommended to use Quadlets for running containers and pods under systemd.
                                             
                                             Please refer to podman-systemd.unit(5) for details.
                                             Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 11 08:28:54 compute-0 sudo[225590]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:28:54
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.control']
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:28:54 compute-0 sudo[226062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbsjxixclhlumanrxrccfhxxbgafdnnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171334.3837938-119-6403385519039/AnsiballZ_stat.py'
Oct 11 08:28:54 compute-0 sudo[226062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:28:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:28:55 compute-0 python3.9[226064]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:28:55 compute-0 ceph-mon[74313]: pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:55 compute-0 sudo[226062]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:55 compute-0 sudo[226185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skmlkhuesuayppffvqemiwlmqrdtsicn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171334.3837938-119-6403385519039/AnsiballZ_copy.py'
Oct 11 08:28:55 compute-0 sudo[226185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:55 compute-0 python3.9[226187]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171334.3837938-119-6403385519039/.source.iscsi _original_basename=.ocpwkz9q follow=False checksum=a3276cc3a122ba31a71903624a34893e0bef54c4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:28:55 compute-0 sudo[226185]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:56 compute-0 sudo[226337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssnhobdrfduistbjkiikcmjzrwtotrws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171336.226127-134-182561361976499/AnsiballZ_file.py'
Oct 11 08:28:56 compute-0 sudo[226337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:56 compute-0 python3.9[226339]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:28:56 compute-0 sudo[226337]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:57 compute-0 ceph-mon[74313]: pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:28:57 compute-0 python3.9[226489]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:28:58 compute-0 sudo[226641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdekusxmlaoqlvivtdieonnjhipfoyok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171338.144-151-169858052089532/AnsiballZ_lineinfile.py'
Oct 11 08:28:58 compute-0 sudo[226641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:28:58 compute-0 python3.9[226643]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:28:58 compute-0 sudo[226641]: pam_unix(sudo:session): session closed for user root
Oct 11 08:28:59 compute-0 ceph-mon[74313]: pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:28:59 compute-0 sudo[226793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgqiejtzlijdnrvhxbvixbffocprmgtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171339.365106-160-114687301023923/AnsiballZ_file.py'
Oct 11 08:28:59 compute-0 sudo[226793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:00 compute-0 python3.9[226795]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:29:00 compute-0 sudo[226793]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:00 compute-0 sudo[226945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uemhoabvpaeqrajgykzlkmwevvpwdlgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171340.35856-168-32639984737641/AnsiballZ_stat.py'
Oct 11 08:29:00 compute-0 sudo[226945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:01 compute-0 python3.9[226947]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:29:01 compute-0 sudo[226945]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:01 compute-0 ceph-mon[74313]: pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:01 compute-0 sudo[227023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxaczdrqrspviqgpceaddunnmoekmkwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171340.35856-168-32639984737641/AnsiballZ_file.py'
Oct 11 08:29:01 compute-0 sudo[227023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:01 compute-0 python3.9[227025]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:29:01 compute-0 sudo[227023]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:02 compute-0 sudo[227175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzyqujqaygzhftmknoijqfmjhhvfcwzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171341.8161285-168-158175572886769/AnsiballZ_stat.py'
Oct 11 08:29:02 compute-0 sudo[227175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:02 compute-0 python3.9[227177]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:29:02 compute-0 sudo[227175]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:29:02 compute-0 sudo[227253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bamybpsopuejiqvarsxavxdbgetpsrcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171341.8161285-168-158175572886769/AnsiballZ_file.py'
Oct 11 08:29:02 compute-0 sudo[227253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:03 compute-0 ceph-mon[74313]: pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:03 compute-0 python3.9[227255]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:29:03 compute-0 sudo[227253]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:03 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 11 08:29:03 compute-0 sudo[227405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqkrlmcwskzogdhveusrnsqtgguheqqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171343.3645487-191-46782069743755/AnsiballZ_file.py'
Oct 11 08:29:03 compute-0 sudo[227405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:03 compute-0 python3.9[227407]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:04 compute-0 sudo[227405]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:29:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:29:04 compute-0 sudo[227557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgmogjqguwmmrgjozwaixepuppkoftgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171344.2552614-199-226288100188651/AnsiballZ_stat.py'
Oct 11 08:29:04 compute-0 sudo[227557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:04 compute-0 python3.9[227559]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:29:04 compute-0 sudo[227557]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:05 compute-0 ceph-mon[74313]: pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:05 compute-0 sudo[227635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpcqpiefjizbwgkgtxgiliewusqkskza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171344.2552614-199-226288100188651/AnsiballZ_file.py'
Oct 11 08:29:05 compute-0 sudo[227635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:05 compute-0 python3.9[227637]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:05 compute-0 sudo[227635]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:06 compute-0 sudo[227787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxavszlivqdlqxvnwlcebeznahyhlxlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171345.718208-211-70463751182890/AnsiballZ_stat.py'
Oct 11 08:29:06 compute-0 sudo[227787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:06 compute-0 python3.9[227789]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:29:06 compute-0 sudo[227787]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:06 compute-0 sudo[227865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btbacsqqnumoqnjwlciqjdzdafqpvuhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171345.718208-211-70463751182890/AnsiballZ_file.py'
Oct 11 08:29:06 compute-0 sudo[227865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:06 compute-0 python3.9[227867]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:06 compute-0 sudo[227865]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:07 compute-0 ceph-mon[74313]: pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:07 compute-0 sudo[228028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuvaxjhgfrlynvobguoctyvfchyrcwfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171347.0876849-223-255875637759207/AnsiballZ_systemd.py'
Oct 11 08:29:07 compute-0 sudo[228028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:07 compute-0 podman[227991]: 2025-10-11 08:29:07.634294835 +0000 UTC m=+0.152952134 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:29:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:29:07 compute-0 python3.9[228034]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:29:07 compute-0 systemd[1]: Reloading.
Oct 11 08:29:07 compute-0 systemd-sysv-generator[228074]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:29:08 compute-0 systemd-rc-local-generator[228069]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:29:08 compute-0 sudo[228028]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:08 compute-0 sudo[228231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztsuorygsbszhftturfrreqkighanloi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171348.540262-231-54820449964934/AnsiballZ_stat.py'
Oct 11 08:29:08 compute-0 sudo[228231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:09 compute-0 python3.9[228233]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:29:09 compute-0 ceph-mon[74313]: pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:09 compute-0 sudo[228231]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:09 compute-0 sudo[228309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siuyprmwesuoduwxsghzlykvytoiczkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171348.540262-231-54820449964934/AnsiballZ_file.py'
Oct 11 08:29:09 compute-0 sudo[228309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:09 compute-0 python3.9[228311]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:09 compute-0 sudo[228309]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:10 compute-0 sudo[228461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdfairebqipamroacizhxkkalhnxfdvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171349.9171667-243-36144144992704/AnsiballZ_stat.py'
Oct 11 08:29:10 compute-0 sudo[228461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:10 compute-0 python3.9[228463]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:29:10 compute-0 sudo[228461]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:10 compute-0 sudo[228539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atmgjlmsoinfsthoukjrkdajggtymort ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171349.9171667-243-36144144992704/AnsiballZ_file.py'
Oct 11 08:29:10 compute-0 sudo[228539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:10 compute-0 python3.9[228541]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:10 compute-0 sudo[228539]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:11 compute-0 ceph-mon[74313]: pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:11 compute-0 sudo[228691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaeduhljiqeikgffqmivfhlzqqbryodj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171351.1889632-255-154873082894833/AnsiballZ_systemd.py'
Oct 11 08:29:11 compute-0 sudo[228691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:11 compute-0 python3.9[228693]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:29:11 compute-0 systemd[1]: Reloading.
Oct 11 08:29:12 compute-0 systemd-sysv-generator[228720]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:29:12 compute-0 systemd-rc-local-generator[228713]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:29:12 compute-0 systemd[1]: Starting Create netns directory...
Oct 11 08:29:12 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 08:29:12 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 08:29:12 compute-0 systemd[1]: Finished Create netns directory.
Oct 11 08:29:12 compute-0 sudo[228691]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:29:13 compute-0 sudo[228884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olriglwkklvlntvyyyyxbziloltpiufd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171352.7349317-265-78910205291081/AnsiballZ_file.py'
Oct 11 08:29:13 compute-0 sudo[228884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:13 compute-0 ceph-mon[74313]: pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:13 compute-0 python3.9[228886]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:29:13 compute-0 sudo[228884]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:13 compute-0 sudo[229036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnmgxpxgiwgeufphzkrajgpjgizujlst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171353.5582364-273-82616585647129/AnsiballZ_stat.py'
Oct 11 08:29:13 compute-0 sudo[229036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:14 compute-0 python3.9[229038]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:29:14 compute-0 sudo[229036]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:14 compute-0 sudo[229159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckiahtsqyupkfwgqbesmhuxsthdwojwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171353.5582364-273-82616585647129/AnsiballZ_copy.py'
Oct 11 08:29:14 compute-0 sudo[229159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:14 compute-0 python3.9[229161]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760171353.5582364-273-82616585647129/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:29:14 compute-0 sudo[229159]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:15 compute-0 ceph-mon[74313]: pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:29:15.158 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:29:15.159 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:29:15.160 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:29:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:15 compute-0 sudo[229311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khkatlxhhwsklxpomnursqxfzgttdpwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171355.3558688-290-247743439711380/AnsiballZ_file.py'
Oct 11 08:29:15 compute-0 sudo[229311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:15 compute-0 python3.9[229313]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:29:15 compute-0 sudo[229311]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:16 compute-0 ceph-mon[74313]: pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:16 compute-0 sudo[229463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmczgowruwqyzbykwphiselmzbtkbzeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171356.173322-298-258275819767205/AnsiballZ_stat.py'
Oct 11 08:29:16 compute-0 sudo[229463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:16 compute-0 python3.9[229465]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:29:16 compute-0 sudo[229463]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:17 compute-0 sudo[229586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdtmbqmvzjbeztoyjkjjojphdqwydmiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171356.173322-298-258275819767205/AnsiballZ_copy.py'
Oct 11 08:29:17 compute-0 sudo[229586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:17 compute-0 python3.9[229588]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171356.173322-298-258275819767205/.source.json _original_basename=.slzkfrxv follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:17 compute-0 sudo[229586]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:29:18 compute-0 sudo[229738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzzqlzeekamfdghtzboerlgudobjxccr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171357.6644244-313-126679487758012/AnsiballZ_file.py'
Oct 11 08:29:18 compute-0 sudo[229738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:18 compute-0 ceph-mon[74313]: pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:18 compute-0 python3.9[229740]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:18 compute-0 sudo[229738]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:18 compute-0 sudo[229890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyueoelkhogzwpklymeycjbmmmiawulp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171358.5101507-321-147337585579646/AnsiballZ_stat.py'
Oct 11 08:29:18 compute-0 sudo[229890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:19 compute-0 sudo[229890]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:19 compute-0 sudo[230024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vezmufyqxznmydchoahbwstmrmxeuyxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171358.5101507-321-147337585579646/AnsiballZ_copy.py'
Oct 11 08:29:19 compute-0 sudo[230024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:19 compute-0 podman[229987]: 2025-10-11 08:29:19.634579952 +0000 UTC m=+0.098661661 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 11 08:29:19 compute-0 sudo[230024]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:20 compute-0 ceph-mon[74313]: pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:20 compute-0 sudo[230182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmrfawfuykeuksyxvbxparwgeymyczci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171360.2380962-338-96730654207933/AnsiballZ_container_config_data.py'
Oct 11 08:29:20 compute-0 sudo[230182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:21 compute-0 python3.9[230184]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 11 08:29:21 compute-0 sudo[230182]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:21 compute-0 sudo[230334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upedznstpvvyqteovrzkaruyoesdbqvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171361.342843-347-93904403488371/AnsiballZ_container_config_hash.py'
Oct 11 08:29:21 compute-0 sudo[230334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:22 compute-0 python3.9[230336]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 08:29:22 compute-0 sudo[230334]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:22 compute-0 ceph-mon[74313]: pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:29:23 compute-0 sudo[230486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arkdktkmjvaavswixzbqastwtmggxylq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171362.52971-356-111648377669908/AnsiballZ_podman_container_info.py'
Oct 11 08:29:23 compute-0 sudo[230486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:23 compute-0 python3.9[230488]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 11 08:29:23 compute-0 sudo[230486]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:24 compute-0 ceph-mon[74313]: pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:29:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:29:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:29:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:29:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:29:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:29:24 compute-0 sudo[230665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozwcvkkdoxzxnslrnbdtzovdxvvqdpmy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760171364.2957664-369-171656400375396/AnsiballZ_edpm_container_manage.py'
Oct 11 08:29:24 compute-0 sudo[230665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:25 compute-0 python3[230667]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 08:29:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:25 compute-0 podman[230706]: 2025-10-11 08:29:25.381697899 +0000 UTC m=+0.049350082 container create 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 11 08:29:25 compute-0 podman[230706]: 2025-10-11 08:29:25.355491594 +0000 UTC m=+0.023143807 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 11 08:29:25 compute-0 python3[230667]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume 
/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 11 08:29:25 compute-0 sudo[230665]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:26 compute-0 sudo[230893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nomkajursvgytkwljadiswhrxokyieiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171365.7937808-377-53241138205698/AnsiballZ_stat.py'
Oct 11 08:29:26 compute-0 sudo[230893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:26 compute-0 ceph-mon[74313]: pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:26 compute-0 python3.9[230895]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:29:26 compute-0 sudo[230893]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:27 compute-0 sudo[231047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwajvwkyhzyaeutqpykeiukjxslfwjgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171366.709686-386-50078184841211/AnsiballZ_file.py'
Oct 11 08:29:27 compute-0 sudo[231047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:27 compute-0 python3.9[231049]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:27 compute-0 sudo[231047]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:27 compute-0 sudo[231123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awqyepuknusvklyuniavwpvzaouvzisj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171366.709686-386-50078184841211/AnsiballZ_stat.py'
Oct 11 08:29:27 compute-0 sudo[231123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:27 compute-0 python3.9[231125]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:29:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:29:27 compute-0 sudo[231123]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:28 compute-0 ceph-mon[74313]: pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:28 compute-0 sudo[231274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ritbotvkgyikstcgvteqphznuwmduope ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171367.9348116-386-45364736007413/AnsiballZ_copy.py'
Oct 11 08:29:28 compute-0 sudo[231274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:28 compute-0 python3.9[231276]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760171367.9348116-386-45364736007413/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:28 compute-0 sudo[231274]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:28 compute-0 sudo[231350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlpealrouhdcwgykxbaygdysiqggwong ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171367.9348116-386-45364736007413/AnsiballZ_systemd.py'
Oct 11 08:29:28 compute-0 sudo[231350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:29 compute-0 python3.9[231352]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 08:29:29 compute-0 systemd[1]: Reloading.
Oct 11 08:29:29 compute-0 systemd-rc-local-generator[231375]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:29:29 compute-0 systemd-sysv-generator[231381]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:29:29 compute-0 sudo[231350]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:30 compute-0 sudo[231461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thtrmylumpvphacemxkvzkyrkwbycpuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171367.9348116-386-45364736007413/AnsiballZ_systemd.py'
Oct 11 08:29:30 compute-0 sudo[231461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:30 compute-0 ceph-mon[74313]: pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:30 compute-0 python3.9[231463]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:29:30 compute-0 systemd[1]: Reloading.
Oct 11 08:29:30 compute-0 systemd-rc-local-generator[231487]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:29:30 compute-0 systemd-sysv-generator[231490]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:29:30 compute-0 systemd[1]: Starting iscsid container...
Oct 11 08:29:30 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9788305ccac62cbcce6e5451f80a6cd2ee3b7bc6a9fb94aacf69e0dc4abae6/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 11 08:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9788305ccac62cbcce6e5451f80a6cd2ee3b7bc6a9fb94aacf69e0dc4abae6/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 08:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9788305ccac62cbcce6e5451f80a6cd2ee3b7bc6a9fb94aacf69e0dc4abae6/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 08:29:31 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9.
Oct 11 08:29:31 compute-0 podman[231502]: 2025-10-11 08:29:31.015268374 +0000 UTC m=+0.175168014 container init 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 11 08:29:31 compute-0 iscsid[231517]: + sudo -E kolla_set_configs
Oct 11 08:29:31 compute-0 podman[231502]: 2025-10-11 08:29:31.052011032 +0000 UTC m=+0.211910652 container start 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3)
Oct 11 08:29:31 compute-0 podman[231502]: iscsid
Oct 11 08:29:31 compute-0 sudo[231523]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 11 08:29:31 compute-0 systemd[1]: Started iscsid container.
Oct 11 08:29:31 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 11 08:29:31 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 11 08:29:31 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 11 08:29:31 compute-0 sudo[231461]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:31 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 11 08:29:31 compute-0 systemd[231541]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 11 08:29:31 compute-0 podman[231524]: 2025-10-11 08:29:31.133252241 +0000 UTC m=+0.073114296 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid)
Oct 11 08:29:31 compute-0 systemd[1]: 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9-4be06497c94ab793.service: Main process exited, code=exited, status=1/FAILURE
Oct 11 08:29:31 compute-0 systemd[1]: 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9-4be06497c94ab793.service: Failed with result 'exit-code'.
Oct 11 08:29:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:31 compute-0 systemd[231541]: Queued start job for default target Main User Target.
Oct 11 08:29:31 compute-0 systemd[231541]: Created slice User Application Slice.
Oct 11 08:29:31 compute-0 systemd[231541]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 11 08:29:31 compute-0 systemd[231541]: Started Daily Cleanup of User's Temporary Directories.
Oct 11 08:29:31 compute-0 systemd[231541]: Reached target Paths.
Oct 11 08:29:31 compute-0 systemd[231541]: Reached target Timers.
Oct 11 08:29:31 compute-0 systemd[231541]: Starting D-Bus User Message Bus Socket...
Oct 11 08:29:31 compute-0 systemd[231541]: Starting Create User's Volatile Files and Directories...
Oct 11 08:29:31 compute-0 systemd[231541]: Listening on D-Bus User Message Bus Socket.
Oct 11 08:29:31 compute-0 systemd[231541]: Reached target Sockets.
Oct 11 08:29:31 compute-0 systemd[231541]: Finished Create User's Volatile Files and Directories.
Oct 11 08:29:31 compute-0 systemd[231541]: Reached target Basic System.
Oct 11 08:29:31 compute-0 systemd[231541]: Reached target Main User Target.
Oct 11 08:29:31 compute-0 systemd[231541]: Startup finished in 147ms.
Oct 11 08:29:31 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 11 08:29:31 compute-0 systemd[1]: Started Session c3 of User root.
Oct 11 08:29:31 compute-0 sudo[231523]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 08:29:31 compute-0 iscsid[231517]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 08:29:31 compute-0 iscsid[231517]: INFO:__main__:Validating config file
Oct 11 08:29:31 compute-0 iscsid[231517]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 08:29:31 compute-0 iscsid[231517]: INFO:__main__:Writing out command to execute
Oct 11 08:29:31 compute-0 sudo[231523]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:31 compute-0 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 11 08:29:31 compute-0 iscsid[231517]: ++ cat /run_command
Oct 11 08:29:31 compute-0 iscsid[231517]: + CMD='/usr/sbin/iscsid -f'
Oct 11 08:29:31 compute-0 iscsid[231517]: + ARGS=
Oct 11 08:29:31 compute-0 iscsid[231517]: + sudo kolla_copy_cacerts
Oct 11 08:29:31 compute-0 sudo[231613]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 11 08:29:31 compute-0 systemd[1]: Started Session c4 of User root.
Oct 11 08:29:31 compute-0 sudo[231613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 08:29:31 compute-0 sudo[231613]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:31 compute-0 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 11 08:29:31 compute-0 iscsid[231517]: + [[ ! -n '' ]]
Oct 11 08:29:31 compute-0 iscsid[231517]: + . kolla_extend_start
Oct 11 08:29:31 compute-0 iscsid[231517]: Running command: '/usr/sbin/iscsid -f'
Oct 11 08:29:31 compute-0 iscsid[231517]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 11 08:29:31 compute-0 iscsid[231517]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 11 08:29:31 compute-0 iscsid[231517]: + umask 0022
Oct 11 08:29:31 compute-0 iscsid[231517]: + exec /usr/sbin/iscsid -f
Oct 11 08:29:31 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Oct 11 08:29:31 compute-0 python3.9[231721]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:29:32 compute-0 ceph-mon[74313]: pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:32 compute-0 sudo[231871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaafabjjikrhplsnrvneitmrlgxpxtlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171372.2076166-423-84923339032781/AnsiballZ_file.py'
Oct 11 08:29:32 compute-0 sudo[231871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:32 compute-0 python3.9[231873]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:32 compute-0 sudo[231871]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:29:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:33 compute-0 sudo[232023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwoqedbxlbcswtlkivyibmfsberngdne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171373.3123674-434-70448126858007/AnsiballZ_service_facts.py'
Oct 11 08:29:33 compute-0 sudo[232023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:33 compute-0 python3.9[232025]: ansible-ansible.builtin.service_facts Invoked
Oct 11 08:29:33 compute-0 network[232042]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 08:29:33 compute-0 network[232043]: 'network-scripts' will be removed from distribution in near future.
Oct 11 08:29:33 compute-0 network[232044]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 08:29:34 compute-0 ceph-mon[74313]: pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:36 compute-0 ceph-mon[74313]: pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:29:37 compute-0 podman[232151]: 2025-10-11 08:29:37.845657975 +0000 UTC m=+0.155065285 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:29:38 compute-0 sudo[232023]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:38 compute-0 ceph-mon[74313]: pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:38 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 11 08:29:38 compute-0 sudo[232345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsnnvseiorrtyshrbprubfegputsyhbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171378.608867-444-170060942648456/AnsiballZ_file.py'
Oct 11 08:29:38 compute-0 sudo[232345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:39 compute-0 python3.9[232347]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 08:29:39 compute-0 sudo[232345]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:39 compute-0 unix_chkpwd[232369]: password check failed for user (root)
Oct 11 08:29:39 compute-0 sshd-session[232239]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.251.161.75  user=root
Oct 11 08:29:39 compute-0 sudo[232498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkioerxugvpuxysthwbvxfrqfluyuxuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171379.4830284-452-86393308647497/AnsiballZ_modprobe.py'
Oct 11 08:29:39 compute-0 sudo[232498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:40 compute-0 python3.9[232500]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 11 08:29:40 compute-0 sudo[232498]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:40 compute-0 ceph-mon[74313]: pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:40 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 11 08:29:40 compute-0 sudo[232626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:29:40 compute-0 sudo[232626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:40 compute-0 sudo[232626]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:40 compute-0 sudo[232686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kykagcartevjowdoahnrnbskbjgysnre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171380.5323439-460-93739374489177/AnsiballZ_stat.py'
Oct 11 08:29:40 compute-0 sudo[232686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:40 compute-0 sudo[232677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:29:41 compute-0 sudo[232677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:41 compute-0 sudo[232677]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:41 compute-0 sudo[232708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:29:41 compute-0 sudo[232708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:41 compute-0 sudo[232708]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:41 compute-0 sudo[232733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:29:41 compute-0 python3.9[232705]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:29:41 compute-0 sudo[232733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:41 compute-0 sudo[232686]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:41 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 11 08:29:41 compute-0 systemd[231541]: Activating special unit Exit the Session...
Oct 11 08:29:41 compute-0 systemd[231541]: Stopped target Main User Target.
Oct 11 08:29:41 compute-0 systemd[231541]: Stopped target Basic System.
Oct 11 08:29:41 compute-0 systemd[231541]: Stopped target Paths.
Oct 11 08:29:41 compute-0 systemd[231541]: Stopped target Sockets.
Oct 11 08:29:41 compute-0 systemd[231541]: Stopped target Timers.
Oct 11 08:29:41 compute-0 systemd[231541]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 11 08:29:41 compute-0 systemd[231541]: Closed D-Bus User Message Bus Socket.
Oct 11 08:29:41 compute-0 systemd[231541]: Stopped Create User's Volatile Files and Directories.
Oct 11 08:29:41 compute-0 systemd[231541]: Removed slice User Application Slice.
Oct 11 08:29:41 compute-0 systemd[231541]: Reached target Shutdown.
Oct 11 08:29:41 compute-0 systemd[231541]: Finished Exit the Session.
Oct 11 08:29:41 compute-0 systemd[231541]: Reached target Exit the Session.
Oct 11 08:29:41 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 11 08:29:41 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 11 08:29:41 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 11 08:29:41 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 11 08:29:41 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 11 08:29:41 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 11 08:29:41 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 11 08:29:41 compute-0 sudo[232906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjvnpmmtpkcojgjvobdvzofffhhqnhak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171380.5323439-460-93739374489177/AnsiballZ_copy.py'
Oct 11 08:29:41 compute-0 sudo[232906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:41 compute-0 sudo[232733]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:29:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:29:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:29:41 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:29:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:29:41 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:29:41 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 0b955c97-831a-492e-a779-e335ff79ab59 does not exist
Oct 11 08:29:41 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a44214e7-f78d-4c0b-96fe-546dc1842407 does not exist
Oct 11 08:29:41 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 95c485c7-4c61-404d-88f5-feb83187e929 does not exist
Oct 11 08:29:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:29:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:29:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:29:41 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:29:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:29:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:29:41 compute-0 sudo[232913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:29:41 compute-0 sudo[232913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:41 compute-0 sudo[232913]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:41 compute-0 sudo[232938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:29:41 compute-0 sudo[232938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:41 compute-0 python3.9[232912]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171380.5323439-460-93739374489177/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:41 compute-0 sudo[232938]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:41 compute-0 sudo[232906]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:41 compute-0 sshd-session[232239]: Failed password for root from 43.251.161.75 port 51948 ssh2
Oct 11 08:29:42 compute-0 sudo[232963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:29:42 compute-0 sudo[232963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:42 compute-0 sudo[232963]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:42 compute-0 sudo[233012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:29:42 compute-0 sudo[233012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:42 compute-0 ceph-mon[74313]: pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:42 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:29:42 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:29:42 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:29:42 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:29:42 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:29:42 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:29:42 compute-0 podman[233164]: 2025-10-11 08:29:42.564045364 +0000 UTC m=+0.061351707 container create 278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_engelbart, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:29:42 compute-0 systemd[1]: Started libpod-conmon-278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8.scope.
Oct 11 08:29:42 compute-0 podman[233164]: 2025-10-11 08:29:42.531490897 +0000 UTC m=+0.028797290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:29:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:29:42 compute-0 sudo[233220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dodgjrmbbjqrxrchorzdiqzuvcobmtqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171382.2539191-476-269178909613079/AnsiballZ_lineinfile.py'
Oct 11 08:29:42 compute-0 sudo[233220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:42 compute-0 podman[233164]: 2025-10-11 08:29:42.66776323 +0000 UTC m=+0.165069573 container init 278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_engelbart, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 08:29:42 compute-0 podman[233164]: 2025-10-11 08:29:42.679416235 +0000 UTC m=+0.176722578 container start 278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 11 08:29:42 compute-0 podman[233164]: 2025-10-11 08:29:42.684146571 +0000 UTC m=+0.181452924 container attach 278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_engelbart, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 08:29:42 compute-0 pensive_engelbart[233218]: 167 167
Oct 11 08:29:42 compute-0 systemd[1]: libpod-278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8.scope: Deactivated successfully.
Oct 11 08:29:42 compute-0 podman[233164]: 2025-10-11 08:29:42.69104524 +0000 UTC m=+0.188351553 container died 278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:29:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1468b79d140f33173562e0d66fd52f7769324159ebdae7f742e0928c27e18ec8-merged.mount: Deactivated successfully.
Oct 11 08:29:42 compute-0 podman[233164]: 2025-10-11 08:29:42.747576018 +0000 UTC m=+0.244882361 container remove 278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:29:42 compute-0 systemd[1]: libpod-conmon-278d9e4454914fdcf28912d843bd328fd8ae296691608c36756cb93e061f2bc8.scope: Deactivated successfully.
Oct 11 08:29:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.849899) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171382849932, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1232, "num_deletes": 507, "total_data_size": 1391353, "memory_usage": 1425136, "flush_reason": "Manual Compaction"}
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171382861524, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1377887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13561, "largest_seqno": 14792, "table_properties": {"data_size": 1372429, "index_size": 2406, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14058, "raw_average_key_size": 17, "raw_value_size": 1359514, "raw_average_value_size": 1731, "num_data_blocks": 110, "num_entries": 785, "num_filter_entries": 785, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760171288, "oldest_key_time": 1760171288, "file_creation_time": 1760171382, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 11664 microseconds, and 4414 cpu microseconds.
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.861563) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1377887 bytes OK
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.861579) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.863239) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.863251) EVENT_LOG_v1 {"time_micros": 1760171382863248, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.863266) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1384670, prev total WAL file size 1384670, number of live WAL files 2.
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.863867) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1345KB)], [32(7403KB)]
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171382863950, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 8958970, "oldest_snapshot_seqno": -1}
Oct 11 08:29:42 compute-0 python3.9[233223]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:42 compute-0 sudo[233220]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3773 keys, 7002265 bytes, temperature: kUnknown
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171382915775, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7002265, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6975434, "index_size": 16290, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 92581, "raw_average_key_size": 24, "raw_value_size": 6905427, "raw_average_value_size": 1830, "num_data_blocks": 689, "num_entries": 3773, "num_filter_entries": 3773, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760171382, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.916140) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7002265 bytes
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.919993) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.5 rd, 134.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.2 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(11.6) write-amplify(5.1) OK, records in: 4800, records dropped: 1027 output_compression: NoCompression
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.920084) EVENT_LOG_v1 {"time_micros": 1760171382920030, "job": 14, "event": "compaction_finished", "compaction_time_micros": 51929, "compaction_time_cpu_micros": 33363, "output_level": 6, "num_output_files": 1, "total_output_size": 7002265, "num_input_records": 4800, "num_output_records": 3773, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171382920815, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171382924261, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.863703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.924330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.924340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.924344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.924348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:29:42 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:29:42.924352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:29:42 compute-0 podman[233246]: 2025-10-11 08:29:42.984422486 +0000 UTC m=+0.074642440 container create afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_brattain, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 08:29:43 compute-0 systemd[1]: Started libpod-conmon-afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7.scope.
Oct 11 08:29:43 compute-0 podman[233246]: 2025-10-11 08:29:42.954923717 +0000 UTC m=+0.045143741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:29:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59abc0f4a689f19ffb2bae00ee6d1f4e57fd4fc79ad79aaceb5bbd3e16e51b9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59abc0f4a689f19ffb2bae00ee6d1f4e57fd4fc79ad79aaceb5bbd3e16e51b9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59abc0f4a689f19ffb2bae00ee6d1f4e57fd4fc79ad79aaceb5bbd3e16e51b9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59abc0f4a689f19ffb2bae00ee6d1f4e57fd4fc79ad79aaceb5bbd3e16e51b9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59abc0f4a689f19ffb2bae00ee6d1f4e57fd4fc79ad79aaceb5bbd3e16e51b9e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:29:43 compute-0 podman[233246]: 2025-10-11 08:29:43.099558861 +0000 UTC m=+0.189778855 container init afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_brattain, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 08:29:43 compute-0 podman[233246]: 2025-10-11 08:29:43.114198612 +0000 UTC m=+0.204418546 container start afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:29:43 compute-0 podman[233246]: 2025-10-11 08:29:43.117700863 +0000 UTC m=+0.207920897 container attach afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:29:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:43 compute-0 sudo[233417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdfmyzixoiibcwzhouabqupqykfywtlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171383.1500676-484-70491934968812/AnsiballZ_systemd.py'
Oct 11 08:29:43 compute-0 sudo[233417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:43 compute-0 sshd-session[232239]: Received disconnect from 43.251.161.75 port 51948:11:  [preauth]
Oct 11 08:29:43 compute-0 sshd-session[232239]: Disconnected from authenticating user root 43.251.161.75 port 51948 [preauth]
Oct 11 08:29:43 compute-0 python3.9[233419]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 08:29:44 compute-0 relaxed_brattain[233287]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:29:44 compute-0 relaxed_brattain[233287]: --> relative data size: 1.0
Oct 11 08:29:44 compute-0 relaxed_brattain[233287]: --> All data devices are unavailable
Oct 11 08:29:44 compute-0 systemd[1]: libpod-afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7.scope: Deactivated successfully.
Oct 11 08:29:44 compute-0 podman[233246]: 2025-10-11 08:29:44.291711501 +0000 UTC m=+1.381931465 container died afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_brattain, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:29:44 compute-0 systemd[1]: libpod-afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7.scope: Consumed 1.145s CPU time.
Oct 11 08:29:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-59abc0f4a689f19ffb2bae00ee6d1f4e57fd4fc79ad79aaceb5bbd3e16e51b9e-merged.mount: Deactivated successfully.
Oct 11 08:29:44 compute-0 podman[233246]: 2025-10-11 08:29:44.374658169 +0000 UTC m=+1.464878123 container remove afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_brattain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:29:44 compute-0 systemd[1]: libpod-conmon-afc86255b0163812b08d5c6befa358ffb605bd24ea321b321c4c96d0d46efbe7.scope: Deactivated successfully.
Oct 11 08:29:44 compute-0 sudo[233012]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:44 compute-0 sudo[233458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:29:44 compute-0 sudo[233458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:44 compute-0 sudo[233458]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:44 compute-0 sudo[233483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:29:44 compute-0 sudo[233483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:44 compute-0 sudo[233483]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:44 compute-0 sudo[233508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:29:44 compute-0 sudo[233508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:44 compute-0 sudo[233508]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:44 compute-0 sudo[233533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:29:44 compute-0 sudo[233533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:44 compute-0 ceph-mon[74313]: pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:44 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 11 08:29:44 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 11 08:29:44 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 11 08:29:44 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 11 08:29:44 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 11 08:29:45 compute-0 sudo[233417]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:45 compute-0 podman[233625]: 2025-10-11 08:29:45.252727449 +0000 UTC m=+0.067163785 container create c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:29:45 compute-0 systemd[1]: Started libpod-conmon-c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646.scope.
Oct 11 08:29:45 compute-0 podman[233625]: 2025-10-11 08:29:45.215938769 +0000 UTC m=+0.030375195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:29:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:29:45 compute-0 podman[233625]: 2025-10-11 08:29:45.365812104 +0000 UTC m=+0.180248480 container init c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 08:29:45 compute-0 podman[233625]: 2025-10-11 08:29:45.378542821 +0000 UTC m=+0.192979167 container start c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:29:45 compute-0 podman[233625]: 2025-10-11 08:29:45.382666799 +0000 UTC m=+0.197103135 container attach c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 08:29:45 compute-0 eloquent_mahavira[233663]: 167 167
Oct 11 08:29:45 compute-0 podman[233625]: 2025-10-11 08:29:45.387868759 +0000 UTC m=+0.202305125 container died c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:29:45 compute-0 systemd[1]: libpod-c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646.scope: Deactivated successfully.
Oct 11 08:29:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1ff33193960472577a078072ff956f2c9c3fe0fc050f98616ca01d6b619c7fb-merged.mount: Deactivated successfully.
Oct 11 08:29:45 compute-0 podman[233625]: 2025-10-11 08:29:45.442674827 +0000 UTC m=+0.257111163 container remove c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 08:29:45 compute-0 systemd[1]: libpod-conmon-c4fd2b25dd6f38e3f61aa8f61c6d8354ac7733e3d132c902f14ece5ba8dfc646.scope: Deactivated successfully.
Oct 11 08:29:45 compute-0 podman[233764]: 2025-10-11 08:29:45.644824997 +0000 UTC m=+0.042683330 container create ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 08:29:45 compute-0 systemd[1]: Started libpod-conmon-ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8.scope.
Oct 11 08:29:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:29:45 compute-0 podman[233764]: 2025-10-11 08:29:45.623702439 +0000 UTC m=+0.021560762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:29:45 compute-0 sudo[233806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtwizlwpyptpgbhnuxxjgynwbfxyaeei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171385.3238878-492-109160282116941/AnsiballZ_file.py'
Oct 11 08:29:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ffa360fd7cf73d8bef30a880116de5d4cf6d66b6112a36c21bc822136d6505/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:29:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ffa360fd7cf73d8bef30a880116de5d4cf6d66b6112a36c21bc822136d6505/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:29:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ffa360fd7cf73d8bef30a880116de5d4cf6d66b6112a36c21bc822136d6505/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:29:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ffa360fd7cf73d8bef30a880116de5d4cf6d66b6112a36c21bc822136d6505/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:29:45 compute-0 sudo[233806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:45 compute-0 podman[233764]: 2025-10-11 08:29:45.741358416 +0000 UTC m=+0.139216799 container init ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mayer, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:29:45 compute-0 podman[233764]: 2025-10-11 08:29:45.754949737 +0000 UTC m=+0.152808090 container start ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mayer, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:29:45 compute-0 podman[233764]: 2025-10-11 08:29:45.759256811 +0000 UTC m=+0.157115154 container attach ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 08:29:45 compute-0 python3.9[233811]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:29:45 compute-0 sudo[233806]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:46 compute-0 fervent_mayer[233807]: {
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:     "0": [
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:         {
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "devices": [
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "/dev/loop3"
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             ],
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "lv_name": "ceph_lv0",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "lv_size": "21470642176",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "name": "ceph_lv0",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "tags": {
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.cluster_name": "ceph",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.crush_device_class": "",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.encrypted": "0",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.osd_id": "0",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.type": "block",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.vdo": "0"
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             },
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "type": "block",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "vg_name": "ceph_vg0"
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:         }
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:     ],
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:     "1": [
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:         {
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "devices": [
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "/dev/loop4"
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             ],
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "lv_name": "ceph_lv1",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "lv_size": "21470642176",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "name": "ceph_lv1",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "tags": {
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.cluster_name": "ceph",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.crush_device_class": "",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.encrypted": "0",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.osd_id": "1",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.type": "block",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.vdo": "0"
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             },
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "type": "block",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "vg_name": "ceph_vg1"
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:         }
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:     ],
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:     "2": [
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:         {
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "devices": [
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "/dev/loop5"
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             ],
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "lv_name": "ceph_lv2",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "lv_size": "21470642176",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "name": "ceph_lv2",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "tags": {
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.cluster_name": "ceph",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.crush_device_class": "",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.encrypted": "0",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.osd_id": "2",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.type": "block",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:                 "ceph.vdo": "0"
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             },
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "type": "block",
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:             "vg_name": "ceph_vg2"
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:         }
Oct 11 08:29:46 compute-0 fervent_mayer[233807]:     ]
Oct 11 08:29:46 compute-0 fervent_mayer[233807]: }
Oct 11 08:29:46 compute-0 systemd[1]: libpod-ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8.scope: Deactivated successfully.
Oct 11 08:29:46 compute-0 podman[233764]: 2025-10-11 08:29:46.59492585 +0000 UTC m=+0.992784183 container died ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mayer, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 08:29:46 compute-0 sudo[233973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkbwcgrscwwhkwodqhfgkhtcngxlwbgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171386.2517848-501-214360037993206/AnsiballZ_stat.py'
Oct 11 08:29:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0ffa360fd7cf73d8bef30a880116de5d4cf6d66b6112a36c21bc822136d6505-merged.mount: Deactivated successfully.
Oct 11 08:29:46 compute-0 sudo[233973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:46 compute-0 podman[233764]: 2025-10-11 08:29:46.660145997 +0000 UTC m=+1.058004300 container remove ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 08:29:46 compute-0 systemd[1]: libpod-conmon-ae50f18c28048935811f306b5b4fb44172cf275072c0996aac58a5d6842e5cb8.scope: Deactivated successfully.
Oct 11 08:29:46 compute-0 sudo[233533]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:46 compute-0 sudo[233983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:29:46 compute-0 sudo[233983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:46 compute-0 sudo[233983]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:46 compute-0 python3.9[233980]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:29:46 compute-0 sudo[233973]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:46 compute-0 ceph-mon[74313]: pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:46 compute-0 sudo[234008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:29:46 compute-0 sudo[234008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:46 compute-0 sudo[234008]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:46 compute-0 sudo[234038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:29:46 compute-0 sudo[234038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:46 compute-0 sudo[234038]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:46 compute-0 sudo[234082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:29:46 compute-0 sudo[234082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:47 compute-0 podman[234221]: 2025-10-11 08:29:47.416142392 +0000 UTC m=+0.077358828 container create 85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 08:29:47 compute-0 systemd[1]: Started libpod-conmon-85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7.scope.
Oct 11 08:29:47 compute-0 podman[234221]: 2025-10-11 08:29:47.387026434 +0000 UTC m=+0.048242920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:29:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:29:47 compute-0 podman[234221]: 2025-10-11 08:29:47.530099612 +0000 UTC m=+0.191316098 container init 85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sanderson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 08:29:47 compute-0 sudo[234291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwkywxombzvhxgbjbfnmymedwwyamrbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171387.1321905-510-5673466335428/AnsiballZ_stat.py'
Oct 11 08:29:47 compute-0 sudo[234291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:47 compute-0 podman[234221]: 2025-10-11 08:29:47.542637873 +0000 UTC m=+0.203854279 container start 85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 08:29:47 compute-0 podman[234221]: 2025-10-11 08:29:47.546203475 +0000 UTC m=+0.207419961 container attach 85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 08:29:47 compute-0 objective_sanderson[234273]: 167 167
Oct 11 08:29:47 compute-0 systemd[1]: libpod-85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7.scope: Deactivated successfully.
Oct 11 08:29:47 compute-0 podman[234221]: 2025-10-11 08:29:47.553038562 +0000 UTC m=+0.214254988 container died 85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:29:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-97352ff08d8b3f3c59de4e620c3c2ede678133dc2844fdcfda4c01a59e7ce6ba-merged.mount: Deactivated successfully.
Oct 11 08:29:47 compute-0 podman[234221]: 2025-10-11 08:29:47.601876008 +0000 UTC m=+0.263092434 container remove 85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sanderson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:29:47 compute-0 systemd[1]: libpod-conmon-85c5beecfccac8523beef2f3f6004b944653bc55e42fa68a2c09a6af8ca4d2d7.scope: Deactivated successfully.
Oct 11 08:29:47 compute-0 python3.9[234294]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:29:47 compute-0 sudo[234291]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:29:47 compute-0 podman[234315]: 2025-10-11 08:29:47.855311444 +0000 UTC m=+0.060355628 container create 2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:29:47 compute-0 systemd[1]: Started libpod-conmon-2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c.scope.
Oct 11 08:29:47 compute-0 podman[234315]: 2025-10-11 08:29:47.82494727 +0000 UTC m=+0.029991504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:29:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de79593851794d9c05c24a035e8527f8c7b62b0fca22c59c2cef2bf8041102b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de79593851794d9c05c24a035e8527f8c7b62b0fca22c59c2cef2bf8041102b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de79593851794d9c05c24a035e8527f8c7b62b0fca22c59c2cef2bf8041102b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de79593851794d9c05c24a035e8527f8c7b62b0fca22c59c2cef2bf8041102b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:29:47 compute-0 podman[234315]: 2025-10-11 08:29:47.970152321 +0000 UTC m=+0.175196535 container init 2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 08:29:47 compute-0 podman[234315]: 2025-10-11 08:29:47.985493462 +0000 UTC m=+0.190537646 container start 2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:29:47 compute-0 podman[234315]: 2025-10-11 08:29:47.989556039 +0000 UTC m=+0.194600273 container attach 2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 08:29:48 compute-0 sudo[234485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beonxssciifmhbakzfjpsvmpaavzbvpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171388.0173821-518-44476485314969/AnsiballZ_stat.py'
Oct 11 08:29:48 compute-0 sudo[234485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:48 compute-0 python3.9[234487]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:29:48 compute-0 sudo[234485]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:48 compute-0 ceph-mon[74313]: pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:49 compute-0 infallible_carson[234355]: {
Oct 11 08:29:49 compute-0 infallible_carson[234355]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:29:49 compute-0 infallible_carson[234355]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:29:49 compute-0 infallible_carson[234355]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:29:49 compute-0 infallible_carson[234355]:         "osd_id": 2,
Oct 11 08:29:49 compute-0 infallible_carson[234355]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:29:49 compute-0 infallible_carson[234355]:         "type": "bluestore"
Oct 11 08:29:49 compute-0 infallible_carson[234355]:     },
Oct 11 08:29:49 compute-0 infallible_carson[234355]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:29:49 compute-0 infallible_carson[234355]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:29:49 compute-0 infallible_carson[234355]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:29:49 compute-0 infallible_carson[234355]:         "osd_id": 0,
Oct 11 08:29:49 compute-0 infallible_carson[234355]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:29:49 compute-0 infallible_carson[234355]:         "type": "bluestore"
Oct 11 08:29:49 compute-0 infallible_carson[234355]:     },
Oct 11 08:29:49 compute-0 infallible_carson[234355]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:29:49 compute-0 infallible_carson[234355]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:29:49 compute-0 infallible_carson[234355]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:29:49 compute-0 infallible_carson[234355]:         "osd_id": 1,
Oct 11 08:29:49 compute-0 infallible_carson[234355]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:29:49 compute-0 infallible_carson[234355]:         "type": "bluestore"
Oct 11 08:29:49 compute-0 infallible_carson[234355]:     }
Oct 11 08:29:49 compute-0 infallible_carson[234355]: }
Oct 11 08:29:49 compute-0 systemd[1]: libpod-2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c.scope: Deactivated successfully.
Oct 11 08:29:49 compute-0 systemd[1]: libpod-2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c.scope: Consumed 1.093s CPU time.
Oct 11 08:29:49 compute-0 podman[234315]: 2025-10-11 08:29:49.068917454 +0000 UTC m=+1.273961658 container died 2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 08:29:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-0de79593851794d9c05c24a035e8527f8c7b62b0fca22c59c2cef2bf8041102b-merged.mount: Deactivated successfully.
Oct 11 08:29:49 compute-0 podman[234315]: 2025-10-11 08:29:49.147185147 +0000 UTC m=+1.352229301 container remove 2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct 11 08:29:49 compute-0 systemd[1]: libpod-conmon-2c26dcea805e1bd1744342bb6c3a7c5ce3f87c95ced4e6e22a5f5520e7b6135c.scope: Deactivated successfully.
Oct 11 08:29:49 compute-0 sudo[234647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znmdlnepruvufqytakdhlkngyfjrmggs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171388.0173821-518-44476485314969/AnsiballZ_copy.py'
Oct 11 08:29:49 compute-0 sudo[234647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:49 compute-0 sudo[234082]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:29:49 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:29:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:29:49 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:29:49 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 1c9f7a63-6e8b-445b-8213-00a092a77280 does not exist
Oct 11 08:29:49 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 6099151d-d993-4b2b-9875-fdf1eafbc55a does not exist
Oct 11 08:29:49 compute-0 sudo[234650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:29:49 compute-0 sudo[234650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:49 compute-0 sudo[234650]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:49 compute-0 sudo[234675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:29:49 compute-0 sudo[234675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:29:49 compute-0 sudo[234675]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:49 compute-0 python3.9[234649]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171388.0173821-518-44476485314969/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:49 compute-0 sudo[234647]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:49 compute-0 podman[234755]: 2025-10-11 08:29:49.791991521 +0000 UTC m=+0.088431457 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:29:50 compute-0 sudo[234869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzjdnbmwgskgxdpanvkmoebajgfobmsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171389.6305451-533-131478040565926/AnsiballZ_command.py'
Oct 11 08:29:50 compute-0 sudo[234869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:50 compute-0 ceph-mon[74313]: pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:29:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:29:50 compute-0 python3.9[234871]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:29:50 compute-0 sudo[234869]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:51 compute-0 sudo[235022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcqeyeuqiawfmpokcxpnddkqdrxityds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171390.6595926-541-93362853637390/AnsiballZ_lineinfile.py'
Oct 11 08:29:51 compute-0 sudo[235022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:51 compute-0 python3.9[235024]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:51 compute-0 sudo[235022]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:52 compute-0 sudo[235174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vckcdwyrhrrmyvyryikeihczszboqtke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171391.5622785-549-108673124612211/AnsiballZ_replace.py'
Oct 11 08:29:52 compute-0 sudo[235174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:52 compute-0 ceph-mon[74313]: pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:52 compute-0 python3.9[235176]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:52 compute-0 sudo[235174]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:52 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 11 08:29:52 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 11 08:29:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:29:52 compute-0 sudo[235328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbovrcpvxcvzptaceglmkzvzkaxeqkfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171392.5571053-557-258832221239262/AnsiballZ_replace.py'
Oct 11 08:29:52 compute-0 sudo[235328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:53 compute-0 python3.9[235330]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:53 compute-0 sudo[235328]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:53 compute-0 sudo[235480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elfpgkrwquvrpvsygvdqouthtwprcmvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171393.4498024-566-227311017855462/AnsiballZ_lineinfile.py'
Oct 11 08:29:53 compute-0 sudo[235480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:54 compute-0 python3.9[235482]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:54 compute-0 sudo[235480]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:54 compute-0 ceph-mon[74313]: pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:54 compute-0 sudo[235632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbrmieyxspajtlbxcghrscnvpkojhaox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171394.279078-566-35295712956160/AnsiballZ_lineinfile.py'
Oct 11 08:29:54 compute-0 sudo[235632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:29:54
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'backups']
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:29:54 compute-0 python3.9[235634]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:54 compute-0 sudo[235632]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:29:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:29:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:55 compute-0 sudo[235784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhsrrzqsbxxsjjxqhghwmxecznwdemju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171395.0825026-566-130726793234571/AnsiballZ_lineinfile.py'
Oct 11 08:29:55 compute-0 sudo[235784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:55 compute-0 python3.9[235786]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:55 compute-0 sudo[235784]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:56 compute-0 ceph-mon[74313]: pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:56 compute-0 sudo[235936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bufnfwahcojhvmloqlwjwzvmypblfmbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171395.9210682-566-51339673492809/AnsiballZ_lineinfile.py'
Oct 11 08:29:56 compute-0 sudo[235936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:56 compute-0 python3.9[235938]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:56 compute-0 sudo[235936]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:57 compute-0 sudo[236088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljgiteczsrbsovcmbtrdyrlfjiuhoslc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171396.8843951-595-2596863413955/AnsiballZ_stat.py'
Oct 11 08:29:57 compute-0 sudo[236088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:57 compute-0 python3.9[236090]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:29:57 compute-0 sudo[236088]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:29:58 compute-0 sudo[236242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsloygnhxkzbnzppvovqagzeajdjatjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171397.6903224-603-213940673675419/AnsiballZ_file.py'
Oct 11 08:29:58 compute-0 sudo[236242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:58 compute-0 python3.9[236244]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:29:58 compute-0 ceph-mon[74313]: pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:58 compute-0 sudo[236242]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:58 compute-0 sudo[236394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aipjyebsztihxotafjkokftenihnrwbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171398.5635839-612-266515025555879/AnsiballZ_file.py'
Oct 11 08:29:58 compute-0 sudo[236394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:59 compute-0 python3.9[236396]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:29:59 compute-0 sudo[236394]: pam_unix(sudo:session): session closed for user root
Oct 11 08:29:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:29:59 compute-0 sudo[236546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iecpcsokecygndmvhjvnnlhmcoebytay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171399.3794353-620-707838538379/AnsiballZ_stat.py'
Oct 11 08:29:59 compute-0 sudo[236546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:29:59 compute-0 python3.9[236548]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:29:59 compute-0 sudo[236546]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:00 compute-0 ceph-mon[74313]: pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:00 compute-0 sudo[236624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzodcumbldhmimyuyjyrsikqluilitpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171399.3794353-620-707838538379/AnsiballZ_file.py'
Oct 11 08:30:00 compute-0 sudo[236624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:00 compute-0 python3.9[236626]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:30:00 compute-0 sudo[236624]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:01 compute-0 sudo[236776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slajujadopxtqimudbeabzunzxulskst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171400.7223158-620-73532748506841/AnsiballZ_stat.py'
Oct 11 08:30:01 compute-0 sudo[236776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:01 compute-0 podman[236779]: 2025-10-11 08:30:01.293884275 +0000 UTC m=+0.078110932 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:30:01 compute-0 python3.9[236778]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:30:01 compute-0 sudo[236776]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:01 compute-0 sudo[236873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cievtqqzhxbhtaoakpdnscekjhcwrcjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171400.7223158-620-73532748506841/AnsiballZ_file.py'
Oct 11 08:30:01 compute-0 sudo[236873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:01 compute-0 python3.9[236875]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:30:01 compute-0 sudo[236873]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:02 compute-0 ceph-mon[74313]: pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:02 compute-0 sudo[237025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmvgxmbaimgwtmqvxaqqhxahyqlfqxrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171402.155274-643-48535217951595/AnsiballZ_file.py'
Oct 11 08:30:02 compute-0 sudo[237025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:02 compute-0 python3.9[237027]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:30:02 compute-0 sudo[237025]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:30:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:03 compute-0 sudo[237177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lllcstdfqiynrpevyvxpvpuoiodehdbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171403.015132-651-185079206983899/AnsiballZ_stat.py'
Oct 11 08:30:03 compute-0 sudo[237177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:03 compute-0 python3.9[237179]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:30:03 compute-0 sudo[237177]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:04 compute-0 sudo[237255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ricrkwfhccfrutzgqiwvdjdqvmssppzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171403.015132-651-185079206983899/AnsiballZ_file.py'
Oct 11 08:30:04 compute-0 sudo[237255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:30:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:30:04 compute-0 python3.9[237257]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:30:04 compute-0 sudo[237255]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:04 compute-0 ceph-mon[74313]: pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:04 compute-0 sudo[237407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viufwqbnsxclkxakcgclhzouegdogcur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171404.4735248-663-46351122092112/AnsiballZ_stat.py'
Oct 11 08:30:04 compute-0 sudo[237407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:05 compute-0 python3.9[237409]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:30:05 compute-0 sudo[237407]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:05 compute-0 sudo[237485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rafwnkenotolnjsdfphdhwetmjdvinsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171404.4735248-663-46351122092112/AnsiballZ_file.py'
Oct 11 08:30:05 compute-0 sudo[237485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:05 compute-0 python3.9[237487]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:30:05 compute-0 sudo[237485]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:06 compute-0 ceph-mon[74313]: pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:06 compute-0 sudo[237637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csbrljwdeoluwprmcgfzfjolvhiyjyfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171405.8624358-675-81811336180901/AnsiballZ_systemd.py'
Oct 11 08:30:06 compute-0 sudo[237637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:06 compute-0 python3.9[237639]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:30:06 compute-0 systemd[1]: Reloading.
Oct 11 08:30:06 compute-0 systemd-sysv-generator[237668]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:30:06 compute-0 systemd-rc-local-generator[237665]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:30:07 compute-0 sudo[237637]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 08:30:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3311 writes, 14K keys, 3311 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3311 writes, 3311 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1288 writes, 5840 keys, 1288 commit groups, 1.0 writes per commit group, ingest: 8.52 MB, 0.01 MB/s
                                           Interval WAL: 1288 writes, 1288 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     63.7      0.24              0.06         7    0.034       0      0       0.0       0.0
                                             L6      1/0    6.68 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7    165.9    136.9      0.30              0.16         6    0.049     24K   3202       0.0       0.0
                                            Sum      1/0    6.68 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.7     91.7    104.2      0.54              0.22        13    0.041     24K   3202       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9     91.0     91.5      0.38              0.12         8    0.047     17K   2470       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    165.9    136.9      0.30              0.16         6    0.049     24K   3202       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     64.6      0.24              0.06         6    0.039       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.015, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.05 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.5 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 308.00 MB usage: 1.56 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(99,1.35 MB,0.43816%) FilterBlock(14,75.42 KB,0.0239137%) IndexBlock(14,144.80 KB,0.0459101%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 08:30:07 compute-0 sudo[237825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhhxcxiebafhcxclfaqxypyitfnvpdrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171407.3081741-683-170146719598663/AnsiballZ_stat.py'
Oct 11 08:30:07 compute-0 sudo[237825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:30:07 compute-0 python3.9[237827]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:30:07 compute-0 sudo[237825]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:08 compute-0 sudo[237916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqtvrlovxseaqfbyzzfrlteygracqxej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171407.3081741-683-170146719598663/AnsiballZ_file.py'
Oct 11 08:30:08 compute-0 sudo[237916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:08 compute-0 ceph-mon[74313]: pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:08 compute-0 podman[237877]: 2025-10-11 08:30:08.301392394 +0000 UTC m=+0.149602947 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:30:08 compute-0 python3.9[237922]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:30:08 compute-0 sudo[237916]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:09 compute-0 sudo[238081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwuajyjujedyctpjwdkxvkswjdjndxgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171408.716912-695-27553308274625/AnsiballZ_stat.py'
Oct 11 08:30:09 compute-0 sudo[238081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:09 compute-0 python3.9[238083]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:30:09 compute-0 sudo[238081]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:09 compute-0 sudo[238159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eusodbeeogsxxrgzdampcvffoxcnozkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171408.716912-695-27553308274625/AnsiballZ_file.py'
Oct 11 08:30:09 compute-0 sudo[238159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:09 compute-0 python3.9[238161]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:30:09 compute-0 sudo[238159]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:10 compute-0 ceph-mon[74313]: pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:10 compute-0 sudo[238311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxhvxxixomzzmvztecbijzjndswkxwfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171410.1214747-707-155526997406142/AnsiballZ_systemd.py'
Oct 11 08:30:10 compute-0 sudo[238311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:10 compute-0 python3.9[238313]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:30:10 compute-0 systemd[1]: Reloading.
Oct 11 08:30:11 compute-0 systemd-sysv-generator[238344]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:30:11 compute-0 systemd-rc-local-generator[238339]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:30:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:11 compute-0 systemd[1]: Starting Create netns directory...
Oct 11 08:30:11 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 11 08:30:11 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 11 08:30:11 compute-0 systemd[1]: Finished Create netns directory.
Oct 11 08:30:11 compute-0 sudo[238311]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:12 compute-0 sudo[238504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eabwatzeuntduhfruyqgufjuyldxtmai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171411.7727895-717-202567536400438/AnsiballZ_file.py'
Oct 11 08:30:12 compute-0 sudo[238504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:12 compute-0 ceph-mon[74313]: pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:12 compute-0 python3.9[238506]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:30:12 compute-0 sudo[238504]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:30:13 compute-0 sudo[238656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdfwgrmntznqwtojqtpgpzuamfyzlmao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171412.636839-725-126955633758627/AnsiballZ_stat.py'
Oct 11 08:30:13 compute-0 sudo[238656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:13 compute-0 python3.9[238658]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:30:13 compute-0 sudo[238656]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:13 compute-0 sudo[238779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eutkprhaneuyoxossusdfpizdxkbgenp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171412.636839-725-126955633758627/AnsiballZ_copy.py'
Oct 11 08:30:13 compute-0 sudo[238779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:13 compute-0 python3.9[238781]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760171412.636839-725-126955633758627/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:30:13 compute-0 sudo[238779]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:14 compute-0 ceph-mon[74313]: pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:14 compute-0 sudo[238931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfijfdfgwyzassibfpkcefhenpknizkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171414.4385922-742-253824944181484/AnsiballZ_file.py'
Oct 11 08:30:14 compute-0 sudo[238931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:15 compute-0 python3.9[238933]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:30:15 compute-0 sudo[238931]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:30:15.159 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:30:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:30:15.160 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:30:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:30:15.161 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:30:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:15 compute-0 sudo[239083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuymixxckfnqjebkggdwwdbsbzsyludy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171415.3658206-750-139762178139221/AnsiballZ_stat.py'
Oct 11 08:30:15 compute-0 sudo[239083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:15 compute-0 python3.9[239085]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:30:15 compute-0 sudo[239083]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:16 compute-0 ceph-mon[74313]: pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:16 compute-0 sudo[239206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsxasxpopaixzaanywkzbyfnhipwwbsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171415.3658206-750-139762178139221/AnsiballZ_copy.py'
Oct 11 08:30:16 compute-0 sudo[239206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:16 compute-0 python3.9[239208]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171415.3658206-750-139762178139221/.source.json _original_basename=.zbb_7_qx follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:30:16 compute-0 sudo[239206]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:17 compute-0 sudo[239358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmjyovgjlvbhbrqfaotgyuwhxmcvsgse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171416.9400601-765-191258030066725/AnsiballZ_file.py'
Oct 11 08:30:17 compute-0 sudo[239358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:17 compute-0 python3.9[239360]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:30:17 compute-0 sudo[239358]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:30:18 compute-0 sudo[239510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxyidscekrivokkuzybrmeysivmnmypu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171417.8999603-773-62488331066048/AnsiballZ_stat.py'
Oct 11 08:30:18 compute-0 sudo[239510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:18 compute-0 ceph-mon[74313]: pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:18 compute-0 sudo[239510]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:19 compute-0 sudo[239633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngvypkvlrjfehsykpdytqbhvtpzbpivz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171417.8999603-773-62488331066048/AnsiballZ_copy.py'
Oct 11 08:30:19 compute-0 sudo[239633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:19 compute-0 sudo[239633]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:20 compute-0 sudo[239796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnagaqvyznotrsrakkmjuqjrvlgfpxzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171419.786615-790-216541837209075/AnsiballZ_container_config_data.py'
Oct 11 08:30:20 compute-0 sudo[239796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:20 compute-0 podman[239759]: 2025-10-11 08:30:20.220634922 +0000 UTC m=+0.090685217 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 08:30:20 compute-0 ceph-mon[74313]: pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:20 compute-0 python3.9[239804]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 11 08:30:20 compute-0 sudo[239796]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:21 compute-0 sudo[239956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bouquzuctxxbbovveyawzidubdzqmsam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171420.7469742-799-123529107027760/AnsiballZ_container_config_hash.py'
Oct 11 08:30:21 compute-0 sudo[239956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:21 compute-0 python3.9[239958]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 08:30:21 compute-0 sudo[239956]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:22 compute-0 sudo[240108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsqoobwremqiajyyygaoblmletnomimm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171421.75924-808-280896849716412/AnsiballZ_podman_container_info.py'
Oct 11 08:30:22 compute-0 sudo[240108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:22 compute-0 ceph-mon[74313]: pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:22 compute-0 python3.9[240110]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 11 08:30:22 compute-0 sudo[240108]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:30:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:23 compute-0 sudo[240287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqlknzrhiochfxdlxwuzlklxrwilwhzj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760171423.4089453-821-176485027529701/AnsiballZ_edpm_container_manage.py'
Oct 11 08:30:23 compute-0 sudo[240287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:24 compute-0 python3[240289]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 08:30:24 compute-0 ceph-mon[74313]: pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:30:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:30:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:30:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:30:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:30:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:30:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:25 compute-0 podman[240301]: 2025-10-11 08:30:25.361263743 +0000 UTC m=+1.211142571 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 11 08:30:25 compute-0 podman[240360]: 2025-10-11 08:30:25.62589122 +0000 UTC m=+0.075480998 container create 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:30:25 compute-0 podman[240360]: 2025-10-11 08:30:25.589196396 +0000 UTC m=+0.038786234 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 11 08:30:25 compute-0 python3[240289]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 11 08:30:25 compute-0 sudo[240287]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:26 compute-0 ceph-mon[74313]: pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:26 compute-0 sudo[240548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbzgficbfwmmotpwtahezhzvchfdhesj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171426.0671048-829-195477057184011/AnsiballZ_stat.py'
Oct 11 08:30:26 compute-0 sudo[240548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:26 compute-0 python3.9[240550]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:30:26 compute-0 sudo[240548]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:27 compute-0 sudo[240702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceedzkeyiolgnybcdigauezoivmrcgrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171426.981377-838-157546340712116/AnsiballZ_file.py'
Oct 11 08:30:27 compute-0 sudo[240702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:27 compute-0 python3.9[240704]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:30:27 compute-0 sudo[240702]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:30:27 compute-0 sudo[240778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syulpayfppomlydbukiijyfeeqyanpaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171426.981377-838-157546340712116/AnsiballZ_stat.py'
Oct 11 08:30:27 compute-0 sudo[240778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:28 compute-0 python3.9[240780]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:30:28 compute-0 sudo[240778]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:28 compute-0 ceph-mon[74313]: pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:28 compute-0 sudo[240929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbeenqaonjfjfwejqzdgqghvvaldfaem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171428.1494484-838-204843542158204/AnsiballZ_copy.py'
Oct 11 08:30:28 compute-0 sudo[240929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:28 compute-0 python3.9[240931]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760171428.1494484-838-204843542158204/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:30:28 compute-0 sudo[240929]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:29 compute-0 sudo[241005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahtzpbwvxpmxdwzhdbfgrrwbwqfilwhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171428.1494484-838-204843542158204/AnsiballZ_systemd.py'
Oct 11 08:30:29 compute-0 sudo[241005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:29 compute-0 python3.9[241007]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 08:30:29 compute-0 systemd[1]: Reloading.
Oct 11 08:30:29 compute-0 systemd-rc-local-generator[241030]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:30:29 compute-0 systemd-sysv-generator[241036]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:30:30 compute-0 sudo[241005]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:30 compute-0 ceph-mon[74313]: pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:30 compute-0 sudo[241115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mixankclcrhdgbdlaejqxxgihezqendu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171428.1494484-838-204843542158204/AnsiballZ_systemd.py'
Oct 11 08:30:30 compute-0 sudo[241115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:30 compute-0 python3.9[241117]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:30:30 compute-0 systemd[1]: Reloading.
Oct 11 08:30:30 compute-0 systemd-rc-local-generator[241149]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:30:30 compute-0 systemd-sysv-generator[241153]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:30:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:31 compute-0 systemd[1]: Starting multipathd container...
Oct 11 08:30:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:30:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e27cbfb756b19c9f20be1db736f032cd43ae5d0c2a634ca74e3c4d2ae715a8c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 11 08:30:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e27cbfb756b19c9f20be1db736f032cd43ae5d0c2a634ca74e3c4d2ae715a8c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 08:30:31 compute-0 podman[241170]: 2025-10-11 08:30:31.466517316 +0000 UTC m=+0.123389808 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 11 08:30:31 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863.
Oct 11 08:30:31 compute-0 podman[241157]: 2025-10-11 08:30:31.492158228 +0000 UTC m=+0.235248380 container init 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:30:31 compute-0 multipathd[241189]: + sudo -E kolla_set_configs
Oct 11 08:30:31 compute-0 sudo[241198]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 11 08:30:31 compute-0 podman[241157]: 2025-10-11 08:30:31.530020295 +0000 UTC m=+0.273110447 container start 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:30:31 compute-0 sudo[241198]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 11 08:30:31 compute-0 sudo[241198]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 08:30:31 compute-0 podman[241157]: multipathd
Oct 11 08:30:31 compute-0 systemd[1]: Started multipathd container.
Oct 11 08:30:31 compute-0 sudo[241115]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:31 compute-0 multipathd[241189]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 08:30:31 compute-0 multipathd[241189]: INFO:__main__:Validating config file
Oct 11 08:30:31 compute-0 multipathd[241189]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 08:30:31 compute-0 multipathd[241189]: INFO:__main__:Writing out command to execute
Oct 11 08:30:31 compute-0 sudo[241198]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:31 compute-0 multipathd[241189]: ++ cat /run_command
Oct 11 08:30:31 compute-0 multipathd[241189]: + CMD='/usr/sbin/multipathd -d'
Oct 11 08:30:31 compute-0 multipathd[241189]: + ARGS=
Oct 11 08:30:31 compute-0 multipathd[241189]: + sudo kolla_copy_cacerts
Oct 11 08:30:31 compute-0 sudo[241224]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 11 08:30:31 compute-0 sudo[241224]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 11 08:30:31 compute-0 sudo[241224]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 08:30:31 compute-0 podman[241199]: 2025-10-11 08:30:31.618771826 +0000 UTC m=+0.076046073 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 08:30:31 compute-0 sudo[241224]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:31 compute-0 multipathd[241189]: Running command: '/usr/sbin/multipathd -d'
Oct 11 08:30:31 compute-0 multipathd[241189]: + [[ ! -n '' ]]
Oct 11 08:30:31 compute-0 multipathd[241189]: + . kolla_extend_start
Oct 11 08:30:31 compute-0 multipathd[241189]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 11 08:30:31 compute-0 multipathd[241189]: + umask 0022
Oct 11 08:30:31 compute-0 multipathd[241189]: + exec /usr/sbin/multipathd -d
Oct 11 08:30:31 compute-0 systemd[1]: 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863-7c8c33c3794afeb.service: Main process exited, code=exited, status=1/FAILURE
Oct 11 08:30:31 compute-0 systemd[1]: 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863-7c8c33c3794afeb.service: Failed with result 'exit-code'.
Oct 11 08:30:31 compute-0 multipathd[241189]: 3276.396699 | --------start up--------
Oct 11 08:30:31 compute-0 multipathd[241189]: 3276.396716 | read /etc/multipath.conf
Oct 11 08:30:31 compute-0 multipathd[241189]: 3276.406473 | path checkers start up
Oct 11 08:30:32 compute-0 python3.9[241382]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:30:32 compute-0 ceph-mon[74313]: pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:30:32 compute-0 sudo[241534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqxpldpbumzeylyckjwwdjxsttkmysek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171432.5425081-874-9957730304742/AnsiballZ_command.py'
Oct 11 08:30:32 compute-0 sudo[241534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:33 compute-0 python3.9[241536]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:30:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:33 compute-0 sudo[241534]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:33 compute-0 sudo[241699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evwlgewpofbglgkbvktzgnsaevhomavl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171433.4295917-882-18242919884886/AnsiballZ_systemd.py'
Oct 11 08:30:33 compute-0 sudo[241699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:34 compute-0 python3.9[241701]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 08:30:34 compute-0 systemd[1]: Stopping multipathd container...
Oct 11 08:30:34 compute-0 multipathd[241189]: 3279.057544 | exit (signal)
Oct 11 08:30:34 compute-0 multipathd[241189]: 3279.057647 | --------shut down-------
Oct 11 08:30:34 compute-0 systemd[1]: libpod-37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863.scope: Deactivated successfully.
Oct 11 08:30:34 compute-0 podman[241705]: 2025-10-11 08:30:34.33250394 +0000 UTC m=+0.095383059 container died 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:30:34 compute-0 systemd[1]: 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863-7c8c33c3794afeb.timer: Deactivated successfully.
Oct 11 08:30:34 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863.
Oct 11 08:30:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863-userdata-shm.mount: Deactivated successfully.
Oct 11 08:30:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e27cbfb756b19c9f20be1db736f032cd43ae5d0c2a634ca74e3c4d2ae715a8c-merged.mount: Deactivated successfully.
Oct 11 08:30:34 compute-0 ceph-mon[74313]: pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:34 compute-0 podman[241705]: 2025-10-11 08:30:34.512284896 +0000 UTC m=+0.275164015 container cleanup 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 11 08:30:34 compute-0 podman[241705]: multipathd
Oct 11 08:30:34 compute-0 podman[241735]: multipathd
Oct 11 08:30:34 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 11 08:30:34 compute-0 systemd[1]: Stopped multipathd container.
Oct 11 08:30:34 compute-0 systemd[1]: Starting multipathd container...
Oct 11 08:30:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:30:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e27cbfb756b19c9f20be1db736f032cd43ae5d0c2a634ca74e3c4d2ae715a8c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 11 08:30:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e27cbfb756b19c9f20be1db736f032cd43ae5d0c2a634ca74e3c4d2ae715a8c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 08:30:34 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863.
Oct 11 08:30:34 compute-0 podman[241746]: 2025-10-11 08:30:34.811111087 +0000 UTC m=+0.165602147 container init 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 08:30:34 compute-0 multipathd[241762]: + sudo -E kolla_set_configs
Oct 11 08:30:34 compute-0 podman[241746]: 2025-10-11 08:30:34.851333211 +0000 UTC m=+0.205824211 container start 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:30:34 compute-0 sudo[241768]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 11 08:30:34 compute-0 podman[241746]: multipathd
Oct 11 08:30:34 compute-0 sudo[241768]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 11 08:30:34 compute-0 sudo[241768]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 08:30:34 compute-0 systemd[1]: Started multipathd container.
Oct 11 08:30:34 compute-0 sudo[241699]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:34 compute-0 multipathd[241762]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 08:30:34 compute-0 multipathd[241762]: INFO:__main__:Validating config file
Oct 11 08:30:34 compute-0 multipathd[241762]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 08:30:34 compute-0 multipathd[241762]: INFO:__main__:Writing out command to execute
Oct 11 08:30:34 compute-0 podman[241769]: 2025-10-11 08:30:34.951517434 +0000 UTC m=+0.083986078 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:30:34 compute-0 sudo[241768]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:34 compute-0 systemd[1]: 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863-536a9e401c18b8f.service: Main process exited, code=exited, status=1/FAILURE
Oct 11 08:30:34 compute-0 systemd[1]: 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863-536a9e401c18b8f.service: Failed with result 'exit-code'.
Oct 11 08:30:34 compute-0 multipathd[241762]: ++ cat /run_command
Oct 11 08:30:34 compute-0 multipathd[241762]: + CMD='/usr/sbin/multipathd -d'
Oct 11 08:30:34 compute-0 multipathd[241762]: + ARGS=
Oct 11 08:30:34 compute-0 multipathd[241762]: + sudo kolla_copy_cacerts
Oct 11 08:30:34 compute-0 sudo[241805]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 11 08:30:34 compute-0 sudo[241805]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 11 08:30:34 compute-0 sudo[241805]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 11 08:30:34 compute-0 sudo[241805]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:34 compute-0 multipathd[241762]: + [[ ! -n '' ]]
Oct 11 08:30:34 compute-0 multipathd[241762]: + . kolla_extend_start
Oct 11 08:30:34 compute-0 multipathd[241762]: Running command: '/usr/sbin/multipathd -d'
Oct 11 08:30:34 compute-0 multipathd[241762]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 11 08:30:34 compute-0 multipathd[241762]: + umask 0022
Oct 11 08:30:34 compute-0 multipathd[241762]: + exec /usr/sbin/multipathd -d
Oct 11 08:30:35 compute-0 multipathd[241762]: 3279.765478 | --------start up--------
Oct 11 08:30:35 compute-0 multipathd[241762]: 3279.766019 | read /etc/multipath.conf
Oct 11 08:30:35 compute-0 multipathd[241762]: 3279.774370 | path checkers start up
Oct 11 08:30:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:35 compute-0 sudo[241951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofdufmacgrovffvlbfuxkmgvjnelbldt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171435.1577573-890-163241924100372/AnsiballZ_file.py'
Oct 11 08:30:35 compute-0 sudo[241951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:35 compute-0 python3.9[241953]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:30:35 compute-0 sudo[241951]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:36 compute-0 ceph-mon[74313]: pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:36 compute-0 sudo[242103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nucficmexeboghgxuttrggkvepajipie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171436.2279887-902-195828106965349/AnsiballZ_file.py'
Oct 11 08:30:36 compute-0 sudo[242103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:36 compute-0 python3.9[242105]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 11 08:30:36 compute-0 sudo[242103]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:37 compute-0 sudo[242255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkvvgqrsrxacxmurbmfkbkydwjdapxbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171437.038431-910-53260468214626/AnsiballZ_modprobe.py'
Oct 11 08:30:37 compute-0 sudo[242255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:37 compute-0 python3.9[242257]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 11 08:30:37 compute-0 kernel: Key type psk registered
Oct 11 08:30:37 compute-0 sudo[242255]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:30:38 compute-0 sudo[242429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lltizgnivakilhdmslfnfcmbvkhosgft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171438.0185792-918-4015368103802/AnsiballZ_stat.py'
Oct 11 08:30:38 compute-0 sudo[242429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:38 compute-0 podman[242393]: 2025-10-11 08:30:38.537083004 +0000 UTC m=+0.159208987 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 08:30:38 compute-0 ceph-mon[74313]: pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:38 compute-0 python3.9[242440]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:30:38 compute-0 sudo[242429]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:39 compute-0 sudo[242570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbyfpisogcixrfqseyqpeubkphognnpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171438.0185792-918-4015368103802/AnsiballZ_copy.py'
Oct 11 08:30:39 compute-0 sudo[242570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:39 compute-0 python3.9[242572]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760171438.0185792-918-4015368103802/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:30:39 compute-0 sudo[242570]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:40 compute-0 sudo[242722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-forypketijbtxbtmmwzhobhvgnmmzyyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171439.6887186-934-54142140216980/AnsiballZ_lineinfile.py'
Oct 11 08:30:40 compute-0 sudo[242722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:40 compute-0 python3.9[242724]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:30:40 compute-0 sudo[242722]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:40 compute-0 ceph-mon[74313]: pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:41 compute-0 sudo[242874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olmmadqaraiubxtnklrtqthbsmrwacuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171440.6070278-942-164323015693360/AnsiballZ_systemd.py'
Oct 11 08:30:41 compute-0 sudo[242874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:41 compute-0 python3.9[242876]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 08:30:41 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 11 08:30:41 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 11 08:30:41 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 11 08:30:41 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 11 08:30:41 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 11 08:30:41 compute-0 sudo[242874]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:42 compute-0 sudo[243030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eonitltvshzmqidkztloywrirkxdnkai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171441.7306292-950-224424922252113/AnsiballZ_setup.py'
Oct 11 08:30:42 compute-0 sudo[243030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:42 compute-0 python3.9[243032]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 11 08:30:42 compute-0 ceph-mon[74313]: pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:42 compute-0 sudo[243030]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:30:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:43 compute-0 sudo[243114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iluyqyxgwuntdjwbnapaugicojrawasy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171441.7306292-950-224424922252113/AnsiballZ_dnf.py'
Oct 11 08:30:43 compute-0 sudo[243114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:43 compute-0 python3.9[243116]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 11 08:30:44 compute-0 ceph-mon[74313]: pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:46 compute-0 ceph-mon[74313]: pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:30:48 compute-0 ceph-mon[74313]: pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:49 compute-0 sudo[243121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:30:49 compute-0 sudo[243121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:49 compute-0 sudo[243121]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:49 compute-0 sudo[243146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:30:49 compute-0 systemd[1]: Reloading.
Oct 11 08:30:49 compute-0 systemd-rc-local-generator[243196]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:30:49 compute-0 systemd-sysv-generator[243200]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:30:49 compute-0 systemd[1]: Reloading.
Oct 11 08:30:50 compute-0 systemd-rc-local-generator[243231]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:30:50 compute-0 systemd-sysv-generator[243234]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:30:50 compute-0 sudo[243146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:50 compute-0 sudo[243146]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:50 compute-0 sudo[243242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:30:50 compute-0 sudo[243242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:50 compute-0 sudo[243242]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:50 compute-0 podman[243240]: 2025-10-11 08:30:50.473314683 +0000 UTC m=+0.106145853 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 08:30:50 compute-0 systemd-logind[819]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 11 08:30:50 compute-0 sudo[243284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:30:50 compute-0 sudo[243284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:50 compute-0 systemd-logind[819]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 11 08:30:50 compute-0 lvm[243351]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 08:30:50 compute-0 lvm[243351]: VG ceph_vg0 finished
Oct 11 08:30:50 compute-0 lvm[243352]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 08:30:50 compute-0 lvm[243352]: VG ceph_vg1 finished
Oct 11 08:30:50 compute-0 lvm[243349]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 08:30:50 compute-0 lvm[243349]: VG ceph_vg2 finished
Oct 11 08:30:50 compute-0 ceph-mon[74313]: pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:50 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 11 08:30:50 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 11 08:30:50 compute-0 systemd[1]: Reloading.
Oct 11 08:30:50 compute-0 systemd-sysv-generator[243421]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:30:50 compute-0 systemd-rc-local-generator[243418]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:30:51 compute-0 sudo[243284]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:51 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 11 08:30:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:30:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:30:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:30:51 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:30:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:30:51 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:30:51 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9dfe07c3-8452-4a7e-a006-be92927547ec does not exist
Oct 11 08:30:51 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a4f657b2-b0f9-4773-981d-361ca93a0033 does not exist
Oct 11 08:30:51 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 2b9113d8-4d81-4c96-b9b6-5b0984e3f683 does not exist
Oct 11 08:30:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:30:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:30:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:30:51 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:30:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:30:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:30:51 compute-0 sudo[243627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:30:51 compute-0 sudo[243627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:51 compute-0 sudo[243627]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:51 compute-0 sudo[243748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:30:51 compute-0 sudo[243748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:51 compute-0 sudo[243748]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:51 compute-0 sudo[243857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:30:51 compute-0 sudo[243857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:51 compute-0 sudo[243857]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:51 compute-0 sudo[243943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:30:51 compute-0 sudo[243943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:30:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:30:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:30:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:30:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:30:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:30:51 compute-0 sudo[243114]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:52 compute-0 podman[244375]: 2025-10-11 08:30:52.080048889 +0000 UTC m=+0.055573937 container create c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:30:52 compute-0 systemd[1]: Started libpod-conmon-c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56.scope.
Oct 11 08:30:52 compute-0 podman[244375]: 2025-10-11 08:30:52.056967429 +0000 UTC m=+0.032492487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:30:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:30:52 compute-0 podman[244375]: 2025-10-11 08:30:52.19364827 +0000 UTC m=+0.169173358 container init c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:30:52 compute-0 podman[244375]: 2025-10-11 08:30:52.204323561 +0000 UTC m=+0.179848569 container start c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:30:52 compute-0 podman[244375]: 2025-10-11 08:30:52.209405444 +0000 UTC m=+0.184930542 container attach c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:30:52 compute-0 beautiful_shamir[244521]: 167 167
Oct 11 08:30:52 compute-0 systemd[1]: libpod-c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56.scope: Deactivated successfully.
Oct 11 08:30:52 compute-0 podman[244375]: 2025-10-11 08:30:52.214358144 +0000 UTC m=+0.189883162 container died c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:30:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d7b673f38d50a5ef1c0044613536f556efbc9914f9f4e2eecb556645bd245c7-merged.mount: Deactivated successfully.
Oct 11 08:30:52 compute-0 podman[244375]: 2025-10-11 08:30:52.269605061 +0000 UTC m=+0.245130079 container remove c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 08:30:52 compute-0 systemd[1]: libpod-conmon-c688fc8f69e18b28294ec647497012595357636c3e7e272c34a7b613778f6f56.scope: Deactivated successfully.
Oct 11 08:30:52 compute-0 sudo[244809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkzsudstxgkeddvqhwshfiufycdgpvqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171452.0428007-962-48234604940718/AnsiballZ_file.py'
Oct 11 08:30:52 compute-0 sudo[244809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:52 compute-0 podman[244804]: 2025-10-11 08:30:52.502245336 +0000 UTC m=+0.062591964 container create e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 08:30:52 compute-0 systemd[1]: Started libpod-conmon-e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533.scope.
Oct 11 08:30:52 compute-0 podman[244804]: 2025-10-11 08:30:52.47114178 +0000 UTC m=+0.031488398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:30:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f722859fa2bbe7cf2c3f38911581c4dfbd514c83ccc655f507ea7dd7883f2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f722859fa2bbe7cf2c3f38911581c4dfbd514c83ccc655f507ea7dd7883f2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f722859fa2bbe7cf2c3f38911581c4dfbd514c83ccc655f507ea7dd7883f2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f722859fa2bbe7cf2c3f38911581c4dfbd514c83ccc655f507ea7dd7883f2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f722859fa2bbe7cf2c3f38911581c4dfbd514c83ccc655f507ea7dd7883f2e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:30:52 compute-0 podman[244804]: 2025-10-11 08:30:52.624623025 +0000 UTC m=+0.184969633 container init e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 08:30:52 compute-0 podman[244804]: 2025-10-11 08:30:52.639936357 +0000 UTC m=+0.200282945 container start e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:30:52 compute-0 podman[244804]: 2025-10-11 08:30:52.644560877 +0000 UTC m=+0.204907505 container attach e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 08:30:52 compute-0 ceph-mon[74313]: pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:52 compute-0 python3.9[244840]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:30:52 compute-0 sudo[244809]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:52 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 11 08:30:52 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 11 08:30:52 compute-0 systemd[1]: man-db-cache-update.service: Consumed 2.220s CPU time.
Oct 11 08:30:52 compute-0 systemd[1]: run-r31dae10ddc3a4f9fa80492e05f87c37c.service: Deactivated successfully.
Oct 11 08:30:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:30:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:53 compute-0 python3.9[245080]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 11 08:30:53 compute-0 heuristic_wescoff[244892]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:30:53 compute-0 heuristic_wescoff[244892]: --> relative data size: 1.0
Oct 11 08:30:53 compute-0 heuristic_wescoff[244892]: --> All data devices are unavailable
Oct 11 08:30:53 compute-0 systemd[1]: libpod-e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533.scope: Deactivated successfully.
Oct 11 08:30:53 compute-0 systemd[1]: libpod-e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533.scope: Consumed 1.159s CPU time.
Oct 11 08:30:53 compute-0 podman[244804]: 2025-10-11 08:30:53.877437019 +0000 UTC m=+1.437783637 container died e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 08:30:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-00f722859fa2bbe7cf2c3f38911581c4dfbd514c83ccc655f507ea7dd7883f2e-merged.mount: Deactivated successfully.
Oct 11 08:30:53 compute-0 podman[244804]: 2025-10-11 08:30:53.96371203 +0000 UTC m=+1.524058628 container remove e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 08:30:53 compute-0 systemd[1]: libpod-conmon-e414cebd424d841514c278b12403fe0cbe1f8b8bbb7bb0b6efa94560d9755533.scope: Deactivated successfully.
Oct 11 08:30:54 compute-0 sudo[243943]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:54 compute-0 sudo[245117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:30:54 compute-0 sudo[245117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:54 compute-0 sudo[245117]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:54 compute-0 sudo[245166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:30:54 compute-0 sudo[245166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:54 compute-0 sudo[245166]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:54 compute-0 sudo[245191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:30:54 compute-0 sudo[245191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:54 compute-0 sudo[245191]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:54 compute-0 sudo[245219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:30:54 compute-0 sudo[245219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:54 compute-0 ceph-mon[74313]: pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:54 compute-0 podman[245389]: 2025-10-11 08:30:54.713325264 +0000 UTC m=+0.050037721 container create 5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 08:30:54 compute-0 sudo[245418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfcqareqqbspamfvebtsrnizxipkgutc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171454.295019-980-269754387230513/AnsiballZ_file.py'
Oct 11 08:30:54 compute-0 sudo[245418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:30:54
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'images', 'volumes']
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:30:54 compute-0 systemd[1]: Started libpod-conmon-5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530.scope.
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:30:54 compute-0 podman[245389]: 2025-10-11 08:30:54.688155535 +0000 UTC m=+0.024868002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:30:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:30:54 compute-0 podman[245389]: 2025-10-11 08:30:54.836933847 +0000 UTC m=+0.173646314 container init 5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 08:30:54 compute-0 podman[245389]: 2025-10-11 08:30:54.850286984 +0000 UTC m=+0.186999441 container start 5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:30:54 compute-0 interesting_shannon[245425]: 167 167
Oct 11 08:30:54 compute-0 systemd[1]: libpod-5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530.scope: Deactivated successfully.
Oct 11 08:30:54 compute-0 podman[245389]: 2025-10-11 08:30:54.8543943 +0000 UTC m=+0.191106757 container attach 5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:30:54 compute-0 podman[245389]: 2025-10-11 08:30:54.860387488 +0000 UTC m=+0.197099945 container died 5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 08:30:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e5d63f0666364347eb3a7bd0b0899be1d3c336581a6a059fe9432846be0576d-merged.mount: Deactivated successfully.
Oct 11 08:30:54 compute-0 podman[245389]: 2025-10-11 08:30:54.914451312 +0000 UTC m=+0.251163769 container remove 5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 08:30:54 compute-0 systemd[1]: libpod-conmon-5accb42b85a58bacb41c7de23b818b758a7215c9582884453bc53a07d7860530.scope: Deactivated successfully.
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:30:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:30:54 compute-0 python3.9[245422]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:30:54 compute-0 sudo[245418]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:55 compute-0 podman[245471]: 2025-10-11 08:30:55.136424557 +0000 UTC m=+0.055782953 container create 35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 08:30:55 compute-0 systemd[1]: Started libpod-conmon-35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e.scope.
Oct 11 08:30:55 compute-0 podman[245471]: 2025-10-11 08:30:55.107135282 +0000 UTC m=+0.026493758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:30:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e8cf8d05078483a94e6222e687d0f496bd8d66759636b2537185e79cea5da4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e8cf8d05078483a94e6222e687d0f496bd8d66759636b2537185e79cea5da4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e8cf8d05078483a94e6222e687d0f496bd8d66759636b2537185e79cea5da4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39e8cf8d05078483a94e6222e687d0f496bd8d66759636b2537185e79cea5da4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:30:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:55 compute-0 podman[245471]: 2025-10-11 08:30:55.238420421 +0000 UTC m=+0.157778867 container init 35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 08:30:55 compute-0 podman[245471]: 2025-10-11 08:30:55.252482978 +0000 UTC m=+0.171841404 container start 35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 08:30:55 compute-0 podman[245471]: 2025-10-11 08:30:55.256658695 +0000 UTC m=+0.176017121 container attach 35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 08:30:56 compute-0 zealous_nobel[245487]: {
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:     "0": [
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:         {
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "devices": [
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "/dev/loop3"
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             ],
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "lv_name": "ceph_lv0",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "lv_size": "21470642176",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "name": "ceph_lv0",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "tags": {
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.cluster_name": "ceph",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.crush_device_class": "",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.encrypted": "0",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.osd_id": "0",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.type": "block",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.vdo": "0"
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             },
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "type": "block",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "vg_name": "ceph_vg0"
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:         }
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:     ],
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:     "1": [
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:         {
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "devices": [
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "/dev/loop4"
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             ],
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "lv_name": "ceph_lv1",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "lv_size": "21470642176",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "name": "ceph_lv1",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "tags": {
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.cluster_name": "ceph",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.crush_device_class": "",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.encrypted": "0",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.osd_id": "1",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.type": "block",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.vdo": "0"
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             },
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "type": "block",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "vg_name": "ceph_vg1"
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:         }
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:     ],
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:     "2": [
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:         {
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "devices": [
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "/dev/loop5"
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             ],
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "lv_name": "ceph_lv2",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "lv_size": "21470642176",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "name": "ceph_lv2",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "tags": {
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.cluster_name": "ceph",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.crush_device_class": "",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.encrypted": "0",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.osd_id": "2",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.type": "block",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:                 "ceph.vdo": "0"
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             },
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "type": "block",
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:             "vg_name": "ceph_vg2"
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:         }
Oct 11 08:30:56 compute-0 zealous_nobel[245487]:     ]
Oct 11 08:30:56 compute-0 zealous_nobel[245487]: }
Oct 11 08:30:56 compute-0 podman[245471]: 2025-10-11 08:30:56.07056142 +0000 UTC m=+0.989919856 container died 35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:30:56 compute-0 systemd[1]: libpod-35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e.scope: Deactivated successfully.
Oct 11 08:30:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-39e8cf8d05078483a94e6222e687d0f496bd8d66759636b2537185e79cea5da4-merged.mount: Deactivated successfully.
Oct 11 08:30:56 compute-0 sudo[245632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqbktbovklkymopzbtvampbksykfohjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171455.3850765-991-270634433845340/AnsiballZ_systemd_service.py'
Oct 11 08:30:56 compute-0 sudo[245632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:30:56 compute-0 podman[245471]: 2025-10-11 08:30:56.135622493 +0000 UTC m=+1.054980899 container remove 35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 08:30:56 compute-0 systemd[1]: libpod-conmon-35729055cb69069ff9813bc9639569eb5ecedc49fcc0b0094d7223720ab7d31e.scope: Deactivated successfully.
Oct 11 08:30:56 compute-0 sudo[245219]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:56 compute-0 sudo[245638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:30:56 compute-0 sudo[245638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:56 compute-0 sudo[245638]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:56 compute-0 sudo[245663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:30:56 compute-0 sudo[245663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:56 compute-0 sudo[245663]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:56 compute-0 sudo[245688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:30:56 compute-0 sudo[245688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:56 compute-0 sudo[245688]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:56 compute-0 sudo[245713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:30:56 compute-0 sudo[245713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:56 compute-0 python3.9[245634]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 08:30:56 compute-0 systemd[1]: Reloading.
Oct 11 08:30:56 compute-0 systemd-rc-local-generator[245768]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:30:56 compute-0 systemd-sysv-generator[245771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:30:56 compute-0 ceph-mon[74313]: pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:56 compute-0 sudo[245632]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:56 compute-0 podman[245811]: 2025-10-11 08:30:56.997569083 +0000 UTC m=+0.123146101 container create df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 08:30:57 compute-0 podman[245811]: 2025-10-11 08:30:56.910562151 +0000 UTC m=+0.036139159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:30:57 compute-0 systemd[1]: Started libpod-conmon-df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc.scope.
Oct 11 08:30:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:30:57 compute-0 podman[245811]: 2025-10-11 08:30:57.208546268 +0000 UTC m=+0.334123336 container init df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:30:57 compute-0 podman[245811]: 2025-10-11 08:30:57.218075137 +0000 UTC m=+0.343652135 container start df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:30:57 compute-0 hopeful_cray[245851]: 167 167
Oct 11 08:30:57 compute-0 systemd[1]: libpod-df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc.scope: Deactivated successfully.
Oct 11 08:30:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:57 compute-0 podman[245811]: 2025-10-11 08:30:57.247882187 +0000 UTC m=+0.373459225 container attach df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:30:57 compute-0 podman[245811]: 2025-10-11 08:30:57.251414916 +0000 UTC m=+0.376991914 container died df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:30:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-cafd15bacc684cc6055e500c62c947083032abbf5803ba22446f4d8ed831c311-merged.mount: Deactivated successfully.
Oct 11 08:30:57 compute-0 podman[245811]: 2025-10-11 08:30:57.556912435 +0000 UTC m=+0.682489463 container remove df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 08:30:57 compute-0 systemd[1]: libpod-conmon-df5f0960e12c909b52de5366a0ce4211181ffcda18c74155871998a1bbea75dc.scope: Deactivated successfully.
Oct 11 08:30:57 compute-0 podman[246002]: 2025-10-11 08:30:57.810498251 +0000 UTC m=+0.074345736 container create d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_herschel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:30:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:30:57 compute-0 podman[246002]: 2025-10-11 08:30:57.780785574 +0000 UTC m=+0.044633099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:30:57 compute-0 systemd[1]: Started libpod-conmon-d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1.scope.
Oct 11 08:30:57 compute-0 python3.9[246003]: ansible-ansible.builtin.service_facts Invoked
Oct 11 08:30:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:30:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e9d0ecf7b12eda9831953f0554e51bbfdabc114ff48479406c1a180f2c82050/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:30:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e9d0ecf7b12eda9831953f0554e51bbfdabc114ff48479406c1a180f2c82050/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:30:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e9d0ecf7b12eda9831953f0554e51bbfdabc114ff48479406c1a180f2c82050/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:30:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e9d0ecf7b12eda9831953f0554e51bbfdabc114ff48479406c1a180f2c82050/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:30:57 compute-0 podman[246002]: 2025-10-11 08:30:57.993545019 +0000 UTC m=+0.257392514 container init d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 08:30:57 compute-0 network[246039]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 11 08:30:57 compute-0 network[246041]: 'network-scripts' will be removed from distribution in near future.
Oct 11 08:30:58 compute-0 network[246042]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 11 08:30:58 compute-0 podman[246002]: 2025-10-11 08:30:58.002689817 +0000 UTC m=+0.266537272 container start d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_herschel, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:30:58 compute-0 podman[246002]: 2025-10-11 08:30:58.006424012 +0000 UTC m=+0.270271477 container attach d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_herschel, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:30:58 compute-0 ceph-mon[74313]: pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:59 compute-0 hungry_herschel[246020]: {
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:         "osd_id": 2,
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:         "type": "bluestore"
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:     },
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:         "osd_id": 0,
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:         "type": "bluestore"
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:     },
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:         "osd_id": 1,
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:         "type": "bluestore"
Oct 11 08:30:59 compute-0 hungry_herschel[246020]:     }
Oct 11 08:30:59 compute-0 hungry_herschel[246020]: }
Oct 11 08:30:59 compute-0 systemd[1]: libpod-d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1.scope: Deactivated successfully.
Oct 11 08:30:59 compute-0 podman[246002]: 2025-10-11 08:30:59.097701693 +0000 UTC m=+1.361549158 container died d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_herschel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 08:30:59 compute-0 systemd[1]: libpod-d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1.scope: Consumed 1.100s CPU time.
Oct 11 08:30:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e9d0ecf7b12eda9831953f0554e51bbfdabc114ff48479406c1a180f2c82050-merged.mount: Deactivated successfully.
Oct 11 08:30:59 compute-0 podman[246002]: 2025-10-11 08:30:59.198414631 +0000 UTC m=+1.462262096 container remove d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 08:30:59 compute-0 systemd[1]: libpod-conmon-d8e70c3982f0f0c4ce74031ae9a813cf8ef8b24c0b30d905ea15ffb5a94784c1.scope: Deactivated successfully.
Oct 11 08:30:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:30:59 compute-0 sudo[245713]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:30:59 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:30:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:30:59 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:30:59 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 4a3e6bcd-5138-4803-9b27-2f76c10f8754 does not exist
Oct 11 08:30:59 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 28c5034f-b8c7-4dfa-bec1-207eef25f928 does not exist
Oct 11 08:30:59 compute-0 sudo[246105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:30:59 compute-0 sudo[246105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:59 compute-0 sudo[246105]: pam_unix(sudo:session): session closed for user root
Oct 11 08:30:59 compute-0 sudo[246134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:30:59 compute-0 sudo[246134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:30:59 compute-0 sudo[246134]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:00 compute-0 ceph-mon[74313]: pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:00 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:31:00 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:31:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:01 compute-0 podman[246228]: 2025-10-11 08:31:01.647467345 +0000 UTC m=+0.109112435 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid)
Oct 11 08:31:02 compute-0 ceph-mon[74313]: pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:31:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:03 compute-0 sudo[246431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkpwhkxlmdxzmennwlupjcknbahnuyvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171463.0840132-1010-67947076594525/AnsiballZ_systemd_service.py'
Oct 11 08:31:03 compute-0 sudo[246431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:03 compute-0 python3.9[246433]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:31:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:31:04 compute-0 ceph-mon[74313]: pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:04 compute-0 sudo[246431]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:05 compute-0 sudo[246600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjyfhujzwotsykyamlvggmmcoikkmtme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171465.0787935-1010-204191565854243/AnsiballZ_systemd_service.py'
Oct 11 08:31:05 compute-0 sudo[246600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:05 compute-0 podman[246558]: 2025-10-11 08:31:05.541844007 +0000 UTC m=+0.102408886 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:31:05 compute-0 python3.9[246607]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:31:05 compute-0 sudo[246600]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:06 compute-0 ceph-mon[74313]: pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:06 compute-0 sudo[246758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghhdyvlsfwadpmywhyvlrvfygityexhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171466.0399308-1010-152534337299452/AnsiballZ_systemd_service.py'
Oct 11 08:31:06 compute-0 sudo[246758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:06 compute-0 python3.9[246760]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:31:06 compute-0 sudo[246758]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:07 compute-0 sudo[246911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydwgxxmytuainiudbvnilrlmbuplvhba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171466.9201372-1010-126019639633699/AnsiballZ_systemd_service.py'
Oct 11 08:31:07 compute-0 sudo[246911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:07 compute-0 python3.9[246913]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:31:07 compute-0 sudo[246911]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:31:08 compute-0 ceph-mon[74313]: pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:08 compute-0 sudo[247064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifeefflqbvjdtbldfjphrqljpwarsykf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171467.8903751-1010-175057069262494/AnsiballZ_systemd_service.py'
Oct 11 08:31:08 compute-0 sudo[247064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:08 compute-0 python3.9[247066]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:31:08 compute-0 sudo[247064]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:08 compute-0 podman[247068]: 2025-10-11 08:31:08.825219492 +0000 UTC m=+0.123926334 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:31:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:09 compute-0 sudo[247243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znucqarurjoftieyvnrsfzovzjjyhjob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171468.914946-1010-161887839602341/AnsiballZ_systemd_service.py'
Oct 11 08:31:09 compute-0 sudo[247243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:09 compute-0 python3.9[247245]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:31:09 compute-0 sudo[247243]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:10 compute-0 ceph-mon[74313]: pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:10 compute-0 sudo[247396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehfpsjxjzultshoniiielhizgrukfrhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171469.8823183-1010-168049227666088/AnsiballZ_systemd_service.py'
Oct 11 08:31:10 compute-0 sudo[247396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:10 compute-0 python3.9[247398]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:31:10 compute-0 sudo[247396]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:11 compute-0 sudo[247549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzegiyqnoswfnkdbsivfytnzsyrtiecr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171470.8630445-1010-50336682602457/AnsiballZ_systemd_service.py'
Oct 11 08:31:11 compute-0 sudo[247549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:11 compute-0 python3.9[247551]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:31:11 compute-0 sudo[247549]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:12 compute-0 ceph-mon[74313]: pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:12 compute-0 sudo[247702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drcvqsueddjigxbkqvjayvydmxlhdyyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171472.0216289-1069-177545218299566/AnsiballZ_file.py'
Oct 11 08:31:12 compute-0 sudo[247702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:12 compute-0 python3.9[247704]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:31:12 compute-0 sudo[247702]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:31:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:13 compute-0 sudo[247854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jopmvwehgxxvwsxpmfoehblzrtnnqncx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171472.8445802-1069-245443946518694/AnsiballZ_file.py'
Oct 11 08:31:13 compute-0 sudo[247854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:13 compute-0 python3.9[247856]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:31:13 compute-0 sudo[247854]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:14 compute-0 sudo[248006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmhgnyiynbngrobomgzgfxvxjksnimri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171473.651047-1069-49974982853105/AnsiballZ_file.py'
Oct 11 08:31:14 compute-0 sudo[248006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:14 compute-0 python3.9[248008]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:31:14 compute-0 sudo[248006]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:14 compute-0 ceph-mon[74313]: pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:14 compute-0 sudo[248158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcylcarqwikjqzyvozwimeitcrikdoti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171474.4442258-1069-177369004168771/AnsiballZ_file.py'
Oct 11 08:31:14 compute-0 sudo[248158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:15 compute-0 python3.9[248160]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:31:15 compute-0 sudo[248158]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:31:15.160 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:31:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:31:15.161 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:31:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:31:15.161 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:31:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:15 compute-0 sudo[248310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmrrmhhowtkqaxartblsjfhskiyevujg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171475.2629778-1069-67433760372517/AnsiballZ_file.py'
Oct 11 08:31:15 compute-0 sudo[248310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:15 compute-0 python3.9[248312]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:31:15 compute-0 sudo[248310]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:16 compute-0 ceph-mon[74313]: pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:16 compute-0 sudo[248462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqkevjucobxubruwruqsdbafcxflbbch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171476.1223974-1069-244774944882881/AnsiballZ_file.py'
Oct 11 08:31:16 compute-0 sudo[248462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:16 compute-0 python3.9[248464]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:31:16 compute-0 sudo[248462]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:17 compute-0 sudo[248614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkvrapvouxrihuzbkfgxycsgwojiquwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171476.961866-1069-271490335500627/AnsiballZ_file.py'
Oct 11 08:31:17 compute-0 sudo[248614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:17 compute-0 python3.9[248616]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:31:17 compute-0 sudo[248614]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:31:18 compute-0 sudo[248766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uezcamgbkvtsgyoslybgctfepfavdwrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171477.792543-1069-148939877982527/AnsiballZ_file.py'
Oct 11 08:31:18 compute-0 sudo[248766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:18 compute-0 ceph-mon[74313]: pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:18 compute-0 python3.9[248768]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:31:18 compute-0 sudo[248766]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:19 compute-0 sudo[248918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfudgqiewstuivdtltxngktxpudewcob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171478.6622427-1126-249522869048409/AnsiballZ_file.py'
Oct 11 08:31:19 compute-0 sudo[248918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:19 compute-0 python3.9[248920]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:31:19 compute-0 sudo[248918]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:19 compute-0 sudo[249070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkujckzpifawraezsjutcizzrkdegxku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171479.5059724-1126-217841275387172/AnsiballZ_file.py'
Oct 11 08:31:19 compute-0 sudo[249070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:20 compute-0 python3.9[249072]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:31:20 compute-0 sudo[249070]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:20 compute-0 ceph-mon[74313]: pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:20 compute-0 podman[249178]: 2025-10-11 08:31:20.791566577 +0000 UTC m=+0.081049875 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 11 08:31:20 compute-0 sudo[249242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhrsoygcenropyyzqdffzmhwrwiekfss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171480.336835-1126-169853584601183/AnsiballZ_file.py'
Oct 11 08:31:20 compute-0 sudo[249242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:21 compute-0 python3.9[249244]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:31:21 compute-0 sudo[249242]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:21 compute-0 sudo[249394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbcmojeikqhintdksfgykaxogdmxrwhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171481.219872-1126-160499617082484/AnsiballZ_file.py'
Oct 11 08:31:21 compute-0 sudo[249394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:21 compute-0 python3.9[249396]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:31:21 compute-0 sudo[249394]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:22 compute-0 ceph-mon[74313]: pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:22 compute-0 sudo[249546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agdfckzpajtjikwcsehjyvmhrtldyzqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171482.001108-1126-279526726126403/AnsiballZ_file.py'
Oct 11 08:31:22 compute-0 sudo[249546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:22 compute-0 python3.9[249548]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:31:22 compute-0 sudo[249546]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:31:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:23 compute-0 sudo[249698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpvmbyhdzphkqunfdiukqyisbtxswhiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171482.9068596-1126-7664523924374/AnsiballZ_file.py'
Oct 11 08:31:23 compute-0 sudo[249698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:23 compute-0 python3.9[249700]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:31:23 compute-0 sudo[249698]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:24 compute-0 sudo[249850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtvkajjovnjbkdtgwsbpdepxuyfhdlkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171483.728074-1126-79939623000808/AnsiballZ_file.py'
Oct 11 08:31:24 compute-0 sudo[249850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:24 compute-0 python3.9[249852]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:31:24 compute-0 sudo[249850]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:24 compute-0 ceph-mon[74313]: pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:31:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:31:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:31:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:31:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:31:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:31:24 compute-0 sudo[250002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvfamwibdtynfnjxygywyhisufzyedun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171484.5468464-1126-180178537876999/AnsiballZ_file.py'
Oct 11 08:31:24 compute-0 sudo[250002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:25 compute-0 python3.9[250004]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:31:25 compute-0 sudo[250002]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:25 compute-0 sudo[250154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iorkgllfoudyugrfibprsgjjhkxbvqdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171485.5243058-1184-43271524131266/AnsiballZ_command.py'
Oct 11 08:31:25 compute-0 sudo[250154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:26 compute-0 python3.9[250156]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:31:26 compute-0 sudo[250154]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:26 compute-0 ceph-mon[74313]: pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:27 compute-0 python3.9[250308]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 11 08:31:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:31:27 compute-0 sudo[250458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtspwqgekuhxgxatoeuzmdvhgyrcfuxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171487.5577452-1202-55195325565177/AnsiballZ_systemd_service.py'
Oct 11 08:31:27 compute-0 sudo[250458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:28 compute-0 python3.9[250460]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 08:31:28 compute-0 systemd[1]: Reloading.
Oct 11 08:31:28 compute-0 ceph-mon[74313]: pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:28 compute-0 systemd-rc-local-generator[250487]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:31:28 compute-0 systemd-sysv-generator[250491]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:31:28 compute-0 sudo[250458]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:29 compute-0 sudo[250645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adhtuzzwvyiuojpofoidjzrxqlgpxkpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171488.9171684-1210-68356882519187/AnsiballZ_command.py'
Oct 11 08:31:29 compute-0 sudo[250645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:29 compute-0 python3.9[250647]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:31:29 compute-0 sudo[250645]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:30 compute-0 sudo[250798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odznyaglbhcmcpaqtmbjpkpdxemzpsuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171489.8005965-1210-278834344176490/AnsiballZ_command.py'
Oct 11 08:31:30 compute-0 sudo[250798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:30 compute-0 ceph-mon[74313]: pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:30 compute-0 python3.9[250800]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:31:30 compute-0 sudo[250798]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:31 compute-0 sudo[250951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aghfgilsmfutozbfvftghpixnygncfyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171490.6779563-1210-128507642870690/AnsiballZ_command.py'
Oct 11 08:31:31 compute-0 sudo[250951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:31 compute-0 python3.9[250953]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:31:31 compute-0 sudo[250951]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:31 compute-0 podman[251031]: 2025-10-11 08:31:31.803588471 +0000 UTC m=+0.100213395 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:31:31 compute-0 sudo[251125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woxpgvalcklylzoxkjbbcchptxlurcik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171491.5773506-1210-253462744803661/AnsiballZ_command.py'
Oct 11 08:31:31 compute-0 sudo[251125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:32 compute-0 python3.9[251127]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:31:32 compute-0 sudo[251125]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:32 compute-0 ceph-mon[74313]: pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:32 compute-0 sudo[251278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bthduxwipuotvgqauqnqkxcgzgmxocrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171492.3938575-1210-261959531655920/AnsiballZ_command.py'
Oct 11 08:31:32 compute-0 sudo[251278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:31:32 compute-0 python3.9[251280]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:31:33 compute-0 sudo[251278]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:33 compute-0 sudo[251431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkehzqcachjxfgnylsbvutmlrytmtlos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171493.385085-1210-218569525615811/AnsiballZ_command.py'
Oct 11 08:31:33 compute-0 sudo[251431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:33 compute-0 python3.9[251433]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:31:34 compute-0 sudo[251431]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:34 compute-0 ceph-mon[74313]: pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:34 compute-0 sudo[251584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evqetmmmzfmcghwyothxjquhbttluwrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171494.1961284-1210-127118037103170/AnsiballZ_command.py'
Oct 11 08:31:34 compute-0 sudo[251584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:34 compute-0 python3.9[251586]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:31:34 compute-0 sudo[251584]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:35 compute-0 sudo[251737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogrxybujtlytxrmymopnuanusbeotvlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171495.0174782-1210-135672684809471/AnsiballZ_command.py'
Oct 11 08:31:35 compute-0 sudo[251737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:35 compute-0 python3.9[251739]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 11 08:31:35 compute-0 sudo[251737]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:35 compute-0 podman[251741]: 2025-10-11 08:31:35.774356336 +0000 UTC m=+0.102767457 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:31:36 compute-0 ceph-mon[74313]: pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:37 compute-0 sudo[251909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkqkxhrblaogjevlsjhqhoqsvmwazdff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171496.7503383-1289-206197444876625/AnsiballZ_file.py'
Oct 11 08:31:37 compute-0 sudo[251909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:37 compute-0 python3.9[251911]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:31:37 compute-0 sudo[251909]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:31:38 compute-0 sudo[252061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evplpubifwpgmhdagmzcrfrhriyniege ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171497.5735807-1289-66320447042446/AnsiballZ_file.py'
Oct 11 08:31:38 compute-0 sudo[252061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:38 compute-0 python3.9[252063]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:31:38 compute-0 sudo[252061]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:38 compute-0 ceph-mon[74313]: pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:38 compute-0 sudo[252213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iylcoqcdlilvnsxbwjvvplsryeittozg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171498.4924276-1289-51059451226029/AnsiballZ_file.py'
Oct 11 08:31:38 compute-0 sudo[252213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:39 compute-0 podman[252215]: 2025-10-11 08:31:39.026982903 +0000 UTC m=+0.130115008 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 11 08:31:39 compute-0 python3.9[252216]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:31:39 compute-0 sudo[252213]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:39 compute-0 sudo[252391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utendqwrnubttffgiexoidivqoffyxfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171499.387508-1311-273418671569277/AnsiballZ_file.py'
Oct 11 08:31:39 compute-0 sudo[252391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:40 compute-0 python3.9[252393]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:31:40 compute-0 sudo[252391]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:40 compute-0 ceph-mon[74313]: pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:40 compute-0 sudo[252543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbmgstgbhtfwdzlxmfvelpwyajhfiubl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171500.2172875-1311-121725493703767/AnsiballZ_file.py'
Oct 11 08:31:40 compute-0 sudo[252543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:40 compute-0 python3.9[252545]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:31:40 compute-0 sudo[252543]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:41 compute-0 sudo[252695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdwlbhpsqofwfeswxzkjkkrymfctgtei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171501.0554504-1311-80918735886494/AnsiballZ_file.py'
Oct 11 08:31:41 compute-0 sudo[252695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:41 compute-0 python3.9[252697]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:31:41 compute-0 sudo[252695]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:42 compute-0 sudo[252847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-audhefoatyzcixmosdzadetccoykizhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171501.886913-1311-128329622199094/AnsiballZ_file.py'
Oct 11 08:31:42 compute-0 sudo[252847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:42 compute-0 ceph-mon[74313]: pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:42 compute-0 python3.9[252849]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:31:42 compute-0 sudo[252847]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:31:43 compute-0 sudo[252999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixmoraigdtzracmdlhdiykcalmaocxyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171502.7998388-1311-250242217965879/AnsiballZ_file.py'
Oct 11 08:31:43 compute-0 sudo[252999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:43 compute-0 python3.9[253001]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:31:43 compute-0 sudo[252999]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:43 compute-0 sudo[253151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwuqcwxrqxzazikzxdmdgmoxwyrciacn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171503.600986-1311-237585300122948/AnsiballZ_file.py'
Oct 11 08:31:44 compute-0 sudo[253151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:44 compute-0 python3.9[253153]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:31:44 compute-0 sudo[253151]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:44 compute-0 ceph-mon[74313]: pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:44 compute-0 sudo[253303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfnqbmmftqqrtvchspybylnzcjnkhmku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171504.417586-1311-92059254218515/AnsiballZ_file.py'
Oct 11 08:31:44 compute-0 sudo[253303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:45 compute-0 python3.9[253305]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:31:45 compute-0 sudo[253303]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:45 compute-0 sudo[253455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inquimxuohhilwrooqldsjmtwwabionj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171505.2363608-1311-258600135936186/AnsiballZ_file.py'
Oct 11 08:31:45 compute-0 sudo[253455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:45 compute-0 python3.9[253457]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:31:45 compute-0 sudo[253455]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:46 compute-0 sudo[253607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdugibrufavkerdafspbalihrvrrcvke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171506.0378306-1311-234637825266046/AnsiballZ_file.py'
Oct 11 08:31:46 compute-0 sudo[253607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:46 compute-0 ceph-mon[74313]: pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:46 compute-0 python3.9[253609]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:31:46 compute-0 sudo[253607]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:31:48 compute-0 ceph-mon[74313]: pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:50 compute-0 ceph-mon[74313]: pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:51 compute-0 podman[253634]: 2025-10-11 08:31:51.782186581 +0000 UTC m=+0.078460822 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 11 08:31:52 compute-0 sudo[253779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mofbjzffrrbkvjnxpvoriuzadsiycqtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171511.9294066-1514-233043274860588/AnsiballZ_getent.py'
Oct 11 08:31:52 compute-0 sudo[253779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:52 compute-0 ceph-mon[74313]: pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:52 compute-0 python3.9[253781]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 11 08:31:52 compute-0 sudo[253779]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:31:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:53 compute-0 sudo[253932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoqgivdvbdmzgjsuytxpaeuvnrtvzdit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171512.9601305-1522-11984510176559/AnsiballZ_group.py'
Oct 11 08:31:53 compute-0 sudo[253932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:53 compute-0 python3.9[253934]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 11 08:31:53 compute-0 groupadd[253935]: group added to /etc/group: name=nova, GID=42436
Oct 11 08:31:53 compute-0 groupadd[253935]: group added to /etc/gshadow: name=nova
Oct 11 08:31:53 compute-0 groupadd[253935]: new group: name=nova, GID=42436
Oct 11 08:31:53 compute-0 sudo[253932]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:54 compute-0 ceph-mon[74313]: pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:54 compute-0 sudo[254090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mihfmtthvcnilnenlhpbndnwmhkylvul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171514.063325-1530-72404257585697/AnsiballZ_user.py'
Oct 11 08:31:54 compute-0 sudo[254090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:31:54
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'images', 'backups']
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:31:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:31:54 compute-0 python3.9[254092]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 11 08:31:55 compute-0 useradd[254094]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Oct 11 08:31:55 compute-0 useradd[254094]: add 'nova' to group 'libvirt'
Oct 11 08:31:55 compute-0 useradd[254094]: add 'nova' to shadow group 'libvirt'
Oct 11 08:31:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 08:31:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5622 writes, 23K keys, 5622 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5622 writes, 881 syncs, 6.38 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f926711090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f926711090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f926711090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 08:31:55 compute-0 sudo[254090]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:55 compute-0 sshd-session[254125]: Accepted publickey for zuul from 192.168.122.30 port 33122 ssh2: ECDSA SHA256:KKxgUhG08XzjYMLOyvbR+tXItyOnGoLl6Fn32NV5afE
Oct 11 08:31:56 compute-0 systemd-logind[819]: New session 53 of user zuul.
Oct 11 08:31:56 compute-0 systemd[1]: Started Session 53 of User zuul.
Oct 11 08:31:56 compute-0 sshd-session[254125]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 08:31:56 compute-0 sshd-session[254128]: Received disconnect from 192.168.122.30 port 33122:11: disconnected by user
Oct 11 08:31:56 compute-0 sshd-session[254128]: Disconnected from user zuul 192.168.122.30 port 33122
Oct 11 08:31:56 compute-0 sshd-session[254125]: pam_unix(sshd:session): session closed for user zuul
Oct 11 08:31:56 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Oct 11 08:31:56 compute-0 systemd-logind[819]: Session 53 logged out. Waiting for processes to exit.
Oct 11 08:31:56 compute-0 systemd-logind[819]: Removed session 53.
Oct 11 08:31:56 compute-0 ceph-mon[74313]: pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:57 compute-0 python3.9[254278]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:31:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:57 compute-0 python3.9[254399]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760171516.401401-1555-269118851967355/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:31:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:31:58 compute-0 python3.9[254549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:31:58 compute-0 ceph-mon[74313]: pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:59 compute-0 python3.9[254625]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:31:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:31:59 compute-0 sudo[254725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:31:59 compute-0 sudo[254725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:31:59 compute-0 sudo[254725]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:59 compute-0 sudo[254777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:31:59 compute-0 sudo[254777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:31:59 compute-0 sudo[254777]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:59 compute-0 sudo[254826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:31:59 compute-0 sudo[254826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:31:59 compute-0 sudo[254826]: pam_unix(sudo:session): session closed for user root
Oct 11 08:31:59 compute-0 sudo[254851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 08:31:59 compute-0 sudo[254851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:31:59 compute-0 python3.9[254823]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:32:00 compute-0 sudo[254851]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:32:00 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:32:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:32:00 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:32:00 compute-0 sudo[254966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:32:00 compute-0 sudo[254966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:00 compute-0 sudo[254966]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:00 compute-0 sudo[255018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:32:00 compute-0 sudo[255018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:00 compute-0 sudo[255018]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 08:32:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6712 writes, 27K keys, 6712 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6712 writes, 1226 syncs, 5.47 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab3103090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab3103090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab3103090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 08:32:00 compute-0 sudo[255067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:32:00 compute-0 sudo[255067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:00 compute-0 sudo[255067]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:00 compute-0 sudo[255092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:32:00 compute-0 sudo[255092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:00 compute-0 python3.9[255064]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760171519.2215683-1555-266665536993903/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:32:01 compute-0 sudo[255092]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:32:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:32:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:32:01 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:32:01 compute-0 ceph-mon[74313]: pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:32:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:32:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:32:01 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:32:01 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 8c2bda3e-e11f-4c1a-bca9-cdec29912896 does not exist
Oct 11 08:32:01 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 4aa11e04-9816-4b74-8117-1144417eb6f2 does not exist
Oct 11 08:32:01 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 93d3038a-70b7-49cc-8b7d-7af4886c39a9 does not exist
Oct 11 08:32:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:32:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:32:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:32:01 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:32:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:32:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:32:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:01 compute-0 sudo[255295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:32:01 compute-0 sudo[255295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:01 compute-0 sudo[255295]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:01 compute-0 sudo[255323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:32:01 compute-0 sudo[255323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:01 compute-0 sudo[255323]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:01 compute-0 python3.9[255301]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:32:01 compute-0 sudo[255348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:32:01 compute-0 sudo[255348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:01 compute-0 sudo[255348]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:01 compute-0 sudo[255379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:32:01 compute-0 sudo[255379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:01 compute-0 podman[255558]: 2025-10-11 08:32:01.946550324 +0000 UTC m=+0.050340161 container create e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 08:32:01 compute-0 systemd[1]: Started libpod-conmon-e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304.scope.
Oct 11 08:32:02 compute-0 podman[255558]: 2025-10-11 08:32:01.925062143 +0000 UTC m=+0.028851990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:32:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:32:02 compute-0 podman[255558]: 2025-10-11 08:32:02.06421332 +0000 UTC m=+0.168003137 container init e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:32:02 compute-0 podman[255558]: 2025-10-11 08:32:02.076619447 +0000 UTC m=+0.180409294 container start e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:32:02 compute-0 compassionate_spence[255575]: 167 167
Oct 11 08:32:02 compute-0 podman[255558]: 2025-10-11 08:32:02.105528717 +0000 UTC m=+0.209318534 container attach e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:32:02 compute-0 systemd[1]: libpod-e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304.scope: Deactivated successfully.
Oct 11 08:32:02 compute-0 python3.9[255557]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760171520.784946-1555-121773290648046/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:32:02 compute-0 podman[255572]: 2025-10-11 08:32:02.151558337 +0000 UTC m=+0.148049468 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid)
Oct 11 08:32:02 compute-0 podman[255594]: 2025-10-11 08:32:02.160089855 +0000 UTC m=+0.029336692 container died e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 08:32:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:32:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:32:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:32:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:32:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:32:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:32:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cd7a2baf67e75eafbc03850439205fd34a8c845f8c5ade34e5f0acef7b65008-merged.mount: Deactivated successfully.
Oct 11 08:32:02 compute-0 podman[255594]: 2025-10-11 08:32:02.215356573 +0000 UTC m=+0.084603430 container remove e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_spence, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:32:02 compute-0 systemd[1]: libpod-conmon-e79c4ac1d4a4f7830316fce0b11f7694464500eb22cbedf18416c12c82284304.scope: Deactivated successfully.
Oct 11 08:32:02 compute-0 podman[255671]: 2025-10-11 08:32:02.424954474 +0000 UTC m=+0.046919785 container create f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_payne, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:32:02 compute-0 systemd[1]: Started libpod-conmon-f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48.scope.
Oct 11 08:32:02 compute-0 podman[255671]: 2025-10-11 08:32:02.405681424 +0000 UTC m=+0.027646735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:32:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01325597af399b2ff9d7cba1e5433aa49e9809d51a64468a11565a1cceb66d9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01325597af399b2ff9d7cba1e5433aa49e9809d51a64468a11565a1cceb66d9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01325597af399b2ff9d7cba1e5433aa49e9809d51a64468a11565a1cceb66d9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01325597af399b2ff9d7cba1e5433aa49e9809d51a64468a11565a1cceb66d9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01325597af399b2ff9d7cba1e5433aa49e9809d51a64468a11565a1cceb66d9e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:02 compute-0 podman[255671]: 2025-10-11 08:32:02.533447783 +0000 UTC m=+0.155413144 container init f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_payne, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:32:02 compute-0 podman[255671]: 2025-10-11 08:32:02.547319101 +0000 UTC m=+0.169284392 container start f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_payne, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 08:32:02 compute-0 podman[255671]: 2025-10-11 08:32:02.550869971 +0000 UTC m=+0.172835252 container attach f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_payne, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 08:32:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:32:02 compute-0 python3.9[255791]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:32:03 compute-0 ceph-mon[74313]: pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:03 compute-0 python3.9[255920]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760171522.3421571-1555-13752243027683/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:32:03 compute-0 angry_payne[255722]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:32:03 compute-0 angry_payne[255722]: --> relative data size: 1.0
Oct 11 08:32:03 compute-0 angry_payne[255722]: --> All data devices are unavailable
Oct 11 08:32:03 compute-0 systemd[1]: libpod-f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48.scope: Deactivated successfully.
Oct 11 08:32:03 compute-0 podman[255671]: 2025-10-11 08:32:03.725393047 +0000 UTC m=+1.347358358 container died f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_payne, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 08:32:03 compute-0 systemd[1]: libpod-f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48.scope: Consumed 1.111s CPU time.
Oct 11 08:32:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-01325597af399b2ff9d7cba1e5433aa49e9809d51a64468a11565a1cceb66d9e-merged.mount: Deactivated successfully.
Oct 11 08:32:03 compute-0 podman[255671]: 2025-10-11 08:32:03.800079859 +0000 UTC m=+1.422045140 container remove f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 08:32:03 compute-0 systemd[1]: libpod-conmon-f34dc80e299718e2a37f14e69f59de32f4edc45ad15ed5445d3ffb0c9b565d48.scope: Deactivated successfully.
Oct 11 08:32:03 compute-0 sudo[255379]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:03 compute-0 sudo[255996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:32:03 compute-0 sudo[255996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:03 compute-0 sudo[255996]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:04 compute-0 sudo[256050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:32:04 compute-0 sudo[256050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:04 compute-0 sudo[256050]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:04 compute-0 sudo[256098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:32:04 compute-0 sudo[256098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:04 compute-0 sudo[256098]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:04 compute-0 ceph-mon[74313]: pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:32:04 compute-0 sudo[256132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:32:04 compute-0 sudo[256132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:32:04 compute-0 sudo[256198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cazdgwamyuqknqjksoliqnjjuqhpeeky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171523.8705897-1624-85962653010980/AnsiballZ_file.py'
Oct 11 08:32:04 compute-0 sudo[256198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:04 compute-0 python3.9[256200]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:32:04 compute-0 sudo[256198]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:04 compute-0 podman[256249]: 2025-10-11 08:32:04.676191758 +0000 UTC m=+0.048142500 container create 526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 08:32:04 compute-0 systemd[1]: Started libpod-conmon-526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258.scope.
Oct 11 08:32:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:32:04 compute-0 podman[256249]: 2025-10-11 08:32:04.65556631 +0000 UTC m=+0.027517052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:32:04 compute-0 podman[256249]: 2025-10-11 08:32:04.770638323 +0000 UTC m=+0.142589115 container init 526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:32:04 compute-0 podman[256249]: 2025-10-11 08:32:04.785996643 +0000 UTC m=+0.157947375 container start 526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:32:04 compute-0 podman[256249]: 2025-10-11 08:32:04.790337185 +0000 UTC m=+0.162287937 container attach 526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 08:32:04 compute-0 kind_cerf[256283]: 167 167
Oct 11 08:32:04 compute-0 systemd[1]: libpod-526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258.scope: Deactivated successfully.
Oct 11 08:32:04 compute-0 podman[256249]: 2025-10-11 08:32:04.796155768 +0000 UTC m=+0.168106500 container died 526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:32:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fe40077af7f6e854f30e998075e301caf5c936696c0cafc469f5bbee189c44c-merged.mount: Deactivated successfully.
Oct 11 08:32:04 compute-0 podman[256249]: 2025-10-11 08:32:04.85014179 +0000 UTC m=+0.222092522 container remove 526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:32:04 compute-0 systemd[1]: libpod-conmon-526bb5c527113b28ef7d4e9adb27c311bc36ab02d4b7f1475fba5b3533def258.scope: Deactivated successfully.
Oct 11 08:32:05 compute-0 podman[256383]: 2025-10-11 08:32:05.092675983 +0000 UTC m=+0.073495170 container create cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mahavira, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:32:05 compute-0 systemd[1]: Started libpod-conmon-cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852.scope.
Oct 11 08:32:05 compute-0 podman[256383]: 2025-10-11 08:32:05.063937568 +0000 UTC m=+0.044756775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:32:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f6fd0502e4457dd9c49f674ba2f6013e623a95b40e62ef16d8240b758755a23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f6fd0502e4457dd9c49f674ba2f6013e623a95b40e62ef16d8240b758755a23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f6fd0502e4457dd9c49f674ba2f6013e623a95b40e62ef16d8240b758755a23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f6fd0502e4457dd9c49f674ba2f6013e623a95b40e62ef16d8240b758755a23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:05 compute-0 podman[256383]: 2025-10-11 08:32:05.20966644 +0000 UTC m=+0.190485647 container init cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:32:05 compute-0 podman[256383]: 2025-10-11 08:32:05.226757458 +0000 UTC m=+0.207576655 container start cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mahavira, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 08:32:05 compute-0 podman[256383]: 2025-10-11 08:32:05.231318076 +0000 UTC m=+0.212137273 container attach cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mahavira, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 08:32:05 compute-0 sudo[256455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omhwkhstxgtepvhfcuggpgjzwhdygzgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171524.8029053-1632-167231778493843/AnsiballZ_copy.py'
Oct 11 08:32:05 compute-0 sudo[256455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:05 compute-0 python3.9[256458]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:32:05 compute-0 sudo[256455]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]: {
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:     "0": [
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:         {
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "devices": [
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "/dev/loop3"
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             ],
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "lv_name": "ceph_lv0",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "lv_size": "21470642176",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "name": "ceph_lv0",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "tags": {
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.cluster_name": "ceph",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.crush_device_class": "",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.encrypted": "0",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.osd_id": "0",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.type": "block",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.vdo": "0"
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             },
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "type": "block",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "vg_name": "ceph_vg0"
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:         }
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:     ],
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:     "1": [
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:         {
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "devices": [
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "/dev/loop4"
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             ],
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "lv_name": "ceph_lv1",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "lv_size": "21470642176",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "name": "ceph_lv1",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "tags": {
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.cluster_name": "ceph",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.crush_device_class": "",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.encrypted": "0",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.osd_id": "1",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.type": "block",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.vdo": "0"
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             },
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "type": "block",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "vg_name": "ceph_vg1"
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:         }
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:     ],
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:     "2": [
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:         {
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "devices": [
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "/dev/loop5"
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             ],
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "lv_name": "ceph_lv2",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "lv_size": "21470642176",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "name": "ceph_lv2",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "tags": {
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.cluster_name": "ceph",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.crush_device_class": "",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.encrypted": "0",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.osd_id": "2",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.type": "block",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:                 "ceph.vdo": "0"
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             },
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "type": "block",
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:             "vg_name": "ceph_vg2"
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:         }
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]:     ]
Oct 11 08:32:05 compute-0 elegant_mahavira[256425]: }
Oct 11 08:32:05 compute-0 systemd[1]: libpod-cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852.scope: Deactivated successfully.
Oct 11 08:32:05 compute-0 podman[256383]: 2025-10-11 08:32:05.99941715 +0000 UTC m=+0.980236337 container died cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:32:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f6fd0502e4457dd9c49f674ba2f6013e623a95b40e62ef16d8240b758755a23-merged.mount: Deactivated successfully.
Oct 11 08:32:06 compute-0 podman[256383]: 2025-10-11 08:32:06.086502369 +0000 UTC m=+1.067321536 container remove cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:32:06 compute-0 systemd[1]: libpod-conmon-cca1cfae9d46bb50bcbde1eb0dfb30869fa00bddfe85526f45a2410f0c43a852.scope: Deactivated successfully.
Oct 11 08:32:06 compute-0 sudo[256132]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:06 compute-0 podman[256568]: 2025-10-11 08:32:06.151467898 +0000 UTC m=+0.116518404 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:32:06 compute-0 sudo[256643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxwncboqkxpcbxhgbfdbtxrzwdneobyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171525.7280037-1640-99049637732699/AnsiballZ_stat.py'
Oct 11 08:32:06 compute-0 sudo[256643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:06 compute-0 sudo[256644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:32:06 compute-0 sudo[256644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:06 compute-0 sudo[256644]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 08:32:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 5676 writes, 23K keys, 5676 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5676 writes, 878 syncs, 6.46 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.062       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.062       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.062       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a77090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a77090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a77090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 08:32:06 compute-0 ceph-mon[74313]: pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:06 compute-0 sudo[256671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:32:06 compute-0 sudo[256671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:06 compute-0 sudo[256671]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:06 compute-0 python3.9[256658]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:32:06 compute-0 sudo[256696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:32:06 compute-0 sudo[256696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:06 compute-0 sudo[256696]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:06 compute-0 sudo[256643]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:06 compute-0 sudo[256721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:32:06 compute-0 sudo[256721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:06 compute-0 podman[256887]: 2025-10-11 08:32:06.89772927 +0000 UTC m=+0.053307074 container create deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:32:06 compute-0 systemd[1]: Started libpod-conmon-deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823.scope.
Oct 11 08:32:06 compute-0 podman[256887]: 2025-10-11 08:32:06.871103965 +0000 UTC m=+0.026681829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:32:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:32:07 compute-0 podman[256887]: 2025-10-11 08:32:07.001453646 +0000 UTC m=+0.157031510 container init deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 08:32:07 compute-0 podman[256887]: 2025-10-11 08:32:07.014326516 +0000 UTC m=+0.169904300 container start deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:32:07 compute-0 podman[256887]: 2025-10-11 08:32:07.018448221 +0000 UTC m=+0.174026035 container attach deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 08:32:07 compute-0 vibrant_heisenberg[256928]: 167 167
Oct 11 08:32:07 compute-0 systemd[1]: libpod-deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823.scope: Deactivated successfully.
Oct 11 08:32:07 compute-0 podman[256887]: 2025-10-11 08:32:07.024137231 +0000 UTC m=+0.179715045 container died deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 08:32:07 compute-0 sudo[256958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlejriciabwwwgtnpxfwkxberkesphpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171526.623717-1648-8221585267862/AnsiballZ_stat.py'
Oct 11 08:32:07 compute-0 sudo[256958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-543d316cd198d718a285aad7485e62e63c0f3a3eba392f45a7eef1c509d354d2-merged.mount: Deactivated successfully.
Oct 11 08:32:07 compute-0 podman[256887]: 2025-10-11 08:32:07.073297947 +0000 UTC m=+0.228875721 container remove deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 08:32:07 compute-0 systemd[1]: libpod-conmon-deab9ccdd2ba17e3f94c2edde3f5d126d574b382e4e9e7661e0e819d00c3a823.scope: Deactivated successfully.
Oct 11 08:32:07 compute-0 python3.9[256974]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:32:07 compute-0 sudo[256958]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:07 compute-0 podman[256982]: 2025-10-11 08:32:07.324084581 +0000 UTC m=+0.063782537 container create 9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:32:07 compute-0 systemd[1]: Started libpod-conmon-9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6.scope.
Oct 11 08:32:07 compute-0 podman[256982]: 2025-10-11 08:32:07.299386859 +0000 UTC m=+0.039084905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:32:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eccbe32beb8ca37d1ac5ae3ed2cb1983070b03c6958eb52d5215ce072d41d3a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eccbe32beb8ca37d1ac5ae3ed2cb1983070b03c6958eb52d5215ce072d41d3a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eccbe32beb8ca37d1ac5ae3ed2cb1983070b03c6958eb52d5215ce072d41d3a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eccbe32beb8ca37d1ac5ae3ed2cb1983070b03c6958eb52d5215ce072d41d3a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:07 compute-0 podman[256982]: 2025-10-11 08:32:07.431285954 +0000 UTC m=+0.170983950 container init 9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:32:07 compute-0 podman[256982]: 2025-10-11 08:32:07.445929784 +0000 UTC m=+0.185627790 container start 9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_merkle, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:32:07 compute-0 podman[256982]: 2025-10-11 08:32:07.449699509 +0000 UTC m=+0.189397505 container attach 9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_merkle, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:32:07 compute-0 sudo[257124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oezdegmwesxbddslevxnonmykoejzpcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171526.623717-1648-8221585267862/AnsiballZ_copy.py'
Oct 11 08:32:07 compute-0 sudo[257124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:32:07 compute-0 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 08:32:07 compute-0 python3.9[257126]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1760171526.623717-1648-8221585267862/.source _original_basename=.46essqj7 follow=False checksum=c9017603f1bc5177c2246b299a8eb428f37cce9e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 11 08:32:08 compute-0 sudo[257124]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:08 compute-0 ceph-mon[74313]: pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:08 compute-0 gifted_merkle[257022]: {
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:         "osd_id": 2,
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:         "type": "bluestore"
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:     },
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:         "osd_id": 0,
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:         "type": "bluestore"
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:     },
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:         "osd_id": 1,
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:         "type": "bluestore"
Oct 11 08:32:08 compute-0 gifted_merkle[257022]:     }
Oct 11 08:32:08 compute-0 gifted_merkle[257022]: }
Oct 11 08:32:08 compute-0 systemd[1]: libpod-9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6.scope: Deactivated successfully.
Oct 11 08:32:08 compute-0 systemd[1]: libpod-9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6.scope: Consumed 1.148s CPU time.
Oct 11 08:32:08 compute-0 podman[257256]: 2025-10-11 08:32:08.660174083 +0000 UTC m=+0.044478366 container died 9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_merkle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:32:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-eccbe32beb8ca37d1ac5ae3ed2cb1983070b03c6958eb52d5215ce072d41d3a7-merged.mount: Deactivated successfully.
Oct 11 08:32:08 compute-0 podman[257256]: 2025-10-11 08:32:08.735690869 +0000 UTC m=+0.119995062 container remove 9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_merkle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 08:32:08 compute-0 systemd[1]: libpod-conmon-9b2813e26064ba153df5c8a4d41f25cc074fc15784afeacc52be188b75d011a6.scope: Deactivated successfully.
Oct 11 08:32:08 compute-0 sudo[256721]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:32:08 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:32:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:32:08 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:32:08 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 5218a6f6-82d2-4439-b7d4-8cebc544d1f7 does not exist
Oct 11 08:32:08 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 95afc3f7-c5b4-43a5-8b3e-a68376bed9ce does not exist
Oct 11 08:32:08 compute-0 sudo[257321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:32:08 compute-0 sudo[257321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:08 compute-0 sudo[257321]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:08 compute-0 sudo[257347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:32:08 compute-0 sudo[257347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:32:08 compute-0 sudo[257347]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:09 compute-0 python3.9[257322]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:32:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:32:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:32:09 compute-0 podman[257497]: 2025-10-11 08:32:09.889395993 +0000 UTC m=+0.179742646 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct 11 08:32:10 compute-0 python3.9[257536]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:32:10 compute-0 python3.9[257670]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760171529.3303435-1674-151816430564190/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=837ffd9c004e5987a2e117698c56827ebbfeb5b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:32:10 compute-0 ceph-mon[74313]: pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:11 compute-0 python3.9[257820]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 11 08:32:12 compute-0 python3.9[257941]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760171530.897792-1689-232689671924507/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=722ab36345f3375cbdcf911ce8f6e1a8083d7e59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 11 08:32:12 compute-0 ceph-mon[74313]: pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:32:12 compute-0 sudo[258091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nirxdtlhrlxxfmpcswagopbksbtppqnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171532.511467-1706-41996679498063/AnsiballZ_container_config_data.py'
Oct 11 08:32:12 compute-0 sudo[258091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:13 compute-0 python3.9[258093]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 11 08:32:13 compute-0 sudo[258091]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:13 compute-0 sudo[258243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiczknvkvlkusavvstvomyvhdnomnzox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171533.486595-1715-280494634654819/AnsiballZ_container_config_hash.py'
Oct 11 08:32:13 compute-0 sudo[258243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:14 compute-0 python3.9[258245]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 08:32:14 compute-0 sudo[258243]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:14 compute-0 ceph-mon[74313]: pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:15 compute-0 sudo[258395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vclfejjmhzncqjcvcngsvmaauxwfdzpp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760171534.5638978-1725-204687485144512/AnsiballZ_edpm_container_manage.py'
Oct 11 08:32:15 compute-0 sudo[258395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:32:15.161 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:32:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:32:15.162 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:32:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:32:15.162 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:32:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:15 compute-0 python3[258397]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 08:32:16 compute-0 ceph-mon[74313]: pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:32:18 compute-0 ceph-mon[74313]: pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:20 compute-0 ceph-mon[74313]: pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:32:23 compute-0 ceph-mon[74313]: pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:24 compute-0 ceph-mon[74313]: pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:24 compute-0 podman[258469]: 2025-10-11 08:32:24.710173021 +0000 UTC m=+2.002081716 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 08:32:24 compute-0 podman[258410]: 2025-10-11 08:32:24.732636831 +0000 UTC m=+9.347437240 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 11 08:32:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:32:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:32:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:32:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:32:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:32:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:32:24 compute-0 podman[258512]: 2025-10-11 08:32:24.938348242 +0000 UTC m=+0.070741332 container create a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.build-date=20251001, tcib_managed=true, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:32:24 compute-0 podman[258512]: 2025-10-11 08:32:24.899075472 +0000 UTC m=+0.031468612 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 11 08:32:24 compute-0 python3[258397]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 11 08:32:25 compute-0 sudo[258395]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:25 compute-0 sudo[258700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofewlucxenfytsloifqfqdrznukstpik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171545.3614573-1733-240009443080343/AnsiballZ_stat.py'
Oct 11 08:32:25 compute-0 sudo[258700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:26 compute-0 python3.9[258702]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:32:26 compute-0 sudo[258700]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:26 compute-0 ceph-mon[74313]: pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:26 compute-0 sudo[258854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwabwhwyczalmfsfokllslchbrgtcecb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171546.5220985-1745-118679889774920/AnsiballZ_container_config_data.py'
Oct 11 08:32:26 compute-0 sudo[258854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:27 compute-0 python3.9[258856]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 11 08:32:27 compute-0 sudo[258854]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:27 compute-0 sudo[259006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aknhcufbnxaisfnbeqcykfkczmkhxron ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171547.453928-1754-76055183103440/AnsiballZ_container_config_hash.py'
Oct 11 08:32:27 compute-0 sudo[259006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:32:28 compute-0 python3.9[259008]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 11 08:32:28 compute-0 sudo[259006]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:28 compute-0 ceph-mon[74313]: pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:28 compute-0 sudo[259158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faktimsotstxkhcozgthmvqnpgkfcslr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760171548.4709058-1764-248412239880625/AnsiballZ_edpm_container_manage.py'
Oct 11 08:32:28 compute-0 sudo[259158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:29 compute-0 python3[259160]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 11 08:32:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:29 compute-0 podman[259198]: 2025-10-11 08:32:29.411114738 +0000 UTC m=+0.068629554 container create 73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2)
Oct 11 08:32:29 compute-0 podman[259198]: 2025-10-11 08:32:29.372230158 +0000 UTC m=+0.029745034 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 11 08:32:29 compute-0 python3[259160]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 kolla_start
Oct 11 08:32:29 compute-0 sudo[259158]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:30 compute-0 sudo[259385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaquheawmfwrzseejeyluzqzbazstnsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171549.8713322-1772-249688445288074/AnsiballZ_stat.py'
Oct 11 08:32:30 compute-0 sudo[259385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:30 compute-0 ceph-mon[74313]: pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.393672) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171550393870, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1544, "num_deletes": 251, "total_data_size": 2541071, "memory_usage": 2581312, "flush_reason": "Manual Compaction"}
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171550414905, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2485526, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14793, "largest_seqno": 16336, "table_properties": {"data_size": 2478349, "index_size": 4248, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14331, "raw_average_key_size": 19, "raw_value_size": 2464057, "raw_average_value_size": 3366, "num_data_blocks": 194, "num_entries": 732, "num_filter_entries": 732, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760171383, "oldest_key_time": 1760171383, "file_creation_time": 1760171550, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 21577 microseconds, and 11174 cpu microseconds.
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.415266) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2485526 bytes OK
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.415463) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.417136) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.417160) EVENT_LOG_v1 {"time_micros": 1760171550417152, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.417193) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2534380, prev total WAL file size 2534380, number of live WAL files 2.
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.419582) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2427KB)], [35(6838KB)]
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171550420086, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9487791, "oldest_snapshot_seqno": -1}
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3991 keys, 7714207 bytes, temperature: kUnknown
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171550474214, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7714207, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7685324, "index_size": 17829, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 97485, "raw_average_key_size": 24, "raw_value_size": 7610851, "raw_average_value_size": 1907, "num_data_blocks": 755, "num_entries": 3991, "num_filter_entries": 3991, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760171550, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.474952) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7714207 bytes
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.476803) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.3 rd, 141.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 6.7 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(6.9) write-amplify(3.1) OK, records in: 4505, records dropped: 514 output_compression: NoCompression
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.476879) EVENT_LOG_v1 {"time_micros": 1760171550476860, "job": 16, "event": "compaction_finished", "compaction_time_micros": 54422, "compaction_time_cpu_micros": 34225, "output_level": 6, "num_output_files": 1, "total_output_size": 7714207, "num_input_records": 4505, "num_output_records": 3991, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171550478795, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171550481718, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.419508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.481960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.481968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.481971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.481973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:32:30 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:32:30.481976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:32:30 compute-0 python3.9[259387]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:32:30 compute-0 sudo[259385]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:31 compute-0 sudo[259539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iebjayijbflrieqaqpuunfkmfltmwbin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171550.8800185-1781-99001908798841/AnsiballZ_file.py'
Oct 11 08:32:31 compute-0 sudo[259539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:31 compute-0 python3.9[259541]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:32:31 compute-0 sudo[259539]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:32 compute-0 podman[259664]: 2025-10-11 08:32:32.304099816 +0000 UTC m=+0.088750637 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 08:32:32 compute-0 sudo[259711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgmygfvfhyanijxwpttipxnkjjnuunbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171551.6557946-1781-128799318449788/AnsiballZ_copy.py'
Oct 11 08:32:32 compute-0 sudo[259711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:32 compute-0 ceph-mon[74313]: pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:32 compute-0 python3.9[259713]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760171551.6557946-1781-128799318449788/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 11 08:32:32 compute-0 sudo[259711]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:32:32 compute-0 sudo[259787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzyakgsgruwizedrivnrhynrprixftwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171551.6557946-1781-128799318449788/AnsiballZ_systemd.py'
Oct 11 08:32:32 compute-0 sudo[259787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:33 compute-0 python3.9[259789]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 11 08:32:33 compute-0 systemd[1]: Reloading.
Oct 11 08:32:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:33 compute-0 systemd-rc-local-generator[259814]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:32:33 compute-0 systemd-sysv-generator[259820]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:32:33 compute-0 sudo[259787]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:33 compute-0 sudo[259899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiyrcvbmfdmpuxjtgxcehrllteztmlpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171551.6557946-1781-128799318449788/AnsiballZ_systemd.py'
Oct 11 08:32:34 compute-0 sudo[259899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:34 compute-0 python3.9[259901]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 11 08:32:34 compute-0 systemd[1]: Reloading.
Oct 11 08:32:34 compute-0 ceph-mon[74313]: pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:34 compute-0 systemd-rc-local-generator[259929]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 08:32:34 compute-0 systemd-sysv-generator[259934]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 08:32:34 compute-0 systemd[1]: Starting nova_compute container...
Oct 11 08:32:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:32:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:34 compute-0 podman[259941]: 2025-10-11 08:32:34.914632404 +0000 UTC m=+0.157560034 container init 73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:32:34 compute-0 podman[259941]: 2025-10-11 08:32:34.927895865 +0000 UTC m=+0.170823445 container start 73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:32:34 compute-0 podman[259941]: nova_compute
Oct 11 08:32:34 compute-0 nova_compute[259955]: + sudo -E kolla_set_configs
Oct 11 08:32:34 compute-0 systemd[1]: Started nova_compute container.
Oct 11 08:32:34 compute-0 sudo[259899]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Validating config file
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Copying service configuration files
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Deleting /etc/ceph
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Creating directory /etc/ceph
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Setting permission for /etc/ceph
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Writing out command to execute
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 11 08:32:35 compute-0 nova_compute[259955]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 11 08:32:35 compute-0 nova_compute[259955]: ++ cat /run_command
Oct 11 08:32:35 compute-0 nova_compute[259955]: + CMD=nova-compute
Oct 11 08:32:35 compute-0 nova_compute[259955]: + ARGS=
Oct 11 08:32:35 compute-0 nova_compute[259955]: + sudo kolla_copy_cacerts
Oct 11 08:32:35 compute-0 nova_compute[259955]: + [[ ! -n '' ]]
Oct 11 08:32:35 compute-0 nova_compute[259955]: + . kolla_extend_start
Oct 11 08:32:35 compute-0 nova_compute[259955]: Running command: 'nova-compute'
Oct 11 08:32:35 compute-0 nova_compute[259955]: + echo 'Running command: '\''nova-compute'\'''
Oct 11 08:32:35 compute-0 nova_compute[259955]: + umask 0022
Oct 11 08:32:35 compute-0 nova_compute[259955]: + exec nova-compute
Oct 11 08:32:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:36 compute-0 python3.9[260117]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:32:36 compute-0 ceph-mon[74313]: pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:36 compute-0 podman[260194]: 2025-10-11 08:32:36.806344607 +0000 UTC m=+0.102954464 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 11 08:32:37 compute-0 python3.9[260284]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:32:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:37 compute-0 nova_compute[259955]: 2025-10-11 08:32:37.287 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 08:32:37 compute-0 nova_compute[259955]: 2025-10-11 08:32:37.287 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 08:32:37 compute-0 nova_compute[259955]: 2025-10-11 08:32:37.287 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 08:32:37 compute-0 nova_compute[259955]: 2025-10-11 08:32:37.287 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 11 08:32:37 compute-0 nova_compute[259955]: 2025-10-11 08:32:37.427 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:32:37 compute-0 nova_compute[259955]: 2025-10-11 08:32:37.460 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:32:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.105 2 INFO nova.virt.driver [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 11 08:32:38 compute-0 python3.9[260438]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.296 2 INFO nova.compute.provider_config [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.332 2 DEBUG oslo_concurrency.lockutils [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.332 2 DEBUG oslo_concurrency.lockutils [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.332 2 DEBUG oslo_concurrency.lockutils [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.333 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.333 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.333 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.333 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.334 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.334 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.334 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.334 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.334 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.335 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.335 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.335 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.335 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.335 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.336 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.336 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.336 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.336 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.336 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.336 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.337 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.337 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.337 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.337 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.337 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.338 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.338 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.338 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.338 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.338 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.339 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.339 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.339 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.339 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.339 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.340 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.340 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.340 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.340 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.340 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.341 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.341 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.341 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.341 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.341 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.342 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.342 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.342 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.342 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.342 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.343 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.343 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.343 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.343 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.343 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.343 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.344 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.344 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.344 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.344 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.345 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.345 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.345 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.345 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.345 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.345 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.346 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.346 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.346 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.346 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.346 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.347 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.347 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.347 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.347 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.348 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.348 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.348 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.348 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.348 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.349 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.349 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.349 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.349 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.350 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.350 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.350 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.350 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.350 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.351 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.351 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.351 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.351 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.351 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.352 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.352 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.352 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.352 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.352 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.353 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.353 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.353 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.353 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.353 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.353 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.354 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.354 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.354 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.354 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.354 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.355 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.355 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.355 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.355 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.355 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.355 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.356 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.356 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.356 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.356 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.356 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.357 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.357 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.357 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.357 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.357 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.357 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.358 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.358 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.358 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.358 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.358 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.359 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.359 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.359 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.359 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.359 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.360 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.360 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.360 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.360 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.360 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.361 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.361 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.361 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.361 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.361 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.362 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.362 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.362 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.362 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.362 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.363 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.363 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.363 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.363 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.363 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.364 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.364 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.364 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.364 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.364 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.365 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.365 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.365 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.365 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.365 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.366 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.366 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.366 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.366 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.366 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.367 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.367 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.367 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.367 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.367 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.367 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.368 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.368 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.368 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.368 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.368 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.369 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.369 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.369 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.369 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.369 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.370 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.370 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.370 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.370 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.370 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.371 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.371 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.371 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.371 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.371 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.372 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.372 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.372 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.372 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.372 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.373 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.373 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.373 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.373 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.373 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.374 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.374 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.374 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.374 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.374 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.374 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.375 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.375 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.375 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.375 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.375 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.376 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.376 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.376 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.376 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.376 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.377 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.377 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.377 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.377 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.377 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.377 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.377 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.378 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.378 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.378 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.378 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.378 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.378 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.378 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.379 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.379 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.379 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.379 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.379 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.379 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.379 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.379 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.380 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.380 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.380 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.380 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.380 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.380 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.380 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.381 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.381 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.381 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.381 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.381 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.381 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.381 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.381 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.382 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.382 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.382 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.382 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.382 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.382 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.382 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.383 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.383 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.383 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.383 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.383 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.383 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.383 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.384 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.384 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.384 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.384 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.384 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.384 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.384 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.385 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.385 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.385 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.385 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.385 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.385 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.385 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.386 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.386 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.386 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.386 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.386 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.386 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.387 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.387 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.387 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.387 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.387 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.387 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.387 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.388 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.388 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.388 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.388 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.388 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.388 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.388 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.389 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.389 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.389 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.389 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.389 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.389 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.389 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.390 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.390 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.390 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.390 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.390 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.390 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.390 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.391 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.391 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.391 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.391 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.391 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.391 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.391 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.392 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.392 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.392 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.392 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.392 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.392 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.393 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.393 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.393 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.393 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.393 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.393 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.394 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.394 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.394 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.394 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.394 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.394 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.395 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.395 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.395 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.395 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.395 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.395 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.395 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.396 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.396 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.396 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.396 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.396 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.396 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.397 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.397 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.397 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.397 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.397 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.397 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.397 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.398 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.398 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.398 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.398 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.398 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.398 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.398 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.398 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.399 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.399 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.399 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.399 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.399 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.399 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.400 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.400 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.400 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.400 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.400 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.400 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.401 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.401 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.401 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.401 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.401 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.401 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.402 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.402 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.402 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.402 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.402 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.402 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.402 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.403 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.403 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.403 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.403 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.403 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.403 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.403 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.403 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.404 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.404 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.404 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.404 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.404 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.404 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.404 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.405 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.405 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.405 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.405 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.405 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.405 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.406 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.406 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.406 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.406 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.406 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.406 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.407 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.407 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.407 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.407 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.407 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.407 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.408 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.408 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.408 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.408 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.408 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.408 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.408 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.409 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.409 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.409 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.409 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.409 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.409 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.410 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.410 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.410 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.410 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.410 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.410 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.410 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.411 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.411 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.411 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.411 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.411 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.412 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.412 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.412 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.412 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.412 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.413 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.413 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.413 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.413 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.413 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.413 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.413 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.414 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.414 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.414 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.414 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.414 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.414 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.414 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.415 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.415 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.415 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.415 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.415 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.415 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.416 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.416 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.416 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.416 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.416 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.416 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.416 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.417 2 WARNING oslo_config.cfg [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 11 08:32:38 compute-0 nova_compute[259955]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 11 08:32:38 compute-0 nova_compute[259955]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 11 08:32:38 compute-0 nova_compute[259955]: and ``live_migration_inbound_addr`` respectively.
Oct 11 08:32:38 compute-0 nova_compute[259955]: ).  Its value may be silently ignored in the future.
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.417 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.417 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.417 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.417 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.417 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.418 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.418 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.418 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.418 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.418 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.418 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.419 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.419 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.419 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.419 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.419 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.419 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.419 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.420 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rbd_secret_uuid        = 33219f8b-dc38-5a8f-a577-8ccc4b37190a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.420 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.420 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.420 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.420 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.420 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.421 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.421 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.421 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.421 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.421 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.421 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.421 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.422 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.422 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.422 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.422 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.422 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.422 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.423 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.423 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.423 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.423 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.423 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.423 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.423 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.424 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.424 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.424 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.424 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.424 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.424 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.424 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.425 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.425 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.425 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.425 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.425 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.425 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.425 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.426 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.426 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.426 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.426 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.426 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.426 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.427 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.427 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.427 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.427 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.427 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.427 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.427 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.428 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.428 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.428 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.428 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.428 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.428 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.428 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.429 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.429 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.429 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 ceph-mon[74313]: pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.429 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.429 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.430 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.430 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.430 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.430 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.430 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.431 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.431 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.431 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.431 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.431 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.431 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.432 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.432 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.432 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.432 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.432 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.432 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.432 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.433 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.433 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.433 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.433 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.433 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.433 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.433 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.434 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.434 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.434 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.434 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.434 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.434 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.435 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.435 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.435 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.435 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.435 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.435 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.436 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.436 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.436 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.436 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.436 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.437 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.437 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.437 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.437 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.437 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.438 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.438 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.438 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.438 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.438 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.439 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.439 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.439 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.439 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.439 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.440 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.440 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.440 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.440 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.440 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.441 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.441 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.441 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.441 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.441 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.442 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.442 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.442 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.442 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.442 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.442 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.443 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.443 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.443 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.443 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.443 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.444 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.444 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.444 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.444 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.444 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.445 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.445 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.445 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.445 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.445 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.446 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.446 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.446 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.446 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.446 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.446 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.447 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.447 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.447 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.447 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.448 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.448 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.448 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.448 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.448 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.448 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.448 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.449 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.449 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.449 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.449 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.449 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.449 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.449 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.450 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.450 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.450 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.450 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.450 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.450 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.451 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.451 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.451 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.451 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.451 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.451 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.451 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.452 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.452 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.452 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.452 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.452 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.452 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.452 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.453 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.453 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.453 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.453 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.453 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.453 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.453 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.454 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.454 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.454 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.454 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.454 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.454 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.454 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.455 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.455 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.455 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.455 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.455 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.455 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.455 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.456 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.456 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.456 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.456 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.456 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.456 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.456 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.457 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.457 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.457 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.457 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.457 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.457 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.458 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.458 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.458 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.458 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.458 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.458 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.458 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.459 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.459 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.459 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.459 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.459 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.459 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.459 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.460 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.460 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.460 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.460 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.460 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.460 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.460 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.460 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.461 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.461 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.461 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.461 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.461 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.461 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.461 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.462 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.462 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.462 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.462 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.462 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.462 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.462 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.462 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.463 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.463 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.463 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.463 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.463 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.463 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.463 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.464 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.464 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.464 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.464 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.464 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.464 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.465 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.465 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.465 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.465 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.465 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.465 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.465 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.465 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.466 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.466 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.466 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.466 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.466 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.466 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.466 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.467 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.467 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.467 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.467 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.467 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.467 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.467 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.467 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.468 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.468 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.468 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.468 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.468 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.468 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.468 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.469 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.469 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.469 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.469 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.469 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.469 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.469 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.470 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.470 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.470 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.470 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.470 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.470 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.470 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.471 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.471 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.471 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.471 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.471 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.471 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.471 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.471 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.472 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.472 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.472 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.472 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.472 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.472 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.472 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.473 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.473 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.473 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.473 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.473 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.473 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.473 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.473 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.474 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.474 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.474 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.474 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.474 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.474 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.475 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.475 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.475 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.475 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.475 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.476 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.476 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.476 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.476 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.476 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.476 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.477 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.477 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.477 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.477 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.477 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.477 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.477 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.478 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.478 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.478 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.478 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.478 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.478 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.479 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.479 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.479 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.479 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.479 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.479 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.479 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.480 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.480 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.480 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.480 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.480 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.480 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.481 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.481 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.481 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.481 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.481 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.481 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.481 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.482 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.482 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.482 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.482 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.482 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.482 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.482 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.483 2 DEBUG oslo_service.service [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.484 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.504 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.505 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.505 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.505 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 11 08:32:38 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 11 08:32:38 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.591 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f0338b40040> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.594 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f0338b40040> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.596 2 INFO nova.virt.libvirt.driver [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Connection event '1' reason 'None'
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.618 2 WARNING nova.virt.libvirt.driver [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 11 08:32:38 compute-0 nova_compute[259955]: 2025-10-11 08:32:38.619 2 DEBUG nova.virt.libvirt.volume.mount [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 11 08:32:38 compute-0 sudo[260640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wndjaqrbssdhyuozidyoaslywagkgakf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171558.4709527-1841-44388469337197/AnsiballZ_podman_container.py'
Oct 11 08:32:38 compute-0 sudo[260640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:39 compute-0 python3.9[260642]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 11 08:32:39 compute-0 sudo[260640]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:39 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 08:32:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:39 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.558 2 INFO nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Libvirt host capabilities <capabilities>
Oct 11 08:32:39 compute-0 nova_compute[259955]: 
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <host>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <uuid>441009ae-b831-47c3-ab1e-dc607f9feb6d</uuid>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <cpu>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <arch>x86_64</arch>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model>EPYC-Rome-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <vendor>AMD</vendor>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <microcode version='16777317'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <signature family='23' model='49' stepping='0'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='x2apic'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='tsc-deadline'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='osxsave'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='hypervisor'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='tsc_adjust'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='spec-ctrl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='stibp'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='arch-capabilities'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='ssbd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='cmp_legacy'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='topoext'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='virt-ssbd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='lbrv'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='tsc-scale'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='vmcb-clean'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='pause-filter'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='pfthreshold'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='svme-addr-chk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='rdctl-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='skip-l1dfl-vmentry'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='mds-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature name='pschange-mc-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <pages unit='KiB' size='4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <pages unit='KiB' size='2048'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <pages unit='KiB' size='1048576'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </cpu>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <power_management>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <suspend_mem/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </power_management>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <iommu support='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <migration_features>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <live/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <uri_transports>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <uri_transport>tcp</uri_transport>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <uri_transport>rdma</uri_transport>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </uri_transports>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </migration_features>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <topology>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <cells num='1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <cell id='0'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:           <memory unit='KiB'>7864360</memory>
Oct 11 08:32:39 compute-0 nova_compute[259955]:           <pages unit='KiB' size='4'>1966090</pages>
Oct 11 08:32:39 compute-0 nova_compute[259955]:           <pages unit='KiB' size='2048'>0</pages>
Oct 11 08:32:39 compute-0 nova_compute[259955]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 11 08:32:39 compute-0 nova_compute[259955]:           <distances>
Oct 11 08:32:39 compute-0 nova_compute[259955]:             <sibling id='0' value='10'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:           </distances>
Oct 11 08:32:39 compute-0 nova_compute[259955]:           <cpus num='8'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:           </cpus>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         </cell>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </cells>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </topology>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <cache>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </cache>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <secmodel>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model>selinux</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <doi>0</doi>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </secmodel>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <secmodel>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model>dac</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <doi>0</doi>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </secmodel>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </host>
Oct 11 08:32:39 compute-0 nova_compute[259955]: 
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <guest>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <os_type>hvm</os_type>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <arch name='i686'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <wordsize>32</wordsize>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <domain type='qemu'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <domain type='kvm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </arch>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <features>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <pae/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <nonpae/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <acpi default='on' toggle='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <apic default='on' toggle='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <cpuselection/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <deviceboot/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <disksnapshot default='on' toggle='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <externalSnapshot/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </features>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </guest>
Oct 11 08:32:39 compute-0 nova_compute[259955]: 
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <guest>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <os_type>hvm</os_type>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <arch name='x86_64'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <wordsize>64</wordsize>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <domain type='qemu'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <domain type='kvm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </arch>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <features>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <acpi default='on' toggle='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <apic default='on' toggle='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <cpuselection/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <deviceboot/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <disksnapshot default='on' toggle='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <externalSnapshot/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </features>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </guest>
Oct 11 08:32:39 compute-0 nova_compute[259955]: 
Oct 11 08:32:39 compute-0 nova_compute[259955]: </capabilities>
Oct 11 08:32:39 compute-0 nova_compute[259955]: 
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.567 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.602 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 11 08:32:39 compute-0 nova_compute[259955]: <domainCapabilities>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <domain>kvm</domain>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <arch>i686</arch>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <vcpu max='4096'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <iothreads supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <os supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <enum name='firmware'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <loader supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='type'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>rom</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>pflash</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='readonly'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>yes</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>no</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='secure'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>no</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </loader>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </os>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <cpu>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <mode name='host-passthrough' supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='hostPassthroughMigratable'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>on</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>off</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </mode>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <mode name='maximum' supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='maximumMigratable'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>on</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>off</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </mode>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <mode name='host-model' supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <vendor>AMD</vendor>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='x2apic'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='hypervisor'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='stibp'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='ssbd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='overflow-recov'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='succor'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='ibrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='lbrv'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='tsc-scale'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='flushbyasid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='pause-filter'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='pfthreshold'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='rdctl-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='mds-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='gds-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='rfds-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='disable' name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </mode>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <mode name='custom' supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-noTSX'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cooperlake'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cooperlake-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cooperlake-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Denverton'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mpx'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Denverton-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mpx'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Denverton-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Denverton-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Dhyana-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Genoa'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amd-psfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='auto-ibrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='stibp-always-on'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amd-psfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='auto-ibrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='stibp-always-on'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Milan'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Milan-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Milan-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amd-psfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='stibp-always-on'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Rome'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Rome-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Rome-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Rome-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='GraniteRapids'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='prefetchiti'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='GraniteRapids-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='prefetchiti'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='GraniteRapids-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx10'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx10-128'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx10-256'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx10-512'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='prefetchiti'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-noTSX'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v5'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v6'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v7'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='IvyBridge'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='IvyBridge-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='IvyBridge-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='IvyBridge-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='KnightsMill'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-4fmaps'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-4vnniw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512er'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512pf'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='KnightsMill-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-4fmaps'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-4vnniw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512er'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512pf'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Opteron_G4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fma4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xop'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Opteron_G4-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fma4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xop'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Opteron_G5'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fma4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tbm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xop'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Opteron_G5-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fma4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tbm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xop'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SapphireRapids'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SapphireRapids-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SapphireRapids-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SapphireRapids-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SierraForest'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-ne-convert'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cmpccxadd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SierraForest-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-ne-convert'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cmpccxadd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v5'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='core-capability'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mpx'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='split-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='core-capability'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mpx'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='split-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='core-capability'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='split-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='core-capability'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='split-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='athlon'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnow'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnowext'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='athlon-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnow'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnowext'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='core2duo'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='core2duo-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='coreduo'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='coreduo-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='n270'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='n270-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='phenom'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnow'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnowext'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='phenom-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnow'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnowext'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </mode>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </cpu>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <memoryBacking supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <enum name='sourceType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>file</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>anonymous</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>memfd</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </memoryBacking>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <devices>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <disk supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='diskDevice'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>disk</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>cdrom</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>floppy</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>lun</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='bus'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>fdc</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>scsi</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>usb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>sata</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio-transitional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio-non-transitional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </disk>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <graphics supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='type'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vnc</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>egl-headless</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>dbus</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </graphics>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <video supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='modelType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vga</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>cirrus</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>none</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>bochs</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>ramfb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </video>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <hostdev supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='mode'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>subsystem</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='startupPolicy'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>default</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>mandatory</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>requisite</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>optional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='subsysType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>usb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>pci</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>scsi</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='capsType'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='pciBackend'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </hostdev>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <rng supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio-transitional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio-non-transitional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendModel'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>random</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>egd</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>builtin</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </rng>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <filesystem supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='driverType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>path</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>handle</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtiofs</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </filesystem>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <tpm supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>tpm-tis</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>tpm-crb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendModel'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>emulator</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>external</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendVersion'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>2.0</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </tpm>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <redirdev supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='bus'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>usb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </redirdev>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <channel supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='type'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>pty</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>unix</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </channel>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <crypto supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='type'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>qemu</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendModel'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>builtin</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </crypto>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <interface supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>default</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>passt</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </interface>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <panic supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>isa</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>hyperv</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </panic>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </devices>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <features>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <gic supported='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <vmcoreinfo supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <genid supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <backingStoreInput supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <backup supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <async-teardown supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <ps2 supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <sev supported='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <sgx supported='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <hyperv supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='features'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>relaxed</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vapic</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>spinlocks</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vpindex</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>runtime</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>synic</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>stimer</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>reset</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vendor_id</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>frequencies</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>reenlightenment</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>tlbflush</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>ipi</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>avic</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>emsr_bitmap</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>xmm_input</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </hyperv>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <launchSecurity supported='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </features>
Oct 11 08:32:39 compute-0 nova_compute[259955]: </domainCapabilities>
Oct 11 08:32:39 compute-0 nova_compute[259955]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.612 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 11 08:32:39 compute-0 nova_compute[259955]: <domainCapabilities>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <domain>kvm</domain>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <arch>i686</arch>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <vcpu max='240'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <iothreads supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <os supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <enum name='firmware'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <loader supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='type'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>rom</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>pflash</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='readonly'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>yes</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>no</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='secure'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>no</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </loader>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </os>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <cpu>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <mode name='host-passthrough' supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='hostPassthroughMigratable'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>on</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>off</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </mode>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <mode name='maximum' supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='maximumMigratable'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>on</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>off</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </mode>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <mode name='host-model' supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <vendor>AMD</vendor>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='x2apic'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='hypervisor'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='stibp'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='ssbd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='overflow-recov'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='succor'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='ibrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='lbrv'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='tsc-scale'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='flushbyasid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='pause-filter'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='pfthreshold'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='rdctl-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='mds-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='gds-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='rfds-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='disable' name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </mode>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <mode name='custom' supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-noTSX'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cooperlake'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cooperlake-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cooperlake-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Denverton'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mpx'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Denverton-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mpx'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Denverton-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Denverton-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Dhyana-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Genoa'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amd-psfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='auto-ibrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='stibp-always-on'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amd-psfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='auto-ibrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='stibp-always-on'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Milan'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Milan-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Milan-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amd-psfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='stibp-always-on'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Rome'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Rome-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Rome-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Rome-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='GraniteRapids'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='prefetchiti'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='GraniteRapids-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='prefetchiti'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='GraniteRapids-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx10'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx10-128'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx10-256'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx10-512'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='prefetchiti'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-noTSX'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v5'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v6'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v7'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='IvyBridge'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='IvyBridge-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='IvyBridge-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='IvyBridge-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='KnightsMill'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-4fmaps'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-4vnniw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512er'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512pf'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='KnightsMill-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-4fmaps'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-4vnniw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512er'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512pf'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Opteron_G4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fma4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xop'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Opteron_G4-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fma4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xop'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Opteron_G5'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fma4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tbm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xop'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Opteron_G5-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fma4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tbm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xop'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SapphireRapids'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SapphireRapids-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SapphireRapids-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SapphireRapids-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SierraForest'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-ne-convert'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cmpccxadd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SierraForest-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-ne-convert'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cmpccxadd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v5'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='core-capability'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mpx'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='split-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='core-capability'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mpx'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='split-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='core-capability'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='split-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='core-capability'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='split-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='athlon'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnow'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnowext'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='athlon-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnow'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnowext'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='core2duo'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='core2duo-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='coreduo'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='coreduo-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='n270'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='n270-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='phenom'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnow'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnowext'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='phenom-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnow'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnowext'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </mode>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </cpu>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <memoryBacking supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <enum name='sourceType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>file</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>anonymous</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>memfd</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </memoryBacking>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <devices>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <disk supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='diskDevice'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>disk</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>cdrom</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>floppy</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>lun</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='bus'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>ide</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>fdc</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>scsi</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>usb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>sata</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio-transitional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio-non-transitional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </disk>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <graphics supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='type'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vnc</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>egl-headless</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>dbus</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </graphics>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <video supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='modelType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vga</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>cirrus</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>none</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>bochs</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>ramfb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </video>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <hostdev supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='mode'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>subsystem</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='startupPolicy'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>default</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>mandatory</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>requisite</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>optional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='subsysType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>usb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>pci</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>scsi</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='capsType'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='pciBackend'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </hostdev>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <rng supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio-transitional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio-non-transitional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendModel'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>random</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>egd</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>builtin</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </rng>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <filesystem supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='driverType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>path</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>handle</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtiofs</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </filesystem>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <tpm supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>tpm-tis</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>tpm-crb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendModel'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>emulator</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>external</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendVersion'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>2.0</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </tpm>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <redirdev supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='bus'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>usb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </redirdev>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <channel supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='type'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>pty</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>unix</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </channel>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <crypto supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='type'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>qemu</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendModel'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>builtin</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </crypto>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <interface supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>default</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>passt</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </interface>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <panic supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>isa</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>hyperv</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </panic>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </devices>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <features>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <gic supported='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <vmcoreinfo supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <genid supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <backingStoreInput supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <backup supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <async-teardown supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <ps2 supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <sev supported='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <sgx supported='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <hyperv supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='features'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>relaxed</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vapic</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>spinlocks</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vpindex</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>runtime</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>synic</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>stimer</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>reset</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vendor_id</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>frequencies</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>reenlightenment</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>tlbflush</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>ipi</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>avic</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>emsr_bitmap</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>xmm_input</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </hyperv>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <launchSecurity supported='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </features>
Oct 11 08:32:39 compute-0 nova_compute[259955]: </domainCapabilities>
Oct 11 08:32:39 compute-0 nova_compute[259955]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.662 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.668 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 11 08:32:39 compute-0 nova_compute[259955]: <domainCapabilities>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <domain>kvm</domain>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <arch>x86_64</arch>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <vcpu max='4096'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <iothreads supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <os supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <enum name='firmware'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>efi</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <loader supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='type'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>rom</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>pflash</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='readonly'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>yes</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>no</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='secure'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>yes</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>no</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </loader>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </os>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <cpu>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <mode name='host-passthrough' supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='hostPassthroughMigratable'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>on</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>off</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </mode>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <mode name='maximum' supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='maximumMigratable'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>on</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>off</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </mode>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <mode name='host-model' supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <vendor>AMD</vendor>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='x2apic'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='hypervisor'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='stibp'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='ssbd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='overflow-recov'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='succor'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='ibrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='lbrv'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='tsc-scale'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='flushbyasid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='pause-filter'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='pfthreshold'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='rdctl-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='mds-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='gds-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='rfds-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='disable' name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </mode>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <mode name='custom' supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-noTSX'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cooperlake'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cooperlake-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cooperlake-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Denverton'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mpx'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Denverton-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mpx'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Denverton-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Denverton-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Dhyana-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Genoa'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amd-psfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='auto-ibrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='stibp-always-on'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amd-psfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='auto-ibrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='stibp-always-on'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Milan'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Milan-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Milan-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amd-psfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='stibp-always-on'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Rome'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Rome-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Rome-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Rome-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='GraniteRapids'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='prefetchiti'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='GraniteRapids-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='prefetchiti'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='GraniteRapids-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx10'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx10-128'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx10-256'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx10-512'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='prefetchiti'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-noTSX'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v5'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v6'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v7'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='IvyBridge'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='IvyBridge-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='IvyBridge-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='IvyBridge-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='KnightsMill'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-4fmaps'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-4vnniw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512er'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512pf'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='KnightsMill-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-4fmaps'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-4vnniw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512er'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512pf'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Opteron_G4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fma4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xop'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Opteron_G4-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fma4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xop'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Opteron_G5'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fma4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tbm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xop'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Opteron_G5-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fma4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tbm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xop'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SapphireRapids'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SapphireRapids-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SapphireRapids-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SapphireRapids-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SierraForest'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-ne-convert'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cmpccxadd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SierraForest-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-ne-convert'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cmpccxadd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v5'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='core-capability'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mpx'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='split-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='core-capability'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mpx'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='split-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='core-capability'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='split-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='core-capability'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='split-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='athlon'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnow'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnowext'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='athlon-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnow'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnowext'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='core2duo'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='core2duo-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='coreduo'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='coreduo-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='n270'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='n270-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='phenom'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnow'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnowext'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='phenom-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnow'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnowext'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </mode>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </cpu>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <memoryBacking supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <enum name='sourceType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>file</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>anonymous</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>memfd</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </memoryBacking>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <devices>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <disk supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='diskDevice'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>disk</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>cdrom</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>floppy</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>lun</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='bus'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>fdc</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>scsi</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>usb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>sata</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio-transitional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio-non-transitional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </disk>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <graphics supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='type'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vnc</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>egl-headless</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>dbus</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </graphics>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <video supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='modelType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vga</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>cirrus</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>none</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>bochs</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>ramfb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </video>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <hostdev supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='mode'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>subsystem</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='startupPolicy'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>default</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>mandatory</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>requisite</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>optional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='subsysType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>usb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>pci</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>scsi</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='capsType'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='pciBackend'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </hostdev>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <rng supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio-transitional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio-non-transitional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendModel'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>random</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>egd</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>builtin</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </rng>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <filesystem supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='driverType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>path</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>handle</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtiofs</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </filesystem>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <tpm supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>tpm-tis</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>tpm-crb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendModel'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>emulator</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>external</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendVersion'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>2.0</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </tpm>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <redirdev supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='bus'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>usb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </redirdev>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <channel supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='type'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>pty</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>unix</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </channel>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <crypto supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='type'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>qemu</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendModel'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>builtin</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </crypto>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <interface supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>default</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>passt</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </interface>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <panic supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>isa</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>hyperv</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </panic>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </devices>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <features>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <gic supported='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <vmcoreinfo supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <genid supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <backingStoreInput supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <backup supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <async-teardown supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <ps2 supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <sev supported='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <sgx supported='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <hyperv supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='features'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>relaxed</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vapic</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>spinlocks</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vpindex</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>runtime</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>synic</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>stimer</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>reset</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vendor_id</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>frequencies</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>reenlightenment</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>tlbflush</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>ipi</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>avic</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>emsr_bitmap</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>xmm_input</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </hyperv>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <launchSecurity supported='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </features>
Oct 11 08:32:39 compute-0 nova_compute[259955]: </domainCapabilities>
Oct 11 08:32:39 compute-0 nova_compute[259955]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.729 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 11 08:32:39 compute-0 nova_compute[259955]: <domainCapabilities>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <domain>kvm</domain>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <arch>x86_64</arch>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <vcpu max='240'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <iothreads supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <os supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <enum name='firmware'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <loader supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='type'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>rom</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>pflash</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='readonly'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>yes</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>no</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='secure'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>no</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </loader>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </os>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <cpu>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <mode name='host-passthrough' supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='hostPassthroughMigratable'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>on</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>off</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </mode>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <mode name='maximum' supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='maximumMigratable'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>on</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>off</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </mode>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <mode name='host-model' supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <vendor>AMD</vendor>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='x2apic'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='hypervisor'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='stibp'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='ssbd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='overflow-recov'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='succor'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='ibrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='lbrv'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='tsc-scale'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='flushbyasid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='pause-filter'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='pfthreshold'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='rdctl-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='mds-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='gds-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='require' name='rfds-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <feature policy='disable' name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </mode>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <mode name='custom' supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-noTSX'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Broadwell-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cooperlake'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cooperlake-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Cooperlake-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Denverton'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mpx'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Denverton-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mpx'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Denverton-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Denverton-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Dhyana-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Genoa'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amd-psfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='auto-ibrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='stibp-always-on'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amd-psfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='auto-ibrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='stibp-always-on'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Milan'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Milan-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Milan-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amd-psfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='stibp-always-on'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Rome'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Rome-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Rome-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-Rome-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='EPYC-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='GraniteRapids'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='prefetchiti'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='GraniteRapids-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='prefetchiti'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='GraniteRapids-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx10'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx10-128'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx10-256'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx10-512'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='prefetchiti'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-noTSX'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Haswell-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v5'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v6'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Icelake-Server-v7'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='IvyBridge'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='IvyBridge-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='IvyBridge-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='IvyBridge-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='KnightsMill'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-4fmaps'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-4vnniw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512er'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512pf'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='KnightsMill-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-4fmaps'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-4vnniw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512er'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512pf'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Opteron_G4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fma4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xop'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Opteron_G4-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fma4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xop'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Opteron_G5'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fma4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tbm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xop'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Opteron_G5-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fma4'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tbm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xop'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SapphireRapids'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SapphireRapids-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SapphireRapids-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SapphireRapids-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='amx-tile'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-bf16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-fp16'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bitalg'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrc'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fzrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='la57'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='taa-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xfd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SierraForest'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-ne-convert'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cmpccxadd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='SierraForest-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-ifma'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-ne-convert'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx-vnni-int8'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cmpccxadd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fbsdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='fsrs'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ibrs-all'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mcdt-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pbrsb-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='psdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='serialize'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vaes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Client-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='hle'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='rtm'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Skylake-Server-v5'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512bw'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512cd'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512dq'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512f'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='avx512vl'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='invpcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pcid'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='pku'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='core-capability'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mpx'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='split-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='core-capability'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='mpx'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='split-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge-v2'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='core-capability'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='split-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge-v3'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='core-capability'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='split-lock-detect'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='Snowridge-v4'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='cldemote'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='erms'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='gfni'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdir64b'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='movdiri'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='xsaves'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='athlon'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnow'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnowext'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='athlon-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnow'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnowext'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='core2duo'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='core2duo-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='coreduo'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='coreduo-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='n270'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='n270-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='ss'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='phenom'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnow'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnowext'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <blockers model='phenom-v1'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnow'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <feature name='3dnowext'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </blockers>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </mode>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </cpu>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <memoryBacking supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <enum name='sourceType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>file</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>anonymous</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <value>memfd</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </memoryBacking>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <devices>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <disk supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='diskDevice'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>disk</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>cdrom</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>floppy</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>lun</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='bus'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>ide</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>fdc</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>scsi</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>usb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>sata</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio-transitional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio-non-transitional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </disk>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <graphics supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='type'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vnc</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>egl-headless</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>dbus</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </graphics>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <video supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='modelType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vga</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>cirrus</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>none</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>bochs</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>ramfb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </video>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <hostdev supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='mode'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>subsystem</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='startupPolicy'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>default</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>mandatory</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>requisite</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>optional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='subsysType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>usb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>pci</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>scsi</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='capsType'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='pciBackend'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </hostdev>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <rng supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio-transitional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtio-non-transitional</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendModel'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>random</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>egd</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>builtin</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </rng>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <filesystem supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='driverType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>path</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>handle</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>virtiofs</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </filesystem>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <tpm supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>tpm-tis</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>tpm-crb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendModel'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>emulator</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>external</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendVersion'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>2.0</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </tpm>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <redirdev supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='bus'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>usb</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </redirdev>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <channel supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='type'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>pty</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>unix</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </channel>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <crypto supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='type'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>qemu</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendModel'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>builtin</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </crypto>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <interface supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='backendType'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>default</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>passt</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </interface>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <panic supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='model'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>isa</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>hyperv</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </panic>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </devices>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   <features>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <gic supported='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <vmcoreinfo supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <genid supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <backingStoreInput supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <backup supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <async-teardown supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <ps2 supported='yes'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <sev supported='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <sgx supported='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <hyperv supported='yes'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       <enum name='features'>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>relaxed</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vapic</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>spinlocks</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vpindex</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>runtime</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>synic</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>stimer</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>reset</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>vendor_id</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>frequencies</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>reenlightenment</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>tlbflush</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>ipi</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>avic</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>emsr_bitmap</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:         <value>xmm_input</value>
Oct 11 08:32:39 compute-0 nova_compute[259955]:       </enum>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     </hyperv>
Oct 11 08:32:39 compute-0 nova_compute[259955]:     <launchSecurity supported='no'/>
Oct 11 08:32:39 compute-0 nova_compute[259955]:   </features>
Oct 11 08:32:39 compute-0 nova_compute[259955]: </domainCapabilities>
Oct 11 08:32:39 compute-0 nova_compute[259955]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.783 2 DEBUG nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.783 2 INFO nova.virt.libvirt.host [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Secure Boot support detected
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.786 2 INFO nova.virt.libvirt.driver [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.786 2 INFO nova.virt.libvirt.driver [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.801 2 DEBUG nova.virt.libvirt.driver [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.844 2 INFO nova.virt.node [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Determined node identity ead2f521-4d5d-46d9-864c-1aac19134114 from /var/lib/nova/compute_id
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.885 2 WARNING nova.compute.manager [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Compute nodes ['ead2f521-4d5d-46d9-864c-1aac19134114'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.937 2 INFO nova.compute.manager [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.988 2 WARNING nova.compute.manager [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.988 2 DEBUG oslo_concurrency.lockutils [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.989 2 DEBUG oslo_concurrency.lockutils [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.989 2 DEBUG oslo_concurrency.lockutils [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.989 2 DEBUG nova.compute.resource_tracker [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:32:39 compute-0 nova_compute[259955]: 2025-10-11 08:32:39.990 2 DEBUG oslo_concurrency.processutils [None req-cb6e29da-07c7-400e-907a-0634244d7819 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:32:39 compute-0 sudo[260828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bupsnrsqonnxerrsdxfmtvjzmuohblpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171559.5577009-1849-112941087864868/AnsiballZ_systemd.py'
Oct 11 08:32:40 compute-0 sudo[260828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:40 compute-0 podman[260830]: 2025-10-11 08:32:40.165726579 +0000 UTC m=+0.149094637 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 08:32:40 compute-0 python3.9[260832]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 11 08:32:40 compute-0 systemd[1]: Stopping nova_compute container...
Oct 11 08:32:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:32:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2926021458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:32:40 compute-0 nova_compute[259955]: 2025-10-11 08:32:40.426 2 DEBUG oslo_concurrency.lockutils [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:32:40 compute-0 nova_compute[259955]: 2025-10-11 08:32:40.427 2 DEBUG oslo_concurrency.lockutils [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:32:40 compute-0 nova_compute[259955]: 2025-10-11 08:32:40.427 2 DEBUG oslo_concurrency.lockutils [None req-74aa9f85-2d31-4434-9d9e-55f2de00e79a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:32:40 compute-0 ceph-mon[74313]: pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:40 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2926021458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:32:40 compute-0 systemd[1]: libpod-73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d.scope: Deactivated successfully.
Oct 11 08:32:40 compute-0 virtqemud[260524]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 11 08:32:40 compute-0 virtqemud[260524]: hostname: compute-0
Oct 11 08:32:40 compute-0 virtqemud[260524]: End of file while reading data: Input/output error
Oct 11 08:32:40 compute-0 systemd[1]: libpod-73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d.scope: Consumed 3.616s CPU time.
Oct 11 08:32:40 compute-0 podman[260879]: 2025-10-11 08:32:40.931929389 +0000 UTC m=+0.563698799 container died 73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 08:32:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d-userdata-shm.mount: Deactivated successfully.
Oct 11 08:32:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949-merged.mount: Deactivated successfully.
Oct 11 08:32:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:41 compute-0 podman[260879]: 2025-10-11 08:32:41.430208275 +0000 UTC m=+1.061977645 container cleanup 73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, container_name=nova_compute, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 08:32:41 compute-0 podman[260879]: nova_compute
Oct 11 08:32:41 compute-0 podman[260907]: nova_compute
Oct 11 08:32:41 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 11 08:32:41 compute-0 systemd[1]: Stopped nova_compute container.
Oct 11 08:32:41 compute-0 systemd[1]: Starting nova_compute container...
Oct 11 08:32:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9446680486d73744fe42e7bc9dbd4018ec0b5ae0f06cba6589fac0eabf4e949/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:41 compute-0 podman[260920]: 2025-10-11 08:32:41.675410383 +0000 UTC m=+0.121397261 container init 73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:32:41 compute-0 podman[260920]: 2025-10-11 08:32:41.687198123 +0000 UTC m=+0.133184981 container start 73b98b0be9f6f26e777db084dcc633e96c305d7c0e44252280b6444085b3225d (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=nova_compute)
Oct 11 08:32:41 compute-0 podman[260920]: nova_compute
Oct 11 08:32:41 compute-0 nova_compute[260935]: + sudo -E kolla_set_configs
Oct 11 08:32:41 compute-0 systemd[1]: Started nova_compute container.
Oct 11 08:32:41 compute-0 sudo[260828]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Validating config file
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Copying service configuration files
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Deleting /etc/ceph
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Creating directory /etc/ceph
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Setting permission for /etc/ceph
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Writing out command to execute
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 11 08:32:41 compute-0 nova_compute[260935]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 11 08:32:41 compute-0 nova_compute[260935]: ++ cat /run_command
Oct 11 08:32:41 compute-0 nova_compute[260935]: + CMD=nova-compute
Oct 11 08:32:41 compute-0 nova_compute[260935]: + ARGS=
Oct 11 08:32:41 compute-0 nova_compute[260935]: + sudo kolla_copy_cacerts
Oct 11 08:32:41 compute-0 nova_compute[260935]: + [[ ! -n '' ]]
Oct 11 08:32:41 compute-0 nova_compute[260935]: + . kolla_extend_start
Oct 11 08:32:41 compute-0 nova_compute[260935]: Running command: 'nova-compute'
Oct 11 08:32:41 compute-0 nova_compute[260935]: + echo 'Running command: '\''nova-compute'\'''
Oct 11 08:32:41 compute-0 nova_compute[260935]: + umask 0022
Oct 11 08:32:41 compute-0 nova_compute[260935]: + exec nova-compute
Oct 11 08:32:42 compute-0 sudo[261096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piffilzaaeomramsaoyretljobloljxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760171562.0011816-1858-226088112072202/AnsiballZ_podman_container.py'
Oct 11 08:32:42 compute-0 sudo[261096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 08:32:42 compute-0 ceph-mon[74313]: pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:42 compute-0 python3.9[261098]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 11 08:32:42 compute-0 systemd[1]: Started libpod-conmon-a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6.scope.
Oct 11 08:32:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:32:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0df993bee33502936ff8744830557639fda9bda47a2a4bf67587ad0db19cfe/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0df993bee33502936ff8744830557639fda9bda47a2a4bf67587ad0db19cfe/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0df993bee33502936ff8744830557639fda9bda47a2a4bf67587ad0db19cfe/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 11 08:32:42 compute-0 podman[261124]: 2025-10-11 08:32:42.891284667 +0000 UTC m=+0.143960822 container init a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Oct 11 08:32:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:32:42 compute-0 podman[261124]: 2025-10-11 08:32:42.905273179 +0000 UTC m=+0.157949314 container start a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:32:42 compute-0 python3.9[261098]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 11 08:32:42 compute-0 nova_compute_init[261146]: INFO:nova_statedir:Applying nova statedir ownership
Oct 11 08:32:42 compute-0 nova_compute_init[261146]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 11 08:32:42 compute-0 nova_compute_init[261146]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 11 08:32:42 compute-0 nova_compute_init[261146]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 11 08:32:42 compute-0 nova_compute_init[261146]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 11 08:32:42 compute-0 nova_compute_init[261146]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 11 08:32:42 compute-0 nova_compute_init[261146]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 11 08:32:42 compute-0 nova_compute_init[261146]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 11 08:32:42 compute-0 nova_compute_init[261146]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 11 08:32:42 compute-0 nova_compute_init[261146]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 11 08:32:42 compute-0 nova_compute_init[261146]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 11 08:32:42 compute-0 nova_compute_init[261146]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 11 08:32:42 compute-0 nova_compute_init[261146]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 11 08:32:42 compute-0 nova_compute_init[261146]: INFO:nova_statedir:Nova statedir ownership complete
Oct 11 08:32:42 compute-0 systemd[1]: libpod-a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6.scope: Deactivated successfully.
Oct 11 08:32:42 compute-0 podman[261147]: 2025-10-11 08:32:42.981112763 +0000 UTC m=+0.039878978 container died a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 08:32:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6-userdata-shm.mount: Deactivated successfully.
Oct 11 08:32:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e0df993bee33502936ff8744830557639fda9bda47a2a4bf67587ad0db19cfe-merged.mount: Deactivated successfully.
Oct 11 08:32:43 compute-0 podman[261160]: 2025-10-11 08:32:43.059106748 +0000 UTC m=+0.072036059 container cleanup a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 08:32:43 compute-0 systemd[1]: libpod-conmon-a935fb67611e2cdf191c5aec8ff164c181e631191daef2a1684486c6280c3ac6.scope: Deactivated successfully.
Oct 11 08:32:43 compute-0 sudo[261096]: pam_unix(sudo:session): session closed for user root
Oct 11 08:32:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:43 compute-0 nova_compute[260935]: 2025-10-11 08:32:43.553 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 08:32:43 compute-0 nova_compute[260935]: 2025-10-11 08:32:43.554 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 08:32:43 compute-0 nova_compute[260935]: 2025-10-11 08:32:43.554 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 11 08:32:43 compute-0 nova_compute[260935]: 2025-10-11 08:32:43.554 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 11 08:32:43 compute-0 sshd-session[222622]: Connection closed by 192.168.122.30 port 51504
Oct 11 08:32:43 compute-0 sshd-session[222619]: pam_unix(sshd:session): session closed for user zuul
Oct 11 08:32:43 compute-0 nova_compute[260935]: 2025-10-11 08:32:43.671 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:32:43 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Oct 11 08:32:43 compute-0 systemd[1]: session-51.scope: Consumed 3min 21.197s CPU time.
Oct 11 08:32:43 compute-0 systemd-logind[819]: Session 51 logged out. Waiting for processes to exit.
Oct 11 08:32:43 compute-0 systemd-logind[819]: Removed session 51.
Oct 11 08:32:43 compute-0 nova_compute[260935]: 2025-10-11 08:32:43.693 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.232 2 INFO nova.virt.driver [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.359 2 INFO nova.compute.provider_config [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.375 2 DEBUG oslo_concurrency.lockutils [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.375 2 DEBUG oslo_concurrency.lockutils [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.375 2 DEBUG oslo_concurrency.lockutils [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.376 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.376 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.376 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.376 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.376 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.376 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.376 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.376 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.377 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.377 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.377 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.377 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.377 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.377 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.377 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.378 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.378 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.378 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.378 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.378 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.378 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.379 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.379 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.379 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.379 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.379 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.380 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.380 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.380 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.380 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.380 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.381 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.381 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.381 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.381 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.381 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.382 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.382 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.382 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.382 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.382 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.383 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.383 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.383 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.383 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.383 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.384 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.384 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.384 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.384 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.384 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.385 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.385 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.385 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.385 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.385 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.386 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.386 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.386 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.386 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.386 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.387 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.387 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.387 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.387 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.387 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.387 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.388 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.388 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.388 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.388 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.388 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.389 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.389 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.389 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.389 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.389 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.390 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.390 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.390 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.390 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.390 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.391 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.391 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.391 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.391 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.392 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.392 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.392 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.392 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.392 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.393 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.393 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.393 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.393 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.393 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.393 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.394 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.394 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.394 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.394 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.394 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.395 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.395 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.395 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.395 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.395 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.396 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.396 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.396 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.396 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.396 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.397 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.397 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.397 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.397 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.398 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.398 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.398 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.398 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.398 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.399 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.399 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.399 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.399 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.399 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.399 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.400 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.400 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.400 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.400 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.400 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.400 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.400 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.400 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.401 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.401 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.401 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.401 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.401 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.401 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.401 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.402 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.402 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.402 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.402 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.402 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.402 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.402 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.403 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.403 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.403 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.403 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.403 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.403 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.403 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.404 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.404 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.404 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.404 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.404 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.404 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.404 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.405 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.405 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.405 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.405 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.405 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.405 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.406 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.406 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.406 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.406 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.406 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.406 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.406 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.407 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.407 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.407 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.407 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.407 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.407 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.407 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.408 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.408 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.408 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.408 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.408 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.408 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.409 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.409 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.409 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.409 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.409 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.409 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.409 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.410 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.410 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.410 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.410 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.410 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.410 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.410 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.411 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.411 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.411 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.411 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.411 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.411 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.411 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.412 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.412 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.412 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.412 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.412 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.412 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.412 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.413 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.413 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.413 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.413 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.413 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.413 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.413 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.414 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.414 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.414 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.414 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.414 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.414 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.414 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.415 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.415 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.415 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.415 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.415 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.415 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.415 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.416 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.416 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.416 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.416 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.416 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.416 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.416 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.417 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.417 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.417 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.417 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.417 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.417 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.417 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.418 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.418 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.418 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.418 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.418 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.418 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.418 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.419 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.419 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.419 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.419 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.419 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.419 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.420 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.420 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.420 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.420 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.420 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.420 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.420 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.421 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.421 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.421 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.421 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.421 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.421 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.421 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.422 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.422 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.422 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.422 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.422 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.422 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.422 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.423 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.423 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.423 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.423 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.423 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.423 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.423 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.424 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.424 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.424 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.424 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.424 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.424 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.424 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.425 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.425 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.425 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.425 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.425 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.425 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.425 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.426 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.426 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.426 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.426 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.426 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.426 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.426 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.426 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.427 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.427 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.427 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.427 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.427 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.427 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.427 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.428 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.428 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.428 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.428 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.428 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.428 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.428 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.429 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.429 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.429 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.429 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.429 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.429 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.429 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.430 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.430 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.430 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.430 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.430 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.430 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.430 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.431 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.431 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.431 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.431 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.431 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.431 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.432 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.432 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.432 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.432 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.432 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.432 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.433 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.433 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.433 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.433 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.433 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.433 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.434 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.434 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.434 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.434 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.434 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.434 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.435 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.435 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.435 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.435 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.435 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.435 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.436 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.436 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.436 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.436 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.436 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.436 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.436 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.437 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.437 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.437 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.437 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.437 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.437 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.438 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.438 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.438 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.438 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.438 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.438 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.439 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.439 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.439 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.439 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.439 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.439 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.439 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.440 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.440 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.440 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.440 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.440 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.440 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.441 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.441 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.441 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.441 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.441 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.442 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.442 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.442 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.442 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.442 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.442 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.442 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.443 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.443 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.443 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.443 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.443 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.443 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.443 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.444 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.444 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.444 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.444 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.444 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.444 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.444 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.445 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.445 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.445 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.445 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.445 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.445 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.445 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.446 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.446 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.446 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.446 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.446 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.446 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.447 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.447 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.447 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.447 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.447 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.448 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.448 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.448 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.448 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.448 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.448 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.449 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.449 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.449 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.449 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.449 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.449 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.449 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.450 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.450 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.450 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.450 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.450 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.450 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.450 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.451 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.451 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.451 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.451 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.451 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.451 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.451 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.452 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.452 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.452 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.452 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.452 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.452 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.453 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.453 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.453 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.453 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.453 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.453 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.454 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 ceph-mon[74313]: pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.454 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.454 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.455 2 WARNING oslo_config.cfg [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 11 08:32:44 compute-0 nova_compute[260935]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 11 08:32:44 compute-0 nova_compute[260935]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 11 08:32:44 compute-0 nova_compute[260935]: and ``live_migration_inbound_addr`` respectively.
Oct 11 08:32:44 compute-0 nova_compute[260935]: ).  Its value may be silently ignored in the future.
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.455 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.455 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.455 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.455 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.456 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.456 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.456 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.456 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.456 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.456 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.457 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.457 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.457 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.457 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.457 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.457 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.458 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.458 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.458 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rbd_secret_uuid        = 33219f8b-dc38-5a8f-a577-8ccc4b37190a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.458 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.458 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.460 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.460 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.461 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.461 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.461 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.462 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.462 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.462 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.463 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.463 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.464 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.464 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.464 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.464 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.465 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.465 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.465 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.466 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.466 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.466 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.467 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.467 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.467 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.468 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.468 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.468 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.469 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.469 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.469 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.470 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.470 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.470 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.471 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.471 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.471 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.471 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.472 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.472 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.472 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.473 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.473 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.473 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.474 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.474 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.474 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.474 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.475 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.475 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.475 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.476 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.476 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.476 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.477 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.477 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.477 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.477 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.478 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.478 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.478 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.479 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.479 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.479 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.480 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.480 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.480 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.481 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.481 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.481 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.482 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.482 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.482 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.483 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.483 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.483 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.484 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.484 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.485 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.485 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.486 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.486 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.487 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.487 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.488 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.488 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.489 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.489 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.490 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.490 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.491 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.492 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.492 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.493 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.493 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.494 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.494 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.495 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.495 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.496 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.496 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.497 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.497 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.498 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.498 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.499 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.500 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.500 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.501 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.501 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.502 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.502 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.503 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.504 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.504 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.505 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.505 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.506 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.506 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.507 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.508 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.508 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.509 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.509 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.510 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.510 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.511 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.512 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.512 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.512 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.513 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.514 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.514 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.514 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.515 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.515 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.515 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.516 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.516 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.516 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.517 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.517 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.517 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.517 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.518 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.518 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.518 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.519 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.519 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.519 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.519 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.520 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.520 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.520 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.520 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.521 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.521 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.521 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.522 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.522 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.522 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.522 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.523 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.523 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.523 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.524 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.524 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.524 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.524 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.525 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.525 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.525 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.525 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.526 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.526 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.526 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.527 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.527 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.527 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.528 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.528 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.528 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.528 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.529 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.529 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.529 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.530 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.530 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.530 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.530 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.531 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.531 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.531 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.531 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.532 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.532 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.532 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.533 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.533 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.533 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.533 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.534 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.534 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.534 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.534 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.535 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.535 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.535 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.536 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.536 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.536 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.536 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.537 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.537 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.537 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.537 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.538 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.538 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.538 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.539 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.539 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.539 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.540 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.540 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.540 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.541 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.541 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.541 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.542 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.542 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.543 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.543 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.543 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.544 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.544 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.544 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.545 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.545 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.545 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.546 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.546 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.546 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.547 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.547 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.547 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.547 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.548 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.548 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.548 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.548 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.548 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.549 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.549 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.549 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.549 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.549 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.550 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.550 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.550 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.550 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.550 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.551 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.551 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.551 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.551 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.551 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.552 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.552 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.552 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.552 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.552 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.553 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.553 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.553 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.553 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.554 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.554 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.554 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.554 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.555 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.555 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.555 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.555 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.555 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.555 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.556 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.556 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.556 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.556 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.556 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.556 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.556 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.557 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.557 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.557 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.557 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.557 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.557 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.557 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.558 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.558 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.558 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.558 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.558 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.558 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.558 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.558 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.559 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.559 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.559 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.559 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.559 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.559 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.559 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.560 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.560 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.560 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.560 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.560 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.560 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.560 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.561 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.561 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.561 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.561 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.561 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.561 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.561 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.562 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.562 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.562 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.562 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.562 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.562 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.562 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.562 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.563 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.563 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.563 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.563 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.563 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.563 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.563 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.563 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.564 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.564 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.564 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.564 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.564 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.564 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.564 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.565 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.565 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.565 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.565 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.565 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.565 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.565 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.565 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.566 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.566 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.566 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.566 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.566 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.566 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.566 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.567 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.567 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.567 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.567 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.567 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.567 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.567 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.567 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.568 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.568 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.568 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.568 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.568 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.568 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.568 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.569 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.569 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.569 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.569 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.569 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.569 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.569 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.570 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.570 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.570 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.570 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.570 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.570 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.570 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.570 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.571 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.571 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.571 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.571 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.571 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.571 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.571 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.572 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.572 2 DEBUG oslo_service.service [None req-3e1646a9-d6b3-4133-b9fc-da789de9095c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.572 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.629 2 INFO nova.virt.node [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Determined node identity ead2f521-4d5d-46d9-864c-1aac19134114 from /var/lib/nova/compute_id
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.630 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.631 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.631 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.632 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.649 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f1e55b5ecd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.653 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f1e55b5ecd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.654 2 INFO nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Connection event '1' reason 'None'
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.665 2 INFO nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Libvirt host capabilities <capabilities>
Oct 11 08:32:44 compute-0 nova_compute[260935]: 
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <host>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <uuid>441009ae-b831-47c3-ab1e-dc607f9feb6d</uuid>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <cpu>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <arch>x86_64</arch>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model>EPYC-Rome-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <vendor>AMD</vendor>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <microcode version='16777317'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <signature family='23' model='49' stepping='0'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='x2apic'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='tsc-deadline'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='osxsave'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='hypervisor'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='tsc_adjust'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='spec-ctrl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='stibp'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='arch-capabilities'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='ssbd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='cmp_legacy'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='topoext'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='virt-ssbd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='lbrv'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='tsc-scale'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='vmcb-clean'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='pause-filter'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='pfthreshold'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='svme-addr-chk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='rdctl-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='skip-l1dfl-vmentry'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='mds-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature name='pschange-mc-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <pages unit='KiB' size='4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <pages unit='KiB' size='2048'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <pages unit='KiB' size='1048576'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </cpu>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <power_management>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <suspend_mem/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </power_management>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <iommu support='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <migration_features>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <live/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <uri_transports>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <uri_transport>tcp</uri_transport>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <uri_transport>rdma</uri_transport>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </uri_transports>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </migration_features>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <topology>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <cells num='1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <cell id='0'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:           <memory unit='KiB'>7864360</memory>
Oct 11 08:32:44 compute-0 nova_compute[260935]:           <pages unit='KiB' size='4'>1966090</pages>
Oct 11 08:32:44 compute-0 nova_compute[260935]:           <pages unit='KiB' size='2048'>0</pages>
Oct 11 08:32:44 compute-0 nova_compute[260935]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 11 08:32:44 compute-0 nova_compute[260935]:           <distances>
Oct 11 08:32:44 compute-0 nova_compute[260935]:             <sibling id='0' value='10'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:           </distances>
Oct 11 08:32:44 compute-0 nova_compute[260935]:           <cpus num='8'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:           </cpus>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         </cell>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </cells>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </topology>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <cache>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </cache>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <secmodel>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model>selinux</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <doi>0</doi>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </secmodel>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <secmodel>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model>dac</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <doi>0</doi>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </secmodel>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </host>
Oct 11 08:32:44 compute-0 nova_compute[260935]: 
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <guest>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <os_type>hvm</os_type>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <arch name='i686'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <wordsize>32</wordsize>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <domain type='qemu'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <domain type='kvm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </arch>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <features>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <pae/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <nonpae/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <acpi default='on' toggle='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <apic default='on' toggle='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <cpuselection/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <deviceboot/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <disksnapshot default='on' toggle='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <externalSnapshot/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </features>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </guest>
Oct 11 08:32:44 compute-0 nova_compute[260935]: 
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <guest>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <os_type>hvm</os_type>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <arch name='x86_64'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <wordsize>64</wordsize>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <domain type='qemu'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <domain type='kvm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </arch>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <features>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <acpi default='on' toggle='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <apic default='on' toggle='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <cpuselection/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <deviceboot/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <disksnapshot default='on' toggle='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <externalSnapshot/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </features>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </guest>
Oct 11 08:32:44 compute-0 nova_compute[260935]: 
Oct 11 08:32:44 compute-0 nova_compute[260935]: </capabilities>
Oct 11 08:32:44 compute-0 nova_compute[260935]: 
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.676 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.682 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 11 08:32:44 compute-0 nova_compute[260935]: <domainCapabilities>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <domain>kvm</domain>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <arch>i686</arch>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <vcpu max='240'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <iothreads supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <os supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <enum name='firmware'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <loader supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='type'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>rom</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>pflash</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='readonly'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>yes</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>no</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='secure'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>no</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </loader>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </os>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <cpu>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <mode name='host-passthrough' supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='hostPassthroughMigratable'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>on</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>off</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </mode>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <mode name='maximum' supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='maximumMigratable'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>on</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>off</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </mode>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <mode name='host-model' supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <vendor>AMD</vendor>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='x2apic'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='hypervisor'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='stibp'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='ssbd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='overflow-recov'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='succor'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='ibrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='lbrv'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='tsc-scale'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='flushbyasid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='pause-filter'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='pfthreshold'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='rdctl-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='mds-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='gds-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='rfds-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='disable' name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </mode>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <mode name='custom' supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-noTSX'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cooperlake'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cooperlake-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cooperlake-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Denverton'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mpx'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Denverton-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mpx'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Denverton-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Denverton-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Dhyana-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Genoa'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amd-psfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='auto-ibrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='stibp-always-on'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amd-psfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='auto-ibrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='stibp-always-on'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Milan'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Milan-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Milan-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amd-psfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='stibp-always-on'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Rome'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Rome-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Rome-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Rome-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='GraniteRapids'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='prefetchiti'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='GraniteRapids-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='prefetchiti'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='GraniteRapids-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx10'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx10-128'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx10-256'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx10-512'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='prefetchiti'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-noTSX'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v5'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v6'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v7'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='IvyBridge'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='IvyBridge-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='IvyBridge-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='IvyBridge-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='KnightsMill'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-4fmaps'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-4vnniw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512er'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512pf'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='KnightsMill-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-4fmaps'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-4vnniw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512er'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512pf'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Opteron_G4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fma4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xop'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Opteron_G4-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fma4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xop'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Opteron_G5'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fma4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tbm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xop'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Opteron_G5-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fma4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tbm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xop'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SapphireRapids'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SapphireRapids-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SapphireRapids-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SapphireRapids-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SierraForest'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-ne-convert'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cmpccxadd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SierraForest-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-ne-convert'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cmpccxadd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v5'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='core-capability'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mpx'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='split-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='core-capability'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mpx'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='split-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='core-capability'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='split-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='core-capability'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='split-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='athlon'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnow'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnowext'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='athlon-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnow'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnowext'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='core2duo'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='core2duo-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='coreduo'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='coreduo-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='n270'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='n270-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='phenom'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnow'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnowext'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='phenom-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnow'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnowext'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </mode>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <memoryBacking supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <enum name='sourceType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>file</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>anonymous</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>memfd</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </memoryBacking>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <disk supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='diskDevice'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>disk</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>cdrom</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>floppy</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>lun</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='bus'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>ide</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>fdc</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>scsi</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>usb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>sata</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio-transitional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio-non-transitional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <graphics supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='type'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vnc</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>egl-headless</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>dbus</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </graphics>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <video supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='modelType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vga</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>cirrus</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>none</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>bochs</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>ramfb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </video>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <hostdev supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='mode'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>subsystem</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='startupPolicy'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>default</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>mandatory</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>requisite</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>optional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='subsysType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>usb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>pci</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>scsi</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='capsType'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='pciBackend'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </hostdev>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <rng supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio-transitional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio-non-transitional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendModel'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>random</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>egd</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>builtin</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <filesystem supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='driverType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>path</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>handle</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtiofs</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </filesystem>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <tpm supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>tpm-tis</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>tpm-crb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendModel'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>emulator</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>external</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendVersion'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>2.0</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </tpm>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <redirdev supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='bus'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>usb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </redirdev>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <channel supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='type'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>pty</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>unix</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </channel>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <crypto supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='type'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>qemu</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendModel'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>builtin</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </crypto>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <interface supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>default</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>passt</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <panic supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>isa</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>hyperv</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </panic>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <features>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <gic supported='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <vmcoreinfo supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <genid supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <backingStoreInput supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <backup supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <async-teardown supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <ps2 supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <sev supported='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <sgx supported='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <hyperv supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='features'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>relaxed</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vapic</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>spinlocks</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vpindex</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>runtime</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>synic</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>stimer</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>reset</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vendor_id</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>frequencies</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>reenlightenment</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>tlbflush</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>ipi</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>avic</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>emsr_bitmap</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>xmm_input</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </hyperv>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <launchSecurity supported='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </features>
Oct 11 08:32:44 compute-0 nova_compute[260935]: </domainCapabilities>
Oct 11 08:32:44 compute-0 nova_compute[260935]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.688 2 DEBUG nova.virt.libvirt.volume.mount [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.693 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 11 08:32:44 compute-0 nova_compute[260935]: <domainCapabilities>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <domain>kvm</domain>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <arch>i686</arch>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <vcpu max='4096'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <iothreads supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <os supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <enum name='firmware'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <loader supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='type'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>rom</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>pflash</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='readonly'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>yes</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>no</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='secure'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>no</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </loader>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </os>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <cpu>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <mode name='host-passthrough' supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='hostPassthroughMigratable'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>on</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>off</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </mode>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <mode name='maximum' supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='maximumMigratable'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>on</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>off</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </mode>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <mode name='host-model' supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <vendor>AMD</vendor>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='x2apic'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='hypervisor'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='stibp'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='ssbd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='overflow-recov'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='succor'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='ibrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='lbrv'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='tsc-scale'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='flushbyasid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='pause-filter'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='pfthreshold'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='rdctl-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='mds-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='gds-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='rfds-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='disable' name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </mode>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <mode name='custom' supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-noTSX'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cooperlake'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cooperlake-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cooperlake-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Denverton'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mpx'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Denverton-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mpx'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Denverton-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Denverton-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Dhyana-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Genoa'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amd-psfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='auto-ibrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='stibp-always-on'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amd-psfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='auto-ibrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='stibp-always-on'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Milan'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Milan-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Milan-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amd-psfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='stibp-always-on'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Rome'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Rome-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Rome-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Rome-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='GraniteRapids'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='prefetchiti'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='GraniteRapids-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='prefetchiti'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='GraniteRapids-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx10'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx10-128'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx10-256'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx10-512'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='prefetchiti'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-noTSX'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v5'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v6'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v7'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='IvyBridge'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='IvyBridge-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='IvyBridge-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='IvyBridge-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='KnightsMill'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-4fmaps'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-4vnniw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512er'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512pf'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='KnightsMill-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-4fmaps'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-4vnniw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512er'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512pf'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Opteron_G4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fma4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xop'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Opteron_G4-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fma4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xop'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Opteron_G5'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fma4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tbm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xop'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Opteron_G5-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fma4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tbm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xop'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SapphireRapids'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SapphireRapids-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SapphireRapids-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SapphireRapids-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SierraForest'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-ne-convert'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cmpccxadd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SierraForest-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-ne-convert'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cmpccxadd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v5'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='core-capability'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mpx'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='split-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='core-capability'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mpx'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='split-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='core-capability'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='split-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='core-capability'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='split-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='athlon'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnow'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnowext'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='athlon-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnow'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnowext'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='core2duo'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='core2duo-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='coreduo'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='coreduo-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='n270'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='n270-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='phenom'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnow'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnowext'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='phenom-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnow'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnowext'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </mode>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <memoryBacking supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <enum name='sourceType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>file</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>anonymous</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>memfd</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </memoryBacking>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <disk supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='diskDevice'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>disk</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>cdrom</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>floppy</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>lun</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='bus'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>fdc</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>scsi</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>usb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>sata</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio-transitional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio-non-transitional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <graphics supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='type'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vnc</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>egl-headless</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>dbus</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </graphics>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <video supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='modelType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vga</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>cirrus</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>none</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>bochs</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>ramfb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </video>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <hostdev supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='mode'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>subsystem</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='startupPolicy'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>default</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>mandatory</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>requisite</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>optional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='subsysType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>usb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>pci</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>scsi</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='capsType'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='pciBackend'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </hostdev>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <rng supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio-transitional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio-non-transitional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendModel'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>random</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>egd</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>builtin</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <filesystem supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='driverType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>path</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>handle</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtiofs</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </filesystem>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <tpm supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>tpm-tis</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>tpm-crb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendModel'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>emulator</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>external</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendVersion'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>2.0</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </tpm>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <redirdev supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='bus'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>usb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </redirdev>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <channel supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='type'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>pty</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>unix</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </channel>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <crypto supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='type'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>qemu</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendModel'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>builtin</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </crypto>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <interface supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>default</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>passt</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <panic supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>isa</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>hyperv</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </panic>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <features>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <gic supported='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <vmcoreinfo supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <genid supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <backingStoreInput supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <backup supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <async-teardown supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <ps2 supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <sev supported='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <sgx supported='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <hyperv supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='features'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>relaxed</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vapic</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>spinlocks</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vpindex</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>runtime</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>synic</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>stimer</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>reset</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vendor_id</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>frequencies</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>reenlightenment</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>tlbflush</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>ipi</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>avic</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>emsr_bitmap</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>xmm_input</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </hyperv>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <launchSecurity supported='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </features>
Oct 11 08:32:44 compute-0 nova_compute[260935]: </domainCapabilities>
Oct 11 08:32:44 compute-0 nova_compute[260935]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.721 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.727 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 11 08:32:44 compute-0 nova_compute[260935]: <domainCapabilities>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <domain>kvm</domain>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <arch>x86_64</arch>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <vcpu max='240'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <iothreads supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <os supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <enum name='firmware'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <loader supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='type'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>rom</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>pflash</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='readonly'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>yes</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>no</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='secure'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>no</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </loader>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </os>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <cpu>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <mode name='host-passthrough' supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='hostPassthroughMigratable'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>on</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>off</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </mode>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <mode name='maximum' supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='maximumMigratable'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>on</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>off</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </mode>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <mode name='host-model' supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <vendor>AMD</vendor>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='x2apic'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='hypervisor'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='stibp'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='ssbd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='overflow-recov'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='succor'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='ibrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='lbrv'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='tsc-scale'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='flushbyasid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='pause-filter'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='pfthreshold'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='rdctl-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='mds-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='gds-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='rfds-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='disable' name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </mode>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <mode name='custom' supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-noTSX'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cooperlake'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cooperlake-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cooperlake-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Denverton'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mpx'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Denverton-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mpx'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Denverton-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Denverton-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Dhyana-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Genoa'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amd-psfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='auto-ibrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='stibp-always-on'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amd-psfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='auto-ibrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='stibp-always-on'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Milan'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Milan-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Milan-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amd-psfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='stibp-always-on'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Rome'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Rome-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Rome-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Rome-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='GraniteRapids'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='prefetchiti'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='GraniteRapids-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='prefetchiti'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='GraniteRapids-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx10'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx10-128'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx10-256'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx10-512'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='prefetchiti'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-noTSX'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v5'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v6'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v7'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='IvyBridge'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='IvyBridge-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='IvyBridge-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='IvyBridge-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='KnightsMill'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-4fmaps'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-4vnniw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512er'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512pf'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='KnightsMill-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-4fmaps'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-4vnniw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512er'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512pf'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Opteron_G4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fma4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xop'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Opteron_G4-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fma4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xop'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Opteron_G5'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fma4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tbm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xop'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Opteron_G5-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fma4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tbm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xop'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SapphireRapids'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SapphireRapids-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SapphireRapids-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SapphireRapids-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SierraForest'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-ne-convert'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cmpccxadd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SierraForest-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-ne-convert'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cmpccxadd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v5'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='core-capability'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mpx'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='split-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='core-capability'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mpx'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='split-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='core-capability'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='split-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='core-capability'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='split-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='athlon'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnow'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnowext'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='athlon-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnow'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnowext'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='core2duo'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='core2duo-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='coreduo'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='coreduo-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='n270'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='n270-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='phenom'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnow'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnowext'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='phenom-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnow'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnowext'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </mode>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <memoryBacking supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <enum name='sourceType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>file</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>anonymous</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>memfd</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </memoryBacking>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <disk supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='diskDevice'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>disk</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>cdrom</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>floppy</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>lun</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='bus'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>ide</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>fdc</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>scsi</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>usb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>sata</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio-transitional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio-non-transitional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <graphics supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='type'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vnc</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>egl-headless</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>dbus</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </graphics>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <video supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='modelType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vga</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>cirrus</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>none</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>bochs</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>ramfb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </video>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <hostdev supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='mode'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>subsystem</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='startupPolicy'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>default</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>mandatory</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>requisite</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>optional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='subsysType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>usb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>pci</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>scsi</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='capsType'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='pciBackend'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </hostdev>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <rng supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio-transitional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio-non-transitional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendModel'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>random</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>egd</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>builtin</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <filesystem supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='driverType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>path</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>handle</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtiofs</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </filesystem>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <tpm supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>tpm-tis</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>tpm-crb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendModel'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>emulator</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>external</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendVersion'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>2.0</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </tpm>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <redirdev supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='bus'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>usb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </redirdev>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <channel supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='type'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>pty</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>unix</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </channel>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <crypto supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='type'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>qemu</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendModel'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>builtin</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </crypto>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <interface supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>default</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>passt</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <panic supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>isa</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>hyperv</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </panic>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <features>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <gic supported='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <vmcoreinfo supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <genid supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <backingStoreInput supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <backup supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <async-teardown supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <ps2 supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <sev supported='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <sgx supported='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <hyperv supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='features'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>relaxed</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vapic</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>spinlocks</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vpindex</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>runtime</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>synic</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>stimer</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>reset</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vendor_id</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>frequencies</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>reenlightenment</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>tlbflush</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>ipi</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>avic</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>emsr_bitmap</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>xmm_input</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </hyperv>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <launchSecurity supported='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </features>
Oct 11 08:32:44 compute-0 nova_compute[260935]: </domainCapabilities>
Oct 11 08:32:44 compute-0 nova_compute[260935]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.797 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 11 08:32:44 compute-0 nova_compute[260935]: <domainCapabilities>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <path>/usr/libexec/qemu-kvm</path>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <domain>kvm</domain>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <arch>x86_64</arch>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <vcpu max='4096'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <iothreads supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <os supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <enum name='firmware'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>efi</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <loader supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='type'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>rom</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>pflash</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='readonly'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>yes</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>no</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='secure'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>yes</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>no</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </loader>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </os>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <cpu>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <mode name='host-passthrough' supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='hostPassthroughMigratable'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>on</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>off</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </mode>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <mode name='maximum' supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='maximumMigratable'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>on</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>off</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </mode>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <mode name='host-model' supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <vendor>AMD</vendor>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='x2apic'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='tsc-deadline'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='hypervisor'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='tsc_adjust'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='spec-ctrl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='stibp'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='arch-capabilities'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='ssbd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='cmp_legacy'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='overflow-recov'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='succor'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='ibrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='amd-ssbd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='virt-ssbd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='lbrv'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='tsc-scale'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='vmcb-clean'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='flushbyasid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='pause-filter'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='pfthreshold'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='svme-addr-chk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='rdctl-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='mds-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='pschange-mc-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='gds-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='require' name='rfds-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <feature policy='disable' name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </mode>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <mode name='custom' supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-noTSX'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Broadwell-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cascadelake-Server-v5'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cooperlake'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cooperlake-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Cooperlake-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Denverton'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mpx'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Denverton-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mpx'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Denverton-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Denverton-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Dhyana-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Genoa'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amd-psfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='auto-ibrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='stibp-always-on'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Genoa-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amd-psfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='auto-ibrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='stibp-always-on'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Milan'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Milan-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Milan-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amd-psfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='no-nested-data-bp'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='null-sel-clr-base'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='stibp-always-on'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Rome'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Rome-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Rome-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-Rome-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='EPYC-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='GraniteRapids'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='prefetchiti'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='GraniteRapids-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='prefetchiti'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='GraniteRapids-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx10'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx10-128'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx10-256'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx10-512'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='prefetchiti'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-noTSX'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Haswell-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-noTSX'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v5'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v6'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Icelake-Server-v7'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='IvyBridge'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='IvyBridge-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='IvyBridge-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='IvyBridge-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='KnightsMill'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-4fmaps'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-4vnniw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512er'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512pf'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='KnightsMill-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-4fmaps'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-4vnniw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512er'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512pf'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Opteron_G4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fma4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xop'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Opteron_G4-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fma4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xop'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Opteron_G5'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fma4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tbm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xop'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Opteron_G5-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fma4'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tbm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xop'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SapphireRapids'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SapphireRapids-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SapphireRapids-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SapphireRapids-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='amx-tile'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-bf16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-fp16'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512-vpopcntdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bitalg'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vbmi2'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrc'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fzrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='la57'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='taa-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='tsx-ldtrk'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xfd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SierraForest'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-ne-convert'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cmpccxadd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='SierraForest-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-ifma'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-ne-convert'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx-vnni-int8'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='bus-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cmpccxadd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fbsdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='fsrs'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ibrs-all'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mcdt-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pbrsb-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='psdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='sbdr-ssdp-no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='serialize'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vaes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='vpclmulqdq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Client-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='hle'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='rtm'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Skylake-Server-v5'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512bw'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512cd'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512dq'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512f'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='avx512vl'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='invpcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pcid'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='pku'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='core-capability'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mpx'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='split-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='core-capability'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='mpx'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='split-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge-v2'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='core-capability'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='split-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge-v3'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='core-capability'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='split-lock-detect'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='Snowridge-v4'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='cldemote'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='erms'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='gfni'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdir64b'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='movdiri'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='xsaves'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='athlon'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnow'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnowext'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='athlon-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnow'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnowext'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='core2duo'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='core2duo-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='coreduo'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='coreduo-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='n270'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='n270-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='ss'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='phenom'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnow'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnowext'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <blockers model='phenom-v1'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnow'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <feature name='3dnowext'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </blockers>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </mode>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <memoryBacking supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <enum name='sourceType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>file</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>anonymous</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <value>memfd</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </memoryBacking>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <disk supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='diskDevice'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>disk</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>cdrom</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>floppy</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>lun</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='bus'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>fdc</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>scsi</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>usb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>sata</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio-transitional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio-non-transitional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <graphics supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='type'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vnc</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>egl-headless</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>dbus</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </graphics>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <video supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='modelType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vga</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>cirrus</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>none</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>bochs</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>ramfb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </video>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <hostdev supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='mode'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>subsystem</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='startupPolicy'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>default</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>mandatory</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>requisite</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>optional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='subsysType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>usb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>pci</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>scsi</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='capsType'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='pciBackend'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </hostdev>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <rng supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio-transitional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtio-non-transitional</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendModel'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>random</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>egd</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>builtin</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <filesystem supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='driverType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>path</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>handle</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>virtiofs</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </filesystem>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <tpm supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>tpm-tis</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>tpm-crb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendModel'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>emulator</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>external</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendVersion'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>2.0</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </tpm>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <redirdev supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='bus'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>usb</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </redirdev>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <channel supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='type'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>pty</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>unix</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </channel>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <crypto supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='type'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>qemu</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendModel'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>builtin</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </crypto>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <interface supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='backendType'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>default</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>passt</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <panic supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='model'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>isa</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>hyperv</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </panic>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   <features>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <gic supported='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <vmcoreinfo supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <genid supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <backingStoreInput supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <backup supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <async-teardown supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <ps2 supported='yes'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <sev supported='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <sgx supported='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <hyperv supported='yes'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       <enum name='features'>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>relaxed</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vapic</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>spinlocks</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vpindex</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>runtime</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>synic</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>stimer</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>reset</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>vendor_id</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>frequencies</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>reenlightenment</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>tlbflush</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>ipi</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>avic</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>emsr_bitmap</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:         <value>xmm_input</value>
Oct 11 08:32:44 compute-0 nova_compute[260935]:       </enum>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     </hyperv>
Oct 11 08:32:44 compute-0 nova_compute[260935]:     <launchSecurity supported='no'/>
Oct 11 08:32:44 compute-0 nova_compute[260935]:   </features>
Oct 11 08:32:44 compute-0 nova_compute[260935]: </domainCapabilities>
Oct 11 08:32:44 compute-0 nova_compute[260935]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.892 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.892 2 INFO nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Secure Boot support detected
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.895 2 INFO nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.903 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.926 2 INFO nova.virt.node [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Determined node identity ead2f521-4d5d-46d9-864c-1aac19134114 from /var/lib/nova/compute_id
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.945 2 WARNING nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Compute nodes ['ead2f521-4d5d-46d9-864c-1aac19134114'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.973 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.996 2 WARNING nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.996 2 DEBUG oslo_concurrency.lockutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.997 2 DEBUG oslo_concurrency.lockutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.997 2 DEBUG oslo_concurrency.lockutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.998 2 DEBUG nova.compute.resource_tracker [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:32:44 compute-0 nova_compute[260935]: 2025-10-11 08:32:44.998 2 DEBUG oslo_concurrency.processutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:32:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:32:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1004446298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:32:45 compute-0 nova_compute[260935]: 2025-10-11 08:32:45.503 2 DEBUG oslo_concurrency.processutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:32:45 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 11 08:32:45 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 11 08:32:45 compute-0 nova_compute[260935]: 2025-10-11 08:32:45.953 2 WARNING nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:32:45 compute-0 nova_compute[260935]: 2025-10-11 08:32:45.955 2 DEBUG nova.compute.resource_tracker [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5182MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:32:45 compute-0 nova_compute[260935]: 2025-10-11 08:32:45.956 2 DEBUG oslo_concurrency.lockutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:32:45 compute-0 nova_compute[260935]: 2025-10-11 08:32:45.956 2 DEBUG oslo_concurrency.lockutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:32:46 compute-0 nova_compute[260935]: 2025-10-11 08:32:46.008 2 WARNING nova.compute.resource_tracker [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] No compute node record for compute-0.ctlplane.example.com:ead2f521-4d5d-46d9-864c-1aac19134114: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host ead2f521-4d5d-46d9-864c-1aac19134114 could not be found.
Oct 11 08:32:46 compute-0 nova_compute[260935]: 2025-10-11 08:32:46.045 2 INFO nova.compute.resource_tracker [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: ead2f521-4d5d-46d9-864c-1aac19134114
Oct 11 08:32:46 compute-0 nova_compute[260935]: 2025-10-11 08:32:46.184 2 DEBUG nova.compute.resource_tracker [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:32:46 compute-0 nova_compute[260935]: 2025-10-11 08:32:46.185 2 DEBUG nova.compute.resource_tracker [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:32:46 compute-0 ceph-mon[74313]: pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:46 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1004446298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:32:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:47 compute-0 nova_compute[260935]: 2025-10-11 08:32:47.394 2 INFO nova.scheduler.client.report [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [req-7739ac00-a31a-4d51-9974-b274c4dd3293] Created resource provider record via placement API for resource provider with UUID ead2f521-4d5d-46d9-864c-1aac19134114 and name compute-0.ctlplane.example.com.
Oct 11 08:32:47 compute-0 nova_compute[260935]: 2025-10-11 08:32:47.782 2 DEBUG oslo_concurrency.processutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:32:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:32:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:32:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4251336718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:32:48 compute-0 nova_compute[260935]: 2025-10-11 08:32:48.231 2 DEBUG oslo_concurrency.processutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:32:48 compute-0 nova_compute[260935]: 2025-10-11 08:32:48.238 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 11 08:32:48 compute-0 nova_compute[260935]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct 11 08:32:48 compute-0 nova_compute[260935]: 2025-10-11 08:32:48.239 2 INFO nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] kernel doesn't support AMD SEV
Oct 11 08:32:48 compute-0 nova_compute[260935]: 2025-10-11 08:32:48.241 2 DEBUG nova.compute.provider_tree [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 08:32:48 compute-0 nova_compute[260935]: 2025-10-11 08:32:48.241 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:32:48 compute-0 nova_compute[260935]: 2025-10-11 08:32:48.325 2 DEBUG nova.scheduler.client.report [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Updated inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 11 08:32:48 compute-0 nova_compute[260935]: 2025-10-11 08:32:48.326 2 DEBUG nova.compute.provider_tree [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Updating resource provider ead2f521-4d5d-46d9-864c-1aac19134114 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 11 08:32:48 compute-0 nova_compute[260935]: 2025-10-11 08:32:48.326 2 DEBUG nova.compute.provider_tree [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 08:32:48 compute-0 ceph-mon[74313]: pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4251336718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:32:48 compute-0 nova_compute[260935]: 2025-10-11 08:32:48.512 2 DEBUG nova.compute.provider_tree [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Updating resource provider ead2f521-4d5d-46d9-864c-1aac19134114 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 11 08:32:48 compute-0 nova_compute[260935]: 2025-10-11 08:32:48.546 2 DEBUG nova.compute.resource_tracker [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:32:48 compute-0 nova_compute[260935]: 2025-10-11 08:32:48.547 2 DEBUG oslo_concurrency.lockutils [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:32:48 compute-0 nova_compute[260935]: 2025-10-11 08:32:48.547 2 DEBUG nova.service [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct 11 08:32:48 compute-0 nova_compute[260935]: 2025-10-11 08:32:48.656 2 DEBUG nova.service [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct 11 08:32:48 compute-0 nova_compute[260935]: 2025-10-11 08:32:48.657 2 DEBUG nova.servicegroup.drivers.db [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct 11 08:32:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:50 compute-0 ceph-mon[74313]: pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:52 compute-0 ceph-mon[74313]: pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:32:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:54 compute-0 ceph-mon[74313]: pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:32:54
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'backups', 'vms', 'default.rgw.meta', '.mgr', 'default.rgw.log', '.rgw.root']
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:32:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:32:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:55 compute-0 podman[261302]: 2025-10-11 08:32:55.803791429 +0000 UTC m=+0.095521746 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 08:32:56 compute-0 ceph-mon[74313]: pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:32:58 compute-0 ceph-mon[74313]: pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:32:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:00 compute-0 ceph-mon[74313]: pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:02 compute-0 ceph-mon[74313]: pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:02 compute-0 podman[261322]: 2025-10-11 08:33:02.806277398 +0000 UTC m=+0.105491186 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct 11 08:33:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:33:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:33:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:33:04 compute-0 ceph-mon[74313]: pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:06 compute-0 ceph-mon[74313]: pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:07 compute-0 podman[261342]: 2025-10-11 08:33:07.813945056 +0000 UTC m=+0.107510042 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 08:33:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:33:08 compute-0 ceph-mon[74313]: pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:33:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1463444221' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:33:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:33:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1463444221' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:33:09 compute-0 sudo[261364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:33:09 compute-0 sudo[261364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:09 compute-0 sudo[261364]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:33:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2586404284' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:33:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:33:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2586404284' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:33:09 compute-0 sudo[261389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:33:09 compute-0 sudo[261389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:09 compute-0 sudo[261389]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:09 compute-0 sudo[261414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:33:09 compute-0 sudo[261414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:09 compute-0 sudo[261414]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:33:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3292399546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:33:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:33:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3292399546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:33:09 compute-0 sudo[261439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:33:09 compute-0 sudo[261439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1463444221' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:33:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1463444221' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:33:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2586404284' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:33:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2586404284' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:33:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3292399546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:33:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3292399546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:33:10 compute-0 sudo[261439]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 08:33:10 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 08:33:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:33:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:33:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:33:10 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:33:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:33:10 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:33:10 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev efea182e-33ee-4d25-bea7-d5cba7d465f3 does not exist
Oct 11 08:33:10 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 568535d3-bca7-493d-b1be-4764fd61c8ad does not exist
Oct 11 08:33:10 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 12e510ff-3d79-460d-b21e-069c7432ce3b does not exist
Oct 11 08:33:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:33:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:33:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:33:10 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:33:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:33:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:33:10 compute-0 sudo[261497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:33:10 compute-0 sudo[261497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:10 compute-0 sudo[261497]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:10 compute-0 sudo[261522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:33:10 compute-0 sudo[261522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:10 compute-0 sudo[261522]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:10 compute-0 sudo[261548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:33:10 compute-0 sudo[261548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:10 compute-0 sudo[261548]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:10 compute-0 sudo[261586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:33:10 compute-0 sudo[261586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:10 compute-0 podman[261546]: 2025-10-11 08:33:10.510474342 +0000 UTC m=+0.212120383 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 08:33:10 compute-0 ceph-mon[74313]: pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:10 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 08:33:10 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:33:10 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:33:10 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:33:10 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:33:10 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:33:10 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:33:10 compute-0 podman[261661]: 2025-10-11 08:33:10.971388621 +0000 UTC m=+0.069226460 container create cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:33:11 compute-0 systemd[1]: Started libpod-conmon-cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893.scope.
Oct 11 08:33:11 compute-0 podman[261661]: 2025-10-11 08:33:10.942615915 +0000 UTC m=+0.040453824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:33:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:33:11 compute-0 podman[261661]: 2025-10-11 08:33:11.082342869 +0000 UTC m=+0.180180768 container init cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:33:11 compute-0 podman[261661]: 2025-10-11 08:33:11.100842387 +0000 UTC m=+0.198680236 container start cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 08:33:11 compute-0 practical_lamarr[261678]: 167 167
Oct 11 08:33:11 compute-0 podman[261661]: 2025-10-11 08:33:11.105372024 +0000 UTC m=+0.203209883 container attach cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 08:33:11 compute-0 systemd[1]: libpod-cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893.scope: Deactivated successfully.
Oct 11 08:33:11 compute-0 conmon[261678]: conmon cf44567fb7cecbf2507e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893.scope/container/memory.events
Oct 11 08:33:11 compute-0 podman[261661]: 2025-10-11 08:33:11.108060909 +0000 UTC m=+0.205898758 container died cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 08:33:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a43ab7038c6f441572aafe432d8320816b96b696fb94b6fae5628be1ec44cf7-merged.mount: Deactivated successfully.
Oct 11 08:33:11 compute-0 podman[261661]: 2025-10-11 08:33:11.153904963 +0000 UTC m=+0.251742772 container remove cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:33:11 compute-0 systemd[1]: libpod-conmon-cf44567fb7cecbf2507e9c1dfee5ba296d1ab6372afec0c588800d3b1af1d893.scope: Deactivated successfully.
Oct 11 08:33:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:11 compute-0 podman[261702]: 2025-10-11 08:33:11.375313395 +0000 UTC m=+0.062010298 container create 79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 08:33:11 compute-0 systemd[1]: Started libpod-conmon-79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660.scope.
Oct 11 08:33:11 compute-0 podman[261702]: 2025-10-11 08:33:11.346649312 +0000 UTC m=+0.033346285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:33:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7145b776e3f815f1963877a3353a9b93241a66be8a622634c1ab8941d85e8e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7145b776e3f815f1963877a3353a9b93241a66be8a622634c1ab8941d85e8e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7145b776e3f815f1963877a3353a9b93241a66be8a622634c1ab8941d85e8e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7145b776e3f815f1963877a3353a9b93241a66be8a622634c1ab8941d85e8e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7145b776e3f815f1963877a3353a9b93241a66be8a622634c1ab8941d85e8e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:33:11 compute-0 podman[261702]: 2025-10-11 08:33:11.4894472 +0000 UTC m=+0.176144183 container init 79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:33:11 compute-0 podman[261702]: 2025-10-11 08:33:11.502454615 +0000 UTC m=+0.189151548 container start 79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:33:11 compute-0 podman[261702]: 2025-10-11 08:33:11.506749395 +0000 UTC m=+0.193446388 container attach 79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 08:33:12 compute-0 ceph-mon[74313]: pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:12 compute-0 funny_heyrovsky[261718]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:33:12 compute-0 funny_heyrovsky[261718]: --> relative data size: 1.0
Oct 11 08:33:12 compute-0 funny_heyrovsky[261718]: --> All data devices are unavailable
Oct 11 08:33:12 compute-0 systemd[1]: libpod-79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660.scope: Deactivated successfully.
Oct 11 08:33:12 compute-0 systemd[1]: libpod-79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660.scope: Consumed 1.220s CPU time.
Oct 11 08:33:12 compute-0 podman[261702]: 2025-10-11 08:33:12.777135617 +0000 UTC m=+1.463832570 container died 79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:33:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7145b776e3f815f1963877a3353a9b93241a66be8a622634c1ab8941d85e8e3-merged.mount: Deactivated successfully.
Oct 11 08:33:12 compute-0 podman[261702]: 2025-10-11 08:33:12.865130952 +0000 UTC m=+1.551827885 container remove 79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heyrovsky, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:33:12 compute-0 systemd[1]: libpod-conmon-79ce021772aa6259336c3b3c930ab4efd23bdb4134559e6bc0a09d0355663660.scope: Deactivated successfully.
Oct 11 08:33:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:33:12 compute-0 sudo[261586]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:12 compute-0 sudo[261760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:33:13 compute-0 sudo[261760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:13 compute-0 sudo[261760]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:13 compute-0 sudo[261785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:33:13 compute-0 sudo[261785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:13 compute-0 sudo[261785]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:13 compute-0 sudo[261810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:33:13 compute-0 sudo[261810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:13 compute-0 sudo[261810]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:13 compute-0 sudo[261835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:33:13 compute-0 sudo[261835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:13 compute-0 podman[261900]: 2025-10-11 08:33:13.743502444 +0000 UTC m=+0.058223352 container create 857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:33:13 compute-0 systemd[1]: Started libpod-conmon-857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92.scope.
Oct 11 08:33:13 compute-0 podman[261900]: 2025-10-11 08:33:13.715410127 +0000 UTC m=+0.030131085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:33:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:33:13 compute-0 podman[261900]: 2025-10-11 08:33:13.842786545 +0000 UTC m=+0.157507473 container init 857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 08:33:13 compute-0 podman[261900]: 2025-10-11 08:33:13.855929633 +0000 UTC m=+0.170650551 container start 857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 08:33:13 compute-0 podman[261900]: 2025-10-11 08:33:13.860413749 +0000 UTC m=+0.175134657 container attach 857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:33:13 compute-0 eloquent_ishizaka[261916]: 167 167
Oct 11 08:33:13 compute-0 podman[261900]: 2025-10-11 08:33:13.863098164 +0000 UTC m=+0.177819092 container died 857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:33:13 compute-0 systemd[1]: libpod-857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92.scope: Deactivated successfully.
Oct 11 08:33:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-762e39850d97e8c992d041f3e633f9520a5dca465fe71c916518f90c993998ca-merged.mount: Deactivated successfully.
Oct 11 08:33:13 compute-0 podman[261900]: 2025-10-11 08:33:13.910486151 +0000 UTC m=+0.225207059 container remove 857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 08:33:13 compute-0 systemd[1]: libpod-conmon-857023e21739820ab2936d8a0b19d72f393ffb616e54b351d9bc59e15b035c92.scope: Deactivated successfully.
Oct 11 08:33:14 compute-0 podman[261939]: 2025-10-11 08:33:14.16571183 +0000 UTC m=+0.068850350 container create 72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 08:33:14 compute-0 systemd[1]: Started libpod-conmon-72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6.scope.
Oct 11 08:33:14 compute-0 podman[261939]: 2025-10-11 08:33:14.139766163 +0000 UTC m=+0.042904723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:33:14 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cae115275bfe2671a9997fe47531fed97b2decb0e525d5c44f6b973dba96477/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cae115275bfe2671a9997fe47531fed97b2decb0e525d5c44f6b973dba96477/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cae115275bfe2671a9997fe47531fed97b2decb0e525d5c44f6b973dba96477/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cae115275bfe2671a9997fe47531fed97b2decb0e525d5c44f6b973dba96477/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:33:14 compute-0 podman[261939]: 2025-10-11 08:33:14.275171475 +0000 UTC m=+0.178310035 container init 72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 08:33:14 compute-0 podman[261939]: 2025-10-11 08:33:14.287085259 +0000 UTC m=+0.190223779 container start 72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 08:33:14 compute-0 podman[261939]: 2025-10-11 08:33:14.291261786 +0000 UTC m=+0.194400356 container attach 72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 08:33:14 compute-0 ceph-mon[74313]: pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:15 compute-0 tender_raman[261956]: {
Oct 11 08:33:15 compute-0 tender_raman[261956]:     "0": [
Oct 11 08:33:15 compute-0 tender_raman[261956]:         {
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "devices": [
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "/dev/loop3"
Oct 11 08:33:15 compute-0 tender_raman[261956]:             ],
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "lv_name": "ceph_lv0",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "lv_size": "21470642176",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "name": "ceph_lv0",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "tags": {
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.cluster_name": "ceph",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.crush_device_class": "",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.encrypted": "0",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.osd_id": "0",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.type": "block",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.vdo": "0"
Oct 11 08:33:15 compute-0 tender_raman[261956]:             },
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "type": "block",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "vg_name": "ceph_vg0"
Oct 11 08:33:15 compute-0 tender_raman[261956]:         }
Oct 11 08:33:15 compute-0 tender_raman[261956]:     ],
Oct 11 08:33:15 compute-0 tender_raman[261956]:     "1": [
Oct 11 08:33:15 compute-0 tender_raman[261956]:         {
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "devices": [
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "/dev/loop4"
Oct 11 08:33:15 compute-0 tender_raman[261956]:             ],
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "lv_name": "ceph_lv1",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "lv_size": "21470642176",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "name": "ceph_lv1",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "tags": {
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.cluster_name": "ceph",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.crush_device_class": "",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.encrypted": "0",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.osd_id": "1",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.type": "block",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.vdo": "0"
Oct 11 08:33:15 compute-0 tender_raman[261956]:             },
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "type": "block",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "vg_name": "ceph_vg1"
Oct 11 08:33:15 compute-0 tender_raman[261956]:         }
Oct 11 08:33:15 compute-0 tender_raman[261956]:     ],
Oct 11 08:33:15 compute-0 tender_raman[261956]:     "2": [
Oct 11 08:33:15 compute-0 tender_raman[261956]:         {
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "devices": [
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "/dev/loop5"
Oct 11 08:33:15 compute-0 tender_raman[261956]:             ],
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "lv_name": "ceph_lv2",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "lv_size": "21470642176",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "name": "ceph_lv2",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "tags": {
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.cluster_name": "ceph",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.crush_device_class": "",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.encrypted": "0",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.osd_id": "2",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.type": "block",
Oct 11 08:33:15 compute-0 tender_raman[261956]:                 "ceph.vdo": "0"
Oct 11 08:33:15 compute-0 tender_raman[261956]:             },
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "type": "block",
Oct 11 08:33:15 compute-0 tender_raman[261956]:             "vg_name": "ceph_vg2"
Oct 11 08:33:15 compute-0 tender_raman[261956]:         }
Oct 11 08:33:15 compute-0 tender_raman[261956]:     ]
Oct 11 08:33:15 compute-0 tender_raman[261956]: }
Oct 11 08:33:15 compute-0 systemd[1]: libpod-72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6.scope: Deactivated successfully.
Oct 11 08:33:15 compute-0 podman[261939]: 2025-10-11 08:33:15.062421485 +0000 UTC m=+0.965560005 container died 72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 08:33:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cae115275bfe2671a9997fe47531fed97b2decb0e525d5c44f6b973dba96477-merged.mount: Deactivated successfully.
Oct 11 08:33:15 compute-0 podman[261939]: 2025-10-11 08:33:15.148113815 +0000 UTC m=+1.051252335 container remove 72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_raman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:33:15 compute-0 systemd[1]: libpod-conmon-72e5a7555918f1cc8bf772d877b94073602a2c3f723a88ca972d948fb51152c6.scope: Deactivated successfully.
Oct 11 08:33:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:33:15.162 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:33:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:33:15.164 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:33:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:33:15.164 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:33:15 compute-0 sudo[261835]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:15 compute-0 sudo[261977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:33:15 compute-0 sudo[261977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:15 compute-0 sudo[261977]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:15 compute-0 sudo[262002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:33:15 compute-0 sudo[262002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:15 compute-0 sudo[262002]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:15 compute-0 sudo[262027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:33:15 compute-0 sudo[262027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:15 compute-0 sudo[262027]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:15 compute-0 sudo[262052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:33:15 compute-0 sudo[262052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:16 compute-0 podman[262120]: 2025-10-11 08:33:16.059746248 +0000 UTC m=+0.060109604 container create ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_newton, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:33:16 compute-0 systemd[1]: Started libpod-conmon-ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206.scope.
Oct 11 08:33:16 compute-0 podman[262120]: 2025-10-11 08:33:16.032985099 +0000 UTC m=+0.033348505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:33:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:33:16 compute-0 podman[262120]: 2025-10-11 08:33:16.161495028 +0000 UTC m=+0.161858414 container init ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 08:33:16 compute-0 podman[262120]: 2025-10-11 08:33:16.172470426 +0000 UTC m=+0.172833782 container start ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_newton, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:33:16 compute-0 podman[262120]: 2025-10-11 08:33:16.176632062 +0000 UTC m=+0.176995458 container attach ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_newton, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:33:16 compute-0 practical_newton[262136]: 167 167
Oct 11 08:33:16 compute-0 systemd[1]: libpod-ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206.scope: Deactivated successfully.
Oct 11 08:33:16 compute-0 conmon[262136]: conmon ba38a7193cb0b3327b2d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206.scope/container/memory.events
Oct 11 08:33:16 compute-0 podman[262120]: 2025-10-11 08:33:16.181708704 +0000 UTC m=+0.182072100 container died ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_newton, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 08:33:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-21658351ee08594ad6d15308f312db4161d5a65e6524eca84b1ab1bc5b39fe35-merged.mount: Deactivated successfully.
Oct 11 08:33:16 compute-0 podman[262120]: 2025-10-11 08:33:16.233470074 +0000 UTC m=+0.233833420 container remove ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct 11 08:33:16 compute-0 systemd[1]: libpod-conmon-ba38a7193cb0b3327b2d2eb60fa45e945d981a409923db2fc39e3465ca2ee206.scope: Deactivated successfully.
Oct 11 08:33:16 compute-0 podman[262160]: 2025-10-11 08:33:16.468837617 +0000 UTC m=+0.057865122 container create 93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 08:33:16 compute-0 systemd[1]: Started libpod-conmon-93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb.scope.
Oct 11 08:33:16 compute-0 podman[262160]: 2025-10-11 08:33:16.439918087 +0000 UTC m=+0.028945612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:33:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc24891a63369e56025d2e7d4575ca968c312927f4f790d8694764d2827cf7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc24891a63369e56025d2e7d4575ca968c312927f4f790d8694764d2827cf7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc24891a63369e56025d2e7d4575ca968c312927f4f790d8694764d2827cf7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc24891a63369e56025d2e7d4575ca968c312927f4f790d8694764d2827cf7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:33:16 compute-0 podman[262160]: 2025-10-11 08:33:16.575098993 +0000 UTC m=+0.164126518 container init 93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lederberg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 08:33:16 compute-0 podman[262160]: 2025-10-11 08:33:16.593502038 +0000 UTC m=+0.182529543 container start 93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lederberg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 08:33:16 compute-0 podman[262160]: 2025-10-11 08:33:16.597875811 +0000 UTC m=+0.186903336 container attach 93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:33:16 compute-0 ceph-mon[74313]: pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]: {
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:         "osd_id": 2,
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:         "type": "bluestore"
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:     },
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:         "osd_id": 0,
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:         "type": "bluestore"
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:     },
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:         "osd_id": 1,
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:         "type": "bluestore"
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]:     }
Oct 11 08:33:17 compute-0 blissful_lederberg[262177]: }
Oct 11 08:33:17 compute-0 systemd[1]: libpod-93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb.scope: Deactivated successfully.
Oct 11 08:33:17 compute-0 systemd[1]: libpod-93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb.scope: Consumed 1.059s CPU time.
Oct 11 08:33:17 compute-0 podman[262210]: 2025-10-11 08:33:17.718249791 +0000 UTC m=+0.043540340 container died 93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lederberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 08:33:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cc24891a63369e56025d2e7d4575ca968c312927f4f790d8694764d2827cf7f-merged.mount: Deactivated successfully.
Oct 11 08:33:17 compute-0 podman[262210]: 2025-10-11 08:33:17.792418438 +0000 UTC m=+0.117708897 container remove 93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lederberg, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:33:17 compute-0 systemd[1]: libpod-conmon-93b10108e522ad3cc280194954ba144424bc64b935b9cdb6bbeb35a3be929fdb.scope: Deactivated successfully.
Oct 11 08:33:17 compute-0 sudo[262052]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:33:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:33:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:33:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:33:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 24eae480-c4d4-4477-8df7-12c1dbc1ed3f does not exist
Oct 11 08:33:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev bd0b60f5-3b8a-4fbb-97c9-bec1f534c497 does not exist
Oct 11 08:33:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:33:17 compute-0 sudo[262225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:33:17 compute-0 sudo[262225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:17 compute-0 sudo[262225]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:18 compute-0 sudo[262250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:33:18 compute-0 sudo[262250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:33:18 compute-0 sudo[262250]: pam_unix(sudo:session): session closed for user root
Oct 11 08:33:18 compute-0 ceph-mon[74313]: pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:33:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:33:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:20 compute-0 ceph-mon[74313]: pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:22 compute-0 ceph-mon[74313]: pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:33:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:33:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:33:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:33:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:33:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:33:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:33:24 compute-0 ceph-mon[74313]: pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:26 compute-0 podman[262275]: 2025-10-11 08:33:26.808858856 +0000 UTC m=+0.102576354 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 08:33:26 compute-0 ceph-mon[74313]: pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:33:28 compute-0 ceph-mon[74313]: pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:30 compute-0 ceph-mon[74313]: pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:33:32 compute-0 ceph-mon[74313]: pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:33:33 compute-0 podman[262296]: 2025-10-11 08:33:33.800994356 +0000 UTC m=+0.093630854 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 08:33:34 compute-0 ceph-mon[74313]: pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:33:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:33:36 compute-0 ceph-mon[74313]: pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:33:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:33:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:33:38 compute-0 podman[262316]: 2025-10-11 08:33:38.794281151 +0000 UTC m=+0.089379985 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:33:38 compute-0 ceph-mon[74313]: pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:33:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:33:40 compute-0 podman[262337]: 2025-10-11 08:33:40.831179061 +0000 UTC m=+0.123878001 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 11 08:33:40 compute-0 ceph-mon[74313]: pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:33:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:33:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:33:42 compute-0 ceph-mon[74313]: pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:33:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:33:43 compute-0 nova_compute[260935]: 2025-10-11 08:33:43.659 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:33:43 compute-0 nova_compute[260935]: 2025-10-11 08:33:43.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:33:43 compute-0 nova_compute[260935]: 2025-10-11 08:33:43.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:33:43 compute-0 nova_compute[260935]: 2025-10-11 08:33:43.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:33:43 compute-0 nova_compute[260935]: 2025-10-11 08:33:43.746 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:33:43 compute-0 nova_compute[260935]: 2025-10-11 08:33:43.747 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:33:43 compute-0 nova_compute[260935]: 2025-10-11 08:33:43.747 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:33:43 compute-0 nova_compute[260935]: 2025-10-11 08:33:43.748 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:33:43 compute-0 nova_compute[260935]: 2025-10-11 08:33:43.748 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:33:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:33:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/961444815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:33:44 compute-0 nova_compute[260935]: 2025-10-11 08:33:44.215 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:33:44 compute-0 nova_compute[260935]: 2025-10-11 08:33:44.451 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:33:44 compute-0 nova_compute[260935]: 2025-10-11 08:33:44.453 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5163MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:33:44 compute-0 nova_compute[260935]: 2025-10-11 08:33:44.453 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:33:44 compute-0 nova_compute[260935]: 2025-10-11 08:33:44.454 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:33:44 compute-0 nova_compute[260935]: 2025-10-11 08:33:44.587 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:33:44 compute-0 nova_compute[260935]: 2025-10-11 08:33:44.588 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:33:44 compute-0 nova_compute[260935]: 2025-10-11 08:33:44.606 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:33:45 compute-0 ceph-mon[74313]: pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 08:33:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/961444815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:33:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:33:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/585207780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:33:45 compute-0 nova_compute[260935]: 2025-10-11 08:33:45.084 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:33:45 compute-0 nova_compute[260935]: 2025-10-11 08:33:45.092 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:33:45 compute-0 nova_compute[260935]: 2025-10-11 08:33:45.142 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:33:45 compute-0 nova_compute[260935]: 2025-10-11 08:33:45.145 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:33:45 compute-0 nova_compute[260935]: 2025-10-11 08:33:45.146 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:33:45 compute-0 nova_compute[260935]: 2025-10-11 08:33:45.146 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:33:45 compute-0 nova_compute[260935]: 2025-10-11 08:33:45.188 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:33:45 compute-0 nova_compute[260935]: 2025-10-11 08:33:45.188 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:33:45 compute-0 nova_compute[260935]: 2025-10-11 08:33:45.189 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:33:45 compute-0 nova_compute[260935]: 2025-10-11 08:33:45.189 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:33:45 compute-0 nova_compute[260935]: 2025-10-11 08:33:45.207 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 08:33:45 compute-0 nova_compute[260935]: 2025-10-11 08:33:45.207 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:33:45 compute-0 nova_compute[260935]: 2025-10-11 08:33:45.208 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:33:45 compute-0 nova_compute[260935]: 2025-10-11 08:33:45.209 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:33:45 compute-0 nova_compute[260935]: 2025-10-11 08:33:45.209 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:33:45 compute-0 nova_compute[260935]: 2025-10-11 08:33:45.210 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:33:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:46 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/585207780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:33:47 compute-0 ceph-mon[74313]: pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:33:49 compute-0 ceph-mon[74313]: pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 11 08:33:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2423047993' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 11 08:33:50 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14355 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 11 08:33:50 compute-0 ceph-mgr[74605]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 08:33:50 compute-0 ceph-mgr[74605]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 08:33:51 compute-0 ceph-mon[74313]: pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2423047993' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 11 08:33:51 compute-0 ceph-mon[74313]: from='client.14355 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 11 08:33:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:33:53 compute-0 ceph-mon[74313]: pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:33:54
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', '.mgr', '.rgw.root', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'images', 'default.rgw.meta']
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:33:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:33:55 compute-0 ceph-mon[74313]: pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:57 compute-0 ceph-mon[74313]: pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:57 compute-0 podman[262408]: 2025-10-11 08:33:57.814107249 +0000 UTC m=+0.104840377 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:33:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:33:59 compute-0 ceph-mon[74313]: pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:33:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:01 compute-0 ceph-mon[74313]: pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:34:03 compute-0 ceph-mon[74313]: pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:34:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:34:04 compute-0 podman[262427]: 2025-10-11 08:34:04.797883555 +0000 UTC m=+0.093418377 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:34:05 compute-0 ceph-mon[74313]: pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:07 compute-0 ceph-mon[74313]: pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:34:09 compute-0 ceph-mon[74313]: pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:09 compute-0 podman[262448]: 2025-10-11 08:34:09.786846431 +0000 UTC m=+0.088457902 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 11 08:34:11 compute-0 ceph-mon[74313]: pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:11 compute-0 podman[262468]: 2025-10-11 08:34:11.831702599 +0000 UTC m=+0.126339847 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:34:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:34:13 compute-0 ceph-mon[74313]: pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 11 08:34:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1413615351' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 11 08:34:14 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 11 08:34:14 compute-0 ceph-mgr[74605]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 08:34:14 compute-0 ceph-mgr[74605]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 08:34:15 compute-0 ceph-mon[74313]: pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1413615351' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 11 08:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:34:15.163 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:34:15.164 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:34:15.164 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:34:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:16 compute-0 ceph-mon[74313]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 11 08:34:17 compute-0 ceph-mon[74313]: pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:34:18 compute-0 sudo[262495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:34:18 compute-0 sudo[262495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:18 compute-0 sudo[262495]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:18 compute-0 sudo[262520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:34:18 compute-0 sudo[262520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:18 compute-0 sudo[262520]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:18 compute-0 sudo[262545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:34:18 compute-0 sudo[262545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:18 compute-0 sudo[262545]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:18 compute-0 sudo[262570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 08:34:18 compute-0 sudo[262570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:19 compute-0 podman[262667]: 2025-10-11 08:34:19.131027557 +0000 UTC m=+0.093816714 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 08:34:19 compute-0 ceph-mon[74313]: pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:19 compute-0 podman[262667]: 2025-10-11 08:34:19.264355241 +0000 UTC m=+0.227144348 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:34:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:20 compute-0 sudo[262570]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:34:20 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:34:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:34:20 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:34:20 compute-0 sudo[262827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:34:20 compute-0 sudo[262827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:20 compute-0 sudo[262827]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:20 compute-0 sudo[262852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:34:20 compute-0 sudo[262852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:20 compute-0 sudo[262852]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:20 compute-0 sudo[262877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:34:20 compute-0 sudo[262877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:20 compute-0 sudo[262877]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:20 compute-0 sudo[262902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:34:20 compute-0 sudo[262902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:21 compute-0 sudo[262902]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:34:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:34:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:34:21 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:34:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:34:21 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:34:21 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 330636ef-75cf-4c46-87fa-f77ff9d27976 does not exist
Oct 11 08:34:21 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 18f31804-1006-4b15-b159-53fa97c74f00 does not exist
Oct 11 08:34:21 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ac56f209-67d3-43b7-a347-8b63ed516cfc does not exist
Oct 11 08:34:21 compute-0 ceph-mon[74313]: pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:21 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:34:21 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:34:21 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:34:21 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:34:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:34:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:34:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:34:21 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:34:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:34:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:34:21 compute-0 sudo[262959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:34:21 compute-0 sudo[262959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:21 compute-0 sudo[262959]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:21 compute-0 sudo[262984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:34:21 compute-0 sudo[262984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:21 compute-0 sudo[262984]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:21 compute-0 sudo[263009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:34:21 compute-0 sudo[263009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:21 compute-0 sudo[263009]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:21 compute-0 sudo[263034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:34:21 compute-0 sudo[263034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:22 compute-0 podman[263100]: 2025-10-11 08:34:22.092770026 +0000 UTC m=+0.069818243 container create 7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:34:22 compute-0 systemd[1]: Started libpod-conmon-7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385.scope.
Oct 11 08:34:22 compute-0 podman[263100]: 2025-10-11 08:34:22.064111122 +0000 UTC m=+0.041159389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:34:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:34:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:34:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:34:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:34:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:34:22 compute-0 podman[263100]: 2025-10-11 08:34:22.215402616 +0000 UTC m=+0.192450833 container init 7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 08:34:22 compute-0 podman[263100]: 2025-10-11 08:34:22.226127021 +0000 UTC m=+0.203175208 container start 7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct 11 08:34:22 compute-0 podman[263100]: 2025-10-11 08:34:22.229844506 +0000 UTC m=+0.206892703 container attach 7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:34:22 compute-0 dazzling_sutherland[263116]: 167 167
Oct 11 08:34:22 compute-0 systemd[1]: libpod-7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385.scope: Deactivated successfully.
Oct 11 08:34:22 compute-0 podman[263100]: 2025-10-11 08:34:22.236177186 +0000 UTC m=+0.213225393 container died 7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct 11 08:34:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d43111cdd22f79d8f92b1408d6efce52bdb8422c3d45991be4c947f1fce3a6fb-merged.mount: Deactivated successfully.
Oct 11 08:34:22 compute-0 podman[263100]: 2025-10-11 08:34:22.283833608 +0000 UTC m=+0.260881785 container remove 7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:34:22 compute-0 systemd[1]: libpod-conmon-7b7e159129314742cb1e791e87c085acee78c45c0819380e382b40c6bf8f3385.scope: Deactivated successfully.
Oct 11 08:34:22 compute-0 podman[263139]: 2025-10-11 08:34:22.484931606 +0000 UTC m=+0.054288432 container create 9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:34:22 compute-0 systemd[1]: Started libpod-conmon-9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028.scope.
Oct 11 08:34:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:34:22 compute-0 podman[263139]: 2025-10-11 08:34:22.464432794 +0000 UTC m=+0.033789650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12ae1ad029cb0d39f85212b74415bea721c4dcc1b023a2a78ea209dcf59ffd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12ae1ad029cb0d39f85212b74415bea721c4dcc1b023a2a78ea209dcf59ffd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12ae1ad029cb0d39f85212b74415bea721c4dcc1b023a2a78ea209dcf59ffd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12ae1ad029cb0d39f85212b74415bea721c4dcc1b023a2a78ea209dcf59ffd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12ae1ad029cb0d39f85212b74415bea721c4dcc1b023a2a78ea209dcf59ffd7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:34:22 compute-0 podman[263139]: 2025-10-11 08:34:22.577316988 +0000 UTC m=+0.146673894 container init 9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_zhukovsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:34:22 compute-0 podman[263139]: 2025-10-11 08:34:22.593125947 +0000 UTC m=+0.162482803 container start 9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:34:22 compute-0 podman[263139]: 2025-10-11 08:34:22.597007017 +0000 UTC m=+0.166363913 container attach 9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 08:34:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:34:23 compute-0 ceph-mon[74313]: pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:23 compute-0 laughing_zhukovsky[263155]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:34:23 compute-0 laughing_zhukovsky[263155]: --> relative data size: 1.0
Oct 11 08:34:23 compute-0 laughing_zhukovsky[263155]: --> All data devices are unavailable
Oct 11 08:34:23 compute-0 systemd[1]: libpod-9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028.scope: Deactivated successfully.
Oct 11 08:34:23 compute-0 systemd[1]: libpod-9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028.scope: Consumed 1.058s CPU time.
Oct 11 08:34:23 compute-0 podman[263184]: 2025-10-11 08:34:23.781601867 +0000 UTC m=+0.044345570 container died 9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_zhukovsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:34:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b12ae1ad029cb0d39f85212b74415bea721c4dcc1b023a2a78ea209dcf59ffd7-merged.mount: Deactivated successfully.
Oct 11 08:34:23 compute-0 podman[263184]: 2025-10-11 08:34:23.856779521 +0000 UTC m=+0.119523174 container remove 9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:34:23 compute-0 systemd[1]: libpod-conmon-9ebf724888c2fad41f576291fbb54d242db95edd54a32e0895a46b9603acc028.scope: Deactivated successfully.
Oct 11 08:34:23 compute-0 sudo[263034]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:24 compute-0 sudo[263199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:34:24 compute-0 sudo[263199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:24 compute-0 sudo[263199]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:24 compute-0 sudo[263224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:34:24 compute-0 sudo[263224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:24 compute-0 sudo[263224]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:24 compute-0 sudo[263249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:34:24 compute-0 sudo[263249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:24 compute-0 sudo[263249]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:24 compute-0 sudo[263274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:34:24 compute-0 sudo[263274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:24 compute-0 podman[263339]: 2025-10-11 08:34:24.762308291 +0000 UTC m=+0.065417707 container create e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:34:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:34:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:34:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:34:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:34:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:34:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:34:24 compute-0 systemd[1]: Started libpod-conmon-e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea.scope.
Oct 11 08:34:24 compute-0 podman[263339]: 2025-10-11 08:34:24.736363215 +0000 UTC m=+0.039472671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:34:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:34:24 compute-0 podman[263339]: 2025-10-11 08:34:24.861484686 +0000 UTC m=+0.164594122 container init e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:34:24 compute-0 podman[263339]: 2025-10-11 08:34:24.872741886 +0000 UTC m=+0.175851302 container start e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct 11 08:34:24 compute-0 happy_heisenberg[263355]: 167 167
Oct 11 08:34:24 compute-0 podman[263339]: 2025-10-11 08:34:24.877442049 +0000 UTC m=+0.180551495 container attach e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:34:24 compute-0 systemd[1]: libpod-e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea.scope: Deactivated successfully.
Oct 11 08:34:24 compute-0 podman[263339]: 2025-10-11 08:34:24.878209891 +0000 UTC m=+0.181319297 container died e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 11 08:34:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b43a95d47aadf706783ce939f857176acc3166697dc62b8d550f5db2f056530b-merged.mount: Deactivated successfully.
Oct 11 08:34:24 compute-0 podman[263339]: 2025-10-11 08:34:24.928612601 +0000 UTC m=+0.231722007 container remove e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:34:24 compute-0 systemd[1]: libpod-conmon-e44dc9854e88d9f40d4531834a1063ebc96dfe591ab95860440ec16b4ba8adea.scope: Deactivated successfully.
Oct 11 08:34:25 compute-0 podman[263380]: 2025-10-11 08:34:25.151059605 +0000 UTC m=+0.069443112 container create 6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 08:34:25 compute-0 systemd[1]: Started libpod-conmon-6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601.scope.
Oct 11 08:34:25 compute-0 ceph-mon[74313]: pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:25 compute-0 podman[263380]: 2025-10-11 08:34:25.125274853 +0000 UTC m=+0.043658410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:34:25 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:34:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312cf7dcc4072764221615c6d53179518d1506027c55429fd81103bba42e6771/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:34:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312cf7dcc4072764221615c6d53179518d1506027c55429fd81103bba42e6771/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:34:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312cf7dcc4072764221615c6d53179518d1506027c55429fd81103bba42e6771/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:34:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312cf7dcc4072764221615c6d53179518d1506027c55429fd81103bba42e6771/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:34:25 compute-0 podman[263380]: 2025-10-11 08:34:25.256618471 +0000 UTC m=+0.175002018 container init 6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 08:34:25 compute-0 podman[263380]: 2025-10-11 08:34:25.275367283 +0000 UTC m=+0.193750790 container start 6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brahmagupta, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:34:25 compute-0 podman[263380]: 2025-10-11 08:34:25.281001533 +0000 UTC m=+0.199385110 container attach 6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brahmagupta, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:34:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]: {
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:     "0": [
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:         {
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "devices": [
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "/dev/loop3"
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             ],
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "lv_name": "ceph_lv0",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "lv_size": "21470642176",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "name": "ceph_lv0",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "tags": {
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.cluster_name": "ceph",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.crush_device_class": "",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.encrypted": "0",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.osd_id": "0",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.type": "block",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.vdo": "0"
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             },
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "type": "block",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "vg_name": "ceph_vg0"
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:         }
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:     ],
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:     "1": [
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:         {
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "devices": [
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "/dev/loop4"
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             ],
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "lv_name": "ceph_lv1",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "lv_size": "21470642176",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "name": "ceph_lv1",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "tags": {
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.cluster_name": "ceph",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.crush_device_class": "",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.encrypted": "0",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.osd_id": "1",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.type": "block",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.vdo": "0"
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             },
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "type": "block",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "vg_name": "ceph_vg1"
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:         }
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:     ],
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:     "2": [
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:         {
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "devices": [
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "/dev/loop5"
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             ],
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "lv_name": "ceph_lv2",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "lv_size": "21470642176",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "name": "ceph_lv2",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "tags": {
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.cluster_name": "ceph",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.crush_device_class": "",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.encrypted": "0",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.osd_id": "2",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.type": "block",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:                 "ceph.vdo": "0"
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             },
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "type": "block",
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:             "vg_name": "ceph_vg2"
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:         }
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]:     ]
Oct 11 08:34:26 compute-0 awesome_brahmagupta[263397]: }
Oct 11 08:34:26 compute-0 systemd[1]: libpod-6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601.scope: Deactivated successfully.
Oct 11 08:34:26 compute-0 podman[263380]: 2025-10-11 08:34:26.046555321 +0000 UTC m=+0.964938828 container died 6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brahmagupta, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 08:34:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-312cf7dcc4072764221615c6d53179518d1506027c55429fd81103bba42e6771-merged.mount: Deactivated successfully.
Oct 11 08:34:26 compute-0 podman[263380]: 2025-10-11 08:34:26.117803993 +0000 UTC m=+1.036187480 container remove 6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:34:26 compute-0 systemd[1]: libpod-conmon-6bb1b8f74bf43ab036e8106bb0b15490ccd99160aa0d46d931784b15cd94a601.scope: Deactivated successfully.
Oct 11 08:34:26 compute-0 sudo[263274]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:26 compute-0 sudo[263420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:34:26 compute-0 sudo[263420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:26 compute-0 sudo[263420]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:26 compute-0 sudo[263445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:34:26 compute-0 sudo[263445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:26 compute-0 sudo[263445]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:26 compute-0 sudo[263470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:34:26 compute-0 sudo[263470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:26 compute-0 sudo[263470]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:26 compute-0 sudo[263495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:34:26 compute-0 sudo[263495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:26 compute-0 podman[263561]: 2025-10-11 08:34:26.941521021 +0000 UTC m=+0.063735490 container create 1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 11 08:34:26 compute-0 systemd[1]: Started libpod-conmon-1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f.scope.
Oct 11 08:34:27 compute-0 podman[263561]: 2025-10-11 08:34:26.916390147 +0000 UTC m=+0.038604656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:34:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:34:27 compute-0 podman[263561]: 2025-10-11 08:34:27.046338675 +0000 UTC m=+0.168553174 container init 1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:34:27 compute-0 podman[263561]: 2025-10-11 08:34:27.057326697 +0000 UTC m=+0.179541156 container start 1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 08:34:27 compute-0 pedantic_williamson[263578]: 167 167
Oct 11 08:34:27 compute-0 podman[263561]: 2025-10-11 08:34:27.061857536 +0000 UTC m=+0.184072055 container attach 1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 08:34:27 compute-0 systemd[1]: libpod-1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f.scope: Deactivated successfully.
Oct 11 08:34:27 compute-0 podman[263561]: 2025-10-11 08:34:27.063317057 +0000 UTC m=+0.185531526 container died 1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williamson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:34:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-33467f67ae425702c454a28bbdc4ace11d198d3f350c865f0c8d758cdec93162-merged.mount: Deactivated successfully.
Oct 11 08:34:27 compute-0 podman[263561]: 2025-10-11 08:34:27.115271192 +0000 UTC m=+0.237485661 container remove 1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 08:34:27 compute-0 systemd[1]: libpod-conmon-1f68612ef2160fd57a00fb8365d8f29c8cff10906d9664ad1bf0601cfdbd564f.scope: Deactivated successfully.
Oct 11 08:34:27 compute-0 ceph-mon[74313]: pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:27 compute-0 podman[263602]: 2025-10-11 08:34:27.356250561 +0000 UTC m=+0.062697500 container create 5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_turing, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 08:34:27 compute-0 systemd[1]: Started libpod-conmon-5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b.scope.
Oct 11 08:34:27 compute-0 podman[263602]: 2025-10-11 08:34:27.32978298 +0000 UTC m=+0.036229959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:34:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8179358aae0d6d39e922f88153bb6e46897de32107497cffffc94754381d0196/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8179358aae0d6d39e922f88153bb6e46897de32107497cffffc94754381d0196/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8179358aae0d6d39e922f88153bb6e46897de32107497cffffc94754381d0196/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8179358aae0d6d39e922f88153bb6e46897de32107497cffffc94754381d0196/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:34:27 compute-0 podman[263602]: 2025-10-11 08:34:27.454259663 +0000 UTC m=+0.160706652 container init 5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_turing, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 08:34:27 compute-0 podman[263602]: 2025-10-11 08:34:27.465749589 +0000 UTC m=+0.172196538 container start 5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:34:27 compute-0 podman[263602]: 2025-10-11 08:34:27.470382131 +0000 UTC m=+0.176829060 container attach 5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 08:34:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:34:28 compute-0 ceph-mon[74313]: pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:28 compute-0 thirsty_turing[263618]: {
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:         "osd_id": 2,
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:         "type": "bluestore"
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:     },
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:         "osd_id": 0,
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:         "type": "bluestore"
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:     },
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:         "osd_id": 1,
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:         "type": "bluestore"
Oct 11 08:34:28 compute-0 thirsty_turing[263618]:     }
Oct 11 08:34:28 compute-0 thirsty_turing[263618]: }
Oct 11 08:34:28 compute-0 systemd[1]: libpod-5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b.scope: Deactivated successfully.
Oct 11 08:34:28 compute-0 systemd[1]: libpod-5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b.scope: Consumed 1.097s CPU time.
Oct 11 08:34:28 compute-0 podman[263602]: 2025-10-11 08:34:28.553378078 +0000 UTC m=+1.259825057 container died 5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_turing, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 08:34:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-8179358aae0d6d39e922f88153bb6e46897de32107497cffffc94754381d0196-merged.mount: Deactivated successfully.
Oct 11 08:34:28 compute-0 podman[263602]: 2025-10-11 08:34:28.62706753 +0000 UTC m=+1.333514439 container remove 5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_turing, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:34:28 compute-0 systemd[1]: libpod-conmon-5a696e7568302684f6160f1debc1436b0cba85417acbe3bc55913ffa495de41b.scope: Deactivated successfully.
Oct 11 08:34:28 compute-0 sudo[263495]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:34:28 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:34:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:34:28 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:34:28 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 83d83d10-777c-47c3-bfdc-cf34beff0b3f does not exist
Oct 11 08:34:28 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 3809be50-8880-4bf2-a159-b2f07562729f does not exist
Oct 11 08:34:28 compute-0 podman[263652]: 2025-10-11 08:34:28.708922323 +0000 UTC m=+0.114023377 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 11 08:34:28 compute-0 sudo[263680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:34:28 compute-0 sudo[263680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:28 compute-0 sudo[263680]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:28 compute-0 sudo[263705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:34:28 compute-0 sudo[263705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:34:28 compute-0 sudo[263705]: pam_unix(sudo:session): session closed for user root
Oct 11 08:34:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:29 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:34:29 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:34:30 compute-0 ceph-mon[74313]: pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:32 compute-0 ceph-mon[74313]: pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:34:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:34 compute-0 ceph-mon[74313]: pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:35 compute-0 podman[263730]: 2025-10-11 08:34:35.791189849 +0000 UTC m=+0.089792029 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:34:36 compute-0 ceph-mon[74313]: pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:34:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1935225412' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:34:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:34:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1935225412' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:34:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1935225412' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:34:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1935225412' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:34:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:34:38 compute-0 ceph-mon[74313]: pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:40 compute-0 ceph-mon[74313]: pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:40 compute-0 podman[263750]: 2025-10-11 08:34:40.801979494 +0000 UTC m=+0.106027661 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 08:34:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:42 compute-0 ceph-mon[74313]: pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:42 compute-0 podman[263771]: 2025-10-11 08:34:42.893666169 +0000 UTC m=+0.191103955 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 08:34:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:34:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:43 compute-0 nova_compute[260935]: 2025-10-11 08:34:43.719 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:34:44 compute-0 nova_compute[260935]: 2025-10-11 08:34:44.198 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:34:44 compute-0 nova_compute[260935]: 2025-10-11 08:34:44.198 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:34:44 compute-0 nova_compute[260935]: 2025-10-11 08:34:44.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:34:44 compute-0 nova_compute[260935]: 2025-10-11 08:34:44.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:34:44 compute-0 nova_compute[260935]: 2025-10-11 08:34:44.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:34:44 compute-0 ceph-mon[74313]: pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:45 compute-0 nova_compute[260935]: 2025-10-11 08:34:45.113 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 08:34:45 compute-0 nova_compute[260935]: 2025-10-11 08:34:45.114 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:34:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:45 compute-0 nova_compute[260935]: 2025-10-11 08:34:45.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:34:45 compute-0 nova_compute[260935]: 2025-10-11 08:34:45.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:34:45 compute-0 nova_compute[260935]: 2025-10-11 08:34:45.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:34:45 compute-0 nova_compute[260935]: 2025-10-11 08:34:45.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:34:45 compute-0 nova_compute[260935]: 2025-10-11 08:34:45.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:34:45 compute-0 nova_compute[260935]: 2025-10-11 08:34:45.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:34:46 compute-0 nova_compute[260935]: 2025-10-11 08:34:46.041 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:34:46 compute-0 nova_compute[260935]: 2025-10-11 08:34:46.042 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:34:46 compute-0 nova_compute[260935]: 2025-10-11 08:34:46.042 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:34:46 compute-0 nova_compute[260935]: 2025-10-11 08:34:46.043 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:34:46 compute-0 nova_compute[260935]: 2025-10-11 08:34:46.043 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:34:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:34:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3611365352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:34:46 compute-0 nova_compute[260935]: 2025-10-11 08:34:46.497 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:34:46 compute-0 ceph-mon[74313]: pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:46 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3611365352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:34:46 compute-0 nova_compute[260935]: 2025-10-11 08:34:46.791 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:34:46 compute-0 nova_compute[260935]: 2025-10-11 08:34:46.793 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5133MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:34:46 compute-0 nova_compute[260935]: 2025-10-11 08:34:46.794 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:34:46 compute-0 nova_compute[260935]: 2025-10-11 08:34:46.794 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:34:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:47 compute-0 nova_compute[260935]: 2025-10-11 08:34:47.711 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:34:47 compute-0 nova_compute[260935]: 2025-10-11 08:34:47.712 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:34:47 compute-0 nova_compute[260935]: 2025-10-11 08:34:47.744 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:34:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:34:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:34:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300747106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:34:48 compute-0 nova_compute[260935]: 2025-10-11 08:34:48.259 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:34:48 compute-0 nova_compute[260935]: 2025-10-11 08:34:48.264 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:34:48 compute-0 nova_compute[260935]: 2025-10-11 08:34:48.302 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:34:48 compute-0 nova_compute[260935]: 2025-10-11 08:34:48.304 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:34:48 compute-0 nova_compute[260935]: 2025-10-11 08:34:48.304 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:34:48 compute-0 ceph-mon[74313]: pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1300747106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:34:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:50 compute-0 ceph-mon[74313]: pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:52 compute-0 ceph-mon[74313]: pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:34:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:34:53.467 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:34:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:34:53.469 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 08:34:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:34:53.471 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:34:54
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.mgr', '.rgw.root', 'backups', 'images', 'vms', 'cephfs.cephfs.meta']
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:34:54 compute-0 ceph-mon[74313]: pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:34:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:34:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:56 compute-0 ceph-mon[74313]: pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:34:58 compute-0 ceph-mon[74313]: pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:34:59 compute-0 podman[263841]: 2025-10-11 08:34:59.786628731 +0000 UTC m=+0.094177734 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 08:35:00 compute-0 ceph-mon[74313]: pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:35:02 compute-0 ceph-mon[74313]: pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:35:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:35:04 compute-0 ceph-mon[74313]: pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:06 compute-0 podman[263860]: 2025-10-11 08:35:06.800672452 +0000 UTC m=+0.092229039 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 11 08:35:06 compute-0 ceph-mon[74313]: pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:35:08 compute-0 ceph-mon[74313]: pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:11 compute-0 ceph-mon[74313]: pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:11 compute-0 podman[263880]: 2025-10-11 08:35:11.836881708 +0000 UTC m=+0.129171908 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:35:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:35:13 compute-0 ceph-mon[74313]: pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:13 compute-0 podman[263900]: 2025-10-11 08:35:13.786660985 +0000 UTC m=+0.093882015 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 08:35:15 compute-0 ceph-mon[74313]: pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:35:15.165 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:35:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:35:15.165 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:35:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:35:15.165 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:35:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:17 compute-0 ceph-mon[74313]: pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:35:19 compute-0 ceph-mon[74313]: pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:21 compute-0 ceph-mon[74313]: pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:35:23 compute-0 ceph-mon[74313]: pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:35:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:35:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:35:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:35:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:35:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:35:25 compute-0 ceph-mon[74313]: pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:27 compute-0 ceph-mon[74313]: pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:35:29 compute-0 sudo[263926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:35:29 compute-0 sudo[263926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:29 compute-0 sudo[263926]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:29 compute-0 ceph-mon[74313]: pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:29 compute-0 sudo[263951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:35:29 compute-0 sudo[263951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:29 compute-0 sudo[263951]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:29 compute-0 sudo[263976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:35:29 compute-0 sudo[263976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:29 compute-0 sudo[263976]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:29 compute-0 sudo[264001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:35:29 compute-0 sudo[264001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:29 compute-0 sudo[264001]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:35:30 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:35:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:35:30 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:35:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:35:30 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:35:30 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev fffc74b7-0452-4a04-862d-1debf0509b7b does not exist
Oct 11 08:35:30 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev e35dca71-c121-4274-9461-2dd1c6dea592 does not exist
Oct 11 08:35:30 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ea556447-c788-428f-ae22-cb6dcb54cb7d does not exist
Oct 11 08:35:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:35:30 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:35:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:35:30 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:35:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:35:30 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:35:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:35:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:35:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:35:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:35:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:35:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:35:30 compute-0 sudo[264057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:35:30 compute-0 sudo[264057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:30 compute-0 sudo[264057]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:30 compute-0 sudo[264084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:35:30 compute-0 podman[264081]: 2025-10-11 08:35:30.247929274 +0000 UTC m=+0.088428581 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 08:35:30 compute-0 sudo[264084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:30 compute-0 sudo[264084]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:30 compute-0 sudo[264126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:35:30 compute-0 sudo[264126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:30 compute-0 sudo[264126]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:30 compute-0 sudo[264151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:35:30 compute-0 sudo[264151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:30 compute-0 podman[264218]: 2025-10-11 08:35:30.959583062 +0000 UTC m=+0.069879835 container create 67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:35:31 compute-0 systemd[1]: Started libpod-conmon-67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed.scope.
Oct 11 08:35:31 compute-0 podman[264218]: 2025-10-11 08:35:30.930748273 +0000 UTC m=+0.041045106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:35:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:35:31 compute-0 podman[264218]: 2025-10-11 08:35:31.061938647 +0000 UTC m=+0.172235490 container init 67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wozniak, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:35:31 compute-0 podman[264218]: 2025-10-11 08:35:31.074792962 +0000 UTC m=+0.185089715 container start 67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:35:31 compute-0 podman[264218]: 2025-10-11 08:35:31.078624619 +0000 UTC m=+0.188921412 container attach 67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wozniak, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:35:31 compute-0 fervent_wozniak[264234]: 167 167
Oct 11 08:35:31 compute-0 systemd[1]: libpod-67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed.scope: Deactivated successfully.
Oct 11 08:35:31 compute-0 podman[264218]: 2025-10-11 08:35:31.085160605 +0000 UTC m=+0.195457388 container died 67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wozniak, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 08:35:31 compute-0 ceph-mon[74313]: pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ea48686b1174e2fa93db28a0ac99c09b82874fdcad7b9b7dc62331b61320d80-merged.mount: Deactivated successfully.
Oct 11 08:35:31 compute-0 podman[264218]: 2025-10-11 08:35:31.13397507 +0000 UTC m=+0.244271823 container remove 67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:35:31 compute-0 systemd[1]: libpod-conmon-67802077e3ab81c22cc6c3191975046d8b5ae186d7dd451e04726868540c69ed.scope: Deactivated successfully.
Oct 11 08:35:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:31 compute-0 podman[264257]: 2025-10-11 08:35:31.405006333 +0000 UTC m=+0.074722532 container create f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 11 08:35:31 compute-0 systemd[1]: Started libpod-conmon-f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d.scope.
Oct 11 08:35:31 compute-0 podman[264257]: 2025-10-11 08:35:31.377895503 +0000 UTC m=+0.047611782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:35:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0a8bb12889f1c9a0087cd566d17266fc94f9eb65690f70fa9de65bfe488292/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0a8bb12889f1c9a0087cd566d17266fc94f9eb65690f70fa9de65bfe488292/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0a8bb12889f1c9a0087cd566d17266fc94f9eb65690f70fa9de65bfe488292/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0a8bb12889f1c9a0087cd566d17266fc94f9eb65690f70fa9de65bfe488292/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:35:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0a8bb12889f1c9a0087cd566d17266fc94f9eb65690f70fa9de65bfe488292/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:35:31 compute-0 podman[264257]: 2025-10-11 08:35:31.518243907 +0000 UTC m=+0.187960146 container init f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:35:31 compute-0 podman[264257]: 2025-10-11 08:35:31.53105219 +0000 UTC m=+0.200768409 container start f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:35:31 compute-0 podman[264257]: 2025-10-11 08:35:31.535924398 +0000 UTC m=+0.205640677 container attach f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 08:35:32 compute-0 vigilant_feynman[264273]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:35:32 compute-0 vigilant_feynman[264273]: --> relative data size: 1.0
Oct 11 08:35:32 compute-0 vigilant_feynman[264273]: --> All data devices are unavailable
Oct 11 08:35:32 compute-0 systemd[1]: libpod-f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d.scope: Deactivated successfully.
Oct 11 08:35:32 compute-0 systemd[1]: libpod-f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d.scope: Consumed 1.157s CPU time.
Oct 11 08:35:32 compute-0 podman[264257]: 2025-10-11 08:35:32.729990688 +0000 UTC m=+1.399706917 container died f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 08:35:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc0a8bb12889f1c9a0087cd566d17266fc94f9eb65690f70fa9de65bfe488292-merged.mount: Deactivated successfully.
Oct 11 08:35:32 compute-0 podman[264257]: 2025-10-11 08:35:32.815264729 +0000 UTC m=+1.484980968 container remove f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 08:35:32 compute-0 systemd[1]: libpod-conmon-f3b7c5c8d0b2a10f74f082d04dc936f367805d4800897d57356cfe345fa1bc2d.scope: Deactivated successfully.
Oct 11 08:35:32 compute-0 sudo[264151]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:35:32 compute-0 sudo[264316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:35:32 compute-0 sudo[264316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:32 compute-0 sudo[264316]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:33 compute-0 sudo[264341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:35:33 compute-0 sudo[264341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:33 compute-0 sudo[264341]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:33 compute-0 ceph-mon[74313]: pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:33 compute-0 sudo[264366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:35:33 compute-0 sudo[264366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:33 compute-0 sudo[264366]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:33 compute-0 sudo[264391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:35:33 compute-0 sudo[264391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:33 compute-0 podman[264457]: 2025-10-11 08:35:33.78352797 +0000 UTC m=+0.070721928 container create 79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:35:33 compute-0 systemd[1]: Started libpod-conmon-79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2.scope.
Oct 11 08:35:33 compute-0 podman[264457]: 2025-10-11 08:35:33.753105026 +0000 UTC m=+0.040298994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:35:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:35:33 compute-0 podman[264457]: 2025-10-11 08:35:33.884937828 +0000 UTC m=+0.172131836 container init 79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 11 08:35:33 compute-0 podman[264457]: 2025-10-11 08:35:33.896255219 +0000 UTC m=+0.183449167 container start 79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 11 08:35:33 compute-0 podman[264457]: 2025-10-11 08:35:33.899973755 +0000 UTC m=+0.187167763 container attach 79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 08:35:33 compute-0 modest_germain[264473]: 167 167
Oct 11 08:35:33 compute-0 systemd[1]: libpod-79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2.scope: Deactivated successfully.
Oct 11 08:35:33 compute-0 podman[264457]: 2025-10-11 08:35:33.901514858 +0000 UTC m=+0.188708816 container died 79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:35:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c132b017127d026a474b19a60c6e08842f175cff5623304d3b0a0a4246a4008-merged.mount: Deactivated successfully.
Oct 11 08:35:33 compute-0 podman[264457]: 2025-10-11 08:35:33.941424351 +0000 UTC m=+0.228618269 container remove 79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:35:33 compute-0 systemd[1]: libpod-conmon-79c6ed0d5c95c8def82a106d8488d24b4416cb1abd2e88b1212685b563ab69f2.scope: Deactivated successfully.
Oct 11 08:35:34 compute-0 podman[264497]: 2025-10-11 08:35:34.180591839 +0000 UTC m=+0.073168367 container create 2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:35:34 compute-0 systemd[1]: Started libpod-conmon-2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309.scope.
Oct 11 08:35:34 compute-0 podman[264497]: 2025-10-11 08:35:34.151281297 +0000 UTC m=+0.043857885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:35:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33876b5dc49687fa1690a247465b844a1cacba5cf17d77dfbbe27f96dcb7f28a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33876b5dc49687fa1690a247465b844a1cacba5cf17d77dfbbe27f96dcb7f28a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33876b5dc49687fa1690a247465b844a1cacba5cf17d77dfbbe27f96dcb7f28a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33876b5dc49687fa1690a247465b844a1cacba5cf17d77dfbbe27f96dcb7f28a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:35:34 compute-0 podman[264497]: 2025-10-11 08:35:34.283805629 +0000 UTC m=+0.176382157 container init 2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct 11 08:35:34 compute-0 podman[264497]: 2025-10-11 08:35:34.296620902 +0000 UTC m=+0.189197430 container start 2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:35:34 compute-0 podman[264497]: 2025-10-11 08:35:34.301504011 +0000 UTC m=+0.194080539 container attach 2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:35:35 compute-0 pensive_swirles[264513]: {
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:     "0": [
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:         {
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "devices": [
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "/dev/loop3"
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             ],
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "lv_name": "ceph_lv0",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "lv_size": "21470642176",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "name": "ceph_lv0",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "tags": {
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.cluster_name": "ceph",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.crush_device_class": "",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.encrypted": "0",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.osd_id": "0",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.type": "block",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.vdo": "0"
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             },
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "type": "block",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "vg_name": "ceph_vg0"
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:         }
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:     ],
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:     "1": [
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:         {
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "devices": [
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "/dev/loop4"
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             ],
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "lv_name": "ceph_lv1",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "lv_size": "21470642176",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "name": "ceph_lv1",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "tags": {
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.cluster_name": "ceph",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.crush_device_class": "",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.encrypted": "0",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.osd_id": "1",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.type": "block",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.vdo": "0"
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             },
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "type": "block",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "vg_name": "ceph_vg1"
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:         }
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:     ],
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:     "2": [
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:         {
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "devices": [
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "/dev/loop5"
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             ],
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "lv_name": "ceph_lv2",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "lv_size": "21470642176",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "name": "ceph_lv2",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "tags": {
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.cluster_name": "ceph",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.crush_device_class": "",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.encrypted": "0",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.osd_id": "2",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.type": "block",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:                 "ceph.vdo": "0"
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             },
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "type": "block",
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:             "vg_name": "ceph_vg2"
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:         }
Oct 11 08:35:35 compute-0 pensive_swirles[264513]:     ]
Oct 11 08:35:35 compute-0 pensive_swirles[264513]: }
Oct 11 08:35:35 compute-0 systemd[1]: libpod-2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309.scope: Deactivated successfully.
Oct 11 08:35:35 compute-0 podman[264497]: 2025-10-11 08:35:35.048067859 +0000 UTC m=+0.940644367 container died 2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 08:35:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-33876b5dc49687fa1690a247465b844a1cacba5cf17d77dfbbe27f96dcb7f28a-merged.mount: Deactivated successfully.
Oct 11 08:35:35 compute-0 ceph-mon[74313]: pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:35 compute-0 podman[264497]: 2025-10-11 08:35:35.131086315 +0000 UTC m=+1.023662823 container remove 2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 08:35:35 compute-0 systemd[1]: libpod-conmon-2a01e0925f0c2259ac0279d7f34d3357a80105d59f68ba488013a3bceaf2e309.scope: Deactivated successfully.
Oct 11 08:35:35 compute-0 sudo[264391]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:35 compute-0 sudo[264537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:35:35 compute-0 sudo[264537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:35 compute-0 sudo[264537]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:35 compute-0 sudo[264562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:35:35 compute-0 sudo[264562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:35 compute-0 sudo[264562]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:35 compute-0 sudo[264587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:35:35 compute-0 sudo[264587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:35 compute-0 sudo[264587]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:35 compute-0 sudo[264612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:35:35 compute-0 sudo[264612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:36 compute-0 podman[264680]: 2025-10-11 08:35:36.049964175 +0000 UTC m=+0.067402534 container create 89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:35:36 compute-0 systemd[1]: Started libpod-conmon-89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66.scope.
Oct 11 08:35:36 compute-0 podman[264680]: 2025-10-11 08:35:36.022983569 +0000 UTC m=+0.040421918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:35:36 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:35:36 compute-0 podman[264680]: 2025-10-11 08:35:36.151106835 +0000 UTC m=+0.168545204 container init 89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:35:36 compute-0 podman[264680]: 2025-10-11 08:35:36.158016671 +0000 UTC m=+0.175454990 container start 89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:35:36 compute-0 podman[264680]: 2025-10-11 08:35:36.16147475 +0000 UTC m=+0.178913089 container attach 89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 08:35:36 compute-0 stupefied_neumann[264696]: 167 167
Oct 11 08:35:36 compute-0 systemd[1]: libpod-89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66.scope: Deactivated successfully.
Oct 11 08:35:36 compute-0 podman[264680]: 2025-10-11 08:35:36.164201057 +0000 UTC m=+0.181639376 container died 89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 08:35:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7587e281152edd9dfbdc177084392c555c11516e770b68be07e32ce887cbf99-merged.mount: Deactivated successfully.
Oct 11 08:35:36 compute-0 podman[264680]: 2025-10-11 08:35:36.195693481 +0000 UTC m=+0.213131800 container remove 89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 08:35:36 compute-0 systemd[1]: libpod-conmon-89b5ab045a4f01be168486ee769da2a4394a3a5ea9352abe858a12a037c8dd66.scope: Deactivated successfully.
Oct 11 08:35:36 compute-0 podman[264718]: 2025-10-11 08:35:36.443181975 +0000 UTC m=+0.078876100 container create 0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 08:35:36 compute-0 systemd[1]: Started libpod-conmon-0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47.scope.
Oct 11 08:35:36 compute-0 podman[264718]: 2025-10-11 08:35:36.412616087 +0000 UTC m=+0.048310252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:35:36 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3453c3798e98d5721698c82491c7e905bb04a20741ca02db06d2d78910c8a2d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3453c3798e98d5721698c82491c7e905bb04a20741ca02db06d2d78910c8a2d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3453c3798e98d5721698c82491c7e905bb04a20741ca02db06d2d78910c8a2d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3453c3798e98d5721698c82491c7e905bb04a20741ca02db06d2d78910c8a2d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:35:36 compute-0 podman[264718]: 2025-10-11 08:35:36.571719473 +0000 UTC m=+0.207413608 container init 0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:35:36 compute-0 podman[264718]: 2025-10-11 08:35:36.585963557 +0000 UTC m=+0.221657682 container start 0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:35:36 compute-0 podman[264718]: 2025-10-11 08:35:36.589956201 +0000 UTC m=+0.225650326 container attach 0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 08:35:37 compute-0 ceph-mon[74313]: pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:35:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1514659896' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:35:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:35:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1514659896' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]: {
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:         "osd_id": 2,
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:         "type": "bluestore"
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:     },
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:         "osd_id": 0,
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:         "type": "bluestore"
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:     },
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:         "osd_id": 1,
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:         "type": "bluestore"
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]:     }
Oct 11 08:35:37 compute-0 strange_visvesvaraya[264735]: }
Oct 11 08:35:37 compute-0 systemd[1]: libpod-0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47.scope: Deactivated successfully.
Oct 11 08:35:37 compute-0 systemd[1]: libpod-0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47.scope: Consumed 1.119s CPU time.
Oct 11 08:35:37 compute-0 podman[264774]: 2025-10-11 08:35:37.769796947 +0000 UTC m=+0.044572386 container died 0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:35:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3453c3798e98d5721698c82491c7e905bb04a20741ca02db06d2d78910c8a2d4-merged.mount: Deactivated successfully.
Oct 11 08:35:37 compute-0 podman[264768]: 2025-10-11 08:35:37.826966269 +0000 UTC m=+0.121396396 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 08:35:37 compute-0 podman[264774]: 2025-10-11 08:35:37.851935308 +0000 UTC m=+0.126710687 container remove 0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 08:35:37 compute-0 systemd[1]: libpod-conmon-0bd5b5acf1c2a06acf572f800ea8eabca75e198c59689e86bc2b85255c504e47.scope: Deactivated successfully.
Oct 11 08:35:37 compute-0 sudo[264612]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:35:37 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:35:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:35:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:35:37 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:35:37 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 89252cea-c438-4022-9bb3-52fbb141fd1b does not exist
Oct 11 08:35:37 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev c0802e6f-7844-4614-b044-0a501a87ade3 does not exist
Oct 11 08:35:38 compute-0 sudo[264800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:35:38 compute-0 sudo[264800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:38 compute-0 sudo[264800]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:38 compute-0 sudo[264825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:35:38 compute-0 sudo[264825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:35:38 compute-0 sudo[264825]: pam_unix(sudo:session): session closed for user root
Oct 11 08:35:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1514659896' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:35:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1514659896' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:35:38 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:35:38 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:35:39 compute-0 ceph-mon[74313]: pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:41 compute-0 ceph-mon[74313]: pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:42 compute-0 podman[264850]: 2025-10-11 08:35:42.821393509 +0000 UTC m=+0.112977207 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 08:35:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:35:43 compute-0 ceph-mon[74313]: pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:44 compute-0 podman[264870]: 2025-10-11 08:35:44.803466095 +0000 UTC m=+0.100256117 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 08:35:45 compute-0 ceph-mon[74313]: pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:47 compute-0 ceph-mon[74313]: pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:47 compute-0 nova_compute[260935]: 2025-10-11 08:35:47.304 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:35:47 compute-0 nova_compute[260935]: 2025-10-11 08:35:47.305 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:35:47 compute-0 nova_compute[260935]: 2025-10-11 08:35:47.305 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:35:47 compute-0 nova_compute[260935]: 2025-10-11 08:35:47.336 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 08:35:47 compute-0 nova_compute[260935]: 2025-10-11 08:35:47.337 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:35:47 compute-0 nova_compute[260935]: 2025-10-11 08:35:47.337 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:35:47 compute-0 nova_compute[260935]: 2025-10-11 08:35:47.338 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:35:47 compute-0 nova_compute[260935]: 2025-10-11 08:35:47.338 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:35:47 compute-0 nova_compute[260935]: 2025-10-11 08:35:47.338 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:35:47 compute-0 nova_compute[260935]: 2025-10-11 08:35:47.339 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:35:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:47 compute-0 nova_compute[260935]: 2025-10-11 08:35:47.372 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:35:47 compute-0 nova_compute[260935]: 2025-10-11 08:35:47.373 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:35:47 compute-0 nova_compute[260935]: 2025-10-11 08:35:47.374 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:35:47 compute-0 nova_compute[260935]: 2025-10-11 08:35:47.374 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:35:47 compute-0 nova_compute[260935]: 2025-10-11 08:35:47.375 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:35:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:35:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2289925496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:35:47 compute-0 nova_compute[260935]: 2025-10-11 08:35:47.833 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:35:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:35:48 compute-0 nova_compute[260935]: 2025-10-11 08:35:48.133 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:35:48 compute-0 nova_compute[260935]: 2025-10-11 08:35:48.135 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5145MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:35:48 compute-0 nova_compute[260935]: 2025-10-11 08:35:48.135 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:35:48 compute-0 nova_compute[260935]: 2025-10-11 08:35:48.136 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:35:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2289925496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:35:48 compute-0 nova_compute[260935]: 2025-10-11 08:35:48.392 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:35:48 compute-0 nova_compute[260935]: 2025-10-11 08:35:48.392 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:35:48 compute-0 nova_compute[260935]: 2025-10-11 08:35:48.409 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:35:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:35:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1034930190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:35:48 compute-0 nova_compute[260935]: 2025-10-11 08:35:48.890 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:35:48 compute-0 nova_compute[260935]: 2025-10-11 08:35:48.899 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:35:48 compute-0 nova_compute[260935]: 2025-10-11 08:35:48.921 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:35:48 compute-0 nova_compute[260935]: 2025-10-11 08:35:48.924 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:35:48 compute-0 nova_compute[260935]: 2025-10-11 08:35:48.925 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:35:49 compute-0 ceph-mon[74313]: pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:49 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1034930190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:35:49 compute-0 nova_compute[260935]: 2025-10-11 08:35:49.290 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:35:49 compute-0 nova_compute[260935]: 2025-10-11 08:35:49.290 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:35:49 compute-0 nova_compute[260935]: 2025-10-11 08:35:49.291 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:35:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:51 compute-0 ceph-mon[74313]: pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:35:53 compute-0 ceph-mon[74313]: pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:54 compute-0 ceph-mon[74313]: pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:35:54
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'backups', 'volumes', 'default.rgw.meta']
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:35:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:35:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:56 compute-0 ceph-mon[74313]: pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:35:58 compute-0 ceph-mon[74313]: pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:35:59 compute-0 PackageKit[191236]: daemon quit
Oct 11 08:35:59 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 11 08:35:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:00 compute-0 ceph-mon[74313]: pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:00 compute-0 podman[264940]: 2025-10-11 08:36:00.783479634 +0000 UTC m=+0.078725696 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 08:36:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:02 compute-0 ceph-mon[74313]: pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:36:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:36:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:36:04 compute-0 ceph-mon[74313]: pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:06 compute-0 ceph-mon[74313]: pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:36:08 compute-0 ceph-mon[74313]: pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:08 compute-0 podman[264959]: 2025-10-11 08:36:08.799510672 +0000 UTC m=+0.093199986 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid)
Oct 11 08:36:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:10 compute-0 ceph-mon[74313]: pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:12 compute-0 ceph-mon[74313]: pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:36:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:13 compute-0 podman[264980]: 2025-10-11 08:36:13.79696378 +0000 UTC m=+0.096203851 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 08:36:14 compute-0 ceph-mon[74313]: pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:36:15.166 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:36:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:36:15.166 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:36:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:36:15.166 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:36:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:15 compute-0 podman[265001]: 2025-10-11 08:36:15.822661162 +0000 UTC m=+0.123722162 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:36:16 compute-0 ceph-mon[74313]: pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:36:18 compute-0 ceph-mon[74313]: pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:20 compute-0 ceph-mon[74313]: pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.543689) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171780543733, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2051, "num_deletes": 251, "total_data_size": 3436538, "memory_usage": 3487368, "flush_reason": "Manual Compaction"}
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171780576181, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3371628, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16337, "largest_seqno": 18387, "table_properties": {"data_size": 3362309, "index_size": 5877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18477, "raw_average_key_size": 19, "raw_value_size": 3343780, "raw_average_value_size": 3591, "num_data_blocks": 266, "num_entries": 931, "num_filter_entries": 931, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760171551, "oldest_key_time": 1760171551, "file_creation_time": 1760171780, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 32562 microseconds, and 14273 cpu microseconds.
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.576250) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3371628 bytes OK
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.576278) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.578439) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.578465) EVENT_LOG_v1 {"time_micros": 1760171780578457, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.578491) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3427954, prev total WAL file size 3427954, number of live WAL files 2.
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.580034) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3292KB)], [38(7533KB)]
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171780580078, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11085835, "oldest_snapshot_seqno": -1}
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4408 keys, 9331196 bytes, temperature: kUnknown
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171780643860, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9331196, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9298053, "index_size": 21001, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 106535, "raw_average_key_size": 24, "raw_value_size": 9214667, "raw_average_value_size": 2090, "num_data_blocks": 891, "num_entries": 4408, "num_filter_entries": 4408, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760171780, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.644226) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9331196 bytes
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.645770) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.4 rd, 146.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4922, records dropped: 514 output_compression: NoCompression
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.645796) EVENT_LOG_v1 {"time_micros": 1760171780645785, "job": 18, "event": "compaction_finished", "compaction_time_micros": 63922, "compaction_time_cpu_micros": 35770, "output_level": 6, "num_output_files": 1, "total_output_size": 9331196, "num_input_records": 4922, "num_output_records": 4408, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171780647113, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171780649861, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.579978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.649911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.649918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.649921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.649924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:36:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:20.649928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:36:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:22 compute-0 ceph-mon[74313]: pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:36:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:24 compute-0 ceph-mon[74313]: pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:36:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:36:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:36:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:36:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:36:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:36:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:26 compute-0 ceph-mon[74313]: pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:27.976434) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171787976475, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 304, "num_deletes": 250, "total_data_size": 123412, "memory_usage": 130304, "flush_reason": "Manual Compaction"}
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171787979460, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 122438, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18388, "largest_seqno": 18691, "table_properties": {"data_size": 120426, "index_size": 240, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5290, "raw_average_key_size": 19, "raw_value_size": 116523, "raw_average_value_size": 423, "num_data_blocks": 11, "num_entries": 275, "num_filter_entries": 275, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760171781, "oldest_key_time": 1760171781, "file_creation_time": 1760171787, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 3076 microseconds, and 1166 cpu microseconds.
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:27.979511) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 122438 bytes OK
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:27.979533) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:27.980887) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:27.980910) EVENT_LOG_v1 {"time_micros": 1760171787980903, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:27.980928) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 121226, prev total WAL file size 121226, number of live WAL files 2.
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:27.981369) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(119KB)], [41(9112KB)]
Oct 11 08:36:27 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171787981415, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9453634, "oldest_snapshot_seqno": -1}
Oct 11 08:36:28 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4176 keys, 6165904 bytes, temperature: kUnknown
Oct 11 08:36:28 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171788038053, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6165904, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6138897, "index_size": 15442, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10501, "raw_key_size": 102140, "raw_average_key_size": 24, "raw_value_size": 6064097, "raw_average_value_size": 1452, "num_data_blocks": 650, "num_entries": 4176, "num_filter_entries": 4176, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760171787, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:36:28 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:36:28 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:28.038507) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6165904 bytes
Oct 11 08:36:28 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:28.040166) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.5 rd, 108.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 8.9 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(127.6) write-amplify(50.4) OK, records in: 4683, records dropped: 507 output_compression: NoCompression
Oct 11 08:36:28 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:28.040202) EVENT_LOG_v1 {"time_micros": 1760171788040186, "job": 20, "event": "compaction_finished", "compaction_time_micros": 56776, "compaction_time_cpu_micros": 35306, "output_level": 6, "num_output_files": 1, "total_output_size": 6165904, "num_input_records": 4683, "num_output_records": 4176, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 08:36:28 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:36:28 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171788040580, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct 11 08:36:28 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:36:28 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171788044090, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct 11 08:36:28 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:27.981289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:36:28 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:28.044220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:36:28 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:28.044229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:36:28 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:28.044231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:36:28 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:28.044235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:36:28 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:36:28.044238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:36:28 compute-0 ceph-mon[74313]: pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:30 compute-0 ceph-mon[74313]: pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:31 compute-0 podman[265028]: 2025-10-11 08:36:31.782620562 +0000 UTC m=+0.081517666 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 08:36:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:36:33 compute-0 ceph-mon[74313]: pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:35 compute-0 ceph-mon[74313]: pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:37 compute-0 ceph-mon[74313]: pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:36:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3902930428' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:36:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:36:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3902930428' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:36:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:36:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3902930428' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:36:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3902930428' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:36:38 compute-0 sudo[265048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:36:38 compute-0 sudo[265048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:38 compute-0 sudo[265048]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:38 compute-0 sudo[265073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:36:38 compute-0 sudo[265073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:38 compute-0 sudo[265073]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:38 compute-0 sudo[265098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:36:38 compute-0 sudo[265098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:38 compute-0 sudo[265098]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:38 compute-0 sudo[265123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:36:38 compute-0 sudo[265123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:39 compute-0 ceph-mon[74313]: pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:39 compute-0 sudo[265123]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:36:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:36:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:36:39 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:36:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:36:39 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:36:39 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev c7ad0462-d154-4ca4-8e7e-4742c7c341c1 does not exist
Oct 11 08:36:39 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev bf1410b3-87d4-48f9-841d-91034fa9b574 does not exist
Oct 11 08:36:39 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev b345118d-93a1-4839-9a10-7986eae63879 does not exist
Oct 11 08:36:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:36:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:36:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:36:39 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:36:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:36:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:36:39 compute-0 sudo[265179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:36:39 compute-0 sudo[265179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:39 compute-0 sudo[265179]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:39 compute-0 sudo[265210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:36:39 compute-0 podman[265203]: 2025-10-11 08:36:39.414109492 +0000 UTC m=+0.085667165 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:36:39 compute-0 sudo[265210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:39 compute-0 sudo[265210]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:39 compute-0 sudo[265249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:36:39 compute-0 sudo[265249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:39 compute-0 sudo[265249]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:39 compute-0 sudo[265274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:36:39 compute-0 sudo[265274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:40 compute-0 podman[265340]: 2025-10-11 08:36:40.026022306 +0000 UTC m=+0.073784616 container create 4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 08:36:40 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:36:40 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:36:40 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:36:40 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:36:40 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:36:40 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:36:40 compute-0 podman[265340]: 2025-10-11 08:36:39.99251011 +0000 UTC m=+0.040272500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:36:40 compute-0 systemd[1]: Started libpod-conmon-4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad.scope.
Oct 11 08:36:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:36:40 compute-0 podman[265340]: 2025-10-11 08:36:40.173030349 +0000 UTC m=+0.220792739 container init 4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:36:40 compute-0 podman[265340]: 2025-10-11 08:36:40.186317298 +0000 UTC m=+0.234079628 container start 4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 08:36:40 compute-0 focused_thompson[265356]: 167 167
Oct 11 08:36:40 compute-0 systemd[1]: libpod-4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad.scope: Deactivated successfully.
Oct 11 08:36:40 compute-0 podman[265340]: 2025-10-11 08:36:40.196970792 +0000 UTC m=+0.244733132 container attach 4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct 11 08:36:40 compute-0 podman[265340]: 2025-10-11 08:36:40.197385384 +0000 UTC m=+0.245147724 container died 4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:36:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e25a2b6a4b085b8cc78cd3c158309d57abb8f4157a50af499525eaf3fc61290-merged.mount: Deactivated successfully.
Oct 11 08:36:40 compute-0 podman[265340]: 2025-10-11 08:36:40.401429774 +0000 UTC m=+0.449192114 container remove 4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:36:40 compute-0 systemd[1]: libpod-conmon-4ca2317f390840eb5f372970a59c8237dbf2e244b1b6ce2a47e74f1316c092ad.scope: Deactivated successfully.
Oct 11 08:36:40 compute-0 podman[265380]: 2025-10-11 08:36:40.595352706 +0000 UTC m=+0.055027781 container create 7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shaw, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:36:40 compute-0 systemd[1]: Started libpod-conmon-7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f.scope.
Oct 11 08:36:40 compute-0 podman[265380]: 2025-10-11 08:36:40.570708923 +0000 UTC m=+0.030384068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:36:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce21210e6d9768af4d1e551f3e4bcb8d829a19378aa7b3e596b44491343f879/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce21210e6d9768af4d1e551f3e4bcb8d829a19378aa7b3e596b44491343f879/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce21210e6d9768af4d1e551f3e4bcb8d829a19378aa7b3e596b44491343f879/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce21210e6d9768af4d1e551f3e4bcb8d829a19378aa7b3e596b44491343f879/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce21210e6d9768af4d1e551f3e4bcb8d829a19378aa7b3e596b44491343f879/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:36:40 compute-0 podman[265380]: 2025-10-11 08:36:40.710277794 +0000 UTC m=+0.169952919 container init 7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shaw, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:36:40 compute-0 podman[265380]: 2025-10-11 08:36:40.72450756 +0000 UTC m=+0.184182665 container start 7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shaw, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 08:36:40 compute-0 podman[265380]: 2025-10-11 08:36:40.729743049 +0000 UTC m=+0.189418214 container attach 7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 08:36:41 compute-0 ceph-mon[74313]: pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:41 compute-0 friendly_shaw[265397]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:36:41 compute-0 friendly_shaw[265397]: --> relative data size: 1.0
Oct 11 08:36:41 compute-0 friendly_shaw[265397]: --> All data devices are unavailable
Oct 11 08:36:41 compute-0 systemd[1]: libpod-7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f.scope: Deactivated successfully.
Oct 11 08:36:41 compute-0 podman[265380]: 2025-10-11 08:36:41.91754285 +0000 UTC m=+1.377217925 container died 7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:36:41 compute-0 systemd[1]: libpod-7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f.scope: Consumed 1.151s CPU time.
Oct 11 08:36:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ce21210e6d9768af4d1e551f3e4bcb8d829a19378aa7b3e596b44491343f879-merged.mount: Deactivated successfully.
Oct 11 08:36:41 compute-0 podman[265380]: 2025-10-11 08:36:41.988229746 +0000 UTC m=+1.447904851 container remove 7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 08:36:41 compute-0 systemd[1]: libpod-conmon-7126ae642ea38cb549e2dd11744f9f34d5a6735a82f2bec36b12fbb636b6e53f.scope: Deactivated successfully.
Oct 11 08:36:42 compute-0 sudo[265274]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:42 compute-0 sudo[265440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:36:42 compute-0 sudo[265440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:42 compute-0 sudo[265440]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:42 compute-0 sudo[265465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:36:42 compute-0 sudo[265465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:42 compute-0 sudo[265465]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:42 compute-0 sudo[265490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:36:42 compute-0 sudo[265490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:42 compute-0 sudo[265490]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:42 compute-0 sudo[265515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:36:42 compute-0 sudo[265515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:42 compute-0 podman[265581]: 2025-10-11 08:36:42.869522493 +0000 UTC m=+0.062947056 container create 3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 11 08:36:42 compute-0 systemd[1]: Started libpod-conmon-3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f.scope.
Oct 11 08:36:42 compute-0 podman[265581]: 2025-10-11 08:36:42.844091558 +0000 UTC m=+0.037516171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:36:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:36:42 compute-0 podman[265581]: 2025-10-11 08:36:42.962909167 +0000 UTC m=+0.156333760 container init 3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lamarr, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:36:42 compute-0 podman[265581]: 2025-10-11 08:36:42.973369695 +0000 UTC m=+0.166794258 container start 3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lamarr, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:36:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:36:42 compute-0 podman[265581]: 2025-10-11 08:36:42.978133271 +0000 UTC m=+0.171557824 container attach 3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:36:42 compute-0 goofy_lamarr[265598]: 167 167
Oct 11 08:36:42 compute-0 systemd[1]: libpod-3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f.scope: Deactivated successfully.
Oct 11 08:36:42 compute-0 podman[265581]: 2025-10-11 08:36:42.981130117 +0000 UTC m=+0.174554670 container died 3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lamarr, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 08:36:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ac6697690962e6586e570d32edbbedff0fc9e79413bbdbbeffb609e0613b91f-merged.mount: Deactivated successfully.
Oct 11 08:36:43 compute-0 podman[265581]: 2025-10-11 08:36:43.035209429 +0000 UTC m=+0.228633962 container remove 3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lamarr, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:36:43 compute-0 systemd[1]: libpod-conmon-3e7e1b3b40a083a54cf4b0b15fd709ad42ac7d6adfc8f1b083fd9c8336e6ff6f.scope: Deactivated successfully.
Oct 11 08:36:43 compute-0 ceph-mon[74313]: pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:43 compute-0 podman[265623]: 2025-10-11 08:36:43.280181347 +0000 UTC m=+0.068003331 container create 33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 08:36:43 compute-0 systemd[1]: Started libpod-conmon-33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7.scope.
Oct 11 08:36:43 compute-0 podman[265623]: 2025-10-11 08:36:43.251947342 +0000 UTC m=+0.039769376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:36:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f90a2c66e81dceea51469b2fbae88274319c8eced1cc1c53f8a72c9496273623/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f90a2c66e81dceea51469b2fbae88274319c8eced1cc1c53f8a72c9496273623/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f90a2c66e81dceea51469b2fbae88274319c8eced1cc1c53f8a72c9496273623/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f90a2c66e81dceea51469b2fbae88274319c8eced1cc1c53f8a72c9496273623/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:36:43 compute-0 podman[265623]: 2025-10-11 08:36:43.387938911 +0000 UTC m=+0.175760945 container init 33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:36:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:43 compute-0 podman[265623]: 2025-10-11 08:36:43.402805115 +0000 UTC m=+0.190627069 container start 33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 08:36:43 compute-0 podman[265623]: 2025-10-11 08:36:43.406789698 +0000 UTC m=+0.194611722 container attach 33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]: {
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:     "0": [
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:         {
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "devices": [
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "/dev/loop3"
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             ],
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "lv_name": "ceph_lv0",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "lv_size": "21470642176",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "name": "ceph_lv0",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "tags": {
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.cluster_name": "ceph",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.crush_device_class": "",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.encrypted": "0",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.osd_id": "0",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.type": "block",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.vdo": "0"
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             },
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "type": "block",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "vg_name": "ceph_vg0"
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:         }
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:     ],
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:     "1": [
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:         {
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "devices": [
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "/dev/loop4"
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             ],
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "lv_name": "ceph_lv1",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "lv_size": "21470642176",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "name": "ceph_lv1",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "tags": {
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.cluster_name": "ceph",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.crush_device_class": "",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.encrypted": "0",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.osd_id": "1",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.type": "block",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.vdo": "0"
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             },
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "type": "block",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "vg_name": "ceph_vg1"
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:         }
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:     ],
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:     "2": [
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:         {
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "devices": [
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "/dev/loop5"
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             ],
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "lv_name": "ceph_lv2",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "lv_size": "21470642176",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "name": "ceph_lv2",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "tags": {
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.cluster_name": "ceph",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.crush_device_class": "",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.encrypted": "0",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.osd_id": "2",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.type": "block",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:                 "ceph.vdo": "0"
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             },
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "type": "block",
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:             "vg_name": "ceph_vg2"
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:         }
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]:     ]
Oct 11 08:36:44 compute-0 romantic_chandrasekhar[265639]: }
Oct 11 08:36:44 compute-0 systemd[1]: libpod-33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7.scope: Deactivated successfully.
Oct 11 08:36:44 compute-0 podman[265623]: 2025-10-11 08:36:44.216787993 +0000 UTC m=+1.004609977 container died 33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 08:36:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f90a2c66e81dceea51469b2fbae88274319c8eced1cc1c53f8a72c9496273623-merged.mount: Deactivated successfully.
Oct 11 08:36:44 compute-0 podman[265623]: 2025-10-11 08:36:44.29415856 +0000 UTC m=+1.081980504 container remove 33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:36:44 compute-0 systemd[1]: libpod-conmon-33993e1f4b44889f9de960e07ba57c7bd3cf8e51a9b783e4e7091a9137b4ecc7.scope: Deactivated successfully.
Oct 11 08:36:44 compute-0 sudo[265515]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:44 compute-0 podman[265649]: 2025-10-11 08:36:44.354763449 +0000 UTC m=+0.090806052 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.schema-version=1.0)
Oct 11 08:36:44 compute-0 sudo[265679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:36:44 compute-0 sudo[265679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:44 compute-0 sudo[265679]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:44 compute-0 sudo[265704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:36:44 compute-0 sudo[265704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:44 compute-0 sudo[265704]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:44 compute-0 sudo[265729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:36:44 compute-0 sudo[265729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:44 compute-0 sudo[265729]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:44 compute-0 sudo[265754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:36:44 compute-0 sudo[265754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:45 compute-0 ceph-mon[74313]: pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:45 compute-0 podman[265820]: 2025-10-11 08:36:45.095661062 +0000 UTC m=+0.050192083 container create 9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 08:36:45 compute-0 systemd[1]: Started libpod-conmon-9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc.scope.
Oct 11 08:36:45 compute-0 podman[265820]: 2025-10-11 08:36:45.076643329 +0000 UTC m=+0.031174330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:36:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:36:45 compute-0 podman[265820]: 2025-10-11 08:36:45.211306451 +0000 UTC m=+0.165837482 container init 9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 08:36:45 compute-0 podman[265820]: 2025-10-11 08:36:45.22284572 +0000 UTC m=+0.177376751 container start 9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_almeida, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:36:45 compute-0 podman[265820]: 2025-10-11 08:36:45.226933366 +0000 UTC m=+0.181464437 container attach 9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_almeida, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:36:45 compute-0 jovial_almeida[265836]: 167 167
Oct 11 08:36:45 compute-0 systemd[1]: libpod-9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc.scope: Deactivated successfully.
Oct 11 08:36:45 compute-0 podman[265820]: 2025-10-11 08:36:45.230687073 +0000 UTC m=+0.185218094 container died 9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 08:36:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9a037c77becfd965b59936e1d069ef5d944b854875a0c27daaa1aa73da336af-merged.mount: Deactivated successfully.
Oct 11 08:36:45 compute-0 podman[265820]: 2025-10-11 08:36:45.283455519 +0000 UTC m=+0.237986540 container remove 9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_almeida, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 08:36:45 compute-0 systemd[1]: libpod-conmon-9dc14ea7558a631436fffbfc8f05803036d603e6d45b31d3e85bd6f8d6d9abfc.scope: Deactivated successfully.
Oct 11 08:36:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:45 compute-0 podman[265860]: 2025-10-11 08:36:45.521602292 +0000 UTC m=+0.063562984 container create b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:36:45 compute-0 systemd[1]: Started libpod-conmon-b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890.scope.
Oct 11 08:36:45 compute-0 podman[265860]: 2025-10-11 08:36:45.498697668 +0000 UTC m=+0.040658430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:36:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339c4769f16f192181be28d02618b545da7cf432002ff79f0b8019f4d3ea1339/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339c4769f16f192181be28d02618b545da7cf432002ff79f0b8019f4d3ea1339/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339c4769f16f192181be28d02618b545da7cf432002ff79f0b8019f4d3ea1339/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/339c4769f16f192181be28d02618b545da7cf432002ff79f0b8019f4d3ea1339/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:36:45 compute-0 podman[265860]: 2025-10-11 08:36:45.629147449 +0000 UTC m=+0.171108171 container init b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 08:36:45 compute-0 podman[265860]: 2025-10-11 08:36:45.63512055 +0000 UTC m=+0.177081252 container start b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:36:45 compute-0 podman[265860]: 2025-10-11 08:36:45.638310131 +0000 UTC m=+0.180270913 container attach b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 08:36:45 compute-0 nova_compute[260935]: 2025-10-11 08:36:45.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:36:46 compute-0 nova_compute[260935]: 2025-10-11 08:36:46.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:36:46 compute-0 nova_compute[260935]: 2025-10-11 08:36:46.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:36:46 compute-0 nova_compute[260935]: 2025-10-11 08:36:46.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:36:46 compute-0 fervent_margulis[265876]: {
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:         "osd_id": 2,
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:         "type": "bluestore"
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:     },
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:         "osd_id": 0,
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:         "type": "bluestore"
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:     },
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:         "osd_id": 1,
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:         "type": "bluestore"
Oct 11 08:36:46 compute-0 fervent_margulis[265876]:     }
Oct 11 08:36:46 compute-0 fervent_margulis[265876]: }
Oct 11 08:36:46 compute-0 systemd[1]: libpod-b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890.scope: Deactivated successfully.
Oct 11 08:36:46 compute-0 systemd[1]: libpod-b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890.scope: Consumed 1.120s CPU time.
Oct 11 08:36:46 compute-0 podman[265860]: 2025-10-11 08:36:46.750437092 +0000 UTC m=+1.292397814 container died b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 08:36:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-339c4769f16f192181be28d02618b545da7cf432002ff79f0b8019f4d3ea1339-merged.mount: Deactivated successfully.
Oct 11 08:36:46 compute-0 podman[265860]: 2025-10-11 08:36:46.883437346 +0000 UTC m=+1.425398038 container remove b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:36:46 compute-0 systemd[1]: libpod-conmon-b82fcf610599eda490d256b06810788cfc609c2e9fa510cb693b9f5758387890.scope: Deactivated successfully.
Oct 11 08:36:46 compute-0 podman[265908]: 2025-10-11 08:36:46.902799768 +0000 UTC m=+0.195646122 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct 11 08:36:46 compute-0 sudo[265754]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:36:46 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:36:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:36:46 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:36:46 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9cea45c5-d653-4634-bf7e-e840e4b80f00 does not exist
Oct 11 08:36:46 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ba796e0c-7ee4-41c8-a373-8eaab63b8d1f does not exist
Oct 11 08:36:47 compute-0 sudo[265946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:36:47 compute-0 sudo[265946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:47 compute-0 sudo[265946]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:47 compute-0 ceph-mon[74313]: pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:47 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:36:47 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:36:47 compute-0 sudo[265971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:36:47 compute-0 sudo[265971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:36:47 compute-0 sudo[265971]: pam_unix(sudo:session): session closed for user root
Oct 11 08:36:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:47 compute-0 nova_compute[260935]: 2025-10-11 08:36:47.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:36:47 compute-0 nova_compute[260935]: 2025-10-11 08:36:47.724 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:36:47 compute-0 nova_compute[260935]: 2025-10-11 08:36:47.725 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:36:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:36:48 compute-0 nova_compute[260935]: 2025-10-11 08:36:48.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:36:48 compute-0 nova_compute[260935]: 2025-10-11 08:36:48.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:36:48 compute-0 nova_compute[260935]: 2025-10-11 08:36:48.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:36:48 compute-0 nova_compute[260935]: 2025-10-11 08:36:48.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:36:48 compute-0 nova_compute[260935]: 2025-10-11 08:36:48.736 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 08:36:48 compute-0 nova_compute[260935]: 2025-10-11 08:36:48.737 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:36:48 compute-0 nova_compute[260935]: 2025-10-11 08:36:48.774 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:36:48 compute-0 nova_compute[260935]: 2025-10-11 08:36:48.774 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:36:48 compute-0 nova_compute[260935]: 2025-10-11 08:36:48.775 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:36:48 compute-0 nova_compute[260935]: 2025-10-11 08:36:48.775 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:36:48 compute-0 nova_compute[260935]: 2025-10-11 08:36:48.776 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:36:49 compute-0 ceph-mon[74313]: pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:36:49 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/935852823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:36:49 compute-0 nova_compute[260935]: 2025-10-11 08:36:49.251 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:36:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:49 compute-0 nova_compute[260935]: 2025-10-11 08:36:49.523 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:36:49 compute-0 nova_compute[260935]: 2025-10-11 08:36:49.525 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5155MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:36:49 compute-0 nova_compute[260935]: 2025-10-11 08:36:49.526 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:36:49 compute-0 nova_compute[260935]: 2025-10-11 08:36:49.527 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:36:49 compute-0 nova_compute[260935]: 2025-10-11 08:36:49.707 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:36:49 compute-0 nova_compute[260935]: 2025-10-11 08:36:49.707 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:36:49 compute-0 nova_compute[260935]: 2025-10-11 08:36:49.723 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:36:50 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/935852823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:36:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:36:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/733932756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:36:50 compute-0 nova_compute[260935]: 2025-10-11 08:36:50.199 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:36:50 compute-0 nova_compute[260935]: 2025-10-11 08:36:50.208 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:36:50 compute-0 nova_compute[260935]: 2025-10-11 08:36:50.238 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:36:50 compute-0 nova_compute[260935]: 2025-10-11 08:36:50.241 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:36:50 compute-0 nova_compute[260935]: 2025-10-11 08:36:50.241 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:36:51 compute-0 ceph-mon[74313]: pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/733932756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:36:51 compute-0 nova_compute[260935]: 2025-10-11 08:36:51.207 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:36:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:36:53 compute-0 ceph-mon[74313]: pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:36:54
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'backups', 'images', '.rgw.root', 'default.rgw.control', 'default.rgw.log']
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:36:54 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:36:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:36:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:36:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:36:55 compute-0 ceph-mon[74313]: pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:57 compute-0 ceph-mon[74313]: pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:36:59 compute-0 ceph-mon[74313]: pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:36:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:01 compute-0 ceph-mon[74313]: pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:02 compute-0 podman[266040]: 2025-10-11 08:37:02.795289653 +0000 UTC m=+0.093121877 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:37:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:37:03 compute-0 ceph-mon[74313]: pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:37:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:37:05 compute-0 ceph-mon[74313]: pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:07 compute-0 ceph-mon[74313]: pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:37:09 compute-0 ceph-mon[74313]: pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:09 compute-0 podman[266061]: 2025-10-11 08:37:09.795694003 +0000 UTC m=+0.063067340 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:37:11 compute-0 ceph-mon[74313]: pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:37:13 compute-0 ceph-mon[74313]: pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:14 compute-0 podman[266081]: 2025-10-11 08:37:14.824606587 +0000 UTC m=+0.115811154 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 08:37:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:37:15.167 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:37:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:37:15.168 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:37:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:37:15.168 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:37:15 compute-0 ceph-mon[74313]: pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:17 compute-0 ceph-mon[74313]: pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:17 compute-0 podman[266101]: 2025-10-11 08:37:17.838329419 +0000 UTC m=+0.138696377 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 08:37:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:37:19 compute-0 ceph-mon[74313]: pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:20 compute-0 ceph-mon[74313]: pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v906: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:22 compute-0 ceph-mon[74313]: pgmap v906: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:37:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v907: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:24 compute-0 ceph-mon[74313]: pgmap v907: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:37:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:37:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:37:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:37:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:37:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:37:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v908: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:26 compute-0 ceph-mon[74313]: pgmap v908: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:37:28 compute-0 ceph-mon[74313]: pgmap v909: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v910: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:30 compute-0 ceph-mon[74313]: pgmap v910: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v911: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:32 compute-0 ceph-mon[74313]: pgmap v911: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:37:32 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct 11 08:37:32 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:32.991489) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 08:37:32 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct 11 08:37:32 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171852991535, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 768, "num_deletes": 255, "total_data_size": 960681, "memory_usage": 974720, "flush_reason": "Manual Compaction"}
Oct 11 08:37:32 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171853002313, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 951926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18692, "largest_seqno": 19459, "table_properties": {"data_size": 948028, "index_size": 1678, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8323, "raw_average_key_size": 18, "raw_value_size": 940141, "raw_average_value_size": 2061, "num_data_blocks": 76, "num_entries": 456, "num_filter_entries": 456, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760171788, "oldest_key_time": 1760171788, "file_creation_time": 1760171852, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 10877 microseconds, and 3580 cpu microseconds.
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.002364) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 951926 bytes OK
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.002388) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.004137) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.004158) EVENT_LOG_v1 {"time_micros": 1760171853004152, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.004178) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 956778, prev total WAL file size 956778, number of live WAL files 2.
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.004882) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(929KB)], [44(6021KB)]
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171853004949, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7117830, "oldest_snapshot_seqno": -1}
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4110 keys, 6986821 bytes, temperature: kUnknown
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171853054269, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 6986821, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6958843, "index_size": 16604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 101863, "raw_average_key_size": 24, "raw_value_size": 6883857, "raw_average_value_size": 1674, "num_data_blocks": 696, "num_entries": 4110, "num_filter_entries": 4110, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760171853, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.054592) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 6986821 bytes
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.056374) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.0 rd, 141.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 5.9 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(14.8) write-amplify(7.3) OK, records in: 4632, records dropped: 522 output_compression: NoCompression
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.056403) EVENT_LOG_v1 {"time_micros": 1760171853056390, "job": 22, "event": "compaction_finished", "compaction_time_micros": 49414, "compaction_time_cpu_micros": 32730, "output_level": 6, "num_output_files": 1, "total_output_size": 6986821, "num_input_records": 4632, "num_output_records": 4110, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171853056877, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760171853059089, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.004769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.059136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.059142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.059146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.059149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:37:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:37:33.059152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:37:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v912: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:33 compute-0 podman[266127]: 2025-10-11 08:37:33.789085259 +0000 UTC m=+0.079234871 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 08:37:35 compute-0 ceph-mon[74313]: pgmap v912: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:37 compute-0 ceph-mon[74313]: pgmap v913: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v914: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:37:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1190014263' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:37:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:37:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1190014263' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:37:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:37:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1190014263' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:37:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1190014263' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:37:39 compute-0 ceph-mon[74313]: pgmap v914: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v915: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:40 compute-0 podman[266148]: 2025-10-11 08:37:40.800848261 +0000 UTC m=+0.097885573 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:37:41 compute-0 ceph-mon[74313]: pgmap v915: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v916: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:37:43 compute-0 ceph-mon[74313]: pgmap v916: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:43 compute-0 nova_compute[260935]: 2025-10-11 08:37:43.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:37:43 compute-0 nova_compute[260935]: 2025-10-11 08:37:43.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 08:37:43 compute-0 nova_compute[260935]: 2025-10-11 08:37:43.736 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 08:37:43 compute-0 nova_compute[260935]: 2025-10-11 08:37:43.738 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:37:43 compute-0 nova_compute[260935]: 2025-10-11 08:37:43.738 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 08:37:43 compute-0 nova_compute[260935]: 2025-10-11 08:37:43.755 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:37:45 compute-0 ceph-mon[74313]: pgmap v917: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:45 compute-0 podman[266169]: 2025-10-11 08:37:45.792266567 +0000 UTC m=+0.089171374 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd)
Oct 11 08:37:46 compute-0 nova_compute[260935]: 2025-10-11 08:37:46.768 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:37:46 compute-0 nova_compute[260935]: 2025-10-11 08:37:46.768 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:37:46 compute-0 nova_compute[260935]: 2025-10-11 08:37:46.769 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:37:46 compute-0 nova_compute[260935]: 2025-10-11 08:37:46.769 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:37:47 compute-0 ceph-mon[74313]: pgmap v918: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:47 compute-0 sudo[266189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:37:47 compute-0 sudo[266189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:47 compute-0 sudo[266189]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:47 compute-0 sudo[266214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:37:47 compute-0 sudo[266214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:47 compute-0 sudo[266214]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:47 compute-0 sudo[266239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:37:47 compute-0 sudo[266239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:47 compute-0 sudo[266239]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:47 compute-0 sudo[266264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:37:47 compute-0 sudo[266264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:37:48 compute-0 sudo[266264]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:37:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:37:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:37:48 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:37:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:37:48 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:37:48 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 611af44b-8405-4923-9649-83e72f55ae4d does not exist
Oct 11 08:37:48 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev c5415471-9297-4e27-aa32-aafd2789e4d1 does not exist
Oct 11 08:37:48 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 047cf064-a2c3-4e32-80ef-8fbbcbb523b6 does not exist
Oct 11 08:37:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:37:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:37:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:37:48 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:37:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:37:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:37:48 compute-0 sudo[266321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:37:48 compute-0 sudo[266321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:48 compute-0 sudo[266321]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:48 compute-0 sudo[266352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:37:48 compute-0 sudo[266352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:48 compute-0 sudo[266352]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:48 compute-0 podman[266345]: 2025-10-11 08:37:48.435991526 +0000 UTC m=+0.138498871 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:37:48 compute-0 sudo[266395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:37:48 compute-0 sudo[266395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:48 compute-0 sudo[266395]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:48 compute-0 sudo[266422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:37:48 compute-0 sudo[266422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:48 compute-0 nova_compute[260935]: 2025-10-11 08:37:48.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:37:48 compute-0 nova_compute[260935]: 2025-10-11 08:37:48.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:37:48 compute-0 podman[266487]: 2025-10-11 08:37:48.888339109 +0000 UTC m=+0.067985470 container create cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:37:48 compute-0 systemd[1]: Started libpod-conmon-cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9.scope.
Oct 11 08:37:48 compute-0 podman[266487]: 2025-10-11 08:37:48.85962162 +0000 UTC m=+0.039268041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:37:48 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:37:49 compute-0 podman[266487]: 2025-10-11 08:37:49.004940065 +0000 UTC m=+0.184586466 container init cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 08:37:49 compute-0 podman[266487]: 2025-10-11 08:37:49.016769042 +0000 UTC m=+0.196415413 container start cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 08:37:49 compute-0 podman[266487]: 2025-10-11 08:37:49.021895219 +0000 UTC m=+0.201541590 container attach cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 08:37:49 compute-0 nifty_nightingale[266503]: 167 167
Oct 11 08:37:49 compute-0 systemd[1]: libpod-cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9.scope: Deactivated successfully.
Oct 11 08:37:49 compute-0 conmon[266503]: conmon cc0a0a35dcb1a82e7421 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9.scope/container/memory.events
Oct 11 08:37:49 compute-0 podman[266487]: 2025-10-11 08:37:49.027667023 +0000 UTC m=+0.207313384 container died cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 08:37:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-73c38e87bb0f9feaf2d8cda1712a358afafc1cdedc1b0cbf830f9917a50391cf-merged.mount: Deactivated successfully.
Oct 11 08:37:49 compute-0 ceph-mon[74313]: pgmap v919: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:49 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:37:49 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:37:49 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:37:49 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:37:49 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:37:49 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:37:49 compute-0 podman[266487]: 2025-10-11 08:37:49.083405443 +0000 UTC m=+0.263051814 container remove cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nightingale, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 08:37:49 compute-0 systemd[1]: libpod-conmon-cc0a0a35dcb1a82e7421cb6b70d2911714ea86121daa5f7eead72fb5e6107be9.scope: Deactivated successfully.
Oct 11 08:37:49 compute-0 podman[266528]: 2025-10-11 08:37:49.281429322 +0000 UTC m=+0.054685051 container create f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shtern, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:37:49 compute-0 systemd[1]: Started libpod-conmon-f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6.scope.
Oct 11 08:37:49 compute-0 podman[266528]: 2025-10-11 08:37:49.255670427 +0000 UTC m=+0.028926216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:37:49 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a5d3f7c752cd8bf0e0d5a5818a21b540ee634410105d86b2821c2506641c86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a5d3f7c752cd8bf0e0d5a5818a21b540ee634410105d86b2821c2506641c86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a5d3f7c752cd8bf0e0d5a5818a21b540ee634410105d86b2821c2506641c86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a5d3f7c752cd8bf0e0d5a5818a21b540ee634410105d86b2821c2506641c86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a5d3f7c752cd8bf0e0d5a5818a21b540ee634410105d86b2821c2506641c86/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:37:49 compute-0 podman[266528]: 2025-10-11 08:37:49.384007687 +0000 UTC m=+0.157263466 container init f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 08:37:49 compute-0 podman[266528]: 2025-10-11 08:37:49.399658334 +0000 UTC m=+0.172914073 container start f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 08:37:49 compute-0 podman[266528]: 2025-10-11 08:37:49.404044909 +0000 UTC m=+0.177300648 container attach f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 11 08:37:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v920: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:49 compute-0 nova_compute[260935]: 2025-10-11 08:37:49.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:37:49 compute-0 nova_compute[260935]: 2025-10-11 08:37:49.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:37:49 compute-0 nova_compute[260935]: 2025-10-11 08:37:49.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:37:49 compute-0 nova_compute[260935]: 2025-10-11 08:37:49.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:37:49 compute-0 nova_compute[260935]: 2025-10-11 08:37:49.739 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:37:49 compute-0 nova_compute[260935]: 2025-10-11 08:37:49.739 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:37:49 compute-0 nova_compute[260935]: 2025-10-11 08:37:49.739 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:37:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:37:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/291088395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:37:50 compute-0 nova_compute[260935]: 2025-10-11 08:37:50.210 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:37:50 compute-0 nova_compute[260935]: 2025-10-11 08:37:50.416 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:37:50 compute-0 nova_compute[260935]: 2025-10-11 08:37:50.418 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5116MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:37:50 compute-0 nova_compute[260935]: 2025-10-11 08:37:50.418 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:37:50 compute-0 nova_compute[260935]: 2025-10-11 08:37:50.419 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:37:50 compute-0 thirsty_shtern[266545]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:37:50 compute-0 thirsty_shtern[266545]: --> relative data size: 1.0
Oct 11 08:37:50 compute-0 thirsty_shtern[266545]: --> All data devices are unavailable
Oct 11 08:37:50 compute-0 systemd[1]: libpod-f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6.scope: Deactivated successfully.
Oct 11 08:37:50 compute-0 podman[266528]: 2025-10-11 08:37:50.534779492 +0000 UTC m=+1.308035191 container died f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shtern, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:37:50 compute-0 systemd[1]: libpod-f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6.scope: Consumed 1.070s CPU time.
Oct 11 08:37:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-79a5d3f7c752cd8bf0e0d5a5818a21b540ee634410105d86b2821c2506641c86-merged.mount: Deactivated successfully.
Oct 11 08:37:50 compute-0 podman[266528]: 2025-10-11 08:37:50.606970721 +0000 UTC m=+1.380226440 container remove f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 08:37:50 compute-0 systemd[1]: libpod-conmon-f8904a8a61465b2f188eedf3d9f2f9ec81fda04f4ef6d526f70ca15c89da58c6.scope: Deactivated successfully.
Oct 11 08:37:50 compute-0 sudo[266422]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:50 compute-0 sudo[266610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:37:50 compute-0 sudo[266610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:50 compute-0 sudo[266610]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:50 compute-0 nova_compute[260935]: 2025-10-11 08:37:50.781 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:37:50 compute-0 nova_compute[260935]: 2025-10-11 08:37:50.782 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:37:50 compute-0 sudo[266635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:37:50 compute-0 sudo[266635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:50 compute-0 sudo[266635]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:50 compute-0 nova_compute[260935]: 2025-10-11 08:37:50.896 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 08:37:50 compute-0 sudo[266660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:37:50 compute-0 sudo[266660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:50 compute-0 sudo[266660]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:51 compute-0 nova_compute[260935]: 2025-10-11 08:37:51.011 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 08:37:51 compute-0 nova_compute[260935]: 2025-10-11 08:37:51.011 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 08:37:51 compute-0 sudo[266685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:37:51 compute-0 sudo[266685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:51 compute-0 nova_compute[260935]: 2025-10-11 08:37:51.041 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 08:37:51 compute-0 nova_compute[260935]: 2025-10-11 08:37:51.066 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 08:37:51 compute-0 ceph-mon[74313]: pgmap v920: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/291088395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:37:51 compute-0 nova_compute[260935]: 2025-10-11 08:37:51.091 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:37:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:51 compute-0 podman[266770]: 2025-10-11 08:37:51.525077319 +0000 UTC m=+0.063847863 container create ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:37:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:37:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1280830858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:37:51 compute-0 nova_compute[260935]: 2025-10-11 08:37:51.568 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:37:51 compute-0 systemd[1]: Started libpod-conmon-ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89.scope.
Oct 11 08:37:51 compute-0 nova_compute[260935]: 2025-10-11 08:37:51.579 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:37:51 compute-0 podman[266770]: 2025-10-11 08:37:51.50127807 +0000 UTC m=+0.040048624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:37:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:37:51 compute-0 podman[266770]: 2025-10-11 08:37:51.62960991 +0000 UTC m=+0.168380454 container init ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wescoff, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:37:51 compute-0 podman[266770]: 2025-10-11 08:37:51.643351022 +0000 UTC m=+0.182121556 container start ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wescoff, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 08:37:51 compute-0 podman[266770]: 2025-10-11 08:37:51.647852171 +0000 UTC m=+0.186622785 container attach ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wescoff, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:37:51 compute-0 mystifying_wescoff[266788]: 167 167
Oct 11 08:37:51 compute-0 systemd[1]: libpod-ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89.scope: Deactivated successfully.
Oct 11 08:37:51 compute-0 podman[266770]: 2025-10-11 08:37:51.654243993 +0000 UTC m=+0.193014547 container died ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wescoff, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 08:37:51 compute-0 nova_compute[260935]: 2025-10-11 08:37:51.688 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:37:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-163a8781212963e2ca5f87b8d7b1048ba5afd23c944307695e18708a37282a80-merged.mount: Deactivated successfully.
Oct 11 08:37:51 compute-0 nova_compute[260935]: 2025-10-11 08:37:51.691 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:37:51 compute-0 nova_compute[260935]: 2025-10-11 08:37:51.692 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:37:51 compute-0 podman[266770]: 2025-10-11 08:37:51.712230087 +0000 UTC m=+0.251000631 container remove ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:37:51 compute-0 systemd[1]: libpod-conmon-ecda262daf8a2f8da0255b5f1bdc2aff8881f22c3d8915b018dac2d1f7460a89.scope: Deactivated successfully.
Oct 11 08:37:51 compute-0 podman[266812]: 2025-10-11 08:37:51.957431961 +0000 UTC m=+0.066206229 container create 4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 08:37:52 compute-0 systemd[1]: Started libpod-conmon-4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da.scope.
Oct 11 08:37:52 compute-0 podman[266812]: 2025-10-11 08:37:51.93460668 +0000 UTC m=+0.043380968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:37:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a21aa9d6d73f7a344d776e189b25233ea02bfc55dff84b533fc9dd2f5d5e76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a21aa9d6d73f7a344d776e189b25233ea02bfc55dff84b533fc9dd2f5d5e76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a21aa9d6d73f7a344d776e189b25233ea02bfc55dff84b533fc9dd2f5d5e76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a21aa9d6d73f7a344d776e189b25233ea02bfc55dff84b533fc9dd2f5d5e76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:37:52 compute-0 podman[266812]: 2025-10-11 08:37:52.07169576 +0000 UTC m=+0.180470088 container init 4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 08:37:52 compute-0 podman[266812]: 2025-10-11 08:37:52.082395896 +0000 UTC m=+0.191170204 container start 4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 08:37:52 compute-0 podman[266812]: 2025-10-11 08:37:52.086510613 +0000 UTC m=+0.195285001 container attach 4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:37:52 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1280830858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:37:52 compute-0 nova_compute[260935]: 2025-10-11 08:37:52.688 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:37:52 compute-0 nova_compute[260935]: 2025-10-11 08:37:52.689 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:37:52 compute-0 nova_compute[260935]: 2025-10-11 08:37:52.690 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:37:52 compute-0 nova_compute[260935]: 2025-10-11 08:37:52.690 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:37:52 compute-0 nova_compute[260935]: 2025-10-11 08:37:52.719 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 08:37:52 compute-0 keen_lewin[266829]: {
Oct 11 08:37:52 compute-0 keen_lewin[266829]:     "0": [
Oct 11 08:37:52 compute-0 keen_lewin[266829]:         {
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "devices": [
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "/dev/loop3"
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             ],
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "lv_name": "ceph_lv0",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "lv_size": "21470642176",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "name": "ceph_lv0",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "tags": {
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.cluster_name": "ceph",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.crush_device_class": "",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.encrypted": "0",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.osd_id": "0",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.type": "block",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.vdo": "0"
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             },
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "type": "block",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "vg_name": "ceph_vg0"
Oct 11 08:37:52 compute-0 keen_lewin[266829]:         }
Oct 11 08:37:52 compute-0 keen_lewin[266829]:     ],
Oct 11 08:37:52 compute-0 keen_lewin[266829]:     "1": [
Oct 11 08:37:52 compute-0 keen_lewin[266829]:         {
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "devices": [
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "/dev/loop4"
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             ],
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "lv_name": "ceph_lv1",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "lv_size": "21470642176",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "name": "ceph_lv1",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "tags": {
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.cluster_name": "ceph",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.crush_device_class": "",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.encrypted": "0",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.osd_id": "1",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.type": "block",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.vdo": "0"
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             },
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "type": "block",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "vg_name": "ceph_vg1"
Oct 11 08:37:52 compute-0 keen_lewin[266829]:         }
Oct 11 08:37:52 compute-0 keen_lewin[266829]:     ],
Oct 11 08:37:52 compute-0 keen_lewin[266829]:     "2": [
Oct 11 08:37:52 compute-0 keen_lewin[266829]:         {
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "devices": [
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "/dev/loop5"
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             ],
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "lv_name": "ceph_lv2",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "lv_size": "21470642176",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "name": "ceph_lv2",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "tags": {
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.cluster_name": "ceph",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.crush_device_class": "",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.encrypted": "0",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.osd_id": "2",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.type": "block",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:                 "ceph.vdo": "0"
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             },
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "type": "block",
Oct 11 08:37:52 compute-0 keen_lewin[266829]:             "vg_name": "ceph_vg2"
Oct 11 08:37:52 compute-0 keen_lewin[266829]:         }
Oct 11 08:37:52 compute-0 keen_lewin[266829]:     ]
Oct 11 08:37:52 compute-0 keen_lewin[266829]: }
Oct 11 08:37:52 compute-0 systemd[1]: libpod-4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da.scope: Deactivated successfully.
Oct 11 08:37:52 compute-0 podman[266838]: 2025-10-11 08:37:52.968022417 +0000 UTC m=+0.032976871 container died 4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 08:37:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:37:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-80a21aa9d6d73f7a344d776e189b25233ea02bfc55dff84b533fc9dd2f5d5e76-merged.mount: Deactivated successfully.
Oct 11 08:37:53 compute-0 podman[266838]: 2025-10-11 08:37:53.04347582 +0000 UTC m=+0.108430254 container remove 4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:37:53 compute-0 systemd[1]: libpod-conmon-4e147d514f3d5a93735774097b0d1c7b84237a886a8a3151879f0e7e1e0841da.scope: Deactivated successfully.
Oct 11 08:37:53 compute-0 sudo[266685]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:53 compute-0 ceph-mon[74313]: pgmap v921: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:53 compute-0 sudo[266853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:37:53 compute-0 sudo[266853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:53 compute-0 sudo[266853]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:53 compute-0 sudo[266878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:37:53 compute-0 sudo[266878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:53 compute-0 sudo[266878]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:53 compute-0 sudo[266903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:37:53 compute-0 sudo[266903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:53 compute-0 sudo[266903]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:53 compute-0 sudo[266928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:37:53 compute-0 sudo[266928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:54 compute-0 podman[266995]: 2025-10-11 08:37:54.026127129 +0000 UTC m=+0.111925484 container create 4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 08:37:54 compute-0 podman[266995]: 2025-10-11 08:37:53.954231318 +0000 UTC m=+0.040029693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:37:54 compute-0 systemd[1]: Started libpod-conmon-4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b.scope.
Oct 11 08:37:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:37:54 compute-0 podman[266995]: 2025-10-11 08:37:54.144286529 +0000 UTC m=+0.230084894 container init 4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 08:37:54 compute-0 podman[266995]: 2025-10-11 08:37:54.157071694 +0000 UTC m=+0.242870059 container start 4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 11 08:37:54 compute-0 podman[266995]: 2025-10-11 08:37:54.161867481 +0000 UTC m=+0.247665896 container attach 4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 08:37:54 compute-0 dazzling_chatelet[267011]: 167 167
Oct 11 08:37:54 compute-0 systemd[1]: libpod-4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b.scope: Deactivated successfully.
Oct 11 08:37:54 compute-0 podman[266995]: 2025-10-11 08:37:54.167181502 +0000 UTC m=+0.252979867 container died 4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 08:37:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bd638472678d4392cb4f2c669c318331e4baa597b9b9fe223da981af2de770e-merged.mount: Deactivated successfully.
Oct 11 08:37:54 compute-0 podman[266995]: 2025-10-11 08:37:54.218300191 +0000 UTC m=+0.304098556 container remove 4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chatelet, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 08:37:54 compute-0 systemd[1]: libpod-conmon-4485d01d5d8c51fa11927ce5dabb07ffecc3da562a4ff3c57aac30223ee1e62b.scope: Deactivated successfully.
Oct 11 08:37:54 compute-0 podman[267036]: 2025-10-11 08:37:54.475707342 +0000 UTC m=+0.074248879 container create 5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_margulis, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:37:54 compute-0 systemd[1]: Started libpod-conmon-5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50.scope.
Oct 11 08:37:54 compute-0 podman[267036]: 2025-10-11 08:37:54.445940733 +0000 UTC m=+0.044482320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:37:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132f5e5e1b84d6d662c67a840df4f300f9909ba275843e5466ef9cf0210bf12e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132f5e5e1b84d6d662c67a840df4f300f9909ba275843e5466ef9cf0210bf12e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132f5e5e1b84d6d662c67a840df4f300f9909ba275843e5466ef9cf0210bf12e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132f5e5e1b84d6d662c67a840df4f300f9909ba275843e5466ef9cf0210bf12e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:37:54 compute-0 podman[267036]: 2025-10-11 08:37:54.579111201 +0000 UTC m=+0.177652758 container init 5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:37:54 compute-0 podman[267036]: 2025-10-11 08:37:54.596611611 +0000 UTC m=+0.195153148 container start 5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 08:37:54 compute-0 podman[267036]: 2025-10-11 08:37:54.601304134 +0000 UTC m=+0.199845731 container attach 5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:37:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:37:54
Oct 11 08:37:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:37:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:37:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'images', 'vms', 'backups']
Oct 11 08:37:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:37:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:37:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:37:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:37:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:37:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:37:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:37:55 compute-0 ceph-mon[74313]: pgmap v922: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:55 compute-0 gracious_margulis[267053]: {
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:         "osd_id": 2,
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:         "type": "bluestore"
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:     },
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:         "osd_id": 0,
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:         "type": "bluestore"
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:     },
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:         "osd_id": 1,
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:         "type": "bluestore"
Oct 11 08:37:55 compute-0 gracious_margulis[267053]:     }
Oct 11 08:37:55 compute-0 gracious_margulis[267053]: }
Oct 11 08:37:55 compute-0 systemd[1]: libpod-5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50.scope: Deactivated successfully.
Oct 11 08:37:55 compute-0 podman[267036]: 2025-10-11 08:37:55.721845957 +0000 UTC m=+1.320387504 container died 5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 08:37:55 compute-0 systemd[1]: libpod-5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50.scope: Consumed 1.129s CPU time.
Oct 11 08:37:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-132f5e5e1b84d6d662c67a840df4f300f9909ba275843e5466ef9cf0210bf12e-merged.mount: Deactivated successfully.
Oct 11 08:37:55 compute-0 podman[267036]: 2025-10-11 08:37:55.807523481 +0000 UTC m=+1.406065018 container remove 5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:37:55 compute-0 systemd[1]: libpod-conmon-5dffa9f0037a13c35c38850671397716e5d1b1e36d713c14e16e9cd35e8eee50.scope: Deactivated successfully.
Oct 11 08:37:55 compute-0 sudo[266928]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:37:55 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:37:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:37:55 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:37:55 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 7d623a57-1adc-4775-aa55-d65085f6855d does not exist
Oct 11 08:37:55 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 22ef0d61-3740-4eb5-8e4a-4018b923e7f4 does not exist
Oct 11 08:37:55 compute-0 sudo[267100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:37:55 compute-0 sudo[267100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:55 compute-0 sudo[267100]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:56 compute-0 sudo[267125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:37:56 compute-0 sudo[267125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:37:56 compute-0 sudo[267125]: pam_unix(sudo:session): session closed for user root
Oct 11 08:37:56 compute-0 ceph-mon[74313]: pgmap v923: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:56 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:37:56 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:37:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v924: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:37:58 compute-0 ceph-mon[74313]: pgmap v924: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:37:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v925: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:00 compute-0 ceph-mon[74313]: pgmap v925: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v926: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:02 compute-0 ceph-mon[74313]: pgmap v926: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:38:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v927: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:38:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:38:04 compute-0 podman[267150]: 2025-10-11 08:38:04.816134102 +0000 UTC m=+0.107046944 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 11 08:38:04 compute-0 ceph-mon[74313]: pgmap v927: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:06 compute-0 ceph-mon[74313]: pgmap v928: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:38:08 compute-0 ceph-mon[74313]: pgmap v929: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v930: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:10 compute-0 ceph-mon[74313]: pgmap v930: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v931: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:11 compute-0 podman[267170]: 2025-10-11 08:38:11.769834458 +0000 UTC m=+0.071583863 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 11 08:38:12 compute-0 ceph-mon[74313]: pgmap v931: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:38:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v932: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:14 compute-0 ceph-mon[74313]: pgmap v932: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:38:15.168 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:38:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:38:15.169 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:38:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:38:15.169 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:38:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:16 compute-0 podman[267192]: 2025-10-11 08:38:16.787106352 +0000 UTC m=+0.085163531 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:38:16 compute-0 ceph-mon[74313]: pgmap v933: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v934: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:38:18 compute-0 podman[267213]: 2025-10-11 08:38:18.840619817 +0000 UTC m=+0.131529163 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:38:18 compute-0 ceph-mon[74313]: pgmap v934: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:20 compute-0 ceph-mon[74313]: pgmap v935: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v936: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:38:23 compute-0 ceph-mon[74313]: pgmap v936: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v937: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:38:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:38:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:38:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:38:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:38:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:38:25 compute-0 ceph-mon[74313]: pgmap v937: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:27 compute-0 ceph-mon[74313]: pgmap v938: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:38:29 compute-0 ceph-mon[74313]: pgmap v939: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:31 compute-0 ceph-mon[74313]: pgmap v940: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:38:33 compute-0 ceph-mon[74313]: pgmap v941: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:35 compute-0 ceph-mon[74313]: pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:35 compute-0 podman[267242]: 2025-10-11 08:38:35.792380768 +0000 UTC m=+0.090646336 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 08:38:37 compute-0 ceph-mon[74313]: pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:38:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3518290371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:38:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:38:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3518290371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:38:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:38:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3518290371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:38:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3518290371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:38:39 compute-0 ceph-mon[74313]: pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:41 compute-0 ceph-mon[74313]: pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:42 compute-0 sshd-session[267241]: error: kex_exchange_identification: read: Connection timed out
Oct 11 08:38:42 compute-0 sshd-session[267241]: banner exchange: Connection from 115.190.136.184 port 33824: Connection timed out
Oct 11 08:38:42 compute-0 podman[267262]: 2025-10-11 08:38:42.789231823 +0000 UTC m=+0.084082079 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible)
Oct 11 08:38:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:38:43 compute-0 ceph-mon[74313]: pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:45 compute-0 ceph-mon[74313]: pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:46 compute-0 nova_compute[260935]: 2025-10-11 08:38:46.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:38:46 compute-0 nova_compute[260935]: 2025-10-11 08:38:46.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:38:46 compute-0 nova_compute[260935]: 2025-10-11 08:38:46.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:38:47 compute-0 ceph-mon[74313]: pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:47 compute-0 podman[267283]: 2025-10-11 08:38:47.785429044 +0000 UTC m=+0.076769301 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 08:38:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:38:48 compute-0 nova_compute[260935]: 2025-10-11 08:38:48.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:38:48 compute-0 nova_compute[260935]: 2025-10-11 08:38:48.773 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:38:49 compute-0 ceph-mon[74313]: pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:49 compute-0 nova_compute[260935]: 2025-10-11 08:38:49.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:38:49 compute-0 podman[267303]: 2025-10-11 08:38:49.837177227 +0000 UTC m=+0.135075704 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 08:38:50 compute-0 nova_compute[260935]: 2025-10-11 08:38:50.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:38:51 compute-0 ceph-mon[74313]: pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:51 compute-0 nova_compute[260935]: 2025-10-11 08:38:51.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:38:51 compute-0 nova_compute[260935]: 2025-10-11 08:38:51.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:38:51 compute-0 nova_compute[260935]: 2025-10-11 08:38:51.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:38:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:38:53 compute-0 ceph-mon[74313]: pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:53 compute-0 nova_compute[260935]: 2025-10-11 08:38:53.534 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:38:53 compute-0 nova_compute[260935]: 2025-10-11 08:38:53.535 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:38:53 compute-0 nova_compute[260935]: 2025-10-11 08:38:53.535 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:38:53 compute-0 nova_compute[260935]: 2025-10-11 08:38:53.535 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:38:53 compute-0 nova_compute[260935]: 2025-10-11 08:38:53.536 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:38:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:38:53 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/179881512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:38:54 compute-0 nova_compute[260935]: 2025-10-11 08:38:54.016 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:38:54 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/179881512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:38:54 compute-0 nova_compute[260935]: 2025-10-11 08:38:54.202 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:38:54 compute-0 nova_compute[260935]: 2025-10-11 08:38:54.203 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5172MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:38:54 compute-0 nova_compute[260935]: 2025-10-11 08:38:54.204 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:38:54 compute-0 nova_compute[260935]: 2025-10-11 08:38:54.204 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:38:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:38:54
Oct 11 08:38:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:38:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:38:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'volumes', 'images', '.rgw.root', 'default.rgw.control', 'backups', '.mgr']
Oct 11 08:38:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:38:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:38:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:38:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:38:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:38:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:38:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:38:55 compute-0 nova_compute[260935]: 2025-10-11 08:38:55.147 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:38:55 compute-0 nova_compute[260935]: 2025-10-11 08:38:55.148 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:38:55 compute-0 nova_compute[260935]: 2025-10-11 08:38:55.173 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:38:55 compute-0 ceph-mon[74313]: pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:38:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4180341362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:38:55 compute-0 nova_compute[260935]: 2025-10-11 08:38:55.604 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:38:55 compute-0 nova_compute[260935]: 2025-10-11 08:38:55.609 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:38:55 compute-0 nova_compute[260935]: 2025-10-11 08:38:55.792 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:38:55 compute-0 nova_compute[260935]: 2025-10-11 08:38:55.793 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:38:55 compute-0 nova_compute[260935]: 2025-10-11 08:38:55.794 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:38:56 compute-0 sudo[267370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:38:56 compute-0 sudo[267370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:38:56 compute-0 sudo[267370]: pam_unix(sudo:session): session closed for user root
Oct 11 08:38:56 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4180341362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:38:56 compute-0 sudo[267395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:38:56 compute-0 sudo[267395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:38:56 compute-0 sudo[267395]: pam_unix(sudo:session): session closed for user root
Oct 11 08:38:56 compute-0 sudo[267420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:38:56 compute-0 sudo[267420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:38:56 compute-0 sudo[267420]: pam_unix(sudo:session): session closed for user root
Oct 11 08:38:56 compute-0 sudo[267445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:38:56 compute-0 sudo[267445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:38:56 compute-0 nova_compute[260935]: 2025-10-11 08:38:56.794 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:38:56 compute-0 nova_compute[260935]: 2025-10-11 08:38:56.795 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:38:56 compute-0 nova_compute[260935]: 2025-10-11 08:38:56.795 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:38:56 compute-0 nova_compute[260935]: 2025-10-11 08:38:56.824 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 08:38:56 compute-0 sudo[267445]: pam_unix(sudo:session): session closed for user root
Oct 11 08:38:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:38:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:38:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:38:56 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:38:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:38:56 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:38:56 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 8afa6748-4d8a-49dd-a7b7-31f015af244e does not exist
Oct 11 08:38:56 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 3002e3f2-b4df-4e24-ac15-977cbf8f3593 does not exist
Oct 11 08:38:56 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 4b0b8523-7efe-4cdf-9450-ab5f7ac6324f does not exist
Oct 11 08:38:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:38:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:38:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:38:56 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:38:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:38:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:38:57 compute-0 sudo[267501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:38:57 compute-0 sudo[267501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:38:57 compute-0 sudo[267501]: pam_unix(sudo:session): session closed for user root
Oct 11 08:38:57 compute-0 sudo[267526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:38:57 compute-0 sudo[267526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:38:57 compute-0 sudo[267526]: pam_unix(sudo:session): session closed for user root
Oct 11 08:38:57 compute-0 ceph-mon[74313]: pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:38:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:38:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:38:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:38:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:38:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:38:57 compute-0 sudo[267551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:38:57 compute-0 sudo[267551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:38:57 compute-0 sudo[267551]: pam_unix(sudo:session): session closed for user root
Oct 11 08:38:57 compute-0 sudo[267576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:38:57 compute-0 sudo[267576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:38:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:57 compute-0 podman[267643]: 2025-10-11 08:38:57.696245618 +0000 UTC m=+0.050904963 container create e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:38:57 compute-0 systemd[1]: Started libpod-conmon-e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3.scope.
Oct 11 08:38:57 compute-0 podman[267643]: 2025-10-11 08:38:57.669645809 +0000 UTC m=+0.024305244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:38:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:38:57 compute-0 podman[267643]: 2025-10-11 08:38:57.786513263 +0000 UTC m=+0.141172628 container init e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 08:38:57 compute-0 podman[267643]: 2025-10-11 08:38:57.799533584 +0000 UTC m=+0.154192969 container start e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:38:57 compute-0 podman[267643]: 2025-10-11 08:38:57.804503116 +0000 UTC m=+0.159162461 container attach e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 08:38:57 compute-0 hardcore_archimedes[267659]: 167 167
Oct 11 08:38:57 compute-0 systemd[1]: libpod-e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3.scope: Deactivated successfully.
Oct 11 08:38:57 compute-0 podman[267643]: 2025-10-11 08:38:57.807212543 +0000 UTC m=+0.161871888 container died e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 08:38:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6692efc6ee7b4c340588fe59815a8bab78c225adba108fe2f645e9075512b486-merged.mount: Deactivated successfully.
Oct 11 08:38:57 compute-0 podman[267643]: 2025-10-11 08:38:57.883447798 +0000 UTC m=+0.238107183 container remove e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 08:38:57 compute-0 systemd[1]: libpod-conmon-e0b0de607ac5869410e9f794e90b8f497c154ec6e30d8e749f66aed7e1dadcb3.scope: Deactivated successfully.
Oct 11 08:38:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:38:58 compute-0 podman[267682]: 2025-10-11 08:38:58.05811687 +0000 UTC m=+0.045589451 container create ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ardinghelli, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 08:38:58 compute-0 systemd[1]: Started libpod-conmon-ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d.scope.
Oct 11 08:38:58 compute-0 podman[267682]: 2025-10-11 08:38:58.036560315 +0000 UTC m=+0.024032916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:38:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cbb2aa42b72638a9f789442d8c6081cc90f304dea89055e90f696eaeabc96b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cbb2aa42b72638a9f789442d8c6081cc90f304dea89055e90f696eaeabc96b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cbb2aa42b72638a9f789442d8c6081cc90f304dea89055e90f696eaeabc96b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cbb2aa42b72638a9f789442d8c6081cc90f304dea89055e90f696eaeabc96b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cbb2aa42b72638a9f789442d8c6081cc90f304dea89055e90f696eaeabc96b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:38:58 compute-0 podman[267682]: 2025-10-11 08:38:58.158976347 +0000 UTC m=+0.146448958 container init ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ardinghelli, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:38:58 compute-0 podman[267682]: 2025-10-11 08:38:58.1723998 +0000 UTC m=+0.159872381 container start ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ardinghelli, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:38:58 compute-0 podman[267682]: 2025-10-11 08:38:58.177069323 +0000 UTC m=+0.164541954 container attach ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ardinghelli, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:38:59 compute-0 ceph-mon[74313]: pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:59 compute-0 distracted_ardinghelli[267699]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:38:59 compute-0 distracted_ardinghelli[267699]: --> relative data size: 1.0
Oct 11 08:38:59 compute-0 distracted_ardinghelli[267699]: --> All data devices are unavailable
Oct 11 08:38:59 compute-0 systemd[1]: libpod-ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d.scope: Deactivated successfully.
Oct 11 08:38:59 compute-0 systemd[1]: libpod-ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d.scope: Consumed 1.063s CPU time.
Oct 11 08:38:59 compute-0 podman[267682]: 2025-10-11 08:38:59.295013111 +0000 UTC m=+1.282485692 container died ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:38:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-55cbb2aa42b72638a9f789442d8c6081cc90f304dea89055e90f696eaeabc96b-merged.mount: Deactivated successfully.
Oct 11 08:38:59 compute-0 podman[267682]: 2025-10-11 08:38:59.367672993 +0000 UTC m=+1.355145564 container remove ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 08:38:59 compute-0 systemd[1]: libpod-conmon-ef4a8feb9767e172b447955b8eb07cfa1b6476a8a1cf2b68eb8ccf1e7ec7117d.scope: Deactivated successfully.
Oct 11 08:38:59 compute-0 sudo[267576]: pam_unix(sudo:session): session closed for user root
Oct 11 08:38:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:38:59 compute-0 sudo[267743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:38:59 compute-0 sudo[267743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:38:59 compute-0 sudo[267743]: pam_unix(sudo:session): session closed for user root
Oct 11 08:38:59 compute-0 sudo[267768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:38:59 compute-0 sudo[267768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:38:59 compute-0 sudo[267768]: pam_unix(sudo:session): session closed for user root
Oct 11 08:38:59 compute-0 sudo[267793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:38:59 compute-0 sudo[267793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:38:59 compute-0 sudo[267793]: pam_unix(sudo:session): session closed for user root
Oct 11 08:38:59 compute-0 sudo[267818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:38:59 compute-0 sudo[267818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:39:00 compute-0 podman[267886]: 2025-10-11 08:39:00.275340464 +0000 UTC m=+0.078599163 container create 649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 08:39:00 compute-0 podman[267886]: 2025-10-11 08:39:00.244967297 +0000 UTC m=+0.048226036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:39:00 compute-0 systemd[1]: Started libpod-conmon-649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5.scope.
Oct 11 08:39:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:39:00 compute-0 podman[267886]: 2025-10-11 08:39:00.414665008 +0000 UTC m=+0.217923757 container init 649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 08:39:00 compute-0 podman[267886]: 2025-10-11 08:39:00.426886317 +0000 UTC m=+0.230145006 container start 649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 08:39:00 compute-0 podman[267886]: 2025-10-11 08:39:00.43084789 +0000 UTC m=+0.234106589 container attach 649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 08:39:00 compute-0 youthful_banach[267903]: 167 167
Oct 11 08:39:00 compute-0 systemd[1]: libpod-649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5.scope: Deactivated successfully.
Oct 11 08:39:00 compute-0 podman[267886]: 2025-10-11 08:39:00.435053029 +0000 UTC m=+0.238311718 container died 649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:39:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d5bc87112bd85bb2f6438288cfb3876ace0eb59d2e8a885347a345ec635749a-merged.mount: Deactivated successfully.
Oct 11 08:39:00 compute-0 podman[267886]: 2025-10-11 08:39:00.485683154 +0000 UTC m=+0.288941853 container remove 649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 08:39:00 compute-0 systemd[1]: libpod-conmon-649f0ec8799212ea67dfd34b23b4f938a1b40c906be0a9a92f131a15b16651e5.scope: Deactivated successfully.
Oct 11 08:39:00 compute-0 podman[267926]: 2025-10-11 08:39:00.750918009 +0000 UTC m=+0.073826427 container create 1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_haibt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:39:00 compute-0 systemd[1]: Started libpod-conmon-1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f.scope.
Oct 11 08:39:00 compute-0 podman[267926]: 2025-10-11 08:39:00.71832662 +0000 UTC m=+0.041235078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:39:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feba2df914374848d6ef71db2f26d1f336713c74ba24da729f82228e26dd28b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feba2df914374848d6ef71db2f26d1f336713c74ba24da729f82228e26dd28b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feba2df914374848d6ef71db2f26d1f336713c74ba24da729f82228e26dd28b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feba2df914374848d6ef71db2f26d1f336713c74ba24da729f82228e26dd28b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:39:00 compute-0 podman[267926]: 2025-10-11 08:39:00.854469583 +0000 UTC m=+0.177378011 container init 1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 08:39:00 compute-0 podman[267926]: 2025-10-11 08:39:00.870942703 +0000 UTC m=+0.193851091 container start 1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 08:39:00 compute-0 podman[267926]: 2025-10-11 08:39:00.874881325 +0000 UTC m=+0.197789753 container attach 1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 08:39:01 compute-0 ceph-mon[74313]: pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]: {
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:     "0": [
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:         {
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "devices": [
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "/dev/loop3"
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             ],
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "lv_name": "ceph_lv0",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "lv_size": "21470642176",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "name": "ceph_lv0",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "tags": {
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.cluster_name": "ceph",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.crush_device_class": "",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.encrypted": "0",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.osd_id": "0",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.type": "block",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.vdo": "0"
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             },
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "type": "block",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "vg_name": "ceph_vg0"
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:         }
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:     ],
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:     "1": [
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:         {
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "devices": [
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "/dev/loop4"
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             ],
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "lv_name": "ceph_lv1",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "lv_size": "21470642176",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "name": "ceph_lv1",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "tags": {
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.cluster_name": "ceph",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.crush_device_class": "",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.encrypted": "0",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.osd_id": "1",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.type": "block",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.vdo": "0"
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             },
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "type": "block",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "vg_name": "ceph_vg1"
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:         }
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:     ],
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:     "2": [
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:         {
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "devices": [
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "/dev/loop5"
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             ],
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "lv_name": "ceph_lv2",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "lv_size": "21470642176",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "name": "ceph_lv2",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "tags": {
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.cluster_name": "ceph",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.crush_device_class": "",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.encrypted": "0",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.osd_id": "2",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.type": "block",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:                 "ceph.vdo": "0"
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             },
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "type": "block",
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:             "vg_name": "ceph_vg2"
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:         }
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]:     ]
Oct 11 08:39:01 compute-0 flamboyant_haibt[267943]: }
Oct 11 08:39:01 compute-0 systemd[1]: libpod-1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f.scope: Deactivated successfully.
Oct 11 08:39:01 compute-0 podman[267926]: 2025-10-11 08:39:01.696552833 +0000 UTC m=+1.019461251 container died 1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 08:39:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-feba2df914374848d6ef71db2f26d1f336713c74ba24da729f82228e26dd28b1-merged.mount: Deactivated successfully.
Oct 11 08:39:01 compute-0 podman[267926]: 2025-10-11 08:39:01.782404512 +0000 UTC m=+1.105312920 container remove 1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_haibt, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 08:39:01 compute-0 systemd[1]: libpod-conmon-1efc68d922dc2a88cbc4aac3a91008f48d5cdc7cec4eebc2f0cba556b936f76f.scope: Deactivated successfully.
Oct 11 08:39:01 compute-0 sudo[267818]: pam_unix(sudo:session): session closed for user root
Oct 11 08:39:01 compute-0 sudo[267966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:39:01 compute-0 sudo[267966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:39:01 compute-0 sudo[267966]: pam_unix(sudo:session): session closed for user root
Oct 11 08:39:02 compute-0 sudo[267991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:39:02 compute-0 sudo[267991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:39:02 compute-0 sudo[267991]: pam_unix(sudo:session): session closed for user root
Oct 11 08:39:02 compute-0 sudo[268016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:39:02 compute-0 sudo[268016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:39:02 compute-0 sudo[268016]: pam_unix(sudo:session): session closed for user root
Oct 11 08:39:02 compute-0 sudo[268041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:39:02 compute-0 sudo[268041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:39:02 compute-0 podman[268106]: 2025-10-11 08:39:02.707075326 +0000 UTC m=+0.065931972 container create 898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 08:39:02 compute-0 systemd[1]: Started libpod-conmon-898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5.scope.
Oct 11 08:39:02 compute-0 podman[268106]: 2025-10-11 08:39:02.681487886 +0000 UTC m=+0.040344582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:39:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:39:02 compute-0 podman[268106]: 2025-10-11 08:39:02.81132767 +0000 UTC m=+0.170184356 container init 898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 08:39:02 compute-0 podman[268106]: 2025-10-11 08:39:02.822774216 +0000 UTC m=+0.181630872 container start 898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 08:39:02 compute-0 podman[268106]: 2025-10-11 08:39:02.827381338 +0000 UTC m=+0.186238034 container attach 898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:39:02 compute-0 suspicious_hoover[268122]: 167 167
Oct 11 08:39:02 compute-0 systemd[1]: libpod-898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5.scope: Deactivated successfully.
Oct 11 08:39:02 compute-0 podman[268106]: 2025-10-11 08:39:02.831450964 +0000 UTC m=+0.190307610 container died 898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:39:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-217d92d393ffbd968d07661fb62a1659ef920f4fd7dd2f5ad3f9ab493a3ceaaa-merged.mount: Deactivated successfully.
Oct 11 08:39:02 compute-0 podman[268106]: 2025-10-11 08:39:02.914872763 +0000 UTC m=+0.273729379 container remove 898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:39:02 compute-0 systemd[1]: libpod-conmon-898abcf45acf911663d9b0b76e3fb512b4f487a2b72d44f8298527f01ee6d2c5.scope: Deactivated successfully.
Oct 11 08:39:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:39:03 compute-0 podman[268145]: 2025-10-11 08:39:03.147375915 +0000 UTC m=+0.066503138 container create c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 08:39:03 compute-0 systemd[1]: Started libpod-conmon-c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584.scope.
Oct 11 08:39:03 compute-0 podman[268145]: 2025-10-11 08:39:03.119784338 +0000 UTC m=+0.038911611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:39:03 compute-0 ceph-mon[74313]: pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febf5c4eab80e435038d5d874253786158b1146c70d622daa6dc70a86cecb88a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febf5c4eab80e435038d5d874253786158b1146c70d622daa6dc70a86cecb88a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febf5c4eab80e435038d5d874253786158b1146c70d622daa6dc70a86cecb88a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febf5c4eab80e435038d5d874253786158b1146c70d622daa6dc70a86cecb88a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:39:03 compute-0 podman[268145]: 2025-10-11 08:39:03.250665041 +0000 UTC m=+0.169792304 container init c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 08:39:03 compute-0 podman[268145]: 2025-10-11 08:39:03.264867196 +0000 UTC m=+0.183994419 container start c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 08:39:03 compute-0 podman[268145]: 2025-10-11 08:39:03.271238568 +0000 UTC m=+0.190365831 container attach c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 08:39:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:04 compute-0 infallible_galileo[268162]: {
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:         "osd_id": 2,
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:         "type": "bluestore"
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:     },
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:         "osd_id": 0,
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:         "type": "bluestore"
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:     },
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:         "osd_id": 1,
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:         "type": "bluestore"
Oct 11 08:39:04 compute-0 infallible_galileo[268162]:     }
Oct 11 08:39:04 compute-0 infallible_galileo[268162]: }
Oct 11 08:39:04 compute-0 systemd[1]: libpod-c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584.scope: Deactivated successfully.
Oct 11 08:39:04 compute-0 systemd[1]: libpod-c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584.scope: Consumed 1.121s CPU time.
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:39:04 compute-0 podman[268195]: 2025-10-11 08:39:04.446767219 +0000 UTC m=+0.045096877 container died c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:39:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-febf5c4eab80e435038d5d874253786158b1146c70d622daa6dc70a86cecb88a-merged.mount: Deactivated successfully.
Oct 11 08:39:04 compute-0 podman[268195]: 2025-10-11 08:39:04.526306978 +0000 UTC m=+0.124636596 container remove c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_galileo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 08:39:04 compute-0 systemd[1]: libpod-conmon-c090d6f3b50b5d57abf14433b969721484c122a9bb10344b902f897d7ec17584.scope: Deactivated successfully.
Oct 11 08:39:04 compute-0 sudo[268041]: pam_unix(sudo:session): session closed for user root
Oct 11 08:39:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:39:04 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:39:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:39:04 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ef3172c8-de60-405c-bdb2-78267df11b5f does not exist
Oct 11 08:39:04 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 61a5e16b-24d2-43ac-a536-a7d022ea8ae9 does not exist
Oct 11 08:39:04 compute-0 sudo[268210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:39:04 compute-0 sudo[268210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:39:04 compute-0 sudo[268210]: pam_unix(sudo:session): session closed for user root
Oct 11 08:39:04 compute-0 sudo[268235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:39:04 compute-0 sudo[268235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:39:04 compute-0 sudo[268235]: pam_unix(sudo:session): session closed for user root
Oct 11 08:39:05 compute-0 ceph-mon[74313]: pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:05 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:39:05 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:39:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:06 compute-0 podman[268260]: 2025-10-11 08:39:06.821136685 +0000 UTC m=+0.108587788 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:39:07 compute-0 ceph-mon[74313]: pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:39:09 compute-0 ceph-mon[74313]: pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:11 compute-0 ceph-mon[74313]: pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:39:13 compute-0 ceph-mon[74313]: pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:13 compute-0 podman[268281]: 2025-10-11 08:39:13.78524304 +0000 UTC m=+0.085681305 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 08:39:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:39:15.169 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:39:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:39:15.170 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:39:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:39:15.170 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:39:15 compute-0 ceph-mon[74313]: pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:17 compute-0 ceph-mon[74313]: pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:39:18 compute-0 podman[268301]: 2025-10-11 08:39:18.80283509 +0000 UTC m=+0.098295485 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Oct 11 08:39:19 compute-0 ceph-mon[74313]: pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:20 compute-0 ceph-mon[74313]: pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:20 compute-0 podman[268322]: 2025-10-11 08:39:20.847716997 +0000 UTC m=+0.144793831 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 08:39:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:22 compute-0 ceph-mon[74313]: pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:39:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:24 compute-0 ceph-mon[74313]: pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:39:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:39:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:39:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:39:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:39:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:39:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:26 compute-0 ceph-mon[74313]: pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:39:28 compute-0 ceph-mon[74313]: pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:30 compute-0 ceph-mon[74313]: pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:32 compute-0 ceph-mon[74313]: pgmap v971: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:39:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:34 compute-0 ceph-mon[74313]: pgmap v972: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:36 compute-0 ceph-mon[74313]: pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:39:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2935519272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:39:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:39:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2935519272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:39:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2935519272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:39:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2935519272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:39:37 compute-0 podman[268350]: 2025-10-11 08:39:37.801419183 +0000 UTC m=+0.097213154 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 08:39:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:39:38 compute-0 ceph-mon[74313]: pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:40 compute-0 ceph-mon[74313]: pgmap v975: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:42 compute-0 ceph-mon[74313]: pgmap v976: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:39:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:44 compute-0 ceph-mon[74313]: pgmap v977: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:44 compute-0 podman[268369]: 2025-10-11 08:39:44.777690303 +0000 UTC m=+0.078469180 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:39:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:46 compute-0 ceph-mon[74313]: pgmap v978: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:47 compute-0 nova_compute[260935]: 2025-10-11 08:39:47.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:39:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:39:48 compute-0 ceph-mon[74313]: pgmap v979: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:48 compute-0 nova_compute[260935]: 2025-10-11 08:39:48.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:39:48 compute-0 nova_compute[260935]: 2025-10-11 08:39:48.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:39:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:49 compute-0 nova_compute[260935]: 2025-10-11 08:39:49.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:39:49 compute-0 nova_compute[260935]: 2025-10-11 08:39:49.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:39:49 compute-0 podman[268389]: 2025-10-11 08:39:49.79410814 +0000 UTC m=+0.093978041 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 11 08:39:50 compute-0 ceph-mon[74313]: pgmap v980: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:51 compute-0 nova_compute[260935]: 2025-10-11 08:39:51.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:39:51 compute-0 nova_compute[260935]: 2025-10-11 08:39:51.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:39:51 compute-0 nova_compute[260935]: 2025-10-11 08:39:51.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:39:51 compute-0 nova_compute[260935]: 2025-10-11 08:39:51.731 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:39:51 compute-0 nova_compute[260935]: 2025-10-11 08:39:51.732 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:39:51 compute-0 nova_compute[260935]: 2025-10-11 08:39:51.732 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:39:51 compute-0 nova_compute[260935]: 2025-10-11 08:39:51.732 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:39:51 compute-0 nova_compute[260935]: 2025-10-11 08:39:51.732 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:39:51 compute-0 podman[268410]: 2025-10-11 08:39:51.899928357 +0000 UTC m=+0.188388405 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:39:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:39:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279787727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:39:52 compute-0 nova_compute[260935]: 2025-10-11 08:39:52.195 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:39:52 compute-0 nova_compute[260935]: 2025-10-11 08:39:52.378 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:39:52 compute-0 nova_compute[260935]: 2025-10-11 08:39:52.380 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5167MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:39:52 compute-0 nova_compute[260935]: 2025-10-11 08:39:52.381 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:39:52 compute-0 nova_compute[260935]: 2025-10-11 08:39:52.381 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:39:52 compute-0 nova_compute[260935]: 2025-10-11 08:39:52.466 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:39:52 compute-0 nova_compute[260935]: 2025-10-11 08:39:52.467 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:39:52 compute-0 nova_compute[260935]: 2025-10-11 08:39:52.493 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:39:52 compute-0 ceph-mon[74313]: pgmap v981: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:52 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4279787727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:39:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:39:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/496415161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:39:52 compute-0 nova_compute[260935]: 2025-10-11 08:39:52.951 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:39:52 compute-0 nova_compute[260935]: 2025-10-11 08:39:52.959 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:39:52 compute-0 nova_compute[260935]: 2025-10-11 08:39:52.979 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:39:52 compute-0 nova_compute[260935]: 2025-10-11 08:39:52.982 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:39:52 compute-0 nova_compute[260935]: 2025-10-11 08:39:52.982 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:39:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:39:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:53 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/496415161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:39:54 compute-0 ceph-mon[74313]: pgmap v982: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:39:54
Oct 11 08:39:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:39:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:39:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'vms', '.mgr', '.rgw.root', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.meta']
Oct 11 08:39:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:39:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:39:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:39:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:39:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:39:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:39:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:39:54 compute-0 nova_compute[260935]: 2025-10-11 08:39:54.983 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:39:54 compute-0 nova_compute[260935]: 2025-10-11 08:39:54.984 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:39:54 compute-0 nova_compute[260935]: 2025-10-11 08:39:54.984 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:39:55 compute-0 nova_compute[260935]: 2025-10-11 08:39:55.004 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 08:39:55 compute-0 nova_compute[260935]: 2025-10-11 08:39:55.004 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:39:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:56 compute-0 ceph-mon[74313]: pgmap v983: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:39:58 compute-0 ceph-mon[74313]: pgmap v984: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:39:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:00 compute-0 ceph-mon[74313]: pgmap v985: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:02 compute-0 ceph-mon[74313]: pgmap v986: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:40:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:40:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:40:04 compute-0 ceph-mon[74313]: pgmap v987: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:04 compute-0 sudo[268479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:40:04 compute-0 sudo[268479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:04 compute-0 sudo[268479]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:05 compute-0 sudo[268504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:40:05 compute-0 sudo[268504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:05 compute-0 sudo[268504]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:05 compute-0 sudo[268530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:40:05 compute-0 sudo[268530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:05 compute-0 sudo[268530]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:05 compute-0 sudo[268555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:40:05 compute-0 sudo[268555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:05 compute-0 sudo[268555]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:40:05 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:40:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:40:05 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:40:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:40:05 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:40:05 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev e33c656d-9512-4357-b752-217993c6cc20 does not exist
Oct 11 08:40:05 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 090414a1-eac5-415d-aa19-8d8956f2c471 does not exist
Oct 11 08:40:05 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 71c6af04-8ce0-4d70-a50e-ad8fe30ec914 does not exist
Oct 11 08:40:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:40:05 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:40:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:40:05 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:40:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:40:05 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:40:05 compute-0 sudo[268613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:40:05 compute-0 sudo[268613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:06 compute-0 sudo[268613]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:06 compute-0 sudo[268638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:40:06 compute-0 sudo[268638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:06 compute-0 sudo[268638]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:06 compute-0 sudo[268663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:40:06 compute-0 sudo[268663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:06 compute-0 sudo[268663]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:06 compute-0 sudo[268688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:40:06 compute-0 sudo[268688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:06 compute-0 podman[268755]: 2025-10-11 08:40:06.694808543 +0000 UTC m=+0.066375263 container create 753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 08:40:06 compute-0 systemd[1]: Started libpod-conmon-753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0.scope.
Oct 11 08:40:06 compute-0 ceph-mon[74313]: pgmap v988: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:40:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:40:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:40:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:40:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:40:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:40:06 compute-0 podman[268755]: 2025-10-11 08:40:06.66591886 +0000 UTC m=+0.037485630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:40:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:40:06 compute-0 podman[268755]: 2025-10-11 08:40:06.801230519 +0000 UTC m=+0.172797269 container init 753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 08:40:06 compute-0 podman[268755]: 2025-10-11 08:40:06.811728298 +0000 UTC m=+0.183295018 container start 753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:40:06 compute-0 podman[268755]: 2025-10-11 08:40:06.815595239 +0000 UTC m=+0.187161959 container attach 753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:40:06 compute-0 flamboyant_thompson[268772]: 167 167
Oct 11 08:40:06 compute-0 systemd[1]: libpod-753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0.scope: Deactivated successfully.
Oct 11 08:40:06 compute-0 podman[268755]: 2025-10-11 08:40:06.822345311 +0000 UTC m=+0.193912021 container died 753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 08:40:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b177b526649d7ede552e12222b260cfa8eacd142d71dc68a99c4e03f679cc18-merged.mount: Deactivated successfully.
Oct 11 08:40:06 compute-0 podman[268755]: 2025-10-11 08:40:06.878111912 +0000 UTC m=+0.249678592 container remove 753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:40:06 compute-0 systemd[1]: libpod-conmon-753f6a67532760f9960dd80f6529e82a9412539eb76139eb94bb9ac728bce8c0.scope: Deactivated successfully.
Oct 11 08:40:07 compute-0 podman[268796]: 2025-10-11 08:40:07.118522049 +0000 UTC m=+0.061042272 container create fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_solomon, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:40:07 compute-0 systemd[1]: Started libpod-conmon-fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd.scope.
Oct 11 08:40:07 compute-0 podman[268796]: 2025-10-11 08:40:07.089289576 +0000 UTC m=+0.031809889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:40:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44067ae6dac312a8e349f941216f48c3770edfbb70420903471c4002621c36c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44067ae6dac312a8e349f941216f48c3770edfbb70420903471c4002621c36c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44067ae6dac312a8e349f941216f48c3770edfbb70420903471c4002621c36c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44067ae6dac312a8e349f941216f48c3770edfbb70420903471c4002621c36c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:40:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44067ae6dac312a8e349f941216f48c3770edfbb70420903471c4002621c36c8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:40:07 compute-0 podman[268796]: 2025-10-11 08:40:07.224283736 +0000 UTC m=+0.166803979 container init fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:40:07 compute-0 podman[268796]: 2025-10-11 08:40:07.23318891 +0000 UTC m=+0.175709173 container start fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_solomon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 08:40:07 compute-0 podman[268796]: 2025-10-11 08:40:07.236780653 +0000 UTC m=+0.179300876 container attach fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_solomon, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:40:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 08:40:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4619 writes, 20K keys, 4619 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4619 writes, 4619 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1308 writes, 5683 keys, 1308 commit groups, 1.0 writes per commit group, ingest: 8.49 MB, 0.01 MB/s
                                           Interval WAL: 1308 writes, 1308 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     71.1      0.31              0.09        11    0.028       0      0       0.0       0.0
                                             L6      1/0    6.66 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    162.4    133.2      0.52              0.30        10    0.052     43K   5259       0.0       0.0
                                            Sum      1/0    6.66 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    102.1    110.1      0.83              0.39        21    0.039     43K   5259       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.4    121.1    121.0      0.29              0.17         8    0.037     18K   2057       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    162.4    133.2      0.52              0.30        10    0.052     43K   5259       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     71.9      0.30              0.09        10    0.030       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.021, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 0.8 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 308.00 MB usage: 6.31 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000142 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(407,5.95 MB,1.9319%) FilterBlock(22,128.17 KB,0.0406389%) IndexBlock(22,237.91 KB,0.0754319%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 08:40:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:40:08 compute-0 tender_solomon[268813]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:40:08 compute-0 tender_solomon[268813]: --> relative data size: 1.0
Oct 11 08:40:08 compute-0 tender_solomon[268813]: --> All data devices are unavailable
Oct 11 08:40:08 compute-0 podman[268796]: 2025-10-11 08:40:08.396119232 +0000 UTC m=+1.338639465 container died fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Oct 11 08:40:08 compute-0 systemd[1]: libpod-fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd.scope: Deactivated successfully.
Oct 11 08:40:08 compute-0 systemd[1]: libpod-fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd.scope: Consumed 1.107s CPU time.
Oct 11 08:40:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-44067ae6dac312a8e349f941216f48c3770edfbb70420903471c4002621c36c8-merged.mount: Deactivated successfully.
Oct 11 08:40:08 compute-0 podman[268796]: 2025-10-11 08:40:08.479023277 +0000 UTC m=+1.421543540 container remove fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 08:40:08 compute-0 systemd[1]: libpod-conmon-fc0d7d2d68b81cd600e821253e7cf6b11334205e2dd1c96b2e8b873a4c1d8afd.scope: Deactivated successfully.
Oct 11 08:40:08 compute-0 sudo[268688]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:08 compute-0 podman[268843]: 2025-10-11 08:40:08.533169051 +0000 UTC m=+0.096020250 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 11 08:40:08 compute-0 sudo[268875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:40:08 compute-0 sudo[268875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:08 compute-0 sudo[268875]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:08 compute-0 sudo[268900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:40:08 compute-0 sudo[268900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:08 compute-0 sudo[268900]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:08 compute-0 ceph-mon[74313]: pgmap v989: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.783124) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172008783163, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1477, "num_deletes": 251, "total_data_size": 2307185, "memory_usage": 2349344, "flush_reason": "Manual Compaction"}
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172008802965, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2274062, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19460, "largest_seqno": 20936, "table_properties": {"data_size": 2267158, "index_size": 3975, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14236, "raw_average_key_size": 19, "raw_value_size": 2253376, "raw_average_value_size": 3138, "num_data_blocks": 181, "num_entries": 718, "num_filter_entries": 718, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760171853, "oldest_key_time": 1760171853, "file_creation_time": 1760172008, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 20188 microseconds, and 9651 cpu microseconds.
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:40:08 compute-0 sudo[268925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.803281) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2274062 bytes OK
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.803409) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.805523) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.805606) EVENT_LOG_v1 {"time_micros": 1760172008805598, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.805633) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2300699, prev total WAL file size 2300699, number of live WAL files 2.
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:40:08 compute-0 sudo[268925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:08 compute-0 sudo[268925]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.806938) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2220KB)], [47(6823KB)]
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172008806977, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9260883, "oldest_snapshot_seqno": -1}
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4314 keys, 7493713 bytes, temperature: kUnknown
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172008869327, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7493713, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7463986, "index_size": 17841, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 106672, "raw_average_key_size": 24, "raw_value_size": 7384974, "raw_average_value_size": 1711, "num_data_blocks": 748, "num_entries": 4314, "num_filter_entries": 4314, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760172008, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.869620) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7493713 bytes
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.871187) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.3 rd, 120.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 6.7 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(7.4) write-amplify(3.3) OK, records in: 4828, records dropped: 514 output_compression: NoCompression
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.871214) EVENT_LOG_v1 {"time_micros": 1760172008871202, "job": 24, "event": "compaction_finished", "compaction_time_micros": 62440, "compaction_time_cpu_micros": 31252, "output_level": 6, "num_output_files": 1, "total_output_size": 7493713, "num_input_records": 4828, "num_output_records": 4314, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172008872311, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172008875465, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.806857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.875515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.875519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.875521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.875523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:40:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:40:08.875524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:40:08 compute-0 sudo[268950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:40:08 compute-0 sudo[268950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:09 compute-0 podman[269019]: 2025-10-11 08:40:09.38398262 +0000 UTC m=+0.068735782 container create 00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_noether, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 08:40:09 compute-0 systemd[1]: Started libpod-conmon-00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e.scope.
Oct 11 08:40:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:40:09 compute-0 podman[269019]: 2025-10-11 08:40:09.358249806 +0000 UTC m=+0.043003008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:40:09 compute-0 podman[269019]: 2025-10-11 08:40:09.463971821 +0000 UTC m=+0.148724993 container init 00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_noether, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 08:40:09 compute-0 podman[269019]: 2025-10-11 08:40:09.476096637 +0000 UTC m=+0.160849769 container start 00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:40:09 compute-0 priceless_noether[269035]: 167 167
Oct 11 08:40:09 compute-0 systemd[1]: libpod-00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e.scope: Deactivated successfully.
Oct 11 08:40:09 compute-0 podman[269019]: 2025-10-11 08:40:09.484330442 +0000 UTC m=+0.169083584 container attach 00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_noether, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:40:09 compute-0 podman[269019]: 2025-10-11 08:40:09.484653271 +0000 UTC m=+0.169406403 container died 00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_noether, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 08:40:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-b45cb8eb5a1e3f85b75be311710a5a41c0353bfccdb0207e1ca22ec836707d5e-merged.mount: Deactivated successfully.
Oct 11 08:40:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:09 compute-0 podman[269019]: 2025-10-11 08:40:09.523202371 +0000 UTC m=+0.207955493 container remove 00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:40:09 compute-0 systemd[1]: libpod-conmon-00d468885dc8ec0172c4561528dca840208d71c0f67d03d119036c6b7a3c4a5e.scope: Deactivated successfully.
Oct 11 08:40:09 compute-0 podman[269059]: 2025-10-11 08:40:09.768011234 +0000 UTC m=+0.057545163 container create ac74977a4368a5da11d12dcd630872a12b8af1371b56582236c67fa563a8cbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_jemison, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:40:09 compute-0 systemd[1]: Started libpod-conmon-ac74977a4368a5da11d12dcd630872a12b8af1371b56582236c67fa563a8cbee.scope.
Oct 11 08:40:09 compute-0 podman[269059]: 2025-10-11 08:40:09.748488027 +0000 UTC m=+0.038021996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:40:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f7cd7020bd155e13b4770dd10054bc06bb0fc45346a238d1f7077fc8507ad4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f7cd7020bd155e13b4770dd10054bc06bb0fc45346a238d1f7077fc8507ad4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f7cd7020bd155e13b4770dd10054bc06bb0fc45346a238d1f7077fc8507ad4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:40:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f7cd7020bd155e13b4770dd10054bc06bb0fc45346a238d1f7077fc8507ad4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:40:09 compute-0 podman[269059]: 2025-10-11 08:40:09.891793275 +0000 UTC m=+0.181327204 container init ac74977a4368a5da11d12dcd630872a12b8af1371b56582236c67fa563a8cbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 08:40:09 compute-0 podman[269059]: 2025-10-11 08:40:09.899159495 +0000 UTC m=+0.188693424 container start ac74977a4368a5da11d12dcd630872a12b8af1371b56582236c67fa563a8cbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_jemison, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 08:40:09 compute-0 podman[269059]: 2025-10-11 08:40:09.902977564 +0000 UTC m=+0.192511493 container attach ac74977a4368a5da11d12dcd630872a12b8af1371b56582236c67fa563a8cbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:40:10 compute-0 jovial_jemison[269076]: {
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:     "0": [
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:         {
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "devices": [
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "/dev/loop3"
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             ],
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "lv_name": "ceph_lv0",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "lv_size": "21470642176",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "name": "ceph_lv0",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "tags": {
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.cluster_name": "ceph",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.crush_device_class": "",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.encrypted": "0",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.osd_id": "0",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.type": "block",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.vdo": "0"
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             },
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "type": "block",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "vg_name": "ceph_vg0"
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:         }
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:     ],
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:     "1": [
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:         {
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "devices": [
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "/dev/loop4"
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             ],
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "lv_name": "ceph_lv1",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "lv_size": "21470642176",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "name": "ceph_lv1",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "tags": {
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.cluster_name": "ceph",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.crush_device_class": "",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.encrypted": "0",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.osd_id": "1",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.type": "block",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.vdo": "0"
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             },
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "type": "block",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "vg_name": "ceph_vg1"
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:         }
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:     ],
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:     "2": [
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:         {
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "devices": [
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "/dev/loop5"
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             ],
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "lv_name": "ceph_lv2",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "lv_size": "21470642176",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "name": "ceph_lv2",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "tags": {
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.cluster_name": "ceph",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.crush_device_class": "",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.encrypted": "0",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.osd_id": "2",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.type": "block",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:                 "ceph.vdo": "0"
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             },
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "type": "block",
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:             "vg_name": "ceph_vg2"
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:         }
Oct 11 08:40:10 compute-0 jovial_jemison[269076]:     ]
Oct 11 08:40:10 compute-0 jovial_jemison[269076]: }
Oct 11 08:40:10 compute-0 systemd[1]: libpod-ac74977a4368a5da11d12dcd630872a12b8af1371b56582236c67fa563a8cbee.scope: Deactivated successfully.
Oct 11 08:40:10 compute-0 podman[269059]: 2025-10-11 08:40:10.710872107 +0000 UTC m=+1.000406076 container died ac74977a4368a5da11d12dcd630872a12b8af1371b56582236c67fa563a8cbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 08:40:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2f7cd7020bd155e13b4770dd10054bc06bb0fc45346a238d1f7077fc8507ad4-merged.mount: Deactivated successfully.
Oct 11 08:40:10 compute-0 ceph-mon[74313]: pgmap v990: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:10 compute-0 podman[269059]: 2025-10-11 08:40:10.829634295 +0000 UTC m=+1.119168234 container remove ac74977a4368a5da11d12dcd630872a12b8af1371b56582236c67fa563a8cbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 08:40:10 compute-0 systemd[1]: libpod-conmon-ac74977a4368a5da11d12dcd630872a12b8af1371b56582236c67fa563a8cbee.scope: Deactivated successfully.
Oct 11 08:40:10 compute-0 sudo[268950]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:10 compute-0 sudo[269096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:40:10 compute-0 rsyslogd[1003]: imjournal from <np0005481065:jovial_jemison>: begin to drop messages due to rate-limiting
Oct 11 08:40:10 compute-0 sudo[269096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:10 compute-0 sudo[269096]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:11 compute-0 sudo[269121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:40:11 compute-0 sudo[269121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:11 compute-0 sudo[269121]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:11 compute-0 sudo[269146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:40:11 compute-0 sudo[269146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:11 compute-0 sudo[269146]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:11 compute-0 sudo[269171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:40:11 compute-0 sudo[269171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:11 compute-0 podman[269239]: 2025-10-11 08:40:11.778203892 +0000 UTC m=+0.069623067 container create 648cab2127667e208d34317c9a1955d137224226479c66a1466ef15e80ff04b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_yalow, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 08:40:11 compute-0 systemd[1]: Started libpod-conmon-648cab2127667e208d34317c9a1955d137224226479c66a1466ef15e80ff04b2.scope.
Oct 11 08:40:11 compute-0 podman[269239]: 2025-10-11 08:40:11.751170621 +0000 UTC m=+0.042589776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:40:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:40:11 compute-0 podman[269239]: 2025-10-11 08:40:11.881882689 +0000 UTC m=+0.173301924 container init 648cab2127667e208d34317c9a1955d137224226479c66a1466ef15e80ff04b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 08:40:11 compute-0 podman[269239]: 2025-10-11 08:40:11.893670295 +0000 UTC m=+0.185089470 container start 648cab2127667e208d34317c9a1955d137224226479c66a1466ef15e80ff04b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 08:40:11 compute-0 podman[269239]: 2025-10-11 08:40:11.897724601 +0000 UTC m=+0.189143756 container attach 648cab2127667e208d34317c9a1955d137224226479c66a1466ef15e80ff04b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:40:11 compute-0 adoring_yalow[269255]: 167 167
Oct 11 08:40:11 compute-0 systemd[1]: libpod-648cab2127667e208d34317c9a1955d137224226479c66a1466ef15e80ff04b2.scope: Deactivated successfully.
Oct 11 08:40:11 compute-0 podman[269239]: 2025-10-11 08:40:11.903951879 +0000 UTC m=+0.195371064 container died 648cab2127667e208d34317c9a1955d137224226479c66a1466ef15e80ff04b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_yalow, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:40:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f63f658a6de03dd01884419fab409e2e5c156f1e42f074be2afa5eea03b7db66-merged.mount: Deactivated successfully.
Oct 11 08:40:11 compute-0 podman[269239]: 2025-10-11 08:40:11.952764261 +0000 UTC m=+0.244183396 container remove 648cab2127667e208d34317c9a1955d137224226479c66a1466ef15e80ff04b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_yalow, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:40:11 compute-0 systemd[1]: libpod-conmon-648cab2127667e208d34317c9a1955d137224226479c66a1466ef15e80ff04b2.scope: Deactivated successfully.
Oct 11 08:40:12 compute-0 podman[269281]: 2025-10-11 08:40:12.158553771 +0000 UTC m=+0.058678695 container create 620af5c725d5826c52810985f641dbbd726f32f35f24ae279db137b99e328a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 08:40:12 compute-0 systemd[1]: Started libpod-conmon-620af5c725d5826c52810985f641dbbd726f32f35f24ae279db137b99e328a0c.scope.
Oct 11 08:40:12 compute-0 podman[269281]: 2025-10-11 08:40:12.141599847 +0000 UTC m=+0.041724791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:40:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0fb2386277abc26df70436762e63bc549cb116545fcc1e40a9de089ec614f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0fb2386277abc26df70436762e63bc549cb116545fcc1e40a9de089ec614f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0fb2386277abc26df70436762e63bc549cb116545fcc1e40a9de089ec614f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0fb2386277abc26df70436762e63bc549cb116545fcc1e40a9de089ec614f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:40:12 compute-0 podman[269281]: 2025-10-11 08:40:12.284298908 +0000 UTC m=+0.184423932 container init 620af5c725d5826c52810985f641dbbd726f32f35f24ae279db137b99e328a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:40:12 compute-0 podman[269281]: 2025-10-11 08:40:12.298772341 +0000 UTC m=+0.198897295 container start 620af5c725d5826c52810985f641dbbd726f32f35f24ae279db137b99e328a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 08:40:12 compute-0 podman[269281]: 2025-10-11 08:40:12.303176076 +0000 UTC m=+0.203301060 container attach 620af5c725d5826c52810985f641dbbd726f32f35f24ae279db137b99e328a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:40:12 compute-0 ceph-mon[74313]: pgmap v991: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]: {
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:         "osd_id": 2,
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:         "type": "bluestore"
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:     },
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:         "osd_id": 0,
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:         "type": "bluestore"
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:     },
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:         "osd_id": 1,
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:         "type": "bluestore"
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]:     }
Oct 11 08:40:13 compute-0 reverent_wescoff[269298]: }
Oct 11 08:40:13 compute-0 systemd[1]: libpod-620af5c725d5826c52810985f641dbbd726f32f35f24ae279db137b99e328a0c.scope: Deactivated successfully.
Oct 11 08:40:13 compute-0 systemd[1]: libpod-620af5c725d5826c52810985f641dbbd726f32f35f24ae279db137b99e328a0c.scope: Consumed 1.151s CPU time.
Oct 11 08:40:13 compute-0 podman[269281]: 2025-10-11 08:40:13.434794325 +0000 UTC m=+1.334919289 container died 620af5c725d5826c52810985f641dbbd726f32f35f24ae279db137b99e328a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:40:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b0fb2386277abc26df70436762e63bc549cb116545fcc1e40a9de089ec614f7-merged.mount: Deactivated successfully.
Oct 11 08:40:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:13 compute-0 podman[269281]: 2025-10-11 08:40:13.516780803 +0000 UTC m=+1.416905757 container remove 620af5c725d5826c52810985f641dbbd726f32f35f24ae279db137b99e328a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:40:13 compute-0 systemd[1]: libpod-conmon-620af5c725d5826c52810985f641dbbd726f32f35f24ae279db137b99e328a0c.scope: Deactivated successfully.
Oct 11 08:40:13 compute-0 sudo[269171]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:40:13 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:40:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:40:13 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:40:13 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 2b896a50-fe7e-4887-af09-4640fc22c382 does not exist
Oct 11 08:40:13 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev bcc2eec7-a811-4d22-8867-35c292e015a1 does not exist
Oct 11 08:40:13 compute-0 sudo[269344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:40:13 compute-0 sudo[269344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:13 compute-0 sudo[269344]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:13 compute-0 sudo[269369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:40:13 compute-0 sudo[269369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:40:13 compute-0 sudo[269369]: pam_unix(sudo:session): session closed for user root
Oct 11 08:40:14 compute-0 ceph-mon[74313]: pgmap v992: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:40:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:40:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:40:15.170 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:40:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:40:15.172 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:40:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:40:15.172 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:40:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:15 compute-0 podman[269394]: 2025-10-11 08:40:15.783727335 +0000 UTC m=+0.085805568 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 08:40:16 compute-0 ceph-mon[74313]: pgmap v993: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:40:18 compute-0 ceph-mon[74313]: pgmap v994: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:20 compute-0 ceph-mon[74313]: pgmap v995: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:20 compute-0 podman[269415]: 2025-10-11 08:40:20.794495043 +0000 UTC m=+0.089154984 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:40:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:22 compute-0 ceph-mon[74313]: pgmap v996: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:22 compute-0 podman[269436]: 2025-10-11 08:40:22.832233647 +0000 UTC m=+0.130320268 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:40:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:40:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:24 compute-0 ceph-mon[74313]: pgmap v997: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:40:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:40:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:40:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:40:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:40:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:40:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:26 compute-0 ceph-mon[74313]: pgmap v998: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:40:28 compute-0 ceph-mon[74313]: pgmap v999: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:30 compute-0 ceph-mon[74313]: pgmap v1000: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:32 compute-0 ceph-mon[74313]: pgmap v1001: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:40:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:34 compute-0 ceph-mon[74313]: pgmap v1002: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:36 compute-0 ceph-mon[74313]: pgmap v1003: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:40:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3783626774' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:40:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:40:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3783626774' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:40:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3783626774' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:40:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3783626774' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:40:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:40:38 compute-0 ceph-mon[74313]: pgmap v1004: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:38 compute-0 podman[269462]: 2025-10-11 08:40:38.801447093 +0000 UTC m=+0.096423049 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:40:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:40 compute-0 ceph-mon[74313]: pgmap v1005: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:42 compute-0 ceph-mon[74313]: pgmap v1006: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:40:43 compute-0 rsyslogd[1003]: imjournal: 202 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 11 08:40:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:44 compute-0 ceph-mon[74313]: pgmap v1007: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:46 compute-0 ceph-mon[74313]: pgmap v1008: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:46 compute-0 podman[269482]: 2025-10-11 08:40:46.812165595 +0000 UTC m=+0.107554643 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct 11 08:40:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:40:48 compute-0 nova_compute[260935]: 2025-10-11 08:40:48.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:40:48 compute-0 nova_compute[260935]: 2025-10-11 08:40:48.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:40:48 compute-0 ceph-mon[74313]: pgmap v1009: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:49 compute-0 nova_compute[260935]: 2025-10-11 08:40:49.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:40:50 compute-0 nova_compute[260935]: 2025-10-11 08:40:50.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:40:50 compute-0 nova_compute[260935]: 2025-10-11 08:40:50.730 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:40:50 compute-0 nova_compute[260935]: 2025-10-11 08:40:50.732 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:40:50 compute-0 ceph-mon[74313]: pgmap v1010: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:51 compute-0 podman[269502]: 2025-10-11 08:40:51.80012781 +0000 UTC m=+0.096837041 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:40:52 compute-0 nova_compute[260935]: 2025-10-11 08:40:52.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:40:52 compute-0 nova_compute[260935]: 2025-10-11 08:40:52.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:40:52 compute-0 ceph-mon[74313]: pgmap v1011: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:40:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:53 compute-0 nova_compute[260935]: 2025-10-11 08:40:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:40:53 compute-0 nova_compute[260935]: 2025-10-11 08:40:53.745 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:40:53 compute-0 nova_compute[260935]: 2025-10-11 08:40:53.745 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:40:53 compute-0 nova_compute[260935]: 2025-10-11 08:40:53.745 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:40:53 compute-0 nova_compute[260935]: 2025-10-11 08:40:53.746 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:40:53 compute-0 nova_compute[260935]: 2025-10-11 08:40:53.746 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:40:53 compute-0 podman[269522]: 2025-10-11 08:40:53.818203386 +0000 UTC m=+0.115844837 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 08:40:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:40:54 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446437953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:40:54 compute-0 nova_compute[260935]: 2025-10-11 08:40:54.203 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:40:54 compute-0 nova_compute[260935]: 2025-10-11 08:40:54.422 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:40:54 compute-0 nova_compute[260935]: 2025-10-11 08:40:54.425 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5157MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:40:54 compute-0 nova_compute[260935]: 2025-10-11 08:40:54.425 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:40:54 compute-0 nova_compute[260935]: 2025-10-11 08:40:54.426 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:40:54 compute-0 nova_compute[260935]: 2025-10-11 08:40:54.510 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:40:54 compute-0 nova_compute[260935]: 2025-10-11 08:40:54.510 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:40:54 compute-0 nova_compute[260935]: 2025-10-11 08:40:54.528 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:40:54 compute-0 ceph-mon[74313]: pgmap v1012: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:54 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1446437953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:40:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:40:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:40:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:40:54
Oct 11 08:40:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:40:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:40:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.control', '.rgw.root']
Oct 11 08:40:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:40:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:40:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:40:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:40:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:40:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:40:54 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4144864316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:40:54 compute-0 nova_compute[260935]: 2025-10-11 08:40:54.988 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:40:54 compute-0 nova_compute[260935]: 2025-10-11 08:40:54.996 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:40:55 compute-0 nova_compute[260935]: 2025-10-11 08:40:55.027 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:40:55 compute-0 nova_compute[260935]: 2025-10-11 08:40:55.029 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:40:55 compute-0 nova_compute[260935]: 2025-10-11 08:40:55.030 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:40:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:55 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4144864316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:40:56 compute-0 nova_compute[260935]: 2025-10-11 08:40:56.033 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:40:56 compute-0 nova_compute[260935]: 2025-10-11 08:40:56.033 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:40:56 compute-0 nova_compute[260935]: 2025-10-11 08:40:56.034 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:40:56 compute-0 nova_compute[260935]: 2025-10-11 08:40:56.050 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 08:40:56 compute-0 nova_compute[260935]: 2025-10-11 08:40:56.051 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:40:56 compute-0 ceph-mon[74313]: pgmap v1013: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:40:58 compute-0 ceph-mon[74313]: pgmap v1014: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:40:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:41:00 compute-0 ceph-mon[74313]: pgmap v1015: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:41:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:41:02 compute-0 ceph-mon[74313]: pgmap v1016: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:41:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:41:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:41:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:41:04 compute-0 ceph-mon[74313]: pgmap v1017: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:41:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:41:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct 11 08:41:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct 11 08:41:05 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct 11 08:41:06 compute-0 sshd-session[269592]: Invalid user vhpadmin from 152.32.213.170 port 53898
Oct 11 08:41:06 compute-0 sshd-session[269592]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 08:41:06 compute-0 sshd-session[269592]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 08:41:06 compute-0 ceph-mon[74313]: pgmap v1018: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:41:06 compute-0 ceph-mon[74313]: osdmap e123: 3 total, 3 up, 3 in
Oct 11 08:41:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct 11 08:41:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct 11 08:41:07 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct 11 08:41:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s wr, 0 op/s
Oct 11 08:41:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:41:08 compute-0 ceph-mon[74313]: osdmap e124: 3 total, 3 up, 3 in
Oct 11 08:41:08 compute-0 sshd-session[269592]: Failed password for invalid user vhpadmin from 152.32.213.170 port 53898 ssh2
Oct 11 08:41:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct 11 08:41:09 compute-0 ceph-mon[74313]: pgmap v1021: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s wr, 0 op/s
Oct 11 08:41:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct 11 08:41:09 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct 11 08:41:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 852 B/s wr, 2 op/s
Oct 11 08:41:09 compute-0 podman[269595]: 2025-10-11 08:41:09.803197437 +0000 UTC m=+0.093543658 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 08:41:10 compute-0 ceph-mon[74313]: osdmap e125: 3 total, 3 up, 3 in
Oct 11 08:41:10 compute-0 sshd-session[269592]: Received disconnect from 152.32.213.170 port 53898:11: Bye Bye [preauth]
Oct 11 08:41:10 compute-0 sshd-session[269592]: Disconnected from invalid user vhpadmin 152.32.213.170 port 53898 [preauth]
Oct 11 08:41:11 compute-0 ceph-mon[74313]: pgmap v1023: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 852 B/s wr, 2 op/s
Oct 11 08:41:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 853 B/s wr, 2 op/s
Oct 11 08:41:12 compute-0 ceph-mon[74313]: pgmap v1024: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 853 B/s wr, 2 op/s
Oct 11 08:41:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:41:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.4 MiB/s wr, 40 op/s
Oct 11 08:41:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct 11 08:41:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct 11 08:41:13 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct 11 08:41:13 compute-0 sudo[269614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:41:13 compute-0 sudo[269614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:13 compute-0 sudo[269614]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:14 compute-0 sudo[269639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:41:14 compute-0 sudo[269639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:14 compute-0 sudo[269639]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:14 compute-0 sudo[269664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:41:14 compute-0 sudo[269664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:14 compute-0 sudo[269664]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:14 compute-0 sudo[269689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:41:14 compute-0 sudo[269689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:14 compute-0 ceph-mon[74313]: pgmap v1025: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.4 MiB/s wr, 40 op/s
Oct 11 08:41:14 compute-0 ceph-mon[74313]: osdmap e126: 3 total, 3 up, 3 in
Oct 11 08:41:14 compute-0 sudo[269689]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:14 compute-0 sudo[269746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:41:14 compute-0 sudo[269746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:15 compute-0 sudo[269746]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:15 compute-0 sudo[269771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:41:15 compute-0 sudo[269771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:15 compute-0 sudo[269771]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:41:15.172 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:41:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:41:15.173 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:41:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:41:15.173 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:41:15 compute-0 sudo[269796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:41:15 compute-0 sudo[269796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:15 compute-0 sudo[269796]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:15 compute-0 sudo[269821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 11 08:41:15 compute-0 sudo[269821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:15 compute-0 sudo[269821]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:41:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.1 MiB/s wr, 38 op/s
Oct 11 08:41:15 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:41:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:41:15 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:41:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:41:15 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:41:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:41:15 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:41:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:41:16 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:41:16 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 001e3c2a-de09-4d8a-919d-b765d9585dc8 does not exist
Oct 11 08:41:16 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a9227aca-8918-43eb-a9f3-14c58eee229e does not exist
Oct 11 08:41:16 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 5c8436ca-bac0-475b-bf7b-c86dceda9969 does not exist
Oct 11 08:41:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:41:16 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:41:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:41:16 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:41:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:41:16 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:41:16 compute-0 sudo[269864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:41:16 compute-0 sudo[269864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:16 compute-0 sudo[269864]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:16 compute-0 sudo[269889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:41:16 compute-0 sudo[269889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:16 compute-0 sudo[269889]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:16 compute-0 sudo[269914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:41:16 compute-0 sudo[269914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:16 compute-0 sudo[269914]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:16 compute-0 sudo[269939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:41:16 compute-0 sudo[269939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:16 compute-0 ceph-mon[74313]: pgmap v1027: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.1 MiB/s wr, 38 op/s
Oct 11 08:41:16 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:41:16 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:41:16 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:41:16 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:41:16 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:41:16 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:41:16 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:41:16 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:41:17 compute-0 podman[270005]: 2025-10-11 08:41:17.137606624 +0000 UTC m=+0.054502648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:41:17 compute-0 podman[270005]: 2025-10-11 08:41:17.359111 +0000 UTC m=+0.276006973 container create ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:41:17 compute-0 systemd[1]: Started libpod-conmon-ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f.scope.
Oct 11 08:41:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:41:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 4.9 MiB/s wr, 44 op/s
Oct 11 08:41:17 compute-0 podman[270005]: 2025-10-11 08:41:17.636623854 +0000 UTC m=+0.553519877 container init ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:41:17 compute-0 podman[270019]: 2025-10-11 08:41:17.637141638 +0000 UTC m=+0.221627619 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 08:41:17 compute-0 podman[270005]: 2025-10-11 08:41:17.648665423 +0000 UTC m=+0.565561396 container start ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:41:17 compute-0 systemd[1]: libpod-ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f.scope: Deactivated successfully.
Oct 11 08:41:17 compute-0 blissful_yonath[270033]: 167 167
Oct 11 08:41:17 compute-0 conmon[270033]: conmon ed7ff5053f372c6140d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f.scope/container/memory.events
Oct 11 08:41:17 compute-0 podman[270005]: 2025-10-11 08:41:17.791872891 +0000 UTC m=+0.708768914 container attach ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 08:41:17 compute-0 podman[270005]: 2025-10-11 08:41:17.79326328 +0000 UTC m=+0.710159263 container died ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 08:41:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:41:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct 11 08:41:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct 11 08:41:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4af2354a333cbcebbc7ce4c77f19e9ea1622bbc65a69c45f7cb3565256e2d297-merged.mount: Deactivated successfully.
Oct 11 08:41:18 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct 11 08:41:18 compute-0 podman[270005]: 2025-10-11 08:41:18.71835603 +0000 UTC m=+1.635252013 container remove ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:41:18 compute-0 systemd[1]: libpod-conmon-ed7ff5053f372c6140d16725fc4c6734455959a8b90c9720d95f1ceb87290b9f.scope: Deactivated successfully.
Oct 11 08:41:18 compute-0 podman[270067]: 2025-10-11 08:41:18.996460531 +0000 UTC m=+0.088076894 container create 0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 08:41:19 compute-0 podman[270067]: 2025-10-11 08:41:18.95278494 +0000 UTC m=+0.044401303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:41:19 compute-0 systemd[1]: Started libpod-conmon-0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b.scope.
Oct 11 08:41:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:41:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e91f05215e9771b201de158832cfe3395b9d3fe40d40b33957b0f0fa511351/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:41:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e91f05215e9771b201de158832cfe3395b9d3fe40d40b33957b0f0fa511351/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:41:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e91f05215e9771b201de158832cfe3395b9d3fe40d40b33957b0f0fa511351/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:41:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e91f05215e9771b201de158832cfe3395b9d3fe40d40b33957b0f0fa511351/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:41:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e91f05215e9771b201de158832cfe3395b9d3fe40d40b33957b0f0fa511351/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:41:19 compute-0 podman[270067]: 2025-10-11 08:41:19.133605807 +0000 UTC m=+0.225222190 container init 0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 08:41:19 compute-0 podman[270067]: 2025-10-11 08:41:19.148333203 +0000 UTC m=+0.239949556 container start 0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:41:19 compute-0 ceph-mon[74313]: pgmap v1028: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 4.9 MiB/s wr, 44 op/s
Oct 11 08:41:19 compute-0 ceph-mon[74313]: osdmap e127: 3 total, 3 up, 3 in
Oct 11 08:41:19 compute-0 podman[270067]: 2025-10-11 08:41:19.161491444 +0000 UTC m=+0.253107807 container attach 0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:41:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 45 op/s
Oct 11 08:41:20 compute-0 wonderful_bassi[270084]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:41:20 compute-0 wonderful_bassi[270084]: --> relative data size: 1.0
Oct 11 08:41:20 compute-0 wonderful_bassi[270084]: --> All data devices are unavailable
Oct 11 08:41:20 compute-0 systemd[1]: libpod-0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b.scope: Deactivated successfully.
Oct 11 08:41:20 compute-0 systemd[1]: libpod-0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b.scope: Consumed 1.149s CPU time.
Oct 11 08:41:20 compute-0 podman[270067]: 2025-10-11 08:41:20.34888538 +0000 UTC m=+1.440501743 container died 0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 08:41:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2e91f05215e9771b201de158832cfe3395b9d3fe40d40b33957b0f0fa511351-merged.mount: Deactivated successfully.
Oct 11 08:41:20 compute-0 podman[270067]: 2025-10-11 08:41:20.429891004 +0000 UTC m=+1.521507317 container remove 0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 08:41:20 compute-0 systemd[1]: libpod-conmon-0ff36fe84742cc87c685200d67601d123b8ebe8ab0124167068ce4d99fd9ea9b.scope: Deactivated successfully.
Oct 11 08:41:20 compute-0 sudo[269939]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:20 compute-0 sudo[270124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:41:20 compute-0 sudo[270124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:20 compute-0 sudo[270124]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:20 compute-0 sudo[270149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:41:20 compute-0 sudo[270149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:20 compute-0 sudo[270149]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:20 compute-0 sudo[270174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:41:20 compute-0 sudo[270174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:20 compute-0 sudo[270174]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:20 compute-0 sudo[270199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:41:20 compute-0 sudo[270199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:21 compute-0 ceph-mon[74313]: pgmap v1030: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 45 op/s
Oct 11 08:41:21 compute-0 podman[270265]: 2025-10-11 08:41:21.294247923 +0000 UTC m=+0.071879808 container create 45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chandrasekhar, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:41:21 compute-0 systemd[1]: Started libpod-conmon-45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7.scope.
Oct 11 08:41:21 compute-0 podman[270265]: 2025-10-11 08:41:21.267764496 +0000 UTC m=+0.045396411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:41:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:41:21 compute-0 podman[270265]: 2025-10-11 08:41:21.399107599 +0000 UTC m=+0.176739504 container init 45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:41:21 compute-0 podman[270265]: 2025-10-11 08:41:21.409356638 +0000 UTC m=+0.186988513 container start 45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:41:21 compute-0 podman[270265]: 2025-10-11 08:41:21.413450883 +0000 UTC m=+0.191082818 container attach 45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:41:21 compute-0 determined_chandrasekhar[270281]: 167 167
Oct 11 08:41:21 compute-0 systemd[1]: libpod-45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7.scope: Deactivated successfully.
Oct 11 08:41:21 compute-0 podman[270265]: 2025-10-11 08:41:21.417567899 +0000 UTC m=+0.195199774 container died 45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chandrasekhar, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:41:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d74d173e91f76652c6874ff22f0845afe275dacd65034e3ef5c208bcaf13564e-merged.mount: Deactivated successfully.
Oct 11 08:41:21 compute-0 podman[270265]: 2025-10-11 08:41:21.469463452 +0000 UTC m=+0.247095327 container remove 45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:41:21 compute-0 systemd[1]: libpod-conmon-45488e29878398e529a3d7fe7305b37cdb946edeb48d3850fd3af85f96351da7.scope: Deactivated successfully.
Oct 11 08:41:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 891 B/s wr, 8 op/s
Oct 11 08:41:21 compute-0 podman[270304]: 2025-10-11 08:41:21.705368493 +0000 UTC m=+0.066862896 container create 169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 08:41:21 compute-0 systemd[1]: Started libpod-conmon-169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40.scope.
Oct 11 08:41:21 compute-0 podman[270304]: 2025-10-11 08:41:21.678172497 +0000 UTC m=+0.039666960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:41:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9f5f9299f4f7915b1666c86c6ccd3c44c0b3be9808cdd2038ba687bdaf69504/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9f5f9299f4f7915b1666c86c6ccd3c44c0b3be9808cdd2038ba687bdaf69504/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9f5f9299f4f7915b1666c86c6ccd3c44c0b3be9808cdd2038ba687bdaf69504/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9f5f9299f4f7915b1666c86c6ccd3c44c0b3be9808cdd2038ba687bdaf69504/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:41:21 compute-0 podman[270304]: 2025-10-11 08:41:21.829372269 +0000 UTC m=+0.190866802 container init 169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 08:41:21 compute-0 podman[270304]: 2025-10-11 08:41:21.839897306 +0000 UTC m=+0.201391709 container start 169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 08:41:21 compute-0 podman[270304]: 2025-10-11 08:41:21.843407944 +0000 UTC m=+0.204902347 container attach 169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:41:22 compute-0 gallant_wing[270321]: {
Oct 11 08:41:22 compute-0 gallant_wing[270321]:     "0": [
Oct 11 08:41:22 compute-0 gallant_wing[270321]:         {
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "devices": [
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "/dev/loop3"
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             ],
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "lv_name": "ceph_lv0",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "lv_size": "21470642176",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "name": "ceph_lv0",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "tags": {
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.cluster_name": "ceph",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.crush_device_class": "",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.encrypted": "0",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.osd_id": "0",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.type": "block",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.vdo": "0"
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             },
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "type": "block",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "vg_name": "ceph_vg0"
Oct 11 08:41:22 compute-0 gallant_wing[270321]:         }
Oct 11 08:41:22 compute-0 gallant_wing[270321]:     ],
Oct 11 08:41:22 compute-0 gallant_wing[270321]:     "1": [
Oct 11 08:41:22 compute-0 gallant_wing[270321]:         {
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "devices": [
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "/dev/loop4"
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             ],
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "lv_name": "ceph_lv1",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "lv_size": "21470642176",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "name": "ceph_lv1",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "tags": {
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.cluster_name": "ceph",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.crush_device_class": "",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.encrypted": "0",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.osd_id": "1",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.type": "block",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.vdo": "0"
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             },
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "type": "block",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "vg_name": "ceph_vg1"
Oct 11 08:41:22 compute-0 gallant_wing[270321]:         }
Oct 11 08:41:22 compute-0 gallant_wing[270321]:     ],
Oct 11 08:41:22 compute-0 gallant_wing[270321]:     "2": [
Oct 11 08:41:22 compute-0 gallant_wing[270321]:         {
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "devices": [
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "/dev/loop5"
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             ],
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "lv_name": "ceph_lv2",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "lv_size": "21470642176",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "name": "ceph_lv2",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "tags": {
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.cluster_name": "ceph",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.crush_device_class": "",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.encrypted": "0",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.osd_id": "2",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.type": "block",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:                 "ceph.vdo": "0"
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             },
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "type": "block",
Oct 11 08:41:22 compute-0 gallant_wing[270321]:             "vg_name": "ceph_vg2"
Oct 11 08:41:22 compute-0 gallant_wing[270321]:         }
Oct 11 08:41:22 compute-0 gallant_wing[270321]:     ]
Oct 11 08:41:22 compute-0 gallant_wing[270321]: }
Oct 11 08:41:22 compute-0 systemd[1]: libpod-169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40.scope: Deactivated successfully.
Oct 11 08:41:22 compute-0 podman[270304]: 2025-10-11 08:41:22.694521469 +0000 UTC m=+1.056015882 container died 169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 08:41:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9f5f9299f4f7915b1666c86c6ccd3c44c0b3be9808cdd2038ba687bdaf69504-merged.mount: Deactivated successfully.
Oct 11 08:41:22 compute-0 podman[270304]: 2025-10-11 08:41:22.772124837 +0000 UTC m=+1.133619230 container remove 169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 08:41:22 compute-0 podman[270330]: 2025-10-11 08:41:22.778790405 +0000 UTC m=+0.081986742 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:41:22 compute-0 systemd[1]: libpod-conmon-169e9574fec2244d96fa37691161712b19b86998b5df9afeebc811ac6779dc40.scope: Deactivated successfully.
Oct 11 08:41:22 compute-0 sudo[270199]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:22 compute-0 sudo[270365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:41:22 compute-0 sudo[270365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:22 compute-0 sudo[270365]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:22 compute-0 sudo[270390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:41:22 compute-0 sudo[270390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:23 compute-0 sudo[270390]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:41:23 compute-0 sudo[270415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:41:23 compute-0 sudo[270415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:23 compute-0 sudo[270415]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:23 compute-0 sudo[270440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:41:23 compute-0 sudo[270440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:23 compute-0 ceph-mon[74313]: pgmap v1031: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 891 B/s wr, 8 op/s
Oct 11 08:41:23 compute-0 podman[270505]: 2025-10-11 08:41:23.532985188 +0000 UTC m=+0.050512055 container create aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:41:23 compute-0 systemd[1]: Started libpod-conmon-aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8.scope.
Oct 11 08:41:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 723 B/s wr, 7 op/s
Oct 11 08:41:23 compute-0 podman[270505]: 2025-10-11 08:41:23.512547482 +0000 UTC m=+0.030074339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:41:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:41:23 compute-0 podman[270505]: 2025-10-11 08:41:23.629591922 +0000 UTC m=+0.147118849 container init aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sanderson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:41:23 compute-0 podman[270505]: 2025-10-11 08:41:23.636590239 +0000 UTC m=+0.154117096 container start aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sanderson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 08:41:23 compute-0 podman[270505]: 2025-10-11 08:41:23.640718856 +0000 UTC m=+0.158245713 container attach aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sanderson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:41:23 compute-0 intelligent_sanderson[270521]: 167 167
Oct 11 08:41:23 compute-0 systemd[1]: libpod-aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8.scope: Deactivated successfully.
Oct 11 08:41:23 compute-0 podman[270505]: 2025-10-11 08:41:23.648220977 +0000 UTC m=+0.165747844 container died aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sanderson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:41:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-59800077ace1209cff27597bfa56e3730ee513a01a5b441575535d81aa663c40-merged.mount: Deactivated successfully.
Oct 11 08:41:23 compute-0 podman[270505]: 2025-10-11 08:41:23.69017557 +0000 UTC m=+0.207702407 container remove aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:41:23 compute-0 systemd[1]: libpod-conmon-aa8e68ddd0627e3ad3cc846ef687b8c6193d62f697bc8ef05e55124d097d85a8.scope: Deactivated successfully.
Oct 11 08:41:23 compute-0 podman[270546]: 2025-10-11 08:41:23.897029862 +0000 UTC m=+0.047824970 container create f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 08:41:23 compute-0 systemd[1]: Started libpod-conmon-f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b.scope.
Oct 11 08:41:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:41:23 compute-0 podman[270546]: 2025-10-11 08:41:23.877155861 +0000 UTC m=+0.027951019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872bd824bb54f5667c52a1ce05423f656ec2749e0d7a402a00353d8dd5d608cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872bd824bb54f5667c52a1ce05423f656ec2749e0d7a402a00353d8dd5d608cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872bd824bb54f5667c52a1ce05423f656ec2749e0d7a402a00353d8dd5d608cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/872bd824bb54f5667c52a1ce05423f656ec2749e0d7a402a00353d8dd5d608cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:41:23 compute-0 podman[270546]: 2025-10-11 08:41:23.995760315 +0000 UTC m=+0.146555423 container init f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 11 08:41:24 compute-0 podman[270546]: 2025-10-11 08:41:24.009169173 +0000 UTC m=+0.159964291 container start f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_roentgen, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:41:24 compute-0 podman[270546]: 2025-10-11 08:41:24.012720914 +0000 UTC m=+0.163516022 container attach f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 08:41:24 compute-0 podman[270560]: 2025-10-11 08:41:24.06970061 +0000 UTC m=+0.127328441 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:41:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct 11 08:41:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct 11 08:41:24 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct 11 08:41:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:41:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:41:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:41:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:41:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:41:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:41:25 compute-0 festive_roentgen[270568]: {
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:         "osd_id": 2,
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:         "type": "bluestore"
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:     },
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:         "osd_id": 0,
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:         "type": "bluestore"
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:     },
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:         "osd_id": 1,
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:         "type": "bluestore"
Oct 11 08:41:25 compute-0 festive_roentgen[270568]:     }
Oct 11 08:41:25 compute-0 festive_roentgen[270568]: }
Oct 11 08:41:25 compute-0 systemd[1]: libpod-f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b.scope: Deactivated successfully.
Oct 11 08:41:25 compute-0 systemd[1]: libpod-f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b.scope: Consumed 1.178s CPU time.
Oct 11 08:41:25 compute-0 podman[270546]: 2025-10-11 08:41:25.191516437 +0000 UTC m=+1.342311585 container died f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_roentgen, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:41:25 compute-0 ceph-mon[74313]: pgmap v1032: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 723 B/s wr, 7 op/s
Oct 11 08:41:25 compute-0 ceph-mon[74313]: osdmap e128: 3 total, 3 up, 3 in
Oct 11 08:41:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-872bd824bb54f5667c52a1ce05423f656ec2749e0d7a402a00353d8dd5d608cd-merged.mount: Deactivated successfully.
Oct 11 08:41:25 compute-0 podman[270546]: 2025-10-11 08:41:25.279859688 +0000 UTC m=+1.430654826 container remove f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:41:25 compute-0 systemd[1]: libpod-conmon-f3d44929cb0314b9051751d4500006cf5e6bd697346c1c0a786f94f2460cdc1b.scope: Deactivated successfully.
Oct 11 08:41:25 compute-0 sudo[270440]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:41:25 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:41:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:41:25 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:41:25 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 2e789098-cf7e-4f0a-86e9-c7c40b0ab2ce does not exist
Oct 11 08:41:25 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 4f78776a-277d-4daa-9640-d2462f34b5c7 does not exist
Oct 11 08:41:25 compute-0 sudo[270634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:41:25 compute-0 sudo[270634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:25 compute-0 sudo[270634]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:25 compute-0 sudo[270659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:41:25 compute-0 sudo[270659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:41:25 compute-0 sudo[270659]: pam_unix(sudo:session): session closed for user root
Oct 11 08:41:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:41:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:41:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:41:26 compute-0 ceph-mon[74313]: pgmap v1034: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:41:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct 11 08:41:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct 11 08:41:27 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct 11 08:41:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 29 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.0 KiB/s wr, 45 op/s
Oct 11 08:41:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:41:28 compute-0 ceph-mon[74313]: osdmap e129: 3 total, 3 up, 3 in
Oct 11 08:41:28 compute-0 ceph-mon[74313]: pgmap v1036: 321 pgs: 321 active+clean; 29 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.0 KiB/s wr, 45 op/s
Oct 11 08:41:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 13 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.6 KiB/s wr, 54 op/s
Oct 11 08:41:30 compute-0 ceph-mon[74313]: pgmap v1037: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 13 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.6 KiB/s wr, 54 op/s
Oct 11 08:41:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 13 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.6 KiB/s wr, 54 op/s
Oct 11 08:41:32 compute-0 ceph-mon[74313]: pgmap v1038: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 13 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.6 KiB/s wr, 54 op/s
Oct 11 08:41:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:41:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct 11 08:41:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct 11 08:41:33 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct 11 08:41:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 11 08:41:34 compute-0 ceph-mon[74313]: osdmap e130: 3 total, 3 up, 3 in
Oct 11 08:41:35 compute-0 ceph-mon[74313]: pgmap v1040: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 11 08:41:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.3 KiB/s wr, 38 op/s
Oct 11 08:41:37 compute-0 ceph-mon[74313]: pgmap v1041: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.3 KiB/s wr, 38 op/s
Oct 11 08:41:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:41:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1905036941' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:41:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:41:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1905036941' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:41:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Oct 11 08:41:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:41:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1905036941' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:41:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1905036941' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:41:39 compute-0 ceph-mon[74313]: pgmap v1042: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Oct 11 08:41:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 716 B/s wr, 6 op/s
Oct 11 08:41:40 compute-0 podman[270684]: 2025-10-11 08:41:40.786888613 +0000 UTC m=+0.082614441 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 08:41:41 compute-0 ceph-mon[74313]: pgmap v1043: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 716 B/s wr, 6 op/s
Oct 11 08:41:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 716 B/s wr, 6 op/s
Oct 11 08:41:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:41:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct 11 08:41:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct 11 08:41:43 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct 11 08:41:43 compute-0 ceph-mon[74313]: pgmap v1044: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 716 B/s wr, 6 op/s
Oct 11 08:41:43 compute-0 ceph-mon[74313]: osdmap e131: 3 total, 3 up, 3 in
Oct 11 08:41:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 1023 B/s wr, 8 op/s
Oct 11 08:41:45 compute-0 ceph-mon[74313]: pgmap v1046: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 1023 B/s wr, 8 op/s
Oct 11 08:41:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 1023 B/s wr, 8 op/s
Oct 11 08:41:47 compute-0 ceph-mon[74313]: pgmap v1047: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 1023 B/s wr, 8 op/s
Oct 11 08:41:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Oct 11 08:41:47 compute-0 podman[270705]: 2025-10-11 08:41:47.797179373 +0000 UTC m=+0.097711906 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:41:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:41:48 compute-0 nova_compute[260935]: 2025-10-11 08:41:48.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:41:48 compute-0 nova_compute[260935]: 2025-10-11 08:41:48.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:41:49 compute-0 ceph-mon[74313]: pgmap v1048: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Oct 11 08:41:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 08:41:49 compute-0 nova_compute[260935]: 2025-10-11 08:41:49.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:41:51 compute-0 ceph-mon[74313]: pgmap v1049: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 08:41:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 08:41:51 compute-0 nova_compute[260935]: 2025-10-11 08:41:51.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:41:52 compute-0 nova_compute[260935]: 2025-10-11 08:41:52.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:41:52 compute-0 nova_compute[260935]: 2025-10-11 08:41:52.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:41:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:41:53 compute-0 ceph-mon[74313]: pgmap v1050: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 08:41:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 584 B/s wr, 6 op/s
Oct 11 08:41:53 compute-0 podman[270725]: 2025-10-11 08:41:53.796411218 +0000 UTC m=+0.096919223 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 08:41:54 compute-0 nova_compute[260935]: 2025-10-11 08:41:54.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:41:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:41:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:41:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:41:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:41:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:41:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:41:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:41:54
Oct 11 08:41:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:41:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:41:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'vms', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log']
Oct 11 08:41:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:41:54 compute-0 podman[270745]: 2025-10-11 08:41:54.850674211 +0000 UTC m=+0.149583939 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 11 08:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:41:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 08:41:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6052 writes, 24K keys, 6052 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6052 writes, 1073 syncs, 5.64 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 430 writes, 1011 keys, 430 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s
                                           Interval WAL: 430 writes, 192 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 08:41:55 compute-0 ceph-mon[74313]: pgmap v1051: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 584 B/s wr, 6 op/s
Oct 11 08:41:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 511 B/s wr, 5 op/s
Oct 11 08:41:55 compute-0 nova_compute[260935]: 2025-10-11 08:41:55.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:41:55 compute-0 nova_compute[260935]: 2025-10-11 08:41:55.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:41:55 compute-0 nova_compute[260935]: 2025-10-11 08:41:55.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:41:55 compute-0 nova_compute[260935]: 2025-10-11 08:41:55.719 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 08:41:55 compute-0 nova_compute[260935]: 2025-10-11 08:41:55.719 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:41:55 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 08:41:55 compute-0 nova_compute[260935]: 2025-10-11 08:41:55.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:41:55 compute-0 nova_compute[260935]: 2025-10-11 08:41:55.750 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:41:55 compute-0 nova_compute[260935]: 2025-10-11 08:41:55.750 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:41:55 compute-0 nova_compute[260935]: 2025-10-11 08:41:55.750 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:41:55 compute-0 nova_compute[260935]: 2025-10-11 08:41:55.750 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:41:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:41:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2891135953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:41:56 compute-0 nova_compute[260935]: 2025-10-11 08:41:56.227 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:41:56 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2891135953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:41:56 compute-0 nova_compute[260935]: 2025-10-11 08:41:56.454 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:41:56 compute-0 nova_compute[260935]: 2025-10-11 08:41:56.456 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5175MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:41:56 compute-0 nova_compute[260935]: 2025-10-11 08:41:56.457 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:41:56 compute-0 nova_compute[260935]: 2025-10-11 08:41:56.457 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:41:56 compute-0 nova_compute[260935]: 2025-10-11 08:41:56.528 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:41:56 compute-0 nova_compute[260935]: 2025-10-11 08:41:56.529 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:41:56 compute-0 nova_compute[260935]: 2025-10-11 08:41:56.550 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:41:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:41:57 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3738029934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:41:57 compute-0 nova_compute[260935]: 2025-10-11 08:41:57.023 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:41:57 compute-0 nova_compute[260935]: 2025-10-11 08:41:57.031 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:41:57 compute-0 nova_compute[260935]: 2025-10-11 08:41:57.054 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:41:57 compute-0 nova_compute[260935]: 2025-10-11 08:41:57.056 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:41:57 compute-0 nova_compute[260935]: 2025-10-11 08:41:57.056 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:41:57 compute-0 ceph-mon[74313]: pgmap v1052: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 511 B/s wr, 5 op/s
Oct 11 08:41:57 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3738029934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:41:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 511 B/s wr, 5 op/s
Oct 11 08:41:58 compute-0 nova_compute[260935]: 2025-10-11 08:41:58.040 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:41:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:41:58 compute-0 ceph-mon[74313]: pgmap v1053: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 511 B/s wr, 5 op/s
Oct 11 08:41:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 11 08:42:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 08:42:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7156 writes, 28K keys, 7156 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7156 writes, 1416 syncs, 5.05 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 444 writes, 1061 keys, 444 commit groups, 1.0 writes per commit group, ingest: 0.54 MB, 0.00 MB/s
                                           Interval WAL: 444 writes, 190 syncs, 2.34 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 08:42:00 compute-0 ceph-mon[74313]: pgmap v1054: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 11 08:42:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:02 compute-0 ceph-mon[74313]: pgmap v1055: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:42:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:42:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:42:04 compute-0 ceph-mon[74313]: pgmap v1056: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 08:42:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 6087 writes, 25K keys, 6087 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6087 writes, 1057 syncs, 5.76 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 411 writes, 1113 keys, 411 commit groups, 1.0 writes per commit group, ingest: 0.56 MB, 0.00 MB/s
                                           Interval WAL: 411 writes, 179 syncs, 2.30 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 08:42:06 compute-0 ceph-mon[74313]: pgmap v1057: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:07 compute-0 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 08:42:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:42:08 compute-0 ceph-mon[74313]: pgmap v1058: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:10 compute-0 ceph-mon[74313]: pgmap v1059: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:11 compute-0 podman[270817]: 2025-10-11 08:42:11.78913775 +0000 UTC m=+0.084441291 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 08:42:12 compute-0 ceph-mon[74313]: pgmap v1060: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:42:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:14 compute-0 ceph-mon[74313]: pgmap v1061: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:42:15.173 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:42:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:42:15.173 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:42:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:42:15.174 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:42:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:16 compute-0 ceph-mon[74313]: pgmap v1062: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:42:18 compute-0 ceph-mon[74313]: pgmap v1063: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:18 compute-0 podman[270835]: 2025-10-11 08:42:18.798130883 +0000 UTC m=+0.093848327 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:42:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:20 compute-0 ceph-mon[74313]: pgmap v1064: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:22 compute-0 ceph-mon[74313]: pgmap v1065: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:42:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:24 compute-0 ceph-mon[74313]: pgmap v1066: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:42:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:42:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:42:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:42:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:42:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:42:24 compute-0 podman[270855]: 2025-10-11 08:42:24.805415934 +0000 UTC m=+0.099282570 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:42:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:25 compute-0 sudo[270877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:42:25 compute-0 sudo[270877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:25 compute-0 sudo[270877]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:25 compute-0 sudo[270908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:42:25 compute-0 sudo[270908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:25 compute-0 sudo[270908]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:25 compute-0 podman[270901]: 2025-10-11 08:42:25.88317252 +0000 UTC m=+0.170128057 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 08:42:25 compute-0 sudo[270952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:42:25 compute-0 sudo[270952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:25 compute-0 sudo[270952]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:26 compute-0 sudo[270980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 08:42:26 compute-0 sudo[270980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:26 compute-0 sudo[270980]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:42:26 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:42:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:42:26 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:42:26 compute-0 sudo[271025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:42:26 compute-0 sudo[271025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:26 compute-0 sudo[271025]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:26 compute-0 sudo[271050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:42:26 compute-0 sudo[271050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:26 compute-0 sudo[271050]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:26 compute-0 sudo[271075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:42:26 compute-0 sudo[271075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:26 compute-0 sudo[271075]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:26 compute-0 sudo[271100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:42:26 compute-0 sudo[271100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:26 compute-0 ceph-mon[74313]: pgmap v1067: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:42:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:42:27 compute-0 sudo[271100]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:27 compute-0 sudo[271158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:42:27 compute-0 sudo[271158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:27 compute-0 sudo[271158]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:27 compute-0 sudo[271183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:42:27 compute-0 sudo[271183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:27 compute-0 sudo[271183]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:27 compute-0 sudo[271208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:42:27 compute-0 sudo[271208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:27 compute-0 sudo[271208]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:27 compute-0 sudo[271233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- inventory --format=json-pretty --filter-for-batch
Oct 11 08:42:27 compute-0 sudo[271233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:42:28 compute-0 podman[271300]: 2025-10-11 08:42:28.244118442 +0000 UTC m=+0.064163230 container create 338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 08:42:28 compute-0 systemd[1]: Started libpod-conmon-338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7.scope.
Oct 11 08:42:28 compute-0 podman[271300]: 2025-10-11 08:42:28.211889173 +0000 UTC m=+0.031934011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:42:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:42:28 compute-0 podman[271300]: 2025-10-11 08:42:28.353195097 +0000 UTC m=+0.173239925 container init 338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 08:42:28 compute-0 podman[271300]: 2025-10-11 08:42:28.368104117 +0000 UTC m=+0.188148885 container start 338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:42:28 compute-0 podman[271300]: 2025-10-11 08:42:28.372612514 +0000 UTC m=+0.192657292 container attach 338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 08:42:28 compute-0 lucid_vaughan[271317]: 167 167
Oct 11 08:42:28 compute-0 systemd[1]: libpod-338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7.scope: Deactivated successfully.
Oct 11 08:42:28 compute-0 podman[271300]: 2025-10-11 08:42:28.37636416 +0000 UTC m=+0.196408928 container died 338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:42:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b5dc300bb48cdaf8cd6984794e305ed43842dfcde5e4b3fea1b4cb4f02b6ecd-merged.mount: Deactivated successfully.
Oct 11 08:42:28 compute-0 podman[271300]: 2025-10-11 08:42:28.509036631 +0000 UTC m=+0.329081399 container remove 338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:42:28 compute-0 systemd[1]: libpod-conmon-338237e23f46e8274c83ebaec5f1b6ba9f9e570361834a1c21fd8c425ed91ec7.scope: Deactivated successfully.
Oct 11 08:42:28 compute-0 podman[271344]: 2025-10-11 08:42:28.731636156 +0000 UTC m=+0.068109201 container create 7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swartz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 11 08:42:28 compute-0 systemd[1]: Started libpod-conmon-7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e.scope.
Oct 11 08:42:28 compute-0 podman[271344]: 2025-10-11 08:42:28.70195697 +0000 UTC m=+0.038430075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:42:28 compute-0 ceph-mon[74313]: pgmap v1068: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:42:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/451ca74eba48f0f2743a0d1bad7b0ee8ecf438c70ceed4dd5136187394385e15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:42:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/451ca74eba48f0f2743a0d1bad7b0ee8ecf438c70ceed4dd5136187394385e15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:42:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/451ca74eba48f0f2743a0d1bad7b0ee8ecf438c70ceed4dd5136187394385e15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:42:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/451ca74eba48f0f2743a0d1bad7b0ee8ecf438c70ceed4dd5136187394385e15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:42:28 compute-0 podman[271344]: 2025-10-11 08:42:28.837641555 +0000 UTC m=+0.174114600 container init 7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 08:42:28 compute-0 podman[271344]: 2025-10-11 08:42:28.8619271 +0000 UTC m=+0.198400145 container start 7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swartz, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 08:42:28 compute-0 podman[271344]: 2025-10-11 08:42:28.866021695 +0000 UTC m=+0.202494740 container attach 7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 08:42:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:30 compute-0 distracted_swartz[271361]: [
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:     {
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:         "available": false,
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:         "ceph_device": false,
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:         "lsm_data": {},
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:         "lvs": [],
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:         "path": "/dev/sr0",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:         "rejected_reasons": [
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "Has a FileSystem",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "Insufficient space (<5GB)"
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:         ],
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:         "sys_api": {
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "actuators": null,
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "device_nodes": "sr0",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "devname": "sr0",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "human_readable_size": "482.00 KB",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "id_bus": "ata",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "model": "QEMU DVD-ROM",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "nr_requests": "2",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "parent": "/dev/sr0",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "partitions": {},
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "path": "/dev/sr0",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "removable": "1",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "rev": "2.5+",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "ro": "0",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "rotational": "0",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "sas_address": "",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "sas_device_handle": "",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "scheduler_mode": "mq-deadline",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "sectors": 0,
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "sectorsize": "2048",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "size": 493568.0,
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "support_discard": "2048",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "type": "disk",
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:             "vendor": "QEMU"
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:         }
Oct 11 08:42:30 compute-0 distracted_swartz[271361]:     }
Oct 11 08:42:30 compute-0 distracted_swartz[271361]: ]
Oct 11 08:42:30 compute-0 systemd[1]: libpod-7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e.scope: Deactivated successfully.
Oct 11 08:42:30 compute-0 systemd[1]: libpod-7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e.scope: Consumed 1.779s CPU time.
Oct 11 08:42:30 compute-0 podman[271344]: 2025-10-11 08:42:30.544149466 +0000 UTC m=+1.880622511 container died 7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swartz, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:42:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-451ca74eba48f0f2743a0d1bad7b0ee8ecf438c70ceed4dd5136187394385e15-merged.mount: Deactivated successfully.
Oct 11 08:42:30 compute-0 podman[271344]: 2025-10-11 08:42:30.623222305 +0000 UTC m=+1.959695320 container remove 7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swartz, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 11 08:42:30 compute-0 systemd[1]: libpod-conmon-7dec2621aa532c239104b27c1196ec02d868141b77ba444a81a0090023f3b70e.scope: Deactivated successfully.
Oct 11 08:42:30 compute-0 sudo[271233]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:42:30 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:42:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:42:30 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:42:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:42:30 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:42:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:42:30 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:42:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:42:30 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:42:30 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev d3748bc9-0eef-43c3-bfe4-9471c6e3db83 does not exist
Oct 11 08:42:30 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 85783387-de04-4008-95e0-d0595161b6e7 does not exist
Oct 11 08:42:30 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 634c2799-b7af-404c-b35a-6df6bc50a3bb does not exist
Oct 11 08:42:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:42:30 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:42:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:42:30 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:42:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:42:30 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:42:30 compute-0 ceph-mon[74313]: pgmap v1069: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:42:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:42:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:42:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:42:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:42:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:42:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:42:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:42:30 compute-0 sudo[273582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:42:30 compute-0 sudo[273582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:30 compute-0 sudo[273582]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:30 compute-0 sudo[273607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:42:30 compute-0 sudo[273607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:30 compute-0 sudo[273607]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:31 compute-0 sudo[273632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:42:31 compute-0 sudo[273632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:31 compute-0 sudo[273632]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:31 compute-0 sudo[273657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:42:31 compute-0 sudo[273657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:31 compute-0 podman[273723]: 2025-10-11 08:42:31.655699594 +0000 UTC m=+0.095828813 container create 28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:42:31 compute-0 podman[273723]: 2025-10-11 08:42:31.604882081 +0000 UTC m=+0.045011350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:42:31 compute-0 systemd[1]: Started libpod-conmon-28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13.scope.
Oct 11 08:42:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:42:31 compute-0 podman[273723]: 2025-10-11 08:42:31.764438429 +0000 UTC m=+0.204567678 container init 28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:42:31 compute-0 podman[273723]: 2025-10-11 08:42:31.775870402 +0000 UTC m=+0.215999621 container start 28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 08:42:31 compute-0 podman[273723]: 2025-10-11 08:42:31.78043393 +0000 UTC m=+0.220563199 container attach 28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 08:42:31 compute-0 hopeful_lovelace[273740]: 167 167
Oct 11 08:42:31 compute-0 systemd[1]: libpod-28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13.scope: Deactivated successfully.
Oct 11 08:42:31 compute-0 podman[273723]: 2025-10-11 08:42:31.785139073 +0000 UTC m=+0.225268292 container died 28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 08:42:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c771ab20f310fc4ed9799e9cc8ae1b1ba551249c155486b02b57b8b6a7455edc-merged.mount: Deactivated successfully.
Oct 11 08:42:31 compute-0 podman[273723]: 2025-10-11 08:42:31.840202275 +0000 UTC m=+0.280331494 container remove 28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:42:31 compute-0 systemd[1]: libpod-conmon-28c788ca0aa5a2478b79577b20d1e03963440e7a95a28bb3bec0567e20f74f13.scope: Deactivated successfully.
Oct 11 08:42:32 compute-0 podman[273764]: 2025-10-11 08:42:32.086358745 +0000 UTC m=+0.070011685 container create 0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 08:42:32 compute-0 systemd[1]: Started libpod-conmon-0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1.scope.
Oct 11 08:42:32 compute-0 podman[273764]: 2025-10-11 08:42:32.058470199 +0000 UTC m=+0.042123179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:42:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eef93847f6001dc96ff9c6f11498763c5c474104181d7e488d181dfc6c926ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eef93847f6001dc96ff9c6f11498763c5c474104181d7e488d181dfc6c926ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eef93847f6001dc96ff9c6f11498763c5c474104181d7e488d181dfc6c926ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eef93847f6001dc96ff9c6f11498763c5c474104181d7e488d181dfc6c926ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eef93847f6001dc96ff9c6f11498763c5c474104181d7e488d181dfc6c926ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:42:32 compute-0 podman[273764]: 2025-10-11 08:42:32.207081559 +0000 UTC m=+0.190734549 container init 0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 11 08:42:32 compute-0 podman[273764]: 2025-10-11 08:42:32.220225139 +0000 UTC m=+0.203878079 container start 0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 08:42:32 compute-0 podman[273764]: 2025-10-11 08:42:32.225337613 +0000 UTC m=+0.208990553 container attach 0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mayer, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 08:42:32 compute-0 ceph-mon[74313]: pgmap v1070: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:42:33 compute-0 gifted_mayer[273780]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:42:33 compute-0 gifted_mayer[273780]: --> relative data size: 1.0
Oct 11 08:42:33 compute-0 gifted_mayer[273780]: --> All data devices are unavailable
Oct 11 08:42:33 compute-0 systemd[1]: libpod-0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1.scope: Deactivated successfully.
Oct 11 08:42:33 compute-0 podman[273764]: 2025-10-11 08:42:33.40351634 +0000 UTC m=+1.387169270 container died 0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mayer, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 08:42:33 compute-0 systemd[1]: libpod-0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1.scope: Consumed 1.139s CPU time.
Oct 11 08:42:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7eef93847f6001dc96ff9c6f11498763c5c474104181d7e488d181dfc6c926ff-merged.mount: Deactivated successfully.
Oct 11 08:42:33 compute-0 podman[273764]: 2025-10-11 08:42:33.48621157 +0000 UTC m=+1.469864500 container remove 0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 08:42:33 compute-0 systemd[1]: libpod-conmon-0660820ad63e26b5d462181a78f9ff480322e2370d4f1bc257ac82e4c2d591c1.scope: Deactivated successfully.
Oct 11 08:42:33 compute-0 sudo[273657]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:33 compute-0 sudo[273820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:42:33 compute-0 sudo[273820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:33 compute-0 sudo[273820]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:33 compute-0 sudo[273845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:42:33 compute-0 sudo[273845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:33 compute-0 sudo[273845]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:33 compute-0 sudo[273870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:42:33 compute-0 sudo[273870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:33 compute-0 sudo[273870]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:33 compute-0 sudo[273895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:42:33 compute-0 sudo[273895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:34 compute-0 podman[273961]: 2025-10-11 08:42:34.328631531 +0000 UTC m=+0.071649221 container create e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 08:42:34 compute-0 systemd[1]: Started libpod-conmon-e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb.scope.
Oct 11 08:42:34 compute-0 podman[273961]: 2025-10-11 08:42:34.296945737 +0000 UTC m=+0.039963467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:42:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:42:34 compute-0 podman[273961]: 2025-10-11 08:42:34.429782942 +0000 UTC m=+0.172800682 container init e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 08:42:34 compute-0 podman[273961]: 2025-10-11 08:42:34.440216827 +0000 UTC m=+0.183234507 container start e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:42:34 compute-0 podman[273961]: 2025-10-11 08:42:34.444442056 +0000 UTC m=+0.187459736 container attach e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:42:34 compute-0 gifted_herschel[273977]: 167 167
Oct 11 08:42:34 compute-0 systemd[1]: libpod-e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb.scope: Deactivated successfully.
Oct 11 08:42:34 compute-0 podman[273961]: 2025-10-11 08:42:34.447465231 +0000 UTC m=+0.190482911 container died e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:42:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-31300592cb4af761b8e337bac59c1168f2567537f4ac591258eac18c5958ef54-merged.mount: Deactivated successfully.
Oct 11 08:42:34 compute-0 podman[273961]: 2025-10-11 08:42:34.497398339 +0000 UTC m=+0.240416029 container remove e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:42:34 compute-0 systemd[1]: libpod-conmon-e2119e7dd5e928dd82749f7fbaa1e73ac6f87b8b2e86364d2c0b1c1da00d40bb.scope: Deactivated successfully.
Oct 11 08:42:34 compute-0 podman[274001]: 2025-10-11 08:42:34.771047324 +0000 UTC m=+0.082351133 container create 18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 08:42:34 compute-0 systemd[1]: Started libpod-conmon-18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f.scope.
Oct 11 08:42:34 compute-0 podman[274001]: 2025-10-11 08:42:34.738339682 +0000 UTC m=+0.049643541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:42:34 compute-0 ceph-mon[74313]: pgmap v1071: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:42:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0700bd248c1284ba8f1178b93bd60a267269443ab95ac03b03f08fabcadaff9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:42:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0700bd248c1284ba8f1178b93bd60a267269443ab95ac03b03f08fabcadaff9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:42:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0700bd248c1284ba8f1178b93bd60a267269443ab95ac03b03f08fabcadaff9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:42:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0700bd248c1284ba8f1178b93bd60a267269443ab95ac03b03f08fabcadaff9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:42:34 compute-0 podman[274001]: 2025-10-11 08:42:34.890640745 +0000 UTC m=+0.201944594 container init 18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kapitsa, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:42:34 compute-0 podman[274001]: 2025-10-11 08:42:34.907958384 +0000 UTC m=+0.219262193 container start 18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kapitsa, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 08:42:34 compute-0 podman[274001]: 2025-10-11 08:42:34.911755441 +0000 UTC m=+0.223059320 container attach 18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:42:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]: {
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:     "0": [
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:         {
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "devices": [
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "/dev/loop3"
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             ],
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "lv_name": "ceph_lv0",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "lv_size": "21470642176",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "name": "ceph_lv0",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "tags": {
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.cluster_name": "ceph",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.crush_device_class": "",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.encrypted": "0",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.osd_id": "0",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.type": "block",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.vdo": "0"
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             },
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "type": "block",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "vg_name": "ceph_vg0"
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:         }
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:     ],
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:     "1": [
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:         {
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "devices": [
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "/dev/loop4"
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             ],
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "lv_name": "ceph_lv1",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "lv_size": "21470642176",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "name": "ceph_lv1",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "tags": {
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.cluster_name": "ceph",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.crush_device_class": "",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.encrypted": "0",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.osd_id": "1",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.type": "block",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.vdo": "0"
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             },
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "type": "block",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "vg_name": "ceph_vg1"
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:         }
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:     ],
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:     "2": [
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:         {
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "devices": [
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "/dev/loop5"
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             ],
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "lv_name": "ceph_lv2",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "lv_size": "21470642176",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "name": "ceph_lv2",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "tags": {
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.cluster_name": "ceph",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.crush_device_class": "",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.encrypted": "0",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.osd_id": "2",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.type": "block",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:                 "ceph.vdo": "0"
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             },
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "type": "block",
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:             "vg_name": "ceph_vg2"
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:         }
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]:     ]
Oct 11 08:42:35 compute-0 sleepy_kapitsa[274018]: }
Oct 11 08:42:35 compute-0 systemd[1]: libpod-18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f.scope: Deactivated successfully.
Oct 11 08:42:35 compute-0 podman[274001]: 2025-10-11 08:42:35.731690747 +0000 UTC m=+1.042994586 container died 18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:42:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0700bd248c1284ba8f1178b93bd60a267269443ab95ac03b03f08fabcadaff9-merged.mount: Deactivated successfully.
Oct 11 08:42:35 compute-0 podman[274001]: 2025-10-11 08:42:35.794865748 +0000 UTC m=+1.106169527 container remove 18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:42:35 compute-0 systemd[1]: libpod-conmon-18ce762fa78d1a0e0f673f27d01f1523cfe0f63591eab986978eb138203d380f.scope: Deactivated successfully.
Oct 11 08:42:35 compute-0 sudo[273895]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:35 compute-0 sudo[274041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:42:35 compute-0 sudo[274041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:35 compute-0 sudo[274041]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:36 compute-0 sudo[274066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:42:36 compute-0 sudo[274066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:36 compute-0 sudo[274066]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:36 compute-0 sudo[274091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:42:36 compute-0 sudo[274091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:36 compute-0 sudo[274091]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:36 compute-0 sudo[274116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:42:36 compute-0 sudo[274116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:36 compute-0 podman[274181]: 2025-10-11 08:42:36.616968246 +0000 UTC m=+0.056664569 container create aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dubinsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:42:36 compute-0 systemd[1]: Started libpod-conmon-aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e.scope.
Oct 11 08:42:36 compute-0 podman[274181]: 2025-10-11 08:42:36.593411712 +0000 UTC m=+0.033108055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:42:36 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:42:36 compute-0 podman[274181]: 2025-10-11 08:42:36.721067851 +0000 UTC m=+0.160764214 container init aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:42:36 compute-0 podman[274181]: 2025-10-11 08:42:36.733429099 +0000 UTC m=+0.173125452 container start aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:42:36 compute-0 podman[274181]: 2025-10-11 08:42:36.738153432 +0000 UTC m=+0.177849795 container attach aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 08:42:36 compute-0 awesome_dubinsky[274197]: 167 167
Oct 11 08:42:36 compute-0 systemd[1]: libpod-aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e.scope: Deactivated successfully.
Oct 11 08:42:36 compute-0 podman[274181]: 2025-10-11 08:42:36.745304974 +0000 UTC m=+0.185001337 container died aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dubinsky, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:42:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-031375ea1e86540649da3c424e298af816c00c60f7a53cf5eada4987024032dd-merged.mount: Deactivated successfully.
Oct 11 08:42:36 compute-0 podman[274181]: 2025-10-11 08:42:36.800186041 +0000 UTC m=+0.239882404 container remove aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:42:36 compute-0 systemd[1]: libpod-conmon-aca801a7820175709710ff65797c9e9aeb355d1b71d93e870ec0c86f9134573e.scope: Deactivated successfully.
Oct 11 08:42:36 compute-0 ceph-mon[74313]: pgmap v1072: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:37 compute-0 podman[274220]: 2025-10-11 08:42:37.05628539 +0000 UTC m=+0.073413909 container create 6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:42:37 compute-0 systemd[1]: Started libpod-conmon-6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264.scope.
Oct 11 08:42:37 compute-0 podman[274220]: 2025-10-11 08:42:37.023723002 +0000 UTC m=+0.040851591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:42:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:42:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd1283ce59d95a2978672b3ad153f34d01bc9fc7062bdfde79126e11e7c3761/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:42:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd1283ce59d95a2978672b3ad153f34d01bc9fc7062bdfde79126e11e7c3761/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:42:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd1283ce59d95a2978672b3ad153f34d01bc9fc7062bdfde79126e11e7c3761/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:42:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd1283ce59d95a2978672b3ad153f34d01bc9fc7062bdfde79126e11e7c3761/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:42:37 compute-0 podman[274220]: 2025-10-11 08:42:37.16587636 +0000 UTC m=+0.183004899 container init 6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:42:37 compute-0 podman[274220]: 2025-10-11 08:42:37.177486017 +0000 UTC m=+0.194614546 container start 6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:42:37 compute-0 podman[274220]: 2025-10-11 08:42:37.181226293 +0000 UTC m=+0.198354822 container attach 6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 08:42:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:42:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2137733844' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:42:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:42:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2137733844' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:42:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2137733844' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:42:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2137733844' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:42:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:42:38 compute-0 sad_northcutt[274237]: {
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:         "osd_id": 2,
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:         "type": "bluestore"
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:     },
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:         "osd_id": 0,
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:         "type": "bluestore"
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:     },
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:         "osd_id": 1,
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:         "type": "bluestore"
Oct 11 08:42:38 compute-0 sad_northcutt[274237]:     }
Oct 11 08:42:38 compute-0 sad_northcutt[274237]: }
Oct 11 08:42:38 compute-0 systemd[1]: libpod-6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264.scope: Deactivated successfully.
Oct 11 08:42:38 compute-0 podman[274220]: 2025-10-11 08:42:38.21542759 +0000 UTC m=+1.232556089 container died 6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 08:42:38 compute-0 systemd[1]: libpod-6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264.scope: Consumed 1.047s CPU time.
Oct 11 08:42:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdd1283ce59d95a2978672b3ad153f34d01bc9fc7062bdfde79126e11e7c3761-merged.mount: Deactivated successfully.
Oct 11 08:42:38 compute-0 podman[274220]: 2025-10-11 08:42:38.272696084 +0000 UTC m=+1.289824563 container remove 6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 08:42:38 compute-0 systemd[1]: libpod-conmon-6e5268f8a272e68db120358b57658e8fac9a2a71ecb3c17bd15e6705d0813264.scope: Deactivated successfully.
Oct 11 08:42:38 compute-0 sudo[274116]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:42:38 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:42:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:42:38 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:42:38 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a78175dc-cd1a-4675-b252-4e4d15a426a2 does not exist
Oct 11 08:42:38 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 71d552c1-22ef-46ca-8d88-d9cb558c5aae does not exist
Oct 11 08:42:38 compute-0 sudo[274281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:42:38 compute-0 sudo[274281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:38 compute-0 sudo[274281]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:38 compute-0 sudo[274306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:42:38 compute-0 sudo[274306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:42:38 compute-0 sudo[274306]: pam_unix(sudo:session): session closed for user root
Oct 11 08:42:39 compute-0 ceph-mon[74313]: pgmap v1073: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:42:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:42:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:41 compute-0 ceph-mon[74313]: pgmap v1074: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:42 compute-0 podman[274331]: 2025-10-11 08:42:42.816320282 +0000 UTC m=+0.105582228 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:42:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:42:43 compute-0 ceph-mon[74313]: pgmap v1075: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:44 compute-0 ceph-mon[74313]: pgmap v1076: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:46 compute-0 ceph-mon[74313]: pgmap v1077: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:48 compute-0 sshd-session[274351]: Invalid user vendas from 152.32.213.170 port 41804
Oct 11 08:42:48 compute-0 sshd-session[274351]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 08:42:48 compute-0 sshd-session[274351]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 08:42:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:42:48 compute-0 ceph-mon[74313]: pgmap v1078: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:49 compute-0 podman[274353]: 2025-10-11 08:42:49.769701326 +0000 UTC m=+0.067976228 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 08:42:50 compute-0 sshd-session[274351]: Failed password for invalid user vendas from 152.32.213.170 port 41804 ssh2
Oct 11 08:42:50 compute-0 nova_compute[260935]: 2025-10-11 08:42:50.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:42:50 compute-0 ceph-mon[74313]: pgmap v1079: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:50 compute-0 nova_compute[260935]: 2025-10-11 08:42:50.752 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:42:50 compute-0 nova_compute[260935]: 2025-10-11 08:42:50.753 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:42:50 compute-0 nova_compute[260935]: 2025-10-11 08:42:50.753 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:42:50 compute-0 nova_compute[260935]: 2025-10-11 08:42:50.753 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:42:50 compute-0 nova_compute[260935]: 2025-10-11 08:42:50.753 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 08:42:50 compute-0 nova_compute[260935]: 2025-10-11 08:42:50.917 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 08:42:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:52 compute-0 sshd-session[274351]: Received disconnect from 152.32.213.170 port 41804:11: Bye Bye [preauth]
Oct 11 08:42:52 compute-0 sshd-session[274351]: Disconnected from invalid user vendas 152.32.213.170 port 41804 [preauth]
Oct 11 08:42:52 compute-0 ceph-mon[74313]: pgmap v1080: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:52 compute-0 nova_compute[260935]: 2025-10-11 08:42:52.867 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:42:52 compute-0 nova_compute[260935]: 2025-10-11 08:42:52.868 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:42:52 compute-0 nova_compute[260935]: 2025-10-11 08:42:52.868 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:42:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:42:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:54 compute-0 ceph-mon[74313]: pgmap v1081: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:42:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:42:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:42:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:42:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:42:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:42:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:42:54
Oct 11 08:42:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:42:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:42:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'volumes', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.log', 'default.rgw.meta', '.rgw.root']
Oct 11 08:42:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:42:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:55 compute-0 nova_compute[260935]: 2025-10-11 08:42:55.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:42:55 compute-0 podman[274373]: 2025-10-11 08:42:55.799645077 +0000 UTC m=+0.097392767 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 08:42:56 compute-0 ceph-mon[74313]: pgmap v1082: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:56 compute-0 podman[274393]: 2025-10-11 08:42:56.860779774 +0000 UTC m=+0.155671980 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:42:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:57 compute-0 nova_compute[260935]: 2025-10-11 08:42:57.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:42:57 compute-0 nova_compute[260935]: 2025-10-11 08:42:57.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:42:57 compute-0 nova_compute[260935]: 2025-10-11 08:42:57.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:42:57 compute-0 nova_compute[260935]: 2025-10-11 08:42:57.725 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 08:42:57 compute-0 nova_compute[260935]: 2025-10-11 08:42:57.725 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:42:57 compute-0 nova_compute[260935]: 2025-10-11 08:42:57.760 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:42:57 compute-0 nova_compute[260935]: 2025-10-11 08:42:57.761 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:42:57 compute-0 nova_compute[260935]: 2025-10-11 08:42:57.762 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:42:57 compute-0 nova_compute[260935]: 2025-10-11 08:42:57.762 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:42:57 compute-0 nova_compute[260935]: 2025-10-11 08:42:57.763 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:42:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:42:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:42:58 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/208361886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:42:58 compute-0 nova_compute[260935]: 2025-10-11 08:42:58.226 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:42:58 compute-0 nova_compute[260935]: 2025-10-11 08:42:58.396 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:42:58 compute-0 nova_compute[260935]: 2025-10-11 08:42:58.397 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5122MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:42:58 compute-0 nova_compute[260935]: 2025-10-11 08:42:58.397 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:42:58 compute-0 nova_compute[260935]: 2025-10-11 08:42:58.398 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:42:58 compute-0 nova_compute[260935]: 2025-10-11 08:42:58.645 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:42:58 compute-0 nova_compute[260935]: 2025-10-11 08:42:58.645 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:42:58 compute-0 nova_compute[260935]: 2025-10-11 08:42:58.741 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 08:42:58 compute-0 ceph-mon[74313]: pgmap v1083: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:58 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/208361886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:42:58 compute-0 nova_compute[260935]: 2025-10-11 08:42:58.839 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 08:42:58 compute-0 nova_compute[260935]: 2025-10-11 08:42:58.840 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 08:42:58 compute-0 nova_compute[260935]: 2025-10-11 08:42:58.858 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 08:42:58 compute-0 nova_compute[260935]: 2025-10-11 08:42:58.885 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 08:42:58 compute-0 nova_compute[260935]: 2025-10-11 08:42:58.903 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:42:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:42:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2811804463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:42:59 compute-0 nova_compute[260935]: 2025-10-11 08:42:59.386 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:42:59 compute-0 nova_compute[260935]: 2025-10-11 08:42:59.395 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:42:59 compute-0 nova_compute[260935]: 2025-10-11 08:42:59.415 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:42:59 compute-0 nova_compute[260935]: 2025-10-11 08:42:59.418 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:42:59 compute-0 nova_compute[260935]: 2025-10-11 08:42:59.418 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:42:59 compute-0 nova_compute[260935]: 2025-10-11 08:42:59.419 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:42:59 compute-0 nova_compute[260935]: 2025-10-11 08:42:59.420 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 08:42:59 compute-0 nova_compute[260935]: 2025-10-11 08:42:59.440 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:42:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:42:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2811804463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:43:00 compute-0 nova_compute[260935]: 2025-10-11 08:43:00.433 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:43:00 compute-0 ceph-mon[74313]: pgmap v1084: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:02 compute-0 ceph-mon[74313]: pgmap v1085: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:43:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:43:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:43:04 compute-0 ceph-mon[74313]: pgmap v1086: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:06 compute-0 ceph-mon[74313]: pgmap v1087: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:43:08 compute-0 ceph-mon[74313]: pgmap v1088: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:10 compute-0 ceph-mon[74313]: pgmap v1089: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:12 compute-0 ceph-mon[74313]: pgmap v1090: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:43:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:13 compute-0 podman[274465]: 2025-10-11 08:43:13.777220135 +0000 UTC m=+0.078644888 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 11 08:43:14 compute-0 ceph-mon[74313]: pgmap v1091: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:43:15.174 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:43:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:43:15.175 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:43:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:43:15.175 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:43:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:16 compute-0 ceph-mon[74313]: pgmap v1092: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:43:18 compute-0 ceph-mon[74313]: pgmap v1093: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:20 compute-0 podman[274486]: 2025-10-11 08:43:20.797742911 +0000 UTC m=+0.084758010 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:43:20 compute-0 ceph-mon[74313]: pgmap v1094: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:22 compute-0 sshd-session[274464]: error: kex_exchange_identification: read: Connection timed out
Oct 11 08:43:22 compute-0 sshd-session[274464]: banner exchange: Connection from 60.188.59.200 port 39506: Connection timed out
Oct 11 08:43:22 compute-0 ceph-mon[74313]: pgmap v1095: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:43:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:43:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:43:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:43:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:43:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:43:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:43:24 compute-0 ceph-mon[74313]: pgmap v1096: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:26 compute-0 podman[274507]: 2025-10-11 08:43:26.810874049 +0000 UTC m=+0.106736710 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 11 08:43:26 compute-0 ceph-mon[74313]: pgmap v1097: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 426 B/s wr, 3 op/s
Oct 11 08:43:27 compute-0 podman[274527]: 2025-10-11 08:43:27.859252285 +0000 UTC m=+0.148767305 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:43:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:43:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct 11 08:43:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct 11 08:43:28 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct 11 08:43:28 compute-0 ceph-mon[74313]: pgmap v1098: 321 pgs: 321 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 426 B/s wr, 3 op/s
Oct 11 08:43:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 511 B/s wr, 4 op/s
Oct 11 08:43:29 compute-0 ceph-mon[74313]: osdmap e132: 3 total, 3 up, 3 in
Oct 11 08:43:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct 11 08:43:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct 11 08:43:30 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct 11 08:43:30 compute-0 ceph-mon[74313]: pgmap v1100: 321 pgs: 321 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 511 B/s wr, 4 op/s
Oct 11 08:43:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 639 B/s wr, 5 op/s
Oct 11 08:43:31 compute-0 ceph-mon[74313]: osdmap e133: 3 total, 3 up, 3 in
Oct 11 08:43:32 compute-0 ceph-mon[74313]: pgmap v1102: 321 pgs: 321 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 639 B/s wr, 5 op/s
Oct 11 08:43:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:43:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 5.1 MiB/s wr, 137 op/s
Oct 11 08:43:34 compute-0 ceph-mon[74313]: pgmap v1103: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 5.1 MiB/s wr, 137 op/s
Oct 11 08:43:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 5.1 MiB/s wr, 131 op/s
Oct 11 08:43:36 compute-0 ceph-mon[74313]: pgmap v1104: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 5.1 MiB/s wr, 131 op/s
Oct 11 08:43:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:43:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/617872054' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:43:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:43:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/617872054' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:43:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 4.7 MiB/s wr, 120 op/s
Oct 11 08:43:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/617872054' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:43:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/617872054' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:43:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:43:38 compute-0 sudo[274554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:43:38 compute-0 sudo[274554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:38 compute-0 sudo[274554]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:38 compute-0 sudo[274579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:43:38 compute-0 sudo[274579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:38 compute-0 sudo[274579]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:38 compute-0 sudo[274604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:43:38 compute-0 sudo[274604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:38 compute-0 sudo[274604]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:38 compute-0 sudo[274629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:43:38 compute-0 sudo[274629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:38 compute-0 ceph-mon[74313]: pgmap v1105: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 4.7 MiB/s wr, 120 op/s
Oct 11 08:43:39 compute-0 sudo[274629]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 08:43:39 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 08:43:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:43:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:43:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:43:39 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:43:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:43:39 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:43:39 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 49267909-4f64-4322-8748-6d3dda31b653 does not exist
Oct 11 08:43:39 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev fd16eafb-385e-473a-b3db-8f9595d06e74 does not exist
Oct 11 08:43:39 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ad1c7374-3477-4534-a91a-62a4a36c493d does not exist
Oct 11 08:43:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 4.1 MiB/s wr, 105 op/s
Oct 11 08:43:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:43:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:43:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:43:39 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:43:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:43:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:43:39 compute-0 sudo[274685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:43:39 compute-0 sudo[274685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:39 compute-0 sudo[274685]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:39 compute-0 sudo[274710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:43:39 compute-0 sudo[274710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:39 compute-0 sudo[274710]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:39 compute-0 sudo[274735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:43:39 compute-0 sudo[274735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:39 compute-0 sudo[274735]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 08:43:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:43:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:43:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:43:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:43:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:43:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:43:40 compute-0 sudo[274760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:43:40 compute-0 sudo[274760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:40 compute-0 podman[274827]: 2025-10-11 08:43:40.578088769 +0000 UTC m=+0.054603200 container create bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:43:40 compute-0 systemd[1]: Started libpod-conmon-bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec.scope.
Oct 11 08:43:40 compute-0 podman[274827]: 2025-10-11 08:43:40.554927896 +0000 UTC m=+0.031442357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:43:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.666205) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172220666248, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2098, "num_deletes": 254, "total_data_size": 3451779, "memory_usage": 3516800, "flush_reason": "Manual Compaction"}
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172220689888, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3362279, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20937, "largest_seqno": 23034, "table_properties": {"data_size": 3352702, "index_size": 6071, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19141, "raw_average_key_size": 20, "raw_value_size": 3333633, "raw_average_value_size": 3512, "num_data_blocks": 273, "num_entries": 949, "num_filter_entries": 949, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760172009, "oldest_key_time": 1760172009, "file_creation_time": 1760172220, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 23743 microseconds, and 11930 cpu microseconds.
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.689946) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3362279 bytes OK
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.689976) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.691910) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.691932) EVENT_LOG_v1 {"time_micros": 1760172220691925, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.691956) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3442995, prev total WAL file size 3442995, number of live WAL files 2.
Oct 11 08:43:40 compute-0 podman[274827]: 2025-10-11 08:43:40.692025011 +0000 UTC m=+0.168539512 container init bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.693433) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3283KB)], [50(7318KB)]
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172220693475, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10855992, "oldest_snapshot_seqno": -1}
Oct 11 08:43:40 compute-0 podman[274827]: 2025-10-11 08:43:40.705923393 +0000 UTC m=+0.182437854 container start bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:43:40 compute-0 podman[274827]: 2025-10-11 08:43:40.710018069 +0000 UTC m=+0.186532570 container attach bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 08:43:40 compute-0 kind_satoshi[274843]: 167 167
Oct 11 08:43:40 compute-0 systemd[1]: libpod-bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec.scope: Deactivated successfully.
Oct 11 08:43:40 compute-0 podman[274827]: 2025-10-11 08:43:40.716917433 +0000 UTC m=+0.193431894 container died bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4742 keys, 9118470 bytes, temperature: kUnknown
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172220753839, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9118470, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9084237, "index_size": 21259, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11909, "raw_key_size": 116137, "raw_average_key_size": 24, "raw_value_size": 8996032, "raw_average_value_size": 1897, "num_data_blocks": 894, "num_entries": 4742, "num_filter_entries": 4742, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760172220, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.754225) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9118470 bytes
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.755988) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.5 rd, 150.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.1 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5263, records dropped: 521 output_compression: NoCompression
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.756018) EVENT_LOG_v1 {"time_micros": 1760172220756002, "job": 26, "event": "compaction_finished", "compaction_time_micros": 60481, "compaction_time_cpu_micros": 41609, "output_level": 6, "num_output_files": 1, "total_output_size": 9118470, "num_input_records": 5263, "num_output_records": 4742, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172220758045, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct 11 08:43:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2379ecc6515f8f0ab5ffc88db1491372f5385271dbb1a8219182b3490e72139-merged.mount: Deactivated successfully.
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172220761274, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.693342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.761338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.761348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.761352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.761356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:43:40 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:43:40.761360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:43:40 compute-0 podman[274827]: 2025-10-11 08:43:40.777749138 +0000 UTC m=+0.254263589 container remove bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:43:40 compute-0 systemd[1]: libpod-conmon-bd2f681dae196c9ebb48af3e416f31c644965531e8977c9ef713e591205066ec.scope: Deactivated successfully.
Oct 11 08:43:40 compute-0 ceph-mon[74313]: pgmap v1106: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 4.1 MiB/s wr, 105 op/s
Oct 11 08:43:41 compute-0 podman[274868]: 2025-10-11 08:43:41.008879484 +0000 UTC m=+0.055567017 container create e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 08:43:41 compute-0 systemd[1]: Started libpod-conmon-e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83.scope.
Oct 11 08:43:41 compute-0 podman[274868]: 2025-10-11 08:43:40.98070104 +0000 UTC m=+0.027388623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:43:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a76e5e2c93cb310c71ff7d052129ab3bc684666f2e3d1a94b9184afda7a2f02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a76e5e2c93cb310c71ff7d052129ab3bc684666f2e3d1a94b9184afda7a2f02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a76e5e2c93cb310c71ff7d052129ab3bc684666f2e3d1a94b9184afda7a2f02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a76e5e2c93cb310c71ff7d052129ab3bc684666f2e3d1a94b9184afda7a2f02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:43:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a76e5e2c93cb310c71ff7d052129ab3bc684666f2e3d1a94b9184afda7a2f02/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:43:41 compute-0 podman[274868]: 2025-10-11 08:43:41.129950058 +0000 UTC m=+0.176637631 container init e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:43:41 compute-0 podman[274868]: 2025-10-11 08:43:41.143412187 +0000 UTC m=+0.190099680 container start e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 08:43:41 compute-0 podman[274868]: 2025-10-11 08:43:41.147593805 +0000 UTC m=+0.194281338 container attach e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 08:43:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.8 MiB/s wr, 98 op/s
Oct 11 08:43:42 compute-0 admiring_hugle[274883]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:43:42 compute-0 admiring_hugle[274883]: --> relative data size: 1.0
Oct 11 08:43:42 compute-0 admiring_hugle[274883]: --> All data devices are unavailable
Oct 11 08:43:42 compute-0 systemd[1]: libpod-e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83.scope: Deactivated successfully.
Oct 11 08:43:42 compute-0 podman[274868]: 2025-10-11 08:43:42.307080464 +0000 UTC m=+1.353768017 container died e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 08:43:42 compute-0 systemd[1]: libpod-e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83.scope: Consumed 1.120s CPU time.
Oct 11 08:43:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a76e5e2c93cb310c71ff7d052129ab3bc684666f2e3d1a94b9184afda7a2f02-merged.mount: Deactivated successfully.
Oct 11 08:43:42 compute-0 podman[274868]: 2025-10-11 08:43:42.378075865 +0000 UTC m=+1.424763368 container remove e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:43:42 compute-0 systemd[1]: libpod-conmon-e42c8cf6f638a9dfb85e87d4d6b90cb1d14adf19406453119d1086d8428eef83.scope: Deactivated successfully.
Oct 11 08:43:42 compute-0 sudo[274760]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:42 compute-0 sudo[274925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:43:42 compute-0 sudo[274925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:42 compute-0 sudo[274925]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:42 compute-0 sudo[274950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:43:42 compute-0 sudo[274950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:42 compute-0 sudo[274950]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:42 compute-0 sudo[274975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:43:42 compute-0 sudo[274975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:42 compute-0 sudo[274975]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:42 compute-0 sudo[275000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:43:42 compute-0 sudo[275000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:43 compute-0 ceph-mon[74313]: pgmap v1107: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.8 MiB/s wr, 98 op/s
Oct 11 08:43:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:43:43 compute-0 podman[275065]: 2025-10-11 08:43:43.365236226 +0000 UTC m=+0.071991821 container create cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:43:43 compute-0 systemd[1]: Started libpod-conmon-cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24.scope.
Oct 11 08:43:43 compute-0 podman[275065]: 2025-10-11 08:43:43.338520303 +0000 UTC m=+0.045275968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:43:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:43:43 compute-0 podman[275065]: 2025-10-11 08:43:43.470076582 +0000 UTC m=+0.176832167 container init cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 08:43:43 compute-0 podman[275065]: 2025-10-11 08:43:43.478025916 +0000 UTC m=+0.184781481 container start cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:43:43 compute-0 podman[275065]: 2025-10-11 08:43:43.481280258 +0000 UTC m=+0.188035883 container attach cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 08:43:43 compute-0 heuristic_williamson[275082]: 167 167
Oct 11 08:43:43 compute-0 systemd[1]: libpod-cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24.scope: Deactivated successfully.
Oct 11 08:43:43 compute-0 conmon[275082]: conmon cc0cb479bd24efd61c52 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24.scope/container/memory.events
Oct 11 08:43:43 compute-0 podman[275065]: 2025-10-11 08:43:43.486974038 +0000 UTC m=+0.193729603 container died cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 08:43:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-404dc7a167aa685c9b38dcbde2318e023005e1eced96a4d60f73799083d07c5c-merged.mount: Deactivated successfully.
Oct 11 08:43:43 compute-0 podman[275065]: 2025-10-11 08:43:43.533303114 +0000 UTC m=+0.240058679 container remove cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 08:43:43 compute-0 systemd[1]: libpod-conmon-cc0cb479bd24efd61c52aca99d86f509a005870723ae7c0b31d408fc9995bb24.scope: Deactivated successfully.
Oct 11 08:43:43 compute-0 nova_compute[260935]: 2025-10-11 08:43:43.660 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:43:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 3.4 MiB/s wr, 87 op/s
Oct 11 08:43:43 compute-0 podman[275106]: 2025-10-11 08:43:43.749245062 +0000 UTC m=+0.050440903 container create 01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 08:43:43 compute-0 systemd[1]: Started libpod-conmon-01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441.scope.
Oct 11 08:43:43 compute-0 podman[275106]: 2025-10-11 08:43:43.728975351 +0000 UTC m=+0.030171212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:43:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd91a8fe5357c7897fcbd3f617129f9e5edcd2e412904b29ab122b5a5767282/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd91a8fe5357c7897fcbd3f617129f9e5edcd2e412904b29ab122b5a5767282/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd91a8fe5357c7897fcbd3f617129f9e5edcd2e412904b29ab122b5a5767282/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd91a8fe5357c7897fcbd3f617129f9e5edcd2e412904b29ab122b5a5767282/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:43:43 compute-0 podman[275106]: 2025-10-11 08:43:43.882419657 +0000 UTC m=+0.183615498 container init 01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:43:43 compute-0 podman[275106]: 2025-10-11 08:43:43.897026609 +0000 UTC m=+0.198222450 container start 01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 08:43:43 compute-0 podman[275106]: 2025-10-11 08:43:43.900179718 +0000 UTC m=+0.201375559 container attach 01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 08:43:43 compute-0 podman[275124]: 2025-10-11 08:43:43.911676992 +0000 UTC m=+0.087080816 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 08:43:44 compute-0 competent_carver[275123]: {
Oct 11 08:43:44 compute-0 competent_carver[275123]:     "0": [
Oct 11 08:43:44 compute-0 competent_carver[275123]:         {
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "devices": [
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "/dev/loop3"
Oct 11 08:43:44 compute-0 competent_carver[275123]:             ],
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "lv_name": "ceph_lv0",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "lv_size": "21470642176",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "name": "ceph_lv0",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "tags": {
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.cluster_name": "ceph",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.crush_device_class": "",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.encrypted": "0",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.osd_id": "0",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.type": "block",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.vdo": "0"
Oct 11 08:43:44 compute-0 competent_carver[275123]:             },
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "type": "block",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "vg_name": "ceph_vg0"
Oct 11 08:43:44 compute-0 competent_carver[275123]:         }
Oct 11 08:43:44 compute-0 competent_carver[275123]:     ],
Oct 11 08:43:44 compute-0 competent_carver[275123]:     "1": [
Oct 11 08:43:44 compute-0 competent_carver[275123]:         {
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "devices": [
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "/dev/loop4"
Oct 11 08:43:44 compute-0 competent_carver[275123]:             ],
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "lv_name": "ceph_lv1",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "lv_size": "21470642176",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "name": "ceph_lv1",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "tags": {
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.cluster_name": "ceph",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.crush_device_class": "",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.encrypted": "0",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.osd_id": "1",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.type": "block",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.vdo": "0"
Oct 11 08:43:44 compute-0 competent_carver[275123]:             },
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "type": "block",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "vg_name": "ceph_vg1"
Oct 11 08:43:44 compute-0 competent_carver[275123]:         }
Oct 11 08:43:44 compute-0 competent_carver[275123]:     ],
Oct 11 08:43:44 compute-0 competent_carver[275123]:     "2": [
Oct 11 08:43:44 compute-0 competent_carver[275123]:         {
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "devices": [
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "/dev/loop5"
Oct 11 08:43:44 compute-0 competent_carver[275123]:             ],
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "lv_name": "ceph_lv2",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "lv_size": "21470642176",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "name": "ceph_lv2",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "tags": {
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.cluster_name": "ceph",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.crush_device_class": "",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.encrypted": "0",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.osd_id": "2",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.type": "block",
Oct 11 08:43:44 compute-0 competent_carver[275123]:                 "ceph.vdo": "0"
Oct 11 08:43:44 compute-0 competent_carver[275123]:             },
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "type": "block",
Oct 11 08:43:44 compute-0 competent_carver[275123]:             "vg_name": "ceph_vg2"
Oct 11 08:43:44 compute-0 competent_carver[275123]:         }
Oct 11 08:43:44 compute-0 competent_carver[275123]:     ]
Oct 11 08:43:44 compute-0 competent_carver[275123]: }
Oct 11 08:43:44 compute-0 systemd[1]: libpod-01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441.scope: Deactivated successfully.
Oct 11 08:43:44 compute-0 podman[275106]: 2025-10-11 08:43:44.703801654 +0000 UTC m=+1.004997505 container died 01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:43:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fd91a8fe5357c7897fcbd3f617129f9e5edcd2e412904b29ab122b5a5767282-merged.mount: Deactivated successfully.
Oct 11 08:43:44 compute-0 podman[275106]: 2025-10-11 08:43:44.775134095 +0000 UTC m=+1.076329976 container remove 01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_carver, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:43:44 compute-0 systemd[1]: libpod-conmon-01c537420c88c7d56d1f7cb04d6da9c65eea27a39be65fe00b2471b10d320441.scope: Deactivated successfully.
Oct 11 08:43:44 compute-0 sudo[275000]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:44 compute-0 sudo[275162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:43:44 compute-0 sudo[275162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:44 compute-0 sudo[275162]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:45 compute-0 ceph-mon[74313]: pgmap v1108: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 3.4 MiB/s wr, 87 op/s
Oct 11 08:43:45 compute-0 sudo[275187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:43:45 compute-0 sudo[275187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:45 compute-0 sudo[275187]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:45 compute-0 sudo[275212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:43:45 compute-0 sudo[275212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:45 compute-0 sudo[275212]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:45 compute-0 sudo[275237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:43:45 compute-0 sudo[275237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:45 compute-0 podman[275303]: 2025-10-11 08:43:45.652690125 +0000 UTC m=+0.048808327 container create 6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:43:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:45 compute-0 systemd[1]: Started libpod-conmon-6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6.scope.
Oct 11 08:43:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:43:45 compute-0 podman[275303]: 2025-10-11 08:43:45.634806611 +0000 UTC m=+0.030924833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:43:45 compute-0 podman[275303]: 2025-10-11 08:43:45.748456555 +0000 UTC m=+0.144574787 container init 6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamport, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 08:43:45 compute-0 podman[275303]: 2025-10-11 08:43:45.757098419 +0000 UTC m=+0.153216621 container start 6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamport, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:43:45 compute-0 podman[275303]: 2025-10-11 08:43:45.760222037 +0000 UTC m=+0.156340259 container attach 6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamport, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:43:45 compute-0 naughty_lamport[275320]: 167 167
Oct 11 08:43:45 compute-0 systemd[1]: libpod-6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6.scope: Deactivated successfully.
Oct 11 08:43:45 compute-0 podman[275303]: 2025-10-11 08:43:45.765613269 +0000 UTC m=+0.161731471 container died 6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamport, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 08:43:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-43b6260fea5c0123419133acc677082a020a85c9dd393ca591c37ea3183f0108-merged.mount: Deactivated successfully.
Oct 11 08:43:45 compute-0 podman[275303]: 2025-10-11 08:43:45.798465945 +0000 UTC m=+0.194584147 container remove 6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lamport, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:43:45 compute-0 systemd[1]: libpod-conmon-6223076ade9daeeca83dccf370a201fb89af0e74aa884a06f68e06020d5d02b6.scope: Deactivated successfully.
Oct 11 08:43:46 compute-0 podman[275346]: 2025-10-11 08:43:46.043108952 +0000 UTC m=+0.064836069 container create 4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:43:46 compute-0 systemd[1]: Started libpod-conmon-4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e.scope.
Oct 11 08:43:46 compute-0 podman[275346]: 2025-10-11 08:43:46.018779376 +0000 UTC m=+0.040506473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:43:46 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83a02ed9cb0b4cc429ce6634f698f22b56c208fcead626892b9cf4a92c221c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83a02ed9cb0b4cc429ce6634f698f22b56c208fcead626892b9cf4a92c221c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83a02ed9cb0b4cc429ce6634f698f22b56c208fcead626892b9cf4a92c221c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83a02ed9cb0b4cc429ce6634f698f22b56c208fcead626892b9cf4a92c221c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:43:46 compute-0 podman[275346]: 2025-10-11 08:43:46.150437848 +0000 UTC m=+0.172164925 container init 4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 08:43:46 compute-0 podman[275346]: 2025-10-11 08:43:46.164759602 +0000 UTC m=+0.186486689 container start 4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 08:43:46 compute-0 podman[275346]: 2025-10-11 08:43:46.170315089 +0000 UTC m=+0.192042176 container attach 4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:43:47 compute-0 ceph-mon[74313]: pgmap v1109: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]: {
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:         "osd_id": 2,
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:         "type": "bluestore"
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:     },
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:         "osd_id": 0,
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:         "type": "bluestore"
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:     },
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:         "osd_id": 1,
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:         "type": "bluestore"
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]:     }
Oct 11 08:43:47 compute-0 fervent_keldysh[275363]: }
Oct 11 08:43:47 compute-0 systemd[1]: libpod-4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e.scope: Deactivated successfully.
Oct 11 08:43:47 compute-0 podman[275346]: 2025-10-11 08:43:47.211350159 +0000 UTC m=+1.233077276 container died 4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 08:43:47 compute-0 systemd[1]: libpod-4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e.scope: Consumed 1.056s CPU time.
Oct 11 08:43:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c83a02ed9cb0b4cc429ce6634f698f22b56c208fcead626892b9cf4a92c221c4-merged.mount: Deactivated successfully.
Oct 11 08:43:47 compute-0 podman[275346]: 2025-10-11 08:43:47.288063341 +0000 UTC m=+1.309790458 container remove 4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:43:47 compute-0 systemd[1]: libpod-conmon-4427113bd7f615fd69849860d066f89bd74000ecb009400ea765165ccabfa57e.scope: Deactivated successfully.
Oct 11 08:43:47 compute-0 sudo[275237]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:43:47 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:43:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:43:47 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:43:47 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev f2e51c90-385d-4899-9f88-d3df5d71e3cd does not exist
Oct 11 08:43:47 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev da64d99a-cbe4-45f6-8f78-9ed06aec6743 does not exist
Oct 11 08:43:47 compute-0 sudo[275409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:43:47 compute-0 sudo[275409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:47 compute-0 sudo[275409]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:47 compute-0 sudo[275434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:43:47 compute-0 sudo[275434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:43:47 compute-0 sudo[275434]: pam_unix(sudo:session): session closed for user root
Oct 11 08:43:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:43:48 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:43:48 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:43:49 compute-0 ceph-mon[74313]: pgmap v1110: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:50 compute-0 ceph-mon[74313]: pgmap v1111: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:50 compute-0 nova_compute[260935]: 2025-10-11 08:43:50.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:43:50 compute-0 nova_compute[260935]: 2025-10-11 08:43:50.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:43:50 compute-0 nova_compute[260935]: 2025-10-11 08:43:50.705 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:43:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:51 compute-0 podman[275459]: 2025-10-11 08:43:51.784350674 +0000 UTC m=+0.084758081 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 11 08:43:52 compute-0 nova_compute[260935]: 2025-10-11 08:43:52.424 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquiring lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:43:52 compute-0 nova_compute[260935]: 2025-10-11 08:43:52.425 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:43:52 compute-0 nova_compute[260935]: 2025-10-11 08:43:52.567 2 DEBUG nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:43:52 compute-0 nova_compute[260935]: 2025-10-11 08:43:52.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:43:52 compute-0 ceph-mon[74313]: pgmap v1112: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:52 compute-0 nova_compute[260935]: 2025-10-11 08:43:52.856 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:43:52 compute-0 nova_compute[260935]: 2025-10-11 08:43:52.857 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:43:52 compute-0 nova_compute[260935]: 2025-10-11 08:43:52.867 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:43:52 compute-0 nova_compute[260935]: 2025-10-11 08:43:52.867 2 INFO nova.compute.claims [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:43:53 compute-0 nova_compute[260935]: 2025-10-11 08:43:53.044 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:43:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:43:53.057 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:43:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:43:53.059 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 08:43:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:43:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:43:53 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2697716668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:43:53 compute-0 nova_compute[260935]: 2025-10-11 08:43:53.494 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:43:53 compute-0 nova_compute[260935]: 2025-10-11 08:43:53.502 2 DEBUG nova.compute.provider_tree [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:43:53 compute-0 nova_compute[260935]: 2025-10-11 08:43:53.525 2 DEBUG nova.scheduler.client.report [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:43:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:53 compute-0 nova_compute[260935]: 2025-10-11 08:43:53.687 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:43:53 compute-0 nova_compute[260935]: 2025-10-11 08:43:53.689 2 DEBUG nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:43:53 compute-0 nova_compute[260935]: 2025-10-11 08:43:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:43:53 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2697716668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:43:53 compute-0 nova_compute[260935]: 2025-10-11 08:43:53.867 2 DEBUG nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 11 08:43:53 compute-0 nova_compute[260935]: 2025-10-11 08:43:53.958 2 INFO nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:43:54 compute-0 nova_compute[260935]: 2025-10-11 08:43:54.145 2 DEBUG nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:43:54 compute-0 nova_compute[260935]: 2025-10-11 08:43:54.396 2 DEBUG nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:43:54 compute-0 nova_compute[260935]: 2025-10-11 08:43:54.398 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:43:54 compute-0 nova_compute[260935]: 2025-10-11 08:43:54.399 2 INFO nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Creating image(s)
Oct 11 08:43:54 compute-0 nova_compute[260935]: 2025-10-11 08:43:54.474 2 DEBUG nova.storage.rbd_utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] rbd image dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:43:54 compute-0 nova_compute[260935]: 2025-10-11 08:43:54.511 2 DEBUG nova.storage.rbd_utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] rbd image dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:43:54 compute-0 nova_compute[260935]: 2025-10-11 08:43:54.546 2 DEBUG nova.storage.rbd_utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] rbd image dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:43:54 compute-0 nova_compute[260935]: 2025-10-11 08:43:54.551 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:43:54 compute-0 nova_compute[260935]: 2025-10-11 08:43:54.552 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:43:54 compute-0 nova_compute[260935]: 2025-10-11 08:43:54.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:43:54 compute-0 ceph-mon[74313]: pgmap v1113: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:43:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:43:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:43:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:43:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:43:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:43:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:43:54
Oct 11 08:43:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:43:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:43:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.mgr', '.rgw.root', 'vms']
Oct 11 08:43:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:43:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:55 compute-0 nova_compute[260935]: 2025-10-11 08:43:55.761 2 DEBUG nova.virt.libvirt.imagebackend [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Image locations are: [{'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/03f2fef0-11c0-48e1-b3a0-3e02d898739e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/03f2fef0-11c0-48e1-b3a0-3e02d898739e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 11 08:43:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:43:56.062 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:43:56 compute-0 nova_compute[260935]: 2025-10-11 08:43:56.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:43:56 compute-0 ceph-mon[74313]: pgmap v1114: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:43:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 6 op/s
Oct 11 08:43:57 compute-0 nova_compute[260935]: 2025-10-11 08:43:57.739 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:43:57 compute-0 podman[275557]: 2025-10-11 08:43:57.802205585 +0000 UTC m=+0.101259516 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 08:43:57 compute-0 nova_compute[260935]: 2025-10-11 08:43:57.831 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1.part --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:43:57 compute-0 nova_compute[260935]: 2025-10-11 08:43:57.834 2 DEBUG nova.virt.images [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] 03f2fef0-11c0-48e1-b3a0-3e02d898739e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 11 08:43:57 compute-0 nova_compute[260935]: 2025-10-11 08:43:57.837 2 DEBUG nova.privsep.utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 11 08:43:57 compute-0 nova_compute[260935]: 2025-10-11 08:43:57.837 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1.part /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:43:58 compute-0 nova_compute[260935]: 2025-10-11 08:43:58.051 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1.part /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1.converted" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:43:58 compute-0 nova_compute[260935]: 2025-10-11 08:43:58.057 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:43:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:43:58 compute-0 nova_compute[260935]: 2025-10-11 08:43:58.139 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1.converted --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:43:58 compute-0 nova_compute[260935]: 2025-10-11 08:43:58.142 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:43:58 compute-0 nova_compute[260935]: 2025-10-11 08:43:58.172 2 DEBUG nova.storage.rbd_utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] rbd image dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:43:58 compute-0 nova_compute[260935]: 2025-10-11 08:43:58.176 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:43:58 compute-0 nova_compute[260935]: 2025-10-11 08:43:58.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:43:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct 11 08:43:58 compute-0 ceph-mon[74313]: pgmap v1115: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 6 op/s
Oct 11 08:43:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct 11 08:43:58 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct 11 08:43:58 compute-0 podman[275628]: 2025-10-11 08:43:58.819710061 +0000 UTC m=+0.123665137 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 08:43:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 8 op/s
Oct 11 08:43:59 compute-0 nova_compute[260935]: 2025-10-11 08:43:59.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:43:59 compute-0 nova_compute[260935]: 2025-10-11 08:43:59.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:43:59 compute-0 nova_compute[260935]: 2025-10-11 08:43:59.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:43:59 compute-0 nova_compute[260935]: 2025-10-11 08:43:59.725 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 08:43:59 compute-0 nova_compute[260935]: 2025-10-11 08:43:59.725 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 08:43:59 compute-0 nova_compute[260935]: 2025-10-11 08:43:59.727 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:43:59 compute-0 nova_compute[260935]: 2025-10-11 08:43:59.758 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:43:59 compute-0 nova_compute[260935]: 2025-10-11 08:43:59.758 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:43:59 compute-0 nova_compute[260935]: 2025-10-11 08:43:59.759 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:43:59 compute-0 nova_compute[260935]: 2025-10-11 08:43:59.759 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:43:59 compute-0 nova_compute[260935]: 2025-10-11 08:43:59.760 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:43:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct 11 08:43:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct 11 08:43:59 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct 11 08:43:59 compute-0 ceph-mon[74313]: osdmap e134: 3 total, 3 up, 3 in
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.102 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.925s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.170 2 DEBUG nova.storage.rbd_utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] resizing rbd image dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.274 2 DEBUG nova.objects.instance [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lazy-loading 'migration_context' on Instance uuid dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.296 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.297 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Ensure instance console log exists: /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.298 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.298 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.298 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.300 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:44:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:44:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2580541415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.306 2 WARNING nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.311 2 DEBUG nova.virt.libvirt.host [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.312 2 DEBUG nova.virt.libvirt.host [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.315 2 DEBUG nova.virt.libvirt.host [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.315 2 DEBUG nova.virt.libvirt.host [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.316 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.316 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.317 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.317 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.317 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.318 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.318 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.318 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.319 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.319 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.319 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.319 2 DEBUG nova.virt.hardware [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.323 2 DEBUG nova.privsep.utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.324 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.346 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.574 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.576 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5084MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.576 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.577 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.646 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.646 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.647 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.721 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:44:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1105123448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.779 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:00 compute-0 ceph-mon[74313]: pgmap v1117: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 8 op/s
Oct 11 08:44:00 compute-0 ceph-mon[74313]: osdmap e135: 3 total, 3 up, 3 in
Oct 11 08:44:00 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2580541415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:44:00 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1105123448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.822 2 DEBUG nova.storage.rbd_utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] rbd image dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:00.828 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:44:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2585505457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.154 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.162 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.210 2 ERROR nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [req-0fdb4dc7-b510-40da-8f31-8fd40c7e19bb] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID ead2f521-4d5d-46d9-864c-1aac19134114.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-0fdb4dc7-b510-40da-8f31-8fd40c7e19bb"}]}
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.232 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 08:44:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:44:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3501685134' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.257 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.257 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.272 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.275 2 DEBUG nova.objects.instance [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lazy-loading 'pci_devices' on Instance uuid dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.278 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.298 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:44:01 compute-0 nova_compute[260935]:   <uuid>dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d</uuid>
Oct 11 08:44:01 compute-0 nova_compute[260935]:   <name>instance-00000001</name>
Oct 11 08:44:01 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:44:01 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:44:01 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <nova:name>tempest-AutoAllocateNetworkTest-server-1273474145</nova:name>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:44:00</nova:creationTime>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:44:01 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:44:01 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:44:01 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:44:01 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:44:01 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:44:01 compute-0 nova_compute[260935]:         <nova:user uuid="324b88029ca649458a6186fedaec68e8">tempest-AutoAllocateNetworkTest-1694481392-project-member</nova:user>
Oct 11 08:44:01 compute-0 nova_compute[260935]:         <nova:project uuid="3132c1b9f1b741a98560770caf557fdc">tempest-AutoAllocateNetworkTest-1694481392</nova:project>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:44:01 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:44:01 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <system>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <entry name="serial">dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d</entry>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <entry name="uuid">dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d</entry>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     </system>
Oct 11 08:44:01 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:44:01 compute-0 nova_compute[260935]:   <os>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:   </os>
Oct 11 08:44:01 compute-0 nova_compute[260935]:   <features>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:   </features>
Oct 11 08:44:01 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:44:01 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:44:01 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk">
Oct 11 08:44:01 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       </source>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:44:01 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk.config">
Oct 11 08:44:01 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       </source>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:44:01 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d/console.log" append="off"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <video>
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     </video>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:44:01 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:44:01 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:44:01 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:44:01 compute-0 nova_compute[260935]: </domain>
Oct 11 08:44:01 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.317 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.359 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.360 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.361 2 INFO nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Using config drive
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.390 2 DEBUG nova.storage.rbd_utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] rbd image dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.399 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 10 op/s
Oct 11 08:44:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2585505457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:44:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3501685134' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:44:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:44:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3466260857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.850 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.858 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.897 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updated inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 with generation 7 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.897 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating resource provider ead2f521-4d5d-46d9-864c-1aac19134114 generation from 7 to 8 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.898 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.921 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:44:01 compute-0 nova_compute[260935]: 2025-10-11 08:44:01.922 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:02 compute-0 nova_compute[260935]: 2025-10-11 08:44:02.191 2 INFO nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Creating config drive at /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d/disk.config
Oct 11 08:44:02 compute-0 nova_compute[260935]: 2025-10-11 08:44:02.199 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0kgibxgi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:02 compute-0 nova_compute[260935]: 2025-10-11 08:44:02.355 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0kgibxgi" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:02 compute-0 nova_compute[260935]: 2025-10-11 08:44:02.392 2 DEBUG nova.storage.rbd_utils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] rbd image dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:02 compute-0 nova_compute[260935]: 2025-10-11 08:44:02.397 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d/disk.config dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:02 compute-0 nova_compute[260935]: 2025-10-11 08:44:02.577 2 DEBUG oslo_concurrency.processutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d/disk.config dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:02 compute-0 nova_compute[260935]: 2025-10-11 08:44:02.578 2 INFO nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Deleting local config drive /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d/disk.config because it was imported into RBD.
Oct 11 08:44:02 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 11 08:44:02 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 11 08:44:02 compute-0 systemd-machined[215705]: New machine qemu-1-instance-00000001.
Oct 11 08:44:02 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct 11 08:44:02 compute-0 ceph-mon[74313]: pgmap v1119: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 10 op/s
Oct 11 08:44:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3466260857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:44:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:44:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 53 op/s
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.012 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172244.0114164, dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.013 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] VM Resumed (Lifecycle Event)
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.023 2 DEBUG nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.024 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.028 2 INFO nova.virt.libvirt.driver [-] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Instance spawned successfully.
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.029 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.069 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.080 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.086 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.087 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.088 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.088 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.090 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.090 2 DEBUG nova.virt.libvirt.driver [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.129 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.129 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172244.022236, dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.130 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] VM Started (Lifecycle Event)
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.166 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.174 2 INFO nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Took 9.78 seconds to spawn the instance on the hypervisor.
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.176 2 DEBUG nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.181 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.215 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.248 2 INFO nova.compute.manager [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Took 11.44 seconds to build instance.
Oct 11 08:44:04 compute-0 nova_compute[260935]: 2025-10-11 08:44:04.270 2 DEBUG oslo_concurrency.lockutils [None req-370e9138-7bb3-4690-bed7-16f93c50b542 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003460606319593671 of space, bias 1.0, pg target 0.10381818958781013 quantized to 32 (current 32)
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:44:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:44:04 compute-0 ceph-mon[74313]: pgmap v1120: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 53 op/s
Oct 11 08:44:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Oct 11 08:44:06 compute-0 nova_compute[260935]: 2025-10-11 08:44:06.025 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquiring lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:06 compute-0 nova_compute[260935]: 2025-10-11 08:44:06.026 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:06 compute-0 nova_compute[260935]: 2025-10-11 08:44:06.027 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquiring lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:06 compute-0 nova_compute[260935]: 2025-10-11 08:44:06.027 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:06 compute-0 nova_compute[260935]: 2025-10-11 08:44:06.028 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:06 compute-0 nova_compute[260935]: 2025-10-11 08:44:06.029 2 INFO nova.compute.manager [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Terminating instance
Oct 11 08:44:06 compute-0 nova_compute[260935]: 2025-10-11 08:44:06.031 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquiring lock "refresh_cache-dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:44:06 compute-0 nova_compute[260935]: 2025-10-11 08:44:06.031 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquired lock "refresh_cache-dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:44:06 compute-0 nova_compute[260935]: 2025-10-11 08:44:06.031 2 DEBUG nova.network.neutron [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:44:06 compute-0 nova_compute[260935]: 2025-10-11 08:44:06.253 2 DEBUG nova.network.neutron [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:44:06 compute-0 nova_compute[260935]: 2025-10-11 08:44:06.566 2 DEBUG nova.network.neutron [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:44:06 compute-0 nova_compute[260935]: 2025-10-11 08:44:06.581 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Releasing lock "refresh_cache-dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:44:06 compute-0 nova_compute[260935]: 2025-10-11 08:44:06.582 2 DEBUG nova.compute.manager [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:44:06 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct 11 08:44:06 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3.869s CPU time.
Oct 11 08:44:06 compute-0 systemd-machined[215705]: Machine qemu-1-instance-00000001 terminated.
Oct 11 08:44:06 compute-0 nova_compute[260935]: 2025-10-11 08:44:06.806 2 INFO nova.virt.libvirt.driver [-] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Instance destroyed successfully.
Oct 11 08:44:06 compute-0 nova_compute[260935]: 2025-10-11 08:44:06.806 2 DEBUG nova.objects.instance [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lazy-loading 'resources' on Instance uuid dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:44:06 compute-0 ceph-mon[74313]: pgmap v1121: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Oct 11 08:44:07 compute-0 nova_compute[260935]: 2025-10-11 08:44:07.191 2 INFO nova.virt.libvirt.driver [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Deleting instance files /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_del
Oct 11 08:44:07 compute-0 nova_compute[260935]: 2025-10-11 08:44:07.192 2 INFO nova.virt.libvirt.driver [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Deletion of /var/lib/nova/instances/dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d_del complete
Oct 11 08:44:07 compute-0 nova_compute[260935]: 2025-10-11 08:44:07.258 2 DEBUG nova.virt.libvirt.host [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct 11 08:44:07 compute-0 nova_compute[260935]: 2025-10-11 08:44:07.259 2 INFO nova.virt.libvirt.host [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] UEFI support detected
Oct 11 08:44:07 compute-0 nova_compute[260935]: 2025-10-11 08:44:07.260 2 INFO nova.compute.manager [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Took 0.68 seconds to destroy the instance on the hypervisor.
Oct 11 08:44:07 compute-0 nova_compute[260935]: 2025-10-11 08:44:07.261 2 DEBUG oslo.service.loopingcall [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:44:07 compute-0 nova_compute[260935]: 2025-10-11 08:44:07.261 2 DEBUG nova.compute.manager [-] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:44:07 compute-0 nova_compute[260935]: 2025-10-11 08:44:07.261 2 DEBUG nova.network.neutron [-] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:44:07 compute-0 nova_compute[260935]: 2025-10-11 08:44:07.616 2 DEBUG nova.network.neutron [-] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:44:07 compute-0 nova_compute[260935]: 2025-10-11 08:44:07.650 2 DEBUG nova.network.neutron [-] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:44:07 compute-0 nova_compute[260935]: 2025-10-11 08:44:07.669 2 INFO nova.compute.manager [-] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Took 0.41 seconds to deallocate network for instance.
Oct 11 08:44:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 171 op/s
Oct 11 08:44:07 compute-0 nova_compute[260935]: 2025-10-11 08:44:07.712 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:07 compute-0 nova_compute[260935]: 2025-10-11 08:44:07.712 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:07 compute-0 nova_compute[260935]: 2025-10-11 08:44:07.834 2 DEBUG oslo_concurrency.processutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:44:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct 11 08:44:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct 11 08:44:08 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct 11 08:44:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:44:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2127591540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:44:08 compute-0 nova_compute[260935]: 2025-10-11 08:44:08.335 2 DEBUG oslo_concurrency.processutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:08 compute-0 nova_compute[260935]: 2025-10-11 08:44:08.341 2 DEBUG nova.compute.provider_tree [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:44:08 compute-0 nova_compute[260935]: 2025-10-11 08:44:08.356 2 DEBUG nova.scheduler.client.report [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:44:08 compute-0 nova_compute[260935]: 2025-10-11 08:44:08.375 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:08 compute-0 nova_compute[260935]: 2025-10-11 08:44:08.399 2 INFO nova.scheduler.client.report [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Deleted allocations for instance dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d
Oct 11 08:44:08 compute-0 nova_compute[260935]: 2025-10-11 08:44:08.466 2 DEBUG oslo_concurrency.lockutils [None req-2b909e9e-40e9-4a6b-bf71-b74796f14aa6 324b88029ca649458a6186fedaec68e8 3132c1b9f1b741a98560770caf557fdc - - default default] Lock "dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:09 compute-0 ceph-mon[74313]: pgmap v1122: 321 pgs: 321 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 171 op/s
Oct 11 08:44:09 compute-0 ceph-mon[74313]: osdmap e136: 3 total, 3 up, 3 in
Oct 11 08:44:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2127591540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:44:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Oct 11 08:44:11 compute-0 ceph-mon[74313]: pgmap v1124: 321 pgs: 321 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Oct 11 08:44:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Oct 11 08:44:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:44:13 compute-0 ceph-mon[74313]: pgmap v1125: 321 pgs: 321 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Oct 11 08:44:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 118 op/s
Oct 11 08:44:14 compute-0 podman[276031]: 2025-10-11 08:44:14.826773385 +0000 UTC m=+0.092491019 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:44:15 compute-0 ceph-mon[74313]: pgmap v1126: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 118 op/s
Oct 11 08:44:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:15.175 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:15.176 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:15.176 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 118 op/s
Oct 11 08:44:17 compute-0 ceph-mon[74313]: pgmap v1127: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 118 op/s
Oct 11 08:44:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1128: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Oct 11 08:44:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:44:19 compute-0 sshd-session[276048]: Invalid user brian from 152.32.213.170 port 43706
Oct 11 08:44:19 compute-0 sshd-session[276048]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 08:44:19 compute-0 sshd-session[276048]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 08:44:19 compute-0 ceph-mon[74313]: pgmap v1128: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Oct 11 08:44:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 88 B/s rd, 0 B/s wr, 0 op/s
Oct 11 08:44:21 compute-0 ceph-mon[74313]: pgmap v1129: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 88 B/s rd, 0 B/s wr, 0 op/s
Oct 11 08:44:21 compute-0 sshd-session[276048]: Failed password for invalid user brian from 152.32.213.170 port 43706 ssh2
Oct 11 08:44:21 compute-0 sshd-session[276048]: Received disconnect from 152.32.213.170 port 43706:11: Bye Bye [preauth]
Oct 11 08:44:21 compute-0 sshd-session[276048]: Disconnected from invalid user brian 152.32.213.170 port 43706 [preauth]
Oct 11 08:44:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 11 08:44:21 compute-0 nova_compute[260935]: 2025-10-11 08:44:21.805 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172246.8035655, dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:44:21 compute-0 nova_compute[260935]: 2025-10-11 08:44:21.806 2 INFO nova.compute.manager [-] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] VM Stopped (Lifecycle Event)
Oct 11 08:44:21 compute-0 nova_compute[260935]: 2025-10-11 08:44:21.904 2 DEBUG nova.compute.manager [None req-a681d7d1-255b-4b48-b5a9-ed98ee233506 - - - - - -] [instance: dcd0a4bd-eaa1-430c-8f7f-2e6a0c10904d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:44:22 compute-0 podman[276050]: 2025-10-11 08:44:22.823866794 +0000 UTC m=+0.111456533 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 08:44:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:44:23 compute-0 ceph-mon[74313]: pgmap v1130: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 11 08:44:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 11 08:44:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:44:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:44:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:44:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:44:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:44:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:44:25 compute-0 ceph-mon[74313]: pgmap v1131: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 11 08:44:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:27 compute-0 ceph-mon[74313]: pgmap v1132: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:44:28 compute-0 podman[276070]: 2025-10-11 08:44:28.805426903 +0000 UTC m=+0.097406497 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 08:44:29 compute-0 ceph-mon[74313]: pgmap v1133: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:29 compute-0 podman[276090]: 2025-10-11 08:44:29.840753262 +0000 UTC m=+0.135360297 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 08:44:31 compute-0 ceph-mon[74313]: pgmap v1134: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:44:33 compute-0 ceph-mon[74313]: pgmap v1135: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:35 compute-0 ceph-mon[74313]: pgmap v1136: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:37 compute-0 ceph-mon[74313]: pgmap v1137: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:44:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2182075254' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:44:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:44:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2182075254' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:44:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:38 compute-0 nova_compute[260935]: 2025-10-11 08:44:38.084 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "aac0adcc-167d-400a-a04a-93767356cc9c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:38 compute-0 nova_compute[260935]: 2025-10-11 08:44:38.085 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:44:38 compute-0 nova_compute[260935]: 2025-10-11 08:44:38.136 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:44:38 compute-0 nova_compute[260935]: 2025-10-11 08:44:38.256 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:38 compute-0 nova_compute[260935]: 2025-10-11 08:44:38.256 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:38 compute-0 nova_compute[260935]: 2025-10-11 08:44:38.268 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:44:38 compute-0 nova_compute[260935]: 2025-10-11 08:44:38.268 2 INFO nova.compute.claims [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:44:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2182075254' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:44:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2182075254' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:44:38 compute-0 nova_compute[260935]: 2025-10-11 08:44:38.397 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:44:38 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/253931390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:44:38 compute-0 nova_compute[260935]: 2025-10-11 08:44:38.963 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:38 compute-0 nova_compute[260935]: 2025-10-11 08:44:38.972 2 DEBUG nova.compute.provider_tree [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:44:38 compute-0 nova_compute[260935]: 2025-10-11 08:44:38.991 2 DEBUG nova.scheduler.client.report [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.017 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.019 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.078 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.078 2 DEBUG nova.network.neutron [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.268 2 INFO nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.295 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:44:39 compute-0 ceph-mon[74313]: pgmap v1138: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:39 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/253931390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.383 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.384 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.385 2 INFO nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Creating image(s)
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.417 2 DEBUG nova.storage.rbd_utils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image aac0adcc-167d-400a-a04a-93767356cc9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.451 2 DEBUG nova.storage.rbd_utils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image aac0adcc-167d-400a-a04a-93767356cc9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.482 2 DEBUG nova.storage.rbd_utils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image aac0adcc-167d-400a-a04a-93767356cc9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.486 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.532 2 WARNING oslo_policy.policy [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.533 2 WARNING oslo_policy.policy [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.538 2 DEBUG nova.policy [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '113798b24d1e4a9e91db94214d254ea9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e6573eaf6684f1c99a553fd46667a67', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.573 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.573 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.574 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.574 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.598 2 DEBUG nova.storage.rbd_utils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image aac0adcc-167d-400a-a04a-93767356cc9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.601 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 aac0adcc-167d-400a-a04a-93767356cc9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.906 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 aac0adcc-167d-400a-a04a-93767356cc9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:39 compute-0 nova_compute[260935]: 2025-10-11 08:44:39.987 2 DEBUG nova.storage.rbd_utils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] resizing rbd image aac0adcc-167d-400a-a04a-93767356cc9c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:44:40 compute-0 nova_compute[260935]: 2025-10-11 08:44:40.103 2 DEBUG nova.objects.instance [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lazy-loading 'migration_context' on Instance uuid aac0adcc-167d-400a-a04a-93767356cc9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:44:40 compute-0 nova_compute[260935]: 2025-10-11 08:44:40.127 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:44:40 compute-0 nova_compute[260935]: 2025-10-11 08:44:40.128 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Ensure instance console log exists: /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:44:40 compute-0 nova_compute[260935]: 2025-10-11 08:44:40.128 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:40 compute-0 nova_compute[260935]: 2025-10-11 08:44:40.129 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:40 compute-0 nova_compute[260935]: 2025-10-11 08:44:40.129 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:41 compute-0 ceph-mon[74313]: pgmap v1139: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:41 compute-0 nova_compute[260935]: 2025-10-11 08:44:41.500 2 DEBUG nova.network.neutron [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Successfully created port: 6e75116e-1034-4d1a-8320-6755ee57c51f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:44:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:42 compute-0 nova_compute[260935]: 2025-10-11 08:44:42.476 2 DEBUG nova.network.neutron [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Successfully updated port: 6e75116e-1034-4d1a-8320-6755ee57c51f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:44:42 compute-0 nova_compute[260935]: 2025-10-11 08:44:42.511 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:44:42 compute-0 nova_compute[260935]: 2025-10-11 08:44:42.512 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquired lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:44:42 compute-0 nova_compute[260935]: 2025-10-11 08:44:42.512 2 DEBUG nova.network.neutron [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:44:42 compute-0 nova_compute[260935]: 2025-10-11 08:44:42.792 2 DEBUG nova.network.neutron [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:44:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:44:43 compute-0 nova_compute[260935]: 2025-10-11 08:44:43.317 2 DEBUG nova.compute.manager [req-423a4bbd-f546-45a6-a7f7-655c30786510 req-c6ce626f-bf81-492b-bb5a-4a7ae9ee0845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received event network-changed-6e75116e-1034-4d1a-8320-6755ee57c51f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:44:43 compute-0 nova_compute[260935]: 2025-10-11 08:44:43.318 2 DEBUG nova.compute.manager [req-423a4bbd-f546-45a6-a7f7-655c30786510 req-c6ce626f-bf81-492b-bb5a-4a7ae9ee0845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Refreshing instance network info cache due to event network-changed-6e75116e-1034-4d1a-8320-6755ee57c51f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:44:43 compute-0 nova_compute[260935]: 2025-10-11 08:44:43.319 2 DEBUG oslo_concurrency.lockutils [req-423a4bbd-f546-45a6-a7f7-655c30786510 req-c6ce626f-bf81-492b-bb5a-4a7ae9ee0845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:44:43 compute-0 ceph-mon[74313]: pgmap v1140: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:44:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.076 2 DEBUG nova.network.neutron [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Updating instance_info_cache with network_info: [{"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.100 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Releasing lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.100 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Instance network_info: |[{"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.101 2 DEBUG oslo_concurrency.lockutils [req-423a4bbd-f546-45a6-a7f7-655c30786510 req-c6ce626f-bf81-492b-bb5a-4a7ae9ee0845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.101 2 DEBUG nova.network.neutron [req-423a4bbd-f546-45a6-a7f7-655c30786510 req-c6ce626f-bf81-492b-bb5a-4a7ae9ee0845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Refreshing network info cache for port 6e75116e-1034-4d1a-8320-6755ee57c51f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.104 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Start _get_guest_xml network_info=[{"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.108 2 WARNING nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.113 2 DEBUG nova.virt.libvirt.host [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.114 2 DEBUG nova.virt.libvirt.host [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.120 2 DEBUG nova.virt.libvirt.host [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.121 2 DEBUG nova.virt.libvirt.host [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.122 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.123 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:44:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1952576325',id=25,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1549572389',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.124 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.124 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.125 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.125 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.125 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.126 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.126 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.127 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.127 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.128 2 DEBUG nova.virt.hardware [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.135 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.251 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "10e2549e-21d4-44fb-acbf-9104ec32970f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.251 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "10e2549e-21d4-44fb-acbf-9104ec32970f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.278 2 DEBUG nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.352 2 DEBUG oslo_concurrency.processutils [None req-7b6ccb2f-f791-4a53-bf48-6ac97096ba5f 1ae5b4d76e5b4b87b3371b1af07b4b86 e3ee64d88df445bf9d54ac52ea80e580 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.393 2 DEBUG oslo_concurrency.processutils [None req-7b6ccb2f-f791-4a53-bf48-6ac97096ba5f 1ae5b4d76e5b4b87b3371b1af07b4b86 e3ee64d88df445bf9d54ac52ea80e580 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.419 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.420 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.427 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.428 2 INFO nova.compute.claims [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.580 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:44:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1642061443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.601 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.622 2 DEBUG nova.storage.rbd_utils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image aac0adcc-167d-400a-a04a-93767356cc9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:44 compute-0 nova_compute[260935]: 2025-10-11 08:44:44.625 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:44:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1329841632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:44:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:44:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1463306307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.074 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.084 2 DEBUG nova.compute.provider_tree [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.090 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.092 2 DEBUG nova.virt.libvirt.vif [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:44:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-189754419',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-189754419',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(25),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-189754419',id=2,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=25,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbdle8Jb2CGpQv4SpbWMUmm3hiSixEA0s5KapvSItFHc3x9bUVgKTLkLVOIdrKirkN+vbb4fO8g4FzWk9eXgUjBokGNviLy6eM2HjHbeF2CcHrjTEf0RujQy30DD/WL/w==',key_name='tempest-keypair-358550695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e6573eaf6684f1c99a553fd46667a67',ramdisk_id='',reservation_id='r-bh0dixv5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:44:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='113798b24d1e4a9e91db94214d254ea9',uuid=aac0adcc-167d-400a-a04a-93767356cc9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.093 2 DEBUG nova.network.os_vif_util [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converting VIF {"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.094 2 DEBUG nova.network.os_vif_util [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:7b:bc,bridge_name='br-int',has_traffic_filtering=True,id=6e75116e-1034-4d1a-8320-6755ee57c51f,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e75116e-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.098 2 DEBUG nova.objects.instance [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lazy-loading 'pci_devices' on Instance uuid aac0adcc-167d-400a-a04a-93767356cc9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.114 2 DEBUG nova.scheduler.client.report [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.152 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:44:45 compute-0 nova_compute[260935]:   <uuid>aac0adcc-167d-400a-a04a-93767356cc9c</uuid>
Oct 11 08:44:45 compute-0 nova_compute[260935]:   <name>instance-00000002</name>
Oct 11 08:44:45 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:44:45 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:44:45 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-189754419</nova:name>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:44:44</nova:creationTime>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <nova:flavor name="tempest-flavor_with_ephemeral_0-1549572389">
Oct 11 08:44:45 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:44:45 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:44:45 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:44:45 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:44:45 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:44:45 compute-0 nova_compute[260935]:         <nova:user uuid="113798b24d1e4a9e91db94214d254ea9">tempest-ServersWithSpecificFlavorTestJSON-1781130606-project-member</nova:user>
Oct 11 08:44:45 compute-0 nova_compute[260935]:         <nova:project uuid="4e6573eaf6684f1c99a553fd46667a67">tempest-ServersWithSpecificFlavorTestJSON-1781130606</nova:project>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:44:45 compute-0 nova_compute[260935]:         <nova:port uuid="6e75116e-1034-4d1a-8320-6755ee57c51f">
Oct 11 08:44:45 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:44:45 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:44:45 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <system>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <entry name="serial">aac0adcc-167d-400a-a04a-93767356cc9c</entry>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <entry name="uuid">aac0adcc-167d-400a-a04a-93767356cc9c</entry>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     </system>
Oct 11 08:44:45 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:44:45 compute-0 nova_compute[260935]:   <os>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:   </os>
Oct 11 08:44:45 compute-0 nova_compute[260935]:   <features>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:   </features>
Oct 11 08:44:45 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:44:45 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:44:45 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/aac0adcc-167d-400a-a04a-93767356cc9c_disk">
Oct 11 08:44:45 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       </source>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:44:45 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/aac0adcc-167d-400a-a04a-93767356cc9c_disk.config">
Oct 11 08:44:45 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       </source>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:44:45 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:14:7b:bc"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <target dev="tap6e75116e-10"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c/console.log" append="off"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <video>
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     </video>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:44:45 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:44:45 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:44:45 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:44:45 compute-0 nova_compute[260935]: </domain>
Oct 11 08:44:45 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.154 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Preparing to wait for external event network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.155 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.155 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.156 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.157 2 DEBUG nova.virt.libvirt.vif [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:44:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-189754419',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-189754419',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(25),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-189754419',id=2,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=25,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbdle8Jb2CGpQv4SpbWMUmm3hiSixEA0s5KapvSItFHc3x9bUVgKTLkLVOIdrKirkN+vbb4fO8g4FzWk9eXgUjBokGNviLy6eM2HjHbeF2CcHrjTEf0RujQy30DD/WL/w==',key_name='tempest-keypair-358550695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e6573eaf6684f1c99a553fd46667a67',ramdisk_id='',reservation_id='r-bh0dixv5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:44:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='113798b24d1e4a9e91db94214d254ea9',uuid=aac0adcc-167d-400a-a04a-93767356cc9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.158 2 DEBUG nova.network.os_vif_util [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converting VIF {"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.159 2 DEBUG nova.network.os_vif_util [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:7b:bc,bridge_name='br-int',has_traffic_filtering=True,id=6e75116e-1034-4d1a-8320-6755ee57c51f,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e75116e-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.159 2 DEBUG os_vif [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:7b:bc,bridge_name='br-int',has_traffic_filtering=True,id=6e75116e-1034-4d1a-8320-6755ee57c51f,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e75116e-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.203 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.204 2 DEBUG nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.216 2 DEBUG ovsdbapp.backend.ovs_idl [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.216 2 DEBUG ovsdbapp.backend.ovs_idl [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.216 2 DEBUG ovsdbapp.backend.ovs_idl [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.233 2 INFO oslo.privsep.daemon [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp05yxqiq3/privsep.sock']
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.262 2 DEBUG nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.263 2 DEBUG nova.network.neutron [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.282 2 INFO nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.301 2 DEBUG nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:44:45 compute-0 ceph-mon[74313]: pgmap v1141: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 08:44:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1642061443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:44:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1329841632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:44:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1463306307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.525 2 DEBUG nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.526 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.527 2 INFO nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Creating image(s)
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.550 2 DEBUG nova.storage.rbd_utils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image 10e2549e-21d4-44fb-acbf-9104ec32970f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.577 2 DEBUG nova.storage.rbd_utils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image 10e2549e-21d4-44fb-acbf-9104ec32970f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.600 2 DEBUG nova.storage.rbd_utils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image 10e2549e-21d4-44fb-acbf-9104ec32970f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.605 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.688 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.690 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.691 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.692 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.726 2 DEBUG nova.storage.rbd_utils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image 10e2549e-21d4-44fb-acbf-9104ec32970f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.732 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 10e2549e-21d4-44fb-acbf-9104ec32970f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.772 2 DEBUG nova.network.neutron [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 08:44:45 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.773 2 DEBUG nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:44:45 compute-0 podman[276449]: 2025-10-11 08:44:45.785342446 +0000 UTC m=+0.082607550 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.065 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 10e2549e-21d4-44fb-acbf-9104ec32970f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.066 2 INFO oslo.privsep.daemon [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Spawned new privsep daemon via rootwrap
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.902 1319 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.915 1319 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.920 1319 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:45.920 1319 INFO oslo.privsep.daemon [-] privsep daemon running as pid 1319
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.133 2 DEBUG nova.storage.rbd_utils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] resizing rbd image 10e2549e-21d4-44fb-acbf-9104ec32970f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.229 2 DEBUG nova.network.neutron [req-423a4bbd-f546-45a6-a7f7-655c30786510 req-c6ce626f-bf81-492b-bb5a-4a7ae9ee0845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Updated VIF entry in instance network info cache for port 6e75116e-1034-4d1a-8320-6755ee57c51f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.230 2 DEBUG nova.network.neutron [req-423a4bbd-f546-45a6-a7f7-655c30786510 req-c6ce626f-bf81-492b-bb5a-4a7ae9ee0845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Updating instance_info_cache with network_info: [{"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.236 2 DEBUG nova.objects.instance [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lazy-loading 'migration_context' on Instance uuid 10e2549e-21d4-44fb-acbf-9104ec32970f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.258 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.259 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Ensure instance console log exists: /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.259 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.260 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.260 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.261 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.262 2 DEBUG oslo_concurrency.lockutils [req-423a4bbd-f546-45a6-a7f7-655c30786510 req-c6ce626f-bf81-492b-bb5a-4a7ae9ee0845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.265 2 WARNING nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.269 2 DEBUG nova.virt.libvirt.host [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.270 2 DEBUG nova.virt.libvirt.host [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.273 2 DEBUG nova.virt.libvirt.host [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.273 2 DEBUG nova.virt.libvirt.host [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.273 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.273 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.274 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.274 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.274 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.275 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.275 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.275 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.276 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.276 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.276 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.276 2 DEBUG nova.virt.hardware [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.279 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.465 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e75116e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.465 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e75116e-10, col_values=(('external_ids', {'iface-id': '6e75116e-1034-4d1a-8320-6755ee57c51f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:7b:bc', 'vm-uuid': 'aac0adcc-167d-400a-a04a-93767356cc9c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:46 compute-0 NetworkManager[44960]: <info>  [1760172286.4687] manager: (tap6e75116e-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.477 2 INFO os_vif [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:7b:bc,bridge_name='br-int',has_traffic_filtering=True,id=6e75116e-1034-4d1a-8320-6755ee57c51f,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e75116e-10')
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.646 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.647 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.648 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] No VIF found with MAC fa:16:3e:14:7b:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.649 2 INFO nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Using config drive
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.683 2 DEBUG nova.storage.rbd_utils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image aac0adcc-167d-400a-a04a-93767356cc9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:44:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1972186408' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.745 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.776 2 DEBUG nova.storage.rbd_utils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image 10e2549e-21d4-44fb-acbf-9104ec32970f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:46 compute-0 nova_compute[260935]: 2025-10-11 08:44:46.781 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:44:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1150741359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.225 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.228 2 DEBUG nova.objects.instance [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lazy-loading 'pci_devices' on Instance uuid 10e2549e-21d4-44fb-acbf-9104ec32970f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.247 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:44:47 compute-0 nova_compute[260935]:   <uuid>10e2549e-21d4-44fb-acbf-9104ec32970f</uuid>
Oct 11 08:44:47 compute-0 nova_compute[260935]:   <name>instance-00000003</name>
Oct 11 08:44:47 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:44:47 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:44:47 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <nova:name>tempest-LiveMigrationNegativeTest-server-846905837</nova:name>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:44:46</nova:creationTime>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:44:47 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:44:47 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:44:47 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:44:47 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:44:47 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:44:47 compute-0 nova_compute[260935]:         <nova:user uuid="b1cf6609831e4cd3b6261e999c3c068e">tempest-LiveMigrationNegativeTest-1774903191-project-member</nova:user>
Oct 11 08:44:47 compute-0 nova_compute[260935]:         <nova:project uuid="507d5ff420164426beedc0ce7977570a">tempest-LiveMigrationNegativeTest-1774903191</nova:project>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:44:47 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:44:47 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <system>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <entry name="serial">10e2549e-21d4-44fb-acbf-9104ec32970f</entry>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <entry name="uuid">10e2549e-21d4-44fb-acbf-9104ec32970f</entry>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     </system>
Oct 11 08:44:47 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:44:47 compute-0 nova_compute[260935]:   <os>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:   </os>
Oct 11 08:44:47 compute-0 nova_compute[260935]:   <features>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:   </features>
Oct 11 08:44:47 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:44:47 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:44:47 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/10e2549e-21d4-44fb-acbf-9104ec32970f_disk">
Oct 11 08:44:47 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       </source>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:44:47 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/10e2549e-21d4-44fb-acbf-9104ec32970f_disk.config">
Oct 11 08:44:47 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       </source>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:44:47 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f/console.log" append="off"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <video>
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     </video>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:44:47 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:44:47 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:44:47 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:44:47 compute-0 nova_compute[260935]: </domain>
Oct 11 08:44:47 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.260 2 INFO nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Creating config drive at /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c/disk.config
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.274 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0j1acf0s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:47 compute-0 ceph-mon[74313]: pgmap v1142: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 08:44:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1972186408' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:44:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1150741359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.352 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.354 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.355 2 INFO nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Using config drive
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.391 2 DEBUG nova.storage.rbd_utils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image 10e2549e-21d4-44fb-acbf-9104ec32970f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.422 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0j1acf0s" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.447 2 DEBUG nova.storage.rbd_utils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image aac0adcc-167d-400a-a04a-93767356cc9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.452 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c/disk.config aac0adcc-167d-400a-a04a-93767356cc9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.611 2 DEBUG oslo_concurrency.processutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c/disk.config aac0adcc-167d-400a-a04a-93767356cc9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.613 2 INFO nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Deleting local config drive /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c/disk.config because it was imported into RBD.
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.667 2 INFO nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Creating config drive at /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f/disk.config
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.673 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0y8helu4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:47 compute-0 sudo[276724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:44:47 compute-0 sudo[276724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 08:44:47 compute-0 sudo[276724]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:47 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 11 08:44:47 compute-0 kernel: tap6e75116e-10: entered promiscuous mode
Oct 11 08:44:47 compute-0 NetworkManager[44960]: <info>  [1760172287.7113] manager: (tap6e75116e-10): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Oct 11 08:44:47 compute-0 ovn_controller[152945]: 2025-10-11T08:44:47Z|00027|binding|INFO|Claiming lport 6e75116e-1034-4d1a-8320-6755ee57c51f for this chassis.
Oct 11 08:44:47 compute-0 ovn_controller[152945]: 2025-10-11T08:44:47Z|00028|binding|INFO|6e75116e-1034-4d1a-8320-6755ee57c51f: Claiming fa:16:3e:14:7b:bc 10.100.0.9
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:47.729 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:7b:bc 10.100.0.9'], port_security=['fa:16:3e:14:7b:bc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'aac0adcc-167d-400a-a04a-93767356cc9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e6573eaf6684f1c99a553fd46667a67', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6b6e9add-724a-49a4-90de-2f4e4912ca78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be56cb0d-617a-4b33-ac01-b32b133373b2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=6e75116e-1034-4d1a-8320-6755ee57c51f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:44:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:47.730 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 6e75116e-1034-4d1a-8320-6755ee57c51f in datapath 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c bound to our chassis
Oct 11 08:44:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:47.733 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c
Oct 11 08:44:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:47.734 162815 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmps_0luhbi/privsep.sock']
Oct 11 08:44:47 compute-0 systemd-udevd[276786]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:44:47 compute-0 NetworkManager[44960]: <info>  [1760172287.7836] device (tap6e75116e-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:44:47 compute-0 NetworkManager[44960]: <info>  [1760172287.7855] device (tap6e75116e-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:44:47 compute-0 sudo[276763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:44:47 compute-0 systemd-machined[215705]: New machine qemu-2-instance-00000002.
Oct 11 08:44:47 compute-0 sudo[276763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:47 compute-0 sudo[276763]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.816 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0y8helu4" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:47 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Oct 11 08:44:47 compute-0 ovn_controller[152945]: 2025-10-11T08:44:47Z|00029|binding|INFO|Setting lport 6e75116e-1034-4d1a-8320-6755ee57c51f ovn-installed in OVS
Oct 11 08:44:47 compute-0 ovn_controller[152945]: 2025-10-11T08:44:47Z|00030|binding|INFO|Setting lport 6e75116e-1034-4d1a-8320-6755ee57c51f up in Southbound
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.859 2 DEBUG nova.storage.rbd_utils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image 10e2549e-21d4-44fb-acbf-9104ec32970f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.866 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f/disk.config 10e2549e-21d4-44fb-acbf-9104ec32970f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:47 compute-0 nova_compute[260935]: 2025-10-11 08:44:47.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:47 compute-0 sudo[276810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:44:47 compute-0 sudo[276810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:47 compute-0 sudo[276810]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:47 compute-0 sudo[276853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 08:44:47 compute-0 sudo[276853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:48 compute-0 nova_compute[260935]: 2025-10-11 08:44:48.043 2 DEBUG oslo_concurrency.processutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f/disk.config 10e2549e-21d4-44fb-acbf-9104ec32970f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:48 compute-0 nova_compute[260935]: 2025-10-11 08:44:48.044 2 INFO nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Deleting local config drive /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f/disk.config because it was imported into RBD.
Oct 11 08:44:48 compute-0 systemd-machined[215705]: New machine qemu-3-instance-00000003.
Oct 11 08:44:48 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Oct 11 08:44:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:44:48 compute-0 nova_compute[260935]: 2025-10-11 08:44:48.329 2 DEBUG nova.compute.manager [req-0905cdef-2cad-4a3f-966f-d0ad4574bc1b req-5d867cd1-62f5-45f8-8a23-60d561be8be1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received event network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:44:48 compute-0 nova_compute[260935]: 2025-10-11 08:44:48.329 2 DEBUG oslo_concurrency.lockutils [req-0905cdef-2cad-4a3f-966f-d0ad4574bc1b req-5d867cd1-62f5-45f8-8a23-60d561be8be1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:48 compute-0 nova_compute[260935]: 2025-10-11 08:44:48.331 2 DEBUG oslo_concurrency.lockutils [req-0905cdef-2cad-4a3f-966f-d0ad4574bc1b req-5d867cd1-62f5-45f8-8a23-60d561be8be1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:48 compute-0 nova_compute[260935]: 2025-10-11 08:44:48.332 2 DEBUG oslo_concurrency.lockutils [req-0905cdef-2cad-4a3f-966f-d0ad4574bc1b req-5d867cd1-62f5-45f8-8a23-60d561be8be1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:48 compute-0 nova_compute[260935]: 2025-10-11 08:44:48.332 2 DEBUG nova.compute.manager [req-0905cdef-2cad-4a3f-966f-d0ad4574bc1b req-5d867cd1-62f5-45f8-8a23-60d561be8be1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Processing event network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:44:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:48.393 162815 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 11 08:44:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:48.394 162815 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmps_0luhbi/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 11 08:44:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:48.274 276945 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 11 08:44:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:48.283 276945 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 11 08:44:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:48.288 276945 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 11 08:44:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:48.288 276945 INFO oslo.privsep.daemon [-] privsep daemon running as pid 276945
Oct 11 08:44:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:48.400 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e30130fa-b418-4339-a37c-064b80c19859]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:48 compute-0 podman[277067]: 2025-10-11 08:44:48.633423771 +0000 UTC m=+0.086714436 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:44:48 compute-0 podman[277067]: 2025-10-11 08:44:48.776550896 +0000 UTC m=+0.229841511 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.011 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172289.0109758, aac0adcc-167d-400a-a04a-93767356cc9c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.011 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] VM Started (Lifecycle Event)
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.014 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.021 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.028 2 INFO nova.virt.libvirt.driver [-] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Instance spawned successfully.
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.028 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.039 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.047 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.051 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.051 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.051 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.052 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.052 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.052 2 DEBUG nova.virt.libvirt.driver [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.078 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.078 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172289.0117393, aac0adcc-167d-400a-a04a-93767356cc9c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.078 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] VM Paused (Lifecycle Event)
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.113 2 DEBUG nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.114 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.117 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.125 2 INFO nova.virt.libvirt.driver [-] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Instance spawned successfully.
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.125 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.129 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172289.0184076, aac0adcc-167d-400a-a04a-93767356cc9c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.130 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] VM Resumed (Lifecycle Event)
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.137 2 INFO nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Took 9.75 seconds to spawn the instance on the hypervisor.
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.137 2 DEBUG nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.154 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.163 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.164 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.165 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.166 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.167 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.168 2 DEBUG nova.virt.libvirt.driver [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.174 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:44:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:49.219 276945 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:49.219 276945 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:49.219 276945 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.235 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172289.1105483, 10e2549e-21d4-44fb-acbf-9104ec32970f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.235 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] VM Resumed (Lifecycle Event)
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.240 2 INFO nova.compute.manager [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Took 11.02 seconds to build instance.
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.245 2 INFO nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Took 3.72 seconds to spawn the instance on the hypervisor.
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.246 2 DEBUG nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.261 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.266 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.269 2 DEBUG oslo_concurrency.lockutils [None req-981b23f2-89f5-44a2-9dfa-15598106e3c8 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.290 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.290 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172289.1119533, 10e2549e-21d4-44fb-acbf-9104ec32970f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.290 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] VM Started (Lifecycle Event)
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.311 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.313 2 INFO nova.compute.manager [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Took 4.93 seconds to build instance.
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.319 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:44:49 compute-0 nova_compute[260935]: 2025-10-11 08:44:49.349 2 DEBUG oslo_concurrency.lockutils [None req-867cba43-d614-4fbe-9c73-90808e6a68a7 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "10e2549e-21d4-44fb-acbf-9104ec32970f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:49 compute-0 ceph-mon[74313]: pgmap v1143: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 08:44:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 08:44:49 compute-0 sudo[276853]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:44:49 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:44:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:44:49 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:44:49 compute-0 sudo[277218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:44:49 compute-0 sudo[277218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:49 compute-0 sudo[277218]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:49 compute-0 sudo[277243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:44:49 compute-0 sudo[277243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:49 compute-0 sudo[277243]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.010 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f86750aa-fa02-4c0b-bf47-8b56cb8b8fad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.012 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3bd537c2-e1 in ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:44:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.015 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3bd537c2-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:44:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.015 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cbd9973f-e5f1-4637-bc6f-c4d6f2fef1ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.020 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f1aafa-9ff8-4366-89f3-e1c154ef5f5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:50 compute-0 sudo[277268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:44:50 compute-0 sudo[277268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:50 compute-0 sudo[277268]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.057 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ede9cd7e-d561-421b-a094-45252a1f564c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.094 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f59d4b7-8079-41df-8183-3d15ec76755a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.096 162815 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp_s4rubi2/privsep.sock']
Oct 11 08:44:50 compute-0 sudo[277297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:44:50 compute-0 sudo[277297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:50 compute-0 sudo[277297]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:44:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:44:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:44:50 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:44:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:44:50 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:44:50 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev b4048180-8117-4b94-be9a-0f6c0afa0235 does not exist
Oct 11 08:44:50 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 4feb4d42-bc20-4178-818b-5aa4ab9d0c4e does not exist
Oct 11 08:44:50 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 524b8e4d-02eb-4a1f-b84b-ca16bce93ab9 does not exist
Oct 11 08:44:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:44:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:44:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:44:50 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:44:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:44:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:44:50 compute-0 ceph-mon[74313]: pgmap v1144: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 08:44:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:44:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:44:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:44:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:44:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:44:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:44:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:44:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:44:50 compute-0 nova_compute[260935]: 2025-10-11 08:44:50.770 2 DEBUG nova.compute.manager [req-15dac434-061d-45cf-8b24-2dc0bcd881c9 req-27186650-a844-4729-a5da-89f35f5f0895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received event network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:44:50 compute-0 nova_compute[260935]: 2025-10-11 08:44:50.771 2 DEBUG oslo_concurrency.lockutils [req-15dac434-061d-45cf-8b24-2dc0bcd881c9 req-27186650-a844-4729-a5da-89f35f5f0895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:50 compute-0 nova_compute[260935]: 2025-10-11 08:44:50.771 2 DEBUG oslo_concurrency.lockutils [req-15dac434-061d-45cf-8b24-2dc0bcd881c9 req-27186650-a844-4729-a5da-89f35f5f0895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:50 compute-0 nova_compute[260935]: 2025-10-11 08:44:50.771 2 DEBUG oslo_concurrency.lockutils [req-15dac434-061d-45cf-8b24-2dc0bcd881c9 req-27186650-a844-4729-a5da-89f35f5f0895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:50 compute-0 nova_compute[260935]: 2025-10-11 08:44:50.773 2 DEBUG nova.compute.manager [req-15dac434-061d-45cf-8b24-2dc0bcd881c9 req-27186650-a844-4729-a5da-89f35f5f0895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] No waiting events found dispatching network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:44:50 compute-0 nova_compute[260935]: 2025-10-11 08:44:50.774 2 WARNING nova.compute.manager [req-15dac434-061d-45cf-8b24-2dc0bcd881c9 req-27186650-a844-4729-a5da-89f35f5f0895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received unexpected event network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f for instance with vm_state active and task_state None.
Oct 11 08:44:50 compute-0 sudo[277357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:44:50 compute-0 sudo[277357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:50 compute-0 sudo[277357]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.855 162815 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 11 08:44:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.856 162815 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp_s4rubi2/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 11 08:44:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.719 277356 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 11 08:44:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.728 277356 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 11 08:44:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.733 277356 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 11 08:44:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.734 277356 INFO oslo.privsep.daemon [-] privsep daemon running as pid 277356
Oct 11 08:44:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:50.860 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1b9133-2538-4911-ac0a-45cef64de63e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:50 compute-0 sudo[277382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:44:50 compute-0 sudo[277382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:50 compute-0 sudo[277382]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:50 compute-0 sudo[277410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:44:50 compute-0 sudo[277410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:50 compute-0 sudo[277410]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:51 compute-0 sudo[277435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:44:51 compute-0 sudo[277435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:51 compute-0 nova_compute[260935]: 2025-10-11 08:44:51.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:51.433 277356 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:51.433 277356 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:51.433 277356 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:51 compute-0 nova_compute[260935]: 2025-10-11 08:44:51.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:51 compute-0 podman[277501]: 2025-10-11 08:44:51.527101941 +0000 UTC m=+0.087555439 container create 3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 08:44:51 compute-0 systemd[1]: Started libpod-conmon-3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629.scope.
Oct 11 08:44:51 compute-0 podman[277501]: 2025-10-11 08:44:51.484792989 +0000 UTC m=+0.045246447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:44:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:44:51 compute-0 podman[277501]: 2025-10-11 08:44:51.644791659 +0000 UTC m=+0.205245197 container init 3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 08:44:51 compute-0 podman[277501]: 2025-10-11 08:44:51.657432586 +0000 UTC m=+0.217886074 container start 3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 08:44:51 compute-0 podman[277501]: 2025-10-11 08:44:51.660985266 +0000 UTC m=+0.221438724 container attach 3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 08:44:51 compute-0 hungry_hopper[277517]: 167 167
Oct 11 08:44:51 compute-0 systemd[1]: libpod-3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629.scope: Deactivated successfully.
Oct 11 08:44:51 compute-0 podman[277501]: 2025-10-11 08:44:51.665997737 +0000 UTC m=+0.226451225 container died 3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:44:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 08:44:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-158ebbebb2d0e84c39fdeca21bbe72b1fb24657253754b703fde6b210afe7921-merged.mount: Deactivated successfully.
Oct 11 08:44:51 compute-0 podman[277501]: 2025-10-11 08:44:51.738310806 +0000 UTC m=+0.298764304 container remove 3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hopper, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 08:44:51 compute-0 systemd[1]: libpod-conmon-3f982514105932aedde5dd0f08350381bd5f3e3e409f4ed76c01d5d36e3e2629.scope: Deactivated successfully.
Oct 11 08:44:51 compute-0 NetworkManager[44960]: <info>  [1760172291.8659] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Oct 11 08:44:51 compute-0 NetworkManager[44960]: <info>  [1760172291.8668] device (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 08:44:51 compute-0 nova_compute[260935]: 2025-10-11 08:44:51.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:51 compute-0 NetworkManager[44960]: <info>  [1760172291.8682] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Oct 11 08:44:51 compute-0 NetworkManager[44960]: <info>  [1760172291.8685] device (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 11 08:44:51 compute-0 NetworkManager[44960]: <info>  [1760172291.8693] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Oct 11 08:44:51 compute-0 NetworkManager[44960]: <info>  [1760172291.8698] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct 11 08:44:51 compute-0 NetworkManager[44960]: <info>  [1760172291.8701] device (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 11 08:44:51 compute-0 NetworkManager[44960]: <info>  [1760172291.8704] device (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 11 08:44:51 compute-0 nova_compute[260935]: 2025-10-11 08:44:51.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:51 compute-0 podman[277542]: 2025-10-11 08:44:51.943572703 +0000 UTC m=+0.044255979 container create b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_black, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:44:51 compute-0 nova_compute[260935]: 2025-10-11 08:44:51.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:51 compute-0 systemd[1]: Started libpod-conmon-b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6.scope.
Oct 11 08:44:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89a2e703f8d28b41ac86e2a7de3a208c15f8f89777b3987111deab2038aa9c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:44:52 compute-0 podman[277542]: 2025-10-11 08:44:51.923667112 +0000 UTC m=+0.024350438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89a2e703f8d28b41ac86e2a7de3a208c15f8f89777b3987111deab2038aa9c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89a2e703f8d28b41ac86e2a7de3a208c15f8f89777b3987111deab2038aa9c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89a2e703f8d28b41ac86e2a7de3a208c15f8f89777b3987111deab2038aa9c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89a2e703f8d28b41ac86e2a7de3a208c15f8f89777b3987111deab2038aa9c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.051 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0f1a0f6b-3f2d-4510-a89e-0faf276131b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:52 compute-0 podman[277542]: 2025-10-11 08:44:52.061147868 +0000 UTC m=+0.161831194 container init b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_black, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 08:44:52 compute-0 NetworkManager[44960]: <info>  [1760172292.0618] manager: (tap3bd537c2-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.061 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9e5533-de1c-46e8-bdaa-76721edfa3c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:52 compute-0 podman[277542]: 2025-10-11 08:44:52.069623857 +0000 UTC m=+0.170307173 container start b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:44:52 compute-0 podman[277542]: 2025-10-11 08:44:52.073604409 +0000 UTC m=+0.174287705 container attach b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.110 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3b56a336-6fb7-452e-b58e-0e0dad2feea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:52 compute-0 systemd-udevd[277569]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.114 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff0d4e0-1b4f-40f3-bdd1-d7a5b1792bb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:52 compute-0 NetworkManager[44960]: <info>  [1760172292.1413] device (tap3bd537c2-e0): carrier: link connected
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.150 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[45ac11be-3cd9-4463-b171-4f2b9c8ad940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.176 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[56e5ff32-640b-4a74-a6cf-f0c40fc37f56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bd537c2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:0a:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 413684, 'reachable_time': 17731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277588, 'error': None, 'target': 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.200 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c0edeadc-a7ee-41b4-a829-790692b2a705]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee9:a0a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 413684, 'tstamp': 413684}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277589, 'error': None, 'target': 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.223 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e215fff9-9f44-4856-b244-d90fe26b06c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bd537c2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:0a:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 413684, 'reachable_time': 17731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277590, 'error': None, 'target': 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.269 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3dd685-9085-426a-a060-f9aa93011b63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.345 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[44b4fb44-0ef2-491b-90fa-92e092ff2e23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.348 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bd537c2-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.348 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.349 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bd537c2-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:44:52 compute-0 kernel: tap3bd537c2-e0: entered promiscuous mode
Oct 11 08:44:52 compute-0 NetworkManager[44960]: <info>  [1760172292.4063] manager: (tap3bd537c2-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct 11 08:44:52 compute-0 nova_compute[260935]: 2025-10-11 08:44:52.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:52 compute-0 nova_compute[260935]: 2025-10-11 08:44:52.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.410 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3bd537c2-e0, col_values=(('external_ids', {'iface-id': '6fcfb560-e2ca-4f85-8b53-5907aa538954'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:44:52 compute-0 nova_compute[260935]: 2025-10-11 08:44:52.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:52 compute-0 ovn_controller[152945]: 2025-10-11T08:44:52Z|00031|binding|INFO|Releasing lport 6fcfb560-e2ca-4f85-8b53-5907aa538954 from this chassis (sb_readonly=0)
Oct 11 08:44:52 compute-0 nova_compute[260935]: 2025-10-11 08:44:52.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.420 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3bd537c2-e6ec-4d00-ac83-fbf5d86f963c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3bd537c2-e6ec-4d00-ac83-fbf5d86f963c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.421 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[25889645-a983-442c-8194-aba02d2d3d34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.422 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/3bd537c2-e6ec-4d00-ac83-fbf5d86f963c.pid.haproxy
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:44:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:52.423 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'env', 'PROCESS_TAG=haproxy-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3bd537c2-e6ec-4d00-ac83-fbf5d86f963c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:44:52 compute-0 nova_compute[260935]: 2025-10-11 08:44:52.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:52 compute-0 ceph-mon[74313]: pgmap v1145: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 08:44:52 compute-0 podman[277621]: 2025-10-11 08:44:52.823729327 +0000 UTC m=+0.042395186 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:44:52 compute-0 podman[277621]: 2025-10-11 08:44:52.993564255 +0000 UTC m=+0.212230104 container create a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 11 08:44:53 compute-0 systemd[1]: Started libpod-conmon-a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021.scope.
Oct 11 08:44:53 compute-0 nova_compute[260935]: 2025-10-11 08:44:53.043 2 DEBUG nova.compute.manager [req-90638dd7-4a89-4b6f-85c9-e420bd544a66 req-c22fb917-540f-4cc7-83a3-35a03ccfb3a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received event network-changed-6e75116e-1034-4d1a-8320-6755ee57c51f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:44:53 compute-0 nova_compute[260935]: 2025-10-11 08:44:53.045 2 DEBUG nova.compute.manager [req-90638dd7-4a89-4b6f-85c9-e420bd544a66 req-c22fb917-540f-4cc7-83a3-35a03ccfb3a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Refreshing instance network info cache due to event network-changed-6e75116e-1034-4d1a-8320-6755ee57c51f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:44:53 compute-0 nova_compute[260935]: 2025-10-11 08:44:53.045 2 DEBUG oslo_concurrency.lockutils [req-90638dd7-4a89-4b6f-85c9-e420bd544a66 req-c22fb917-540f-4cc7-83a3-35a03ccfb3a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:44:53 compute-0 nova_compute[260935]: 2025-10-11 08:44:53.047 2 DEBUG oslo_concurrency.lockutils [req-90638dd7-4a89-4b6f-85c9-e420bd544a66 req-c22fb917-540f-4cc7-83a3-35a03ccfb3a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:44:53 compute-0 nova_compute[260935]: 2025-10-11 08:44:53.047 2 DEBUG nova.network.neutron [req-90638dd7-4a89-4b6f-85c9-e420bd544a66 req-c22fb917-540f-4cc7-83a3-35a03ccfb3a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Refreshing network info cache for port 6e75116e-1034-4d1a-8320-6755ee57c51f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:44:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1db26376507664bf223ae5a9650b1681bc0d81b11369be060261805baad7082/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:44:53 compute-0 podman[277621]: 2025-10-11 08:44:53.105661645 +0000 UTC m=+0.324327534 container init a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 08:44:53 compute-0 podman[277621]: 2025-10-11 08:44:53.112901649 +0000 UTC m=+0.331567458 container start a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:44:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:44:53 compute-0 podman[277646]: 2025-10-11 08:44:53.136388401 +0000 UTC m=+0.091142461 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:44:53 compute-0 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[277655]: [NOTICE]   (277673) : New worker (277679) forked
Oct 11 08:44:53 compute-0 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[277655]: [NOTICE]   (277673) : Loading success.
Oct 11 08:44:53 compute-0 romantic_black[277559]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:44:53 compute-0 romantic_black[277559]: --> relative data size: 1.0
Oct 11 08:44:53 compute-0 romantic_black[277559]: --> All data devices are unavailable
Oct 11 08:44:53 compute-0 systemd[1]: libpod-b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6.scope: Deactivated successfully.
Oct 11 08:44:53 compute-0 systemd[1]: libpod-b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6.scope: Consumed 1.100s CPU time.
Oct 11 08:44:53 compute-0 conmon[277559]: conmon b8a653bb6e633e280d24 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6.scope/container/memory.events
Oct 11 08:44:53 compute-0 podman[277692]: 2025-10-11 08:44:53.311535589 +0000 UTC m=+0.031912451 container died b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_black, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 08:44:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f89a2e703f8d28b41ac86e2a7de3a208c15f8f89777b3987111deab2038aa9c0-merged.mount: Deactivated successfully.
Oct 11 08:44:53 compute-0 podman[277692]: 2025-10-11 08:44:53.373580638 +0000 UTC m=+0.093957490 container remove b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_black, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:44:53 compute-0 systemd[1]: libpod-conmon-b8a653bb6e633e280d2441c0695a91497b40215944f26339c2a5217dbb4bc6c6.scope: Deactivated successfully.
Oct 11 08:44:53 compute-0 sudo[277435]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:53 compute-0 sudo[277706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:44:53 compute-0 sudo[277706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:53 compute-0 sudo[277706]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:53 compute-0 sudo[277731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:44:53 compute-0 sudo[277731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:53 compute-0 sudo[277731]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:53 compute-0 sudo[277756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:44:53 compute-0 sudo[277756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:53 compute-0 sudo[277756]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 11 08:44:53 compute-0 sudo[277781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:44:53 compute-0 sudo[277781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:54.017 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:44:54 compute-0 nova_compute[260935]: 2025-10-11 08:44:54.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:54.022 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 08:44:54 compute-0 podman[277846]: 2025-10-11 08:44:54.229649653 +0000 UTC m=+0.071385743 container create 108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:44:54 compute-0 systemd[1]: Started libpod-conmon-108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5.scope.
Oct 11 08:44:54 compute-0 podman[277846]: 2025-10-11 08:44:54.193645558 +0000 UTC m=+0.035381628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:44:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:44:54 compute-0 podman[277846]: 2025-10-11 08:44:54.344286555 +0000 UTC m=+0.186022685 container init 108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 08:44:54 compute-0 podman[277846]: 2025-10-11 08:44:54.355362217 +0000 UTC m=+0.197098307 container start 108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 08:44:54 compute-0 podman[277846]: 2025-10-11 08:44:54.360199434 +0000 UTC m=+0.201935534 container attach 108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 11 08:44:54 compute-0 epic_brahmagupta[277862]: 167 167
Oct 11 08:44:54 compute-0 systemd[1]: libpod-108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5.scope: Deactivated successfully.
Oct 11 08:44:54 compute-0 podman[277846]: 2025-10-11 08:44:54.36608128 +0000 UTC m=+0.207817370 container died 108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:44:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d0f17499bc388afee0f217bd5f66d2a91ee4675d0f33a1426224584f134f707-merged.mount: Deactivated successfully.
Oct 11 08:44:54 compute-0 podman[277846]: 2025-10-11 08:44:54.426590156 +0000 UTC m=+0.268326236 container remove 108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 08:44:54 compute-0 systemd[1]: libpod-conmon-108c595761bf68df926e86ce46a9c57044517cacfa0ecfbf1e4d8502b6e5e3e5.scope: Deactivated successfully.
Oct 11 08:44:54 compute-0 podman[277885]: 2025-10-11 08:44:54.705431207 +0000 UTC m=+0.093828036 container create 1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shannon, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:44:54 compute-0 podman[277885]: 2025-10-11 08:44:54.653205345 +0000 UTC m=+0.041602254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:44:54 compute-0 systemd[1]: Started libpod-conmon-1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18.scope.
Oct 11 08:44:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:44:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:44:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:44:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:44:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:44:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:44:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:44:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:44:54
Oct 11 08:44:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:44:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:44:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'vms', 'images', '.mgr', 'default.rgw.meta', 'backups']
Oct 11 08:44:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34ebbe09dc6a8d40110ca9b4300a6d7412c41e6383a134fdb3014cae9861a53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34ebbe09dc6a8d40110ca9b4300a6d7412c41e6383a134fdb3014cae9861a53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34ebbe09dc6a8d40110ca9b4300a6d7412c41e6383a134fdb3014cae9861a53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:44:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34ebbe09dc6a8d40110ca9b4300a6d7412c41e6383a134fdb3014cae9861a53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:44:54 compute-0 podman[277885]: 2025-10-11 08:44:54.818436673 +0000 UTC m=+0.206833562 container init 1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shannon, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:44:54 compute-0 podman[277885]: 2025-10-11 08:44:54.832415917 +0000 UTC m=+0.220812786 container start 1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:44:54 compute-0 podman[277885]: 2025-10-11 08:44:54.837036947 +0000 UTC m=+0.225433856 container attach 1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shannon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:44:54 compute-0 nova_compute[260935]: 2025-10-11 08:44:54.898 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:44:54 compute-0 nova_compute[260935]: 2025-10-11 08:44:54.921 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:44:54 compute-0 nova_compute[260935]: 2025-10-11 08:44:54.921 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:44:54 compute-0 nova_compute[260935]: 2025-10-11 08:44:54.921 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:44:54 compute-0 nova_compute[260935]: 2025-10-11 08:44:54.922 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:44:55 compute-0 ceph-mon[74313]: pgmap v1146: 321 pgs: 321 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 11 08:44:55 compute-0 nova_compute[260935]: 2025-10-11 08:44:55.169 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:55 compute-0 nova_compute[260935]: 2025-10-11 08:44:55.170 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:55 compute-0 nova_compute[260935]: 2025-10-11 08:44:55.188 2 DEBUG nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:44:55 compute-0 nova_compute[260935]: 2025-10-11 08:44:55.267 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:55 compute-0 nova_compute[260935]: 2025-10-11 08:44:55.268 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:55 compute-0 nova_compute[260935]: 2025-10-11 08:44:55.278 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:44:55 compute-0 nova_compute[260935]: 2025-10-11 08:44:55.279 2 INFO nova.compute.claims [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:44:55 compute-0 nova_compute[260935]: 2025-10-11 08:44:55.432 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:55 compute-0 crazy_shannon[277902]: {
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:     "0": [
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:         {
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "devices": [
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "/dev/loop3"
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             ],
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "lv_name": "ceph_lv0",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "lv_size": "21470642176",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "name": "ceph_lv0",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "tags": {
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.cluster_name": "ceph",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.crush_device_class": "",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.encrypted": "0",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.osd_id": "0",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.type": "block",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.vdo": "0"
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             },
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "type": "block",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "vg_name": "ceph_vg0"
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:         }
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:     ],
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:     "1": [
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:         {
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "devices": [
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "/dev/loop4"
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             ],
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "lv_name": "ceph_lv1",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "lv_size": "21470642176",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "name": "ceph_lv1",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "tags": {
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.cluster_name": "ceph",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.crush_device_class": "",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.encrypted": "0",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.osd_id": "1",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.type": "block",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.vdo": "0"
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             },
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "type": "block",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "vg_name": "ceph_vg1"
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:         }
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:     ],
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:     "2": [
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:         {
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "devices": [
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "/dev/loop5"
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             ],
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "lv_name": "ceph_lv2",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "lv_size": "21470642176",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "name": "ceph_lv2",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "tags": {
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.cluster_name": "ceph",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.crush_device_class": "",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.encrypted": "0",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.osd_id": "2",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.type": "block",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:                 "ceph.vdo": "0"
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             },
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "type": "block",
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:             "vg_name": "ceph_vg2"
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:         }
Oct 11 08:44:55 compute-0 crazy_shannon[277902]:     ]
Oct 11 08:44:55 compute-0 crazy_shannon[277902]: }
Oct 11 08:44:55 compute-0 systemd[1]: libpod-1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18.scope: Deactivated successfully.
Oct 11 08:44:55 compute-0 nova_compute[260935]: 2025-10-11 08:44:55.627 2 DEBUG nova.network.neutron [req-90638dd7-4a89-4b6f-85c9-e420bd544a66 req-c22fb917-540f-4cc7-83a3-35a03ccfb3a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Updated VIF entry in instance network info cache for port 6e75116e-1034-4d1a-8320-6755ee57c51f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:44:55 compute-0 nova_compute[260935]: 2025-10-11 08:44:55.630 2 DEBUG nova.network.neutron [req-90638dd7-4a89-4b6f-85c9-e420bd544a66 req-c22fb917-540f-4cc7-83a3-35a03ccfb3a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Updating instance_info_cache with network_info: [{"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:44:55 compute-0 nova_compute[260935]: 2025-10-11 08:44:55.656 2 DEBUG oslo_concurrency.lockutils [req-90638dd7-4a89-4b6f-85c9-e420bd544a66 req-c22fb917-540f-4cc7-83a3-35a03ccfb3a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:44:55 compute-0 podman[277931]: 2025-10-11 08:44:55.684429151 +0000 UTC m=+0.040509889 container died 1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shannon, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 08:44:55 compute-0 nova_compute[260935]: 2025-10-11 08:44:55.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:44:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 11 08:44:55 compute-0 nova_compute[260935]: 2025-10-11 08:44:55.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:44:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f34ebbe09dc6a8d40110ca9b4300a6d7412c41e6383a134fdb3014cae9861a53-merged.mount: Deactivated successfully.
Oct 11 08:44:55 compute-0 podman[277931]: 2025-10-11 08:44:55.743865 +0000 UTC m=+0.099945708 container remove 1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 08:44:55 compute-0 systemd[1]: libpod-conmon-1c39acd5cda50586c0cb25171d4aba888e54d136022ae829645313a841582c18.scope: Deactivated successfully.
Oct 11 08:44:55 compute-0 sudo[277781]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:55 compute-0 sudo[277945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:44:55 compute-0 sudo[277945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:55 compute-0 sudo[277945]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:55 compute-0 sudo[277970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:44:55 compute-0 sudo[277970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:55 compute-0 sudo[277970]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:44:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715674536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:44:55 compute-0 nova_compute[260935]: 2025-10-11 08:44:55.996 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:55 compute-0 sudo[277995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:44:56 compute-0 sudo[277995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:56 compute-0 sudo[277995]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.006 2 DEBUG nova.compute.provider_tree [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.029 2 DEBUG nova.scheduler.client.report [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.054 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.056 2 DEBUG nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:44:56 compute-0 sudo[278022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:44:56 compute-0 sudo[278022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.114 2 DEBUG nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.116 2 DEBUG nova.network.neutron [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.135 2 INFO nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:44:56 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1715674536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.152 2 DEBUG nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.239 2 DEBUG nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.244 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.245 2 INFO nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Creating image(s)
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.277 2 DEBUG nova.storage.rbd_utils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.308 2 DEBUG nova.storage.rbd_utils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.336 2 DEBUG nova.storage.rbd_utils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.341 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.453 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.455 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.456 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.456 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.490 2 DEBUG nova.storage.rbd_utils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.502 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:44:56 compute-0 podman[278163]: 2025-10-11 08:44:56.609934296 +0000 UTC m=+0.071351090 container create 06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 08:44:56 compute-0 podman[278163]: 2025-10-11 08:44:56.58100097 +0000 UTC m=+0.042417854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:44:56 compute-0 systemd[1]: Started libpod-conmon-06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d.scope.
Oct 11 08:44:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:44:56 compute-0 podman[278163]: 2025-10-11 08:44:56.775504829 +0000 UTC m=+0.236921673 container init 06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:44:56 compute-0 podman[278163]: 2025-10-11 08:44:56.785047201 +0000 UTC m=+0.246464005 container start 06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 08:44:56 compute-0 podman[278163]: 2025-10-11 08:44:56.788887701 +0000 UTC m=+0.250304575 container attach 06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 08:44:56 compute-0 stoic_blackwell[278199]: 167 167
Oct 11 08:44:56 compute-0 podman[278163]: 2025-10-11 08:44:56.793537384 +0000 UTC m=+0.254954218 container died 06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 08:44:56 compute-0 systemd[1]: libpod-06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d.scope: Deactivated successfully.
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.805 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-551d54630e04c2c011375761b25602f023d030227ec74945e89d8f29e3933daa-merged.mount: Deactivated successfully.
Oct 11 08:44:56 compute-0 podman[278163]: 2025-10-11 08:44:56.845387896 +0000 UTC m=+0.306804700 container remove 06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 08:44:56 compute-0 systemd[1]: libpod-conmon-06d0307e4a74a273214ad88c6b304739eacc58851538c2359e91aa7cd3a4bb9d.scope: Deactivated successfully.
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.894 2 DEBUG nova.storage.rbd_utils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] resizing rbd image eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.975 2 DEBUG nova.network.neutron [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 08:44:56 compute-0 nova_compute[260935]: 2025-10-11 08:44:56.976 2 DEBUG nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.058 2 DEBUG nova.objects.instance [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lazy-loading 'migration_context' on Instance uuid eda894cf-d32d-47f0-adc0-7e2f7fffb442 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.079 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.080 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Ensure instance console log exists: /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.080 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.081 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.082 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.084 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:44:57 compute-0 podman[278276]: 2025-10-11 08:44:57.085430608 +0000 UTC m=+0.086845714 container create da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.091 2 WARNING nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.098 2 DEBUG nova.virt.libvirt.host [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.099 2 DEBUG nova.virt.libvirt.host [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.103 2 DEBUG nova.virt.libvirt.host [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.104 2 DEBUG nova.virt.libvirt.host [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.104 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.105 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.106 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.106 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.107 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.107 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.107 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.108 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.108 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.108 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.109 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.109 2 DEBUG nova.virt.hardware [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.115 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:57 compute-0 podman[278276]: 2025-10-11 08:44:57.043095058 +0000 UTC m=+0.044510174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:44:57 compute-0 systemd[1]: Started libpod-conmon-da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291.scope.
Oct 11 08:44:57 compute-0 ceph-mon[74313]: pgmap v1147: 321 pgs: 321 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 11 08:44:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22078b2187e12e17d5063dc397551b8d514aa50728d631e16b10d3bdc480ba6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22078b2187e12e17d5063dc397551b8d514aa50728d631e16b10d3bdc480ba6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22078b2187e12e17d5063dc397551b8d514aa50728d631e16b10d3bdc480ba6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22078b2187e12e17d5063dc397551b8d514aa50728d631e16b10d3bdc480ba6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:44:57 compute-0 podman[278276]: 2025-10-11 08:44:57.199359675 +0000 UTC m=+0.200774791 container init da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 08:44:57 compute-0 podman[278276]: 2025-10-11 08:44:57.21460572 +0000 UTC m=+0.216020806 container start da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:44:57 compute-0 podman[278276]: 2025-10-11 08:44:57.21844562 +0000 UTC m=+0.219860706 container attach da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:44:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:44:57 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/887322243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.662 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.701 2 DEBUG nova.storage.rbd_utils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 181 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 11 08:44:57 compute-0 nova_compute[260935]: 2025-10-11 08:44:57.710 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:44:58.025 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:44:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:44:58 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2952667324' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:44:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:44:58 compute-0 nova_compute[260935]: 2025-10-11 08:44:58.147 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:58 compute-0 nova_compute[260935]: 2025-10-11 08:44:58.149 2 DEBUG nova.objects.instance [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lazy-loading 'pci_devices' on Instance uuid eda894cf-d32d-47f0-adc0-7e2f7fffb442 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:44:58 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/887322243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:44:58 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2952667324' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:44:58 compute-0 nova_compute[260935]: 2025-10-11 08:44:58.170 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:44:58 compute-0 nova_compute[260935]:   <uuid>eda894cf-d32d-47f0-adc0-7e2f7fffb442</uuid>
Oct 11 08:44:58 compute-0 nova_compute[260935]:   <name>instance-00000004</name>
Oct 11 08:44:58 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:44:58 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:44:58 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <nova:name>tempest-LiveMigrationNegativeTest-server-630761309</nova:name>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:44:57</nova:creationTime>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:44:58 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:44:58 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:44:58 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:44:58 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:44:58 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:44:58 compute-0 nova_compute[260935]:         <nova:user uuid="b1cf6609831e4cd3b6261e999c3c068e">tempest-LiveMigrationNegativeTest-1774903191-project-member</nova:user>
Oct 11 08:44:58 compute-0 nova_compute[260935]:         <nova:project uuid="507d5ff420164426beedc0ce7977570a">tempest-LiveMigrationNegativeTest-1774903191</nova:project>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:44:58 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:44:58 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <system>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <entry name="serial">eda894cf-d32d-47f0-adc0-7e2f7fffb442</entry>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <entry name="uuid">eda894cf-d32d-47f0-adc0-7e2f7fffb442</entry>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     </system>
Oct 11 08:44:58 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:44:58 compute-0 nova_compute[260935]:   <os>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:   </os>
Oct 11 08:44:58 compute-0 nova_compute[260935]:   <features>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:   </features>
Oct 11 08:44:58 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:44:58 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:44:58 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk">
Oct 11 08:44:58 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       </source>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:44:58 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk.config">
Oct 11 08:44:58 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       </source>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:44:58 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442/console.log" append="off"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <video>
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     </video>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:44:58 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:44:58 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:44:58 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:44:58 compute-0 nova_compute[260935]: </domain>
Oct 11 08:44:58 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:44:58 compute-0 unruffled_bell[278311]: {
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:         "osd_id": 2,
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:         "type": "bluestore"
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:     },
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:         "osd_id": 0,
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:         "type": "bluestore"
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:     },
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:         "osd_id": 1,
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:         "type": "bluestore"
Oct 11 08:44:58 compute-0 unruffled_bell[278311]:     }
Oct 11 08:44:58 compute-0 unruffled_bell[278311]: }
Oct 11 08:44:58 compute-0 nova_compute[260935]: 2025-10-11 08:44:58.256 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:44:58 compute-0 nova_compute[260935]: 2025-10-11 08:44:58.256 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:44:58 compute-0 nova_compute[260935]: 2025-10-11 08:44:58.257 2 INFO nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Using config drive
Oct 11 08:44:58 compute-0 nova_compute[260935]: 2025-10-11 08:44:58.281 2 DEBUG nova.storage.rbd_utils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:58 compute-0 systemd[1]: libpod-da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291.scope: Deactivated successfully.
Oct 11 08:44:58 compute-0 systemd[1]: libpod-da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291.scope: Consumed 1.053s CPU time.
Oct 11 08:44:58 compute-0 podman[278425]: 2025-10-11 08:44:58.338404744 +0000 UTC m=+0.031357447 container died da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:44:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d22078b2187e12e17d5063dc397551b8d514aa50728d631e16b10d3bdc480ba6-merged.mount: Deactivated successfully.
Oct 11 08:44:58 compute-0 podman[278425]: 2025-10-11 08:44:58.40123487 +0000 UTC m=+0.094187553 container remove da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bell, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 08:44:58 compute-0 systemd[1]: libpod-conmon-da733033708753dc525e668b4b95a3ad2c5852ace4dcd9cf04be54bddc4c1291.scope: Deactivated successfully.
Oct 11 08:44:58 compute-0 sudo[278022]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:44:58 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:44:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:44:58 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:44:58 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 5f45dae6-2b5a-4ccc-ad8e-21421610eb80 does not exist
Oct 11 08:44:58 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev e2a33329-23fd-435f-9d0f-dc630e0a6735 does not exist
Oct 11 08:44:58 compute-0 nova_compute[260935]: 2025-10-11 08:44:58.543 2 INFO nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Creating config drive at /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442/disk.config
Oct 11 08:44:58 compute-0 nova_compute[260935]: 2025-10-11 08:44:58.552 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp12aqz3uw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:58 compute-0 sudo[278440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:44:58 compute-0 sudo[278440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:58 compute-0 sudo[278440]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:58 compute-0 sudo[278466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:44:58 compute-0 sudo[278466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:44:58 compute-0 sudo[278466]: pam_unix(sudo:session): session closed for user root
Oct 11 08:44:58 compute-0 nova_compute[260935]: 2025-10-11 08:44:58.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:44:58 compute-0 nova_compute[260935]: 2025-10-11 08:44:58.702 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp12aqz3uw" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:58 compute-0 nova_compute[260935]: 2025-10-11 08:44:58.748 2 DEBUG nova.storage.rbd_utils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] rbd image eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:44:58 compute-0 nova_compute[260935]: 2025-10-11 08:44:58.754 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442/disk.config eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:58 compute-0 nova_compute[260935]: 2025-10-11 08:44:58.951 2 DEBUG oslo_concurrency.processutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442/disk.config eda894cf-d32d-47f0-adc0-7e2f7fffb442_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:44:58 compute-0 nova_compute[260935]: 2025-10-11 08:44:58.953 2 INFO nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Deleting local config drive /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442/disk.config because it was imported into RBD.
Oct 11 08:44:59 compute-0 systemd-machined[215705]: New machine qemu-4-instance-00000004.
Oct 11 08:44:59 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Oct 11 08:44:59 compute-0 ceph-mon[74313]: pgmap v1148: 321 pgs: 321 active+clean; 181 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Oct 11 08:44:59 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:44:59 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:44:59 compute-0 podman[278536]: 2025-10-11 08:44:59.182279176 +0000 UTC m=+0.127737072 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 11 08:44:59 compute-0 nova_compute[260935]: 2025-10-11 08:44:59.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:44:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 181 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 11 08:44:59 compute-0 nova_compute[260935]: 2025-10-11 08:44:59.777 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:44:59 compute-0 nova_compute[260935]: 2025-10-11 08:44:59.778 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:44:59 compute-0 nova_compute[260935]: 2025-10-11 08:44:59.779 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:44:59 compute-0 nova_compute[260935]: 2025-10-11 08:44:59.780 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:44:59 compute-0 nova_compute[260935]: 2025-10-11 08:44:59.781 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:44:59 compute-0 nova_compute[260935]: 2025-10-11 08:44:59.986 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172299.985939, eda894cf-d32d-47f0-adc0-7e2f7fffb442 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:44:59 compute-0 nova_compute[260935]: 2025-10-11 08:44:59.987 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] VM Resumed (Lifecycle Event)
Oct 11 08:44:59 compute-0 nova_compute[260935]: 2025-10-11 08:44:59.990 2 DEBUG nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:44:59 compute-0 nova_compute[260935]: 2025-10-11 08:44:59.991 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:44:59 compute-0 nova_compute[260935]: 2025-10-11 08:44:59.998 2 INFO nova.virt.libvirt.driver [-] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Instance spawned successfully.
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:44:59.999 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.042 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.046 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.092 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.093 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.094 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.094 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.095 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.095 2 DEBUG nova.virt.libvirt.driver [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.100 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.101 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172299.9899127, eda894cf-d32d-47f0-adc0-7e2f7fffb442 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.101 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] VM Started (Lifecycle Event)
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.150 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.155 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.188 2 INFO nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Took 3.95 seconds to spawn the instance on the hypervisor.
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.189 2 DEBUG nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.201 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:45:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:45:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/155522291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.249 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.268 2 INFO nova.compute.manager [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Took 5.03 seconds to build instance.
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.305 2 DEBUG oslo_concurrency.lockutils [None req-8fb175bb-6bf1-4b19-9ee3-42933f94e6da b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:00 compute-0 podman[278630]: 2025-10-11 08:45:00.409050182 +0000 UTC m=+0.090259291 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.441 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.442 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.446 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.446 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.450 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.451 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.634 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.636 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4474MB free_disk=59.92570877075195GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.637 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.638 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.767 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance aac0adcc-167d-400a-a04a-93767356cc9c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.768 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 10e2549e-21d4-44fb-acbf-9104ec32970f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.768 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance eda894cf-d32d-47f0-adc0-7e2f7fffb442 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.769 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.769 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:45:00 compute-0 nova_compute[260935]: 2025-10-11 08:45:00.871 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:01 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 11 08:45:01 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 11 08:45:01 compute-0 ceph-mon[74313]: pgmap v1149: 321 pgs: 321 active+clean; 181 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 11 08:45:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/155522291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:01 compute-0 nova_compute[260935]: 2025-10-11 08:45:01.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:45:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4013955541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:01 compute-0 nova_compute[260935]: 2025-10-11 08:45:01.437 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:01 compute-0 nova_compute[260935]: 2025-10-11 08:45:01.454 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:45:01 compute-0 nova_compute[260935]: 2025-10-11 08:45:01.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:01 compute-0 nova_compute[260935]: 2025-10-11 08:45:01.581 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:45:01 compute-0 nova_compute[260935]: 2025-10-11 08:45:01.591 2 DEBUG nova.objects.instance [None req-1e22ab8e-0043-46cc-8854-e162ddc10f6f 344f85d2138c407e8c0c5209ec354af7 e7682c9cda004f9985bc3bfe7abf6aab - - default default] Lazy-loading 'pci_devices' on Instance uuid eda894cf-d32d-47f0-adc0-7e2f7fffb442 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:45:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 181 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 11 08:45:01 compute-0 nova_compute[260935]: 2025-10-11 08:45:01.857 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172301.8571935, eda894cf-d32d-47f0-adc0-7e2f7fffb442 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:45:01 compute-0 nova_compute[260935]: 2025-10-11 08:45:01.858 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] VM Paused (Lifecycle Event)
Oct 11 08:45:01 compute-0 nova_compute[260935]: 2025-10-11 08:45:01.869 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:45:01 compute-0 nova_compute[260935]: 2025-10-11 08:45:01.870 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:01 compute-0 ovn_controller[152945]: 2025-10-11T08:45:01Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:14:7b:bc 10.100.0.9
Oct 11 08:45:01 compute-0 ovn_controller[152945]: 2025-10-11T08:45:01Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:14:7b:bc 10.100.0.9
Oct 11 08:45:01 compute-0 nova_compute[260935]: 2025-10-11 08:45:01.897 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:01 compute-0 nova_compute[260935]: 2025-10-11 08:45:01.904 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:45:01 compute-0 nova_compute[260935]: 2025-10-11 08:45:01.991 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 11 08:45:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4013955541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:02 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct 11 08:45:02 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 2.704s CPU time.
Oct 11 08:45:02 compute-0 systemd-machined[215705]: Machine qemu-4-instance-00000004 terminated.
Oct 11 08:45:02 compute-0 nova_compute[260935]: 2025-10-11 08:45:02.374 2 DEBUG nova.compute.manager [None req-1e22ab8e-0043-46cc-8854-e162ddc10f6f 344f85d2138c407e8c0c5209ec354af7 e7682c9cda004f9985bc3bfe7abf6aab - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:02 compute-0 nova_compute[260935]: 2025-10-11 08:45:02.871 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:45:02 compute-0 nova_compute[260935]: 2025-10-11 08:45:02.871 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:45:02 compute-0 nova_compute[260935]: 2025-10-11 08:45:02.871 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:45:03 compute-0 nova_compute[260935]: 2025-10-11 08:45:03.067 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:45:03 compute-0 nova_compute[260935]: 2025-10-11 08:45:03.067 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:45:03 compute-0 nova_compute[260935]: 2025-10-11 08:45:03.068 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 08:45:03 compute-0 nova_compute[260935]: 2025-10-11 08:45:03.068 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid aac0adcc-167d-400a-a04a-93767356cc9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:45:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.137713) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172303137758, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1256, "num_deletes": 509, "total_data_size": 1386610, "memory_usage": 1416960, "flush_reason": "Manual Compaction"}
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172303144772, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 956397, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23035, "largest_seqno": 24290, "table_properties": {"data_size": 951587, "index_size": 1758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14798, "raw_average_key_size": 18, "raw_value_size": 939298, "raw_average_value_size": 1205, "num_data_blocks": 79, "num_entries": 779, "num_filter_entries": 779, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760172220, "oldest_key_time": 1760172220, "file_creation_time": 1760172303, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 7115 microseconds, and 3643 cpu microseconds.
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.144826) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 956397 bytes OK
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.144848) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.146255) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.146272) EVENT_LOG_v1 {"time_micros": 1760172303146267, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.146296) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1379769, prev total WAL file size 1379769, number of live WAL files 2.
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.147133) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(933KB)], [53(8904KB)]
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172303147208, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10074867, "oldest_snapshot_seqno": -1}
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4516 keys, 7060096 bytes, temperature: kUnknown
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172303200861, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7060096, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7030134, "index_size": 17564, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 113121, "raw_average_key_size": 25, "raw_value_size": 6948571, "raw_average_value_size": 1538, "num_data_blocks": 730, "num_entries": 4516, "num_filter_entries": 4516, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760172303, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.201292) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7060096 bytes
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.202998) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.3 rd, 131.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.7 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(17.9) write-amplify(7.4) OK, records in: 5521, records dropped: 1005 output_compression: NoCompression
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.203026) EVENT_LOG_v1 {"time_micros": 1760172303203013, "job": 28, "event": "compaction_finished", "compaction_time_micros": 53802, "compaction_time_cpu_micros": 37653, "output_level": 6, "num_output_files": 1, "total_output_size": 7060096, "num_input_records": 5521, "num_output_records": 4516, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172303203724, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172303207289, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.147038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.207445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.207459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.207462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.207466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:45:03 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:45:03.207469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:45:03 compute-0 ceph-mon[74313]: pgmap v1150: 321 pgs: 321 active+clean; 181 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct 11 08:45:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 233 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 5.9 MiB/s wr, 353 op/s
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018461018387380347 of space, bias 1.0, pg target 0.5538305516214104 quantized to 32 (current 32)
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:45:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:45:05 compute-0 ceph-mon[74313]: pgmap v1151: 321 pgs: 321 active+clean; 233 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 5.9 MiB/s wr, 353 op/s
Oct 11 08:45:05 compute-0 nova_compute[260935]: 2025-10-11 08:45:05.296 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Updating instance_info_cache with network_info: [{"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:45:05 compute-0 nova_compute[260935]: 2025-10-11 08:45:05.327 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:05 compute-0 nova_compute[260935]: 2025-10-11 08:45:05.328 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:05 compute-0 nova_compute[260935]: 2025-10-11 08:45:05.328 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:05 compute-0 nova_compute[260935]: 2025-10-11 08:45:05.329 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:05 compute-0 nova_compute[260935]: 2025-10-11 08:45:05.329 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:05 compute-0 nova_compute[260935]: 2025-10-11 08:45:05.330 2 INFO nova.compute.manager [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Terminating instance
Oct 11 08:45:05 compute-0 nova_compute[260935]: 2025-10-11 08:45:05.331 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "refresh_cache-eda894cf-d32d-47f0-adc0-7e2f7fffb442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:45:05 compute-0 nova_compute[260935]: 2025-10-11 08:45:05.331 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquired lock "refresh_cache-eda894cf-d32d-47f0-adc0-7e2f7fffb442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:45:05 compute-0 nova_compute[260935]: 2025-10-11 08:45:05.331 2 DEBUG nova.network.neutron [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:45:05 compute-0 nova_compute[260935]: 2025-10-11 08:45:05.348 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-aac0adcc-167d-400a-a04a-93767356cc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:45:05 compute-0 nova_compute[260935]: 2025-10-11 08:45:05.348 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 08:45:05 compute-0 nova_compute[260935]: 2025-10-11 08:45:05.348 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:45:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 233 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.9 MiB/s wr, 206 op/s
Oct 11 08:45:05 compute-0 nova_compute[260935]: 2025-10-11 08:45:05.964 2 DEBUG nova.network.neutron [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:45:06 compute-0 nova_compute[260935]: 2025-10-11 08:45:06.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:06 compute-0 nova_compute[260935]: 2025-10-11 08:45:06.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:06 compute-0 nova_compute[260935]: 2025-10-11 08:45:06.749 2 DEBUG nova.network.neutron [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:45:06 compute-0 nova_compute[260935]: 2025-10-11 08:45:06.818 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Releasing lock "refresh_cache-eda894cf-d32d-47f0-adc0-7e2f7fffb442" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:45:06 compute-0 nova_compute[260935]: 2025-10-11 08:45:06.819 2 DEBUG nova.compute.manager [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:45:06 compute-0 nova_compute[260935]: 2025-10-11 08:45:06.827 2 INFO nova.virt.libvirt.driver [-] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Instance destroyed successfully.
Oct 11 08:45:06 compute-0 nova_compute[260935]: 2025-10-11 08:45:06.828 2 DEBUG nova.objects.instance [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lazy-loading 'resources' on Instance uuid eda894cf-d32d-47f0-adc0-7e2f7fffb442 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:45:07 compute-0 ceph-mon[74313]: pgmap v1152: 321 pgs: 321 active+clean; 233 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.9 MiB/s wr, 206 op/s
Oct 11 08:45:07 compute-0 nova_compute[260935]: 2025-10-11 08:45:07.387 2 INFO nova.virt.libvirt.driver [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Deleting instance files /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442_del
Oct 11 08:45:07 compute-0 nova_compute[260935]: 2025-10-11 08:45:07.389 2 INFO nova.virt.libvirt.driver [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Deletion of /var/lib/nova/instances/eda894cf-d32d-47f0-adc0-7e2f7fffb442_del complete
Oct 11 08:45:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 200 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 251 op/s
Oct 11 08:45:08 compute-0 nova_compute[260935]: 2025-10-11 08:45:08.108 2 INFO nova.compute.manager [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Took 1.29 seconds to destroy the instance on the hypervisor.
Oct 11 08:45:08 compute-0 nova_compute[260935]: 2025-10-11 08:45:08.109 2 DEBUG oslo.service.loopingcall [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:45:08 compute-0 nova_compute[260935]: 2025-10-11 08:45:08.109 2 DEBUG nova.compute.manager [-] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:45:08 compute-0 nova_compute[260935]: 2025-10-11 08:45:08.110 2 DEBUG nova.network.neutron [-] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:45:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:45:08 compute-0 nova_compute[260935]: 2025-10-11 08:45:08.784 2 DEBUG nova.network.neutron [-] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:45:08 compute-0 nova_compute[260935]: 2025-10-11 08:45:08.811 2 DEBUG nova.network.neutron [-] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:45:08 compute-0 nova_compute[260935]: 2025-10-11 08:45:08.876 2 INFO nova.compute.manager [-] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Took 0.77 seconds to deallocate network for instance.
Oct 11 08:45:08 compute-0 nova_compute[260935]: 2025-10-11 08:45:08.998 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:08 compute-0 nova_compute[260935]: 2025-10-11 08:45:08.999 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.038 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "aac0adcc-167d-400a-a04a-93767356cc9c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.039 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.039 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.040 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.040 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.042 2 INFO nova.compute.manager [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Terminating instance
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.044 2 DEBUG nova.compute.manager [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.098 2 DEBUG oslo_concurrency.processutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:09 compute-0 kernel: tap6e75116e-10 (unregistering): left promiscuous mode
Oct 11 08:45:09 compute-0 NetworkManager[44960]: <info>  [1760172309.1104] device (tap6e75116e-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:45:09 compute-0 ovn_controller[152945]: 2025-10-11T08:45:09Z|00032|binding|INFO|Releasing lport 6e75116e-1034-4d1a-8320-6755ee57c51f from this chassis (sb_readonly=0)
Oct 11 08:45:09 compute-0 ovn_controller[152945]: 2025-10-11T08:45:09Z|00033|binding|INFO|Setting lport 6e75116e-1034-4d1a-8320-6755ee57c51f down in Southbound
Oct 11 08:45:09 compute-0 ovn_controller[152945]: 2025-10-11T08:45:09Z|00034|binding|INFO|Removing iface tap6e75116e-10 ovn-installed in OVS
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.160 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:7b:bc 10.100.0.9'], port_security=['fa:16:3e:14:7b:bc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'aac0adcc-167d-400a-a04a-93767356cc9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e6573eaf6684f1c99a553fd46667a67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6b6e9add-724a-49a4-90de-2f4e4912ca78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be56cb0d-617a-4b33-ac01-b32b133373b2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=6e75116e-1034-4d1a-8320-6755ee57c51f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:45:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.162 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 6e75116e-1034-4d1a-8320-6755ee57c51f in datapath 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c unbound from our chassis
Oct 11 08:45:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.163 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:45:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.166 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0e174ce2-6db6-4c1e-a370-3727af660144]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.167 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c namespace which is not needed anymore
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:09 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct 11 08:45:09 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 14.392s CPU time.
Oct 11 08:45:09 compute-0 systemd-machined[215705]: Machine qemu-2-instance-00000002 terminated.
Oct 11 08:45:09 compute-0 ceph-mon[74313]: pgmap v1153: 321 pgs: 321 active+clean; 200 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 251 op/s
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.302 2 INFO nova.virt.libvirt.driver [-] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Instance destroyed successfully.
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.303 2 DEBUG nova.objects.instance [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lazy-loading 'resources' on Instance uuid aac0adcc-167d-400a-a04a-93767356cc9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.334 2 DEBUG nova.virt.libvirt.vif [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:44:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-189754419',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-189754419',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(25),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-189754419',id=2,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=25,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbdle8Jb2CGpQv4SpbWMUmm3hiSixEA0s5KapvSItFHc3x9bUVgKTLkLVOIdrKirkN+vbb4fO8g4FzWk9eXgUjBokGNviLy6eM2HjHbeF2CcHrjTEf0RujQy30DD/WL/w==',key_name='tempest-keypair-358550695',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:44:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e6573eaf6684f1c99a553fd46667a67',ramdisk_id='',reservation_id='r-bh0dixv5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:44:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='113798b24d1e4a9e91db94214d254ea9',uuid=aac0adcc-167d-400a-a04a-93767356cc9c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.335 2 DEBUG nova.network.os_vif_util [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converting VIF {"id": "6e75116e-1034-4d1a-8320-6755ee57c51f", "address": "fa:16:3e:14:7b:bc", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e75116e-10", "ovs_interfaceid": "6e75116e-1034-4d1a-8320-6755ee57c51f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.337 2 DEBUG nova.network.os_vif_util [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:14:7b:bc,bridge_name='br-int',has_traffic_filtering=True,id=6e75116e-1034-4d1a-8320-6755ee57c51f,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e75116e-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.338 2 DEBUG os_vif [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:7b:bc,bridge_name='br-int',has_traffic_filtering=True,id=6e75116e-1034-4d1a-8320-6755ee57c51f,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e75116e-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e75116e-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.350 2 INFO os_vif [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:7b:bc,bridge_name='br-int',has_traffic_filtering=True,id=6e75116e-1034-4d1a-8320-6755ee57c51f,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e75116e-10')
Oct 11 08:45:09 compute-0 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[277655]: [NOTICE]   (277673) : haproxy version is 2.8.14-c23fe91
Oct 11 08:45:09 compute-0 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[277655]: [NOTICE]   (277673) : path to executable is /usr/sbin/haproxy
Oct 11 08:45:09 compute-0 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[277655]: [WARNING]  (277673) : Exiting Master process...
Oct 11 08:45:09 compute-0 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[277655]: [ALERT]    (277673) : Current worker (277679) exited with code 143 (Terminated)
Oct 11 08:45:09 compute-0 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[277655]: [WARNING]  (277673) : All workers exited. Exiting... (0)
Oct 11 08:45:09 compute-0 systemd[1]: libpod-a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021.scope: Deactivated successfully.
Oct 11 08:45:09 compute-0 podman[278753]: 2025-10-11 08:45:09.396159077 +0000 UTC m=+0.081022727 container died a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 08:45:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021-userdata-shm.mount: Deactivated successfully.
Oct 11 08:45:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1db26376507664bf223ae5a9650b1681bc0d81b11369be060261805baad7082-merged.mount: Deactivated successfully.
Oct 11 08:45:09 compute-0 podman[278753]: 2025-10-11 08:45:09.450545701 +0000 UTC m=+0.135409321 container cleanup a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:45:09 compute-0 systemd[1]: libpod-conmon-a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021.scope: Deactivated successfully.
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.506 2 DEBUG nova.compute.manager [req-24596e1b-54d8-430f-a881-34e3e413bae7 req-ff93cf0c-ff3e-47f1-bcdf-4ade579bceb4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received event network-vif-unplugged-6e75116e-1034-4d1a-8320-6755ee57c51f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.507 2 DEBUG oslo_concurrency.lockutils [req-24596e1b-54d8-430f-a881-34e3e413bae7 req-ff93cf0c-ff3e-47f1-bcdf-4ade579bceb4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.507 2 DEBUG oslo_concurrency.lockutils [req-24596e1b-54d8-430f-a881-34e3e413bae7 req-ff93cf0c-ff3e-47f1-bcdf-4ade579bceb4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.508 2 DEBUG oslo_concurrency.lockutils [req-24596e1b-54d8-430f-a881-34e3e413bae7 req-ff93cf0c-ff3e-47f1-bcdf-4ade579bceb4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.508 2 DEBUG nova.compute.manager [req-24596e1b-54d8-430f-a881-34e3e413bae7 req-ff93cf0c-ff3e-47f1-bcdf-4ade579bceb4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] No waiting events found dispatching network-vif-unplugged-6e75116e-1034-4d1a-8320-6755ee57c51f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.508 2 DEBUG nova.compute.manager [req-24596e1b-54d8-430f-a881-34e3e413bae7 req-ff93cf0c-ff3e-47f1-bcdf-4ade579bceb4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received event network-vif-unplugged-6e75116e-1034-4d1a-8320-6755ee57c51f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:45:09 compute-0 podman[278803]: 2025-10-11 08:45:09.54004529 +0000 UTC m=+0.060254874 container remove a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:45:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.551 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0b970dcc-214c-4c21-86c2-a9f3dd5f8326]: (4, ('Sat Oct 11 08:45:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c (a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021)\na93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021\nSat Oct 11 08:45:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c (a93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021)\na93ddd922ca452d60dcc1226107836e2108ccbbff976a3b0320be2288dbe7021\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.553 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[681e641f-5b45-4020-a22c-5fa25fbc560d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.554 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bd537c2-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:09 compute-0 kernel: tap3bd537c2-e0: left promiscuous mode
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.590 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae9d6bb-8a27-4608-a18d-2167cea1d562]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:45:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4137421471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.617 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e95509e6-4959-4e38-b072-966f56cfd040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.618 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6007fe18-9184-46a9-a506-76bfb5504a19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.637 2 DEBUG oslo_concurrency.processutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.643 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3c61df-f576-4810-b0e3-6d998614eba6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 413674, 'reachable_time': 18027, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278820, 'error': None, 'target': 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.643 2 DEBUG nova.compute.provider_tree [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:45:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.659 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:45:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d3bd537c2\x2de6ec\x2d4d00\x2dac83\x2dfbf5d86f963c.mount: Deactivated successfully.
Oct 11 08:45:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:09.661 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[98a7bbe5-2353-4aaa-b853-c8af1265841a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.671 2 DEBUG nova.scheduler.client.report [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:45:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 200 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 224 op/s
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.757 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.814 2 INFO nova.scheduler.client.report [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Deleted allocations for instance eda894cf-d32d-47f0-adc0-7e2f7fffb442
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.842 2 INFO nova.virt.libvirt.driver [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Deleting instance files /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c_del
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.843 2 INFO nova.virt.libvirt.driver [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Deletion of /var/lib/nova/instances/aac0adcc-167d-400a-a04a-93767356cc9c_del complete
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.960 2 DEBUG oslo_concurrency.lockutils [None req-bf3d529e-917c-4ed4-8397-ab351ee6a185 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "eda894cf-d32d-47f0-adc0-7e2f7fffb442" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.974 2 INFO nova.compute.manager [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Took 0.93 seconds to destroy the instance on the hypervisor.
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.975 2 DEBUG oslo.service.loopingcall [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.975 2 DEBUG nova.compute.manager [-] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:45:09 compute-0 nova_compute[260935]: 2025-10-11 08:45:09.976 2 DEBUG nova.network.neutron [-] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:45:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4137421471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.147 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "10e2549e-21d4-44fb-acbf-9104ec32970f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.148 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "10e2549e-21d4-44fb-acbf-9104ec32970f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.148 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "10e2549e-21d4-44fb-acbf-9104ec32970f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.149 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "10e2549e-21d4-44fb-acbf-9104ec32970f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.150 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "10e2549e-21d4-44fb-acbf-9104ec32970f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.152 2 INFO nova.compute.manager [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Terminating instance
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.153 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "refresh_cache-10e2549e-21d4-44fb-acbf-9104ec32970f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.153 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquired lock "refresh_cache-10e2549e-21d4-44fb-acbf-9104ec32970f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.154 2 DEBUG nova.network.neutron [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:11 compute-0 ceph-mon[74313]: pgmap v1154: 321 pgs: 321 active+clean; 200 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 224 op/s
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.451 2 DEBUG nova.network.neutron [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.481 2 DEBUG nova.network.neutron [-] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.662 2 INFO nova.compute.manager [-] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Took 1.69 seconds to deallocate network for instance.
Oct 11 08:45:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 200 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 224 op/s
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.786 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.787 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.853 2 DEBUG oslo_concurrency.processutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.960 2 DEBUG nova.network.neutron [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.987 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Releasing lock "refresh_cache-10e2549e-21d4-44fb-acbf-9104ec32970f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:45:11 compute-0 nova_compute[260935]: 2025-10-11 08:45:11.989 2 DEBUG nova.compute.manager [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.084 2 DEBUG nova.compute.manager [req-8c91d96d-1cc6-445b-b65a-ba8c62dee7d5 req-28ee356f-e437-4e2e-8d74-2cbd9f1a9b9a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received event network-vif-deleted-6e75116e-1034-4d1a-8320-6755ee57c51f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:45:12 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct 11 08:45:12 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 13.950s CPU time.
Oct 11 08:45:12 compute-0 systemd-machined[215705]: Machine qemu-3-instance-00000003 terminated.
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.153 2 DEBUG nova.compute.manager [req-8fed93f8-1714-4d08-aa95-f24b0520a05d req-7c7fb12b-bb9f-46ea-9ce2-bcc7bc88eaab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received event network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.153 2 DEBUG oslo_concurrency.lockutils [req-8fed93f8-1714-4d08-aa95-f24b0520a05d req-7c7fb12b-bb9f-46ea-9ce2-bcc7bc88eaab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.154 2 DEBUG oslo_concurrency.lockutils [req-8fed93f8-1714-4d08-aa95-f24b0520a05d req-7c7fb12b-bb9f-46ea-9ce2-bcc7bc88eaab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.155 2 DEBUG oslo_concurrency.lockutils [req-8fed93f8-1714-4d08-aa95-f24b0520a05d req-7c7fb12b-bb9f-46ea-9ce2-bcc7bc88eaab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.155 2 DEBUG nova.compute.manager [req-8fed93f8-1714-4d08-aa95-f24b0520a05d req-7c7fb12b-bb9f-46ea-9ce2-bcc7bc88eaab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] No waiting events found dispatching network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.156 2 WARNING nova.compute.manager [req-8fed93f8-1714-4d08-aa95-f24b0520a05d req-7c7fb12b-bb9f-46ea-9ce2-bcc7bc88eaab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Received unexpected event network-vif-plugged-6e75116e-1034-4d1a-8320-6755ee57c51f for instance with vm_state deleted and task_state None.
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.221 2 INFO nova.virt.libvirt.driver [-] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Instance destroyed successfully.
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.222 2 DEBUG nova.objects.instance [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lazy-loading 'resources' on Instance uuid 10e2549e-21d4-44fb-acbf-9104ec32970f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:45:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:45:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1138488383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.377 2 DEBUG oslo_concurrency.processutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.385 2 DEBUG nova.compute.provider_tree [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.422 2 DEBUG nova.scheduler.client.report [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.563 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.760 2 INFO nova.virt.libvirt.driver [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Deleting instance files /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f_del
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.761 2 INFO nova.virt.libvirt.driver [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Deletion of /var/lib/nova/instances/10e2549e-21d4-44fb-acbf-9104ec32970f_del complete
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.916 2 INFO nova.compute.manager [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Took 0.93 seconds to destroy the instance on the hypervisor.
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.917 2 DEBUG oslo.service.loopingcall [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.918 2 DEBUG nova.compute.manager [-] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:45:12 compute-0 nova_compute[260935]: 2025-10-11 08:45:12.918 2 DEBUG nova.network.neutron [-] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:45:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:45:13 compute-0 ceph-mon[74313]: pgmap v1155: 321 pgs: 321 active+clean; 200 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 224 op/s
Oct 11 08:45:13 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1138488383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:13 compute-0 nova_compute[260935]: 2025-10-11 08:45:13.388 2 INFO nova.scheduler.client.report [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Deleted allocations for instance aac0adcc-167d-400a-a04a-93767356cc9c
Oct 11 08:45:13 compute-0 nova_compute[260935]: 2025-10-11 08:45:13.400 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "f7475a32-2490-4f8c-a700-a123973da072" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:13 compute-0 nova_compute[260935]: 2025-10-11 08:45:13.400 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:13 compute-0 nova_compute[260935]: 2025-10-11 08:45:13.441 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:45:13 compute-0 nova_compute[260935]: 2025-10-11 08:45:13.450 2 DEBUG nova.network.neutron [-] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:45:13 compute-0 nova_compute[260935]: 2025-10-11 08:45:13.595 2 DEBUG oslo_concurrency.lockutils [None req-a395bd6b-dd8f-4041-9548-db4210a48d17 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "aac0adcc-167d-400a-a04a-93767356cc9c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:13 compute-0 nova_compute[260935]: 2025-10-11 08:45:13.639 2 DEBUG nova.network.neutron [-] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:45:13 compute-0 nova_compute[260935]: 2025-10-11 08:45:13.656 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:13 compute-0 nova_compute[260935]: 2025-10-11 08:45:13.656 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:13 compute-0 nova_compute[260935]: 2025-10-11 08:45:13.665 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:45:13 compute-0 nova_compute[260935]: 2025-10-11 08:45:13.665 2 INFO nova.compute.claims [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:45:13 compute-0 nova_compute[260935]: 2025-10-11 08:45:13.703 2 INFO nova.compute.manager [-] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Took 0.79 seconds to deallocate network for instance.
Oct 11 08:45:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 101 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 266 op/s
Oct 11 08:45:13 compute-0 nova_compute[260935]: 2025-10-11 08:45:13.805 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:13 compute-0 nova_compute[260935]: 2025-10-11 08:45:13.873 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:45:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1588867446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:14 compute-0 nova_compute[260935]: 2025-10-11 08:45:14.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:14 compute-0 nova_compute[260935]: 2025-10-11 08:45:14.356 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:14 compute-0 nova_compute[260935]: 2025-10-11 08:45:14.364 2 DEBUG nova.compute.provider_tree [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:45:14 compute-0 nova_compute[260935]: 2025-10-11 08:45:14.399 2 DEBUG nova.scheduler.client.report [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:45:14 compute-0 nova_compute[260935]: 2025-10-11 08:45:14.464 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:14 compute-0 nova_compute[260935]: 2025-10-11 08:45:14.466 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:45:14 compute-0 nova_compute[260935]: 2025-10-11 08:45:14.470 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:14 compute-0 nova_compute[260935]: 2025-10-11 08:45:14.570 2 DEBUG oslo_concurrency.processutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:14 compute-0 nova_compute[260935]: 2025-10-11 08:45:14.627 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:45:14 compute-0 nova_compute[260935]: 2025-10-11 08:45:14.627 2 DEBUG nova.network.neutron [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:45:14 compute-0 nova_compute[260935]: 2025-10-11 08:45:14.710 2 INFO nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:45:14 compute-0 nova_compute[260935]: 2025-10-11 08:45:14.880 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:45:14 compute-0 nova_compute[260935]: 2025-10-11 08:45:14.891 2 DEBUG nova.policy [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '113798b24d1e4a9e91db94214d254ea9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e6573eaf6684f1c99a553fd46667a67', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:45:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:45:15 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3775269212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.057 2 DEBUG oslo_concurrency.processutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.065 2 DEBUG nova.compute.provider_tree [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.101 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.103 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.104 2 INFO nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Creating image(s)
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.134 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.171 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:15.176 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:15.177 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:15.177 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.209 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.214 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.252 2 DEBUG nova.scheduler.client.report [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:45:15 compute-0 ceph-mon[74313]: pgmap v1156: 321 pgs: 321 active+clean; 101 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 266 op/s
Oct 11 08:45:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1588867446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3775269212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.281 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.304 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.305 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.306 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.307 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.344 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.348 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f7475a32-2490-4f8c-a700-a123973da072_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.365 2 INFO nova.scheduler.client.report [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Deleted allocations for instance 10e2549e-21d4-44fb-acbf-9104ec32970f
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.479 2 DEBUG oslo_concurrency.lockutils [None req-e09cbfb0-1615-487f-aa8c-0993df28a232 b1cf6609831e4cd3b6261e999c3c068e 507d5ff420164426beedc0ce7977570a - - default default] Lock "10e2549e-21d4-44fb-acbf-9104ec32970f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.614 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f7475a32-2490-4f8c-a700-a123973da072_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.697 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] resizing rbd image f7475a32-2490-4f8c-a700-a123973da072_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:45:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 101 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 154 KiB/s wr, 86 op/s
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.818 2 DEBUG nova.objects.instance [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lazy-loading 'migration_context' on Instance uuid f7475a32-2490-4f8c-a700-a123973da072 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.884 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.916 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.921 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.922 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.922 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.963 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:15 compute-0 nova_compute[260935]: 2025-10-11 08:45:15.964 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:16 compute-0 nova_compute[260935]: 2025-10-11 08:45:16.025 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:16 compute-0 nova_compute[260935]: 2025-10-11 08:45:16.026 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:16 compute-0 nova_compute[260935]: 2025-10-11 08:45:16.058 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:16 compute-0 nova_compute[260935]: 2025-10-11 08:45:16.063 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 f7475a32-2490-4f8c-a700-a123973da072_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:16 compute-0 nova_compute[260935]: 2025-10-11 08:45:16.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:16 compute-0 nova_compute[260935]: 2025-10-11 08:45:16.269 2 DEBUG nova.network.neutron [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Successfully created port: 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:45:16 compute-0 podman[279153]: 2025-10-11 08:45:16.798753428 +0000 UTC m=+0.096487020 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Oct 11 08:45:17 compute-0 nova_compute[260935]: 2025-10-11 08:45:17.010 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 f7475a32-2490-4f8c-a700-a123973da072_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.947s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:17 compute-0 nova_compute[260935]: 2025-10-11 08:45:17.127 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:45:17 compute-0 nova_compute[260935]: 2025-10-11 08:45:17.128 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Ensure instance console log exists: /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:45:17 compute-0 nova_compute[260935]: 2025-10-11 08:45:17.129 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:17 compute-0 nova_compute[260935]: 2025-10-11 08:45:17.129 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:17 compute-0 nova_compute[260935]: 2025-10-11 08:45:17.130 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:17 compute-0 ceph-mon[74313]: pgmap v1157: 321 pgs: 321 active+clean; 101 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 154 KiB/s wr, 86 op/s
Oct 11 08:45:17 compute-0 nova_compute[260935]: 2025-10-11 08:45:17.376 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172302.3736513, eda894cf-d32d-47f0-adc0-7e2f7fffb442 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:45:17 compute-0 nova_compute[260935]: 2025-10-11 08:45:17.377 2 INFO nova.compute.manager [-] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] VM Stopped (Lifecycle Event)
Oct 11 08:45:17 compute-0 nova_compute[260935]: 2025-10-11 08:45:17.401 2 DEBUG nova.compute.manager [None req-827310ad-b4c6-4c57-8b48-bb209e1e2250 - - - - - -] [instance: eda894cf-d32d-47f0-adc0-7e2f7fffb442] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 1.9 MiB/s wr, 144 op/s
Oct 11 08:45:17 compute-0 nova_compute[260935]: 2025-10-11 08:45:17.917 2 DEBUG nova.network.neutron [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Successfully updated port: 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:45:17 compute-0 nova_compute[260935]: 2025-10-11 08:45:17.935 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:45:17 compute-0 nova_compute[260935]: 2025-10-11 08:45:17.936 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquired lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:45:17 compute-0 nova_compute[260935]: 2025-10-11 08:45:17.936 2 DEBUG nova.network.neutron [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:45:17 compute-0 nova_compute[260935]: 2025-10-11 08:45:17.996 2 DEBUG nova.compute.manager [req-d0b2a4c7-120b-4413-8dd7-d4efbb163471 req-4ef45372-d99c-498f-8de6-bcc59dab8244 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received event network-changed-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:45:17 compute-0 nova_compute[260935]: 2025-10-11 08:45:17.996 2 DEBUG nova.compute.manager [req-d0b2a4c7-120b-4413-8dd7-d4efbb163471 req-4ef45372-d99c-498f-8de6-bcc59dab8244 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Refreshing instance network info cache due to event network-changed-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:45:17 compute-0 nova_compute[260935]: 2025-10-11 08:45:17.997 2 DEBUG oslo_concurrency.lockutils [req-d0b2a4c7-120b-4413-8dd7-d4efbb163471 req-4ef45372-d99c-498f-8de6-bcc59dab8244 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:45:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:45:18 compute-0 nova_compute[260935]: 2025-10-11 08:45:18.219 2 DEBUG nova.network.neutron [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.189 2 DEBUG nova.network.neutron [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Updating instance_info_cache with network_info: [{"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.266 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Releasing lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.267 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Instance network_info: |[{"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.268 2 DEBUG oslo_concurrency.lockutils [req-d0b2a4c7-120b-4413-8dd7-d4efbb163471 req-4ef45372-d99c-498f-8de6-bcc59dab8244 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.269 2 DEBUG nova.network.neutron [req-d0b2a4c7-120b-4413-8dd7-d4efbb163471 req-4ef45372-d99c-498f-8de6-bcc59dab8244 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Refreshing network info cache for port 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.275 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Start _get_guest_xml network_info=[{"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [{'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'guest_format': None, 'size': 1, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:45:19 compute-0 ceph-mon[74313]: pgmap v1158: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 1.9 MiB/s wr, 144 op/s
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.282 2 WARNING nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.287 2 DEBUG nova.virt.libvirt.host [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.288 2 DEBUG nova.virt.libvirt.host [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.294 2 DEBUG nova.virt.libvirt.host [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.294 2 DEBUG nova.virt.libvirt.host [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.295 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.295 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:44:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='2057106209',id=24,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-455650691',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.296 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.296 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.297 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.297 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.297 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.298 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.298 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.298 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.299 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.299 2 DEBUG nova.virt.hardware [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.303 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1159: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct 11 08:45:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:45:19 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2120444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.794 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:19 compute-0 nova_compute[260935]: 2025-10-11 08:45:19.795 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:45:20 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3174672930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:20 compute-0 nova_compute[260935]: 2025-10-11 08:45:20.278 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:20 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2120444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:20 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3174672930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:20 compute-0 nova_compute[260935]: 2025-10-11 08:45:20.317 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:20 compute-0 nova_compute[260935]: 2025-10-11 08:45:20.323 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:45:20 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2526477960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:20 compute-0 nova_compute[260935]: 2025-10-11 08:45:20.842 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:20 compute-0 nova_compute[260935]: 2025-10-11 08:45:20.843 2 DEBUG nova.virt.libvirt.vif [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:45:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1693169676',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1693169676',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(24),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1693169676',id=5,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=24,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbdle8Jb2CGpQv4SpbWMUmm3hiSixEA0s5KapvSItFHc3x9bUVgKTLkLVOIdrKirkN+vbb4fO8g4FzWk9eXgUjBokGNviLy6eM2HjHbeF2CcHrjTEf0RujQy30DD/WL/w==',key_name='tempest-keypair-358550695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e6573eaf6684f1c99a553fd46667a67',ramdisk_id='',reservation_id='r-kd8cf275',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:45:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='113798b24d1e4a9e91db94214d254ea9',uuid=f7475a32-2490-4f8c-a700-a123973da072,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:45:20 compute-0 nova_compute[260935]: 2025-10-11 08:45:20.843 2 DEBUG nova.network.os_vif_util [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converting VIF {"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:45:20 compute-0 nova_compute[260935]: 2025-10-11 08:45:20.844 2 DEBUG nova.network.os_vif_util [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:e2:77,bridge_name='br-int',has_traffic_filtering=True,id=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap865f9eb5-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:45:20 compute-0 nova_compute[260935]: 2025-10-11 08:45:20.845 2 DEBUG nova.objects.instance [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lazy-loading 'pci_devices' on Instance uuid f7475a32-2490-4f8c-a700-a123973da072 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.048 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:45:21 compute-0 nova_compute[260935]:   <uuid>f7475a32-2490-4f8c-a700-a123973da072</uuid>
Oct 11 08:45:21 compute-0 nova_compute[260935]:   <name>instance-00000005</name>
Oct 11 08:45:21 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:45:21 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:45:21 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1693169676</nova:name>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:45:19</nova:creationTime>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <nova:flavor name="tempest-flavor_with_ephemeral_1-455650691">
Oct 11 08:45:21 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:45:21 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:45:21 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:45:21 compute-0 nova_compute[260935]:         <nova:ephemeral>1</nova:ephemeral>
Oct 11 08:45:21 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:45:21 compute-0 nova_compute[260935]:         <nova:user uuid="113798b24d1e4a9e91db94214d254ea9">tempest-ServersWithSpecificFlavorTestJSON-1781130606-project-member</nova:user>
Oct 11 08:45:21 compute-0 nova_compute[260935]:         <nova:project uuid="4e6573eaf6684f1c99a553fd46667a67">tempest-ServersWithSpecificFlavorTestJSON-1781130606</nova:project>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:45:21 compute-0 nova_compute[260935]:         <nova:port uuid="865f9eb5-5e9d-40e5-abb3-123fcf5d05c3">
Oct 11 08:45:21 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:45:21 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:45:21 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <system>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <entry name="serial">f7475a32-2490-4f8c-a700-a123973da072</entry>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <entry name="uuid">f7475a32-2490-4f8c-a700-a123973da072</entry>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     </system>
Oct 11 08:45:21 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:45:21 compute-0 nova_compute[260935]:   <os>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:   </os>
Oct 11 08:45:21 compute-0 nova_compute[260935]:   <features>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:   </features>
Oct 11 08:45:21 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:45:21 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:45:21 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f7475a32-2490-4f8c-a700-a123973da072_disk">
Oct 11 08:45:21 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       </source>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:45:21 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f7475a32-2490-4f8c-a700-a123973da072_disk.eph0">
Oct 11 08:45:21 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       </source>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:45:21 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <target dev="vdb" bus="virtio"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f7475a32-2490-4f8c-a700-a123973da072_disk.config">
Oct 11 08:45:21 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       </source>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:45:21 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:b3:e2:77"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <target dev="tap865f9eb5-5e"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072/console.log" append="off"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <video>
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     </video>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:45:21 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:45:21 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:45:21 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:45:21 compute-0 nova_compute[260935]: </domain>
Oct 11 08:45:21 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.049 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Preparing to wait for external event network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.049 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "f7475a32-2490-4f8c-a700-a123973da072-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.050 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.050 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.051 2 DEBUG nova.virt.libvirt.vif [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:45:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1693169676',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1693169676',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(24),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1693169676',id=5,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=24,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbdle8Jb2CGpQv4SpbWMUmm3hiSixEA0s5KapvSItFHc3x9bUVgKTLkLVOIdrKirkN+vbb4fO8g4FzWk9eXgUjBokGNviLy6eM2HjHbeF2CcHrjTEf0RujQy30DD/WL/w==',key_name='tempest-keypair-358550695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e6573eaf6684f1c99a553fd46667a67',ramdisk_id='',reservation_id='r-kd8cf275',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:45:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='113798b24d1e4a9e91db94214d254ea9',uuid=f7475a32-2490-4f8c-a700-a123973da072,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.051 2 DEBUG nova.network.os_vif_util [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converting VIF {"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.052 2 DEBUG nova.network.os_vif_util [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:e2:77,bridge_name='br-int',has_traffic_filtering=True,id=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap865f9eb5-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.052 2 DEBUG os_vif [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:e2:77,bridge_name='br-int',has_traffic_filtering=True,id=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap865f9eb5-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.053 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.053 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.057 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap865f9eb5-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.058 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap865f9eb5-5e, col_values=(('external_ids', {'iface-id': '865f9eb5-5e9d-40e5-abb3-123fcf5d05c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:e2:77', 'vm-uuid': 'f7475a32-2490-4f8c-a700-a123973da072'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:21 compute-0 NetworkManager[44960]: <info>  [1760172321.0609] manager: (tap865f9eb5-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.068 2 INFO os_vif [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:e2:77,bridge_name='br-int',has_traffic_filtering=True,id=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap865f9eb5-5e')
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.192 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.193 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.193 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.194 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] No VIF found with MAC fa:16:3e:b3:e2:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.194 2 INFO nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Using config drive
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.213 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:21 compute-0 ceph-mon[74313]: pgmap v1159: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct 11 08:45:21 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2526477960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.565 2 DEBUG nova.network.neutron [req-d0b2a4c7-120b-4413-8dd7-d4efbb163471 req-4ef45372-d99c-498f-8de6-bcc59dab8244 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Updated VIF entry in instance network info cache for port 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.566 2 DEBUG nova.network.neutron [req-d0b2a4c7-120b-4413-8dd7-d4efbb163471 req-4ef45372-d99c-498f-8de6-bcc59dab8244 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Updating instance_info_cache with network_info: [{"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.595 2 DEBUG oslo_concurrency.lockutils [req-d0b2a4c7-120b-4413-8dd7-d4efbb163471 req-4ef45372-d99c-498f-8de6-bcc59dab8244 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:45:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.812 2 INFO nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Creating config drive at /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072/disk.config
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.823 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9dnsv9bj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:21 compute-0 nova_compute[260935]: 2025-10-11 08:45:21.970 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9dnsv9bj" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:22 compute-0 nova_compute[260935]: 2025-10-11 08:45:22.010 2 DEBUG nova.storage.rbd_utils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] rbd image f7475a32-2490-4f8c-a700-a123973da072_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:22 compute-0 nova_compute[260935]: 2025-10-11 08:45:22.015 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072/disk.config f7475a32-2490-4f8c-a700-a123973da072_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:22 compute-0 nova_compute[260935]: 2025-10-11 08:45:22.212 2 DEBUG oslo_concurrency.processutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072/disk.config f7475a32-2490-4f8c-a700-a123973da072_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:22 compute-0 nova_compute[260935]: 2025-10-11 08:45:22.213 2 INFO nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Deleting local config drive /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072/disk.config because it was imported into RBD.
Oct 11 08:45:22 compute-0 kernel: tap865f9eb5-5e: entered promiscuous mode
Oct 11 08:45:22 compute-0 NetworkManager[44960]: <info>  [1760172322.2965] manager: (tap865f9eb5-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Oct 11 08:45:22 compute-0 systemd-udevd[279381]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:45:22 compute-0 ovn_controller[152945]: 2025-10-11T08:45:22Z|00035|binding|INFO|Claiming lport 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 for this chassis.
Oct 11 08:45:22 compute-0 ovn_controller[152945]: 2025-10-11T08:45:22Z|00036|binding|INFO|865f9eb5-5e9d-40e5-abb3-123fcf5d05c3: Claiming fa:16:3e:b3:e2:77 10.100.0.5
Oct 11 08:45:22 compute-0 nova_compute[260935]: 2025-10-11 08:45:22.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.339 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:e2:77 10.100.0.5'], port_security=['fa:16:3e:b3:e2:77 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f7475a32-2490-4f8c-a700-a123973da072', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e6573eaf6684f1c99a553fd46667a67', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6b6e9add-724a-49a4-90de-2f4e4912ca78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be56cb0d-617a-4b33-ac01-b32b133373b2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.341 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 in datapath 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c bound to our chassis
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.343 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c
Oct 11 08:45:22 compute-0 NetworkManager[44960]: <info>  [1760172322.3505] device (tap865f9eb5-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:45:22 compute-0 NetworkManager[44960]: <info>  [1760172322.3526] device (tap865f9eb5-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:45:22 compute-0 ovn_controller[152945]: 2025-10-11T08:45:22Z|00037|binding|INFO|Setting lport 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 ovn-installed in OVS
Oct 11 08:45:22 compute-0 ovn_controller[152945]: 2025-10-11T08:45:22Z|00038|binding|INFO|Setting lport 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 up in Southbound
Oct 11 08:45:22 compute-0 nova_compute[260935]: 2025-10-11 08:45:22.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.358 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[631cc7a8-da19-4338-a01a-216f8133933b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.360 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3bd537c2-e1 in ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.361 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3bd537c2-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.361 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aab7cde0-ea25-4dbe-8706-783277143a25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.363 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[03e8ee12-2d2a-4bfd-9a32-94df5d256216]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:22 compute-0 systemd-machined[215705]: New machine qemu-5-instance-00000005.
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.380 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[07caa198-bb3f-4a5c-8847-b29376a7c0cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:22 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.409 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c0030254-5388-4c01-b77d-7eb7c855fcd5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.451 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[64f0bc0a-8dd0-4c16-aa62-165b4c915280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.459 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aec6c34a-c46f-4eb5-8f2b-4e66b0d7e0cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:22 compute-0 NetworkManager[44960]: <info>  [1760172322.4603] manager: (tap3bd537c2-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Oct 11 08:45:22 compute-0 systemd-udevd[279385]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.514 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[10ad650c-676c-4aae-a0fc-c09c712fd1af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.519 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[093df9d6-d8ae-4ae9-836f-cc73b8214d57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:22 compute-0 NetworkManager[44960]: <info>  [1760172322.5496] device (tap3bd537c2-e0): carrier: link connected
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.557 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[417e01f0-0bb3-4f6f-9f80-6993e6d36889]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.580 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f938f22-7d77-4676-ac28-3e4b5c6f91bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bd537c2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:0a:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 416725, 'reachable_time': 21173, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279419, 'error': None, 'target': 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.604 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3b6aa32a-2c30-4ec9-9c96-7644db76c7fe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee9:a0a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 416725, 'tstamp': 416725}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279421, 'error': None, 'target': 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.628 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0bcd41de-b9e3-4e42-a038-93443849c994]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bd537c2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:0a:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 416725, 'reachable_time': 21173, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279429, 'error': None, 'target': 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.694 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b28a4340-2cca-4890-b5e8-1c5fd1e1b149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.788 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5a1a0619-01ef-4b86-a4b7-88318c919f6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.790 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bd537c2-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.791 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.793 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bd537c2-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:45:22 compute-0 kernel: tap3bd537c2-e0: entered promiscuous mode
Oct 11 08:45:22 compute-0 NetworkManager[44960]: <info>  [1760172322.7957] manager: (tap3bd537c2-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Oct 11 08:45:22 compute-0 nova_compute[260935]: 2025-10-11 08:45:22.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.804 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3bd537c2-e0, col_values=(('external_ids', {'iface-id': '6fcfb560-e2ca-4f85-8b53-5907aa538954'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:45:22 compute-0 nova_compute[260935]: 2025-10-11 08:45:22.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:22 compute-0 ovn_controller[152945]: 2025-10-11T08:45:22Z|00039|binding|INFO|Releasing lport 6fcfb560-e2ca-4f85-8b53-5907aa538954 from this chassis (sb_readonly=0)
Oct 11 08:45:22 compute-0 nova_compute[260935]: 2025-10-11 08:45:22.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.807 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3bd537c2-e6ec-4d00-ac83-fbf5d86f963c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3bd537c2-e6ec-4d00-ac83-fbf5d86f963c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.809 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b505fd38-519b-4e13-9460-1901d5386f16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.810 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/3bd537c2-e6ec-4d00-ac83-fbf5d86f963c.pid.haproxy
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:45:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:22.811 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'env', 'PROCESS_TAG=haproxy-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3bd537c2-e6ec-4d00-ac83-fbf5d86f963c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:45:22 compute-0 nova_compute[260935]: 2025-10-11 08:45:22.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.175 2 DEBUG nova.compute.manager [req-1df20f0a-a20a-486f-9d9f-c776073d1cfd req-c3ff16bc-b6c3-470c-a5ea-0721b2bf7552 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received event network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.177 2 DEBUG oslo_concurrency.lockutils [req-1df20f0a-a20a-486f-9d9f-c776073d1cfd req-c3ff16bc-b6c3-470c-a5ea-0721b2bf7552 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f7475a32-2490-4f8c-a700-a123973da072-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.177 2 DEBUG oslo_concurrency.lockutils [req-1df20f0a-a20a-486f-9d9f-c776073d1cfd req-c3ff16bc-b6c3-470c-a5ea-0721b2bf7552 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.178 2 DEBUG oslo_concurrency.lockutils [req-1df20f0a-a20a-486f-9d9f-c776073d1cfd req-c3ff16bc-b6c3-470c-a5ea-0721b2bf7552 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.179 2 DEBUG nova.compute.manager [req-1df20f0a-a20a-486f-9d9f-c776073d1cfd req-c3ff16bc-b6c3-470c-a5ea-0721b2bf7552 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Processing event network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:45:23 compute-0 podman[279514]: 2025-10-11 08:45:23.272169658 +0000 UTC m=+0.082567021 container create 7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 08:45:23 compute-0 systemd[1]: Started libpod-conmon-7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925.scope.
Oct 11 08:45:23 compute-0 podman[279514]: 2025-10-11 08:45:23.233934155 +0000 UTC m=+0.044331618 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:45:23 compute-0 ceph-mon[74313]: pgmap v1160: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct 11 08:45:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:45:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d8583738b5511fba56f82d6e92ea769c6441216f2fdd8ecdad4c60001a63ed/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:45:23 compute-0 podman[279514]: 2025-10-11 08:45:23.369062008 +0000 UTC m=+0.179459431 container init 7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 08:45:23 compute-0 podman[279514]: 2025-10-11 08:45:23.381302258 +0000 UTC m=+0.191699641 container start 7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 08:45:23 compute-0 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[279530]: [NOTICE]   (279546) : New worker (279555) forked
Oct 11 08:45:23 compute-0 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[279530]: [NOTICE]   (279546) : Loading success.
Oct 11 08:45:23 compute-0 podman[279527]: 2025-10-11 08:45:23.459279477 +0000 UTC m=+0.140561219 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid)
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.516 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172323.5152283, f7475a32-2490-4f8c-a700-a123973da072 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.516 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] VM Started (Lifecycle Event)
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.518 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.523 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.527 2 INFO nova.virt.libvirt.driver [-] [instance: f7475a32-2490-4f8c-a700-a123973da072] Instance spawned successfully.
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.527 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.545 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.548 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.560 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.561 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.561 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.561 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.562 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.562 2 DEBUG nova.virt.libvirt.driver [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.571 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.572 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172323.5154285, f7475a32-2490-4f8c-a700-a123973da072 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.572 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] VM Paused (Lifecycle Event)
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.631 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.636 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172323.5212383, f7475a32-2490-4f8c-a700-a123973da072 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.636 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] VM Resumed (Lifecycle Event)
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.651 2 INFO nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Took 8.55 seconds to spawn the instance on the hypervisor.
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.651 2 DEBUG nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.662 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.666 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.690 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:45:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 1.8 MiB/s wr, 106 op/s
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.726 2 INFO nova.compute.manager [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Took 10.09 seconds to build instance.
Oct 11 08:45:23 compute-0 nova_compute[260935]: 2025-10-11 08:45:23.749 2 DEBUG oslo_concurrency.lockutils [None req-acab66b7-c15c-4e9c-8acf-ed43b00cd150 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:24 compute-0 nova_compute[260935]: 2025-10-11 08:45:24.296 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172309.2947922, aac0adcc-167d-400a-a04a-93767356cc9c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:45:24 compute-0 nova_compute[260935]: 2025-10-11 08:45:24.297 2 INFO nova.compute.manager [-] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] VM Stopped (Lifecycle Event)
Oct 11 08:45:24 compute-0 nova_compute[260935]: 2025-10-11 08:45:24.313 2 DEBUG nova.compute.manager [None req-a9bb22a3-e5d0-418f-8117-09b2f2ece1f7 - - - - - -] [instance: aac0adcc-167d-400a-a04a-93767356cc9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:45:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:45:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:45:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:45:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:45:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:45:25 compute-0 nova_compute[260935]: 2025-10-11 08:45:25.322 2 DEBUG nova.compute.manager [req-7fe5b679-bfbf-42fd-ab36-4b1cc4800983 req-ab4f863e-d65b-4eb5-8918-ab4a2e1763f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received event network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:45:25 compute-0 nova_compute[260935]: 2025-10-11 08:45:25.322 2 DEBUG oslo_concurrency.lockutils [req-7fe5b679-bfbf-42fd-ab36-4b1cc4800983 req-ab4f863e-d65b-4eb5-8918-ab4a2e1763f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f7475a32-2490-4f8c-a700-a123973da072-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:25 compute-0 nova_compute[260935]: 2025-10-11 08:45:25.323 2 DEBUG oslo_concurrency.lockutils [req-7fe5b679-bfbf-42fd-ab36-4b1cc4800983 req-ab4f863e-d65b-4eb5-8918-ab4a2e1763f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:25 compute-0 nova_compute[260935]: 2025-10-11 08:45:25.323 2 DEBUG oslo_concurrency.lockutils [req-7fe5b679-bfbf-42fd-ab36-4b1cc4800983 req-ab4f863e-d65b-4eb5-8918-ab4a2e1763f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:25 compute-0 nova_compute[260935]: 2025-10-11 08:45:25.323 2 DEBUG nova.compute.manager [req-7fe5b679-bfbf-42fd-ab36-4b1cc4800983 req-ab4f863e-d65b-4eb5-8918-ab4a2e1763f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] No waiting events found dispatching network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:45:25 compute-0 nova_compute[260935]: 2025-10-11 08:45:25.324 2 WARNING nova.compute.manager [req-7fe5b679-bfbf-42fd-ab36-4b1cc4800983 req-ab4f863e-d65b-4eb5-8918-ab4a2e1763f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received unexpected event network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 for instance with vm_state active and task_state None.
Oct 11 08:45:25 compute-0 ceph-mon[74313]: pgmap v1161: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 1.8 MiB/s wr, 106 op/s
Oct 11 08:45:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 11 08:45:26 compute-0 nova_compute[260935]: 2025-10-11 08:45:26.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:26 compute-0 nova_compute[260935]: 2025-10-11 08:45:26.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:27 compute-0 nova_compute[260935]: 2025-10-11 08:45:27.218 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172312.2159898, 10e2549e-21d4-44fb-acbf-9104ec32970f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:45:27 compute-0 nova_compute[260935]: 2025-10-11 08:45:27.219 2 INFO nova.compute.manager [-] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] VM Stopped (Lifecycle Event)
Oct 11 08:45:27 compute-0 nova_compute[260935]: 2025-10-11 08:45:27.245 2 DEBUG nova.compute.manager [None req-71f112a0-808f-472e-9937-bd9cfa75e2b3 - - - - - -] [instance: 10e2549e-21d4-44fb-acbf-9104ec32970f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:27 compute-0 ceph-mon[74313]: pgmap v1162: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 11 08:45:27 compute-0 nova_compute[260935]: 2025-10-11 08:45:27.681 2 DEBUG nova.compute.manager [req-2ee9f526-9505-4e7b-8194-5e72df746c5e req-804d7d61-9295-4ba7-a554-718eb3fee364 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received event network-changed-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:45:27 compute-0 nova_compute[260935]: 2025-10-11 08:45:27.682 2 DEBUG nova.compute.manager [req-2ee9f526-9505-4e7b-8194-5e72df746c5e req-804d7d61-9295-4ba7-a554-718eb3fee364 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Refreshing instance network info cache due to event network-changed-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:45:27 compute-0 nova_compute[260935]: 2025-10-11 08:45:27.683 2 DEBUG oslo_concurrency.lockutils [req-2ee9f526-9505-4e7b-8194-5e72df746c5e req-804d7d61-9295-4ba7-a554-718eb3fee364 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:45:27 compute-0 nova_compute[260935]: 2025-10-11 08:45:27.684 2 DEBUG oslo_concurrency.lockutils [req-2ee9f526-9505-4e7b-8194-5e72df746c5e req-804d7d61-9295-4ba7-a554-718eb3fee364 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:45:27 compute-0 nova_compute[260935]: 2025-10-11 08:45:27.684 2 DEBUG nova.network.neutron [req-2ee9f526-9505-4e7b-8194-5e72df746c5e req-804d7d61-9295-4ba7-a554-718eb3fee364 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Refreshing network info cache for port 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:45:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Oct 11 08:45:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:45:29 compute-0 ceph-mon[74313]: pgmap v1163: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Oct 11 08:45:29 compute-0 nova_compute[260935]: 2025-10-11 08:45:29.702 2 DEBUG nova.network.neutron [req-2ee9f526-9505-4e7b-8194-5e72df746c5e req-804d7d61-9295-4ba7-a554-718eb3fee364 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Updated VIF entry in instance network info cache for port 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:45:29 compute-0 nova_compute[260935]: 2025-10-11 08:45:29.704 2 DEBUG nova.network.neutron [req-2ee9f526-9505-4e7b-8194-5e72df746c5e req-804d7d61-9295-4ba7-a554-718eb3fee364 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Updating instance_info_cache with network_info: [{"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:45:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Oct 11 08:45:29 compute-0 nova_compute[260935]: 2025-10-11 08:45:29.740 2 DEBUG oslo_concurrency.lockutils [req-2ee9f526-9505-4e7b-8194-5e72df746c5e req-804d7d61-9295-4ba7-a554-718eb3fee364 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f7475a32-2490-4f8c-a700-a123973da072" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:45:29 compute-0 podman[279565]: 2025-10-11 08:45:29.820716806 +0000 UTC m=+0.114121893 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 08:45:30 compute-0 podman[279586]: 2025-10-11 08:45:30.845561891 +0000 UTC m=+0.143877194 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 08:45:31 compute-0 nova_compute[260935]: 2025-10-11 08:45:31.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:31 compute-0 nova_compute[260935]: 2025-10-11 08:45:31.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:31 compute-0 ceph-mon[74313]: pgmap v1164: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Oct 11 08:45:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Oct 11 08:45:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:45:33 compute-0 nova_compute[260935]: 2025-10-11 08:45:33.160 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquiring lock "09442602-bb27-4205-98ce-79781d8ab62f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:33 compute-0 nova_compute[260935]: 2025-10-11 08:45:33.161 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "09442602-bb27-4205-98ce-79781d8ab62f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:33 compute-0 nova_compute[260935]: 2025-10-11 08:45:33.287 2 DEBUG nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:45:33 compute-0 ceph-mon[74313]: pgmap v1165: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Oct 11 08:45:33 compute-0 nova_compute[260935]: 2025-10-11 08:45:33.429 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:33 compute-0 nova_compute[260935]: 2025-10-11 08:45:33.429 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:33 compute-0 nova_compute[260935]: 2025-10-11 08:45:33.446 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:45:33 compute-0 nova_compute[260935]: 2025-10-11 08:45:33.447 2 INFO nova.compute.claims [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:45:33 compute-0 nova_compute[260935]: 2025-10-11 08:45:33.604 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Oct 11 08:45:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:45:34 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2656833175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.088 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.095 2 DEBUG nova.compute.provider_tree [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.135 2 DEBUG nova.scheduler.client.report [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.180 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.181 2 DEBUG nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:45:34 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.280 2 DEBUG nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.281 2 DEBUG nova.network.neutron [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:45:34 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2656833175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.505 2 INFO nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.561 2 DEBUG nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.694 2 DEBUG nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.696 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.697 2 INFO nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Creating image(s)
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.728 2 DEBUG nova.storage.rbd_utils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] rbd image 09442602-bb27-4205-98ce-79781d8ab62f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.759 2 DEBUG nova.storage.rbd_utils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] rbd image 09442602-bb27-4205-98ce-79781d8ab62f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.793 2 DEBUG nova.storage.rbd_utils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] rbd image 09442602-bb27-4205-98ce-79781d8ab62f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.798 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.887 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.889 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.891 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.892 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.927 2 DEBUG nova.storage.rbd_utils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] rbd image 09442602-bb27-4205-98ce-79781d8ab62f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:34 compute-0 nova_compute[260935]: 2025-10-11 08:45:34.934 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 09442602-bb27-4205-98ce-79781d8ab62f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.241 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 09442602-bb27-4205-98ce-79781d8ab62f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.328 2 DEBUG nova.storage.rbd_utils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] resizing rbd image 09442602-bb27-4205-98ce-79781d8ab62f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:45:35 compute-0 ceph-mon[74313]: pgmap v1166: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Oct 11 08:45:35 compute-0 ovn_controller[152945]: 2025-10-11T08:45:35Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:e2:77 10.100.0.5
Oct 11 08:45:35 compute-0 ovn_controller[152945]: 2025-10-11T08:45:35Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:e2:77 10.100.0.5
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.454 2 DEBUG nova.objects.instance [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lazy-loading 'migration_context' on Instance uuid 09442602-bb27-4205-98ce-79781d8ab62f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.472 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.473 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Ensure instance console log exists: /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.474 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.475 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.476 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.554 2 DEBUG nova.network.neutron [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.555 2 DEBUG nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.558 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.564 2 WARNING nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.571 2 DEBUG nova.virt.libvirt.host [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.573 2 DEBUG nova.virt.libvirt.host [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.578 2 DEBUG nova.virt.libvirt.host [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.579 2 DEBUG nova.virt.libvirt.host [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.580 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.580 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.582 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.582 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.583 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.584 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.585 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.586 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.586 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.587 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.588 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.588 2 DEBUG nova.virt.hardware [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:45:35 compute-0 nova_compute[260935]: 2025-10-11 08:45:35.594 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Oct 11 08:45:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:45:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1717705071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:36 compute-0 nova_compute[260935]: 2025-10-11 08:45:36.044 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:36 compute-0 nova_compute[260935]: 2025-10-11 08:45:36.079 2 DEBUG nova.storage.rbd_utils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] rbd image 09442602-bb27-4205-98ce-79781d8ab62f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:36 compute-0 nova_compute[260935]: 2025-10-11 08:45:36.086 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:36 compute-0 nova_compute[260935]: 2025-10-11 08:45:36.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:36 compute-0 nova_compute[260935]: 2025-10-11 08:45:36.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:36 compute-0 ceph-mon[74313]: pgmap v1167: 321 pgs: 321 active+clean; 90 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Oct 11 08:45:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1717705071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:45:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2599094129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:36 compute-0 nova_compute[260935]: 2025-10-11 08:45:36.579 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:36 compute-0 nova_compute[260935]: 2025-10-11 08:45:36.581 2 DEBUG nova.objects.instance [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lazy-loading 'pci_devices' on Instance uuid 09442602-bb27-4205-98ce-79781d8ab62f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:45:36 compute-0 nova_compute[260935]: 2025-10-11 08:45:36.594 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:45:36 compute-0 nova_compute[260935]:   <uuid>09442602-bb27-4205-98ce-79781d8ab62f</uuid>
Oct 11 08:45:36 compute-0 nova_compute[260935]:   <name>instance-00000006</name>
Oct 11 08:45:36 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:45:36 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:45:36 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerExternalEventsTest-server-2118204747</nova:name>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:45:35</nova:creationTime>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:45:36 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:45:36 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:45:36 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:45:36 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:45:36 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:45:36 compute-0 nova_compute[260935]:         <nova:user uuid="9f1d33a73b9b4589967f6f042e6949bd">tempest-ServerExternalEventsTest-1586413318-project-member</nova:user>
Oct 11 08:45:36 compute-0 nova_compute[260935]:         <nova:project uuid="d9dd13dec2494700919923a94f4273ae">tempest-ServerExternalEventsTest-1586413318</nova:project>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:45:36 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:45:36 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <system>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <entry name="serial">09442602-bb27-4205-98ce-79781d8ab62f</entry>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <entry name="uuid">09442602-bb27-4205-98ce-79781d8ab62f</entry>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     </system>
Oct 11 08:45:36 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:45:36 compute-0 nova_compute[260935]:   <os>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:   </os>
Oct 11 08:45:36 compute-0 nova_compute[260935]:   <features>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:   </features>
Oct 11 08:45:36 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:45:36 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:45:36 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/09442602-bb27-4205-98ce-79781d8ab62f_disk">
Oct 11 08:45:36 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       </source>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:45:36 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/09442602-bb27-4205-98ce-79781d8ab62f_disk.config">
Oct 11 08:45:36 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       </source>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:45:36 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f/console.log" append="off"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <video>
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     </video>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:45:36 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:45:36 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:45:36 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:45:36 compute-0 nova_compute[260935]: </domain>
Oct 11 08:45:36 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:45:36 compute-0 nova_compute[260935]: 2025-10-11 08:45:36.637 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:45:36 compute-0 nova_compute[260935]: 2025-10-11 08:45:36.637 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:45:36 compute-0 nova_compute[260935]: 2025-10-11 08:45:36.637 2 INFO nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Using config drive
Oct 11 08:45:36 compute-0 nova_compute[260935]: 2025-10-11 08:45:36.652 2 DEBUG nova.storage.rbd_utils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] rbd image 09442602-bb27-4205-98ce-79781d8ab62f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:37 compute-0 nova_compute[260935]: 2025-10-11 08:45:37.062 2 INFO nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Creating config drive at /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f/disk.config
Oct 11 08:45:37 compute-0 nova_compute[260935]: 2025-10-11 08:45:37.071 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6lk36gpa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:37 compute-0 nova_compute[260935]: 2025-10-11 08:45:37.210 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6lk36gpa" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:37 compute-0 nova_compute[260935]: 2025-10-11 08:45:37.248 2 DEBUG nova.storage.rbd_utils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] rbd image 09442602-bb27-4205-98ce-79781d8ab62f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:37 compute-0 nova_compute[260935]: 2025-10-11 08:45:37.253 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f/disk.config 09442602-bb27-4205-98ce-79781d8ab62f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2599094129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:37 compute-0 nova_compute[260935]: 2025-10-11 08:45:37.440 2 DEBUG oslo_concurrency.processutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f/disk.config 09442602-bb27-4205-98ce-79781d8ab62f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:37 compute-0 nova_compute[260935]: 2025-10-11 08:45:37.441 2 INFO nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Deleting local config drive /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f/disk.config because it was imported into RBD.
Oct 11 08:45:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:45:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3736860461' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:45:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:45:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3736860461' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:45:37 compute-0 systemd-machined[215705]: New machine qemu-6-instance-00000006.
Oct 11 08:45:37 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Oct 11 08:45:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 169 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 172 op/s
Oct 11 08:45:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:45:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3736860461' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:45:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3736860461' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:45:38 compute-0 ceph-mon[74313]: pgmap v1168: 321 pgs: 321 active+clean; 169 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 172 op/s
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.505 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172338.5044954, 09442602-bb27-4205-98ce-79781d8ab62f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.505 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] VM Resumed (Lifecycle Event)
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.510 2 DEBUG nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.511 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.516 2 INFO nova.virt.libvirt.driver [-] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Instance spawned successfully.
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.516 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.542 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.552 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.557 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.558 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.559 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.559 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.560 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.561 2 DEBUG nova.virt.libvirt.driver [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.589 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.589 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172338.5109003, 09442602-bb27-4205-98ce-79781d8ab62f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.590 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] VM Started (Lifecycle Event)
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.628 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.631 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.645 2 INFO nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Took 3.95 seconds to spawn the instance on the hypervisor.
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.646 2 DEBUG nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.678 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.726 2 INFO nova.compute.manager [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Took 5.34 seconds to build instance.
Oct 11 08:45:38 compute-0 nova_compute[260935]: 2025-10-11 08:45:38.769 2 DEBUG oslo_concurrency.lockutils [None req-3ef2d93d-f57f-4077-80f2-b926d4767661 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "09442602-bb27-4205-98ce-79781d8ab62f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 169 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 400 KiB/s rd, 3.9 MiB/s wr, 103 op/s
Oct 11 08:45:40 compute-0 nova_compute[260935]: 2025-10-11 08:45:40.453 2 DEBUG nova.compute.manager [None req-197d3687-baab-4e81-aa2b-baf172a60627 4f41a2d813f141ea9155ac6359a8f284 1dfaaf23cdd140b7a0c8c22d4fd12487 - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:45:40 compute-0 nova_compute[260935]: 2025-10-11 08:45:40.453 2 DEBUG nova.compute.manager [None req-197d3687-baab-4e81-aa2b-baf172a60627 4f41a2d813f141ea9155ac6359a8f284 1dfaaf23cdd140b7a0c8c22d4fd12487 - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:45:40 compute-0 nova_compute[260935]: 2025-10-11 08:45:40.454 2 DEBUG oslo_concurrency.lockutils [None req-197d3687-baab-4e81-aa2b-baf172a60627 4f41a2d813f141ea9155ac6359a8f284 1dfaaf23cdd140b7a0c8c22d4fd12487 - - default default] Acquiring lock "refresh_cache-09442602-bb27-4205-98ce-79781d8ab62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:45:40 compute-0 nova_compute[260935]: 2025-10-11 08:45:40.454 2 DEBUG oslo_concurrency.lockutils [None req-197d3687-baab-4e81-aa2b-baf172a60627 4f41a2d813f141ea9155ac6359a8f284 1dfaaf23cdd140b7a0c8c22d4fd12487 - - default default] Acquired lock "refresh_cache-09442602-bb27-4205-98ce-79781d8ab62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:45:40 compute-0 nova_compute[260935]: 2025-10-11 08:45:40.455 2 DEBUG nova.network.neutron [None req-197d3687-baab-4e81-aa2b-baf172a60627 4f41a2d813f141ea9155ac6359a8f284 1dfaaf23cdd140b7a0c8c22d4fd12487 - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:45:40 compute-0 nova_compute[260935]: 2025-10-11 08:45:40.713 2 DEBUG nova.network.neutron [None req-197d3687-baab-4e81-aa2b-baf172a60627 4f41a2d813f141ea9155ac6359a8f284 1dfaaf23cdd140b7a0c8c22d4fd12487 - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:45:40 compute-0 ceph-mon[74313]: pgmap v1169: 321 pgs: 321 active+clean; 169 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 400 KiB/s rd, 3.9 MiB/s wr, 103 op/s
Oct 11 08:45:40 compute-0 nova_compute[260935]: 2025-10-11 08:45:40.787 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquiring lock "176447af-d4c6-422a-9347-9f07749fb6c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:40 compute-0 nova_compute[260935]: 2025-10-11 08:45:40.787 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "176447af-d4c6-422a-9347-9f07749fb6c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:40 compute-0 nova_compute[260935]: 2025-10-11 08:45:40.894 2 DEBUG nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:45:40 compute-0 nova_compute[260935]: 2025-10-11 08:45:40.976 2 DEBUG nova.network.neutron [None req-197d3687-baab-4e81-aa2b-baf172a60627 4f41a2d813f141ea9155ac6359a8f284 1dfaaf23cdd140b7a0c8c22d4fd12487 - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.011 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquiring lock "09442602-bb27-4205-98ce-79781d8ab62f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.012 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "09442602-bb27-4205-98ce-79781d8ab62f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.012 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquiring lock "09442602-bb27-4205-98ce-79781d8ab62f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.012 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "09442602-bb27-4205-98ce-79781d8ab62f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.013 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "09442602-bb27-4205-98ce-79781d8ab62f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.014 2 INFO nova.compute.manager [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Terminating instance
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.014 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquiring lock "refresh_cache-09442602-bb27-4205-98ce-79781d8ab62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.057 2 DEBUG oslo_concurrency.lockutils [None req-197d3687-baab-4e81-aa2b-baf172a60627 4f41a2d813f141ea9155ac6359a8f284 1dfaaf23cdd140b7a0c8c22d4fd12487 - - default default] Releasing lock "refresh_cache-09442602-bb27-4205-98ce-79781d8ab62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.058 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquired lock "refresh_cache-09442602-bb27-4205-98ce-79781d8ab62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.058 2 DEBUG nova.network.neutron [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.276 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.277 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.284 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.284 2 INFO nova.compute.claims [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:45:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 169 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 400 KiB/s rd, 3.9 MiB/s wr, 103 op/s
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.786 2 DEBUG nova.network.neutron [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.897 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.926 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquiring lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:41 compute-0 nova_compute[260935]: 2025-10-11 08:45:41.930 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.036 2 DEBUG nova.network.neutron [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.094 2 DEBUG nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.201 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Releasing lock "refresh_cache-09442602-bb27-4205-98ce-79781d8ab62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.201 2 DEBUG nova.compute.manager [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:45:42 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 11 08:45:42 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 4.777s CPU time.
Oct 11 08:45:42 compute-0 systemd-machined[215705]: Machine qemu-6-instance-00000006 terminated.
Oct 11 08:45:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:45:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1276466375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.410 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.422 2 DEBUG nova.compute.provider_tree [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.429 2 INFO nova.virt.libvirt.driver [-] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Instance destroyed successfully.
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.430 2 DEBUG nova.objects.instance [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lazy-loading 'resources' on Instance uuid 09442602-bb27-4205-98ce-79781d8ab62f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.475 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.489 2 DEBUG nova.scheduler.client.report [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.616 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.617 2 DEBUG nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.622 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.632 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.633 2 INFO nova.compute.claims [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:45:42 compute-0 ceph-mon[74313]: pgmap v1170: 321 pgs: 321 active+clean; 169 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 400 KiB/s rd, 3.9 MiB/s wr, 103 op/s
Oct 11 08:45:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1276466375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:42 compute-0 sshd-session[279979]: Invalid user bkp from 152.32.213.170 port 40026
Oct 11 08:45:42 compute-0 sshd-session[279979]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 08:45:42 compute-0 sshd-session[279979]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.913 2 INFO nova.virt.libvirt.driver [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Deleting instance files /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f_del
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.914 2 INFO nova.virt.libvirt.driver [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Deletion of /var/lib/nova/instances/09442602-bb27-4205-98ce-79781d8ab62f_del complete
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.927 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "f7475a32-2490-4f8c-a700-a123973da072" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.927 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.927 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "f7475a32-2490-4f8c-a700-a123973da072-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.927 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.927 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.928 2 INFO nova.compute.manager [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Terminating instance
Oct 11 08:45:42 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.929 2 DEBUG nova.compute.manager [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:45:42 compute-0 kernel: tap865f9eb5-5e (unregistering): left promiscuous mode
Oct 11 08:45:42 compute-0 NetworkManager[44960]: <info>  [1760172342.9848] device (tap865f9eb5-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:45:43 compute-0 ovn_controller[152945]: 2025-10-11T08:45:42Z|00040|binding|INFO|Releasing lport 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 from this chassis (sb_readonly=0)
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:42.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:43 compute-0 ovn_controller[152945]: 2025-10-11T08:45:42Z|00041|binding|INFO|Setting lport 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 down in Southbound
Oct 11 08:45:43 compute-0 ovn_controller[152945]: 2025-10-11T08:45:43Z|00042|binding|INFO|Removing iface tap865f9eb5-5e ovn-installed in OVS
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.001 2 DEBUG nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.001 2 DEBUG nova.network.neutron [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:43 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct 11 08:45:43 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 13.997s CPU time.
Oct 11 08:45:43 compute-0 systemd-machined[215705]: Machine qemu-5-instance-00000005 terminated.
Oct 11 08:45:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:45:43 compute-0 NetworkManager[44960]: <info>  [1760172343.1496] manager: (tap865f9eb5-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.193 2 INFO nova.virt.libvirt.driver [-] [instance: f7475a32-2490-4f8c-a700-a123973da072] Instance destroyed successfully.
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.194 2 DEBUG nova.objects.instance [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lazy-loading 'resources' on Instance uuid f7475a32-2490-4f8c-a700-a123973da072 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:45:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.323 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:e2:77 10.100.0.5'], port_security=['fa:16:3e:b3:e2:77 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f7475a32-2490-4f8c-a700-a123973da072', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e6573eaf6684f1c99a553fd46667a67', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6b6e9add-724a-49a4-90de-2f4e4912ca78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be56cb0d-617a-4b33-ac01-b32b133373b2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:45:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.325 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 in datapath 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c unbound from our chassis
Oct 11 08:45:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.326 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.327 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.327 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c502b238-834f-4bf6-a692-79f21d760290]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.332 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c namespace which is not needed anymore
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.367 2 INFO nova.compute.manager [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Took 1.17 seconds to destroy the instance on the hypervisor.
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.368 2 DEBUG oslo.service.loopingcall [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.369 2 DEBUG nova.compute.manager [-] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.369 2 DEBUG nova.network.neutron [-] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.447 2 INFO nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.453 2 DEBUG nova.virt.libvirt.vif [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:45:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1693169676',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1693169676',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(24),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1693169676',id=5,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=24,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbdle8Jb2CGpQv4SpbWMUmm3hiSixEA0s5KapvSItFHc3x9bUVgKTLkLVOIdrKirkN+vbb4fO8g4FzWk9eXgUjBokGNviLy6eM2HjHbeF2CcHrjTEf0RujQy30DD/WL/w==',key_name='tempest-keypair-358550695',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:45:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e6573eaf6684f1c99a553fd46667a67',ramdisk_id='',reservation_id='r-kd8cf275',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1781130606-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:45:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='113798b24d1e4a9e91db94214d254ea9',uuid=f7475a32-2490-4f8c-a700-a123973da072,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.454 2 DEBUG nova.network.os_vif_util [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converting VIF {"id": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "address": "fa:16:3e:b3:e2:77", "network": {"id": "3bd537c2-e6ec-4d00-ac83-fbf5d86f963c", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-610535994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e6573eaf6684f1c99a553fd46667a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap865f9eb5-5e", "ovs_interfaceid": "865f9eb5-5e9d-40e5-abb3-123fcf5d05c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.455 2 DEBUG nova.network.os_vif_util [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:e2:77,bridge_name='br-int',has_traffic_filtering=True,id=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap865f9eb5-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.456 2 DEBUG os_vif [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:e2:77,bridge_name='br-int',has_traffic_filtering=True,id=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap865f9eb5-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.459 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap865f9eb5-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.467 2 INFO os_vif [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:e2:77,bridge_name='br-int',has_traffic_filtering=True,id=865f9eb5-5e9d-40e5-abb3-123fcf5d05c3,network=Network(3bd537c2-e6ec-4d00-ac83-fbf5d86f963c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap865f9eb5-5e')
Oct 11 08:45:43 compute-0 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[279530]: [NOTICE]   (279546) : haproxy version is 2.8.14-c23fe91
Oct 11 08:45:43 compute-0 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[279530]: [NOTICE]   (279546) : path to executable is /usr/sbin/haproxy
Oct 11 08:45:43 compute-0 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[279530]: [WARNING]  (279546) : Exiting Master process...
Oct 11 08:45:43 compute-0 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[279530]: [ALERT]    (279546) : Current worker (279555) exited with code 143 (Terminated)
Oct 11 08:45:43 compute-0 neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c[279530]: [WARNING]  (279546) : All workers exited. Exiting... (0)
Oct 11 08:45:43 compute-0 systemd[1]: libpod-7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925.scope: Deactivated successfully.
Oct 11 08:45:43 compute-0 podman[280059]: 2025-10-11 08:45:43.536874807 +0000 UTC m=+0.082783017 container died 7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925-userdata-shm.mount: Deactivated successfully.
Oct 11 08:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-49d8583738b5511fba56f82d6e92ea769c6441216f2fdd8ecdad4c60001a63ed-merged.mount: Deactivated successfully.
Oct 11 08:45:43 compute-0 podman[280059]: 2025-10-11 08:45:43.592130547 +0000 UTC m=+0.138038697 container cleanup 7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:45:43 compute-0 systemd[1]: libpod-conmon-7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925.scope: Deactivated successfully.
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.658 2 DEBUG nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:45:43 compute-0 podman[280126]: 2025-10-11 08:45:43.667953334 +0000 UTC m=+0.046241713 container remove 7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:45:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.673 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d11b584b-33fc-43c6-8fd3-4fece9ccf3d5]: (4, ('Sat Oct 11 08:45:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c (7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925)\n7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925\nSat Oct 11 08:45:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c (7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925)\n7fdfa5eee06da18214dc33fc369066f5a6bd7352c541f9eb3b5949957f4c7925\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.674 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[97e69434-aa68-475c-ab9c-6f673110588e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.676 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bd537c2-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:43 compute-0 kernel: tap3bd537c2-e0: left promiscuous mode
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.702 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3a115b-8e23-4526-92e3-130a4e9658f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.725 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8d3902-a1c4-40c1-bd82-a8bbd728dbce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 169 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 176 op/s
Oct 11 08:45:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.726 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[338c6b91-9d65-47a2-839d-17097ebf604c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.745 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[89f67ecd-7b65-4ed8-ad1f-65458e39d218]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 416714, 'reachable_time': 15394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280141, 'error': None, 'target': 'ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.747 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3bd537c2-e6ec-4d00-ac83-fbf5d86f963c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:45:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:43.747 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ad744295-3970-4759-93a9-b97f6d11ab28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:45:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d3bd537c2\x2de6ec\x2d4d00\x2dac83\x2dfbf5d86f963c.mount: Deactivated successfully.
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.781 2 DEBUG nova.network.neutron [-] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:45:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:45:43 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2402792429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.829 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.837 2 DEBUG nova.compute.provider_tree [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:45:43 compute-0 nova_compute[260935]: 2025-10-11 08:45:43.992 2 DEBUG nova.network.neutron [-] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.046 2 INFO nova.virt.libvirt.driver [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Deleting instance files /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072_del
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.047 2 INFO nova.virt.libvirt.driver [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Deletion of /var/lib/nova/instances/f7475a32-2490-4f8c-a700-a123973da072_del complete
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.052 2 DEBUG nova.scheduler.client.report [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.058 2 INFO nova.compute.manager [-] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Took 0.69 seconds to deallocate network for instance.
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.164 2 DEBUG nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.166 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.167 2 INFO nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Creating image(s)
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.194 2 DEBUG nova.storage.rbd_utils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] rbd image 176447af-d4c6-422a-9347-9f07749fb6c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.223 2 DEBUG nova.storage.rbd_utils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] rbd image 176447af-d4c6-422a-9347-9f07749fb6c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.252 2 DEBUG nova.storage.rbd_utils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] rbd image 176447af-d4c6-422a-9347-9f07749fb6c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.256 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.339 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.341 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.342 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.342 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.377 2 DEBUG nova.storage.rbd_utils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] rbd image 176447af-d4c6-422a-9347-9f07749fb6c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.382 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 176447af-d4c6-422a-9347-9f07749fb6c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.414 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.415 2 DEBUG nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.532 2 INFO nova.compute.manager [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Took 1.60 seconds to destroy the instance on the hypervisor.
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.533 2 DEBUG oslo.service.loopingcall [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.534 2 DEBUG nova.compute.manager [-] [instance: f7475a32-2490-4f8c-a700-a123973da072] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.535 2 DEBUG nova.network.neutron [-] [instance: f7475a32-2490-4f8c-a700-a123973da072] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.582 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.583 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.714 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 176447af-d4c6-422a-9347-9f07749fb6c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.715 2 DEBUG nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 11 08:45:44 compute-0 sshd-session[279979]: Failed password for invalid user bkp from 152.32.213.170 port 40026 ssh2
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.756 2 DEBUG oslo_concurrency.processutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.786 2 DEBUG nova.network.neutron [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.787 2 DEBUG nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:45:44 compute-0 ceph-mon[74313]: pgmap v1171: 321 pgs: 321 active+clean; 169 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 176 op/s
Oct 11 08:45:44 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2402792429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.849 2 DEBUG nova.storage.rbd_utils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] resizing rbd image 176447af-d4c6-422a-9347-9f07749fb6c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:45:44 compute-0 nova_compute[260935]: 2025-10-11 08:45:44.976 2 DEBUG nova.objects.instance [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lazy-loading 'migration_context' on Instance uuid 176447af-d4c6-422a-9347-9f07749fb6c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.209 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.210 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Ensure instance console log exists: /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.210 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.211 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.212 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.214 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.221 2 WARNING nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.228 2 DEBUG nova.virt.libvirt.host [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.229 2 DEBUG nova.virt.libvirt.host [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.235 2 DEBUG nova.virt.libvirt.host [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.235 2 DEBUG nova.virt.libvirt.host [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.236 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.236 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.237 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.238 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.238 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.238 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.239 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.239 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.240 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.240 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.241 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.241 2 DEBUG nova.virt.hardware [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.246 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:45:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2464896764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.286 2 DEBUG oslo_concurrency.processutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.295 2 DEBUG nova.compute.provider_tree [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.301 2 INFO nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.360 2 DEBUG nova.scheduler.client.report [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.506 2 DEBUG nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.534 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:45:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/838524003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.716 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 169 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 176 op/s
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.735 2 DEBUG nova.storage.rbd_utils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] rbd image 176447af-d4c6-422a-9347-9f07749fb6c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.738 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.760 2 INFO nova.scheduler.client.report [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Deleted allocations for instance 09442602-bb27-4205-98ce-79781d8ab62f
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.765 2 DEBUG nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.767 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.767 2 INFO nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Creating image(s)
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.794 2 DEBUG nova.storage.rbd_utils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] rbd image 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2464896764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/838524003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.830 2 DEBUG nova.storage.rbd_utils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] rbd image 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.855 2 DEBUG nova.storage.rbd_utils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] rbd image 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.858 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.888 2 DEBUG nova.compute.manager [req-046c99bb-a3e1-4b37-aae5-db2e4e5cbaa9 req-26767a19-0b75-4842-a3b3-6709bf8732c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received event network-vif-unplugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.889 2 DEBUG oslo_concurrency.lockutils [req-046c99bb-a3e1-4b37-aae5-db2e4e5cbaa9 req-26767a19-0b75-4842-a3b3-6709bf8732c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f7475a32-2490-4f8c-a700-a123973da072-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.889 2 DEBUG oslo_concurrency.lockutils [req-046c99bb-a3e1-4b37-aae5-db2e4e5cbaa9 req-26767a19-0b75-4842-a3b3-6709bf8732c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.889 2 DEBUG oslo_concurrency.lockutils [req-046c99bb-a3e1-4b37-aae5-db2e4e5cbaa9 req-26767a19-0b75-4842-a3b3-6709bf8732c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.889 2 DEBUG nova.compute.manager [req-046c99bb-a3e1-4b37-aae5-db2e4e5cbaa9 req-26767a19-0b75-4842-a3b3-6709bf8732c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] No waiting events found dispatching network-vif-unplugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.890 2 DEBUG nova.compute.manager [req-046c99bb-a3e1-4b37-aae5-db2e4e5cbaa9 req-26767a19-0b75-4842-a3b3-6709bf8732c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received event network-vif-unplugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.942 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.943 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.943 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.944 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.964 2 DEBUG nova.storage.rbd_utils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] rbd image 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.967 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:45 compute-0 nova_compute[260935]: 2025-10-11 08:45:45.991 2 DEBUG oslo_concurrency.lockutils [None req-38f2cdd9-91a8-44e1-9340-20c41361c4c4 9f1d33a73b9b4589967f6f042e6949bd d9dd13dec2494700919923a94f4273ae - - default default] Lock "09442602-bb27-4205-98ce-79781d8ab62f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:46 compute-0 sshd-session[279979]: Received disconnect from 152.32.213.170 port 40026:11: Bye Bye [preauth]
Oct 11 08:45:46 compute-0 sshd-session[279979]: Disconnected from invalid user bkp 152.32.213.170 port 40026 [preauth]
Oct 11 08:45:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:45:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/498100304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.229 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.232 2 DEBUG nova.objects.instance [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lazy-loading 'pci_devices' on Instance uuid 176447af-d4c6-422a-9347-9f07749fb6c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.255 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.327 2 DEBUG nova.storage.rbd_utils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] resizing rbd image 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.357 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:45:46 compute-0 nova_compute[260935]:   <uuid>176447af-d4c6-422a-9347-9f07749fb6c3</uuid>
Oct 11 08:45:46 compute-0 nova_compute[260935]:   <name>instance-00000007</name>
Oct 11 08:45:46 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:45:46 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:45:46 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerDiagnosticsTest-server-326353643</nova:name>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:45:45</nova:creationTime>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:45:46 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:45:46 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:45:46 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:45:46 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:45:46 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:45:46 compute-0 nova_compute[260935]:         <nova:user uuid="7290d84027c8464ebe5e3b46d59f3b4a">tempest-ServerDiagnosticsTest-1181458604-project-member</nova:user>
Oct 11 08:45:46 compute-0 nova_compute[260935]:         <nova:project uuid="d76f051bb1f9449fb098250c00407cac">tempest-ServerDiagnosticsTest-1181458604</nova:project>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:45:46 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:45:46 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <system>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <entry name="serial">176447af-d4c6-422a-9347-9f07749fb6c3</entry>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <entry name="uuid">176447af-d4c6-422a-9347-9f07749fb6c3</entry>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     </system>
Oct 11 08:45:46 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:45:46 compute-0 nova_compute[260935]:   <os>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:   </os>
Oct 11 08:45:46 compute-0 nova_compute[260935]:   <features>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:   </features>
Oct 11 08:45:46 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:45:46 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:45:46 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/176447af-d4c6-422a-9347-9f07749fb6c3_disk">
Oct 11 08:45:46 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       </source>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:45:46 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/176447af-d4c6-422a-9347-9f07749fb6c3_disk.config">
Oct 11 08:45:46 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       </source>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:45:46 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3/console.log" append="off"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <video>
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     </video>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:45:46 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:45:46 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:45:46 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:45:46 compute-0 nova_compute[260935]: </domain>
Oct 11 08:45:46 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.416 2 DEBUG nova.objects.instance [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lazy-loading 'migration_context' on Instance uuid 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.437 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.437 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.438 2 INFO nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Using config drive
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.460 2 DEBUG nova.storage.rbd_utils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] rbd image 176447af-d4c6-422a-9347-9f07749fb6c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.490 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.490 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Ensure instance console log exists: /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.490 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.491 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.491 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.492 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.496 2 WARNING nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.500 2 DEBUG nova.virt.libvirt.host [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.501 2 DEBUG nova.virt.libvirt.host [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.505 2 DEBUG nova.virt.libvirt.host [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.506 2 DEBUG nova.virt.libvirt.host [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.507 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.507 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.508 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.509 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.509 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.509 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.510 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.510 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.511 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.511 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.512 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.512 2 DEBUG nova.virt.hardware [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.516 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.707 2 INFO nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Creating config drive at /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3/disk.config
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.711 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz__5aj3k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.739 2 DEBUG nova.network.neutron [-] [instance: f7475a32-2490-4f8c-a700-a123973da072] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.760 2 INFO nova.compute.manager [-] [instance: f7475a32-2490-4f8c-a700-a123973da072] Took 2.23 seconds to deallocate network for instance.
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.778 2 DEBUG nova.compute.manager [req-d9c41343-b8d1-4e09-8eaa-9c24c67cf3bc req-ab67940f-142e-4682-8ae9-7ff21ffb9914 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received event network-vif-deleted-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:45:46 compute-0 ceph-mon[74313]: pgmap v1172: 321 pgs: 321 active+clean; 169 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 176 op/s
Oct 11 08:45:46 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/498100304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.852 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz__5aj3k" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.886 2 DEBUG nova.storage.rbd_utils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] rbd image 176447af-d4c6-422a-9347-9f07749fb6c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.891 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3/disk.config 176447af-d4c6-422a-9347-9f07749fb6c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.924 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:46 compute-0 nova_compute[260935]: 2025-10-11 08:45:46.925 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.018 2 DEBUG oslo_concurrency.processutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:45:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2547142510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.076 2 DEBUG oslo_concurrency.processutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3/disk.config 176447af-d4c6-422a-9347-9f07749fb6c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.077 2 INFO nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Deleting local config drive /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3/disk.config because it was imported into RBD.
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.079 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.132 2 DEBUG nova.storage.rbd_utils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] rbd image 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.144 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:47 compute-0 systemd-machined[215705]: New machine qemu-7-instance-00000007.
Oct 11 08:45:47 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Oct 11 08:45:47 compute-0 podman[280642]: 2025-10-11 08:45:47.208237782 +0000 UTC m=+0.095926123 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible)
Oct 11 08:45:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:45:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/433789802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.507 2 DEBUG oslo_concurrency.processutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.513 2 DEBUG nova.compute.provider_tree [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.531 2 DEBUG nova.scheduler.client.report [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.559 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:45:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/31627406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.596 2 INFO nova.scheduler.client.report [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Deleted allocations for instance f7475a32-2490-4f8c-a700-a123973da072
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.605 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.606 2 DEBUG nova.objects.instance [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.635 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:45:47 compute-0 nova_compute[260935]:   <uuid>7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d</uuid>
Oct 11 08:45:47 compute-0 nova_compute[260935]:   <name>instance-00000008</name>
Oct 11 08:45:47 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:45:47 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:45:47 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerDiagnosticsV248Test-server-1994259322</nova:name>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:45:46</nova:creationTime>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:45:47 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:45:47 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:45:47 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:45:47 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:45:47 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:45:47 compute-0 nova_compute[260935]:         <nova:user uuid="6c0be63273314958858acea89fc8ce4c">tempest-ServerDiagnosticsV248Test-1478543614-project-member</nova:user>
Oct 11 08:45:47 compute-0 nova_compute[260935]:         <nova:project uuid="04d9fd3da9714787ad630c8c68e4f94a">tempest-ServerDiagnosticsV248Test-1478543614</nova:project>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:45:47 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:45:47 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <system>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <entry name="serial">7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d</entry>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <entry name="uuid">7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d</entry>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     </system>
Oct 11 08:45:47 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:45:47 compute-0 nova_compute[260935]:   <os>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:   </os>
Oct 11 08:45:47 compute-0 nova_compute[260935]:   <features>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:   </features>
Oct 11 08:45:47 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:45:47 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:45:47 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk">
Oct 11 08:45:47 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       </source>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:45:47 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk.config">
Oct 11 08:45:47 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       </source>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:45:47 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d/console.log" append="off"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <video>
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     </video>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:45:47 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:45:47 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:45:47 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:45:47 compute-0 nova_compute[260935]: </domain>
Oct 11 08:45:47 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.708 2 DEBUG oslo_concurrency.lockutils [None req-0c4baa25-c066-4e86-af65-3982321596b6 113798b24d1e4a9e91db94214d254ea9 4e6573eaf6684f1c99a553fd46667a67 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.719 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.720 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.721 2 INFO nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Using config drive
Oct 11 08:45:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 7.5 MiB/s wr, 300 op/s
Oct 11 08:45:47 compute-0 nova_compute[260935]: 2025-10-11 08:45:47.762 2 DEBUG nova.storage.rbd_utils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] rbd image 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2547142510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/433789802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/31627406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.046 2 INFO nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Creating config drive at /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d/disk.config
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.051 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbbt_s9wn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.101 2 DEBUG nova.compute.manager [req-4aeaf4b1-8b1e-4ed1-bd0f-962324f98f33 req-693894c1-d757-45dd-9a72-ba0387787875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received event network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.102 2 DEBUG oslo_concurrency.lockutils [req-4aeaf4b1-8b1e-4ed1-bd0f-962324f98f33 req-693894c1-d757-45dd-9a72-ba0387787875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f7475a32-2490-4f8c-a700-a123973da072-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.103 2 DEBUG oslo_concurrency.lockutils [req-4aeaf4b1-8b1e-4ed1-bd0f-962324f98f33 req-693894c1-d757-45dd-9a72-ba0387787875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.103 2 DEBUG oslo_concurrency.lockutils [req-4aeaf4b1-8b1e-4ed1-bd0f-962324f98f33 req-693894c1-d757-45dd-9a72-ba0387787875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f7475a32-2490-4f8c-a700-a123973da072-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.103 2 DEBUG nova.compute.manager [req-4aeaf4b1-8b1e-4ed1-bd0f-962324f98f33 req-693894c1-d757-45dd-9a72-ba0387787875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] No waiting events found dispatching network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.104 2 WARNING nova.compute.manager [req-4aeaf4b1-8b1e-4ed1-bd0f-962324f98f33 req-693894c1-d757-45dd-9a72-ba0387787875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f7475a32-2490-4f8c-a700-a123973da072] Received unexpected event network-vif-plugged-865f9eb5-5e9d-40e5-abb3-123fcf5d05c3 for instance with vm_state deleted and task_state None.
Oct 11 08:45:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.182 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbbt_s9wn" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.213 2 DEBUG nova.storage.rbd_utils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] rbd image 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.218 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d/disk.config 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.245 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172348.2446349, 176447af-d4c6-422a-9347-9f07749fb6c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.246 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] VM Resumed (Lifecycle Event)
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.253 2 DEBUG nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.254 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.258 2 INFO nova.virt.libvirt.driver [-] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Instance spawned successfully.
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.258 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.283 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.289 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.290 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.291 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.292 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.293 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.294 2 DEBUG nova.virt.libvirt.driver [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.301 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.336 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.336 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172348.253233, 176447af-d4c6-422a-9347-9f07749fb6c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.337 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] VM Started (Lifecycle Event)
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.360 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.365 2 INFO nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Took 4.20 seconds to spawn the instance on the hypervisor.
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.366 2 DEBUG nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.367 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.402 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.439 2 INFO nova.compute.manager [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Took 7.21 seconds to build instance.
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.457 2 DEBUG oslo_concurrency.lockutils [None req-b07ceba1-7a93-47fd-9e08-5b0f3c15ae55 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "176447af-d4c6-422a-9347-9f07749fb6c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:48 compute-0 nova_compute[260935]: 2025-10-11 08:45:48.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:49 compute-0 ceph-mon[74313]: pgmap v1173: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 7.5 MiB/s wr, 300 op/s
Oct 11 08:45:49 compute-0 nova_compute[260935]: 2025-10-11 08:45:49.105 2 DEBUG oslo_concurrency.processutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d/disk.config 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.887s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:49 compute-0 nova_compute[260935]: 2025-10-11 08:45:49.106 2 INFO nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Deleting local config drive /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d/disk.config because it was imported into RBD.
Oct 11 08:45:49 compute-0 systemd-machined[215705]: New machine qemu-8-instance-00000008.
Oct 11 08:45:49 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Oct 11 08:45:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.093 2 DEBUG nova.compute.manager [None req-20743395-68eb-4d72-b551-1173753b75da 74eb6deb0f9147ab8a27056a316e2697 f2f7cc37b41844129b4b8895273c54a9 - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.099 2 INFO nova.compute.manager [None req-20743395-68eb-4d72-b551-1173753b75da 74eb6deb0f9147ab8a27056a316e2697 f2f7cc37b41844129b4b8895273c54a9 - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Retrieving diagnostics
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.223 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172350.2226708, 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.223 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] VM Resumed (Lifecycle Event)
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.228 2 DEBUG nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.228 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.231 2 INFO nova.virt.libvirt.driver [-] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Instance spawned successfully.
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.232 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.266 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.280 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.303 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.304 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.306 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.307 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.309 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.310 2 DEBUG nova.virt.libvirt.driver [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.343 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.344 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172350.227831, 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.345 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] VM Started (Lifecycle Event)
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.371 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.377 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.383 2 INFO nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Took 4.62 seconds to spawn the instance on the hypervisor.
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.384 2 DEBUG nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.398 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.437 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquiring lock "176447af-d4c6-422a-9347-9f07749fb6c3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.438 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "176447af-d4c6-422a-9347-9f07749fb6c3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.439 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquiring lock "176447af-d4c6-422a-9347-9f07749fb6c3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.440 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "176447af-d4c6-422a-9347-9f07749fb6c3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.440 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "176447af-d4c6-422a-9347-9f07749fb6c3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.444 2 INFO nova.compute.manager [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Terminating instance
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.447 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquiring lock "refresh_cache-176447af-d4c6-422a-9347-9f07749fb6c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.448 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquired lock "refresh_cache-176447af-d4c6-422a-9347-9f07749fb6c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.448 2 DEBUG nova.network.neutron [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.469 2 INFO nova.compute.manager [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Took 8.01 seconds to build instance.
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.493 2 DEBUG oslo_concurrency.lockutils [None req-3f36527e-f48e-4bb4-b3a0-7657af79c8dc 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.687 2 DEBUG nova.network.neutron [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.870 2 DEBUG nova.compute.manager [None req-31ab30fe-e299-4918-af66-8ca0d8bfbf55 aaddd80c268b4133b6c3baa62854d1ee 670dcbed2127406089bdfc746797b90a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.873 2 INFO nova.compute.manager [None req-31ab30fe-e299-4918-af66-8ca0d8bfbf55 aaddd80c268b4133b6c3baa62854d1ee 670dcbed2127406089bdfc746797b90a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Retrieving diagnostics
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.924 2 DEBUG nova.network.neutron [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.944 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Releasing lock "refresh_cache-176447af-d4c6-422a-9347-9f07749fb6c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:45:50 compute-0 nova_compute[260935]: 2025-10-11 08:45:50.945 2 DEBUG nova.compute.manager [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:45:51 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 11 08:45:51 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 3.733s CPU time.
Oct 11 08:45:51 compute-0 systemd-machined[215705]: Machine qemu-7-instance-00000007 terminated.
Oct 11 08:45:51 compute-0 ceph-mon[74313]: pgmap v1174: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Oct 11 08:45:51 compute-0 nova_compute[260935]: 2025-10-11 08:45:51.173 2 INFO nova.virt.libvirt.driver [-] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Instance destroyed successfully.
Oct 11 08:45:51 compute-0 nova_compute[260935]: 2025-10-11 08:45:51.174 2 DEBUG nova.objects.instance [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lazy-loading 'resources' on Instance uuid 176447af-d4c6-422a-9347-9f07749fb6c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:45:51 compute-0 nova_compute[260935]: 2025-10-11 08:45:51.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Oct 11 08:45:52 compute-0 nova_compute[260935]: 2025-10-11 08:45:52.132 2 INFO nova.virt.libvirt.driver [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Deleting instance files /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3_del
Oct 11 08:45:52 compute-0 nova_compute[260935]: 2025-10-11 08:45:52.133 2 INFO nova.virt.libvirt.driver [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Deletion of /var/lib/nova/instances/176447af-d4c6-422a-9347-9f07749fb6c3_del complete
Oct 11 08:45:52 compute-0 nova_compute[260935]: 2025-10-11 08:45:52.182 2 INFO nova.compute.manager [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Took 1.24 seconds to destroy the instance on the hypervisor.
Oct 11 08:45:52 compute-0 nova_compute[260935]: 2025-10-11 08:45:52.183 2 DEBUG oslo.service.loopingcall [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:45:52 compute-0 nova_compute[260935]: 2025-10-11 08:45:52.183 2 DEBUG nova.compute.manager [-] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:45:52 compute-0 nova_compute[260935]: 2025-10-11 08:45:52.183 2 DEBUG nova.network.neutron [-] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:45:52 compute-0 nova_compute[260935]: 2025-10-11 08:45:52.348 2 DEBUG nova.network.neutron [-] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:45:52 compute-0 nova_compute[260935]: 2025-10-11 08:45:52.380 2 DEBUG nova.network.neutron [-] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:45:52 compute-0 nova_compute[260935]: 2025-10-11 08:45:52.393 2 INFO nova.compute.manager [-] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Took 0.21 seconds to deallocate network for instance.
Oct 11 08:45:52 compute-0 nova_compute[260935]: 2025-10-11 08:45:52.435 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:52 compute-0 nova_compute[260935]: 2025-10-11 08:45:52.435 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:52 compute-0 nova_compute[260935]: 2025-10-11 08:45:52.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:52 compute-0 nova_compute[260935]: 2025-10-11 08:45:52.497 2 DEBUG oslo_concurrency.processutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:52 compute-0 nova_compute[260935]: 2025-10-11 08:45:52.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:45:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2935633768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:52 compute-0 nova_compute[260935]: 2025-10-11 08:45:52.993 2 DEBUG oslo_concurrency.processutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:45:53 compute-0 nova_compute[260935]: 2025-10-11 08:45:53.000 2 DEBUG nova.compute.provider_tree [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:45:53 compute-0 nova_compute[260935]: 2025-10-11 08:45:53.016 2 DEBUG nova.scheduler.client.report [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:45:53 compute-0 nova_compute[260935]: 2025-10-11 08:45:53.040 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:53 compute-0 nova_compute[260935]: 2025-10-11 08:45:53.078 2 INFO nova.scheduler.client.report [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Deleted allocations for instance 176447af-d4c6-422a-9347-9f07749fb6c3
Oct 11 08:45:53 compute-0 ceph-mon[74313]: pgmap v1175: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Oct 11 08:45:53 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2935633768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:45:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:45:53 compute-0 nova_compute[260935]: 2025-10-11 08:45:53.148 2 DEBUG oslo_concurrency.lockutils [None req-86c8dd69-c086-46e1-8c04-2bd7cf6e005d 7290d84027c8464ebe5e3b46d59f3b4a d76f051bb1f9449fb098250c00407cac - - default default] Lock "176447af-d4c6-422a-9347-9f07749fb6c3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:53 compute-0 nova_compute[260935]: 2025-10-11 08:45:53.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:53 compute-0 nova_compute[260935]: 2025-10-11 08:45:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:45:53 compute-0 nova_compute[260935]: 2025-10-11 08:45:53.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:45:53 compute-0 nova_compute[260935]: 2025-10-11 08:45:53.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:45:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 364 op/s
Oct 11 08:45:53 compute-0 podman[280940]: 2025-10-11 08:45:53.80120089 +0000 UTC m=+0.099717502 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3)
Oct 11 08:45:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:45:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:45:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:45:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:45:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:45:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:45:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:45:54
Oct 11 08:45:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:45:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:45:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'images', 'volumes', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'default.rgw.log', '.mgr', 'default.rgw.meta']
Oct 11 08:45:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:45:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:55.143 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:45:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:55.144 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 08:45:55 compute-0 nova_compute[260935]: 2025-10-11 08:45:55.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:55 compute-0 ceph-mon[74313]: pgmap v1176: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 364 op/s
Oct 11 08:45:55 compute-0 nova_compute[260935]: 2025-10-11 08:45:55.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:45:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 292 op/s
Oct 11 08:45:56 compute-0 nova_compute[260935]: 2025-10-11 08:45:56.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:56 compute-0 nova_compute[260935]: 2025-10-11 08:45:56.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:45:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:45:57.146 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:45:57 compute-0 ceph-mon[74313]: pgmap v1177: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 292 op/s
Oct 11 08:45:57 compute-0 nova_compute[260935]: 2025-10-11 08:45:57.427 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172342.4255579, 09442602-bb27-4205-98ce-79781d8ab62f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:45:57 compute-0 nova_compute[260935]: 2025-10-11 08:45:57.427 2 INFO nova.compute.manager [-] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] VM Stopped (Lifecycle Event)
Oct 11 08:45:57 compute-0 nova_compute[260935]: 2025-10-11 08:45:57.455 2 DEBUG nova.compute.manager [None req-8d7e5293-7694-4341-ba68-bdbc3e0afa41 - - - - - -] [instance: 09442602-bb27-4205-98ce-79781d8ab62f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:57 compute-0 nova_compute[260935]: 2025-10-11 08:45:57.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:45:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 292 op/s
Oct 11 08:45:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:45:58 compute-0 nova_compute[260935]: 2025-10-11 08:45:58.191 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172343.190196, f7475a32-2490-4f8c-a700-a123973da072 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:45:58 compute-0 nova_compute[260935]: 2025-10-11 08:45:58.192 2 INFO nova.compute.manager [-] [instance: f7475a32-2490-4f8c-a700-a123973da072] VM Stopped (Lifecycle Event)
Oct 11 08:45:58 compute-0 nova_compute[260935]: 2025-10-11 08:45:58.233 2 DEBUG nova.compute.manager [None req-6bb03b24-802f-4566-8b5a-8e8ecb0b21f9 - - - - - -] [instance: f7475a32-2490-4f8c-a700-a123973da072] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:45:58 compute-0 sshd-session[280797]: error: kex_exchange_identification: read: Connection timed out
Oct 11 08:45:58 compute-0 sshd-session[280797]: banner exchange: Connection from 14.103.131.112 port 58224: Connection timed out
Oct 11 08:45:58 compute-0 ceph-mon[74313]: pgmap v1178: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 292 op/s
Oct 11 08:45:58 compute-0 nova_compute[260935]: 2025-10-11 08:45:58.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:45:58 compute-0 sudo[280958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:45:58 compute-0 sudo[280958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:45:58 compute-0 sudo[280958]: pam_unix(sudo:session): session closed for user root
Oct 11 08:45:58 compute-0 sudo[280983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:45:58 compute-0 sudo[280983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:45:58 compute-0 sudo[280983]: pam_unix(sudo:session): session closed for user root
Oct 11 08:45:58 compute-0 sudo[281008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:45:58 compute-0 sudo[281008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:45:58 compute-0 sudo[281008]: pam_unix(sudo:session): session closed for user root
Oct 11 08:45:59 compute-0 sudo[281033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:45:59 compute-0 sudo[281033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:45:59 compute-0 sudo[281033]: pam_unix(sudo:session): session closed for user root
Oct 11 08:45:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:45:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:45:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:45:59 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:45:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:45:59 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:45:59 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 3ef643f4-b0fb-4368-baa6-550c2bd30c03 does not exist
Oct 11 08:45:59 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 8f2cca00-ef3f-45d6-9bbc-fe4b597d4335 does not exist
Oct 11 08:45:59 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 667d308a-7106-45af-8406-5a0608de5d91 does not exist
Oct 11 08:45:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:45:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:45:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:45:59 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:45:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:45:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:45:59 compute-0 nova_compute[260935]: 2025-10-11 08:45:59.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:45:59 compute-0 nova_compute[260935]: 2025-10-11 08:45:59.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:45:59 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:45:59 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:45:59 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:45:59 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:45:59 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:45:59 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:45:59 compute-0 nova_compute[260935]: 2025-10-11 08:45:59.733 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:45:59 compute-0 nova_compute[260935]: 2025-10-11 08:45:59.734 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:45:59 compute-0 nova_compute[260935]: 2025-10-11 08:45:59.734 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:45:59 compute-0 nova_compute[260935]: 2025-10-11 08:45:59.735 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:45:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 168 op/s
Oct 11 08:45:59 compute-0 nova_compute[260935]: 2025-10-11 08:45:59.735 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:45:59 compute-0 sudo[281090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:45:59 compute-0 sudo[281090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:45:59 compute-0 sudo[281090]: pam_unix(sudo:session): session closed for user root
Oct 11 08:45:59 compute-0 sudo[281116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:45:59 compute-0 sudo[281116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:45:59 compute-0 sudo[281116]: pam_unix(sudo:session): session closed for user root
Oct 11 08:45:59 compute-0 sudo[281161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:45:59 compute-0 sudo[281161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:46:00 compute-0 sudo[281161]: pam_unix(sudo:session): session closed for user root
Oct 11 08:46:00 compute-0 podman[281150]: 2025-10-11 08:46:00.021842674 +0000 UTC m=+0.098069095 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Oct 11 08:46:00 compute-0 sudo[281205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:46:00 compute-0 sudo[281205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:46:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:46:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3507669860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:00 compute-0 nova_compute[260935]: 2025-10-11 08:46:00.234 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:00 compute-0 nova_compute[260935]: 2025-10-11 08:46:00.414 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:46:00 compute-0 nova_compute[260935]: 2025-10-11 08:46:00.415 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:46:00 compute-0 podman[281274]: 2025-10-11 08:46:00.571563058 +0000 UTC m=+0.050552556 container create 31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 08:46:00 compute-0 systemd[1]: Started libpod-conmon-31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab.scope.
Oct 11 08:46:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:46:00 compute-0 podman[281274]: 2025-10-11 08:46:00.548102038 +0000 UTC m=+0.027091536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:46:00 compute-0 nova_compute[260935]: 2025-10-11 08:46:00.649 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:46:00 compute-0 nova_compute[260935]: 2025-10-11 08:46:00.650 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4550MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:46:00 compute-0 nova_compute[260935]: 2025-10-11 08:46:00.650 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:00 compute-0 nova_compute[260935]: 2025-10-11 08:46:00.650 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:00 compute-0 podman[281274]: 2025-10-11 08:46:00.650979698 +0000 UTC m=+0.129969296 container init 31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 08:46:00 compute-0 podman[281274]: 2025-10-11 08:46:00.659876363 +0000 UTC m=+0.138865861 container start 31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:46:00 compute-0 podman[281274]: 2025-10-11 08:46:00.663083844 +0000 UTC m=+0.142073462 container attach 31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 08:46:00 compute-0 competent_jennings[281291]: 167 167
Oct 11 08:46:00 compute-0 systemd[1]: libpod-31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab.scope: Deactivated successfully.
Oct 11 08:46:00 compute-0 conmon[281291]: conmon 31eea65ccb3f4a73699e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab.scope/container/memory.events
Oct 11 08:46:00 compute-0 podman[281296]: 2025-10-11 08:46:00.70947536 +0000 UTC m=+0.024127470 container died 31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:46:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c138ed692c3f0bae850e4115cd3bcafc172817c65768a0b0f49c50257b84a778-merged.mount: Deactivated successfully.
Oct 11 08:46:00 compute-0 ceph-mon[74313]: pgmap v1179: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 168 op/s
Oct 11 08:46:00 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3507669860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:00 compute-0 podman[281296]: 2025-10-11 08:46:00.753309163 +0000 UTC m=+0.067961283 container remove 31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jennings, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 08:46:00 compute-0 systemd[1]: libpod-conmon-31eea65ccb3f4a73699e7dece29b9664d3e779c6b96195e31d409aa74900e4ab.scope: Deactivated successfully.
Oct 11 08:46:00 compute-0 nova_compute[260935]: 2025-10-11 08:46:00.780 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:46:00 compute-0 nova_compute[260935]: 2025-10-11 08:46:00.781 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:46:00 compute-0 nova_compute[260935]: 2025-10-11 08:46:00.782 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:46:00 compute-0 nova_compute[260935]: 2025-10-11 08:46:00.825 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:00 compute-0 podman[281319]: 2025-10-11 08:46:00.9844321 +0000 UTC m=+0.048367034 container create 8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 08:46:01 compute-0 systemd[1]: Started libpod-conmon-8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc.scope.
Oct 11 08:46:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:46:01 compute-0 podman[281319]: 2025-10-11 08:46:00.958284493 +0000 UTC m=+0.022219437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:46:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96944c20d091b25cc4a1085ece23100c058b38d0b4a40b13baf1fe1c3fc4a7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:46:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96944c20d091b25cc4a1085ece23100c058b38d0b4a40b13baf1fe1c3fc4a7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:46:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96944c20d091b25cc4a1085ece23100c058b38d0b4a40b13baf1fe1c3fc4a7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:46:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96944c20d091b25cc4a1085ece23100c058b38d0b4a40b13baf1fe1c3fc4a7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:46:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96944c20d091b25cc4a1085ece23100c058b38d0b4a40b13baf1fe1c3fc4a7d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:46:01 compute-0 podman[281319]: 2025-10-11 08:46:01.083992625 +0000 UTC m=+0.147927579 container init 8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 08:46:01 compute-0 podman[281319]: 2025-10-11 08:46:01.096878583 +0000 UTC m=+0.160813527 container start 8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:46:01 compute-0 podman[281319]: 2025-10-11 08:46:01.102932456 +0000 UTC m=+0.166867420 container attach 8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 08:46:01 compute-0 podman[281352]: 2025-10-11 08:46:01.200964179 +0000 UTC m=+0.177798563 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 08:46:01 compute-0 nova_compute[260935]: 2025-10-11 08:46:01.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:01 compute-0 nova_compute[260935]: 2025-10-11 08:46:01.254 2 DEBUG nova.compute.manager [None req-b4da9aa8-5b6c-48c3-bbb6-64575046a1d1 aaddd80c268b4133b6c3baa62854d1ee 670dcbed2127406089bdfc746797b90a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:01 compute-0 nova_compute[260935]: 2025-10-11 08:46:01.261 2 INFO nova.compute.manager [None req-b4da9aa8-5b6c-48c3-bbb6-64575046a1d1 aaddd80c268b4133b6c3baa62854d1ee 670dcbed2127406089bdfc746797b90a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Retrieving diagnostics
Oct 11 08:46:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:46:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174592845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:01 compute-0 nova_compute[260935]: 2025-10-11 08:46:01.315 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:01 compute-0 nova_compute[260935]: 2025-10-11 08:46:01.322 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:46:01 compute-0 nova_compute[260935]: 2025-10-11 08:46:01.366 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:46:01 compute-0 nova_compute[260935]: 2025-10-11 08:46:01.680 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:46:01 compute-0 nova_compute[260935]: 2025-10-11 08:46:01.681 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 168 op/s
Oct 11 08:46:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2174592845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:02 compute-0 nova_compute[260935]: 2025-10-11 08:46:02.058 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquiring lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:02 compute-0 nova_compute[260935]: 2025-10-11 08:46:02.059 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:02 compute-0 nova_compute[260935]: 2025-10-11 08:46:02.060 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquiring lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:02 compute-0 nova_compute[260935]: 2025-10-11 08:46:02.060 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:02 compute-0 nova_compute[260935]: 2025-10-11 08:46:02.060 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:02 compute-0 nova_compute[260935]: 2025-10-11 08:46:02.063 2 INFO nova.compute.manager [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Terminating instance
Oct 11 08:46:02 compute-0 nova_compute[260935]: 2025-10-11 08:46:02.064 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquiring lock "refresh_cache-7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:46:02 compute-0 nova_compute[260935]: 2025-10-11 08:46:02.065 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquired lock "refresh_cache-7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:46:02 compute-0 nova_compute[260935]: 2025-10-11 08:46:02.065 2 DEBUG nova.network.neutron [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:46:02 compute-0 gallant_varahamihira[281360]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:46:02 compute-0 gallant_varahamihira[281360]: --> relative data size: 1.0
Oct 11 08:46:02 compute-0 gallant_varahamihira[281360]: --> All data devices are unavailable
Oct 11 08:46:02 compute-0 systemd[1]: libpod-8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc.scope: Deactivated successfully.
Oct 11 08:46:02 compute-0 systemd[1]: libpod-8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc.scope: Consumed 1.108s CPU time.
Oct 11 08:46:02 compute-0 podman[281319]: 2025-10-11 08:46:02.354145722 +0000 UTC m=+1.418080666 container died 8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:46:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e96944c20d091b25cc4a1085ece23100c058b38d0b4a40b13baf1fe1c3fc4a7d-merged.mount: Deactivated successfully.
Oct 11 08:46:02 compute-0 podman[281319]: 2025-10-11 08:46:02.439219724 +0000 UTC m=+1.503154678 container remove 8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:46:02 compute-0 systemd[1]: libpod-conmon-8d1105468be7a743d1469e12a9fc446c385639450306d645bd548a01ababc3cc.scope: Deactivated successfully.
Oct 11 08:46:02 compute-0 sudo[281205]: pam_unix(sudo:session): session closed for user root
Oct 11 08:46:02 compute-0 nova_compute[260935]: 2025-10-11 08:46:02.535 2 DEBUG nova.network.neutron [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:46:02 compute-0 sudo[281425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:46:02 compute-0 sudo[281425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:46:02 compute-0 sudo[281425]: pam_unix(sudo:session): session closed for user root
Oct 11 08:46:02 compute-0 sudo[281450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:46:02 compute-0 sudo[281450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:46:02 compute-0 sudo[281450]: pam_unix(sudo:session): session closed for user root
Oct 11 08:46:02 compute-0 nova_compute[260935]: 2025-10-11 08:46:02.779 2 DEBUG nova.network.neutron [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:46:02 compute-0 ceph-mon[74313]: pgmap v1180: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 168 op/s
Oct 11 08:46:02 compute-0 sudo[281475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:46:02 compute-0 sudo[281475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:46:02 compute-0 nova_compute[260935]: 2025-10-11 08:46:02.820 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Releasing lock "refresh_cache-7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:46:02 compute-0 nova_compute[260935]: 2025-10-11 08:46:02.822 2 DEBUG nova.compute.manager [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:46:02 compute-0 sudo[281475]: pam_unix(sudo:session): session closed for user root
Oct 11 08:46:02 compute-0 sudo[281500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:46:02 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct 11 08:46:02 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 11.909s CPU time.
Oct 11 08:46:02 compute-0 sudo[281500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:46:02 compute-0 systemd-machined[215705]: Machine qemu-8-instance-00000008 terminated.
Oct 11 08:46:03 compute-0 nova_compute[260935]: 2025-10-11 08:46:03.051 2 INFO nova.virt.libvirt.driver [-] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Instance destroyed successfully.
Oct 11 08:46:03 compute-0 nova_compute[260935]: 2025-10-11 08:46:03.053 2 DEBUG nova.objects.instance [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lazy-loading 'resources' on Instance uuid 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:46:03 compute-0 podman[281588]: 2025-10-11 08:46:03.358831771 +0000 UTC m=+0.035847676 container create 8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noether, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:46:03 compute-0 systemd[1]: Started libpod-conmon-8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a.scope.
Oct 11 08:46:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:46:03 compute-0 nova_compute[260935]: 2025-10-11 08:46:03.427 2 INFO nova.virt.libvirt.driver [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Deleting instance files /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_del
Oct 11 08:46:03 compute-0 nova_compute[260935]: 2025-10-11 08:46:03.428 2 INFO nova.virt.libvirt.driver [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Deletion of /var/lib/nova/instances/7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d_del complete
Oct 11 08:46:03 compute-0 podman[281588]: 2025-10-11 08:46:03.440459444 +0000 UTC m=+0.117475389 container init 8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noether, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 08:46:03 compute-0 podman[281588]: 2025-10-11 08:46:03.344736178 +0000 UTC m=+0.021752103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:46:03 compute-0 podman[281588]: 2025-10-11 08:46:03.451806509 +0000 UTC m=+0.128822424 container start 8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noether, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:46:03 compute-0 podman[281588]: 2025-10-11 08:46:03.454808835 +0000 UTC m=+0.131824770 container attach 8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 08:46:03 compute-0 goofy_noether[281604]: 167 167
Oct 11 08:46:03 compute-0 systemd[1]: libpod-8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a.scope: Deactivated successfully.
Oct 11 08:46:03 compute-0 podman[281588]: 2025-10-11 08:46:03.457288295 +0000 UTC m=+0.134304240 container died 8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noether, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:46:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe07ce83f9e3b442b1d8936b57c2f5d7cfffdcb56bc04e521c56c44d3c655b90-merged.mount: Deactivated successfully.
Oct 11 08:46:03 compute-0 podman[281588]: 2025-10-11 08:46:03.501220541 +0000 UTC m=+0.178236446 container remove 8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_noether, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:46:03 compute-0 systemd[1]: libpod-conmon-8aef97b4f105273236c9988887f15a69e37a4cbafdfe62e512aadf54a5d02b1a.scope: Deactivated successfully.
Oct 11 08:46:03 compute-0 nova_compute[260935]: 2025-10-11 08:46:03.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:03 compute-0 nova_compute[260935]: 2025-10-11 08:46:03.682 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:46:03 compute-0 nova_compute[260935]: 2025-10-11 08:46:03.683 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:46:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 121 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 227 op/s
Oct 11 08:46:03 compute-0 nova_compute[260935]: 2025-10-11 08:46:03.772 2 INFO nova.compute.manager [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Took 0.95 seconds to destroy the instance on the hypervisor.
Oct 11 08:46:03 compute-0 nova_compute[260935]: 2025-10-11 08:46:03.773 2 DEBUG oslo.service.loopingcall [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:46:03 compute-0 nova_compute[260935]: 2025-10-11 08:46:03.774 2 DEBUG nova.compute.manager [-] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:46:03 compute-0 nova_compute[260935]: 2025-10-11 08:46:03.774 2 DEBUG nova.network.neutron [-] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:46:03 compute-0 podman[281628]: 2025-10-11 08:46:03.788451922 +0000 UTC m=+0.103111159 container create 2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 08:46:03 compute-0 systemd[1]: Started libpod-conmon-2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0.scope.
Oct 11 08:46:03 compute-0 podman[281628]: 2025-10-11 08:46:03.760596045 +0000 UTC m=+0.075255342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:46:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85277ff389fa8dc0363e6921f44e80a3bf2a108b971447ce7461143f9efca6b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85277ff389fa8dc0363e6921f44e80a3bf2a108b971447ce7461143f9efca6b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85277ff389fa8dc0363e6921f44e80a3bf2a108b971447ce7461143f9efca6b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85277ff389fa8dc0363e6921f44e80a3bf2a108b971447ce7461143f9efca6b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:46:03 compute-0 nova_compute[260935]: 2025-10-11 08:46:03.890 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 08:46:03 compute-0 nova_compute[260935]: 2025-10-11 08:46:03.891 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:46:03 compute-0 podman[281628]: 2025-10-11 08:46:03.907365201 +0000 UTC m=+0.222024498 container init 2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sinoussi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 08:46:03 compute-0 podman[281628]: 2025-10-11 08:46:03.921490625 +0000 UTC m=+0.236149872 container start 2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 08:46:03 compute-0 podman[281628]: 2025-10-11 08:46:03.92551769 +0000 UTC m=+0.240176987 container attach 2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:46:04 compute-0 nova_compute[260935]: 2025-10-11 08:46:04.053 2 DEBUG nova.network.neutron [-] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:46:04 compute-0 nova_compute[260935]: 2025-10-11 08:46:04.254 2 DEBUG nova.network.neutron [-] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:46:04 compute-0 nova_compute[260935]: 2025-10-11 08:46:04.392 2 INFO nova.compute.manager [-] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Took 0.62 seconds to deallocate network for instance.
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007559663345705542 of space, bias 1.0, pg target 0.22678990037116625 quantized to 32 (current 32)
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:46:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:46:04 compute-0 nova_compute[260935]: 2025-10-11 08:46:04.559 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:04 compute-0 nova_compute[260935]: 2025-10-11 08:46:04.560 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:04 compute-0 nova_compute[260935]: 2025-10-11 08:46:04.612 2 DEBUG oslo_concurrency.processutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]: {
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:     "0": [
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:         {
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "devices": [
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "/dev/loop3"
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             ],
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "lv_name": "ceph_lv0",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "lv_size": "21470642176",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "name": "ceph_lv0",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "tags": {
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.cluster_name": "ceph",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.crush_device_class": "",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.encrypted": "0",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.osd_id": "0",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.type": "block",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.vdo": "0"
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             },
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "type": "block",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "vg_name": "ceph_vg0"
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:         }
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:     ],
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:     "1": [
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:         {
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "devices": [
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "/dev/loop4"
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             ],
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "lv_name": "ceph_lv1",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "lv_size": "21470642176",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "name": "ceph_lv1",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "tags": {
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.cluster_name": "ceph",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.crush_device_class": "",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.encrypted": "0",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.osd_id": "1",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.type": "block",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.vdo": "0"
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             },
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "type": "block",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "vg_name": "ceph_vg1"
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:         }
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:     ],
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:     "2": [
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:         {
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "devices": [
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "/dev/loop5"
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             ],
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "lv_name": "ceph_lv2",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "lv_size": "21470642176",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "name": "ceph_lv2",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "tags": {
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.cluster_name": "ceph",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.crush_device_class": "",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.encrypted": "0",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.osd_id": "2",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.type": "block",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:                 "ceph.vdo": "0"
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             },
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "type": "block",
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:             "vg_name": "ceph_vg2"
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:         }
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]:     ]
Oct 11 08:46:04 compute-0 musing_sinoussi[281645]: }
Oct 11 08:46:04 compute-0 systemd[1]: libpod-2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0.scope: Deactivated successfully.
Oct 11 08:46:04 compute-0 podman[281628]: 2025-10-11 08:46:04.725948409 +0000 UTC m=+1.040607646 container died 2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sinoussi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 08:46:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-85277ff389fa8dc0363e6921f44e80a3bf2a108b971447ce7461143f9efca6b5-merged.mount: Deactivated successfully.
Oct 11 08:46:04 compute-0 ceph-mon[74313]: pgmap v1181: 321 pgs: 321 active+clean; 121 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 227 op/s
Oct 11 08:46:04 compute-0 podman[281628]: 2025-10-11 08:46:04.8060891 +0000 UTC m=+1.120748337 container remove 2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 08:46:04 compute-0 systemd[1]: libpod-conmon-2f61528667b9107f87d830a654b37fb3b8747ea04345747ac0fbbb60ef2a0bc0.scope: Deactivated successfully.
Oct 11 08:46:04 compute-0 sudo[281500]: pam_unix(sudo:session): session closed for user root
Oct 11 08:46:04 compute-0 sudo[281686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:46:04 compute-0 sudo[281686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:46:04 compute-0 sudo[281686]: pam_unix(sudo:session): session closed for user root
Oct 11 08:46:05 compute-0 sudo[281711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:46:05 compute-0 sudo[281711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:46:05 compute-0 sudo[281711]: pam_unix(sudo:session): session closed for user root
Oct 11 08:46:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:46:05 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3466782242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:05 compute-0 nova_compute[260935]: 2025-10-11 08:46:05.090 2 DEBUG oslo_concurrency.processutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:05 compute-0 nova_compute[260935]: 2025-10-11 08:46:05.099 2 DEBUG nova.compute.provider_tree [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:46:05 compute-0 sudo[281736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:46:05 compute-0 sudo[281736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:46:05 compute-0 sudo[281736]: pam_unix(sudo:session): session closed for user root
Oct 11 08:46:05 compute-0 sudo[281763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:46:05 compute-0 sudo[281763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:46:05 compute-0 nova_compute[260935]: 2025-10-11 08:46:05.532 2 DEBUG nova.scheduler.client.report [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:46:05 compute-0 nova_compute[260935]: 2025-10-11 08:46:05.654 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:05 compute-0 podman[281829]: 2025-10-11 08:46:05.661587824 +0000 UTC m=+0.064539766 container create 6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 08:46:05 compute-0 systemd[1]: Started libpod-conmon-6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec.scope.
Oct 11 08:46:05 compute-0 podman[281829]: 2025-10-11 08:46:05.634730677 +0000 UTC m=+0.037682669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:46:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 121 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 282 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 11 08:46:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:46:05 compute-0 podman[281829]: 2025-10-11 08:46:05.771730413 +0000 UTC m=+0.174682365 container init 6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:46:05 compute-0 podman[281829]: 2025-10-11 08:46:05.784100436 +0000 UTC m=+0.187052388 container start 6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 08:46:05 compute-0 podman[281829]: 2025-10-11 08:46:05.788103911 +0000 UTC m=+0.191055873 container attach 6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:46:05 compute-0 youthful_lichterman[281845]: 167 167
Oct 11 08:46:05 compute-0 systemd[1]: libpod-6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec.scope: Deactivated successfully.
Oct 11 08:46:05 compute-0 podman[281829]: 2025-10-11 08:46:05.792770894 +0000 UTC m=+0.195722846 container died 6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:46:05 compute-0 nova_compute[260935]: 2025-10-11 08:46:05.803 2 INFO nova.scheduler.client.report [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Deleted allocations for instance 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d
Oct 11 08:46:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3466782242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-02eed1aea9672d1bafc896e7280e9ef0bcd69e1b5238d8afa223f4f5701690a8-merged.mount: Deactivated successfully.
Oct 11 08:46:05 compute-0 podman[281829]: 2025-10-11 08:46:05.839118249 +0000 UTC m=+0.242070201 container remove 6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lichterman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:46:05 compute-0 systemd[1]: libpod-conmon-6fb2f19a7a6d776d45bb3faf3177e373bed5b783196f83d0fd267d297169f2ec.scope: Deactivated successfully.
Oct 11 08:46:05 compute-0 nova_compute[260935]: 2025-10-11 08:46:05.939 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "97254317-c848-4369-b0b7-147d230fb1d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:05 compute-0 nova_compute[260935]: 2025-10-11 08:46:05.939 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "97254317-c848-4369-b0b7-147d230fb1d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:06 compute-0 podman[281872]: 2025-10-11 08:46:06.102039604 +0000 UTC m=+0.076742344 container create da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaplygin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 08:46:06 compute-0 systemd[1]: Started libpod-conmon-da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a.scope.
Oct 11 08:46:06 compute-0 podman[281872]: 2025-10-11 08:46:06.073644483 +0000 UTC m=+0.048347233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:46:06 compute-0 nova_compute[260935]: 2025-10-11 08:46:06.171 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172351.1709952, 176447af-d4c6-422a-9347-9f07749fb6c3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:06 compute-0 nova_compute[260935]: 2025-10-11 08:46:06.172 2 INFO nova.compute.manager [-] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] VM Stopped (Lifecycle Event)
Oct 11 08:46:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebf84f4c0111721d477a35b16158895830bf8858363886acca7a55475d7074a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebf84f4c0111721d477a35b16158895830bf8858363886acca7a55475d7074a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebf84f4c0111721d477a35b16158895830bf8858363886acca7a55475d7074a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebf84f4c0111721d477a35b16158895830bf8858363886acca7a55475d7074a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:46:06 compute-0 nova_compute[260935]: 2025-10-11 08:46:06.202 2 DEBUG oslo_concurrency.lockutils [None req-17db25f9-d979-4aeb-a03d-4eb5f10db5ad 6c0be63273314958858acea89fc8ce4c 04d9fd3da9714787ad630c8c68e4f94a - - default default] Lock "7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:06 compute-0 podman[281872]: 2025-10-11 08:46:06.204774141 +0000 UTC m=+0.179476911 container init da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:46:06 compute-0 podman[281872]: 2025-10-11 08:46:06.218594586 +0000 UTC m=+0.193297316 container start da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaplygin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 08:46:06 compute-0 nova_compute[260935]: 2025-10-11 08:46:06.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:06 compute-0 podman[281872]: 2025-10-11 08:46:06.222529589 +0000 UTC m=+0.197232369 container attach da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 08:46:06 compute-0 nova_compute[260935]: 2025-10-11 08:46:06.302 2 DEBUG nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:46:06 compute-0 nova_compute[260935]: 2025-10-11 08:46:06.415 2 DEBUG nova.compute.manager [None req-3e16f465-b2ca-46ae-9588-69170c0b5f20 - - - - - -] [instance: 176447af-d4c6-422a-9347-9f07749fb6c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:06 compute-0 nova_compute[260935]: 2025-10-11 08:46:06.640 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:06 compute-0 nova_compute[260935]: 2025-10-11 08:46:06.640 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:06 compute-0 nova_compute[260935]: 2025-10-11 08:46:06.651 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:46:06 compute-0 nova_compute[260935]: 2025-10-11 08:46:06.652 2 INFO nova.compute.claims [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:46:06 compute-0 ceph-mon[74313]: pgmap v1182: 321 pgs: 321 active+clean; 121 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 282 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]: {
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:         "osd_id": 2,
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:         "type": "bluestore"
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:     },
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:         "osd_id": 0,
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:         "type": "bluestore"
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:     },
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:         "osd_id": 1,
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:         "type": "bluestore"
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]:     }
Oct 11 08:46:07 compute-0 relaxed_chaplygin[281889]: }
Oct 11 08:46:07 compute-0 systemd[1]: libpod-da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a.scope: Deactivated successfully.
Oct 11 08:46:07 compute-0 podman[281872]: 2025-10-11 08:46:07.340763573 +0000 UTC m=+1.315466323 container died da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 08:46:07 compute-0 systemd[1]: libpod-da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a.scope: Consumed 1.123s CPU time.
Oct 11 08:46:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-eebf84f4c0111721d477a35b16158895830bf8858363886acca7a55475d7074a-merged.mount: Deactivated successfully.
Oct 11 08:46:07 compute-0 podman[281872]: 2025-10-11 08:46:07.423334514 +0000 UTC m=+1.398037224 container remove da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaplygin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:46:07 compute-0 systemd[1]: libpod-conmon-da0681a2c23026b78f1e88b082c09a8d19a9a85c8a2404510f30dbb38e81998a.scope: Deactivated successfully.
Oct 11 08:46:07 compute-0 sudo[281763]: pam_unix(sudo:session): session closed for user root
Oct 11 08:46:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:46:07 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:46:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:46:07 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:46:07 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 508e8fd9-3515-4e97-865b-046292aef02c does not exist
Oct 11 08:46:07 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev d72ca338-58cc-4bcd-a798-9235122e7d57 does not exist
Oct 11 08:46:07 compute-0 sudo[281936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:46:07 compute-0 sudo[281936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:46:07 compute-0 sudo[281936]: pam_unix(sudo:session): session closed for user root
Oct 11 08:46:07 compute-0 sudo[281961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:46:07 compute-0 sudo[281961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:46:07 compute-0 sudo[281961]: pam_unix(sudo:session): session closed for user root
Oct 11 08:46:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Oct 11 08:46:07 compute-0 nova_compute[260935]: 2025-10-11 08:46:07.898 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:46:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:46:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2231748333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.373 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.380 2 DEBUG nova.compute.provider_tree [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.397 2 DEBUG nova.scheduler.client.report [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.427 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.428 2 DEBUG nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.486 2 DEBUG nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.487 2 DEBUG nova.network.neutron [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:46:08 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:46:08 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:46:08 compute-0 ceph-mon[74313]: pgmap v1183: 321 pgs: 321 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Oct 11 08:46:08 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2231748333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.521 2 INFO nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.577 2 DEBUG nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.668 2 DEBUG nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.669 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.670 2 INFO nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Creating image(s)
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.702 2 DEBUG nova.storage.rbd_utils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image 97254317-c848-4369-b0b7-147d230fb1d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.737 2 DEBUG nova.storage.rbd_utils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image 97254317-c848-4369-b0b7-147d230fb1d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.770 2 DEBUG nova.storage.rbd_utils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image 97254317-c848-4369-b0b7-147d230fb1d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.775 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.863 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.864 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.864 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.865 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.891 2 DEBUG nova.storage.rbd_utils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image 97254317-c848-4369-b0b7-147d230fb1d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:08 compute-0 nova_compute[260935]: 2025-10-11 08:46:08.895 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 97254317-c848-4369-b0b7-147d230fb1d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.172 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 97254317-c848-4369-b0b7-147d230fb1d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.268 2 DEBUG nova.storage.rbd_utils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] resizing rbd image 97254317-c848-4369-b0b7-147d230fb1d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.382 2 DEBUG nova.objects.instance [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lazy-loading 'migration_context' on Instance uuid 97254317-c848-4369-b0b7-147d230fb1d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.404 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.404 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Ensure instance console log exists: /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.405 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.406 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.406 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.801 2 DEBUG nova.network.neutron [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.801 2 DEBUG nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.804 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.809 2 WARNING nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.818 2 DEBUG nova.virt.libvirt.host [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.819 2 DEBUG nova.virt.libvirt.host [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.823 2 DEBUG nova.virt.libvirt.host [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.823 2 DEBUG nova.virt.libvirt.host [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.824 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.824 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.825 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.825 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.826 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.826 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.826 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.827 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.827 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.828 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.828 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.828 2 DEBUG nova.virt.hardware [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:46:09 compute-0 nova_compute[260935]: 2025-10-11 08:46:09.833 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:46:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/55869803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:10 compute-0 nova_compute[260935]: 2025-10-11 08:46:10.298 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:10 compute-0 nova_compute[260935]: 2025-10-11 08:46:10.329 2 DEBUG nova.storage.rbd_utils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image 97254317-c848-4369-b0b7-147d230fb1d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:10 compute-0 nova_compute[260935]: 2025-10-11 08:46:10.334 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:46:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209928029' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:10 compute-0 ceph-mon[74313]: pgmap v1184: 321 pgs: 321 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Oct 11 08:46:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/55869803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:10 compute-0 nova_compute[260935]: 2025-10-11 08:46:10.816 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:10 compute-0 nova_compute[260935]: 2025-10-11 08:46:10.819 2 DEBUG nova.objects.instance [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lazy-loading 'pci_devices' on Instance uuid 97254317-c848-4369-b0b7-147d230fb1d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:10 compute-0 nova_compute[260935]: 2025-10-11 08:46:10.997 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:46:10 compute-0 nova_compute[260935]:   <uuid>97254317-c848-4369-b0b7-147d230fb1d1</uuid>
Oct 11 08:46:10 compute-0 nova_compute[260935]:   <name>instance-00000009</name>
Oct 11 08:46:10 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:46:10 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:46:10 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-654853952</nova:name>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:46:09</nova:creationTime>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:46:10 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:46:10 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:46:10 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:46:10 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:46:10 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:46:10 compute-0 nova_compute[260935]:         <nova:user uuid="e35666a092ce4c07b2e0604bfeb5d359">tempest-ServersAdminNegativeTestJSON-646482616-project-member</nova:user>
Oct 11 08:46:10 compute-0 nova_compute[260935]:         <nova:project uuid="0b7ec3d2d2084beaa800f0f2abd06c77">tempest-ServersAdminNegativeTestJSON-646482616</nova:project>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:46:10 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:46:10 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <system>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <entry name="serial">97254317-c848-4369-b0b7-147d230fb1d1</entry>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <entry name="uuid">97254317-c848-4369-b0b7-147d230fb1d1</entry>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     </system>
Oct 11 08:46:10 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:46:10 compute-0 nova_compute[260935]:   <os>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:   </os>
Oct 11 08:46:10 compute-0 nova_compute[260935]:   <features>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:   </features>
Oct 11 08:46:10 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:46:10 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:46:10 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/97254317-c848-4369-b0b7-147d230fb1d1_disk">
Oct 11 08:46:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       </source>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:46:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/97254317-c848-4369-b0b7-147d230fb1d1_disk.config">
Oct 11 08:46:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       </source>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:46:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1/console.log" append="off"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <video>
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     </video>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:46:10 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:46:10 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:46:10 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:46:10 compute-0 nova_compute[260935]: </domain>
Oct 11 08:46:10 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.072 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.073 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.074 2 INFO nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Using config drive
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.104 2 DEBUG nova.storage.rbd_utils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image 97254317-c848-4369-b0b7-147d230fb1d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.257 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.258 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.278 2 DEBUG nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.284 2 INFO nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Creating config drive at /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1/disk.config
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.291 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_unipmym execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.432 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_unipmym" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.468 2 DEBUG nova.storage.rbd_utils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image 97254317-c848-4369-b0b7-147d230fb1d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.473 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1/disk.config 97254317-c848-4369-b0b7-147d230fb1d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.503 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.504 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.518 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.518 2 INFO nova.compute.claims [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.650 2 DEBUG oslo_concurrency.processutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1/disk.config 97254317-c848-4369-b0b7-147d230fb1d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.651 2 INFO nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Deleting local config drive /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1/disk.config because it was imported into RBD.
Oct 11 08:46:11 compute-0 nova_compute[260935]: 2025-10-11 08:46:11.672 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Oct 11 08:46:11 compute-0 systemd-machined[215705]: New machine qemu-9-instance-00000009.
Oct 11 08:46:11 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Oct 11 08:46:11 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3209928029' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:46:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/686642392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.188 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.195 2 DEBUG nova.compute.provider_tree [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.211 2 DEBUG nova.scheduler.client.report [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.241 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.241 2 DEBUG nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.303 2 DEBUG nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.304 2 DEBUG nova.network.neutron [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.329 2 INFO nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.349 2 DEBUG nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.450 2 DEBUG nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.451 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.452 2 INFO nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Creating image(s)
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.479 2 DEBUG nova.storage.rbd_utils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.510 2 DEBUG nova.storage.rbd_utils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.541 2 DEBUG nova.storage.rbd_utils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.546 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.612 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.613 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.614 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.615 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.643 2 DEBUG nova.storage.rbd_utils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.649 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.677 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172372.6763885, 97254317-c848-4369-b0b7-147d230fb1d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.678 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] VM Resumed (Lifecycle Event)
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.682 2 DEBUG nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.683 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.689 2 INFO nova.virt.libvirt.driver [-] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Instance spawned successfully.
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.690 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.708 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.717 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.721 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.722 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.722 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.723 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.724 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.724 2 DEBUG nova.virt.libvirt.driver [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.734 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.735 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172372.6783335, 97254317-c848-4369-b0b7-147d230fb1d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.736 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] VM Started (Lifecycle Event)
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.756 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.760 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.791 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:46:12 compute-0 ceph-mon[74313]: pgmap v1185: 321 pgs: 321 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.816 2 INFO nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Took 4.15 seconds to spawn the instance on the hypervisor.
Oct 11 08:46:12 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/686642392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.817 2 DEBUG nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.898 2 INFO nova.compute.manager [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Took 6.29 seconds to build instance.
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.950 2 DEBUG nova.network.neutron [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.951 2 DEBUG nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:46:12 compute-0 nova_compute[260935]: 2025-10-11 08:46:12.965 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.003 2 DEBUG oslo_concurrency.lockutils [None req-8f7c14a1-b3db-450a-ba97-b31e7f540908 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "97254317-c848-4369-b0b7-147d230fb1d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.046 2 DEBUG nova.storage.rbd_utils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] resizing rbd image adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:46:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.173 2 DEBUG nova.objects.instance [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lazy-loading 'migration_context' on Instance uuid adcf97ac-4b08-4f25-844b-e6f4e1305e3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.211 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.212 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Ensure instance console log exists: /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.212 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.213 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.213 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.216 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.222 2 WARNING nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.228 2 DEBUG nova.virt.libvirt.host [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.229 2 DEBUG nova.virt.libvirt.host [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.232 2 DEBUG nova.virt.libvirt.host [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.233 2 DEBUG nova.virt.libvirt.host [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.233 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.234 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.235 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.235 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.236 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.236 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.236 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.237 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.237 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.237 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.238 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.238 2 DEBUG nova.virt.hardware [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.243 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:46:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3436262319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.734 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.770 2 DEBUG nova.storage.rbd_utils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:13 compute-0 nova_compute[260935]: 2025-10-11 08:46:13.778 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:13 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3436262319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:46:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1391371203' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:14 compute-0 nova_compute[260935]: 2025-10-11 08:46:14.219 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:14 compute-0 nova_compute[260935]: 2025-10-11 08:46:14.222 2 DEBUG nova.objects.instance [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lazy-loading 'pci_devices' on Instance uuid adcf97ac-4b08-4f25-844b-e6f4e1305e3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:14 compute-0 nova_compute[260935]: 2025-10-11 08:46:14.268 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:46:14 compute-0 nova_compute[260935]:   <uuid>adcf97ac-4b08-4f25-844b-e6f4e1305e3b</uuid>
Oct 11 08:46:14 compute-0 nova_compute[260935]:   <name>instance-0000000a</name>
Oct 11 08:46:14 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:46:14 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:46:14 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-1073345368</nova:name>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:46:13</nova:creationTime>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:46:14 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:46:14 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:46:14 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:46:14 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:46:14 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:46:14 compute-0 nova_compute[260935]:         <nova:user uuid="c74c2e90e9e04fd283655f1fac1d2bf7">tempest-DeleteServersAdminTestJSON-159697733-project-member</nova:user>
Oct 11 08:46:14 compute-0 nova_compute[260935]:         <nova:project uuid="f8adf6d5f1034e90bb5184f87dbf3a16">tempest-DeleteServersAdminTestJSON-159697733</nova:project>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:46:14 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:46:14 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <system>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <entry name="serial">adcf97ac-4b08-4f25-844b-e6f4e1305e3b</entry>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <entry name="uuid">adcf97ac-4b08-4f25-844b-e6f4e1305e3b</entry>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     </system>
Oct 11 08:46:14 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:46:14 compute-0 nova_compute[260935]:   <os>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:   </os>
Oct 11 08:46:14 compute-0 nova_compute[260935]:   <features>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:   </features>
Oct 11 08:46:14 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:46:14 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:46:14 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk">
Oct 11 08:46:14 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       </source>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:46:14 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk.config">
Oct 11 08:46:14 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       </source>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:46:14 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b/console.log" append="off"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <video>
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     </video>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:46:14 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:46:14 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:46:14 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:46:14 compute-0 nova_compute[260935]: </domain>
Oct 11 08:46:14 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:46:14 compute-0 nova_compute[260935]: 2025-10-11 08:46:14.536 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:46:14 compute-0 nova_compute[260935]: 2025-10-11 08:46:14.536 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:46:14 compute-0 nova_compute[260935]: 2025-10-11 08:46:14.548 2 INFO nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Using config drive
Oct 11 08:46:14 compute-0 nova_compute[260935]: 2025-10-11 08:46:14.590 2 DEBUG nova.storage.rbd_utils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:14 compute-0 nova_compute[260935]: 2025-10-11 08:46:14.748 2 INFO nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Creating config drive at /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b/disk.config
Oct 11 08:46:14 compute-0 nova_compute[260935]: 2025-10-11 08:46:14.756 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7xw3gg8j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:14 compute-0 ceph-mon[74313]: pgmap v1186: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Oct 11 08:46:14 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1391371203' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:14 compute-0 nova_compute[260935]: 2025-10-11 08:46:14.903 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7xw3gg8j" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:14 compute-0 nova_compute[260935]: 2025-10-11 08:46:14.930 2 DEBUG nova.storage.rbd_utils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:14 compute-0 nova_compute[260935]: 2025-10-11 08:46:14.934 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b/disk.config adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:15 compute-0 nova_compute[260935]: 2025-10-11 08:46:15.098 2 DEBUG oslo_concurrency.processutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b/disk.config adcf97ac-4b08-4f25-844b-e6f4e1305e3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:15 compute-0 nova_compute[260935]: 2025-10-11 08:46:15.100 2 INFO nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Deleting local config drive /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b/disk.config because it was imported into RBD.
Oct 11 08:46:15 compute-0 systemd-machined[215705]: New machine qemu-10-instance-0000000a.
Oct 11 08:46:15 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Oct 11 08:46:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:46:15.178 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:46:15.178 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:46:15.179 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.090 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172376.0894575, adcf97ac-4b08-4f25-844b-e6f4e1305e3b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.091 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] VM Resumed (Lifecycle Event)
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.123 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.124 2 DEBUG nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.126 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.133 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.137 2 INFO nova.virt.libvirt.driver [-] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Instance spawned successfully.
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.138 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.158 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.159 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172376.1212406, adcf97ac-4b08-4f25-844b-e6f4e1305e3b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.160 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] VM Started (Lifecycle Event)
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.184 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.185 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.186 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.187 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.188 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.189 2 DEBUG nova.virt.libvirt.driver [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.219 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.260 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.269 2 INFO nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Took 3.82 seconds to spawn the instance on the hypervisor.
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.269 2 DEBUG nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.281 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.325 2 INFO nova.compute.manager [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Took 4.91 seconds to build instance.
Oct 11 08:46:16 compute-0 nova_compute[260935]: 2025-10-11 08:46:16.362 2 DEBUG oslo_concurrency.lockutils [None req-d0221800-790e-40b6-bc8f-c17053fcab43 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:16 compute-0 ceph-mon[74313]: pgmap v1187: 321 pgs: 321 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 11 08:46:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 134 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 227 op/s
Oct 11 08:46:17 compute-0 podman[282718]: 2025-10-11 08:46:17.805265968 +0000 UTC m=+0.099125755 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.048 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172363.0469139, 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.048 2 INFO nova.compute.manager [-] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] VM Stopped (Lifecycle Event)
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.096 2 DEBUG nova.compute.manager [None req-c66dbc6d-cab8-4e01-aaf3-dd2374694f8f - - - - - -] [instance: 7d1e4ac2-be17-4b9b-a94e-0d70a7c4395d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.525 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Acquiring lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.525 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.525 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Acquiring lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.526 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.526 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.527 2 INFO nova.compute.manager [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Terminating instance
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.528 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Acquiring lock "refresh_cache-adcf97ac-4b08-4f25-844b-e6f4e1305e3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.528 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Acquired lock "refresh_cache-adcf97ac-4b08-4f25-844b-e6f4e1305e3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.528 2 DEBUG nova.network.neutron [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:18 compute-0 ceph-mon[74313]: pgmap v1188: 321 pgs: 321 active+clean; 134 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 227 op/s
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.870 2 DEBUG nova.network.neutron [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.876 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "d1140ad8-4653-441c-aeec-bf4de6a680f4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.877 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "d1140ad8-4653-441c-aeec-bf4de6a680f4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.904 2 DEBUG nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.984 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.985 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.991 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:46:18 compute-0 nova_compute[260935]: 2025-10-11 08:46:18.991 2 INFO nova.compute.claims [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:46:19 compute-0 nova_compute[260935]: 2025-10-11 08:46:19.186 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:46:19 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3837523798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:19 compute-0 nova_compute[260935]: 2025-10-11 08:46:19.650 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:19 compute-0 nova_compute[260935]: 2025-10-11 08:46:19.662 2 DEBUG nova.compute.provider_tree [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:46:19 compute-0 nova_compute[260935]: 2025-10-11 08:46:19.681 2 DEBUG nova.scheduler.client.report [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:46:19 compute-0 nova_compute[260935]: 2025-10-11 08:46:19.710 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:19 compute-0 nova_compute[260935]: 2025-10-11 08:46:19.712 2 DEBUG nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:46:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 134 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 199 op/s
Oct 11 08:46:19 compute-0 nova_compute[260935]: 2025-10-11 08:46:19.764 2 DEBUG nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:46:19 compute-0 nova_compute[260935]: 2025-10-11 08:46:19.764 2 DEBUG nova.network.neutron [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:46:19 compute-0 nova_compute[260935]: 2025-10-11 08:46:19.789 2 INFO nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:46:19 compute-0 nova_compute[260935]: 2025-10-11 08:46:19.816 2 DEBUG nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:46:19 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3837523798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:19 compute-0 nova_compute[260935]: 2025-10-11 08:46:19.918 2 DEBUG nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:46:19 compute-0 nova_compute[260935]: 2025-10-11 08:46:19.919 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:46:19 compute-0 nova_compute[260935]: 2025-10-11 08:46:19.920 2 INFO nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Creating image(s)
Oct 11 08:46:19 compute-0 nova_compute[260935]: 2025-10-11 08:46:19.954 2 DEBUG nova.storage.rbd_utils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image d1140ad8-4653-441c-aeec-bf4de6a680f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.002 2 DEBUG nova.storage.rbd_utils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image d1140ad8-4653-441c-aeec-bf4de6a680f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.036 2 DEBUG nova.storage.rbd_utils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image d1140ad8-4653-441c-aeec-bf4de6a680f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.040 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.149 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.150 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.153 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.154 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.199 2 DEBUG nova.storage.rbd_utils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image d1140ad8-4653-441c-aeec-bf4de6a680f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.207 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d1140ad8-4653-441c-aeec-bf4de6a680f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.249 2 DEBUG nova.network.neutron [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.277 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Releasing lock "refresh_cache-adcf97ac-4b08-4f25-844b-e6f4e1305e3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.278 2 DEBUG nova.compute.manager [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:46:20 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct 11 08:46:20 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 5.109s CPU time.
Oct 11 08:46:20 compute-0 systemd-machined[215705]: Machine qemu-10-instance-0000000a terminated.
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.449 2 DEBUG nova.network.neutron [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.450 2 DEBUG nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.501 2 INFO nova.virt.libvirt.driver [-] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Instance destroyed successfully.
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.501 2 DEBUG nova.objects.instance [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Lazy-loading 'resources' on Instance uuid adcf97ac-4b08-4f25-844b-e6f4e1305e3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.566 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d1140ad8-4653-441c-aeec-bf4de6a680f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.681 2 DEBUG nova.storage.rbd_utils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] resizing rbd image d1140ad8-4653-441c-aeec-bf4de6a680f4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.834 2 DEBUG nova.objects.instance [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lazy-loading 'migration_context' on Instance uuid d1140ad8-4653-441c-aeec-bf4de6a680f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:20 compute-0 ceph-mon[74313]: pgmap v1189: 321 pgs: 321 active+clean; 134 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 199 op/s
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.879 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.879 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Ensure instance console log exists: /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.880 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.880 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.880 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.882 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.887 2 WARNING nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.894 2 DEBUG nova.virt.libvirt.host [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.895 2 DEBUG nova.virt.libvirt.host [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.901 2 DEBUG nova.virt.libvirt.host [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.903 2 DEBUG nova.virt.libvirt.host [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.904 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.904 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.904 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.904 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.905 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.905 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.905 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.905 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.906 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.906 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.906 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.907 2 DEBUG nova.virt.hardware [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.914 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.971 2 INFO nova.virt.libvirt.driver [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Deleting instance files /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b_del
Oct 11 08:46:20 compute-0 nova_compute[260935]: 2025-10-11 08:46:20.973 2 INFO nova.virt.libvirt.driver [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Deletion of /var/lib/nova/instances/adcf97ac-4b08-4f25-844b-e6f4e1305e3b_del complete
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.048 2 INFO nova.compute.manager [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.049 2 DEBUG oslo.service.loopingcall [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.049 2 DEBUG nova.compute.manager [-] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.049 2 DEBUG nova.network.neutron [-] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.212 2 DEBUG nova.network.neutron [-] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.226 2 DEBUG nova.network.neutron [-] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.239 2 INFO nova.compute.manager [-] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Took 0.19 seconds to deallocate network for instance.
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.280 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.281 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:46:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/843929755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.379 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.412 2 DEBUG nova.storage.rbd_utils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image d1140ad8-4653-441c-aeec-bf4de6a680f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.418 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.456 2 DEBUG oslo_concurrency.processutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 134 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 199 op/s
Oct 11 08:46:21 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/843929755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:46:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4037687385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:46:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4125394097' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.910 2 DEBUG oslo_concurrency.processutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.924 2 DEBUG nova.compute.provider_tree [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.945 2 DEBUG nova.scheduler.client.report [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.954 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.959 2 DEBUG nova.objects.instance [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lazy-loading 'pci_devices' on Instance uuid d1140ad8-4653-441c-aeec-bf4de6a680f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.979 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:21 compute-0 nova_compute[260935]: 2025-10-11 08:46:21.988 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:46:21 compute-0 nova_compute[260935]:   <uuid>d1140ad8-4653-441c-aeec-bf4de6a680f4</uuid>
Oct 11 08:46:21 compute-0 nova_compute[260935]:   <name>instance-0000000b</name>
Oct 11 08:46:21 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:46:21 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:46:21 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-1290371758</nova:name>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:46:20</nova:creationTime>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:46:21 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:46:21 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:46:21 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:46:21 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:46:21 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:46:21 compute-0 nova_compute[260935]:         <nova:user uuid="e35666a092ce4c07b2e0604bfeb5d359">tempest-ServersAdminNegativeTestJSON-646482616-project-member</nova:user>
Oct 11 08:46:21 compute-0 nova_compute[260935]:         <nova:project uuid="0b7ec3d2d2084beaa800f0f2abd06c77">tempest-ServersAdminNegativeTestJSON-646482616</nova:project>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:46:21 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:46:21 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <system>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <entry name="serial">d1140ad8-4653-441c-aeec-bf4de6a680f4</entry>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <entry name="uuid">d1140ad8-4653-441c-aeec-bf4de6a680f4</entry>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     </system>
Oct 11 08:46:21 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:46:21 compute-0 nova_compute[260935]:   <os>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:   </os>
Oct 11 08:46:21 compute-0 nova_compute[260935]:   <features>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:   </features>
Oct 11 08:46:21 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:46:21 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:46:21 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/d1140ad8-4653-441c-aeec-bf4de6a680f4_disk">
Oct 11 08:46:21 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       </source>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:46:21 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/d1140ad8-4653-441c-aeec-bf4de6a680f4_disk.config">
Oct 11 08:46:21 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       </source>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:46:21 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4/console.log" append="off"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <video>
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     </video>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:46:21 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:46:21 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:46:21 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:46:21 compute-0 nova_compute[260935]: </domain>
Oct 11 08:46:21 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:46:22 compute-0 nova_compute[260935]: 2025-10-11 08:46:22.023 2 INFO nova.scheduler.client.report [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Deleted allocations for instance adcf97ac-4b08-4f25-844b-e6f4e1305e3b
Oct 11 08:46:22 compute-0 nova_compute[260935]: 2025-10-11 08:46:22.047 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:46:22 compute-0 nova_compute[260935]: 2025-10-11 08:46:22.047 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:46:22 compute-0 nova_compute[260935]: 2025-10-11 08:46:22.048 2 INFO nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Using config drive
Oct 11 08:46:22 compute-0 nova_compute[260935]: 2025-10-11 08:46:22.090 2 DEBUG nova.storage.rbd_utils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image d1140ad8-4653-441c-aeec-bf4de6a680f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:22 compute-0 nova_compute[260935]: 2025-10-11 08:46:22.112 2 DEBUG oslo_concurrency.lockutils [None req-7e8bcdc3-1f7e-4533-b533-40c605046f6f faabd708b3c047d7b74a5ff47c363129 b25ec7ba37dc4a2193c9461101f5bff5 - - default default] Lock "adcf97ac-4b08-4f25-844b-e6f4e1305e3b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:22 compute-0 nova_compute[260935]: 2025-10-11 08:46:22.376 2 INFO nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Creating config drive at /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4/disk.config
Oct 11 08:46:22 compute-0 nova_compute[260935]: 2025-10-11 08:46:22.381 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3dzpe8e7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:22 compute-0 nova_compute[260935]: 2025-10-11 08:46:22.533 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3dzpe8e7" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:22 compute-0 nova_compute[260935]: 2025-10-11 08:46:22.569 2 DEBUG nova.storage.rbd_utils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] rbd image d1140ad8-4653-441c-aeec-bf4de6a680f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:22 compute-0 nova_compute[260935]: 2025-10-11 08:46:22.574 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4/disk.config d1140ad8-4653-441c-aeec-bf4de6a680f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:22 compute-0 nova_compute[260935]: 2025-10-11 08:46:22.782 2 DEBUG oslo_concurrency.processutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4/disk.config d1140ad8-4653-441c-aeec-bf4de6a680f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:22 compute-0 nova_compute[260935]: 2025-10-11 08:46:22.784 2 INFO nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Deleting local config drive /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4/disk.config because it was imported into RBD.
Oct 11 08:46:22 compute-0 ceph-mon[74313]: pgmap v1190: 321 pgs: 321 active+clean; 134 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 199 op/s
Oct 11 08:46:22 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4037687385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:22 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4125394097' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:22 compute-0 systemd-machined[215705]: New machine qemu-11-instance-0000000b.
Oct 11 08:46:22 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Oct 11 08:46:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:46:23 compute-0 nova_compute[260935]: 2025-10-11 08:46:23.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 134 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 255 op/s
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.095 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172384.0934303, d1140ad8-4653-441c-aeec-bf4de6a680f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.096 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] VM Resumed (Lifecycle Event)
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.100 2 DEBUG nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.100 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.106 2 INFO nova.virt.libvirt.driver [-] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Instance spawned successfully.
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.107 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.115 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.126 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.134 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.134 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.135 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.136 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.136 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.137 2 DEBUG nova.virt.libvirt.driver [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.143 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.143 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172384.0960228, d1140ad8-4653-441c-aeec-bf4de6a680f4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.144 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] VM Started (Lifecycle Event)
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.166 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.171 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.193 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.198 2 INFO nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Took 4.28 seconds to spawn the instance on the hypervisor.
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.199 2 DEBUG nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.271 2 INFO nova.compute.manager [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Took 5.32 seconds to build instance.
Oct 11 08:46:24 compute-0 nova_compute[260935]: 2025-10-11 08:46:24.288 2 DEBUG oslo_concurrency.lockutils [None req-247a30bc-f9b5-4a93-8d65-dc91a0707a37 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "d1140ad8-4653-441c-aeec-bf4de6a680f4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:46:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:46:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:46:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:46:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:46:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:46:24 compute-0 podman[283148]: 2025-10-11 08:46:24.785700561 +0000 UTC m=+0.091103515 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 11 08:46:24 compute-0 ceph-mon[74313]: pgmap v1191: 321 pgs: 321 active+clean; 134 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 255 op/s
Oct 11 08:46:25 compute-0 nova_compute[260935]: 2025-10-11 08:46:25.034 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:25 compute-0 nova_compute[260935]: 2025-10-11 08:46:25.035 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:25 compute-0 nova_compute[260935]: 2025-10-11 08:46:25.059 2 DEBUG nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:46:25 compute-0 nova_compute[260935]: 2025-10-11 08:46:25.203 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:25 compute-0 nova_compute[260935]: 2025-10-11 08:46:25.204 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:25 compute-0 nova_compute[260935]: 2025-10-11 08:46:25.213 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:46:25 compute-0 nova_compute[260935]: 2025-10-11 08:46:25.213 2 INFO nova.compute.claims [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:46:25 compute-0 nova_compute[260935]: 2025-10-11 08:46:25.376 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 134 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 218 op/s
Oct 11 08:46:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:46:25 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4271988514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:25 compute-0 nova_compute[260935]: 2025-10-11 08:46:25.874 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:25 compute-0 nova_compute[260935]: 2025-10-11 08:46:25.881 2 DEBUG nova.compute.provider_tree [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:46:25 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4271988514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:25 compute-0 nova_compute[260935]: 2025-10-11 08:46:25.902 2 DEBUG nova.scheduler.client.report [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:46:25 compute-0 nova_compute[260935]: 2025-10-11 08:46:25.925 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:25 compute-0 nova_compute[260935]: 2025-10-11 08:46:25.926 2 DEBUG nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:46:25 compute-0 nova_compute[260935]: 2025-10-11 08:46:25.981 2 DEBUG nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:46:25 compute-0 nova_compute[260935]: 2025-10-11 08:46:25.982 2 DEBUG nova.network.neutron [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.021 2 INFO nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.039 2 DEBUG nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.187 2 DEBUG nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.189 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.190 2 INFO nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Creating image(s)
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.226 2 DEBUG nova.storage.rbd_utils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.261 2 DEBUG nova.storage.rbd_utils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.286 2 DEBUG nova.storage.rbd_utils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.292 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.326 2 DEBUG nova.objects.instance [None req-8350f79d-4d9a-4f58-a70c-8d5ce260990b f7290af8b0bb4eaea5271709d2f074d1 bf78cdd64a5b457383affaaa7a1353ec - - default default] Lazy-loading 'pci_devices' on Instance uuid d1140ad8-4653-441c-aeec-bf4de6a680f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.347 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172386.3466928, d1140ad8-4653-441c-aeec-bf4de6a680f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.347 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] VM Paused (Lifecycle Event)
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.375 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.376 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.378 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.379 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.379 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.409 2 DEBUG nova.storage.rbd_utils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.415 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.448 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.469 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.619 2 DEBUG nova.network.neutron [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.620 2 DEBUG nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:46:26 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 11 08:46:26 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 3.595s CPU time.
Oct 11 08:46:26 compute-0 systemd-machined[215705]: Machine qemu-11-instance-0000000b terminated.
Oct 11 08:46:26 compute-0 ceph-mon[74313]: pgmap v1192: 321 pgs: 321 active+clean; 134 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 218 op/s
Oct 11 08:46:26 compute-0 nova_compute[260935]: 2025-10-11 08:46:26.916 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.008 2 DEBUG nova.compute.manager [None req-8350f79d-4d9a-4f58-a70c-8d5ce260990b f7290af8b0bb4eaea5271709d2f074d1 bf78cdd64a5b457383affaaa7a1353ec - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.015 2 DEBUG nova.storage.rbd_utils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] resizing rbd image e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.145 2 DEBUG nova.objects.instance [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lazy-loading 'migration_context' on Instance uuid e1d5bdab-1090-4b36-bff3-f11d0bc1d591 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.168 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.169 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Ensure instance console log exists: /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.169 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.170 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.170 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.173 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.179 2 WARNING nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.184 2 DEBUG nova.virt.libvirt.host [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.185 2 DEBUG nova.virt.libvirt.host [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.188 2 DEBUG nova.virt.libvirt.host [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.189 2 DEBUG nova.virt.libvirt.host [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.189 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.190 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.191 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.191 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.191 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.192 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.192 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.193 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.193 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.194 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.194 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.194 2 DEBUG nova.virt.hardware [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.199 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:46:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2466551359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.666 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.688 2 DEBUG nova.storage.rbd_utils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:27 compute-0 nova_compute[260935]: 2025-10-11 08:46:27.691 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 213 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 7.5 MiB/s wr, 381 op/s
Oct 11 08:46:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2466551359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:46:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:46:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/618721267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:28 compute-0 nova_compute[260935]: 2025-10-11 08:46:28.269 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:28 compute-0 nova_compute[260935]: 2025-10-11 08:46:28.271 2 DEBUG nova.objects.instance [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lazy-loading 'pci_devices' on Instance uuid e1d5bdab-1090-4b36-bff3-f11d0bc1d591 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:28 compute-0 nova_compute[260935]: 2025-10-11 08:46:28.287 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:46:28 compute-0 nova_compute[260935]:   <uuid>e1d5bdab-1090-4b36-bff3-f11d0bc1d591</uuid>
Oct 11 08:46:28 compute-0 nova_compute[260935]:   <name>instance-0000000c</name>
Oct 11 08:46:28 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:46:28 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:46:28 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-884730205</nova:name>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:46:27</nova:creationTime>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:46:28 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:46:28 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:46:28 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:46:28 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:46:28 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:46:28 compute-0 nova_compute[260935]:         <nova:user uuid="c74c2e90e9e04fd283655f1fac1d2bf7">tempest-DeleteServersAdminTestJSON-159697733-project-member</nova:user>
Oct 11 08:46:28 compute-0 nova_compute[260935]:         <nova:project uuid="f8adf6d5f1034e90bb5184f87dbf3a16">tempest-DeleteServersAdminTestJSON-159697733</nova:project>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:46:28 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:46:28 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <system>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <entry name="serial">e1d5bdab-1090-4b36-bff3-f11d0bc1d591</entry>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <entry name="uuid">e1d5bdab-1090-4b36-bff3-f11d0bc1d591</entry>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     </system>
Oct 11 08:46:28 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:46:28 compute-0 nova_compute[260935]:   <os>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:   </os>
Oct 11 08:46:28 compute-0 nova_compute[260935]:   <features>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:   </features>
Oct 11 08:46:28 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:46:28 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:46:28 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk">
Oct 11 08:46:28 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       </source>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:46:28 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk.config">
Oct 11 08:46:28 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       </source>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:46:28 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591/console.log" append="off"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <video>
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     </video>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:46:28 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:46:28 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:46:28 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:46:28 compute-0 nova_compute[260935]: </domain>
Oct 11 08:46:28 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:46:28 compute-0 nova_compute[260935]: 2025-10-11 08:46:28.354 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:46:28 compute-0 nova_compute[260935]: 2025-10-11 08:46:28.354 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:46:28 compute-0 nova_compute[260935]: 2025-10-11 08:46:28.355 2 INFO nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Using config drive
Oct 11 08:46:28 compute-0 nova_compute[260935]: 2025-10-11 08:46:28.376 2 DEBUG nova.storage.rbd_utils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:28 compute-0 nova_compute[260935]: 2025-10-11 08:46:28.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:28 compute-0 nova_compute[260935]: 2025-10-11 08:46:28.789 2 INFO nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Creating config drive at /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591/disk.config
Oct 11 08:46:28 compute-0 nova_compute[260935]: 2025-10-11 08:46:28.793 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprzemzbm7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:28 compute-0 ceph-mon[74313]: pgmap v1193: 321 pgs: 321 active+clean; 213 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 7.5 MiB/s wr, 381 op/s
Oct 11 08:46:28 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/618721267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:28 compute-0 nova_compute[260935]: 2025-10-11 08:46:28.926 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprzemzbm7" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:28 compute-0 nova_compute[260935]: 2025-10-11 08:46:28.968 2 DEBUG nova.storage.rbd_utils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] rbd image e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:28 compute-0 nova_compute[260935]: 2025-10-11 08:46:28.974 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591/disk.config e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:29 compute-0 nova_compute[260935]: 2025-10-11 08:46:29.167 2 DEBUG oslo_concurrency.processutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591/disk.config e1d5bdab-1090-4b36-bff3-f11d0bc1d591_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:29 compute-0 nova_compute[260935]: 2025-10-11 08:46:29.169 2 INFO nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Deleting local config drive /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591/disk.config because it was imported into RBD.
Oct 11 08:46:29 compute-0 systemd-machined[215705]: New machine qemu-12-instance-0000000c.
Oct 11 08:46:29 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Oct 11 08:46:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 213 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 218 op/s
Oct 11 08:46:29 compute-0 nova_compute[260935]: 2025-10-11 08:46:29.797 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "d1140ad8-4653-441c-aeec-bf4de6a680f4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:29 compute-0 nova_compute[260935]: 2025-10-11 08:46:29.799 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "d1140ad8-4653-441c-aeec-bf4de6a680f4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:29 compute-0 nova_compute[260935]: 2025-10-11 08:46:29.800 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "d1140ad8-4653-441c-aeec-bf4de6a680f4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:29 compute-0 nova_compute[260935]: 2025-10-11 08:46:29.800 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "d1140ad8-4653-441c-aeec-bf4de6a680f4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:29 compute-0 nova_compute[260935]: 2025-10-11 08:46:29.800 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "d1140ad8-4653-441c-aeec-bf4de6a680f4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:29 compute-0 nova_compute[260935]: 2025-10-11 08:46:29.802 2 INFO nova.compute.manager [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Terminating instance
Oct 11 08:46:29 compute-0 nova_compute[260935]: 2025-10-11 08:46:29.803 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "refresh_cache-d1140ad8-4653-441c-aeec-bf4de6a680f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:46:29 compute-0 nova_compute[260935]: 2025-10-11 08:46:29.803 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquired lock "refresh_cache-d1140ad8-4653-441c-aeec-bf4de6a680f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:46:29 compute-0 nova_compute[260935]: 2025-10-11 08:46:29.803 2 DEBUG nova.network.neutron [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:46:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct 11 08:46:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct 11 08:46:29 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.072 2 DEBUG nova.network.neutron [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.329 2 DEBUG nova.network.neutron [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.351 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Releasing lock "refresh_cache-d1140ad8-4653-441c-aeec-bf4de6a680f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.352 2 DEBUG nova.compute.manager [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.363 2 INFO nova.virt.libvirt.driver [-] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Instance destroyed successfully.
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.364 2 DEBUG nova.objects.instance [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lazy-loading 'resources' on Instance uuid d1140ad8-4653-441c-aeec-bf4de6a680f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.449 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172390.4456844, e1d5bdab-1090-4b36-bff3-f11d0bc1d591 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.450 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] VM Resumed (Lifecycle Event)
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.454 2 DEBUG nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.455 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.461 2 INFO nova.virt.libvirt.driver [-] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Instance spawned successfully.
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.462 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.482 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.487 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.500 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.501 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.502 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.502 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.503 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.504 2 DEBUG nova.virt.libvirt.driver [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.512 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.513 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172390.4483535, e1d5bdab-1090-4b36-bff3-f11d0bc1d591 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.513 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] VM Started (Lifecycle Event)
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.548 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.556 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.588 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.593 2 INFO nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Took 4.41 seconds to spawn the instance on the hypervisor.
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.594 2 DEBUG nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.679 2 INFO nova.compute.manager [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Took 5.50 seconds to build instance.
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.703 2 DEBUG oslo_concurrency.lockutils [None req-d07b0c4b-6546-4391-b58a-e72875a3a621 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:30 compute-0 podman[283560]: 2025-10-11 08:46:30.83197407 +0000 UTC m=+0.120292099 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.859 2 INFO nova.virt.libvirt.driver [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Deleting instance files /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4_del
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.860 2 INFO nova.virt.libvirt.driver [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Deletion of /var/lib/nova/instances/d1140ad8-4653-441c-aeec-bf4de6a680f4_del complete
Oct 11 08:46:30 compute-0 ceph-mon[74313]: pgmap v1194: 321 pgs: 321 active+clean; 213 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 218 op/s
Oct 11 08:46:30 compute-0 ceph-mon[74313]: osdmap e137: 3 total, 3 up, 3 in
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.955 2 INFO nova.compute.manager [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Took 0.60 seconds to destroy the instance on the hypervisor.
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.956 2 DEBUG oslo.service.loopingcall [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.957 2 DEBUG nova.compute.manager [-] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:46:30 compute-0 nova_compute[260935]: 2025-10-11 08:46:30.957 2 DEBUG nova.network.neutron [-] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:46:31 compute-0 nova_compute[260935]: 2025-10-11 08:46:31.187 2 DEBUG nova.network.neutron [-] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:46:31 compute-0 nova_compute[260935]: 2025-10-11 08:46:31.200 2 DEBUG nova.network.neutron [-] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:46:31 compute-0 nova_compute[260935]: 2025-10-11 08:46:31.213 2 INFO nova.compute.manager [-] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Took 0.26 seconds to deallocate network for instance.
Oct 11 08:46:31 compute-0 nova_compute[260935]: 2025-10-11 08:46:31.255 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:31 compute-0 nova_compute[260935]: 2025-10-11 08:46:31.256 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:31 compute-0 nova_compute[260935]: 2025-10-11 08:46:31.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:31 compute-0 nova_compute[260935]: 2025-10-11 08:46:31.415 2 DEBUG oslo_concurrency.processutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 213 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.8 MiB/s wr, 262 op/s
Oct 11 08:46:31 compute-0 podman[283601]: 2025-10-11 08:46:31.855050345 +0000 UTC m=+0.154331083 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 08:46:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:46:31 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658224353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:31 compute-0 nova_compute[260935]: 2025-10-11 08:46:31.900 2 DEBUG oslo_concurrency.processutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:31 compute-0 nova_compute[260935]: 2025-10-11 08:46:31.908 2 DEBUG nova.compute.provider_tree [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:46:31 compute-0 nova_compute[260935]: 2025-10-11 08:46:31.925 2 DEBUG nova.scheduler.client.report [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:46:31 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1658224353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:31 compute-0 nova_compute[260935]: 2025-10-11 08:46:31.956 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:31 compute-0 nova_compute[260935]: 2025-10-11 08:46:31.999 2 INFO nova.scheduler.client.report [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Deleted allocations for instance d1140ad8-4653-441c-aeec-bf4de6a680f4
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.039 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.039 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.040 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.040 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.041 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.043 2 INFO nova.compute.manager [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Terminating instance
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.045 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "refresh_cache-e1d5bdab-1090-4b36-bff3-f11d0bc1d591" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.045 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquired lock "refresh_cache-e1d5bdab-1090-4b36-bff3-f11d0bc1d591" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.046 2 DEBUG nova.network.neutron [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.071 2 DEBUG oslo_concurrency.lockutils [None req-598c7046-b8dd-4a9d-bf2a-d776307457c3 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "d1140ad8-4653-441c-aeec-bf4de6a680f4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.197 2 DEBUG nova.network.neutron [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.413 2 DEBUG nova.network.neutron [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.434 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Releasing lock "refresh_cache-e1d5bdab-1090-4b36-bff3-f11d0bc1d591" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.435 2 DEBUG nova.compute.manager [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:46:32 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct 11 08:46:32 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 3.241s CPU time.
Oct 11 08:46:32 compute-0 systemd-machined[215705]: Machine qemu-12-instance-0000000c terminated.
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.661 2 INFO nova.virt.libvirt.driver [-] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Instance destroyed successfully.
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.662 2 DEBUG nova.objects.instance [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lazy-loading 'resources' on Instance uuid e1d5bdab-1090-4b36-bff3-f11d0bc1d591 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.750 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "97254317-c848-4369-b0b7-147d230fb1d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.751 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "97254317-c848-4369-b0b7-147d230fb1d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.752 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "97254317-c848-4369-b0b7-147d230fb1d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.752 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "97254317-c848-4369-b0b7-147d230fb1d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.753 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "97254317-c848-4369-b0b7-147d230fb1d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.755 2 INFO nova.compute.manager [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Terminating instance
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.756 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "refresh_cache-97254317-c848-4369-b0b7-147d230fb1d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.757 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquired lock "refresh_cache-97254317-c848-4369-b0b7-147d230fb1d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:46:32 compute-0 nova_compute[260935]: 2025-10-11 08:46:32.757 2 DEBUG nova.network.neutron [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:46:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct 11 08:46:32 compute-0 ceph-mon[74313]: pgmap v1196: 321 pgs: 321 active+clean; 213 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.8 MiB/s wr, 262 op/s
Oct 11 08:46:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct 11 08:46:32 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.107 2 DEBUG nova.network.neutron [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:46:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.155 2 INFO nova.virt.libvirt.driver [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Deleting instance files /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591_del
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.156 2 INFO nova.virt.libvirt.driver [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Deletion of /var/lib/nova/instances/e1d5bdab-1090-4b36-bff3-f11d0bc1d591_del complete
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.240 2 INFO nova.compute.manager [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.241 2 DEBUG oslo.service.loopingcall [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.241 2 DEBUG nova.compute.manager [-] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.242 2 DEBUG nova.network.neutron [-] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.431 2 DEBUG nova.network.neutron [-] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.444 2 DEBUG nova.network.neutron [-] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.457 2 DEBUG nova.network.neutron [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.470 2 INFO nova.compute.manager [-] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Took 0.23 seconds to deallocate network for instance.
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.476 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Releasing lock "refresh_cache-97254317-c848-4369-b0b7-147d230fb1d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.477 2 DEBUG nova.compute.manager [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.514 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.515 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:33 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct 11 08:46:33 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 13.051s CPU time.
Oct 11 08:46:33 compute-0 systemd-machined[215705]: Machine qemu-9-instance-00000009 terminated.
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.585 2 DEBUG oslo_concurrency.processutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.711 2 INFO nova.virt.libvirt.driver [-] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Instance destroyed successfully.
Oct 11 08:46:33 compute-0 nova_compute[260935]: 2025-10-11 08:46:33.712 2 DEBUG nova.objects.instance [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lazy-loading 'resources' on Instance uuid 97254317-c848-4369-b0b7-147d230fb1d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 5.9 MiB/s wr, 427 op/s
Oct 11 08:46:33 compute-0 ceph-mon[74313]: osdmap e138: 3 total, 3 up, 3 in
Oct 11 08:46:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:46:34 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/868699041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.071 2 DEBUG oslo_concurrency.processutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.082 2 DEBUG nova.compute.provider_tree [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.103 2 DEBUG nova.scheduler.client.report [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.129 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.157 2 INFO nova.scheduler.client.report [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Deleted allocations for instance e1d5bdab-1090-4b36-bff3-f11d0bc1d591
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.177 2 INFO nova.virt.libvirt.driver [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Deleting instance files /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1_del
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.178 2 INFO nova.virt.libvirt.driver [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Deletion of /var/lib/nova/instances/97254317-c848-4369-b0b7-147d230fb1d1_del complete
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.252 2 DEBUG oslo_concurrency.lockutils [None req-971532a9-9a46-4aba-86e8-fc73c3054690 c74c2e90e9e04fd283655f1fac1d2bf7 f8adf6d5f1034e90bb5184f87dbf3a16 - - default default] Lock "e1d5bdab-1090-4b36-bff3-f11d0bc1d591" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.255 2 INFO nova.compute.manager [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.255 2 DEBUG oslo.service.loopingcall [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.255 2 DEBUG nova.compute.manager [-] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.256 2 DEBUG nova.network.neutron [-] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.391 2 DEBUG nova.network.neutron [-] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.404 2 DEBUG nova.network.neutron [-] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.422 2 INFO nova.compute.manager [-] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Took 0.17 seconds to deallocate network for instance.
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.471 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.471 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.525 2 DEBUG oslo_concurrency.processutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:46:34 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/870174784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.960 2 DEBUG oslo_concurrency.processutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.970 2 DEBUG nova.compute.provider_tree [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:46:34 compute-0 ceph-mon[74313]: pgmap v1198: 321 pgs: 321 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 5.9 MiB/s wr, 427 op/s
Oct 11 08:46:34 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/868699041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:34 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/870174784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:34 compute-0 nova_compute[260935]: 2025-10-11 08:46:34.991 2 DEBUG nova.scheduler.client.report [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:46:35 compute-0 nova_compute[260935]: 2025-10-11 08:46:35.024 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:35 compute-0 nova_compute[260935]: 2025-10-11 08:46:35.073 2 INFO nova.scheduler.client.report [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Deleted allocations for instance 97254317-c848-4369-b0b7-147d230fb1d1
Oct 11 08:46:35 compute-0 nova_compute[260935]: 2025-10-11 08:46:35.174 2 DEBUG oslo_concurrency.lockutils [None req-1ecd3038-5521-4d63-a2a7-5c4df33ae299 e35666a092ce4c07b2e0604bfeb5d359 0b7ec3d2d2084beaa800f0f2abd06c77 - - default default] Lock "97254317-c848-4369-b0b7-147d230fb1d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:35 compute-0 nova_compute[260935]: 2025-10-11 08:46:35.500 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172380.4990637, adcf97ac-4b08-4f25-844b-e6f4e1305e3b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:35 compute-0 nova_compute[260935]: 2025-10-11 08:46:35.501 2 INFO nova.compute.manager [-] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] VM Stopped (Lifecycle Event)
Oct 11 08:46:35 compute-0 nova_compute[260935]: 2025-10-11 08:46:35.532 2 DEBUG nova.compute.manager [None req-3c54d203-cde9-48b7-8c46-ffd7dd493ae3 - - - - - -] [instance: adcf97ac-4b08-4f25-844b-e6f4e1305e3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 41 KiB/s wr, 182 op/s
Oct 11 08:46:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 11 08:46:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct 11 08:46:35 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct 11 08:46:36 compute-0 nova_compute[260935]: 2025-10-11 08:46:36.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:36 compute-0 ceph-mon[74313]: pgmap v1199: 321 pgs: 321 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 41 KiB/s wr, 182 op/s
Oct 11 08:46:36 compute-0 ceph-mon[74313]: osdmap e139: 3 total, 3 up, 3 in
Oct 11 08:46:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:46:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3896068585' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:46:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:46:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3896068585' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:46:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 48 KiB/s wr, 294 op/s
Oct 11 08:46:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3896068585' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:46:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3896068585' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:46:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:46:38 compute-0 nova_compute[260935]: 2025-10-11 08:46:38.624 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "318c32b0-9990-4579-8abb-fc79e7460d77" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:38 compute-0 nova_compute[260935]: 2025-10-11 08:46:38.625 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "318c32b0-9990-4579-8abb-fc79e7460d77" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:38 compute-0 nova_compute[260935]: 2025-10-11 08:46:38.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:38 compute-0 nova_compute[260935]: 2025-10-11 08:46:38.650 2 DEBUG nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:46:38 compute-0 nova_compute[260935]: 2025-10-11 08:46:38.726 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:38 compute-0 nova_compute[260935]: 2025-10-11 08:46:38.727 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:38 compute-0 nova_compute[260935]: 2025-10-11 08:46:38.734 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:46:38 compute-0 nova_compute[260935]: 2025-10-11 08:46:38.735 2 INFO nova.compute.claims [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:46:38 compute-0 nova_compute[260935]: 2025-10-11 08:46:38.857 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:39 compute-0 ceph-mon[74313]: pgmap v1201: 321 pgs: 321 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 48 KiB/s wr, 294 op/s
Oct 11 08:46:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:46:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2472654938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.394 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.402 2 DEBUG nova.compute.provider_tree [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.420 2 DEBUG nova.scheduler.client.report [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.444 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.445 2 DEBUG nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.494 2 DEBUG nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.495 2 DEBUG nova.network.neutron [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.520 2 INFO nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.543 2 DEBUG nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.668 2 DEBUG nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.670 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.670 2 INFO nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Creating image(s)
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.704 2 DEBUG nova.storage.rbd_utils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image 318c32b0-9990-4579-8abb-fc79e7460d77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.731 2 DEBUG nova.storage.rbd_utils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image 318c32b0-9990-4579-8abb-fc79e7460d77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 47 KiB/s wr, 287 op/s
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.760 2 DEBUG nova.storage.rbd_utils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image 318c32b0-9990-4579-8abb-fc79e7460d77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.764 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.852 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.854 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.855 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.855 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.886 2 DEBUG nova.storage.rbd_utils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image 318c32b0-9990-4579-8abb-fc79e7460d77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:39 compute-0 nova_compute[260935]: 2025-10-11 08:46:39.890 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 318c32b0-9990-4579-8abb-fc79e7460d77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:40 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2472654938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.030 2 DEBUG nova.network.neutron [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.031 2 DEBUG nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.201 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 318c32b0-9990-4579-8abb-fc79e7460d77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.285 2 DEBUG nova.storage.rbd_utils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] resizing rbd image 318c32b0-9990-4579-8abb-fc79e7460d77_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.405 2 DEBUG nova.objects.instance [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lazy-loading 'migration_context' on Instance uuid 318c32b0-9990-4579-8abb-fc79e7460d77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.421 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.421 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Ensure instance console log exists: /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.422 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.423 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.423 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.426 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.432 2 WARNING nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.438 2 DEBUG nova.virt.libvirt.host [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.439 2 DEBUG nova.virt.libvirt.host [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.442 2 DEBUG nova.virt.libvirt.host [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.443 2 DEBUG nova.virt.libvirt.host [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.444 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.444 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.445 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.445 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.446 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.446 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.446 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.447 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.447 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.447 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.448 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.448 2 DEBUG nova.virt.hardware [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.452 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.492 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "fa95251f-1ce7-4a45-8f53-ae932716a172" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.492 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "fa95251f-1ce7-4a45-8f53-ae932716a172" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.508 2 DEBUG nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.573 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.574 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.587 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.588 2 INFO nova.compute.claims [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.748 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:46:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/586804537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.941 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.968 2 DEBUG nova.storage.rbd_utils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image 318c32b0-9990-4579-8abb-fc79e7460d77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:40 compute-0 nova_compute[260935]: 2025-10-11 08:46:40.973 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:41 compute-0 ceph-mon[74313]: pgmap v1202: 321 pgs: 321 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 47 KiB/s wr, 287 op/s
Oct 11 08:46:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/586804537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:46:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/297712343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.255 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.263 2 DEBUG nova.compute.provider_tree [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.286 2 DEBUG nova.scheduler.client.report [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.316 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.317 2 DEBUG nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.386 2 DEBUG nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.386 2 DEBUG nova.network.neutron [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.420 2 INFO nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.441 2 DEBUG nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:46:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:46:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/247863693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.482 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.484 2 DEBUG nova.objects.instance [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lazy-loading 'pci_devices' on Instance uuid 318c32b0-9990-4579-8abb-fc79e7460d77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.499 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:46:41 compute-0 nova_compute[260935]:   <uuid>318c32b0-9990-4579-8abb-fc79e7460d77</uuid>
Oct 11 08:46:41 compute-0 nova_compute[260935]:   <name>instance-0000000d</name>
Oct 11 08:46:41 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:46:41 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:46:41 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1102694279</nova:name>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:46:40</nova:creationTime>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:46:41 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:46:41 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:46:41 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:46:41 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:46:41 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:46:41 compute-0 nova_compute[260935]:         <nova:user uuid="976b51186aac40648d84f68dc2241b25">tempest-ListImageFiltersTestJSON-2055883743-project-member</nova:user>
Oct 11 08:46:41 compute-0 nova_compute[260935]:         <nova:project uuid="cecc6eaab0b74c3786e0cdf9452f1b2c">tempest-ListImageFiltersTestJSON-2055883743</nova:project>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:46:41 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:46:41 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <system>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <entry name="serial">318c32b0-9990-4579-8abb-fc79e7460d77</entry>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <entry name="uuid">318c32b0-9990-4579-8abb-fc79e7460d77</entry>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     </system>
Oct 11 08:46:41 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:46:41 compute-0 nova_compute[260935]:   <os>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:   </os>
Oct 11 08:46:41 compute-0 nova_compute[260935]:   <features>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:   </features>
Oct 11 08:46:41 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:46:41 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:46:41 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/318c32b0-9990-4579-8abb-fc79e7460d77_disk">
Oct 11 08:46:41 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       </source>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:46:41 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/318c32b0-9990-4579-8abb-fc79e7460d77_disk.config">
Oct 11 08:46:41 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       </source>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:46:41 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77/console.log" append="off"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <video>
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     </video>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:46:41 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:46:41 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:46:41 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:46:41 compute-0 nova_compute[260935]: </domain>
Oct 11 08:46:41 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.534 2 DEBUG nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.536 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.536 2 INFO nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Creating image(s)
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.563 2 DEBUG nova.storage.rbd_utils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image fa95251f-1ce7-4a45-8f53-ae932716a172_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.587 2 DEBUG nova.storage.rbd_utils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image fa95251f-1ce7-4a45-8f53-ae932716a172_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.612 2 DEBUG nova.storage.rbd_utils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image fa95251f-1ce7-4a45-8f53-ae932716a172_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.616 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:41 compute-0 ovn_controller[152945]: 2025-10-11T08:46:41Z|00043|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.669 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.670 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.671 2 INFO nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Using config drive
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.703 2 DEBUG nova.storage.rbd_utils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image 318c32b0-9990-4579-8abb-fc79e7460d77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.711 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.711 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.712 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.713 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.740 2 DEBUG nova.storage.rbd_utils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image fa95251f-1ce7-4a45-8f53-ae932716a172_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.745 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 fa95251f-1ce7-4a45-8f53-ae932716a172_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 5.6 KiB/s wr, 96 op/s
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.898 2 DEBUG nova.network.neutron [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 08:46:41 compute-0 nova_compute[260935]: 2025-10-11 08:46:41.899 2 DEBUG nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.008 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172386.9637132, d1140ad8-4653-441c-aeec-bf4de6a680f4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.009 2 INFO nova.compute.manager [-] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] VM Stopped (Lifecycle Event)
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.030 2 DEBUG nova.compute.manager [None req-947b0616-6cd0-4172-8ba6-65959417ebf6 - - - - - -] [instance: d1140ad8-4653-441c-aeec-bf4de6a680f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/297712343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/247863693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.069 2 INFO nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Creating config drive at /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77/disk.config
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.078 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxqetqe05 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.110 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 fa95251f-1ce7-4a45-8f53-ae932716a172_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.194 2 DEBUG nova.storage.rbd_utils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] resizing rbd image fa95251f-1ce7-4a45-8f53-ae932716a172_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.235 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxqetqe05" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.286 2 DEBUG nova.storage.rbd_utils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image 318c32b0-9990-4579-8abb-fc79e7460d77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.295 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77/disk.config 318c32b0-9990-4579-8abb-fc79e7460d77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.394 2 DEBUG nova.objects.instance [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lazy-loading 'migration_context' on Instance uuid fa95251f-1ce7-4a45-8f53-ae932716a172 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.410 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.411 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Ensure instance console log exists: /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.412 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.412 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.413 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.417 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.424 2 WARNING nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.430 2 DEBUG nova.virt.libvirt.host [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.431 2 DEBUG nova.virt.libvirt.host [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.434 2 DEBUG nova.virt.libvirt.host [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.435 2 DEBUG nova.virt.libvirt.host [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.436 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.436 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.437 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.437 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.437 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.438 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.438 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.438 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.439 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.439 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.439 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.440 2 DEBUG nova.virt.hardware [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.445 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.510 2 DEBUG oslo_concurrency.processutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77/disk.config 318c32b0-9990-4579-8abb-fc79e7460d77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.511 2 INFO nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Deleting local config drive /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77/disk.config because it was imported into RBD.
Oct 11 08:46:42 compute-0 systemd-machined[215705]: New machine qemu-13-instance-0000000d.
Oct 11 08:46:42 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Oct 11 08:46:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:46:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/9991398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.934 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.958 2 DEBUG nova.storage.rbd_utils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image fa95251f-1ce7-4a45-8f53-ae932716a172_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:42 compute-0 nova_compute[260935]: 2025-10-11 08:46:42.961 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:43 compute-0 ceph-mon[74313]: pgmap v1203: 321 pgs: 321 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 5.6 KiB/s wr, 96 op/s
Oct 11 08:46:43 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/9991398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:46:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:46:43 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069511649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.477 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquiring lock "ff13b723-e064-4bed-93dc-b6bf28af0857" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.477 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "ff13b723-e064-4bed-93dc-b6bf28af0857" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.491 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.492 2 DEBUG nova.objects.instance [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lazy-loading 'pci_devices' on Instance uuid fa95251f-1ce7-4a45-8f53-ae932716a172 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.497 2 DEBUG nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.508 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:46:43 compute-0 nova_compute[260935]:   <uuid>fa95251f-1ce7-4a45-8f53-ae932716a172</uuid>
Oct 11 08:46:43 compute-0 nova_compute[260935]:   <name>instance-0000000e</name>
Oct 11 08:46:43 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:46:43 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:46:43 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <nova:name>tempest-ListImageFiltersTestJSON-server-868078342</nova:name>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:46:42</nova:creationTime>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:46:43 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:46:43 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:46:43 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:46:43 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:46:43 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:46:43 compute-0 nova_compute[260935]:         <nova:user uuid="976b51186aac40648d84f68dc2241b25">tempest-ListImageFiltersTestJSON-2055883743-project-member</nova:user>
Oct 11 08:46:43 compute-0 nova_compute[260935]:         <nova:project uuid="cecc6eaab0b74c3786e0cdf9452f1b2c">tempest-ListImageFiltersTestJSON-2055883743</nova:project>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:46:43 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:46:43 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <system>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <entry name="serial">fa95251f-1ce7-4a45-8f53-ae932716a172</entry>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <entry name="uuid">fa95251f-1ce7-4a45-8f53-ae932716a172</entry>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     </system>
Oct 11 08:46:43 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:46:43 compute-0 nova_compute[260935]:   <os>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:   </os>
Oct 11 08:46:43 compute-0 nova_compute[260935]:   <features>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:   </features>
Oct 11 08:46:43 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:46:43 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:46:43 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/fa95251f-1ce7-4a45-8f53-ae932716a172_disk">
Oct 11 08:46:43 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       </source>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:46:43 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/fa95251f-1ce7-4a45-8f53-ae932716a172_disk.config">
Oct 11 08:46:43 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       </source>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:46:43 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172/console.log" append="off"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <video>
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     </video>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:46:43 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:46:43 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:46:43 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:46:43 compute-0 nova_compute[260935]: </domain>
Oct 11 08:46:43 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.558 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.559 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.565 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.565 2 INFO nova.compute.claims [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.572 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.572 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.573 2 INFO nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Using config drive
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.598 2 DEBUG nova.storage.rbd_utils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image fa95251f-1ce7-4a45-8f53-ae932716a172_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.733 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 134 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 4.3 MiB/s wr, 148 op/s
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.765 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172403.7623792, 318c32b0-9990-4579-8abb-fc79e7460d77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.766 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] VM Resumed (Lifecycle Event)
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.768 2 DEBUG nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.768 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.772 2 INFO nova.virt.libvirt.driver [-] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Instance spawned successfully.
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.772 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.779 2 INFO nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Creating config drive at /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172/disk.config
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.784 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8deq_hn3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.810 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.816 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.818 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.818 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.819 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.819 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.819 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.820 2 DEBUG nova.virt.libvirt.driver [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.842 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.842 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172403.763082, 318c32b0-9990-4579-8abb-fc79e7460d77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.843 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] VM Started (Lifecycle Event)
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.859 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.863 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.871 2 INFO nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Took 4.20 seconds to spawn the instance on the hypervisor.
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.871 2 DEBUG nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.881 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.919 2 INFO nova.compute.manager [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Took 5.22 seconds to build instance.
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.924 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8deq_hn3" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.948 2 DEBUG nova.storage.rbd_utils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] rbd image fa95251f-1ce7-4a45-8f53-ae932716a172_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.951 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172/disk.config fa95251f-1ce7-4a45-8f53-ae932716a172_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:43 compute-0 nova_compute[260935]: 2025-10-11 08:46:43.978 2 DEBUG oslo_concurrency.lockutils [None req-ae69dea4-7307-4b9d-9e48-1a8c620af444 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "318c32b0-9990-4579-8abb-fc79e7460d77" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:44 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2069511649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.134 2 DEBUG oslo_concurrency.processutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172/disk.config fa95251f-1ce7-4a45-8f53-ae932716a172_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.135 2 INFO nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Deleting local config drive /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172/disk.config because it was imported into RBD.
Oct 11 08:46:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:46:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/867670951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:44 compute-0 systemd-machined[215705]: New machine qemu-14-instance-0000000e.
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.212 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:44 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.221 2 DEBUG nova.compute.provider_tree [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.236 2 DEBUG nova.scheduler.client.report [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.254 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.254 2 DEBUG nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.291 2 DEBUG nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.292 2 DEBUG nova.network.neutron [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.312 2 INFO nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.330 2 DEBUG nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.398 2 DEBUG nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.400 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.400 2 INFO nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Creating image(s)
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.431 2 DEBUG nova.storage.rbd_utils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] rbd image ff13b723-e064-4bed-93dc-b6bf28af0857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.465 2 DEBUG nova.storage.rbd_utils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] rbd image ff13b723-e064-4bed-93dc-b6bf28af0857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.497 2 DEBUG nova.storage.rbd_utils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] rbd image ff13b723-e064-4bed-93dc-b6bf28af0857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.502 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.580 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.582 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.582 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.583 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.607 2 DEBUG nova.storage.rbd_utils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] rbd image ff13b723-e064-4bed-93dc-b6bf28af0857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.612 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ff13b723-e064-4bed-93dc-b6bf28af0857_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.690 2 DEBUG nova.network.neutron [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.691 2 DEBUG nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:46:44 compute-0 nova_compute[260935]: 2025-10-11 08:46:44.925 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ff13b723-e064-4bed-93dc-b6bf28af0857_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.025 2 DEBUG nova.storage.rbd_utils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] resizing rbd image ff13b723-e064-4bed-93dc-b6bf28af0857_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:46:45 compute-0 ceph-mon[74313]: pgmap v1204: 321 pgs: 321 active+clean; 134 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 4.3 MiB/s wr, 148 op/s
Oct 11 08:46:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/867670951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.159 2 DEBUG nova.objects.instance [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lazy-loading 'migration_context' on Instance uuid ff13b723-e064-4bed-93dc-b6bf28af0857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.174 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.175 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Ensure instance console log exists: /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.177 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.179 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.179 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.182 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.191 2 WARNING nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.200 2 DEBUG nova.virt.libvirt.host [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.201 2 DEBUG nova.virt.libvirt.host [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.204 2 DEBUG nova.virt.libvirt.host [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.205 2 DEBUG nova.virt.libvirt.host [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.206 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.206 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.207 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.208 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.208 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.209 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.209 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.210 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.210 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.211 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.211 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.212 2 DEBUG nova.virt.hardware [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.215 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.379 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172405.3780336, fa95251f-1ce7-4a45-8f53-ae932716a172 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.380 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] VM Resumed (Lifecycle Event)
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.384 2 DEBUG nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.384 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.389 2 INFO nova.virt.libvirt.driver [-] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Instance spawned successfully.
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.390 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.409 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.416 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.425 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.426 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.427 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.428 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.428 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.429 2 DEBUG nova.virt.libvirt.driver [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.441 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.441 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172405.3833487, fa95251f-1ce7-4a45-8f53-ae932716a172 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.442 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] VM Started (Lifecycle Event)
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.475 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.480 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.501 2 INFO nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Took 3.97 seconds to spawn the instance on the hypervisor.
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.501 2 DEBUG nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.503 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.564 2 INFO nova.compute.manager [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Took 5.01 seconds to build instance.
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.580 2 DEBUG oslo_concurrency.lockutils [None req-ef1f735e-f8bf-495f-ba4c-84bfe8bde23c 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "fa95251f-1ce7-4a45-8f53-ae932716a172" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:46:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1085323306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.731 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 134 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 4.3 MiB/s wr, 148 op/s
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.760 2 DEBUG nova.storage.rbd_utils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] rbd image ff13b723-e064-4bed-93dc-b6bf28af0857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:45 compute-0 nova_compute[260935]: 2025-10-11 08:46:45.766 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:46 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1085323306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:46:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3547736486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:46 compute-0 nova_compute[260935]: 2025-10-11 08:46:46.305 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:46 compute-0 nova_compute[260935]: 2025-10-11 08:46:46.308 2 DEBUG nova.objects.instance [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lazy-loading 'pci_devices' on Instance uuid ff13b723-e064-4bed-93dc-b6bf28af0857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:46 compute-0 nova_compute[260935]: 2025-10-11 08:46:46.323 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:46:46 compute-0 nova_compute[260935]:   <uuid>ff13b723-e064-4bed-93dc-b6bf28af0857</uuid>
Oct 11 08:46:46 compute-0 nova_compute[260935]:   <name>instance-0000000f</name>
Oct 11 08:46:46 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:46:46 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:46:46 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <nova:name>tempest-TenantUsagesTestJSON-server-1292884304</nova:name>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:46:45</nova:creationTime>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:46:46 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:46:46 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:46:46 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:46:46 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:46:46 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:46:46 compute-0 nova_compute[260935]:         <nova:user uuid="dc0a1b420e6f47e9a9f604fe0dd240fd">tempest-TenantUsagesTestJSON-545714014-project-member</nova:user>
Oct 11 08:46:46 compute-0 nova_compute[260935]:         <nova:project uuid="666832175a6846b29256147e566ba5be">tempest-TenantUsagesTestJSON-545714014</nova:project>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:46:46 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:46:46 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <system>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <entry name="serial">ff13b723-e064-4bed-93dc-b6bf28af0857</entry>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <entry name="uuid">ff13b723-e064-4bed-93dc-b6bf28af0857</entry>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     </system>
Oct 11 08:46:46 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:46:46 compute-0 nova_compute[260935]:   <os>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:   </os>
Oct 11 08:46:46 compute-0 nova_compute[260935]:   <features>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:   </features>
Oct 11 08:46:46 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:46:46 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:46:46 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ff13b723-e064-4bed-93dc-b6bf28af0857_disk">
Oct 11 08:46:46 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       </source>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:46:46 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ff13b723-e064-4bed-93dc-b6bf28af0857_disk.config">
Oct 11 08:46:46 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       </source>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:46:46 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857/console.log" append="off"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <video>
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     </video>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:46:46 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:46:46 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:46:46 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:46:46 compute-0 nova_compute[260935]: </domain>
Oct 11 08:46:46 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:46:46 compute-0 nova_compute[260935]: 2025-10-11 08:46:46.380 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:46:46 compute-0 nova_compute[260935]: 2025-10-11 08:46:46.380 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:46:46 compute-0 nova_compute[260935]: 2025-10-11 08:46:46.387 2 INFO nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Using config drive
Oct 11 08:46:46 compute-0 nova_compute[260935]: 2025-10-11 08:46:46.420 2 DEBUG nova.storage.rbd_utils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] rbd image ff13b723-e064-4bed-93dc-b6bf28af0857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:46 compute-0 nova_compute[260935]: 2025-10-11 08:46:46.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:46 compute-0 nova_compute[260935]: 2025-10-11 08:46:46.687 2 INFO nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Creating config drive at /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857/disk.config
Oct 11 08:46:46 compute-0 nova_compute[260935]: 2025-10-11 08:46:46.695 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm2yarjj_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:46 compute-0 nova_compute[260935]: 2025-10-11 08:46:46.837 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm2yarjj_" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:46 compute-0 nova_compute[260935]: 2025-10-11 08:46:46.871 2 DEBUG nova.storage.rbd_utils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] rbd image ff13b723-e064-4bed-93dc-b6bf28af0857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:46:46 compute-0 nova_compute[260935]: 2025-10-11 08:46:46.875 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857/disk.config ff13b723-e064-4bed-93dc-b6bf28af0857_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:47 compute-0 nova_compute[260935]: 2025-10-11 08:46:47.053 2 DEBUG oslo_concurrency.processutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857/disk.config ff13b723-e064-4bed-93dc-b6bf28af0857_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:47 compute-0 nova_compute[260935]: 2025-10-11 08:46:47.056 2 INFO nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Deleting local config drive /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857/disk.config because it was imported into RBD.
Oct 11 08:46:47 compute-0 nova_compute[260935]: 2025-10-11 08:46:47.076 2 DEBUG nova.compute.manager [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:47 compute-0 ceph-mon[74313]: pgmap v1205: 321 pgs: 321 active+clean; 134 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 4.3 MiB/s wr, 148 op/s
Oct 11 08:46:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3547736486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:46:47 compute-0 systemd-machined[215705]: New machine qemu-15-instance-0000000f.
Oct 11 08:46:47 compute-0 nova_compute[260935]: 2025-10-11 08:46:47.138 2 INFO nova.compute.manager [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] instance snapshotting
Oct 11 08:46:47 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Oct 11 08:46:47 compute-0 nova_compute[260935]: 2025-10-11 08:46:47.498 2 INFO nova.virt.libvirt.driver [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Beginning live snapshot process
Oct 11 08:46:47 compute-0 nova_compute[260935]: 2025-10-11 08:46:47.635 2 DEBUG nova.virt.libvirt.imagebackend [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 08:46:47 compute-0 nova_compute[260935]: 2025-10-11 08:46:47.660 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172392.6593008, e1d5bdab-1090-4b36-bff3-f11d0bc1d591 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:47 compute-0 nova_compute[260935]: 2025-10-11 08:46:47.660 2 INFO nova.compute.manager [-] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] VM Stopped (Lifecycle Event)
Oct 11 08:46:47 compute-0 nova_compute[260935]: 2025-10-11 08:46:47.680 2 DEBUG nova.compute.manager [None req-63966129-3d6f-4943-a1f2-490a37f063d9 - - - - - -] [instance: e1d5bdab-1090-4b36-bff3-f11d0bc1d591] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 181 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.5 MiB/s wr, 314 op/s
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.083 2 DEBUG nova.storage.rbd_utils [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] creating snapshot(1b8a73cd32884872b8b701faca07aacb) on rbd image(318c32b0-9990-4579-8abb-fc79e7460d77_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:46:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.197 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172408.1862733, ff13b723-e064-4bed-93dc-b6bf28af0857 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.198 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] VM Resumed (Lifecycle Event)
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.200 2 DEBUG nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.202 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.213 2 INFO nova.virt.libvirt.driver [-] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Instance spawned successfully.
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.214 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.235 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.239 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.250 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.251 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.252 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.252 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.253 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.254 2 DEBUG nova.virt.libvirt.driver [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.288 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.288 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172408.1909053, ff13b723-e064-4bed-93dc-b6bf28af0857 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.289 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] VM Started (Lifecycle Event)
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.334 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.339 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.356 2 INFO nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Took 3.96 seconds to spawn the instance on the hypervisor.
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.356 2 DEBUG nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.369 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.430 2 INFO nova.compute.manager [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Took 4.89 seconds to build instance.
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.455 2 DEBUG oslo_concurrency.lockutils [None req-9fb38453-abce-4cf7-981e-645f5e8a1e97 dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "ff13b723-e064-4bed-93dc-b6bf28af0857" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.706 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172393.7047517, 97254317-c848-4369-b0b7-147d230fb1d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.708 2 INFO nova.compute.manager [-] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] VM Stopped (Lifecycle Event)
Oct 11 08:46:48 compute-0 nova_compute[260935]: 2025-10-11 08:46:48.740 2 DEBUG nova.compute.manager [None req-5e1f1b3c-c888-414f-aaad-3319fdc3ae74 - - - - - -] [instance: 97254317-c848-4369-b0b7-147d230fb1d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:48 compute-0 podman[284864]: 2025-10-11 08:46:48.781284075 +0000 UTC m=+0.080743659 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:46:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 11 08:46:49 compute-0 ceph-mon[74313]: pgmap v1206: 321 pgs: 321 active+clean; 181 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.5 MiB/s wr, 314 op/s
Oct 11 08:46:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct 11 08:46:49 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct 11 08:46:49 compute-0 nova_compute[260935]: 2025-10-11 08:46:49.176 2 DEBUG nova.storage.rbd_utils [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] cloning vms/318c32b0-9990-4579-8abb-fc79e7460d77_disk@1b8a73cd32884872b8b701faca07aacb to images/c42c8a26-c9fa-4474-9b78-d97f24dd45ce clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:46:49 compute-0 nova_compute[260935]: 2025-10-11 08:46:49.313 2 DEBUG nova.storage.rbd_utils [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] flattening images/c42c8a26-c9fa-4474-9b78-d97f24dd45ce flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:46:49 compute-0 nova_compute[260935]: 2025-10-11 08:46:49.579 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquiring lock "ff13b723-e064-4bed-93dc-b6bf28af0857" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:49 compute-0 nova_compute[260935]: 2025-10-11 08:46:49.580 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "ff13b723-e064-4bed-93dc-b6bf28af0857" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:49 compute-0 nova_compute[260935]: 2025-10-11 08:46:49.581 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquiring lock "ff13b723-e064-4bed-93dc-b6bf28af0857-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:49 compute-0 nova_compute[260935]: 2025-10-11 08:46:49.581 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "ff13b723-e064-4bed-93dc-b6bf28af0857-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:49 compute-0 nova_compute[260935]: 2025-10-11 08:46:49.583 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "ff13b723-e064-4bed-93dc-b6bf28af0857-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:49 compute-0 nova_compute[260935]: 2025-10-11 08:46:49.586 2 INFO nova.compute.manager [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Terminating instance
Oct 11 08:46:49 compute-0 nova_compute[260935]: 2025-10-11 08:46:49.587 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquiring lock "refresh_cache-ff13b723-e064-4bed-93dc-b6bf28af0857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:46:49 compute-0 nova_compute[260935]: 2025-10-11 08:46:49.588 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquired lock "refresh_cache-ff13b723-e064-4bed-93dc-b6bf28af0857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:46:49 compute-0 nova_compute[260935]: 2025-10-11 08:46:49.588 2 DEBUG nova.network.neutron [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:46:49 compute-0 nova_compute[260935]: 2025-10-11 08:46:49.592 2 DEBUG nova.storage.rbd_utils [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] removing snapshot(1b8a73cd32884872b8b701faca07aacb) on rbd image(318c32b0-9990-4579-8abb-fc79e7460d77_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:46:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 181 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.4 MiB/s wr, 286 op/s
Oct 11 08:46:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct 11 08:46:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct 11 08:46:50 compute-0 ceph-mon[74313]: osdmap e140: 3 total, 3 up, 3 in
Oct 11 08:46:50 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct 11 08:46:50 compute-0 nova_compute[260935]: 2025-10-11 08:46:50.171 2 DEBUG nova.storage.rbd_utils [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] creating snapshot(snap) on rbd image(c42c8a26-c9fa-4474-9b78-d97f24dd45ce) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:46:50 compute-0 nova_compute[260935]: 2025-10-11 08:46:50.273 2 DEBUG nova.network.neutron [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:46:50 compute-0 nova_compute[260935]: 2025-10-11 08:46:50.745 2 DEBUG nova.network.neutron [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:46:50 compute-0 nova_compute[260935]: 2025-10-11 08:46:50.763 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Releasing lock "refresh_cache-ff13b723-e064-4bed-93dc-b6bf28af0857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:46:50 compute-0 nova_compute[260935]: 2025-10-11 08:46:50.764 2 DEBUG nova.compute.manager [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:46:50 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Oct 11 08:46:50 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 3.542s CPU time.
Oct 11 08:46:50 compute-0 systemd-machined[215705]: Machine qemu-15-instance-0000000f terminated.
Oct 11 08:46:50 compute-0 nova_compute[260935]: 2025-10-11 08:46:50.991 2 INFO nova.virt.libvirt.driver [-] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Instance destroyed successfully.
Oct 11 08:46:50 compute-0 nova_compute[260935]: 2025-10-11 08:46:50.992 2 DEBUG nova.objects.instance [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lazy-loading 'resources' on Instance uuid ff13b723-e064-4bed-93dc-b6bf28af0857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:46:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct 11 08:46:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct 11 08:46:51 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct 11 08:46:51 compute-0 ceph-mon[74313]: pgmap v1208: 321 pgs: 321 active+clean; 181 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.4 MiB/s wr, 286 op/s
Oct 11 08:46:51 compute-0 ceph-mon[74313]: osdmap e141: 3 total, 3 up, 3 in
Oct 11 08:46:51 compute-0 nova_compute[260935]: 2025-10-11 08:46:51.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:51 compute-0 nova_compute[260935]: 2025-10-11 08:46:51.457 2 INFO nova.virt.libvirt.driver [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Deleting instance files /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857_del
Oct 11 08:46:51 compute-0 nova_compute[260935]: 2025-10-11 08:46:51.458 2 INFO nova.virt.libvirt.driver [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Deletion of /var/lib/nova/instances/ff13b723-e064-4bed-93dc-b6bf28af0857_del complete
Oct 11 08:46:51 compute-0 nova_compute[260935]: 2025-10-11 08:46:51.530 2 INFO nova.compute.manager [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 11 08:46:51 compute-0 nova_compute[260935]: 2025-10-11 08:46:51.531 2 DEBUG oslo.service.loopingcall [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:46:51 compute-0 nova_compute[260935]: 2025-10-11 08:46:51.532 2 DEBUG nova.compute.manager [-] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:46:51 compute-0 nova_compute[260935]: 2025-10-11 08:46:51.532 2 DEBUG nova.network.neutron [-] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:46:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 181 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 7.7 MiB/s rd, 3.6 MiB/s wr, 369 op/s
Oct 11 08:46:51 compute-0 nova_compute[260935]: 2025-10-11 08:46:51.772 2 DEBUG nova.network.neutron [-] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:46:51 compute-0 nova_compute[260935]: 2025-10-11 08:46:51.789 2 DEBUG nova.network.neutron [-] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:46:51 compute-0 nova_compute[260935]: 2025-10-11 08:46:51.801 2 INFO nova.compute.manager [-] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Took 0.27 seconds to deallocate network for instance.
Oct 11 08:46:51 compute-0 nova_compute[260935]: 2025-10-11 08:46:51.848 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:46:51 compute-0 nova_compute[260935]: 2025-10-11 08:46:51.849 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:46:51 compute-0 nova_compute[260935]: 2025-10-11 08:46:51.951 2 DEBUG oslo_concurrency.processutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:46:52 compute-0 ceph-mon[74313]: osdmap e142: 3 total, 3 up, 3 in
Oct 11 08:46:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:46:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1498220074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:52 compute-0 nova_compute[260935]: 2025-10-11 08:46:52.456 2 DEBUG oslo_concurrency.processutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:46:52 compute-0 nova_compute[260935]: 2025-10-11 08:46:52.464 2 DEBUG nova.compute.provider_tree [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:46:52 compute-0 nova_compute[260935]: 2025-10-11 08:46:52.480 2 DEBUG nova.scheduler.client.report [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:46:52 compute-0 nova_compute[260935]: 2025-10-11 08:46:52.509 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:52 compute-0 nova_compute[260935]: 2025-10-11 08:46:52.538 2 INFO nova.scheduler.client.report [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Deleted allocations for instance ff13b723-e064-4bed-93dc-b6bf28af0857
Oct 11 08:46:52 compute-0 nova_compute[260935]: 2025-10-11 08:46:52.617 2 DEBUG oslo_concurrency.lockutils [None req-73234762-0e1b-432e-96e0-6ec5d35de60e dc0a1b420e6f47e9a9f604fe0dd240fd 666832175a6846b29256147e566ba5be - - default default] Lock "ff13b723-e064-4bed-93dc-b6bf28af0857" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:46:52 compute-0 nova_compute[260935]: 2025-10-11 08:46:52.975 2 INFO nova.virt.libvirt.driver [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Snapshot image upload complete
Oct 11 08:46:52 compute-0 nova_compute[260935]: 2025-10-11 08:46:52.976 2 INFO nova.compute.manager [None req-0a082280-bc65-45a8-95c7-433e38c276a6 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Took 5.84 seconds to snapshot the instance on the hypervisor.
Oct 11 08:46:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:46:53 compute-0 ceph-mon[74313]: pgmap v1211: 321 pgs: 321 active+clean; 181 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 7.7 MiB/s rd, 3.6 MiB/s wr, 369 op/s
Oct 11 08:46:53 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1498220074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:46:53 compute-0 nova_compute[260935]: 2025-10-11 08:46:53.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:53 compute-0 nova_compute[260935]: 2025-10-11 08:46:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:46:53 compute-0 nova_compute[260935]: 2025-10-11 08:46:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:46:53 compute-0 nova_compute[260935]: 2025-10-11 08:46:53.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:46:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 181 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 7.5 MiB/s rd, 3.6 MiB/s wr, 310 op/s
Oct 11 08:46:54 compute-0 nova_compute[260935]: 2025-10-11 08:46:54.384 2 DEBUG nova.compute.manager [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:46:54 compute-0 nova_compute[260935]: 2025-10-11 08:46:54.435 2 INFO nova.compute.manager [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] instance snapshotting
Oct 11 08:46:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:46:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:46:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:46:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:46:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:46:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:46:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:46:54
Oct 11 08:46:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:46:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:46:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'volumes']
Oct 11 08:46:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:46:54 compute-0 nova_compute[260935]: 2025-10-11 08:46:54.858 2 INFO nova.virt.libvirt.driver [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Beginning live snapshot process
Oct 11 08:46:55 compute-0 nova_compute[260935]: 2025-10-11 08:46:55.036 2 DEBUG nova.virt.libvirt.imagebackend [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 08:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:46:55 compute-0 ceph-mon[74313]: pgmap v1212: 321 pgs: 321 active+clean; 181 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 7.5 MiB/s rd, 3.6 MiB/s wr, 310 op/s
Oct 11 08:46:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:46:55.254 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:46:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:46:55.255 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 08:46:55 compute-0 nova_compute[260935]: 2025-10-11 08:46:55.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:55 compute-0 nova_compute[260935]: 2025-10-11 08:46:55.625 2 DEBUG nova.storage.rbd_utils [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] creating snapshot(297126c640e945f882404423e085dc3c) on rbd image(fa95251f-1ce7-4a45-8f53-ae932716a172_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:46:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 181 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 6.8 MiB/s rd, 3.2 MiB/s wr, 280 op/s
Oct 11 08:46:55 compute-0 podman[285067]: 2025-10-11 08:46:55.788162994 +0000 UTC m=+0.086205375 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:46:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct 11 08:46:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct 11 08:46:56 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct 11 08:46:56 compute-0 nova_compute[260935]: 2025-10-11 08:46:56.263 2 DEBUG nova.storage.rbd_utils [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] cloning vms/fa95251f-1ce7-4a45-8f53-ae932716a172_disk@297126c640e945f882404423e085dc3c to images/5259a9a2-9e09-4a6d-9c26-e2e19bf2a857 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:46:56 compute-0 nova_compute[260935]: 2025-10-11 08:46:56.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:56 compute-0 nova_compute[260935]: 2025-10-11 08:46:56.493 2 DEBUG nova.storage.rbd_utils [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] flattening images/5259a9a2-9e09-4a6d-9c26-e2e19bf2a857 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:46:56 compute-0 nova_compute[260935]: 2025-10-11 08:46:56.756 2 DEBUG nova.storage.rbd_utils [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] removing snapshot(297126c640e945f882404423e085dc3c) on rbd image(fa95251f-1ce7-4a45-8f53-ae932716a172_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:46:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct 11 08:46:57 compute-0 ceph-mon[74313]: pgmap v1213: 321 pgs: 321 active+clean; 181 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 6.8 MiB/s rd, 3.2 MiB/s wr, 280 op/s
Oct 11 08:46:57 compute-0 ceph-mon[74313]: osdmap e143: 3 total, 3 up, 3 in
Oct 11 08:46:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct 11 08:46:57 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.203358) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172417203415, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1375, "num_deletes": 251, "total_data_size": 1852045, "memory_usage": 1879632, "flush_reason": "Manual Compaction"}
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172417216655, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1830786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24291, "largest_seqno": 25665, "table_properties": {"data_size": 1824406, "index_size": 3519, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14264, "raw_average_key_size": 20, "raw_value_size": 1811282, "raw_average_value_size": 2572, "num_data_blocks": 156, "num_entries": 704, "num_filter_entries": 704, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760172303, "oldest_key_time": 1760172303, "file_creation_time": 1760172417, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 13347 microseconds, and 8785 cpu microseconds.
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.216707) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1830786 bytes OK
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.216737) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.219607) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.219631) EVENT_LOG_v1 {"time_micros": 1760172417219624, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.219654) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1845865, prev total WAL file size 1845865, number of live WAL files 2.
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.220730) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1787KB)], [56(6894KB)]
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172417220795, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 8890882, "oldest_snapshot_seqno": -1}
Oct 11 08:46:57 compute-0 nova_compute[260935]: 2025-10-11 08:46:57.239 2 DEBUG nova.storage.rbd_utils [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] creating snapshot(snap) on rbd image(5259a9a2-9e09-4a6d-9c26-e2e19bf2a857) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4702 keys, 7172618 bytes, temperature: kUnknown
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172417267912, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7172618, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7141354, "index_size": 18418, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 117850, "raw_average_key_size": 25, "raw_value_size": 7056527, "raw_average_value_size": 1500, "num_data_blocks": 760, "num_entries": 4702, "num_filter_entries": 4702, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760172417, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.268313) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7172618 bytes
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.269944) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.1 rd, 151.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 6.7 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(8.8) write-amplify(3.9) OK, records in: 5220, records dropped: 518 output_compression: NoCompression
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.269963) EVENT_LOG_v1 {"time_micros": 1760172417269954, "job": 30, "event": "compaction_finished", "compaction_time_micros": 47274, "compaction_time_cpu_micros": 31941, "output_level": 6, "num_output_files": 1, "total_output_size": 7172618, "num_input_records": 5220, "num_output_records": 4702, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172417270868, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172417272341, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.220618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.272538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.272546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.272549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.272551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:46:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:46:57.272553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:46:57 compute-0 nova_compute[260935]: 2025-10-11 08:46:57.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:46:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 291 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 14 MiB/s wr, 602 op/s
Oct 11 08:46:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:46:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct 11 08:46:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct 11 08:46:58 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct 11 08:46:58 compute-0 ceph-mon[74313]: osdmap e144: 3 total, 3 up, 3 in
Oct 11 08:46:58 compute-0 ceph-mon[74313]: osdmap e145: 3 total, 3 up, 3 in
Oct 11 08:46:58 compute-0 nova_compute[260935]: 2025-10-11 08:46:58.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:46:58 compute-0 nova_compute[260935]: 2025-10-11 08:46:58.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:46:58 compute-0 nova_compute[260935]: 2025-10-11 08:46:58.730 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:46:59 compute-0 ceph-mon[74313]: pgmap v1216: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 291 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 14 MiB/s wr, 602 op/s
Oct 11 08:46:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:46:59.258 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:46:59 compute-0 nova_compute[260935]: 2025-10-11 08:46:59.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:46:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 291 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 12 MiB/s wr, 354 op/s
Oct 11 08:47:00 compute-0 nova_compute[260935]: 2025-10-11 08:47:00.467 2 INFO nova.virt.libvirt.driver [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Snapshot image upload complete
Oct 11 08:47:00 compute-0 nova_compute[260935]: 2025-10-11 08:47:00.468 2 INFO nova.compute.manager [None req-3050fcdb-e94b-407e-a449-23fb8a3dd1a4 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Took 6.03 seconds to snapshot the instance on the hypervisor.
Oct 11 08:47:00 compute-0 nova_compute[260935]: 2025-10-11 08:47:00.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:47:01 compute-0 ceph-mon[74313]: pgmap v1218: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 291 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 12 MiB/s wr, 354 op/s
Oct 11 08:47:01 compute-0 nova_compute[260935]: 2025-10-11 08:47:01.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:01 compute-0 nova_compute[260935]: 2025-10-11 08:47:01.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:47:01 compute-0 nova_compute[260935]: 2025-10-11 08:47:01.733 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:01 compute-0 nova_compute[260935]: 2025-10-11 08:47:01.734 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:01 compute-0 nova_compute[260935]: 2025-10-11 08:47:01.734 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:01 compute-0 nova_compute[260935]: 2025-10-11 08:47:01.734 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:47:01 compute-0 nova_compute[260935]: 2025-10-11 08:47:01.734 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 291 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 12 MiB/s wr, 354 op/s
Oct 11 08:47:01 compute-0 podman[285178]: 2025-10-11 08:47:01.784710744 +0000 UTC m=+0.080926274 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 08:47:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:47:02 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3076908363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:02 compute-0 nova_compute[260935]: 2025-10-11 08:47:02.196 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3076908363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:02 compute-0 nova_compute[260935]: 2025-10-11 08:47:02.286 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:47:02 compute-0 nova_compute[260935]: 2025-10-11 08:47:02.287 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:47:02 compute-0 nova_compute[260935]: 2025-10-11 08:47:02.294 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:47:02 compute-0 nova_compute[260935]: 2025-10-11 08:47:02.295 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:47:02 compute-0 podman[285220]: 2025-10-11 08:47:02.388223715 +0000 UTC m=+0.130477921 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 11 08:47:02 compute-0 nova_compute[260935]: 2025-10-11 08:47:02.531 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:47:02 compute-0 nova_compute[260935]: 2025-10-11 08:47:02.534 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4275MB free_disk=59.89763259887695GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:47:02 compute-0 nova_compute[260935]: 2025-10-11 08:47:02.535 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:02 compute-0 nova_compute[260935]: 2025-10-11 08:47:02.535 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:02 compute-0 nova_compute[260935]: 2025-10-11 08:47:02.670 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 318c32b0-9990-4579-8abb-fc79e7460d77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:47:02 compute-0 nova_compute[260935]: 2025-10-11 08:47:02.671 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance fa95251f-1ce7-4a45-8f53-ae932716a172 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:47:02 compute-0 nova_compute[260935]: 2025-10-11 08:47:02.672 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:47:02 compute-0 nova_compute[260935]: 2025-10-11 08:47:02.672 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:47:02 compute-0 nova_compute[260935]: 2025-10-11 08:47:02.763 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:47:03 compute-0 ceph-mon[74313]: pgmap v1219: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 291 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 12 MiB/s wr, 354 op/s
Oct 11 08:47:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:47:03 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1074731268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:03 compute-0 nova_compute[260935]: 2025-10-11 08:47:03.308 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:03 compute-0 nova_compute[260935]: 2025-10-11 08:47:03.315 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:47:03 compute-0 nova_compute[260935]: 2025-10-11 08:47:03.333 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:47:03 compute-0 nova_compute[260935]: 2025-10-11 08:47:03.364 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:47:03 compute-0 nova_compute[260935]: 2025-10-11 08:47:03.364 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:03 compute-0 nova_compute[260935]: 2025-10-11 08:47:03.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:03 compute-0 nova_compute[260935]: 2025-10-11 08:47:03.722 2 DEBUG nova.compute.manager [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 292 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 9.5 MiB/s wr, 322 op/s
Oct 11 08:47:04 compute-0 nova_compute[260935]: 2025-10-11 08:47:04.075 2 INFO nova.compute.manager [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] instance snapshotting
Oct 11 08:47:04 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1074731268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:04 compute-0 nova_compute[260935]: 2025-10-11 08:47:04.502 2 INFO nova.virt.libvirt.driver [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Beginning live snapshot process
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015118690784332877 of space, bias 1.0, pg target 0.45356072352998633 quantized to 32 (current 32)
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013588698354170736 of space, bias 1.0, pg target 0.4076609506251221 quantized to 32 (current 32)
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:47:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:47:04 compute-0 nova_compute[260935]: 2025-10-11 08:47:04.707 2 DEBUG nova.virt.libvirt.imagebackend [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 08:47:05 compute-0 nova_compute[260935]: 2025-10-11 08:47:05.121 2 DEBUG nova.storage.rbd_utils [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] creating snapshot(f4964df6d3c64d3c8f0e8d150fc90269) on rbd image(318c32b0-9990-4579-8abb-fc79e7460d77_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:47:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct 11 08:47:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct 11 08:47:05 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct 11 08:47:05 compute-0 ceph-mon[74313]: pgmap v1220: 321 pgs: 321 active+clean; 292 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 9.5 MiB/s wr, 322 op/s
Oct 11 08:47:05 compute-0 nova_compute[260935]: 2025-10-11 08:47:05.345 2 DEBUG nova.storage.rbd_utils [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] cloning vms/318c32b0-9990-4579-8abb-fc79e7460d77_disk@f4964df6d3c64d3c8f0e8d150fc90269 to images/283e0fc7-42ef-4bee-8e6e-575223f25fca clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:47:05 compute-0 nova_compute[260935]: 2025-10-11 08:47:05.390 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:47:05 compute-0 nova_compute[260935]: 2025-10-11 08:47:05.390 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:47:05 compute-0 nova_compute[260935]: 2025-10-11 08:47:05.391 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:47:05 compute-0 nova_compute[260935]: 2025-10-11 08:47:05.413 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-318c32b0-9990-4579-8abb-fc79e7460d77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:47:05 compute-0 nova_compute[260935]: 2025-10-11 08:47:05.414 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-318c32b0-9990-4579-8abb-fc79e7460d77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:47:05 compute-0 nova_compute[260935]: 2025-10-11 08:47:05.414 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 08:47:05 compute-0 nova_compute[260935]: 2025-10-11 08:47:05.415 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 318c32b0-9990-4579-8abb-fc79e7460d77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:05 compute-0 nova_compute[260935]: 2025-10-11 08:47:05.480 2 DEBUG nova.storage.rbd_utils [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] flattening images/283e0fc7-42ef-4bee-8e6e-575223f25fca flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:47:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 292 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 36 KiB/s wr, 39 op/s
Oct 11 08:47:05 compute-0 nova_compute[260935]: 2025-10-11 08:47:05.820 2 DEBUG nova.storage.rbd_utils [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] removing snapshot(f4964df6d3c64d3c8f0e8d150fc90269) on rbd image(318c32b0-9990-4579-8abb-fc79e7460d77_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:47:05 compute-0 nova_compute[260935]: 2025-10-11 08:47:05.871 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:47:05 compute-0 nova_compute[260935]: 2025-10-11 08:47:05.988 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172410.9875758, ff13b723-e064-4bed-93dc-b6bf28af0857 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:47:05 compute-0 nova_compute[260935]: 2025-10-11 08:47:05.989 2 INFO nova.compute.manager [-] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] VM Stopped (Lifecycle Event)
Oct 11 08:47:06 compute-0 nova_compute[260935]: 2025-10-11 08:47:06.013 2 DEBUG nova.compute.manager [None req-fde91909-6414-4b0f-af96-bb7a1ebe4752 - - - - - -] [instance: ff13b723-e064-4bed-93dc-b6bf28af0857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct 11 08:47:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct 11 08:47:06 compute-0 ceph-mon[74313]: osdmap e146: 3 total, 3 up, 3 in
Oct 11 08:47:06 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct 11 08:47:06 compute-0 nova_compute[260935]: 2025-10-11 08:47:06.322 2 DEBUG nova.storage.rbd_utils [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] creating snapshot(snap) on rbd image(283e0fc7-42ef-4bee-8e6e-575223f25fca) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:47:06 compute-0 nova_compute[260935]: 2025-10-11 08:47:06.373 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:47:06 compute-0 nova_compute[260935]: 2025-10-11 08:47:06.395 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-318c32b0-9990-4579-8abb-fc79e7460d77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:47:06 compute-0 nova_compute[260935]: 2025-10-11 08:47:06.395 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 08:47:06 compute-0 nova_compute[260935]: 2025-10-11 08:47:06.396 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:47:06 compute-0 nova_compute[260935]: 2025-10-11 08:47:06.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct 11 08:47:07 compute-0 ceph-mon[74313]: pgmap v1222: 321 pgs: 321 active+clean; 292 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 36 KiB/s wr, 39 op/s
Oct 11 08:47:07 compute-0 ceph-mon[74313]: osdmap e147: 3 total, 3 up, 3 in
Oct 11 08:47:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct 11 08:47:07 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct 11 08:47:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 372 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 201 op/s
Oct 11 08:47:07 compute-0 sudo[285412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:47:07 compute-0 sudo[285412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:07 compute-0 sudo[285412]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:07 compute-0 sudo[285437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:47:07 compute-0 sudo[285437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:07 compute-0 sudo[285437]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:07 compute-0 sudo[285462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:47:07 compute-0 sudo[285462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:07 compute-0 sudo[285462]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:08 compute-0 sudo[285487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:47:08 compute-0 sudo[285487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:47:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct 11 08:47:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct 11 08:47:08 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct 11 08:47:08 compute-0 ceph-mon[74313]: osdmap e148: 3 total, 3 up, 3 in
Oct 11 08:47:08 compute-0 ceph-mon[74313]: osdmap e149: 3 total, 3 up, 3 in
Oct 11 08:47:08 compute-0 sshd-session[285410]: Invalid user master from 152.32.213.170 port 56312
Oct 11 08:47:08 compute-0 sshd-session[285410]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 08:47:08 compute-0 sshd-session[285410]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 08:47:08 compute-0 sudo[285487]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:08 compute-0 nova_compute[260935]: 2025-10-11 08:47:08.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:47:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:47:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:47:08 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:47:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:47:08 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:47:08 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ecd54621-2aea-4fb1-b1c6-941c3b9968cc does not exist
Oct 11 08:47:08 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 80ea7eff-bb5d-4667-a97d-9421107de325 does not exist
Oct 11 08:47:08 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev df457feb-b379-4d92-b86e-181631f66d9a does not exist
Oct 11 08:47:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:47:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:47:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:47:08 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:47:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:47:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:47:08 compute-0 sudo[285543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:47:08 compute-0 sudo[285543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:08 compute-0 sudo[285543]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:08 compute-0 sudo[285568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:47:08 compute-0 sudo[285568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:08 compute-0 sudo[285568]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:09 compute-0 sudo[285593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:47:09 compute-0 sudo[285593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:09 compute-0 sudo[285593]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:09 compute-0 sudo[285618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:47:09 compute-0 sudo[285618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:09 compute-0 ceph-mon[74313]: pgmap v1225: 321 pgs: 321 active+clean; 372 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 201 op/s
Oct 11 08:47:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:47:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:47:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:47:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:47:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:47:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:47:09 compute-0 podman[285682]: 2025-10-11 08:47:09.567195421 +0000 UTC m=+0.060134640 container create 0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 08:47:09 compute-0 nova_compute[260935]: 2025-10-11 08:47:09.595 2 INFO nova.virt.libvirt.driver [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Snapshot image upload complete
Oct 11 08:47:09 compute-0 nova_compute[260935]: 2025-10-11 08:47:09.597 2 INFO nova.compute.manager [None req-dce477b1-cb35-4d20-bef0-997e1401d841 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Took 5.52 seconds to snapshot the instance on the hypervisor.
Oct 11 08:47:09 compute-0 systemd[1]: Started libpod-conmon-0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52.scope.
Oct 11 08:47:09 compute-0 podman[285682]: 2025-10-11 08:47:09.534356422 +0000 UTC m=+0.027295651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:47:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:47:09 compute-0 podman[285682]: 2025-10-11 08:47:09.681159298 +0000 UTC m=+0.174098547 container init 0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hawking, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 08:47:09 compute-0 podman[285682]: 2025-10-11 08:47:09.690453544 +0000 UTC m=+0.183392753 container start 0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hawking, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 08:47:09 compute-0 podman[285682]: 2025-10-11 08:47:09.695884289 +0000 UTC m=+0.188823558 container attach 0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hawking, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:47:09 compute-0 stoic_hawking[285698]: 167 167
Oct 11 08:47:09 compute-0 systemd[1]: libpod-0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52.scope: Deactivated successfully.
Oct 11 08:47:09 compute-0 podman[285682]: 2025-10-11 08:47:09.703337202 +0000 UTC m=+0.196276451 container died 0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hawking, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 08:47:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f9c89dac7b28d928fe193c58664338257f1c055d69fc7bfdc635e96d320864a-merged.mount: Deactivated successfully.
Oct 11 08:47:09 compute-0 podman[285682]: 2025-10-11 08:47:09.761034102 +0000 UTC m=+0.253973321 container remove 0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct 11 08:47:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 372 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 10 MiB/s wr, 197 op/s
Oct 11 08:47:09 compute-0 systemd[1]: libpod-conmon-0bd42a1bffc76a58e54faeeb27373569af89907fc865704990e21826031edf52.scope: Deactivated successfully.
Oct 11 08:47:10 compute-0 podman[285723]: 2025-10-11 08:47:10.024273906 +0000 UTC m=+0.077190457 container create 5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:47:10 compute-0 podman[285723]: 2025-10-11 08:47:09.992501858 +0000 UTC m=+0.045418449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:47:10 compute-0 systemd[1]: Started libpod-conmon-5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22.scope.
Oct 11 08:47:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d0a2fc1d501d7b8ebcb846fd82a047f619cb044e1f5d53b5b2349482d64abe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d0a2fc1d501d7b8ebcb846fd82a047f619cb044e1f5d53b5b2349482d64abe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d0a2fc1d501d7b8ebcb846fd82a047f619cb044e1f5d53b5b2349482d64abe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d0a2fc1d501d7b8ebcb846fd82a047f619cb044e1f5d53b5b2349482d64abe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d0a2fc1d501d7b8ebcb846fd82a047f619cb044e1f5d53b5b2349482d64abe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:47:10 compute-0 podman[285723]: 2025-10-11 08:47:10.146922672 +0000 UTC m=+0.199839193 container init 5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_blackburn, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:47:10 compute-0 podman[285723]: 2025-10-11 08:47:10.162550689 +0000 UTC m=+0.215467230 container start 5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:47:10 compute-0 podman[285723]: 2025-10-11 08:47:10.167727327 +0000 UTC m=+0.220643948 container attach 5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:47:10 compute-0 sshd-session[285410]: Failed password for invalid user master from 152.32.213.170 port 56312 ssh2
Oct 11 08:47:11 compute-0 mystifying_blackburn[285739]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:47:11 compute-0 mystifying_blackburn[285739]: --> relative data size: 1.0
Oct 11 08:47:11 compute-0 mystifying_blackburn[285739]: --> All data devices are unavailable
Oct 11 08:47:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct 11 08:47:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct 11 08:47:11 compute-0 systemd[1]: libpod-5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22.scope: Deactivated successfully.
Oct 11 08:47:11 compute-0 podman[285723]: 2025-10-11 08:47:11.338491393 +0000 UTC m=+1.391407994 container died 5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_blackburn, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 08:47:11 compute-0 systemd[1]: libpod-5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22.scope: Consumed 1.120s CPU time.
Oct 11 08:47:11 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct 11 08:47:11 compute-0 ceph-mon[74313]: pgmap v1227: 321 pgs: 321 active+clean; 372 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 10 MiB/s wr, 197 op/s
Oct 11 08:47:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0d0a2fc1d501d7b8ebcb846fd82a047f619cb044e1f5d53b5b2349482d64abe-merged.mount: Deactivated successfully.
Oct 11 08:47:11 compute-0 podman[285723]: 2025-10-11 08:47:11.400771383 +0000 UTC m=+1.453687894 container remove 5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_blackburn, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:47:11 compute-0 systemd[1]: libpod-conmon-5a174fc353cfd8d49c2093bdde9cfd7a695a0e87b71deae210ee373911d5bc22.scope: Deactivated successfully.
Oct 11 08:47:11 compute-0 sudo[285618]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:11 compute-0 sudo[285777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:47:11 compute-0 nova_compute[260935]: 2025-10-11 08:47:11.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:11 compute-0 sudo[285777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:11 compute-0 sudo[285777]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:11 compute-0 nova_compute[260935]: 2025-10-11 08:47:11.664 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquiring lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:11 compute-0 nova_compute[260935]: 2025-10-11 08:47:11.664 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:11 compute-0 nova_compute[260935]: 2025-10-11 08:47:11.681 2 DEBUG nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:47:11 compute-0 sudo[285802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:47:11 compute-0 sudo[285802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:11 compute-0 sudo[285802]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 372 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 8.6 MiB/s rd, 8.5 MiB/s wr, 162 op/s
Oct 11 08:47:11 compute-0 nova_compute[260935]: 2025-10-11 08:47:11.774 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:11 compute-0 nova_compute[260935]: 2025-10-11 08:47:11.774 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:11 compute-0 sudo[285827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:47:11 compute-0 sudo[285827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:11 compute-0 sudo[285827]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:11 compute-0 nova_compute[260935]: 2025-10-11 08:47:11.790 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:47:11 compute-0 nova_compute[260935]: 2025-10-11 08:47:11.790 2 INFO nova.compute.claims [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:47:11 compute-0 sudo[285852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:47:11 compute-0 sudo[285852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:11 compute-0 nova_compute[260935]: 2025-10-11 08:47:11.948 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:12 compute-0 sshd-session[285410]: Received disconnect from 152.32.213.170 port 56312:11: Bye Bye [preauth]
Oct 11 08:47:12 compute-0 sshd-session[285410]: Disconnected from invalid user master 152.32.213.170 port 56312 [preauth]
Oct 11 08:47:12 compute-0 ceph-mon[74313]: osdmap e150: 3 total, 3 up, 3 in
Oct 11 08:47:12 compute-0 podman[285938]: 2025-10-11 08:47:12.367462766 +0000 UTC m=+0.067873261 container create 30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:47:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:47:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3736167084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.412 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:12 compute-0 systemd[1]: Started libpod-conmon-30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510.scope.
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.424 2 DEBUG nova.compute.provider_tree [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:47:12 compute-0 podman[285938]: 2025-10-11 08:47:12.34659517 +0000 UTC m=+0.047005715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.442 2 DEBUG nova.scheduler.client.report [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:47:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:47:12 compute-0 podman[285938]: 2025-10-11 08:47:12.468961498 +0000 UTC m=+0.169372053 container init 30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_allen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.470 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.471 2 DEBUG nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:47:12 compute-0 podman[285938]: 2025-10-11 08:47:12.481415814 +0000 UTC m=+0.181826279 container start 30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 11 08:47:12 compute-0 podman[285938]: 2025-10-11 08:47:12.485291084 +0000 UTC m=+0.185701579 container attach 30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_allen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 08:47:12 compute-0 condescending_allen[285956]: 167 167
Oct 11 08:47:12 compute-0 systemd[1]: libpod-30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510.scope: Deactivated successfully.
Oct 11 08:47:12 compute-0 conmon[285956]: conmon 30d2e33c126fa23094c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510.scope/container/memory.events
Oct 11 08:47:12 compute-0 podman[285938]: 2025-10-11 08:47:12.490991737 +0000 UTC m=+0.191402262 container died 30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_allen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:47:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc2676bcd6398f50201c51d5e6f9c0372337a25a32ee5aa54f5bd2f0c40ea93a-merged.mount: Deactivated successfully.
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.529 2 DEBUG nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.531 2 DEBUG nova.network.neutron [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:47:12 compute-0 podman[285938]: 2025-10-11 08:47:12.544514227 +0000 UTC m=+0.244924712 container remove 30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_allen, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.559 2 INFO nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:47:12 compute-0 systemd[1]: libpod-conmon-30d2e33c126fa23094c1a085bd7d8114f7c4c06ceb94ebf1226828664dddc510.scope: Deactivated successfully.
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.580 2 DEBUG nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.687 2 DEBUG nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.690 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.690 2 INFO nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Creating image(s)
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.722 2 DEBUG nova.storage.rbd_utils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] rbd image 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.760 2 DEBUG nova.storage.rbd_utils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] rbd image 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:12 compute-0 podman[285978]: 2025-10-11 08:47:12.772549955 +0000 UTC m=+0.069354114 container create 8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.802 2 DEBUG nova.storage.rbd_utils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] rbd image 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.808 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:12 compute-0 podman[285978]: 2025-10-11 08:47:12.743971478 +0000 UTC m=+0.040775677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:47:12 compute-0 systemd[1]: Started libpod-conmon-8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da.scope.
Oct 11 08:47:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5676383cfd25b3e2a2bf4cb044e23b0057373d7334f73d9d6ea24cd52e393e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5676383cfd25b3e2a2bf4cb044e23b0057373d7334f73d9d6ea24cd52e393e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5676383cfd25b3e2a2bf4cb044e23b0057373d7334f73d9d6ea24cd52e393e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5676383cfd25b3e2a2bf4cb044e23b0057373d7334f73d9d6ea24cd52e393e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.902 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.903 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.904 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.905 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:12 compute-0 podman[285978]: 2025-10-11 08:47:12.91653165 +0000 UTC m=+0.213335819 container init 8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:47:12 compute-0 podman[285978]: 2025-10-11 08:47:12.929337866 +0000 UTC m=+0.226141985 container start 8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:47:12 compute-0 podman[285978]: 2025-10-11 08:47:12.933930528 +0000 UTC m=+0.230734687 container attach 8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.938 2 DEBUG nova.storage.rbd_utils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] rbd image 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:12 compute-0 nova_compute[260935]: 2025-10-11 08:47:12.944 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.235 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.308 2 DEBUG nova.storage.rbd_utils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] resizing rbd image 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:47:13 compute-0 ceph-mon[74313]: pgmap v1229: 321 pgs: 321 active+clean; 372 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 8.6 MiB/s rd, 8.5 MiB/s wr, 162 op/s
Oct 11 08:47:13 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3736167084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.410 2 DEBUG nova.objects.instance [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lazy-loading 'migration_context' on Instance uuid 682cad4b-4bab-4a36-9bd9-11e9de7b213a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.427 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.427 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Ensure instance console log exists: /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.427 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.428 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.428 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]: {
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:     "0": [
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:         {
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "devices": [
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "/dev/loop3"
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             ],
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "lv_name": "ceph_lv0",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "lv_size": "21470642176",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "name": "ceph_lv0",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "tags": {
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.cluster_name": "ceph",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.crush_device_class": "",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.encrypted": "0",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.osd_id": "0",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.type": "block",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.vdo": "0"
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             },
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "type": "block",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "vg_name": "ceph_vg0"
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:         }
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:     ],
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:     "1": [
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:         {
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "devices": [
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "/dev/loop4"
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             ],
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "lv_name": "ceph_lv1",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "lv_size": "21470642176",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "name": "ceph_lv1",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "tags": {
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.cluster_name": "ceph",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.crush_device_class": "",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.encrypted": "0",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.osd_id": "1",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.type": "block",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.vdo": "0"
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             },
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "type": "block",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "vg_name": "ceph_vg1"
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:         }
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:     ],
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:     "2": [
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:         {
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "devices": [
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "/dev/loop5"
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             ],
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "lv_name": "ceph_lv2",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "lv_size": "21470642176",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "name": "ceph_lv2",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "tags": {
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.cluster_name": "ceph",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.crush_device_class": "",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.encrypted": "0",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.osd_id": "2",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.type": "block",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:                 "ceph.vdo": "0"
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             },
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "type": "block",
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:             "vg_name": "ceph_vg2"
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:         }
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]:     ]
Oct 11 08:47:13 compute-0 intelligent_ganguly[286049]: }
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:13 compute-0 systemd[1]: libpod-8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da.scope: Deactivated successfully.
Oct 11 08:47:13 compute-0 podman[285978]: 2025-10-11 08:47:13.761693859 +0000 UTC m=+1.058498018 container died 8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:47:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 372 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 114 op/s
Oct 11 08:47:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5676383cfd25b3e2a2bf4cb044e23b0057373d7334f73d9d6ea24cd52e393e9-merged.mount: Deactivated successfully.
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.816 2 DEBUG nova.network.neutron [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.816 2 DEBUG nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.819 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.827 2 WARNING nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.843 2 DEBUG nova.virt.libvirt.host [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.844 2 DEBUG nova.virt.libvirt.host [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.849 2 DEBUG nova.virt.libvirt.host [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.850 2 DEBUG nova.virt.libvirt.host [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.851 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.851 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.852 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.852 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.852 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.853 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.853 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:47:13 compute-0 podman[285978]: 2025-10-11 08:47:13.853324489 +0000 UTC m=+1.150128648 container remove 8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.853 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.854 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.854 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.854 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.855 2 DEBUG nova.virt.hardware [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:47:13 compute-0 nova_compute[260935]: 2025-10-11 08:47:13.860 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:13 compute-0 systemd[1]: libpod-conmon-8f8c9355fd14f31cb75fc8d6ded080aff770a5e8bf2ddfa786330c3acacce3da.scope: Deactivated successfully.
Oct 11 08:47:13 compute-0 sudo[285852]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:13 compute-0 sudo[286183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:47:13 compute-0 sudo[286183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:13 compute-0 sudo[286183]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:14 compute-0 sudo[286208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:47:14 compute-0 sudo[286208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:14 compute-0 sudo[286208]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:14 compute-0 nova_compute[260935]: 2025-10-11 08:47:14.176 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:14 compute-0 nova_compute[260935]: 2025-10-11 08:47:14.177 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:14 compute-0 sudo[286252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:47:14 compute-0 sudo[286252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:14 compute-0 sudo[286252]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:14 compute-0 nova_compute[260935]: 2025-10-11 08:47:14.198 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:47:14 compute-0 nova_compute[260935]: 2025-10-11 08:47:14.274 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:14 compute-0 nova_compute[260935]: 2025-10-11 08:47:14.274 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:14 compute-0 sudo[286277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:47:14 compute-0 nova_compute[260935]: 2025-10-11 08:47:14.288 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:47:14 compute-0 nova_compute[260935]: 2025-10-11 08:47:14.288 2 INFO nova.compute.claims [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:47:14 compute-0 sudo[286277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:47:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2051348334' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:47:14 compute-0 nova_compute[260935]: 2025-10-11 08:47:14.358 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:14 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2051348334' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:47:14 compute-0 nova_compute[260935]: 2025-10-11 08:47:14.396 2 DEBUG nova.storage.rbd_utils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] rbd image 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:14 compute-0 nova_compute[260935]: 2025-10-11 08:47:14.402 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:14 compute-0 nova_compute[260935]: 2025-10-11 08:47:14.727 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:14 compute-0 podman[286384]: 2025-10-11 08:47:14.764795373 +0000 UTC m=+0.062042305 container create 1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:47:14 compute-0 systemd[1]: Started libpod-conmon-1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814.scope.
Oct 11 08:47:14 compute-0 podman[286384]: 2025-10-11 08:47:14.740743335 +0000 UTC m=+0.037990297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:47:14 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:47:14 compute-0 podman[286384]: 2025-10-11 08:47:14.856633648 +0000 UTC m=+0.153880620 container init 1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 08:47:14 compute-0 podman[286384]: 2025-10-11 08:47:14.864697659 +0000 UTC m=+0.161944591 container start 1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:47:14 compute-0 podman[286384]: 2025-10-11 08:47:14.869022632 +0000 UTC m=+0.166269654 container attach 1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 08:47:14 compute-0 serene_wu[286400]: 167 167
Oct 11 08:47:14 compute-0 systemd[1]: libpod-1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814.scope: Deactivated successfully.
Oct 11 08:47:14 compute-0 conmon[286400]: conmon 1caec4f6414b20d4069b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814.scope/container/memory.events
Oct 11 08:47:14 compute-0 podman[286384]: 2025-10-11 08:47:14.872743509 +0000 UTC m=+0.169990451 container died 1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 08:47:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:47:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1600486936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:47:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd7e13e7549a36dbbf72b6432dcdc0110daa6ab9cd1a34d7e9de5daa715e2a86-merged.mount: Deactivated successfully.
Oct 11 08:47:14 compute-0 nova_compute[260935]: 2025-10-11 08:47:14.907 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:14 compute-0 nova_compute[260935]: 2025-10-11 08:47:14.909 2 DEBUG nova.objects.instance [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 682cad4b-4bab-4a36-9bd9-11e9de7b213a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:14 compute-0 podman[286384]: 2025-10-11 08:47:14.925692922 +0000 UTC m=+0.222939894 container remove 1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:47:14 compute-0 nova_compute[260935]: 2025-10-11 08:47:14.926 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:47:14 compute-0 nova_compute[260935]:   <uuid>682cad4b-4bab-4a36-9bd9-11e9de7b213a</uuid>
Oct 11 08:47:14 compute-0 nova_compute[260935]:   <name>instance-00000010</name>
Oct 11 08:47:14 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:47:14 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:47:14 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerDiagnosticsNegativeTest-server-1673338711</nova:name>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:47:13</nova:creationTime>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:47:14 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:47:14 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:47:14 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:47:14 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:47:14 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:47:14 compute-0 nova_compute[260935]:         <nova:user uuid="a00ec4c98ad14bd6ab5a4e1f11e60389">tempest-ServerDiagnosticsNegativeTest-745659558-project-member</nova:user>
Oct 11 08:47:14 compute-0 nova_compute[260935]:         <nova:project uuid="7966b3923d1340dca2b22eab0ca26cb3">tempest-ServerDiagnosticsNegativeTest-745659558</nova:project>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:47:14 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:47:14 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <system>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <entry name="serial">682cad4b-4bab-4a36-9bd9-11e9de7b213a</entry>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <entry name="uuid">682cad4b-4bab-4a36-9bd9-11e9de7b213a</entry>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     </system>
Oct 11 08:47:14 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:47:14 compute-0 nova_compute[260935]:   <os>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:   </os>
Oct 11 08:47:14 compute-0 nova_compute[260935]:   <features>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:   </features>
Oct 11 08:47:14 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:47:14 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:47:14 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk">
Oct 11 08:47:14 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       </source>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:47:14 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk.config">
Oct 11 08:47:14 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       </source>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:47:14 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a/console.log" append="off"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <video>
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     </video>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:47:14 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:47:14 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:47:14 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:47:14 compute-0 nova_compute[260935]: </domain>
Oct 11 08:47:14 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:47:14 compute-0 systemd[1]: libpod-conmon-1caec4f6414b20d4069bf5a3b2cdc484d7f0e43a61de6754d33bc398037f9814.scope: Deactivated successfully.
Oct 11 08:47:14 compute-0 nova_compute[260935]: 2025-10-11 08:47:14.982 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:47:14 compute-0 nova_compute[260935]: 2025-10-11 08:47:14.982 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:47:14 compute-0 nova_compute[260935]: 2025-10-11 08:47:14.982 2 INFO nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Using config drive
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.007 2 DEBUG nova.storage.rbd_utils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] rbd image 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:15.179 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:15.180 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:15.180 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:15 compute-0 podman[286464]: 2025-10-11 08:47:15.191219102 +0000 UTC m=+0.076757345 container create 5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_einstein, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 08:47:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:47:15 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1334021597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:15 compute-0 podman[286464]: 2025-10-11 08:47:15.157107937 +0000 UTC m=+0.042646270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.249 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:15 compute-0 systemd[1]: Started libpod-conmon-5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b.scope.
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.258 2 DEBUG nova.compute.provider_tree [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.274 2 DEBUG nova.scheduler.client.report [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:47:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840f400aaec6c26044543d45a28a1aaceec792ce1db115e3022a4083ce83b19b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840f400aaec6c26044543d45a28a1aaceec792ce1db115e3022a4083ce83b19b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840f400aaec6c26044543d45a28a1aaceec792ce1db115e3022a4083ce83b19b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840f400aaec6c26044543d45a28a1aaceec792ce1db115e3022a4083ce83b19b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.299 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.301 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:47:15 compute-0 podman[286464]: 2025-10-11 08:47:15.309239266 +0000 UTC m=+0.194777609 container init 5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:47:15 compute-0 podman[286464]: 2025-10-11 08:47:15.319588172 +0000 UTC m=+0.205126425 container start 5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_einstein, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 11 08:47:15 compute-0 podman[286464]: 2025-10-11 08:47:15.323585916 +0000 UTC m=+0.209124269 container attach 5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.347 2 INFO nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Creating config drive at /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a/disk.config
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.356 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcl4e47jb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct 11 08:47:15 compute-0 ceph-mon[74313]: pgmap v1230: 321 pgs: 321 active+clean; 372 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 114 op/s
Oct 11 08:47:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1600486936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:47:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1334021597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct 11 08:47:15 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.407 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.408 2 DEBUG nova.network.neutron [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.495 2 INFO nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.510 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcl4e47jb" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.555 2 DEBUG nova.storage.rbd_utils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] rbd image 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.561 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a/disk.config 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.602 2 DEBUG nova.policy [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.651 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.752 2 DEBUG oslo_concurrency.processutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a/disk.config 682cad4b-4bab-4a36-9bd9-11e9de7b213a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:15 compute-0 nova_compute[260935]: 2025-10-11 08:47:15.754 2 INFO nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Deleting local config drive /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a/disk.config because it was imported into RBD.
Oct 11 08:47:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 372 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.2 KiB/s wr, 41 op/s
Oct 11 08:47:15 compute-0 systemd-machined[215705]: New machine qemu-16-instance-00000010.
Oct 11 08:47:15 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.218 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.221 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.221 2 INFO nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Creating image(s)
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.253 2 DEBUG nova.storage.rbd_utils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.285 2 DEBUG nova.storage.rbd_utils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.320 2 DEBUG nova.storage.rbd_utils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.324 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.388 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.390 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.391 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.392 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:16 compute-0 ceph-mon[74313]: osdmap e151: 3 total, 3 up, 3 in
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.423 2 DEBUG nova.storage.rbd_utils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.427 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]: {
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:         "osd_id": 2,
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:         "type": "bluestore"
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:     },
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:         "osd_id": 0,
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:         "type": "bluestore"
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:     },
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:         "osd_id": 1,
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:         "type": "bluestore"
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]:     }
Oct 11 08:47:16 compute-0 eloquent_einstein[286483]: }
Oct 11 08:47:16 compute-0 systemd[1]: libpod-5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b.scope: Deactivated successfully.
Oct 11 08:47:16 compute-0 systemd[1]: libpod-5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b.scope: Consumed 1.121s CPU time.
Oct 11 08:47:16 compute-0 podman[286464]: 2025-10-11 08:47:16.496623686 +0000 UTC m=+1.382161939 container died 5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_einstein, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 08:47:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-840f400aaec6c26044543d45a28a1aaceec792ce1db115e3022a4083ce83b19b-merged.mount: Deactivated successfully.
Oct 11 08:47:16 compute-0 podman[286464]: 2025-10-11 08:47:16.558241637 +0000 UTC m=+1.443779880 container remove 5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:47:16 compute-0 systemd[1]: libpod-conmon-5e5f67df638d646d697ee7279a4dc6e741bc22bbbb9e6ea9ca9384e4a5e7ab1b.scope: Deactivated successfully.
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:16 compute-0 sudo[286277]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:47:16 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:47:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:47:16 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:47:16 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 07d7a294-12ba-48a1-94cf-a509c6508ecc does not exist
Oct 11 08:47:16 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 167103fd-fde4-4b34-a96d-563930e826b4 does not exist
Oct 11 08:47:16 compute-0 sudo[286719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:47:16 compute-0 sudo[286719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:16 compute-0 sudo[286719]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.748 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:16 compute-0 sudo[286744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:47:16 compute-0 sudo[286744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:47:16 compute-0 sudo[286744]: pam_unix(sudo:session): session closed for user root
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.779 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172436.7539876, 682cad4b-4bab-4a36-9bd9-11e9de7b213a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.779 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] VM Resumed (Lifecycle Event)
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.783 2 DEBUG nova.network.neutron [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Successfully created port: 713e6030-0d3f-41ae-9f66-c4591e2498e4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.787 2 DEBUG nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.787 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.819 2 DEBUG nova.storage.rbd_utils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] resizing rbd image 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.846 2 INFO nova.virt.libvirt.driver [-] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Instance spawned successfully.
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.847 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.917 2 DEBUG nova.objects.instance [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'migration_context' on Instance uuid 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.941 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.945 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.956 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.956 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Ensure instance console log exists: /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.957 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.957 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.958 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.960 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.961 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.961 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.962 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.962 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:16 compute-0 nova_compute[260935]: 2025-10-11 08:47:16.963 2 DEBUG nova.virt.libvirt.driver [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:17 compute-0 nova_compute[260935]: 2025-10-11 08:47:17.011 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:47:17 compute-0 nova_compute[260935]: 2025-10-11 08:47:17.012 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172436.7540917, 682cad4b-4bab-4a36-9bd9-11e9de7b213a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:47:17 compute-0 nova_compute[260935]: 2025-10-11 08:47:17.012 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] VM Started (Lifecycle Event)
Oct 11 08:47:17 compute-0 nova_compute[260935]: 2025-10-11 08:47:17.217 2 INFO nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Took 4.53 seconds to spawn the instance on the hypervisor.
Oct 11 08:47:17 compute-0 nova_compute[260935]: 2025-10-11 08:47:17.217 2 DEBUG nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:17 compute-0 nova_compute[260935]: 2025-10-11 08:47:17.230 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:17 compute-0 nova_compute[260935]: 2025-10-11 08:47:17.234 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:47:17 compute-0 nova_compute[260935]: 2025-10-11 08:47:17.342 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:47:17 compute-0 ceph-mon[74313]: pgmap v1232: 321 pgs: 321 active+clean; 372 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.2 KiB/s wr, 41 op/s
Oct 11 08:47:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:47:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:47:17 compute-0 nova_compute[260935]: 2025-10-11 08:47:17.449 2 INFO nova.compute.manager [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Took 5.70 seconds to build instance.
Oct 11 08:47:17 compute-0 nova_compute[260935]: 2025-10-11 08:47:17.645 2 DEBUG oslo_concurrency.lockutils [None req-4634afe8-4bd0-4c49-a14b-40141b5d196e a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 464 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 800 KiB/s rd, 5.3 MiB/s wr, 188 op/s
Oct 11 08:47:17 compute-0 nova_compute[260935]: 2025-10-11 08:47:17.958 2 DEBUG nova.network.neutron [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Successfully updated port: 713e6030-0d3f-41ae-9f66-c4591e2498e4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:47:17 compute-0 nova_compute[260935]: 2025-10-11 08:47:17.989 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:47:17 compute-0 nova_compute[260935]: 2025-10-11 08:47:17.990 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:47:17 compute-0 nova_compute[260935]: 2025-10-11 08:47:17.990 2 DEBUG nova.network.neutron [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:47:18 compute-0 nova_compute[260935]: 2025-10-11 08:47:18.061 2 DEBUG nova.compute.manager [req-41a19694-68f6-43b2-b0f8-58f497a4600c req-5d3708b0-3519-458b-99e1-90d8a9733030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-changed-713e6030-0d3f-41ae-9f66-c4591e2498e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:47:18 compute-0 nova_compute[260935]: 2025-10-11 08:47:18.061 2 DEBUG nova.compute.manager [req-41a19694-68f6-43b2-b0f8-58f497a4600c req-5d3708b0-3519-458b-99e1-90d8a9733030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Refreshing instance network info cache due to event network-changed-713e6030-0d3f-41ae-9f66-c4591e2498e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:47:18 compute-0 nova_compute[260935]: 2025-10-11 08:47:18.061 2 DEBUG oslo_concurrency.lockutils [req-41a19694-68f6-43b2-b0f8-58f497a4600c req-5d3708b0-3519-458b-99e1-90d8a9733030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:47:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:47:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct 11 08:47:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct 11 08:47:18 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct 11 08:47:18 compute-0 nova_compute[260935]: 2025-10-11 08:47:18.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:18 compute-0 nova_compute[260935]: 2025-10-11 08:47:18.847 2 DEBUG nova.network.neutron [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:47:19 compute-0 ceph-mon[74313]: pgmap v1233: 321 pgs: 321 active+clean; 464 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 800 KiB/s rd, 5.3 MiB/s wr, 188 op/s
Oct 11 08:47:19 compute-0 ceph-mon[74313]: osdmap e152: 3 total, 3 up, 3 in
Oct 11 08:47:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 464 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 800 KiB/s rd, 5.3 MiB/s wr, 188 op/s
Oct 11 08:47:19 compute-0 podman[286841]: 2025-10-11 08:47:19.818277674 +0000 UTC m=+0.112221617 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.002 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquiring lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.002 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.003 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquiring lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.003 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.004 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.005 2 INFO nova.compute.manager [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Terminating instance
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.007 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquiring lock "refresh_cache-682cad4b-4bab-4a36-9bd9-11e9de7b213a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.007 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquired lock "refresh_cache-682cad4b-4bab-4a36-9bd9-11e9de7b213a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.008 2 DEBUG nova.network.neutron [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:47:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct 11 08:47:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct 11 08:47:20 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.323 2 DEBUG nova.network.neutron [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.659 2 DEBUG nova.network.neutron [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.683 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Releasing lock "refresh_cache-682cad4b-4bab-4a36-9bd9-11e9de7b213a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.684 2 DEBUG nova.compute.manager [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.691 2 DEBUG nova.network.neutron [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updating instance_info_cache with network_info: [{"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.713 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.714 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Instance network_info: |[{"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.714 2 DEBUG oslo_concurrency.lockutils [req-41a19694-68f6-43b2-b0f8-58f497a4600c req-5d3708b0-3519-458b-99e1-90d8a9733030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.715 2 DEBUG nova.network.neutron [req-41a19694-68f6-43b2-b0f8-58f497a4600c req-5d3708b0-3519-458b-99e1-90d8a9733030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Refreshing network info cache for port 713e6030-0d3f-41ae-9f66-c4591e2498e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.720 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Start _get_guest_xml network_info=[{"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.725 2 WARNING nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.733 2 DEBUG nova.virt.libvirt.host [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.734 2 DEBUG nova.virt.libvirt.host [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.745 2 DEBUG nova.virt.libvirt.host [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.746 2 DEBUG nova.virt.libvirt.host [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.747 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.747 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.748 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.749 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.749 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.750 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.750 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.751 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.751 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.752 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.752 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.753 2 DEBUG nova.virt.hardware [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.758 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:20 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct 11 08:47:20 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 4.815s CPU time.
Oct 11 08:47:20 compute-0 systemd-machined[215705]: Machine qemu-16-instance-00000010 terminated.
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.913 2 INFO nova.virt.libvirt.driver [-] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Instance destroyed successfully.
Oct 11 08:47:20 compute-0 nova_compute[260935]: 2025-10-11 08:47:20.914 2 DEBUG nova.objects.instance [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lazy-loading 'resources' on Instance uuid 682cad4b-4bab-4a36-9bd9-11e9de7b213a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct 11 08:47:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct 11 08:47:21 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct 11 08:47:21 compute-0 ceph-mon[74313]: pgmap v1235: 321 pgs: 321 active+clean; 464 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 800 KiB/s rd, 5.3 MiB/s wr, 188 op/s
Oct 11 08:47:21 compute-0 ceph-mon[74313]: osdmap e153: 3 total, 3 up, 3 in
Oct 11 08:47:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:47:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3397164084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.274 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.305 2 DEBUG nova.storage.rbd_utils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.310 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.430 2 INFO nova.virt.libvirt.driver [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Deleting instance files /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a_del
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.432 2 INFO nova.virt.libvirt.driver [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Deletion of /var/lib/nova/instances/682cad4b-4bab-4a36-9bd9-11e9de7b213a_del complete
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.516 2 INFO nova.compute.manager [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.517 2 DEBUG oslo.service.loopingcall [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.517 2 DEBUG nova.compute.manager [-] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.518 2 DEBUG nova.network.neutron [-] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.684 2 DEBUG nova.network.neutron [-] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.715 2 DEBUG nova.network.neutron [-] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.745 2 INFO nova.compute.manager [-] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Took 0.23 seconds to deallocate network for instance.
Oct 11 08:47:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:47:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3178421801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.764 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.766 2 DEBUG nova.virt.libvirt.vif [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:47:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1177835038',display_name='tempest-AttachInterfacesTestJSON-server-1177835038',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1177835038',id=17,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9Il64pQKTCYRuLz2OOsP19v1NZUxnzt1d6CpbMNqNcVSmJsI444B5YIDg/3s4g87KTn1UkUCttTxW17bkkPDQnOj/OhzrtE3rJwHzR/sgT5/vucTFG0ijrEL7r/7PtFg==',key_name='tempest-keypair-130237923',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-a649ez8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:47:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=5b2193b9-46b9-44a8-9d1c-3c6a642115b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.766 2 DEBUG nova.network.os_vif_util [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.767 2 DEBUG nova.network.os_vif_util [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:9e:4e,bridge_name='br-int',has_traffic_filtering=True,id=713e6030-0d3f-41ae-9f66-c4591e2498e4,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap713e6030-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.769 2 DEBUG nova.objects.instance [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_devices' on Instance uuid 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 464 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 7.1 MiB/s wr, 199 op/s
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.791 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:47:21 compute-0 nova_compute[260935]:   <uuid>5b2193b9-46b9-44a8-9d1c-3c6a642115b6</uuid>
Oct 11 08:47:21 compute-0 nova_compute[260935]:   <name>instance-00000011</name>
Oct 11 08:47:21 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:47:21 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:47:21 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1177835038</nova:name>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:47:20</nova:creationTime>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:47:21 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:47:21 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:47:21 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:47:21 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:47:21 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:47:21 compute-0 nova_compute[260935]:         <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:47:21 compute-0 nova_compute[260935]:         <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:47:21 compute-0 nova_compute[260935]:         <nova:port uuid="713e6030-0d3f-41ae-9f66-c4591e2498e4">
Oct 11 08:47:21 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:47:21 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:47:21 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <system>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <entry name="serial">5b2193b9-46b9-44a8-9d1c-3c6a642115b6</entry>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <entry name="uuid">5b2193b9-46b9-44a8-9d1c-3c6a642115b6</entry>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     </system>
Oct 11 08:47:21 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:47:21 compute-0 nova_compute[260935]:   <os>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:   </os>
Oct 11 08:47:21 compute-0 nova_compute[260935]:   <features>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:   </features>
Oct 11 08:47:21 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:47:21 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:47:21 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk">
Oct 11 08:47:21 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       </source>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:47:21 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk.config">
Oct 11 08:47:21 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       </source>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:47:21 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:6b:9e:4e"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <target dev="tap713e6030-0d"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/console.log" append="off"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <video>
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     </video>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:47:21 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:47:21 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:47:21 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:47:21 compute-0 nova_compute[260935]: </domain>
Oct 11 08:47:21 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.792 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Preparing to wait for external event network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.792 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.793 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.793 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.794 2 DEBUG nova.virt.libvirt.vif [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:47:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1177835038',display_name='tempest-AttachInterfacesTestJSON-server-1177835038',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1177835038',id=17,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9Il64pQKTCYRuLz2OOsP19v1NZUxnzt1d6CpbMNqNcVSmJsI444B5YIDg/3s4g87KTn1UkUCttTxW17bkkPDQnOj/OhzrtE3rJwHzR/sgT5/vucTFG0ijrEL7r/7PtFg==',key_name='tempest-keypair-130237923',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-a649ez8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:47:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=5b2193b9-46b9-44a8-9d1c-3c6a642115b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.794 2 DEBUG nova.network.os_vif_util [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.795 2 DEBUG nova.network.os_vif_util [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:9e:4e,bridge_name='br-int',has_traffic_filtering=True,id=713e6030-0d3f-41ae-9f66-c4591e2498e4,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap713e6030-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.796 2 DEBUG os_vif [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:9e:4e,bridge_name='br-int',has_traffic_filtering=True,id=713e6030-0d3f-41ae-9f66-c4591e2498e4,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap713e6030-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.798 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.799 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.800 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.800 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.817 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap713e6030-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.818 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap713e6030-0d, col_values=(('external_ids', {'iface-id': '713e6030-0d3f-41ae-9f66-c4591e2498e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:9e:4e', 'vm-uuid': '5b2193b9-46b9-44a8-9d1c-3c6a642115b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:47:21 compute-0 NetworkManager[44960]: <info>  [1760172441.8231] manager: (tap713e6030-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.834 2 INFO os_vif [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:9e:4e,bridge_name='br-int',has_traffic_filtering=True,id=713e6030-0d3f-41ae-9f66-c4591e2498e4,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap713e6030-0d')
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.910 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.911 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.912 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:6b:9e:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.912 2 INFO nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Using config drive
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.945 2 DEBUG nova.storage.rbd_utils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.961 2 DEBUG nova.network.neutron [req-41a19694-68f6-43b2-b0f8-58f497a4600c req-5d3708b0-3519-458b-99e1-90d8a9733030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updated VIF entry in instance network info cache for port 713e6030-0d3f-41ae-9f66-c4591e2498e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.962 2 DEBUG nova.network.neutron [req-41a19694-68f6-43b2-b0f8-58f497a4600c req-5d3708b0-3519-458b-99e1-90d8a9733030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updating instance_info_cache with network_info: [{"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:47:21 compute-0 nova_compute[260935]: 2025-10-11 08:47:21.969 2 DEBUG oslo_concurrency.processutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.007 2 DEBUG oslo_concurrency.lockutils [req-41a19694-68f6-43b2-b0f8-58f497a4600c req-5d3708b0-3519-458b-99e1-90d8a9733030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:47:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct 11 08:47:22 compute-0 ceph-mon[74313]: osdmap e154: 3 total, 3 up, 3 in
Oct 11 08:47:22 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3397164084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:47:22 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3178421801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:47:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct 11 08:47:22 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct 11 08:47:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:47:22 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3256956480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.492 2 DEBUG oslo_concurrency.processutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.499 2 DEBUG nova.compute.provider_tree [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.513 2 DEBUG nova.scheduler.client.report [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.531 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.564 2 INFO nova.scheduler.client.report [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Deleted allocations for instance 682cad4b-4bab-4a36-9bd9-11e9de7b213a
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.600 2 INFO nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Creating config drive at /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/disk.config
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.605 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3tslbnzu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.638 2 DEBUG oslo_concurrency.lockutils [None req-9b69a69e-c7dc-483b-8032-001086ce1dbd a00ec4c98ad14bd6ab5a4e1f11e60389 7966b3923d1340dca2b22eab0ca26cb3 - - default default] Lock "682cad4b-4bab-4a36-9bd9-11e9de7b213a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.751 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3tslbnzu" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.792 2 DEBUG nova.storage.rbd_utils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.797 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/disk.config 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.854 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "fa95251f-1ce7-4a45-8f53-ae932716a172" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.854 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "fa95251f-1ce7-4a45-8f53-ae932716a172" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.855 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "fa95251f-1ce7-4a45-8f53-ae932716a172-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.855 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "fa95251f-1ce7-4a45-8f53-ae932716a172-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.856 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "fa95251f-1ce7-4a45-8f53-ae932716a172-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.858 2 INFO nova.compute.manager [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Terminating instance
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.859 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "refresh_cache-fa95251f-1ce7-4a45-8f53-ae932716a172" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.860 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquired lock "refresh_cache-fa95251f-1ce7-4a45-8f53-ae932716a172" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.860 2 DEBUG nova.network.neutron [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.989 2 DEBUG oslo_concurrency.processutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/disk.config 5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:22 compute-0 nova_compute[260935]: 2025-10-11 08:47:22.990 2 INFO nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Deleting local config drive /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/disk.config because it was imported into RBD.
Oct 11 08:47:23 compute-0 NetworkManager[44960]: <info>  [1760172443.0568] manager: (tap713e6030-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Oct 11 08:47:23 compute-0 kernel: tap713e6030-0d: entered promiscuous mode
Oct 11 08:47:23 compute-0 systemd-udevd[286860]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:23 compute-0 ovn_controller[152945]: 2025-10-11T08:47:23Z|00044|binding|INFO|Claiming lport 713e6030-0d3f-41ae-9f66-c4591e2498e4 for this chassis.
Oct 11 08:47:23 compute-0 ovn_controller[152945]: 2025-10-11T08:47:23Z|00045|binding|INFO|713e6030-0d3f-41ae-9f66-c4591e2498e4: Claiming fa:16:3e:6b:9e:4e 10.100.0.10
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:23 compute-0 NetworkManager[44960]: <info>  [1760172443.0807] device (tap713e6030-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:47:23 compute-0 NetworkManager[44960]: <info>  [1760172443.0819] device (tap713e6030-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.087 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:9e:4e 10.100.0.10'], port_security=['fa:16:3e:6b:9e:4e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5b2193b9-46b9-44a8-9d1c-3c6a642115b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45992a1d-2e72-47ad-a788-d3f5230a5526', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=713e6030-0d3f-41ae-9f66-c4591e2498e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.089 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 713e6030-0d3f-41ae-9f66-c4591e2498e4 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.091 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:47:23 compute-0 systemd-machined[215705]: New machine qemu-17-instance-00000011.
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.111 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d54c97aa-29e8-4596-b037-a748134df35d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.112 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfff13396-b1 in ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.116 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfff13396-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.116 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[706d03af-a7db-4432-adac-8be1de30906e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.118 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e2999396-ea7b-487a-9852-47be262e2e6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:23 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.147 2 DEBUG nova.network.neutron [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.147 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[7a3e21d1-f0f2-49dd-a42c-77c139d37a95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:47:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct 11 08:47:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct 11 08:47:23 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.187 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[82483b1d-5fc0-42da-b33d-0c55d6f88e21]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:23 compute-0 ovn_controller[152945]: 2025-10-11T08:47:23Z|00046|binding|INFO|Setting lport 713e6030-0d3f-41ae-9f66-c4591e2498e4 ovn-installed in OVS
Oct 11 08:47:23 compute-0 ovn_controller[152945]: 2025-10-11T08:47:23Z|00047|binding|INFO|Setting lport 713e6030-0d3f-41ae-9f66-c4591e2498e4 up in Southbound
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:23 compute-0 ceph-mon[74313]: pgmap v1238: 321 pgs: 321 active+clean; 464 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 7.1 MiB/s wr, 199 op/s
Oct 11 08:47:23 compute-0 ceph-mon[74313]: osdmap e155: 3 total, 3 up, 3 in
Oct 11 08:47:23 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3256956480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:23 compute-0 ceph-mon[74313]: osdmap e156: 3 total, 3 up, 3 in
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.229 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f67a42bf-6d3c-4a3c-a9c7-b36bb076ad69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:23 compute-0 NetworkManager[44960]: <info>  [1760172443.2388] manager: (tapfff13396-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/38)
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.241 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbe329b-00c0-42be-ad5a-e005f6cc668d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.288 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9050206a-94dc-4629-9063-8f8a294109d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.292 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc88f76-9781-483c-b7d5-ab2449ec661d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:23 compute-0 NetworkManager[44960]: <info>  [1760172443.3236] device (tapfff13396-b0): carrier: link connected
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.331 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d499b188-6fc9-48da-baf5-8f0374620065]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.360 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a43b3faf-462f-4cbd-9b3b-64b1e3d9aceb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428802, 'reachable_time': 36432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287072, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.382 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a38839a5-fcca-4da7-9a6c-59777a85bd40]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:a42d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428802, 'tstamp': 428802}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287073, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.411 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[caf49178-515e-45cd-90ac-8d062988135c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428802, 'reachable_time': 36432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287074, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.468 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7d990724-c304-4b81-a9b5-802f5a7cd812]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.568 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dcaecf90-8bd8-4066-bf81-b344d54a6afe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.570 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.570 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.571 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:23 compute-0 NetworkManager[44960]: <info>  [1760172443.5746] manager: (tapfff13396-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Oct 11 08:47:23 compute-0 kernel: tapfff13396-b0: entered promiscuous mode
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.581 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:23 compute-0 ovn_controller[152945]: 2025-10-11T08:47:23Z|00048|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.619 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.621 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fae8b4de-bbe8-424c-9ddf-b3def6dc2d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.622 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:47:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:23.623 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'env', 'PROCESS_TAG=haproxy-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fff13396-b787-4c6e-9112-a1c2ef57b26d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.729 2 DEBUG nova.network.neutron [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.752 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Releasing lock "refresh_cache-fa95251f-1ce7-4a45-8f53-ae932716a172" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.752 2 DEBUG nova.compute.manager [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:47:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 275 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 13 KiB/s wr, 420 op/s
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.791 2 DEBUG nova.compute.manager [req-536ae651-2067-4b0c-bfe7-fe4b76f6b9a0 req-2455056c-298a-4d9e-917c-ba88756c5c81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.794 2 DEBUG oslo_concurrency.lockutils [req-536ae651-2067-4b0c-bfe7-fe4b76f6b9a0 req-2455056c-298a-4d9e-917c-ba88756c5c81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.795 2 DEBUG oslo_concurrency.lockutils [req-536ae651-2067-4b0c-bfe7-fe4b76f6b9a0 req-2455056c-298a-4d9e-917c-ba88756c5c81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.795 2 DEBUG oslo_concurrency.lockutils [req-536ae651-2067-4b0c-bfe7-fe4b76f6b9a0 req-2455056c-298a-4d9e-917c-ba88756c5c81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.795 2 DEBUG nova.compute.manager [req-536ae651-2067-4b0c-bfe7-fe4b76f6b9a0 req-2455056c-298a-4d9e-917c-ba88756c5c81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Processing event network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:47:23 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Oct 11 08:47:23 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 13.727s CPU time.
Oct 11 08:47:23 compute-0 systemd-machined[215705]: Machine qemu-14-instance-0000000e terminated.
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.981 2 INFO nova.virt.libvirt.driver [-] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Instance destroyed successfully.
Oct 11 08:47:23 compute-0 nova_compute[260935]: 2025-10-11 08:47:23.981 2 DEBUG nova.objects.instance [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lazy-loading 'resources' on Instance uuid fa95251f-1ce7-4a45-8f53-ae932716a172 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:24 compute-0 podman[287169]: 2025-10-11 08:47:24.143118289 +0000 UTC m=+0.063712533 container create 779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:47:24 compute-0 systemd[1]: Started libpod-conmon-779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a.scope.
Oct 11 08:47:24 compute-0 podman[287169]: 2025-10-11 08:47:24.109898309 +0000 UTC m=+0.030492603 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:47:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829cba9cb35af42a78e9e37b72195be70588984fe3032b82c532e66c2e3b0e2c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:47:24 compute-0 podman[287169]: 2025-10-11 08:47:24.247673797 +0000 UTC m=+0.168268041 container init 779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 08:47:24 compute-0 podman[287169]: 2025-10-11 08:47:24.255406498 +0000 UTC m=+0.176000742 container start 779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 08:47:24 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[287185]: [NOTICE]   (287190) : New worker (287192) forked
Oct 11 08:47:24 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[287185]: [NOTICE]   (287190) : Loading success.
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.313 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172444.3123507, 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.313 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] VM Started (Lifecycle Event)
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.317 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.329 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.335 2 INFO nova.virt.libvirt.driver [-] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Instance spawned successfully.
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.335 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.339 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.343 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.361 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.361 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.362 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.363 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.363 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.364 2 DEBUG nova.virt.libvirt.driver [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.372 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.372 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172444.3126302, 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.373 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] VM Paused (Lifecycle Event)
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.411 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.421 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172444.32918, 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.421 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] VM Resumed (Lifecycle Event)
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.428 2 INFO nova.virt.libvirt.driver [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Deleting instance files /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172_del
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.429 2 INFO nova.virt.libvirt.driver [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Deletion of /var/lib/nova/instances/fa95251f-1ce7-4a45-8f53-ae932716a172_del complete
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.436 2 INFO nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Took 8.22 seconds to spawn the instance on the hypervisor.
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.437 2 DEBUG nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.438 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.447 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.483 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.494 2 INFO nova.compute.manager [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.495 2 DEBUG oslo.service.loopingcall [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.496 2 DEBUG nova.compute.manager [-] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.496 2 DEBUG nova.network.neutron [-] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.510 2 INFO nova.compute.manager [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Took 10.25 seconds to build instance.
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.528 2 DEBUG oslo_concurrency.lockutils [None req-602ff088-416a-4b9a-9d59-b4b2be10ff77 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.665 2 DEBUG nova.network.neutron [-] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.683 2 DEBUG nova.network.neutron [-] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.701 2 INFO nova.compute.manager [-] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Took 0.21 seconds to deallocate network for instance.
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.746 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.747 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:47:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:47:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:47:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:47:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:47:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:47:24 compute-0 nova_compute[260935]: 2025-10-11 08:47:24.876 2 DEBUG oslo_concurrency.processutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:25 compute-0 ceph-mon[74313]: pgmap v1241: 321 pgs: 321 active+clean; 275 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 13 KiB/s wr, 420 op/s
Oct 11 08:47:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:47:25 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/677872699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:25 compute-0 nova_compute[260935]: 2025-10-11 08:47:25.368 2 DEBUG oslo_concurrency.processutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:25 compute-0 nova_compute[260935]: 2025-10-11 08:47:25.374 2 DEBUG nova.compute.provider_tree [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:47:25 compute-0 nova_compute[260935]: 2025-10-11 08:47:25.391 2 DEBUG nova.scheduler.client.report [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:47:25 compute-0 nova_compute[260935]: 2025-10-11 08:47:25.409 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:25 compute-0 nova_compute[260935]: 2025-10-11 08:47:25.463 2 INFO nova.scheduler.client.report [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Deleted allocations for instance fa95251f-1ce7-4a45-8f53-ae932716a172
Oct 11 08:47:25 compute-0 nova_compute[260935]: 2025-10-11 08:47:25.532 2 DEBUG oslo_concurrency.lockutils [None req-201c53c8-6689-4b2a-b932-451ef298d63e 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "fa95251f-1ce7-4a45-8f53-ae932716a172" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 275 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 9.5 KiB/s wr, 301 op/s
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.014 2 DEBUG nova.compute.manager [req-0eae21b8-4657-4bb9-b7ec-15783345b9c0 req-1277c82f-979d-4e5d-ac87-13542e399c22 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.015 2 DEBUG oslo_concurrency.lockutils [req-0eae21b8-4657-4bb9-b7ec-15783345b9c0 req-1277c82f-979d-4e5d-ac87-13542e399c22 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.016 2 DEBUG oslo_concurrency.lockutils [req-0eae21b8-4657-4bb9-b7ec-15783345b9c0 req-1277c82f-979d-4e5d-ac87-13542e399c22 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.016 2 DEBUG oslo_concurrency.lockutils [req-0eae21b8-4657-4bb9-b7ec-15783345b9c0 req-1277c82f-979d-4e5d-ac87-13542e399c22 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.017 2 DEBUG nova.compute.manager [req-0eae21b8-4657-4bb9-b7ec-15783345b9c0 req-1277c82f-979d-4e5d-ac87-13542e399c22 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] No waiting events found dispatching network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.017 2 WARNING nova.compute.manager [req-0eae21b8-4657-4bb9-b7ec-15783345b9c0 req-1277c82f-979d-4e5d-ac87-13542e399c22 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received unexpected event network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 for instance with vm_state active and task_state None.
Oct 11 08:47:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/677872699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.301 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "318c32b0-9990-4579-8abb-fc79e7460d77" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.302 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "318c32b0-9990-4579-8abb-fc79e7460d77" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.302 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "318c32b0-9990-4579-8abb-fc79e7460d77-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.303 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "318c32b0-9990-4579-8abb-fc79e7460d77-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.303 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "318c32b0-9990-4579-8abb-fc79e7460d77-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.305 2 INFO nova.compute.manager [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Terminating instance
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.306 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "refresh_cache-318c32b0-9990-4579-8abb-fc79e7460d77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.307 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquired lock "refresh_cache-318c32b0-9990-4579-8abb-fc79e7460d77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.307 2 DEBUG nova.network.neutron [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.485 2 DEBUG nova.network.neutron [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:26 compute-0 podman[287225]: 2025-10-11 08:47:26.806170652 +0000 UTC m=+0.086515704 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.808 2 DEBUG nova.network.neutron [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.823 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Releasing lock "refresh_cache-318c32b0-9990-4579-8abb-fc79e7460d77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:47:26 compute-0 nova_compute[260935]: 2025-10-11 08:47:26.824 2 DEBUG nova.compute.manager [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:47:26 compute-0 sshd-session[287221]: Invalid user fast from 155.4.244.179 port 21931
Oct 11 08:47:26 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct 11 08:47:26 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 14.615s CPU time.
Oct 11 08:47:26 compute-0 sshd-session[287221]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 08:47:26 compute-0 sshd-session[287221]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 08:47:26 compute-0 systemd-machined[215705]: Machine qemu-13-instance-0000000d terminated.
Oct 11 08:47:27 compute-0 nova_compute[260935]: 2025-10-11 08:47:27.051 2 INFO nova.virt.libvirt.driver [-] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Instance destroyed successfully.
Oct 11 08:47:27 compute-0 nova_compute[260935]: 2025-10-11 08:47:27.053 2 DEBUG nova.objects.instance [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lazy-loading 'resources' on Instance uuid 318c32b0-9990-4579-8abb-fc79e7460d77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:27 compute-0 ceph-mon[74313]: pgmap v1242: 321 pgs: 321 active+clean; 275 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 9.5 KiB/s wr, 301 op/s
Oct 11 08:47:27 compute-0 nova_compute[260935]: 2025-10-11 08:47:27.486 2 INFO nova.virt.libvirt.driver [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Deleting instance files /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77_del
Oct 11 08:47:27 compute-0 nova_compute[260935]: 2025-10-11 08:47:27.488 2 INFO nova.virt.libvirt.driver [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Deletion of /var/lib/nova/instances/318c32b0-9990-4579-8abb-fc79e7460d77_del complete
Oct 11 08:47:27 compute-0 nova_compute[260935]: 2025-10-11 08:47:27.534 2 INFO nova.compute.manager [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Took 0.71 seconds to destroy the instance on the hypervisor.
Oct 11 08:47:27 compute-0 nova_compute[260935]: 2025-10-11 08:47:27.536 2 DEBUG oslo.service.loopingcall [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:47:27 compute-0 nova_compute[260935]: 2025-10-11 08:47:27.536 2 DEBUG nova.compute.manager [-] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:47:27 compute-0 nova_compute[260935]: 2025-10-11 08:47:27.537 2 DEBUG nova.network.neutron [-] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:47:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 127 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 38 KiB/s wr, 474 op/s
Oct 11 08:47:27 compute-0 nova_compute[260935]: 2025-10-11 08:47:27.880 2 DEBUG nova.network.neutron [-] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:47:27 compute-0 nova_compute[260935]: 2025-10-11 08:47:27.895 2 DEBUG nova.network.neutron [-] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:47:27 compute-0 nova_compute[260935]: 2025-10-11 08:47:27.918 2 INFO nova.compute.manager [-] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Took 0.38 seconds to deallocate network for instance.
Oct 11 08:47:27 compute-0 nova_compute[260935]: 2025-10-11 08:47:27.967 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:27 compute-0 nova_compute[260935]: 2025-10-11 08:47:27.968 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:28 compute-0 nova_compute[260935]: 2025-10-11 08:47:28.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:28 compute-0 NetworkManager[44960]: <info>  [1760172448.0161] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Oct 11 08:47:28 compute-0 NetworkManager[44960]: <info>  [1760172448.0178] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Oct 11 08:47:28 compute-0 nova_compute[260935]: 2025-10-11 08:47:28.049 2 DEBUG oslo_concurrency.processutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:28 compute-0 ovn_controller[152945]: 2025-10-11T08:47:28Z|00049|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 08:47:28 compute-0 nova_compute[260935]: 2025-10-11 08:47:28.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:28 compute-0 nova_compute[260935]: 2025-10-11 08:47:28.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:47:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Oct 11 08:47:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Oct 11 08:47:28 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Oct 11 08:47:28 compute-0 nova_compute[260935]: 2025-10-11 08:47:28.256 2 DEBUG nova.compute.manager [req-19146771-dec9-4181-ac6b-e8a57e5f442c req-d7d8cd7c-dba5-427f-9b8d-30a96a3b959e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-changed-713e6030-0d3f-41ae-9f66-c4591e2498e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:47:28 compute-0 nova_compute[260935]: 2025-10-11 08:47:28.256 2 DEBUG nova.compute.manager [req-19146771-dec9-4181-ac6b-e8a57e5f442c req-d7d8cd7c-dba5-427f-9b8d-30a96a3b959e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Refreshing instance network info cache due to event network-changed-713e6030-0d3f-41ae-9f66-c4591e2498e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:47:28 compute-0 nova_compute[260935]: 2025-10-11 08:47:28.257 2 DEBUG oslo_concurrency.lockutils [req-19146771-dec9-4181-ac6b-e8a57e5f442c req-d7d8cd7c-dba5-427f-9b8d-30a96a3b959e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:47:28 compute-0 nova_compute[260935]: 2025-10-11 08:47:28.257 2 DEBUG oslo_concurrency.lockutils [req-19146771-dec9-4181-ac6b-e8a57e5f442c req-d7d8cd7c-dba5-427f-9b8d-30a96a3b959e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:47:28 compute-0 nova_compute[260935]: 2025-10-11 08:47:28.257 2 DEBUG nova.network.neutron [req-19146771-dec9-4181-ac6b-e8a57e5f442c req-d7d8cd7c-dba5-427f-9b8d-30a96a3b959e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Refreshing network info cache for port 713e6030-0d3f-41ae-9f66-c4591e2498e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:47:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:47:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2828696214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:28 compute-0 nova_compute[260935]: 2025-10-11 08:47:28.564 2 DEBUG oslo_concurrency.processutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:28 compute-0 nova_compute[260935]: 2025-10-11 08:47:28.573 2 DEBUG nova.compute.provider_tree [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:47:28 compute-0 nova_compute[260935]: 2025-10-11 08:47:28.591 2 DEBUG nova.scheduler.client.report [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:47:28 compute-0 nova_compute[260935]: 2025-10-11 08:47:28.618 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:28 compute-0 nova_compute[260935]: 2025-10-11 08:47:28.643 2 INFO nova.scheduler.client.report [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Deleted allocations for instance 318c32b0-9990-4579-8abb-fc79e7460d77
Oct 11 08:47:28 compute-0 nova_compute[260935]: 2025-10-11 08:47:28.712 2 DEBUG oslo_concurrency.lockutils [None req-a720d339-044f-4388-8f34-b69eba5324d5 976b51186aac40648d84f68dc2241b25 cecc6eaab0b74c3786e0cdf9452f1b2c - - default default] Lock "318c32b0-9990-4579-8abb-fc79e7460d77" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:29 compute-0 sshd-session[287221]: Failed password for invalid user fast from 155.4.244.179 port 21931 ssh2
Oct 11 08:47:29 compute-0 ceph-mon[74313]: pgmap v1243: 321 pgs: 321 active+clean; 127 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 38 KiB/s wr, 474 op/s
Oct 11 08:47:29 compute-0 ceph-mon[74313]: osdmap e157: 3 total, 3 up, 3 in
Oct 11 08:47:29 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2828696214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:29 compute-0 nova_compute[260935]: 2025-10-11 08:47:29.342 2 DEBUG nova.network.neutron [req-19146771-dec9-4181-ac6b-e8a57e5f442c req-d7d8cd7c-dba5-427f-9b8d-30a96a3b959e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updated VIF entry in instance network info cache for port 713e6030-0d3f-41ae-9f66-c4591e2498e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:47:29 compute-0 nova_compute[260935]: 2025-10-11 08:47:29.343 2 DEBUG nova.network.neutron [req-19146771-dec9-4181-ac6b-e8a57e5f442c req-d7d8cd7c-dba5-427f-9b8d-30a96a3b959e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updating instance_info_cache with network_info: [{"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:47:29 compute-0 nova_compute[260935]: 2025-10-11 08:47:29.365 2 DEBUG oslo_concurrency.lockutils [req-19146771-dec9-4181-ac6b-e8a57e5f442c req-d7d8cd7c-dba5-427f-9b8d-30a96a3b959e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:47:29 compute-0 nova_compute[260935]: 2025-10-11 08:47:29.706 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:29 compute-0 nova_compute[260935]: 2025-10-11 08:47:29.708 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:29 compute-0 nova_compute[260935]: 2025-10-11 08:47:29.733 2 DEBUG nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:47:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 127 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 30 KiB/s wr, 349 op/s
Oct 11 08:47:29 compute-0 nova_compute[260935]: 2025-10-11 08:47:29.810 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:29 compute-0 nova_compute[260935]: 2025-10-11 08:47:29.811 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:29 compute-0 nova_compute[260935]: 2025-10-11 08:47:29.819 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:47:29 compute-0 nova_compute[260935]: 2025-10-11 08:47:29.820 2 INFO nova.compute.claims [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:47:29 compute-0 nova_compute[260935]: 2025-10-11 08:47:29.960 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Oct 11 08:47:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Oct 11 08:47:30 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Oct 11 08:47:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:47:30 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2425366987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.454 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.463 2 DEBUG nova.compute.provider_tree [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.483 2 DEBUG nova.scheduler.client.report [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.511 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.512 2 DEBUG nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.567 2 DEBUG nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.580 2 INFO nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.597 2 DEBUG nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.683 2 DEBUG nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.685 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.686 2 INFO nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating image(s)
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.720 2 DEBUG nova.storage.rbd_utils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.759 2 DEBUG nova.storage.rbd_utils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.794 2 DEBUG nova.storage.rbd_utils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.799 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.889 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.891 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.892 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.893 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.928 2 DEBUG nova.storage.rbd_utils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:30 compute-0 nova_compute[260935]: 2025-10-11 08:47:30.934 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:31 compute-0 sshd-session[287221]: Received disconnect from 155.4.244.179 port 21931:11: Bye Bye [preauth]
Oct 11 08:47:31 compute-0 sshd-session[287221]: Disconnected from invalid user fast 155.4.244.179 port 21931 [preauth]
Oct 11 08:47:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.200 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:31 compute-0 ceph-mon[74313]: pgmap v1245: 321 pgs: 321 active+clean; 127 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 30 KiB/s wr, 349 op/s
Oct 11 08:47:31 compute-0 ceph-mon[74313]: osdmap e158: 3 total, 3 up, 3 in
Oct 11 08:47:31 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2425366987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Oct 11 08:47:31 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.297 2 DEBUG nova.storage.rbd_utils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] resizing rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.446 2 DEBUG nova.objects.instance [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'migration_context' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.463 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.464 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Ensure instance console log exists: /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.464 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.465 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.465 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.466 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.470 2 WARNING nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.474 2 DEBUG nova.virt.libvirt.host [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.475 2 DEBUG nova.virt.libvirt.host [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.477 2 DEBUG nova.virt.libvirt.host [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.477 2 DEBUG nova.virt.libvirt.host [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.478 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.478 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.478 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.478 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.479 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.479 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.479 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.479 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.479 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.480 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.480 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.480 2 DEBUG nova.virt.hardware [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.483 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 127 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 33 KiB/s wr, 240 op/s
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:47:31 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4160667198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.929 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.963 2 DEBUG nova.storage.rbd_utils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:31 compute-0 nova_compute[260935]: 2025-10-11 08:47:31.971 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Oct 11 08:47:32 compute-0 ceph-mon[74313]: osdmap e159: 3 total, 3 up, 3 in
Oct 11 08:47:32 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4160667198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:47:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Oct 11 08:47:32 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Oct 11 08:47:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:47:32 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3614962514' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:47:32 compute-0 nova_compute[260935]: 2025-10-11 08:47:32.427 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:32 compute-0 nova_compute[260935]: 2025-10-11 08:47:32.430 2 DEBUG nova.objects.instance [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'pci_devices' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:32 compute-0 nova_compute[260935]: 2025-10-11 08:47:32.455 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:47:32 compute-0 nova_compute[260935]:   <uuid>66086b61-46ca-4a1b-a9f0-692678bcbf7a</uuid>
Oct 11 08:47:32 compute-0 nova_compute[260935]:   <name>instance-00000012</name>
Oct 11 08:47:32 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:47:32 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:47:32 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersAdmin275Test-server-132893026</nova:name>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:47:31</nova:creationTime>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:47:32 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:47:32 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:47:32 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:47:32 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:47:32 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:47:32 compute-0 nova_compute[260935]:         <nova:user uuid="7cfe9716527d49f18102a38c7480e208">tempest-ServersAdmin275Test-1935053767-project-member</nova:user>
Oct 11 08:47:32 compute-0 nova_compute[260935]:         <nova:project uuid="7fdd898b69404913a643940b3869140b">tempest-ServersAdmin275Test-1935053767</nova:project>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:47:32 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:47:32 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <system>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <entry name="serial">66086b61-46ca-4a1b-a9f0-692678bcbf7a</entry>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <entry name="uuid">66086b61-46ca-4a1b-a9f0-692678bcbf7a</entry>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     </system>
Oct 11 08:47:32 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:47:32 compute-0 nova_compute[260935]:   <os>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:   </os>
Oct 11 08:47:32 compute-0 nova_compute[260935]:   <features>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:   </features>
Oct 11 08:47:32 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:47:32 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:47:32 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk">
Oct 11 08:47:32 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       </source>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:47:32 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config">
Oct 11 08:47:32 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       </source>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:47:32 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/console.log" append="off"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <video>
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     </video>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:47:32 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:47:32 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:47:32 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:47:32 compute-0 nova_compute[260935]: </domain>
Oct 11 08:47:32 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:47:32 compute-0 nova_compute[260935]: 2025-10-11 08:47:32.551 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:47:32 compute-0 nova_compute[260935]: 2025-10-11 08:47:32.552 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:47:32 compute-0 nova_compute[260935]: 2025-10-11 08:47:32.553 2 INFO nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Using config drive
Oct 11 08:47:32 compute-0 nova_compute[260935]: 2025-10-11 08:47:32.598 2 DEBUG nova.storage.rbd_utils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:32 compute-0 podman[287541]: 2025-10-11 08:47:32.610450625 +0000 UTC m=+0.097763076 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 08:47:32 compute-0 podman[287542]: 2025-10-11 08:47:32.639434053 +0000 UTC m=+0.113197536 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct 11 08:47:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:47:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Oct 11 08:47:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Oct 11 08:47:33 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Oct 11 08:47:33 compute-0 ceph-mon[74313]: pgmap v1248: 321 pgs: 321 active+clean; 127 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 33 KiB/s wr, 240 op/s
Oct 11 08:47:33 compute-0 ceph-mon[74313]: osdmap e160: 3 total, 3 up, 3 in
Oct 11 08:47:33 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3614962514' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:47:33 compute-0 ceph-mon[74313]: osdmap e161: 3 total, 3 up, 3 in
Oct 11 08:47:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 134 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 188 KiB/s rd, 5.3 MiB/s wr, 272 op/s
Oct 11 08:47:33 compute-0 nova_compute[260935]: 2025-10-11 08:47:33.846 2 INFO nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating config drive at /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config
Oct 11 08:47:33 compute-0 nova_compute[260935]: 2025-10-11 08:47:33.856 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnn6pm873 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:34 compute-0 nova_compute[260935]: 2025-10-11 08:47:34.005 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnn6pm873" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:34 compute-0 nova_compute[260935]: 2025-10-11 08:47:34.047 2 DEBUG nova.storage.rbd_utils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:34 compute-0 nova_compute[260935]: 2025-10-11 08:47:34.052 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:34 compute-0 nova_compute[260935]: 2025-10-11 08:47:34.258 2 DEBUG oslo_concurrency.processutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:34 compute-0 nova_compute[260935]: 2025-10-11 08:47:34.259 2 INFO nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deleting local config drive /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config because it was imported into RBD.
Oct 11 08:47:34 compute-0 systemd-machined[215705]: New machine qemu-18-instance-00000012.
Oct 11 08:47:34 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000012.
Oct 11 08:47:35 compute-0 ceph-mon[74313]: pgmap v1251: 321 pgs: 321 active+clean; 134 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 188 KiB/s rd, 5.3 MiB/s wr, 272 op/s
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.622 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172455.6214416, 66086b61-46ca-4a1b-a9f0-692678bcbf7a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.624 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] VM Resumed (Lifecycle Event)
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.629 2 DEBUG nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.631 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.637 2 INFO nova.virt.libvirt.driver [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance spawned successfully.
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.638 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.650 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.660 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.669 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.670 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.671 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.672 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.673 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.674 2 DEBUG nova.virt.libvirt.driver [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.684 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.685 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172455.6231475, 66086b61-46ca-4a1b-a9f0-692678bcbf7a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.686 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] VM Started (Lifecycle Event)
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.719 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:35 compute-0 ovn_controller[152945]: 2025-10-11T08:47:35Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:9e:4e 10.100.0.10
Oct 11 08:47:35 compute-0 ovn_controller[152945]: 2025-10-11T08:47:35Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:9e:4e 10.100.0.10
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.725 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.734 2 INFO nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Took 5.05 seconds to spawn the instance on the hypervisor.
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.734 2 DEBUG nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.745 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:47:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 134 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 3.8 MiB/s wr, 195 op/s
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.789 2 INFO nova.compute.manager [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Took 6.01 seconds to build instance.
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.805 2 DEBUG oslo_concurrency.lockutils [None req-1b094375-7a98-4ea4-848d-cc6aeffeadc4 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.911 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172440.9105935, 682cad4b-4bab-4a36-9bd9-11e9de7b213a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.912 2 INFO nova.compute.manager [-] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] VM Stopped (Lifecycle Event)
Oct 11 08:47:35 compute-0 nova_compute[260935]: 2025-10-11 08:47:35.933 2 DEBUG nova.compute.manager [None req-e0794b61-391d-4e91-9f24-fcfbba0ecccc - - - - - -] [instance: 682cad4b-4bab-4a36-9bd9-11e9de7b213a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:36 compute-0 nova_compute[260935]: 2025-10-11 08:47:36.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:36 compute-0 nova_compute[260935]: 2025-10-11 08:47:36.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:37 compute-0 ceph-mon[74313]: pgmap v1252: 321 pgs: 321 active+clean; 134 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 3.8 MiB/s wr, 195 op/s
Oct 11 08:47:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:47:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1440883750' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:47:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:47:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1440883750' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:47:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 7.2 MiB/s wr, 436 op/s
Oct 11 08:47:37 compute-0 nova_compute[260935]: 2025-10-11 08:47:37.991 2 INFO nova.compute.manager [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Rebuilding instance
Oct 11 08:47:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:47:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Oct 11 08:47:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Oct 11 08:47:38 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Oct 11 08:47:38 compute-0 nova_compute[260935]: 2025-10-11 08:47:38.260 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'trusted_certs' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1440883750' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:47:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1440883750' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:47:38 compute-0 ceph-mon[74313]: osdmap e162: 3 total, 3 up, 3 in
Oct 11 08:47:38 compute-0 nova_compute[260935]: 2025-10-11 08:47:38.311 2 DEBUG nova.compute.manager [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:38 compute-0 nova_compute[260935]: 2025-10-11 08:47:38.455 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'pci_requests' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:38 compute-0 nova_compute[260935]: 2025-10-11 08:47:38.474 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'pci_devices' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:38 compute-0 nova_compute[260935]: 2025-10-11 08:47:38.494 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'resources' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:38 compute-0 nova_compute[260935]: 2025-10-11 08:47:38.527 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'migration_context' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:38 compute-0 nova_compute[260935]: 2025-10-11 08:47:38.560 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 08:47:38 compute-0 nova_compute[260935]: 2025-10-11 08:47:38.565 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 08:47:38 compute-0 nova_compute[260935]: 2025-10-11 08:47:38.978 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172443.9770055, fa95251f-1ce7-4a45-8f53-ae932716a172 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:47:38 compute-0 nova_compute[260935]: 2025-10-11 08:47:38.978 2 INFO nova.compute.manager [-] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] VM Stopped (Lifecycle Event)
Oct 11 08:47:39 compute-0 nova_compute[260935]: 2025-10-11 08:47:38.999 2 DEBUG nova.compute.manager [None req-6bf2ec43-1071-43bd-a20a-385d195fdc2c - - - - - -] [instance: fa95251f-1ce7-4a45-8f53-ae932716a172] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:39 compute-0 ceph-mon[74313]: pgmap v1253: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 7.2 MiB/s wr, 436 op/s
Oct 11 08:47:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.0 MiB/s wr, 327 op/s
Oct 11 08:47:40 compute-0 ovn_controller[152945]: 2025-10-11T08:47:40Z|00050|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 08:47:40 compute-0 nova_compute[260935]: 2025-10-11 08:47:40.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:41 compute-0 ceph-mon[74313]: pgmap v1255: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.0 MiB/s wr, 327 op/s
Oct 11 08:47:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.0 MiB/s wr, 206 op/s
Oct 11 08:47:41 compute-0 nova_compute[260935]: 2025-10-11 08:47:41.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:41 compute-0 nova_compute[260935]: 2025-10-11 08:47:41.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:42 compute-0 nova_compute[260935]: 2025-10-11 08:47:42.050 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172447.0485742, 318c32b0-9990-4579-8abb-fc79e7460d77 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:47:42 compute-0 nova_compute[260935]: 2025-10-11 08:47:42.050 2 INFO nova.compute.manager [-] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] VM Stopped (Lifecycle Event)
Oct 11 08:47:42 compute-0 nova_compute[260935]: 2025-10-11 08:47:42.069 2 DEBUG nova.compute.manager [None req-9fa6736e-d1be-4f9b-98bf-8de5cf561552 - - - - - -] [instance: 318c32b0-9990-4579-8abb-fc79e7460d77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:47:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Oct 11 08:47:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Oct 11 08:47:43 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Oct 11 08:47:43 compute-0 ceph-mon[74313]: pgmap v1256: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.0 MiB/s wr, 206 op/s
Oct 11 08:47:43 compute-0 ceph-mon[74313]: osdmap e163: 3 total, 3 up, 3 in
Oct 11 08:47:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 221 op/s
Oct 11 08:47:45 compute-0 ceph-mon[74313]: pgmap v1258: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 221 op/s
Oct 11 08:47:45 compute-0 nova_compute[260935]: 2025-10-11 08:47:45.365 2 DEBUG oslo_concurrency.lockutils [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-5b2193b9-46b9-44a8-9d1c-3c6a642115b6-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:45 compute-0 nova_compute[260935]: 2025-10-11 08:47:45.366 2 DEBUG oslo_concurrency.lockutils [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-5b2193b9-46b9-44a8-9d1c-3c6a642115b6-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:45 compute-0 nova_compute[260935]: 2025-10-11 08:47:45.366 2 DEBUG nova.objects.instance [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:47:46 compute-0 nova_compute[260935]: 2025-10-11 08:47:46.229 2 DEBUG nova.objects.instance [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_requests' on Instance uuid 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:46 compute-0 nova_compute[260935]: 2025-10-11 08:47:46.244 2 DEBUG nova.network.neutron [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:47:46 compute-0 nova_compute[260935]: 2025-10-11 08:47:46.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:46 compute-0 nova_compute[260935]: 2025-10-11 08:47:46.471 2 DEBUG nova.policy [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:47:46 compute-0 nova_compute[260935]: 2025-10-11 08:47:46.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:46 compute-0 nova_compute[260935]: 2025-10-11 08:47:46.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:47 compute-0 ceph-mon[74313]: pgmap v1259: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:47:47 compute-0 nova_compute[260935]: 2025-10-11 08:47:47.436 2 DEBUG nova.network.neutron [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Successfully created port: 0ae5b718-3374-4544-8f79-2f11854381cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:47:47 compute-0 nova_compute[260935]: 2025-10-11 08:47:47.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 195 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 374 KiB/s rd, 2.6 MiB/s wr, 66 op/s
Oct 11 08:47:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:47:48 compute-0 nova_compute[260935]: 2025-10-11 08:47:48.637 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 08:47:48 compute-0 nova_compute[260935]: 2025-10-11 08:47:48.730 2 DEBUG nova.network.neutron [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Successfully updated port: 0ae5b718-3374-4544-8f79-2f11854381cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:47:48 compute-0 nova_compute[260935]: 2025-10-11 08:47:48.754 2 DEBUG oslo_concurrency.lockutils [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:47:48 compute-0 nova_compute[260935]: 2025-10-11 08:47:48.755 2 DEBUG oslo_concurrency.lockutils [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:47:48 compute-0 nova_compute[260935]: 2025-10-11 08:47:48.755 2 DEBUG nova.network.neutron [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:47:48 compute-0 nova_compute[260935]: 2025-10-11 08:47:48.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:48 compute-0 nova_compute[260935]: 2025-10-11 08:47:48.847 2 DEBUG nova.compute.manager [req-13af72b0-7017-42a9-9d17-6a735b1e74e8 req-0d552423-1d45-42ae-be4b-a0bccca404c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-changed-0ae5b718-3374-4544-8f79-2f11854381cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:47:48 compute-0 nova_compute[260935]: 2025-10-11 08:47:48.848 2 DEBUG nova.compute.manager [req-13af72b0-7017-42a9-9d17-6a735b1e74e8 req-0d552423-1d45-42ae-be4b-a0bccca404c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Refreshing instance network info cache due to event network-changed-0ae5b718-3374-4544-8f79-2f11854381cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:47:48 compute-0 nova_compute[260935]: 2025-10-11 08:47:48.849 2 DEBUG oslo_concurrency.lockutils [req-13af72b0-7017-42a9-9d17-6a735b1e74e8 req-0d552423-1d45-42ae-be4b-a0bccca404c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:47:49 compute-0 nova_compute[260935]: 2025-10-11 08:47:49.170 2 WARNING nova.network.neutron [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] fff13396-b787-4c6e-9112-a1c2ef57b26d already exists in list: networks containing: ['fff13396-b787-4c6e-9112-a1c2ef57b26d']. ignoring it
Oct 11 08:47:49 compute-0 ceph-mon[74313]: pgmap v1260: 321 pgs: 321 active+clean; 195 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 374 KiB/s rd, 2.6 MiB/s wr, 66 op/s
Oct 11 08:47:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 195 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 2.5 MiB/s wr, 63 op/s
Oct 11 08:47:50 compute-0 podman[287698]: 2025-10-11 08:47:50.806699688 +0000 UTC m=+0.093584466 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:47:50 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct 11 08:47:50 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Consumed 13.154s CPU time.
Oct 11 08:47:50 compute-0 systemd-machined[215705]: Machine qemu-18-instance-00000012 terminated.
Oct 11 08:47:51 compute-0 ceph-mon[74313]: pgmap v1261: 321 pgs: 321 active+clean; 195 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 2.5 MiB/s wr, 63 op/s
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.505 2 DEBUG nova.network.neutron [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updating instance_info_cache with network_info: [{"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.527 2 DEBUG oslo_concurrency.lockutils [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.528 2 DEBUG oslo_concurrency.lockutils [req-13af72b0-7017-42a9-9d17-6a735b1e74e8 req-0d552423-1d45-42ae-be4b-a0bccca404c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.528 2 DEBUG nova.network.neutron [req-13af72b0-7017-42a9-9d17-6a735b1e74e8 req-0d552423-1d45-42ae-be4b-a0bccca404c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Refreshing network info cache for port 0ae5b718-3374-4544-8f79-2f11854381cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.533 2 DEBUG nova.virt.libvirt.vif [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:47:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1177835038',display_name='tempest-AttachInterfacesTestJSON-server-1177835038',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1177835038',id=17,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9Il64pQKTCYRuLz2OOsP19v1NZUxnzt1d6CpbMNqNcVSmJsI444B5YIDg/3s4g87KTn1UkUCttTxW17bkkPDQnOj/OhzrtE3rJwHzR/sgT5/vucTFG0ijrEL7r/7PtFg==',key_name='tempest-keypair-130237923',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:47:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-a649ez8h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:47:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=5b2193b9-46b9-44a8-9d1c-3c6a642115b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.533 2 DEBUG nova.network.os_vif_util [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.535 2 DEBUG nova.network.os_vif_util [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fe:cf,bridge_name='br-int',has_traffic_filtering=True,id=0ae5b718-3374-4544-8f79-2f11854381cd,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae5b718-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.535 2 DEBUG os_vif [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fe:cf,bridge_name='br-int',has_traffic_filtering=True,id=0ae5b718-3374-4544-8f79-2f11854381cd,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae5b718-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.537 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.537 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.542 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ae5b718-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.543 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ae5b718-33, col_values=(('external_ids', {'iface-id': '0ae5b718-3374-4544-8f79-2f11854381cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:fe:cf', 'vm-uuid': '5b2193b9-46b9-44a8-9d1c-3c6a642115b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:47:51 compute-0 NetworkManager[44960]: <info>  [1760172471.5464] manager: (tap0ae5b718-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.560 2 INFO os_vif [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fe:cf,bridge_name='br-int',has_traffic_filtering=True,id=0ae5b718-3374-4544-8f79-2f11854381cd,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae5b718-33')
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.562 2 DEBUG nova.virt.libvirt.vif [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:47:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1177835038',display_name='tempest-AttachInterfacesTestJSON-server-1177835038',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1177835038',id=17,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9Il64pQKTCYRuLz2OOsP19v1NZUxnzt1d6CpbMNqNcVSmJsI444B5YIDg/3s4g87KTn1UkUCttTxW17bkkPDQnOj/OhzrtE3rJwHzR/sgT5/vucTFG0ijrEL7r/7PtFg==',key_name='tempest-keypair-130237923',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:47:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-a649ez8h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:47:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=5b2193b9-46b9-44a8-9d1c-3c6a642115b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.562 2 DEBUG nova.network.os_vif_util [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.563 2 DEBUG nova.network.os_vif_util [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fe:cf,bridge_name='br-int',has_traffic_filtering=True,id=0ae5b718-3374-4544-8f79-2f11854381cd,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae5b718-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.567 2 DEBUG nova.virt.libvirt.guest [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] attach device xml: <interface type="ethernet">
Oct 11 08:47:51 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:3e:fe:cf"/>
Oct 11 08:47:51 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 08:47:51 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:47:51 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 08:47:51 compute-0 nova_compute[260935]:   <target dev="tap0ae5b718-33"/>
Oct 11 08:47:51 compute-0 nova_compute[260935]: </interface>
Oct 11 08:47:51 compute-0 nova_compute[260935]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 08:47:51 compute-0 NetworkManager[44960]: <info>  [1760172471.5848] manager: (tap0ae5b718-33): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Oct 11 08:47:51 compute-0 systemd-udevd[287714]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:47:51 compute-0 kernel: tap0ae5b718-33: entered promiscuous mode
Oct 11 08:47:51 compute-0 NetworkManager[44960]: <info>  [1760172471.6338] device (tap0ae5b718-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:47:51 compute-0 NetworkManager[44960]: <info>  [1760172471.6370] device (tap0ae5b718-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:47:51 compute-0 ovn_controller[152945]: 2025-10-11T08:47:51Z|00051|binding|INFO|Claiming lport 0ae5b718-3374-4544-8f79-2f11854381cd for this chassis.
Oct 11 08:47:51 compute-0 ovn_controller[152945]: 2025-10-11T08:47:51Z|00052|binding|INFO|0ae5b718-3374-4544-8f79-2f11854381cd: Claiming fa:16:3e:3e:fe:cf 10.100.0.14
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.650 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:fe:cf 10.100.0.14'], port_security=['fa:16:3e:3e:fe:cf 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5b2193b9-46b9-44a8-9d1c-3c6a642115b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0ae5b718-3374-4544-8f79-2f11854381cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.652 2 INFO nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance shutdown successfully after 13 seconds.
Oct 11 08:47:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.654 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0ae5b718-3374-4544-8f79-2f11854381cd in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis
Oct 11 08:47:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.658 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:47:51 compute-0 ovn_controller[152945]: 2025-10-11T08:47:51Z|00053|binding|INFO|Setting lport 0ae5b718-3374-4544-8f79-2f11854381cd ovn-installed in OVS
Oct 11 08:47:51 compute-0 ovn_controller[152945]: 2025-10-11T08:47:51Z|00054|binding|INFO|Setting lport 0ae5b718-3374-4544-8f79-2f11854381cd up in Southbound
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.677 2 INFO nova.virt.libvirt.driver [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance destroyed successfully.
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.687 2 INFO nova.virt.libvirt.driver [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance destroyed successfully.
Oct 11 08:47:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.686 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ca77ca6b-3a37-480a-bc26-01b57923cc10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.719 2 DEBUG nova.virt.libvirt.driver [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.719 2 DEBUG nova.virt.libvirt.driver [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.719 2 DEBUG nova.virt.libvirt.driver [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:6b:9e:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.720 2 DEBUG nova.virt.libvirt.driver [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:3e:fe:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:47:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.732 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1b1cbe-204c-470f-8f1c-c3a07e436dfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.737 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6a22556f-52e7-49b9-bf26-755970131fb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.757 2 DEBUG nova.virt.libvirt.guest [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:47:51 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:47:51 compute-0 nova_compute[260935]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1177835038</nova:name>
Oct 11 08:47:51 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:47:51</nova:creationTime>
Oct 11 08:47:51 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:47:51 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:47:51 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:47:51 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:47:51 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:47:51 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:47:51 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:47:51 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:47:51 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:47:51 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:47:51 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:47:51 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:47:51 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:47:51 compute-0 nova_compute[260935]:     <nova:port uuid="713e6030-0d3f-41ae-9f66-c4591e2498e4">
Oct 11 08:47:51 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 08:47:51 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:47:51 compute-0 nova_compute[260935]:     <nova:port uuid="0ae5b718-3374-4544-8f79-2f11854381cd">
Oct 11 08:47:51 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:47:51 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:47:51 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:47:51 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:47:51 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 08:47:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.778 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2ebfd00a-2f95-487d-a598-36fe395c485a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.791 2 DEBUG oslo_concurrency.lockutils [None req-12160066-095a-471e-ae52-63171fe3a949 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-5b2193b9-46b9-44a8-9d1c-3c6a642115b6-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 195 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 2.5 MiB/s wr, 63 op/s
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.811 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[163b7040-3427-4443-bf05-5e1efef0a10b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428802, 'reachable_time': 36432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287748, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.839 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[38d42948-5986-43c1-b623-7629e96b158d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428821, 'tstamp': 428821}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287749, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428826, 'tstamp': 428826}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287749, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.842 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:47:51 compute-0 nova_compute[260935]: 2025-10-11 08:47:51.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.849 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:47:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.849 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:47:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.850 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:47:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:51.850 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:47:52 compute-0 nova_compute[260935]: 2025-10-11 08:47:52.097 2 INFO nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deleting instance files /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a_del
Oct 11 08:47:52 compute-0 nova_compute[260935]: 2025-10-11 08:47:52.098 2 INFO nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deletion of /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a_del complete
Oct 11 08:47:52 compute-0 nova_compute[260935]: 2025-10-11 08:47:52.234 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:47:52 compute-0 nova_compute[260935]: 2025-10-11 08:47:52.235 2 INFO nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating image(s)
Oct 11 08:47:52 compute-0 nova_compute[260935]: 2025-10-11 08:47:52.262 2 DEBUG nova.storage.rbd_utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:52 compute-0 nova_compute[260935]: 2025-10-11 08:47:52.293 2 DEBUG nova.storage.rbd_utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:52 compute-0 nova_compute[260935]: 2025-10-11 08:47:52.322 2 DEBUG nova.storage.rbd_utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:52 compute-0 nova_compute[260935]: 2025-10-11 08:47:52.325 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:52 compute-0 nova_compute[260935]: 2025-10-11 08:47:52.326 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:52 compute-0 nova_compute[260935]: 2025-10-11 08:47:52.654 2 DEBUG nova.virt.libvirt.imagebackend [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Image locations are: [{'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/95632eb9-5895-4e20-b760-0f149aadf400/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/95632eb9-5895-4e20-b760-0f149aadf400/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 11 08:47:52 compute-0 nova_compute[260935]: 2025-10-11 08:47:52.904 2 DEBUG nova.network.neutron [req-13af72b0-7017-42a9-9d17-6a735b1e74e8 req-0d552423-1d45-42ae-be4b-a0bccca404c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updated VIF entry in instance network info cache for port 0ae5b718-3374-4544-8f79-2f11854381cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:47:52 compute-0 nova_compute[260935]: 2025-10-11 08:47:52.905 2 DEBUG nova.network.neutron [req-13af72b0-7017-42a9-9d17-6a735b1e74e8 req-0d552423-1d45-42ae-be4b-a0bccca404c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updating instance_info_cache with network_info: [{"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:47:52 compute-0 nova_compute[260935]: 2025-10-11 08:47:52.922 2 DEBUG oslo_concurrency.lockutils [req-13af72b0-7017-42a9-9d17-6a735b1e74e8 req-0d552423-1d45-42ae-be4b-a0bccca404c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:47:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:47:53 compute-0 ovn_controller[152945]: 2025-10-11T08:47:53Z|00055|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 08:47:53 compute-0 ceph-mon[74313]: pgmap v1262: 321 pgs: 321 active+clean; 195 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 2.5 MiB/s wr, 63 op/s
Oct 11 08:47:53 compute-0 nova_compute[260935]: 2025-10-11 08:47:53.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:53 compute-0 nova_compute[260935]: 2025-10-11 08:47:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:47:53 compute-0 nova_compute[260935]: 2025-10-11 08:47:53.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:47:53 compute-0 nova_compute[260935]: 2025-10-11 08:47:53.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:47:53 compute-0 nova_compute[260935]: 2025-10-11 08:47:53.762 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 121 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 387 KiB/s rd, 2.5 MiB/s wr, 106 op/s
Oct 11 08:47:53 compute-0 nova_compute[260935]: 2025-10-11 08:47:53.826 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571.part --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:53 compute-0 nova_compute[260935]: 2025-10-11 08:47:53.828 2 DEBUG nova.virt.images [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] 95632eb9-5895-4e20-b760-0f149aadf400 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 11 08:47:53 compute-0 nova_compute[260935]: 2025-10-11 08:47:53.829 2 DEBUG nova.privsep.utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 11 08:47:53 compute-0 nova_compute[260935]: 2025-10-11 08:47:53.830 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571.part /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:53 compute-0 ovn_controller[152945]: 2025-10-11T08:47:53Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3e:fe:cf 10.100.0.14
Oct 11 08:47:53 compute-0 ovn_controller[152945]: 2025-10-11T08:47:53Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:fe:cf 10.100.0.14
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.009 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571.part /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571.converted" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.017 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.110 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571.converted --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.114 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.147 2 DEBUG nova.storage.rbd_utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.152 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.463 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.556 2 DEBUG nova.storage.rbd_utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] resizing rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.688 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.689 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Ensure instance console log exists: /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.690 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.691 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.691 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.695 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.701 2 WARNING nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.712 2 DEBUG nova.virt.libvirt.host [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.713 2 DEBUG nova.virt.libvirt.host [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.719 2 DEBUG nova.virt.libvirt.host [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.720 2 DEBUG nova.virt.libvirt.host [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.721 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.721 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.722 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.723 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.723 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.724 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.724 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.724 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.725 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.725 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.726 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.726 2 DEBUG nova.virt.hardware [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.727 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:54 compute-0 nova_compute[260935]: 2025-10-11 08:47:54.752 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:47:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:47:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:47:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:47:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:47:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:47:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:47:54
Oct 11 08:47:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:47:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:47:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', '.mgr', '.rgw.root', 'volumes', 'vms', 'images', 'cephfs.cephfs.meta', 'backups']
Oct 11 08:47:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.272 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.273 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:47:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3059503799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.295 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.299 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.337 2 DEBUG nova.storage.rbd_utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.343 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:55 compute-0 ceph-mon[74313]: pgmap v1263: 321 pgs: 321 active+clean; 121 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 387 KiB/s rd, 2.5 MiB/s wr, 106 op/s
Oct 11 08:47:55 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3059503799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.416 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.417 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.426 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.426 2 INFO nova.compute.claims [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.553 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 121 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 2.2 MiB/s wr, 93 op/s
Oct 11 08:47:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:47:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958598207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.818 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.821 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:47:55 compute-0 nova_compute[260935]:   <uuid>66086b61-46ca-4a1b-a9f0-692678bcbf7a</uuid>
Oct 11 08:47:55 compute-0 nova_compute[260935]:   <name>instance-00000012</name>
Oct 11 08:47:55 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:47:55 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:47:55 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersAdmin275Test-server-132893026</nova:name>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:47:54</nova:creationTime>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:47:55 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:47:55 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:47:55 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:47:55 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:47:55 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:47:55 compute-0 nova_compute[260935]:         <nova:user uuid="7cfe9716527d49f18102a38c7480e208">tempest-ServersAdmin275Test-1935053767-project-member</nova:user>
Oct 11 08:47:55 compute-0 nova_compute[260935]:         <nova:project uuid="7fdd898b69404913a643940b3869140b">tempest-ServersAdmin275Test-1935053767</nova:project>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:47:55 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:47:55 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <system>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <entry name="serial">66086b61-46ca-4a1b-a9f0-692678bcbf7a</entry>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <entry name="uuid">66086b61-46ca-4a1b-a9f0-692678bcbf7a</entry>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     </system>
Oct 11 08:47:55 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:47:55 compute-0 nova_compute[260935]:   <os>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:   </os>
Oct 11 08:47:55 compute-0 nova_compute[260935]:   <features>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:   </features>
Oct 11 08:47:55 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:47:55 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:47:55 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk">
Oct 11 08:47:55 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       </source>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:47:55 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config">
Oct 11 08:47:55 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       </source>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:47:55 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/console.log" append="off"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <video>
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     </video>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:47:55 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:47:55 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:47:55 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:47:55 compute-0 nova_compute[260935]: </domain>
Oct 11 08:47:55 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.892 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.893 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.893 2 INFO nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Using config drive
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.923 2 DEBUG nova.storage.rbd_utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:55 compute-0 nova_compute[260935]: 2025-10-11 08:47:55.947 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'ec2_ids' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.028 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'keypairs' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:47:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2579184716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.073 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.080 2 DEBUG nova.compute.provider_tree [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.100 2 DEBUG nova.scheduler.client.report [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.128 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.129 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.183 2 DEBUG nova.compute.manager [req-8ec4460b-1875-4ae6-9870-3057a607f965 req-3e8132b0-be06-42bb-a1e5-45257641fba4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.184 2 DEBUG oslo_concurrency.lockutils [req-8ec4460b-1875-4ae6-9870-3057a607f965 req-3e8132b0-be06-42bb-a1e5-45257641fba4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.184 2 DEBUG oslo_concurrency.lockutils [req-8ec4460b-1875-4ae6-9870-3057a607f965 req-3e8132b0-be06-42bb-a1e5-45257641fba4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.185 2 DEBUG oslo_concurrency.lockutils [req-8ec4460b-1875-4ae6-9870-3057a607f965 req-3e8132b0-be06-42bb-a1e5-45257641fba4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.185 2 DEBUG nova.compute.manager [req-8ec4460b-1875-4ae6-9870-3057a607f965 req-3e8132b0-be06-42bb-a1e5-45257641fba4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] No waiting events found dispatching network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.186 2 WARNING nova.compute.manager [req-8ec4460b-1875-4ae6-9870-3057a607f965 req-3e8132b0-be06-42bb-a1e5-45257641fba4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received unexpected event network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd for instance with vm_state active and task_state None.
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.191 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.192 2 DEBUG nova.network.neutron [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.223 2 INFO nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.245 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:47:56 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2958598207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:47:56 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2579184716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.458 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.461 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.461 2 INFO nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Creating image(s)
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.490 2 DEBUG nova.storage.rbd_utils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] rbd image f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.516 2 DEBUG nova.storage.rbd_utils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] rbd image f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.547 2 DEBUG nova.storage.rbd_utils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] rbd image f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.552 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.646 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.646 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.647 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.648 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.675 2 DEBUG nova.storage.rbd_utils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] rbd image f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.679 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.712 2 INFO nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating config drive at /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.717 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl8ivjij3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.767 2 DEBUG nova.policy [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '16055681fed745bb89347149995b8486', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc753a4e96fc46008b6e6b1fd29b160d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.864 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl8ivjij3" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:56.873 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:47:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:56.875 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.906 2 DEBUG nova.storage.rbd_utils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.911 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:56 compute-0 nova_compute[260935]: 2025-10-11 08:47:56.965 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:57 compute-0 nova_compute[260935]: 2025-10-11 08:47:57.028 2 DEBUG nova.storage.rbd_utils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] resizing rbd image f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:47:57 compute-0 nova_compute[260935]: 2025-10-11 08:47:57.059 2 DEBUG oslo_concurrency.processutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:47:57 compute-0 nova_compute[260935]: 2025-10-11 08:47:57.060 2 INFO nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deleting local config drive /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config because it was imported into RBD.
Oct 11 08:47:57 compute-0 nova_compute[260935]: 2025-10-11 08:47:57.119 2 DEBUG nova.objects.instance [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lazy-loading 'migration_context' on Instance uuid f6e6ccd5-d393-4fa3-bf88-491311678dd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:57 compute-0 systemd-machined[215705]: New machine qemu-19-instance-00000012.
Oct 11 08:47:57 compute-0 nova_compute[260935]: 2025-10-11 08:47:57.144 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:47:57 compute-0 nova_compute[260935]: 2025-10-11 08:47:57.144 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Ensure instance console log exists: /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:47:57 compute-0 nova_compute[260935]: 2025-10-11 08:47:57.145 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:57 compute-0 nova_compute[260935]: 2025-10-11 08:47:57.146 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:57 compute-0 nova_compute[260935]: 2025-10-11 08:47:57.146 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:57 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000012.
Oct 11 08:47:57 compute-0 podman[288245]: 2025-10-11 08:47:57.202076317 +0000 UTC m=+0.069740575 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 11 08:47:57 compute-0 ceph-mon[74313]: pgmap v1264: 321 pgs: 321 active+clean; 121 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 2.2 MiB/s wr, 93 op/s
Oct 11 08:47:57 compute-0 nova_compute[260935]: 2025-10-11 08:47:57.556 2 DEBUG nova.network.neutron [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Successfully created port: 128b1135-2e8f-4e78-8e09-e16b082e9225 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:47:57 compute-0 nova_compute[260935]: 2025-10-11 08:47:57.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:47:57 compute-0 nova_compute[260935]: 2025-10-11 08:47:57.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:47:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 213 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.7 MiB/s wr, 160 op/s
Oct 11 08:47:57 compute-0 nova_compute[260935]: 2025-10-11 08:47:57.999 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 66086b61-46ca-4a1b-a9f0-692678bcbf7a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:57.999 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172477.998533, 66086b61-46ca-4a1b-a9f0-692678bcbf7a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.000 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] VM Resumed (Lifecycle Event)
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.002 2 DEBUG nova.compute.manager [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.002 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.005 2 INFO nova.virt.libvirt.driver [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance spawned successfully.
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.005 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.026 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.029 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.040 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.040 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.040 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.041 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.041 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.042 2 DEBUG nova.virt.libvirt.driver [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.067 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.068 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172478.0016768, 66086b61-46ca-4a1b-a9f0-692678bcbf7a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.068 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] VM Started (Lifecycle Event)
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.072 2 DEBUG oslo_concurrency.lockutils [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-5b2193b9-46b9-44a8-9d1c-3c6a642115b6-0ae5b718-3374-4544-8f79-2f11854381cd" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.072 2 DEBUG oslo_concurrency.lockutils [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-5b2193b9-46b9-44a8-9d1c-3c6a642115b6-0ae5b718-3374-4544-8f79-2f11854381cd" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.099 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.100 2 DEBUG nova.objects.instance [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.103 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.111 2 DEBUG nova.compute.manager [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.137 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.140 2 DEBUG nova.virt.libvirt.vif [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:47:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1177835038',display_name='tempest-AttachInterfacesTestJSON-server-1177835038',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1177835038',id=17,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9Il64pQKTCYRuLz2OOsP19v1NZUxnzt1d6CpbMNqNcVSmJsI444B5YIDg/3s4g87KTn1UkUCttTxW17bkkPDQnOj/OhzrtE3rJwHzR/sgT5/vucTFG0ijrEL7r/7PtFg==',key_name='tempest-keypair-130237923',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:47:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-a649ez8h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:47:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=5b2193b9-46b9-44a8-9d1c-3c6a642115b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.140 2 DEBUG nova.network.os_vif_util [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.141 2 DEBUG nova.network.os_vif_util [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fe:cf,bridge_name='br-int',has_traffic_filtering=True,id=0ae5b718-3374-4544-8f79-2f11854381cd,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae5b718-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.143 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3e:fe:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ae5b718-33"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.146 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3e:fe:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ae5b718-33"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.148 2 DEBUG nova.virt.libvirt.driver [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Attempting to detach device tap0ae5b718-33 from instance 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.149 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] detach device xml: <interface type="ethernet">
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:3e:fe:cf"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <target dev="tap0ae5b718-33"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]: </interface>
Oct 11 08:47:58 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.157 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3e:fe:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ae5b718-33"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.161 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3e:fe:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ae5b718-33"/></interface>not found in domain: <domain type='kvm' id='17'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <name>instance-00000011</name>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <uuid>5b2193b9-46b9-44a8-9d1c-3c6a642115b6</uuid>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1177835038</nova:name>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:47:51</nova:creationTime>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:port uuid="713e6030-0d3f-41ae-9f66-c4591e2498e4">
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:port uuid="0ae5b718-3374-4544-8f79-2f11854381cd">
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:47:58 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <resource>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <partition>/machine</partition>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </resource>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <system>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <entry name='serial'>5b2193b9-46b9-44a8-9d1c-3c6a642115b6</entry>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <entry name='uuid'>5b2193b9-46b9-44a8-9d1c-3c6a642115b6</entry>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </system>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <os>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </os>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <features>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </features>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <cpu mode='custom' match='exact' check='full'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <vendor>AMD</vendor>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='x2apic'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc-deadline'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='hypervisor'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc_adjust'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='spec-ctrl'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='stibp'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='arch-capabilities'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='ssbd'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='cmp_legacy'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='overflow-recov'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='succor'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='ibrs'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='amd-ssbd'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='virt-ssbd'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='lbrv'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='tsc-scale'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='vmcb-clean'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='flushbyasid'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='pause-filter'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='pfthreshold'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='rdctl-no'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='mds-no'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='pschange-mc-no'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='gds-no'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='rfds-no'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='xsaves'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='svm'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='topoext'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='npt'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='nrip-save'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk' index='2'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       </source>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='virtio-disk0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk.config' index='1'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       </source>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='sata0-0-0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pcie.0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.1'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.2'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.3'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.4'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.5'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.6'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.7'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.8'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.9'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.10'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.11'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.12'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.13'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.14'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.15'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.16'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.17'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.18'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.19'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.20'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.21'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.22'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.23'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.24'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.25'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.26'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='usb'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='ide'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:6b:9e:4e'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target dev='tap713e6030-0d'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='net0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:3e:fe:cf'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target dev='tap0ae5b718-33'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='net1'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <source path='/dev/pts/2'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/console.log' append='off'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       </target>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <console type='pty' tty='/dev/pts/2'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <source path='/dev/pts/2'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/console.log' append='off'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </console>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='input0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </input>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='input1'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </input>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='input2'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </input>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </graphics>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <video>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='video0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </video>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='watchdog0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </watchdog>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='balloon0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='rng0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <label>system_u:system_r:svirt_t:s0:c460,c984</label>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c460,c984</imagelabel>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <label>+107:+107</label>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <imagelabel>+107:+107</imagelabel>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 08:47:58 compute-0 nova_compute[260935]: </domain>
Oct 11 08:47:58 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.175 2 INFO nova.virt.libvirt.driver [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully detached device tap0ae5b718-33 from instance 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 from the persistent domain config.
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.176 2 DEBUG nova.virt.libvirt.driver [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] (1/8): Attempting to detach device tap0ae5b718-33 with device alias net1 from instance 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.176 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] detach device xml: <interface type="ethernet">
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:3e:fe:cf"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <target dev="tap0ae5b718-33"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]: </interface>
Oct 11 08:47:58 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.183 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.184 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.184 2 DEBUG nova.objects.instance [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 08:47:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.231 2 DEBUG oslo_concurrency.lockutils [None req-bd5d0c03-cc4b-47eb-9336-59861d26f68e 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:58 compute-0 kernel: tap0ae5b718-33 (unregistering): left promiscuous mode
Oct 11 08:47:58 compute-0 NetworkManager[44960]: <info>  [1760172478.2460] device (tap0ae5b718-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:58 compute-0 ovn_controller[152945]: 2025-10-11T08:47:58Z|00056|binding|INFO|Releasing lport 0ae5b718-3374-4544-8f79-2f11854381cd from this chassis (sb_readonly=0)
Oct 11 08:47:58 compute-0 ovn_controller[152945]: 2025-10-11T08:47:58Z|00057|binding|INFO|Setting lport 0ae5b718-3374-4544-8f79-2f11854381cd down in Southbound
Oct 11 08:47:58 compute-0 ovn_controller[152945]: 2025-10-11T08:47:58Z|00058|binding|INFO|Removing iface tap0ae5b718-33 ovn-installed in OVS
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.262 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:fe:cf 10.100.0.14'], port_security=['fa:16:3e:3e:fe:cf 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5b2193b9-46b9-44a8-9d1c-3c6a642115b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0ae5b718-3374-4544-8f79-2f11854381cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:47:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.263 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0ae5b718-3374-4544-8f79-2f11854381cd in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d unbound from our chassis
Oct 11 08:47:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.264 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.280 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[79f4d222-67fc-4b87-80dc-61e0939abdb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.286 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Received event <DeviceRemovedEvent: 1760172478.286079, 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.288 2 DEBUG nova.virt.libvirt.driver [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Start waiting for the detach event from libvirt for device tap0ae5b718-33 with device alias net1 for instance 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.289 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3e:fe:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ae5b718-33"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.302 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3e:fe:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ae5b718-33"/></interface>not found in domain: <domain type='kvm' id='17'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <name>instance-00000011</name>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <uuid>5b2193b9-46b9-44a8-9d1c-3c6a642115b6</uuid>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1177835038</nova:name>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:47:51</nova:creationTime>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:port uuid="713e6030-0d3f-41ae-9f66-c4591e2498e4">
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:port uuid="0ae5b718-3374-4544-8f79-2f11854381cd">
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:47:58 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <resource>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <partition>/machine</partition>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </resource>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <system>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <entry name='serial'>5b2193b9-46b9-44a8-9d1c-3c6a642115b6</entry>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <entry name='uuid'>5b2193b9-46b9-44a8-9d1c-3c6a642115b6</entry>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </system>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <os>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </os>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <features>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </features>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <cpu mode='custom' match='exact' check='full'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <vendor>AMD</vendor>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='x2apic'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc-deadline'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='hypervisor'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc_adjust'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='spec-ctrl'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='stibp'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='arch-capabilities'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='ssbd'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='cmp_legacy'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='overflow-recov'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='succor'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='ibrs'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='amd-ssbd'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='virt-ssbd'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='lbrv'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='tsc-scale'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='vmcb-clean'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='flushbyasid'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='pause-filter'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='pfthreshold'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='rdctl-no'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='mds-no'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='pschange-mc-no'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='gds-no'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='rfds-no'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='xsaves'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='svm'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='require' name='topoext'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='npt'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='nrip-save'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk' index='2'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       </source>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='virtio-disk0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/5b2193b9-46b9-44a8-9d1c-3c6a642115b6_disk.config' index='1'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       </source>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='sata0-0-0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pcie.0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.1'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.2'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.3'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.4'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.5'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.6'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.7'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.8'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.9'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.10'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.11'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.12'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.13'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.14'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.15'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.16'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.17'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.18'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.19'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.20'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.21'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.22'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.23'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.24'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.25'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='pci.26'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='usb'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='ide'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:6b:9e:4e'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target dev='tap713e6030-0d'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='net0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <source path='/dev/pts/2'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/console.log' append='off'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       </target>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <console type='pty' tty='/dev/pts/2'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <source path='/dev/pts/2'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6/console.log' append='off'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </console>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='input0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </input>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='input1'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </input>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='input2'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </input>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </graphics>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <video>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='video0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </video>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='watchdog0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </watchdog>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='balloon0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <alias name='rng0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <label>system_u:system_r:svirt_t:s0:c460,c984</label>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c460,c984</imagelabel>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <label>+107:+107</label>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <imagelabel>+107:+107</imagelabel>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 08:47:58 compute-0 nova_compute[260935]: </domain>
Oct 11 08:47:58 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.310 2 INFO nova.virt.libvirt.driver [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully detached device tap0ae5b718-33 from instance 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 from the live domain config.
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.311 2 DEBUG nova.virt.libvirt.vif [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:47:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1177835038',display_name='tempest-AttachInterfacesTestJSON-server-1177835038',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1177835038',id=17,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9Il64pQKTCYRuLz2OOsP19v1NZUxnzt1d6CpbMNqNcVSmJsI444B5YIDg/3s4g87KTn1UkUCttTxW17bkkPDQnOj/OhzrtE3rJwHzR/sgT5/vucTFG0ijrEL7r/7PtFg==',key_name='tempest-keypair-130237923',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:47:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-a649ez8h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:47:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=5b2193b9-46b9-44a8-9d1c-3c6a642115b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.311 2 DEBUG nova.network.os_vif_util [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "0ae5b718-3374-4544-8f79-2f11854381cd", "address": "fa:16:3e:3e:fe:cf", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae5b718-33", "ovs_interfaceid": "0ae5b718-3374-4544-8f79-2f11854381cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.312 2 DEBUG nova.network.os_vif_util [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fe:cf,bridge_name='br-int',has_traffic_filtering=True,id=0ae5b718-3374-4544-8f79-2f11854381cd,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae5b718-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.313 2 DEBUG os_vif [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fe:cf,bridge_name='br-int',has_traffic_filtering=True,id=0ae5b718-3374-4544-8f79-2f11854381cd,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae5b718-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.320 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ae5b718-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:47:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.320 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[eed7dbc2-027d-4515-ab15-d282e4088e2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.325 2 INFO os_vif [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:fe:cf,bridge_name='br-int',has_traffic_filtering=True,id=0ae5b718-3374-4544-8f79-2f11854381cd,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae5b718-33')
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.325 2 DEBUG nova.virt.libvirt.guest [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1177835038</nova:name>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:47:58</nova:creationTime>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     <nova:port uuid="713e6030-0d3f-41ae-9f66-c4591e2498e4">
Oct 11 08:47:58 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 08:47:58 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:47:58 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:47:58 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:47:58 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 08:47:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.326 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5980e7ad-c531-4479-abc4-3a98c504bd75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.361 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f885cd05-33a7-4aef-9db1-f7501942bf3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.386 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0b09b9-f2d6-4a3c-b451-bdbdf7f72187]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428802, 'reachable_time': 36432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288326, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.414 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9bedd155-8046-4d59-941a-79d2dc1ab0c7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428821, 'tstamp': 428821}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288327, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428826, 'tstamp': 428826}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288327, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:47:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.416 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:47:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.422 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:47:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.422 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:47:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.423 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:47:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:58.424 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.715 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.716 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 08:47:58 compute-0 nova_compute[260935]: 2025-10-11 08:47:58.736 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.336 2 DEBUG nova.compute.manager [req-c4239238-b186-456b-a93d-8c311e40cd99 req-4bc1b88c-194d-474c-a935-86d30f045a37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.336 2 DEBUG oslo_concurrency.lockutils [req-c4239238-b186-456b-a93d-8c311e40cd99 req-4bc1b88c-194d-474c-a935-86d30f045a37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.337 2 DEBUG oslo_concurrency.lockutils [req-c4239238-b186-456b-a93d-8c311e40cd99 req-4bc1b88c-194d-474c-a935-86d30f045a37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.337 2 DEBUG oslo_concurrency.lockutils [req-c4239238-b186-456b-a93d-8c311e40cd99 req-4bc1b88c-194d-474c-a935-86d30f045a37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.337 2 DEBUG nova.compute.manager [req-c4239238-b186-456b-a93d-8c311e40cd99 req-4bc1b88c-194d-474c-a935-86d30f045a37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] No waiting events found dispatching network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.337 2 WARNING nova.compute.manager [req-c4239238-b186-456b-a93d-8c311e40cd99 req-4bc1b88c-194d-474c-a935-86d30f045a37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received unexpected event network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd for instance with vm_state active and task_state None.
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.344 2 DEBUG oslo_concurrency.lockutils [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.344 2 DEBUG oslo_concurrency.lockutils [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.344 2 DEBUG nova.network.neutron [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:47:59 compute-0 ceph-mon[74313]: pgmap v1265: 321 pgs: 321 active+clean; 213 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.7 MiB/s wr, 160 op/s
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.387 2 DEBUG nova.network.neutron [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Successfully updated port: 128b1135-2e8f-4e78-8e09-e16b082e9225 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.408 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.408 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquired lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.408 2 DEBUG nova.network.neutron [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.495 2 DEBUG nova.compute.manager [req-9fbc9f2b-9925-4f02-a420-c41d4237c01f req-71825224-6999-4140-b6cc-a9b837c30ffd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-changed-128b1135-2e8f-4e78-8e09-e16b082e9225 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.496 2 DEBUG nova.compute.manager [req-9fbc9f2b-9925-4f02-a420-c41d4237c01f req-71825224-6999-4140-b6cc-a9b837c30ffd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Refreshing instance network info cache due to event network-changed-128b1135-2e8f-4e78-8e09-e16b082e9225. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.496 2 DEBUG oslo_concurrency.lockutils [req-9fbc9f2b-9925-4f02-a420-c41d4237c01f req-71825224-6999-4140-b6cc-a9b837c30ffd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.586 2 DEBUG nova.network.neutron [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:47:59 compute-0 nova_compute[260935]: 2025-10-11 08:47:59.723 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:47:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 213 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 107 op/s
Oct 11 08:47:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:47:59.878 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:00 compute-0 nova_compute[260935]: 2025-10-11 08:48:00.416 2 INFO nova.compute.manager [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Rebuilding instance
Oct 11 08:48:00 compute-0 nova_compute[260935]: 2025-10-11 08:48:00.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:48:00 compute-0 nova_compute[260935]: 2025-10-11 08:48:00.932 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:00 compute-0 nova_compute[260935]: 2025-10-11 08:48:00.950 2 DEBUG nova.compute.manager [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:00 compute-0 nova_compute[260935]: 2025-10-11 08:48:00.997 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lazy-loading 'pci_requests' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.007 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lazy-loading 'pci_devices' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.018 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lazy-loading 'resources' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.030 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lazy-loading 'migration_context' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.041 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.046 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.072 2 INFO nova.network.neutron [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Port 0ae5b718-3374-4544-8f79-2f11854381cd from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.073 2 DEBUG nova.network.neutron [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updating instance_info_cache with network_info: [{"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.101 2 DEBUG nova.network.neutron [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Updating instance_info_cache with network_info: [{"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.104 2 DEBUG oslo_concurrency.lockutils [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-5b2193b9-46b9-44a8-9d1c-3c6a642115b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.126 2 DEBUG oslo_concurrency.lockutils [None req-343edac2-7a63-40be-bb97-24799780dce2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-5b2193b9-46b9-44a8-9d1c-3c6a642115b6-0ae5b718-3374-4544-8f79-2f11854381cd" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.130 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Releasing lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.130 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Instance network_info: |[{"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.131 2 DEBUG oslo_concurrency.lockutils [req-9fbc9f2b-9925-4f02-a420-c41d4237c01f req-71825224-6999-4140-b6cc-a9b837c30ffd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.131 2 DEBUG nova.network.neutron [req-9fbc9f2b-9925-4f02-a420-c41d4237c01f req-71825224-6999-4140-b6cc-a9b837c30ffd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Refreshing network info cache for port 128b1135-2e8f-4e78-8e09-e16b082e9225 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.134 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Start _get_guest_xml network_info=[{"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.138 2 WARNING nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.142 2 DEBUG nova.virt.libvirt.host [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.143 2 DEBUG nova.virt.libvirt.host [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.146 2 DEBUG nova.virt.libvirt.host [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.146 2 DEBUG nova.virt.libvirt.host [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.146 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.147 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.147 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.147 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.148 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.148 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.148 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.148 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.149 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.149 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.149 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.149 2 DEBUG nova.virt.hardware [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.152 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.214 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.215 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.215 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.216 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.216 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.217 2 INFO nova.compute.manager [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Terminating instance
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.218 2 DEBUG nova.compute.manager [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:48:01 compute-0 kernel: tap713e6030-0d (unregistering): left promiscuous mode
Oct 11 08:48:01 compute-0 NetworkManager[44960]: <info>  [1760172481.2846] device (tap713e6030-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:01 compute-0 ovn_controller[152945]: 2025-10-11T08:48:01Z|00059|binding|INFO|Releasing lport 713e6030-0d3f-41ae-9f66-c4591e2498e4 from this chassis (sb_readonly=0)
Oct 11 08:48:01 compute-0 ovn_controller[152945]: 2025-10-11T08:48:01Z|00060|binding|INFO|Setting lport 713e6030-0d3f-41ae-9f66-c4591e2498e4 down in Southbound
Oct 11 08:48:01 compute-0 ovn_controller[152945]: 2025-10-11T08:48:01Z|00061|binding|INFO|Removing iface tap713e6030-0d ovn-installed in OVS
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.310 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:9e:4e 10.100.0.10'], port_security=['fa:16:3e:6b:9e:4e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5b2193b9-46b9-44a8-9d1c-3c6a642115b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45992a1d-2e72-47ad-a788-d3f5230a5526', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=713e6030-0d3f-41ae-9f66-c4591e2498e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:48:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.312 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 713e6030-0d3f-41ae-9f66-c4591e2498e4 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d unbound from our chassis
Oct 11 08:48:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.314 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fff13396-b787-4c6e-9112-a1c2ef57b26d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:48:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.315 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1435ab6b-14c7-4b2b-8105-32a77c79fd3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.316 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d namespace which is not needed anymore
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:01 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Oct 11 08:48:01 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 14.191s CPU time.
Oct 11 08:48:01 compute-0 systemd-machined[215705]: Machine qemu-17-instance-00000011 terminated.
Oct 11 08:48:01 compute-0 ceph-mon[74313]: pgmap v1266: 321 pgs: 321 active+clean; 213 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 107 op/s
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.463 2 INFO nova.virt.libvirt.driver [-] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Instance destroyed successfully.
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.464 2 DEBUG nova.objects.instance [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'resources' on Instance uuid 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.503 2 DEBUG nova.virt.libvirt.vif [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:47:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1177835038',display_name='tempest-AttachInterfacesTestJSON-server-1177835038',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1177835038',id=17,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM9Il64pQKTCYRuLz2OOsP19v1NZUxnzt1d6CpbMNqNcVSmJsI444B5YIDg/3s4g87KTn1UkUCttTxW17bkkPDQnOj/OhzrtE3rJwHzR/sgT5/vucTFG0ijrEL7r/7PtFg==',key_name='tempest-keypair-130237923',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:47:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-a649ez8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:47:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=5b2193b9-46b9-44a8-9d1c-3c6a642115b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.504 2 DEBUG nova.network.os_vif_util [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "address": "fa:16:3e:6b:9e:4e", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap713e6030-0d", "ovs_interfaceid": "713e6030-0d3f-41ae-9f66-c4591e2498e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.505 2 DEBUG nova.network.os_vif_util [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6b:9e:4e,bridge_name='br-int',has_traffic_filtering=True,id=713e6030-0d3f-41ae-9f66-c4591e2498e4,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap713e6030-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.505 2 DEBUG os_vif [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:9e:4e,bridge_name='br-int',has_traffic_filtering=True,id=713e6030-0d3f-41ae-9f66-c4591e2498e4,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap713e6030-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.507 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap713e6030-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.515 2 INFO os_vif [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:9e:4e,bridge_name='br-int',has_traffic_filtering=True,id=713e6030-0d3f-41ae-9f66-c4591e2498e4,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap713e6030-0d')
Oct 11 08:48:01 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[287185]: [NOTICE]   (287190) : haproxy version is 2.8.14-c23fe91
Oct 11 08:48:01 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[287185]: [NOTICE]   (287190) : path to executable is /usr/sbin/haproxy
Oct 11 08:48:01 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[287185]: [WARNING]  (287190) : Exiting Master process...
Oct 11 08:48:01 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[287185]: [ALERT]    (287190) : Current worker (287192) exited with code 143 (Terminated)
Oct 11 08:48:01 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[287185]: [WARNING]  (287190) : All workers exited. Exiting... (0)
Oct 11 08:48:01 compute-0 systemd[1]: libpod-779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a.scope: Deactivated successfully.
Oct 11 08:48:01 compute-0 podman[288372]: 2025-10-11 08:48:01.539605424 +0000 UTC m=+0.075051306 container died 779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.559 2 DEBUG nova.compute.manager [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-unplugged-0ae5b718-3374-4544-8f79-2f11854381cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.560 2 DEBUG oslo_concurrency.lockutils [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.561 2 DEBUG oslo_concurrency.lockutils [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.561 2 DEBUG oslo_concurrency.lockutils [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.561 2 DEBUG nova.compute.manager [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] No waiting events found dispatching network-vif-unplugged-0ae5b718-3374-4544-8f79-2f11854381cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.562 2 DEBUG nova.compute.manager [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-unplugged-0ae5b718-3374-4544-8f79-2f11854381cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.562 2 DEBUG nova.compute.manager [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.562 2 DEBUG oslo_concurrency.lockutils [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.562 2 DEBUG oslo_concurrency.lockutils [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.562 2 DEBUG oslo_concurrency.lockutils [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.563 2 DEBUG nova.compute.manager [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] No waiting events found dispatching network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.563 2 WARNING nova.compute.manager [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received unexpected event network-vif-plugged-0ae5b718-3374-4544-8f79-2f11854381cd for instance with vm_state active and task_state deleting.
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.563 2 DEBUG nova.compute.manager [req-0d5452c6-f8eb-4c34-b406-4c0253565753 req-977baa6f-3f15-4a41-ae5c-ee28de0b6ac0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-deleted-0ae5b718-3374-4544-8f79-2f11854381cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a-userdata-shm.mount: Deactivated successfully.
Oct 11 08:48:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-829cba9cb35af42a78e9e37b72195be70588984fe3032b82c532e66c2e3b0e2c-merged.mount: Deactivated successfully.
Oct 11 08:48:01 compute-0 podman[288372]: 2025-10-11 08:48:01.61362505 +0000 UTC m=+0.149070972 container cleanup 779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 08:48:01 compute-0 systemd[1]: libpod-conmon-779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a.scope: Deactivated successfully.
Oct 11 08:48:01 compute-0 anacron[27972]: Job `cron.monthly' started
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.707 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.707 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 08:48:01 compute-0 anacron[27972]: Job `cron.monthly' terminated
Oct 11 08:48:01 compute-0 anacron[27972]: Normal exit (3 jobs run)
Oct 11 08:48:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:48:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925108110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:48:01 compute-0 podman[288428]: 2025-10-11 08:48:01.729938615 +0000 UTC m=+0.069084116 container remove 779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.736 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.747 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a8031769-5b89-416c-b293-5f003bba4196]: (4, ('Sat Oct 11 08:48:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d (779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a)\n779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a\nSat Oct 11 08:48:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d (779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a)\n779a4dfe6de7028a6c29a354067b0ff2a57745a062e774c98de6f0ccb789cb5a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.751 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cec30304-8503-4206-b84c-e5f0011782d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.757 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:01 compute-0 kernel: tapfff13396-b0: left promiscuous mode
Oct 11 08:48:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.785 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[be660f03-31a0-4a63-9961-8079e4ea2f09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.788 2 DEBUG nova.storage.rbd_utils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] rbd image f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 213 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 107 op/s
Oct 11 08:48:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.804 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[689fe06b-c3b6-4fd6-bc16-d237bac9f35b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.805 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.805 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3550bd22-c0cd-4c19-8dd6-8087bafd167a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.829 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[30c1cd57-bdec-4050-a7b3-03e47f6da66d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428792, 'reachable_time': 16857, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288462, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:01 compute-0 systemd[1]: run-netns-ovnmeta\x2dfff13396\x2db787\x2d4c6e\x2d9112\x2da1c2ef57b26d.mount: Deactivated successfully.
Oct 11 08:48:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.834 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:48:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:01.835 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe7eaab-b9e7-49d2-8972-7dfb3eadbedd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:01 compute-0 nova_compute[260935]: 2025-10-11 08:48:01.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.104 2 INFO nova.virt.libvirt.driver [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Deleting instance files /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6_del
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.105 2 INFO nova.virt.libvirt.driver [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Deletion of /var/lib/nova/instances/5b2193b9-46b9-44a8-9d1c-3c6a642115b6_del complete
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.155 2 INFO nova.compute.manager [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Took 0.94 seconds to destroy the instance on the hypervisor.
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.155 2 DEBUG oslo.service.loopingcall [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.156 2 DEBUG nova.compute.manager [-] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.156 2 DEBUG nova.network.neutron [-] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:48:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:48:02 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3227162184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.363 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.365 2 DEBUG nova.virt.libvirt.vif [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:47:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-2127562755',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-2127562755',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-212756275',id=19,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc753a4e96fc46008b6e6b1fd29b160d',ramdisk_id='',reservation_id='r-wv642fxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-713335751',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-713335751-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:47:56Z,user_data=None,user_id='16055681fed745bb89347149995b8486',uuid=f6e6ccd5-d393-4fa3-bf88-491311678dd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.365 2 DEBUG nova.network.os_vif_util [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Converting VIF {"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.366 2 DEBUG nova.network.os_vif_util [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:95:81,bridge_name='br-int',has_traffic_filtering=True,id=128b1135-2e8f-4e78-8e09-e16b082e9225,network=Network(678d17d5-515c-4e7c-a42a-5bd4db3dbb7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap128b1135-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.367 2 DEBUG nova.objects.instance [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lazy-loading 'pci_devices' on Instance uuid f6e6ccd5-d393-4fa3-bf88-491311678dd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.380 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:48:02 compute-0 nova_compute[260935]:   <uuid>f6e6ccd5-d393-4fa3-bf88-491311678dd1</uuid>
Oct 11 08:48:02 compute-0 nova_compute[260935]:   <name>instance-00000013</name>
Oct 11 08:48:02 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:48:02 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:48:02 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-2127562755</nova:name>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:48:01</nova:creationTime>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:48:02 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:48:02 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:48:02 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:48:02 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:48:02 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:48:02 compute-0 nova_compute[260935]:         <nova:user uuid="16055681fed745bb89347149995b8486">tempest-FloatingIPsAssociationNegativeTestJSON-713335751-project-member</nova:user>
Oct 11 08:48:02 compute-0 nova_compute[260935]:         <nova:project uuid="dc753a4e96fc46008b6e6b1fd29b160d">tempest-FloatingIPsAssociationNegativeTestJSON-713335751</nova:project>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:48:02 compute-0 nova_compute[260935]:         <nova:port uuid="128b1135-2e8f-4e78-8e09-e16b082e9225">
Oct 11 08:48:02 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:48:02 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:48:02 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <system>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <entry name="serial">f6e6ccd5-d393-4fa3-bf88-491311678dd1</entry>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <entry name="uuid">f6e6ccd5-d393-4fa3-bf88-491311678dd1</entry>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     </system>
Oct 11 08:48:02 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:48:02 compute-0 nova_compute[260935]:   <os>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:   </os>
Oct 11 08:48:02 compute-0 nova_compute[260935]:   <features>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:   </features>
Oct 11 08:48:02 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:48:02 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:48:02 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk">
Oct 11 08:48:02 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       </source>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:48:02 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk.config">
Oct 11 08:48:02 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       </source>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:48:02 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:ac:95:81"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <target dev="tap128b1135-2e"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1/console.log" append="off"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <video>
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     </video>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:48:02 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:48:02 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:48:02 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:48:02 compute-0 nova_compute[260935]: </domain>
Oct 11 08:48:02 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.380 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Preparing to wait for external event network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.381 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.381 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.381 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.382 2 DEBUG nova.virt.libvirt.vif [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:47:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-2127562755',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-2127562755',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-212756275',id=19,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc753a4e96fc46008b6e6b1fd29b160d',ramdisk_id='',reservation_id='r-wv642fxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-713335751',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-713335751-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:47:56Z,user_data=None,user_id='16055681fed745bb89347149995b8486',uuid=f6e6ccd5-d393-4fa3-bf88-491311678dd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.382 2 DEBUG nova.network.os_vif_util [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Converting VIF {"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.382 2 DEBUG nova.network.os_vif_util [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:95:81,bridge_name='br-int',has_traffic_filtering=True,id=128b1135-2e8f-4e78-8e09-e16b082e9225,network=Network(678d17d5-515c-4e7c-a42a-5bd4db3dbb7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap128b1135-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.383 2 DEBUG os_vif [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:95:81,bridge_name='br-int',has_traffic_filtering=True,id=128b1135-2e8f-4e78-8e09-e16b082e9225,network=Network(678d17d5-515c-4e7c-a42a-5bd4db3dbb7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap128b1135-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.387 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap128b1135-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.388 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap128b1135-2e, col_values=(('external_ids', {'iface-id': '128b1135-2e8f-4e78-8e09-e16b082e9225', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:95:81', 'vm-uuid': 'f6e6ccd5-d393-4fa3-bf88-491311678dd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:02 compute-0 NetworkManager[44960]: <info>  [1760172482.3917] manager: (tap128b1135-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.399 2 INFO os_vif [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:95:81,bridge_name='br-int',has_traffic_filtering=True,id=128b1135-2e8f-4e78-8e09-e16b082e9225,network=Network(678d17d5-515c-4e7c-a42a-5bd4db3dbb7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap128b1135-2e')
Oct 11 08:48:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1925108110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:48:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3227162184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.450 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.450 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.450 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] No VIF found with MAC fa:16:3e:ac:95:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.451 2 INFO nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Using config drive
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.475 2 DEBUG nova.storage.rbd_utils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] rbd image f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.822 2 DEBUG nova.network.neutron [req-9fbc9f2b-9925-4f02-a420-c41d4237c01f req-71825224-6999-4140-b6cc-a9b837c30ffd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Updated VIF entry in instance network info cache for port 128b1135-2e8f-4e78-8e09-e16b082e9225. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.824 2 DEBUG nova.network.neutron [req-9fbc9f2b-9925-4f02-a420-c41d4237c01f req-71825224-6999-4140-b6cc-a9b837c30ffd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Updating instance_info_cache with network_info: [{"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:02 compute-0 podman[288506]: 2025-10-11 08:48:02.830313418 +0000 UTC m=+0.115861232 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.841 2 DEBUG oslo_concurrency.lockutils [req-9fbc9f2b-9925-4f02-a420-c41d4237c01f req-71825224-6999-4140-b6cc-a9b837c30ffd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:48:02 compute-0 podman[288507]: 2025-10-11 08:48:02.919043464 +0000 UTC m=+0.203517367 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.936 2 INFO nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Creating config drive at /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1/disk.config
Oct 11 08:48:02 compute-0 nova_compute[260935]: 2025-10-11 08:48:02.944 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppk9f8g6j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.079 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppk9f8g6j" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.116 2 DEBUG nova.storage.rbd_utils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] rbd image f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.121 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1/disk.config f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.161 2 DEBUG nova.network.neutron [-] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.169 2 DEBUG nova.compute.manager [req-58c37e2e-9ca9-4015-a083-381175cda690 req-35d5acf4-9357-4876-89f0-c625f4dc77b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-deleted-713e6030-0d3f-41ae-9f66-c4591e2498e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.170 2 INFO nova.compute.manager [req-58c37e2e-9ca9-4015-a083-381175cda690 req-35d5acf4-9357-4876-89f0-c625f4dc77b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Neutron deleted interface 713e6030-0d3f-41ae-9f66-c4591e2498e4; detaching it from the instance and deleting it from the info cache
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.171 2 DEBUG nova.network.neutron [req-58c37e2e-9ca9-4015-a083-381175cda690 req-35d5acf4-9357-4876-89f0-c625f4dc77b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.178 2 INFO nova.compute.manager [-] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Took 1.02 seconds to deallocate network for instance.
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.191 2 DEBUG nova.compute.manager [req-58c37e2e-9ca9-4015-a083-381175cda690 req-35d5acf4-9357-4876-89f0-c625f4dc77b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Detach interface failed, port_id=713e6030-0d3f-41ae-9f66-c4591e2498e4, reason: Instance 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 08:48:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.237 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.238 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.317 2 DEBUG oslo_concurrency.processutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1/disk.config f6e6ccd5-d393-4fa3-bf88-491311678dd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.318 2 INFO nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Deleting local config drive /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1/disk.config because it was imported into RBD.
Oct 11 08:48:03 compute-0 kernel: tap128b1135-2e: entered promiscuous mode
Oct 11 08:48:03 compute-0 NetworkManager[44960]: <info>  [1760172483.3950] manager: (tap128b1135-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:03 compute-0 ovn_controller[152945]: 2025-10-11T08:48:03Z|00062|binding|INFO|Claiming lport 128b1135-2e8f-4e78-8e09-e16b082e9225 for this chassis.
Oct 11 08:48:03 compute-0 ovn_controller[152945]: 2025-10-11T08:48:03Z|00063|binding|INFO|128b1135-2e8f-4e78-8e09-e16b082e9225: Claiming fa:16:3e:ac:95:81 10.100.0.4
Oct 11 08:48:03 compute-0 systemd-udevd[288335]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.403 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:95:81 10.100.0.4'], port_security=['fa:16:3e:ac:95:81 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f6e6ccd5-d393-4fa3-bf88-491311678dd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc753a4e96fc46008b6e6b1fd29b160d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd907eed5-bfee-42df-bbfe-0d6a84057302', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8656b2b4-2d06-42a4-a59e-112920fcccd9, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=128b1135-2e8f-4e78-8e09-e16b082e9225) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.405 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 128b1135-2e8f-4e78-8e09-e16b082e9225 in datapath 678d17d5-515c-4e7c-a42a-5bd4db3dbb7b bound to our chassis
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.407 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 678d17d5-515c-4e7c-a42a-5bd4db3dbb7b
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.428 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed20d4d-c4c2-446c-b5ff-3eae2396904a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.430 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap678d17d5-51 in ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.426 2 DEBUG oslo_concurrency.processutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:03 compute-0 NetworkManager[44960]: <info>  [1760172483.4341] device (tap128b1135-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:48:03 compute-0 NetworkManager[44960]: <info>  [1760172483.4356] device (tap128b1135-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:48:03 compute-0 ceph-mon[74313]: pgmap v1267: 321 pgs: 321 active+clean; 213 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 107 op/s
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.435 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap678d17d5-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.435 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c44d3566-03ac-409d-959a-15bb0ef5dee9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.438 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[638ec3c9-04aa-41c9-a1d7-e5715d09dd5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:03 compute-0 ovn_controller[152945]: 2025-10-11T08:48:03Z|00064|binding|INFO|Setting lport 128b1135-2e8f-4e78-8e09-e16b082e9225 ovn-installed in OVS
Oct 11 08:48:03 compute-0 ovn_controller[152945]: 2025-10-11T08:48:03Z|00065|binding|INFO|Setting lport 128b1135-2e8f-4e78-8e09-e16b082e9225 up in Southbound
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.456 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b08cc99a-9b2e-48f1-b3ec-9be696f3238c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:03 compute-0 systemd-machined[215705]: New machine qemu-20-instance-00000013.
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:03 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000013.
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.488 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ff7a953c-0e68-4b8c-a7ac-9eba5026a242]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.532 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[01f624de-a3b9-4bc3-9b38-441454621d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:03 compute-0 NetworkManager[44960]: <info>  [1760172483.5450] manager: (tap678d17d5-50): new Veth device (/org/freedesktop/NetworkManager/Devices/46)
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.549 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a18cdf1a-eeb4-4cc5-846b-885f20d71ae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.598 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fbacd693-51e9-414a-8540-f82406511a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.601 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[31a3b2c5-1472-4586-829f-f777d6fdd3ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.606 2 DEBUG nova.compute.manager [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-unplugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.607 2 DEBUG oslo_concurrency.lockutils [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.607 2 DEBUG oslo_concurrency.lockutils [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.608 2 DEBUG oslo_concurrency.lockutils [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.609 2 DEBUG nova.compute.manager [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] No waiting events found dispatching network-vif-unplugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.610 2 WARNING nova.compute.manager [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received unexpected event network-vif-unplugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 for instance with vm_state deleted and task_state None.
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.610 2 DEBUG nova.compute.manager [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received event network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.611 2 DEBUG oslo_concurrency.lockutils [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.612 2 DEBUG oslo_concurrency.lockutils [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.613 2 DEBUG oslo_concurrency.lockutils [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.613 2 DEBUG nova.compute.manager [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] No waiting events found dispatching network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.614 2 WARNING nova.compute.manager [req-93a0c873-48cb-4c81-8956-7d715ef9e907 req-3d66c9e9-3462-4563-a9f9-48c6fde53895 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Received unexpected event network-vif-plugged-713e6030-0d3f-41ae-9f66-c4591e2498e4 for instance with vm_state deleted and task_state None.
Oct 11 08:48:03 compute-0 NetworkManager[44960]: <info>  [1760172483.6310] device (tap678d17d5-50): carrier: link connected
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.636 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[aa583c0a-1691-4fa3-8f58-4e06c841cf72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.661 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4b0c30-ceac-4bd4-8b13-4f2695ba6633]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap678d17d5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:85:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432833, 'reachable_time': 22326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288655, 'error': None, 'target': 'ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.684 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe90ea5-ad5c-47a3-9f12-efa5fe470041]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe29:85ce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 432833, 'tstamp': 432833}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288656, 'error': None, 'target': 'ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.706 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e9564193-f5f9-474a-8b0f-8f9bb4cd0dad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap678d17d5-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:85:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432833, 'reachable_time': 22326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288657, 'error': None, 'target': 'ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.726 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.753 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[77c21a5f-4187-4636-b46d-64ecc617efb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.757 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 134 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.844 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ba24c921-b670-4559-97dd-128bcd8202fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.845 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap678d17d5-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.846 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.846 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap678d17d5-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:03 compute-0 kernel: tap678d17d5-50: entered promiscuous mode
Oct 11 08:48:03 compute-0 NetworkManager[44960]: <info>  [1760172483.8499] manager: (tap678d17d5-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.856 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap678d17d5-50, col_values=(('external_ids', {'iface-id': '728422f2-62be-41c7-90af-4ff751731213'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:03 compute-0 ovn_controller[152945]: 2025-10-11T08:48:03Z|00066|binding|INFO|Releasing lport 728422f2-62be-41c7-90af-4ff751731213 from this chassis (sb_readonly=0)
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.860 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/678d17d5-515c-4e7c-a42a-5bd4db3dbb7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/678d17d5-515c-4e7c-a42a-5bd4db3dbb7b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.861 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cab2c8db-2181-442a-9b25-6b2bb41f6117]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.862 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/678d17d5-515c-4e7c-a42a-5bd4db3dbb7b.pid.haproxy
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 678d17d5-515c-4e7c-a42a-5bd4db3dbb7b
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:48:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:03.863 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b', 'env', 'PROCESS_TAG=haproxy-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/678d17d5-515c-4e7c-a42a-5bd4db3dbb7b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:48:03 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469220891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.947 2 DEBUG oslo_concurrency.processutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.957 2 DEBUG nova.compute.provider_tree [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:48:03 compute-0 nova_compute[260935]: 2025-10-11 08:48:03.977 2 DEBUG nova.scheduler.client.report [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.014 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.019 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.019 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.020 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.021 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.098 2 INFO nova.scheduler.client.report [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Deleted allocations for instance 5b2193b9-46b9-44a8-9d1c-3c6a642115b6
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.163 2 DEBUG oslo_concurrency.lockutils [None req-5a167f4a-efd9-47fc-92a3-43da5d789ab2 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "5b2193b9-46b9-44a8-9d1c-3c6a642115b6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:04 compute-0 podman[288753]: 2025-10-11 08:48:04.372733418 +0000 UTC m=+0.093243537 container create e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:48:04 compute-0 podman[288753]: 2025-10-11 08:48:04.33013208 +0000 UTC m=+0.050642229 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:48:04 compute-0 systemd[1]: Started libpod-conmon-e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec.scope.
Oct 11 08:48:04 compute-0 ceph-mon[74313]: pgmap v1268: 321 pgs: 321 active+clean; 134 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Oct 11 08:48:04 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3469220891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:48:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:48:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1697bbf5f67b57c75a7813d378343f8ad0bc5c688674a61d7b6592052ffd17eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:48:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:48:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4092725551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:48:04 compute-0 podman[288753]: 2025-10-11 08:48:04.493542551 +0000 UTC m=+0.214052730 container init e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 08:48:04 compute-0 podman[288753]: 2025-10-11 08:48:04.499043328 +0000 UTC m=+0.219553447 container start e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.500 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:04 compute-0 neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b[288769]: [NOTICE]   (288775) : New worker (288777) forked
Oct 11 08:48:04 compute-0 neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b[288769]: [NOTICE]   (288775) : Loading success.
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.530 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172484.5280967, f6e6ccd5-d393-4fa3-bf88-491311678dd1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.531 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] VM Started (Lifecycle Event)
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.550 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.554 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172484.5282233, f6e6ccd5-d393-4fa3-bf88-491311678dd1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.555 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] VM Paused (Lifecycle Event)
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000694346938692453 of space, bias 1.0, pg target 0.2083040816077359 quantized to 32 (current 32)
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:48:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.575 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.587 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.587 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.588 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.594 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.595 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.617 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.829 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.831 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4313MB free_disk=59.94662857055664GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.831 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.832 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.934 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 66086b61-46ca-4a1b-a9f0-692678bcbf7a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.934 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance f6e6ccd5-d393-4fa3-bf88-491311678dd1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.935 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:48:04 compute-0 nova_compute[260935]: 2025-10-11 08:48:04.935 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.023 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4092725551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:48:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:48:05 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/778709954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.498 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.505 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.526 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.550 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.550 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.762 2 DEBUG nova.compute.manager [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.763 2 DEBUG oslo_concurrency.lockutils [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.763 2 DEBUG oslo_concurrency.lockutils [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.764 2 DEBUG oslo_concurrency.lockutils [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.764 2 DEBUG nova.compute.manager [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Processing event network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.764 2 DEBUG nova.compute.manager [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.765 2 DEBUG oslo_concurrency.lockutils [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.765 2 DEBUG oslo_concurrency.lockutils [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.765 2 DEBUG oslo_concurrency.lockutils [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.766 2 DEBUG nova.compute.manager [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] No waiting events found dispatching network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.766 2 WARNING nova.compute.manager [req-2ac474ef-b192-4170-8b67-3dd44e5036f3 req-6a3036d9-ef9b-4006-9f9e-025f81203add e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received unexpected event network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 for instance with vm_state building and task_state spawning.
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.767 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.776 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172485.7758877, f6e6ccd5-d393-4fa3-bf88-491311678dd1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.776 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] VM Resumed (Lifecycle Event)
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.778 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.783 2 INFO nova.virt.libvirt.driver [-] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Instance spawned successfully.
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.783 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:48:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 134 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 162 op/s
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.796 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.801 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.809 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.810 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.810 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.811 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.811 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.812 2 DEBUG nova.virt.libvirt.driver [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.837 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.873 2 INFO nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Took 9.41 seconds to spawn the instance on the hypervisor.
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.874 2 DEBUG nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.971 2 INFO nova.compute.manager [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Took 10.59 seconds to build instance.
Oct 11 08:48:05 compute-0 nova_compute[260935]: 2025-10-11 08:48:05.991 2 DEBUG oslo_concurrency.lockutils [None req-b1acccde-62a2-4164-bd32-9ee867650c6d 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:06 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/778709954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:48:06 compute-0 ceph-mon[74313]: pgmap v1269: 321 pgs: 321 active+clean; 134 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 162 op/s
Oct 11 08:48:06 compute-0 nova_compute[260935]: 2025-10-11 08:48:06.527 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:48:06 compute-0 nova_compute[260935]: 2025-10-11 08:48:06.528 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:48:06 compute-0 nova_compute[260935]: 2025-10-11 08:48:06.552 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 08:48:06 compute-0 nova_compute[260935]: 2025-10-11 08:48:06.552 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:48:06 compute-0 nova_compute[260935]: 2025-10-11 08:48:06.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:07 compute-0 nova_compute[260935]: 2025-10-11 08:48:07.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 3.6 MiB/s wr, 236 op/s
Oct 11 08:48:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:48:08 compute-0 ceph-mon[74313]: pgmap v1270: 321 pgs: 321 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 3.6 MiB/s wr, 236 op/s
Oct 11 08:48:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 170 op/s
Oct 11 08:48:10 compute-0 nova_compute[260935]: 2025-10-11 08:48:10.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:10 compute-0 ceph-mon[74313]: pgmap v1271: 321 pgs: 321 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 170 op/s
Oct 11 08:48:11 compute-0 nova_compute[260935]: 2025-10-11 08:48:11.102 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 08:48:11 compute-0 nova_compute[260935]: 2025-10-11 08:48:11.594 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:11 compute-0 nova_compute[260935]: 2025-10-11 08:48:11.596 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:11 compute-0 nova_compute[260935]: 2025-10-11 08:48:11.627 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:48:11 compute-0 nova_compute[260935]: 2025-10-11 08:48:11.736 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:11 compute-0 nova_compute[260935]: 2025-10-11 08:48:11.737 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:11 compute-0 nova_compute[260935]: 2025-10-11 08:48:11.745 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:48:11 compute-0 nova_compute[260935]: 2025-10-11 08:48:11.745 2 INFO nova.compute.claims [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:48:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 170 op/s
Oct 11 08:48:11 compute-0 nova_compute[260935]: 2025-10-11 08:48:11.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:11 compute-0 nova_compute[260935]: 2025-10-11 08:48:11.910 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:48:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4238313076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.425 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.431 2 DEBUG nova.compute.provider_tree [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.450 2 DEBUG nova.scheduler.client.report [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.476 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.477 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.535 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.536 2 DEBUG nova.network.neutron [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.562 2 INFO nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.584 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.696 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.698 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.699 2 INFO nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Creating image(s)
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.729 2 DEBUG nova.storage.rbd_utils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.760 2 DEBUG nova.storage.rbd_utils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.792 2 DEBUG nova.storage.rbd_utils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.796 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.861 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.863 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.863 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.864 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:12 compute-0 ceph-mon[74313]: pgmap v1272: 321 pgs: 321 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 170 op/s
Oct 11 08:48:12 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4238313076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.886 2 DEBUG nova.storage.rbd_utils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:12 compute-0 nova_compute[260935]: 2025-10-11 08:48:12.889 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:13 compute-0 nova_compute[260935]: 2025-10-11 08:48:13.165 2 DEBUG nova.policy [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:48:13 compute-0 nova_compute[260935]: 2025-10-11 08:48:13.192 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:48:13 compute-0 nova_compute[260935]: 2025-10-11 08:48:13.276 2 DEBUG nova.storage.rbd_utils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] resizing rbd image 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:48:13 compute-0 nova_compute[260935]: 2025-10-11 08:48:13.378 2 DEBUG nova.objects.instance [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'migration_context' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:13 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct 11 08:48:13 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000012.scope: Consumed 12.737s CPU time.
Oct 11 08:48:13 compute-0 systemd-machined[215705]: Machine qemu-19-instance-00000012 terminated.
Oct 11 08:48:13 compute-0 nova_compute[260935]: 2025-10-11 08:48:13.405 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:48:13 compute-0 nova_compute[260935]: 2025-10-11 08:48:13.405 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Ensure instance console log exists: /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:48:13 compute-0 nova_compute[260935]: 2025-10-11 08:48:13.406 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:13 compute-0 nova_compute[260935]: 2025-10-11 08:48:13.406 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:13 compute-0 nova_compute[260935]: 2025-10-11 08:48:13.407 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 234 op/s
Oct 11 08:48:13 compute-0 nova_compute[260935]: 2025-10-11 08:48:13.957 2 DEBUG nova.compute.manager [req-7118c14a-f554-4b4c-935c-9bbf62aeec5a req-2a496d66-3650-4afc-b31c-1bdfce1d22c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-changed-128b1135-2e8f-4e78-8e09-e16b082e9225 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:13 compute-0 nova_compute[260935]: 2025-10-11 08:48:13.957 2 DEBUG nova.compute.manager [req-7118c14a-f554-4b4c-935c-9bbf62aeec5a req-2a496d66-3650-4afc-b31c-1bdfce1d22c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Refreshing instance network info cache due to event network-changed-128b1135-2e8f-4e78-8e09-e16b082e9225. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:48:13 compute-0 nova_compute[260935]: 2025-10-11 08:48:13.958 2 DEBUG oslo_concurrency.lockutils [req-7118c14a-f554-4b4c-935c-9bbf62aeec5a req-2a496d66-3650-4afc-b31c-1bdfce1d22c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:48:13 compute-0 nova_compute[260935]: 2025-10-11 08:48:13.958 2 DEBUG oslo_concurrency.lockutils [req-7118c14a-f554-4b4c-935c-9bbf62aeec5a req-2a496d66-3650-4afc-b31c-1bdfce1d22c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:48:13 compute-0 nova_compute[260935]: 2025-10-11 08:48:13.959 2 DEBUG nova.network.neutron [req-7118c14a-f554-4b4c-935c-9bbf62aeec5a req-2a496d66-3650-4afc-b31c-1bdfce1d22c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Refreshing network info cache for port 128b1135-2e8f-4e78-8e09-e16b082e9225 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:48:14 compute-0 nova_compute[260935]: 2025-10-11 08:48:14.122 2 INFO nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance shutdown successfully after 13 seconds.
Oct 11 08:48:14 compute-0 nova_compute[260935]: 2025-10-11 08:48:14.131 2 INFO nova.virt.libvirt.driver [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance destroyed successfully.
Oct 11 08:48:14 compute-0 nova_compute[260935]: 2025-10-11 08:48:14.137 2 INFO nova.virt.libvirt.driver [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance destroyed successfully.
Oct 11 08:48:14 compute-0 ovn_controller[152945]: 2025-10-11T08:48:14Z|00067|binding|INFO|Releasing lport 728422f2-62be-41c7-90af-4ff751731213 from this chassis (sb_readonly=0)
Oct 11 08:48:14 compute-0 nova_compute[260935]: 2025-10-11 08:48:14.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:14 compute-0 nova_compute[260935]: 2025-10-11 08:48:14.569 2 INFO nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deleting instance files /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a_del
Oct 11 08:48:14 compute-0 nova_compute[260935]: 2025-10-11 08:48:14.571 2 INFO nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deletion of /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a_del complete
Oct 11 08:48:14 compute-0 nova_compute[260935]: 2025-10-11 08:48:14.755 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:48:14 compute-0 nova_compute[260935]: 2025-10-11 08:48:14.756 2 INFO nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating image(s)
Oct 11 08:48:14 compute-0 nova_compute[260935]: 2025-10-11 08:48:14.797 2 DEBUG nova.storage.rbd_utils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:14 compute-0 nova_compute[260935]: 2025-10-11 08:48:14.835 2 DEBUG nova.storage.rbd_utils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:14 compute-0 nova_compute[260935]: 2025-10-11 08:48:14.868 2 DEBUG nova.storage.rbd_utils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:14 compute-0 nova_compute[260935]: 2025-10-11 08:48:14.873 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:14 compute-0 ceph-mon[74313]: pgmap v1273: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 234 op/s
Oct 11 08:48:14 compute-0 nova_compute[260935]: 2025-10-11 08:48:14.909 2 DEBUG nova.network.neutron [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Successfully created port: db31f1b4-b009-40dc-a028-b72fe0b1eb45 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:48:14 compute-0 nova_compute[260935]: 2025-10-11 08:48:14.967 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:14 compute-0 nova_compute[260935]: 2025-10-11 08:48:14.968 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:14 compute-0 nova_compute[260935]: 2025-10-11 08:48:14.969 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:14 compute-0 nova_compute[260935]: 2025-10-11 08:48:14.969 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.002 2 DEBUG nova.storage.rbd_utils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.007 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:15.180 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:15.182 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:15.183 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.338 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.428 2 DEBUG nova.storage.rbd_utils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] resizing rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.538 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.539 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Ensure instance console log exists: /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.540 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.540 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.540 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.542 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.550 2 WARNING nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.602 2 DEBUG nova.virt.libvirt.host [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.603 2 DEBUG nova.virt.libvirt.host [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.608 2 DEBUG nova.virt.libvirt.host [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.608 2 DEBUG nova.virt.libvirt.host [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.609 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.609 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.610 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.610 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.610 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.610 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.610 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.611 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.611 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.611 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.611 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.612 2 DEBUG nova.virt.hardware [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.612 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.646 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.801 2 DEBUG nova.network.neutron [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Successfully updated port: db31f1b4-b009-40dc-a028-b72fe0b1eb45 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.920 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.921 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.922 2 DEBUG nova.network.neutron [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.978 2 DEBUG nova.network.neutron [req-7118c14a-f554-4b4c-935c-9bbf62aeec5a req-2a496d66-3650-4afc-b31c-1bdfce1d22c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Updated VIF entry in instance network info cache for port 128b1135-2e8f-4e78-8e09-e16b082e9225. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:48:15 compute-0 nova_compute[260935]: 2025-10-11 08:48:15.980 2 DEBUG nova.network.neutron [req-7118c14a-f554-4b4c-935c-9bbf62aeec5a req-2a496d66-3650-4afc-b31c-1bdfce1d22c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Updating instance_info_cache with network_info: [{"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.000 2 DEBUG oslo_concurrency.lockutils [req-7118c14a-f554-4b4c-935c-9bbf62aeec5a req-2a496d66-3650-4afc-b31c-1bdfce1d22c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:48:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:48:16 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/245679798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.146 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.179 2 DEBUG nova.storage.rbd_utils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.186 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.230 2 DEBUG nova.network.neutron [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.244 2 DEBUG nova.compute.manager [req-8cf3764b-264a-4309-b17c-988b4ac89461 req-be3cc025-de82-4d65-a37d-d3e6511c0bb6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-changed-db31f1b4-b009-40dc-a028-b72fe0b1eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.245 2 DEBUG nova.compute.manager [req-8cf3764b-264a-4309-b17c-988b4ac89461 req-be3cc025-de82-4d65-a37d-d3e6511c0bb6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing instance network info cache due to event network-changed-db31f1b4-b009-40dc-a028-b72fe0b1eb45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.245 2 DEBUG oslo_concurrency.lockutils [req-8cf3764b-264a-4309-b17c-988b4ac89461 req-be3cc025-de82-4d65-a37d-d3e6511c0bb6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.462 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172481.4613087, 5b2193b9-46b9-44a8-9d1c-3c6a642115b6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.463 2 INFO nova.compute.manager [-] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] VM Stopped (Lifecycle Event)
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.490 2 DEBUG nova.compute.manager [None req-28cbac28-a941-4502-836a-f22c1c7ae50b - - - - - -] [instance: 5b2193b9-46b9-44a8-9d1c-3c6a642115b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:48:16 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3078272704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.767 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.772 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:48:16 compute-0 nova_compute[260935]:   <uuid>66086b61-46ca-4a1b-a9f0-692678bcbf7a</uuid>
Oct 11 08:48:16 compute-0 nova_compute[260935]:   <name>instance-00000012</name>
Oct 11 08:48:16 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:48:16 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:48:16 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersAdmin275Test-server-132893026</nova:name>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:48:15</nova:creationTime>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:48:16 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:48:16 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:48:16 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:48:16 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:48:16 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:48:16 compute-0 nova_compute[260935]:         <nova:user uuid="7cfe9716527d49f18102a38c7480e208">tempest-ServersAdmin275Test-1935053767-project-member</nova:user>
Oct 11 08:48:16 compute-0 nova_compute[260935]:         <nova:project uuid="7fdd898b69404913a643940b3869140b">tempest-ServersAdmin275Test-1935053767</nova:project>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:48:16 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:48:16 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <system>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <entry name="serial">66086b61-46ca-4a1b-a9f0-692678bcbf7a</entry>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <entry name="uuid">66086b61-46ca-4a1b-a9f0-692678bcbf7a</entry>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     </system>
Oct 11 08:48:16 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:48:16 compute-0 nova_compute[260935]:   <os>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:   </os>
Oct 11 08:48:16 compute-0 nova_compute[260935]:   <features>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:   </features>
Oct 11 08:48:16 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:48:16 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:48:16 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk">
Oct 11 08:48:16 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       </source>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:48:16 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config">
Oct 11 08:48:16 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       </source>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:48:16 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/console.log" append="off"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <video>
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     </video>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:48:16 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:48:16 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:48:16 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:48:16 compute-0 nova_compute[260935]: </domain>
Oct 11 08:48:16 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.849 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.850 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.851 2 INFO nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Using config drive
Oct 11 08:48:16 compute-0 sudo[289248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.881 2 DEBUG nova.storage.rbd_utils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:16 compute-0 sudo[289248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:16 compute-0 sudo[289248]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:16 compute-0 ceph-mon[74313]: pgmap v1274: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct 11 08:48:16 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/245679798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:48:16 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3078272704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.926 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:16 compute-0 sudo[289291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:48:16 compute-0 nova_compute[260935]: 2025-10-11 08:48:16.961 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lazy-loading 'keypairs' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:16 compute-0 sudo[289291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:16 compute-0 sudo[289291]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:17 compute-0 sudo[289316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:48:17 compute-0 sudo[289316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:17 compute-0 sudo[289316]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:17 compute-0 sudo[289341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:48:17 compute-0 sudo[289341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.180 2 INFO nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Creating config drive at /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.191 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk_ex2f3x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.350 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk_ex2f3x" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.389 2 DEBUG nova.storage.rbd_utils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] rbd image 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.397 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.598 2 DEBUG nova.network.neutron [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.616 2 DEBUG oslo_concurrency.processutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config 66086b61-46ca-4a1b-a9f0-692678bcbf7a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.616 2 INFO nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deleting local config drive /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a/disk.config because it was imported into RBD.
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.623 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.624 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Instance network_info: |[{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.631 2 DEBUG oslo_concurrency.lockutils [req-8cf3764b-264a-4309-b17c-988b4ac89461 req-be3cc025-de82-4d65-a37d-d3e6511c0bb6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.633 2 DEBUG nova.network.neutron [req-8cf3764b-264a-4309-b17c-988b4ac89461 req-be3cc025-de82-4d65-a37d-d3e6511c0bb6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing network info cache for port db31f1b4-b009-40dc-a028-b72fe0b1eb45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.640 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Start _get_guest_xml network_info=[{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.654 2 WARNING nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.695 2 DEBUG nova.virt.libvirt.host [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.697 2 DEBUG nova.virt.libvirt.host [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.705 2 DEBUG nova.virt.libvirt.host [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.706 2 DEBUG nova.virt.libvirt.host [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.707 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.707 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.708 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.709 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.710 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.711 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.711 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.712 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.713 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.713 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.714 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.714 2 DEBUG nova.virt.hardware [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:48:17 compute-0 nova_compute[260935]: 2025-10-11 08:48:17.720 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:17 compute-0 systemd-machined[215705]: New machine qemu-21-instance-00000012.
Oct 11 08:48:17 compute-0 sudo[289341]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:17 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-00000012.
Oct 11 08:48:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:48:17 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:48:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:48:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:48:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:48:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 206 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.8 MiB/s wr, 271 op/s
Oct 11 08:48:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:48:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 25572a8e-09a8-4b46-a0e0-cbc8b0a576f7 does not exist
Oct 11 08:48:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 2a5bbb53-ee5b-4096-a14d-a1e5d92de080 does not exist
Oct 11 08:48:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev bfb84436-79f9-4ace-aac8-5a5aa55729f9 does not exist
Oct 11 08:48:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:48:17 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:48:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:48:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:48:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:48:17 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:48:17 compute-0 ovn_controller[152945]: 2025-10-11T08:48:17Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:95:81 10.100.0.4
Oct 11 08:48:17 compute-0 ovn_controller[152945]: 2025-10-11T08:48:17Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:95:81 10.100.0.4
Oct 11 08:48:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:48:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:48:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:48:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:48:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:48:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:48:17 compute-0 sudo[289453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:48:17 compute-0 sudo[289453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:17 compute-0 sudo[289453]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:17 compute-0 sudo[289497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:48:17 compute-0 sudo[289497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:17 compute-0 sudo[289497]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:18 compute-0 sudo[289522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:48:18 compute-0 sudo[289522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:18 compute-0 sudo[289522]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:18 compute-0 sudo[289547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:48:18 compute-0 sudo[289547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:48:18 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/374156802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:48:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.215 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.249 2 DEBUG nova.storage.rbd_utils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.255 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:18 compute-0 podman[289651]: 2025-10-11 08:48:18.617984873 +0000 UTC m=+0.074781669 container create 8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:48:18 compute-0 systemd[1]: Started libpod-conmon-8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc.scope.
Oct 11 08:48:18 compute-0 podman[289651]: 2025-10-11 08:48:18.586214735 +0000 UTC m=+0.043011591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:48:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:48:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:48:18 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1458321110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:48:18 compute-0 podman[289651]: 2025-10-11 08:48:18.730118968 +0000 UTC m=+0.186915824 container init 8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.739 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.745 2 DEBUG nova.virt.libvirt.vif [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:48:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.746 2 DEBUG nova.network.os_vif_util [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:48:18 compute-0 podman[289651]: 2025-10-11 08:48:18.746408554 +0000 UTC m=+0.203205360 container start 8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.747 2 DEBUG nova.network.os_vif_util [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:65:f3,bridge_name='br-int',has_traffic_filtering=True,id=db31f1b4-b009-40dc-a028-b72fe0b1eb45,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb31f1b4-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:48:18 compute-0 podman[289651]: 2025-10-11 08:48:18.750451469 +0000 UTC m=+0.207248345 container attach 8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:48:18 compute-0 busy_cerf[289686]: 167 167
Oct 11 08:48:18 compute-0 systemd[1]: libpod-8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc.scope: Deactivated successfully.
Oct 11 08:48:18 compute-0 podman[289651]: 2025-10-11 08:48:18.753444695 +0000 UTC m=+0.210241501 container died 8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.750 2 DEBUG nova.objects.instance [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_devices' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.778 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:48:18 compute-0 nova_compute[260935]:   <uuid>057de6d9-3f9e-4b23-9019-f62ba6b453e7</uuid>
Oct 11 08:48:18 compute-0 nova_compute[260935]:   <name>instance-00000014</name>
Oct 11 08:48:18 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:48:18 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:48:18 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:48:17</nova:creationTime>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:48:18 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:48:18 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:48:18 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:48:18 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:48:18 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:48:18 compute-0 nova_compute[260935]:         <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:48:18 compute-0 nova_compute[260935]:         <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:48:18 compute-0 nova_compute[260935]:         <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 08:48:18 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:48:18 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:48:18 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <system>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <entry name="serial">057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <entry name="uuid">057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     </system>
Oct 11 08:48:18 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:48:18 compute-0 nova_compute[260935]:   <os>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:   </os>
Oct 11 08:48:18 compute-0 nova_compute[260935]:   <features>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:   </features>
Oct 11 08:48:18 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:48:18 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:48:18 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk">
Oct 11 08:48:18 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       </source>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:48:18 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config">
Oct 11 08:48:18 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       </source>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:48:18 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:97:65:f3"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <target dev="tapdb31f1b4-b0"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log" append="off"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <video>
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     </video>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:48:18 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:48:18 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:48:18 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:48:18 compute-0 nova_compute[260935]: </domain>
Oct 11 08:48:18 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.781 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Preparing to wait for external event network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.782 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.782 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.782 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.783 2 DEBUG nova.virt.libvirt.vif [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:48:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.784 2 DEBUG nova.network.os_vif_util [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.785 2 DEBUG nova.network.os_vif_util [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:65:f3,bridge_name='br-int',has_traffic_filtering=True,id=db31f1b4-b009-40dc-a028-b72fe0b1eb45,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb31f1b4-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.785 2 DEBUG os_vif [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:65:f3,bridge_name='br-int',has_traffic_filtering=True,id=db31f1b4-b009-40dc-a028-b72fe0b1eb45,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb31f1b4-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.788 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:48:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-22c837d62ae85d52e23cc2a4e38ae48c4ec09aca6d56c5dc27694159d24c0833-merged.mount: Deactivated successfully.
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.795 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb31f1b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.795 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdb31f1b4-b0, col_values=(('external_ids', {'iface-id': 'db31f1b4-b009-40dc-a028-b72fe0b1eb45', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:65:f3', 'vm-uuid': '057de6d9-3f9e-4b23-9019-f62ba6b453e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:18 compute-0 NetworkManager[44960]: <info>  [1760172498.7994] manager: (tapdb31f1b4-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:48:18 compute-0 podman[289651]: 2025-10-11 08:48:18.806751859 +0000 UTC m=+0.263548635 container remove 8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.809 2 INFO os_vif [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:65:f3,bridge_name='br-int',has_traffic_filtering=True,id=db31f1b4-b009-40dc-a028-b72fe0b1eb45,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb31f1b4-b0')
Oct 11 08:48:18 compute-0 systemd[1]: libpod-conmon-8d015aaf773e65057cb8452300739b368ebed11fe5c85bf82bbbd87d94bc60fc.scope: Deactivated successfully.
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.888 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.888 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.889 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:97:65:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.889 2 INFO nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Using config drive
Oct 11 08:48:18 compute-0 ceph-mon[74313]: pgmap v1275: 321 pgs: 321 active+clean; 206 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.8 MiB/s wr, 271 op/s
Oct 11 08:48:18 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/374156802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:48:18 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1458321110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.914 2 DEBUG nova.storage.rbd_utils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.949 2 DEBUG nova.network.neutron [req-8cf3764b-264a-4309-b17c-988b4ac89461 req-be3cc025-de82-4d65-a37d-d3e6511c0bb6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updated VIF entry in instance network info cache for port db31f1b4-b009-40dc-a028-b72fe0b1eb45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.949 2 DEBUG nova.network.neutron [req-8cf3764b-264a-4309-b17c-988b4ac89461 req-be3cc025-de82-4d65-a37d-d3e6511c0bb6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:18 compute-0 nova_compute[260935]: 2025-10-11 08:48:18.966 2 DEBUG oslo_concurrency.lockutils [req-8cf3764b-264a-4309-b17c-988b4ac89461 req-be3cc025-de82-4d65-a37d-d3e6511c0bb6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:48:19 compute-0 podman[289757]: 2025-10-11 08:48:19.052443502 +0000 UTC m=+0.077429845 container create 99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:48:19 compute-0 podman[289757]: 2025-10-11 08:48:19.020712595 +0000 UTC m=+0.045698978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:48:19 compute-0 systemd[1]: Started libpod-conmon-99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20.scope.
Oct 11 08:48:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b72c77a479eacbcda7e16bc67a983ebe8cf44b78b54ab4c604a29b09f2b570/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b72c77a479eacbcda7e16bc67a983ebe8cf44b78b54ab4c604a29b09f2b570/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b72c77a479eacbcda7e16bc67a983ebe8cf44b78b54ab4c604a29b09f2b570/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b72c77a479eacbcda7e16bc67a983ebe8cf44b78b54ab4c604a29b09f2b570/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b72c77a479eacbcda7e16bc67a983ebe8cf44b78b54ab4c604a29b09f2b570/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:48:19 compute-0 podman[289757]: 2025-10-11 08:48:19.177387123 +0000 UTC m=+0.202373516 container init 99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct 11 08:48:19 compute-0 podman[289757]: 2025-10-11 08:48:19.199047912 +0000 UTC m=+0.224034245 container start 99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:48:19 compute-0 podman[289757]: 2025-10-11 08:48:19.203713326 +0000 UTC m=+0.228699659 container attach 99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.328 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 66086b61-46ca-4a1b-a9f0-692678bcbf7a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.331 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172499.3277628, 66086b61-46ca-4a1b-a9f0-692678bcbf7a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.331 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] VM Resumed (Lifecycle Event)
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.334 2 DEBUG nova.compute.manager [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.335 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.340 2 INFO nova.virt.libvirt.driver [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance spawned successfully.
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.340 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.345 2 INFO nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Creating config drive at /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/disk.config
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.353 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy4fy_tou execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.390 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.402 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.417 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.418 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.419 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.419 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.420 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.420 2 DEBUG nova.virt.libvirt.driver [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.444 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.444 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172499.330621, 66086b61-46ca-4a1b-a9f0-692678bcbf7a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.445 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] VM Started (Lifecycle Event)
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.497 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.502 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.506 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy4fy_tou" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.548 2 DEBUG nova.storage.rbd_utils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.553 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/disk.config 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.597 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.600 2 DEBUG nova.compute.manager [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.678 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.679 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.679 2 DEBUG nova.objects.instance [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.753 2 DEBUG oslo_concurrency.lockutils [None req-0253c722-4ccd-4d8a-83e3-a929e61a488a 5c92a33b381a4eb19cda7d8ab3531d63 f5d998faafab4ef7b40f18acfe17d238 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.759 2 DEBUG oslo_concurrency.processutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/disk.config 057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.760 2 INFO nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Deleting local config drive /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/disk.config because it was imported into RBD.
Oct 11 08:48:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 206 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 666 KiB/s rd, 7.7 MiB/s wr, 197 op/s
Oct 11 08:48:19 compute-0 kernel: tapdb31f1b4-b0: entered promiscuous mode
Oct 11 08:48:19 compute-0 NetworkManager[44960]: <info>  [1760172499.8774] manager: (tapdb31f1b4-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Oct 11 08:48:19 compute-0 systemd-udevd[289717]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:48:19 compute-0 ovn_controller[152945]: 2025-10-11T08:48:19Z|00068|binding|INFO|Claiming lport db31f1b4-b009-40dc-a028-b72fe0b1eb45 for this chassis.
Oct 11 08:48:19 compute-0 ovn_controller[152945]: 2025-10-11T08:48:19Z|00069|binding|INFO|db31f1b4-b009-40dc-a028-b72fe0b1eb45: Claiming fa:16:3e:97:65:f3 10.100.0.8
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.891 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:65:f3 10.100.0.8'], port_security=['fa:16:3e:97:65:f3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '057de6d9-3f9e-4b23-9019-f62ba6b453e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6953e178-7635-4f97-a5ef-5126f17f4f48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=db31f1b4-b009-40dc-a028-b72fe0b1eb45) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:48:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.892 162815 INFO neutron.agent.ovn.metadata.agent [-] Port db31f1b4-b009-40dc-a028-b72fe0b1eb45 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis
Oct 11 08:48:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.894 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:48:19 compute-0 NetworkManager[44960]: <info>  [1760172499.9024] device (tapdb31f1b4-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:48:19 compute-0 NetworkManager[44960]: <info>  [1760172499.9039] device (tapdb31f1b4-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:48:19 compute-0 systemd-machined[215705]: New machine qemu-22-instance-00000014.
Oct 11 08:48:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.908 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a0c67d40-bbe0-485a-b929-f453bc37a5a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.911 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfff13396-b1 in ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:48:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.916 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfff13396-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:48:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.917 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[74fd5bb4-ae40-4fc3-aa2d-3575ce34cfd7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.918 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee76d12-6b01-4150-b54e-e14a2ff2019b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:19 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000014.
Oct 11 08:48:19 compute-0 ovn_controller[152945]: 2025-10-11T08:48:19Z|00070|binding|INFO|Setting lport db31f1b4-b009-40dc-a028-b72fe0b1eb45 ovn-installed in OVS
Oct 11 08:48:19 compute-0 ovn_controller[152945]: 2025-10-11T08:48:19Z|00071|binding|INFO|Setting lport db31f1b4-b009-40dc-a028-b72fe0b1eb45 up in Southbound
Oct 11 08:48:19 compute-0 nova_compute[260935]: 2025-10-11 08:48:19.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.933 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[daab86c3-a30d-4206-8233-b0a370690985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:19.961 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b30689-f64d-46f8-89ef-93ef1103e6b6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.000 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f687f7-78cc-4a62-8a11-68c8171401e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:20 compute-0 NetworkManager[44960]: <info>  [1760172500.0121] manager: (tapfff13396-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.013 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[61848cee-1f2b-4b74-a11c-5ba49435750b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:20 compute-0 systemd-udevd[289833]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.053 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c559582c-ba4c-4732-b62d-ba325eb3e1e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.058 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[477c2039-e2bb-40fa-a1e7-2c0550131614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:20 compute-0 NetworkManager[44960]: <info>  [1760172500.0961] device (tapfff13396-b0): carrier: link connected
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.105 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c83a25a7-0433-4939-a958-231f5f73b875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.125 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[095ac779-794b-42e0-925e-6dd16481b2b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434479, 'reachable_time': 16585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289874, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.147 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d9519239-5e3a-4807-bae6-43c3cd330239]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:a42d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434479, 'tstamp': 434479}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289875, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.174 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cae60820-d255-42de-a48c-0c4e14a8ac68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434479, 'reachable_time': 16585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289878, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.214 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a85e86e-3f49-46f3-a0da-8eda06cd3eb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.329 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b9a150-55e8-4874-8acd-6e76cb69a51c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.332 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.333 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.334 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:20 compute-0 NetworkManager[44960]: <info>  [1760172500.3373] manager: (tapfff13396-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Oct 11 08:48:20 compute-0 kernel: tapfff13396-b0: entered promiscuous mode
Oct 11 08:48:20 compute-0 nova_compute[260935]: 2025-10-11 08:48:20.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.344 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:20 compute-0 nova_compute[260935]: 2025-10-11 08:48:20.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:20 compute-0 ovn_controller[152945]: 2025-10-11T08:48:20Z|00072|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 08:48:20 compute-0 nova_compute[260935]: 2025-10-11 08:48:20.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.384 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.386 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[793e0174-7ed9-44a6-9f3c-b396c4c87ceb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.387 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:48:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:20.388 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'env', 'PROCESS_TAG=haproxy-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fff13396-b787-4c6e-9112-a1c2ef57b26d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:48:20 compute-0 youthful_cori[289773]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:48:20 compute-0 youthful_cori[289773]: --> relative data size: 1.0
Oct 11 08:48:20 compute-0 youthful_cori[289773]: --> All data devices are unavailable
Oct 11 08:48:20 compute-0 systemd[1]: libpod-99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20.scope: Deactivated successfully.
Oct 11 08:48:20 compute-0 systemd[1]: libpod-99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20.scope: Consumed 1.151s CPU time.
Oct 11 08:48:20 compute-0 podman[289757]: 2025-10-11 08:48:20.530414879 +0000 UTC m=+1.555401192 container died 99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:48:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-76b72c77a479eacbcda7e16bc67a983ebe8cf44b78b54ab4c604a29b09f2b570-merged.mount: Deactivated successfully.
Oct 11 08:48:20 compute-0 podman[289757]: 2025-10-11 08:48:20.590805796 +0000 UTC m=+1.615792099 container remove 99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 08:48:20 compute-0 systemd[1]: libpod-conmon-99674fca55c1a8d66cea920cd141a7d21bcc405a9b7257fc37767ee80f11da20.scope: Deactivated successfully.
Oct 11 08:48:20 compute-0 sudo[289547]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:20 compute-0 sudo[289956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:48:20 compute-0 sudo[289956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:20 compute-0 sudo[289956]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:20 compute-0 nova_compute[260935]: 2025-10-11 08:48:20.736 2 DEBUG nova.compute.manager [req-e53e2c4b-7751-473e-a2d9-18052b26c63c req-249566a2-a242-496e-a81b-9143952a7b75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:20 compute-0 nova_compute[260935]: 2025-10-11 08:48:20.737 2 DEBUG oslo_concurrency.lockutils [req-e53e2c4b-7751-473e-a2d9-18052b26c63c req-249566a2-a242-496e-a81b-9143952a7b75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:20 compute-0 nova_compute[260935]: 2025-10-11 08:48:20.737 2 DEBUG oslo_concurrency.lockutils [req-e53e2c4b-7751-473e-a2d9-18052b26c63c req-249566a2-a242-496e-a81b-9143952a7b75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:20 compute-0 nova_compute[260935]: 2025-10-11 08:48:20.737 2 DEBUG oslo_concurrency.lockutils [req-e53e2c4b-7751-473e-a2d9-18052b26c63c req-249566a2-a242-496e-a81b-9143952a7b75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:20 compute-0 nova_compute[260935]: 2025-10-11 08:48:20.737 2 DEBUG nova.compute.manager [req-e53e2c4b-7751-473e-a2d9-18052b26c63c req-249566a2-a242-496e-a81b-9143952a7b75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Processing event network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:48:20 compute-0 sudo[289995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:48:20 compute-0 sudo[289995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:20 compute-0 sudo[289995]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:20 compute-0 podman[290025]: 2025-10-11 08:48:20.795371732 +0000 UTC m=+0.055106036 container create 6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 08:48:20 compute-0 systemd[1]: Started libpod-conmon-6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a.scope.
Oct 11 08:48:20 compute-0 sudo[290037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:48:20 compute-0 sudo[290037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:20 compute-0 sudo[290037]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:20 compute-0 podman[290025]: 2025-10-11 08:48:20.762577735 +0000 UTC m=+0.022312059 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:48:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bdab5ef0ac4476b9090df9e6eb05fca169470796c58d4abb5cf34784f91353f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:48:20 compute-0 podman[290025]: 2025-10-11 08:48:20.891438458 +0000 UTC m=+0.151172762 container init 6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:48:20 compute-0 podman[290025]: 2025-10-11 08:48:20.897911843 +0000 UTC m=+0.157646147 container start 6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 11 08:48:20 compute-0 podman[290068]: 2025-10-11 08:48:20.902797073 +0000 UTC m=+0.056846976 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 11 08:48:20 compute-0 sudo[290076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:48:20 compute-0 sudo[290076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:20 compute-0 ceph-mon[74313]: pgmap v1276: 321 pgs: 321 active+clean; 206 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 666 KiB/s rd, 7.7 MiB/s wr, 197 op/s
Oct 11 08:48:20 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[290066]: [NOTICE]   (290111) : New worker (290115) forked
Oct 11 08:48:20 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[290066]: [NOTICE]   (290111) : Loading success.
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.028 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172501.0277345, 057de6d9-3f9e-4b23-9019-f62ba6b453e7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.029 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] VM Started (Lifecycle Event)
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.032 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.054 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.058 2 INFO nova.virt.libvirt.driver [-] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Instance spawned successfully.
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.058 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.097 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.102 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.103 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.104 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.104 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.104 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.106 2 INFO nova.compute.manager [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Terminating instance
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.108 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "refresh_cache-66086b61-46ca-4a1b-a9f0-692678bcbf7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.108 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquired lock "refresh_cache-66086b61-46ca-4a1b-a9f0-692678bcbf7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.109 2 DEBUG nova.network.neutron [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.117 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.120 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.121 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.121 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.122 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.123 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.123 2 DEBUG nova.virt.libvirt.driver [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.180 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.181 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172501.0315447, 057de6d9-3f9e-4b23-9019-f62ba6b453e7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.181 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] VM Paused (Lifecycle Event)
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.202 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.210 2 INFO nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Took 8.51 seconds to spawn the instance on the hypervisor.
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.210 2 DEBUG nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.215 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172501.0392647, 057de6d9-3f9e-4b23-9019-f62ba6b453e7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.215 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] VM Resumed (Lifecycle Event)
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.242 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.272 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.289 2 INFO nova.compute.manager [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Took 9.60 seconds to build instance.
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.293 2 DEBUG nova.network.neutron [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.317 2 DEBUG oslo_concurrency.lockutils [None req-137c26ae-7b64-42c1-8e0f-3f35576f3c95 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:21 compute-0 podman[290165]: 2025-10-11 08:48:21.454171404 +0000 UTC m=+0.096937522 container create 39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gagarin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:48:21 compute-0 systemd[1]: Started libpod-conmon-39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3.scope.
Oct 11 08:48:21 compute-0 podman[290165]: 2025-10-11 08:48:21.418062622 +0000 UTC m=+0.060828820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.511 2 DEBUG nova.network.neutron [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.529 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Releasing lock "refresh_cache-66086b61-46ca-4a1b-a9f0-692678bcbf7a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.530 2 DEBUG nova.compute.manager [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:48:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:48:21 compute-0 podman[290165]: 2025-10-11 08:48:21.557408725 +0000 UTC m=+0.200174913 container init 39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:48:21 compute-0 podman[290165]: 2025-10-11 08:48:21.568459361 +0000 UTC m=+0.211225469 container start 39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 08:48:21 compute-0 podman[290165]: 2025-10-11 08:48:21.572328211 +0000 UTC m=+0.215094409 container attach 39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:48:21 compute-0 crazy_gagarin[290179]: 167 167
Oct 11 08:48:21 compute-0 systemd[1]: libpod-39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3.scope: Deactivated successfully.
Oct 11 08:48:21 compute-0 conmon[290179]: conmon 39759e930e785ce5ba32 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3.scope/container/memory.events
Oct 11 08:48:21 compute-0 podman[290165]: 2025-10-11 08:48:21.583203842 +0000 UTC m=+0.225969950 container died 39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-33d67d755aa81b2755f8d66a3a48bbad131c244da0720a4006c77618fafbc719-merged.mount: Deactivated successfully.
Oct 11 08:48:21 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct 11 08:48:21 compute-0 podman[290165]: 2025-10-11 08:48:21.622958929 +0000 UTC m=+0.265725037 container remove 39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gagarin, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:48:21 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000012.scope: Consumed 3.655s CPU time.
Oct 11 08:48:21 compute-0 systemd-machined[215705]: Machine qemu-21-instance-00000012 terminated.
Oct 11 08:48:21 compute-0 systemd[1]: libpod-conmon-39759e930e785ce5ba32fa64bb2716da41fbb7c38b3cad3b06a9ba7b998ff9c3.scope: Deactivated successfully.
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.762 2 INFO nova.virt.libvirt.driver [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance destroyed successfully.
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.762 2 DEBUG nova.objects.instance [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lazy-loading 'resources' on Instance uuid 66086b61-46ca-4a1b-a9f0-692678bcbf7a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 206 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 666 KiB/s rd, 7.7 MiB/s wr, 197 op/s
Oct 11 08:48:21 compute-0 nova_compute[260935]: 2025-10-11 08:48:21.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:21 compute-0 podman[290219]: 2025-10-11 08:48:21.888276333 +0000 UTC m=+0.063857836 container create fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 08:48:21 compute-0 systemd[1]: Started libpod-conmon-fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5.scope.
Oct 11 08:48:21 compute-0 podman[290219]: 2025-10-11 08:48:21.865172462 +0000 UTC m=+0.040753975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:48:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6d5af214e77a8d3acb931edeb187cf237a6ea3ab8bd86d169afecc1b1f5568/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6d5af214e77a8d3acb931edeb187cf237a6ea3ab8bd86d169afecc1b1f5568/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6d5af214e77a8d3acb931edeb187cf237a6ea3ab8bd86d169afecc1b1f5568/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6d5af214e77a8d3acb931edeb187cf237a6ea3ab8bd86d169afecc1b1f5568/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:48:22 compute-0 podman[290219]: 2025-10-11 08:48:22.016897429 +0000 UTC m=+0.192478962 container init fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:48:22 compute-0 podman[290219]: 2025-10-11 08:48:22.033755831 +0000 UTC m=+0.209337384 container start fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:48:22 compute-0 podman[290219]: 2025-10-11 08:48:22.037955271 +0000 UTC m=+0.213536804 container attach fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 08:48:22 compute-0 nova_compute[260935]: 2025-10-11 08:48:22.227 2 INFO nova.virt.libvirt.driver [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deleting instance files /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a_del
Oct 11 08:48:22 compute-0 nova_compute[260935]: 2025-10-11 08:48:22.231 2 INFO nova.virt.libvirt.driver [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deletion of /var/lib/nova/instances/66086b61-46ca-4a1b-a9f0-692678bcbf7a_del complete
Oct 11 08:48:22 compute-0 nova_compute[260935]: 2025-10-11 08:48:22.303 2 INFO nova.compute.manager [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 11 08:48:22 compute-0 nova_compute[260935]: 2025-10-11 08:48:22.304 2 DEBUG oslo.service.loopingcall [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:48:22 compute-0 nova_compute[260935]: 2025-10-11 08:48:22.305 2 DEBUG nova.compute.manager [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:48:22 compute-0 nova_compute[260935]: 2025-10-11 08:48:22.305 2 DEBUG nova.network.neutron [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:48:22 compute-0 nova_compute[260935]: 2025-10-11 08:48:22.676 2 DEBUG nova.network.neutron [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:48:22 compute-0 nova_compute[260935]: 2025-10-11 08:48:22.703 2 DEBUG nova.network.neutron [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:22 compute-0 nova_compute[260935]: 2025-10-11 08:48:22.722 2 INFO nova.compute.manager [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Took 0.42 seconds to deallocate network for instance.
Oct 11 08:48:22 compute-0 nova_compute[260935]: 2025-10-11 08:48:22.767 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:22 compute-0 nova_compute[260935]: 2025-10-11 08:48:22.769 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]: {
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:     "0": [
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:         {
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "devices": [
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "/dev/loop3"
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             ],
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "lv_name": "ceph_lv0",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "lv_size": "21470642176",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "name": "ceph_lv0",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "tags": {
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.cluster_name": "ceph",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.crush_device_class": "",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.encrypted": "0",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.osd_id": "0",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.type": "block",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.vdo": "0"
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             },
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "type": "block",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "vg_name": "ceph_vg0"
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:         }
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:     ],
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:     "1": [
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:         {
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "devices": [
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "/dev/loop4"
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             ],
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "lv_name": "ceph_lv1",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "lv_size": "21470642176",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "name": "ceph_lv1",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "tags": {
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.cluster_name": "ceph",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.crush_device_class": "",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.encrypted": "0",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.osd_id": "1",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.type": "block",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.vdo": "0"
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             },
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "type": "block",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "vg_name": "ceph_vg1"
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:         }
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:     ],
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:     "2": [
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:         {
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "devices": [
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "/dev/loop5"
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             ],
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "lv_name": "ceph_lv2",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "lv_size": "21470642176",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "name": "ceph_lv2",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "tags": {
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.cluster_name": "ceph",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.crush_device_class": "",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.encrypted": "0",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.osd_id": "2",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.type": "block",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:                 "ceph.vdo": "0"
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             },
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "type": "block",
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:             "vg_name": "ceph_vg2"
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:         }
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]:     ]
Oct 11 08:48:22 compute-0 ecstatic_hopper[290238]: }
Oct 11 08:48:22 compute-0 systemd[1]: libpod-fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5.scope: Deactivated successfully.
Oct 11 08:48:22 compute-0 podman[290219]: 2025-10-11 08:48:22.903719349 +0000 UTC m=+1.079300842 container died fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:48:22 compute-0 ceph-mon[74313]: pgmap v1277: 321 pgs: 321 active+clean; 206 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 666 KiB/s rd, 7.7 MiB/s wr, 197 op/s
Oct 11 08:48:22 compute-0 nova_compute[260935]: 2025-10-11 08:48:22.939 2 DEBUG oslo_concurrency.processutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb6d5af214e77a8d3acb931edeb187cf237a6ea3ab8bd86d169afecc1b1f5568-merged.mount: Deactivated successfully.
Oct 11 08:48:22 compute-0 podman[290219]: 2025-10-11 08:48:22.985442155 +0000 UTC m=+1.161023668 container remove fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 08:48:23 compute-0 systemd[1]: libpod-conmon-fb24a0e93f77f9eb36884e7e678fc2d6bdd985660db7a595408292c0197d00c5.scope: Deactivated successfully.
Oct 11 08:48:23 compute-0 sudo[290076]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:23 compute-0 sudo[290260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:48:23 compute-0 nova_compute[260935]: 2025-10-11 08:48:23.121 2 DEBUG nova.compute.manager [req-8586e353-4f14-4155-b564-825841095417 req-51f8ac69-4489-4cfb-8bbe-56a9ae03da66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:23 compute-0 nova_compute[260935]: 2025-10-11 08:48:23.121 2 DEBUG oslo_concurrency.lockutils [req-8586e353-4f14-4155-b564-825841095417 req-51f8ac69-4489-4cfb-8bbe-56a9ae03da66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:23 compute-0 nova_compute[260935]: 2025-10-11 08:48:23.122 2 DEBUG oslo_concurrency.lockutils [req-8586e353-4f14-4155-b564-825841095417 req-51f8ac69-4489-4cfb-8bbe-56a9ae03da66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:23 compute-0 nova_compute[260935]: 2025-10-11 08:48:23.122 2 DEBUG oslo_concurrency.lockutils [req-8586e353-4f14-4155-b564-825841095417 req-51f8ac69-4489-4cfb-8bbe-56a9ae03da66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:23 compute-0 nova_compute[260935]: 2025-10-11 08:48:23.122 2 DEBUG nova.compute.manager [req-8586e353-4f14-4155-b564-825841095417 req-51f8ac69-4489-4cfb-8bbe-56a9ae03da66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:48:23 compute-0 nova_compute[260935]: 2025-10-11 08:48:23.123 2 WARNING nova.compute.manager [req-8586e353-4f14-4155-b564-825841095417 req-51f8ac69-4489-4cfb-8bbe-56a9ae03da66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 for instance with vm_state active and task_state None.
Oct 11 08:48:23 compute-0 sudo[290260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:23 compute-0 sudo[290260]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:48:23 compute-0 sudo[290304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:48:23 compute-0 sudo[290304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:23 compute-0 sudo[290304]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:23 compute-0 sudo[290329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:48:23 compute-0 sudo[290329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:23 compute-0 sudo[290329]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:23 compute-0 sudo[290354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:48:23 compute-0 sudo[290354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:48:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/175882175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:48:23 compute-0 nova_compute[260935]: 2025-10-11 08:48:23.414 2 DEBUG oslo_concurrency.processutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:23 compute-0 nova_compute[260935]: 2025-10-11 08:48:23.421 2 DEBUG nova.compute.provider_tree [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:48:23 compute-0 nova_compute[260935]: 2025-10-11 08:48:23.445 2 DEBUG nova.scheduler.client.report [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:48:23 compute-0 nova_compute[260935]: 2025-10-11 08:48:23.477 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:23 compute-0 nova_compute[260935]: 2025-10-11 08:48:23.518 2 INFO nova.scheduler.client.report [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Deleted allocations for instance 66086b61-46ca-4a1b-a9f0-692678bcbf7a
Oct 11 08:48:23 compute-0 nova_compute[260935]: 2025-10-11 08:48:23.627 2 DEBUG oslo_concurrency.lockutils [None req-a3eb7b25-19b5-4ebf-a8bb-4e4dd3fe276f 7cfe9716527d49f18102a38c7480e208 7fdd898b69404913a643940b3869140b - - default default] Lock "66086b61-46ca-4a1b-a9f0-692678bcbf7a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:23 compute-0 podman[290424]: 2025-10-11 08:48:23.7842719 +0000 UTC m=+0.051740740 container create 3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:48:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 167 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.8 MiB/s wr, 365 op/s
Oct 11 08:48:23 compute-0 nova_compute[260935]: 2025-10-11 08:48:23.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:23 compute-0 systemd[1]: Started libpod-conmon-3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165.scope.
Oct 11 08:48:23 compute-0 podman[290424]: 2025-10-11 08:48:23.762166228 +0000 UTC m=+0.029635078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:48:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:48:23 compute-0 podman[290424]: 2025-10-11 08:48:23.899501174 +0000 UTC m=+0.166970024 container init 3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mahavira, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:48:23 compute-0 podman[290424]: 2025-10-11 08:48:23.909261183 +0000 UTC m=+0.176729993 container start 3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mahavira, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:48:23 compute-0 nervous_mahavira[290441]: 167 167
Oct 11 08:48:23 compute-0 podman[290424]: 2025-10-11 08:48:23.914541584 +0000 UTC m=+0.182010424 container attach 3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 08:48:23 compute-0 systemd[1]: libpod-3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165.scope: Deactivated successfully.
Oct 11 08:48:23 compute-0 podman[290424]: 2025-10-11 08:48:23.916617953 +0000 UTC m=+0.184086763 container died 3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mahavira, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 08:48:23 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/175882175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:48:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0317ad5535e17dbc794cebef8b886128df931c185acab25f1b978b1f3bdbb752-merged.mount: Deactivated successfully.
Oct 11 08:48:23 compute-0 podman[290424]: 2025-10-11 08:48:23.968769384 +0000 UTC m=+0.236238224 container remove 3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 08:48:23 compute-0 systemd[1]: libpod-conmon-3e978874171137089eccb80ad497f6e0b2c1d160e81a98742c7defbf58796165.scope: Deactivated successfully.
Oct 11 08:48:24 compute-0 nova_compute[260935]: 2025-10-11 08:48:24.086 2 DEBUG nova.compute.manager [req-e5487992-b3d0-428a-9189-e07ae6fdf210 req-6b2144f8-3baa-4dd3-ac03-2699ead7ad15 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-changed-db31f1b4-b009-40dc-a028-b72fe0b1eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:24 compute-0 nova_compute[260935]: 2025-10-11 08:48:24.087 2 DEBUG nova.compute.manager [req-e5487992-b3d0-428a-9189-e07ae6fdf210 req-6b2144f8-3baa-4dd3-ac03-2699ead7ad15 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing instance network info cache due to event network-changed-db31f1b4-b009-40dc-a028-b72fe0b1eb45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:48:24 compute-0 nova_compute[260935]: 2025-10-11 08:48:24.087 2 DEBUG oslo_concurrency.lockutils [req-e5487992-b3d0-428a-9189-e07ae6fdf210 req-6b2144f8-3baa-4dd3-ac03-2699ead7ad15 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:48:24 compute-0 nova_compute[260935]: 2025-10-11 08:48:24.087 2 DEBUG oslo_concurrency.lockutils [req-e5487992-b3d0-428a-9189-e07ae6fdf210 req-6b2144f8-3baa-4dd3-ac03-2699ead7ad15 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:48:24 compute-0 nova_compute[260935]: 2025-10-11 08:48:24.087 2 DEBUG nova.network.neutron [req-e5487992-b3d0-428a-9189-e07ae6fdf210 req-6b2144f8-3baa-4dd3-ac03-2699ead7ad15 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing network info cache for port db31f1b4-b009-40dc-a028-b72fe0b1eb45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:48:24 compute-0 podman[290465]: 2025-10-11 08:48:24.247119849 +0000 UTC m=+0.078857504 container create 540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swanson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:48:24 compute-0 podman[290465]: 2025-10-11 08:48:24.210456271 +0000 UTC m=+0.042193986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:48:24 compute-0 systemd[1]: Started libpod-conmon-540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0.scope.
Oct 11 08:48:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:48:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80069ce4c9e5f70d791f6ddcb3af87f74ed72915e3a827c7a3bb7fa53cbdb32b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:48:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80069ce4c9e5f70d791f6ddcb3af87f74ed72915e3a827c7a3bb7fa53cbdb32b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:48:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80069ce4c9e5f70d791f6ddcb3af87f74ed72915e3a827c7a3bb7fa53cbdb32b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:48:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80069ce4c9e5f70d791f6ddcb3af87f74ed72915e3a827c7a3bb7fa53cbdb32b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:48:24 compute-0 podman[290465]: 2025-10-11 08:48:24.398196738 +0000 UTC m=+0.229934403 container init 540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 11 08:48:24 compute-0 podman[290465]: 2025-10-11 08:48:24.412333002 +0000 UTC m=+0.244070637 container start 540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swanson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 08:48:24 compute-0 podman[290465]: 2025-10-11 08:48:24.416491721 +0000 UTC m=+0.248229376 container attach 540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swanson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 08:48:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:48:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:48:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:48:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:48:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:48:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:48:24 compute-0 ceph-mon[74313]: pgmap v1278: 321 pgs: 321 active+clean; 167 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.8 MiB/s wr, 365 op/s
Oct 11 08:48:25 compute-0 nova_compute[260935]: 2025-10-11 08:48:25.226 2 DEBUG nova.compute.manager [req-ee9cdb8d-4a56-44ef-a27b-0642d6e5ee8c req-d476743d-789a-4d4c-bed4-0acf7578f089 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-changed-128b1135-2e8f-4e78-8e09-e16b082e9225 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:25 compute-0 nova_compute[260935]: 2025-10-11 08:48:25.227 2 DEBUG nova.compute.manager [req-ee9cdb8d-4a56-44ef-a27b-0642d6e5ee8c req-d476743d-789a-4d4c-bed4-0acf7578f089 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Refreshing instance network info cache due to event network-changed-128b1135-2e8f-4e78-8e09-e16b082e9225. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:48:25 compute-0 nova_compute[260935]: 2025-10-11 08:48:25.227 2 DEBUG oslo_concurrency.lockutils [req-ee9cdb8d-4a56-44ef-a27b-0642d6e5ee8c req-d476743d-789a-4d4c-bed4-0acf7578f089 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:48:25 compute-0 nova_compute[260935]: 2025-10-11 08:48:25.228 2 DEBUG oslo_concurrency.lockutils [req-ee9cdb8d-4a56-44ef-a27b-0642d6e5ee8c req-d476743d-789a-4d4c-bed4-0acf7578f089 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:48:25 compute-0 nova_compute[260935]: 2025-10-11 08:48:25.228 2 DEBUG nova.network.neutron [req-ee9cdb8d-4a56-44ef-a27b-0642d6e5ee8c req-d476743d-789a-4d4c-bed4-0acf7578f089 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Refreshing network info cache for port 128b1135-2e8f-4e78-8e09-e16b082e9225 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]: {
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:         "osd_id": 2,
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:         "type": "bluestore"
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:     },
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:         "osd_id": 0,
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:         "type": "bluestore"
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:     },
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:         "osd_id": 1,
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:         "type": "bluestore"
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]:     }
Oct 11 08:48:25 compute-0 beautiful_swanson[290481]: }
Oct 11 08:48:25 compute-0 systemd[1]: libpod-540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0.scope: Deactivated successfully.
Oct 11 08:48:25 compute-0 podman[290465]: 2025-10-11 08:48:25.506297023 +0000 UTC m=+1.338034658 container died 540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 08:48:25 compute-0 systemd[1]: libpod-540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0.scope: Consumed 1.051s CPU time.
Oct 11 08:48:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-80069ce4c9e5f70d791f6ddcb3af87f74ed72915e3a827c7a3bb7fa53cbdb32b-merged.mount: Deactivated successfully.
Oct 11 08:48:25 compute-0 podman[290465]: 2025-10-11 08:48:25.586431214 +0000 UTC m=+1.418168839 container remove 540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 08:48:25 compute-0 systemd[1]: libpod-conmon-540b51e5b77bd003090daade15f0ffe4048561f3bfff76195e480b01f9c4e7b0.scope: Deactivated successfully.
Oct 11 08:48:25 compute-0 sudo[290354]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:25 compute-0 nova_compute[260935]: 2025-10-11 08:48:25.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:48:25 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:48:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:48:25 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:48:25 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev b663000c-12b7-4739-9575-03434d4f765a does not exist
Oct 11 08:48:25 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 3c5bc784-1efe-488a-9faa-f70e2fffcf6c does not exist
Oct 11 08:48:25 compute-0 sudo[290528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:48:25 compute-0 sudo[290528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:25 compute-0 sudo[290528]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 167 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.7 MiB/s wr, 301 op/s
Oct 11 08:48:25 compute-0 sudo[290553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:48:25 compute-0 sudo[290553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:48:25 compute-0 sudo[290553]: pam_unix(sudo:session): session closed for user root
Oct 11 08:48:25 compute-0 nova_compute[260935]: 2025-10-11 08:48:25.981 2 DEBUG nova.network.neutron [req-e5487992-b3d0-428a-9189-e07ae6fdf210 req-6b2144f8-3baa-4dd3-ac03-2699ead7ad15 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updated VIF entry in instance network info cache for port db31f1b4-b009-40dc-a028-b72fe0b1eb45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:48:25 compute-0 nova_compute[260935]: 2025-10-11 08:48:25.981 2 DEBUG nova.network.neutron [req-e5487992-b3d0-428a-9189-e07ae6fdf210 req-6b2144f8-3baa-4dd3-ac03-2699ead7ad15 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:26 compute-0 nova_compute[260935]: 2025-10-11 08:48:26.006 2 DEBUG oslo_concurrency.lockutils [req-e5487992-b3d0-428a-9189-e07ae6fdf210 req-6b2144f8-3baa-4dd3-ac03-2699ead7ad15 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:48:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:48:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:48:26 compute-0 ceph-mon[74313]: pgmap v1279: 321 pgs: 321 active+clean; 167 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.7 MiB/s wr, 301 op/s
Oct 11 08:48:26 compute-0 nova_compute[260935]: 2025-10-11 08:48:26.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:27 compute-0 nova_compute[260935]: 2025-10-11 08:48:27.278 2 DEBUG nova.network.neutron [req-ee9cdb8d-4a56-44ef-a27b-0642d6e5ee8c req-d476743d-789a-4d4c-bed4-0acf7578f089 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Updated VIF entry in instance network info cache for port 128b1135-2e8f-4e78-8e09-e16b082e9225. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:48:27 compute-0 nova_compute[260935]: 2025-10-11 08:48:27.279 2 DEBUG nova.network.neutron [req-ee9cdb8d-4a56-44ef-a27b-0642d6e5ee8c req-d476743d-789a-4d4c-bed4-0acf7578f089 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Updating instance_info_cache with network_info: [{"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:27 compute-0 nova_compute[260935]: 2025-10-11 08:48:27.317 2 DEBUG oslo_concurrency.lockutils [req-ee9cdb8d-4a56-44ef-a27b-0642d6e5ee8c req-d476743d-789a-4d4c-bed4-0acf7578f089 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f6e6ccd5-d393-4fa3-bf88-491311678dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:48:27 compute-0 podman[290578]: 2025-10-11 08:48:27.79415072 +0000 UTC m=+0.088626823 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 08:48:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 319 op/s
Oct 11 08:48:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.271 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.272 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.272 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.272 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.273 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.274 2 INFO nova.compute.manager [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Terminating instance
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.276 2 DEBUG nova.compute.manager [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:48:28 compute-0 kernel: tap128b1135-2e (unregistering): left promiscuous mode
Oct 11 08:48:28 compute-0 NetworkManager[44960]: <info>  [1760172508.3350] device (tap128b1135-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:28 compute-0 ovn_controller[152945]: 2025-10-11T08:48:28Z|00073|binding|INFO|Releasing lport 128b1135-2e8f-4e78-8e09-e16b082e9225 from this chassis (sb_readonly=0)
Oct 11 08:48:28 compute-0 ovn_controller[152945]: 2025-10-11T08:48:28Z|00074|binding|INFO|Setting lport 128b1135-2e8f-4e78-8e09-e16b082e9225 down in Southbound
Oct 11 08:48:28 compute-0 ovn_controller[152945]: 2025-10-11T08:48:28Z|00075|binding|INFO|Removing iface tap128b1135-2e ovn-installed in OVS
Oct 11 08:48:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.359 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:95:81 10.100.0.4'], port_security=['fa:16:3e:ac:95:81 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f6e6ccd5-d393-4fa3-bf88-491311678dd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc753a4e96fc46008b6e6b1fd29b160d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd907eed5-bfee-42df-bbfe-0d6a84057302', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8656b2b4-2d06-42a4-a59e-112920fcccd9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=128b1135-2e8f-4e78-8e09-e16b082e9225) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:48:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.361 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 128b1135-2e8f-4e78-8e09-e16b082e9225 in datapath 678d17d5-515c-4e7c-a42a-5bd4db3dbb7b unbound from our chassis
Oct 11 08:48:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.362 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 678d17d5-515c-4e7c-a42a-5bd4db3dbb7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:48:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.363 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a7f822-b978-4341-ae36-3928d6b4229d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.363 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b namespace which is not needed anymore
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:28 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000013.scope: Deactivated successfully.
Oct 11 08:48:28 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000013.scope: Consumed 12.639s CPU time.
Oct 11 08:48:28 compute-0 systemd-machined[215705]: Machine qemu-20-instance-00000013 terminated.
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.522 2 INFO nova.virt.libvirt.driver [-] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Instance destroyed successfully.
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.523 2 DEBUG nova.objects.instance [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lazy-loading 'resources' on Instance uuid f6e6ccd5-d393-4fa3-bf88-491311678dd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:28 compute-0 neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b[288769]: [NOTICE]   (288775) : haproxy version is 2.8.14-c23fe91
Oct 11 08:48:28 compute-0 neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b[288769]: [NOTICE]   (288775) : path to executable is /usr/sbin/haproxy
Oct 11 08:48:28 compute-0 neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b[288769]: [WARNING]  (288775) : Exiting Master process...
Oct 11 08:48:28 compute-0 neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b[288769]: [WARNING]  (288775) : Exiting Master process...
Oct 11 08:48:28 compute-0 neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b[288769]: [ALERT]    (288775) : Current worker (288777) exited with code 143 (Terminated)
Oct 11 08:48:28 compute-0 neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b[288769]: [WARNING]  (288775) : All workers exited. Exiting... (0)
Oct 11 08:48:28 compute-0 systemd[1]: libpod-e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec.scope: Deactivated successfully.
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.550 2 DEBUG nova.virt.libvirt.vif [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:47:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-2127562755',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-2127562755',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-212756275',id=19,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc753a4e96fc46008b6e6b1fd29b160d',ramdisk_id='',reservation_id='r-wv642fxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='
virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-713335751',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-713335751-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:05Z,user_data=None,user_id='16055681fed745bb89347149995b8486',uuid=f6e6ccd5-d393-4fa3-bf88-491311678dd1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.551 2 DEBUG nova.network.os_vif_util [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Converting VIF {"id": "128b1135-2e8f-4e78-8e09-e16b082e9225", "address": "fa:16:3e:ac:95:81", "network": {"id": "678d17d5-515c-4e7c-a42a-5bd4db3dbb7b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1273402507-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc753a4e96fc46008b6e6b1fd29b160d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap128b1135-2e", "ovs_interfaceid": "128b1135-2e8f-4e78-8e09-e16b082e9225", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.552 2 DEBUG nova.network.os_vif_util [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:95:81,bridge_name='br-int',has_traffic_filtering=True,id=128b1135-2e8f-4e78-8e09-e16b082e9225,network=Network(678d17d5-515c-4e7c-a42a-5bd4db3dbb7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap128b1135-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.552 2 DEBUG os_vif [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:95:81,bridge_name='br-int',has_traffic_filtering=True,id=128b1135-2e8f-4e78-8e09-e16b082e9225,network=Network(678d17d5-515c-4e7c-a42a-5bd4db3dbb7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap128b1135-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:48:28 compute-0 podman[290622]: 2025-10-11 08:48:28.552672712 +0000 UTC m=+0.065488713 container died e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.558 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap128b1135-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.563 2 DEBUG nova.compute.manager [req-50bb8a09-8318-4292-b449-1928695ec69b req-70c08add-6106-4575-af83-11d9a0e8f426 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-vif-unplugged-128b1135-2e8f-4e78-8e09-e16b082e9225 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.564 2 DEBUG oslo_concurrency.lockutils [req-50bb8a09-8318-4292-b449-1928695ec69b req-70c08add-6106-4575-af83-11d9a0e8f426 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.564 2 DEBUG oslo_concurrency.lockutils [req-50bb8a09-8318-4292-b449-1928695ec69b req-70c08add-6106-4575-af83-11d9a0e8f426 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.565 2 DEBUG oslo_concurrency.lockutils [req-50bb8a09-8318-4292-b449-1928695ec69b req-70c08add-6106-4575-af83-11d9a0e8f426 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.565 2 DEBUG nova.compute.manager [req-50bb8a09-8318-4292-b449-1928695ec69b req-70c08add-6106-4575-af83-11d9a0e8f426 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] No waiting events found dispatching network-vif-unplugged-128b1135-2e8f-4e78-8e09-e16b082e9225 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.566 2 DEBUG nova.compute.manager [req-50bb8a09-8318-4292-b449-1928695ec69b req-70c08add-6106-4575-af83-11d9a0e8f426 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-vif-unplugged-128b1135-2e8f-4e78-8e09-e16b082e9225 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.572 2 INFO os_vif [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:95:81,bridge_name='br-int',has_traffic_filtering=True,id=128b1135-2e8f-4e78-8e09-e16b082e9225,network=Network(678d17d5-515c-4e7c-a42a-5bd4db3dbb7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap128b1135-2e')
Oct 11 08:48:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec-userdata-shm.mount: Deactivated successfully.
Oct 11 08:48:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-1697bbf5f67b57c75a7813d378343f8ad0bc5c688674a61d7b6592052ffd17eb-merged.mount: Deactivated successfully.
Oct 11 08:48:28 compute-0 podman[290622]: 2025-10-11 08:48:28.603554697 +0000 UTC m=+0.116370698 container cleanup e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:48:28 compute-0 systemd[1]: libpod-conmon-e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec.scope: Deactivated successfully.
Oct 11 08:48:28 compute-0 podman[290674]: 2025-10-11 08:48:28.697373769 +0000 UTC m=+0.057634489 container remove e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 08:48:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.708 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eaab65af-5549-4b77-956f-c87644057478]: (4, ('Sat Oct 11 08:48:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b (e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec)\ne7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec\nSat Oct 11 08:48:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b (e7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec)\ne7f00db60ad82e6e93accaad38709500629083bb52629852047b469c6b4d70ec\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.710 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d977d233-fb3d-4a0f-b7db-e71f063c2da4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.713 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap678d17d5-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:28 compute-0 kernel: tap678d17d5-50: left promiscuous mode
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.722 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7047b6ea-e4f5-4602-adf5-f68eeb2d8a07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.748 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a452bb22-01b9-4d18-8a18-9f245193a260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.751 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7768325c-eace-4999-8565-36f64774e047]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.776 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[90e7329e-c6c8-4cf4-9401-b88743a1fc96]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432822, 'reachable_time': 32842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290692, 'error': None, 'target': 'ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d678d17d5\x2d515c\x2d4e7c\x2da42a\x2d5bd4db3dbb7b.mount: Deactivated successfully.
Oct 11 08:48:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.779 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-678d17d5-515c-4e7c-a42a-5bd4db3dbb7b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:48:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:28.780 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[af238738-6e07-429e-b060-4c5bf72d7ab7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:28 compute-0 ceph-mon[74313]: pgmap v1280: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 319 op/s
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.993 2 INFO nova.virt.libvirt.driver [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Deleting instance files /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1_del
Oct 11 08:48:28 compute-0 nova_compute[260935]: 2025-10-11 08:48:28.994 2 INFO nova.virt.libvirt.driver [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Deletion of /var/lib/nova/instances/f6e6ccd5-d393-4fa3-bf88-491311678dd1_del complete
Oct 11 08:48:29 compute-0 nova_compute[260935]: 2025-10-11 08:48:29.056 2 INFO nova.compute.manager [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 11 08:48:29 compute-0 nova_compute[260935]: 2025-10-11 08:48:29.057 2 DEBUG oslo.service.loopingcall [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:48:29 compute-0 nova_compute[260935]: 2025-10-11 08:48:29.057 2 DEBUG nova.compute.manager [-] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:48:29 compute-0 nova_compute[260935]: 2025-10-11 08:48:29.057 2 DEBUG nova.network.neutron [-] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:48:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 112 KiB/s wr, 186 op/s
Oct 11 08:48:29 compute-0 nova_compute[260935]: 2025-10-11 08:48:29.876 2 DEBUG nova.network.neutron [-] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:29 compute-0 nova_compute[260935]: 2025-10-11 08:48:29.901 2 INFO nova.compute.manager [-] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Took 0.84 seconds to deallocate network for instance.
Oct 11 08:48:29 compute-0 nova_compute[260935]: 2025-10-11 08:48:29.941 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:29 compute-0 nova_compute[260935]: 2025-10-11 08:48:29.942 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:30 compute-0 nova_compute[260935]: 2025-10-11 08:48:30.030 2 DEBUG oslo_concurrency.processutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:48:30 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1270255684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:48:30 compute-0 nova_compute[260935]: 2025-10-11 08:48:30.545 2 DEBUG oslo_concurrency.processutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:30 compute-0 nova_compute[260935]: 2025-10-11 08:48:30.553 2 DEBUG nova.compute.provider_tree [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:48:30 compute-0 nova_compute[260935]: 2025-10-11 08:48:30.569 2 DEBUG nova.scheduler.client.report [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:48:30 compute-0 nova_compute[260935]: 2025-10-11 08:48:30.594 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:30 compute-0 nova_compute[260935]: 2025-10-11 08:48:30.618 2 INFO nova.scheduler.client.report [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Deleted allocations for instance f6e6ccd5-d393-4fa3-bf88-491311678dd1
Oct 11 08:48:30 compute-0 nova_compute[260935]: 2025-10-11 08:48:30.638 2 DEBUG nova.compute.manager [req-9ef83ac4-c3ea-48a2-949d-5c0f7898257e req-64d86a1c-6cd9-4eda-aad4-9b01272a88b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:30 compute-0 nova_compute[260935]: 2025-10-11 08:48:30.639 2 DEBUG oslo_concurrency.lockutils [req-9ef83ac4-c3ea-48a2-949d-5c0f7898257e req-64d86a1c-6cd9-4eda-aad4-9b01272a88b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:30 compute-0 nova_compute[260935]: 2025-10-11 08:48:30.639 2 DEBUG oslo_concurrency.lockutils [req-9ef83ac4-c3ea-48a2-949d-5c0f7898257e req-64d86a1c-6cd9-4eda-aad4-9b01272a88b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:30 compute-0 nova_compute[260935]: 2025-10-11 08:48:30.639 2 DEBUG oslo_concurrency.lockutils [req-9ef83ac4-c3ea-48a2-949d-5c0f7898257e req-64d86a1c-6cd9-4eda-aad4-9b01272a88b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:30 compute-0 nova_compute[260935]: 2025-10-11 08:48:30.640 2 DEBUG nova.compute.manager [req-9ef83ac4-c3ea-48a2-949d-5c0f7898257e req-64d86a1c-6cd9-4eda-aad4-9b01272a88b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] No waiting events found dispatching network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:48:30 compute-0 nova_compute[260935]: 2025-10-11 08:48:30.640 2 WARNING nova.compute.manager [req-9ef83ac4-c3ea-48a2-949d-5c0f7898257e req-64d86a1c-6cd9-4eda-aad4-9b01272a88b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received unexpected event network-vif-plugged-128b1135-2e8f-4e78-8e09-e16b082e9225 for instance with vm_state deleted and task_state None.
Oct 11 08:48:30 compute-0 nova_compute[260935]: 2025-10-11 08:48:30.641 2 DEBUG nova.compute.manager [req-9ef83ac4-c3ea-48a2-949d-5c0f7898257e req-64d86a1c-6cd9-4eda-aad4-9b01272a88b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Received event network-vif-deleted-128b1135-2e8f-4e78-8e09-e16b082e9225 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:30 compute-0 nova_compute[260935]: 2025-10-11 08:48:30.697 2 DEBUG oslo_concurrency.lockutils [None req-bf530d45-37dc-4204-bd7a-4fc58106c816 16055681fed745bb89347149995b8486 dc753a4e96fc46008b6e6b1fd29b160d - - default default] Lock "f6e6ccd5-d393-4fa3-bf88-491311678dd1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:30 compute-0 ceph-mon[74313]: pgmap v1281: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 112 KiB/s wr, 186 op/s
Oct 11 08:48:30 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1270255684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.005 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.005 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.024 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.098 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.099 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.107 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.107 2 INFO nova.compute.claims [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.247 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:48:31 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1999242782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.761 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.769 2 DEBUG nova.compute.provider_tree [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.788 2 DEBUG nova.scheduler.client.report [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:48:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 112 KiB/s wr, 186 op/s
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.819 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.821 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.876 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.876 2 DEBUG nova.network.neutron [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:31 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1999242782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.898 2 INFO nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:48:31 compute-0 nova_compute[260935]: 2025-10-11 08:48:31.917 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.027 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.029 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.030 2 INFO nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Creating image(s)
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.068 2 DEBUG nova.storage.rbd_utils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] rbd image e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.117 2 DEBUG nova.storage.rbd_utils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] rbd image e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.154 2 DEBUG nova.storage.rbd_utils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] rbd image e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.163 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.241 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.242 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.242 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.243 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.264 2 DEBUG nova.storage.rbd_utils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] rbd image e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.268 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.291 2 DEBUG nova.policy [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6b08b83b77b84cd894e155d2a06682a4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8d6c7a0d842f4dcb95421a3f47580c49', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.561 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.629 2 DEBUG nova.storage.rbd_utils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] resizing rbd image e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.729 2 DEBUG nova.objects.instance [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lazy-loading 'migration_context' on Instance uuid e10cd028-76c1-4eb5-be43-f51e4da8abc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.744 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.745 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Ensure instance console log exists: /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.745 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.745 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.745 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:32 compute-0 ceph-mon[74313]: pgmap v1282: 321 pgs: 321 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 112 KiB/s wr, 186 op/s
Oct 11 08:48:32 compute-0 nova_compute[260935]: 2025-10-11 08:48:32.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:48:33 compute-0 nova_compute[260935]: 2025-10-11 08:48:33.538 2 DEBUG nova.network.neutron [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Successfully created port: b5bae935-7639-4a76-988c-e09d0c6f5fb1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:48:33 compute-0 nova_compute[260935]: 2025-10-11 08:48:33.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:33 compute-0 ovn_controller[152945]: 2025-10-11T08:48:33Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:65:f3 10.100.0.8
Oct 11 08:48:33 compute-0 ovn_controller[152945]: 2025-10-11T08:48:33Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:65:f3 10.100.0.8
Oct 11 08:48:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 124 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.5 MiB/s wr, 255 op/s
Oct 11 08:48:33 compute-0 podman[290905]: 2025-10-11 08:48:33.848034599 +0000 UTC m=+0.131060157 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 08:48:33 compute-0 podman[290906]: 2025-10-11 08:48:33.867796034 +0000 UTC m=+0.149566056 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:48:34 compute-0 ceph-mon[74313]: pgmap v1283: 321 pgs: 321 active+clean; 124 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.5 MiB/s wr, 255 op/s
Oct 11 08:48:35 compute-0 nova_compute[260935]: 2025-10-11 08:48:35.014 2 DEBUG nova.network.neutron [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Successfully updated port: b5bae935-7639-4a76-988c-e09d0c6f5fb1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:48:35 compute-0 nova_compute[260935]: 2025-10-11 08:48:35.032 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:48:35 compute-0 nova_compute[260935]: 2025-10-11 08:48:35.033 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquired lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:48:35 compute-0 nova_compute[260935]: 2025-10-11 08:48:35.033 2 DEBUG nova.network.neutron [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:48:35 compute-0 nova_compute[260935]: 2025-10-11 08:48:35.138 2 DEBUG nova.compute.manager [req-ab7a6d74-b857-456e-9cf1-967caa3d0496 req-4bcdc51e-6185-487f-9441-8a58f219aecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received event network-changed-b5bae935-7639-4a76-988c-e09d0c6f5fb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:35 compute-0 nova_compute[260935]: 2025-10-11 08:48:35.139 2 DEBUG nova.compute.manager [req-ab7a6d74-b857-456e-9cf1-967caa3d0496 req-4bcdc51e-6185-487f-9441-8a58f219aecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Refreshing instance network info cache due to event network-changed-b5bae935-7639-4a76-988c-e09d0c6f5fb1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:48:35 compute-0 nova_compute[260935]: 2025-10-11 08:48:35.139 2 DEBUG oslo_concurrency.lockutils [req-ab7a6d74-b857-456e-9cf1-967caa3d0496 req-4bcdc51e-6185-487f-9441-8a58f219aecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:48:35 compute-0 ovn_controller[152945]: 2025-10-11T08:48:35Z|00076|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 08:48:35 compute-0 nova_compute[260935]: 2025-10-11 08:48:35.237 2 DEBUG nova.network.neutron [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:48:35 compute-0 nova_compute[260935]: 2025-10-11 08:48:35.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:35 compute-0 sshd-session[290951]: Invalid user ins from 152.32.213.170 port 46352
Oct 11 08:48:35 compute-0 sshd-session[290951]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 08:48:35 compute-0 sshd-session[290951]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 08:48:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 124 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 643 KiB/s rd, 2.4 MiB/s wr, 87 op/s
Oct 11 08:48:36 compute-0 nova_compute[260935]: 2025-10-11 08:48:36.760 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172501.758883, 66086b61-46ca-4a1b-a9f0-692678bcbf7a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:48:36 compute-0 nova_compute[260935]: 2025-10-11 08:48:36.761 2 INFO nova.compute.manager [-] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] VM Stopped (Lifecycle Event)
Oct 11 08:48:36 compute-0 nova_compute[260935]: 2025-10-11 08:48:36.786 2 DEBUG nova.compute.manager [None req-9bb50622-f646-49a2-9933-b24878935f6a - - - - - -] [instance: 66086b61-46ca-4a1b-a9f0-692678bcbf7a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:36 compute-0 nova_compute[260935]: 2025-10-11 08:48:36.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:36 compute-0 ceph-mon[74313]: pgmap v1284: 321 pgs: 321 active+clean; 124 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 643 KiB/s rd, 2.4 MiB/s wr, 87 op/s
Oct 11 08:48:37 compute-0 sshd-session[290951]: Failed password for invalid user ins from 152.32.213.170 port 46352 ssh2
Oct 11 08:48:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:48:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3818684593' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:48:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:48:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3818684593' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:48:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 167 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 844 KiB/s rd, 3.9 MiB/s wr, 134 op/s
Oct 11 08:48:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3818684593' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:48:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3818684593' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:48:37 compute-0 nova_compute[260935]: 2025-10-11 08:48:37.984 2 DEBUG nova.network.neutron [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Updating instance_info_cache with network_info: [{"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.016 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Releasing lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.016 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Instance network_info: |[{"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.017 2 DEBUG oslo_concurrency.lockutils [req-ab7a6d74-b857-456e-9cf1-967caa3d0496 req-4bcdc51e-6185-487f-9441-8a58f219aecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.017 2 DEBUG nova.network.neutron [req-ab7a6d74-b857-456e-9cf1-967caa3d0496 req-4bcdc51e-6185-487f-9441-8a58f219aecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Refreshing network info cache for port b5bae935-7639-4a76-988c-e09d0c6f5fb1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.022 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Start _get_guest_xml network_info=[{"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.028 2 WARNING nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.039 2 DEBUG nova.virt.libvirt.host [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.040 2 DEBUG nova.virt.libvirt.host [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.043 2 DEBUG nova.virt.libvirt.host [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.044 2 DEBUG nova.virt.libvirt.host [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.044 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.044 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.045 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.045 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.046 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.046 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.046 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.047 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.047 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.047 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.048 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.048 2 DEBUG nova.virt.hardware [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.052 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:48:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:48:38 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2102150919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.545 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.579 2 DEBUG nova.storage.rbd_utils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] rbd image e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.584 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:38 compute-0 nova_compute[260935]: 2025-10-11 08:48:38.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:38 compute-0 ceph-mon[74313]: pgmap v1285: 321 pgs: 321 active+clean; 167 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 844 KiB/s rd, 3.9 MiB/s wr, 134 op/s
Oct 11 08:48:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2102150919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:48:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:48:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1600308633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.116 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.119 2 DEBUG nova.virt.libvirt.vif [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:48:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-170168590',display_name='tempest-ServersTestJSON-server-170168590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-170168590',id=21,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBBLmmDX7+o3xlBXsWFtGAAW6QN89FexPyRikLBBoMkqYEgmzcpeem7mJuwXNPqh7hh6YHBKO8aG3FnT45N5dmtZiE21YMODPbWwRwlsUeKoenY7euJ0iBxGg5aRfD6zgQ==',key_name='tempest-keypair-1869462671',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8d6c7a0d842f4dcb95421a3f47580c49',ramdisk_id='',reservation_id='r-ijkk9nxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-381670503',owner_user_name='tempest-ServersTestJSON-381670503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:48:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6b08b83b77b84cd894e155d2a06682a4',uuid=e10cd028-76c1-4eb5-be43-f51e4da8abc1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.120 2 DEBUG nova.network.os_vif_util [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Converting VIF {"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.121 2 DEBUG nova.network.os_vif_util [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:83:55,bridge_name='br-int',has_traffic_filtering=True,id=b5bae935-7639-4a76-988c-e09d0c6f5fb1,network=Network(2880dd81-df95-47c3-aa3d-53c3f2548f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5bae935-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.123 2 DEBUG nova.objects.instance [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lazy-loading 'pci_devices' on Instance uuid e10cd028-76c1-4eb5-be43-f51e4da8abc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.157 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:48:39 compute-0 nova_compute[260935]:   <uuid>e10cd028-76c1-4eb5-be43-f51e4da8abc1</uuid>
Oct 11 08:48:39 compute-0 nova_compute[260935]:   <name>instance-00000015</name>
Oct 11 08:48:39 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:48:39 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:48:39 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersTestJSON-server-170168590</nova:name>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:48:38</nova:creationTime>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:48:39 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:48:39 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:48:39 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:48:39 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:48:39 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:48:39 compute-0 nova_compute[260935]:         <nova:user uuid="6b08b83b77b84cd894e155d2a06682a4">tempest-ServersTestJSON-381670503-project-member</nova:user>
Oct 11 08:48:39 compute-0 nova_compute[260935]:         <nova:project uuid="8d6c7a0d842f4dcb95421a3f47580c49">tempest-ServersTestJSON-381670503</nova:project>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:48:39 compute-0 nova_compute[260935]:         <nova:port uuid="b5bae935-7639-4a76-988c-e09d0c6f5fb1">
Oct 11 08:48:39 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:48:39 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:48:39 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <system>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <entry name="serial">e10cd028-76c1-4eb5-be43-f51e4da8abc1</entry>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <entry name="uuid">e10cd028-76c1-4eb5-be43-f51e4da8abc1</entry>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     </system>
Oct 11 08:48:39 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:48:39 compute-0 nova_compute[260935]:   <os>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:   </os>
Oct 11 08:48:39 compute-0 nova_compute[260935]:   <features>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:   </features>
Oct 11 08:48:39 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:48:39 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:48:39 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk">
Oct 11 08:48:39 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       </source>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:48:39 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk.config">
Oct 11 08:48:39 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       </source>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:48:39 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:ad:83:55"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <target dev="tapb5bae935-76"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1/console.log" append="off"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <video>
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     </video>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:48:39 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:48:39 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:48:39 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:48:39 compute-0 nova_compute[260935]: </domain>
Oct 11 08:48:39 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.160 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Preparing to wait for external event network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.160 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.161 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.161 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.163 2 DEBUG nova.virt.libvirt.vif [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:48:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-170168590',display_name='tempest-ServersTestJSON-server-170168590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-170168590',id=21,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBBLmmDX7+o3xlBXsWFtGAAW6QN89FexPyRikLBBoMkqYEgmzcpeem7mJuwXNPqh7hh6YHBKO8aG3FnT45N5dmtZiE21YMODPbWwRwlsUeKoenY7euJ0iBxGg5aRfD6zgQ==',key_name='tempest-keypair-1869462671',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8d6c7a0d842f4dcb95421a3f47580c49',ramdisk_id='',reservation_id='r-ijkk9nxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-381670503',owner_user_name='tempest-ServersTestJSON-381670503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:48:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6b08b83b77b84cd894e155d2a06682a4',uuid=e10cd028-76c1-4eb5-be43-f51e4da8abc1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.163 2 DEBUG nova.network.os_vif_util [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Converting VIF {"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.164 2 DEBUG nova.network.os_vif_util [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:83:55,bridge_name='br-int',has_traffic_filtering=True,id=b5bae935-7639-4a76-988c-e09d0c6f5fb1,network=Network(2880dd81-df95-47c3-aa3d-53c3f2548f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5bae935-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.165 2 DEBUG os_vif [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:83:55,bridge_name='br-int',has_traffic_filtering=True,id=b5bae935-7639-4a76-988c-e09d0c6f5fb1,network=Network(2880dd81-df95-47c3-aa3d-53c3f2548f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5bae935-76') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.167 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.168 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.173 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5bae935-76, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.174 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5bae935-76, col_values=(('external_ids', {'iface-id': 'b5bae935-7639-4a76-988c-e09d0c6f5fb1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:83:55', 'vm-uuid': 'e10cd028-76c1-4eb5-be43-f51e4da8abc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:39 compute-0 NetworkManager[44960]: <info>  [1760172519.2059] manager: (tapb5bae935-76): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.217 2 INFO os_vif [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:83:55,bridge_name='br-int',has_traffic_filtering=True,id=b5bae935-7639-4a76-988c-e09d0c6f5fb1,network=Network(2880dd81-df95-47c3-aa3d-53c3f2548f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5bae935-76')
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.286 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.287 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.288 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] No VIF found with MAC fa:16:3e:ad:83:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.289 2 INFO nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Using config drive
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.324 2 DEBUG nova.storage.rbd_utils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] rbd image e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:39 compute-0 sshd-session[290951]: Received disconnect from 152.32.213.170 port 46352:11: Bye Bye [preauth]
Oct 11 08:48:39 compute-0 sshd-session[290951]: Disconnected from invalid user ins 152.32.213.170 port 46352 [preauth]
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.746 2 INFO nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Creating config drive at /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1/disk.config
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.759 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuwr7xpyo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 167 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 3.9 MiB/s wr, 116 op/s
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.914 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuwr7xpyo" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:39 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1600308633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.957 2 DEBUG nova.storage.rbd_utils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] rbd image e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:39 compute-0 nova_compute[260935]: 2025-10-11 08:48:39.963 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1/disk.config e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.165 2 DEBUG oslo_concurrency.processutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1/disk.config e10cd028-76c1-4eb5-be43-f51e4da8abc1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.166 2 INFO nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Deleting local config drive /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1/disk.config because it was imported into RBD.
Oct 11 08:48:40 compute-0 kernel: tapb5bae935-76: entered promiscuous mode
Oct 11 08:48:40 compute-0 NetworkManager[44960]: <info>  [1760172520.2264] manager: (tapb5bae935-76): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Oct 11 08:48:40 compute-0 ovn_controller[152945]: 2025-10-11T08:48:40Z|00077|binding|INFO|Claiming lport b5bae935-7639-4a76-988c-e09d0c6f5fb1 for this chassis.
Oct 11 08:48:40 compute-0 ovn_controller[152945]: 2025-10-11T08:48:40Z|00078|binding|INFO|b5bae935-7639-4a76-988c-e09d0c6f5fb1: Claiming fa:16:3e:ad:83:55 10.100.0.11
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.285 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:83:55 10.100.0.11'], port_security=['fa:16:3e:ad:83:55 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e10cd028-76c1-4eb5-be43-f51e4da8abc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2880dd81-df95-47c3-aa3d-53c3f2548f15', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d6c7a0d842f4dcb95421a3f47580c49', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7275978f-63aa-48f1-b8b7-cd0104b14473', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c8074e3-09b3-477f-801a-39440484f747, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b5bae935-7639-4a76-988c-e09d0c6f5fb1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.288 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b5bae935-7639-4a76-988c-e09d0c6f5fb1 in datapath 2880dd81-df95-47c3-aa3d-53c3f2548f15 bound to our chassis
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.292 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2880dd81-df95-47c3-aa3d-53c3f2548f15
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:40 compute-0 ovn_controller[152945]: 2025-10-11T08:48:40Z|00079|binding|INFO|Setting lport b5bae935-7639-4a76-988c-e09d0c6f5fb1 ovn-installed in OVS
Oct 11 08:48:40 compute-0 ovn_controller[152945]: 2025-10-11T08:48:40Z|00080|binding|INFO|Setting lport b5bae935-7639-4a76-988c-e09d0c6f5fb1 up in Southbound
Oct 11 08:48:40 compute-0 systemd-machined[215705]: New machine qemu-23-instance-00000015.
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.312 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b1edf0e3-372c-47f6-97d6-80e69d1539e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.316 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2880dd81-d1 in ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.317 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2880dd81-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.317 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d2fb791b-02b0-46b2-bcbc-3537ba2b6dbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.319 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33366c6d-7041-4d28-9c2f-c00f58fa4b93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:40 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000015.
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.337 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4e9e37-63cf-4763-8724-79265a79931e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:40 compute-0 systemd-udevd[291091]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.359 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a495240c-5413-413e-ba28-622c81455edf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:40 compute-0 NetworkManager[44960]: <info>  [1760172520.3708] device (tapb5bae935-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:48:40 compute-0 NetworkManager[44960]: <info>  [1760172520.3726] device (tapb5bae935-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.407 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[11816466-dbe6-462e-aec2-3e228102818b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.414 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a36308ce-89be-401a-92b0-4e3a47f65fff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:40 compute-0 NetworkManager[44960]: <info>  [1760172520.4156] manager: (tap2880dd81-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.468 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[26d5a6df-386f-4d98-87d5-160cd6962862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.472 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4f36b1eb-5752-4966-be0c-fb145efeb09f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:40 compute-0 NetworkManager[44960]: <info>  [1760172520.5133] device (tap2880dd81-d0): carrier: link connected
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.522 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[46503e41-5d43-45b7-88a3-3d5b37b3678d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.550 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[09469b34-83ad-41b9-ba9d-4b62ea82a1fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2880dd81-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:9e:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436521, 'reachable_time': 44999, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291122, 'error': None, 'target': 'ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.551 2 DEBUG oslo_concurrency.lockutils [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.552 2 DEBUG oslo_concurrency.lockutils [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.553 2 DEBUG nova.objects.instance [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.576 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[73d03c32-de99-4dee-84d4-7c8490e1b55a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:9ef3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436521, 'tstamp': 436521}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291123, 'error': None, 'target': 'ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.587 2 DEBUG nova.objects.instance [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_requests' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.603 2 DEBUG nova.network.neutron [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.606 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c9add7ba-218a-4693-b935-1efbcceefa31]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2880dd81-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:9e:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436521, 'reachable_time': 44999, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291124, 'error': None, 'target': 'ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.659 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[52dd990c-6006-43c6-8dcc-05f42e7ef197]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.718 2 DEBUG nova.compute.manager [req-e6d4747d-9889-444c-a644-06f13f5b6866 req-dbaa294c-20e3-41dd-9d39-a5eb8c5955c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received event network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.718 2 DEBUG oslo_concurrency.lockutils [req-e6d4747d-9889-444c-a644-06f13f5b6866 req-dbaa294c-20e3-41dd-9d39-a5eb8c5955c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.718 2 DEBUG oslo_concurrency.lockutils [req-e6d4747d-9889-444c-a644-06f13f5b6866 req-dbaa294c-20e3-41dd-9d39-a5eb8c5955c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.719 2 DEBUG oslo_concurrency.lockutils [req-e6d4747d-9889-444c-a644-06f13f5b6866 req-dbaa294c-20e3-41dd-9d39-a5eb8c5955c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.719 2 DEBUG nova.compute.manager [req-e6d4747d-9889-444c-a644-06f13f5b6866 req-dbaa294c-20e3-41dd-9d39-a5eb8c5955c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Processing event network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.760 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e49f3567-ea33-48bd-982c-2e03e473e7fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.764 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2880dd81-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.765 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.765 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2880dd81-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:40 compute-0 NetworkManager[44960]: <info>  [1760172520.7680] manager: (tap2880dd81-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:40 compute-0 kernel: tap2880dd81-d0: entered promiscuous mode
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.788 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2880dd81-d0, col_values=(('external_ids', {'iface-id': '0d142335-06a1-47f2-b37a-cf32717864bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:40 compute-0 ovn_controller[152945]: 2025-10-11T08:48:40Z|00081|binding|INFO|Releasing lport 0d142335-06a1-47f2-b37a-cf32717864bc from this chassis (sb_readonly=0)
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.791 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2880dd81-df95-47c3-aa3d-53c3f2548f15.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2880dd81-df95-47c3-aa3d-53c3f2548f15.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.792 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[50414a75-9876-438c-b524-f8393db696d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.793 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-2880dd81-df95-47c3-aa3d-53c3f2548f15
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/2880dd81-df95-47c3-aa3d-53c3f2548f15.pid.haproxy
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 2880dd81-df95-47c3-aa3d-53c3f2548f15
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:48:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:40.794 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15', 'env', 'PROCESS_TAG=haproxy-2880dd81-df95-47c3-aa3d-53c3f2548f15', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2880dd81-df95-47c3-aa3d-53c3f2548f15.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.917 2 DEBUG nova.network.neutron [req-ab7a6d74-b857-456e-9cf1-967caa3d0496 req-4bcdc51e-6185-487f-9441-8a58f219aecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Updated VIF entry in instance network info cache for port b5bae935-7639-4a76-988c-e09d0c6f5fb1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.917 2 DEBUG nova.network.neutron [req-ab7a6d74-b857-456e-9cf1-967caa3d0496 req-4bcdc51e-6185-487f-9441-8a58f219aecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Updating instance_info_cache with network_info: [{"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:40 compute-0 nova_compute[260935]: 2025-10-11 08:48:40.936 2 DEBUG oslo_concurrency.lockutils [req-ab7a6d74-b857-456e-9cf1-967caa3d0496 req-4bcdc51e-6185-487f-9441-8a58f219aecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:48:40 compute-0 ceph-mon[74313]: pgmap v1286: 321 pgs: 321 active+clean; 167 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 3.9 MiB/s wr, 116 op/s
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.180 2 DEBUG nova.policy [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:48:41 compute-0 podman[291198]: 2025-10-11 08:48:41.248111358 +0000 UTC m=+0.057085853 container create 371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.271 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172521.2711942, e10cd028-76c1-4eb5-be43-f51e4da8abc1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.272 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] VM Started (Lifecycle Event)
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.276 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.280 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.285 2 INFO nova.virt.libvirt.driver [-] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Instance spawned successfully.
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.286 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:48:41 compute-0 systemd[1]: Started libpod-conmon-371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1.scope.
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.307 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:41 compute-0 podman[291198]: 2025-10-11 08:48:41.219525861 +0000 UTC m=+0.028500336 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.318 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.325 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.326 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.327 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.328 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.329 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.330 2 DEBUG nova.virt.libvirt.driver [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:48:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:48:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90ad88f1ec264b7e39d2fba8c4def1e28c3e05feed22a0602c12c8a9a6991b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.358 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.358 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172521.2713978, e10cd028-76c1-4eb5-be43-f51e4da8abc1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.359 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] VM Paused (Lifecycle Event)
Oct 11 08:48:41 compute-0 podman[291198]: 2025-10-11 08:48:41.373492012 +0000 UTC m=+0.182466557 container init 371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.384 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:41 compute-0 podman[291198]: 2025-10-11 08:48:41.38567958 +0000 UTC m=+0.194654065 container start 371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.390 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172521.2785738, e10cd028-76c1-4eb5-be43-f51e4da8abc1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.390 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] VM Resumed (Lifecycle Event)
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.400 2 INFO nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Took 9.37 seconds to spawn the instance on the hypervisor.
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.400 2 DEBUG nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.412 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.417 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:48:41 compute-0 neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15[291214]: [NOTICE]   (291218) : New worker (291220) forked
Oct 11 08:48:41 compute-0 neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15[291214]: [NOTICE]   (291218) : Loading success.
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.447 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.471 2 INFO nova.compute.manager [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Took 10.40 seconds to build instance.
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.490 2 DEBUG oslo_concurrency.lockutils [None req-ba70e0d6-0056-4e33-9444-313e6b6d359c 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 167 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 3.9 MiB/s wr, 116 op/s
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:41 compute-0 nova_compute[260935]: 2025-10-11 08:48:41.902 2 DEBUG nova.network.neutron [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Successfully created port: b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:48:42 compute-0 nova_compute[260935]: 2025-10-11 08:48:42.918 2 DEBUG nova.compute.manager [req-18b98de8-9499-426d-b22d-70f3b9d6f2a7 req-cd126c65-8b89-486e-a641-a2741f149494 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received event network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:42 compute-0 nova_compute[260935]: 2025-10-11 08:48:42.919 2 DEBUG oslo_concurrency.lockutils [req-18b98de8-9499-426d-b22d-70f3b9d6f2a7 req-cd126c65-8b89-486e-a641-a2741f149494 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:42 compute-0 nova_compute[260935]: 2025-10-11 08:48:42.920 2 DEBUG oslo_concurrency.lockutils [req-18b98de8-9499-426d-b22d-70f3b9d6f2a7 req-cd126c65-8b89-486e-a641-a2741f149494 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:42 compute-0 nova_compute[260935]: 2025-10-11 08:48:42.920 2 DEBUG oslo_concurrency.lockutils [req-18b98de8-9499-426d-b22d-70f3b9d6f2a7 req-cd126c65-8b89-486e-a641-a2741f149494 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:42 compute-0 nova_compute[260935]: 2025-10-11 08:48:42.920 2 DEBUG nova.compute.manager [req-18b98de8-9499-426d-b22d-70f3b9d6f2a7 req-cd126c65-8b89-486e-a641-a2741f149494 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] No waiting events found dispatching network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:48:42 compute-0 nova_compute[260935]: 2025-10-11 08:48:42.921 2 WARNING nova.compute.manager [req-18b98de8-9499-426d-b22d-70f3b9d6f2a7 req-cd126c65-8b89-486e-a641-a2741f149494 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received unexpected event network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 for instance with vm_state active and task_state None.
Oct 11 08:48:42 compute-0 ceph-mon[74313]: pgmap v1287: 321 pgs: 321 active+clean; 167 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 3.9 MiB/s wr, 116 op/s
Oct 11 08:48:43 compute-0 nova_compute[260935]: 2025-10-11 08:48:43.000 2 DEBUG nova.network.neutron [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Successfully updated port: b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:48:43 compute-0 nova_compute[260935]: 2025-10-11 08:48:43.014 2 DEBUG oslo_concurrency.lockutils [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:48:43 compute-0 nova_compute[260935]: 2025-10-11 08:48:43.015 2 DEBUG oslo_concurrency.lockutils [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:48:43 compute-0 nova_compute[260935]: 2025-10-11 08:48:43.015 2 DEBUG nova.network.neutron [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:48:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:48:43 compute-0 nova_compute[260935]: 2025-10-11 08:48:43.215 2 WARNING nova.network.neutron [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] fff13396-b787-4c6e-9112-a1c2ef57b26d already exists in list: networks containing: ['fff13396-b787-4c6e-9112-a1c2ef57b26d']. ignoring it
Oct 11 08:48:43 compute-0 nova_compute[260935]: 2025-10-11 08:48:43.517 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172508.5161822, f6e6ccd5-d393-4fa3-bf88-491311678dd1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:48:43 compute-0 nova_compute[260935]: 2025-10-11 08:48:43.517 2 INFO nova.compute.manager [-] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] VM Stopped (Lifecycle Event)
Oct 11 08:48:43 compute-0 nova_compute[260935]: 2025-10-11 08:48:43.549 2 DEBUG nova.compute.manager [None req-cfce80e5-6ef8-4dfe-b05e-708fc1b8b606 - - - - - -] [instance: f6e6ccd5-d393-4fa3-bf88-491311678dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:48:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 167 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Oct 11 08:48:44 compute-0 nova_compute[260935]: 2025-10-11 08:48:44.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:44 compute-0 nova_compute[260935]: 2025-10-11 08:48:44.585 2 DEBUG nova.compute.manager [req-8d725885-de81-40f6-ba5f-70072b8f3a7a req-4ab6bdef-0f8d-482d-8cbd-f0e422f95b04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-changed-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:44 compute-0 nova_compute[260935]: 2025-10-11 08:48:44.585 2 DEBUG nova.compute.manager [req-8d725885-de81-40f6-ba5f-70072b8f3a7a req-4ab6bdef-0f8d-482d-8cbd-f0e422f95b04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing instance network info cache due to event network-changed-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:48:44 compute-0 nova_compute[260935]: 2025-10-11 08:48:44.586 2 DEBUG oslo_concurrency.lockutils [req-8d725885-de81-40f6-ba5f-70072b8f3a7a req-4ab6bdef-0f8d-482d-8cbd-f0e422f95b04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:48:44 compute-0 ceph-mon[74313]: pgmap v1288: 321 pgs: 321 active+clean; 167 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Oct 11 08:48:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 167 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 101 op/s
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.338 2 DEBUG nova.network.neutron [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.364 2 DEBUG oslo_concurrency.lockutils [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.365 2 DEBUG oslo_concurrency.lockutils [req-8d725885-de81-40f6-ba5f-70072b8f3a7a req-4ab6bdef-0f8d-482d-8cbd-f0e422f95b04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.365 2 DEBUG nova.network.neutron [req-8d725885-de81-40f6-ba5f-70072b8f3a7a req-4ab6bdef-0f8d-482d-8cbd-f0e422f95b04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing network info cache for port b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.369 2 DEBUG nova.virt.libvirt.vif [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.370 2 DEBUG nova.network.os_vif_util [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.371 2 DEBUG nova.network.os_vif_util [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.371 2 DEBUG os_vif [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.377 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3e1a780-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.378 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3e1a780-92, col_values=(('external_ids', {'iface-id': 'b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:dc:ce', 'vm-uuid': '057de6d9-3f9e-4b23-9019-f62ba6b453e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:46 compute-0 NetworkManager[44960]: <info>  [1760172526.3819] manager: (tapb3e1a780-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.403 2 INFO os_vif [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92')
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.404 2 DEBUG nova.virt.libvirt.vif [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.404 2 DEBUG nova.network.os_vif_util [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.405 2 DEBUG nova.network.os_vif_util [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.409 2 DEBUG nova.virt.libvirt.guest [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] attach device xml: <interface type="ethernet">
Oct 11 08:48:46 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:f2:dc:ce"/>
Oct 11 08:48:46 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 08:48:46 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:48:46 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 08:48:46 compute-0 nova_compute[260935]:   <target dev="tapb3e1a780-92"/>
Oct 11 08:48:46 compute-0 nova_compute[260935]: </interface>
Oct 11 08:48:46 compute-0 nova_compute[260935]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 08:48:46 compute-0 kernel: tapb3e1a780-92: entered promiscuous mode
Oct 11 08:48:46 compute-0 NetworkManager[44960]: <info>  [1760172526.4258] manager: (tapb3e1a780-92): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Oct 11 08:48:46 compute-0 ovn_controller[152945]: 2025-10-11T08:48:46Z|00082|binding|INFO|Claiming lport b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e for this chassis.
Oct 11 08:48:46 compute-0 ovn_controller[152945]: 2025-10-11T08:48:46Z|00083|binding|INFO|b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e: Claiming fa:16:3e:f2:dc:ce 10.100.0.12
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.439 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:dc:ce 10.100.0.12'], port_security=['fa:16:3e:f2:dc:ce 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '057de6d9-3f9e-4b23-9019-f62ba6b453e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:48:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.445 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis
Oct 11 08:48:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.450 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:48:46 compute-0 ovn_controller[152945]: 2025-10-11T08:48:46Z|00084|binding|INFO|Setting lport b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e ovn-installed in OVS
Oct 11 08:48:46 compute-0 ovn_controller[152945]: 2025-10-11T08:48:46Z|00085|binding|INFO|Setting lport b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e up in Southbound
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:46 compute-0 systemd-udevd[291236]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:48:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.484 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[848f7625-1660-4948-8e66-a854d4250076]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:46 compute-0 NetworkManager[44960]: <info>  [1760172526.5141] device (tapb3e1a780-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:48:46 compute-0 NetworkManager[44960]: <info>  [1760172526.5166] device (tapb3e1a780-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.545 2 DEBUG nova.virt.libvirt.driver [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.547 2 DEBUG nova.virt.libvirt.driver [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.547 2 DEBUG nova.virt.libvirt.driver [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:97:65:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.548 2 DEBUG nova.virt.libvirt.driver [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:f2:dc:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:48:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.550 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5938400f-164c-4243-b8ab-4e93f90b1319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.554 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[156b792f-5e24-4a0e-81e1-71896fb93098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.577 2 DEBUG nova.virt.libvirt.guest [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:48:46 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:48:46 compute-0 nova_compute[260935]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 08:48:46 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:48:46</nova:creationTime>
Oct 11 08:48:46 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:48:46 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:48:46 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:48:46 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:48:46 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:48:46 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:48:46 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:48:46 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:48:46 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:48:46 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:48:46 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:48:46 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:48:46 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:48:46 compute-0 nova_compute[260935]:     <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 08:48:46 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 08:48:46 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:48:46 compute-0 nova_compute[260935]:     <nova:port uuid="b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e">
Oct 11 08:48:46 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 08:48:46 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:48:46 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:48:46 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:48:46 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 08:48:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.604 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[820de533-482e-4973-986d-cde0a597f617]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.626 2 DEBUG oslo_concurrency.lockutils [None req-f23aeac2-74ce-4b61-86f7-5e084e4fabda 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.630 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ade1f2-e68b-430c-9334-ce6dae11dfce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434479, 'reachable_time': 16585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291243, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.649 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc3f683d-96a1-443f-b65c-1011b3e90b2f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434496, 'tstamp': 434496}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291244, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434502, 'tstamp': 434502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291244, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.651 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.654 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.654 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:48:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.654 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:46.654 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:46 compute-0 ceph-mon[74313]: pgmap v1289: 321 pgs: 321 active+clean; 167 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 101 op/s
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.990 2 DEBUG nova.compute.manager [req-398e8b53-4ed5-4913-ae93-06f711a708f7 req-8fd7ba0c-8b85-405b-baa8-4db703878886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.991 2 DEBUG oslo_concurrency.lockutils [req-398e8b53-4ed5-4913-ae93-06f711a708f7 req-8fd7ba0c-8b85-405b-baa8-4db703878886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.992 2 DEBUG oslo_concurrency.lockutils [req-398e8b53-4ed5-4913-ae93-06f711a708f7 req-8fd7ba0c-8b85-405b-baa8-4db703878886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.993 2 DEBUG oslo_concurrency.lockutils [req-398e8b53-4ed5-4913-ae93-06f711a708f7 req-8fd7ba0c-8b85-405b-baa8-4db703878886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.993 2 DEBUG nova.compute.manager [req-398e8b53-4ed5-4913-ae93-06f711a708f7 req-8fd7ba0c-8b85-405b-baa8-4db703878886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:48:46 compute-0 nova_compute[260935]: 2025-10-11 08:48:46.994 2 WARNING nova.compute.manager [req-398e8b53-4ed5-4913-ae93-06f711a708f7 req-8fd7ba0c-8b85-405b-baa8-4db703878886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e for instance with vm_state active and task_state None.
Oct 11 08:48:47 compute-0 ovn_controller[152945]: 2025-10-11T08:48:47Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f2:dc:ce 10.100.0.12
Oct 11 08:48:47 compute-0 ovn_controller[152945]: 2025-10-11T08:48:47Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f2:dc:ce 10.100.0.12
Oct 11 08:48:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 167 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 121 op/s
Oct 11 08:48:47 compute-0 nova_compute[260935]: 2025-10-11 08:48:47.925 2 DEBUG nova.network.neutron [req-8d725885-de81-40f6-ba5f-70072b8f3a7a req-4ab6bdef-0f8d-482d-8cbd-f0e422f95b04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updated VIF entry in instance network info cache for port b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:48:47 compute-0 nova_compute[260935]: 2025-10-11 08:48:47.926 2 DEBUG nova.network.neutron [req-8d725885-de81-40f6-ba5f-70072b8f3a7a req-4ab6bdef-0f8d-482d-8cbd-f0e422f95b04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:47 compute-0 nova_compute[260935]: 2025-10-11 08:48:47.947 2 DEBUG oslo_concurrency.lockutils [req-8d725885-de81-40f6-ba5f-70072b8f3a7a req-4ab6bdef-0f8d-482d-8cbd-f0e422f95b04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:48:48 compute-0 nova_compute[260935]: 2025-10-11 08:48:48.034 2 DEBUG oslo_concurrency.lockutils [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:48 compute-0 nova_compute[260935]: 2025-10-11 08:48:48.035 2 DEBUG oslo_concurrency.lockutils [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:48 compute-0 nova_compute[260935]: 2025-10-11 08:48:48.036 2 DEBUG nova.objects.instance [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:48 compute-0 nova_compute[260935]: 2025-10-11 08:48:48.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:48:48 compute-0 nova_compute[260935]: 2025-10-11 08:48:48.876 2 DEBUG nova.objects.instance [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_requests' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:48:48 compute-0 nova_compute[260935]: 2025-10-11 08:48:48.897 2 DEBUG nova.network.neutron [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:48:48 compute-0 ceph-mon[74313]: pgmap v1290: 321 pgs: 321 active+clean; 167 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 121 op/s
Oct 11 08:48:49 compute-0 nova_compute[260935]: 2025-10-11 08:48:49.228 2 DEBUG nova.compute.manager [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received event network-changed-b5bae935-7639-4a76-988c-e09d0c6f5fb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:49 compute-0 nova_compute[260935]: 2025-10-11 08:48:49.229 2 DEBUG nova.compute.manager [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Refreshing instance network info cache due to event network-changed-b5bae935-7639-4a76-988c-e09d0c6f5fb1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:48:49 compute-0 nova_compute[260935]: 2025-10-11 08:48:49.230 2 DEBUG oslo_concurrency.lockutils [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:48:49 compute-0 nova_compute[260935]: 2025-10-11 08:48:49.230 2 DEBUG oslo_concurrency.lockutils [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:48:49 compute-0 nova_compute[260935]: 2025-10-11 08:48:49.231 2 DEBUG nova.network.neutron [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Refreshing network info cache for port b5bae935-7639-4a76-988c-e09d0c6f5fb1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:48:49 compute-0 nova_compute[260935]: 2025-10-11 08:48:49.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:49 compute-0 nova_compute[260935]: 2025-10-11 08:48:49.294 2 DEBUG nova.policy [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:48:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 167 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 74 op/s
Oct 11 08:48:49 compute-0 nova_compute[260935]: 2025-10-11 08:48:49.926 2 DEBUG nova.network.neutron [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Successfully created port: 723ff3bf-882c-4198-afc8-a31026a4ccfc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:48:50 compute-0 nova_compute[260935]: 2025-10-11 08:48:50.642 2 DEBUG nova.network.neutron [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Successfully updated port: 723ff3bf-882c-4198-afc8-a31026a4ccfc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:48:50 compute-0 nova_compute[260935]: 2025-10-11 08:48:50.671 2 DEBUG oslo_concurrency.lockutils [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:48:50 compute-0 nova_compute[260935]: 2025-10-11 08:48:50.672 2 DEBUG oslo_concurrency.lockutils [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:48:50 compute-0 nova_compute[260935]: 2025-10-11 08:48:50.672 2 DEBUG nova.network.neutron [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:48:50 compute-0 nova_compute[260935]: 2025-10-11 08:48:50.754 2 DEBUG nova.compute.manager [req-ec289248-6f0c-4c82-b400-62ea19b8bdf1 req-ca978624-fe00-472c-bc89-4e677bcfb236 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-changed-723ff3bf-882c-4198-afc8-a31026a4ccfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:50 compute-0 nova_compute[260935]: 2025-10-11 08:48:50.755 2 DEBUG nova.compute.manager [req-ec289248-6f0c-4c82-b400-62ea19b8bdf1 req-ca978624-fe00-472c-bc89-4e677bcfb236 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing instance network info cache due to event network-changed-723ff3bf-882c-4198-afc8-a31026a4ccfc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:48:50 compute-0 nova_compute[260935]: 2025-10-11 08:48:50.756 2 DEBUG oslo_concurrency.lockutils [req-ec289248-6f0c-4c82-b400-62ea19b8bdf1 req-ca978624-fe00-472c-bc89-4e677bcfb236 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:48:50 compute-0 nova_compute[260935]: 2025-10-11 08:48:50.854 2 WARNING nova.network.neutron [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] fff13396-b787-4c6e-9112-a1c2ef57b26d already exists in list: networks containing: ['fff13396-b787-4c6e-9112-a1c2ef57b26d']. ignoring it
Oct 11 08:48:50 compute-0 nova_compute[260935]: 2025-10-11 08:48:50.855 2 WARNING nova.network.neutron [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] fff13396-b787-4c6e-9112-a1c2ef57b26d already exists in list: networks containing: ['fff13396-b787-4c6e-9112-a1c2ef57b26d']. ignoring it
Oct 11 08:48:50 compute-0 ceph-mon[74313]: pgmap v1291: 321 pgs: 321 active+clean; 167 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 74 op/s
Oct 11 08:48:51 compute-0 nova_compute[260935]: 2025-10-11 08:48:51.306 2 DEBUG nova.network.neutron [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Updated VIF entry in instance network info cache for port b5bae935-7639-4a76-988c-e09d0c6f5fb1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:48:51 compute-0 nova_compute[260935]: 2025-10-11 08:48:51.308 2 DEBUG nova.network.neutron [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Updating instance_info_cache with network_info: [{"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:51 compute-0 nova_compute[260935]: 2025-10-11 08:48:51.339 2 DEBUG oslo_concurrency.lockutils [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e10cd028-76c1-4eb5-be43-f51e4da8abc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:48:51 compute-0 nova_compute[260935]: 2025-10-11 08:48:51.340 2 DEBUG nova.compute.manager [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:51 compute-0 nova_compute[260935]: 2025-10-11 08:48:51.341 2 DEBUG oslo_concurrency.lockutils [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:51 compute-0 nova_compute[260935]: 2025-10-11 08:48:51.341 2 DEBUG oslo_concurrency.lockutils [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:51 compute-0 nova_compute[260935]: 2025-10-11 08:48:51.341 2 DEBUG oslo_concurrency.lockutils [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:51 compute-0 nova_compute[260935]: 2025-10-11 08:48:51.342 2 DEBUG nova.compute.manager [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:48:51 compute-0 nova_compute[260935]: 2025-10-11 08:48:51.342 2 WARNING nova.compute.manager [req-7adb86f8-c0dd-47b1-89f2-06bd76a5962e req-3f3aed79-5d31-4c4f-b158-a2e75693bc87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e for instance with vm_state active and task_state None.
Oct 11 08:48:51 compute-0 nova_compute[260935]: 2025-10-11 08:48:51.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:51 compute-0 podman[291245]: 2025-10-11 08:48:51.789006146 +0000 UTC m=+0.082622973 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:48:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 167 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 74 op/s
Oct 11 08:48:51 compute-0 nova_compute[260935]: 2025-10-11 08:48:51.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:52 compute-0 ovn_controller[152945]: 2025-10-11T08:48:52Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ad:83:55 10.100.0.11
Oct 11 08:48:52 compute-0 ovn_controller[152945]: 2025-10-11T08:48:52Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:83:55 10.100.0.11
Oct 11 08:48:53 compute-0 ceph-mon[74313]: pgmap v1292: 321 pgs: 321 active+clean; 167 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 74 op/s
Oct 11 08:48:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:48:53 compute-0 nova_compute[260935]: 2025-10-11 08:48:53.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:48:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 189 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.258 2 DEBUG nova.network.neutron [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.292 2 DEBUG oslo_concurrency.lockutils [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.293 2 DEBUG oslo_concurrency.lockutils [req-ec289248-6f0c-4c82-b400-62ea19b8bdf1 req-ca978624-fe00-472c-bc89-4e677bcfb236 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.293 2 DEBUG nova.network.neutron [req-ec289248-6f0c-4c82-b400-62ea19b8bdf1 req-ca978624-fe00-472c-bc89-4e677bcfb236 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing network info cache for port 723ff3bf-882c-4198-afc8-a31026a4ccfc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.296 2 DEBUG nova.virt.libvirt.vif [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.296 2 DEBUG nova.network.os_vif_util [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.297 2 DEBUG nova.network.os_vif_util [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:fd:25,bridge_name='br-int',has_traffic_filtering=True,id=723ff3bf-882c-4198-afc8-a31026a4ccfc,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap723ff3bf-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.297 2 DEBUG os_vif [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:fd:25,bridge_name='br-int',has_traffic_filtering=True,id=723ff3bf-882c-4198-afc8-a31026a4ccfc,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap723ff3bf-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.298 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.298 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.301 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap723ff3bf-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.302 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap723ff3bf-88, col_values=(('external_ids', {'iface-id': '723ff3bf-882c-4198-afc8-a31026a4ccfc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:fd:25', 'vm-uuid': '057de6d9-3f9e-4b23-9019-f62ba6b453e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:54 compute-0 NetworkManager[44960]: <info>  [1760172534.3064] manager: (tap723ff3bf-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.336 2 INFO os_vif [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:fd:25,bridge_name='br-int',has_traffic_filtering=True,id=723ff3bf-882c-4198-afc8-a31026a4ccfc,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap723ff3bf-88')
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.337 2 DEBUG nova.virt.libvirt.vif [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.338 2 DEBUG nova.network.os_vif_util [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.339 2 DEBUG nova.network.os_vif_util [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:fd:25,bridge_name='br-int',has_traffic_filtering=True,id=723ff3bf-882c-4198-afc8-a31026a4ccfc,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap723ff3bf-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.342 2 DEBUG nova.virt.libvirt.guest [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] attach device xml: <interface type="ethernet">
Oct 11 08:48:54 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:90:fd:25"/>
Oct 11 08:48:54 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 08:48:54 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:48:54 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 08:48:54 compute-0 nova_compute[260935]:   <target dev="tap723ff3bf-88"/>
Oct 11 08:48:54 compute-0 nova_compute[260935]: </interface>
Oct 11 08:48:54 compute-0 nova_compute[260935]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 08:48:54 compute-0 kernel: tap723ff3bf-88: entered promiscuous mode
Oct 11 08:48:54 compute-0 NetworkManager[44960]: <info>  [1760172534.3581] manager: (tap723ff3bf-88): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:54 compute-0 ovn_controller[152945]: 2025-10-11T08:48:54Z|00086|binding|INFO|Claiming lport 723ff3bf-882c-4198-afc8-a31026a4ccfc for this chassis.
Oct 11 08:48:54 compute-0 ovn_controller[152945]: 2025-10-11T08:48:54Z|00087|binding|INFO|723ff3bf-882c-4198-afc8-a31026a4ccfc: Claiming fa:16:3e:90:fd:25 10.100.0.6
Oct 11 08:48:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.372 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:fd:25 10.100.0.6'], port_security=['fa:16:3e:90:fd:25 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '057de6d9-3f9e-4b23-9019-f62ba6b453e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=723ff3bf-882c-4198-afc8-a31026a4ccfc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:48:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.374 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 723ff3bf-882c-4198-afc8-a31026a4ccfc in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis
Oct 11 08:48:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.377 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:48:54 compute-0 ovn_controller[152945]: 2025-10-11T08:48:54Z|00088|binding|INFO|Setting lport 723ff3bf-882c-4198-afc8-a31026a4ccfc ovn-installed in OVS
Oct 11 08:48:54 compute-0 ovn_controller[152945]: 2025-10-11T08:48:54Z|00089|binding|INFO|Setting lport 723ff3bf-882c-4198-afc8-a31026a4ccfc up in Southbound
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.401 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ce68f45d-d6fd-4fee-ad66-5bece935c7e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:54 compute-0 systemd-udevd[291272]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:48:54 compute-0 NetworkManager[44960]: <info>  [1760172534.4491] device (tap723ff3bf-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:48:54 compute-0 NetworkManager[44960]: <info>  [1760172534.4505] device (tap723ff3bf-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.459 2 DEBUG nova.virt.libvirt.driver [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.460 2 DEBUG nova.virt.libvirt.driver [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.460 2 DEBUG nova.virt.libvirt.driver [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:97:65:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.460 2 DEBUG nova.virt.libvirt.driver [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:f2:dc:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.461 2 DEBUG nova.virt.libvirt.driver [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:90:fd:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:48:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.465 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb1861a-3fa5-4661-a402-c364094d9002]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.468 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fc265eac-e3fb-467b-96bf-48308fd34f03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.487 2 DEBUG nova.virt.libvirt.guest [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:48:54 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:48:54 compute-0 nova_compute[260935]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 08:48:54 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:48:54</nova:creationTime>
Oct 11 08:48:54 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:48:54 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:48:54 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:48:54 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:48:54 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:48:54 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:48:54 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:48:54 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:48:54 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:48:54 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:48:54 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:48:54 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:48:54 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:48:54 compute-0 nova_compute[260935]:     <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 08:48:54 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 08:48:54 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:48:54 compute-0 nova_compute[260935]:     <nova:port uuid="b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e">
Oct 11 08:48:54 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 08:48:54 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:48:54 compute-0 nova_compute[260935]:     <nova:port uuid="723ff3bf-882c-4198-afc8-a31026a4ccfc">
Oct 11 08:48:54 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 08:48:54 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:48:54 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:48:54 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:48:54 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 08:48:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.508 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9a22c43a-9626-4821-b458-5c180ec938e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.514 2 DEBUG oslo_concurrency.lockutils [None req-228cfb70-d764-4258-a817-7f1d31a627d1 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.533 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[79ed8fad-ca9f-47eb-82d0-e96d44856871]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434479, 'reachable_time': 16585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291278, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.561 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a03e968-0ad7-4a48-bec8-18994dfef655]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434496, 'tstamp': 434496}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291279, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434502, 'tstamp': 434502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291279, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:48:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.563 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.602 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.603 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:48:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.604 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:48:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:54.605 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:48:54 compute-0 nova_compute[260935]: 2025-10-11 08:48:54.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:48:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:48:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:48:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:48:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:48:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:48:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:48:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:48:54
Oct 11 08:48:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:48:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:48:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', '.mgr', 'backups', '.rgw.root', 'vms', 'images', 'default.rgw.meta', 'default.rgw.log']
Oct 11 08:48:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:48:55 compute-0 ceph-mon[74313]: pgmap v1293: 321 pgs: 321 active+clean; 189 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct 11 08:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:48:55 compute-0 nova_compute[260935]: 2025-10-11 08:48:55.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 189 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 782 KiB/s rd, 2.0 MiB/s wr, 61 op/s
Oct 11 08:48:55 compute-0 nova_compute[260935]: 2025-10-11 08:48:55.967 2 DEBUG nova.network.neutron [req-ec289248-6f0c-4c82-b400-62ea19b8bdf1 req-ca978624-fe00-472c-bc89-4e677bcfb236 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updated VIF entry in instance network info cache for port 723ff3bf-882c-4198-afc8-a31026a4ccfc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:48:55 compute-0 nova_compute[260935]: 2025-10-11 08:48:55.968 2 DEBUG nova.network.neutron [req-ec289248-6f0c-4c82-b400-62ea19b8bdf1 req-ca978624-fe00-472c-bc89-4e677bcfb236 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:48:55 compute-0 nova_compute[260935]: 2025-10-11 08:48:55.990 2 DEBUG oslo_concurrency.lockutils [req-ec289248-6f0c-4c82-b400-62ea19b8bdf1 req-ca978624-fe00-472c-bc89-4e677bcfb236 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:48:56 compute-0 ovn_controller[152945]: 2025-10-11T08:48:56Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:90:fd:25 10.100.0.6
Oct 11 08:48:56 compute-0 ovn_controller[152945]: 2025-10-11T08:48:56Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:90:fd:25 10.100.0.6
Oct 11 08:48:56 compute-0 nova_compute[260935]: 2025-10-11 08:48:56.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:57 compute-0 ceph-mon[74313]: pgmap v1294: 321 pgs: 321 active+clean; 189 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 782 KiB/s rd, 2.0 MiB/s wr, 61 op/s
Oct 11 08:48:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:57.599 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:48:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:48:57.602 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 08:48:57 compute-0 nova_compute[260935]: 2025-10-11 08:48:57.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:57 compute-0 nova_compute[260935]: 2025-10-11 08:48:57.708 2 DEBUG nova.compute.manager [req-d6b02d94-a9ad-43a1-8fb6-a75a0d91be81 req-683ce51a-904b-40d5-9df7-2cd69de6cc8a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:57 compute-0 nova_compute[260935]: 2025-10-11 08:48:57.709 2 DEBUG oslo_concurrency.lockutils [req-d6b02d94-a9ad-43a1-8fb6-a75a0d91be81 req-683ce51a-904b-40d5-9df7-2cd69de6cc8a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:57 compute-0 nova_compute[260935]: 2025-10-11 08:48:57.709 2 DEBUG oslo_concurrency.lockutils [req-d6b02d94-a9ad-43a1-8fb6-a75a0d91be81 req-683ce51a-904b-40d5-9df7-2cd69de6cc8a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:57 compute-0 nova_compute[260935]: 2025-10-11 08:48:57.709 2 DEBUG oslo_concurrency.lockutils [req-d6b02d94-a9ad-43a1-8fb6-a75a0d91be81 req-683ce51a-904b-40d5-9df7-2cd69de6cc8a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:57 compute-0 nova_compute[260935]: 2025-10-11 08:48:57.710 2 DEBUG nova.compute.manager [req-d6b02d94-a9ad-43a1-8fb6-a75a0d91be81 req-683ce51a-904b-40d5-9df7-2cd69de6cc8a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:48:57 compute-0 nova_compute[260935]: 2025-10-11 08:48:57.710 2 WARNING nova.compute.manager [req-d6b02d94-a9ad-43a1-8fb6-a75a0d91be81 req-683ce51a-904b-40d5-9df7-2cd69de6cc8a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc for instance with vm_state active and task_state None.
Oct 11 08:48:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 200 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 911 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Oct 11 08:48:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:48:58 compute-0 nova_compute[260935]: 2025-10-11 08:48:58.594 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:58 compute-0 nova_compute[260935]: 2025-10-11 08:48:58.594 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:58 compute-0 nova_compute[260935]: 2025-10-11 08:48:58.613 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:48:58 compute-0 nova_compute[260935]: 2025-10-11 08:48:58.697 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:58 compute-0 nova_compute[260935]: 2025-10-11 08:48:58.698 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:58 compute-0 nova_compute[260935]: 2025-10-11 08:48:58.708 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:48:58 compute-0 nova_compute[260935]: 2025-10-11 08:48:58.708 2 INFO nova.compute.claims [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:48:58 compute-0 podman[291280]: 2025-10-11 08:48:58.797921423 +0000 UTC m=+0.093685269 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 11 08:48:58 compute-0 nova_compute[260935]: 2025-10-11 08:48:58.866 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:59 compute-0 ceph-mon[74313]: pgmap v1295: 321 pgs: 321 active+clean; 200 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 911 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Oct 11 08:48:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:48:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3815789145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.368 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.379 2 DEBUG nova.compute.provider_tree [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.412 2 DEBUG nova.scheduler.client.report [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.442 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.443 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.508 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.509 2 DEBUG nova.network.neutron [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.533 2 INFO nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.551 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.628 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.631 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.632 2 INFO nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Creating image(s)
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.669 2 DEBUG nova.storage.rbd_utils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.707 2 DEBUG nova.storage.rbd_utils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.750 2 DEBUG nova.storage.rbd_utils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.756 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.797 2 DEBUG nova.policy [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a51c2680b31e40b1908642ef8795c6f0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '39d3043a7835403392c659fbb2fe0b22', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.802 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:48:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 200 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 08:48:59 compute-0 ovn_controller[152945]: 2025-10-11T08:48:59Z|00090|binding|INFO|Releasing lport 0d142335-06a1-47f2-b37a-cf32717864bc from this chassis (sb_readonly=0)
Oct 11 08:48:59 compute-0 ovn_controller[152945]: 2025-10-11T08:48:59Z|00091|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.842 2 DEBUG nova.compute.manager [req-365e028f-960f-42a4-b796-ff72a9b9357b req-206e1e96-d468-49f1-9b06-ede7901fd7c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.842 2 DEBUG oslo_concurrency.lockutils [req-365e028f-960f-42a4-b796-ff72a9b9357b req-206e1e96-d468-49f1-9b06-ede7901fd7c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.842 2 DEBUG oslo_concurrency.lockutils [req-365e028f-960f-42a4-b796-ff72a9b9357b req-206e1e96-d468-49f1-9b06-ede7901fd7c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.842 2 DEBUG oslo_concurrency.lockutils [req-365e028f-960f-42a4-b796-ff72a9b9357b req-206e1e96-d468-49f1-9b06-ede7901fd7c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.843 2 DEBUG nova.compute.manager [req-365e028f-960f-42a4-b796-ff72a9b9357b req-206e1e96-d468-49f1-9b06-ede7901fd7c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.843 2 WARNING nova.compute.manager [req-365e028f-960f-42a4-b796-ff72a9b9357b req-206e1e96-d468-49f1-9b06-ede7901fd7c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc for instance with vm_state active and task_state None.
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.852 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.853 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.853 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.854 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.876 2 DEBUG nova.storage.rbd_utils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.879 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:48:59 compute-0 nova_compute[260935]: 2025-10-11 08:48:59.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:00 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3815789145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.168 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.260 2 DEBUG nova.storage.rbd_utils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] resizing rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.299 2 DEBUG oslo_concurrency.lockutils [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-c9724939-cd91-44bb-a86b-72bf93c2a818" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.300 2 DEBUG oslo_concurrency.lockutils [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-c9724939-cd91-44bb-a86b-72bf93c2a818" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.300 2 DEBUG nova.objects.instance [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.381 2 DEBUG nova.objects.instance [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'migration_context' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.397 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.398 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Ensure instance console log exists: /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.398 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.399 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.399 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.475 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "6224c79a-8a36-490c-863a-67251512732f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.476 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.484 2 DEBUG nova.network.neutron [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Successfully created port: a3944a31-9560-49ae-b2a5-caaf2736993a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.491 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.558 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.559 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.566 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.567 2 INFO nova.compute.claims [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.718 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.765 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:49:00 compute-0 nova_compute[260935]: 2025-10-11 08:49:00.766 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:49:01 compute-0 ceph-mon[74313]: pgmap v1296: 321 pgs: 321 active+clean; 200 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 08:49:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:49:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3196503439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.202 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.210 2 DEBUG nova.compute.provider_tree [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.224 2 DEBUG nova.objects.instance [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_requests' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.228 2 DEBUG nova.scheduler.client.report [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.236 2 DEBUG nova.network.neutron [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.249 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.250 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.295 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.295 2 DEBUG nova.network.neutron [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.334 2 INFO nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.358 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.461 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.463 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.464 2 INFO nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Creating image(s)
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.498 2 DEBUG nova.storage.rbd_utils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 6224c79a-8a36-490c-863a-67251512732f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.538 2 DEBUG nova.storage.rbd_utils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 6224c79a-8a36-490c-863a-67251512732f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.572 2 DEBUG nova.storage.rbd_utils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 6224c79a-8a36-490c-863a-67251512732f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.577 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.609 2 DEBUG nova.policy [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a51c2680b31e40b1908642ef8795c6f0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '39d3043a7835403392c659fbb2fe0b22', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.643 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.644 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.644 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.645 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.680 2 DEBUG nova.storage.rbd_utils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 6224c79a-8a36-490c-863a-67251512732f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.685 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 6224c79a-8a36-490c-863a-67251512732f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 200 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.918 2 DEBUG nova.policy [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:01 compute-0 nova_compute[260935]: 2025-10-11 08:49:01.946 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 6224c79a-8a36-490c-863a-67251512732f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:02 compute-0 nova_compute[260935]: 2025-10-11 08:49:02.010 2 DEBUG nova.storage.rbd_utils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] resizing rbd image 6224c79a-8a36-490c-863a-67251512732f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:49:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3196503439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:02 compute-0 nova_compute[260935]: 2025-10-11 08:49:02.089 2 DEBUG nova.network.neutron [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Successfully updated port: a3944a31-9560-49ae-b2a5-caaf2736993a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:49:02 compute-0 nova_compute[260935]: 2025-10-11 08:49:02.141 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:02 compute-0 nova_compute[260935]: 2025-10-11 08:49:02.141 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquired lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:02 compute-0 nova_compute[260935]: 2025-10-11 08:49:02.142 2 DEBUG nova.network.neutron [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:49:02 compute-0 nova_compute[260935]: 2025-10-11 08:49:02.152 2 DEBUG nova.objects.instance [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'migration_context' on Instance uuid 6224c79a-8a36-490c-863a-67251512732f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:02 compute-0 nova_compute[260935]: 2025-10-11 08:49:02.184 2 DEBUG nova.compute.manager [req-8462493e-83ba-4a62-b180-090142cc0609 req-290da09a-a1ae-488c-820f-fd4ea6533a9b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-changed-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:02 compute-0 nova_compute[260935]: 2025-10-11 08:49:02.185 2 DEBUG nova.compute.manager [req-8462493e-83ba-4a62-b180-090142cc0609 req-290da09a-a1ae-488c-820f-fd4ea6533a9b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Refreshing instance network info cache due to event network-changed-a3944a31-9560-49ae-b2a5-caaf2736993a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:49:02 compute-0 nova_compute[260935]: 2025-10-11 08:49:02.185 2 DEBUG oslo_concurrency.lockutils [req-8462493e-83ba-4a62-b180-090142cc0609 req-290da09a-a1ae-488c-820f-fd4ea6533a9b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:02 compute-0 nova_compute[260935]: 2025-10-11 08:49:02.188 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:49:02 compute-0 nova_compute[260935]: 2025-10-11 08:49:02.188 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Ensure instance console log exists: /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:49:02 compute-0 nova_compute[260935]: 2025-10-11 08:49:02.189 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:02 compute-0 nova_compute[260935]: 2025-10-11 08:49:02.190 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:02 compute-0 nova_compute[260935]: 2025-10-11 08:49:02.190 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:02 compute-0 nova_compute[260935]: 2025-10-11 08:49:02.334 2 DEBUG nova.network.neutron [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:49:02 compute-0 nova_compute[260935]: 2025-10-11 08:49:02.394 2 DEBUG nova.network.neutron [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Successfully created port: b9569700-d7dc-40dc-a27c-1f72d3675682 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:49:02 compute-0 nova_compute[260935]: 2025-10-11 08:49:02.766 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:49:03 compute-0 ceph-mon[74313]: pgmap v1297: 321 pgs: 321 active+clean; 200 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.123 2 DEBUG nova.network.neutron [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Successfully updated port: c9724939-cd91-44bb-a86b-72bf93c2a818 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.147 2 DEBUG oslo_concurrency.lockutils [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.148 2 DEBUG oslo_concurrency.lockutils [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.148 2 DEBUG nova.network.neutron [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:49:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.331 2 WARNING nova.network.neutron [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] fff13396-b787-4c6e-9112-a1c2ef57b26d already exists in list: networks containing: ['fff13396-b787-4c6e-9112-a1c2ef57b26d']. ignoring it
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.332 2 WARNING nova.network.neutron [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] fff13396-b787-4c6e-9112-a1c2ef57b26d already exists in list: networks containing: ['fff13396-b787-4c6e-9112-a1c2ef57b26d']. ignoring it
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.334 2 WARNING nova.network.neutron [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] fff13396-b787-4c6e-9112-a1c2ef57b26d already exists in list: networks containing: ['fff13396-b787-4c6e-9112-a1c2ef57b26d']. ignoring it
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.376 2 DEBUG nova.network.neutron [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Successfully updated port: b9569700-d7dc-40dc-a27c-1f72d3675682 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.394 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "refresh_cache-6224c79a-8a36-490c-863a-67251512732f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.394 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquired lock "refresh_cache-6224c79a-8a36-490c-863a-67251512732f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.395 2 DEBUG nova.network.neutron [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.531 2 DEBUG nova.compute.manager [req-17b33343-2432-45c8-acf2-16c754ad2b86 req-062752ad-273d-4646-93c1-c5b55a8f0515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Received event network-changed-b9569700-d7dc-40dc-a27c-1f72d3675682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.531 2 DEBUG nova.compute.manager [req-17b33343-2432-45c8-acf2-16c754ad2b86 req-062752ad-273d-4646-93c1-c5b55a8f0515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Refreshing instance network info cache due to event network-changed-b9569700-d7dc-40dc-a27c-1f72d3675682. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.532 2 DEBUG oslo_concurrency.lockutils [req-17b33343-2432-45c8-acf2-16c754ad2b86 req-062752ad-273d-4646-93c1-c5b55a8f0515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-6224c79a-8a36-490c-863a-67251512732f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.578 2 DEBUG nova.network.neutron [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.583 2 DEBUG nova.network.neutron [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Updating instance_info_cache with network_info: [{"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.603 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Releasing lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.603 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance network_info: |[{"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.604 2 DEBUG oslo_concurrency.lockutils [req-8462493e-83ba-4a62-b180-090142cc0609 req-290da09a-a1ae-488c-820f-fd4ea6533a9b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.604 2 DEBUG nova.network.neutron [req-8462493e-83ba-4a62-b180-090142cc0609 req-290da09a-a1ae-488c-820f-fd4ea6533a9b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Refreshing network info cache for port a3944a31-9560-49ae-b2a5-caaf2736993a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.607 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Start _get_guest_xml network_info=[{"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.612 2 WARNING nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.617 2 DEBUG nova.virt.libvirt.host [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.618 2 DEBUG nova.virt.libvirt.host [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.626 2 DEBUG nova.virt.libvirt.host [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.626 2 DEBUG nova.virt.libvirt.host [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.627 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.627 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.627 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.628 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.628 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.628 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.628 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.628 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.628 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.628 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.629 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.629 2 DEBUG nova.virt.hardware [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.631 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.735 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.736 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:49:03 compute-0 nova_compute[260935]: 2025-10-11 08:49:03.737 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 293 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 5.7 MiB/s wr, 117 op/s
Oct 11 08:49:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:49:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2379102154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.127 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.167 2 DEBUG nova.storage.rbd_utils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.174 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:49:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3340845604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.225 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.401 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.401 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.406 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.406 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:49:04 compute-0 podman[291739]: 2025-10-11 08:49:04.458302233 +0000 UTC m=+0.159876041 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Oct 11 08:49:04 compute-0 podman[291740]: 2025-10-11 08:49:04.466515237 +0000 UTC m=+0.162498796 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002211811999413948 of space, bias 1.0, pg target 0.6635435998241844 quantized to 32 (current 32)
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:49:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:49:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:04.604 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.690 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.692 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4108MB free_disk=59.85559844970703GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.692 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.692 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:49:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2471832843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.720 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.721 2 DEBUG nova.virt.libvirt.vif [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1988839763',display_name='tempest-ServersAdminTestJSON-server-1988839763',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1988839763',id=22,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-nxz1v0k9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:48:59Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=77ff3a9d-3eb2-40ed-ad12-6367fd4e555f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.721 2 DEBUG nova.network.os_vif_util [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.722 2 DEBUG nova.network.os_vif_util [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.723 2 DEBUG nova.objects.instance [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'pci_devices' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.747 2 DEBUG nova.network.neutron [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Updating instance_info_cache with network_info: [{"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.755 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:49:04 compute-0 nova_compute[260935]:   <uuid>77ff3a9d-3eb2-40ed-ad12-6367fd4e555f</uuid>
Oct 11 08:49:04 compute-0 nova_compute[260935]:   <name>instance-00000016</name>
Oct 11 08:49:04 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:49:04 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:49:04 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersAdminTestJSON-server-1988839763</nova:name>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:49:03</nova:creationTime>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:49:04 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:49:04 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:49:04 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:49:04 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:49:04 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:49:04 compute-0 nova_compute[260935]:         <nova:user uuid="a51c2680b31e40b1908642ef8795c6f0">tempest-ServersAdminTestJSON-1756812845-project-member</nova:user>
Oct 11 08:49:04 compute-0 nova_compute[260935]:         <nova:project uuid="39d3043a7835403392c659fbb2fe0b22">tempest-ServersAdminTestJSON-1756812845</nova:project>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:49:04 compute-0 nova_compute[260935]:         <nova:port uuid="a3944a31-9560-49ae-b2a5-caaf2736993a">
Oct 11 08:49:04 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:49:04 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:49:04 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <system>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <entry name="serial">77ff3a9d-3eb2-40ed-ad12-6367fd4e555f</entry>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <entry name="uuid">77ff3a9d-3eb2-40ed-ad12-6367fd4e555f</entry>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     </system>
Oct 11 08:49:04 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:49:04 compute-0 nova_compute[260935]:   <os>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:   </os>
Oct 11 08:49:04 compute-0 nova_compute[260935]:   <features>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:   </features>
Oct 11 08:49:04 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:49:04 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:49:04 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk">
Oct 11 08:49:04 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:49:04 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config">
Oct 11 08:49:04 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:49:04 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:bf:d8:0b"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <target dev="tapa3944a31-95"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/console.log" append="off"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <video>
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     </video>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:49:04 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:49:04 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:49:04 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:49:04 compute-0 nova_compute[260935]: </domain>
Oct 11 08:49:04 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.756 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Preparing to wait for external event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.756 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.756 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.756 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.757 2 DEBUG nova.virt.libvirt.vif [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1988839763',display_name='tempest-ServersAdminTestJSON-server-1988839763',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1988839763',id=22,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-nxz1v0k9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:48:59Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=77ff3a9d-3eb2-40ed-ad12-6367fd4e555f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.758 2 DEBUG nova.network.os_vif_util [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.758 2 DEBUG nova.network.os_vif_util [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.759 2 DEBUG os_vif [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.760 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.760 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.764 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3944a31-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.764 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3944a31-95, col_values=(('external_ids', {'iface-id': 'a3944a31-9560-49ae-b2a5-caaf2736993a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:d8:0b', 'vm-uuid': '77ff3a9d-3eb2-40ed-ad12-6367fd4e555f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:04 compute-0 NetworkManager[44960]: <info>  [1760172544.7681] manager: (tapa3944a31-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.776 2 INFO os_vif [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95')
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.779 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Releasing lock "refresh_cache-6224c79a-8a36-490c-863a-67251512732f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.780 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Instance network_info: |[{"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.783 2 DEBUG oslo_concurrency.lockutils [req-17b33343-2432-45c8-acf2-16c754ad2b86 req-062752ad-273d-4646-93c1-c5b55a8f0515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-6224c79a-8a36-490c-863a-67251512732f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.783 2 DEBUG nova.network.neutron [req-17b33343-2432-45c8-acf2-16c754ad2b86 req-062752ad-273d-4646-93c1-c5b55a8f0515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Refreshing network info cache for port b9569700-d7dc-40dc-a27c-1f72d3675682 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.787 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Start _get_guest_xml network_info=[{"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.796 2 WARNING nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.807 2 DEBUG nova.virt.libvirt.host [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.809 2 DEBUG nova.virt.libvirt.host [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.810 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.810 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance e10cd028-76c1-4eb5-be43-f51e4da8abc1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.810 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.811 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 6224c79a-8a36-490c-863a-67251512732f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.811 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.811 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.820 2 DEBUG nova.virt.libvirt.host [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.821 2 DEBUG nova.virt.libvirt.host [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.821 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.821 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.822 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.822 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.823 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.823 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.823 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.823 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.824 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.824 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.824 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.824 2 DEBUG nova.virt.hardware [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.827 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.866 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.901 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.902 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.902 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No VIF found with MAC fa:16:3e:bf:d8:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.903 2 INFO nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Using config drive
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.929 2 DEBUG nova.storage.rbd_utils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.939 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.939 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.974 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 08:49:04 compute-0 nova_compute[260935]: 2025-10-11 08:49:04.997 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 08:49:05 compute-0 ceph-mon[74313]: pgmap v1298: 321 pgs: 321 active+clean; 293 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 5.7 MiB/s wr, 117 op/s
Oct 11 08:49:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2379102154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3340845604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2471832843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.127 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:49:05 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/581308956' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.312 2 DEBUG nova.network.neutron [req-8462493e-83ba-4a62-b180-090142cc0609 req-290da09a-a1ae-488c-820f-fd4ea6533a9b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Updated VIF entry in instance network info cache for port a3944a31-9560-49ae-b2a5-caaf2736993a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.313 2 DEBUG nova.network.neutron [req-8462493e-83ba-4a62-b180-090142cc0609 req-290da09a-a1ae-488c-820f-fd4ea6533a9b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Updating instance_info_cache with network_info: [{"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.338 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.375 2 DEBUG nova.storage.rbd_utils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 6224c79a-8a36-490c-863a-67251512732f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.381 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.411 2 DEBUG oslo_concurrency.lockutils [req-8462493e-83ba-4a62-b180-090142cc0609 req-290da09a-a1ae-488c-820f-fd4ea6533a9b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.616 2 INFO nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Creating config drive at /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.621 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpof15y_f7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:49:05 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3215393376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.654 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.662 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.683 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.738 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.770 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpof15y_f7" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:49:05 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1585438174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.811 2 DEBUG nova.storage.rbd_utils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.817 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 293 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.855 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.859 2 DEBUG nova.virt.libvirt.vif [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:48:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-993933628',display_name='tempest-ServersAdminTestJSON-server-993933628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-993933628',id=23,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-64hxjs52',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:01Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=6224c79a-8a36-490c-863a-67251512732f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.859 2 DEBUG nova.network.os_vif_util [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.861 2 DEBUG nova.network.os_vif_util [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:3f:af,bridge_name='br-int',has_traffic_filtering=True,id=b9569700-d7dc-40dc-a27c-1f72d3675682,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9569700-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.863 2 DEBUG nova.objects.instance [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6224c79a-8a36-490c-863a-67251512732f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.892 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:49:05 compute-0 nova_compute[260935]:   <uuid>6224c79a-8a36-490c-863a-67251512732f</uuid>
Oct 11 08:49:05 compute-0 nova_compute[260935]:   <name>instance-00000017</name>
Oct 11 08:49:05 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:49:05 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:49:05 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersAdminTestJSON-server-993933628</nova:name>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:49:04</nova:creationTime>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:49:05 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:49:05 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:49:05 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:49:05 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:49:05 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:49:05 compute-0 nova_compute[260935]:         <nova:user uuid="a51c2680b31e40b1908642ef8795c6f0">tempest-ServersAdminTestJSON-1756812845-project-member</nova:user>
Oct 11 08:49:05 compute-0 nova_compute[260935]:         <nova:project uuid="39d3043a7835403392c659fbb2fe0b22">tempest-ServersAdminTestJSON-1756812845</nova:project>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:49:05 compute-0 nova_compute[260935]:         <nova:port uuid="b9569700-d7dc-40dc-a27c-1f72d3675682">
Oct 11 08:49:05 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:49:05 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:49:05 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <system>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <entry name="serial">6224c79a-8a36-490c-863a-67251512732f</entry>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <entry name="uuid">6224c79a-8a36-490c-863a-67251512732f</entry>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     </system>
Oct 11 08:49:05 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:49:05 compute-0 nova_compute[260935]:   <os>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:   </os>
Oct 11 08:49:05 compute-0 nova_compute[260935]:   <features>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:   </features>
Oct 11 08:49:05 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:49:05 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:49:05 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/6224c79a-8a36-490c-863a-67251512732f_disk">
Oct 11 08:49:05 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:49:05 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/6224c79a-8a36-490c-863a-67251512732f_disk.config">
Oct 11 08:49:05 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:49:05 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:20:3f:af"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <target dev="tapb9569700-d7"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f/console.log" append="off"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <video>
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     </video>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:49:05 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:49:05 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:49:05 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:49:05 compute-0 nova_compute[260935]: </domain>
Oct 11 08:49:05 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.893 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Preparing to wait for external event network-vif-plugged-b9569700-d7dc-40dc-a27c-1f72d3675682 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.893 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "6224c79a-8a36-490c-863a-67251512732f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.894 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.894 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.895 2 DEBUG nova.virt.libvirt.vif [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:48:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-993933628',display_name='tempest-ServersAdminTestJSON-server-993933628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-993933628',id=23,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-64hxjs52',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:01Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=6224c79a-8a36-490c-863a-67251512732f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.896 2 DEBUG nova.network.os_vif_util [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.897 2 DEBUG nova.network.os_vif_util [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:3f:af,bridge_name='br-int',has_traffic_filtering=True,id=b9569700-d7dc-40dc-a27c-1f72d3675682,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9569700-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.898 2 DEBUG os_vif [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:3f:af,bridge_name='br-int',has_traffic_filtering=True,id=b9569700-d7dc-40dc-a27c-1f72d3675682,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9569700-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.899 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.900 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.905 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9569700-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.906 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb9569700-d7, col_values=(('external_ids', {'iface-id': 'b9569700-d7dc-40dc-a27c-1f72d3675682', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:3f:af', 'vm-uuid': '6224c79a-8a36-490c-863a-67251512732f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:05 compute-0 NetworkManager[44960]: <info>  [1760172545.9101] manager: (tapb9569700-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.924 2 INFO os_vif [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:3f:af,bridge_name='br-int',has_traffic_filtering=True,id=b9569700-d7dc-40dc-a27c-1f72d3675682,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9569700-d7')
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.988 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.988 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.989 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No VIF found with MAC fa:16:3e:20:3f:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:49:05 compute-0 nova_compute[260935]: 2025-10-11 08:49:05.989 2 INFO nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Using config drive
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.032 2 DEBUG nova.storage.rbd_utils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 6224c79a-8a36-490c-863a-67251512732f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.049 2 DEBUG nova.compute.manager [req-e6a8f143-dc32-4a50-990c-6f8e732bf55d req-cc0e207a-337b-4976-b32f-222e42b5bae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-changed-c9724939-cd91-44bb-a86b-72bf93c2a818 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.049 2 DEBUG nova.compute.manager [req-e6a8f143-dc32-4a50-990c-6f8e732bf55d req-cc0e207a-337b-4976-b32f-222e42b5bae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing instance network info cache due to event network-changed-c9724939-cd91-44bb-a86b-72bf93c2a818. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.050 2 DEBUG oslo_concurrency.lockutils [req-e6a8f143-dc32-4a50-990c-6f8e732bf55d req-cc0e207a-337b-4976-b32f-222e42b5bae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.051 2 DEBUG oslo_concurrency.processutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.052 2 INFO nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Deleting local config drive /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config because it was imported into RBD.
Oct 11 08:49:06 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/581308956' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:06 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3215393376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:06 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1585438174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:06 compute-0 NetworkManager[44960]: <info>  [1760172546.1277] manager: (tapa3944a31-95): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Oct 11 08:49:06 compute-0 kernel: tapa3944a31-95: entered promiscuous mode
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:06 compute-0 ovn_controller[152945]: 2025-10-11T08:49:06Z|00092|binding|INFO|Claiming lport a3944a31-9560-49ae-b2a5-caaf2736993a for this chassis.
Oct 11 08:49:06 compute-0 ovn_controller[152945]: 2025-10-11T08:49:06Z|00093|binding|INFO|a3944a31-9560-49ae-b2a5-caaf2736993a: Claiming fa:16:3e:bf:d8:0b 10.100.0.11
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.145 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:d8:0b 10.100.0.11'], port_security=['fa:16:3e:bf:d8:0b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '77ff3a9d-3eb2-40ed-ad12-6367fd4e555f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a3944a31-9560-49ae-b2a5-caaf2736993a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.148 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a3944a31-9560-49ae-b2a5-caaf2736993a in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 bound to our chassis
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.152 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.169 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7b89cf53-305c-41de-96b8-19b9eaf77592]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.171 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap09ac2cb6-31 in ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.176 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap09ac2cb6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.177 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b973b54f-b493-4d82-9c66-e34466396168]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.178 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f65ad05a-d8b7-48d7-9584-563ad156f800]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:06 compute-0 ovn_controller[152945]: 2025-10-11T08:49:06Z|00094|binding|INFO|Setting lport a3944a31-9560-49ae-b2a5-caaf2736993a ovn-installed in OVS
Oct 11 08:49:06 compute-0 ovn_controller[152945]: 2025-10-11T08:49:06Z|00095|binding|INFO|Setting lport a3944a31-9560-49ae-b2a5-caaf2736993a up in Southbound
Oct 11 08:49:06 compute-0 systemd-machined[215705]: New machine qemu-24-instance-00000016.
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:06 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000016.
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.194 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b95f9c9a-5dbd-4c83-bf11-ea677ea4a84b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:06 compute-0 systemd-udevd[291984]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.231 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[03533bd3-f21c-42ab-8d34-38b0166133b6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:06 compute-0 NetworkManager[44960]: <info>  [1760172546.2436] device (tapa3944a31-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:49:06 compute-0 NetworkManager[44960]: <info>  [1760172546.2449] device (tapa3944a31-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.285 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[450dca3f-01d0-4269-9f7c-0d4f69907302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.295 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[70530710-ec3d-4f2c-a4b2-a5c0d12a5749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:06 compute-0 systemd-udevd[291987]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:49:06 compute-0 NetworkManager[44960]: <info>  [1760172546.2967] manager: (tap09ac2cb6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.351 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6549e75a-49f3-4589-9c1a-e68a78ba084a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.367 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8f285bc8-ec65-47d2-a75e-846575f4c83a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:06 compute-0 NetworkManager[44960]: <info>  [1760172546.4071] device (tap09ac2cb6-30): carrier: link connected
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.418 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.418 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.419 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.417 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[02d645f9-e525-4942-8a83-9cd340ed89cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.420 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.420 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.423 2 INFO nova.compute.manager [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Terminating instance
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.425 2 DEBUG nova.compute.manager [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.454 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[401a32ca-1f67-400f-b79b-067d37d46088]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292017, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:06 compute-0 kernel: tapb5bae935-76 (unregistering): left promiscuous mode
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.483 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0b642470-b64f-4a31-ac21-879b7be4f6b6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:b233'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439110, 'tstamp': 439110}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292018, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:06 compute-0 NetworkManager[44960]: <info>  [1760172546.4887] device (tapb5bae935-76): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:49:06 compute-0 ovn_controller[152945]: 2025-10-11T08:49:06Z|00096|binding|INFO|Releasing lport b5bae935-7639-4a76-988c-e09d0c6f5fb1 from this chassis (sb_readonly=0)
Oct 11 08:49:06 compute-0 ovn_controller[152945]: 2025-10-11T08:49:06Z|00097|binding|INFO|Setting lport b5bae935-7639-4a76-988c-e09d0c6f5fb1 down in Southbound
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:06 compute-0 ovn_controller[152945]: 2025-10-11T08:49:06Z|00098|binding|INFO|Removing iface tapb5bae935-76 ovn-installed in OVS
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.515 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:83:55 10.100.0.11'], port_security=['fa:16:3e:ad:83:55 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e10cd028-76c1-4eb5-be43-f51e4da8abc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2880dd81-df95-47c3-aa3d-53c3f2548f15', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d6c7a0d842f4dcb95421a3f47580c49', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7275978f-63aa-48f1-b8b7-cd0104b14473', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c8074e3-09b3-477f-801a-39440484f747, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b5bae935-7639-4a76-988c-e09d0c6f5fb1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.521 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[df60ab66-8c36-4892-86ed-f4e4f64150de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292021, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:06 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Deactivated successfully.
Oct 11 08:49:06 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Consumed 13.070s CPU time.
Oct 11 08:49:06 compute-0 systemd-machined[215705]: Machine qemu-23-instance-00000015 terminated.
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.572 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aec43702-9e95-4137-ad69-d23b80acf9b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.661 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4c4a47-8ca6-4a8b-ac7b-21fdd49bd178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.664 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.664 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.665 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:06 compute-0 NetworkManager[44960]: <info>  [1760172546.6680] manager: (tap09ac2cb6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.674 2 INFO nova.virt.libvirt.driver [-] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Instance destroyed successfully.
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.675 2 DEBUG nova.objects.instance [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lazy-loading 'resources' on Instance uuid e10cd028-76c1-4eb5-be43-f51e4da8abc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:06 compute-0 kernel: tap09ac2cb6-30: entered promiscuous mode
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.705 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:06 compute-0 ovn_controller[152945]: 2025-10-11T08:49:06Z|00099|binding|INFO|Releasing lport 424305ea-6b47-4134-ad52-ee2a450e204c from this chassis (sb_readonly=0)
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.739 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.739 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.740 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.750 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.751 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[992fe14f-c7d9-478d-b8c3-dbb18959ebda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.752 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5.pid.haproxy
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:49:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:06.753 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'env', 'PROCESS_TAG=haproxy-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.786 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.786 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.787 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.792 2 DEBUG nova.virt.libvirt.vif [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-170168590',display_name='tempest-ServersTestJSON-server-170168590',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-170168590',id=21,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBBLmmDX7+o3xlBXsWFtGAAW6QN89FexPyRikLBBoMkqYEgmzcpeem7mJuwXNPqh7hh6YHBKO8aG3FnT45N5dmtZiE21YMODPbWwRwlsUeKoenY7euJ0iBxGg5aRfD6zgQ==',key_name='tempest-keypair-1869462671',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8d6c7a0d842f4dcb95421a3f47580c49',ramdisk_id='',reservation_id='r-ijkk9nxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-381670503',owner_user_name='tempest-ServersTestJSON-381670503-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6b08b83b77b84cd894e155d2a06682a4',uuid=e10cd028-76c1-4eb5-be43-f51e4da8abc1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.793 2 DEBUG nova.network.os_vif_util [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Converting VIF {"id": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "address": "fa:16:3e:ad:83:55", "network": {"id": "2880dd81-df95-47c3-aa3d-53c3f2548f15", "bridge": "br-int", "label": "tempest-ServersTestJSON-1125718573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d6c7a0d842f4dcb95421a3f47580c49", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5bae935-76", "ovs_interfaceid": "b5bae935-7639-4a76-988c-e09d0c6f5fb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.794 2 DEBUG nova.network.os_vif_util [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ad:83:55,bridge_name='br-int',has_traffic_filtering=True,id=b5bae935-7639-4a76-988c-e09d0c6f5fb1,network=Network(2880dd81-df95-47c3-aa3d-53c3f2548f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5bae935-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.794 2 DEBUG os_vif [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:83:55,bridge_name='br-int',has_traffic_filtering=True,id=b5bae935-7639-4a76-988c-e09d0c6f5fb1,network=Network(2880dd81-df95-47c3-aa3d-53c3f2548f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5bae935-76') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.796 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5bae935-76, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.812 2 INFO os_vif [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:83:55,bridge_name='br-int',has_traffic_filtering=True,id=b5bae935-7639-4a76-988c-e09d0c6f5fb1,network=Network(2880dd81-df95-47c3-aa3d-53c3f2548f15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5bae935-76')
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.928 2 INFO nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Creating config drive at /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f/disk.config
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.946 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj6axv2dy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.984 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:06 compute-0 nova_compute[260935]: 2025-10-11 08:49:06.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.027 2 DEBUG nova.network.neutron [req-17b33343-2432-45c8-acf2-16c754ad2b86 req-062752ad-273d-4646-93c1-c5b55a8f0515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Updated VIF entry in instance network info cache for port b9569700-d7dc-40dc-a27c-1f72d3675682. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.028 2 DEBUG nova.network.neutron [req-17b33343-2432-45c8-acf2-16c754ad2b86 req-062752ad-273d-4646-93c1-c5b55a8f0515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Updating instance_info_cache with network_info: [{"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.062 2 DEBUG oslo_concurrency.lockutils [req-17b33343-2432-45c8-acf2-16c754ad2b86 req-062752ad-273d-4646-93c1-c5b55a8f0515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-6224c79a-8a36-490c-863a-67251512732f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.092 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj6axv2dy" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:07 compute-0 ceph-mon[74313]: pgmap v1299: 321 pgs: 321 active+clean; 293 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.139 2 DEBUG nova.storage.rbd_utils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 6224c79a-8a36-490c-863a-67251512732f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.144 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f/disk.config 6224c79a-8a36-490c-863a-67251512732f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:07 compute-0 podman[292158]: 2025-10-11 08:49:07.253863833 +0000 UTC m=+0.080690298 container create f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.296 2 INFO nova.virt.libvirt.driver [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Deleting instance files /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1_del
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.297 2 INFO nova.virt.libvirt.driver [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Deletion of /var/lib/nova/instances/e10cd028-76c1-4eb5-be43-f51e4da8abc1_del complete
Oct 11 08:49:07 compute-0 systemd[1]: Started libpod-conmon-f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361.scope.
Oct 11 08:49:07 compute-0 podman[292158]: 2025-10-11 08:49:07.204554993 +0000 UTC m=+0.031381488 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.303 2 DEBUG nova.compute.manager [req-32a9b09c-f63e-4319-afa2-04f0c14e0acc req-4164704d-ddfe-4987-850e-76d882d371af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.303 2 DEBUG oslo_concurrency.lockutils [req-32a9b09c-f63e-4319-afa2-04f0c14e0acc req-4164704d-ddfe-4987-850e-76d882d371af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.303 2 DEBUG oslo_concurrency.lockutils [req-32a9b09c-f63e-4319-afa2-04f0c14e0acc req-4164704d-ddfe-4987-850e-76d882d371af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.303 2 DEBUG oslo_concurrency.lockutils [req-32a9b09c-f63e-4319-afa2-04f0c14e0acc req-4164704d-ddfe-4987-850e-76d882d371af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.303 2 DEBUG nova.compute.manager [req-32a9b09c-f63e-4319-afa2-04f0c14e0acc req-4164704d-ddfe-4987-850e-76d882d371af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Processing event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:49:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f2e53c9b14a148f14c0e6dc8556f03c44bc8562cdbc60984e7f6e68a2d640e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.354 2 DEBUG oslo_concurrency.processutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f/disk.config 6224c79a-8a36-490c-863a-67251512732f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:07 compute-0 podman[292158]: 2025-10-11 08:49:07.358281427 +0000 UTC m=+0.185107892 container init f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.357 2 INFO nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Deleting local config drive /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f/disk.config because it was imported into RBD.
Oct 11 08:49:07 compute-0 podman[292158]: 2025-10-11 08:49:07.365926866 +0000 UTC m=+0.192753331 container start f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.380 2 INFO nova.compute.manager [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Took 0.95 seconds to destroy the instance on the hypervisor.
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.380 2 DEBUG oslo.service.loopingcall [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.381 2 DEBUG nova.compute.manager [-] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.381 2 DEBUG nova.network.neutron [-] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:49:07 compute-0 neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5[292192]: [NOTICE]   (292196) : New worker (292200) forked
Oct 11 08:49:07 compute-0 neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5[292192]: [NOTICE]   (292196) : Loading success.
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.445 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b5bae935-7639-4a76-988c-e09d0c6f5fb1 in datapath 2880dd81-df95-47c3-aa3d-53c3f2548f15 unbound from our chassis
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.447 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2880dd81-df95-47c3-aa3d-53c3f2548f15, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:49:07 compute-0 kernel: tapb9569700-d7: entered promiscuous mode
Oct 11 08:49:07 compute-0 systemd-udevd[292011]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:49:07 compute-0 NetworkManager[44960]: <info>  [1760172547.4504] manager: (tapb9569700-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Oct 11 08:49:07 compute-0 ovn_controller[152945]: 2025-10-11T08:49:07Z|00100|binding|INFO|Claiming lport b9569700-d7dc-40dc-a27c-1f72d3675682 for this chassis.
Oct 11 08:49:07 compute-0 ovn_controller[152945]: 2025-10-11T08:49:07Z|00101|binding|INFO|b9569700-d7dc-40dc-a27c-1f72d3675682: Claiming fa:16:3e:20:3f:af 10.100.0.14
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.449 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9d424a47-23e4-4aee-b154-9795afa49979]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.455 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15 namespace which is not needed anymore
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:07 compute-0 NetworkManager[44960]: <info>  [1760172547.4662] device (tapb9569700-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.467 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:3f:af 10.100.0.14'], port_security=['fa:16:3e:20:3f:af 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6224c79a-8a36-490c-863a-67251512732f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b9569700-d7dc-40dc-a27c-1f72d3675682) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:49:07 compute-0 NetworkManager[44960]: <info>  [1760172547.4683] device (tapb9569700-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:49:07 compute-0 ovn_controller[152945]: 2025-10-11T08:49:07Z|00102|binding|INFO|Setting lport b9569700-d7dc-40dc-a27c-1f72d3675682 ovn-installed in OVS
Oct 11 08:49:07 compute-0 ovn_controller[152945]: 2025-10-11T08:49:07Z|00103|binding|INFO|Setting lport b9569700-d7dc-40dc-a27c-1f72d3675682 up in Southbound
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:07 compute-0 systemd-machined[215705]: New machine qemu-25-instance-00000017.
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.510 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172547.509975, 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.511 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] VM Started (Lifecycle Event)
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.517 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:49:07 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000017.
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.527 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.532 2 INFO nova.virt.libvirt.driver [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance spawned successfully.
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.532 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.537 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.542 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.557 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.558 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.560 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.561 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.562 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.562 2 DEBUG nova.virt.libvirt.driver [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.568 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.568 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172547.5102317, 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.568 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] VM Paused (Lifecycle Event)
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.604 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.609 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172547.522077, 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.609 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] VM Resumed (Lifecycle Event)
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.632 2 INFO nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Took 8.00 seconds to spawn the instance on the hypervisor.
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.633 2 DEBUG nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.634 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:07 compute-0 neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15[291214]: [NOTICE]   (291218) : haproxy version is 2.8.14-c23fe91
Oct 11 08:49:07 compute-0 neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15[291214]: [NOTICE]   (291218) : path to executable is /usr/sbin/haproxy
Oct 11 08:49:07 compute-0 neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15[291214]: [WARNING]  (291218) : Exiting Master process...
Oct 11 08:49:07 compute-0 neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15[291214]: [WARNING]  (291218) : Exiting Master process...
Oct 11 08:49:07 compute-0 neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15[291214]: [ALERT]    (291218) : Current worker (291220) exited with code 143 (Terminated)
Oct 11 08:49:07 compute-0 neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15[291214]: [WARNING]  (291218) : All workers exited. Exiting... (0)
Oct 11 08:49:07 compute-0 systemd[1]: libpod-371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1.scope: Deactivated successfully.
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.644 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:49:07 compute-0 podman[292240]: 2025-10-11 08:49:07.646776664 +0000 UTC m=+0.050432783 container died 371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 08:49:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1-userdata-shm.mount: Deactivated successfully.
Oct 11 08:49:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a90ad88f1ec264b7e39d2fba8c4def1e28c3e05feed22a0602c12c8a9a6991b-merged.mount: Deactivated successfully.
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.676 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:49:07 compute-0 podman[292240]: 2025-10-11 08:49:07.683089542 +0000 UTC m=+0.086745661 container cleanup 371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.698 2 INFO nova.compute.manager [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Took 9.04 seconds to build instance.
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.718 2 DEBUG oslo_concurrency.lockutils [None req-16bb65ea-dd31-4965-bc5b-d90e8bfcadbb a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:07 compute-0 systemd[1]: libpod-conmon-371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1.scope: Deactivated successfully.
Oct 11 08:49:07 compute-0 podman[292267]: 2025-10-11 08:49:07.76837709 +0000 UTC m=+0.058609556 container remove 371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.779 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6b0a6c-01c2-42cb-9c63-4fe5567fc6ae]: (4, ('Sat Oct 11 08:49:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15 (371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1)\n371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1\nSat Oct 11 08:49:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15 (371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1)\n371c0477cdc472f8f0672be8efb1599ac2b744a3f40122793d8d14be97d4c4a1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.781 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1820b7-eac1-4bf5-871b-b77f052ad842]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.782 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2880dd81-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 214 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 192 KiB/s rd, 3.7 MiB/s wr, 117 op/s
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:07 compute-0 kernel: tap2880dd81-d0: left promiscuous mode
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.836 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc23973-7b49-40a9-981e-dc7b10741fe3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.878 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[db2a1b6a-8de5-4dc8-aae5-53d6f1ee059d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.879 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ddae32-16f7-4d47-a520-d10e87a0a36c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.895 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[403c8fb8-19ec-48d5-83d1-eace5e864dab]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436510, 'reachable_time': 19051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292286, 'error': None, 'target': 'ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.899 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2880dd81-df95-47c3-aa3d-53c3f2548f15 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:49:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d2880dd81\x2ddf95\x2d47c3\x2daa3d\x2d53c3f2548f15.mount: Deactivated successfully.
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.899 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6fed803f-74dc-4a36-90b1-3bdba562fd7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.900 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b9569700-d7dc-40dc-a27c-1f72d3675682 in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 unbound from our chassis
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.902 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.924 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8016ae52-7887-44b8-889c-1e52467b832d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.963 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1c25d438-eb4a-4d6e-bf06-c69416ba144b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:07.966 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[78c5dccb-758d-4237-894e-85945d99757b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.968 2 DEBUG nova.network.neutron [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.989 2 DEBUG oslo_concurrency.lockutils [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.990 2 DEBUG oslo_concurrency.lockutils [req-e6a8f143-dc32-4a50-990c-6f8e732bf55d req-cc0e207a-337b-4976-b32f-222e42b5bae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.990 2 DEBUG nova.network.neutron [req-e6a8f143-dc32-4a50-990c-6f8e732bf55d req-cc0e207a-337b-4976-b32f-222e42b5bae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Refreshing network info cache for port c9724939-cd91-44bb-a86b-72bf93c2a818 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.995 2 DEBUG nova.virt.libvirt.vif [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.995 2 DEBUG nova.network.os_vif_util [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.997 2 DEBUG nova.network.os_vif_util [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.997 2 DEBUG os_vif [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:07 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.999 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:07.999 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.000 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4b131055-8240-4018-899a-2e598379d74c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.008 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9724939-cd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.009 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc9724939-cd, col_values=(('external_ids', {'iface-id': 'c9724939-cd91-44bb-a86b-72bf93c2a818', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:3e:e2', 'vm-uuid': '057de6d9-3f9e-4b23-9019-f62ba6b453e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:08 compute-0 NetworkManager[44960]: <info>  [1760172548.0125] manager: (tapc9724939-cd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.027 2 INFO os_vif [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd')
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.029 2 DEBUG nova.virt.libvirt.vif [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.029 2 DEBUG nova.network.os_vif_util [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.028 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8ae2f0-9d1d-4330-926c-fb4b262e948e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 612, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 612, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 528, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 528, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292293, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.030 2 DEBUG nova.network.os_vif_util [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.034 2 DEBUG nova.virt.libvirt.guest [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] attach device xml: <interface type="ethernet">
Oct 11 08:49:08 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:2b:3e:e2"/>
Oct 11 08:49:08 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 08:49:08 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:49:08 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 08:49:08 compute-0 nova_compute[260935]:   <target dev="tapc9724939-cd"/>
Oct 11 08:49:08 compute-0 nova_compute[260935]: </interface>
Oct 11 08:49:08 compute-0 nova_compute[260935]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 08:49:08 compute-0 NetworkManager[44960]: <info>  [1760172548.0512] manager: (tapc9724939-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Oct 11 08:49:08 compute-0 kernel: tapc9724939-cd: entered promiscuous mode
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:08 compute-0 ovn_controller[152945]: 2025-10-11T08:49:08Z|00104|binding|INFO|Claiming lport c9724939-cd91-44bb-a86b-72bf93c2a818 for this chassis.
Oct 11 08:49:08 compute-0 ovn_controller[152945]: 2025-10-11T08:49:08Z|00105|binding|INFO|c9724939-cd91-44bb-a86b-72bf93c2a818: Claiming fa:16:3e:2b:3e:e2 10.100.0.9
Oct 11 08:49:08 compute-0 NetworkManager[44960]: <info>  [1760172548.0678] device (tapc9724939-cd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:49:08 compute-0 NetworkManager[44960]: <info>  [1760172548.0699] device (tapc9724939-cd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.075 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:3e:e2 10.100.0.9'], port_security=['fa:16:3e:2b:3e:e2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1629736355', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '057de6d9-3f9e-4b23-9019-f62ba6b453e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1629736355', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c9724939-cd91-44bb-a86b-72bf93c2a818) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.081 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[339b86d2-7cf0-4e95-a634-4589432ef72b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439130, 'tstamp': 439130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292296, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439135, 'tstamp': 439135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292296, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.082 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:08 compute-0 ovn_controller[152945]: 2025-10-11T08:49:08Z|00106|binding|INFO|Setting lport c9724939-cd91-44bb-a86b-72bf93c2a818 ovn-installed in OVS
Oct 11 08:49:08 compute-0 ovn_controller[152945]: 2025-10-11T08:49:08Z|00107|binding|INFO|Setting lport c9724939-cd91-44bb-a86b-72bf93c2a818 up in Southbound
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.112 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.113 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.113 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.114 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.117 2 DEBUG nova.compute.manager [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Received event network-vif-plugged-b9569700-d7dc-40dc-a27c-1f72d3675682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.118 2 DEBUG oslo_concurrency.lockutils [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6224c79a-8a36-490c-863a-67251512732f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.118 2 DEBUG oslo_concurrency.lockutils [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.119 2 DEBUG oslo_concurrency.lockutils [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.119 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c9724939-cd91-44bb-a86b-72bf93c2a818 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.119 2 DEBUG nova.compute.manager [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Processing event network-vif-plugged-b9569700-d7dc-40dc-a27c-1f72d3675682 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.119 2 DEBUG nova.compute.manager [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Received event network-vif-plugged-b9569700-d7dc-40dc-a27c-1f72d3675682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.120 2 DEBUG oslo_concurrency.lockutils [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6224c79a-8a36-490c-863a-67251512732f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.120 2 DEBUG oslo_concurrency.lockutils [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.121 2 DEBUG oslo_concurrency.lockutils [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.121 2 DEBUG nova.compute.manager [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] No waiting events found dispatching network-vif-plugged-b9569700-d7dc-40dc-a27c-1f72d3675682 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.123 2 WARNING nova.compute.manager [req-e97601d5-ab84-4108-91ba-9b56f69cdf09 req-7acea177-d73e-445d-931e-b6ebbe3f62f7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Received unexpected event network-vif-plugged-b9569700-d7dc-40dc-a27c-1f72d3675682 for instance with vm_state building and task_state spawning.
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.124 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.153 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f50f95-01c5-4673-abb3-eea628a3367e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.175 2 DEBUG nova.virt.libvirt.driver [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.175 2 DEBUG nova.virt.libvirt.driver [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.175 2 DEBUG nova.virt.libvirt.driver [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:97:65:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.175 2 DEBUG nova.virt.libvirt.driver [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:f2:dc:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.176 2 DEBUG nova.virt.libvirt.driver [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:90:fd:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.176 2 DEBUG nova.virt.libvirt.driver [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:2b:3e:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.203 2 DEBUG nova.virt.libvirt.guest [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:49:08 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:49:08 compute-0 nova_compute[260935]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 08:49:08 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:49:08</nova:creationTime>
Oct 11 08:49:08 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:49:08 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:49:08 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:49:08 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:49:08 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:49:08 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:49:08 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:49:08 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:49:08 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:49:08 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:49:08 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:49:08 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:49:08 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:49:08 compute-0 nova_compute[260935]:     <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 08:49:08 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 08:49:08 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:08 compute-0 nova_compute[260935]:     <nova:port uuid="b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e">
Oct 11 08:49:08 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 08:49:08 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:08 compute-0 nova_compute[260935]:     <nova:port uuid="723ff3bf-882c-4198-afc8-a31026a4ccfc">
Oct 11 08:49:08 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 08:49:08 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:08 compute-0 nova_compute[260935]:     <nova:port uuid="c9724939-cd91-44bb-a86b-72bf93c2a818">
Oct 11 08:49:08 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:49:08 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:08 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:49:08 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:49:08 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 08:49:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.210 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[da26b6fe-feb3-4ead-acbb-ec1f6183b6de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.217 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c21ca575-333c-44ec-b0f8-562c1651beee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.235 2 DEBUG oslo_concurrency.lockutils [None req-1b0e9738-2076-4278-aa27-69b054179b80 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-c9724939-cd91-44bb-a86b-72bf93c2a818" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.260 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[815333c6-5e87-44d7-9120-30910f3082ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.284 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[96939928-91c6-46c3-9245-45669c38da79]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434479, 'reachable_time': 16585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292331, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.313 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ed59a759-c493-4a99-b2dc-0813c4ba4729]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434496, 'tstamp': 434496}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292340, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434502, 'tstamp': 434502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292340, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.316 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.320 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.321 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.322 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:08.322 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.897 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.898 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172548.8981323, 6224c79a-8a36-490c-863a-67251512732f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.899 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] VM Started (Lifecycle Event)
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.905 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.909 2 INFO nova.virt.libvirt.driver [-] [instance: 6224c79a-8a36-490c-863a-67251512732f] Instance spawned successfully.
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.911 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.942 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.952 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.954 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.955 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.956 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.957 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.959 2 DEBUG nova.virt.libvirt.driver [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:08 compute-0 nova_compute[260935]: 2025-10-11 08:49:08.968 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.005 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.006 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172548.8982635, 6224c79a-8a36-490c-863a-67251512732f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.007 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] VM Paused (Lifecycle Event)
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.021 2 INFO nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Took 7.56 seconds to spawn the instance on the hypervisor.
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.022 2 DEBUG nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.031 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.045 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172548.9043639, 6224c79a-8a36-490c-863a-67251512732f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.046 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] VM Resumed (Lifecycle Event)
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.072 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.078 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:49:09 compute-0 ceph-mon[74313]: pgmap v1300: 321 pgs: 321 active+clean; 214 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 192 KiB/s rd, 3.7 MiB/s wr, 117 op/s
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.126 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.141 2 INFO nova.compute.manager [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Took 8.60 seconds to build instance.
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.163 2 DEBUG oslo_concurrency.lockutils [None req-a3faec16-6f5d-41f8-a4bf-87b64a251959 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.286 2 DEBUG nova.network.neutron [-] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.449 2 DEBUG nova.network.neutron [req-e6a8f143-dc32-4a50-990c-6f8e732bf55d req-cc0e207a-337b-4976-b32f-222e42b5bae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updated VIF entry in instance network info cache for port c9724939-cd91-44bb-a86b-72bf93c2a818. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.451 2 DEBUG nova.network.neutron [req-e6a8f143-dc32-4a50-990c-6f8e732bf55d req-cc0e207a-337b-4976-b32f-222e42b5bae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.549 2 INFO nova.compute.manager [-] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Took 2.17 seconds to deallocate network for instance.
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.565 2 DEBUG oslo_concurrency.lockutils [req-e6a8f143-dc32-4a50-990c-6f8e732bf55d req-cc0e207a-337b-4976-b32f-222e42b5bae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.566 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.567 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.567 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:09 compute-0 ovn_controller[152945]: 2025-10-11T08:49:09Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2b:3e:e2 10.100.0.9
Oct 11 08:49:09 compute-0 ovn_controller[152945]: 2025-10-11T08:49:09Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2b:3e:e2 10.100.0.9
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.619 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.619 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.742 2 DEBUG nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.743 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.745 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.747 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.747 2 DEBUG nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] No waiting events found dispatching network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.747 2 WARNING nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received unexpected event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a for instance with vm_state active and task_state None.
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.748 2 DEBUG nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received event network-vif-unplugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.748 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.749 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.749 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.750 2 DEBUG nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] No waiting events found dispatching network-vif-unplugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.750 2 WARNING nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received unexpected event network-vif-unplugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 for instance with vm_state deleted and task_state None.
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.750 2 DEBUG nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received event network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.751 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.751 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.752 2 DEBUG oslo_concurrency.lockutils [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.752 2 DEBUG nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] No waiting events found dispatching network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.753 2 WARNING nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received unexpected event network-vif-plugged-b5bae935-7639-4a76-988c-e09d0c6f5fb1 for instance with vm_state deleted and task_state None.
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.753 2 DEBUG nova.compute.manager [req-264f091c-6f39-4683-94b3-73e5444da30d req-43156a74-afa6-432b-a69d-d931a81c4e20 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Received event network-vif-deleted-b5bae935-7639-4a76-988c-e09d0c6f5fb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:09 compute-0 nova_compute[260935]: 2025-10-11 08:49:09.762 2 DEBUG oslo_concurrency.processutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 214 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.6 MiB/s wr, 96 op/s
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:49:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714218713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.251 2 DEBUG oslo_concurrency.processutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.257 2 DEBUG nova.compute.provider_tree [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.272 2 DEBUG nova.scheduler.client.report [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.291 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.321 2 INFO nova.scheduler.client.report [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Deleted allocations for instance e10cd028-76c1-4eb5-be43-f51e4da8abc1
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.392 2 DEBUG oslo_concurrency.lockutils [None req-f1c4bf5e-d8d0-480f-854b-9ce12c050804 6b08b83b77b84cd894e155d2a06682a4 8d6c7a0d842f4dcb95421a3f47580c49 - - default default] Lock "e10cd028-76c1-4eb5-be43-f51e4da8abc1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.502 2 DEBUG nova.compute.manager [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-c9724939-cd91-44bb-a86b-72bf93c2a818 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.503 2 DEBUG oslo_concurrency.lockutils [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.503 2 DEBUG oslo_concurrency.lockutils [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.503 2 DEBUG oslo_concurrency.lockutils [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.503 2 DEBUG nova.compute.manager [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-c9724939-cd91-44bb-a86b-72bf93c2a818 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.503 2 WARNING nova.compute.manager [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-c9724939-cd91-44bb-a86b-72bf93c2a818 for instance with vm_state active and task_state None.
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.503 2 DEBUG nova.compute.manager [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-c9724939-cd91-44bb-a86b-72bf93c2a818 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.504 2 DEBUG oslo_concurrency.lockutils [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.504 2 DEBUG oslo_concurrency.lockutils [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.504 2 DEBUG oslo_concurrency.lockutils [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.504 2 DEBUG nova.compute.manager [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-c9724939-cd91-44bb-a86b-72bf93c2a818 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.504 2 WARNING nova.compute.manager [req-260920d6-4feb-4991-b99e-d542fbf2e7ac req-4d45081b-db12-4cb1-95d1-87a9c865a4ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-c9724939-cd91-44bb-a86b-72bf93c2a818 for instance with vm_state active and task_state None.
Oct 11 08:49:10 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.999 2 DEBUG oslo_concurrency.lockutils [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:10.999 2 DEBUG oslo_concurrency.lockutils [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.024 2 DEBUG nova.objects.instance [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.047 2 DEBUG nova.virt.libvirt.vif [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.047 2 DEBUG nova.network.os_vif_util [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.049 2 DEBUG nova.network.os_vif_util [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.054 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:dc:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3e1a780-92"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.059 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:dc:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3e1a780-92"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.063 2 DEBUG nova.virt.libvirt.driver [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Attempting to detach device tapb3e1a780-92 from instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.063 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] detach device xml: <interface type="ethernet">
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:f2:dc:ce"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <target dev="tapb3e1a780-92"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]: </interface>
Oct 11 08:49:11 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.070 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:dc:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3e1a780-92"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.076 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f2:dc:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3e1a780-92"/></interface>not found in domain: <domain type='kvm' id='22'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <name>instance-00000014</name>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <uuid>057de6d9-3f9e-4b23-9019-f62ba6b453e7</uuid>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:49:08</nova:creationTime>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:port uuid="b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e">
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:port uuid="723ff3bf-882c-4198-afc8-a31026a4ccfc">
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:port uuid="c9724939-cd91-44bb-a86b-72bf93c2a818">
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:49:11 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <resource>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <partition>/machine</partition>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </resource>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <system>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <entry name='serial'>057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <entry name='uuid'>057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </system>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <os>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </os>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <features>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </features>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <cpu mode='custom' match='exact' check='full'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <vendor>AMD</vendor>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='x2apic'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc-deadline'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='hypervisor'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc_adjust'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='spec-ctrl'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='stibp'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='arch-capabilities'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='ssbd'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='cmp_legacy'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='overflow-recov'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='succor'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='ibrs'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='amd-ssbd'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='virt-ssbd'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='lbrv'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='tsc-scale'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='vmcb-clean'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='flushbyasid'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='pause-filter'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='pfthreshold'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='rdctl-no'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='mds-no'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='pschange-mc-no'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='gds-no'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='rfds-no'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='xsaves'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='svm'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='topoext'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='npt'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='nrip-save'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk' index='2'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='virtio-disk0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config' index='1'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='sata0-0-0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pcie.0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.1'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.3'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.4'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.5'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.6'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.7'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.8'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.9'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.10'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.11'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.12'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.13'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.14'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.15'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.16'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.17'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.18'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.19'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.20'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.21'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.22'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.23'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.24'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.25'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.26'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='usb'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='ide'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:97:65:f3'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target dev='tapdb31f1b4-b0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='net0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:f2:dc:ce'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target dev='tapb3e1a780-92'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='net1'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:90:fd:25'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target dev='tap723ff3bf-88'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='net2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:2b:3e:e2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target dev='tapc9724939-cd'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='net3'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <source path='/dev/pts/2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log' append='off'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       </target>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <console type='pty' tty='/dev/pts/2'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <source path='/dev/pts/2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log' append='off'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </console>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='input0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </input>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='input1'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </input>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='input2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </input>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </graphics>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <video>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='video0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </video>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='watchdog0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </watchdog>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='balloon0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='rng0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <label>system_u:system_r:svirt_t:s0:c662,c935</label>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c662,c935</imagelabel>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <label>+107:+107</label>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <imagelabel>+107:+107</imagelabel>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 08:49:11 compute-0 nova_compute[260935]: </domain>
Oct 11 08:49:11 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.076 2 INFO nova.virt.libvirt.driver [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully detached device tapb3e1a780-92 from instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 from the persistent domain config.
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.077 2 DEBUG nova.virt.libvirt.driver [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] (1/8): Attempting to detach device tapb3e1a780-92 with device alias net1 from instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.078 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] detach device xml: <interface type="ethernet">
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:f2:dc:ce"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <target dev="tapb3e1a780-92"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]: </interface>
Oct 11 08:49:11 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 08:49:11 compute-0 ceph-mon[74313]: pgmap v1301: 321 pgs: 321 active+clean; 214 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.6 MiB/s wr, 96 op/s
Oct 11 08:49:11 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3714218713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:11 compute-0 kernel: tapb3e1a780-92 (unregistering): left promiscuous mode
Oct 11 08:49:11 compute-0 NetworkManager[44960]: <info>  [1760172551.2093] device (tapb3e1a780-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.229 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Received event <DeviceRemovedEvent: 1760172551.227427, 057de6d9-3f9e-4b23-9019-f62ba6b453e7 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 08:49:11 compute-0 ovn_controller[152945]: 2025-10-11T08:49:11Z|00108|binding|INFO|Releasing lport b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e from this chassis (sb_readonly=0)
Oct 11 08:49:11 compute-0 ovn_controller[152945]: 2025-10-11T08:49:11Z|00109|binding|INFO|Setting lport b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e down in Southbound
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:11 compute-0 ovn_controller[152945]: 2025-10-11T08:49:11Z|00110|binding|INFO|Removing iface tapb3e1a780-92 ovn-installed in OVS
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.268 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:dc:ce 10.100.0.12'], port_security=['fa:16:3e:f2:dc:ce 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '057de6d9-3f9e-4b23-9019-f62ba6b453e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.269 2 DEBUG nova.virt.libvirt.driver [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Start waiting for the detach event from libvirt for device tapb3e1a780-92 with device alias net1 for instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.270 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:dc:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3e1a780-92"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:49:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.270 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d unbound from our chassis
Oct 11 08:49:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.272 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.288 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f2:dc:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3e1a780-92"/></interface>not found in domain: <domain type='kvm' id='22'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <name>instance-00000014</name>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <uuid>057de6d9-3f9e-4b23-9019-f62ba6b453e7</uuid>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:49:08</nova:creationTime>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:port uuid="b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e">
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:port uuid="723ff3bf-882c-4198-afc8-a31026a4ccfc">
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:port uuid="c9724939-cd91-44bb-a86b-72bf93c2a818">
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:49:11 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <resource>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <partition>/machine</partition>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </resource>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <system>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <entry name='serial'>057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <entry name='uuid'>057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </system>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <os>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </os>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <features>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </features>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <cpu mode='custom' match='exact' check='full'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <vendor>AMD</vendor>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='x2apic'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc-deadline'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='hypervisor'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc_adjust'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='spec-ctrl'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='stibp'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='arch-capabilities'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='ssbd'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='cmp_legacy'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='overflow-recov'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='succor'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='ibrs'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='amd-ssbd'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='virt-ssbd'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='lbrv'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='tsc-scale'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='vmcb-clean'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='flushbyasid'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='pause-filter'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='pfthreshold'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='rdctl-no'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='mds-no'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='pschange-mc-no'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='gds-no'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='rfds-no'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='xsaves'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='svm'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='require' name='topoext'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='npt'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <feature policy='disable' name='nrip-save'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk' index='2'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='virtio-disk0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config' index='1'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='sata0-0-0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pcie.0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.1'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.3'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.4'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.5'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.6'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.7'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.8'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.9'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.10'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.11'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.12'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.13'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.14'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.15'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.16'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.17'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.18'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.19'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.20'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.21'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.22'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.23'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.24'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.25'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='pci.26'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='usb'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='ide'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:97:65:f3'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target dev='tapdb31f1b4-b0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='net0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:90:fd:25'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target dev='tap723ff3bf-88'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='net2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:2b:3e:e2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target dev='tapc9724939-cd'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='net3'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <source path='/dev/pts/2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log' append='off'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       </target>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <console type='pty' tty='/dev/pts/2'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <source path='/dev/pts/2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log' append='off'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </console>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='input0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </input>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='input1'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </input>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='input2'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </input>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </graphics>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <video>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='video0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </video>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='watchdog0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </watchdog>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='balloon0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <alias name='rng0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <label>system_u:system_r:svirt_t:s0:c662,c935</label>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c662,c935</imagelabel>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <label>+107:+107</label>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <imagelabel>+107:+107</imagelabel>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 08:49:11 compute-0 nova_compute[260935]: </domain>
Oct 11 08:49:11 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.289 2 INFO nova.virt.libvirt.driver [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully detached device tapb3e1a780-92 from instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 from the live domain config.
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.290 2 DEBUG nova.virt.libvirt.vif [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.291 2 DEBUG nova.network.os_vif_util [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.292 2 DEBUG nova.network.os_vif_util [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.292 2 DEBUG os_vif [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.297 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3e1a780-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.311 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ca8907-c172-46ca-9a12-bb4f222a8789]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.316 2 INFO os_vif [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92')
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.317 2 DEBUG nova.virt.libvirt.guest [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:49:11</nova:creationTime>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:port uuid="723ff3bf-882c-4198-afc8-a31026a4ccfc">
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     <nova:port uuid="c9724939-cd91-44bb-a86b-72bf93c2a818">
Oct 11 08:49:11 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:49:11 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:11 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:49:11 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:49:11 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 08:49:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.357 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bf95283c-be3e-46c9-9a46-71d0273d26ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.362 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d6bc77db-c5d1-4262-8a9b-4a9e8112635b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.400 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[72086f91-1de5-4ffb-9a0a-8374f5aca92a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.427 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0feca262-8202-47d7-825a-8d04b21fbe49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434479, 'reachable_time': 16585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292383, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.452 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[603adcbd-f1c6-43f6-9564-8f34020d4620]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434496, 'tstamp': 434496}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292384, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434502, 'tstamp': 434502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292384, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.454 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.458 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.458 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.459 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:11.459 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 214 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.6 MiB/s wr, 96 op/s
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.875 2 DEBUG nova.compute.manager [req-2d558ed2-8396-4c06-a7d9-2bd8cac2cabf req-6b46fa5d-8b2f-4b95-b18b-a56cc0e3b502 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-unplugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.876 2 DEBUG oslo_concurrency.lockutils [req-2d558ed2-8396-4c06-a7d9-2bd8cac2cabf req-6b46fa5d-8b2f-4b95-b18b-a56cc0e3b502 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.876 2 DEBUG oslo_concurrency.lockutils [req-2d558ed2-8396-4c06-a7d9-2bd8cac2cabf req-6b46fa5d-8b2f-4b95-b18b-a56cc0e3b502 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.877 2 DEBUG oslo_concurrency.lockutils [req-2d558ed2-8396-4c06-a7d9-2bd8cac2cabf req-6b46fa5d-8b2f-4b95-b18b-a56cc0e3b502 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.877 2 DEBUG nova.compute.manager [req-2d558ed2-8396-4c06-a7d9-2bd8cac2cabf req-6b46fa5d-8b2f-4b95-b18b-a56cc0e3b502 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-unplugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.877 2 WARNING nova.compute.manager [req-2d558ed2-8396-4c06-a7d9-2bd8cac2cabf req-6b46fa5d-8b2f-4b95-b18b-a56cc0e3b502 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-unplugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e for instance with vm_state active and task_state None.
Oct 11 08:49:11 compute-0 nova_compute[260935]: 2025-10-11 08:49:11.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:12 compute-0 nova_compute[260935]: 2025-10-11 08:49:12.638 2 DEBUG oslo_concurrency.lockutils [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:13 compute-0 ceph-mon[74313]: pgmap v1302: 321 pgs: 321 active+clean; 214 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.6 MiB/s wr, 96 op/s
Oct 11 08:49:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.526 162815 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 6c1453a9-963e-40bd-a179-414e3b276d7f with type ""
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.528 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:3e:e2 10.100.0.9'], port_security=['fa:16:3e:2b:3e:e2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1629736355', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '057de6d9-3f9e-4b23-9019-f62ba6b453e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1629736355', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c9724939-cd91-44bb-a86b-72bf93c2a818) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.530 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c9724939-cd91-44bb-a86b-72bf93c2a818 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d unbound from our chassis
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.533 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:49:13 compute-0 ovn_controller[152945]: 2025-10-11T08:49:13Z|00111|binding|INFO|Removing iface tapc9724939-cd ovn-installed in OVS
Oct 11 08:49:13 compute-0 ovn_controller[152945]: 2025-10-11T08:49:13Z|00112|binding|INFO|Removing lport c9724939-cd91-44bb-a86b-72bf93c2a818 ovn-installed in OVS
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.577 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0929c8-ee44-487f-92ae-8aa3eed7f724]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.626 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed962f2-29bb-4adb-bec3-ce347bdef836]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.631 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0fdc4f-5f42-4346-b556-7ecbe039fe35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.653 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "b3e20035-c079-4ad0-a085-2086be520d1d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.653 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.675 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.680 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa105ec-8521-435f-87dc-8cb830894af0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.710 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bb66b7c6-4f5f-4e5e-a8ba-b0e7a2cbe21a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 1084, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 1084, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434479, 'reachable_time': 16585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292390, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.743 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cad31964-8718-4767-b0c5-b465da8f81a3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434496, 'tstamp': 434496}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292391, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434502, 'tstamp': 434502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292391, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.746 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.752 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.752 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.767 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.767 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.767 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.767 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.767 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.769 2 INFO nova.compute.manager [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Terminating instance
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.769 2 DEBUG nova.compute.manager [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.788 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.789 2 INFO nova.compute.claims [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.789 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.790 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.791 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.792 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 214 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 230 op/s
Oct 11 08:49:13 compute-0 kernel: tapdb31f1b4-b0 (unregistering): left promiscuous mode
Oct 11 08:49:13 compute-0 NetworkManager[44960]: <info>  [1760172553.8717] device (tapdb31f1b4-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:13 compute-0 ovn_controller[152945]: 2025-10-11T08:49:13Z|00113|binding|INFO|Releasing lport db31f1b4-b009-40dc-a028-b72fe0b1eb45 from this chassis (sb_readonly=0)
Oct 11 08:49:13 compute-0 ovn_controller[152945]: 2025-10-11T08:49:13Z|00114|binding|INFO|Setting lport db31f1b4-b009-40dc-a028-b72fe0b1eb45 down in Southbound
Oct 11 08:49:13 compute-0 ovn_controller[152945]: 2025-10-11T08:49:13Z|00115|binding|INFO|Removing iface tapdb31f1b4-b0 ovn-installed in OVS
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.897 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:65:f3 10.100.0.8'], port_security=['fa:16:3e:97:65:f3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '057de6d9-3f9e-4b23-9019-f62ba6b453e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6953e178-7635-4f97-a5ef-5126f17f4f48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.230'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=db31f1b4-b009-40dc-a028-b72fe0b1eb45) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.898 162815 INFO neutron.agent.ovn.metadata.agent [-] Port db31f1b4-b009-40dc-a028-b72fe0b1eb45 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d unbound from our chassis
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.899 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:13 compute-0 kernel: tap723ff3bf-88 (unregistering): left promiscuous mode
Oct 11 08:49:13 compute-0 NetworkManager[44960]: <info>  [1760172553.9245] device (tap723ff3bf-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:13 compute-0 kernel: tapc9724939-cd (unregistering): left promiscuous mode
Oct 11 08:49:13 compute-0 ovn_controller[152945]: 2025-10-11T08:49:13Z|00116|binding|INFO|Releasing lport 723ff3bf-882c-4198-afc8-a31026a4ccfc from this chassis (sb_readonly=0)
Oct 11 08:49:13 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 08:49:13 compute-0 ovn_controller[152945]: 2025-10-11T08:49:13Z|00117|binding|INFO|Setting lport 723ff3bf-882c-4198-afc8-a31026a4ccfc down in Southbound
Oct 11 08:49:13 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 08:49:13 compute-0 ovn_controller[152945]: 2025-10-11T08:49:13Z|00118|binding|INFO|Removing iface tap723ff3bf-88 ovn-installed in OVS
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.941 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:fd:25 10.100.0.6'], port_security=['fa:16:3e:90:fd:25 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '057de6d9-3f9e-4b23-9019-f62ba6b453e7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=723ff3bf-882c-4198-afc8-a31026a4ccfc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:49:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:13.947 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fe13b7c9-f14e-48e0-a398-64f9b1e1d3a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:13 compute-0 NetworkManager[44960]: <info>  [1760172553.9553] device (tapc9724939-cd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.973 2 DEBUG nova.compute.manager [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.973 2 DEBUG oslo_concurrency.lockutils [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.974 2 DEBUG oslo_concurrency.lockutils [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.974 2 DEBUG oslo_concurrency.lockutils [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.974 2 DEBUG nova.compute.manager [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.974 2 WARNING nova.compute.manager [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e for instance with vm_state active and task_state deleting.
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.974 2 DEBUG nova.compute.manager [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-deleted-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.974 2 INFO nova.compute.manager [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Neutron deleted interface b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e; detaching it from the instance and deleting it from the info cache
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.975 2 DEBUG nova.network.neutron [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:13 compute-0 nova_compute[260935]: 2025-10-11 08:49:13.983 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.007 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1849c347-2351-410b-844f-cb04b2cfab9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.012 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1f623058-bd7f-4946-bd0b-5c6eae428b61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:14 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Deactivated successfully.
Oct 11 08:49:14 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Consumed 15.609s CPU time.
Oct 11 08:49:14 compute-0 systemd-machined[215705]: Machine qemu-22-instance-00000014 terminated.
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.055 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1ffc23b2-ce33-4e85-b899-7c171b8803a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.080 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d817de9c-641f-433f-88b9-461318ac62bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 1084, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434479, 'reachable_time': 16585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292413, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.102 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8af0e64d-e84d-4dbc-b94b-6e760f5abbaa]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434496, 'tstamp': 434496}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292414, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434502, 'tstamp': 434502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292414, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.103 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.116 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.116 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.117 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.117 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.118 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 723ff3bf-882c-4198-afc8-a31026a4ccfc in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d unbound from our chassis
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.120 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fff13396-b787-4c6e-9112-a1c2ef57b26d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.122 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8048b801-ba33-42b0-b3a0-c2b230fe59c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.122 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d namespace which is not needed anymore
Oct 11 08:49:14 compute-0 NetworkManager[44960]: <info>  [1760172554.2227] manager: (tap723ff3bf-88): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Oct 11 08:49:14 compute-0 NetworkManager[44960]: <info>  [1760172554.2323] manager: (tapc9724939-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.243 2 DEBUG nova.objects.instance [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'system_metadata' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.259 2 INFO nova.virt.libvirt.driver [-] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Instance destroyed successfully.
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.259 2 DEBUG nova.objects.instance [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'resources' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:14 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[290066]: [NOTICE]   (290111) : haproxy version is 2.8.14-c23fe91
Oct 11 08:49:14 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[290066]: [NOTICE]   (290111) : path to executable is /usr/sbin/haproxy
Oct 11 08:49:14 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[290066]: [WARNING]  (290111) : Exiting Master process...
Oct 11 08:49:14 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[290066]: [ALERT]    (290111) : Current worker (290115) exited with code 143 (Terminated)
Oct 11 08:49:14 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[290066]: [WARNING]  (290111) : All workers exited. Exiting... (0)
Oct 11 08:49:14 compute-0 systemd[1]: libpod-6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a.scope: Deactivated successfully.
Oct 11 08:49:14 compute-0 podman[292453]: 2025-10-11 08:49:14.306273643 +0000 UTC m=+0.084047012 container died 6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.310 2 DEBUG nova.objects.instance [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'flavor' on Instance uuid 057de6d9-3f9e-4b23-9019-f62ba6b453e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.313 2 DEBUG nova.virt.libvirt.vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.313 2 DEBUG nova.network.os_vif_util [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.314 2 DEBUG nova.network.os_vif_util [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:65:f3,bridge_name='br-int',has_traffic_filtering=True,id=db31f1b4-b009-40dc-a028-b72fe0b1eb45,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb31f1b4-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.314 2 DEBUG os_vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:65:f3,bridge_name='br-int',has_traffic_filtering=True,id=db31f1b4-b009-40dc-a028-b72fe0b1eb45,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb31f1b4-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.318 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb31f1b4-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a-userdata-shm.mount: Deactivated successfully.
Oct 11 08:49:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bdab5ef0ac4476b9090df9e6eb05fca169470796c58d4abb5cf34784f91353f-merged.mount: Deactivated successfully.
Oct 11 08:49:14 compute-0 podman[292453]: 2025-10-11 08:49:14.342115988 +0000 UTC m=+0.119889357 container cleanup 6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.342 2 INFO os_vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:65:f3,bridge_name='br-int',has_traffic_filtering=True,id=db31f1b4-b009-40dc-a028-b72fe0b1eb45,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb31f1b4-b0')
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.343 2 DEBUG nova.virt.libvirt.vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.344 2 DEBUG nova.network.os_vif_util [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.344 2 DEBUG nova.network.os_vif_util [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.345 2 DEBUG os_vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.350 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3e1a780-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.351 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.354 2 DEBUG nova.virt.libvirt.vif [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.355 2 DEBUG nova.network.os_vif_util [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.355 2 DEBUG nova.network.os_vif_util [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.357 2 INFO os_vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92')
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.358 2 DEBUG nova.virt.libvirt.vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.358 2 DEBUG nova.network.os_vif_util [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.358 2 DEBUG nova.network.os_vif_util [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:fd:25,bridge_name='br-int',has_traffic_filtering=True,id=723ff3bf-882c-4198-afc8-a31026a4ccfc,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap723ff3bf-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.359 2 DEBUG os_vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:fd:25,bridge_name='br-int',has_traffic_filtering=True,id=723ff3bf-882c-4198-afc8-a31026a4ccfc,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap723ff3bf-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.360 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap723ff3bf-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.364 2 DEBUG nova.virt.libvirt.guest [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f2:dc:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3e1a780-92"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.369 2 INFO os_vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:fd:25,bridge_name='br-int',has_traffic_filtering=True,id=723ff3bf-882c-4198-afc8-a31026a4ccfc,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap723ff3bf-88')
Oct 11 08:49:14 compute-0 systemd[1]: libpod-conmon-6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a.scope: Deactivated successfully.
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.371 2 DEBUG nova.virt.libvirt.vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:48:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.371 2 DEBUG nova.network.os_vif_util [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.372 2 DEBUG nova.network.os_vif_util [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.372 2 DEBUG os_vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.374 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9724939-cd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.378 2 DEBUG nova.virt.libvirt.guest [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f2:dc:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb3e1a780-92"/></interface>not found in domain: <domain type='kvm'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <name>instance-00000014</name>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <uuid>057de6d9-3f9e-4b23-9019-f62ba6b453e7</uuid>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:48:17</nova:creationTime>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 08:49:14 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <system>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <entry name='serial'>057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <entry name='uuid'>057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </system>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <os>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </os>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <features>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </features>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <cpu mode='host-model' check='partial'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:97:65:f3'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target dev='tapdb31f1b4-b0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:90:fd:25'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target dev='tap723ff3bf-88'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:2b:3e:e2'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target dev='tapc9724939-cd'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log' append='off'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       </target>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <console type='pty'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log' append='off'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </console>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </input>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </graphics>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <video>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </video>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:49:14 compute-0 nova_compute[260935]: </domain>
Oct 11 08:49:14 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.378 2 WARNING nova.virt.libvirt.driver [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Detaching interface fa:16:3e:f2:dc:ce failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapb3e1a780-92' not found.
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.379 2 DEBUG nova.virt.libvirt.vif [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.379 2 DEBUG nova.network.os_vif_util [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.380 2 DEBUG nova.network.os_vif_util [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.380 2 DEBUG os_vif [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.381 2 INFO os_vif [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd')
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3e1a780-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.413 2 INFO os_vif [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:dc:ce,bridge_name='br-int',has_traffic_filtering=True,id=b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e1a780-92')
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.414 2 DEBUG nova.virt.libvirt.guest [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:49:14</nova:creationTime>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:port uuid="723ff3bf-882c-4198-afc8-a31026a4ccfc">
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:port uuid="c9724939-cd91-44bb-a86b-72bf93c2a818">
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:49:14 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:49:14 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.419 2 DEBUG nova.compute.manager [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-deleted-c9724939-cd91-44bb-a86b-72bf93c2a818 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.419 2 INFO nova.compute.manager [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Neutron deleted interface c9724939-cd91-44bb-a86b-72bf93c2a818; detaching it from the instance and deleting it from the info cache
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.419 2 DEBUG nova.network.neutron [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:14 compute-0 podman[292512]: 2025-10-11 08:49:14.428995031 +0000 UTC m=+0.059525122 container remove 6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.438 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[35dbcd4e-ee1c-4df9-9cdf-3f740c1d71a1]: (4, ('Sat Oct 11 08:49:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d (6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a)\n6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a\nSat Oct 11 08:49:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d (6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a)\n6da8ced8088c8e43c02543b3a07e5131bfc81dd7c04d1410fe0ff52bc270558a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.441 2 DEBUG nova.virt.libvirt.vif [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.442 2 DEBUG nova.network.os_vif_util [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.442 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca75c32-ff36-48c2-a00d-e7e09e97ddc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.442 2 DEBUG nova.network.os_vif_util [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.445 2 DEBUG nova.virt.libvirt.guest [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2b:3e:e2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc9724939-cd"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.445 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.448 2 DEBUG nova.virt.libvirt.driver [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Attempting to detach device tapc9724939-cd from instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.448 2 DEBUG nova.virt.libvirt.guest [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] detach device xml: <interface type="ethernet">
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:2b:3e:e2"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <target dev="tapc9724939-cd"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]: </interface>
Oct 11 08:49:14 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:14 compute-0 kernel: tapfff13396-b0: left promiscuous mode
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.457 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[54e01b6a-1693-451f-ad49-976dc402c2f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.466 2 DEBUG nova.virt.libvirt.guest [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2b:3e:e2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc9724939-cd"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.469 2 DEBUG nova.virt.libvirt.guest [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:2b:3e:e2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc9724939-cd"/></interface>not found in domain: <domain type='kvm'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <name>instance-00000014</name>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <uuid>057de6d9-3f9e-4b23-9019-f62ba6b453e7</uuid>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:49:14</nova:creationTime>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 08:49:14 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:port uuid="723ff3bf-882c-4198-afc8-a31026a4ccfc">
Oct 11 08:49:14 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <nova:port uuid="c9724939-cd91-44bb-a86b-72bf93c2a818">
Oct 11 08:49:14 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <system>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <entry name='serial'>057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <entry name='uuid'>057de6d9-3f9e-4b23-9019-f62ba6b453e7</entry>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </system>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <os>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </os>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <features>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </features>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <cpu mode='host-model' check='partial'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/057de6d9-3f9e-4b23-9019-f62ba6b453e7_disk.config'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:97:65:f3'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target dev='tapdb31f1b4-b0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:90:fd:25'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target dev='tap723ff3bf-88'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log' append='off'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       </target>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <console type='pty'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7/console.log' append='off'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </console>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </input>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </graphics>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <video>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </video>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:49:14 compute-0 nova_compute[260935]: </domain>
Oct 11 08:49:14 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.469 2 INFO nova.virt.libvirt.driver [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully detached device tapc9724939-cd from instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 from the persistent domain config.
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.470 2 DEBUG nova.virt.libvirt.vif [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1270343715',display_name='tempest-AttachInterfacesTestJSON-server-1270343715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1270343715',id=20,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqp+brcDuFBg126s8uf0VE8L4fUfMeeG8JT9FKFYCB1vHmrbx9C6Kt8XshIYtJqZ0JEMq6H9A4MzX7hRa62ELfLstfe4uxEEdjGiwcDGhX0TR8t1c69HTxfDL2XuPv0hw==',key_name='tempest-keypair-1655747577',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:48:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-i4h1iqr7',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=057de6d9-3f9e-4b23-9019-f62ba6b453e7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.470 2 DEBUG nova.network.os_vif_util [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.471 2 DEBUG nova.network.os_vif_util [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.471 2 DEBUG os_vif [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9724939-cd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.476 2 INFO os_vif [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:3e:e2,bridge_name='br-int',has_traffic_filtering=True,id=c9724939-cd91-44bb-a86b-72bf93c2a818,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc9724939-cd')
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.476 2 DEBUG nova.virt.libvirt.guest [req-9c7f4d6a-ace7-4b76-9f0a-191e070ec86c req-f580612f-b986-4644-86a2-65ca5e898739 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1270343715</nova:name>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:49:14</nova:creationTime>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:port uuid="db31f1b4-b009-40dc-a028-b72fe0b1eb45">
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     <nova:port uuid="723ff3bf-882c-4198-afc8-a31026a4ccfc">
Oct 11 08:49:14 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 08:49:14 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:49:14 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:49:14 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:49:14 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.492 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33de4aa4-081d-44d7-88fc-7eb9307ff854]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:49:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3165190981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.493 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6baad070-e4f8-4ffd-bdd9-bf1536ee5ddb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.518 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[944c300b-c361-4c33-a99d-d8f42a845acb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434469, 'reachable_time': 15922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292548, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:14 compute-0 systemd[1]: run-netns-ovnmeta\x2dfff13396\x2db787\x2d4c6e\x2d9112\x2da1c2ef57b26d.mount: Deactivated successfully.
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.523 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:49:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:14.524 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2698b2-4bb5-48af-9355-ee98397f454c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.525 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.532 2 DEBUG nova.compute.provider_tree [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.548 2 DEBUG nova.scheduler.client.report [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.574 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.575 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.652 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.653 2 DEBUG nova.network.neutron [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.672 2 INFO nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.692 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.786 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.788 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.789 2 INFO nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Creating image(s)
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.818 2 DEBUG nova.storage.rbd_utils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image b3e20035-c079-4ad0-a085-2086be520d1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.845 2 DEBUG nova.storage.rbd_utils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image b3e20035-c079-4ad0-a085-2086be520d1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.872 2 DEBUG nova.storage.rbd_utils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image b3e20035-c079-4ad0-a085-2086be520d1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.876 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.915 2 DEBUG nova.compute.manager [req-c9181689-0e14-4d47-ac66-3b0590659678 req-1654703e-54d2-45fa-9068-0ab8428eb752 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-unplugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.916 2 DEBUG oslo_concurrency.lockutils [req-c9181689-0e14-4d47-ac66-3b0590659678 req-1654703e-54d2-45fa-9068-0ab8428eb752 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.916 2 DEBUG oslo_concurrency.lockutils [req-c9181689-0e14-4d47-ac66-3b0590659678 req-1654703e-54d2-45fa-9068-0ab8428eb752 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.917 2 DEBUG oslo_concurrency.lockutils [req-c9181689-0e14-4d47-ac66-3b0590659678 req-1654703e-54d2-45fa-9068-0ab8428eb752 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.917 2 DEBUG nova.compute.manager [req-c9181689-0e14-4d47-ac66-3b0590659678 req-1654703e-54d2-45fa-9068-0ab8428eb752 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-unplugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.917 2 DEBUG nova.compute.manager [req-c9181689-0e14-4d47-ac66-3b0590659678 req-1654703e-54d2-45fa-9068-0ab8428eb752 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-unplugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.925 2 INFO nova.virt.libvirt.driver [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Deleting instance files /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7_del
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.926 2 INFO nova.virt.libvirt.driver [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Deletion of /var/lib/nova/instances/057de6d9-3f9e-4b23-9019-f62ba6b453e7_del complete
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.931 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "address": "fa:16:3e:f2:dc:ce", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e1a780-92", "ovs_interfaceid": "b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9724939-cd91-44bb-a86b-72bf93c2a818", "address": "fa:16:3e:2b:3e:e2", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9724939-cd", "ovs_interfaceid": "c9724939-cd91-44bb-a86b-72bf93c2a818", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.960 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.960 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.960 2 DEBUG oslo_concurrency.lockutils [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.960 2 DEBUG nova.network.neutron [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.961 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.972 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.973 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.973 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:14 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.974 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:15 compute-0 nova_compute[260935]: 2025-10-11 08:49:14.999 2 DEBUG nova.storage.rbd_utils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image b3e20035-c079-4ad0-a085-2086be520d1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:15 compute-0 nova_compute[260935]: 2025-10-11 08:49:15.004 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b3e20035-c079-4ad0-a085-2086be520d1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:15 compute-0 nova_compute[260935]: 2025-10-11 08:49:15.045 2 DEBUG nova.policy [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a51c2680b31e40b1908642ef8795c6f0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '39d3043a7835403392c659fbb2fe0b22', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:49:15 compute-0 nova_compute[260935]: 2025-10-11 08:49:15.057 2 INFO nova.compute.manager [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Took 1.29 seconds to destroy the instance on the hypervisor.
Oct 11 08:49:15 compute-0 nova_compute[260935]: 2025-10-11 08:49:15.057 2 DEBUG oslo.service.loopingcall [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:49:15 compute-0 nova_compute[260935]: 2025-10-11 08:49:15.058 2 DEBUG nova.compute.manager [-] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:49:15 compute-0 nova_compute[260935]: 2025-10-11 08:49:15.058 2 DEBUG nova.network.neutron [-] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:49:15 compute-0 ceph-mon[74313]: pgmap v1303: 321 pgs: 321 active+clean; 214 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 230 op/s
Oct 11 08:49:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3165190981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:15.182 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:15.184 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:15.185 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:15 compute-0 nova_compute[260935]: 2025-10-11 08:49:15.366 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b3e20035-c079-4ad0-a085-2086be520d1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:15 compute-0 nova_compute[260935]: 2025-10-11 08:49:15.455 2 DEBUG nova.storage.rbd_utils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] resizing rbd image b3e20035-c079-4ad0-a085-2086be520d1d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:49:15 compute-0 nova_compute[260935]: 2025-10-11 08:49:15.565 2 DEBUG nova.objects.instance [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'migration_context' on Instance uuid b3e20035-c079-4ad0-a085-2086be520d1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:15 compute-0 nova_compute[260935]: 2025-10-11 08:49:15.636 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:49:15 compute-0 nova_compute[260935]: 2025-10-11 08:49:15.637 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Ensure instance console log exists: /var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:49:15 compute-0 nova_compute[260935]: 2025-10-11 08:49:15.637 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:15 compute-0 nova_compute[260935]: 2025-10-11 08:49:15.638 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:15 compute-0 nova_compute[260935]: 2025-10-11 08:49:15.638 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 214 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 176 op/s
Oct 11 08:49:16 compute-0 nova_compute[260935]: 2025-10-11 08:49:16.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:17 compute-0 nova_compute[260935]: 2025-10-11 08:49:17.048 2 DEBUG nova.compute.manager [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:17 compute-0 nova_compute[260935]: 2025-10-11 08:49:17.049 2 DEBUG oslo_concurrency.lockutils [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:17 compute-0 nova_compute[260935]: 2025-10-11 08:49:17.049 2 DEBUG oslo_concurrency.lockutils [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:17 compute-0 nova_compute[260935]: 2025-10-11 08:49:17.050 2 DEBUG oslo_concurrency.lockutils [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:17 compute-0 nova_compute[260935]: 2025-10-11 08:49:17.050 2 DEBUG nova.compute.manager [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:17 compute-0 nova_compute[260935]: 2025-10-11 08:49:17.050 2 WARNING nova.compute.manager [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-db31f1b4-b009-40dc-a028-b72fe0b1eb45 for instance with vm_state active and task_state deleting.
Oct 11 08:49:17 compute-0 nova_compute[260935]: 2025-10-11 08:49:17.051 2 DEBUG nova.compute.manager [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-unplugged-723ff3bf-882c-4198-afc8-a31026a4ccfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:17 compute-0 nova_compute[260935]: 2025-10-11 08:49:17.051 2 DEBUG oslo_concurrency.lockutils [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:17 compute-0 nova_compute[260935]: 2025-10-11 08:49:17.051 2 DEBUG oslo_concurrency.lockutils [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:17 compute-0 nova_compute[260935]: 2025-10-11 08:49:17.052 2 DEBUG oslo_concurrency.lockutils [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:17 compute-0 nova_compute[260935]: 2025-10-11 08:49:17.052 2 DEBUG nova.compute.manager [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-unplugged-723ff3bf-882c-4198-afc8-a31026a4ccfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:17 compute-0 nova_compute[260935]: 2025-10-11 08:49:17.052 2 DEBUG nova.compute.manager [req-13ac8601-8710-45c7-8416-db00260a77b7 req-60ca4176-60c3-430d-8a3c-b4db19c0bf06 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-unplugged-723ff3bf-882c-4198-afc8-a31026a4ccfc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:49:17 compute-0 ceph-mon[74313]: pgmap v1304: 321 pgs: 321 active+clean; 214 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 176 op/s
Oct 11 08:49:17 compute-0 nova_compute[260935]: 2025-10-11 08:49:17.382 2 INFO nova.network.neutron [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Port b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 11 08:49:17 compute-0 nova_compute[260935]: 2025-10-11 08:49:17.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 181 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 231 op/s
Oct 11 08:49:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:49:19 compute-0 ceph-mon[74313]: pgmap v1305: 321 pgs: 321 active+clean; 181 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 231 op/s
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.182 2 DEBUG nova.network.neutron [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Successfully created port: 3a7614a2-ff1f-4015-a387-8b15256f61b2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.244 2 INFO nova.network.neutron [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Port c9724939-cd91-44bb-a86b-72bf93c2a818 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.245 2 DEBUG nova.network.neutron [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "address": "fa:16:3e:97:65:f3", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb31f1b4-b0", "ovs_interfaceid": "db31f1b4-b009-40dc-a028-b72fe0b1eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.330 2 DEBUG oslo_concurrency.lockutils [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-057de6d9-3f9e-4b23-9019-f62ba6b453e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.464 2 DEBUG nova.compute.manager [req-c6e7016b-7502-491f-af1f-71196022c1d7 req-679c8db1-a60d-4ecc-9da8-3ce71cc29b98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-deleted-db31f1b4-b009-40dc-a028-b72fe0b1eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.465 2 INFO nova.compute.manager [req-c6e7016b-7502-491f-af1f-71196022c1d7 req-679c8db1-a60d-4ecc-9da8-3ce71cc29b98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Neutron deleted interface db31f1b4-b009-40dc-a028-b72fe0b1eb45; detaching it from the instance and deleting it from the info cache
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.465 2 DEBUG nova.network.neutron [req-c6e7016b-7502-491f-af1f-71196022c1d7 req-679c8db1-a60d-4ecc-9da8-3ce71cc29b98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [{"id": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "address": "fa:16:3e:90:fd:25", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap723ff3bf-88", "ovs_interfaceid": "723ff3bf-882c-4198-afc8-a31026a4ccfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.473 2 DEBUG oslo_concurrency.lockutils [None req-e9b5f8c5-14d3-45ad-8e0b-2abef032e048 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-057de6d9-3f9e-4b23-9019-f62ba6b453e7-b3e1a780-925f-4f4e-bca5-4b8f5ee45e1e" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 8.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:19 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.655 2 DEBUG nova.compute.manager [req-d047fca7-3822-4774-9cac-56ce2c26c57f req-3cae172c-ab98-4e6d-8954-5514b76d9432 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.656 2 DEBUG oslo_concurrency.lockutils [req-d047fca7-3822-4774-9cac-56ce2c26c57f req-3cae172c-ab98-4e6d-8954-5514b76d9432 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.657 2 DEBUG oslo_concurrency.lockutils [req-d047fca7-3822-4774-9cac-56ce2c26c57f req-3cae172c-ab98-4e6d-8954-5514b76d9432 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.657 2 DEBUG oslo_concurrency.lockutils [req-d047fca7-3822-4774-9cac-56ce2c26c57f req-3cae172c-ab98-4e6d-8954-5514b76d9432 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.658 2 DEBUG nova.compute.manager [req-d047fca7-3822-4774-9cac-56ce2c26c57f req-3cae172c-ab98-4e6d-8954-5514b76d9432 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] No waiting events found dispatching network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.659 2 WARNING nova.compute.manager [req-d047fca7-3822-4774-9cac-56ce2c26c57f req-3cae172c-ab98-4e6d-8954-5514b76d9432 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received unexpected event network-vif-plugged-723ff3bf-882c-4198-afc8-a31026a4ccfc for instance with vm_state active and task_state deleting.
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.664 2 DEBUG nova.compute.manager [req-c6e7016b-7502-491f-af1f-71196022c1d7 req-679c8db1-a60d-4ecc-9da8-3ce71cc29b98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Detach interface failed, port_id=db31f1b4-b009-40dc-a028-b72fe0b1eb45, reason: Instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 08:49:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 181 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 189 op/s
Oct 11 08:49:19 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.901 2 DEBUG nova.network.neutron [-] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:19 compute-0 nova_compute[260935]: 2025-10-11 08:49:19.952 2 INFO nova.compute.manager [-] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Took 4.89 seconds to deallocate network for instance.
Oct 11 08:49:20 compute-0 nova_compute[260935]: 2025-10-11 08:49:20.137 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:20 compute-0 nova_compute[260935]: 2025-10-11 08:49:20.138 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:20 compute-0 nova_compute[260935]: 2025-10-11 08:49:20.231 2 DEBUG oslo_concurrency.processutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:49:20 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1305414503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:20 compute-0 nova_compute[260935]: 2025-10-11 08:49:20.706 2 DEBUG oslo_concurrency.processutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:20 compute-0 nova_compute[260935]: 2025-10-11 08:49:20.716 2 DEBUG nova.compute.provider_tree [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:49:20 compute-0 nova_compute[260935]: 2025-10-11 08:49:20.738 2 DEBUG nova.scheduler.client.report [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:49:20 compute-0 nova_compute[260935]: 2025-10-11 08:49:20.763 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:20 compute-0 nova_compute[260935]: 2025-10-11 08:49:20.791 2 INFO nova.scheduler.client.report [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Deleted allocations for instance 057de6d9-3f9e-4b23-9019-f62ba6b453e7
Oct 11 08:49:20 compute-0 ovn_controller[152945]: 2025-10-11T08:49:20Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:d8:0b 10.100.0.11
Oct 11 08:49:20 compute-0 ovn_controller[152945]: 2025-10-11T08:49:20Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:d8:0b 10.100.0.11
Oct 11 08:49:20 compute-0 nova_compute[260935]: 2025-10-11 08:49:20.896 2 DEBUG oslo_concurrency.lockutils [None req-e8118ba0-a7d9-4b80-b10f-16c7dcffbfa0 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "057de6d9-3f9e-4b23-9019-f62ba6b453e7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:21 compute-0 ceph-mon[74313]: pgmap v1306: 321 pgs: 321 active+clean; 181 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 189 op/s
Oct 11 08:49:21 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1305414503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:21 compute-0 ovn_controller[152945]: 2025-10-11T08:49:21Z|00119|binding|INFO|Releasing lport 424305ea-6b47-4134-ad52-ee2a450e204c from this chassis (sb_readonly=0)
Oct 11 08:49:21 compute-0 nova_compute[260935]: 2025-10-11 08:49:21.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:21 compute-0 nova_compute[260935]: 2025-10-11 08:49:21.461 2 DEBUG nova.network.neutron [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Successfully updated port: 3a7614a2-ff1f-4015-a387-8b15256f61b2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:49:21 compute-0 nova_compute[260935]: 2025-10-11 08:49:21.482 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "refresh_cache-b3e20035-c079-4ad0-a085-2086be520d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:21 compute-0 nova_compute[260935]: 2025-10-11 08:49:21.482 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquired lock "refresh_cache-b3e20035-c079-4ad0-a085-2086be520d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:21 compute-0 nova_compute[260935]: 2025-10-11 08:49:21.482 2 DEBUG nova.network.neutron [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:49:21 compute-0 nova_compute[260935]: 2025-10-11 08:49:21.547 2 DEBUG nova.compute.manager [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Received event network-vif-deleted-723ff3bf-882c-4198-afc8-a31026a4ccfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:21 compute-0 nova_compute[260935]: 2025-10-11 08:49:21.548 2 DEBUG nova.compute.manager [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received event network-changed-3a7614a2-ff1f-4015-a387-8b15256f61b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:21 compute-0 nova_compute[260935]: 2025-10-11 08:49:21.549 2 DEBUG nova.compute.manager [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Refreshing instance network info cache due to event network-changed-3a7614a2-ff1f-4015-a387-8b15256f61b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:49:21 compute-0 nova_compute[260935]: 2025-10-11 08:49:21.550 2 DEBUG oslo_concurrency.lockutils [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b3e20035-c079-4ad0-a085-2086be520d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:21 compute-0 ovn_controller[152945]: 2025-10-11T08:49:21Z|00120|binding|INFO|Releasing lport 424305ea-6b47-4134-ad52-ee2a450e204c from this chassis (sb_readonly=0)
Oct 11 08:49:21 compute-0 nova_compute[260935]: 2025-10-11 08:49:21.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:21 compute-0 nova_compute[260935]: 2025-10-11 08:49:21.660 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172546.659631, e10cd028-76c1-4eb5-be43-f51e4da8abc1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:21 compute-0 nova_compute[260935]: 2025-10-11 08:49:21.661 2 INFO nova.compute.manager [-] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] VM Stopped (Lifecycle Event)
Oct 11 08:49:21 compute-0 nova_compute[260935]: 2025-10-11 08:49:21.664 2 DEBUG nova.network.neutron [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:49:21 compute-0 nova_compute[260935]: 2025-10-11 08:49:21.696 2 DEBUG nova.compute.manager [None req-297ac004-08fb-4980-9418-651b0e3f703b - - - - - -] [instance: e10cd028-76c1-4eb5-be43-f51e4da8abc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 181 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 189 op/s
Oct 11 08:49:21 compute-0 nova_compute[260935]: 2025-10-11 08:49:21.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:22 compute-0 sshd-session[292738]: Invalid user insight from 155.4.244.179 port 32265
Oct 11 08:49:22 compute-0 sshd-session[292738]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 08:49:22 compute-0 sshd-session[292738]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 08:49:22 compute-0 podman[292742]: 2025-10-11 08:49:22.409972322 +0000 UTC m=+0.089678607 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.518 2 DEBUG nova.network.neutron [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Updating instance_info_cache with network_info: [{"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.545 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Releasing lock "refresh_cache-b3e20035-c079-4ad0-a085-2086be520d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.545 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Instance network_info: |[{"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.545 2 DEBUG oslo_concurrency.lockutils [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b3e20035-c079-4ad0-a085-2086be520d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.546 2 DEBUG nova.network.neutron [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Refreshing network info cache for port 3a7614a2-ff1f-4015-a387-8b15256f61b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.549 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Start _get_guest_xml network_info=[{"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.555 2 WARNING nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.559 2 DEBUG nova.virt.libvirt.host [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.561 2 DEBUG nova.virt.libvirt.host [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.569 2 DEBUG nova.virt.libvirt.host [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.570 2 DEBUG nova.virt.libvirt.host [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.570 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.571 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.571 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.571 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.572 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.572 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.572 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.573 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.573 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.573 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.573 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.574 2 DEBUG nova.virt.hardware [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.577 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.770 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.771 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.788 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.852 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.853 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.862 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:49:22 compute-0 nova_compute[260935]: 2025-10-11 08:49:22.863 2 INFO nova.compute.claims [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.007 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:49:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3865369347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.072 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.108 2 DEBUG nova.storage.rbd_utils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image b3e20035-c079-4ad0-a085-2086be520d1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.115 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:23 compute-0 ceph-mon[74313]: pgmap v1307: 321 pgs: 321 active+clean; 181 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 189 op/s
Oct 11 08:49:23 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3865369347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:23 compute-0 ovn_controller[152945]: 2025-10-11T08:49:23Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:3f:af 10.100.0.14
Oct 11 08:49:23 compute-0 ovn_controller[152945]: 2025-10-11T08:49:23Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:3f:af 10.100.0.14
Oct 11 08:49:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:49:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:49:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2082289042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.505 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.514 2 DEBUG nova.compute.provider_tree [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.538 2 DEBUG nova.scheduler.client.report [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:49:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:49:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2887860017' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.566 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.568 2 DEBUG nova.virt.libvirt.vif [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-911926473',display_name='tempest-ServersAdminTestJSON-server-911926473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-911926473',id=24,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-14xzfgy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:14Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=b3e20035-c079-4ad0-a085-2086be520d1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.569 2 DEBUG nova.network.os_vif_util [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.570 2 DEBUG nova.network.os_vif_util [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:51:53,bridge_name='br-int',has_traffic_filtering=True,id=3a7614a2-ff1f-4015-a387-8b15256f61b2,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a7614a2-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.572 2 DEBUG nova.objects.instance [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3e20035-c079-4ad0-a085-2086be520d1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.576 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.577 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.616 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:49:23 compute-0 nova_compute[260935]:   <uuid>b3e20035-c079-4ad0-a085-2086be520d1d</uuid>
Oct 11 08:49:23 compute-0 nova_compute[260935]:   <name>instance-00000018</name>
Oct 11 08:49:23 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:49:23 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:49:23 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersAdminTestJSON-server-911926473</nova:name>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:49:22</nova:creationTime>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:49:23 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:49:23 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:49:23 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:49:23 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:49:23 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:49:23 compute-0 nova_compute[260935]:         <nova:user uuid="a51c2680b31e40b1908642ef8795c6f0">tempest-ServersAdminTestJSON-1756812845-project-member</nova:user>
Oct 11 08:49:23 compute-0 nova_compute[260935]:         <nova:project uuid="39d3043a7835403392c659fbb2fe0b22">tempest-ServersAdminTestJSON-1756812845</nova:project>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:49:23 compute-0 nova_compute[260935]:         <nova:port uuid="3a7614a2-ff1f-4015-a387-8b15256f61b2">
Oct 11 08:49:23 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:49:23 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:49:23 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <system>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <entry name="serial">b3e20035-c079-4ad0-a085-2086be520d1d</entry>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <entry name="uuid">b3e20035-c079-4ad0-a085-2086be520d1d</entry>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     </system>
Oct 11 08:49:23 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:49:23 compute-0 nova_compute[260935]:   <os>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:   </os>
Oct 11 08:49:23 compute-0 nova_compute[260935]:   <features>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:   </features>
Oct 11 08:49:23 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:49:23 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:49:23 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b3e20035-c079-4ad0-a085-2086be520d1d_disk">
Oct 11 08:49:23 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:49:23 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b3e20035-c079-4ad0-a085-2086be520d1d_disk.config">
Oct 11 08:49:23 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:49:23 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:e8:51:53"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <target dev="tap3a7614a2-ff"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d/console.log" append="off"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <video>
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     </video>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:49:23 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:49:23 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:49:23 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:49:23 compute-0 nova_compute[260935]: </domain>
Oct 11 08:49:23 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.618 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Preparing to wait for external event network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.619 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.619 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.620 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.621 2 DEBUG nova.virt.libvirt.vif [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-911926473',display_name='tempest-ServersAdminTestJSON-server-911926473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-911926473',id=24,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-14xzfgy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:14Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=b3e20035-c079-4ad0-a085-2086be520d1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.622 2 DEBUG nova.network.os_vif_util [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.623 2 DEBUG nova.network.os_vif_util [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:51:53,bridge_name='br-int',has_traffic_filtering=True,id=3a7614a2-ff1f-4015-a387-8b15256f61b2,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a7614a2-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.624 2 DEBUG os_vif [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:51:53,bridge_name='br-int',has_traffic_filtering=True,id=3a7614a2-ff1f-4015-a387-8b15256f61b2,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a7614a2-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.625 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.626 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.630 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a7614a2-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.631 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3a7614a2-ff, col_values=(('external_ids', {'iface-id': '3a7614a2-ff1f-4015-a387-8b15256f61b2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:51:53', 'vm-uuid': 'b3e20035-c079-4ad0-a085-2086be520d1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:23 compute-0 NetworkManager[44960]: <info>  [1760172563.6350] manager: (tap3a7614a2-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.642 2 INFO os_vif [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:51:53,bridge_name='br-int',has_traffic_filtering=True,id=3a7614a2-ff1f-4015-a387-8b15256f61b2,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a7614a2-ff')
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.688 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:49:23 compute-0 nova_compute[260935]: 2025-10-11 08:49:23.689 2 DEBUG nova.network.neutron [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:49:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 239 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.9 MiB/s wr, 290 op/s
Oct 11 08:49:24 compute-0 sshd-session[292738]: Failed password for invalid user insight from 155.4.244.179 port 32265 ssh2
Oct 11 08:49:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2082289042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2887860017' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.305 2 DEBUG nova.policy [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1bab12893b9d49aabcb5ca19c9b951de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8c7604961214c6d9d49657535d799a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.469 2 INFO nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.479 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.480 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.480 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No VIF found with MAC fa:16:3e:e8:51:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.481 2 INFO nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Using config drive
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.516 2 DEBUG nova.storage.rbd_utils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image b3e20035-c079-4ad0-a085-2086be520d1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.526 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.716 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.718 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.718 2 INFO nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Creating image(s)
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.755 2 DEBUG nova.storage.rbd_utils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:49:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:49:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:49:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:49:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:49:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.795 2 DEBUG nova.storage.rbd_utils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.832 2 DEBUG nova.storage.rbd_utils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.837 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.931 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.933 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.934 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.935 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.965 2 DEBUG nova.storage.rbd_utils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:24 compute-0 nova_compute[260935]: 2025-10-11 08:49:24.970 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:25 compute-0 ceph-mon[74313]: pgmap v1308: 321 pgs: 321 active+clean; 239 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.9 MiB/s wr, 290 op/s
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.257 2 DEBUG nova.network.neutron [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Updated VIF entry in instance network info cache for port 3a7614a2-ff1f-4015-a387-8b15256f61b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.260 2 DEBUG nova.network.neutron [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Updating instance_info_cache with network_info: [{"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.265 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.270 2 INFO nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Creating config drive at /var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d/disk.config
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.281 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnrqinbk2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.420 2 DEBUG nova.storage.rbd_utils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] resizing rbd image 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.472 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnrqinbk2" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.508 2 DEBUG nova.storage.rbd_utils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image b3e20035-c079-4ad0-a085-2086be520d1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.512 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d/disk.config b3e20035-c079-4ad0-a085-2086be520d1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.548 2 DEBUG oslo_concurrency.lockutils [req-70147c05-b470-4752-b1d1-b6f16d087d86 req-e4b98cf6-b2fc-4d0f-9e0a-7f71b9997f74 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b3e20035-c079-4ad0-a085-2086be520d1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.617 2 DEBUG nova.objects.instance [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.656 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.657 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Ensure instance console log exists: /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.658 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.659 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.659 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.662 2 DEBUG nova.network.neutron [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Successfully created port: f1d8b704-c5df-41f7-b46a-04c0e89ab2cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.689 2 DEBUG oslo_concurrency.processutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d/disk.config b3e20035-c079-4ad0-a085-2086be520d1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.690 2 INFO nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Deleting local config drive /var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d/disk.config because it was imported into RBD.
Oct 11 08:49:25 compute-0 kernel: tap3a7614a2-ff: entered promiscuous mode
Oct 11 08:49:25 compute-0 NetworkManager[44960]: <info>  [1760172565.7750] manager: (tap3a7614a2-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:25 compute-0 ovn_controller[152945]: 2025-10-11T08:49:25Z|00121|binding|INFO|Claiming lport 3a7614a2-ff1f-4015-a387-8b15256f61b2 for this chassis.
Oct 11 08:49:25 compute-0 ovn_controller[152945]: 2025-10-11T08:49:25Z|00122|binding|INFO|3a7614a2-ff1f-4015-a387-8b15256f61b2: Claiming fa:16:3e:e8:51:53 10.100.0.5
Oct 11 08:49:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:25.817 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:51:53 10.100.0.5'], port_security=['fa:16:3e:e8:51:53 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b3e20035-c079-4ad0-a085-2086be520d1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=3a7614a2-ff1f-4015-a387-8b15256f61b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:49:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:25.818 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 3a7614a2-ff1f-4015-a387-8b15256f61b2 in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 bound to our chassis
Oct 11 08:49:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:25.821 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5
Oct 11 08:49:25 compute-0 ovn_controller[152945]: 2025-10-11T08:49:25Z|00123|binding|INFO|Setting lport 3a7614a2-ff1f-4015-a387-8b15256f61b2 ovn-installed in OVS
Oct 11 08:49:25 compute-0 ovn_controller[152945]: 2025-10-11T08:49:25Z|00124|binding|INFO|Setting lport 3a7614a2-ff1f-4015-a387-8b15256f61b2 up in Southbound
Oct 11 08:49:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 239 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 5.9 MiB/s wr, 156 op/s
Oct 11 08:49:25 compute-0 nova_compute[260935]: 2025-10-11 08:49:25.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:25 compute-0 systemd-machined[215705]: New machine qemu-26-instance-00000018.
Oct 11 08:49:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:25.855 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f696e1-77a7-41ce-8712-4bb832b8ae51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:25 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-00000018.
Oct 11 08:49:25 compute-0 systemd-udevd[293085]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:49:25 compute-0 NetworkManager[44960]: <info>  [1760172565.8782] device (tap3a7614a2-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:49:25 compute-0 NetworkManager[44960]: <info>  [1760172565.8800] device (tap3a7614a2-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:49:25 compute-0 sshd-session[292738]: Received disconnect from 155.4.244.179 port 32265:11: Bye Bye [preauth]
Oct 11 08:49:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:25.907 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7732a58e-1eff-47b9-aa6b-28d68300670f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:25 compute-0 sshd-session[292738]: Disconnected from invalid user insight 155.4.244.179 port 32265 [preauth]
Oct 11 08:49:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:25.910 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f85935-c679-4c51-b4bb-302944cbaa85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:25.963 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[80542b94-1328-4929-a093-64d898e29243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:25 compute-0 sudo[293090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:49:25 compute-0 sudo[293090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:25 compute-0 sudo[293090]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:25.988 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7d707f89-17d4-4500-9937-d0cfe0f3533e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293122, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:26.016 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[38f51894-892b-4fcf-b114-605ee8b79710]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439130, 'tstamp': 439130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293127, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439135, 'tstamp': 439135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293127, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:26.019 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:26 compute-0 nova_compute[260935]: 2025-10-11 08:49:26.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:26 compute-0 nova_compute[260935]: 2025-10-11 08:49:26.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:26.024 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:26.025 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:26.025 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:26.027 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:26 compute-0 sudo[293124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:49:26 compute-0 sudo[293124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:26 compute-0 sudo[293124]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:26 compute-0 sudo[293150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:49:26 compute-0 sudo[293150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:26 compute-0 sudo[293150]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:26 compute-0 sudo[293212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:49:26 compute-0 sudo[293212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:26 compute-0 nova_compute[260935]: 2025-10-11 08:49:26.702 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172566.702305, b3e20035-c079-4ad0-a085-2086be520d1d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:26 compute-0 nova_compute[260935]: 2025-10-11 08:49:26.703 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] VM Started (Lifecycle Event)
Oct 11 08:49:26 compute-0 nova_compute[260935]: 2025-10-11 08:49:26.811 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:26 compute-0 nova_compute[260935]: 2025-10-11 08:49:26.817 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172566.703369, b3e20035-c079-4ad0-a085-2086be520d1d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:26 compute-0 nova_compute[260935]: 2025-10-11 08:49:26.818 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] VM Paused (Lifecycle Event)
Oct 11 08:49:26 compute-0 sudo[293212]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:49:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:49:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:49:26 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:49:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:49:26 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:49:26 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 5129121b-ac1b-40c6-bb9e-cc53c3193290 does not exist
Oct 11 08:49:26 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 537ec94a-fc68-4c76-879c-6731541868c2 does not exist
Oct 11 08:49:26 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 1d9e505a-7008-4b5d-8e9e-362ed33ceefd does not exist
Oct 11 08:49:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:49:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:49:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:49:26 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:49:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:49:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:49:27 compute-0 sudo[293273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:27 compute-0 sudo[293273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:27 compute-0 sudo[293273]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:27 compute-0 sudo[293298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.113 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:27 compute-0 sudo[293298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:27 compute-0 sudo[293298]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.119 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.171 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:49:27 compute-0 ceph-mon[74313]: pgmap v1309: 321 pgs: 321 active+clean; 239 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 5.9 MiB/s wr, 156 op/s
Oct 11 08:49:27 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:49:27 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:49:27 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:49:27 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:49:27 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:49:27 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:49:27 compute-0 sudo[293323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:49:27 compute-0 sudo[293323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:27 compute-0 sudo[293323]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:27 compute-0 sudo[293348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:49:27 compute-0 sudo[293348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.589 2 DEBUG nova.network.neutron [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Successfully updated port: f1d8b704-c5df-41f7-b46a-04c0e89ab2cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.599 2 DEBUG nova.compute.manager [req-0d1a8ac4-2c0c-43cd-bfa7-0720f77e80ac req-a0f5cf9c-7d22-400b-afee-8ba233faebb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received event network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.599 2 DEBUG oslo_concurrency.lockutils [req-0d1a8ac4-2c0c-43cd-bfa7-0720f77e80ac req-a0f5cf9c-7d22-400b-afee-8ba233faebb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.599 2 DEBUG oslo_concurrency.lockutils [req-0d1a8ac4-2c0c-43cd-bfa7-0720f77e80ac req-a0f5cf9c-7d22-400b-afee-8ba233faebb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.599 2 DEBUG oslo_concurrency.lockutils [req-0d1a8ac4-2c0c-43cd-bfa7-0720f77e80ac req-a0f5cf9c-7d22-400b-afee-8ba233faebb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.599 2 DEBUG nova.compute.manager [req-0d1a8ac4-2c0c-43cd-bfa7-0720f77e80ac req-a0f5cf9c-7d22-400b-afee-8ba233faebb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Processing event network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.600 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.603 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172567.6031952, b3e20035-c079-4ad0-a085-2086be520d1d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.603 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] VM Resumed (Lifecycle Event)
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.604 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.607 2 INFO nova.virt.libvirt.driver [-] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Instance spawned successfully.
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.607 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.792 2 DEBUG nova.compute.manager [req-b98f97c0-a73f-4f5e-b94e-07d046b4528e req-983be31e-2754-4d31-b710-1d17e4250d98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received event network-changed-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.792 2 DEBUG nova.compute.manager [req-b98f97c0-a73f-4f5e-b94e-07d046b4528e req-983be31e-2754-4d31-b710-1d17e4250d98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Refreshing instance network info cache due to event network-changed-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.793 2 DEBUG oslo_concurrency.lockutils [req-b98f97c0-a73f-4f5e-b94e-07d046b4528e req-983be31e-2754-4d31-b710-1d17e4250d98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8e4f771a-b87a-40f9-a12e-b5b4583b96f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.793 2 DEBUG oslo_concurrency.lockutils [req-b98f97c0-a73f-4f5e-b94e-07d046b4528e req-983be31e-2754-4d31-b710-1d17e4250d98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8e4f771a-b87a-40f9-a12e-b5b4583b96f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.793 2 DEBUG nova.network.neutron [req-b98f97c0-a73f-4f5e-b94e-07d046b4528e req-983be31e-2754-4d31-b710-1d17e4250d98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Refreshing network info cache for port f1d8b704-c5df-41f7-b46a-04c0e89ab2cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.807 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "refresh_cache-8e4f771a-b87a-40f9-a12e-b5b4583b96f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.827 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.827 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.827 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.828 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.828 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.829 2 DEBUG nova.virt.libvirt.driver [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 293 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 588 KiB/s rd, 7.8 MiB/s wr, 213 op/s
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.837 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.840 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:49:27 compute-0 podman[293413]: 2025-10-11 08:49:27.858493857 +0000 UTC m=+0.099871195 container create c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldstine, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.879 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:49:27 compute-0 podman[293413]: 2025-10-11 08:49:27.815324986 +0000 UTC m=+0.056702324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:49:27 compute-0 systemd[1]: Started libpod-conmon-c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d.scope.
Oct 11 08:49:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.933 2 INFO nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Took 13.15 seconds to spawn the instance on the hypervisor.
Oct 11 08:49:27 compute-0 nova_compute[260935]: 2025-10-11 08:49:27.933 2 DEBUG nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:27 compute-0 podman[293413]: 2025-10-11 08:49:27.951504028 +0000 UTC m=+0.192881366 container init c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldstine, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:49:27 compute-0 podman[293413]: 2025-10-11 08:49:27.964431513 +0000 UTC m=+0.205808851 container start c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 08:49:27 compute-0 podman[293413]: 2025-10-11 08:49:27.967836049 +0000 UTC m=+0.209213387 container attach c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:49:27 compute-0 intelligent_goldstine[293429]: 167 167
Oct 11 08:49:27 compute-0 systemd[1]: libpod-c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d.scope: Deactivated successfully.
Oct 11 08:49:27 compute-0 podman[293413]: 2025-10-11 08:49:27.975197108 +0000 UTC m=+0.216574446 container died c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldstine, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:49:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0af76125f860beba6550a7e6c7dbac1e30232e0d474ca52d059bf573fb132055-merged.mount: Deactivated successfully.
Oct 11 08:49:28 compute-0 podman[293413]: 2025-10-11 08:49:28.015770155 +0000 UTC m=+0.257147503 container remove c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 08:49:28 compute-0 nova_compute[260935]: 2025-10-11 08:49:28.044 2 DEBUG nova.network.neutron [req-b98f97c0-a73f-4f5e-b94e-07d046b4528e req-983be31e-2754-4d31-b710-1d17e4250d98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:49:28 compute-0 systemd[1]: libpod-conmon-c68dffc718fc136118cb44053b6a77ec0d9df5ffc6259b4872c4e9a501c3b67d.scope: Deactivated successfully.
Oct 11 08:49:28 compute-0 nova_compute[260935]: 2025-10-11 08:49:28.067 2 INFO nova.compute.manager [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Took 14.35 seconds to build instance.
Oct 11 08:49:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:49:28 compute-0 nova_compute[260935]: 2025-10-11 08:49:28.261 2 DEBUG oslo_concurrency.lockutils [None req-ff714f88-10a1-47a3-b901-6ed2315b9ba0 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:28 compute-0 podman[293453]: 2025-10-11 08:49:28.26807683 +0000 UTC m=+0.057560508 container create fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_grothendieck, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 08:49:28 compute-0 systemd[1]: Started libpod-conmon-fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c.scope.
Oct 11 08:49:28 compute-0 podman[293453]: 2025-10-11 08:49:28.243867226 +0000 UTC m=+0.033350904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:49:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47648c71e18aec7dd0c7c9ca8c1e922900254258a2c32abfcaafee327d69cbc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47648c71e18aec7dd0c7c9ca8c1e922900254258a2c32abfcaafee327d69cbc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47648c71e18aec7dd0c7c9ca8c1e922900254258a2c32abfcaafee327d69cbc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47648c71e18aec7dd0c7c9ca8c1e922900254258a2c32abfcaafee327d69cbc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47648c71e18aec7dd0c7c9ca8c1e922900254258a2c32abfcaafee327d69cbc8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:49:28 compute-0 podman[293453]: 2025-10-11 08:49:28.407467252 +0000 UTC m=+0.196950940 container init fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 08:49:28 compute-0 podman[293453]: 2025-10-11 08:49:28.414419039 +0000 UTC m=+0.203902677 container start fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 08:49:28 compute-0 podman[293453]: 2025-10-11 08:49:28.419000019 +0000 UTC m=+0.208483697 container attach fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_grothendieck, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:49:28 compute-0 nova_compute[260935]: 2025-10-11 08:49:28.532 2 DEBUG nova.network.neutron [req-b98f97c0-a73f-4f5e-b94e-07d046b4528e req-983be31e-2754-4d31-b710-1d17e4250d98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:28 compute-0 nova_compute[260935]: 2025-10-11 08:49:28.602 2 DEBUG oslo_concurrency.lockutils [req-b98f97c0-a73f-4f5e-b94e-07d046b4528e req-983be31e-2754-4d31-b710-1d17e4250d98 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8e4f771a-b87a-40f9-a12e-b5b4583b96f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:28 compute-0 nova_compute[260935]: 2025-10-11 08:49:28.604 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquired lock "refresh_cache-8e4f771a-b87a-40f9-a12e-b5b4583b96f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:28 compute-0 nova_compute[260935]: 2025-10-11 08:49:28.604 2 DEBUG nova.network.neutron [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:49:28 compute-0 nova_compute[260935]: 2025-10-11 08:49:28.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:28 compute-0 nova_compute[260935]: 2025-10-11 08:49:28.960 2 DEBUG nova.network.neutron [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:49:29 compute-0 ceph-mon[74313]: pgmap v1310: 321 pgs: 321 active+clean; 293 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 588 KiB/s rd, 7.8 MiB/s wr, 213 op/s
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.248 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172554.2447348, 057de6d9-3f9e-4b23-9019-f62ba6b453e7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.249 2 INFO nova.compute.manager [-] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] VM Stopped (Lifecycle Event)
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.286 2 DEBUG nova.compute.manager [None req-a142ebc3-9296-4f0f-9f1f-8a49a44e319e - - - - - -] [instance: 057de6d9-3f9e-4b23-9019-f62ba6b453e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.701 2 DEBUG nova.network.neutron [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Updating instance_info_cache with network_info: [{"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:29 compute-0 practical_grothendieck[293469]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:49:29 compute-0 practical_grothendieck[293469]: --> relative data size: 1.0
Oct 11 08:49:29 compute-0 practical_grothendieck[293469]: --> All data devices are unavailable
Oct 11 08:49:29 compute-0 systemd[1]: libpod-fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c.scope: Deactivated successfully.
Oct 11 08:49:29 compute-0 podman[293453]: 2025-10-11 08:49:29.765997491 +0000 UTC m=+1.555481139 container died fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_grothendieck, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 08:49:29 compute-0 systemd[1]: libpod-fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c.scope: Consumed 1.212s CPU time.
Oct 11 08:49:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-47648c71e18aec7dd0c7c9ca8c1e922900254258a2c32abfcaafee327d69cbc8-merged.mount: Deactivated successfully.
Oct 11 08:49:29 compute-0 podman[293453]: 2025-10-11 08:49:29.824806764 +0000 UTC m=+1.614290402 container remove fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 08:49:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 293 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 552 KiB/s rd, 6.0 MiB/s wr, 158 op/s
Oct 11 08:49:29 compute-0 systemd[1]: libpod-conmon-fec0e3ee2412e8c93cf34d2c8412c859c83ef7b9481e0ca7c88ec238833cea0c.scope: Deactivated successfully.
Oct 11 08:49:29 compute-0 podman[293497]: 2025-10-11 08:49:29.850647275 +0000 UTC m=+0.132352234 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 11 08:49:29 compute-0 sudo[293348]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:29 compute-0 sudo[293529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:49:29 compute-0 sudo[293529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:29 compute-0 sudo[293529]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.969 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Releasing lock "refresh_cache-8e4f771a-b87a-40f9-a12e-b5b4583b96f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.969 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Instance network_info: |[{"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.973 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Start _get_guest_xml network_info=[{"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.982 2 WARNING nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.990 2 DEBUG nova.virt.libvirt.host [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.991 2 DEBUG nova.virt.libvirt.host [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.995 2 DEBUG nova.virt.libvirt.host [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.996 2 DEBUG nova.virt.libvirt.host [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.996 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.996 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.997 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.997 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.998 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.998 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.998 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.999 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:49:29 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.999 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:49:30 compute-0 nova_compute[260935]: 2025-10-11 08:49:29.999 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:49:30 compute-0 nova_compute[260935]: 2025-10-11 08:49:30.000 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:49:30 compute-0 nova_compute[260935]: 2025-10-11 08:49:30.000 2 DEBUG nova.virt.hardware [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:49:30 compute-0 nova_compute[260935]: 2025-10-11 08:49:30.004 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:30 compute-0 sudo[293554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:49:30 compute-0 sudo[293554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:30 compute-0 sudo[293554]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:30 compute-0 sudo[293580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:49:30 compute-0 sudo[293580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:30 compute-0 sudo[293580]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:30 compute-0 sudo[293606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:49:30 compute-0 sudo[293606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:49:30 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/381446818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:30 compute-0 nova_compute[260935]: 2025-10-11 08:49:30.522 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:30 compute-0 nova_compute[260935]: 2025-10-11 08:49:30.561 2 DEBUG nova.storage.rbd_utils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:30 compute-0 nova_compute[260935]: 2025-10-11 08:49:30.568 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:30 compute-0 podman[293711]: 2025-10-11 08:49:30.676447469 +0000 UTC m=+0.058083654 container create 364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 08:49:30 compute-0 systemd[1]: Started libpod-conmon-364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c.scope.
Oct 11 08:49:30 compute-0 podman[293711]: 2025-10-11 08:49:30.647874381 +0000 UTC m=+0.029510616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:49:30 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:49:30 compute-0 podman[293711]: 2025-10-11 08:49:30.78397635 +0000 UTC m=+0.165612555 container init 364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:49:30 compute-0 podman[293711]: 2025-10-11 08:49:30.795273089 +0000 UTC m=+0.176909264 container start 364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:49:30 compute-0 podman[293711]: 2025-10-11 08:49:30.800477457 +0000 UTC m=+0.182113632 container attach 364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 08:49:30 compute-0 modest_mcclintock[293747]: 167 167
Oct 11 08:49:30 compute-0 systemd[1]: libpod-364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c.scope: Deactivated successfully.
Oct 11 08:49:30 compute-0 podman[293711]: 2025-10-11 08:49:30.834658263 +0000 UTC m=+0.216294478 container died 364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:49:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e86952f7d813ea1a86e9b63d880acfb5e92a6c3b4679998143ab14144bcd1244-merged.mount: Deactivated successfully.
Oct 11 08:49:30 compute-0 podman[293711]: 2025-10-11 08:49:30.890678127 +0000 UTC m=+0.272314272 container remove 364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 08:49:30 compute-0 systemd[1]: libpod-conmon-364a1f35e33e8223d877e85c7bb928dc2a3a5cef9a06d7bfd25f7689a2d10b8c.scope: Deactivated successfully.
Oct 11 08:49:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:49:30 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3953573349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:30.999 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.001 2 DEBUG nova.virt.libvirt.vif [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1760568996',display_name='tempest-ImagesTestJSON-server-1760568996',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1760568996',id=25,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-ipkecogn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:24Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=8e4f771a-b87a-40f9-a12e-b5b4583b96f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.002 2 DEBUG nova.network.os_vif_util [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.003 2 DEBUG nova.network.os_vif_util [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:11:79,bridge_name='br-int',has_traffic_filtering=True,id=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1d8b704-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.004 2 DEBUG nova.objects.instance [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.104 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:49:31 compute-0 nova_compute[260935]:   <uuid>8e4f771a-b87a-40f9-a12e-b5b4583b96f7</uuid>
Oct 11 08:49:31 compute-0 nova_compute[260935]:   <name>instance-00000019</name>
Oct 11 08:49:31 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:49:31 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:49:31 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <nova:name>tempest-ImagesTestJSON-server-1760568996</nova:name>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:49:29</nova:creationTime>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:49:31 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:49:31 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:49:31 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:49:31 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:49:31 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:49:31 compute-0 nova_compute[260935]:         <nova:user uuid="1bab12893b9d49aabcb5ca19c9b951de">tempest-ImagesTestJSON-694493184-project-member</nova:user>
Oct 11 08:49:31 compute-0 nova_compute[260935]:         <nova:project uuid="f8c7604961214c6d9d49657535d799a5">tempest-ImagesTestJSON-694493184</nova:project>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:49:31 compute-0 nova_compute[260935]:         <nova:port uuid="f1d8b704-c5df-41f7-b46a-04c0e89ab2cd">
Oct 11 08:49:31 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:49:31 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:49:31 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <system>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <entry name="serial">8e4f771a-b87a-40f9-a12e-b5b4583b96f7</entry>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <entry name="uuid">8e4f771a-b87a-40f9-a12e-b5b4583b96f7</entry>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     </system>
Oct 11 08:49:31 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:49:31 compute-0 nova_compute[260935]:   <os>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:   </os>
Oct 11 08:49:31 compute-0 nova_compute[260935]:   <features>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:   </features>
Oct 11 08:49:31 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:49:31 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:49:31 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk">
Oct 11 08:49:31 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:49:31 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk.config">
Oct 11 08:49:31 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:49:31 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:3c:11:79"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <target dev="tapf1d8b704-c5"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7/console.log" append="off"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <video>
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     </video>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:49:31 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:49:31 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:49:31 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:49:31 compute-0 nova_compute[260935]: </domain>
Oct 11 08:49:31 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.104 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Preparing to wait for external event network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.104 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.105 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.105 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.106 2 DEBUG nova.virt.libvirt.vif [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1760568996',display_name='tempest-ImagesTestJSON-server-1760568996',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1760568996',id=25,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-ipkecogn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:24Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=8e4f771a-b87a-40f9-a12e-b5b4583b96f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.106 2 DEBUG nova.network.os_vif_util [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.107 2 DEBUG nova.network.os_vif_util [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:11:79,bridge_name='br-int',has_traffic_filtering=True,id=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1d8b704-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.107 2 DEBUG os_vif [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:11:79,bridge_name='br-int',has_traffic_filtering=True,id=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1d8b704-c5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.109 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.109 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.113 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1d8b704-c5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf1d8b704-c5, col_values=(('external_ids', {'iface-id': 'f1d8b704-c5df-41f7-b46a-04c0e89ab2cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:11:79', 'vm-uuid': '8e4f771a-b87a-40f9-a12e-b5b4583b96f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:31 compute-0 NetworkManager[44960]: <info>  [1760172571.1166] manager: (tapf1d8b704-c5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.127 2 INFO os_vif [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:11:79,bridge_name='br-int',has_traffic_filtering=True,id=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1d8b704-c5')
Oct 11 08:49:31 compute-0 podman[293773]: 2025-10-11 08:49:31.170876652 +0000 UTC m=+0.082209036 container create 89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.190 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.191 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.191 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No VIF found with MAC fa:16:3e:3c:11:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.192 2 INFO nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Using config drive
Oct 11 08:49:31 compute-0 ceph-mon[74313]: pgmap v1311: 321 pgs: 321 active+clean; 293 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 552 KiB/s rd, 6.0 MiB/s wr, 158 op/s
Oct 11 08:49:31 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/381446818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:31 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3953573349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.218 2 DEBUG nova.storage.rbd_utils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:31 compute-0 systemd[1]: Started libpod-conmon-89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce.scope.
Oct 11 08:49:31 compute-0 podman[293773]: 2025-10-11 08:49:31.151472193 +0000 UTC m=+0.062804607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:49:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:49:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9333c45619715a8386b9cd8bb4165df4704e20f1361c3eda6655b1cbb3738cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:49:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9333c45619715a8386b9cd8bb4165df4704e20f1361c3eda6655b1cbb3738cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:49:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9333c45619715a8386b9cd8bb4165df4704e20f1361c3eda6655b1cbb3738cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:49:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9333c45619715a8386b9cd8bb4165df4704e20f1361c3eda6655b1cbb3738cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:49:31 compute-0 podman[293773]: 2025-10-11 08:49:31.285494103 +0000 UTC m=+0.196826547 container init 89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_khorana, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 08:49:31 compute-0 podman[293773]: 2025-10-11 08:49:31.297544224 +0000 UTC m=+0.208876598 container start 89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_khorana, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:49:31 compute-0 podman[293773]: 2025-10-11 08:49:31.300988771 +0000 UTC m=+0.212321225 container attach 89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_khorana, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.559 2 INFO nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Creating config drive at /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7/disk.config
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.574 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdo07dn58 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.724 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdo07dn58" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.758 2 DEBUG nova.storage.rbd_utils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.763 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7/disk.config 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 293 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 552 KiB/s rd, 6.0 MiB/s wr, 158 op/s
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.987 2 DEBUG oslo_concurrency.processutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7/disk.config 8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:31 compute-0 nova_compute[260935]: 2025-10-11 08:49:31.988 2 INFO nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Deleting local config drive /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7/disk.config because it was imported into RBD.
Oct 11 08:49:32 compute-0 nova_compute[260935]: 2025-10-11 08:49:32.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:32 compute-0 kernel: tapf1d8b704-c5: entered promiscuous mode
Oct 11 08:49:32 compute-0 NetworkManager[44960]: <info>  [1760172572.0687] manager: (tapf1d8b704-c5): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Oct 11 08:49:32 compute-0 nova_compute[260935]: 2025-10-11 08:49:32.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:32 compute-0 ovn_controller[152945]: 2025-10-11T08:49:32Z|00125|binding|INFO|Claiming lport f1d8b704-c5df-41f7-b46a-04c0e89ab2cd for this chassis.
Oct 11 08:49:32 compute-0 ovn_controller[152945]: 2025-10-11T08:49:32Z|00126|binding|INFO|f1d8b704-c5df-41f7-b46a-04c0e89ab2cd: Claiming fa:16:3e:3c:11:79 10.100.0.7
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.098 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:11:79 10.100.0.7'], port_security=['fa:16:3e:3c:11:79 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8e4f771a-b87a-40f9-a12e-b5b4583b96f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.099 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f1d8b704-c5df-41f7-b46a-04c0e89ab2cd in datapath 9bac3530-993f-420e-8692-0b14a331d756 bound to our chassis
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.102 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bac3530-993f-420e-8692-0b14a331d756
Oct 11 08:49:32 compute-0 systemd-udevd[293869]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.121 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5c570324-8721-4af1-b891-6bded7dc2639]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.122 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bac3530-91 in ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.125 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bac3530-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.125 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b627612e-a4e6-4b5e-b3c8-7a90a07f1385]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.127 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d17c40b3-d56f-455e-a7fc-ae27147e9f1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:32 compute-0 NetworkManager[44960]: <info>  [1760172572.1367] device (tapf1d8b704-c5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:49:32 compute-0 NetworkManager[44960]: <info>  [1760172572.1396] device (tapf1d8b704-c5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.147 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[52068f6b-dc6a-4fee-bcc7-95d5c1801319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:32 compute-0 systemd-machined[215705]: New machine qemu-27-instance-00000019.
Oct 11 08:49:32 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-00000019.
Oct 11 08:49:32 compute-0 nova_compute[260935]: 2025-10-11 08:49:32.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:32 compute-0 ovn_controller[152945]: 2025-10-11T08:49:32Z|00127|binding|INFO|Setting lport f1d8b704-c5df-41f7-b46a-04c0e89ab2cd ovn-installed in OVS
Oct 11 08:49:32 compute-0 ovn_controller[152945]: 2025-10-11T08:49:32Z|00128|binding|INFO|Setting lport f1d8b704-c5df-41f7-b46a-04c0e89ab2cd up in Southbound
Oct 11 08:49:32 compute-0 nova_compute[260935]: 2025-10-11 08:49:32.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.186 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bf96c7c2-5ea0-4d58-aef9-03f191362ff5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:32 compute-0 determined_khorana[293810]: {
Oct 11 08:49:32 compute-0 determined_khorana[293810]:     "0": [
Oct 11 08:49:32 compute-0 determined_khorana[293810]:         {
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "devices": [
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "/dev/loop3"
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             ],
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "lv_name": "ceph_lv0",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "lv_size": "21470642176",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "name": "ceph_lv0",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "tags": {
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.cluster_name": "ceph",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.crush_device_class": "",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.encrypted": "0",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.osd_id": "0",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.type": "block",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.vdo": "0"
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             },
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "type": "block",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "vg_name": "ceph_vg0"
Oct 11 08:49:32 compute-0 determined_khorana[293810]:         }
Oct 11 08:49:32 compute-0 determined_khorana[293810]:     ],
Oct 11 08:49:32 compute-0 determined_khorana[293810]:     "1": [
Oct 11 08:49:32 compute-0 determined_khorana[293810]:         {
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "devices": [
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "/dev/loop4"
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             ],
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "lv_name": "ceph_lv1",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "lv_size": "21470642176",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "name": "ceph_lv1",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "tags": {
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.cluster_name": "ceph",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.crush_device_class": "",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.encrypted": "0",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.osd_id": "1",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.type": "block",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.vdo": "0"
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             },
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "type": "block",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "vg_name": "ceph_vg1"
Oct 11 08:49:32 compute-0 determined_khorana[293810]:         }
Oct 11 08:49:32 compute-0 determined_khorana[293810]:     ],
Oct 11 08:49:32 compute-0 determined_khorana[293810]:     "2": [
Oct 11 08:49:32 compute-0 determined_khorana[293810]:         {
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "devices": [
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "/dev/loop5"
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             ],
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "lv_name": "ceph_lv2",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "lv_size": "21470642176",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "name": "ceph_lv2",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "tags": {
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.cluster_name": "ceph",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.crush_device_class": "",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.encrypted": "0",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.osd_id": "2",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.type": "block",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:                 "ceph.vdo": "0"
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             },
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "type": "block",
Oct 11 08:49:32 compute-0 determined_khorana[293810]:             "vg_name": "ceph_vg2"
Oct 11 08:49:32 compute-0 determined_khorana[293810]:         }
Oct 11 08:49:32 compute-0 determined_khorana[293810]:     ]
Oct 11 08:49:32 compute-0 determined_khorana[293810]: }
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.228 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[09679949-ebc1-4523-9fbf-3ff21bbbd838]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:32 compute-0 systemd[1]: libpod-89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce.scope: Deactivated successfully.
Oct 11 08:49:32 compute-0 conmon[293810]: conmon 89ef72c3cfd863bc0d3a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce.scope/container/memory.events
Oct 11 08:49:32 compute-0 podman[293773]: 2025-10-11 08:49:32.237132645 +0000 UTC m=+1.148465049 container died 89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 08:49:32 compute-0 NetworkManager[44960]: <info>  [1760172572.2548] manager: (tap9bac3530-90): new Veth device (/org/freedesktop/NetworkManager/Devices/74)
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.256 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[12ed2ac5-e482-44a6-a1bb-a58b14d3a289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9333c45619715a8386b9cd8bb4165df4704e20f1361c3eda6655b1cbb3738cb-merged.mount: Deactivated successfully.
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.301 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[914a0950-d62d-4a8b-b5a6-b23de715d13c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.305 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b0265ff8-55fc-441b-af35-f31bd6c8ffba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:32 compute-0 podman[293773]: 2025-10-11 08:49:32.332034848 +0000 UTC m=+1.243367212 container remove 89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_khorana, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:49:32 compute-0 NetworkManager[44960]: <info>  [1760172572.3405] device (tap9bac3530-90): carrier: link connected
Oct 11 08:49:32 compute-0 systemd[1]: libpod-conmon-89ef72c3cfd863bc0d3a9abc4051cd2428fd2e4811e70387d71f39080c81c7ce.scope: Deactivated successfully.
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.349 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b927559a-011d-4200-8a78-851d95df6127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:32 compute-0 sudo[293606]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.399 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[54189ea8-b023-4fb9-8952-797dcc7006f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441703, 'reachable_time': 33461, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293916, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.418 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c385e6-e907-469d-820a-5ee67eb80d2d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:351f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 441703, 'tstamp': 441703}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293923, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.442 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3d1d9d56-fb05-47e0-8e81-df818fc5813b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441703, 'reachable_time': 33461, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293938, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:32 compute-0 sudo[293917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:49:32 compute-0 sudo[293917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:32 compute-0 sudo[293917]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.505 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f08a9003-1e90-4576-90ab-59181f0c7462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:32 compute-0 sudo[293946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:49:32 compute-0 sudo[293946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:32 compute-0 sudo[293946]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.599 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[17349e7c-add4-4717-a7a5-78ff69c1133c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.601 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.601 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.602 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bac3530-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:32 compute-0 nova_compute[260935]: 2025-10-11 08:49:32.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:32 compute-0 NetworkManager[44960]: <info>  [1760172572.6053] manager: (tap9bac3530-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Oct 11 08:49:32 compute-0 kernel: tap9bac3530-90: entered promiscuous mode
Oct 11 08:49:32 compute-0 nova_compute[260935]: 2025-10-11 08:49:32.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.610 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bac3530-90, col_values=(('external_ids', {'iface-id': 'e5becf0d-48c0-404b-9cba-07077454d085'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:32 compute-0 ovn_controller[152945]: 2025-10-11T08:49:32Z|00129|binding|INFO|Releasing lport e5becf0d-48c0-404b-9cba-07077454d085 from this chassis (sb_readonly=0)
Oct 11 08:49:32 compute-0 nova_compute[260935]: 2025-10-11 08:49:32.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:32 compute-0 nova_compute[260935]: 2025-10-11 08:49:32.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.636 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.637 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[074459c9-b272-4e4d-a395-5d5ee3957132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.639 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-9bac3530-993f-420e-8692-0b14a331d756
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 9bac3530-993f-420e-8692-0b14a331d756
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:49:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:32.640 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'env', 'PROCESS_TAG=haproxy-9bac3530-993f-420e-8692-0b14a331d756', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bac3530-993f-420e-8692-0b14a331d756.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:49:32 compute-0 sudo[293975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:49:32 compute-0 sudo[293975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:32 compute-0 sudo[293975]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:32 compute-0 sudo[294003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:49:32 compute-0 sudo[294003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:32 compute-0 nova_compute[260935]: 2025-10-11 08:49:32.837 2 DEBUG nova.compute.manager [req-69e99179-e28f-463f-b67a-586832f80090 req-68e51a53-8ece-464c-b4b7-37d14f75839f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received event network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:32 compute-0 nova_compute[260935]: 2025-10-11 08:49:32.838 2 DEBUG oslo_concurrency.lockutils [req-69e99179-e28f-463f-b67a-586832f80090 req-68e51a53-8ece-464c-b4b7-37d14f75839f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:32 compute-0 nova_compute[260935]: 2025-10-11 08:49:32.839 2 DEBUG oslo_concurrency.lockutils [req-69e99179-e28f-463f-b67a-586832f80090 req-68e51a53-8ece-464c-b4b7-37d14f75839f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:32 compute-0 nova_compute[260935]: 2025-10-11 08:49:32.840 2 DEBUG oslo_concurrency.lockutils [req-69e99179-e28f-463f-b67a-586832f80090 req-68e51a53-8ece-464c-b4b7-37d14f75839f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:32 compute-0 nova_compute[260935]: 2025-10-11 08:49:32.841 2 DEBUG nova.compute.manager [req-69e99179-e28f-463f-b67a-586832f80090 req-68e51a53-8ece-464c-b4b7-37d14f75839f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] No waiting events found dispatching network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:32 compute-0 nova_compute[260935]: 2025-10-11 08:49:32.841 2 WARNING nova.compute.manager [req-69e99179-e28f-463f-b67a-586832f80090 req-68e51a53-8ece-464c-b4b7-37d14f75839f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received unexpected event network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 for instance with vm_state active and task_state None.
Oct 11 08:49:33 compute-0 podman[294115]: 2025-10-11 08:49:33.166449826 +0000 UTC m=+0.057145267 container create 012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 08:49:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:49:33 compute-0 systemd[1]: Started libpod-conmon-012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0.scope.
Oct 11 08:49:33 compute-0 ceph-mon[74313]: pgmap v1312: 321 pgs: 321 active+clean; 293 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 552 KiB/s rd, 6.0 MiB/s wr, 158 op/s
Oct 11 08:49:33 compute-0 podman[294115]: 2025-10-11 08:49:33.134882063 +0000 UTC m=+0.025577534 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:49:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c0985dbac57242fc08a988754091dfee60c3f8b611a4151de8d991e5bdb92e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:49:33 compute-0 podman[294143]: 2025-10-11 08:49:33.266572867 +0000 UTC m=+0.069995270 container create 35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shamir, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 08:49:33 compute-0 podman[294115]: 2025-10-11 08:49:33.285985906 +0000 UTC m=+0.176681357 container init 012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:49:33 compute-0 podman[294115]: 2025-10-11 08:49:33.299510579 +0000 UTC m=+0.190206020 container start 012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 08:49:33 compute-0 systemd[1]: Started libpod-conmon-35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961.scope.
Oct 11 08:49:33 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[294157]: [NOTICE]   (294165) : New worker (294170) forked
Oct 11 08:49:33 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[294157]: [NOTICE]   (294165) : Loading success.
Oct 11 08:49:33 compute-0 podman[294143]: 2025-10-11 08:49:33.242418604 +0000 UTC m=+0.045841047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:49:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:49:33 compute-0 podman[294143]: 2025-10-11 08:49:33.366922255 +0000 UTC m=+0.170344668 container init 35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 08:49:33 compute-0 podman[294143]: 2025-10-11 08:49:33.375954491 +0000 UTC m=+0.179376874 container start 35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shamir, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 08:49:33 compute-0 podman[294143]: 2025-10-11 08:49:33.380241332 +0000 UTC m=+0.183663745 container attach 35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:49:33 compute-0 crazy_shamir[294166]: 167 167
Oct 11 08:49:33 compute-0 systemd[1]: libpod-35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961.scope: Deactivated successfully.
Oct 11 08:49:33 compute-0 podman[294143]: 2025-10-11 08:49:33.382559108 +0000 UTC m=+0.185981501 container died 35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 08:49:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-636b7f7837065960dbea48bce148288bea4f19269eaa8118f0f55ade0dfeb25f-merged.mount: Deactivated successfully.
Oct 11 08:49:33 compute-0 podman[294143]: 2025-10-11 08:49:33.422021684 +0000 UTC m=+0.225444067 container remove 35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shamir, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:49:33 compute-0 systemd[1]: libpod-conmon-35e08efaf44587199cef7bd0a0d9b02a5be3546b3ac4fd0e3ec083186fad2961.scope: Deactivated successfully.
Oct 11 08:49:33 compute-0 nova_compute[260935]: 2025-10-11 08:49:33.505 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172573.5034428, 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:33 compute-0 nova_compute[260935]: 2025-10-11 08:49:33.505 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] VM Started (Lifecycle Event)
Oct 11 08:49:33 compute-0 nova_compute[260935]: 2025-10-11 08:49:33.548 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:33 compute-0 nova_compute[260935]: 2025-10-11 08:49:33.554 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172573.5035553, 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:33 compute-0 nova_compute[260935]: 2025-10-11 08:49:33.554 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] VM Paused (Lifecycle Event)
Oct 11 08:49:33 compute-0 nova_compute[260935]: 2025-10-11 08:49:33.587 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:33 compute-0 nova_compute[260935]: 2025-10-11 08:49:33.593 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:49:33 compute-0 podman[294201]: 2025-10-11 08:49:33.651109052 +0000 UTC m=+0.059640117 container create 629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:49:33 compute-0 systemd[1]: Started libpod-conmon-629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41.scope.
Oct 11 08:49:33 compute-0 podman[294201]: 2025-10-11 08:49:33.622803952 +0000 UTC m=+0.031335017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:49:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785c1cc932ca67e56dc37d18e7d2baf8c6e1c02bb6565f525441bfd5e104abf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785c1cc932ca67e56dc37d18e7d2baf8c6e1c02bb6565f525441bfd5e104abf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785c1cc932ca67e56dc37d18e7d2baf8c6e1c02bb6565f525441bfd5e104abf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:49:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785c1cc932ca67e56dc37d18e7d2baf8c6e1c02bb6565f525441bfd5e104abf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:49:33 compute-0 nova_compute[260935]: 2025-10-11 08:49:33.750 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:49:33 compute-0 podman[294201]: 2025-10-11 08:49:33.759737814 +0000 UTC m=+0.168268849 container init 629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:49:33 compute-0 podman[294201]: 2025-10-11 08:49:33.77126264 +0000 UTC m=+0.179793665 container start 629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:49:33 compute-0 podman[294201]: 2025-10-11 08:49:33.77477476 +0000 UTC m=+0.183305885 container attach 629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:49:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 293 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 228 op/s
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.574 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.575 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.601 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "90e56ca7-b26f-4f83-908d-75204ecd2533" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.601 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.602 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.635 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.716 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.717 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.735 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.737 2 INFO nova.compute.claims [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.740 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]: {
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:         "osd_id": 2,
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:         "type": "bluestore"
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:     },
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:         "osd_id": 0,
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:         "type": "bluestore"
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:     },
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:         "osd_id": 1,
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:         "type": "bluestore"
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]:     }
Oct 11 08:49:34 compute-0 interesting_dubinsky[294218]: }
Oct 11 08:49:34 compute-0 podman[294243]: 2025-10-11 08:49:34.882951699 +0000 UTC m=+0.169078362 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS)
Oct 11 08:49:34 compute-0 podman[294244]: 2025-10-11 08:49:34.900630509 +0000 UTC m=+0.187029190 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 08:49:34 compute-0 systemd[1]: libpod-629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41.scope: Deactivated successfully.
Oct 11 08:49:34 compute-0 podman[294201]: 2025-10-11 08:49:34.916772766 +0000 UTC m=+1.325303861 container died 629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 11 08:49:34 compute-0 systemd[1]: libpod-629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41.scope: Consumed 1.112s CPU time.
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.926 2 DEBUG nova.compute.manager [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received event network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.926 2 DEBUG oslo_concurrency.lockutils [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.926 2 DEBUG oslo_concurrency.lockutils [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.927 2 DEBUG oslo_concurrency.lockutils [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.927 2 DEBUG nova.compute.manager [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Processing event network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.927 2 DEBUG nova.compute.manager [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received event network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.927 2 DEBUG oslo_concurrency.lockutils [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.927 2 DEBUG oslo_concurrency.lockutils [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.928 2 DEBUG oslo_concurrency.lockutils [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.928 2 DEBUG nova.compute.manager [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] No waiting events found dispatching network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.928 2 WARNING nova.compute.manager [req-fa65cc6e-1ade-4705-bb7c-2c509904f490 req-c76a5438-0b15-4d23-a946-84558bc7e9c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received unexpected event network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd for instance with vm_state building and task_state spawning.
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.929 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.938 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172574.9380574, 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.938 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] VM Resumed (Lifecycle Event)
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.946 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.950 2 INFO nova.virt.libvirt.driver [-] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Instance spawned successfully.
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.950 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:49:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-785c1cc932ca67e56dc37d18e7d2baf8c6e1c02bb6565f525441bfd5e104abf3-merged.mount: Deactivated successfully.
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.967 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.978 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.985 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.986 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.986 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.987 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.987 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:34 compute-0 nova_compute[260935]: 2025-10-11 08:49:34.987 2 DEBUG nova.virt.libvirt.driver [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:34 compute-0 podman[294201]: 2025-10-11 08:49:34.989713938 +0000 UTC m=+1.398244973 container remove 629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 08:49:35 compute-0 systemd[1]: libpod-conmon-629b8629ca07c46b8a756312a34b5d4bfb4f34cecf118d0a428d128537171e41.scope: Deactivated successfully.
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.013 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:35 compute-0 sudo[294003]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.039 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:49:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:49:35 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:49:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:49:35 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:49:35 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 2cfdd73d-4dba-47a8-8909-e5af87e909ff does not exist
Oct 11 08:49:35 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev bc1c4287-4c55-43ed-8c96-a47c215f1dbc does not exist
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.089 2 INFO nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Took 10.37 seconds to spawn the instance on the hypervisor.
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.089 2 DEBUG nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:35 compute-0 sudo[294304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:49:35 compute-0 sudo[294304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:35 compute-0 sudo[294304]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.184 2 INFO nova.compute.manager [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Took 12.35 seconds to build instance.
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.214 2 DEBUG oslo_concurrency.lockutils [None req-24442349-a5af-424c-880c-1e8729b35923 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:35 compute-0 sudo[294348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:49:35 compute-0 sudo[294348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:49:35 compute-0 ceph-mon[74313]: pgmap v1313: 321 pgs: 321 active+clean; 293 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 228 op/s
Oct 11 08:49:35 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:49:35 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:49:35 compute-0 sudo[294348]: pam_unix(sudo:session): session closed for user root
Oct 11 08:49:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:49:35 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3102901362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.430 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.435 2 DEBUG nova.compute.provider_tree [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.451 2 DEBUG nova.scheduler.client.report [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.493 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.494 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.498 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.505 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.506 2 INFO nova.compute.claims [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.596 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.596 2 DEBUG nova.network.neutron [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.742 2 INFO nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.779 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:49:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 293 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 126 op/s
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.838 2 DEBUG nova.policy [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:49:35 compute-0 nova_compute[260935]: 2025-10-11 08:49:35.947 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.049 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.051 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.052 2 INFO nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Creating image(s)
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.085 2 DEBUG nova.storage.rbd_utils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 14711e39-46ca-4856-9c19-fa51b869064d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.126 2 DEBUG nova.storage.rbd_utils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 14711e39-46ca-4856-9c19-fa51b869064d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.153 2 DEBUG nova.storage.rbd_utils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 14711e39-46ca-4856-9c19-fa51b869064d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.158 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3102901362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.248 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.249 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.250 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.251 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.277 2 DEBUG nova.storage.rbd_utils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 14711e39-46ca-4856-9c19-fa51b869064d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.281 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 14711e39-46ca-4856-9c19-fa51b869064d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:49:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192981870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.442 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.446 2 INFO nova.compute.manager [None req-2be22760-dc02-4c6e-8405-519e4b89fe8c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Pausing
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.447 2 DEBUG nova.objects.instance [None req-2be22760-dc02-4c6e-8405-519e4b89fe8c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'flavor' on Instance uuid 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.460 2 DEBUG nova.compute.provider_tree [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.568 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 14711e39-46ca-4856-9c19-fa51b869064d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.606 2 DEBUG nova.scheduler.client.report [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.649 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172576.61852, 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.650 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] VM Paused (Lifecycle Event)
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.654 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.654 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.657 2 DEBUG nova.compute.manager [None req-2be22760-dc02-4c6e-8405-519e4b89fe8c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.663 2 DEBUG nova.storage.rbd_utils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] resizing rbd image 14711e39-46ca-4856-9c19-fa51b869064d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.705 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.717 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.718 2 DEBUG nova.network.neutron [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.729 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.741 2 INFO nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.794 2 DEBUG nova.objects.instance [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'migration_context' on Instance uuid 14711e39-46ca-4856-9c19-fa51b869064d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:36 compute-0 nova_compute[260935]: 2025-10-11 08:49:36.967 2 DEBUG nova.policy [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a51c2680b31e40b1908642ef8795c6f0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '39d3043a7835403392c659fbb2fe0b22', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.022 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.023 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Ensure instance console log exists: /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.023 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.024 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.024 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.074 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:49:37 compute-0 ceph-mon[74313]: pgmap v1314: 321 pgs: 321 active+clean; 293 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 126 op/s
Oct 11 08:49:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1192981870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:49:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3088976799' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:49:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:49:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3088976799' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.530 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.532 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.532 2 INFO nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Creating image(s)
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.570 2 DEBUG nova.storage.rbd_utils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 90e56ca7-b26f-4f83-908d-75204ecd2533_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.609 2 DEBUG nova.storage.rbd_utils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 90e56ca7-b26f-4f83-908d-75204ecd2533_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.678 2 DEBUG nova.storage.rbd_utils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 90e56ca7-b26f-4f83-908d-75204ecd2533_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.683 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.779 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.781 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.782 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.782 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.815 2 DEBUG nova.storage.rbd_utils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 90e56ca7-b26f-4f83-908d-75204ecd2533_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:37 compute-0 nova_compute[260935]: 2025-10-11 08:49:37.820 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 90e56ca7-b26f-4f83-908d-75204ecd2533_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 339 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.7 MiB/s wr, 222 op/s
Oct 11 08:49:38 compute-0 nova_compute[260935]: 2025-10-11 08:49:38.104 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 90e56ca7-b26f-4f83-908d-75204ecd2533_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:38 compute-0 nova_compute[260935]: 2025-10-11 08:49:38.188 2 DEBUG nova.storage.rbd_utils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] resizing rbd image 90e56ca7-b26f-4f83-908d-75204ecd2533_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:49:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:49:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3088976799' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:49:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3088976799' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:49:38 compute-0 nova_compute[260935]: 2025-10-11 08:49:38.369 2 DEBUG nova.objects.instance [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'migration_context' on Instance uuid 90e56ca7-b26f-4f83-908d-75204ecd2533 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:38 compute-0 nova_compute[260935]: 2025-10-11 08:49:38.567 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:49:38 compute-0 nova_compute[260935]: 2025-10-11 08:49:38.568 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Ensure instance console log exists: /var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:49:38 compute-0 nova_compute[260935]: 2025-10-11 08:49:38.569 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:38 compute-0 nova_compute[260935]: 2025-10-11 08:49:38.570 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:38 compute-0 nova_compute[260935]: 2025-10-11 08:49:38.570 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:38 compute-0 ovn_controller[152945]: 2025-10-11T08:49:38Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e8:51:53 10.100.0.5
Oct 11 08:49:38 compute-0 ovn_controller[152945]: 2025-10-11T08:49:38Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:51:53 10.100.0.5
Oct 11 08:49:39 compute-0 ceph-mon[74313]: pgmap v1315: 321 pgs: 321 active+clean; 339 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.7 MiB/s wr, 222 op/s
Oct 11 08:49:39 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 11 08:49:39 compute-0 nova_compute[260935]: 2025-10-11 08:49:39.693 2 DEBUG nova.network.neutron [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Successfully created port: ac842ebf-4fca-4930-a4d1-3e8a6760d441 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:49:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 339 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Oct 11 08:49:40 compute-0 nova_compute[260935]: 2025-10-11 08:49:40.042 2 DEBUG nova.network.neutron [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Successfully created port: 302c88cf-6eba-4200-adfe-6c23d5e6078d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:49:40 compute-0 nova_compute[260935]: 2025-10-11 08:49:40.914 2 DEBUG nova.network.neutron [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Successfully updated port: ac842ebf-4fca-4930-a4d1-3e8a6760d441 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:49:40 compute-0 nova_compute[260935]: 2025-10-11 08:49:40.990 2 DEBUG nova.compute.manager [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.039 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.039 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.040 2 DEBUG nova.network.neutron [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.195 2 INFO nova.compute.manager [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] instance snapshotting
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.196 2 WARNING nova.compute.manager [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] trying to snapshot a non-running instance: (state: 3 expected: 1)
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.253 2 DEBUG nova.compute.manager [req-94a5d34e-36f6-405b-bf51-64a92a12ee93 req-79d18cb9-fe00-4ed0-aa09-4d65e0ed50e7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.254 2 DEBUG nova.compute.manager [req-94a5d34e-36f6-405b-bf51-64a92a12ee93 req-79d18cb9-fe00-4ed0-aa09-4d65e0ed50e7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing instance network info cache due to event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.255 2 DEBUG oslo_concurrency.lockutils [req-94a5d34e-36f6-405b-bf51-64a92a12ee93 req-79d18cb9-fe00-4ed0-aa09-4d65e0ed50e7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:41 compute-0 ceph-mon[74313]: pgmap v1316: 321 pgs: 321 active+clean; 339 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.377 2 DEBUG nova.network.neutron [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.402 2 INFO nova.virt.libvirt.driver [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Beginning live snapshot process
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.591 2 DEBUG nova.network.neutron [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Successfully updated port: 302c88cf-6eba-4200-adfe-6c23d5e6078d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.610 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.611 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquired lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.611 2 DEBUG nova.network.neutron [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.793 2 DEBUG nova.virt.libvirt.imagebackend [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.800 2 DEBUG nova.compute.manager [req-cb98ccbb-a461-4db2-ba4c-735969cdecac req-d5eda931-6fd2-4c2d-9a96-8a475425b333 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Received event network-changed-302c88cf-6eba-4200-adfe-6c23d5e6078d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.801 2 DEBUG nova.compute.manager [req-cb98ccbb-a461-4db2-ba4c-735969cdecac req-d5eda931-6fd2-4c2d-9a96-8a475425b333 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Refreshing instance network info cache due to event network-changed-302c88cf-6eba-4200-adfe-6c23d5e6078d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.802 2 DEBUG oslo_concurrency.lockutils [req-cb98ccbb-a461-4db2-ba4c-735969cdecac req-d5eda931-6fd2-4c2d-9a96-8a475425b333 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 339 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Oct 11 08:49:41 compute-0 nova_compute[260935]: 2025-10-11 08:49:41.960 2 DEBUG nova.network.neutron [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:49:42 compute-0 nova_compute[260935]: 2025-10-11 08:49:42.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:42 compute-0 nova_compute[260935]: 2025-10-11 08:49:42.325 2 DEBUG nova.storage.rbd_utils [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(0344ed4086ef47a5a0808d6c4953c045) on rbd image(8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.202 2 DEBUG nova.network.neutron [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updating instance_info_cache with network_info: [{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.232 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.233 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Instance network_info: |[{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.239 2 DEBUG oslo_concurrency.lockutils [req-94a5d34e-36f6-405b-bf51-64a92a12ee93 req-79d18cb9-fe00-4ed0-aa09-4d65e0ed50e7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.240 2 DEBUG nova.network.neutron [req-94a5d34e-36f6-405b-bf51-64a92a12ee93 req-79d18cb9-fe00-4ed0-aa09-4d65e0ed50e7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing network info cache for port ac842ebf-4fca-4930-a4d1-3e8a6760d441 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.246 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Start _get_guest_xml network_info=[{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.253 2 WARNING nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.261 2 DEBUG nova.virt.libvirt.host [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.262 2 DEBUG nova.virt.libvirt.host [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.274 2 DEBUG nova.virt.libvirt.host [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.275 2 DEBUG nova.virt.libvirt.host [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.276 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.277 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.278 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.278 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.279 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:49:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.279 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.279 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.280 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.280 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.281 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.281 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.282 2 DEBUG nova.virt.hardware [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:49:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.286 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:43 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Oct 11 08:49:43 compute-0 ceph-mon[74313]: pgmap v1317: 321 pgs: 321 active+clean; 339 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.387 2 DEBUG nova.storage.rbd_utils [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] cloning vms/8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk@0344ed4086ef47a5a0808d6c4953c045 to images/3a225b4b-c750-4e15-aa61-55d1b4ccb7d5 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.530 2 DEBUG nova.storage.rbd_utils [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] flattening images/3a225b4b-c750-4e15-aa61-55d1b4ccb7d5 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.656 2 DEBUG nova.network.neutron [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Updating instance_info_cache with network_info: [{"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.677 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Releasing lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.678 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Instance network_info: |[{"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.680 2 DEBUG oslo_concurrency.lockutils [req-cb98ccbb-a461-4db2-ba4c-735969cdecac req-d5eda931-6fd2-4c2d-9a96-8a475425b333 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.680 2 DEBUG nova.network.neutron [req-cb98ccbb-a461-4db2-ba4c-735969cdecac req-d5eda931-6fd2-4c2d-9a96-8a475425b333 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Refreshing network info cache for port 302c88cf-6eba-4200-adfe-6c23d5e6078d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.686 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Start _get_guest_xml network_info=[{"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.691 2 WARNING nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.697 2 DEBUG nova.virt.libvirt.host [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.698 2 DEBUG nova.virt.libvirt.host [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.705 2 DEBUG nova.virt.libvirt.host [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.706 2 DEBUG nova.virt.libvirt.host [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.707 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.707 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.708 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.708 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.709 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.709 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.709 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.710 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.710 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.710 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.711 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.711 2 DEBUG nova.virt.hardware [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.715 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.815 2 DEBUG nova.storage.rbd_utils [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] removing snapshot(0344ed4086ef47a5a0808d6c4953c045) on rbd image(8e4f771a-b87a-40f9-a12e-b5b4583b96f7_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:49:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:49:43 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3267938583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.838 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 418 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.8 MiB/s wr, 227 op/s
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.860 2 DEBUG nova.storage.rbd_utils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 14711e39-46ca-4856-9c19-fa51b869064d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:43 compute-0 nova_compute[260935]: 2025-10-11 08:49:43.864 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:49:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2783032966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.166 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.197 2 DEBUG nova.storage.rbd_utils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 90e56ca7-b26f-4f83-908d-75204ecd2533_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.202 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:49:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1451476487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.280 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.284 2 DEBUG nova.virt.libvirt.vif [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-579139719',display_name='tempest-tempest.common.compute-instance-579139719',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-579139719',id=26,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-9v1qz90y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=14711e39-46ca-4856-9c19-fa51b869064d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.285 2 DEBUG nova.network.os_vif_util [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.287 2 DEBUG nova.network.os_vif_util [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:95:17,bridge_name='br-int',has_traffic_filtering=True,id=ac842ebf-4fca-4930-a4d1-3e8a6760d441,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac842ebf-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.289 2 DEBUG nova.objects.instance [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_devices' on Instance uuid 14711e39-46ca-4856-9c19-fa51b869064d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Oct 11 08:49:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Oct 11 08:49:44 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Oct 11 08:49:44 compute-0 ceph-mon[74313]: osdmap e164: 3 total, 3 up, 3 in
Oct 11 08:49:44 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3267938583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:44 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2783032966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:44 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1451476487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.310 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <uuid>14711e39-46ca-4856-9c19-fa51b869064d</uuid>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <name>instance-0000001a</name>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <nova:name>tempest-tempest.common.compute-instance-579139719</nova:name>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:49:43</nova:creationTime>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <nova:port uuid="ac842ebf-4fca-4930-a4d1-3e8a6760d441">
Oct 11 08:49:44 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <system>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <entry name="serial">14711e39-46ca-4856-9c19-fa51b869064d</entry>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <entry name="uuid">14711e39-46ca-4856-9c19-fa51b869064d</entry>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </system>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <os>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   </os>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <features>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   </features>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/14711e39-46ca-4856-9c19-fa51b869064d_disk">
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/14711e39-46ca-4856-9c19-fa51b869064d_disk.config">
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:56:95:17"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <target dev="tapac842ebf-4f"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/console.log" append="off"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <video>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </video>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:49:44 compute-0 nova_compute[260935]: </domain>
Oct 11 08:49:44 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.312 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Preparing to wait for external event network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.312 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.313 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.313 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.315 2 DEBUG nova.virt.libvirt.vif [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-579139719',display_name='tempest-tempest.common.compute-instance-579139719',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-579139719',id=26,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-9v1qz90y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=14711e39-46ca-4856-9c19-fa51b869064d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.315 2 DEBUG nova.network.os_vif_util [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.317 2 DEBUG nova.network.os_vif_util [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:95:17,bridge_name='br-int',has_traffic_filtering=True,id=ac842ebf-4fca-4930-a4d1-3e8a6760d441,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac842ebf-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.317 2 DEBUG os_vif [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:95:17,bridge_name='br-int',has_traffic_filtering=True,id=ac842ebf-4fca-4930-a4d1-3e8a6760d441,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac842ebf-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.319 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.320 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.326 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac842ebf-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.329 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac842ebf-4f, col_values=(('external_ids', {'iface-id': 'ac842ebf-4fca-4930-a4d1-3e8a6760d441', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:95:17', 'vm-uuid': '14711e39-46ca-4856-9c19-fa51b869064d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:44 compute-0 NetworkManager[44960]: <info>  [1760172584.3326] manager: (tapac842ebf-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.343 2 INFO os_vif [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:95:17,bridge_name='br-int',has_traffic_filtering=True,id=ac842ebf-4fca-4930-a4d1-3e8a6760d441,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac842ebf-4f')
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.373 2 DEBUG nova.storage.rbd_utils [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(snap) on rbd image(3a225b4b-c750-4e15-aa61-55d1b4ccb7d5) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.456 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.456 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.456 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:56:95:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.457 2 INFO nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Using config drive
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.490 2 DEBUG nova.storage.rbd_utils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 14711e39-46ca-4856-9c19-fa51b869064d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:49:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2640104767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.669 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.671 2 DEBUG nova.virt.libvirt.vif [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1529647290',display_name='tempest-ServersAdminTestJSON-server-1529647290',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1529647290',id=27,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-j763nkmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:37Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=90e56ca7-b26f-4f83-908d-75204ecd2533,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.671 2 DEBUG nova.network.os_vif_util [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.672 2 DEBUG nova.network.os_vif_util [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:ed:97,bridge_name='br-int',has_traffic_filtering=True,id=302c88cf-6eba-4200-adfe-6c23d5e6078d,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap302c88cf-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.673 2 DEBUG nova.objects.instance [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'pci_devices' on Instance uuid 90e56ca7-b26f-4f83-908d-75204ecd2533 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.834 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <uuid>90e56ca7-b26f-4f83-908d-75204ecd2533</uuid>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <name>instance-0000001b</name>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersAdminTestJSON-server-1529647290</nova:name>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:49:43</nova:creationTime>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <nova:user uuid="a51c2680b31e40b1908642ef8795c6f0">tempest-ServersAdminTestJSON-1756812845-project-member</nova:user>
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <nova:project uuid="39d3043a7835403392c659fbb2fe0b22">tempest-ServersAdminTestJSON-1756812845</nova:project>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <nova:port uuid="302c88cf-6eba-4200-adfe-6c23d5e6078d">
Oct 11 08:49:44 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <system>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <entry name="serial">90e56ca7-b26f-4f83-908d-75204ecd2533</entry>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <entry name="uuid">90e56ca7-b26f-4f83-908d-75204ecd2533</entry>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </system>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <os>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   </os>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <features>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   </features>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/90e56ca7-b26f-4f83-908d-75204ecd2533_disk">
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/90e56ca7-b26f-4f83-908d-75204ecd2533_disk.config">
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:49:44 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:52:ed:97"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <target dev="tap302c88cf-6e"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533/console.log" append="off"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <video>
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </video>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:49:44 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:49:44 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:49:44 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:49:44 compute-0 nova_compute[260935]: </domain>
Oct 11 08:49:44 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.835 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Preparing to wait for external event network-vif-plugged-302c88cf-6eba-4200-adfe-6c23d5e6078d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.835 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.835 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.836 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.837 2 DEBUG nova.virt.libvirt.vif [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1529647290',display_name='tempest-ServersAdminTestJSON-server-1529647290',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1529647290',id=27,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-j763nkmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:37Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=90e56ca7-b26f-4f83-908d-75204ecd2533,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.837 2 DEBUG nova.network.os_vif_util [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.838 2 DEBUG nova.network.os_vif_util [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:ed:97,bridge_name='br-int',has_traffic_filtering=True,id=302c88cf-6eba-4200-adfe-6c23d5e6078d,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap302c88cf-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.839 2 DEBUG os_vif [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:ed:97,bridge_name='br-int',has_traffic_filtering=True,id=302c88cf-6eba-4200-adfe-6c23d5e6078d,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap302c88cf-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.844 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.845 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.852 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap302c88cf-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.853 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap302c88cf-6e, col_values=(('external_ids', {'iface-id': '302c88cf-6eba-4200-adfe-6c23d5e6078d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:ed:97', 'vm-uuid': '90e56ca7-b26f-4f83-908d-75204ecd2533'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:44 compute-0 NetworkManager[44960]: <info>  [1760172584.8567] manager: (tap302c88cf-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:44 compute-0 nova_compute[260935]: 2025-10-11 08:49:44.868 2 INFO os_vif [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:ed:97,bridge_name='br-int',has_traffic_filtering=True,id=302c88cf-6eba-4200-adfe-6c23d5e6078d,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap302c88cf-6e')
Oct 11 08:49:45 compute-0 nova_compute[260935]: 2025-10-11 08:49:45.065 2 INFO nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Creating config drive at /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/disk.config
Oct 11 08:49:45 compute-0 nova_compute[260935]: 2025-10-11 08:49:45.076 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd5cl00kd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:45 compute-0 nova_compute[260935]: 2025-10-11 08:49:45.119 2 DEBUG nova.network.neutron [req-94a5d34e-36f6-405b-bf51-64a92a12ee93 req-79d18cb9-fe00-4ed0-aa09-4d65e0ed50e7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updated VIF entry in instance network info cache for port ac842ebf-4fca-4930-a4d1-3e8a6760d441. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:49:45 compute-0 nova_compute[260935]: 2025-10-11 08:49:45.120 2 DEBUG nova.network.neutron [req-94a5d34e-36f6-405b-bf51-64a92a12ee93 req-79d18cb9-fe00-4ed0-aa09-4d65e0ed50e7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updating instance_info_cache with network_info: [{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:45 compute-0 nova_compute[260935]: 2025-10-11 08:49:45.235 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd5cl00kd" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:45 compute-0 nova_compute[260935]: 2025-10-11 08:49:45.271 2 DEBUG nova.storage.rbd_utils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image 14711e39-46ca-4856-9c19-fa51b869064d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:45 compute-0 nova_compute[260935]: 2025-10-11 08:49:45.276 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/disk.config 14711e39-46ca-4856-9c19-fa51b869064d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Oct 11 08:49:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Oct 11 08:49:45 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Oct 11 08:49:45 compute-0 ceph-mon[74313]: pgmap v1319: 321 pgs: 321 active+clean; 418 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.8 MiB/s wr, 227 op/s
Oct 11 08:49:45 compute-0 ceph-mon[74313]: osdmap e165: 3 total, 3 up, 3 in
Oct 11 08:49:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2640104767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:45 compute-0 nova_compute[260935]: 2025-10-11 08:49:45.479 2 DEBUG oslo_concurrency.processutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/disk.config 14711e39-46ca-4856-9c19-fa51b869064d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:45 compute-0 nova_compute[260935]: 2025-10-11 08:49:45.481 2 INFO nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Deleting local config drive /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/disk.config because it was imported into RBD.
Oct 11 08:49:45 compute-0 NetworkManager[44960]: <info>  [1760172585.5548] manager: (tapac842ebf-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Oct 11 08:49:45 compute-0 kernel: tapac842ebf-4f: entered promiscuous mode
Oct 11 08:49:45 compute-0 ovn_controller[152945]: 2025-10-11T08:49:45Z|00130|binding|INFO|Claiming lport ac842ebf-4fca-4930-a4d1-3e8a6760d441 for this chassis.
Oct 11 08:49:45 compute-0 ovn_controller[152945]: 2025-10-11T08:49:45Z|00131|binding|INFO|ac842ebf-4fca-4930-a4d1-3e8a6760d441: Claiming fa:16:3e:56:95:17 10.100.0.14
Oct 11 08:49:45 compute-0 nova_compute[260935]: 2025-10-11 08:49:45.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:45 compute-0 systemd-machined[215705]: New machine qemu-28-instance-0000001a.
Oct 11 08:49:45 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-0000001a.
Oct 11 08:49:45 compute-0 systemd-udevd[295071]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:49:45 compute-0 NetworkManager[44960]: <info>  [1760172585.6940] device (tapac842ebf-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:49:45 compute-0 NetworkManager[44960]: <info>  [1760172585.6946] device (tapac842ebf-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:49:45 compute-0 nova_compute[260935]: 2025-10-11 08:49:45.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:45 compute-0 ovn_controller[152945]: 2025-10-11T08:49:45Z|00132|binding|INFO|Setting lport ac842ebf-4fca-4930-a4d1-3e8a6760d441 ovn-installed in OVS
Oct 11 08:49:45 compute-0 nova_compute[260935]: 2025-10-11 08:49:45.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 418 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 689 KiB/s rd, 7.8 MiB/s wr, 187 op/s
Oct 11 08:49:46 compute-0 ovn_controller[152945]: 2025-10-11T08:49:46Z|00133|binding|INFO|Setting lport ac842ebf-4fca-4930-a4d1-3e8a6760d441 up in Southbound
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.289 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:95:17 10.100.0.14'], port_security=['fa:16:3e:56:95:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '14711e39-46ca-4856-9c19-fa51b869064d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1442d15-c284-4756-a249-5a3bda09cf56', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ac842ebf-4fca-4930-a4d1-3e8a6760d441) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.293 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ac842ebf-4fca-4930-a4d1-3e8a6760d441 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.298 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:49:46 compute-0 nova_compute[260935]: 2025-10-11 08:49:46.316 2 DEBUG oslo_concurrency.lockutils [req-94a5d34e-36f6-405b-bf51-64a92a12ee93 req-79d18cb9-fe00-4ed0-aa09-4d65e0ed50e7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.319 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1525060a-9bb6-46aa-8b71-076d03dbdbed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.321 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfff13396-b1 in ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:49:46 compute-0 nova_compute[260935]: 2025-10-11 08:49:46.322 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:49:46 compute-0 nova_compute[260935]: 2025-10-11 08:49:46.323 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:49:46 compute-0 nova_compute[260935]: 2025-10-11 08:49:46.323 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No VIF found with MAC fa:16:3e:52:ed:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:49:46 compute-0 nova_compute[260935]: 2025-10-11 08:49:46.324 2 INFO nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Using config drive
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.324 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfff13396-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.324 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b513812c-e074-4d9d-8703-d6b73b01b7ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.325 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[28d14d15-17fd-4e58-bc77-bcee23c05abf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:46 compute-0 ceph-mon[74313]: osdmap e166: 3 total, 3 up, 3 in
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.351 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ae8c9d-9333-4742-a734-0af75df651d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:46 compute-0 nova_compute[260935]: 2025-10-11 08:49:46.357 2 DEBUG nova.storage.rbd_utils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 90e56ca7-b26f-4f83-908d-75204ecd2533_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.367 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8adb76d5-ed22-43c0-a35d-d8484443c3d4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.406 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4d7889-62f5-4d5a-9831-759c8f888098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.414 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c1c7b98c-55ee-483d-9b5d-725052ee72c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:46 compute-0 systemd-udevd[295078]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:49:46 compute-0 NetworkManager[44960]: <info>  [1760172586.4168] manager: (tapfff13396-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/79)
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.470 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf5cb8f-6c1c-4202-8d1f-bafe7f1c230c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.474 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[32d8ee48-3c55-435f-8c91-214716ab29f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:46 compute-0 NetworkManager[44960]: <info>  [1760172586.5039] device (tapfff13396-b0): carrier: link connected
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.507 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a4cf2ef1-2696-44d6-ab77-cc7fed7cab80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.528 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[814df928-f99c-4d01-98f3-2da6b15fc936]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443120, 'reachable_time': 39241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295126, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.547 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3655effd-8408-45b7-88dd-0f5d2f93d06d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:a42d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443120, 'tstamp': 443120}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295127, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.569 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[21009625-1246-4d7b-b958-6bf49a5f4167]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443120, 'reachable_time': 39241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295128, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.617 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7273a5d4-9c6b-4a8d-99d0-eb4e06ab6cae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.716 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a346a0e6-55de-43c2-9ee7-e3e082bb3529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.718 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.719 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.720 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:46 compute-0 nova_compute[260935]: 2025-10-11 08:49:46.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:46 compute-0 NetworkManager[44960]: <info>  [1760172586.7580] manager: (tapfff13396-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Oct 11 08:49:46 compute-0 kernel: tapfff13396-b0: entered promiscuous mode
Oct 11 08:49:46 compute-0 nova_compute[260935]: 2025-10-11 08:49:46.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.764 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:46 compute-0 nova_compute[260935]: 2025-10-11 08:49:46.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:46 compute-0 ovn_controller[152945]: 2025-10-11T08:49:46Z|00134|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 08:49:46 compute-0 nova_compute[260935]: 2025-10-11 08:49:46.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:46 compute-0 nova_compute[260935]: 2025-10-11 08:49:46.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.809 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.811 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[373eb165-1eab-4b07-b6bd-3ee165183b99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.813 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/fff13396-b787-4c6e-9112-a1c2ef57b26d.pid.haproxy
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:49:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:46.814 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'env', 'PROCESS_TAG=haproxy-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fff13396-b787-4c6e-9112-a1c2ef57b26d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.153 2 INFO nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Creating config drive at /var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533/disk.config
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.164 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcjjqozi7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:47 compute-0 podman[295207]: 2025-10-11 08:49:47.280756329 +0000 UTC m=+0.065749491 container create 7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.304 2 DEBUG nova.network.neutron [req-cb98ccbb-a461-4db2-ba4c-735969cdecac req-d5eda931-6fd2-4c2d-9a96-8a475425b333 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Updated VIF entry in instance network info cache for port 302c88cf-6eba-4200-adfe-6c23d5e6078d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.305 2 DEBUG nova.network.neutron [req-cb98ccbb-a461-4db2-ba4c-735969cdecac req-d5eda931-6fd2-4c2d-9a96-8a475425b333 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Updating instance_info_cache with network_info: [{"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.328 2 INFO nova.virt.libvirt.driver [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Snapshot image upload complete
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.329 2 INFO nova.compute.manager [None req-49a8ed84-8d60-4574-bbd5-06ca9b96a8ef 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Took 6.13 seconds to snapshot the instance on the hypervisor.
Oct 11 08:49:47 compute-0 podman[295207]: 2025-10-11 08:49:47.242759884 +0000 UTC m=+0.027753106 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.335 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcjjqozi7" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.335 2 DEBUG oslo_concurrency.lockutils [req-cb98ccbb-a461-4db2-ba4c-735969cdecac req-d5eda931-6fd2-4c2d-9a96-8a475425b333 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:47 compute-0 ceph-mon[74313]: pgmap v1322: 321 pgs: 321 active+clean; 418 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 689 KiB/s rd, 7.8 MiB/s wr, 187 op/s
Oct 11 08:49:47 compute-0 systemd[1]: Started libpod-conmon-7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29.scope.
Oct 11 08:49:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.376 2 DEBUG nova.storage.rbd_utils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 90e56ca7-b26f-4f83-908d-75204ecd2533_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c54b9bd547d36ffdac517bcbe86f00a809d959f1c5904ed2903ce52567b3128/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.387 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533/disk.config 90e56ca7-b26f-4f83-908d-75204ecd2533_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:47 compute-0 podman[295207]: 2025-10-11 08:49:47.404761636 +0000 UTC m=+0.189754838 container init 7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:49:47 compute-0 podman[295207]: 2025-10-11 08:49:47.416652432 +0000 UTC m=+0.201645584 container start 7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 11 08:49:47 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[295224]: [NOTICE]   (295247) : New worker (295256) forked
Oct 11 08:49:47 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[295224]: [NOTICE]   (295247) : Loading success.
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.548 2 DEBUG oslo_concurrency.processutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533/disk.config 90e56ca7-b26f-4f83-908d-75204ecd2533_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.549 2 INFO nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Deleting local config drive /var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533/disk.config because it was imported into RBD.
Oct 11 08:49:47 compute-0 kernel: tap302c88cf-6e: entered promiscuous mode
Oct 11 08:49:47 compute-0 NetworkManager[44960]: <info>  [1760172587.6152] manager: (tap302c88cf-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:47 compute-0 ovn_controller[152945]: 2025-10-11T08:49:47Z|00135|binding|INFO|Claiming lport 302c88cf-6eba-4200-adfe-6c23d5e6078d for this chassis.
Oct 11 08:49:47 compute-0 ovn_controller[152945]: 2025-10-11T08:49:47Z|00136|binding|INFO|302c88cf-6eba-4200-adfe-6c23d5e6078d: Claiming fa:16:3e:52:ed:97 10.100.0.10
Oct 11 08:49:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.631 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:ed:97 10.100.0.10'], port_security=['fa:16:3e:52:ed:97 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '90e56ca7-b26f-4f83-908d-75204ecd2533', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=302c88cf-6eba-4200-adfe-6c23d5e6078d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:49:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.632 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 302c88cf-6eba-4200-adfe-6c23d5e6078d in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 bound to our chassis
Oct 11 08:49:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.634 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5
Oct 11 08:49:47 compute-0 ovn_controller[152945]: 2025-10-11T08:49:47Z|00137|binding|INFO|Setting lport 302c88cf-6eba-4200-adfe-6c23d5e6078d ovn-installed in OVS
Oct 11 08:49:47 compute-0 ovn_controller[152945]: 2025-10-11T08:49:47Z|00138|binding|INFO|Setting lport 302c88cf-6eba-4200-adfe-6c23d5e6078d up in Southbound
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:47 compute-0 NetworkManager[44960]: <info>  [1760172587.6448] device (tap302c88cf-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:49:47 compute-0 NetworkManager[44960]: <info>  [1760172587.6456] device (tap302c88cf-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.653 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9b2ff697-6c33-4702-ace5-60d06f95d439]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.684 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3da2f25e-3b7d-486a-bb91-0d17bedccd7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.689 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b937c492-23a8-4a16-8a3c-55fdf5babb23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:47 compute-0 systemd-machined[215705]: New machine qemu-29-instance-0000001b.
Oct 11 08:49:47 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-0000001b.
Oct 11 08:49:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.725 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca1740d-545d-4fa3-8240-720e994fbc82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.745 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c488f287-edf0-4be7-9133-2c5b35519eaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 10, 'rx_bytes': 1084, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 10, 'rx_bytes': 1084, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295295, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.770 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cae10ca5-3dda-41c5-b0e3-3f9de5f7ec07]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439130, 'tstamp': 439130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295299, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439135, 'tstamp': 439135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295299, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.772 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172587.7720613, 14711e39-46ca-4856-9c19-fa51b869064d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.772 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] VM Started (Lifecycle Event)
Oct 11 08:49:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.772 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.778 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.779 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.779 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:47.780 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.807 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.813 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172587.7740571, 14711e39-46ca-4856-9c19-fa51b869064d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.814 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] VM Paused (Lifecycle Event)
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.839 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 465 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 11 MiB/s wr, 340 op/s
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.845 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:49:47 compute-0 nova_compute[260935]: 2025-10-11 08:49:47.875 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:49:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.523 2 DEBUG nova.compute.manager [req-ea68d0a5-7c2a-431f-b70e-4d04ffb10373 req-76197eb4-841a-4960-8213-a4906a3523dc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.523 2 DEBUG oslo_concurrency.lockutils [req-ea68d0a5-7c2a-431f-b70e-4d04ffb10373 req-76197eb4-841a-4960-8213-a4906a3523dc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.524 2 DEBUG oslo_concurrency.lockutils [req-ea68d0a5-7c2a-431f-b70e-4d04ffb10373 req-76197eb4-841a-4960-8213-a4906a3523dc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.524 2 DEBUG oslo_concurrency.lockutils [req-ea68d0a5-7c2a-431f-b70e-4d04ffb10373 req-76197eb4-841a-4960-8213-a4906a3523dc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.525 2 DEBUG nova.compute.manager [req-ea68d0a5-7c2a-431f-b70e-4d04ffb10373 req-76197eb4-841a-4960-8213-a4906a3523dc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Processing event network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.526 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.531 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172588.530742, 14711e39-46ca-4856-9c19-fa51b869064d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.531 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] VM Resumed (Lifecycle Event)
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.532 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.537 2 INFO nova.virt.libvirt.driver [-] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Instance spawned successfully.
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.537 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.554 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.562 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.569 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.569 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.570 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.570 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.571 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.571 2 DEBUG nova.virt.libvirt.driver [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.609 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.609 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172588.540548, 90e56ca7-b26f-4f83-908d-75204ecd2533 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.610 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] VM Started (Lifecycle Event)
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.647 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.651 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172588.5406275, 90e56ca7-b26f-4f83-908d-75204ecd2533 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.652 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] VM Paused (Lifecycle Event)
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.683 2 INFO nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Took 12.63 seconds to spawn the instance on the hypervisor.
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.683 2 DEBUG nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.699 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.703 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.724 2 DEBUG nova.compute.manager [req-1e589ea9-fe8b-487f-a60c-be3bfc352e68 req-542a9f70-47c2-4bbf-b79e-72b13fcdad87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Received event network-vif-plugged-302c88cf-6eba-4200-adfe-6c23d5e6078d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.724 2 DEBUG oslo_concurrency.lockutils [req-1e589ea9-fe8b-487f-a60c-be3bfc352e68 req-542a9f70-47c2-4bbf-b79e-72b13fcdad87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.724 2 DEBUG oslo_concurrency.lockutils [req-1e589ea9-fe8b-487f-a60c-be3bfc352e68 req-542a9f70-47c2-4bbf-b79e-72b13fcdad87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.725 2 DEBUG oslo_concurrency.lockutils [req-1e589ea9-fe8b-487f-a60c-be3bfc352e68 req-542a9f70-47c2-4bbf-b79e-72b13fcdad87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.725 2 DEBUG nova.compute.manager [req-1e589ea9-fe8b-487f-a60c-be3bfc352e68 req-542a9f70-47c2-4bbf-b79e-72b13fcdad87 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Processing event network-vif-plugged-302c88cf-6eba-4200-adfe-6c23d5e6078d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.726 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.731 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.735 2 INFO nova.virt.libvirt.driver [-] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Instance spawned successfully.
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.735 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.790 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.791 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172588.729687, 90e56ca7-b26f-4f83-908d-75204ecd2533 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.791 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] VM Resumed (Lifecycle Event)
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.837 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.838 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.839 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.839 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.840 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.841 2 DEBUG nova.virt.libvirt.driver [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.903 2 INFO nova.compute.manager [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Took 14.23 seconds to build instance.
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.932 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:48 compute-0 nova_compute[260935]: 2025-10-11 08:49:48.939 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:49:49 compute-0 nova_compute[260935]: 2025-10-11 08:49:49.146 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:49:49 compute-0 nova_compute[260935]: 2025-10-11 08:49:49.152 2 DEBUG oslo_concurrency.lockutils [None req-37b7b9ec-bf18-4041-ba84-f84d66a368c5 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:49 compute-0 nova_compute[260935]: 2025-10-11 08:49:49.202 2 INFO nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Took 11.67 seconds to spawn the instance on the hypervisor.
Oct 11 08:49:49 compute-0 nova_compute[260935]: 2025-10-11 08:49:49.202 2 DEBUG nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:49 compute-0 nova_compute[260935]: 2025-10-11 08:49:49.270 2 INFO nova.compute.manager [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Took 14.55 seconds to build instance.
Oct 11 08:49:49 compute-0 nova_compute[260935]: 2025-10-11 08:49:49.289 2 DEBUG oslo_concurrency.lockutils [None req-d0ed1468-e072-4f36-ba8a-64df5d0bc7be a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:49 compute-0 ceph-mon[74313]: pgmap v1323: 321 pgs: 321 active+clean; 465 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 11 MiB/s wr, 340 op/s
Oct 11 08:49:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 465 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.3 MiB/s wr, 140 op/s
Oct 11 08:49:49 compute-0 nova_compute[260935]: 2025-10-11 08:49:49.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:50 compute-0 nova_compute[260935]: 2025-10-11 08:49:50.168 2 DEBUG oslo_concurrency.lockutils [None req-562dda92-f28d-460a-8594-d15088f92593 794a63979edc4ddb9f25d94c4999d99b 9b74a79bf5294fa5b15c4e2cc282d6c1 - - default default] Acquiring lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:50 compute-0 nova_compute[260935]: 2025-10-11 08:49:50.168 2 DEBUG oslo_concurrency.lockutils [None req-562dda92-f28d-460a-8594-d15088f92593 794a63979edc4ddb9f25d94c4999d99b 9b74a79bf5294fa5b15c4e2cc282d6c1 - - default default] Acquired lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:50 compute-0 nova_compute[260935]: 2025-10-11 08:49:50.168 2 DEBUG nova.network.neutron [None req-562dda92-f28d-460a-8594-d15088f92593 794a63979edc4ddb9f25d94c4999d99b 9b74a79bf5294fa5b15c4e2cc282d6c1 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:49:50 compute-0 nova_compute[260935]: 2025-10-11 08:49:50.223 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:50 compute-0 nova_compute[260935]: 2025-10-11 08:49:50.223 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:50 compute-0 nova_compute[260935]: 2025-10-11 08:49:50.239 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:49:50 compute-0 nova_compute[260935]: 2025-10-11 08:49:50.310 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:50 compute-0 nova_compute[260935]: 2025-10-11 08:49:50.311 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:50 compute-0 nova_compute[260935]: 2025-10-11 08:49:50.317 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:49:50 compute-0 nova_compute[260935]: 2025-10-11 08:49:50.317 2 INFO nova.compute.claims [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:49:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Oct 11 08:49:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Oct 11 08:49:50 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Oct 11 08:49:50 compute-0 nova_compute[260935]: 2025-10-11 08:49:50.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:50 compute-0 NetworkManager[44960]: <info>  [1760172590.4194] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Oct 11 08:49:50 compute-0 NetworkManager[44960]: <info>  [1760172590.4206] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Oct 11 08:49:50 compute-0 nova_compute[260935]: 2025-10-11 08:49:50.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:50 compute-0 ovn_controller[152945]: 2025-10-11T08:49:50Z|00139|binding|INFO|Releasing lport 424305ea-6b47-4134-ad52-ee2a450e204c from this chassis (sb_readonly=0)
Oct 11 08:49:50 compute-0 ovn_controller[152945]: 2025-10-11T08:49:50Z|00140|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 08:49:50 compute-0 ovn_controller[152945]: 2025-10-11T08:49:50Z|00141|binding|INFO|Releasing lport e5becf0d-48c0-404b-9cba-07077454d085 from this chassis (sb_readonly=0)
Oct 11 08:49:50 compute-0 nova_compute[260935]: 2025-10-11 08:49:50.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:50 compute-0 nova_compute[260935]: 2025-10-11 08:49:50.564 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.053 2 DEBUG nova.compute.manager [req-8df2a6bf-0b16-4e54-bc8f-8b665358cecb req-cd33f2e1-f1b1-462d-a138-8ce499811998 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.054 2 DEBUG oslo_concurrency.lockutils [req-8df2a6bf-0b16-4e54-bc8f-8b665358cecb req-cd33f2e1-f1b1-462d-a138-8ce499811998 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.054 2 DEBUG oslo_concurrency.lockutils [req-8df2a6bf-0b16-4e54-bc8f-8b665358cecb req-cd33f2e1-f1b1-462d-a138-8ce499811998 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.055 2 DEBUG oslo_concurrency.lockutils [req-8df2a6bf-0b16-4e54-bc8f-8b665358cecb req-cd33f2e1-f1b1-462d-a138-8ce499811998 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.055 2 DEBUG nova.compute.manager [req-8df2a6bf-0b16-4e54-bc8f-8b665358cecb req-cd33f2e1-f1b1-462d-a138-8ce499811998 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] No waiting events found dispatching network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.056 2 WARNING nova.compute.manager [req-8df2a6bf-0b16-4e54-bc8f-8b665358cecb req-cd33f2e1-f1b1-462d-a138-8ce499811998 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received unexpected event network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 for instance with vm_state active and task_state None.
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.129 2 DEBUG nova.compute.manager [req-983fb0e0-0544-4d1f-a156-bf4bf0288b5c req-3f046720-a76f-4795-8faa-336cc9646ffc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Received event network-vif-plugged-302c88cf-6eba-4200-adfe-6c23d5e6078d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.130 2 DEBUG oslo_concurrency.lockutils [req-983fb0e0-0544-4d1f-a156-bf4bf0288b5c req-3f046720-a76f-4795-8faa-336cc9646ffc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.130 2 DEBUG oslo_concurrency.lockutils [req-983fb0e0-0544-4d1f-a156-bf4bf0288b5c req-3f046720-a76f-4795-8faa-336cc9646ffc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.131 2 DEBUG oslo_concurrency.lockutils [req-983fb0e0-0544-4d1f-a156-bf4bf0288b5c req-3f046720-a76f-4795-8faa-336cc9646ffc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.131 2 DEBUG nova.compute.manager [req-983fb0e0-0544-4d1f-a156-bf4bf0288b5c req-3f046720-a76f-4795-8faa-336cc9646ffc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] No waiting events found dispatching network-vif-plugged-302c88cf-6eba-4200-adfe-6c23d5e6078d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.131 2 WARNING nova.compute.manager [req-983fb0e0-0544-4d1f-a156-bf4bf0288b5c req-3f046720-a76f-4795-8faa-336cc9646ffc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Received unexpected event network-vif-plugged-302c88cf-6eba-4200-adfe-6c23d5e6078d for instance with vm_state active and task_state None.
Oct 11 08:49:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:49:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1072475470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.187 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.193 2 DEBUG nova.compute.provider_tree [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.222 2 DEBUG nova.scheduler.client.report [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.249 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.250 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.254 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.254 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.255 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.256 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.256 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.258 2 INFO nova.compute.manager [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Terminating instance
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.260 2 DEBUG nova.compute.manager [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.310 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.311 2 DEBUG nova.network.neutron [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:49:51 compute-0 kernel: tapf1d8b704-c5 (unregistering): left promiscuous mode
Oct 11 08:49:51 compute-0 NetworkManager[44960]: <info>  [1760172591.3260] device (tapf1d8b704-c5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.336 2 INFO nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:51 compute-0 ovn_controller[152945]: 2025-10-11T08:49:51Z|00142|binding|INFO|Releasing lport f1d8b704-c5df-41f7-b46a-04c0e89ab2cd from this chassis (sb_readonly=0)
Oct 11 08:49:51 compute-0 ovn_controller[152945]: 2025-10-11T08:49:51Z|00143|binding|INFO|Setting lport f1d8b704-c5df-41f7-b46a-04c0e89ab2cd down in Southbound
Oct 11 08:49:51 compute-0 ovn_controller[152945]: 2025-10-11T08:49:51Z|00144|binding|INFO|Removing iface tapf1d8b704-c5 ovn-installed in OVS
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.355 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:11:79 10.100.0.7'], port_security=['fa:16:3e:3c:11:79 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8e4f771a-b87a-40f9-a12e-b5b4583b96f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:49:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.356 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f1d8b704-c5df-41f7-b46a-04c0e89ab2cd in datapath 9bac3530-993f-420e-8692-0b14a331d756 unbound from our chassis
Oct 11 08:49:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.358 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bac3530-993f-420e-8692-0b14a331d756, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:49:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.358 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3741877f-60e5-48cd-9940-f5be88b8d892]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.359 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace which is not needed anymore
Oct 11 08:49:51 compute-0 ceph-mon[74313]: pgmap v1324: 321 pgs: 321 active+clean; 465 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.3 MiB/s wr, 140 op/s
Oct 11 08:49:51 compute-0 ceph-mon[74313]: osdmap e167: 3 total, 3 up, 3 in
Oct 11 08:49:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1072475470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.366 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:49:51 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000019.scope: Deactivated successfully.
Oct 11 08:49:51 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000019.scope: Consumed 2.850s CPU time.
Oct 11 08:49:51 compute-0 systemd-machined[215705]: Machine qemu-27-instance-00000019 terminated.
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.485 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.486 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.486 2 INFO nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Creating image(s)
Oct 11 08:49:51 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[294157]: [NOTICE]   (294165) : haproxy version is 2.8.14-c23fe91
Oct 11 08:49:51 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[294157]: [NOTICE]   (294165) : path to executable is /usr/sbin/haproxy
Oct 11 08:49:51 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[294157]: [WARNING]  (294165) : Exiting Master process...
Oct 11 08:49:51 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[294157]: [ALERT]    (294165) : Current worker (294170) exited with code 143 (Terminated)
Oct 11 08:49:51 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[294157]: [WARNING]  (294165) : All workers exited. Exiting... (0)
Oct 11 08:49:51 compute-0 systemd[1]: libpod-012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0.scope: Deactivated successfully.
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.531 2 DEBUG nova.storage.rbd_utils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] rbd image 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:51 compute-0 podman[295390]: 2025-10-11 08:49:51.536184983 +0000 UTC m=+0.065973437 container died 012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 08:49:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0-userdata-shm.mount: Deactivated successfully.
Oct 11 08:49:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-41c0985dbac57242fc08a988754091dfee60c3f8b611a4151de8d991e5bdb92e-merged.mount: Deactivated successfully.
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.574 2 DEBUG nova.storage.rbd_utils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] rbd image 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:51 compute-0 podman[295390]: 2025-10-11 08:49:51.579894579 +0000 UTC m=+0.109683033 container cleanup 012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:49:51 compute-0 systemd[1]: libpod-conmon-012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0.scope: Deactivated successfully.
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.605 2 DEBUG nova.storage.rbd_utils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] rbd image 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.609 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.642 2 INFO nova.virt.libvirt.driver [-] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Instance destroyed successfully.
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.643 2 DEBUG nova.objects.instance [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'resources' on Instance uuid 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:51 compute-0 podman[295465]: 2025-10-11 08:49:51.657618627 +0000 UTC m=+0.055043858 container remove 012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 08:49:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.667 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a3abce80-cc45-466d-a111-97a5db531e3f]: (4, ('Sat Oct 11 08:49:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0)\n012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0\nSat Oct 11 08:49:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0)\n012dd3b1240c0de00b1d42eb37a54f1c4b5262e85e9be9f6efc3df774b1c68d0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.669 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1cc4af-f0ff-4a34-af48-84b8f43e9387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.670 2 DEBUG nova.virt.libvirt.vif [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1760568996',display_name='tempest-ImagesTestJSON-server-1760568996',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1760568996',id=25,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-ipkecogn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:47Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=8e4f771a-b87a-40f9-a12e-b5b4583b96f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.671 2 DEBUG nova.network.os_vif_util [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "address": "fa:16:3e:3c:11:79", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1d8b704-c5", "ovs_interfaceid": "f1d8b704-c5df-41f7-b46a-04c0e89ab2cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.671 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.672 2 DEBUG nova.network.os_vif_util [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:11:79,bridge_name='br-int',has_traffic_filtering=True,id=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1d8b704-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.672 2 DEBUG os_vif [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:11:79,bridge_name='br-int',has_traffic_filtering=True,id=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1d8b704-c5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:49:51 compute-0 kernel: tap9bac3530-90: left promiscuous mode
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.720 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1d8b704-c5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.723 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.724 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.724 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.724 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.744 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3d077a3d-2bcd-4ddb-8b61-be507bab6c7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.748 2 DEBUG nova.storage.rbd_utils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] rbd image 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.755 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.786 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[535ae7a0-d88f-4c2e-8f2d-89e5e350a78f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.788 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4b7d03-1119-41c2-991f-f8b6feaaff3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:51 compute-0 nova_compute[260935]: 2025-10-11 08:49:51.797 2 INFO os_vif [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:11:79,bridge_name='br-int',has_traffic_filtering=True,id=f1d8b704-c5df-41f7-b46a-04c0e89ab2cd,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1d8b704-c5')
Oct 11 08:49:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.810 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8e85611d-b084-4905-b1f3-9d64be79553f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441692, 'reachable_time': 18702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295519, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d9bac3530\x2d993f\x2d420e\x2d8692\x2d0b14a331d756.mount: Deactivated successfully.
Oct 11 08:49:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.813 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:49:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:51.813 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6cae64f3-c604-4af4-acce-28db5051c81b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 465 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.9 MiB/s wr, 121 op/s
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.053 2 DEBUG nova.policy [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f557c82d1a6e44f2890ffb382c99df55', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4478aa2544ad454daf82ec0d5a6f1b83', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.055 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.119 2 DEBUG nova.storage.rbd_utils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] resizing rbd image 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.236 2 DEBUG nova.objects.instance [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lazy-loading 'migration_context' on Instance uuid 9f9aca1c-8e65-435a-bfae-1ff0d4386f58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.252 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.253 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Ensure instance console log exists: /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.254 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.254 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.255 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.295 2 INFO nova.virt.libvirt.driver [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Deleting instance files /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7_del
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.296 2 INFO nova.virt.libvirt.driver [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Deletion of /var/lib/nova/instances/8e4f771a-b87a-40f9-a12e-b5b4583b96f7_del complete
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.337 2 DEBUG nova.network.neutron [None req-562dda92-f28d-460a-8594-d15088f92593 794a63979edc4ddb9f25d94c4999d99b 9b74a79bf5294fa5b15c4e2cc282d6c1 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Updating instance_info_cache with network_info: [{"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.369 2 DEBUG oslo_concurrency.lockutils [None req-562dda92-f28d-460a-8594-d15088f92593 794a63979edc4ddb9f25d94c4999d99b 9b74a79bf5294fa5b15c4e2cc282d6c1 - - default default] Releasing lock "refresh_cache-90e56ca7-b26f-4f83-908d-75204ecd2533" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.373 2 DEBUG nova.compute.manager [None req-562dda92-f28d-460a-8594-d15088f92593 794a63979edc4ddb9f25d94c4999d99b 9b74a79bf5294fa5b15c4e2cc282d6c1 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.373 2 DEBUG nova.compute.manager [None req-562dda92-f28d-460a-8594-d15088f92593 794a63979edc4ddb9f25d94c4999d99b 9b74a79bf5294fa5b15c4e2cc282d6c1 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] network_info to inject: |[{"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.383 2 INFO nova.compute.manager [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Took 1.12 seconds to destroy the instance on the hypervisor.
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.383 2 DEBUG oslo.service.loopingcall [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.384 2 DEBUG nova.compute.manager [-] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.384 2 DEBUG nova.network.neutron [-] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:49:52 compute-0 podman[295629]: 2025-10-11 08:49:52.81753682 +0000 UTC m=+0.107980605 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 08:49:52 compute-0 nova_compute[260935]: 2025-10-11 08:49:52.912 2 DEBUG nova.network.neutron [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Successfully created port: 9007d8a3-8797-49c6-9302-4c8f4d699a45 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:49:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:49:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Oct 11 08:49:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Oct 11 08:49:53 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Oct 11 08:49:53 compute-0 ceph-mon[74313]: pgmap v1326: 321 pgs: 321 active+clean; 465 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.9 MiB/s wr, 121 op/s
Oct 11 08:49:53 compute-0 ceph-mon[74313]: osdmap e168: 3 total, 3 up, 3 in
Oct 11 08:49:53 compute-0 nova_compute[260935]: 2025-10-11 08:49:53.517 2 DEBUG nova.network.neutron [-] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:53 compute-0 nova_compute[260935]: 2025-10-11 08:49:53.541 2 INFO nova.compute.manager [-] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Took 1.16 seconds to deallocate network for instance.
Oct 11 08:49:53 compute-0 nova_compute[260935]: 2025-10-11 08:49:53.599 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:53 compute-0 nova_compute[260935]: 2025-10-11 08:49:53.600 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:53 compute-0 nova_compute[260935]: 2025-10-11 08:49:53.829 2 DEBUG oslo_concurrency.processutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 418 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 5.4 MiB/s wr, 429 op/s
Oct 11 08:49:53 compute-0 nova_compute[260935]: 2025-10-11 08:49:53.963 2 DEBUG nova.network.neutron [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Successfully updated port: 9007d8a3-8797-49c6-9302-4c8f4d699a45 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:49:53 compute-0 nova_compute[260935]: 2025-10-11 08:49:53.977 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:53 compute-0 nova_compute[260935]: 2025-10-11 08:49:53.978 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquired lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:53 compute-0 nova_compute[260935]: 2025-10-11 08:49:53.978 2 DEBUG nova.network.neutron [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.155 2 DEBUG nova.network.neutron [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:49:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:49:54 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2805905273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.314 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.315 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.318 2 DEBUG nova.compute.manager [req-4a8cd530-d3e8-485a-af9c-d96490e4e12a req-6cad29a0-04b0-42b8-b320-f6bb9a693f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.318 2 DEBUG nova.compute.manager [req-4a8cd530-d3e8-485a-af9c-d96490e4e12a req-6cad29a0-04b0-42b8-b320-f6bb9a693f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing instance network info cache due to event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.318 2 DEBUG oslo_concurrency.lockutils [req-4a8cd530-d3e8-485a-af9c-d96490e4e12a req-6cad29a0-04b0-42b8-b320-f6bb9a693f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.318 2 DEBUG oslo_concurrency.lockutils [req-4a8cd530-d3e8-485a-af9c-d96490e4e12a req-6cad29a0-04b0-42b8-b320-f6bb9a693f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.319 2 DEBUG nova.network.neutron [req-4a8cd530-d3e8-485a-af9c-d96490e4e12a req-6cad29a0-04b0-42b8-b320-f6bb9a693f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing network info cache for port ac842ebf-4fca-4930-a4d1-3e8a6760d441 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.334 2 DEBUG oslo_concurrency.processutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.340 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.346 2 DEBUG nova.compute.provider_tree [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.365 2 DEBUG nova.scheduler.client.report [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:49:54 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2805905273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.392 2 DEBUG nova.compute.manager [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received event network-vif-unplugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.393 2 DEBUG oslo_concurrency.lockutils [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.393 2 DEBUG oslo_concurrency.lockutils [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.393 2 DEBUG oslo_concurrency.lockutils [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.393 2 DEBUG nova.compute.manager [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] No waiting events found dispatching network-vif-unplugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.394 2 WARNING nova.compute.manager [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received unexpected event network-vif-unplugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd for instance with vm_state deleted and task_state None.
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.394 2 DEBUG nova.compute.manager [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received event network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.394 2 DEBUG oslo_concurrency.lockutils [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.395 2 DEBUG oslo_concurrency.lockutils [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.395 2 DEBUG oslo_concurrency.lockutils [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.395 2 DEBUG nova.compute.manager [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] No waiting events found dispatching network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.396 2 WARNING nova.compute.manager [req-e29814d3-bc96-40f8-a7ea-728b7332d556 req-e51284b0-9007-4012-b74e-9bdedfac041d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received unexpected event network-vif-plugged-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd for instance with vm_state deleted and task_state None.
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.409 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.429 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.429 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.436 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.436 2 INFO nova.compute.claims [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.440 2 INFO nova.scheduler.client.report [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Deleted allocations for instance 8e4f771a-b87a-40f9-a12e-b5b4583b96f7
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.514 2 DEBUG oslo_concurrency.lockutils [None req-380ed33b-63cc-462d-8a1e-2d05d5f998cc 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "8e4f771a-b87a-40f9-a12e-b5b4583b96f7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.677 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:49:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:49:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:49:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:49:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:49:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:49:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:49:54
Oct 11 08:49:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:49:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:49:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'default.rgw.meta', '.rgw.root', 'volumes', '.mgr', 'images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data']
Oct 11 08:49:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.977 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.979 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:54 compute-0 nova_compute[260935]: 2025-10-11 08:49:54.997 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.084 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.086 2 DEBUG nova.network.neutron [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Updating instance_info_cache with network_info: [{"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:49:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919414430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.108 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Releasing lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.109 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Instance network_info: |[{"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.112 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Start _get_guest_xml network_info=[{"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.114 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.120 2 DEBUG nova.compute.provider_tree [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.123 2 WARNING nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.128 2 DEBUG nova.virt.libvirt.host [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.128 2 DEBUG nova.virt.libvirt.host [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.134 2 DEBUG nova.virt.libvirt.host [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.135 2 DEBUG nova.virt.libvirt.host [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.135 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.135 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.136 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.136 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.136 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.137 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.137 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.137 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.137 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.137 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.138 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.138 2 DEBUG nova.virt.hardware [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.142 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.169 2 DEBUG nova.scheduler.client.report [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.213 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.214 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.218 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.225 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.225 2 INFO nova.compute.claims [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.302 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.302 2 DEBUG nova.network.neutron [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.328 2 INFO nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.349 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:49:55 compute-0 ceph-mon[74313]: pgmap v1328: 321 pgs: 321 active+clean; 418 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 5.4 MiB/s wr, 429 op/s
Oct 11 08:49:55 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1919414430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.445 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.447 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.447 2 INFO nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Creating image(s)
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.480 2 DEBUG nova.storage.rbd_utils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image b35f4147-9e36-4dab-9ac8-2061c97797f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.513 2 DEBUG nova.storage.rbd_utils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image b35f4147-9e36-4dab-9ac8-2061c97797f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.539 2 DEBUG nova.storage.rbd_utils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image b35f4147-9e36-4dab-9ac8-2061c97797f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.545 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:49:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1402160965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.589 2 DEBUG nova.network.neutron [req-4a8cd530-d3e8-485a-af9c-d96490e4e12a req-6cad29a0-04b0-42b8-b320-f6bb9a693f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updated VIF entry in instance network info cache for port ac842ebf-4fca-4930-a4d1-3e8a6760d441. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.590 2 DEBUG nova.network.neutron [req-4a8cd530-d3e8-485a-af9c-d96490e4e12a req-6cad29a0-04b0-42b8-b320-f6bb9a693f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updating instance_info_cache with network_info: [{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.593 2 DEBUG nova.policy [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.596 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.624 2 DEBUG nova.storage.rbd_utils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] rbd image 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.627 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.666 2 DEBUG oslo_concurrency.lockutils [req-4a8cd530-d3e8-485a-af9c-d96490e4e12a req-6cad29a0-04b0-42b8-b320-f6bb9a693f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.668 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.669 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.670 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.670 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.693 2 DEBUG nova.storage.rbd_utils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image b35f4147-9e36-4dab-9ac8-2061c97797f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.697 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b35f4147-9e36-4dab-9ac8-2061c97797f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.734 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.763 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.764 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:49:55 compute-0 nova_compute[260935]: 2025-10-11 08:49:55.764 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:49:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 418 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 314 op/s
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.015 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b35f4147-9e36-4dab-9ac8-2061c97797f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.063 2 DEBUG nova.storage.rbd_utils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] resizing rbd image b35f4147-9e36-4dab-9ac8-2061c97797f2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:49:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:49:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1554110969' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.134 2 DEBUG nova.objects.instance [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'migration_context' on Instance uuid b35f4147-9e36-4dab-9ac8-2061c97797f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.142 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.143 2 DEBUG nova.virt.libvirt.vif [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-677562955',display_name='tempest-ServersTestManualDisk-server-677562955',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-677562955',id=28,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE1CYmGp9cIjn1OUZpClvh/431spCYwxzBbkMOHDBaljNzck8rmRcU7SShxhilRUkaibqgjOLZGNsP/o0nw2t9clDZxZT6xlOhd2BSbNJPRKX/ZsVWrtD5Ho3SYRh/eonw==',key_name='tempest-keypair-542099764',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4478aa2544ad454daf82ec0d5a6f1b83',ramdisk_id='',reservation_id='r-p05i8rcj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-133082753',owner_user_name='tempest-ServersTestManualDisk-133082753-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f557c82d1a6e44f2890ffb382c99df55',uuid=9f9aca1c-8e65-435a-bfae-1ff0d4386f58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.143 2 DEBUG nova.network.os_vif_util [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Converting VIF {"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.144 2 DEBUG nova.network.os_vif_util [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:0d:85,bridge_name='br-int',has_traffic_filtering=True,id=9007d8a3-8797-49c6-9302-4c8f4d699a45,network=Network(8e0c9798-3406-4335-baf7-3664e8c2cc2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9007d8a3-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.145 2 DEBUG nova.objects.instance [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f9aca1c-8e65-435a-bfae-1ff0d4386f58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.182 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.182 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Ensure instance console log exists: /var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.182 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.183 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.183 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:49:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2212142715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.221 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.253 2 DEBUG nova.compute.provider_tree [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.290 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:49:56 compute-0 nova_compute[260935]:   <uuid>9f9aca1c-8e65-435a-bfae-1ff0d4386f58</uuid>
Oct 11 08:49:56 compute-0 nova_compute[260935]:   <name>instance-0000001c</name>
Oct 11 08:49:56 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:49:56 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:49:56 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersTestManualDisk-server-677562955</nova:name>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:49:55</nova:creationTime>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:49:56 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:49:56 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:49:56 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:49:56 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:49:56 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:49:56 compute-0 nova_compute[260935]:         <nova:user uuid="f557c82d1a6e44f2890ffb382c99df55">tempest-ServersTestManualDisk-133082753-project-member</nova:user>
Oct 11 08:49:56 compute-0 nova_compute[260935]:         <nova:project uuid="4478aa2544ad454daf82ec0d5a6f1b83">tempest-ServersTestManualDisk-133082753</nova:project>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:49:56 compute-0 nova_compute[260935]:         <nova:port uuid="9007d8a3-8797-49c6-9302-4c8f4d699a45">
Oct 11 08:49:56 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:49:56 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:49:56 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <system>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <entry name="serial">9f9aca1c-8e65-435a-bfae-1ff0d4386f58</entry>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <entry name="uuid">9f9aca1c-8e65-435a-bfae-1ff0d4386f58</entry>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     </system>
Oct 11 08:49:56 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:49:56 compute-0 nova_compute[260935]:   <os>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:   </os>
Oct 11 08:49:56 compute-0 nova_compute[260935]:   <features>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:   </features>
Oct 11 08:49:56 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:49:56 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:49:56 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk">
Oct 11 08:49:56 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:49:56 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk.config">
Oct 11 08:49:56 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       </source>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:49:56 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:34:0d:85"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <target dev="tap9007d8a3-87"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58/console.log" append="off"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <video>
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     </video>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:49:56 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:49:56 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:49:56 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:49:56 compute-0 nova_compute[260935]: </domain>
Oct 11 08:49:56 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.290 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Preparing to wait for external event network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.290 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.291 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.291 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.291 2 DEBUG nova.virt.libvirt.vif [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-677562955',display_name='tempest-ServersTestManualDisk-server-677562955',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-677562955',id=28,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE1CYmGp9cIjn1OUZpClvh/431spCYwxzBbkMOHDBaljNzck8rmRcU7SShxhilRUkaibqgjOLZGNsP/o0nw2t9clDZxZT6xlOhd2BSbNJPRKX/ZsVWrtD5Ho3SYRh/eonw==',key_name='tempest-keypair-542099764',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4478aa2544ad454daf82ec0d5a6f1b83',ramdisk_id='',reservation_id='r-p05i8rcj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-133082753',owner_user_name='tempest-ServersTestManualDisk-133082753-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f557c82d1a6e44f2890ffb382c99df55',uuid=9f9aca1c-8e65-435a-bfae-1ff0d4386f58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.291 2 DEBUG nova.network.os_vif_util [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Converting VIF {"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.292 2 DEBUG nova.network.os_vif_util [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:0d:85,bridge_name='br-int',has_traffic_filtering=True,id=9007d8a3-8797-49c6-9302-4c8f4d699a45,network=Network(8e0c9798-3406-4335-baf7-3664e8c2cc2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9007d8a3-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.292 2 DEBUG os_vif [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:0d:85,bridge_name='br-int',has_traffic_filtering=True,id=9007d8a3-8797-49c6-9302-4c8f4d699a45,network=Network(8e0c9798-3406-4335-baf7-3664e8c2cc2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9007d8a3-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.293 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.293 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.297 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9007d8a3-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.297 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9007d8a3-87, col_values=(('external_ids', {'iface-id': '9007d8a3-8797-49c6-9302-4c8f4d699a45', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:0d:85', 'vm-uuid': '9f9aca1c-8e65-435a-bfae-1ff0d4386f58'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:56 compute-0 NetworkManager[44960]: <info>  [1760172596.3001] manager: (tap9007d8a3-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.306 2 INFO os_vif [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:0d:85,bridge_name='br-int',has_traffic_filtering=True,id=9007d8a3-8797-49c6-9302-4c8f4d699a45,network=Network(8e0c9798-3406-4335-baf7-3664e8c2cc2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9007d8a3-87')
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.313 2 DEBUG nova.scheduler.client.report [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.368 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.369 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:49:56 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1402160965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:56 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1554110969' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:56 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2212142715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.434 2 DEBUG nova.compute.manager [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Received event network-vif-deleted-f1d8b704-c5df-41f7-b46a-04c0e89ab2cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.435 2 DEBUG nova.compute.manager [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received event network-changed-9007d8a3-8797-49c6-9302-4c8f4d699a45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.436 2 DEBUG nova.compute.manager [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Refreshing instance network info cache due to event network-changed-9007d8a3-8797-49c6-9302-4c8f4d699a45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.436 2 DEBUG oslo_concurrency.lockutils [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.436 2 DEBUG oslo_concurrency.lockutils [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.436 2 DEBUG nova.network.neutron [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Refreshing network info cache for port 9007d8a3-8797-49c6-9302-4c8f4d699a45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.466 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.466 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.466 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] No VIF found with MAC fa:16:3e:34:0d:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.467 2 INFO nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Using config drive
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.486 2 DEBUG nova.storage.rbd_utils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] rbd image 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.492 2 DEBUG nova.network.neutron [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Successfully created port: c27797c3-6ac7-45ae-9a2e-7fc42908feab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.496 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.496 2 DEBUG nova.network.neutron [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.518 2 INFO nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.616 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.753 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.754 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.755 2 INFO nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Creating image(s)
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.787 2 DEBUG nova.storage.rbd_utils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.819 2 DEBUG nova.storage.rbd_utils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.850 2 DEBUG nova.storage.rbd_utils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.855 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.888 2 DEBUG nova.policy [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1bab12893b9d49aabcb5ca19c9b951de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8c7604961214c6d9d49657535d799a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.941 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.942 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.942 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.943 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.962 2 DEBUG nova.storage.rbd_utils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:56 compute-0 nova_compute[260935]: 2025-10-11 08:49:56.965 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.100 2 INFO nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Creating config drive at /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58/disk.config
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.128 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdiouyoki execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.204 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.289 2 DEBUG nova.storage.rbd_utils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] resizing rbd image cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.327 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdiouyoki" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.358 2 DEBUG nova.storage.rbd_utils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] rbd image 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.362 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58/disk.config 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:57 compute-0 ceph-mon[74313]: pgmap v1329: 321 pgs: 321 active+clean; 418 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 314 op/s
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.466 2 DEBUG nova.objects.instance [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'migration_context' on Instance uuid cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.512 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.512 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Ensure instance console log exists: /var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.513 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.514 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.514 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.535 2 DEBUG oslo_concurrency.processutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58/disk.config 9f9aca1c-8e65-435a-bfae-1ff0d4386f58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.535 2 INFO nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Deleting local config drive /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58/disk.config because it was imported into RBD.
Oct 11 08:49:57 compute-0 kernel: tap9007d8a3-87: entered promiscuous mode
Oct 11 08:49:57 compute-0 NetworkManager[44960]: <info>  [1760172597.5859] manager: (tap9007d8a3-87): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:57 compute-0 ovn_controller[152945]: 2025-10-11T08:49:57Z|00145|binding|INFO|Claiming lport 9007d8a3-8797-49c6-9302-4c8f4d699a45 for this chassis.
Oct 11 08:49:57 compute-0 ovn_controller[152945]: 2025-10-11T08:49:57Z|00146|binding|INFO|9007d8a3-8797-49c6-9302-4c8f4d699a45: Claiming fa:16:3e:34:0d:85 10.100.0.3
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:57 compute-0 ovn_controller[152945]: 2025-10-11T08:49:57Z|00147|binding|INFO|Setting lport 9007d8a3-8797-49c6-9302-4c8f4d699a45 ovn-installed in OVS
Oct 11 08:49:57 compute-0 systemd-machined[215705]: New machine qemu-30-instance-0000001c.
Oct 11 08:49:57 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-0000001c.
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:57 compute-0 systemd-udevd[296180]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:49:57 compute-0 NetworkManager[44960]: <info>  [1760172597.6525] device (tap9007d8a3-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:49:57 compute-0 NetworkManager[44960]: <info>  [1760172597.6554] device (tap9007d8a3-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:49:57 compute-0 ovn_controller[152945]: 2025-10-11T08:49:57Z|00148|binding|INFO|Setting lport 9007d8a3-8797-49c6-9302-4c8f4d699a45 up in Southbound
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.712 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:0d:85 10.100.0.3'], port_security=['fa:16:3e:34:0d:85 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9f9aca1c-8e65-435a-bfae-1ff0d4386f58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e0c9798-3406-4335-baf7-3664e8c2cc2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4478aa2544ad454daf82ec0d5a6f1b83', 'neutron:revision_number': '2', 'neutron:security_group_ids': '53a8c688-9a7d-4462-906b-8eed321153ea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=861cf0ef-52d4-42d5-8406-b93929d21340, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=9007d8a3-8797-49c6-9302-4c8f4d699a45) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.715 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 9007d8a3-8797-49c6-9302-4c8f4d699a45 in datapath 8e0c9798-3406-4335-baf7-3664e8c2cc2d bound to our chassis
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.720 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e0c9798-3406-4335-baf7-3664e8c2cc2d
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.736 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a4affe5d-7876-499d-a612-156bbac9ddc3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.737 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e0c9798-31 in ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.739 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e0c9798-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.739 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d8d711-cd63-49aa-a7f5-87f236b9f07b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.740 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0b21eb-ced4-44fd-a10c-8e81bcd0dc51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.751 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6efbd235-6088-4bdd-8c27-d43ec87f3f11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.767 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1df3bc41-6ea6-4096-be87-9429df7eb4ba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.808 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6816a7c4-5020-48b4-852a-157b3ddd8369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:57 compute-0 NetworkManager[44960]: <info>  [1760172597.8192] manager: (tap8e0c9798-30): new Veth device (/org/freedesktop/NetworkManager/Devices/86)
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.818 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f37ff69-d34f-47f0-bcab-187afb8368fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 511 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.0 MiB/s wr, 392 op/s
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.870 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[af713344-990b-4f06-b65a-dfddcec6e97d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.874 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4463c7ed-c4f3-4195-a1ac-2d6328764ac2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:57 compute-0 NetworkManager[44960]: <info>  [1760172597.9144] device (tap8e0c9798-30): carrier: link connected
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.922 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a4019683-0242-4569-8aa9-46b24b68697f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.942 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bbfd8288-f6f0-42a9-b8be-38693634d0fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e0c9798-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:48:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444261, 'reachable_time': 31002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296251, 'error': None, 'target': 'ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.960 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e98ce406-e8c8-440f-bae7-77282d879d69]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:4855'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444261, 'tstamp': 444261}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296253, 'error': None, 'target': 'ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:57.981 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2c921381-a348-4f73-abce-6974b00158c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e0c9798-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:48:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444261, 'reachable_time': 31002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296257, 'error': None, 'target': 'ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:57 compute-0 nova_compute[260935]: 2025-10-11 08:49:57.999 2 DEBUG nova.network.neutron [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Successfully updated port: c27797c3-6ac7-45ae-9a2e-7fc42908feab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.018 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dcaab4b1-20ee-4b4b-acae-9ba8f6a744d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.080 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.080 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.081 2 DEBUG nova.network.neutron [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.087 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f47fc1-294d-4af6-96f0-cb6c1b5ccd6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.088 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e0c9798-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.088 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.089 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e0c9798-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:58 compute-0 NetworkManager[44960]: <info>  [1760172598.0909] manager: (tap8e0c9798-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Oct 11 08:49:58 compute-0 kernel: tap8e0c9798-30: entered promiscuous mode
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.093 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e0c9798-30, col_values=(('external_ids', {'iface-id': '2e3d9f01-796e-4f56-9d6b-cedcaba6f19e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:49:58 compute-0 ovn_controller[152945]: 2025-10-11T08:49:58Z|00149|binding|INFO|Releasing lport 2e3d9f01-796e-4f56-9d6b-cedcaba6f19e from this chassis (sb_readonly=0)
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.111 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e0c9798-3406-4335-baf7-3664e8c2cc2d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e0c9798-3406-4335-baf7-3664e8c2cc2d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.112 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d70b7ca5-758b-4d2c-8897-7a00f78a7917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.112 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-8e0c9798-3406-4335-baf7-3664e8c2cc2d
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/8e0c9798-3406-4335-baf7-3664e8c2cc2d.pid.haproxy
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 8e0c9798-3406-4335-baf7-3664e8c2cc2d
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.113 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d', 'env', 'PROCESS_TAG=haproxy-8e0c9798-3406-4335-baf7-3664e8c2cc2d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e0c9798-3406-4335-baf7-3664e8c2cc2d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.136 2 DEBUG nova.network.neutron [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Updated VIF entry in instance network info cache for port 9007d8a3-8797-49c6-9302-4c8f4d699a45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.136 2 DEBUG nova.network.neutron [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Updating instance_info_cache with network_info: [{"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:49:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Oct 11 08:49:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Oct 11 08:49:58 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.305 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.383 2 DEBUG nova.network.neutron [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.389 2 DEBUG oslo_concurrency.lockutils [req-4c28932d-f193-423c-9777-f0cb67246f73 req-b2a45284-8b3f-41eb-ac2f-3efe658ebff6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.494 2 DEBUG nova.compute.manager [req-c433fdf2-df56-4d97-a529-b4794da6325a req-610dc799-94e7-4669-93c6-b98d77d63311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received event network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.495 2 DEBUG oslo_concurrency.lockutils [req-c433fdf2-df56-4d97-a529-b4794da6325a req-610dc799-94e7-4669-93c6-b98d77d63311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.495 2 DEBUG oslo_concurrency.lockutils [req-c433fdf2-df56-4d97-a529-b4794da6325a req-610dc799-94e7-4669-93c6-b98d77d63311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.495 2 DEBUG oslo_concurrency.lockutils [req-c433fdf2-df56-4d97-a529-b4794da6325a req-610dc799-94e7-4669-93c6-b98d77d63311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.495 2 DEBUG nova.compute.manager [req-c433fdf2-df56-4d97-a529-b4794da6325a req-610dc799-94e7-4669-93c6-b98d77d63311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Processing event network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.564 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172598.5633805, 9f9aca1c-8e65-435a-bfae-1ff0d4386f58 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.564 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] VM Started (Lifecycle Event)
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.568 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.574 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:49:58 compute-0 podman[296291]: 2025-10-11 08:49:58.577662807 +0000 UTC m=+0.073323865 container create 034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.580 2 INFO nova.virt.libvirt.driver [-] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Instance spawned successfully.
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.581 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.591 2 INFO nova.compute.manager [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Rebuilding instance
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.606 2 DEBUG nova.network.neutron [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Successfully created port: 31025494-a361-4c39-aa29-5841978f12e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:49:58 compute-0 systemd[1]: Started libpod-conmon-034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705.scope.
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.638 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:58 compute-0 podman[296291]: 2025-10-11 08:49:58.550522929 +0000 UTC m=+0.046183977 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.646 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:49:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709c524c656a0c0bd950e9b79a2ea29641c4e55d424e4497bd7851268792db19/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.675 2 DEBUG nova.compute.manager [req-cc207ed8-b354-4964-a537-e6e33bf35c4f req-3ea8603a-6fdc-481e-b0d8-542471dec1b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-changed-c27797c3-6ac7-45ae-9a2e-7fc42908feab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.676 2 DEBUG nova.compute.manager [req-cc207ed8-b354-4964-a537-e6e33bf35c4f req-3ea8603a-6fdc-481e-b0d8-542471dec1b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Refreshing instance network info cache due to event network-changed-c27797c3-6ac7-45ae-9a2e-7fc42908feab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.676 2 DEBUG oslo_concurrency.lockutils [req-cc207ed8-b354-4964-a537-e6e33bf35c4f req-3ea8603a-6fdc-481e-b0d8-542471dec1b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:58 compute-0 podman[296291]: 2025-10-11 08:49:58.69727957 +0000 UTC m=+0.192940698 container init 034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.697 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.697 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.698 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.698 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.699 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.700 2 DEBUG nova.virt.libvirt.driver [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:49:58 compute-0 podman[296291]: 2025-10-11 08:49:58.708469416 +0000 UTC m=+0.204130524 container start 034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.718 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.719 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172598.56374, 9f9aca1c-8e65-435a-bfae-1ff0d4386f58 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.719 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] VM Paused (Lifecycle Event)
Oct 11 08:49:58 compute-0 neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d[296306]: [NOTICE]   (296310) : New worker (296312) forked
Oct 11 08:49:58 compute-0 neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d[296306]: [NOTICE]   (296310) : Loading success.
Oct 11 08:49:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:49:58.784 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.811 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.815 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172598.5718732, 9f9aca1c-8e65-435a-bfae-1ff0d4386f58 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.816 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] VM Resumed (Lifecycle Event)
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.839 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.862 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.868 2 INFO nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Took 7.38 seconds to spawn the instance on the hypervisor.
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.868 2 DEBUG nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.872 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:49:58 compute-0 nova_compute[260935]: 2025-10-11 08:49:58.915 2 DEBUG nova.compute.manager [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.000 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.036 2 DEBUG nova.network.neutron [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updating instance_info_cache with network_info: [{"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.117 2 INFO nova.compute.manager [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Took 8.84 seconds to build instance.
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.132 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'pci_requests' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.221 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.222 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Instance network_info: |[{"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.222 2 DEBUG oslo_concurrency.lockutils [req-cc207ed8-b354-4964-a537-e6e33bf35c4f req-3ea8603a-6fdc-481e-b0d8-542471dec1b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.222 2 DEBUG nova.network.neutron [req-cc207ed8-b354-4964-a537-e6e33bf35c4f req-3ea8603a-6fdc-481e-b0d8-542471dec1b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Refreshing network info cache for port c27797c3-6ac7-45ae-9a2e-7fc42908feab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.227 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Start _get_guest_xml network_info=[{"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:49:59 compute-0 ceph-mon[74313]: pgmap v1330: 321 pgs: 321 active+clean; 511 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.0 MiB/s wr, 392 op/s
Oct 11 08:49:59 compute-0 ceph-mon[74313]: osdmap e169: 3 total, 3 up, 3 in
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.232 2 WARNING nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.240 2 DEBUG nova.virt.libvirt.host [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.240 2 DEBUG nova.virt.libvirt.host [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.244 2 DEBUG nova.virt.libvirt.host [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.244 2 DEBUG nova.virt.libvirt.host [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.245 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.245 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.246 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.246 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.246 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.246 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.247 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.247 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.247 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.248 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.248 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.248 2 DEBUG nova.virt.hardware [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.251 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.300 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'pci_devices' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.304 2 DEBUG oslo_concurrency.lockutils [None req-b9b1aca3-aec2-4b7f-b3c9-875c04211fcf f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.349 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'resources' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.369 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'migration_context' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.435 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.440 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.595 2 DEBUG nova.network.neutron [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Successfully updated port: 31025494-a361-4c39-aa29-5841978f12e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.616 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "refresh_cache-cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.617 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquired lock "refresh_cache-cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.617 2 DEBUG nova.network.neutron [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:49:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:49:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/711992532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.764 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.805 2 DEBUG nova.storage.rbd_utils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image b35f4147-9e36-4dab-9ac8-2061c97797f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.816 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:49:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 511 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 8.0 MiB/s wr, 392 op/s
Oct 11 08:49:59 compute-0 nova_compute[260935]: 2025-10-11 08:49:59.860 2 DEBUG nova.network.neutron [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:50:00 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/711992532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:50:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554418418' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.387 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.391 2 DEBUG nova.virt.libvirt.vif [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1789236056',display_name='tempest-tempest.common.compute-instance-1789236056',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1789236056',id=29,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-pbhirm4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=b35f4147-9e36-4dab-9ac8-2061c97797f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.393 2 DEBUG nova.network.os_vif_util [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.395 2 DEBUG nova.network.os_vif_util [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:75:86,bridge_name='br-int',has_traffic_filtering=True,id=c27797c3-6ac7-45ae-9a2e-7fc42908feab,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc27797c3-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.399 2 DEBUG nova.objects.instance [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_devices' on Instance uuid b35f4147-9e36-4dab-9ac8-2061c97797f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.421 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:50:00 compute-0 nova_compute[260935]:   <uuid>b35f4147-9e36-4dab-9ac8-2061c97797f2</uuid>
Oct 11 08:50:00 compute-0 nova_compute[260935]:   <name>instance-0000001d</name>
Oct 11 08:50:00 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:50:00 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:50:00 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <nova:name>tempest-tempest.common.compute-instance-1789236056</nova:name>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:49:59</nova:creationTime>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:50:00 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:50:00 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:50:00 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:50:00 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:50:00 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:50:00 compute-0 nova_compute[260935]:         <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:50:00 compute-0 nova_compute[260935]:         <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:50:00 compute-0 nova_compute[260935]:         <nova:port uuid="c27797c3-6ac7-45ae-9a2e-7fc42908feab">
Oct 11 08:50:00 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:50:00 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:50:00 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <system>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <entry name="serial">b35f4147-9e36-4dab-9ac8-2061c97797f2</entry>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <entry name="uuid">b35f4147-9e36-4dab-9ac8-2061c97797f2</entry>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     </system>
Oct 11 08:50:00 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:50:00 compute-0 nova_compute[260935]:   <os>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:   </os>
Oct 11 08:50:00 compute-0 nova_compute[260935]:   <features>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:   </features>
Oct 11 08:50:00 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:50:00 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:50:00 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b35f4147-9e36-4dab-9ac8-2061c97797f2_disk">
Oct 11 08:50:00 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:50:00 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b35f4147-9e36-4dab-9ac8-2061c97797f2_disk.config">
Oct 11 08:50:00 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:50:00 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:d9:75:86"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <target dev="tapc27797c3-6a"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/console.log" append="off"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <video>
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     </video>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:50:00 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:50:00 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:50:00 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:50:00 compute-0 nova_compute[260935]: </domain>
Oct 11 08:50:00 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.423 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Preparing to wait for external event network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.424 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.424 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.424 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.425 2 DEBUG nova.virt.libvirt.vif [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1789236056',display_name='tempest-tempest.common.compute-instance-1789236056',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1789236056',id=29,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-pbhirm4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=b35f4147-9e36-4dab-9ac8-2061c97797f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.426 2 DEBUG nova.network.os_vif_util [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.427 2 DEBUG nova.network.os_vif_util [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:75:86,bridge_name='br-int',has_traffic_filtering=True,id=c27797c3-6ac7-45ae-9a2e-7fc42908feab,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc27797c3-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.427 2 DEBUG os_vif [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:75:86,bridge_name='br-int',has_traffic_filtering=True,id=c27797c3-6ac7-45ae-9a2e-7fc42908feab,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc27797c3-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.430 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.434 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc27797c3-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.434 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc27797c3-6a, col_values=(('external_ids', {'iface-id': 'c27797c3-6ac7-45ae-9a2e-7fc42908feab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:75:86', 'vm-uuid': 'b35f4147-9e36-4dab-9ac8-2061c97797f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:00 compute-0 NetworkManager[44960]: <info>  [1760172600.4781] manager: (tapc27797c3-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.488 2 INFO os_vif [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:75:86,bridge_name='br-int',has_traffic_filtering=True,id=c27797c3-6ac7-45ae-9a2e-7fc42908feab,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc27797c3-6a')
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.551 2 DEBUG nova.network.neutron [req-cc207ed8-b354-4964-a537-e6e33bf35c4f req-3ea8603a-6fdc-481e-b0d8-542471dec1b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updated VIF entry in instance network info cache for port c27797c3-6ac7-45ae-9a2e-7fc42908feab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.552 2 DEBUG nova.network.neutron [req-cc207ed8-b354-4964-a537-e6e33bf35c4f req-3ea8603a-6fdc-481e-b0d8-542471dec1b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updating instance_info_cache with network_info: [{"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.557 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.557 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.557 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:d9:75:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.557 2 INFO nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Using config drive
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.583 2 DEBUG nova.storage.rbd_utils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image b35f4147-9e36-4dab-9ac8-2061c97797f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.590 2 DEBUG oslo_concurrency.lockutils [req-cc207ed8-b354-4964-a537-e6e33bf35c4f req-3ea8603a-6fdc-481e-b0d8-542471dec1b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.592 2 DEBUG nova.compute.manager [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received event network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.592 2 DEBUG oslo_concurrency.lockutils [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.593 2 DEBUG oslo_concurrency.lockutils [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.593 2 DEBUG oslo_concurrency.lockutils [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.593 2 DEBUG nova.compute.manager [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] No waiting events found dispatching network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.593 2 WARNING nova.compute.manager [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received unexpected event network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 for instance with vm_state active and task_state None.
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.594 2 DEBUG nova.compute.manager [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Received event network-changed-31025494-a361-4c39-aa29-5841978f12e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.594 2 DEBUG nova.compute.manager [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Refreshing instance network info cache due to event network-changed-31025494-a361-4c39-aa29-5841978f12e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.594 2 DEBUG oslo_concurrency.lockutils [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:50:00 compute-0 podman[296385]: 2025-10-11 08:50:00.636630186 +0000 UTC m=+0.096150440 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.665 2 DEBUG nova.network.neutron [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Updating instance_info_cache with network_info: [{"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.685 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Releasing lock "refresh_cache-cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.686 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Instance network_info: |[{"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.686 2 DEBUG oslo_concurrency.lockutils [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.686 2 DEBUG nova.network.neutron [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Refreshing network info cache for port 31025494-a361-4c39-aa29-5841978f12e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.688 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Start _get_guest_xml network_info=[{"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.693 2 WARNING nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.697 2 DEBUG nova.virt.libvirt.host [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.698 2 DEBUG nova.virt.libvirt.host [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.702 2 DEBUG nova.virt.libvirt.host [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.702 2 DEBUG nova.virt.libvirt.host [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.703 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.703 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.703 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.703 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.704 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.704 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.704 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.704 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.705 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.705 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.705 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.705 2 DEBUG nova.virt.hardware [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.708 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:00 compute-0 nova_compute[260935]: 2025-10-11 08:50:00.736 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.189 2 INFO nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Creating config drive at /var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/disk.config
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.195 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6aqn8w0i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:01 compute-0 ceph-mon[74313]: pgmap v1332: 321 pgs: 321 active+clean; 511 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 8.0 MiB/s wr, 392 op/s
Oct 11 08:50:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3554418418' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:50:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1651090801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.282 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.308 2 DEBUG nova.storage.rbd_utils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.313 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.345 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6aqn8w0i" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.380 2 DEBUG nova.storage.rbd_utils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] rbd image b35f4147-9e36-4dab-9ac8-2061c97797f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.385 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/disk.config b35f4147-9e36-4dab-9ac8-2061c97797f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.549 2 DEBUG oslo_concurrency.processutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/disk.config b35f4147-9e36-4dab-9ac8-2061c97797f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.549 2 INFO nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Deleting local config drive /var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/disk.config because it was imported into RBD.
Oct 11 08:50:01 compute-0 kernel: tapc27797c3-6a: entered promiscuous mode
Oct 11 08:50:01 compute-0 NetworkManager[44960]: <info>  [1760172601.6016] manager: (tapc27797c3-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:01 compute-0 ovn_controller[152945]: 2025-10-11T08:50:01Z|00150|binding|INFO|Claiming lport c27797c3-6ac7-45ae-9a2e-7fc42908feab for this chassis.
Oct 11 08:50:01 compute-0 ovn_controller[152945]: 2025-10-11T08:50:01Z|00151|binding|INFO|c27797c3-6ac7-45ae-9a2e-7fc42908feab: Claiming fa:16:3e:d9:75:86 10.100.0.5
Oct 11 08:50:01 compute-0 systemd-udevd[296533]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.652 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:75:86 10.100.0.5'], port_security=['fa:16:3e:d9:75:86 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b35f4147-9e36-4dab-9ac8-2061c97797f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1442d15-c284-4756-a249-5a3bda09cf56', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c27797c3-6ac7-45ae-9a2e-7fc42908feab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.653 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c27797c3-6ac7-45ae-9a2e-7fc42908feab in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.655 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:50:01 compute-0 NetworkManager[44960]: <info>  [1760172601.6640] device (tapc27797c3-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:50:01 compute-0 NetworkManager[44960]: <info>  [1760172601.6650] device (tapc27797c3-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.676 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[53021c59-cce1-44ad-bdba-7cc70cd634cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:01 compute-0 ovn_controller[152945]: 2025-10-11T08:50:01Z|00152|binding|INFO|Setting lport c27797c3-6ac7-45ae-9a2e-7fc42908feab ovn-installed in OVS
Oct 11 08:50:01 compute-0 ovn_controller[152945]: 2025-10-11T08:50:01Z|00153|binding|INFO|Setting lport c27797c3-6ac7-45ae-9a2e-7fc42908feab up in Southbound
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:01 compute-0 systemd-machined[215705]: New machine qemu-31-instance-0000001d.
Oct 11 08:50:01 compute-0 systemd[1]: Started Virtual Machine qemu-31-instance-0000001d.
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.709 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.710 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.712 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3727c1a3-155f-490c-9450-faa4174880e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.717 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[43c7c59d-e1e3-4abc-b71f-2fa5e76341bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:01 compute-0 kernel: tapa3944a31-95 (unregistering): left promiscuous mode
Oct 11 08:50:01 compute-0 NetworkManager[44960]: <info>  [1760172601.7324] device (tapa3944a31-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:50:01 compute-0 ovn_controller[152945]: 2025-10-11T08:50:01Z|00154|binding|INFO|Releasing lport a3944a31-9560-49ae-b2a5-caaf2736993a from this chassis (sb_readonly=0)
Oct 11 08:50:01 compute-0 ovn_controller[152945]: 2025-10-11T08:50:01Z|00155|binding|INFO|Setting lport a3944a31-9560-49ae-b2a5-caaf2736993a down in Southbound
Oct 11 08:50:01 compute-0 ovn_controller[152945]: 2025-10-11T08:50:01Z|00156|binding|INFO|Removing iface tapa3944a31-95 ovn-installed in OVS
Oct 11 08:50:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:50:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3255312818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.754 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:d8:0b 10.100.0.11'], port_security=['fa:16:3e:bf:d8:0b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '77ff3a9d-3eb2-40ed-ad12-6367fd4e555f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a3944a31-9560-49ae-b2a5-caaf2736993a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.761 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8b43786d-0cc5-4761-bffd-6caf031e6032]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.786 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.788 2 DEBUG nova.virt.libvirt.vif [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1000984331',display_name='tempest-ImagesTestJSON-server-1000984331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1000984331',id=30,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-ga7cei6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:56Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=cb1503a2-bc9c-4faf-ab16-e7227f8c94f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.788 2 DEBUG nova.network.os_vif_util [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:01 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.789 2 DEBUG nova.network.os_vif_util [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:61,bridge_name='br-int',has_traffic_filtering=True,id=31025494-a361-4c39-aa29-5841978f12e9,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31025494-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:01 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Consumed 15.399s CPU time.
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.790 2 DEBUG nova.objects.instance [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:01 compute-0 systemd-machined[215705]: Machine qemu-24-instance-00000016 terminated.
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.795 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[efa36c84-3fe8-4020-a06e-2fae1129283e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443120, 'reachable_time': 39241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296556, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.804 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:50:01 compute-0 nova_compute[260935]:   <uuid>cb1503a2-bc9c-4faf-ab16-e7227f8c94f7</uuid>
Oct 11 08:50:01 compute-0 nova_compute[260935]:   <name>instance-0000001e</name>
Oct 11 08:50:01 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:50:01 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:50:01 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <nova:name>tempest-ImagesTestJSON-server-1000984331</nova:name>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:50:00</nova:creationTime>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:50:01 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:50:01 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:50:01 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:50:01 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:50:01 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:50:01 compute-0 nova_compute[260935]:         <nova:user uuid="1bab12893b9d49aabcb5ca19c9b951de">tempest-ImagesTestJSON-694493184-project-member</nova:user>
Oct 11 08:50:01 compute-0 nova_compute[260935]:         <nova:project uuid="f8c7604961214c6d9d49657535d799a5">tempest-ImagesTestJSON-694493184</nova:project>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:50:01 compute-0 nova_compute[260935]:         <nova:port uuid="31025494-a361-4c39-aa29-5841978f12e9">
Oct 11 08:50:01 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:50:01 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:50:01 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <system>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <entry name="serial">cb1503a2-bc9c-4faf-ab16-e7227f8c94f7</entry>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <entry name="uuid">cb1503a2-bc9c-4faf-ab16-e7227f8c94f7</entry>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     </system>
Oct 11 08:50:01 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:50:01 compute-0 nova_compute[260935]:   <os>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:   </os>
Oct 11 08:50:01 compute-0 nova_compute[260935]:   <features>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:   </features>
Oct 11 08:50:01 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:50:01 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:50:01 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk">
Oct 11 08:50:01 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:50:01 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk.config">
Oct 11 08:50:01 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:50:01 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:a8:8e:61"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <target dev="tap31025494-a3"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7/console.log" append="off"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <video>
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     </video>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:50:01 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:50:01 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:50:01 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:50:01 compute-0 nova_compute[260935]: </domain>
Oct 11 08:50:01 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.805 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Preparing to wait for external event network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.806 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.807 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.807 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.807 2 DEBUG nova.virt.libvirt.vif [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1000984331',display_name='tempest-ImagesTestJSON-server-1000984331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1000984331',id=30,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-ga7cei6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:56Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=cb1503a2-bc9c-4faf-ab16-e7227f8c94f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.808 2 DEBUG nova.network.os_vif_util [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.808 2 DEBUG nova.network.os_vif_util [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:61,bridge_name='br-int',has_traffic_filtering=True,id=31025494-a361-4c39-aa29-5841978f12e9,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31025494-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.808 2 DEBUG os_vif [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:61,bridge_name='br-int',has_traffic_filtering=True,id=31025494-a361-4c39-aa29-5841978f12e9,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31025494-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.815 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31025494-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.815 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31025494-a3, col_values=(('external_ids', {'iface-id': '31025494-a361-4c39-aa29-5841978f12e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:8e:61', 'vm-uuid': 'cb1503a2-bc9c-4faf-ab16-e7227f8c94f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:01 compute-0 NetworkManager[44960]: <info>  [1760172601.8189] manager: (tap31025494-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.821 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6abd965a-3b6f-4416-b7d5-885419e152f3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443136, 'tstamp': 443136}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296557, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443140, 'tstamp': 443140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296557, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.823 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.828 2 INFO os_vif [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:61,bridge_name='br-int',has_traffic_filtering=True,id=31025494-a361-4c39-aa29-5841978f12e9,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31025494-a3')
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.838 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.839 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.839 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.840 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.842 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a3944a31-9560-49ae-b2a5-caaf2736993a in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 unbound from our chassis
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.844 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5
Oct 11 08:50:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 511 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 5.0 MiB/s wr, 72 op/s
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.866 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ee4d782f-95ad-4df8-9786-6561649e0c6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.889 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.890 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.890 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No VIF found with MAC fa:16:3e:a8:8e:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.891 2 INFO nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Using config drive
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.925 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc0a1e9-ef47-4330-84da-065554250ae0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.928 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e96da15c-c579-406a-93c1-3ebfd9274f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.939 2 DEBUG nova.storage.rbd_utils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.949 2 DEBUG nova.network.neutron [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Updated VIF entry in instance network info cache for port 31025494-a361-4c39-aa29-5841978f12e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.949 2 DEBUG nova.network.neutron [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Updating instance_info_cache with network_info: [{"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.960 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2a0eaa3a-e758-47a1-9f25-35541f152b10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:01 compute-0 ovn_controller[152945]: 2025-10-11T08:50:01Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:52:ed:97 10.100.0.10
Oct 11 08:50:01 compute-0 nova_compute[260935]: 2025-10-11 08:50:01.976 2 DEBUG oslo_concurrency.lockutils [req-0ac4c347-70f6-4036-8179-93fdb7f0221b req-b623788f-8ca9-41f2-81a8-d92b4613860c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:01.980 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ada4d1b-8d84-4694-a5e2-2b246cac1aa9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296591, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:01 compute-0 ovn_controller[152945]: 2025-10-11T08:50:01Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:52:ed:97 10.100.0.10
Oct 11 08:50:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.000 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ff7b8bd0-3f24-424b-b51f-2589bb1ab69f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439130, 'tstamp': 439130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296600, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439135, 'tstamp': 439135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296600, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.002 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.013 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.013 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.013 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.014 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1651090801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3255312818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.332 2 INFO nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Creating config drive at /var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7/disk.config
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.341 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvnejti8p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.472 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvnejti8p" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.499 2 DEBUG nova.storage.rbd_utils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.505 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7/disk.config cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.566 2 INFO nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance shutdown successfully after 3 seconds.
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.573 2 INFO nova.virt.libvirt.driver [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance destroyed successfully.
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.580 2 INFO nova.virt.libvirt.driver [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance destroyed successfully.
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.581 2 DEBUG nova.virt.libvirt.vif [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1988839763',display_name='tempest-ServersAdminTestJSON-server-1988839763',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1988839763',id=22,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-nxz1v0k9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:49:57Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=77ff3a9d-3eb2-40ed-ad12-6367fd4e555f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.581 2 DEBUG nova.network.os_vif_util [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.582 2 DEBUG nova.network.os_vif_util [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.583 2 DEBUG os_vif [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3944a31-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.600 2 INFO os_vif [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95')
Oct 11 08:50:02 compute-0 ovn_controller[152945]: 2025-10-11T08:50:02Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:56:95:17 10.100.0.14
Oct 11 08:50:02 compute-0 ovn_controller[152945]: 2025-10-11T08:50:02Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:95:17 10.100.0.14
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.754 2 DEBUG oslo_concurrency.processutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7/disk.config cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.754 2 INFO nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Deleting local config drive /var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7/disk.config because it was imported into RBD.
Oct 11 08:50:02 compute-0 unix_chkpwd[296714]: password check failed for user (root)
Oct 11 08:50:02 compute-0 sshd-session[296531]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170  user=root
Oct 11 08:50:02 compute-0 kernel: tap31025494-a3: entered promiscuous mode
Oct 11 08:50:02 compute-0 NetworkManager[44960]: <info>  [1760172602.8367] manager: (tap31025494-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Oct 11 08:50:02 compute-0 NetworkManager[44960]: <info>  [1760172602.8526] device (tap31025494-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:50:02 compute-0 NetworkManager[44960]: <info>  [1760172602.8535] device (tap31025494-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.854 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received event network-changed-9007d8a3-8797-49c6-9302-4c8f4d699a45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.855 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Refreshing instance network info cache due to event network-changed-9007d8a3-8797-49c6-9302-4c8f4d699a45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.855 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.855 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.856 2 DEBUG nova.network.neutron [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Refreshing network info cache for port 9007d8a3-8797-49c6-9302-4c8f4d699a45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:50:02 compute-0 ovn_controller[152945]: 2025-10-11T08:50:02Z|00157|binding|INFO|Claiming lport 31025494-a361-4c39-aa29-5841978f12e9 for this chassis.
Oct 11 08:50:02 compute-0 ovn_controller[152945]: 2025-10-11T08:50:02Z|00158|binding|INFO|31025494-a361-4c39-aa29-5841978f12e9: Claiming fa:16:3e:a8:8e:61 10.100.0.14
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.881 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:8e:61 10.100.0.14'], port_security=['fa:16:3e:a8:8e:61 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'cb1503a2-bc9c-4faf-ab16-e7227f8c94f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=31025494-a361-4c39-aa29-5841978f12e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.882 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 31025494-a361-4c39-aa29-5841978f12e9 in datapath 9bac3530-993f-420e-8692-0b14a331d756 bound to our chassis
Oct 11 08:50:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.884 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bac3530-993f-420e-8692-0b14a331d756
Oct 11 08:50:02 compute-0 ovn_controller[152945]: 2025-10-11T08:50:02Z|00159|binding|INFO|Setting lport 31025494-a361-4c39-aa29-5841978f12e9 ovn-installed in OVS
Oct 11 08:50:02 compute-0 ovn_controller[152945]: 2025-10-11T08:50:02Z|00160|binding|INFO|Setting lport 31025494-a361-4c39-aa29-5841978f12e9 up in Southbound
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.897 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6c667153-6a86-4948-9710-262dfeec8703]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.900 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bac3530-91 in ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:50:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.902 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bac3530-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:50:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.902 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[83226346-0e51-42dd-83ad-c68720582434]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.903 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e59e98c3-4ab1-4b0a-a056-77aeda04c97c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:02 compute-0 systemd-machined[215705]: New machine qemu-32-instance-0000001e.
Oct 11 08:50:02 compute-0 systemd[1]: Started Virtual Machine qemu-32-instance-0000001e.
Oct 11 08:50:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.921 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ed67e9cd-a5db-475e-b493-c5e74ecb39f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.949 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5a989a81-78ec-4094-8d12-dce6288dfc3e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.985 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172602.9850109, b35f4147-9e36-4dab-9ac8-2061c97797f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:02 compute-0 nova_compute[260935]: 2025-10-11 08:50:02.986 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] VM Started (Lifecycle Event)
Oct 11 08:50:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:02.996 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bf45fc8b-bca4-46a3-a69d-e61afd2e2dfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.003 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:03 compute-0 NetworkManager[44960]: <info>  [1760172603.0039] manager: (tap9bac3530-90): new Veth device (/org/freedesktop/NetworkManager/Devices/92)
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.003 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[685e8927-37b5-4491-b74c-f19385884987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.009 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172602.9851007, b35f4147-9e36-4dab-9ac8-2061c97797f2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.009 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] VM Paused (Lifecycle Event)
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.037 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.040 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[75b06a2e-c884-4e97-ba2d-cae52500257e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.043 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e1f71d-e278-46de-b087-f8df121ef21c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.043 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.063 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:50:03 compute-0 NetworkManager[44960]: <info>  [1760172603.0698] device (tap9bac3530-90): carrier: link connected
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.077 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[da5418b9-e0fc-46b4-a3b7-651b3fbc66b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.095 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fdeda3ed-c3b6-4918-b144-b250ab933fc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444777, 'reachable_time': 33335, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296755, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.115 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2270824e-3918-470b-8564-92168a0550e9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:351f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444777, 'tstamp': 444777}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296756, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.139 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4e36410b-a6de-4b79-bfab-c0315fe3aace]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444777, 'reachable_time': 33335, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296757, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.160 2 INFO nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Deleting instance files /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_del
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.161 2 INFO nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Deletion of /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_del complete
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.186 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[081b531f-e278-4204-92f9-3152d26be302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:50:03 compute-0 ceph-mon[74313]: pgmap v1333: 321 pgs: 321 active+clean; 511 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 5.0 MiB/s wr, 72 op/s
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.257 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[af16d1a5-5c73-48d6-b499-feca51623ef7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.259 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.260 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.260 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bac3530-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:03 compute-0 NetworkManager[44960]: <info>  [1760172603.2635] manager: (tap9bac3530-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Oct 11 08:50:03 compute-0 kernel: tap9bac3530-90: entered promiscuous mode
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.265 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bac3530-90, col_values=(('external_ids', {'iface-id': 'e5becf0d-48c0-404b-9cba-07077454d085'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:03 compute-0 ovn_controller[152945]: 2025-10-11T08:50:03Z|00161|binding|INFO|Releasing lport e5becf0d-48c0-404b-9cba-07077454d085 from this chassis (sb_readonly=0)
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.283 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.285 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[68404314-9802-47e2-8fcb-e6111205a59a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.286 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-9bac3530-993f-420e-8692-0b14a331d756
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 9bac3530-993f-420e-8692-0b14a331d756
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:50:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:03.289 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'env', 'PROCESS_TAG=haproxy-9bac3530-993f-420e-8692-0b14a331d756', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bac3530-993f-420e-8692-0b14a331d756.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.480 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.480 2 INFO nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Creating image(s)
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.522 2 DEBUG nova.storage.rbd_utils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.561 2 DEBUG nova.storage.rbd_utils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.605 2 DEBUG nova.storage.rbd_utils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.610 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.693 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.695 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.697 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.697 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:03 compute-0 podman[296883]: 2025-10-11 08:50:03.698158306 +0000 UTC m=+0.079007095 container create 8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.730 2 DEBUG nova.storage.rbd_utils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.735 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:03 compute-0 podman[296883]: 2025-10-11 08:50:03.656984882 +0000 UTC m=+0.037833691 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:50:03 compute-0 systemd[1]: Started libpod-conmon-8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f.scope.
Oct 11 08:50:03 compute-0 nova_compute[260935]: 2025-10-11 08:50:03.768 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:50:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c413eaa3c0da3cf86ff785e342f7b62278895f169bd630f3f2a56ec7d8ae13/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:50:03 compute-0 podman[296883]: 2025-10-11 08:50:03.831895128 +0000 UTC m=+0.212743927 container init 8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 08:50:03 compute-0 podman[296883]: 2025-10-11 08:50:03.839230436 +0000 UTC m=+0.220079205 container start 8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:50:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 571 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 9.3 MiB/s wr, 315 op/s
Oct 11 08:50:03 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[296922]: [NOTICE]   (296941) : New worker (296943) forked
Oct 11 08:50:03 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[296922]: [NOTICE]   (296941) : Loading success.
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.017 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.045 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172604.0337703, cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.046 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] VM Started (Lifecycle Event)
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.076 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.082 2 DEBUG nova.storage.rbd_utils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] resizing rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.124 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172604.0339808, cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.125 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] VM Paused (Lifecycle Event)
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.147 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.150 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.184 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.190 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.191 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Ensure instance console log exists: /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.191 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.191 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.191 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.193 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Start _get_guest_xml network_info=[{"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.197 2 WARNING nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.202 2 DEBUG nova.virt.libvirt.host [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.202 2 DEBUG nova.virt.libvirt.host [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.205 2 DEBUG nova.virt.libvirt.host [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.206 2 DEBUG nova.virt.libvirt.host [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.206 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.206 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.207 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.207 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.207 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.207 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.207 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.208 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.208 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.208 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.209 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.209 2 DEBUG nova.virt.hardware [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.209 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.225 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004822146964739207 of space, bias 1.0, pg target 1.4466440894217623 quantized to 32 (current 32)
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.1991676866616201 quantized to 32 (current 32)
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:50:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 08:50:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:50:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1974380941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.694 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:04 compute-0 sshd-session[296531]: Failed password for root from 152.32.213.170 port 38066 ssh2
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.734 2 DEBUG nova.storage.rbd_utils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:04 compute-0 nova_compute[260935]: 2025-10-11 08:50:04.740 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:05 compute-0 sshd-session[296531]: Received disconnect from 152.32.213.170 port 38066:11: Bye Bye [preauth]
Oct 11 08:50:05 compute-0 sshd-session[296531]: Disconnected from authenticating user root 152.32.213.170 port 38066 [preauth]
Oct 11 08:50:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:50:05 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1401924063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.210 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.213 2 DEBUG nova.virt.libvirt.vif [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1988839763',display_name='tempest-ServersAdminTestJSON-server-1988839763',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1988839763',id=22,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-nxz1v0k9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:03Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=77ff3a9d-3eb2-40ed-ad12-6367fd4e555f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.213 2 DEBUG nova.network.os_vif_util [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.216 2 DEBUG nova.network.os_vif_util [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.219 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:50:05 compute-0 nova_compute[260935]:   <uuid>77ff3a9d-3eb2-40ed-ad12-6367fd4e555f</uuid>
Oct 11 08:50:05 compute-0 nova_compute[260935]:   <name>instance-00000016</name>
Oct 11 08:50:05 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:50:05 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:50:05 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersAdminTestJSON-server-1988839763</nova:name>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:50:04</nova:creationTime>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:50:05 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:50:05 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:50:05 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:50:05 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:50:05 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:50:05 compute-0 nova_compute[260935]:         <nova:user uuid="a51c2680b31e40b1908642ef8795c6f0">tempest-ServersAdminTestJSON-1756812845-project-member</nova:user>
Oct 11 08:50:05 compute-0 nova_compute[260935]:         <nova:project uuid="39d3043a7835403392c659fbb2fe0b22">tempest-ServersAdminTestJSON-1756812845</nova:project>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:50:05 compute-0 nova_compute[260935]:         <nova:port uuid="a3944a31-9560-49ae-b2a5-caaf2736993a">
Oct 11 08:50:05 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:50:05 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:50:05 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <system>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <entry name="serial">77ff3a9d-3eb2-40ed-ad12-6367fd4e555f</entry>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <entry name="uuid">77ff3a9d-3eb2-40ed-ad12-6367fd4e555f</entry>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     </system>
Oct 11 08:50:05 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:50:05 compute-0 nova_compute[260935]:   <os>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:   </os>
Oct 11 08:50:05 compute-0 nova_compute[260935]:   <features>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:   </features>
Oct 11 08:50:05 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:50:05 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:50:05 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk">
Oct 11 08:50:05 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:50:05 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config">
Oct 11 08:50:05 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:50:05 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:bf:d8:0b"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <target dev="tapa3944a31-95"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/console.log" append="off"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <video>
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     </video>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:50:05 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:50:05 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:50:05 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:50:05 compute-0 nova_compute[260935]: </domain>
Oct 11 08:50:05 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.220 2 DEBUG nova.compute.manager [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Preparing to wait for external event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.221 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.221 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.222 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.222 2 DEBUG nova.virt.libvirt.vif [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1988839763',display_name='tempest-ServersAdminTestJSON-server-1988839763',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1988839763',id=22,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-nxz1v0k9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:03Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=77ff3a9d-3eb2-40ed-ad12-6367fd4e555f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.223 2 DEBUG nova.network.os_vif_util [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.223 2 DEBUG nova.network.os_vif_util [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.224 2 DEBUG os_vif [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.225 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.226 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3944a31-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3944a31-95, col_values=(('external_ids', {'iface-id': 'a3944a31-9560-49ae-b2a5-caaf2736993a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:d8:0b', 'vm-uuid': '77ff3a9d-3eb2-40ed-ad12-6367fd4e555f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:05 compute-0 NetworkManager[44960]: <info>  [1760172605.2619] manager: (tapa3944a31-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:05 compute-0 ceph-mon[74313]: pgmap v1334: 321 pgs: 321 active+clean; 571 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 9.3 MiB/s wr, 315 op/s
Oct 11 08:50:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1974380941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1401924063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.273 2 INFO os_vif [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95')
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.278348) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172605278397, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2228, "num_deletes": 262, "total_data_size": 3285683, "memory_usage": 3349072, "flush_reason": "Manual Compaction"}
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172605298940, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3224280, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25666, "largest_seqno": 27893, "table_properties": {"data_size": 3214155, "index_size": 6427, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21815, "raw_average_key_size": 21, "raw_value_size": 3193613, "raw_average_value_size": 3076, "num_data_blocks": 281, "num_entries": 1038, "num_filter_entries": 1038, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760172418, "oldest_key_time": 1760172418, "file_creation_time": 1760172605, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 20762 microseconds, and 14044 cpu microseconds.
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.299109) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3224280 bytes OK
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.299189) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.300801) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.300852) EVENT_LOG_v1 {"time_micros": 1760172605300843, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.300880) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3276149, prev total WAL file size 3276149, number of live WAL files 2.
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.303536) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3148KB)], [59(7004KB)]
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172605303612, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10396898, "oldest_snapshot_seqno": -1}
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5208 keys, 8651138 bytes, temperature: kUnknown
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172605364747, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8651138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8614862, "index_size": 22150, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13061, "raw_key_size": 129472, "raw_average_key_size": 24, "raw_value_size": 8519566, "raw_average_value_size": 1635, "num_data_blocks": 911, "num_entries": 5208, "num_filter_entries": 5208, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760172605, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.364966) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8651138 bytes
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.366775) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.9 rd, 141.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 6.8 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5740, records dropped: 532 output_compression: NoCompression
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.366792) EVENT_LOG_v1 {"time_micros": 1760172605366784, "job": 32, "event": "compaction_finished", "compaction_time_micros": 61206, "compaction_time_cpu_micros": 34038, "output_level": 6, "num_output_files": 1, "total_output_size": 8651138, "num_input_records": 5740, "num_output_records": 5208, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172605367517, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172605369519, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.303135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.369637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.369648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.369650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.369652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:50:05 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:50:05.369654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.382 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.383 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.383 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No VIF found with MAC fa:16:3e:bf:d8:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.383 2 INFO nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Using config drive
Oct 11 08:50:05 compute-0 podman[297091]: 2025-10-11 08:50:05.39666496 +0000 UTC m=+0.076067393 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.426 2 DEBUG nova.storage.rbd_utils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:05 compute-0 podman[297092]: 2025-10-11 08:50:05.43273669 +0000 UTC m=+0.112586295 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.453 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.489 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'keypairs' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.552 2 DEBUG nova.compute.manager [req-e4912fb3-175a-4513-ae51-2edf2196a6a7 req-9841b910-a205-4e14-b9b1-73d967a3c15e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Received event network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.552 2 DEBUG oslo_concurrency.lockutils [req-e4912fb3-175a-4513-ae51-2edf2196a6a7 req-9841b910-a205-4e14-b9b1-73d967a3c15e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.553 2 DEBUG oslo_concurrency.lockutils [req-e4912fb3-175a-4513-ae51-2edf2196a6a7 req-9841b910-a205-4e14-b9b1-73d967a3c15e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.553 2 DEBUG oslo_concurrency.lockutils [req-e4912fb3-175a-4513-ae51-2edf2196a6a7 req-9841b910-a205-4e14-b9b1-73d967a3c15e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.553 2 DEBUG nova.compute.manager [req-e4912fb3-175a-4513-ae51-2edf2196a6a7 req-9841b910-a205-4e14-b9b1-73d967a3c15e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Processing event network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.554 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.560 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172605.560264, cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.560 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] VM Resumed (Lifecycle Event)
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.563 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.566 2 INFO nova.virt.libvirt.driver [-] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Instance spawned successfully.
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.566 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.580 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.588 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.591 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.591 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.592 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.592 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.593 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.593 2 DEBUG nova.virt.libvirt.driver [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.619 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.652 2 INFO nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Took 8.90 seconds to spawn the instance on the hypervisor.
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.653 2 DEBUG nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.701 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.732 2 INFO nova.compute.manager [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Took 10.67 seconds to build instance.
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.739 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.740 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.740 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.740 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.741 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.747 2 DEBUG oslo_concurrency.lockutils [None req-4002ac9b-cff2-4b50-a89e-ea7290326cae 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.756 2 INFO nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Creating config drive at /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.762 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmdweqm5t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 571 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 9.3 MiB/s wr, 315 op/s
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.914 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmdweqm5t" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.942 2 DEBUG nova.storage.rbd_utils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:05 compute-0 nova_compute[260935]: 2025-10-11 08:50:05.945 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:06 compute-0 nova_compute[260935]: 2025-10-11 08:50:06.111 2 DEBUG oslo_concurrency.processutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:06 compute-0 nova_compute[260935]: 2025-10-11 08:50:06.112 2 INFO nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Deleting local config drive /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config because it was imported into RBD.
Oct 11 08:50:06 compute-0 NetworkManager[44960]: <info>  [1760172606.1956] manager: (tapa3944a31-95): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Oct 11 08:50:06 compute-0 kernel: tapa3944a31-95: entered promiscuous mode
Oct 11 08:50:06 compute-0 ovn_controller[152945]: 2025-10-11T08:50:06Z|00162|binding|INFO|Claiming lport a3944a31-9560-49ae-b2a5-caaf2736993a for this chassis.
Oct 11 08:50:06 compute-0 ovn_controller[152945]: 2025-10-11T08:50:06Z|00163|binding|INFO|a3944a31-9560-49ae-b2a5-caaf2736993a: Claiming fa:16:3e:bf:d8:0b 10.100.0.11
Oct 11 08:50:06 compute-0 nova_compute[260935]: 2025-10-11 08:50:06.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.213 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:d8:0b 10.100.0.11'], port_security=['fa:16:3e:bf:d8:0b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '77ff3a9d-3eb2-40ed-ad12-6367fd4e555f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a3944a31-9560-49ae-b2a5-caaf2736993a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.215 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a3944a31-9560-49ae-b2a5-caaf2736993a in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 bound to our chassis
Oct 11 08:50:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.221 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5
Oct 11 08:50:06 compute-0 ovn_controller[152945]: 2025-10-11T08:50:06Z|00164|binding|INFO|Setting lport a3944a31-9560-49ae-b2a5-caaf2736993a ovn-installed in OVS
Oct 11 08:50:06 compute-0 ovn_controller[152945]: 2025-10-11T08:50:06Z|00165|binding|INFO|Setting lport a3944a31-9560-49ae-b2a5-caaf2736993a up in Southbound
Oct 11 08:50:06 compute-0 nova_compute[260935]: 2025-10-11 08:50:06.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:06 compute-0 nova_compute[260935]: 2025-10-11 08:50:06.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.261 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aeaa9762-34c5-48f4-856e-3c9e84c0600b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:06 compute-0 systemd-machined[215705]: New machine qemu-33-instance-00000016.
Oct 11 08:50:06 compute-0 systemd[1]: Started Virtual Machine qemu-33-instance-00000016.
Oct 11 08:50:06 compute-0 systemd-udevd[297211]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:50:06 compute-0 NetworkManager[44960]: <info>  [1760172606.3037] device (tapa3944a31-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:50:06 compute-0 NetworkManager[44960]: <info>  [1760172606.3056] device (tapa3944a31-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:50:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.305 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c305d89c-34e7-455e-94d5-4aec2ebf8360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.309 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b42813e0-c975-45e3-a3e9-90e41f96facd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.350 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ac2232-ac4d-4025-bd17-bceaa8da3383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.387 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ea16b7e2-00a7-478e-894f-335ac3e5cf81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 1084, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 1084, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297221, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.426 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[afd424b8-7aa0-4e5c-95da-49ad5c5ac8e8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439130, 'tstamp': 439130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297223, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439135, 'tstamp': 439135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297223, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.429 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:06 compute-0 nova_compute[260935]: 2025-10-11 08:50:06.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:06 compute-0 nova_compute[260935]: 2025-10-11 08:50:06.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.473 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.473 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.474 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.475 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:06 compute-0 nova_compute[260935]: 2025-10-11 08:50:06.636 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172591.4941485, 8e4f771a-b87a-40f9-a12e-b5b4583b96f7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:06 compute-0 nova_compute[260935]: 2025-10-11 08:50:06.637 2 INFO nova.compute.manager [-] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] VM Stopped (Lifecycle Event)
Oct 11 08:50:06 compute-0 nova_compute[260935]: 2025-10-11 08:50:06.661 2 DEBUG nova.compute.manager [None req-67fea12c-212a-497b-a538-5724d425b749 - - - - - -] [instance: 8e4f771a-b87a-40f9-a12e-b5b4583b96f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:06.786 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:07 compute-0 nova_compute[260935]: 2025-10-11 08:50:07.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:07 compute-0 ceph-mon[74313]: pgmap v1335: 321 pgs: 321 active+clean; 571 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 9.3 MiB/s wr, 315 op/s
Oct 11 08:50:07 compute-0 nova_compute[260935]: 2025-10-11 08:50:07.302 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 08:50:07 compute-0 nova_compute[260935]: 2025-10-11 08:50:07.302 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172607.301627, 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:07 compute-0 nova_compute[260935]: 2025-10-11 08:50:07.302 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] VM Started (Lifecycle Event)
Oct 11 08:50:07 compute-0 nova_compute[260935]: 2025-10-11 08:50:07.325 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:07 compute-0 nova_compute[260935]: 2025-10-11 08:50:07.328 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172607.303803, 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:07 compute-0 nova_compute[260935]: 2025-10-11 08:50:07.328 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] VM Paused (Lifecycle Event)
Oct 11 08:50:07 compute-0 nova_compute[260935]: 2025-10-11 08:50:07.343 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:07 compute-0 nova_compute[260935]: 2025-10-11 08:50:07.346 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:50:07 compute-0 nova_compute[260935]: 2025-10-11 08:50:07.377 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 08:50:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 08:50:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6182 writes, 27K keys, 6182 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6182 writes, 6182 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1563 writes, 7241 keys, 1563 commit groups, 1.0 writes per commit group, ingest: 9.55 MB, 0.02 MB/s
                                           Interval WAL: 1563 writes, 1563 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     83.9      0.39              0.13        16    0.025       0      0       0.0       0.0
                                             L6      1/0    8.25 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    163.5    132.8      0.81              0.48        15    0.054     69K   8349       0.0       0.0
                                            Sum      1/0    8.25 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    109.9    116.8      1.20              0.61        31    0.039     69K   8349       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.4    127.4    131.7      0.37              0.22        10    0.037     26K   3090       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    163.5    132.8      0.81              0.48        15    0.054     69K   8349       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     84.7      0.39              0.13        15    0.026       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.032, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.05 MB/s read, 1.2 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 304.00 MB usage: 15.22 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000131 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(985,14.66 MB,4.82271%) FilterBlock(32,200.73 KB,0.0644834%) IndexBlock(32,368.94 KB,0.118517%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 08:50:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 544 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 7.3 MiB/s wr, 426 op/s
Oct 11 08:50:08 compute-0 nova_compute[260935]: 2025-10-11 08:50:08.056 2 DEBUG nova.compute.manager [req-8be733e5-675c-48a5-8e01-8e9392e4198b req-a1bf60b8-ddc2-4488-a952-b6eaeb0c26df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Received event network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:08 compute-0 nova_compute[260935]: 2025-10-11 08:50:08.057 2 DEBUG oslo_concurrency.lockutils [req-8be733e5-675c-48a5-8e01-8e9392e4198b req-a1bf60b8-ddc2-4488-a952-b6eaeb0c26df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:08 compute-0 nova_compute[260935]: 2025-10-11 08:50:08.057 2 DEBUG oslo_concurrency.lockutils [req-8be733e5-675c-48a5-8e01-8e9392e4198b req-a1bf60b8-ddc2-4488-a952-b6eaeb0c26df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:08 compute-0 nova_compute[260935]: 2025-10-11 08:50:08.057 2 DEBUG oslo_concurrency.lockutils [req-8be733e5-675c-48a5-8e01-8e9392e4198b req-a1bf60b8-ddc2-4488-a952-b6eaeb0c26df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:08 compute-0 nova_compute[260935]: 2025-10-11 08:50:08.058 2 DEBUG nova.compute.manager [req-8be733e5-675c-48a5-8e01-8e9392e4198b req-a1bf60b8-ddc2-4488-a952-b6eaeb0c26df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] No waiting events found dispatching network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:08 compute-0 nova_compute[260935]: 2025-10-11 08:50:08.058 2 WARNING nova.compute.manager [req-8be733e5-675c-48a5-8e01-8e9392e4198b req-a1bf60b8-ddc2-4488-a952-b6eaeb0c26df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Received unexpected event network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 for instance with vm_state active and task_state None.
Oct 11 08:50:08 compute-0 nova_compute[260935]: 2025-10-11 08:50:08.168 2 DEBUG oslo_concurrency.lockutils [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:08 compute-0 nova_compute[260935]: 2025-10-11 08:50:08.169 2 DEBUG oslo_concurrency.lockutils [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:08 compute-0 nova_compute[260935]: 2025-10-11 08:50:08.170 2 DEBUG nova.compute.manager [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:08 compute-0 nova_compute[260935]: 2025-10-11 08:50:08.176 2 DEBUG nova.compute.manager [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 11 08:50:08 compute-0 nova_compute[260935]: 2025-10-11 08:50:08.178 2 DEBUG nova.objects.instance [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'flavor' on Instance uuid cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:08 compute-0 nova_compute[260935]: 2025-10-11 08:50:08.210 2 DEBUG nova.virt.libvirt.driver [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 08:50:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:50:09 compute-0 ceph-mon[74313]: pgmap v1336: 321 pgs: 321 active+clean; 544 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 7.3 MiB/s wr, 426 op/s
Oct 11 08:50:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 544 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.3 MiB/s wr, 367 op/s
Oct 11 08:50:09 compute-0 nova_compute[260935]: 2025-10-11 08:50:09.929 2 DEBUG nova.network.neutron [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Updated VIF entry in instance network info cache for port 9007d8a3-8797-49c6-9302-4c8f4d699a45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:50:09 compute-0 nova_compute[260935]: 2025-10-11 08:50:09.931 2 DEBUG nova.network.neutron [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Updating instance_info_cache with network_info: [{"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.092 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-9f9aca1c-8e65-435a-bfae-1ff0d4386f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.094 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.094 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.094 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.095 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.095 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Processing event network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.095 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.096 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.096 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.096 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.096 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] No waiting events found dispatching network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.097 2 WARNING nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received unexpected event network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab for instance with vm_state building and task_state spawning.
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.097 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-unplugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.097 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.097 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.098 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.098 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] No event matching network-vif-unplugged-a3944a31-9560-49ae-b2a5-caaf2736993a in dict_keys([('network-vif-plugged', 'a3944a31-9560-49ae-b2a5-caaf2736993a')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.098 2 WARNING nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received unexpected event network-vif-unplugged-a3944a31-9560-49ae-b2a5-caaf2736993a for instance with vm_state error and task_state rebuilding.
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.098 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.098 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.099 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.099 2 DEBUG oslo_concurrency.lockutils [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.099 2 DEBUG nova.compute.manager [req-20907ecd-660a-49cf-9694-71f5ef2df0c1 req-2228d9e3-f92c-4155-bc58-0eb804c3a884 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Processing event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.100 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.100 2 DEBUG nova.compute.manager [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.104 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172610.1041093, 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.105 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] VM Resumed (Lifecycle Event)
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.108 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.108 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.113 2 INFO nova.virt.libvirt.driver [-] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Instance spawned successfully.
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.113 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.115 2 INFO nova.virt.libvirt.driver [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance spawned successfully.
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.115 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.181 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.185 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.293 2 DEBUG nova.compute.manager [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.294 2 DEBUG oslo_concurrency.lockutils [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.294 2 DEBUG oslo_concurrency.lockutils [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.294 2 DEBUG oslo_concurrency.lockutils [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.294 2 DEBUG nova.compute.manager [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] No waiting events found dispatching network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.295 2 WARNING nova.compute.manager [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received unexpected event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a for instance with vm_state error and task_state rebuild_spawning.
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.295 2 DEBUG nova.compute.manager [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.295 2 DEBUG oslo_concurrency.lockutils [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.295 2 DEBUG oslo_concurrency.lockutils [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.295 2 DEBUG oslo_concurrency.lockutils [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.296 2 DEBUG nova.compute.manager [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] No waiting events found dispatching network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.296 2 WARNING nova.compute.manager [req-5eed3579-e100-4c33-841a-822872a467ad req-f83d1dae-8c2f-4e4c-b31d-5005fbb7277f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received unexpected event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a for instance with vm_state error and task_state rebuild_spawning.
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.311 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.311 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.312 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.312 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.312 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.313 2 DEBUG nova.virt.libvirt.driver [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.321 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.322 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.323 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.324 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.325 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.326 2 DEBUG nova.virt.libvirt.driver [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.356 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.358 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172610.1051445, b35f4147-9e36-4dab-9ac8-2061c97797f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.359 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] VM Resumed (Lifecycle Event)
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.435 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.439 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.483 2 DEBUG nova.compute.manager [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.502 2 INFO nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Took 15.06 seconds to spawn the instance on the hypervisor.
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.502 2 DEBUG nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.545 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.585 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.585 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.586 2 DEBUG nova.objects.instance [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.622 2 INFO nova.compute.manager [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Took 16.22 seconds to build instance.
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.775 2 DEBUG oslo_concurrency.lockutils [None req-23edfb2a-135c-4bb9-b6f1-2bd10a1c71cf a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.822 2 DEBUG oslo_concurrency.lockutils [None req-546ee1ad-353e-439d-b095-08a7736b8d68 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:10 compute-0 nova_compute[260935]: 2025-10-11 08:50:10.953 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Updating instance_info_cache with network_info: [{"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.018 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.018 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.019 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.019 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.114 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.114 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.115 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.115 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.116 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:11 compute-0 ceph-mon[74313]: pgmap v1337: 321 pgs: 321 active+clean; 544 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.3 MiB/s wr, 367 op/s
Oct 11 08:50:11 compute-0 ovn_controller[152945]: 2025-10-11T08:50:11Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:34:0d:85 10.100.0.3
Oct 11 08:50:11 compute-0 ovn_controller[152945]: 2025-10-11T08:50:11Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:34:0d:85 10.100.0.3
Oct 11 08:50:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:50:11 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1638522075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.624 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.745 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.746 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.749 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.749 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.751 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.752 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.755 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.755 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.758 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.758 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.760 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.760 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.764 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.765 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.768 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:50:11 compute-0 nova_compute[260935]: 2025-10-11 08:50:11.768 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:50:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 544 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.1 MiB/s wr, 355 op/s
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.021 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.022 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3094MB free_disk=59.72287368774414GB free_vcpus=0 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.022 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.022 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.149 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.149 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 6224c79a-8a36-490c-863a-67251512732f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.150 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b3e20035-c079-4ad0-a085-2086be520d1d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.150 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 14711e39-46ca-4856-9c19-fa51b869064d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.150 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 90e56ca7-b26f-4f83-908d-75204ecd2533 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.150 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 9f9aca1c-8e65-435a-bfae-1ff0d4386f58 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.150 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b35f4147-9e36-4dab-9ac8-2061c97797f2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.150 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.151 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 8 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.151 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=8 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:50:12 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1638522075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.323 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:50:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3431085559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.798 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.805 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.829 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.869 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.870 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:12 compute-0 nova_compute[260935]: 2025-10-11 08:50:12.884 2 INFO nova.compute.manager [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Rebuilding instance
Oct 11 08:50:13 compute-0 nova_compute[260935]: 2025-10-11 08:50:13.152 2 DEBUG nova.objects.instance [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:13 compute-0 nova_compute[260935]: 2025-10-11 08:50:13.176 2 DEBUG nova.compute.manager [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:50:13 compute-0 nova_compute[260935]: 2025-10-11 08:50:13.227 2 DEBUG nova.objects.instance [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'pci_requests' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:13 compute-0 nova_compute[260935]: 2025-10-11 08:50:13.242 2 DEBUG nova.objects.instance [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'pci_devices' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:13 compute-0 nova_compute[260935]: 2025-10-11 08:50:13.255 2 DEBUG nova.objects.instance [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'resources' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:13 compute-0 nova_compute[260935]: 2025-10-11 08:50:13.269 2 DEBUG nova.objects.instance [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'migration_context' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:13 compute-0 nova_compute[260935]: 2025-10-11 08:50:13.280 2 DEBUG nova.objects.instance [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 08:50:13 compute-0 nova_compute[260935]: 2025-10-11 08:50:13.284 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 08:50:13 compute-0 ceph-mon[74313]: pgmap v1338: 321 pgs: 321 active+clean; 544 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.1 MiB/s wr, 355 op/s
Oct 11 08:50:13 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3431085559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 577 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 8.2 MiB/s wr, 549 op/s
Oct 11 08:50:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:15.183 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:15.183 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:15.184 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:15 compute-0 nova_compute[260935]: 2025-10-11 08:50:15.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:15 compute-0 ceph-mon[74313]: pgmap v1339: 321 pgs: 321 active+clean; 577 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 8.2 MiB/s wr, 549 op/s
Oct 11 08:50:15 compute-0 nova_compute[260935]: 2025-10-11 08:50:15.685 2 DEBUG nova.compute.manager [req-a64722a4-3078-4a03-bb6c-205024a3ff3d req-37aa00e8-c36b-4d32-826b-d22025779a55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:15 compute-0 nova_compute[260935]: 2025-10-11 08:50:15.686 2 DEBUG nova.compute.manager [req-a64722a4-3078-4a03-bb6c-205024a3ff3d req-37aa00e8-c36b-4d32-826b-d22025779a55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing instance network info cache due to event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:50:15 compute-0 nova_compute[260935]: 2025-10-11 08:50:15.686 2 DEBUG oslo_concurrency.lockutils [req-a64722a4-3078-4a03-bb6c-205024a3ff3d req-37aa00e8-c36b-4d32-826b-d22025779a55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:50:15 compute-0 nova_compute[260935]: 2025-10-11 08:50:15.687 2 DEBUG oslo_concurrency.lockutils [req-a64722a4-3078-4a03-bb6c-205024a3ff3d req-37aa00e8-c36b-4d32-826b-d22025779a55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:50:15 compute-0 nova_compute[260935]: 2025-10-11 08:50:15.687 2 DEBUG nova.network.neutron [req-a64722a4-3078-4a03-bb6c-205024a3ff3d req-37aa00e8-c36b-4d32-826b-d22025779a55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing network info cache for port ac842ebf-4fca-4930-a4d1-3e8a6760d441 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:50:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 577 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 4.0 MiB/s wr, 339 op/s
Oct 11 08:50:16 compute-0 nova_compute[260935]: 2025-10-11 08:50:16.770 2 DEBUG nova.network.neutron [req-a64722a4-3078-4a03-bb6c-205024a3ff3d req-37aa00e8-c36b-4d32-826b-d22025779a55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updated VIF entry in instance network info cache for port ac842ebf-4fca-4930-a4d1-3e8a6760d441. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:50:16 compute-0 nova_compute[260935]: 2025-10-11 08:50:16.771 2 DEBUG nova.network.neutron [req-a64722a4-3078-4a03-bb6c-205024a3ff3d req-37aa00e8-c36b-4d32-826b-d22025779a55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updating instance_info_cache with network_info: [{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:16 compute-0 nova_compute[260935]: 2025-10-11 08:50:16.788 2 DEBUG oslo_concurrency.lockutils [req-a64722a4-3078-4a03-bb6c-205024a3ff3d req-37aa00e8-c36b-4d32-826b-d22025779a55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:17 compute-0 nova_compute[260935]: 2025-10-11 08:50:17.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:17 compute-0 ceph-mon[74313]: pgmap v1340: 321 pgs: 321 active+clean; 577 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 4.0 MiB/s wr, 339 op/s
Oct 11 08:50:17 compute-0 nova_compute[260935]: 2025-10-11 08:50:17.804 2 DEBUG nova.compute.manager [req-596a1003-206a-4297-92e1-2bf02d0a142c req-80c0c5b5-a684-4b87-b6b1-b1d32ffe56e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-changed-c27797c3-6ac7-45ae-9a2e-7fc42908feab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:17 compute-0 nova_compute[260935]: 2025-10-11 08:50:17.804 2 DEBUG nova.compute.manager [req-596a1003-206a-4297-92e1-2bf02d0a142c req-80c0c5b5-a684-4b87-b6b1-b1d32ffe56e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Refreshing instance network info cache due to event network-changed-c27797c3-6ac7-45ae-9a2e-7fc42908feab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:50:17 compute-0 nova_compute[260935]: 2025-10-11 08:50:17.805 2 DEBUG oslo_concurrency.lockutils [req-596a1003-206a-4297-92e1-2bf02d0a142c req-80c0c5b5-a684-4b87-b6b1-b1d32ffe56e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:50:17 compute-0 nova_compute[260935]: 2025-10-11 08:50:17.805 2 DEBUG oslo_concurrency.lockutils [req-596a1003-206a-4297-92e1-2bf02d0a142c req-80c0c5b5-a684-4b87-b6b1-b1d32ffe56e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:50:17 compute-0 nova_compute[260935]: 2025-10-11 08:50:17.806 2 DEBUG nova.network.neutron [req-596a1003-206a-4297-92e1-2bf02d0a142c req-80c0c5b5-a684-4b87-b6b1-b1d32ffe56e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Refreshing network info cache for port c27797c3-6ac7-45ae-9a2e-7fc42908feab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:50:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 610 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 6.2 MiB/s wr, 400 op/s
Oct 11 08:50:18 compute-0 ovn_controller[152945]: 2025-10-11T08:50:18Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:8e:61 10.100.0.14
Oct 11 08:50:18 compute-0 ovn_controller[152945]: 2025-10-11T08:50:18Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:8e:61 10.100.0.14
Oct 11 08:50:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:50:18 compute-0 nova_compute[260935]: 2025-10-11 08:50:18.273 2 DEBUG nova.virt.libvirt.driver [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.327 2 DEBUG nova.compute.manager [req-c9a75884-5ba1-4edc-ba86-e068a1c83958 req-9c292f4f-9ab8-448d-ad5c-1ad1418ce00d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-changed-c27797c3-6ac7-45ae-9a2e-7fc42908feab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.328 2 DEBUG nova.compute.manager [req-c9a75884-5ba1-4edc-ba86-e068a1c83958 req-9c292f4f-9ab8-448d-ad5c-1ad1418ce00d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Refreshing instance network info cache due to event network-changed-c27797c3-6ac7-45ae-9a2e-7fc42908feab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.328 2 DEBUG oslo_concurrency.lockutils [req-c9a75884-5ba1-4edc-ba86-e068a1c83958 req-9c292f4f-9ab8-448d-ad5c-1ad1418ce00d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:50:19 compute-0 ceph-mon[74313]: pgmap v1341: 321 pgs: 321 active+clean; 610 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 6.2 MiB/s wr, 400 op/s
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.780 2 DEBUG nova.network.neutron [req-596a1003-206a-4297-92e1-2bf02d0a142c req-80c0c5b5-a684-4b87-b6b1-b1d32ffe56e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updated VIF entry in instance network info cache for port c27797c3-6ac7-45ae-9a2e-7fc42908feab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.781 2 DEBUG nova.network.neutron [req-596a1003-206a-4297-92e1-2bf02d0a142c req-80c0c5b5-a684-4b87-b6b1-b1d32ffe56e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updating instance_info_cache with network_info: [{"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.783 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.784 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.785 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.785 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.786 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.788 2 INFO nova.compute.manager [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Terminating instance
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.791 2 DEBUG nova.compute.manager [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.796 2 DEBUG oslo_concurrency.lockutils [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-14711e39-46ca-4856-9c19-fa51b869064d-f045b3aa-3ff6-4dea-ad61-a59a01735124" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.797 2 DEBUG oslo_concurrency.lockutils [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-14711e39-46ca-4856-9c19-fa51b869064d-f045b3aa-3ff6-4dea-ad61-a59a01735124" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.798 2 DEBUG nova.objects.instance [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid 14711e39-46ca-4856-9c19-fa51b869064d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.824 2 DEBUG oslo_concurrency.lockutils [req-596a1003-206a-4297-92e1-2bf02d0a142c req-80c0c5b5-a684-4b87-b6b1-b1d32ffe56e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.827 2 DEBUG oslo_concurrency.lockutils [req-c9a75884-5ba1-4edc-ba86-e068a1c83958 req-9c292f4f-9ab8-448d-ad5c-1ad1418ce00d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.828 2 DEBUG nova.network.neutron [req-c9a75884-5ba1-4edc-ba86-e068a1c83958 req-9c292f4f-9ab8-448d-ad5c-1ad1418ce00d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Refreshing network info cache for port c27797c3-6ac7-45ae-9a2e-7fc42908feab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:50:19 compute-0 kernel: tap9007d8a3-87 (unregistering): left promiscuous mode
Oct 11 08:50:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 610 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 255 op/s
Oct 11 08:50:19 compute-0 NetworkManager[44960]: <info>  [1760172619.8664] device (tap9007d8a3-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:50:19 compute-0 ovn_controller[152945]: 2025-10-11T08:50:19Z|00166|binding|INFO|Releasing lport 9007d8a3-8797-49c6-9302-4c8f4d699a45 from this chassis (sb_readonly=0)
Oct 11 08:50:19 compute-0 ovn_controller[152945]: 2025-10-11T08:50:19Z|00167|binding|INFO|Setting lport 9007d8a3-8797-49c6-9302-4c8f4d699a45 down in Southbound
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:19 compute-0 ovn_controller[152945]: 2025-10-11T08:50:19Z|00168|binding|INFO|Removing iface tap9007d8a3-87 ovn-installed in OVS
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:19.893 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:0d:85 10.100.0.3'], port_security=['fa:16:3e:34:0d:85 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9f9aca1c-8e65-435a-bfae-1ff0d4386f58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e0c9798-3406-4335-baf7-3664e8c2cc2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4478aa2544ad454daf82ec0d5a6f1b83', 'neutron:revision_number': '4', 'neutron:security_group_ids': '53a8c688-9a7d-4462-906b-8eed321153ea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.204'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=861cf0ef-52d4-42d5-8406-b93929d21340, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=9007d8a3-8797-49c6-9302-4c8f4d699a45) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:19.895 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 9007d8a3-8797-49c6-9302-4c8f4d699a45 in datapath 8e0c9798-3406-4335-baf7-3664e8c2cc2d unbound from our chassis
Oct 11 08:50:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:19.898 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e0c9798-3406-4335-baf7-3664e8c2cc2d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:50:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:19.899 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[83cdd9ea-b47e-4465-b9ab-0b2cf5318d37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:19.900 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d namespace which is not needed anymore
Oct 11 08:50:19 compute-0 nova_compute[260935]: 2025-10-11 08:50:19.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:19 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Oct 11 08:50:19 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001c.scope: Consumed 13.010s CPU time.
Oct 11 08:50:19 compute-0 systemd-machined[215705]: Machine qemu-30-instance-0000001c terminated.
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.041 2 INFO nova.virt.libvirt.driver [-] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Instance destroyed successfully.
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.042 2 DEBUG nova.objects.instance [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lazy-loading 'resources' on Instance uuid 9f9aca1c-8e65-435a-bfae-1ff0d4386f58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.063 2 DEBUG nova.virt.libvirt.vif [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-677562955',display_name='tempest-ServersTestManualDisk-server-677562955',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-677562955',id=28,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE1CYmGp9cIjn1OUZpClvh/431spCYwxzBbkMOHDBaljNzck8rmRcU7SShxhilRUkaibqgjOLZGNsP/o0nw2t9clDZxZT6xlOhd2BSbNJPRKX/ZsVWrtD5Ho3SYRh/eonw==',key_name='tempest-keypair-542099764',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4478aa2544ad454daf82ec0d5a6f1b83',ramdisk_id='',reservation_id='r-p05i8rcj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-133082753',owner_user_name='tempest-ServersTestManualDisk-133082753-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f557c82d1a6e44f2890ffb382c99df55',uuid=9f9aca1c-8e65-435a-bfae-1ff0d4386f58,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.065 2 DEBUG nova.network.os_vif_util [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Converting VIF {"id": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "address": "fa:16:3e:34:0d:85", "network": {"id": "8e0c9798-3406-4335-baf7-3664e8c2cc2d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1694728476-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4478aa2544ad454daf82ec0d5a6f1b83", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9007d8a3-87", "ovs_interfaceid": "9007d8a3-8797-49c6-9302-4c8f4d699a45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.068 2 DEBUG nova.network.os_vif_util [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:34:0d:85,bridge_name='br-int',has_traffic_filtering=True,id=9007d8a3-8797-49c6-9302-4c8f4d699a45,network=Network(8e0c9798-3406-4335-baf7-3664e8c2cc2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9007d8a3-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.069 2 DEBUG os_vif [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:0d:85,bridge_name='br-int',has_traffic_filtering=True,id=9007d8a3-8797-49c6-9302-4c8f4d699a45,network=Network(8e0c9798-3406-4335-baf7-3664e8c2cc2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9007d8a3-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:50:20 compute-0 neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d[296306]: [NOTICE]   (296310) : haproxy version is 2.8.14-c23fe91
Oct 11 08:50:20 compute-0 neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d[296306]: [NOTICE]   (296310) : path to executable is /usr/sbin/haproxy
Oct 11 08:50:20 compute-0 neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d[296306]: [WARNING]  (296310) : Exiting Master process...
Oct 11 08:50:20 compute-0 neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d[296306]: [WARNING]  (296310) : Exiting Master process...
Oct 11 08:50:20 compute-0 neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d[296306]: [ALERT]    (296310) : Current worker (296312) exited with code 143 (Terminated)
Oct 11 08:50:20 compute-0 neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d[296306]: [WARNING]  (296310) : All workers exited. Exiting... (0)
Oct 11 08:50:20 compute-0 systemd[1]: libpod-034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705.scope: Deactivated successfully.
Oct 11 08:50:20 compute-0 conmon[296306]: conmon 034d4af584d5ac1017cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705.scope/container/memory.events
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.079 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9007d8a3-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:20 compute-0 podman[297339]: 2025-10-11 08:50:20.084613306 +0000 UTC m=+0.061633904 container died 034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.088 2 INFO os_vif [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:0d:85,bridge_name='br-int',has_traffic_filtering=True,id=9007d8a3-8797-49c6-9302-4c8f4d699a45,network=Network(8e0c9798-3406-4335-baf7-3664e8c2cc2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9007d8a3-87')
Oct 11 08:50:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705-userdata-shm.mount: Deactivated successfully.
Oct 11 08:50:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-709c524c656a0c0bd950e9b79a2ea29641c4e55d424e4497bd7851268792db19-merged.mount: Deactivated successfully.
Oct 11 08:50:20 compute-0 podman[297339]: 2025-10-11 08:50:20.13390365 +0000 UTC m=+0.110924248 container cleanup 034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 08:50:20 compute-0 systemd[1]: libpod-conmon-034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705.scope: Deactivated successfully.
Oct 11 08:50:20 compute-0 podman[297390]: 2025-10-11 08:50:20.207896562 +0000 UTC m=+0.050485009 container remove 034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:50:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.213 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[41336208-f98a-4514-9fe3-a6d26fa95365]: (4, ('Sat Oct 11 08:50:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d (034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705)\n034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705\nSat Oct 11 08:50:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d (034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705)\n034d4af584d5ac1017cd36efc964e21839dbbea4a1045cbe9431994128504705\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.216 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[87253f51-2b89-4206-8d1f-61a4f659805a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.217 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e0c9798-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:20 compute-0 kernel: tap8e0c9798-30: left promiscuous mode
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.263 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[09678b3e-1ab1-4fc8-87ee-df1d8fde322c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.288 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c39aa3-faac-48ac-ab59-0c27aa1f11de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.290 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[76274d3d-4575-4873-ba0d-6ccaba625612]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.311 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[697912a6-a6da-48bd-a2af-e389baefe863]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444250, 'reachable_time': 22433, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297409, 'error': None, 'target': 'ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d8e0c9798\x2d3406\x2d4335\x2dbaf7\x2d3664e8c2cc2d.mount: Deactivated successfully.
Oct 11 08:50:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.315 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e0c9798-3406-4335-baf7-3664e8c2cc2d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:50:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.315 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1240b7de-c4dc-448b-afe2-b9b8eda407bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.535 2 INFO nova.virt.libvirt.driver [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Deleting instance files /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58_del
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.536 2 INFO nova.virt.libvirt.driver [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Deletion of /var/lib/nova/instances/9f9aca1c-8e65-435a-bfae-1ff0d4386f58_del complete
Oct 11 08:50:20 compute-0 kernel: tap31025494-a3 (unregistering): left promiscuous mode
Oct 11 08:50:20 compute-0 NetworkManager[44960]: <info>  [1760172620.5745] device (tap31025494-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:50:20 compute-0 ovn_controller[152945]: 2025-10-11T08:50:20Z|00169|binding|INFO|Releasing lport 31025494-a361-4c39-aa29-5841978f12e9 from this chassis (sb_readonly=0)
Oct 11 08:50:20 compute-0 ovn_controller[152945]: 2025-10-11T08:50:20Z|00170|binding|INFO|Setting lport 31025494-a361-4c39-aa29-5841978f12e9 down in Southbound
Oct 11 08:50:20 compute-0 ovn_controller[152945]: 2025-10-11T08:50:20Z|00171|binding|INFO|Removing iface tap31025494-a3 ovn-installed in OVS
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.598 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:8e:61 10.100.0.14'], port_security=['fa:16:3e:a8:8e:61 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'cb1503a2-bc9c-4faf-ab16-e7227f8c94f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=31025494-a361-4c39-aa29-5841978f12e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.600 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 31025494-a361-4c39-aa29-5841978f12e9 in datapath 9bac3530-993f-420e-8692-0b14a331d756 unbound from our chassis
Oct 11 08:50:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.603 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bac3530-993f-420e-8692-0b14a331d756, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.604 2 INFO nova.compute.manager [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 11 08:50:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.604 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ac4138-ada5-4d7f-8f7e-7ff0bae9e47a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.604 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace which is not needed anymore
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.604 2 DEBUG oslo.service.loopingcall [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.606 2 DEBUG nova.compute.manager [-] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.606 2 DEBUG nova.network.neutron [-] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:20 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct 11 08:50:20 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001e.scope: Consumed 12.310s CPU time.
Oct 11 08:50:20 compute-0 systemd-machined[215705]: Machine qemu-32-instance-0000001e terminated.
Oct 11 08:50:20 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[296922]: [NOTICE]   (296941) : haproxy version is 2.8.14-c23fe91
Oct 11 08:50:20 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[296922]: [NOTICE]   (296941) : path to executable is /usr/sbin/haproxy
Oct 11 08:50:20 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[296922]: [ALERT]    (296941) : Current worker (296943) exited with code 143 (Terminated)
Oct 11 08:50:20 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[296922]: [WARNING]  (296941) : All workers exited. Exiting... (0)
Oct 11 08:50:20 compute-0 systemd[1]: libpod-8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f.scope: Deactivated successfully.
Oct 11 08:50:20 compute-0 podman[297431]: 2025-10-11 08:50:20.75660103 +0000 UTC m=+0.048974526 container died 8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:50:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f-userdata-shm.mount: Deactivated successfully.
Oct 11 08:50:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-34c413eaa3c0da3cf86ff785e342f7b62278895f169bd630f3f2a56ec7d8ae13-merged.mount: Deactivated successfully.
Oct 11 08:50:20 compute-0 podman[297431]: 2025-10-11 08:50:20.809019242 +0000 UTC m=+0.101392728 container cleanup 8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:50:20 compute-0 systemd[1]: libpod-conmon-8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f.scope: Deactivated successfully.
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.895 2 DEBUG nova.objects.instance [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_requests' on Instance uuid 14711e39-46ca-4856-9c19-fa51b869064d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:20 compute-0 podman[297463]: 2025-10-11 08:50:20.925962439 +0000 UTC m=+0.073989033 container remove 8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.932 2 DEBUG nova.network.neutron [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:50:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.963 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ac495425-9e7c-4bae-8e64-4f5b0571c451]: (4, ('Sat Oct 11 08:50:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f)\n8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f\nSat Oct 11 08:50:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f)\n8b0a546d98e6676ad7230a9ea46074e9f7737c411187814dca94ba845962cc0f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.965 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae56427-0a74-475b-9310-9e41eccec376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:20.966 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:20 compute-0 nova_compute[260935]: 2025-10-11 08:50:20.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:20 compute-0 kernel: tap9bac3530-90: left promiscuous mode
Oct 11 08:50:21 compute-0 nova_compute[260935]: 2025-10-11 08:50:21.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:21.005 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b46fd84c-dfcd-4de0-bff7-c773a2bb1854]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:21.032 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b4bf6a2b-7393-4804-b076-ce65758ce770]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:21.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[06d96047-901d-4830-b886-6a6692692e1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:21.047 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[caac0c1a-a4e6-4a9c-8fea-8aa16d2034f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444769, 'reachable_time': 34264, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297488, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:21.050 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:50:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:21.050 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d24a39ad-adf5-47bd-8376-c33423bc9be0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d9bac3530\x2d993f\x2d420e\x2d8692\x2d0b14a331d756.mount: Deactivated successfully.
Oct 11 08:50:21 compute-0 nova_compute[260935]: 2025-10-11 08:50:21.237 2 DEBUG nova.policy [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:50:21 compute-0 nova_compute[260935]: 2025-10-11 08:50:21.289 2 INFO nova.virt.libvirt.driver [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Instance shutdown successfully after 13 seconds.
Oct 11 08:50:21 compute-0 nova_compute[260935]: 2025-10-11 08:50:21.296 2 INFO nova.virt.libvirt.driver [-] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Instance destroyed successfully.
Oct 11 08:50:21 compute-0 nova_compute[260935]: 2025-10-11 08:50:21.296 2 DEBUG nova.objects.instance [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'numa_topology' on Instance uuid cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:21 compute-0 nova_compute[260935]: 2025-10-11 08:50:21.329 2 DEBUG nova.compute.manager [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:21 compute-0 ceph-mon[74313]: pgmap v1342: 321 pgs: 321 active+clean; 610 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 255 op/s
Oct 11 08:50:21 compute-0 nova_compute[260935]: 2025-10-11 08:50:21.451 2 DEBUG oslo_concurrency.lockutils [None req-9e5a278a-2f09-40a3-8a2d-2f913c615ffa 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:21 compute-0 nova_compute[260935]: 2025-10-11 08:50:21.855 2 DEBUG nova.compute.manager [req-bf178e5d-ba15-4d23-aece-37b966a1fed3 req-f9ef3964-92b4-4926-a7bd-a4b4d547a7c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received event network-vif-unplugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:21 compute-0 nova_compute[260935]: 2025-10-11 08:50:21.855 2 DEBUG oslo_concurrency.lockutils [req-bf178e5d-ba15-4d23-aece-37b966a1fed3 req-f9ef3964-92b4-4926-a7bd-a4b4d547a7c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:21 compute-0 nova_compute[260935]: 2025-10-11 08:50:21.855 2 DEBUG oslo_concurrency.lockutils [req-bf178e5d-ba15-4d23-aece-37b966a1fed3 req-f9ef3964-92b4-4926-a7bd-a4b4d547a7c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:21 compute-0 nova_compute[260935]: 2025-10-11 08:50:21.855 2 DEBUG oslo_concurrency.lockutils [req-bf178e5d-ba15-4d23-aece-37b966a1fed3 req-f9ef3964-92b4-4926-a7bd-a4b4d547a7c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:21 compute-0 nova_compute[260935]: 2025-10-11 08:50:21.856 2 DEBUG nova.compute.manager [req-bf178e5d-ba15-4d23-aece-37b966a1fed3 req-f9ef3964-92b4-4926-a7bd-a4b4d547a7c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] No waiting events found dispatching network-vif-unplugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:21 compute-0 nova_compute[260935]: 2025-10-11 08:50:21.856 2 DEBUG nova.compute.manager [req-bf178e5d-ba15-4d23-aece-37b966a1fed3 req-f9ef3964-92b4-4926-a7bd-a4b4d547a7c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received event network-vif-unplugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:50:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 610 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 255 op/s
Oct 11 08:50:22 compute-0 nova_compute[260935]: 2025-10-11 08:50:22.070 2 DEBUG nova.network.neutron [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Successfully updated port: f045b3aa-3ff6-4dea-ad61-a59a01735124 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:50:22 compute-0 nova_compute[260935]: 2025-10-11 08:50:22.087 2 DEBUG oslo_concurrency.lockutils [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:50:22 compute-0 nova_compute[260935]: 2025-10-11 08:50:22.087 2 DEBUG oslo_concurrency.lockutils [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:50:22 compute-0 nova_compute[260935]: 2025-10-11 08:50:22.087 2 DEBUG nova.network.neutron [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:50:22 compute-0 nova_compute[260935]: 2025-10-11 08:50:22.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:22 compute-0 ovn_controller[152945]: 2025-10-11T08:50:22Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:75:86 10.100.0.5
Oct 11 08:50:22 compute-0 ovn_controller[152945]: 2025-10-11T08:50:22Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:75:86 10.100.0.5
Oct 11 08:50:22 compute-0 nova_compute[260935]: 2025-10-11 08:50:22.393 2 DEBUG nova.network.neutron [req-c9a75884-5ba1-4edc-ba86-e068a1c83958 req-9c292f4f-9ab8-448d-ad5c-1ad1418ce00d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updated VIF entry in instance network info cache for port c27797c3-6ac7-45ae-9a2e-7fc42908feab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:50:22 compute-0 nova_compute[260935]: 2025-10-11 08:50:22.394 2 DEBUG nova.network.neutron [req-c9a75884-5ba1-4edc-ba86-e068a1c83958 req-9c292f4f-9ab8-448d-ad5c-1ad1418ce00d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updating instance_info_cache with network_info: [{"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:22 compute-0 nova_compute[260935]: 2025-10-11 08:50:22.403 2 WARNING nova.network.neutron [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] fff13396-b787-4c6e-9112-a1c2ef57b26d already exists in list: networks containing: ['fff13396-b787-4c6e-9112-a1c2ef57b26d']. ignoring it
Oct 11 08:50:22 compute-0 nova_compute[260935]: 2025-10-11 08:50:22.416 2 DEBUG oslo_concurrency.lockutils [req-c9a75884-5ba1-4edc-ba86-e068a1c83958 req-9c292f4f-9ab8-448d-ad5c-1ad1418ce00d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:22 compute-0 nova_compute[260935]: 2025-10-11 08:50:22.488 2 DEBUG nova.network.neutron [-] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:22 compute-0 nova_compute[260935]: 2025-10-11 08:50:22.510 2 INFO nova.compute.manager [-] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Took 1.90 seconds to deallocate network for instance.
Oct 11 08:50:22 compute-0 nova_compute[260935]: 2025-10-11 08:50:22.552 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:22 compute-0 nova_compute[260935]: 2025-10-11 08:50:22.553 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:22 compute-0 nova_compute[260935]: 2025-10-11 08:50:22.756 2 DEBUG oslo_concurrency.processutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:22 compute-0 nova_compute[260935]: 2025-10-11 08:50:22.813 2 DEBUG nova.compute.manager [req-e25d1736-e6e5-4d6f-a2c6-cb703440408b req-a7a2cf3d-cd0b-4ef6-86e1-d86c67796ee4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:22 compute-0 nova_compute[260935]: 2025-10-11 08:50:22.814 2 DEBUG nova.compute.manager [req-e25d1736-e6e5-4d6f-a2c6-cb703440408b req-a7a2cf3d-cd0b-4ef6-86e1-d86c67796ee4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing instance network info cache due to event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:50:22 compute-0 nova_compute[260935]: 2025-10-11 08:50:22.815 2 DEBUG oslo_concurrency.lockutils [req-e25d1736-e6e5-4d6f-a2c6-cb703440408b req-a7a2cf3d-cd0b-4ef6-86e1-d86c67796ee4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:50:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:50:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2134780301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.236 2 DEBUG oslo_concurrency.processutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.241 2 DEBUG nova.compute.provider_tree [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.268 2 DEBUG nova.scheduler.client.report [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.290 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.327 2 INFO nova.scheduler.client.report [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Deleted allocations for instance 9f9aca1c-8e65-435a-bfae-1ff0d4386f58
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.331 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 08:50:23 compute-0 ceph-mon[74313]: pgmap v1343: 321 pgs: 321 active+clean; 610 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 255 op/s
Oct 11 08:50:23 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2134780301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.412 2 DEBUG oslo_concurrency.lockutils [None req-49ced0b0-22cb-48b2-bf9e-9281176f5b95 f557c82d1a6e44f2890ffb382c99df55 4478aa2544ad454daf82ec0d5a6f1b83 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.477 2 DEBUG nova.compute.manager [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.513 2 INFO nova.compute.manager [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] instance snapshotting
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.513 2 WARNING nova.compute.manager [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] trying to snapshot a non-running instance: (state: 4 expected: 1)
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.725 2 INFO nova.virt.libvirt.driver [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Beginning cold snapshot process
Oct 11 08:50:23 compute-0 podman[297513]: 2025-10-11 08:50:23.76614239 +0000 UTC m=+0.070228378 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 08:50:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 579 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 8.4 MiB/s wr, 369 op/s
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.900 2 DEBUG nova.virt.libvirt.imagebackend [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.905 2 DEBUG nova.network.neutron [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updating instance_info_cache with network_info: [{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.927 2 DEBUG oslo_concurrency.lockutils [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.928 2 DEBUG oslo_concurrency.lockutils [req-e25d1736-e6e5-4d6f-a2c6-cb703440408b req-a7a2cf3d-cd0b-4ef6-86e1-d86c67796ee4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.929 2 DEBUG nova.network.neutron [req-e25d1736-e6e5-4d6f-a2c6-cb703440408b req-a7a2cf3d-cd0b-4ef6-86e1-d86c67796ee4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing network info cache for port ac842ebf-4fca-4930-a4d1-3e8a6760d441 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.934 2 DEBUG nova.virt.libvirt.vif [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-579139719',display_name='tempest-tempest.common.compute-instance-579139719',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-579139719',id=26,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-9v1qz90y',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=14711e39-46ca-4856-9c19-fa51b869064d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.934 2 DEBUG nova.network.os_vif_util [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.936 2 DEBUG nova.network.os_vif_util [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.936 2 DEBUG os_vif [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.938 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.939 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.944 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf045b3aa-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.945 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf045b3aa-3f, col_values=(('external_ids', {'iface-id': 'f045b3aa-3ff6-4dea-ad61-a59a01735124', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:3e:c0', 'vm-uuid': '14711e39-46ca-4856-9c19-fa51b869064d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:23 compute-0 NetworkManager[44960]: <info>  [1760172623.9495] manager: (tapf045b3aa-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.962 2 INFO os_vif [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f')
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.963 2 DEBUG nova.virt.libvirt.vif [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-579139719',display_name='tempest-tempest.common.compute-instance-579139719',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-579139719',id=26,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-9v1qz90y',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=14711e39-46ca-4856-9c19-fa51b869064d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.964 2 DEBUG nova.network.os_vif_util [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.964 2 DEBUG nova.network.os_vif_util [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.968 2 DEBUG nova.virt.libvirt.guest [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] attach device xml: <interface type="ethernet">
Oct 11 08:50:23 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:ee:3e:c0"/>
Oct 11 08:50:23 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 08:50:23 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:50:23 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 08:50:23 compute-0 nova_compute[260935]:   <target dev="tapf045b3aa-3f"/>
Oct 11 08:50:23 compute-0 nova_compute[260935]: </interface>
Oct 11 08:50:23 compute-0 nova_compute[260935]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 08:50:23 compute-0 kernel: tapf045b3aa-3f: entered promiscuous mode
Oct 11 08:50:23 compute-0 NetworkManager[44960]: <info>  [1760172623.9868] manager: (tapf045b3aa-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/97)
Oct 11 08:50:23 compute-0 ovn_controller[152945]: 2025-10-11T08:50:23Z|00172|binding|INFO|Claiming lport f045b3aa-3ff6-4dea-ad61-a59a01735124 for this chassis.
Oct 11 08:50:23 compute-0 ovn_controller[152945]: 2025-10-11T08:50:23Z|00173|binding|INFO|f045b3aa-3ff6-4dea-ad61-a59a01735124: Claiming fa:16:3e:ee:3e:c0 10.100.0.7
Oct 11 08:50:23 compute-0 nova_compute[260935]: 2025-10-11 08:50:23.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.001 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3e:c0 10.100.0.7'], port_security=['fa:16:3e:ee:3e:c0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-350865697', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '14711e39-46ca-4856-9c19-fa51b869064d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-350865697', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f045b3aa-3ff6-4dea-ad61-a59a01735124) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.002 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f045b3aa-3ff6-4dea-ad61-a59a01735124 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis
Oct 11 08:50:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.005 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:50:24 compute-0 ovn_controller[152945]: 2025-10-11T08:50:24Z|00174|binding|INFO|Setting lport f045b3aa-3ff6-4dea-ad61-a59a01735124 ovn-installed in OVS
Oct 11 08:50:24 compute-0 ovn_controller[152945]: 2025-10-11T08:50:24Z|00175|binding|INFO|Setting lport f045b3aa-3ff6-4dea-ad61-a59a01735124 up in Southbound
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:24 compute-0 systemd-udevd[297577]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:50:24 compute-0 NetworkManager[44960]: <info>  [1760172624.0484] device (tapf045b3aa-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:50:24 compute-0 NetworkManager[44960]: <info>  [1760172624.0490] device (tapf045b3aa-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.049 2 DEBUG nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received event network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.050 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.051 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.051 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f9aca1c-8e65-435a-bfae-1ff0d4386f58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.051 2 DEBUG nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] No waiting events found dispatching network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.051 2 WARNING nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received unexpected event network-vif-plugged-9007d8a3-8797-49c6-9302-4c8f4d699a45 for instance with vm_state deleted and task_state None.
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.052 2 DEBUG nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Received event network-vif-unplugged-31025494-a361-4c39-aa29-5841978f12e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.052 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.052 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.052 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7706dcd6-4be2-4751-8f21-2a34dc65b9ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.052 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.053 2 DEBUG nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] No waiting events found dispatching network-vif-unplugged-31025494-a361-4c39-aa29-5841978f12e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.053 2 WARNING nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Received unexpected event network-vif-unplugged-31025494-a361-4c39-aa29-5841978f12e9 for instance with vm_state stopped and task_state image_uploading.
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.053 2 DEBUG nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Received event network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.053 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.054 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.054 2 DEBUG oslo_concurrency.lockutils [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.054 2 DEBUG nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] No waiting events found dispatching network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.054 2 WARNING nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Received unexpected event network-vif-plugged-31025494-a361-4c39-aa29-5841978f12e9 for instance with vm_state stopped and task_state image_uploading.
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.054 2 DEBUG nova.compute.manager [req-3cbe713b-75f3-43ca-bb9a-98a2fab5235b req-eae75a65-2db0-4971-bf89-0816ac33dca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Received event network-vif-deleted-9007d8a3-8797-49c6-9302-4c8f4d699a45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.078 2 DEBUG nova.virt.libvirt.driver [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.079 2 DEBUG nova.virt.libvirt.driver [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.079 2 DEBUG nova.virt.libvirt.driver [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:56:95:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.079 2 DEBUG nova.virt.libvirt.driver [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:ee:3e:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:50:24 compute-0 ovn_controller[152945]: 2025-10-11T08:50:24Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:d8:0b 10.100.0.11
Oct 11 08:50:24 compute-0 ovn_controller[152945]: 2025-10-11T08:50:24Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:d8:0b 10.100.0.11
Oct 11 08:50:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.096 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ad99747a-2dc9-4710-bdbc-2833a492a17e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.098 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a32b3677-e906-4ce0-a0c6-475a07f11c25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.102 2 DEBUG nova.storage.rbd_utils [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(5a11c0b76fc649679b388e414df4e2e7) on rbd image(cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:50:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.131 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f477e0e3-df53-40ec-b0f8-868d083ee114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.147 2 DEBUG nova.virt.libvirt.guest [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:50:24 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:50:24 compute-0 nova_compute[260935]:   <nova:name>tempest-tempest.common.compute-instance-579139719</nova:name>
Oct 11 08:50:24 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:50:24</nova:creationTime>
Oct 11 08:50:24 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:50:24 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:50:24 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:50:24 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:50:24 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:50:24 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:50:24 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:50:24 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:50:24 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:50:24 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:50:24 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:50:24 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:50:24 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:50:24 compute-0 nova_compute[260935]:     <nova:port uuid="ac842ebf-4fca-4930-a4d1-3e8a6760d441">
Oct 11 08:50:24 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:50:24 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:50:24 compute-0 nova_compute[260935]:     <nova:port uuid="f045b3aa-3ff6-4dea-ad61-a59a01735124">
Oct 11 08:50:24 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 08:50:24 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:50:24 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:50:24 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:50:24 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 08:50:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.153 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5eea39-2b12-4ebf-876a-48d6ce39fe1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443120, 'reachable_time': 39241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297604, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.175 2 DEBUG oslo_concurrency.lockutils [None req-644005e2-9afc-4683-b0cd-8c4d30c643f6 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-14711e39-46ca-4856-9c19-fa51b869064d-f045b3aa-3ff6-4dea-ad61-a59a01735124" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.176 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0b864eef-0a5d-4e2b-9862-e896ad20886a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443136, 'tstamp': 443136}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297605, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443140, 'tstamp': 443140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297605, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.178 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.222 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.222 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.223 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:24.223 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Oct 11 08:50:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Oct 11 08:50:24 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.434 2 DEBUG nova.storage.rbd_utils [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] cloning vms/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk@5a11c0b76fc649679b388e414df4e2e7 to images/84322915-869e-419d-b9c3-505cab3e7ea4 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:50:24 compute-0 nova_compute[260935]: 2025-10-11 08:50:24.551 2 DEBUG nova.storage.rbd_utils [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] flattening images/84322915-869e-419d-b9c3-505cab3e7ea4 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:50:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:50:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:50:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:50:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:50:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:50:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.000 2 DEBUG nova.storage.rbd_utils [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] removing snapshot(5a11c0b76fc649679b388e414df4e2e7) on rbd image(cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.195 2 DEBUG nova.compute.manager [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-changed-f045b3aa-3ff6-4dea-ad61-a59a01735124 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.196 2 DEBUG nova.compute.manager [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing instance network info cache due to event network-changed-f045b3aa-3ff6-4dea-ad61-a59a01735124. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.197 2 DEBUG oslo_concurrency.lockutils [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:50:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Oct 11 08:50:25 compute-0 ceph-mon[74313]: pgmap v1344: 321 pgs: 321 active+clean; 579 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 8.4 MiB/s wr, 369 op/s
Oct 11 08:50:25 compute-0 ceph-mon[74313]: osdmap e170: 3 total, 3 up, 3 in
Oct 11 08:50:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Oct 11 08:50:25 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.444 2 DEBUG nova.storage.rbd_utils [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(snap) on rbd image(84322915-869e-419d-b9c3-505cab3e7ea4) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.492 2 DEBUG oslo_concurrency.lockutils [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-14711e39-46ca-4856-9c19-fa51b869064d-f045b3aa-3ff6-4dea-ad61-a59a01735124" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.493 2 DEBUG oslo_concurrency.lockutils [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-14711e39-46ca-4856-9c19-fa51b869064d-f045b3aa-3ff6-4dea-ad61-a59a01735124" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.509 2 DEBUG nova.objects.instance [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid 14711e39-46ca-4856-9c19-fa51b869064d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.528 2 DEBUG nova.virt.libvirt.vif [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-579139719',display_name='tempest-tempest.common.compute-instance-579139719',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-579139719',id=26,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-9v1qz90y',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=14711e39-46ca-4856-9c19-fa51b869064d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.528 2 DEBUG nova.network.os_vif_util [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.529 2 DEBUG nova.network.os_vif_util [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.533 2 DEBUG nova.virt.libvirt.guest [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.536 2 DEBUG nova.virt.libvirt.guest [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.542 2 DEBUG nova.virt.libvirt.driver [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Attempting to detach device tapf045b3aa-3f from instance 14711e39-46ca-4856-9c19-fa51b869064d from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.543 2 DEBUG nova.virt.libvirt.guest [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] detach device xml: <interface type="ethernet">
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:ee:3e:c0"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <target dev="tapf045b3aa-3f"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]: </interface>
Oct 11 08:50:25 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.551 2 DEBUG nova.virt.libvirt.guest [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.557 2 DEBUG nova.virt.libvirt.guest [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface>not found in domain: <domain type='kvm' id='28'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <name>instance-0000001a</name>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <uuid>14711e39-46ca-4856-9c19-fa51b869064d</uuid>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:name>tempest-tempest.common.compute-instance-579139719</nova:name>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:50:24</nova:creationTime>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:port uuid="ac842ebf-4fca-4930-a4d1-3e8a6760d441">
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:port uuid="f045b3aa-3ff6-4dea-ad61-a59a01735124">
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:50:25 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <resource>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <partition>/machine</partition>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </resource>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <system>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <entry name='serial'>14711e39-46ca-4856-9c19-fa51b869064d</entry>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <entry name='uuid'>14711e39-46ca-4856-9c19-fa51b869064d</entry>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </system>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <os>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </os>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <features>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </features>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <cpu mode='custom' match='exact' check='full'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <vendor>AMD</vendor>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='x2apic'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc-deadline'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='hypervisor'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc_adjust'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='spec-ctrl'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='stibp'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='arch-capabilities'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='ssbd'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='cmp_legacy'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='overflow-recov'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='succor'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='ibrs'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='amd-ssbd'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='virt-ssbd'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='lbrv'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='tsc-scale'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='vmcb-clean'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='flushbyasid'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='pause-filter'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='pfthreshold'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='rdctl-no'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='mds-no'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='pschange-mc-no'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='gds-no'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='rfds-no'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='xsaves'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='svm'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='topoext'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='npt'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='nrip-save'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/14711e39-46ca-4856-9c19-fa51b869064d_disk' index='2'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='virtio-disk0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/14711e39-46ca-4856-9c19-fa51b869064d_disk.config' index='1'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='sata0-0-0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pcie.0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.1'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.2'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.3'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.4'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.5'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.6'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.7'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.8'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.9'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.10'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.11'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.12'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.13'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.14'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.15'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.16'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.17'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.18'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.19'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.20'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.21'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.22'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.23'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.24'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.25'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.26'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='usb'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='ide'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:56:95:17'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target dev='tapac842ebf-4f'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='net0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:ee:3e:c0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target dev='tapf045b3aa-3f'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='net1'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <source path='/dev/pts/4'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/console.log' append='off'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       </target>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <console type='pty' tty='/dev/pts/4'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <source path='/dev/pts/4'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/console.log' append='off'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </console>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='input0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </input>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='input1'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </input>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='input2'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </input>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </graphics>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <video>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='video0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </video>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='watchdog0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </watchdog>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='balloon0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='rng0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <label>system_u:system_r:svirt_t:s0:c266,c549</label>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c266,c549</imagelabel>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <label>+107:+107</label>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <imagelabel>+107:+107</imagelabel>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 08:50:25 compute-0 nova_compute[260935]: </domain>
Oct 11 08:50:25 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.557 2 INFO nova.virt.libvirt.driver [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully detached device tapf045b3aa-3f from instance 14711e39-46ca-4856-9c19-fa51b869064d from the persistent domain config.
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.558 2 DEBUG nova.virt.libvirt.driver [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] (1/8): Attempting to detach device tapf045b3aa-3f with device alias net1 from instance 14711e39-46ca-4856-9c19-fa51b869064d from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.559 2 DEBUG nova.virt.libvirt.guest [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] detach device xml: <interface type="ethernet">
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:ee:3e:c0"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <target dev="tapf045b3aa-3f"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]: </interface>
Oct 11 08:50:25 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 08:50:25 compute-0 kernel: tapf045b3aa-3f (unregistering): left promiscuous mode
Oct 11 08:50:25 compute-0 NetworkManager[44960]: <info>  [1760172625.6871] device (tapf045b3aa-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.710 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Received event <DeviceRemovedEvent: 1760172625.7100601, 14711e39-46ca-4856-9c19-fa51b869064d => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.713 2 DEBUG nova.virt.libvirt.driver [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Start waiting for the detach event from libvirt for device tapf045b3aa-3f with device alias net1 for instance 14711e39-46ca-4856-9c19-fa51b869064d _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.713 2 DEBUG nova.virt.libvirt.guest [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.719 2 DEBUG nova.virt.libvirt.guest [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface>not found in domain: <domain type='kvm' id='28'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <name>instance-0000001a</name>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <uuid>14711e39-46ca-4856-9c19-fa51b869064d</uuid>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:name>tempest-tempest.common.compute-instance-579139719</nova:name>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:50:24</nova:creationTime>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:port uuid="ac842ebf-4fca-4930-a4d1-3e8a6760d441">
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:port uuid="f045b3aa-3ff6-4dea-ad61-a59a01735124">
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:50:25 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <resource>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <partition>/machine</partition>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </resource>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <system>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <entry name='serial'>14711e39-46ca-4856-9c19-fa51b869064d</entry>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <entry name='uuid'>14711e39-46ca-4856-9c19-fa51b869064d</entry>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </system>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <os>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </os>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <features>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </features>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <cpu mode='custom' match='exact' check='full'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <vendor>AMD</vendor>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='x2apic'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc-deadline'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='hypervisor'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc_adjust'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='spec-ctrl'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='stibp'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='arch-capabilities'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='ssbd'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='cmp_legacy'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='overflow-recov'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='succor'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='ibrs'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='amd-ssbd'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='virt-ssbd'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='lbrv'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='tsc-scale'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='vmcb-clean'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='flushbyasid'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='pause-filter'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='pfthreshold'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='rdctl-no'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='mds-no'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='pschange-mc-no'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='gds-no'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='rfds-no'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='xsaves'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='svm'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='require' name='topoext'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='npt'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <feature policy='disable' name='nrip-save'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/14711e39-46ca-4856-9c19-fa51b869064d_disk' index='2'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='virtio-disk0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/14711e39-46ca-4856-9c19-fa51b869064d_disk.config' index='1'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='sata0-0-0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pcie.0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.1'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.2'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.3'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.4'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.5'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.6'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.7'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.8'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.9'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.10'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.11'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.12'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.13'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.14'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.15'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.16'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.17'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.18'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.19'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.20'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.21'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.22'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.23'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.24'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.25'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='pci.26'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='usb'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='ide'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:56:95:17'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target dev='tapac842ebf-4f'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='net0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <source path='/dev/pts/4'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/console.log' append='off'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       </target>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <console type='pty' tty='/dev/pts/4'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <source path='/dev/pts/4'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d/console.log' append='off'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </console>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='input0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </input>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='input1'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </input>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='input2'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </input>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </graphics>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <video>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='video0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </video>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='watchdog0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </watchdog>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='balloon0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <alias name='rng0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <label>system_u:system_r:svirt_t:s0:c266,c549</label>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c266,c549</imagelabel>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <label>+107:+107</label>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <imagelabel>+107:+107</imagelabel>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 08:50:25 compute-0 nova_compute[260935]: </domain>
Oct 11 08:50:25 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.719 2 INFO nova.virt.libvirt.driver [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully detached device tapf045b3aa-3f from instance 14711e39-46ca-4856-9c19-fa51b869064d from the live domain config.
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.721 2 DEBUG nova.virt.libvirt.vif [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-579139719',display_name='tempest-tempest.common.compute-instance-579139719',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-579139719',id=26,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-9v1qz90y',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=14711e39-46ca-4856-9c19-fa51b869064d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.721 2 DEBUG nova.network.os_vif_util [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.722 2 DEBUG nova.network.os_vif_util [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.723 2 DEBUG os_vif [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.725 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf045b3aa-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:25 compute-0 ovn_controller[152945]: 2025-10-11T08:50:25Z|00176|binding|INFO|Releasing lport f045b3aa-3ff6-4dea-ad61-a59a01735124 from this chassis (sb_readonly=0)
Oct 11 08:50:25 compute-0 ovn_controller[152945]: 2025-10-11T08:50:25Z|00177|binding|INFO|Setting lport f045b3aa-3ff6-4dea-ad61-a59a01735124 down in Southbound
Oct 11 08:50:25 compute-0 ovn_controller[152945]: 2025-10-11T08:50:25Z|00178|binding|INFO|Removing iface tapf045b3aa-3f ovn-installed in OVS
Oct 11 08:50:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:25.771 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3e:c0 10.100.0.7'], port_security=['fa:16:3e:ee:3e:c0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-350865697', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '14711e39-46ca-4856-9c19-fa51b869064d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-350865697', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f045b3aa-3ff6-4dea-ad61-a59a01735124) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:25.773 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f045b3aa-3ff6-4dea-ad61-a59a01735124 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d unbound from our chassis
Oct 11 08:50:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:25.776 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.804 2 INFO os_vif [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f')
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.804 2 DEBUG nova.virt.libvirt.guest [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:name>tempest-tempest.common.compute-instance-579139719</nova:name>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:50:25</nova:creationTime>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:50:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:25.804 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e21ad0ba-7d7f-4ac1-a600-9e11bd0a15ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     <nova:port uuid="ac842ebf-4fca-4930-a4d1-3e8a6760d441">
Oct 11 08:50:25 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:50:25 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:50:25 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:50:25 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:50:25 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 08:50:25 compute-0 rsyslogd[1003]: imjournal from <np0005481065:nova_compute>: begin to drop messages due to rate-limiting
Oct 11 08:50:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:25.855 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7679cc5f-eb47-4462-b911-a0a497cefa19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:25.859 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2d5b89f5-fe52-4275-8aa3-08adb85f9b47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 579 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 609 KiB/s rd, 6.2 MiB/s wr, 170 op/s
Oct 11 08:50:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:25.898 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0fb1873b-7b2c-450b-9d42-3de505f70eee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:25.926 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[503008ae-c9e8-45ea-bde9-728b1cad4a09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443120, 'reachable_time': 39241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297709, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:25.950 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[980e942c-27dd-45d9-b813-8c7945f3e718]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443136, 'tstamp': 443136}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297710, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443140, 'tstamp': 443140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297710, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:25.951 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.954 2 DEBUG nova.network.neutron [req-e25d1736-e6e5-4d6f-a2c6-cb703440408b req-a7a2cf3d-cd0b-4ef6-86e1-d86c67796ee4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updated VIF entry in instance network info cache for port ac842ebf-4fca-4930-a4d1-3e8a6760d441. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.954 2 DEBUG nova.network.neutron [req-e25d1736-e6e5-4d6f-a2c6-cb703440408b req-a7a2cf3d-cd0b-4ef6-86e1-d86c67796ee4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updating instance_info_cache with network_info: [{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:25.959 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:25.959 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:25.960 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:25.960 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.977 2 DEBUG oslo_concurrency.lockutils [req-e25d1736-e6e5-4d6f-a2c6-cb703440408b req-a7a2cf3d-cd0b-4ef6-86e1-d86c67796ee4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.978 2 DEBUG oslo_concurrency.lockutils [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:50:25 compute-0 nova_compute[260935]: 2025-10-11 08:50:25.979 2 DEBUG nova.network.neutron [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing network info cache for port f045b3aa-3ff6-4dea-ad61-a59a01735124 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:50:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Oct 11 08:50:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Oct 11 08:50:26 compute-0 ceph-mon[74313]: osdmap e171: 3 total, 3 up, 3 in
Oct 11 08:50:26 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Oct 11 08:50:26 compute-0 kernel: tapa3944a31-95 (unregistering): left promiscuous mode
Oct 11 08:50:26 compute-0 NetworkManager[44960]: <info>  [1760172626.6226] device (tapa3944a31-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:50:26 compute-0 ovn_controller[152945]: 2025-10-11T08:50:26Z|00179|binding|INFO|Releasing lport a3944a31-9560-49ae-b2a5-caaf2736993a from this chassis (sb_readonly=0)
Oct 11 08:50:26 compute-0 ovn_controller[152945]: 2025-10-11T08:50:26Z|00180|binding|INFO|Setting lport a3944a31-9560-49ae-b2a5-caaf2736993a down in Southbound
Oct 11 08:50:26 compute-0 nova_compute[260935]: 2025-10-11 08:50:26.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:26 compute-0 ovn_controller[152945]: 2025-10-11T08:50:26Z|00181|binding|INFO|Removing iface tapa3944a31-95 ovn-installed in OVS
Oct 11 08:50:26 compute-0 nova_compute[260935]: 2025-10-11 08:50:26.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:26.647 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:d8:0b 10.100.0.11'], port_security=['fa:16:3e:bf:d8:0b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '77ff3a9d-3eb2-40ed-ad12-6367fd4e555f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a3944a31-9560-49ae-b2a5-caaf2736993a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:26.648 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a3944a31-9560-49ae-b2a5-caaf2736993a in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 unbound from our chassis
Oct 11 08:50:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:26.651 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5
Oct 11 08:50:26 compute-0 nova_compute[260935]: 2025-10-11 08:50:26.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:26.678 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b805ff3a-2df0-4db8-8cfe-65f5bc0742d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:26 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 11 08:50:26 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000016.scope: Consumed 14.260s CPU time.
Oct 11 08:50:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:26.718 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[44d511dc-71c9-4f26-8aca-963dbdfc1250]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:26 compute-0 systemd-machined[215705]: Machine qemu-33-instance-00000016 terminated.
Oct 11 08:50:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:26.727 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[966ec424-2393-4343-a1aa-5ce365540efd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:26.769 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0a29f84c-3317-4691-9370-a69478a2c992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:26.798 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bd597ca3-bdfe-49db-a9cd-7bac18e75f97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 16, 'rx_bytes': 1168, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 16, 'rx_bytes': 1168, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297722, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:26.827 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe90eaa-30f6-4c4a-b00d-dcc8da8dcc26]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439130, 'tstamp': 439130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297723, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439135, 'tstamp': 439135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297723, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:26.829 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:26 compute-0 nova_compute[260935]: 2025-10-11 08:50:26.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:26 compute-0 nova_compute[260935]: 2025-10-11 08:50:26.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:26 compute-0 nova_compute[260935]: 2025-10-11 08:50:26.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:26.892 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:26.893 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:26.893 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:26.894 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.292 2 DEBUG nova.compute.manager [req-1ea41447-a2e3-43d0-9d26-3396efa4d651 req-62d5ec45-d76e-4406-bfcb-199accd7ef7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-unplugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.293 2 DEBUG oslo_concurrency.lockutils [req-1ea41447-a2e3-43d0-9d26-3396efa4d651 req-62d5ec45-d76e-4406-bfcb-199accd7ef7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.293 2 DEBUG oslo_concurrency.lockutils [req-1ea41447-a2e3-43d0-9d26-3396efa4d651 req-62d5ec45-d76e-4406-bfcb-199accd7ef7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.294 2 DEBUG oslo_concurrency.lockutils [req-1ea41447-a2e3-43d0-9d26-3396efa4d651 req-62d5ec45-d76e-4406-bfcb-199accd7ef7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.294 2 DEBUG nova.compute.manager [req-1ea41447-a2e3-43d0-9d26-3396efa4d651 req-62d5ec45-d76e-4406-bfcb-199accd7ef7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] No waiting events found dispatching network-vif-unplugged-a3944a31-9560-49ae-b2a5-caaf2736993a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.295 2 WARNING nova.compute.manager [req-1ea41447-a2e3-43d0-9d26-3396efa4d651 req-62d5ec45-d76e-4406-bfcb-199accd7ef7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received unexpected event network-vif-unplugged-a3944a31-9560-49ae-b2a5-caaf2736993a for instance with vm_state active and task_state rebuilding.
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.313 2 DEBUG nova.compute.manager [req-0af2be3c-0817-4e0e-a7f6-dfd627f381ef req-b642390b-2cae-4e4b-b91b-bafcae6b569a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-vif-unplugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.313 2 DEBUG oslo_concurrency.lockutils [req-0af2be3c-0817-4e0e-a7f6-dfd627f381ef req-b642390b-2cae-4e4b-b91b-bafcae6b569a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.313 2 DEBUG oslo_concurrency.lockutils [req-0af2be3c-0817-4e0e-a7f6-dfd627f381ef req-b642390b-2cae-4e4b-b91b-bafcae6b569a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.314 2 DEBUG oslo_concurrency.lockutils [req-0af2be3c-0817-4e0e-a7f6-dfd627f381ef req-b642390b-2cae-4e4b-b91b-bafcae6b569a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.314 2 DEBUG nova.compute.manager [req-0af2be3c-0817-4e0e-a7f6-dfd627f381ef req-b642390b-2cae-4e4b-b91b-bafcae6b569a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] No waiting events found dispatching network-vif-unplugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.314 2 WARNING nova.compute.manager [req-0af2be3c-0817-4e0e-a7f6-dfd627f381ef req-b642390b-2cae-4e4b-b91b-bafcae6b569a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received unexpected event network-vif-unplugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 for instance with vm_state active and task_state None.
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.314 2 DEBUG nova.compute.manager [req-0af2be3c-0817-4e0e-a7f6-dfd627f381ef req-b642390b-2cae-4e4b-b91b-bafcae6b569a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.315 2 DEBUG oslo_concurrency.lockutils [req-0af2be3c-0817-4e0e-a7f6-dfd627f381ef req-b642390b-2cae-4e4b-b91b-bafcae6b569a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.315 2 DEBUG oslo_concurrency.lockutils [req-0af2be3c-0817-4e0e-a7f6-dfd627f381ef req-b642390b-2cae-4e4b-b91b-bafcae6b569a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.315 2 DEBUG oslo_concurrency.lockutils [req-0af2be3c-0817-4e0e-a7f6-dfd627f381ef req-b642390b-2cae-4e4b-b91b-bafcae6b569a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.315 2 DEBUG nova.compute.manager [req-0af2be3c-0817-4e0e-a7f6-dfd627f381ef req-b642390b-2cae-4e4b-b91b-bafcae6b569a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] No waiting events found dispatching network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.316 2 WARNING nova.compute.manager [req-0af2be3c-0817-4e0e-a7f6-dfd627f381ef req-b642390b-2cae-4e4b-b91b-bafcae6b569a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received unexpected event network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 for instance with vm_state active and task_state None.
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.350 2 INFO nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance shutdown successfully after 14 seconds.
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.357 2 INFO nova.virt.libvirt.driver [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance destroyed successfully.
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.363 2 INFO nova.virt.libvirt.driver [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance destroyed successfully.
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.364 2 DEBUG nova.virt.libvirt.vif [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1988839763',display_name='tempest-ServersAdminTestJSON-server-1988839763',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1988839763',id=22,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:50:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-nxz1v0k9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'}
,tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:12Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=77ff3a9d-3eb2-40ed-ad12-6367fd4e555f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.364 2 DEBUG nova.network.os_vif_util [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.365 2 DEBUG nova.network.os_vif_util [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.366 2 DEBUG os_vif [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.368 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3944a31-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.379 2 INFO os_vif [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95')
Oct 11 08:50:27 compute-0 ceph-mon[74313]: pgmap v1347: 321 pgs: 321 active+clean; 579 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 609 KiB/s rd, 6.2 MiB/s wr, 170 op/s
Oct 11 08:50:27 compute-0 ceph-mon[74313]: osdmap e172: 3 total, 3 up, 3 in
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.799 2 INFO nova.virt.libvirt.driver [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Snapshot image upload complete
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.800 2 INFO nova.compute.manager [None req-18a12c87-f15e-4353-88fa-1b82f2fee926 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Took 4.28 seconds to snapshot the instance on the hypervisor.
Oct 11 08:50:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 658 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 9.1 MiB/s rd, 16 MiB/s wr, 482 op/s
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.866 2 INFO nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Deleting instance files /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_del
Oct 11 08:50:27 compute-0 nova_compute[260935]: 2025-10-11 08:50:27.868 2 INFO nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Deletion of /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_del complete
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.044 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.044 2 INFO nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Creating image(s)
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.066 2 DEBUG nova.storage.rbd_utils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.091 2 DEBUG nova.storage.rbd_utils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.114 2 DEBUG nova.storage.rbd_utils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.118 2 DEBUG oslo_concurrency.processutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.201 2 DEBUG oslo_concurrency.processutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.202 2 DEBUG oslo_concurrency.lockutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.202 2 DEBUG oslo_concurrency.lockutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.202 2 DEBUG oslo_concurrency.lockutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.224 2 DEBUG nova.storage.rbd_utils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.227 2 DEBUG oslo_concurrency.processutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.347 2 DEBUG nova.network.neutron [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updated VIF entry in instance network info cache for port f045b3aa-3ff6-4dea-ad61-a59a01735124. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.349 2 DEBUG nova.network.neutron [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updating instance_info_cache with network_info: [{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.371 2 DEBUG oslo_concurrency.lockutils [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.371 2 DEBUG nova.compute.manager [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.371 2 DEBUG oslo_concurrency.lockutils [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.372 2 DEBUG oslo_concurrency.lockutils [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.372 2 DEBUG oslo_concurrency.lockutils [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.372 2 DEBUG nova.compute.manager [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] No waiting events found dispatching network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.372 2 WARNING nova.compute.manager [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received unexpected event network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 for instance with vm_state active and task_state None.
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.373 2 DEBUG nova.compute.manager [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.373 2 DEBUG oslo_concurrency.lockutils [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.373 2 DEBUG oslo_concurrency.lockutils [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.373 2 DEBUG oslo_concurrency.lockutils [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.374 2 DEBUG nova.compute.manager [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] No waiting events found dispatching network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.374 2 WARNING nova.compute.manager [req-1fc696f3-59ef-4390-a3db-4767007bd053 req-9803d2b2-25d4-4845-8526-22acc68ef111 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received unexpected event network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 for instance with vm_state active and task_state None.
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.569 2 DEBUG oslo_concurrency.processutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.645 2 DEBUG nova.storage.rbd_utils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] resizing rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.773 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.774 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Ensure instance console log exists: /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.774 2 DEBUG oslo_concurrency.lockutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.775 2 DEBUG oslo_concurrency.lockutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.775 2 DEBUG oslo_concurrency.lockutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.779 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Start _get_guest_xml network_info=[{"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.786 2 WARNING nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.792 2 DEBUG nova.virt.libvirt.host [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.793 2 DEBUG nova.virt.libvirt.host [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.796 2 DEBUG nova.virt.libvirt.host [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.797 2 DEBUG nova.virt.libvirt.host [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.798 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.798 2 DEBUG nova.virt.hardware [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.799 2 DEBUG nova.virt.hardware [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.799 2 DEBUG nova.virt.hardware [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.799 2 DEBUG nova.virt.hardware [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.800 2 DEBUG nova.virt.hardware [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.800 2 DEBUG nova.virt.hardware [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.800 2 DEBUG nova.virt.hardware [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.801 2 DEBUG nova.virt.hardware [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.801 2 DEBUG nova.virt.hardware [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.802 2 DEBUG nova.virt.hardware [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.802 2 DEBUG nova.virt.hardware [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.802 2 DEBUG nova.objects.instance [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.821 2 DEBUG oslo_concurrency.processutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.990 2 DEBUG oslo_concurrency.lockutils [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.991 2 DEBUG oslo_concurrency.lockutils [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:50:28 compute-0 nova_compute[260935]: 2025-10-11 08:50:28.991 2 DEBUG nova.network.neutron [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:50:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:50:29 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1448164505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.323 2 DEBUG oslo_concurrency.processutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.353 2 DEBUG nova.storage.rbd_utils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.360 2 DEBUG oslo_concurrency.processutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Oct 11 08:50:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Oct 11 08:50:29 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Oct 11 08:50:29 compute-0 ceph-mon[74313]: pgmap v1349: 321 pgs: 321 active+clean; 658 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 9.1 MiB/s rd, 16 MiB/s wr, 482 op/s
Oct 11 08:50:29 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1448164505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 658 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 9.1 MiB/s rd, 8.9 MiB/s wr, 278 op/s
Oct 11 08:50:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:50:29 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2835589156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.882 2 DEBUG oslo_concurrency.processutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.884 2 DEBUG nova.virt.libvirt.vif [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1988839763',display_name='tempest-ServersAdminTestJSON-server-1988839763',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1988839763',id=22,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:50:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-nxz1v0k9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON
-1756812845-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:27Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=77ff3a9d-3eb2-40ed-ad12-6367fd4e555f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.885 2 DEBUG nova.network.os_vif_util [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.886 2 DEBUG nova.network.os_vif_util [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.891 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:50:29 compute-0 nova_compute[260935]:   <uuid>77ff3a9d-3eb2-40ed-ad12-6367fd4e555f</uuid>
Oct 11 08:50:29 compute-0 nova_compute[260935]:   <name>instance-00000016</name>
Oct 11 08:50:29 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:50:29 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:50:29 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersAdminTestJSON-server-1988839763</nova:name>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:50:28</nova:creationTime>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:50:29 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:50:29 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:50:29 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:50:29 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:50:29 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:50:29 compute-0 nova_compute[260935]:         <nova:user uuid="a51c2680b31e40b1908642ef8795c6f0">tempest-ServersAdminTestJSON-1756812845-project-member</nova:user>
Oct 11 08:50:29 compute-0 nova_compute[260935]:         <nova:project uuid="39d3043a7835403392c659fbb2fe0b22">tempest-ServersAdminTestJSON-1756812845</nova:project>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:50:29 compute-0 nova_compute[260935]:         <nova:port uuid="a3944a31-9560-49ae-b2a5-caaf2736993a">
Oct 11 08:50:29 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:50:29 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:50:29 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <system>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <entry name="serial">77ff3a9d-3eb2-40ed-ad12-6367fd4e555f</entry>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <entry name="uuid">77ff3a9d-3eb2-40ed-ad12-6367fd4e555f</entry>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     </system>
Oct 11 08:50:29 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:50:29 compute-0 nova_compute[260935]:   <os>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:   </os>
Oct 11 08:50:29 compute-0 nova_compute[260935]:   <features>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:   </features>
Oct 11 08:50:29 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:50:29 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:50:29 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk">
Oct 11 08:50:29 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:50:29 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config">
Oct 11 08:50:29 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:50:29 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:bf:d8:0b"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <target dev="tapa3944a31-95"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/console.log" append="off"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <video>
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     </video>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:50:29 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:50:29 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:50:29 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:50:29 compute-0 nova_compute[260935]: </domain>
Oct 11 08:50:29 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.892 2 DEBUG nova.compute.manager [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Preparing to wait for external event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.892 2 DEBUG oslo_concurrency.lockutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.892 2 DEBUG oslo_concurrency.lockutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.893 2 DEBUG oslo_concurrency.lockutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.894 2 DEBUG nova.virt.libvirt.vif [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1988839763',display_name='tempest-ServersAdminTestJSON-server-1988839763',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1988839763',id=22,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:50:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-nxz1v0k9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON
-1756812845-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:27Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=77ff3a9d-3eb2-40ed-ad12-6367fd4e555f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.894 2 DEBUG nova.network.os_vif_util [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.895 2 DEBUG nova.network.os_vif_util [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.896 2 DEBUG os_vif [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.898 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.899 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3944a31-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3944a31-95, col_values=(('external_ids', {'iface-id': 'a3944a31-9560-49ae-b2a5-caaf2736993a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:d8:0b', 'vm-uuid': '77ff3a9d-3eb2-40ed-ad12-6367fd4e555f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:29 compute-0 NetworkManager[44960]: <info>  [1760172629.9059] manager: (tapa3944a31-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.916 2 INFO os_vif [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95')
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.954 2 DEBUG nova.compute.manager [req-3c004a00-eaa4-42a9-ae7b-3e5c3b758c59 req-c861d2eb-de0d-40ae-af9f-be0c76fbb008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.954 2 DEBUG oslo_concurrency.lockutils [req-3c004a00-eaa4-42a9-ae7b-3e5c3b758c59 req-c861d2eb-de0d-40ae-af9f-be0c76fbb008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.955 2 DEBUG oslo_concurrency.lockutils [req-3c004a00-eaa4-42a9-ae7b-3e5c3b758c59 req-c861d2eb-de0d-40ae-af9f-be0c76fbb008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.955 2 DEBUG oslo_concurrency.lockutils [req-3c004a00-eaa4-42a9-ae7b-3e5c3b758c59 req-c861d2eb-de0d-40ae-af9f-be0c76fbb008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.955 2 DEBUG nova.compute.manager [req-3c004a00-eaa4-42a9-ae7b-3e5c3b758c59 req-c861d2eb-de0d-40ae-af9f-be0c76fbb008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Processing event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.980 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.981 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.981 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] No VIF found with MAC fa:16:3e:bf:d8:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:50:29 compute-0 nova_compute[260935]: 2025-10-11 08:50:29.982 2 INFO nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Using config drive
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.013 2 DEBUG nova.storage.rbd_utils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.033 2 DEBUG nova.objects.instance [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.066 2 DEBUG nova.objects.instance [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'keypairs' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.068 2 DEBUG oslo_concurrency.lockutils [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.069 2 DEBUG oslo_concurrency.lockutils [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.069 2 DEBUG oslo_concurrency.lockutils [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.070 2 DEBUG oslo_concurrency.lockutils [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.070 2 DEBUG oslo_concurrency.lockutils [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.072 2 INFO nova.compute.manager [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Terminating instance
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.074 2 DEBUG nova.compute.manager [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.085 2 INFO nova.virt.libvirt.driver [-] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Instance destroyed successfully.
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.085 2 DEBUG nova.objects.instance [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'resources' on Instance uuid cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.096 2 DEBUG nova.virt.libvirt.vif [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1000984331',display_name='tempest-ImagesTestJSON-server-1000984331',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1000984331',id=30,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:50:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-ga7cei6c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:50:27Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=cb1503a2-bc9c-4faf-ab16-e7227f8c94f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.096 2 DEBUG nova.network.os_vif_util [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "31025494-a361-4c39-aa29-5841978f12e9", "address": "fa:16:3e:a8:8e:61", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31025494-a3", "ovs_interfaceid": "31025494-a361-4c39-aa29-5841978f12e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.097 2 DEBUG nova.network.os_vif_util [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:61,bridge_name='br-int',has_traffic_filtering=True,id=31025494-a361-4c39-aa29-5841978f12e9,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31025494-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.098 2 DEBUG os_vif [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:61,bridge_name='br-int',has_traffic_filtering=True,id=31025494-a361-4c39-aa29-5841978f12e9,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31025494-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.100 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31025494-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.110 2 INFO os_vif [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:8e:61,bridge_name='br-int',has_traffic_filtering=True,id=31025494-a361-4c39-aa29-5841978f12e9,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31025494-a3')
Oct 11 08:50:30 compute-0 ceph-mon[74313]: osdmap e173: 3 total, 3 up, 3 in
Oct 11 08:50:30 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2835589156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.544 2 INFO nova.virt.libvirt.driver [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Deleting instance files /var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_del
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.546 2 INFO nova.virt.libvirt.driver [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Deletion of /var/lib/nova/instances/cb1503a2-bc9c-4faf-ab16-e7227f8c94f7_del complete
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.621 2 INFO nova.compute.manager [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Took 0.55 seconds to destroy the instance on the hypervisor.
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.622 2 DEBUG oslo.service.loopingcall [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.622 2 DEBUG nova.compute.manager [-] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.623 2 DEBUG nova.network.neutron [-] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.737 2 INFO nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Creating config drive at /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.748 2 DEBUG oslo_concurrency.processutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphw4s882f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:30 compute-0 podman[298028]: 2025-10-11 08:50:30.824509591 +0000 UTC m=+0.116703882 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.915 2 DEBUG oslo_concurrency.processutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphw4s882f" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.946 2 DEBUG nova.storage.rbd_utils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] rbd image 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.950 2 DEBUG oslo_concurrency.processutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.991 2 INFO nova.network.neutron [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Port f045b3aa-3ff6-4dea-ad61-a59a01735124 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 11 08:50:30 compute-0 nova_compute[260935]: 2025-10-11 08:50:30.992 2 DEBUG nova.network.neutron [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updating instance_info_cache with network_info: [{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:31 compute-0 nova_compute[260935]: 2025-10-11 08:50:31.004 2 DEBUG nova.compute.manager [req-7499a0cf-0392-49fe-9518-2db932c85b78 req-b420ce8b-4c9c-4429-ae9a-97374b36f06b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:31 compute-0 nova_compute[260935]: 2025-10-11 08:50:31.004 2 DEBUG nova.compute.manager [req-7499a0cf-0392-49fe-9518-2db932c85b78 req-b420ce8b-4c9c-4429-ae9a-97374b36f06b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing instance network info cache due to event network-changed-ac842ebf-4fca-4930-a4d1-3e8a6760d441. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:50:31 compute-0 nova_compute[260935]: 2025-10-11 08:50:31.004 2 DEBUG oslo_concurrency.lockutils [req-7499a0cf-0392-49fe-9518-2db932c85b78 req-b420ce8b-4c9c-4429-ae9a-97374b36f06b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:50:31 compute-0 nova_compute[260935]: 2025-10-11 08:50:31.023 2 DEBUG oslo_concurrency.lockutils [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:31 compute-0 nova_compute[260935]: 2025-10-11 08:50:31.026 2 DEBUG oslo_concurrency.lockutils [req-7499a0cf-0392-49fe-9518-2db932c85b78 req-b420ce8b-4c9c-4429-ae9a-97374b36f06b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:50:31 compute-0 nova_compute[260935]: 2025-10-11 08:50:31.026 2 DEBUG nova.network.neutron [req-7499a0cf-0392-49fe-9518-2db932c85b78 req-b420ce8b-4c9c-4429-ae9a-97374b36f06b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Refreshing network info cache for port ac842ebf-4fca-4930-a4d1-3e8a6760d441 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:50:31 compute-0 nova_compute[260935]: 2025-10-11 08:50:31.068 2 DEBUG oslo_concurrency.lockutils [None req-f5b9d75c-fa68-46d5-a532-b168d70600ed 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-14711e39-46ca-4856-9c19-fa51b869064d-f045b3aa-3ff6-4dea-ad61-a59a01735124" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:31 compute-0 nova_compute[260935]: 2025-10-11 08:50:31.143 2 DEBUG oslo_concurrency.processutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:31 compute-0 nova_compute[260935]: 2025-10-11 08:50:31.144 2 INFO nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Deleting local config drive /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f/disk.config because it was imported into RBD.
Oct 11 08:50:31 compute-0 kernel: tapa3944a31-95: entered promiscuous mode
Oct 11 08:50:31 compute-0 NetworkManager[44960]: <info>  [1760172631.2123] manager: (tapa3944a31-95): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Oct 11 08:50:31 compute-0 nova_compute[260935]: 2025-10-11 08:50:31.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:31 compute-0 ovn_controller[152945]: 2025-10-11T08:50:31Z|00182|binding|INFO|Claiming lport a3944a31-9560-49ae-b2a5-caaf2736993a for this chassis.
Oct 11 08:50:31 compute-0 ovn_controller[152945]: 2025-10-11T08:50:31Z|00183|binding|INFO|a3944a31-9560-49ae-b2a5-caaf2736993a: Claiming fa:16:3e:bf:d8:0b 10.100.0.11
Oct 11 08:50:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:31.224 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:d8:0b 10.100.0.11'], port_security=['fa:16:3e:bf:d8:0b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '77ff3a9d-3eb2-40ed-ad12-6367fd4e555f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '7', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a3944a31-9560-49ae-b2a5-caaf2736993a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:31.225 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a3944a31-9560-49ae-b2a5-caaf2736993a in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 bound to our chassis
Oct 11 08:50:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:31.228 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5
Oct 11 08:50:31 compute-0 systemd-udevd[298099]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:50:31 compute-0 NetworkManager[44960]: <info>  [1760172631.2469] device (tapa3944a31-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:50:31 compute-0 NetworkManager[44960]: <info>  [1760172631.2476] device (tapa3944a31-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:50:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:31.250 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f807b4-178d-48f5-8561-e5f7fc4ac6e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:31 compute-0 ovn_controller[152945]: 2025-10-11T08:50:31Z|00184|binding|INFO|Setting lport a3944a31-9560-49ae-b2a5-caaf2736993a ovn-installed in OVS
Oct 11 08:50:31 compute-0 ovn_controller[152945]: 2025-10-11T08:50:31Z|00185|binding|INFO|Setting lport a3944a31-9560-49ae-b2a5-caaf2736993a up in Southbound
Oct 11 08:50:31 compute-0 nova_compute[260935]: 2025-10-11 08:50:31.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:31 compute-0 nova_compute[260935]: 2025-10-11 08:50:31.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:31 compute-0 systemd-machined[215705]: New machine qemu-34-instance-00000016.
Oct 11 08:50:31 compute-0 systemd[1]: Started Virtual Machine qemu-34-instance-00000016.
Oct 11 08:50:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:31.292 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f8a161be-b4b5-48b0-936d-5ea0526cc2a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:31.297 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ab9529-16f4-4b16-9cc9-4f4adcdec705]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:31.336 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6be8b12c-9a03-4251-9980-a904b15c16b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:31.365 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2fba7912-4bc5-43d3-bdc5-836ec81d5dbf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 18, 'rx_bytes': 1168, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 18, 'rx_bytes': 1168, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298114, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:31.394 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[357c10bf-13d4-4be1-b073-cf6ed55e5035]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439130, 'tstamp': 439130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298116, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439135, 'tstamp': 439135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298116, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:31.396 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:31 compute-0 nova_compute[260935]: 2025-10-11 08:50:31.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:31 compute-0 nova_compute[260935]: 2025-10-11 08:50:31.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:31.399 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:31.400 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:31.400 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:31.400 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:31 compute-0 ceph-mon[74313]: pgmap v1351: 321 pgs: 321 active+clean; 658 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 9.1 MiB/s rd, 8.9 MiB/s wr, 278 op/s
Oct 11 08:50:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 658 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 7.6 MiB/s wr, 236 op/s
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.057 2 DEBUG nova.network.neutron [-] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.073 2 INFO nova.compute.manager [-] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Took 1.45 seconds to deallocate network for instance.
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.113 2 DEBUG oslo_concurrency.lockutils [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.114 2 DEBUG oslo_concurrency.lockutils [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:32 compute-0 ovn_controller[152945]: 2025-10-11T08:50:32Z|00186|binding|INFO|Releasing lport 424305ea-6b47-4134-ad52-ee2a450e204c from this chassis (sb_readonly=0)
Oct 11 08:50:32 compute-0 ovn_controller[152945]: 2025-10-11T08:50:32Z|00187|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.275 2 DEBUG oslo_concurrency.lockutils [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-b35f4147-9e36-4dab-9ac8-2061c97797f2-f045b3aa-3ff6-4dea-ad61-a59a01735124" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.276 2 DEBUG oslo_concurrency.lockutils [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-b35f4147-9e36-4dab-9ac8-2061c97797f2-f045b3aa-3ff6-4dea-ad61-a59a01735124" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.276 2 DEBUG nova.objects.instance [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid b35f4147-9e36-4dab-9ac8-2061c97797f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.322 2 DEBUG oslo_concurrency.processutils [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.604 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.605 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172632.603292, 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.606 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] VM Started (Lifecycle Event)
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.610 2 DEBUG nova.compute.manager [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.615 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.627 2 INFO nova.virt.libvirt.driver [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance spawned successfully.
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.627 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.633 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.639 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.653 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.654 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.655 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.655 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.656 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.657 2 DEBUG nova.virt.libvirt.driver [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.663 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.663 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172632.6055014, 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.663 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] VM Paused (Lifecycle Event)
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.706 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.712 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172632.6149218, 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.713 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] VM Resumed (Lifecycle Event)
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.737 2 DEBUG nova.compute.manager [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.739 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.753 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:50:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:50:32 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1741090320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.779 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.798 2 DEBUG oslo_concurrency.lockutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.800 2 DEBUG oslo_concurrency.processutils [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.805 2 DEBUG nova.compute.provider_tree [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.818 2 DEBUG nova.scheduler.client.report [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.837 2 DEBUG oslo_concurrency.lockutils [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.839 2 DEBUG oslo_concurrency.lockutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.839 2 DEBUG nova.objects.instance [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.890 2 INFO nova.scheduler.client.report [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Deleted allocations for instance cb1503a2-bc9c-4faf-ab16-e7227f8c94f7
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.908 2 DEBUG oslo_concurrency.lockutils [None req-658bfebf-d2cf-474e-bf77-56ab2cc2a636 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:32 compute-0 nova_compute[260935]: 2025-10-11 08:50:32.994 2 DEBUG oslo_concurrency.lockutils [None req-794458f2-9917-484b-b2a3-3b226201730e 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "cb1503a2-bc9c-4faf-ab16-e7227f8c94f7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.041 2 DEBUG nova.compute.manager [req-ab9c1df9-e1a2-4ba0-928d-4c8f831bac04 req-cf20b2c8-e24b-45c0-b69d-5c8d8fa8eb21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Received event network-vif-deleted-31025494-a361-4c39-aa29-5841978f12e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.227 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.228 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:50:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Oct 11 08:50:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.239 2 DEBUG nova.objects.instance [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'pci_requests' on Instance uuid b35f4147-9e36-4dab-9ac8-2061c97797f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:33 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.244 2 DEBUG nova.compute.manager [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.252 2 DEBUG nova.network.neutron [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.306 2 DEBUG nova.compute.manager [req-7f383469-bc77-444d-acc9-663e930b572d req-24f35bf6-eda0-4eab-b153-6261bff4e669 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-changed-c27797c3-6ac7-45ae-9a2e-7fc42908feab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.307 2 DEBUG nova.compute.manager [req-7f383469-bc77-444d-acc9-663e930b572d req-24f35bf6-eda0-4eab-b153-6261bff4e669 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Refreshing instance network info cache due to event network-changed-c27797c3-6ac7-45ae-9a2e-7fc42908feab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.307 2 DEBUG oslo_concurrency.lockutils [req-7f383469-bc77-444d-acc9-663e930b572d req-24f35bf6-eda0-4eab-b153-6261bff4e669 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.307 2 DEBUG oslo_concurrency.lockutils [req-7f383469-bc77-444d-acc9-663e930b572d req-24f35bf6-eda0-4eab-b153-6261bff4e669 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.307 2 DEBUG nova.network.neutron [req-7f383469-bc77-444d-acc9-663e930b572d req-24f35bf6-eda0-4eab-b153-6261bff4e669 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Refreshing network info cache for port c27797c3-6ac7-45ae-9a2e-7fc42908feab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.320 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.321 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.335 2 DEBUG nova.virt.hardware [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.337 2 INFO nova.compute.claims [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.434 2 DEBUG nova.network.neutron [req-7499a0cf-0392-49fe-9518-2db932c85b78 req-b420ce8b-4c9c-4429-ae9a-97374b36f06b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updated VIF entry in instance network info cache for port ac842ebf-4fca-4930-a4d1-3e8a6760d441. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.434 2 DEBUG nova.network.neutron [req-7499a0cf-0392-49fe-9518-2db932c85b78 req-b420ce8b-4c9c-4429-ae9a-97374b36f06b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updating instance_info_cache with network_info: [{"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.454 2 DEBUG oslo_concurrency.lockutils [req-7499a0cf-0392-49fe-9518-2db932c85b78 req-b420ce8b-4c9c-4429-ae9a-97374b36f06b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-14711e39-46ca-4856-9c19-fa51b869064d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:33 compute-0 ceph-mon[74313]: pgmap v1352: 321 pgs: 321 active+clean; 658 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 7.6 MiB/s wr, 236 op/s
Oct 11 08:50:33 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1741090320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:33 compute-0 ceph-mon[74313]: osdmap e174: 3 total, 3 up, 3 in
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.559 2 DEBUG oslo_concurrency.processutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:33 compute-0 nova_compute[260935]: 2025-10-11 08:50:33.698 2 DEBUG nova.policy [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34f29a5a135d45f597eeaa741009aa67', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eddb41c523294041b154a0a99c88e82b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:50:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 484 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 9.5 MiB/s wr, 403 op/s
Oct 11 08:50:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:50:34 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/266250651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.086 2 DEBUG oslo_concurrency.processutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.095 2 DEBUG nova.compute.provider_tree [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.116 2 DEBUG nova.scheduler.client.report [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.150 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.151 2 DEBUG nova.compute.manager [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.218 2 DEBUG nova.compute.manager [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.219 2 DEBUG nova.network.neutron [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.249 2 INFO nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.301 2 DEBUG nova.compute.manager [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.397 2 DEBUG nova.compute.manager [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.399 2 DEBUG nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.400 2 INFO nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Creating image(s)
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.443 2 DEBUG nova.storage.rbd_utils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:34 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/266250651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.491 2 DEBUG nova.storage.rbd_utils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.532 2 DEBUG nova.storage.rbd_utils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.539 2 DEBUG oslo_concurrency.processutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.588 2 DEBUG nova.policy [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1bab12893b9d49aabcb5ca19c9b951de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8c7604961214c6d9d49657535d799a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.641 2 DEBUG oslo_concurrency.processutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.642 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.643 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.643 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.672 2 DEBUG nova.storage.rbd_utils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.677 2 DEBUG oslo_concurrency.processutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.891 2 DEBUG nova.network.neutron [req-7f383469-bc77-444d-acc9-663e930b572d req-24f35bf6-eda0-4eab-b153-6261bff4e669 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updated VIF entry in instance network info cache for port c27797c3-6ac7-45ae-9a2e-7fc42908feab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.892 2 DEBUG nova.network.neutron [req-7f383469-bc77-444d-acc9-663e930b572d req-24f35bf6-eda0-4eab-b153-6261bff4e669 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updating instance_info_cache with network_info: [{"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.897 2 DEBUG nova.network.neutron [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Successfully updated port: f045b3aa-3ff6-4dea-ad61-a59a01735124 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.925 2 DEBUG oslo_concurrency.lockutils [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.925 2 DEBUG oslo_concurrency.lockutils [req-7f383469-bc77-444d-acc9-663e930b572d req-24f35bf6-eda0-4eab-b153-6261bff4e669 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.926 2 DEBUG oslo_concurrency.lockutils [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.926 2 DEBUG nova.network.neutron [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:50:34 compute-0 nova_compute[260935]: 2025-10-11 08:50:34.967 2 DEBUG oslo_concurrency.processutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.050 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172620.038435, 9f9aca1c-8e65-435a-bfae-1ff0d4386f58 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.051 2 INFO nova.compute.manager [-] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] VM Stopped (Lifecycle Event)
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.061 2 DEBUG nova.storage.rbd_utils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] resizing rbd image ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.103 2 WARNING nova.network.neutron [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] fff13396-b787-4c6e-9112-a1c2ef57b26d already exists in list: networks containing: ['fff13396-b787-4c6e-9112-a1c2ef57b26d']. ignoring it
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.109 2 DEBUG nova.compute.manager [None req-abb42863-3dad-4cae-9af4-db14b8bc6029 - - - - - -] [instance: 9f9aca1c-8e65-435a-bfae-1ff0d4386f58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.190 2 DEBUG nova.objects.instance [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'migration_context' on Instance uuid ac3851d8-5df2-4f84-9b28-a5fbf1c31b62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.208 2 DEBUG nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.209 2 DEBUG nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Ensure instance console log exists: /var/lib/nova/instances/ac3851d8-5df2-4f84-9b28-a5fbf1c31b62/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.210 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.210 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.211 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.333 2 DEBUG nova.network.neutron [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Successfully created port: d190526e-2bf1-4e6c-925a-2cc0c2b359a8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:50:35 compute-0 sudo[298370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:50:35 compute-0 sudo[298370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:35 compute-0 sudo[298370]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.376 2 DEBUG nova.compute.manager [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.377 2 DEBUG oslo_concurrency.lockutils [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.377 2 DEBUG oslo_concurrency.lockutils [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.377 2 DEBUG oslo_concurrency.lockutils [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.377 2 DEBUG nova.compute.manager [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] No waiting events found dispatching network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.378 2 WARNING nova.compute.manager [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received unexpected event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a for instance with vm_state error and task_state None.
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.378 2 DEBUG nova.compute.manager [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.378 2 DEBUG oslo_concurrency.lockutils [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.378 2 DEBUG oslo_concurrency.lockutils [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.378 2 DEBUG oslo_concurrency.lockutils [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.378 2 DEBUG nova.compute.manager [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] No waiting events found dispatching network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.379 2 WARNING nova.compute.manager [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received unexpected event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a for instance with vm_state error and task_state None.
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.379 2 DEBUG nova.compute.manager [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-changed-f045b3aa-3ff6-4dea-ad61-a59a01735124 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.379 2 DEBUG nova.compute.manager [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Refreshing instance network info cache due to event network-changed-f045b3aa-3ff6-4dea-ad61-a59a01735124. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.379 2 DEBUG oslo_concurrency.lockutils [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:50:35 compute-0 sudo[298395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:50:35 compute-0 sudo[298395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:35 compute-0 sudo[298395]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:35 compute-0 ceph-mon[74313]: pgmap v1354: 321 pgs: 321 active+clean; 484 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 9.5 MiB/s wr, 403 op/s
Oct 11 08:50:35 compute-0 sudo[298431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:50:35 compute-0 sudo[298431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:35 compute-0 sudo[298431]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:35 compute-0 podman[298419]: 2025-10-11 08:50:35.564368765 +0000 UTC m=+0.088796932 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 08:50:35 compute-0 sudo[298483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:50:35 compute-0 sudo[298483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:35 compute-0 podman[298420]: 2025-10-11 08:50:35.628345864 +0000 UTC m=+0.156193368 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.837 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172620.8333688, cb1503a2-bc9c-4faf-ab16-e7227f8c94f7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.837 2 INFO nova.compute.manager [-] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] VM Stopped (Lifecycle Event)
Oct 11 08:50:35 compute-0 nova_compute[260935]: 2025-10-11 08:50:35.858 2 DEBUG nova.compute.manager [None req-c09bbdd7-24c6-4587-bcf9-2a74124463eb - - - - - -] [instance: cb1503a2-bc9c-4faf-ab16-e7227f8c94f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 484 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 127 KiB/s rd, 2.7 MiB/s wr, 184 op/s
Oct 11 08:50:36 compute-0 sudo[298483]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:50:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:50:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:50:36 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:50:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:50:36 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:50:36 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 04a2cb2c-e83b-42dd-9568-20108c436e9e does not exist
Oct 11 08:50:36 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 4aef11a4-8ffa-4bb6-8498-6f6baa929111 does not exist
Oct 11 08:50:36 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a7042dd5-d2d9-4633-b9e3-8ce3ca9241a7 does not exist
Oct 11 08:50:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:50:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:50:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:50:36 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:50:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:50:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.348 2 DEBUG nova.network.neutron [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Successfully updated port: d190526e-2bf1-4e6c-925a-2cc0c2b359a8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.365 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "refresh_cache-ac3851d8-5df2-4f84-9b28-a5fbf1c31b62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.365 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquired lock "refresh_cache-ac3851d8-5df2-4f84-9b28-a5fbf1c31b62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.366 2 DEBUG nova.network.neutron [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:50:36 compute-0 sudo[298545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:50:36 compute-0 sudo[298545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:36 compute-0 sudo[298545]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:36 compute-0 ceph-mon[74313]: pgmap v1355: 321 pgs: 321 active+clean; 484 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 127 KiB/s rd, 2.7 MiB/s wr, 184 op/s
Oct 11 08:50:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:50:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:50:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:50:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:50:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:50:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:50:36 compute-0 sudo[298570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:50:36 compute-0 sudo[298570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:36 compute-0 sudo[298570]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.507 2 DEBUG nova.network.neutron [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:50:36 compute-0 sudo[298595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:50:36 compute-0 sudo[298595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:36 compute-0 sudo[298595]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:36 compute-0 sudo[298620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:50:36 compute-0 sudo[298620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.686 2 DEBUG oslo_concurrency.lockutils [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "90e56ca7-b26f-4f83-908d-75204ecd2533" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.687 2 DEBUG oslo_concurrency.lockutils [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.687 2 DEBUG oslo_concurrency.lockutils [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.688 2 DEBUG oslo_concurrency.lockutils [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.688 2 DEBUG oslo_concurrency.lockutils [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.692 2 INFO nova.compute.manager [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Terminating instance
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.694 2 DEBUG nova.compute.manager [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:50:36 compute-0 kernel: tap302c88cf-6e (unregistering): left promiscuous mode
Oct 11 08:50:36 compute-0 NetworkManager[44960]: <info>  [1760172636.7565] device (tap302c88cf-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:36 compute-0 ovn_controller[152945]: 2025-10-11T08:50:36Z|00188|binding|INFO|Releasing lport 302c88cf-6eba-4200-adfe-6c23d5e6078d from this chassis (sb_readonly=0)
Oct 11 08:50:36 compute-0 ovn_controller[152945]: 2025-10-11T08:50:36Z|00189|binding|INFO|Setting lport 302c88cf-6eba-4200-adfe-6c23d5e6078d down in Southbound
Oct 11 08:50:36 compute-0 ovn_controller[152945]: 2025-10-11T08:50:36Z|00190|binding|INFO|Removing iface tap302c88cf-6e ovn-installed in OVS
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:36.792 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:ed:97 10.100.0.10'], port_security=['fa:16:3e:52:ed:97 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '90e56ca7-b26f-4f83-908d-75204ecd2533', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=302c88cf-6eba-4200-adfe-6c23d5e6078d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:36.795 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 302c88cf-6eba-4200-adfe-6c23d5e6078d in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 unbound from our chassis
Oct 11 08:50:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:36.799 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:36.820 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9a4fbf-ac6d-430c-92ea-409bb18381bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:36 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Oct 11 08:50:36 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001b.scope: Consumed 15.431s CPU time.
Oct 11 08:50:36 compute-0 systemd-machined[215705]: Machine qemu-29-instance-0000001b terminated.
Oct 11 08:50:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:36.863 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c7f096-92df-437f-86a0-915e12354bc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:36.867 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c6456d8d-a928-4d17-9c3f-5d0089244f0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:36.907 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[74ffe0dd-e06a-4c2e-b448-46b3e366f9cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.937 2 INFO nova.virt.libvirt.driver [-] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Instance destroyed successfully.
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.938 2 DEBUG nova.objects.instance [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'resources' on Instance uuid 90e56ca7-b26f-4f83-908d-75204ecd2533 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:36.936 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ad867e72-e2f3-4bb5-86cf-01c7f1bf0a58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 20, 'rx_bytes': 1168, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 20, 'rx_bytes': 1168, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298689, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.957 2 DEBUG nova.virt.libvirt.vif [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1529647290',display_name='tempest-ServersAdminTestJSON-server-1529647290',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1529647290',id=27,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-j763nkmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:49Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=90e56ca7-b26f-4f83-908d-75204ecd2533,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.957 2 DEBUG nova.network.os_vif_util [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "address": "fa:16:3e:52:ed:97", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap302c88cf-6e", "ovs_interfaceid": "302c88cf-6eba-4200-adfe-6c23d5e6078d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:36.957 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1325f3-21b6-4048-b9f3-f4d1bbb42ce3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439130, 'tstamp': 439130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298701, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439135, 'tstamp': 439135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298701, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.958 2 DEBUG nova.network.os_vif_util [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:52:ed:97,bridge_name='br-int',has_traffic_filtering=True,id=302c88cf-6eba-4200-adfe-6c23d5e6078d,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap302c88cf-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.958 2 DEBUG os_vif [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:52:ed:97,bridge_name='br-int',has_traffic_filtering=True,id=302c88cf-6eba-4200-adfe-6c23d5e6078d,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap302c88cf-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:50:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:36.959 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.960 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap302c88cf-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:36.997 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:36.998 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:36.998 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:36 compute-0 nova_compute[260935]: 2025-10-11 08:50:36.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:36.999 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.003 2 INFO os_vif [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:52:ed:97,bridge_name='br-int',has_traffic_filtering=True,id=302c88cf-6eba-4200-adfe-6c23d5e6078d,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap302c88cf-6e')
Oct 11 08:50:37 compute-0 podman[298706]: 2025-10-11 08:50:37.072957207 +0000 UTC m=+0.059915265 container create 89b12e97ad4a4a4d733e87b10bee478f04844dba8e0a9a5789584a23675e9f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rubin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 08:50:37 compute-0 systemd[1]: Started libpod-conmon-89b12e97ad4a4a4d733e87b10bee478f04844dba8e0a9a5789584a23675e9f57.scope.
Oct 11 08:50:37 compute-0 podman[298706]: 2025-10-11 08:50:37.050315247 +0000 UTC m=+0.037273335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:50:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:37 compute-0 podman[298706]: 2025-10-11 08:50:37.168010836 +0000 UTC m=+0.154968904 container init 89b12e97ad4a4a4d733e87b10bee478f04844dba8e0a9a5789584a23675e9f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rubin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:50:37 compute-0 podman[298706]: 2025-10-11 08:50:37.17876001 +0000 UTC m=+0.165718088 container start 89b12e97ad4a4a4d733e87b10bee478f04844dba8e0a9a5789584a23675e9f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rubin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 08:50:37 compute-0 systemd[1]: libpod-89b12e97ad4a4a4d733e87b10bee478f04844dba8e0a9a5789584a23675e9f57.scope: Deactivated successfully.
Oct 11 08:50:37 compute-0 determined_rubin[298740]: 167 167
Oct 11 08:50:37 compute-0 podman[298706]: 2025-10-11 08:50:37.188549866 +0000 UTC m=+0.175507914 container attach 89b12e97ad4a4a4d733e87b10bee478f04844dba8e0a9a5789584a23675e9f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rubin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:50:37 compute-0 conmon[298740]: conmon 89b12e97ad4a4a4d733e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-89b12e97ad4a4a4d733e87b10bee478f04844dba8e0a9a5789584a23675e9f57.scope/container/memory.events
Oct 11 08:50:37 compute-0 podman[298706]: 2025-10-11 08:50:37.1893884 +0000 UTC m=+0.176346438 container died 89b12e97ad4a4a4d733e87b10bee478f04844dba8e0a9a5789584a23675e9f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rubin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:50:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-000e8aee8958416f38b8b6820328452ca781017c95026193317e0ee360aa988d-merged.mount: Deactivated successfully.
Oct 11 08:50:37 compute-0 podman[298706]: 2025-10-11 08:50:37.240502506 +0000 UTC m=+0.227460554 container remove 89b12e97ad4a4a4d733e87b10bee478f04844dba8e0a9a5789584a23675e9f57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rubin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 08:50:37 compute-0 systemd[1]: libpod-conmon-89b12e97ad4a4a4d733e87b10bee478f04844dba8e0a9a5789584a23675e9f57.scope: Deactivated successfully.
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.302 2 DEBUG nova.network.neutron [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updating instance_info_cache with network_info: [{"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.324 2 DEBUG oslo_concurrency.lockutils [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.325 2 DEBUG oslo_concurrency.lockutils [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.325 2 DEBUG nova.network.neutron [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Refreshing network info cache for port f045b3aa-3ff6-4dea-ad61-a59a01735124 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.330 2 DEBUG nova.virt.libvirt.vif [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1789236056',display_name='tempest-tempest.common.compute-instance-1789236056',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1789236056',id=29,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:50:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-pbhirm4h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:50:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=b35f4147-9e36-4dab-9ac8-2061c97797f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.330 2 DEBUG nova.network.os_vif_util [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.331 2 DEBUG nova.network.os_vif_util [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.331 2 DEBUG os_vif [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.332 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.332 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf045b3aa-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf045b3aa-3f, col_values=(('external_ids', {'iface-id': 'f045b3aa-3ff6-4dea-ad61-a59a01735124', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:3e:c0', 'vm-uuid': 'b35f4147-9e36-4dab-9ac8-2061c97797f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:37 compute-0 NetworkManager[44960]: <info>  [1760172637.3384] manager: (tapf045b3aa-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.347 2 INFO os_vif [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f')
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.348 2 DEBUG nova.virt.libvirt.vif [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1789236056',display_name='tempest-tempest.common.compute-instance-1789236056',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1789236056',id=29,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:50:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-pbhirm4h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:50:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=b35f4147-9e36-4dab-9ac8-2061c97797f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.348 2 DEBUG nova.network.os_vif_util [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.348 2 DEBUG nova.network.os_vif_util [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.354 2 DEBUG nova.virt.libvirt.guest [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] attach device xml: <interface type="ethernet">
Oct 11 08:50:37 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:ee:3e:c0"/>
Oct 11 08:50:37 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 08:50:37 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:50:37 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 08:50:37 compute-0 nova_compute[260935]:   <target dev="tapf045b3aa-3f"/>
Oct 11 08:50:37 compute-0 nova_compute[260935]: </interface>
Oct 11 08:50:37 compute-0 nova_compute[260935]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 08:50:37 compute-0 kernel: tapf045b3aa-3f: entered promiscuous mode
Oct 11 08:50:37 compute-0 systemd-udevd[298658]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:50:37 compute-0 ovn_controller[152945]: 2025-10-11T08:50:37Z|00191|binding|INFO|Claiming lport f045b3aa-3ff6-4dea-ad61-a59a01735124 for this chassis.
Oct 11 08:50:37 compute-0 NetworkManager[44960]: <info>  [1760172637.3720] manager: (tapf045b3aa-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Oct 11 08:50:37 compute-0 ovn_controller[152945]: 2025-10-11T08:50:37Z|00192|binding|INFO|f045b3aa-3ff6-4dea-ad61-a59a01735124: Claiming fa:16:3e:ee:3e:c0 10.100.0.7
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:37 compute-0 NetworkManager[44960]: <info>  [1760172637.3828] device (tapf045b3aa-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:50:37 compute-0 NetworkManager[44960]: <info>  [1760172637.3835] device (tapf045b3aa-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:50:37 compute-0 ovn_controller[152945]: 2025-10-11T08:50:37Z|00193|binding|INFO|Setting lport f045b3aa-3ff6-4dea-ad61-a59a01735124 ovn-installed in OVS
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:37.429 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3e:c0 10.100.0.7'], port_security=['fa:16:3e:ee:3e:c0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-350865697', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b35f4147-9e36-4dab-9ac8-2061c97797f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-350865697', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '7', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f045b3aa-3ff6-4dea-ad61-a59a01735124) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:37 compute-0 ovn_controller[152945]: 2025-10-11T08:50:37Z|00194|binding|INFO|Setting lport f045b3aa-3ff6-4dea-ad61-a59a01735124 up in Southbound
Oct 11 08:50:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:37.430 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f045b3aa-3ff6-4dea-ad61-a59a01735124 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d bound to our chassis
Oct 11 08:50:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:37.434 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:50:37 compute-0 ovn_controller[152945]: 2025-10-11T08:50:37Z|00195|binding|INFO|Releasing lport 424305ea-6b47-4134-ad52-ee2a450e204c from this chassis (sb_readonly=0)
Oct 11 08:50:37 compute-0 ovn_controller[152945]: 2025-10-11T08:50:37Z|00196|binding|INFO|Releasing lport 2a916b98-1e7b-4604-b1f0-e2f195b1c17e from this chassis (sb_readonly=0)
Oct 11 08:50:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:37.476 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e91d3f0a-b6e5-482b-bbbc-b2aa658a00c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:37.506 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4e6bbaab-207e-42d4-846d-ea5bf411d858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:37.509 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[558a9bb4-b974-4630-8efb-319376569fe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:37 compute-0 podman[298770]: 2025-10-11 08:50:37.513167437 +0000 UTC m=+0.070492835 container create 7c30740d886b87093352eb2071bd31d0c601e6a5c38d179448be8939f67c3c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poitras, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.537 2 INFO nova.virt.libvirt.driver [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Deleting instance files /var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533_del
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.538 2 INFO nova.virt.libvirt.driver [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Deletion of /var/lib/nova/instances/90e56ca7-b26f-4f83-908d-75204ecd2533_del complete
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.546 2 DEBUG nova.network.neutron [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Updating instance_info_cache with network_info: [{"id": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "address": "fa:16:3e:57:75:39", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd190526e-2b", "ovs_interfaceid": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:50:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3823410354' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:50:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:50:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3823410354' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.554 2 DEBUG nova.virt.libvirt.driver [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.555 2 DEBUG nova.virt.libvirt.driver [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.555 2 DEBUG nova.virt.libvirt.driver [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:d9:75:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.555 2 DEBUG nova.virt.libvirt.driver [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] No VIF found with MAC fa:16:3e:ee:3e:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:50:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:37.560 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8651d1bc-8681-4280-9eb1-eb5d03e3f27f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:37 compute-0 systemd[1]: Started libpod-conmon-7c30740d886b87093352eb2071bd31d0c601e6a5c38d179448be8939f67c3c51.scope.
Oct 11 08:50:37 compute-0 podman[298770]: 2025-10-11 08:50:37.485451293 +0000 UTC m=+0.042776781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:50:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:37.579 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[52458ba8-1c0d-4b80-8417-6866defcc01d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443120, 'reachable_time': 39241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298792, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6187bdca455a5fe6729f7c759cb35fb5376a5903e3752ce52834441335acc108/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6187bdca455a5fe6729f7c759cb35fb5376a5903e3752ce52834441335acc108/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6187bdca455a5fe6729f7c759cb35fb5376a5903e3752ce52834441335acc108/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6187bdca455a5fe6729f7c759cb35fb5376a5903e3752ce52834441335acc108/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6187bdca455a5fe6729f7c759cb35fb5376a5903e3752ce52834441335acc108/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:50:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:37.608 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1660bac5-2062-4a3d-bf5e-b5c877a6c9ec]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443136, 'tstamp': 443136}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298795, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443140, 'tstamp': 443140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298795, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:37.610 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3823410354' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:50:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3823410354' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:37.623 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:37.623 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:37.623 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:37.624 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:37 compute-0 podman[298770]: 2025-10-11 08:50:37.62715667 +0000 UTC m=+0.184482088 container init 7c30740d886b87093352eb2071bd31d0c601e6a5c38d179448be8939f67c3c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:50:37 compute-0 podman[298770]: 2025-10-11 08:50:37.636169155 +0000 UTC m=+0.193494563 container start 7c30740d886b87093352eb2071bd31d0c601e6a5c38d179448be8939f67c3c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poitras, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 08:50:37 compute-0 podman[298770]: 2025-10-11 08:50:37.639787018 +0000 UTC m=+0.197112426 container attach 7c30740d886b87093352eb2071bd31d0c601e6a5c38d179448be8939f67c3c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poitras, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.684 2 DEBUG nova.compute.manager [req-2a3997f8-18d6-4a9c-944c-288aff9d78a2 req-29b9ebe6-113b-4f87-b5a6-dabbec6fe588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Received event network-changed-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.684 2 DEBUG nova.compute.manager [req-2a3997f8-18d6-4a9c-944c-288aff9d78a2 req-29b9ebe6-113b-4f87-b5a6-dabbec6fe588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Refreshing instance network info cache due to event network-changed-d190526e-2bf1-4e6c-925a-2cc0c2b359a8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.684 2 DEBUG oslo_concurrency.lockutils [req-2a3997f8-18d6-4a9c-944c-288aff9d78a2 req-29b9ebe6-113b-4f87-b5a6-dabbec6fe588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ac3851d8-5df2-4f84-9b28-a5fbf1c31b62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.786 2 DEBUG nova.virt.libvirt.guest [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:50:37 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:50:37 compute-0 nova_compute[260935]:   <nova:name>tempest-tempest.common.compute-instance-1789236056</nova:name>
Oct 11 08:50:37 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:50:37</nova:creationTime>
Oct 11 08:50:37 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:50:37 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:50:37 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:50:37 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:50:37 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:50:37 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:50:37 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:50:37 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:50:37 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:50:37 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:50:37 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:50:37 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:50:37 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:50:37 compute-0 nova_compute[260935]:     <nova:port uuid="c27797c3-6ac7-45ae-9a2e-7fc42908feab">
Oct 11 08:50:37 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 08:50:37 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:50:37 compute-0 nova_compute[260935]:     <nova:port uuid="f045b3aa-3ff6-4dea-ad61-a59a01735124">
Oct 11 08:50:37 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 08:50:37 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:50:37 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:50:37 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:50:37 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 08:50:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 451 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.1 MiB/s wr, 341 op/s
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.970 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Releasing lock "refresh_cache-ac3851d8-5df2-4f84-9b28-a5fbf1c31b62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.971 2 DEBUG nova.compute.manager [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Instance network_info: |[{"id": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "address": "fa:16:3e:57:75:39", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd190526e-2b", "ovs_interfaceid": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.972 2 DEBUG oslo_concurrency.lockutils [req-2a3997f8-18d6-4a9c-944c-288aff9d78a2 req-29b9ebe6-113b-4f87-b5a6-dabbec6fe588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ac3851d8-5df2-4f84-9b28-a5fbf1c31b62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.973 2 DEBUG nova.network.neutron [req-2a3997f8-18d6-4a9c-944c-288aff9d78a2 req-29b9ebe6-113b-4f87-b5a6-dabbec6fe588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Refreshing network info cache for port d190526e-2bf1-4e6c-925a-2cc0c2b359a8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.979 2 DEBUG nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Start _get_guest_xml network_info=[{"id": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "address": "fa:16:3e:57:75:39", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd190526e-2b", "ovs_interfaceid": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.985 2 DEBUG oslo_concurrency.lockutils [None req-4f0da930-4538-4373-8291-fbc1ddfbf0ef 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-b35f4147-9e36-4dab-9ac8-2061c97797f2-f045b3aa-3ff6-4dea-ad61-a59a01735124" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.994 2 INFO nova.compute.manager [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Took 1.30 seconds to destroy the instance on the hypervisor.
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.994 2 DEBUG oslo.service.loopingcall [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.995 2 DEBUG nova.compute.manager [-] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:50:37 compute-0 nova_compute[260935]: 2025-10-11 08:50:37.996 2 DEBUG nova.network.neutron [-] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.002 2 WARNING nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.009 2 DEBUG nova.virt.libvirt.host [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.011 2 DEBUG nova.virt.libvirt.host [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.015 2 DEBUG nova.virt.libvirt.host [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.016 2 DEBUG nova.virt.libvirt.host [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.017 2 DEBUG nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.017 2 DEBUG nova.virt.hardware [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.018 2 DEBUG nova.virt.hardware [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.019 2 DEBUG nova.virt.hardware [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.019 2 DEBUG nova.virt.hardware [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.020 2 DEBUG nova.virt.hardware [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.020 2 DEBUG nova.virt.hardware [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.021 2 DEBUG nova.virt.hardware [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.021 2 DEBUG nova.virt.hardware [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.022 2 DEBUG nova.virt.hardware [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.022 2 DEBUG nova.virt.hardware [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.023 2 DEBUG nova.virt.hardware [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.028 2 DEBUG oslo_concurrency.processutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:50:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Oct 11 08:50:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Oct 11 08:50:38 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.388 2 DEBUG nova.compute.manager [req-de238b66-cea4-43ea-9944-4b193a6a0f1b req-2039087f-c1bf-439a-bcb6-be0e523cd7b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.390 2 DEBUG oslo_concurrency.lockutils [req-de238b66-cea4-43ea-9944-4b193a6a0f1b req-2039087f-c1bf-439a-bcb6-be0e523cd7b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.391 2 DEBUG oslo_concurrency.lockutils [req-de238b66-cea4-43ea-9944-4b193a6a0f1b req-2039087f-c1bf-439a-bcb6-be0e523cd7b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.392 2 DEBUG oslo_concurrency.lockutils [req-de238b66-cea4-43ea-9944-4b193a6a0f1b req-2039087f-c1bf-439a-bcb6-be0e523cd7b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.392 2 DEBUG nova.compute.manager [req-de238b66-cea4-43ea-9944-4b193a6a0f1b req-2039087f-c1bf-439a-bcb6-be0e523cd7b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] No waiting events found dispatching network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.393 2 WARNING nova.compute.manager [req-de238b66-cea4-43ea-9944-4b193a6a0f1b req-2039087f-c1bf-439a-bcb6-be0e523cd7b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received unexpected event network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 for instance with vm_state active and task_state None.
Oct 11 08:50:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:50:38 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/660154896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.591 2 DEBUG oslo_concurrency.processutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.625 2 DEBUG nova.storage.rbd_utils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.634 2 DEBUG oslo_concurrency.processutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:38 compute-0 ovn_controller[152945]: 2025-10-11T08:50:38Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ee:3e:c0 10.100.0.7
Oct 11 08:50:38 compute-0 ovn_controller[152945]: 2025-10-11T08:50:38Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ee:3e:c0 10.100.0.7
Oct 11 08:50:38 compute-0 quirky_poitras[298791]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:50:38 compute-0 quirky_poitras[298791]: --> relative data size: 1.0
Oct 11 08:50:38 compute-0 quirky_poitras[298791]: --> All data devices are unavailable
Oct 11 08:50:38 compute-0 systemd[1]: libpod-7c30740d886b87093352eb2071bd31d0c601e6a5c38d179448be8939f67c3c51.scope: Deactivated successfully.
Oct 11 08:50:38 compute-0 systemd[1]: libpod-7c30740d886b87093352eb2071bd31d0c601e6a5c38d179448be8939f67c3c51.scope: Consumed 1.118s CPU time.
Oct 11 08:50:38 compute-0 podman[298882]: 2025-10-11 08:50:38.94680414 +0000 UTC m=+0.036433771 container died 7c30740d886b87093352eb2071bd31d0c601e6a5c38d179448be8939f67c3c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.948 2 DEBUG nova.network.neutron [-] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:38 compute-0 nova_compute[260935]: 2025-10-11 08:50:38.977 2 INFO nova.compute.manager [-] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Took 0.98 seconds to deallocate network for instance.
Oct 11 08:50:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6187bdca455a5fe6729f7c759cb35fb5376a5903e3752ce52834441335acc108-merged.mount: Deactivated successfully.
Oct 11 08:50:39 compute-0 podman[298882]: 2025-10-11 08:50:39.016127411 +0000 UTC m=+0.105757012 container remove 7c30740d886b87093352eb2071bd31d0c601e6a5c38d179448be8939f67c3c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_poitras, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 08:50:39 compute-0 systemd[1]: libpod-conmon-7c30740d886b87093352eb2071bd31d0c601e6a5c38d179448be8939f67c3c51.scope: Deactivated successfully.
Oct 11 08:50:39 compute-0 sudo[298620]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:50:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4128355360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.077 2 DEBUG oslo_concurrency.lockutils [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.078 2 DEBUG oslo_concurrency.lockutils [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.081 2 DEBUG oslo_concurrency.processutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.082 2 DEBUG nova.virt.libvirt.vif [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1714775256',display_name='tempest-ImagesTestJSON-server-1714775256',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1714775256',id=31,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-wz17gid5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:34Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=ac3851d8-5df2-4f84-9b28-a5fbf1c31b62,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "address": "fa:16:3e:57:75:39", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd190526e-2b", "ovs_interfaceid": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.083 2 DEBUG nova.network.os_vif_util [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "address": "fa:16:3e:57:75:39", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd190526e-2b", "ovs_interfaceid": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.084 2 DEBUG nova.network.os_vif_util [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:75:39,bridge_name='br-int',has_traffic_filtering=True,id=d190526e-2bf1-4e6c-925a-2cc0c2b359a8,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd190526e-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.086 2 DEBUG nova.objects.instance [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid ac3851d8-5df2-4f84-9b28-a5fbf1c31b62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.092 2 DEBUG oslo_concurrency.lockutils [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "interface-b35f4147-9e36-4dab-9ac8-2061c97797f2-f045b3aa-3ff6-4dea-ad61-a59a01735124" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.092 2 DEBUG oslo_concurrency.lockutils [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-b35f4147-9e36-4dab-9ac8-2061c97797f2-f045b3aa-3ff6-4dea-ad61-a59a01735124" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.108 2 DEBUG nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <uuid>ac3851d8-5df2-4f84-9b28-a5fbf1c31b62</uuid>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <name>instance-0000001f</name>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <nova:name>tempest-ImagesTestJSON-server-1714775256</nova:name>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:50:38</nova:creationTime>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <nova:user uuid="1bab12893b9d49aabcb5ca19c9b951de">tempest-ImagesTestJSON-694493184-project-member</nova:user>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <nova:project uuid="f8c7604961214c6d9d49657535d799a5">tempest-ImagesTestJSON-694493184</nova:project>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <nova:port uuid="d190526e-2bf1-4e6c-925a-2cc0c2b359a8">
Oct 11 08:50:39 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <system>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name="serial">ac3851d8-5df2-4f84-9b28-a5fbf1c31b62</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name="uuid">ac3851d8-5df2-4f84-9b28-a5fbf1c31b62</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </system>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <os>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </os>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <features>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </features>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk">
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk.config">
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:57:75:39"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target dev="tapd190526e-2b"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/ac3851d8-5df2-4f84-9b28-a5fbf1c31b62/console.log" append="off"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <video>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </video>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:50:39 compute-0 nova_compute[260935]: </domain>
Oct 11 08:50:39 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.109 2 DEBUG nova.compute.manager [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Preparing to wait for external event network-vif-plugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.109 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.109 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.110 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.110 2 DEBUG nova.virt.libvirt.vif [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1714775256',display_name='tempest-ImagesTestJSON-server-1714775256',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1714775256',id=31,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-wz17gid5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:34Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=ac3851d8-5df2-4f84-9b28-a5fbf1c31b62,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "address": "fa:16:3e:57:75:39", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd190526e-2b", "ovs_interfaceid": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.111 2 DEBUG nova.network.os_vif_util [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "address": "fa:16:3e:57:75:39", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd190526e-2b", "ovs_interfaceid": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.112 2 DEBUG nova.network.os_vif_util [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:75:39,bridge_name='br-int',has_traffic_filtering=True,id=d190526e-2bf1-4e6c-925a-2cc0c2b359a8,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd190526e-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.112 2 DEBUG os_vif [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:75:39,bridge_name='br-int',has_traffic_filtering=True,id=d190526e-2bf1-4e6c-925a-2cc0c2b359a8,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd190526e-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.113 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.117 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd190526e-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.118 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd190526e-2b, col_values=(('external_ids', {'iface-id': 'd190526e-2bf1-4e6c-925a-2cc0c2b359a8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:75:39', 'vm-uuid': 'ac3851d8-5df2-4f84-9b28-a5fbf1c31b62'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:39 compute-0 NetworkManager[44960]: <info>  [1760172639.1211] manager: (tapd190526e-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.125 2 DEBUG nova.objects.instance [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'flavor' on Instance uuid b35f4147-9e36-4dab-9ac8-2061c97797f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.132 2 INFO os_vif [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:75:39,bridge_name='br-int',has_traffic_filtering=True,id=d190526e-2bf1-4e6c-925a-2cc0c2b359a8,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd190526e-2b')
Oct 11 08:50:39 compute-0 sudo[298898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:50:39 compute-0 sudo[298898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:39 compute-0 sudo[298898]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.157 2 DEBUG nova.virt.libvirt.vif [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1789236056',display_name='tempest-tempest.common.compute-instance-1789236056',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1789236056',id=29,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:50:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-pbhirm4h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:50:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=b35f4147-9e36-4dab-9ac8-2061c97797f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.158 2 DEBUG nova.network.os_vif_util [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.159 2 DEBUG nova.network.os_vif_util [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.166 2 DEBUG nova.virt.libvirt.guest [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.169 2 DEBUG nova.virt.libvirt.guest [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.178 2 DEBUG nova.virt.libvirt.driver [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Attempting to detach device tapf045b3aa-3f from instance b35f4147-9e36-4dab-9ac8-2061c97797f2 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.178 2 DEBUG nova.virt.libvirt.guest [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] detach device xml: <interface type="ethernet">
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:ee:3e:c0"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <target dev="tapf045b3aa-3f"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]: </interface>
Oct 11 08:50:39 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.184 2 DEBUG nova.virt.libvirt.guest [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.194 2 DEBUG nova.virt.libvirt.guest [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface>not found in domain: <domain type='kvm' id='31'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <name>instance-0000001d</name>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <uuid>b35f4147-9e36-4dab-9ac8-2061c97797f2</uuid>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:name>tempest-tempest.common.compute-instance-1789236056</nova:name>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:50:37</nova:creationTime>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:port uuid="c27797c3-6ac7-45ae-9a2e-7fc42908feab">
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:port uuid="f045b3aa-3ff6-4dea-ad61-a59a01735124">
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:50:39 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <resource>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <partition>/machine</partition>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </resource>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <system>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name='serial'>b35f4147-9e36-4dab-9ac8-2061c97797f2</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name='uuid'>b35f4147-9e36-4dab-9ac8-2061c97797f2</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </system>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <os>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </os>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <features>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </features>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <cpu mode='custom' match='exact' check='full'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <vendor>AMD</vendor>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='x2apic'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc-deadline'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='hypervisor'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc_adjust'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='spec-ctrl'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='stibp'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='arch-capabilities'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='ssbd'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='cmp_legacy'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='overflow-recov'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='succor'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='ibrs'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='amd-ssbd'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='virt-ssbd'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='lbrv'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='tsc-scale'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='vmcb-clean'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='flushbyasid'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='pause-filter'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='pfthreshold'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='rdctl-no'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='mds-no'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='pschange-mc-no'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='gds-no'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='rfds-no'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='xsaves'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='svm'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='topoext'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='npt'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='nrip-save'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/b35f4147-9e36-4dab-9ac8-2061c97797f2_disk' index='2'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='virtio-disk0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/b35f4147-9e36-4dab-9ac8-2061c97797f2_disk.config' index='1'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='sata0-0-0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pcie.0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.1'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.2'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.3'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.4'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.5'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.6'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.7'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.8'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.9'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.10'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.11'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.12'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.13'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.14'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.15'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.16'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.17'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.18'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.19'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.20'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.21'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.22'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.23'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.24'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.25'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.26'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='usb'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='ide'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:d9:75:86'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target dev='tapc27797c3-6a'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='net0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:ee:3e:c0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target dev='tapf045b3aa-3f'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='net1'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <source path='/dev/pts/0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/console.log' append='off'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       </target>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <console type='pty' tty='/dev/pts/0'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <source path='/dev/pts/0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/console.log' append='off'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </console>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='input0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </input>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='input1'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </input>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='input2'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </input>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <graphics type='vnc' port='5906' autoport='yes' listen='::0'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </graphics>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <video>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='video0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </video>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='watchdog0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </watchdog>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='balloon0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='rng0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <label>system_u:system_r:svirt_t:s0:c39,c884</label>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c39,c884</imagelabel>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <label>+107:+107</label>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <imagelabel>+107:+107</imagelabel>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 08:50:39 compute-0 nova_compute[260935]: </domain>
Oct 11 08:50:39 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.194 2 INFO nova.virt.libvirt.driver [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully detached device tapf045b3aa-3f from instance b35f4147-9e36-4dab-9ac8-2061c97797f2 from the persistent domain config.
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.194 2 DEBUG nova.virt.libvirt.driver [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] (1/8): Attempting to detach device tapf045b3aa-3f with device alias net1 from instance b35f4147-9e36-4dab-9ac8-2061c97797f2 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.195 2 DEBUG nova.virt.libvirt.guest [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] detach device xml: <interface type="ethernet">
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:ee:3e:c0"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <target dev="tapf045b3aa-3f"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]: </interface>
Oct 11 08:50:39 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 08:50:39 compute-0 sudo[298927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.226 2 DEBUG nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.226 2 DEBUG nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.226 2 DEBUG nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No VIF found with MAC fa:16:3e:57:75:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.227 2 INFO nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Using config drive
Oct 11 08:50:39 compute-0 sudo[298927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:39 compute-0 sudo[298927]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:39 compute-0 ceph-mon[74313]: pgmap v1356: 321 pgs: 321 active+clean; 451 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.1 MiB/s wr, 341 op/s
Oct 11 08:50:39 compute-0 ceph-mon[74313]: osdmap e175: 3 total, 3 up, 3 in
Oct 11 08:50:39 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/660154896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:39 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4128355360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.256 2 DEBUG nova.storage.rbd_utils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:39 compute-0 sudo[298967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:50:39 compute-0 sudo[298967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:39 compute-0 sudo[298967]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:39 compute-0 kernel: tapf045b3aa-3f (unregistering): left promiscuous mode
Oct 11 08:50:39 compute-0 NetworkManager[44960]: <info>  [1760172639.3320] device (tapf045b3aa-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.352 2 DEBUG oslo_concurrency.processutils [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.386 2 DEBUG nova.network.neutron [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updated VIF entry in instance network info cache for port f045b3aa-3ff6-4dea-ad61-a59a01735124. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.388 2 DEBUG nova.network.neutron [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updating instance_info_cache with network_info: [{"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:39 compute-0 ovn_controller[152945]: 2025-10-11T08:50:39Z|00197|binding|INFO|Releasing lport f045b3aa-3ff6-4dea-ad61-a59a01735124 from this chassis (sb_readonly=0)
Oct 11 08:50:39 compute-0 ovn_controller[152945]: 2025-10-11T08:50:39Z|00198|binding|INFO|Setting lport f045b3aa-3ff6-4dea-ad61-a59a01735124 down in Southbound
Oct 11 08:50:39 compute-0 ovn_controller[152945]: 2025-10-11T08:50:39Z|00199|binding|INFO|Removing iface tapf045b3aa-3f ovn-installed in OVS
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.397 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Received event <DeviceRemovedEvent: 1760172639.393127, b35f4147-9e36-4dab-9ac8-2061c97797f2 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 08:50:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:39.400 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3e:c0 10.100.0.7'], port_security=['fa:16:3e:ee:3e:c0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-350865697', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b35f4147-9e36-4dab-9ac8-2061c97797f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-350865697', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '9', 'neutron:security_group_ids': '9a83c3d0-687d-44b7-980a-bde786b1b429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f045b3aa-3ff6-4dea-ad61-a59a01735124) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.399 2 DEBUG nova.virt.libvirt.driver [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Start waiting for the detach event from libvirt for device tapf045b3aa-3f with device alias net1 for instance b35f4147-9e36-4dab-9ac8-2061c97797f2 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.400 2 DEBUG nova.virt.libvirt.guest [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 08:50:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:39.402 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f045b3aa-3ff6-4dea-ad61-a59a01735124 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d unbound from our chassis
Oct 11 08:50:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:39.405 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.417 2 DEBUG oslo_concurrency.lockutils [req-8b8b848e-9e40-4df1-97ad-3d57692807d5 req-40819be8-c07a-406a-9d4e-072fa21da666 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.418 2 DEBUG nova.virt.libvirt.guest [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ee:3e:c0"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf045b3aa-3f"/></interface>not found in domain: <domain type='kvm' id='31'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <name>instance-0000001d</name>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <uuid>b35f4147-9e36-4dab-9ac8-2061c97797f2</uuid>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:name>tempest-tempest.common.compute-instance-1789236056</nova:name>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:50:37</nova:creationTime>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:port uuid="c27797c3-6ac7-45ae-9a2e-7fc42908feab">
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:port uuid="f045b3aa-3ff6-4dea-ad61-a59a01735124">
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:50:39 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <resource>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <partition>/machine</partition>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </resource>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <system>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name='serial'>b35f4147-9e36-4dab-9ac8-2061c97797f2</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name='uuid'>b35f4147-9e36-4dab-9ac8-2061c97797f2</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </system>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <os>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </os>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <features>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </features>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <cpu mode='custom' match='exact' check='full'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <vendor>AMD</vendor>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='x2apic'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc-deadline'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='hypervisor'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc_adjust'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='spec-ctrl'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='stibp'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='arch-capabilities'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='ssbd'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='cmp_legacy'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='overflow-recov'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='succor'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='ibrs'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='amd-ssbd'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='virt-ssbd'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='lbrv'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='tsc-scale'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='vmcb-clean'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='flushbyasid'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='pause-filter'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='pfthreshold'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='rdctl-no'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='mds-no'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='pschange-mc-no'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='gds-no'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='rfds-no'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='xsaves'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='svm'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='require' name='topoext'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='npt'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <feature policy='disable' name='nrip-save'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/b35f4147-9e36-4dab-9ac8-2061c97797f2_disk' index='2'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='virtio-disk0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/b35f4147-9e36-4dab-9ac8-2061c97797f2_disk.config' index='1'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       </source>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='sata0-0-0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pcie.0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.1'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.2'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.3'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.4'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.5'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.6'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.7'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.8'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 08:50:39 compute-0 sudo[298995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.9'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.10'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.11'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.12'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.13'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.14'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.15'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.16'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.17'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.18'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.19'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.20'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.21'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.22'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.23'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.24'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.25'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='pci.26'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='usb'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='ide'/>
Oct 11 08:50:39 compute-0 sudo[298995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </controller>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:d9:75:86'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target dev='tapc27797c3-6a'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='net0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <source path='/dev/pts/0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/console.log' append='off'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       </target>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <console type='pty' tty='/dev/pts/0'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <source path='/dev/pts/0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2/console.log' append='off'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </console>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='input0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </input>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='input1'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </input>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='input2'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </input>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <graphics type='vnc' port='5906' autoport='yes' listen='::0'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </graphics>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <video>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='video0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </video>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='watchdog0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </watchdog>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='balloon0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <alias name='rng0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <label>system_u:system_r:svirt_t:s0:c39,c884</label>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c39,c884</imagelabel>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <label>+107:+107</label>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <imagelabel>+107:+107</imagelabel>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 08:50:39 compute-0 nova_compute[260935]: </domain>
Oct 11 08:50:39 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.419 2 INFO nova.virt.libvirt.driver [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully detached device tapf045b3aa-3f from instance b35f4147-9e36-4dab-9ac8-2061c97797f2 from the live domain config.
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.420 2 DEBUG nova.virt.libvirt.vif [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1789236056',display_name='tempest-tempest.common.compute-instance-1789236056',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1789236056',id=29,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:50:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-pbhirm4h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:50:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=b35f4147-9e36-4dab-9ac8-2061c97797f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.421 2 DEBUG nova.network.os_vif_util [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.422 2 DEBUG nova.network.os_vif_util [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.423 2 DEBUG os_vif [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf045b3aa-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:39.433 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a280cbd-31b2-4722-a3ca-5679abd7a527]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.436 2 INFO os_vif [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f')
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.437 2 DEBUG nova.virt.libvirt.guest [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:name>tempest-tempest.common.compute-instance-1789236056</nova:name>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:50:39</nova:creationTime>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:user uuid="34f29a5a135d45f597eeaa741009aa67">tempest-AttachInterfacesTestJSON-2072786320-project-member</nova:user>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:project uuid="eddb41c523294041b154a0a99c88e82b">tempest-AttachInterfacesTestJSON-2072786320</nova:project>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     <nova:port uuid="c27797c3-6ac7-45ae-9a2e-7fc42908feab">
Oct 11 08:50:39 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 08:50:39 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:50:39 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:50:39 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:50:39 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 08:50:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:39.462 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[af9e7ac6-7603-4c03-a72d-e650c1bc449f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:39.469 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5ecd9d69-7e5b-4f1c-ab7e-40583d0cde6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:39.502 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb47b2e-ca64-40c2-b2d8-5d832c6365ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:39.528 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[50b809ac-b371-4025-9bf0-500ed598d70e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443120, 'reachable_time': 39241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299042, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:39.558 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[136d3180-c4d5-46ba-afc8-e1f5a7b69337]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443136, 'tstamp': 443136}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299052, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443140, 'tstamp': 443140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299052, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:39.559 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:39.569 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:39.569 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:39.570 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:39.570 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.771 2 DEBUG nova.compute.manager [req-6285b97a-4c20-4861-badf-da85c7320e76 req-95bf65a6-8549-44d7-b121-0006b10464ae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Received event network-vif-plugged-302c88cf-6eba-4200-adfe-6c23d5e6078d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.771 2 DEBUG oslo_concurrency.lockutils [req-6285b97a-4c20-4861-badf-da85c7320e76 req-95bf65a6-8549-44d7-b121-0006b10464ae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.772 2 DEBUG oslo_concurrency.lockutils [req-6285b97a-4c20-4861-badf-da85c7320e76 req-95bf65a6-8549-44d7-b121-0006b10464ae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.772 2 DEBUG oslo_concurrency.lockutils [req-6285b97a-4c20-4861-badf-da85c7320e76 req-95bf65a6-8549-44d7-b121-0006b10464ae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.772 2 DEBUG nova.compute.manager [req-6285b97a-4c20-4861-badf-da85c7320e76 req-95bf65a6-8549-44d7-b121-0006b10464ae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] No waiting events found dispatching network-vif-plugged-302c88cf-6eba-4200-adfe-6c23d5e6078d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.772 2 WARNING nova.compute.manager [req-6285b97a-4c20-4861-badf-da85c7320e76 req-95bf65a6-8549-44d7-b121-0006b10464ae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Received unexpected event network-vif-plugged-302c88cf-6eba-4200-adfe-6c23d5e6078d for instance with vm_state deleted and task_state None.
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.772 2 DEBUG nova.compute.manager [req-6285b97a-4c20-4861-badf-da85c7320e76 req-95bf65a6-8549-44d7-b121-0006b10464ae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Received event network-vif-deleted-302c88cf-6eba-4200-adfe-6c23d5e6078d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:50:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3441222680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:39 compute-0 podman[299093]: 2025-10-11 08:50:39.82408997 +0000 UTC m=+0.055712066 container create 8d90003eb221a4b694a84b4ff61e5018eea1a226af97b1fb43100fc9b8702e48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_joliot, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.827 2 DEBUG oslo_concurrency.processutils [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.834 2 DEBUG nova.compute.provider_tree [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.857 2 DEBUG nova.scheduler.client.report [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:50:39 compute-0 systemd[1]: Started libpod-conmon-8d90003eb221a4b694a84b4ff61e5018eea1a226af97b1fb43100fc9b8702e48.scope.
Oct 11 08:50:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 451 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.4 MiB/s wr, 359 op/s
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.882 2 DEBUG oslo_concurrency.lockutils [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:39 compute-0 podman[299093]: 2025-10-11 08:50:39.794490513 +0000 UTC m=+0.026112659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:50:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:50:39 compute-0 podman[299093]: 2025-10-11 08:50:39.903258679 +0000 UTC m=+0.134880795 container init 8d90003eb221a4b694a84b4ff61e5018eea1a226af97b1fb43100fc9b8702e48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.910 2 INFO nova.scheduler.client.report [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Deleted allocations for instance 90e56ca7-b26f-4f83-908d-75204ecd2533
Oct 11 08:50:39 compute-0 podman[299093]: 2025-10-11 08:50:39.915452254 +0000 UTC m=+0.147074390 container start 8d90003eb221a4b694a84b4ff61e5018eea1a226af97b1fb43100fc9b8702e48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_joliot, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 08:50:39 compute-0 podman[299093]: 2025-10-11 08:50:39.919733445 +0000 UTC m=+0.151355541 container attach 8d90003eb221a4b694a84b4ff61e5018eea1a226af97b1fb43100fc9b8702e48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_joliot, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 08:50:39 compute-0 vigorous_joliot[299111]: 167 167
Oct 11 08:50:39 compute-0 systemd[1]: libpod-8d90003eb221a4b694a84b4ff61e5018eea1a226af97b1fb43100fc9b8702e48.scope: Deactivated successfully.
Oct 11 08:50:39 compute-0 conmon[299111]: conmon 8d90003eb221a4b694a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d90003eb221a4b694a84b4ff61e5018eea1a226af97b1fb43100fc9b8702e48.scope/container/memory.events
Oct 11 08:50:39 compute-0 podman[299093]: 2025-10-11 08:50:39.92838853 +0000 UTC m=+0.160010666 container died 8d90003eb221a4b694a84b4ff61e5018eea1a226af97b1fb43100fc9b8702e48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:50:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-55cbe4a4a7994548f68127a130b9ee55078c2e59517a0b0c30a52d79eb90b7e3-merged.mount: Deactivated successfully.
Oct 11 08:50:39 compute-0 podman[299093]: 2025-10-11 08:50:39.978133477 +0000 UTC m=+0.209755583 container remove 8d90003eb221a4b694a84b4ff61e5018eea1a226af97b1fb43100fc9b8702e48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 08:50:39 compute-0 nova_compute[260935]: 2025-10-11 08:50:39.991 2 DEBUG oslo_concurrency.lockutils [None req-461635e7-4a9b-4353-8945-fee1f3505527 a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:40 compute-0 systemd[1]: libpod-conmon-8d90003eb221a4b694a84b4ff61e5018eea1a226af97b1fb43100fc9b8702e48.scope: Deactivated successfully.
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.025 2 DEBUG nova.network.neutron [req-2a3997f8-18d6-4a9c-944c-288aff9d78a2 req-29b9ebe6-113b-4f87-b5a6-dabbec6fe588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Updated VIF entry in instance network info cache for port d190526e-2bf1-4e6c-925a-2cc0c2b359a8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.026 2 DEBUG nova.network.neutron [req-2a3997f8-18d6-4a9c-944c-288aff9d78a2 req-29b9ebe6-113b-4f87-b5a6-dabbec6fe588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Updating instance_info_cache with network_info: [{"id": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "address": "fa:16:3e:57:75:39", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd190526e-2b", "ovs_interfaceid": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.038 2 DEBUG oslo_concurrency.lockutils [req-2a3997f8-18d6-4a9c-944c-288aff9d78a2 req-29b9ebe6-113b-4f87-b5a6-dabbec6fe588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ac3851d8-5df2-4f84-9b28-a5fbf1c31b62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.038 2 DEBUG nova.compute.manager [req-2a3997f8-18d6-4a9c-944c-288aff9d78a2 req-29b9ebe6-113b-4f87-b5a6-dabbec6fe588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Received event network-vif-unplugged-302c88cf-6eba-4200-adfe-6c23d5e6078d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.038 2 DEBUG oslo_concurrency.lockutils [req-2a3997f8-18d6-4a9c-944c-288aff9d78a2 req-29b9ebe6-113b-4f87-b5a6-dabbec6fe588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.039 2 DEBUG oslo_concurrency.lockutils [req-2a3997f8-18d6-4a9c-944c-288aff9d78a2 req-29b9ebe6-113b-4f87-b5a6-dabbec6fe588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.039 2 DEBUG oslo_concurrency.lockutils [req-2a3997f8-18d6-4a9c-944c-288aff9d78a2 req-29b9ebe6-113b-4f87-b5a6-dabbec6fe588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "90e56ca7-b26f-4f83-908d-75204ecd2533-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.039 2 DEBUG nova.compute.manager [req-2a3997f8-18d6-4a9c-944c-288aff9d78a2 req-29b9ebe6-113b-4f87-b5a6-dabbec6fe588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] No waiting events found dispatching network-vif-unplugged-302c88cf-6eba-4200-adfe-6c23d5e6078d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.039 2 DEBUG nova.compute.manager [req-2a3997f8-18d6-4a9c-944c-288aff9d78a2 req-29b9ebe6-113b-4f87-b5a6-dabbec6fe588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Received event network-vif-unplugged-302c88cf-6eba-4200-adfe-6c23d5e6078d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.216 2 INFO nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Creating config drive at /var/lib/nova/instances/ac3851d8-5df2-4f84-9b28-a5fbf1c31b62/disk.config
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.223 2 DEBUG oslo_concurrency.processutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ac3851d8-5df2-4f84-9b28-a5fbf1c31b62/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplpmaj_tn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:40 compute-0 podman[299135]: 2025-10-11 08:50:40.246906217 +0000 UTC m=+0.068558129 container create bff6e53267997eb758ab123b77b7ffc49a1742b492be128f6aec5b25510fa715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hamilton, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:50:40 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3441222680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:40 compute-0 podman[299135]: 2025-10-11 08:50:40.213582924 +0000 UTC m=+0.035234886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:50:40 compute-0 systemd[1]: Started libpod-conmon-bff6e53267997eb758ab123b77b7ffc49a1742b492be128f6aec5b25510fa715.scope.
Oct 11 08:50:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7ef9203430e1712f16cd68dfd31ed1f561380c449930a346931e6800c5ba3ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7ef9203430e1712f16cd68dfd31ed1f561380c449930a346931e6800c5ba3ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7ef9203430e1712f16cd68dfd31ed1f561380c449930a346931e6800c5ba3ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7ef9203430e1712f16cd68dfd31ed1f561380c449930a346931e6800c5ba3ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:50:40 compute-0 podman[299135]: 2025-10-11 08:50:40.35874814 +0000 UTC m=+0.180400102 container init bff6e53267997eb758ab123b77b7ffc49a1742b492be128f6aec5b25510fa715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.372 2 DEBUG oslo_concurrency.processutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ac3851d8-5df2-4f84-9b28-a5fbf1c31b62/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplpmaj_tn" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:40 compute-0 podman[299135]: 2025-10-11 08:50:40.373092595 +0000 UTC m=+0.194744507 container start bff6e53267997eb758ab123b77b7ffc49a1742b492be128f6aec5b25510fa715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 08:50:40 compute-0 podman[299135]: 2025-10-11 08:50:40.388507391 +0000 UTC m=+0.210159303 container attach bff6e53267997eb758ab123b77b7ffc49a1742b492be128f6aec5b25510fa715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hamilton, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.420 2 DEBUG nova.storage.rbd_utils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.426 2 DEBUG oslo_concurrency.processutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ac3851d8-5df2-4f84-9b28-a5fbf1c31b62/disk.config ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.485 2 DEBUG nova.compute.manager [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.486 2 DEBUG oslo_concurrency.lockutils [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.487 2 DEBUG oslo_concurrency.lockutils [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.487 2 DEBUG oslo_concurrency.lockutils [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.488 2 DEBUG nova.compute.manager [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] No waiting events found dispatching network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.488 2 WARNING nova.compute.manager [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received unexpected event network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 for instance with vm_state active and task_state None.
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.489 2 DEBUG nova.compute.manager [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-vif-unplugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.489 2 DEBUG oslo_concurrency.lockutils [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.491 2 DEBUG oslo_concurrency.lockutils [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.492 2 DEBUG oslo_concurrency.lockutils [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.492 2 DEBUG nova.compute.manager [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] No waiting events found dispatching network-vif-unplugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.492 2 WARNING nova.compute.manager [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received unexpected event network-vif-unplugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 for instance with vm_state active and task_state None.
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.493 2 DEBUG nova.compute.manager [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.493 2 DEBUG oslo_concurrency.lockutils [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.494 2 DEBUG oslo_concurrency.lockutils [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.494 2 DEBUG oslo_concurrency.lockutils [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.495 2 DEBUG nova.compute.manager [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] No waiting events found dispatching network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.495 2 WARNING nova.compute.manager [req-4e9af9d8-7be0-44f8-a824-21714986aa95 req-0c84bf7e-5a87-41e8-b360-4c4450ac659f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received unexpected event network-vif-plugged-f045b3aa-3ff6-4dea-ad61-a59a01735124 for instance with vm_state active and task_state None.
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.618 2 DEBUG oslo_concurrency.processutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ac3851d8-5df2-4f84-9b28-a5fbf1c31b62/disk.config ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.619 2 INFO nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Deleting local config drive /var/lib/nova/instances/ac3851d8-5df2-4f84-9b28-a5fbf1c31b62/disk.config because it was imported into RBD.
Oct 11 08:50:40 compute-0 kernel: tapd190526e-2b: entered promiscuous mode
Oct 11 08:50:40 compute-0 NetworkManager[44960]: <info>  [1760172640.7249] manager: (tapd190526e-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/103)
Oct 11 08:50:40 compute-0 systemd-udevd[299207]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:50:40 compute-0 NetworkManager[44960]: <info>  [1760172640.7875] device (tapd190526e-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:50:40 compute-0 NetworkManager[44960]: <info>  [1760172640.7888] device (tapd190526e-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:50:40 compute-0 ovn_controller[152945]: 2025-10-11T08:50:40Z|00200|binding|INFO|Claiming lport d190526e-2bf1-4e6c-925a-2cc0c2b359a8 for this chassis.
Oct 11 08:50:40 compute-0 ovn_controller[152945]: 2025-10-11T08:50:40Z|00201|binding|INFO|d190526e-2bf1-4e6c-925a-2cc0c2b359a8: Claiming fa:16:3e:57:75:39 10.100.0.3
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:40 compute-0 ovn_controller[152945]: 2025-10-11T08:50:40Z|00202|binding|INFO|Setting lport d190526e-2bf1-4e6c-925a-2cc0c2b359a8 ovn-installed in OVS
Oct 11 08:50:40 compute-0 ovn_controller[152945]: 2025-10-11T08:50:40Z|00203|binding|INFO|Setting lport d190526e-2bf1-4e6c-925a-2cc0c2b359a8 up in Southbound
Oct 11 08:50:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:40.811 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:75:39 10.100.0.3'], port_security=['fa:16:3e:57:75:39 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ac3851d8-5df2-4f84-9b28-a5fbf1c31b62', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d190526e-2bf1-4e6c-925a-2cc0c2b359a8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:40.813 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d190526e-2bf1-4e6c-925a-2cc0c2b359a8 in datapath 9bac3530-993f-420e-8692-0b14a331d756 bound to our chassis
Oct 11 08:50:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:40.815 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bac3530-993f-420e-8692-0b14a331d756
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:40 compute-0 nova_compute[260935]: 2025-10-11 08:50:40.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:40 compute-0 systemd-machined[215705]: New machine qemu-35-instance-0000001f.
Oct 11 08:50:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:40.828 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b4319b77-3589-41cc-8cef-ba3397eb4717]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:40.829 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bac3530-91 in ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:50:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:40.831 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bac3530-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:50:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:40.831 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b55ca7d6-4856-4006-bdcc-1d5bf7e2f15f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:40.832 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[50ebe131-8f28-47ad-b3ac-29a821f46e4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:40 compute-0 systemd[1]: Started Virtual Machine qemu-35-instance-0000001f.
Oct 11 08:50:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:40.852 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[483ae7c8-d082-4503-9d93-66418fa03e43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:40.882 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f4aaeb4e-46f8-4a0e-8ba0-c3ff5f59fbf2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:40.945 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[905e3845-2952-40e8-8d8a-b111cbb96eec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:40 compute-0 NetworkManager[44960]: <info>  [1760172640.9535] manager: (tap9bac3530-90): new Veth device (/org/freedesktop/NetworkManager/Devices/104)
Oct 11 08:50:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:40.952 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[24fd929d-897a-4876-8aa6-9f38bebe8cf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.009 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[83335fdd-ba5c-4790-a4e0-17be4063943a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.013 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d14b51b1-0455-42f7-af42-57a98750a7d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:41 compute-0 NetworkManager[44960]: <info>  [1760172641.0401] device (tap9bac3530-90): carrier: link connected
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.046 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[93436bcf-852f-431f-a11d-b142ae43cbfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.064 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f1001616-ab24-49f7-a01c-1037715b479d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448574, 'reachable_time': 25440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299242, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.083 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6bbefceb-2b0f-48c3-b46c-14ceb85a9f91]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:351f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448574, 'tstamp': 448574}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299243, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.101 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8334db99-85fe-4406-814e-5115cb13199e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448574, 'reachable_time': 25440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299245, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.140 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9b483f99-ebeb-4b36-97cb-a90afd75bb3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.211 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[60f9fd61-0765-4972-9bcb-e47641a4f039]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.213 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.213 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.214 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bac3530-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:41 compute-0 kernel: tap9bac3530-90: entered promiscuous mode
Oct 11 08:50:41 compute-0 NetworkManager[44960]: <info>  [1760172641.2176] manager: (tap9bac3530-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.222 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bac3530-90, col_values=(('external_ids', {'iface-id': 'e5becf0d-48c0-404b-9cba-07077454d085'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:41 compute-0 ovn_controller[152945]: 2025-10-11T08:50:41Z|00204|binding|INFO|Releasing lport e5becf0d-48c0-404b-9cba-07077454d085 from this chassis (sb_readonly=0)
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.226 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.227 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1dcb1f13-0d81-426c-ae0c-7d0dbf1c27ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.228 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-9bac3530-993f-420e-8692-0b14a331d756
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 9bac3530-993f-420e-8692-0b14a331d756
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.229 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'env', 'PROCESS_TAG=haproxy-9bac3530-993f-420e-8692-0b14a331d756', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bac3530-993f-420e-8692-0b14a331d756.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]: {
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:     "0": [
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:         {
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "devices": [
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "/dev/loop3"
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             ],
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "lv_name": "ceph_lv0",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "lv_size": "21470642176",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "name": "ceph_lv0",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "tags": {
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.cluster_name": "ceph",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.crush_device_class": "",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.encrypted": "0",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.osd_id": "0",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.type": "block",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.vdo": "0"
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             },
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "type": "block",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "vg_name": "ceph_vg0"
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:         }
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:     ],
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:     "1": [
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:         {
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "devices": [
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "/dev/loop4"
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             ],
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "lv_name": "ceph_lv1",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "lv_size": "21470642176",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "name": "ceph_lv1",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "tags": {
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.cluster_name": "ceph",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.crush_device_class": "",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.encrypted": "0",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.osd_id": "1",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.type": "block",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.vdo": "0"
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             },
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "type": "block",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "vg_name": "ceph_vg1"
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:         }
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:     ],
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:     "2": [
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:         {
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "devices": [
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "/dev/loop5"
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             ],
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "lv_name": "ceph_lv2",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "lv_size": "21470642176",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "name": "ceph_lv2",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "tags": {
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.cluster_name": "ceph",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.crush_device_class": "",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.encrypted": "0",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.osd_id": "2",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.type": "block",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:                 "ceph.vdo": "0"
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             },
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "type": "block",
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:             "vg_name": "ceph_vg2"
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:         }
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]:     ]
Oct 11 08:50:41 compute-0 thirsty_hamilton[299154]: }
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:41 compute-0 ceph-mon[74313]: pgmap v1358: 321 pgs: 321 active+clean; 451 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.4 MiB/s wr, 359 op/s
Oct 11 08:50:41 compute-0 podman[299135]: 2025-10-11 08:50:41.291316823 +0000 UTC m=+1.112968705 container died bff6e53267997eb758ab123b77b7ffc49a1742b492be128f6aec5b25510fa715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hamilton, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 08:50:41 compute-0 systemd[1]: libpod-bff6e53267997eb758ab123b77b7ffc49a1742b492be128f6aec5b25510fa715.scope: Deactivated successfully.
Oct 11 08:50:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7ef9203430e1712f16cd68dfd31ed1f561380c449930a346931e6800c5ba3ca-merged.mount: Deactivated successfully.
Oct 11 08:50:41 compute-0 podman[299135]: 2025-10-11 08:50:41.345689071 +0000 UTC m=+1.167340943 container remove bff6e53267997eb758ab123b77b7ffc49a1742b492be128f6aec5b25510fa715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hamilton, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 08:50:41 compute-0 systemd[1]: libpod-conmon-bff6e53267997eb758ab123b77b7ffc49a1742b492be128f6aec5b25510fa715.scope: Deactivated successfully.
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.368 2 DEBUG oslo_concurrency.lockutils [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "b3e20035-c079-4ad0-a085-2086be520d1d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.369 2 DEBUG oslo_concurrency.lockutils [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.369 2 DEBUG oslo_concurrency.lockutils [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.370 2 DEBUG oslo_concurrency.lockutils [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.370 2 DEBUG oslo_concurrency.lockutils [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.372 2 INFO nova.compute.manager [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Terminating instance
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.373 2 DEBUG nova.compute.manager [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:50:41 compute-0 sudo[298995]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:41 compute-0 kernel: tap3a7614a2-ff (unregistering): left promiscuous mode
Oct 11 08:50:41 compute-0 NetworkManager[44960]: <info>  [1760172641.4474] device (tap3a7614a2-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:50:41 compute-0 ovn_controller[152945]: 2025-10-11T08:50:41Z|00205|binding|INFO|Releasing lport 3a7614a2-ff1f-4015-a387-8b15256f61b2 from this chassis (sb_readonly=0)
Oct 11 08:50:41 compute-0 ovn_controller[152945]: 2025-10-11T08:50:41Z|00206|binding|INFO|Setting lport 3a7614a2-ff1f-4015-a387-8b15256f61b2 down in Southbound
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:41 compute-0 ovn_controller[152945]: 2025-10-11T08:50:41Z|00207|binding|INFO|Removing iface tap3a7614a2-ff ovn-installed in OVS
Oct 11 08:50:41 compute-0 sudo[299270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:50:41 compute-0 sudo[299270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.473 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:51:53 10.100.0.5'], port_security=['fa:16:3e:e8:51:53 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b3e20035-c079-4ad0-a085-2086be520d1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=3a7614a2-ff1f-4015-a387-8b15256f61b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:41 compute-0 sudo[299270]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:41 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000018.scope: Deactivated successfully.
Oct 11 08:50:41 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000018.scope: Consumed 15.200s CPU time.
Oct 11 08:50:41 compute-0 systemd-machined[215705]: Machine qemu-26-instance-00000018 terminated.
Oct 11 08:50:41 compute-0 sudo[299298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:50:41 compute-0 sudo[299298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:41 compute-0 sudo[299298]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:41 compute-0 sudo[299341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:50:41 compute-0 sudo[299341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:41 compute-0 sudo[299341]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.630 2 INFO nova.virt.libvirt.driver [-] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Instance destroyed successfully.
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.631 2 DEBUG nova.objects.instance [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'resources' on Instance uuid b3e20035-c079-4ad0-a085-2086be520d1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.648 2 DEBUG nova.virt.libvirt.vif [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-911926473',display_name='tempest-ServersAdminTestJSON-server-911926473',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-911926473',id=24,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-14xzfgy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:27Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=b3e20035-c079-4ad0-a085-2086be520d1d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.649 2 DEBUG nova.network.os_vif_util [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "address": "fa:16:3e:e8:51:53", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a7614a2-ff", "ovs_interfaceid": "3a7614a2-ff1f-4015-a387-8b15256f61b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.650 2 DEBUG nova.network.os_vif_util [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:51:53,bridge_name='br-int',has_traffic_filtering=True,id=3a7614a2-ff1f-4015-a387-8b15256f61b2,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a7614a2-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.651 2 DEBUG os_vif [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:51:53,bridge_name='br-int',has_traffic_filtering=True,id=3a7614a2-ff1f-4015-a387-8b15256f61b2,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a7614a2-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.653 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a7614a2-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:41 compute-0 sudo[299390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:50:41 compute-0 nova_compute[260935]: 2025-10-11 08:50:41.668 2 INFO os_vif [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:51:53,bridge_name='br-int',has_traffic_filtering=True,id=3a7614a2-ff1f-4015-a387-8b15256f61b2,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a7614a2-ff')
Oct 11 08:50:41 compute-0 sudo[299390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:41 compute-0 podman[299464]: 2025-10-11 08:50:41.770455853 +0000 UTC m=+0.045031614 container create d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 08:50:41 compute-0 systemd[1]: Started libpod-conmon-d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d.scope.
Oct 11 08:50:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:50:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0346dbffe7c0e5b77d041061aabf8a470abfa12a3cff5967778da0d06409088/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:50:41 compute-0 podman[299464]: 2025-10-11 08:50:41.745985981 +0000 UTC m=+0.020561762 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:50:41 compute-0 podman[299464]: 2025-10-11 08:50:41.853216444 +0000 UTC m=+0.127792215 container init d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 08:50:41 compute-0 podman[299464]: 2025-10-11 08:50:41.865665316 +0000 UTC m=+0.140241087 container start d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 08:50:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 451 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 162 op/s
Oct 11 08:50:41 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[299479]: [NOTICE]   (299495) : New worker (299502) forked
Oct 11 08:50:41 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[299479]: [NOTICE]   (299495) : Loading success.
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.964 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 3a7614a2-ff1f-4015-a387-8b15256f61b2 in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 unbound from our chassis
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.969 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5
Oct 11 08:50:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:41.984 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[16384bb0-22a4-4858-bf70-85a2e0e06200]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:42.041 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f561b372-718d-4f32-9051-183fb20c4af9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:42.046 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e1005999-673c-456f-ad2a-a91249993ce0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.059 2 DEBUG nova.compute.manager [req-6cb8cd1d-575b-4622-b818-693a5974281b req-36af8158-5c2d-4468-ad1c-6d0679fcc249 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Received event network-vif-plugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.059 2 DEBUG oslo_concurrency.lockutils [req-6cb8cd1d-575b-4622-b818-693a5974281b req-36af8158-5c2d-4468-ad1c-6d0679fcc249 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.060 2 DEBUG oslo_concurrency.lockutils [req-6cb8cd1d-575b-4622-b818-693a5974281b req-36af8158-5c2d-4468-ad1c-6d0679fcc249 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.060 2 DEBUG oslo_concurrency.lockutils [req-6cb8cd1d-575b-4622-b818-693a5974281b req-36af8158-5c2d-4468-ad1c-6d0679fcc249 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.061 2 DEBUG nova.compute.manager [req-6cb8cd1d-575b-4622-b818-693a5974281b req-36af8158-5c2d-4468-ad1c-6d0679fcc249 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Processing event network-vif-plugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.061 2 DEBUG nova.compute.manager [req-6cb8cd1d-575b-4622-b818-693a5974281b req-36af8158-5c2d-4468-ad1c-6d0679fcc249 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Received event network-vif-plugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.061 2 DEBUG oslo_concurrency.lockutils [req-6cb8cd1d-575b-4622-b818-693a5974281b req-36af8158-5c2d-4468-ad1c-6d0679fcc249 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.061 2 DEBUG oslo_concurrency.lockutils [req-6cb8cd1d-575b-4622-b818-693a5974281b req-36af8158-5c2d-4468-ad1c-6d0679fcc249 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.062 2 DEBUG oslo_concurrency.lockutils [req-6cb8cd1d-575b-4622-b818-693a5974281b req-36af8158-5c2d-4468-ad1c-6d0679fcc249 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.062 2 DEBUG nova.compute.manager [req-6cb8cd1d-575b-4622-b818-693a5974281b req-36af8158-5c2d-4468-ad1c-6d0679fcc249 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] No waiting events found dispatching network-vif-plugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.062 2 WARNING nova.compute.manager [req-6cb8cd1d-575b-4622-b818-693a5974281b req-36af8158-5c2d-4468-ad1c-6d0679fcc249 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Received unexpected event network-vif-plugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 for instance with vm_state building and task_state spawning.
Oct 11 08:50:42 compute-0 podman[299529]: 2025-10-11 08:50:42.064445787 +0000 UTC m=+0.062933781 container create f7c53369de4d373d83b987475ddb0041cb98cad0881f94795e8d5593a35492ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:50:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:42.090 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0a31cc57-b08b-48e0-8116-c8a3358040d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:42 compute-0 systemd[1]: Started libpod-conmon-f7c53369de4d373d83b987475ddb0041cb98cad0881f94795e8d5593a35492ca.scope.
Oct 11 08:50:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:42.118 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[775e174c-5639-4fec-92fa-909dc10c7630]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 22, 'rx_bytes': 1168, 'tx_bytes': 1112, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 22, 'rx_bytes': 1168, 'tx_bytes': 1112, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299550, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:42 compute-0 podman[299529]: 2025-10-11 08:50:42.033728549 +0000 UTC m=+0.032216582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:50:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:50:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:42.141 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f09e5d-363f-47b8-8155-644431585818]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439130, 'tstamp': 439130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299554, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439135, 'tstamp': 439135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299554, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:42.147 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.158 2 INFO nova.virt.libvirt.driver [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Deleting instance files /var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d_del
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.158 2 INFO nova.virt.libvirt.driver [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Deletion of /var/lib/nova/instances/b3e20035-c079-4ad0-a085-2086be520d1d_del complete
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:42.162 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:42.163 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:42 compute-0 podman[299529]: 2025-10-11 08:50:42.161204954 +0000 UTC m=+0.159692987 container init f7c53369de4d373d83b987475ddb0041cb98cad0881f94795e8d5593a35492ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:50:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:42.163 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:42.164 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:42 compute-0 podman[299529]: 2025-10-11 08:50:42.172183884 +0000 UTC m=+0.170671877 container start f7c53369de4d373d83b987475ddb0041cb98cad0881f94795e8d5593a35492ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_northcutt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:50:42 compute-0 podman[299529]: 2025-10-11 08:50:42.175253491 +0000 UTC m=+0.173741494 container attach f7c53369de4d373d83b987475ddb0041cb98cad0881f94795e8d5593a35492ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct 11 08:50:42 compute-0 condescending_northcutt[299551]: 167 167
Oct 11 08:50:42 compute-0 systemd[1]: libpod-f7c53369de4d373d83b987475ddb0041cb98cad0881f94795e8d5593a35492ca.scope: Deactivated successfully.
Oct 11 08:50:42 compute-0 podman[299529]: 2025-10-11 08:50:42.181968761 +0000 UTC m=+0.180456764 container died f7c53369de4d373d83b987475ddb0041cb98cad0881f94795e8d5593a35492ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_northcutt, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 08:50:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4d618b16f4d912ec24fa52ab4cf51c424aa4747860be14772472dcd98afe174-merged.mount: Deactivated successfully.
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.210 2 INFO nova.compute.manager [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Took 0.84 seconds to destroy the instance on the hypervisor.
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.210 2 DEBUG oslo.service.loopingcall [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.211 2 DEBUG nova.compute.manager [-] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.211 2 DEBUG nova.network.neutron [-] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:50:42 compute-0 podman[299529]: 2025-10-11 08:50:42.220980234 +0000 UTC m=+0.219468227 container remove f7c53369de4d373d83b987475ddb0041cb98cad0881f94795e8d5593a35492ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 08:50:42 compute-0 systemd[1]: libpod-conmon-f7c53369de4d373d83b987475ddb0041cb98cad0881f94795e8d5593a35492ca.scope: Deactivated successfully.
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.270 2 DEBUG oslo_concurrency.lockutils [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.271 2 DEBUG oslo_concurrency.lockutils [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquired lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.271 2 DEBUG nova.network.neutron [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.284 2 DEBUG nova.compute.manager [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.286 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172642.2861435, ac3851d8-5df2-4f84-9b28-a5fbf1c31b62 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.287 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] VM Started (Lifecycle Event)
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.290 2 DEBUG nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.296 2 INFO nova.virt.libvirt.driver [-] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Instance spawned successfully.
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.297 2 DEBUG nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.309 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.328 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.331 2 DEBUG nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.331 2 DEBUG nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.332 2 DEBUG nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.332 2 DEBUG nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.332 2 DEBUG nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.333 2 DEBUG nova.virt.libvirt.driver [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.352 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.353 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172642.2862077, ac3851d8-5df2-4f84-9b28-a5fbf1c31b62 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.353 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] VM Paused (Lifecycle Event)
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.391 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.396 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172642.290156, ac3851d8-5df2-4f84-9b28-a5fbf1c31b62 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.396 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] VM Resumed (Lifecycle Event)
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.398 2 INFO nova.compute.manager [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Took 8.00 seconds to spawn the instance on the hypervisor.
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.398 2 DEBUG nova.compute.manager [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.415 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.418 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:50:42 compute-0 podman[299576]: 2025-10-11 08:50:42.4421813 +0000 UTC m=+0.061277694 container create adead1b1c50a7b22eb2eaa0cb4e0a8644d6a8a8de0534016ea1ad64381871d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.458 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.481 2 INFO nova.compute.manager [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Took 9.18 seconds to build instance.
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.500 2 DEBUG oslo_concurrency.lockutils [None req-91a294bf-0f23-448c-a601-022a3b3b6e50 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:42 compute-0 systemd[1]: Started libpod-conmon-adead1b1c50a7b22eb2eaa0cb4e0a8644d6a8a8de0534016ea1ad64381871d6e.scope.
Oct 11 08:50:42 compute-0 podman[299576]: 2025-10-11 08:50:42.421911717 +0000 UTC m=+0.041008121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:50:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:50:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc15b90a81b0990b065736f737a03d2d9ef0df3d07ac9cb481faf4f38f388e49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:50:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc15b90a81b0990b065736f737a03d2d9ef0df3d07ac9cb481faf4f38f388e49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:50:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc15b90a81b0990b065736f737a03d2d9ef0df3d07ac9cb481faf4f38f388e49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:50:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc15b90a81b0990b065736f737a03d2d9ef0df3d07ac9cb481faf4f38f388e49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:50:42 compute-0 podman[299576]: 2025-10-11 08:50:42.586607364 +0000 UTC m=+0.205703778 container init adead1b1c50a7b22eb2eaa0cb4e0a8644d6a8a8de0534016ea1ad64381871d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:50:42 compute-0 podman[299576]: 2025-10-11 08:50:42.59919552 +0000 UTC m=+0.218291914 container start adead1b1c50a7b22eb2eaa0cb4e0a8644d6a8a8de0534016ea1ad64381871d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 08:50:42 compute-0 podman[299576]: 2025-10-11 08:50:42.602594557 +0000 UTC m=+0.221690951 container attach adead1b1c50a7b22eb2eaa0cb4e0a8644d6a8a8de0534016ea1ad64381871d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.800 2 DEBUG oslo_concurrency.lockutils [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.800 2 DEBUG oslo_concurrency.lockutils [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.801 2 DEBUG oslo_concurrency.lockutils [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.801 2 DEBUG oslo_concurrency.lockutils [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.801 2 DEBUG oslo_concurrency.lockutils [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.803 2 INFO nova.compute.manager [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Terminating instance
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.804 2 DEBUG nova.compute.manager [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:50:42 compute-0 kernel: tapc27797c3-6a (unregistering): left promiscuous mode
Oct 11 08:50:42 compute-0 NetworkManager[44960]: <info>  [1760172642.8623] device (tapc27797c3-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:50:42 compute-0 ovn_controller[152945]: 2025-10-11T08:50:42Z|00208|binding|INFO|Releasing lport c27797c3-6ac7-45ae-9a2e-7fc42908feab from this chassis (sb_readonly=0)
Oct 11 08:50:42 compute-0 ovn_controller[152945]: 2025-10-11T08:50:42Z|00209|binding|INFO|Setting lport c27797c3-6ac7-45ae-9a2e-7fc42908feab down in Southbound
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:42 compute-0 ovn_controller[152945]: 2025-10-11T08:50:42Z|00210|binding|INFO|Removing iface tapc27797c3-6a ovn-installed in OVS
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:42.890 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:75:86 10.100.0.5'], port_security=['fa:16:3e:d9:75:86 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b35f4147-9e36-4dab-9ac8-2061c97797f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b1442d15-c284-4756-a249-5a3bda09cf56', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c27797c3-6ac7-45ae-9a2e-7fc42908feab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:42.891 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c27797c3-6ac7-45ae-9a2e-7fc42908feab in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d unbound from our chassis
Oct 11 08:50:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:42.893 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff13396-b787-4c6e-9112-a1c2ef57b26d
Oct 11 08:50:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:42.919 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[90dfa656-da1a-4727-849c-5529f3a64228]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:42 compute-0 nova_compute[260935]: 2025-10-11 08:50:42.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:42 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Oct 11 08:50:42 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001d.scope: Consumed 14.282s CPU time.
Oct 11 08:50:42 compute-0 systemd-machined[215705]: Machine qemu-31-instance-0000001d terminated.
Oct 11 08:50:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:42.976 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce0ee54-c4b8-44f9-98cc-bc07d024d89f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:42.982 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d04452fe-7216-42d7-ae0d-d2d423bfa503]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:43 compute-0 NetworkManager[44960]: <info>  [1760172643.0297] manager: (tapc27797c3-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Oct 11 08:50:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:43.048 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2b021cb4-44ec-4a25-9214-fa2fdd755334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.082 2 INFO nova.virt.libvirt.driver [-] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Instance destroyed successfully.
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.083 2 DEBUG nova.objects.instance [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'resources' on Instance uuid b35f4147-9e36-4dab-9ac8-2061c97797f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.106 2 DEBUG nova.virt.libvirt.vif [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1789236056',display_name='tempest-tempest.common.compute-instance-1789236056',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1789236056',id=29,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:50:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-pbhirm4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:50:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=b35f4147-9e36-4dab-9ac8-2061c97797f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.106 2 DEBUG nova.network.os_vif_util [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.107 2 DEBUG nova.network.os_vif_util [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:75:86,bridge_name='br-int',has_traffic_filtering=True,id=c27797c3-6ac7-45ae-9a2e-7fc42908feab,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc27797c3-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.107 2 DEBUG os_vif [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:75:86,bridge_name='br-int',has_traffic_filtering=True,id=c27797c3-6ac7-45ae-9a2e-7fc42908feab,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc27797c3-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:50:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:43.106 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc94111-c5e5-4059-a8ff-6c32c20f8030]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff13396-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:a4:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443120, 'reachable_time': 39241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299616, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.108 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc27797c3-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.118 2 INFO os_vif [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:75:86,bridge_name='br-int',has_traffic_filtering=True,id=c27797c3-6ac7-45ae-9a2e-7fc42908feab,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc27797c3-6a')
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.118 2 DEBUG nova.virt.libvirt.vif [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1789236056',display_name='tempest-tempest.common.compute-instance-1789236056',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1789236056',id=29,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:50:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-pbhirm4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:50:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=b35f4147-9e36-4dab-9ac8-2061c97797f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.118 2 DEBUG nova.network.os_vif_util [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "address": "fa:16:3e:ee:3e:c0", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf045b3aa-3f", "ovs_interfaceid": "f045b3aa-3ff6-4dea-ad61-a59a01735124", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.119 2 DEBUG nova.network.os_vif_util [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.119 2 DEBUG os_vif [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.120 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf045b3aa-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.120 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.122 2 INFO os_vif [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3e:c0,bridge_name='br-int',has_traffic_filtering=True,id=f045b3aa-3ff6-4dea-ad61-a59a01735124,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf045b3aa-3f')
Oct 11 08:50:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:43.137 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0c7f25-8ed4-479b-8724-92248ad58bb3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443136, 'tstamp': 443136}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299617, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfff13396-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443140, 'tstamp': 443140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299617, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:43.140 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:43.145 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff13396-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:43.146 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:43.147 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff13396-b0, col_values=(('external_ids', {'iface-id': '2a916b98-1e7b-4604-b1f0-e2f195b1c17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:43.147 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:50:43 compute-0 ceph-mon[74313]: pgmap v1359: 321 pgs: 321 active+clean; 451 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 162 op/s
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.504 2 DEBUG nova.network.neutron [-] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.533 2 INFO nova.compute.manager [-] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Took 1.32 seconds to deallocate network for instance.
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.601 2 DEBUG oslo_concurrency.lockutils [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.601 2 DEBUG oslo_concurrency.lockutils [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.630 2 INFO nova.virt.libvirt.driver [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Deleting instance files /var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2_del
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.630 2 INFO nova.virt.libvirt.driver [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Deletion of /var/lib/nova/instances/b35f4147-9e36-4dab-9ac8-2061c97797f2_del complete
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.676 2 INFO nova.compute.manager [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Took 0.87 seconds to destroy the instance on the hypervisor.
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.676 2 DEBUG oslo.service.loopingcall [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.676 2 DEBUG nova.compute.manager [-] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.677 2 DEBUG nova.network.neutron [-] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:50:43 compute-0 stoic_borg[299593]: {
Oct 11 08:50:43 compute-0 stoic_borg[299593]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:50:43 compute-0 stoic_borg[299593]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:50:43 compute-0 stoic_borg[299593]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:50:43 compute-0 stoic_borg[299593]:         "osd_id": 2,
Oct 11 08:50:43 compute-0 stoic_borg[299593]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:50:43 compute-0 stoic_borg[299593]:         "type": "bluestore"
Oct 11 08:50:43 compute-0 stoic_borg[299593]:     },
Oct 11 08:50:43 compute-0 stoic_borg[299593]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:50:43 compute-0 stoic_borg[299593]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:50:43 compute-0 stoic_borg[299593]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:50:43 compute-0 stoic_borg[299593]:         "osd_id": 0,
Oct 11 08:50:43 compute-0 stoic_borg[299593]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:50:43 compute-0 stoic_borg[299593]:         "type": "bluestore"
Oct 11 08:50:43 compute-0 stoic_borg[299593]:     },
Oct 11 08:50:43 compute-0 stoic_borg[299593]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:50:43 compute-0 stoic_borg[299593]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:50:43 compute-0 stoic_borg[299593]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:50:43 compute-0 stoic_borg[299593]:         "osd_id": 1,
Oct 11 08:50:43 compute-0 stoic_borg[299593]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:50:43 compute-0 stoic_borg[299593]:         "type": "bluestore"
Oct 11 08:50:43 compute-0 stoic_borg[299593]:     }
Oct 11 08:50:43 compute-0 stoic_borg[299593]: }
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.811 2 DEBUG nova.objects.instance [None req-d320f62a-71b6-4e37-ae9f-8e5ee6c823e1 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid ac3851d8-5df2-4f84-9b28-a5fbf1c31b62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:43 compute-0 systemd[1]: libpod-adead1b1c50a7b22eb2eaa0cb4e0a8644d6a8a8de0534016ea1ad64381871d6e.scope: Deactivated successfully.
Oct 11 08:50:43 compute-0 systemd[1]: libpod-adead1b1c50a7b22eb2eaa0cb4e0a8644d6a8a8de0534016ea1ad64381871d6e.scope: Consumed 1.129s CPU time.
Oct 11 08:50:43 compute-0 podman[299576]: 2025-10-11 08:50:43.844840537 +0000 UTC m=+1.463936931 container died adead1b1c50a7b22eb2eaa0cb4e0a8644d6a8a8de0534016ea1ad64381871d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.847 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172643.8431635, ac3851d8-5df2-4f84-9b28-a5fbf1c31b62 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.847 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] VM Paused (Lifecycle Event)
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.860 2 DEBUG oslo_concurrency.processutils [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 372 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 191 op/s
Oct 11 08:50:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc15b90a81b0990b065736f737a03d2d9ef0df3d07ac9cb481faf4f38f388e49-merged.mount: Deactivated successfully.
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.901 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.907 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:50:43 compute-0 podman[299576]: 2025-10-11 08:50:43.914381263 +0000 UTC m=+1.533477657 container remove adead1b1c50a7b22eb2eaa0cb4e0a8644d6a8a8de0534016ea1ad64381871d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 08:50:43 compute-0 systemd[1]: libpod-conmon-adead1b1c50a7b22eb2eaa0cb4e0a8644d6a8a8de0534016ea1ad64381871d6e.scope: Deactivated successfully.
Oct 11 08:50:43 compute-0 nova_compute[260935]: 2025-10-11 08:50:43.936 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 11 08:50:43 compute-0 sudo[299390]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:50:43 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:50:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:50:43 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:50:43 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9554ce56-e39b-4f41-a0ab-d3cd42269de9 does not exist
Oct 11 08:50:43 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 46d48053-a235-4f39-9027-bde80e6a6dde does not exist
Oct 11 08:50:44 compute-0 kernel: tapd190526e-2b (unregistering): left promiscuous mode
Oct 11 08:50:44 compute-0 NetworkManager[44960]: <info>  [1760172644.0318] device (tapd190526e-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:50:44 compute-0 ovn_controller[152945]: 2025-10-11T08:50:44Z|00211|binding|INFO|Releasing lport d190526e-2bf1-4e6c-925a-2cc0c2b359a8 from this chassis (sb_readonly=0)
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:44 compute-0 ovn_controller[152945]: 2025-10-11T08:50:44Z|00212|binding|INFO|Setting lport d190526e-2bf1-4e6c-925a-2cc0c2b359a8 down in Southbound
Oct 11 08:50:44 compute-0 ovn_controller[152945]: 2025-10-11T08:50:44Z|00213|binding|INFO|Removing iface tapd190526e-2b ovn-installed in OVS
Oct 11 08:50:44 compute-0 sudo[299696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:50:44 compute-0 rsyslogd[1003]: imjournal: 2545 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 11 08:50:44 compute-0 sudo[299696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.051 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:75:39 10.100.0.3'], port_security=['fa:16:3e:57:75:39 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ac3851d8-5df2-4f84-9b28-a5fbf1c31b62', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d190526e-2bf1-4e6c-925a-2cc0c2b359a8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:44 compute-0 sudo[299696]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.055 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d190526e-2bf1-4e6c-925a-2cc0c2b359a8 in datapath 9bac3530-993f-420e-8692-0b14a331d756 unbound from our chassis
Oct 11 08:50:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.057 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bac3530-993f-420e-8692-0b14a331d756, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:50:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.058 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e08b3dff-27fd-4c36-b369-486dc83ea9ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.059 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace which is not needed anymore
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:44 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Oct 11 08:50:44 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Consumed 2.693s CPU time.
Oct 11 08:50:44 compute-0 systemd-machined[215705]: Machine qemu-35-instance-0000001f terminated.
Oct 11 08:50:44 compute-0 sudo[299725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:50:44 compute-0 sudo[299725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:50:44 compute-0 sudo[299725]: pam_unix(sudo:session): session closed for user root
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:44 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[299479]: [NOTICE]   (299495) : haproxy version is 2.8.14-c23fe91
Oct 11 08:50:44 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[299479]: [NOTICE]   (299495) : path to executable is /usr/sbin/haproxy
Oct 11 08:50:44 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[299479]: [ALERT]    (299495) : Current worker (299502) exited with code 143 (Terminated)
Oct 11 08:50:44 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[299479]: [WARNING]  (299495) : All workers exited. Exiting... (0)
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.262 2 DEBUG nova.compute.manager [None req-d320f62a-71b6-4e37-ae9f-8e5ee6c823e1 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:44 compute-0 systemd[1]: libpod-d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d.scope: Deactivated successfully.
Oct 11 08:50:44 compute-0 podman[299772]: 2025-10-11 08:50:44.273264223 +0000 UTC m=+0.103380605 container died d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 08:50:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d-userdata-shm.mount: Deactivated successfully.
Oct 11 08:50:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0346dbffe7c0e5b77d041061aabf8a470abfa12a3cff5967778da0d06409088-merged.mount: Deactivated successfully.
Oct 11 08:50:44 compute-0 podman[299772]: 2025-10-11 08:50:44.328047172 +0000 UTC m=+0.158163574 container cleanup d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.331 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received event network-vif-unplugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.332 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.332 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.332 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.332 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] No waiting events found dispatching network-vif-unplugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.332 2 WARNING nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received unexpected event network-vif-unplugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 for instance with vm_state deleted and task_state None.
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.332 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received event network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.333 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.333 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.333 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.333 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] No waiting events found dispatching network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.333 2 WARNING nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received unexpected event network-vif-plugged-3a7614a2-ff1f-4015-a387-8b15256f61b2 for instance with vm_state deleted and task_state None.
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.333 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-vif-unplugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.334 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.334 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.334 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.334 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] No waiting events found dispatching network-vif-unplugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.334 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-vif-unplugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.335 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Received event network-vif-deleted-3a7614a2-ff1f-4015-a387-8b15256f61b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.335 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.335 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.335 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.335 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.335 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] No waiting events found dispatching network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.336 2 WARNING nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received unexpected event network-vif-plugged-c27797c3-6ac7-45ae-9a2e-7fc42908feab for instance with vm_state active and task_state deleting.
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.336 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Received event network-vif-unplugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.336 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.336 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.336 2 DEBUG oslo_concurrency.lockutils [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.336 2 DEBUG nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] No waiting events found dispatching network-vif-unplugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.336 2 WARNING nova.compute.manager [req-ff4c5288-5979-4df0-aa68-ec3c6e59d598 req-36a6b8ef-a7b9-42ba-875d-39c658b96473 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Received unexpected event network-vif-unplugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 for instance with vm_state active and task_state suspending.
Oct 11 08:50:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:50:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4159873179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:44 compute-0 systemd[1]: libpod-conmon-d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d.scope: Deactivated successfully.
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.372 2 DEBUG oslo_concurrency.processutils [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.378 2 DEBUG nova.compute.provider_tree [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.392 2 DEBUG nova.scheduler.client.report [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.410 2 DEBUG oslo_concurrency.lockutils [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:44 compute-0 podman[299812]: 2025-10-11 08:50:44.426597479 +0000 UTC m=+0.063228019 container remove d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.434 2 INFO nova.scheduler.client.report [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Deleted allocations for instance b3e20035-c079-4ad0-a085-2086be520d1d
Oct 11 08:50:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.440 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9545df2d-1c4d-47db-82ee-17afcf3079e2]: (4, ('Sat Oct 11 08:50:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d)\nd6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d\nSat Oct 11 08:50:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (d6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d)\nd6b33e72de80955ab943272f9f518de444cf0bbebf307601581e40a72330454d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.442 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa19906-e538-48f2-ac10-03fe8a76f533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.443 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:44 compute-0 kernel: tap9bac3530-90: left promiscuous mode
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.493 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ed33538c-59c2-4ede-9063-a28939362540]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.506 2 DEBUG oslo_concurrency.lockutils [None req-12c644d8-b54f-41a1-aef3-6030c074162f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "b3e20035-c079-4ad0-a085-2086be520d1d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.517 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[30ed4ad1-9745-4fc9-bd86-d8a93b648cfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.518 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6016f5f8-86a4-4246-b855-b6ccaf1dcb69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.529 2 INFO nova.network.neutron [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Port f045b3aa-3ff6-4dea-ad61-a59a01735124 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.530 2 DEBUG nova.network.neutron [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updating instance_info_cache with network_info: [{"id": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "address": "fa:16:3e:d9:75:86", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc27797c3-6a", "ovs_interfaceid": "c27797c3-6ac7-45ae-9a2e-7fc42908feab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.539 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0d2a5f85-48e1-49c2-b00b-1465a1de1aa6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448564, 'reachable_time': 23723, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299832, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d9bac3530\x2d993f\x2d420e\x2d8692\x2d0b14a331d756.mount: Deactivated successfully.
Oct 11 08:50:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.546 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:50:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:44.547 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9d2bcc-0101-42b5-a2b9-52532e604bb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.566 2 DEBUG oslo_concurrency.lockutils [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Releasing lock "refresh_cache-b35f4147-9e36-4dab-9ac8-2061c97797f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:50:44 compute-0 nova_compute[260935]: 2025-10-11 08:50:44.599 2 DEBUG oslo_concurrency.lockutils [None req-d54a8a3f-002d-4f75-b215-3ef1f4e3b3df 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "interface-b35f4147-9e36-4dab-9ac8-2061c97797f2-f045b3aa-3ff6-4dea-ad61-a59a01735124" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:44 compute-0 ceph-mon[74313]: pgmap v1360: 321 pgs: 321 active+clean; 372 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 191 op/s
Oct 11 08:50:44 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:50:44 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:50:44 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4159873179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:45 compute-0 ovn_controller[152945]: 2025-10-11T08:50:45Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:d8:0b 10.100.0.11
Oct 11 08:50:45 compute-0 ovn_controller[152945]: 2025-10-11T08:50:45Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:d8:0b 10.100.0.11
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.135 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "6224c79a-8a36-490c-863a-67251512732f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.136 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.136 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "6224c79a-8a36-490c-863a-67251512732f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.137 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.137 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.139 2 INFO nova.compute.manager [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Terminating instance
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.141 2 DEBUG nova.compute.manager [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:45 compute-0 kernel: tapb9569700-d7 (unregistering): left promiscuous mode
Oct 11 08:50:45 compute-0 NetworkManager[44960]: <info>  [1760172645.2133] device (tapb9569700-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:50:45 compute-0 ovn_controller[152945]: 2025-10-11T08:50:45Z|00214|binding|INFO|Releasing lport b9569700-d7dc-40dc-a27c-1f72d3675682 from this chassis (sb_readonly=0)
Oct 11 08:50:45 compute-0 ovn_controller[152945]: 2025-10-11T08:50:45Z|00215|binding|INFO|Setting lport b9569700-d7dc-40dc-a27c-1f72d3675682 down in Southbound
Oct 11 08:50:45 compute-0 ovn_controller[152945]: 2025-10-11T08:50:45Z|00216|binding|INFO|Removing iface tapb9569700-d7 ovn-installed in OVS
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.239 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:3f:af 10.100.0.14'], port_security=['fa:16:3e:20:3f:af 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6224c79a-8a36-490c-863a-67251512732f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b9569700-d7dc-40dc-a27c-1f72d3675682) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.242 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b9569700-d7dc-40dc-a27c-1f72d3675682 in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 unbound from our chassis
Oct 11 08:50:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.246 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.275 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4d16fd-564d-40e7-ba76-1279ada9ddf8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:45 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Deactivated successfully.
Oct 11 08:50:45 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Consumed 19.159s CPU time.
Oct 11 08:50:45 compute-0 systemd-machined[215705]: Machine qemu-25-instance-00000017 terminated.
Oct 11 08:50:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.323 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cdcc4d94-d48b-47c4-81f0-77933f88e2e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.327 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ada7c67a-547c-4ec9-aea5-74a52702674d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.383 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7352b98f-344f-4930-b91b-75755b4c7338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.394 2 INFO nova.virt.libvirt.driver [-] [instance: 6224c79a-8a36-490c-863a-67251512732f] Instance destroyed successfully.
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.394 2 DEBUG nova.objects.instance [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'resources' on Instance uuid 6224c79a-8a36-490c-863a-67251512732f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.413 2 DEBUG nova.virt.libvirt.vif [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:48:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-993933628',display_name='tempest-ServersAdminTestJSON-server-993933628',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-993933628',id=23,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-64hxjs52',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:09Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=6224c79a-8a36-490c-863a-67251512732f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.414 2 DEBUG nova.network.os_vif_util [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "b9569700-d7dc-40dc-a27c-1f72d3675682", "address": "fa:16:3e:20:3f:af", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9569700-d7", "ovs_interfaceid": "b9569700-d7dc-40dc-a27c-1f72d3675682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.415 2 DEBUG nova.network.os_vif_util [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:3f:af,bridge_name='br-int',has_traffic_filtering=True,id=b9569700-d7dc-40dc-a27c-1f72d3675682,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9569700-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.415 2 DEBUG os_vif [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:3f:af,bridge_name='br-int',has_traffic_filtering=True,id=b9569700-d7dc-40dc-a27c-1f72d3675682,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9569700-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.418 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9569700-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.422 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c16eee5f-bc4e-46d9-abdf-428aa66fc27f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09ac2cb6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:b2:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 24, 'rx_bytes': 1168, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 24, 'rx_bytes': 1168, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439110, 'reachable_time': 35426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299855, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.474 2 INFO os_vif [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:3f:af,bridge_name='br-int',has_traffic_filtering=True,id=b9569700-d7dc-40dc-a27c-1f72d3675682,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9569700-d7')
Oct 11 08:50:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.485 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[09a3f543-9f1c-44d2-af40-48b865f7d76b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439130, 'tstamp': 439130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299858, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap09ac2cb6-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439135, 'tstamp': 439135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299858, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.486 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.496 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ac2cb6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.496 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.497 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09ac2cb6-30, col_values=(('external_ids', {'iface-id': '424305ea-6b47-4134-ad52-ee2a450e204c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:45.497 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 372 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 191 op/s
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.896 2 DEBUG nova.network.neutron [-] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.911 2 INFO nova.compute.manager [-] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Took 2.23 seconds to deallocate network for instance.
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.920 2 INFO nova.virt.libvirt.driver [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Deleting instance files /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f_del
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.921 2 INFO nova.virt.libvirt.driver [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Deletion of /var/lib/nova/instances/6224c79a-8a36-490c-863a-67251512732f_del complete
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.979 2 DEBUG oslo_concurrency.lockutils [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.980 2 DEBUG oslo_concurrency.lockutils [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.987 2 INFO nova.compute.manager [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Took 0.85 seconds to destroy the instance on the hypervisor.
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.987 2 DEBUG oslo.service.loopingcall [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.988 2 DEBUG nova.compute.manager [-] [instance: 6224c79a-8a36-490c-863a-67251512732f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:50:45 compute-0 nova_compute[260935]: 2025-10-11 08:50:45.988 2 DEBUG nova.network.neutron [-] [instance: 6224c79a-8a36-490c-863a-67251512732f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.158 2 DEBUG oslo_concurrency.processutils [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.562 2 DEBUG nova.compute.manager [req-5cda5b83-f7c5-4821-a1ba-8d6fa210df55 req-24015828-f5d7-4923-9e7c-e9a14101a0ea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Received event network-vif-plugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.563 2 DEBUG oslo_concurrency.lockutils [req-5cda5b83-f7c5-4821-a1ba-8d6fa210df55 req-24015828-f5d7-4923-9e7c-e9a14101a0ea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.564 2 DEBUG oslo_concurrency.lockutils [req-5cda5b83-f7c5-4821-a1ba-8d6fa210df55 req-24015828-f5d7-4923-9e7c-e9a14101a0ea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.564 2 DEBUG oslo_concurrency.lockutils [req-5cda5b83-f7c5-4821-a1ba-8d6fa210df55 req-24015828-f5d7-4923-9e7c-e9a14101a0ea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.564 2 DEBUG nova.compute.manager [req-5cda5b83-f7c5-4821-a1ba-8d6fa210df55 req-24015828-f5d7-4923-9e7c-e9a14101a0ea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] No waiting events found dispatching network-vif-plugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.565 2 WARNING nova.compute.manager [req-5cda5b83-f7c5-4821-a1ba-8d6fa210df55 req-24015828-f5d7-4923-9e7c-e9a14101a0ea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Received unexpected event network-vif-plugged-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 for instance with vm_state suspended and task_state image_snapshot_pending.
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.565 2 DEBUG nova.compute.manager [req-5cda5b83-f7c5-4821-a1ba-8d6fa210df55 req-24015828-f5d7-4923-9e7c-e9a14101a0ea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Received event network-vif-deleted-c27797c3-6ac7-45ae-9a2e-7fc42908feab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:50:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2323926474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.596 2 DEBUG oslo_concurrency.processutils [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.607 2 DEBUG nova.compute.provider_tree [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.626 2 DEBUG nova.scheduler.client.report [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.668 2 DEBUG oslo_concurrency.lockutils [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.674 2 DEBUG nova.compute.manager [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.716 2 INFO nova.scheduler.client.report [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Deleted allocations for instance b35f4147-9e36-4dab-9ac8-2061c97797f2
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.731 2 DEBUG nova.network.neutron [-] [instance: 6224c79a-8a36-490c-863a-67251512732f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.740 2 INFO nova.compute.manager [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] instance snapshotting
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.740 2 WARNING nova.compute.manager [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] trying to snapshot a non-running instance: (state: 4 expected: 1)
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.750 2 INFO nova.compute.manager [-] [instance: 6224c79a-8a36-490c-863a-67251512732f] Took 0.76 seconds to deallocate network for instance.
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.807 2 DEBUG oslo_concurrency.lockutils [None req-21fe9cac-4353-4d0f-8ae8-fb8101a4c552 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "b35f4147-9e36-4dab-9ac8-2061c97797f2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.811 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.812 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:46 compute-0 nova_compute[260935]: 2025-10-11 08:50:46.932 2 DEBUG oslo_concurrency.processutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:47 compute-0 ceph-mon[74313]: pgmap v1361: 321 pgs: 321 active+clean; 372 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 191 op/s
Oct 11 08:50:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2323926474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.001 2 INFO nova.virt.libvirt.driver [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Beginning cold snapshot process
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.132 2 DEBUG nova.virt.libvirt.imagebackend [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:50:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3562417516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.400 2 DEBUG oslo_concurrency.processutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.407 2 DEBUG nova.compute.provider_tree [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.427 2 DEBUG nova.storage.rbd_utils [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(854a61dcba4c481eaeae67e6378c78dd) on rbd image(ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.461 2 DEBUG nova.scheduler.client.report [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.491 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.512 2 INFO nova.scheduler.client.report [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Deleted allocations for instance 6224c79a-8a36-490c-863a-67251512732f
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.616 2 DEBUG oslo_concurrency.lockutils [None req-046881b0-aa8a-4051-8a37-563c0b66e01f a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "6224c79a-8a36-490c-863a-67251512732f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.480s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.752 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.753 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.753 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.754 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.754 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.757 2 INFO nova.compute.manager [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Terminating instance
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.759 2 DEBUG nova.compute.manager [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:50:47 compute-0 kernel: tapac842ebf-4f (unregistering): left promiscuous mode
Oct 11 08:50:47 compute-0 NetworkManager[44960]: <info>  [1760172647.8335] device (tapac842ebf-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:47 compute-0 ovn_controller[152945]: 2025-10-11T08:50:47Z|00217|binding|INFO|Releasing lport ac842ebf-4fca-4930-a4d1-3e8a6760d441 from this chassis (sb_readonly=0)
Oct 11 08:50:47 compute-0 ovn_controller[152945]: 2025-10-11T08:50:47Z|00218|binding|INFO|Setting lport ac842ebf-4fca-4930-a4d1-3e8a6760d441 down in Southbound
Oct 11 08:50:47 compute-0 ovn_controller[152945]: 2025-10-11T08:50:47Z|00219|binding|INFO|Removing iface tapac842ebf-4f ovn-installed in OVS
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:47.863 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:95:17 10.100.0.14'], port_security=['fa:16:3e:56:95:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '14711e39-46ca-4856-9c19-fa51b869064d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eddb41c523294041b154a0a99c88e82b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b1442d15-c284-4756-a249-5a3bda09cf56', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4201c7b-c907-464d-88cb-d19f17d8f067, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ac842ebf-4fca-4930-a4d1-3e8a6760d441) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:47.865 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ac842ebf-4fca-4930-a4d1-3e8a6760d441 in datapath fff13396-b787-4c6e-9112-a1c2ef57b26d unbound from our chassis
Oct 11 08:50:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 246 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 255 op/s
Oct 11 08:50:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:47.869 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fff13396-b787-4c6e-9112-a1c2ef57b26d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:50:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:47.870 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff62cdc-e72a-4910-9bf7-b004dc6f1f8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:47.870 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d namespace which is not needed anymore
Oct 11 08:50:47 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:47 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Oct 11 08:50:47 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001a.scope: Consumed 16.855s CPU time.
Oct 11 08:50:47 compute-0 systemd-machined[215705]: Machine qemu-28-instance-0000001a terminated.
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:47.999 2 INFO nova.virt.libvirt.driver [-] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Instance destroyed successfully.
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.000 2 DEBUG nova.objects.instance [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lazy-loading 'resources' on Instance uuid 14711e39-46ca-4856-9c19-fa51b869064d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Oct 11 08:50:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Oct 11 08:50:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3562417516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:48 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.021 2 DEBUG nova.virt.libvirt.vif [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-579139719',display_name='tempest-tempest.common.compute-instance-579139719',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-579139719',id=26,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAy7u30rY9Ua722Pu5k06TsB7yGIeNfS7lWjZwVhd6kg2xMeuomPU5t2dlqG08LvC5AhOx2wSQ4p/whtQgG8tbhB9ScC2x2P4qlM+3BKH/+XFtSpFY70AQQ3oh5qTSzqA==',key_name='tempest-keypair-878513287',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:49:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eddb41c523294041b154a0a99c88e82b',ramdisk_id='',reservation_id='r-9v1qz90y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2072786320',owner_user_name='tempest-AttachInterfacesTestJSON-2072786320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:49:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='34f29a5a135d45f597eeaa741009aa67',uuid=14711e39-46ca-4856-9c19-fa51b869064d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.022 2 DEBUG nova.network.os_vif_util [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converting VIF {"id": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "address": "fa:16:3e:56:95:17", "network": {"id": "fff13396-b787-4c6e-9112-a1c2ef57b26d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-795919911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eddb41c523294041b154a0a99c88e82b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac842ebf-4f", "ovs_interfaceid": "ac842ebf-4fca-4930-a4d1-3e8a6760d441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.024 2 DEBUG nova.network.os_vif_util [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:95:17,bridge_name='br-int',has_traffic_filtering=True,id=ac842ebf-4fca-4930-a4d1-3e8a6760d441,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac842ebf-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.024 2 DEBUG os_vif [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:95:17,bridge_name='br-int',has_traffic_filtering=True,id=ac842ebf-4fca-4930-a4d1-3e8a6760d441,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac842ebf-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.027 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac842ebf-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:48 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[295224]: [NOTICE]   (295247) : haproxy version is 2.8.14-c23fe91
Oct 11 08:50:48 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[295224]: [NOTICE]   (295247) : path to executable is /usr/sbin/haproxy
Oct 11 08:50:48 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[295224]: [WARNING]  (295247) : Exiting Master process...
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.036 2 INFO os_vif [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:95:17,bridge_name='br-int',has_traffic_filtering=True,id=ac842ebf-4fca-4930-a4d1-3e8a6760d441,network=Network(fff13396-b787-4c6e-9112-a1c2ef57b26d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac842ebf-4f')
Oct 11 08:50:48 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[295224]: [ALERT]    (295247) : Current worker (295256) exited with code 143 (Terminated)
Oct 11 08:50:48 compute-0 neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d[295224]: [WARNING]  (295247) : All workers exited. Exiting... (0)
Oct 11 08:50:48 compute-0 systemd[1]: libpod-7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29.scope: Deactivated successfully.
Oct 11 08:50:48 compute-0 podman[300000]: 2025-10-11 08:50:48.046666635 +0000 UTC m=+0.061826000 container died 7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:50:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29-userdata-shm.mount: Deactivated successfully.
Oct 11 08:50:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c54b9bd547d36ffdac517bcbe86f00a809d959f1c5904ed2903ce52567b3128-merged.mount: Deactivated successfully.
Oct 11 08:50:48 compute-0 podman[300000]: 2025-10-11 08:50:48.109956965 +0000 UTC m=+0.125116330 container cleanup 7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.113 2 DEBUG nova.storage.rbd_utils [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] cloning vms/ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk@854a61dcba4c481eaeae67e6378c78dd to images/a3f24c7b-a065-417e-b63f-e5280c978ce3 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:50:48 compute-0 systemd[1]: libpod-conmon-7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29.scope: Deactivated successfully.
Oct 11 08:50:48 compute-0 podman[300062]: 2025-10-11 08:50:48.204840578 +0000 UTC m=+0.063675022 container remove 7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 08:50:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.215 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b59bc355-95ca-4957-abcd-b7881fdd5c94]: (4, ('Sat Oct 11 08:50:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d (7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29)\n7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29\nSat Oct 11 08:50:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d (7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29)\n7ed29cef5ce31d6622505d70a8cd9e2036824dbcf8fd54bff2161f0611e12d29\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.217 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0fdd70f1-f16a-4a2c-aa0d-9d5e6c407dfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.218 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff13396-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:48 compute-0 kernel: tapfff13396-b0: left promiscuous mode
Oct 11 08:50:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.258 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0c3f56-4119-4df9-a27b-84a8eb637a3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.279 2 DEBUG nova.storage.rbd_utils [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] flattening images/a3f24c7b-a065-417e-b63f-e5280c978ce3 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:50:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.282 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e1044c-a9e9-4afb-ba0f-1173b504a1d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.283 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f4503d79-8797-4df3-9d51-da0c5135f3c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.299 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ca88b5-72fb-4d70-98af-ca3ce678198e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443110, 'reachable_time': 18325, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300124, 'error': None, 'target': 'ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:48 compute-0 systemd[1]: run-netns-ovnmeta\x2dfff13396\x2db787\x2d4c6e\x2d9112\x2da1c2ef57b26d.mount: Deactivated successfully.
Oct 11 08:50:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.305 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fff13396-b787-4c6e-9112-a1c2ef57b26d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:50:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.306 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d906c437-f411-445b-9c8e-6f89f3c8402d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.483 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.484 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.484 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.485 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.485 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.487 2 INFO nova.compute.manager [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Terminating instance
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.488 2 DEBUG nova.compute.manager [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:50:48 compute-0 kernel: tapa3944a31-95 (unregistering): left promiscuous mode
Oct 11 08:50:48 compute-0 NetworkManager[44960]: <info>  [1760172648.5572] device (tapa3944a31-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:50:48 compute-0 ovn_controller[152945]: 2025-10-11T08:50:48Z|00220|binding|INFO|Releasing lport a3944a31-9560-49ae-b2a5-caaf2736993a from this chassis (sb_readonly=0)
Oct 11 08:50:48 compute-0 ovn_controller[152945]: 2025-10-11T08:50:48Z|00221|binding|INFO|Setting lport a3944a31-9560-49ae-b2a5-caaf2736993a down in Southbound
Oct 11 08:50:48 compute-0 ovn_controller[152945]: 2025-10-11T08:50:48Z|00222|binding|INFO|Removing iface tapa3944a31-95 ovn-installed in OVS
Oct 11 08:50:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.578 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:d8:0b 10.100.0.11'], port_security=['fa:16:3e:bf:d8:0b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '77ff3a9d-3eb2-40ed-ad12-6367fd4e555f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d3043a7835403392c659fbb2fe0b22', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8cdf2c97-ed67-4339-928f-1d70d0c6c18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bfe7634-8476-437a-9cde-e4512c0e686a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a3944a31-9560-49ae-b2a5-caaf2736993a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.580 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a3944a31-9560-49ae-b2a5-caaf2736993a in datapath 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 unbound from our chassis
Oct 11 08:50:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.585 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:50:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.586 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[47c47067-c013-4520-8ec0-27f42211686b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.589 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 namespace which is not needed anymore
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:48 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 11 08:50:48 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000016.scope: Consumed 13.347s CPU time.
Oct 11 08:50:48 compute-0 systemd-machined[215705]: Machine qemu-34-instance-00000016 terminated.
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.696 2 DEBUG nova.compute.manager [req-1b10d1ec-fa32-4218-8cee-26d629b9362a req-332f5a06-9aab-409c-a07e-2821b75bd822 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6224c79a-8a36-490c-863a-67251512732f] Received event network-vif-deleted-b9569700-d7dc-40dc-a27c-1f72d3675682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.710 2 DEBUG nova.storage.rbd_utils [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] removing snapshot(854a61dcba4c481eaeae67e6378c78dd) on rbd image(ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.732 2 INFO nova.virt.libvirt.driver [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Instance destroyed successfully.
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.733 2 DEBUG nova.objects.instance [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lazy-loading 'resources' on Instance uuid 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.780 2 INFO nova.virt.libvirt.driver [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Deleting instance files /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d_del
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.781 2 INFO nova.virt.libvirt.driver [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Deletion of /var/lib/nova/instances/14711e39-46ca-4856-9c19-fa51b869064d_del complete
Oct 11 08:50:48 compute-0 neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5[292192]: [NOTICE]   (292196) : haproxy version is 2.8.14-c23fe91
Oct 11 08:50:48 compute-0 neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5[292192]: [NOTICE]   (292196) : path to executable is /usr/sbin/haproxy
Oct 11 08:50:48 compute-0 neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5[292192]: [WARNING]  (292196) : Exiting Master process...
Oct 11 08:50:48 compute-0 neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5[292192]: [WARNING]  (292196) : Exiting Master process...
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.786 2 DEBUG nova.virt.libvirt.vif [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1988839763',display_name='tempest-ServersAdminTestJSON-server-1988839763',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1988839763',id=22,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:50:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='39d3043a7835403392c659fbb2fe0b22',ramdisk_id='',reservation_id='r-nxz1v0k9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1756812845',owner_user_name='tempest-ServersAdminTestJSON-1756812845-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:50:35Z,user_data=None,user_id='a51c2680b31e40b1908642ef8795c6f0',uuid=77ff3a9d-3eb2-40ed-ad12-6367fd4e555f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.787 2 DEBUG nova.network.os_vif_util [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converting VIF {"id": "a3944a31-9560-49ae-b2a5-caaf2736993a", "address": "fa:16:3e:bf:d8:0b", "network": {"id": "09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1951796893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d3043a7835403392c659fbb2fe0b22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3944a31-95", "ovs_interfaceid": "a3944a31-9560-49ae-b2a5-caaf2736993a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.787 2 DEBUG nova.network.os_vif_util [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.788 2 DEBUG os_vif [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:50:48 compute-0 neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5[292192]: [ALERT]    (292196) : Current worker (292200) exited with code 143 (Terminated)
Oct 11 08:50:48 compute-0 neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5[292192]: [WARNING]  (292196) : All workers exited. Exiting... (0)
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.790 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3944a31-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:48 compute-0 systemd[1]: libpod-f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361.scope: Deactivated successfully.
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:48 compute-0 podman[300176]: 2025-10-11 08:50:48.797684924 +0000 UTC m=+0.077755050 container died f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.801 2 INFO os_vif [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:d8:0b,bridge_name='br-int',has_traffic_filtering=True,id=a3944a31-9560-49ae-b2a5-caaf2736993a,network=Network(09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3944a31-95')
Oct 11 08:50:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361-userdata-shm.mount: Deactivated successfully.
Oct 11 08:50:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-58f2e53c9b14a148f14c0e6dc8556f03c44bc8562cdbc60984e7f6e68a2d640e-merged.mount: Deactivated successfully.
Oct 11 08:50:48 compute-0 podman[300176]: 2025-10-11 08:50:48.841197194 +0000 UTC m=+0.121267290 container cleanup f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:50:48 compute-0 systemd[1]: libpod-conmon-f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361.scope: Deactivated successfully.
Oct 11 08:50:48 compute-0 podman[300237]: 2025-10-11 08:50:48.940543714 +0000 UTC m=+0.067536901 container remove f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 08:50:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.949 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[04b1c17c-4135-4e95-b5b3-0f1af18bfd41]: (4, ('Sat Oct 11 08:50:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 (f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361)\nf161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361\nSat Oct 11 08:50:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 (f161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361)\nf161a6e8d54a76f5182305ed623709fa08785ce1e817d89029244e968aa05361\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.951 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bca166af-9c40-4af7-8852-32af010c3f94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:48.953 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ac2cb6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:48 compute-0 kernel: tap09ac2cb6-30: left promiscuous mode
Oct 11 08:50:48 compute-0 nova_compute[260935]: 2025-10-11 08:50:48.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:49.001 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8311313f-54cd-4bfa-a9bc-8ca4aecf0738]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Oct 11 08:50:49 compute-0 ceph-mon[74313]: pgmap v1362: 321 pgs: 321 active+clean; 246 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 255 op/s
Oct 11 08:50:49 compute-0 ceph-mon[74313]: osdmap e176: 3 total, 3 up, 3 in
Oct 11 08:50:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Oct 11 08:50:49 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Oct 11 08:50:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:49.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4216b2b7-ae28-44f6-af3d-8e82c78d00f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:49.036 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[25c158f3-f214-464a-941d-3c1024ee8312]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:49.056 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[df40849c-0f0e-4786-a547-895e2c9e8969]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439097, 'reachable_time': 31436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300258, 'error': None, 'target': 'ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:49.059 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-09ac2cb6-3e22-4dd3-895f-47d3e5dc3cd5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:50:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:49.060 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[5ccca476-69c9-4433-b63d-dbeac4f4f6d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.080 2 DEBUG nova.storage.rbd_utils [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(snap) on rbd image(a3f24c7b-a065-417e-b63f-e5280c978ce3) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:50:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d09ac2cb6\x2d3e22\x2d4dd3\x2d895f\x2d47d3e5dc3cd5.mount: Deactivated successfully.
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.258 2 INFO nova.compute.manager [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Took 1.50 seconds to destroy the instance on the hypervisor.
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.259 2 DEBUG oslo.service.loopingcall [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.261 2 DEBUG nova.compute.manager [-] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.261 2 DEBUG nova.network.neutron [-] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.283 2 INFO nova.virt.libvirt.driver [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Deleting instance files /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_del
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.285 2 INFO nova.virt.libvirt.driver [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Deletion of /var/lib/nova/instances/77ff3a9d-3eb2-40ed-ad12-6367fd4e555f_del complete
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.623 2 INFO nova.compute.manager [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Took 1.13 seconds to destroy the instance on the hypervisor.
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.623 2 DEBUG oslo.service.loopingcall [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.624 2 DEBUG nova.compute.manager [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.624 2 DEBUG nova.network.neutron [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.821 2 DEBUG nova.compute.manager [req-4424305f-8602-4288-9966-6b5ff364f0c2 req-56267edb-f88a-447d-b1d4-851b40d35543 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-vif-unplugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.822 2 DEBUG oslo_concurrency.lockutils [req-4424305f-8602-4288-9966-6b5ff364f0c2 req-56267edb-f88a-447d-b1d4-851b40d35543 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.823 2 DEBUG oslo_concurrency.lockutils [req-4424305f-8602-4288-9966-6b5ff364f0c2 req-56267edb-f88a-447d-b1d4-851b40d35543 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.823 2 DEBUG oslo_concurrency.lockutils [req-4424305f-8602-4288-9966-6b5ff364f0c2 req-56267edb-f88a-447d-b1d4-851b40d35543 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.824 2 DEBUG nova.compute.manager [req-4424305f-8602-4288-9966-6b5ff364f0c2 req-56267edb-f88a-447d-b1d4-851b40d35543 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] No waiting events found dispatching network-vif-unplugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.825 2 DEBUG nova.compute.manager [req-4424305f-8602-4288-9966-6b5ff364f0c2 req-56267edb-f88a-447d-b1d4-851b40d35543 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-vif-unplugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:50:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 246 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.2 MiB/s wr, 319 op/s
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.890 2 DEBUG nova.network.neutron [-] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.908 2 INFO nova.compute.manager [-] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Took 0.65 seconds to deallocate network for instance.
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.965 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:49 compute-0 nova_compute[260935]: 2025-10-11 08:50:49.966 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Oct 11 08:50:50 compute-0 ceph-mon[74313]: osdmap e177: 3 total, 3 up, 3 in
Oct 11 08:50:50 compute-0 nova_compute[260935]: 2025-10-11 08:50:50.031 2 DEBUG oslo_concurrency.processutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Oct 11 08:50:50 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Oct 11 08:50:50 compute-0 nova_compute[260935]: 2025-10-11 08:50:50.228 2 DEBUG nova.network.neutron [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:50 compute-0 nova_compute[260935]: 2025-10-11 08:50:50.255 2 INFO nova.compute.manager [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Took 0.63 seconds to deallocate network for instance.
Oct 11 08:50:50 compute-0 nova_compute[260935]: 2025-10-11 08:50:50.302 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:50:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1310658092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:50 compute-0 nova_compute[260935]: 2025-10-11 08:50:50.526 2 DEBUG oslo_concurrency.processutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:50 compute-0 nova_compute[260935]: 2025-10-11 08:50:50.536 2 DEBUG nova.compute.provider_tree [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:50:50 compute-0 nova_compute[260935]: 2025-10-11 08:50:50.552 2 DEBUG nova.scheduler.client.report [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:50:50 compute-0 nova_compute[260935]: 2025-10-11 08:50:50.578 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:50 compute-0 nova_compute[260935]: 2025-10-11 08:50:50.582 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:50 compute-0 nova_compute[260935]: 2025-10-11 08:50:50.602 2 INFO nova.scheduler.client.report [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Deleted allocations for instance 14711e39-46ca-4856-9c19-fa51b869064d
Oct 11 08:50:50 compute-0 nova_compute[260935]: 2025-10-11 08:50:50.671 2 DEBUG oslo_concurrency.lockutils [None req-d3c18f18-c02a-4253-a213-db44ffaf6acd 34f29a5a135d45f597eeaa741009aa67 eddb41c523294041b154a0a99c88e82b - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:50 compute-0 nova_compute[260935]: 2025-10-11 08:50:50.684 2 DEBUG oslo_concurrency.processutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.000 2 DEBUG nova.compute.manager [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-unplugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.001 2 DEBUG oslo_concurrency.lockutils [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.002 2 DEBUG oslo_concurrency.lockutils [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.002 2 DEBUG oslo_concurrency.lockutils [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.003 2 DEBUG nova.compute.manager [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] No waiting events found dispatching network-vif-unplugged-a3944a31-9560-49ae-b2a5-caaf2736993a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.003 2 WARNING nova.compute.manager [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received unexpected event network-vif-unplugged-a3944a31-9560-49ae-b2a5-caaf2736993a for instance with vm_state deleted and task_state None.
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.004 2 DEBUG nova.compute.manager [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.005 2 DEBUG oslo_concurrency.lockutils [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.005 2 DEBUG oslo_concurrency.lockutils [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.006 2 DEBUG oslo_concurrency.lockutils [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.006 2 DEBUG nova.compute.manager [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] No waiting events found dispatching network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.007 2 WARNING nova.compute.manager [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received unexpected event network-vif-plugged-a3944a31-9560-49ae-b2a5-caaf2736993a for instance with vm_state deleted and task_state None.
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.007 2 DEBUG nova.compute.manager [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-vif-deleted-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.008 2 DEBUG nova.compute.manager [req-fb91add6-24eb-467d-8fe1-6e07386ace70 req-e90c83d8-a51e-49a6-8283-074c5af150d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Received event network-vif-deleted-a3944a31-9560-49ae-b2a5-caaf2736993a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:51 compute-0 ceph-mon[74313]: pgmap v1365: 321 pgs: 321 active+clean; 246 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.2 MiB/s wr, 319 op/s
Oct 11 08:50:51 compute-0 ceph-mon[74313]: osdmap e178: 3 total, 3 up, 3 in
Oct 11 08:50:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1310658092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:50:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1331782170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.169 2 DEBUG oslo_concurrency.processutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.178 2 DEBUG nova.compute.provider_tree [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.196 2 DEBUG nova.scheduler.client.report [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.225 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.262 2 INFO nova.scheduler.client.report [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Deleted allocations for instance 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.350 2 DEBUG oslo_concurrency.lockutils [None req-cfd8987d-7906-4fa6-9dd8-07dc376f483a a51c2680b31e40b1908642ef8795c6f0 39d3043a7835403392c659fbb2fe0b22 - - default default] Lock "77ff3a9d-3eb2-40ed-ad12-6367fd4e555f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.364 2 INFO nova.virt.libvirt.driver [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Snapshot image upload complete
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.365 2 INFO nova.compute.manager [None req-b5ae42e4-fca2-497f-8563-d753ca93958c 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Took 4.62 seconds to snapshot the instance on the hypervisor.
Oct 11 08:50:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 246 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.3 MiB/s wr, 340 op/s
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.911 2 DEBUG nova.compute.manager [req-8c11787a-5da3-472e-b12f-5145e8881ad2 req-b70b1203-9bc8-419a-8ba3-cf752443283d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received event network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.912 2 DEBUG oslo_concurrency.lockutils [req-8c11787a-5da3-472e-b12f-5145e8881ad2 req-b70b1203-9bc8-419a-8ba3-cf752443283d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "14711e39-46ca-4856-9c19-fa51b869064d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.913 2 DEBUG oslo_concurrency.lockutils [req-8c11787a-5da3-472e-b12f-5145e8881ad2 req-b70b1203-9bc8-419a-8ba3-cf752443283d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.913 2 DEBUG oslo_concurrency.lockutils [req-8c11787a-5da3-472e-b12f-5145e8881ad2 req-b70b1203-9bc8-419a-8ba3-cf752443283d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "14711e39-46ca-4856-9c19-fa51b869064d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.914 2 DEBUG nova.compute.manager [req-8c11787a-5da3-472e-b12f-5145e8881ad2 req-b70b1203-9bc8-419a-8ba3-cf752443283d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] No waiting events found dispatching network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.914 2 WARNING nova.compute.manager [req-8c11787a-5da3-472e-b12f-5145e8881ad2 req-b70b1203-9bc8-419a-8ba3-cf752443283d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Received unexpected event network-vif-plugged-ac842ebf-4fca-4930-a4d1-3e8a6760d441 for instance with vm_state deleted and task_state None.
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.934 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172636.9303, 90e56ca7-b26f-4f83-908d-75204ecd2533 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.934 2 INFO nova.compute.manager [-] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] VM Stopped (Lifecycle Event)
Oct 11 08:50:51 compute-0 nova_compute[260935]: 2025-10-11 08:50:51.953 2 DEBUG nova.compute.manager [None req-095a72b4-cc85-4845-b60a-537360bbde7b - - - - - -] [instance: 90e56ca7-b26f-4f83-908d-75204ecd2533] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:52 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1331782170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:52 compute-0 nova_compute[260935]: 2025-10-11 08:50:52.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:53 compute-0 ceph-mon[74313]: pgmap v1367: 321 pgs: 321 active+clean; 246 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.3 MiB/s wr, 340 op/s
Oct 11 08:50:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:50:53 compute-0 nova_compute[260935]: 2025-10-11 08:50:53.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 236 op/s
Oct 11 08:50:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Oct 11 08:50:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Oct 11 08:50:54 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Oct 11 08:50:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:50:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:50:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:50:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:50:54 compute-0 podman[300321]: 2025-10-11 08:50:54.790026087 +0000 UTC m=+0.086883268 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 11 08:50:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:50:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:50:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:50:54
Oct 11 08:50:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:50:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:50:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'backups']
Oct 11 08:50:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:50:55 compute-0 ceph-mon[74313]: pgmap v1368: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 236 op/s
Oct 11 08:50:55 compute-0 ceph-mon[74313]: osdmap e179: 3 total, 3 up, 3 in
Oct 11 08:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.256 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.257 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.257 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.258 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.258 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.260 2 INFO nova.compute.manager [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Terminating instance
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.261 2 DEBUG nova.compute.manager [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.271 2 INFO nova.virt.libvirt.driver [-] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Instance destroyed successfully.
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.272 2 DEBUG nova.objects.instance [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'resources' on Instance uuid ac3851d8-5df2-4f84-9b28-a5fbf1c31b62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.301 2 DEBUG nova.virt.libvirt.vif [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:50:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1714775256',display_name='tempest-ImagesTestJSON-server-1714775256',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1714775256',id=31,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:50:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-wz17gid5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:50:51Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=ac3851d8-5df2-4f84-9b28-a5fbf1c31b62,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "address": "fa:16:3e:57:75:39", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd190526e-2b", "ovs_interfaceid": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.302 2 DEBUG nova.network.os_vif_util [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "address": "fa:16:3e:57:75:39", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd190526e-2b", "ovs_interfaceid": "d190526e-2bf1-4e6c-925a-2cc0c2b359a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.303 2 DEBUG nova.network.os_vif_util [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:75:39,bridge_name='br-int',has_traffic_filtering=True,id=d190526e-2bf1-4e6c-925a-2cc0c2b359a8,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd190526e-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.303 2 DEBUG os_vif [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:75:39,bridge_name='br-int',has_traffic_filtering=True,id=d190526e-2bf1-4e6c-925a-2cc0c2b359a8,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd190526e-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.306 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd190526e-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.314 2 INFO os_vif [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:75:39,bridge_name='br-int',has_traffic_filtering=True,id=d190526e-2bf1-4e6c-925a-2cc0c2b359a8,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd190526e-2b')
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.754 2 INFO nova.virt.libvirt.driver [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Deleting instance files /var/lib/nova/instances/ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_del
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.756 2 INFO nova.virt.libvirt.driver [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Deletion of /var/lib/nova/instances/ac3851d8-5df2-4f84-9b28-a5fbf1c31b62_del complete
Oct 11 08:50:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.1 MiB/s wr, 207 op/s
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.982 2 INFO nova.compute.manager [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.984 2 DEBUG oslo.service.loopingcall [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.984 2 DEBUG nova.compute.manager [-] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:50:55 compute-0 nova_compute[260935]: 2025-10-11 08:50:55.985 2 DEBUG nova.network.neutron [-] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.129 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.130 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.203 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.297 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.298 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.305 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.306 2 INFO nova.compute.claims [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.474 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.620 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172641.6185648, b3e20035-c079-4ad0-a085-2086be520d1d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.620 2 INFO nova.compute.manager [-] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] VM Stopped (Lifecycle Event)
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.651 2 DEBUG nova.compute.manager [None req-79174a50-acde-48a1-8515-a057021d2fae - - - - - -] [instance: b3e20035-c079-4ad0-a085-2086be520d1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.765 2 DEBUG nova.network.neutron [-] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.803 2 DEBUG nova.compute.manager [req-9d5cc97b-c6ec-4c22-9632-97b46985c909 req-86ec6d3a-0d2c-4323-8e33-6010b2592af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Received event network-vif-deleted-d190526e-2bf1-4e6c-925a-2cc0c2b359a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.804 2 INFO nova.compute.manager [req-9d5cc97b-c6ec-4c22-9632-97b46985c909 req-86ec6d3a-0d2c-4323-8e33-6010b2592af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Neutron deleted interface d190526e-2bf1-4e6c-925a-2cc0c2b359a8; detaching it from the instance and deleting it from the info cache
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.805 2 DEBUG nova.network.neutron [req-9d5cc97b-c6ec-4c22-9632-97b46985c909 req-86ec6d3a-0d2c-4323-8e33-6010b2592af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.908 2 INFO nova.compute.manager [-] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Took 0.92 seconds to deallocate network for instance.
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.931 2 DEBUG nova.compute.manager [req-9d5cc97b-c6ec-4c22-9632-97b46985c909 req-86ec6d3a-0d2c-4323-8e33-6010b2592af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Detach interface failed, port_id=d190526e-2bf1-4e6c-925a-2cc0c2b359a8, reason: Instance ac3851d8-5df2-4f84-9b28-a5fbf1c31b62 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 08:50:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:50:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2755288498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.973 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:56 compute-0 nova_compute[260935]: 2025-10-11 08:50:56.982 2 DEBUG nova.compute.provider_tree [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:50:57 compute-0 ceph-mon[74313]: pgmap v1370: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.1 MiB/s wr, 207 op/s
Oct 11 08:50:57 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2755288498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.141 2 DEBUG nova.scheduler.client.report [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.155 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.220 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.221 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.225 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.296 2 DEBUG oslo_concurrency.processutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.376 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.379 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.428 2 INFO nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.487 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.555 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.556 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.557 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.638 2 DEBUG nova.policy [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb27f51b5ffd414ab5ddbea179ada690', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:50:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:50:57 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2993305814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.752 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.754 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.755 2 INFO nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Creating image(s)
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.793 2 DEBUG nova.storage.rbd_utils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 9f842544-f85a-4c24-b273-8ae74177617e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.831 2 DEBUG nova.storage.rbd_utils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 9f842544-f85a-4c24-b273-8ae74177617e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.864 2 DEBUG nova.storage.rbd_utils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 9f842544-f85a-4c24-b273-8ae74177617e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.868 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 41 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 249 op/s
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.899 2 DEBUG oslo_concurrency.processutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.907 2 DEBUG nova.compute.provider_tree [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.954 2 DEBUG nova.scheduler.client.report [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.959 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.962 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.962 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.963 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.991 2 DEBUG nova.storage.rbd_utils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 9f842544-f85a-4c24-b273-8ae74177617e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:57 compute-0 nova_compute[260935]: 2025-10-11 08:50:57.995 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 9f842544-f85a-4c24-b273-8ae74177617e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.030 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.081 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172643.0801108, b35f4147-9e36-4dab-9ac8-2061c97797f2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.082 2 INFO nova.compute.manager [-] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] VM Stopped (Lifecycle Event)
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.087 2 INFO nova.scheduler.client.report [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Deleted allocations for instance ac3851d8-5df2-4f84-9b28-a5fbf1c31b62
Oct 11 08:50:58 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2993305814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.111 2 DEBUG nova.compute.manager [None req-6564532c-d5ed-484a-84bd-c03507f21e67 - - - - - -] [instance: b35f4147-9e36-4dab-9ac8-2061c97797f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.182 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.183 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.210 2 DEBUG oslo_concurrency.lockutils [None req-9594826e-1983-4d93-9d74-c393427fb458 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "ac3851d8-5df2-4f84-9b28-a5fbf1c31b62" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.226 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:50:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:50:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Oct 11 08:50:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Oct 11 08:50:58 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.287 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.288 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.296 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.296 2 INFO nova.compute.claims [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.304 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 9f842544-f85a-4c24-b273-8ae74177617e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.418 2 DEBUG nova.storage.rbd_utils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] resizing rbd image 9f842544-f85a-4c24-b273-8ae74177617e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:50:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:58.472 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:50:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:58.474 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 08:50:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:50:58.475 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.494 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.589 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Successfully created port: fc76a4bd-0e3d-426e-820f-690283cf8257 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.601 2 DEBUG nova.objects.instance [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lazy-loading 'migration_context' on Instance uuid 9f842544-f85a-4c24-b273-8ae74177617e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.624 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.625 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Ensure instance console log exists: /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.626 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.627 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.627 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:50:58 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2964181207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.970 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.978 2 DEBUG nova.compute.provider_tree [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:50:58 compute-0 nova_compute[260935]: 2025-10-11 08:50:58.996 2 DEBUG nova.scheduler.client.report [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.023 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.024 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.102 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.102 2 DEBUG nova.network.neutron [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.129 2 INFO nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.156 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:50:59 compute-0 ceph-mon[74313]: pgmap v1371: 321 pgs: 321 active+clean; 41 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 249 op/s
Oct 11 08:50:59 compute-0 ceph-mon[74313]: osdmap e180: 3 total, 3 up, 3 in
Oct 11 08:50:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2964181207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.262 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172644.2619982, ac3851d8-5df2-4f84-9b28-a5fbf1c31b62 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.263 2 INFO nova.compute.manager [-] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] VM Stopped (Lifecycle Event)
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.269 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.270 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.271 2 INFO nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Creating image(s)
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.302 2 DEBUG nova.storage.rbd_utils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.338 2 DEBUG nova.storage.rbd_utils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.373 2 DEBUG nova.storage.rbd_utils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.379 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.421 2 DEBUG nova.compute.manager [None req-ec52a2b7-3f6f-42b8-96da-6fbf8a680e49 - - - - - -] [instance: ac3851d8-5df2-4f84-9b28-a5fbf1c31b62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.473 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.474 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.475 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.476 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.510 2 DEBUG nova.storage.rbd_utils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.516 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.828 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:50:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 41 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 249 op/s
Oct 11 08:50:59 compute-0 nova_compute[260935]: 2025-10-11 08:50:59.898 2 DEBUG nova.storage.rbd_utils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] resizing rbd image 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:51:00 compute-0 nova_compute[260935]: 2025-10-11 08:51:00.007 2 DEBUG nova.policy [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1bab12893b9d49aabcb5ca19c9b951de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8c7604961214c6d9d49657535d799a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:51:00 compute-0 nova_compute[260935]: 2025-10-11 08:51:00.022 2 DEBUG nova.objects.instance [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 5b50d851-e482-40a2-8b7d-d3eca87e15ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:00 compute-0 nova_compute[260935]: 2025-10-11 08:51:00.128 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:51:00 compute-0 nova_compute[260935]: 2025-10-11 08:51:00.130 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Ensure instance console log exists: /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:51:00 compute-0 nova_compute[260935]: 2025-10-11 08:51:00.131 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:00 compute-0 nova_compute[260935]: 2025-10-11 08:51:00.131 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:00 compute-0 nova_compute[260935]: 2025-10-11 08:51:00.132 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:00 compute-0 nova_compute[260935]: 2025-10-11 08:51:00.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:00 compute-0 nova_compute[260935]: 2025-10-11 08:51:00.341 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Successfully created port: ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:51:00 compute-0 nova_compute[260935]: 2025-10-11 08:51:00.390 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172645.3893924, 6224c79a-8a36-490c-863a-67251512732f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:00 compute-0 nova_compute[260935]: 2025-10-11 08:51:00.390 2 INFO nova.compute.manager [-] [instance: 6224c79a-8a36-490c-863a-67251512732f] VM Stopped (Lifecycle Event)
Oct 11 08:51:00 compute-0 nova_compute[260935]: 2025-10-11 08:51:00.417 2 DEBUG nova.compute.manager [None req-e847c770-9a88-43a0-9686-fea569edcf50 - - - - - -] [instance: 6224c79a-8a36-490c-863a-67251512732f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:01 compute-0 nova_compute[260935]: 2025-10-11 08:51:01.220 2 DEBUG nova.network.neutron [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Successfully created port: 965f1a38-4159-41be-ac4a-f436a8ddeeab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:51:01 compute-0 ceph-mon[74313]: pgmap v1373: 321 pgs: 321 active+clean; 41 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 249 op/s
Oct 11 08:51:01 compute-0 nova_compute[260935]: 2025-10-11 08:51:01.340 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Successfully created port: e7343dc3-6cda-4dfb-8098-f021900f4584 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:51:01 compute-0 podman[300759]: 2025-10-11 08:51:01.351185108 +0000 UTC m=+0.091374946 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid)
Oct 11 08:51:01 compute-0 nova_compute[260935]: 2025-10-11 08:51:01.706 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:51:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 41 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 72 op/s
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.341 2 DEBUG nova.network.neutron [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Successfully updated port: 965f1a38-4159-41be-ac4a-f436a8ddeeab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.378 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "refresh_cache-5b50d851-e482-40a2-8b7d-d3eca87e15ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.378 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquired lock "refresh_cache-5b50d851-e482-40a2-8b7d-d3eca87e15ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.379 2 DEBUG nova.network.neutron [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.512 2 DEBUG nova.compute.manager [req-2956e34d-4759-477d-bc53-18b5a23c0636 req-6d0cfd7d-c466-446b-bee1-308373dc4175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received event network-changed-965f1a38-4159-41be-ac4a-f436a8ddeeab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.513 2 DEBUG nova.compute.manager [req-2956e34d-4759-477d-bc53-18b5a23c0636 req-6d0cfd7d-c466-446b-bee1-308373dc4175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Refreshing instance network info cache due to event network-changed-965f1a38-4159-41be-ac4a-f436a8ddeeab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.514 2 DEBUG oslo_concurrency.lockutils [req-2956e34d-4759-477d-bc53-18b5a23c0636 req-6d0cfd7d-c466-446b-bee1-308373dc4175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-5b50d851-e482-40a2-8b7d-d3eca87e15ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.545 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Successfully updated port: fc76a4bd-0e3d-426e-820f-690283cf8257 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.659 2 DEBUG nova.compute.manager [req-68805fd0-9138-433b-99b2-80e567632e2e req-416c5a24-239d-4d36-a040-5a7c63db660e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-changed-fc76a4bd-0e3d-426e-820f-690283cf8257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.660 2 DEBUG nova.compute.manager [req-68805fd0-9138-433b-99b2-80e567632e2e req-416c5a24-239d-4d36-a040-5a7c63db660e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Refreshing instance network info cache due to event network-changed-fc76a4bd-0e3d-426e-820f-690283cf8257. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.661 2 DEBUG oslo_concurrency.lockutils [req-68805fd0-9138-433b-99b2-80e567632e2e req-416c5a24-239d-4d36-a040-5a7c63db660e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.661 2 DEBUG oslo_concurrency.lockutils [req-68805fd0-9138-433b-99b2-80e567632e2e req-416c5a24-239d-4d36-a040-5a7c63db660e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.661 2 DEBUG nova.network.neutron [req-68805fd0-9138-433b-99b2-80e567632e2e req-416c5a24-239d-4d36-a040-5a7c63db660e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Refreshing network info cache for port fc76a4bd-0e3d-426e-820f-690283cf8257 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.930 2 DEBUG nova.network.neutron [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.996 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172647.9939656, 14711e39-46ca-4856-9c19-fa51b869064d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:02 compute-0 nova_compute[260935]: 2025-10-11 08:51:02.997 2 INFO nova.compute.manager [-] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] VM Stopped (Lifecycle Event)
Oct 11 08:51:03 compute-0 nova_compute[260935]: 2025-10-11 08:51:03.033 2 DEBUG nova.compute.manager [None req-0d74b37d-7121-4557-81ca-1844e1d92960 - - - - - -] [instance: 14711e39-46ca-4856-9c19-fa51b869064d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:03 compute-0 nova_compute[260935]: 2025-10-11 08:51:03.122 2 DEBUG nova.network.neutron [req-68805fd0-9138-433b-99b2-80e567632e2e req-416c5a24-239d-4d36-a040-5a7c63db660e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:51:03 compute-0 nova_compute[260935]: 2025-10-11 08:51:03.238 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Successfully updated port: ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:51:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:51:03 compute-0 ceph-mon[74313]: pgmap v1374: 321 pgs: 321 active+clean; 41 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 72 op/s
Oct 11 08:51:03 compute-0 nova_compute[260935]: 2025-10-11 08:51:03.605 2 DEBUG nova.network.neutron [req-68805fd0-9138-433b-99b2-80e567632e2e req-416c5a24-239d-4d36-a040-5a7c63db660e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:03 compute-0 nova_compute[260935]: 2025-10-11 08:51:03.624 2 DEBUG oslo_concurrency.lockutils [req-68805fd0-9138-433b-99b2-80e567632e2e req-416c5a24-239d-4d36-a040-5a7c63db660e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:03 compute-0 nova_compute[260935]: 2025-10-11 08:51:03.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:51:03 compute-0 nova_compute[260935]: 2025-10-11 08:51:03.727 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172648.7260673, 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:03 compute-0 nova_compute[260935]: 2025-10-11 08:51:03.727 2 INFO nova.compute.manager [-] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] VM Stopped (Lifecycle Event)
Oct 11 08:51:03 compute-0 nova_compute[260935]: 2025-10-11 08:51:03.744 2 DEBUG nova.compute.manager [None req-261660aa-ca60-4ec4-863f-04c67f450651 - - - - - -] [instance: 77ff3a9d-3eb2-40ed-ad12-6367fd4e555f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.041 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Successfully updated port: e7343dc3-6cda-4dfb-8098-f021900f4584 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.056 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.056 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquired lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.056 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.164 2 DEBUG nova.network.neutron [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Updating instance_info_cache with network_info: [{"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.194 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Releasing lock "refresh_cache-5b50d851-e482-40a2-8b7d-d3eca87e15ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.194 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Instance network_info: |[{"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.195 2 DEBUG oslo_concurrency.lockutils [req-2956e34d-4759-477d-bc53-18b5a23c0636 req-6d0cfd7d-c466-446b-bee1-308373dc4175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-5b50d851-e482-40a2-8b7d-d3eca87e15ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.196 2 DEBUG nova.network.neutron [req-2956e34d-4759-477d-bc53-18b5a23c0636 req-6d0cfd7d-c466-446b-bee1-308373dc4175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Refreshing network info cache for port 965f1a38-4159-41be-ac4a-f436a8ddeeab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.201 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Start _get_guest_xml network_info=[{"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.208 2 WARNING nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.213 2 DEBUG nova.virt.libvirt.host [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.213 2 DEBUG nova.virt.libvirt.host [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.218 2 DEBUG nova.virt.libvirt.host [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.219 2 DEBUG nova.virt.libvirt.host [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.220 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.221 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.221 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.222 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.223 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.223 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.224 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.224 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.225 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.225 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.226 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.226 2 DEBUG nova.virt.hardware [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.231 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.377 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006919304917952725 of space, bias 1.0, pg target 0.20757914753858175 quantized to 32 (current 32)
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:51:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:51:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:51:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4268172853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.701 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.732 2 DEBUG nova.storage.rbd_utils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.737 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.875 2 DEBUG nova.compute.manager [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-changed-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.876 2 DEBUG nova.compute.manager [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Refreshing instance network info cache due to event network-changed-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:51:04 compute-0 nova_compute[260935]: 2025-10-11 08:51:04.877 2 DEBUG oslo_concurrency.lockutils [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:51:05 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842549304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.236 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.238 2 DEBUG nova.virt.libvirt.vif [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1550870837',display_name='tempest-ImagesTestJSON-server-1550870837',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1550870837',id=33,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-jse0ea25',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,t
ask_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:59Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=5b50d851-e482-40a2-8b7d-d3eca87e15ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.239 2 DEBUG nova.network.os_vif_util [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.240 2 DEBUG nova.network.os_vif_util [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8d:cb,bridge_name='br-int',has_traffic_filtering=True,id=965f1a38-4159-41be-ac4a-f436a8ddeeab,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap965f1a38-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.242 2 DEBUG nova.objects.instance [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5b50d851-e482-40a2-8b7d-d3eca87e15ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.257 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:51:05 compute-0 nova_compute[260935]:   <uuid>5b50d851-e482-40a2-8b7d-d3eca87e15ab</uuid>
Oct 11 08:51:05 compute-0 nova_compute[260935]:   <name>instance-00000021</name>
Oct 11 08:51:05 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:51:05 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:51:05 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <nova:name>tempest-ImagesTestJSON-server-1550870837</nova:name>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:51:04</nova:creationTime>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:51:05 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:51:05 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:51:05 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:51:05 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:51:05 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:51:05 compute-0 nova_compute[260935]:         <nova:user uuid="1bab12893b9d49aabcb5ca19c9b951de">tempest-ImagesTestJSON-694493184-project-member</nova:user>
Oct 11 08:51:05 compute-0 nova_compute[260935]:         <nova:project uuid="f8c7604961214c6d9d49657535d799a5">tempest-ImagesTestJSON-694493184</nova:project>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:51:05 compute-0 nova_compute[260935]:         <nova:port uuid="965f1a38-4159-41be-ac4a-f436a8ddeeab">
Oct 11 08:51:05 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:51:05 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:51:05 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <system>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <entry name="serial">5b50d851-e482-40a2-8b7d-d3eca87e15ab</entry>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <entry name="uuid">5b50d851-e482-40a2-8b7d-d3eca87e15ab</entry>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     </system>
Oct 11 08:51:05 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:51:05 compute-0 nova_compute[260935]:   <os>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:   </os>
Oct 11 08:51:05 compute-0 nova_compute[260935]:   <features>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:   </features>
Oct 11 08:51:05 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:51:05 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:51:05 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk">
Oct 11 08:51:05 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       </source>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:51:05 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk.config">
Oct 11 08:51:05 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       </source>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:51:05 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:fe:8d:cb"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <target dev="tap965f1a38-41"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab/console.log" append="off"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <video>
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     </video>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:51:05 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:51:05 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:51:05 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:51:05 compute-0 nova_compute[260935]: </domain>
Oct 11 08:51:05 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.259 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Preparing to wait for external event network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.259 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.260 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.260 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.262 2 DEBUG nova.virt.libvirt.vif [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1550870837',display_name='tempest-ImagesTestJSON-server-1550870837',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1550870837',id=33,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-jse0ea25',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:59Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=5b50d851-e482-40a2-8b7d-d3eca87e15ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.262 2 DEBUG nova.network.os_vif_util [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.264 2 DEBUG nova.network.os_vif_util [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8d:cb,bridge_name='br-int',has_traffic_filtering=True,id=965f1a38-4159-41be-ac4a-f436a8ddeeab,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap965f1a38-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.264 2 DEBUG os_vif [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8d:cb,bridge_name='br-int',has_traffic_filtering=True,id=965f1a38-4159-41be-ac4a-f436a8ddeeab,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap965f1a38-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.268 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap965f1a38-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.275 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap965f1a38-41, col_values=(('external_ids', {'iface-id': '965f1a38-4159-41be-ac4a-f436a8ddeeab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:8d:cb', 'vm-uuid': '5b50d851-e482-40a2-8b7d-d3eca87e15ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:05 compute-0 NetworkManager[44960]: <info>  [1760172665.2786] manager: (tap965f1a38-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:51:05 compute-0 ceph-mon[74313]: pgmap v1375: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Oct 11 08:51:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4268172853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1842549304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.285 2 INFO os_vif [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8d:cb,bridge_name='br-int',has_traffic_filtering=True,id=965f1a38-4159-41be-ac4a-f436a8ddeeab,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap965f1a38-41')
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.336 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.337 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.337 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No VIF found with MAC fa:16:3e:fe:8d:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.337 2 INFO nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Using config drive
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.357 2 DEBUG nova.storage.rbd_utils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:05 compute-0 nova_compute[260935]: 2025-10-11 08:51:05.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:51:05 compute-0 podman[300863]: 2025-10-11 08:51:05.795286826 +0000 UTC m=+0.092387854 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 08:51:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 4.3 MiB/s wr, 122 op/s
Oct 11 08:51:05 compute-0 podman[300864]: 2025-10-11 08:51:05.904237137 +0000 UTC m=+0.190528439 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.308 2 INFO nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Creating config drive at /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab/disk.config
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.317 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptkpppzls execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.483 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptkpppzls" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.524 2 DEBUG nova.storage.rbd_utils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.530 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab/disk.config 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.640 2 DEBUG nova.network.neutron [req-2956e34d-4759-477d-bc53-18b5a23c0636 req-6d0cfd7d-c466-446b-bee1-308373dc4175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Updated VIF entry in instance network info cache for port 965f1a38-4159-41be-ac4a-f436a8ddeeab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.642 2 DEBUG nova.network.neutron [req-2956e34d-4759-477d-bc53-18b5a23c0636 req-6d0cfd7d-c466-446b-bee1-308373dc4175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Updating instance_info_cache with network_info: [{"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.669 2 DEBUG oslo_concurrency.lockutils [req-2956e34d-4759-477d-bc53-18b5a23c0636 req-6d0cfd7d-c466-446b-bee1-308373dc4175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-5b50d851-e482-40a2-8b7d-d3eca87e15ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.740 2 DEBUG oslo_concurrency.processutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab/disk.config 5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.742 2 INFO nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Deleting local config drive /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab/disk.config because it was imported into RBD.
Oct 11 08:51:06 compute-0 kernel: tap965f1a38-41: entered promiscuous mode
Oct 11 08:51:06 compute-0 NetworkManager[44960]: <info>  [1760172666.8344] manager: (tap965f1a38-41): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Oct 11 08:51:06 compute-0 ovn_controller[152945]: 2025-10-11T08:51:06Z|00223|binding|INFO|Claiming lport 965f1a38-4159-41be-ac4a-f436a8ddeeab for this chassis.
Oct 11 08:51:06 compute-0 ovn_controller[152945]: 2025-10-11T08:51:06Z|00224|binding|INFO|965f1a38-4159-41be-ac4a-f436a8ddeeab: Claiming fa:16:3e:fe:8d:cb 10.100.0.4
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.845 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.846 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:51:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.859 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:8d:cb 10.100.0.4'], port_security=['fa:16:3e:fe:8d:cb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5b50d851-e482-40a2-8b7d-d3eca87e15ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=965f1a38-4159-41be-ac4a-f436a8ddeeab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.863 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 965f1a38-4159-41be-ac4a-f436a8ddeeab in datapath 9bac3530-993f-420e-8692-0b14a331d756 bound to our chassis
Oct 11 08:51:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.867 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bac3530-993f-420e-8692-0b14a331d756
Oct 11 08:51:06 compute-0 systemd-machined[215705]: New machine qemu-36-instance-00000021.
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.883 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.884 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.884 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.885 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.885 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.887 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[787f1b69-f375-43c4-a19f-5ece67f315f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.888 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bac3530-91 in ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:51:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.891 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bac3530-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:51:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.892 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4b871840-0dd2-44a0-85de-a61029bb7b63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.899 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fec92914-6cc6-45fa-8fbf-b07947d78941]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:06 compute-0 systemd[1]: Started Virtual Machine qemu-36-instance-00000021.
Oct 11 08:51:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.918 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[f3eed645-38f4-4599-9de2-d9f1dddbead8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:06 compute-0 systemd-udevd[300967]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:51:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:06.945 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e495f27d-00b3-4781-bcc8-91e8cb728e4d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:06 compute-0 ovn_controller[152945]: 2025-10-11T08:51:06Z|00225|binding|INFO|Setting lport 965f1a38-4159-41be-ac4a-f436a8ddeeab ovn-installed in OVS
Oct 11 08:51:06 compute-0 ovn_controller[152945]: 2025-10-11T08:51:06Z|00226|binding|INFO|Setting lport 965f1a38-4159-41be-ac4a-f436a8ddeeab up in Southbound
Oct 11 08:51:06 compute-0 NetworkManager[44960]: <info>  [1760172666.9677] device (tap965f1a38-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:51:06 compute-0 NetworkManager[44960]: <info>  [1760172666.9689] device (tap965f1a38-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:51:06 compute-0 nova_compute[260935]: 2025-10-11 08:51:06.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.018 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[67629c31-57ac-4a6c-95cc-174576f13a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:07 compute-0 NetworkManager[44960]: <info>  [1760172667.0267] manager: (tap9bac3530-90): new Veth device (/org/freedesktop/NetworkManager/Devices/109)
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.028 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f29547cb-51af-429d-b8aa-54774c12e587]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.072 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f90fed78-ac6d-4c6a-8b46-95e384ba3f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.077 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[48a110d0-07fb-44a0-96e4-417395ba55f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:07 compute-0 NetworkManager[44960]: <info>  [1760172667.0996] device (tap9bac3530-90): carrier: link connected
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.107 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[813b24d3-868d-4b23-aa16-4c39337c297c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.141 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[71b1681a-d9c9-4a5d-840f-cb5ee210c9c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451180, 'reachable_time': 25473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301016, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.169 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca70518-aa25-4ddf-a009-be4a50d947df]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:351f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451180, 'tstamp': 451180}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301017, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.195 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e6f817-f1d6-41bc-b3f4-c45673356e0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451180, 'reachable_time': 25473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301018, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.217 2 DEBUG nova.compute.manager [req-89d9cef3-4466-473f-8804-aa8942e8e6af req-3230d788-c141-4e2e-bef9-dcc0d5af1349 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received event network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.218 2 DEBUG oslo_concurrency.lockutils [req-89d9cef3-4466-473f-8804-aa8942e8e6af req-3230d788-c141-4e2e-bef9-dcc0d5af1349 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.219 2 DEBUG oslo_concurrency.lockutils [req-89d9cef3-4466-473f-8804-aa8942e8e6af req-3230d788-c141-4e2e-bef9-dcc0d5af1349 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.219 2 DEBUG oslo_concurrency.lockutils [req-89d9cef3-4466-473f-8804-aa8942e8e6af req-3230d788-c141-4e2e-bef9-dcc0d5af1349 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.220 2 DEBUG nova.compute.manager [req-89d9cef3-4466-473f-8804-aa8942e8e6af req-3230d788-c141-4e2e-bef9-dcc0d5af1349 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Processing event network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.239 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5e61dd-c6ff-4d70-8233-38f34816048c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:07 compute-0 ceph-mon[74313]: pgmap v1376: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 4.3 MiB/s wr, 122 op/s
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.317 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8c310fb4-bcd4-4d25-be54-714b08a1651e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.319 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.319 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.320 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bac3530-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:07 compute-0 NetworkManager[44960]: <info>  [1760172667.3240] manager: (tap9bac3530-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Oct 11 08:51:07 compute-0 kernel: tap9bac3530-90: entered promiscuous mode
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.334 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bac3530-90, col_values=(('external_ids', {'iface-id': 'e5becf0d-48c0-404b-9cba-07077454d085'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:07 compute-0 ovn_controller[152945]: 2025-10-11T08:51:07Z|00227|binding|INFO|Releasing lport e5becf0d-48c0-404b-9cba-07077454d085 from this chassis (sb_readonly=0)
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.338 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.339 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c05c43f7-be57-49bc-9e5e-4e788d2f48cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.341 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-9bac3530-993f-420e-8692-0b14a331d756
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 9bac3530-993f-420e-8692-0b14a331d756
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:51:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:07.343 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'env', 'PROCESS_TAG=haproxy-9bac3530-993f-420e-8692-0b14a331d756', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bac3530-993f-420e-8692-0b14a331d756.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:51:07 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2145660885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.423 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.504 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.505 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.796 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.798 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4283MB free_disk=59.946773529052734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.798 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.799 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.851 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172667.850748, 5b50d851-e482-40a2-8b7d-d3eca87e15ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.852 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] VM Started (Lifecycle Event)
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.856 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.867 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.875 2 INFO nova.virt.libvirt.driver [-] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Instance spawned successfully.
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.878 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:51:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 4.3 MiB/s wr, 76 op/s
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.892 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.896 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 9f842544-f85a-4c24-b273-8ae74177617e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.897 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 5b50d851-e482-40a2-8b7d-d3eca87e15ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.897 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.898 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:51:07 compute-0 podman[301096]: 2025-10-11 08:51:07.901012376 +0000 UTC m=+0.083506433 container create 4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.904 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.923 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.925 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.925 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.925 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.926 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.926 2 DEBUG nova.virt.libvirt.driver [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.929 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.930 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172667.8523057, 5b50d851-e482-40a2-8b7d-d3eca87e15ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.930 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] VM Paused (Lifecycle Event)
Oct 11 08:51:07 compute-0 systemd[1]: Started libpod-conmon-4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219.scope.
Oct 11 08:51:07 compute-0 podman[301096]: 2025-10-11 08:51:07.863295339 +0000 UTC m=+0.045789396 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:51:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.970 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81784f68462ca80ca6e90ba3c15edeb0a8d5f87b882ef7fdf8a487f1955dcf4c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.975 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172667.8645847, 5b50d851-e482-40a2-8b7d-d3eca87e15ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.984 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] VM Resumed (Lifecycle Event)
Oct 11 08:51:07 compute-0 nova_compute[260935]: 2025-10-11 08:51:07.987 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:07 compute-0 podman[301096]: 2025-10-11 08:51:07.992257866 +0000 UTC m=+0.174751903 container init 4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 08:51:08 compute-0 podman[301096]: 2025-10-11 08:51:08.002157216 +0000 UTC m=+0.184651243 container start 4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.034 2 INFO nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Took 8.76 seconds to spawn the instance on the hypervisor.
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.035 2 DEBUG nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:08 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[301111]: [NOTICE]   (301116) : New worker (301118) forked
Oct 11 08:51:08 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[301111]: [NOTICE]   (301116) : Loading success.
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.055 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.060 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.079 2 DEBUG nova.network.neutron [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Updating instance_info_cache with network_info: [{"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.106 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.120 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Releasing lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.121 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Instance network_info: |[{"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.122 2 DEBUG oslo_concurrency.lockutils [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.123 2 DEBUG nova.network.neutron [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Refreshing network info cache for port ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.130 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Start _get_guest_xml network_info=[{"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.146 2 INFO nova.compute.manager [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Took 9.87 seconds to build instance.
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.151 2 WARNING nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.164 2 DEBUG nova.virt.libvirt.host [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.165 2 DEBUG nova.virt.libvirt.host [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.169 2 DEBUG nova.virt.libvirt.host [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.170 2 DEBUG nova.virt.libvirt.host [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.171 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.172 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.173 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.173 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.174 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.175 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.175 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.176 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.176 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.177 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.177 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.178 2 DEBUG nova.virt.hardware [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.184 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.235 2 DEBUG oslo_concurrency.lockutils [None req-1d1846a2-13f3-4f48-8e57-f9456f6cfa2f 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:51:08 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2145660885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:51:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1773337102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.506 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.514 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.537 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.572 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.573 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:51:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1450823365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.717 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.759 2 DEBUG nova.storage.rbd_utils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 9f842544-f85a-4c24-b273-8ae74177617e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:08 compute-0 nova_compute[260935]: 2025-10-11 08:51:08.769 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:51:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478334644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.290 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.294 2 DEBUG nova.virt.libvirt.vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:57Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.295 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.297 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:1c:e4,bridge_name='br-int',has_traffic_filtering=True,id=fc76a4bd-0e3d-426e-820f-690283cf8257,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc76a4bd-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.299 2 DEBUG nova.virt.libvirt.vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:57Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.299 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.301 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:9f,bridge_name='br-int',has_traffic_filtering=True,id=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4,network=Network(27019660-0844-42c7-bd58-04b2d25d3924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf53fcf-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.302 2 DEBUG nova.virt.libvirt.vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:57Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.303 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.304 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:95:fe,bridge_name='br-int',has_traffic_filtering=True,id=e7343dc3-6cda-4dfb-8098-f021900f4584,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7343dc3-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.306 2 DEBUG nova.objects.instance [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f842544-f85a-4c24-b273-8ae74177617e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.313 2 DEBUG nova.compute.manager [req-9472affb-1139-4c69-b922-088de0948d7b req-1279bf96-1ac5-4958-9c5b-28b7fc7a41a7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received event network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.313 2 DEBUG oslo_concurrency.lockutils [req-9472affb-1139-4c69-b922-088de0948d7b req-1279bf96-1ac5-4958-9c5b-28b7fc7a41a7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:09 compute-0 ceph-mon[74313]: pgmap v1377: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 4.3 MiB/s wr, 76 op/s
Oct 11 08:51:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1773337102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1450823365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3478334644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.315 2 DEBUG oslo_concurrency.lockutils [req-9472affb-1139-4c69-b922-088de0948d7b req-1279bf96-1ac5-4958-9c5b-28b7fc7a41a7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.316 2 DEBUG oslo_concurrency.lockutils [req-9472affb-1139-4c69-b922-088de0948d7b req-1279bf96-1ac5-4958-9c5b-28b7fc7a41a7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.317 2 DEBUG nova.compute.manager [req-9472affb-1139-4c69-b922-088de0948d7b req-1279bf96-1ac5-4958-9c5b-28b7fc7a41a7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] No waiting events found dispatching network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.317 2 WARNING nova.compute.manager [req-9472affb-1139-4c69-b922-088de0948d7b req-1279bf96-1ac5-4958-9c5b-28b7fc7a41a7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received unexpected event network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab for instance with vm_state active and task_state None.
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.331 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:51:09 compute-0 nova_compute[260935]:   <uuid>9f842544-f85a-4c24-b273-8ae74177617e</uuid>
Oct 11 08:51:09 compute-0 nova_compute[260935]:   <name>instance-00000020</name>
Oct 11 08:51:09 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:51:09 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:51:09 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersTestMultiNic-server-358627564</nova:name>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:51:08</nova:creationTime>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:51:09 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:51:09 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:51:09 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:51:09 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:51:09 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:51:09 compute-0 nova_compute[260935]:         <nova:user uuid="fb27f51b5ffd414ab5ddbea179ada690">tempest-ServersTestMultiNic-65661968-project-member</nova:user>
Oct 11 08:51:09 compute-0 nova_compute[260935]:         <nova:project uuid="f84de17ba2c5470fbc4c7fe809e7d7b7">tempest-ServersTestMultiNic-65661968</nova:project>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:51:09 compute-0 nova_compute[260935]:         <nova:port uuid="fc76a4bd-0e3d-426e-820f-690283cf8257">
Oct 11 08:51:09 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.51" ipVersion="4"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:51:09 compute-0 nova_compute[260935]:         <nova:port uuid="ccf53fcf-c7bd-41f4-986b-d90fd701f3e4">
Oct 11 08:51:09 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.1.42" ipVersion="4"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:51:09 compute-0 nova_compute[260935]:         <nova:port uuid="e7343dc3-6cda-4dfb-8098-f021900f4584">
Oct 11 08:51:09 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.254" ipVersion="4"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:51:09 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:51:09 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <system>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <entry name="serial">9f842544-f85a-4c24-b273-8ae74177617e</entry>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <entry name="uuid">9f842544-f85a-4c24-b273-8ae74177617e</entry>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     </system>
Oct 11 08:51:09 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:51:09 compute-0 nova_compute[260935]:   <os>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:   </os>
Oct 11 08:51:09 compute-0 nova_compute[260935]:   <features>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:   </features>
Oct 11 08:51:09 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:51:09 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:51:09 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/9f842544-f85a-4c24-b273-8ae74177617e_disk">
Oct 11 08:51:09 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       </source>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:51:09 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/9f842544-f85a-4c24-b273-8ae74177617e_disk.config">
Oct 11 08:51:09 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       </source>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:51:09 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:91:1c:e4"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <target dev="tapfc76a4bd-0e"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:82:ff:9f"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <target dev="tapccf53fcf-c7"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:25:95:fe"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <target dev="tape7343dc3-6c"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e/console.log" append="off"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <video>
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     </video>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:51:09 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:51:09 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:51:09 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:51:09 compute-0 nova_compute[260935]: </domain>
Oct 11 08:51:09 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.343 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Preparing to wait for external event network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.344 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.344 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.344 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.345 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Preparing to wait for external event network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.345 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.345 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.345 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.346 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Preparing to wait for external event network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.346 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.346 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.347 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.348 2 DEBUG nova.virt.libvirt.vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:57Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.348 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.350 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:1c:e4,bridge_name='br-int',has_traffic_filtering=True,id=fc76a4bd-0e3d-426e-820f-690283cf8257,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc76a4bd-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.351 2 DEBUG os_vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:1c:e4,bridge_name='br-int',has_traffic_filtering=True,id=fc76a4bd-0e3d-426e-820f-690283cf8257,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc76a4bd-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.353 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.354 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.359 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc76a4bd-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.360 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc76a4bd-0e, col_values=(('external_ids', {'iface-id': 'fc76a4bd-0e3d-426e-820f-690283cf8257', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:1c:e4', 'vm-uuid': '9f842544-f85a-4c24-b273-8ae74177617e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:09 compute-0 NetworkManager[44960]: <info>  [1760172669.3947] manager: (tapfc76a4bd-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.402 2 INFO os_vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:1c:e4,bridge_name='br-int',has_traffic_filtering=True,id=fc76a4bd-0e3d-426e-820f-690283cf8257,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc76a4bd-0e')
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.404 2 DEBUG nova.virt.libvirt.vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:57Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.404 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.405 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:9f,bridge_name='br-int',has_traffic_filtering=True,id=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4,network=Network(27019660-0844-42c7-bd58-04b2d25d3924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf53fcf-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.405 2 DEBUG os_vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:9f,bridge_name='br-int',has_traffic_filtering=True,id=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4,network=Network(27019660-0844-42c7-bd58-04b2d25d3924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf53fcf-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.406 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.407 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.410 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapccf53fcf-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.410 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapccf53fcf-c7, col_values=(('external_ids', {'iface-id': 'ccf53fcf-c7bd-41f4-986b-d90fd701f3e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:ff:9f', 'vm-uuid': '9f842544-f85a-4c24-b273-8ae74177617e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:09 compute-0 NetworkManager[44960]: <info>  [1760172669.4127] manager: (tapccf53fcf-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.420 2 INFO os_vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:9f,bridge_name='br-int',has_traffic_filtering=True,id=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4,network=Network(27019660-0844-42c7-bd58-04b2d25d3924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf53fcf-c7')
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.421 2 DEBUG nova.virt.libvirt.vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:50:57Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.422 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.423 2 DEBUG nova.network.os_vif_util [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:95:fe,bridge_name='br-int',has_traffic_filtering=True,id=e7343dc3-6cda-4dfb-8098-f021900f4584,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7343dc3-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.423 2 DEBUG os_vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:95:fe,bridge_name='br-int',has_traffic_filtering=True,id=e7343dc3-6cda-4dfb-8098-f021900f4584,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7343dc3-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.427 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7343dc3-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.428 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape7343dc3-6c, col_values=(('external_ids', {'iface-id': 'e7343dc3-6cda-4dfb-8098-f021900f4584', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:95:fe', 'vm-uuid': '9f842544-f85a-4c24-b273-8ae74177617e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:09 compute-0 NetworkManager[44960]: <info>  [1760172669.4304] manager: (tape7343dc3-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.433 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.442 2 INFO os_vif [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:95:fe,bridge_name='br-int',has_traffic_filtering=True,id=e7343dc3-6cda-4dfb-8098-f021900f4584,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7343dc3-6c')
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.491 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.492 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.493 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No VIF found with MAC fa:16:3e:91:1c:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.494 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No VIF found with MAC fa:16:3e:82:ff:9f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.494 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No VIF found with MAC fa:16:3e:25:95:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.495 2 INFO nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Using config drive
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.523 2 DEBUG nova.storage.rbd_utils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 9f842544-f85a-4c24-b273-8ae74177617e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.618 2 DEBUG nova.compute.manager [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:09 compute-0 nova_compute[260935]: 2025-10-11 08:51:09.658 2 INFO nova.compute.manager [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] instance snapshotting
Oct 11 08:51:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.7 MiB/s wr, 66 op/s
Oct 11 08:51:10 compute-0 nova_compute[260935]: 2025-10-11 08:51:10.285 2 INFO nova.virt.libvirt.driver [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Beginning live snapshot process
Oct 11 08:51:10 compute-0 nova_compute[260935]: 2025-10-11 08:51:10.442 2 DEBUG nova.virt.libvirt.imagebackend [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 08:51:10 compute-0 nova_compute[260935]: 2025-10-11 08:51:10.727 2 INFO nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Creating config drive at /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e/disk.config
Oct 11 08:51:10 compute-0 nova_compute[260935]: 2025-10-11 08:51:10.737 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3s5gi1id execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:10 compute-0 nova_compute[260935]: 2025-10-11 08:51:10.781 2 DEBUG nova.storage.rbd_utils [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(e9e22bb70e184ce295ed0bc013a73ecc) on rbd image(5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:51:10 compute-0 nova_compute[260935]: 2025-10-11 08:51:10.823 2 DEBUG nova.network.neutron [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Updated VIF entry in instance network info cache for port ccf53fcf-c7bd-41f4-986b-d90fd701f3e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:51:10 compute-0 nova_compute[260935]: 2025-10-11 08:51:10.824 2 DEBUG nova.network.neutron [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Updating instance_info_cache with network_info: [{"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:10 compute-0 nova_compute[260935]: 2025-10-11 08:51:10.843 2 DEBUG oslo_concurrency.lockutils [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:10 compute-0 nova_compute[260935]: 2025-10-11 08:51:10.844 2 DEBUG nova.compute.manager [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-changed-e7343dc3-6cda-4dfb-8098-f021900f4584 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:10 compute-0 nova_compute[260935]: 2025-10-11 08:51:10.844 2 DEBUG nova.compute.manager [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Refreshing instance network info cache due to event network-changed-e7343dc3-6cda-4dfb-8098-f021900f4584. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:51:10 compute-0 nova_compute[260935]: 2025-10-11 08:51:10.845 2 DEBUG oslo_concurrency.lockutils [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:10 compute-0 nova_compute[260935]: 2025-10-11 08:51:10.845 2 DEBUG oslo_concurrency.lockutils [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:10 compute-0 nova_compute[260935]: 2025-10-11 08:51:10.846 2 DEBUG nova.network.neutron [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Refreshing network info cache for port e7343dc3-6cda-4dfb-8098-f021900f4584 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:51:10 compute-0 nova_compute[260935]: 2025-10-11 08:51:10.890 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3s5gi1id" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:10 compute-0 nova_compute[260935]: 2025-10-11 08:51:10.921 2 DEBUG nova.storage.rbd_utils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 9f842544-f85a-4c24-b273-8ae74177617e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:10 compute-0 nova_compute[260935]: 2025-10-11 08:51:10.925 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e/disk.config 9f842544-f85a-4c24-b273-8ae74177617e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:11 compute-0 sshd-session[301235]: Invalid user sumit from 155.4.244.179 port 2019
Oct 11 08:51:11 compute-0 sshd-session[301235]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 08:51:11 compute-0 sshd-session[301235]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.123 2 DEBUG oslo_concurrency.processutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e/disk.config 9f842544-f85a-4c24-b273-8ae74177617e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.125 2 INFO nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Deleting local config drive /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e/disk.config because it was imported into RBD.
Oct 11 08:51:11 compute-0 kernel: tapfc76a4bd-0e: entered promiscuous mode
Oct 11 08:51:11 compute-0 NetworkManager[44960]: <info>  [1760172671.2039] manager: (tapfc76a4bd-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/114)
Oct 11 08:51:11 compute-0 ovn_controller[152945]: 2025-10-11T08:51:11Z|00228|binding|INFO|Claiming lport fc76a4bd-0e3d-426e-820f-690283cf8257 for this chassis.
Oct 11 08:51:11 compute-0 ovn_controller[152945]: 2025-10-11T08:51:11Z|00229|binding|INFO|fc76a4bd-0e3d-426e-820f-690283cf8257: Claiming fa:16:3e:91:1c:e4 10.100.0.51
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.227 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:1c:e4 10.100.0.51'], port_security=['fa:16:3e:91:1c:e4 10.100.0.51'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.51/24', 'neutron:device_id': '9f842544-f85a-4c24-b273-8ae74177617e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbdc5e62-4240-4c3e-8aee-e7dbf176e812, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=fc76a4bd-0e3d-426e-820f-690283cf8257) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.231 162815 INFO neutron.agent.ovn.metadata.agent [-] Port fc76a4bd-0e3d-426e-820f-690283cf8257 in datapath da40451d-49f4-4bd4-b0a3-55dc537c2426 bound to our chassis
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.235 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network da40451d-49f4-4bd4-b0a3-55dc537c2426
Oct 11 08:51:11 compute-0 NetworkManager[44960]: <info>  [1760172671.2437] manager: (tapccf53fcf-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/115)
Oct 11 08:51:11 compute-0 systemd-udevd[301342]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:51:11 compute-0 systemd-udevd[301343]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.255 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2556687f-95e4-467a-8fef-578408904368]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.257 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapda40451d-41 in ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.260 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapda40451d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.260 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0a137fd8-5286-4a46-acde-23773f86b42e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.262 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[09757eba-1313-472c-b9d9-0e918b97f3e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:11 compute-0 NetworkManager[44960]: <info>  [1760172671.2644] manager: (tape7343dc3-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/116)
Oct 11 08:51:11 compute-0 systemd-udevd[301350]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:51:11 compute-0 NetworkManager[44960]: <info>  [1760172671.2707] device (tapfc76a4bd-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:51:11 compute-0 NetworkManager[44960]: <info>  [1760172671.2737] device (tapfc76a4bd-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:51:11 compute-0 kernel: tape7343dc3-6c: entered promiscuous mode
Oct 11 08:51:11 compute-0 kernel: tapccf53fcf-c7: entered promiscuous mode
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.286 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[a8443c34-180e-4f76-bcb3-6392957ddfd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:11 compute-0 NetworkManager[44960]: <info>  [1760172671.2922] device (tapccf53fcf-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:51:11 compute-0 NetworkManager[44960]: <info>  [1760172671.2936] device (tapccf53fcf-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:51:11 compute-0 ovn_controller[152945]: 2025-10-11T08:51:11Z|00230|binding|INFO|Claiming lport ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 for this chassis.
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:11 compute-0 ovn_controller[152945]: 2025-10-11T08:51:11Z|00231|binding|INFO|ccf53fcf-c7bd-41f4-986b-d90fd701f3e4: Claiming fa:16:3e:82:ff:9f 10.100.1.42
Oct 11 08:51:11 compute-0 ovn_controller[152945]: 2025-10-11T08:51:11Z|00232|binding|INFO|Claiming lport e7343dc3-6cda-4dfb-8098-f021900f4584 for this chassis.
Oct 11 08:51:11 compute-0 ovn_controller[152945]: 2025-10-11T08:51:11Z|00233|binding|INFO|e7343dc3-6cda-4dfb-8098-f021900f4584: Claiming fa:16:3e:25:95:fe 10.100.0.254
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:11 compute-0 ovn_controller[152945]: 2025-10-11T08:51:11Z|00234|binding|INFO|Setting lport fc76a4bd-0e3d-426e-820f-690283cf8257 ovn-installed in OVS
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:11 compute-0 NetworkManager[44960]: <info>  [1760172671.3060] device (tape7343dc3-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:51:11 compute-0 NetworkManager[44960]: <info>  [1760172671.3067] device (tape7343dc3-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.306 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:95:fe 10.100.0.254'], port_security=['fa:16:3e:25:95:fe 10.100.0.254'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.254/24', 'neutron:device_id': '9f842544-f85a-4c24-b273-8ae74177617e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbdc5e62-4240-4c3e-8aee-e7dbf176e812, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e7343dc3-6cda-4dfb-8098-f021900f4584) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:11 compute-0 ovn_controller[152945]: 2025-10-11T08:51:11Z|00235|binding|INFO|Setting lport fc76a4bd-0e3d-426e-820f-690283cf8257 up in Southbound
Oct 11 08:51:11 compute-0 systemd-machined[215705]: New machine qemu-37-instance-00000020.
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.309 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:ff:9f 10.100.1.42'], port_security=['fa:16:3e:82:ff:9f 10.100.1.42'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.42/24', 'neutron:device_id': '9f842544-f85a-4c24-b273-8ae74177617e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27019660-0844-42c7-bd58-04b2d25d3924', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9317b60-e04e-4d9d-813e-91a63ec5f815, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Oct 11 08:51:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Oct 11 08:51:11 compute-0 systemd[1]: Started Virtual Machine qemu-37-instance-00000020.
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.329 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f35ac527-b444-4778-90bc-68a5c944f4cb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:11 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Oct 11 08:51:11 compute-0 ceph-mon[74313]: pgmap v1378: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.7 MiB/s wr, 66 op/s
Oct 11 08:51:11 compute-0 ovn_controller[152945]: 2025-10-11T08:51:11Z|00236|binding|INFO|Setting lport ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 ovn-installed in OVS
Oct 11 08:51:11 compute-0 ovn_controller[152945]: 2025-10-11T08:51:11Z|00237|binding|INFO|Setting lport ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 up in Southbound
Oct 11 08:51:11 compute-0 ovn_controller[152945]: 2025-10-11T08:51:11Z|00238|binding|INFO|Setting lport e7343dc3-6cda-4dfb-8098-f021900f4584 ovn-installed in OVS
Oct 11 08:51:11 compute-0 ovn_controller[152945]: 2025-10-11T08:51:11Z|00239|binding|INFO|Setting lport e7343dc3-6cda-4dfb-8098-f021900f4584 up in Southbound
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.378 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b4ac2fdb-42af-4a47-b507-3a8e6d5caaa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:11 compute-0 NetworkManager[44960]: <info>  [1760172671.3846] manager: (tapda40451d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/117)
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.383 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d0689018-518e-43f0-b60a-aade93807e9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.423 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ef97727d-f9d1-4c49-a7ed-b9162a063e58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.418 2 DEBUG nova.storage.rbd_utils [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] cloning vms/5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk@e9e22bb70e184ce295ed0bc013a73ecc to images/70bd8f23-d068-4d06-af15-566e76e92803 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.427 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c36bb27c-3fac-45db-a947-1782b415092d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:11 compute-0 NetworkManager[44960]: <info>  [1760172671.4563] device (tapda40451d-40): carrier: link connected
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.465 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cb883f38-97df-4436-ae85-263103cbb630]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.489 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[48d0c9e1-71ea-49b6-a991-d98eb40540e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda40451d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:aa:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451615, 'reachable_time': 38583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301414, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.512 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1e139af0-001f-416f-b6a6-dfb0a8a353d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:aa2c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451615, 'tstamp': 451615}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301418, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.535 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eb92fa7d-1764-471d-adf5-91a3c509b982]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda40451d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:aa:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451615, 'reachable_time': 38583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301422, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.559 2 DEBUG nova.storage.rbd_utils [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] flattening images/70bd8f23-d068-4d06-af15-566e76e92803 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.566 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a12dcff3-c161-4d61-923a-418ca6578139]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.629 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3d07bc9c-23e5-4869-8c45-32262c977aba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.631 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda40451d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.631 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.631 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda40451d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:11 compute-0 kernel: tapda40451d-40: entered promiscuous mode
Oct 11 08:51:11 compute-0 NetworkManager[44960]: <info>  [1760172671.6340] manager: (tapda40451d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.640 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapda40451d-40, col_values=(('external_ids', {'iface-id': 'cf50a7a8-d7ea-4ad9-aa80-73a2c01bc493'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:11 compute-0 ovn_controller[152945]: 2025-10-11T08:51:11Z|00240|binding|INFO|Releasing lport cf50a7a8-d7ea-4ad9-aa80-73a2c01bc493 from this chassis (sb_readonly=0)
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.664 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/da40451d-49f4-4bd4-b0a3-55dc537c2426.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/da40451d-49f4-4bd4-b0a3-55dc537c2426.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.666 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e8453c27-b788-40f9-863e-003194fe586f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.667 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-da40451d-49f4-4bd4-b0a3-55dc537c2426
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/da40451d-49f4-4bd4-b0a3-55dc537c2426.pid.haproxy
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID da40451d-49f4-4bd4-b0a3-55dc537c2426
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:51:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:11.671 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'env', 'PROCESS_TAG=haproxy-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/da40451d-49f4-4bd4-b0a3-55dc537c2426.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.826 2 DEBUG nova.compute.manager [req-734846ac-d593-425d-a91d-f82dc1770b3f req-2c75593b-1268-4e49-958a-91352b373ed0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.827 2 DEBUG oslo_concurrency.lockutils [req-734846ac-d593-425d-a91d-f82dc1770b3f req-2c75593b-1268-4e49-958a-91352b373ed0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.827 2 DEBUG oslo_concurrency.lockutils [req-734846ac-d593-425d-a91d-f82dc1770b3f req-2c75593b-1268-4e49-958a-91352b373ed0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.828 2 DEBUG oslo_concurrency.lockutils [req-734846ac-d593-425d-a91d-f82dc1770b3f req-2c75593b-1268-4e49-958a-91352b373ed0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.828 2 DEBUG nova.compute.manager [req-734846ac-d593-425d-a91d-f82dc1770b3f req-2c75593b-1268-4e49-958a-91352b373ed0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Processing event network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:51:11 compute-0 nova_compute[260935]: 2025-10-11 08:51:11.848 2 DEBUG nova.storage.rbd_utils [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] removing snapshot(e9e22bb70e184ce295ed0bc013a73ecc) on rbd image(5b50d851-e482-40a2-8b7d-d3eca87e15ab_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:51:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 4.3 MiB/s wr, 76 op/s
Oct 11 08:51:12 compute-0 podman[301534]: 2025-10-11 08:51:12.18295314 +0000 UTC m=+0.089126812 container create f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:51:12 compute-0 podman[301534]: 2025-10-11 08:51:12.140569061 +0000 UTC m=+0.046742813 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:51:12 compute-0 nova_compute[260935]: 2025-10-11 08:51:12.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:12 compute-0 systemd[1]: Started libpod-conmon-f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1.scope.
Oct 11 08:51:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Oct 11 08:51:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Oct 11 08:51:12 compute-0 ceph-mon[74313]: osdmap e181: 3 total, 3 up, 3 in
Oct 11 08:51:12 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Oct 11 08:51:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6a6ed8c063f0dc224c05dfdf416baf50adf8b36b436263058ef960624c7842/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:12 compute-0 podman[301534]: 2025-10-11 08:51:12.369018662 +0000 UTC m=+0.275192354 container init f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:51:12 compute-0 podman[301534]: 2025-10-11 08:51:12.377156952 +0000 UTC m=+0.283330614 container start f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 08:51:12 compute-0 nova_compute[260935]: 2025-10-11 08:51:12.394 2 DEBUG nova.storage.rbd_utils [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(snap) on rbd image(70bd8f23-d068-4d06-af15-566e76e92803) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:51:12 compute-0 neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426[301549]: [NOTICE]   (301553) : New worker (301562) forked
Oct 11 08:51:12 compute-0 neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426[301549]: [NOTICE]   (301553) : Loading success.
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.439 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e7343dc3-6cda-4dfb-8098-f021900f4584 in datapath da40451d-49f4-4bd4-b0a3-55dc537c2426 unbound from our chassis
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.441 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network da40451d-49f4-4bd4-b0a3-55dc537c2426
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.458 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ca8b592b-fcf7-45a1-abc8-9c3740b9a8d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 nova_compute[260935]: 2025-10-11 08:51:12.472 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172672.4690866, 9f842544-f85a-4c24-b273-8ae74177617e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:12 compute-0 nova_compute[260935]: 2025-10-11 08:51:12.473 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] VM Started (Lifecycle Event)
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.491 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a79022e0-fead-4236-9f09-46164e640d9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.493 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[69bfc85b-a645-4ac1-bdec-74059b25d530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.517 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c06937dd-ba60-4111-9695-dd976bb68aab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 nova_compute[260935]: 2025-10-11 08:51:12.551 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.552 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b88b38b1-e6bd-495c-a059-296d53140c9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda40451d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:aa:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 612, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 612, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451615, 'reachable_time': 38583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 528, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 528, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301587, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 nova_compute[260935]: 2025-10-11 08:51:12.557 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172672.469218, 9f842544-f85a-4c24-b273-8ae74177617e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:12 compute-0 nova_compute[260935]: 2025-10-11 08:51:12.557 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] VM Paused (Lifecycle Event)
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.573 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[67a45851-a734-4a69-9e91-ce6d1ac5a3e6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapda40451d-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451629, 'tstamp': 451629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301588, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tapda40451d-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451632, 'tstamp': 451632}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301588, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.575 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda40451d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:12 compute-0 nova_compute[260935]: 2025-10-11 08:51:12.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.578 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda40451d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.578 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.579 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapda40451d-40, col_values=(('external_ids', {'iface-id': 'cf50a7a8-d7ea-4ad9-aa80-73a2c01bc493'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.579 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.580 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 in datapath 27019660-0844-42c7-bd58-04b2d25d3924 unbound from our chassis
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.581 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 27019660-0844-42c7-bd58-04b2d25d3924
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.592 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d4151532-32bf-4c1f-8a6e-d1a11a1d7552]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.593 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap27019660-01 in ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.595 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap27019660-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.595 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[580fa3e6-d680-47fb-bff6-4f9f24ab8ab1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.596 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d62af85d-ff44-4c4a-a208-d3fb8291bfd5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.612 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[c5598702-4183-4056-9f01-3feced8a54cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.626 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[45af3767-0a04-4307-92d6-94d9f61179b5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.659 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f25ac40c-9f29-4eed-8f22-f2d4012cef7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 nova_compute[260935]: 2025-10-11 08:51:12.665 2 DEBUG nova.network.neutron [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Updated VIF entry in instance network info cache for port e7343dc3-6cda-4dfb-8098-f021900f4584. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:51:12 compute-0 nova_compute[260935]: 2025-10-11 08:51:12.666 2 DEBUG nova.network.neutron [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Updating instance_info_cache with network_info: [{"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:12 compute-0 NetworkManager[44960]: <info>  [1760172672.6728] manager: (tap27019660-00): new Veth device (/org/freedesktop/NetworkManager/Devices/119)
Oct 11 08:51:12 compute-0 nova_compute[260935]: 2025-10-11 08:51:12.673 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.675 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a38e30b4-abd0-4e23-9bb5-cfbc846c7ec4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 nova_compute[260935]: 2025-10-11 08:51:12.679 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.728 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee7699d-1257-46bb-9e13-335f44ae0a32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.732 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[97dae52c-35d0-45d3-a80d-a0ccbae59aef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 sshd-session[301235]: Failed password for invalid user sumit from 155.4.244.179 port 2019 ssh2
Oct 11 08:51:12 compute-0 NetworkManager[44960]: <info>  [1760172672.7707] device (tap27019660-00): carrier: link connected
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.785 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2a2e22-4e0f-4a9d-b377-eba7663ac74a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.809 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eebbcccc-1380-4ac1-8ea6-7e9fb3b927a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27019660-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:7b:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451747, 'reachable_time': 29931, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301599, 'error': None, 'target': 'ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 nova_compute[260935]: 2025-10-11 08:51:12.817 2 DEBUG oslo_concurrency.lockutils [req-414781cc-d6a1-42c0-9106-f249fbcb35e8 req-87a8b0a5-1cf6-4f03-8179-f4cd618401f3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-9f842544-f85a-4c24-b273-8ae74177617e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:12 compute-0 nova_compute[260935]: 2025-10-11 08:51:12.825 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.847 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f57e9098-2805-4da3-a54d-6db66157f808]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:7b58'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451747, 'tstamp': 451747}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301600, 'error': None, 'target': 'ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.871 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[600b0997-276f-4744-9ca5-dd91644b8e36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27019660-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:7b:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451747, 'reachable_time': 29931, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301601, 'error': None, 'target': 'ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:12.926 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3c524b-99ba-4599-807b-75be3ff6e917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.028 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bc05be55-ffa4-4258-b3d2-c5f461fad75b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.030 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27019660-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.031 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.031 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27019660-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:13 compute-0 kernel: tap27019660-00: entered promiscuous mode
Oct 11 08:51:13 compute-0 NetworkManager[44960]: <info>  [1760172673.0357] manager: (tap27019660-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Oct 11 08:51:13 compute-0 nova_compute[260935]: 2025-10-11 08:51:13.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:13 compute-0 nova_compute[260935]: 2025-10-11 08:51:13.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.040 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap27019660-00, col_values=(('external_ids', {'iface-id': 'a330e7c0-3dc2-4e01-abd9-1653b7179f53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:13 compute-0 nova_compute[260935]: 2025-10-11 08:51:13.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:13 compute-0 ovn_controller[152945]: 2025-10-11T08:51:13Z|00241|binding|INFO|Releasing lport a330e7c0-3dc2-4e01-abd9-1653b7179f53 from this chassis (sb_readonly=0)
Oct 11 08:51:13 compute-0 nova_compute[260935]: 2025-10-11 08:51:13.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.080 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/27019660-0844-42c7-bd58-04b2d25d3924.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/27019660-0844-42c7-bd58-04b2d25d3924.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.081 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c88b77-53ba-417d-8972-3d12ff190c97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.082 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-27019660-0844-42c7-bd58-04b2d25d3924
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/27019660-0844-42c7-bd58-04b2d25d3924.pid.haproxy
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 27019660-0844-42c7-bd58-04b2d25d3924
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:51:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:13.083 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924', 'env', 'PROCESS_TAG=haproxy-27019660-0844-42c7-bd58-04b2d25d3924', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/27019660-0844-42c7-bd58-04b2d25d3924.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:51:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:51:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Oct 11 08:51:13 compute-0 ceph-mon[74313]: pgmap v1380: 321 pgs: 321 active+clean; 134 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 4.3 MiB/s wr, 76 op/s
Oct 11 08:51:13 compute-0 ceph-mon[74313]: osdmap e182: 3 total, 3 up, 3 in
Oct 11 08:51:13 compute-0 sshd-session[301235]: Received disconnect from 155.4.244.179 port 2019:11: Bye Bye [preauth]
Oct 11 08:51:13 compute-0 sshd-session[301235]: Disconnected from invalid user sumit 155.4.244.179 port 2019 [preauth]
Oct 11 08:51:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Oct 11 08:51:13 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Oct 11 08:51:13 compute-0 podman[301633]: 2025-10-11 08:51:13.546042697 +0000 UTC m=+0.064831134 container create 0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 08:51:13 compute-0 podman[301633]: 2025-10-11 08:51:13.50971992 +0000 UTC m=+0.028508397 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:51:13 compute-0 systemd[1]: Started libpod-conmon-0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650.scope.
Oct 11 08:51:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c691cf546be102ecd2a636c399d183790889e1bda6debfc8222fabd7050af86c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:13 compute-0 podman[301633]: 2025-10-11 08:51:13.675317043 +0000 UTC m=+0.194105480 container init 0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 08:51:13 compute-0 podman[301633]: 2025-10-11 08:51:13.686250132 +0000 UTC m=+0.205038569 container start 0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 08:51:13 compute-0 neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924[301648]: [NOTICE]   (301652) : New worker (301654) forked
Oct 11 08:51:13 compute-0 neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924[301648]: [NOTICE]   (301652) : Loading success.
Oct 11 08:51:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 181 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.6 MiB/s wr, 249 op/s
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.302 2 DEBUG nova.compute.manager [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.303 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.303 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.304 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.304 2 DEBUG nova.compute.manager [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No event matching network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 in dict_keys([('network-vif-plugged', 'fc76a4bd-0e3d-426e-820f-690283cf8257'), ('network-vif-plugged', 'ccf53fcf-c7bd-41f4-986b-d90fd701f3e4')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.305 2 WARNING nova.compute.manager [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received unexpected event network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 for instance with vm_state building and task_state spawning.
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.305 2 DEBUG nova.compute.manager [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.306 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.306 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.307 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.307 2 DEBUG nova.compute.manager [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Processing event network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.308 2 DEBUG nova.compute.manager [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.308 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.309 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.309 2 DEBUG oslo_concurrency.lockutils [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.310 2 DEBUG nova.compute.manager [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No event matching network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 in dict_keys([('network-vif-plugged', 'fc76a4bd-0e3d-426e-820f-690283cf8257')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.310 2 WARNING nova.compute.manager [req-1f4f563e-8fc4-48e4-ba8e-429970605cee req-6f88bb4a-0ee3-40ca-8dd1-122fdc1f7133 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received unexpected event network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 for instance with vm_state building and task_state spawning.
Oct 11 08:51:14 compute-0 ceph-mon[74313]: osdmap e183: 3 total, 3 up, 3 in
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.892 2 INFO nova.virt.libvirt.driver [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Snapshot image upload complete
Oct 11 08:51:14 compute-0 nova_compute[260935]: 2025-10-11 08:51:14.893 2 INFO nova.compute.manager [None req-414dafa3-ac2d-4e0d-aebd-68df71fcf0fe 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Took 5.23 seconds to snapshot the instance on the hypervisor.
Oct 11 08:51:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:15.185 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:15.186 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:15.187 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:15 compute-0 ceph-mon[74313]: pgmap v1383: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 181 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.6 MiB/s wr, 249 op/s
Oct 11 08:51:15 compute-0 nova_compute[260935]: 2025-10-11 08:51:15.650 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:15 compute-0 nova_compute[260935]: 2025-10-11 08:51:15.651 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:15 compute-0 nova_compute[260935]: 2025-10-11 08:51:15.670 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:51:15 compute-0 nova_compute[260935]: 2025-10-11 08:51:15.761 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:15 compute-0 nova_compute[260935]: 2025-10-11 08:51:15.762 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:15 compute-0 nova_compute[260935]: 2025-10-11 08:51:15.772 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:51:15 compute-0 nova_compute[260935]: 2025-10-11 08:51:15.772 2 INFO nova.compute.claims [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:51:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 181 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.6 MiB/s wr, 249 op/s
Oct 11 08:51:15 compute-0 nova_compute[260935]: 2025-10-11 08:51:15.896 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:51:16 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506380279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.374 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:16 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2506380279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.383 2 DEBUG nova.compute.provider_tree [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.400 2 DEBUG nova.scheduler.client.report [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.423 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.423 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.478 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.479 2 DEBUG nova.network.neutron [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.505 2 INFO nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.527 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.652 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.654 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.654 2 INFO nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Creating image(s)
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.694 2 DEBUG nova.storage.rbd_utils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] rbd image 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.727 2 DEBUG nova.storage.rbd_utils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] rbd image 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.757 2 DEBUG nova.storage.rbd_utils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] rbd image 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.762 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.845 2 DEBUG nova.policy [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f961a579f0a74ab3a913fc3b21acea43', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '80ef0690d9e94d289f05d85941ef7154', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.856 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.856 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.857 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.857 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.880 2 DEBUG nova.storage.rbd_utils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] rbd image 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:16 compute-0 nova_compute[260935]: 2025-10-11 08:51:16.884 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.175 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.247 2 DEBUG nova.storage.rbd_utils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] resizing rbd image 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.365 2 DEBUG nova.objects.instance [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lazy-loading 'migration_context' on Instance uuid 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.380 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.381 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Ensure instance console log exists: /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.381 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.381 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.382 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:17 compute-0 ceph-mon[74313]: pgmap v1384: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 181 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.6 MiB/s wr, 249 op/s
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.548 2 DEBUG nova.compute.manager [req-5320edb0-e747-4d23-b93b-7b7080b3050f req-72dec7f4-b01d-4268-a3af-e5f05760f864 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.549 2 DEBUG oslo_concurrency.lockutils [req-5320edb0-e747-4d23-b93b-7b7080b3050f req-72dec7f4-b01d-4268-a3af-e5f05760f864 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.549 2 DEBUG oslo_concurrency.lockutils [req-5320edb0-e747-4d23-b93b-7b7080b3050f req-72dec7f4-b01d-4268-a3af-e5f05760f864 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.550 2 DEBUG oslo_concurrency.lockutils [req-5320edb0-e747-4d23-b93b-7b7080b3050f req-72dec7f4-b01d-4268-a3af-e5f05760f864 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.550 2 DEBUG nova.compute.manager [req-5320edb0-e747-4d23-b93b-7b7080b3050f req-72dec7f4-b01d-4268-a3af-e5f05760f864 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Processing event network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.552 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Instance event wait completed in 5 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.557 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172677.5567293, 9f842544-f85a-4c24-b273-8ae74177617e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.558 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] VM Resumed (Lifecycle Event)
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.563 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.569 2 INFO nova.virt.libvirt.driver [-] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Instance spawned successfully.
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.569 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.589 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.600 2 DEBUG nova.network.neutron [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Successfully created port: ab7592f9-1746-47d1-a702-b7be704ccabb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.610 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.616 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.617 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.618 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.619 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.620 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.621 2 DEBUG nova.virt.libvirt.driver [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.632 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.685 2 INFO nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Took 19.93 seconds to spawn the instance on the hypervisor.
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.686 2 DEBUG nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.758 2 INFO nova.compute.manager [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Took 21.49 seconds to build instance.
Oct 11 08:51:17 compute-0 nova_compute[260935]: 2025-10-11 08:51:17.776 2 DEBUG oslo_concurrency.lockutils [None req-a8a0f7c3-f1bc-448a-a5ff-f73f3f5f2861 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 321 active+clean; 227 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 6.5 MiB/s wr, 303 op/s
Oct 11 08:51:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:51:18 compute-0 nova_compute[260935]: 2025-10-11 08:51:18.830 2 DEBUG nova.network.neutron [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Successfully updated port: ab7592f9-1746-47d1-a702-b7be704ccabb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:51:18 compute-0 nova_compute[260935]: 2025-10-11 08:51:18.848 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:18 compute-0 nova_compute[260935]: 2025-10-11 08:51:18.848 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquired lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:18 compute-0 nova_compute[260935]: 2025-10-11 08:51:18.848 2 DEBUG nova.network.neutron [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:51:18 compute-0 nova_compute[260935]: 2025-10-11 08:51:18.973 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "3283d482-4ea1-400a-9a1b-486479801813" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:18 compute-0 nova_compute[260935]: 2025-10-11 08:51:18.973 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:18 compute-0 nova_compute[260935]: 2025-10-11 08:51:18.977 2 DEBUG nova.network.neutron [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:51:18 compute-0 nova_compute[260935]: 2025-10-11 08:51:18.988 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.063 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.064 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.070 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.070 2 INFO nova.compute.claims [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.251 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:19 compute-0 ceph-mon[74313]: pgmap v1385: 321 pgs: 321 active+clean; 227 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 6.5 MiB/s wr, 303 op/s
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.514 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.516 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.516 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.517 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.518 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.520 2 INFO nova.compute.manager [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Terminating instance
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.522 2 DEBUG nova.compute.manager [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:51:19 compute-0 kernel: tapfc76a4bd-0e (unregistering): left promiscuous mode
Oct 11 08:51:19 compute-0 NetworkManager[44960]: <info>  [1760172679.5746] device (tapfc76a4bd-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:19 compute-0 ovn_controller[152945]: 2025-10-11T08:51:19Z|00242|binding|INFO|Releasing lport fc76a4bd-0e3d-426e-820f-690283cf8257 from this chassis (sb_readonly=0)
Oct 11 08:51:19 compute-0 ovn_controller[152945]: 2025-10-11T08:51:19Z|00243|binding|INFO|Setting lport fc76a4bd-0e3d-426e-820f-690283cf8257 down in Southbound
Oct 11 08:51:19 compute-0 ovn_controller[152945]: 2025-10-11T08:51:19Z|00244|binding|INFO|Removing iface tapfc76a4bd-0e ovn-installed in OVS
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.606 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:1c:e4 10.100.0.51'], port_security=['fa:16:3e:91:1c:e4 10.100.0.51'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.51/24', 'neutron:device_id': '9f842544-f85a-4c24-b273-8ae74177617e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbdc5e62-4240-4c3e-8aee-e7dbf176e812, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=fc76a4bd-0e3d-426e-820f-690283cf8257) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:19 compute-0 kernel: tapccf53fcf-c7 (unregistering): left promiscuous mode
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.609 162815 INFO neutron.agent.ovn.metadata.agent [-] Port fc76a4bd-0e3d-426e-820f-690283cf8257 in datapath da40451d-49f4-4bd4-b0a3-55dc537c2426 unbound from our chassis
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.612 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network da40451d-49f4-4bd4-b0a3-55dc537c2426
Oct 11 08:51:19 compute-0 NetworkManager[44960]: <info>  [1760172679.6136] device (tapccf53fcf-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.628 2 DEBUG nova.compute.manager [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.629 2 DEBUG oslo_concurrency.lockutils [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.630 2 DEBUG oslo_concurrency.lockutils [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.630 2 DEBUG oslo_concurrency.lockutils [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.631 2 DEBUG nova.compute.manager [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No waiting events found dispatching network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.631 2 WARNING nova.compute.manager [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received unexpected event network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 for instance with vm_state active and task_state deleting.
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.632 2 DEBUG nova.compute.manager [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-changed-ab7592f9-1746-47d1-a702-b7be704ccabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.632 2 DEBUG nova.compute.manager [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Refreshing instance network info cache due to event network-changed-ab7592f9-1746-47d1-a702-b7be704ccabb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.633 2 DEBUG oslo_concurrency.lockutils [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.637 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f9fca451-eb63-497d-9a87-e058da276a0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:19 compute-0 kernel: tape7343dc3-6c (unregistering): left promiscuous mode
Oct 11 08:51:19 compute-0 ovn_controller[152945]: 2025-10-11T08:51:19Z|00245|binding|INFO|Releasing lport ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 from this chassis (sb_readonly=0)
Oct 11 08:51:19 compute-0 ovn_controller[152945]: 2025-10-11T08:51:19Z|00246|binding|INFO|Setting lport ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 down in Southbound
Oct 11 08:51:19 compute-0 NetworkManager[44960]: <info>  [1760172679.6559] device (tape7343dc3-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:51:19 compute-0 ovn_controller[152945]: 2025-10-11T08:51:19Z|00247|binding|INFO|Removing iface tapccf53fcf-c7 ovn-installed in OVS
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.666 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:ff:9f 10.100.1.42'], port_security=['fa:16:3e:82:ff:9f 10.100.1.42'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.42/24', 'neutron:device_id': '9f842544-f85a-4c24-b273-8ae74177617e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27019660-0844-42c7-bd58-04b2d25d3924', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9317b60-e04e-4d9d-813e-91a63ec5f815, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.694 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb38bcc-3a96-441a-882a-99b99561bdb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:19 compute-0 ovn_controller[152945]: 2025-10-11T08:51:19Z|00248|binding|INFO|Releasing lport e7343dc3-6cda-4dfb-8098-f021900f4584 from this chassis (sb_readonly=0)
Oct 11 08:51:19 compute-0 ovn_controller[152945]: 2025-10-11T08:51:19Z|00249|binding|INFO|Setting lport e7343dc3-6cda-4dfb-8098-f021900f4584 down in Southbound
Oct 11 08:51:19 compute-0 ovn_controller[152945]: 2025-10-11T08:51:19Z|00250|binding|INFO|Removing iface tape7343dc3-6c ovn-installed in OVS
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.741 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[477eba71-19c7-4bae-903d-b494dfb7e0d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.748 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:95:fe 10.100.0.254'], port_security=['fa:16:3e:25:95:fe 10.100.0.254'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.254/24', 'neutron:device_id': '9f842544-f85a-4c24-b273-8ae74177617e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbdc5e62-4240-4c3e-8aee-e7dbf176e812, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e7343dc3-6cda-4dfb-8098-f021900f4584) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:19 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Deactivated successfully.
Oct 11 08:51:19 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Consumed 2.954s CPU time.
Oct 11 08:51:19 compute-0 systemd-machined[215705]: Machine qemu-37-instance-00000020 terminated.
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.763 2 DEBUG nova.network.neutron [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Updating instance_info_cache with network_info: [{"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:51:19 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3882468972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.790 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Releasing lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.790 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Instance network_info: |[{"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.791 2 DEBUG oslo_concurrency.lockutils [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.791 2 DEBUG nova.network.neutron [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Refreshing network info cache for port ab7592f9-1746-47d1-a702-b7be704ccabb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.794 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Start _get_guest_xml network_info=[{"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.794 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[dd199b23-9fef-4ff7-a523-1a5d7345b13f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.800 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.808 2 WARNING nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.817 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2420e570-82e8-48f0-9bc5-315da4b0a9ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda40451d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:aa:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451615, 'reachable_time': 38583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301893, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.819 2 DEBUG nova.virt.libvirt.host [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.820 2 DEBUG nova.virt.libvirt.host [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.822 2 DEBUG nova.compute.provider_tree [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.828 2 DEBUG nova.virt.libvirt.host [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.829 2 DEBUG nova.virt.libvirt.host [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.831 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.831 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.832 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.834 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.835 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.835 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.836 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.837 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.838 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.839 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.839 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.840 2 DEBUG nova.virt.hardware [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.839 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[975c810d-f9f5-40c4-96a9-23d9ab3a32a7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapda40451d-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451629, 'tstamp': 451629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301894, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tapda40451d-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451632, 'tstamp': 451632}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301894, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.843 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda40451d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.844 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.860 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda40451d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.860 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.861 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapda40451d-40, col_values=(('external_ids', {'iface-id': 'cf50a7a8-d7ea-4ad9-aa80-73a2c01bc493'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.861 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.862 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 in datapath 27019660-0844-42c7-bd58-04b2d25d3924 unbound from our chassis
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.863 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 27019660-0844-42c7-bd58-04b2d25d3924, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.866 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d409476e-b62c-46cb-ae74-92271b8f5ffc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:19.867 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924 namespace which is not needed anymore
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.880 2 DEBUG nova.scheduler.client.report [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:51:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 227 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 5.3 MiB/s wr, 248 op/s
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.901 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.903 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.947 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.949 2 DEBUG nova.network.neutron [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:51:19 compute-0 NetworkManager[44960]: <info>  [1760172679.9563] manager: (tapccf53fcf-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.972 2 INFO nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:51:19 compute-0 NetworkManager[44960]: <info>  [1760172679.9734] manager: (tape7343dc3-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Oct 11 08:51:19 compute-0 nova_compute[260935]: 2025-10-11 08:51:19.988 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.009 2 INFO nova.virt.libvirt.driver [-] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Instance destroyed successfully.
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.013 2 DEBUG nova.objects.instance [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lazy-loading 'resources' on Instance uuid 9f842544-f85a-4c24-b273-8ae74177617e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.030 2 DEBUG nova.virt.libvirt.vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:17Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.031 2 DEBUG nova.network.os_vif_util [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "fc76a4bd-0e3d-426e-820f-690283cf8257", "address": "fa:16:3e:91:1c:e4", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.51", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc76a4bd-0e", "ovs_interfaceid": "fc76a4bd-0e3d-426e-820f-690283cf8257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.033 2 DEBUG nova.network.os_vif_util [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:1c:e4,bridge_name='br-int',has_traffic_filtering=True,id=fc76a4bd-0e3d-426e-820f-690283cf8257,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc76a4bd-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.034 2 DEBUG os_vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:1c:e4,bridge_name='br-int',has_traffic_filtering=True,id=fc76a4bd-0e3d-426e-820f-690283cf8257,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc76a4bd-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:51:20 compute-0 neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924[301648]: [NOTICE]   (301652) : haproxy version is 2.8.14-c23fe91
Oct 11 08:51:20 compute-0 neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924[301648]: [NOTICE]   (301652) : path to executable is /usr/sbin/haproxy
Oct 11 08:51:20 compute-0 neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924[301648]: [WARNING]  (301652) : Exiting Master process...
Oct 11 08:51:20 compute-0 neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924[301648]: [WARNING]  (301652) : Exiting Master process...
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.039 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc76a4bd-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:20 compute-0 neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924[301648]: [ALERT]    (301652) : Current worker (301654) exited with code 143 (Terminated)
Oct 11 08:51:20 compute-0 neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924[301648]: [WARNING]  (301652) : All workers exited. Exiting... (0)
Oct 11 08:51:20 compute-0 systemd[1]: libpod-0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650.scope: Deactivated successfully.
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.055 2 INFO os_vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:1c:e4,bridge_name='br-int',has_traffic_filtering=True,id=fc76a4bd-0e3d-426e-820f-690283cf8257,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc76a4bd-0e')
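The cycle above (convert the nova VIF dict to a VIFOpenVSwitch object, run a DelPortCommand against br-int, then log "Successfully unplugged") completes the first of three port detachments for instance 9f842544. When auditing a journal like this, the oslo.log prefix that nova_compute emits can be split apart with a regex; the sketch below is my own parsing aid (the group names are my labels, not anything defined by oslo.log), applied after the journald `host process[pid]:` prefix has been stripped:

```python
import re

# oslo.log default line shape as seen in this journal:
#   "<date time.ms> <pid> <LEVEL> <module> [<request context>] <message>"
OSLO_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) "
    r"(?P<pid>\d+) (?P<level>[A-Z]+) (?P<module>\S+) "
    r"\[(?P<ctx>[^\]]*)\] (?P<msg>.*)$"
)

# A literal line from this log (journald prefix already removed).
line = (
    "2025-10-11 08:51:20.055 2 INFO os_vif "
    "[None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 "
    "f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] "
    "Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:1c:e4,...)"
)
m = OSLO_RE.match(line)
```

Filtering on `m.group("module")` (e.g. keeping only `os_vif` and `ovsdbapp.backend.ovs_idl.transaction`) reduces a burst like this one to just the plug/unplug decisions and the OVSDB transactions they trigger.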
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.056 2 DEBUG nova.virt.libvirt.vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:17Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.056 2 DEBUG nova.network.os_vif_util [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "address": "fa:16:3e:82:ff:9f", "network": {"id": "27019660-0844-42c7-bd58-04b2d25d3924", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-190755633", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.42", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapccf53fcf-c7", "ovs_interfaceid": "ccf53fcf-c7bd-41f4-986b-d90fd701f3e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.057 2 DEBUG nova.network.os_vif_util [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:9f,bridge_name='br-int',has_traffic_filtering=True,id=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4,network=Network(27019660-0844-42c7-bd58-04b2d25d3924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf53fcf-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.057 2 DEBUG os_vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:9f,bridge_name='br-int',has_traffic_filtering=True,id=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4,network=Network(27019660-0844-42c7-bd58-04b2d25d3924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf53fcf-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:51:20 compute-0 podman[301939]: 2025-10-11 08:51:20.058109739 +0000 UTC m=+0.066092120 container died 0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.059 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccf53fcf-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.069 2 INFO os_vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:9f,bridge_name='br-int',has_traffic_filtering=True,id=ccf53fcf-c7bd-41f4-986b-d90fd701f3e4,network=Network(27019660-0844-42c7-bd58-04b2d25d3924),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapccf53fcf-c7')
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.070 2 DEBUG nova.virt.libvirt.vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:50:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-358627564',display_name='tempest-ServersTestMultiNic-server-358627564',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-358627564',id=32,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-ku19xg05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:17Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=9f842544-f85a-4c24-b273-8ae74177617e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.070 2 DEBUG nova.network.os_vif_util [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "e7343dc3-6cda-4dfb-8098-f021900f4584", "address": "fa:16:3e:25:95:fe", "network": {"id": "da40451d-49f4-4bd4-b0a3-55dc537c2426", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1795623970", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.254", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7343dc3-6c", "ovs_interfaceid": "e7343dc3-6cda-4dfb-8098-f021900f4584", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.071 2 DEBUG nova.network.os_vif_util [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:95:fe,bridge_name='br-int',has_traffic_filtering=True,id=e7343dc3-6cda-4dfb-8098-f021900f4584,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7343dc3-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.071 2 DEBUG os_vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:95:fe,bridge_name='br-int',has_traffic_filtering=True,id=e7343dc3-6cda-4dfb-8098-f021900f4584,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7343dc3-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.072 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7343dc3-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.076 2 INFO os_vif [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:95:fe,bridge_name='br-int',has_traffic_filtering=True,id=e7343dc3-6cda-4dfb-8098-f021900f4584,network=Network(da40451d-49f4-4bd4-b0a3-55dc537c2426),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7343dc3-6c')
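This line closes the third and last VIF unplug for the deleting instance; across the burst, ovsdbapp committed one DelPortCommand per tap device (tapfc76a4bd-0e, tapccf53fcf-c7, tape7343dc3-6c), all against br-int with if_exists=True. A small sketch for pulling those port names back out of a captured journal, using the literal DelPortCommand lines from this log (the regex is my own, matched against ovsdbapp's repr of the command):

```python
import re

# Matches ovsdbapp's printed command repr:
#   DelPortCommand(_result=None, port=<name>, bridge=<bridge>, if_exists=<bool>)
DELPORT_RE = re.compile(
    r"DelPortCommand\(_result=None, port=(?P<port>\S+?), "
    r"bridge=(?P<bridge>\S+?), if_exists=(?P<ifx>\w+)\)"
)

log = """\
Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc76a4bd-0e, bridge=br-int, if_exists=True) do_commit
Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccf53fcf-c7, bridge=br-int, if_exists=True) do_commit
Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7343dc3-6c, bridge=br-int, if_exists=True) do_commit
"""

# Collect every deleted port and the bridge it was removed from.
ports = [m.group("port") for m in DELPORT_RE.finditer(log)]
bridges = {m.group("bridge") for m in DELPORT_RE.finditer(log)}
```

Cross-checking the extracted port list against the later `network-vif-unplugged-<port-uuid>` events is a quick way to confirm Neutron acknowledged every detachment.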
Oct 11 08:51:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650-userdata-shm.mount: Deactivated successfully.
Oct 11 08:51:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c691cf546be102ecd2a636c399d183790889e1bda6debfc8222fabd7050af86c-merged.mount: Deactivated successfully.
Oct 11 08:51:20 compute-0 podman[301939]: 2025-10-11 08:51:20.09741425 +0000 UTC m=+0.105396621 container cleanup 0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.100 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.101 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.101 2 INFO nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Creating image(s)
Oct 11 08:51:20 compute-0 systemd[1]: libpod-conmon-0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650.scope: Deactivated successfully.
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.125 2 DEBUG nova.storage.rbd_utils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 3283d482-4ea1-400a-9a1b-486479801813_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.158 2 DEBUG nova.storage.rbd_utils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 3283d482-4ea1-400a-9a1b-486479801813_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:20 compute-0 podman[302015]: 2025-10-11 08:51:20.162062209 +0000 UTC m=+0.043128801 container remove 0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.169 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e41fda03-78ea-4336-8721-0026693e3015]: (4, ('Sat Oct 11 08:51:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924 (0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650)\n0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650\nSat Oct 11 08:51:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924 (0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650)\n0993cadefbc4243bfef6735aa729df33ea38f2f3700a7a90d1f3d13ef9163650\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.170 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ec2be8-e4d2-452e-8a47-31e087cbcf95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.171 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27019660-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:20 compute-0 kernel: tap27019660-00: left promiscuous mode
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.192 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[614e03be-27ad-4a71-a713-c02783e793a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.207 2 DEBUG nova.storage.rbd_utils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 3283d482-4ea1-400a-9a1b-486479801813_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.214 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "e581753e2032aab19679355af058b2466b4ef11c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.215 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "e581753e2032aab19679355af058b2466b4ef11c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.214 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a4464784-1e7e-4f77-98d2-0ea8ee434b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.215 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[62d57491-240c-4cc9-bc36-1ae06d15ac8f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.218 2 DEBUG nova.compute.manager [req-95a5cf01-6740-4b9e-b1b5-751c99d6741b req-eaeaf1a3-bd89-4b8c-a704-b73fa1745036 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-unplugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.218 2 DEBUG oslo_concurrency.lockutils [req-95a5cf01-6740-4b9e-b1b5-751c99d6741b req-eaeaf1a3-bd89-4b8c-a704-b73fa1745036 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.218 2 DEBUG oslo_concurrency.lockutils [req-95a5cf01-6740-4b9e-b1b5-751c99d6741b req-eaeaf1a3-bd89-4b8c-a704-b73fa1745036 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.218 2 DEBUG oslo_concurrency.lockutils [req-95a5cf01-6740-4b9e-b1b5-751c99d6741b req-eaeaf1a3-bd89-4b8c-a704-b73fa1745036 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.219 2 DEBUG nova.compute.manager [req-95a5cf01-6740-4b9e-b1b5-751c99d6741b req-eaeaf1a3-bd89-4b8c-a704-b73fa1745036 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No waiting events found dispatching network-vif-unplugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.219 2 DEBUG nova.compute.manager [req-95a5cf01-6740-4b9e-b1b5-751c99d6741b req-eaeaf1a3-bd89-4b8c-a704-b73fa1745036 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-unplugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.220 2 DEBUG nova.policy [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1bab12893b9d49aabcb5ca19c9b951de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8c7604961214c6d9d49657535d799a5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.236 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e7da8976-24f3-4bcb-b535-1e24666064d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451735, 'reachable_time': 44714, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302082, 'error': None, 'target': 'ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.241 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-27019660-0844-42c7-bd58-04b2d25d3924 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.241 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b74ef0-4881-49c7-bdf7-820327f6fdc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.242 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e7343dc3-6cda-4dfb-8098-f021900f4584 in datapath da40451d-49f4-4bd4-b0a3-55dc537c2426 unbound from our chassis
Oct 11 08:51:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d27019660\x2d0844\x2d42c7\x2dbd58\x2d04b2d25d3924.mount: Deactivated successfully.
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.243 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network da40451d-49f4-4bd4-b0a3-55dc537c2426, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.244 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[47a6a009-da62-4e33-8b2d-90123723bacc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.244 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426 namespace which is not needed anymore
Oct 11 08:51:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:51:20 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063958681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.340 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.358 2 DEBUG nova.storage.rbd_utils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] rbd image 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.362 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:20 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3882468972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:20 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3063958681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:20 compute-0 neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426[301549]: [NOTICE]   (301553) : haproxy version is 2.8.14-c23fe91
Oct 11 08:51:20 compute-0 neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426[301549]: [NOTICE]   (301553) : path to executable is /usr/sbin/haproxy
Oct 11 08:51:20 compute-0 neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426[301549]: [WARNING]  (301553) : Exiting Master process...
Oct 11 08:51:20 compute-0 neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426[301549]: [ALERT]    (301553) : Current worker (301562) exited with code 143 (Terminated)
Oct 11 08:51:20 compute-0 neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426[301549]: [WARNING]  (301553) : All workers exited. Exiting... (0)
Oct 11 08:51:20 compute-0 systemd[1]: libpod-f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1.scope: Deactivated successfully.
Oct 11 08:51:20 compute-0 podman[302105]: 2025-10-11 08:51:20.425350225 +0000 UTC m=+0.069115896 container died f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 08:51:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1-userdata-shm.mount: Deactivated successfully.
Oct 11 08:51:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f6a6ed8c063f0dc224c05dfdf416baf50adf8b36b436263058ef960624c7842-merged.mount: Deactivated successfully.
Oct 11 08:51:20 compute-0 podman[302105]: 2025-10-11 08:51:20.465227572 +0000 UTC m=+0.108993233 container cleanup f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:51:20 compute-0 systemd[1]: libpod-conmon-f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1.scope: Deactivated successfully.
Oct 11 08:51:20 compute-0 podman[302157]: 2025-10-11 08:51:20.533920715 +0000 UTC m=+0.042033530 container remove f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.541 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e2dba9-6e4d-4f03-92ec-a31813f4d593]: (4, ('Sat Oct 11 08:51:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426 (f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1)\nf8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1\nSat Oct 11 08:51:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426 (f8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1)\nf8f7c9bbddd2f77740d6cac8e986d87f20a8751a24dfffeb15fae153fbcb35f1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.543 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5047cc-e8e9-4792-8517-dde5b86b0558]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.543 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda40451d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:20 compute-0 kernel: tapda40451d-40: left promiscuous mode
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.565 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[85221e1a-074c-4d27-9fed-678bc0277023]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.594 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1f66d1ac-af39-461d-a859-98171f6dfab3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.595 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f2dfc9-0374-43ba-adba-113a8adeef2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.613 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a6822467-8b73-4530-966b-aeb1b3e6adcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451607, 'reachable_time': 17145, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302191, 'error': None, 'target': 'ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.615 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-da40451d-49f4-4bd4-b0a3-55dc537c2426 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:51:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:20.615 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f154bf-62e7-442e-be0d-da651e959e89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.625 2 INFO nova.virt.libvirt.driver [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Deleting instance files /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e_del
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.626 2 INFO nova.virt.libvirt.driver [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Deletion of /var/lib/nova/instances/9f842544-f85a-4c24-b273-8ae74177617e_del complete
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.704 2 DEBUG nova.virt.libvirt.imagebackend [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image locations are: [{'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/70bd8f23-d068-4d06-af15-566e76e92803/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/70bd8f23-d068-4d06-af15-566e76e92803/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.774 2 INFO nova.compute.manager [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Took 1.25 seconds to destroy the instance on the hypervisor.
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.775 2 DEBUG oslo.service.loopingcall [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.776 2 DEBUG nova.compute.manager [-] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.776 2 DEBUG nova.network.neutron [-] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.783 2 DEBUG nova.virt.libvirt.imagebackend [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Selected location: {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/70bd8f23-d068-4d06-af15-566e76e92803/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.784 2 DEBUG nova.storage.rbd_utils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] cloning images/70bd8f23-d068-4d06-af15-566e76e92803@snap to None/3283d482-4ea1-400a-9a1b-486479801813_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:51:20 compute-0 ovn_controller[152945]: 2025-10-11T08:51:20Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:8d:cb 10.100.0.4
Oct 11 08:51:20 compute-0 ovn_controller[152945]: 2025-10-11T08:51:20Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:8d:cb 10.100.0.4
Oct 11 08:51:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:51:20 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1083942975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.881 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.883 2 DEBUG nova.virt.libvirt.vif [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2081426915',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2081426915',id=34,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80ef0690d9e94d289f05d85941ef7154',ramdisk_id='',reservation_id='r-0lez6trd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1301456020',owner_user_name='tempest-AttachInterfacesV270Test-1301456020-project-member'},tags=TagLi
st,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:16Z,user_data=None,user_id='f961a579f0a74ab3a913fc3b21acea43',uuid=88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.884 2 DEBUG nova.network.os_vif_util [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converting VIF {"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.885 2 DEBUG nova.network.os_vif_util [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:3d:5c,bridge_name='br-int',has_traffic_filtering=True,id=ab7592f9-1746-47d1-a702-b7be704ccabb,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab7592f9-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.887 2 DEBUG nova.objects.instance [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lazy-loading 'pci_devices' on Instance uuid 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.907 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "e581753e2032aab19679355af058b2466b4ef11c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.963 2 DEBUG nova.network.neutron [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Successfully created port: 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.974 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:51:20 compute-0 nova_compute[260935]:   <uuid>88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a</uuid>
Oct 11 08:51:20 compute-0 nova_compute[260935]:   <name>instance-00000022</name>
Oct 11 08:51:20 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:51:20 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:51:20 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <nova:name>tempest-AttachInterfacesV270Test-server-2081426915</nova:name>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:51:19</nova:creationTime>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:51:20 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:51:20 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:51:20 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:51:20 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:51:20 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:51:20 compute-0 nova_compute[260935]:         <nova:user uuid="f961a579f0a74ab3a913fc3b21acea43">tempest-AttachInterfacesV270Test-1301456020-project-member</nova:user>
Oct 11 08:51:20 compute-0 nova_compute[260935]:         <nova:project uuid="80ef0690d9e94d289f05d85941ef7154">tempest-AttachInterfacesV270Test-1301456020</nova:project>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:51:20 compute-0 nova_compute[260935]:         <nova:port uuid="ab7592f9-1746-47d1-a702-b7be704ccabb">
Oct 11 08:51:20 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:51:20 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:51:20 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <system>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <entry name="serial">88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a</entry>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <entry name="uuid">88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a</entry>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     </system>
Oct 11 08:51:20 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:51:20 compute-0 nova_compute[260935]:   <os>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:   </os>
Oct 11 08:51:20 compute-0 nova_compute[260935]:   <features>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:   </features>
Oct 11 08:51:20 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:51:20 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:51:20 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk">
Oct 11 08:51:20 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       </source>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:51:20 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk.config">
Oct 11 08:51:20 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       </source>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:51:20 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:af:3d:5c"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <target dev="tapab7592f9-17"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a/console.log" append="off"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <video>
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     </video>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:51:20 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:51:20 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:51:20 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:51:20 compute-0 nova_compute[260935]: </domain>
Oct 11 08:51:20 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.975 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Preparing to wait for external event network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.975 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.976 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.976 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.977 2 DEBUG nova.virt.libvirt.vif [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2081426915',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2081426915',id=34,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80ef0690d9e94d289f05d85941ef7154',ramdisk_id='',reservation_id='r-0lez6trd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1301456020',owner_user_name='tempest-AttachInterfacesV270Test-1301456020-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:16Z,user_data=None,user_id='f961a579f0a74ab3a913fc3b21acea43',uuid=88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.978 2 DEBUG nova.network.os_vif_util [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converting VIF {"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.979 2 DEBUG nova.network.os_vif_util [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:3d:5c,bridge_name='br-int',has_traffic_filtering=True,id=ab7592f9-1746-47d1-a702-b7be704ccabb,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab7592f9-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.979 2 DEBUG os_vif [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:3d:5c,bridge_name='br-int',has_traffic_filtering=True,id=ab7592f9-1746-47d1-a702-b7be704ccabb,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab7592f9-17') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.980 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:20 compute-0 nova_compute[260935]: 2025-10-11 08:51:20.981 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.040 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab7592f9-17, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.041 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab7592f9-17, col_values=(('external_ids', {'iface-id': 'ab7592f9-1746-47d1-a702-b7be704ccabb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:3d:5c', 'vm-uuid': '88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:21 compute-0 NetworkManager[44960]: <info>  [1760172681.0477] manager: (tapab7592f9-17): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Oct 11 08:51:21 compute-0 systemd[1]: run-netns-ovnmeta\x2dda40451d\x2d49f4\x2d4bd4\x2db0a3\x2d55dc537c2426.mount: Deactivated successfully.
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.113 2 INFO os_vif [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:3d:5c,bridge_name='br-int',has_traffic_filtering=True,id=ab7592f9-1746-47d1-a702-b7be704ccabb,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab7592f9-17')
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.126 2 DEBUG nova.objects.instance [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 3283d482-4ea1-400a-9a1b-486479801813 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.167 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.168 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Ensure instance console log exists: /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.169 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.170 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.170 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.195 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.195 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.196 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] No VIF found with MAC fa:16:3e:af:3d:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.197 2 INFO nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Using config drive
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.229 2 DEBUG nova.storage.rbd_utils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] rbd image 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:21 compute-0 ceph-mon[74313]: pgmap v1386: 321 pgs: 321 active+clean; 227 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 5.3 MiB/s wr, 248 op/s
Oct 11 08:51:21 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1083942975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.497 2 DEBUG nova.network.neutron [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Updated VIF entry in instance network info cache for port ab7592f9-1746-47d1-a702-b7be704ccabb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.498 2 DEBUG nova.network.neutron [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Updating instance_info_cache with network_info: [{"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.516 2 DEBUG oslo_concurrency.lockutils [req-a8323f1e-ba80-40a6-8ccd-630d70908edf req-c350c201-b633-454b-8d06-9d36eb81f350 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.773 2 INFO nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Creating config drive at /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a/disk.config
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.784 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe3cai2ld execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.839 2 DEBUG nova.compute.manager [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-unplugged-fc76a4bd-0e3d-426e-820f-690283cf8257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.839 2 DEBUG oslo_concurrency.lockutils [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.840 2 DEBUG oslo_concurrency.lockutils [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.840 2 DEBUG oslo_concurrency.lockutils [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.841 2 DEBUG nova.compute.manager [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No waiting events found dispatching network-vif-unplugged-fc76a4bd-0e3d-426e-820f-690283cf8257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.841 2 DEBUG nova.compute.manager [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-unplugged-fc76a4bd-0e3d-426e-820f-690283cf8257 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.841 2 DEBUG nova.compute.manager [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.842 2 DEBUG oslo_concurrency.lockutils [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.842 2 DEBUG oslo_concurrency.lockutils [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.843 2 DEBUG oslo_concurrency.lockutils [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.843 2 DEBUG nova.compute.manager [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No waiting events found dispatching network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.843 2 WARNING nova.compute.manager [req-379a8014-74a2-439e-b95f-e854baf41d60 req-d420aa9f-7c59-4d33-aa2d-aee6dbcd145f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received unexpected event network-vif-plugged-fc76a4bd-0e3d-426e-820f-690283cf8257 for instance with vm_state active and task_state deleting.
Oct 11 08:51:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 227 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.8 MiB/s wr, 172 op/s
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.945 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe3cai2ld" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.985 2 DEBUG nova.storage.rbd_utils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] rbd image 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:21 compute-0 nova_compute[260935]: 2025-10-11 08:51:21.990 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a/disk.config 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.171 2 DEBUG nova.network.neutron [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Successfully updated port: 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.179 2 DEBUG oslo_concurrency.processutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a/disk.config 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.180 2 INFO nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Deleting local config drive /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a/disk.config because it was imported into RBD.
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.186 2 DEBUG nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.186 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.187 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.187 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.188 2 DEBUG nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No waiting events found dispatching network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.188 2 WARNING nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received unexpected event network-vif-plugged-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 for instance with vm_state active and task_state deleting.
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.189 2 DEBUG nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-unplugged-e7343dc3-6cda-4dfb-8098-f021900f4584 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.189 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.189 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.190 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.190 2 DEBUG nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No waiting events found dispatching network-vif-unplugged-e7343dc3-6cda-4dfb-8098-f021900f4584 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.191 2 DEBUG nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-unplugged-e7343dc3-6cda-4dfb-8098-f021900f4584 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.191 2 DEBUG nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.192 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "9f842544-f85a-4c24-b273-8ae74177617e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.192 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.192 2 DEBUG oslo_concurrency.lockutils [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.193 2 DEBUG nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] No waiting events found dispatching network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.193 2 WARNING nova.compute.manager [req-939fc24c-6341-4c2e-9f2c-0fbeb284b8c4 req-7ec2d948-0c61-45eb-9fdb-1bc026274648 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received unexpected event network-vif-plugged-e7343dc3-6cda-4dfb-8098-f021900f4584 for instance with vm_state active and task_state deleting.
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.195 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "refresh_cache-3283d482-4ea1-400a-9a1b-486479801813" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.195 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquired lock "refresh_cache-3283d482-4ea1-400a-9a1b-486479801813" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.195 2 DEBUG nova.network.neutron [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:51:22 compute-0 kernel: tapab7592f9-17: entered promiscuous mode
Oct 11 08:51:22 compute-0 systemd-udevd[301937]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:51:22 compute-0 NetworkManager[44960]: <info>  [1760172682.2626] manager: (tapab7592f9-17): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Oct 11 08:51:22 compute-0 ovn_controller[152945]: 2025-10-11T08:51:22Z|00251|binding|INFO|Claiming lport ab7592f9-1746-47d1-a702-b7be704ccabb for this chassis.
Oct 11 08:51:22 compute-0 ovn_controller[152945]: 2025-10-11T08:51:22Z|00252|binding|INFO|ab7592f9-1746-47d1-a702-b7be704ccabb: Claiming fa:16:3e:af:3d:5c 10.100.0.9
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:22 compute-0 NetworkManager[44960]: <info>  [1760172682.2727] device (tapab7592f9-17): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:51:22 compute-0 NetworkManager[44960]: <info>  [1760172682.2738] device (tapab7592f9-17): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.279 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:3d:5c 10.100.0.9'], port_security=['fa:16:3e:af:3d:5c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80ef0690d9e94d289f05d85941ef7154', 'neutron:revision_number': '2', 'neutron:security_group_ids': '29aef361-18af-45f6-a74a-33bc137a03ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f751cc78-ae4f-490f-8fb4-d5f1bb30fc5b, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ab7592f9-1746-47d1-a702-b7be704ccabb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.280 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ab7592f9-1746-47d1-a702-b7be704ccabb in datapath ce478624-f4e7-4bd7-81b2-172d9f364a89 bound to our chassis
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.282 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce478624-f4e7-4bd7-81b2-172d9f364a89
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.298 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3e5ed9-a08f-4f06-add2-ec78a1c1a3c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.299 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce478624-f1 in ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.303 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce478624-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.303 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cf822a37-d1bc-45aa-95e8-c8fdbfa1a070]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.305 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f0b8b16-c2c3-4707-8aee-c6c79ffbc15a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:22 compute-0 systemd-machined[215705]: New machine qemu-38-instance-00000022.
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.317 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[bd6110a5-4f27-45a2-9519-29585da8a6e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:22 compute-0 systemd[1]: Started Virtual Machine qemu-38-instance-00000022.
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.348 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2592b64d-2c6b-47a5-ba6e-d63e5d297ea2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.378 2 DEBUG nova.network.neutron [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:22 compute-0 ovn_controller[152945]: 2025-10-11T08:51:22Z|00253|binding|INFO|Setting lport ab7592f9-1746-47d1-a702-b7be704ccabb ovn-installed in OVS
Oct 11 08:51:22 compute-0 ovn_controller[152945]: 2025-10-11T08:51:22Z|00254|binding|INFO|Setting lport ab7592f9-1746-47d1-a702-b7be704ccabb up in Southbound
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.394 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[668ebeed-aba8-40aa-a9e5-c9bb113e6351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.403 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[afe951f6-b1c1-4588-8293-52e16c8f4ee8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:22 compute-0 NetworkManager[44960]: <info>  [1760172682.4054] manager: (tapce478624-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/125)
Oct 11 08:51:22 compute-0 ovn_controller[152945]: 2025-10-11T08:51:22Z|00255|binding|INFO|Releasing lport e5becf0d-48c0-404b-9cba-07077454d085 from this chassis (sb_readonly=0)
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.456 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4eda7e-98cf-4b19-b4b7-dacb5794f537]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.462 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7f43d17b-b8bb-45c3-84c3-d365cf136270]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:22 compute-0 NetworkManager[44960]: <info>  [1760172682.4981] device (tapce478624-f0): carrier: link connected
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.509 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c0675d18-e009-4d5c-8640-9c16dabe46fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.535 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1fb8e4-b24d-4168-98a3-35ed63599474]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce478624-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:5c:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452720, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302419, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.556 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[23ac0926-a0a8-4adc-ae36-97c065498c8c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:5c24'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452720, 'tstamp': 452720}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302420, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.582 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f9a38cce-ccbd-45ce-a49c-91afda5c538c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce478624-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:5c:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452720, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302421, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.635 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[316b0420-e46c-40ca-ba79-006ee4902357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.726 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2207d38c-a743-4c77-a9ab-a02bb0b073fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.728 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce478624-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.729 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.730 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce478624-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:22 compute-0 kernel: tapce478624-f0: entered promiscuous mode
Oct 11 08:51:22 compute-0 NetworkManager[44960]: <info>  [1760172682.7336] manager: (tapce478624-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.739 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce478624-f0, col_values=(('external_ids', {'iface-id': '875825ad-1b50-485c-91da-f53ce1ebd5e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:22 compute-0 ovn_controller[152945]: 2025-10-11T08:51:22Z|00256|binding|INFO|Releasing lport 875825ad-1b50-485c-91da-f53ce1ebd5e3 from this chassis (sb_readonly=0)
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.743 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce478624-f4e7-4bd7-81b2-172d9f364a89.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce478624-f4e7-4bd7-81b2-172d9f364a89.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.744 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3657bb5a-25a5-4a8e-ad51-4bbdc19237d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.745 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-ce478624-f4e7-4bd7-81b2-172d9f364a89
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/ce478624-f4e7-4bd7-81b2-172d9f364a89.pid.haproxy
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID ce478624-f4e7-4bd7-81b2-172d9f364a89
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:51:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:22.746 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'env', 'PROCESS_TAG=haproxy-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce478624-f4e7-4bd7-81b2-172d9f364a89.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:51:22 compute-0 nova_compute[260935]: 2025-10-11 08:51:22.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:23 compute-0 podman[302496]: 2025-10-11 08:51:23.122659945 +0000 UTC m=+0.056166959 container create b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 08:51:23 compute-0 systemd[1]: Started libpod-conmon-b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758.scope.
Oct 11 08:51:23 compute-0 podman[302496]: 2025-10-11 08:51:23.09171788 +0000 UTC m=+0.025224884 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:51:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.215 2 DEBUG nova.network.neutron [-] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0135a08a783b5b6f32ef7dcf7d88e116aafc097e2f6da2c1ab4657101d9a11a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:23 compute-0 podman[302496]: 2025-10-11 08:51:23.231254605 +0000 UTC m=+0.164761629 container init b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:51:23 compute-0 podman[302496]: 2025-10-11 08:51:23.23812995 +0000 UTC m=+0.171636934 container start b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.253 2 INFO nova.compute.manager [-] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Took 2.48 seconds to deallocate network for instance.
Oct 11 08:51:23 compute-0 neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89[302511]: [NOTICE]   (302515) : New worker (302517) forked
Oct 11 08:51:23 compute-0 neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89[302511]: [NOTICE]   (302515) : Loading success.
Oct 11 08:51:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:51:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Oct 11 08:51:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Oct 11 08:51:23 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.318 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.319 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.380 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172683.3802261, 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.380 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] VM Started (Lifecycle Event)
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.402 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.408 2 DEBUG oslo_concurrency.processutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.433 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172683.3834124, 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:23 compute-0 ceph-mon[74313]: pgmap v1387: 321 pgs: 321 active+clean; 227 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.8 MiB/s wr, 172 op/s
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.434 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] VM Paused (Lifecycle Event)
Oct 11 08:51:23 compute-0 ceph-mon[74313]: osdmap e184: 3 total, 3 up, 3 in
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.452 2 DEBUG nova.network.neutron [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Updating instance_info_cache with network_info: [{"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.491 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.496 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.690 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Releasing lock "refresh_cache-3283d482-4ea1-400a-9a1b-486479801813" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.691 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Instance network_info: |[{"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.695 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Start _get_guest_xml network_info=[{"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T08:51:09Z,direct_url=<?>,disk_format='raw',id=70bd8f23-d068-4d06-af15-566e76e92803,min_disk=1,min_ram=0,name='tempest-test-snap-1090653870',owner='f8c7604961214c6d9d49657535d799a5',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T08:51:14Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '70bd8f23-d068-4d06-af15-566e76e92803'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.702 2 WARNING nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.707 2 DEBUG nova.virt.libvirt.host [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.708 2 DEBUG nova.virt.libvirt.host [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.712 2 DEBUG nova.virt.libvirt.host [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.713 2 DEBUG nova.virt.libvirt.host [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.713 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.714 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T08:51:09Z,direct_url=<?>,disk_format='raw',id=70bd8f23-d068-4d06-af15-566e76e92803,min_disk=1,min_ram=0,name='tempest-test-snap-1090653870',owner='f8c7604961214c6d9d49657535d799a5',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T08:51:14Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.714 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.715 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.715 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.715 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.716 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.716 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.716 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.717 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.717 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.718 2 DEBUG nova.virt.hardware [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.723 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.771 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:51:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:51:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790349999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.816 2 DEBUG oslo_concurrency.processutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.824 2 DEBUG nova.compute.provider_tree [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:51:23 compute-0 nova_compute[260935]: 2025-10-11 08:51:23.858 2 DEBUG nova.scheduler.client.report [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:51:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 213 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 274 op/s
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.118 2 DEBUG nova.compute.manager [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-deleted-e7343dc3-6cda-4dfb-8098-f021900f4584 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.119 2 DEBUG nova.compute.manager [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Received event network-changed-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.124 2 DEBUG nova.compute.manager [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Refreshing instance network info cache due to event network-changed-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.124 2 DEBUG oslo_concurrency.lockutils [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-3283d482-4ea1-400a-9a1b-486479801813" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.125 2 DEBUG oslo_concurrency.lockutils [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-3283d482-4ea1-400a-9a1b-486479801813" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.125 2 DEBUG nova.network.neutron [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Refreshing network info cache for port 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:51:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:51:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/543535219' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.177 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.201 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.260 2 DEBUG nova.storage.rbd_utils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 3283d482-4ea1-400a-9a1b-486479801813_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.268 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.326 2 INFO nova.scheduler.client.report [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Deleted allocations for instance 9f842544-f85a-4c24-b273-8ae74177617e
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.337 2 DEBUG nova.compute.manager [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.338 2 DEBUG oslo_concurrency.lockutils [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.338 2 DEBUG oslo_concurrency.lockutils [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.339 2 DEBUG oslo_concurrency.lockutils [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.339 2 DEBUG nova.compute.manager [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Processing event network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.339 2 DEBUG nova.compute.manager [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.340 2 DEBUG oslo_concurrency.lockutils [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.340 2 DEBUG oslo_concurrency.lockutils [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.341 2 DEBUG oslo_concurrency.lockutils [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.341 2 DEBUG nova.compute.manager [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] No waiting events found dispatching network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.341 2 WARNING nova.compute.manager [req-7ab430ff-66b6-4ae8-9e39-008ad4871e7f req-10b1f041-c53d-4753-9f63-48b8bcda8b36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received unexpected event network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb for instance with vm_state building and task_state spawning.
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.344 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.353 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172684.3519356, 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.354 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] VM Resumed (Lifecycle Event)
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.357 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.370 2 INFO nova.virt.libvirt.driver [-] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Instance spawned successfully.
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.377 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:51:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2790349999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/543535219' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.440 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.459 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.472 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.472 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.473 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.474 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.475 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.475 2 DEBUG nova.virt.libvirt.driver [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.485 2 DEBUG oslo_concurrency.lockutils [None req-006ae92d-7d66-4cbb-95bd-85378e297c52 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9f842544-f85a-4c24-b273-8ae74177617e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.488 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.550 2 INFO nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Took 7.90 seconds to spawn the instance on the hypervisor.
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.551 2 DEBUG nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.609 2 INFO nova.compute.manager [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Took 8.89 seconds to build instance.
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.634 2 DEBUG oslo_concurrency.lockutils [None req-ed53888a-b1b8-4be4-802f-f342c7e9210d f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.984s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:51:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1555119279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.785 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.786 2 DEBUG nova.virt.libvirt.vif [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-849464772',display_name='tempest-ImagesTestJSON-server-849464772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-849464772',id=35,image_ref='70bd8f23-d068-4d06-af15-566e76e92803',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-lbwd8nss',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='5b50d851-e482-40a2-8b7d-d3eca87e15ab',image_min_disk='1',image_min_ram='0',image_owner_id='f8c7604961214c6d9d49657535d799a5',image_owner_project_name='tempest-ImagesTestJSON-694493184',image_owner_user_name='tempest-ImagesTestJSON-694493184-project-member',image_user_id='1bab12893b9d49aabcb5ca19c9b951de',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:20Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=3283d482-4ea1-400a-9a1b-486479801813,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:51:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.787 2 DEBUG nova.network.os_vif_util [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.788 2 DEBUG nova.network.os_vif_util [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:84:1d,bridge_name='br-int',has_traffic_filtering=True,id=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54c9c3-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.789 2 DEBUG nova.objects.instance [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3283d482-4ea1-400a-9a1b-486479801813 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:51:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:51:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:51:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.841 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:51:24 compute-0 nova_compute[260935]:   <uuid>3283d482-4ea1-400a-9a1b-486479801813</uuid>
Oct 11 08:51:24 compute-0 nova_compute[260935]:   <name>instance-00000023</name>
Oct 11 08:51:24 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:51:24 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:51:24 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <nova:name>tempest-ImagesTestJSON-server-849464772</nova:name>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:51:23</nova:creationTime>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:51:24 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:51:24 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:51:24 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:51:24 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:51:24 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:51:24 compute-0 nova_compute[260935]:         <nova:user uuid="1bab12893b9d49aabcb5ca19c9b951de">tempest-ImagesTestJSON-694493184-project-member</nova:user>
Oct 11 08:51:24 compute-0 nova_compute[260935]:         <nova:project uuid="f8c7604961214c6d9d49657535d799a5">tempest-ImagesTestJSON-694493184</nova:project>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="70bd8f23-d068-4d06-af15-566e76e92803"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:51:24 compute-0 nova_compute[260935]:         <nova:port uuid="2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37">
Oct 11 08:51:24 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:51:24 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:51:24 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <system>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <entry name="serial">3283d482-4ea1-400a-9a1b-486479801813</entry>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <entry name="uuid">3283d482-4ea1-400a-9a1b-486479801813</entry>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     </system>
Oct 11 08:51:24 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:51:24 compute-0 nova_compute[260935]:   <os>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:   </os>
Oct 11 08:51:24 compute-0 nova_compute[260935]:   <features>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:   </features>
Oct 11 08:51:24 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:51:24 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:51:24 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/3283d482-4ea1-400a-9a1b-486479801813_disk">
Oct 11 08:51:24 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       </source>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:51:24 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/3283d482-4ea1-400a-9a1b-486479801813_disk.config">
Oct 11 08:51:24 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       </source>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:51:24 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:2d:84:1d"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <target dev="tap2b54c9c3-c2"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813/console.log" append="off"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <video>
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     </video>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <input type="keyboard" bus="usb"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:51:24 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:51:24 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:51:24 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:51:24 compute-0 nova_compute[260935]: </domain>
Oct 11 08:51:24 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.842 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Preparing to wait for external event network-vif-plugged-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.842 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "3283d482-4ea1-400a-9a1b-486479801813-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.843 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.843 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.844 2 DEBUG nova.virt.libvirt.vif [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-849464772',display_name='tempest-ImagesTestJSON-server-849464772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-849464772',id=35,image_ref='70bd8f23-d068-4d06-af15-566e76e92803',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-lbwd8nss',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='5b50d851-e482-40a2-8b7d-d3eca87e15ab',image_min_disk='1',image_min_ram='0',image_owner_id='f8c7604961214c6d9d49657535d799a5',image_owner_project_name='tempest-ImagesTestJSON-694493184',image_owner_user_name='tempest-ImagesTestJSON-694493184-project-member',image_user_id='1bab12893b9d49aabcb5ca19c9b951de',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:20Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=3283d482-4ea1-400a-9a1b-486479801813,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.844 2 DEBUG nova.network.os_vif_util [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.845 2 DEBUG nova.network.os_vif_util [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:84:1d,bridge_name='br-int',has_traffic_filtering=True,id=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54c9c3-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.845 2 DEBUG os_vif [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:84:1d,bridge_name='br-int',has_traffic_filtering=True,id=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54c9c3-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.846 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.847 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.851 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b54c9c3-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.852 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b54c9c3-c2, col_values=(('external_ids', {'iface-id': '2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:84:1d', 'vm-uuid': '3283d482-4ea1-400a-9a1b-486479801813'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:24 compute-0 NetworkManager[44960]: <info>  [1760172684.8597] manager: (tap2b54c9c3-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.869 2 INFO os_vif [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:84:1d,bridge_name='br-int',has_traffic_filtering=True,id=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54c9c3-c2')
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.967 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.968 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.968 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No VIF found with MAC fa:16:3e:2d:84:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:51:24 compute-0 nova_compute[260935]: 2025-10-11 08:51:24.969 2 INFO nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Using config drive
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.002 2 DEBUG nova.storage.rbd_utils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 3283d482-4ea1-400a-9a1b-486479801813_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:25 compute-0 podman[302614]: 2025-10-11 08:51:25.026184566 +0000 UTC m=+0.099713000 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.334 2 DEBUG nova.network.neutron [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Updated VIF entry in instance network info cache for port 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.340 2 DEBUG nova.network.neutron [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Updating instance_info_cache with network_info: [{"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.361 2 DEBUG oslo_concurrency.lockutils [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-3283d482-4ea1-400a-9a1b-486479801813" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.361 2 DEBUG nova.compute.manager [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-deleted-fc76a4bd-0e3d-426e-820f-690283cf8257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.362 2 DEBUG nova.compute.manager [req-cffe39aa-f7dd-49b6-b4c3-19afa3fafc40 req-46d03a44-d61b-4242-8c99-291943394d7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Received event network-vif-deleted-ccf53fcf-c7bd-41f4-986b-d90fd701f3e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:25 compute-0 unix_chkpwd[302653]: password check failed for user (root)
Oct 11 08:51:25 compute-0 sshd-session[302568]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170  user=root
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.399 2 INFO nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Creating config drive at /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813/disk.config
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.409 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4aw16o0u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:25 compute-0 ceph-mon[74313]: pgmap v1389: 321 pgs: 321 active+clean; 213 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 274 op/s
Oct 11 08:51:25 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1555119279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.565 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4aw16o0u" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.600 2 DEBUG nova.storage.rbd_utils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 3283d482-4ea1-400a-9a1b-486479801813_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.608 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813/disk.config 3283d482-4ea1-400a-9a1b-486479801813_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.647 2 DEBUG oslo_concurrency.lockutils [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "interface-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.648 2 DEBUG oslo_concurrency.lockutils [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "interface-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.649 2 DEBUG nova.objects.instance [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lazy-loading 'flavor' on Instance uuid 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.681 2 DEBUG nova.objects.instance [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lazy-loading 'pci_requests' on Instance uuid 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.726 2 DEBUG nova.network.neutron [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.794 2 DEBUG oslo_concurrency.processutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813/disk.config 3283d482-4ea1-400a-9a1b-486479801813_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.795 2 INFO nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Deleting local config drive /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813/disk.config because it was imported into RBD.
Oct 11 08:51:25 compute-0 kernel: tap2b54c9c3-c2: entered promiscuous mode
Oct 11 08:51:25 compute-0 NetworkManager[44960]: <info>  [1760172685.8755] manager: (tap2b54c9c3-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/128)
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:25 compute-0 ovn_controller[152945]: 2025-10-11T08:51:25Z|00257|binding|INFO|Claiming lport 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 for this chassis.
Oct 11 08:51:25 compute-0 ovn_controller[152945]: 2025-10-11T08:51:25Z|00258|binding|INFO|2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37: Claiming fa:16:3e:2d:84:1d 10.100.0.11
Oct 11 08:51:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 213 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 274 op/s
Oct 11 08:51:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:25.897 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:84:1d 10.100.0.11'], port_security=['fa:16:3e:2d:84:1d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3283d482-4ea1-400a-9a1b-486479801813', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:25.898 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 in datapath 9bac3530-993f-420e-8692-0b14a331d756 bound to our chassis
Oct 11 08:51:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:25.900 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bac3530-993f-420e-8692-0b14a331d756
Oct 11 08:51:25 compute-0 ovn_controller[152945]: 2025-10-11T08:51:25Z|00259|binding|INFO|Setting lport 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 ovn-installed in OVS
Oct 11 08:51:25 compute-0 ovn_controller[152945]: 2025-10-11T08:51:25Z|00260|binding|INFO|Setting lport 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 up in Southbound
Oct 11 08:51:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:25.928 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[09e3cb9a-c82c-4b3f-998e-7a24a04b9ddb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:25 compute-0 nova_compute[260935]: 2025-10-11 08:51:25.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:25 compute-0 systemd-machined[215705]: New machine qemu-39-instance-00000023.
Oct 11 08:51:25 compute-0 systemd[1]: Started Virtual Machine qemu-39-instance-00000023.
Oct 11 08:51:26 compute-0 systemd-udevd[302710]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:51:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.022 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[912a40d6-6402-46ff-90b5-03526555ea7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.026 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8a86c3cc-f2bb-4f61-b23d-a8b63f282970]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:26 compute-0 NetworkManager[44960]: <info>  [1760172686.0426] device (tap2b54c9c3-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:51:26 compute-0 NetworkManager[44960]: <info>  [1760172686.0439] device (tap2b54c9c3-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:51:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.105 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[06942bb9-41de-42b7-8a52-cba8c171912b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.131 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c0a21d-221c-4919-b06b-af512d0057d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451180, 'reachable_time': 25473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302720, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.156 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[776600ac-1be1-4657-87de-6013cf8c726e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bac3530-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451197, 'tstamp': 451197}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302722, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bac3530-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451201, 'tstamp': 451201}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302722, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.158 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:26 compute-0 nova_compute[260935]: 2025-10-11 08:51:26.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:26 compute-0 nova_compute[260935]: 2025-10-11 08:51:26.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.163 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bac3530-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.164 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.165 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bac3530-90, col_values=(('external_ids', {'iface-id': 'e5becf0d-48c0-404b-9cba-07077454d085'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:26.166 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:26 compute-0 nova_compute[260935]: 2025-10-11 08:51:26.279 2 DEBUG nova.policy [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f961a579f0a74ab3a913fc3b21acea43', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '80ef0690d9e94d289f05d85941ef7154', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:51:27 compute-0 nova_compute[260935]: 2025-10-11 08:51:27.054 2 DEBUG nova.network.neutron [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Successfully created port: e758e6dc-cadd-4687-a634-519fb2ecace8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:51:27 compute-0 sshd-session[302568]: Failed password for root from 152.32.213.170 port 38652 ssh2
Oct 11 08:51:27 compute-0 nova_compute[260935]: 2025-10-11 08:51:27.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:27 compute-0 ceph-mon[74313]: pgmap v1390: 321 pgs: 321 active+clean; 213 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 274 op/s
Oct 11 08:51:27 compute-0 nova_compute[260935]: 2025-10-11 08:51:27.669 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172687.6690502, 3283d482-4ea1-400a-9a1b-486479801813 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:27 compute-0 nova_compute[260935]: 2025-10-11 08:51:27.670 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] VM Started (Lifecycle Event)
Oct 11 08:51:27 compute-0 sshd-session[302568]: Received disconnect from 152.32.213.170 port 38652:11: Bye Bye [preauth]
Oct 11 08:51:27 compute-0 sshd-session[302568]: Disconnected from authenticating user root 152.32.213.170 port 38652 [preauth]
Oct 11 08:51:27 compute-0 nova_compute[260935]: 2025-10-11 08:51:27.792 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:27 compute-0 nova_compute[260935]: 2025-10-11 08:51:27.798 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172687.6691945, 3283d482-4ea1-400a-9a1b-486479801813 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:27 compute-0 nova_compute[260935]: 2025-10-11 08:51:27.799 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] VM Paused (Lifecycle Event)
Oct 11 08:51:27 compute-0 nova_compute[260935]: 2025-10-11 08:51:27.818 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:27 compute-0 nova_compute[260935]: 2025-10-11 08:51:27.821 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:51:27 compute-0 nova_compute[260935]: 2025-10-11 08:51:27.849 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:51:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 321 active+clean; 214 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 323 op/s
Oct 11 08:51:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:51:28 compute-0 nova_compute[260935]: 2025-10-11 08:51:28.715 2 DEBUG nova.compute.manager [req-d3e60dd6-1fdd-4012-9977-11b06ff0cf76 req-e6f6d58b-28f8-45bb-8982-f61ce10923cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Received event network-vif-plugged-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:28 compute-0 nova_compute[260935]: 2025-10-11 08:51:28.715 2 DEBUG oslo_concurrency.lockutils [req-d3e60dd6-1fdd-4012-9977-11b06ff0cf76 req-e6f6d58b-28f8-45bb-8982-f61ce10923cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "3283d482-4ea1-400a-9a1b-486479801813-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:28 compute-0 nova_compute[260935]: 2025-10-11 08:51:28.716 2 DEBUG oslo_concurrency.lockutils [req-d3e60dd6-1fdd-4012-9977-11b06ff0cf76 req-e6f6d58b-28f8-45bb-8982-f61ce10923cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:28 compute-0 nova_compute[260935]: 2025-10-11 08:51:28.716 2 DEBUG oslo_concurrency.lockutils [req-d3e60dd6-1fdd-4012-9977-11b06ff0cf76 req-e6f6d58b-28f8-45bb-8982-f61ce10923cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:28 compute-0 nova_compute[260935]: 2025-10-11 08:51:28.716 2 DEBUG nova.compute.manager [req-d3e60dd6-1fdd-4012-9977-11b06ff0cf76 req-e6f6d58b-28f8-45bb-8982-f61ce10923cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Processing event network-vif-plugged-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:51:28 compute-0 nova_compute[260935]: 2025-10-11 08:51:28.717 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:51:28 compute-0 nova_compute[260935]: 2025-10-11 08:51:28.729 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172688.7294729, 3283d482-4ea1-400a-9a1b-486479801813 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:28 compute-0 nova_compute[260935]: 2025-10-11 08:51:28.730 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] VM Resumed (Lifecycle Event)
Oct 11 08:51:28 compute-0 nova_compute[260935]: 2025-10-11 08:51:28.731 2 DEBUG nova.virt.libvirt.driver [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:51:28 compute-0 nova_compute[260935]: 2025-10-11 08:51:28.736 2 INFO nova.virt.libvirt.driver [-] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Instance spawned successfully.
Oct 11 08:51:28 compute-0 nova_compute[260935]: 2025-10-11 08:51:28.736 2 INFO nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Took 8.64 seconds to spawn the instance on the hypervisor.
Oct 11 08:51:28 compute-0 nova_compute[260935]: 2025-10-11 08:51:28.736 2 DEBUG nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:28 compute-0 nova_compute[260935]: 2025-10-11 08:51:28.753 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:28 compute-0 nova_compute[260935]: 2025-10-11 08:51:28.757 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:51:28 compute-0 nova_compute[260935]: 2025-10-11 08:51:28.785 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:51:28 compute-0 nova_compute[260935]: 2025-10-11 08:51:28.832 2 INFO nova.compute.manager [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Took 9.79 seconds to build instance.
Oct 11 08:51:28 compute-0 nova_compute[260935]: 2025-10-11 08:51:28.983 2 DEBUG oslo_concurrency.lockutils [None req-9b407ac6-4d33-48b3-b730-b3666d8fc2bf 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:29 compute-0 ceph-mon[74313]: pgmap v1391: 321 pgs: 321 active+clean; 214 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 323 op/s
Oct 11 08:51:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 214 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 323 op/s
Oct 11 08:51:29 compute-0 nova_compute[260935]: 2025-10-11 08:51:29.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:30 compute-0 nova_compute[260935]: 2025-10-11 08:51:30.995 2 DEBUG nova.network.neutron [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Successfully updated port: e758e6dc-cadd-4687-a634-519fb2ecace8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.017 2 DEBUG oslo_concurrency.lockutils [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.018 2 DEBUG oslo_concurrency.lockutils [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquired lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.018 2 DEBUG nova.network.neutron [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.157 2 DEBUG nova.compute.manager [req-ec9a0fdd-29ed-4bd4-a11f-faa643d037f1 req-3759b8ab-4758-47b3-a481-5d370edd5f81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-changed-e758e6dc-cadd-4687-a634-519fb2ecace8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.157 2 DEBUG nova.compute.manager [req-ec9a0fdd-29ed-4bd4-a11f-faa643d037f1 req-3759b8ab-4758-47b3-a481-5d370edd5f81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Refreshing instance network info cache due to event network-changed-e758e6dc-cadd-4687-a634-519fb2ecace8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.157 2 DEBUG oslo_concurrency.lockutils [req-ec9a0fdd-29ed-4bd4-a11f-faa643d037f1 req-3759b8ab-4758-47b3-a481-5d370edd5f81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.217 2 WARNING nova.network.neutron [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] ce478624-f4e7-4bd7-81b2-172d9f364a89 already exists in list: networks containing: ['ce478624-f4e7-4bd7-81b2-172d9f364a89']. ignoring it
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.414 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "3283d482-4ea1-400a-9a1b-486479801813" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.414 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.416 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "3283d482-4ea1-400a-9a1b-486479801813-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.416 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.417 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.418 2 INFO nova.compute.manager [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Terminating instance
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.420 2 DEBUG nova.compute.manager [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:51:31 compute-0 kernel: tap2b54c9c3-c2 (unregistering): left promiscuous mode
Oct 11 08:51:31 compute-0 ceph-mon[74313]: pgmap v1392: 321 pgs: 321 active+clean; 214 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 323 op/s
Oct 11 08:51:31 compute-0 NetworkManager[44960]: <info>  [1760172691.4865] device (tap2b54c9c3-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:51:31 compute-0 ovn_controller[152945]: 2025-10-11T08:51:31Z|00261|binding|INFO|Releasing lport 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 from this chassis (sb_readonly=0)
Oct 11 08:51:31 compute-0 ovn_controller[152945]: 2025-10-11T08:51:31Z|00262|binding|INFO|Setting lport 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 down in Southbound
Oct 11 08:51:31 compute-0 ovn_controller[152945]: 2025-10-11T08:51:31Z|00263|binding|INFO|Removing iface tap2b54c9c3-c2 ovn-installed in OVS
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.503 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:84:1d 10.100.0.11'], port_security=['fa:16:3e:2d:84:1d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3283d482-4ea1-400a-9a1b-486479801813', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.504 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 in datapath 9bac3530-993f-420e-8692-0b14a331d756 unbound from our chassis
Oct 11 08:51:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.505 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bac3530-993f-420e-8692-0b14a331d756
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.531 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0c77da4c-0bd3-484e-af90-9e9932f7c575]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:31 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Deactivated successfully.
Oct 11 08:51:31 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Consumed 4.475s CPU time.
Oct 11 08:51:31 compute-0 systemd-machined[215705]: Machine qemu-39-instance-00000023 terminated.
Oct 11 08:51:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.571 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2f316f-ba0c-4f56-9283-ddce3ad59a36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.575 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3af2be35-79fc-4ade-b576-0293cde7a40c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:31 compute-0 podman[302765]: 2025-10-11 08:51:31.590602208 +0000 UTC m=+0.095546783 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:51:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.614 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7db42535-3fd4-4e93-ad0f-87ed1ee85b92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.644 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ea8b3e-710f-4a4e-81c6-c116d2982b21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451180, 'reachable_time': 25473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302793, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.661 2 INFO nova.virt.libvirt.driver [-] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Instance destroyed successfully.
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.662 2 DEBUG nova.objects.instance [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'resources' on Instance uuid 3283d482-4ea1-400a-9a1b-486479801813 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.665 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9badc75d-90c9-4439-bb36-2058ba226d0d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bac3530-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451197, 'tstamp': 451197}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302799, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bac3530-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451201, 'tstamp': 451201}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302799, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.666 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.676 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bac3530-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.676 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.676 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bac3530-90, col_values=(('external_ids', {'iface-id': 'e5becf0d-48c0-404b-9cba-07077454d085'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:31.677 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.680 2 DEBUG nova.virt.libvirt.vif [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-849464772',display_name='tempest-ImagesTestJSON-server-849464772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-849464772',id=35,image_ref='70bd8f23-d068-4d06-af15-566e76e92803',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-lbwd8nss',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_i
mage_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='5b50d851-e482-40a2-8b7d-d3eca87e15ab',image_min_disk='1',image_min_ram='0',image_owner_id='f8c7604961214c6d9d49657535d799a5',image_owner_project_name='tempest-ImagesTestJSON-694493184',image_owner_user_name='tempest-ImagesTestJSON-694493184-project-member',image_user_id='1bab12893b9d49aabcb5ca19c9b951de',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:28Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=3283d482-4ea1-400a-9a1b-486479801813,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.680 2 DEBUG nova.network.os_vif_util [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "address": "fa:16:3e:2d:84:1d", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b54c9c3-c2", "ovs_interfaceid": "2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.681 2 DEBUG nova.network.os_vif_util [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:84:1d,bridge_name='br-int',has_traffic_filtering=True,id=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54c9c3-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.681 2 DEBUG os_vif [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:84:1d,bridge_name='br-int',has_traffic_filtering=True,id=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54c9c3-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.683 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b54c9c3-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:31 compute-0 nova_compute[260935]: 2025-10-11 08:51:31.688 2 INFO os_vif [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:84:1d,bridge_name='br-int',has_traffic_filtering=True,id=2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b54c9c3-c2')
Oct 11 08:51:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 321 active+clean; 214 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 323 op/s
Oct 11 08:51:32 compute-0 nova_compute[260935]: 2025-10-11 08:51:32.054 2 INFO nova.virt.libvirt.driver [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Deleting instance files /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813_del
Oct 11 08:51:32 compute-0 nova_compute[260935]: 2025-10-11 08:51:32.055 2 INFO nova.virt.libvirt.driver [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Deletion of /var/lib/nova/instances/3283d482-4ea1-400a-9a1b-486479801813_del complete
Oct 11 08:51:32 compute-0 nova_compute[260935]: 2025-10-11 08:51:32.130 2 INFO nova.compute.manager [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Took 0.71 seconds to destroy the instance on the hypervisor.
Oct 11 08:51:32 compute-0 nova_compute[260935]: 2025-10-11 08:51:32.131 2 DEBUG oslo.service.loopingcall [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:51:32 compute-0 nova_compute[260935]: 2025-10-11 08:51:32.131 2 DEBUG nova.compute.manager [-] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:51:32 compute-0 nova_compute[260935]: 2025-10-11 08:51:32.131 2 DEBUG nova.network.neutron [-] [instance: 3283d482-4ea1-400a-9a1b-486479801813] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:51:32 compute-0 nova_compute[260935]: 2025-10-11 08:51:32.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:32 compute-0 ceph-mon[74313]: pgmap v1393: 321 pgs: 321 active+clean; 214 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 323 op/s
Oct 11 08:51:32 compute-0 nova_compute[260935]: 2025-10-11 08:51:32.935 2 DEBUG nova.network.neutron [-] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:32 compute-0 nova_compute[260935]: 2025-10-11 08:51:32.951 2 INFO nova.compute.manager [-] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Took 0.82 seconds to deallocate network for instance.
Oct 11 08:51:32 compute-0 nova_compute[260935]: 2025-10-11 08:51:32.995 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:32 compute-0 nova_compute[260935]: 2025-10-11 08:51:32.996 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.079 2 DEBUG oslo_concurrency.processutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:51:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:51:33 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/988541132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.567 2 DEBUG oslo_concurrency.processutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.578 2 DEBUG nova.compute.provider_tree [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.595 2 DEBUG nova.scheduler.client.report [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:51:33 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/988541132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.620 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.646 2 INFO nova.scheduler.client.report [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Deleted allocations for instance 3283d482-4ea1-400a-9a1b-486479801813
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.714 2 DEBUG nova.network.neutron [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Updating instance_info_cache with network_info: [{"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e758e6dc-cadd-4687-a634-519fb2ecace8", "address": "fa:16:3e:4b:01:c2", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape758e6dc-ca", "ovs_interfaceid": "e758e6dc-cadd-4687-a634-519fb2ecace8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.731 2 DEBUG oslo_concurrency.lockutils [None req-1afcfcc8-320c-4c1e-bd79-14e8718fe2df 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.758 2 DEBUG oslo_concurrency.lockutils [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Releasing lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.761 2 DEBUG oslo_concurrency.lockutils [req-ec9a0fdd-29ed-4bd4-a11f-faa643d037f1 req-3759b8ab-4758-47b3-a481-5d370edd5f81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.761 2 DEBUG nova.network.neutron [req-ec9a0fdd-29ed-4bd4-a11f-faa643d037f1 req-3759b8ab-4758-47b3-a481-5d370edd5f81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Refreshing network info cache for port e758e6dc-cadd-4687-a634-519fb2ecace8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.767 2 DEBUG nova.virt.libvirt.vif [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2081426915',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2081426915',id=34,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='80ef0690d9e94d289f05d85941ef7154',ramdisk_id='',reservation_id='r-0lez6trd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',ow
ner_project_name='tempest-AttachInterfacesV270Test-1301456020',owner_user_name='tempest-AttachInterfacesV270Test-1301456020-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:24Z,user_data=None,user_id='f961a579f0a74ab3a913fc3b21acea43',uuid=88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e758e6dc-cadd-4687-a634-519fb2ecace8", "address": "fa:16:3e:4b:01:c2", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape758e6dc-ca", "ovs_interfaceid": "e758e6dc-cadd-4687-a634-519fb2ecace8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.768 2 DEBUG nova.network.os_vif_util [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converting VIF {"id": "e758e6dc-cadd-4687-a634-519fb2ecace8", "address": "fa:16:3e:4b:01:c2", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape758e6dc-ca", "ovs_interfaceid": "e758e6dc-cadd-4687-a634-519fb2ecace8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.770 2 DEBUG nova.network.os_vif_util [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:01:c2,bridge_name='br-int',has_traffic_filtering=True,id=e758e6dc-cadd-4687-a634-519fb2ecace8,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape758e6dc-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.771 2 DEBUG os_vif [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:01:c2,bridge_name='br-int',has_traffic_filtering=True,id=e758e6dc-cadd-4687-a634-519fb2ecace8,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape758e6dc-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.774 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.775 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.779 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape758e6dc-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.780 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape758e6dc-ca, col_values=(('external_ids', {'iface-id': 'e758e6dc-cadd-4687-a634-519fb2ecace8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:01:c2', 'vm-uuid': '88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:33 compute-0 NetworkManager[44960]: <info>  [1760172693.7838] manager: (tape758e6dc-ca): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.793 2 INFO os_vif [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:01:c2,bridge_name='br-int',has_traffic_filtering=True,id=e758e6dc-cadd-4687-a634-519fb2ecace8,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape758e6dc-ca')
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.795 2 DEBUG nova.virt.libvirt.vif [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2081426915',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2081426915',id=34,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='80ef0690d9e94d289f05d85941ef7154',ramdisk_id='',reservation_id='r-0lez6trd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',ow
ner_project_name='tempest-AttachInterfacesV270Test-1301456020',owner_user_name='tempest-AttachInterfacesV270Test-1301456020-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:24Z,user_data=None,user_id='f961a579f0a74ab3a913fc3b21acea43',uuid=88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e758e6dc-cadd-4687-a634-519fb2ecace8", "address": "fa:16:3e:4b:01:c2", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape758e6dc-ca", "ovs_interfaceid": "e758e6dc-cadd-4687-a634-519fb2ecace8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.795 2 DEBUG nova.network.os_vif_util [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converting VIF {"id": "e758e6dc-cadd-4687-a634-519fb2ecace8", "address": "fa:16:3e:4b:01:c2", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape758e6dc-ca", "ovs_interfaceid": "e758e6dc-cadd-4687-a634-519fb2ecace8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.797 2 DEBUG nova.network.os_vif_util [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:01:c2,bridge_name='br-int',has_traffic_filtering=True,id=e758e6dc-cadd-4687-a634-519fb2ecace8,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape758e6dc-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.801 2 DEBUG nova.virt.libvirt.guest [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] attach device xml: <interface type="ethernet">
Oct 11 08:51:33 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:4b:01:c2"/>
Oct 11 08:51:33 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 08:51:33 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:51:33 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 08:51:33 compute-0 nova_compute[260935]:   <target dev="tape758e6dc-ca"/>
Oct 11 08:51:33 compute-0 nova_compute[260935]: </interface>
Oct 11 08:51:33 compute-0 nova_compute[260935]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 08:51:33 compute-0 systemd-udevd[302775]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:51:33 compute-0 kernel: tape758e6dc-ca: entered promiscuous mode
Oct 11 08:51:33 compute-0 NetworkManager[44960]: <info>  [1760172693.8255] manager: (tape758e6dc-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Oct 11 08:51:33 compute-0 ovn_controller[152945]: 2025-10-11T08:51:33Z|00264|binding|INFO|Claiming lport e758e6dc-cadd-4687-a634-519fb2ecace8 for this chassis.
Oct 11 08:51:33 compute-0 ovn_controller[152945]: 2025-10-11T08:51:33Z|00265|binding|INFO|e758e6dc-cadd-4687-a634-519fb2ecace8: Claiming fa:16:3e:4b:01:c2 10.100.0.4
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:33.835 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:01:c2 10.100.0.4'], port_security=['fa:16:3e:4b:01:c2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80ef0690d9e94d289f05d85941ef7154', 'neutron:revision_number': '2', 'neutron:security_group_ids': '29aef361-18af-45f6-a74a-33bc137a03ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f751cc78-ae4f-490f-8fb4-d5f1bb30fc5b, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e758e6dc-cadd-4687-a634-519fb2ecace8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:33.837 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e758e6dc-cadd-4687-a634-519fb2ecace8 in datapath ce478624-f4e7-4bd7-81b2-172d9f364a89 bound to our chassis
Oct 11 08:51:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:33.841 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce478624-f4e7-4bd7-81b2-172d9f364a89
Oct 11 08:51:33 compute-0 NetworkManager[44960]: <info>  [1760172693.8496] device (tape758e6dc-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:51:33 compute-0 NetworkManager[44960]: <info>  [1760172693.8520] device (tape758e6dc-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:51:33 compute-0 ovn_controller[152945]: 2025-10-11T08:51:33Z|00266|binding|INFO|Setting lport e758e6dc-cadd-4687-a634-519fb2ecace8 up in Southbound
Oct 11 08:51:33 compute-0 ovn_controller[152945]: 2025-10-11T08:51:33Z|00267|binding|INFO|Setting lport e758e6dc-cadd-4687-a634-519fb2ecace8 ovn-installed in OVS
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:33.867 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[27265177-e573-40a6-9505-175c6bbe26ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 213 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 29 KiB/s wr, 192 op/s
Oct 11 08:51:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:33.921 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fe0f49c7-fb25-4f3e-88ee-3cd7e59425ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:33.926 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d99b8cca-17d0-448e-83b6-2c3002def38a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.927 2 DEBUG nova.compute.manager [req-d8f71588-b4aa-4b87-b6e2-e6f8ffcfc804 req-737b15b8-58e1-4bbe-afa0-df82d38e27c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Received event network-vif-plugged-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.928 2 DEBUG oslo_concurrency.lockutils [req-d8f71588-b4aa-4b87-b6e2-e6f8ffcfc804 req-737b15b8-58e1-4bbe-afa0-df82d38e27c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "3283d482-4ea1-400a-9a1b-486479801813-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.928 2 DEBUG oslo_concurrency.lockutils [req-d8f71588-b4aa-4b87-b6e2-e6f8ffcfc804 req-737b15b8-58e1-4bbe-afa0-df82d38e27c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.929 2 DEBUG oslo_concurrency.lockutils [req-d8f71588-b4aa-4b87-b6e2-e6f8ffcfc804 req-737b15b8-58e1-4bbe-afa0-df82d38e27c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3283d482-4ea1-400a-9a1b-486479801813-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.930 2 DEBUG nova.compute.manager [req-d8f71588-b4aa-4b87-b6e2-e6f8ffcfc804 req-737b15b8-58e1-4bbe-afa0-df82d38e27c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] No waiting events found dispatching network-vif-plugged-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.930 2 WARNING nova.compute.manager [req-d8f71588-b4aa-4b87-b6e2-e6f8ffcfc804 req-737b15b8-58e1-4bbe-afa0-df82d38e27c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Received unexpected event network-vif-plugged-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 for instance with vm_state deleted and task_state None.
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.931 2 DEBUG nova.compute.manager [req-d8f71588-b4aa-4b87-b6e2-e6f8ffcfc804 req-737b15b8-58e1-4bbe-afa0-df82d38e27c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Received event network-vif-deleted-2b54c9c3-c2f2-466e-95a2-db8b1ebfdb37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.960 2 DEBUG nova.virt.libvirt.driver [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.961 2 DEBUG nova.virt.libvirt.driver [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.962 2 DEBUG nova.virt.libvirt.driver [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] No VIF found with MAC fa:16:3e:af:3d:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.962 2 DEBUG nova.virt.libvirt.driver [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] No VIF found with MAC fa:16:3e:4b:01:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:51:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:33.980 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d6abe7c6-7e40-40bd-9479-ec8536930f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:33 compute-0 nova_compute[260935]: 2025-10-11 08:51:33.989 2 DEBUG nova.virt.libvirt.guest [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:51:33 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:51:33 compute-0 nova_compute[260935]:   <nova:name>tempest-AttachInterfacesV270Test-server-2081426915</nova:name>
Oct 11 08:51:33 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 08:51:33</nova:creationTime>
Oct 11 08:51:33 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 08:51:33 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 08:51:33 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 08:51:33 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 08:51:33 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:51:33 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 08:51:33 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 08:51:33 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 08:51:33 compute-0 nova_compute[260935]:     <nova:user uuid="f961a579f0a74ab3a913fc3b21acea43">tempest-AttachInterfacesV270Test-1301456020-project-member</nova:user>
Oct 11 08:51:33 compute-0 nova_compute[260935]:     <nova:project uuid="80ef0690d9e94d289f05d85941ef7154">tempest-AttachInterfacesV270Test-1301456020</nova:project>
Oct 11 08:51:33 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 08:51:33 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:51:33 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 08:51:33 compute-0 nova_compute[260935]:     <nova:port uuid="ab7592f9-1746-47d1-a702-b7be704ccabb">
Oct 11 08:51:33 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:51:33 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:51:33 compute-0 nova_compute[260935]:     <nova:port uuid="e758e6dc-cadd-4687-a634-519fb2ecace8">
Oct 11 08:51:33 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 08:51:33 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 08:51:33 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 08:51:33 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 08:51:33 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 08:51:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:34.011 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[22c2a1e1-d5b3-4556-bd5e-0fb2a1a50857]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce478624-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:5c:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452720, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302862, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:34 compute-0 nova_compute[260935]: 2025-10-11 08:51:34.015 2 DEBUG oslo_concurrency.lockutils [None req-5217595d-a70c-467d-b9d4-b95bbe81b0eb f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "interface-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:34.039 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[67ef0e55-232a-406e-9b87-58700c36e9f8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce478624-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452738, 'tstamp': 452738}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302863, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce478624-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452742, 'tstamp': 452742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302863, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:34.042 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce478624-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:34 compute-0 nova_compute[260935]: 2025-10-11 08:51:34.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:34 compute-0 nova_compute[260935]: 2025-10-11 08:51:34.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:34.046 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce478624-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:34.047 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:34.047 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce478624-f0, col_values=(('external_ids', {'iface-id': '875825ad-1b50-485c-91da-f53ce1ebd5e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:34.047 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:34 compute-0 ceph-mon[74313]: pgmap v1394: 321 pgs: 321 active+clean; 213 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 29 KiB/s wr, 192 op/s
Oct 11 08:51:34 compute-0 nova_compute[260935]: 2025-10-11 08:51:34.987 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172679.985902, 9f842544-f85a-4c24-b273-8ae74177617e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:34 compute-0 nova_compute[260935]: 2025-10-11 08:51:34.988 2 INFO nova.compute.manager [-] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] VM Stopped (Lifecycle Event)
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.025 2 DEBUG nova.compute.manager [None req-300f184f-b1d8-4036-bc07-c6e5189a912d - - - - - -] [instance: 9f842544-f85a-4c24-b273-8ae74177617e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.165 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "872b1c1d-bc87-4123-a599-4d64b89018aa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.165 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.182 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.246 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.247 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.255 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.255 2 INFO nova.compute.claims [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.389 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Oct 11 08:51:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Oct 11 08:51:35 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Oct 11 08:51:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:51:35 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2978932494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.862 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.870 2 DEBUG nova.compute.provider_tree [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.875 2 DEBUG nova.network.neutron [req-ec9a0fdd-29ed-4bd4-a11f-faa643d037f1 req-3759b8ab-4758-47b3-a481-5d370edd5f81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Updated VIF entry in instance network info cache for port e758e6dc-cadd-4687-a634-519fb2ecace8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.876 2 DEBUG nova.network.neutron [req-ec9a0fdd-29ed-4bd4-a11f-faa643d037f1 req-3759b8ab-4758-47b3-a481-5d370edd5f81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Updating instance_info_cache with network_info: [{"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e758e6dc-cadd-4687-a634-519fb2ecace8", "address": "fa:16:3e:4b:01:c2", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape758e6dc-ca", "ovs_interfaceid": "e758e6dc-cadd-4687-a634-519fb2ecace8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 213 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 31 KiB/s wr, 204 op/s
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.900 2 DEBUG nova.scheduler.client.report [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.907 2 DEBUG oslo_concurrency.lockutils [req-ec9a0fdd-29ed-4bd4-a11f-faa643d037f1 req-3759b8ab-4758-47b3-a481-5d370edd5f81 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.923 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.925 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.965 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.966 2 DEBUG nova.network.neutron [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:51:35 compute-0 nova_compute[260935]: 2025-10-11 08:51:35.986 2 INFO nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:51:36 compute-0 nova_compute[260935]: 2025-10-11 08:51:36.009 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:51:36 compute-0 nova_compute[260935]: 2025-10-11 08:51:36.138 2 DEBUG nova.policy [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7f37cd0ba15f412b88192a506c5cec79', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b23ff73ca27245eeb1b46f51326a5568', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:51:36 compute-0 nova_compute[260935]: 2025-10-11 08:51:36.144 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:51:36 compute-0 nova_compute[260935]: 2025-10-11 08:51:36.147 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:51:36 compute-0 nova_compute[260935]: 2025-10-11 08:51:36.148 2 INFO nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Creating image(s)
Oct 11 08:51:36 compute-0 nova_compute[260935]: 2025-10-11 08:51:36.181 2 DEBUG nova.storage.rbd_utils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 872b1c1d-bc87-4123-a599-4d64b89018aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:36 compute-0 nova_compute[260935]: 2025-10-11 08:51:36.216 2 DEBUG nova.storage.rbd_utils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 872b1c1d-bc87-4123-a599-4d64b89018aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:36 compute-0 nova_compute[260935]: 2025-10-11 08:51:36.251 2 DEBUG nova.storage.rbd_utils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 872b1c1d-bc87-4123-a599-4d64b89018aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:36 compute-0 nova_compute[260935]: 2025-10-11 08:51:36.256 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:36 compute-0 nova_compute[260935]: 2025-10-11 08:51:36.353 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:36 compute-0 nova_compute[260935]: 2025-10-11 08:51:36.355 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:36 compute-0 nova_compute[260935]: 2025-10-11 08:51:36.356 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:36 compute-0 nova_compute[260935]: 2025-10-11 08:51:36.356 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:36 compute-0 nova_compute[260935]: 2025-10-11 08:51:36.390 2 DEBUG nova.storage.rbd_utils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 872b1c1d-bc87-4123-a599-4d64b89018aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:36 compute-0 nova_compute[260935]: 2025-10-11 08:51:36.395 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 872b1c1d-bc87-4123-a599-4d64b89018aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:36 compute-0 podman[302977]: 2025-10-11 08:51:36.804146708 +0000 UTC m=+0.101072049 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 11 08:51:36 compute-0 ovn_controller[152945]: 2025-10-11T08:51:36Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:01:c2 10.100.0.4
Oct 11 08:51:36 compute-0 ovn_controller[152945]: 2025-10-11T08:51:36Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:01:c2 10.100.0.4
Oct 11 08:51:36 compute-0 podman[302978]: 2025-10-11 08:51:36.87671241 +0000 UTC m=+0.169978458 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 11 08:51:37 compute-0 ovn_controller[152945]: 2025-10-11T08:51:37Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:af:3d:5c 10.100.0.9
Oct 11 08:51:37 compute-0 ovn_controller[152945]: 2025-10-11T08:51:37Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:3d:5c 10.100.0.9
Oct 11 08:51:37 compute-0 ceph-mon[74313]: osdmap e185: 3 total, 3 up, 3 in
Oct 11 08:51:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2978932494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:37 compute-0 ceph-mon[74313]: pgmap v1396: 321 pgs: 321 active+clean; 213 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 31 KiB/s wr, 204 op/s
Oct 11 08:51:37 compute-0 nova_compute[260935]: 2025-10-11 08:51:37.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:51:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/232623428' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:51:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:51:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/232623428' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:51:37 compute-0 nova_compute[260935]: 2025-10-11 08:51:37.597 2 DEBUG nova.network.neutron [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Successfully created port: 108be440-11bf-41f6-a628-86fbac597b7d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:51:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1397: 321 pgs: 321 active+clean; 239 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 185 op/s
Oct 11 08:51:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:51:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/232623428' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:51:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/232623428' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:51:38 compute-0 nova_compute[260935]: 2025-10-11 08:51:38.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.156 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 872b1c1d-bc87-4123-a599-4d64b89018aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.761s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.248 2 DEBUG nova.network.neutron [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Successfully updated port: 108be440-11bf-41f6-a628-86fbac597b7d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.260 2 DEBUG nova.storage.rbd_utils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] resizing rbd image 872b1c1d-bc87-4123-a599-4d64b89018aa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.318 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.318 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquired lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.318 2 DEBUG nova.network.neutron [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.419 2 DEBUG nova.objects.instance [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lazy-loading 'migration_context' on Instance uuid 872b1c1d-bc87-4123-a599-4d64b89018aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.443 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.444 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Ensure instance console log exists: /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.444 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.445 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.445 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:39 compute-0 ceph-mon[74313]: pgmap v1397: 321 pgs: 321 active+clean; 239 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 185 op/s
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.551 2 DEBUG nova.network.neutron [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.596 2 DEBUG nova.compute.manager [req-a74b530b-01ec-4e15-bf7e-de50c92c1aa1 req-4ca95200-7be0-44b6-b9da-65113a67b440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.597 2 DEBUG oslo_concurrency.lockutils [req-a74b530b-01ec-4e15-bf7e-de50c92c1aa1 req-4ca95200-7be0-44b6-b9da-65113a67b440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.597 2 DEBUG oslo_concurrency.lockutils [req-a74b530b-01ec-4e15-bf7e-de50c92c1aa1 req-4ca95200-7be0-44b6-b9da-65113a67b440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.598 2 DEBUG oslo_concurrency.lockutils [req-a74b530b-01ec-4e15-bf7e-de50c92c1aa1 req-4ca95200-7be0-44b6-b9da-65113a67b440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.598 2 DEBUG nova.compute.manager [req-a74b530b-01ec-4e15-bf7e-de50c92c1aa1 req-4ca95200-7be0-44b6-b9da-65113a67b440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] No waiting events found dispatching network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:39 compute-0 nova_compute[260935]: 2025-10-11 08:51:39.598 2 WARNING nova.compute.manager [req-a74b530b-01ec-4e15-bf7e-de50c92c1aa1 req-4ca95200-7be0-44b6-b9da-65113a67b440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received unexpected event network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 for instance with vm_state active and task_state None.
Oct 11 08:51:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 239 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 185 op/s
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.367 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.367 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.383 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.383 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.384 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.384 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.384 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.387 2 INFO nova.compute.manager [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Terminating instance
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.390 2 DEBUG nova.compute.manager [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.406 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:51:40 compute-0 kernel: tap965f1a38-41 (unregistering): left promiscuous mode
Oct 11 08:51:40 compute-0 NetworkManager[44960]: <info>  [1760172700.4555] device (tap965f1a38-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:51:40 compute-0 ovn_controller[152945]: 2025-10-11T08:51:40Z|00268|binding|INFO|Releasing lport 965f1a38-4159-41be-ac4a-f436a8ddeeab from this chassis (sb_readonly=0)
Oct 11 08:51:40 compute-0 ovn_controller[152945]: 2025-10-11T08:51:40Z|00269|binding|INFO|Setting lport 965f1a38-4159-41be-ac4a-f436a8ddeeab down in Southbound
Oct 11 08:51:40 compute-0 ovn_controller[152945]: 2025-10-11T08:51:40Z|00270|binding|INFO|Removing iface tap965f1a38-41 ovn-installed in OVS
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.503 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:8d:cb 10.100.0.4'], port_security=['fa:16:3e:fe:8d:cb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5b50d851-e482-40a2-8b7d-d3eca87e15ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=965f1a38-4159-41be-ac4a-f436a8ddeeab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.504 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 965f1a38-4159-41be-ac4a-f436a8ddeeab in datapath 9bac3530-993f-420e-8692-0b14a331d756 unbound from our chassis
Oct 11 08:51:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.507 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bac3530-993f-420e-8692-0b14a331d756, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:51:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.508 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d466436c-7117-4525-bf4f-d022d6e38f3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.509 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace which is not needed anymore
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.540 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.541 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:40 compute-0 ceph-mon[74313]: pgmap v1398: 321 pgs: 321 active+clean; 239 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 185 op/s
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.549 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.550 2 INFO nova.compute.claims [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:51:40 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000021.scope: Deactivated successfully.
Oct 11 08:51:40 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000021.scope: Consumed 13.409s CPU time.
Oct 11 08:51:40 compute-0 systemd-machined[215705]: Machine qemu-36-instance-00000021 terminated.
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.634 2 INFO nova.virt.libvirt.driver [-] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Instance destroyed successfully.
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.636 2 DEBUG nova.objects.instance [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'resources' on Instance uuid 5b50d851-e482-40a2-8b7d-d3eca87e15ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.652 2 DEBUG nova.virt.libvirt.vif [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1550870837',display_name='tempest-ImagesTestJSON-server-1550870837',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1550870837',id=33,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-jse0ea25',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:14Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=5b50d851-e482-40a2-8b7d-d3eca87e15ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.653 2 DEBUG nova.network.os_vif_util [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "address": "fa:16:3e:fe:8d:cb", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965f1a38-41", "ovs_interfaceid": "965f1a38-4159-41be-ac4a-f436a8ddeeab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.654 2 DEBUG nova.network.os_vif_util [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8d:cb,bridge_name='br-int',has_traffic_filtering=True,id=965f1a38-4159-41be-ac4a-f436a8ddeeab,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap965f1a38-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.654 2 DEBUG os_vif [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8d:cb,bridge_name='br-int',has_traffic_filtering=True,id=965f1a38-4159-41be-ac4a-f436a8ddeeab,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap965f1a38-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.657 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap965f1a38-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.667 2 INFO os_vif [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:8d:cb,bridge_name='br-int',has_traffic_filtering=True,id=965f1a38-4159-41be-ac4a-f436a8ddeeab,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap965f1a38-41')
Oct 11 08:51:40 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[301111]: [NOTICE]   (301116) : haproxy version is 2.8.14-c23fe91
Oct 11 08:51:40 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[301111]: [NOTICE]   (301116) : path to executable is /usr/sbin/haproxy
Oct 11 08:51:40 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[301111]: [WARNING]  (301116) : Exiting Master process...
Oct 11 08:51:40 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[301111]: [ALERT]    (301116) : Current worker (301118) exited with code 143 (Terminated)
Oct 11 08:51:40 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[301111]: [WARNING]  (301116) : All workers exited. Exiting... (0)
Oct 11 08:51:40 compute-0 systemd[1]: libpod-4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219.scope: Deactivated successfully.
Oct 11 08:51:40 compute-0 conmon[301111]: conmon 4c69755b2a5a85b11664 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219.scope/container/memory.events
Oct 11 08:51:40 compute-0 podman[303123]: 2025-10-11 08:51:40.689144866 +0000 UTC m=+0.056633232 container died 4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:51:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219-userdata-shm.mount: Deactivated successfully.
Oct 11 08:51:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-81784f68462ca80ca6e90ba3c15edeb0a8d5f87b882ef7fdf8a487f1955dcf4c-merged.mount: Deactivated successfully.
Oct 11 08:51:40 compute-0 podman[303123]: 2025-10-11 08:51:40.745783308 +0000 UTC m=+0.113271694 container cleanup 4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.747 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:40 compute-0 systemd[1]: libpod-conmon-4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219.scope: Deactivated successfully.
Oct 11 08:51:40 compute-0 podman[303178]: 2025-10-11 08:51:40.853499584 +0000 UTC m=+0.072952484 container remove 4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:51:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.865 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a3136498-436f-4387-acdc-b4b582612210]: (4, ('Sat Oct 11 08:51:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219)\n4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219\nSat Oct 11 08:51:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219)\n4c69755b2a5a85b116644ea586653252eee337e5bcfe7e1984d85c100f402219\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.869 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[830cf377-d9e1-4810-a9e9-4cda842fdd8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.870 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:40 compute-0 kernel: tap9bac3530-90: left promiscuous mode
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:40 compute-0 nova_compute[260935]: 2025-10-11 08:51:40.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.906 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1177f2-e235-4702-a3e4-f8ef2feffd2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.937 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[16b449ee-ab0e-4711-9b0e-21e0aaa4a31a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.939 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c050ee7a-077d-42df-8ca1-1ab1461a00bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.963 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[08ac3927-208b-4905-b56f-a3a9ba4208e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451171, 'reachable_time': 17467, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303214, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.967 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:51:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:40.967 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[323d3617-f96d-4027-b2c0-865ab8a4cc1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d9bac3530\x2d993f\x2d420e\x2d8692\x2d0b14a331d756.mount: Deactivated successfully.
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.130 2 INFO nova.virt.libvirt.driver [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Deleting instance files /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab_del
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.132 2 INFO nova.virt.libvirt.driver [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Deletion of /var/lib/nova/instances/5b50d851-e482-40a2-8b7d-d3eca87e15ab_del complete
Oct 11 08:51:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:51:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644219216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.211 2 INFO nova.compute.manager [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.212 2 DEBUG oslo.service.loopingcall [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.213 2 DEBUG nova.compute.manager [-] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.213 2 DEBUG nova.network.neutron [-] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.218 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.229 2 DEBUG nova.compute.provider_tree [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.246 2 DEBUG nova.scheduler.client.report [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.288 2 DEBUG nova.network.neutron [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updating instance_info_cache with network_info: [{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.383 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.384 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.389 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Releasing lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.390 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Instance network_info: |[{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.394 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Start _get_guest_xml network_info=[{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.401 2 WARNING nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.407 2 DEBUG nova.virt.libvirt.host [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.408 2 DEBUG nova.virt.libvirt.host [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.414 2 DEBUG nova.virt.libvirt.host [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.415 2 DEBUG nova.virt.libvirt.host [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.416 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.416 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.417 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.417 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.418 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.418 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.419 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.419 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.419 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.420 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.420 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.421 2 DEBUG nova.virt.hardware [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.425 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2644219216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.570 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.571 2 DEBUG nova.network.neutron [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.750 2 INFO nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.806 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:51:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:51:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1862886025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.888 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 239 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 185 op/s
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.920 2 DEBUG nova.storage.rbd_utils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 872b1c1d-bc87-4123-a599-4d64b89018aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:41 compute-0 nova_compute[260935]: 2025-10-11 08:51:41.931 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.070 2 DEBUG nova.policy [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb27f51b5ffd414ab5ddbea179ada690', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.224 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.227 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.227 2 INFO nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Creating image(s)
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.261 2 DEBUG nova.storage.rbd_utils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 205bee5e-165a-468c-87d3-db44e03ace3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.297 2 DEBUG nova.storage.rbd_utils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 205bee5e-165a-468c-87d3-db44e03ace3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.333 2 DEBUG nova.storage.rbd_utils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 205bee5e-165a-468c-87d3-db44e03ace3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.340 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:51:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/124764432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.426 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.427 2 DEBUG nova.virt.libvirt.vif [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-652156110',display_name='tempest-FloatingIPsAssociationTestJSON-server-652156110',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-652156110',id=36,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b23ff73ca27245eeb1b46f51326a5568',ramdisk_id='',reservation_id='r-dxyrerkf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1277815089',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1277815089-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:36Z,user_data=None,user_id='7f37cd0ba15f412b88192a506c5cec79',uuid=872b1c1d-bc87-4123-a599-4d64b89018aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.428 2 DEBUG nova.network.os_vif_util [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converting VIF {"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.429 2 DEBUG nova.network.os_vif_util [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:33:2e,bridge_name='br-int',has_traffic_filtering=True,id=108be440-11bf-41f6-a628-86fbac597b7d,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap108be440-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.431 2 DEBUG nova.objects.instance [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lazy-loading 'pci_devices' on Instance uuid 872b1c1d-bc87-4123-a599-4d64b89018aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.436 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.436 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.437 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.438 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.466 2 DEBUG nova.storage.rbd_utils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 205bee5e-165a-468c-87d3-db44e03ace3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.469 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 205bee5e-165a-468c-87d3-db44e03ace3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.516 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:51:42 compute-0 nova_compute[260935]:   <uuid>872b1c1d-bc87-4123-a599-4d64b89018aa</uuid>
Oct 11 08:51:42 compute-0 nova_compute[260935]:   <name>instance-00000024</name>
Oct 11 08:51:42 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:51:42 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:51:42 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-652156110</nova:name>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:51:41</nova:creationTime>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:51:42 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:51:42 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:51:42 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:51:42 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:51:42 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:51:42 compute-0 nova_compute[260935]:         <nova:user uuid="7f37cd0ba15f412b88192a506c5cec79">tempest-FloatingIPsAssociationTestJSON-1277815089-project-member</nova:user>
Oct 11 08:51:42 compute-0 nova_compute[260935]:         <nova:project uuid="b23ff73ca27245eeb1b46f51326a5568">tempest-FloatingIPsAssociationTestJSON-1277815089</nova:project>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:51:42 compute-0 nova_compute[260935]:         <nova:port uuid="108be440-11bf-41f6-a628-86fbac597b7d">
Oct 11 08:51:42 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:51:42 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:51:42 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <system>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <entry name="serial">872b1c1d-bc87-4123-a599-4d64b89018aa</entry>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <entry name="uuid">872b1c1d-bc87-4123-a599-4d64b89018aa</entry>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     </system>
Oct 11 08:51:42 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:51:42 compute-0 nova_compute[260935]:   <os>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:   </os>
Oct 11 08:51:42 compute-0 nova_compute[260935]:   <features>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:   </features>
Oct 11 08:51:42 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:51:42 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:51:42 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/872b1c1d-bc87-4123-a599-4d64b89018aa_disk">
Oct 11 08:51:42 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       </source>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:51:42 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/872b1c1d-bc87-4123-a599-4d64b89018aa_disk.config">
Oct 11 08:51:42 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       </source>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:51:42 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:c7:33:2e"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <target dev="tap108be440-11"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa/console.log" append="off"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <video>
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     </video>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:51:42 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:51:42 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:51:42 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:51:42 compute-0 nova_compute[260935]: </domain>
Oct 11 08:51:42 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.518 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Preparing to wait for external event network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.519 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.520 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.520 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.522 2 DEBUG nova.virt.libvirt.vif [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-652156110',display_name='tempest-FloatingIPsAssociationTestJSON-server-652156110',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-652156110',id=36,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b23ff73ca27245eeb1b46f51326a5568',ramdisk_id='',reservation_id='r-dxyrerkf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1277815089',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1277815089-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:36Z,user_data=None,user_id='7f37cd0ba15f412b88192a506c5cec79',uuid=872b1c1d-bc87-4123-a599-4d64b89018aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.522 2 DEBUG nova.network.os_vif_util [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converting VIF {"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.524 2 DEBUG nova.network.os_vif_util [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:33:2e,bridge_name='br-int',has_traffic_filtering=True,id=108be440-11bf-41f6-a628-86fbac597b7d,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap108be440-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.524 2 DEBUG os_vif [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:33:2e,bridge_name='br-int',has_traffic_filtering=True,id=108be440-11bf-41f6-a628-86fbac597b7d,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap108be440-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.527 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.528 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.532 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap108be440-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap108be440-11, col_values=(('external_ids', {'iface-id': '108be440-11bf-41f6-a628-86fbac597b7d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:33:2e', 'vm-uuid': '872b1c1d-bc87-4123-a599-4d64b89018aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:42 compute-0 NetworkManager[44960]: <info>  [1760172702.5366] manager: (tap108be440-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.546 2 INFO os_vif [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:33:2e,bridge_name='br-int',has_traffic_filtering=True,id=108be440-11bf-41f6-a628-86fbac597b7d,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap108be440-11')
Oct 11 08:51:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1862886025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:42 compute-0 ceph-mon[74313]: pgmap v1399: 321 pgs: 321 active+clean; 239 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 185 op/s
Oct 11 08:51:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/124764432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.677 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.678 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.678 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] No VIF found with MAC fa:16:3e:c7:33:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.679 2 INFO nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Using config drive
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.717 2 DEBUG nova.storage.rbd_utils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 872b1c1d-bc87-4123-a599-4d64b89018aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.829 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 205bee5e-165a-468c-87d3-db44e03ace3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.855 2 DEBUG nova.network.neutron [-] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:42 compute-0 nova_compute[260935]: 2025-10-11 08:51:42.893 2 DEBUG nova.storage.rbd_utils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] resizing rbd image 205bee5e-165a-468c-87d3-db44e03ace3e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.004 2 INFO nova.compute.manager [-] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Took 1.79 seconds to deallocate network for instance.
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.015 2 DEBUG nova.objects.instance [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lazy-loading 'migration_context' on Instance uuid 205bee5e-165a-468c-87d3-db44e03ace3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.068 2 DEBUG nova.network.neutron [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Successfully created port: 30f88ebc-fba6-476c-8953-ad64358029bb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.082 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.082 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Ensure instance console log exists: /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.083 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.083 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.084 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.126 2 INFO nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Creating config drive at /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa/disk.config
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.134 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkxkq7fgt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.210 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.213 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.214 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.214 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.215 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.219 2 INFO nova.compute.manager [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Terminating instance
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.221 2 DEBUG nova.compute.manager [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.236 2 DEBUG nova.compute.manager [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.236 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.237 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.237 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.237 2 DEBUG nova.compute.manager [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] No waiting events found dispatching network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.238 2 WARNING nova.compute.manager [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received unexpected event network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 for instance with vm_state active and task_state deleting.
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.238 2 DEBUG nova.compute.manager [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-changed-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.238 2 DEBUG nova.compute.manager [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing instance network info cache due to event network-changed-108be440-11bf-41f6-a628-86fbac597b7d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.239 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.239 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.240 2 DEBUG nova.network.neutron [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.244 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.245 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:51:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Oct 11 08:51:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Oct 11 08:51:43 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Oct 11 08:51:43 compute-0 kernel: tapab7592f9-17 (unregistering): left promiscuous mode
Oct 11 08:51:43 compute-0 NetworkManager[44960]: <info>  [1760172703.2897] device (tapab7592f9-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.295 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkxkq7fgt" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:43 compute-0 ovn_controller[152945]: 2025-10-11T08:51:43Z|00271|binding|INFO|Releasing lport ab7592f9-1746-47d1-a702-b7be704ccabb from this chassis (sb_readonly=0)
Oct 11 08:51:43 compute-0 ovn_controller[152945]: 2025-10-11T08:51:43Z|00272|binding|INFO|Setting lport ab7592f9-1746-47d1-a702-b7be704ccabb down in Southbound
Oct 11 08:51:43 compute-0 ovn_controller[152945]: 2025-10-11T08:51:43Z|00273|binding|INFO|Removing iface tapab7592f9-17 ovn-installed in OVS
Oct 11 08:51:43 compute-0 kernel: tape758e6dc-ca (unregistering): left promiscuous mode
Oct 11 08:51:43 compute-0 NetworkManager[44960]: <info>  [1760172703.3527] device (tape758e6dc-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.370 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:3d:5c 10.100.0.9'], port_security=['fa:16:3e:af:3d:5c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80ef0690d9e94d289f05d85941ef7154', 'neutron:revision_number': '4', 'neutron:security_group_ids': '29aef361-18af-45f6-a74a-33bc137a03ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f751cc78-ae4f-490f-8fb4-d5f1bb30fc5b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ab7592f9-1746-47d1-a702-b7be704ccabb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.373 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ab7592f9-1746-47d1-a702-b7be704ccabb in datapath ce478624-f4e7-4bd7-81b2-172d9f364a89 unbound from our chassis
Oct 11 08:51:43 compute-0 ovn_controller[152945]: 2025-10-11T08:51:43Z|00274|binding|INFO|Releasing lport e758e6dc-cadd-4687-a634-519fb2ecace8 from this chassis (sb_readonly=0)
Oct 11 08:51:43 compute-0 ovn_controller[152945]: 2025-10-11T08:51:43Z|00275|binding|INFO|Setting lport e758e6dc-cadd-4687-a634-519fb2ecace8 down in Southbound
Oct 11 08:51:43 compute-0 ovn_controller[152945]: 2025-10-11T08:51:43Z|00276|binding|INFO|Removing iface tape758e6dc-ca ovn-installed in OVS
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.380 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce478624-f4e7-4bd7-81b2-172d9f364a89
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.384 2 DEBUG nova.storage.rbd_utils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 872b1c1d-bc87-4123-a599-4d64b89018aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.391 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa/disk.config 872b1c1d-bc87-4123-a599-4d64b89018aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:43 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Deactivated successfully.
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.404 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b3c5c4-57de-4519-8aa4-791a336e02d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:43 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Consumed 12.881s CPU time.
Oct 11 08:51:43 compute-0 systemd-machined[215705]: Machine qemu-38-instance-00000022 terminated.
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:43 compute-0 NetworkManager[44960]: <info>  [1760172703.4458] manager: (tapab7592f9-17): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.449 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1dce8ba0-6f7e-4f0a-943a-e6b0a4c75cac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.455 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7c5838b0-2891-4298-892e-7533c6414e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:43 compute-0 NetworkManager[44960]: <info>  [1760172703.4657] manager: (tape758e6dc-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/133)
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.485 2 INFO nova.virt.libvirt.driver [-] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Instance destroyed successfully.
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.487 2 DEBUG nova.objects.instance [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lazy-loading 'resources' on Instance uuid 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.515 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7b59336c-6b1d-49a1-b0b7-120f447813bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.544 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[db2303a4-41c8-43d5-952d-724e62621a21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce478624-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:5c:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452720, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303546, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.563 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:01:c2 10.100.0.4'], port_security=['fa:16:3e:4b:01:c2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80ef0690d9e94d289f05d85941ef7154', 'neutron:revision_number': '4', 'neutron:security_group_ids': '29aef361-18af-45f6-a74a-33bc137a03ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f751cc78-ae4f-490f-8fb4-d5f1bb30fc5b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e758e6dc-cadd-4687-a634-519fb2ecace8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.565 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5056ae66-e0ad-4a57-a542-55902baca6d8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce478624-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452738, 'tstamp': 452738}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303550, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce478624-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452742, 'tstamp': 452742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303550, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.566 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce478624-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.583 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce478624-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.583 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.584 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce478624-f0, col_values=(('external_ids', {'iface-id': '875825ad-1b50-485c-91da-f53ce1ebd5e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.584 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.585 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e758e6dc-cadd-4687-a634-519fb2ecace8 in datapath ce478624-f4e7-4bd7-81b2-172d9f364a89 unbound from our chassis
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.587 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce478624-f4e7-4bd7-81b2-172d9f364a89, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.588 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a6bc1cb6-98c8-4610-b3f0-9d64c4e66971]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.588 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89 namespace which is not needed anymore
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.610 2 DEBUG oslo_concurrency.processutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa/disk.config 872b1c1d-bc87-4123-a599-4d64b89018aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.611 2 INFO nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Deleting local config drive /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa/disk.config because it was imported into RBD.
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.632 2 DEBUG nova.virt.libvirt.vif [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2081426915',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2081426915',id=34,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='80ef0690d9e94d289f05d85941ef7154',ramdisk_id='',reservation_id='r-0lez6trd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_na
me='tempest-AttachInterfacesV270Test-1301456020',owner_user_name='tempest-AttachInterfacesV270Test-1301456020-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:24Z,user_data=None,user_id='f961a579f0a74ab3a913fc3b21acea43',uuid=88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.633 2 DEBUG nova.network.os_vif_util [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converting VIF {"id": "ab7592f9-1746-47d1-a702-b7be704ccabb", "address": "fa:16:3e:af:3d:5c", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab7592f9-17", "ovs_interfaceid": "ab7592f9-1746-47d1-a702-b7be704ccabb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.634 2 DEBUG nova.network.os_vif_util [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:3d:5c,bridge_name='br-int',has_traffic_filtering=True,id=ab7592f9-1746-47d1-a702-b7be704ccabb,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab7592f9-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.634 2 DEBUG os_vif [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:3d:5c,bridge_name='br-int',has_traffic_filtering=True,id=ab7592f9-1746-47d1-a702-b7be704ccabb,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab7592f9-17') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.636 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab7592f9-17, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.655 2 INFO os_vif [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:3d:5c,bridge_name='br-int',has_traffic_filtering=True,id=ab7592f9-1746-47d1-a702-b7be704ccabb,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab7592f9-17')
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.656 2 DEBUG nova.virt.libvirt.vif [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2081426915',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2081426915',id=34,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='80ef0690d9e94d289f05d85941ef7154',ramdisk_id='',reservation_id='r-0lez6trd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_na
me='tempest-AttachInterfacesV270Test-1301456020',owner_user_name='tempest-AttachInterfacesV270Test-1301456020-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:24Z,user_data=None,user_id='f961a579f0a74ab3a913fc3b21acea43',uuid=88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e758e6dc-cadd-4687-a634-519fb2ecace8", "address": "fa:16:3e:4b:01:c2", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape758e6dc-ca", "ovs_interfaceid": "e758e6dc-cadd-4687-a634-519fb2ecace8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.657 2 DEBUG nova.network.os_vif_util [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converting VIF {"id": "e758e6dc-cadd-4687-a634-519fb2ecace8", "address": "fa:16:3e:4b:01:c2", "network": {"id": "ce478624-f4e7-4bd7-81b2-172d9f364a89", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1659766389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80ef0690d9e94d289f05d85941ef7154", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape758e6dc-ca", "ovs_interfaceid": "e758e6dc-cadd-4687-a634-519fb2ecace8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.658 2 DEBUG nova.network.os_vif_util [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:01:c2,bridge_name='br-int',has_traffic_filtering=True,id=e758e6dc-cadd-4687-a634-519fb2ecace8,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape758e6dc-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.658 2 DEBUG os_vif [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:01:c2,bridge_name='br-int',has_traffic_filtering=True,id=e758e6dc-cadd-4687-a634-519fb2ecace8,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape758e6dc-ca') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.660 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape758e6dc-ca, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.676 2 INFO os_vif [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:01:c2,bridge_name='br-int',has_traffic_filtering=True,id=e758e6dc-cadd-4687-a634-519fb2ecace8,network=Network(ce478624-f4e7-4bd7-81b2-172d9f364a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape758e6dc-ca')
Oct 11 08:51:43 compute-0 kernel: tap108be440-11: entered promiscuous mode
Oct 11 08:51:43 compute-0 NetworkManager[44960]: <info>  [1760172703.6974] manager: (tap108be440-11): new Tun device (/org/freedesktop/NetworkManager/Devices/134)
Oct 11 08:51:43 compute-0 ovn_controller[152945]: 2025-10-11T08:51:43Z|00277|binding|INFO|Claiming lport 108be440-11bf-41f6-a628-86fbac597b7d for this chassis.
Oct 11 08:51:43 compute-0 ovn_controller[152945]: 2025-10-11T08:51:43Z|00278|binding|INFO|108be440-11bf-41f6-a628-86fbac597b7d: Claiming fa:16:3e:c7:33:2e 10.100.0.6
Oct 11 08:51:43 compute-0 systemd-udevd[303523]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:51:43 compute-0 NetworkManager[44960]: <info>  [1760172703.7243] device (tap108be440-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:51:43 compute-0 NetworkManager[44960]: <info>  [1760172703.7256] device (tap108be440-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.733 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:33:2e 10.100.0.6'], port_security=['fa:16:3e:c7:33:2e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '872b1c1d-bc87-4123-a599-4d64b89018aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b23ff73ca27245eeb1b46f51326a5568', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c2e46c6-00fe-417f-b7cc-3c9acf40921b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4841cc19-0006-4d3d-bb24-b088854c8627, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=108be440-11bf-41f6-a628-86fbac597b7d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.737 2 DEBUG oslo_concurrency.processutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:43 compute-0 systemd-machined[215705]: New machine qemu-40-instance-00000024.
Oct 11 08:51:43 compute-0 systemd[1]: Started Virtual Machine qemu-40-instance-00000024.
Oct 11 08:51:43 compute-0 neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89[302511]: [NOTICE]   (302515) : haproxy version is 2.8.14-c23fe91
Oct 11 08:51:43 compute-0 neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89[302511]: [NOTICE]   (302515) : path to executable is /usr/sbin/haproxy
Oct 11 08:51:43 compute-0 neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89[302511]: [WARNING]  (302515) : Exiting Master process...
Oct 11 08:51:43 compute-0 neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89[302511]: [ALERT]    (302515) : Current worker (302517) exited with code 143 (Terminated)
Oct 11 08:51:43 compute-0 neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89[302511]: [WARNING]  (302515) : All workers exited. Exiting... (0)
Oct 11 08:51:43 compute-0 systemd[1]: libpod-b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758.scope: Deactivated successfully.
Oct 11 08:51:43 compute-0 podman[303591]: 2025-10-11 08:51:43.789374112 +0000 UTC m=+0.064618218 container died b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:43 compute-0 ovn_controller[152945]: 2025-10-11T08:51:43Z|00279|binding|INFO|Setting lport 108be440-11bf-41f6-a628-86fbac597b7d ovn-installed in OVS
Oct 11 08:51:43 compute-0 ovn_controller[152945]: 2025-10-11T08:51:43Z|00280|binding|INFO|Setting lport 108be440-11bf-41f6-a628-86fbac597b7d up in Southbound
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758-userdata-shm.mount: Deactivated successfully.
Oct 11 08:51:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0135a08a783b5b6f32ef7dcf7d88e116aafc097e2f6da2c1ab4657101d9a11a8-merged.mount: Deactivated successfully.
Oct 11 08:51:43 compute-0 podman[303591]: 2025-10-11 08:51:43.854294028 +0000 UTC m=+0.129538134 container cleanup b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 11 08:51:43 compute-0 systemd[1]: libpod-conmon-b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758.scope: Deactivated successfully.
Oct 11 08:51:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 185 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 653 KiB/s rd, 6.9 MiB/s wr, 213 op/s
Oct 11 08:51:43 compute-0 podman[303644]: 2025-10-11 08:51:43.945156567 +0000 UTC m=+0.055676305 container remove b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.953 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[956bfe20-2dd5-480a-91f5-3d7dd39b93bb]: (4, ('Sat Oct 11 08:51:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89 (b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758)\nb02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758\nSat Oct 11 08:51:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89 (b02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758)\nb02629e856b8dc326de2a6ee34cc94950403c804b4362549501605f9cd83d758\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.956 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd6c884-4641-4f47-b554-cab2c8e5a6c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.958 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce478624-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:43 compute-0 kernel: tapce478624-f0: left promiscuous mode
Oct 11 08:51:43 compute-0 nova_compute[260935]: 2025-10-11 08:51:43.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:43.987 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[19dece38-afd7-47d1-9a94-826ad4740d38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.012 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b802a071-6d74-4e9f-8ebe-43201ac9ad1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.014 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[136caae8-ce49-436e-b7f1-9bf45619f417]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.035 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[25a7357a-db45-4b03-ab5d-6dd8b9e6b766]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452708, 'reachable_time': 20166, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303678, 'error': None, 'target': 'ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 systemd[1]: run-netns-ovnmeta\x2dce478624\x2df4e7\x2d4bd7\x2d81b2\x2d172d9f364a89.mount: Deactivated successfully.
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.042 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce478624-f4e7-4bd7-81b2-172d9f364a89 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.042 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[60f8ab5a-5461-4d52-934f-043a8b72137a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.043 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 108be440-11bf-41f6-a628-86fbac597b7d in datapath 882d76f4-8cc7-44c7-ad90-277f4f92e044 unbound from our chassis
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.046 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 882d76f4-8cc7-44c7-ad90-277f4f92e044
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.066 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f6784c69-a9ec-4f0f-8485-ed539501d149]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.070 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap882d76f4-81 in ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.072 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap882d76f4-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.072 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[675f3747-76d9-4a4d-96a7-afd1f85627f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.074 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d924befd-d420-474a-9e9d-4f3be3b65f81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.095 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ce9cd0-0dbb-4197-bffe-292d7b442484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.130 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[09a4a4e0-a9a4-4616-a4fa-52f940aa1269]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.174 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b4fda481-ed10-486a-94be-e0b7e20d3ab7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 NetworkManager[44960]: <info>  [1760172704.1887] manager: (tap882d76f4-80): new Veth device (/org/freedesktop/NetworkManager/Devices/135)
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.186 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[494537c1-fb88-4a7e-b6ee-ab14960ee54f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 sudo[303683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:51:44 compute-0 sudo[303683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:51:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3102064606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:44 compute-0 sudo[303683]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.242 2 DEBUG nova.network.neutron [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Successfully created port: e389558a-ec7e-4610-aef7-2cfb76da6814 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.245 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f1fc0ccf-cc66-4ed4-8242-28921f8fc65d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.250 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[86892cf2-157f-4798-9454-2e34d9e00c12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.249 2 DEBUG oslo_concurrency.processutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.273 2 DEBUG nova.compute.provider_tree [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:51:44 compute-0 NetworkManager[44960]: <info>  [1760172704.2771] device (tap882d76f4-80): carrier: link connected
Oct 11 08:51:44 compute-0 ceph-mon[74313]: osdmap e186: 3 total, 3 up, 3 in
Oct 11 08:51:44 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3102064606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.287 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c7afa085-d203-4fba-8ed5-fca7b8c3fb77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.298 2 DEBUG nova.scheduler.client.report [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.311 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[abd9a429-113c-4ebd-b83d-6e76bedcd126]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap882d76f4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:0a:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454897, 'reachable_time': 19017, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303753, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.322 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "1cecd438-75a3-4140-ad35-7439630b1be2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.322 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.323 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:44 compute-0 sudo[303728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:51:44 compute-0 sudo[303728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:44 compute-0 sudo[303728]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.339 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b730624b-03f6-4940-a88e-6871ecc69f15]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec6:af7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454897, 'tstamp': 454897}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303754, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.344 2 INFO nova.virt.libvirt.driver [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Deleting instance files /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_del
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.345 2 INFO nova.virt.libvirt.driver [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Deletion of /var/lib/nova/instances/88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a_del complete
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.349 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.362 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9941343f-cdc0-4e9f-8aff-c5f9a1a71e3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap882d76f4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:0a:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454897, 'reachable_time': 19017, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303757, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.365 2 INFO nova.scheduler.client.report [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Deleted allocations for instance 5b50d851-e482-40a2-8b7d-d3eca87e15ab
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.412 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[48851c5b-4ab1-4628-9779-bc52bae58a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 sudo[303758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:51:44 compute-0 sudo[303758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:44 compute-0 sudo[303758]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.449 2 INFO nova.compute.manager [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Took 1.23 seconds to destroy the instance on the hypervisor.
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.450 2 DEBUG oslo.service.loopingcall [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.450 2 DEBUG nova.compute.manager [-] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.451 2 DEBUG nova.network.neutron [-] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.466 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.466 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.473 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.474 2 INFO nova.compute.claims [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.477 2 DEBUG oslo_concurrency.lockutils [None req-bda1840f-8760-4fce-a3ed-29b2272bc656 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.493 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[95e40e0c-e330-4a01-aff4-3819666ab166]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 sudo[303803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.494 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap882d76f4-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:44 compute-0 sudo[303803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.495 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.495 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap882d76f4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:44 compute-0 NetworkManager[44960]: <info>  [1760172704.5398] manager: (tap882d76f4-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:44 compute-0 kernel: tap882d76f4-80: entered promiscuous mode
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.549 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap882d76f4-80, col_values=(('external_ids', {'iface-id': 'abe57b96-aedd-418b-be8f-7ad9fb9218ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:44 compute-0 ovn_controller[152945]: 2025-10-11T08:51:44Z|00281|binding|INFO|Releasing lport abe57b96-aedd-418b-be8f-7ad9fb9218ac from this chassis (sb_readonly=0)
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.581 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/882d76f4-8cc7-44c7-ad90-277f4f92e044.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/882d76f4-8cc7-44c7-ad90-277f4f92e044.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.582 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9509c161-fcb0-4220-8eaa-49c9132bbab1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.584 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-882d76f4-8cc7-44c7-ad90-277f4f92e044
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/882d76f4-8cc7-44c7-ad90-277f4f92e044.pid.haproxy
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 882d76f4-8cc7-44c7-ad90-277f4f92e044
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:51:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:44.585 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'env', 'PROCESS_TAG=haproxy-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/882d76f4-8cc7-44c7-ad90-277f4f92e044.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.591 2 DEBUG nova.compute.manager [req-d4b0f78c-bbc6-4f01-875a-59ca0b3efefa req-087d2d57-a13c-49c3-95e4-43983f2e7184 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-unplugged-ab7592f9-1746-47d1-a702-b7be704ccabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.592 2 DEBUG oslo_concurrency.lockutils [req-d4b0f78c-bbc6-4f01-875a-59ca0b3efefa req-087d2d57-a13c-49c3-95e4-43983f2e7184 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.592 2 DEBUG oslo_concurrency.lockutils [req-d4b0f78c-bbc6-4f01-875a-59ca0b3efefa req-087d2d57-a13c-49c3-95e4-43983f2e7184 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.593 2 DEBUG oslo_concurrency.lockutils [req-d4b0f78c-bbc6-4f01-875a-59ca0b3efefa req-087d2d57-a13c-49c3-95e4-43983f2e7184 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.593 2 DEBUG nova.compute.manager [req-d4b0f78c-bbc6-4f01-875a-59ca0b3efefa req-087d2d57-a13c-49c3-95e4-43983f2e7184 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] No waiting events found dispatching network-vif-unplugged-ab7592f9-1746-47d1-a702-b7be704ccabb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.594 2 DEBUG nova.compute.manager [req-d4b0f78c-bbc6-4f01-875a-59ca0b3efefa req-087d2d57-a13c-49c3-95e4-43983f2e7184 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-unplugged-ab7592f9-1746-47d1-a702-b7be704ccabb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.654 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.868 2 DEBUG nova.network.neutron [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updated VIF entry in instance network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.872 2 DEBUG nova.network.neutron [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updating instance_info_cache with network_info: [{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.893 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.894 2 DEBUG nova.compute.manager [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received event network-vif-unplugged-965f1a38-4159-41be-ac4a-f436a8ddeeab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.895 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.896 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.897 2 DEBUG oslo_concurrency.lockutils [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.897 2 DEBUG nova.compute.manager [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] No waiting events found dispatching network-vif-unplugged-965f1a38-4159-41be-ac4a-f436a8ddeeab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:44 compute-0 nova_compute[260935]: 2025-10-11 08:51:44.898 2 DEBUG nova.compute.manager [req-944d9592-ae71-40e2-8a8f-504e64698827 req-cacd7c57-26ec-4fb7-a428-f79a4fe6c125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received event network-vif-unplugged-965f1a38-4159-41be-ac4a-f436a8ddeeab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:51:45 compute-0 podman[303917]: 2025-10-11 08:51:45.006999316 +0000 UTC m=+0.056516829 container create c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 08:51:45 compute-0 systemd[1]: Started libpod-conmon-c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7.scope.
Oct 11 08:51:45 compute-0 sudo[303803]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:45 compute-0 podman[303917]: 2025-10-11 08:51:44.974762594 +0000 UTC m=+0.024280137 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:51:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:51:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/398392368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.102 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.109 2 DEBUG nova.compute.provider_tree [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:51:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43007129255deff7aeaffb2ae4dcdedaaf076ce8807efca09774d5aee6cf146c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.118 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172705.1180646, 872b1c1d-bc87-4123-a599-4d64b89018aa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.118 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] VM Started (Lifecycle Event)
Oct 11 08:51:45 compute-0 podman[303917]: 2025-10-11 08:51:45.131807186 +0000 UTC m=+0.181324719 container init c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:51:45 compute-0 podman[303917]: 2025-10-11 08:51:45.136486538 +0000 UTC m=+0.186004051 container start c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 08:51:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:51:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:51:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:51:45 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:51:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:51:45 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:51:45 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev b0f93bc6-6f60-49d1-92f5-7d07a4149a47 does not exist
Oct 11 08:51:45 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 6377aee4-a59c-4cfd-a1d7-b87f2eb13adc does not exist
Oct 11 08:51:45 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev d5c9b4ef-1bc7-420a-ac48-c3bd69fde34e does not exist
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.157 2 DEBUG nova.scheduler.client.report [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:51:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:51:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.161 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:45 compute-0 neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044[303946]: [NOTICE]   (303952) : New worker (303954) forked
Oct 11 08:51:45 compute-0 neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044[303946]: [NOTICE]   (303952) : Loading success.
Oct 11 08:51:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:51:45 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.165 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172705.1184494, 872b1c1d-bc87-4123-a599-4d64b89018aa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.165 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] VM Paused (Lifecycle Event)
Oct 11 08:51:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:51:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.186 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.189 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.192 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.192 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:51:45 compute-0 sudo[303962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:51:45 compute-0 sudo[303962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:45 compute-0 sudo[303962]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.233 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.270 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.271 2 DEBUG nova.network.neutron [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:51:45 compute-0 sudo[303988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:51:45 compute-0 sudo[303988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:45 compute-0 sudo[303988]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.293 2 INFO nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:51:45 compute-0 ceph-mon[74313]: pgmap v1401: 321 pgs: 321 active+clean; 185 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 653 KiB/s rd, 6.9 MiB/s wr, 213 op/s
Oct 11 08:51:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/398392368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:45 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:51:45 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:51:45 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:51:45 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:51:45 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:51:45 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.315 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:51:45 compute-0 sudo[304013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:51:45 compute-0 sudo[304013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:45 compute-0 sudo[304013]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.396 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received event network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.397 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.397 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.397 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5b50d851-e482-40a2-8b7d-d3eca87e15ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.398 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] No waiting events found dispatching network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.398 2 WARNING nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received unexpected event network-vif-plugged-965f1a38-4159-41be-ac4a-f436a8ddeeab for instance with vm_state deleted and task_state None.
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.398 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Received event network-vif-deleted-965f1a38-4159-41be-ac4a-f436a8ddeeab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.399 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-unplugged-e758e6dc-cadd-4687-a634-519fb2ecace8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.399 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.399 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.399 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.400 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] No waiting events found dispatching network-vif-unplugged-e758e6dc-cadd-4687-a634-519fb2ecace8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.400 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-unplugged-e758e6dc-cadd-4687-a634-519fb2ecace8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.400 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.400 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.401 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.401 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.401 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] No waiting events found dispatching network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.401 2 WARNING nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received unexpected event network-vif-plugged-e758e6dc-cadd-4687-a634-519fb2ecace8 for instance with vm_state active and task_state deleting.
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.402 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.402 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.402 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.402 2 DEBUG oslo_concurrency.lockutils [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.403 2 DEBUG nova.compute.manager [req-0d04d990-0121-40ec-a760-80781ae89b6f req-08ba0558-db42-4080-bb5d-cdd90c7c0285 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Processing event network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.403 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.411 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.411 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172705.4113019, 872b1c1d-bc87-4123-a599-4d64b89018aa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.412 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] VM Resumed (Lifecycle Event)
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.415 2 INFO nova.virt.libvirt.driver [-] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Instance spawned successfully.
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.416 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.442 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.445 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.452 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.452 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.453 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.453 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.453 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.454 2 DEBUG nova.virt.libvirt.driver [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.457 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.457 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.458 2 INFO nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Creating image(s)
Oct 11 08:51:45 compute-0 sudo[304038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:51:45 compute-0 sudo[304038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.485 2 DEBUG nova.storage.rbd_utils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 1cecd438-75a3-4140-ad35-7439630b1be2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.520 2 DEBUG nova.storage.rbd_utils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 1cecd438-75a3-4140-ad35-7439630b1be2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.554 2 DEBUG nova.storage.rbd_utils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 1cecd438-75a3-4140-ad35-7439630b1be2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.558 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.611 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.617 2 INFO nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Took 9.47 seconds to spawn the instance on the hypervisor.
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.618 2 DEBUG nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.622 2 DEBUG nova.policy [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1bab12893b9d49aabcb5ca19c9b951de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8c7604961214c6d9d49657535d799a5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.656 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.658 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.659 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.659 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.695 2 DEBUG nova.storage.rbd_utils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 1cecd438-75a3-4140-ad35-7439630b1be2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.703 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1cecd438-75a3-4140-ad35-7439630b1be2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.751 2 INFO nova.compute.manager [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Took 10.53 seconds to build instance.
Oct 11 08:51:45 compute-0 nova_compute[260935]: 2025-10-11 08:51:45.769 2 DEBUG oslo_concurrency.lockutils [None req-dc6d8888-3dba-41b3-b7b9-b5da906bd38a 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:45 compute-0 podman[304187]: 2025-10-11 08:51:45.841205858 +0000 UTC m=+0.052591149 container create c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brattain, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:51:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1402: 321 pgs: 321 active+clean; 185 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 528 KiB/s rd, 5.5 MiB/s wr, 172 op/s
Oct 11 08:51:45 compute-0 systemd[1]: Started libpod-conmon-c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b.scope.
Oct 11 08:51:45 compute-0 podman[304187]: 2025-10-11 08:51:45.817437536 +0000 UTC m=+0.028822877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:51:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:51:45 compute-0 podman[304187]: 2025-10-11 08:51:45.99050381 +0000 UTC m=+0.201889161 container init c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brattain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 08:51:46 compute-0 podman[304187]: 2025-10-11 08:51:46.003039184 +0000 UTC m=+0.214424475 container start c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:51:46 compute-0 podman[304187]: 2025-10-11 08:51:46.007426068 +0000 UTC m=+0.218811389 container attach c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 08:51:46 compute-0 modest_brattain[304215]: 167 167
Oct 11 08:51:46 compute-0 systemd[1]: libpod-c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b.scope: Deactivated successfully.
Oct 11 08:51:46 compute-0 conmon[304215]: conmon c163337c28db586d0e79 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b.scope/container/memory.events
Oct 11 08:51:46 compute-0 podman[304187]: 2025-10-11 08:51:46.012890663 +0000 UTC m=+0.224275964 container died c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 08:51:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a66f90427bb1c7fb6fca6a262ff65ead13b0b25be553489eee9625c70b749e56-merged.mount: Deactivated successfully.
Oct 11 08:51:46 compute-0 podman[304187]: 2025-10-11 08:51:46.062521457 +0000 UTC m=+0.273906748 container remove c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brattain, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.066 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1cecd438-75a3-4140-ad35-7439630b1be2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:46 compute-0 systemd[1]: libpod-conmon-c163337c28db586d0e79f59d974bc7cd13091a496f6438f729e8e75b046c9a8b.scope: Deactivated successfully.
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.196 2 DEBUG nova.storage.rbd_utils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] resizing rbd image 1cecd438-75a3-4140-ad35-7439630b1be2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:51:46 compute-0 podman[304292]: 2025-10-11 08:51:46.339832059 +0000 UTC m=+0.084329496 container create f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.340 2 DEBUG nova.objects.instance [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 1cecd438-75a3-4140-ad35-7439630b1be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.346 2 DEBUG nova.network.neutron [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Successfully created port: 7290684c-3823-4e1e-84f5-826316aa4548 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.356 2 DEBUG nova.network.neutron [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Successfully updated port: 30f88ebc-fba6-476c-8953-ad64358029bb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.377 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.378 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Ensure instance console log exists: /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.378 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.379 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.379 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:46 compute-0 systemd[1]: Started libpod-conmon-f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca.scope.
Oct 11 08:51:46 compute-0 podman[304292]: 2025-10-11 08:51:46.305722664 +0000 UTC m=+0.050220161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:51:46 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefc3e90171c4916069a4f285fed17ad5869ccb9f0d0666bc18a4e2d9d2e337f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefc3e90171c4916069a4f285fed17ad5869ccb9f0d0666bc18a4e2d9d2e337f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefc3e90171c4916069a4f285fed17ad5869ccb9f0d0666bc18a4e2d9d2e337f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefc3e90171c4916069a4f285fed17ad5869ccb9f0d0666bc18a4e2d9d2e337f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cefc3e90171c4916069a4f285fed17ad5869ccb9f0d0666bc18a4e2d9d2e337f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:46 compute-0 podman[304292]: 2025-10-11 08:51:46.486296661 +0000 UTC m=+0.230794138 container init f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:51:46 compute-0 podman[304292]: 2025-10-11 08:51:46.497347874 +0000 UTC m=+0.241845291 container start f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 08:51:46 compute-0 podman[304292]: 2025-10-11 08:51:46.501514301 +0000 UTC m=+0.246011788 container attach f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.547 2 DEBUG nova.network.neutron [-] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.567 2 INFO nova.compute.manager [-] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Took 2.12 seconds to deallocate network for instance.
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.629 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.629 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.659 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172691.658396, 3283d482-4ea1-400a-9a1b-486479801813 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.659 2 INFO nova.compute.manager [-] [instance: 3283d482-4ea1-400a-9a1b-486479801813] VM Stopped (Lifecycle Event)
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.691 2 DEBUG nova.compute.manager [None req-7a2867e9-1006-401c-8ea7-ac3756b1cf7e - - - - - -] [instance: 3283d482-4ea1-400a-9a1b-486479801813] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:46 compute-0 nova_compute[260935]: 2025-10-11 08:51:46.751 2 DEBUG oslo_concurrency.processutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.006 2 DEBUG nova.compute.manager [req-deede00b-64c4-415b-bfb1-575d4b23fe84 req-cf95fa32-007b-47bd-811f-95fc11cc2cdc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.007 2 DEBUG oslo_concurrency.lockutils [req-deede00b-64c4-415b-bfb1-575d4b23fe84 req-cf95fa32-007b-47bd-811f-95fc11cc2cdc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.007 2 DEBUG oslo_concurrency.lockutils [req-deede00b-64c4-415b-bfb1-575d4b23fe84 req-cf95fa32-007b-47bd-811f-95fc11cc2cdc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.007 2 DEBUG oslo_concurrency.lockutils [req-deede00b-64c4-415b-bfb1-575d4b23fe84 req-cf95fa32-007b-47bd-811f-95fc11cc2cdc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.008 2 DEBUG nova.compute.manager [req-deede00b-64c4-415b-bfb1-575d4b23fe84 req-cf95fa32-007b-47bd-811f-95fc11cc2cdc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] No waiting events found dispatching network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.008 2 WARNING nova.compute.manager [req-deede00b-64c4-415b-bfb1-575d4b23fe84 req-cf95fa32-007b-47bd-811f-95fc11cc2cdc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received unexpected event network-vif-plugged-ab7592f9-1746-47d1-a702-b7be704ccabb for instance with vm_state deleted and task_state None.
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.143 2 DEBUG nova.network.neutron [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Successfully updated port: 7290684c-3823-4e1e-84f5-826316aa4548 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.158 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "refresh_cache-1cecd438-75a3-4140-ad35-7439630b1be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.158 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquired lock "refresh_cache-1cecd438-75a3-4140-ad35-7439630b1be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.159 2 DEBUG nova.network.neutron [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:51:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:51:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3661709572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.212 2 DEBUG oslo_concurrency.processutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.218 2 DEBUG nova.compute.provider_tree [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.236 2 DEBUG nova.scheduler.client.report [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.258 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.298 2 INFO nova.scheduler.client.report [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Deleted allocations for instance 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a
Oct 11 08:51:47 compute-0 ceph-mon[74313]: pgmap v1402: 321 pgs: 321 active+clean; 185 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 528 KiB/s rd, 5.5 MiB/s wr, 172 op/s
Oct 11 08:51:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3661709572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.359 2 DEBUG nova.network.neutron [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.381 2 DEBUG oslo_concurrency.lockutils [None req-f9ec43b7-ddbc-4089-91f7-721f9679d04c f961a579f0a74ab3a913fc3b21acea43 80ef0690d9e94d289f05d85941ef7154 - - default default] Lock "88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:47 compute-0 nova_compute[260935]: 2025-10-11 08:51:47.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:47 compute-0 charming_agnesi[304326]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:51:47 compute-0 charming_agnesi[304326]: --> relative data size: 1.0
Oct 11 08:51:47 compute-0 charming_agnesi[304326]: --> All data devices are unavailable
Oct 11 08:51:47 compute-0 systemd[1]: libpod-f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca.scope: Deactivated successfully.
Oct 11 08:51:47 compute-0 podman[304292]: 2025-10-11 08:51:47.675803801 +0000 UTC m=+1.420301238 container died f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 08:51:47 compute-0 systemd[1]: libpod-f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca.scope: Consumed 1.091s CPU time.
Oct 11 08:51:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-cefc3e90171c4916069a4f285fed17ad5869ccb9f0d0666bc18a4e2d9d2e337f-merged.mount: Deactivated successfully.
Oct 11 08:51:47 compute-0 podman[304292]: 2025-10-11 08:51:47.756378699 +0000 UTC m=+1.500876146 container remove f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 08:51:47 compute-0 systemd[1]: libpod-conmon-f2439ece2a45ce1fee32491dd5de8bfb9fa4a04bc899954e07eebf083a8d6dca.scope: Deactivated successfully.
Oct 11 08:51:47 compute-0 sudo[304038]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:47 compute-0 sudo[304390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:51:47 compute-0 sudo[304390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:47 compute-0 sudo[304390]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 180 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.5 MiB/s wr, 277 op/s
Oct 11 08:51:47 compute-0 sudo[304415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:51:47 compute-0 sudo[304415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:47 compute-0 sudo[304415]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:48 compute-0 sudo[304440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:51:48 compute-0 sudo[304440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:48 compute-0 sudo[304440]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:48 compute-0 sudo[304465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:51:48 compute-0 sudo[304465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.198 2 DEBUG nova.network.neutron [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Successfully updated port: e389558a-ec7e-4610-aef7-2cfb76da6814 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.214 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.215 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquired lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.215 2 DEBUG nova.network.neutron [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:51:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.387 2 DEBUG nova.network.neutron [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.442 2 DEBUG nova.network.neutron [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Updating instance_info_cache with network_info: [{"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.465 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Releasing lock "refresh_cache-1cecd438-75a3-4140-ad35-7439630b1be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.465 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Instance network_info: |[{"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.468 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Start _get_guest_xml network_info=[{"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.474 2 WARNING nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.480 2 DEBUG nova.virt.libvirt.host [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.481 2 DEBUG nova.virt.libvirt.host [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.484 2 DEBUG nova.virt.libvirt.host [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.485 2 DEBUG nova.virt.libvirt.host [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.486 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.486 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.487 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.487 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.488 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.488 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.489 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.489 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.490 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.490 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.490 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.491 2 DEBUG nova.virt.hardware [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.495 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:48 compute-0 podman[304533]: 2025-10-11 08:51:48.518956734 +0000 UTC m=+0.059945676 container create 53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.539 2 DEBUG nova.compute.manager [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.540 2 DEBUG oslo_concurrency.lockutils [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.541 2 DEBUG oslo_concurrency.lockutils [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.541 2 DEBUG oslo_concurrency.lockutils [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.542 2 DEBUG nova.compute.manager [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] No waiting events found dispatching network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.542 2 WARNING nova.compute.manager [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received unexpected event network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d for instance with vm_state active and task_state None.
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.542 2 DEBUG nova.compute.manager [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-deleted-ab7592f9-1746-47d1-a702-b7be704ccabb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.543 2 DEBUG nova.compute.manager [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-changed-30f88ebc-fba6-476c-8953-ad64358029bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.543 2 DEBUG nova.compute.manager [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Refreshing instance network info cache due to event network-changed-30f88ebc-fba6-476c-8953-ad64358029bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.543 2 DEBUG oslo_concurrency.lockutils [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:48 compute-0 systemd[1]: Started libpod-conmon-53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109.scope.
Oct 11 08:51:48 compute-0 podman[304533]: 2025-10-11 08:51:48.488067761 +0000 UTC m=+0.029056793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:51:48 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:51:48 compute-0 podman[304533]: 2025-10-11 08:51:48.612016866 +0000 UTC m=+0.153005908 container init 53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:51:48 compute-0 podman[304533]: 2025-10-11 08:51:48.62490642 +0000 UTC m=+0.165895362 container start 53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 08:51:48 compute-0 podman[304533]: 2025-10-11 08:51:48.628464921 +0000 UTC m=+0.169453903 container attach 53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:51:48 compute-0 gracious_murdock[304550]: 167 167
Oct 11 08:51:48 compute-0 systemd[1]: libpod-53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109.scope: Deactivated successfully.
Oct 11 08:51:48 compute-0 podman[304533]: 2025-10-11 08:51:48.635789458 +0000 UTC m=+0.176778430 container died 53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-837d24aa7ec2c3db428f68c6c4fa0bfe653a46d2efdda2e9aa9d2925ea6889ee-merged.mount: Deactivated successfully.
Oct 11 08:51:48 compute-0 podman[304533]: 2025-10-11 08:51:48.681683766 +0000 UTC m=+0.222672718 container remove 53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_murdock, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:51:48 compute-0 systemd[1]: libpod-conmon-53e45b88636b08d5f51df2a75166906a4d765912fdaea3d81b3d1b3b64233109.scope: Deactivated successfully.
Oct 11 08:51:48 compute-0 podman[304592]: 2025-10-11 08:51:48.934165966 +0000 UTC m=+0.075564958 container create 470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_robinson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:51:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:51:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2221305782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:48 compute-0 nova_compute[260935]: 2025-10-11 08:51:48.977 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:48 compute-0 podman[304592]: 2025-10-11 08:51:48.902205323 +0000 UTC m=+0.043604375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:51:48 compute-0 systemd[1]: Started libpod-conmon-470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8.scope.
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.023 2 DEBUG nova.storage.rbd_utils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 1cecd438-75a3-4140-ad35-7439630b1be2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.032 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:49 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380c25bd118040f4df26fb05122a29aa96b2e9165a8dc57b44c167c96bf1c784/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380c25bd118040f4df26fb05122a29aa96b2e9165a8dc57b44c167c96bf1c784/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380c25bd118040f4df26fb05122a29aa96b2e9165a8dc57b44c167c96bf1c784/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380c25bd118040f4df26fb05122a29aa96b2e9165a8dc57b44c167c96bf1c784/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:49 compute-0 podman[304592]: 2025-10-11 08:51:49.068170286 +0000 UTC m=+0.209569278 container init 470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:51:49 compute-0 podman[304592]: 2025-10-11 08:51:49.079299401 +0000 UTC m=+0.220698393 container start 470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_robinson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 08:51:49 compute-0 podman[304592]: 2025-10-11 08:51:49.082716158 +0000 UTC m=+0.224115150 container attach 470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_robinson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.297 2 DEBUG nova.compute.manager [req-e2ed1991-2d14-42af-8112-020ab611cbd6 req-7bc4f01a-a2b2-4c11-a4ba-4e1bb56b307c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received event network-changed-7290684c-3823-4e1e-84f5-826316aa4548 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.300 2 DEBUG nova.compute.manager [req-e2ed1991-2d14-42af-8112-020ab611cbd6 req-7bc4f01a-a2b2-4c11-a4ba-4e1bb56b307c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Refreshing instance network info cache due to event network-changed-7290684c-3823-4e1e-84f5-826316aa4548. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.300 2 DEBUG oslo_concurrency.lockutils [req-e2ed1991-2d14-42af-8112-020ab611cbd6 req-7bc4f01a-a2b2-4c11-a4ba-4e1bb56b307c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1cecd438-75a3-4140-ad35-7439630b1be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.300 2 DEBUG oslo_concurrency.lockutils [req-e2ed1991-2d14-42af-8112-020ab611cbd6 req-7bc4f01a-a2b2-4c11-a4ba-4e1bb56b307c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1cecd438-75a3-4140-ad35-7439630b1be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.301 2 DEBUG nova.network.neutron [req-e2ed1991-2d14-42af-8112-020ab611cbd6 req-7bc4f01a-a2b2-4c11-a4ba-4e1bb56b307c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Refreshing network info cache for port 7290684c-3823-4e1e-84f5-826316aa4548 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:51:49 compute-0 ceph-mon[74313]: pgmap v1403: 321 pgs: 321 active+clean; 180 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.5 MiB/s wr, 277 op/s
Oct 11 08:51:49 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2221305782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:51:49 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3005440805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.478 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.480 2 DEBUG nova.virt.libvirt.vif [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-45605011',display_name='tempest-ImagesTestJSON-server-45605011',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-45605011',id=38,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-bbdhfx2h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:45Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=1cecd438-75a3-4140-ad35-7439630b1be2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.480 2 DEBUG nova.network.os_vif_util [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.481 2 DEBUG nova.network.os_vif_util [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:41:62,bridge_name='br-int',has_traffic_filtering=True,id=7290684c-3823-4e1e-84f5-826316aa4548,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7290684c-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.482 2 DEBUG nova.objects.instance [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1cecd438-75a3-4140-ad35-7439630b1be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.551 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:51:49 compute-0 nova_compute[260935]:   <uuid>1cecd438-75a3-4140-ad35-7439630b1be2</uuid>
Oct 11 08:51:49 compute-0 nova_compute[260935]:   <name>instance-00000026</name>
Oct 11 08:51:49 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:51:49 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:51:49 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <nova:name>tempest-ImagesTestJSON-server-45605011</nova:name>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:51:48</nova:creationTime>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:51:49 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:51:49 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:51:49 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:51:49 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:51:49 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:51:49 compute-0 nova_compute[260935]:         <nova:user uuid="1bab12893b9d49aabcb5ca19c9b951de">tempest-ImagesTestJSON-694493184-project-member</nova:user>
Oct 11 08:51:49 compute-0 nova_compute[260935]:         <nova:project uuid="f8c7604961214c6d9d49657535d799a5">tempest-ImagesTestJSON-694493184</nova:project>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:51:49 compute-0 nova_compute[260935]:         <nova:port uuid="7290684c-3823-4e1e-84f5-826316aa4548">
Oct 11 08:51:49 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:51:49 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:51:49 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <system>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <entry name="serial">1cecd438-75a3-4140-ad35-7439630b1be2</entry>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <entry name="uuid">1cecd438-75a3-4140-ad35-7439630b1be2</entry>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     </system>
Oct 11 08:51:49 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:51:49 compute-0 nova_compute[260935]:   <os>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:   </os>
Oct 11 08:51:49 compute-0 nova_compute[260935]:   <features>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:   </features>
Oct 11 08:51:49 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:51:49 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:51:49 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/1cecd438-75a3-4140-ad35-7439630b1be2_disk">
Oct 11 08:51:49 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       </source>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:51:49 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/1cecd438-75a3-4140-ad35-7439630b1be2_disk.config">
Oct 11 08:51:49 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       </source>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:51:49 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:7f:41:62"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <target dev="tap7290684c-38"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2/console.log" append="off"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <video>
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     </video>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:51:49 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:51:49 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:51:49 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:51:49 compute-0 nova_compute[260935]: </domain>
Oct 11 08:51:49 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.563 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Preparing to wait for external event network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.563 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.564 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.564 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.565 2 DEBUG nova.virt.libvirt.vif [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-45605011',display_name='tempest-ImagesTestJSON-server-45605011',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-45605011',id=38,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-bbdhfx2h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:45Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=1cecd438-75a3-4140-ad35-7439630b1be2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.566 2 DEBUG nova.network.os_vif_util [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.568 2 DEBUG nova.network.os_vif_util [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:41:62,bridge_name='br-int',has_traffic_filtering=True,id=7290684c-3823-4e1e-84f5-826316aa4548,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7290684c-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.569 2 DEBUG os_vif [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:41:62,bridge_name='br-int',has_traffic_filtering=True,id=7290684c-3823-4e1e-84f5-826316aa4548,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7290684c-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.571 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.572 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7290684c-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.578 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7290684c-38, col_values=(('external_ids', {'iface-id': '7290684c-3823-4e1e-84f5-826316aa4548', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:41:62', 'vm-uuid': '1cecd438-75a3-4140-ad35-7439630b1be2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:49 compute-0 NetworkManager[44960]: <info>  [1760172709.5820] manager: (tap7290684c-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.591 2 INFO os_vif [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:41:62,bridge_name='br-int',has_traffic_filtering=True,id=7290684c-3823-4e1e-84f5-826316aa4548,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7290684c-38')
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.689 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.690 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.691 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No VIF found with MAC fa:16:3e:7f:41:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.692 2 INFO nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Using config drive
Oct 11 08:51:49 compute-0 nova_compute[260935]: 2025-10-11 08:51:49.730 2 DEBUG nova.storage.rbd_utils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 1cecd438-75a3-4140-ad35-7439630b1be2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:49 compute-0 epic_robinson[304624]: {
Oct 11 08:51:49 compute-0 epic_robinson[304624]:     "0": [
Oct 11 08:51:49 compute-0 epic_robinson[304624]:         {
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "devices": [
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "/dev/loop3"
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             ],
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "lv_name": "ceph_lv0",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "lv_size": "21470642176",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "name": "ceph_lv0",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "tags": {
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.cluster_name": "ceph",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.crush_device_class": "",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.encrypted": "0",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.osd_id": "0",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.type": "block",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.vdo": "0"
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             },
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "type": "block",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "vg_name": "ceph_vg0"
Oct 11 08:51:49 compute-0 epic_robinson[304624]:         }
Oct 11 08:51:49 compute-0 epic_robinson[304624]:     ],
Oct 11 08:51:49 compute-0 epic_robinson[304624]:     "1": [
Oct 11 08:51:49 compute-0 epic_robinson[304624]:         {
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "devices": [
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "/dev/loop4"
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             ],
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "lv_name": "ceph_lv1",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "lv_size": "21470642176",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "name": "ceph_lv1",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "tags": {
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.cluster_name": "ceph",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.crush_device_class": "",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.encrypted": "0",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.osd_id": "1",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.type": "block",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.vdo": "0"
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             },
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "type": "block",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "vg_name": "ceph_vg1"
Oct 11 08:51:49 compute-0 epic_robinson[304624]:         }
Oct 11 08:51:49 compute-0 epic_robinson[304624]:     ],
Oct 11 08:51:49 compute-0 epic_robinson[304624]:     "2": [
Oct 11 08:51:49 compute-0 epic_robinson[304624]:         {
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "devices": [
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "/dev/loop5"
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             ],
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "lv_name": "ceph_lv2",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "lv_size": "21470642176",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "name": "ceph_lv2",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "tags": {
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.cluster_name": "ceph",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.crush_device_class": "",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.encrypted": "0",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.osd_id": "2",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.type": "block",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:                 "ceph.vdo": "0"
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             },
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "type": "block",
Oct 11 08:51:49 compute-0 epic_robinson[304624]:             "vg_name": "ceph_vg2"
Oct 11 08:51:49 compute-0 epic_robinson[304624]:         }
Oct 11 08:51:49 compute-0 epic_robinson[304624]:     ]
Oct 11 08:51:49 compute-0 epic_robinson[304624]: }
Oct 11 08:51:49 compute-0 systemd[1]: libpod-470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8.scope: Deactivated successfully.
Oct 11 08:51:49 compute-0 podman[304592]: 2025-10-11 08:51:49.894709091 +0000 UTC m=+1.036108083 container died 470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_robinson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:51:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 180 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.5 MiB/s wr, 277 op/s
Oct 11 08:51:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-380c25bd118040f4df26fb05122a29aa96b2e9165a8dc57b44c167c96bf1c784-merged.mount: Deactivated successfully.
Oct 11 08:51:49 compute-0 podman[304592]: 2025-10-11 08:51:49.98307986 +0000 UTC m=+1.124478852 container remove 470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_robinson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 08:51:49 compute-0 systemd[1]: libpod-conmon-470fba272ba173b6e140441af48ba30555ce0133e9ef09a0e6b69fbdcf029ce8.scope: Deactivated successfully.
Oct 11 08:51:50 compute-0 sudo[304465]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:50 compute-0 sudo[304694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:51:50 compute-0 sudo[304694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:50 compute-0 sudo[304694]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:50 compute-0 sudo[304719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:51:50 compute-0 sudo[304719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:50 compute-0 sudo[304719]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:50 compute-0 sudo[304744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:51:50 compute-0 sudo[304744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:50 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3005440805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:50 compute-0 sudo[304744]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:50 compute-0 sudo[304769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:51:50 compute-0 sudo[304769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:50 compute-0 podman[304833]: 2025-10-11 08:51:50.892575871 +0000 UTC m=+0.077968186 container create d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:51:50 compute-0 podman[304833]: 2025-10-11 08:51:50.86143408 +0000 UTC m=+0.046826445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:51:50 compute-0 systemd[1]: Started libpod-conmon-d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed.scope.
Oct 11 08:51:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:51:51 compute-0 podman[304833]: 2025-10-11 08:51:51.014443788 +0000 UTC m=+0.199836153 container init d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 08:51:51 compute-0 podman[304833]: 2025-10-11 08:51:51.027091745 +0000 UTC m=+0.212484050 container start d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 08:51:51 compute-0 podman[304833]: 2025-10-11 08:51:51.031609243 +0000 UTC m=+0.217001558 container attach d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 08:51:51 compute-0 happy_bell[304850]: 167 167
Oct 11 08:51:51 compute-0 systemd[1]: libpod-d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed.scope: Deactivated successfully.
Oct 11 08:51:51 compute-0 podman[304833]: 2025-10-11 08:51:51.038063575 +0000 UTC m=+0.223455890 container died d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:51:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1ba4542a2f3450a932d705eebd67e3f05d2d82dfa276d52f2cc6f6d67f243e2-merged.mount: Deactivated successfully.
Oct 11 08:51:51 compute-0 podman[304833]: 2025-10-11 08:51:51.094197193 +0000 UTC m=+0.279589508 container remove d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:51:51 compute-0 systemd[1]: libpod-conmon-d77e33b3270985ec832bd24d0af73efbd129d490e5bf3e1e6e3b02a2a39a44ed.scope: Deactivated successfully.
Oct 11 08:51:51 compute-0 ceph-mon[74313]: pgmap v1404: 321 pgs: 321 active+clean; 180 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.5 MiB/s wr, 277 op/s
Oct 11 08:51:51 compute-0 podman[304873]: 2025-10-11 08:51:51.367900223 +0000 UTC m=+0.065095622 container create 1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.387 2 INFO nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Creating config drive at /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2/disk.config
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.393 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ehh3bmo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:51 compute-0 systemd[1]: Started libpod-conmon-1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7.scope.
Oct 11 08:51:51 compute-0 podman[304873]: 2025-10-11 08:51:51.342225027 +0000 UTC m=+0.039420486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:51:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2751d60763e0af113cc26b6a0ebed6af910b3fb061d43b3af3d183db66bb8994/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2751d60763e0af113cc26b6a0ebed6af910b3fb061d43b3af3d183db66bb8994/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2751d60763e0af113cc26b6a0ebed6af910b3fb061d43b3af3d183db66bb8994/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2751d60763e0af113cc26b6a0ebed6af910b3fb061d43b3af3d183db66bb8994/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:51 compute-0 podman[304873]: 2025-10-11 08:51:51.485752386 +0000 UTC m=+0.182947775 container init 1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_perlman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 08:51:51 compute-0 podman[304873]: 2025-10-11 08:51:51.492583169 +0000 UTC m=+0.189778568 container start 1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_perlman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:51:51 compute-0 podman[304873]: 2025-10-11 08:51:51.497358494 +0000 UTC m=+0.194553893 container attach 1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.551 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ehh3bmo" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.576 2 DEBUG nova.storage.rbd_utils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] rbd image 1cecd438-75a3-4140-ad35-7439630b1be2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.579 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2/disk.config 1cecd438-75a3-4140-ad35-7439630b1be2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.655 2 DEBUG nova.network.neutron [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Updating instance_info_cache with network_info: [{"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.729 2 DEBUG oslo_concurrency.processutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2/disk.config 1cecd438-75a3-4140-ad35-7439630b1be2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.730 2 INFO nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Deleting local config drive /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2/disk.config because it was imported into RBD.
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.739 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Releasing lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.739 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Instance network_info: |[{"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.740 2 DEBUG oslo_concurrency.lockutils [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.740 2 DEBUG nova.network.neutron [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Refreshing network info cache for port 30f88ebc-fba6-476c-8953-ad64358029bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.743 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Start _get_guest_xml network_info=[{"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.758 2 WARNING nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.765 2 DEBUG nova.virt.libvirt.host [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.765 2 DEBUG nova.virt.libvirt.host [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.769 2 DEBUG nova.virt.libvirt.host [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.770 2 DEBUG nova.virt.libvirt.host [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.770 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.770 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.771 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.771 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.771 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.771 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.771 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.771 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.772 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.772 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.772 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.772 2 DEBUG nova.virt.hardware [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.775 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:51 compute-0 kernel: tap7290684c-38: entered promiscuous mode
Oct 11 08:51:51 compute-0 NetworkManager[44960]: <info>  [1760172711.8021] manager: (tap7290684c-38): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Oct 11 08:51:51 compute-0 ovn_controller[152945]: 2025-10-11T08:51:51Z|00282|binding|INFO|Claiming lport 7290684c-3823-4e1e-84f5-826316aa4548 for this chassis.
Oct 11 08:51:51 compute-0 ovn_controller[152945]: 2025-10-11T08:51:51Z|00283|binding|INFO|7290684c-3823-4e1e-84f5-826316aa4548: Claiming fa:16:3e:7f:41:62 10.100.0.9
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.814 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:41:62 10.100.0.9'], port_security=['fa:16:3e:7f:41:62 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1cecd438-75a3-4140-ad35-7439630b1be2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=7290684c-3823-4e1e-84f5-826316aa4548) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.815 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 7290684c-3823-4e1e-84f5-826316aa4548 in datapath 9bac3530-993f-420e-8692-0b14a331d756 bound to our chassis
Oct 11 08:51:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.816 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bac3530-993f-420e-8692-0b14a331d756
Oct 11 08:51:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.827 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d8bb36d8-ef14-4ee0-b657-c91634f730e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.829 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bac3530-91 in ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:51:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.830 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bac3530-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:51:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.831 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[086ee292-f6a6-417a-b588-2d4d1860eb88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.832 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[782b1fa5-c5ef-4738-ae22-181b3e8e59e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:51 compute-0 systemd-machined[215705]: New machine qemu-41-instance-00000026.
Oct 11 08:51:51 compute-0 ovn_controller[152945]: 2025-10-11T08:51:51Z|00284|binding|INFO|Setting lport 7290684c-3823-4e1e-84f5-826316aa4548 ovn-installed in OVS
Oct 11 08:51:51 compute-0 ovn_controller[152945]: 2025-10-11T08:51:51Z|00285|binding|INFO|Setting lport 7290684c-3823-4e1e-84f5-826316aa4548 up in Southbound
Oct 11 08:51:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.846 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[15df475c-59fa-458d-91a2-48f1bd6666c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:51 compute-0 nova_compute[260935]: 2025-10-11 08:51:51.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:51 compute-0 systemd[1]: Started Virtual Machine qemu-41-instance-00000026.
Oct 11 08:51:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.862 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e7074563-c576-46af-9bcc-5d96a51fe2d8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:51 compute-0 systemd-udevd[304951]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:51:51 compute-0 NetworkManager[44960]: <info>  [1760172711.8896] device (tap7290684c-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:51:51 compute-0 NetworkManager[44960]: <info>  [1760172711.8905] device (tap7290684c-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:51:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 180 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.5 MiB/s wr, 277 op/s
Oct 11 08:51:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.917 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[65e2eec6-01a2-4c6d-b858-8baa522519bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:51 compute-0 NetworkManager[44960]: <info>  [1760172711.9238] manager: (tap9bac3530-90): new Veth device (/org/freedesktop/NetworkManager/Devices/139)
Oct 11 08:51:51 compute-0 systemd-udevd[304956]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:51:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.924 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[20e5f3f2-05bd-4dce-a26e-02bcbf385c7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.960 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b41f58-1574-4e99-add4-518a87f1de85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.963 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[863c745a-d003-4e32-853f-bfa3ef74a6f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:51 compute-0 NetworkManager[44960]: <info>  [1760172711.9881] device (tap9bac3530-90): carrier: link connected
Oct 11 08:51:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:51.994 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9f6b8dbb-95e4-4507-98d7-7488e6c3a534]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.014 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a1fd28f4-4baf-4bb1-87fb-683468c12a1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455669, 'reachable_time': 40500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305000, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[769dea0c-1432-458f-ab80-2549ec683a59]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:351f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455669, 'tstamp': 455669}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305001, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.051 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3529771c-396a-43c9-8784-51676e708a2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bac3530-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:35:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455669, 'reachable_time': 40500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305002, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.086 2 DEBUG nova.compute.manager [req-13beb4be-75da-41d4-82ed-b3dba8c20fe0 req-004bb7a9-53c9-46c5-bdf3-a2a729e03547 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-changed-e389558a-ec7e-4610-aef7-2cfb76da6814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.087 2 DEBUG nova.compute.manager [req-13beb4be-75da-41d4-82ed-b3dba8c20fe0 req-004bb7a9-53c9-46c5-bdf3-a2a729e03547 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Refreshing instance network info cache due to event network-changed-e389558a-ec7e-4610-aef7-2cfb76da6814. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.087 2 DEBUG oslo_concurrency.lockutils [req-13beb4be-75da-41d4-82ed-b3dba8c20fe0 req-004bb7a9-53c9-46c5-bdf3-a2a729e03547 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.099 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[17704a91-af3c-4d55-bd66-33ab5b556e76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.179 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b471b6-a95e-4644-a9cd-01193a7b8f25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.181 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.182 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.182 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bac3530-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:52 compute-0 NetworkManager[44960]: <info>  [1760172712.2245] manager: (tap9bac3530-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Oct 11 08:51:52 compute-0 kernel: tap9bac3530-90: entered promiscuous mode
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.227 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bac3530-90, col_values=(('external_ids', {'iface-id': 'e5becf0d-48c0-404b-9cba-07077454d085'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.230 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.233 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[13649bc5-23e6-4d29-baee-07ca80189d0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:52 compute-0 ovn_controller[152945]: 2025-10-11T08:51:52Z|00286|binding|INFO|Releasing lport e5becf0d-48c0-404b-9cba-07077454d085 from this chassis (sb_readonly=0)
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.234 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-9bac3530-993f-420e-8692-0b14a331d756
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/9bac3530-993f-420e-8692-0b14a331d756.pid.haproxy
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 9bac3530-993f-420e-8692-0b14a331d756
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:51:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:52.236 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'env', 'PROCESS_TAG=haproxy-9bac3530-993f-420e-8692-0b14a331d756', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bac3530-993f-420e-8692-0b14a331d756.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.236 2 DEBUG nova.compute.manager [req-80412251-fe85-43a2-8a22-4b723e034e33 req-bb9269c7-faa2-4695-bbf7-6d87e60e45e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received event network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.236 2 DEBUG oslo_concurrency.lockutils [req-80412251-fe85-43a2-8a22-4b723e034e33 req-bb9269c7-faa2-4695-bbf7-6d87e60e45e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.237 2 DEBUG oslo_concurrency.lockutils [req-80412251-fe85-43a2-8a22-4b723e034e33 req-bb9269c7-faa2-4695-bbf7-6d87e60e45e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.237 2 DEBUG oslo_concurrency.lockutils [req-80412251-fe85-43a2-8a22-4b723e034e33 req-bb9269c7-faa2-4695-bbf7-6d87e60e45e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.237 2 DEBUG nova.compute.manager [req-80412251-fe85-43a2-8a22-4b723e034e33 req-bb9269c7-faa2-4695-bbf7-6d87e60e45e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Processing event network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:51:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1147531894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.284 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.311 2 DEBUG nova.storage.rbd_utils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 205bee5e-165a-468c-87d3-db44e03ace3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.322 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:52 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1147531894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]: {
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:         "osd_id": 2,
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:         "type": "bluestore"
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:     },
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:         "osd_id": 0,
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:         "type": "bluestore"
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:     },
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:         "osd_id": 1,
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:         "type": "bluestore"
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]:     }
Oct 11 08:51:52 compute-0 intelligent_perlman[304890]: }
Oct 11 08:51:52 compute-0 systemd[1]: libpod-1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7.scope: Deactivated successfully.
Oct 11 08:51:52 compute-0 podman[304873]: 2025-10-11 08:51:52.516585798 +0000 UTC m=+1.213781197 container died 1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_perlman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.555 2 DEBUG nova.network.neutron [req-e2ed1991-2d14-42af-8112-020ab611cbd6 req-7bc4f01a-a2b2-4c11-a4ba-4e1bb56b307c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Updated VIF entry in instance network info cache for port 7290684c-3823-4e1e-84f5-826316aa4548. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.557 2 DEBUG nova.network.neutron [req-e2ed1991-2d14-42af-8112-020ab611cbd6 req-7bc4f01a-a2b2-4c11-a4ba-4e1bb56b307c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Updating instance_info_cache with network_info: [{"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2751d60763e0af113cc26b6a0ebed6af910b3fb061d43b3af3d183db66bb8994-merged.mount: Deactivated successfully.
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.576 2 DEBUG oslo_concurrency.lockutils [req-e2ed1991-2d14-42af-8112-020ab611cbd6 req-7bc4f01a-a2b2-4c11-a4ba-4e1bb56b307c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1cecd438-75a3-4140-ad35-7439630b1be2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:52 compute-0 podman[304873]: 2025-10-11 08:51:52.594219103 +0000 UTC m=+1.291414472 container remove 1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_perlman, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 08:51:52 compute-0 systemd[1]: libpod-conmon-1cc072f4dc4625defc2bfc1a7ba8e9e02ed760558035360855c1280acc1204e7.scope: Deactivated successfully.
Oct 11 08:51:52 compute-0 sudo[304769]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:51:52 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:51:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:51:52 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:51:52 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev cab240f1-85c2-484a-9dc1-041b5e368c4b does not exist
Oct 11 08:51:52 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 8382618e-0e89-478b-917d-938ebf5d3d85 does not exist
Oct 11 08:51:52 compute-0 podman[305153]: 2025-10-11 08:51:52.665010065 +0000 UTC m=+0.066139291 container create f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:51:52 compute-0 podman[305153]: 2025-10-11 08:51:52.622628146 +0000 UTC m=+0.023757372 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:51:52 compute-0 systemd[1]: Started libpod-conmon-f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e.scope.
Oct 11 08:51:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:51:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241342263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:51:52 compute-0 sudo[305166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:51:52 compute-0 sudo[305166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ddfd5ce54df1d1aac4c1e654420057838a802df3d512bb77cf4087a954ac2fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:52 compute-0 sudo[305166]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.770 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.771 2 DEBUG nova.virt.libvirt.vif [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-318864694',display_name='tempest-ServersTestMultiNic-server-318864694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-318864694',id=37,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-932cooa3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:41Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=205bee5e-165a-468c-87d3-db44e03ace3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.772 2 DEBUG nova.network.os_vif_util [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.773 2 DEBUG nova.network.os_vif_util [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:a9:6f,bridge_name='br-int',has_traffic_filtering=True,id=30f88ebc-fba6-476c-8953-ad64358029bb,network=Network(98ec9321-44f2-4cc2-a0a2-531598e629d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30f88ebc-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.774 2 DEBUG nova.virt.libvirt.vif [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-318864694',display_name='tempest-ServersTestMultiNic-server-318864694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-318864694',id=37,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-932cooa3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:41Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=205bee5e-165a-468c-87d3-db44e03ace3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.775 2 DEBUG nova.network.os_vif_util [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.775 2 DEBUG nova.network.os_vif_util [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:bd:db,bridge_name='br-int',has_traffic_filtering=True,id=e389558a-ec7e-4610-aef7-2cfb76da6814,network=Network(7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389558a-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.777 2 DEBUG nova.objects.instance [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 205bee5e-165a-468c-87d3-db44e03ace3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:52 compute-0 podman[305153]: 2025-10-11 08:51:52.786253304 +0000 UTC m=+0.187382540 container init f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 08:51:52 compute-0 podman[305153]: 2025-10-11 08:51:52.792098429 +0000 UTC m=+0.193227625 container start f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 08:51:52 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[305191]: [NOTICE]   (305217) : New worker (305224) forked
Oct 11 08:51:52 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[305191]: [NOTICE]   (305217) : Loading success.
Oct 11 08:51:52 compute-0 sudo[305198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:51:52 compute-0 sudo[305198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:51:52 compute-0 sudo[305198]: pam_unix(sudo:session): session closed for user root
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.897 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:51:52 compute-0 nova_compute[260935]:   <uuid>205bee5e-165a-468c-87d3-db44e03ace3e</uuid>
Oct 11 08:51:52 compute-0 nova_compute[260935]:   <name>instance-00000025</name>
Oct 11 08:51:52 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:51:52 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:51:52 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersTestMultiNic-server-318864694</nova:name>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:51:51</nova:creationTime>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:51:52 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:51:52 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:51:52 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:51:52 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:51:52 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:51:52 compute-0 nova_compute[260935]:         <nova:user uuid="fb27f51b5ffd414ab5ddbea179ada690">tempest-ServersTestMultiNic-65661968-project-member</nova:user>
Oct 11 08:51:52 compute-0 nova_compute[260935]:         <nova:project uuid="f84de17ba2c5470fbc4c7fe809e7d7b7">tempest-ServersTestMultiNic-65661968</nova:project>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:51:52 compute-0 nova_compute[260935]:         <nova:port uuid="30f88ebc-fba6-476c-8953-ad64358029bb">
Oct 11 08:51:52 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.192" ipVersion="4"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:51:52 compute-0 nova_compute[260935]:         <nova:port uuid="e389558a-ec7e-4610-aef7-2cfb76da6814">
Oct 11 08:51:52 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.1.173" ipVersion="4"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:51:52 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:51:52 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <system>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <entry name="serial">205bee5e-165a-468c-87d3-db44e03ace3e</entry>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <entry name="uuid">205bee5e-165a-468c-87d3-db44e03ace3e</entry>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     </system>
Oct 11 08:51:52 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:51:52 compute-0 nova_compute[260935]:   <os>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:   </os>
Oct 11 08:51:52 compute-0 nova_compute[260935]:   <features>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:   </features>
Oct 11 08:51:52 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:51:52 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:51:52 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/205bee5e-165a-468c-87d3-db44e03ace3e_disk">
Oct 11 08:51:52 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       </source>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:51:52 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/205bee5e-165a-468c-87d3-db44e03ace3e_disk.config">
Oct 11 08:51:52 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       </source>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:51:52 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:84:a9:6f"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <target dev="tap30f88ebc-fb"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:67:bd:db"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <target dev="tape389558a-ec"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e/console.log" append="off"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <video>
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     </video>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:51:52 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:51:52 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:51:52 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:51:52 compute-0 nova_compute[260935]: </domain>
Oct 11 08:51:52 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.911 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Preparing to wait for external event network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.911 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.912 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.912 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.912 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Preparing to wait for external event network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.913 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.913 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.913 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.915 2 DEBUG nova.virt.libvirt.vif [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-318864694',display_name='tempest-ServersTestMultiNic-server-318864694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-318864694',id=37,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-932cooa3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:41Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=205bee5e-165a-468c-87d3-db44e03ace3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.915 2 DEBUG nova.network.os_vif_util [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.916 2 DEBUG nova.network.os_vif_util [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:a9:6f,bridge_name='br-int',has_traffic_filtering=True,id=30f88ebc-fba6-476c-8953-ad64358029bb,network=Network(98ec9321-44f2-4cc2-a0a2-531598e629d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30f88ebc-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.918 2 DEBUG os_vif [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:a9:6f,bridge_name='br-int',has_traffic_filtering=True,id=30f88ebc-fba6-476c-8953-ad64358029bb,network=Network(98ec9321-44f2-4cc2-a0a2-531598e629d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30f88ebc-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.919 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.920 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.925 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30f88ebc-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.926 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap30f88ebc-fb, col_values=(('external_ids', {'iface-id': '30f88ebc-fba6-476c-8953-ad64358029bb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:a9:6f', 'vm-uuid': '205bee5e-165a-468c-87d3-db44e03ace3e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:52 compute-0 NetworkManager[44960]: <info>  [1760172712.9298] manager: (tap30f88ebc-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.935 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172712.9346688, 1cecd438-75a3-4140-ad35-7439630b1be2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.936 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] VM Started (Lifecycle Event)
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.940 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.941 2 INFO os_vif [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:a9:6f,bridge_name='br-int',has_traffic_filtering=True,id=30f88ebc-fba6-476c-8953-ad64358029bb,network=Network(98ec9321-44f2-4cc2-a0a2-531598e629d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30f88ebc-fb')
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.943 2 DEBUG nova.virt.libvirt.vif [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-318864694',display_name='tempest-ServersTestMultiNic-server-318864694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-318864694',id=37,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-932cooa3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:41Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=205bee5e-165a-468c-87d3-db44e03ace3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.954 2 DEBUG nova.network.os_vif_util [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.955 2 DEBUG nova.network.os_vif_util [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:bd:db,bridge_name='br-int',has_traffic_filtering=True,id=e389558a-ec7e-4610-aef7-2cfb76da6814,network=Network(7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389558a-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.955 2 DEBUG os_vif [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:bd:db,bridge_name='br-int',has_traffic_filtering=True,id=e389558a-ec7e-4610-aef7-2cfb76da6814,network=Network(7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389558a-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.956 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.957 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.959 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.963 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.965 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape389558a-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.966 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape389558a-ec, col_values=(('external_ids', {'iface-id': 'e389558a-ec7e-4610-aef7-2cfb76da6814', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:bd:db', 'vm-uuid': '205bee5e-165a-468c-87d3-db44e03ace3e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:52 compute-0 NetworkManager[44960]: <info>  [1760172712.9681] manager: (tape389558a-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.972 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.974 2 INFO nova.virt.libvirt.driver [-] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Instance spawned successfully.
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.975 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.977 2 INFO os_vif [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:bd:db,bridge_name='br-int',has_traffic_filtering=True,id=e389558a-ec7e-4610-aef7-2cfb76da6814,network=Network(7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389558a-ec')
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.996 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.997 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172712.9360156, 1cecd438-75a3-4140-ad35-7439630b1be2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:52 compute-0 nova_compute[260935]: 2025-10-11 08:51:52.998 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] VM Paused (Lifecycle Event)
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.004 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.004 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.005 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.005 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.006 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.006 2 DEBUG nova.virt.libvirt.driver [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.013 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.020 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172712.951538, 1cecd438-75a3-4140-ad35-7439630b1be2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.020 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] VM Resumed (Lifecycle Event)
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.040 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.043 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.177 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.177 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.178 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No VIF found with MAC fa:16:3e:84:a9:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.178 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] No VIF found with MAC fa:16:3e:67:bd:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.179 2 INFO nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Using config drive
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.212 2 DEBUG nova.storage.rbd_utils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 205bee5e-165a-468c-87d3-db44e03ace3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.222 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.224 2 INFO nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Took 7.77 seconds to spawn the instance on the hypervisor.
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.225 2 DEBUG nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.305 2 INFO nova.compute.manager [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Took 8.85 seconds to build instance.
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.326 2 DEBUG oslo_concurrency.lockutils [None req-f3e73633-9738-40f5-969e-90ec80c20430 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:53 compute-0 ceph-mon[74313]: pgmap v1405: 321 pgs: 321 active+clean; 180 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.5 MiB/s wr, 277 op/s
Oct 11 08:51:53 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:51:53 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:51:53 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3241342263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.520 2 DEBUG nova.network.neutron [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Updated VIF entry in instance network info cache for port 30f88ebc-fba6-476c-8953-ad64358029bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.520 2 DEBUG nova.network.neutron [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Updating instance_info_cache with network_info: [{"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.549 2 DEBUG oslo_concurrency.lockutils [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.550 2 DEBUG nova.compute.manager [req-18b1f560-5b30-4047-bffe-840d9c80635f req-3dbda898-6a48-421b-b8a8-2f61879a7924 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Received event network-vif-deleted-e758e6dc-cadd-4687-a634-519fb2ecace8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.550 2 DEBUG oslo_concurrency.lockutils [req-13beb4be-75da-41d4-82ed-b3dba8c20fe0 req-004bb7a9-53c9-46c5-bdf3-a2a729e03547 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.550 2 DEBUG nova.network.neutron [req-13beb4be-75da-41d4-82ed-b3dba8c20fe0 req-004bb7a9-53c9-46c5-bdf3-a2a729e03547 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Refreshing network info cache for port e389558a-ec7e-4610-aef7-2cfb76da6814 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.634 2 INFO nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Creating config drive at /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e/disk.config
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.639 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfxw46wcj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.778 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfxw46wcj" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.807 2 DEBUG nova.storage.rbd_utils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] rbd image 205bee5e-165a-468c-87d3-db44e03ace3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.813 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e/disk.config 205bee5e-165a-468c-87d3-db44e03ace3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 181 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 185 op/s
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.917 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.918 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:53 compute-0 nova_compute[260935]: 2025-10-11 08:51:53.940 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:51:53 compute-0 ovn_controller[152945]: 2025-10-11T08:51:53Z|00287|binding|INFO|Releasing lport e5becf0d-48c0-404b-9cba-07077454d085 from this chassis (sb_readonly=0)
Oct 11 08:51:53 compute-0 ovn_controller[152945]: 2025-10-11T08:51:53Z|00288|binding|INFO|Releasing lport abe57b96-aedd-418b-be8f-7ad9fb9218ac from this chassis (sb_readonly=0)
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.024 2 DEBUG oslo_concurrency.processutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e/disk.config 205bee5e-165a-468c-87d3-db44e03ace3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.025 2 INFO nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Deleting local config drive /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e/disk.config because it was imported into RBD.
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.040 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.041 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.055 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.056 2 INFO nova.compute.claims [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:54 compute-0 NetworkManager[44960]: <info>  [1760172714.1018] manager: (tap30f88ebc-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Oct 11 08:51:54 compute-0 kernel: tap30f88ebc-fb: entered promiscuous mode
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:54 compute-0 ovn_controller[152945]: 2025-10-11T08:51:54Z|00289|binding|INFO|Claiming lport 30f88ebc-fba6-476c-8953-ad64358029bb for this chassis.
Oct 11 08:51:54 compute-0 ovn_controller[152945]: 2025-10-11T08:51:54Z|00290|binding|INFO|30f88ebc-fba6-476c-8953-ad64358029bb: Claiming fa:16:3e:84:a9:6f 10.100.0.192
Oct 11 08:51:54 compute-0 NetworkManager[44960]: <info>  [1760172714.1275] device (tap30f88ebc-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:51:54 compute-0 NetworkManager[44960]: <info>  [1760172714.1286] device (tap30f88ebc-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.124 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:a9:6f 10.100.0.192'], port_security=['fa:16:3e:84:a9:6f 10.100.0.192'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.192/24', 'neutron:device_id': '205bee5e-165a-468c-87d3-db44e03ace3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98ec9321-44f2-4cc2-a0a2-531598e629d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=310a160b-9164-4639-a339-2ba6c58b1709, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=30f88ebc-fba6-476c-8953-ad64358029bb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.129 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 30f88ebc-fba6-476c-8953-ad64358029bb in datapath 98ec9321-44f2-4cc2-a0a2-531598e629d2 bound to our chassis
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.131 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98ec9321-44f2-4cc2-a0a2-531598e629d2
Oct 11 08:51:54 compute-0 NetworkManager[44960]: <info>  [1760172714.1365] manager: (tape389558a-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/144)
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.148 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf7788c-79f5-458e-9c70-2ecf7bc9ce04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.149 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap98ec9321-41 in ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.154 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap98ec9321-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.154 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c83f0e22-a67a-47e5-b14a-41662afb7271]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.155 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7da3804e-905b-4d3d-ad92-89b006ab1fb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:54 compute-0 kernel: tape389558a-ec: entered promiscuous mode
Oct 11 08:51:54 compute-0 NetworkManager[44960]: <info>  [1760172714.1610] device (tape389558a-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:51:54 compute-0 NetworkManager[44960]: <info>  [1760172714.1637] device (tape389558a-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:51:54 compute-0 ovn_controller[152945]: 2025-10-11T08:51:54Z|00291|binding|INFO|Claiming lport e389558a-ec7e-4610-aef7-2cfb76da6814 for this chassis.
Oct 11 08:51:54 compute-0 ovn_controller[152945]: 2025-10-11T08:51:54Z|00292|binding|INFO|e389558a-ec7e-4610-aef7-2cfb76da6814: Claiming fa:16:3e:67:bd:db 10.100.1.173
Oct 11 08:51:54 compute-0 ovn_controller[152945]: 2025-10-11T08:51:54Z|00293|binding|INFO|Setting lport 30f88ebc-fba6-476c-8953-ad64358029bb ovn-installed in OVS
Oct 11 08:51:54 compute-0 ovn_controller[152945]: 2025-10-11T08:51:54Z|00294|binding|INFO|Setting lport 30f88ebc-fba6-476c-8953-ad64358029bb up in Southbound
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.171 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1f1b3cd5-d4b2-42ff-a102-4e12752e9039]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.174 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:bd:db 10.100.1.173'], port_security=['fa:16:3e:67:bd:db 10.100.1.173'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.173/24', 'neutron:device_id': '205bee5e-165a-468c-87d3-db44e03ace3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bdcaf9f8-32cf-46cd-8760-089ea8b1e312, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e389558a-ec7e-4610-aef7-2cfb76da6814) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:51:54 compute-0 systemd-machined[215705]: New machine qemu-42-instance-00000025.
Oct 11 08:51:54 compute-0 systemd[1]: Started Virtual Machine qemu-42-instance-00000025.
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.207 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[08936cab-dd05-419d-80b9-c7a7f1ef1367]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:54 compute-0 ovn_controller[152945]: 2025-10-11T08:51:54Z|00295|binding|INFO|Setting lport e389558a-ec7e-4610-aef7-2cfb76da6814 ovn-installed in OVS
Oct 11 08:51:54 compute-0 ovn_controller[152945]: 2025-10-11T08:51:54Z|00296|binding|INFO|Setting lport e389558a-ec7e-4610-aef7-2cfb76da6814 up in Southbound
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.251 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd545d3-4979-41a1-a900-2d437d6f3691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.256 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f57db02e-685a-4fda-a245-e9641e00b824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:54 compute-0 NetworkManager[44960]: <info>  [1760172714.2604] manager: (tap98ec9321-40): new Veth device (/org/freedesktop/NetworkManager/Devices/145)
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.322 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b81eeb8b-6197-4b20-9126-7fd99895f2ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.326 2 DEBUG nova.compute.manager [req-bdf198d8-2280-4e34-8de6-f854c06df9ba req-518c9b8f-d86d-4839-9466-ccae24e5570a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received event network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.326 2 DEBUG oslo_concurrency.lockutils [req-bdf198d8-2280-4e34-8de6-f854c06df9ba req-518c9b8f-d86d-4839-9466-ccae24e5570a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.326 2 DEBUG oslo_concurrency.lockutils [req-bdf198d8-2280-4e34-8de6-f854c06df9ba req-518c9b8f-d86d-4839-9466-ccae24e5570a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.326 2 DEBUG oslo_concurrency.lockutils [req-bdf198d8-2280-4e34-8de6-f854c06df9ba req-518c9b8f-d86d-4839-9466-ccae24e5570a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.327 2 DEBUG nova.compute.manager [req-bdf198d8-2280-4e34-8de6-f854c06df9ba req-518c9b8f-d86d-4839-9466-ccae24e5570a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] No waiting events found dispatching network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.327 2 WARNING nova.compute.manager [req-bdf198d8-2280-4e34-8de6-f854c06df9ba req-518c9b8f-d86d-4839-9466-ccae24e5570a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received unexpected event network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 for instance with vm_state active and task_state None.
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.330 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[eff8777e-fe57-4f2e-af92-d0c6de414856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:54 compute-0 NetworkManager[44960]: <info>  [1760172714.3634] device (tap98ec9321-40): carrier: link connected
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.375 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[35d4cc39-5a07-4932-b03d-0a4252e31326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.404 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[667131ad-2c6d-48e7-a288-c0406465a7a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98ec9321-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fb:82:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455906, 'reachable_time': 18858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305332, 'error': None, 'target': 'ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.425 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b5475861-723d-4f65-bac8-aae0b21394b0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefb:821b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455906, 'tstamp': 455906}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305333, 'error': None, 'target': 'ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.431 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.453 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6155aa3c-5bec-450b-99af-71fe94956a04]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98ec9321-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fb:82:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455906, 'reachable_time': 18858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305334, 'error': None, 'target': 'ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.498 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2fc27f-7ed5-4a20-9e70-fdf972877daa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.522 2 DEBUG nova.compute.manager [req-9f6d84af-496c-4bf5-8469-4f34a69c2704 req-1001e463-0c00-44bd-b6fa-447002bb7d88 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.523 2 DEBUG oslo_concurrency.lockutils [req-9f6d84af-496c-4bf5-8469-4f34a69c2704 req-1001e463-0c00-44bd-b6fa-447002bb7d88 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.523 2 DEBUG oslo_concurrency.lockutils [req-9f6d84af-496c-4bf5-8469-4f34a69c2704 req-1001e463-0c00-44bd-b6fa-447002bb7d88 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.523 2 DEBUG oslo_concurrency.lockutils [req-9f6d84af-496c-4bf5-8469-4f34a69c2704 req-1001e463-0c00-44bd-b6fa-447002bb7d88 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.524 2 DEBUG nova.compute.manager [req-9f6d84af-496c-4bf5-8469-4f34a69c2704 req-1001e463-0c00-44bd-b6fa-447002bb7d88 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Processing event network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.592 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4f8e13-894e-4c5a-a359-267e6788f69b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.594 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98ec9321-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.594 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.595 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98ec9321-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:54 compute-0 NetworkManager[44960]: <info>  [1760172714.5977] manager: (tap98ec9321-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Oct 11 08:51:54 compute-0 kernel: tap98ec9321-40: entered promiscuous mode
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.602 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98ec9321-40, col_values=(('external_ids', {'iface-id': 'e3dc9ca3-46c1-4422-a2ef-638d9e041fda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:54 compute-0 ovn_controller[152945]: 2025-10-11T08:51:54Z|00297|binding|INFO|Releasing lport e3dc9ca3-46c1-4422-a2ef-638d9e041fda from this chassis (sb_readonly=0)
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.605 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/98ec9321-44f2-4cc2-a0a2-531598e629d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/98ec9321-44f2-4cc2-a0a2-531598e629d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.605 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1509a69e-7943-4c18-93df-69bc959fc88c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.606 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-98ec9321-44f2-4cc2-a0a2-531598e629d2
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/98ec9321-44f2-4cc2-a0a2-531598e629d2.pid.haproxy
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 98ec9321-44f2-4cc2-a0a2-531598e629d2
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:51:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:54.606 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2', 'env', 'PROCESS_TAG=haproxy-98ec9321-44f2-4cc2-a0a2-531598e629d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/98ec9321-44f2-4cc2-a0a2-531598e629d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:51:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:51:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:51:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:51:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:51:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:51:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:51:54
Oct 11 08:51:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:51:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:51:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'vms', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.control']
Oct 11 08:51:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:51:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:51:54 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2264999212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.909 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.917 2 DEBUG nova.compute.provider_tree [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.932 2 DEBUG nova.scheduler.client.report [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.950 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:54 compute-0 nova_compute[260935]: 2025-10-11 08:51:54.951 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:51:54 compute-0 podman[305427]: 2025-10-11 08:51:54.990935143 +0000 UTC m=+0.061851760 container create cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.002 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.002 2 DEBUG nova.network.neutron [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.029 2 INFO nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:51:55 compute-0 systemd[1]: Started libpod-conmon-cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053.scope.
Oct 11 08:51:55 compute-0 podman[305427]: 2025-10-11 08:51:54.961191082 +0000 UTC m=+0.032107709 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.051 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:51:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 08:51:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 16K writes, 65K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 16K writes, 5101 syncs, 3.19 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 41.21 MB, 0.07 MB/s
                                           Interval WAL: 10K writes, 4028 syncs, 2.53 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 08:51:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3306a34e27badc402a040902ece0ef4a86df5e07e08134f4bb9517ea1ccf4987/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:55 compute-0 podman[305427]: 2025-10-11 08:51:55.082678827 +0000 UTC m=+0.153595464 container init cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 08:51:55 compute-0 podman[305427]: 2025-10-11 08:51:55.09372355 +0000 UTC m=+0.164640167 container start cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:51:55 compute-0 neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2[305444]: [NOTICE]   (305457) : New worker (305459) forked
Oct 11 08:51:55 compute-0 neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2[305444]: [NOTICE]   (305457) : Loading success.
Oct 11 08:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:51:55 compute-0 podman[305447]: 2025-10-11 08:51:55.153301545 +0000 UTC m=+0.079933522 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.158 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e389558a-ec7e-4610-aef7-2cfb76da6814 in datapath 7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7 unbound from our chassis
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.162 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.173 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[124725f0-d70e-4e65-ad6c-19e901a10474]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.174 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7aa0dec9-51 in ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.176 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7aa0dec9-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.176 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[74d1edab-b4b2-4b63-94a5-899d4d25eae7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.177 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eb3e8c21-fa2b-4067-9d89-023436bc679f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.185 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.186 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.186 2 INFO nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Creating image(s)
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.200 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[563a7dc0-efa8-43f8-9a91-7b0556be591f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.213 2 DEBUG nova.storage.rbd_utils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.223 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e29c8a-40f0-4d51-b1af-1f3e97433701]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.292 2 DEBUG nova.storage.rbd_utils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.305 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c6ba7e-cf37-4115-8aef-de4bd5aa716a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:55 compute-0 systemd-udevd[305424]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:51:55 compute-0 NetworkManager[44960]: <info>  [1760172715.3131] manager: (tap7aa0dec9-50): new Veth device (/org/freedesktop/NetworkManager/Devices/147)
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.311 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f85c6b09-89de-421c-a586-e013c389ed17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.348 2 DEBUG nova.storage.rbd_utils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.348 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d45ca3-d2ac-40c7-a2b4-0fdbd3ac48c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.352 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c0eb21b5-9031-407d-be65-69c493566bb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:55 compute-0 ceph-mon[74313]: pgmap v1406: 321 pgs: 321 active+clean; 181 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 185 op/s
Oct 11 08:51:55 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2264999212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.368 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:55 compute-0 NetworkManager[44960]: <info>  [1760172715.3823] device (tap7aa0dec9-50): carrier: link connected
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.393 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b2cca89a-8110-4d93-8706-8f9b1bd4e7e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.397 2 DEBUG nova.policy [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7f37cd0ba15f412b88192a506c5cec79', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b23ff73ca27245eeb1b46f51326a5568', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.401 2 DEBUG nova.network.neutron [req-13beb4be-75da-41d4-82ed-b3dba8c20fe0 req-004bb7a9-53c9-46c5-bdf3-a2a729e03547 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Updated VIF entry in instance network info cache for port e389558a-ec7e-4610-aef7-2cfb76da6814. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.402 2 DEBUG nova.network.neutron [req-13beb4be-75da-41d4-82ed-b3dba8c20fe0 req-004bb7a9-53c9-46c5-bdf3-a2a729e03547 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Updating instance_info_cache with network_info: [{"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.415 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[14d11d3f-1d35-47da-9589-4845bffadb89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7aa0dec9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:23:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456008, 'reachable_time': 40647, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305555, 'error': None, 'target': 'ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.434 2 DEBUG oslo_concurrency.lockutils [req-13beb4be-75da-41d4-82ed-b3dba8c20fe0 req-004bb7a9-53c9-46c5-bdf3-a2a729e03547 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-205bee5e-165a-468c-87d3-db44e03ace3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.435 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.435 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.436 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.436 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.447 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c90e32ab-0389-4690-9eab-6875e3cfac9c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe59:230a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456008, 'tstamp': 456008}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305556, 'error': None, 'target': 'ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.461 2 DEBUG nova.storage.rbd_utils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.468 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.471 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c717bb68-4154-49a2-ad65-1176a86d2f66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7aa0dec9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:23:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456008, 'reachable_time': 40647, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305574, 'error': None, 'target': 'ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.508 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca05aea-5e96-4613-8dc8-d2341c928c73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.621 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172715.6194649, 205bee5e-165a-468c-87d3-db44e03ace3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.623 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] VM Started (Lifecycle Event)
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.629 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb411c8-33ab-47df-937e-afc9f9b66b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.631 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7aa0dec9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.632 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.632 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172700.6303284, 5b50d851-e482-40a2-8b7d-d3eca87e15ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.632 2 INFO nova.compute.manager [-] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] VM Stopped (Lifecycle Event)
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.633 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7aa0dec9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:55 compute-0 NetworkManager[44960]: <info>  [1760172715.6363] manager: (tap7aa0dec9-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Oct 11 08:51:55 compute-0 kernel: tap7aa0dec9-50: entered promiscuous mode
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.643 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7aa0dec9-50, col_values=(('external_ids', {'iface-id': '90d43dcf-2ebb-421e-9ea5-e973adcf2f96'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:51:55 compute-0 ovn_controller[152945]: 2025-10-11T08:51:55Z|00298|binding|INFO|Releasing lport 90d43dcf-2ebb-421e-9ea5-e973adcf2f96 from this chassis (sb_readonly=0)
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.664 2 DEBUG nova.compute.manager [None req-28749f6d-c2e4-4b3b-b5b9-867e3c5d28f0 - - - - - -] [instance: 5b50d851-e482-40a2-8b7d-d3eca87e15ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.667 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.678 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172715.6203399, 205bee5e-165a-468c-87d3-db44e03ace3e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.679 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] VM Paused (Lifecycle Event)
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.681 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.685 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7c9edda1-f67c-4a8e-9644-75f9a00f3ec1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.686 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7.pid.haproxy
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:51:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:51:55.687 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7', 'env', 'PROCESS_TAG=haproxy-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.704 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.707 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.734 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.785 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:51:55 compute-0 nova_compute[260935]: 2025-10-11 08:51:55.890 2 DEBUG nova.storage.rbd_utils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] resizing rbd image 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:51:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1407: 321 pgs: 321 active+clean; 181 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 163 op/s
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.030 2 DEBUG nova.objects.instance [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lazy-loading 'migration_context' on Instance uuid 840d2a1b-48bb-42ec-aa12-067e1b65ea39 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.053 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.054 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Ensure instance console log exists: /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.054 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.055 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.055 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.127 2 DEBUG nova.compute.manager [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.138 2 DEBUG nova.network.neutron [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Successfully created port: bf26054f-43d5-471a-bca4-e7948b4409d0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:51:56 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.166 2 INFO nova.compute.manager [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] instance snapshotting
Oct 11 08:51:56 compute-0 podman[305701]: 2025-10-11 08:51:56.172713883 +0000 UTC m=+0.112134262 container create 5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 08:51:56 compute-0 podman[305701]: 2025-10-11 08:51:56.106558652 +0000 UTC m=+0.045979021 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:51:56 compute-0 systemd[1]: Started libpod-conmon-5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c.scope.
Oct 11 08:51:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a65300d208cb9158423dc57616ca08fb30633e13196b7c893ce5c6964e246ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:51:56 compute-0 podman[305701]: 2025-10-11 08:51:56.285503183 +0000 UTC m=+0.224923572 container init 5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 08:51:56 compute-0 podman[305701]: 2025-10-11 08:51:56.296521784 +0000 UTC m=+0.235942153 container start 5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 08:51:56 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 11 08:51:56 compute-0 neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7[305716]: [NOTICE]   (305720) : New worker (305722) forked
Oct 11 08:51:56 compute-0 neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7[305716]: [NOTICE]   (305720) : Loading success.
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.544 2 INFO nova.virt.libvirt.driver [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Beginning live snapshot process
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.742 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.751 2 DEBUG nova.virt.libvirt.imagebackend [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.807 2 DEBUG nova.compute.manager [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.808 2 DEBUG oslo_concurrency.lockutils [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.809 2 DEBUG oslo_concurrency.lockutils [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.809 2 DEBUG oslo_concurrency.lockutils [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.810 2 DEBUG nova.compute.manager [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Processing event network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.810 2 DEBUG nova.compute.manager [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.811 2 DEBUG oslo_concurrency.lockutils [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.811 2 DEBUG oslo_concurrency.lockutils [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.812 2 DEBUG oslo_concurrency.lockutils [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.812 2 DEBUG nova.compute.manager [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] No waiting events found dispatching network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.813 2 WARNING nova.compute.manager [req-83e3797e-fe2e-485b-aece-e354e10d0f6a req-1d4cd149-53bc-470c-a829-d1c87fd54318 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received unexpected event network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb for instance with vm_state building and task_state spawning.
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.814 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.831 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172716.8305564, 205bee5e-165a-468c-87d3-db44e03ace3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.831 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] VM Resumed (Lifecycle Event)
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.834 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.839 2 INFO nova.virt.libvirt.driver [-] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Instance spawned successfully.
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.840 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:51:56 compute-0 nova_compute[260935]: 2025-10-11 08:51:56.972 2 DEBUG nova.storage.rbd_utils [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(44f23a098021423198ece7a5a6ca7e99) on rbd image(1cecd438-75a3-4140-ad35-7439630b1be2_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.012 2 DEBUG nova.compute.manager [req-c2770c4e-9b50-429f-aef1-c676ef6e7b78 req-516c7e38-59d1-4635-ad19-2a795099c245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.012 2 DEBUG oslo_concurrency.lockutils [req-c2770c4e-9b50-429f-aef1-c676ef6e7b78 req-516c7e38-59d1-4635-ad19-2a795099c245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.012 2 DEBUG oslo_concurrency.lockutils [req-c2770c4e-9b50-429f-aef1-c676ef6e7b78 req-516c7e38-59d1-4635-ad19-2a795099c245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.013 2 DEBUG oslo_concurrency.lockutils [req-c2770c4e-9b50-429f-aef1-c676ef6e7b78 req-516c7e38-59d1-4635-ad19-2a795099c245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.013 2 DEBUG nova.compute.manager [req-c2770c4e-9b50-429f-aef1-c676ef6e7b78 req-516c7e38-59d1-4635-ad19-2a795099c245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] No waiting events found dispatching network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.013 2 WARNING nova.compute.manager [req-c2770c4e-9b50-429f-aef1-c676ef6e7b78 req-516c7e38-59d1-4635-ad19-2a795099c245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received unexpected event network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 for instance with vm_state building and task_state spawning.
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.031 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.035 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.035 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.036 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.036 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.036 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.037 2 DEBUG nova.virt.libvirt.driver [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.041 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:51:57 compute-0 ovn_controller[152945]: 2025-10-11T08:51:57Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c7:33:2e 10.100.0.6
Oct 11 08:51:57 compute-0 ovn_controller[152945]: 2025-10-11T08:51:57Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:33:2e 10.100.0.6
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.149 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.253 2 INFO nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Took 15.03 seconds to spawn the instance on the hypervisor.
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.254 2 DEBUG nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Oct 11 08:51:57 compute-0 ceph-mon[74313]: pgmap v1407: 321 pgs: 321 active+clean; 181 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 163 op/s
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.372 2 INFO nova.compute.manager [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Took 16.87 seconds to build instance.
Oct 11 08:51:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Oct 11 08:51:57 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.448 2 DEBUG nova.storage.rbd_utils [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] cloning vms/1cecd438-75a3-4140-ad35-7439630b1be2_disk@44f23a098021423198ece7a5a6ca7e99 to images/1eaca523-07cd-4bcb-9efb-5df816d086ce clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.508 2 DEBUG oslo_concurrency.lockutils [None req-bcf9f562-5918-4b0f-95da-1a9d85e57363 fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.597 2 DEBUG nova.storage.rbd_utils [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] flattening images/1eaca523-07cd-4bcb-9efb-5df816d086ce flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.671 2 DEBUG nova.network.neutron [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Successfully updated port: bf26054f-43d5-471a-bca4-e7948b4409d0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.884 2 DEBUG nova.storage.rbd_utils [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] removing snapshot(44f23a098021423198ece7a5a6ca7e99) on rbd image(1cecd438-75a3-4140-ad35-7439630b1be2_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:51:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 264 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.1 MiB/s wr, 229 op/s
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.931 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.931 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquired lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.931 2 DEBUG nova.network.neutron [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:51:57 compute-0 nova_compute[260935]: 2025-10-11 08:51:57.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:51:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:51:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Oct 11 08:51:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Oct 11 08:51:58 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Oct 11 08:51:58 compute-0 ceph-mon[74313]: osdmap e187: 3 total, 3 up, 3 in
Oct 11 08:51:58 compute-0 nova_compute[260935]: 2025-10-11 08:51:58.422 2 DEBUG nova.storage.rbd_utils [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] creating snapshot(snap) on rbd image(1eaca523-07cd-4bcb-9efb-5df816d086ce) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:51:58 compute-0 nova_compute[260935]: 2025-10-11 08:51:58.483 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172703.4819636, 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:51:58 compute-0 nova_compute[260935]: 2025-10-11 08:51:58.484 2 INFO nova.compute.manager [-] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] VM Stopped (Lifecycle Event)
Oct 11 08:51:58 compute-0 nova_compute[260935]: 2025-10-11 08:51:58.659 2 DEBUG nova.compute.manager [None req-29c1ab44-e3a5-4342-bb6d-04a7486cdb71 - - - - - -] [instance: 88d6e4a6-9ed7-4d8a-8632-9282c2bb1a1a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:51:58 compute-0 nova_compute[260935]: 2025-10-11 08:51:58.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:51:58 compute-0 nova_compute[260935]: 2025-10-11 08:51:58.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:51:58 compute-0 nova_compute[260935]: 2025-10-11 08:51:58.940 2 DEBUG nova.network.neutron [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.361 2 DEBUG nova.compute.manager [req-4198e2d9-f34d-4aef-a085-d181880f1e6b req-7fda7ade-ccf2-452c-94f6-37f429d6f633 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-changed-bf26054f-43d5-471a-bca4-e7948b4409d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.362 2 DEBUG nova.compute.manager [req-4198e2d9-f34d-4aef-a085-d181880f1e6b req-7fda7ade-ccf2-452c-94f6-37f429d6f633 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Refreshing instance network info cache due to event network-changed-bf26054f-43d5-471a-bca4-e7948b4409d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.362 2 DEBUG oslo_concurrency.lockutils [req-4198e2d9-f34d-4aef-a085-d181880f1e6b req-7fda7ade-ccf2-452c-94f6-37f429d6f633 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:51:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Oct 11 08:51:59 compute-0 ceph-mon[74313]: pgmap v1409: 321 pgs: 321 active+clean; 264 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.1 MiB/s wr, 229 op/s
Oct 11 08:51:59 compute-0 ceph-mon[74313]: osdmap e188: 3 total, 3 up, 3 in
Oct 11 08:51:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Oct 11 08:51:59 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 1eaca523-07cd-4bcb-9efb-5df816d086ce could not be found.
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 1eaca523-07cd-4bcb-9efb-5df816d086ce
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver 
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver 
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 1eaca523-07cd-4bcb-9efb-5df816d086ce could not be found.
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.604 2 ERROR nova.virt.libvirt.driver 
Oct 11 08:51:59 compute-0 nova_compute[260935]: 2025-10-11 08:51:59.671 2 DEBUG nova.storage.rbd_utils [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] removing snapshot(snap) on rbd image(1eaca523-07cd-4bcb-9efb-5df816d086ce) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:51:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 264 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 8.5 MiB/s wr, 363 op/s
Oct 11 08:52:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 08:52:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 17K writes, 66K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 17K writes, 5508 syncs, 3.14 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 41.63 MB, 0.07 MB/s
                                           Interval WAL: 10K writes, 4092 syncs, 2.48 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 08:52:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Oct 11 08:52:00 compute-0 ceph-mon[74313]: osdmap e189: 3 total, 3 up, 3 in
Oct 11 08:52:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Oct 11 08:52:00 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Oct 11 08:52:00 compute-0 nova_compute[260935]: 2025-10-11 08:52:00.955 2 WARNING nova.compute.manager [None req-925046ff-c9d6-441c-a90b-369ac96f271a 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Image not found during snapshot: nova.exception.ImageNotFound: Image 1eaca523-07cd-4bcb-9efb-5df816d086ce could not be found.
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.386 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.389 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.390 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.391 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.391 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.391 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.391 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.392 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.393 2 INFO nova.compute.manager [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Terminating instance
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.395 2 DEBUG nova.compute.manager [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:52:01 compute-0 ceph-mon[74313]: pgmap v1412: 321 pgs: 321 active+clean; 264 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 8.5 MiB/s wr, 363 op/s
Oct 11 08:52:01 compute-0 ceph-mon[74313]: osdmap e190: 3 total, 3 up, 3 in
Oct 11 08:52:01 compute-0 kernel: tap30f88ebc-fb (unregistering): left promiscuous mode
Oct 11 08:52:01 compute-0 NetworkManager[44960]: <info>  [1760172721.4466] device (tap30f88ebc-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 ovn_controller[152945]: 2025-10-11T08:52:01Z|00299|binding|INFO|Releasing lport 30f88ebc-fba6-476c-8953-ad64358029bb from this chassis (sb_readonly=0)
Oct 11 08:52:01 compute-0 ovn_controller[152945]: 2025-10-11T08:52:01Z|00300|binding|INFO|Setting lport 30f88ebc-fba6-476c-8953-ad64358029bb down in Southbound
Oct 11 08:52:01 compute-0 ovn_controller[152945]: 2025-10-11T08:52:01Z|00301|binding|INFO|Removing iface tap30f88ebc-fb ovn-installed in OVS
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 kernel: tape389558a-ec (unregistering): left promiscuous mode
Oct 11 08:52:01 compute-0 NetworkManager[44960]: <info>  [1760172721.4825] device (tape389558a-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.505 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:a9:6f 10.100.0.192'], port_security=['fa:16:3e:84:a9:6f 10.100.0.192'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.192/24', 'neutron:device_id': '205bee5e-165a-468c-87d3-db44e03ace3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98ec9321-44f2-4cc2-a0a2-531598e629d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=310a160b-9164-4639-a339-2ba6c58b1709, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=30f88ebc-fba6-476c-8953-ad64358029bb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.507 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 30f88ebc-fba6-476c-8953-ad64358029bb in datapath 98ec9321-44f2-4cc2-a0a2-531598e629d2 unbound from our chassis
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.510 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 98ec9321-44f2-4cc2-a0a2-531598e629d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.511 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3f6853e8-8b44-4fea-93a8-ef8497486c4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.512 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2 namespace which is not needed anymore
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 ovn_controller[152945]: 2025-10-11T08:52:01Z|00302|binding|INFO|Releasing lport e389558a-ec7e-4610-aef7-2cfb76da6814 from this chassis (sb_readonly=0)
Oct 11 08:52:01 compute-0 ovn_controller[152945]: 2025-10-11T08:52:01Z|00303|binding|INFO|Setting lport e389558a-ec7e-4610-aef7-2cfb76da6814 down in Southbound
Oct 11 08:52:01 compute-0 ovn_controller[152945]: 2025-10-11T08:52:01Z|00304|binding|INFO|Removing iface tape389558a-ec ovn-installed in OVS
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000025.scope: Deactivated successfully.
Oct 11 08:52:01 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000025.scope: Consumed 5.669s CPU time.
Oct 11 08:52:01 compute-0 systemd-machined[215705]: Machine qemu-42-instance-00000025 terminated.
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 NetworkManager[44960]: <info>  [1760172721.6302] manager: (tape389558a-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/149)
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.653 2 INFO nova.virt.libvirt.driver [-] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Instance destroyed successfully.
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.654 2 DEBUG nova.objects.instance [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lazy-loading 'resources' on Instance uuid 205bee5e-165a-468c-87d3-db44e03ace3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.676 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:bd:db 10.100.1.173'], port_security=['fa:16:3e:67:bd:db 10.100.1.173'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.173/24', 'neutron:device_id': '205bee5e-165a-468c-87d3-db44e03ace3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f84de17ba2c5470fbc4c7fe809e7d7b7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a4e329d-ca16-4021-b76e-a221f4eabd06', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bdcaf9f8-32cf-46cd-8760-089ea8b1e312, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e389558a-ec7e-4610-aef7-2cfb76da6814) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.672 2 DEBUG nova.virt.libvirt.vif [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-318864694',display_name='tempest-ServersTestMultiNic-server-318864694',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-318864694',id=37,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-932cooa3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:57Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=205bee5e-165a-468c-87d3-db44e03ace3e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.673 2 DEBUG nova.network.os_vif_util [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "30f88ebc-fba6-476c-8953-ad64358029bb", "address": "fa:16:3e:84:a9:6f", "network": {"id": "98ec9321-44f2-4cc2-a0a2-531598e629d2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-550040071", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.192", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30f88ebc-fb", "ovs_interfaceid": "30f88ebc-fba6-476c-8953-ad64358029bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.674 2 DEBUG nova.network.os_vif_util [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:a9:6f,bridge_name='br-int',has_traffic_filtering=True,id=30f88ebc-fba6-476c-8953-ad64358029bb,network=Network(98ec9321-44f2-4cc2-a0a2-531598e629d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30f88ebc-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.674 2 DEBUG os_vif [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:a9:6f,bridge_name='br-int',has_traffic_filtering=True,id=30f88ebc-fba6-476c-8953-ad64358029bb,network=Network(98ec9321-44f2-4cc2-a0a2-531598e629d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30f88ebc-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.676 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30f88ebc-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:52:01 compute-0 neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2[305444]: [NOTICE]   (305457) : haproxy version is 2.8.14-c23fe91
Oct 11 08:52:01 compute-0 neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2[305444]: [NOTICE]   (305457) : path to executable is /usr/sbin/haproxy
Oct 11 08:52:01 compute-0 neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2[305444]: [WARNING]  (305457) : Exiting Master process...
Oct 11 08:52:01 compute-0 neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2[305444]: [ALERT]    (305457) : Current worker (305459) exited with code 143 (Terminated)
Oct 11 08:52:01 compute-0 neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2[305444]: [WARNING]  (305457) : All workers exited. Exiting... (0)
Oct 11 08:52:01 compute-0 systemd[1]: libpod-cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053.scope: Deactivated successfully.
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.691 2 INFO os_vif [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:a9:6f,bridge_name='br-int',has_traffic_filtering=True,id=30f88ebc-fba6-476c-8953-ad64358029bb,network=Network(98ec9321-44f2-4cc2-a0a2-531598e629d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30f88ebc-fb')
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.692 2 DEBUG nova.virt.libvirt.vif [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-318864694',display_name='tempest-ServersTestMultiNic-server-318864694',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-318864694',id=37,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f84de17ba2c5470fbc4c7fe809e7d7b7',ramdisk_id='',reservation_id='r-932cooa3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-65661968',owner_user_name='tempest-ServersTestMultiNic-65661968-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:57Z,user_data=None,user_id='fb27f51b5ffd414ab5ddbea179ada690',uuid=205bee5e-165a-468c-87d3-db44e03ace3e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.692 2 DEBUG nova.network.os_vif_util [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converting VIF {"id": "e389558a-ec7e-4610-aef7-2cfb76da6814", "address": "fa:16:3e:67:bd:db", "network": {"id": "7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2082644605", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f84de17ba2c5470fbc4c7fe809e7d7b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape389558a-ec", "ovs_interfaceid": "e389558a-ec7e-4610-aef7-2cfb76da6814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.693 2 DEBUG nova.network.os_vif_util [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:bd:db,bridge_name='br-int',has_traffic_filtering=True,id=e389558a-ec7e-4610-aef7-2cfb76da6814,network=Network(7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389558a-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.693 2 DEBUG os_vif [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:bd:db,bridge_name='br-int',has_traffic_filtering=True,id=e389558a-ec7e-4610-aef7-2cfb76da6814,network=Network(7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389558a-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.695 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape389558a-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 podman[305935]: 2025-10-11 08:52:01.700280593 +0000 UTC m=+0.078118600 container died cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.700 2 INFO os_vif [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:bd:db,bridge_name='br-int',has_traffic_filtering=True,id=e389558a-ec7e-4610-aef7-2cfb76da6814,network=Network(7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape389558a-ec')
Oct 11 08:52:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053-userdata-shm.mount: Deactivated successfully.
Oct 11 08:52:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-3306a34e27badc402a040902ece0ef4a86df5e07e08134f4bb9517ea1ccf4987-merged.mount: Deactivated successfully.
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.752 2 DEBUG nova.network.neutron [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Updating instance_info_cache with network_info: [{"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:01 compute-0 podman[305935]: 2025-10-11 08:52:01.754495127 +0000 UTC m=+0.132333104 container cleanup cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 08:52:01 compute-0 podman[305968]: 2025-10-11 08:52:01.760765004 +0000 UTC m=+0.079949012 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 08:52:01 compute-0 systemd[1]: libpod-conmon-cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053.scope: Deactivated successfully.
Oct 11 08:52:01 compute-0 podman[306025]: 2025-10-11 08:52:01.819539376 +0000 UTC m=+0.041055572 container remove cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.826 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c0bb66-6844-4c36-ae4d-868930823ab4]: (4, ('Sat Oct 11 08:52:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2 (cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053)\ncf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053\nSat Oct 11 08:52:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2 (cf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053)\ncf510c4b706e2f2d45925ce3f9c6c72673e0db26b6b8d1cd42dadad92afd6053\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.828 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[86ad350c-cd03-4de7-80d8-43cf612c397c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.830 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98ec9321-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 kernel: tap98ec9321-40: left promiscuous mode
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.859 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7a784092-e1f2-4d71-a577-84a9529610c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.872 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Releasing lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.873 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Instance network_info: |[{"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.873 2 DEBUG oslo_concurrency.lockutils [req-4198e2d9-f34d-4aef-a085-d181880f1e6b req-7fda7ade-ccf2-452c-94f6-37f429d6f633 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.873 2 DEBUG nova.network.neutron [req-4198e2d9-f34d-4aef-a085-d181880f1e6b req-7fda7ade-ccf2-452c-94f6-37f429d6f633 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Refreshing network info cache for port bf26054f-43d5-471a-bca4-e7948b4409d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.877 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Start _get_guest_xml network_info=[{"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.885 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[148f4e6e-caee-4e9f-a52d-11b6ae03ca31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.885 2 WARNING nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.895 2 DEBUG nova.virt.libvirt.host [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.896 2 DEBUG nova.virt.libvirt.host [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.900 2 DEBUG nova.virt.libvirt.host [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.900 2 DEBUG nova.virt.libvirt.host [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.900 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.900 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f9e100-8e41-4682-ba69-3ac8f0036531]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.900 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.901 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.901 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.901 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.901 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.901 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.902 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.902 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.902 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.902 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.902 2 DEBUG nova.virt.hardware [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:52:01 compute-0 nova_compute[260935]: 2025-10-11 08:52:01.904 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 264 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 5.0 MiB/s wr, 340 op/s
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.929 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7e42c255-9cbb-4aa4-b0a4-33a725327359]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455894, 'reachable_time': 39596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306040, 'error': None, 'target': 'ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.933 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-98ec9321-44f2-4cc2-a0a2-531598e629d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.933 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[60496f57-139d-4496-a156-fdf6b1a9a986]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.934 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e389558a-ec7e-4610-aef7-2cfb76da6814 in datapath 7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7 unbound from our chassis
Oct 11 08:52:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d98ec9321\x2d44f2\x2d4cc2\x2da0a2\x2d531598e629d2.mount: Deactivated successfully.
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.936 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.937 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[05a3d55a-3609-4787-95de-7b41c2c2378f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:01.938 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7 namespace which is not needed anymore
Oct 11 08:52:02 compute-0 neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7[305716]: [NOTICE]   (305720) : haproxy version is 2.8.14-c23fe91
Oct 11 08:52:02 compute-0 neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7[305716]: [NOTICE]   (305720) : path to executable is /usr/sbin/haproxy
Oct 11 08:52:02 compute-0 neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7[305716]: [ALERT]    (305720) : Current worker (305722) exited with code 143 (Terminated)
Oct 11 08:52:02 compute-0 neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7[305716]: [WARNING]  (305720) : All workers exited. Exiting... (0)
Oct 11 08:52:02 compute-0 systemd[1]: libpod-5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c.scope: Deactivated successfully.
Oct 11 08:52:02 compute-0 podman[306060]: 2025-10-11 08:52:02.1174189 +0000 UTC m=+0.065135663 container died 5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 08:52:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c-userdata-shm.mount: Deactivated successfully.
Oct 11 08:52:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a65300d208cb9158423dc57616ca08fb30633e13196b7c893ce5c6964e246ba-merged.mount: Deactivated successfully.
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.160 2 INFO nova.virt.libvirt.driver [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Deleting instance files /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e_del
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.162 2 INFO nova.virt.libvirt.driver [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Deletion of /var/lib/nova/instances/205bee5e-165a-468c-87d3-db44e03ace3e_del complete
Oct 11 08:52:02 compute-0 podman[306060]: 2025-10-11 08:52:02.164309316 +0000 UTC m=+0.112026069 container cleanup 5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:52:02 compute-0 systemd[1]: libpod-conmon-5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c.scope: Deactivated successfully.
Oct 11 08:52:02 compute-0 podman[306107]: 2025-10-11 08:52:02.259947051 +0000 UTC m=+0.061143010 container remove 5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 08:52:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.267 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc7ff873-2f64-4c15-81b2-ba52a3a57d04]: (4, ('Sat Oct 11 08:52:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7 (5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c)\n5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c\nSat Oct 11 08:52:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7 (5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c)\n5698044c3b3e464a40e793712c4eace612b32ffc3835eeba24ac108bb902523c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.269 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[470260d3-f1c9-4c70-88f1-8e49d96bac47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.270 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7aa0dec9-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:02 compute-0 kernel: tap7aa0dec9-50: left promiscuous mode
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.314 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[00f3afe4-9b34-409e-b936-121ac6470552]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.360 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[53c3a28e-416e-494c-a47a-d1d0cca4735b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.361 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[58a9b940-3a04-4af6-843c-47919976367a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.383 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[87ff5784-78a5-4e66-9fcc-7b9365a6c69b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456000, 'reachable_time': 31417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306121, 'error': None, 'target': 'ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.386 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7aa0dec9-58b4-4b16-b265-f3c8cbedd3a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:52:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:02.386 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[fd76124f-3623-4d65-bb07-a095da24d5bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:52:02 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4230801645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4230801645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.427 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.459 2 DEBUG nova.storage.rbd_utils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.463 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.499 2 INFO nova.compute.manager [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Took 1.10 seconds to destroy the instance on the hypervisor.
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.500 2 DEBUG oslo.service.loopingcall [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.500 2 DEBUG nova.compute.manager [-] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.500 2 DEBUG nova.network.neutron [-] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.716 2 DEBUG nova.compute.manager [req-5c1b0ae2-19c9-4eb8-a527-4e3d9d435caa req-78d3fd41-a67e-477a-9e1f-bec1d803bec2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-unplugged-30f88ebc-fba6-476c-8953-ad64358029bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.717 2 DEBUG oslo_concurrency.lockutils [req-5c1b0ae2-19c9-4eb8-a527-4e3d9d435caa req-78d3fd41-a67e-477a-9e1f-bec1d803bec2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.717 2 DEBUG oslo_concurrency.lockutils [req-5c1b0ae2-19c9-4eb8-a527-4e3d9d435caa req-78d3fd41-a67e-477a-9e1f-bec1d803bec2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.718 2 DEBUG oslo_concurrency.lockutils [req-5c1b0ae2-19c9-4eb8-a527-4e3d9d435caa req-78d3fd41-a67e-477a-9e1f-bec1d803bec2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.718 2 DEBUG nova.compute.manager [req-5c1b0ae2-19c9-4eb8-a527-4e3d9d435caa req-78d3fd41-a67e-477a-9e1f-bec1d803bec2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] No waiting events found dispatching network-vif-unplugged-30f88ebc-fba6-476c-8953-ad64358029bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.718 2 DEBUG nova.compute.manager [req-5c1b0ae2-19c9-4eb8-a527-4e3d9d435caa req-78d3fd41-a67e-477a-9e1f-bec1d803bec2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-unplugged-30f88ebc-fba6-476c-8953-ad64358029bb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:52:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d7aa0dec9\x2d58b4\x2d4b16\x2db265\x2df3c8cbedd3a7.mount: Deactivated successfully.
Oct 11 08:52:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:52:02 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/150167715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.912 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.914 2 DEBUG nova.virt.libvirt.vif [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-397044721',display_name='tempest-FloatingIPsAssociationTestJSON-server-397044721',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-397044721',id=39,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b23ff73ca27245eeb1b46f51326a5568',ramdisk_id='',reservation_id='r-5vhnbeg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1277815089',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1277815089-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:55Z,user_data=None,user_id='7f37cd0ba15f412b88192a506c5cec79',uuid=840d2a1b-48bb-42ec-aa12-067e1b65ea39,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.914 2 DEBUG nova.network.os_vif_util [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converting VIF {"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.914 2 DEBUG nova.network.os_vif_util [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=bf26054f-43d5-471a-bca4-e7948b4409d0,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf26054f-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:02 compute-0 nova_compute[260935]: 2025-10-11 08:52:02.915 2 DEBUG nova.objects.instance [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lazy-loading 'pci_devices' on Instance uuid 840d2a1b-48bb-42ec-aa12-067e1b65ea39 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.029 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:52:03 compute-0 nova_compute[260935]:   <uuid>840d2a1b-48bb-42ec-aa12-067e1b65ea39</uuid>
Oct 11 08:52:03 compute-0 nova_compute[260935]:   <name>instance-00000027</name>
Oct 11 08:52:03 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:52:03 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:52:03 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-397044721</nova:name>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:52:01</nova:creationTime>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:52:03 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:52:03 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:52:03 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:52:03 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:52:03 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:52:03 compute-0 nova_compute[260935]:         <nova:user uuid="7f37cd0ba15f412b88192a506c5cec79">tempest-FloatingIPsAssociationTestJSON-1277815089-project-member</nova:user>
Oct 11 08:52:03 compute-0 nova_compute[260935]:         <nova:project uuid="b23ff73ca27245eeb1b46f51326a5568">tempest-FloatingIPsAssociationTestJSON-1277815089</nova:project>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:52:03 compute-0 nova_compute[260935]:         <nova:port uuid="bf26054f-43d5-471a-bca4-e7948b4409d0">
Oct 11 08:52:03 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:52:03 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:52:03 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <system>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <entry name="serial">840d2a1b-48bb-42ec-aa12-067e1b65ea39</entry>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <entry name="uuid">840d2a1b-48bb-42ec-aa12-067e1b65ea39</entry>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     </system>
Oct 11 08:52:03 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:52:03 compute-0 nova_compute[260935]:   <os>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:   </os>
Oct 11 08:52:03 compute-0 nova_compute[260935]:   <features>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:   </features>
Oct 11 08:52:03 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:52:03 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:52:03 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk">
Oct 11 08:52:03 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       </source>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:52:03 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk.config">
Oct 11 08:52:03 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       </source>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:52:03 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:6d:9c:27"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <target dev="tapbf26054f-43"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39/console.log" append="off"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <video>
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     </video>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:52:03 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:52:03 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:52:03 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:52:03 compute-0 nova_compute[260935]: </domain>
Oct 11 08:52:03 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.029 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Preparing to wait for external event network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.044 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.044 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.044 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.045 2 DEBUG nova.virt.libvirt.vif [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:51:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-397044721',display_name='tempest-FloatingIPsAssociationTestJSON-server-397044721',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-397044721',id=39,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b23ff73ca27245eeb1b46f51326a5568',ramdisk_id='',reservation_id='r-5vhnbeg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1277815089',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1277815089-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:51:55Z,user_data=None,user_id='7f37cd0ba15f412b88192a506c5cec79',uuid=840d2a1b-48bb-42ec-aa12-067e1b65ea39,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.045 2 DEBUG nova.network.os_vif_util [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converting VIF {"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.046 2 DEBUG nova.network.os_vif_util [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=bf26054f-43d5-471a-bca4-e7948b4409d0,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf26054f-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.046 2 DEBUG os_vif [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=bf26054f-43d5-471a-bca4-e7948b4409d0,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf26054f-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf26054f-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.053 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf26054f-43, col_values=(('external_ids', {'iface-id': 'bf26054f-43d5-471a-bca4-e7948b4409d0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:9c:27', 'vm-uuid': '840d2a1b-48bb-42ec-aa12-067e1b65ea39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:03 compute-0 NetworkManager[44960]: <info>  [1760172723.0897] manager: (tapbf26054f-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.098 2 INFO os_vif [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=bf26054f-43d5-471a-bca4-e7948b4409d0,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf26054f-43')
Oct 11 08:52:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.334 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.339 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.339 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] No VIF found with MAC fa:16:3e:6d:9c:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.340 2 INFO nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Using config drive
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.371 2 DEBUG nova.storage.rbd_utils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:03 compute-0 ceph-mon[74313]: pgmap v1414: 321 pgs: 321 active+clean; 264 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 5.0 MiB/s wr, 340 op/s
Oct 11 08:52:03 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/150167715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:03 compute-0 nova_compute[260935]: 2025-10-11 08:52:03.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:52:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 213 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.8 MiB/s wr, 298 op/s
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.185 2 INFO nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Creating config drive at /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39/disk.config
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.195 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp84ibcmva execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.335 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp84ibcmva" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.366 2 DEBUG nova.storage.rbd_utils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] rbd image 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.370 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39/disk.config 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.405 2 DEBUG nova.compute.manager [req-76f744fe-b035-43d3-ae0a-f2d27ea2b93a req-5431233d-ca53-4603-8350-05abdede4eb2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-unplugged-e389558a-ec7e-4610-aef7-2cfb76da6814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.406 2 DEBUG oslo_concurrency.lockutils [req-76f744fe-b035-43d3-ae0a-f2d27ea2b93a req-5431233d-ca53-4603-8350-05abdede4eb2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.407 2 DEBUG oslo_concurrency.lockutils [req-76f744fe-b035-43d3-ae0a-f2d27ea2b93a req-5431233d-ca53-4603-8350-05abdede4eb2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.407 2 DEBUG oslo_concurrency.lockutils [req-76f744fe-b035-43d3-ae0a-f2d27ea2b93a req-5431233d-ca53-4603-8350-05abdede4eb2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.407 2 DEBUG nova.compute.manager [req-76f744fe-b035-43d3-ae0a-f2d27ea2b93a req-5431233d-ca53-4603-8350-05abdede4eb2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] No waiting events found dispatching network-vif-unplugged-e389558a-ec7e-4610-aef7-2cfb76da6814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.408 2 DEBUG nova.compute.manager [req-76f744fe-b035-43d3-ae0a-f2d27ea2b93a req-5431233d-ca53-4603-8350-05abdede4eb2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-unplugged-e389558a-ec7e-4610-aef7-2cfb76da6814 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.552 2 DEBUG oslo_concurrency.processutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39/disk.config 840d2a1b-48bb-42ec-aa12-067e1b65ea39_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.553 2 INFO nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Deleting local config drive /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39/disk.config because it was imported into RBD.
Oct 11 08:52:04 compute-0 NetworkManager[44960]: <info>  [1760172724.6233] manager: (tapbf26054f-43): new Tun device (/org/freedesktop/NetworkManager/Devices/151)
Oct 11 08:52:04 compute-0 kernel: tapbf26054f-43: entered promiscuous mode
Oct 11 08:52:04 compute-0 ovn_controller[152945]: 2025-10-11T08:52:04Z|00305|binding|INFO|Claiming lport bf26054f-43d5-471a-bca4-e7948b4409d0 for this chassis.
Oct 11 08:52:04 compute-0 ovn_controller[152945]: 2025-10-11T08:52:04Z|00306|binding|INFO|bf26054f-43d5-471a-bca4-e7948b4409d0: Claiming fa:16:3e:6d:9c:27 10.100.0.12
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.640 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:9c:27 10.100.0.12'], port_security=['fa:16:3e:6d:9c:27 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '840d2a1b-48bb-42ec-aa12-067e1b65ea39', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b23ff73ca27245eeb1b46f51326a5568', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c2e46c6-00fe-417f-b7cc-3c9acf40921b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4841cc19-0006-4d3d-bb24-b088854c8627, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=bf26054f-43d5-471a-bca4-e7948b4409d0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.642 162815 INFO neutron.agent.ovn.metadata.agent [-] Port bf26054f-43d5-471a-bca4-e7948b4409d0 in datapath 882d76f4-8cc7-44c7-ad90-277f4f92e044 bound to our chassis
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.652 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 882d76f4-8cc7-44c7-ad90-277f4f92e044
Oct 11 08:52:04 compute-0 ovn_controller[152945]: 2025-10-11T08:52:04Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:41:62 10.100.0.9
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:52:04 compute-0 ovn_controller[152945]: 2025-10-11T08:52:04Z|00307|binding|INFO|Setting lport bf26054f-43d5-471a-bca4-e7948b4409d0 ovn-installed in OVS
Oct 11 08:52:04 compute-0 ovn_controller[152945]: 2025-10-11T08:52:04Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:41:62 10.100.0.9
Oct 11 08:52:04 compute-0 ovn_controller[152945]: 2025-10-11T08:52:04Z|00308|binding|INFO|Setting lport bf26054f-43d5-471a-bca4-e7948b4409d0 up in Southbound
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014502496825551867 of space, bias 1.0, pg target 0.43507490476655597 quantized to 32 (current 32)
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:52:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.673 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[96eeddb1-e22b-4ce1-8ab5-91daaa2fa88a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:04 compute-0 systemd-machined[215705]: New machine qemu-43-instance-00000027.
Oct 11 08:52:04 compute-0 systemd-udevd[306241]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:52:04 compute-0 systemd[1]: Started Virtual Machine qemu-43-instance-00000027.
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.700 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "1cecd438-75a3-4140-ad35-7439630b1be2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.701 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.701 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.702 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.703 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.706 2 INFO nova.compute.manager [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Terminating instance
Oct 11 08:52:04 compute-0 NetworkManager[44960]: <info>  [1760172724.7110] device (tapbf26054f-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:52:04 compute-0 NetworkManager[44960]: <info>  [1760172724.7140] device (tapbf26054f-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.717 2 DEBUG nova.compute.manager [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.718 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.735 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4f28b403-13d7-4601-9380-b8cf96ebe79c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.740 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[de48f0ca-0f9a-4c03-8ac3-e5266fd2e2a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:04 compute-0 kernel: tap7290684c-38 (unregistering): left promiscuous mode
Oct 11 08:52:04 compute-0 NetworkManager[44960]: <info>  [1760172724.7806] device (tap7290684c-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.789 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[69a0b73f-7624-44dd-ae50-8bb42d281053]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:04 compute-0 ovn_controller[152945]: 2025-10-11T08:52:04Z|00309|binding|INFO|Releasing lport 7290684c-3823-4e1e-84f5-826316aa4548 from this chassis (sb_readonly=0)
Oct 11 08:52:04 compute-0 ovn_controller[152945]: 2025-10-11T08:52:04Z|00310|binding|INFO|Setting lport 7290684c-3823-4e1e-84f5-826316aa4548 down in Southbound
Oct 11 08:52:04 compute-0 ovn_controller[152945]: 2025-10-11T08:52:04Z|00311|binding|INFO|Removing iface tap7290684c-38 ovn-installed in OVS
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.807 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[da3d92fc-0526-4a4c-9907-57317eafdfeb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap882d76f4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:0a:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454897, 'reachable_time': 19017, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306256, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.825 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec85d09-9a06-48d2-8bdd-40c7f19a79bd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap882d76f4-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454914, 'tstamp': 454914}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306257, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap882d76f4-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454918, 'tstamp': 454918}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306257, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.827 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap882d76f4-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.833 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap882d76f4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.835 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.836 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap882d76f4-80, col_values=(('external_ids', {'iface-id': 'abe57b96-aedd-418b-be8f-7ad9fb9218ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.837 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.842 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:41:62 10.100.0.9'], port_security=['fa:16:3e:7f:41:62 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1cecd438-75a3-4140-ad35-7439630b1be2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bac3530-993f-420e-8692-0b14a331d756', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8c7604961214c6d9d49657535d799a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b92be55a-f97b-4770-99c4-ff8e122b8ad7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=956bef08-638b-4ce0-9cc4-80a6cc4f1331, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=7290684c-3823-4e1e-84f5-826316aa4548) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.844 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 7290684c-3823-4e1e-84f5-826316aa4548 in datapath 9bac3530-993f-420e-8692-0b14a331d756 unbound from our chassis
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.848 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bac3530-993f-420e-8692-0b14a331d756, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.849 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[21a888da-a24c-471d-81c8-768288aa39de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:04.850 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 namespace which is not needed anymore
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.855 2 DEBUG nova.compute.manager [req-303c1880-bb39-46a7-8e76-3d89fbf37874 req-73d22e08-a6f8-4194-bc60-972f1e93a7b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.855 2 DEBUG oslo_concurrency.lockutils [req-303c1880-bb39-46a7-8e76-3d89fbf37874 req-73d22e08-a6f8-4194-bc60-972f1e93a7b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.855 2 DEBUG oslo_concurrency.lockutils [req-303c1880-bb39-46a7-8e76-3d89fbf37874 req-73d22e08-a6f8-4194-bc60-972f1e93a7b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.855 2 DEBUG oslo_concurrency.lockutils [req-303c1880-bb39-46a7-8e76-3d89fbf37874 req-73d22e08-a6f8-4194-bc60-972f1e93a7b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.856 2 DEBUG nova.compute.manager [req-303c1880-bb39-46a7-8e76-3d89fbf37874 req-73d22e08-a6f8-4194-bc60-972f1e93a7b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] No waiting events found dispatching network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:04 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000026.scope: Deactivated successfully.
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.856 2 WARNING nova.compute.manager [req-303c1880-bb39-46a7-8e76-3d89fbf37874 req-73d22e08-a6f8-4194-bc60-972f1e93a7b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received unexpected event network-vif-plugged-30f88ebc-fba6-476c-8953-ad64358029bb for instance with vm_state active and task_state deleting.
Oct 11 08:52:04 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000026.scope: Consumed 12.027s CPU time.
Oct 11 08:52:04 compute-0 systemd-machined[215705]: Machine qemu-41-instance-00000026 terminated.
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.962 2 DEBUG nova.network.neutron [req-4198e2d9-f34d-4aef-a085-d181880f1e6b req-7fda7ade-ccf2-452c-94f6-37f429d6f633 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Updated VIF entry in instance network info cache for port bf26054f-43d5-471a-bca4-e7948b4409d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.962 2 DEBUG nova.network.neutron [req-4198e2d9-f34d-4aef-a085-d181880f1e6b req-7fda7ade-ccf2-452c-94f6-37f429d6f633 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Updating instance_info_cache with network_info: [{"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.971 2 INFO nova.virt.libvirt.driver [-] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Instance destroyed successfully.
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.971 2 DEBUG nova.objects.instance [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lazy-loading 'resources' on Instance uuid 1cecd438-75a3-4140-ad35-7439630b1be2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.980 2 DEBUG oslo_concurrency.lockutils [req-4198e2d9-f34d-4aef-a085-d181880f1e6b req-7fda7ade-ccf2-452c-94f6-37f429d6f633 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.992 2 DEBUG nova.virt.libvirt.vif [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-45605011',display_name='tempest-ImagesTestJSON-server-45605011',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-45605011',id=38,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f8c7604961214c6d9d49657535d799a5',ramdisk_id='',reservation_id='r-bbdhfx2h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owne
r_project_name='tempest-ImagesTestJSON-694493184',owner_user_name='tempest-ImagesTestJSON-694493184-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:52:00Z,user_data=None,user_id='1bab12893b9d49aabcb5ca19c9b951de',uuid=1cecd438-75a3-4140-ad35-7439630b1be2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.992 2 DEBUG nova.network.os_vif_util [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converting VIF {"id": "7290684c-3823-4e1e-84f5-826316aa4548", "address": "fa:16:3e:7f:41:62", "network": {"id": "9bac3530-993f-420e-8692-0b14a331d756", "bridge": "br-int", "label": "tempest-ImagesTestJSON-942705627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8c7604961214c6d9d49657535d799a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7290684c-38", "ovs_interfaceid": "7290684c-3823-4e1e-84f5-826316aa4548", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.993 2 DEBUG nova.network.os_vif_util [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:41:62,bridge_name='br-int',has_traffic_filtering=True,id=7290684c-3823-4e1e-84f5-826316aa4548,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7290684c-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.993 2 DEBUG os_vif [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:41:62,bridge_name='br-int',has_traffic_filtering=True,id=7290684c-3823-4e1e-84f5-826316aa4548,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7290684c-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.995 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7290684c-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:04 compute-0 nova_compute[260935]: 2025-10-11 08:52:04.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:52:05 compute-0 nova_compute[260935]: 2025-10-11 08:52:05.001 2 INFO os_vif [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:41:62,bridge_name='br-int',has_traffic_filtering=True,id=7290684c-3823-4e1e-84f5-826316aa4548,network=Network(9bac3530-993f-420e-8692-0b14a331d756),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7290684c-38')
Oct 11 08:52:05 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[305191]: [NOTICE]   (305217) : haproxy version is 2.8.14-c23fe91
Oct 11 08:52:05 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[305191]: [NOTICE]   (305217) : path to executable is /usr/sbin/haproxy
Oct 11 08:52:05 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[305191]: [WARNING]  (305217) : Exiting Master process...
Oct 11 08:52:05 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[305191]: [ALERT]    (305217) : Current worker (305224) exited with code 143 (Terminated)
Oct 11 08:52:05 compute-0 neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756[305191]: [WARNING]  (305217) : All workers exited. Exiting... (0)
Oct 11 08:52:05 compute-0 systemd[1]: libpod-f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e.scope: Deactivated successfully.
Oct 11 08:52:05 compute-0 podman[306286]: 2025-10-11 08:52:05.068440285 +0000 UTC m=+0.073365486 container died f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 08:52:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e-userdata-shm.mount: Deactivated successfully.
Oct 11 08:52:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ddfd5ce54df1d1aac4c1e654420057838a802df3d512bb77cf4087a954ac2fb-merged.mount: Deactivated successfully.
Oct 11 08:52:05 compute-0 podman[306286]: 2025-10-11 08:52:05.128781701 +0000 UTC m=+0.133706902 container cleanup f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 08:52:05 compute-0 systemd[1]: libpod-conmon-f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e.scope: Deactivated successfully.
Oct 11 08:52:05 compute-0 podman[306332]: 2025-10-11 08:52:05.229231462 +0000 UTC m=+0.061527161 container remove f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:52:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.237 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[01b723ee-f8bb-46dc-9088-8cada6d22852]: (4, ('Sat Oct 11 08:52:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e)\nf81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e\nSat Oct 11 08:52:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 (f81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e)\nf81c47c225c7ddf12f9c961bf177d7a7c127519972d558801a670cad60b3507e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.240 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9e9065fd-8d8d-40b1-8d18-c9d613ed2027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.241 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bac3530-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:05 compute-0 kernel: tap9bac3530-90: left promiscuous mode
Oct 11 08:52:05 compute-0 nova_compute[260935]: 2025-10-11 08:52:05.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:05 compute-0 nova_compute[260935]: 2025-10-11 08:52:05.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.276 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[922e5c0a-ef0e-433c-93b9-71741641dae0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.303 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6459e7d5-ef14-48d9-a6f1-b5b639878d99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.305 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[22df6530-ccc2-41c9-8824-9f577e9315a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.341 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4d1e17-613b-4866-bf29-2c5d374dea5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455661, 'reachable_time': 18743, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306348, 'error': None, 'target': 'ovnmeta-9bac3530-993f-420e-8692-0b14a331d756', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d9bac3530\x2d993f\x2d420e\x2d8692\x2d0b14a331d756.mount: Deactivated successfully.
Oct 11 08:52:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.351 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bac3530-993f-420e-8692-0b14a331d756 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:52:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:05.351 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d9490544-4044-4e51-893a-aa62ea5c3a79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:05 compute-0 ceph-mon[74313]: pgmap v1415: 321 pgs: 321 active+clean; 213 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.8 MiB/s wr, 298 op/s
Oct 11 08:52:05 compute-0 nova_compute[260935]: 2025-10-11 08:52:05.470 2 INFO nova.virt.libvirt.driver [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Deleting instance files /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2_del
Oct 11 08:52:05 compute-0 nova_compute[260935]: 2025-10-11 08:52:05.470 2 INFO nova.virt.libvirt.driver [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Deletion of /var/lib/nova/instances/1cecd438-75a3-4140-ad35-7439630b1be2_del complete
Oct 11 08:52:05 compute-0 nova_compute[260935]: 2025-10-11 08:52:05.587 2 INFO nova.compute.manager [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Took 0.87 seconds to destroy the instance on the hypervisor.
Oct 11 08:52:05 compute-0 nova_compute[260935]: 2025-10-11 08:52:05.588 2 DEBUG oslo.service.loopingcall [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:52:05 compute-0 nova_compute[260935]: 2025-10-11 08:52:05.588 2 DEBUG nova.compute.manager [-] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:52:05 compute-0 nova_compute[260935]: 2025-10-11 08:52:05.588 2 DEBUG nova.network.neutron [-] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:52:05 compute-0 nova_compute[260935]: 2025-10-11 08:52:05.664 2 DEBUG nova.network.neutron [-] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:05 compute-0 nova_compute[260935]: 2025-10-11 08:52:05.708 2 INFO nova.compute.manager [-] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Took 3.21 seconds to deallocate network for instance.
Oct 11 08:52:05 compute-0 nova_compute[260935]: 2025-10-11 08:52:05.853 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:05 compute-0 nova_compute[260935]: 2025-10-11 08:52:05.854 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 213 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.3 MiB/s wr, 238 op/s
Oct 11 08:52:05 compute-0 nova_compute[260935]: 2025-10-11 08:52:05.984 2 DEBUG oslo_concurrency.processutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.245 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172726.2438536, 840d2a1b-48bb-42ec-aa12-067e1b65ea39 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.246 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] VM Started (Lifecycle Event)
Oct 11 08:52:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 08:52:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 13K writes, 55K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 4294 syncs, 3.25 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7888 writes, 30K keys, 7888 commit groups, 1.0 writes per commit group, ingest: 31.66 MB, 0.05 MB/s
                                           Interval WAL: 7888 writes, 3237 syncs, 2.44 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.309 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.314 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172726.2441976, 840d2a1b-48bb-42ec-aa12-067e1b65ea39 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.315 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] VM Paused (Lifecycle Event)
Oct 11 08:52:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:52:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/292514972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.433 2 DEBUG oslo_concurrency.processutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.440 2 DEBUG nova.compute.provider_tree [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:52:06 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/292514972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.466 2 DEBUG nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.467 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.468 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.468 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.468 2 DEBUG nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] No waiting events found dispatching network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.469 2 WARNING nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received unexpected event network-vif-plugged-e389558a-ec7e-4610-aef7-2cfb76da6814 for instance with vm_state deleted and task_state None.
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.469 2 DEBUG nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-deleted-30f88ebc-fba6-476c-8953-ad64358029bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.469 2 DEBUG nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.469 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.470 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.470 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.471 2 DEBUG nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Processing event network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.471 2 DEBUG nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Received event network-vif-deleted-e389558a-ec7e-4610-aef7-2cfb76da6814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.471 2 DEBUG nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.471 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.472 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.472 2 DEBUG oslo_concurrency.lockutils [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.473 2 DEBUG nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] No waiting events found dispatching network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.474 2 WARNING nova.compute.manager [req-6be89823-bb65-43eb-bfbc-bfe3ef11aac0 req-01be8c57-0f4f-41ce-b69e-2983458a3e46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received unexpected event network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 for instance with vm_state building and task_state spawning.
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.475 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.481 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.485 2 DEBUG nova.scheduler.client.report [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.491 2 DEBUG nova.network.neutron [-] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.499 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.505 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172726.4821656, 840d2a1b-48bb-42ec-aa12-067e1b65ea39 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.505 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] VM Resumed (Lifecycle Event)
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.509 2 INFO nova.virt.libvirt.driver [-] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Instance spawned successfully.
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.509 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.636 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.706 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.707 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.708 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.709 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.710 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.710 2 DEBUG nova.virt.libvirt.driver [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.718 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.720 2 INFO nova.compute.manager [-] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Took 1.13 seconds to deallocate network for instance.
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.722 2 INFO nova.scheduler.client.report [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Deleted allocations for instance 205bee5e-165a-468c-87d3-db44e03ace3e
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.733 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.792 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.921 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.921 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.927 2 INFO nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Took 11.74 seconds to spawn the instance on the hypervisor.
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.927 2 DEBUG nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.937 2 DEBUG oslo_concurrency.lockutils [None req-edf69d64-0447-4ac0-849a-abd5e8df75fd fb27f51b5ffd414ab5ddbea179ada690 f84de17ba2c5470fbc4c7fe809e7d7b7 - - default default] Lock "205bee5e-165a-468c-87d3-db44e03ace3e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.959 2 DEBUG nova.compute.manager [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received event network-vif-unplugged-7290684c-3823-4e1e-84f5-826316aa4548 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.959 2 DEBUG oslo_concurrency.lockutils [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.960 2 DEBUG oslo_concurrency.lockutils [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.960 2 DEBUG oslo_concurrency.lockutils [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.960 2 DEBUG nova.compute.manager [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] No waiting events found dispatching network-vif-unplugged-7290684c-3823-4e1e-84f5-826316aa4548 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.961 2 WARNING nova.compute.manager [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received unexpected event network-vif-unplugged-7290684c-3823-4e1e-84f5-826316aa4548 for instance with vm_state deleted and task_state None.
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.961 2 DEBUG nova.compute.manager [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received event network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.961 2 DEBUG oslo_concurrency.lockutils [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.961 2 DEBUG oslo_concurrency.lockutils [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.962 2 DEBUG oslo_concurrency.lockutils [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.962 2 DEBUG nova.compute.manager [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] No waiting events found dispatching network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.962 2 WARNING nova.compute.manager [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received unexpected event network-vif-plugged-7290684c-3823-4e1e-84f5-826316aa4548 for instance with vm_state deleted and task_state None.
Oct 11 08:52:06 compute-0 nova_compute[260935]: 2025-10-11 08:52:06.963 2 DEBUG nova.compute.manager [req-19251d93-2e0e-4464-b895-b7a377953982 req-50e40549-3375-4905-afe0-451944d60629 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Received event network-vif-deleted-7290684c-3823-4e1e-84f5-826316aa4548 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:07 compute-0 nova_compute[260935]: 2025-10-11 08:52:07.000 2 INFO nova.compute.manager [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Took 13.00 seconds to build instance.
Oct 11 08:52:07 compute-0 nova_compute[260935]: 2025-10-11 08:52:07.021 2 DEBUG oslo_concurrency.lockutils [None req-280c3641-40c8-40d9-abdd-b0b9acaa9cd9 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:07 compute-0 nova_compute[260935]: 2025-10-11 08:52:07.031 2 DEBUG oslo_concurrency.processutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:07 compute-0 nova_compute[260935]: 2025-10-11 08:52:07.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:07 compute-0 ceph-mon[74313]: pgmap v1416: 321 pgs: 321 active+clean; 213 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.3 MiB/s wr, 238 op/s
Oct 11 08:52:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:52:07 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/85545623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:07 compute-0 nova_compute[260935]: 2025-10-11 08:52:07.488 2 DEBUG oslo_concurrency.processutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:07 compute-0 nova_compute[260935]: 2025-10-11 08:52:07.494 2 DEBUG nova.compute.provider_tree [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:52:07 compute-0 nova_compute[260935]: 2025-10-11 08:52:07.515 2 DEBUG nova.scheduler.client.report [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:52:07 compute-0 nova_compute[260935]: 2025-10-11 08:52:07.538 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:07 compute-0 nova_compute[260935]: 2025-10-11 08:52:07.573 2 INFO nova.scheduler.client.report [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Deleted allocations for instance 1cecd438-75a3-4140-ad35-7439630b1be2
Oct 11 08:52:07 compute-0 nova_compute[260935]: 2025-10-11 08:52:07.680 2 DEBUG oslo_concurrency.lockutils [None req-fe3e845c-c74f-4725-ac43-be05e8dd5ef7 1bab12893b9d49aabcb5ca19c9b951de f8c7604961214c6d9d49657535d799a5 - - default default] Lock "1cecd438-75a3-4140-ad35-7439630b1be2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:07 compute-0 nova_compute[260935]: 2025-10-11 08:52:07.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:52:07 compute-0 podman[306435]: 2025-10-11 08:52:07.790669049 +0000 UTC m=+0.084442459 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:52:07 compute-0 podman[306436]: 2025-10-11 08:52:07.815416329 +0000 UTC m=+0.112548444 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct 11 08:52:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 167 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.9 MiB/s wr, 362 op/s
Oct 11 08:52:07 compute-0 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 08:52:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:52:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Oct 11 08:52:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Oct 11 08:52:08 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Oct 11 08:52:08 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/85545623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:08 compute-0 ceph-mon[74313]: osdmap e191: 3 total, 3 up, 3 in
Oct 11 08:52:08 compute-0 nova_compute[260935]: 2025-10-11 08:52:08.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:52:08 compute-0 nova_compute[260935]: 2025-10-11 08:52:08.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:52:08 compute-0 nova_compute[260935]: 2025-10-11 08:52:08.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:52:08 compute-0 nova_compute[260935]: 2025-10-11 08:52:08.996 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:52:08 compute-0 nova_compute[260935]: 2025-10-11 08:52:08.996 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:52:08 compute-0 nova_compute[260935]: 2025-10-11 08:52:08.997 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 08:52:08 compute-0 nova_compute[260935]: 2025-10-11 08:52:08.997 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 872b1c1d-bc87-4123-a599-4d64b89018aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:09 compute-0 ceph-mon[74313]: pgmap v1417: 321 pgs: 321 active+clean; 167 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.9 MiB/s wr, 362 op/s
Oct 11 08:52:09 compute-0 ovn_controller[152945]: 2025-10-11T08:52:09Z|00312|binding|INFO|Releasing lport abe57b96-aedd-418b-be8f-7ad9fb9218ac from this chassis (sb_readonly=0)
Oct 11 08:52:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 167 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.4 MiB/s wr, 324 op/s
Oct 11 08:52:09 compute-0 nova_compute[260935]: 2025-10-11 08:52:09.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:09 compute-0 nova_compute[260935]: 2025-10-11 08:52:09.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:09 compute-0 NetworkManager[44960]: <info>  [1760172729.9749] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Oct 11 08:52:09 compute-0 NetworkManager[44960]: <info>  [1760172729.9770] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Oct 11 08:52:09 compute-0 nova_compute[260935]: 2025-10-11 08:52:09.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:10 compute-0 nova_compute[260935]: 2025-10-11 08:52:10.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:10 compute-0 ovn_controller[152945]: 2025-10-11T08:52:10Z|00313|binding|INFO|Releasing lport abe57b96-aedd-418b-be8f-7ad9fb9218ac from this chassis (sb_readonly=0)
Oct 11 08:52:10 compute-0 nova_compute[260935]: 2025-10-11 08:52:10.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:10 compute-0 nova_compute[260935]: 2025-10-11 08:52:10.568 2 DEBUG nova.compute.manager [req-e9ccf899-764f-4317-b0b0-1722872270ee req-fdd9567c-0e59-4fb7-bb8f-8fa17a789e18 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-changed-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:10 compute-0 nova_compute[260935]: 2025-10-11 08:52:10.569 2 DEBUG nova.compute.manager [req-e9ccf899-764f-4317-b0b0-1722872270ee req-fdd9567c-0e59-4fb7-bb8f-8fa17a789e18 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing instance network info cache due to event network-changed-108be440-11bf-41f6-a628-86fbac597b7d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:52:10 compute-0 nova_compute[260935]: 2025-10-11 08:52:10.569 2 DEBUG oslo_concurrency.lockutils [req-e9ccf899-764f-4317-b0b0-1722872270ee req-fdd9567c-0e59-4fb7-bb8f-8fa17a789e18 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:52:11 compute-0 ceph-mon[74313]: pgmap v1419: 321 pgs: 321 active+clean; 167 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.4 MiB/s wr, 324 op/s
Oct 11 08:52:11 compute-0 nova_compute[260935]: 2025-10-11 08:52:11.544 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updating instance_info_cache with network_info: [{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:11 compute-0 nova_compute[260935]: 2025-10-11 08:52:11.731 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:52:11 compute-0 nova_compute[260935]: 2025-10-11 08:52:11.732 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 08:52:11 compute-0 nova_compute[260935]: 2025-10-11 08:52:11.732 2 DEBUG oslo_concurrency.lockutils [req-e9ccf899-764f-4317-b0b0-1722872270ee req-fdd9567c-0e59-4fb7-bb8f-8fa17a789e18 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:52:11 compute-0 nova_compute[260935]: 2025-10-11 08:52:11.733 2 DEBUG nova.network.neutron [req-e9ccf899-764f-4317-b0b0-1722872270ee req-fdd9567c-0e59-4fb7-bb8f-8fa17a789e18 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:52:11 compute-0 nova_compute[260935]: 2025-10-11 08:52:11.734 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:52:11 compute-0 nova_compute[260935]: 2025-10-11 08:52:11.791 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:11 compute-0 nova_compute[260935]: 2025-10-11 08:52:11.791 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:11 compute-0 nova_compute[260935]: 2025-10-11 08:52:11.791 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:11 compute-0 nova_compute[260935]: 2025-10-11 08:52:11.792 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:52:11 compute-0 nova_compute[260935]: 2025-10-11 08:52:11.792 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1420: 321 pgs: 321 active+clean; 167 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.2 MiB/s wr, 307 op/s
Oct 11 08:52:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:52:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/654906437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:12 compute-0 nova_compute[260935]: 2025-10-11 08:52:12.276 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:12 compute-0 nova_compute[260935]: 2025-10-11 08:52:12.402 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:52:12 compute-0 nova_compute[260935]: 2025-10-11 08:52:12.403 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:52:12 compute-0 nova_compute[260935]: 2025-10-11 08:52:12.409 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:52:12 compute-0 nova_compute[260935]: 2025-10-11 08:52:12.409 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:52:12 compute-0 nova_compute[260935]: 2025-10-11 08:52:12.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:12 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/654906437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:12 compute-0 nova_compute[260935]: 2025-10-11 08:52:12.689 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:52:12 compute-0 nova_compute[260935]: 2025-10-11 08:52:12.692 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3907MB free_disk=59.921897888183594GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:52:12 compute-0 nova_compute[260935]: 2025-10-11 08:52:12.692 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:12 compute-0 nova_compute[260935]: 2025-10-11 08:52:12.693 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:12 compute-0 nova_compute[260935]: 2025-10-11 08:52:12.779 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 872b1c1d-bc87-4123-a599-4d64b89018aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:52:12 compute-0 nova_compute[260935]: 2025-10-11 08:52:12.780 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 840d2a1b-48bb-42ec-aa12-067e1b65ea39 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:52:12 compute-0 nova_compute[260935]: 2025-10-11 08:52:12.781 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:52:12 compute-0 nova_compute[260935]: 2025-10-11 08:52:12.781 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:52:12 compute-0 nova_compute[260935]: 2025-10-11 08:52:12.855 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:52:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:52:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/936869615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:13 compute-0 nova_compute[260935]: 2025-10-11 08:52:13.297 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:13 compute-0 nova_compute[260935]: 2025-10-11 08:52:13.305 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:52:13 compute-0 nova_compute[260935]: 2025-10-11 08:52:13.328 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:52:13 compute-0 nova_compute[260935]: 2025-10-11 08:52:13.363 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:52:13 compute-0 nova_compute[260935]: 2025-10-11 08:52:13.364 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:13 compute-0 ceph-mon[74313]: pgmap v1420: 321 pgs: 321 active+clean; 167 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.2 MiB/s wr, 307 op/s
Oct 11 08:52:13 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/936869615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 174 op/s
Oct 11 08:52:14 compute-0 ceph-mon[74313]: pgmap v1421: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 174 op/s
Oct 11 08:52:14 compute-0 nova_compute[260935]: 2025-10-11 08:52:14.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:15 compute-0 nova_compute[260935]: 2025-10-11 08:52:15.049 2 DEBUG nova.network.neutron [req-e9ccf899-764f-4317-b0b0-1722872270ee req-fdd9567c-0e59-4fb7-bb8f-8fa17a789e18 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updated VIF entry in instance network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:52:15 compute-0 nova_compute[260935]: 2025-10-11 08:52:15.050 2 DEBUG nova.network.neutron [req-e9ccf899-764f-4317-b0b0-1722872270ee req-fdd9567c-0e59-4fb7-bb8f-8fa17a789e18 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updating instance_info_cache with network_info: [{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:15 compute-0 nova_compute[260935]: 2025-10-11 08:52:15.086 2 DEBUG oslo_concurrency.lockutils [req-e9ccf899-764f-4317-b0b0-1722872270ee req-fdd9567c-0e59-4fb7-bb8f-8fa17a789e18 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:52:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:15.185 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:15.186 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:15.187 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:15 compute-0 ovn_controller[152945]: 2025-10-11T08:52:15Z|00314|binding|INFO|Releasing lport abe57b96-aedd-418b-be8f-7ad9fb9218ac from this chassis (sb_readonly=0)
Oct 11 08:52:15 compute-0 ovn_controller[152945]: 2025-10-11T08:52:15Z|00315|binding|INFO|Releasing lport abe57b96-aedd-418b-be8f-7ad9fb9218ac from this chassis (sb_readonly=0)
Oct 11 08:52:15 compute-0 nova_compute[260935]: 2025-10-11 08:52:15.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:15 compute-0 nova_compute[260935]: 2025-10-11 08:52:15.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:15 compute-0 nova_compute[260935]: 2025-10-11 08:52:15.754 2 DEBUG nova.compute.manager [req-d12a0dfc-10a2-4bda-a9f5-d4da12e3e3bd req-052c383d-e6d4-4643-b36e-22ddbc5ba1f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-changed-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:15 compute-0 nova_compute[260935]: 2025-10-11 08:52:15.755 2 DEBUG nova.compute.manager [req-d12a0dfc-10a2-4bda-a9f5-d4da12e3e3bd req-052c383d-e6d4-4643-b36e-22ddbc5ba1f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing instance network info cache due to event network-changed-108be440-11bf-41f6-a628-86fbac597b7d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:52:15 compute-0 nova_compute[260935]: 2025-10-11 08:52:15.756 2 DEBUG oslo_concurrency.lockutils [req-d12a0dfc-10a2-4bda-a9f5-d4da12e3e3bd req-052c383d-e6d4-4643-b36e-22ddbc5ba1f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:52:15 compute-0 nova_compute[260935]: 2025-10-11 08:52:15.756 2 DEBUG oslo_concurrency.lockutils [req-d12a0dfc-10a2-4bda-a9f5-d4da12e3e3bd req-052c383d-e6d4-4643-b36e-22ddbc5ba1f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:52:15 compute-0 nova_compute[260935]: 2025-10-11 08:52:15.756 2 DEBUG nova.network.neutron [req-d12a0dfc-10a2-4bda-a9f5-d4da12e3e3bd req-052c383d-e6d4-4643-b36e-22ddbc5ba1f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:52:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 174 op/s
Oct 11 08:52:16 compute-0 nova_compute[260935]: 2025-10-11 08:52:16.648 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172721.6469274, 205bee5e-165a-468c-87d3-db44e03ace3e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:16 compute-0 nova_compute[260935]: 2025-10-11 08:52:16.649 2 INFO nova.compute.manager [-] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] VM Stopped (Lifecycle Event)
Oct 11 08:52:16 compute-0 nova_compute[260935]: 2025-10-11 08:52:16.671 2 DEBUG nova.compute.manager [None req-b67f88a5-a729-4920-b216-0db5d9c8f882 - - - - - -] [instance: 205bee5e-165a-468c-87d3-db44e03ace3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:16 compute-0 ovn_controller[152945]: 2025-10-11T08:52:16Z|00316|binding|INFO|Releasing lport abe57b96-aedd-418b-be8f-7ad9fb9218ac from this chassis (sb_readonly=0)
Oct 11 08:52:17 compute-0 nova_compute[260935]: 2025-10-11 08:52:17.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:17 compute-0 ceph-mon[74313]: pgmap v1422: 321 pgs: 321 active+clean; 167 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 174 op/s
Oct 11 08:52:17 compute-0 nova_compute[260935]: 2025-10-11 08:52:17.009 2 DEBUG nova.network.neutron [req-d12a0dfc-10a2-4bda-a9f5-d4da12e3e3bd req-052c383d-e6d4-4643-b36e-22ddbc5ba1f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updated VIF entry in instance network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:52:17 compute-0 nova_compute[260935]: 2025-10-11 08:52:17.010 2 DEBUG nova.network.neutron [req-d12a0dfc-10a2-4bda-a9f5-d4da12e3e3bd req-052c383d-e6d4-4643-b36e-22ddbc5ba1f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updating instance_info_cache with network_info: [{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:17 compute-0 nova_compute[260935]: 2025-10-11 08:52:17.039 2 DEBUG oslo_concurrency.lockutils [req-d12a0dfc-10a2-4bda-a9f5-d4da12e3e3bd req-052c383d-e6d4-4643-b36e-22ddbc5ba1f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:52:17 compute-0 nova_compute[260935]: 2025-10-11 08:52:17.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1423: 321 pgs: 321 active+clean; 170 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 404 KiB/s wr, 61 op/s
Oct 11 08:52:17 compute-0 nova_compute[260935]: 2025-10-11 08:52:17.918 2 DEBUG nova.compute.manager [req-bc1167c0-e1c1-4a41-b5a7-9b2a590b0237 req-f0f736c4-55c8-4c31-a363-f43efc97d0c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-changed-bf26054f-43d5-471a-bca4-e7948b4409d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:17 compute-0 nova_compute[260935]: 2025-10-11 08:52:17.919 2 DEBUG nova.compute.manager [req-bc1167c0-e1c1-4a41-b5a7-9b2a590b0237 req-f0f736c4-55c8-4c31-a363-f43efc97d0c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Refreshing instance network info cache due to event network-changed-bf26054f-43d5-471a-bca4-e7948b4409d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:52:17 compute-0 nova_compute[260935]: 2025-10-11 08:52:17.919 2 DEBUG oslo_concurrency.lockutils [req-bc1167c0-e1c1-4a41-b5a7-9b2a590b0237 req-f0f736c4-55c8-4c31-a363-f43efc97d0c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:52:17 compute-0 nova_compute[260935]: 2025-10-11 08:52:17.919 2 DEBUG oslo_concurrency.lockutils [req-bc1167c0-e1c1-4a41-b5a7-9b2a590b0237 req-f0f736c4-55c8-4c31-a363-f43efc97d0c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:52:17 compute-0 nova_compute[260935]: 2025-10-11 08:52:17.920 2 DEBUG nova.network.neutron [req-bc1167c0-e1c1-4a41-b5a7-9b2a590b0237 req-f0f736c4-55c8-4c31-a363-f43efc97d0c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Refreshing network info cache for port bf26054f-43d5-471a-bca4-e7948b4409d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:52:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:52:18 compute-0 nova_compute[260935]: 2025-10-11 08:52:18.905 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:18 compute-0 nova_compute[260935]: 2025-10-11 08:52:18.905 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:18 compute-0 nova_compute[260935]: 2025-10-11 08:52:18.927 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:52:19 compute-0 ceph-mon[74313]: pgmap v1423: 321 pgs: 321 active+clean; 170 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 404 KiB/s wr, 61 op/s
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.024 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.025 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.031 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.031 2 INFO nova.compute.claims [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.179 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:19 compute-0 ovn_controller[152945]: 2025-10-11T08:52:19Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6d:9c:27 10.100.0.12
Oct 11 08:52:19 compute-0 ovn_controller[152945]: 2025-10-11T08:52:19Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6d:9c:27 10.100.0.12
Oct 11 08:52:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:52:19 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2273607470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.628 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.634 2 DEBUG nova.compute.provider_tree [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.656 2 DEBUG nova.scheduler.client.report [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.686 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.687 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.755 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.755 2 DEBUG nova.network.neutron [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.784 2 INFO nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.806 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:52:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 170 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 347 KiB/s wr, 52 op/s
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.954 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.955 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.955 2 INFO nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Creating image(s)
Oct 11 08:52:19 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.978 2 DEBUG nova.storage.rbd_utils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:19.999 2 DEBUG nova.storage.rbd_utils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:20 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2273607470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.066 2 DEBUG nova.storage.rbd_utils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.069 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.106 2 DEBUG nova.policy [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2a171de1f79843e0b048393cabfee77d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9af27fad6b5a4783b66213343f27f0a1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.111 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172724.9593604, 1cecd438-75a3-4140-ad35-7439630b1be2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.111 2 INFO nova.compute.manager [-] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] VM Stopped (Lifecycle Event)
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.138 2 DEBUG nova.compute.manager [None req-125062ce-c246-42fd-8643-79b48cc61826 - - - - - -] [instance: 1cecd438-75a3-4140-ad35-7439630b1be2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.153 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.153 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.154 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.154 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.173 2 DEBUG nova.storage.rbd_utils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.177 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.493 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.584 2 DEBUG nova.storage.rbd_utils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] resizing rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.719 2 DEBUG nova.objects.instance [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'migration_context' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.737 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.737 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Ensure instance console log exists: /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.738 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.739 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:20 compute-0 nova_compute[260935]: 2025-10-11 08:52:20.739 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:21 compute-0 ceph-mon[74313]: pgmap v1424: 321 pgs: 321 active+clean; 170 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 347 KiB/s wr, 52 op/s
Oct 11 08:52:21 compute-0 nova_compute[260935]: 2025-10-11 08:52:21.178 2 DEBUG nova.network.neutron [req-bc1167c0-e1c1-4a41-b5a7-9b2a590b0237 req-f0f736c4-55c8-4c31-a363-f43efc97d0c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Updated VIF entry in instance network info cache for port bf26054f-43d5-471a-bca4-e7948b4409d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:52:21 compute-0 nova_compute[260935]: 2025-10-11 08:52:21.179 2 DEBUG nova.network.neutron [req-bc1167c0-e1c1-4a41-b5a7-9b2a590b0237 req-f0f736c4-55c8-4c31-a363-f43efc97d0c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Updating instance_info_cache with network_info: [{"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:21 compute-0 nova_compute[260935]: 2025-10-11 08:52:21.200 2 DEBUG oslo_concurrency.lockutils [req-bc1167c0-e1c1-4a41-b5a7-9b2a590b0237 req-f0f736c4-55c8-4c31-a363-f43efc97d0c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:52:21 compute-0 nova_compute[260935]: 2025-10-11 08:52:21.247 2 DEBUG nova.network.neutron [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Successfully created port: 20c8164c-0779-4589-b3d3-afc10a47631f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:52:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1425: 321 pgs: 321 active+clean; 170 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 337 KiB/s wr, 51 op/s
Oct 11 08:52:21 compute-0 nova_compute[260935]: 2025-10-11 08:52:21.974 2 DEBUG nova.network.neutron [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Successfully updated port: 20c8164c-0779-4589-b3d3-afc10a47631f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:52:21 compute-0 nova_compute[260935]: 2025-10-11 08:52:21.990 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "refresh_cache-98fabab3-6b4a-44f3-b232-f23f34f4e19f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:52:21 compute-0 nova_compute[260935]: 2025-10-11 08:52:21.991 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquired lock "refresh_cache-98fabab3-6b4a-44f3-b232-f23f34f4e19f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:52:21 compute-0 nova_compute[260935]: 2025-10-11 08:52:21.991 2 DEBUG nova.network.neutron [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:52:22 compute-0 nova_compute[260935]: 2025-10-11 08:52:22.106 2 DEBUG nova.compute.manager [req-e05c9b7c-6e20-469f-9fee-c6c4f5b79b0c req-a6701c37-d6f2-4ef6-ab25-13c4651f50d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-changed-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:22 compute-0 nova_compute[260935]: 2025-10-11 08:52:22.107 2 DEBUG nova.compute.manager [req-e05c9b7c-6e20-469f-9fee-c6c4f5b79b0c req-a6701c37-d6f2-4ef6-ab25-13c4651f50d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Refreshing instance network info cache due to event network-changed-20c8164c-0779-4589-b3d3-afc10a47631f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:52:22 compute-0 nova_compute[260935]: 2025-10-11 08:52:22.108 2 DEBUG oslo_concurrency.lockutils [req-e05c9b7c-6e20-469f-9fee-c6c4f5b79b0c req-a6701c37-d6f2-4ef6-ab25-13c4651f50d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-98fabab3-6b4a-44f3-b232-f23f34f4e19f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:52:22 compute-0 nova_compute[260935]: 2025-10-11 08:52:22.186 2 DEBUG nova.network.neutron [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:52:22 compute-0 nova_compute[260935]: 2025-10-11 08:52:22.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:23 compute-0 ceph-mon[74313]: pgmap v1425: 321 pgs: 321 active+clean; 170 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 337 KiB/s wr, 51 op/s
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.252 2 DEBUG nova.network.neutron [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Updating instance_info_cache with network_info: [{"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.276 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Releasing lock "refresh_cache-98fabab3-6b4a-44f3-b232-f23f34f4e19f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.277 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance network_info: |[{"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.278 2 DEBUG oslo_concurrency.lockutils [req-e05c9b7c-6e20-469f-9fee-c6c4f5b79b0c req-a6701c37-d6f2-4ef6-ab25-13c4651f50d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-98fabab3-6b4a-44f3-b232-f23f34f4e19f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.278 2 DEBUG nova.network.neutron [req-e05c9b7c-6e20-469f-9fee-c6c4f5b79b0c req-a6701c37-d6f2-4ef6-ab25-13c4651f50d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Refreshing network info cache for port 20c8164c-0779-4589-b3d3-afc10a47631f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:52:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.285 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Start _get_guest_xml network_info=[{"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.292 2 WARNING nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.302 2 DEBUG nova.virt.libvirt.host [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.303 2 DEBUG nova.virt.libvirt.host [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.307 2 DEBUG nova.virt.libvirt.host [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.308 2 DEBUG nova.virt.libvirt.host [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.309 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.310 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.311 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.311 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.312 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.312 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.313 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.313 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.314 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.314 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.315 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.315 2 DEBUG nova.virt.hardware [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.320 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:52:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/16960398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.798 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.833 2 DEBUG nova.storage.rbd_utils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:23 compute-0 nova_compute[260935]: 2025-10-11 08:52:23.838 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 246 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 127 op/s
Oct 11 08:52:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/16960398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:52:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2228053641' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.316 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.319 2 DEBUG nova.virt.libvirt.vif [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:52:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1265647930',display_name='tempest-ServerDiskConfigTestJSON-server-1265647930',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1265647930',id=40,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-qeuhsate',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:19Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=98fabab3-6b4a-44f3-b232-f23f34f4e19f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.320 2 DEBUG nova.network.os_vif_util [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.322 2 DEBUG nova.network.os_vif_util [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.323 2 DEBUG nova.objects.instance [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.329 2 DEBUG nova.compute.manager [req-40b93b29-5bc4-4366-8632-576d5d47e7e7 req-0570f71b-0e8e-4c53-b8b4-554728cbc4f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-changed-bf26054f-43d5-471a-bca4-e7948b4409d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.329 2 DEBUG nova.compute.manager [req-40b93b29-5bc4-4366-8632-576d5d47e7e7 req-0570f71b-0e8e-4c53-b8b4-554728cbc4f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Refreshing instance network info cache due to event network-changed-bf26054f-43d5-471a-bca4-e7948b4409d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.330 2 DEBUG oslo_concurrency.lockutils [req-40b93b29-5bc4-4366-8632-576d5d47e7e7 req-0570f71b-0e8e-4c53-b8b4-554728cbc4f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.330 2 DEBUG oslo_concurrency.lockutils [req-40b93b29-5bc4-4366-8632-576d5d47e7e7 req-0570f71b-0e8e-4c53-b8b4-554728cbc4f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.331 2 DEBUG nova.network.neutron [req-40b93b29-5bc4-4366-8632-576d5d47e7e7 req-0570f71b-0e8e-4c53-b8b4-554728cbc4f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Refreshing network info cache for port bf26054f-43d5-471a-bca4-e7948b4409d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.363 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:52:24 compute-0 nova_compute[260935]:   <uuid>98fabab3-6b4a-44f3-b232-f23f34f4e19f</uuid>
Oct 11 08:52:24 compute-0 nova_compute[260935]:   <name>instance-00000028</name>
Oct 11 08:52:24 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:52:24 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:52:24 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1265647930</nova:name>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:52:23</nova:creationTime>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:52:24 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:52:24 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:52:24 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:52:24 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:52:24 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:52:24 compute-0 nova_compute[260935]:         <nova:user uuid="2a171de1f79843e0b048393cabfee77d">tempest-ServerDiskConfigTestJSON-387886039-project-member</nova:user>
Oct 11 08:52:24 compute-0 nova_compute[260935]:         <nova:project uuid="9af27fad6b5a4783b66213343f27f0a1">tempest-ServerDiskConfigTestJSON-387886039</nova:project>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:52:24 compute-0 nova_compute[260935]:         <nova:port uuid="20c8164c-0779-4589-b3d3-afc10a47631f">
Oct 11 08:52:24 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:52:24 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:52:24 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <system>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <entry name="serial">98fabab3-6b4a-44f3-b232-f23f34f4e19f</entry>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <entry name="uuid">98fabab3-6b4a-44f3-b232-f23f34f4e19f</entry>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     </system>
Oct 11 08:52:24 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:52:24 compute-0 nova_compute[260935]:   <os>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:   </os>
Oct 11 08:52:24 compute-0 nova_compute[260935]:   <features>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:   </features>
Oct 11 08:52:24 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:52:24 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:52:24 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk">
Oct 11 08:52:24 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       </source>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:52:24 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config">
Oct 11 08:52:24 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       </source>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:52:24 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:bf:90:73"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <target dev="tap20c8164c-07"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/console.log" append="off"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <video>
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     </video>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:52:24 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:52:24 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:52:24 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:52:24 compute-0 nova_compute[260935]: </domain>
Oct 11 08:52:24 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.365 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Preparing to wait for external event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.366 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.367 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.368 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.370 2 DEBUG nova.virt.libvirt.vif [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:52:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1265647930',display_name='tempest-ServerDiskConfigTestJSON-server-1265647930',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1265647930',id=40,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-qeuhsate',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:19Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=98fabab3-6b4a-44f3-b232-f23f34f4e19f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.370 2 DEBUG nova.network.os_vif_util [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.372 2 DEBUG nova.network.os_vif_util [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.373 2 DEBUG os_vif [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.377 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.383 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20c8164c-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20c8164c-07, col_values=(('external_ids', {'iface-id': '20c8164c-0779-4589-b3d3-afc10a47631f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:90:73', 'vm-uuid': '98fabab3-6b4a-44f3-b232-f23f34f4e19f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:24 compute-0 NetworkManager[44960]: <info>  [1760172744.3878] manager: (tap20c8164c-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.397 2 INFO os_vif [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07')
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.468 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.468 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.468 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No VIF found with MAC fa:16:3e:bf:90:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.469 2 INFO nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Using config drive
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.502 2 DEBUG nova.storage.rbd_utils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:52:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:52:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:52:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:52:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:52:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.975 2 INFO nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Creating config drive at /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config
Oct 11 08:52:24 compute-0 nova_compute[260935]: 2025-10-11 08:52:24.985 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv73rd79n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:25 compute-0 ceph-mon[74313]: pgmap v1426: 321 pgs: 321 active+clean; 246 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 127 op/s
Oct 11 08:52:25 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2228053641' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.134 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.135 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.135 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.136 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.137 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.139 2 INFO nova.compute.manager [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Terminating instance
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.141 2 DEBUG nova.compute.manager [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.143 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv73rd79n" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.180 2 DEBUG nova.storage.rbd_utils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.185 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:25 compute-0 kernel: tapbf26054f-43 (unregistering): left promiscuous mode
Oct 11 08:52:25 compute-0 NetworkManager[44960]: <info>  [1760172745.2977] device (tapbf26054f-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:52:25 compute-0 ovn_controller[152945]: 2025-10-11T08:52:25Z|00317|binding|INFO|Releasing lport bf26054f-43d5-471a-bca4-e7948b4409d0 from this chassis (sb_readonly=0)
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:25 compute-0 ovn_controller[152945]: 2025-10-11T08:52:25Z|00318|binding|INFO|Setting lport bf26054f-43d5-471a-bca4-e7948b4409d0 down in Southbound
Oct 11 08:52:25 compute-0 ovn_controller[152945]: 2025-10-11T08:52:25Z|00319|binding|INFO|Removing iface tapbf26054f-43 ovn-installed in OVS
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.322 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:9c:27 10.100.0.12'], port_security=['fa:16:3e:6d:9c:27 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '840d2a1b-48bb-42ec-aa12-067e1b65ea39', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b23ff73ca27245eeb1b46f51326a5568', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c2e46c6-00fe-417f-b7cc-3c9acf40921b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4841cc19-0006-4d3d-bb24-b088854c8627, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=bf26054f-43d5-471a-bca4-e7948b4409d0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.323 162815 INFO neutron.agent.ovn.metadata.agent [-] Port bf26054f-43d5-471a-bca4-e7948b4409d0 in datapath 882d76f4-8cc7-44c7-ad90-277f4f92e044 unbound from our chassis
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.325 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 882d76f4-8cc7-44c7-ad90-277f4f92e044
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.356 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[caeada04-f98a-4f11-8996-15d6433c49ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Deactivated successfully.
Oct 11 08:52:25 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Consumed 13.690s CPU time.
Oct 11 08:52:25 compute-0 systemd-machined[215705]: Machine qemu-43-instance-00000027 terminated.
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.398 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7afd06bd-22eb-4c45-a780-7efc1edb158f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.404 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc9a78a-4a7e-4fe5-801e-93ad284efb1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.433 2 DEBUG oslo_concurrency.processutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.434 2 INFO nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Deleting local config drive /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config because it was imported into RBD.
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.456 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0f37fd-4606-4bcf-bd85-3731c8cadca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 podman[306838]: 2025-10-11 08:52:25.462653088 +0000 UTC m=+0.123548595 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.486 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dddff302-f724-4c2c-8cfd-6a13d5190aca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap882d76f4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:0a:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454897, 'reachable_time': 23401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306877, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.492 2 INFO nova.virt.libvirt.driver [-] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Instance destroyed successfully.
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.493 2 DEBUG nova.objects.instance [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lazy-loading 'resources' on Instance uuid 840d2a1b-48bb-42ec-aa12-067e1b65ea39 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.510 2 DEBUG nova.virt.libvirt.vif [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-397044721',display_name='tempest-FloatingIPsAssociationTestJSON-server-397044721',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-397044721',id=39,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b23ff73ca27245eeb1b46f51326a5568',ramdisk_id='',reservation_id='r-5vhnbeg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1277815089',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1277815089-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:52:06Z,user_data=None,user_id='7f37cd0ba15f412b88192a506c5cec79',uuid=840d2a1b-48bb-42ec-aa12-067e1b65ea39,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.511 2 DEBUG nova.network.os_vif_util [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converting VIF {"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.511 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3dcf03-14dc-4ef0-924c-038581e75edd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap882d76f4-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454914, 'tstamp': 454914}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306887, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap882d76f4-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454918, 'tstamp': 454918}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306887, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.512 2 DEBUG nova.network.os_vif_util [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=bf26054f-43d5-471a-bca4-e7948b4409d0,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf26054f-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.513 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap882d76f4-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.513 2 DEBUG os_vif [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=bf26054f-43d5-471a-bca4-e7948b4409d0,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf26054f-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf26054f-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:25 compute-0 systemd-udevd[306848]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:52:25 compute-0 kernel: tap20c8164c-07: entered promiscuous mode
Oct 11 08:52:25 compute-0 NetworkManager[44960]: <info>  [1760172745.5328] manager: (tap20c8164c-07): new Tun device (/org/freedesktop/NetworkManager/Devices/155)
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.533 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap882d76f4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.533 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.534 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap882d76f4-80, col_values=(('external_ids', {'iface-id': 'abe57b96-aedd-418b-be8f-7ad9fb9218ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.535 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:25 compute-0 ovn_controller[152945]: 2025-10-11T08:52:25Z|00320|binding|INFO|Claiming lport 20c8164c-0779-4589-b3d3-afc10a47631f for this chassis.
Oct 11 08:52:25 compute-0 ovn_controller[152945]: 2025-10-11T08:52:25Z|00321|binding|INFO|20c8164c-0779-4589-b3d3-afc10a47631f: Claiming fa:16:3e:bf:90:73 10.100.0.12
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.543 2 INFO os_vif [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=bf26054f-43d5-471a-bca4-e7948b4409d0,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf26054f-43')
Oct 11 08:52:25 compute-0 NetworkManager[44960]: <info>  [1760172745.5529] device (tap20c8164c-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.553 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:90:73 10.100.0.12'], port_security=['fa:16:3e:bf:90:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '98fabab3-6b4a-44f3-b232-f23f34f4e19f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=20c8164c-0779-4589-b3d3-afc10a47631f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:52:25 compute-0 NetworkManager[44960]: <info>  [1760172745.5547] device (tap20c8164c-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.555 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 20c8164c-0779-4589-b3d3-afc10a47631f in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd bound to our chassis
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.558 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 08:52:25 compute-0 systemd-machined[215705]: New machine qemu-44-instance-00000028.
Oct 11 08:52:25 compute-0 ovn_controller[152945]: 2025-10-11T08:52:25Z|00322|binding|INFO|Setting lport 20c8164c-0779-4589-b3d3-afc10a47631f ovn-installed in OVS
Oct 11 08:52:25 compute-0 ovn_controller[152945]: 2025-10-11T08:52:25Z|00323|binding|INFO|Setting lport 20c8164c-0779-4589-b3d3-afc10a47631f up in Southbound
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.577 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ae4e52-ba94-43eb-951e-4f1fd215bfe6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.578 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape5d4fc7a-11 in ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.581 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape5d4fc7a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.582 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[be134c99-122e-400b-b179-2d8b8cf94b48]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.583 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[79ff8c07-5818-46bb-8344-c5ec6d331ce5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 systemd[1]: Started Virtual Machine qemu-44-instance-00000028.
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.602 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[59fff58b-37b8-476d-a4bb-4406ff149226]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.631 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[71a81077-fdc5-41c2-8de0-e95344102628]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.678 2 DEBUG nova.network.neutron [req-e05c9b7c-6e20-469f-9fee-c6c4f5b79b0c req-a6701c37-d6f2-4ef6-ab25-13c4651f50d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Updated VIF entry in instance network info cache for port 20c8164c-0779-4589-b3d3-afc10a47631f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.679 2 DEBUG nova.network.neutron [req-e05c9b7c-6e20-469f-9fee-c6c4f5b79b0c req-a6701c37-d6f2-4ef6-ab25-13c4651f50d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Updating instance_info_cache with network_info: [{"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.679 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[45ea9783-27bb-4d65-882a-8df7eea026fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 NetworkManager[44960]: <info>  [1760172745.6893] manager: (tape5d4fc7a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/156)
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.688 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7453ed-d113-44ea-af73-089a1aab0392]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.700 2 DEBUG oslo_concurrency.lockutils [req-e05c9b7c-6e20-469f-9fee-c6c4f5b79b0c req-a6701c37-d6f2-4ef6-ab25-13c4651f50d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-98fabab3-6b4a-44f3-b232-f23f34f4e19f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.729 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bb97663f-d1ee-49fd-bae3-245f12a60f9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.734 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c55b58ed-356e-4df7-b668-1bd0beb16f52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 NetworkManager[44960]: <info>  [1760172745.7644] device (tape5d4fc7a-10): carrier: link connected
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.770 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fedcb07b-fd0e-4eec-88bd-3fe6bc448463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 nova_compute[260935]: 2025-10-11 08:52:25.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.843 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6050e627-4e36-470a-9edd-ff0b74feae85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459046, 'reachable_time': 26652, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306949, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.865 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4471d1-ec8f-43ea-bb06-12a71a78b7b7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:2033'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459046, 'tstamp': 459046}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306950, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.895 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[851a9698-9360-46a1-860d-c78e05b1df38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459046, 'reachable_time': 26652, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306951, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 246 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Oct 11 08:52:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:25.949 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f06ba7-ad16-490b-b868-1da6d663c3de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.058 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[28fba919-3a20-44c7-a4e4-9e1380488c2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.060 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.061 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.063 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5d4fc7a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:26 compute-0 NetworkManager[44960]: <info>  [1760172746.0664] manager: (tape5d4fc7a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Oct 11 08:52:26 compute-0 kernel: tape5d4fc7a-10: entered promiscuous mode
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.070 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5d4fc7a-10, col_values=(('external_ids', {'iface-id': '7a0f31c4-9bda-45df-9fec-aacc40fc88c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:26 compute-0 ovn_controller[152945]: 2025-10-11T08:52:26Z|00324|binding|INFO|Releasing lport 7a0f31c4-9bda-45df-9fec-aacc40fc88c1 from this chassis (sb_readonly=0)
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.090 2 INFO nova.virt.libvirt.driver [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Deleting instance files /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39_del
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.091 2 INFO nova.virt.libvirt.driver [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Deletion of /var/lib/nova/instances/840d2a1b-48bb-42ec-aa12-067e1b65ea39_del complete
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.096 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.097 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3af2f9-93f5-4b2f-9ecd-3fe4263dee78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.099 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:52:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:26.100 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'env', 'PROCESS_TAG=haproxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.176 2 INFO nova.compute.manager [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Took 1.03 seconds to destroy the instance on the hypervisor.
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.178 2 DEBUG oslo.service.loopingcall [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.178 2 DEBUG nova.compute.manager [-] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.178 2 DEBUG nova.network.neutron [-] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.424 2 DEBUG nova.compute.manager [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-vif-unplugged-bf26054f-43d5-471a-bca4-e7948b4409d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.425 2 DEBUG oslo_concurrency.lockutils [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.426 2 DEBUG oslo_concurrency.lockutils [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.426 2 DEBUG oslo_concurrency.lockutils [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.426 2 DEBUG nova.compute.manager [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] No waiting events found dispatching network-vif-unplugged-bf26054f-43d5-471a-bca4-e7948b4409d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.427 2 DEBUG nova.compute.manager [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-vif-unplugged-bf26054f-43d5-471a-bca4-e7948b4409d0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.427 2 DEBUG nova.compute.manager [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.428 2 DEBUG oslo_concurrency.lockutils [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.428 2 DEBUG oslo_concurrency.lockutils [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.428 2 DEBUG oslo_concurrency.lockutils [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.429 2 DEBUG nova.compute.manager [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] No waiting events found dispatching network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.429 2 WARNING nova.compute.manager [req-86597d18-7aa1-4141-b807-89569d9965be req-5be5dcf5-0dea-45a9-8d6d-8401781c5d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received unexpected event network-vif-plugged-bf26054f-43d5-471a-bca4-e7948b4409d0 for instance with vm_state active and task_state deleting.
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.547 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172746.5470092, 98fabab3-6b4a-44f3-b232-f23f34f4e19f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.549 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] VM Started (Lifecycle Event)
Oct 11 08:52:26 compute-0 podman[307026]: 2025-10-11 08:52:26.550455771 +0000 UTC m=+0.075688381 container create cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.572 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.580 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172746.5472462, 98fabab3-6b4a-44f3-b232-f23f34f4e19f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.582 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] VM Paused (Lifecycle Event)
Oct 11 08:52:26 compute-0 systemd[1]: Started libpod-conmon-cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3.scope.
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.603 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.607 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:52:26 compute-0 podman[307026]: 2025-10-11 08:52:26.518807626 +0000 UTC m=+0.044040266 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.634 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:52:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:52:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e0d34e1c9b5cb00bbf8b4589ff9eade911de7e540b11ead99c1576f4f3ca7c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:52:26 compute-0 podman[307026]: 2025-10-11 08:52:26.658314682 +0000 UTC m=+0.183547392 container init cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 08:52:26 compute-0 podman[307026]: 2025-10-11 08:52:26.667930574 +0000 UTC m=+0.193163204 container start cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:52:26 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[307041]: [NOTICE]   (307045) : New worker (307047) forked
Oct 11 08:52:26 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[307041]: [NOTICE]   (307045) : Loading success.
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.780 2 DEBUG nova.network.neutron [req-40b93b29-5bc4-4366-8632-576d5d47e7e7 req-0570f71b-0e8e-4c53-b8b4-554728cbc4f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Updated VIF entry in instance network info cache for port bf26054f-43d5-471a-bca4-e7948b4409d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.781 2 DEBUG nova.network.neutron [req-40b93b29-5bc4-4366-8632-576d5d47e7e7 req-0570f71b-0e8e-4c53-b8b4-554728cbc4f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Updating instance_info_cache with network_info: [{"id": "bf26054f-43d5-471a-bca4-e7948b4409d0", "address": "fa:16:3e:6d:9c:27", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf26054f-43", "ovs_interfaceid": "bf26054f-43d5-471a-bca4-e7948b4409d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:26 compute-0 nova_compute[260935]: 2025-10-11 08:52:26.800 2 DEBUG oslo_concurrency.lockutils [req-40b93b29-5bc4-4366-8632-576d5d47e7e7 req-0570f71b-0e8e-4c53-b8b4-554728cbc4f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-840d2a1b-48bb-42ec-aa12-067e1b65ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:52:27 compute-0 ceph-mon[74313]: pgmap v1427: 321 pgs: 321 active+clean; 246 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Oct 11 08:52:27 compute-0 nova_compute[260935]: 2025-10-11 08:52:27.446 2 DEBUG nova.network.neutron [-] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:27 compute-0 nova_compute[260935]: 2025-10-11 08:52:27.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:27 compute-0 nova_compute[260935]: 2025-10-11 08:52:27.490 2 INFO nova.compute.manager [-] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Took 1.31 seconds to deallocate network for instance.
Oct 11 08:52:27 compute-0 nova_compute[260935]: 2025-10-11 08:52:27.547 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:27 compute-0 nova_compute[260935]: 2025-10-11 08:52:27.548 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:27 compute-0 nova_compute[260935]: 2025-10-11 08:52:27.655 2 DEBUG oslo_concurrency.processutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:27 compute-0 nova_compute[260935]: 2025-10-11 08:52:27.784 2 DEBUG nova.compute.manager [req-c13bd8b3-cc28-45a7-9607-5304f2f086ad req-be81931b-8443-4126-ada2-523f90bf4d33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Received event network-vif-deleted-bf26054f-43d5-471a-bca4-e7948b4409d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 321 active+clean; 167 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 127 op/s
Oct 11 08:52:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:52:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1305509786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.184 2 DEBUG oslo_concurrency.processutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.191 2 DEBUG nova.compute.provider_tree [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.211 2 DEBUG nova.scheduler.client.report [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.229 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.269 2 INFO nova.scheduler.client.report [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Deleted allocations for instance 840d2a1b-48bb-42ec-aa12-067e1b65ea39
Oct 11 08:52:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.340 2 DEBUG oslo_concurrency.lockutils [None req-246c938c-4e76-4a49-8f73-6af5f563282d 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "840d2a1b-48bb-42ec-aa12-067e1b65ea39" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.635 2 DEBUG nova.compute.manager [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.635 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.636 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.637 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.637 2 DEBUG nova.compute.manager [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Processing event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.638 2 DEBUG nova.compute.manager [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.638 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.639 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.639 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.640 2 DEBUG nova.compute.manager [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] No waiting events found dispatching network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.640 2 WARNING nova.compute.manager [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received unexpected event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f for instance with vm_state building and task_state spawning.
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.641 2 DEBUG nova.compute.manager [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-changed-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.641 2 DEBUG nova.compute.manager [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing instance network info cache due to event network-changed-108be440-11bf-41f6-a628-86fbac597b7d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.641 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.642 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.642 2 DEBUG nova.network.neutron [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.646 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.652 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172748.6509447, 98fabab3-6b4a-44f3-b232-f23f34f4e19f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.653 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] VM Resumed (Lifecycle Event)
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.657 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.662 2 INFO nova.virt.libvirt.driver [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance spawned successfully.
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.663 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.685 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.696 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.701 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.702 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.703 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.704 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.705 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.706 2 DEBUG nova.virt.libvirt.driver [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.732 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.781 2 INFO nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Took 8.83 seconds to spawn the instance on the hypervisor.
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.782 2 DEBUG nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.870 2 INFO nova.compute.manager [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Took 9.87 seconds to build instance.
Oct 11 08:52:28 compute-0 nova_compute[260935]: 2025-10-11 08:52:28.891 2 DEBUG oslo_concurrency.lockutils [None req-f27d8cfa-6060-4135-ac4c-97293b7b00df 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:29 compute-0 ceph-mon[74313]: pgmap v1428: 321 pgs: 321 active+clean; 167 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 127 op/s
Oct 11 08:52:29 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1305509786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:29 compute-0 nova_compute[260935]: 2025-10-11 08:52:29.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 167 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 315 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Oct 11 08:52:30 compute-0 nova_compute[260935]: 2025-10-11 08:52:30.105 2 DEBUG nova.network.neutron [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updated VIF entry in instance network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:52:30 compute-0 nova_compute[260935]: 2025-10-11 08:52:30.106 2 DEBUG nova.network.neutron [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updating instance_info_cache with network_info: [{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:30 compute-0 nova_compute[260935]: 2025-10-11 08:52:30.126 2 DEBUG oslo_concurrency.lockutils [req-c5bae198-9c77-434b-aa5c-ab1fc53025d5 req-dbd45bee-b5cd-48e2-8f44-3ab2ce3293b4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:52:30 compute-0 nova_compute[260935]: 2025-10-11 08:52:30.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:30 compute-0 nova_compute[260935]: 2025-10-11 08:52:30.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:31 compute-0 ceph-mon[74313]: pgmap v1429: 321 pgs: 321 active+clean; 167 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 315 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Oct 11 08:52:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 167 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 315 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Oct 11 08:52:32 compute-0 nova_compute[260935]: 2025-10-11 08:52:32.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:32 compute-0 nova_compute[260935]: 2025-10-11 08:52:32.549 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:32 compute-0 nova_compute[260935]: 2025-10-11 08:52:32.549 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:32 compute-0 nova_compute[260935]: 2025-10-11 08:52:32.577 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:52:32 compute-0 nova_compute[260935]: 2025-10-11 08:52:32.665 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:32 compute-0 nova_compute[260935]: 2025-10-11 08:52:32.666 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:32 compute-0 nova_compute[260935]: 2025-10-11 08:52:32.675 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:52:32 compute-0 nova_compute[260935]: 2025-10-11 08:52:32.676 2 INFO nova.compute.claims [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:52:32 compute-0 podman[307078]: 2025-10-11 08:52:32.826730965 +0000 UTC m=+0.119550232 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 08:52:32 compute-0 nova_compute[260935]: 2025-10-11 08:52:32.838 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:33 compute-0 ceph-mon[74313]: pgmap v1430: 321 pgs: 321 active+clean; 167 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 315 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.126 2 DEBUG nova.compute.manager [req-71cf0004-6e29-4f12-a789-4a0f55773950 req-c62cab1a-a9f0-457b-b21e-a98b97ff7efc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-changed-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.127 2 DEBUG nova.compute.manager [req-71cf0004-6e29-4f12-a789-4a0f55773950 req-c62cab1a-a9f0-457b-b21e-a98b97ff7efc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing instance network info cache due to event network-changed-108be440-11bf-41f6-a628-86fbac597b7d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.128 2 DEBUG oslo_concurrency.lockutils [req-71cf0004-6e29-4f12-a789-4a0f55773950 req-c62cab1a-a9f0-457b-b21e-a98b97ff7efc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.128 2 DEBUG oslo_concurrency.lockutils [req-71cf0004-6e29-4f12-a789-4a0f55773950 req-c62cab1a-a9f0-457b-b21e-a98b97ff7efc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.129 2 DEBUG nova.network.neutron [req-71cf0004-6e29-4f12-a789-4a0f55773950 req-c62cab1a-a9f0-457b-b21e-a98b97ff7efc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Refreshing network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:52:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:52:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:52:33 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3389995710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.315 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.324 2 DEBUG nova.compute.provider_tree [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.348 2 DEBUG nova.scheduler.client.report [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.380 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.382 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.435 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.436 2 DEBUG nova.network.neutron [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.466 2 INFO nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.491 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.592 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.594 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.595 2 INFO nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Creating image(s)
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.630 2 DEBUG nova.storage.rbd_utils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] rbd image dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.673 2 DEBUG nova.storage.rbd_utils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] rbd image dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.711 2 DEBUG nova.storage.rbd_utils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] rbd image dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.717 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.760 2 DEBUG nova.policy [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '713a9b01d75f4fd2b8ad41bac3ba6343', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6fb895e8125a437b8cc29be31706fccb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.814 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.815 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.816 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.817 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.853 2 DEBUG nova.storage.rbd_utils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] rbd image dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:33 compute-0 nova_compute[260935]: 2025-10-11 08:52:33.858 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 167 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 179 op/s
Oct 11 08:52:34 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3389995710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:34 compute-0 nova_compute[260935]: 2025-10-11 08:52:34.195 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:34 compute-0 nova_compute[260935]: 2025-10-11 08:52:34.293 2 DEBUG nova.storage.rbd_utils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] resizing rbd image dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:52:34 compute-0 nova_compute[260935]: 2025-10-11 08:52:34.390 2 DEBUG nova.objects.instance [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lazy-loading 'migration_context' on Instance uuid dd616f8b-2be3-455f-8979-ac6e9e57af2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:34 compute-0 nova_compute[260935]: 2025-10-11 08:52:34.410 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:52:34 compute-0 nova_compute[260935]: 2025-10-11 08:52:34.411 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Ensure instance console log exists: /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:52:34 compute-0 nova_compute[260935]: 2025-10-11 08:52:34.411 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:34 compute-0 nova_compute[260935]: 2025-10-11 08:52:34.412 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:34 compute-0 nova_compute[260935]: 2025-10-11 08:52:34.413 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:34 compute-0 nova_compute[260935]: 2025-10-11 08:52:34.494 2 DEBUG nova.network.neutron [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Successfully created port: a5dcad71-6ce4-4a7c-95fc-900cb7ee620e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.149 2 DEBUG nova.network.neutron [req-71cf0004-6e29-4f12-a789-4a0f55773950 req-c62cab1a-a9f0-457b-b21e-a98b97ff7efc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updated VIF entry in instance network info cache for port 108be440-11bf-41f6-a628-86fbac597b7d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.150 2 DEBUG nova.network.neutron [req-71cf0004-6e29-4f12-a789-4a0f55773950 req-c62cab1a-a9f0-457b-b21e-a98b97ff7efc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updating instance_info_cache with network_info: [{"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:35 compute-0 ceph-mon[74313]: pgmap v1431: 321 pgs: 321 active+clean; 167 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 179 op/s
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.172 2 DEBUG oslo_concurrency.lockutils [req-71cf0004-6e29-4f12-a789-4a0f55773950 req-c62cab1a-a9f0-457b-b21e-a98b97ff7efc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-872b1c1d-bc87-4123-a599-4d64b89018aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.387 2 INFO nova.compute.manager [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Rebuilding instance
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.456 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.457 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.477 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.504 2 DEBUG nova.network.neutron [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Successfully updated port: a5dcad71-6ce4-4a7c-95fc-900cb7ee620e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.525 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "refresh_cache-dd616f8b-2be3-455f-8979-ac6e9e57af2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.526 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquired lock "refresh_cache-dd616f8b-2be3-455f-8979-ac6e9e57af2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.526 2 DEBUG nova.network.neutron [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.550 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.550 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.558 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.558 2 INFO nova.compute.claims [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.664 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.680 2 DEBUG nova.compute.manager [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.727 2 DEBUG nova.network.neutron [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.746 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.752 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.795 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.809 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'resources' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.838 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'migration_context' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.854 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 08:52:35 compute-0 nova_compute[260935]: 2025-10-11 08:52:35.863 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 08:52:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 167 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 102 op/s
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.126 2 DEBUG nova.compute.manager [req-105183b8-0d0d-46b7-a878-25cb3a0dde91 req-64e46d8c-ba1b-4df1-9024-4292acd28f53 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received event network-changed-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.127 2 DEBUG nova.compute.manager [req-105183b8-0d0d-46b7-a878-25cb3a0dde91 req-64e46d8c-ba1b-4df1-9024-4292acd28f53 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Refreshing instance network info cache due to event network-changed-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.128 2 DEBUG oslo_concurrency.lockutils [req-105183b8-0d0d-46b7-a878-25cb3a0dde91 req-64e46d8c-ba1b-4df1-9024-4292acd28f53 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-dd616f8b-2be3-455f-8979-ac6e9e57af2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:52:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:52:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2579751925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.181 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.187 2 DEBUG nova.compute.provider_tree [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.208 2 DEBUG nova.scheduler.client.report [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.238 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.240 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.298 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.299 2 DEBUG nova.network.neutron [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.325 2 INFO nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.382 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.510 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.512 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.513 2 INFO nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Creating image(s)
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.538 2 DEBUG nova.storage.rbd_utils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] rbd image 5750649d-960f-42d5-b127-de8b9a2bee8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.573 2 DEBUG nova.storage.rbd_utils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] rbd image 5750649d-960f-42d5-b127-de8b9a2bee8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.601 2 DEBUG nova.storage.rbd_utils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] rbd image 5750649d-960f-42d5-b127-de8b9a2bee8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.607 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.657 2 DEBUG nova.policy [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aeed30817a7740109e765d227ff2b78a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e394c641aa0e46e2a7d8129cd88c9a01', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.698 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.699 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.699 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.700 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.729 2 DEBUG nova.storage.rbd_utils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] rbd image 5750649d-960f-42d5-b127-de8b9a2bee8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.737 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5750649d-960f-42d5-b127-de8b9a2bee8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.853 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "872b1c1d-bc87-4123-a599-4d64b89018aa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.857 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.857 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.858 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.858 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.860 2 INFO nova.compute.manager [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Terminating instance
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.862 2 DEBUG nova.compute.manager [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.909 2 DEBUG nova.network.neutron [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Updating instance_info_cache with network_info: [{"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:36 compute-0 kernel: tap108be440-11 (unregistering): left promiscuous mode
Oct 11 08:52:36 compute-0 NetworkManager[44960]: <info>  [1760172756.9379] device (tap108be440-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.940 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Releasing lock "refresh_cache-dd616f8b-2be3-455f-8979-ac6e9e57af2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.941 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Instance network_info: |[{"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.942 2 DEBUG oslo_concurrency.lockutils [req-105183b8-0d0d-46b7-a878-25cb3a0dde91 req-64e46d8c-ba1b-4df1-9024-4292acd28f53 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-dd616f8b-2be3-455f-8979-ac6e9e57af2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.943 2 DEBUG nova.network.neutron [req-105183b8-0d0d-46b7-a878-25cb3a0dde91 req-64e46d8c-ba1b-4df1-9024-4292acd28f53 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Refreshing network info cache for port a5dcad71-6ce4-4a7c-95fc-900cb7ee620e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:52:36 compute-0 nova_compute[260935]: 2025-10-11 08:52:36.953 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Start _get_guest_xml network_info=[{"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:52:37 compute-0 ovn_controller[152945]: 2025-10-11T08:52:37Z|00325|binding|INFO|Releasing lport 108be440-11bf-41f6-a628-86fbac597b7d from this chassis (sb_readonly=0)
Oct 11 08:52:37 compute-0 ovn_controller[152945]: 2025-10-11T08:52:37Z|00326|binding|INFO|Setting lport 108be440-11bf-41f6-a628-86fbac597b7d down in Southbound
Oct 11 08:52:37 compute-0 ovn_controller[152945]: 2025-10-11T08:52:37Z|00327|binding|INFO|Removing iface tap108be440-11 ovn-installed in OVS
Oct 11 08:52:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.023 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:33:2e 10.100.0.6'], port_security=['fa:16:3e:c7:33:2e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '872b1c1d-bc87-4123-a599-4d64b89018aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b23ff73ca27245eeb1b46f51326a5568', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c2e46c6-00fe-417f-b7cc-3c9acf40921b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4841cc19-0006-4d3d-bb24-b088854c8627, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=108be440-11bf-41f6-a628-86fbac597b7d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:52:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.024 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 108be440-11bf-41f6-a628-86fbac597b7d in datapath 882d76f4-8cc7-44c7-ad90-277f4f92e044 unbound from our chassis
Oct 11 08:52:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.025 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 882d76f4-8cc7-44c7-ad90-277f4f92e044, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:52:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.026 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b46d3303-6f78-4cea-8cc1-8f4dfe0a0ed4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.027 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044 namespace which is not needed anymore
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.056 2 WARNING nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:52:37 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Deactivated successfully.
Oct 11 08:52:37 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Consumed 14.575s CPU time.
Oct 11 08:52:37 compute-0 systemd-machined[215705]: Machine qemu-40-instance-00000024 terminated.
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.065 2 DEBUG nova.virt.libvirt.host [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.066 2 DEBUG nova.virt.libvirt.host [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.070 2 DEBUG nova.virt.libvirt.host [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.070 2 DEBUG nova.virt.libvirt.host [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.071 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.071 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.071 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.071 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.072 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.072 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.072 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.072 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.072 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.072 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.073 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.073 2 DEBUG nova.virt.hardware [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.076 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.127 2 INFO nova.virt.libvirt.driver [-] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Instance destroyed successfully.
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.128 2 DEBUG nova.objects.instance [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lazy-loading 'resources' on Instance uuid 872b1c1d-bc87-4123-a599-4d64b89018aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.149 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5750649d-960f-42d5-b127-de8b9a2bee8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.151 2 DEBUG nova.virt.libvirt.vif [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:51:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-652156110',display_name='tempest-FloatingIPsAssociationTestJSON-server-652156110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-652156110',id=36,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:51:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b23ff73ca27245eeb1b46f51326a5568',ramdisk_id='',reservation_id='r-dxyrerkf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1277815089',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1277815089-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:51:45Z,user_data=None,user_id='7f37cd0ba15f412b88192a506c5cec79',uuid=872b1c1d-bc87-4123-a599-4d64b89018aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.151 2 DEBUG nova.network.os_vif_util [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converting VIF {"id": "108be440-11bf-41f6-a628-86fbac597b7d", "address": "fa:16:3e:c7:33:2e", "network": {"id": "882d76f4-8cc7-44c7-ad90-277f4f92e044", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-3978382-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b23ff73ca27245eeb1b46f51326a5568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap108be440-11", "ovs_interfaceid": "108be440-11bf-41f6-a628-86fbac597b7d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.152 2 DEBUG nova.network.os_vif_util [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:33:2e,bridge_name='br-int',has_traffic_filtering=True,id=108be440-11bf-41f6-a628-86fbac597b7d,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap108be440-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.152 2 DEBUG os_vif [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:33:2e,bridge_name='br-int',has_traffic_filtering=True,id=108be440-11bf-41f6-a628-86fbac597b7d,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap108be440-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:52:37 compute-0 ceph-mon[74313]: pgmap v1432: 321 pgs: 321 active+clean; 167 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 102 op/s
Oct 11 08:52:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2579751925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:37 compute-0 neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044[303946]: [NOTICE]   (303952) : haproxy version is 2.8.14-c23fe91
Oct 11 08:52:37 compute-0 neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044[303946]: [NOTICE]   (303952) : path to executable is /usr/sbin/haproxy
Oct 11 08:52:37 compute-0 neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044[303946]: [ALERT]    (303952) : Current worker (303954) exited with code 143 (Terminated)
Oct 11 08:52:37 compute-0 neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044[303946]: [WARNING]  (303952) : All workers exited. Exiting... (0)
Oct 11 08:52:37 compute-0 systemd[1]: libpod-c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7.scope: Deactivated successfully.
Oct 11 08:52:37 compute-0 podman[307435]: 2025-10-11 08:52:37.178544775 +0000 UTC m=+0.057605130 container died c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.196 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap108be440-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.202 2 INFO os_vif [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:33:2e,bridge_name='br-int',has_traffic_filtering=True,id=108be440-11bf-41f6-a628-86fbac597b7d,network=Network(882d76f4-8cc7-44c7-ad90-277f4f92e044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap108be440-11')
Oct 11 08:52:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7-userdata-shm.mount: Deactivated successfully.
Oct 11 08:52:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-43007129255deff7aeaffb2ae4dcdedaaf076ce8807efca09774d5aee6cf146c-merged.mount: Deactivated successfully.
Oct 11 08:52:37 compute-0 podman[307435]: 2025-10-11 08:52:37.229947399 +0000 UTC m=+0.109007744 container cleanup c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 08:52:37 compute-0 systemd[1]: libpod-conmon-c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7.scope: Deactivated successfully.
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.291 2 DEBUG nova.storage.rbd_utils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] resizing rbd image 5750649d-960f-42d5-b127-de8b9a2bee8f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:52:37 compute-0 podman[307510]: 2025-10-11 08:52:37.310044744 +0000 UTC m=+0.053000510 container remove c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 08:52:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.317 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[618d2fe1-1c82-4cf5-b86b-73fe1401601e]: (4, ('Sat Oct 11 08:52:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044 (c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7)\nc606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7\nSat Oct 11 08:52:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044 (c606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7)\nc606860635a65babc5f4e20e1937563ff429e8f57b93abca2fa39c3cc7258fd7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.321 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab2aad0-3704-4af7-9358-c31c5ae3d2a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.323 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap882d76f4-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:37 compute-0 kernel: tap882d76f4-80: left promiscuous mode
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.354 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fca15d0c-9294-4ddf-8527-f1803e5d854c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.387 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ad705350-c9d1-40f0-9478-7c4a0dd522bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.388 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[293e8261-5839-49b5-814b-0a5cebb447bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.414 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3a66e563-246a-48a1-b15d-56aa82ae7fe9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454886, 'reachable_time': 38404, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307573, 'error': None, 'target': 'ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d882d76f4\x2d8cc7\x2d44c7\x2dad90\x2d277f4f92e044.mount: Deactivated successfully.
Oct 11 08:52:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.426 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-882d76f4-8cc7-44c7-ad90-277f4f92e044 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:52:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:37.426 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7be4f8-7359-4710-91d3-c027db6551fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.448 2 DEBUG nova.objects.instance [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lazy-loading 'migration_context' on Instance uuid 5750649d-960f-42d5-b127-de8b9a2bee8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.470 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.470 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Ensure instance console log exists: /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.471 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.471 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.471 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:52:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2909891233' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:52:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:52:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2909891233' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:52:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:52:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2614441609' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.566 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.591 2 DEBUG nova.storage.rbd_utils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] rbd image dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.599 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.675 2 INFO nova.virt.libvirt.driver [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Deleting instance files /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa_del
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.677 2 INFO nova.virt.libvirt.driver [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Deletion of /var/lib/nova/instances/872b1c1d-bc87-4123-a599-4d64b89018aa_del complete
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.745 2 INFO nova.compute.manager [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Took 0.88 seconds to destroy the instance on the hypervisor.
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.746 2 DEBUG oslo.service.loopingcall [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.747 2 DEBUG nova.compute.manager [-] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:52:37 compute-0 nova_compute[260935]: 2025-10-11 08:52:37.747 2 DEBUG nova.network.neutron [-] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:52:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 180 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 175 op/s
Oct 11 08:52:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:52:38 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2408323234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.131 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.132 2 DEBUG nova.virt.libvirt.vif [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:52:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1610135239',display_name='tempest-ImagesNegativeTestJSON-server-1610135239',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1610135239',id=41,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6fb895e8125a437b8cc29be31706fccb',ramdisk_id='',reservation_id='r-xtlxm36r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1030914515',owner_user_name='tempest-ImagesNegativeTestJSON-
1030914515-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:33Z,user_data=None,user_id='713a9b01d75f4fd2b8ad41bac3ba6343',uuid=dd616f8b-2be3-455f-8979-ac6e9e57af2d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.133 2 DEBUG nova.network.os_vif_util [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Converting VIF {"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.133 2 DEBUG nova.network.os_vif_util [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:7a:6c,bridge_name='br-int',has_traffic_filtering=True,id=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e,network=Network(f4f23928-daba-425d-b61c-e65657ec3386),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5dcad71-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.135 2 DEBUG nova.objects.instance [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lazy-loading 'pci_devices' on Instance uuid dd616f8b-2be3-455f-8979-ac6e9e57af2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.152 2 DEBUG nova.network.neutron [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Successfully created port: 81c1d19a-c479-4381-9557-92f3e52b0cf0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.159 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:52:38 compute-0 nova_compute[260935]:   <uuid>dd616f8b-2be3-455f-8979-ac6e9e57af2d</uuid>
Oct 11 08:52:38 compute-0 nova_compute[260935]:   <name>instance-00000029</name>
Oct 11 08:52:38 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:52:38 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:52:38 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <nova:name>tempest-ImagesNegativeTestJSON-server-1610135239</nova:name>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:52:37</nova:creationTime>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:52:38 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:52:38 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:52:38 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:52:38 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:52:38 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:52:38 compute-0 nova_compute[260935]:         <nova:user uuid="713a9b01d75f4fd2b8ad41bac3ba6343">tempest-ImagesNegativeTestJSON-1030914515-project-member</nova:user>
Oct 11 08:52:38 compute-0 nova_compute[260935]:         <nova:project uuid="6fb895e8125a437b8cc29be31706fccb">tempest-ImagesNegativeTestJSON-1030914515</nova:project>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:52:38 compute-0 nova_compute[260935]:         <nova:port uuid="a5dcad71-6ce4-4a7c-95fc-900cb7ee620e">
Oct 11 08:52:38 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:52:38 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:52:38 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <system>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <entry name="serial">dd616f8b-2be3-455f-8979-ac6e9e57af2d</entry>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <entry name="uuid">dd616f8b-2be3-455f-8979-ac6e9e57af2d</entry>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     </system>
Oct 11 08:52:38 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:52:38 compute-0 nova_compute[260935]:   <os>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:   </os>
Oct 11 08:52:38 compute-0 nova_compute[260935]:   <features>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:   </features>
Oct 11 08:52:38 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:52:38 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:52:38 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk">
Oct 11 08:52:38 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       </source>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:52:38 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk.config">
Oct 11 08:52:38 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       </source>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:52:38 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:40:7a:6c"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <target dev="tapa5dcad71-6c"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d/console.log" append="off"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <video>
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     </video>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:52:38 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:52:38 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:52:38 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:52:38 compute-0 nova_compute[260935]: </domain>
Oct 11 08:52:38 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.160 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Preparing to wait for external event network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.160 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.161 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.161 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.162 2 DEBUG nova.virt.libvirt.vif [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:52:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1610135239',display_name='tempest-ImagesNegativeTestJSON-server-1610135239',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1610135239',id=41,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6fb895e8125a437b8cc29be31706fccb',ramdisk_id='',reservation_id='r-xtlxm36r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1030914515',owner_user_name='tempest-ImagesNegativ
eTestJSON-1030914515-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:33Z,user_data=None,user_id='713a9b01d75f4fd2b8ad41bac3ba6343',uuid=dd616f8b-2be3-455f-8979-ac6e9e57af2d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.162 2 DEBUG nova.network.os_vif_util [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Converting VIF {"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.163 2 DEBUG nova.network.os_vif_util [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:7a:6c,bridge_name='br-int',has_traffic_filtering=True,id=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e,network=Network(f4f23928-daba-425d-b61c-e65657ec3386),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5dcad71-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.163 2 DEBUG os_vif [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:7a:6c,bridge_name='br-int',has_traffic_filtering=True,id=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e,network=Network(f4f23928-daba-425d-b61c-e65657ec3386),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5dcad71-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.164 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.165 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:52:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2909891233' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:52:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2909891233' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:52:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2614441609' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2408323234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5dcad71-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa5dcad71-6c, col_values=(('external_ids', {'iface-id': 'a5dcad71-6ce4-4a7c-95fc-900cb7ee620e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:7a:6c', 'vm-uuid': 'dd616f8b-2be3-455f-8979-ac6e9e57af2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:38 compute-0 NetworkManager[44960]: <info>  [1760172758.2057] manager: (tapa5dcad71-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.214 2 INFO os_vif [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:7a:6c,bridge_name='br-int',has_traffic_filtering=True,id=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e,network=Network(f4f23928-daba-425d-b61c-e65657ec3386),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5dcad71-6c')
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.232 2 DEBUG nova.compute.manager [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-vif-unplugged-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.232 2 DEBUG oslo_concurrency.lockutils [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.233 2 DEBUG oslo_concurrency.lockutils [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.233 2 DEBUG oslo_concurrency.lockutils [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.233 2 DEBUG nova.compute.manager [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] No waiting events found dispatching network-vif-unplugged-108be440-11bf-41f6-a628-86fbac597b7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.233 2 DEBUG nova.compute.manager [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-vif-unplugged-108be440-11bf-41f6-a628-86fbac597b7d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.233 2 DEBUG nova.compute.manager [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.233 2 DEBUG oslo_concurrency.lockutils [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.234 2 DEBUG oslo_concurrency.lockutils [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.234 2 DEBUG oslo_concurrency.lockutils [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.234 2 DEBUG nova.compute.manager [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] No waiting events found dispatching network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.234 2 WARNING nova.compute.manager [req-37042f26-928e-4728-93e5-0a7b251c53dd req-a87e0089-2388-4289-98bb-16465f1fa62b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received unexpected event network-vif-plugged-108be440-11bf-41f6-a628-86fbac597b7d for instance with vm_state active and task_state deleting.
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.279 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.280 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.280 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] No VIF found with MAC fa:16:3e:40:7a:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.281 2 INFO nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Using config drive
Oct 11 08:52:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.310 2 DEBUG nova.storage.rbd_utils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] rbd image dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:38 compute-0 podman[307637]: 2025-10-11 08:52:38.358634167 +0000 UTC m=+0.090351875 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.403 2 DEBUG nova.network.neutron [-] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:38 compute-0 podman[307638]: 2025-10-11 08:52:38.404792243 +0000 UTC m=+0.138325382 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.424 2 INFO nova.compute.manager [-] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Took 0.68 seconds to deallocate network for instance.
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.477 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.478 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.483 2 DEBUG nova.compute.manager [req-310b79db-49ff-4075-aa84-d6129636a051 req-a942d1fd-0413-4ef4-afe4-f9b14281a34f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Received event network-vif-deleted-108be440-11bf-41f6-a628-86fbac597b7d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.598 2 DEBUG oslo_concurrency.processutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.802 2 INFO nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Creating config drive at /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d/disk.config
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.809 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf_ase0x0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:38 compute-0 nova_compute[260935]: 2025-10-11 08:52:38.976 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf_ase0x0" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.014 2 DEBUG nova.storage.rbd_utils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] rbd image dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.020 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d/disk.config dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:52:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192914383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.087 2 DEBUG nova.network.neutron [req-105183b8-0d0d-46b7-a878-25cb3a0dde91 req-64e46d8c-ba1b-4df1-9024-4292acd28f53 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Updated VIF entry in instance network info cache for port a5dcad71-6ce4-4a7c-95fc-900cb7ee620e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.088 2 DEBUG nova.network.neutron [req-105183b8-0d0d-46b7-a878-25cb3a0dde91 req-64e46d8c-ba1b-4df1-9024-4292acd28f53 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Updating instance_info_cache with network_info: [{"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.101 2 DEBUG oslo_concurrency.processutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.110 2 DEBUG nova.compute.provider_tree [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.115 2 DEBUG oslo_concurrency.lockutils [req-105183b8-0d0d-46b7-a878-25cb3a0dde91 req-64e46d8c-ba1b-4df1-9024-4292acd28f53 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-dd616f8b-2be3-455f-8979-ac6e9e57af2d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.135 2 DEBUG nova.scheduler.client.report [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.164 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.196 2 DEBUG nova.network.neutron [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Successfully updated port: 81c1d19a-c479-4381-9557-92f3e52b0cf0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:52:39 compute-0 ceph-mon[74313]: pgmap v1433: 321 pgs: 321 active+clean; 180 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 175 op/s
Oct 11 08:52:39 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1192914383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.211 2 INFO nova.scheduler.client.report [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Deleted allocations for instance 872b1c1d-bc87-4123-a599-4d64b89018aa
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.215 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.216 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquired lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.216 2 DEBUG nova.network.neutron [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.223 2 DEBUG oslo_concurrency.processutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d/disk.config dd616f8b-2be3-455f-8979-ac6e9e57af2d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.224 2 INFO nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Deleting local config drive /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d/disk.config because it was imported into RBD.
Oct 11 08:52:39 compute-0 kernel: tapa5dcad71-6c: entered promiscuous mode
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.296 2 DEBUG oslo_concurrency.lockutils [None req-945a89b9-902f-4e98-8219-c31dfa583812 7f37cd0ba15f412b88192a506c5cec79 b23ff73ca27245eeb1b46f51326a5568 - - default default] Lock "872b1c1d-bc87-4123-a599-4d64b89018aa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:39 compute-0 NetworkManager[44960]: <info>  [1760172759.2980] manager: (tapa5dcad71-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/159)
Oct 11 08:52:39 compute-0 systemd-udevd[307409]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:39 compute-0 ovn_controller[152945]: 2025-10-11T08:52:39Z|00328|binding|INFO|Claiming lport a5dcad71-6ce4-4a7c-95fc-900cb7ee620e for this chassis.
Oct 11 08:52:39 compute-0 ovn_controller[152945]: 2025-10-11T08:52:39Z|00329|binding|INFO|a5dcad71-6ce4-4a7c-95fc-900cb7ee620e: Claiming fa:16:3e:40:7a:6c 10.100.0.9
Oct 11 08:52:39 compute-0 NetworkManager[44960]: <info>  [1760172759.3460] device (tapa5dcad71-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:52:39 compute-0 NetworkManager[44960]: <info>  [1760172759.3476] device (tapa5dcad71-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.349 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:7a:6c 10.100.0.9'], port_security=['fa:16:3e:40:7a:6c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dd616f8b-2be3-455f-8979-ac6e9e57af2d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4f23928-daba-425d-b61c-e65657ec3386', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fb895e8125a437b8cc29be31706fccb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ff7ea8b2-ee94-49d1-aa81-ade5270f3ae5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d8b754-3910-4271-b26f-fbc82b8c8419, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.350 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a5dcad71-6ce4-4a7c-95fc-900cb7ee620e in datapath f4f23928-daba-425d-b61c-e65657ec3386 bound to our chassis
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.352 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4f23928-daba-425d-b61c-e65657ec3386
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.369 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3860cc-399e-490d-a36c-552ee685464a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.370 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf4f23928-d1 in ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.372 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf4f23928-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.373 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3848c99a-49b7-4417-b98c-733e68489775]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.373 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b241ae37-491e-498f-b361-2a70a346b0f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.379 2 DEBUG nova.network.neutron [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:52:39 compute-0 ovn_controller[152945]: 2025-10-11T08:52:39Z|00330|binding|INFO|Setting lport a5dcad71-6ce4-4a7c-95fc-900cb7ee620e ovn-installed in OVS
Oct 11 08:52:39 compute-0 ovn_controller[152945]: 2025-10-11T08:52:39Z|00331|binding|INFO|Setting lport a5dcad71-6ce4-4a7c-95fc-900cb7ee620e up in Southbound
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:39 compute-0 systemd-machined[215705]: New machine qemu-45-instance-00000029.
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.401 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[3602d432-15b3-4100-8883-a8a6dfd8f375]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:39 compute-0 systemd[1]: Started Virtual Machine qemu-45-instance-00000029.
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.428 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5fcf5d2d-3e34-4d74-a7a6-e519e3d08daa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.477 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[453df920-73cb-42b0-a2e1-712b47e4d1ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:39 compute-0 NetworkManager[44960]: <info>  [1760172759.4835] manager: (tapf4f23928-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/160)
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.482 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4c67fec5-d078-4b4d-9a54-10bd604ece68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.527 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1977eafb-5f4e-457b-86b0-6fe644076de1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.532 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fed1e6da-282a-430d-9433-9556cdbcdf5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:39 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 08:52:39 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 08:52:39 compute-0 NetworkManager[44960]: <info>  [1760172759.5595] device (tapf4f23928-d0): carrier: link connected
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.571 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f1938fa6-426f-43fc-ad8e-e2634478dfa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.593 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f6dfc8c-4195-4678-b478-ebd8b405b2eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4f23928-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:0c:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460426, 'reachable_time': 36731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307809, 'error': None, 'target': 'ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.617 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3328d1-e940-4d70-848c-b1b42dc1637e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:cd5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460426, 'tstamp': 460426}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307810, 'error': None, 'target': 'ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.639 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f0246e00-4255-47c8-ad8c-e48e509ea669]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4f23928-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:0c:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460426, 'reachable_time': 36731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307811, 'error': None, 'target': 'ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.684 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a19af0d-bca3-4fb0-b9ae-d81106abe5d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.787 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d162d837-590c-46f3-b9df-79759c425e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.789 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4f23928-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.789 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.790 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4f23928-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:39 compute-0 kernel: tapf4f23928-d0: entered promiscuous mode
Oct 11 08:52:39 compute-0 NetworkManager[44960]: <info>  [1760172759.7928] manager: (tapf4f23928-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.795 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4f23928-d0, col_values=(('external_ids', {'iface-id': 'a2760dfe-6dd6-4319-8bbc-a3a72b9651da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:39 compute-0 ovn_controller[152945]: 2025-10-11T08:52:39Z|00332|binding|INFO|Releasing lport a2760dfe-6dd6-4319-8bbc-a3a72b9651da from this chassis (sb_readonly=0)
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.798 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4f23928-daba-425d-b61c-e65657ec3386.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4f23928-daba-425d-b61c-e65657ec3386.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.799 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[28452e46-cf5d-4706-b779-2c98d4766379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.800 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-f4f23928-daba-425d-b61c-e65657ec3386
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/f4f23928-daba-425d-b61c-e65657ec3386.pid.haproxy
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID f4f23928-daba-425d-b61c-e65657ec3386
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:52:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:39.801 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386', 'env', 'PROCESS_TAG=haproxy-f4f23928-daba-425d-b61c-e65657ec3386', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f4f23928-daba-425d-b61c-e65657ec3386.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 180 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 137 op/s
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.950 2 DEBUG nova.network.neutron [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Updating instance_info_cache with network_info: [{"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.970 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Releasing lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.970 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Instance network_info: |[{"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.972 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Start _get_guest_xml network_info=[{"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.977 2 WARNING nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.982 2 DEBUG nova.virt.libvirt.host [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.982 2 DEBUG nova.virt.libvirt.host [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.985 2 DEBUG nova.virt.libvirt.host [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.986 2 DEBUG nova.virt.libvirt.host [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.986 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.986 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.987 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.987 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.987 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.987 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.988 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.988 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.988 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.988 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.988 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.988 2 DEBUG nova.virt.hardware [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:52:39 compute-0 nova_compute[260935]: 2025-10-11 08:52:39.991 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:40 compute-0 podman[307905]: 2025-10-11 08:52:40.280772286 +0000 UTC m=+0.069481206 container create f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.318 2 DEBUG nova.compute.manager [req-a5aa03d9-d534-4391-97ae-bdbf80c602df req-3cd44909-0238-4938-9d47-409343517324 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-changed-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.320 2 DEBUG nova.compute.manager [req-a5aa03d9-d534-4391-97ae-bdbf80c602df req-3cd44909-0238-4938-9d47-409343517324 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Refreshing instance network info cache due to event network-changed-81c1d19a-c479-4381-9557-92f3e52b0cf0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.321 2 DEBUG oslo_concurrency.lockutils [req-a5aa03d9-d534-4391-97ae-bdbf80c602df req-3cd44909-0238-4938-9d47-409343517324 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.321 2 DEBUG oslo_concurrency.lockutils [req-a5aa03d9-d534-4391-97ae-bdbf80c602df req-3cd44909-0238-4938-9d47-409343517324 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.322 2 DEBUG nova.network.neutron [req-a5aa03d9-d534-4391-97ae-bdbf80c602df req-3cd44909-0238-4938-9d47-409343517324 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Refreshing network info cache for port 81c1d19a-c479-4381-9557-92f3e52b0cf0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.325 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172760.3139884, dd616f8b-2be3-455f-8979-ac6e9e57af2d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.326 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] VM Started (Lifecycle Event)
Oct 11 08:52:40 compute-0 podman[307905]: 2025-10-11 08:52:40.243475961 +0000 UTC m=+0.032184861 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:52:40 compute-0 systemd[1]: Started libpod-conmon-f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175.scope.
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.358 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.367 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172760.3156483, dd616f8b-2be3-455f-8979-ac6e9e57af2d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.367 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] VM Paused (Lifecycle Event)
Oct 11 08:52:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:52:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93672fbd07a97df543920eefb51abcbc13f5aadfb34ff046cc470d22c59772b4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.386 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.394 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:52:40 compute-0 podman[307905]: 2025-10-11 08:52:40.404234008 +0000 UTC m=+0.192942978 container init f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.412 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:52:40 compute-0 podman[307905]: 2025-10-11 08:52:40.414810227 +0000 UTC m=+0.203519147 container start f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:52:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:52:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1721062941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:40 compute-0 neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386[307920]: [NOTICE]   (307924) : New worker (307928) forked
Oct 11 08:52:40 compute-0 neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386[307920]: [NOTICE]   (307924) : Loading success.
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.461 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.489 2 DEBUG nova.storage.rbd_utils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] rbd image 5750649d-960f-42d5-b127-de8b9a2bee8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.493 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:40 compute-0 ovn_controller[152945]: 2025-10-11T08:52:40Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:90:73 10.100.0.12
Oct 11 08:52:40 compute-0 ovn_controller[152945]: 2025-10-11T08:52:40Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:90:73 10.100.0.12
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.524 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172745.4866738, 840d2a1b-48bb-42ec-aa12-067e1b65ea39 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.525 2 INFO nova.compute.manager [-] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] VM Stopped (Lifecycle Event)
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.556 2 DEBUG nova.compute.manager [None req-8ec6ec35-7dfa-4c2d-adcf-996cf8d12f40 - - - - - -] [instance: 840d2a1b-48bb-42ec-aa12-067e1b65ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.594 2 DEBUG nova.compute.manager [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received event network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.595 2 DEBUG oslo_concurrency.lockutils [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.596 2 DEBUG oslo_concurrency.lockutils [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.596 2 DEBUG oslo_concurrency.lockutils [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.596 2 DEBUG nova.compute.manager [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Processing event network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.597 2 DEBUG nova.compute.manager [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received event network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.597 2 DEBUG oslo_concurrency.lockutils [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.598 2 DEBUG oslo_concurrency.lockutils [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.598 2 DEBUG oslo_concurrency.lockutils [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.599 2 DEBUG nova.compute.manager [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] No waiting events found dispatching network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.599 2 WARNING nova.compute.manager [req-77328f23-5d3a-4872-8293-5c3a289fca07 req-c458ab29-f284-493d-9ff0-dc56ee5109e2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received unexpected event network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e for instance with vm_state building and task_state spawning.
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.600 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.604 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172760.6039762, dd616f8b-2be3-455f-8979-ac6e9e57af2d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.604 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] VM Resumed (Lifecycle Event)
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.609 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.613 2 INFO nova.virt.libvirt.driver [-] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Instance spawned successfully.
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.613 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.636 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.643 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.646 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.647 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.647 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.648 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.648 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.648 2 DEBUG nova.virt.libvirt.driver [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.681 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.710 2 INFO nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Took 7.12 seconds to spawn the instance on the hypervisor.
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.710 2 DEBUG nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.771 2 INFO nova.compute.manager [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Took 8.13 seconds to build instance.
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.788 2 DEBUG oslo_concurrency.lockutils [None req-a82ca57d-9e53-4e21-b593-ce9829203cde 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:52:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3592473107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.934 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.936 2 DEBUG nova.virt.libvirt.vif [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1556963080',display_name='tempest-InstanceActionsTestJSON-server-1556963080',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1556963080',id=42,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e394c641aa0e46e2a7d8129cd88c9a01',ramdisk_id='',reservation_id='r-r0vpe0lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-761055393',owner_user_name='tempest-InstanceActionsTestJSON-761055393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:36Z,user_data=None,user_id='aeed30817a7740109e765d227ff2b78a',uuid=5750649d-960f-42d5-b127-de8b9a2bee8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.938 2 DEBUG nova.network.os_vif_util [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converting VIF {"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.940 2 DEBUG nova.network.os_vif_util [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.941 2 DEBUG nova.objects.instance [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5750649d-960f-42d5-b127-de8b9a2bee8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.958 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:52:40 compute-0 nova_compute[260935]:   <uuid>5750649d-960f-42d5-b127-de8b9a2bee8f</uuid>
Oct 11 08:52:40 compute-0 nova_compute[260935]:   <name>instance-0000002a</name>
Oct 11 08:52:40 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:52:40 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:52:40 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <nova:name>tempest-InstanceActionsTestJSON-server-1556963080</nova:name>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:52:39</nova:creationTime>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:52:40 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:52:40 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:52:40 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:52:40 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:52:40 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:52:40 compute-0 nova_compute[260935]:         <nova:user uuid="aeed30817a7740109e765d227ff2b78a">tempest-InstanceActionsTestJSON-761055393-project-member</nova:user>
Oct 11 08:52:40 compute-0 nova_compute[260935]:         <nova:project uuid="e394c641aa0e46e2a7d8129cd88c9a01">tempest-InstanceActionsTestJSON-761055393</nova:project>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:52:40 compute-0 nova_compute[260935]:         <nova:port uuid="81c1d19a-c479-4381-9557-92f3e52b0cf0">
Oct 11 08:52:40 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:52:40 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:52:40 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <system>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <entry name="serial">5750649d-960f-42d5-b127-de8b9a2bee8f</entry>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <entry name="uuid">5750649d-960f-42d5-b127-de8b9a2bee8f</entry>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     </system>
Oct 11 08:52:40 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:52:40 compute-0 nova_compute[260935]:   <os>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:   </os>
Oct 11 08:52:40 compute-0 nova_compute[260935]:   <features>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:   </features>
Oct 11 08:52:40 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:52:40 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:52:40 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/5750649d-960f-42d5-b127-de8b9a2bee8f_disk">
Oct 11 08:52:40 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       </source>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:52:40 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/5750649d-960f-42d5-b127-de8b9a2bee8f_disk.config">
Oct 11 08:52:40 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       </source>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:52:40 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:49:c0:a4"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <target dev="tap81c1d19a-c4"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/console.log" append="off"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <video>
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     </video>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:52:40 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:52:40 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:52:40 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:52:40 compute-0 nova_compute[260935]: </domain>
Oct 11 08:52:40 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.960 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Preparing to wait for external event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.960 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.961 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.961 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.962 2 DEBUG nova.virt.libvirt.vif [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1556963080',display_name='tempest-InstanceActionsTestJSON-server-1556963080',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1556963080',id=42,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e394c641aa0e46e2a7d8129cd88c9a01',ramdisk_id='',reservation_id='r-r0vpe0lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-761055393',owner_user_name='tempest-InstanceAc
tionsTestJSON-761055393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:36Z,user_data=None,user_id='aeed30817a7740109e765d227ff2b78a',uuid=5750649d-960f-42d5-b127-de8b9a2bee8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.962 2 DEBUG nova.network.os_vif_util [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converting VIF {"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.963 2 DEBUG nova.network.os_vif_util [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.963 2 DEBUG os_vif [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.965 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.965 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.968 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81c1d19a-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.969 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap81c1d19a-c4, col_values=(('external_ids', {'iface-id': '81c1d19a-c479-4381-9557-92f3e52b0cf0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:c0:a4', 'vm-uuid': '5750649d-960f-42d5-b127-de8b9a2bee8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:40 compute-0 NetworkManager[44960]: <info>  [1760172760.9717] manager: (tap81c1d19a-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:40 compute-0 nova_compute[260935]: 2025-10-11 08:52:40.979 2 INFO os_vif [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4')
Oct 11 08:52:41 compute-0 nova_compute[260935]: 2025-10-11 08:52:41.032 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:52:41 compute-0 nova_compute[260935]: 2025-10-11 08:52:41.033 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:52:41 compute-0 nova_compute[260935]: 2025-10-11 08:52:41.034 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] No VIF found with MAC fa:16:3e:49:c0:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:52:41 compute-0 nova_compute[260935]: 2025-10-11 08:52:41.034 2 INFO nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Using config drive
Oct 11 08:52:41 compute-0 nova_compute[260935]: 2025-10-11 08:52:41.070 2 DEBUG nova.storage.rbd_utils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] rbd image 5750649d-960f-42d5-b127-de8b9a2bee8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:41 compute-0 ceph-mon[74313]: pgmap v1434: 321 pgs: 321 active+clean; 180 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 137 op/s
Oct 11 08:52:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1721062941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3592473107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:41 compute-0 nova_compute[260935]: 2025-10-11 08:52:41.554 2 INFO nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Creating config drive at /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/disk.config
Oct 11 08:52:41 compute-0 nova_compute[260935]: 2025-10-11 08:52:41.565 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6j82h1qi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:41 compute-0 nova_compute[260935]: 2025-10-11 08:52:41.732 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6j82h1qi" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:41 compute-0 nova_compute[260935]: 2025-10-11 08:52:41.774 2 DEBUG nova.storage.rbd_utils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] rbd image 5750649d-960f-42d5-b127-de8b9a2bee8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:41 compute-0 nova_compute[260935]: 2025-10-11 08:52:41.778 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/disk.config 5750649d-960f-42d5-b127-de8b9a2bee8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:41 compute-0 nova_compute[260935]: 2025-10-11 08:52:41.820 2 DEBUG nova.network.neutron [req-a5aa03d9-d534-4391-97ae-bdbf80c602df req-3cd44909-0238-4938-9d47-409343517324 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Updated VIF entry in instance network info cache for port 81c1d19a-c479-4381-9557-92f3e52b0cf0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:52:41 compute-0 nova_compute[260935]: 2025-10-11 08:52:41.821 2 DEBUG nova.network.neutron [req-a5aa03d9-d534-4391-97ae-bdbf80c602df req-3cd44909-0238-4938-9d47-409343517324 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Updating instance_info_cache with network_info: [{"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:41 compute-0 nova_compute[260935]: 2025-10-11 08:52:41.846 2 DEBUG oslo_concurrency.lockutils [req-a5aa03d9-d534-4391-97ae-bdbf80c602df req-3cd44909-0238-4938-9d47-409343517324 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:52:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 180 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 137 op/s
Oct 11 08:52:41 compute-0 nova_compute[260935]: 2025-10-11 08:52:41.993 2 DEBUG oslo_concurrency.processutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/disk.config 5750649d-960f-42d5-b127-de8b9a2bee8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:41 compute-0 nova_compute[260935]: 2025-10-11 08:52:41.994 2 INFO nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Deleting local config drive /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/disk.config because it was imported into RBD.
Oct 11 08:52:42 compute-0 kernel: tap81c1d19a-c4: entered promiscuous mode
Oct 11 08:52:42 compute-0 NetworkManager[44960]: <info>  [1760172762.0790] manager: (tap81c1d19a-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Oct 11 08:52:42 compute-0 ovn_controller[152945]: 2025-10-11T08:52:42Z|00333|binding|INFO|Claiming lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 for this chassis.
Oct 11 08:52:42 compute-0 ovn_controller[152945]: 2025-10-11T08:52:42Z|00334|binding|INFO|81c1d19a-c479-4381-9557-92f3e52b0cf0: Claiming fa:16:3e:49:c0:a4 10.100.0.10
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.088 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:c0:a4 10.100.0.10'], port_security=['fa:16:3e:49:c0:a4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5750649d-960f-42d5-b127-de8b9a2bee8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e394c641aa0e46e2a7d8129cd88c9a01', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e0772185-519f-4131-b64e-b4728e2c0dd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf506c8e-a600-4595-8f2f-eb0ff97eb2d2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=81c1d19a-c479-4381-9557-92f3e52b0cf0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.089 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 81c1d19a-c479-4381-9557-92f3e52b0cf0 in datapath aa9adc37-a18c-4b31-b4cc-d46c43f91e9b bound to our chassis
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.091 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa9adc37-a18c-4b31-b4cc-d46c43f91e9b
Oct 11 08:52:42 compute-0 ovn_controller[152945]: 2025-10-11T08:52:42Z|00335|binding|INFO|Setting lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 ovn-installed in OVS
Oct 11 08:52:42 compute-0 ovn_controller[152945]: 2025-10-11T08:52:42Z|00336|binding|INFO|Setting lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 up in Southbound
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.117 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0e8a4382-883c-4552-887d-74cb430a90f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.118 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa9adc37-a1 in ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.120 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa9adc37-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.120 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f21d3c8e-eff6-4a85-af4a-d515c7d13957]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.121 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7349045f-3f32-4a88-b249-b829069f4a31]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.138 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1b54a8f3-2bcc-4054-b830-aac15465ad4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:42 compute-0 systemd-machined[215705]: New machine qemu-46-instance-0000002a.
Oct 11 08:52:42 compute-0 systemd[1]: Started Virtual Machine qemu-46-instance-0000002a.
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.152 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e81a960a-762c-421c-aea0-3f38b79bb5d9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:42 compute-0 systemd-udevd[308055]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:52:42 compute-0 NetworkManager[44960]: <info>  [1760172762.1857] device (tap81c1d19a-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:52:42 compute-0 NetworkManager[44960]: <info>  [1760172762.1873] device (tap81c1d19a-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.198 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[022edfd7-1a3f-4cd7-9d8d-c0ccdd52a9f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.206 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1a444b95-08d7-4f66-afe4-14f69de5b2bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:42 compute-0 NetworkManager[44960]: <info>  [1760172762.2086] manager: (tapaa9adc37-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/164)
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.253 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[07e56530-1f9e-4bbb-a0fd-14adc14e7e6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.257 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[79d91093-29ea-42ce-8faf-a9a5a7b514f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:42 compute-0 NetworkManager[44960]: <info>  [1760172762.2900] device (tapaa9adc37-a0): carrier: link connected
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.297 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[88ed452e-5aa6-47de-8191-c78f0fd9ddc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.320 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a80b66e4-57e5-4b3f-91f5-3956b979c1d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa9adc37-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:36:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460699, 'reachable_time': 26270, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308084, 'error': None, 'target': 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.350 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e109ac35-3980-4790-ac45-a39d5e97711e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:3618'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460699, 'tstamp': 460699}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308085, 'error': None, 'target': 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.387 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[90a4cd13-f44b-4b0b-9e15-6a0cb0154457]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa9adc37-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:36:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460699, 'reachable_time': 26270, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308086, 'error': None, 'target': 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.430 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eba3ce42-d00b-4e17-be33-f5e2db698605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.527 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1636bd-1adc-4dd8-9332-c75c0c243127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.529 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa9adc37-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.530 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.531 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa9adc37-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:42 compute-0 kernel: tapaa9adc37-a0: entered promiscuous mode
Oct 11 08:52:42 compute-0 NetworkManager[44960]: <info>  [1760172762.5346] manager: (tapaa9adc37-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.539 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa9adc37-a0, col_values=(('external_ids', {'iface-id': '3043e870-2e1f-4df6-b72c-a581ec61613e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:42 compute-0 ovn_controller[152945]: 2025-10-11T08:52:42Z|00337|binding|INFO|Releasing lport 3043e870-2e1f-4df6-b72c-a581ec61613e from this chassis (sb_readonly=0)
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.577 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa9adc37-a18c-4b31-b4cc-d46c43f91e9b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa9adc37-a18c-4b31-b4cc-d46c43f91e9b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.578 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[842d28c4-d081-4a69-b14c-d8a744db63e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.580 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/aa9adc37-a18c-4b31-b4cc-d46c43f91e9b.pid.haproxy
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID aa9adc37-a18c-4b31-b4cc-d46c43f91e9b
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:52:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:42.581 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'env', 'PROCESS_TAG=haproxy-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa9adc37-a18c-4b31-b4cc-d46c43f91e9b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.907 2 DEBUG nova.compute.manager [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.910 2 DEBUG oslo_concurrency.lockutils [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.911 2 DEBUG oslo_concurrency.lockutils [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.911 2 DEBUG oslo_concurrency.lockutils [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.911 2 DEBUG nova.compute.manager [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Processing event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.912 2 DEBUG nova.compute.manager [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.912 2 DEBUG oslo_concurrency.lockutils [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.912 2 DEBUG oslo_concurrency.lockutils [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.913 2 DEBUG oslo_concurrency.lockutils [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.913 2 DEBUG nova.compute.manager [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] No waiting events found dispatching network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:42 compute-0 nova_compute[260935]: 2025-10-11 08:52:42.913 2 WARNING nova.compute.manager [req-bea205ab-8ad1-4f21-912c-2a2d7b7800b7 req-a19eebee-e85f-4277-a814-8daa1ff425f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received unexpected event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 for instance with vm_state building and task_state spawning.
Oct 11 08:52:43 compute-0 podman[308159]: 2025-10-11 08:52:43.069994335 +0000 UTC m=+0.113502011 container create 94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.084 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.085 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.085 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.085 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.086 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.088 2 INFO nova.compute.manager [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Terminating instance
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.090 2 DEBUG nova.compute.manager [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:52:43 compute-0 podman[308159]: 2025-10-11 08:52:43.006132229 +0000 UTC m=+0.049639915 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:52:43 compute-0 kernel: tapa5dcad71-6c (unregistering): left promiscuous mode
Oct 11 08:52:43 compute-0 NetworkManager[44960]: <info>  [1760172763.1385] device (tapa5dcad71-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:43 compute-0 ovn_controller[152945]: 2025-10-11T08:52:43Z|00338|binding|INFO|Releasing lport a5dcad71-6ce4-4a7c-95fc-900cb7ee620e from this chassis (sb_readonly=0)
Oct 11 08:52:43 compute-0 ovn_controller[152945]: 2025-10-11T08:52:43Z|00339|binding|INFO|Setting lport a5dcad71-6ce4-4a7c-95fc-900cb7ee620e down in Southbound
Oct 11 08:52:43 compute-0 systemd[1]: Started libpod-conmon-94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847.scope.
Oct 11 08:52:43 compute-0 ovn_controller[152945]: 2025-10-11T08:52:43Z|00340|binding|INFO|Removing iface tapa5dcad71-6c ovn-installed in OVS
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.172 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:7a:6c 10.100.0.9'], port_security=['fa:16:3e:40:7a:6c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dd616f8b-2be3-455f-8979-ac6e9e57af2d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4f23928-daba-425d-b61c-e65657ec3386', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fb895e8125a437b8cc29be31706fccb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ff7ea8b2-ee94-49d1-aa81-ade5270f3ae5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d8b754-3910-4271-b26f-fbc82b8c8419, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:52:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:43 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Deactivated successfully.
Oct 11 08:52:43 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Consumed 3.324s CPU time.
Oct 11 08:52:43 compute-0 systemd-machined[215705]: Machine qemu-45-instance-00000029 terminated.
Oct 11 08:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aaec042284ad1b73f2650a678143c3397946f80b5c9fd24812807415a14b7ec/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:52:43 compute-0 podman[308159]: 2025-10-11 08:52:43.223333052 +0000 UTC m=+0.266840778 container init 94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:52:43 compute-0 ceph-mon[74313]: pgmap v1435: 321 pgs: 321 active+clean; 180 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 137 op/s
Oct 11 08:52:43 compute-0 podman[308159]: 2025-10-11 08:52:43.234988811 +0000 UTC m=+0.278496487 container start 94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 08:52:43 compute-0 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308177]: [NOTICE]   (308181) : New worker (308183) forked
Oct 11 08:52:43 compute-0 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308177]: [NOTICE]   (308181) : Loading success.
Oct 11 08:52:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:52:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.287 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a5dcad71-6ce4-4a7c-95fc-900cb7ee620e in datapath f4f23928-daba-425d-b61c-e65657ec3386 unbound from our chassis
Oct 11 08:52:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.289 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f4f23928-daba-425d-b61c-e65657ec3386, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:52:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.289 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d60c86a7-fc0a-491c-b094-55a5129ac6b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.290 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386 namespace which is not needed anymore
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.350 2 INFO nova.virt.libvirt.driver [-] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Instance destroyed successfully.
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.351 2 DEBUG nova.objects.instance [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lazy-loading 'resources' on Instance uuid dd616f8b-2be3-455f-8979-ac6e9e57af2d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:43 compute-0 ovn_controller[152945]: 2025-10-11T08:52:43Z|00341|binding|INFO|Releasing lport a2760dfe-6dd6-4319-8bbc-a3a72b9651da from this chassis (sb_readonly=0)
Oct 11 08:52:43 compute-0 ovn_controller[152945]: 2025-10-11T08:52:43Z|00342|binding|INFO|Releasing lport 3043e870-2e1f-4df6-b72c-a581ec61613e from this chassis (sb_readonly=0)
Oct 11 08:52:43 compute-0 ovn_controller[152945]: 2025-10-11T08:52:43Z|00343|binding|INFO|Releasing lport 7a0f31c4-9bda-45df-9fec-aacc40fc88c1 from this chassis (sb_readonly=0)
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.371 2 DEBUG nova.virt.libvirt.vif [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:52:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1610135239',display_name='tempest-ImagesNegativeTestJSON-server-1610135239',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1610135239',id=41,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6fb895e8125a437b8cc29be31706fccb',ramdisk_id='',reservation_id='r-xtlxm36r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-1030914515',owner_user_name='tempest-ImagesNegativeTestJSON-1030914515-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:52:40Z,user_data=None,user_id='713a9b01d75f4fd2b8ad41bac3ba6343',uuid=dd616f8b-2be3-455f-8979-ac6e9e57af2d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.372 2 DEBUG nova.network.os_vif_util [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Converting VIF {"id": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "address": "fa:16:3e:40:7a:6c", "network": {"id": "f4f23928-daba-425d-b61c-e65657ec3386", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-643614329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fb895e8125a437b8cc29be31706fccb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5dcad71-6c", "ovs_interfaceid": "a5dcad71-6ce4-4a7c-95fc-900cb7ee620e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.374 2 DEBUG nova.network.os_vif_util [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:7a:6c,bridge_name='br-int',has_traffic_filtering=True,id=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e,network=Network(f4f23928-daba-425d-b61c-e65657ec3386),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5dcad71-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.374 2 DEBUG os_vif [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:7a:6c,bridge_name='br-int',has_traffic_filtering=True,id=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e,network=Network(f4f23928-daba-425d-b61c-e65657ec3386),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5dcad71-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.378 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5dcad71-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:52:43 compute-0 neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386[307920]: [NOTICE]   (307924) : haproxy version is 2.8.14-c23fe91
Oct 11 08:52:43 compute-0 neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386[307920]: [NOTICE]   (307924) : path to executable is /usr/sbin/haproxy
Oct 11 08:52:43 compute-0 neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386[307920]: [WARNING]  (307924) : Exiting Master process...
Oct 11 08:52:43 compute-0 neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386[307920]: [WARNING]  (307924) : Exiting Master process...
Oct 11 08:52:43 compute-0 neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386[307920]: [ALERT]    (307924) : Current worker (307928) exited with code 143 (Terminated)
Oct 11 08:52:43 compute-0 neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386[307920]: [WARNING]  (307924) : All workers exited. Exiting... (0)
Oct 11 08:52:43 compute-0 systemd[1]: libpod-f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175.scope: Deactivated successfully.
Oct 11 08:52:43 compute-0 podman[308219]: 2025-10-11 08:52:43.484984251 +0000 UTC m=+0.068463147 container died f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:52:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175-userdata-shm.mount: Deactivated successfully.
Oct 11 08:52:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-93672fbd07a97df543920eefb51abcbc13f5aadfb34ff046cc470d22c59772b4-merged.mount: Deactivated successfully.
Oct 11 08:52:43 compute-0 podman[308219]: 2025-10-11 08:52:43.538974648 +0000 UTC m=+0.122453584 container cleanup f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.544 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.545 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172763.5433564, 5750649d-960f-42d5-b127-de8b9a2bee8f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.545 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] VM Started (Lifecycle Event)
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.548 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.553 2 INFO nova.virt.libvirt.driver [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Instance spawned successfully.
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.553 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:43 compute-0 ovn_controller[152945]: 2025-10-11T08:52:43Z|00344|binding|INFO|Releasing lport a2760dfe-6dd6-4319-8bbc-a3a72b9651da from this chassis (sb_readonly=0)
Oct 11 08:52:43 compute-0 ovn_controller[152945]: 2025-10-11T08:52:43Z|00345|binding|INFO|Releasing lport 3043e870-2e1f-4df6-b72c-a581ec61613e from this chassis (sb_readonly=0)
Oct 11 08:52:43 compute-0 ovn_controller[152945]: 2025-10-11T08:52:43Z|00346|binding|INFO|Releasing lport 7a0f31c4-9bda-45df-9fec-aacc40fc88c1 from this chassis (sb_readonly=0)
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.564 2 INFO os_vif [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:7a:6c,bridge_name='br-int',has_traffic_filtering=True,id=a5dcad71-6ce4-4a7c-95fc-900cb7ee620e,network=Network(f4f23928-daba-425d-b61c-e65657ec3386),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5dcad71-6c')
Oct 11 08:52:43 compute-0 systemd[1]: libpod-conmon-f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175.scope: Deactivated successfully.
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.596 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.609 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:52:43 compute-0 podman[308246]: 2025-10-11 08:52:43.61435669 +0000 UTC m=+0.047178915 container remove f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.615 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.615 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.616 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.616 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.616 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.617 2 DEBUG nova.virt.libvirt.driver [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.623 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[188c5fce-cf86-42c8-be38-611022c967fe]: (4, ('Sat Oct 11 08:52:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386 (f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175)\nf8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175\nSat Oct 11 08:52:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386 (f8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175)\nf8e09cb4eedc0e259649ff3c36a48535508fb7bc291d534579493db0016a0175\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.626 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6514d528-699e-4158-b6c3-f158e2c0a034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.627 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4f23928-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:43 compute-0 kernel: tapf4f23928-d0: left promiscuous mode
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.636 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[39324262-c8f5-4a3b-a90f-0706b6ad145c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.653 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.653 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172763.54498, 5750649d-960f-42d5-b127-de8b9a2bee8f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.654 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] VM Paused (Lifecycle Event)
Oct 11 08:52:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.656 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5b9a98-967b-4ed1-8595-a6963aa839ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.659 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[66f91a56-c875-4344-a80e-7e7914bb09cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.684 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f89e000f-ea44-4157-a751-1f9ed5861ad9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460417, 'reachable_time': 20140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308279, 'error': None, 'target': 'ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.689 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f4f23928-daba-425d-b61c-e65657ec3386 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:52:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:43.689 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[899bdcbb-400a-4ab8-99fb-755fc331fe4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:43 compute-0 systemd[1]: run-netns-ovnmeta\x2df4f23928\x2ddaba\x2d425d\x2db61c\x2de65657ec3386.mount: Deactivated successfully.
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.697 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.703 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172763.5480337, 5750649d-960f-42d5-b127-de8b9a2bee8f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.703 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] VM Resumed (Lifecycle Event)
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.707 2 INFO nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Took 7.20 seconds to spawn the instance on the hypervisor.
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.707 2 DEBUG nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.740 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.748 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.785 2 INFO nova.compute.manager [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Took 8.25 seconds to build instance.
Oct 11 08:52:43 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.805 2 DEBUG oslo_concurrency.lockutils [None req-548173ae-bbbe-4fb0-825e-0c1b62bb69a3 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 213 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 288 op/s
Oct 11 08:52:44 compute-0 nova_compute[260935]: 2025-10-11 08:52:43.999 2 INFO nova.virt.libvirt.driver [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Deleting instance files /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d_del
Oct 11 08:52:44 compute-0 nova_compute[260935]: 2025-10-11 08:52:44.000 2 INFO nova.virt.libvirt.driver [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Deletion of /var/lib/nova/instances/dd616f8b-2be3-455f-8979-ac6e9e57af2d_del complete
Oct 11 08:52:44 compute-0 nova_compute[260935]: 2025-10-11 08:52:44.095 2 INFO nova.compute.manager [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Took 1.00 seconds to destroy the instance on the hypervisor.
Oct 11 08:52:44 compute-0 nova_compute[260935]: 2025-10-11 08:52:44.096 2 DEBUG oslo.service.loopingcall [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:52:44 compute-0 nova_compute[260935]: 2025-10-11 08:52:44.096 2 DEBUG nova.compute.manager [-] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:52:44 compute-0 nova_compute[260935]: 2025-10-11 08:52:44.096 2 DEBUG nova.network.neutron [-] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:52:45 compute-0 ceph-mon[74313]: pgmap v1436: 321 pgs: 321 active+clean; 213 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 288 op/s
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.295 2 DEBUG nova.compute.manager [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received event network-vif-unplugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.296 2 DEBUG oslo_concurrency.lockutils [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.297 2 DEBUG oslo_concurrency.lockutils [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.297 2 DEBUG oslo_concurrency.lockutils [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.298 2 DEBUG nova.compute.manager [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] No waiting events found dispatching network-vif-unplugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.298 2 DEBUG nova.compute.manager [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received event network-vif-unplugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.299 2 DEBUG nova.compute.manager [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received event network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.299 2 DEBUG oslo_concurrency.lockutils [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.300 2 DEBUG oslo_concurrency.lockutils [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.300 2 DEBUG oslo_concurrency.lockutils [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.300 2 DEBUG nova.compute.manager [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] No waiting events found dispatching network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.301 2 WARNING nova.compute.manager [req-38873521-963a-4ceb-9a0d-3e9a9c774fda req-01d61b01-611d-46ea-a305-82cec5bd3d4d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received unexpected event network-vif-plugged-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e for instance with vm_state active and task_state deleting.
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.459 2 DEBUG nova.network.neutron [-] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.476 2 INFO nova.compute.manager [-] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Took 1.38 seconds to deallocate network for instance.
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.531 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.533 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.551 2 DEBUG nova.compute.manager [req-a7d8479e-41c3-4975-980a-7b86dd80df05 req-1cb1bca1-34ae-4f1f-b45b-e57931351c31 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Received event network-vif-deleted-a5dcad71-6ce4-4a7c-95fc-900cb7ee620e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.625 2 DEBUG oslo_concurrency.processutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 213 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 224 op/s
Oct 11 08:52:45 compute-0 nova_compute[260935]: 2025-10-11 08:52:45.939 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 08:52:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:52:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2545751957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:46 compute-0 nova_compute[260935]: 2025-10-11 08:52:46.181 2 DEBUG oslo_concurrency.processutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:46 compute-0 nova_compute[260935]: 2025-10-11 08:52:46.189 2 DEBUG nova.compute.provider_tree [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:52:46 compute-0 nova_compute[260935]: 2025-10-11 08:52:46.209 2 DEBUG nova.scheduler.client.report [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:52:46 compute-0 nova_compute[260935]: 2025-10-11 08:52:46.238 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:46 compute-0 nova_compute[260935]: 2025-10-11 08:52:46.245 2 DEBUG oslo_concurrency.lockutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:46 compute-0 nova_compute[260935]: 2025-10-11 08:52:46.246 2 DEBUG oslo_concurrency.lockutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:46 compute-0 nova_compute[260935]: 2025-10-11 08:52:46.247 2 INFO nova.compute.manager [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Rebooting instance
Oct 11 08:52:46 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2545751957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:46 compute-0 nova_compute[260935]: 2025-10-11 08:52:46.272 2 DEBUG oslo_concurrency.lockutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:52:46 compute-0 nova_compute[260935]: 2025-10-11 08:52:46.272 2 DEBUG oslo_concurrency.lockutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquired lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:52:46 compute-0 nova_compute[260935]: 2025-10-11 08:52:46.273 2 DEBUG nova.network.neutron [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:52:46 compute-0 nova_compute[260935]: 2025-10-11 08:52:46.284 2 INFO nova.scheduler.client.report [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Deleted allocations for instance dd616f8b-2be3-455f-8979-ac6e9e57af2d
Oct 11 08:52:46 compute-0 nova_compute[260935]: 2025-10-11 08:52:46.408 2 DEBUG oslo_concurrency.lockutils [None req-f23160f8-92e3-41f9-ba4d-288df21f8b7a 713a9b01d75f4fd2b8ad41bac3ba6343 6fb895e8125a437b8cc29be31706fccb - - default default] Lock "dd616f8b-2be3-455f-8979-ac6e9e57af2d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:47 compute-0 ceph-mon[74313]: pgmap v1437: 321 pgs: 321 active+clean; 213 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 224 op/s
Oct 11 08:52:47 compute-0 nova_compute[260935]: 2025-10-11 08:52:47.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 167 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 321 op/s
Oct 11 08:52:48 compute-0 kernel: tap20c8164c-07 (unregistering): left promiscuous mode
Oct 11 08:52:48 compute-0 NetworkManager[44960]: <info>  [1760172768.2077] device (tap20c8164c-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:48 compute-0 ovn_controller[152945]: 2025-10-11T08:52:48Z|00347|binding|INFO|Releasing lport 20c8164c-0779-4589-b3d3-afc10a47631f from this chassis (sb_readonly=0)
Oct 11 08:52:48 compute-0 ovn_controller[152945]: 2025-10-11T08:52:48Z|00348|binding|INFO|Setting lport 20c8164c-0779-4589-b3d3-afc10a47631f down in Southbound
Oct 11 08:52:48 compute-0 ovn_controller[152945]: 2025-10-11T08:52:48Z|00349|binding|INFO|Removing iface tap20c8164c-07 ovn-installed in OVS
Oct 11 08:52:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.234 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:90:73 10.100.0.12'], port_security=['fa:16:3e:bf:90:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '98fabab3-6b4a-44f3-b232-f23f34f4e19f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=20c8164c-0779-4589-b3d3-afc10a47631f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:52:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.236 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 20c8164c-0779-4589-b3d3-afc10a47631f in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd unbound from our chassis
Oct 11 08:52:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.239 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:52:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.241 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[67cfa682-d7dd-46c8-b710-a32089e927d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.242 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace which is not needed anymore
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:52:48 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000028.scope: Deactivated successfully.
Oct 11 08:52:48 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000028.scope: Consumed 13.280s CPU time.
Oct 11 08:52:48 compute-0 systemd-machined[215705]: Machine qemu-44-instance-00000028 terminated.
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:48 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[307041]: [NOTICE]   (307045) : haproxy version is 2.8.14-c23fe91
Oct 11 08:52:48 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[307041]: [NOTICE]   (307045) : path to executable is /usr/sbin/haproxy
Oct 11 08:52:48 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[307041]: [WARNING]  (307045) : Exiting Master process...
Oct 11 08:52:48 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[307041]: [WARNING]  (307045) : Exiting Master process...
Oct 11 08:52:48 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[307041]: [ALERT]    (307045) : Current worker (307047) exited with code 143 (Terminated)
Oct 11 08:52:48 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[307041]: [WARNING]  (307045) : All workers exited. Exiting... (0)
Oct 11 08:52:48 compute-0 systemd[1]: libpod-cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3.scope: Deactivated successfully.
Oct 11 08:52:48 compute-0 podman[308325]: 2025-10-11 08:52:48.445047653 +0000 UTC m=+0.071504694 container died cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:52:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3-userdata-shm.mount: Deactivated successfully.
Oct 11 08:52:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-95e0d34e1c9b5cb00bbf8b4589ff9eade911de7e540b11ead99c1576f4f3ca7c-merged.mount: Deactivated successfully.
Oct 11 08:52:48 compute-0 podman[308325]: 2025-10-11 08:52:48.507671254 +0000 UTC m=+0.134128265 container cleanup cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:52:48 compute-0 systemd[1]: libpod-conmon-cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3.scope: Deactivated successfully.
Oct 11 08:52:48 compute-0 podman[308366]: 2025-10-11 08:52:48.586333768 +0000 UTC m=+0.051343023 container remove cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 11 08:52:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.597 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[87495b02-ff49-4b78-8ba0-391cea7d9c34]: (4, ('Sat Oct 11 08:52:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3)\ncf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3\nSat Oct 11 08:52:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (cf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3)\ncf84e2cb23338f38ffba7aa87fde853c0346e0e3a636c10ffc6f7b131567dfc3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.599 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[746c2d9d-ac21-4595-8fd8-36e0bac42fb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.600 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:48 compute-0 kernel: tape5d4fc7a-10: left promiscuous mode
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.645 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[52300dc3-1664-45fc-a3a0-e0805eec68de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.683 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[56ea8e36-61bb-4fae-ba9f-3c4a1ae68ba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.685 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca6de7c-07b6-450f-b9b3-ec750af08fb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.685 2 DEBUG nova.network.neutron [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Updating instance_info_cache with network_info: [{"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.711 2 DEBUG oslo_concurrency.lockutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Releasing lock "refresh_cache-5750649d-960f-42d5-b127-de8b9a2bee8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.713 2 DEBUG nova.compute.manager [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.713 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33b1c907-9305-45c8-ad84-28f64a0d5a7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459037, 'reachable_time': 32395, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308384, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:48 compute-0 systemd[1]: run-netns-ovnmeta\x2de5d4fc7a\x2d1d47\x2d4774\x2daad7\x2d8bb2b388fbbd.mount: Deactivated successfully.
Oct 11 08:52:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.720 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:52:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:48.720 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[4c116757-6d35-40b0-b39f-1f64d3080d3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.907 2 DEBUG nova.compute.manager [req-ff92a531-8736-4cc4-b6a0-18ecba7b2e04 req-fbf6e7ea-53f3-488b-a330-da465885419a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-unplugged-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.907 2 DEBUG oslo_concurrency.lockutils [req-ff92a531-8736-4cc4-b6a0-18ecba7b2e04 req-fbf6e7ea-53f3-488b-a330-da465885419a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.908 2 DEBUG oslo_concurrency.lockutils [req-ff92a531-8736-4cc4-b6a0-18ecba7b2e04 req-fbf6e7ea-53f3-488b-a330-da465885419a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.908 2 DEBUG oslo_concurrency.lockutils [req-ff92a531-8736-4cc4-b6a0-18ecba7b2e04 req-fbf6e7ea-53f3-488b-a330-da465885419a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.908 2 DEBUG nova.compute.manager [req-ff92a531-8736-4cc4-b6a0-18ecba7b2e04 req-fbf6e7ea-53f3-488b-a330-da465885419a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] No waiting events found dispatching network-vif-unplugged-20c8164c-0779-4589-b3d3-afc10a47631f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.909 2 WARNING nova.compute.manager [req-ff92a531-8736-4cc4-b6a0-18ecba7b2e04 req-fbf6e7ea-53f3-488b-a330-da465885419a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received unexpected event network-vif-unplugged-20c8164c-0779-4589-b3d3-afc10a47631f for instance with vm_state active and task_state rebuilding.
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.956 2 INFO nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance shutdown successfully after 13 seconds.
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.968 2 INFO nova.virt.libvirt.driver [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance destroyed successfully.
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.974 2 INFO nova.virt.libvirt.driver [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance destroyed successfully.
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.975 2 DEBUG nova.virt.libvirt.vif [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:52:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1265647930',display_name='tempest-ServerDiskConfigTestJSON-server-1265647930',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1265647930',id=40,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-qeuhsate',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},
tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:34Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=98fabab3-6b4a-44f3-b232-f23f34f4e19f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.977 2 DEBUG nova.network.os_vif_util [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.978 2 DEBUG nova.network.os_vif_util [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.978 2 DEBUG os_vif [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.980 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20c8164c-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:52:48 compute-0 nova_compute[260935]: 2025-10-11 08:52:48.989 2 INFO os_vif [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07')
Oct 11 08:52:49 compute-0 kernel: tap81c1d19a-c4 (unregistering): left promiscuous mode
Oct 11 08:52:49 compute-0 NetworkManager[44960]: <info>  [1760172769.0110] device (tap81c1d19a-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:49 compute-0 ovn_controller[152945]: 2025-10-11T08:52:49Z|00350|binding|INFO|Releasing lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 from this chassis (sb_readonly=0)
Oct 11 08:52:49 compute-0 ovn_controller[152945]: 2025-10-11T08:52:49Z|00351|binding|INFO|Setting lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 down in Southbound
Oct 11 08:52:49 compute-0 ovn_controller[152945]: 2025-10-11T08:52:49Z|00352|binding|INFO|Removing iface tap81c1d19a-c4 ovn-installed in OVS
Oct 11 08:52:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.028 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:c0:a4 10.100.0.10'], port_security=['fa:16:3e:49:c0:a4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5750649d-960f-42d5-b127-de8b9a2bee8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e394c641aa0e46e2a7d8129cd88c9a01', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e0772185-519f-4131-b64e-b4728e2c0dd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf506c8e-a600-4595-8f2f-eb0ff97eb2d2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=81c1d19a-c479-4381-9557-92f3e52b0cf0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:52:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.029 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 81c1d19a-c479-4381-9557-92f3e52b0cf0 in datapath aa9adc37-a18c-4b31-b4cc-d46c43f91e9b unbound from our chassis
Oct 11 08:52:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.030 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:52:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.031 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[69a16a00-7120-4a51-9b02-4e21c3eef7f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.031 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b namespace which is not needed anymore
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:49 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Oct 11 08:52:49 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000002a.scope: Consumed 6.736s CPU time.
Oct 11 08:52:49 compute-0 systemd-machined[215705]: Machine qemu-46-instance-0000002a terminated.
Oct 11 08:52:49 compute-0 systemd-udevd[308305]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:52:49 compute-0 kernel: tap81c1d19a-c4: entered promiscuous mode
Oct 11 08:52:49 compute-0 NetworkManager[44960]: <info>  [1760172769.1912] manager: (tap81c1d19a-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/166)
Oct 11 08:52:49 compute-0 kernel: tap81c1d19a-c4 (unregistering): left promiscuous mode
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:49 compute-0 virtnodedevd[261258]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 11 08:52:49 compute-0 virtnodedevd[261258]: hostname: compute-0
Oct 11 08:52:49 compute-0 virtnodedevd[261258]: ethtool ioctl error on tap81c1d19a-c4: No such device
Oct 11 08:52:49 compute-0 virtnodedevd[261258]: ethtool ioctl error on tap81c1d19a-c4: No such device
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.225 2 INFO nova.virt.libvirt.driver [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Instance destroyed successfully.
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.226 2 DEBUG nova.objects.instance [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lazy-loading 'resources' on Instance uuid 5750649d-960f-42d5-b127-de8b9a2bee8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.231 2 DEBUG nova.compute.manager [req-df8f348c-3f75-4727-810f-44a3eb2d13cf req-05ae73c5-02a7-44cb-b3ee-dd61fe7edb3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-unplugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.232 2 DEBUG oslo_concurrency.lockutils [req-df8f348c-3f75-4727-810f-44a3eb2d13cf req-05ae73c5-02a7-44cb-b3ee-dd61fe7edb3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:49 compute-0 virtnodedevd[261258]: ethtool ioctl error on tap81c1d19a-c4: No such device
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.232 2 DEBUG oslo_concurrency.lockutils [req-df8f348c-3f75-4727-810f-44a3eb2d13cf req-05ae73c5-02a7-44cb-b3ee-dd61fe7edb3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:49 compute-0 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308177]: [NOTICE]   (308181) : haproxy version is 2.8.14-c23fe91
Oct 11 08:52:49 compute-0 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308177]: [NOTICE]   (308181) : path to executable is /usr/sbin/haproxy
Oct 11 08:52:49 compute-0 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308177]: [WARNING]  (308181) : Exiting Master process...
Oct 11 08:52:49 compute-0 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308177]: [ALERT]    (308181) : Current worker (308183) exited with code 143 (Terminated)
Oct 11 08:52:49 compute-0 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308177]: [WARNING]  (308181) : All workers exited. Exiting... (0)
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.233 2 DEBUG oslo_concurrency.lockutils [req-df8f348c-3f75-4727-810f-44a3eb2d13cf req-05ae73c5-02a7-44cb-b3ee-dd61fe7edb3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.236 2 DEBUG nova.compute.manager [req-df8f348c-3f75-4727-810f-44a3eb2d13cf req-05ae73c5-02a7-44cb-b3ee-dd61fe7edb3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] No waiting events found dispatching network-vif-unplugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.236 2 WARNING nova.compute.manager [req-df8f348c-3f75-4727-810f-44a3eb2d13cf req-05ae73c5-02a7-44cb-b3ee-dd61fe7edb3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received unexpected event network-vif-unplugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 for instance with vm_state active and task_state reboot_started_hard.
Oct 11 08:52:49 compute-0 systemd[1]: libpod-94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847.scope: Deactivated successfully.
Oct 11 08:52:49 compute-0 virtnodedevd[261258]: ethtool ioctl error on tap81c1d19a-c4: No such device
Oct 11 08:52:49 compute-0 podman[308424]: 2025-10-11 08:52:49.245084847 +0000 UTC m=+0.080307352 container died 94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.252 2 DEBUG nova.virt.libvirt.vif [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1556963080',display_name='tempest-InstanceActionsTestJSON-server-1556963080',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1556963080',id=42,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e394c641aa0e46e2a7d8129cd88c9a01',ramdisk_id='',reservation_id='r-r0vpe0lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-761055393',owner_user_name='tempest-InstanceActionsTestJSON-761055393-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:52:48Z,user_data=None,user_id='aeed30817a7740109e765d227ff2b78a',uuid=5750649d-960f-42d5-b127-de8b9a2bee8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.252 2 DEBUG nova.network.os_vif_util [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converting VIF {"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.253 2 DEBUG nova.network.os_vif_util [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.253 2 DEBUG os_vif [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:52:49 compute-0 virtnodedevd[261258]: ethtool ioctl error on tap81c1d19a-c4: No such device
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.255 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81c1d19a-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.260 2 INFO os_vif [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4')
Oct 11 08:52:49 compute-0 virtnodedevd[261258]: ethtool ioctl error on tap81c1d19a-c4: No such device
Oct 11 08:52:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.270 2 DEBUG nova.virt.libvirt.driver [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Start _get_guest_xml network_info=[{"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:52:49 compute-0 ceph-mon[74313]: pgmap v1438: 321 pgs: 321 active+clean; 167 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 321 op/s
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.275 2 WARNING nova.virt.libvirt.driver [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:52:49 compute-0 virtnodedevd[261258]: ethtool ioctl error on tap81c1d19a-c4: No such device
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.281 2 DEBUG nova.virt.libvirt.host [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:52:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.282 2 DEBUG nova.virt.libvirt.host [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:52:49 compute-0 virtnodedevd[261258]: ethtool ioctl error on tap81c1d19a-c4: No such device
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.286 2 DEBUG nova.virt.libvirt.host [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.286 2 DEBUG nova.virt.libvirt.host [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.287 2 DEBUG nova.virt.libvirt.driver [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.287 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.287 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.288 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.288 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:52:49 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.288 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.288 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.289 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.289 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.290 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.290 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.290 2 DEBUG nova.virt.hardware [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.291 2 DEBUG nova.objects.instance [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5750649d-960f-42d5-b127-de8b9a2bee8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847-userdata-shm.mount: Deactivated successfully.
Oct 11 08:52:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1aaec042284ad1b73f2650a678143c3397946f80b5c9fd24812807415a14b7ec-merged.mount: Deactivated successfully.
Oct 11 08:52:49 compute-0 podman[308424]: 2025-10-11 08:52:49.312217476 +0000 UTC m=+0.147439921 container cleanup 94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.333 2 DEBUG oslo_concurrency.processutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:49 compute-0 systemd[1]: libpod-conmon-94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847.scope: Deactivated successfully.
Oct 11 08:52:49 compute-0 podman[308474]: 2025-10-11 08:52:49.40078362 +0000 UTC m=+0.050919361 container remove 94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 08:52:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.410 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2c320c23-8ebb-4de7-8337-cedaf5c898aa]: (4, ('Sat Oct 11 08:52:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b (94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847)\n94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847\nSat Oct 11 08:52:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b (94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847)\n94a21b0b94de7518338fcaeb99ecd605aad02a7ec069ff0c0918f06ba08b8847\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.412 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ca27a427-f626-4075-b0e7-b23bffbe704e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.413 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa9adc37-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:49 compute-0 kernel: tapaa9adc37-a0: left promiscuous mode
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.454 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[461558be-f03f-4c9c-8c4f-a17277eaf4a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.472 2 INFO nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Deleting instance files /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f_del
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.473 2 INFO nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Deletion of /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f_del complete
Oct 11 08:52:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.474 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0e772696-1a5d-4068-a05b-316855561cf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.475 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[00e5cb40-e77c-4460-a656-810bacf7e1a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.497 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[215b045c-514d-4803-81b7-a25a5534c73c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460689, 'reachable_time': 37724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308504, 'error': None, 'target': 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.500 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:52:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:49.501 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[335c0ea4-0efd-43cb-8411-691acf289fdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:49 compute-0 systemd[1]: run-netns-ovnmeta\x2daa9adc37\x2da18c\x2d4b31\x2db4cc\x2dd46c43f91e9b.mount: Deactivated successfully.
Oct 11 08:52:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:52:49 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2449534070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.779 2 DEBUG oslo_concurrency.processutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.825 2 DEBUG oslo_concurrency.processutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 167 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 298 op/s
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.979 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:52:49 compute-0 nova_compute[260935]: 2025-10-11 08:52:49.980 2 INFO nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Creating image(s)
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.041 2 DEBUG nova.storage.rbd_utils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.078 2 DEBUG nova.storage.rbd_utils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.116 2 DEBUG nova.storage.rbd_utils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.122 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.205 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.206 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.206 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.207 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.234 2 DEBUG nova.storage.rbd_utils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.240 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:50 compute-0 ceph-mon[74313]: osdmap e192: 3 total, 3 up, 3 in
Oct 11 08:52:50 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2449534070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:52:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1546140836' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.319 2 DEBUG oslo_concurrency.processutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.329 2 DEBUG nova.virt.libvirt.vif [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1556963080',display_name='tempest-InstanceActionsTestJSON-server-1556963080',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1556963080',id=42,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e394c641aa0e46e2a7d8129cd88c9a01',ramdisk_id='',reservation_id='r-r0vpe0lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-761055393',owner_user_name='tempest-InstanceActionsTestJSON-761055393-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:52:48Z,user_data=None,user_id='aeed30817a7740109e765d227ff2b78a',uuid=5750649d-960f-42d5-b127-de8b9a2bee8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.330 2 DEBUG nova.network.os_vif_util [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converting VIF {"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.332 2 DEBUG nova.network.os_vif_util [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.334 2 DEBUG nova.objects.instance [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5750649d-960f-42d5-b127-de8b9a2bee8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.351 2 DEBUG nova.virt.libvirt.driver [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:52:50 compute-0 nova_compute[260935]:   <uuid>5750649d-960f-42d5-b127-de8b9a2bee8f</uuid>
Oct 11 08:52:50 compute-0 nova_compute[260935]:   <name>instance-0000002a</name>
Oct 11 08:52:50 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:52:50 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:52:50 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <nova:name>tempest-InstanceActionsTestJSON-server-1556963080</nova:name>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:52:49</nova:creationTime>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:52:50 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:52:50 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:52:50 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:52:50 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:52:50 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:52:50 compute-0 nova_compute[260935]:         <nova:user uuid="aeed30817a7740109e765d227ff2b78a">tempest-InstanceActionsTestJSON-761055393-project-member</nova:user>
Oct 11 08:52:50 compute-0 nova_compute[260935]:         <nova:project uuid="e394c641aa0e46e2a7d8129cd88c9a01">tempest-InstanceActionsTestJSON-761055393</nova:project>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:52:50 compute-0 nova_compute[260935]:         <nova:port uuid="81c1d19a-c479-4381-9557-92f3e52b0cf0">
Oct 11 08:52:50 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:52:50 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:52:50 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <system>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <entry name="serial">5750649d-960f-42d5-b127-de8b9a2bee8f</entry>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <entry name="uuid">5750649d-960f-42d5-b127-de8b9a2bee8f</entry>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     </system>
Oct 11 08:52:50 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:52:50 compute-0 nova_compute[260935]:   <os>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:   </os>
Oct 11 08:52:50 compute-0 nova_compute[260935]:   <features>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:   </features>
Oct 11 08:52:50 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:52:50 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:52:50 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/5750649d-960f-42d5-b127-de8b9a2bee8f_disk">
Oct 11 08:52:50 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       </source>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:52:50 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/5750649d-960f-42d5-b127-de8b9a2bee8f_disk.config">
Oct 11 08:52:50 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       </source>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:52:50 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:49:c0:a4"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <target dev="tap81c1d19a-c4"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f/console.log" append="off"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <video>
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     </video>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <input type="keyboard" bus="usb"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:52:50 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:52:50 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:52:50 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:52:50 compute-0 nova_compute[260935]: </domain>
Oct 11 08:52:50 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.352 2 DEBUG nova.virt.libvirt.driver [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.353 2 DEBUG nova.virt.libvirt.driver [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.354 2 DEBUG nova.virt.libvirt.vif [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1556963080',display_name='tempest-InstanceActionsTestJSON-server-1556963080',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1556963080',id=42,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='e394c641aa0e46e2a7d8129cd88c9a01',ramdisk_id='',reservation_id='r-r0vpe0lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-761055393',owner_user_name='tempest-InstanceActionsTestJSON-761055393-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:52:48Z,user_data=None,user_id='aeed30817a7740109e765d227ff2b78a',uuid=5750649d-960f-42d5-b127-de8b9a2bee8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.354 2 DEBUG nova.network.os_vif_util [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converting VIF {"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.355 2 DEBUG nova.network.os_vif_util [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.355 2 DEBUG os_vif [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.356 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.357 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.361 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81c1d19a-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.361 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap81c1d19a-c4, col_values=(('external_ids', {'iface-id': '81c1d19a-c479-4381-9557-92f3e52b0cf0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:c0:a4', 'vm-uuid': '5750649d-960f-42d5-b127-de8b9a2bee8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:50 compute-0 NetworkManager[44960]: <info>  [1760172770.4051] manager: (tap81c1d19a-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.411 2 INFO os_vif [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4')
Oct 11 08:52:50 compute-0 kernel: tap81c1d19a-c4: entered promiscuous mode
Oct 11 08:52:50 compute-0 NetworkManager[44960]: <info>  [1760172770.5204] manager: (tap81c1d19a-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/168)
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:50 compute-0 ovn_controller[152945]: 2025-10-11T08:52:50Z|00353|binding|INFO|Claiming lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 for this chassis.
Oct 11 08:52:50 compute-0 ovn_controller[152945]: 2025-10-11T08:52:50Z|00354|binding|INFO|81c1d19a-c479-4381-9557-92f3e52b0cf0: Claiming fa:16:3e:49:c0:a4 10.100.0.10
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.529 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:c0:a4 10.100.0.10'], port_security=['fa:16:3e:49:c0:a4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5750649d-960f-42d5-b127-de8b9a2bee8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e394c641aa0e46e2a7d8129cd88c9a01', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e0772185-519f-4131-b64e-b4728e2c0dd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf506c8e-a600-4595-8f2f-eb0ff97eb2d2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=81c1d19a-c479-4381-9557-92f3e52b0cf0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.530 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 81c1d19a-c479-4381-9557-92f3e52b0cf0 in datapath aa9adc37-a18c-4b31-b4cc-d46c43f91e9b bound to our chassis
Oct 11 08:52:50 compute-0 NetworkManager[44960]: <info>  [1760172770.5328] device (tap81c1d19a-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.533 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa9adc37-a18c-4b31-b4cc-d46c43f91e9b
Oct 11 08:52:50 compute-0 NetworkManager[44960]: <info>  [1760172770.5334] device (tap81c1d19a-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:52:50 compute-0 ovn_controller[152945]: 2025-10-11T08:52:50Z|00355|binding|INFO|Setting lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 ovn-installed in OVS
Oct 11 08:52:50 compute-0 ovn_controller[152945]: 2025-10-11T08:52:50Z|00356|binding|INFO|Setting lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 up in Southbound
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.553 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[04427270-6e6b-43df-a952-2bb1145525d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.555 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa9adc37-a1 in ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.559 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa9adc37-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.559 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0ead639b-8f6c-47ce-a625-d5dbfa55cbf4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.560 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1d215652-de6e-44ad-a55d-9266cbf4fb4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:50 compute-0 systemd-machined[215705]: New machine qemu-47-instance-0000002a.
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.583 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[c6741854-52b1-4880-8185-bca15cc34fb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:50 compute-0 systemd[1]: Started Virtual Machine qemu-47-instance-0000002a.
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.604 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[493a476e-5e9b-41b5-9de0-72503101b161]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.613 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.640 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[575c2dc9-332c-4e71-b0bb-7f3e9eb6589f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.645 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f12892e7-89ff-4885-b6b4-a40f821b4175]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:50 compute-0 NetworkManager[44960]: <info>  [1760172770.6486] manager: (tapaa9adc37-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/169)
Oct 11 08:52:50 compute-0 sshd-session[308496]: Invalid user warango from 152.32.213.170 port 60158
Oct 11 08:52:50 compute-0 sshd-session[308496]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 08:52:50 compute-0 sshd-session[308496]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.702 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6f64a8ee-13b2-4fd0-b7d5-2d20353f98fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.706 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[97253bb6-a19e-474c-a8ac-7928b2aed5e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:50 compute-0 NetworkManager[44960]: <info>  [1760172770.7369] device (tapaa9adc37-a0): carrier: link connected
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.748 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b38dda-b1b3-409a-92ad-02fe7a750c2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.750 2 DEBUG nova.storage.rbd_utils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] resizing rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.777 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aee7c13a-5bd7-470a-945d-878f009a9c0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa9adc37-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:36:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461543, 'reachable_time': 16788, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308732, 'error': None, 'target': 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.803 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[62111d47-83fa-470c-98b3-5c919d1252bd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:3618'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461543, 'tstamp': 461543}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308756, 'error': None, 'target': 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.893 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[79a5ab5f-b1a8-4a00-863b-737c03794393]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa9adc37-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:36:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461543, 'reachable_time': 16788, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308770, 'error': None, 'target': 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:50.929 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[191e469d-c14e-490b-abb2-80005c798968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.946 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.946 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Ensure instance console log exists: /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.964 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.965 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.965 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.968 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Start _get_guest_xml network_info=[{"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.975 2 WARNING nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.980 2 DEBUG nova.virt.libvirt.host [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.981 2 DEBUG nova.virt.libvirt.host [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.985 2 DEBUG nova.virt.libvirt.host [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.985 2 DEBUG nova.virt.libvirt.host [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.985 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.986 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.986 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.986 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.987 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.987 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.987 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.987 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.987 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.988 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.988 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.988 2 DEBUG nova.virt.hardware [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:52:50 compute-0 nova_compute[260935]: 2025-10-11 08:52:50.988 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.007 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.022 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cc786796-b098-4d8f-97da-1bd6f09ec5a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.024 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa9adc37-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.024 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.025 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa9adc37-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:51 compute-0 NetworkManager[44960]: <info>  [1760172771.0275] manager: (tapaa9adc37-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Oct 11 08:52:51 compute-0 kernel: tapaa9adc37-a0: entered promiscuous mode
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.032 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa9adc37-a0, col_values=(('external_ids', {'iface-id': '3043e870-2e1f-4df6-b72c-a581ec61613e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:51 compute-0 ovn_controller[152945]: 2025-10-11T08:52:51Z|00357|binding|INFO|Releasing lport 3043e870-2e1f-4df6-b72c-a581ec61613e from this chassis (sb_readonly=0)
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.037 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa9adc37-a18c-4b31-b4cc-d46c43f91e9b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa9adc37-a18c-4b31-b4cc-d46c43f91e9b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.039 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[96d1781b-caba-42fc-b38c-09c394b5b4c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.040 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/aa9adc37-a18c-4b31-b4cc-d46c43f91e9b.pid.haproxy
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID aa9adc37-a18c-4b31-b4cc-d46c43f91e9b
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:52:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:51.041 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'env', 'PROCESS_TAG=haproxy-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa9adc37-a18c-4b31-b4cc-d46c43f91e9b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:51 compute-0 ceph-mon[74313]: pgmap v1440: 321 pgs: 321 active+clean; 167 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 298 op/s
Oct 11 08:52:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1546140836' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.387 2 DEBUG nova.compute.manager [req-443894da-07d8-49e3-ac67-1740758d0439 req-6b708ddf-889a-44b7-8294-9c62c5f76c16 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.387 2 DEBUG oslo_concurrency.lockutils [req-443894da-07d8-49e3-ac67-1740758d0439 req-6b708ddf-889a-44b7-8294-9c62c5f76c16 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.388 2 DEBUG oslo_concurrency.lockutils [req-443894da-07d8-49e3-ac67-1740758d0439 req-6b708ddf-889a-44b7-8294-9c62c5f76c16 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.388 2 DEBUG oslo_concurrency.lockutils [req-443894da-07d8-49e3-ac67-1740758d0439 req-6b708ddf-889a-44b7-8294-9c62c5f76c16 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.388 2 DEBUG nova.compute.manager [req-443894da-07d8-49e3-ac67-1740758d0439 req-6b708ddf-889a-44b7-8294-9c62c5f76c16 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] No waiting events found dispatching network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.389 2 WARNING nova.compute.manager [req-443894da-07d8-49e3-ac67-1740758d0439 req-6b708ddf-889a-44b7-8294-9c62c5f76c16 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received unexpected event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f for instance with vm_state active and task_state rebuild_spawning.
Oct 11 08:52:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:52:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3437292564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.473 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:51 compute-0 podman[308863]: 2025-10-11 08:52:51.489394977 +0000 UTC m=+0.077695308 container create 458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.512 2 DEBUG nova.storage.rbd_utils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.518 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:51 compute-0 systemd[1]: Started libpod-conmon-458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c.scope.
Oct 11 08:52:51 compute-0 podman[308863]: 2025-10-11 08:52:51.45097356 +0000 UTC m=+0.039273951 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.563 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 5750649d-960f-42d5-b127-de8b9a2bee8f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.564 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172771.5501351, 5750649d-960f-42d5-b127-de8b9a2bee8f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.564 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] VM Resumed (Lifecycle Event)
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.575 2 DEBUG nova.compute.manager [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:52:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.585 2 INFO nova.virt.libvirt.driver [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Instance rebooted successfully.
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.585 2 DEBUG nova.compute.manager [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0972cf6f05853bc79fa51b63023889054a6bcc2df62a3dc5f83edc5bef62cfe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.596 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:51 compute-0 podman[308863]: 2025-10-11 08:52:51.601815836 +0000 UTC m=+0.190116187 container init 458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.603 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:52:51 compute-0 podman[308863]: 2025-10-11 08:52:51.608630299 +0000 UTC m=+0.196930640 container start 458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.629 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.629 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172771.5525095, 5750649d-960f-42d5-b127-de8b9a2bee8f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.629 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] VM Started (Lifecycle Event)
Oct 11 08:52:51 compute-0 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308899]: [NOTICE]   (308903) : New worker (308905) forked
Oct 11 08:52:51 compute-0 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308899]: [NOTICE]   (308903) : Loading success.
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.648 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.653 2 DEBUG oslo_concurrency.lockutils [None req-5be838d6-01b0-4631-b01e-7414e6528635 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.407s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.655 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.698 2 DEBUG nova.compute.manager [req-53e0c9cc-3ce8-4dcb-ae28-f2ebf760fd3d req-61ede8f3-4d22-4548-9422-c9e3ab09de04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.699 2 DEBUG oslo_concurrency.lockutils [req-53e0c9cc-3ce8-4dcb-ae28-f2ebf760fd3d req-61ede8f3-4d22-4548-9422-c9e3ab09de04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.699 2 DEBUG oslo_concurrency.lockutils [req-53e0c9cc-3ce8-4dcb-ae28-f2ebf760fd3d req-61ede8f3-4d22-4548-9422-c9e3ab09de04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.699 2 DEBUG oslo_concurrency.lockutils [req-53e0c9cc-3ce8-4dcb-ae28-f2ebf760fd3d req-61ede8f3-4d22-4548-9422-c9e3ab09de04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.699 2 DEBUG nova.compute.manager [req-53e0c9cc-3ce8-4dcb-ae28-f2ebf760fd3d req-61ede8f3-4d22-4548-9422-c9e3ab09de04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] No waiting events found dispatching network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:51 compute-0 nova_compute[260935]: 2025-10-11 08:52:51.699 2 WARNING nova.compute.manager [req-53e0c9cc-3ce8-4dcb-ae28-f2ebf760fd3d req-61ede8f3-4d22-4548-9422-c9e3ab09de04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received unexpected event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 for instance with vm_state active and task_state None.
Oct 11 08:52:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 167 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 298 op/s
Oct 11 08:52:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:52:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/502272125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.010 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.012 2 DEBUG nova.virt.libvirt.vif [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:52:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1265647930',display_name='tempest-ServerDiskConfigTestJSON-server-1265647930',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1265647930',id=40,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-qeuhsate',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:49Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=98fabab3-6b4a-44f3-b232-f23f34f4e19f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.012 2 DEBUG nova.network.os_vif_util [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.013 2 DEBUG nova.network.os_vif_util [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.017 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:52:52 compute-0 nova_compute[260935]:   <uuid>98fabab3-6b4a-44f3-b232-f23f34f4e19f</uuid>
Oct 11 08:52:52 compute-0 nova_compute[260935]:   <name>instance-00000028</name>
Oct 11 08:52:52 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:52:52 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:52:52 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1265647930</nova:name>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:52:50</nova:creationTime>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:52:52 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:52:52 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:52:52 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:52:52 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:52:52 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:52:52 compute-0 nova_compute[260935]:         <nova:user uuid="2a171de1f79843e0b048393cabfee77d">tempest-ServerDiskConfigTestJSON-387886039-project-member</nova:user>
Oct 11 08:52:52 compute-0 nova_compute[260935]:         <nova:project uuid="9af27fad6b5a4783b66213343f27f0a1">tempest-ServerDiskConfigTestJSON-387886039</nova:project>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:52:52 compute-0 nova_compute[260935]:         <nova:port uuid="20c8164c-0779-4589-b3d3-afc10a47631f">
Oct 11 08:52:52 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:52:52 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:52:52 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <system>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <entry name="serial">98fabab3-6b4a-44f3-b232-f23f34f4e19f</entry>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <entry name="uuid">98fabab3-6b4a-44f3-b232-f23f34f4e19f</entry>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     </system>
Oct 11 08:52:52 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:52:52 compute-0 nova_compute[260935]:   <os>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:   </os>
Oct 11 08:52:52 compute-0 nova_compute[260935]:   <features>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:   </features>
Oct 11 08:52:52 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:52:52 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:52:52 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk">
Oct 11 08:52:52 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       </source>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:52:52 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config">
Oct 11 08:52:52 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       </source>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:52:52 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:bf:90:73"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <target dev="tap20c8164c-07"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/console.log" append="off"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <video>
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     </video>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:52:52 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:52:52 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:52:52 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:52:52 compute-0 nova_compute[260935]: </domain>
Oct 11 08:52:52 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.018 2 DEBUG nova.compute.manager [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Preparing to wait for external event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.019 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.019 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.019 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.020 2 DEBUG nova.virt.libvirt.vif [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:52:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1265647930',display_name='tempest-ServerDiskConfigTestJSON-server-1265647930',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1265647930',id=40,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-qeuhsate',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:52:49Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=98fabab3-6b4a-44f3-b232-f23f34f4e19f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.020 2 DEBUG nova.network.os_vif_util [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.021 2 DEBUG nova.network.os_vif_util [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.021 2 DEBUG os_vif [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.023 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.023 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.026 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20c8164c-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.027 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20c8164c-07, col_values=(('external_ids', {'iface-id': '20c8164c-0779-4589-b3d3-afc10a47631f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:90:73', 'vm-uuid': '98fabab3-6b4a-44f3-b232-f23f34f4e19f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:52 compute-0 NetworkManager[44960]: <info>  [1760172772.0303] manager: (tap20c8164c-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.039 2 INFO os_vif [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07')
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.097 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.098 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.098 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No VIF found with MAC fa:16:3e:bf:90:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.099 2 INFO nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Using config drive
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.130 2 DEBUG nova.storage.rbd_utils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.137 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172757.121933, 872b1c1d-bc87-4123-a599-4d64b89018aa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.138 2 INFO nova.compute.manager [-] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] VM Stopped (Lifecycle Event)
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.157 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.164 2 DEBUG nova.compute.manager [None req-b0b99b4c-71f4-4506-8327-1e4c8c21c884 - - - - - -] [instance: 872b1c1d-bc87-4123-a599-4d64b89018aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.206 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'keypairs' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:52 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3437292564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:52 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/502272125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.784 2 INFO nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Creating config drive at /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.790 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyzkc8jeu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.940 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyzkc8jeu" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:52 compute-0 sudo[308959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:52:52 compute-0 sudo[308959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:52 compute-0 sudo[308959]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:52 compute-0 sshd-session[308496]: Failed password for invalid user warango from 152.32.213.170 port 60158 ssh2
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.989 2 DEBUG nova.storage.rbd_utils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:52:52 compute-0 nova_compute[260935]: 2025-10-11 08:52:52.994 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:53 compute-0 sudo[308991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:52:53 compute-0 sudo[308991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:53 compute-0 sudo[308991]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:53 compute-0 sudo[309028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:52:53 compute-0 sudo[309028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:53 compute-0 sudo[309028]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:53 compute-0 nova_compute[260935]: 2025-10-11 08:52:53.207 2 DEBUG oslo_concurrency.processutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config 98fabab3-6b4a-44f3-b232-f23f34f4e19f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:53 compute-0 nova_compute[260935]: 2025-10-11 08:52:53.210 2 INFO nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Deleting local config drive /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f/disk.config because it was imported into RBD.
Oct 11 08:52:53 compute-0 sudo[309071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 08:52:53 compute-0 sudo[309071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:53 compute-0 kernel: tap20c8164c-07: entered promiscuous mode
Oct 11 08:52:53 compute-0 NetworkManager[44960]: <info>  [1760172773.2797] manager: (tap20c8164c-07): new Tun device (/org/freedesktop/NetworkManager/Devices/172)
Oct 11 08:52:53 compute-0 ovn_controller[152945]: 2025-10-11T08:52:53Z|00358|binding|INFO|Claiming lport 20c8164c-0779-4589-b3d3-afc10a47631f for this chassis.
Oct 11 08:52:53 compute-0 ovn_controller[152945]: 2025-10-11T08:52:53Z|00359|binding|INFO|20c8164c-0779-4589-b3d3-afc10a47631f: Claiming fa:16:3e:bf:90:73 10.100.0.12
Oct 11 08:52:53 compute-0 nova_compute[260935]: 2025-10-11 08:52:53.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:52:53 compute-0 ovn_controller[152945]: 2025-10-11T08:52:53Z|00360|binding|INFO|Setting lport 20c8164c-0779-4589-b3d3-afc10a47631f ovn-installed in OVS
Oct 11 08:52:53 compute-0 nova_compute[260935]: 2025-10-11 08:52:53.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:53 compute-0 nova_compute[260935]: 2025-10-11 08:52:53.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:53 compute-0 ceph-mon[74313]: pgmap v1441: 321 pgs: 321 active+clean; 167 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 298 op/s
Oct 11 08:52:53 compute-0 systemd-machined[215705]: New machine qemu-48-instance-00000028.
Oct 11 08:52:53 compute-0 systemd-udevd[309109]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:52:53 compute-0 NetworkManager[44960]: <info>  [1760172773.3468] device (tap20c8164c-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:52:53 compute-0 NetworkManager[44960]: <info>  [1760172773.3481] device (tap20c8164c-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:52:53 compute-0 systemd[1]: Started Virtual Machine qemu-48-instance-00000028.
Oct 11 08:52:53 compute-0 ovn_controller[152945]: 2025-10-11T08:52:53Z|00361|binding|INFO|Setting lport 20c8164c-0779-4589-b3d3-afc10a47631f up in Southbound
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.424 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:90:73 10.100.0.12'], port_security=['fa:16:3e:bf:90:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '98fabab3-6b4a-44f3-b232-f23f34f4e19f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=20c8164c-0779-4589-b3d3-afc10a47631f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.427 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 20c8164c-0779-4589-b3d3-afc10a47631f in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd bound to our chassis
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.431 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.444 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[697e1822-0d94-4f16-ac76-3584939955ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.444 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape5d4fc7a-11 in ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.447 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape5d4fc7a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.447 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e71e3b25-707d-456e-a1e9-86bc28963f60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.449 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[618e2487-1e39-44d5-bafa-696a66f86c2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.469 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf71ae4-43dc-423e-8b23-ef1de4c776f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.497 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0aac54-8570-48e7-a254-e66469ae1a5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.543 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8ac71e67-0fec-486b-9b7c-a4328b899e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:53 compute-0 NetworkManager[44960]: <info>  [1760172773.5538] manager: (tape5d4fc7a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/173)
Oct 11 08:52:53 compute-0 systemd-udevd[309111]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.552 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f8854a-835c-43dc-811d-2bfc663b0db1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.608 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2452c3-f950-4495-ac3c-165c2f2f117f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.618 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5bf88e-88cd-4a38-8f48-6decb140cf88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:53 compute-0 sudo[309071]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:52:53 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:52:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:52:53 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:52:53 compute-0 NetworkManager[44960]: <info>  [1760172773.6734] device (tape5d4fc7a-10): carrier: link connected
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.690 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bbae958a-cbea-4d52-b9b1-5f2ce0fcac0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:53 compute-0 sudo[309167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:52:53 compute-0 sudo[309167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:53 compute-0 sudo[309167]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.726 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[75c16df7-d6f7-44b6-bb2f-0b3edde2056f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461837, 'reachable_time': 30932, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309213, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.753 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b8fb8ee7-8627-4a1c-b48b-763044b573d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:2033'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461837, 'tstamp': 461837}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309226, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:53 compute-0 sudo[309222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.788 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b5128597-baf3-4b52-809c-516c5f8b7b82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461837, 'reachable_time': 30932, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309248, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:53 compute-0 sudo[309222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:53 compute-0 sudo[309222]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.831 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[54e54439-f705-4f77-b47e-f6e617a2bd07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:53 compute-0 sudo[309255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:52:53 compute-0 sudo[309255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:53 compute-0 sudo[309255]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.2 MiB/s wr, 247 op/s
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.941 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[96fa8750-eb22-47cc-8a5e-93bc5c5b9610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.942 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.944 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.945 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5d4fc7a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:53 compute-0 nova_compute[260935]: 2025-10-11 08:52:53.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:53 compute-0 kernel: tape5d4fc7a-10: entered promiscuous mode
Oct 11 08:52:53 compute-0 NetworkManager[44960]: <info>  [1760172773.9484] manager: (tape5d4fc7a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Oct 11 08:52:53 compute-0 sudo[309285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:52:53 compute-0 sudo[309285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.952 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5d4fc7a-10, col_values=(('external_ids', {'iface-id': '7a0f31c4-9bda-45df-9fec-aacc40fc88c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:53 compute-0 ovn_controller[152945]: 2025-10-11T08:52:53Z|00362|binding|INFO|Releasing lport 7a0f31c4-9bda-45df-9fec-aacc40fc88c1 from this chassis (sb_readonly=0)
Oct 11 08:52:53 compute-0 nova_compute[260935]: 2025-10-11 08:52:53.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:53 compute-0 nova_compute[260935]: 2025-10-11 08:52:53.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.976 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.977 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[51e2e008-b8c4-4430-9430-9f7997e51d13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.979 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:52:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:53.980 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'env', 'PROCESS_TAG=haproxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.131 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.132 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.133 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.134 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.135 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.141 2 INFO nova.compute.manager [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Terminating instance
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.143 2 DEBUG nova.compute.manager [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:52:54 compute-0 kernel: tap81c1d19a-c4 (unregistering): left promiscuous mode
Oct 11 08:52:54 compute-0 NetworkManager[44960]: <info>  [1760172774.2048] device (tap81c1d19a-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:54 compute-0 ovn_controller[152945]: 2025-10-11T08:52:54Z|00363|binding|INFO|Releasing lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 from this chassis (sb_readonly=0)
Oct 11 08:52:54 compute-0 ovn_controller[152945]: 2025-10-11T08:52:54Z|00364|binding|INFO|Setting lport 81c1d19a-c479-4381-9557-92f3e52b0cf0 down in Southbound
Oct 11 08:52:54 compute-0 ovn_controller[152945]: 2025-10-11T08:52:54Z|00365|binding|INFO|Removing iface tap81c1d19a-c4 ovn-installed in OVS
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.259 2 DEBUG nova.compute.manager [req-e373de34-7511-4d36-8097-f2d7bdea427d req-8658f0f0-bd9e-4824-a2ba-9145ee9c3433 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.260 2 DEBUG oslo_concurrency.lockutils [req-e373de34-7511-4d36-8097-f2d7bdea427d req-8658f0f0-bd9e-4824-a2ba-9145ee9c3433 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.260 2 DEBUG oslo_concurrency.lockutils [req-e373de34-7511-4d36-8097-f2d7bdea427d req-8658f0f0-bd9e-4824-a2ba-9145ee9c3433 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.261 2 DEBUG oslo_concurrency.lockutils [req-e373de34-7511-4d36-8097-f2d7bdea427d req-8658f0f0-bd9e-4824-a2ba-9145ee9c3433 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.261 2 DEBUG nova.compute.manager [req-e373de34-7511-4d36-8097-f2d7bdea427d req-8658f0f0-bd9e-4824-a2ba-9145ee9c3433 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Processing event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:52:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:54.265 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:c0:a4 10.100.0.10'], port_security=['fa:16:3e:49:c0:a4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5750649d-960f-42d5-b127-de8b9a2bee8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e394c641aa0e46e2a7d8129cd88c9a01', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e0772185-519f-4131-b64e-b4728e2c0dd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf506c8e-a600-4595-8f2f-eb0ff97eb2d2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=81c1d19a-c479-4381-9557-92f3e52b0cf0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:52:54 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Oct 11 08:52:54 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Consumed 3.554s CPU time.
Oct 11 08:52:54 compute-0 systemd-machined[215705]: Machine qemu-47-instance-0000002a terminated.
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.334 2 DEBUG nova.compute.manager [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.335 2 DEBUG oslo_concurrency.lockutils [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.336 2 DEBUG oslo_concurrency.lockutils [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.336 2 DEBUG oslo_concurrency.lockutils [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.336 2 DEBUG nova.compute.manager [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] No waiting events found dispatching network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.336 2 WARNING nova.compute.manager [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received unexpected event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 for instance with vm_state active and task_state deleting.
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.337 2 DEBUG nova.compute.manager [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.337 2 DEBUG oslo_concurrency.lockutils [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.337 2 DEBUG oslo_concurrency.lockutils [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.338 2 DEBUG oslo_concurrency.lockutils [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.338 2 DEBUG nova.compute.manager [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] No waiting events found dispatching network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.338 2 WARNING nova.compute.manager [req-8299269b-7e7d-42c7-803e-326b654a787a req-8d2d5315-7a44-4ef7-91e4-8573c137b863 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received unexpected event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 for instance with vm_state active and task_state deleting.
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.381 2 DEBUG nova.compute.manager [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.382 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 98fabab3-6b4a-44f3-b232-f23f34f4e19f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.382 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172774.3791225, 98fabab3-6b4a-44f3-b232-f23f34f4e19f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.383 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] VM Started (Lifecycle Event)
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.391 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.396 2 INFO nova.virt.libvirt.driver [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Instance destroyed successfully.
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.396 2 DEBUG nova.objects.instance [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lazy-loading 'resources' on Instance uuid 5750649d-960f-42d5-b127-de8b9a2bee8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.398 2 INFO nova.virt.libvirt.driver [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance spawned successfully.
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.399 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.462 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.471 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:52:54 compute-0 podman[309364]: 2025-10-11 08:52:54.496345474 +0000 UTC m=+0.070700011 container create d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 08:52:54 compute-0 systemd[1]: Started libpod-conmon-d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58.scope.
Oct 11 08:52:54 compute-0 podman[309364]: 2025-10-11 08:52:54.455275332 +0000 UTC m=+0.029629849 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:52:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:52:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d1665eb2b30cde48dd305e3696f1763d29e793e23b05e0d4ebb02f8a2d77485/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.589 2 DEBUG nova.virt.libvirt.vif [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1556963080',display_name='tempest-InstanceActionsTestJSON-server-1556963080',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1556963080',id=42,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e394c641aa0e46e2a7d8129cd88c9a01',ramdisk_id='',reservation_id='r-r0vpe0lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-761055393',owner_user_name='tempest-InstanceActionsTestJSON-761055393-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:52:51Z,user_data=None,user_id='aeed30817a7740109e765d227ff2b78a',uuid=5750649d-960f-42d5-b127-de8b9a2bee8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.590 2 DEBUG nova.network.os_vif_util [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converting VIF {"id": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "address": "fa:16:3e:49:c0:a4", "network": {"id": "aa9adc37-a18c-4b31-b4cc-d46c43f91e9b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-563102071-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e394c641aa0e46e2a7d8129cd88c9a01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81c1d19a-c4", "ovs_interfaceid": "81c1d19a-c479-4381-9557-92f3e52b0cf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.591 2 DEBUG nova.network.os_vif_util [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.591 2 DEBUG os_vif [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.594 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81c1d19a-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.596 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.596 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172774.3792953, 98fabab3-6b4a-44f3-b232-f23f34f4e19f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.596 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] VM Paused (Lifecycle Event)
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.606 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.607 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.607 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.608 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.608 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.610 2 DEBUG nova.virt.libvirt.driver [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.615 2 INFO os_vif [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:c0:a4,bridge_name='br-int',has_traffic_filtering=True,id=81c1d19a-c479-4381-9557-92f3e52b0cf0,network=Network(aa9adc37-a18c-4b31-b4cc-d46c43f91e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81c1d19a-c4')
Oct 11 08:52:54 compute-0 podman[309364]: 2025-10-11 08:52:54.619521087 +0000 UTC m=+0.193875614 container init d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 08:52:54 compute-0 sudo[309285]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:54 compute-0 podman[309364]: 2025-10-11 08:52:54.6305727 +0000 UTC m=+0.204927237 container start d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:52:54 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:52:54 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:52:54 compute-0 ceph-mon[74313]: pgmap v1442: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.2 MiB/s wr, 247 op/s
Oct 11 08:52:54 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[309388]: [NOTICE]   (309405) : New worker (309415) forked
Oct 11 08:52:54 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[309388]: [NOTICE]   (309405) : Loading success.
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.681 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.692 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172774.3893898, 98fabab3-6b4a-44f3-b232-f23f34f4e19f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.692 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] VM Resumed (Lifecycle Event)
Oct 11 08:52:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:52:54 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:52:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:52:54 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:52:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:52:54 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:52:54 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev cc9bca7e-c460-4cd1-949e-0387bdd64ebf does not exist
Oct 11 08:52:54 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev dcfcaca8-bda3-4291-9c36-4c00045a41a2 does not exist
Oct 11 08:52:54 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev cb1fdd65-3a4c-4c01-a895-fac2b3993ccc does not exist
Oct 11 08:52:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:52:54 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:52:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:52:54 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:52:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:52:54 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:52:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:54.740 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 81c1d19a-c479-4381-9557-92f3e52b0cf0 in datapath aa9adc37-a18c-4b31-b4cc-d46c43f91e9b unbound from our chassis
Oct 11 08:52:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:54.743 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:52:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:54.744 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3c212a78-a75e-47ff-add2-8301b10bc9c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:54.745 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b namespace which is not needed anymore
Oct 11 08:52:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:52:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:52:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:52:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:52:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:52:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:52:54 compute-0 sudo[309427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:52:54 compute-0 sudo[309427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:54 compute-0 sudo[309427]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:52:54
Oct 11 08:52:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:52:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:52:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.log', 'volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', '.mgr', 'images', 'cephfs.cephfs.meta', 'vms']
Oct 11 08:52:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.831 2 DEBUG nova.compute.manager [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.836 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.847 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.875 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 08:52:54 compute-0 sudo[309465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:52:54 compute-0 sudo[309465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:54 compute-0 sudo[309465]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.908 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.909 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.909 2 DEBUG nova.objects.instance [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 08:52:54 compute-0 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308899]: [NOTICE]   (308903) : haproxy version is 2.8.14-c23fe91
Oct 11 08:52:54 compute-0 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308899]: [NOTICE]   (308903) : path to executable is /usr/sbin/haproxy
Oct 11 08:52:54 compute-0 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308899]: [WARNING]  (308903) : Exiting Master process...
Oct 11 08:52:54 compute-0 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308899]: [WARNING]  (308903) : Exiting Master process...
Oct 11 08:52:54 compute-0 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308899]: [ALERT]    (308903) : Current worker (308905) exited with code 143 (Terminated)
Oct 11 08:52:54 compute-0 neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b[308899]: [WARNING]  (308903) : All workers exited. Exiting... (0)
Oct 11 08:52:54 compute-0 systemd[1]: libpod-458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c.scope: Deactivated successfully.
Oct 11 08:52:54 compute-0 podman[309485]: 2025-10-11 08:52:54.954173251 +0000 UTC m=+0.081801514 container died 458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 08:52:54 compute-0 sudo[309500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:52:54 compute-0 sudo[309500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:54 compute-0 nova_compute[260935]: 2025-10-11 08:52:54.972 2 DEBUG oslo_concurrency.lockutils [None req-5fe3681c-599b-41f7-8745-542beaa784ee 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:54 compute-0 sudo[309500]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c-userdata-shm.mount: Deactivated successfully.
Oct 11 08:52:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0972cf6f05853bc79fa51b63023889054a6bcc2df62a3dc5f83edc5bef62cfe-merged.mount: Deactivated successfully.
Oct 11 08:52:55 compute-0 podman[309485]: 2025-10-11 08:52:55.018246883 +0000 UTC m=+0.145875146 container cleanup 458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 08:52:55 compute-0 systemd[1]: libpod-conmon-458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c.scope: Deactivated successfully.
Oct 11 08:52:55 compute-0 sudo[309546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:52:55 compute-0 sudo[309546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:55 compute-0 nova_compute[260935]: 2025-10-11 08:52:55.130 2 INFO nova.virt.libvirt.driver [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Deleting instance files /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f_del
Oct 11 08:52:55 compute-0 nova_compute[260935]: 2025-10-11 08:52:55.132 2 INFO nova.virt.libvirt.driver [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Deletion of /var/lib/nova/instances/5750649d-960f-42d5-b127-de8b9a2bee8f_del complete
Oct 11 08:52:55 compute-0 podman[309574]: 2025-10-11 08:52:55.134044418 +0000 UTC m=+0.084808049 container remove 458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:52:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.146 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac9dfd9-777c-4095-af3a-68bff8651397]: (4, ('Sat Oct 11 08:52:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b (458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c)\n458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c\nSat Oct 11 08:52:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b (458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c)\n458401b6351fc27ad334ef31058a6591a600ca5a50b42d2ff10e9293f3fcd45c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.148 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5b816a8d-9c25-4627-a58b-031801dee4b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.149 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa9adc37-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:55 compute-0 nova_compute[260935]: 2025-10-11 08:52:55.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:55 compute-0 kernel: tapaa9adc37-a0: left promiscuous mode
Oct 11 08:52:55 compute-0 nova_compute[260935]: 2025-10-11 08:52:55.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.178 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[311f0cff-0ab0-4c06-ac56-81d05d51f30f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:55 compute-0 nova_compute[260935]: 2025-10-11 08:52:55.204 2 INFO nova.compute.manager [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Took 1.06 seconds to destroy the instance on the hypervisor.
Oct 11 08:52:55 compute-0 nova_compute[260935]: 2025-10-11 08:52:55.204 2 DEBUG oslo.service.loopingcall [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:52:55 compute-0 nova_compute[260935]: 2025-10-11 08:52:55.205 2 DEBUG nova.compute.manager [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:52:55 compute-0 nova_compute[260935]: 2025-10-11 08:52:55.205 2 DEBUG nova.network.neutron [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:52:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.207 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9684ce-8e97-4fee-b3a1-11749f897f4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.208 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8af03e73-f226-484c-8a0b-013f75ff6d85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.229 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[22676936-8b72-4206-86a6-1acfde8956d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461533, 'reachable_time': 40680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309596, 'error': None, 'target': 'ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:55 compute-0 systemd[1]: run-netns-ovnmeta\x2daa9adc37\x2da18c\x2d4b31\x2db4cc\x2dd46c43f91e9b.mount: Deactivated successfully.
Oct 11 08:52:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.235 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa9adc37-a18c-4b31-b4cc-d46c43f91e9b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:52:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:55.235 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1048a372-432d-4686-a9e2-acd3fac83321]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:55 compute-0 podman[309628]: 2025-10-11 08:52:55.469811004 +0000 UTC m=+0.058003902 container create 78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 08:52:55 compute-0 sshd-session[308496]: Received disconnect from 152.32.213.170 port 60158:11: Bye Bye [preauth]
Oct 11 08:52:55 compute-0 sshd-session[308496]: Disconnected from invalid user warango 152.32.213.170 port 60158 [preauth]
Oct 11 08:52:55 compute-0 podman[309628]: 2025-10-11 08:52:55.445775574 +0000 UTC m=+0.033968482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:52:55 compute-0 systemd[1]: Started libpod-conmon-78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02.scope.
Oct 11 08:52:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:52:55 compute-0 podman[309628]: 2025-10-11 08:52:55.60156355 +0000 UTC m=+0.189756518 container init 78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:52:55 compute-0 podman[309642]: 2025-10-11 08:52:55.613404814 +0000 UTC m=+0.095424659 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 08:52:55 compute-0 podman[309628]: 2025-10-11 08:52:55.615875874 +0000 UTC m=+0.204068782 container start 78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:52:55 compute-0 podman[309628]: 2025-10-11 08:52:55.620133995 +0000 UTC m=+0.208327183 container attach 78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 08:52:55 compute-0 crazy_kepler[309654]: 167 167
Oct 11 08:52:55 compute-0 podman[309628]: 2025-10-11 08:52:55.626503875 +0000 UTC m=+0.214696803 container died 78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 08:52:55 compute-0 systemd[1]: libpod-78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02.scope: Deactivated successfully.
Oct 11 08:52:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-be798599c83a490b6453859e9bfea1aa0cabd2b8c2cd155c295d6454d1fe7fe7-merged.mount: Deactivated successfully.
Oct 11 08:52:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:52:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:52:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:52:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:52:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:52:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:52:55 compute-0 podman[309628]: 2025-10-11 08:52:55.673763471 +0000 UTC m=+0.261956399 container remove 78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:52:55 compute-0 systemd[1]: libpod-conmon-78d86c7b17fdc07e170edf377356bbe31b9f48e083e5f78763fa94404806fb02.scope: Deactivated successfully.
Oct 11 08:52:55 compute-0 nova_compute[260935]: 2025-10-11 08:52:55.764 2 DEBUG nova.network.neutron [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:55 compute-0 nova_compute[260935]: 2025-10-11 08:52:55.778 2 INFO nova.compute.manager [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Took 0.57 seconds to deallocate network for instance.
Oct 11 08:52:55 compute-0 nova_compute[260935]: 2025-10-11 08:52:55.814 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:55 compute-0 nova_compute[260935]: 2025-10-11 08:52:55.814 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:55 compute-0 nova_compute[260935]: 2025-10-11 08:52:55.885 2 DEBUG oslo_concurrency.processutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:55 compute-0 podman[309688]: 2025-10-11 08:52:55.904045014 +0000 UTC m=+0.057924369 container create 044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:52:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.2 MiB/s wr, 247 op/s
Oct 11 08:52:55 compute-0 systemd[1]: Started libpod-conmon-044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef.scope.
Oct 11 08:52:55 compute-0 podman[309688]: 2025-10-11 08:52:55.878015888 +0000 UTC m=+0.031895283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:52:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d6b7332c3a361675949a67f88687a0d3fed7083ee0b491120d1b4356570935/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d6b7332c3a361675949a67f88687a0d3fed7083ee0b491120d1b4356570935/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d6b7332c3a361675949a67f88687a0d3fed7083ee0b491120d1b4356570935/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d6b7332c3a361675949a67f88687a0d3fed7083ee0b491120d1b4356570935/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d6b7332c3a361675949a67f88687a0d3fed7083ee0b491120d1b4356570935/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:52:56 compute-0 podman[309688]: 2025-10-11 08:52:56.00929998 +0000 UTC m=+0.163179375 container init 044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 08:52:56 compute-0 podman[309688]: 2025-10-11 08:52:56.017089721 +0000 UTC m=+0.170969076 container start 044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:52:56 compute-0 podman[309688]: 2025-10-11 08:52:56.02024013 +0000 UTC m=+0.174119495 container attach 044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:52:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:52:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833367698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.390 2 DEBUG oslo_concurrency.processutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.397 2 DEBUG nova.compute.provider_tree [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.413 2 DEBUG nova.scheduler.client.report [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.430 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.452 2 INFO nova.scheduler.client.report [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Deleted allocations for instance 5750649d-960f-42d5-b127-de8b9a2bee8f
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.524 2 DEBUG oslo_concurrency.lockutils [None req-c3e300fc-08be-4a83-8017-2538602b2e40 aeed30817a7740109e765d227ff2b78a e394c641aa0e46e2a7d8129cd88c9a01 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:56 compute-0 ceph-mon[74313]: pgmap v1443: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.2 MiB/s wr, 247 op/s
Oct 11 08:52:56 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/833367698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.735 2 DEBUG nova.compute.manager [req-94be7bdf-0548-413a-bd71-76d00ce36e5c req-00bdfc07-4a46-4a94-b8c4-d6c692f1f0b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.735 2 DEBUG oslo_concurrency.lockutils [req-94be7bdf-0548-413a-bd71-76d00ce36e5c req-00bdfc07-4a46-4a94-b8c4-d6c692f1f0b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.736 2 DEBUG oslo_concurrency.lockutils [req-94be7bdf-0548-413a-bd71-76d00ce36e5c req-00bdfc07-4a46-4a94-b8c4-d6c692f1f0b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.736 2 DEBUG oslo_concurrency.lockutils [req-94be7bdf-0548-413a-bd71-76d00ce36e5c req-00bdfc07-4a46-4a94-b8c4-d6c692f1f0b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.736 2 DEBUG nova.compute.manager [req-94be7bdf-0548-413a-bd71-76d00ce36e5c req-00bdfc07-4a46-4a94-b8c4-d6c692f1f0b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] No waiting events found dispatching network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.736 2 WARNING nova.compute.manager [req-94be7bdf-0548-413a-bd71-76d00ce36e5c req-00bdfc07-4a46-4a94-b8c4-d6c692f1f0b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received unexpected event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f for instance with vm_state active and task_state None.
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.736 2 DEBUG nova.compute.manager [req-94be7bdf-0548-413a-bd71-76d00ce36e5c req-00bdfc07-4a46-4a94-b8c4-d6c692f1f0b7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-deleted-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.803 2 DEBUG nova.compute.manager [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-unplugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.804 2 DEBUG oslo_concurrency.lockutils [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.804 2 DEBUG oslo_concurrency.lockutils [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.804 2 DEBUG oslo_concurrency.lockutils [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.804 2 DEBUG nova.compute.manager [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] No waiting events found dispatching network-vif-unplugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.804 2 WARNING nova.compute.manager [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received unexpected event network-vif-unplugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 for instance with vm_state deleted and task_state None.
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.804 2 DEBUG nova.compute.manager [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.804 2 DEBUG oslo_concurrency.lockutils [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.805 2 DEBUG oslo_concurrency.lockutils [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.805 2 DEBUG oslo_concurrency.lockutils [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5750649d-960f-42d5-b127-de8b9a2bee8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.805 2 DEBUG nova.compute.manager [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] No waiting events found dispatching network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:56 compute-0 nova_compute[260935]: 2025-10-11 08:52:56.805 2 WARNING nova.compute.manager [req-dc09ce48-9942-4e30-b999-5d3cd5ec4bca req-7db056d7-09e2-4051-9fd6-bb0f797941d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Received unexpected event network-vif-plugged-81c1d19a-c479-4381-9557-92f3e52b0cf0 for instance with vm_state deleted and task_state None.
Oct 11 08:52:57 compute-0 amazing_dewdney[309705]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:52:57 compute-0 amazing_dewdney[309705]: --> relative data size: 1.0
Oct 11 08:52:57 compute-0 amazing_dewdney[309705]: --> All data devices are unavailable
Oct 11 08:52:57 compute-0 systemd[1]: libpod-044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef.scope: Deactivated successfully.
Oct 11 08:52:57 compute-0 systemd[1]: libpod-044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef.scope: Consumed 1.059s CPU time.
Oct 11 08:52:57 compute-0 conmon[309705]: conmon 044207d2cc99679b3686 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef.scope/container/memory.events
Oct 11 08:52:57 compute-0 podman[309688]: 2025-10-11 08:52:57.146014496 +0000 UTC m=+1.299893871 container died 044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:52:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-98d6b7332c3a361675949a67f88687a0d3fed7083ee0b491120d1b4356570935-merged.mount: Deactivated successfully.
Oct 11 08:52:57 compute-0 podman[309688]: 2025-10-11 08:52:57.217447076 +0000 UTC m=+1.371326441 container remove 044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 08:52:57 compute-0 systemd[1]: libpod-conmon-044207d2cc99679b368667ac020db1cc6b2c67a47713e9f9a0b850368fce94ef.scope: Deactivated successfully.
Oct 11 08:52:57 compute-0 sudo[309546]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:57 compute-0 sudo[309767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:52:57 compute-0 sudo[309767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:57 compute-0 sudo[309767]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:57 compute-0 sudo[309792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:52:57 compute-0 sudo[309792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:57 compute-0 sudo[309792]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:57 compute-0 sudo[309817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:52:57 compute-0 sudo[309817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:57 compute-0 sudo[309817]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.638 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.639 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.642 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.642 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.643 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.645 2 INFO nova.compute.manager [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Terminating instance
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.647 2 DEBUG nova.compute.manager [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:52:57 compute-0 sudo[309842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:52:57 compute-0 sudo[309842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:57 compute-0 kernel: tap20c8164c-07 (unregistering): left promiscuous mode
Oct 11 08:52:57 compute-0 NetworkManager[44960]: <info>  [1760172777.7036] device (tap20c8164c-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:57 compute-0 ovn_controller[152945]: 2025-10-11T08:52:57Z|00366|binding|INFO|Releasing lport 20c8164c-0779-4589-b3d3-afc10a47631f from this chassis (sb_readonly=0)
Oct 11 08:52:57 compute-0 ovn_controller[152945]: 2025-10-11T08:52:57Z|00367|binding|INFO|Setting lport 20c8164c-0779-4589-b3d3-afc10a47631f down in Southbound
Oct 11 08:52:57 compute-0 ovn_controller[152945]: 2025-10-11T08:52:57Z|00368|binding|INFO|Removing iface tap20c8164c-07 ovn-installed in OVS
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:57.734 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:90:73 10.100.0.12'], port_security=['fa:16:3e:bf:90:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '98fabab3-6b4a-44f3-b232-f23f34f4e19f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=20c8164c-0779-4589-b3d3-afc10a47631f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:52:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:57.736 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 20c8164c-0779-4589-b3d3-afc10a47631f in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd unbound from our chassis
Oct 11 08:52:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:57.739 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:52:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:57.740 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5d54683b-0c9e-4319-9636-6a3b7d814c39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:57.741 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace which is not needed anymore
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:57 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000028.scope: Deactivated successfully.
Oct 11 08:52:57 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000028.scope: Consumed 4.133s CPU time.
Oct 11 08:52:57 compute-0 systemd-machined[215705]: Machine qemu-48-instance-00000028 terminated.
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.895 2 INFO nova.virt.libvirt.driver [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Instance destroyed successfully.
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.895 2 DEBUG nova.objects.instance [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'resources' on Instance uuid 98fabab3-6b4a-44f3-b232-f23f34f4e19f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.918 2 DEBUG nova.virt.libvirt.vif [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:52:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1265647930',display_name='tempest-ServerDiskConfigTestJSON-server-1265647930',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1265647930',id=40,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:52:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-qeuhsate',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:52:54Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=98fabab3-6b4a-44f3-b232-f23f34f4e19f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.918 2 DEBUG nova.network.os_vif_util [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "20c8164c-0779-4589-b3d3-afc10a47631f", "address": "fa:16:3e:bf:90:73", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20c8164c-07", "ovs_interfaceid": "20c8164c-0779-4589-b3d3-afc10a47631f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.919 2 DEBUG nova.network.os_vif_util [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.920 2 DEBUG os_vif [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.922 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20c8164c-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:57 compute-0 nova_compute[260935]: 2025-10-11 08:52:57.928 2 INFO os_vif [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:90:73,bridge_name='br-int',has_traffic_filtering=True,id=20c8164c-0779-4589-b3d3-afc10a47631f,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20c8164c-07')
Oct 11 08:52:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 88 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 286 op/s
Oct 11 08:52:57 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[309388]: [NOTICE]   (309405) : haproxy version is 2.8.14-c23fe91
Oct 11 08:52:57 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[309388]: [NOTICE]   (309405) : path to executable is /usr/sbin/haproxy
Oct 11 08:52:57 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[309388]: [WARNING]  (309405) : Exiting Master process...
Oct 11 08:52:57 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[309388]: [ALERT]    (309405) : Current worker (309415) exited with code 143 (Terminated)
Oct 11 08:52:57 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[309388]: [WARNING]  (309405) : All workers exited. Exiting... (0)
Oct 11 08:52:57 compute-0 systemd[1]: libpod-d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58.scope: Deactivated successfully.
Oct 11 08:52:57 compute-0 podman[309897]: 2025-10-11 08:52:57.95447963 +0000 UTC m=+0.065833443 container died d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 08:52:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58-userdata-shm.mount: Deactivated successfully.
Oct 11 08:52:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d1665eb2b30cde48dd305e3696f1763d29e793e23b05e0d4ebb02f8a2d77485-merged.mount: Deactivated successfully.
Oct 11 08:52:58 compute-0 podman[309897]: 2025-10-11 08:52:58.010754952 +0000 UTC m=+0.122108735 container cleanup d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 08:52:58 compute-0 systemd[1]: libpod-conmon-d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58.scope: Deactivated successfully.
Oct 11 08:52:58 compute-0 podman[309969]: 2025-10-11 08:52:58.119524198 +0000 UTC m=+0.074502818 container remove d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 08:52:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.127 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[882171b8-bc39-4ba8-a09c-bc5df6e883d6]: (4, ('Sat Oct 11 08:52:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58)\nd4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58\nSat Oct 11 08:52:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (d4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58)\nd4e2f908c9b22d1b7b0ed1fbea6cdc9975eb11f0b165c5a96baab3af81b2bd58\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.130 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[63f39671-a53a-4d41-b9eb-275d2db1446a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.131 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:52:58 compute-0 nova_compute[260935]: 2025-10-11 08:52:58.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:58 compute-0 kernel: tape5d4fc7a-10: left promiscuous mode
Oct 11 08:52:58 compute-0 nova_compute[260935]: 2025-10-11 08:52:58.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:58 compute-0 nova_compute[260935]: 2025-10-11 08:52:58.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:52:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.157 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e4111b-7966-4960-958c-efecb1731641]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:58 compute-0 podman[309994]: 2025-10-11 08:52:58.180930104 +0000 UTC m=+0.053382501 container create 488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_leavitt, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 08:52:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.182 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ada1423d-acd7-4553-b518-498e6d20d4f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.184 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f094b42-a03c-4c36-b1c3-fbb58bd484e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.205 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad70094-af83-42b4-a512-cd9f8e48aa49]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461823, 'reachable_time': 16078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310011, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.207 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:52:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:52:58.207 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[57f69635-09fb-4798-a131-b9c963c44678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:52:58 compute-0 systemd[1]: run-netns-ovnmeta\x2de5d4fc7a\x2d1d47\x2d4774\x2daad7\x2d8bb2b388fbbd.mount: Deactivated successfully.
Oct 11 08:52:58 compute-0 systemd[1]: Started libpod-conmon-488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde.scope.
Oct 11 08:52:58 compute-0 podman[309994]: 2025-10-11 08:52:58.162710049 +0000 UTC m=+0.035162446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:52:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:52:58 compute-0 podman[309994]: 2025-10-11 08:52:58.288154347 +0000 UTC m=+0.160606794 container init 488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_leavitt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:52:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:52:58 compute-0 podman[309994]: 2025-10-11 08:52:58.295425162 +0000 UTC m=+0.167877589 container start 488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_leavitt, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:52:58 compute-0 podman[309994]: 2025-10-11 08:52:58.298977313 +0000 UTC m=+0.171429700 container attach 488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:52:58 compute-0 naughty_leavitt[310014]: 167 167
Oct 11 08:52:58 compute-0 systemd[1]: libpod-488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde.scope: Deactivated successfully.
Oct 11 08:52:58 compute-0 podman[309994]: 2025-10-11 08:52:58.303358566 +0000 UTC m=+0.175810993 container died 488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 08:52:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7eb3d4325e4b87600d0c5dd9eaebd6c92406dd6d45425ca6d8f53c57d422218-merged.mount: Deactivated successfully.
Oct 11 08:52:58 compute-0 nova_compute[260935]: 2025-10-11 08:52:58.343 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172763.340904, dd616f8b-2be3-455f-8979-ac6e9e57af2d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:52:58 compute-0 nova_compute[260935]: 2025-10-11 08:52:58.343 2 INFO nova.compute.manager [-] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] VM Stopped (Lifecycle Event)
Oct 11 08:52:58 compute-0 podman[309994]: 2025-10-11 08:52:58.346024823 +0000 UTC m=+0.218477210 container remove 488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_leavitt, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 11 08:52:58 compute-0 systemd[1]: libpod-conmon-488739b7fbccbe388ee31acba80d3e0f8ddc9ab266817bd3630419faebaaccde.scope: Deactivated successfully.
Oct 11 08:52:58 compute-0 nova_compute[260935]: 2025-10-11 08:52:58.364 2 DEBUG nova.compute.manager [None req-fc728c8f-b0bf-49a5-b224-4973b1970265 - - - - - -] [instance: dd616f8b-2be3-455f-8979-ac6e9e57af2d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:52:58 compute-0 nova_compute[260935]: 2025-10-11 08:52:58.373 2 INFO nova.virt.libvirt.driver [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Deleting instance files /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f_del
Oct 11 08:52:58 compute-0 nova_compute[260935]: 2025-10-11 08:52:58.373 2 INFO nova.virt.libvirt.driver [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Deletion of /var/lib/nova/instances/98fabab3-6b4a-44f3-b232-f23f34f4e19f_del complete
Oct 11 08:52:58 compute-0 nova_compute[260935]: 2025-10-11 08:52:58.439 2 INFO nova.compute.manager [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 11 08:52:58 compute-0 nova_compute[260935]: 2025-10-11 08:52:58.439 2 DEBUG oslo.service.loopingcall [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:52:58 compute-0 nova_compute[260935]: 2025-10-11 08:52:58.440 2 DEBUG nova.compute.manager [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:52:58 compute-0 nova_compute[260935]: 2025-10-11 08:52:58.440 2 DEBUG nova.network.neutron [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:52:58 compute-0 podman[310039]: 2025-10-11 08:52:58.573897317 +0000 UTC m=+0.069106425 container create 67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curran, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:52:58 compute-0 systemd[1]: Started libpod-conmon-67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553.scope.
Oct 11 08:52:58 compute-0 podman[310039]: 2025-10-11 08:52:58.545649819 +0000 UTC m=+0.040858977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:52:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:52:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb33431700cc476a4f1a1bfc98f3fab84a65463530df8ea6bbfd830026701ae0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:52:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb33431700cc476a4f1a1bfc98f3fab84a65463530df8ea6bbfd830026701ae0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:52:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb33431700cc476a4f1a1bfc98f3fab84a65463530df8ea6bbfd830026701ae0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:52:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb33431700cc476a4f1a1bfc98f3fab84a65463530df8ea6bbfd830026701ae0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:52:58 compute-0 podman[310039]: 2025-10-11 08:52:58.696879985 +0000 UTC m=+0.192089113 container init 67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 11 08:52:58 compute-0 podman[310039]: 2025-10-11 08:52:58.709459881 +0000 UTC m=+0.204668989 container start 67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:52:58 compute-0 podman[310039]: 2025-10-11 08:52:58.71506848 +0000 UTC m=+0.210277588 container attach 67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curran, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:52:58 compute-0 nova_compute[260935]: 2025-10-11 08:52:58.932 2 DEBUG nova.network.neutron [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:52:58 compute-0 nova_compute[260935]: 2025-10-11 08:52:58.957 2 INFO nova.compute.manager [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Took 0.52 seconds to deallocate network for instance.
Oct 11 08:52:58 compute-0 ceph-mon[74313]: pgmap v1444: 321 pgs: 321 active+clean; 88 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 286 op/s
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.006 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.006 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.044 2 DEBUG oslo_concurrency.processutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.334 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:52:59 compute-0 amazing_curran[310055]: {
Oct 11 08:52:59 compute-0 amazing_curran[310055]:     "0": [
Oct 11 08:52:59 compute-0 amazing_curran[310055]:         {
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "devices": [
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "/dev/loop3"
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             ],
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "lv_name": "ceph_lv0",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "lv_size": "21470642176",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "name": "ceph_lv0",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "tags": {
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.cluster_name": "ceph",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.crush_device_class": "",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.encrypted": "0",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.osd_id": "0",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.type": "block",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.vdo": "0"
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             },
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "type": "block",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "vg_name": "ceph_vg0"
Oct 11 08:52:59 compute-0 amazing_curran[310055]:         }
Oct 11 08:52:59 compute-0 amazing_curran[310055]:     ],
Oct 11 08:52:59 compute-0 amazing_curran[310055]:     "1": [
Oct 11 08:52:59 compute-0 amazing_curran[310055]:         {
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "devices": [
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "/dev/loop4"
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             ],
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "lv_name": "ceph_lv1",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "lv_size": "21470642176",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "name": "ceph_lv1",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "tags": {
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.cluster_name": "ceph",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.crush_device_class": "",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.encrypted": "0",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.osd_id": "1",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.type": "block",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.vdo": "0"
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             },
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "type": "block",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "vg_name": "ceph_vg1"
Oct 11 08:52:59 compute-0 amazing_curran[310055]:         }
Oct 11 08:52:59 compute-0 amazing_curran[310055]:     ],
Oct 11 08:52:59 compute-0 amazing_curran[310055]:     "2": [
Oct 11 08:52:59 compute-0 amazing_curran[310055]:         {
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "devices": [
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "/dev/loop5"
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             ],
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "lv_name": "ceph_lv2",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "lv_size": "21470642176",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "name": "ceph_lv2",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "tags": {
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.cluster_name": "ceph",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.crush_device_class": "",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.encrypted": "0",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.osd_id": "2",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.type": "block",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:                 "ceph.vdo": "0"
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             },
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "type": "block",
Oct 11 08:52:59 compute-0 amazing_curran[310055]:             "vg_name": "ceph_vg2"
Oct 11 08:52:59 compute-0 amazing_curran[310055]:         }
Oct 11 08:52:59 compute-0 amazing_curran[310055]:     ]
Oct 11 08:52:59 compute-0 amazing_curran[310055]: }
Oct 11 08:52:59 compute-0 systemd[1]: libpod-67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553.scope: Deactivated successfully.
Oct 11 08:52:59 compute-0 conmon[310055]: conmon 67eafe431d091d218de9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553.scope/container/memory.events
Oct 11 08:52:59 compute-0 podman[310039]: 2025-10-11 08:52:59.512212103 +0000 UTC m=+1.007421181 container died 67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curran, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:52:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:52:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3415986618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:52:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb33431700cc476a4f1a1bfc98f3fab84a65463530df8ea6bbfd830026701ae0-merged.mount: Deactivated successfully.
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.549 2 DEBUG oslo_concurrency.processutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.556 2 DEBUG nova.compute.provider_tree [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:52:59 compute-0 podman[310039]: 2025-10-11 08:52:59.576917953 +0000 UTC m=+1.072127021 container remove 67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.582 2 DEBUG nova.scheduler.client.report [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:52:59 compute-0 systemd[1]: libpod-conmon-67eafe431d091d218de970a2f69afdd2d0db981e1a58b5c698af5b5cc0586553.scope: Deactivated successfully.
Oct 11 08:52:59 compute-0 sudo[309842]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.614 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.643 2 INFO nova.scheduler.client.report [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Deleted allocations for instance 98fabab3-6b4a-44f3-b232-f23f34f4e19f
Oct 11 08:52:59 compute-0 sudo[310101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:52:59 compute-0 sudo[310101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:59 compute-0 sudo[310101]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.701 2 DEBUG oslo_concurrency.lockutils [None req-34b6a3b2-ff2f-4c3b-9697-af6701ea0d27 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:59 compute-0 sudo[310126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:52:59 compute-0 sudo[310126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:59 compute-0 sudo[310126]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.798 2 DEBUG nova.compute.manager [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-unplugged-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.798 2 DEBUG oslo_concurrency.lockutils [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.799 2 DEBUG oslo_concurrency.lockutils [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.799 2 DEBUG oslo_concurrency.lockutils [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.799 2 DEBUG nova.compute.manager [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] No waiting events found dispatching network-vif-unplugged-20c8164c-0779-4589-b3d3-afc10a47631f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.800 2 WARNING nova.compute.manager [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received unexpected event network-vif-unplugged-20c8164c-0779-4589-b3d3-afc10a47631f for instance with vm_state deleted and task_state None.
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.800 2 DEBUG nova.compute.manager [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.800 2 DEBUG oslo_concurrency.lockutils [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.801 2 DEBUG oslo_concurrency.lockutils [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.801 2 DEBUG oslo_concurrency.lockutils [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98fabab3-6b4a-44f3-b232-f23f34f4e19f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.801 2 DEBUG nova.compute.manager [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] No waiting events found dispatching network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.801 2 WARNING nova.compute.manager [req-fa403da4-3075-4c49-a7ad-5a120564a015 req-e6181d39-8f48-4207-9829-6db0da095606 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received unexpected event network-vif-plugged-20c8164c-0779-4589-b3d3-afc10a47631f for instance with vm_state deleted and task_state None.
Oct 11 08:52:59 compute-0 sudo[310151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:52:59 compute-0 sudo[310151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:59 compute-0 sudo[310151]: pam_unix(sudo:session): session closed for user root
Oct 11 08:52:59 compute-0 nova_compute[260935]: 2025-10-11 08:52:59.855 2 DEBUG nova.compute.manager [req-7ce82171-9c3e-481e-8edf-4e7d96ed85c1 req-3cb2151b-ff51-4416-b902-dfdd98368a39 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Received event network-vif-deleted-20c8164c-0779-4589-b3d3-afc10a47631f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:52:59 compute-0 sudo[310176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:52:59 compute-0 sudo[310176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:52:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 88 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.0 MiB/s wr, 268 op/s
Oct 11 08:52:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3415986618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:00 compute-0 podman[310240]: 2025-10-11 08:53:00.479692643 +0000 UTC m=+0.061743817 container create 9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 08:53:00 compute-0 systemd[1]: Started libpod-conmon-9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b.scope.
Oct 11 08:53:00 compute-0 podman[310240]: 2025-10-11 08:53:00.461789147 +0000 UTC m=+0.043840341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:53:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:53:00 compute-0 podman[310240]: 2025-10-11 08:53:00.579064283 +0000 UTC m=+0.161115497 container init 9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 08:53:00 compute-0 podman[310240]: 2025-10-11 08:53:00.590225799 +0000 UTC m=+0.172276993 container start 9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_raman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:53:00 compute-0 podman[310240]: 2025-10-11 08:53:00.593710387 +0000 UTC m=+0.175761641 container attach 9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_raman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:53:00 compute-0 sweet_raman[310256]: 167 167
Oct 11 08:53:00 compute-0 systemd[1]: libpod-9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b.scope: Deactivated successfully.
Oct 11 08:53:00 compute-0 podman[310240]: 2025-10-11 08:53:00.599724147 +0000 UTC m=+0.181775331 container died 9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_raman, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 08:53:00 compute-0 nova_compute[260935]: 2025-10-11 08:53:00.615 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:00 compute-0 nova_compute[260935]: 2025-10-11 08:53:00.618 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-afce9a37c0a06a95408e5848237e24f20a5711fcbb9d10314e4dc349130b4e74-merged.mount: Deactivated successfully.
Oct 11 08:53:00 compute-0 nova_compute[260935]: 2025-10-11 08:53:00.643 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:53:00 compute-0 podman[310240]: 2025-10-11 08:53:00.652559552 +0000 UTC m=+0.234610766 container remove 9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 08:53:00 compute-0 systemd[1]: libpod-conmon-9b36c5a3a7303d6fe946492da37d13b708a88fbe17bc0ae6abfc284844e94d8b.scope: Deactivated successfully.
Oct 11 08:53:00 compute-0 nova_compute[260935]: 2025-10-11 08:53:00.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:53:00 compute-0 nova_compute[260935]: 2025-10-11 08:53:00.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:53:00 compute-0 nova_compute[260935]: 2025-10-11 08:53:00.748 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:00 compute-0 nova_compute[260935]: 2025-10-11 08:53:00.748 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:00 compute-0 nova_compute[260935]: 2025-10-11 08:53:00.759 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:53:00 compute-0 nova_compute[260935]: 2025-10-11 08:53:00.760 2 INFO nova.compute.claims [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:53:00 compute-0 podman[310278]: 2025-10-11 08:53:00.872856732 +0000 UTC m=+0.068484838 container create f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:53:00 compute-0 podman[310278]: 2025-10-11 08:53:00.846013753 +0000 UTC m=+0.041641909 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:53:00 compute-0 systemd[1]: Started libpod-conmon-f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686.scope.
Oct 11 08:53:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:53:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cda8c89c8950d7efef89b0f8573a2c883d52ebc4c4077795ee908fb2f84f529/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:53:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cda8c89c8950d7efef89b0f8573a2c883d52ebc4c4077795ee908fb2f84f529/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:53:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cda8c89c8950d7efef89b0f8573a2c883d52ebc4c4077795ee908fb2f84f529/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:53:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cda8c89c8950d7efef89b0f8573a2c883d52ebc4c4077795ee908fb2f84f529/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:53:01 compute-0 podman[310278]: 2025-10-11 08:53:01.003323641 +0000 UTC m=+0.198951787 container init f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hypatia, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:53:01 compute-0 ceph-mon[74313]: pgmap v1445: 321 pgs: 321 active+clean; 88 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.0 MiB/s wr, 268 op/s
Oct 11 08:53:01 compute-0 podman[310278]: 2025-10-11 08:53:01.023898763 +0000 UTC m=+0.219526869 container start f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:53:01 compute-0 podman[310278]: 2025-10-11 08:53:01.028971007 +0000 UTC m=+0.224599153 container attach f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hypatia, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.047 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:53:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3978567880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.520 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.526 2 DEBUG nova.compute.provider_tree [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.544 2 DEBUG nova.scheduler.client.report [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.576 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.577 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:53:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:01.590 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:53:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:01.591 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.623 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.624 2 DEBUG nova.network.neutron [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.649 2 INFO nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.666 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.782 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.785 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.786 2 INFO nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Creating image(s)
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.819 2 DEBUG nova.storage.rbd_utils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.860 2 DEBUG nova.storage.rbd_utils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.899 2 DEBUG nova.storage.rbd_utils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.904 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 88 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 238 op/s
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.953 2 DEBUG nova.policy [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2a171de1f79843e0b048393cabfee77d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9af27fad6b5a4783b66213343f27f0a1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.993 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.994 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.996 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:01 compute-0 nova_compute[260935]: 2025-10-11 08:53:01.996 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3978567880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:02 compute-0 nova_compute[260935]: 2025-10-11 08:53:02.038 2 DEBUG nova.storage.rbd_utils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:02 compute-0 nova_compute[260935]: 2025-10-11 08:53:02.043 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 92be5b35-6b7a-4f95-924d-008348f27b42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]: {
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:         "osd_id": 2,
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:         "type": "bluestore"
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:     },
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:         "osd_id": 0,
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:         "type": "bluestore"
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:     },
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:         "osd_id": 1,
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:         "type": "bluestore"
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]:     }
Oct 11 08:53:02 compute-0 quirky_hypatia[310295]: }
Oct 11 08:53:02 compute-0 systemd[1]: libpod-f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686.scope: Deactivated successfully.
Oct 11 08:53:02 compute-0 podman[310278]: 2025-10-11 08:53:02.134605804 +0000 UTC m=+1.330233910 container died f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 08:53:02 compute-0 systemd[1]: libpod-f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686.scope: Consumed 1.108s CPU time.
Oct 11 08:53:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cda8c89c8950d7efef89b0f8573a2c883d52ebc4c4077795ee908fb2f84f529-merged.mount: Deactivated successfully.
Oct 11 08:53:02 compute-0 podman[310278]: 2025-10-11 08:53:02.196089413 +0000 UTC m=+1.391717479 container remove f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 08:53:02 compute-0 systemd[1]: libpod-conmon-f14261d100ae34e44d30c50dae716cc8ee037f38abc59ab170ae363ecd9c6686.scope: Deactivated successfully.
Oct 11 08:53:02 compute-0 sudo[310176]: pam_unix(sudo:session): session closed for user root
Oct 11 08:53:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:53:02 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:53:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:53:02 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:53:02 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9975a922-4687-40f2-a837-b4207ad0f114 does not exist
Oct 11 08:53:02 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev dd29e3f0-6553-446c-b39f-c25113221b55 does not exist
Oct 11 08:53:02 compute-0 nova_compute[260935]: 2025-10-11 08:53:02.343 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 92be5b35-6b7a-4f95-924d-008348f27b42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:02 compute-0 sudo[310457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:53:02 compute-0 sudo[310457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:53:02 compute-0 sudo[310457]: pam_unix(sudo:session): session closed for user root
Oct 11 08:53:02 compute-0 nova_compute[260935]: 2025-10-11 08:53:02.403 2 DEBUG nova.storage.rbd_utils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] resizing rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:53:02 compute-0 sudo[310500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:53:02 compute-0 sudo[310500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:53:02 compute-0 sudo[310500]: pam_unix(sudo:session): session closed for user root
Oct 11 08:53:02 compute-0 nova_compute[260935]: 2025-10-11 08:53:02.492 2 DEBUG nova.objects.instance [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'migration_context' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:02 compute-0 nova_compute[260935]: 2025-10-11 08:53:02.512 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:53:02 compute-0 nova_compute[260935]: 2025-10-11 08:53:02.513 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Ensure instance console log exists: /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:53:02 compute-0 nova_compute[260935]: 2025-10-11 08:53:02.513 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:02 compute-0 nova_compute[260935]: 2025-10-11 08:53:02.513 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:02 compute-0 nova_compute[260935]: 2025-10-11 08:53:02.514 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:02 compute-0 nova_compute[260935]: 2025-10-11 08:53:02.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:02 compute-0 nova_compute[260935]: 2025-10-11 08:53:02.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:02 compute-0 nova_compute[260935]: 2025-10-11 08:53:02.983 2 DEBUG nova.network.neutron [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Successfully created port: 13bb6d15-e65c-4e29-b0f3-b7a5a830236d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:53:03 compute-0 sshd-session[310423]: Invalid user pranav from 155.4.244.179 port 47286
Oct 11 08:53:03 compute-0 sshd-session[310423]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 08:53:03 compute-0 sshd-session[310423]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 08:53:03 compute-0 ceph-mon[74313]: pgmap v1446: 321 pgs: 321 active+clean; 88 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 238 op/s
Oct 11 08:53:03 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:53:03 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:53:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:53:03 compute-0 podman[310579]: 2025-10-11 08:53:03.310308394 +0000 UTC m=+0.104133845 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 08:53:03 compute-0 nova_compute[260935]: 2025-10-11 08:53:03.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:53:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 291 op/s
Oct 11 08:53:04 compute-0 nova_compute[260935]: 2025-10-11 08:53:04.316 2 DEBUG nova.network.neutron [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Successfully updated port: 13bb6d15-e65c-4e29-b0f3-b7a5a830236d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:53:04 compute-0 nova_compute[260935]: 2025-10-11 08:53:04.336 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "refresh_cache-92be5b35-6b7a-4f95-924d-008348f27b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:53:04 compute-0 nova_compute[260935]: 2025-10-11 08:53:04.336 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquired lock "refresh_cache-92be5b35-6b7a-4f95-924d-008348f27b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:53:04 compute-0 nova_compute[260935]: 2025-10-11 08:53:04.337 2 DEBUG nova.network.neutron [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663670272514163 of space, bias 1.0, pg target 0.19991010817542487 quantized to 32 (current 32)
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:53:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:53:04 compute-0 nova_compute[260935]: 2025-10-11 08:53:04.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:53:04 compute-0 nova_compute[260935]: 2025-10-11 08:53:04.820 2 DEBUG nova.network.neutron [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:53:05 compute-0 ceph-mon[74313]: pgmap v1447: 321 pgs: 321 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 291 op/s
Oct 11 08:53:05 compute-0 nova_compute[260935]: 2025-10-11 08:53:05.340 2 DEBUG nova.compute.manager [req-38da5eba-4ea9-426c-8077-e383ef156bd3 req-2b1b7bde-db0e-499b-96f6-ffd59fc17030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-changed-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:05 compute-0 nova_compute[260935]: 2025-10-11 08:53:05.340 2 DEBUG nova.compute.manager [req-38da5eba-4ea9-426c-8077-e383ef156bd3 req-2b1b7bde-db0e-499b-96f6-ffd59fc17030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Refreshing instance network info cache due to event network-changed-13bb6d15-e65c-4e29-b0f3-b7a5a830236d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:53:05 compute-0 nova_compute[260935]: 2025-10-11 08:53:05.341 2 DEBUG oslo_concurrency.lockutils [req-38da5eba-4ea9-426c-8077-e383ef156bd3 req-2b1b7bde-db0e-499b-96f6-ffd59fc17030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-92be5b35-6b7a-4f95-924d-008348f27b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:53:05 compute-0 sshd-session[310423]: Failed password for invalid user pranav from 155.4.244.179 port 47286 ssh2
Oct 11 08:53:05 compute-0 nova_compute[260935]: 2025-10-11 08:53:05.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:53:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.241 2 DEBUG nova.network.neutron [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Updating instance_info_cache with network_info: [{"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.261 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Releasing lock "refresh_cache-92be5b35-6b7a-4f95-924d-008348f27b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.261 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance network_info: |[{"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.262 2 DEBUG oslo_concurrency.lockutils [req-38da5eba-4ea9-426c-8077-e383ef156bd3 req-2b1b7bde-db0e-499b-96f6-ffd59fc17030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-92be5b35-6b7a-4f95-924d-008348f27b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.262 2 DEBUG nova.network.neutron [req-38da5eba-4ea9-426c-8077-e383ef156bd3 req-2b1b7bde-db0e-499b-96f6-ffd59fc17030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Refreshing network info cache for port 13bb6d15-e65c-4e29-b0f3-b7a5a830236d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.268 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Start _get_guest_xml network_info=[{"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.276 2 WARNING nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.287 2 DEBUG nova.virt.libvirt.host [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.287 2 DEBUG nova.virt.libvirt.host [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.291 2 DEBUG nova.virt.libvirt.host [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.292 2 DEBUG nova.virt.libvirt.host [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.292 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.292 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.293 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.293 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.293 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.294 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.294 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.294 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.294 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.295 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.295 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.295 2 DEBUG nova.virt.hardware [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.299 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 08:53:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3193996983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.745 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.775 2 DEBUG nova.storage.rbd_utils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:06 compute-0 nova_compute[260935]: 2025-10-11 08:53:06.779 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:07 compute-0 sshd-session[310423]: Received disconnect from 155.4.244.179 port 47286:11: Bye Bye [preauth]
Oct 11 08:53:07 compute-0 sshd-session[310423]: Disconnected from invalid user pranav 155.4.244.179 port 47286 [preauth]
Oct 11 08:53:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:07 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3532255320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.219 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.222 2 DEBUG nova.virt.libvirt.vif [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-118771075',display_name='tempest-ServerDiskConfigTestJSON-server-118771075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-118771075',id=43,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-aqy60l5l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:01Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=92be5b35-6b7a-4f95-924d-008348f27b42,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.222 2 DEBUG nova.network.os_vif_util [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.224 2 DEBUG nova.network.os_vif_util [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.225 2 DEBUG nova.objects.instance [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.253 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:53:07 compute-0 nova_compute[260935]:   <uuid>92be5b35-6b7a-4f95-924d-008348f27b42</uuid>
Oct 11 08:53:07 compute-0 nova_compute[260935]:   <name>instance-0000002b</name>
Oct 11 08:53:07 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:53:07 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:53:07 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-118771075</nova:name>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:53:06</nova:creationTime>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:53:07 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:53:07 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:53:07 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:53:07 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:53:07 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:53:07 compute-0 nova_compute[260935]:         <nova:user uuid="2a171de1f79843e0b048393cabfee77d">tempest-ServerDiskConfigTestJSON-387886039-project-member</nova:user>
Oct 11 08:53:07 compute-0 nova_compute[260935]:         <nova:project uuid="9af27fad6b5a4783b66213343f27f0a1">tempest-ServerDiskConfigTestJSON-387886039</nova:project>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:53:07 compute-0 nova_compute[260935]:         <nova:port uuid="13bb6d15-e65c-4e29-b0f3-b7a5a830236d">
Oct 11 08:53:07 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:53:07 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:53:07 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <system>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <entry name="serial">92be5b35-6b7a-4f95-924d-008348f27b42</entry>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <entry name="uuid">92be5b35-6b7a-4f95-924d-008348f27b42</entry>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     </system>
Oct 11 08:53:07 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:53:07 compute-0 nova_compute[260935]:   <os>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:   </os>
Oct 11 08:53:07 compute-0 nova_compute[260935]:   <features>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:   </features>
Oct 11 08:53:07 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:53:07 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:53:07 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/92be5b35-6b7a-4f95-924d-008348f27b42_disk">
Oct 11 08:53:07 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:07 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/92be5b35-6b7a-4f95-924d-008348f27b42_disk.config">
Oct 11 08:53:07 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:07 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:5b:b5:4a"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <target dev="tap13bb6d15-e6"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/console.log" append="off"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <video>
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     </video>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:53:07 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:53:07 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:53:07 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:53:07 compute-0 nova_compute[260935]: </domain>
Oct 11 08:53:07 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.254 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Preparing to wait for external event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.254 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.255 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.255 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.256 2 DEBUG nova.virt.libvirt.vif [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-118771075',display_name='tempest-ServerDiskConfigTestJSON-server-118771075',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-118771075',id=43,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-aqy60l5l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:01Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=92be5b35-6b7a-4f95-924d-008348f27b42,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.256 2 DEBUG nova.network.os_vif_util [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.257 2 DEBUG nova.network.os_vif_util [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.257 2 DEBUG os_vif [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.259 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.260 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.264 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13bb6d15-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.265 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap13bb6d15-e6, col_values=(('external_ids', {'iface-id': '13bb6d15-e65c-4e29-b0f3-b7a5a830236d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:b5:4a', 'vm-uuid': '92be5b35-6b7a-4f95-924d-008348f27b42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:07 compute-0 NetworkManager[44960]: <info>  [1760172787.2683] manager: (tap13bb6d15-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.279 2 INFO os_vif [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6')
Oct 11 08:53:07 compute-0 ceph-mon[74313]: pgmap v1448: 321 pgs: 321 active+clean; 88 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Oct 11 08:53:07 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3193996983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:07 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3532255320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.350 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.350 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.350 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No VIF found with MAC fa:16:3e:5b:b5:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.351 2 INFO nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Using config drive
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.368 2 DEBUG nova.storage.rbd_utils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:07.593 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.714 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.747 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.747 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.765 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 08:53:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 184 op/s
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.971 2 INFO nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Creating config drive at /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config
Oct 11 08:53:07 compute-0 nova_compute[260935]: 2025-10-11 08:53:07.980 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53mlddjp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.031 2 DEBUG nova.network.neutron [req-38da5eba-4ea9-426c-8077-e383ef156bd3 req-2b1b7bde-db0e-499b-96f6-ffd59fc17030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Updated VIF entry in instance network info cache for port 13bb6d15-e65c-4e29-b0f3-b7a5a830236d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.033 2 DEBUG nova.network.neutron [req-38da5eba-4ea9-426c-8077-e383ef156bd3 req-2b1b7bde-db0e-499b-96f6-ffd59fc17030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Updating instance_info_cache with network_info: [{"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.050 2 DEBUG oslo_concurrency.lockutils [req-38da5eba-4ea9-426c-8077-e383ef156bd3 req-2b1b7bde-db0e-499b-96f6-ffd59fc17030 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-92be5b35-6b7a-4f95-924d-008348f27b42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.143 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53mlddjp" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.179 2 DEBUG nova.storage.rbd_utils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.183 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.265 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "2c551e6f-adba-4963-a583-c5118e2be62a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.266 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.283 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:53:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.300658) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172788300741, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2479, "num_deletes": 517, "total_data_size": 3280159, "memory_usage": 3341912, "flush_reason": "Manual Compaction"}
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct 11 08:53:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Oct 11 08:53:08 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172788333678, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3207618, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27894, "largest_seqno": 30372, "table_properties": {"data_size": 3196929, "index_size": 6351, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 25793, "raw_average_key_size": 19, "raw_value_size": 3173114, "raw_average_value_size": 2452, "num_data_blocks": 278, "num_entries": 1294, "num_filter_entries": 1294, "num_deletions": 517, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760172606, "oldest_key_time": 1760172606, "file_creation_time": 1760172788, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 33073 microseconds, and 13209 cpu microseconds.
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.333739) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3207618 bytes OK
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.333769) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.337079) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.337105) EVENT_LOG_v1 {"time_micros": 1760172788337097, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.337129) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3268589, prev total WAL file size 3268630, number of live WAL files 2.
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.339923) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3132KB)], [62(8448KB)]
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172788339991, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11858756, "oldest_snapshot_seqno": -1}
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5457 keys, 10129590 bytes, temperature: kUnknown
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172788429407, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10129590, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10089267, "index_size": 25557, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 136910, "raw_average_key_size": 25, "raw_value_size": 9987205, "raw_average_value_size": 1830, "num_data_blocks": 1047, "num_entries": 5457, "num_filter_entries": 5457, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760172788, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.430324) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10129590 bytes
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.432850) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.1 rd, 112.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.3 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 6502, records dropped: 1045 output_compression: NoCompression
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.432889) EVENT_LOG_v1 {"time_micros": 1760172788432871, "job": 34, "event": "compaction_finished", "compaction_time_micros": 89778, "compaction_time_cpu_micros": 48306, "output_level": 6, "num_output_files": 1, "total_output_size": 10129590, "num_input_records": 6502, "num_output_records": 5457, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.433 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.434 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172788435425, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172788439564, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.339703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.440177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.440186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.440189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.440192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:53:08 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:08.440195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.444 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.445 2 INFO nova.compute.claims [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.489 2 DEBUG oslo_concurrency.processutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.490 2 INFO nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Deleting local config drive /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config because it was imported into RBD.
Oct 11 08:53:08 compute-0 kernel: tap13bb6d15-e6: entered promiscuous mode
Oct 11 08:53:08 compute-0 NetworkManager[44960]: <info>  [1760172788.5855] manager: (tap13bb6d15-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/176)
Oct 11 08:53:08 compute-0 ovn_controller[152945]: 2025-10-11T08:53:08Z|00369|binding|INFO|Claiming lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d for this chassis.
Oct 11 08:53:08 compute-0 ovn_controller[152945]: 2025-10-11T08:53:08Z|00370|binding|INFO|13bb6d15-e65c-4e29-b0f3-b7a5a830236d: Claiming fa:16:3e:5b:b5:4a 10.100.0.12
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.608 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:b5:4a 10.100.0.12'], port_security=['fa:16:3e:5b:b5:4a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '92be5b35-6b7a-4f95-924d-008348f27b42', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=13bb6d15-e65c-4e29-b0f3-b7a5a830236d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.610 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 13bb6d15-e65c-4e29-b0f3-b7a5a830236d in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd bound to our chassis
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.614 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.631 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.634 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[52903d81-9db3-4b0b-8fa8-5756ec9f3c51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.635 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape5d4fc7a-11 in ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.639 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape5d4fc7a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.639 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8f81fb-2b2f-4a65-b8e6-2662cf1a5de4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.640 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf53cc1-8bd0-41a4-b204-d46b1ae4232d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.662 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef8a986-a6d7-47b8-88d0-58e077d3d69a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:08 compute-0 systemd-machined[215705]: New machine qemu-49-instance-0000002b.
Oct 11 08:53:08 compute-0 systemd-udevd[310756]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:53:08 compute-0 systemd[1]: Started Virtual Machine qemu-49-instance-0000002b.
Oct 11 08:53:08 compute-0 NetworkManager[44960]: <info>  [1760172788.7014] device (tap13bb6d15-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:53:08 compute-0 NetworkManager[44960]: <info>  [1760172788.7023] device (tap13bb6d15-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.705 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6fc483-8ee3-42ba-b2c5-7541c242f4ef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:08 compute-0 ovn_controller[152945]: 2025-10-11T08:53:08Z|00371|binding|INFO|Setting lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d ovn-installed in OVS
Oct 11 08:53:08 compute-0 ovn_controller[152945]: 2025-10-11T08:53:08Z|00372|binding|INFO|Setting lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d up in Southbound
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:08 compute-0 nova_compute[260935]: 2025-10-11 08:53:08.720 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:53:08 compute-0 podman[310733]: 2025-10-11 08:53:08.738921065 +0000 UTC m=+0.110941729 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.747 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c3734bb4-0998-407f-a11d-e59cc0275325]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.755 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[916896fb-7ade-4b3d-bd42-444ebaf5098d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:08 compute-0 NetworkManager[44960]: <info>  [1760172788.7562] manager: (tape5d4fc7a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/177)
Oct 11 08:53:08 compute-0 systemd-udevd[310763]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.797 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5e5172-fcfe-4cb9-88c5-2f390d29d2e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.799 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a42293ce-0299-4be3-9f07-2cf32ac51e73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:08 compute-0 podman[310734]: 2025-10-11 08:53:08.809024427 +0000 UTC m=+0.164595415 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:53:08 compute-0 NetworkManager[44960]: <info>  [1760172788.8415] device (tape5d4fc7a-10): carrier: link connected
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.847 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[82a40946-c684-4e57-9456-ee517aee032b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.872 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f0074c37-12da-4699-b3d1-079bcce35bd4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463354, 'reachable_time': 25190, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310833, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.894 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[655ce9ac-0966-4e33-acf3-4369f472a357]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:2033'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463354, 'tstamp': 463354}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310834, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.916 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dedd3a42-4fd8-4d70-a24c-d8d071b6de39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463354, 'reachable_time': 25190, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310835, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:08.954 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[75534780-5fb8-4b40-831b-5c607bfe571a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.025 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[18ac4946-bf25-4532-8e2b-c57bd6b28fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.026 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.026 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.027 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5d4fc7a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:09 compute-0 NetworkManager[44960]: <info>  [1760172789.0302] manager: (tape5d4fc7a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Oct 11 08:53:09 compute-0 kernel: tape5d4fc7a-10: entered promiscuous mode
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.033 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5d4fc7a-10, col_values=(('external_ids', {'iface-id': '7a0f31c4-9bda-45df-9fec-aacc40fc88c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:09 compute-0 ovn_controller[152945]: 2025-10-11T08:53:09Z|00373|binding|INFO|Releasing lport 7a0f31c4-9bda-45df-9fec-aacc40fc88c1 from this chassis (sb_readonly=0)
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.056 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.058 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4e0e16e4-10a8-40d2-a7c5-c87221106bc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.059 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:53:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:09.059 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'env', 'PROCESS_TAG=haproxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.065 2 DEBUG nova.compute.manager [req-d9972e53-6061-4406-8fbc-25aee1c67b8a req-459fa68a-bb49-4799-a8cb-9032c356b804 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.065 2 DEBUG oslo_concurrency.lockutils [req-d9972e53-6061-4406-8fbc-25aee1c67b8a req-459fa68a-bb49-4799-a8cb-9032c356b804 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.066 2 DEBUG oslo_concurrency.lockutils [req-d9972e53-6061-4406-8fbc-25aee1c67b8a req-459fa68a-bb49-4799-a8cb-9032c356b804 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.066 2 DEBUG oslo_concurrency.lockutils [req-d9972e53-6061-4406-8fbc-25aee1c67b8a req-459fa68a-bb49-4799-a8cb-9032c356b804 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.066 2 DEBUG nova.compute.manager [req-d9972e53-6061-4406-8fbc-25aee1c67b8a req-459fa68a-bb49-4799-a8cb-9032c356b804 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Processing event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:53:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:53:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2864788087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.152 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.159 2 DEBUG nova.compute.provider_tree [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.173 2 DEBUG nova.scheduler.client.report [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.190 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.191 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.229 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.229 2 DEBUG nova.network.neutron [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.256 2 INFO nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.270 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:53:09 compute-0 ceph-mon[74313]: pgmap v1449: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 184 op/s
Oct 11 08:53:09 compute-0 ceph-mon[74313]: osdmap e193: 3 total, 3 up, 3 in
Oct 11 08:53:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2864788087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.349 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.350 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.351 2 INFO nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Creating image(s)
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.383 2 DEBUG nova.storage.rbd_utils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] rbd image 2c551e6f-adba-4963-a583-c5118e2be62a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.441 2 DEBUG nova.storage.rbd_utils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] rbd image 2c551e6f-adba-4963-a583-c5118e2be62a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.478 2 DEBUG nova.storage.rbd_utils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] rbd image 2c551e6f-adba-4963-a583-c5118e2be62a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.485 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:09 compute-0 podman[310939]: 2025-10-11 08:53:09.536938573 +0000 UTC m=+0.083154573 container create 8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.537 2 DEBUG nova.policy [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea1a78d0e9f549b580366a5b344f23f5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '141fa83aa39e4cf6883c3f86fe0de7d4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:53:09 compute-0 systemd[1]: Started libpod-conmon-8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b.scope.
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.591 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.592 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.593 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.594 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:09 compute-0 podman[310939]: 2025-10-11 08:53:09.507297205 +0000 UTC m=+0.053513245 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:53:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b308db994383b8949a545fed9fa022caa751e57c9421e100b75cd184b4505b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:53:09 compute-0 podman[310939]: 2025-10-11 08:53:09.628993566 +0000 UTC m=+0.175209576 container init 8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 08:53:09 compute-0 podman[310939]: 2025-10-11 08:53:09.638131785 +0000 UTC m=+0.184347785 container start 8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.638 2 DEBUG nova.storage.rbd_utils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] rbd image 2c551e6f-adba-4963-a583-c5118e2be62a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.648 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 2c551e6f-adba-4963-a583-c5118e2be62a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:09 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[310979]: [NOTICE]   (311001) : New worker (311004) forked
Oct 11 08:53:09 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[310979]: [NOTICE]   (311001) : Loading success.
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.684 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172789.6052735, 92be5b35-6b7a-4f95-924d-008348f27b42 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.685 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] VM Started (Lifecycle Event)
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.690 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.692 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172774.3881469, 5750649d-960f-42d5-b127-de8b9a2bee8f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.692 2 INFO nova.compute.manager [-] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] VM Stopped (Lifecycle Event)
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.696 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.700 2 INFO nova.virt.libvirt.driver [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance spawned successfully.
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.700 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.706 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.741 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.744 2 DEBUG nova.compute.manager [None req-b67761ac-2fd0-4cf9-85ca-60791b6f67d8 - - - - - -] [instance: 5750649d-960f-42d5-b127-de8b9a2bee8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.749 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.750 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.753 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.760 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.761 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.761 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.762 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.763 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.763 2 DEBUG nova.virt.libvirt.driver [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.773 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.774 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172789.6054146, 92be5b35-6b7a-4f95-924d-008348f27b42 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.774 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] VM Paused (Lifecycle Event)
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.798 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.804 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.805 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.805 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.805 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.806 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.857 2 INFO nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Took 8.07 seconds to spawn the instance on the hypervisor.
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.859 2 DEBUG nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.861 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172789.7062812, 92be5b35-6b7a-4f95-924d-008348f27b42 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.862 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] VM Resumed (Lifecycle Event)
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.906 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.922 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1451: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.941 2 INFO nova.compute.manager [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Took 9.22 seconds to build instance.
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.969 2 DEBUG oslo_concurrency.lockutils [None req-b98be1ad-2992-426c-b88a-832688514c32 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:09 compute-0 nova_compute[260935]: 2025-10-11 08:53:09.976 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 2c551e6f-adba-4963-a583-c5118e2be62a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.064 2 DEBUG nova.storage.rbd_utils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] resizing rbd image 2c551e6f-adba-4963-a583-c5118e2be62a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.188 2 DEBUG nova.objects.instance [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c551e6f-adba-4963-a583-c5118e2be62a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.206 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.206 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Ensure instance console log exists: /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.207 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.208 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.208 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:53:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3780399583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.287 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3780399583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.365 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.365 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.424 2 DEBUG nova.network.neutron [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Successfully created port: 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.644 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.647 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4182MB free_disk=59.967525482177734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.648 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.648 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.746 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 92be5b35-6b7a-4f95-924d-008348f27b42 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.747 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 2c551e6f-adba-4963-a583-c5118e2be62a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.747 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.747 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:53:10 compute-0 nova_compute[260935]: 2025-10-11 08:53:10.806 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.193 2 DEBUG nova.compute.manager [req-ff0c1d83-7808-4b8c-b82c-29d0897a515b req-9f3e7d21-7879-4bc4-b398-7eddfa554359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.194 2 DEBUG oslo_concurrency.lockutils [req-ff0c1d83-7808-4b8c-b82c-29d0897a515b req-9f3e7d21-7879-4bc4-b398-7eddfa554359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.194 2 DEBUG oslo_concurrency.lockutils [req-ff0c1d83-7808-4b8c-b82c-29d0897a515b req-9f3e7d21-7879-4bc4-b398-7eddfa554359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.195 2 DEBUG oslo_concurrency.lockutils [req-ff0c1d83-7808-4b8c-b82c-29d0897a515b req-9f3e7d21-7879-4bc4-b398-7eddfa554359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.195 2 DEBUG nova.compute.manager [req-ff0c1d83-7808-4b8c-b82c-29d0897a515b req-9f3e7d21-7879-4bc4-b398-7eddfa554359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] No waiting events found dispatching network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.195 2 WARNING nova.compute.manager [req-ff0c1d83-7808-4b8c-b82c-29d0897a515b req-9f3e7d21-7879-4bc4-b398-7eddfa554359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received unexpected event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d for instance with vm_state active and task_state None.
Oct 11 08:53:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:53:11 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4181708531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.267 2 DEBUG nova.network.neutron [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Successfully updated port: 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.270 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.277 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.293 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "refresh_cache-2c551e6f-adba-4963-a583-c5118e2be62a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.294 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquired lock "refresh_cache-2c551e6f-adba-4963-a583-c5118e2be62a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.294 2 DEBUG nova.network.neutron [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.300 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:53:11 compute-0 ceph-mon[74313]: pgmap v1451: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 08:53:11 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4181708531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.339 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.340 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.340 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.466 2 DEBUG nova.compute.manager [req-c2ac8725-adad-4a27-bff3-e69a97e4f301 req-89f0cbff-394d-4cd0-9be9-a1fa1dc3db3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received event network-changed-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.467 2 DEBUG nova.compute.manager [req-c2ac8725-adad-4a27-bff3-e69a97e4f301 req-89f0cbff-394d-4cd0-9be9-a1fa1dc3db3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Refreshing instance network info cache due to event network-changed-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.467 2 DEBUG oslo_concurrency.lockutils [req-c2ac8725-adad-4a27-bff3-e69a97e4f301 req-89f0cbff-394d-4cd0-9be9-a1fa1dc3db3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-2c551e6f-adba-4963-a583-c5118e2be62a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:53:11 compute-0 nova_compute[260935]: 2025-10-11 08:53:11.565 2 DEBUG nova.network.neutron [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:53:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.341 2 DEBUG nova.network.neutron [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Updating instance_info_cache with network_info: [{"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.413 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Releasing lock "refresh_cache-2c551e6f-adba-4963-a583-c5118e2be62a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.413 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Instance network_info: |[{"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.414 2 DEBUG oslo_concurrency.lockutils [req-c2ac8725-adad-4a27-bff3-e69a97e4f301 req-89f0cbff-394d-4cd0-9be9-a1fa1dc3db3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-2c551e6f-adba-4963-a583-c5118e2be62a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.415 2 DEBUG nova.network.neutron [req-c2ac8725-adad-4a27-bff3-e69a97e4f301 req-89f0cbff-394d-4cd0-9be9-a1fa1dc3db3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Refreshing network info cache for port 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.420 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Start _get_guest_xml network_info=[{"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.426 2 WARNING nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.432 2 DEBUG nova.virt.libvirt.host [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.434 2 DEBUG nova.virt.libvirt.host [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.438 2 DEBUG nova.virt.libvirt.host [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.439 2 DEBUG nova.virt.libvirt.host [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.440 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.440 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.441 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.441 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.442 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.442 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.443 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.443 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.444 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.444 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.445 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.445 2 DEBUG nova.virt.hardware [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.451 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.894 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172777.892748, 98fabab3-6b4a-44f3-b232-f23f34f4e19f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.895 2 INFO nova.compute.manager [-] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] VM Stopped (Lifecycle Event)
Oct 11 08:53:12 compute-0 nova_compute[260935]: 2025-10-11 08:53:12.932 2 DEBUG nova.compute.manager [None req-3608f9a2-9b35-47fe-a037-863e4ddd5669 - - - - - -] [instance: 98fabab3-6b4a-44f3-b232-f23f34f4e19f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3883858593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.031 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.070 2 DEBUG nova.storage.rbd_utils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] rbd image 2c551e6f-adba-4963-a583-c5118e2be62a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.075 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.299538) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172793299662, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 294, "num_deletes": 250, "total_data_size": 74445, "memory_usage": 81056, "flush_reason": "Manual Compaction"}
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172793303373, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 73431, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30373, "largest_seqno": 30666, "table_properties": {"data_size": 71503, "index_size": 156, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5578, "raw_average_key_size": 20, "raw_value_size": 67627, "raw_average_value_size": 245, "num_data_blocks": 7, "num_entries": 275, "num_filter_entries": 275, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760172788, "oldest_key_time": 1760172788, "file_creation_time": 1760172793, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 3858 microseconds, and 1463 cpu microseconds.
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.303425) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 73431 bytes OK
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.303447) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.305122) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.305143) EVENT_LOG_v1 {"time_micros": 1760172793305136, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.305166) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 72293, prev total WAL file size 72293, number of live WAL files 2.
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.305914) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303034' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(71KB)], [65(9892KB)]
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172793305989, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 10203021, "oldest_snapshot_seqno": -1}
Oct 11 08:53:13 compute-0 ceph-mon[74313]: pgmap v1452: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 08:53:13 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3883858593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5224 keys, 6912359 bytes, temperature: kUnknown
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172793362292, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 6912359, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6878460, "index_size": 19709, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 132277, "raw_average_key_size": 25, "raw_value_size": 6785298, "raw_average_value_size": 1298, "num_data_blocks": 802, "num_entries": 5224, "num_filter_entries": 5224, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760172793, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.362639) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 6912359 bytes
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.363745) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.9 rd, 122.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.7 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(233.1) write-amplify(94.1) OK, records in: 5732, records dropped: 508 output_compression: NoCompression
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.363774) EVENT_LOG_v1 {"time_micros": 1760172793363762, "job": 36, "event": "compaction_finished", "compaction_time_micros": 56405, "compaction_time_cpu_micros": 37504, "output_level": 6, "num_output_files": 1, "total_output_size": 6912359, "num_input_records": 5732, "num_output_records": 5224, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172793364018, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172793367101, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.305726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.367252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.367262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.367266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.367270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:53:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:53:13.367275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:53:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1026540899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.518 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.523 2 DEBUG nova.virt.libvirt.vif [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-702852874',display_name='tempest-ImagesOneServerTestJSON-server-702852874',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-702852874',id=44,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='141fa83aa39e4cf6883c3f86fe0de7d4',ramdisk_id='',reservation_id='r-0ygioays',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-896973684',owner_user_name='tempest-ImagesOneServerTestJSON-896973684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:09Z,user_data=None,user_id='ea1a78d0e9f549b580366a5b344f23f5',uuid=2c551e6f-adba-4963-a583-c5118e2be62a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.525 2 DEBUG nova.network.os_vif_util [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Converting VIF {"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.527 2 DEBUG nova.network.os_vif_util [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:47:df,bridge_name='br-int',has_traffic_filtering=True,id=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76,network=Network(eb048a28-2e76-4170-b83c-10a20efb7841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b8f5a1-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.530 2 DEBUG nova.objects.instance [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c551e6f-adba-4963-a583-c5118e2be62a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.563 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:53:13 compute-0 nova_compute[260935]:   <uuid>2c551e6f-adba-4963-a583-c5118e2be62a</uuid>
Oct 11 08:53:13 compute-0 nova_compute[260935]:   <name>instance-0000002c</name>
Oct 11 08:53:13 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:53:13 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:53:13 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <nova:name>tempest-ImagesOneServerTestJSON-server-702852874</nova:name>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:53:12</nova:creationTime>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:53:13 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:53:13 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:53:13 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:53:13 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:53:13 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:53:13 compute-0 nova_compute[260935]:         <nova:user uuid="ea1a78d0e9f549b580366a5b344f23f5">tempest-ImagesOneServerTestJSON-896973684-project-member</nova:user>
Oct 11 08:53:13 compute-0 nova_compute[260935]:         <nova:project uuid="141fa83aa39e4cf6883c3f86fe0de7d4">tempest-ImagesOneServerTestJSON-896973684</nova:project>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:53:13 compute-0 nova_compute[260935]:         <nova:port uuid="29b8f5a1-6e7c-42c3-9876-cc4cc9942b76">
Oct 11 08:53:13 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:53:13 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:53:13 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <system>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <entry name="serial">2c551e6f-adba-4963-a583-c5118e2be62a</entry>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <entry name="uuid">2c551e6f-adba-4963-a583-c5118e2be62a</entry>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     </system>
Oct 11 08:53:13 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:53:13 compute-0 nova_compute[260935]:   <os>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:   </os>
Oct 11 08:53:13 compute-0 nova_compute[260935]:   <features>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:   </features>
Oct 11 08:53:13 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:53:13 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:53:13 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/2c551e6f-adba-4963-a583-c5118e2be62a_disk">
Oct 11 08:53:13 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:13 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/2c551e6f-adba-4963-a583-c5118e2be62a_disk.config">
Oct 11 08:53:13 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:13 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:d0:47:df"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <target dev="tap29b8f5a1-6e"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a/console.log" append="off"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <video>
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     </video>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:53:13 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:53:13 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:53:13 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:53:13 compute-0 nova_compute[260935]: </domain>
Oct 11 08:53:13 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.576 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Preparing to wait for external event network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.577 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.577 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.578 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.579 2 DEBUG nova.virt.libvirt.vif [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-702852874',display_name='tempest-ImagesOneServerTestJSON-server-702852874',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-702852874',id=44,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='141fa83aa39e4cf6883c3f86fe0de7d4',ramdisk_id='',reservation_id='r-0ygioays',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-896973684',owner_user_name='tempest-ImagesOneServerTestJSON-896973684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:09Z,user_data=None,user_id='ea1a78d0e9f549b580366a5b344f23f5',uuid=2c551e6f-adba-4963-a583-c5118e2be62a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.580 2 DEBUG nova.network.os_vif_util [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Converting VIF {"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.581 2 DEBUG nova.network.os_vif_util [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:47:df,bridge_name='br-int',has_traffic_filtering=True,id=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76,network=Network(eb048a28-2e76-4170-b83c-10a20efb7841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b8f5a1-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.581 2 DEBUG os_vif [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:47:df,bridge_name='br-int',has_traffic_filtering=True,id=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76,network=Network(eb048a28-2e76-4170-b83c-10a20efb7841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b8f5a1-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.585 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.593 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29b8f5a1-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29b8f5a1-6e, col_values=(('external_ids', {'iface-id': '29b8f5a1-6e7c-42c3-9876-cc4cc9942b76', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d0:47:df', 'vm-uuid': '2c551e6f-adba-4963-a583-c5118e2be62a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:13 compute-0 NetworkManager[44960]: <info>  [1760172793.5987] manager: (tap29b8f5a1-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/179)
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.609 2 INFO os_vif [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:47:df,bridge_name='br-int',has_traffic_filtering=True,id=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76,network=Network(eb048a28-2e76-4170-b83c-10a20efb7841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b8f5a1-6e')
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.690 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.691 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.692 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] No VIF found with MAC fa:16:3e:d0:47:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.693 2 INFO nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Using config drive
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.730 2 DEBUG nova.storage.rbd_utils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] rbd image 2c551e6f-adba-4963-a583-c5118e2be62a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 145 op/s
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.955 2 DEBUG nova.network.neutron [req-c2ac8725-adad-4a27-bff3-e69a97e4f301 req-89f0cbff-394d-4cd0-9be9-a1fa1dc3db3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Updated VIF entry in instance network info cache for port 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.955 2 DEBUG nova.network.neutron [req-c2ac8725-adad-4a27-bff3-e69a97e4f301 req-89f0cbff-394d-4cd0-9be9-a1fa1dc3db3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Updating instance_info_cache with network_info: [{"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:13 compute-0 nova_compute[260935]: 2025-10-11 08:53:13.978 2 DEBUG oslo_concurrency.lockutils [req-c2ac8725-adad-4a27-bff3-e69a97e4f301 req-89f0cbff-394d-4cd0-9be9-a1fa1dc3db3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-2c551e6f-adba-4963-a583-c5118e2be62a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:53:14 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1026540899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:14 compute-0 nova_compute[260935]: 2025-10-11 08:53:14.701 2 INFO nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Creating config drive at /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a/disk.config
Oct 11 08:53:14 compute-0 nova_compute[260935]: 2025-10-11 08:53:14.711 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpncvrozx1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:14 compute-0 nova_compute[260935]: 2025-10-11 08:53:14.869 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpncvrozx1" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:14 compute-0 nova_compute[260935]: 2025-10-11 08:53:14.912 2 DEBUG nova.storage.rbd_utils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] rbd image 2c551e6f-adba-4963-a583-c5118e2be62a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:14 compute-0 nova_compute[260935]: 2025-10-11 08:53:14.918 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a/disk.config 2c551e6f-adba-4963-a583-c5118e2be62a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:15 compute-0 nova_compute[260935]: 2025-10-11 08:53:15.113 2 DEBUG oslo_concurrency.processutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a/disk.config 2c551e6f-adba-4963-a583-c5118e2be62a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:15 compute-0 nova_compute[260935]: 2025-10-11 08:53:15.114 2 INFO nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Deleting local config drive /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a/disk.config because it was imported into RBD.
Oct 11 08:53:15 compute-0 kernel: tap29b8f5a1-6e: entered promiscuous mode
Oct 11 08:53:15 compute-0 NetworkManager[44960]: <info>  [1760172795.1853] manager: (tap29b8f5a1-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/180)
Oct 11 08:53:15 compute-0 ovn_controller[152945]: 2025-10-11T08:53:15Z|00374|binding|INFO|Claiming lport 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 for this chassis.
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.188 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:15 compute-0 ovn_controller[152945]: 2025-10-11T08:53:15Z|00375|binding|INFO|29b8f5a1-6e7c-42c3-9876-cc4cc9942b76: Claiming fa:16:3e:d0:47:df 10.100.0.14
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.189 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.190 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:15 compute-0 nova_compute[260935]: 2025-10-11 08:53:15.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.202 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:47:df 10.100.0.14'], port_security=['fa:16:3e:d0:47:df 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2c551e6f-adba-4963-a583-c5118e2be62a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb048a28-2e76-4170-b83c-10a20efb7841', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '141fa83aa39e4cf6883c3f86fe0de7d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82265068-f2dc-451b-ac76-cae1a0de925c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74c9fb34-71dd-4115-aee6-37c651590f11, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.203 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 in datapath eb048a28-2e76-4170-b83c-10a20efb7841 bound to our chassis
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.205 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb048a28-2e76-4170-b83c-10a20efb7841
Oct 11 08:53:15 compute-0 systemd-udevd[311280]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.220 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[845d325c-fab6-4c9c-bd40-8244b1a7c7c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.221 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb048a28-21 in ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.223 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb048a28-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.223 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a26d3f-91ee-45ae-b608-1c223547811a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.224 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[116b0e9c-3db9-4602-b52b-f4541302c31a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:15 compute-0 NetworkManager[44960]: <info>  [1760172795.2330] device (tap29b8f5a1-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:53:15 compute-0 NetworkManager[44960]: <info>  [1760172795.2355] device (tap29b8f5a1-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:53:15 compute-0 systemd-machined[215705]: New machine qemu-50-instance-0000002c.
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.244 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[99cde9bd-3d6b-4063-9b3a-a59b4128b4f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:15 compute-0 systemd[1]: Started Virtual Machine qemu-50-instance-0000002c.
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.274 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[988ff2c0-b9ca-400c-b879-d44dfc5b10b3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:15 compute-0 nova_compute[260935]: 2025-10-11 08:53:15.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:15 compute-0 ovn_controller[152945]: 2025-10-11T08:53:15Z|00376|binding|INFO|Setting lport 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 ovn-installed in OVS
Oct 11 08:53:15 compute-0 ovn_controller[152945]: 2025-10-11T08:53:15Z|00377|binding|INFO|Setting lport 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 up in Southbound
Oct 11 08:53:15 compute-0 nova_compute[260935]: 2025-10-11 08:53:15.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.315 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c18fa117-fef0-4512-a3ce-705bb22297c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.321 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4425fb80-0c83-4b3c-a4d7-df963b0152c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:15 compute-0 systemd-udevd[311286]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:53:15 compute-0 NetworkManager[44960]: <info>  [1760172795.3237] manager: (tapeb048a28-20): new Veth device (/org/freedesktop/NetworkManager/Devices/181)
Oct 11 08:53:15 compute-0 ceph-mon[74313]: pgmap v1453: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 145 op/s
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.366 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[67233180-62bf-4295-a7c0-a45204d59483]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.368 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f9cfbab1-8228-49b7-b559-416311d3e3a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:15 compute-0 NetworkManager[44960]: <info>  [1760172795.4067] device (tapeb048a28-20): carrier: link connected
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.416 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a33cf4d3-7969-4294-bc6d-7a2f3a30657a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.441 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d61b32-ba92-43ae-a0f6-c0306543850b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb048a28-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:90:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464010, 'reachable_time': 23977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311323, 'error': None, 'target': 'ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.463 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6beccc49-20c5-4327-b15f-447a88e7fc00]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:90b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464010, 'tstamp': 464010}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311333, 'error': None, 'target': 'ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.491 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[36de97b8-3c1f-4b42-b5b0-b3b467ca6bf5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb048a28-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:90:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464010, 'reachable_time': 23977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311343, 'error': None, 'target': 'ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.537 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb84f1b-f251-4d50-8cd2-db052c07c338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.624 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[17310434-bdce-4104-a26c-e3081919a4fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.626 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb048a28-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.626 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.627 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb048a28-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:15 compute-0 NetworkManager[44960]: <info>  [1760172795.6299] manager: (tapeb048a28-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Oct 11 08:53:15 compute-0 nova_compute[260935]: 2025-10-11 08:53:15.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:15 compute-0 kernel: tapeb048a28-20: entered promiscuous mode
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.633 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb048a28-20, col_values=(('external_ids', {'iface-id': '7efe2784-feff-4b2a-b250-f9f249333e6d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:15 compute-0 ovn_controller[152945]: 2025-10-11T08:53:15Z|00378|binding|INFO|Releasing lport 7efe2784-feff-4b2a-b250-f9f249333e6d from this chassis (sb_readonly=0)
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.636 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb048a28-2e76-4170-b83c-10a20efb7841.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb048a28-2e76-4170-b83c-10a20efb7841.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.637 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6369eb23-08de-488b-b800-3ad2fa07368b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.638 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-eb048a28-2e76-4170-b83c-10a20efb7841
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/eb048a28-2e76-4170-b83c-10a20efb7841.pid.haproxy
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID eb048a28-2e76-4170-b83c-10a20efb7841
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:15.639 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841', 'env', 'PROCESS_TAG=haproxy-eb048a28-2e76-4170-b83c-10a20efb7841', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb048a28-2e76-4170-b83c-10a20efb7841.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:53:15 compute-0 nova_compute[260935]: 2025-10-11 08:53:15.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 145 op/s
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.059 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172796.0580928, 2c551e6f-adba-4963-a583-c5118e2be62a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.061 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] VM Started (Lifecycle Event)
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.080 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.086 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172796.0584378, 2c551e6f-adba-4963-a583-c5118e2be62a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.087 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] VM Paused (Lifecycle Event)
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.113 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:16 compute-0 podman[311392]: 2025-10-11 08:53:16.116252316 +0000 UTC m=+0.092868777 container create b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.121 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.142 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:53:16 compute-0 systemd[1]: Started libpod-conmon-b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad.scope.
Oct 11 08:53:16 compute-0 podman[311392]: 2025-10-11 08:53:16.075446782 +0000 UTC m=+0.052063273 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:53:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a9fa1c571dca2050c3743ed1af84045936d49108271115daa852b0e59f97447/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:53:16 compute-0 podman[311392]: 2025-10-11 08:53:16.226787112 +0000 UTC m=+0.203403603 container init b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 08:53:16 compute-0 podman[311392]: 2025-10-11 08:53:16.236580419 +0000 UTC m=+0.213196880 container start b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:53:16 compute-0 neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841[311407]: [NOTICE]   (311411) : New worker (311413) forked
Oct 11 08:53:16 compute-0 neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841[311407]: [NOTICE]   (311411) : Loading success.
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.692 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.693 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.720 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.729 2 INFO nova.compute.manager [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Rebuilding instance
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.747 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.748 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.779 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.835 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.836 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.843 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.844 2 INFO nova.compute.claims [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:53:16 compute-0 nova_compute[260935]: 2025-10-11 08:53:16.865 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.076 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.095 2 DEBUG nova.compute.manager [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.097 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.203 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.215 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.227 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'resources' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.238 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'migration_context' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.250 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.257 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 08:53:17 compute-0 ceph-mon[74313]: pgmap v1454: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 145 op/s
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.596 2 DEBUG nova.compute.manager [req-e7dab3bd-61d3-489c-9e82-32d5fbd774d5 req-374e1fec-c5bd-468f-a945-6efb5c2cffbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received event network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.597 2 DEBUG oslo_concurrency.lockutils [req-e7dab3bd-61d3-489c-9e82-32d5fbd774d5 req-374e1fec-c5bd-468f-a945-6efb5c2cffbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.598 2 DEBUG oslo_concurrency.lockutils [req-e7dab3bd-61d3-489c-9e82-32d5fbd774d5 req-374e1fec-c5bd-468f-a945-6efb5c2cffbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.598 2 DEBUG oslo_concurrency.lockutils [req-e7dab3bd-61d3-489c-9e82-32d5fbd774d5 req-374e1fec-c5bd-468f-a945-6efb5c2cffbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.599 2 DEBUG nova.compute.manager [req-e7dab3bd-61d3-489c-9e82-32d5fbd774d5 req-374e1fec-c5bd-468f-a945-6efb5c2cffbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Processing event network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.600 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.618 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172797.617573, 2c551e6f-adba-4963-a583-c5118e2be62a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.619 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] VM Resumed (Lifecycle Event)
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.622 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.643 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.644 2 INFO nova.virt.libvirt.driver [-] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Instance spawned successfully.
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.645 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.648 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.665 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:53:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:53:17 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2042650142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.682 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.682 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.683 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.683 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.684 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.685 2 DEBUG nova.virt.libvirt.driver [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.693 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.699 2 DEBUG nova.compute.provider_tree [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.710 2 DEBUG nova.scheduler.client.report [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.734 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.735 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.738 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.740 2 INFO nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Took 8.39 seconds to spawn the instance on the hypervisor.
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.740 2 DEBUG nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.749 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.749 2 INFO nova.compute.claims [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.822 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.823 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.842 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.850 2 INFO nova.compute.manager [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Took 9.52 seconds to build instance.
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.869 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.877 2 DEBUG oslo_concurrency.lockutils [None req-c65e376a-5698-4e93-8870-52c54aed31b8 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 157 op/s
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.959 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.960 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.960 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Creating image(s)
Oct 11 08:53:17 compute-0 nova_compute[260935]: 2025-10-11 08:53:17.983 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.007 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.030 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.034 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.073 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.134 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.140 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.141 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.142 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.174 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.179 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:53:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Oct 11 08:53:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Oct 11 08:53:18 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Oct 11 08:53:18 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2042650142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:18 compute-0 ceph-mon[74313]: osdmap e194: 3 total, 3 up, 3 in
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.509 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:53:18 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4061802871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.617 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] resizing rbd image e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.653 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.663 2 DEBUG nova.compute.provider_tree [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.684 2 DEBUG nova.scheduler.client.report [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.733 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.734 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.751 2 DEBUG nova.objects.instance [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'migration_context' on Instance uuid e3289c21-dd0f-43aa-9d39-3aff16eff5cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.766 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.769 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Ensure instance console log exists: /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.770 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.770 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.771 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.791 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.792 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.819 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.850 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.967 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.968 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.969 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Creating image(s)
Oct 11 08:53:18 compute-0 nova_compute[260935]: 2025-10-11 08:53:18.989 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.016 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.048 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.052 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.099 2 DEBUG nova.policy [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '624c293d73ca4d14a182fadee17abb16', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5fe47dfd30914099a9819413cbab00c6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.102 2 DEBUG nova.policy [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '624c293d73ca4d14a182fadee17abb16', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5fe47dfd30914099a9819413cbab00c6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.142 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.143 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.144 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.144 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.167 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.174 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:19 compute-0 ceph-mon[74313]: pgmap v1455: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 157 op/s
Oct 11 08:53:19 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4061802871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.489 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.561 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] resizing rbd image dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.689 2 DEBUG nova.objects.instance [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'migration_context' on Instance uuid dd2f9164-cc85-46e5-9ac5-2847421fe9fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.710 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.710 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Ensure instance console log exists: /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.711 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.711 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:19 compute-0 nova_compute[260935]: 2025-10-11 08:53:19.712 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 157 op/s
Oct 11 08:53:20 compute-0 nova_compute[260935]: 2025-10-11 08:53:20.146 2 DEBUG nova.compute.manager [req-6f164a33-7cd7-4c53-9a54-51a3b206314f req-993fb27e-9d3d-4f47-b30a-847a94a74cc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received event network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:20 compute-0 nova_compute[260935]: 2025-10-11 08:53:20.150 2 DEBUG oslo_concurrency.lockutils [req-6f164a33-7cd7-4c53-9a54-51a3b206314f req-993fb27e-9d3d-4f47-b30a-847a94a74cc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:20 compute-0 nova_compute[260935]: 2025-10-11 08:53:20.150 2 DEBUG oslo_concurrency.lockutils [req-6f164a33-7cd7-4c53-9a54-51a3b206314f req-993fb27e-9d3d-4f47-b30a-847a94a74cc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:20 compute-0 nova_compute[260935]: 2025-10-11 08:53:20.151 2 DEBUG oslo_concurrency.lockutils [req-6f164a33-7cd7-4c53-9a54-51a3b206314f req-993fb27e-9d3d-4f47-b30a-847a94a74cc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:20 compute-0 nova_compute[260935]: 2025-10-11 08:53:20.151 2 DEBUG nova.compute.manager [req-6f164a33-7cd7-4c53-9a54-51a3b206314f req-993fb27e-9d3d-4f47-b30a-847a94a74cc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] No waiting events found dispatching network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:53:20 compute-0 nova_compute[260935]: 2025-10-11 08:53:20.152 2 WARNING nova.compute.manager [req-6f164a33-7cd7-4c53-9a54-51a3b206314f req-993fb27e-9d3d-4f47-b30a-847a94a74cc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received unexpected event network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 for instance with vm_state active and task_state None.
Oct 11 08:53:20 compute-0 nova_compute[260935]: 2025-10-11 08:53:20.769 2 DEBUG nova.compute.manager [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:20 compute-0 nova_compute[260935]: 2025-10-11 08:53:20.852 2 INFO nova.compute.manager [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] instance snapshotting
Oct 11 08:53:20 compute-0 nova_compute[260935]: 2025-10-11 08:53:20.914 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Successfully created port: 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:53:20 compute-0 nova_compute[260935]: 2025-10-11 08:53:20.973 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Successfully created port: 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:53:21 compute-0 nova_compute[260935]: 2025-10-11 08:53:21.213 2 INFO nova.virt.libvirt.driver [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Beginning live snapshot process
Oct 11 08:53:21 compute-0 nova_compute[260935]: 2025-10-11 08:53:21.379 2 DEBUG nova.virt.libvirt.imagebackend [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 08:53:21 compute-0 ceph-mon[74313]: pgmap v1457: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 157 op/s
Oct 11 08:53:21 compute-0 nova_compute[260935]: 2025-10-11 08:53:21.608 2 DEBUG nova.storage.rbd_utils [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] creating snapshot(f14c129f75a249f19f2e682befcf2145) on rbd image(2c551e6f-adba-4963-a583-c5118e2be62a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:53:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 157 op/s
Oct 11 08:53:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Oct 11 08:53:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Oct 11 08:53:22 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Oct 11 08:53:22 compute-0 ovn_controller[152945]: 2025-10-11T08:53:22Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:b5:4a 10.100.0.12
Oct 11 08:53:22 compute-0 ovn_controller[152945]: 2025-10-11T08:53:22Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:b5:4a 10.100.0.12
Oct 11 08:53:22 compute-0 nova_compute[260935]: 2025-10-11 08:53:22.486 2 DEBUG nova.storage.rbd_utils [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] cloning vms/2c551e6f-adba-4963-a583-c5118e2be62a_disk@f14c129f75a249f19f2e682befcf2145 to images/61bf0bdb-0d02-4888-bd1b-e94e583bf619 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:53:22 compute-0 nova_compute[260935]: 2025-10-11 08:53:22.611 2 DEBUG nova.storage.rbd_utils [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] flattening images/61bf0bdb-0d02-4888-bd1b-e94e583bf619 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:53:22 compute-0 nova_compute[260935]: 2025-10-11 08:53:22.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:22 compute-0 nova_compute[260935]: 2025-10-11 08:53:22.879 2 DEBUG nova.storage.rbd_utils [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] removing snapshot(f14c129f75a249f19f2e682befcf2145) on rbd image(2c551e6f-adba-4963-a583-c5118e2be62a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:53:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:53:23 compute-0 ceph-mon[74313]: pgmap v1458: 321 pgs: 321 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 157 op/s
Oct 11 08:53:23 compute-0 ceph-mon[74313]: osdmap e195: 3 total, 3 up, 3 in
Oct 11 08:53:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Oct 11 08:53:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Oct 11 08:53:23 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Oct 11 08:53:23 compute-0 nova_compute[260935]: 2025-10-11 08:53:23.485 2 DEBUG nova.storage.rbd_utils [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] creating snapshot(snap) on rbd image(61bf0bdb-0d02-4888-bd1b-e94e583bf619) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:53:23 compute-0 nova_compute[260935]: 2025-10-11 08:53:23.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:23 compute-0 nova_compute[260935]: 2025-10-11 08:53:23.749 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Successfully updated port: 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:53:23 compute-0 nova_compute[260935]: 2025-10-11 08:53:23.780 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "refresh_cache-e3289c21-dd0f-43aa-9d39-3aff16eff5cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:53:23 compute-0 nova_compute[260935]: 2025-10-11 08:53:23.780 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquired lock "refresh_cache-e3289c21-dd0f-43aa-9d39-3aff16eff5cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:53:23 compute-0 nova_compute[260935]: 2025-10-11 08:53:23.781 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:53:23 compute-0 nova_compute[260935]: 2025-10-11 08:53:23.829 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Successfully updated port: 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:53:23 compute-0 nova_compute[260935]: 2025-10-11 08:53:23.850 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "refresh_cache-dd2f9164-cc85-46e5-9ac5-2847421fe9fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:53:23 compute-0 nova_compute[260935]: 2025-10-11 08:53:23.851 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquired lock "refresh_cache-dd2f9164-cc85-46e5-9ac5-2847421fe9fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:53:23 compute-0 nova_compute[260935]: 2025-10-11 08:53:23.851 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:53:23 compute-0 nova_compute[260935]: 2025-10-11 08:53:23.934 2 DEBUG nova.compute.manager [req-2e78e465-415c-4b8d-8b8a-0a43f7ac3938 req-568616a4-c20c-49f1-90d1-472bcf58c89e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Received event network-changed-69a28fb2-3a8f-49be-9b93-8dc3db323fd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:23 compute-0 nova_compute[260935]: 2025-10-11 08:53:23.936 2 DEBUG nova.compute.manager [req-2e78e465-415c-4b8d-8b8a-0a43f7ac3938 req-568616a4-c20c-49f1-90d1-472bcf58c89e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Refreshing instance network info cache due to event network-changed-69a28fb2-3a8f-49be-9b93-8dc3db323fd5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:53:23 compute-0 nova_compute[260935]: 2025-10-11 08:53:23.936 2 DEBUG oslo_concurrency.lockutils [req-2e78e465-415c-4b8d-8b8a-0a43f7ac3938 req-568616a4-c20c-49f1-90d1-472bcf58c89e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e3289c21-dd0f-43aa-9d39-3aff16eff5cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:53:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 281 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 13 MiB/s wr, 409 op/s
Oct 11 08:53:24 compute-0 nova_compute[260935]: 2025-10-11 08:53:24.019 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:53:24 compute-0 nova_compute[260935]: 2025-10-11 08:53:24.058 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:53:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Oct 11 08:53:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Oct 11 08:53:24 compute-0 ceph-mon[74313]: osdmap e196: 3 total, 3 up, 3 in
Oct 11 08:53:24 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Oct 11 08:53:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:53:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:53:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:53:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:53:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:53:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:53:25 compute-0 ceph-mon[74313]: pgmap v1461: 321 pgs: 321 active+clean; 281 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 13 MiB/s wr, 409 op/s
Oct 11 08:53:25 compute-0 ceph-mon[74313]: osdmap e197: 3 total, 3 up, 3 in
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.577 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Updating instance_info_cache with network_info: [{"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.598 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Releasing lock "refresh_cache-dd2f9164-cc85-46e5-9ac5-2847421fe9fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.599 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Instance network_info: |[{"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.603 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Start _get_guest_xml network_info=[{"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.608 2 WARNING nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.614 2 DEBUG nova.virt.libvirt.host [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.615 2 DEBUG nova.virt.libvirt.host [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.619 2 DEBUG nova.virt.libvirt.host [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.620 2 DEBUG nova.virt.libvirt.host [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.620 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.621 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.622 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.622 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.623 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.623 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.623 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.624 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.624 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.625 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.626 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.626 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.631 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.679 2 DEBUG nova.network.neutron [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Updating instance_info_cache with network_info: [{"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.716 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Releasing lock "refresh_cache-e3289c21-dd0f-43aa-9d39-3aff16eff5cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.717 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Instance network_info: |[{"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.718 2 DEBUG oslo_concurrency.lockutils [req-2e78e465-415c-4b8d-8b8a-0a43f7ac3938 req-568616a4-c20c-49f1-90d1-472bcf58c89e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e3289c21-dd0f-43aa-9d39-3aff16eff5cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.719 2 DEBUG nova.network.neutron [req-2e78e465-415c-4b8d-8b8a-0a43f7ac3938 req-568616a4-c20c-49f1-90d1-472bcf58c89e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Refreshing network info cache for port 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.725 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Start _get_guest_xml network_info=[{"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.734 2 WARNING nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.740 2 DEBUG nova.virt.libvirt.host [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.741 2 DEBUG nova.virt.libvirt.host [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.747 2 DEBUG nova.virt.libvirt.host [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.748 2 DEBUG nova.virt.libvirt.host [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.748 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.749 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.749 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.750 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.750 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.750 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.750 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.750 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.751 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.751 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.751 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.751 2 DEBUG nova.virt.hardware [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:53:25 compute-0 nova_compute[260935]: 2025-10-11 08:53:25.756 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:25 compute-0 podman[311940]: 2025-10-11 08:53:25.78117731 +0000 UTC m=+0.084471360 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:53:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 281 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 13 MiB/s wr, 409 op/s
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.030 2 DEBUG nova.compute.manager [req-65478d01-8716-47ae-9a3d-137c554d1201 req-8a236e0f-e2a3-4526-b2d5-8a02f7de2f30 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received event network-changed-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.031 2 DEBUG nova.compute.manager [req-65478d01-8716-47ae-9a3d-137c554d1201 req-8a236e0f-e2a3-4526-b2d5-8a02f7de2f30 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Refreshing instance network info cache due to event network-changed-35cdb65d-4c85-4645-9bdf-3ef8f40f9596. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.032 2 DEBUG oslo_concurrency.lockutils [req-65478d01-8716-47ae-9a3d-137c554d1201 req-8a236e0f-e2a3-4526-b2d5-8a02f7de2f30 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-dd2f9164-cc85-46e5-9ac5-2847421fe9fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.032 2 DEBUG oslo_concurrency.lockutils [req-65478d01-8716-47ae-9a3d-137c554d1201 req-8a236e0f-e2a3-4526-b2d5-8a02f7de2f30 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-dd2f9164-cc85-46e5-9ac5-2847421fe9fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.032 2 DEBUG nova.network.neutron [req-65478d01-8716-47ae-9a3d-137c554d1201 req-8a236e0f-e2a3-4526-b2d5-8a02f7de2f30 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Refreshing network info cache for port 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:53:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4017531465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.140 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.176 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.181 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/973492518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.252 2 INFO nova.virt.libvirt.driver [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Snapshot image upload complete
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.253 2 INFO nova.compute.manager [None req-518c83aa-c875-4011-987e-33e4ebeb8a6e ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Took 5.40 seconds to snapshot the instance on the hypervisor.
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.257 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.287 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.293 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4017531465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/973492518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2825526755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.644 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.646 2 DEBUG nova.virt.libvirt.vif [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1151635572',display_name='tempest-tempest.common.compute-instance-1151635572-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1151635572-2',id=46,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-08kxr91r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreat
eTestJSON-1825846956-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:18Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=dd2f9164-cc85-46e5-9ac5-2847421fe9fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.647 2 DEBUG nova.network.os_vif_util [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.648 2 DEBUG nova.network.os_vif_util [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:a9:70,bridge_name='br-int',has_traffic_filtering=True,id=35cdb65d-4c85-4645-9bdf-3ef8f40f9596,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35cdb65d-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.650 2 DEBUG nova.objects.instance [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd2f9164-cc85-46e5-9ac5-2847421fe9fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.670 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <uuid>dd2f9164-cc85-46e5-9ac5-2847421fe9fc</uuid>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <name>instance-0000002e</name>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <nova:name>tempest-tempest.common.compute-instance-1151635572-2</nova:name>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:53:25</nova:creationTime>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <nova:user uuid="624c293d73ca4d14a182fadee17abb16">tempest-MultipleCreateTestJSON-1825846956-project-member</nova:user>
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <nova:project uuid="5fe47dfd30914099a9819413cbab00c6">tempest-MultipleCreateTestJSON-1825846956</nova:project>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <nova:port uuid="35cdb65d-4c85-4645-9bdf-3ef8f40f9596">
Oct 11 08:53:26 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <system>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <entry name="serial">dd2f9164-cc85-46e5-9ac5-2847421fe9fc</entry>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <entry name="uuid">dd2f9164-cc85-46e5-9ac5-2847421fe9fc</entry>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </system>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <os>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   </os>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <features>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   </features>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk">
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk.config">
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:37:a9:70"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <target dev="tap35cdb65d-4c"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc/console.log" append="off"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <video>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </video>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:53:26 compute-0 nova_compute[260935]: </domain>
Oct 11 08:53:26 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.671 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Preparing to wait for external event network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.671 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.672 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.672 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.673 2 DEBUG nova.virt.libvirt.vif [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1151635572',display_name='tempest-tempest.common.compute-instance-1151635572-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1151635572-2',id=46,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-08kxr91r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON-1825846956-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:18Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=dd2f9164-cc85-46e5-9ac5-2847421fe9fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.674 2 DEBUG nova.network.os_vif_util [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.675 2 DEBUG nova.network.os_vif_util [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:a9:70,bridge_name='br-int',has_traffic_filtering=True,id=35cdb65d-4c85-4645-9bdf-3ef8f40f9596,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35cdb65d-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.676 2 DEBUG os_vif [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:a9:70,bridge_name='br-int',has_traffic_filtering=True,id=35cdb65d-4c85-4645-9bdf-3ef8f40f9596,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35cdb65d-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.678 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.678 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.683 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35cdb65d-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap35cdb65d-4c, col_values=(('external_ids', {'iface-id': '35cdb65d-4c85-4645-9bdf-3ef8f40f9596', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:a9:70', 'vm-uuid': 'dd2f9164-cc85-46e5-9ac5-2847421fe9fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:26 compute-0 NetworkManager[44960]: <info>  [1760172806.6880] manager: (tap35cdb65d-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.698 2 INFO os_vif [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:a9:70,bridge_name='br-int',has_traffic_filtering=True,id=35cdb65d-4c85-4645-9bdf-3ef8f40f9596,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35cdb65d-4c')
Oct 11 08:53:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/93188165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.767 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.770 2 DEBUG nova.virt.libvirt.vif [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1151635572',display_name='tempest-tempest.common.compute-instance-1151635572-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1151635572-1',id=45,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-08kxr91r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON-1825846956-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:17Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=e3289c21-dd0f-43aa-9d39-3aff16eff5cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.770 2 DEBUG nova.network.os_vif_util [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.771 2 DEBUG nova.network.os_vif_util [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:4d:8d,bridge_name='br-int',has_traffic_filtering=True,id=69a28fb2-3a8f-49be-9b93-8dc3db323fd5,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69a28fb2-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.772 2 DEBUG nova.objects.instance [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid e3289c21-dd0f-43aa-9d39-3aff16eff5cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.779 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.779 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.780 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No VIF found with MAC fa:16:3e:37:a9:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.780 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Using config drive
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.813 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.823 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <uuid>e3289c21-dd0f-43aa-9d39-3aff16eff5cd</uuid>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <name>instance-0000002d</name>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <nova:name>tempest-tempest.common.compute-instance-1151635572-1</nova:name>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:53:25</nova:creationTime>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <nova:user uuid="624c293d73ca4d14a182fadee17abb16">tempest-MultipleCreateTestJSON-1825846956-project-member</nova:user>
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <nova:project uuid="5fe47dfd30914099a9819413cbab00c6">tempest-MultipleCreateTestJSON-1825846956</nova:project>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <nova:port uuid="69a28fb2-3a8f-49be-9b93-8dc3db323fd5">
Oct 11 08:53:26 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <system>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <entry name="serial">e3289c21-dd0f-43aa-9d39-3aff16eff5cd</entry>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <entry name="uuid">e3289c21-dd0f-43aa-9d39-3aff16eff5cd</entry>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </system>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <os>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   </os>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <features>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   </features>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk">
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk.config">
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:26 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:93:4d:8d"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <target dev="tap69a28fb2-3a"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd/console.log" append="off"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <video>
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </video>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:53:26 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:53:26 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:53:26 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:53:26 compute-0 nova_compute[260935]: </domain>
Oct 11 08:53:26 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.823 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Preparing to wait for external event network-vif-plugged-69a28fb2-3a8f-49be-9b93-8dc3db323fd5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.824 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.824 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.824 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.825 2 DEBUG nova.virt.libvirt.vif [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1151635572',display_name='tempest-tempest.common.compute-instance-1151635572-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1151635572-1',id=45,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-08kxr91r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON-1825846956-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:17Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=e3289c21-dd0f-43aa-9d39-3aff16eff5cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.826 2 DEBUG nova.network.os_vif_util [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.827 2 DEBUG nova.network.os_vif_util [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:4d:8d,bridge_name='br-int',has_traffic_filtering=True,id=69a28fb2-3a8f-49be-9b93-8dc3db323fd5,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69a28fb2-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.828 2 DEBUG os_vif [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:4d:8d,bridge_name='br-int',has_traffic_filtering=True,id=69a28fb2-3a8f-49be-9b93-8dc3db323fd5,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69a28fb2-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.829 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.830 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.834 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69a28fb2-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.835 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap69a28fb2-3a, col_values=(('external_ids', {'iface-id': '69a28fb2-3a8f-49be-9b93-8dc3db323fd5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:4d:8d', 'vm-uuid': 'e3289c21-dd0f-43aa-9d39-3aff16eff5cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:26 compute-0 NetworkManager[44960]: <info>  [1760172806.8389] manager: (tap69a28fb2-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.851 2 INFO os_vif [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:4d:8d,bridge_name='br-int',has_traffic_filtering=True,id=69a28fb2-3a8f-49be-9b93-8dc3db323fd5,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69a28fb2-3a')
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.908 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.909 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.909 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No VIF found with MAC fa:16:3e:93:4d:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.910 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Using config drive
Oct 11 08:53:26 compute-0 nova_compute[260935]: 2025-10-11 08:53:26.945 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.356 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Creating config drive at /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc/disk.config
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.365 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr38qd_6m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.417 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 08:53:27 compute-0 ceph-mon[74313]: pgmap v1463: 321 pgs: 321 active+clean; 281 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 13 MiB/s wr, 409 op/s
Oct 11 08:53:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2825526755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/93188165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.499 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Creating config drive at /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd/disk.config
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.504 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp460r16g7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.543 2 DEBUG nova.network.neutron [req-65478d01-8716-47ae-9a3d-137c554d1201 req-8a236e0f-e2a3-4526-b2d5-8a02f7de2f30 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Updated VIF entry in instance network info cache for port 35cdb65d-4c85-4645-9bdf-3ef8f40f9596. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.544 2 DEBUG nova.network.neutron [req-65478d01-8716-47ae-9a3d-137c554d1201 req-8a236e0f-e2a3-4526-b2d5-8a02f7de2f30 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Updating instance_info_cache with network_info: [{"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.549 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr38qd_6m" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.579 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.583 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc/disk.config dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.638 2 DEBUG oslo_concurrency.lockutils [req-65478d01-8716-47ae-9a3d-137c554d1201 req-8a236e0f-e2a3-4526-b2d5-8a02f7de2f30 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-dd2f9164-cc85-46e5-9ac5-2847421fe9fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.659 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp460r16g7" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.701 2 DEBUG nova.storage.rbd_utils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.706 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd/disk.config e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.771 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc/disk.config dd2f9164-cc85-46e5-9ac5-2847421fe9fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.773 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Deleting local config drive /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc/disk.config because it was imported into RBD.
Oct 11 08:53:27 compute-0 NetworkManager[44960]: <info>  [1760172807.8315] manager: (tap35cdb65d-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/185)
Oct 11 08:53:27 compute-0 kernel: tap35cdb65d-4c: entered promiscuous mode
Oct 11 08:53:27 compute-0 ovn_controller[152945]: 2025-10-11T08:53:27Z|00379|binding|INFO|Claiming lport 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 for this chassis.
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:27 compute-0 ovn_controller[152945]: 2025-10-11T08:53:27Z|00380|binding|INFO|35cdb65d-4c85-4645-9bdf-3ef8f40f9596: Claiming fa:16:3e:37:a9:70 10.100.0.5
Oct 11 08:53:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.862 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:a9:70 10.100.0.5'], port_security=['fa:16:3e:37:a9:70 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'dd2f9164-cc85-46e5-9ac5-2847421fe9fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5fe47dfd30914099a9819413cbab00c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '76121093-a7de-4040-aef4-22a2c06e5eea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c1ab924-8567-41be-9107-10c6210b8f10, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=35cdb65d-4c85-4645-9bdf-3ef8f40f9596) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:53:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.864 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 in datapath aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f bound to our chassis
Oct 11 08:53:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.868 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f
Oct 11 08:53:27 compute-0 systemd-udevd[312218]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:53:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.886 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8ecb5ce4-e5cb-47f6-ba21-c5050519a0ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.887 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaac3fe7a-b1 in ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:53:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.888 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaac3fe7a-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:53:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.888 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3879ed49-6a17-46bf-b9a3-22463208af77]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.889 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f733addf-94ee-44e7-9b00-b1d11c0ece73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:27 compute-0 systemd-machined[215705]: New machine qemu-51-instance-0000002e.
Oct 11 08:53:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.904 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[c4880881-2a76-46b5-830c-b2e209e20e4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:27 compute-0 NetworkManager[44960]: <info>  [1760172807.9060] device (tap35cdb65d-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:53:27 compute-0 NetworkManager[44960]: <info>  [1760172807.9068] device (tap35cdb65d-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:53:27 compute-0 systemd[1]: Started Virtual Machine qemu-51-instance-0000002e.
Oct 11 08:53:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.929 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6544bb64-0ccd-42ed-8823-ec3d0439b21e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 306 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 15 MiB/s wr, 499 op/s
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.953 2 DEBUG oslo_concurrency.processutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd/disk.config e3289c21-dd0f-43aa-9d39-3aff16eff5cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.954 2 INFO nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Deleting local config drive /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd/disk.config because it was imported into RBD.
Oct 11 08:53:27 compute-0 ovn_controller[152945]: 2025-10-11T08:53:27Z|00381|binding|INFO|Setting lport 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 ovn-installed in OVS
Oct 11 08:53:27 compute-0 ovn_controller[152945]: 2025-10-11T08:53:27Z|00382|binding|INFO|Setting lport 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 up in Southbound
Oct 11 08:53:27 compute-0 nova_compute[260935]: 2025-10-11 08:53:27.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.963 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7141b691-a12f-4778-a135-6c7414437f56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:27 compute-0 systemd-udevd[312223]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:53:27 compute-0 NetworkManager[44960]: <info>  [1760172807.9716] manager: (tapaac3fe7a-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/186)
Oct 11 08:53:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:27.970 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5e541e18-25e5-477b-8f38-443e47ff1447]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:28 compute-0 nova_compute[260935]: 2025-10-11 08:53:28.007 2 DEBUG nova.network.neutron [req-2e78e465-415c-4b8d-8b8a-0a43f7ac3938 req-568616a4-c20c-49f1-90d1-472bcf58c89e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Updated VIF entry in instance network info cache for port 69a28fb2-3a8f-49be-9b93-8dc3db323fd5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:53:28 compute-0 nova_compute[260935]: 2025-10-11 08:53:28.008 2 DEBUG nova.network.neutron [req-2e78e465-415c-4b8d-8b8a-0a43f7ac3938 req-568616a4-c20c-49f1-90d1-472bcf58c89e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Updating instance_info_cache with network_info: [{"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.009 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8562c81c-5582-47be-be52-09196c9267f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.013 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c869e5-b479-4745-805c-88a777e6d837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:28 compute-0 NetworkManager[44960]: <info>  [1760172808.0322] manager: (tap69a28fb2-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/187)
Oct 11 08:53:28 compute-0 kernel: tap69a28fb2-3a: entered promiscuous mode
Oct 11 08:53:28 compute-0 ovn_controller[152945]: 2025-10-11T08:53:28Z|00383|binding|INFO|Claiming lport 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 for this chassis.
Oct 11 08:53:28 compute-0 ovn_controller[152945]: 2025-10-11T08:53:28Z|00384|binding|INFO|69a28fb2-3a8f-49be-9b93-8dc3db323fd5: Claiming fa:16:3e:93:4d:8d 10.100.0.11
Oct 11 08:53:28 compute-0 nova_compute[260935]: 2025-10-11 08:53:28.035 2 DEBUG oslo_concurrency.lockutils [req-2e78e465-415c-4b8d-8b8a-0a43f7ac3938 req-568616a4-c20c-49f1-90d1-472bcf58c89e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e3289c21-dd0f-43aa-9d39-3aff16eff5cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:53:28 compute-0 nova_compute[260935]: 2025-10-11 08:53:28.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:28 compute-0 NetworkManager[44960]: <info>  [1760172808.0363] device (tapaac3fe7a-b0): carrier: link connected
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.044 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd48173-22fa-406a-8cf4-57a96b236ba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.048 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:4d:8d 10.100.0.11'], port_security=['fa:16:3e:93:4d:8d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e3289c21-dd0f-43aa-9d39-3aff16eff5cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5fe47dfd30914099a9819413cbab00c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '76121093-a7de-4040-aef4-22a2c06e5eea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c1ab924-8567-41be-9107-10c6210b8f10, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=69a28fb2-3a8f-49be-9b93-8dc3db323fd5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:53:28 compute-0 NetworkManager[44960]: <info>  [1760172808.0520] device (tap69a28fb2-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:53:28 compute-0 NetworkManager[44960]: <info>  [1760172808.0534] device (tap69a28fb2-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:53:28 compute-0 ovn_controller[152945]: 2025-10-11T08:53:28Z|00385|binding|INFO|Setting lport 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 ovn-installed in OVS
Oct 11 08:53:28 compute-0 ovn_controller[152945]: 2025-10-11T08:53:28Z|00386|binding|INFO|Setting lport 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 up in Southbound
Oct 11 08:53:28 compute-0 nova_compute[260935]: 2025-10-11 08:53:28.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:28 compute-0 nova_compute[260935]: 2025-10-11 08:53:28.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.072 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee3c22a-7993-49a8-8f24-06e4ca3c9dc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaac3fe7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:0a:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465273, 'reachable_time': 34198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312266, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:28 compute-0 systemd-machined[215705]: New machine qemu-52-instance-0000002d.
Oct 11 08:53:28 compute-0 systemd[1]: Started Virtual Machine qemu-52-instance-0000002d.
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.096 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[90437ad9-4494-4c4a-8e21-425471dd666a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:a6b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465273, 'tstamp': 465273}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312269, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.117 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1223a9d8-837e-4873-8164-07f497917dfc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaac3fe7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:0a:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465273, 'reachable_time': 34198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312270, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.162 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3dd660ab-09e7-42c8-98bd-18ca8b72153b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:28 compute-0 nova_compute[260935]: 2025-10-11 08:53:28.252 2 DEBUG nova.compute.manager [req-1f3ffa2b-2998-49c7-bce3-629033f992f3 req-3413247f-ac07-43c9-bbdc-d343eefff5ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received event network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:28 compute-0 nova_compute[260935]: 2025-10-11 08:53:28.254 2 DEBUG oslo_concurrency.lockutils [req-1f3ffa2b-2998-49c7-bce3-629033f992f3 req-3413247f-ac07-43c9-bbdc-d343eefff5ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:28 compute-0 nova_compute[260935]: 2025-10-11 08:53:28.255 2 DEBUG oslo_concurrency.lockutils [req-1f3ffa2b-2998-49c7-bce3-629033f992f3 req-3413247f-ac07-43c9-bbdc-d343eefff5ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:28 compute-0 nova_compute[260935]: 2025-10-11 08:53:28.256 2 DEBUG oslo_concurrency.lockutils [req-1f3ffa2b-2998-49c7-bce3-629033f992f3 req-3413247f-ac07-43c9-bbdc-d343eefff5ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:28 compute-0 nova_compute[260935]: 2025-10-11 08:53:28.256 2 DEBUG nova.compute.manager [req-1f3ffa2b-2998-49c7-bce3-629033f992f3 req-3413247f-ac07-43c9-bbdc-d343eefff5ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Processing event network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.263 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e8819015-80c4-420e-ac08-2b9676b37f73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.265 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaac3fe7a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.266 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.267 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaac3fe7a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:28 compute-0 nova_compute[260935]: 2025-10-11 08:53:28.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:28 compute-0 NetworkManager[44960]: <info>  [1760172808.2701] manager: (tapaac3fe7a-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/188)
Oct 11 08:53:28 compute-0 kernel: tapaac3fe7a-b0: entered promiscuous mode
Oct 11 08:53:28 compute-0 nova_compute[260935]: 2025-10-11 08:53:28.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.281 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaac3fe7a-b0, col_values=(('external_ids', {'iface-id': 'debf3d0c-b4f8-4ab8-9507-a0acd9b0ee66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:28 compute-0 nova_compute[260935]: 2025-10-11 08:53:28.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:28 compute-0 ovn_controller[152945]: 2025-10-11T08:53:28Z|00387|binding|INFO|Releasing lport debf3d0c-b4f8-4ab8-9507-a0acd9b0ee66 from this chassis (sb_readonly=0)
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.288 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.289 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2184b6e3-a3fd-4394-954f-be81c546e1bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.291 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f.pid.haproxy
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.295 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'env', 'PROCESS_TAG=haproxy-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:53:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:53:28 compute-0 nova_compute[260935]: 2025-10-11 08:53:28.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:28 compute-0 podman[312348]: 2025-10-11 08:53:28.783395153 +0000 UTC m=+0.126357914 container create 015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 08:53:28 compute-0 podman[312348]: 2025-10-11 08:53:28.710177193 +0000 UTC m=+0.053139954 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:53:28 compute-0 systemd[1]: Started libpod-conmon-015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7.scope.
Oct 11 08:53:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865d12609cd49e2af502496fdc178165cf27c22f0add21517383ac764b2e1b6a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:53:28 compute-0 podman[312348]: 2025-10-11 08:53:28.886141379 +0000 UTC m=+0.229104160 container init 015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:53:28 compute-0 podman[312348]: 2025-10-11 08:53:28.897367197 +0000 UTC m=+0.240329948 container start 015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 08:53:28 compute-0 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[312371]: [NOTICE]   (312404) : New worker (312410) forked
Oct 11 08:53:28 compute-0 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[312371]: [NOTICE]   (312404) : Loading success.
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.992 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 in datapath aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f unbound from our chassis
Oct 11 08:53:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:28.994 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f
Oct 11 08:53:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.022 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7d07f9b9-9db1-4fa6-8cd3-6955d9b53326]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.074 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb9ab5e-683c-499b-aba0-69d06f494001]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.079 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e2ea61ba-498e-42a7-91a7-23ff317f425d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.118 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3df83cba-648c-4a90-951b-3fd605d7ca4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.136 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172809.136218, e3289c21-dd0f-43aa-9d39-3aff16eff5cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.137 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] VM Started (Lifecycle Event)
Oct 11 08:53:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.141 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[43adf8d4-074c-4e2b-8cae-a92755bb8320]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaac3fe7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:0a:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465273, 'reachable_time': 34198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312426, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.160 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[56eac33b-f78b-45f3-ba31-fc63716a2e82]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaac3fe7a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465290, 'tstamp': 465290}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312427, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaac3fe7a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465294, 'tstamp': 465294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312427, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.162 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaac3fe7a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.167 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaac3fe7a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.168 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.168 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaac3fe7a-b0, col_values=(('external_ids', {'iface-id': 'debf3d0c-b4f8-4ab8-9507-a0acd9b0ee66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.168 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.176 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.180 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172809.13974, e3289c21-dd0f-43aa-9d39-3aff16eff5cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.181 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] VM Paused (Lifecycle Event)
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.249 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.253 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.289 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:53:29 compute-0 ovn_controller[152945]: 2025-10-11T08:53:29Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d0:47:df 10.100.0.14
Oct 11 08:53:29 compute-0 ovn_controller[152945]: 2025-10-11T08:53:29Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d0:47:df 10.100.0.14
Oct 11 08:53:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Oct 11 08:53:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Oct 11 08:53:29 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Oct 11 08:53:29 compute-0 ceph-mon[74313]: pgmap v1464: 321 pgs: 321 active+clean; 306 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 15 MiB/s wr, 499 op/s
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.524 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172809.5241818, dd2f9164-cc85-46e5-9ac5-2847421fe9fc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.525 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] VM Started (Lifecycle Event)
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.527 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.531 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.534 2 INFO nova.virt.libvirt.driver [-] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Instance spawned successfully.
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.535 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.559 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.566 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.576 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.576 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.577 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.578 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.578 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.579 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.588 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.589 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172809.524286, dd2f9164-cc85-46e5-9ac5-2847421fe9fc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.589 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] VM Paused (Lifecycle Event)
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.623 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.628 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172809.5312035, dd2f9164-cc85-46e5-9ac5-2847421fe9fc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.628 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] VM Resumed (Lifecycle Event)
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.635 2 INFO nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Took 10.67 seconds to spawn the instance on the hypervisor.
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.635 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.645 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.648 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.662 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.691 2 INFO nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Took 12.84 seconds to build instance.
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.711 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:29 compute-0 kernel: tap13bb6d15-e6 (unregistering): left promiscuous mode
Oct 11 08:53:29 compute-0 NetworkManager[44960]: <info>  [1760172809.7253] device (tap13bb6d15-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:53:29 compute-0 ovn_controller[152945]: 2025-10-11T08:53:29Z|00388|binding|INFO|Releasing lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d from this chassis (sb_readonly=0)
Oct 11 08:53:29 compute-0 ovn_controller[152945]: 2025-10-11T08:53:29Z|00389|binding|INFO|Setting lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d down in Southbound
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:29 compute-0 ovn_controller[152945]: 2025-10-11T08:53:29Z|00390|binding|INFO|Removing iface tap13bb6d15-e6 ovn-installed in OVS
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.746 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:b5:4a 10.100.0.12'], port_security=['fa:16:3e:5b:b5:4a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '92be5b35-6b7a-4f95-924d-008348f27b42', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=13bb6d15-e65c-4e29-b0f3-b7a5a830236d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:53:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.747 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 13bb6d15-e65c-4e29-b0f3-b7a5a830236d in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd unbound from our chassis
Oct 11 08:53:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.748 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:53:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.749 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1f9e24-1ca1-4c72-b96c-5ffe1c293ab1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:29.750 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace which is not needed anymore
Oct 11 08:53:29 compute-0 nova_compute[260935]: 2025-10-11 08:53:29.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:29 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 11 08:53:29 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002b.scope: Consumed 13.410s CPU time.
Oct 11 08:53:29 compute-0 systemd-machined[215705]: Machine qemu-49-instance-0000002b terminated.
Oct 11 08:53:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 306 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.5 MiB/s wr, 83 op/s
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:30 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[310979]: [NOTICE]   (311001) : haproxy version is 2.8.14-c23fe91
Oct 11 08:53:30 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[310979]: [NOTICE]   (311001) : path to executable is /usr/sbin/haproxy
Oct 11 08:53:30 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[310979]: [WARNING]  (311001) : Exiting Master process...
Oct 11 08:53:30 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[310979]: [WARNING]  (311001) : Exiting Master process...
Oct 11 08:53:30 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[310979]: [ALERT]    (311001) : Current worker (311004) exited with code 143 (Terminated)
Oct 11 08:53:30 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[310979]: [WARNING]  (311001) : All workers exited. Exiting... (0)
Oct 11 08:53:30 compute-0 systemd[1]: libpod-8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b.scope: Deactivated successfully.
Oct 11 08:53:30 compute-0 podman[312447]: 2025-10-11 08:53:30.023083162 +0000 UTC m=+0.103004104 container died 8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:53:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b-userdata-shm.mount: Deactivated successfully.
Oct 11 08:53:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-32b308db994383b8949a545fed9fa022caa751e57c9421e100b75cd184b4505b-merged.mount: Deactivated successfully.
Oct 11 08:53:30 compute-0 podman[312447]: 2025-10-11 08:53:30.090099147 +0000 UTC m=+0.170020089 container cleanup 8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 08:53:30 compute-0 systemd[1]: libpod-conmon-8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b.scope: Deactivated successfully.
Oct 11 08:53:30 compute-0 podman[312486]: 2025-10-11 08:53:30.21572288 +0000 UTC m=+0.069074054 container remove 8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 08:53:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.226 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33a13dc0-a021-4aab-b8ef-0f1acb61d72c]: (4, ('Sat Oct 11 08:53:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b)\n8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b\nSat Oct 11 08:53:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b)\n8f4077603fd01054fa63797c06b1df995fb1f62bbad8ac790793721f2ef41e8b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.230 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae57bb0-beab-4f55-8a22-38f4577fdffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.231 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:30 compute-0 kernel: tape5d4fc7a-10: left promiscuous mode
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.276 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aaccb866-aa95-4e89-b073-9dd44aa0af7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.292 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[820a581f-9ca2-45e6-9a34-03fca2870dd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.297 2 DEBUG nova.compute.manager [req-141dd33c-9076-47ef-b021-e0251dbb9fe0 req-504686d0-3c65-474b-8d67-7509a5cb59ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Received event network-vif-plugged-69a28fb2-3a8f-49be-9b93-8dc3db323fd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.297 2 DEBUG oslo_concurrency.lockutils [req-141dd33c-9076-47ef-b021-e0251dbb9fe0 req-504686d0-3c65-474b-8d67-7509a5cb59ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.298 2 DEBUG oslo_concurrency.lockutils [req-141dd33c-9076-47ef-b021-e0251dbb9fe0 req-504686d0-3c65-474b-8d67-7509a5cb59ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.298 2 DEBUG oslo_concurrency.lockutils [req-141dd33c-9076-47ef-b021-e0251dbb9fe0 req-504686d0-3c65-474b-8d67-7509a5cb59ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.298 2 DEBUG nova.compute.manager [req-141dd33c-9076-47ef-b021-e0251dbb9fe0 req-504686d0-3c65-474b-8d67-7509a5cb59ff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Processing event network-vif-plugged-69a28fb2-3a8f-49be-9b93-8dc3db323fd5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.298 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:53:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.299 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa8361d-30f9-4bb3-85ad-35b721bceb99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.303 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172810.3035688, e3289c21-dd0f-43aa-9d39-3aff16eff5cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.304 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] VM Resumed (Lifecycle Event)
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.306 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.310 2 INFO nova.virt.libvirt.driver [-] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Instance spawned successfully.
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.310 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:53:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.317 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[53476561-220c-4b22-b754-563a6bad0dfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463344, 'reachable_time': 16016, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312505, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:30 compute-0 systemd[1]: run-netns-ovnmeta\x2de5d4fc7a\x2d1d47\x2d4774\x2daad7\x2d8bb2b388fbbd.mount: Deactivated successfully.
Oct 11 08:53:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.323 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:53:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:30.323 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ba5545-dcec-499b-a404-9ebe9721c3f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.331 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.335 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.348 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.349 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.349 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.350 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.350 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.351 2 DEBUG nova.virt.libvirt.driver [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.363 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.413 2 INFO nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Took 12.45 seconds to spawn the instance on the hypervisor.
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.414 2 DEBUG nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.434 2 INFO nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance shutdown successfully after 13 seconds.
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.440 2 INFO nova.virt.libvirt.driver [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance destroyed successfully.
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.446 2 INFO nova.virt.libvirt.driver [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance destroyed successfully.
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.448 2 DEBUG nova.virt.libvirt.vif [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-118771075',display_name='tempest-ServerDiskConfigTestJSON-server-118771075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-118771075',id=43,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-aqy60l5l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:16Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=92be5b35-6b7a-4f95-924d-008348f27b42,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.448 2 DEBUG nova.network.os_vif_util [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.450 2 DEBUG nova.network.os_vif_util [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.450 2 DEBUG os_vif [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.455 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13bb6d15-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.464 2 INFO os_vif [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6')
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.492 2 INFO nova.compute.manager [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Took 13.72 seconds to build instance.
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.512 2 DEBUG oslo_concurrency.lockutils [None req-070b867a-f652-445e-9f16-49b0c3eb5099 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:30 compute-0 ceph-mon[74313]: osdmap e198: 3 total, 3 up, 3 in
Oct 11 08:53:30 compute-0 ceph-mon[74313]: pgmap v1466: 321 pgs: 321 active+clean; 306 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.5 MiB/s wr, 83 op/s
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.866 2 DEBUG nova.compute.manager [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.891 2 DEBUG nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received event network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.892 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.893 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.893 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.894 2 DEBUG nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] No waiting events found dispatching network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.894 2 WARNING nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received unexpected event network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 for instance with vm_state active and task_state None.
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.895 2 DEBUG nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-unplugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.895 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.896 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.896 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.896 2 DEBUG nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] No waiting events found dispatching network-vif-unplugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.897 2 WARNING nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received unexpected event network-vif-unplugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d for instance with vm_state active and task_state rebuilding.
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.897 2 DEBUG nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.898 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.898 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.899 2 DEBUG oslo_concurrency.lockutils [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.899 2 DEBUG nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] No waiting events found dispatching network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.900 2 WARNING nova.compute.manager [req-ccd99a30-56cf-4b58-8bc1-fdb4bb69f03b req-de43d55a-259a-4ef5-82df-eac64f92c4ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received unexpected event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d for instance with vm_state active and task_state rebuilding.
Oct 11 08:53:30 compute-0 nova_compute[260935]: 2025-10-11 08:53:30.936 2 INFO nova.compute.manager [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] instance snapshotting
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.018 2 INFO nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Deleting instance files /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42_del
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.019 2 INFO nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Deletion of /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42_del complete
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.203 2 INFO nova.virt.libvirt.driver [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Beginning live snapshot process
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.382 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.383 2 INFO nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Creating image(s)
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.415 2 DEBUG nova.storage.rbd_utils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.446 2 DEBUG nova.storage.rbd_utils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.478 2 DEBUG nova.storage.rbd_utils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.484 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.551 2 DEBUG nova.virt.libvirt.imagebackend [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.585 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.586 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.586 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.587 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.587 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.588 2 INFO nova.compute.manager [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Terminating instance
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.589 2 DEBUG nova.compute.manager [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.593 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.594 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.594 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.594 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.621 2 DEBUG nova.storage.rbd_utils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.625 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 92be5b35-6b7a-4f95-924d-008348f27b42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:31 compute-0 kernel: tap69a28fb2-3a (unregistering): left promiscuous mode
Oct 11 08:53:31 compute-0 NetworkManager[44960]: <info>  [1760172811.7304] device (tap69a28fb2-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:53:31 compute-0 ovn_controller[152945]: 2025-10-11T08:53:31Z|00391|binding|INFO|Releasing lport 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 from this chassis (sb_readonly=0)
Oct 11 08:53:31 compute-0 ovn_controller[152945]: 2025-10-11T08:53:31Z|00392|binding|INFO|Setting lport 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 down in Southbound
Oct 11 08:53:31 compute-0 ovn_controller[152945]: 2025-10-11T08:53:31Z|00393|binding|INFO|Removing iface tap69a28fb2-3a ovn-installed in OVS
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.741 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.742 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.743 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.743 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.740 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:4d:8d 10.100.0.11'], port_security=['fa:16:3e:93:4d:8d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e3289c21-dd0f-43aa-9d39-3aff16eff5cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5fe47dfd30914099a9819413cbab00c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76121093-a7de-4040-aef4-22a2c06e5eea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c1ab924-8567-41be-9107-10c6210b8f10, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=69a28fb2-3a8f-49be-9b93-8dc3db323fd5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.742 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 69a28fb2-3a8f-49be-9b93-8dc3db323fd5 in datapath aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f unbound from our chassis
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.744 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.744 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.746 2 INFO nova.compute.manager [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Terminating instance
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.747 2 DEBUG nova.compute.manager [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.771 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[99be63b1-6f29-4998-8223-fbdd0ffaa7bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:31 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Oct 11 08:53:31 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002d.scope: Consumed 2.232s CPU time.
Oct 11 08:53:31 compute-0 systemd-machined[215705]: Machine qemu-52-instance-0000002d terminated.
Oct 11 08:53:31 compute-0 kernel: tap35cdb65d-4c (unregistering): left promiscuous mode
Oct 11 08:53:31 compute-0 NetworkManager[44960]: <info>  [1760172811.8104] device (tap35cdb65d-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:53:31 compute-0 ovn_controller[152945]: 2025-10-11T08:53:31Z|00394|binding|INFO|Releasing lport 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 from this chassis (sb_readonly=0)
Oct 11 08:53:31 compute-0 ovn_controller[152945]: 2025-10-11T08:53:31Z|00395|binding|INFO|Setting lport 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 down in Southbound
Oct 11 08:53:31 compute-0 ovn_controller[152945]: 2025-10-11T08:53:31Z|00396|binding|INFO|Removing iface tap35cdb65d-4c ovn-installed in OVS
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.828 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:a9:70 10.100.0.5'], port_security=['fa:16:3e:37:a9:70 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'dd2f9164-cc85-46e5-9ac5-2847421fe9fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5fe47dfd30914099a9819413cbab00c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76121093-a7de-4040-aef4-22a2c06e5eea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c1ab924-8567-41be-9107-10c6210b8f10, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=35cdb65d-4c85-4645-9bdf-3ef8f40f9596) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.829 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b66c6daf-a385-4296-9f44-d72214923a7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.833 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[200ab8ef-dce5-45e0-a26f-13115fea9872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.839 2 DEBUG nova.storage.rbd_utils [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] creating snapshot(d13f8dacb182414abf2529591a8646b0) on rbd image(2c551e6f-adba-4963-a583-c5118e2be62a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:53:31 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Oct 11 08:53:31 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002e.scope: Consumed 3.544s CPU time.
Oct 11 08:53:31 compute-0 systemd-machined[215705]: Machine qemu-51-instance-0000002e terminated.
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.879 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[88b856a7-9a3c-47ff-a37d-7041d21a08be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.915 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[674ab7a2-f4da-4474-9149-f0de49f76db9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaac3fe7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:0a:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465273, 'reachable_time': 34198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312687, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.941 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e69aba8b-6fff-4c47-8bc5-1c34069145ed]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaac3fe7a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465290, 'tstamp': 465290}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312698, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaac3fe7a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465294, 'tstamp': 465294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312698, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.943 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaac3fe7a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 306 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 67 op/s
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.953 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaac3fe7a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.953 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.954 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaac3fe7a-b0, col_values=(('external_ids', {'iface-id': 'debf3d0c-b4f8-4ab8-9507-a0acd9b0ee66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.954 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.956 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 35cdb65d-4c85-4645-9bdf-3ef8f40f9596 in datapath aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f unbound from our chassis
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.958 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.960 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d77ac3d6-7dc7-40fa-9b06-c798cc906c5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:31.960 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f namespace which is not needed anymore
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.990 2 INFO nova.virt.libvirt.driver [-] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Instance destroyed successfully.
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.991 2 DEBUG nova.objects.instance [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'resources' on Instance uuid e3289c21-dd0f-43aa-9d39-3aff16eff5cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.994 2 INFO nova.virt.libvirt.driver [-] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Instance destroyed successfully.
Oct 11 08:53:31 compute-0 nova_compute[260935]: 2025-10-11 08:53:31.995 2 DEBUG nova.objects.instance [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'resources' on Instance uuid dd2f9164-cc85-46e5-9ac5-2847421fe9fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.010 2 DEBUG nova.virt.libvirt.vif [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:53:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1151635572',display_name='tempest-tempest.common.compute-instance-1151635572-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1151635572-1',id=45,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-08kxr91r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON-1825846956-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:53:30Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=e3289c21-dd0f-43aa-9d39-3aff16eff5cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.011 2 DEBUG nova.network.os_vif_util [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "address": "fa:16:3e:93:4d:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69a28fb2-3a", "ovs_interfaceid": "69a28fb2-3a8f-49be-9b93-8dc3db323fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.012 2 DEBUG nova.network.os_vif_util [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:4d:8d,bridge_name='br-int',has_traffic_filtering=True,id=69a28fb2-3a8f-49be-9b93-8dc3db323fd5,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69a28fb2-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.012 2 DEBUG os_vif [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:4d:8d,bridge_name='br-int',has_traffic_filtering=True,id=69a28fb2-3a8f-49be-9b93-8dc3db323fd5,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69a28fb2-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.015 2 DEBUG nova.virt.libvirt.vif [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:53:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1151635572',display_name='tempest-tempest.common.compute-instance-1151635572-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1151635572-2',id=46,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-11T08:53:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-08kxr91r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON-1825846956-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:53:29Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=dd2f9164-cc85-46e5-9ac5-2847421fe9fc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.016 2 DEBUG nova.network.os_vif_util [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "address": "fa:16:3e:37:a9:70", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35cdb65d-4c", "ovs_interfaceid": "35cdb65d-4c85-4645-9bdf-3ef8f40f9596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.016 2 DEBUG nova.network.os_vif_util [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:a9:70,bridge_name='br-int',has_traffic_filtering=True,id=35cdb65d-4c85-4645-9bdf-3ef8f40f9596,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35cdb65d-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.017 2 DEBUG os_vif [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:a9:70,bridge_name='br-int',has_traffic_filtering=True,id=35cdb65d-4c85-4645-9bdf-3ef8f40f9596,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35cdb65d-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.018 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69a28fb2-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.023 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 92be5b35-6b7a-4f95-924d-008348f27b42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Oct 11 08:53:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Oct 11 08:53:32 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.063 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35cdb65d-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.070 2 INFO os_vif [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:4d:8d,bridge_name='br-int',has_traffic_filtering=True,id=69a28fb2-3a8f-49be-9b93-8dc3db323fd5,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69a28fb2-3a')
Oct 11 08:53:32 compute-0 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[312371]: [NOTICE]   (312404) : haproxy version is 2.8.14-c23fe91
Oct 11 08:53:32 compute-0 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[312371]: [NOTICE]   (312404) : path to executable is /usr/sbin/haproxy
Oct 11 08:53:32 compute-0 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[312371]: [WARNING]  (312404) : Exiting Master process...
Oct 11 08:53:32 compute-0 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[312371]: [WARNING]  (312404) : Exiting Master process...
Oct 11 08:53:32 compute-0 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[312371]: [ALERT]    (312404) : Current worker (312410) exited with code 143 (Terminated)
Oct 11 08:53:32 compute-0 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[312371]: [WARNING]  (312404) : All workers exited. Exiting... (0)
Oct 11 08:53:32 compute-0 systemd[1]: libpod-015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7.scope: Deactivated successfully.
Oct 11 08:53:32 compute-0 podman[312748]: 2025-10-11 08:53:32.130117719 +0000 UTC m=+0.050131579 container died 015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.132 2 INFO os_vif [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:a9:70,bridge_name='br-int',has_traffic_filtering=True,id=35cdb65d-4c85-4645-9bdf-3ef8f40f9596,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35cdb65d-4c')
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.162 2 DEBUG nova.storage.rbd_utils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] resizing rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:53:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7-userdata-shm.mount: Deactivated successfully.
Oct 11 08:53:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-865d12609cd49e2af502496fdc178165cf27c22f0add21517383ac764b2e1b6a-merged.mount: Deactivated successfully.
Oct 11 08:53:32 compute-0 podman[312748]: 2025-10-11 08:53:32.180995008 +0000 UTC m=+0.101008868 container cleanup 015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 11 08:53:32 compute-0 systemd[1]: libpod-conmon-015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7.scope: Deactivated successfully.
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.224 2 DEBUG nova.storage.rbd_utils [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] cloning vms/2c551e6f-adba-4963-a583-c5118e2be62a_disk@d13f8dacb182414abf2529591a8646b0 to images/d9333d4f-c539-4d70-aedc-258482a78d91 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:53:32 compute-0 podman[312845]: 2025-10-11 08:53:32.259765345 +0000 UTC m=+0.052380802 container remove 015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 08:53:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.273 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[61e723c4-4a3f-45d8-a4cd-de9164846834]: (4, ('Sat Oct 11 08:53:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f (015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7)\n015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7\nSat Oct 11 08:53:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f (015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7)\n015a5b99a61fb98e56ce0a6707996a2334b957091df6ba0aa6f28194e95fa8c7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.275 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd9c8df-c3b0-42c8-b0c5-98016640a3aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.276 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaac3fe7a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:32 compute-0 kernel: tapaac3fe7a-b0: left promiscuous mode
Oct 11 08:53:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.312 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f4305119-a813-4059-ae92-fae5c5f562b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.340 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7383fd71-0744-4a9d-9d4b-5733d9bf2e8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.342 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7dca388b-300c-402a-86a5-313025cce5cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.357 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.358 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Ensure instance console log exists: /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.358 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.359 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.359 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.362 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Start _get_guest_xml network_info=[{"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:53:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.368 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fddf5d8c-a452-40d7-bcca-64f792d4f09a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465266, 'reachable_time': 30081, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312921, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:32 compute-0 systemd[1]: run-netns-ovnmeta\x2daac3fe7a\x2db6f5\x2d4809\x2d9f09\x2d04a3bbe54c8f.mount: Deactivated successfully.
Oct 11 08:53:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.373 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.374 2 WARNING nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 11 08:53:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:32.373 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d4be74a0-dba5-42df-ad47-474be2d8f835]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.392 2 DEBUG nova.storage.rbd_utils [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] flattening images/d9333d4f-c539-4d70-aedc-258482a78d91 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.443 2 DEBUG nova.virt.libvirt.host [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.444 2 DEBUG nova.virt.libvirt.host [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.450 2 DEBUG nova.virt.libvirt.host [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.451 2 DEBUG nova.virt.libvirt.host [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.451 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.452 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.453 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.453 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.454 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.454 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.455 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.455 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.455 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.455 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.457 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.457 2 DEBUG nova.virt.hardware [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.458 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.480 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.684 2 DEBUG nova.compute.manager [req-96deb5cc-723d-4d1e-97b1-25284d83845f req-fccfca93-4258-49bc-a6df-6f63075e43c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Received event network-vif-plugged-69a28fb2-3a8f-49be-9b93-8dc3db323fd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.686 2 DEBUG oslo_concurrency.lockutils [req-96deb5cc-723d-4d1e-97b1-25284d83845f req-fccfca93-4258-49bc-a6df-6f63075e43c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.686 2 DEBUG oslo_concurrency.lockutils [req-96deb5cc-723d-4d1e-97b1-25284d83845f req-fccfca93-4258-49bc-a6df-6f63075e43c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.687 2 DEBUG oslo_concurrency.lockutils [req-96deb5cc-723d-4d1e-97b1-25284d83845f req-fccfca93-4258-49bc-a6df-6f63075e43c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.687 2 DEBUG nova.compute.manager [req-96deb5cc-723d-4d1e-97b1-25284d83845f req-fccfca93-4258-49bc-a6df-6f63075e43c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] No waiting events found dispatching network-vif-plugged-69a28fb2-3a8f-49be-9b93-8dc3db323fd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.687 2 WARNING nova.compute.manager [req-96deb5cc-723d-4d1e-97b1-25284d83845f req-fccfca93-4258-49bc-a6df-6f63075e43c7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Received unexpected event network-vif-plugged-69a28fb2-3a8f-49be-9b93-8dc3db323fd5 for instance with vm_state active and task_state deleting.
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.836 2 INFO nova.virt.libvirt.driver [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Deleting instance files /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd_del
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.837 2 INFO nova.virt.libvirt.driver [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Deletion of /var/lib/nova/instances/e3289c21-dd0f-43aa-9d39-3aff16eff5cd_del complete
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.856 2 DEBUG nova.storage.rbd_utils [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] removing snapshot(d13f8dacb182414abf2529591a8646b0) on rbd image(2c551e6f-adba-4963-a583-c5118e2be62a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.889 2 INFO nova.virt.libvirt.driver [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Deleting instance files /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc_del
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.890 2 INFO nova.virt.libvirt.driver [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Deletion of /var/lib/nova/instances/dd2f9164-cc85-46e5-9ac5-2847421fe9fc_del complete
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.913 2 INFO nova.compute.manager [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Took 1.32 seconds to destroy the instance on the hypervisor.
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.913 2 DEBUG oslo.service.loopingcall [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.914 2 DEBUG nova.compute.manager [-] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.914 2 DEBUG nova.network.neutron [-] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.948 2 INFO nova.compute.manager [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Took 1.20 seconds to destroy the instance on the hypervisor.
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.949 2 DEBUG oslo.service.loopingcall [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.949 2 DEBUG nova.compute.manager [-] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.950 2 DEBUG nova.network.neutron [-] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:53:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:32 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1468391462' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:32 compute-0 nova_compute[260935]: 2025-10-11 08:53:32.999 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.028 2 DEBUG nova.storage.rbd_utils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.033 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Oct 11 08:53:33 compute-0 ceph-mon[74313]: pgmap v1467: 321 pgs: 321 active+clean; 306 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 67 op/s
Oct 11 08:53:33 compute-0 ceph-mon[74313]: osdmap e199: 3 total, 3 up, 3 in
Oct 11 08:53:33 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1468391462' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Oct 11 08:53:33 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.114 2 DEBUG nova.storage.rbd_utils [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] creating snapshot(snap) on rbd image(d9333d4f-c539-4d70-aedc-258482a78d91) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:53:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:53:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Oct 11 08:53:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Oct 11 08:53:33 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Oct 11 08:53:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:33 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2171843780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.499 2 DEBUG nova.compute.manager [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received event network-vif-unplugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.500 2 DEBUG oslo_concurrency.lockutils [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.500 2 DEBUG oslo_concurrency.lockutils [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.500 2 DEBUG oslo_concurrency.lockutils [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.501 2 DEBUG nova.compute.manager [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] No waiting events found dispatching network-vif-unplugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.501 2 DEBUG nova.compute.manager [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received event network-vif-unplugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.501 2 DEBUG nova.compute.manager [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received event network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.501 2 DEBUG oslo_concurrency.lockutils [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.502 2 DEBUG oslo_concurrency.lockutils [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.502 2 DEBUG oslo_concurrency.lockutils [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.502 2 DEBUG nova.compute.manager [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] No waiting events found dispatching network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.502 2 WARNING nova.compute.manager [req-53ad6e34-30d5-4659-a707-55f19a2c1aab req-db7691e4-928d-47a7-b4d4-2da41cd7c501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received unexpected event network-vif-plugged-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 for instance with vm_state active and task_state deleting.
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.510 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.511 2 DEBUG nova.virt.libvirt.vif [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-118771075',display_name='tempest-ServerDiskConfigTestJSON-server-118771075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-118771075',id=43,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-aqy60l5l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:31Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=92be5b35-6b7a-4f95-924d-008348f27b42,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.511 2 DEBUG nova.network.os_vif_util [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.512 2 DEBUG nova.network.os_vif_util [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.514 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:53:33 compute-0 nova_compute[260935]:   <uuid>92be5b35-6b7a-4f95-924d-008348f27b42</uuid>
Oct 11 08:53:33 compute-0 nova_compute[260935]:   <name>instance-0000002b</name>
Oct 11 08:53:33 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:53:33 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:53:33 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-118771075</nova:name>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:53:32</nova:creationTime>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:53:33 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:53:33 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:53:33 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:53:33 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:53:33 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:53:33 compute-0 nova_compute[260935]:         <nova:user uuid="2a171de1f79843e0b048393cabfee77d">tempest-ServerDiskConfigTestJSON-387886039-project-member</nova:user>
Oct 11 08:53:33 compute-0 nova_compute[260935]:         <nova:project uuid="9af27fad6b5a4783b66213343f27f0a1">tempest-ServerDiskConfigTestJSON-387886039</nova:project>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:53:33 compute-0 nova_compute[260935]:         <nova:port uuid="13bb6d15-e65c-4e29-b0f3-b7a5a830236d">
Oct 11 08:53:33 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:53:33 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:53:33 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <system>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <entry name="serial">92be5b35-6b7a-4f95-924d-008348f27b42</entry>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <entry name="uuid">92be5b35-6b7a-4f95-924d-008348f27b42</entry>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     </system>
Oct 11 08:53:33 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:53:33 compute-0 nova_compute[260935]:   <os>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:   </os>
Oct 11 08:53:33 compute-0 nova_compute[260935]:   <features>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:   </features>
Oct 11 08:53:33 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:53:33 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:53:33 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/92be5b35-6b7a-4f95-924d-008348f27b42_disk">
Oct 11 08:53:33 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:33 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/92be5b35-6b7a-4f95-924d-008348f27b42_disk.config">
Oct 11 08:53:33 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:33 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:5b:b5:4a"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <target dev="tap13bb6d15-e6"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/console.log" append="off"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <video>
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     </video>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:53:33 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:53:33 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:53:33 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:53:33 compute-0 nova_compute[260935]: </domain>
Oct 11 08:53:33 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.515 2 DEBUG nova.compute.manager [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Preparing to wait for external event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.515 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.515 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.515 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.516 2 DEBUG nova.virt.libvirt.vif [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-118771075',display_name='tempest-ServerDiskConfigTestJSON-server-118771075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-118771075',id=43,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-aqy60l5l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:31Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=92be5b35-6b7a-4f95-924d-008348f27b42,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.516 2 DEBUG nova.network.os_vif_util [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.516 2 DEBUG nova.network.os_vif_util [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.517 2 DEBUG os_vif [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.518 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13bb6d15-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.521 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap13bb6d15-e6, col_values=(('external_ids', {'iface-id': '13bb6d15-e65c-4e29-b0f3-b7a5a830236d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:b5:4a', 'vm-uuid': '92be5b35-6b7a-4f95-924d-008348f27b42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:33 compute-0 NetworkManager[44960]: <info>  [1760172813.5229] manager: (tap13bb6d15-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.528 2 INFO os_vif [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6')
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.587 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.588 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.588 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No VIF found with MAC fa:16:3e:5b:b5:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.588 2 INFO nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Using config drive
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.606 2 DEBUG nova.storage.rbd_utils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.627 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:33 compute-0 podman[313040]: 2025-10-11 08:53:33.645577071 +0000 UTC m=+0.075349109 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.680 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'keypairs' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 208 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 19 MiB/s rd, 16 MiB/s wr, 1.15k op/s
Oct 11 08:53:33 compute-0 nova_compute[260935]: 2025-10-11 08:53:33.984 2 DEBUG nova.network.neutron [-] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.001 2 DEBUG nova.network.neutron [-] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.004 2 INFO nova.compute.manager [-] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Took 1.05 seconds to deallocate network for instance.
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.026 2 INFO nova.compute.manager [-] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Took 1.11 seconds to deallocate network for instance.
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.070 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.071 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:34 compute-0 ceph-mon[74313]: osdmap e200: 3 total, 3 up, 3 in
Oct 11 08:53:34 compute-0 ceph-mon[74313]: osdmap e201: 3 total, 3 up, 3 in
Oct 11 08:53:34 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2171843780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.077 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.171 2 INFO nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Creating config drive at /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.178 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpijavio5b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.234 2 DEBUG oslo_concurrency.processutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.347 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpijavio5b" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.380 2 DEBUG nova.storage.rbd_utils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.385 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.608 2 DEBUG oslo_concurrency.processutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config 92be5b35-6b7a-4f95-924d-008348f27b42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.609 2 INFO nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Deleting local config drive /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42/disk.config because it was imported into RBD.
Oct 11 08:53:34 compute-0 kernel: tap13bb6d15-e6: entered promiscuous mode
Oct 11 08:53:34 compute-0 systemd-udevd[312651]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.689 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.689 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:34 compute-0 NetworkManager[44960]: <info>  [1760172814.6960] manager: (tap13bb6d15-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/190)
Oct 11 08:53:34 compute-0 ovn_controller[152945]: 2025-10-11T08:53:34Z|00397|binding|INFO|Claiming lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d for this chassis.
Oct 11 08:53:34 compute-0 ovn_controller[152945]: 2025-10-11T08:53:34Z|00398|binding|INFO|13bb6d15-e65c-4e29-b0f3-b7a5a830236d: Claiming fa:16:3e:5b:b5:4a 10.100.0.12
Oct 11 08:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.701 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:b5:4a 10.100.0.12'], port_security=['fa:16:3e:5b:b5:4a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '92be5b35-6b7a-4f95-924d-008348f27b42', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=13bb6d15-e65c-4e29-b0f3-b7a5a830236d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.702 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 13bb6d15-e65c-4e29-b0f3-b7a5a830236d in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd bound to our chassis
Oct 11 08:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.704 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 08:53:34 compute-0 NetworkManager[44960]: <info>  [1760172814.7092] device (tap13bb6d15-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:53:34 compute-0 NetworkManager[44960]: <info>  [1760172814.7102] device (tap13bb6d15-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.713 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.724 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[92c9e708-3d8c-4321-a2ee-f8e425dc2b06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.725 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape5d4fc7a-11 in ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.727 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape5d4fc7a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.728 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2aad4c2b-e227-4cf6-ac46-69345d22dfba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.728 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c7103abf-7fb2-43e5-b264-cd7c4d2faf24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:34 compute-0 ovn_controller[152945]: 2025-10-11T08:53:34Z|00399|binding|INFO|Setting lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d up in Southbound
Oct 11 08:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.744 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[9b59e377-8fc9-4320-9754-2cb60a240eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:34 compute-0 ovn_controller[152945]: 2025-10-11T08:53:34Z|00400|binding|INFO|Setting lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d ovn-installed in OVS
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.751 2 DEBUG nova.compute.manager [req-40b0ea35-a8de-40d5-9a38-fcf450d29e7d req-59ab1f4c-da03-4840-8ce6-33df76e937ae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Received event network-vif-deleted-69a28fb2-3a8f-49be-9b93-8dc3db323fd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:34 compute-0 systemd-machined[215705]: New machine qemu-53-instance-0000002b.
Oct 11 08:53:34 compute-0 systemd[1]: Started Virtual Machine qemu-53-instance-0000002b.
Oct 11 08:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.762 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5218679e-e420-4f4e-aba7-b68c72940dd8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:53:34 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3295096361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.801 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.809 2 DEBUG oslo_concurrency.processutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.824 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3c89fa47-31b6-4455-931d-62b98b1f2b3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.829 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a1e848-c59e-42c5-b5ae-e089e7e5cdf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.828 2 DEBUG nova.compute.provider_tree [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:53:34 compute-0 NetworkManager[44960]: <info>  [1760172814.8349] manager: (tape5d4fc7a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/191)
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.846 2 DEBUG nova.scheduler.client.report [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:53:34 compute-0 systemd-udevd[313169]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.869 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.872 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.884 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b199c1a4-58c1-4669-87f5-7b200fc010e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.890 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9d06f12e-bc7f-4851-9806-490a55fe6fc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.901 2 INFO nova.scheduler.client.report [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Deleted allocations for instance dd2f9164-cc85-46e5-9ac5-2847421fe9fc
Oct 11 08:53:34 compute-0 NetworkManager[44960]: <info>  [1760172814.9312] device (tape5d4fc7a-10): carrier: link connected
Oct 11 08:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.942 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[013b19be-7518-4db1-ac2c-0776a4619edb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.967 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8c37baa8-fb3f-4c47-be95-6fd7645f399c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465963, 'reachable_time': 16746, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313188, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.968 2 DEBUG oslo_concurrency.lockutils [None req-c8c60bb6-fa6d-4709-9538-fd7ca557b391 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "dd2f9164-cc85-46e5-9ac5-2847421fe9fc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:34.989 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4f2365-f2bc-40ef-840c-4dd4f985a746]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:2033'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465963, 'tstamp': 465963}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313196, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:34 compute-0 nova_compute[260935]: 2025-10-11 08:53:34.993 2 DEBUG oslo_concurrency.processutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.015 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2120003d-ccf8-42ad-b0a2-fcd00aafa6d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465963, 'reachable_time': 16746, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313208, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.038 2 INFO nova.virt.libvirt.driver [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Snapshot image upload complete
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.039 2 INFO nova.compute.manager [None req-ea864462-84d7-453c-a364-0b170ab59761 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Took 4.10 seconds to snapshot the instance on the hypervisor.
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.054 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b11d0c-86f4-449b-ad52-137a9d6b4ebf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.151 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[94283121-b6d8-4f53-a9f5-eda2c86a55f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.152 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.153 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.153 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5d4fc7a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:35 compute-0 kernel: tape5d4fc7a-10: entered promiscuous mode
Oct 11 08:53:35 compute-0 NetworkManager[44960]: <info>  [1760172815.1974] manager: (tape5d4fc7a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.200 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5d4fc7a-10, col_values=(('external_ids', {'iface-id': '7a0f31c4-9bda-45df-9fec-aacc40fc88c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:35 compute-0 ovn_controller[152945]: 2025-10-11T08:53:35Z|00401|binding|INFO|Releasing lport 7a0f31c4-9bda-45df-9fec-aacc40fc88c1 from this chassis (sb_readonly=0)
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.204 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.205 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb1ff04-5c98-4f22-a868-c30b1161bc3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.205 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:53:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:35.206 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'env', 'PROCESS_TAG=haproxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:35 compute-0 ceph-mon[74313]: pgmap v1471: 321 pgs: 321 active+clean; 208 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 19 MiB/s rd, 16 MiB/s wr, 1.15k op/s
Oct 11 08:53:35 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3295096361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:53:35 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1964509641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.433 2 DEBUG oslo_concurrency.processutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.444 2 DEBUG nova.compute.provider_tree [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.465 2 DEBUG nova.scheduler.client.report [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.506 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.511 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.520 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.520 2 INFO nova.compute.claims [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.562 2 INFO nova.scheduler.client.report [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Deleted allocations for instance e3289c21-dd0f-43aa-9d39-3aff16eff5cd
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.630 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 92be5b35-6b7a-4f95-924d-008348f27b42 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.630 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172815.627102, 92be5b35-6b7a-4f95-924d-008348f27b42 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.631 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] VM Started (Lifecycle Event)
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.646 2 DEBUG nova.compute.manager [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Received event network-vif-deleted-35cdb65d-4c85-4645-9bdf-3ef8f40f9596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.646 2 DEBUG nova.compute.manager [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.647 2 DEBUG oslo_concurrency.lockutils [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.647 2 DEBUG oslo_concurrency.lockutils [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.648 2 DEBUG oslo_concurrency.lockutils [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.648 2 DEBUG nova.compute.manager [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Processing event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.648 2 DEBUG nova.compute.manager [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.649 2 DEBUG oslo_concurrency.lockutils [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.649 2 DEBUG oslo_concurrency.lockutils [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.649 2 DEBUG oslo_concurrency.lockutils [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.650 2 DEBUG nova.compute.manager [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] No waiting events found dispatching network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.650 2 WARNING nova.compute.manager [req-cda3e6aa-518d-4bbc-8058-2a15f28d5af2 req-b2f214e4-0029-427e-b3aa-424a501b78d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received unexpected event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d for instance with vm_state active and task_state rebuild_spawning.
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.653 2 DEBUG nova.compute.manager [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.656 2 DEBUG oslo_concurrency.lockutils [None req-9e304d57-2dd4-4121-8be6-93377b7cb6cf 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "e3289c21-dd0f-43aa-9d39-3aff16eff5cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.658 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.668 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:53:35 compute-0 podman[313286]: 2025-10-11 08:53:35.673076274 +0000 UTC m=+0.081621339 container create 93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.675 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.681 2 INFO nova.virt.libvirt.driver [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance spawned successfully.
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.682 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.699 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.699 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172815.6274183, 92be5b35-6b7a-4f95-924d-008348f27b42 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.700 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] VM Paused (Lifecycle Event)
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.716 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.717 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.717 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:35 compute-0 podman[313286]: 2025-10-11 08:53:35.628850343 +0000 UTC m=+0.037395458 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.717 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.718 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.719 2 DEBUG nova.virt.libvirt.driver [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.728 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.732 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172815.6643114, 92be5b35-6b7a-4f95-924d-008348f27b42 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.732 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] VM Resumed (Lifecycle Event)
Oct 11 08:53:35 compute-0 systemd[1]: Started libpod-conmon-93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a.scope.
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.745 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:53:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac96d95cf133868deb35b7c5e5709e96b6a8d34762efff7b7d3e6627c387060/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.797 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.800 2 DEBUG nova.compute.manager [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.805 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:35 compute-0 podman[313286]: 2025-10-11 08:53:35.81042778 +0000 UTC m=+0.218972895 container init 93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:53:35 compute-0 podman[313286]: 2025-10-11 08:53:35.822904236 +0000 UTC m=+0.231449301 container start 93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.849 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 08:53:35 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[313302]: [NOTICE]   (313307) : New worker (313309) forked
Oct 11 08:53:35 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[313302]: [NOTICE]   (313307) : Loading success.
Oct 11 08:53:35 compute-0 nova_compute[260935]: 2025-10-11 08:53:35.881 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 208 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 12 MiB/s wr, 851 op/s
Oct 11 08:53:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:53:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/183429936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.312 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.320 2 DEBUG nova.compute.provider_tree [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:53:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1964509641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/183429936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.343 2 DEBUG nova.scheduler.client.report [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.374 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.375 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.380 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.381 2 DEBUG nova.objects.instance [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.460 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.460 2 DEBUG nova.network.neutron [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.495 2 DEBUG oslo_concurrency.lockutils [None req-fe77ed62-25b3-4ad0-9b14-d5990c233b42 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.502 2 INFO nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.528 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.777 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.780 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.781 2 INFO nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Creating image(s)
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.820 2 DEBUG nova.storage.rbd_utils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.864 2 DEBUG nova.storage.rbd_utils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.904 2 DEBUG nova.storage.rbd_utils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:36 compute-0 nova_compute[260935]: 2025-10-11 08:53:36.910 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:37 compute-0 nova_compute[260935]: 2025-10-11 08:53:37.021 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:37 compute-0 nova_compute[260935]: 2025-10-11 08:53:37.023 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:37 compute-0 nova_compute[260935]: 2025-10-11 08:53:37.024 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:37 compute-0 nova_compute[260935]: 2025-10-11 08:53:37.024 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:37 compute-0 nova_compute[260935]: 2025-10-11 08:53:37.057 2 DEBUG nova.storage.rbd_utils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:37 compute-0 nova_compute[260935]: 2025-10-11 08:53:37.063 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:37 compute-0 nova_compute[260935]: 2025-10-11 08:53:37.272 2 DEBUG nova.policy [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd4dce65b39be45739408ca70d672df84', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:53:37 compute-0 ceph-mon[74313]: pgmap v1472: 321 pgs: 321 active+clean; 208 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 12 MiB/s wr, 851 op/s
Oct 11 08:53:37 compute-0 nova_compute[260935]: 2025-10-11 08:53:37.376 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:37 compute-0 nova_compute[260935]: 2025-10-11 08:53:37.442 2 DEBUG nova.storage.rbd_utils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] resizing rbd image 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:53:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:53:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3362876690' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:53:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:53:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3362876690' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:53:37 compute-0 nova_compute[260935]: 2025-10-11 08:53:37.564 2 DEBUG nova.objects.instance [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'migration_context' on Instance uuid 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:37 compute-0 nova_compute[260935]: 2025-10-11 08:53:37.581 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:53:37 compute-0 nova_compute[260935]: 2025-10-11 08:53:37.581 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Ensure instance console log exists: /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:53:37 compute-0 nova_compute[260935]: 2025-10-11 08:53:37.582 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:37 compute-0 nova_compute[260935]: 2025-10-11 08:53:37.583 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:37 compute-0 nova_compute[260935]: 2025-10-11 08:53:37.584 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:37 compute-0 nova_compute[260935]: 2025-10-11 08:53:37.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 292 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 20 MiB/s rd, 19 MiB/s wr, 1.18k op/s
Oct 11 08:53:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:53:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Oct 11 08:53:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Oct 11 08:53:38 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Oct 11 08:53:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3362876690' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:53:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3362876690' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:53:38 compute-0 ceph-mon[74313]: osdmap e202: 3 total, 3 up, 3 in
Oct 11 08:53:38 compute-0 nova_compute[260935]: 2025-10-11 08:53:38.444 2 DEBUG nova.network.neutron [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Successfully created port: 5e003588-ef81-4e87-900f-734b2c4bad32 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:53:38 compute-0 nova_compute[260935]: 2025-10-11 08:53:38.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.051 2 DEBUG nova.network.neutron [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Successfully updated port: 5e003588-ef81-4e87-900f-734b2c4bad32 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.065 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "refresh_cache-8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.065 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquired lock "refresh_cache-8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.065 2 DEBUG nova.network.neutron [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.078 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.078 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.079 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.079 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.080 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.082 2 INFO nova.compute.manager [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Terminating instance
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.084 2 DEBUG nova.compute.manager [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:53:39 compute-0 kernel: tap13bb6d15-e6 (unregistering): left promiscuous mode
Oct 11 08:53:39 compute-0 NetworkManager[44960]: <info>  [1760172819.1447] device (tap13bb6d15-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:53:39 compute-0 ovn_controller[152945]: 2025-10-11T08:53:39Z|00402|binding|INFO|Releasing lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d from this chassis (sb_readonly=0)
Oct 11 08:53:39 compute-0 ovn_controller[152945]: 2025-10-11T08:53:39Z|00403|binding|INFO|Setting lport 13bb6d15-e65c-4e29-b0f3-b7a5a830236d down in Southbound
Oct 11 08:53:39 compute-0 ovn_controller[152945]: 2025-10-11T08:53:39Z|00404|binding|INFO|Removing iface tap13bb6d15-e6 ovn-installed in OVS
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.164 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:b5:4a 10.100.0.12'], port_security=['fa:16:3e:5b:b5:4a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '92be5b35-6b7a-4f95-924d-008348f27b42', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=13bb6d15-e65c-4e29-b0f3-b7a5a830236d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:53:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.168 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 13bb6d15-e65c-4e29-b0f3-b7a5a830236d in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd unbound from our chassis
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.169 2 DEBUG nova.compute.manager [req-fac62629-87d1-4969-8011-b92ed5f58452 req-9ce308ad-42cc-4154-9540-7782c7ad3c55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received event network-changed-5e003588-ef81-4e87-900f-734b2c4bad32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.170 2 DEBUG nova.compute.manager [req-fac62629-87d1-4969-8011-b92ed5f58452 req-9ce308ad-42cc-4154-9540-7782c7ad3c55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Refreshing instance network info cache due to event network-changed-5e003588-ef81-4e87-900f-734b2c4bad32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.170 2 DEBUG oslo_concurrency.lockutils [req-fac62629-87d1-4969-8011-b92ed5f58452 req-9ce308ad-42cc-4154-9540-7782c7ad3c55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.172 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:53:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.174 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33d49502-e71b-45f2-a30d-27576874575b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.175 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace which is not needed anymore
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:39 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 11 08:53:39 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002b.scope: Consumed 4.270s CPU time.
Oct 11 08:53:39 compute-0 systemd-machined[215705]: Machine qemu-53-instance-0000002b terminated.
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.231 2 DEBUG nova.network.neutron [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:53:39 compute-0 podman[313505]: 2025-10-11 08:53:39.276582396 +0000 UTC m=+0.110082550 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 08:53:39 compute-0 podman[313508]: 2025-10-11 08:53:39.313138308 +0000 UTC m=+0.131648325 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:53:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Oct 11 08:53:39 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.330 2 INFO nova.virt.libvirt.driver [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Instance destroyed successfully.
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.331 2 DEBUG nova.objects.instance [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'resources' on Instance uuid 92be5b35-6b7a-4f95-924d-008348f27b42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.351 2 DEBUG nova.virt.libvirt.vif [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-118771075',display_name='tempest-ServerDiskConfigTestJSON-server-118771075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-118771075',id=43,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-aqy60l5l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:53:36Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=92be5b35-6b7a-4f95-924d-008348f27b42,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.351 2 DEBUG nova.network.os_vif_util [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "address": "fa:16:3e:5b:b5:4a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13bb6d15-e6", "ovs_interfaceid": "13bb6d15-e65c-4e29-b0f3-b7a5a830236d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.352 2 DEBUG nova.network.os_vif_util [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.353 2 DEBUG os_vif [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.355 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13bb6d15-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.361 2 INFO os_vif [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:b5:4a,bridge_name='br-int',has_traffic_filtering=True,id=13bb6d15-e65c-4e29-b0f3-b7a5a830236d,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13bb6d15-e6')
Oct 11 08:53:39 compute-0 ceph-mon[74313]: pgmap v1473: 321 pgs: 321 active+clean; 292 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 20 MiB/s rd, 19 MiB/s wr, 1.18k op/s
Oct 11 08:53:39 compute-0 ceph-mon[74313]: osdmap e203: 3 total, 3 up, 3 in
Oct 11 08:53:39 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[313302]: [NOTICE]   (313307) : haproxy version is 2.8.14-c23fe91
Oct 11 08:53:39 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[313302]: [NOTICE]   (313307) : path to executable is /usr/sbin/haproxy
Oct 11 08:53:39 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[313302]: [WARNING]  (313307) : Exiting Master process...
Oct 11 08:53:39 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[313302]: [WARNING]  (313307) : Exiting Master process...
Oct 11 08:53:39 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[313302]: [ALERT]    (313307) : Current worker (313309) exited with code 143 (Terminated)
Oct 11 08:53:39 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[313302]: [WARNING]  (313307) : All workers exited. Exiting... (0)
Oct 11 08:53:39 compute-0 systemd[1]: libpod-93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a.scope: Deactivated successfully.
Oct 11 08:53:39 compute-0 podman[313569]: 2025-10-11 08:53:39.383964498 +0000 UTC m=+0.080741604 container died 93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 08:53:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a-userdata-shm.mount: Deactivated successfully.
Oct 11 08:53:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ac96d95cf133868deb35b7c5e5709e96b6a8d34762efff7b7d3e6627c387060-merged.mount: Deactivated successfully.
Oct 11 08:53:39 compute-0 podman[313569]: 2025-10-11 08:53:39.427488709 +0000 UTC m=+0.124265805 container cleanup 93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:53:39 compute-0 systemd[1]: libpod-conmon-93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a.scope: Deactivated successfully.
Oct 11 08:53:39 compute-0 podman[313627]: 2025-10-11 08:53:39.52046693 +0000 UTC m=+0.061888976 container remove 93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 08:53:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.529 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eb176494-5e16-4c16-9e9e-e4f1f1871a89]: (4, ('Sat Oct 11 08:53:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a)\n93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a\nSat Oct 11 08:53:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a)\n93ce5815cc24073fc5465470cde888fbff7aeb8d44ee8d0f37da2dc9d054a24a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.532 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dce67f11-2815-4ac9-b288-b5d51bba2100]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.533 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:39 compute-0 kernel: tape5d4fc7a-10: left promiscuous mode
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.578 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dd36affc-2f17-4db0-897b-39809b07317d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.601 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2ff319-106e-45b3-a4e3-d73de3099c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.603 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a04e21-0c93-42ca-aa43-b2a9c005fec0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.622 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[06e44ba7-adae-4c9f-898c-b805b762cdef]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465951, 'reachable_time': 30669, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313642, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.625 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:53:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:39.625 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d96c45cb-2bd3-4e54-bcbc-6dd136af525e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:39 compute-0 systemd[1]: run-netns-ovnmeta\x2de5d4fc7a\x2d1d47\x2d4774\x2daad7\x2d8bb2b388fbbd.mount: Deactivated successfully.
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.853 2 INFO nova.virt.libvirt.driver [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Deleting instance files /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42_del
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.854 2 INFO nova.virt.libvirt.driver [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Deletion of /var/lib/nova/instances/92be5b35-6b7a-4f95-924d-008348f27b42_del complete
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.916 2 INFO nova.compute.manager [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.918 2 DEBUG oslo.service.loopingcall [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.919 2 DEBUG nova.compute.manager [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:53:39 compute-0 nova_compute[260935]: 2025-10-11 08:53:39.919 2 DEBUG nova.network.neutron [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:53:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 292 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 6.4 MiB/s wr, 294 op/s
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.129 2 DEBUG nova.network.neutron [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Updating instance_info_cache with network_info: [{"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.187 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Releasing lock "refresh_cache-8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.188 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Instance network_info: |[{"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.188 2 DEBUG oslo_concurrency.lockutils [req-fac62629-87d1-4969-8011-b92ed5f58452 req-9ce308ad-42cc-4154-9540-7782c7ad3c55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.189 2 DEBUG nova.network.neutron [req-fac62629-87d1-4969-8011-b92ed5f58452 req-9ce308ad-42cc-4154-9540-7782c7ad3c55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Refreshing network info cache for port 5e003588-ef81-4e87-900f-734b2c4bad32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.194 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Start _get_guest_xml network_info=[{"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.201 2 WARNING nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.207 2 DEBUG nova.virt.libvirt.host [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.208 2 DEBUG nova.virt.libvirt.host [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.218 2 DEBUG nova.virt.libvirt.host [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.218 2 DEBUG nova.virt.libvirt.host [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.219 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.219 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.220 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.221 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.221 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.221 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.222 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.222 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.223 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.223 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.224 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.224 2 DEBUG nova.virt.hardware [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.229 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:41 compute-0 ceph-mon[74313]: pgmap v1476: 321 pgs: 321 active+clean; 292 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 6.4 MiB/s wr, 294 op/s
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.517 2 DEBUG nova.network.neutron [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.551 2 INFO nova.compute.manager [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Took 1.63 seconds to deallocate network for instance.
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.616 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.618 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2411837045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.749 2 DEBUG oslo_concurrency.processutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.806 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.811 2 DEBUG nova.compute.manager [req-7d68ca44-f441-441c-866f-cbb22269d6fb req-6a12569b-682a-4969-9a62-05601bd0ba3a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-unplugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.811 2 DEBUG oslo_concurrency.lockutils [req-7d68ca44-f441-441c-866f-cbb22269d6fb req-6a12569b-682a-4969-9a62-05601bd0ba3a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.813 2 DEBUG oslo_concurrency.lockutils [req-7d68ca44-f441-441c-866f-cbb22269d6fb req-6a12569b-682a-4969-9a62-05601bd0ba3a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.814 2 DEBUG oslo_concurrency.lockutils [req-7d68ca44-f441-441c-866f-cbb22269d6fb req-6a12569b-682a-4969-9a62-05601bd0ba3a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.815 2 DEBUG nova.compute.manager [req-7d68ca44-f441-441c-866f-cbb22269d6fb req-6a12569b-682a-4969-9a62-05601bd0ba3a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] No waiting events found dispatching network-vif-unplugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.815 2 WARNING nova.compute.manager [req-7d68ca44-f441-441c-866f-cbb22269d6fb req-6a12569b-682a-4969-9a62-05601bd0ba3a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received unexpected event network-vif-unplugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d for instance with vm_state deleted and task_state None.
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.848 2 DEBUG nova.storage.rbd_utils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.855 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 292 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.3 MiB/s wr, 244 op/s
Oct 11 08:53:41 compute-0 nova_compute[260935]: 2025-10-11 08:53:41.959 2 DEBUG nova.compute.manager [req-7b4d5a9b-ebe0-4a64-862e-37dad2b2c3e0 req-dfa4c539-f64a-486e-97a3-d24e9e7d4e5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-deleted-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:53:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/533724975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.192 2 DEBUG oslo_concurrency.processutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.199 2 DEBUG nova.compute.provider_tree [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.217 2 DEBUG nova.scheduler.client.report [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.239 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.274 2 INFO nova.scheduler.client.report [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Deleted allocations for instance 92be5b35-6b7a-4f95-924d-008348f27b42
Oct 11 08:53:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/705102083' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.344 2 DEBUG oslo_concurrency.lockutils [None req-abb033af-cc52-4e7c-9afb-3c534d1eb32b 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.351 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.353 2 DEBUG nova.virt.libvirt.vif [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-454723019',display_name='tempest-ImagesOneServerNegativeTestJSON-server-454723019',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-454723019',id=47,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-tv2wxykz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',owner_user_name=
'tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:36Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.353 2 DEBUG nova.network.os_vif_util [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.354 2 DEBUG nova.network.os_vif_util [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:d1:ed,bridge_name='br-int',has_traffic_filtering=True,id=5e003588-ef81-4e87-900f-734b2c4bad32,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e003588-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.356 2 DEBUG nova.objects.instance [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'pci_devices' on Instance uuid 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.372 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:53:42 compute-0 nova_compute[260935]:   <uuid>8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08</uuid>
Oct 11 08:53:42 compute-0 nova_compute[260935]:   <name>instance-0000002f</name>
Oct 11 08:53:42 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:53:42 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:53:42 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-454723019</nova:name>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:53:41</nova:creationTime>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:53:42 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:53:42 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:53:42 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:53:42 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:53:42 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:53:42 compute-0 nova_compute[260935]:         <nova:user uuid="d4dce65b39be45739408ca70d672df84">tempest-ImagesOneServerNegativeTestJSON-253514738-project-member</nova:user>
Oct 11 08:53:42 compute-0 nova_compute[260935]:         <nova:project uuid="b34d40b1586348c3be3d9142dfe1770d">tempest-ImagesOneServerNegativeTestJSON-253514738</nova:project>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:53:42 compute-0 nova_compute[260935]:         <nova:port uuid="5e003588-ef81-4e87-900f-734b2c4bad32">
Oct 11 08:53:42 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:53:42 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:53:42 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <system>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <entry name="serial">8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08</entry>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <entry name="uuid">8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08</entry>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     </system>
Oct 11 08:53:42 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:53:42 compute-0 nova_compute[260935]:   <os>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:   </os>
Oct 11 08:53:42 compute-0 nova_compute[260935]:   <features>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:   </features>
Oct 11 08:53:42 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:53:42 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:53:42 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk">
Oct 11 08:53:42 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:42 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk.config">
Oct 11 08:53:42 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:42 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:97:d1:ed"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <target dev="tap5e003588-ef"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08/console.log" append="off"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <video>
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     </video>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:53:42 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:53:42 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:53:42 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:53:42 compute-0 nova_compute[260935]: </domain>
Oct 11 08:53:42 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.374 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Preparing to wait for external event network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.375 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.375 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.376 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.378 2 DEBUG nova.virt.libvirt.vif [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-454723019',display_name='tempest-ImagesOneServerNegativeTestJSON-server-454723019',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-454723019',id=47,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-tv2wxykz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',owner_
user_name='tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:36Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.378 2 DEBUG nova.network.os_vif_util [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2411837045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/533724975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/705102083' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.380 2 DEBUG nova.network.os_vif_util [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:d1:ed,bridge_name='br-int',has_traffic_filtering=True,id=5e003588-ef81-4e87-900f-734b2c4bad32,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e003588-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.380 2 DEBUG os_vif [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:d1:ed,bridge_name='br-int',has_traffic_filtering=True,id=5e003588-ef81-4e87-900f-734b2c4bad32,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e003588-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.386 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.387 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.400 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.401 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.408 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.409 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e003588-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.412 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5e003588-ef, col_values=(('external_ids', {'iface-id': '5e003588-ef81-4e87-900f-734b2c4bad32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:d1:ed', 'vm-uuid': '8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:42 compute-0 NetworkManager[44960]: <info>  [1760172822.4156] manager: (tap5e003588-ef): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.425 2 INFO os_vif [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:d1:ed,bridge_name='br-int',has_traffic_filtering=True,id=5e003588-ef81-4e87-900f-734b2c4bad32,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e003588-ef')
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.476 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.481 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "2c551e6f-adba-4963-a583-c5118e2be62a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.482 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.483 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.483 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.484 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.486 2 INFO nova.compute.manager [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Terminating instance
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.488 2 DEBUG nova.compute.manager [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.499 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.507 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.508 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.525 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.526 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.526 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No VIF found with MAC fa:16:3e:97:d1:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.527 2 INFO nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Using config drive
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.561 2 DEBUG nova.storage.rbd_utils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.568 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:53:42 compute-0 kernel: tap29b8f5a1-6e (unregistering): left promiscuous mode
Oct 11 08:53:42 compute-0 NetworkManager[44960]: <info>  [1760172822.5855] device (tap29b8f5a1-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:53:42 compute-0 ovn_controller[152945]: 2025-10-11T08:53:42Z|00405|binding|INFO|Releasing lport 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 from this chassis (sb_readonly=0)
Oct 11 08:53:42 compute-0 ovn_controller[152945]: 2025-10-11T08:53:42Z|00406|binding|INFO|Setting lport 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 down in Southbound
Oct 11 08:53:42 compute-0 ovn_controller[152945]: 2025-10-11T08:53:42Z|00407|binding|INFO|Removing iface tap29b8f5a1-6e ovn-installed in OVS
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.606 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.606 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.611 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:42.613 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:47:df 10.100.0.14'], port_security=['fa:16:3e:d0:47:df 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2c551e6f-adba-4963-a583-c5118e2be62a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb048a28-2e76-4170-b83c-10a20efb7841', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '141fa83aa39e4cf6883c3f86fe0de7d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82265068-f2dc-451b-ac76-cae1a0de925c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74c9fb34-71dd-4115-aee6-37c651590f11, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:53:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:42.615 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 in datapath eb048a28-2e76-4170-b83c-10a20efb7841 unbound from our chassis
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.616 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.616 2 INFO nova.compute.claims [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:53:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:42.617 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb048a28-2e76-4170-b83c-10a20efb7841, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:53:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:42.618 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b5126f32-929b-4f5d-8c50-b0dc86981a4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:42.619 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841 namespace which is not needed anymore
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.648 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:42 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Oct 11 08:53:42 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002c.scope: Consumed 13.083s CPU time.
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:42 compute-0 systemd-machined[215705]: Machine qemu-50-instance-0000002c terminated.
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.685 2 DEBUG nova.network.neutron [req-fac62629-87d1-4969-8011-b92ed5f58452 req-9ce308ad-42cc-4154-9540-7782c7ad3c55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Updated VIF entry in instance network info cache for port 5e003588-ef81-4e87-900f-734b2c4bad32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.685 2 DEBUG nova.network.neutron [req-fac62629-87d1-4969-8011-b92ed5f58452 req-9ce308ad-42cc-4154-9540-7782c7ad3c55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Updating instance_info_cache with network_info: [{"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.709 2 DEBUG oslo_concurrency.lockutils [req-fac62629-87d1-4969-8011-b92ed5f58452 req-9ce308ad-42cc-4154-9540-7782c7ad3c55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.756 2 INFO nova.virt.libvirt.driver [-] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Instance destroyed successfully.
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.758 2 DEBUG nova.objects.instance [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lazy-loading 'resources' on Instance uuid 2c551e6f-adba-4963-a583-c5118e2be62a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.771 2 DEBUG nova.virt.libvirt.vif [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:53:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-702852874',display_name='tempest-ImagesOneServerTestJSON-server-702852874',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-702852874',id=44,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='141fa83aa39e4cf6883c3f86fe0de7d4',ramdisk_id='',reservation_id='r-0ygioays',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-896973684',owner_user_name='tempest-ImagesOneServerTestJSON-896973684-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:53:35Z,user_data=None,user_id='ea1a78d0e9f549b580366a5b344f23f5',uuid=2c551e6f-adba-4963-a583-c5118e2be62a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.771 2 DEBUG nova.network.os_vif_util [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Converting VIF {"id": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "address": "fa:16:3e:d0:47:df", "network": {"id": "eb048a28-2e76-4170-b83c-10a20efb7841", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1710166253-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "141fa83aa39e4cf6883c3f86fe0de7d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b8f5a1-6e", "ovs_interfaceid": "29b8f5a1-6e7c-42c3-9876-cc4cc9942b76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.772 2 DEBUG nova.network.os_vif_util [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:47:df,bridge_name='br-int',has_traffic_filtering=True,id=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76,network=Network(eb048a28-2e76-4170-b83c-10a20efb7841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b8f5a1-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.772 2 DEBUG os_vif [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:47:df,bridge_name='br-int',has_traffic_filtering=True,id=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76,network=Network(eb048a28-2e76-4170-b83c-10a20efb7841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b8f5a1-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.774 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29b8f5a1-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.783 2 INFO os_vif [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:47:df,bridge_name='br-int',has_traffic_filtering=True,id=29b8f5a1-6e7c-42c3-9876-cc4cc9942b76,network=Network(eb048a28-2e76-4170-b83c-10a20efb7841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b8f5a1-6e')
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.810 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:42 compute-0 neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841[311407]: [NOTICE]   (311411) : haproxy version is 2.8.14-c23fe91
Oct 11 08:53:42 compute-0 neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841[311407]: [NOTICE]   (311411) : path to executable is /usr/sbin/haproxy
Oct 11 08:53:42 compute-0 neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841[311407]: [WARNING]  (311411) : Exiting Master process...
Oct 11 08:53:42 compute-0 neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841[311407]: [WARNING]  (311411) : Exiting Master process...
Oct 11 08:53:42 compute-0 neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841[311407]: [ALERT]    (311411) : Current worker (311413) exited with code 143 (Terminated)
Oct 11 08:53:42 compute-0 neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841[311407]: [WARNING]  (311411) : All workers exited. Exiting... (0)
Oct 11 08:53:42 compute-0 systemd[1]: libpod-b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad.scope: Deactivated successfully.
Oct 11 08:53:42 compute-0 podman[313776]: 2025-10-11 08:53:42.833298623 +0000 UTC m=+0.082675308 container died b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:53:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad-userdata-shm.mount: Deactivated successfully.
Oct 11 08:53:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a9fa1c571dca2050c3743ed1af84045936d49108271115daa852b0e59f97447-merged.mount: Deactivated successfully.
Oct 11 08:53:42 compute-0 podman[313776]: 2025-10-11 08:53:42.880333575 +0000 UTC m=+0.129710250 container cleanup b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 08:53:42 compute-0 systemd[1]: libpod-conmon-b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad.scope: Deactivated successfully.
Oct 11 08:53:42 compute-0 podman[313839]: 2025-10-11 08:53:42.971916976 +0000 UTC m=+0.050172582 container remove b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 11 08:53:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:42.980 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a22c16-3d52-4daa-add9-734fe1a69879]: (4, ('Sat Oct 11 08:53:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841 (b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad)\nb936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad\nSat Oct 11 08:53:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841 (b936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad)\nb936b3db46013b0427a4d00c2b0b214765ca225acfddc18e89eb06ec24770bad\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:42.982 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c965dfb9-db60-48af-856c-f2f2c4384b15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:42.984 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb048a28-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:42 compute-0 nova_compute[260935]: 2025-10-11 08:53:42.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:42 compute-0 kernel: tapeb048a28-20: left promiscuous mode
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[70d0920d-51a7-474f-aa11-263ec1021798]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.045 2 INFO nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Creating config drive at /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08/disk.config
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.053 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5b056b-b934-41cb-a831-219ff205b5f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.055 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzk3vg3h5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.055 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[434d5fa8-07ee-45b3-ae0c-952ff2ec2751]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.080 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[931c29e1-d431-46b4-85e9-8d0121bdc04e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464001, 'reachable_time': 23734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313877, 'error': None, 'target': 'ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.083 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb048a28-2e76-4170-b83c-10a20efb7841 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.083 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[20aa0b79-4ab1-4721-80d2-9cdd6ce252a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 systemd[1]: run-netns-ovnmeta\x2deb048a28\x2d2e76\x2d4170\x2db83c\x2d10a20efb7841.mount: Deactivated successfully.
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.223 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzk3vg3h5" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.254 2 DEBUG nova.storage.rbd_utils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.258 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08/disk.config 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:53:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.313 2 INFO nova.virt.libvirt.driver [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Deleting instance files /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a_del
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.314 2 INFO nova.virt.libvirt.driver [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Deletion of /var/lib/nova/instances/2c551e6f-adba-4963-a583-c5118e2be62a_del complete
Oct 11 08:53:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Oct 11 08:53:43 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Oct 11 08:53:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:53:43 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1127732979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.362 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.364 2 INFO nova.compute.manager [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Took 0.87 seconds to destroy the instance on the hypervisor.
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.364 2 DEBUG oslo.service.loopingcall [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.364 2 DEBUG nova.compute.manager [-] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.364 2 DEBUG nova.network.neutron [-] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.374 2 DEBUG nova.compute.provider_tree [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.391 2 DEBUG nova.scheduler.client.report [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:53:43 compute-0 ceph-mon[74313]: pgmap v1477: 321 pgs: 321 active+clean; 292 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.3 MiB/s wr, 244 op/s
Oct 11 08:53:43 compute-0 ceph-mon[74313]: osdmap e204: 3 total, 3 up, 3 in
Oct 11 08:53:43 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1127732979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.418 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.419 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.421 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.429 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.430 2 INFO nova.compute.claims [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.463 2 DEBUG oslo_concurrency.processutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08/disk.config 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.463 2 INFO nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Deleting local config drive /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08/disk.config because it was imported into RBD.
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.488 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.489 2 DEBUG nova.network.neutron [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.517 2 INFO nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.539 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:53:43 compute-0 kernel: tap5e003588-ef: entered promiscuous mode
Oct 11 08:53:43 compute-0 systemd-udevd[313754]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:53:43 compute-0 NetworkManager[44960]: <info>  [1760172823.5611] manager: (tap5e003588-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/194)
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:43 compute-0 ovn_controller[152945]: 2025-10-11T08:53:43Z|00408|binding|INFO|Claiming lport 5e003588-ef81-4e87-900f-734b2c4bad32 for this chassis.
Oct 11 08:53:43 compute-0 ovn_controller[152945]: 2025-10-11T08:53:43Z|00409|binding|INFO|5e003588-ef81-4e87-900f-734b2c4bad32: Claiming fa:16:3e:97:d1:ed 10.100.0.8
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.580 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:d1:ed 10.100.0.8'], port_security=['fa:16:3e:97:d1:ed 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b07fc82-c0d2-4e42-8878-3b0706731285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be4f554b-8352-4ab3-babd-ad834c691fd3, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=5e003588-ef81-4e87-900f-734b2c4bad32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.583 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 5e003588-ef81-4e87-900f-734b2c4bad32 in datapath a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf bound to our chassis
Oct 11 08:53:43 compute-0 NetworkManager[44960]: <info>  [1760172823.5841] device (tap5e003588-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:53:43 compute-0 NetworkManager[44960]: <info>  [1760172823.5865] device (tap5e003588-ef): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.586 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.607 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c94611-8c50-4b98-a9fa-56d2aa8c6e39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.609 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa1a65c6f-c1 in ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.611 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa1a65c6f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.612 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5416e1ca-1e9b-437f-8bfb-cbe643ce5d86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 systemd-machined[215705]: New machine qemu-54-instance-0000002f.
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.613 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[335e1b26-84cd-48b6-8651-18630e441207]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.633 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[4805ad24-b2d0-4600-8a19-301dbd83b3de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 systemd[1]: Started Virtual Machine qemu-54-instance-0000002f.
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.643 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.645 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.646 2 INFO nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Creating image(s)
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.662 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4e894a4f-ee96-4f51-8394-fd396ceb2bdd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 ovn_controller[152945]: 2025-10-11T08:53:43Z|00410|binding|INFO|Setting lport 5e003588-ef81-4e87-900f-734b2c4bad32 ovn-installed in OVS
Oct 11 08:53:43 compute-0 ovn_controller[152945]: 2025-10-11T08:53:43Z|00411|binding|INFO|Setting lport 5e003588-ef81-4e87-900f-734b2c4bad32 up in Southbound
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.703 2 DEBUG nova.storage.rbd_utils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 090243f9-46ac-42a1-b921-fa3b1974f127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.719 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0dee435a-b72a-4a58-a599-37f4a7ea9f69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.727 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e112b3a1-c5b9-41d1-a28f-66600b3fedc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 NetworkManager[44960]: <info>  [1760172823.7315] manager: (tapa1a65c6f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/195)
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.783 2 DEBUG nova.storage.rbd_utils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 090243f9-46ac-42a1-b921-fa3b1974f127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.785 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3985db-4176-4b92-a9d0-f4b5e73342f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.789 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5e2f7d-a779-4237-8a8f-4e00e7dd7ebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.827 2 DEBUG nova.storage.rbd_utils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 090243f9-46ac-42a1-b921-fa3b1974f127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:43 compute-0 NetworkManager[44960]: <info>  [1760172823.8292] device (tapa1a65c6f-c0): carrier: link connected
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.839 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2c321b18-91bc-4bca-ae18-15f48e0a9905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.851 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.870 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1edcbd77-0680-4d33-8b49-94df7eed6c7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a65c6f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:05:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466853, 'reachable_time': 34091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314019, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.896 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[23efa336-8af4-4721-b2ab-9056b6e341e9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:555'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466853, 'tstamp': 466853}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314021, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.929 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1de38658-04ca-41c7-a4ff-599823c3eb78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a65c6f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:05:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466853, 'reachable_time': 34091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314022, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.934 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 167 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 4.7 KiB/s wr, 102 op/s
Oct 11 08:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:43.974 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a46fce07-d061-4a64-bdfb-bd3b3dff9778]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.980 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.982 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.983 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:43 compute-0 nova_compute[260935]: 2025-10-11 08:53:43.983 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.018 2 DEBUG nova.storage.rbd_utils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 090243f9-46ac-42a1-b921-fa3b1974f127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.028 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 090243f9-46ac-42a1-b921-fa3b1974f127_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.057 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[861d96b0-997c-4dca-9fa2-f0b6198bbcbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.059 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a65c6f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.059 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.060 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1a65c6f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:44 compute-0 NetworkManager[44960]: <info>  [1760172824.0632] manager: (tapa1a65c6f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Oct 11 08:53:44 compute-0 kernel: tapa1a65c6f-c0: entered promiscuous mode
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.066 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1a65c6f-c0, col_values=(('external_ids', {'iface-id': '4bae176b-fbae-4a70-a041-16a7a5205899'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:44 compute-0 ovn_controller[152945]: 2025-10-11T08:53:44Z|00412|binding|INFO|Releasing lport 4bae176b-fbae-4a70-a041-16a7a5205899 from this chassis (sb_readonly=0)
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.088 2 DEBUG nova.policy [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2a171de1f79843e0b048393cabfee77d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9af27fad6b5a4783b66213343f27f0a1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.104 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.105 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6a44de22-9005-4a56-8425-fd15fcd18326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.106 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:53:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:44.107 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'env', 'PROCESS_TAG=haproxy-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.357 2 DEBUG nova.compute.manager [req-4c549393-be9d-41fa-bc0f-e02b869f842d req-3cf6b827-3a09-4a1d-97dd-2b4c7dd87384 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.358 2 DEBUG oslo_concurrency.lockutils [req-4c549393-be9d-41fa-bc0f-e02b869f842d req-3cf6b827-3a09-4a1d-97dd-2b4c7dd87384 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.359 2 DEBUG oslo_concurrency.lockutils [req-4c549393-be9d-41fa-bc0f-e02b869f842d req-3cf6b827-3a09-4a1d-97dd-2b4c7dd87384 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.362 2 DEBUG oslo_concurrency.lockutils [req-4c549393-be9d-41fa-bc0f-e02b869f842d req-3cf6b827-3a09-4a1d-97dd-2b4c7dd87384 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "92be5b35-6b7a-4f95-924d-008348f27b42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.363 2 DEBUG nova.compute.manager [req-4c549393-be9d-41fa-bc0f-e02b869f842d req-3cf6b827-3a09-4a1d-97dd-2b4c7dd87384 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] No waiting events found dispatching network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.364 2 WARNING nova.compute.manager [req-4c549393-be9d-41fa-bc0f-e02b869f842d req-3cf6b827-3a09-4a1d-97dd-2b4c7dd87384 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Received unexpected event network-vif-plugged-13bb6d15-e65c-4e29-b0f3-b7a5a830236d for instance with vm_state deleted and task_state None.
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.402 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 090243f9-46ac-42a1-b921-fa3b1974f127_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:53:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4125527845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.451 2 DEBUG nova.compute.manager [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received event network-vif-unplugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.453 2 DEBUG oslo_concurrency.lockutils [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.453 2 DEBUG oslo_concurrency.lockutils [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.454 2 DEBUG oslo_concurrency.lockutils [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.455 2 DEBUG nova.compute.manager [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] No waiting events found dispatching network-vif-unplugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.455 2 DEBUG nova.compute.manager [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received event network-vif-unplugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.456 2 DEBUG nova.compute.manager [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received event network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.457 2 DEBUG oslo_concurrency.lockutils [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.457 2 DEBUG oslo_concurrency.lockutils [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.458 2 DEBUG oslo_concurrency.lockutils [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.459 2 DEBUG nova.compute.manager [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] No waiting events found dispatching network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.460 2 WARNING nova.compute.manager [req-1d85bb26-9222-420c-ab00-114cd583d1b5 req-8d785d72-8afd-4451-97ca-3d53e718e78f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received unexpected event network-vif-plugged-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 for instance with vm_state active and task_state deleting.
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.461 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.511 2 DEBUG nova.storage.rbd_utils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] resizing rbd image 090243f9-46ac-42a1-b921-fa3b1974f127_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:53:44 compute-0 podman[314130]: 2025-10-11 08:53:44.538343682 +0000 UTC m=+0.062996697 container create 7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.572 2 DEBUG nova.compute.provider_tree [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:53:44 compute-0 systemd[1]: Started libpod-conmon-7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5.scope.
Oct 11 08:53:44 compute-0 podman[314130]: 2025-10-11 08:53:44.506102773 +0000 UTC m=+0.030755828 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.602 2 DEBUG nova.scheduler.client.report [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:53:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5898e42c3d8153f562219a47f22912dd20bb1c3a0e1902954c3573851fcf1fb5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:53:44 compute-0 podman[314130]: 2025-10-11 08:53:44.650394167 +0000 UTC m=+0.175047172 container init 7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 08:53:44 compute-0 podman[314130]: 2025-10-11 08:53:44.66065629 +0000 UTC m=+0.185309295 container start 7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.676 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.677 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.680 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 2.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:44 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[314221]: [NOTICE]   (314243) : New worker (314248) forked
Oct 11 08:53:44 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[314221]: [NOTICE]   (314243) : Loading success.
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.702 2 DEBUG nova.objects.instance [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'migration_context' on Instance uuid 090243f9-46ac-42a1-b921-fa3b1974f127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.705 2 DEBUG nova.network.neutron [-] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.709 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.709 2 INFO nova.compute.claims [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.724 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.724 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Ensure instance console log exists: /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.725 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.725 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.725 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.744 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.744 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.748 2 INFO nova.compute.manager [-] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Took 1.38 seconds to deallocate network for instance.
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.780 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.814 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.815 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.938 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.941 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.941 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Creating image(s)
Oct 11 08:53:44 compute-0 nova_compute[260935]: 2025-10-11 08:53:44.976 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.015 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.062 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.069 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.126 2 DEBUG nova.policy [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '624c293d73ca4d14a182fadee17abb16', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5fe47dfd30914099a9819413cbab00c6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.132 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.178 2 DEBUG nova.network.neutron [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Successfully created port: 94e66298-1c10-486d-9cd5-fd680cfa037c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.190 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.193 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.194 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.194 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.219 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.223 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.269 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172825.201277, 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.270 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] VM Started (Lifecycle Event)
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.318 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.323 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172825.201428, 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.323 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] VM Paused (Lifecycle Event)
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.354 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.359 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.391 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:53:45 compute-0 ceph-mon[74313]: pgmap v1479: 321 pgs: 321 active+clean; 167 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 4.7 KiB/s wr, 102 op/s
Oct 11 08:53:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4125527845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.556 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:53:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446357688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.638 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.646 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] resizing rbd image d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.689 2 DEBUG nova.compute.provider_tree [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.707 2 DEBUG nova.scheduler.client.report [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.775 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.776 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.781 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.788 2 DEBUG nova.objects.instance [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'migration_context' on Instance uuid d0ac94c4-9bbc-443b-bbce-0d447b37153a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.802 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.802 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Ensure instance console log exists: /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.803 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.803 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.804 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.827 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.827 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.844 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.868 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:53:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 167 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.7 KiB/s wr, 80 op/s
Oct 11 08:53:45 compute-0 nova_compute[260935]: 2025-10-11 08:53:45.972 2 DEBUG oslo_concurrency.processutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.027 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.032 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.033 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Creating image(s)
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.071 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.110 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.153 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.160 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.261 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.263 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.264 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.264 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.300 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.306 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:46 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1446357688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.416 2 DEBUG nova.policy [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '624c293d73ca4d14a182fadee17abb16', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5fe47dfd30914099a9819413cbab00c6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:53:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:53:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/92852459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.468 2 DEBUG oslo_concurrency.processutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.477 2 DEBUG nova.compute.provider_tree [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.496 2 DEBUG nova.compute.manager [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received event network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.497 2 DEBUG oslo_concurrency.lockutils [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.497 2 DEBUG oslo_concurrency.lockutils [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.497 2 DEBUG oslo_concurrency.lockutils [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.497 2 DEBUG nova.compute.manager [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Processing event network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.497 2 DEBUG nova.compute.manager [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received event network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.497 2 DEBUG oslo_concurrency.lockutils [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.498 2 DEBUG oslo_concurrency.lockutils [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.498 2 DEBUG oslo_concurrency.lockutils [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.498 2 DEBUG nova.compute.manager [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] No waiting events found dispatching network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.498 2 WARNING nova.compute.manager [req-b87a8a70-14ef-44a8-96c3-3352464a101a req-6bddad4d-1e15-42d8-b47d-94d1c8d77588 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received unexpected event network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 for instance with vm_state building and task_state spawning.
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.499 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.500 2 DEBUG nova.scheduler.client.report [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.509 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172826.508762, 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.509 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] VM Resumed (Lifecycle Event)
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.511 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.525 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.528 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.539 2 INFO nova.virt.libvirt.driver [-] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Instance spawned successfully.
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.540 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.543 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.560 2 INFO nova.scheduler.client.report [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Deleted allocations for instance 2c551e6f-adba-4963-a583-c5118e2be62a
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.578 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.580 2 DEBUG nova.compute.manager [req-5cd1e8d7-27b0-4e66-8667-0fd01263b2ee req-de94ea56-ee52-4dbf-8002-b0ec841770f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Received event network-vif-deleted-29b8f5a1-6e7c-42c3-9876-cc4cc9942b76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.586 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.586 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.586 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.587 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.587 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.587 2 DEBUG nova.virt.libvirt.driver [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.642 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.712 2 INFO nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Took 9.93 seconds to spawn the instance on the hypervisor.
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.712 2 DEBUG nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.714 2 DEBUG oslo_concurrency.lockutils [None req-4bbbffb0-c2c6-44de-bf01-b4884f8d23f1 ea1a78d0e9f549b580366a5b344f23f5 141fa83aa39e4cf6883c3f86fe0de7d4 - - default default] Lock "2c551e6f-adba-4963-a583-c5118e2be62a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.721 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] resizing rbd image c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.819 2 INFO nova.compute.manager [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Took 12.05 seconds to build instance.
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.823 2 DEBUG nova.objects.instance [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'migration_context' on Instance uuid c5c5e7c6-36ba-4cdd-9ad5-03996c419556 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.851 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.851 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Ensure instance console log exists: /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.852 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.852 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.852 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.853 2 DEBUG oslo_concurrency.lockutils [None req-e1c0c43e-5d1b-4f50-9815-d04cf6d649fc d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.986 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172811.9043243, e3289c21-dd0f-43aa-9d39-3aff16eff5cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.987 2 INFO nova.compute.manager [-] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] VM Stopped (Lifecycle Event)
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.989 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172811.9815793, dd2f9164-cc85-46e5-9ac5-2847421fe9fc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:46 compute-0 nova_compute[260935]: 2025-10-11 08:53:46.989 2 INFO nova.compute.manager [-] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] VM Stopped (Lifecycle Event)
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.011 2 DEBUG nova.compute.manager [None req-c752f7c7-78fa-4959-9b83-d4995827bd89 - - - - - -] [instance: e3289c21-dd0f-43aa-9d39-3aff16eff5cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.012 2 DEBUG nova.compute.manager [None req-e7a63e32-9373-4e71-8cc4-461e1707c165 - - - - - -] [instance: dd2f9164-cc85-46e5-9ac5-2847421fe9fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.051 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Successfully created port: 11b55091-2876-4b36-98b0-aa1a4db00d3e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.273 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.314 2 WARNING nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] While synchronizing instance power states, found 4 instances in the database and 1 instances on the hypervisor.
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.314 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.315 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid d0ac94c4-9bbc-443b-bbce-0d447b37153a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.315 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 090243f9-46ac-42a1-b921-fa3b1974f127 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.316 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid c5c5e7c6-36ba-4cdd-9ad5-03996c419556 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.317 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.318 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.318 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.319 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.319 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.334 2 DEBUG nova.network.neutron [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Successfully updated port: 94e66298-1c10-486d-9cd5-fd680cfa037c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.368 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.370 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "refresh_cache-090243f9-46ac-42a1-b921-fa3b1974f127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.370 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquired lock "refresh_cache-090243f9-46ac-42a1-b921-fa3b1974f127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.371 2 DEBUG nova.network.neutron [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.402 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Successfully created port: eb770b47-0e30-41bb-8d08-e52bc86b8fb7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:53:47 compute-0 ceph-mon[74313]: pgmap v1480: 321 pgs: 321 active+clean; 167 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.7 KiB/s wr, 80 op/s
Oct 11 08:53:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/92852459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.703 2 DEBUG nova.network.neutron [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:47 compute-0 nova_compute[260935]: 2025-10-11 08:53:47.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 227 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.4 MiB/s wr, 279 op/s
Oct 11 08:53:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:53:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Oct 11 08:53:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Oct 11 08:53:48 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.037 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Successfully updated port: 11b55091-2876-4b36-98b0-aa1a4db00d3e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.051 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "refresh_cache-d0ac94c4-9bbc-443b-bbce-0d447b37153a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.051 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquired lock "refresh_cache-d0ac94c4-9bbc-443b-bbce-0d447b37153a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.052 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.166 2 DEBUG nova.compute.manager [req-02986fd4-437b-4afb-8a3f-197e22109c88 req-8ffd67a5-1631-4492-9d4a-bcbd91aadd44 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received event network-changed-94e66298-1c10-486d-9cd5-fd680cfa037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.167 2 DEBUG nova.compute.manager [req-02986fd4-437b-4afb-8a3f-197e22109c88 req-8ffd67a5-1631-4492-9d4a-bcbd91aadd44 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Refreshing instance network info cache due to event network-changed-94e66298-1c10-486d-9cd5-fd680cfa037c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.167 2 DEBUG oslo_concurrency.lockutils [req-02986fd4-437b-4afb-8a3f-197e22109c88 req-8ffd67a5-1631-4492-9d4a-bcbd91aadd44 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-090243f9-46ac-42a1-b921-fa3b1974f127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.232 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:53:49 compute-0 ceph-mon[74313]: pgmap v1481: 321 pgs: 321 active+clean; 227 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.4 MiB/s wr, 279 op/s
Oct 11 08:53:49 compute-0 ceph-mon[74313]: osdmap e205: 3 total, 3 up, 3 in
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.390 2 DEBUG nova.network.neutron [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Updating instance_info_cache with network_info: [{"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.423 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Releasing lock "refresh_cache-090243f9-46ac-42a1-b921-fa3b1974f127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.424 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Instance network_info: |[{"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.424 2 DEBUG oslo_concurrency.lockutils [req-02986fd4-437b-4afb-8a3f-197e22109c88 req-8ffd67a5-1631-4492-9d4a-bcbd91aadd44 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-090243f9-46ac-42a1-b921-fa3b1974f127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.425 2 DEBUG nova.network.neutron [req-02986fd4-437b-4afb-8a3f-197e22109c88 req-8ffd67a5-1631-4492-9d4a-bcbd91aadd44 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Refreshing network info cache for port 94e66298-1c10-486d-9cd5-fd680cfa037c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.430 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Start _get_guest_xml network_info=[{"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.437 2 WARNING nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.445 2 DEBUG nova.virt.libvirt.host [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.446 2 DEBUG nova.virt.libvirt.host [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.462 2 DEBUG nova.virt.libvirt.host [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.463 2 DEBUG nova.virt.libvirt.host [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.464 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.464 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.465 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.465 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.466 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.466 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.467 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.467 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.468 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.468 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.469 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.469 2 DEBUG nova.virt.hardware [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.474 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:49 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1759633833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 227 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 8.0 MiB/s wr, 301 op/s
Oct 11 08:53:49 compute-0 nova_compute[260935]: 2025-10-11 08:53:49.967 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.029 2 DEBUG nova.storage.rbd_utils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 090243f9-46ac-42a1-b921-fa3b1974f127_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.037 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.100 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Successfully updated port: eb770b47-0e30-41bb-8d08-e52bc86b8fb7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.133 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "refresh_cache-c5c5e7c6-36ba-4cdd-9ad5-03996c419556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.134 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquired lock "refresh_cache-c5c5e7c6-36ba-4cdd-9ad5-03996c419556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.134 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:53:50 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1759633833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.342 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:53:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1169621818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.581 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.584 2 DEBUG nova.virt.libvirt.vif [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-416013570',display_name='tempest-ServerDiskConfigTestJSON-server-416013570',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-416013570',id=49,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-2vpa8irs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:43Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=090243f9-46ac-42a1-b921-fa3b1974f127,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.584 2 DEBUG nova.network.os_vif_util [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.586 2 DEBUG nova.network.os_vif_util [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=94e66298-1c10-486d-9cd5-fd680cfa037c,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94e66298-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.588 2 DEBUG nova.objects.instance [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 090243f9-46ac-42a1-b921-fa3b1974f127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.603 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:53:50 compute-0 nova_compute[260935]:   <uuid>090243f9-46ac-42a1-b921-fa3b1974f127</uuid>
Oct 11 08:53:50 compute-0 nova_compute[260935]:   <name>instance-00000031</name>
Oct 11 08:53:50 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:53:50 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:53:50 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-416013570</nova:name>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:53:49</nova:creationTime>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:53:50 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:53:50 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:53:50 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:53:50 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:53:50 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:53:50 compute-0 nova_compute[260935]:         <nova:user uuid="2a171de1f79843e0b048393cabfee77d">tempest-ServerDiskConfigTestJSON-387886039-project-member</nova:user>
Oct 11 08:53:50 compute-0 nova_compute[260935]:         <nova:project uuid="9af27fad6b5a4783b66213343f27f0a1">tempest-ServerDiskConfigTestJSON-387886039</nova:project>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:53:50 compute-0 nova_compute[260935]:         <nova:port uuid="94e66298-1c10-486d-9cd5-fd680cfa037c">
Oct 11 08:53:50 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:53:50 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:53:50 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <system>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <entry name="serial">090243f9-46ac-42a1-b921-fa3b1974f127</entry>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <entry name="uuid">090243f9-46ac-42a1-b921-fa3b1974f127</entry>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     </system>
Oct 11 08:53:50 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:53:50 compute-0 nova_compute[260935]:   <os>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:   </os>
Oct 11 08:53:50 compute-0 nova_compute[260935]:   <features>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:   </features>
Oct 11 08:53:50 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:53:50 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:53:50 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/090243f9-46ac-42a1-b921-fa3b1974f127_disk">
Oct 11 08:53:50 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:50 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/090243f9-46ac-42a1-b921-fa3b1974f127_disk.config">
Oct 11 08:53:50 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:50 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:a6:01:8a"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <target dev="tap94e66298-1c"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127/console.log" append="off"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <video>
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     </video>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:53:50 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:53:50 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:53:50 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:53:50 compute-0 nova_compute[260935]: </domain>
Oct 11 08:53:50 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.606 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Preparing to wait for external event network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.607 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.607 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.608 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.610 2 DEBUG nova.virt.libvirt.vif [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-416013570',display_name='tempest-ServerDiskConfigTestJSON-server-416013570',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-416013570',id=49,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-2vpa8irs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:43Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=090243f9-46ac-42a1-b921-fa3b1974f127,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.611 2 DEBUG nova.network.os_vif_util [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.612 2 DEBUG nova.network.os_vif_util [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=94e66298-1c10-486d-9cd5-fd680cfa037c,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94e66298-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.613 2 DEBUG os_vif [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=94e66298-1c10-486d-9cd5-fd680cfa037c,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94e66298-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.617 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.622 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94e66298-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.623 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap94e66298-1c, col_values=(('external_ids', {'iface-id': '94e66298-1c10-486d-9cd5-fd680cfa037c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:01:8a', 'vm-uuid': '090243f9-46ac-42a1-b921-fa3b1974f127'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:50 compute-0 NetworkManager[44960]: <info>  [1760172830.6275] manager: (tap94e66298-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.636 2 INFO os_vif [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=94e66298-1c10-486d-9cd5-fd680cfa037c,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94e66298-1c')
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.686 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.686 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.686 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] No VIF found with MAC fa:16:3e:a6:01:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.687 2 INFO nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Using config drive
Oct 11 08:53:50 compute-0 nova_compute[260935]: 2025-10-11 08:53:50.708 2 DEBUG nova.storage.rbd_utils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 090243f9-46ac-42a1-b921-fa3b1974f127_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.060 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Updating instance_info_cache with network_info: [{"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.122 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Releasing lock "refresh_cache-d0ac94c4-9bbc-443b-bbce-0d447b37153a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.123 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Instance network_info: |[{"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.127 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Start _get_guest_xml network_info=[{"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.135 2 WARNING nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.147 2 DEBUG nova.virt.libvirt.host [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.148 2 DEBUG nova.virt.libvirt.host [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.152 2 DEBUG nova.virt.libvirt.host [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.153 2 DEBUG nova.virt.libvirt.host [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.154 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.154 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.155 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.156 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.157 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.157 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.157 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.158 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.158 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.159 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.159 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.160 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.164 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.222 2 INFO nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Creating config drive at /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127/disk.config
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.230 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk0_ms0y_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.296 2 DEBUG nova.compute.manager [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received event network-changed-11b55091-2876-4b36-98b0-aa1a4db00d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.297 2 DEBUG nova.compute.manager [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Refreshing instance network info cache due to event network-changed-11b55091-2876-4b36-98b0-aa1a4db00d3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.297 2 DEBUG oslo_concurrency.lockutils [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d0ac94c4-9bbc-443b-bbce-0d447b37153a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.298 2 DEBUG oslo_concurrency.lockutils [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d0ac94c4-9bbc-443b-bbce-0d447b37153a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.298 2 DEBUG nova.network.neutron [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Refreshing network info cache for port 11b55091-2876-4b36-98b0-aa1a4db00d3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:53:51 compute-0 ceph-mon[74313]: pgmap v1483: 321 pgs: 321 active+clean; 227 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 8.0 MiB/s wr, 301 op/s
Oct 11 08:53:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1169621818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.395 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk0_ms0y_" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.431 2 DEBUG nova.storage.rbd_utils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] rbd image 090243f9-46ac-42a1-b921-fa3b1974f127_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.436 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127/disk.config 090243f9-46ac-42a1-b921-fa3b1974f127_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/194243873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.660 2 DEBUG oslo_concurrency.processutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127/disk.config 090243f9-46ac-42a1-b921-fa3b1974f127_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.661 2 INFO nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Deleting local config drive /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127/disk.config because it was imported into RBD.
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.664 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.708 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.714 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:51 compute-0 kernel: tap94e66298-1c: entered promiscuous mode
Oct 11 08:53:51 compute-0 NetworkManager[44960]: <info>  [1760172831.7559] manager: (tap94e66298-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/198)
Oct 11 08:53:51 compute-0 ovn_controller[152945]: 2025-10-11T08:53:51Z|00413|binding|INFO|Claiming lport 94e66298-1c10-486d-9cd5-fd680cfa037c for this chassis.
Oct 11 08:53:51 compute-0 ovn_controller[152945]: 2025-10-11T08:53:51Z|00414|binding|INFO|94e66298-1c10-486d-9cd5-fd680cfa037c: Claiming fa:16:3e:a6:01:8a 10.100.0.7
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.787 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:01:8a 10.100.0.7'], port_security=['fa:16:3e:a6:01:8a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '090243f9-46ac-42a1-b921-fa3b1974f127', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=94e66298-1c10-486d-9cd5-fd680cfa037c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:53:51 compute-0 ovn_controller[152945]: 2025-10-11T08:53:51Z|00415|binding|INFO|Setting lport 94e66298-1c10-486d-9cd5-fd680cfa037c ovn-installed in OVS
Oct 11 08:53:51 compute-0 ovn_controller[152945]: 2025-10-11T08:53:51Z|00416|binding|INFO|Setting lport 94e66298-1c10-486d-9cd5-fd680cfa037c up in Southbound
Oct 11 08:53:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.789 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 94e66298-1c10-486d-9cd5-fd680cfa037c in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd bound to our chassis
Oct 11 08:53:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.792 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.814 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[895e86dc-0a1c-43b8-ac52-0f4f8d05a8c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.816 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape5d4fc7a-11 in ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:53:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.818 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape5d4fc7a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:53:51 compute-0 systemd-udevd[314810]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:53:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.819 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ea9f5f20-e720-4f71-814a-017066547044]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.820 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e1628853-71c3-4cd3-bfe1-e49392d34f43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:51 compute-0 NetworkManager[44960]: <info>  [1760172831.8351] device (tap94e66298-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:53:51 compute-0 NetworkManager[44960]: <info>  [1760172831.8376] device (tap94e66298-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:53:51 compute-0 systemd-machined[215705]: New machine qemu-55-instance-00000031.
Oct 11 08:53:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.853 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e943ba1b-e8a5-4a75-8584-4b5975144919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:51 compute-0 systemd[1]: Started Virtual Machine qemu-55-instance-00000031.
Oct 11 08:53:51 compute-0 ovn_controller[152945]: 2025-10-11T08:53:51Z|00417|binding|INFO|Releasing lport 4bae176b-fbae-4a70-a041-16a7a5205899 from this chassis (sb_readonly=0)
Oct 11 08:53:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.887 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a6648c49-e8c5-46ee-a1db-d6ca74732c97]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.930 2 DEBUG nova.network.neutron [req-02986fd4-437b-4afb-8a3f-197e22109c88 req-8ffd67a5-1631-4492-9d4a-bcbd91aadd44 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Updated VIF entry in instance network info cache for port 94e66298-1c10-486d-9cd5-fd680cfa037c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.931 2 DEBUG nova.network.neutron [req-02986fd4-437b-4afb-8a3f-197e22109c88 req-8ffd67a5-1631-4492-9d4a-bcbd91aadd44 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Updating instance_info_cache with network_info: [{"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.936 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e8753ce9-9f2b-4aef-abb0-c41f47d9ee9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:51.949 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[85fcd662-3cc7-4af3-82a0-709f849da72a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:51 compute-0 NetworkManager[44960]: <info>  [1760172831.9516] manager: (tape5d4fc7a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/199)
Oct 11 08:53:51 compute-0 systemd-udevd[314814]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:53:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 227 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.4 MiB/s wr, 207 op/s
Oct 11 08:53:51 compute-0 nova_compute[260935]: 2025-10-11 08:53:51.976 2 DEBUG oslo_concurrency.lockutils [req-02986fd4-437b-4afb-8a3f-197e22109c88 req-8ffd67a5-1631-4492-9d4a-bcbd91aadd44 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-090243f9-46ac-42a1-b921-fa3b1974f127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.005 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3e6a8383-8955-4c27-bcee-fcf80f85678f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.009 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6d020d3b-145a-483d-9c08-fb61b5aa1cf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:52 compute-0 NetworkManager[44960]: <info>  [1760172832.0422] device (tape5d4fc7a-10): carrier: link connected
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.050 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4b14827a-7ca0-4040-9672-1de1b78918bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.071 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2b39e0ba-494a-47dd-87f1-0b520dccd406]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467674, 'reachable_time': 39901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314862, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.080 2 DEBUG nova.network.neutron [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Updating instance_info_cache with network_info: [{"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.090 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a9359f-7391-41fa-b16d-a0b40a791c04]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:2033'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467674, 'tstamp': 467674}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314863, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.108 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Releasing lock "refresh_cache-c5c5e7c6-36ba-4cdd-9ad5-03996c419556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.108 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Instance network_info: |[{"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.110 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a582bd-00b2-4ebc-b193-ca3fe27de974]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5d4fc7a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:20:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467674, 'reachable_time': 39901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314864, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.112 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Start _get_guest_xml network_info=[{"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.117 2 WARNING nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.122 2 DEBUG nova.virt.libvirt.host [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.122 2 DEBUG nova.virt.libvirt.host [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.129 2 DEBUG nova.virt.libvirt.host [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.129 2 DEBUG nova.virt.libvirt.host [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.130 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.130 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.131 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.131 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.132 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.132 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.132 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.133 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.133 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.133 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.134 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.134 2 DEBUG nova.virt.hardware [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.138 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.153 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[013318c4-e63b-4e9e-90df-3b35ce3f1aae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.225 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[55267de1-5cd6-4495-8bec-0d91f7f85097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.226 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.227 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.227 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5d4fc7a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:52 compute-0 kernel: tape5d4fc7a-10: entered promiscuous mode
Oct 11 08:53:52 compute-0 NetworkManager[44960]: <info>  [1760172832.2300] manager: (tape5d4fc7a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1495783055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.232 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5d4fc7a-10, col_values=(('external_ids', {'iface-id': '7a0f31c4-9bda-45df-9fec-aacc40fc88c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:52 compute-0 ovn_controller[152945]: 2025-10-11T08:53:52Z|00418|binding|INFO|Releasing lport 7a0f31c4-9bda-45df-9fec-aacc40fc88c1 from this chassis (sb_readonly=0)
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.253 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.254 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[54402c03-c84b-470d-a33d-3e0dd190fb42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.255 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.pid.haproxy
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:53:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:52.257 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'env', 'PROCESS_TAG=haproxy-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.281 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.284 2 DEBUG nova.virt.libvirt.vif [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-904038611',display_name='tempest-MultipleCreateTestJSON-server-904038611-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-904038611-1',id=48,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-gdifyikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON-1825846956-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:44Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=d0ac94c4-9bbc-443b-bbce-0d447b37153a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.285 2 DEBUG nova.network.os_vif_util [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.287 2 DEBUG nova.network.os_vif_util [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:b0:c9,bridge_name='br-int',has_traffic_filtering=True,id=11b55091-2876-4b36-98b0-aa1a4db00d3e,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11b55091-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.289 2 DEBUG nova.objects.instance [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid d0ac94c4-9bbc-443b-bbce-0d447b37153a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.311 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:53:52 compute-0 nova_compute[260935]:   <uuid>d0ac94c4-9bbc-443b-bbce-0d447b37153a</uuid>
Oct 11 08:53:52 compute-0 nova_compute[260935]:   <name>instance-00000030</name>
Oct 11 08:53:52 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:53:52 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:53:52 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <nova:name>tempest-MultipleCreateTestJSON-server-904038611-1</nova:name>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:53:51</nova:creationTime>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:53:52 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:53:52 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:53:52 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:53:52 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:53:52 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:53:52 compute-0 nova_compute[260935]:         <nova:user uuid="624c293d73ca4d14a182fadee17abb16">tempest-MultipleCreateTestJSON-1825846956-project-member</nova:user>
Oct 11 08:53:52 compute-0 nova_compute[260935]:         <nova:project uuid="5fe47dfd30914099a9819413cbab00c6">tempest-MultipleCreateTestJSON-1825846956</nova:project>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:53:52 compute-0 nova_compute[260935]:         <nova:port uuid="11b55091-2876-4b36-98b0-aa1a4db00d3e">
Oct 11 08:53:52 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:53:52 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:53:52 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <system>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <entry name="serial">d0ac94c4-9bbc-443b-bbce-0d447b37153a</entry>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <entry name="uuid">d0ac94c4-9bbc-443b-bbce-0d447b37153a</entry>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     </system>
Oct 11 08:53:52 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:53:52 compute-0 nova_compute[260935]:   <os>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:   </os>
Oct 11 08:53:52 compute-0 nova_compute[260935]:   <features>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:   </features>
Oct 11 08:53:52 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:53:52 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:53:52 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk">
Oct 11 08:53:52 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:52 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk.config">
Oct 11 08:53:52 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:52 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:86:b0:c9"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <target dev="tap11b55091-28"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a/console.log" append="off"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <video>
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     </video>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:53:52 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:53:52 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:53:52 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:53:52 compute-0 nova_compute[260935]: </domain>
Oct 11 08:53:52 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.315 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Preparing to wait for external event network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.316 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.317 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.317 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.318 2 DEBUG nova.virt.libvirt.vif [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-904038611',display_name='tempest-MultipleCreateTestJSON-server-904038611-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-904038611-1',id=48,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-gdifyikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON-1825846956-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:44Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=d0ac94c4-9bbc-443b-bbce-0d447b37153a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.319 2 DEBUG nova.network.os_vif_util [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.320 2 DEBUG nova.network.os_vif_util [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:b0:c9,bridge_name='br-int',has_traffic_filtering=True,id=11b55091-2876-4b36-98b0-aa1a4db00d3e,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11b55091-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.321 2 DEBUG os_vif [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:b0:c9,bridge_name='br-int',has_traffic_filtering=True,id=11b55091-2876-4b36-98b0-aa1a4db00d3e,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11b55091-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.322 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.323 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.329 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11b55091-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.329 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap11b55091-28, col_values=(('external_ids', {'iface-id': '11b55091-2876-4b36-98b0-aa1a4db00d3e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:b0:c9', 'vm-uuid': 'd0ac94c4-9bbc-443b-bbce-0d447b37153a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:52 compute-0 NetworkManager[44960]: <info>  [1760172832.3329] manager: (tap11b55091-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.342 2 INFO os_vif [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:b0:c9,bridge_name='br-int',has_traffic_filtering=True,id=11b55091-2876-4b36-98b0-aa1a4db00d3e,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11b55091-28')
Oct 11 08:53:52 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/194243873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:52 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1495783055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.401 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.402 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.402 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No VIF found with MAC fa:16:3e:86:b0:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.403 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Using config drive
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.425 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/49145194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.646 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.682 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.694 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:52 compute-0 podman[314933]: 2025-10-11 08:53:52.700019238 +0000 UTC m=+0.074234418 container create 81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:53:52 compute-0 systemd[1]: Started libpod-conmon-81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684.scope.
Oct 11 08:53:52 compute-0 podman[314933]: 2025-10-11 08:53:52.662148188 +0000 UTC m=+0.036363408 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff5f9aab65ed5ed74268fc74e80741bd6d56127a04d5f99970f5b4eb6779e28/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:53:52 compute-0 podman[314933]: 2025-10-11 08:53:52.850025295 +0000 UTC m=+0.224240535 container init 81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 08:53:52 compute-0 podman[314933]: 2025-10-11 08:53:52.855475911 +0000 UTC m=+0.229691081 container start 81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.856 2 DEBUG nova.network.neutron [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Updated VIF entry in instance network info cache for port 11b55091-2876-4b36-98b0-aa1a4db00d3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.857 2 DEBUG nova.network.neutron [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Updating instance_info_cache with network_info: [{"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.871 2 DEBUG oslo_concurrency.lockutils [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d0ac94c4-9bbc-443b-bbce-0d447b37153a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.872 2 DEBUG nova.compute.manager [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received event network-changed-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.872 2 DEBUG nova.compute.manager [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Refreshing instance network info cache due to event network-changed-eb770b47-0e30-41bb-8d08-e52bc86b8fb7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.872 2 DEBUG oslo_concurrency.lockutils [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-c5c5e7c6-36ba-4cdd-9ad5-03996c419556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.872 2 DEBUG oslo_concurrency.lockutils [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-c5c5e7c6-36ba-4cdd-9ad5-03996c419556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:53:52 compute-0 nova_compute[260935]: 2025-10-11 08:53:52.872 2 DEBUG nova.network.neutron [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Refreshing network info cache for port eb770b47-0e30-41bb-8d08-e52bc86b8fb7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:53:52 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[314970]: [NOTICE]   (314974) : New worker (314978) forked
Oct 11 08:53:52 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[314970]: [NOTICE]   (314974) : Loading success.
Oct 11 08:53:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:53:53 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3245159004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.183 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.185 2 DEBUG nova.virt.libvirt.vif [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-904038611',display_name='tempest-MultipleCreateTestJSON-server-904038611-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-904038611-2',id=50,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-gdifyikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON
-1825846956-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:45Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=c5c5e7c6-36ba-4cdd-9ad5-03996c419556,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.186 2 DEBUG nova.network.os_vif_util [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.188 2 DEBUG nova.network.os_vif_util [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:96:8d,bridge_name='br-int',has_traffic_filtering=True,id=eb770b47-0e30-41bb-8d08-e52bc86b8fb7,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb770b47-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.190 2 DEBUG nova.objects.instance [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5c5e7c6-36ba-4cdd-9ad5-03996c419556 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.356 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Creating config drive at /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a/disk.config
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.360 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1f3cibw9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:53 compute-0 ceph-mon[74313]: pgmap v1484: 321 pgs: 321 active+clean; 227 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.4 MiB/s wr, 207 op/s
Oct 11 08:53:53 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/49145194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:53 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3245159004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.422 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:53:53 compute-0 nova_compute[260935]:   <uuid>c5c5e7c6-36ba-4cdd-9ad5-03996c419556</uuid>
Oct 11 08:53:53 compute-0 nova_compute[260935]:   <name>instance-00000032</name>
Oct 11 08:53:53 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:53:53 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:53:53 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <nova:name>tempest-MultipleCreateTestJSON-server-904038611-2</nova:name>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:53:52</nova:creationTime>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:53:53 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:53:53 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:53:53 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:53:53 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:53:53 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:53:53 compute-0 nova_compute[260935]:         <nova:user uuid="624c293d73ca4d14a182fadee17abb16">tempest-MultipleCreateTestJSON-1825846956-project-member</nova:user>
Oct 11 08:53:53 compute-0 nova_compute[260935]:         <nova:project uuid="5fe47dfd30914099a9819413cbab00c6">tempest-MultipleCreateTestJSON-1825846956</nova:project>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:53:53 compute-0 nova_compute[260935]:         <nova:port uuid="eb770b47-0e30-41bb-8d08-e52bc86b8fb7">
Oct 11 08:53:53 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:53:53 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:53:53 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <system>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <entry name="serial">c5c5e7c6-36ba-4cdd-9ad5-03996c419556</entry>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <entry name="uuid">c5c5e7c6-36ba-4cdd-9ad5-03996c419556</entry>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     </system>
Oct 11 08:53:53 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:53:53 compute-0 nova_compute[260935]:   <os>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:   </os>
Oct 11 08:53:53 compute-0 nova_compute[260935]:   <features>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:   </features>
Oct 11 08:53:53 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:53:53 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:53:53 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk">
Oct 11 08:53:53 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:53 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk.config">
Oct 11 08:53:53 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       </source>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:53:53 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:c2:96:8d"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <target dev="tapeb770b47-0e"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556/console.log" append="off"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <video>
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     </video>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:53:53 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:53:53 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:53:53 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:53:53 compute-0 nova_compute[260935]: </domain>
Oct 11 08:53:53 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.424 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Preparing to wait for external event network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.424 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.425 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.425 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.426 2 DEBUG nova.virt.libvirt.vif [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-904038611',display_name='tempest-MultipleCreateTestJSON-server-904038611-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-904038611-2',id=50,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-gdifyikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON-1825846956-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:53:45Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=c5c5e7c6-36ba-4cdd-9ad5-03996c419556,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.427 2 DEBUG nova.network.os_vif_util [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.428 2 DEBUG nova.network.os_vif_util [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:96:8d,bridge_name='br-int',has_traffic_filtering=True,id=eb770b47-0e30-41bb-8d08-e52bc86b8fb7,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb770b47-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.428 2 DEBUG os_vif [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:96:8d,bridge_name='br-int',has_traffic_filtering=True,id=eb770b47-0e30-41bb-8d08-e52bc86b8fb7,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb770b47-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.430 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb770b47-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeb770b47-0e, col_values=(('external_ids', {'iface-id': 'eb770b47-0e30-41bb-8d08-e52bc86b8fb7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:96:8d', 'vm-uuid': 'c5c5e7c6-36ba-4cdd-9ad5-03996c419556'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:53 compute-0 NetworkManager[44960]: <info>  [1760172833.4373] manager: (tapeb770b47-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.448 2 INFO os_vif [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:96:8d,bridge_name='br-int',has_traffic_filtering=True,id=eb770b47-0e30-41bb-8d08-e52bc86b8fb7,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb770b47-0e')
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.507 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.508 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.508 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] No VIF found with MAC fa:16:3e:c2:96:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.509 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Using config drive
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.535 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.542 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1f3cibw9" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.576 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.581 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a/disk.config d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.776 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a/disk.config d0ac94c4-9bbc-443b-bbce-0d447b37153a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.777 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Deleting local config drive /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a/disk.config because it was imported into RBD.
Oct 11 08:53:53 compute-0 systemd-udevd[314847]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:53:53 compute-0 NetworkManager[44960]: <info>  [1760172833.8369] manager: (tap11b55091-28): new Tun device (/org/freedesktop/NetworkManager/Devices/203)
Oct 11 08:53:53 compute-0 kernel: tap11b55091-28: entered promiscuous mode
Oct 11 08:53:53 compute-0 ovn_controller[152945]: 2025-10-11T08:53:53Z|00419|binding|INFO|Claiming lport 11b55091-2876-4b36-98b0-aa1a4db00d3e for this chassis.
Oct 11 08:53:53 compute-0 ovn_controller[152945]: 2025-10-11T08:53:53Z|00420|binding|INFO|11b55091-2876-4b36-98b0-aa1a4db00d3e: Claiming fa:16:3e:86:b0:c9 10.100.0.11
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:53 compute-0 NetworkManager[44960]: <info>  [1760172833.8601] device (tap11b55091-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:53:53 compute-0 NetworkManager[44960]: <info>  [1760172833.8619] device (tap11b55091-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:53:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.875 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:b0:c9 10.100.0.11'], port_security=['fa:16:3e:86:b0:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd0ac94c4-9bbc-443b-bbce-0d447b37153a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5fe47dfd30914099a9819413cbab00c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '76121093-a7de-4040-aef4-22a2c06e5eea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c1ab924-8567-41be-9107-10c6210b8f10, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=11b55091-2876-4b36-98b0-aa1a4db00d3e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:53:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.877 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 11b55091-2876-4b36-98b0-aa1a4db00d3e in datapath aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f bound to our chassis
Oct 11 08:53:53 compute-0 systemd-machined[215705]: New machine qemu-56-instance-00000030.
Oct 11 08:53:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.880 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f
Oct 11 08:53:53 compute-0 systemd[1]: Started Virtual Machine qemu-56-instance-00000030.
Oct 11 08:53:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.899 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5afcc04c-df00-461d-9143-32c52048a497]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.901 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaac3fe7a-b1 in ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:53:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.903 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaac3fe7a-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:53:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.903 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9d020a89-5e69-4df1-a6cd-bfebd8c4e774]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.904 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[918fcd57-a957-48fc-aa99-590a212852ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.928 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[53fd9b1c-c74a-4c51-8263-25a6357d93e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 227 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.4 MiB/s wr, 225 op/s
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:53 compute-0 ovn_controller[152945]: 2025-10-11T08:53:53Z|00421|binding|INFO|Setting lport 11b55091-2876-4b36-98b0-aa1a4db00d3e ovn-installed in OVS
Oct 11 08:53:53 compute-0 ovn_controller[152945]: 2025-10-11T08:53:53Z|00422|binding|INFO|Setting lport 11b55091-2876-4b36-98b0-aa1a4db00d3e up in Southbound
Oct 11 08:53:53 compute-0 nova_compute[260935]: 2025-10-11 08:53:53.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:53.965 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c24a03-cdf0-4fe6-b0c5-9369ae568886]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.011 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a49950e1-7ba9-4e60-93b4-a8d67da6d1de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:54 compute-0 NetworkManager[44960]: <info>  [1760172834.0206] manager: (tapaac3fe7a-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/204)
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.019 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[73d49e45-9281-442d-9e7d-97310af20357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.022 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172834.0202982, 090243f9-46ac-42a1-b921-fa3b1974f127 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.022 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] VM Started (Lifecycle Event)
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.044 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.049 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172834.0204086, 090243f9-46ac-42a1-b921-fa3b1974f127 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.049 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] VM Paused (Lifecycle Event)
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.068 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2eabc20a-8288-46d8-8284-c6cf366b732c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.069 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.072 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7f6fdeb1-f2c5-461d-9196-103b790e926a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.074 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:54 compute-0 NetworkManager[44960]: <info>  [1760172834.1095] device (tapaac3fe7a-b0): carrier: link connected
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.116 2 DEBUG nova.compute.manager [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received event network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.116 2 DEBUG oslo_concurrency.lockutils [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.117 2 DEBUG oslo_concurrency.lockutils [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.117 2 DEBUG oslo_concurrency.lockutils [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.117 2 DEBUG nova.compute.manager [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Processing event network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.118 2 DEBUG nova.compute.manager [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received event network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.118 2 DEBUG oslo_concurrency.lockutils [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.118 2 DEBUG oslo_concurrency.lockutils [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.119 2 DEBUG oslo_concurrency.lockutils [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.119 2 DEBUG nova.compute.manager [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] No waiting events found dispatching network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.119 2 WARNING nova.compute.manager [req-d1031697-3fb7-4718-87f2-2fb1eac0763c req-58ca6bc9-4a0c-4688-bf6f-ebd32311e5cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received unexpected event network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c for instance with vm_state building and task_state spawning.
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.121 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.121 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.125 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172834.1249022, 090243f9-46ac-42a1-b921-fa3b1974f127 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.125 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] VM Resumed (Lifecycle Event)
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.127 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[961ccd76-e2d4-4dc9-bcbf-fc198f4be98f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.128 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.136 2 INFO nova.virt.libvirt.driver [-] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Instance spawned successfully.
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.137 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.146 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.150 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.159 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b266f02a-c66c-46d2-b9d4-4486669c7572]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaac3fe7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:0a:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467881, 'reachable_time': 38741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315145, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.169 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.170 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.170 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.171 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.172 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.172 2 DEBUG nova.virt.libvirt.driver [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.181 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Creating config drive at /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556/disk.config
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.187 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpew9l23hy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.196 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5786d7-95e4-427c-8d95-7e1be5210886]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:a6b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467881, 'tstamp': 467881}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315146, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.220 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ab5cb7c2-2f07-4aea-a87b-ca1766043ef0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaac3fe7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:0a:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467881, 'reachable_time': 38741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315148, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.232 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.262 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3c02c45f-0c61-4b78-8f28-37fc08d7f790]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.282 2 INFO nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Took 10.64 seconds to spawn the instance on the hypervisor.
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.282 2 DEBUG nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.326 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172819.3255072, 92be5b35-6b7a-4f95-924d-008348f27b42 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.326 2 INFO nova.compute.manager [-] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] VM Stopped (Lifecycle Event)
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.341 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpew9l23hy" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.349 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f49fe4c1-871e-441a-9a1f-c94f078192ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.351 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaac3fe7a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.351 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.352 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaac3fe7a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:54 compute-0 NetworkManager[44960]: <info>  [1760172834.3550] manager: (tapaac3fe7a-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Oct 11 08:53:54 compute-0 kernel: tapaac3fe7a-b0: entered promiscuous mode
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.362 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaac3fe7a-b0, col_values=(('external_ids', {'iface-id': 'debf3d0c-b4f8-4ab8-9507-a0acd9b0ee66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:54 compute-0 ovn_controller[152945]: 2025-10-11T08:53:54Z|00423|binding|INFO|Releasing lport debf3d0c-b4f8-4ab8-9507-a0acd9b0ee66 from this chassis (sb_readonly=0)
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.394 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.395 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[192bf1be-d0e8-4078-92ca-80ddd922973a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.396 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f.pid.haproxy
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.397 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'env', 'PROCESS_TAG=haproxy-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.404 2 DEBUG nova.storage.rbd_utils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] rbd image c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.424 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556/disk.config c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.483 2 DEBUG nova.compute.manager [req-2f4a8150-7eae-4bc1-9e8d-03c4f5154652 req-9f9296de-edd5-4fd6-8327-87666a070852 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received event network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.484 2 DEBUG oslo_concurrency.lockutils [req-2f4a8150-7eae-4bc1-9e8d-03c4f5154652 req-9f9296de-edd5-4fd6-8327-87666a070852 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.484 2 DEBUG oslo_concurrency.lockutils [req-2f4a8150-7eae-4bc1-9e8d-03c4f5154652 req-9f9296de-edd5-4fd6-8327-87666a070852 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.485 2 DEBUG oslo_concurrency.lockutils [req-2f4a8150-7eae-4bc1-9e8d-03c4f5154652 req-9f9296de-edd5-4fd6-8327-87666a070852 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.485 2 DEBUG nova.compute.manager [req-2f4a8150-7eae-4bc1-9e8d-03c4f5154652 req-9f9296de-edd5-4fd6-8327-87666a070852 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Processing event network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.486 2 DEBUG nova.compute.manager [None req-147356cb-8766-4ad3-8d9b-86134d03b166 - - - - - -] [instance: 92be5b35-6b7a-4f95-924d-008348f27b42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.502 2 INFO nova.compute.manager [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Took 11.93 seconds to build instance.
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.534 2 DEBUG oslo_concurrency.lockutils [None req-6a41ddc2-a2f2-442c-bdef-bced32c62be8 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.535 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "090243f9-46ac-42a1-b921-fa3b1974f127" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 7.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.535 2 INFO nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.536 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "090243f9-46ac-42a1-b921-fa3b1974f127" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.602 2 DEBUG oslo_concurrency.processutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556/disk.config c5c5e7c6-36ba-4cdd-9ad5-03996c419556_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.602 2 INFO nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Deleting local config drive /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556/disk.config because it was imported into RBD.
Oct 11 08:53:54 compute-0 kernel: tapeb770b47-0e: entered promiscuous mode
Oct 11 08:53:54 compute-0 NetworkManager[44960]: <info>  [1760172834.6571] manager: (tapeb770b47-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/206)
Oct 11 08:53:54 compute-0 ovn_controller[152945]: 2025-10-11T08:53:54Z|00424|binding|INFO|Claiming lport eb770b47-0e30-41bb-8d08-e52bc86b8fb7 for this chassis.
Oct 11 08:53:54 compute-0 ovn_controller[152945]: 2025-10-11T08:53:54Z|00425|binding|INFO|eb770b47-0e30-41bb-8d08-e52bc86b8fb7: Claiming fa:16:3e:c2:96:8d 10.100.0.9
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:54.669 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:96:8d 10.100.0.9'], port_security=['fa:16:3e:c2:96:8d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c5c5e7c6-36ba-4cdd-9ad5-03996c419556', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5fe47dfd30914099a9819413cbab00c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '76121093-a7de-4040-aef4-22a2c06e5eea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c1ab924-8567-41be-9107-10c6210b8f10, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=eb770b47-0e30-41bb-8d08-e52bc86b8fb7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:53:54 compute-0 NetworkManager[44960]: <info>  [1760172834.6706] device (tapeb770b47-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:53:54 compute-0 NetworkManager[44960]: <info>  [1760172834.6713] device (tapeb770b47-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:53:54 compute-0 ovn_controller[152945]: 2025-10-11T08:53:54Z|00426|binding|INFO|Setting lport eb770b47-0e30-41bb-8d08-e52bc86b8fb7 ovn-installed in OVS
Oct 11 08:53:54 compute-0 ovn_controller[152945]: 2025-10-11T08:53:54Z|00427|binding|INFO|Setting lport eb770b47-0e30-41bb-8d08-e52bc86b8fb7 up in Southbound
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:54 compute-0 nova_compute[260935]: 2025-10-11 08:53:54.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:54 compute-0 systemd-machined[215705]: New machine qemu-57-instance-00000032.
Oct 11 08:53:54 compute-0 systemd[1]: Started Virtual Machine qemu-57-instance-00000032.
Oct 11 08:53:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:53:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:53:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:53:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:53:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:53:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:53:54 compute-0 podman[315236]: 2025-10-11 08:53:54.826952216 +0000 UTC m=+0.059433586 container create 6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 08:53:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:53:54
Oct 11 08:53:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:53:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:53:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'backups', 'images', 'default.rgw.control', 'volumes']
Oct 11 08:53:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:53:54 compute-0 systemd[1]: Started libpod-conmon-6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd.scope.
Oct 11 08:53:54 compute-0 podman[315236]: 2025-10-11 08:53:54.791868015 +0000 UTC m=+0.024349405 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:53:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/696ca5a58f39daa91dbd8051307c63172a5233db2990bf8722059ac5f83a523e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:53:54 compute-0 podman[315236]: 2025-10-11 08:53:54.938518767 +0000 UTC m=+0.171000167 container init 6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 08:53:54 compute-0 podman[315236]: 2025-10-11 08:53:54.946317029 +0000 UTC m=+0.178798409 container start 6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 08:53:54 compute-0 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[315254]: [NOTICE]   (315258) : New worker (315260) forked
Oct 11 08:53:54 compute-0 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[315254]: [NOTICE]   (315258) : Loading success.
Oct 11 08:53:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.015 162815 INFO neutron.agent.ovn.metadata.agent [-] Port eb770b47-0e30-41bb-8d08-e52bc86b8fb7 in datapath aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f unbound from our chassis
Oct 11 08:53:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.018 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f
Oct 11 08:53:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.045 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f98f0c-351c-45e6-974b-020c1d086453]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.099 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d4b17342-a30a-4ed0-abea-b1a48dac7f7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.100 2 DEBUG nova.network.neutron [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Updated VIF entry in instance network info cache for port eb770b47-0e30-41bb-8d08-e52bc86b8fb7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.101 2 DEBUG nova.network.neutron [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Updating instance_info_cache with network_info: [{"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:53:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.103 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7e078c-2982-421c-95f1-0f61470e5a09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.117 2 DEBUG oslo_concurrency.lockutils [req-f165fd72-225b-42bb-86e6-4bafc9fabcca req-b7518090-f07b-4f2c-a6a8-7914104d447b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-c5c5e7c6-36ba-4cdd-9ad5-03996c419556" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:53:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.163 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ca0248-10cc-446d-89a9-47e937ce4be7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.186 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8733bf2c-8fdd-4331-8256-1b751128cde4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaac3fe7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:0a:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 306, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 306, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467881, 'reachable_time': 38741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315352, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.210 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8129e78f-affc-4cf1-bece-57a0c29390d3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaac3fe7a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467899, 'tstamp': 467899}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315357, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaac3fe7a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467904, 'tstamp': 467904}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315357, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:53:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.213 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaac3fe7a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.253 2 DEBUG nova.compute.manager [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.258 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaac3fe7a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.259 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.260 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaac3fe7a-b0, col_values=(('external_ids', {'iface-id': 'debf3d0c-b4f8-4ab8-9507-a0acd9b0ee66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:53:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:53:55.260 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.340 2 INFO nova.compute.manager [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] instance snapshotting
Oct 11 08:53:55 compute-0 ceph-mon[74313]: pgmap v1485: 321 pgs: 321 active+clean; 227 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.4 MiB/s wr, 225 op/s
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.607 2 INFO nova.virt.libvirt.driver [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Beginning live snapshot process
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.728 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172835.7273057, c5c5e7c6-36ba-4cdd-9ad5-03996c419556 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.728 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] VM Started (Lifecycle Event)
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.822 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.824 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.829 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.835 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172835.7276287, c5c5e7c6-36ba-4cdd-9ad5-03996c419556 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.836 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] VM Paused (Lifecycle Event)
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.839 2 INFO nova.virt.libvirt.driver [-] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Instance spawned successfully.
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.840 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.855 2 DEBUG nova.virt.libvirt.imagebackend [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.866 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.873 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.873 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.874 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.875 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.876 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.877 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.883 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.926 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.927 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172835.7832527, d0ac94c4-9bbc-443b-bbce-0d447b37153a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.927 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] VM Started (Lifecycle Event)
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.941 2 INFO nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Took 11.00 seconds to spawn the instance on the hypervisor.
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.942 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.951 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.954 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 227 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.4 MiB/s wr, 225 op/s
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.996 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.997 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172835.7833843, d0ac94c4-9bbc-443b-bbce-0d447b37153a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:55 compute-0 nova_compute[260935]: 2025-10-11 08:53:55.998 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] VM Paused (Lifecycle Event)
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.018 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.021 2 INFO nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Took 13.44 seconds to build instance.
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.027 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172835.8278902, d0ac94c4-9bbc-443b-bbce-0d447b37153a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.028 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] VM Resumed (Lifecycle Event)
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.038 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.039 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 8.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.040 2 INFO nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.040 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.044 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.049 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.132 2 DEBUG nova.storage.rbd_utils [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] creating snapshot(3d25b77f5fcc48438dbbb76e7eed7002) on rbd image(8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:53:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Oct 11 08:53:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Oct 11 08:53:56 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.461 2 DEBUG nova.storage.rbd_utils [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] cloning vms/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk@3d25b77f5fcc48438dbbb76e7eed7002 to images/309541da-fb02-457f-9678-947ffe38cc30 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.501 2 DEBUG nova.compute.manager [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received event network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.502 2 DEBUG oslo_concurrency.lockutils [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.504 2 DEBUG oslo_concurrency.lockutils [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.505 2 DEBUG oslo_concurrency.lockutils [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.505 2 DEBUG nova.compute.manager [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Processing event network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.506 2 DEBUG nova.compute.manager [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received event network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.507 2 DEBUG oslo_concurrency.lockutils [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.507 2 DEBUG oslo_concurrency.lockutils [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.508 2 DEBUG oslo_concurrency.lockutils [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.511 2 DEBUG nova.compute.manager [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] No waiting events found dispatching network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.511 2 WARNING nova.compute.manager [req-3c5cb53a-d5e5-4ecf-9054-632a20bd25d9 req-862bfe80-199c-4bfa-a986-dfae860e6d6b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received unexpected event network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 for instance with vm_state building and task_state spawning.
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.512 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.531 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172836.5191183, c5c5e7c6-36ba-4cdd-9ad5-03996c419556 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.546 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] VM Resumed (Lifecycle Event)
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.549 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.554 2 INFO nova.virt.libvirt.driver [-] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Instance spawned successfully.
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.555 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.570 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.576 2 DEBUG nova.compute.manager [req-49f3dbff-50d7-4fc6-ae46-bacdc675b4bc req-bc643af6-86b9-4b65-b859-1a28c5373eed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received event network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.576 2 DEBUG oslo_concurrency.lockutils [req-49f3dbff-50d7-4fc6-ae46-bacdc675b4bc req-bc643af6-86b9-4b65-b859-1a28c5373eed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.577 2 DEBUG oslo_concurrency.lockutils [req-49f3dbff-50d7-4fc6-ae46-bacdc675b4bc req-bc643af6-86b9-4b65-b859-1a28c5373eed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.577 2 DEBUG oslo_concurrency.lockutils [req-49f3dbff-50d7-4fc6-ae46-bacdc675b4bc req-bc643af6-86b9-4b65-b859-1a28c5373eed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.577 2 DEBUG nova.compute.manager [req-49f3dbff-50d7-4fc6-ae46-bacdc675b4bc req-bc643af6-86b9-4b65-b859-1a28c5373eed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] No waiting events found dispatching network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.578 2 WARNING nova.compute.manager [req-49f3dbff-50d7-4fc6-ae46-bacdc675b4bc req-bc643af6-86b9-4b65-b859-1a28c5373eed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received unexpected event network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e for instance with vm_state active and task_state None.
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.589 2 DEBUG nova.storage.rbd_utils [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] flattening images/309541da-fb02-457f-9678-947ffe38cc30 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.651 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.661 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.661 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.662 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.662 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.663 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.663 2 DEBUG nova.virt.libvirt.driver [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.694 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.742 2 INFO nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Took 10.71 seconds to spawn the instance on the hypervisor.
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.745 2 DEBUG nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:56 compute-0 podman[315466]: 2025-10-11 08:53:56.76997306 +0000 UTC m=+0.071457528 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.824 2 DEBUG nova.storage.rbd_utils [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] removing snapshot(3d25b77f5fcc48438dbbb76e7eed7002) on rbd image(8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.835 2 INFO nova.compute.manager [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Took 14.21 seconds to build instance.
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.852 2 DEBUG oslo_concurrency.lockutils [None req-e602c2a2-97c4-477a-85ca-b121395a794c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.853 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 9.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.853 2 INFO nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:53:56 compute-0 nova_compute[260935]: 2025-10-11 08:53:56.853 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:53:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Oct 11 08:53:57 compute-0 ceph-mon[74313]: pgmap v1486: 321 pgs: 321 active+clean; 227 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.4 MiB/s wr, 225 op/s
Oct 11 08:53:57 compute-0 ceph-mon[74313]: osdmap e206: 3 total, 3 up, 3 in
Oct 11 08:53:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Oct 11 08:53:57 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Oct 11 08:53:57 compute-0 nova_compute[260935]: 2025-10-11 08:53:57.448 2 DEBUG nova.storage.rbd_utils [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] creating snapshot(snap) on rbd image(309541da-fb02-457f-9678-947ffe38cc30) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:53:57 compute-0 nova_compute[260935]: 2025-10-11 08:53:57.754 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172822.7521193, 2c551e6f-adba-4963-a583-c5118e2be62a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:53:57 compute-0 nova_compute[260935]: 2025-10-11 08:53:57.754 2 INFO nova.compute.manager [-] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] VM Stopped (Lifecycle Event)
Oct 11 08:53:57 compute-0 nova_compute[260935]: 2025-10-11 08:53:57.798 2 DEBUG nova.compute.manager [None req-e7bbb00d-90c1-4393-8e96-2bb2761cbf37 - - - - - -] [instance: 2c551e6f-adba-4963-a583-c5118e2be62a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:53:57 compute-0 nova_compute[260935]: 2025-10-11 08:53:57.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 4 active+clean+snaptrim, 14 active+clean+snaptrim_wait, 303 active+clean; 278 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 3.3 MiB/s wr, 437 op/s
Oct 11 08:53:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:53:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Oct 11 08:53:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Oct 11 08:53:58 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Oct 11 08:53:58 compute-0 ceph-mon[74313]: osdmap e207: 3 total, 3 up, 3 in
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 309541da-fb02-457f-9678-947ffe38cc30 could not be found.
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 309541da-fb02-457f-9678-947ffe38cc30
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver 
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver 
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 309541da-fb02-457f-9678-947ffe38cc30 could not be found.
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.673 2 ERROR nova.virt.libvirt.driver 
Oct 11 08:53:58 compute-0 nova_compute[260935]: 2025-10-11 08:53:58.734 2 DEBUG nova.storage.rbd_utils [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] removing snapshot(snap) on rbd image(309541da-fb02-457f-9678-947ffe38cc30) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:53:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Oct 11 08:53:59 compute-0 ceph-mon[74313]: pgmap v1489: 321 pgs: 4 active+clean+snaptrim, 14 active+clean+snaptrim_wait, 303 active+clean; 278 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 3.3 MiB/s wr, 437 op/s
Oct 11 08:53:59 compute-0 ceph-mon[74313]: osdmap e208: 3 total, 3 up, 3 in
Oct 11 08:53:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Oct 11 08:53:59 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Oct 11 08:53:59 compute-0 nova_compute[260935]: 2025-10-11 08:53:59.749 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:53:59 compute-0 nova_compute[260935]: 2025-10-11 08:53:59.900 2 WARNING nova.compute.manager [None req-96b0935f-0317-4ff0-b413-63f083993510 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Image not found during snapshot: nova.exception.ImageNotFound: Image 309541da-fb02-457f-9678-947ffe38cc30 could not be found.
Oct 11 08:53:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 4 active+clean+snaptrim, 14 active+clean+snaptrim_wait, 303 active+clean; 278 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 20 MiB/s rd, 6.6 MiB/s wr, 760 op/s
Oct 11 08:53:59 compute-0 ovn_controller[152945]: 2025-10-11T08:53:59Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:d1:ed 10.100.0.8
Oct 11 08:53:59 compute-0 ovn_controller[152945]: 2025-10-11T08:53:59Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:d1:ed 10.100.0.8
Oct 11 08:54:00 compute-0 ceph-mon[74313]: osdmap e209: 3 total, 3 up, 3 in
Oct 11 08:54:01 compute-0 ceph-mon[74313]: pgmap v1492: 321 pgs: 4 active+clean+snaptrim, 14 active+clean+snaptrim_wait, 303 active+clean; 278 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 20 MiB/s rd, 6.6 MiB/s wr, 760 op/s
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.598 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.599 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.599 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.600 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.600 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.602 2 INFO nova.compute.manager [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Terminating instance
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.604 2 DEBUG nova.compute.manager [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:54:01 compute-0 kernel: tap11b55091-28 (unregistering): left promiscuous mode
Oct 11 08:54:01 compute-0 NetworkManager[44960]: <info>  [1760172841.6562] device (tap11b55091-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:54:01 compute-0 ovn_controller[152945]: 2025-10-11T08:54:01Z|00428|binding|INFO|Releasing lport 11b55091-2876-4b36-98b0-aa1a4db00d3e from this chassis (sb_readonly=0)
Oct 11 08:54:01 compute-0 ovn_controller[152945]: 2025-10-11T08:54:01Z|00429|binding|INFO|Setting lport 11b55091-2876-4b36-98b0-aa1a4db00d3e down in Southbound
Oct 11 08:54:01 compute-0 ovn_controller[152945]: 2025-10-11T08:54:01Z|00430|binding|INFO|Removing iface tap11b55091-28 ovn-installed in OVS
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:54:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.708 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:b0:c9 10.100.0.11'], port_security=['fa:16:3e:86:b0:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd0ac94c4-9bbc-443b-bbce-0d447b37153a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5fe47dfd30914099a9819413cbab00c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76121093-a7de-4040-aef4-22a2c06e5eea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c1ab924-8567-41be-9107-10c6210b8f10, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=11b55091-2876-4b36-98b0-aa1a4db00d3e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.710 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 11b55091-2876-4b36-98b0-aa1a4db00d3e in datapath aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f unbound from our chassis
Oct 11 08:54:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.713 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f
Oct 11 08:54:01 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000030.scope: Deactivated successfully.
Oct 11 08:54:01 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000030.scope: Consumed 7.346s CPU time.
Oct 11 08:54:01 compute-0 systemd-machined[215705]: Machine qemu-56-instance-00000030 terminated.
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.755 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7d26c729-50f9-4a84-8f9c-d0517cbbb557]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.800 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3050a010-dc4b-484a-acef-f43aebf0ff10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.804 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb3fe79-f50f-4086-95bb-a3e6ac677c80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.848 2 INFO nova.virt.libvirt.driver [-] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Instance destroyed successfully.
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.849 2 DEBUG nova.objects.instance [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'resources' on Instance uuid d0ac94c4-9bbc-443b-bbce-0d447b37153a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.853 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa7e5ea-19b4-4d30-ac4e-6006bd32ad83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.881 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[28d6aa8a-9de3-4c13-9c83-c37d5ffa3dc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaac3fe7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:0a:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467881, 'reachable_time': 38741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315577, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.908 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1aae5895-e693-470f-8e68-dc2d80dea730]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaac3fe7a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467899, 'tstamp': 467899}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315578, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaac3fe7a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467904, 'tstamp': 467904}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315578, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.910 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaac3fe7a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.920 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaac3fe7a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.920 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.921 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaac3fe7a-b0, col_values=(('external_ids', {'iface-id': 'debf3d0c-b4f8-4ab8-9507-a0acd9b0ee66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:01.922 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 4 active+clean+snaptrim, 14 active+clean+snaptrim_wait, 303 active+clean; 278 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 4.8 MiB/s wr, 547 op/s
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.959 2 DEBUG nova.virt.libvirt.vif [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-904038611',display_name='tempest-MultipleCreateTestJSON-server-904038611-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-904038611-1',id=48,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-gdifyikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON-1825846956-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:53:55Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=d0ac94c4-9bbc-443b-bbce-0d447b37153a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.959 2 DEBUG nova.network.os_vif_util [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "address": "fa:16:3e:86:b0:c9", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11b55091-28", "ovs_interfaceid": "11b55091-2876-4b36-98b0-aa1a4db00d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.961 2 DEBUG nova.network.os_vif_util [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:b0:c9,bridge_name='br-int',has_traffic_filtering=True,id=11b55091-2876-4b36-98b0-aa1a4db00d3e,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11b55091-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.962 2 DEBUG os_vif [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:b0:c9,bridge_name='br-int',has_traffic_filtering=True,id=11b55091-2876-4b36-98b0-aa1a4db00d3e,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11b55091-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.964 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11b55091-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:01 compute-0 nova_compute[260935]: 2025-10-11 08:54:01.970 2 INFO os_vif [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:b0:c9,bridge_name='br-int',has_traffic_filtering=True,id=11b55091-2876-4b36-98b0-aa1a4db00d3e,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11b55091-28')
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.307 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.307 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.308 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.308 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.308 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.310 2 INFO nova.compute.manager [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Terminating instance
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.310 2 DEBUG nova.compute.manager [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.331 2 INFO nova.virt.libvirt.driver [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Deleting instance files /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a_del
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.332 2 INFO nova.virt.libvirt.driver [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Deletion of /var/lib/nova/instances/d0ac94c4-9bbc-443b-bbce-0d447b37153a_del complete
Oct 11 08:54:02 compute-0 kernel: tapeb770b47-0e (unregistering): left promiscuous mode
Oct 11 08:54:02 compute-0 NetworkManager[44960]: <info>  [1760172842.3470] device (tapeb770b47-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:54:02 compute-0 ovn_controller[152945]: 2025-10-11T08:54:02Z|00431|binding|INFO|Releasing lport eb770b47-0e30-41bb-8d08-e52bc86b8fb7 from this chassis (sb_readonly=0)
Oct 11 08:54:02 compute-0 ovn_controller[152945]: 2025-10-11T08:54:02Z|00432|binding|INFO|Setting lport eb770b47-0e30-41bb-8d08-e52bc86b8fb7 down in Southbound
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:02 compute-0 ovn_controller[152945]: 2025-10-11T08:54:02Z|00433|binding|INFO|Removing iface tapeb770b47-0e ovn-installed in OVS
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:02 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000032.scope: Deactivated successfully.
Oct 11 08:54:02 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000032.scope: Consumed 6.631s CPU time.
Oct 11 08:54:02 compute-0 systemd-machined[215705]: Machine qemu-57-instance-00000032 terminated.
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.545 2 INFO nova.virt.libvirt.driver [-] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Instance destroyed successfully.
Oct 11 08:54:02 compute-0 sudo[315602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.546 2 DEBUG nova.objects.instance [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lazy-loading 'resources' on Instance uuid c5c5e7c6-36ba-4cdd-9ad5-03996c419556 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:02 compute-0 sudo[315602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:02 compute-0 sudo[315602]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:02 compute-0 sudo[315638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:54:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:02.621 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:96:8d 10.100.0.9'], port_security=['fa:16:3e:c2:96:8d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c5c5e7c6-36ba-4cdd-9ad5-03996c419556', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5fe47dfd30914099a9819413cbab00c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76121093-a7de-4040-aef4-22a2c06e5eea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c1ab924-8567-41be-9107-10c6210b8f10, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=eb770b47-0e30-41bb-8d08-e52bc86b8fb7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:02.622 162815 INFO neutron.agent.ovn.metadata.agent [-] Port eb770b47-0e30-41bb-8d08-e52bc86b8fb7 in datapath aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f unbound from our chassis
Oct 11 08:54:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:02.624 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:54:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:02.625 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[10e4bbfd-eb39-4e7e-82db-d4cb7796fd90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:02 compute-0 sudo[315638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:02.626 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f namespace which is not needed anymore
Oct 11 08:54:02 compute-0 sudo[315638]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.679 2 DEBUG nova.virt.libvirt.vif [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-904038611',display_name='tempest-MultipleCreateTestJSON-server-904038611-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-904038611-2',id=50,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-11T08:53:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5fe47dfd30914099a9819413cbab00c6',ramdisk_id='',reservation_id='r-gdifyikz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1825846956',owner_user_name='tempest-MultipleCreateTestJSON-1825846956-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:53:56Z,user_data=None,user_id='624c293d73ca4d14a182fadee17abb16',uuid=c5c5e7c6-36ba-4cdd-9ad5-03996c419556,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.681 2 DEBUG nova.network.os_vif_util [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converting VIF {"id": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "address": "fa:16:3e:c2:96:8d", "network": {"id": "aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1676373979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5fe47dfd30914099a9819413cbab00c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb770b47-0e", "ovs_interfaceid": "eb770b47-0e30-41bb-8d08-e52bc86b8fb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.681 2 DEBUG nova.network.os_vif_util [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:96:8d,bridge_name='br-int',has_traffic_filtering=True,id=eb770b47-0e30-41bb-8d08-e52bc86b8fb7,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb770b47-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.682 2 DEBUG os_vif [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:96:8d,bridge_name='br-int',has_traffic_filtering=True,id=eb770b47-0e30-41bb-8d08-e52bc86b8fb7,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb770b47-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb770b47-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.685 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.686 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.686 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.687 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.687 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.688 2 INFO nova.compute.manager [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Terminating instance
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.689 2 DEBUG nova.compute.manager [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.694 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.694 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.695 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.695 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.695 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.696 2 INFO nova.compute.manager [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Terminating instance
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.697 2 DEBUG nova.compute.manager [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.698 2 INFO os_vif [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:96:8d,bridge_name='br-int',has_traffic_filtering=True,id=eb770b47-0e30-41bb-8d08-e52bc86b8fb7,network=Network(aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb770b47-0e')
Oct 11 08:54:02 compute-0 sudo[315664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:54:02 compute-0 sudo[315664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:02 compute-0 sudo[315664]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.731 2 INFO nova.compute.manager [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Took 1.13 seconds to destroy the instance on the hypervisor.
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.732 2 DEBUG oslo.service.loopingcall [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.732 2 DEBUG nova.compute.manager [-] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.732 2 DEBUG nova.network.neutron [-] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:54:02 compute-0 kernel: tap94e66298-1c (unregistering): left promiscuous mode
Oct 11 08:54:02 compute-0 NetworkManager[44960]: <info>  [1760172842.7477] device (tap94e66298-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:54:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:02.755 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:02 compute-0 ovn_controller[152945]: 2025-10-11T08:54:02Z|00434|binding|INFO|Releasing lport 94e66298-1c10-486d-9cd5-fd680cfa037c from this chassis (sb_readonly=1)
Oct 11 08:54:02 compute-0 ovn_controller[152945]: 2025-10-11T08:54:02Z|00435|if_status|INFO|Not setting lport 94e66298-1c10-486d-9cd5-fd680cfa037c down as sb is readonly
Oct 11 08:54:02 compute-0 ovn_controller[152945]: 2025-10-11T08:54:02Z|00436|binding|INFO|Removing iface tap94e66298-1c ovn-installed in OVS
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:02 compute-0 ovn_controller[152945]: 2025-10-11T08:54:02Z|00437|binding|INFO|Setting lport 94e66298-1c10-486d-9cd5-fd680cfa037c down in Southbound
Oct 11 08:54:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:02.770 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:01:8a 10.100.0.7'], port_security=['fa:16:3e:a6:01:8a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '090243f9-46ac-42a1-b921-fa3b1974f127', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9af27fad6b5a4783b66213343f27f0a1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a50a9db8-e2ab-4969-88fc-b4ddbb372174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34768a1-4d4c-416a-8ec1-d8538916a72a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=94e66298-1c10-486d-9cd5-fd680cfa037c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:02 compute-0 kernel: tap5e003588-ef (unregistering): left promiscuous mode
Oct 11 08:54:02 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000031.scope: Deactivated successfully.
Oct 11 08:54:02 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000031.scope: Consumed 10.323s CPU time.
Oct 11 08:54:02 compute-0 NetworkManager[44960]: <info>  [1760172842.8026] device (tap5e003588-ef): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:54:02 compute-0 systemd-machined[215705]: Machine qemu-55-instance-00000031 terminated.
Oct 11 08:54:02 compute-0 sudo[315718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:54:02 compute-0 sudo[315718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:02 compute-0 ovn_controller[152945]: 2025-10-11T08:54:02Z|00438|binding|INFO|Releasing lport 5e003588-ef81-4e87-900f-734b2c4bad32 from this chassis (sb_readonly=0)
Oct 11 08:54:02 compute-0 ovn_controller[152945]: 2025-10-11T08:54:02Z|00439|binding|INFO|Setting lport 5e003588-ef81-4e87-900f-734b2c4bad32 down in Southbound
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:02 compute-0 ovn_controller[152945]: 2025-10-11T08:54:02Z|00440|binding|INFO|Removing iface tap5e003588-ef ovn-installed in OVS
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:02.826 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:d1:ed 10.100.0.8'], port_security=['fa:16:3e:97:d1:ed 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b07fc82-c0d2-4e42-8878-3b0706731285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be4f554b-8352-4ab3-babd-ad834c691fd3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=5e003588-ef81-4e87-900f-734b2c4bad32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:02 compute-0 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[315254]: [NOTICE]   (315258) : haproxy version is 2.8.14-c23fe91
Oct 11 08:54:02 compute-0 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[315254]: [NOTICE]   (315258) : path to executable is /usr/sbin/haproxy
Oct 11 08:54:02 compute-0 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[315254]: [WARNING]  (315258) : Exiting Master process...
Oct 11 08:54:02 compute-0 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[315254]: [ALERT]    (315258) : Current worker (315260) exited with code 143 (Terminated)
Oct 11 08:54:02 compute-0 neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f[315254]: [WARNING]  (315258) : All workers exited. Exiting... (0)
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.832 2 DEBUG nova.compute.manager [req-f07856b1-2a06-4d8b-918a-d27234b3aaaa req-0adf7753-90ef-4308-92a2-33727f4b2d43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received event network-vif-unplugged-11b55091-2876-4b36-98b0-aa1a4db00d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.832 2 DEBUG oslo_concurrency.lockutils [req-f07856b1-2a06-4d8b-918a-d27234b3aaaa req-0adf7753-90ef-4308-92a2-33727f4b2d43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.833 2 DEBUG oslo_concurrency.lockutils [req-f07856b1-2a06-4d8b-918a-d27234b3aaaa req-0adf7753-90ef-4308-92a2-33727f4b2d43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.833 2 DEBUG oslo_concurrency.lockutils [req-f07856b1-2a06-4d8b-918a-d27234b3aaaa req-0adf7753-90ef-4308-92a2-33727f4b2d43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.833 2 DEBUG nova.compute.manager [req-f07856b1-2a06-4d8b-918a-d27234b3aaaa req-0adf7753-90ef-4308-92a2-33727f4b2d43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] No waiting events found dispatching network-vif-unplugged-11b55091-2876-4b36-98b0-aa1a4db00d3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.833 2 DEBUG nova.compute.manager [req-f07856b1-2a06-4d8b-918a-d27234b3aaaa req-0adf7753-90ef-4308-92a2-33727f4b2d43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received event network-vif-unplugged-11b55091-2876-4b36-98b0-aa1a4db00d3e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:54:02 compute-0 systemd[1]: libpod-6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd.scope: Deactivated successfully.
Oct 11 08:54:02 compute-0 podman[315724]: 2025-10-11 08:54:02.845372296 +0000 UTC m=+0.071356786 container died 6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:02 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Oct 11 08:54:02 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000002f.scope: Consumed 13.074s CPU time.
Oct 11 08:54:02 compute-0 systemd-machined[215705]: Machine qemu-54-instance-0000002f terminated.
Oct 11 08:54:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd-userdata-shm.mount: Deactivated successfully.
Oct 11 08:54:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-696ca5a58f39daa91dbd8051307c63172a5233db2990bf8722059ac5f83a523e-merged.mount: Deactivated successfully.
Oct 11 08:54:02 compute-0 podman[315724]: 2025-10-11 08:54:02.893998773 +0000 UTC m=+0.119983263 container cleanup 6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 08:54:02 compute-0 systemd[1]: libpod-conmon-6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd.scope: Deactivated successfully.
Oct 11 08:54:02 compute-0 NetworkManager[44960]: <info>  [1760172842.9207] manager: (tap94e66298-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/207)
Oct 11 08:54:02 compute-0 NetworkManager[44960]: <info>  [1760172842.9521] manager: (tap5e003588-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.959 2 INFO nova.virt.libvirt.driver [-] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Instance destroyed successfully.
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.960 2 DEBUG nova.objects.instance [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lazy-loading 'resources' on Instance uuid 090243f9-46ac-42a1-b921-fa3b1974f127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.968 2 INFO nova.virt.libvirt.driver [-] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Instance destroyed successfully.
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.968 2 DEBUG nova.objects.instance [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'resources' on Instance uuid 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.974 2 DEBUG nova.virt.libvirt.vif [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:53:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-416013570',display_name='tempest-ServerDiskConfigTestJSON-server-416013570',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-416013570',id=49,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9af27fad6b5a4783b66213343f27f0a1',ramdisk_id='',reservation_id='r-2vpa8irs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-387886039',owner_user_name='tempest-ServerDiskConfigTestJSON-387886039-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:54:00Z,user_data=None,user_id='2a171de1f79843e0b048393cabfee77d',uuid=090243f9-46ac-42a1-b921-fa3b1974f127,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.974 2 DEBUG nova.network.os_vif_util [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converting VIF {"id": "94e66298-1c10-486d-9cd5-fd680cfa037c", "address": "fa:16:3e:a6:01:8a", "network": {"id": "e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-964284753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9af27fad6b5a4783b66213343f27f0a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94e66298-1c", "ovs_interfaceid": "94e66298-1c10-486d-9cd5-fd680cfa037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.975 2 DEBUG nova.network.os_vif_util [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=94e66298-1c10-486d-9cd5-fd680cfa037c,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94e66298-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.975 2 DEBUG os_vif [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=94e66298-1c10-486d-9cd5-fd680cfa037c,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94e66298-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.977 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94e66298-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.980 2 DEBUG nova.virt.libvirt.vif [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:53:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-454723019',display_name='tempest-ImagesOneServerNegativeTestJSON-server-454723019',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-454723019',id=47,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:53:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-tv2wxykz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:53:59Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.981 2 DEBUG nova.network.os_vif_util [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "5e003588-ef81-4e87-900f-734b2c4bad32", "address": "fa:16:3e:97:d1:ed", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e003588-ef", "ovs_interfaceid": "5e003588-ef81-4e87-900f-734b2c4bad32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.981 2 DEBUG nova.network.os_vif_util [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:d1:ed,bridge_name='br-int',has_traffic_filtering=True,id=5e003588-ef81-4e87-900f-734b2c4bad32,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e003588-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.982 2 DEBUG os_vif [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:d1:ed,bridge_name='br-int',has_traffic_filtering=True,id=5e003588-ef81-4e87-900f-734b2c4bad32,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e003588-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.984 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e003588-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:02 compute-0 nova_compute[260935]: 2025-10-11 08:54:02.986 2 INFO os_vif [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=94e66298-1c10-486d-9cd5-fd680cfa037c,network=Network(e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94e66298-1c')
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.000 2 INFO os_vif [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:d1:ed,bridge_name='br-int',has_traffic_filtering=True,id=5e003588-ef81-4e87-900f-734b2c4bad32,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e003588-ef')
Oct 11 08:54:03 compute-0 podman[315789]: 2025-10-11 08:54:03.025010339 +0000 UTC m=+0.086099747 container remove 6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.032 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c77e1a-6733-402e-8d63-16f7962339bf]: (4, ('Sat Oct 11 08:54:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f (6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd)\n6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd\nSat Oct 11 08:54:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f (6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd)\n6877f03e8f74ec3b019684f0961795298d916dc72d355b1afc9b4cff51f152cd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.036 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[240ee7ba-756f-458a-9a10-dd65ab8c2fc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.038 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaac3fe7a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:03 compute-0 kernel: tapaac3fe7a-b0: left promiscuous mode
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.068 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2e2683c1-e4fb-4875-9297-3d8be39f45d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.089 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[49fe5b99-27e2-4fa6-8133-c635afe2730d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.090 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[31e4e408-184b-4469-bab5-f03786c74957]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.108 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe1aa36-0951-4b80-adc5-155433773d69]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467870, 'reachable_time': 32885, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315881, 'error': None, 'target': 'ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 systemd[1]: run-netns-ovnmeta\x2daac3fe7a\x2db6f5\x2d4809\x2d9f09\x2d04a3bbe54c8f.mount: Deactivated successfully.
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.113 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aac3fe7a-b6f5-4809-9f09-04a3bbe54c8f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.113 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e35d2f25-afd2-4528-8f3e-99f86c43079f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.114 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.114 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 94e66298-1c10-486d-9cd5-fd680cfa037c in datapath e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd unbound from our chassis
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.115 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.116 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c14584a4-d3f1-413e-b5bf-34d82310dba4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.117 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd namespace which is not needed anymore
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.301 2 INFO nova.virt.libvirt.driver [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Deleting instance files /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556_del
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.302 2 INFO nova.virt.libvirt.driver [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Deletion of /var/lib/nova/instances/c5c5e7c6-36ba-4cdd-9ad5-03996c419556_del complete
Oct 11 08:54:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:54:03 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[314970]: [NOTICE]   (314974) : haproxy version is 2.8.14-c23fe91
Oct 11 08:54:03 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[314970]: [NOTICE]   (314974) : path to executable is /usr/sbin/haproxy
Oct 11 08:54:03 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[314970]: [WARNING]  (314974) : Exiting Master process...
Oct 11 08:54:03 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[314970]: [WARNING]  (314974) : Exiting Master process...
Oct 11 08:54:03 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[314970]: [ALERT]    (314974) : Current worker (314978) exited with code 143 (Terminated)
Oct 11 08:54:03 compute-0 neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd[314970]: [WARNING]  (314974) : All workers exited. Exiting... (0)
Oct 11 08:54:03 compute-0 systemd[1]: libpod-81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684.scope: Deactivated successfully.
Oct 11 08:54:03 compute-0 podman[315904]: 2025-10-11 08:54:03.340352661 +0000 UTC m=+0.072095907 container died 81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.360 2 INFO nova.compute.manager [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Took 1.05 seconds to destroy the instance on the hypervisor.
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.361 2 DEBUG oslo.service.loopingcall [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.362 2 DEBUG nova.compute.manager [-] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.362 2 DEBUG nova.network.neutron [-] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:54:03 compute-0 sudo[315718]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684-userdata-shm.mount: Deactivated successfully.
Oct 11 08:54:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ff5f9aab65ed5ed74268fc74e80741bd6d56127a04d5f99970f5b4eb6779e28-merged.mount: Deactivated successfully.
Oct 11 08:54:03 compute-0 podman[315904]: 2025-10-11 08:54:03.397193621 +0000 UTC m=+0.128936867 container cleanup 81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 08:54:03 compute-0 systemd[1]: libpod-conmon-81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684.scope: Deactivated successfully.
Oct 11 08:54:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 08:54:03 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 08:54:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:54:03 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:54:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:54:03 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:54:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:54:03 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:54:03 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev f83aa430-0644-4ff0-b9bc-f1b8b6a89775 does not exist
Oct 11 08:54:03 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 5df8fa34-3f3b-4d63-a7f1-d333ab0d8aab does not exist
Oct 11 08:54:03 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 91dfbaa9-c364-4783-ac53-e85d6558ba0e does not exist
Oct 11 08:54:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:54:03 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:54:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:54:03 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:54:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:54:03 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:54:03 compute-0 ceph-mon[74313]: pgmap v1493: 321 pgs: 4 active+clean+snaptrim, 14 active+clean+snaptrim_wait, 303 active+clean; 278 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 4.8 MiB/s wr, 547 op/s
Oct 11 08:54:03 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 08:54:03 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:54:03 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:54:03 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:54:03 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:54:03 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:54:03 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:54:03 compute-0 podman[315944]: 2025-10-11 08:54:03.488972278 +0000 UTC m=+0.061821323 container remove 81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.494 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf97bd7-90c1-4c3a-91cf-3c12f5e25580]: (4, ('Sat Oct 11 08:54:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684)\n81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684\nSat Oct 11 08:54:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd (81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684)\n81ee36c5aeb7a05d069a71a8721b9939b3ddb5f0983b70f5954fbe381e6c2684\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.495 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[62654643-70ab-4cb9-a309-036d46067d91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.496 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5d4fc7a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:03 compute-0 kernel: tape5d4fc7a-10: left promiscuous mode
Oct 11 08:54:03 compute-0 sudo[315954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:54:03 compute-0 sudo[315954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:03 compute-0 sudo[315954]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.524 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[47fbb3c3-5ed1-455d-b995-9f341845cf7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.546 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[73f48f54-63f3-40b5-8c88-1dfea14d47e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.547 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cd95bb6f-4e0d-4db8-8d40-296fddd5c55e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.564 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3ced2cf0-b9e8-4527-914b-2f792225e979]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467663, 'reachable_time': 24734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315997, 'error': None, 'target': 'ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.566 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e5d4fc7a-1d47-4774-aad7-8bb2b388fbbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.566 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[0adaa183-a656-4fd9-965c-95f079ffb1fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.567 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 5e003588-ef81-4e87-900f-734b2c4bad32 in datapath a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf unbound from our chassis
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.568 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.572 2 INFO nova.virt.libvirt.driver [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Deleting instance files /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_del
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.573 2 INFO nova.virt.libvirt.driver [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Deletion of /var/lib/nova/instances/8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08_del complete
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.576 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b68927d3-84ba-4752-afd4-7deb5ed71679]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.579 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf namespace which is not needed anymore
Oct 11 08:54:03 compute-0 sudo[315982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.582 2 INFO nova.virt.libvirt.driver [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Deleting instance files /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127_del
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.583 2 INFO nova.virt.libvirt.driver [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Deletion of /var/lib/nova/instances/090243f9-46ac-42a1-b921-fa3b1974f127_del complete
Oct 11 08:54:03 compute-0 sudo[315982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:03 compute-0 sudo[315982]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.632 2 DEBUG nova.compute.manager [req-0aff252e-5bd8-460b-a1bd-ff499e3266eb req-871cea7c-d402-46e9-a675-27fa49f1ee42 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received event network-vif-unplugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.632 2 DEBUG oslo_concurrency.lockutils [req-0aff252e-5bd8-460b-a1bd-ff499e3266eb req-871cea7c-d402-46e9-a675-27fa49f1ee42 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.633 2 DEBUG oslo_concurrency.lockutils [req-0aff252e-5bd8-460b-a1bd-ff499e3266eb req-871cea7c-d402-46e9-a675-27fa49f1ee42 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.633 2 DEBUG oslo_concurrency.lockutils [req-0aff252e-5bd8-460b-a1bd-ff499e3266eb req-871cea7c-d402-46e9-a675-27fa49f1ee42 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.633 2 DEBUG nova.compute.manager [req-0aff252e-5bd8-460b-a1bd-ff499e3266eb req-871cea7c-d402-46e9-a675-27fa49f1ee42 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] No waiting events found dispatching network-vif-unplugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.634 2 DEBUG nova.compute.manager [req-0aff252e-5bd8-460b-a1bd-ff499e3266eb req-871cea7c-d402-46e9-a675-27fa49f1ee42 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received event network-vif-unplugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:54:03 compute-0 sudo[316012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:54:03 compute-0 sudo[316012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:03 compute-0 sudo[316012]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.653 2 INFO nova.compute.manager [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Took 0.96 seconds to destroy the instance on the hypervisor.
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.653 2 DEBUG oslo.service.loopingcall [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.654 2 DEBUG nova.compute.manager [-] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.654 2 DEBUG nova.network.neutron [-] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.660 2 INFO nova.compute.manager [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Took 0.97 seconds to destroy the instance on the hypervisor.
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.660 2 DEBUG oslo.service.loopingcall [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.661 2 DEBUG nova.compute.manager [-] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.661 2 DEBUG nova.network.neutron [-] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:54:03 compute-0 sudo[316051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:54:03 compute-0 sudo[316051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:03 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[314221]: [NOTICE]   (314243) : haproxy version is 2.8.14-c23fe91
Oct 11 08:54:03 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[314221]: [NOTICE]   (314243) : path to executable is /usr/sbin/haproxy
Oct 11 08:54:03 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[314221]: [ALERT]    (314243) : Current worker (314248) exited with code 143 (Terminated)
Oct 11 08:54:03 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[314221]: [WARNING]  (314243) : All workers exited. Exiting... (0)
Oct 11 08:54:03 compute-0 systemd[1]: libpod-7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5.scope: Deactivated successfully.
Oct 11 08:54:03 compute-0 podman[316054]: 2025-10-11 08:54:03.727445089 +0000 UTC m=+0.054905857 container died 7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:54:03 compute-0 podman[316054]: 2025-10-11 08:54:03.758570046 +0000 UTC m=+0.086030814 container cleanup 7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 08:54:03 compute-0 podman[316060]: 2025-10-11 08:54:03.774074418 +0000 UTC m=+0.081973148 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:54:03 compute-0 systemd[1]: libpod-conmon-7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5.scope: Deactivated successfully.
Oct 11 08:54:03 compute-0 podman[316119]: 2025-10-11 08:54:03.815269743 +0000 UTC m=+0.032862958 container remove 7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.824 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ff9c2b-47fc-4fbe-a1c3-dfdb9538a636]: (4, ('Sat Oct 11 08:54:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf (7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5)\n7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5\nSat Oct 11 08:54:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf (7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5)\n7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.826 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[529428f6-5675-4ea5-88e9-3e713ca35451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.827 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a65c6f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.867 2 DEBUG nova.network.neutron [-] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:03 compute-0 kernel: tapa1a65c6f-c0: left promiscuous mode
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:03 compute-0 systemd[1]: run-netns-ovnmeta\x2de5d4fc7a\x2d1d47\x2d4774\x2daad7\x2d8bb2b388fbbd.mount: Deactivated successfully.
Oct 11 08:54:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5898e42c3d8153f562219a47f22912dd20bb1c3a0e1902954c3573851fcf1fb5-merged.mount: Deactivated successfully.
Oct 11 08:54:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d1a9e6b3c9130792d0328aa3e55096abd83598bbcc264728304976547ed89b5-userdata-shm.mount: Deactivated successfully.
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.891 2 INFO nova.compute.manager [-] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Took 1.16 seconds to deallocate network for instance.
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.911 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[445f355f-9405-4ca5-9016-904d417ceab3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.938 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[83b27fa3-4d8a-45d3-b7a8-e20143e8f4fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.939 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a29ad126-fa98-42b6-aab4-99cc87153a65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.959 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:03 compute-0 nova_compute[260935]: 2025-10-11 08:54:03.959 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 7.2 MiB/s wr, 716 op/s
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.966 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ce70d240-fc91-401b-8095-fc4f2b227183]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466841, 'reachable_time': 38667, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316148, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.968 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:54:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:03.969 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1199c91d-1d81-4f80-96c7-501de420e916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:03 compute-0 systemd[1]: run-netns-ovnmeta\x2da1a65c6f\x2dcd0e\x2d4ac5\x2db9de\x2d21e41a32ffbf.mount: Deactivated successfully.
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.014 2 DEBUG nova.network.neutron [-] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.040 2 INFO nova.compute.manager [-] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Took 0.68 seconds to deallocate network for instance.
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.100 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.104 2 DEBUG oslo_concurrency.processutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:04 compute-0 podman[316176]: 2025-10-11 08:54:04.206145559 +0000 UTC m=+0.061899246 container create d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 08:54:04 compute-0 systemd[1]: Started libpod-conmon-d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b.scope.
Oct 11 08:54:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:54:04 compute-0 podman[316176]: 2025-10-11 08:54:04.184685167 +0000 UTC m=+0.040438914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:54:04 compute-0 podman[316176]: 2025-10-11 08:54:04.297282066 +0000 UTC m=+0.153035853 container init d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:54:04 compute-0 podman[316176]: 2025-10-11 08:54:04.309074633 +0000 UTC m=+0.164828330 container start d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 08:54:04 compute-0 podman[316176]: 2025-10-11 08:54:04.312719907 +0000 UTC m=+0.168473684 container attach d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:54:04 compute-0 systemd[1]: libpod-d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b.scope: Deactivated successfully.
Oct 11 08:54:04 compute-0 affectionate_montalcini[316192]: 167 167
Oct 11 08:54:04 compute-0 podman[316176]: 2025-10-11 08:54:04.319338775 +0000 UTC m=+0.175092502 container died d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 08:54:04 compute-0 conmon[316192]: conmon d61237b684e2b67fcf2d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b.scope/container/memory.events
Oct 11 08:54:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-391d942d1c2eb1371a9aef4ba63866e47d7e397fd5f0cfec914fd605f1000cdc-merged.mount: Deactivated successfully.
Oct 11 08:54:04 compute-0 podman[316176]: 2025-10-11 08:54:04.383071373 +0000 UTC m=+0.238825100 container remove d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 08:54:04 compute-0 systemd[1]: libpod-conmon-d61237b684e2b67fcf2d69455ea2af1003e62f0d37ef7a5d8781ec1a24e8e64b.scope: Deactivated successfully.
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.528 2 DEBUG nova.network.neutron [-] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.542 2 DEBUG nova.network.neutron [-] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.554 2 INFO nova.compute.manager [-] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Took 0.90 seconds to deallocate network for instance.
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.563 2 INFO nova.compute.manager [-] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Took 0.90 seconds to deallocate network for instance.
Oct 11 08:54:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/822842830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.620 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.621 2 DEBUG oslo_concurrency.processutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.628 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.634 2 DEBUG nova.compute.provider_tree [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:54:04 compute-0 podman[316236]: 2025-10-11 08:54:04.648082859 +0000 UTC m=+0.075518384 container create 3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.650 2 DEBUG nova.scheduler.client.report [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.673 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.676 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.704 2 INFO nova.scheduler.client.report [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Deleted allocations for instance d0ac94c4-9bbc-443b-bbce-0d447b37153a
Oct 11 08:54:04 compute-0 podman[316236]: 2025-10-11 08:54:04.614203093 +0000 UTC m=+0.041638678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.706 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001452666129452367 of space, bias 1.0, pg target 0.4357998388357101 quantized to 32 (current 32)
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:54:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:54:04 compute-0 systemd[1]: Started libpod-conmon-3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8.scope.
Oct 11 08:54:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:54:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5081022b4876deca3781b856df4070f0b16910071fed7e9767360ef6b7b5c0ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:54:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5081022b4876deca3781b856df4070f0b16910071fed7e9767360ef6b7b5c0ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:54:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5081022b4876deca3781b856df4070f0b16910071fed7e9767360ef6b7b5c0ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:54:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5081022b4876deca3781b856df4070f0b16910071fed7e9767360ef6b7b5c0ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:54:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5081022b4876deca3781b856df4070f0b16910071fed7e9767360ef6b7b5c0ab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.774 2 DEBUG oslo_concurrency.lockutils [None req-08113e51-2469-4f43-8573-66a713d640b3 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:04 compute-0 podman[316236]: 2025-10-11 08:54:04.781511894 +0000 UTC m=+0.208947459 container init 3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 08:54:04 compute-0 podman[316236]: 2025-10-11 08:54:04.801964868 +0000 UTC m=+0.229400393 container start 3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.801 2 DEBUG oslo_concurrency.processutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:04 compute-0 podman[316236]: 2025-10-11 08:54:04.806169657 +0000 UTC m=+0.233605172 container attach 3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.930 2 DEBUG nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received event network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.932 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.932 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.933 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d0ac94c4-9bbc-443b-bbce-0d447b37153a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.933 2 DEBUG nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] No waiting events found dispatching network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.933 2 WARNING nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received unexpected event network-vif-plugged-11b55091-2876-4b36-98b0-aa1a4db00d3e for instance with vm_state deleted and task_state None.
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.934 2 DEBUG nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received event network-vif-unplugged-5e003588-ef81-4e87-900f-734b2c4bad32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.934 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.935 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.935 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.936 2 DEBUG nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] No waiting events found dispatching network-vif-unplugged-5e003588-ef81-4e87-900f-734b2c4bad32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.936 2 WARNING nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received unexpected event network-vif-unplugged-5e003588-ef81-4e87-900f-734b2c4bad32 for instance with vm_state deleted and task_state None.
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.937 2 DEBUG nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received event network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.937 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.937 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.938 2 DEBUG oslo_concurrency.lockutils [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.938 2 DEBUG nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] No waiting events found dispatching network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.938 2 WARNING nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received unexpected event network-vif-plugged-5e003588-ef81-4e87-900f-734b2c4bad32 for instance with vm_state deleted and task_state None.
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.938 2 DEBUG nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Received event network-vif-deleted-5e003588-ef81-4e87-900f-734b2c4bad32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:04 compute-0 nova_compute[260935]: 2025-10-11 08:54:04.939 2 DEBUG nova.compute.manager [req-044a5dfc-b69a-45e5-8ed3-4219a8215da9 req-c2b68031-d77b-4589-80c6-e4c03ef718a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received event network-vif-deleted-94e66298-1c10-486d-9cd5-fd680cfa037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:05 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1397735718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.310 2 DEBUG oslo_concurrency.processutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.317 2 DEBUG nova.compute.provider_tree [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.335 2 DEBUG nova.scheduler.client.report [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.360 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.364 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.389 2 DEBUG nova.scheduler.client.report [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.394 2 INFO nova.scheduler.client.report [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Deleted allocations for instance c5c5e7c6-36ba-4cdd-9ad5-03996c419556
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.412 2 DEBUG nova.scheduler.client.report [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.412 2 DEBUG nova.compute.provider_tree [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.426 2 DEBUG nova.scheduler.client.report [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.465 2 DEBUG nova.scheduler.client.report [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.481 2 DEBUG oslo_concurrency.lockutils [None req-79edc500-30b7-473e-9cd9-7958c1b3987c 624c293d73ca4d14a182fadee17abb16 5fe47dfd30914099a9819413cbab00c6 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:05 compute-0 ceph-mon[74313]: pgmap v1494: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 7.2 MiB/s wr, 716 op/s
Oct 11 08:54:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/822842830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1397735718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.531 2 DEBUG oslo_concurrency.processutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.721 2 DEBUG nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received event network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.722 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.722 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.723 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c5c5e7c6-36ba-4cdd-9ad5-03996c419556-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.723 2 DEBUG nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] No waiting events found dispatching network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.723 2 WARNING nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received unexpected event network-vif-plugged-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 for instance with vm_state deleted and task_state None.
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.724 2 DEBUG nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received event network-vif-unplugged-94e66298-1c10-486d-9cd5-fd680cfa037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.724 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.724 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.725 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.725 2 DEBUG nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] No waiting events found dispatching network-vif-unplugged-94e66298-1c10-486d-9cd5-fd680cfa037c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.725 2 WARNING nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received unexpected event network-vif-unplugged-94e66298-1c10-486d-9cd5-fd680cfa037c for instance with vm_state deleted and task_state None.
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.726 2 DEBUG nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received event network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.726 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.726 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.726 2 DEBUG oslo_concurrency.lockutils [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.727 2 DEBUG nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] No waiting events found dispatching network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.727 2 WARNING nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Received unexpected event network-vif-plugged-94e66298-1c10-486d-9cd5-fd680cfa037c for instance with vm_state deleted and task_state None.
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.727 2 DEBUG nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Received event network-vif-deleted-11b55091-2876-4b36-98b0-aa1a4db00d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:05 compute-0 nova_compute[260935]: 2025-10-11 08:54:05.728 2 DEBUG nova.compute.manager [req-bd8dea70-8e78-42cf-a611-7e12a8a2e7b6 req-41677940-7b17-48d2-994b-721ce56eb7b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Received event network-vif-deleted-eb770b47-0e30-41bb-8d08-e52bc86b8fb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:05 compute-0 romantic_neumann[316254]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:54:05 compute-0 romantic_neumann[316254]: --> relative data size: 1.0
Oct 11 08:54:05 compute-0 romantic_neumann[316254]: --> All data devices are unavailable
Oct 11 08:54:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 205 op/s
Oct 11 08:54:05 compute-0 systemd[1]: libpod-3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8.scope: Deactivated successfully.
Oct 11 08:54:05 compute-0 podman[316236]: 2025-10-11 08:54:05.980128702 +0000 UTC m=+1.407564217 container died 3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 08:54:05 compute-0 systemd[1]: libpod-3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8.scope: Consumed 1.113s CPU time.
Oct 11 08:54:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:05 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/31269448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:06 compute-0 nova_compute[260935]: 2025-10-11 08:54:06.013 2 DEBUG oslo_concurrency.processutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5081022b4876deca3781b856df4070f0b16910071fed7e9767360ef6b7b5c0ab-merged.mount: Deactivated successfully.
Oct 11 08:54:06 compute-0 nova_compute[260935]: 2025-10-11 08:54:06.024 2 DEBUG nova.compute.provider_tree [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:54:06 compute-0 nova_compute[260935]: 2025-10-11 08:54:06.040 2 DEBUG nova.scheduler.client.report [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:54:06 compute-0 podman[316236]: 2025-10-11 08:54:06.053982058 +0000 UTC m=+1.481417553 container remove 3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 08:54:06 compute-0 nova_compute[260935]: 2025-10-11 08:54:06.060 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:06 compute-0 nova_compute[260935]: 2025-10-11 08:54:06.062 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:06 compute-0 systemd[1]: libpod-conmon-3ec23f4547cc0e539ac02b827b38cc6f130b6a8c99a84276140ffd435d4ab6d8.scope: Deactivated successfully.
Oct 11 08:54:06 compute-0 sudo[316051]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:06 compute-0 nova_compute[260935]: 2025-10-11 08:54:06.109 2 INFO nova.scheduler.client.report [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Deleted allocations for instance 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08
Oct 11 08:54:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:06.115 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:06 compute-0 nova_compute[260935]: 2025-10-11 08:54:06.152 2 DEBUG oslo_concurrency.processutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:06 compute-0 sudo[316340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:54:06 compute-0 sudo[316340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:06 compute-0 sudo[316340]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:06 compute-0 nova_compute[260935]: 2025-10-11 08:54:06.208 2 DEBUG oslo_concurrency.lockutils [None req-65038e71-6994-4fd5-842c-fae4ec91d8d5 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:06 compute-0 sudo[316366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:54:06 compute-0 sudo[316366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:06 compute-0 sudo[316366]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:06 compute-0 sudo[316391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:54:06 compute-0 sudo[316391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:06 compute-0 sudo[316391]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:06 compute-0 sudo[316435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:54:06 compute-0 sudo[316435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:06 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/31269448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1468553982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:06 compute-0 nova_compute[260935]: 2025-10-11 08:54:06.625 2 DEBUG oslo_concurrency.processutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:06 compute-0 nova_compute[260935]: 2025-10-11 08:54:06.633 2 DEBUG nova.compute.provider_tree [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:54:06 compute-0 nova_compute[260935]: 2025-10-11 08:54:06.647 2 DEBUG nova.scheduler.client.report [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:54:06 compute-0 nova_compute[260935]: 2025-10-11 08:54:06.673 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:06 compute-0 nova_compute[260935]: 2025-10-11 08:54:06.700 2 INFO nova.scheduler.client.report [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Deleted allocations for instance 090243f9-46ac-42a1-b921-fa3b1974f127
Oct 11 08:54:06 compute-0 nova_compute[260935]: 2025-10-11 08:54:06.769 2 DEBUG oslo_concurrency.lockutils [None req-d29d2b04-23c0-4456-a657-aa63148acf2f 2a171de1f79843e0b048393cabfee77d 9af27fad6b5a4783b66213343f27f0a1 - - default default] Lock "090243f9-46ac-42a1-b921-fa3b1974f127" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:06 compute-0 podman[316502]: 2025-10-11 08:54:06.916492822 +0000 UTC m=+0.060461015 container create f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 08:54:06 compute-0 systemd[1]: Started libpod-conmon-f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4.scope.
Oct 11 08:54:06 compute-0 podman[316502]: 2025-10-11 08:54:06.887353581 +0000 UTC m=+0.031321844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:54:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:54:07 compute-0 podman[316502]: 2025-10-11 08:54:07.012122339 +0000 UTC m=+0.156090542 container init f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 08:54:07 compute-0 podman[316502]: 2025-10-11 08:54:07.026219331 +0000 UTC m=+0.170187504 container start f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:54:07 compute-0 podman[316502]: 2025-10-11 08:54:07.030646127 +0000 UTC m=+0.174614320 container attach f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:54:07 compute-0 great_lovelace[316518]: 167 167
Oct 11 08:54:07 compute-0 systemd[1]: libpod-f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4.scope: Deactivated successfully.
Oct 11 08:54:07 compute-0 podman[316502]: 2025-10-11 08:54:07.034405095 +0000 UTC m=+0.178373268 container died f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 08:54:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-96330046e8420a46328b894f3f8e6191ba87972981afcf24c7d4de53e9f5fd62-merged.mount: Deactivated successfully.
Oct 11 08:54:07 compute-0 podman[316502]: 2025-10-11 08:54:07.086661295 +0000 UTC m=+0.230629498 container remove f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 08:54:07 compute-0 systemd[1]: libpod-conmon-f2dbd6f65681c48bb22ad416d63550fe56094b6e23d4b68b0275220fb2fa7cd4.scope: Deactivated successfully.
Oct 11 08:54:07 compute-0 podman[316542]: 2025-10-11 08:54:07.32428889 +0000 UTC m=+0.065831828 container create f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 08:54:07 compute-0 systemd[1]: Started libpod-conmon-f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae.scope.
Oct 11 08:54:07 compute-0 podman[316542]: 2025-10-11 08:54:07.293298667 +0000 UTC m=+0.034841685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:54:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb945b9f724009f8ab9d720c5f2b13aaae9b7771597074ae4196cd82d23219d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb945b9f724009f8ab9d720c5f2b13aaae9b7771597074ae4196cd82d23219d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb945b9f724009f8ab9d720c5f2b13aaae9b7771597074ae4196cd82d23219d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb945b9f724009f8ab9d720c5f2b13aaae9b7771597074ae4196cd82d23219d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:54:07 compute-0 podman[316542]: 2025-10-11 08:54:07.433659259 +0000 UTC m=+0.175202277 container init f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:54:07 compute-0 podman[316542]: 2025-10-11 08:54:07.444009294 +0000 UTC m=+0.185552232 container start f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 08:54:07 compute-0 podman[316542]: 2025-10-11 08:54:07.448032319 +0000 UTC m=+0.189575347 container attach f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 08:54:07 compute-0 ceph-mon[74313]: pgmap v1495: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 205 op/s
Oct 11 08:54:07 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1468553982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:07 compute-0 nova_compute[260935]: 2025-10-11 08:54:07.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:54:07 compute-0 nova_compute[260935]: 2025-10-11 08:54:07.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 41 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 269 op/s
Oct 11 08:54:07 compute-0 nova_compute[260935]: 2025-10-11 08:54:07.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:08 compute-0 mystifying_spence[316559]: {
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:     "0": [
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:         {
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "devices": [
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "/dev/loop3"
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             ],
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "lv_name": "ceph_lv0",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "lv_size": "21470642176",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "name": "ceph_lv0",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "tags": {
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.cluster_name": "ceph",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.crush_device_class": "",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.encrypted": "0",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.osd_id": "0",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.type": "block",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.vdo": "0"
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             },
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "type": "block",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "vg_name": "ceph_vg0"
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:         }
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:     ],
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:     "1": [
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:         {
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "devices": [
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "/dev/loop4"
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             ],
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "lv_name": "ceph_lv1",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "lv_size": "21470642176",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "name": "ceph_lv1",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "tags": {
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.cluster_name": "ceph",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.crush_device_class": "",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.encrypted": "0",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.osd_id": "1",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.type": "block",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.vdo": "0"
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             },
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "type": "block",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "vg_name": "ceph_vg1"
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:         }
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:     ],
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:     "2": [
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:         {
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "devices": [
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "/dev/loop5"
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             ],
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "lv_name": "ceph_lv2",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "lv_size": "21470642176",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "name": "ceph_lv2",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "tags": {
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.cluster_name": "ceph",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.crush_device_class": "",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.encrypted": "0",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.osd_id": "2",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.type": "block",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:                 "ceph.vdo": "0"
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             },
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "type": "block",
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:             "vg_name": "ceph_vg2"
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:         }
Oct 11 08:54:08 compute-0 mystifying_spence[316559]:     ]
Oct 11 08:54:08 compute-0 mystifying_spence[316559]: }
Oct 11 08:54:08 compute-0 podman[316542]: 2025-10-11 08:54:08.322618937 +0000 UTC m=+1.064161875 container died f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 11 08:54:08 compute-0 systemd[1]: libpod-f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae.scope: Deactivated successfully.
Oct 11 08:54:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:54:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Oct 11 08:54:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Oct 11 08:54:08 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Oct 11 08:54:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0eb945b9f724009f8ab9d720c5f2b13aaae9b7771597074ae4196cd82d23219d-merged.mount: Deactivated successfully.
Oct 11 08:54:08 compute-0 podman[316542]: 2025-10-11 08:54:08.403571715 +0000 UTC m=+1.145114653 container remove f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:54:08 compute-0 systemd[1]: libpod-conmon-f74e55edfe1842c399128fa5647a1ac23c656e25571b5c42cc5f45fe2f8e3bae.scope: Deactivated successfully.
Oct 11 08:54:08 compute-0 sudo[316435]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:08 compute-0 sudo[316577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:54:08 compute-0 sudo[316577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:08 compute-0 sudo[316577]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:08 compute-0 sudo[316602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:54:08 compute-0 sudo[316602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:08 compute-0 sudo[316602]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:08 compute-0 nova_compute[260935]: 2025-10-11 08:54:08.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:54:08 compute-0 nova_compute[260935]: 2025-10-11 08:54:08.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:54:08 compute-0 sudo[316627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:54:08 compute-0 sudo[316627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:08 compute-0 sudo[316627]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:08 compute-0 sudo[316652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:54:08 compute-0 sudo[316652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:09 compute-0 ceph-mon[74313]: pgmap v1496: 321 pgs: 321 active+clean; 41 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 269 op/s
Oct 11 08:54:09 compute-0 ceph-mon[74313]: osdmap e210: 3 total, 3 up, 3 in
Oct 11 08:54:09 compute-0 podman[316716]: 2025-10-11 08:54:09.386311547 +0000 UTC m=+0.066923779 container create 604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 08:54:09 compute-0 podman[316716]: 2025-10-11 08:54:09.354931503 +0000 UTC m=+0.035543775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:54:09 compute-0 nova_compute[260935]: 2025-10-11 08:54:09.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:09 compute-0 systemd[1]: Started libpod-conmon-604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5.scope.
Oct 11 08:54:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:54:09 compute-0 podman[316716]: 2025-10-11 08:54:09.530640943 +0000 UTC m=+0.211253235 container init 604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:54:09 compute-0 podman[316716]: 2025-10-11 08:54:09.548188053 +0000 UTC m=+0.228800285 container start 604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 08:54:09 compute-0 podman[316716]: 2025-10-11 08:54:09.552354502 +0000 UTC m=+0.232966764 container attach 604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 08:54:09 compute-0 nice_brattain[316735]: 167 167
Oct 11 08:54:09 compute-0 systemd[1]: libpod-604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5.scope: Deactivated successfully.
Oct 11 08:54:09 compute-0 podman[316716]: 2025-10-11 08:54:09.556653675 +0000 UTC m=+0.237265907 container died 604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 08:54:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae0f68c6c65815cef1f89661a5f948bbb43d76dad7c92147208055baf61cd4b3-merged.mount: Deactivated successfully.
Oct 11 08:54:09 compute-0 podman[316716]: 2025-10-11 08:54:09.613186577 +0000 UTC m=+0.293798829 container remove 604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:54:09 compute-0 podman[316728]: 2025-10-11 08:54:09.623674286 +0000 UTC m=+0.126252611 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 08:54:09 compute-0 systemd[1]: libpod-conmon-604e50214345a226ea677773add0c75ad83a1e3673fd03f0d711d2444a1309f5.scope: Deactivated successfully.
Oct 11 08:54:09 compute-0 podman[316734]: 2025-10-11 08:54:09.690785789 +0000 UTC m=+0.195444784 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:54:09 compute-0 podman[316802]: 2025-10-11 08:54:09.837172644 +0000 UTC m=+0.065858169 container create 5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:54:09 compute-0 systemd[1]: Started libpod-conmon-5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded.scope.
Oct 11 08:54:09 compute-0 podman[316802]: 2025-10-11 08:54:09.807322282 +0000 UTC m=+0.036007867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:54:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c41e86106ae4a335d8011372adf8366a211630267a953265ca526eaf04b53626/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c41e86106ae4a335d8011372adf8366a211630267a953265ca526eaf04b53626/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c41e86106ae4a335d8011372adf8366a211630267a953265ca526eaf04b53626/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c41e86106ae4a335d8011372adf8366a211630267a953265ca526eaf04b53626/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:54:09 compute-0 podman[316802]: 2025-10-11 08:54:09.95138744 +0000 UTC m=+0.180073015 container init 5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:54:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 41 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 257 op/s
Oct 11 08:54:09 compute-0 podman[316802]: 2025-10-11 08:54:09.972179833 +0000 UTC m=+0.200865368 container start 5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 08:54:09 compute-0 podman[316802]: 2025-10-11 08:54:09.976871297 +0000 UTC m=+0.205556832 container attach 5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:54:10 compute-0 nova_compute[260935]: 2025-10-11 08:54:10.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:54:10 compute-0 nova_compute[260935]: 2025-10-11 08:54:10.730 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:10 compute-0 nova_compute[260935]: 2025-10-11 08:54:10.731 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:10 compute-0 nova_compute[260935]: 2025-10-11 08:54:10.731 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:10 compute-0 nova_compute[260935]: 2025-10-11 08:54:10.732 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:54:10 compute-0 nova_compute[260935]: 2025-10-11 08:54:10.732 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:11 compute-0 upbeat_brown[316819]: {
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:         "osd_id": 2,
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:         "type": "bluestore"
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:     },
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:         "osd_id": 0,
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:         "type": "bluestore"
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:     },
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:         "osd_id": 1,
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:         "type": "bluestore"
Oct 11 08:54:11 compute-0 upbeat_brown[316819]:     }
Oct 11 08:54:11 compute-0 upbeat_brown[316819]: }
Oct 11 08:54:11 compute-0 systemd[1]: libpod-5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded.scope: Deactivated successfully.
Oct 11 08:54:11 compute-0 systemd[1]: libpod-5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded.scope: Consumed 1.181s CPU time.
Oct 11 08:54:11 compute-0 podman[316802]: 2025-10-11 08:54:11.151708527 +0000 UTC m=+1.380394052 container died 5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 08:54:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c41e86106ae4a335d8011372adf8366a211630267a953265ca526eaf04b53626-merged.mount: Deactivated successfully.
Oct 11 08:54:11 compute-0 podman[316802]: 2025-10-11 08:54:11.218391509 +0000 UTC m=+1.447077004 container remove 5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 08:54:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:11 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1963532223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:11 compute-0 systemd[1]: libpod-conmon-5e4b1457b1c6de6300a551d88826763ee6f81d378aa3d0f903972b9318558ded.scope: Deactivated successfully.
Oct 11 08:54:11 compute-0 nova_compute[260935]: 2025-10-11 08:54:11.246 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:11 compute-0 sudo[316652]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:54:11 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:54:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:54:11 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:54:11 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 463ce18b-c9f9-4e95-8fe4-b2f77282082a does not exist
Oct 11 08:54:11 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 201a6b9b-3093-48a0-9d40-2757175ac650 does not exist
Oct 11 08:54:11 compute-0 ceph-mon[74313]: pgmap v1498: 321 pgs: 321 active+clean; 41 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 257 op/s
Oct 11 08:54:11 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1963532223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:11 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:54:11 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:54:11 compute-0 sudo[316886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:54:11 compute-0 sudo[316886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:11 compute-0 sudo[316886]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:11 compute-0 nova_compute[260935]: 2025-10-11 08:54:11.470 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:54:11 compute-0 nova_compute[260935]: 2025-10-11 08:54:11.471 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4135MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:54:11 compute-0 nova_compute[260935]: 2025-10-11 08:54:11.471 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:11 compute-0 nova_compute[260935]: 2025-10-11 08:54:11.472 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:11 compute-0 sudo[316911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:54:11 compute-0 sudo[316911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:54:11 compute-0 sudo[316911]: pam_unix(sudo:session): session closed for user root
Oct 11 08:54:11 compute-0 nova_compute[260935]: 2025-10-11 08:54:11.535 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:54:11 compute-0 nova_compute[260935]: 2025-10-11 08:54:11.535 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:54:11 compute-0 nova_compute[260935]: 2025-10-11 08:54:11.561 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 41 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 257 op/s
Oct 11 08:54:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3896342550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:12 compute-0 nova_compute[260935]: 2025-10-11 08:54:12.107 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:12 compute-0 nova_compute[260935]: 2025-10-11 08:54:12.115 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:54:12 compute-0 nova_compute[260935]: 2025-10-11 08:54:12.136 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:54:12 compute-0 nova_compute[260935]: 2025-10-11 08:54:12.164 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:54:12 compute-0 nova_compute[260935]: 2025-10-11 08:54:12.165 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:12 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3896342550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:12 compute-0 nova_compute[260935]: 2025-10-11 08:54:12.763 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "1a70666a-f421-4a09-bfa4-f1171cbe2000" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:12 compute-0 nova_compute[260935]: 2025-10-11 08:54:12.764 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:12 compute-0 nova_compute[260935]: 2025-10-11 08:54:12.787 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:54:12 compute-0 nova_compute[260935]: 2025-10-11 08:54:12.851 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:12 compute-0 nova_compute[260935]: 2025-10-11 08:54:12.851 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:12 compute-0 nova_compute[260935]: 2025-10-11 08:54:12.859 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:54:12 compute-0 nova_compute[260935]: 2025-10-11 08:54:12.860 2 INFO nova.compute.claims [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:54:12 compute-0 nova_compute[260935]: 2025-10-11 08:54:12.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:12 compute-0 nova_compute[260935]: 2025-10-11 08:54:12.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:12 compute-0 nova_compute[260935]: 2025-10-11 08:54:12.992 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.166 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.166 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.166 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.210 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.211 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 08:54:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:54:13 compute-0 ceph-mon[74313]: pgmap v1499: 321 pgs: 321 active+clean; 41 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 257 op/s
Oct 11 08:54:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/214837527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.480 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.489 2 DEBUG nova.compute.provider_tree [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.507 2 DEBUG nova.scheduler.client.report [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.538 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.539 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.590 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.590 2 DEBUG nova.network.neutron [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.611 2 INFO nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.636 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.719 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.720 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.721 2 INFO nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Creating image(s)
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.747 2 DEBUG nova.storage.rbd_utils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.778 2 DEBUG nova.storage.rbd_utils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.805 2 DEBUG nova.storage.rbd_utils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.809 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.889 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.891 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.892 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.892 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.924 2 DEBUG nova.storage.rbd_utils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:13 compute-0 nova_compute[260935]: 2025-10-11 08:54:13.929 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 41 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.2 KiB/s wr, 92 op/s
Oct 11 08:54:14 compute-0 nova_compute[260935]: 2025-10-11 08:54:14.012 2 DEBUG nova.policy [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd4dce65b39be45739408ca70d672df84', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:54:14 compute-0 nova_compute[260935]: 2025-10-11 08:54:14.282 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:14 compute-0 nova_compute[260935]: 2025-10-11 08:54:14.378 2 DEBUG nova.storage.rbd_utils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] resizing rbd image 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:54:14 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/214837527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:14 compute-0 nova_compute[260935]: 2025-10-11 08:54:14.623 2 DEBUG nova.objects.instance [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'migration_context' on Instance uuid 1a70666a-f421-4a09-bfa4-f1171cbe2000 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:14 compute-0 nova_compute[260935]: 2025-10-11 08:54:14.649 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:54:14 compute-0 nova_compute[260935]: 2025-10-11 08:54:14.649 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Ensure instance console log exists: /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:54:14 compute-0 nova_compute[260935]: 2025-10-11 08:54:14.650 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:14 compute-0 nova_compute[260935]: 2025-10-11 08:54:14.651 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:14 compute-0 nova_compute[260935]: 2025-10-11 08:54:14.651 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:15.189 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:15.190 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:15.190 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:15 compute-0 nova_compute[260935]: 2025-10-11 08:54:15.271 2 DEBUG nova.network.neutron [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Successfully created port: 56e8a82c-acef-4136-b229-8039f0d8c843 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:54:15 compute-0 ceph-mon[74313]: pgmap v1500: 321 pgs: 321 active+clean; 41 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.2 KiB/s wr, 92 op/s
Oct 11 08:54:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 41 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.2 KiB/s wr, 92 op/s
Oct 11 08:54:16 compute-0 nova_compute[260935]: 2025-10-11 08:54:16.719 2 DEBUG nova.network.neutron [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Successfully updated port: 56e8a82c-acef-4136-b229-8039f0d8c843 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:54:16 compute-0 nova_compute[260935]: 2025-10-11 08:54:16.748 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "refresh_cache-1a70666a-f421-4a09-bfa4-f1171cbe2000" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:54:16 compute-0 nova_compute[260935]: 2025-10-11 08:54:16.749 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquired lock "refresh_cache-1a70666a-f421-4a09-bfa4-f1171cbe2000" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:54:16 compute-0 nova_compute[260935]: 2025-10-11 08:54:16.749 2 DEBUG nova.network.neutron [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:54:16 compute-0 nova_compute[260935]: 2025-10-11 08:54:16.845 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172841.8440382, d0ac94c4-9bbc-443b-bbce-0d447b37153a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:16 compute-0 nova_compute[260935]: 2025-10-11 08:54:16.846 2 INFO nova.compute.manager [-] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] VM Stopped (Lifecycle Event)
Oct 11 08:54:16 compute-0 nova_compute[260935]: 2025-10-11 08:54:16.861 2 DEBUG nova.compute.manager [req-77ca7993-b53f-454c-8997-94fb4b118031 req-0db12ec8-248f-4752-ac46-69dd7c7e21fc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received event network-changed-56e8a82c-acef-4136-b229-8039f0d8c843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:16 compute-0 nova_compute[260935]: 2025-10-11 08:54:16.862 2 DEBUG nova.compute.manager [req-77ca7993-b53f-454c-8997-94fb4b118031 req-0db12ec8-248f-4752-ac46-69dd7c7e21fc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Refreshing instance network info cache due to event network-changed-56e8a82c-acef-4136-b229-8039f0d8c843. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:54:16 compute-0 nova_compute[260935]: 2025-10-11 08:54:16.863 2 DEBUG oslo_concurrency.lockutils [req-77ca7993-b53f-454c-8997-94fb4b118031 req-0db12ec8-248f-4752-ac46-69dd7c7e21fc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1a70666a-f421-4a09-bfa4-f1171cbe2000" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:54:16 compute-0 nova_compute[260935]: 2025-10-11 08:54:16.881 2 DEBUG nova.compute.manager [None req-151a3760-b8ab-408b-bcf8-ce0a238ab5b2 - - - - - -] [instance: d0ac94c4-9bbc-443b-bbce-0d447b37153a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:17 compute-0 nova_compute[260935]: 2025-10-11 08:54:17.205 2 DEBUG nova.network.neutron [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:54:17 compute-0 ceph-mon[74313]: pgmap v1501: 321 pgs: 321 active+clean; 41 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.2 KiB/s wr, 92 op/s
Oct 11 08:54:17 compute-0 nova_compute[260935]: 2025-10-11 08:54:17.543 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172842.5422695, c5c5e7c6-36ba-4cdd-9ad5-03996c419556 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:17 compute-0 nova_compute[260935]: 2025-10-11 08:54:17.544 2 INFO nova.compute.manager [-] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] VM Stopped (Lifecycle Event)
Oct 11 08:54:17 compute-0 nova_compute[260935]: 2025-10-11 08:54:17.570 2 DEBUG nova.compute.manager [None req-bd800a87-1dfd-4999-aa21-3fa66502cbb8 - - - - - -] [instance: c5c5e7c6-36ba-4cdd-9ad5-03996c419556] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:17 compute-0 nova_compute[260935]: 2025-10-11 08:54:17.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:17 compute-0 nova_compute[260935]: 2025-10-11 08:54:17.938 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172842.937579, 090243f9-46ac-42a1-b921-fa3b1974f127 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:17 compute-0 nova_compute[260935]: 2025-10-11 08:54:17.939 2 INFO nova.compute.manager [-] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] VM Stopped (Lifecycle Event)
Oct 11 08:54:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Oct 11 08:54:17 compute-0 nova_compute[260935]: 2025-10-11 08:54:17.968 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172842.9658763, 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:17 compute-0 nova_compute[260935]: 2025-10-11 08:54:17.969 2 INFO nova.compute.manager [-] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] VM Stopped (Lifecycle Event)
Oct 11 08:54:17 compute-0 nova_compute[260935]: 2025-10-11 08:54:17.971 2 DEBUG nova.compute.manager [None req-e399e5c3-5378-4f91-b9d3-93da32675437 - - - - - -] [instance: 090243f9-46ac-42a1-b921-fa3b1974f127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:17 compute-0 nova_compute[260935]: 2025-10-11 08:54:17.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.000 2 DEBUG nova.compute.manager [None req-83286972-4e47-4a85-8a7e-efbb70d20bbd - - - - - -] [instance: 8fe0dfa7-a7fb-4f86-9659-b92bba5a1c08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.231 2 DEBUG nova.network.neutron [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Updating instance_info_cache with network_info: [{"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.265 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Releasing lock "refresh_cache-1a70666a-f421-4a09-bfa4-f1171cbe2000" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.266 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Instance network_info: |[{"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.266 2 DEBUG oslo_concurrency.lockutils [req-77ca7993-b53f-454c-8997-94fb4b118031 req-0db12ec8-248f-4752-ac46-69dd7c7e21fc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1a70666a-f421-4a09-bfa4-f1171cbe2000" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.267 2 DEBUG nova.network.neutron [req-77ca7993-b53f-454c-8997-94fb4b118031 req-0db12ec8-248f-4752-ac46-69dd7c7e21fc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Refreshing network info cache for port 56e8a82c-acef-4136-b229-8039f0d8c843 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.273 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Start _get_guest_xml network_info=[{"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.281 2 WARNING nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.287 2 DEBUG nova.virt.libvirt.host [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.288 2 DEBUG nova.virt.libvirt.host [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.299 2 DEBUG nova.virt.libvirt.host [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.300 2 DEBUG nova.virt.libvirt.host [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.301 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.301 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.302 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.303 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.303 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.304 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.304 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.305 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.305 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.306 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.306 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.307 2 DEBUG nova.virt.hardware [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.312 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:54:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:54:18 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/177491119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.860 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.896 2 DEBUG nova.storage.rbd_utils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:18 compute-0 nova_compute[260935]: 2025-10-11 08:54:18.902 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:54:19 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3052950723' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:19 compute-0 ceph-mon[74313]: pgmap v1502: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Oct 11 08:54:19 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/177491119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.443 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.447 2 DEBUG nova.virt.libvirt.vif [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1703807190',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1703807190',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1703807190',id=51,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-d42968wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',owner_user_na
me='tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:13Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=1a70666a-f421-4a09-bfa4-f1171cbe2000,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.448 2 DEBUG nova.network.os_vif_util [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.449 2 DEBUG nova.network.os_vif_util [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:5a:16,bridge_name='br-int',has_traffic_filtering=True,id=56e8a82c-acef-4136-b229-8039f0d8c843,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8a82c-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.452 2 DEBUG nova.objects.instance [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'pci_devices' on Instance uuid 1a70666a-f421-4a09-bfa4-f1171cbe2000 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.472 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:54:19 compute-0 nova_compute[260935]:   <uuid>1a70666a-f421-4a09-bfa4-f1171cbe2000</uuid>
Oct 11 08:54:19 compute-0 nova_compute[260935]:   <name>instance-00000033</name>
Oct 11 08:54:19 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:54:19 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:54:19 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1703807190</nova:name>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:54:18</nova:creationTime>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:54:19 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:54:19 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:54:19 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:54:19 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:54:19 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:54:19 compute-0 nova_compute[260935]:         <nova:user uuid="d4dce65b39be45739408ca70d672df84">tempest-ImagesOneServerNegativeTestJSON-253514738-project-member</nova:user>
Oct 11 08:54:19 compute-0 nova_compute[260935]:         <nova:project uuid="b34d40b1586348c3be3d9142dfe1770d">tempest-ImagesOneServerNegativeTestJSON-253514738</nova:project>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:54:19 compute-0 nova_compute[260935]:         <nova:port uuid="56e8a82c-acef-4136-b229-8039f0d8c843">
Oct 11 08:54:19 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:54:19 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:54:19 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <system>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <entry name="serial">1a70666a-f421-4a09-bfa4-f1171cbe2000</entry>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <entry name="uuid">1a70666a-f421-4a09-bfa4-f1171cbe2000</entry>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     </system>
Oct 11 08:54:19 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:54:19 compute-0 nova_compute[260935]:   <os>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:   </os>
Oct 11 08:54:19 compute-0 nova_compute[260935]:   <features>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:   </features>
Oct 11 08:54:19 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:54:19 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:54:19 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/1a70666a-f421-4a09-bfa4-f1171cbe2000_disk">
Oct 11 08:54:19 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       </source>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:54:19 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/1a70666a-f421-4a09-bfa4-f1171cbe2000_disk.config">
Oct 11 08:54:19 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       </source>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:54:19 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:cf:5a:16"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <target dev="tap56e8a82c-ac"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000/console.log" append="off"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <video>
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     </video>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:54:19 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:54:19 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:54:19 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:54:19 compute-0 nova_compute[260935]: </domain>
Oct 11 08:54:19 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.474 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Preparing to wait for external event network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.475 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.476 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.476 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.477 2 DEBUG nova.virt.libvirt.vif [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1703807190',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1703807190',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1703807190',id=51,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-d42968wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:13Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=1a70666a-f421-4a09-bfa4-f1171cbe2000,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.478 2 DEBUG nova.network.os_vif_util [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.479 2 DEBUG nova.network.os_vif_util [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:5a:16,bridge_name='br-int',has_traffic_filtering=True,id=56e8a82c-acef-4136-b229-8039f0d8c843,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8a82c-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.480 2 DEBUG os_vif [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:5a:16,bridge_name='br-int',has_traffic_filtering=True,id=56e8a82c-acef-4136-b229-8039f0d8c843,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8a82c-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.482 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.483 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.488 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56e8a82c-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.489 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap56e8a82c-ac, col_values=(('external_ids', {'iface-id': '56e8a82c-acef-4136-b229-8039f0d8c843', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:5a:16', 'vm-uuid': '1a70666a-f421-4a09-bfa4-f1171cbe2000'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:19 compute-0 NetworkManager[44960]: <info>  [1760172859.4935] manager: (tap56e8a82c-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.507 2 INFO os_vif [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:5a:16,bridge_name='br-int',has_traffic_filtering=True,id=56e8a82c-acef-4136-b229-8039f0d8c843,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8a82c-ac')
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.569 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.570 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.571 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No VIF found with MAC fa:16:3e:cf:5a:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.571 2 INFO nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Using config drive
Oct 11 08:54:19 compute-0 nova_compute[260935]: 2025-10-11 08:54:19.610 2 DEBUG nova.storage.rbd_utils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 08:54:20 compute-0 nova_compute[260935]: 2025-10-11 08:54:20.156 2 DEBUG nova.network.neutron [req-77ca7993-b53f-454c-8997-94fb4b118031 req-0db12ec8-248f-4752-ac46-69dd7c7e21fc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Updated VIF entry in instance network info cache for port 56e8a82c-acef-4136-b229-8039f0d8c843. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:54:20 compute-0 nova_compute[260935]: 2025-10-11 08:54:20.157 2 DEBUG nova.network.neutron [req-77ca7993-b53f-454c-8997-94fb4b118031 req-0db12ec8-248f-4752-ac46-69dd7c7e21fc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Updating instance_info_cache with network_info: [{"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:20 compute-0 nova_compute[260935]: 2025-10-11 08:54:20.183 2 DEBUG oslo_concurrency.lockutils [req-77ca7993-b53f-454c-8997-94fb4b118031 req-0db12ec8-248f-4752-ac46-69dd7c7e21fc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1a70666a-f421-4a09-bfa4-f1171cbe2000" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:54:20 compute-0 nova_compute[260935]: 2025-10-11 08:54:20.337 2 INFO nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Creating config drive at /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000/disk.config
Oct 11 08:54:20 compute-0 nova_compute[260935]: 2025-10-11 08:54:20.344 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptdf3svxi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:20 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3052950723' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:20 compute-0 nova_compute[260935]: 2025-10-11 08:54:20.500 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptdf3svxi" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:20 compute-0 nova_compute[260935]: 2025-10-11 08:54:20.539 2 DEBUG nova.storage.rbd_utils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:20 compute-0 nova_compute[260935]: 2025-10-11 08:54:20.544 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000/disk.config 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:20 compute-0 nova_compute[260935]: 2025-10-11 08:54:20.765 2 DEBUG oslo_concurrency.processutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000/disk.config 1a70666a-f421-4a09-bfa4-f1171cbe2000_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:20 compute-0 nova_compute[260935]: 2025-10-11 08:54:20.768 2 INFO nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Deleting local config drive /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000/disk.config because it was imported into RBD.
Oct 11 08:54:20 compute-0 kernel: tap56e8a82c-ac: entered promiscuous mode
Oct 11 08:54:20 compute-0 NetworkManager[44960]: <info>  [1760172860.8436] manager: (tap56e8a82c-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/210)
Oct 11 08:54:20 compute-0 ovn_controller[152945]: 2025-10-11T08:54:20Z|00441|binding|INFO|Claiming lport 56e8a82c-acef-4136-b229-8039f0d8c843 for this chassis.
Oct 11 08:54:20 compute-0 ovn_controller[152945]: 2025-10-11T08:54:20Z|00442|binding|INFO|56e8a82c-acef-4136-b229-8039f0d8c843: Claiming fa:16:3e:cf:5a:16 10.100.0.9
Oct 11 08:54:20 compute-0 nova_compute[260935]: 2025-10-11 08:54:20.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:20 compute-0 nova_compute[260935]: 2025-10-11 08:54:20.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.865 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:5a:16 10.100.0.9'], port_security=['fa:16:3e:cf:5a:16 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1a70666a-f421-4a09-bfa4-f1171cbe2000', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b07fc82-c0d2-4e42-8878-3b0706731285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be4f554b-8352-4ab3-babd-ad834c691fd3, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=56e8a82c-acef-4136-b229-8039f0d8c843) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.868 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 56e8a82c-acef-4136-b229-8039f0d8c843 in datapath a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf bound to our chassis
Oct 11 08:54:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.871 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf
Oct 11 08:54:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.888 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6da704ca-e878-477a-a334-67662a6f53e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.889 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa1a65c6f-c1 in ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:54:20 compute-0 systemd-udevd[317281]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:54:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.891 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa1a65c6f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:54:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.892 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2b520de9-bc1e-4f40-bb5b-668d2fd16247]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.894 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1e794482-7a42-4085-80ff-e7957bcb82c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:20 compute-0 systemd-machined[215705]: New machine qemu-58-instance-00000033.
Oct 11 08:54:20 compute-0 NetworkManager[44960]: <info>  [1760172860.9117] device (tap56e8a82c-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:54:20 compute-0 NetworkManager[44960]: <info>  [1760172860.9131] device (tap56e8a82c-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:54:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.911 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[a602382c-c6c8-460e-a078-8d53f7fb2462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:20 compute-0 systemd[1]: Started Virtual Machine qemu-58-instance-00000033.
Oct 11 08:54:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.946 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[320cb43d-97d3-4979-8bac-b8210bad8ff2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:20 compute-0 nova_compute[260935]: 2025-10-11 08:54:20.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:20 compute-0 ovn_controller[152945]: 2025-10-11T08:54:20Z|00443|binding|INFO|Setting lport 56e8a82c-acef-4136-b229-8039f0d8c843 ovn-installed in OVS
Oct 11 08:54:20 compute-0 ovn_controller[152945]: 2025-10-11T08:54:20Z|00444|binding|INFO|Setting lport 56e8a82c-acef-4136-b229-8039f0d8c843 up in Southbound
Oct 11 08:54:20 compute-0 nova_compute[260935]: 2025-10-11 08:54:20.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.987 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0b9e9df8-6a2f-4ead-983d-3654fe35f5f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:20 compute-0 systemd-udevd[317285]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:54:20 compute-0 NetworkManager[44960]: <info>  [1760172860.9964] manager: (tapa1a65c6f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/211)
Oct 11 08:54:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:20.996 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2057db80-e87d-4c37-92ce-b87c7c5e5e16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.058 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[243a7911-593e-40ef-85e2-33230c856f33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.062 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[10f565a8-5a09-4147-ae51-d936ff004323]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:21 compute-0 NetworkManager[44960]: <info>  [1760172861.1000] device (tapa1a65c6f-c0): carrier: link connected
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.109 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6050adfc-7bbf-46d9-9b7a-5ca44646a0dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.139 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[03962623-741c-4012-9758-19f105d1a30c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a65c6f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:05:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470580, 'reachable_time': 40085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317314, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.161 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d201d4-be6b-4a62-8bb4-19e688fba808]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:555'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 470580, 'tstamp': 470580}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317330, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.192 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7a4184a9-a708-456c-956e-d6347591fbce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a65c6f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:05:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470580, 'reachable_time': 40085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317334, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.241 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[31ea0098-2335-4690-8c5f-8da53ebdd5b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.348 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa33ae7-a25e-44dc-9583-60d97f735f01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.351 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a65c6f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.352 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.353 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1a65c6f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:21 compute-0 kernel: tapa1a65c6f-c0: entered promiscuous mode
Oct 11 08:54:21 compute-0 NetworkManager[44960]: <info>  [1760172861.3566] manager: (tapa1a65c6f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Oct 11 08:54:21 compute-0 nova_compute[260935]: 2025-10-11 08:54:21.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:21 compute-0 nova_compute[260935]: 2025-10-11 08:54:21.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.361 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1a65c6f-c0, col_values=(('external_ids', {'iface-id': '4bae176b-fbae-4a70-a041-16a7a5205899'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:21 compute-0 nova_compute[260935]: 2025-10-11 08:54:21.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:21 compute-0 ovn_controller[152945]: 2025-10-11T08:54:21Z|00445|binding|INFO|Releasing lport 4bae176b-fbae-4a70-a041-16a7a5205899 from this chassis (sb_readonly=0)
Oct 11 08:54:21 compute-0 nova_compute[260935]: 2025-10-11 08:54:21.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.399 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.401 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b16824db-481f-4155-9ef5-49f1ae0782b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.402 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:54:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:21.403 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'env', 'PROCESS_TAG=haproxy-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:54:21 compute-0 ceph-mon[74313]: pgmap v1503: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 08:54:21 compute-0 nova_compute[260935]: 2025-10-11 08:54:21.738 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172861.7371757, 1a70666a-f421-4a09-bfa4-f1171cbe2000 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:21 compute-0 nova_compute[260935]: 2025-10-11 08:54:21.739 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] VM Started (Lifecycle Event)
Oct 11 08:54:21 compute-0 nova_compute[260935]: 2025-10-11 08:54:21.764 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:21 compute-0 nova_compute[260935]: 2025-10-11 08:54:21.769 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172861.7393045, 1a70666a-f421-4a09-bfa4-f1171cbe2000 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:21 compute-0 nova_compute[260935]: 2025-10-11 08:54:21.770 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] VM Paused (Lifecycle Event)
Oct 11 08:54:21 compute-0 nova_compute[260935]: 2025-10-11 08:54:21.787 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:21 compute-0 nova_compute[260935]: 2025-10-11 08:54:21.791 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:54:21 compute-0 nova_compute[260935]: 2025-10-11 08:54:21.811 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:54:21 compute-0 podman[317390]: 2025-10-11 08:54:21.863982231 +0000 UTC m=+0.069871404 container create de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 08:54:21 compute-0 systemd[1]: Started libpod-conmon-de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64.scope.
Oct 11 08:54:21 compute-0 podman[317390]: 2025-10-11 08:54:21.833075569 +0000 UTC m=+0.038964742 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:54:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:54:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 08:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68e778c257d1db0a748ad4d230ca44b22155e290228243828b5216cbcae77472/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:54:21 compute-0 podman[317390]: 2025-10-11 08:54:21.997986312 +0000 UTC m=+0.203875515 container init de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 11 08:54:22 compute-0 podman[317390]: 2025-10-11 08:54:22.008860892 +0000 UTC m=+0.214750055 container start de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:54:22 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[317405]: [NOTICE]   (317409) : New worker (317411) forked
Oct 11 08:54:22 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[317405]: [NOTICE]   (317409) : Loading success.
Oct 11 08:54:22 compute-0 nova_compute[260935]: 2025-10-11 08:54:22.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:54:23 compute-0 ceph-mon[74313]: pgmap v1504: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 08:54:23 compute-0 sshd-session[317420]: Invalid user amran from 152.32.213.170 port 45612
Oct 11 08:54:23 compute-0 sshd-session[317420]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 08:54:23 compute-0 sshd-session[317420]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 08:54:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.657 2 DEBUG nova.compute.manager [req-932262b6-7ea7-4339-bac4-f21c185fa611 req-a54421a1-2013-49ba-b12d-616fcbcb3556 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received event network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.659 2 DEBUG oslo_concurrency.lockutils [req-932262b6-7ea7-4339-bac4-f21c185fa611 req-a54421a1-2013-49ba-b12d-616fcbcb3556 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.659 2 DEBUG oslo_concurrency.lockutils [req-932262b6-7ea7-4339-bac4-f21c185fa611 req-a54421a1-2013-49ba-b12d-616fcbcb3556 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.660 2 DEBUG oslo_concurrency.lockutils [req-932262b6-7ea7-4339-bac4-f21c185fa611 req-a54421a1-2013-49ba-b12d-616fcbcb3556 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.660 2 DEBUG nova.compute.manager [req-932262b6-7ea7-4339-bac4-f21c185fa611 req-a54421a1-2013-49ba-b12d-616fcbcb3556 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Processing event network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.661 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.673 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172864.6723206, 1a70666a-f421-4a09-bfa4-f1171cbe2000 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.674 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] VM Resumed (Lifecycle Event)
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.676 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.681 2 INFO nova.virt.libvirt.driver [-] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Instance spawned successfully.
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.682 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.700 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.704 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.715 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.716 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.717 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.717 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.718 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.719 2 DEBUG nova.virt.libvirt.driver [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.726 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.786 2 INFO nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Took 11.07 seconds to spawn the instance on the hypervisor.
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.786 2 DEBUG nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:54:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:54:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:54:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:54:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:54:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.862 2 INFO nova.compute.manager [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Took 12.04 seconds to build instance.
Oct 11 08:54:24 compute-0 nova_compute[260935]: 2025-10-11 08:54:24.882 2 DEBUG oslo_concurrency.lockutils [None req-ed21b36e-f7ae-4ec1-9c16-d79f19f57eb4 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:25 compute-0 ceph-mon[74313]: pgmap v1505: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 08:54:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 08:54:26 compute-0 sshd-session[317420]: Failed password for invalid user amran from 152.32.213.170 port 45612 ssh2
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.052 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "ceef84cf-9df6-4484-862c-624eab05f1fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.053 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.073 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.159 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.160 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.169 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.169 2 INFO nova.compute.claims [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.315 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.395 2 DEBUG nova.compute.manager [req-95c34f18-2e39-4f29-823d-b5212d6325ca req-f0b9d6fa-7011-4951-af88-5dcaf8ed4ae6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received event network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.395 2 DEBUG oslo_concurrency.lockutils [req-95c34f18-2e39-4f29-823d-b5212d6325ca req-f0b9d6fa-7011-4951-af88-5dcaf8ed4ae6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.396 2 DEBUG oslo_concurrency.lockutils [req-95c34f18-2e39-4f29-823d-b5212d6325ca req-f0b9d6fa-7011-4951-af88-5dcaf8ed4ae6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.396 2 DEBUG oslo_concurrency.lockutils [req-95c34f18-2e39-4f29-823d-b5212d6325ca req-f0b9d6fa-7011-4951-af88-5dcaf8ed4ae6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.396 2 DEBUG nova.compute.manager [req-95c34f18-2e39-4f29-823d-b5212d6325ca req-f0b9d6fa-7011-4951-af88-5dcaf8ed4ae6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] No waiting events found dispatching network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.396 2 WARNING nova.compute.manager [req-95c34f18-2e39-4f29-823d-b5212d6325ca req-f0b9d6fa-7011-4951-af88-5dcaf8ed4ae6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received unexpected event network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 for instance with vm_state active and task_state None.
Oct 11 08:54:27 compute-0 ceph-mon[74313]: pgmap v1506: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 08:54:27 compute-0 podman[317442]: 2025-10-11 08:54:27.794734652 +0000 UTC m=+0.093102336 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 11 08:54:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4045841869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.858 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.867 2 DEBUG nova.compute.provider_tree [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.890 2 DEBUG nova.scheduler.client.report [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.915 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.917 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.971 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.972 2 DEBUG nova.network.neutron [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:54:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 08:54:27 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.998 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:27.999 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.004 2 INFO nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.026 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.032 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.038 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.038 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.078 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.086 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "21a71d10-e13b-47fe-88fd-ec9597f7902e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.086 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.137 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.161 2 DEBUG nova.policy [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e7ae37ce1df34d87a006582ce3cb7d6d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce2ed1c47abf4e1889253402aa1e536f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.164 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.165 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.172 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.172 2 INFO nova.compute.claims [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.201 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.218 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.220 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.220 2 INFO nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Creating image(s)
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.250 2 DEBUG nova.storage.rbd_utils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] rbd image ceef84cf-9df6-4484-862c-624eab05f1fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.278 2 DEBUG nova.storage.rbd_utils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] rbd image ceef84cf-9df6-4484-862c-624eab05f1fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.322 2 DEBUG nova.storage.rbd_utils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] rbd image ceef84cf-9df6-4484-862c-624eab05f1fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.330 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.425 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.434 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.435 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.436 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.437 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.469 2 DEBUG nova.storage.rbd_utils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] rbd image ceef84cf-9df6-4484-862c-624eab05f1fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.472 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ceef84cf-9df6-4484-862c-624eab05f1fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:28 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4045841869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.627 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:28 compute-0 sshd-session[317420]: Received disconnect from 152.32.213.170 port 45612:11: Bye Bye [preauth]
Oct 11 08:54:28 compute-0 sshd-session[317420]: Disconnected from invalid user amran 152.32.213.170 port 45612 [preauth]
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.742 2 DEBUG nova.compute.manager [None req-e0bd14a6-d051-4980-b375-b43e3c5f4192 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.752 2 DEBUG nova.network.neutron [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Successfully created port: b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.802 2 INFO nova.compute.manager [None req-e0bd14a6-d051-4980-b375-b43e3c5f4192 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] instance snapshotting
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.835 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ceef84cf-9df6-4484-862c-624eab05f1fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:28 compute-0 nova_compute[260935]: 2025-10-11 08:54:28.913 2 DEBUG nova.storage.rbd_utils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] resizing rbd image ceef84cf-9df6-4484-862c-624eab05f1fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.031 2 WARNING nova.compute.manager [None req-e0bd14a6-d051-4980-b375-b43e3c5f4192 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Image not found during snapshot: nova.exception.ImageNotFound: Image f8549b54-e342-4157-bb2f-4d8e7cd886e5 could not be found.
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.040 2 DEBUG nova.objects.instance [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lazy-loading 'migration_context' on Instance uuid ceef84cf-9df6-4484-862c-624eab05f1fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.058 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.059 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Ensure instance console log exists: /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.060 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.060 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.061 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:29 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4186694936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.098 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.108 2 DEBUG nova.compute.provider_tree [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.128 2 DEBUG nova.scheduler.client.report [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.156 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.991s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.158 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.163 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.172 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.172 2 INFO nova.compute.claims [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.232 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.233 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.261 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.283 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.371 2 DEBUG nova.network.neutron [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Successfully updated port: b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.378 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.380 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.381 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Creating image(s)
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.411 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.446 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.483 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.487 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:29 compute-0 ceph-mon[74313]: pgmap v1507: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 08:54:29 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4186694936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.542 2 DEBUG nova.policy [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c02b9d6bdae439c9f1e49ae63c5c5e3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7a37bbdce5194d96bed20d4162e25337', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.553 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "refresh_cache-ceef84cf-9df6-4484-862c-624eab05f1fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.554 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquired lock "refresh_cache-ceef84cf-9df6-4484-862c-624eab05f1fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.554 2 DEBUG nova.network.neutron [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.582 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.631 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.633 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.634 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.634 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.669 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.674 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.731 2 DEBUG nova.compute.manager [req-5d35c0fd-bfb2-4bb0-a2c1-023917277d4b req-7fffc75b-4fd3-4f16-b812-723cc111f850 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received event network-changed-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.733 2 DEBUG nova.compute.manager [req-5d35c0fd-bfb2-4bb0-a2c1-023917277d4b req-7fffc75b-4fd3-4f16-b812-723cc111f850 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Refreshing instance network info cache due to event network-changed-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.733 2 DEBUG oslo_concurrency.lockutils [req-5d35c0fd-bfb2-4bb0-a2c1-023917277d4b req-7fffc75b-4fd3-4f16-b812-723cc111f850 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ceef84cf-9df6-4484-862c-624eab05f1fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.736 2 DEBUG nova.network.neutron [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:54:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 08:54:29 compute-0 nova_compute[260935]: 2025-10-11 08:54:29.990 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:30 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3160616652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.076 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.084 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] resizing rbd image d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.143 2 DEBUG nova.compute.provider_tree [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.203 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Successfully created port: b0464a2f-214f-420b-aa3e-70d2e3dce4ac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.209 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "1a70666a-f421-4a09-bfa4-f1171cbe2000" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.210 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.211 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.211 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.211 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.213 2 INFO nova.compute.manager [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Terminating instance
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.215 2 DEBUG nova.compute.manager [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.217 2 DEBUG nova.scheduler.client.report [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.230 2 DEBUG nova.objects.instance [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'migration_context' on Instance uuid d60a2dcd-7fb6-4bfe-8351-a38a71164f83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.247 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.247 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Ensure instance console log exists: /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.249 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.249 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.250 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.252 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.253 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.257 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.265 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.266 2 INFO nova.compute.claims [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:54:30 compute-0 kernel: tap56e8a82c-ac (unregistering): left promiscuous mode
Oct 11 08:54:30 compute-0 NetworkManager[44960]: <info>  [1760172870.2778] device (tap56e8a82c-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.326 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.327 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:54:30 compute-0 ovn_controller[152945]: 2025-10-11T08:54:30Z|00446|binding|INFO|Releasing lport 56e8a82c-acef-4136-b229-8039f0d8c843 from this chassis (sb_readonly=0)
Oct 11 08:54:30 compute-0 ovn_controller[152945]: 2025-10-11T08:54:30Z|00447|binding|INFO|Setting lport 56e8a82c-acef-4136-b229-8039f0d8c843 down in Southbound
Oct 11 08:54:30 compute-0 ovn_controller[152945]: 2025-10-11T08:54:30Z|00448|binding|INFO|Removing iface tap56e8a82c-ac ovn-installed in OVS
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.343 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:5a:16 10.100.0.9'], port_security=['fa:16:3e:cf:5a:16 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1a70666a-f421-4a09-bfa4-f1171cbe2000', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b07fc82-c0d2-4e42-8878-3b0706731285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be4f554b-8352-4ab3-babd-ad834c691fd3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=56e8a82c-acef-4136-b229-8039f0d8c843) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.346 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 56e8a82c-acef-4136-b229-8039f0d8c843 in datapath a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf unbound from our chassis
Oct 11 08:54:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.351 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.352 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:54:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.352 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c7f043-f86f-44c8-9ac6-1e5c9ef25f3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.354 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf namespace which is not needed anymore
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.370 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:30 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000033.scope: Deactivated successfully.
Oct 11 08:54:30 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000033.scope: Consumed 6.452s CPU time.
Oct 11 08:54:30 compute-0 systemd-machined[215705]: Machine qemu-58-instance-00000033 terminated.
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.460 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.461 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.462 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Creating image(s)
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.498 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:30 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3160616652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.550 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:30 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[317405]: [NOTICE]   (317409) : haproxy version is 2.8.14-c23fe91
Oct 11 08:54:30 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[317405]: [NOTICE]   (317409) : path to executable is /usr/sbin/haproxy
Oct 11 08:54:30 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[317405]: [WARNING]  (317409) : Exiting Master process...
Oct 11 08:54:30 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[317405]: [WARNING]  (317409) : Exiting Master process...
Oct 11 08:54:30 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[317405]: [ALERT]    (317409) : Current worker (317411) exited with code 143 (Terminated)
Oct 11 08:54:30 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[317405]: [WARNING]  (317409) : All workers exited. Exiting... (0)
Oct 11 08:54:30 compute-0 systemd[1]: libpod-de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64.scope: Deactivated successfully.
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.588 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:30 compute-0 podman[317887]: 2025-10-11 08:54:30.592483198 +0000 UTC m=+0.079452646 container died de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.605 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-68e778c257d1db0a748ad4d230ca44b22155e290228243828b5216cbcae77472-merged.mount: Deactivated successfully.
Oct 11 08:54:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64-userdata-shm.mount: Deactivated successfully.
Oct 11 08:54:30 compute-0 podman[317887]: 2025-10-11 08:54:30.676922366 +0000 UTC m=+0.163891814 container cleanup de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.688 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:30 compute-0 systemd[1]: libpod-conmon-de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64.scope: Deactivated successfully.
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.748 2 DEBUG nova.policy [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c02b9d6bdae439c9f1e49ae63c5c5e3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7a37bbdce5194d96bed20d4162e25337', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.762 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.765 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.767 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.767 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:30 compute-0 podman[317957]: 2025-10-11 08:54:30.772415249 +0000 UTC m=+0.058706805 container remove de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:54:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.782 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[645fde58-3a87-49b0-ab78-651f7e05feee]: (4, ('Sat Oct 11 08:54:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf (de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64)\nde40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64\nSat Oct 11 08:54:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf (de40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64)\nde40479888e2850fca297c3be14505de2621bf2639bc9ebf7a2af5f81362af64\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.787 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c11aa929-0ce0-4265-bbbd-6a944136e469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.789 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a65c6f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:30 compute-0 kernel: tapa1a65c6f-c0: left promiscuous mode
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.795 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.805 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.825 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ea46d0f1-ca14-43e8-b2c5-87c47b29eae3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.858 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c668cd0b-5992-44d0-a377-e3f0e543c1c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.859 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[12ced982-4636-45c4-9e89-4f059653d445]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.865 2 INFO nova.virt.libvirt.driver [-] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Instance destroyed successfully.
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.866 2 DEBUG nova.objects.instance [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'resources' on Instance uuid 1a70666a-f421-4a09-bfa4-f1171cbe2000 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.878 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0a5da4-3299-4d1b-bacc-9e95428db0ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470567, 'reachable_time': 44844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318000, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:30 compute-0 systemd[1]: run-netns-ovnmeta\x2da1a65c6f\x2dcd0e\x2d4ac5\x2db9de\x2d21e41a32ffbf.mount: Deactivated successfully.
Oct 11 08:54:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.884 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:54:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:30.884 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0c72e2-f9c1-4758-8adc-aa9b6c40e7ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.886 2 DEBUG nova.virt.libvirt.vif [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:54:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1703807190',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1703807190',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1703807190',id=51,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:54:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-d42968wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:54:28Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=1a70666a-f421-4a09-bfa4-f1171cbe2000,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.886 2 DEBUG nova.network.os_vif_util [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "56e8a82c-acef-4136-b229-8039f0d8c843", "address": "fa:16:3e:cf:5a:16", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56e8a82c-ac", "ovs_interfaceid": "56e8a82c-acef-4136-b229-8039f0d8c843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.887 2 DEBUG nova.network.os_vif_util [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:5a:16,bridge_name='br-int',has_traffic_filtering=True,id=56e8a82c-acef-4136-b229-8039f0d8c843,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8a82c-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.888 2 DEBUG os_vif [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:5a:16,bridge_name='br-int',has_traffic_filtering=True,id=56e8a82c-acef-4136-b229-8039f0d8c843,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8a82c-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.891 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56e8a82c-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.899 2 INFO os_vif [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:5a:16,bridge_name='br-int',has_traffic_filtering=True,id=56e8a82c-acef-4136-b229-8039f0d8c843,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56e8a82c-ac')
Oct 11 08:54:30 compute-0 nova_compute[260935]: 2025-10-11 08:54:30.987 2 DEBUG nova.network.neutron [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Updating instance_info_cache with network_info: [{"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.028 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Releasing lock "refresh_cache-ceef84cf-9df6-4484-862c-624eab05f1fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.028 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Instance network_info: |[{"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.031 2 DEBUG oslo_concurrency.lockutils [req-5d35c0fd-bfb2-4bb0-a2c1-023917277d4b req-7fffc75b-4fd3-4f16-b812-723cc111f850 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ceef84cf-9df6-4484-862c-624eab05f1fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.031 2 DEBUG nova.network.neutron [req-5d35c0fd-bfb2-4bb0-a2c1-023917277d4b req-7fffc75b-4fd3-4f16-b812-723cc111f850 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Refreshing network info cache for port b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.035 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Start _get_guest_xml network_info=[{"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.041 2 WARNING nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.049 2 DEBUG nova.virt.libvirt.host [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.050 2 DEBUG nova.virt.libvirt.host [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.054 2 DEBUG nova.virt.libvirt.host [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.055 2 DEBUG nova.virt.libvirt.host [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.055 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.055 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.056 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.056 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.056 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.056 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.056 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.057 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.057 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.057 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.057 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.058 2 DEBUG nova.virt.hardware [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.061 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.158 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:31 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1731978409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.236 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.248 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] resizing rbd image e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.293 2 DEBUG nova.compute.provider_tree [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.307 2 DEBUG nova.scheduler.client.report [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.378 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.378 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.387 2 DEBUG nova.objects.instance [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'migration_context' on Instance uuid e263661e-e9c2-4a4d-a6e5-5fc8a7353f50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.407 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.408 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Ensure instance console log exists: /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.408 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.409 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.409 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.438 2 INFO nova.virt.libvirt.driver [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Deleting instance files /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000_del
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.439 2 INFO nova.virt.libvirt.driver [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Deletion of /var/lib/nova/instances/1a70666a-f421-4a09-bfa4-f1171cbe2000_del complete
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.481 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.482 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.516 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:54:31 compute-0 ceph-mon[74313]: pgmap v1508: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 08:54:31 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1731978409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.523 2 INFO nova.compute.manager [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Took 1.31 seconds to destroy the instance on the hypervisor.
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.523 2 DEBUG oslo.service.loopingcall [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.523 2 DEBUG nova.compute.manager [-] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.524 2 DEBUG nova.network.neutron [-] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.536 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.568 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Successfully updated port: b0464a2f-214f-420b-aa3e-70d2e3dce4ac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.585 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "refresh_cache-d60a2dcd-7fb6-4bfe-8351-a38a71164f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.585 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquired lock "refresh_cache-d60a2dcd-7fb6-4bfe-8351-a38a71164f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.585 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:54:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:54:31 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3127925003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.607 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.628 2 DEBUG nova.storage.rbd_utils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] rbd image ceef84cf-9df6-4484-862c-624eab05f1fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.631 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.665 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.667 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.667 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Creating image(s)
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.704 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.738 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.763 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.768 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.815 2 DEBUG nova.policy [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c02b9d6bdae439c9f1e49ae63c5c5e3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7a37bbdce5194d96bed20d4162e25337', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.820 2 DEBUG nova.compute.manager [req-d0f9a41a-70b5-48b6-9aef-2193727d100a req-f263e311-55e3-4330-ba6d-d4e427c55af1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received event network-changed-b0464a2f-214f-420b-aa3e-70d2e3dce4ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.820 2 DEBUG nova.compute.manager [req-d0f9a41a-70b5-48b6-9aef-2193727d100a req-f263e311-55e3-4330-ba6d-d4e427c55af1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Refreshing instance network info cache due to event network-changed-b0464a2f-214f-420b-aa3e-70d2e3dce4ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.821 2 DEBUG oslo_concurrency.lockutils [req-d0f9a41a-70b5-48b6-9aef-2193727d100a req-f263e311-55e3-4330-ba6d-d4e427c55af1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d60a2dcd-7fb6-4bfe-8351-a38a71164f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.864 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.864 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.865 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.865 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.889 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.893 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.934 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.945 2 DEBUG nova.compute.manager [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received event network-vif-unplugged-56e8a82c-acef-4136-b229-8039f0d8c843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.945 2 DEBUG oslo_concurrency.lockutils [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.945 2 DEBUG oslo_concurrency.lockutils [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.946 2 DEBUG oslo_concurrency.lockutils [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.946 2 DEBUG nova.compute.manager [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] No waiting events found dispatching network-vif-unplugged-56e8a82c-acef-4136-b229-8039f0d8c843 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.946 2 DEBUG nova.compute.manager [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received event network-vif-unplugged-56e8a82c-acef-4136-b229-8039f0d8c843 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.946 2 DEBUG nova.compute.manager [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received event network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.947 2 DEBUG oslo_concurrency.lockutils [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.947 2 DEBUG oslo_concurrency.lockutils [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.947 2 DEBUG oslo_concurrency.lockutils [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.947 2 DEBUG nova.compute.manager [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] No waiting events found dispatching network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:31 compute-0 nova_compute[260935]: 2025-10-11 08:54:31.947 2 WARNING nova.compute.manager [req-54c48891-8bbd-43e1-a84e-4d8a00cadf46 req-a8ee6222-7351-4400-a145-4f0e41e83ecd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received unexpected event network-vif-plugged-56e8a82c-acef-4136-b229-8039f0d8c843 for instance with vm_state active and task_state deleting.
Oct 11 08:54:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 08:54:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:54:32 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1646259643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.085 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.088 2 DEBUG nova.virt.libvirt.vif [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1070563181',display_name='tempest-ServerAddressesTestJSON-server-1070563181',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1070563181',id=52,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ed1c47abf4e1889253402aa1e536f',ramdisk_id='',reservation_id='r-xo306n5g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1144666060',owner_user_name='tempest-ServerAddressesTestJSON-1144666060-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:28Z,user_data=None,user_id='e7ae37ce1df34d87a006582ce3cb7d6d',uuid=ceef84cf-9df6-4484-862c-624eab05f1fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.089 2 DEBUG nova.network.os_vif_util [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Converting VIF {"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.090 2 DEBUG nova.network.os_vif_util [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:a8:a7,bridge_name='br-int',has_traffic_filtering=True,id=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b,network=Network(f022782f-f41d-4a83-babf-b48f2f344e01),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cea59b-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.093 2 DEBUG nova.objects.instance [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lazy-loading 'pci_devices' on Instance uuid ceef84cf-9df6-4484-862c-624eab05f1fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.112 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:54:32 compute-0 nova_compute[260935]:   <uuid>ceef84cf-9df6-4484-862c-624eab05f1fe</uuid>
Oct 11 08:54:32 compute-0 nova_compute[260935]:   <name>instance-00000034</name>
Oct 11 08:54:32 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:54:32 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:54:32 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerAddressesTestJSON-server-1070563181</nova:name>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:54:31</nova:creationTime>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:54:32 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:54:32 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:54:32 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:54:32 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:54:32 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:54:32 compute-0 nova_compute[260935]:         <nova:user uuid="e7ae37ce1df34d87a006582ce3cb7d6d">tempest-ServerAddressesTestJSON-1144666060-project-member</nova:user>
Oct 11 08:54:32 compute-0 nova_compute[260935]:         <nova:project uuid="ce2ed1c47abf4e1889253402aa1e536f">tempest-ServerAddressesTestJSON-1144666060</nova:project>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:54:32 compute-0 nova_compute[260935]:         <nova:port uuid="b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b">
Oct 11 08:54:32 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:54:32 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:54:32 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <system>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <entry name="serial">ceef84cf-9df6-4484-862c-624eab05f1fe</entry>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <entry name="uuid">ceef84cf-9df6-4484-862c-624eab05f1fe</entry>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     </system>
Oct 11 08:54:32 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:54:32 compute-0 nova_compute[260935]:   <os>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:   </os>
Oct 11 08:54:32 compute-0 nova_compute[260935]:   <features>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:   </features>
Oct 11 08:54:32 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:54:32 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:54:32 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ceef84cf-9df6-4484-862c-624eab05f1fe_disk">
Oct 11 08:54:32 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       </source>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:54:32 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ceef84cf-9df6-4484-862c-624eab05f1fe_disk.config">
Oct 11 08:54:32 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       </source>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:54:32 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:4a:a8:a7"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <target dev="tapb6cea59b-74"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe/console.log" append="off"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <video>
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     </video>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:54:32 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:54:32 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:54:32 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:54:32 compute-0 nova_compute[260935]: </domain>
Oct 11 08:54:32 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.112 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Preparing to wait for external event network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.113 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.113 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.113 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.114 2 DEBUG nova.virt.libvirt.vif [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1070563181',display_name='tempest-ServerAddressesTestJSON-server-1070563181',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1070563181',id=52,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce2ed1c47abf4e1889253402aa1e536f',ramdisk_id='',reservation_id='r-xo306n5g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1144666060',owner_user_name='tempest-ServerAddressesTestJSON-1144666060-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:28Z,user_data=None,user_id='e7ae37ce1df34d87a006582ce3cb7d6d',uuid=ceef84cf-9df6-4484-862c-624eab05f1fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.115 2 DEBUG nova.network.os_vif_util [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Converting VIF {"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.116 2 DEBUG nova.network.os_vif_util [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:a8:a7,bridge_name='br-int',has_traffic_filtering=True,id=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b,network=Network(f022782f-f41d-4a83-babf-b48f2f344e01),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cea59b-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.116 2 DEBUG os_vif [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:a8:a7,bridge_name='br-int',has_traffic_filtering=True,id=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b,network=Network(f022782f-f41d-4a83-babf-b48f2f344e01),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cea59b-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.118 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.119 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.123 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6cea59b-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.124 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb6cea59b-74, col_values=(('external_ids', {'iface-id': 'b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:a8:a7', 'vm-uuid': 'ceef84cf-9df6-4484-862c-624eab05f1fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:32 compute-0 NetworkManager[44960]: <info>  [1760172872.1268] manager: (tapb6cea59b-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.135 2 INFO os_vif [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:a8:a7,bridge_name='br-int',has_traffic_filtering=True,id=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b,network=Network(f022782f-f41d-4a83-babf-b48f2f344e01),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cea59b-74')
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.170 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.208 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Successfully created port: 5a02bb17-5d97-4eef-a91d-245b70cc5e1b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.272 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] resizing rbd image 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.313 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.314 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.314 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] No VIF found with MAC fa:16:3e:4a:a8:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.315 2 INFO nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Using config drive
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.338 2 DEBUG nova.storage.rbd_utils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] rbd image ceef84cf-9df6-4484-862c-624eab05f1fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.411 2 DEBUG nova.objects.instance [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'migration_context' on Instance uuid 21a71d10-e13b-47fe-88fd-ec9597f7902e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.434 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.435 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Ensure instance console log exists: /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.435 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.435 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.436 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:32 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3127925003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:32 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1646259643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.694 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Successfully created port: 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.719 2 DEBUG nova.network.neutron [-] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.740 2 INFO nova.compute.manager [-] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Took 1.22 seconds to deallocate network for instance.
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.794 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.795 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.871 2 DEBUG nova.network.neutron [req-5d35c0fd-bfb2-4bb0-a2c1-023917277d4b req-7fffc75b-4fd3-4f16-b812-723cc111f850 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Updated VIF entry in instance network info cache for port b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.872 2 DEBUG nova.network.neutron [req-5d35c0fd-bfb2-4bb0-a2c1-023917277d4b req-7fffc75b-4fd3-4f16-b812-723cc111f850 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Updating instance_info_cache with network_info: [{"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.886 2 INFO nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Creating config drive at /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe/disk.config
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.894 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3616h6tw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:32 compute-0 nova_compute[260935]: 2025-10-11 08:54:32.955 2 DEBUG oslo_concurrency.lockutils [req-5d35c0fd-bfb2-4bb0-a2c1-023917277d4b req-7fffc75b-4fd3-4f16-b812-723cc111f850 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ceef84cf-9df6-4484-862c-624eab05f1fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.017 2 DEBUG oslo_concurrency.processutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.071 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3616h6tw" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.076 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Updating instance_info_cache with network_info: [{"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.113 2 DEBUG nova.storage.rbd_utils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] rbd image ceef84cf-9df6-4484-862c-624eab05f1fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.118 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe/disk.config ceef84cf-9df6-4484-862c-624eab05f1fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.184 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Releasing lock "refresh_cache-d60a2dcd-7fb6-4bfe-8351-a38a71164f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.185 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Instance network_info: |[{"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.187 2 DEBUG oslo_concurrency.lockutils [req-d0f9a41a-70b5-48b6-9aef-2193727d100a req-f263e311-55e3-4330-ba6d-d4e427c55af1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d60a2dcd-7fb6-4bfe-8351-a38a71164f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.187 2 DEBUG nova.network.neutron [req-d0f9a41a-70b5-48b6-9aef-2193727d100a req-f263e311-55e3-4330-ba6d-d4e427c55af1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Refreshing network info cache for port b0464a2f-214f-420b-aa3e-70d2e3dce4ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.193 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Start _get_guest_xml network_info=[{"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.200 2 WARNING nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.206 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.207 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.221 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.222 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.222 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.223 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.224 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.224 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.225 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.225 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.226 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.226 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.226 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.227 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.227 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.228 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.234 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.314 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Successfully updated port: 5a02bb17-5d97-4eef-a91d-245b70cc5e1b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.319 2 DEBUG oslo_concurrency.processutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe/disk.config ceef84cf-9df6-4484-862c-624eab05f1fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.319 2 INFO nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Deleting local config drive /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe/disk.config because it was imported into RBD.
Oct 11 08:54:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.344 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "refresh_cache-e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.345 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquired lock "refresh_cache-e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.345 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:54:33 compute-0 kernel: tapb6cea59b-74: entered promiscuous mode
Oct 11 08:54:33 compute-0 NetworkManager[44960]: <info>  [1760172873.4145] manager: (tapb6cea59b-74): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Oct 11 08:54:33 compute-0 ovn_controller[152945]: 2025-10-11T08:54:33Z|00449|binding|INFO|Claiming lport b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b for this chassis.
Oct 11 08:54:33 compute-0 ovn_controller[152945]: 2025-10-11T08:54:33Z|00450|binding|INFO|b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b: Claiming fa:16:3e:4a:a8:a7 10.100.0.14
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.428 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:a8:a7 10.100.0.14'], port_security=['fa:16:3e:4a:a8:a7 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ceef84cf-9df6-4484-862c-624eab05f1fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f022782f-f41d-4a83-babf-b48f2f344e01', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ed1c47abf4e1889253402aa1e536f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4c233320-8183-4c2d-a5ba-21893adcb132', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc533fa3-dc44-4446-b57b-1e5f53ba099c, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.431 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b in datapath f022782f-f41d-4a83-babf-b48f2f344e01 bound to our chassis
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.434 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f022782f-f41d-4a83-babf-b48f2f344e01
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.453 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[14f4a5b5-2211-4a4c-be8a-d6ee0fc4565f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.455 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf022782f-f1 in ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.459 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf022782f-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.459 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[61bfaccd-4fd9-43c0-a507-273b7b0652ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.460 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[523354d0-9f00-4fe8-877f-c842c2ed814f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:33 compute-0 systemd-machined[215705]: New machine qemu-59-instance-00000034.
Oct 11 08:54:33 compute-0 systemd[1]: Started Virtual Machine qemu-59-instance-00000034.
Oct 11 08:54:33 compute-0 systemd-udevd[318476]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:54:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:33 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/507219709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.496 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ce63b4-4cad-4600-878e-ca2fead4c209]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:33 compute-0 NetworkManager[44960]: <info>  [1760172873.5048] device (tapb6cea59b-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:54:33 compute-0 NetworkManager[44960]: <info>  [1760172873.5061] device (tapb6cea59b-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.531 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[61d2a890-2e91-4c18-b047-5852e42d03da]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:33 compute-0 ceph-mon[74313]: pgmap v1509: 321 pgs: 321 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 08:54:33 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/507219709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.541 2 DEBUG oslo_concurrency.processutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:33 compute-0 ovn_controller[152945]: 2025-10-11T08:54:33Z|00451|binding|INFO|Setting lport b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b ovn-installed in OVS
Oct 11 08:54:33 compute-0 ovn_controller[152945]: 2025-10-11T08:54:33Z|00452|binding|INFO|Setting lport b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b up in Southbound
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.558 2 DEBUG nova.compute.provider_tree [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.575 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.575 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b8305f1a-9ad7-4987-9959-151cf8634183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:33 compute-0 NetworkManager[44960]: <info>  [1760172873.5819] manager: (tapf022782f-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/215)
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.582 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8fe602-4d0e-47b5-b8a3-8b217f3b939e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.590 2 DEBUG nova.scheduler.client.report [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.622 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.630 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4f0016a6-a336-4f25-b171-29fb2cc4a96b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.634 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[49abc609-27bb-470a-9aff-e564b18dd647]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.656 2 INFO nova.scheduler.client.report [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Deleted allocations for instance 1a70666a-f421-4a09-bfa4-f1171cbe2000
Oct 11 08:54:33 compute-0 NetworkManager[44960]: <info>  [1760172873.6681] device (tapf022782f-f0): carrier: link connected
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.676 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[97c846b4-cf36-490c-bdaa-d2a86de6a5a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.701 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[533b2419-6ec2-4a0d-93b6-c35ae12a845c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf022782f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:5c:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471837, 'reachable_time': 20507, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318508, 'error': None, 'target': 'ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.728 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[80878071-976c-4a66-9a23-a193e419aa05]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:5cfe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471837, 'tstamp': 471837}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318509, 'error': None, 'target': 'ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.749 2 DEBUG oslo_concurrency.lockutils [None req-65afeaf2-f404-4ffc-a661-16ab532bb966 d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1a70666a-f421-4a09-bfa4-f1171cbe2000" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.754 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5bed711-5384-4e1b-9562-402df5800808]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf022782f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:5c:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471837, 'reachable_time': 20507, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318517, 'error': None, 'target': 'ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.813 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a7533e93-4488-429c-b3bd-1966a7ffc012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:54:33 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3829389537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.887 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.653s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.900 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[064a419f-051e-4e80-b29a-ed14727384f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.904 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf022782f-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.904 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.905 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf022782f-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:33 compute-0 kernel: tapf022782f-f0: entered promiscuous mode
Oct 11 08:54:33 compute-0 NetworkManager[44960]: <info>  [1760172873.9090] manager: (tapf022782f-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/216)
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.913 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf022782f-f0, col_values=(('external_ids', {'iface-id': '1b264af0-5de7-4f41-b533-8be3fcb7e621'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:33 compute-0 ovn_controller[152945]: 2025-10-11T08:54:33Z|00453|binding|INFO|Releasing lport 1b264af0-5de7-4f41-b533-8be3fcb7e621 from this chassis (sb_readonly=0)
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.954 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.955 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f022782f-f41d-4a83-babf-b48f2f344e01.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f022782f-f41d-4a83-babf-b48f2f344e01.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.957 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[13a09d7e-7dfc-4414-845e-14939b6b9480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.958 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-f022782f-f41d-4a83-babf-b48f2f344e01
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/f022782f-f41d-4a83-babf-b48f2f344e01.pid.haproxy
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID f022782f-f41d-4a83-babf-b48f2f344e01
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:54:33 compute-0 nova_compute[260935]: 2025-10-11 08:54:33.961 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:33.961 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01', 'env', 'PROCESS_TAG=haproxy-f022782f-f41d-4a83-babf-b48f2f344e01', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f022782f-f41d-4a83-babf-b48f2f344e01.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:54:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 226 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 208 op/s
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.030 2 DEBUG nova.compute.manager [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Received event network-vif-deleted-56e8a82c-acef-4136-b229-8039f0d8c843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.030 2 DEBUG nova.compute.manager [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Received event network-changed-5a02bb17-5d97-4eef-a91d-245b70cc5e1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.031 2 DEBUG nova.compute.manager [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Refreshing instance network info cache due to event network-changed-5a02bb17-5d97-4eef-a91d-245b70cc5e1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.031 2 DEBUG oslo_concurrency.lockutils [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.070 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Successfully updated port: 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.087 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "refresh_cache-21a71d10-e13b-47fe-88fd-ec9597f7902e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.087 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquired lock "refresh_cache-21a71d10-e13b-47fe-88fd-ec9597f7902e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.087 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.233 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:54:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:54:34 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2095302234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.442 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.446 2 DEBUG nova.virt.libvirt.vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-1',id=53,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-Lis
tServersNegativeTestJSON-567006104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:29Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=d60a2dcd-7fb6-4bfe-8351-a38a71164f83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.447 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.449 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:af:82,bridge_name='br-int',has_traffic_filtering=True,id=b0464a2f-214f-420b-aa3e-70d2e3dce4ac,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0464a2f-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.451 2 DEBUG nova.objects.instance [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'pci_devices' on Instance uuid d60a2dcd-7fb6-4bfe-8351-a38a71164f83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:34 compute-0 podman[318624]: 2025-10-11 08:54:34.454845841 +0000 UTC m=+0.087598479 container create f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.465 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172874.4638848, ceef84cf-9df6-4484-862c-624eab05f1fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.465 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] VM Started (Lifecycle Event)
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.472 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:54:34 compute-0 nova_compute[260935]:   <uuid>d60a2dcd-7fb6-4bfe-8351-a38a71164f83</uuid>
Oct 11 08:54:34 compute-0 nova_compute[260935]:   <name>instance-00000035</name>
Oct 11 08:54:34 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:54:34 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:54:34 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <nova:name>tempest-ListServersNegativeTestJSON-server-123447316-1</nova:name>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:54:33</nova:creationTime>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:54:34 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:54:34 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:54:34 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:54:34 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:54:34 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:54:34 compute-0 nova_compute[260935]:         <nova:user uuid="5c02b9d6bdae439c9f1e49ae63c5c5e3">tempest-ListServersNegativeTestJSON-567006104-project-member</nova:user>
Oct 11 08:54:34 compute-0 nova_compute[260935]:         <nova:project uuid="7a37bbdce5194d96bed20d4162e25337">tempest-ListServersNegativeTestJSON-567006104</nova:project>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:54:34 compute-0 nova_compute[260935]:         <nova:port uuid="b0464a2f-214f-420b-aa3e-70d2e3dce4ac">
Oct 11 08:54:34 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:54:34 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:54:34 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <system>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <entry name="serial">d60a2dcd-7fb6-4bfe-8351-a38a71164f83</entry>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <entry name="uuid">d60a2dcd-7fb6-4bfe-8351-a38a71164f83</entry>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     </system>
Oct 11 08:54:34 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:54:34 compute-0 nova_compute[260935]:   <os>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:   </os>
Oct 11 08:54:34 compute-0 nova_compute[260935]:   <features>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:   </features>
Oct 11 08:54:34 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:54:34 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:54:34 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk">
Oct 11 08:54:34 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       </source>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:54:34 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk.config">
Oct 11 08:54:34 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       </source>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:54:34 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:9b:af:82"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <target dev="tapb0464a2f-21"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83/console.log" append="off"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <video>
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     </video>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:54:34 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:54:34 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:54:34 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:54:34 compute-0 nova_compute[260935]: </domain>
Oct 11 08:54:34 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.473 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Preparing to wait for external event network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.473 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.474 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.474 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.476 2 DEBUG nova.virt.libvirt.vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-1',id=53,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-ListServersNegativeTestJSON-567006104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:29Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=d60a2dcd-7fb6-4bfe-8351-a38a71164f83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.476 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.477 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:af:82,bridge_name='br-int',has_traffic_filtering=True,id=b0464a2f-214f-420b-aa3e-70d2e3dce4ac,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0464a2f-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.478 2 DEBUG os_vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:af:82,bridge_name='br-int',has_traffic_filtering=True,id=b0464a2f-214f-420b-aa3e-70d2e3dce4ac,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0464a2f-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.480 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.480 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.488 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Updating instance_info_cache with network_info: [{"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.490 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:34 compute-0 podman[318624]: 2025-10-11 08:54:34.400972415 +0000 UTC m=+0.033725123 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.493 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0464a2f-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.493 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb0464a2f-21, col_values=(('external_ids', {'iface-id': 'b0464a2f-214f-420b-aa3e-70d2e3dce4ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:af:82', 'vm-uuid': 'd60a2dcd-7fb6-4bfe-8351-a38a71164f83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:34 compute-0 NetworkManager[44960]: <info>  [1760172874.4962] manager: (tapb0464a2f-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:34 compute-0 systemd[1]: Started libpod-conmon-f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991.scope.
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.506 2 INFO os_vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:af:82,bridge_name='br-int',has_traffic_filtering=True,id=b0464a2f-214f-420b-aa3e-70d2e3dce4ac,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0464a2f-21')
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.511 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Releasing lock "refresh_cache-e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.511 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Instance network_info: |[{"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.513 2 DEBUG oslo_concurrency.lockutils [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.514 2 DEBUG nova.network.neutron [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Refreshing network info cache for port 5a02bb17-5d97-4eef-a91d-245b70cc5e1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.520 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Start _get_guest_xml network_info=[{"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.529 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172874.4642432, ceef84cf-9df6-4484-862c-624eab05f1fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.529 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] VM Paused (Lifecycle Event)
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.537 2 DEBUG nova.compute.manager [req-47b386dc-515d-41f2-9145-a7df00467aef req-e6eba40c-8030-45df-8420-9603a2619002 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Received event network-changed-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.537 2 DEBUG nova.compute.manager [req-47b386dc-515d-41f2-9145-a7df00467aef req-e6eba40c-8030-45df-8420-9603a2619002 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Refreshing instance network info cache due to event network-changed-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.538 2 DEBUG oslo_concurrency.lockutils [req-47b386dc-515d-41f2-9145-a7df00467aef req-e6eba40c-8030-45df-8420-9603a2619002 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-21a71d10-e13b-47fe-88fd-ec9597f7902e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2265d21a76f95933280395fdeb37d947104c519b3836bf0feabc7e53e6859577/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.553 2 WARNING nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:54:34 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3829389537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:34 compute-0 ceph-mon[74313]: pgmap v1510: 321 pgs: 321 active+clean; 226 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 208 op/s
Oct 11 08:54:34 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2095302234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.560 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.561 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:54:34 compute-0 podman[318637]: 2025-10-11 08:54:34.567150544 +0000 UTC m=+0.072354745 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.567 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.568 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.568 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.568 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.569 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.569 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.569 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.570 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.570 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.570 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.571 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.571 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.571 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.571 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.576 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:34 compute-0 podman[318624]: 2025-10-11 08:54:34.578030274 +0000 UTC m=+0.210782932 container init f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:54:34 compute-0 podman[318624]: 2025-10-11 08:54:34.585487587 +0000 UTC m=+0.218240225 container start f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 08:54:34 compute-0 neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01[318651]: [NOTICE]   (318665) : New worker (318668) forked
Oct 11 08:54:34 compute-0 neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01[318651]: [NOTICE]   (318665) : Loading success.
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.621 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.635 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.660 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.662 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.662 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.663 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No VIF found with MAC fa:16:3e:9b:af:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.663 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Using config drive
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.688 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.706 2 DEBUG nova.network.neutron [req-d0f9a41a-70b5-48b6-9aef-2193727d100a req-f263e311-55e3-4330-ba6d-d4e427c55af1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Updated VIF entry in instance network info cache for port b0464a2f-214f-420b-aa3e-70d2e3dce4ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.707 2 DEBUG nova.network.neutron [req-d0f9a41a-70b5-48b6-9aef-2193727d100a req-f263e311-55e3-4330-ba6d-d4e427c55af1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Updating instance_info_cache with network_info: [{"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:34 compute-0 nova_compute[260935]: 2025-10-11 08:54:34.757 2 DEBUG oslo_concurrency.lockutils [req-d0f9a41a-70b5-48b6-9aef-2193727d100a req-f263e311-55e3-4330-ba6d-d4e427c55af1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d60a2dcd-7fb6-4bfe-8351-a38a71164f83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:54:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:54:35 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4038275198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.023 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.059 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.066 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.114 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Creating config drive at /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83/disk.config
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.123 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf26us83j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.180 2 DEBUG nova.network.neutron [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Updating instance_info_cache with network_info: [{"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.217 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Releasing lock "refresh_cache-21a71d10-e13b-47fe-88fd-ec9597f7902e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.218 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Instance network_info: |[{"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.220 2 DEBUG oslo_concurrency.lockutils [req-47b386dc-515d-41f2-9145-a7df00467aef req-e6eba40c-8030-45df-8420-9603a2619002 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-21a71d10-e13b-47fe-88fd-ec9597f7902e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.222 2 DEBUG nova.network.neutron [req-47b386dc-515d-41f2-9145-a7df00467aef req-e6eba40c-8030-45df-8420-9603a2619002 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Refreshing network info cache for port 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.230 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Start _get_guest_xml network_info=[{"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.239 2 WARNING nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.245 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.246 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.255 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.256 2 DEBUG nova.virt.libvirt.host [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.257 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.257 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.259 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.259 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.260 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.260 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.261 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.261 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.262 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.262 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.263 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.264 2 DEBUG nova.virt.hardware [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.270 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.320 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf26us83j" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.365 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.370 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83/disk.config d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:54:35 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2071018622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.523 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.527 2 DEBUG nova.virt.libvirt.vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-2',id=54,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-ListServersNegativeTestJSON-567006104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:30Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=e263661e-e9c2-4a4d-a6e5-5fc8a7353f50,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.528 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.530 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:7f:c3,bridge_name='br-int',has_traffic_filtering=True,id=5a02bb17-5d97-4eef-a91d-245b70cc5e1b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a02bb17-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.532 2 DEBUG nova.objects.instance [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'pci_devices' on Instance uuid e263661e-e9c2-4a4d-a6e5-5fc8a7353f50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.559 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:54:35 compute-0 nova_compute[260935]:   <uuid>e263661e-e9c2-4a4d-a6e5-5fc8a7353f50</uuid>
Oct 11 08:54:35 compute-0 nova_compute[260935]:   <name>instance-00000036</name>
Oct 11 08:54:35 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:54:35 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:54:35 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <nova:name>tempest-ListServersNegativeTestJSON-server-123447316-2</nova:name>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:54:34</nova:creationTime>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:54:35 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:54:35 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:54:35 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:54:35 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:54:35 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:54:35 compute-0 nova_compute[260935]:         <nova:user uuid="5c02b9d6bdae439c9f1e49ae63c5c5e3">tempest-ListServersNegativeTestJSON-567006104-project-member</nova:user>
Oct 11 08:54:35 compute-0 nova_compute[260935]:         <nova:project uuid="7a37bbdce5194d96bed20d4162e25337">tempest-ListServersNegativeTestJSON-567006104</nova:project>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:54:35 compute-0 nova_compute[260935]:         <nova:port uuid="5a02bb17-5d97-4eef-a91d-245b70cc5e1b">
Oct 11 08:54:35 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:54:35 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:54:35 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <system>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <entry name="serial">e263661e-e9c2-4a4d-a6e5-5fc8a7353f50</entry>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <entry name="uuid">e263661e-e9c2-4a4d-a6e5-5fc8a7353f50</entry>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     </system>
Oct 11 08:54:35 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:54:35 compute-0 nova_compute[260935]:   <os>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:   </os>
Oct 11 08:54:35 compute-0 nova_compute[260935]:   <features>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:   </features>
Oct 11 08:54:35 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:54:35 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:54:35 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk">
Oct 11 08:54:35 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       </source>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:54:35 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk.config">
Oct 11 08:54:35 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       </source>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:54:35 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:2c:7f:c3"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <target dev="tap5a02bb17-5d"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50/console.log" append="off"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <video>
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     </video>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:54:35 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:54:35 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:54:35 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:54:35 compute-0 nova_compute[260935]: </domain>
Oct 11 08:54:35 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.560 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Preparing to wait for external event network-vif-plugged-5a02bb17-5d97-4eef-a91d-245b70cc5e1b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.560 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.561 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.561 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:35 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4038275198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:35 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2071018622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.562 2 DEBUG nova.virt.libvirt.vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-2',id=54,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-ListServersNegativeTestJSON-567006104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:30Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=e263661e-e9c2-4a4d-a6e5-5fc8a7353f50,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.563 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.564 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:7f:c3,bridge_name='br-int',has_traffic_filtering=True,id=5a02bb17-5d97-4eef-a91d-245b70cc5e1b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a02bb17-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.568 2 DEBUG os_vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:7f:c3,bridge_name='br-int',has_traffic_filtering=True,id=5a02bb17-5d97-4eef-a91d-245b70cc5e1b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a02bb17-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.569 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.570 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.576 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a02bb17-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.576 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a02bb17-5d, col_values=(('external_ids', {'iface-id': '5a02bb17-5d97-4eef-a91d-245b70cc5e1b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:7f:c3', 'vm-uuid': 'e263661e-e9c2-4a4d-a6e5-5fc8a7353f50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:35 compute-0 NetworkManager[44960]: <info>  [1760172875.5804] manager: (tap5a02bb17-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.594 2 INFO os_vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:7f:c3,bridge_name='br-int',has_traffic_filtering=True,id=5a02bb17-5d97-4eef-a91d-245b70cc5e1b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a02bb17-5d')
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.638 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83/disk.config d60a2dcd-7fb6-4bfe-8351-a38a71164f83_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.639 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Deleting local config drive /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83/disk.config because it was imported into RBD.
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.678 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.678 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.679 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No VIF found with MAC fa:16:3e:2c:7f:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.680 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Using config drive
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.718 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:35 compute-0 NetworkManager[44960]: <info>  [1760172875.7254] manager: (tapb0464a2f-21): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Oct 11 08:54:35 compute-0 systemd-udevd[318497]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:54:35 compute-0 kernel: tapb0464a2f-21: entered promiscuous mode
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:35 compute-0 ovn_controller[152945]: 2025-10-11T08:54:35Z|00454|binding|INFO|Claiming lport b0464a2f-214f-420b-aa3e-70d2e3dce4ac for this chassis.
Oct 11 08:54:35 compute-0 ovn_controller[152945]: 2025-10-11T08:54:35Z|00455|binding|INFO|b0464a2f-214f-420b-aa3e-70d2e3dce4ac: Claiming fa:16:3e:9b:af:82 10.100.0.4
Oct 11 08:54:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.753 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:af:82 10.100.0.4'], port_security=['fa:16:3e:9b:af:82 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd60a2dcd-7fb6-4bfe-8351-a38a71164f83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72464893-0f19-40a9-84d1-392a298e50b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7a37bbdce5194d96bed20d4162e25337', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5c4bd12-f795-4994-bb5e-cd2cc665fe9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dc3e696-a953-4227-9499-d68d54f25c77, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b0464a2f-214f-420b-aa3e-70d2e3dce4ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.755 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b0464a2f-214f-420b-aa3e-70d2e3dce4ac in datapath 72464893-0f19-40a9-84d1-392a298e50b9 bound to our chassis
Oct 11 08:54:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.758 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72464893-0f19-40a9-84d1-392a298e50b9
Oct 11 08:54:35 compute-0 NetworkManager[44960]: <info>  [1760172875.7691] device (tapb0464a2f-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:54:35 compute-0 NetworkManager[44960]: <info>  [1760172875.7717] device (tapb0464a2f-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:54:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:54:35 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1971376858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.779 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[da732a24-8b05-4448-9778-bd06734a4f2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.780 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap72464893-01 in ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:54:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.783 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap72464893-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:54:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.783 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[58a08904-2ce9-483a-adcc-0e1ac3db1593]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.784 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed45eb2-5f93-469c-9214-67a0e1de52f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:35 compute-0 systemd-machined[215705]: New machine qemu-60-instance-00000035.
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.807 2 DEBUG nova.network.neutron [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Updated VIF entry in instance network info cache for port 5a02bb17-5d97-4eef-a91d-245b70cc5e1b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.808 2 DEBUG nova.network.neutron [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Updating instance_info_cache with network_info: [{"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.814 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[685397cb-76f1-4ae3-9dea-5e7ca89ba0f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.815 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:35 compute-0 systemd[1]: Started Virtual Machine qemu-60-instance-00000035.
Oct 11 08:54:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.852 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1034fe-5017-4a49-8c7f-83407159b3b4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.862 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.879 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:35 compute-0 ovn_controller[152945]: 2025-10-11T08:54:35Z|00456|binding|INFO|Setting lport b0464a2f-214f-420b-aa3e-70d2e3dce4ac ovn-installed in OVS
Oct 11 08:54:35 compute-0 ovn_controller[152945]: 2025-10-11T08:54:35Z|00457|binding|INFO|Setting lport b0464a2f-214f-420b-aa3e-70d2e3dce4ac up in Southbound
Oct 11 08:54:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.891 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb80c43-fd3b-4e81-85e6-5178c30c5931]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.899 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c7084681-c8ad-46fb-9cbd-a3c8f85c21fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:35 compute-0 NetworkManager[44960]: <info>  [1760172875.9008] manager: (tap72464893-00): new Veth device (/org/freedesktop/NetworkManager/Devices/220)
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.944 2 DEBUG oslo_concurrency.lockutils [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.945 2 DEBUG nova.compute.manager [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received event network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.946 2 DEBUG oslo_concurrency.lockutils [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.946 2 DEBUG oslo_concurrency.lockutils [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.946 2 DEBUG oslo_concurrency.lockutils [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.946 2 DEBUG nova.compute.manager [req-72fdb077-1185-4aed-8ee7-a56482e76d58 req-62e8f989-09b2-486c-b3f9-27662ba47a66 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Processing event network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.947 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:54:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.949 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cc134880-9d74-4a3d-ae9d-37952d6e7bf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:35.953 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[28c4261c-52be-4826-8100-bcce27cdf0b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.954 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172875.9537165, ceef84cf-9df6-4484-862c-624eab05f1fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.954 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] VM Resumed (Lifecycle Event)
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.977 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:54:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 226 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 198 op/s
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.987 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:35 compute-0 NetworkManager[44960]: <info>  [1760172875.9951] device (tap72464893-00): carrier: link connected
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.997 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.999 2 INFO nova.virt.libvirt.driver [-] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Instance spawned successfully.
Oct 11 08:54:35 compute-0 nova_compute[260935]: 2025-10-11 08:54:35.999 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.005 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3327f642-9e2c-4ad9-9f89-d23603b721f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.024 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[02fdde35-9038-4036-8d49-950b77c7805f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72464893-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:78:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472069, 'reachable_time': 41754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318893, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.040 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.055 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.056 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.057 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.057 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.058 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.058 2 DEBUG nova.virt.libvirt.driver [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.061 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2f473e2f-34ad-4b5a-9cdc-d98365fca672]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:784f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472069, 'tstamp': 472069}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318896, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.088 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[83421c29-955b-4c01-b130-71add83b809c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72464893-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:78:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472069, 'reachable_time': 41754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318914, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.129 2 INFO nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Took 7.91 seconds to spawn the instance on the hypervisor.
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.129 2 DEBUG nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.137 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[473aef8e-1a90-49f4-ac6b-610da6812a7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.140 2 DEBUG nova.compute.manager [req-78e56137-1959-458c-b2c4-01f4e313f31a req-ba3f7f2a-1a98-4c87-8f69-35f2e89fd038 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received event network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.141 2 DEBUG oslo_concurrency.lockutils [req-78e56137-1959-458c-b2c4-01f4e313f31a req-ba3f7f2a-1a98-4c87-8f69-35f2e89fd038 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.142 2 DEBUG oslo_concurrency.lockutils [req-78e56137-1959-458c-b2c4-01f4e313f31a req-ba3f7f2a-1a98-4c87-8f69-35f2e89fd038 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.142 2 DEBUG oslo_concurrency.lockutils [req-78e56137-1959-458c-b2c4-01f4e313f31a req-ba3f7f2a-1a98-4c87-8f69-35f2e89fd038 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.142 2 DEBUG nova.compute.manager [req-78e56137-1959-458c-b2c4-01f4e313f31a req-ba3f7f2a-1a98-4c87-8f69-35f2e89fd038 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] No waiting events found dispatching network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.143 2 WARNING nova.compute.manager [req-78e56137-1959-458c-b2c4-01f4e313f31a req-ba3f7f2a-1a98-4c87-8f69-35f2e89fd038 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received unexpected event network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b for instance with vm_state building and task_state spawning.
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.206 2 INFO nova.compute.manager [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Took 9.09 seconds to build instance.
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.237 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[84a1cf93-8498-4c97-b7ab-acb985182aa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.239 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72464893-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.240 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.241 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72464893-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:36 compute-0 NetworkManager[44960]: <info>  [1760172876.2446] manager: (tap72464893-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Oct 11 08:54:36 compute-0 kernel: tap72464893-00: entered promiscuous mode
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.253 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72464893-00, col_values=(('external_ids', {'iface-id': 'fba8f4e1-8635-4527-85f3-29ce2a1033b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:36 compute-0 ovn_controller[152945]: 2025-10-11T08:54:36Z|00458|binding|INFO|Releasing lport fba8f4e1-8635-4527-85f3-29ce2a1033b5 from this chassis (sb_readonly=0)
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.269 2 DEBUG oslo_concurrency.lockutils [None req-8c9ed641-f7d0-4508-bc10-5d8fe1991ba8 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.291 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/72464893-0f19-40a9-84d1-392a298e50b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/72464893-0f19-40a9-84d1-392a298e50b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.299 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ac15d768-3b93-427a-b249-a552f8e0a1d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.300 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-72464893-0f19-40a9-84d1-392a298e50b9
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/72464893-0f19-40a9-84d1-392a298e50b9.pid.haproxy
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 72464893-0f19-40a9-84d1-392a298e50b9
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.302 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Creating config drive at /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50/disk.config
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.304 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'env', 'PROCESS_TAG=haproxy-72464893-0f19-40a9-84d1-392a298e50b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/72464893-0f19-40a9-84d1-392a298e50b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.311 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpet0fhntn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:54:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2465400486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.375 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.378 2 DEBUG nova.virt.libvirt.vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-3',id=55,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-ListServersNegativeTestJSON-567006104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:31Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=21a71d10-e13b-47fe-88fd-ec9597f7902e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.379 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.380 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:da:34,bridge_name='br-int',has_traffic_filtering=True,id=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70c55a7a-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.382 2 DEBUG nova.objects.instance [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'pci_devices' on Instance uuid 21a71d10-e13b-47fe-88fd-ec9597f7902e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.401 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:54:36 compute-0 nova_compute[260935]:   <uuid>21a71d10-e13b-47fe-88fd-ec9597f7902e</uuid>
Oct 11 08:54:36 compute-0 nova_compute[260935]:   <name>instance-00000037</name>
Oct 11 08:54:36 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:54:36 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:54:36 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <nova:name>tempest-ListServersNegativeTestJSON-server-123447316-3</nova:name>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:54:35</nova:creationTime>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:54:36 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:54:36 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:54:36 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:54:36 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:54:36 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:54:36 compute-0 nova_compute[260935]:         <nova:user uuid="5c02b9d6bdae439c9f1e49ae63c5c5e3">tempest-ListServersNegativeTestJSON-567006104-project-member</nova:user>
Oct 11 08:54:36 compute-0 nova_compute[260935]:         <nova:project uuid="7a37bbdce5194d96bed20d4162e25337">tempest-ListServersNegativeTestJSON-567006104</nova:project>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:54:36 compute-0 nova_compute[260935]:         <nova:port uuid="70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b">
Oct 11 08:54:36 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:54:36 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:54:36 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <system>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <entry name="serial">21a71d10-e13b-47fe-88fd-ec9597f7902e</entry>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <entry name="uuid">21a71d10-e13b-47fe-88fd-ec9597f7902e</entry>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     </system>
Oct 11 08:54:36 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:54:36 compute-0 nova_compute[260935]:   <os>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:   </os>
Oct 11 08:54:36 compute-0 nova_compute[260935]:   <features>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:   </features>
Oct 11 08:54:36 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:54:36 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:54:36 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/21a71d10-e13b-47fe-88fd-ec9597f7902e_disk">
Oct 11 08:54:36 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       </source>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:54:36 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/21a71d10-e13b-47fe-88fd-ec9597f7902e_disk.config">
Oct 11 08:54:36 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       </source>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:54:36 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:f5:da:34"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <target dev="tap70c55a7a-6f"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e/console.log" append="off"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <video>
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     </video>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:54:36 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:54:36 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:54:36 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:54:36 compute-0 nova_compute[260935]: </domain>
Oct 11 08:54:36 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.402 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Preparing to wait for external event network-vif-plugged-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.402 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.403 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.403 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.404 2 DEBUG nova.virt.libvirt.vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-3',id=55,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-ListServersNegativeTestJSON-567006104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:31Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=21a71d10-e13b-47fe-88fd-ec9597f7902e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.405 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.406 2 DEBUG nova.network.os_vif_util [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:da:34,bridge_name='br-int',has_traffic_filtering=True,id=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70c55a7a-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.406 2 DEBUG os_vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:da:34,bridge_name='br-int',has_traffic_filtering=True,id=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70c55a7a-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.408 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.409 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap70c55a7a-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.415 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap70c55a7a-6f, col_values=(('external_ids', {'iface-id': '70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:da:34', 'vm-uuid': '21a71d10-e13b-47fe-88fd-ec9597f7902e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:36 compute-0 NetworkManager[44960]: <info>  [1760172876.4546] manager: (tap70c55a7a-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.465 2 INFO os_vif [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:da:34,bridge_name='br-int',has_traffic_filtering=True,id=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70c55a7a-6f')
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.475 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpet0fhntn" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.511 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.528 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50/disk.config e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1971376858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:36 compute-0 ceph-mon[74313]: pgmap v1511: 321 pgs: 321 active+clean; 226 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 198 op/s
Oct 11 08:54:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2465400486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.624 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.625 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.625 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] No VIF found with MAC fa:16:3e:f5:da:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.626 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Using config drive
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.653 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.726 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50/disk.config e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.727 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Deleting local config drive /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50/disk.config because it was imported into RBD.
Oct 11 08:54:36 compute-0 kernel: tap5a02bb17-5d: entered promiscuous mode
Oct 11 08:54:36 compute-0 NetworkManager[44960]: <info>  [1760172876.7952] manager: (tap5a02bb17-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/223)
Oct 11 08:54:36 compute-0 systemd-udevd[318994]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:54:36 compute-0 ovn_controller[152945]: 2025-10-11T08:54:36Z|00459|binding|INFO|Claiming lport 5a02bb17-5d97-4eef-a91d-245b70cc5e1b for this chassis.
Oct 11 08:54:36 compute-0 ovn_controller[152945]: 2025-10-11T08:54:36Z|00460|binding|INFO|5a02bb17-5d97-4eef-a91d-245b70cc5e1b: Claiming fa:16:3e:2c:7f:c3 10.100.0.14
Oct 11 08:54:36 compute-0 podman[319058]: 2025-10-11 08:54:36.804979574 +0000 UTC m=+0.070350437 container create 291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.807 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:7f:c3 10.100.0.14'], port_security=['fa:16:3e:2c:7f:c3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e263661e-e9c2-4a4d-a6e5-5fc8a7353f50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72464893-0f19-40a9-84d1-392a298e50b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7a37bbdce5194d96bed20d4162e25337', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5c4bd12-f795-4994-bb5e-cd2cc665fe9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dc3e696-a953-4227-9499-d68d54f25c77, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=5a02bb17-5d97-4eef-a91d-245b70cc5e1b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:36 compute-0 NetworkManager[44960]: <info>  [1760172876.8166] device (tap5a02bb17-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:54:36 compute-0 NetworkManager[44960]: <info>  [1760172876.8179] device (tap5a02bb17-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:54:36 compute-0 ovn_controller[152945]: 2025-10-11T08:54:36Z|00461|binding|INFO|Setting lport 5a02bb17-5d97-4eef-a91d-245b70cc5e1b ovn-installed in OVS
Oct 11 08:54:36 compute-0 ovn_controller[152945]: 2025-10-11T08:54:36Z|00462|binding|INFO|Setting lport 5a02bb17-5d97-4eef-a91d-245b70cc5e1b up in Southbound
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:36 compute-0 systemd-machined[215705]: New machine qemu-61-instance-00000036.
Oct 11 08:54:36 compute-0 systemd[1]: Started libpod-conmon-291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676.scope.
Oct 11 08:54:36 compute-0 podman[319058]: 2025-10-11 08:54:36.767114974 +0000 UTC m=+0.032485847 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:54:36 compute-0 systemd[1]: Started Virtual Machine qemu-61-instance-00000036.
Oct 11 08:54:36 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:54:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601b7f68ddbb33e7d9c40d40c0537063b097ca0468dd1d3c4f0f1fbe13b69382/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:54:36 compute-0 podman[319058]: 2025-10-11 08:54:36.896158454 +0000 UTC m=+0.161529327 container init 291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.896 2 DEBUG nova.network.neutron [req-47b386dc-515d-41f2-9145-a7df00467aef req-e6eba40c-8030-45df-8420-9603a2619002 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Updated VIF entry in instance network info cache for port 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.897 2 DEBUG nova.network.neutron [req-47b386dc-515d-41f2-9145-a7df00467aef req-e6eba40c-8030-45df-8420-9603a2619002 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Updating instance_info_cache with network_info: [{"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:36 compute-0 podman[319058]: 2025-10-11 08:54:36.906287862 +0000 UTC m=+0.171658725 container start 291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 08:54:36 compute-0 nova_compute[260935]: 2025-10-11 08:54:36.914 2 DEBUG oslo_concurrency.lockutils [req-47b386dc-515d-41f2-9145-a7df00467aef req-e6eba40c-8030-45df-8420-9603a2619002 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-21a71d10-e13b-47fe-88fd-ec9597f7902e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:54:36 compute-0 neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9[319091]: [NOTICE]   (319100) : New worker (319103) forked
Oct 11 08:54:36 compute-0 neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9[319091]: [NOTICE]   (319100) : Loading success.
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.985 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 5a02bb17-5d97-4eef-a91d-245b70cc5e1b in datapath 72464893-0f19-40a9-84d1-392a298e50b9 unbound from our chassis
Oct 11 08:54:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:36.987 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72464893-0f19-40a9-84d1-392a298e50b9
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.008 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aec9a17e-5753-4e33-9431-6e1736389ca7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.050 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[91ee42e1-87ae-4ff3-b3a7-e9049332a587]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.054 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[837df0ef-7859-496c-aea6-7bdad5fc0f7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.087 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f3459ad3-64c4-4bfd-b4b5-9ee9ce202852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.109 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1618d04b-ae13-4b3d-bd00-8f9f49847392]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72464893-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:78:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472069, 'reachable_time': 41754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319135, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.139 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172877.138601, d60a2dcd-7fb6-4bfe-8351-a38a71164f83 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.139 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] VM Started (Lifecycle Event)
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.143 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d321d792-6e9b-4049-a997-5dacbbd7ac59]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap72464893-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472087, 'tstamp': 472087}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319137, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap72464893-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472092, 'tstamp': 472092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319137, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.145 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72464893-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.161 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72464893-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.161 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.162 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72464893-00, col_values=(('external_ids', {'iface-id': 'fba8f4e1-8635-4527-85f3-29ce2a1033b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.163 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.163 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.169 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172877.1413875, d60a2dcd-7fb6-4bfe-8351-a38a71164f83 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.169 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] VM Paused (Lifecycle Event)
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.197 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.202 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.222 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.355 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Creating config drive at /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e/disk.config
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.364 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpikgdgygh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.461 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "ceef84cf-9df6-4484-862c-624eab05f1fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.463 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.463 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.464 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.464 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.468 2 INFO nova.compute.manager [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Terminating instance
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.469 2 DEBUG nova.compute.manager [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:54:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:54:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2271091576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:54:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:54:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2271091576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:54:37 compute-0 kernel: tapb6cea59b-74 (unregistering): left promiscuous mode
Oct 11 08:54:37 compute-0 NetworkManager[44960]: <info>  [1760172877.5332] device (tapb6cea59b-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.536 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpikgdgygh" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.563 2 DEBUG nova.storage.rbd_utils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] rbd image 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.567 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e/disk.config 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:37 compute-0 ovn_controller[152945]: 2025-10-11T08:54:37Z|00463|binding|INFO|Releasing lport b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b from this chassis (sb_readonly=0)
Oct 11 08:54:37 compute-0 ovn_controller[152945]: 2025-10-11T08:54:37Z|00464|binding|INFO|Setting lport b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b down in Southbound
Oct 11 08:54:37 compute-0 ovn_controller[152945]: 2025-10-11T08:54:37Z|00465|binding|INFO|Removing iface tapb6cea59b-74 ovn-installed in OVS
Oct 11 08:54:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2271091576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:54:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2271091576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.588 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:a8:a7 10.100.0.14'], port_security=['fa:16:3e:4a:a8:a7 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ceef84cf-9df6-4484-862c-624eab05f1fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f022782f-f41d-4a83-babf-b48f2f344e01', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce2ed1c47abf4e1889253402aa1e536f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4c233320-8183-4c2d-a5ba-21893adcb132', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc533fa3-dc44-4446-b57b-1e5f53ba099c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.591 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b in datapath f022782f-f41d-4a83-babf-b48f2f344e01 unbound from our chassis
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.594 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f022782f-f41d-4a83-babf-b48f2f344e01, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.595 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[197e1690-c883-4771-9b1c-4c2b54c7173a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.596 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01 namespace which is not needed anymore
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:37 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000034.scope: Deactivated successfully.
Oct 11 08:54:37 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000034.scope: Consumed 2.361s CPU time.
Oct 11 08:54:37 compute-0 systemd-machined[215705]: Machine qemu-59-instance-00000034 terminated.
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.689 2 DEBUG nova.compute.manager [req-7ba802ea-fa41-4cfd-9495-4318c9bfd1d8 req-55a231c4-201f-4b38-8c98-c268fd1d0e49 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Received event network-vif-plugged-5a02bb17-5d97-4eef-a91d-245b70cc5e1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.690 2 DEBUG oslo_concurrency.lockutils [req-7ba802ea-fa41-4cfd-9495-4318c9bfd1d8 req-55a231c4-201f-4b38-8c98-c268fd1d0e49 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.690 2 DEBUG oslo_concurrency.lockutils [req-7ba802ea-fa41-4cfd-9495-4318c9bfd1d8 req-55a231c4-201f-4b38-8c98-c268fd1d0e49 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.690 2 DEBUG oslo_concurrency.lockutils [req-7ba802ea-fa41-4cfd-9495-4318c9bfd1d8 req-55a231c4-201f-4b38-8c98-c268fd1d0e49 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.690 2 DEBUG nova.compute.manager [req-7ba802ea-fa41-4cfd-9495-4318c9bfd1d8 req-55a231c4-201f-4b38-8c98-c268fd1d0e49 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Processing event network-vif-plugged-5a02bb17-5d97-4eef-a91d-245b70cc5e1b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.724 2 INFO nova.virt.libvirt.driver [-] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Instance destroyed successfully.
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.724 2 DEBUG nova.objects.instance [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lazy-loading 'resources' on Instance uuid ceef84cf-9df6-4484-862c-624eab05f1fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.739 2 DEBUG nova.virt.libvirt.vif [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:54:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1070563181',display_name='tempest-ServerAddressesTestJSON-server-1070563181',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1070563181',id=52,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:54:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce2ed1c47abf4e1889253402aa1e536f',ramdisk_id='',reservation_id='r-xo306n5g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1144666060',owner_user_name='tempest-ServerAddressesTestJSON-1144666060-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:54:36Z,user_data=None,user_id='e7ae37ce1df34d87a006582ce3cb7d6d',uuid=ceef84cf-9df6-4484-862c-624eab05f1fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.740 2 DEBUG nova.network.os_vif_util [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Converting VIF {"id": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "address": "fa:16:3e:4a:a8:a7", "network": {"id": "f022782f-f41d-4a83-babf-b48f2f344e01", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1436623710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce2ed1c47abf4e1889253402aa1e536f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cea59b-74", "ovs_interfaceid": "b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.741 2 DEBUG nova.network.os_vif_util [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:a8:a7,bridge_name='br-int',has_traffic_filtering=True,id=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b,network=Network(f022782f-f41d-4a83-babf-b48f2f344e01),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cea59b-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.741 2 DEBUG os_vif [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:a8:a7,bridge_name='br-int',has_traffic_filtering=True,id=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b,network=Network(f022782f-f41d-4a83-babf-b48f2f344e01),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cea59b-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.743 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6cea59b-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.752 2 INFO os_vif [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:a8:a7,bridge_name='br-int',has_traffic_filtering=True,id=b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b,network=Network(f022782f-f41d-4a83-babf-b48f2f344e01),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cea59b-74')
Oct 11 08:54:37 compute-0 neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01[318651]: [NOTICE]   (318665) : haproxy version is 2.8.14-c23fe91
Oct 11 08:54:37 compute-0 neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01[318651]: [NOTICE]   (318665) : path to executable is /usr/sbin/haproxy
Oct 11 08:54:37 compute-0 neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01[318651]: [WARNING]  (318665) : Exiting Master process...
Oct 11 08:54:37 compute-0 neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01[318651]: [WARNING]  (318665) : Exiting Master process...
Oct 11 08:54:37 compute-0 neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01[318651]: [ALERT]    (318665) : Current worker (318668) exited with code 143 (Terminated)
Oct 11 08:54:37 compute-0 neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01[318651]: [WARNING]  (318665) : All workers exited. Exiting... (0)
Oct 11 08:54:37 compute-0 systemd[1]: libpod-f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991.scope: Deactivated successfully.
Oct 11 08:54:37 compute-0 podman[319228]: 2025-10-11 08:54:37.77588941 +0000 UTC m=+0.054191597 container died f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.796 2 DEBUG oslo_concurrency.processutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e/disk.config 21a71d10-e13b-47fe-88fd-ec9597f7902e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.797 2 INFO nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Deleting local config drive /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e/disk.config because it was imported into RBD.
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.804 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "1ef94ed5-fffa-41e9-b72f-4569354392c6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.804 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991-userdata-shm.mount: Deactivated successfully.
Oct 11 08:54:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2265d21a76f95933280395fdeb37d947104c519b3836bf0feabc7e53e6859577-merged.mount: Deactivated successfully.
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.827 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:54:37 compute-0 podman[319228]: 2025-10-11 08:54:37.840919254 +0000 UTC m=+0.119221441 container cleanup f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 08:54:37 compute-0 systemd[1]: libpod-conmon-f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991.scope: Deactivated successfully.
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:37 compute-0 kernel: tap70c55a7a-6f: entered promiscuous mode
Oct 11 08:54:37 compute-0 systemd-udevd[319081]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:37 compute-0 NetworkManager[44960]: <info>  [1760172877.8802] manager: (tap70c55a7a-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/224)
Oct 11 08:54:37 compute-0 ovn_controller[152945]: 2025-10-11T08:54:37Z|00466|binding|INFO|Claiming lport 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b for this chassis.
Oct 11 08:54:37 compute-0 ovn_controller[152945]: 2025-10-11T08:54:37Z|00467|binding|INFO|70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b: Claiming fa:16:3e:f5:da:34 10.100.0.7
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.889 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:da:34 10.100.0.7'], port_security=['fa:16:3e:f5:da:34 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '21a71d10-e13b-47fe-88fd-ec9597f7902e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72464893-0f19-40a9-84d1-392a298e50b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7a37bbdce5194d96bed20d4162e25337', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5c4bd12-f795-4994-bb5e-cd2cc665fe9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dc3e696-a953-4227-9499-d68d54f25c77, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:37 compute-0 NetworkManager[44960]: <info>  [1760172877.8987] device (tap70c55a7a-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:54:37 compute-0 NetworkManager[44960]: <info>  [1760172877.9008] device (tap70c55a7a-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:54:37 compute-0 ovn_controller[152945]: 2025-10-11T08:54:37Z|00468|binding|INFO|Setting lport 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b ovn-installed in OVS
Oct 11 08:54:37 compute-0 ovn_controller[152945]: 2025-10-11T08:54:37Z|00469|binding|INFO|Setting lport 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b up in Southbound
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:37 compute-0 systemd-machined[215705]: New machine qemu-62-instance-00000037.
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.931 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.931 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:37 compute-0 systemd[1]: Started Virtual Machine qemu-62-instance-00000037.
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.938 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.938 2 INFO nova.compute.claims [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:54:37 compute-0 podman[319293]: 2025-10-11 08:54:37.94211373 +0000 UTC m=+0.069692709 container remove f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.951 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172877.9506705, e263661e-e9c2-4a4d-a6e5-5fc8a7353f50 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.951 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] VM Started (Lifecycle Event)
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.952 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.954 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0b5d59-0a9d-43ff-8ed4-eaded2805595]: (4, ('Sat Oct 11 08:54:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01 (f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991)\nf3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991\nSat Oct 11 08:54:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01 (f3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991)\nf3aae3bca0e893ccf324b75817ea1dd437236d36d1ffd2c53b7adacc2120d991\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.958 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0eee0da8-d32f-4082-b419-77d26decc0b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.959 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf022782f-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:37 compute-0 kernel: tapf022782f-f0: left promiscuous mode
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.964 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.970 2 INFO nova.virt.libvirt.driver [-] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Instance spawned successfully.
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.971 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:54:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 227 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.1 MiB/s wr, 287 op/s
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:37.983 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e4498be6-1a42-42d0-b427-4e8b85cc13e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:37 compute-0 nova_compute[260935]: 2025-10-11 08:54:37.991 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.001 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.001 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4bfd5f20-0bd2-45be-a1b2-a5430267ba3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.003 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e779b33e-b2a2-41c6-948d-eb7ca5d3451e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.004 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.004 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.005 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.005 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.005 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.006 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.021 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0e56aa0c-d230-410a-ad82-a63d5ee6ef7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471827, 'reachable_time': 17984, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319322, 'error': None, 'target': 'ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:38 compute-0 systemd[1]: run-netns-ovnmeta\x2df022782f\x2df41d\x2d4a83\x2dbabf\x2db48f2f344e01.mount: Deactivated successfully.
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.029 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f022782f-f41d-4a83-babf-b48f2f344e01 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.030 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[9de48926-d8d6-4e06-9b71-20a8497b919e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.032 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.033 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172877.950964, e263661e-e9c2-4a4d-a6e5-5fc8a7353f50 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.033 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] VM Paused (Lifecycle Event)
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.031 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b in datapath 72464893-0f19-40a9-84d1-392a298e50b9 unbound from our chassis
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.033 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72464893-0f19-40a9-84d1-392a298e50b9
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.059 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[67e4fedc-e524-401a-bdf3-caa60845a621]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.067 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.073 2 INFO nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Took 7.61 seconds to spawn the instance on the hypervisor.
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.073 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.081 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172877.9594476, e263661e-e9c2-4a4d-a6e5-5fc8a7353f50 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.082 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] VM Resumed (Lifecycle Event)
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.110 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.113 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.121 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ac199919-6400-434a-b6db-700c93ef82de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.126 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[66bb6b79-d872-42d3-9841-9c919f0b074e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.174 2 INFO nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Took 10.00 seconds to build instance.
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.177 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.186 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[22ae053b-7691-4654-a46c-b47dbc0c12ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.195 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.209 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[27ba249a-cd51-4ec7-86aa-e7fa22390745]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72464893-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:78:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 742, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 742, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472069, 'reachable_time': 41754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 7, 'inoctets': 644, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 644, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319356, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.229 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received event network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.229 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.230 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.230 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.230 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Processing event network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.231 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received event network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.231 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.231 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.232 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.232 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] No waiting events found dispatching network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.232 2 WARNING nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received unexpected event network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac for instance with vm_state building and task_state spawning.
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.233 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received event network-vif-unplugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.233 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.233 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.234 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.234 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] No waiting events found dispatching network-vif-unplugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.234 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received event network-vif-unplugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.234 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received event network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.234 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.235 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.235 2 DEBUG oslo_concurrency.lockutils [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.235 2 DEBUG nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] No waiting events found dispatching network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.235 2 WARNING nova.compute.manager [req-1773fafa-73d2-4eac-b0b2-8906ac7f50d6 req-82f2b123-1067-41cd-82ce-f706021e8fd8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received unexpected event network-vif-plugged-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b for instance with vm_state active and task_state deleting.
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.235 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a8afd8-4e1a-4463-8cc4-a1db5b6f7de7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap72464893-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472087, 'tstamp': 472087}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319365, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap72464893-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472092, 'tstamp': 472092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319365, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.236 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.237 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72464893-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.241 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72464893-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.241 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.242 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72464893-00, col_values=(('external_ids', {'iface-id': 'fba8f4e1-8635-4527-85f3-29ce2a1033b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:38.242 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.243 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.243 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172878.2427812, d60a2dcd-7fb6-4bfe-8351-a38a71164f83 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.243 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] VM Resumed (Lifecycle Event)
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.252 2 INFO nova.virt.libvirt.driver [-] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Instance spawned successfully.
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.253 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.263 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.270 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.279 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.280 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.280 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.281 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.281 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.281 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.287 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.325 2 INFO nova.virt.libvirt.driver [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Deleting instance files /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe_del
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.326 2 INFO nova.virt.libvirt.driver [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Deletion of /var/lib/nova/instances/ceef84cf-9df6-4484-862c-624eab05f1fe_del complete
Oct 11 08:54:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.353 2 INFO nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Took 8.97 seconds to spawn the instance on the hypervisor.
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.353 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.388 2 INFO nova.compute.manager [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Took 0.92 seconds to destroy the instance on the hypervisor.
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.389 2 DEBUG oslo.service.loopingcall [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.389 2 DEBUG nova.compute.manager [-] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.390 2 DEBUG nova.network.neutron [-] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.414 2 INFO nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Took 10.27 seconds to build instance.
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.428 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:38 compute-0 ceph-mon[74313]: pgmap v1512: 321 pgs: 321 active+clean; 227 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.1 MiB/s wr, 287 op/s
Oct 11 08:54:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:38 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2272426784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.626 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.634 2 DEBUG nova.compute.provider_tree [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.649 2 DEBUG nova.scheduler.client.report [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.677 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.678 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.743 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.743 2 DEBUG nova.network.neutron [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.762 2 INFO nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.786 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.878 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.880 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.880 2 INFO nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Creating image(s)
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.920 2 DEBUG nova.storage.rbd_utils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.954 2 DEBUG nova.storage.rbd_utils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.981 2 DEBUG nova.storage.rbd_utils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:38 compute-0 nova_compute[260935]: 2025-10-11 08:54:38.986 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.034 2 DEBUG nova.policy [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd4dce65b39be45739408ca70d672df84', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.037 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172878.9103584, 21a71d10-e13b-47fe-88fd-ec9597f7902e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.037 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] VM Started (Lifecycle Event)
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.057 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.064 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172878.9107077, 21a71d10-e13b-47fe-88fd-ec9597f7902e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.064 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] VM Paused (Lifecycle Event)
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.081 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.086 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.090 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.092 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.093 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.094 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.129 2 DEBUG nova.storage.rbd_utils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.133 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.201 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.284 2 DEBUG nova.network.neutron [-] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.305 2 INFO nova.compute.manager [-] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Took 0.92 seconds to deallocate network for instance.
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.359 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.360 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.448 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.527 2 DEBUG oslo_concurrency.processutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.582 2 DEBUG nova.storage.rbd_utils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] resizing rbd image 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:54:39 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2272426784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.733 2 DEBUG nova.objects.instance [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'migration_context' on Instance uuid 1ef94ed5-fffa-41e9-b72f-4569354392c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.737 2 DEBUG nova.network.neutron [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Successfully created port: db59456c-ccd3-48b6-a889-d54a6e2cdf66 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.752 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.752 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Ensure instance console log exists: /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.753 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.753 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.753 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:39 compute-0 podman[319559]: 2025-10-11 08:54:39.770033263 +0000 UTC m=+0.077849651 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.792 2 DEBUG nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Received event network-vif-plugged-5a02bb17-5d97-4eef-a91d-245b70cc5e1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.792 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.792 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.792 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.793 2 DEBUG nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] No waiting events found dispatching network-vif-plugged-5a02bb17-5d97-4eef-a91d-245b70cc5e1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.793 2 WARNING nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Received unexpected event network-vif-plugged-5a02bb17-5d97-4eef-a91d-245b70cc5e1b for instance with vm_state active and task_state None.
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.793 2 DEBUG nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Received event network-vif-plugged-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.793 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.793 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.793 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.793 2 DEBUG nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Processing event network-vif-plugged-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.794 2 DEBUG nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Received event network-vif-plugged-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.794 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.794 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.794 2 DEBUG oslo_concurrency.lockutils [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.794 2 DEBUG nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] No waiting events found dispatching network-vif-plugged-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.794 2 WARNING nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Received unexpected event network-vif-plugged-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b for instance with vm_state building and task_state spawning.
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.794 2 DEBUG nova.compute.manager [req-3382cbc4-648f-4b7e-9a5c-2110ee075686 req-5cf248a8-7232-42ae-907f-df244d59d0a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Received event network-vif-deleted-b6cea59b-74d9-47bc-a2e0-cfc62e1cc66b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.795 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.799 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172879.7992473, 21a71d10-e13b-47fe-88fd-ec9597f7902e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.800 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] VM Resumed (Lifecycle Event)
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.808 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.815 2 INFO nova.virt.libvirt.driver [-] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Instance spawned successfully.
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.815 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:54:39 compute-0 podman[319602]: 2025-10-11 08:54:39.892463314 +0000 UTC m=+0.097187493 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.906 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.911 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.942 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.961 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.962 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.962 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.962 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.963 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:39 compute-0 nova_compute[260935]: 2025-10-11 08:54:39.963 2 DEBUG nova.virt.libvirt.driver [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 227 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.1 MiB/s wr, 223 op/s
Oct 11 08:54:40 compute-0 nova_compute[260935]: 2025-10-11 08:54:40.019 2 INFO nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Took 8.35 seconds to spawn the instance on the hypervisor.
Oct 11 08:54:40 compute-0 nova_compute[260935]: 2025-10-11 08:54:40.020 2 DEBUG nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3872670261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:40 compute-0 nova_compute[260935]: 2025-10-11 08:54:40.082 2 INFO nova.compute.manager [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Took 11.70 seconds to build instance.
Oct 11 08:54:40 compute-0 nova_compute[260935]: 2025-10-11 08:54:40.091 2 DEBUG oslo_concurrency.processutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:40 compute-0 nova_compute[260935]: 2025-10-11 08:54:40.099 2 DEBUG nova.compute.provider_tree [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:54:40 compute-0 nova_compute[260935]: 2025-10-11 08:54:40.113 2 DEBUG oslo_concurrency.lockutils [None req-e96e56d0-9ad1-44a4-b38d-a9d27ddb88bb 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:40 compute-0 nova_compute[260935]: 2025-10-11 08:54:40.120 2 DEBUG nova.scheduler.client.report [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:54:40 compute-0 nova_compute[260935]: 2025-10-11 08:54:40.148 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:40 compute-0 nova_compute[260935]: 2025-10-11 08:54:40.176 2 INFO nova.scheduler.client.report [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Deleted allocations for instance ceef84cf-9df6-4484-862c-624eab05f1fe
Oct 11 08:54:40 compute-0 nova_compute[260935]: 2025-10-11 08:54:40.255 2 DEBUG oslo_concurrency.lockutils [None req-70b80284-766f-4128-b01b-e503f64803d5 e7ae37ce1df34d87a006582ce3cb7d6d ce2ed1c47abf4e1889253402aa1e536f - - default default] Lock "ceef84cf-9df6-4484-862c-624eab05f1fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:40 compute-0 ceph-mon[74313]: pgmap v1513: 321 pgs: 321 active+clean; 227 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.1 MiB/s wr, 223 op/s
Oct 11 08:54:40 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3872670261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:41 compute-0 nova_compute[260935]: 2025-10-11 08:54:41.295 2 DEBUG nova.network.neutron [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Successfully updated port: db59456c-ccd3-48b6-a889-d54a6e2cdf66 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:54:41 compute-0 nova_compute[260935]: 2025-10-11 08:54:41.326 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "refresh_cache-1ef94ed5-fffa-41e9-b72f-4569354392c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:54:41 compute-0 nova_compute[260935]: 2025-10-11 08:54:41.326 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquired lock "refresh_cache-1ef94ed5-fffa-41e9-b72f-4569354392c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:54:41 compute-0 nova_compute[260935]: 2025-10-11 08:54:41.327 2 DEBUG nova.network.neutron [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:54:41 compute-0 nova_compute[260935]: 2025-10-11 08:54:41.511 2 DEBUG nova.network.neutron [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:54:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 227 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.1 MiB/s wr, 223 op/s
Oct 11 08:54:42 compute-0 nova_compute[260935]: 2025-10-11 08:54:42.520 2 DEBUG nova.compute.manager [req-0fedcdb2-49a5-4db3-b091-aaa41e5f83b8 req-f177e1c5-78c3-436f-b6ae-5a9c3259b670 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received event network-changed-db59456c-ccd3-48b6-a889-d54a6e2cdf66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:42 compute-0 nova_compute[260935]: 2025-10-11 08:54:42.522 2 DEBUG nova.compute.manager [req-0fedcdb2-49a5-4db3-b091-aaa41e5f83b8 req-f177e1c5-78c3-436f-b6ae-5a9c3259b670 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Refreshing instance network info cache due to event network-changed-db59456c-ccd3-48b6-a889-d54a6e2cdf66. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:54:42 compute-0 nova_compute[260935]: 2025-10-11 08:54:42.523 2 DEBUG oslo_concurrency.lockutils [req-0fedcdb2-49a5-4db3-b091-aaa41e5f83b8 req-f177e1c5-78c3-436f-b6ae-5a9c3259b670 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1ef94ed5-fffa-41e9-b72f-4569354392c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:54:42 compute-0 nova_compute[260935]: 2025-10-11 08:54:42.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:42 compute-0 nova_compute[260935]: 2025-10-11 08:54:42.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:43 compute-0 ceph-mon[74313]: pgmap v1514: 321 pgs: 321 active+clean; 227 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.1 MiB/s wr, 223 op/s
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.291 2 DEBUG nova.network.neutron [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Updating instance_info_cache with network_info: [{"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.314 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Releasing lock "refresh_cache-1ef94ed5-fffa-41e9-b72f-4569354392c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.315 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Instance network_info: |[{"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.315 2 DEBUG oslo_concurrency.lockutils [req-0fedcdb2-49a5-4db3-b091-aaa41e5f83b8 req-f177e1c5-78c3-436f-b6ae-5a9c3259b670 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1ef94ed5-fffa-41e9-b72f-4569354392c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.316 2 DEBUG nova.network.neutron [req-0fedcdb2-49a5-4db3-b091-aaa41e5f83b8 req-f177e1c5-78c3-436f-b6ae-5a9c3259b670 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Refreshing network info cache for port db59456c-ccd3-48b6-a889-d54a6e2cdf66 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.320 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Start _get_guest_xml network_info=[{"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.326 2 WARNING nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.331 2 DEBUG nova.virt.libvirt.host [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.332 2 DEBUG nova.virt.libvirt.host [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:54:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.339 2 DEBUG nova.virt.libvirt.host [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.339 2 DEBUG nova.virt.libvirt.host [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.340 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.340 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.340 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.341 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.341 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.341 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.341 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.341 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.342 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.342 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.342 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.342 2 DEBUG nova.virt.hardware [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.347 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.588 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.589 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.592 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.592 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.593 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.594 2 INFO nova.compute.manager [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Terminating instance
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.595 2 DEBUG nova.compute.manager [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:54:43 compute-0 kernel: tapb0464a2f-21 (unregistering): left promiscuous mode
Oct 11 08:54:43 compute-0 NetworkManager[44960]: <info>  [1760172883.6620] device (tapb0464a2f-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:43 compute-0 ovn_controller[152945]: 2025-10-11T08:54:43Z|00470|binding|INFO|Releasing lport b0464a2f-214f-420b-aa3e-70d2e3dce4ac from this chassis (sb_readonly=0)
Oct 11 08:54:43 compute-0 ovn_controller[152945]: 2025-10-11T08:54:43Z|00471|binding|INFO|Setting lport b0464a2f-214f-420b-aa3e-70d2e3dce4ac down in Southbound
Oct 11 08:54:43 compute-0 ovn_controller[152945]: 2025-10-11T08:54:43Z|00472|binding|INFO|Removing iface tapb0464a2f-21 ovn-installed in OVS
Oct 11 08:54:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.740 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:af:82 10.100.0.4'], port_security=['fa:16:3e:9b:af:82 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd60a2dcd-7fb6-4bfe-8351-a38a71164f83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72464893-0f19-40a9-84d1-392a298e50b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7a37bbdce5194d96bed20d4162e25337', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5c4bd12-f795-4994-bb5e-cd2cc665fe9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dc3e696-a953-4227-9499-d68d54f25c77, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b0464a2f-214f-420b-aa3e-70d2e3dce4ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.743 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b0464a2f-214f-420b-aa3e-70d2e3dce4ac in datapath 72464893-0f19-40a9-84d1-392a298e50b9 unbound from our chassis
Oct 11 08:54:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.745 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72464893-0f19-40a9-84d1-392a298e50b9
Oct 11 08:54:43 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000035.scope: Deactivated successfully.
Oct 11 08:54:43 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000035.scope: Consumed 6.430s CPU time.
Oct 11 08:54:43 compute-0 systemd-machined[215705]: Machine qemu-60-instance-00000035 terminated.
Oct 11 08:54:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.776 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[efa728ef-c05a-4faa-b63e-fa3538136739]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:43 compute-0 ovn_controller[152945]: 2025-10-11T08:54:43Z|00473|binding|INFO|Releasing lport fba8f4e1-8635-4527-85f3-29ce2a1033b5 from this chassis (sb_readonly=0)
Oct 11 08:54:43 compute-0 NetworkManager[44960]: <info>  [1760172883.8332] manager: (tapb0464a2f-21): new Tun device (/org/freedesktop/NetworkManager/Devices/225)
Oct 11 08:54:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.848 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[db78581f-0398-4de2-be6a-e7ec63a49744]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.857 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0c6cb2da-09f5-4a96-b7be-83b4eb388e59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.866 2 INFO nova.virt.libvirt.driver [-] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Instance destroyed successfully.
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.867 2 DEBUG nova.objects.instance [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'resources' on Instance uuid d60a2dcd-7fb6-4bfe-8351-a38a71164f83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.884 2 DEBUG nova.virt.libvirt.vif [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:54:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-1',id=53,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:54:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-ListServersNegativeTestJSON-567006104-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:54:38Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=d60a2dcd-7fb6-4bfe-8351-a38a71164f83,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.885 2 DEBUG nova.network.os_vif_util [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "address": "fa:16:3e:9b:af:82", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0464a2f-21", "ovs_interfaceid": "b0464a2f-214f-420b-aa3e-70d2e3dce4ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.886 2 DEBUG nova.network.os_vif_util [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:af:82,bridge_name='br-int',has_traffic_filtering=True,id=b0464a2f-214f-420b-aa3e-70d2e3dce4ac,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0464a2f-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.886 2 DEBUG os_vif [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:af:82,bridge_name='br-int',has_traffic_filtering=True,id=b0464a2f-214f-420b-aa3e-70d2e3dce4ac,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0464a2f-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.888 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0464a2f-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:54:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.902 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d3b183-e127-4ba6-bd6f-e0d4b22d4e18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.932 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[defc64ab-734d-4d5b-9438-44a1f2a1939b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72464893-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:78:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472069, 'reachable_time': 41754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319662, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.955 2 INFO os_vif [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:af:82,bridge_name='br-int',has_traffic_filtering=True,id=b0464a2f-214f-420b-aa3e-70d2e3dce4ac,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0464a2f-21')
Oct 11 08:54:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.959 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[24283729-1fbb-440d-8d75-7c470f83ec2c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap72464893-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472087, 'tstamp': 472087}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319663, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap72464893-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472092, 'tstamp': 472092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319663, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.961 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72464893-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:54:43 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1331195559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.964 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72464893-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.965 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.965 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72464893-00, col_values=(('external_ids', {'iface-id': 'fba8f4e1-8635-4527-85f3-29ce2a1033b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:43.966 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 227 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 8.9 MiB/s wr, 475 op/s
Oct 11 08:54:43 compute-0 nova_compute[260935]: 2025-10-11 08:54:43.991 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.012 2 DEBUG nova.storage.rbd_utils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.018 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:44 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1331195559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.194 2 DEBUG nova.compute.manager [req-e18c29af-3ccd-462a-a9f1-47c720ce16ed req-a875288b-9a1b-4ad1-9166-12e305250bf3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received event network-vif-unplugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.195 2 DEBUG oslo_concurrency.lockutils [req-e18c29af-3ccd-462a-a9f1-47c720ce16ed req-a875288b-9a1b-4ad1-9166-12e305250bf3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.196 2 DEBUG oslo_concurrency.lockutils [req-e18c29af-3ccd-462a-a9f1-47c720ce16ed req-a875288b-9a1b-4ad1-9166-12e305250bf3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.196 2 DEBUG oslo_concurrency.lockutils [req-e18c29af-3ccd-462a-a9f1-47c720ce16ed req-a875288b-9a1b-4ad1-9166-12e305250bf3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.197 2 DEBUG nova.compute.manager [req-e18c29af-3ccd-462a-a9f1-47c720ce16ed req-a875288b-9a1b-4ad1-9166-12e305250bf3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] No waiting events found dispatching network-vif-unplugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.198 2 DEBUG nova.compute.manager [req-e18c29af-3ccd-462a-a9f1-47c720ce16ed req-a875288b-9a1b-4ad1-9166-12e305250bf3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received event network-vif-unplugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.339 2 INFO nova.virt.libvirt.driver [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Deleting instance files /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83_del
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.341 2 INFO nova.virt.libvirt.driver [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Deletion of /var/lib/nova/instances/d60a2dcd-7fb6-4bfe-8351-a38a71164f83_del complete
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.427 2 INFO nova.compute.manager [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.428 2 DEBUG oslo.service.loopingcall [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.428 2 DEBUG nova.compute.manager [-] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.428 2 DEBUG nova.network.neutron [-] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:54:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:54:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3966572897' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.509 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.511 2 DEBUG nova.virt.libvirt.vif [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1487174086',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1487174086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1487174086',id=56,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-9kmel6v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:38Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=1ef94ed5-fffa-41e9-b72f-4569354392c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.512 2 DEBUG nova.network.os_vif_util [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.513 2 DEBUG nova.network.os_vif_util [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:fd:5e,bridge_name='br-int',has_traffic_filtering=True,id=db59456c-ccd3-48b6-a889-d54a6e2cdf66,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb59456c-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.515 2 DEBUG nova.objects.instance [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ef94ed5-fffa-41e9-b72f-4569354392c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.532 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:54:44 compute-0 nova_compute[260935]:   <uuid>1ef94ed5-fffa-41e9-b72f-4569354392c6</uuid>
Oct 11 08:54:44 compute-0 nova_compute[260935]:   <name>instance-00000038</name>
Oct 11 08:54:44 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:54:44 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:54:44 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1487174086</nova:name>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:54:43</nova:creationTime>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:54:44 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:54:44 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:54:44 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:54:44 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:54:44 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:54:44 compute-0 nova_compute[260935]:         <nova:user uuid="d4dce65b39be45739408ca70d672df84">tempest-ImagesOneServerNegativeTestJSON-253514738-project-member</nova:user>
Oct 11 08:54:44 compute-0 nova_compute[260935]:         <nova:project uuid="b34d40b1586348c3be3d9142dfe1770d">tempest-ImagesOneServerNegativeTestJSON-253514738</nova:project>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:54:44 compute-0 nova_compute[260935]:         <nova:port uuid="db59456c-ccd3-48b6-a889-d54a6e2cdf66">
Oct 11 08:54:44 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:54:44 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:54:44 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <system>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <entry name="serial">1ef94ed5-fffa-41e9-b72f-4569354392c6</entry>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <entry name="uuid">1ef94ed5-fffa-41e9-b72f-4569354392c6</entry>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     </system>
Oct 11 08:54:44 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:54:44 compute-0 nova_compute[260935]:   <os>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:   </os>
Oct 11 08:54:44 compute-0 nova_compute[260935]:   <features>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:   </features>
Oct 11 08:54:44 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:54:44 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:54:44 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/1ef94ed5-fffa-41e9-b72f-4569354392c6_disk">
Oct 11 08:54:44 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       </source>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:54:44 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/1ef94ed5-fffa-41e9-b72f-4569354392c6_disk.config">
Oct 11 08:54:44 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       </source>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:54:44 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:5b:fd:5e"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <target dev="tapdb59456c-cc"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6/console.log" append="off"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <video>
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     </video>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:54:44 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:54:44 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:54:44 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:54:44 compute-0 nova_compute[260935]: </domain>
Oct 11 08:54:44 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.542 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Preparing to wait for external event network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.543 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.543 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.543 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.544 2 DEBUG nova.virt.libvirt.vif [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:54:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1487174086',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1487174086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1487174086',id=56,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-9kmel6v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:54:38Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=1ef94ed5-fffa-41e9-b72f-4569354392c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.545 2 DEBUG nova.network.os_vif_util [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.546 2 DEBUG nova.network.os_vif_util [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:fd:5e,bridge_name='br-int',has_traffic_filtering=True,id=db59456c-ccd3-48b6-a889-d54a6e2cdf66,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb59456c-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.546 2 DEBUG os_vif [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:fd:5e,bridge_name='br-int',has_traffic_filtering=True,id=db59456c-ccd3-48b6-a889-d54a6e2cdf66,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb59456c-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.549 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.550 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.555 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb59456c-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.555 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdb59456c-cc, col_values=(('external_ids', {'iface-id': 'db59456c-ccd3-48b6-a889-d54a6e2cdf66', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:fd:5e', 'vm-uuid': '1ef94ed5-fffa-41e9-b72f-4569354392c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:44 compute-0 NetworkManager[44960]: <info>  [1760172884.5593] manager: (tapdb59456c-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/226)
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.567 2 INFO os_vif [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:fd:5e,bridge_name='br-int',has_traffic_filtering=True,id=db59456c-ccd3-48b6-a889-d54a6e2cdf66,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb59456c-cc')
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.652 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.652 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.653 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] No VIF found with MAC fa:16:3e:5b:fd:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.653 2 INFO nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Using config drive
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.672 2 DEBUG nova.storage.rbd_utils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.834 2 DEBUG nova.network.neutron [req-0fedcdb2-49a5-4db3-b091-aaa41e5f83b8 req-f177e1c5-78c3-436f-b6ae-5a9c3259b670 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Updated VIF entry in instance network info cache for port db59456c-ccd3-48b6-a889-d54a6e2cdf66. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.835 2 DEBUG nova.network.neutron [req-0fedcdb2-49a5-4db3-b091-aaa41e5f83b8 req-f177e1c5-78c3-436f-b6ae-5a9c3259b670 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Updating instance_info_cache with network_info: [{"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:44 compute-0 nova_compute[260935]: 2025-10-11 08:54:44.860 2 DEBUG oslo_concurrency.lockutils [req-0fedcdb2-49a5-4db3-b091-aaa41e5f83b8 req-f177e1c5-78c3-436f-b6ae-5a9c3259b670 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1ef94ed5-fffa-41e9-b72f-4569354392c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:54:45 compute-0 ceph-mon[74313]: pgmap v1515: 321 pgs: 321 active+clean; 227 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 8.9 MiB/s wr, 475 op/s
Oct 11 08:54:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3966572897' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.344 2 INFO nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Creating config drive at /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6/disk.config
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.353 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ocnqdgb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.403 2 DEBUG nova.network.neutron [-] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.438 2 INFO nova.compute.manager [-] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Took 1.01 seconds to deallocate network for instance.
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.514 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ocnqdgb" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.548 2 DEBUG nova.storage.rbd_utils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] rbd image 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.552 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6/disk.config 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.607 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.608 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.743 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172870.4721122, 1a70666a-f421-4a09-bfa4-f1171cbe2000 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.745 2 INFO nova.compute.manager [-] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] VM Stopped (Lifecycle Event)
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.749 2 DEBUG oslo_concurrency.processutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6/disk.config 1ef94ed5-fffa-41e9-b72f-4569354392c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.750 2 INFO nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Deleting local config drive /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6/disk.config because it was imported into RBD.
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.773 2 DEBUG nova.compute.manager [None req-a8d730d2-aad8-4733-8c08-8d35225543ea - - - - - -] [instance: 1a70666a-f421-4a09-bfa4-f1171cbe2000] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.778 2 DEBUG oslo_concurrency.processutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:45 compute-0 kernel: tapdb59456c-cc: entered promiscuous mode
Oct 11 08:54:45 compute-0 NetworkManager[44960]: <info>  [1760172885.8085] manager: (tapdb59456c-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/227)
Oct 11 08:54:45 compute-0 ovn_controller[152945]: 2025-10-11T08:54:45Z|00474|binding|INFO|Claiming lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 for this chassis.
Oct 11 08:54:45 compute-0 ovn_controller[152945]: 2025-10-11T08:54:45Z|00475|binding|INFO|db59456c-ccd3-48b6-a889-d54a6e2cdf66: Claiming fa:16:3e:5b:fd:5e 10.100.0.10
Oct 11 08:54:45 compute-0 systemd-udevd[319650]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:54:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.821 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:fd:5e 10.100.0.10'], port_security=['fa:16:3e:5b:fd:5e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1ef94ed5-fffa-41e9-b72f-4569354392c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b07fc82-c0d2-4e42-8878-3b0706731285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be4f554b-8352-4ab3-babd-ad834c691fd3, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=db59456c-ccd3-48b6-a889-d54a6e2cdf66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.824 162815 INFO neutron.agent.ovn.metadata.agent [-] Port db59456c-ccd3-48b6-a889-d54a6e2cdf66 in datapath a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf bound to our chassis
Oct 11 08:54:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.826 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf
Oct 11 08:54:45 compute-0 NetworkManager[44960]: <info>  [1760172885.8284] device (tapdb59456c-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:54:45 compute-0 NetworkManager[44960]: <info>  [1760172885.8295] device (tapdb59456c-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.844 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0ecbf9c8-85d6-42d8-a4db-33800a4740d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.845 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa1a65c6f-c1 in ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:54:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.847 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa1a65c6f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:54:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.847 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[339f59e4-cfc1-4b22-9824-1ec27924c0f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.848 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[372dc595-92c4-4acb-bddf-32f92bba052c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.861 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[3bcbf3f0-3736-41f2-adc6-682b90525d38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:45 compute-0 systemd-machined[215705]: New machine qemu-63-instance-00000038.
Oct 11 08:54:45 compute-0 systemd[1]: Started Virtual Machine qemu-63-instance-00000038.
Oct 11 08:54:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.892 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc74f96b-615e-4681-b182-db555a53ae14]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:45 compute-0 ovn_controller[152945]: 2025-10-11T08:54:45Z|00476|binding|INFO|Setting lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 ovn-installed in OVS
Oct 11 08:54:45 compute-0 ovn_controller[152945]: 2025-10-11T08:54:45Z|00477|binding|INFO|Setting lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 up in Southbound
Oct 11 08:54:45 compute-0 nova_compute[260935]: 2025-10-11 08:54:45.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.937 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3557d2-7a09-4c18-b696-f0dce1329b7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:45 compute-0 NetworkManager[44960]: <info>  [1760172885.9531] manager: (tapa1a65c6f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/228)
Oct 11 08:54:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:45.954 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd7797b-555c-4dc4-91d2-2336755b459b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 227 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 1.8 MiB/s wr, 340 op/s
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.000 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[21f72bfd-c2ed-4249-a954-00f16d96abbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.005 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c8027311-1122-4e8a-b09a-ab7d7af594c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:46 compute-0 NetworkManager[44960]: <info>  [1760172886.0331] device (tapa1a65c6f-c0): carrier: link connected
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.042 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6fc794-5730-42da-a780-cda12528ebd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.063 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d3448d-9015-4e56-a3f7-f3a669b377d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a65c6f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:05:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473073, 'reachable_time': 21336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319849, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.084 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d5bb2875-8f35-45f5-a2af-b3a5fde65dbe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:555'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473073, 'tstamp': 473073}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319850, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.104 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[03adcd42-5537-452b-b20a-1e47adf7fdc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a65c6f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:05:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473073, 'reachable_time': 21336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319851, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.151 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[caa10a3d-f0f6-45d4-abc0-a1873984bf73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.247 2 DEBUG nova.compute.manager [req-788ae14b-7d65-4cb1-8735-efc7853a35f6 req-9fbc1bc7-9e16-4d6f-92bf-fa90ba1c359c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received event network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.248 2 DEBUG oslo_concurrency.lockutils [req-788ae14b-7d65-4cb1-8735-efc7853a35f6 req-9fbc1bc7-9e16-4d6f-92bf-fa90ba1c359c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.248 2 DEBUG oslo_concurrency.lockutils [req-788ae14b-7d65-4cb1-8735-efc7853a35f6 req-9fbc1bc7-9e16-4d6f-92bf-fa90ba1c359c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.248 2 DEBUG oslo_concurrency.lockutils [req-788ae14b-7d65-4cb1-8735-efc7853a35f6 req-9fbc1bc7-9e16-4d6f-92bf-fa90ba1c359c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.249 2 DEBUG nova.compute.manager [req-788ae14b-7d65-4cb1-8735-efc7853a35f6 req-9fbc1bc7-9e16-4d6f-92bf-fa90ba1c359c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Processing event network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:54:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/336707106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.255 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee9e063-82ad-4fed-a10b-50832c158220]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.257 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a65c6f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.257 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.258 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1a65c6f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:46 compute-0 NetworkManager[44960]: <info>  [1760172886.2603] manager: (tapa1a65c6f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Oct 11 08:54:46 compute-0 kernel: tapa1a65c6f-c0: entered promiscuous mode
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.262 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1a65c6f-c0, col_values=(('external_ids', {'iface-id': '4bae176b-fbae-4a70-a041-16a7a5205899'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:46 compute-0 ovn_controller[152945]: 2025-10-11T08:54:46Z|00478|binding|INFO|Releasing lport 4bae176b-fbae-4a70-a041-16a7a5205899 from this chassis (sb_readonly=0)
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.264 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.265 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[16dea164-eada-4895-b029-43fb0386bf28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.266 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.pid.haproxy
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:54:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:46.267 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'env', 'PROCESS_TAG=haproxy-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.299 2 DEBUG oslo_concurrency.processutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.309 2 DEBUG nova.compute.provider_tree [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.333 2 DEBUG nova.scheduler.client.report [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.347 2 DEBUG nova.compute.manager [req-124e287c-6471-4752-b1c9-85f874b68ca1 req-ff17e4bb-3089-4a11-9581-8e7531eb24f6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received event network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.348 2 DEBUG oslo_concurrency.lockutils [req-124e287c-6471-4752-b1c9-85f874b68ca1 req-ff17e4bb-3089-4a11-9581-8e7531eb24f6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.349 2 DEBUG oslo_concurrency.lockutils [req-124e287c-6471-4752-b1c9-85f874b68ca1 req-ff17e4bb-3089-4a11-9581-8e7531eb24f6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.349 2 DEBUG oslo_concurrency.lockutils [req-124e287c-6471-4752-b1c9-85f874b68ca1 req-ff17e4bb-3089-4a11-9581-8e7531eb24f6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.350 2 DEBUG nova.compute.manager [req-124e287c-6471-4752-b1c9-85f874b68ca1 req-ff17e4bb-3089-4a11-9581-8e7531eb24f6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] No waiting events found dispatching network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.351 2 WARNING nova.compute.manager [req-124e287c-6471-4752-b1c9-85f874b68ca1 req-ff17e4bb-3089-4a11-9581-8e7531eb24f6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received unexpected event network-vif-plugged-b0464a2f-214f-420b-aa3e-70d2e3dce4ac for instance with vm_state deleted and task_state None.
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.351 2 DEBUG nova.compute.manager [req-124e287c-6471-4752-b1c9-85f874b68ca1 req-ff17e4bb-3089-4a11-9581-8e7531eb24f6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Received event network-vif-deleted-b0464a2f-214f-420b-aa3e-70d2e3dce4ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.386 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.427 2 INFO nova.scheduler.client.report [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Deleted allocations for instance d60a2dcd-7fb6-4bfe-8351-a38a71164f83
Oct 11 08:54:46 compute-0 nova_compute[260935]: 2025-10-11 08:54:46.530 2 DEBUG oslo_concurrency.lockutils [None req-d94bae46-c9d4-4e42-b937-79b2476b94a2 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "d60a2dcd-7fb6-4bfe-8351-a38a71164f83" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:46 compute-0 podman[319885]: 2025-10-11 08:54:46.735317104 +0000 UTC m=+0.068887826 container create f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:54:46 compute-0 systemd[1]: Started libpod-conmon-f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d.scope.
Oct 11 08:54:46 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:54:46 compute-0 podman[319885]: 2025-10-11 08:54:46.702194469 +0000 UTC m=+0.035765231 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c7b3c28e9a6a99763d7a9f3da3eb8fa1d4e961a8207f68a66aeb356e779987/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:54:46 compute-0 podman[319885]: 2025-10-11 08:54:46.817112256 +0000 UTC m=+0.150682988 container init f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:54:46 compute-0 podman[319885]: 2025-10-11 08:54:46.825097214 +0000 UTC m=+0.158667916 container start f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:54:46 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[319900]: [NOTICE]   (319904) : New worker (319906) forked
Oct 11 08:54:46 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[319900]: [NOTICE]   (319904) : Loading success.
Oct 11 08:54:47 compute-0 ceph-mon[74313]: pgmap v1516: 321 pgs: 321 active+clean; 227 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 1.8 MiB/s wr, 340 op/s
Oct 11 08:54:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/336707106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.657 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172887.6566904, 1ef94ed5-fffa-41e9-b72f-4569354392c6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.658 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] VM Started (Lifecycle Event)
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.661 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.664 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.668 2 INFO nova.virt.libvirt.driver [-] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Instance spawned successfully.
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.669 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.715 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.721 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.726 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.726 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.727 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.727 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.727 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.728 2 DEBUG nova.virt.libvirt.driver [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.760 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.761 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172887.6580708, 1ef94ed5-fffa-41e9-b72f-4569354392c6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.761 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] VM Paused (Lifecycle Event)
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.778 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.783 2 INFO nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Took 8.90 seconds to spawn the instance on the hypervisor.
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.783 2 DEBUG nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.785 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172887.6641626, 1ef94ed5-fffa-41e9-b72f-4569354392c6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.786 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] VM Resumed (Lifecycle Event)
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.822 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.827 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.847 2 INFO nova.compute.manager [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Took 9.95 seconds to build instance.
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.872 2 DEBUG oslo_concurrency.lockutils [None req-1689977f-45c5-42f6-96ac-e12f067b80ad d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:47 compute-0 nova_compute[260935]: 2025-10-11 08:54:47.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 181 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 1.8 MiB/s wr, 377 op/s
Oct 11 08:54:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:54:48 compute-0 nova_compute[260935]: 2025-10-11 08:54:48.455 2 DEBUG nova.compute.manager [req-590dc3fd-8b97-4789-ae8b-dc8b5791cdad req-40be9c89-2e34-4bdc-afec-7661b1854608 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received event network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:48 compute-0 nova_compute[260935]: 2025-10-11 08:54:48.456 2 DEBUG oslo_concurrency.lockutils [req-590dc3fd-8b97-4789-ae8b-dc8b5791cdad req-40be9c89-2e34-4bdc-afec-7661b1854608 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:48 compute-0 nova_compute[260935]: 2025-10-11 08:54:48.458 2 DEBUG oslo_concurrency.lockutils [req-590dc3fd-8b97-4789-ae8b-dc8b5791cdad req-40be9c89-2e34-4bdc-afec-7661b1854608 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:48 compute-0 nova_compute[260935]: 2025-10-11 08:54:48.458 2 DEBUG oslo_concurrency.lockutils [req-590dc3fd-8b97-4789-ae8b-dc8b5791cdad req-40be9c89-2e34-4bdc-afec-7661b1854608 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:48 compute-0 nova_compute[260935]: 2025-10-11 08:54:48.459 2 DEBUG nova.compute.manager [req-590dc3fd-8b97-4789-ae8b-dc8b5791cdad req-40be9c89-2e34-4bdc-afec-7661b1854608 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] No waiting events found dispatching network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:48 compute-0 nova_compute[260935]: 2025-10-11 08:54:48.460 2 WARNING nova.compute.manager [req-590dc3fd-8b97-4789-ae8b-dc8b5791cdad req-40be9c89-2e34-4bdc-afec-7661b1854608 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received unexpected event network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 for instance with vm_state active and task_state None.
Oct 11 08:54:49 compute-0 ceph-mon[74313]: pgmap v1517: 321 pgs: 321 active+clean; 181 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 1.8 MiB/s wr, 377 op/s
Oct 11 08:54:49 compute-0 nova_compute[260935]: 2025-10-11 08:54:49.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:49 compute-0 nova_compute[260935]: 2025-10-11 08:54:49.941 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "1ef94ed5-fffa-41e9-b72f-4569354392c6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:49 compute-0 nova_compute[260935]: 2025-10-11 08:54:49.942 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:49 compute-0 nova_compute[260935]: 2025-10-11 08:54:49.942 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:49 compute-0 nova_compute[260935]: 2025-10-11 08:54:49.942 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:49 compute-0 nova_compute[260935]: 2025-10-11 08:54:49.943 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:49 compute-0 nova_compute[260935]: 2025-10-11 08:54:49.944 2 INFO nova.compute.manager [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Terminating instance
Oct 11 08:54:49 compute-0 nova_compute[260935]: 2025-10-11 08:54:49.945 2 DEBUG nova.compute.manager [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:54:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 181 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 288 op/s
Oct 11 08:54:50 compute-0 kernel: tapdb59456c-cc (unregistering): left promiscuous mode
Oct 11 08:54:50 compute-0 NetworkManager[44960]: <info>  [1760172890.0119] device (tapdb59456c-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00479|binding|INFO|Releasing lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 from this chassis (sb_readonly=0)
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00480|binding|INFO|Setting lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 down in Southbound
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00481|binding|INFO|Removing iface tapdb59456c-cc ovn-installed in OVS
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.085 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:fd:5e 10.100.0.10'], port_security=['fa:16:3e:5b:fd:5e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1ef94ed5-fffa-41e9-b72f-4569354392c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b07fc82-c0d2-4e42-8878-3b0706731285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be4f554b-8352-4ab3-babd-ad834c691fd3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=db59456c-ccd3-48b6-a889-d54a6e2cdf66) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.086 162815 INFO neutron.agent.ovn.metadata.agent [-] Port db59456c-ccd3-48b6-a889-d54a6e2cdf66 in datapath a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf unbound from our chassis
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.088 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.089 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[56aa557e-a1e0-4213-b0b7-9dc7617ffc15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.090 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf namespace which is not needed anymore
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000038.scope: Deactivated successfully.
Oct 11 08:54:50 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000038.scope: Consumed 3.938s CPU time.
Oct 11 08:54:50 compute-0 systemd-machined[215705]: Machine qemu-63-instance-00000038 terminated.
Oct 11 08:54:50 compute-0 kernel: tapdb59456c-cc: entered promiscuous mode
Oct 11 08:54:50 compute-0 NetworkManager[44960]: <info>  [1760172890.1714] manager: (tapdb59456c-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/230)
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00482|binding|INFO|Claiming lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 for this chassis.
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00483|binding|INFO|db59456c-ccd3-48b6-a889-d54a6e2cdf66: Claiming fa:16:3e:5b:fd:5e 10.100.0.10
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 kernel: tapdb59456c-cc (unregistering): left promiscuous mode
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.194 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:fd:5e 10.100.0.10'], port_security=['fa:16:3e:5b:fd:5e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1ef94ed5-fffa-41e9-b72f-4569354392c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b07fc82-c0d2-4e42-8878-3b0706731285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be4f554b-8352-4ab3-babd-ad834c691fd3, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=db59456c-ccd3-48b6-a889-d54a6e2cdf66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.204 2 INFO nova.virt.libvirt.driver [-] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Instance destroyed successfully.
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00484|binding|INFO|Setting lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 ovn-installed in OVS
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00485|binding|INFO|Setting lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 up in Southbound
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.204 2 DEBUG nova.objects.instance [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lazy-loading 'resources' on Instance uuid 1ef94ed5-fffa-41e9-b72f-4569354392c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00486|binding|INFO|Releasing lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 from this chassis (sb_readonly=1)
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00487|if_status|INFO|Dropped 3 log messages in last 47 seconds (most recently, 47 seconds ago) due to excessive rate
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00488|if_status|INFO|Not setting lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 down as sb is readonly
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00489|binding|INFO|Removing iface tapdb59456c-cc ovn-installed in OVS
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00490|binding|INFO|Releasing lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 from this chassis (sb_readonly=0)
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00491|binding|INFO|Setting lport db59456c-ccd3-48b6-a889-d54a6e2cdf66 down in Southbound
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.231 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:fd:5e 10.100.0.10'], port_security=['fa:16:3e:5b:fd:5e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1ef94ed5-fffa-41e9-b72f-4569354392c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b34d40b1586348c3be3d9142dfe1770d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b07fc82-c0d2-4e42-8878-3b0706731285', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be4f554b-8352-4ab3-babd-ad834c691fd3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=db59456c-ccd3-48b6-a889-d54a6e2cdf66) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.237 2 DEBUG nova.virt.libvirt.vif [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:54:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1487174086',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1487174086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1487174086',id=56,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:54:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b34d40b1586348c3be3d9142dfe1770d',ramdisk_id='',reservation_id='r-9kmel6v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-253514738',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-253514738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:54:47Z,user_data=None,user_id='d4dce65b39be45739408ca70d672df84',uuid=1ef94ed5-fffa-41e9-b72f-4569354392c6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.237 2 DEBUG nova.network.os_vif_util [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converting VIF {"id": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "address": "fa:16:3e:5b:fd:5e", "network": {"id": "a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1898587159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b34d40b1586348c3be3d9142dfe1770d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb59456c-cc", "ovs_interfaceid": "db59456c-ccd3-48b6-a889-d54a6e2cdf66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.238 2 DEBUG nova.network.os_vif_util [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:fd:5e,bridge_name='br-int',has_traffic_filtering=True,id=db59456c-ccd3-48b6-a889-d54a6e2cdf66,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb59456c-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.238 2 DEBUG os_vif [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:fd:5e,bridge_name='br-int',has_traffic_filtering=True,id=db59456c-ccd3-48b6-a889-d54a6e2cdf66,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb59456c-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.240 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb59456c-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.247 2 INFO os_vif [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:fd:5e,bridge_name='br-int',has_traffic_filtering=True,id=db59456c-ccd3-48b6-a889-d54a6e2cdf66,network=Network(a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb59456c-cc')
Oct 11 08:54:50 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[319900]: [NOTICE]   (319904) : haproxy version is 2.8.14-c23fe91
Oct 11 08:54:50 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[319900]: [NOTICE]   (319904) : path to executable is /usr/sbin/haproxy
Oct 11 08:54:50 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[319900]: [WARNING]  (319904) : Exiting Master process...
Oct 11 08:54:50 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[319900]: [ALERT]    (319904) : Current worker (319906) exited with code 143 (Terminated)
Oct 11 08:54:50 compute-0 neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf[319900]: [WARNING]  (319904) : All workers exited. Exiting... (0)
Oct 11 08:54:50 compute-0 systemd[1]: libpod-f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d.scope: Deactivated successfully.
Oct 11 08:54:50 compute-0 podman[319990]: 2025-10-11 08:54:50.298790274 +0000 UTC m=+0.076313277 container died f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.320 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.321 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.321 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.321 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.322 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.324 2 INFO nova.compute.manager [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Terminating instance
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.325 2 DEBUG nova.compute.manager [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:54:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d-userdata-shm.mount: Deactivated successfully.
Oct 11 08:54:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-47c7b3c28e9a6a99763d7a9f3da3eb8fa1d4e961a8207f68a66aeb356e779987-merged.mount: Deactivated successfully.
Oct 11 08:54:50 compute-0 podman[319990]: 2025-10-11 08:54:50.352150285 +0000 UTC m=+0.129673278 container cleanup f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:54:50 compute-0 systemd[1]: libpod-conmon-f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d.scope: Deactivated successfully.
Oct 11 08:54:50 compute-0 kernel: tap5a02bb17-5d (unregistering): left promiscuous mode
Oct 11 08:54:50 compute-0 NetworkManager[44960]: <info>  [1760172890.3876] device (tap5a02bb17-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00492|binding|INFO|Releasing lport 5a02bb17-5d97-4eef-a91d-245b70cc5e1b from this chassis (sb_readonly=0)
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00493|binding|INFO|Setting lport 5a02bb17-5d97-4eef-a91d-245b70cc5e1b down in Southbound
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00494|binding|INFO|Removing iface tap5a02bb17-5d ovn-installed in OVS
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.424 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:7f:c3 10.100.0.14'], port_security=['fa:16:3e:2c:7f:c3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e263661e-e9c2-4a4d-a6e5-5fc8a7353f50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72464893-0f19-40a9-84d1-392a298e50b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7a37bbdce5194d96bed20d4162e25337', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5c4bd12-f795-4994-bb5e-cd2cc665fe9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dc3e696-a953-4227-9499-d68d54f25c77, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=5a02bb17-5d97-4eef-a91d-245b70cc5e1b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:50 compute-0 podman[320041]: 2025-10-11 08:54:50.431831527 +0000 UTC m=+0.055896954 container remove f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 11 08:54:50 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000036.scope: Deactivated successfully.
Oct 11 08:54:50 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000036.scope: Consumed 12.118s CPU time.
Oct 11 08:54:50 compute-0 systemd-machined[215705]: Machine qemu-61-instance-00000036 terminated.
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.446 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb60fa2-b3ca-49bf-adf5-f37a6c36bc15]: (4, ('Sat Oct 11 08:54:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf (f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d)\nf9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d\nSat Oct 11 08:54:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf (f9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d)\nf9bc4e79d7c45187cc50bc28fd2e966162c326981e6dd2cfe73dd9a5e7742e0d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.454 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe891f1-df33-47ea-a23b-d9bc4eeedfba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.455 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a65c6f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 kernel: tapa1a65c6f-c0: left promiscuous mode
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.482 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[88264f71-5be0-406e-981a-09d22ba9152a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.504 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "21a71d10-e13b-47fe-88fd-ec9597f7902e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.504 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.505 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.505 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.505 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.506 2 INFO nova.compute.manager [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Terminating instance
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.507 2 DEBUG nova.compute.manager [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.515 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c810eec3-0ae3-4c73-929e-d2c9ee6bb3b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.516 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[62c9f84b-9f83-438e-ae2a-1de3c2aa0548]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.551 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[29ba1dff-e221-4add-bc24-17b7da0cc5a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473063, 'reachable_time': 38189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320065, 'error': None, 'target': 'ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:50 compute-0 systemd[1]: run-netns-ovnmeta\x2da1a65c6f\x2dcd0e\x2d4ac5\x2db9de\x2d21e41a32ffbf.mount: Deactivated successfully.
Oct 11 08:54:50 compute-0 kernel: tap70c55a7a-6f (unregistering): left promiscuous mode
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.556 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.556 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ec8a83-c1bb-4f77-b70f-510d9ef21695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.560 162815 INFO neutron.agent.ovn.metadata.agent [-] Port db59456c-ccd3-48b6-a889-d54a6e2cdf66 in datapath a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf unbound from our chassis
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.561 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:54:50 compute-0 NetworkManager[44960]: <info>  [1760172890.5620] device (tap70c55a7a-6f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.562 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c75cab0c-2012-408a-861e-4d2780e7f4f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.562 162815 INFO neutron.agent.ovn.metadata.agent [-] Port db59456c-ccd3-48b6-a889-d54a6e2cdf66 in datapath a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf unbound from our chassis
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.563 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a1a65c6f-cd0e-4ac5-b9de-21e41a32ffbf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.564 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[07dc8b78-ef35-4908-9e48-9e5c8ed0b917]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.564 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 5a02bb17-5d97-4eef-a91d-245b70cc5e1b in datapath 72464893-0f19-40a9-84d1-392a298e50b9 unbound from our chassis
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.565 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72464893-0f19-40a9-84d1-392a298e50b9
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00495|binding|INFO|Releasing lport 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b from this chassis (sb_readonly=0)
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00496|binding|INFO|Setting lport 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b down in Southbound
Oct 11 08:54:50 compute-0 ovn_controller[152945]: 2025-10-11T08:54:50Z|00497|binding|INFO|Removing iface tap70c55a7a-6f ovn-installed in OVS
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.579 2 INFO nova.virt.libvirt.driver [-] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Instance destroyed successfully.
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.579 2 DEBUG nova.objects.instance [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'resources' on Instance uuid e263661e-e9c2-4a4d-a6e5-5fc8a7353f50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.590 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:da:34 10.100.0.7'], port_security=['fa:16:3e:f5:da:34 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '21a71d10-e13b-47fe-88fd-ec9597f7902e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72464893-0f19-40a9-84d1-392a298e50b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7a37bbdce5194d96bed20d4162e25337', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5c4bd12-f795-4994-bb5e-cd2cc665fe9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dc3e696-a953-4227-9499-d68d54f25c77, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.591 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c7eb0d08-b6b4-4628-8b23-508c039857fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.612 2 DEBUG nova.virt.libvirt.vif [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:54:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-2',id=54,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-11T08:54:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vir
tio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-ListServersNegativeTestJSON-567006104-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:54:38Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=e263661e-e9c2-4a4d-a6e5-5fc8a7353f50,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.613 2 DEBUG nova.network.os_vif_util [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "address": "fa:16:3e:2c:7f:c3", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a02bb17-5d", "ovs_interfaceid": "5a02bb17-5d97-4eef-a91d-245b70cc5e1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.613 2 DEBUG nova.network.os_vif_util [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:7f:c3,bridge_name='br-int',has_traffic_filtering=True,id=5a02bb17-5d97-4eef-a91d-245b70cc5e1b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a02bb17-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.614 2 DEBUG os_vif [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:7f:c3,bridge_name='br-int',has_traffic_filtering=True,id=5a02bb17-5d97-4eef-a91d-245b70cc5e1b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a02bb17-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.615 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a02bb17-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.626 2 INFO os_vif [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:7f:c3,bridge_name='br-int',has_traffic_filtering=True,id=5a02bb17-5d97-4eef-a91d-245b70cc5e1b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a02bb17-5d')
Oct 11 08:54:50 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000037.scope: Deactivated successfully.
Oct 11 08:54:50 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000037.scope: Consumed 11.273s CPU time.
Oct 11 08:54:50 compute-0 systemd-machined[215705]: Machine qemu-62-instance-00000037 terminated.
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.638 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e366586a-2691-4028-a7cf-042313229021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.641 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c48ea3-59e0-4709-82e7-6f8074d707c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.689 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b6ca9b-c904-4398-80fa-3fd9420bf5ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.713 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0d03ae91-a6dd-46c7-9491-ab7014dc1d81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72464893-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:78:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472069, 'reachable_time': 41754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320107, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.743 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f572d752-e84a-4b91-982f-1a7bb6769bd3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap72464893-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472087, 'tstamp': 472087}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320110, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap72464893-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 472092, 'tstamp': 472092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320110, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.748 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72464893-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.756 2 INFO nova.virt.libvirt.driver [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Deleting instance files /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6_del
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.756 2 INFO nova.virt.libvirt.driver [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Deletion of /var/lib/nova/instances/1ef94ed5-fffa-41e9-b72f-4569354392c6_del complete
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.757 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72464893-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.758 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.758 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72464893-00, col_values=(('external_ids', {'iface-id': 'fba8f4e1-8635-4527-85f3-29ce2a1033b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.760 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.763 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b in datapath 72464893-0f19-40a9-84d1-392a298e50b9 unbound from our chassis
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.764 2 INFO nova.virt.libvirt.driver [-] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Instance destroyed successfully.
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.764 2 DEBUG nova.objects.instance [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lazy-loading 'resources' on Instance uuid 21a71d10-e13b-47fe-88fd-ec9597f7902e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.766 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72464893-0f19-40a9-84d1-392a298e50b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.768 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3321906e-1382-490f-b19d-795667a26121]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:50.769 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9 namespace which is not needed anymore
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.780 2 DEBUG nova.virt.libvirt.vif [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:54:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-123447316',display_name='tempest-ListServersNegativeTestJSON-server-123447316-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-123447316-3',id=55,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-10-11T08:54:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7a37bbdce5194d96bed20d4162e25337',ramdisk_id='',reservation_id='r-yl5fomq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vir
tio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-567006104',owner_user_name='tempest-ListServersNegativeTestJSON-567006104-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:54:40Z,user_data=None,user_id='5c02b9d6bdae439c9f1e49ae63c5c5e3',uuid=21a71d10-e13b-47fe-88fd-ec9597f7902e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.781 2 DEBUG nova.network.os_vif_util [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converting VIF {"id": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "address": "fa:16:3e:f5:da:34", "network": {"id": "72464893-0f19-40a9-84d1-392a298e50b9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1756260705-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7a37bbdce5194d96bed20d4162e25337", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70c55a7a-6f", "ovs_interfaceid": "70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.781 2 DEBUG nova.network.os_vif_util [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:da:34,bridge_name='br-int',has_traffic_filtering=True,id=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70c55a7a-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.782 2 DEBUG os_vif [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:da:34,bridge_name='br-int',has_traffic_filtering=True,id=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70c55a7a-6f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.786 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap70c55a7a-6f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.793 2 INFO os_vif [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:da:34,bridge_name='br-int',has_traffic_filtering=True,id=70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b,network=Network(72464893-0f19-40a9-84d1-392a298e50b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70c55a7a-6f')
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.831 2 INFO nova.compute.manager [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Took 0.89 seconds to destroy the instance on the hypervisor.
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.832 2 DEBUG oslo.service.loopingcall [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.833 2 DEBUG nova.compute.manager [-] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:54:50 compute-0 nova_compute[260935]: 2025-10-11 08:54:50.833 2 DEBUG nova.network.neutron [-] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:54:50 compute-0 neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9[319091]: [NOTICE]   (319100) : haproxy version is 2.8.14-c23fe91
Oct 11 08:54:50 compute-0 neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9[319091]: [NOTICE]   (319100) : path to executable is /usr/sbin/haproxy
Oct 11 08:54:50 compute-0 neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9[319091]: [WARNING]  (319100) : Exiting Master process...
Oct 11 08:54:50 compute-0 neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9[319091]: [ALERT]    (319100) : Current worker (319103) exited with code 143 (Terminated)
Oct 11 08:54:50 compute-0 neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9[319091]: [WARNING]  (319100) : All workers exited. Exiting... (0)
Oct 11 08:54:50 compute-0 systemd[1]: libpod-291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676.scope: Deactivated successfully.
Oct 11 08:54:50 compute-0 podman[320158]: 2025-10-11 08:54:50.993891323 +0000 UTC m=+0.082132723 container died 291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:54:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676-userdata-shm.mount: Deactivated successfully.
Oct 11 08:54:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-601b7f68ddbb33e7d9c40d40c0537063b097ca0468dd1d3c4f0f1fbe13b69382-merged.mount: Deactivated successfully.
Oct 11 08:54:51 compute-0 podman[320158]: 2025-10-11 08:54:51.043437806 +0000 UTC m=+0.131679206 container cleanup 291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.064 2 INFO nova.virt.libvirt.driver [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Deleting instance files /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_del
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.066 2 INFO nova.virt.libvirt.driver [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Deletion of /var/lib/nova/instances/e263661e-e9c2-4a4d-a6e5-5fc8a7353f50_del complete
Oct 11 08:54:51 compute-0 systemd[1]: libpod-conmon-291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676.scope: Deactivated successfully.
Oct 11 08:54:51 compute-0 ceph-mon[74313]: pgmap v1518: 321 pgs: 321 active+clean; 181 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 288 op/s
Oct 11 08:54:51 compute-0 podman[320190]: 2025-10-11 08:54:51.141097871 +0000 UTC m=+0.067996880 container remove 291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.142 2 INFO nova.compute.manager [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.142 2 DEBUG oslo.service.loopingcall [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.143 2 DEBUG nova.compute.manager [-] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.143 2 DEBUG nova.network.neutron [-] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:54:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.149 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[09e39d31-b729-45e8-b6d9-865706a28a6d]: (4, ('Sat Oct 11 08:54:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9 (291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676)\n291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676\nSat Oct 11 08:54:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9 (291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676)\n291a72ea54cbd11b08a42f9445c0733f327b2d0109a7ed73d9c34352184c0676\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.152 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b3828b9b-8804-496d-89db-c8f9bb8b27b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.153 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72464893-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:54:51 compute-0 kernel: tap72464893-00: left promiscuous mode
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.193 2 DEBUG nova.compute.manager [req-4c68f7cc-5de5-4f5a-a6bc-3db3caca0ab4 req-8f04d767-2ffb-4e57-9a2a-0e4e7cad7ae7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received event network-vif-unplugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.194 2 DEBUG oslo_concurrency.lockutils [req-4c68f7cc-5de5-4f5a-a6bc-3db3caca0ab4 req-8f04d767-2ffb-4e57-9a2a-0e4e7cad7ae7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.194 2 DEBUG oslo_concurrency.lockutils [req-4c68f7cc-5de5-4f5a-a6bc-3db3caca0ab4 req-8f04d767-2ffb-4e57-9a2a-0e4e7cad7ae7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.195 2 DEBUG oslo_concurrency.lockutils [req-4c68f7cc-5de5-4f5a-a6bc-3db3caca0ab4 req-8f04d767-2ffb-4e57-9a2a-0e4e7cad7ae7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.195 2 DEBUG nova.compute.manager [req-4c68f7cc-5de5-4f5a-a6bc-3db3caca0ab4 req-8f04d767-2ffb-4e57-9a2a-0e4e7cad7ae7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] No waiting events found dispatching network-vif-unplugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.195 2 DEBUG nova.compute.manager [req-4c68f7cc-5de5-4f5a-a6bc-3db3caca0ab4 req-8f04d767-2ffb-4e57-9a2a-0e4e7cad7ae7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received event network-vif-unplugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.200 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[05135659-4d18-4e3c-aa5f-f289705d7d3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.252 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a69a76eb-1518-4624-94da-4aaa087122de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.254 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f5fcd1d7-7f8d-44de-bf2b-349e2753a0d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.254 2 INFO nova.virt.libvirt.driver [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Deleting instance files /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e_del
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.255 2 INFO nova.virt.libvirt.driver [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Deletion of /var/lib/nova/instances/21a71d10-e13b-47fe-88fd-ec9597f7902e_del complete
Oct 11 08:54:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.279 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f827b8-f5db-492d-8535-f63082ac6a87]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 472058, 'reachable_time': 20838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320205, 'error': None, 'target': 'ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.283 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-72464893-0f19-40a9-84d1-392a298e50b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:54:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:54:51.283 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[c008a465-7511-4c91-ba46-1e7bd3fd093e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.322 2 INFO nova.compute.manager [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.322 2 DEBUG oslo.service.loopingcall [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.322 2 DEBUG nova.compute.manager [-] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:54:51 compute-0 nova_compute[260935]: 2025-10-11 08:54:51.322 2 DEBUG nova.network.neutron [-] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:54:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d72464893\x2d0f19\x2d40a9\x2d84d1\x2d392a298e50b9.mount: Deactivated successfully.
Oct 11 08:54:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 181 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 288 op/s
Oct 11 08:54:52 compute-0 nova_compute[260935]: 2025-10-11 08:54:52.713 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172877.712025, ceef84cf-9df6-4484-862c-624eab05f1fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:52 compute-0 nova_compute[260935]: 2025-10-11 08:54:52.713 2 INFO nova.compute.manager [-] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] VM Stopped (Lifecycle Event)
Oct 11 08:54:52 compute-0 nova_compute[260935]: 2025-10-11 08:54:52.771 2 DEBUG nova.compute.manager [None req-e37e818c-94dc-45dc-9069-a1aba3a8392e - - - - - -] [instance: ceef84cf-9df6-4484-862c-624eab05f1fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:52 compute-0 nova_compute[260935]: 2025-10-11 08:54:52.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:53 compute-0 ceph-mon[74313]: pgmap v1519: 321 pgs: 321 active+clean; 181 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 288 op/s
Oct 11 08:54:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:54:53 compute-0 nova_compute[260935]: 2025-10-11 08:54:53.583 2 DEBUG nova.compute.manager [req-2aeb6158-ecd1-4bb6-a257-cd4ea0476950 req-581df741-160e-44e1-831a-e2f3f4149067 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received event network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:53 compute-0 nova_compute[260935]: 2025-10-11 08:54:53.583 2 DEBUG oslo_concurrency.lockutils [req-2aeb6158-ecd1-4bb6-a257-cd4ea0476950 req-581df741-160e-44e1-831a-e2f3f4149067 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:53 compute-0 nova_compute[260935]: 2025-10-11 08:54:53.584 2 DEBUG oslo_concurrency.lockutils [req-2aeb6158-ecd1-4bb6-a257-cd4ea0476950 req-581df741-160e-44e1-831a-e2f3f4149067 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:53 compute-0 nova_compute[260935]: 2025-10-11 08:54:53.584 2 DEBUG oslo_concurrency.lockutils [req-2aeb6158-ecd1-4bb6-a257-cd4ea0476950 req-581df741-160e-44e1-831a-e2f3f4149067 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:53 compute-0 nova_compute[260935]: 2025-10-11 08:54:53.584 2 DEBUG nova.compute.manager [req-2aeb6158-ecd1-4bb6-a257-cd4ea0476950 req-581df741-160e-44e1-831a-e2f3f4149067 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] No waiting events found dispatching network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:54:53 compute-0 nova_compute[260935]: 2025-10-11 08:54:53.585 2 WARNING nova.compute.manager [req-2aeb6158-ecd1-4bb6-a257-cd4ea0476950 req-581df741-160e-44e1-831a-e2f3f4149067 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received unexpected event network-vif-plugged-db59456c-ccd3-48b6-a889-d54a6e2cdf66 for instance with vm_state active and task_state deleting.
Oct 11 08:54:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 8.0 MiB/s rd, 3.9 MiB/s wr, 490 op/s
Oct 11 08:54:54 compute-0 nova_compute[260935]: 2025-10-11 08:54:54.770 2 DEBUG nova.network.neutron [-] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:54:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:54:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:54:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:54:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:54:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:54:54 compute-0 nova_compute[260935]: 2025-10-11 08:54:54.801 2 DEBUG nova.network.neutron [-] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:54 compute-0 nova_compute[260935]: 2025-10-11 08:54:54.806 2 DEBUG nova.network.neutron [-] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:54:54 compute-0 nova_compute[260935]: 2025-10-11 08:54:54.807 2 INFO nova.compute.manager [-] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Took 3.97 seconds to deallocate network for instance.
Oct 11 08:54:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:54:54
Oct 11 08:54:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:54:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:54:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'volumes', 'backups', 'vms', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', '.mgr', 'default.rgw.control']
Oct 11 08:54:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:54:54 compute-0 nova_compute[260935]: 2025-10-11 08:54:54.847 2 INFO nova.compute.manager [-] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Took 3.70 seconds to deallocate network for instance.
Oct 11 08:54:54 compute-0 nova_compute[260935]: 2025-10-11 08:54:54.855 2 INFO nova.compute.manager [-] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Took 3.53 seconds to deallocate network for instance.
Oct 11 08:54:54 compute-0 nova_compute[260935]: 2025-10-11 08:54:54.864 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:54 compute-0 nova_compute[260935]: 2025-10-11 08:54:54.865 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:54 compute-0 nova_compute[260935]: 2025-10-11 08:54:54.932 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:54 compute-0 nova_compute[260935]: 2025-10-11 08:54:54.941 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:54:54 compute-0 nova_compute[260935]: 2025-10-11 08:54:54.997 2 DEBUG oslo_concurrency.processutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:55 compute-0 ceph-mon[74313]: pgmap v1520: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 8.0 MiB/s rd, 3.9 MiB/s wr, 490 op/s
Oct 11 08:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:54:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/534161292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:55 compute-0 nova_compute[260935]: 2025-10-11 08:54:55.508 2 DEBUG oslo_concurrency.processutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:55 compute-0 nova_compute[260935]: 2025-10-11 08:54:55.516 2 DEBUG nova.compute.provider_tree [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:54:55 compute-0 nova_compute[260935]: 2025-10-11 08:54:55.539 2 DEBUG nova.scheduler.client.report [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:54:55 compute-0 nova_compute[260935]: 2025-10-11 08:54:55.569 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:55 compute-0 nova_compute[260935]: 2025-10-11 08:54:55.573 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:55 compute-0 nova_compute[260935]: 2025-10-11 08:54:55.622 2 INFO nova.scheduler.client.report [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Deleted allocations for instance 1ef94ed5-fffa-41e9-b72f-4569354392c6
Oct 11 08:54:55 compute-0 nova_compute[260935]: 2025-10-11 08:54:55.707 2 DEBUG nova.compute.manager [req-73482234-6fa9-45e6-aa4e-4ad597d7c942 req-4dce729d-be92-4738-8b5d-4e4a5e1dd5f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Received event network-vif-deleted-db59456c-ccd3-48b6-a889-d54a6e2cdf66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:55 compute-0 nova_compute[260935]: 2025-10-11 08:54:55.707 2 DEBUG nova.compute.manager [req-73482234-6fa9-45e6-aa4e-4ad597d7c942 req-4dce729d-be92-4738-8b5d-4e4a5e1dd5f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Received event network-vif-deleted-70c55a7a-6ff4-4ca8-a4b6-60e79bcea74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:55 compute-0 nova_compute[260935]: 2025-10-11 08:54:55.708 2 DEBUG nova.compute.manager [req-73482234-6fa9-45e6-aa4e-4ad597d7c942 req-4dce729d-be92-4738-8b5d-4e4a5e1dd5f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Received event network-vif-deleted-5a02bb17-5d97-4eef-a91d-245b70cc5e1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:54:55 compute-0 nova_compute[260935]: 2025-10-11 08:54:55.710 2 DEBUG oslo_concurrency.processutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:55 compute-0 nova_compute[260935]: 2025-10-11 08:54:55.767 2 DEBUG oslo_concurrency.lockutils [None req-1681816a-f1db-452b-a156-0db80137229d d4dce65b39be45739408ca70d672df84 b34d40b1586348c3be3d9142dfe1770d - - default default] Lock "1ef94ed5-fffa-41e9-b72f-4569354392c6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:55 compute-0 nova_compute[260935]: 2025-10-11 08:54:55.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 238 op/s
Oct 11 08:54:56 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/534161292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2297362032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:56 compute-0 nova_compute[260935]: 2025-10-11 08:54:56.196 2 DEBUG oslo_concurrency.processutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:56 compute-0 nova_compute[260935]: 2025-10-11 08:54:56.203 2 DEBUG nova.compute.provider_tree [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:54:56 compute-0 nova_compute[260935]: 2025-10-11 08:54:56.217 2 DEBUG nova.scheduler.client.report [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:54:56 compute-0 nova_compute[260935]: 2025-10-11 08:54:56.241 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:56 compute-0 nova_compute[260935]: 2025-10-11 08:54:56.243 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:54:56 compute-0 nova_compute[260935]: 2025-10-11 08:54:56.280 2 INFO nova.scheduler.client.report [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Deleted allocations for instance 21a71d10-e13b-47fe-88fd-ec9597f7902e
Oct 11 08:54:56 compute-0 nova_compute[260935]: 2025-10-11 08:54:56.313 2 DEBUG oslo_concurrency.processutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:54:56 compute-0 nova_compute[260935]: 2025-10-11 08:54:56.365 2 DEBUG oslo_concurrency.lockutils [None req-fdae6184-c540-45c9-8b9a-255e63108ea0 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "21a71d10-e13b-47fe-88fd-ec9597f7902e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:54:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1666318733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:56 compute-0 nova_compute[260935]: 2025-10-11 08:54:56.792 2 DEBUG oslo_concurrency.processutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:54:56 compute-0 nova_compute[260935]: 2025-10-11 08:54:56.802 2 DEBUG nova.compute.provider_tree [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:54:56 compute-0 nova_compute[260935]: 2025-10-11 08:54:56.884 2 DEBUG nova.scheduler.client.report [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:54:56 compute-0 nova_compute[260935]: 2025-10-11 08:54:56.910 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:56 compute-0 nova_compute[260935]: 2025-10-11 08:54:56.953 2 INFO nova.scheduler.client.report [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Deleted allocations for instance e263661e-e9c2-4a4d-a6e5-5fc8a7353f50
Oct 11 08:54:57 compute-0 nova_compute[260935]: 2025-10-11 08:54:57.041 2 DEBUG oslo_concurrency.lockutils [None req-1daaf7ef-40ad-467c-bc43-803ed00a5e23 5c02b9d6bdae439c9f1e49ae63c5c5e3 7a37bbdce5194d96bed20d4162e25337 - - default default] Lock "e263661e-e9c2-4a4d-a6e5-5fc8a7353f50" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:54:57 compute-0 ceph-mon[74313]: pgmap v1521: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 238 op/s
Oct 11 08:54:57 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2297362032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:57 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1666318733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:54:57 compute-0 nova_compute[260935]: 2025-10-11 08:54:57.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:54:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 238 op/s
Oct 11 08:54:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:54:58 compute-0 podman[320272]: 2025-10-11 08:54:58.803782389 +0000 UTC m=+0.088020101 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:54:58 compute-0 nova_compute[260935]: 2025-10-11 08:54:58.865 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172883.861947, d60a2dcd-7fb6-4bfe-8351-a38a71164f83 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:54:58 compute-0 nova_compute[260935]: 2025-10-11 08:54:58.866 2 INFO nova.compute.manager [-] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] VM Stopped (Lifecycle Event)
Oct 11 08:54:58 compute-0 nova_compute[260935]: 2025-10-11 08:54:58.893 2 DEBUG nova.compute.manager [None req-312d7f9a-6db6-44bf-ae03-5563ce79b866 - - - - - -] [instance: d60a2dcd-7fb6-4bfe-8351-a38a71164f83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:54:59 compute-0 ceph-mon[74313]: pgmap v1522: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 238 op/s
Oct 11 08:54:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Oct 11 08:55:00 compute-0 sshd-session[320291]: Invalid user profesor from 155.4.244.179 port 27449
Oct 11 08:55:00 compute-0 sshd-session[320291]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 08:55:00 compute-0 sshd-session[320291]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 08:55:00 compute-0 nova_compute[260935]: 2025-10-11 08:55:00.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:55:00 compute-0 nova_compute[260935]: 2025-10-11 08:55:00.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:01 compute-0 ceph-mon[74313]: pgmap v1523: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Oct 11 08:55:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Oct 11 08:55:02 compute-0 sshd-session[320291]: Failed password for invalid user profesor from 155.4.244.179 port 27449 ssh2
Oct 11 08:55:02 compute-0 sshd-session[320291]: Received disconnect from 155.4.244.179 port 27449:11: Bye Bye [preauth]
Oct 11 08:55:02 compute-0 sshd-session[320291]: Disconnected from invalid user profesor 155.4.244.179 port 27449 [preauth]
Oct 11 08:55:02 compute-0 nova_compute[260935]: 2025-10-11 08:55:02.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:03 compute-0 ceph-mon[74313]: pgmap v1524: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Oct 11 08:55:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:55:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:03.514 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:55:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:03.515 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 08:55:03 compute-0 nova_compute[260935]: 2025-10-11 08:55:03.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:03 compute-0 nova_compute[260935]: 2025-10-11 08:55:03.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:55:03 compute-0 nova_compute[260935]: 2025-10-11 08:55:03.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:55:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Oct 11 08:55:04 compute-0 nova_compute[260935]: 2025-10-11 08:55:04.159 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "09e8444f-162e-4773-a181-a5b70c7af8dd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:04 compute-0 nova_compute[260935]: 2025-10-11 08:55:04.160 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:04 compute-0 nova_compute[260935]: 2025-10-11 08:55:04.192 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:55:04 compute-0 nova_compute[260935]: 2025-10-11 08:55:04.276 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:04 compute-0 nova_compute[260935]: 2025-10-11 08:55:04.276 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:04 compute-0 nova_compute[260935]: 2025-10-11 08:55:04.285 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:55:04 compute-0 nova_compute[260935]: 2025-10-11 08:55:04.286 2 INFO nova.compute.claims [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:55:04 compute-0 nova_compute[260935]: 2025-10-11 08:55:04.413 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:04 compute-0 nova_compute[260935]: 2025-10-11 08:55:04.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:55:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:55:04 compute-0 podman[320313]: 2025-10-11 08:55:04.765211307 +0000 UTC m=+0.073683502 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 08:55:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:55:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2259497220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:04 compute-0 nova_compute[260935]: 2025-10-11 08:55:04.896 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:04 compute-0 nova_compute[260935]: 2025-10-11 08:55:04.904 2 DEBUG nova.compute.provider_tree [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:55:04 compute-0 nova_compute[260935]: 2025-10-11 08:55:04.924 2 DEBUG nova.scheduler.client.report [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:55:04 compute-0 nova_compute[260935]: 2025-10-11 08:55:04.951 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:04 compute-0 nova_compute[260935]: 2025-10-11 08:55:04.952 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:55:04 compute-0 nova_compute[260935]: 2025-10-11 08:55:04.996 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:55:04 compute-0 nova_compute[260935]: 2025-10-11 08:55:04.996 2 DEBUG nova.network.neutron [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.020 2 INFO nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.039 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.147 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.149 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.150 2 INFO nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Creating image(s)
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.176 2 DEBUG nova.storage.rbd_utils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] rbd image 09e8444f-162e-4773-a181-a5b70c7af8dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:05 compute-0 ceph-mon[74313]: pgmap v1525: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Oct 11 08:55:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2259497220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.207 2 DEBUG nova.storage.rbd_utils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] rbd image 09e8444f-162e-4773-a181-a5b70c7af8dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.235 2 DEBUG nova.storage.rbd_utils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] rbd image 09e8444f-162e-4773-a181-a5b70c7af8dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.239 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.286 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172890.1957762, 1ef94ed5-fffa-41e9-b72f-4569354392c6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.287 2 INFO nova.compute.manager [-] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] VM Stopped (Lifecycle Event)
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.310 2 DEBUG nova.compute.manager [None req-38bf9143-9f00-4664-a64f-79e8ab629418 - - - - - -] [instance: 1ef94ed5-fffa-41e9-b72f-4569354392c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.325 2 DEBUG nova.policy [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f58717ff8108424e87196def751cc1fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06df44dbf05c4a5e8532640419eb19d3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.338 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.339 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.340 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.340 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.374 2 DEBUG nova.storage.rbd_utils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] rbd image 09e8444f-162e-4773-a181-a5b70c7af8dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.380 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 09e8444f-162e-4773-a181-a5b70c7af8dd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.564 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172890.5631247, e263661e-e9c2-4a4d-a6e5-5fc8a7353f50 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.566 2 INFO nova.compute.manager [-] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] VM Stopped (Lifecycle Event)
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.599 2 DEBUG nova.compute.manager [None req-f1746f9d-2353-4264-8a19-577d651f5553 - - - - - -] [instance: e263661e-e9c2-4a4d-a6e5-5fc8a7353f50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.689 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 09e8444f-162e-4773-a181-a5b70c7af8dd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.780 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172890.7499397, 21a71d10-e13b-47fe-88fd-ec9597f7902e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.781 2 INFO nova.compute.manager [-] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] VM Stopped (Lifecycle Event)
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.792 2 DEBUG nova.storage.rbd_utils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] resizing rbd image 09e8444f-162e-4773-a181-a5b70c7af8dd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.846 2 DEBUG nova.compute.manager [None req-27b6963c-2e29-40d5-8f5b-4d1e66d299c2 - - - - - -] [instance: 21a71d10-e13b-47fe-88fd-ec9597f7902e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.929 2 DEBUG nova.objects.instance [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lazy-loading 'migration_context' on Instance uuid 09e8444f-162e-4773-a181-a5b70c7af8dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.943 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.944 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Ensure instance console log exists: /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.944 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.945 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:05 compute-0 nova_compute[260935]: 2025-10-11 08:55:05.945 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:55:06 compute-0 nova_compute[260935]: 2025-10-11 08:55:06.378 2 DEBUG nova.network.neutron [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Successfully created port: ce404d10-5133-400e-acde-02c0abd10470 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:55:07 compute-0 ceph-mon[74313]: pgmap v1526: 321 pgs: 321 active+clean; 41 MiB data, 459 MiB used, 60 GiB / 60 GiB avail
Oct 11 08:55:07 compute-0 nova_compute[260935]: 2025-10-11 08:55:07.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:55:07 compute-0 nova_compute[260935]: 2025-10-11 08:55:07.704 2 DEBUG nova.network.neutron [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Successfully updated port: ce404d10-5133-400e-acde-02c0abd10470 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:55:07 compute-0 nova_compute[260935]: 2025-10-11 08:55:07.706 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:55:07 compute-0 nova_compute[260935]: 2025-10-11 08:55:07.723 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:07 compute-0 nova_compute[260935]: 2025-10-11 08:55:07.723 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquired lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:07 compute-0 nova_compute[260935]: 2025-10-11 08:55:07.723 2 DEBUG nova.network.neutron [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:55:07 compute-0 nova_compute[260935]: 2025-10-11 08:55:07.873 2 DEBUG nova.compute.manager [req-bdba56ee-c925-441f-8d8e-ef1e57eb4bda req-31f96c90-e75b-4bbe-9d30-5ddf9cbe9fce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Received event network-changed-ce404d10-5133-400e-acde-02c0abd10470 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:07 compute-0 nova_compute[260935]: 2025-10-11 08:55:07.874 2 DEBUG nova.compute.manager [req-bdba56ee-c925-441f-8d8e-ef1e57eb4bda req-31f96c90-e75b-4bbe-9d30-5ddf9cbe9fce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Refreshing instance network info cache due to event network-changed-ce404d10-5133-400e-acde-02c0abd10470. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:55:07 compute-0 nova_compute[260935]: 2025-10-11 08:55:07.874 2 DEBUG oslo_concurrency.lockutils [req-bdba56ee-c925-441f-8d8e-ef1e57eb4bda req-31f96c90-e75b-4bbe-9d30-5ddf9cbe9fce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:07 compute-0 nova_compute[260935]: 2025-10-11 08:55:07.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 08:55:08 compute-0 nova_compute[260935]: 2025-10-11 08:55:08.029 2 DEBUG nova.network.neutron [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:55:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.017 2 DEBUG nova.network.neutron [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Updating instance_info_cache with network_info: [{"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.033 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Releasing lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.034 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Instance network_info: |[{"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.034 2 DEBUG oslo_concurrency.lockutils [req-bdba56ee-c925-441f-8d8e-ef1e57eb4bda req-31f96c90-e75b-4bbe-9d30-5ddf9cbe9fce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.035 2 DEBUG nova.network.neutron [req-bdba56ee-c925-441f-8d8e-ef1e57eb4bda req-31f96c90-e75b-4bbe-9d30-5ddf9cbe9fce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Refreshing network info cache for port ce404d10-5133-400e-acde-02c0abd10470 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.040 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Start _get_guest_xml network_info=[{"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.048 2 WARNING nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.062 2 DEBUG nova.virt.libvirt.host [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.063 2 DEBUG nova.virt.libvirt.host [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.068 2 DEBUG nova.virt.libvirt.host [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.069 2 DEBUG nova.virt.libvirt.host [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.070 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.070 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.071 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.072 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.072 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.073 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.073 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.074 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.074 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.075 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.075 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.076 2 DEBUG nova.virt.hardware [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.081 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:09 compute-0 ceph-mon[74313]: pgmap v1527: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 08:55:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:55:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2107231039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.590 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.623 2 DEBUG nova.storage.rbd_utils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] rbd image 09e8444f-162e-4773-a181-a5b70c7af8dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.628 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:09 compute-0 nova_compute[260935]: 2025-10-11 08:55:09.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:55:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 08:55:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:55:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1724282179' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.120 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.123 2 DEBUG nova.virt.libvirt.vif [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1230911333',display_name='tempest-ServerMetadataNegativeTestJSON-server-1230911333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1230911333',id=57,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06df44dbf05c4a5e8532640419eb19d3',ramdisk_id='',reservation_id='r-ac9lff0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-968790743',owner_user_name='
tempest-ServerMetadataNegativeTestJSON-968790743-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:05Z,user_data=None,user_id='f58717ff8108424e87196def751cc1fe',uuid=09e8444f-162e-4773-a181-a5b70c7af8dd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.124 2 DEBUG nova.network.os_vif_util [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Converting VIF {"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.125 2 DEBUG nova.network.os_vif_util [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:75:e2,bridge_name='br-int',has_traffic_filtering=True,id=ce404d10-5133-400e-acde-02c0abd10470,network=Network(651d0a3a-4912-47c8-a5af-eb2f7badf8c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce404d10-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.127 2 DEBUG nova.objects.instance [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 09e8444f-162e-4773-a181-a5b70c7af8dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.149 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:55:10 compute-0 nova_compute[260935]:   <uuid>09e8444f-162e-4773-a181-a5b70c7af8dd</uuid>
Oct 11 08:55:10 compute-0 nova_compute[260935]:   <name>instance-00000039</name>
Oct 11 08:55:10 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:55:10 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:55:10 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerMetadataNegativeTestJSON-server-1230911333</nova:name>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:55:09</nova:creationTime>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:55:10 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:55:10 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:55:10 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:55:10 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:55:10 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:55:10 compute-0 nova_compute[260935]:         <nova:user uuid="f58717ff8108424e87196def751cc1fe">tempest-ServerMetadataNegativeTestJSON-968790743-project-member</nova:user>
Oct 11 08:55:10 compute-0 nova_compute[260935]:         <nova:project uuid="06df44dbf05c4a5e8532640419eb19d3">tempest-ServerMetadataNegativeTestJSON-968790743</nova:project>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:55:10 compute-0 nova_compute[260935]:         <nova:port uuid="ce404d10-5133-400e-acde-02c0abd10470">
Oct 11 08:55:10 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:55:10 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:55:10 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <system>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <entry name="serial">09e8444f-162e-4773-a181-a5b70c7af8dd</entry>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <entry name="uuid">09e8444f-162e-4773-a181-a5b70c7af8dd</entry>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     </system>
Oct 11 08:55:10 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:55:10 compute-0 nova_compute[260935]:   <os>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:   </os>
Oct 11 08:55:10 compute-0 nova_compute[260935]:   <features>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:   </features>
Oct 11 08:55:10 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:55:10 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:55:10 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/09e8444f-162e-4773-a181-a5b70c7af8dd_disk">
Oct 11 08:55:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       </source>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:55:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/09e8444f-162e-4773-a181-a5b70c7af8dd_disk.config">
Oct 11 08:55:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       </source>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:55:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:53:75:e2"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <target dev="tapce404d10-51"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd/console.log" append="off"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <video>
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     </video>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:55:10 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:55:10 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:55:10 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:55:10 compute-0 nova_compute[260935]: </domain>
Oct 11 08:55:10 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.152 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Preparing to wait for external event network-vif-plugged-ce404d10-5133-400e-acde-02c0abd10470 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.152 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.153 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.153 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.154 2 DEBUG nova.virt.libvirt.vif [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1230911333',display_name='tempest-ServerMetadataNegativeTestJSON-server-1230911333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1230911333',id=57,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06df44dbf05c4a5e8532640419eb19d3',ramdisk_id='',reservation_id='r-ac9lff0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-968790743',owner_user_name='tempest-ServerMetadataNegativeTestJSON-968790743-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:05Z,user_data=None,user_id='f58717ff8108424e87196def751cc1fe',uuid=09e8444f-162e-4773-a181-a5b70c7af8dd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.155 2 DEBUG nova.network.os_vif_util [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Converting VIF {"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.156 2 DEBUG nova.network.os_vif_util [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:75:e2,bridge_name='br-int',has_traffic_filtering=True,id=ce404d10-5133-400e-acde-02c0abd10470,network=Network(651d0a3a-4912-47c8-a5af-eb2f7badf8c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce404d10-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.157 2 DEBUG os_vif [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:75:e2,bridge_name='br-int',has_traffic_filtering=True,id=ce404d10-5133-400e-acde-02c0abd10470,network=Network(651d0a3a-4912-47c8-a5af-eb2f7badf8c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce404d10-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.159 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.159 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.164 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce404d10-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.165 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce404d10-51, col_values=(('external_ids', {'iface-id': 'ce404d10-5133-400e-acde-02c0abd10470', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:75:e2', 'vm-uuid': '09e8444f-162e-4773-a181-a5b70c7af8dd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:10 compute-0 NetworkManager[44960]: <info>  [1760172910.1684] manager: (tapce404d10-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.178 2 INFO os_vif [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:75:e2,bridge_name='br-int',has_traffic_filtering=True,id=ce404d10-5133-400e-acde-02c0abd10470,network=Network(651d0a3a-4912-47c8-a5af-eb2f7badf8c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce404d10-51')
Oct 11 08:55:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2107231039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1724282179' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.250 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.250 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.251 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] No VIF found with MAC fa:16:3e:53:75:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.251 2 INFO nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Using config drive
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.297 2 DEBUG nova.storage.rbd_utils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] rbd image 09e8444f-162e-4773-a181-a5b70c7af8dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:10 compute-0 podman[320567]: 2025-10-11 08:55:10.324211289 +0000 UTC m=+0.098735367 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 08:55:10 compute-0 podman[320569]: 2025-10-11 08:55:10.370219261 +0000 UTC m=+0.138222223 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:55:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:10.518 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.621 2 DEBUG nova.network.neutron [req-bdba56ee-c925-441f-8d8e-ef1e57eb4bda req-31f96c90-e75b-4bbe-9d30-5ddf9cbe9fce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Updated VIF entry in instance network info cache for port ce404d10-5133-400e-acde-02c0abd10470. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.622 2 DEBUG nova.network.neutron [req-bdba56ee-c925-441f-8d8e-ef1e57eb4bda req-31f96c90-e75b-4bbe-9d30-5ddf9cbe9fce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Updating instance_info_cache with network_info: [{"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.646 2 DEBUG oslo_concurrency.lockutils [req-bdba56ee-c925-441f-8d8e-ef1e57eb4bda req-31f96c90-e75b-4bbe-9d30-5ddf9cbe9fce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.811 2 INFO nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Creating config drive at /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd/disk.config
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.820 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfnfp1fi4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:10 compute-0 nova_compute[260935]: 2025-10-11 08:55:10.990 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfnfp1fi4" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:11 compute-0 nova_compute[260935]: 2025-10-11 08:55:11.030 2 DEBUG nova.storage.rbd_utils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] rbd image 09e8444f-162e-4773-a181-a5b70c7af8dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:11 compute-0 nova_compute[260935]: 2025-10-11 08:55:11.036 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd/disk.config 09e8444f-162e-4773-a181-a5b70c7af8dd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:11 compute-0 ceph-mon[74313]: pgmap v1528: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 08:55:11 compute-0 nova_compute[260935]: 2025-10-11 08:55:11.265 2 DEBUG oslo_concurrency.processutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd/disk.config 09e8444f-162e-4773-a181-a5b70c7af8dd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:11 compute-0 nova_compute[260935]: 2025-10-11 08:55:11.266 2 INFO nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Deleting local config drive /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd/disk.config because it was imported into RBD.
Oct 11 08:55:11 compute-0 kernel: tapce404d10-51: entered promiscuous mode
Oct 11 08:55:11 compute-0 ovn_controller[152945]: 2025-10-11T08:55:11Z|00498|binding|INFO|Claiming lport ce404d10-5133-400e-acde-02c0abd10470 for this chassis.
Oct 11 08:55:11 compute-0 ovn_controller[152945]: 2025-10-11T08:55:11Z|00499|binding|INFO|ce404d10-5133-400e-acde-02c0abd10470: Claiming fa:16:3e:53:75:e2 10.100.0.9
Oct 11 08:55:11 compute-0 NetworkManager[44960]: <info>  [1760172911.3412] manager: (tapce404d10-51): new Tun device (/org/freedesktop/NetworkManager/Devices/232)
Oct 11 08:55:11 compute-0 nova_compute[260935]: 2025-10-11 08:55:11.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.356 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:75:e2 10.100.0.9'], port_security=['fa:16:3e:53:75:e2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '09e8444f-162e-4773-a181-a5b70c7af8dd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-651d0a3a-4912-47c8-a5af-eb2f7badf8c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06df44dbf05c4a5e8532640419eb19d3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ef6334f9-e759-487e-a426-75ba5c313554', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c8280cc5-aa3a-4200-a683-dd07c09ecd59, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ce404d10-5133-400e-acde-02c0abd10470) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.360 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ce404d10-5133-400e-acde-02c0abd10470 in datapath 651d0a3a-4912-47c8-a5af-eb2f7badf8c4 bound to our chassis
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.365 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 651d0a3a-4912-47c8-a5af-eb2f7badf8c4
Oct 11 08:55:11 compute-0 systemd-udevd[320679]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:55:11 compute-0 NetworkManager[44960]: <info>  [1760172911.3864] device (tapce404d10-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:55:11 compute-0 NetworkManager[44960]: <info>  [1760172911.3895] device (tapce404d10-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.390 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2c87e034-6244-4a21-8d62-66d44dd7b08d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.391 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap651d0a3a-41 in ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.394 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap651d0a3a-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.394 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[20290bfe-5732-45f9-b0c7-bfebec62b51f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.396 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b93e174a-00f1-4ada-bf0e-46d65b675e22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:11 compute-0 systemd-machined[215705]: New machine qemu-64-instance-00000039.
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.416 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[f01a96fd-9f90-4ea5-85fd-741802419a0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:11 compute-0 systemd[1]: Started Virtual Machine qemu-64-instance-00000039.
Oct 11 08:55:11 compute-0 nova_compute[260935]: 2025-10-11 08:55:11.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.458 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[53c15a9e-19d4-484b-b3b1-5e21a09ff3a9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:11 compute-0 nova_compute[260935]: 2025-10-11 08:55:11.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:11 compute-0 ovn_controller[152945]: 2025-10-11T08:55:11Z|00500|binding|INFO|Setting lport ce404d10-5133-400e-acde-02c0abd10470 ovn-installed in OVS
Oct 11 08:55:11 compute-0 ovn_controller[152945]: 2025-10-11T08:55:11Z|00501|binding|INFO|Setting lport ce404d10-5133-400e-acde-02c0abd10470 up in Southbound
Oct 11 08:55:11 compute-0 nova_compute[260935]: 2025-10-11 08:55:11.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.508 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d1149bef-d96e-4b9b-bd6c-38f5ec9098b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.516 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2d347470-3079-4d39-b807-95be9b5507a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:11 compute-0 NetworkManager[44960]: <info>  [1760172911.5174] manager: (tap651d0a3a-40): new Veth device (/org/freedesktop/NetworkManager/Devices/233)
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.578 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[810890f0-aa2e-4781-9e22-e6014ac8fe6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.583 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b085137a-2f67-4e5e-9b3c-ee917f480af1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:11 compute-0 sudo[320695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:55:11 compute-0 sudo[320695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:11 compute-0 sudo[320695]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:11 compute-0 NetworkManager[44960]: <info>  [1760172911.6208] device (tap651d0a3a-40): carrier: link connected
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.629 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[82d8a116-61ea-4874-ac44-abbe3d1ec0b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.658 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[12138814-ee80-47cc-91a7-fe435a166a6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap651d0a3a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:f8:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475632, 'reachable_time': 22611, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320742, 'error': None, 'target': 'ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.688 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e92c4993-245f-485a-a3ce-4ea20f487159]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:f823'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 475632, 'tstamp': 475632}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320759, 'error': None, 'target': 'ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:11 compute-0 nova_compute[260935]: 2025-10-11 08:55:11.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:55:11 compute-0 sudo[320740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:55:11 compute-0 sudo[320740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.721 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7951ed6d-b821-4338-8cd9-85d0cab29957]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap651d0a3a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:f8:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475632, 'reachable_time': 22611, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320765, 'error': None, 'target': 'ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:11 compute-0 sudo[320740]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:11 compute-0 nova_compute[260935]: 2025-10-11 08:55:11.728 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:11 compute-0 nova_compute[260935]: 2025-10-11 08:55:11.729 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:11 compute-0 nova_compute[260935]: 2025-10-11 08:55:11.729 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:11 compute-0 nova_compute[260935]: 2025-10-11 08:55:11.729 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:55:11 compute-0 nova_compute[260935]: 2025-10-11 08:55:11.730 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.775 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b00f100f-7f9d-4208-bbd0-c00c51d0841d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:11 compute-0 sudo[320769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:55:11 compute-0 sudo[320769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:11 compute-0 sudo[320769]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.880 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc27c0d8-eea8-4af9-b5b6-966fdcca04a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.883 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap651d0a3a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.886 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.887 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap651d0a3a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:11 compute-0 kernel: tap651d0a3a-40: entered promiscuous mode
Oct 11 08:55:11 compute-0 NetworkManager[44960]: <info>  [1760172911.8917] manager: (tap651d0a3a-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Oct 11 08:55:11 compute-0 nova_compute[260935]: 2025-10-11 08:55:11.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.899 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap651d0a3a-40, col_values=(('external_ids', {'iface-id': '7ae49d05-3fe4-433c-8b86-08a1da3da34c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:11 compute-0 ovn_controller[152945]: 2025-10-11T08:55:11Z|00502|binding|INFO|Releasing lport 7ae49d05-3fe4-433c-8b86-08a1da3da34c from this chassis (sb_readonly=0)
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.904 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/651d0a3a-4912-47c8-a5af-eb2f7badf8c4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/651d0a3a-4912-47c8-a5af-eb2f7badf8c4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.905 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7dec6af5-e8d4-45e5-ba3e-3c15030ffba5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.906 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-651d0a3a-4912-47c8-a5af-eb2f7badf8c4
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/651d0a3a-4912-47c8-a5af-eb2f7badf8c4.pid.haproxy
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 651d0a3a-4912-47c8-a5af-eb2f7badf8c4
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:55:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:11.909 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4', 'env', 'PROCESS_TAG=haproxy-651d0a3a-4912-47c8-a5af-eb2f7badf8c4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/651d0a3a-4912-47c8-a5af-eb2f7badf8c4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:55:11 compute-0 sudo[320799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 08:55:11 compute-0 nova_compute[260935]: 2025-10-11 08:55:11.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:11 compute-0 sudo[320799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 08:55:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:55:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190582371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.308 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:12 compute-0 podman[320939]: 2025-10-11 08:55:12.310093774 +0000 UTC m=+0.054408082 container create e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 11 08:55:12 compute-0 systemd[1]: Started libpod-conmon-e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566.scope.
Oct 11 08:55:12 compute-0 podman[320939]: 2025-10-11 08:55:12.281767697 +0000 UTC m=+0.026082025 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:55:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d0672aa8e6c963225b544aba8752ab8163ac36884a8a150664ddc3431c4373/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:12 compute-0 podman[320939]: 2025-10-11 08:55:12.405633129 +0000 UTC m=+0.149947457 container init e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:55:12 compute-0 podman[320939]: 2025-10-11 08:55:12.414549323 +0000 UTC m=+0.158863631 container start e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.415 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.416 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:55:12 compute-0 neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4[320975]: [NOTICE]   (320992) : New worker (321003) forked
Oct 11 08:55:12 compute-0 neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4[320975]: [NOTICE]   (320992) : Loading success.
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.521 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172912.519378, 09e8444f-162e-4773-a181-a5b70c7af8dd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.522 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] VM Started (Lifecycle Event)
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.550 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.556 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172912.5221946, 09e8444f-162e-4773-a181-a5b70c7af8dd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.557 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] VM Paused (Lifecycle Event)
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.581 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.585 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:55:12 compute-0 podman[321013]: 2025-10-11 08:55:12.598879179 +0000 UTC m=+0.103328837 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.607 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.670 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.674 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4073MB free_disk=59.967525482177734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.674 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.675 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:12 compute-0 podman[321013]: 2025-10-11 08:55:12.718451139 +0000 UTC m=+0.222900747 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.751 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 09e8444f-162e-4773-a181-a5b70c7af8dd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.752 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.753 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.786 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:12 compute-0 nova_compute[260935]: 2025-10-11 08:55:12.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:55:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3260994994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:13 compute-0 ceph-mon[74313]: pgmap v1529: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 08:55:13 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3190582371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:13 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3260994994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.251 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.266 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.283 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.307 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.308 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.490 2 DEBUG nova.compute.manager [req-e1cc71d1-e453-456d-97b3-2aee7aa6f6e7 req-a55d57e2-c25e-4f8e-9b2c-90a0ec4c8bab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Received event network-vif-plugged-ce404d10-5133-400e-acde-02c0abd10470 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.490 2 DEBUG oslo_concurrency.lockutils [req-e1cc71d1-e453-456d-97b3-2aee7aa6f6e7 req-a55d57e2-c25e-4f8e-9b2c-90a0ec4c8bab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.491 2 DEBUG oslo_concurrency.lockutils [req-e1cc71d1-e453-456d-97b3-2aee7aa6f6e7 req-a55d57e2-c25e-4f8e-9b2c-90a0ec4c8bab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.491 2 DEBUG oslo_concurrency.lockutils [req-e1cc71d1-e453-456d-97b3-2aee7aa6f6e7 req-a55d57e2-c25e-4f8e-9b2c-90a0ec4c8bab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.492 2 DEBUG nova.compute.manager [req-e1cc71d1-e453-456d-97b3-2aee7aa6f6e7 req-a55d57e2-c25e-4f8e-9b2c-90a0ec4c8bab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Processing event network-vif-plugged-ce404d10-5133-400e-acde-02c0abd10470 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.493 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.498 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.501 2 INFO nova.virt.libvirt.driver [-] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Instance spawned successfully.
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.502 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.504 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172913.5043106, 09e8444f-162e-4773-a181-a5b70c7af8dd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.505 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] VM Resumed (Lifecycle Event)
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.520 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.523 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.524 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.525 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.525 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.526 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.527 2 DEBUG nova.virt.libvirt.driver [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.534 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.558 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.578 2 INFO nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Took 8.43 seconds to spawn the instance on the hypervisor.
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.579 2 DEBUG nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:13 compute-0 sudo[320799]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:55:13 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:55:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:55:13 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.631 2 INFO nova.compute.manager [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Took 9.38 seconds to build instance.
Oct 11 08:55:13 compute-0 nova_compute[260935]: 2025-10-11 08:55:13.651 2 DEBUG oslo_concurrency.lockutils [None req-41e7fcd5-c65e-47e8-8535-75e0561242ef f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:13 compute-0 sudo[321196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:55:13 compute-0 sudo[321196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:13 compute-0 sudo[321196]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:13 compute-0 sudo[321221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:55:13 compute-0 sudo[321221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:13 compute-0 sudo[321221]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:13 compute-0 sudo[321246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:55:13 compute-0 sudo[321246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:13 compute-0 sudo[321246]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:13 compute-0 sudo[321271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:55:13 compute-0 sudo[321271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 08:55:14 compute-0 nova_compute[260935]: 2025-10-11 08:55:14.304 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:55:14 compute-0 nova_compute[260935]: 2025-10-11 08:55:14.336 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:55:14 compute-0 nova_compute[260935]: 2025-10-11 08:55:14.337 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:55:14 compute-0 nova_compute[260935]: 2025-10-11 08:55:14.337 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:55:14 compute-0 nova_compute[260935]: 2025-10-11 08:55:14.520 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:14 compute-0 nova_compute[260935]: 2025-10-11 08:55:14.521 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:14 compute-0 nova_compute[260935]: 2025-10-11 08:55:14.521 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 08:55:14 compute-0 nova_compute[260935]: 2025-10-11 08:55:14.522 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 09e8444f-162e-4773-a181-a5b70c7af8dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:55:14 compute-0 sudo[321271]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:55:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:55:14 compute-0 ceph-mon[74313]: pgmap v1530: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 08:55:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:55:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:55:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:55:14 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:55:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:55:14 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:55:14 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 957767da-43fb-4087-a7d3-469c0f5b2ec3 does not exist
Oct 11 08:55:14 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 8181938e-4418-4b23-af7a-2e0a0ac209c5 does not exist
Oct 11 08:55:14 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 1327d814-a308-47b8-8fa5-efb83c5dc90d does not exist
Oct 11 08:55:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:55:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:55:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:55:14 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:55:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:55:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:55:14 compute-0 sudo[321325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:55:14 compute-0 sudo[321325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:14 compute-0 sudo[321325]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:14 compute-0 sudo[321350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:55:14 compute-0 sudo[321350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:14 compute-0 sudo[321350]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:14 compute-0 sudo[321375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:55:14 compute-0 sudo[321375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:14 compute-0 sudo[321375]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:14 compute-0 sudo[321400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:55:14 compute-0 sudo[321400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:15 compute-0 nova_compute[260935]: 2025-10-11 08:55:15.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:15.189 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:15.191 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:15.192 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:15 compute-0 podman[321466]: 2025-10-11 08:55:15.483305398 +0000 UTC m=+0.070113111 container create aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_benz, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 08:55:15 compute-0 systemd[1]: Started libpod-conmon-aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6.scope.
Oct 11 08:55:15 compute-0 podman[321466]: 2025-10-11 08:55:15.455095934 +0000 UTC m=+0.041903656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:55:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:55:15 compute-0 podman[321466]: 2025-10-11 08:55:15.597088893 +0000 UTC m=+0.183896665 container init aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 08:55:15 compute-0 podman[321466]: 2025-10-11 08:55:15.606156272 +0000 UTC m=+0.192963994 container start aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 08:55:15 compute-0 podman[321466]: 2025-10-11 08:55:15.610114704 +0000 UTC m=+0.196922386 container attach aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_benz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:55:15 compute-0 systemd[1]: libpod-aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6.scope: Deactivated successfully.
Oct 11 08:55:15 compute-0 condescending_benz[321482]: 167 167
Oct 11 08:55:15 compute-0 conmon[321482]: conmon aff24f5373a8b6f827d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6.scope/container/memory.events
Oct 11 08:55:15 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:55:15 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:55:15 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:55:15 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:55:15 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:55:15 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:55:15 compute-0 nova_compute[260935]: 2025-10-11 08:55:15.679 2 DEBUG nova.compute.manager [req-5bcdd117-c7dc-4aef-8f80-a53e623588e7 req-343ee253-b6e9-4bfa-9784-64d094d59fe8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Received event network-vif-plugged-ce404d10-5133-400e-acde-02c0abd10470 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:15 compute-0 nova_compute[260935]: 2025-10-11 08:55:15.680 2 DEBUG oslo_concurrency.lockutils [req-5bcdd117-c7dc-4aef-8f80-a53e623588e7 req-343ee253-b6e9-4bfa-9784-64d094d59fe8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:15 compute-0 nova_compute[260935]: 2025-10-11 08:55:15.682 2 DEBUG oslo_concurrency.lockutils [req-5bcdd117-c7dc-4aef-8f80-a53e623588e7 req-343ee253-b6e9-4bfa-9784-64d094d59fe8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:15 compute-0 nova_compute[260935]: 2025-10-11 08:55:15.683 2 DEBUG oslo_concurrency.lockutils [req-5bcdd117-c7dc-4aef-8f80-a53e623588e7 req-343ee253-b6e9-4bfa-9784-64d094d59fe8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:15 compute-0 nova_compute[260935]: 2025-10-11 08:55:15.683 2 DEBUG nova.compute.manager [req-5bcdd117-c7dc-4aef-8f80-a53e623588e7 req-343ee253-b6e9-4bfa-9784-64d094d59fe8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] No waiting events found dispatching network-vif-plugged-ce404d10-5133-400e-acde-02c0abd10470 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:55:15 compute-0 nova_compute[260935]: 2025-10-11 08:55:15.684 2 WARNING nova.compute.manager [req-5bcdd117-c7dc-4aef-8f80-a53e623588e7 req-343ee253-b6e9-4bfa-9784-64d094d59fe8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Received unexpected event network-vif-plugged-ce404d10-5133-400e-acde-02c0abd10470 for instance with vm_state active and task_state None.
Oct 11 08:55:15 compute-0 podman[321487]: 2025-10-11 08:55:15.709621672 +0000 UTC m=+0.055467593 container died aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_benz, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 08:55:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a294410b4aa76ff7f51aabf70c79f7d8b3f8fe965473c1b4f6f13d76dfbe8e23-merged.mount: Deactivated successfully.
Oct 11 08:55:15 compute-0 podman[321487]: 2025-10-11 08:55:15.758435694 +0000 UTC m=+0.104281515 container remove aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_benz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:55:15 compute-0 systemd[1]: libpod-conmon-aff24f5373a8b6f827d60c04385982c098ea3435db8d5af1aad709de9a6eebc6.scope: Deactivated successfully.
Oct 11 08:55:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 08:55:16 compute-0 podman[321508]: 2025-10-11 08:55:16.010282494 +0000 UTC m=+0.054871416 container create 80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khorana, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 08:55:16 compute-0 systemd[1]: Started libpod-conmon-80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723.scope.
Oct 11 08:55:16 compute-0 podman[321508]: 2025-10-11 08:55:15.993133295 +0000 UTC m=+0.037722207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:55:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:55:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3d2a07cedb84d616d5b5149dea268e244be311f4572232edd35cb8efbc4d20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3d2a07cedb84d616d5b5149dea268e244be311f4572232edd35cb8efbc4d20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3d2a07cedb84d616d5b5149dea268e244be311f4572232edd35cb8efbc4d20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3d2a07cedb84d616d5b5149dea268e244be311f4572232edd35cb8efbc4d20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3d2a07cedb84d616d5b5149dea268e244be311f4572232edd35cb8efbc4d20/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:16 compute-0 podman[321508]: 2025-10-11 08:55:16.123498902 +0000 UTC m=+0.168087894 container init 80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 08:55:16 compute-0 podman[321508]: 2025-10-11 08:55:16.138303735 +0000 UTC m=+0.182892657 container start 80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:55:16 compute-0 podman[321508]: 2025-10-11 08:55:16.143881304 +0000 UTC m=+0.188470226 container attach 80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 08:55:16 compute-0 ceph-mon[74313]: pgmap v1531: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 08:55:17 compute-0 nova_compute[260935]: 2025-10-11 08:55:17.149 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Updating instance_info_cache with network_info: [{"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:17 compute-0 nova_compute[260935]: 2025-10-11 08:55:17.196 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-09e8444f-162e-4773-a181-a5b70c7af8dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:55:17 compute-0 nova_compute[260935]: 2025-10-11 08:55:17.197 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 08:55:17 compute-0 admiring_khorana[321525]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:55:17 compute-0 admiring_khorana[321525]: --> relative data size: 1.0
Oct 11 08:55:17 compute-0 admiring_khorana[321525]: --> All data devices are unavailable
Oct 11 08:55:17 compute-0 systemd[1]: libpod-80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723.scope: Deactivated successfully.
Oct 11 08:55:17 compute-0 systemd[1]: libpod-80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723.scope: Consumed 1.186s CPU time.
Oct 11 08:55:17 compute-0 podman[321508]: 2025-10-11 08:55:17.389797411 +0000 UTC m=+1.434386343 container died 80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khorana, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 08:55:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c3d2a07cedb84d616d5b5149dea268e244be311f4572232edd35cb8efbc4d20-merged.mount: Deactivated successfully.
Oct 11 08:55:17 compute-0 podman[321508]: 2025-10-11 08:55:17.461073773 +0000 UTC m=+1.505662665 container remove 80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 08:55:17 compute-0 systemd[1]: libpod-conmon-80d7f0f29fd5a4808047d83bb15a897823b65ac7bcf0a72344ee354f8ba92723.scope: Deactivated successfully.
Oct 11 08:55:17 compute-0 sudo[321400]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:17 compute-0 sudo[321565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:55:17 compute-0 sudo[321565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:17 compute-0 sudo[321565]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:17 compute-0 sudo[321590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:55:17 compute-0 sudo[321590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:17 compute-0 sudo[321590]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:17 compute-0 sudo[321615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:55:17 compute-0 sudo[321615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:17 compute-0 sudo[321615]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:17 compute-0 sudo[321640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:55:17 compute-0 sudo[321640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:17 compute-0 nova_compute[260935]: 2025-10-11 08:55:17.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 08:55:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:55:18 compute-0 podman[321707]: 2025-10-11 08:55:18.360675505 +0000 UTC m=+0.050184622 container create 6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 08:55:18 compute-0 systemd[1]: Started libpod-conmon-6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18.scope.
Oct 11 08:55:18 compute-0 podman[321707]: 2025-10-11 08:55:18.342398674 +0000 UTC m=+0.031907811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:55:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:55:18 compute-0 podman[321707]: 2025-10-11 08:55:18.457117985 +0000 UTC m=+0.146627172 container init 6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:55:18 compute-0 podman[321707]: 2025-10-11 08:55:18.469642272 +0000 UTC m=+0.159151419 container start 6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:55:18 compute-0 podman[321707]: 2025-10-11 08:55:18.473386089 +0000 UTC m=+0.162895246 container attach 6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:55:18 compute-0 gifted_wiles[321724]: 167 167
Oct 11 08:55:18 compute-0 systemd[1]: libpod-6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18.scope: Deactivated successfully.
Oct 11 08:55:18 compute-0 podman[321707]: 2025-10-11 08:55:18.476107517 +0000 UTC m=+0.165616664 container died 6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 08:55:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a55af9e3b2e8dcdb77721c8e9e6ba8b7a24f9b00f86389e5d9786f400f762937-merged.mount: Deactivated successfully.
Oct 11 08:55:18 compute-0 podman[321707]: 2025-10-11 08:55:18.529285353 +0000 UTC m=+0.218794500 container remove 6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:55:18 compute-0 systemd[1]: libpod-conmon-6b01b572343acd054058db25378a7b1d68431193329f47c26c1479d2e19e3d18.scope: Deactivated successfully.
Oct 11 08:55:18 compute-0 podman[321748]: 2025-10-11 08:55:18.78309477 +0000 UTC m=+0.060092584 container create 9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 08:55:18 compute-0 systemd[1]: Started libpod-conmon-9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab.scope.
Oct 11 08:55:18 compute-0 podman[321748]: 2025-10-11 08:55:18.75818823 +0000 UTC m=+0.035186054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:55:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:55:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a110e0be8418cab9844a0664bd7dc2f1e3f7b8b64adc755fc9e0b93ab022fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a110e0be8418cab9844a0664bd7dc2f1e3f7b8b64adc755fc9e0b93ab022fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a110e0be8418cab9844a0664bd7dc2f1e3f7b8b64adc755fc9e0b93ab022fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a110e0be8418cab9844a0664bd7dc2f1e3f7b8b64adc755fc9e0b93ab022fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:18 compute-0 podman[321748]: 2025-10-11 08:55:18.897630376 +0000 UTC m=+0.174628240 container init 9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 08:55:18 compute-0 nova_compute[260935]: 2025-10-11 08:55:18.908 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:18 compute-0 nova_compute[260935]: 2025-10-11 08:55:18.911 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:18 compute-0 podman[321748]: 2025-10-11 08:55:18.918252824 +0000 UTC m=+0.195250638 container start 9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_gauss, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 08:55:18 compute-0 podman[321748]: 2025-10-11 08:55:18.922376402 +0000 UTC m=+0.199374256 container attach 9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 08:55:18 compute-0 nova_compute[260935]: 2025-10-11 08:55:18.938 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.003 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "09e8444f-162e-4773-a181-a5b70c7af8dd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.003 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.004 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.005 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.005 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.007 2 INFO nova.compute.manager [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Terminating instance
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.009 2 DEBUG nova.compute.manager [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.027 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.029 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.042 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.045 2 INFO nova.compute.claims [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:55:19 compute-0 kernel: tapce404d10-51 (unregistering): left promiscuous mode
Oct 11 08:55:19 compute-0 ceph-mon[74313]: pgmap v1532: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 08:55:19 compute-0 NetworkManager[44960]: <info>  [1760172919.0808] device (tapce404d10-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:55:19 compute-0 ovn_controller[152945]: 2025-10-11T08:55:19Z|00503|binding|INFO|Releasing lport ce404d10-5133-400e-acde-02c0abd10470 from this chassis (sb_readonly=0)
Oct 11 08:55:19 compute-0 ovn_controller[152945]: 2025-10-11T08:55:19Z|00504|binding|INFO|Setting lport ce404d10-5133-400e-acde-02c0abd10470 down in Southbound
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:19 compute-0 ovn_controller[152945]: 2025-10-11T08:55:19Z|00505|binding|INFO|Removing iface tapce404d10-51 ovn-installed in OVS
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.109 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:75:e2 10.100.0.9'], port_security=['fa:16:3e:53:75:e2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '09e8444f-162e-4773-a181-a5b70c7af8dd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-651d0a3a-4912-47c8-a5af-eb2f7badf8c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06df44dbf05c4a5e8532640419eb19d3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ef6334f9-e759-487e-a426-75ba5c313554', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c8280cc5-aa3a-4200-a683-dd07c09ecd59, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ce404d10-5133-400e-acde-02c0abd10470) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:55:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.113 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ce404d10-5133-400e-acde-02c0abd10470 in datapath 651d0a3a-4912-47c8-a5af-eb2f7badf8c4 unbound from our chassis
Oct 11 08:55:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.121 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 651d0a3a-4912-47c8-a5af-eb2f7badf8c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:55:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.122 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[831d57a7-0a7c-4034-8735-14337a4ed0a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.123 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4 namespace which is not needed anymore
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:19 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000039.scope: Deactivated successfully.
Oct 11 08:55:19 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000039.scope: Consumed 6.610s CPU time.
Oct 11 08:55:19 compute-0 systemd-machined[215705]: Machine qemu-64-instance-00000039 terminated.
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.232 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.285 2 INFO nova.virt.libvirt.driver [-] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Instance destroyed successfully.
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.287 2 DEBUG nova.objects.instance [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lazy-loading 'resources' on Instance uuid 09e8444f-162e-4773-a181-a5b70c7af8dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.303 2 DEBUG nova.virt.libvirt.vif [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:55:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1230911333',display_name='tempest-ServerMetadataNegativeTestJSON-server-1230911333',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1230911333',id=57,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:55:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='06df44dbf05c4a5e8532640419eb19d3',ramdisk_id='',reservation_id='r-ac9lff0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-968790743',owner_user_name='tempest-ServerMetadataNegativeTestJSON-968790743-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:55:13Z,user_data=None,user_id='f58717ff8108424e87196def751cc1fe',uuid=09e8444f-162e-4773-a181-a5b70c7af8dd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.309 2 DEBUG nova.network.os_vif_util [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Converting VIF {"id": "ce404d10-5133-400e-acde-02c0abd10470", "address": "fa:16:3e:53:75:e2", "network": {"id": "651d0a3a-4912-47c8-a5af-eb2f7badf8c4", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1981929345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06df44dbf05c4a5e8532640419eb19d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce404d10-51", "ovs_interfaceid": "ce404d10-5133-400e-acde-02c0abd10470", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.310 2 DEBUG nova.network.os_vif_util [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:75:e2,bridge_name='br-int',has_traffic_filtering=True,id=ce404d10-5133-400e-acde-02c0abd10470,network=Network(651d0a3a-4912-47c8-a5af-eb2f7badf8c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce404d10-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.311 2 DEBUG os_vif [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:75:e2,bridge_name='br-int',has_traffic_filtering=True,id=ce404d10-5133-400e-acde-02c0abd10470,network=Network(651d0a3a-4912-47c8-a5af-eb2f7badf8c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce404d10-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.314 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce404d10-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.321 2 INFO os_vif [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:75:e2,bridge_name='br-int',has_traffic_filtering=True,id=ce404d10-5133-400e-acde-02c0abd10470,network=Network(651d0a3a-4912-47c8-a5af-eb2f7badf8c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce404d10-51')
Oct 11 08:55:19 compute-0 neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4[320975]: [NOTICE]   (320992) : haproxy version is 2.8.14-c23fe91
Oct 11 08:55:19 compute-0 neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4[320975]: [NOTICE]   (320992) : path to executable is /usr/sbin/haproxy
Oct 11 08:55:19 compute-0 neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4[320975]: [WARNING]  (320992) : Exiting Master process...
Oct 11 08:55:19 compute-0 neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4[320975]: [ALERT]    (320992) : Current worker (321003) exited with code 143 (Terminated)
Oct 11 08:55:19 compute-0 neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4[320975]: [WARNING]  (320992) : All workers exited. Exiting... (0)
Oct 11 08:55:19 compute-0 systemd[1]: libpod-e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566.scope: Deactivated successfully.
Oct 11 08:55:19 compute-0 podman[321796]: 2025-10-11 08:55:19.340129334 +0000 UTC m=+0.081038852 container died e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 08:55:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5d0672aa8e6c963225b544aba8752ab8163ac36884a8a150664ddc3431c4373-merged.mount: Deactivated successfully.
Oct 11 08:55:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566-userdata-shm.mount: Deactivated successfully.
Oct 11 08:55:19 compute-0 podman[321796]: 2025-10-11 08:55:19.41821768 +0000 UTC m=+0.159127188 container cleanup e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 08:55:19 compute-0 systemd[1]: libpod-conmon-e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566.scope: Deactivated successfully.
Oct 11 08:55:19 compute-0 podman[321871]: 2025-10-11 08:55:19.504259693 +0000 UTC m=+0.050221363 container remove e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 08:55:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.517 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f1164a-2c26-4888-9757-9a10cfb43e0a]: (4, ('Sat Oct 11 08:55:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4 (e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566)\ne681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566\nSat Oct 11 08:55:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4 (e681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566)\ne681dc86b4349b3c4497879b295c775e50a4f08774066f23cdbadfc587418566\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.522 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8ccaf1e4-5ec0-4da3-a371-77bef494f9e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.525 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap651d0a3a-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:19 compute-0 kernel: tap651d0a3a-40: left promiscuous mode
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.564 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee0a282-dfd4-496a-9d04-c928cd67c9a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.597 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb21848-1f96-4440-be65-16c290ca22fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.598 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed757ba-1a85-4a6f-b6f5-fad60be780b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.620 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4189895e-1a92-428b-89c0-0f4992f998a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475620, 'reachable_time': 29139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321886, 'error': None, 'target': 'ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.623 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-651d0a3a-4912-47c8-a5af-eb2f7badf8c4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:55:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:19.623 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[3f371a86-ad70-44a5-858b-baff64646287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d651d0a3a\x2d4912\x2d47c8\x2da5af\x2deb2f7badf8c4.mount: Deactivated successfully.
Oct 11 08:55:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:55:19 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/942282138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.750 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.761 2 DEBUG nova.compute.provider_tree [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.789 2 DEBUG nova.scheduler.client.report [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.822 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.823 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:55:19 compute-0 admiring_gauss[321765]: {
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:     "0": [
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:         {
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "devices": [
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "/dev/loop3"
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             ],
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "lv_name": "ceph_lv0",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "lv_size": "21470642176",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "name": "ceph_lv0",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "tags": {
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.cluster_name": "ceph",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.crush_device_class": "",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.encrypted": "0",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.osd_id": "0",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.type": "block",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.vdo": "0"
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             },
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "type": "block",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "vg_name": "ceph_vg0"
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:         }
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:     ],
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:     "1": [
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:         {
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "devices": [
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "/dev/loop4"
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             ],
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "lv_name": "ceph_lv1",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "lv_size": "21470642176",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "name": "ceph_lv1",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "tags": {
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.cluster_name": "ceph",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.crush_device_class": "",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.encrypted": "0",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.osd_id": "1",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.type": "block",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.vdo": "0"
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             },
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "type": "block",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "vg_name": "ceph_vg1"
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:         }
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:     ],
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:     "2": [
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:         {
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "devices": [
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "/dev/loop5"
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             ],
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "lv_name": "ceph_lv2",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "lv_size": "21470642176",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "name": "ceph_lv2",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "tags": {
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.cluster_name": "ceph",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.crush_device_class": "",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.encrypted": "0",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.osd_id": "2",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.type": "block",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:                 "ceph.vdo": "0"
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             },
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "type": "block",
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:             "vg_name": "ceph_vg2"
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:         }
Oct 11 08:55:19 compute-0 admiring_gauss[321765]:     ]
Oct 11 08:55:19 compute-0 admiring_gauss[321765]: }
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.867 2 INFO nova.virt.libvirt.driver [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Deleting instance files /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd_del
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.869 2 INFO nova.virt.libvirt.driver [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Deletion of /var/lib/nova/instances/09e8444f-162e-4773-a181-a5b70c7af8dd_del complete
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.879 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.879 2 DEBUG nova.network.neutron [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:55:19 compute-0 systemd[1]: libpod-9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab.scope: Deactivated successfully.
Oct 11 08:55:19 compute-0 podman[321748]: 2025-10-11 08:55:19.885792912 +0000 UTC m=+1.162790766 container died 9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.900 2 INFO nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:55:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-99a110e0be8418cab9844a0664bd7dc2f1e3f7b8b64adc755fc9e0b93ab022fe-merged.mount: Deactivated successfully.
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.930 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.951 2 INFO nova.compute.manager [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Took 0.94 seconds to destroy the instance on the hypervisor.
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.952 2 DEBUG oslo.service.loopingcall [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.953 2 DEBUG nova.compute.manager [-] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:55:19 compute-0 nova_compute[260935]: 2025-10-11 08:55:19.953 2 DEBUG nova.network.neutron [-] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:55:19 compute-0 podman[321748]: 2025-10-11 08:55:19.960110731 +0000 UTC m=+1.237108505 container remove 9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_gauss, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:55:19 compute-0 systemd[1]: libpod-conmon-9618a48970d0fb721263bca73c3c00f1ca75a100b51950c37a900b6bed4693ab.scope: Deactivated successfully.
Oct 11 08:55:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 08:55:20 compute-0 sudo[321640]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.067 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:55:20 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/942282138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.071 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.073 2 INFO nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Creating image(s)
Oct 11 08:55:20 compute-0 sudo[321905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:55:20 compute-0 sudo[321905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.108 2 DEBUG nova.storage.rbd_utils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:20 compute-0 sudo[321905]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.144 2 DEBUG nova.storage.rbd_utils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.176 2 DEBUG nova.storage.rbd_utils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.184 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:20 compute-0 sudo[321948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:55:20 compute-0 sudo[321948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:20 compute-0 sudo[321948]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.273 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.274 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.274 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.275 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.304 2 DEBUG nova.storage.rbd_utils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:20 compute-0 sudo[322010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:55:20 compute-0 sudo[322010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.311 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 98d8ebd6-0917-49cf-8efc-a245486424bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:20 compute-0 sudo[322010]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:20 compute-0 sudo[322055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:55:20 compute-0 sudo[322055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.441 2 DEBUG nova.policy [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d5f5f07c57c467286168be7c097bf26', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '73adfb8cf0c64359b1f33a9643148ef4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:55:20 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.684 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 98d8ebd6-0917-49cf-8efc-a245486424bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.780 2 DEBUG nova.storage.rbd_utils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] resizing rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:55:20 compute-0 podman[322174]: 2025-10-11 08:55:20.860170396 +0000 UTC m=+0.072290142 container create f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hopper, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.907 2 DEBUG nova.objects.instance [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'migration_context' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:55:20 compute-0 systemd[1]: Started libpod-conmon-f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724.scope.
Oct 11 08:55:20 compute-0 podman[322174]: 2025-10-11 08:55:20.828699139 +0000 UTC m=+0.040818925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.926 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.927 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Ensure instance console log exists: /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.928 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.929 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:20 compute-0 nova_compute[260935]: 2025-10-11 08:55:20.929 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:55:20 compute-0 podman[322174]: 2025-10-11 08:55:20.986331184 +0000 UTC m=+0.198450980 container init f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:55:21 compute-0 podman[322174]: 2025-10-11 08:55:21.004504242 +0000 UTC m=+0.216623988 container start f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:55:21 compute-0 podman[322174]: 2025-10-11 08:55:21.009462953 +0000 UTC m=+0.221582699 container attach f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hopper, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:55:21 compute-0 tender_hopper[322227]: 167 167
Oct 11 08:55:21 compute-0 systemd[1]: libpod-f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724.scope: Deactivated successfully.
Oct 11 08:55:21 compute-0 podman[322174]: 2025-10-11 08:55:21.013516299 +0000 UTC m=+0.225636045 container died f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 08:55:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d838f7d3ce2549dcfa9d5479a3c061d63b3f5529b893b85143e15eac8890ef53-merged.mount: Deactivated successfully.
Oct 11 08:55:21 compute-0 podman[322174]: 2025-10-11 08:55:21.070313339 +0000 UTC m=+0.282433085 container remove f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:55:21 compute-0 ceph-mon[74313]: pgmap v1533: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 08:55:21 compute-0 systemd[1]: libpod-conmon-f6a00a97705227a298b49acab7315968fcbe173928b6830a3d21c89bbf976724.scope: Deactivated successfully.
Oct 11 08:55:21 compute-0 nova_compute[260935]: 2025-10-11 08:55:21.148 2 DEBUG nova.network.neutron [-] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:21 compute-0 nova_compute[260935]: 2025-10-11 08:55:21.174 2 INFO nova.compute.manager [-] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Took 1.22 seconds to deallocate network for instance.
Oct 11 08:55:21 compute-0 nova_compute[260935]: 2025-10-11 08:55:21.223 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:21 compute-0 nova_compute[260935]: 2025-10-11 08:55:21.223 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:21 compute-0 nova_compute[260935]: 2025-10-11 08:55:21.230 2 DEBUG nova.compute.manager [req-3b1dfc25-dead-412e-9808-086c87c4b1ce req-04f06ac5-679f-4108-8993-e3106cb05c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Received event network-vif-deleted-ce404d10-5133-400e-acde-02c0abd10470 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:21 compute-0 nova_compute[260935]: 2025-10-11 08:55:21.292 2 DEBUG oslo_concurrency.processutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:21 compute-0 podman[322252]: 2025-10-11 08:55:21.311678471 +0000 UTC m=+0.055699939 container create 5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_payne, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 08:55:21 compute-0 podman[322252]: 2025-10-11 08:55:21.283281331 +0000 UTC m=+0.027302849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:55:21 compute-0 systemd[1]: Started libpod-conmon-5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db.scope.
Oct 11 08:55:21 compute-0 nova_compute[260935]: 2025-10-11 08:55:21.383 2 DEBUG nova.network.neutron [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Successfully created port: 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:55:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaca534b58519e8e7068d084895d1b8930222cbe8ce3cd600fe9570cf3b6fac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaca534b58519e8e7068d084895d1b8930222cbe8ce3cd600fe9570cf3b6fac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaca534b58519e8e7068d084895d1b8930222cbe8ce3cd600fe9570cf3b6fac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaca534b58519e8e7068d084895d1b8930222cbe8ce3cd600fe9570cf3b6fac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:21 compute-0 podman[322252]: 2025-10-11 08:55:21.439801784 +0000 UTC m=+0.183823302 container init 5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_payne, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:55:21 compute-0 podman[322252]: 2025-10-11 08:55:21.45191619 +0000 UTC m=+0.195937668 container start 5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_payne, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:55:21 compute-0 podman[322252]: 2025-10-11 08:55:21.456014317 +0000 UTC m=+0.200035835 container attach 5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_payne, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 08:55:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:55:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/549302481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:21 compute-0 nova_compute[260935]: 2025-10-11 08:55:21.817 2 DEBUG oslo_concurrency.processutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:21 compute-0 nova_compute[260935]: 2025-10-11 08:55:21.827 2 DEBUG nova.compute.provider_tree [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:55:21 compute-0 nova_compute[260935]: 2025-10-11 08:55:21.846 2 DEBUG nova.scheduler.client.report [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:55:21 compute-0 nova_compute[260935]: 2025-10-11 08:55:21.881 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:21 compute-0 nova_compute[260935]: 2025-10-11 08:55:21.924 2 INFO nova.scheduler.client.report [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Deleted allocations for instance 09e8444f-162e-4773-a181-a5b70c7af8dd
Oct 11 08:55:21 compute-0 nova_compute[260935]: 2025-10-11 08:55:21.993 2 DEBUG oslo_concurrency.lockutils [None req-1b87c1bf-b2e2-431a-b9ab-6a5580ceddde f58717ff8108424e87196def751cc1fe 06df44dbf05c4a5e8532640419eb19d3 - - default default] Lock "09e8444f-162e-4773-a181-a5b70c7af8dd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 08:55:22 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/549302481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:22 compute-0 nova_compute[260935]: 2025-10-11 08:55:22.179 2 DEBUG nova.network.neutron [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Successfully updated port: 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:55:22 compute-0 nova_compute[260935]: 2025-10-11 08:55:22.195 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:22 compute-0 nova_compute[260935]: 2025-10-11 08:55:22.196 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquired lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:22 compute-0 nova_compute[260935]: 2025-10-11 08:55:22.196 2 DEBUG nova.network.neutron [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:55:22 compute-0 nova_compute[260935]: 2025-10-11 08:55:22.371 2 DEBUG nova.network.neutron [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:55:22 compute-0 hardcore_payne[322269]: {
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:         "osd_id": 2,
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:         "type": "bluestore"
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:     },
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:         "osd_id": 0,
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:         "type": "bluestore"
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:     },
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:         "osd_id": 1,
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:         "type": "bluestore"
Oct 11 08:55:22 compute-0 hardcore_payne[322269]:     }
Oct 11 08:55:22 compute-0 hardcore_payne[322269]: }
Oct 11 08:55:22 compute-0 systemd[1]: libpod-5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db.scope: Deactivated successfully.
Oct 11 08:55:22 compute-0 systemd[1]: libpod-5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db.scope: Consumed 1.209s CPU time.
Oct 11 08:55:22 compute-0 podman[322323]: 2025-10-11 08:55:22.730128728 +0000 UTC m=+0.040324791 container died 5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_payne, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 08:55:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaaca534b58519e8e7068d084895d1b8930222cbe8ce3cd600fe9570cf3b6fac-merged.mount: Deactivated successfully.
Oct 11 08:55:22 compute-0 podman[322323]: 2025-10-11 08:55:22.800770752 +0000 UTC m=+0.110966745 container remove 5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_payne, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 08:55:22 compute-0 systemd[1]: libpod-conmon-5003ba35a7b1791b085fbdcb61f43b55cf0e3267b869ceb4309960f2f81144db.scope: Deactivated successfully.
Oct 11 08:55:22 compute-0 sudo[322055]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:55:22 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:55:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:55:22 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:55:22 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a6e8a64a-8bba-4427-a276-c1d2dbb0d615 does not exist
Oct 11 08:55:22 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev c18f2582-d38e-4088-aa51-3162d587c647 does not exist
Oct 11 08:55:22 compute-0 nova_compute[260935]: 2025-10-11 08:55:22.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:22 compute-0 sudo[322339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:55:22 compute-0 sudo[322339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:22 compute-0 sudo[322339]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:23 compute-0 sudo[322364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:55:23 compute-0 sudo[322364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:55:23 compute-0 sudo[322364]: pam_unix(sudo:session): session closed for user root
Oct 11 08:55:23 compute-0 ceph-mon[74313]: pgmap v1534: 321 pgs: 321 active+clean; 88 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 08:55:23 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:55:23 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.122 2 DEBUG nova.network.neutron [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updating instance_info_cache with network_info: [{"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.147 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Releasing lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.147 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Instance network_info: |[{"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.151 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Start _get_guest_xml network_info=[{"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.159 2 WARNING nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.167 2 DEBUG nova.virt.libvirt.host [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.169 2 DEBUG nova.virt.libvirt.host [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.173 2 DEBUG nova.virt.libvirt.host [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.173 2 DEBUG nova.virt.libvirt.host [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.174 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.174 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.175 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.175 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.175 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.176 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.176 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.176 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.177 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.177 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.177 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.178 2 DEBUG nova.virt.hardware [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.183 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.512 2 DEBUG nova.compute.manager [req-06fe644e-ddd3-4ebb-952e-8aa7cfcffbbe req-6baf8e50-c140-4ebd-8f45-16443ca86763 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-changed-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.513 2 DEBUG nova.compute.manager [req-06fe644e-ddd3-4ebb-952e-8aa7cfcffbbe req-6baf8e50-c140-4ebd-8f45-16443ca86763 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Refreshing instance network info cache due to event network-changed-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.513 2 DEBUG oslo_concurrency.lockutils [req-06fe644e-ddd3-4ebb-952e-8aa7cfcffbbe req-6baf8e50-c140-4ebd-8f45-16443ca86763 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.514 2 DEBUG oslo_concurrency.lockutils [req-06fe644e-ddd3-4ebb-952e-8aa7cfcffbbe req-6baf8e50-c140-4ebd-8f45-16443ca86763 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.514 2 DEBUG nova.network.neutron [req-06fe644e-ddd3-4ebb-952e-8aa7cfcffbbe req-6baf8e50-c140-4ebd-8f45-16443ca86763 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Refreshing network info cache for port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:55:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:55:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2791638788' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.634 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.668 2 DEBUG nova.storage.rbd_utils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:23 compute-0 nova_compute[260935]: 2025-10-11 08:55:23.673 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 88 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 11 08:55:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2791638788' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:55:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2462706206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.144 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.147 2 DEBUG nova.virt.libvirt.vif [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-400269969',display_name='tempest-ServerActionsTestOtherB-server-400269969',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-400269969',id=58,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeZ9ALPs3Oq9pCizSQrA3R5ihPZilMrzZLelKqG/iqyw53khzMkIgHWfcYBLUTF7AIKY9B5CPoZLdbz5r+wTvSZysVCKTRxhaIofirBhYLqgm1OKLLxBwrJrQAKSAriNg==',key_name='tempest-keypair-1264143465',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='73adfb8cf0c64359b1f33a9643148ef4',ramdisk_id='',reservation_id='r-bh4aj5da',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1445504716',owner_user_name='tempest-ServerActionsTestOtherB-1445504716-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d5f5f07c57c467286168be7c097bf26',uuid=98d8ebd6-0917-49cf-8efc-a245486424bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.148 2 DEBUG nova.network.os_vif_util [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converting VIF {"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.150 2 DEBUG nova.network.os_vif_util [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.153 2 DEBUG nova.objects.instance [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.170 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:55:24 compute-0 nova_compute[260935]:   <uuid>98d8ebd6-0917-49cf-8efc-a245486424bc</uuid>
Oct 11 08:55:24 compute-0 nova_compute[260935]:   <name>instance-0000003a</name>
Oct 11 08:55:24 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:55:24 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:55:24 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerActionsTestOtherB-server-400269969</nova:name>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:55:23</nova:creationTime>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:55:24 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:55:24 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:55:24 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:55:24 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:55:24 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:55:24 compute-0 nova_compute[260935]:         <nova:user uuid="8d5f5f07c57c467286168be7c097bf26">tempest-ServerActionsTestOtherB-1445504716-project-member</nova:user>
Oct 11 08:55:24 compute-0 nova_compute[260935]:         <nova:project uuid="73adfb8cf0c64359b1f33a9643148ef4">tempest-ServerActionsTestOtherB-1445504716</nova:project>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:55:24 compute-0 nova_compute[260935]:         <nova:port uuid="10e01bbb-0d2c-4f76-8f81-c90e20d3e54c">
Oct 11 08:55:24 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:55:24 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:55:24 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <system>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <entry name="serial">98d8ebd6-0917-49cf-8efc-a245486424bc</entry>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <entry name="uuid">98d8ebd6-0917-49cf-8efc-a245486424bc</entry>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     </system>
Oct 11 08:55:24 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:55:24 compute-0 nova_compute[260935]:   <os>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:   </os>
Oct 11 08:55:24 compute-0 nova_compute[260935]:   <features>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:   </features>
Oct 11 08:55:24 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:55:24 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:55:24 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/98d8ebd6-0917-49cf-8efc-a245486424bc_disk">
Oct 11 08:55:24 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       </source>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:55:24 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config">
Oct 11 08:55:24 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       </source>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:55:24 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:3f:0f:36"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <target dev="tap10e01bbb-0d"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/console.log" append="off"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <video>
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     </video>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:55:24 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:55:24 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:55:24 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:55:24 compute-0 nova_compute[260935]: </domain>
Oct 11 08:55:24 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.172 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Preparing to wait for external event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.173 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.173 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.173 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.174 2 DEBUG nova.virt.libvirt.vif [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-400269969',display_name='tempest-ServerActionsTestOtherB-server-400269969',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-400269969',id=58,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeZ9ALPs3Oq9pCizSQrA3R5ihPZilMrzZLelKqG/iqyw53khzMkIgHWfcYBLUTF7AIKY9B5CPoZLdbz5r+wTvSZysVCKTRxhaIofirBhYLqgm1OKLLxBwrJrQAKSAriNg==',key_name='tempest-keypair-1264143465',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='73adfb8cf0c64359b1f33a9643148ef4',ramdisk_id='',reservation_id='r-bh4aj5da',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1445504716',owner_user_name='tempest-ServerActionsTestOtherB-1445504716-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d5f5f07c57c467286168be7c097bf26',uuid=98d8ebd6-0917-49cf-8efc-a245486424bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.174 2 DEBUG nova.network.os_vif_util [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converting VIF {"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.175 2 DEBUG nova.network.os_vif_util [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.175 2 DEBUG os_vif [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.177 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.181 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10e01bbb-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.181 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap10e01bbb-0d, col_values=(('external_ids', {'iface-id': '10e01bbb-0d2c-4f76-8f81-c90e20d3e54c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:0f:36', 'vm-uuid': '98d8ebd6-0917-49cf-8efc-a245486424bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:55:24 compute-0 NetworkManager[44960]: <info>  [1760172924.1862] manager: (tap10e01bbb-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.193 2 INFO os_vif [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d')
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.272 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.273 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.273 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No VIF found with MAC fa:16:3e:3f:0f:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.273 2 INFO nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Using config drive
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.298 2 DEBUG nova.storage.rbd_utils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.749 2 INFO nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Creating config drive at /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.760 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xi1v3yj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:55:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:55:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:55:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:55:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:55:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.917 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xi1v3yj" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.956 2 DEBUG nova.storage.rbd_utils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:24 compute-0 nova_compute[260935]: 2025-10-11 08:55:24.962 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config 98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.087 2 DEBUG nova.network.neutron [req-06fe644e-ddd3-4ebb-952e-8aa7cfcffbbe req-6baf8e50-c140-4ebd-8f45-16443ca86763 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updated VIF entry in instance network info cache for port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.088 2 DEBUG nova.network.neutron [req-06fe644e-ddd3-4ebb-952e-8aa7cfcffbbe req-6baf8e50-c140-4ebd-8f45-16443ca86763 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updating instance_info_cache with network_info: [{"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:25 compute-0 ceph-mon[74313]: pgmap v1535: 321 pgs: 321 active+clean; 88 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 11 08:55:25 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2462706206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.118 2 DEBUG oslo_concurrency.lockutils [req-06fe644e-ddd3-4ebb-952e-8aa7cfcffbbe req-6baf8e50-c140-4ebd-8f45-16443ca86763 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.187 2 DEBUG oslo_concurrency.processutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config 98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.189 2 INFO nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Deleting local config drive /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config because it was imported into RBD.
Oct 11 08:55:25 compute-0 kernel: tap10e01bbb-0d: entered promiscuous mode
Oct 11 08:55:25 compute-0 NetworkManager[44960]: <info>  [1760172925.2933] manager: (tap10e01bbb-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Oct 11 08:55:25 compute-0 ovn_controller[152945]: 2025-10-11T08:55:25Z|00506|binding|INFO|Claiming lport 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c for this chassis.
Oct 11 08:55:25 compute-0 ovn_controller[152945]: 2025-10-11T08:55:25Z|00507|binding|INFO|10e01bbb-0d2c-4f76-8f81-c90e20d3e54c: Claiming fa:16:3e:3f:0f:36 10.100.0.9
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.363 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:0f:36 10.100.0.9'], port_security=['fa:16:3e:3f:0f:36 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '98d8ebd6-0917-49cf-8efc-a245486424bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73adfb8cf0c64359b1f33a9643148ef4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '06a0521a-9ff7-49ed-a194-a12a4a8fd551', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cfa1fc9-121e-4e0a-bd08-716b82275316, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.365 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c in datapath 056c6769-bc97-4ae9-9759-4cc2d984a31d bound to our chassis
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.368 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 056c6769-bc97-4ae9-9759-4cc2d984a31d
Oct 11 08:55:25 compute-0 systemd-udevd[322524]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.390 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aa743280-5c03-4037-9e48-970349400840]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.391 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap056c6769-b1 in ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.395 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap056c6769-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.395 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1234da06-e75d-4a21-8196-a7452ee3309f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:25 compute-0 systemd-machined[215705]: New machine qemu-65-instance-0000003a.
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.396 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f69d9b88-7881-4c26-99de-a32fa150760f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:25 compute-0 NetworkManager[44960]: <info>  [1760172925.4030] device (tap10e01bbb-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:55:25 compute-0 NetworkManager[44960]: <info>  [1760172925.4046] device (tap10e01bbb-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.414 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[de170643-e281-4680-89c4-e87fe18c5c5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:25 compute-0 systemd[1]: Started Virtual Machine qemu-65-instance-0000003a.
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.451 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4c83be95-1938-4d05-a6fa-06da991a4f78]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:25 compute-0 ovn_controller[152945]: 2025-10-11T08:55:25Z|00508|binding|INFO|Setting lport 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c ovn-installed in OVS
Oct 11 08:55:25 compute-0 ovn_controller[152945]: 2025-10-11T08:55:25Z|00509|binding|INFO|Setting lport 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c up in Southbound
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.503 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3d33cb-15eb-433b-812a-0a8581bd18d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:25 compute-0 systemd-udevd[322529]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.511 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cb555a12-9cb6-4541-9f0e-61380af890f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:25 compute-0 NetworkManager[44960]: <info>  [1760172925.5126] manager: (tap056c6769-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/237)
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.531 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.532 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.551 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.583 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5026b6-e66a-4e93-96ae-f6cac41c20d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.588 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[67cf0134-05ba-40d0-b2f6-8f69b405ff6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.622 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.623 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:25 compute-0 NetworkManager[44960]: <info>  [1760172925.6274] device (tap056c6769-b0): carrier: link connected
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.635 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.636 2 INFO nova.compute.claims [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.639 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6df846-f1fb-4151-99e8-7188489c9379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.667 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca87f82-cc34-4d3d-917c-e21859ba985d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap056c6769-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:cc:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477032, 'reachable_time': 36312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322558, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.698 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[37fd57a9-4e63-4fc4-9759-6232f8475aff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:cc02'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477032, 'tstamp': 477032}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322559, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.733 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf514bd-b7ba-49d7-bba0-dec9fbad58a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap056c6769-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:cc:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477032, 'reachable_time': 36312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322560, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.755 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.793 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[268e71da-409c-49e7-8c5c-d3b6b00be060]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.891 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc807e98-70f3-49a5-897a-59a4c3e5961f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.893 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap056c6769-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.893 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.894 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap056c6769-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:25 compute-0 NetworkManager[44960]: <info>  [1760172925.8980] manager: (tap056c6769-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:25 compute-0 kernel: tap056c6769-b0: entered promiscuous mode
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.903 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap056c6769-b0, col_values=(('external_ids', {'iface-id': '056a8563-0695-415b-921f-e75fa98e60e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:25 compute-0 ovn_controller[152945]: 2025-10-11T08:55:25Z|00510|binding|INFO|Releasing lport 056a8563-0695-415b-921f-e75fa98e60e5 from this chassis (sb_readonly=0)
Oct 11 08:55:25 compute-0 nova_compute[260935]: 2025-10-11 08:55:25.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.944 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/056c6769-bc97-4ae9-9759-4cc2d984a31d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/056c6769-bc97-4ae9-9759-4cc2d984a31d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.950 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bb532465-7e41-4970-b1ef-46d2e34b7293]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.952 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-056c6769-bc97-4ae9-9759-4cc2d984a31d
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/056c6769-bc97-4ae9-9759-4cc2d984a31d.pid.haproxy
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 056c6769-bc97-4ae9-9759-4cc2d984a31d
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:55:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:25.952 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'env', 'PROCESS_TAG=haproxy-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/056c6769-bc97-4ae9-9759-4cc2d984a31d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:55:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 88 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Oct 11 08:55:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:55:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2143679200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.286 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.294 2 DEBUG nova.compute.provider_tree [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:55:26 compute-0 ovn_controller[152945]: 2025-10-11T08:55:26Z|00511|binding|INFO|Releasing lport 056a8563-0695-415b-921f-e75fa98e60e5 from this chassis (sb_readonly=0)
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.313 2 DEBUG nova.scheduler.client.report [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.341 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.342 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.405 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.406 2 DEBUG nova.network.neutron [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.434 2 INFO nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.451 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:55:26 compute-0 podman[322656]: 2025-10-11 08:55:26.466164059 +0000 UTC m=+0.057840201 container create ee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:55:26 compute-0 systemd[1]: Started libpod-conmon-ee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d.scope.
Oct 11 08:55:26 compute-0 podman[322656]: 2025-10-11 08:55:26.438575712 +0000 UTC m=+0.030251874 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:55:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.553 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.557 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.558 2 INFO nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Creating image(s)
Oct 11 08:55:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65f5d931826b904a40c4e8f3faa087cfd21b33725873fe141d1cea9369cc5ab3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:26 compute-0 podman[322656]: 2025-10-11 08:55:26.586335544 +0000 UTC m=+0.178011686 container init ee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 08:55:26 compute-0 podman[322656]: 2025-10-11 08:55:26.592953393 +0000 UTC m=+0.184629535 container start ee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.609 2 DEBUG nova.storage.rbd_utils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:26 compute-0 neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d[322672]: [NOTICE]   (322691) : New worker (322696) forked
Oct 11 08:55:26 compute-0 neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d[322672]: [NOTICE]   (322691) : Loading success.
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.648 2 DEBUG nova.storage.rbd_utils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.677 2 DEBUG nova.storage.rbd_utils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.681 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.717 2 DEBUG nova.policy [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a04e2908f5a54c8f98bee8d0faf3e658', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3a6a3cc2a54f4a9bafcdc1304f07944b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.719 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172926.7117898, 98d8ebd6-0917-49cf-8efc-a245486424bc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.720 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] VM Started (Lifecycle Event)
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.741 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.747 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172926.7126245, 98d8ebd6-0917-49cf-8efc-a245486424bc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.747 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] VM Paused (Lifecycle Event)
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.751 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.752 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.752 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.753 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.781 2 DEBUG nova.storage.rbd_utils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.788 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.829 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.834 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:55:26 compute-0 nova_compute[260935]: 2025-10-11 08:55:26.861 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.126 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:27 compute-0 ceph-mon[74313]: pgmap v1536: 321 pgs: 321 active+clean; 88 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Oct 11 08:55:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2143679200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.214 2 DEBUG nova.storage.rbd_utils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] resizing rbd image 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.266 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.267 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.284 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.343 2 DEBUG nova.objects.instance [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lazy-loading 'migration_context' on Instance uuid 3915cf40-bdd7-4fe8-8311-834ff26aaf9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.361 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.362 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Ensure instance console log exists: /var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.362 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.363 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.363 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.366 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.366 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.377 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.377 2 INFO nova.compute.claims [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.510 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.583 2 DEBUG nova.network.neutron [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Successfully created port: 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:55:27 compute-0 nova_compute[260935]: 2025-10-11 08:55:27.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 134 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct 11 08:55:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:55:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3649119031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.067 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.074 2 DEBUG nova.compute.provider_tree [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.096 2 DEBUG nova.scheduler.client.report [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.122 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.123 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:55:28 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3649119031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.175 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.175 2 DEBUG nova.network.neutron [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.195 2 INFO nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.210 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.305 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.306 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.307 2 INFO nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Creating image(s)
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.331 2 DEBUG nova.storage.rbd_utils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.359 2 DEBUG nova.storage.rbd_utils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.384 2 DEBUG nova.storage.rbd_utils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.389 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.506 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.508 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.509 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.509 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.532 2 DEBUG nova.storage.rbd_utils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.537 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.866 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:28 compute-0 nova_compute[260935]: 2025-10-11 08:55:28.953 2 DEBUG nova.storage.rbd_utils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] resizing rbd image f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:55:29 compute-0 nova_compute[260935]: 2025-10-11 08:55:29.087 2 DEBUG nova.objects.instance [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'migration_context' on Instance uuid f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:55:29 compute-0 nova_compute[260935]: 2025-10-11 08:55:29.106 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:55:29 compute-0 nova_compute[260935]: 2025-10-11 08:55:29.107 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Ensure instance console log exists: /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:55:29 compute-0 nova_compute[260935]: 2025-10-11 08:55:29.108 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:29 compute-0 nova_compute[260935]: 2025-10-11 08:55:29.109 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:29 compute-0 nova_compute[260935]: 2025-10-11 08:55:29.109 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:29 compute-0 ceph-mon[74313]: pgmap v1537: 321 pgs: 321 active+clean; 134 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct 11 08:55:29 compute-0 nova_compute[260935]: 2025-10-11 08:55:29.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:29 compute-0 nova_compute[260935]: 2025-10-11 08:55:29.249 2 DEBUG nova.network.neutron [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Successfully updated port: 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:55:29 compute-0 nova_compute[260935]: 2025-10-11 08:55:29.267 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:29 compute-0 nova_compute[260935]: 2025-10-11 08:55:29.268 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquired lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:29 compute-0 nova_compute[260935]: 2025-10-11 08:55:29.268 2 DEBUG nova.network.neutron [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:55:29 compute-0 nova_compute[260935]: 2025-10-11 08:55:29.302 2 DEBUG nova.policy [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '213e5693e94f44e7950e3dfbca04228a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '93dd4902ce324862a38006da8e06503a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:55:29 compute-0 nova_compute[260935]: 2025-10-11 08:55:29.426 2 DEBUG nova.network.neutron [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:55:29 compute-0 nova_compute[260935]: 2025-10-11 08:55:29.639 2 DEBUG nova.compute.manager [req-06a58741-ccf6-4c13-ac23-0627206f9fbc req-0c0d2df4-8613-46f5-8fd6-6a10befac62e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Received event network-changed-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:29 compute-0 nova_compute[260935]: 2025-10-11 08:55:29.640 2 DEBUG nova.compute.manager [req-06a58741-ccf6-4c13-ac23-0627206f9fbc req-0c0d2df4-8613-46f5-8fd6-6a10befac62e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Refreshing instance network info cache due to event network-changed-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:55:29 compute-0 nova_compute[260935]: 2025-10-11 08:55:29.640 2 DEBUG oslo_concurrency.lockutils [req-06a58741-ccf6-4c13-ac23-0627206f9fbc req-0c0d2df4-8613-46f5-8fd6-6a10befac62e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:29 compute-0 podman[323041]: 2025-10-11 08:55:29.813539197 +0000 UTC m=+0.097365617 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 08:55:29 compute-0 nova_compute[260935]: 2025-10-11 08:55:29.861 2 DEBUG nova.network.neutron [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Successfully created port: d295555f-76d3-480c-9c34-1ef65200656c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:55:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 134 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 3.6 MiB/s wr, 90 op/s
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.382 2 DEBUG nova.network.neutron [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Updating instance_info_cache with network_info: [{"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.563 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Releasing lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.564 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Instance network_info: |[{"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.566 2 DEBUG oslo_concurrency.lockutils [req-06a58741-ccf6-4c13-ac23-0627206f9fbc req-0c0d2df4-8613-46f5-8fd6-6a10befac62e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.567 2 DEBUG nova.network.neutron [req-06a58741-ccf6-4c13-ac23-0627206f9fbc req-0c0d2df4-8613-46f5-8fd6-6a10befac62e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Refreshing network info cache for port 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.573 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Start _get_guest_xml network_info=[{"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.582 2 WARNING nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.588 2 DEBUG nova.virt.libvirt.host [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.589 2 DEBUG nova.virt.libvirt.host [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.599 2 DEBUG nova.virt.libvirt.host [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.600 2 DEBUG nova.virt.libvirt.host [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.601 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.602 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.603 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.604 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.605 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.605 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.606 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.606 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.607 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.608 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.608 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.609 2 DEBUG nova.virt.hardware [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:55:30 compute-0 nova_compute[260935]: 2025-10-11 08:55:30.615 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.073 2 DEBUG nova.network.neutron [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Successfully updated port: d295555f-76d3-480c-9c34-1ef65200656c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.097 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "refresh_cache-f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.097 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquired lock "refresh_cache-f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.098 2 DEBUG nova.network.neutron [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:55:31 compute-0 ceph-mon[74313]: pgmap v1538: 321 pgs: 321 active+clean; 134 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 3.6 MiB/s wr, 90 op/s
Oct 11 08:55:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:55:31 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3623249392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.199 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.224 2 DEBUG nova.storage.rbd_utils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.230 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.282 2 DEBUG nova.network.neutron [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:55:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:55:31 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/514337895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.714 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.717 2 DEBUG nova.virt.libvirt.vif [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-604807773',display_name='tempest-SecurityGroupsTestJSON-server-604807773',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-604807773',id=59,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3a6a3cc2a54f4a9bafcdc1304f07944b',ramdisk_id='',reservation_id='r-zmgd5yiy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-2086563292',owner_user_name='tempest-SecurityGroupsTestJSON-2086563292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:26Z,user_data=None,user_id='a04e2908f5a54c8f98bee8d0faf3e658',uuid=3915cf40-bdd7-4fe8-8311-834ff26aaf9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.717 2 DEBUG nova.network.os_vif_util [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converting VIF {"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.718 2 DEBUG nova.network.os_vif_util [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:c3:49,bridge_name='br-int',has_traffic_filtering=True,id=33b4a30b-13cb-4c3b-a6fb-df71a1ce760e,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33b4a30b-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.720 2 DEBUG nova.objects.instance [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lazy-loading 'pci_devices' on Instance uuid 3915cf40-bdd7-4fe8-8311-834ff26aaf9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.747 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:55:31 compute-0 nova_compute[260935]:   <uuid>3915cf40-bdd7-4fe8-8311-834ff26aaf9c</uuid>
Oct 11 08:55:31 compute-0 nova_compute[260935]:   <name>instance-0000003b</name>
Oct 11 08:55:31 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:55:31 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:55:31 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <nova:name>tempest-SecurityGroupsTestJSON-server-604807773</nova:name>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:55:30</nova:creationTime>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:55:31 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:55:31 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:55:31 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:55:31 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:55:31 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:55:31 compute-0 nova_compute[260935]:         <nova:user uuid="a04e2908f5a54c8f98bee8d0faf3e658">tempest-SecurityGroupsTestJSON-2086563292-project-member</nova:user>
Oct 11 08:55:31 compute-0 nova_compute[260935]:         <nova:project uuid="3a6a3cc2a54f4a9bafcdc1304f07944b">tempest-SecurityGroupsTestJSON-2086563292</nova:project>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:55:31 compute-0 nova_compute[260935]:         <nova:port uuid="33b4a30b-13cb-4c3b-a6fb-df71a1ce760e">
Oct 11 08:55:31 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:55:31 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:55:31 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <system>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <entry name="serial">3915cf40-bdd7-4fe8-8311-834ff26aaf9c</entry>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <entry name="uuid">3915cf40-bdd7-4fe8-8311-834ff26aaf9c</entry>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     </system>
Oct 11 08:55:31 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:55:31 compute-0 nova_compute[260935]:   <os>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:   </os>
Oct 11 08:55:31 compute-0 nova_compute[260935]:   <features>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:   </features>
Oct 11 08:55:31 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:55:31 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:55:31 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk">
Oct 11 08:55:31 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       </source>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:55:31 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk.config">
Oct 11 08:55:31 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       </source>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:55:31 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:ef:c3:49"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <target dev="tap33b4a30b-13"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c/console.log" append="off"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <video>
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     </video>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:55:31 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:55:31 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:55:31 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:55:31 compute-0 nova_compute[260935]: </domain>
Oct 11 08:55:31 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.749 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Preparing to wait for external event network-vif-plugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.750 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.750 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.751 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.752 2 DEBUG nova.virt.libvirt.vif [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-604807773',display_name='tempest-SecurityGroupsTestJSON-server-604807773',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-604807773',id=59,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3a6a3cc2a54f4a9bafcdc1304f07944b',ramdisk_id='',reservation_id='r-zmgd5yiy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-2086563292',owner_user_name='tempest-SecurityGroupsTestJSON-2086563292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:26Z,user_data=None,user_id='a04e2908f5a54c8f98bee8d0faf3e658',uuid=3915cf40-bdd7-4fe8-8311-834ff26aaf9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.753 2 DEBUG nova.network.os_vif_util [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converting VIF {"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.754 2 DEBUG nova.network.os_vif_util [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:c3:49,bridge_name='br-int',has_traffic_filtering=True,id=33b4a30b-13cb-4c3b-a6fb-df71a1ce760e,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33b4a30b-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.755 2 DEBUG os_vif [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:c3:49,bridge_name='br-int',has_traffic_filtering=True,id=33b4a30b-13cb-4c3b-a6fb-df71a1ce760e,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33b4a30b-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.759 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.760 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.763 2 DEBUG nova.compute.manager [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.764 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.764 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.765 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.765 2 DEBUG nova.compute.manager [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Processing event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.766 2 DEBUG nova.compute.manager [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.766 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.766 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.767 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.767 2 DEBUG nova.compute.manager [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] No waiting events found dispatching network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.768 2 WARNING nova.compute.manager [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received unexpected event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c for instance with vm_state building and task_state spawning.
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.768 2 DEBUG nova.compute.manager [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Received event network-changed-d295555f-76d3-480c-9c34-1ef65200656c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.769 2 DEBUG nova.compute.manager [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Refreshing instance network info cache due to event network-changed-d295555f-76d3-480c-9c34-1ef65200656c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.769 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.771 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.775 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33b4a30b-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.776 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap33b4a30b-13, col_values=(('external_ids', {'iface-id': '33b4a30b-13cb-4c3b-a6fb-df71a1ce760e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:c3:49', 'vm-uuid': '3915cf40-bdd7-4fe8-8311-834ff26aaf9c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:31 compute-0 NetworkManager[44960]: <info>  [1760172931.7799] manager: (tap33b4a30b-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.782 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172931.7819803, 98d8ebd6-0917-49cf-8efc-a245486424bc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.783 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] VM Resumed (Lifecycle Event)
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.792 2 INFO os_vif [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:c3:49,bridge_name='br-int',has_traffic_filtering=True,id=33b4a30b-13cb-4c3b-a6fb-df71a1ce760e,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33b4a30b-13')
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.795 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.801 2 INFO nova.virt.libvirt.driver [-] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Instance spawned successfully.
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.802 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.806 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.812 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.822 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.822 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.822 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.823 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.823 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.823 2 DEBUG nova.virt.libvirt.driver [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.829 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.904 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.905 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.905 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] No VIF found with MAC fa:16:3e:ef:c3:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.905 2 INFO nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Using config drive
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.927 2 DEBUG nova.storage.rbd_utils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.934 2 INFO nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Took 11.87 seconds to spawn the instance on the hypervisor.
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.934 2 DEBUG nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:31 compute-0 nova_compute[260935]: 2025-10-11 08:55:31.993 2 INFO nova.compute.manager [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Took 13.01 seconds to build instance.
Oct 11 08:55:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 134 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 3.6 MiB/s wr, 90 op/s
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.010 2 DEBUG oslo_concurrency.lockutils [None req-f03584e8-cfc1-4275-951f-c47662853626 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:32 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3623249392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:32 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/514337895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.297 2 INFO nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Creating config drive at /var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c/disk.config
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.306 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphyqu0pg4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.484 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphyqu0pg4" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.530 2 DEBUG nova.storage.rbd_utils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.536 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c/disk.config 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.707 2 DEBUG nova.network.neutron [req-06a58741-ccf6-4c13-ac23-0627206f9fbc req-0c0d2df4-8613-46f5-8fd6-6a10befac62e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Updated VIF entry in instance network info cache for port 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.709 2 DEBUG nova.network.neutron [req-06a58741-ccf6-4c13-ac23-0627206f9fbc req-0c0d2df4-8613-46f5-8fd6-6a10befac62e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Updating instance_info_cache with network_info: [{"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.731 2 DEBUG oslo_concurrency.lockutils [req-06a58741-ccf6-4c13-ac23-0627206f9fbc req-0c0d2df4-8613-46f5-8fd6-6a10befac62e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.741 2 DEBUG oslo_concurrency.processutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c/disk.config 3915cf40-bdd7-4fe8-8311-834ff26aaf9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.742 2 INFO nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Deleting local config drive /var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c/disk.config because it was imported into RBD.
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.761 2 DEBUG nova.network.neutron [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Updating instance_info_cache with network_info: [{"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.779 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Releasing lock "refresh_cache-f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.779 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Instance network_info: |[{"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.780 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.780 2 DEBUG nova.network.neutron [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Refreshing network info cache for port d295555f-76d3-480c-9c34-1ef65200656c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.788 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Start _get_guest_xml network_info=[{"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.806 2 WARNING nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.812 2 DEBUG nova.virt.libvirt.host [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.813 2 DEBUG nova.virt.libvirt.host [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.819 2 DEBUG nova.virt.libvirt.host [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.819 2 DEBUG nova.virt.libvirt.host [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.819 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.820 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.820 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.821 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.821 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.821 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.821 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.822 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.822 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.822 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.823 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.824 2 DEBUG nova.virt.hardware [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.828 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:32 compute-0 kernel: tap33b4a30b-13: entered promiscuous mode
Oct 11 08:55:32 compute-0 NetworkManager[44960]: <info>  [1760172932.8339] manager: (tap33b4a30b-13): new Tun device (/org/freedesktop/NetworkManager/Devices/240)
Oct 11 08:55:32 compute-0 ovn_controller[152945]: 2025-10-11T08:55:32Z|00512|binding|INFO|Claiming lport 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e for this chassis.
Oct 11 08:55:32 compute-0 ovn_controller[152945]: 2025-10-11T08:55:32Z|00513|binding|INFO|33b4a30b-13cb-4c3b-a6fb-df71a1ce760e: Claiming fa:16:3e:ef:c3:49 10.100.0.13
Oct 11 08:55:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.853 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:c3:49 10.100.0.13'], port_security=['fa:16:3e:ef:c3:49 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3915cf40-bdd7-4fe8-8311-834ff26aaf9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a6a3cc2a54f4a9bafcdc1304f07944b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e3ab43aa-8c9d-4578-afa6-1b85bfb9e682', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b76a516-f507-4a4f-aa31-3047cedf7049, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=33b4a30b-13cb-4c3b-a6fb-df71a1ce760e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:55:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.856 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e in datapath 338aeaf8-43d5-4292-a8fa-8952dd3c508b bound to our chassis
Oct 11 08:55:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.859 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 338aeaf8-43d5-4292-a8fa-8952dd3c508b
Oct 11 08:55:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.878 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc8f7bdf-7f83-49d8-a612-8f0eefa71056]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.879 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap338aeaf8-41 in ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.883 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap338aeaf8-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:55:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.883 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[18f1ccaa-d53f-432b-bbe9-14384e0dca84]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.887 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1e63dfaf-f1d9-48aa-a8b1-91ae98a03ff3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:32 compute-0 systemd-udevd[323199]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:55:32 compute-0 systemd-machined[215705]: New machine qemu-66-instance-0000003b.
Oct 11 08:55:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.907 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[5586d1b1-f73c-4338-a61e-4ea73bbdf020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:32 compute-0 NetworkManager[44960]: <info>  [1760172932.9137] device (tap33b4a30b-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:55:32 compute-0 NetworkManager[44960]: <info>  [1760172932.9152] device (tap33b4a30b-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:55:32 compute-0 systemd[1]: Started Virtual Machine qemu-66-instance-0000003b.
Oct 11 08:55:32 compute-0 ovn_controller[152945]: 2025-10-11T08:55:32Z|00514|binding|INFO|Setting lport 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e ovn-installed in OVS
Oct 11 08:55:32 compute-0 ovn_controller[152945]: 2025-10-11T08:55:32Z|00515|binding|INFO|Setting lport 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e up in Southbound
Oct 11 08:55:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:32.964 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d0eaeb7e-4262-4fc9-981f-d7fbcfbb5956]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:32 compute-0 nova_compute[260935]: 2025-10-11 08:55:32.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.000 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[85b54dce-8e06-4099-80f1-b7c3103c40e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.008 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b9968c-a5a1-4862-9b40-83676579f5ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:33 compute-0 NetworkManager[44960]: <info>  [1760172933.0090] manager: (tap338aeaf8-40): new Veth device (/org/freedesktop/NetworkManager/Devices/241)
Oct 11 08:55:33 compute-0 systemd-udevd[323204]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.049 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf4c29d-a6c5-4855-b81e-b95c1f6a9801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.053 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[de15e042-aeba-409c-9b2c-462a72539077]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:33 compute-0 NetworkManager[44960]: <info>  [1760172933.0840] device (tap338aeaf8-40): carrier: link connected
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.092 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[37362a33-6f51-4319-8836-fd2fe4ddc40c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.116 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[62fef6a6-09a9-4fa8-8f30-5021d64445f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap338aeaf8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:2a:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477778, 'reachable_time': 18372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323251, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.134 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a875f85a-47ed-411c-a319-07915daf1f78]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:2a1c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477778, 'tstamp': 477778}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323252, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.157 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[55256811-08bd-45f7-a1ba-1ec6a8a06aa9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap338aeaf8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:2a:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 306, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 306, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477778, 'reachable_time': 18372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323253, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:33 compute-0 ceph-mon[74313]: pgmap v1539: 321 pgs: 321 active+clean; 134 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 3.6 MiB/s wr, 90 op/s
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.193 2 DEBUG nova.compute.manager [req-a742d6e7-8f4e-48d5-9180-9a10ae4b5ab5 req-3dbc7d7f-e25f-430b-907c-b0c6ece9bd9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Received event network-vif-plugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.194 2 DEBUG oslo_concurrency.lockutils [req-a742d6e7-8f4e-48d5-9180-9a10ae4b5ab5 req-3dbc7d7f-e25f-430b-907c-b0c6ece9bd9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.194 2 DEBUG oslo_concurrency.lockutils [req-a742d6e7-8f4e-48d5-9180-9a10ae4b5ab5 req-3dbc7d7f-e25f-430b-907c-b0c6ece9bd9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.195 2 DEBUG oslo_concurrency.lockutils [req-a742d6e7-8f4e-48d5-9180-9a10ae4b5ab5 req-3dbc7d7f-e25f-430b-907c-b0c6ece9bd9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.196 2 DEBUG nova.compute.manager [req-a742d6e7-8f4e-48d5-9180-9a10ae4b5ab5 req-3dbc7d7f-e25f-430b-907c-b0c6ece9bd9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Processing event network-vif-plugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.196 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[46d2dbb5-b16b-450b-84da-8e3fddc4a188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.283 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca2b8e9-e0c2-4333-a2d9-1e4dcfb315dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.284 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap338aeaf8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.285 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.285 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap338aeaf8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:33 compute-0 NetworkManager[44960]: <info>  [1760172933.2882] manager: (tap338aeaf8-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/242)
Oct 11 08:55:33 compute-0 kernel: tap338aeaf8-40: entered promiscuous mode
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.293 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap338aeaf8-40, col_values=(('external_ids', {'iface-id': 'ebd712a5-5601-47dd-8cfc-89d2ce6b1035'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:33 compute-0 ovn_controller[152945]: 2025-10-11T08:55:33Z|00516|binding|INFO|Releasing lport ebd712a5-5601-47dd-8cfc-89d2ce6b1035 from this chassis (sb_readonly=0)
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.297 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/338aeaf8-43d5-4292-a8fa-8952dd3c508b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/338aeaf8-43d5-4292-a8fa-8952dd3c508b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.300 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5adaf018-8e37-4726-bd99-13f48285b856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.301 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-338aeaf8-43d5-4292-a8fa-8952dd3c508b
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/338aeaf8-43d5-4292-a8fa-8952dd3c508b.pid.haproxy
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 338aeaf8-43d5-4292-a8fa-8952dd3c508b
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:55:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:33.301 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'env', 'PROCESS_TAG=haproxy-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/338aeaf8-43d5-4292-a8fa-8952dd3c508b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:55:33 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2261304288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.346 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.378 2 DEBUG nova.storage.rbd_utils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.382 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:33 compute-0 podman[323322]: 2025-10-11 08:55:33.671426662 +0000 UTC m=+0.059414045 container create 32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:55:33 compute-0 podman[323322]: 2025-10-11 08:55:33.635669943 +0000 UTC m=+0.023657366 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:55:33 compute-0 systemd[1]: Started libpod-conmon-32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565.scope.
Oct 11 08:55:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68996452ee67c233085e6e9ba704114e7118cced28fee3c4a5f9d0d8650bf5d5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:33 compute-0 podman[323322]: 2025-10-11 08:55:33.79694376 +0000 UTC m=+0.184931143 container init 32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 11 08:55:33 compute-0 podman[323322]: 2025-10-11 08:55:33.80709653 +0000 UTC m=+0.195083883 container start 32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 08:55:33 compute-0 neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b[323338]: [NOTICE]   (323342) : New worker (323351) forked
Oct 11 08:55:33 compute-0 neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b[323338]: [NOTICE]   (323342) : Loading success.
Oct 11 08:55:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:55:33 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3218247990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:33 compute-0 NetworkManager[44960]: <info>  [1760172933.8667] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Oct 11 08:55:33 compute-0 NetworkManager[44960]: <info>  [1760172933.8685] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.872 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.875 2 DEBUG nova.virt.libvirt.vif [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1662661810',display_name='tempest-DeleteServersTestJSON-server-1662661810',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1662661810',id=60,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-hslosofy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:28Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.876 2 DEBUG nova.network.os_vif_util [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.878 2 DEBUG nova.network.os_vif_util [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=d295555f-76d3-480c-9c34-1ef65200656c,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd295555f-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.880 2 DEBUG nova.objects.instance [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'pci_devices' on Instance uuid f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.896 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:55:33 compute-0 nova_compute[260935]:   <uuid>f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8</uuid>
Oct 11 08:55:33 compute-0 nova_compute[260935]:   <name>instance-0000003c</name>
Oct 11 08:55:33 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:55:33 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:55:33 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <nova:name>tempest-DeleteServersTestJSON-server-1662661810</nova:name>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:55:32</nova:creationTime>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:55:33 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:55:33 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:55:33 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:55:33 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:55:33 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:55:33 compute-0 nova_compute[260935]:         <nova:user uuid="213e5693e94f44e7950e3dfbca04228a">tempest-DeleteServersTestJSON-1019340677-project-member</nova:user>
Oct 11 08:55:33 compute-0 nova_compute[260935]:         <nova:project uuid="93dd4902ce324862a38006da8e06503a">tempest-DeleteServersTestJSON-1019340677</nova:project>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:55:33 compute-0 nova_compute[260935]:         <nova:port uuid="d295555f-76d3-480c-9c34-1ef65200656c">
Oct 11 08:55:33 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:55:33 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:55:33 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <system>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <entry name="serial">f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8</entry>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <entry name="uuid">f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8</entry>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     </system>
Oct 11 08:55:33 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:55:33 compute-0 nova_compute[260935]:   <os>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:   </os>
Oct 11 08:55:33 compute-0 nova_compute[260935]:   <features>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:   </features>
Oct 11 08:55:33 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:55:33 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:55:33 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk">
Oct 11 08:55:33 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       </source>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:55:33 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk.config">
Oct 11 08:55:33 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       </source>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:55:33 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:67:47:3c"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <target dev="tapd295555f-76"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8/console.log" append="off"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <video>
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     </video>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:55:33 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:55:33 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:55:33 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:55:33 compute-0 nova_compute[260935]: </domain>
Oct 11 08:55:33 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.906 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Preparing to wait for external event network-vif-plugged-d295555f-76d3-480c-9c34-1ef65200656c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.907 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.907 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.908 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.909 2 DEBUG nova.virt.libvirt.vif [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1662661810',display_name='tempest-DeleteServersTestJSON-server-1662661810',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1662661810',id=60,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-hslosofy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:28Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.910 2 DEBUG nova.network.os_vif_util [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.911 2 DEBUG nova.network.os_vif_util [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=d295555f-76d3-480c-9c34-1ef65200656c,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd295555f-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.912 2 DEBUG os_vif [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=d295555f-76d3-480c-9c34-1ef65200656c,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd295555f-76') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.914 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.915 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.919 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd295555f-76, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.920 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd295555f-76, col_values=(('external_ids', {'iface-id': 'd295555f-76d3-480c-9c34-1ef65200656c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:47:3c', 'vm-uuid': 'f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:33 compute-0 NetworkManager[44960]: <info>  [1760172933.9244] manager: (tapd295555f-76): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.982 2 INFO os_vif [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=d295555f-76d3-480c-9c34-1ef65200656c,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd295555f-76')
Oct 11 08:55:33 compute-0 ovn_controller[152945]: 2025-10-11T08:55:33Z|00517|binding|INFO|Releasing lport 056a8563-0695-415b-921f-e75fa98e60e5 from this chassis (sb_readonly=0)
Oct 11 08:55:33 compute-0 ovn_controller[152945]: 2025-10-11T08:55:33Z|00518|binding|INFO|Releasing lport ebd712a5-5601-47dd-8cfc-89d2ce6b1035 from this chassis (sb_readonly=0)
Oct 11 08:55:33 compute-0 nova_compute[260935]: 2025-10-11 08:55:33.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 180 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 707 KiB/s rd, 5.3 MiB/s wr, 139 op/s
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.053 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.053 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.054 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No VIF found with MAC fa:16:3e:67:47:3c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.054 2 INFO nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Using config drive
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.081 2 DEBUG nova.storage.rbd_utils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:34 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2261304288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:34 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3218247990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.215 2 DEBUG nova.compute.manager [req-8b1982a6-5e3b-44ee-a3b0-2905a568ce51 req-70e56087-3b89-4036-8627-16d0fcb9a987 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-changed-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.216 2 DEBUG nova.compute.manager [req-8b1982a6-5e3b-44ee-a3b0-2905a568ce51 req-70e56087-3b89-4036-8627-16d0fcb9a987 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Refreshing instance network info cache due to event network-changed-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.217 2 DEBUG oslo_concurrency.lockutils [req-8b1982a6-5e3b-44ee-a3b0-2905a568ce51 req-70e56087-3b89-4036-8627-16d0fcb9a987 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.217 2 DEBUG oslo_concurrency.lockutils [req-8b1982a6-5e3b-44ee-a3b0-2905a568ce51 req-70e56087-3b89-4036-8627-16d0fcb9a987 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.218 2 DEBUG nova.network.neutron [req-8b1982a6-5e3b-44ee-a3b0-2905a568ce51 req-70e56087-3b89-4036-8627-16d0fcb9a987 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Refreshing network info cache for port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.275 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172919.2630043, 09e8444f-162e-4773-a181-a5b70c7af8dd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.275 2 INFO nova.compute.manager [-] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] VM Stopped (Lifecycle Event)
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.310 2 DEBUG nova.compute.manager [None req-e656393c-f61a-4b1f-b595-5ec9d62ffef7 - - - - - -] [instance: 09e8444f-162e-4773-a181-a5b70c7af8dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.367 2 DEBUG nova.network.neutron [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Updated VIF entry in instance network info cache for port d295555f-76d3-480c-9c34-1ef65200656c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.368 2 DEBUG nova.network.neutron [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Updating instance_info_cache with network_info: [{"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.384 2 DEBUG oslo_concurrency.lockutils [req-cef8cce4-6ed4-40cf-87da-74586a8ca383 req-305547af-e019-4c3d-bbe6-844d3846ace7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.480 2 INFO nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Creating config drive at /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8/disk.config
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.487 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt1hddp2x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.560 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.563 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172934.559016, 3915cf40-bdd7-4fe8-8311-834ff26aaf9c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.563 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] VM Started (Lifecycle Event)
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.581 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.586 2 INFO nova.virt.libvirt.driver [-] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Instance spawned successfully.
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.587 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.601 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.611 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.617 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.617 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.618 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.619 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.619 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.620 2 DEBUG nova.virt.libvirt.driver [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.630 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.631 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172934.5608969, 3915cf40-bdd7-4fe8-8311-834ff26aaf9c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.631 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] VM Paused (Lifecycle Event)
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.647 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt1hddp2x" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.684 2 DEBUG nova.storage.rbd_utils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.688 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8/disk.config f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.749 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.753 2 INFO nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Took 8.20 seconds to spawn the instance on the hypervisor.
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.754 2 DEBUG nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.761 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172934.5754414, 3915cf40-bdd7-4fe8-8311-834ff26aaf9c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.762 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] VM Resumed (Lifecycle Event)
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.792 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.815 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.825 2 INFO nova.compute.manager [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Took 9.23 seconds to build instance.
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.855 2 DEBUG oslo_concurrency.lockutils [None req-7a652225-c8de-463d-bafb-8628c9a6f227 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.897 2 DEBUG oslo_concurrency.processutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8/disk.config f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.898 2 INFO nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Deleting local config drive /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8/disk.config because it was imported into RBD.
Oct 11 08:55:34 compute-0 kernel: tapd295555f-76: entered promiscuous mode
Oct 11 08:55:34 compute-0 NetworkManager[44960]: <info>  [1760172934.9786] manager: (tapd295555f-76): new Tun device (/org/freedesktop/NetworkManager/Devices/246)
Oct 11 08:55:34 compute-0 systemd-udevd[323241]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:55:34 compute-0 ovn_controller[152945]: 2025-10-11T08:55:34Z|00519|binding|INFO|Claiming lport d295555f-76d3-480c-9c34-1ef65200656c for this chassis.
Oct 11 08:55:34 compute-0 ovn_controller[152945]: 2025-10-11T08:55:34Z|00520|binding|INFO|d295555f-76d3-480c-9c34-1ef65200656c: Claiming fa:16:3e:67:47:3c 10.100.0.7
Oct 11 08:55:34 compute-0 nova_compute[260935]: 2025-10-11 08:55:34.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:34.995 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:47:3c 10.100.0.7'], port_security=['fa:16:3e:67:47:3c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93dd4902ce324862a38006da8e06503a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b773900e-3df7-4cb6-b9b0-3d240ff499b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69794a2f-48ab-4a0d-8725-f4a7f57172dd, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d295555f-76d3-480c-9c34-1ef65200656c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:55:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:34.996 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d295555f-76d3-480c-9c34-1ef65200656c in datapath 2cb96d57-a5e9-4b38-b10e-68187a5bf82f bound to our chassis
Oct 11 08:55:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:34.998 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 08:55:35 compute-0 NetworkManager[44960]: <info>  [1760172935.0102] device (tapd295555f-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:55:35 compute-0 NetworkManager[44960]: <info>  [1760172935.0157] device (tapd295555f-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:55:35 compute-0 ovn_controller[152945]: 2025-10-11T08:55:35Z|00521|binding|INFO|Setting lport d295555f-76d3-480c-9c34-1ef65200656c ovn-installed in OVS
Oct 11 08:55:35 compute-0 ovn_controller[152945]: 2025-10-11T08:55:35Z|00522|binding|INFO|Setting lport d295555f-76d3-480c-9c34-1ef65200656c up in Southbound
Oct 11 08:55:35 compute-0 nova_compute[260935]: 2025-10-11 08:55:35.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:35 compute-0 nova_compute[260935]: 2025-10-11 08:55:35.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.030 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[64bd29a3-5fc3-4979-8129-c7e91b1985b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.032 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2cb96d57-a1 in ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.038 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2cb96d57-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.038 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[374af8d0-2f6a-4923-ace5-ca74ae5f4374]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.041 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6f85d7bf-a0e8-4db0-bc75-23b80e605378]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:35 compute-0 systemd-machined[215705]: New machine qemu-67-instance-0000003c.
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.060 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd52b05-ad40-41df-9af5-1de3f3503d92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:35 compute-0 systemd[1]: Started Virtual Machine qemu-67-instance-0000003c.
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.084 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c371493a-d3d1-48a3-a34f-a13ef7e91022]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.132 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3b13385f-fb34-4753-ad7d-75da4780c65a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:35 compute-0 NetworkManager[44960]: <info>  [1760172935.1483] manager: (tap2cb96d57-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/247)
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.149 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4af24d-94e8-4efb-8d3e-85a8c6188c83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:35 compute-0 podman[323468]: 2025-10-11 08:55:35.159152783 +0000 UTC m=+0.121426863 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid)
Oct 11 08:55:35 compute-0 ceph-mon[74313]: pgmap v1540: 321 pgs: 321 active+clean; 180 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 707 KiB/s rd, 5.3 MiB/s wr, 139 op/s
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.201 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6394a352-e463-435c-99d3-46acae9cb589]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.205 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4e2a5a-940b-4077-bf8b-2ec982b1bcbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:35 compute-0 NetworkManager[44960]: <info>  [1760172935.2456] device (tap2cb96d57-a0): carrier: link connected
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.261 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cb6727ef-6066-46f2-a6e9-9cebf9d07743]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.285 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed3a25e-f540-441c-ae7b-d0191d46439c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2cb96d57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c9:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477994, 'reachable_time': 20209, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323506, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.306 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[13072288-94a0-4e33-b011-ffcb9d80d33b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:c9b5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477994, 'tstamp': 477994}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323507, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.331 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d38b6494-d532-441d-aec7-beb05cbd61c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2cb96d57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c9:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477994, 'reachable_time': 20209, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323508, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.377 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1143afe9-6ac9-45ff-8e25-3b9c10882e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.474 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[00b5f734-6f45-4dae-b4ce-8a604ca5b307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.476 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb96d57-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.476 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.476 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cb96d57-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:35 compute-0 NetworkManager[44960]: <info>  [1760172935.4791] manager: (tap2cb96d57-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Oct 11 08:55:35 compute-0 kernel: tap2cb96d57-a0: entered promiscuous mode
Oct 11 08:55:35 compute-0 nova_compute[260935]: 2025-10-11 08:55:35.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:35 compute-0 nova_compute[260935]: 2025-10-11 08:55:35.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.482 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2cb96d57-a0, col_values=(('external_ids', {'iface-id': 'a11e0a08-d1ab-4bff-901b-632484cc0e21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:35 compute-0 nova_compute[260935]: 2025-10-11 08:55:35.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:35 compute-0 ovn_controller[152945]: 2025-10-11T08:55:35Z|00523|binding|INFO|Releasing lport a11e0a08-d1ab-4bff-901b-632484cc0e21 from this chassis (sb_readonly=0)
Oct 11 08:55:35 compute-0 nova_compute[260935]: 2025-10-11 08:55:35.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.505 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.505 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ecec851e-eb86-4d48-a5e1-15c160086f46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.506 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:55:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:35.507 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'env', 'PROCESS_TAG=haproxy-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:55:35 compute-0 nova_compute[260935]: 2025-10-11 08:55:35.574 2 DEBUG nova.compute.manager [req-571904f1-dfbe-4ccc-8996-a132b8e01a28 req-6800dab8-4e2c-42b3-8e14-6284943d2e3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Received event network-vif-plugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:35 compute-0 nova_compute[260935]: 2025-10-11 08:55:35.575 2 DEBUG oslo_concurrency.lockutils [req-571904f1-dfbe-4ccc-8996-a132b8e01a28 req-6800dab8-4e2c-42b3-8e14-6284943d2e3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:35 compute-0 nova_compute[260935]: 2025-10-11 08:55:35.575 2 DEBUG oslo_concurrency.lockutils [req-571904f1-dfbe-4ccc-8996-a132b8e01a28 req-6800dab8-4e2c-42b3-8e14-6284943d2e3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:35 compute-0 nova_compute[260935]: 2025-10-11 08:55:35.575 2 DEBUG oslo_concurrency.lockutils [req-571904f1-dfbe-4ccc-8996-a132b8e01a28 req-6800dab8-4e2c-42b3-8e14-6284943d2e3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:35 compute-0 nova_compute[260935]: 2025-10-11 08:55:35.575 2 DEBUG nova.compute.manager [req-571904f1-dfbe-4ccc-8996-a132b8e01a28 req-6800dab8-4e2c-42b3-8e14-6284943d2e3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] No waiting events found dispatching network-vif-plugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:55:35 compute-0 nova_compute[260935]: 2025-10-11 08:55:35.575 2 WARNING nova.compute.manager [req-571904f1-dfbe-4ccc-8996-a132b8e01a28 req-6800dab8-4e2c-42b3-8e14-6284943d2e3b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Received unexpected event network-vif-plugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e for instance with vm_state active and task_state None.
Oct 11 08:55:35 compute-0 nova_compute[260935]: 2025-10-11 08:55:35.830 2 DEBUG nova.network.neutron [req-8b1982a6-5e3b-44ee-a3b0-2905a568ce51 req-70e56087-3b89-4036-8627-16d0fcb9a987 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updated VIF entry in instance network info cache for port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:55:35 compute-0 nova_compute[260935]: 2025-10-11 08:55:35.831 2 DEBUG nova.network.neutron [req-8b1982a6-5e3b-44ee-a3b0-2905a568ce51 req-70e56087-3b89-4036-8627-16d0fcb9a987 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updating instance_info_cache with network_info: [{"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:35 compute-0 nova_compute[260935]: 2025-10-11 08:55:35.853 2 DEBUG oslo_concurrency.lockutils [req-8b1982a6-5e3b-44ee-a3b0-2905a568ce51 req-70e56087-3b89-4036-8627-16d0fcb9a987 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:55:35 compute-0 podman[323580]: 2025-10-11 08:55:35.985693492 +0000 UTC m=+0.075517014 container create 798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 08:55:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 180 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 671 KiB/s rd, 3.6 MiB/s wr, 85 op/s
Oct 11 08:55:36 compute-0 systemd[1]: Started libpod-conmon-798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f.scope.
Oct 11 08:55:36 compute-0 podman[323580]: 2025-10-11 08:55:35.945930458 +0000 UTC m=+0.035754060 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:55:36 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:55:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7de2860817c6756baaf80b94981532227b2da2929edff94d5b737c93d30e6677/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:36 compute-0 podman[323580]: 2025-10-11 08:55:36.09891457 +0000 UTC m=+0.188738102 container init 798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 11 08:55:36 compute-0 podman[323580]: 2025-10-11 08:55:36.105147388 +0000 UTC m=+0.194970910 container start 798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:55:36 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[323595]: [NOTICE]   (323599) : New worker (323601) forked
Oct 11 08:55:36 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[323595]: [NOTICE]   (323599) : Loading success.
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.354 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172936.353903, f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.355 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] VM Started (Lifecycle Event)
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.393 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.399 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172936.3580801, f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.400 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] VM Paused (Lifecycle Event)
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.435 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.440 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.494 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.508 2 DEBUG nova.compute.manager [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Received event network-vif-plugged-d295555f-76d3-480c-9c34-1ef65200656c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.509 2 DEBUG oslo_concurrency.lockutils [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.509 2 DEBUG oslo_concurrency.lockutils [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.510 2 DEBUG oslo_concurrency.lockutils [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.510 2 DEBUG nova.compute.manager [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Processing event network-vif-plugged-d295555f-76d3-480c-9c34-1ef65200656c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.510 2 DEBUG nova.compute.manager [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Received event network-vif-plugged-d295555f-76d3-480c-9c34-1ef65200656c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.511 2 DEBUG oslo_concurrency.lockutils [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.511 2 DEBUG oslo_concurrency.lockutils [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.511 2 DEBUG oslo_concurrency.lockutils [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.512 2 DEBUG nova.compute.manager [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] No waiting events found dispatching network-vif-plugged-d295555f-76d3-480c-9c34-1ef65200656c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.512 2 WARNING nova.compute.manager [req-dfc42284-f51d-4dfb-8bd4-74ace4d9946b req-3f2df2e4-8c6a-4012-9935-d67fc4f69930 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Received unexpected event network-vif-plugged-d295555f-76d3-480c-9c34-1ef65200656c for instance with vm_state building and task_state spawning.
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.513 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.518 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172936.5179389, f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.518 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] VM Resumed (Lifecycle Event)
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.523 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.528 2 INFO nova.virt.libvirt.driver [-] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Instance spawned successfully.
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.529 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.543 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.553 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.561 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.562 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.563 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.563 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.564 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.565 2 DEBUG nova.virt.libvirt.driver [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.577 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.669 2 INFO nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Took 8.36 seconds to spawn the instance on the hypervisor.
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.670 2 DEBUG nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.744 2 INFO nova.compute.manager [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Took 9.41 seconds to build instance.
Oct 11 08:55:36 compute-0 nova_compute[260935]: 2025-10-11 08:55:36.761 2 DEBUG oslo_concurrency.lockutils [None req-4c64e838-f2b1-4794-afda-1493719b8b97 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:37 compute-0 ceph-mon[74313]: pgmap v1541: 321 pgs: 321 active+clean; 180 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 671 KiB/s rd, 3.6 MiB/s wr, 85 op/s
Oct 11 08:55:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:55:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2681708282' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:55:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:55:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2681708282' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.672 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.672 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.672 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.673 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.673 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.674 2 INFO nova.compute.manager [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Terminating instance
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.675 2 DEBUG nova.compute.manager [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:55:37 compute-0 kernel: tapd295555f-76 (unregistering): left promiscuous mode
Oct 11 08:55:37 compute-0 NetworkManager[44960]: <info>  [1760172937.7253] device (tapd295555f-76): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:37 compute-0 ovn_controller[152945]: 2025-10-11T08:55:37Z|00524|binding|INFO|Releasing lport d295555f-76d3-480c-9c34-1ef65200656c from this chassis (sb_readonly=0)
Oct 11 08:55:37 compute-0 ovn_controller[152945]: 2025-10-11T08:55:37Z|00525|binding|INFO|Setting lport d295555f-76d3-480c-9c34-1ef65200656c down in Southbound
Oct 11 08:55:37 compute-0 ovn_controller[152945]: 2025-10-11T08:55:37Z|00526|binding|INFO|Removing iface tapd295555f-76 ovn-installed in OVS
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:37.755 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:47:3c 10.100.0.7'], port_security=['fa:16:3e:67:47:3c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93dd4902ce324862a38006da8e06503a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b773900e-3df7-4cb6-b9b0-3d240ff499b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69794a2f-48ab-4a0d-8725-f4a7f57172dd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d295555f-76d3-480c-9c34-1ef65200656c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:55:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:37.757 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d295555f-76d3-480c-9c34-1ef65200656c in datapath 2cb96d57-a5e9-4b38-b10e-68187a5bf82f unbound from our chassis
Oct 11 08:55:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:37.759 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:55:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:37.760 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3a7691-085c-4dd5-b4cb-7d01bad3d5ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:37.761 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f namespace which is not needed anymore
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.773 2 DEBUG nova.compute.manager [req-44d98fdf-593b-4bdd-928d-8db5ab480085 req-ba11f04b-55d8-48d1-abda-ebf305165faa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Received event network-changed-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.773 2 DEBUG nova.compute.manager [req-44d98fdf-593b-4bdd-928d-8db5ab480085 req-ba11f04b-55d8-48d1-abda-ebf305165faa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Refreshing instance network info cache due to event network-changed-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.773 2 DEBUG oslo_concurrency.lockutils [req-44d98fdf-593b-4bdd-928d-8db5ab480085 req-ba11f04b-55d8-48d1-abda-ebf305165faa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.774 2 DEBUG oslo_concurrency.lockutils [req-44d98fdf-593b-4bdd-928d-8db5ab480085 req-ba11f04b-55d8-48d1-abda-ebf305165faa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.774 2 DEBUG nova.network.neutron [req-44d98fdf-593b-4bdd-928d-8db5ab480085 req-ba11f04b-55d8-48d1-abda-ebf305165faa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Refreshing network info cache for port 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:37 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Oct 11 08:55:37 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003c.scope: Consumed 2.271s CPU time.
Oct 11 08:55:37 compute-0 systemd-machined[215705]: Machine qemu-67-instance-0000003c terminated.
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.914 2 INFO nova.virt.libvirt.driver [-] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Instance destroyed successfully.
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.914 2 DEBUG nova.objects.instance [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'resources' on Instance uuid f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:55:37 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[323595]: [NOTICE]   (323599) : haproxy version is 2.8.14-c23fe91
Oct 11 08:55:37 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[323595]: [NOTICE]   (323599) : path to executable is /usr/sbin/haproxy
Oct 11 08:55:37 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[323595]: [WARNING]  (323599) : Exiting Master process...
Oct 11 08:55:37 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[323595]: [WARNING]  (323599) : Exiting Master process...
Oct 11 08:55:37 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[323595]: [ALERT]    (323599) : Current worker (323601) exited with code 143 (Terminated)
Oct 11 08:55:37 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[323595]: [WARNING]  (323599) : All workers exited. Exiting... (0)
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.932 2 DEBUG nova.virt.libvirt.vif [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:55:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1662661810',display_name='tempest-DeleteServersTestJSON-server-1662661810',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1662661810',id=60,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:55:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-hslosofy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:55:36Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.932 2 DEBUG nova.network.os_vif_util [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "d295555f-76d3-480c-9c34-1ef65200656c", "address": "fa:16:3e:67:47:3c", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd295555f-76", "ovs_interfaceid": "d295555f-76d3-480c-9c34-1ef65200656c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:55:37 compute-0 systemd[1]: libpod-798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f.scope: Deactivated successfully.
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.935 2 DEBUG nova.network.os_vif_util [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=d295555f-76d3-480c-9c34-1ef65200656c,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd295555f-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.936 2 DEBUG os_vif [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=d295555f-76d3-480c-9c34-1ef65200656c,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd295555f-76') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.938 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd295555f-76, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:37 compute-0 podman[323633]: 2025-10-11 08:55:37.94395587 +0000 UTC m=+0.066706413 container died 798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.947 2 INFO os_vif [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=d295555f-76d3-480c-9c34-1ef65200656c,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd295555f-76')
Oct 11 08:55:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f-userdata-shm.mount: Deactivated successfully.
Oct 11 08:55:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7de2860817c6756baaf80b94981532227b2da2929edff94d5b737c93d30e6677-merged.mount: Deactivated successfully.
Oct 11 08:55:37 compute-0 nova_compute[260935]: 2025-10-11 08:55:37.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:38 compute-0 podman[323633]: 2025-10-11 08:55:38.002647184 +0000 UTC m=+0.125397747 container cleanup 798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:55:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 181 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 3.6 MiB/s wr, 262 op/s
Oct 11 08:55:38 compute-0 systemd[1]: libpod-conmon-798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f.scope: Deactivated successfully.
Oct 11 08:55:38 compute-0 nova_compute[260935]: 2025-10-11 08:55:38.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:38 compute-0 podman[323686]: 2025-10-11 08:55:38.099349121 +0000 UTC m=+0.065771546 container remove 798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 11 08:55:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.110 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b0922263-8503-4452-8968-c06fa72632a1]: (4, ('Sat Oct 11 08:55:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f (798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f)\n798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f\nSat Oct 11 08:55:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f (798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f)\n798bebd64e4b7ad73be2758893eb501a1aeb4ef4f4569bb10c702b4bbf0edf4f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.112 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[24c684eb-a63f-4503-9871-25f5ac6234c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.114 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb96d57-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:38 compute-0 nova_compute[260935]: 2025-10-11 08:55:38.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:38 compute-0 kernel: tap2cb96d57-a0: left promiscuous mode
Oct 11 08:55:38 compute-0 nova_compute[260935]: 2025-10-11 08:55:38.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.158 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[24ffd6d5-fdbc-4468-b506-8a28cd5d8c5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.183 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3eba3290-6726-4a6c-988c-7e88c5581dc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.184 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33b77bc3-7753-46c0-a3c6-05e50b6963b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.208 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[845c456e-da1c-4467-a557-0a1d6b510692]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477982, 'reachable_time': 38789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323704, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d2cb96d57\x2da5e9\x2d4b38\x2db10e\x2d68187a5bf82f.mount: Deactivated successfully.
Oct 11 08:55:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.211 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:55:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:38.211 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d3527b-a87f-4598-9a06-9b5d55a2e9b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2681708282' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:55:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2681708282' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:55:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:55:38 compute-0 nova_compute[260935]: 2025-10-11 08:55:38.380 2 INFO nova.virt.libvirt.driver [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Deleting instance files /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_del
Oct 11 08:55:38 compute-0 nova_compute[260935]: 2025-10-11 08:55:38.382 2 INFO nova.virt.libvirt.driver [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Deletion of /var/lib/nova/instances/f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8_del complete
Oct 11 08:55:38 compute-0 nova_compute[260935]: 2025-10-11 08:55:38.443 2 INFO nova.compute.manager [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 11 08:55:38 compute-0 nova_compute[260935]: 2025-10-11 08:55:38.444 2 DEBUG oslo.service.loopingcall [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:55:38 compute-0 nova_compute[260935]: 2025-10-11 08:55:38.448 2 DEBUG nova.compute.manager [-] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:55:38 compute-0 nova_compute[260935]: 2025-10-11 08:55:38.449 2 DEBUG nova.network.neutron [-] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:55:39 compute-0 nova_compute[260935]: 2025-10-11 08:55:39.216 2 DEBUG nova.network.neutron [req-44d98fdf-593b-4bdd-928d-8db5ab480085 req-ba11f04b-55d8-48d1-abda-ebf305165faa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Updated VIF entry in instance network info cache for port 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:55:39 compute-0 nova_compute[260935]: 2025-10-11 08:55:39.217 2 DEBUG nova.network.neutron [req-44d98fdf-593b-4bdd-928d-8db5ab480085 req-ba11f04b-55d8-48d1-abda-ebf305165faa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Updating instance_info_cache with network_info: [{"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:39 compute-0 nova_compute[260935]: 2025-10-11 08:55:39.220 2 DEBUG nova.network.neutron [-] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:39 compute-0 ceph-mon[74313]: pgmap v1542: 321 pgs: 321 active+clean; 181 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 3.6 MiB/s wr, 262 op/s
Oct 11 08:55:39 compute-0 nova_compute[260935]: 2025-10-11 08:55:39.253 2 INFO nova.compute.manager [-] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Took 0.80 seconds to deallocate network for instance.
Oct 11 08:55:39 compute-0 nova_compute[260935]: 2025-10-11 08:55:39.269 2 DEBUG oslo_concurrency.lockutils [req-44d98fdf-593b-4bdd-928d-8db5ab480085 req-ba11f04b-55d8-48d1-abda-ebf305165faa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:55:39 compute-0 nova_compute[260935]: 2025-10-11 08:55:39.288 2 DEBUG nova.compute.manager [req-7b709226-509a-4a81-aaea-19173d5f23c3 req-8ed6eb1c-a1c3-4a17-888c-edbb02a26534 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Received event network-vif-deleted-d295555f-76d3-480c-9c34-1ef65200656c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:39 compute-0 nova_compute[260935]: 2025-10-11 08:55:39.329 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:39 compute-0 nova_compute[260935]: 2025-10-11 08:55:39.329 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:39 compute-0 nova_compute[260935]: 2025-10-11 08:55:39.434 2 DEBUG oslo_concurrency.processutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:55:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2409747422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:39 compute-0 nova_compute[260935]: 2025-10-11 08:55:39.944 2 DEBUG oslo_concurrency.processutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:39 compute-0 nova_compute[260935]: 2025-10-11 08:55:39.954 2 DEBUG nova.compute.provider_tree [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:55:39 compute-0 nova_compute[260935]: 2025-10-11 08:55:39.970 2 DEBUG nova.scheduler.client.report [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:55:39 compute-0 nova_compute[260935]: 2025-10-11 08:55:39.993 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 181 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 225 op/s
Oct 11 08:55:40 compute-0 nova_compute[260935]: 2025-10-11 08:55:40.019 2 INFO nova.scheduler.client.report [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Deleted allocations for instance f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8
Oct 11 08:55:40 compute-0 nova_compute[260935]: 2025-10-11 08:55:40.080 2 DEBUG oslo_concurrency.lockutils [None req-54d2ec95-f145-417d-84d8-e8abc3dc7c99 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:40 compute-0 nova_compute[260935]: 2025-10-11 08:55:40.203 2 DEBUG nova.compute.manager [req-289e0ea6-05a6-4578-be5f-079bc76edf5e req-94153dbe-cb94-47b2-89e2-659a09997f90 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Received event network-changed-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:40 compute-0 nova_compute[260935]: 2025-10-11 08:55:40.204 2 DEBUG nova.compute.manager [req-289e0ea6-05a6-4578-be5f-079bc76edf5e req-94153dbe-cb94-47b2-89e2-659a09997f90 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Refreshing instance network info cache due to event network-changed-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:55:40 compute-0 nova_compute[260935]: 2025-10-11 08:55:40.204 2 DEBUG oslo_concurrency.lockutils [req-289e0ea6-05a6-4578-be5f-079bc76edf5e req-94153dbe-cb94-47b2-89e2-659a09997f90 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:40 compute-0 nova_compute[260935]: 2025-10-11 08:55:40.205 2 DEBUG oslo_concurrency.lockutils [req-289e0ea6-05a6-4578-be5f-079bc76edf5e req-94153dbe-cb94-47b2-89e2-659a09997f90 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:40 compute-0 nova_compute[260935]: 2025-10-11 08:55:40.205 2 DEBUG nova.network.neutron [req-289e0ea6-05a6-4578-be5f-079bc76edf5e req-94153dbe-cb94-47b2-89e2-659a09997f90 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Refreshing network info cache for port 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:55:40 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2409747422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:40 compute-0 podman[323728]: 2025-10-11 08:55:40.81300939 +0000 UTC m=+0.106948220 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:55:40 compute-0 podman[323729]: 2025-10-11 08:55:40.864300562 +0000 UTC m=+0.154224128 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:55:41 compute-0 ceph-mon[74313]: pgmap v1543: 321 pgs: 321 active+clean; 181 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 225 op/s
Oct 11 08:55:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 181 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 225 op/s
Oct 11 08:55:42 compute-0 nova_compute[260935]: 2025-10-11 08:55:42.385 2 DEBUG nova.network.neutron [req-289e0ea6-05a6-4578-be5f-079bc76edf5e req-94153dbe-cb94-47b2-89e2-659a09997f90 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Updated VIF entry in instance network info cache for port 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:55:42 compute-0 nova_compute[260935]: 2025-10-11 08:55:42.386 2 DEBUG nova.network.neutron [req-289e0ea6-05a6-4578-be5f-079bc76edf5e req-94153dbe-cb94-47b2-89e2-659a09997f90 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Updating instance_info_cache with network_info: [{"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:42 compute-0 nova_compute[260935]: 2025-10-11 08:55:42.409 2 DEBUG oslo_concurrency.lockutils [req-289e0ea6-05a6-4578-be5f-079bc76edf5e req-94153dbe-cb94-47b2-89e2-659a09997f90 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-3915cf40-bdd7-4fe8-8311-834ff26aaf9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:55:42 compute-0 nova_compute[260935]: 2025-10-11 08:55:42.920 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "b297454f-91af-4716-b4f7-6af9f0d7e62d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:42 compute-0 nova_compute[260935]: 2025-10-11 08:55:42.920 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:42 compute-0 nova_compute[260935]: 2025-10-11 08:55:42.941 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:55:42 compute-0 nova_compute[260935]: 2025-10-11 08:55:42.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.013 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.014 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.023 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.023 2 INFO nova.compute.claims [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.170 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:43 compute-0 ceph-mon[74313]: pgmap v1544: 321 pgs: 321 active+clean; 181 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 225 op/s
Oct 11 08:55:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:55:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:55:43 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/217999181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.677 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.685 2 DEBUG nova.compute.provider_tree [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.708 2 DEBUG nova.scheduler.client.report [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.733 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.734 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.788 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.788 2 DEBUG nova.network.neutron [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.812 2 INFO nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.834 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.918 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.920 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.921 2 INFO nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Creating image(s)
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.946 2 DEBUG nova.storage.rbd_utils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image b297454f-91af-4716-b4f7-6af9f0d7e62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.972 2 DEBUG nova.storage.rbd_utils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image b297454f-91af-4716-b4f7-6af9f0d7e62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:43 compute-0 nova_compute[260935]: 2025-10-11 08:55:43.997 2 DEBUG nova.storage.rbd_utils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image b297454f-91af-4716-b4f7-6af9f0d7e62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:44 compute-0 nova_compute[260935]: 2025-10-11 08:55:44.001 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 135 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 1.8 MiB/s wr, 261 op/s
Oct 11 08:55:44 compute-0 nova_compute[260935]: 2025-10-11 08:55:44.039 2 DEBUG nova.policy [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee0e5fedb9fc464eb2a9ac362f5e0749', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:55:44 compute-0 nova_compute[260935]: 2025-10-11 08:55:44.085 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:44 compute-0 nova_compute[260935]: 2025-10-11 08:55:44.086 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:44 compute-0 nova_compute[260935]: 2025-10-11 08:55:44.087 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:44 compute-0 nova_compute[260935]: 2025-10-11 08:55:44.087 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:44 compute-0 nova_compute[260935]: 2025-10-11 08:55:44.116 2 DEBUG nova.storage.rbd_utils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image b297454f-91af-4716-b4f7-6af9f0d7e62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:44 compute-0 nova_compute[260935]: 2025-10-11 08:55:44.121 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b297454f-91af-4716-b4f7-6af9f0d7e62d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:44 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/217999181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:44 compute-0 nova_compute[260935]: 2025-10-11 08:55:44.430 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b297454f-91af-4716-b4f7-6af9f0d7e62d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:44 compute-0 nova_compute[260935]: 2025-10-11 08:55:44.498 2 DEBUG nova.storage.rbd_utils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] resizing rbd image b297454f-91af-4716-b4f7-6af9f0d7e62d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:55:44 compute-0 nova_compute[260935]: 2025-10-11 08:55:44.603 2 DEBUG nova.objects.instance [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'migration_context' on Instance uuid b297454f-91af-4716-b4f7-6af9f0d7e62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:55:44 compute-0 nova_compute[260935]: 2025-10-11 08:55:44.619 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:55:44 compute-0 nova_compute[260935]: 2025-10-11 08:55:44.619 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Ensure instance console log exists: /var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:55:44 compute-0 nova_compute[260935]: 2025-10-11 08:55:44.620 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:44 compute-0 nova_compute[260935]: 2025-10-11 08:55:44.620 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:44 compute-0 nova_compute[260935]: 2025-10-11 08:55:44.620 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:44 compute-0 nova_compute[260935]: 2025-10-11 08:55:44.639 2 DEBUG nova.network.neutron [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Successfully created port: 7e1e0a2e-111a-44a6-8191-45964dbea604 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:55:45 compute-0 ovn_controller[152945]: 2025-10-11T08:55:45Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:0f:36 10.100.0.9
Oct 11 08:55:45 compute-0 ovn_controller[152945]: 2025-10-11T08:55:45Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:0f:36 10.100.0.9
Oct 11 08:55:45 compute-0 ceph-mon[74313]: pgmap v1545: 321 pgs: 321 active+clean; 135 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 1.8 MiB/s wr, 261 op/s
Oct 11 08:55:45 compute-0 nova_compute[260935]: 2025-10-11 08:55:45.415 2 DEBUG nova.network.neutron [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Successfully updated port: 7e1e0a2e-111a-44a6-8191-45964dbea604 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:55:45 compute-0 nova_compute[260935]: 2025-10-11 08:55:45.433 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "refresh_cache-b297454f-91af-4716-b4f7-6af9f0d7e62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:45 compute-0 nova_compute[260935]: 2025-10-11 08:55:45.434 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquired lock "refresh_cache-b297454f-91af-4716-b4f7-6af9f0d7e62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:45 compute-0 nova_compute[260935]: 2025-10-11 08:55:45.434 2 DEBUG nova.network.neutron [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:55:45 compute-0 nova_compute[260935]: 2025-10-11 08:55:45.583 2 DEBUG nova.network.neutron [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:55:45 compute-0 nova_compute[260935]: 2025-10-11 08:55:45.607 2 DEBUG nova.compute.manager [req-84565fdb-7d79-412c-b3f9-678ce40bdaf2 req-8f22b11e-df39-4c4f-bafd-bfc65333392b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Received event network-changed-7e1e0a2e-111a-44a6-8191-45964dbea604 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:45 compute-0 nova_compute[260935]: 2025-10-11 08:55:45.608 2 DEBUG nova.compute.manager [req-84565fdb-7d79-412c-b3f9-678ce40bdaf2 req-8f22b11e-df39-4c4f-bafd-bfc65333392b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Refreshing instance network info cache due to event network-changed-7e1e0a2e-111a-44a6-8191-45964dbea604. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:55:45 compute-0 nova_compute[260935]: 2025-10-11 08:55:45.609 2 DEBUG oslo_concurrency.lockutils [req-84565fdb-7d79-412c-b3f9-678ce40bdaf2 req-8f22b11e-df39-4c4f-bafd-bfc65333392b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b297454f-91af-4716-b4f7-6af9f0d7e62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 135 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 32 KiB/s wr, 212 op/s
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.451 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "bb58fb30-73c8-457a-9293-2d07d0015a46" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.452 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.479 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.563 2 DEBUG nova.network.neutron [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Updating instance_info_cache with network_info: [{"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.567 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.568 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.578 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.578 2 INFO nova.compute.claims [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.604 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Releasing lock "refresh_cache-b297454f-91af-4716-b4f7-6af9f0d7e62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.604 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Instance network_info: |[{"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.605 2 DEBUG oslo_concurrency.lockutils [req-84565fdb-7d79-412c-b3f9-678ce40bdaf2 req-8f22b11e-df39-4c4f-bafd-bfc65333392b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b297454f-91af-4716-b4f7-6af9f0d7e62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.605 2 DEBUG nova.network.neutron [req-84565fdb-7d79-412c-b3f9-678ce40bdaf2 req-8f22b11e-df39-4c4f-bafd-bfc65333392b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Refreshing network info cache for port 7e1e0a2e-111a-44a6-8191-45964dbea604 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.608 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Start _get_guest_xml network_info=[{"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.613 2 WARNING nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.618 2 DEBUG nova.virt.libvirt.host [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.619 2 DEBUG nova.virt.libvirt.host [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.622 2 DEBUG nova.virt.libvirt.host [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.623 2 DEBUG nova.virt.libvirt.host [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.623 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.623 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.624 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.624 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.624 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.625 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.625 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.625 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.626 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.626 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.626 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.626 2 DEBUG nova.virt.hardware [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.630 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:46 compute-0 nova_compute[260935]: 2025-10-11 08:55:46.814 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:55:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1913167330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.173 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.208 2 DEBUG nova.storage.rbd_utils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image b297454f-91af-4716-b4f7-6af9f0d7e62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.213 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:47 compute-0 ceph-mon[74313]: pgmap v1546: 321 pgs: 321 active+clean; 135 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 32 KiB/s wr, 212 op/s
Oct 11 08:55:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1913167330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:55:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/90154206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.364 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.379 2 DEBUG nova.compute.provider_tree [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.398 2 DEBUG nova.scheduler.client.report [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.424 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.425 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.475 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.476 2 DEBUG nova.network.neutron [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.497 2 INFO nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.518 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.629 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.631 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.631 2 INFO nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Creating image(s)
Oct 11 08:55:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:55:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1754153588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.664 2 DEBUG nova.storage.rbd_utils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image bb58fb30-73c8-457a-9293-2d07d0015a46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.698 2 DEBUG nova.storage.rbd_utils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image bb58fb30-73c8-457a-9293-2d07d0015a46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.726 2 DEBUG nova.storage.rbd_utils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image bb58fb30-73c8-457a-9293-2d07d0015a46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.731 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.777 2 DEBUG nova.policy [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '213e5693e94f44e7950e3dfbca04228a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '93dd4902ce324862a38006da8e06503a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.784 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.787 2 DEBUG nova.virt.libvirt.vif [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-268621522',display_name='tempest-₡-268621522',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--268621522',id=61,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-w76lxkw4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,upd
ated_at=2025-10-11T08:55:43Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=b297454f-91af-4716-b4f7-6af9f0d7e62d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.787 2 DEBUG nova.network.os_vif_util [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.788 2 DEBUG nova.network.os_vif_util [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:a4:99,bridge_name='br-int',has_traffic_filtering=True,id=7e1e0a2e-111a-44a6-8191-45964dbea604,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e1e0a2e-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.790 2 DEBUG nova.objects.instance [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'pci_devices' on Instance uuid b297454f-91af-4716-b4f7-6af9f0d7e62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.808 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:55:47 compute-0 nova_compute[260935]:   <uuid>b297454f-91af-4716-b4f7-6af9f0d7e62d</uuid>
Oct 11 08:55:47 compute-0 nova_compute[260935]:   <name>instance-0000003d</name>
Oct 11 08:55:47 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:55:47 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:55:47 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <nova:name>tempest-₡-268621522</nova:name>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:55:46</nova:creationTime>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:55:47 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:55:47 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:55:47 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:55:47 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:55:47 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:55:47 compute-0 nova_compute[260935]:         <nova:user uuid="ee0e5fedb9fc464eb2a9ac362f5e0749">tempest-ServersTestJSON-101172647-project-member</nova:user>
Oct 11 08:55:47 compute-0 nova_compute[260935]:         <nova:project uuid="d9864fda4f8641d8a9c1509c426cc206">tempest-ServersTestJSON-101172647</nova:project>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:55:47 compute-0 nova_compute[260935]:         <nova:port uuid="7e1e0a2e-111a-44a6-8191-45964dbea604">
Oct 11 08:55:47 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:55:47 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:55:47 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <system>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <entry name="serial">b297454f-91af-4716-b4f7-6af9f0d7e62d</entry>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <entry name="uuid">b297454f-91af-4716-b4f7-6af9f0d7e62d</entry>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     </system>
Oct 11 08:55:47 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:55:47 compute-0 nova_compute[260935]:   <os>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:   </os>
Oct 11 08:55:47 compute-0 nova_compute[260935]:   <features>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:   </features>
Oct 11 08:55:47 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:55:47 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:55:47 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b297454f-91af-4716-b4f7-6af9f0d7e62d_disk">
Oct 11 08:55:47 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       </source>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:55:47 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b297454f-91af-4716-b4f7-6af9f0d7e62d_disk.config">
Oct 11 08:55:47 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       </source>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:55:47 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:45:a4:99"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <target dev="tap7e1e0a2e-11"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d/console.log" append="off"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <video>
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     </video>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:55:47 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:55:47 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:55:47 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:55:47 compute-0 nova_compute[260935]: </domain>
Oct 11 08:55:47 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.809 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Preparing to wait for external event network-vif-plugged-7e1e0a2e-111a-44a6-8191-45964dbea604 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.810 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.810 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.810 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.811 2 DEBUG nova.virt.libvirt.vif [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-268621522',display_name='tempest-₡-268621522',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--268621522',id=61,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-w76lxkw4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:43Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=b297454f-91af-4716-b4f7-6af9f0d7e62d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.812 2 DEBUG nova.network.os_vif_util [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.812 2 DEBUG nova.network.os_vif_util [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:a4:99,bridge_name='br-int',has_traffic_filtering=True,id=7e1e0a2e-111a-44a6-8191-45964dbea604,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e1e0a2e-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.813 2 DEBUG os_vif [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:a4:99,bridge_name='br-int',has_traffic_filtering=True,id=7e1e0a2e-111a-44a6-8191-45964dbea604,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e1e0a2e-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.814 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.814 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.817 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e1e0a2e-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.818 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7e1e0a2e-11, col_values=(('external_ids', {'iface-id': '7e1e0a2e-111a-44a6-8191-45964dbea604', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:a4:99', 'vm-uuid': 'b297454f-91af-4716-b4f7-6af9f0d7e62d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:47 compute-0 NetworkManager[44960]: <info>  [1760172947.8227] manager: (tap7e1e0a2e-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.830 2 INFO os_vif [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:a4:99,bridge_name='br-int',has_traffic_filtering=True,id=7e1e0a2e-111a-44a6-8191-45964dbea604,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e1e0a2e-11')
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.832 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.832 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.833 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.833 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.857 2 DEBUG nova.storage.rbd_utils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image bb58fb30-73c8-457a-9293-2d07d0015a46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.861 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 bb58fb30-73c8-457a-9293-2d07d0015a46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.975 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.976 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.976 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No VIF found with MAC fa:16:3e:45:a4:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:55:47 compute-0 nova_compute[260935]: 2025-10-11 08:55:47.977 2 INFO nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Using config drive
Oct 11 08:55:47 compute-0 ovn_controller[152945]: 2025-10-11T08:55:47Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ef:c3:49 10.100.0.13
Oct 11 08:55:47 compute-0 ovn_controller[152945]: 2025-10-11T08:55:47Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ef:c3:49 10.100.0.13
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.005 2 DEBUG nova.storage.rbd_utils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image b297454f-91af-4716-b4f7-6af9f0d7e62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 246 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 6.1 MiB/s wr, 363 op/s
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.186 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 bb58fb30-73c8-457a-9293-2d07d0015a46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.283 2 DEBUG nova.storage.rbd_utils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] resizing rbd image bb58fb30-73c8-457a-9293-2d07d0015a46_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:55:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/90154206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1754153588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.345 2 DEBUG nova.network.neutron [req-84565fdb-7d79-412c-b3f9-678ce40bdaf2 req-8f22b11e-df39-4c4f-bafd-bfc65333392b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Updated VIF entry in instance network info cache for port 7e1e0a2e-111a-44a6-8191-45964dbea604. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.346 2 DEBUG nova.network.neutron [req-84565fdb-7d79-412c-b3f9-678ce40bdaf2 req-8f22b11e-df39-4c4f-bafd-bfc65333392b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Updating instance_info_cache with network_info: [{"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.401 2 DEBUG oslo_concurrency.lockutils [req-84565fdb-7d79-412c-b3f9-678ce40bdaf2 req-8f22b11e-df39-4c4f-bafd-bfc65333392b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b297454f-91af-4716-b4f7-6af9f0d7e62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.412 2 DEBUG nova.objects.instance [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'migration_context' on Instance uuid bb58fb30-73c8-457a-9293-2d07d0015a46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.426 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.426 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Ensure instance console log exists: /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.427 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.427 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.427 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.543 2 INFO nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Creating config drive at /var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d/disk.config
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.549 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm_h_qpuv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.660 2 DEBUG nova.network.neutron [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Successfully created port: fca0187a-40e2-4312-bb5b-f3a6c59f3c9b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.710 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm_h_qpuv" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.753 2 DEBUG nova.storage.rbd_utils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image b297454f-91af-4716-b4f7-6af9f0d7e62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.759 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d/disk.config b297454f-91af-4716-b4f7-6af9f0d7e62d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.957 2 DEBUG oslo_concurrency.processutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d/disk.config b297454f-91af-4716-b4f7-6af9f0d7e62d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:48 compute-0 nova_compute[260935]: 2025-10-11 08:55:48.959 2 INFO nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Deleting local config drive /var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d/disk.config because it was imported into RBD.
Oct 11 08:55:49 compute-0 kernel: tap7e1e0a2e-11: entered promiscuous mode
Oct 11 08:55:49 compute-0 NetworkManager[44960]: <info>  [1760172949.0258] manager: (tap7e1e0a2e-11): new Tun device (/org/freedesktop/NetworkManager/Devices/250)
Oct 11 08:55:49 compute-0 ovn_controller[152945]: 2025-10-11T08:55:49Z|00527|binding|INFO|Claiming lport 7e1e0a2e-111a-44a6-8191-45964dbea604 for this chassis.
Oct 11 08:55:49 compute-0 ovn_controller[152945]: 2025-10-11T08:55:49Z|00528|binding|INFO|7e1e0a2e-111a-44a6-8191-45964dbea604: Claiming fa:16:3e:45:a4:99 10.100.0.5
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.035 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:a4:99 10.100.0.5'], port_security=['fa:16:3e:45:a4:99 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b297454f-91af-4716-b4f7-6af9f0d7e62d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '2', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=7e1e0a2e-111a-44a6-8191-45964dbea604) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.039 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 7e1e0a2e-111a-44a6-8191-45964dbea604 in datapath e075bdab-78c4-414f-b270-c41d1c82f498 bound to our chassis
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.049 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.065 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5bf80e-a8d7-4a60-9d90-62b8392158f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.066 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape075bdab-71 in ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.068 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape075bdab-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.069 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[df8e9ba5-518b-4907-96e8-485a26ca7b93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:49 compute-0 ovn_controller[152945]: 2025-10-11T08:55:49Z|00529|binding|INFO|Setting lport 7e1e0a2e-111a-44a6-8191-45964dbea604 ovn-installed in OVS
Oct 11 08:55:49 compute-0 ovn_controller[152945]: 2025-10-11T08:55:49Z|00530|binding|INFO|Setting lport 7e1e0a2e-111a-44a6-8191-45964dbea604 up in Southbound
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.071 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e1fcc81e-e9cb-4743-a5b2-732b072a2f93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:49 compute-0 systemd-machined[215705]: New machine qemu-68-instance-0000003d.
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:49 compute-0 systemd[1]: Started Virtual Machine qemu-68-instance-0000003d.
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.088 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc9bf16-8b20-4755-8bf5-a25af93ad88e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:49 compute-0 systemd-udevd[324286]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.106 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6aa683-7044-4950-bfdc-fd2c1aca18ac]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:49 compute-0 NetworkManager[44960]: <info>  [1760172949.1229] device (tap7e1e0a2e-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:55:49 compute-0 NetworkManager[44960]: <info>  [1760172949.1244] device (tap7e1e0a2e-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.144 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9006185d-90af-41ff-a1b6-64616b6099d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.152 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5b3fbaaf-74ae-4d26-9dbe-9b8830d377a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:49 compute-0 NetworkManager[44960]: <info>  [1760172949.1536] manager: (tape075bdab-70): new Veth device (/org/freedesktop/NetworkManager/Devices/251)
Oct 11 08:55:49 compute-0 systemd-udevd[324290]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.201 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3cdfd693-647b-4aec-bc62-911e1e4b2a71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.204 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e4943a1d-270f-4cdc-8a37-7e451e9901f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:49 compute-0 NetworkManager[44960]: <info>  [1760172949.2404] device (tape075bdab-70): carrier: link connected
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.250 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3d1897ac-7a1a-4006-a2fa-1f883304db27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.278 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[75f9a9ff-c70e-4af3-9970-5d10175305b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 36159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324317, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.300 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[463c42de-23fc-42e1-ad23-c96bf188ae21]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:b979'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479394, 'tstamp': 479394}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324318, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:49 compute-0 ceph-mon[74313]: pgmap v1547: 321 pgs: 321 active+clean; 246 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 6.1 MiB/s wr, 363 op/s
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.332 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[068b4d21-c1cb-436f-b248-5a2169d7de7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 36159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324326, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.352 2 DEBUG nova.compute.manager [req-a105ff72-ee38-4075-8802-38737c0d8065 req-19c2e772-6da8-44c3-9b4d-ba4ab3868153 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Received event network-vif-plugged-7e1e0a2e-111a-44a6-8191-45964dbea604 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.352 2 DEBUG oslo_concurrency.lockutils [req-a105ff72-ee38-4075-8802-38737c0d8065 req-19c2e772-6da8-44c3-9b4d-ba4ab3868153 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.353 2 DEBUG oslo_concurrency.lockutils [req-a105ff72-ee38-4075-8802-38737c0d8065 req-19c2e772-6da8-44c3-9b4d-ba4ab3868153 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.354 2 DEBUG oslo_concurrency.lockutils [req-a105ff72-ee38-4075-8802-38737c0d8065 req-19c2e772-6da8-44c3-9b4d-ba4ab3868153 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.354 2 DEBUG nova.compute.manager [req-a105ff72-ee38-4075-8802-38737c0d8065 req-19c2e772-6da8-44c3-9b4d-ba4ab3868153 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Processing event network-vif-plugged-7e1e0a2e-111a-44a6-8191-45964dbea604 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.391 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee2e9c1-a4a1-462c-9995-cb55641966c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.433 2 DEBUG nova.network.neutron [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Successfully updated port: fca0187a-40e2-4312-bb5b-f3a6c59f3c9b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.455 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "refresh_cache-bb58fb30-73c8-457a-9293-2d07d0015a46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.456 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquired lock "refresh_cache-bb58fb30-73c8-457a-9293-2d07d0015a46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.456 2 DEBUG nova.network.neutron [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.506 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[27fe92a1-1e5c-4529-94cd-bda65d4d9259]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.508 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.509 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.509 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape075bdab-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:49 compute-0 kernel: tape075bdab-70: entered promiscuous mode
Oct 11 08:55:49 compute-0 NetworkManager[44960]: <info>  [1760172949.5127] manager: (tape075bdab-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.515 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape075bdab-70, col_values=(('external_ids', {'iface-id': 'b9cf681c-9f4c-4c56-987a-55fa7aa89e1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:49 compute-0 ovn_controller[152945]: 2025-10-11T08:55:49Z|00531|binding|INFO|Releasing lport b9cf681c-9f4c-4c56-987a-55fa7aa89e1a from this chassis (sb_readonly=0)
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.545 2 DEBUG nova.compute.manager [req-88a95b43-b48f-454f-a58c-6f2f93a90840 req-db6047e1-defa-4d7e-aedc-d5938f501a8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received event network-changed-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.547 2 DEBUG nova.compute.manager [req-88a95b43-b48f-454f-a58c-6f2f93a90840 req-db6047e1-defa-4d7e-aedc-d5938f501a8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Refreshing instance network info cache due to event network-changed-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.548 2 DEBUG oslo_concurrency.lockutils [req-88a95b43-b48f-454f-a58c-6f2f93a90840 req-db6047e1-defa-4d7e-aedc-d5938f501a8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-bb58fb30-73c8-457a-9293-2d07d0015a46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.550 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e075bdab-78c4-414f-b270-c41d1c82f498.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e075bdab-78c4-414f-b270-c41d1c82f498.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.551 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eb74b5a9-5604-4f8d-b835-b50ed28a95f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.552 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/e075bdab-78c4-414f-b270-c41d1c82f498.pid.haproxy
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:55:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:49.553 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'env', 'PROCESS_TAG=haproxy-e075bdab-78c4-414f-b270-c41d1c82f498', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e075bdab-78c4-414f-b270-c41d1c82f498.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.574 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.575 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.600 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.631 2 DEBUG nova.network.neutron [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.662 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.663 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.670 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.671 2 INFO nova.compute.claims [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.864 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.980 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.982 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172949.9816473, b297454f-91af-4716-b4f7-6af9f0d7e62d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.984 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] VM Started (Lifecycle Event)
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.990 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.996 2 INFO nova.virt.libvirt.driver [-] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Instance spawned successfully.
Oct 11 08:55:49 compute-0 nova_compute[260935]: 2025-10-11 08:55:49.997 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.008 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 246 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 884 KiB/s rd, 6.0 MiB/s wr, 186 op/s
Oct 11 08:55:50 compute-0 podman[324392]: 2025-10-11 08:55:50.015478115 +0000 UTC m=+0.076959216 container create 7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.028 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.036 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.037 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.038 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.039 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.040 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.041 2 DEBUG nova.virt.libvirt.driver [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:50 compute-0 systemd[1]: Started libpod-conmon-7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b.scope.
Oct 11 08:55:50 compute-0 podman[324392]: 2025-10-11 08:55:49.967070444 +0000 UTC m=+0.028551635 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.063 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.064 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172949.9818032, b297454f-91af-4716-b4f7-6af9f0d7e62d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.064 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] VM Paused (Lifecycle Event)
Oct 11 08:55:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.088 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf198163e725340e667f14571b9dc05439b8f7c35370125da99319b5667780ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.098 2 INFO nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Took 6.18 seconds to spawn the instance on the hypervisor.
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.098 2 DEBUG nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.104 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172949.9852617, b297454f-91af-4716-b4f7-6af9f0d7e62d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.105 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] VM Resumed (Lifecycle Event)
Oct 11 08:55:50 compute-0 podman[324392]: 2025-10-11 08:55:50.112175882 +0000 UTC m=+0.173657033 container init 7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:55:50 compute-0 podman[324392]: 2025-10-11 08:55:50.119209323 +0000 UTC m=+0.180690434 container start 7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.132 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.135 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:55:50 compute-0 neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498[324426]: [NOTICE]   (324430) : New worker (324432) forked
Oct 11 08:55:50 compute-0 neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498[324426]: [NOTICE]   (324430) : Loading success.
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.169 2 INFO nova.compute.manager [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Took 7.18 seconds to build instance.
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.188 2 DEBUG oslo_concurrency.lockutils [None req-8ca70323-0739-4696-8509-b0a3e9587105 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:55:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/815335945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.378 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.385 2 DEBUG nova.compute.provider_tree [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.403 2 DEBUG nova.network.neutron [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Updating instance_info_cache with network_info: [{"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.423 2 DEBUG nova.scheduler.client.report [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.431 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Releasing lock "refresh_cache-bb58fb30-73c8-457a-9293-2d07d0015a46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.432 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Instance network_info: |[{"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.433 2 DEBUG oslo_concurrency.lockutils [req-88a95b43-b48f-454f-a58c-6f2f93a90840 req-db6047e1-defa-4d7e-aedc-d5938f501a8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-bb58fb30-73c8-457a-9293-2d07d0015a46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.434 2 DEBUG nova.network.neutron [req-88a95b43-b48f-454f-a58c-6f2f93a90840 req-db6047e1-defa-4d7e-aedc-d5938f501a8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Refreshing network info cache for port fca0187a-40e2-4312-bb5b-f3a6c59f3c9b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.438 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Start _get_guest_xml network_info=[{"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.445 2 WARNING nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.452 2 DEBUG nova.virt.libvirt.host [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.454 2 DEBUG nova.virt.libvirt.host [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.458 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.459 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.473 2 DEBUG nova.virt.libvirt.host [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.475 2 DEBUG nova.virt.libvirt.host [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.475 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.476 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.477 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.478 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.478 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.479 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.479 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.480 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.481 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.481 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.482 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.482 2 DEBUG nova.virt.hardware [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.488 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.544 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.546 2 DEBUG nova.network.neutron [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.570 2 INFO nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.595 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.728 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.730 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.731 2 INFO nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Creating image(s)
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.767 2 DEBUG nova.storage.rbd_utils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image d80189d8-28e4-440b-8aed-b43c62f59dd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.809 2 DEBUG nova.storage.rbd_utils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image d80189d8-28e4-440b-8aed-b43c62f59dd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.851 2 DEBUG nova.storage.rbd_utils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image d80189d8-28e4-440b-8aed-b43c62f59dd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.857 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.919 2 DEBUG nova.policy [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a04e2908f5a54c8f98bee8d0faf3e658', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3a6a3cc2a54f4a9bafcdc1304f07944b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:55:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:55:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/688212283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.977 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.979 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.980 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:50 compute-0 nova_compute[260935]: 2025-10-11 08:55:50.981 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.026 2 DEBUG nova.storage.rbd_utils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image d80189d8-28e4-440b-8aed-b43c62f59dd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.031 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d80189d8-28e4-440b-8aed-b43c62f59dd7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.072 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.105 2 DEBUG nova.storage.rbd_utils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image bb58fb30-73c8-457a-9293-2d07d0015a46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.110 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:51 compute-0 ceph-mon[74313]: pgmap v1548: 321 pgs: 321 active+clean; 246 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 884 KiB/s rd, 6.0 MiB/s wr, 186 op/s
Oct 11 08:55:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/815335945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:55:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/688212283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.336 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d80189d8-28e4-440b-8aed-b43c62f59dd7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.414 2 DEBUG nova.storage.rbd_utils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] resizing rbd image d80189d8-28e4-440b-8aed-b43c62f59dd7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.490 2 DEBUG nova.compute.manager [req-bbdc1455-a948-4f08-a6d9-f29047c4b732 req-f33010bb-b949-4296-b9ee-b32aa96223f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Received event network-vif-plugged-7e1e0a2e-111a-44a6-8191-45964dbea604 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.491 2 DEBUG oslo_concurrency.lockutils [req-bbdc1455-a948-4f08-a6d9-f29047c4b732 req-f33010bb-b949-4296-b9ee-b32aa96223f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.491 2 DEBUG oslo_concurrency.lockutils [req-bbdc1455-a948-4f08-a6d9-f29047c4b732 req-f33010bb-b949-4296-b9ee-b32aa96223f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.492 2 DEBUG oslo_concurrency.lockutils [req-bbdc1455-a948-4f08-a6d9-f29047c4b732 req-f33010bb-b949-4296-b9ee-b32aa96223f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.492 2 DEBUG nova.compute.manager [req-bbdc1455-a948-4f08-a6d9-f29047c4b732 req-f33010bb-b949-4296-b9ee-b32aa96223f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] No waiting events found dispatching network-vif-plugged-7e1e0a2e-111a-44a6-8191-45964dbea604 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.493 2 WARNING nova.compute.manager [req-bbdc1455-a948-4f08-a6d9-f29047c4b732 req-f33010bb-b949-4296-b9ee-b32aa96223f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Received unexpected event network-vif-plugged-7e1e0a2e-111a-44a6-8191-45964dbea604 for instance with vm_state active and task_state None.
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.532 2 DEBUG nova.objects.instance [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lazy-loading 'migration_context' on Instance uuid d80189d8-28e4-440b-8aed-b43c62f59dd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.546 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.547 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Ensure instance console log exists: /var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.548 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.548 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.549 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:55:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/540591875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.625 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.627 2 DEBUG nova.virt.libvirt.vif [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-254064893',display_name='tempest-DeleteServersTestJSON-server-254064893',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-254064893',id=62,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-k16b35q4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:47Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=bb58fb30-73c8-457a-9293-2d07d0015a46,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.628 2 DEBUG nova.network.os_vif_util [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.629 2 DEBUG nova.network.os_vif_util [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3c:ad,bridge_name='br-int',has_traffic_filtering=True,id=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca0187a-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.630 2 DEBUG nova.objects.instance [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'pci_devices' on Instance uuid bb58fb30-73c8-457a-9293-2d07d0015a46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.655 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:55:51 compute-0 nova_compute[260935]:   <uuid>bb58fb30-73c8-457a-9293-2d07d0015a46</uuid>
Oct 11 08:55:51 compute-0 nova_compute[260935]:   <name>instance-0000003e</name>
Oct 11 08:55:51 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:55:51 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:55:51 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <nova:name>tempest-DeleteServersTestJSON-server-254064893</nova:name>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:55:50</nova:creationTime>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:55:51 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:55:51 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:55:51 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:55:51 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:55:51 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:55:51 compute-0 nova_compute[260935]:         <nova:user uuid="213e5693e94f44e7950e3dfbca04228a">tempest-DeleteServersTestJSON-1019340677-project-member</nova:user>
Oct 11 08:55:51 compute-0 nova_compute[260935]:         <nova:project uuid="93dd4902ce324862a38006da8e06503a">tempest-DeleteServersTestJSON-1019340677</nova:project>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:55:51 compute-0 nova_compute[260935]:         <nova:port uuid="fca0187a-40e2-4312-bb5b-f3a6c59f3c9b">
Oct 11 08:55:51 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:55:51 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:55:51 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <system>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <entry name="serial">bb58fb30-73c8-457a-9293-2d07d0015a46</entry>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <entry name="uuid">bb58fb30-73c8-457a-9293-2d07d0015a46</entry>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     </system>
Oct 11 08:55:51 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:55:51 compute-0 nova_compute[260935]:   <os>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:   </os>
Oct 11 08:55:51 compute-0 nova_compute[260935]:   <features>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:   </features>
Oct 11 08:55:51 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:55:51 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:55:51 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/bb58fb30-73c8-457a-9293-2d07d0015a46_disk">
Oct 11 08:55:51 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       </source>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:55:51 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/bb58fb30-73c8-457a-9293-2d07d0015a46_disk.config">
Oct 11 08:55:51 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       </source>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:55:51 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:9f:3c:ad"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <target dev="tapfca0187a-40"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46/console.log" append="off"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <video>
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     </video>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:55:51 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:55:51 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:55:51 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:55:51 compute-0 nova_compute[260935]: </domain>
Oct 11 08:55:51 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.663 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Preparing to wait for external event network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.663 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.664 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.664 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.665 2 DEBUG nova.virt.libvirt.vif [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-254064893',display_name='tempest-DeleteServersTestJSON-server-254064893',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-254064893',id=62,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-k16b35q4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:47Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=bb58fb30-73c8-457a-9293-2d07d0015a46,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.666 2 DEBUG nova.network.os_vif_util [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.667 2 DEBUG nova.network.os_vif_util [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3c:ad,bridge_name='br-int',has_traffic_filtering=True,id=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca0187a-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.667 2 DEBUG os_vif [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3c:ad,bridge_name='br-int',has_traffic_filtering=True,id=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca0187a-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.669 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.670 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.675 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfca0187a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.675 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfca0187a-40, col_values=(('external_ids', {'iface-id': 'fca0187a-40e2-4312-bb5b-f3a6c59f3c9b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:3c:ad', 'vm-uuid': 'bb58fb30-73c8-457a-9293-2d07d0015a46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:51 compute-0 NetworkManager[44960]: <info>  [1760172951.7142] manager: (tapfca0187a-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.723 2 INFO os_vif [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3c:ad,bridge_name='br-int',has_traffic_filtering=True,id=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca0187a-40')
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.780 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.781 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.781 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No VIF found with MAC fa:16:3e:9f:3c:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.782 2 INFO nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Using config drive
Oct 11 08:55:51 compute-0 nova_compute[260935]: 2025-10-11 08:55:51.815 2 DEBUG nova.storage.rbd_utils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image bb58fb30-73c8-457a-9293-2d07d0015a46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 246 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 884 KiB/s rd, 6.0 MiB/s wr, 186 op/s
Oct 11 08:55:52 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/540591875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:52 compute-0 nova_compute[260935]: 2025-10-11 08:55:52.908 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172937.905254, f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:52 compute-0 nova_compute[260935]: 2025-10-11 08:55:52.908 2 INFO nova.compute.manager [-] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] VM Stopped (Lifecycle Event)
Oct 11 08:55:52 compute-0 nova_compute[260935]: 2025-10-11 08:55:52.944 2 DEBUG nova.compute.manager [None req-23a8afcb-0934-4002-a9da-a0babaa5bbd1 - - - - - -] [instance: f6a731f1-14a4-4df0-b3e2-65c8c4cb16e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:52 compute-0 nova_compute[260935]: 2025-10-11 08:55:52.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:53 compute-0 ceph-mon[74313]: pgmap v1549: 321 pgs: 321 active+clean; 246 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 884 KiB/s rd, 6.0 MiB/s wr, 186 op/s
Oct 11 08:55:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:55:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 339 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 9.6 MiB/s wr, 315 op/s
Oct 11 08:55:54 compute-0 nova_compute[260935]: 2025-10-11 08:55:54.138 2 INFO nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Creating config drive at /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46/disk.config
Oct 11 08:55:54 compute-0 nova_compute[260935]: 2025-10-11 08:55:54.148 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwrb3r71t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:54 compute-0 nova_compute[260935]: 2025-10-11 08:55:54.209 2 DEBUG nova.network.neutron [req-88a95b43-b48f-454f-a58c-6f2f93a90840 req-db6047e1-defa-4d7e-aedc-d5938f501a8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Updated VIF entry in instance network info cache for port fca0187a-40e2-4312-bb5b-f3a6c59f3c9b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:55:54 compute-0 nova_compute[260935]: 2025-10-11 08:55:54.210 2 DEBUG nova.network.neutron [req-88a95b43-b48f-454f-a58c-6f2f93a90840 req-db6047e1-defa-4d7e-aedc-d5938f501a8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Updating instance_info_cache with network_info: [{"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:54 compute-0 nova_compute[260935]: 2025-10-11 08:55:54.231 2 DEBUG oslo_concurrency.lockutils [req-88a95b43-b48f-454f-a58c-6f2f93a90840 req-db6047e1-defa-4d7e-aedc-d5938f501a8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-bb58fb30-73c8-457a-9293-2d07d0015a46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:55:54 compute-0 nova_compute[260935]: 2025-10-11 08:55:54.315 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwrb3r71t" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:54 compute-0 nova_compute[260935]: 2025-10-11 08:55:54.342 2 DEBUG nova.storage.rbd_utils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image bb58fb30-73c8-457a-9293-2d07d0015a46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:54 compute-0 nova_compute[260935]: 2025-10-11 08:55:54.347 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46/disk.config bb58fb30-73c8-457a-9293-2d07d0015a46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:54 compute-0 nova_compute[260935]: 2025-10-11 08:55:54.641 2 DEBUG oslo_concurrency.processutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46/disk.config bb58fb30-73c8-457a-9293-2d07d0015a46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:54 compute-0 nova_compute[260935]: 2025-10-11 08:55:54.642 2 INFO nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Deleting local config drive /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46/disk.config because it was imported into RBD.
Oct 11 08:55:54 compute-0 nova_compute[260935]: 2025-10-11 08:55:54.739 2 DEBUG nova.network.neutron [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Successfully created port: 7bc371fa-443a-4188-ace2-2837e3709136 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:55:54 compute-0 kernel: tapfca0187a-40: entered promiscuous mode
Oct 11 08:55:54 compute-0 nova_compute[260935]: 2025-10-11 08:55:54.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:54 compute-0 NetworkManager[44960]: <info>  [1760172954.7487] manager: (tapfca0187a-40): new Tun device (/org/freedesktop/NetworkManager/Devices/254)
Oct 11 08:55:54 compute-0 ovn_controller[152945]: 2025-10-11T08:55:54Z|00532|binding|INFO|Claiming lport fca0187a-40e2-4312-bb5b-f3a6c59f3c9b for this chassis.
Oct 11 08:55:54 compute-0 ovn_controller[152945]: 2025-10-11T08:55:54Z|00533|binding|INFO|fca0187a-40e2-4312-bb5b-f3a6c59f3c9b: Claiming fa:16:3e:9f:3c:ad 10.100.0.9
Oct 11 08:55:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.756 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:3c:ad 10.100.0.9'], port_security=['fa:16:3e:9f:3c:ad 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'bb58fb30-73c8-457a-9293-2d07d0015a46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93dd4902ce324862a38006da8e06503a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b773900e-3df7-4cb6-b9b0-3d240ff499b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69794a2f-48ab-4a0d-8725-f4a7f57172dd, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:55:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.759 162815 INFO neutron.agent.ovn.metadata.agent [-] Port fca0187a-40e2-4312-bb5b-f3a6c59f3c9b in datapath 2cb96d57-a5e9-4b38-b10e-68187a5bf82f bound to our chassis
Oct 11 08:55:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.762 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 08:55:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.780 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[225eb295-30c4-4766-b127-9baa5c900d30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.781 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2cb96d57-a1 in ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:55:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.784 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2cb96d57-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:55:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.784 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[89603c7a-9409-4b6e-98e4-b531384f9eb9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.786 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7b6ff1f0-9a93-42b5-89d5-7d6a20714a81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:55:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:55:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:55:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:55:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:55:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:55:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.798 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6be61996-8414-4c51-8dc8-0ee4806d32ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:54 compute-0 ovn_controller[152945]: 2025-10-11T08:55:54Z|00534|binding|INFO|Setting lport fca0187a-40e2-4312-bb5b-f3a6c59f3c9b ovn-installed in OVS
Oct 11 08:55:54 compute-0 ovn_controller[152945]: 2025-10-11T08:55:54Z|00535|binding|INFO|Setting lport fca0187a-40e2-4312-bb5b-f3a6c59f3c9b up in Southbound
Oct 11 08:55:54 compute-0 nova_compute[260935]: 2025-10-11 08:55:54.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.818 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[59dcbd26-60e3-4a46-849d-4ee8d47e1b77]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:54 compute-0 systemd-machined[215705]: New machine qemu-69-instance-0000003e.
Oct 11 08:55:54 compute-0 systemd-udevd[324748]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:55:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:55:54
Oct 11 08:55:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:55:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:55:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'images', 'volumes', 'cephfs.cephfs.data', 'backups', '.mgr', 'vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta']
Oct 11 08:55:54 compute-0 systemd[1]: Started Virtual Machine qemu-69-instance-0000003e.
Oct 11 08:55:54 compute-0 NetworkManager[44960]: <info>  [1760172954.8426] device (tapfca0187a-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:55:54 compute-0 NetworkManager[44960]: <info>  [1760172954.8446] device (tapfca0187a-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:55:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:55:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.861 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ad2cc58f-322c-406d-ba58-80c3d8d93864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.867 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2989965a-54c5-432c-b42a-b89b41c36a39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:54 compute-0 NetworkManager[44960]: <info>  [1760172954.8690] manager: (tap2cb96d57-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/255)
Oct 11 08:55:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.926 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[24bcd841-6866-406a-8966-acca9dca3ad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.931 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[581f770f-4217-4da6-be8d-106897641cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:54 compute-0 NetworkManager[44960]: <info>  [1760172954.9756] device (tap2cb96d57-a0): carrier: link connected
Oct 11 08:55:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:54.987 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f2d1ebbe-894a-41e3-b77d-e72cbc5c436e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.011 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cad05475-62c8-43b8-885a-43b720e90458]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2cb96d57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c9:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479967, 'reachable_time': 39345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324778, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.035 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[507fe403-f5e3-4f7d-a7e3-e5bc4c957fd3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:c9b5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479967, 'tstamp': 479967}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324779, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.064 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b345842f-6491-48b3-805c-d56258a72a1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2cb96d57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c9:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479967, 'reachable_time': 39345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324780, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.111 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5efe06a6-a6f1-4609-83cf-8c321c91ae4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.210 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e741887e-80dc-4bcb-acc4-7c36373cf036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.212 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb96d57-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.212 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.213 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cb96d57-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:55 compute-0 nova_compute[260935]: 2025-10-11 08:55:55.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:55 compute-0 kernel: tap2cb96d57-a0: entered promiscuous mode
Oct 11 08:55:55 compute-0 NetworkManager[44960]: <info>  [1760172955.2659] manager: (tap2cb96d57-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Oct 11 08:55:55 compute-0 nova_compute[260935]: 2025-10-11 08:55:55.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.268 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2cb96d57-a0, col_values=(('external_ids', {'iface-id': 'a11e0a08-d1ab-4bff-901b-632484cc0e21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:55:55 compute-0 nova_compute[260935]: 2025-10-11 08:55:55.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:55 compute-0 ovn_controller[152945]: 2025-10-11T08:55:55Z|00536|binding|INFO|Releasing lport a11e0a08-d1ab-4bff-901b-632484cc0e21 from this chassis (sb_readonly=0)
Oct 11 08:55:55 compute-0 nova_compute[260935]: 2025-10-11 08:55:55.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.271 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.272 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2b8d29b5-5683-4ffc-9937-5ac9cf3600f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.273 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:55:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:55:55.274 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'env', 'PROCESS_TAG=haproxy-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:55:55 compute-0 nova_compute[260935]: 2025-10-11 08:55:55.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:55 compute-0 ceph-mon[74313]: pgmap v1550: 321 pgs: 321 active+clean; 339 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 9.6 MiB/s wr, 315 op/s
Oct 11 08:55:55 compute-0 nova_compute[260935]: 2025-10-11 08:55:55.501 2 DEBUG nova.compute.manager [req-898263a4-78b9-402e-b7a7-3b54219ffbdf req-0690edbd-2184-4116-8ec0-de6835c98af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received event network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:55 compute-0 nova_compute[260935]: 2025-10-11 08:55:55.502 2 DEBUG oslo_concurrency.lockutils [req-898263a4-78b9-402e-b7a7-3b54219ffbdf req-0690edbd-2184-4116-8ec0-de6835c98af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:55 compute-0 nova_compute[260935]: 2025-10-11 08:55:55.502 2 DEBUG oslo_concurrency.lockutils [req-898263a4-78b9-402e-b7a7-3b54219ffbdf req-0690edbd-2184-4116-8ec0-de6835c98af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:55 compute-0 nova_compute[260935]: 2025-10-11 08:55:55.503 2 DEBUG oslo_concurrency.lockutils [req-898263a4-78b9-402e-b7a7-3b54219ffbdf req-0690edbd-2184-4116-8ec0-de6835c98af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:55 compute-0 nova_compute[260935]: 2025-10-11 08:55:55.503 2 DEBUG nova.compute.manager [req-898263a4-78b9-402e-b7a7-3b54219ffbdf req-0690edbd-2184-4116-8ec0-de6835c98af8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Processing event network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:55:55 compute-0 ceph-mgr[74605]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2898047278
Oct 11 08:55:55 compute-0 nova_compute[260935]: 2025-10-11 08:55:55.792 2 DEBUG nova.network.neutron [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Successfully updated port: 7bc371fa-443a-4188-ace2-2837e3709136 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:55:55 compute-0 podman[324856]: 2025-10-11 08:55:55.716469715 +0000 UTC m=+0.037455349 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:55:55 compute-0 nova_compute[260935]: 2025-10-11 08:55:55.807 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:55 compute-0 nova_compute[260935]: 2025-10-11 08:55:55.807 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquired lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:55 compute-0 nova_compute[260935]: 2025-10-11 08:55:55.808 2 DEBUG nova.network.neutron [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:55:55 compute-0 podman[324856]: 2025-10-11 08:55:55.928340496 +0000 UTC m=+0.249326100 container create dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 08:55:55 compute-0 nova_compute[260935]: 2025-10-11 08:55:55.986 2 DEBUG nova.network.neutron [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:55:56 compute-0 systemd[1]: Started libpod-conmon-dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d.scope.
Oct 11 08:55:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 339 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 9.6 MiB/s wr, 279 op/s
Oct 11 08:55:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cb611fd86e7d9e583b61223ad1390858d6dcde7efb0b54c1de6f4a299db85d1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.071 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.072 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172956.0704134, bb58fb30-73c8-457a-9293-2d07d0015a46 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.072 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] VM Started (Lifecycle Event)
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.097 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.103 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.104 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.108 2 INFO nova.virt.libvirt.driver [-] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Instance spawned successfully.
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.108 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.129 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.130 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172956.070837, bb58fb30-73c8-457a-9293-2d07d0015a46 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.130 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] VM Paused (Lifecycle Event)
Oct 11 08:55:56 compute-0 podman[324856]: 2025-10-11 08:55:56.137377917 +0000 UTC m=+0.458363531 container init dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.142 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.143 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:56 compute-0 podman[324856]: 2025-10-11 08:55:56.144218462 +0000 UTC m=+0.465204056 container start dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.144 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.144 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.144 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.145 2 DEBUG nova.virt.libvirt.driver [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:55:56 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[324871]: [NOTICE]   (324875) : New worker (324877) forked
Oct 11 08:55:56 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[324871]: [NOTICE]   (324875) : Loading success.
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.168 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.180 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172956.075883, bb58fb30-73c8-457a-9293-2d07d0015a46 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.181 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] VM Resumed (Lifecycle Event)
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.202 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.206 2 INFO nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Took 8.58 seconds to spawn the instance on the hypervisor.
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.207 2 DEBUG nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.211 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.246 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.261 2 INFO nova.compute.manager [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Took 9.72 seconds to build instance.
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.280 2 DEBUG oslo_concurrency.lockutils [None req-bac1875a-406e-4765-a23a-d37439cc32b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:56 compute-0 sshd-session[324806]: Invalid user gera from 152.32.213.170 port 41100
Oct 11 08:55:56 compute-0 sshd-session[324806]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 08:55:56 compute-0 sshd-session[324806]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 08:55:56 compute-0 nova_compute[260935]: 2025-10-11 08:55:56.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:57 compute-0 ceph-mon[74313]: pgmap v1551: 321 pgs: 321 active+clean; 339 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 9.6 MiB/s wr, 279 op/s
Oct 11 08:55:57 compute-0 nova_compute[260935]: 2025-10-11 08:55:57.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:55:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 339 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 9.6 MiB/s wr, 353 op/s
Oct 11 08:55:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:55:58 compute-0 nova_compute[260935]: 2025-10-11 08:55:58.430 2 DEBUG nova.compute.manager [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received event network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:58 compute-0 nova_compute[260935]: 2025-10-11 08:55:58.431 2 DEBUG oslo_concurrency.lockutils [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:55:58 compute-0 nova_compute[260935]: 2025-10-11 08:55:58.431 2 DEBUG oslo_concurrency.lockutils [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:55:58 compute-0 nova_compute[260935]: 2025-10-11 08:55:58.432 2 DEBUG oslo_concurrency.lockutils [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:55:58 compute-0 nova_compute[260935]: 2025-10-11 08:55:58.432 2 DEBUG nova.compute.manager [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] No waiting events found dispatching network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:55:58 compute-0 nova_compute[260935]: 2025-10-11 08:55:58.433 2 WARNING nova.compute.manager [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received unexpected event network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b for instance with vm_state active and task_state None.
Oct 11 08:55:58 compute-0 nova_compute[260935]: 2025-10-11 08:55:58.433 2 DEBUG nova.compute.manager [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-changed-7bc371fa-443a-4188-ace2-2837e3709136 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:55:58 compute-0 nova_compute[260935]: 2025-10-11 08:55:58.434 2 DEBUG nova.compute.manager [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Refreshing instance network info cache due to event network-changed-7bc371fa-443a-4188-ace2-2837e3709136. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:55:58 compute-0 nova_compute[260935]: 2025-10-11 08:55:58.434 2 DEBUG oslo_concurrency.lockutils [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:55:58 compute-0 ceph-mon[74313]: pgmap v1552: 321 pgs: 321 active+clean; 339 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 9.6 MiB/s wr, 353 op/s
Oct 11 08:55:59 compute-0 sshd-session[324806]: Failed password for invalid user gera from 152.32.213.170 port 41100 ssh2
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.124 2 DEBUG nova.network.neutron [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Updating instance_info_cache with network_info: [{"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.161 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Releasing lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.161 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Instance network_info: |[{"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.162 2 DEBUG oslo_concurrency.lockutils [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.162 2 DEBUG nova.network.neutron [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Refreshing network info cache for port 7bc371fa-443a-4188-ace2-2837e3709136 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.167 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Start _get_guest_xml network_info=[{"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.175 2 WARNING nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.182 2 DEBUG nova.virt.libvirt.host [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.183 2 DEBUG nova.virt.libvirt.host [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.188 2 DEBUG nova.virt.libvirt.host [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.189 2 DEBUG nova.virt.libvirt.host [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.189 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.189 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.190 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.190 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.190 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.190 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.190 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.191 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.191 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.191 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.191 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.192 2 DEBUG nova.virt.hardware [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.194 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.384 2 INFO nova.compute.manager [None req-894bc8a0-d0e1-4167-b303-3bb53f41cf69 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Pausing
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.385 2 DEBUG nova.objects.instance [None req-894bc8a0-d0e1-4167-b303-3bb53f41cf69 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'flavor' on Instance uuid bb58fb30-73c8-457a-9293-2d07d0015a46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.423 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172959.4231842, bb58fb30-73c8-457a-9293-2d07d0015a46 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.424 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] VM Paused (Lifecycle Event)
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.426 2 DEBUG nova.compute.manager [None req-894bc8a0-d0e1-4167-b303-3bb53f41cf69 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.456 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.468 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:55:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:55:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1603621127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.669 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.706 2 DEBUG nova.storage.rbd_utils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image d80189d8-28e4-440b-8aed-b43c62f59dd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:55:59 compute-0 nova_compute[260935]: 2025-10-11 08:55:59.710 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:55:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1603621127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:55:59 compute-0 sshd-session[324806]: Received disconnect from 152.32.213.170 port 41100:11: Bye Bye [preauth]
Oct 11 08:55:59 compute-0 sshd-session[324806]: Disconnected from invalid user gera 152.32.213.170 port 41100 [preauth]
Oct 11 08:56:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 339 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Oct 11 08:56:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:56:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2039512440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.181 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.184 2 DEBUG nova.virt.libvirt.vif [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-136419371',display_name='tempest-SecurityGroupsTestJSON-server-136419371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-136419371',id=63,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3a6a3cc2a54f4a9bafcdc1304f07944b',ramdisk_id='',reservation_id='r-qhzm8q7c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-2086563292',owner_user_name='tempest-SecurityGroupsTestJSON-2086563292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:50Z,user_data=None,user_id='a04e2908f5a54c8f98bee8d0faf3e658',uuid=d80189d8-28e4-440b-8aed-b43c62f59dd7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.185 2 DEBUG nova.network.os_vif_util [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converting VIF {"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.186 2 DEBUG nova.network.os_vif_util [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.189 2 DEBUG nova.objects.instance [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lazy-loading 'pci_devices' on Instance uuid d80189d8-28e4-440b-8aed-b43c62f59dd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.212 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:56:00 compute-0 nova_compute[260935]:   <uuid>d80189d8-28e4-440b-8aed-b43c62f59dd7</uuid>
Oct 11 08:56:00 compute-0 nova_compute[260935]:   <name>instance-0000003f</name>
Oct 11 08:56:00 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:56:00 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:56:00 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <nova:name>tempest-SecurityGroupsTestJSON-server-136419371</nova:name>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:55:59</nova:creationTime>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:56:00 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:56:00 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:56:00 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:56:00 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:56:00 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:56:00 compute-0 nova_compute[260935]:         <nova:user uuid="a04e2908f5a54c8f98bee8d0faf3e658">tempest-SecurityGroupsTestJSON-2086563292-project-member</nova:user>
Oct 11 08:56:00 compute-0 nova_compute[260935]:         <nova:project uuid="3a6a3cc2a54f4a9bafcdc1304f07944b">tempest-SecurityGroupsTestJSON-2086563292</nova:project>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:56:00 compute-0 nova_compute[260935]:         <nova:port uuid="7bc371fa-443a-4188-ace2-2837e3709136">
Oct 11 08:56:00 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:56:00 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:56:00 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <system>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <entry name="serial">d80189d8-28e4-440b-8aed-b43c62f59dd7</entry>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <entry name="uuid">d80189d8-28e4-440b-8aed-b43c62f59dd7</entry>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     </system>
Oct 11 08:56:00 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:56:00 compute-0 nova_compute[260935]:   <os>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:   </os>
Oct 11 08:56:00 compute-0 nova_compute[260935]:   <features>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:   </features>
Oct 11 08:56:00 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:56:00 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:56:00 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/d80189d8-28e4-440b-8aed-b43c62f59dd7_disk">
Oct 11 08:56:00 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       </source>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:56:00 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/d80189d8-28e4-440b-8aed-b43c62f59dd7_disk.config">
Oct 11 08:56:00 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       </source>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:56:00 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:76:54:8b"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <target dev="tap7bc371fa-44"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7/console.log" append="off"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <video>
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     </video>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:56:00 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:56:00 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:56:00 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:56:00 compute-0 nova_compute[260935]: </domain>
Oct 11 08:56:00 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.214 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Preparing to wait for external event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.215 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.215 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.215 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.216 2 DEBUG nova.virt.libvirt.vif [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-136419371',display_name='tempest-SecurityGroupsTestJSON-server-136419371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-136419371',id=63,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3a6a3cc2a54f4a9bafcdc1304f07944b',ramdisk_id='',reservation_id='r-qhzm8q7c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-2086563292',owner_user_name='tempest-SecurityGroupsTestJSON-2086563292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:55:50Z,user_data=None,user_id='a04e2908f5a54c8f98bee8d0faf3e658',uuid=d80189d8-28e4-440b-8aed-b43c62f59dd7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.217 2 DEBUG nova.network.os_vif_util [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converting VIF {"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.217 2 DEBUG nova.network.os_vif_util [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.218 2 DEBUG os_vif [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.219 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.220 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.225 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bc371fa-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.226 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7bc371fa-44, col_values=(('external_ids', {'iface-id': '7bc371fa-443a-4188-ace2-2837e3709136', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:54:8b', 'vm-uuid': 'd80189d8-28e4-440b-8aed-b43c62f59dd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:00 compute-0 NetworkManager[44960]: <info>  [1760172960.2295] manager: (tap7bc371fa-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.240 2 INFO os_vif [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44')
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.314 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.315 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.315 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] No VIF found with MAC fa:16:3e:76:54:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.316 2 INFO nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Using config drive
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.369 2 DEBUG nova.storage.rbd_utils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image d80189d8-28e4-440b-8aed-b43c62f59dd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:00 compute-0 podman[324950]: 2025-10-11 08:56:00.404411659 +0000 UTC m=+0.108302289 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.407 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.408 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.427 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.467 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "bb58fb30-73c8-457a-9293-2d07d0015a46" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.468 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.468 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.468 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.469 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.470 2 INFO nova.compute.manager [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Terminating instance
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.472 2 DEBUG nova.compute.manager [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:56:00 compute-0 kernel: tapfca0187a-40 (unregistering): left promiscuous mode
Oct 11 08:56:00 compute-0 NetworkManager[44960]: <info>  [1760172960.5246] device (tapfca0187a-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.525 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.526 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.537 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.537 2 INFO nova.compute.claims [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:56:00 compute-0 ovn_controller[152945]: 2025-10-11T08:56:00Z|00537|binding|INFO|Releasing lport fca0187a-40e2-4312-bb5b-f3a6c59f3c9b from this chassis (sb_readonly=0)
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:00 compute-0 ovn_controller[152945]: 2025-10-11T08:56:00Z|00538|binding|INFO|Setting lport fca0187a-40e2-4312-bb5b-f3a6c59f3c9b down in Southbound
Oct 11 08:56:00 compute-0 ovn_controller[152945]: 2025-10-11T08:56:00Z|00539|binding|INFO|Removing iface tapfca0187a-40 ovn-installed in OVS
Oct 11 08:56:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:00.591 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:3c:ad 10.100.0.9'], port_security=['fa:16:3e:9f:3c:ad 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'bb58fb30-73c8-457a-9293-2d07d0015a46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93dd4902ce324862a38006da8e06503a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b773900e-3df7-4cb6-b9b0-3d240ff499b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69794a2f-48ab-4a0d-8725-f4a7f57172dd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:56:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:00.593 162815 INFO neutron.agent.ovn.metadata.agent [-] Port fca0187a-40e2-4312-bb5b-f3a6c59f3c9b in datapath 2cb96d57-a5e9-4b38-b10e-68187a5bf82f unbound from our chassis
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:00.594 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:56:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:00.595 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[50f32637-85d8-4843-a990-cc824bf4e3a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:00.597 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f namespace which is not needed anymore
Oct 11 08:56:00 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:00 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Consumed 4.264s CPU time.
Oct 11 08:56:00 compute-0 systemd-machined[215705]: Machine qemu-69-instance-0000003e terminated.
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.734 2 INFO nova.virt.libvirt.driver [-] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Instance destroyed successfully.
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.734 2 DEBUG nova.objects.instance [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'resources' on Instance uuid bb58fb30-73c8-457a-9293-2d07d0015a46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.754 2 DEBUG nova.virt.libvirt.vif [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:55:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-254064893',display_name='tempest-DeleteServersTestJSON-server-254064893',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-254064893',id=62,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:55:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-k16b35q4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:55:59Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=bb58fb30-73c8-457a-9293-2d07d0015a46,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.755 2 DEBUG nova.network.os_vif_util [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "address": "fa:16:3e:9f:3c:ad", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca0187a-40", "ovs_interfaceid": "fca0187a-40e2-4312-bb5b-f3a6c59f3c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.756 2 DEBUG nova.network.os_vif_util [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3c:ad,bridge_name='br-int',has_traffic_filtering=True,id=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca0187a-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.756 2 DEBUG os_vif [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3c:ad,bridge_name='br-int',has_traffic_filtering=True,id=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca0187a-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.759 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfca0187a-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.778 2 INFO os_vif [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3c:ad,bridge_name='br-int',has_traffic_filtering=True,id=fca0187a-40e2-4312-bb5b-f3a6c59f3c9b,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca0187a-40')
Oct 11 08:56:00 compute-0 ceph-mon[74313]: pgmap v1553: 321 pgs: 321 active+clean; 339 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Oct 11 08:56:00 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2039512440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:00 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[324871]: [NOTICE]   (324875) : haproxy version is 2.8.14-c23fe91
Oct 11 08:56:00 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[324871]: [NOTICE]   (324875) : path to executable is /usr/sbin/haproxy
Oct 11 08:56:00 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[324871]: [WARNING]  (324875) : Exiting Master process...
Oct 11 08:56:00 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[324871]: [WARNING]  (324875) : Exiting Master process...
Oct 11 08:56:00 compute-0 nova_compute[260935]: 2025-10-11 08:56:00.840 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:00 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[324871]: [ALERT]    (324875) : Current worker (324877) exited with code 143 (Terminated)
Oct 11 08:56:00 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[324871]: [WARNING]  (324875) : All workers exited. Exiting... (0)
Oct 11 08:56:00 compute-0 systemd[1]: libpod-dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d.scope: Deactivated successfully.
Oct 11 08:56:00 compute-0 podman[325022]: 2025-10-11 08:56:00.851847348 +0000 UTC m=+0.087823376 container died dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:56:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d-userdata-shm.mount: Deactivated successfully.
Oct 11 08:56:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cb611fd86e7d9e583b61223ad1390858d6dcde7efb0b54c1de6f4a299db85d1-merged.mount: Deactivated successfully.
Oct 11 08:56:00 compute-0 podman[325022]: 2025-10-11 08:56:00.923223053 +0000 UTC m=+0.159199081 container cleanup dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:56:00 compute-0 systemd[1]: libpod-conmon-dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d.scope: Deactivated successfully.
Oct 11 08:56:01 compute-0 podman[325071]: 2025-10-11 08:56:01.003613205 +0000 UTC m=+0.050294045 container remove dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.012 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e495a16a-7cec-47b5-b61e-fe608cf7bd47]: (4, ('Sat Oct 11 08:56:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f (dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d)\ndd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d\nSat Oct 11 08:56:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f (dd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d)\ndd8176e1614dd73fb708e1ba61d19ab31c61d189e1f5f2968e41437bb8c3b43d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.014 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f77bcc-08a3-457d-a392-c016df913edd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.016 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb96d57-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:01 compute-0 kernel: tap2cb96d57-a0: left promiscuous mode
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.044 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee483bc-2cf9-4b6e-a032-d61fcadef8e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.067 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[590a3de4-ddf7-432a-b91e-6137beac5011]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.069 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[76562b0c-2af0-4e5c-8fec-9cf77911e9cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.091 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8002bc5a-7982-4606-ad38-a1ef48ab2018]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479955, 'reachable_time': 37254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325107, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d2cb96d57\x2da5e9\x2d4b38\x2db10e\x2d68187a5bf82f.mount: Deactivated successfully.
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.093 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.093 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[de80a685-e6cb-443d-a2c3-e11692e900d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.248 2 INFO nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Creating config drive at /var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7/disk.config
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.254 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr5npneld execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.311 2 DEBUG nova.compute.manager [req-69241d06-4257-434e-aeae-032fccc301c4 req-b5af871e-f2ca-4bd1-9c38-7588a37d8920 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received event network-vif-unplugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.312 2 DEBUG oslo_concurrency.lockutils [req-69241d06-4257-434e-aeae-032fccc301c4 req-b5af871e-f2ca-4bd1-9c38-7588a37d8920 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.312 2 DEBUG oslo_concurrency.lockutils [req-69241d06-4257-434e-aeae-032fccc301c4 req-b5af871e-f2ca-4bd1-9c38-7588a37d8920 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.312 2 DEBUG oslo_concurrency.lockutils [req-69241d06-4257-434e-aeae-032fccc301c4 req-b5af871e-f2ca-4bd1-9c38-7588a37d8920 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.313 2 DEBUG nova.compute.manager [req-69241d06-4257-434e-aeae-032fccc301c4 req-b5af871e-f2ca-4bd1-9c38-7588a37d8920 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] No waiting events found dispatching network-vif-unplugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.313 2 DEBUG nova.compute.manager [req-69241d06-4257-434e-aeae-032fccc301c4 req-b5af871e-f2ca-4bd1-9c38-7588a37d8920 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received event network-vif-unplugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.357 2 INFO nova.virt.libvirt.driver [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Deleting instance files /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46_del
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.357 2 INFO nova.virt.libvirt.driver [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Deletion of /var/lib/nova/instances/bb58fb30-73c8-457a-9293-2d07d0015a46_del complete
Oct 11 08:56:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:56:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4288032189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.389 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.395 2 DEBUG nova.compute.provider_tree [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.419 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr5npneld" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.444 2 DEBUG nova.storage.rbd_utils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] rbd image d80189d8-28e4-440b-8aed-b43c62f59dd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.448 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7/disk.config d80189d8-28e4-440b-8aed-b43c62f59dd7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.497 2 DEBUG nova.scheduler.client.report [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:56:01 compute-0 ovn_controller[152945]: 2025-10-11T08:56:01Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:a4:99 10.100.0.5
Oct 11 08:56:01 compute-0 ovn_controller[152945]: 2025-10-11T08:56:01Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:a4:99 10.100.0.5
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.595 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.596 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.603 2 DEBUG nova.network.neutron [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Updated VIF entry in instance network info cache for port 7bc371fa-443a-4188-ace2-2837e3709136. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.604 2 DEBUG nova.network.neutron [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Updating instance_info_cache with network_info: [{"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.614 2 INFO nova.compute.manager [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Took 1.14 seconds to destroy the instance on the hypervisor.
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.615 2 DEBUG oslo.service.loopingcall [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.615 2 DEBUG nova.compute.manager [-] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.616 2 DEBUG nova.network.neutron [-] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.632 2 DEBUG oslo_concurrency.lockutils [req-84e9f6e3-53a4-4647-8643-9ba779a0e05b req-27763275-15a9-4e2c-9e98-d9e854adae65 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.633 2 DEBUG oslo_concurrency.processutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7/disk.config d80189d8-28e4-440b-8aed-b43c62f59dd7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.634 2 INFO nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Deleting local config drive /var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7/disk.config because it was imported into RBD.
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.663 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.663 2 DEBUG nova.network.neutron [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.680 2 INFO nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.709 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:56:01 compute-0 kernel: tap7bc371fa-44: entered promiscuous mode
Oct 11 08:56:01 compute-0 NetworkManager[44960]: <info>  [1760172961.7150] manager: (tap7bc371fa-44): new Tun device (/org/freedesktop/NetworkManager/Devices/258)
Oct 11 08:56:01 compute-0 systemd-udevd[324994]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:56:01 compute-0 NetworkManager[44960]: <info>  [1760172961.7418] device (tap7bc371fa-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:56:01 compute-0 NetworkManager[44960]: <info>  [1760172961.7426] device (tap7bc371fa-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:56:01 compute-0 ovn_controller[152945]: 2025-10-11T08:56:01Z|00540|binding|INFO|Claiming lport 7bc371fa-443a-4188-ace2-2837e3709136 for this chassis.
Oct 11 08:56:01 compute-0 ovn_controller[152945]: 2025-10-11T08:56:01Z|00541|binding|INFO|7bc371fa-443a-4188-ace2-2837e3709136: Claiming fa:16:3e:76:54:8b 10.100.0.14
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:01 compute-0 ovn_controller[152945]: 2025-10-11T08:56:01Z|00542|binding|INFO|Setting lport 7bc371fa-443a-4188-ace2-2837e3709136 ovn-installed in OVS
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:01 compute-0 ovn_controller[152945]: 2025-10-11T08:56:01Z|00543|binding|INFO|Setting lport 7bc371fa-443a-4188-ace2-2837e3709136 up in Southbound
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.778 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:54:8b 10.100.0.14'], port_security=['fa:16:3e:76:54:8b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd80189d8-28e4-440b-8aed-b43c62f59dd7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a6a3cc2a54f4a9bafcdc1304f07944b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e3ab43aa-8c9d-4578-afa6-1b85bfb9e682', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b76a516-f507-4a4f-aa31-3047cedf7049, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=7bc371fa-443a-4188-ace2-2837e3709136) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.780 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 7bc371fa-443a-4188-ace2-2837e3709136 in datapath 338aeaf8-43d5-4292-a8fa-8952dd3c508b bound to our chassis
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.783 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 338aeaf8-43d5-4292-a8fa-8952dd3c508b
Oct 11 08:56:01 compute-0 systemd-machined[215705]: New machine qemu-70-instance-0000003f.
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.806 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[74d177bc-cd78-454a-b493-502560520107]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:01 compute-0 systemd[1]: Started Virtual Machine qemu-70-instance-0000003f.
Oct 11 08:56:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4288032189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.855 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d982f1e6-a125-46b9-8417-5e5ca9d5b4b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.860 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6941660d-9abb-4cac-89ba-ab712aadacf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.863 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.864 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.865 2 INFO nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Creating image(s)
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.894 2 DEBUG nova.storage.rbd_utils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.908 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1769ca-f784-422b-b1e0-29efe52e6407]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.932 2 DEBUG nova.storage.rbd_utils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.937 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[90b0e028-0e04-4dfa-8349-0c04661eab26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap338aeaf8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:2a:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477778, 'reachable_time': 18372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325209, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.964 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0a027dc9-f656-4f71-87fc-a88d017157b0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap338aeaf8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477793, 'tstamp': 477793}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325213, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap338aeaf8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477797, 'tstamp': 477797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325213, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.967 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap338aeaf8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.972 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap338aeaf8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.972 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.973 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap338aeaf8-40, col_values=(('external_ids', {'iface-id': 'ebd712a5-5601-47dd-8cfc-89d2ce6b1035'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:01.973 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.972 2 DEBUG nova.storage.rbd_utils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:01 compute-0 nova_compute[260935]: 2025-10-11 08:56:01.978 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 339 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.075 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.076 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.077 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.077 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.106 2 DEBUG nova.storage.rbd_utils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.111 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.401 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.439 2 DEBUG nova.policy [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee0e5fedb9fc464eb2a9ac362f5e0749', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.480 2 DEBUG nova.storage.rbd_utils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] resizing rbd image ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.575 2 DEBUG nova.objects.instance [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'migration_context' on Instance uuid ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.595 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.596 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Ensure instance console log exists: /var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.596 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.597 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.597 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.796 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172962.796125, d80189d8-28e4-440b-8aed-b43c62f59dd7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.797 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] VM Started (Lifecycle Event)
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.819 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.826 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172962.796411, d80189d8-28e4-440b-8aed-b43c62f59dd7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.827 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] VM Paused (Lifecycle Event)
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.846 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:02 compute-0 ceph-mon[74313]: pgmap v1554: 321 pgs: 321 active+clean; 339 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.852 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:56:02 compute-0 nova_compute[260935]: 2025-10-11 08:56:02.874 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.242 2 DEBUG nova.compute.manager [req-4d0d3f90-c073-4842-b58b-3e375c01ae29 req-15728aff-c117-4337-a759-c78e3b024be6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.243 2 DEBUG oslo_concurrency.lockutils [req-4d0d3f90-c073-4842-b58b-3e375c01ae29 req-15728aff-c117-4337-a759-c78e3b024be6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.243 2 DEBUG oslo_concurrency.lockutils [req-4d0d3f90-c073-4842-b58b-3e375c01ae29 req-15728aff-c117-4337-a759-c78e3b024be6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.243 2 DEBUG oslo_concurrency.lockutils [req-4d0d3f90-c073-4842-b58b-3e375c01ae29 req-15728aff-c117-4337-a759-c78e3b024be6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.243 2 DEBUG nova.compute.manager [req-4d0d3f90-c073-4842-b58b-3e375c01ae29 req-15728aff-c117-4337-a759-c78e3b024be6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Processing event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.244 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.250 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172963.2499874, d80189d8-28e4-440b-8aed-b43c62f59dd7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.251 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] VM Resumed (Lifecycle Event)
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.253 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.261 2 INFO nova.virt.libvirt.driver [-] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Instance spawned successfully.
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.261 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.289 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.302 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.330 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.332 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.332 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.333 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.334 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.335 2 DEBUG nova.virt.libvirt.driver [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.343 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:56:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.367 2 DEBUG nova.compute.manager [req-f6218389-546a-492a-a239-f2f26974a1b5 req-f64282cc-88c9-47a6-9ac7-ab14e16fa4b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received event network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.368 2 DEBUG oslo_concurrency.lockutils [req-f6218389-546a-492a-a239-f2f26974a1b5 req-f64282cc-88c9-47a6-9ac7-ab14e16fa4b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.368 2 DEBUG oslo_concurrency.lockutils [req-f6218389-546a-492a-a239-f2f26974a1b5 req-f64282cc-88c9-47a6-9ac7-ab14e16fa4b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.369 2 DEBUG oslo_concurrency.lockutils [req-f6218389-546a-492a-a239-f2f26974a1b5 req-f64282cc-88c9-47a6-9ac7-ab14e16fa4b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.369 2 DEBUG nova.compute.manager [req-f6218389-546a-492a-a239-f2f26974a1b5 req-f64282cc-88c9-47a6-9ac7-ab14e16fa4b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] No waiting events found dispatching network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.370 2 WARNING nova.compute.manager [req-f6218389-546a-492a-a239-f2f26974a1b5 req-f64282cc-88c9-47a6-9ac7-ab14e16fa4b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received unexpected event network-vif-plugged-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b for instance with vm_state paused and task_state deleting.
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.395 2 INFO nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Took 12.67 seconds to spawn the instance on the hypervisor.
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.396 2 DEBUG nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.463 2 INFO nova.compute.manager [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Took 13.82 seconds to build instance.
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.480 2 DEBUG oslo_concurrency.lockutils [None req-7c81477d-e48f-41e4-8384-df4c9e7e75c2 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.548 2 DEBUG nova.network.neutron [-] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.565 2 INFO nova.compute.manager [-] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Took 1.95 seconds to deallocate network for instance.
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.605 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.606 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:03.630 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:56:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:03.632 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.696 2 DEBUG nova.network.neutron [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Successfully created port: 92faeec9-cc08-45c5-84b3-7191f39c6339 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:56:03 compute-0 nova_compute[260935]: 2025-10-11 08:56:03.741 2 DEBUG oslo_concurrency.processutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 372 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 7.5 MiB/s wr, 322 op/s
Oct 11 08:56:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:56:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1608763001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:04 compute-0 nova_compute[260935]: 2025-10-11 08:56:04.226 2 DEBUG oslo_concurrency.processutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:04 compute-0 nova_compute[260935]: 2025-10-11 08:56:04.238 2 DEBUG nova.compute.provider_tree [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:56:04 compute-0 nova_compute[260935]: 2025-10-11 08:56:04.256 2 DEBUG nova.scheduler.client.report [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:56:04 compute-0 nova_compute[260935]: 2025-10-11 08:56:04.274 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:04 compute-0 nova_compute[260935]: 2025-10-11 08:56:04.304 2 INFO nova.scheduler.client.report [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Deleted allocations for instance bb58fb30-73c8-457a-9293-2d07d0015a46
Oct 11 08:56:04 compute-0 nova_compute[260935]: 2025-10-11 08:56:04.366 2 DEBUG oslo_concurrency.lockutils [None req-8f7cc2a4-e125-420d-9ac6-934dcb8fa0bc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "bb58fb30-73c8-457a-9293-2d07d0015a46" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:04 compute-0 nova_compute[260935]: 2025-10-11 08:56:04.560 2 DEBUG nova.network.neutron [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Successfully updated port: 92faeec9-cc08-45c5-84b3-7191f39c6339 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:56:04 compute-0 nova_compute[260935]: 2025-10-11 08:56:04.576 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "refresh_cache-ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:04 compute-0 nova_compute[260935]: 2025-10-11 08:56:04.577 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquired lock "refresh_cache-ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:04 compute-0 nova_compute[260935]: 2025-10-11 08:56:04.577 2 DEBUG nova.network.neutron [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029680962875236047 of space, bias 1.0, pg target 0.8904288862570814 quantized to 32 (current 32)
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:56:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:56:04 compute-0 nova_compute[260935]: 2025-10-11 08:56:04.791 2 DEBUG nova.network.neutron [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:56:05 compute-0 ceph-mon[74313]: pgmap v1555: 321 pgs: 321 active+clean; 372 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 7.5 MiB/s wr, 322 op/s
Oct 11 08:56:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1608763001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.332 2 DEBUG nova.compute.manager [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.333 2 DEBUG oslo_concurrency.lockutils [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.333 2 DEBUG oslo_concurrency.lockutils [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.334 2 DEBUG oslo_concurrency.lockutils [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.334 2 DEBUG nova.compute.manager [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] No waiting events found dispatching network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.334 2 WARNING nova.compute.manager [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received unexpected event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 for instance with vm_state active and task_state None.
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.335 2 DEBUG nova.compute.manager [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Received event network-vif-deleted-fca0187a-40e2-4312-bb5b-f3a6c59f3c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.335 2 DEBUG nova.compute.manager [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Received event network-changed-92faeec9-cc08-45c5-84b3-7191f39c6339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.336 2 DEBUG nova.compute.manager [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Refreshing instance network info cache due to event network-changed-92faeec9-cc08-45c5-84b3-7191f39c6339. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.336 2 DEBUG oslo_concurrency.lockutils [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.608 2 DEBUG nova.compute.manager [req-c92a476e-f938-4c1f-9dde-8b8636a25a87 req-fb0c2822-4d8e-4609-be2b-f2cfbfa206db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-changed-7bc371fa-443a-4188-ace2-2837e3709136 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.609 2 DEBUG nova.compute.manager [req-c92a476e-f938-4c1f-9dde-8b8636a25a87 req-fb0c2822-4d8e-4609-be2b-f2cfbfa206db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Refreshing instance network info cache due to event network-changed-7bc371fa-443a-4188-ace2-2837e3709136. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.609 2 DEBUG oslo_concurrency.lockutils [req-c92a476e-f938-4c1f-9dde-8b8636a25a87 req-fb0c2822-4d8e-4609-be2b-f2cfbfa206db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.610 2 DEBUG oslo_concurrency.lockutils [req-c92a476e-f938-4c1f-9dde-8b8636a25a87 req-fb0c2822-4d8e-4609-be2b-f2cfbfa206db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.610 2 DEBUG nova.network.neutron [req-c92a476e-f938-4c1f-9dde-8b8636a25a87 req-fb0c2822-4d8e-4609-be2b-f2cfbfa206db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Refreshing network info cache for port 7bc371fa-443a-4188-ace2-2837e3709136 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.724 2 DEBUG nova.network.neutron [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Updating instance_info_cache with network_info: [{"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.743 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Releasing lock "refresh_cache-ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.744 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Instance network_info: |[{"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.744 2 DEBUG oslo_concurrency.lockutils [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.745 2 DEBUG nova.network.neutron [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Refreshing network info cache for port 92faeec9-cc08-45c5-84b3-7191f39c6339 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.751 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Start _get_guest_xml network_info=[{"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.759 2 WARNING nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.767 2 DEBUG nova.virt.libvirt.host [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.768 2 DEBUG nova.virt.libvirt.host [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.774 2 DEBUG nova.virt.libvirt.host [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.775 2 DEBUG nova.virt.libvirt.host [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.775 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.775 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.777 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.777 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.778 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.778 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.778 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.779 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.779 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.779 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.780 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.780 2 DEBUG nova.virt.hardware [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:56:05 compute-0 nova_compute[260935]: 2025-10-11 08:56:05.785 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:05 compute-0 podman[325409]: 2025-10-11 08:56:05.81889378 +0000 UTC m=+0.113018754 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct 11 08:56:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 372 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 194 op/s
Oct 11 08:56:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:56:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1208195544' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.357 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.389 2 DEBUG nova.storage.rbd_utils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.396 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:56:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3796411927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.879 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.881 2 DEBUG nova.virt.libvirt.vif [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-865090442',display_name='tempest-ServersTestJSON-server-865090442',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-865090442',id=64,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-9tcw9lxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:56:01Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.882 2 DEBUG nova.network.os_vif_util [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.884 2 DEBUG nova.network.os_vif_util [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:da:61,bridge_name='br-int',has_traffic_filtering=True,id=92faeec9-cc08-45c5-84b3-7191f39c6339,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92faeec9-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.886 2 DEBUG nova.objects.instance [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'pci_devices' on Instance uuid ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.905 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:56:06 compute-0 nova_compute[260935]:   <uuid>ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc</uuid>
Oct 11 08:56:06 compute-0 nova_compute[260935]:   <name>instance-00000040</name>
Oct 11 08:56:06 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:56:06 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:56:06 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersTestJSON-server-865090442</nova:name>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:56:05</nova:creationTime>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:56:06 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:56:06 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:56:06 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:56:06 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:56:06 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:56:06 compute-0 nova_compute[260935]:         <nova:user uuid="ee0e5fedb9fc464eb2a9ac362f5e0749">tempest-ServersTestJSON-101172647-project-member</nova:user>
Oct 11 08:56:06 compute-0 nova_compute[260935]:         <nova:project uuid="d9864fda4f8641d8a9c1509c426cc206">tempest-ServersTestJSON-101172647</nova:project>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:56:06 compute-0 nova_compute[260935]:         <nova:port uuid="92faeec9-cc08-45c5-84b3-7191f39c6339">
Oct 11 08:56:06 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:56:06 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:56:06 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <system>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <entry name="serial">ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc</entry>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <entry name="uuid">ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc</entry>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     </system>
Oct 11 08:56:06 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:56:06 compute-0 nova_compute[260935]:   <os>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:   </os>
Oct 11 08:56:06 compute-0 nova_compute[260935]:   <features>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:   </features>
Oct 11 08:56:06 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:56:06 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:56:06 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk">
Oct 11 08:56:06 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       </source>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:56:06 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk.config">
Oct 11 08:56:06 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       </source>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:56:06 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:22:da:61"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <target dev="tap92faeec9-cc"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc/console.log" append="off"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <video>
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     </video>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:56:06 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:56:06 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:56:06 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:56:06 compute-0 nova_compute[260935]: </domain>
Oct 11 08:56:06 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.908 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Preparing to wait for external event network-vif-plugged-92faeec9-cc08-45c5-84b3-7191f39c6339 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.909 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.910 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.910 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.912 2 DEBUG nova.virt.libvirt.vif [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:55:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-865090442',display_name='tempest-ServersTestJSON-server-865090442',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-865090442',id=64,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-9tcw9lxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:56:01Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.912 2 DEBUG nova.network.os_vif_util [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.913 2 DEBUG nova.network.os_vif_util [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:da:61,bridge_name='br-int',has_traffic_filtering=True,id=92faeec9-cc08-45c5-84b3-7191f39c6339,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92faeec9-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.914 2 DEBUG os_vif [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:da:61,bridge_name='br-int',has_traffic_filtering=True,id=92faeec9-cc08-45c5-84b3-7191f39c6339,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92faeec9-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.917 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.917 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.923 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92faeec9-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.924 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap92faeec9-cc, col_values=(('external_ids', {'iface-id': '92faeec9-cc08-45c5-84b3-7191f39c6339', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:da:61', 'vm-uuid': 'ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:06 compute-0 NetworkManager[44960]: <info>  [1760172966.9286] manager: (tap92faeec9-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:06 compute-0 nova_compute[260935]: 2025-10-11 08:56:06.934 2 INFO os_vif [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:da:61,bridge_name='br-int',has_traffic_filtering=True,id=92faeec9-cc08-45c5-84b3-7191f39c6339,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92faeec9-cc')
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.009 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.010 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.011 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No VIF found with MAC fa:16:3e:22:da:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.012 2 INFO nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Using config drive
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.045 2 DEBUG nova.storage.rbd_utils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.054 2 DEBUG nova.network.neutron [req-c92a476e-f938-4c1f-9dde-8b8636a25a87 req-fb0c2822-4d8e-4609-be2b-f2cfbfa206db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Updated VIF entry in instance network info cache for port 7bc371fa-443a-4188-ace2-2837e3709136. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.055 2 DEBUG nova.network.neutron [req-c92a476e-f938-4c1f-9dde-8b8636a25a87 req-fb0c2822-4d8e-4609-be2b-f2cfbfa206db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Updating instance_info_cache with network_info: [{"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.085 2 DEBUG oslo_concurrency.lockutils [req-c92a476e-f938-4c1f-9dde-8b8636a25a87 req-fb0c2822-4d8e-4609-be2b-f2cfbfa206db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:07 compute-0 ceph-mon[74313]: pgmap v1556: 321 pgs: 321 active+clean; 372 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 194 op/s
Oct 11 08:56:07 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1208195544' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:07 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3796411927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.092 2 DEBUG oslo_concurrency.lockutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.093 2 DEBUG oslo_concurrency.lockutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.093 2 INFO nova.compute.manager [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Rebooting instance
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.109 2 DEBUG oslo_concurrency.lockutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.109 2 DEBUG oslo_concurrency.lockutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquired lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.110 2 DEBUG nova.network.neutron [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.200 2 DEBUG nova.network.neutron [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Updated VIF entry in instance network info cache for port 92faeec9-cc08-45c5-84b3-7191f39c6339. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.201 2 DEBUG nova.network.neutron [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Updating instance_info_cache with network_info: [{"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.214 2 DEBUG oslo_concurrency.lockutils [req-5c8abb19-9bf1-423b-a8f1-ce714c8eaf2c req-fa35f099-bd91-4ef1-9e66-0837fab88803 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.359 2 INFO nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Creating config drive at /var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc/disk.config
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.368 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkb5dx60x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.520 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkb5dx60x" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.580 2 DEBUG nova.storage.rbd_utils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.586 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc/disk.config ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.745 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.746 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.763 2 DEBUG nova.compute.manager [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.768 2 DEBUG oslo_concurrency.processutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc/disk.config ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.770 2 INFO nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Deleting local config drive /var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc/disk.config because it was imported into RBD.
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.837 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.837 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.848 2 DEBUG nova.virt.hardware [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.849 2 INFO nova.compute.claims [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:56:07 compute-0 kernel: tap92faeec9-cc: entered promiscuous mode
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:07 compute-0 ovn_controller[152945]: 2025-10-11T08:56:07Z|00544|binding|INFO|Claiming lport 92faeec9-cc08-45c5-84b3-7191f39c6339 for this chassis.
Oct 11 08:56:07 compute-0 ovn_controller[152945]: 2025-10-11T08:56:07Z|00545|binding|INFO|92faeec9-cc08-45c5-84b3-7191f39c6339: Claiming fa:16:3e:22:da:61 10.100.0.10
Oct 11 08:56:07 compute-0 NetworkManager[44960]: <info>  [1760172967.8572] manager: (tap92faeec9-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Oct 11 08:56:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:07.864 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:da:61 10.100.0.10'], port_security=['fa:16:3e:22:da:61 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '2', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=92faeec9-cc08-45c5-84b3-7191f39c6339) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:56:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:07.866 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 92faeec9-cc08-45c5-84b3-7191f39c6339 in datapath e075bdab-78c4-414f-b270-c41d1c82f498 bound to our chassis
Oct 11 08:56:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:07.870 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 08:56:07 compute-0 ovn_controller[152945]: 2025-10-11T08:56:07Z|00546|binding|INFO|Setting lport 92faeec9-cc08-45c5-84b3-7191f39c6339 ovn-installed in OVS
Oct 11 08:56:07 compute-0 ovn_controller[152945]: 2025-10-11T08:56:07Z|00547|binding|INFO|Setting lport 92faeec9-cc08-45c5-84b3-7191f39c6339 up in Southbound
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:07 compute-0 nova_compute[260935]: 2025-10-11 08:56:07.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:07.905 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[67c46097-e63e-47bf-86f0-746a4b43a0f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:07 compute-0 systemd-machined[215705]: New machine qemu-71-instance-00000040.
Oct 11 08:56:07 compute-0 systemd[1]: Started Virtual Machine qemu-71-instance-00000040.
Oct 11 08:56:07 compute-0 systemd-udevd[325565]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:56:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:07.951 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3d2261-8ef1-4f5a-b9c4-719eef8d4202]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:07.955 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[813431b6-052f-4592-88b6-394b569fcadf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:07 compute-0 NetworkManager[44960]: <info>  [1760172967.9748] device (tap92faeec9-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:56:07 compute-0 NetworkManager[44960]: <info>  [1760172967.9760] device (tap92faeec9-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:56:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:07.998 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7f5207-e4a4-41c5-9ddb-9130761783bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 372 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 259 op/s
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.021 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd6bff6-2063-4f71-9237-afd9cfa950bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 36159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325575, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.043 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb85dd3-8815-4bf1-9e64-63d233627953]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479414, 'tstamp': 479414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325576, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479419, 'tstamp': 479419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325576, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.045 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.048 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape075bdab-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.048 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.049 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape075bdab-70, col_values=(('external_ids', {'iface-id': 'b9cf681c-9f4c-4c56-987a-55fa7aa89e1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.050 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.075 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.323 2 DEBUG nova.network.neutron [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Updating instance_info_cache with network_info: [{"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.343 2 DEBUG oslo_concurrency.lockutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Releasing lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.346 2 DEBUG nova.compute.manager [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.351 2 DEBUG nova.compute.manager [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.404 2 INFO nova.compute.manager [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] instance snapshotting
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.406 2 DEBUG nova.objects.instance [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'flavor' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.415 2 DEBUG nova.compute.manager [req-47020f6e-a698-48f4-a0ea-68caed07017e req-938018d0-0965-47b6-9336-66d5048a4745 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Received event network-vif-plugged-92faeec9-cc08-45c5-84b3-7191f39c6339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.415 2 DEBUG oslo_concurrency.lockutils [req-47020f6e-a698-48f4-a0ea-68caed07017e req-938018d0-0965-47b6-9336-66d5048a4745 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.415 2 DEBUG oslo_concurrency.lockutils [req-47020f6e-a698-48f4-a0ea-68caed07017e req-938018d0-0965-47b6-9336-66d5048a4745 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.416 2 DEBUG oslo_concurrency.lockutils [req-47020f6e-a698-48f4-a0ea-68caed07017e req-938018d0-0965-47b6-9336-66d5048a4745 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.416 2 DEBUG nova.compute.manager [req-47020f6e-a698-48f4-a0ea-68caed07017e req-938018d0-0965-47b6-9336-66d5048a4745 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Processing event network-vif-plugged-92faeec9-cc08-45c5-84b3-7191f39c6339 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:56:08 compute-0 kernel: tap7bc371fa-44 (unregistering): left promiscuous mode
Oct 11 08:56:08 compute-0 NetworkManager[44960]: <info>  [1760172968.5072] device (tap7bc371fa-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:08 compute-0 ovn_controller[152945]: 2025-10-11T08:56:08Z|00548|binding|INFO|Releasing lport 7bc371fa-443a-4188-ace2-2837e3709136 from this chassis (sb_readonly=0)
Oct 11 08:56:08 compute-0 ovn_controller[152945]: 2025-10-11T08:56:08Z|00549|binding|INFO|Setting lport 7bc371fa-443a-4188-ace2-2837e3709136 down in Southbound
Oct 11 08:56:08 compute-0 ovn_controller[152945]: 2025-10-11T08:56:08Z|00550|binding|INFO|Removing iface tap7bc371fa-44 ovn-installed in OVS
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.526 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:54:8b 10.100.0.14'], port_security=['fa:16:3e:76:54:8b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd80189d8-28e4-440b-8aed-b43c62f59dd7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a6a3cc2a54f4a9bafcdc1304f07944b', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e0385938-baf4-4634-ae20-542a58f27831 e3ab43aa-8c9d-4578-afa6-1b85bfb9e682', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b76a516-f507-4a4f-aa31-3047cedf7049, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=7bc371fa-443a-4188-ace2-2837e3709136) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.527 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 7bc371fa-443a-4188-ace2-2837e3709136 in datapath 338aeaf8-43d5-4292-a8fa-8952dd3c508b unbound from our chassis
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.531 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 338aeaf8-43d5-4292-a8fa-8952dd3c508b
Oct 11 08:56:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:56:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3488982851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.554 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f8c950-5ac2-4625-9bb2-b859184bf84f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.558 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.565 2 DEBUG nova.compute.provider_tree [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:56:08 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Oct 11 08:56:08 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003f.scope: Consumed 6.217s CPU time.
Oct 11 08:56:08 compute-0 systemd-machined[215705]: Machine qemu-70-instance-0000003f terminated.
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.592 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bb333247-b690-4bd0-98af-dd6aec411cf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.595 2 DEBUG nova.scheduler.client.report [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.598 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[aa54d482-825a-4247-b5be-4db72bd739ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.621 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.622 2 DEBUG nova.compute.manager [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.634 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4b5e09-d30a-4e93-bf57-0fda9a7156cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.653 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd259d4-1020-497d-a9b5-4b82122c4910]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap338aeaf8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:2a:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477778, 'reachable_time': 18372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325650, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.674 2 DEBUG nova.compute.manager [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.674 2 DEBUG nova.network.neutron [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.679 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[caab2cef-4f0e-4912-b225-b6dd2f505c39]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap338aeaf8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477793, 'tstamp': 477793}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325651, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap338aeaf8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477797, 'tstamp': 477797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325651, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.681 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap338aeaf8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.690 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap338aeaf8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.691 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.691 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap338aeaf8-40, col_values=(('external_ids', {'iface-id': 'ebd712a5-5601-47dd-8cfc-89d2ce6b1035'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:08.692 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.697 2 INFO nova.virt.libvirt.driver [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Beginning live snapshot process
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.701 2 INFO nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.705 2 INFO nova.virt.libvirt.driver [-] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Instance destroyed successfully.
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.705 2 DEBUG nova.objects.instance [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lazy-loading 'resources' on Instance uuid d80189d8-28e4-440b-8aed-b43c62f59dd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.720 2 DEBUG nova.virt.libvirt.vif [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-136419371',display_name='tempest-SecurityGroupsTestJSON-server-136419371',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-136419371',id=63,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:56:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3a6a3cc2a54f4a9bafcdc1304f07944b',ramdisk_id='',reservation_id='r-qhzm8q7c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-2086563292',owner_user_name='tempest-SecurityGroupsTestJSON-2086563292-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:56:08Z,user_data=None,user_id='a04e2908f5a54c8f98bee8d0faf3e658',uuid=d80189d8-28e4-440b-8aed-b43c62f59dd7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.720 2 DEBUG nova.network.os_vif_util [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converting VIF {"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.723 2 DEBUG nova.network.os_vif_util [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.723 2 DEBUG os_vif [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.726 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bc371fa-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.728 2 DEBUG nova.compute.manager [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.734 2 DEBUG nova.compute.manager [req-96ef74b7-58ac-40bf-a536-545d2b8094e5 req-8037edb4-b3da-45a8-8dcd-1dedd54f00c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-vif-unplugged-7bc371fa-443a-4188-ace2-2837e3709136 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.734 2 DEBUG oslo_concurrency.lockutils [req-96ef74b7-58ac-40bf-a536-545d2b8094e5 req-8037edb4-b3da-45a8-8dcd-1dedd54f00c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.734 2 DEBUG oslo_concurrency.lockutils [req-96ef74b7-58ac-40bf-a536-545d2b8094e5 req-8037edb4-b3da-45a8-8dcd-1dedd54f00c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.735 2 DEBUG oslo_concurrency.lockutils [req-96ef74b7-58ac-40bf-a536-545d2b8094e5 req-8037edb4-b3da-45a8-8dcd-1dedd54f00c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.735 2 DEBUG nova.compute.manager [req-96ef74b7-58ac-40bf-a536-545d2b8094e5 req-8037edb4-b3da-45a8-8dcd-1dedd54f00c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] No waiting events found dispatching network-vif-unplugged-7bc371fa-443a-4188-ace2-2837e3709136 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.735 2 WARNING nova.compute.manager [req-96ef74b7-58ac-40bf-a536-545d2b8094e5 req-8037edb4-b3da-45a8-8dcd-1dedd54f00c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received unexpected event network-vif-unplugged-7bc371fa-443a-4188-ace2-2837e3709136 for instance with vm_state active and task_state reboot_started_hard.
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.736 2 INFO os_vif [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44')
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.742 2 DEBUG nova.virt.libvirt.driver [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Start _get_guest_xml network_info=[{"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.746 2 WARNING nova.virt.libvirt.driver [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.752 2 DEBUG nova.virt.libvirt.host [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.753 2 DEBUG nova.virt.libvirt.host [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.759 2 DEBUG nova.virt.libvirt.host [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.760 2 DEBUG nova.virt.libvirt.host [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.760 2 DEBUG nova.virt.libvirt.driver [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.760 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.761 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.761 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.761 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.762 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.762 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.762 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.762 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.763 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.763 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.763 2 DEBUG nova.virt.hardware [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.764 2 DEBUG nova.objects.instance [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lazy-loading 'vcpu_model' on Instance uuid d80189d8-28e4-440b-8aed-b43c62f59dd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.794 2 DEBUG oslo_concurrency.processutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.838 2 DEBUG nova.compute.manager [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.841 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.841 2 INFO nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Creating image(s)
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.865 2 DEBUG nova.storage.rbd_utils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.888 2 DEBUG nova.storage.rbd_utils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.911 2 DEBUG nova.storage.rbd_utils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:08 compute-0 nova_compute[260935]: 2025-10-11 08:56:08.916 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.008 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172968.908331, ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.009 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] VM Started (Lifecycle Event)
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.015 2 DEBUG nova.policy [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '213e5693e94f44e7950e3dfbca04228a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '93dd4902ce324862a38006da8e06503a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.019 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.021 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.030 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.031 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.031 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.051 2 DEBUG nova.storage.rbd_utils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.054 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.093 2 DEBUG nova.virt.libvirt.imagebackend [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.098 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:56:09 compute-0 ceph-mon[74313]: pgmap v1557: 321 pgs: 321 active+clean; 372 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 259 op/s
Oct 11 08:56:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3488982851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.100 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.107 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.110 2 INFO nova.virt.libvirt.driver [-] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Instance spawned successfully.
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.110 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.137 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.138 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172968.9086003, ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.138 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] VM Paused (Lifecycle Event)
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.159 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.160 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.160 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.161 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.162 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.163 2 DEBUG nova.virt.libvirt.driver [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.179 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.185 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172969.0239458, ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.186 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] VM Resumed (Lifecycle Event)
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.221 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.226 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.273 2 INFO nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Took 7.41 seconds to spawn the instance on the hypervisor.
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.274 2 DEBUG nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.280 2 DEBUG nova.storage.rbd_utils [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] creating snapshot(afee575312834d4e959990ab1d1ecca1) on rbd image(98d8ebd6-0917-49cf-8efc-a245486424bc_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.327 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.338 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:56:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2913637587' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.406 2 DEBUG oslo_concurrency.processutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.445 2 DEBUG oslo_concurrency.processutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.501 2 DEBUG nova.storage.rbd_utils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] resizing rbd image 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.544 2 INFO nova.compute.manager [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Took 9.05 seconds to build instance.
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.566 2 DEBUG oslo_concurrency.lockutils [None req-8e0d93a6-27b1-4601-bc37-27e62a0e782b ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.611 2 DEBUG nova.network.neutron [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Successfully created port: 37653b55-c083-4108-a42f-4bbe7778f058 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.621 2 DEBUG nova.objects.instance [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'migration_context' on Instance uuid 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.636 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.636 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Ensure instance console log exists: /var/lib/nova/instances/62c9b5d7-f22e-4738-b2e6-7c53fcb968ea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.636 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.637 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.637 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:56:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/394974751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.981 2 DEBUG oslo_concurrency.processutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.983 2 DEBUG nova.virt.libvirt.vif [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-136419371',display_name='tempest-SecurityGroupsTestJSON-server-136419371',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-136419371',id=63,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:56:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3a6a3cc2a54f4a9bafcdc1304f07944b',ramdisk_id='',reservation_id='r-qhzm8q7c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-2086563292',owner_user_name='tempest-SecurityGroupsTestJSON-2086563292-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:56:08Z,user_data=None,user_id='a04e2908f5a54c8f98bee8d0faf3e658',uuid=d80189d8-28e4-440b-8aed-b43c62f59dd7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.984 2 DEBUG nova.network.os_vif_util [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converting VIF {"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.985 2 DEBUG nova.network.os_vif_util [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:09 compute-0 nova_compute[260935]: 2025-10-11 08:56:09.987 2 DEBUG nova.objects.instance [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lazy-loading 'pci_devices' on Instance uuid d80189d8-28e4-440b-8aed-b43c62f59dd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.005 2 DEBUG nova.virt.libvirt.driver [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:56:10 compute-0 nova_compute[260935]:   <uuid>d80189d8-28e4-440b-8aed-b43c62f59dd7</uuid>
Oct 11 08:56:10 compute-0 nova_compute[260935]:   <name>instance-0000003f</name>
Oct 11 08:56:10 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:56:10 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:56:10 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <nova:name>tempest-SecurityGroupsTestJSON-server-136419371</nova:name>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:56:08</nova:creationTime>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:56:10 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:56:10 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:56:10 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:56:10 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:56:10 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:56:10 compute-0 nova_compute[260935]:         <nova:user uuid="a04e2908f5a54c8f98bee8d0faf3e658">tempest-SecurityGroupsTestJSON-2086563292-project-member</nova:user>
Oct 11 08:56:10 compute-0 nova_compute[260935]:         <nova:project uuid="3a6a3cc2a54f4a9bafcdc1304f07944b">tempest-SecurityGroupsTestJSON-2086563292</nova:project>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:56:10 compute-0 nova_compute[260935]:         <nova:port uuid="7bc371fa-443a-4188-ace2-2837e3709136">
Oct 11 08:56:10 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:56:10 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:56:10 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <system>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <entry name="serial">d80189d8-28e4-440b-8aed-b43c62f59dd7</entry>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <entry name="uuid">d80189d8-28e4-440b-8aed-b43c62f59dd7</entry>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     </system>
Oct 11 08:56:10 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:56:10 compute-0 nova_compute[260935]:   <os>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:   </os>
Oct 11 08:56:10 compute-0 nova_compute[260935]:   <features>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:   </features>
Oct 11 08:56:10 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:56:10 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:56:10 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/d80189d8-28e4-440b-8aed-b43c62f59dd7_disk">
Oct 11 08:56:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       </source>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:56:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/d80189d8-28e4-440b-8aed-b43c62f59dd7_disk.config">
Oct 11 08:56:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       </source>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:56:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:76:54:8b"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <target dev="tap7bc371fa-44"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7/console.log" append="off"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <video>
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     </video>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <input type="keyboard" bus="usb"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:56:10 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:56:10 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:56:10 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:56:10 compute-0 nova_compute[260935]: </domain>
Oct 11 08:56:10 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.006 2 DEBUG nova.virt.libvirt.driver [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.007 2 DEBUG nova.virt.libvirt.driver [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.009 2 DEBUG nova.virt.libvirt.vif [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-136419371',display_name='tempest-SecurityGroupsTestJSON-server-136419371',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-136419371',id=63,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:56:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='3a6a3cc2a54f4a9bafcdc1304f07944b',ramdisk_id='',reservation_id='r-qhzm8q7c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-2086563292',owner_user_name='tempest-SecurityGroupsTestJSON-2086563292-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:56:08Z,user_data=None,user_id='a04e2908f5a54c8f98bee8d0faf3e658',uuid=d80189d8-28e4-440b-8aed-b43c62f59dd7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.009 2 DEBUG nova.network.os_vif_util [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converting VIF {"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.010 2 DEBUG nova.network.os_vif_util [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.011 2 DEBUG os_vif [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.013 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.014 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.020 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bc371fa-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.021 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7bc371fa-44, col_values=(('external_ids', {'iface-id': '7bc371fa-443a-4188-ace2-2837e3709136', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:54:8b', 'vm-uuid': 'd80189d8-28e4-440b-8aed-b43c62f59dd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 372 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 185 op/s
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:10 compute-0 NetworkManager[44960]: <info>  [1760172970.0660] manager: (tap7bc371fa-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.071 2 INFO os_vif [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44')
Oct 11 08:56:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Oct 11 08:56:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2913637587' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/394974751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Oct 11 08:56:10 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Oct 11 08:56:10 compute-0 kernel: tap7bc371fa-44: entered promiscuous mode
Oct 11 08:56:10 compute-0 NetworkManager[44960]: <info>  [1760172970.1797] manager: (tap7bc371fa-44): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Oct 11 08:56:10 compute-0 ovn_controller[152945]: 2025-10-11T08:56:10Z|00551|binding|INFO|Claiming lport 7bc371fa-443a-4188-ace2-2837e3709136 for this chassis.
Oct 11 08:56:10 compute-0 ovn_controller[152945]: 2025-10-11T08:56:10Z|00552|binding|INFO|7bc371fa-443a-4188-ace2-2837e3709136: Claiming fa:16:3e:76:54:8b 10.100.0.14
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.188 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:54:8b 10.100.0.14'], port_security=['fa:16:3e:76:54:8b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd80189d8-28e4-440b-8aed-b43c62f59dd7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a6a3cc2a54f4a9bafcdc1304f07944b', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e0385938-baf4-4634-ae20-542a58f27831 e3ab43aa-8c9d-4578-afa6-1b85bfb9e682', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b76a516-f507-4a4f-aa31-3047cedf7049, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=7bc371fa-443a-4188-ace2-2837e3709136) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.190 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 7bc371fa-443a-4188-ace2-2837e3709136 in datapath 338aeaf8-43d5-4292-a8fa-8952dd3c508b bound to our chassis
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.194 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 338aeaf8-43d5-4292-a8fa-8952dd3c508b
Oct 11 08:56:10 compute-0 NetworkManager[44960]: <info>  [1760172970.2031] device (tap7bc371fa-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:56:10 compute-0 NetworkManager[44960]: <info>  [1760172970.2057] device (tap7bc371fa-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.220 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5f236946-13dc-49cd-9ac4-d11481617dc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:10 compute-0 ovn_controller[152945]: 2025-10-11T08:56:10Z|00553|binding|INFO|Setting lport 7bc371fa-443a-4188-ace2-2837e3709136 ovn-installed in OVS
Oct 11 08:56:10 compute-0 ovn_controller[152945]: 2025-10-11T08:56:10Z|00554|binding|INFO|Setting lport 7bc371fa-443a-4188-ace2-2837e3709136 up in Southbound
Oct 11 08:56:10 compute-0 systemd-machined[215705]: New machine qemu-72-instance-0000003f.
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.232 2 DEBUG nova.storage.rbd_utils [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] cloning vms/98d8ebd6-0917-49cf-8efc-a245486424bc_disk@afee575312834d4e959990ab1d1ecca1 to images/6885ce04-2718-45ad-80a5-64bd9611a1ad clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:56:10 compute-0 rsyslogd[1003]: imjournal from <np0005481065:nova_compute>: begin to drop messages due to rate-limiting
Oct 11 08:56:10 compute-0 systemd[1]: Started Virtual Machine qemu-72-instance-0000003f.
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.266 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd8784b-3fdb-4795-a81c-58c5141d7eb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.271 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec2a7f3-e5fd-4a7b-b367-cbe784b9cce4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.321 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[86468b19-01f4-4f97-96c2-8879e1e9cd8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.356 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9bcc3234-e989-4fc6-be49-a89da44649a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap338aeaf8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:2a:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477778, 'reachable_time': 18372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326001, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.393 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8754e7ea-c5f3-4870-a487-de32b700859b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap338aeaf8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477793, 'tstamp': 477793}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326004, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap338aeaf8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477797, 'tstamp': 477797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326004, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.398 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap338aeaf8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.402 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap338aeaf8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.403 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.404 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap338aeaf8-40, col_values=(('external_ids', {'iface-id': 'ebd712a5-5601-47dd-8cfc-89d2ce6b1035'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.404 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.413 2 DEBUG nova.storage.rbd_utils [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] flattening images/6885ce04-2718-45ad-80a5-64bd9611a1ad flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.665 2 DEBUG nova.compute.manager [req-1ccf052e-d91a-4156-ab8a-d5045f67da48 req-888486e6-a24e-42bb-ac79-186765fe1a82 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Received event network-vif-plugged-92faeec9-cc08-45c5-84b3-7191f39c6339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.667 2 DEBUG oslo_concurrency.lockutils [req-1ccf052e-d91a-4156-ab8a-d5045f67da48 req-888486e6-a24e-42bb-ac79-186765fe1a82 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.667 2 DEBUG oslo_concurrency.lockutils [req-1ccf052e-d91a-4156-ab8a-d5045f67da48 req-888486e6-a24e-42bb-ac79-186765fe1a82 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.667 2 DEBUG oslo_concurrency.lockutils [req-1ccf052e-d91a-4156-ab8a-d5045f67da48 req-888486e6-a24e-42bb-ac79-186765fe1a82 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.668 2 DEBUG nova.compute.manager [req-1ccf052e-d91a-4156-ab8a-d5045f67da48 req-888486e6-a24e-42bb-ac79-186765fe1a82 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] No waiting events found dispatching network-vif-plugged-92faeec9-cc08-45c5-84b3-7191f39c6339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.668 2 WARNING nova.compute.manager [req-1ccf052e-d91a-4156-ab8a-d5045f67da48 req-888486e6-a24e-42bb-ac79-186765fe1a82 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Received unexpected event network-vif-plugged-92faeec9-cc08-45c5-84b3-7191f39c6339 for instance with vm_state active and task_state None.
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.697 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.799 2 DEBUG nova.network.neutron [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Successfully updated port: 37653b55-c083-4108-a42f-4bbe7778f058 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.824 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "refresh_cache-62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.824 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquired lock "refresh_cache-62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.825 2 DEBUG nova.network.neutron [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.827 2 DEBUG oslo_concurrency.lockutils [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.827 2 DEBUG oslo_concurrency.lockutils [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.828 2 DEBUG oslo_concurrency.lockutils [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.829 2 DEBUG oslo_concurrency.lockutils [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:10 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.829 2 DEBUG oslo_concurrency.lockutils [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.832 2 INFO nova.compute.manager [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Terminating instance
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.834 2 DEBUG nova.compute.manager [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.839 2 DEBUG nova.storage.rbd_utils [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] removing snapshot(afee575312834d4e959990ab1d1ecca1) on rbd image(98d8ebd6-0917-49cf-8efc-a245486424bc_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.851 2 DEBUG nova.compute.manager [req-c9a9ff58-05da-4302-ac82-b4d84dc31316 req-e9664dbe-2695-4a23-a28f-9e849d6fd0b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.853 2 DEBUG oslo_concurrency.lockutils [req-c9a9ff58-05da-4302-ac82-b4d84dc31316 req-e9664dbe-2695-4a23-a28f-9e849d6fd0b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.854 2 DEBUG oslo_concurrency.lockutils [req-c9a9ff58-05da-4302-ac82-b4d84dc31316 req-e9664dbe-2695-4a23-a28f-9e849d6fd0b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.854 2 DEBUG oslo_concurrency.lockutils [req-c9a9ff58-05da-4302-ac82-b4d84dc31316 req-e9664dbe-2695-4a23-a28f-9e849d6fd0b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.855 2 DEBUG nova.compute.manager [req-c9a9ff58-05da-4302-ac82-b4d84dc31316 req-e9664dbe-2695-4a23-a28f-9e849d6fd0b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] No waiting events found dispatching network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.855 2 WARNING nova.compute.manager [req-c9a9ff58-05da-4302-ac82-b4d84dc31316 req-e9664dbe-2695-4a23-a28f-9e849d6fd0b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received unexpected event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 for instance with vm_state active and task_state reboot_started_hard.
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.855 2 DEBUG nova.compute.manager [req-c9a9ff58-05da-4302-ac82-b4d84dc31316 req-e9664dbe-2695-4a23-a28f-9e849d6fd0b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.855 2 DEBUG oslo_concurrency.lockutils [req-c9a9ff58-05da-4302-ac82-b4d84dc31316 req-e9664dbe-2695-4a23-a28f-9e849d6fd0b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.856 2 DEBUG oslo_concurrency.lockutils [req-c9a9ff58-05da-4302-ac82-b4d84dc31316 req-e9664dbe-2695-4a23-a28f-9e849d6fd0b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.856 2 DEBUG oslo_concurrency.lockutils [req-c9a9ff58-05da-4302-ac82-b4d84dc31316 req-e9664dbe-2695-4a23-a28f-9e849d6fd0b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.856 2 DEBUG nova.compute.manager [req-c9a9ff58-05da-4302-ac82-b4d84dc31316 req-e9664dbe-2695-4a23-a28f-9e849d6fd0b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] No waiting events found dispatching network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.857 2 WARNING nova.compute.manager [req-c9a9ff58-05da-4302-ac82-b4d84dc31316 req-e9664dbe-2695-4a23-a28f-9e849d6fd0b2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received unexpected event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 for instance with vm_state active and task_state reboot_started_hard.
Oct 11 08:56:10 compute-0 kernel: tap92faeec9-cc (unregistering): left promiscuous mode
Oct 11 08:56:10 compute-0 NetworkManager[44960]: <info>  [1760172970.9140] device (tap92faeec9-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:56:10 compute-0 ovn_controller[152945]: 2025-10-11T08:56:10Z|00555|binding|INFO|Releasing lport 92faeec9-cc08-45c5-84b3-7191f39c6339 from this chassis (sb_readonly=0)
Oct 11 08:56:10 compute-0 ovn_controller[152945]: 2025-10-11T08:56:10Z|00556|binding|INFO|Setting lport 92faeec9-cc08-45c5-84b3-7191f39c6339 down in Southbound
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:10 compute-0 ovn_controller[152945]: 2025-10-11T08:56:10Z|00557|binding|INFO|Removing iface tap92faeec9-cc ovn-installed in OVS
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.939 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:da:61 10.100.0.10'], port_security=['fa:16:3e:22:da:61 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '4', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=92faeec9-cc08-45c5-84b3-7191f39c6339) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.941 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 92faeec9-cc08-45c5-84b3-7191f39c6339 in datapath e075bdab-78c4-414f-b270-c41d1c82f498 unbound from our chassis
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.942 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 08:56:10 compute-0 nova_compute[260935]: 2025-10-11 08:56:10.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:10 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000040.scope: Deactivated successfully.
Oct 11 08:56:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:10.969 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[86eefa5a-50c4-4dcb-8d8b-75d951e126a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:10 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000040.scope: Consumed 2.661s CPU time.
Oct 11 08:56:10 compute-0 systemd-machined[215705]: Machine qemu-71-instance-00000040 terminated.
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.010 2 DEBUG nova.network.neutron [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:56:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:11.015 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa80732-6a04-4749-be9f-e3015a40b0d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:11.018 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f04a4c-768d-4f8d-9d05-10d0268b8a7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:11.059 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cbbe06dd-1d53-4f63-9e7e-167f5c31d69d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:11 compute-0 podman[326086]: 2025-10-11 08:56:11.064097823 +0000 UTC m=+0.116673807 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 08:56:11 compute-0 podman[326088]: 2025-10-11 08:56:11.072716469 +0000 UTC m=+0.119203880 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, config_id=ovn_controller)
Oct 11 08:56:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:11.087 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[247d93d7-2126-44b7-969e-9b523cc65714]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 36159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326139, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.117 2 INFO nova.virt.libvirt.driver [-] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Instance destroyed successfully.
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.118 2 DEBUG nova.objects.instance [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'resources' on Instance uuid ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:11 compute-0 ceph-mon[74313]: pgmap v1558: 321 pgs: 321 active+clean; 372 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 185 op/s
Oct 11 08:56:11 compute-0 ceph-mon[74313]: osdmap e211: 3 total, 3 up, 3 in
Oct 11 08:56:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:11.121 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7c01c3ab-ee41-4f28-bf41-ec0ac2318a93]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479414, 'tstamp': 479414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326147, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479419, 'tstamp': 479419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326147, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:11.124 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.185 2 DEBUG nova.virt.libvirt.vif [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:55:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-865090442',display_name='tempest-ServersTestJSON-server-865090442',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-865090442',id=64,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:56:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-9tcw9lxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:56:09Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.187 2 DEBUG nova.network.os_vif_util [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "92faeec9-cc08-45c5-84b3-7191f39c6339", "address": "fa:16:3e:22:da:61", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92faeec9-cc", "ovs_interfaceid": "92faeec9-cc08-45c5-84b3-7191f39c6339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.187 2 DEBUG nova.network.os_vif_util [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:da:61,bridge_name='br-int',has_traffic_filtering=True,id=92faeec9-cc08-45c5-84b3-7191f39c6339,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92faeec9-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.187 2 DEBUG os_vif [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:da:61,bridge_name='br-int',has_traffic_filtering=True,id=92faeec9-cc08-45c5-84b3-7191f39c6339,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92faeec9-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:56:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:11.189 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape075bdab-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92faeec9-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:11 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Oct 11 08:56:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:11.191 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:11.192 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape075bdab-70, col_values=(('external_ids', {'iface-id': 'b9cf681c-9f4c-4c56-987a-55fa7aa89e1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:11.192 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:11.196782) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172971196834, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2151, "num_deletes": 258, "total_data_size": 3072929, "memory_usage": 3122368, "flush_reason": "Manual Compaction"}
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.198 2 INFO os_vif [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:da:61,bridge_name='br-int',has_traffic_filtering=True,id=92faeec9-cc08-45c5-84b3-7191f39c6339,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92faeec9-cc')
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172971213003, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3013674, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30667, "largest_seqno": 32817, "table_properties": {"data_size": 3004131, "index_size": 5909, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21250, "raw_average_key_size": 20, "raw_value_size": 2984376, "raw_average_value_size": 2934, "num_data_blocks": 260, "num_entries": 1017, "num_filter_entries": 1017, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760172793, "oldest_key_time": 1760172793, "file_creation_time": 1760172971, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 16557 microseconds, and 6953 cpu microseconds.
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:11.213335) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3013674 bytes OK
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:11.213428) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:11.215036) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:11.215048) EVENT_LOG_v1 {"time_micros": 1760172971215044, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:11.215064) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3063722, prev total WAL file size 3063722, number of live WAL files 2.
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:11.216843) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(2943KB)], [68(6750KB)]
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172971216924, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 9926033, "oldest_snapshot_seqno": -1}
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.236 2 DEBUG nova.storage.rbd_utils [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] creating snapshot(snap) on rbd image(6885ce04-2718-45ad-80a5-64bd9611a1ad) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5713 keys, 8262977 bytes, temperature: kUnknown
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172971268584, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8262977, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8224625, "index_size": 22946, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14341, "raw_key_size": 143522, "raw_average_key_size": 25, "raw_value_size": 8121737, "raw_average_value_size": 1421, "num_data_blocks": 934, "num_entries": 5713, "num_filter_entries": 5713, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760172971, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:11.269164) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8262977 bytes
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:11.270612) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.4 rd, 159.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 6.6 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 6241, records dropped: 528 output_compression: NoCompression
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:11.270638) EVENT_LOG_v1 {"time_micros": 1760172971270624, "job": 38, "event": "compaction_finished", "compaction_time_micros": 51863, "compaction_time_cpu_micros": 25561, "output_level": 6, "num_output_files": 1, "total_output_size": 8262977, "num_input_records": 6241, "num_output_records": 5713, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172971271999, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760172971274124, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:11.216729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:11.274264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:11.274270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:11.274272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:11.274274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:56:11 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:11.274276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.342 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for d80189d8-28e4-440b-8aed-b43c62f59dd7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.342 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172971.3422058, d80189d8-28e4-440b-8aed-b43c62f59dd7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.343 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] VM Resumed (Lifecycle Event)
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.355 2 DEBUG nova.compute.manager [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.359 2 INFO nova.virt.libvirt.driver [-] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Instance rebooted successfully.
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.359 2 DEBUG nova.compute.manager [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.361 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.368 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.400 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.400 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172971.3550382, d80189d8-28e4-440b-8aed-b43c62f59dd7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.401 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] VM Started (Lifecycle Event)
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.414 2 DEBUG oslo_concurrency.lockutils [None req-ba28e5e9-9e40-489e-95fc-28faaf8d9862 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.426 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.431 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.689 2 INFO nova.virt.libvirt.driver [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Deleting instance files /var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_del
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.690 2 INFO nova.virt.libvirt.driver [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Deletion of /var/lib/nova/instances/ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc_del complete
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.732 2 INFO nova.compute.manager [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Took 0.90 seconds to destroy the instance on the hypervisor.
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.732 2 DEBUG oslo.service.loopingcall [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.733 2 DEBUG nova.compute.manager [-] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.733 2 DEBUG nova.network.neutron [-] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.802 2 DEBUG nova.network.neutron [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Updating instance_info_cache with network_info: [{"id": "37653b55-c083-4108-a42f-4bbe7778f058", "address": "fa:16:3e:62:88:02", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37653b55-c0", "ovs_interfaceid": "37653b55-c083-4108-a42f-4bbe7778f058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.820 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Releasing lock "refresh_cache-62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.821 2 DEBUG nova.compute.manager [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Instance network_info: |[{"id": "37653b55-c083-4108-a42f-4bbe7778f058", "address": "fa:16:3e:62:88:02", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37653b55-c0", "ovs_interfaceid": "37653b55-c083-4108-a42f-4bbe7778f058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.823 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Start _get_guest_xml network_info=[{"id": "37653b55-c083-4108-a42f-4bbe7778f058", "address": "fa:16:3e:62:88:02", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37653b55-c0", "ovs_interfaceid": "37653b55-c083-4108-a42f-4bbe7778f058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.827 2 WARNING nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.831 2 DEBUG nova.virt.libvirt.host [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.831 2 DEBUG nova.virt.libvirt.host [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.834 2 DEBUG nova.virt.libvirt.host [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.835 2 DEBUG nova.virt.libvirt.host [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.835 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.835 2 DEBUG nova.virt.hardware [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.836 2 DEBUG nova.virt.hardware [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.836 2 DEBUG nova.virt.hardware [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.836 2 DEBUG nova.virt.hardware [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.836 2 DEBUG nova.virt.hardware [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.836 2 DEBUG nova.virt.hardware [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.836 2 DEBUG nova.virt.hardware [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.837 2 DEBUG nova.virt.hardware [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.837 2 DEBUG nova.virt.hardware [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.837 2 DEBUG nova.virt.hardware [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.837 2 DEBUG nova.virt.hardware [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:56:11 compute-0 nova_compute[260935]: 2025-10-11 08:56:11.839 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 372 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.6 KiB/s wr, 97 op/s
Oct 11 08:56:12 compute-0 ceph-mon[74313]: osdmap e212: 3 total, 3 up, 3 in
Oct 11 08:56:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Oct 11 08:56:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Oct 11 08:56:12 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Oct 11 08:56:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:56:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2747684445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.390 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.426 2 DEBUG nova.storage.rbd_utils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.431 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:56:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:56:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2670106075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.897 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.897 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.898 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.898 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.898 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.898 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.903 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.904 2 DEBUG nova.virt.libvirt.vif [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:56:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-352403089',display_name='tempest-DeleteServersTestJSON-server-352403089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-352403089',id=65,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-781vkij3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-10193406
77-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:56:08Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=62c9b5d7-f22e-4738-b2e6-7c53fcb968ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37653b55-c083-4108-a42f-4bbe7778f058", "address": "fa:16:3e:62:88:02", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37653b55-c0", "ovs_interfaceid": "37653b55-c083-4108-a42f-4bbe7778f058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.904 2 DEBUG nova.network.os_vif_util [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "37653b55-c083-4108-a42f-4bbe7778f058", "address": "fa:16:3e:62:88:02", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37653b55-c0", "ovs_interfaceid": "37653b55-c083-4108-a42f-4bbe7778f058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.905 2 DEBUG nova.network.os_vif_util [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:88:02,bridge_name='br-int',has_traffic_filtering=True,id=37653b55-c083-4108-a42f-4bbe7778f058,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37653b55-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.906 2 DEBUG nova.objects.instance [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'pci_devices' on Instance uuid 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.953 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:56:12 compute-0 nova_compute[260935]:   <uuid>62c9b5d7-f22e-4738-b2e6-7c53fcb968ea</uuid>
Oct 11 08:56:12 compute-0 nova_compute[260935]:   <name>instance-00000041</name>
Oct 11 08:56:12 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:56:12 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:56:12 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <nova:name>tempest-DeleteServersTestJSON-server-352403089</nova:name>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:56:11</nova:creationTime>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:56:12 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:56:12 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:56:12 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:56:12 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:56:12 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:56:12 compute-0 nova_compute[260935]:         <nova:user uuid="213e5693e94f44e7950e3dfbca04228a">tempest-DeleteServersTestJSON-1019340677-project-member</nova:user>
Oct 11 08:56:12 compute-0 nova_compute[260935]:         <nova:project uuid="93dd4902ce324862a38006da8e06503a">tempest-DeleteServersTestJSON-1019340677</nova:project>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:56:12 compute-0 nova_compute[260935]:         <nova:port uuid="37653b55-c083-4108-a42f-4bbe7778f058">
Oct 11 08:56:12 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:56:12 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:56:12 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <system>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <entry name="serial">62c9b5d7-f22e-4738-b2e6-7c53fcb968ea</entry>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <entry name="uuid">62c9b5d7-f22e-4738-b2e6-7c53fcb968ea</entry>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     </system>
Oct 11 08:56:12 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:56:12 compute-0 nova_compute[260935]:   <os>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:   </os>
Oct 11 08:56:12 compute-0 nova_compute[260935]:   <features>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:   </features>
Oct 11 08:56:12 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:56:12 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:56:12 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk">
Oct 11 08:56:12 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       </source>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:56:12 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk.config">
Oct 11 08:56:12 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       </source>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:56:12 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:62:88:02"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <target dev="tap37653b55-c0"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/62c9b5d7-f22e-4738-b2e6-7c53fcb968ea/console.log" append="off"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <video>
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     </video>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:56:12 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:56:12 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:56:12 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:56:12 compute-0 nova_compute[260935]: </domain>
Oct 11 08:56:12 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.953 2 DEBUG nova.compute.manager [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Preparing to wait for external event network-vif-plugged-37653b55-c083-4108-a42f-4bbe7778f058 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.954 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.954 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.954 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.955 2 DEBUG nova.virt.libvirt.vif [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:56:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-352403089',display_name='tempest-DeleteServersTestJSON-server-352403089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-352403089',id=65,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-781vkij3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:56:08Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=62c9b5d7-f22e-4738-b2e6-7c53fcb968ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37653b55-c083-4108-a42f-4bbe7778f058", "address": "fa:16:3e:62:88:02", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37653b55-c0", "ovs_interfaceid": "37653b55-c083-4108-a42f-4bbe7778f058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.955 2 DEBUG nova.network.os_vif_util [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "37653b55-c083-4108-a42f-4bbe7778f058", "address": "fa:16:3e:62:88:02", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37653b55-c0", "ovs_interfaceid": "37653b55-c083-4108-a42f-4bbe7778f058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.955 2 DEBUG nova.network.os_vif_util [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:88:02,bridge_name='br-int',has_traffic_filtering=True,id=37653b55-c083-4108-a42f-4bbe7778f058,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37653b55-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.956 2 DEBUG os_vif [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:88:02,bridge_name='br-int',has_traffic_filtering=True,id=37653b55-c083-4108-a42f-4bbe7778f058,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37653b55-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.957 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.957 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.960 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37653b55-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.960 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap37653b55-c0, col_values=(('external_ids', {'iface-id': '37653b55-c083-4108-a42f-4bbe7778f058', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:88:02', 'vm-uuid': '62c9b5d7-f22e-4738-b2e6-7c53fcb968ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:12 compute-0 NetworkManager[44960]: <info>  [1760172972.9627] manager: (tap37653b55-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:12 compute-0 nova_compute[260935]: 2025-10-11 08:56:12.972 2 INFO os_vif [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:88:02,bridge_name='br-int',has_traffic_filtering=True,id=37653b55-c083-4108-a42f-4bbe7778f058,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37653b55-c0')
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.035 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.035 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.035 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No VIF found with MAC fa:16:3e:62:88:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.036 2 INFO nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Using config drive
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.057 2 DEBUG nova.storage.rbd_utils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:13 compute-0 ceph-mon[74313]: pgmap v1561: 321 pgs: 321 active+clean; 372 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.6 KiB/s wr, 97 op/s
Oct 11 08:56:13 compute-0 ceph-mon[74313]: osdmap e213: 3 total, 3 up, 3 in
Oct 11 08:56:13 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2747684445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:13 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2670106075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.420 2 INFO nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Creating config drive at /var/lib/nova/instances/62c9b5d7-f22e-4738-b2e6-7c53fcb968ea/disk.config
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.433 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/62c9b5d7-f22e-4738-b2e6-7c53fcb968ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp12pysfux execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.509 2 DEBUG nova.compute.manager [req-de5440b3-a35a-4662-99cc-e4d258c4bb68 req-3bde3275-783c-4885-93cb-4fd35ad4a2bd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Received event network-vif-unplugged-92faeec9-cc08-45c5-84b3-7191f39c6339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.510 2 DEBUG oslo_concurrency.lockutils [req-de5440b3-a35a-4662-99cc-e4d258c4bb68 req-3bde3275-783c-4885-93cb-4fd35ad4a2bd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.510 2 DEBUG oslo_concurrency.lockutils [req-de5440b3-a35a-4662-99cc-e4d258c4bb68 req-3bde3275-783c-4885-93cb-4fd35ad4a2bd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.510 2 DEBUG oslo_concurrency.lockutils [req-de5440b3-a35a-4662-99cc-e4d258c4bb68 req-3bde3275-783c-4885-93cb-4fd35ad4a2bd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.510 2 DEBUG nova.compute.manager [req-de5440b3-a35a-4662-99cc-e4d258c4bb68 req-3bde3275-783c-4885-93cb-4fd35ad4a2bd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] No waiting events found dispatching network-vif-unplugged-92faeec9-cc08-45c5-84b3-7191f39c6339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.510 2 DEBUG nova.compute.manager [req-de5440b3-a35a-4662-99cc-e4d258c4bb68 req-3bde3275-783c-4885-93cb-4fd35ad4a2bd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Received event network-vif-unplugged-92faeec9-cc08-45c5-84b3-7191f39c6339 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.510 2 DEBUG nova.compute.manager [req-de5440b3-a35a-4662-99cc-e4d258c4bb68 req-3bde3275-783c-4885-93cb-4fd35ad4a2bd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Received event network-vif-plugged-92faeec9-cc08-45c5-84b3-7191f39c6339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.511 2 DEBUG oslo_concurrency.lockutils [req-de5440b3-a35a-4662-99cc-e4d258c4bb68 req-3bde3275-783c-4885-93cb-4fd35ad4a2bd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.511 2 DEBUG oslo_concurrency.lockutils [req-de5440b3-a35a-4662-99cc-e4d258c4bb68 req-3bde3275-783c-4885-93cb-4fd35ad4a2bd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.511 2 DEBUG oslo_concurrency.lockutils [req-de5440b3-a35a-4662-99cc-e4d258c4bb68 req-3bde3275-783c-4885-93cb-4fd35ad4a2bd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.511 2 DEBUG nova.compute.manager [req-de5440b3-a35a-4662-99cc-e4d258c4bb68 req-3bde3275-783c-4885-93cb-4fd35ad4a2bd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] No waiting events found dispatching network-vif-plugged-92faeec9-cc08-45c5-84b3-7191f39c6339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.511 2 WARNING nova.compute.manager [req-de5440b3-a35a-4662-99cc-e4d258c4bb68 req-3bde3275-783c-4885-93cb-4fd35ad4a2bd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Received unexpected event network-vif-plugged-92faeec9-cc08-45c5-84b3-7191f39c6339 for instance with vm_state active and task_state deleting.
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.544 2 DEBUG nova.network.neutron [-] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.557 2 DEBUG nova.compute.manager [req-e75dd23d-ebdd-4dbb-bebb-3c3513e62c62 req-366e9b1a-42b3-4c84-9abf-3e6a888e4c2c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Received event network-changed-37653b55-c083-4108-a42f-4bbe7778f058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.557 2 DEBUG nova.compute.manager [req-e75dd23d-ebdd-4dbb-bebb-3c3513e62c62 req-366e9b1a-42b3-4c84-9abf-3e6a888e4c2c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Refreshing instance network info cache due to event network-changed-37653b55-c083-4108-a42f-4bbe7778f058. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.557 2 DEBUG oslo_concurrency.lockutils [req-e75dd23d-ebdd-4dbb-bebb-3c3513e62c62 req-366e9b1a-42b3-4c84-9abf-3e6a888e4c2c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.558 2 DEBUG oslo_concurrency.lockutils [req-e75dd23d-ebdd-4dbb-bebb-3c3513e62c62 req-366e9b1a-42b3-4c84-9abf-3e6a888e4c2c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.558 2 DEBUG nova.network.neutron [req-e75dd23d-ebdd-4dbb-bebb-3c3513e62c62 req-366e9b1a-42b3-4c84-9abf-3e6a888e4c2c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Refreshing network info cache for port 37653b55-c083-4108-a42f-4bbe7778f058 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.581 2 INFO nova.compute.manager [-] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Took 1.85 seconds to deallocate network for instance.
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.609 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/62c9b5d7-f22e-4738-b2e6-7c53fcb968ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp12pysfux" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:13.634 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.643 2 DEBUG nova.storage.rbd_utils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.648 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/62c9b5d7-f22e-4738-b2e6-7c53fcb968ea/disk.config 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.697 2 DEBUG oslo_concurrency.lockutils [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.699 2 DEBUG oslo_concurrency.lockutils [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.829 2 DEBUG oslo_concurrency.processutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/62c9b5d7-f22e-4738-b2e6-7c53fcb968ea/disk.config 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.829 2 INFO nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Deleting local config drive /var/lib/nova/instances/62c9b5d7-f22e-4738-b2e6-7c53fcb968ea/disk.config because it was imported into RBD.
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.893 2 DEBUG oslo_concurrency.processutils [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:13 compute-0 kernel: tap37653b55-c0: entered promiscuous mode
Oct 11 08:56:13 compute-0 NetworkManager[44960]: <info>  [1760172973.9019] manager: (tap37653b55-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Oct 11 08:56:13 compute-0 systemd-udevd[326104]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:56:13 compute-0 NetworkManager[44960]: <info>  [1760172973.9243] device (tap37653b55-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:56:13 compute-0 NetworkManager[44960]: <info>  [1760172973.9252] device (tap37653b55-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:13 compute-0 ovn_controller[152945]: 2025-10-11T08:56:13Z|00558|binding|INFO|Claiming lport 37653b55-c083-4108-a42f-4bbe7778f058 for this chassis.
Oct 11 08:56:13 compute-0 ovn_controller[152945]: 2025-10-11T08:56:13Z|00559|binding|INFO|37653b55-c083-4108-a42f-4bbe7778f058: Claiming fa:16:3e:62:88:02 10.100.0.9
Oct 11 08:56:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:13.953 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:88:02 10.100.0.9'], port_security=['fa:16:3e:62:88:02 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '62c9b5d7-f22e-4738-b2e6-7c53fcb968ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93dd4902ce324862a38006da8e06503a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b773900e-3df7-4cb6-b9b0-3d240ff499b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69794a2f-48ab-4a0d-8725-f4a7f57172dd, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=37653b55-c083-4108-a42f-4bbe7778f058) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:56:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:13.955 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 37653b55-c083-4108-a42f-4bbe7778f058 in datapath 2cb96d57-a5e9-4b38-b10e-68187a5bf82f bound to our chassis
Oct 11 08:56:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:13.958 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 08:56:13 compute-0 ovn_controller[152945]: 2025-10-11T08:56:13Z|00560|binding|INFO|Setting lport 37653b55-c083-4108-a42f-4bbe7778f058 ovn-installed in OVS
Oct 11 08:56:13 compute-0 ovn_controller[152945]: 2025-10-11T08:56:13Z|00561|binding|INFO|Setting lport 37653b55-c083-4108-a42f-4bbe7778f058 up in Southbound
Oct 11 08:56:13 compute-0 nova_compute[260935]: 2025-10-11 08:56:13.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:13.980 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc56eb25-335c-478f-a6e0-bc84ebb2defc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:13.981 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2cb96d57-a1 in ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:56:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:13.983 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2cb96d57-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:56:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:13.984 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2b71a49f-9a68-4a84-8e47-1e64d122c8d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:13.985 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[73e5f196-d87d-437e-ad7c-b85533e1af9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:13 compute-0 systemd-machined[215705]: New machine qemu-73-instance-00000041.
Oct 11 08:56:14 compute-0 systemd[1]: Started Virtual Machine qemu-73-instance-00000041.
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.009 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0155a4-ea11-471f-9faf-63a4ef05d198]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 451 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 16 MiB/s rd, 11 MiB/s wr, 552 op/s
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.047 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0d879e0b-74d8-4390-b3f5-a57e34446b66]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.090 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb52557-2bbf-4e1d-9f96-250e6162af21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:14 compute-0 NetworkManager[44960]: <info>  [1760172974.0983] manager: (tap2cb96d57-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/265)
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.099 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ea52103d-fa6e-427f-80f8-4f21ebce699d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:14 compute-0 systemd-udevd[326328]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.124 2 INFO nova.virt.libvirt.driver [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Snapshot image upload complete
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.124 2 INFO nova.compute.manager [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Took 5.69 seconds to snapshot the instance on the hypervisor.
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.139 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7f38367a-1697-40da-bad3-5f1afd490e86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.143 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[dfbf1a0d-2e20-45d6-99f5-b272cbfa09b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:14 compute-0 NetworkManager[44960]: <info>  [1760172974.1723] device (tap2cb96d57-a0): carrier: link connected
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.177 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[dada678a-c8e9-4148-bcb1-b3748028a71e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.209 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad0a1ad-58fd-4653-986b-87c4119ea8a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2cb96d57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c9:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481887, 'reachable_time': 16570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326378, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.242 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[27679675-f6ee-4640-8fee-60982db48a89]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:c9b5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 481887, 'tstamp': 481887}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326379, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.263 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ee711f-b07e-4195-807c-3347dcd0b2ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2cb96d57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c9:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481887, 'reachable_time': 16570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326380, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.313 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[84a02e7d-2342-4743-aa9d-b836d0c74a1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.406 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[de14b3bb-1792-4cc3-b1fb-b69eeb1ea7b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.408 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb96d57-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.408 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.409 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cb96d57-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:14 compute-0 NetworkManager[44960]: <info>  [1760172974.4122] manager: (tap2cb96d57-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/266)
Oct 11 08:56:14 compute-0 kernel: tap2cb96d57-a0: entered promiscuous mode
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.418 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2cb96d57-a0, col_values=(('external_ids', {'iface-id': 'a11e0a08-d1ab-4bff-901b-632484cc0e21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:14 compute-0 ovn_controller[152945]: 2025-10-11T08:56:14Z|00562|binding|INFO|Releasing lport a11e0a08-d1ab-4bff-901b-632484cc0e21 from this chassis (sb_readonly=0)
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.423 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.427 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c74f64-f191-435d-88cb-a732fd9a0dd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.428 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:56:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:14.429 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'env', 'PROCESS_TAG=haproxy-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.437 2 DEBUG nova.compute.manager [None req-c1c756a2-3b40-40d3-9664-2fb8da17375d 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:56:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3880269529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.500 2 DEBUG oslo_concurrency.processutils [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.508 2 DEBUG nova.compute.provider_tree [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.545 2 DEBUG nova.scheduler.client.report [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.577 2 DEBUG oslo_concurrency.lockutils [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.612 2 INFO nova.scheduler.client.report [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Deleted allocations for instance ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.825 2 DEBUG oslo_concurrency.lockutils [None req-a340f7f7-5204-4f2a-a8a3-8bce2e6c347f ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.866 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updating instance_info_cache with network_info: [{"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.884 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.885 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.886 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.926 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.927 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.927 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.927 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:56:14 compute-0 nova_compute[260935]: 2025-10-11 08:56:14.928 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:14 compute-0 podman[326452]: 2025-10-11 08:56:14.954677351 +0000 UTC m=+0.112420626 container create 23f892a3d86984389f393cafe8ce70113b1bd7117fb9e54465eabd456d808c9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 08:56:14 compute-0 podman[326452]: 2025-10-11 08:56:14.901785153 +0000 UTC m=+0.059528468 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:56:15 compute-0 systemd[1]: Started libpod-conmon-23f892a3d86984389f393cafe8ce70113b1bd7117fb9e54465eabd456d808c9b.scope.
Oct 11 08:56:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd4ec95262ad04bcf8d9d7aeb62e59cea4fe95d7952747cb0697c3295b3f6e2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:56:15 compute-0 podman[326452]: 2025-10-11 08:56:15.084134863 +0000 UTC m=+0.241878148 container init 23f892a3d86984389f393cafe8ce70113b1bd7117fb9e54465eabd456d808c9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 08:56:15 compute-0 podman[326452]: 2025-10-11 08:56:15.095724333 +0000 UTC m=+0.253467578 container start 23f892a3d86984389f393cafe8ce70113b1bd7117fb9e54465eabd456d808c9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:56:15 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[326467]: [NOTICE]   (326471) : New worker (326474) forked
Oct 11 08:56:15 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[326467]: [NOTICE]   (326471) : Loading success.
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.139 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172975.1385374, 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.140 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] VM Started (Lifecycle Event)
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.168 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.172 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172975.140144, 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.173 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] VM Paused (Lifecycle Event)
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.189 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:15.189 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:15.190 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:15.191 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.191 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.206 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:56:15 compute-0 ceph-mon[74313]: pgmap v1563: 321 pgs: 321 active+clean; 451 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 16 MiB/s rd, 11 MiB/s wr, 552 op/s
Oct 11 08:56:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3880269529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:56:15 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/559982335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.455 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.546 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.546 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.550 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000041 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.551 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000041 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.555 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.555 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.560 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.560 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.564 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.564 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.723 2 DEBUG nova.network.neutron [req-e75dd23d-ebdd-4dbb-bebb-3c3513e62c62 req-366e9b1a-42b3-4c84-9abf-3e6a888e4c2c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Updated VIF entry in instance network info cache for port 37653b55-c083-4108-a42f-4bbe7778f058. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.724 2 DEBUG nova.network.neutron [req-e75dd23d-ebdd-4dbb-bebb-3c3513e62c62 req-366e9b1a-42b3-4c84-9abf-3e6a888e4c2c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Updating instance_info_cache with network_info: [{"id": "37653b55-c083-4108-a42f-4bbe7778f058", "address": "fa:16:3e:62:88:02", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37653b55-c0", "ovs_interfaceid": "37653b55-c083-4108-a42f-4bbe7778f058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.731 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172960.7305317, bb58fb30-73c8-457a-9293-2d07d0015a46 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.731 2 INFO nova.compute.manager [-] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] VM Stopped (Lifecycle Event)
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.753 2 DEBUG oslo_concurrency.lockutils [req-e75dd23d-ebdd-4dbb-bebb-3c3513e62c62 req-366e9b1a-42b3-4c84-9abf-3e6a888e4c2c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.753 2 DEBUG nova.compute.manager [req-e75dd23d-ebdd-4dbb-bebb-3c3513e62c62 req-366e9b1a-42b3-4c84-9abf-3e6a888e4c2c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.754 2 DEBUG oslo_concurrency.lockutils [req-e75dd23d-ebdd-4dbb-bebb-3c3513e62c62 req-366e9b1a-42b3-4c84-9abf-3e6a888e4c2c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.754 2 DEBUG oslo_concurrency.lockutils [req-e75dd23d-ebdd-4dbb-bebb-3c3513e62c62 req-366e9b1a-42b3-4c84-9abf-3e6a888e4c2c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.754 2 DEBUG oslo_concurrency.lockutils [req-e75dd23d-ebdd-4dbb-bebb-3c3513e62c62 req-366e9b1a-42b3-4c84-9abf-3e6a888e4c2c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.754 2 DEBUG nova.compute.manager [req-e75dd23d-ebdd-4dbb-bebb-3c3513e62c62 req-366e9b1a-42b3-4c84-9abf-3e6a888e4c2c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] No waiting events found dispatching network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.755 2 WARNING nova.compute.manager [req-e75dd23d-ebdd-4dbb-bebb-3c3513e62c62 req-366e9b1a-42b3-4c84-9abf-3e6a888e4c2c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received unexpected event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 for instance with vm_state active and task_state None.
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.756 2 DEBUG nova.compute.manager [None req-f7981e54-ac33-4c45-8999-fb7fdff70e16 - - - - - -] [instance: bb58fb30-73c8-457a-9293-2d07d0015a46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.863 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.864 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3217MB free_disk=59.810062408447266GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.864 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.865 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.902 2 DEBUG nova.compute.manager [req-a78b1e5c-c25a-4922-b39a-58dcfd02dab9 req-32d77839-4694-4c4d-bf03-3ff9fb12c5e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Received event network-vif-deleted-92faeec9-cc08-45c5-84b3-7191f39c6339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.902 2 DEBUG nova.compute.manager [req-a78b1e5c-c25a-4922-b39a-58dcfd02dab9 req-32d77839-4694-4c4d-bf03-3ff9fb12c5e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Received event network-vif-plugged-37653b55-c083-4108-a42f-4bbe7778f058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.903 2 DEBUG oslo_concurrency.lockutils [req-a78b1e5c-c25a-4922-b39a-58dcfd02dab9 req-32d77839-4694-4c4d-bf03-3ff9fb12c5e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.903 2 DEBUG oslo_concurrency.lockutils [req-a78b1e5c-c25a-4922-b39a-58dcfd02dab9 req-32d77839-4694-4c4d-bf03-3ff9fb12c5e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.903 2 DEBUG oslo_concurrency.lockutils [req-a78b1e5c-c25a-4922-b39a-58dcfd02dab9 req-32d77839-4694-4c4d-bf03-3ff9fb12c5e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.904 2 DEBUG nova.compute.manager [req-a78b1e5c-c25a-4922-b39a-58dcfd02dab9 req-32d77839-4694-4c4d-bf03-3ff9fb12c5e5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Processing event network-vif-plugged-37653b55-c083-4108-a42f-4bbe7778f058 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.905 2 DEBUG nova.compute.manager [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.920 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172975.9196525, 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.920 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] VM Resumed (Lifecycle Event)
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.922 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.929 2 INFO nova.virt.libvirt.driver [-] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Instance spawned successfully.
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.930 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.967 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.978 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.986 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 98d8ebd6-0917-49cf-8efc-a245486424bc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.987 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 3915cf40-bdd7-4fe8-8311-834ff26aaf9c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.987 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b297454f-91af-4716-b4f7-6af9f0d7e62d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.987 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance d80189d8-28e4-440b-8aed-b43c62f59dd7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.987 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.988 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.988 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.995 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.996 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.996 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.997 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.997 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:15 compute-0 nova_compute[260935]: 2025-10-11 08:56:15.998 2 DEBUG nova.virt.libvirt.driver [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 451 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 16 MiB/s rd, 11 MiB/s wr, 552 op/s
Oct 11 08:56:16 compute-0 nova_compute[260935]: 2025-10-11 08:56:16.052 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:56:16 compute-0 nova_compute[260935]: 2025-10-11 08:56:16.080 2 INFO nova.compute.manager [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Took 7.24 seconds to spawn the instance on the hypervisor.
Oct 11 08:56:16 compute-0 nova_compute[260935]: 2025-10-11 08:56:16.081 2 DEBUG nova.compute.manager [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:16 compute-0 nova_compute[260935]: 2025-10-11 08:56:16.137 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:16 compute-0 nova_compute[260935]: 2025-10-11 08:56:16.210 2 INFO nova.compute.manager [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Took 8.39 seconds to build instance.
Oct 11 08:56:16 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/559982335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:16 compute-0 nova_compute[260935]: 2025-10-11 08:56:16.225 2 DEBUG nova.compute.manager [None req-937314b8-02ad-40b8-af42-60454f64d35c 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:16 compute-0 nova_compute[260935]: 2025-10-11 08:56:16.236 2 DEBUG oslo_concurrency.lockutils [None req-77688478-a503-4f3f-a020-37394964ed83 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:16 compute-0 nova_compute[260935]: 2025-10-11 08:56:16.269 2 INFO nova.compute.manager [None req-937314b8-02ad-40b8-af42-60454f64d35c 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] instance snapshotting
Oct 11 08:56:16 compute-0 nova_compute[260935]: 2025-10-11 08:56:16.270 2 DEBUG nova.objects.instance [None req-937314b8-02ad-40b8-af42-60454f64d35c 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'flavor' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:56:16 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1619156829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:16 compute-0 nova_compute[260935]: 2025-10-11 08:56:16.644 2 INFO nova.virt.libvirt.driver [None req-937314b8-02ad-40b8-af42-60454f64d35c 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Beginning live snapshot process
Oct 11 08:56:16 compute-0 nova_compute[260935]: 2025-10-11 08:56:16.649 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:16 compute-0 nova_compute[260935]: 2025-10-11 08:56:16.656 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:56:16 compute-0 nova_compute[260935]: 2025-10-11 08:56:16.668 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:56:16 compute-0 nova_compute[260935]: 2025-10-11 08:56:16.710 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:56:16 compute-0 nova_compute[260935]: 2025-10-11 08:56:16.710 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:16 compute-0 nova_compute[260935]: 2025-10-11 08:56:16.783 2 DEBUG nova.virt.libvirt.imagebackend [None req-937314b8-02ad-40b8-af42-60454f64d35c 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 08:56:16 compute-0 nova_compute[260935]: 2025-10-11 08:56:16.962 2 DEBUG nova.storage.rbd_utils [None req-937314b8-02ad-40b8-af42-60454f64d35c 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] creating snapshot(4a9e049560f74d7eb173fbcfe9e5d079) on rbd image(98d8ebd6-0917-49cf-8efc-a245486424bc_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:56:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Oct 11 08:56:17 compute-0 ceph-mon[74313]: pgmap v1564: 321 pgs: 321 active+clean; 451 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 16 MiB/s rd, 11 MiB/s wr, 552 op/s
Oct 11 08:56:17 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1619156829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Oct 11 08:56:17 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Oct 11 08:56:17 compute-0 nova_compute[260935]: 2025-10-11 08:56:17.309 2 DEBUG nova.storage.rbd_utils [None req-937314b8-02ad-40b8-af42-60454f64d35c 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] cloning vms/98d8ebd6-0917-49cf-8efc-a245486424bc_disk@4a9e049560f74d7eb173fbcfe9e5d079 to images/e7a407ba-fb0f-4473-8e67-bd2f57ec6871 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:56:17 compute-0 nova_compute[260935]: 2025-10-11 08:56:17.432 2 DEBUG nova.storage.rbd_utils [None req-937314b8-02ad-40b8-af42-60454f64d35c 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] flattening images/e7a407ba-fb0f-4473-8e67-bd2f57ec6871 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:56:17 compute-0 nova_compute[260935]: 2025-10-11 08:56:17.790 2 DEBUG nova.storage.rbd_utils [None req-937314b8-02ad-40b8-af42-60454f64d35c 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] removing snapshot(4a9e049560f74d7eb173fbcfe9e5d079) on rbd image(98d8ebd6-0917-49cf-8efc-a245486424bc_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:56:17 compute-0 nova_compute[260935]: 2025-10-11 08:56:17.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 498 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 22 MiB/s rd, 13 MiB/s wr, 723 op/s
Oct 11 08:56:18 compute-0 nova_compute[260935]: 2025-10-11 08:56:18.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Oct 11 08:56:18 compute-0 ceph-mon[74313]: osdmap e214: 3 total, 3 up, 3 in
Oct 11 08:56:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Oct 11 08:56:18 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Oct 11 08:56:18 compute-0 nova_compute[260935]: 2025-10-11 08:56:18.291 2 DEBUG nova.storage.rbd_utils [None req-937314b8-02ad-40b8-af42-60454f64d35c 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] creating snapshot(snap) on rbd image(e7a407ba-fb0f-4473-8e67-bd2f57ec6871) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:56:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:56:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Oct 11 08:56:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Oct 11 08:56:18 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Oct 11 08:56:19 compute-0 ceph-mon[74313]: pgmap v1566: 321 pgs: 321 active+clean; 498 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 22 MiB/s rd, 13 MiB/s wr, 723 op/s
Oct 11 08:56:19 compute-0 ceph-mon[74313]: osdmap e215: 3 total, 3 up, 3 in
Oct 11 08:56:19 compute-0 ceph-mon[74313]: osdmap e216: 3 total, 3 up, 3 in
Oct 11 08:56:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Oct 11 08:56:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Oct 11 08:56:19 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Oct 11 08:56:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 498 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 5.9 MiB/s wr, 406 op/s
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.177 2 DEBUG nova.compute.manager [req-d34790c5-7031-4417-bd65-717abe90ad78 req-f1fbe0ba-1dc5-4e26-bc17-9611882275d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Received event network-vif-plugged-37653b55-c083-4108-a42f-4bbe7778f058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.178 2 DEBUG oslo_concurrency.lockutils [req-d34790c5-7031-4417-bd65-717abe90ad78 req-f1fbe0ba-1dc5-4e26-bc17-9611882275d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.178 2 DEBUG oslo_concurrency.lockutils [req-d34790c5-7031-4417-bd65-717abe90ad78 req-f1fbe0ba-1dc5-4e26-bc17-9611882275d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.179 2 DEBUG oslo_concurrency.lockutils [req-d34790c5-7031-4417-bd65-717abe90ad78 req-f1fbe0ba-1dc5-4e26-bc17-9611882275d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.180 2 DEBUG nova.compute.manager [req-d34790c5-7031-4417-bd65-717abe90ad78 req-f1fbe0ba-1dc5-4e26-bc17-9611882275d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] No waiting events found dispatching network-vif-plugged-37653b55-c083-4108-a42f-4bbe7778f058 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.180 2 WARNING nova.compute.manager [req-d34790c5-7031-4417-bd65-717abe90ad78 req-f1fbe0ba-1dc5-4e26-bc17-9611882275d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Received unexpected event network-vif-plugged-37653b55-c083-4108-a42f-4bbe7778f058 for instance with vm_state active and task_state None.
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.181 2 DEBUG nova.compute.manager [req-d34790c5-7031-4417-bd65-717abe90ad78 req-f1fbe0ba-1dc5-4e26-bc17-9611882275d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-changed-7bc371fa-443a-4188-ace2-2837e3709136 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.181 2 DEBUG nova.compute.manager [req-d34790c5-7031-4417-bd65-717abe90ad78 req-f1fbe0ba-1dc5-4e26-bc17-9611882275d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Refreshing instance network info cache due to event network-changed-7bc371fa-443a-4188-ace2-2837e3709136. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.182 2 DEBUG oslo_concurrency.lockutils [req-d34790c5-7031-4417-bd65-717abe90ad78 req-f1fbe0ba-1dc5-4e26-bc17-9611882275d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.183 2 DEBUG oslo_concurrency.lockutils [req-d34790c5-7031-4417-bd65-717abe90ad78 req-f1fbe0ba-1dc5-4e26-bc17-9611882275d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.183 2 DEBUG nova.network.neutron [req-d34790c5-7031-4417-bd65-717abe90ad78 req-f1fbe0ba-1dc5-4e26-bc17-9611882275d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Refreshing network info cache for port 7bc371fa-443a-4188-ace2-2837e3709136 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.357 2 DEBUG oslo_concurrency.lockutils [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.358 2 DEBUG oslo_concurrency.lockutils [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.358 2 DEBUG oslo_concurrency.lockutils [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.359 2 DEBUG oslo_concurrency.lockutils [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.360 2 DEBUG oslo_concurrency.lockutils [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.362 2 INFO nova.compute.manager [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Terminating instance
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.364 2 DEBUG nova.compute.manager [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:56:20 compute-0 ceph-mon[74313]: osdmap e217: 3 total, 3 up, 3 in
Oct 11 08:56:20 compute-0 kernel: tap7bc371fa-44 (unregistering): left promiscuous mode
Oct 11 08:56:20 compute-0 NetworkManager[44960]: <info>  [1760172980.4213] device (tap7bc371fa-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:56:20 compute-0 ovn_controller[152945]: 2025-10-11T08:56:20Z|00563|binding|INFO|Releasing lport 7bc371fa-443a-4188-ace2-2837e3709136 from this chassis (sb_readonly=0)
Oct 11 08:56:20 compute-0 ovn_controller[152945]: 2025-10-11T08:56:20Z|00564|binding|INFO|Setting lport 7bc371fa-443a-4188-ace2-2837e3709136 down in Southbound
Oct 11 08:56:20 compute-0 ovn_controller[152945]: 2025-10-11T08:56:20Z|00565|binding|INFO|Removing iface tap7bc371fa-44 ovn-installed in OVS
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:20.488 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:54:8b 10.100.0.14'], port_security=['fa:16:3e:76:54:8b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd80189d8-28e4-440b-8aed-b43c62f59dd7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a6a3cc2a54f4a9bafcdc1304f07944b', 'neutron:revision_number': '8', 'neutron:security_group_ids': '310f607e-9688-4f0f-ac6e-036c1fbd41cd e0385938-baf4-4634-ae20-542a58f27831 e3ab43aa-8c9d-4578-afa6-1b85bfb9e682', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b76a516-f507-4a4f-aa31-3047cedf7049, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=7bc371fa-443a-4188-ace2-2837e3709136) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:56:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:20.490 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 7bc371fa-443a-4188-ace2-2837e3709136 in datapath 338aeaf8-43d5-4292-a8fa-8952dd3c508b unbound from our chassis
Oct 11 08:56:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:20.503 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 338aeaf8-43d5-4292-a8fa-8952dd3c508b
Oct 11 08:56:20 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Oct 11 08:56:20 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003f.scope: Consumed 10.047s CPU time.
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:20 compute-0 systemd-machined[215705]: Machine qemu-72-instance-0000003f terminated.
Oct 11 08:56:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:20.531 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe14c07-59ff-4717-a0a7-77f210c28817]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:20.576 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d911f0-8466-460b-b6f5-00c488360c77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:20.581 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8f228586-3939-4e89-89f4-0655ab280053]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.618 2 INFO nova.virt.libvirt.driver [-] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Instance destroyed successfully.
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.620 2 DEBUG nova.objects.instance [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lazy-loading 'resources' on Instance uuid d80189d8-28e4-440b-8aed-b43c62f59dd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.639 2 DEBUG nova.virt.libvirt.vif [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-136419371',display_name='tempest-SecurityGroupsTestJSON-server-136419371',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-136419371',id=63,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:56:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3a6a3cc2a54f4a9bafcdc1304f07944b',ramdisk_id='',reservation_id='r-qhzm8q7c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-2086563292',owner_user_name='tempest-SecurityGroupsTestJSON-2086563292-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:56:11Z,user_data=None,user_id='a04e2908f5a54c8f98bee8d0faf3e658',uuid=d80189d8-28e4-440b-8aed-b43c62f59dd7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.640 2 DEBUG nova.network.os_vif_util [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converting VIF {"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.642 2 DEBUG nova.network.os_vif_util [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.643 2 DEBUG os_vif [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bc371fa-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:20.647 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ba92df42-1500-434c-8fea-0fe06e1199f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.660 2 INFO os_vif [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:54:8b,bridge_name='br-int',has_traffic_filtering=True,id=7bc371fa-443a-4188-ace2-2837e3709136,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bc371fa-44')
Oct 11 08:56:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:20.677 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c670082a-461a-4075-9751-bd9517b9825d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap338aeaf8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:2a:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477778, 'reachable_time': 18372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326690, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:20.702 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[01f525b9-1939-4dff-874b-14f8f6ae8632]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap338aeaf8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477793, 'tstamp': 477793}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326702, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap338aeaf8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477797, 'tstamp': 477797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326702, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:20.706 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap338aeaf8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:20.710 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap338aeaf8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:20.710 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:20.711 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap338aeaf8-40, col_values=(('external_ids', {'iface-id': 'ebd712a5-5601-47dd-8cfc-89d2ce6b1035'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:20.711 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.816 2 DEBUG oslo_concurrency.lockutils [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.817 2 DEBUG oslo_concurrency.lockutils [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.817 2 INFO nova.compute.manager [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Shelving
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.845 2 DEBUG nova.virt.libvirt.driver [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.893 2 INFO nova.virt.libvirt.driver [None req-937314b8-02ad-40b8-af42-60454f64d35c 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Snapshot image upload complete
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.894 2 INFO nova.compute.manager [None req-937314b8-02ad-40b8-af42-60454f64d35c 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Took 4.60 seconds to snapshot the instance on the hypervisor.
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.955 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "f699e5fd-760a-4326-b428-c34d5e922f8b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.955 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "f699e5fd-760a-4326-b428-c34d5e922f8b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:20 compute-0 nova_compute[260935]: 2025-10-11 08:56:20.970 2 DEBUG nova.compute.manager [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.054 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.056 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.062 2 INFO nova.virt.libvirt.driver [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Deleting instance files /var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7_del
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.063 2 INFO nova.virt.libvirt.driver [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Deletion of /var/lib/nova/instances/d80189d8-28e4-440b-8aed-b43c62f59dd7_del complete
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.074 2 DEBUG nova.virt.hardware [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.074 2 INFO nova.compute.claims [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.130 2 INFO nova.compute.manager [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.131 2 DEBUG oslo.service.loopingcall [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.132 2 DEBUG nova.compute.manager [-] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.132 2 DEBUG nova.network.neutron [-] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.249 2 DEBUG nova.compute.manager [None req-937314b8-02ad-40b8-af42-60454f64d35c 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.282 2 DEBUG oslo_concurrency.processutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:21 compute-0 ceph-mon[74313]: pgmap v1570: 321 pgs: 321 active+clean; 498 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 5.9 MiB/s wr, 406 op/s
Oct 11 08:56:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:56:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068770470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.762 2 DEBUG oslo_concurrency.processutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.770 2 DEBUG nova.compute.provider_tree [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.785 2 DEBUG nova.scheduler.client.report [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.801 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.801 2 DEBUG nova.compute.manager [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.849 2 DEBUG nova.compute.manager [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.850 2 DEBUG nova.network.neutron [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.869 2 INFO nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.887 2 DEBUG nova.compute.manager [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.964 2 DEBUG nova.compute.manager [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.966 2 DEBUG nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:56:21 compute-0 nova_compute[260935]: 2025-10-11 08:56:21.967 2 INFO nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Creating image(s)
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.004 2 DEBUG nova.storage.rbd_utils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image f699e5fd-760a-4326-b428-c34d5e922f8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 498 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 4.9 MiB/s wr, 340 op/s
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.043 2 DEBUG nova.storage.rbd_utils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image f699e5fd-760a-4326-b428-c34d5e922f8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.081 2 DEBUG nova.storage.rbd_utils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image f699e5fd-760a-4326-b428-c34d5e922f8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.085 2 DEBUG oslo_concurrency.processutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.143 2 DEBUG nova.policy [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee0e5fedb9fc464eb2a9ac362f5e0749', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.192 2 DEBUG oslo_concurrency.processutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.193 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.194 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.194 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.224 2 DEBUG nova.storage.rbd_utils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image f699e5fd-760a-4326-b428-c34d5e922f8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.228 2 DEBUG oslo_concurrency.processutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f699e5fd-760a-4326-b428-c34d5e922f8b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:22 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4068770470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.573 2 DEBUG oslo_concurrency.processutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f699e5fd-760a-4326-b428-c34d5e922f8b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.688 2 DEBUG nova.storage.rbd_utils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] resizing rbd image f699e5fd-760a-4326-b428-c34d5e922f8b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.739 2 DEBUG nova.network.neutron [-] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.769 2 DEBUG nova.compute.manager [req-7b34efdc-a4a8-4278-b35a-b71efb82f885 req-6839a74f-eb7c-46c8-836c-948aee7b090b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-vif-unplugged-7bc371fa-443a-4188-ace2-2837e3709136 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.770 2 DEBUG oslo_concurrency.lockutils [req-7b34efdc-a4a8-4278-b35a-b71efb82f885 req-6839a74f-eb7c-46c8-836c-948aee7b090b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.771 2 DEBUG oslo_concurrency.lockutils [req-7b34efdc-a4a8-4278-b35a-b71efb82f885 req-6839a74f-eb7c-46c8-836c-948aee7b090b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.772 2 DEBUG oslo_concurrency.lockutils [req-7b34efdc-a4a8-4278-b35a-b71efb82f885 req-6839a74f-eb7c-46c8-836c-948aee7b090b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.773 2 DEBUG nova.compute.manager [req-7b34efdc-a4a8-4278-b35a-b71efb82f885 req-6839a74f-eb7c-46c8-836c-948aee7b090b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] No waiting events found dispatching network-vif-unplugged-7bc371fa-443a-4188-ace2-2837e3709136 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.774 2 DEBUG nova.compute.manager [req-7b34efdc-a4a8-4278-b35a-b71efb82f885 req-6839a74f-eb7c-46c8-836c-948aee7b090b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-vif-unplugged-7bc371fa-443a-4188-ace2-2837e3709136 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.774 2 DEBUG nova.compute.manager [req-7b34efdc-a4a8-4278-b35a-b71efb82f885 req-6839a74f-eb7c-46c8-836c-948aee7b090b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.775 2 DEBUG oslo_concurrency.lockutils [req-7b34efdc-a4a8-4278-b35a-b71efb82f885 req-6839a74f-eb7c-46c8-836c-948aee7b090b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.775 2 DEBUG oslo_concurrency.lockutils [req-7b34efdc-a4a8-4278-b35a-b71efb82f885 req-6839a74f-eb7c-46c8-836c-948aee7b090b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.776 2 DEBUG oslo_concurrency.lockutils [req-7b34efdc-a4a8-4278-b35a-b71efb82f885 req-6839a74f-eb7c-46c8-836c-948aee7b090b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.777 2 DEBUG nova.compute.manager [req-7b34efdc-a4a8-4278-b35a-b71efb82f885 req-6839a74f-eb7c-46c8-836c-948aee7b090b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] No waiting events found dispatching network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.777 2 WARNING nova.compute.manager [req-7b34efdc-a4a8-4278-b35a-b71efb82f885 req-6839a74f-eb7c-46c8-836c-948aee7b090b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received unexpected event network-vif-plugged-7bc371fa-443a-4188-ace2-2837e3709136 for instance with vm_state active and task_state deleting.
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.779 2 INFO nova.compute.manager [-] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Took 1.65 seconds to deallocate network for instance.
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.838 2 DEBUG nova.objects.instance [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'migration_context' on Instance uuid f699e5fd-760a-4326-b428-c34d5e922f8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.864 2 DEBUG nova.compute.manager [req-30488a0d-9397-4890-b8c9-eef128315af0 req-81ff8af3-8a45-4abf-84c8-f88668de0be6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Received event network-vif-deleted-7bc371fa-443a-4188-ace2-2837e3709136 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.866 2 DEBUG nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.867 2 DEBUG nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Ensure instance console log exists: /var/lib/nova/instances/f699e5fd-760a-4326-b428-c34d5e922f8b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.868 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.869 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.870 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.887 2 DEBUG oslo_concurrency.lockutils [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.888 2 DEBUG oslo_concurrency.lockutils [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.891 2 DEBUG nova.network.neutron [req-d34790c5-7031-4417-bd65-717abe90ad78 req-f1fbe0ba-1dc5-4e26-bc17-9611882275d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Updated VIF entry in instance network info cache for port 7bc371fa-443a-4188-ace2-2837e3709136. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.892 2 DEBUG nova.network.neutron [req-d34790c5-7031-4417-bd65-717abe90ad78 req-f1fbe0ba-1dc5-4e26-bc17-9611882275d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Updating instance_info_cache with network_info: [{"id": "7bc371fa-443a-4188-ace2-2837e3709136", "address": "fa:16:3e:76:54:8b", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bc371fa-44", "ovs_interfaceid": "7bc371fa-443a-4188-ace2-2837e3709136", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:22 compute-0 nova_compute[260935]: 2025-10-11 08:56:22.912 2 DEBUG oslo_concurrency.lockutils [req-d34790c5-7031-4417-bd65-717abe90ad78 req-f1fbe0ba-1dc5-4e26-bc17-9611882275d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d80189d8-28e4-440b-8aed-b43c62f59dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:23 compute-0 nova_compute[260935]: 2025-10-11 08:56:23.018 2 DEBUG oslo_concurrency.processutils [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:23 compute-0 nova_compute[260935]: 2025-10-11 08:56:23.072 2 DEBUG nova.network.neutron [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Successfully created port: 6288362b-25a4-409c-ae10-4a00943f4b89 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:56:23 compute-0 nova_compute[260935]: 2025-10-11 08:56:23.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:23 compute-0 sudo[326901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:56:23 compute-0 sudo[326901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:23 compute-0 sudo[326901]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:23 compute-0 sudo[326945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:56:23 compute-0 sudo[326945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:23 compute-0 sudo[326945]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:56:23 compute-0 sudo[326970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:56:23 compute-0 sudo[326970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:23 compute-0 ceph-mon[74313]: pgmap v1571: 321 pgs: 321 active+clean; 498 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 4.9 MiB/s wr, 340 op/s
Oct 11 08:56:23 compute-0 sudo[326970]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:23 compute-0 nova_compute[260935]: 2025-10-11 08:56:23.418 2 DEBUG nova.compute.manager [None req-c7177d66-320e-4aee-8e12-ffa6a3e5ce83 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:23 compute-0 nova_compute[260935]: 2025-10-11 08:56:23.472 2 INFO nova.compute.manager [None req-c7177d66-320e-4aee-8e12-ffa6a3e5ce83 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] instance snapshotting
Oct 11 08:56:23 compute-0 nova_compute[260935]: 2025-10-11 08:56:23.474 2 DEBUG nova.objects.instance [None req-c7177d66-320e-4aee-8e12-ffa6a3e5ce83 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'flavor' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:23 compute-0 sudo[326995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:56:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:56:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300005276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:23 compute-0 sudo[326995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:23 compute-0 nova_compute[260935]: 2025-10-11 08:56:23.529 2 DEBUG oslo_concurrency.processutils [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:23 compute-0 nova_compute[260935]: 2025-10-11 08:56:23.537 2 DEBUG nova.compute.provider_tree [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:56:23 compute-0 nova_compute[260935]: 2025-10-11 08:56:23.553 2 DEBUG nova.scheduler.client.report [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:56:23 compute-0 nova_compute[260935]: 2025-10-11 08:56:23.581 2 DEBUG oslo_concurrency.lockutils [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:23 compute-0 nova_compute[260935]: 2025-10-11 08:56:23.618 2 INFO nova.scheduler.client.report [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Deleted allocations for instance d80189d8-28e4-440b-8aed-b43c62f59dd7
Oct 11 08:56:23 compute-0 nova_compute[260935]: 2025-10-11 08:56:23.713 2 DEBUG oslo_concurrency.lockutils [None req-dc08f2a5-0049-4a6b-aed6-f9c49d09a4e1 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "d80189d8-28e4-440b-8aed-b43c62f59dd7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:23 compute-0 nova_compute[260935]: 2025-10-11 08:56:23.853 2 INFO nova.virt.libvirt.driver [None req-c7177d66-320e-4aee-8e12-ffa6a3e5ce83 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Beginning live snapshot process
Oct 11 08:56:23 compute-0 nova_compute[260935]: 2025-10-11 08:56:23.941 2 DEBUG nova.network.neutron [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Successfully updated port: 6288362b-25a4-409c-ae10-4a00943f4b89 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:56:24 compute-0 nova_compute[260935]: 2025-10-11 08:56:24.023 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "refresh_cache-f699e5fd-760a-4326-b428-c34d5e922f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:24 compute-0 nova_compute[260935]: 2025-10-11 08:56:24.024 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquired lock "refresh_cache-f699e5fd-760a-4326-b428-c34d5e922f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:24 compute-0 nova_compute[260935]: 2025-10-11 08:56:24.024 2 DEBUG nova.network.neutron [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:56:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 530 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 7.4 MiB/s wr, 167 op/s
Oct 11 08:56:24 compute-0 nova_compute[260935]: 2025-10-11 08:56:24.042 2 DEBUG nova.virt.libvirt.imagebackend [None req-c7177d66-320e-4aee-8e12-ffa6a3e5ce83 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 08:56:24 compute-0 sudo[326995]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:56:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:56:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:56:24 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:56:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:56:24 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:56:24 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 30fc1125-0511-4289-877e-576104d2467d does not exist
Oct 11 08:56:24 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 69011eed-4905-440d-ae8e-bc27e52ee8ba does not exist
Oct 11 08:56:24 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 24836cee-9063-48e7-9593-3b127b018bb1 does not exist
Oct 11 08:56:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:56:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:56:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:56:24 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:56:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:56:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:56:24 compute-0 nova_compute[260935]: 2025-10-11 08:56:24.252 2 DEBUG nova.network.neutron [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:56:24 compute-0 nova_compute[260935]: 2025-10-11 08:56:24.276 2 DEBUG nova.storage.rbd_utils [None req-c7177d66-320e-4aee-8e12-ffa6a3e5ce83 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] creating snapshot(6852d36be8704c6f8dd656101cfd5d59) on rbd image(98d8ebd6-0917-49cf-8efc-a245486424bc_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:56:24 compute-0 sudo[327088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:56:24 compute-0 sudo[327088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:24 compute-0 sudo[327088]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:24 compute-0 sudo[327128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:56:24 compute-0 sudo[327128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:24 compute-0 sudo[327128]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1300005276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:24 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:56:24 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:56:24 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:56:24 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:56:24 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:56:24 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:56:24 compute-0 sudo[327156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:56:24 compute-0 sudo[327156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:24 compute-0 sudo[327156]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:24 compute-0 sudo[327181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:56:24 compute-0 sudo[327181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:56:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:56:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:56:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:56:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:56:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:56:24 compute-0 nova_compute[260935]: 2025-10-11 08:56:24.892 2 DEBUG nova.compute.manager [req-addc2c46-c096-4b9c-bd09-c981ea42dd0e req-c9c1332e-1261-4570-9f98-9ea155a631da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Received event network-changed-6288362b-25a4-409c-ae10-4a00943f4b89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:24 compute-0 nova_compute[260935]: 2025-10-11 08:56:24.894 2 DEBUG nova.compute.manager [req-addc2c46-c096-4b9c-bd09-c981ea42dd0e req-c9c1332e-1261-4570-9f98-9ea155a631da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Refreshing instance network info cache due to event network-changed-6288362b-25a4-409c-ae10-4a00943f4b89. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:56:24 compute-0 nova_compute[260935]: 2025-10-11 08:56:24.895 2 DEBUG oslo_concurrency.lockutils [req-addc2c46-c096-4b9c-bd09-c981ea42dd0e req-c9c1332e-1261-4570-9f98-9ea155a631da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f699e5fd-760a-4326-b428-c34d5e922f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:25 compute-0 podman[327245]: 2025-10-11 08:56:25.131840797 +0000 UTC m=+0.078672744 container create c5a32d999ffc6379022915dfb6ae35bd5e61dbcd8fe7941a4299e764b5c920c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gould, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 08:56:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Oct 11 08:56:25 compute-0 systemd[1]: Started libpod-conmon-c5a32d999ffc6379022915dfb6ae35bd5e61dbcd8fe7941a4299e764b5c920c5.scope.
Oct 11 08:56:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Oct 11 08:56:25 compute-0 podman[327245]: 2025-10-11 08:56:25.09934557 +0000 UTC m=+0.046177577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:56:25 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Oct 11 08:56:25 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:56:25 compute-0 podman[327245]: 2025-10-11 08:56:25.25257701 +0000 UTC m=+0.199409017 container init c5a32d999ffc6379022915dfb6ae35bd5e61dbcd8fe7941a4299e764b5c920c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:56:25 compute-0 podman[327245]: 2025-10-11 08:56:25.266639551 +0000 UTC m=+0.213471508 container start c5a32d999ffc6379022915dfb6ae35bd5e61dbcd8fe7941a4299e764b5c920c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.265 2 DEBUG nova.storage.rbd_utils [None req-c7177d66-320e-4aee-8e12-ffa6a3e5ce83 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] cloning vms/98d8ebd6-0917-49cf-8efc-a245486424bc_disk@6852d36be8704c6f8dd656101cfd5d59 to images/d5855ab3-cb37-48e8-a360-ab08fcc225fe clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:56:25 compute-0 podman[327245]: 2025-10-11 08:56:25.271138409 +0000 UTC m=+0.217970416 container attach c5a32d999ffc6379022915dfb6ae35bd5e61dbcd8fe7941a4299e764b5c920c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 08:56:25 compute-0 busy_gould[327261]: 167 167
Oct 11 08:56:25 compute-0 systemd[1]: libpod-c5a32d999ffc6379022915dfb6ae35bd5e61dbcd8fe7941a4299e764b5c920c5.scope: Deactivated successfully.
Oct 11 08:56:25 compute-0 podman[327245]: 2025-10-11 08:56:25.276932374 +0000 UTC m=+0.223764321 container died c5a32d999ffc6379022915dfb6ae35bd5e61dbcd8fe7941a4299e764b5c920c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 08:56:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e7ff4fae4f7c3be1b15ca9981e84b843bdf34506c78599817fe26eb6ea612be-merged.mount: Deactivated successfully.
Oct 11 08:56:25 compute-0 podman[327245]: 2025-10-11 08:56:25.345798908 +0000 UTC m=+0.292630855 container remove c5a32d999ffc6379022915dfb6ae35bd5e61dbcd8fe7941a4299e764b5c920c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gould, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.355 2 DEBUG nova.network.neutron [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Updating instance_info_cache with network_info: [{"id": "6288362b-25a4-409c-ae10-4a00943f4b89", "address": "fa:16:3e:a3:b0:09", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6288362b-25", "ovs_interfaceid": "6288362b-25a4-409c-ae10-4a00943f4b89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:25 compute-0 systemd[1]: libpod-conmon-c5a32d999ffc6379022915dfb6ae35bd5e61dbcd8fe7941a4299e764b5c920c5.scope: Deactivated successfully.
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.390 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Releasing lock "refresh_cache-f699e5fd-760a-4326-b428-c34d5e922f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.391 2 DEBUG nova.compute.manager [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Instance network_info: |[{"id": "6288362b-25a4-409c-ae10-4a00943f4b89", "address": "fa:16:3e:a3:b0:09", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6288362b-25", "ovs_interfaceid": "6288362b-25a4-409c-ae10-4a00943f4b89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.391 2 DEBUG oslo_concurrency.lockutils [req-addc2c46-c096-4b9c-bd09-c981ea42dd0e req-c9c1332e-1261-4570-9f98-9ea155a631da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f699e5fd-760a-4326-b428-c34d5e922f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.392 2 DEBUG nova.network.neutron [req-addc2c46-c096-4b9c-bd09-c981ea42dd0e req-c9c1332e-1261-4570-9f98-9ea155a631da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Refreshing network info cache for port 6288362b-25a4-409c-ae10-4a00943f4b89 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.397 2 DEBUG nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Start _get_guest_xml network_info=[{"id": "6288362b-25a4-409c-ae10-4a00943f4b89", "address": "fa:16:3e:a3:b0:09", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6288362b-25", "ovs_interfaceid": "6288362b-25a4-409c-ae10-4a00943f4b89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.406 2 WARNING nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.418 2 DEBUG nova.virt.libvirt.host [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.419 2 DEBUG nova.virt.libvirt.host [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:56:25 compute-0 ceph-mon[74313]: pgmap v1572: 321 pgs: 321 active+clean; 530 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 7.4 MiB/s wr, 167 op/s
Oct 11 08:56:25 compute-0 ceph-mon[74313]: osdmap e218: 3 total, 3 up, 3 in
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.427 2 DEBUG nova.virt.libvirt.host [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.428 2 DEBUG nova.virt.libvirt.host [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.429 2 DEBUG nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.429 2 DEBUG nova.virt.hardware [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.430 2 DEBUG nova.virt.hardware [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.430 2 DEBUG nova.virt.hardware [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.431 2 DEBUG nova.virt.hardware [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.431 2 DEBUG nova.virt.hardware [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.431 2 DEBUG nova.virt.hardware [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.432 2 DEBUG nova.virt.hardware [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.432 2 DEBUG nova.virt.hardware [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.433 2 DEBUG nova.virt.hardware [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.433 2 DEBUG nova.virt.hardware [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.434 2 DEBUG nova.virt.hardware [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.438 2 DEBUG oslo_concurrency.processutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.533 2 DEBUG nova.storage.rbd_utils [None req-c7177d66-320e-4aee-8e12-ffa6a3e5ce83 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] flattening images/d5855ab3-cb37-48e8-a360-ab08fcc225fe flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:56:25 compute-0 podman[327331]: 2025-10-11 08:56:25.665806923 +0000 UTC m=+0.091139470 container create 167b1d0ea1534e3d1c011aaf34eae9c464dd929f01244040096c9f97134d4c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:56:25 compute-0 podman[327331]: 2025-10-11 08:56:25.614173241 +0000 UTC m=+0.039505858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:25 compute-0 systemd[1]: Started libpod-conmon-167b1d0ea1534e3d1c011aaf34eae9c464dd929f01244040096c9f97134d4c09.scope.
Oct 11 08:56:25 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:56:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684418d6faf14614f16a6ab67f7944c1aed97bc90e887215cf04b10670b77c54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:56:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684418d6faf14614f16a6ab67f7944c1aed97bc90e887215cf04b10670b77c54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:56:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684418d6faf14614f16a6ab67f7944c1aed97bc90e887215cf04b10670b77c54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:56:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684418d6faf14614f16a6ab67f7944c1aed97bc90e887215cf04b10670b77c54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:56:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684418d6faf14614f16a6ab67f7944c1aed97bc90e887215cf04b10670b77c54/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:56:25 compute-0 podman[327331]: 2025-10-11 08:56:25.792898137 +0000 UTC m=+0.218230684 container init 167b1d0ea1534e3d1c011aaf34eae9c464dd929f01244040096c9f97134d4c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_robinson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 08:56:25 compute-0 podman[327331]: 2025-10-11 08:56:25.803053836 +0000 UTC m=+0.228386383 container start 167b1d0ea1534e3d1c011aaf34eae9c464dd929f01244040096c9f97134d4c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_robinson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 08:56:25 compute-0 podman[327331]: 2025-10-11 08:56:25.816759937 +0000 UTC m=+0.242092494 container attach 167b1d0ea1534e3d1c011aaf34eae9c464dd929f01244040096c9f97134d4c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 08:56:25 compute-0 nova_compute[260935]: 2025-10-11 08:56:25.949 2 DEBUG nova.storage.rbd_utils [None req-c7177d66-320e-4aee-8e12-ffa6a3e5ce83 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] removing snapshot(6852d36be8704c6f8dd656101cfd5d59) on rbd image(98d8ebd6-0917-49cf-8efc-a245486424bc_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:56:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:56:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2381482203' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1574: 321 pgs: 321 active+clean; 530 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.8 MiB/s wr, 130 op/s
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.047 2 DEBUG oslo_concurrency.processutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.069 2 DEBUG nova.storage.rbd_utils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image f699e5fd-760a-4326-b428-c34d5e922f8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.072 2 DEBUG oslo_concurrency.processutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.128 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172971.11239, ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.128 2 INFO nova.compute.manager [-] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] VM Stopped (Lifecycle Event)
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.162 2 DEBUG nova.compute.manager [None req-ae483f72-d805-4b7c-8c51-3487b9c91c8a - - - - - -] [instance: ead8f79e-4b1d-4bf5-bf5a-4943d2122bbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Oct 11 08:56:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2381482203' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Oct 11 08:56:26 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Oct 11 08:56:26 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.484 2 DEBUG nova.storage.rbd_utils [None req-c7177d66-320e-4aee-8e12-ffa6a3e5ce83 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] creating snapshot(snap) on rbd image(d5855ab3-cb37-48e8-a360-ab08fcc225fe) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:56:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:56:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/245973482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.653 2 DEBUG oslo_concurrency.processutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.655 2 DEBUG nova.virt.libvirt.vif [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:56:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-584420125',display_name='tempest-ServersTestJSON-server-584420125',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-584420125',id=66,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAk/MD4FoWprvRA2zmWxJheHU1Q6pbrfGqfP9a3/89xTMHYKqik/bQVNkLzZIS3lp+y8ZWTBvY2vCVH0mlXCADLAZoUmTQcydsCtTk6U8YJ4oWLtZkp2VZ/VYWbcHdLrQ==',key_name='tempest-key-882655236',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-h01orfad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:56:21Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=f699e5fd-760a-4326-b428-c34d5e922f8b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6288362b-25a4-409c-ae10-4a00943f4b89", "address": "fa:16:3e:a3:b0:09", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6288362b-25", "ovs_interfaceid": "6288362b-25a4-409c-ae10-4a00943f4b89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.655 2 DEBUG nova.network.os_vif_util [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "6288362b-25a4-409c-ae10-4a00943f4b89", "address": "fa:16:3e:a3:b0:09", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6288362b-25", "ovs_interfaceid": "6288362b-25a4-409c-ae10-4a00943f4b89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.656 2 DEBUG nova.network.os_vif_util [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:b0:09,bridge_name='br-int',has_traffic_filtering=True,id=6288362b-25a4-409c-ae10-4a00943f4b89,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6288362b-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.658 2 DEBUG nova.objects.instance [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'pci_devices' on Instance uuid f699e5fd-760a-4326-b428-c34d5e922f8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.777 2 DEBUG nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:56:26 compute-0 nova_compute[260935]:   <uuid>f699e5fd-760a-4326-b428-c34d5e922f8b</uuid>
Oct 11 08:56:26 compute-0 nova_compute[260935]:   <name>instance-00000042</name>
Oct 11 08:56:26 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:56:26 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:56:26 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersTestJSON-server-584420125</nova:name>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:56:25</nova:creationTime>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:56:26 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:56:26 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:56:26 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:56:26 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:56:26 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:56:26 compute-0 nova_compute[260935]:         <nova:user uuid="ee0e5fedb9fc464eb2a9ac362f5e0749">tempest-ServersTestJSON-101172647-project-member</nova:user>
Oct 11 08:56:26 compute-0 nova_compute[260935]:         <nova:project uuid="d9864fda4f8641d8a9c1509c426cc206">tempest-ServersTestJSON-101172647</nova:project>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:56:26 compute-0 nova_compute[260935]:         <nova:port uuid="6288362b-25a4-409c-ae10-4a00943f4b89">
Oct 11 08:56:26 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:56:26 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:56:26 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <system>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <entry name="serial">f699e5fd-760a-4326-b428-c34d5e922f8b</entry>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <entry name="uuid">f699e5fd-760a-4326-b428-c34d5e922f8b</entry>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     </system>
Oct 11 08:56:26 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:56:26 compute-0 nova_compute[260935]:   <os>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:   </os>
Oct 11 08:56:26 compute-0 nova_compute[260935]:   <features>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:   </features>
Oct 11 08:56:26 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:56:26 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:56:26 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f699e5fd-760a-4326-b428-c34d5e922f8b_disk">
Oct 11 08:56:26 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       </source>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:56:26 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f699e5fd-760a-4326-b428-c34d5e922f8b_disk.config">
Oct 11 08:56:26 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       </source>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:56:26 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:a3:b0:09"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <target dev="tap6288362b-25"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/f699e5fd-760a-4326-b428-c34d5e922f8b/console.log" append="off"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <video>
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     </video>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:56:26 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:56:26 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:56:26 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:56:26 compute-0 nova_compute[260935]: </domain>
Oct 11 08:56:26 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.778 2 DEBUG nova.compute.manager [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Preparing to wait for external event network-vif-plugged-6288362b-25a4-409c-ae10-4a00943f4b89 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.778 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.779 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.779 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.781 2 DEBUG nova.virt.libvirt.vif [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:56:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-584420125',display_name='tempest-ServersTestJSON-server-584420125',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-584420125',id=66,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAk/MD4FoWprvRA2zmWxJheHU1Q6pbrfGqfP9a3/89xTMHYKqik/bQVNkLzZIS3lp+y8ZWTBvY2vCVH0mlXCADLAZoUmTQcydsCtTk6U8YJ4oWLtZkp2VZ/VYWbcHdLrQ==',key_name='tempest-key-882655236',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-h01orfad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:56:21Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=f699e5fd-760a-4326-b428-c34d5e922f8b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6288362b-25a4-409c-ae10-4a00943f4b89", "address": "fa:16:3e:a3:b0:09", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6288362b-25", "ovs_interfaceid": "6288362b-25a4-409c-ae10-4a00943f4b89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.781 2 DEBUG nova.network.os_vif_util [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "6288362b-25a4-409c-ae10-4a00943f4b89", "address": "fa:16:3e:a3:b0:09", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6288362b-25", "ovs_interfaceid": "6288362b-25a4-409c-ae10-4a00943f4b89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.783 2 DEBUG nova.network.os_vif_util [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:b0:09,bridge_name='br-int',has_traffic_filtering=True,id=6288362b-25a4-409c-ae10-4a00943f4b89,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6288362b-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.784 2 DEBUG os_vif [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:b0:09,bridge_name='br-int',has_traffic_filtering=True,id=6288362b-25a4-409c-ae10-4a00943f4b89,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6288362b-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.786 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.792 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6288362b-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.794 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6288362b-25, col_values=(('external_ids', {'iface-id': '6288362b-25a4-409c-ae10-4a00943f4b89', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:b0:09', 'vm-uuid': 'f699e5fd-760a-4326-b428-c34d5e922f8b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:26 compute-0 NetworkManager[44960]: <info>  [1760172986.8355] manager: (tap6288362b-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.848 2 INFO os_vif [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:b0:09,bridge_name='br-int',has_traffic_filtering=True,id=6288362b-25a4-409c-ae10-4a00943f4b89,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6288362b-25')
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.929 2 DEBUG nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.929 2 DEBUG nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.930 2 DEBUG nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No VIF found with MAC fa:16:3e:a3:b0:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.931 2 INFO nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Using config drive
Oct 11 08:56:26 compute-0 nova_compute[260935]: 2025-10-11 08:56:26.963 2 DEBUG nova.storage.rbd_utils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image f699e5fd-760a-4326-b428-c34d5e922f8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:27 compute-0 exciting_robinson[327373]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:56:27 compute-0 exciting_robinson[327373]: --> relative data size: 1.0
Oct 11 08:56:27 compute-0 exciting_robinson[327373]: --> All data devices are unavailable
Oct 11 08:56:27 compute-0 systemd[1]: libpod-167b1d0ea1534e3d1c011aaf34eae9c464dd929f01244040096c9f97134d4c09.scope: Deactivated successfully.
Oct 11 08:56:27 compute-0 podman[327331]: 2025-10-11 08:56:27.044940428 +0000 UTC m=+1.470272975 container died 167b1d0ea1534e3d1c011aaf34eae9c464dd929f01244040096c9f97134d4c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_robinson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:56:27 compute-0 systemd[1]: libpod-167b1d0ea1534e3d1c011aaf34eae9c464dd929f01244040096c9f97134d4c09.scope: Consumed 1.096s CPU time.
Oct 11 08:56:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-684418d6faf14614f16a6ab67f7944c1aed97bc90e887215cf04b10670b77c54-merged.mount: Deactivated successfully.
Oct 11 08:56:27 compute-0 podman[327331]: 2025-10-11 08:56:27.097920029 +0000 UTC m=+1.523252576 container remove 167b1d0ea1534e3d1c011aaf34eae9c464dd929f01244040096c9f97134d4c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_robinson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:56:27 compute-0 systemd[1]: libpod-conmon-167b1d0ea1534e3d1c011aaf34eae9c464dd929f01244040096c9f97134d4c09.scope: Deactivated successfully.
Oct 11 08:56:27 compute-0 sudo[327181]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:27 compute-0 sudo[327511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:56:27 compute-0 sudo[327511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:27 compute-0 sudo[327511]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:27 compute-0 nova_compute[260935]: 2025-10-11 08:56:27.319 2 INFO nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Creating config drive at /var/lib/nova/instances/f699e5fd-760a-4326-b428-c34d5e922f8b/disk.config
Oct 11 08:56:27 compute-0 sudo[327536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:56:27 compute-0 sudo[327536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:27 compute-0 nova_compute[260935]: 2025-10-11 08:56:27.329 2 DEBUG oslo_concurrency.processutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f699e5fd-760a-4326-b428-c34d5e922f8b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptmwcc44m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:27 compute-0 sudo[327536]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:27 compute-0 sudo[327562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:56:27 compute-0 sudo[327562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:27 compute-0 sudo[327562]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Oct 11 08:56:27 compute-0 ceph-mon[74313]: pgmap v1574: 321 pgs: 321 active+clean; 530 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.8 MiB/s wr, 130 op/s
Oct 11 08:56:27 compute-0 ceph-mon[74313]: osdmap e219: 3 total, 3 up, 3 in
Oct 11 08:56:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/245973482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Oct 11 08:56:27 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Oct 11 08:56:27 compute-0 nova_compute[260935]: 2025-10-11 08:56:27.495 2 DEBUG oslo_concurrency.processutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f699e5fd-760a-4326-b428-c34d5e922f8b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptmwcc44m" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:27 compute-0 sudo[327589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:56:27 compute-0 nova_compute[260935]: 2025-10-11 08:56:27.523 2 DEBUG nova.storage.rbd_utils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image f699e5fd-760a-4326-b428-c34d5e922f8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:27 compute-0 sudo[327589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:27 compute-0 nova_compute[260935]: 2025-10-11 08:56:27.528 2 DEBUG oslo_concurrency.processutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f699e5fd-760a-4326-b428-c34d5e922f8b/disk.config f699e5fd-760a-4326-b428-c34d5e922f8b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:27 compute-0 nova_compute[260935]: 2025-10-11 08:56:27.724 2 DEBUG oslo_concurrency.processutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f699e5fd-760a-4326-b428-c34d5e922f8b/disk.config f699e5fd-760a-4326-b428-c34d5e922f8b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:27 compute-0 nova_compute[260935]: 2025-10-11 08:56:27.725 2 INFO nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Deleting local config drive /var/lib/nova/instances/f699e5fd-760a-4326-b428-c34d5e922f8b/disk.config because it was imported into RBD.
Oct 11 08:56:27 compute-0 kernel: tap6288362b-25: entered promiscuous mode
Oct 11 08:56:27 compute-0 NetworkManager[44960]: <info>  [1760172987.8012] manager: (tap6288362b-25): new Tun device (/org/freedesktop/NetworkManager/Devices/268)
Oct 11 08:56:27 compute-0 nova_compute[260935]: 2025-10-11 08:56:27.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:27 compute-0 ovn_controller[152945]: 2025-10-11T08:56:27Z|00566|binding|INFO|Claiming lport 6288362b-25a4-409c-ae10-4a00943f4b89 for this chassis.
Oct 11 08:56:27 compute-0 ovn_controller[152945]: 2025-10-11T08:56:27Z|00567|binding|INFO|6288362b-25a4-409c-ae10-4a00943f4b89: Claiming fa:16:3e:a3:b0:09 10.100.0.10
Oct 11 08:56:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:27.813 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:b0:09 10.100.0.10'], port_security=['fa:16:3e:a3:b0:09 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f699e5fd-760a-4326-b428-c34d5e922f8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '2', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=6288362b-25a4-409c-ae10-4a00943f4b89) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:56:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:27.815 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 6288362b-25a4-409c-ae10-4a00943f4b89 in datapath e075bdab-78c4-414f-b270-c41d1c82f498 bound to our chassis
Oct 11 08:56:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:27.825 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 08:56:27 compute-0 ovn_controller[152945]: 2025-10-11T08:56:27Z|00568|binding|INFO|Setting lport 6288362b-25a4-409c-ae10-4a00943f4b89 ovn-installed in OVS
Oct 11 08:56:27 compute-0 ovn_controller[152945]: 2025-10-11T08:56:27Z|00569|binding|INFO|Setting lport 6288362b-25a4-409c-ae10-4a00943f4b89 up in Southbound
Oct 11 08:56:27 compute-0 nova_compute[260935]: 2025-10-11 08:56:27.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:27 compute-0 systemd-udevd[327694]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:56:27 compute-0 nova_compute[260935]: 2025-10-11 08:56:27.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:27.849 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bb768634-5792-4360-9f65-d8dd8bf49602]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:27 compute-0 NetworkManager[44960]: <info>  [1760172987.8561] device (tap6288362b-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:56:27 compute-0 NetworkManager[44960]: <info>  [1760172987.8567] device (tap6288362b-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:56:27 compute-0 systemd-machined[215705]: New machine qemu-74-instance-00000042.
Oct 11 08:56:27 compute-0 systemd[1]: Started Virtual Machine qemu-74-instance-00000042.
Oct 11 08:56:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:27.897 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2afc2c8a-9e50-4775-a202-fd6773f4bd73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:27.904 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0eed47-1440-4222-8f06-47577ec64971]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:27.951 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[92f2a060-f36f-451a-b60d-ac9bd8ffaf69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:27.973 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[87a37f29-5118-4569-9460-a98e45b5afa4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 36159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327725, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:27 compute-0 podman[327708]: 2025-10-11 08:56:27.992391163 +0000 UTC m=+0.061912686 container create 9de0b5c5d6f612f8035a4644522daf759afb8607c9a14d252753a0c03b7da9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hellman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:56:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:27.997 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[993f85b7-05ea-4619-9bcc-6fa9e56c2324]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479414, 'tstamp': 479414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327731, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479419, 'tstamp': 479419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327731, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:27.999 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:28 compute-0 nova_compute[260935]: 2025-10-11 08:56:28.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:28.004 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape075bdab-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:28.005 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:28.005 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape075bdab-70, col_values=(('external_ids', {'iface-id': 'b9cf681c-9f4c-4c56-987a-55fa7aa89e1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:28 compute-0 nova_compute[260935]: 2025-10-11 08:56:28.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:28.005 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1577: 321 pgs: 321 active+clean; 610 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 15 MiB/s wr, 343 op/s
Oct 11 08:56:28 compute-0 systemd[1]: Started libpod-conmon-9de0b5c5d6f612f8035a4644522daf759afb8607c9a14d252753a0c03b7da9ca.scope.
Oct 11 08:56:28 compute-0 podman[327708]: 2025-10-11 08:56:27.967342039 +0000 UTC m=+0.036863562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:56:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:56:28 compute-0 nova_compute[260935]: 2025-10-11 08:56:28.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:28 compute-0 podman[327708]: 2025-10-11 08:56:28.095769691 +0000 UTC m=+0.165291214 container init 9de0b5c5d6f612f8035a4644522daf759afb8607c9a14d252753a0c03b7da9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:56:28 compute-0 podman[327708]: 2025-10-11 08:56:28.106056505 +0000 UTC m=+0.175578008 container start 9de0b5c5d6f612f8035a4644522daf759afb8607c9a14d252753a0c03b7da9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hellman, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 08:56:28 compute-0 podman[327708]: 2025-10-11 08:56:28.109580485 +0000 UTC m=+0.179102008 container attach 9de0b5c5d6f612f8035a4644522daf759afb8607c9a14d252753a0c03b7da9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hellman, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 08:56:28 compute-0 silly_hellman[327734]: 167 167
Oct 11 08:56:28 compute-0 systemd[1]: libpod-9de0b5c5d6f612f8035a4644522daf759afb8607c9a14d252753a0c03b7da9ca.scope: Deactivated successfully.
Oct 11 08:56:28 compute-0 podman[327708]: 2025-10-11 08:56:28.11324223 +0000 UTC m=+0.182763753 container died 9de0b5c5d6f612f8035a4644522daf759afb8607c9a14d252753a0c03b7da9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hellman, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 08:56:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-9368ee4bc009e67407704555d377fd26447a191bd5f2d8330551c680a469800a-merged.mount: Deactivated successfully.
Oct 11 08:56:28 compute-0 podman[327708]: 2025-10-11 08:56:28.151929513 +0000 UTC m=+0.221451016 container remove 9de0b5c5d6f612f8035a4644522daf759afb8607c9a14d252753a0c03b7da9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:56:28 compute-0 systemd[1]: libpod-conmon-9de0b5c5d6f612f8035a4644522daf759afb8607c9a14d252753a0c03b7da9ca.scope: Deactivated successfully.
Oct 11 08:56:28 compute-0 nova_compute[260935]: 2025-10-11 08:56:28.333 2 DEBUG nova.network.neutron [req-addc2c46-c096-4b9c-bd09-c981ea42dd0e req-c9c1332e-1261-4570-9f98-9ea155a631da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Updated VIF entry in instance network info cache for port 6288362b-25a4-409c-ae10-4a00943f4b89. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:56:28 compute-0 nova_compute[260935]: 2025-10-11 08:56:28.335 2 DEBUG nova.network.neutron [req-addc2c46-c096-4b9c-bd09-c981ea42dd0e req-c9c1332e-1261-4570-9f98-9ea155a631da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Updating instance_info_cache with network_info: [{"id": "6288362b-25a4-409c-ae10-4a00943f4b89", "address": "fa:16:3e:a3:b0:09", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6288362b-25", "ovs_interfaceid": "6288362b-25a4-409c-ae10-4a00943f4b89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:56:28 compute-0 nova_compute[260935]: 2025-10-11 08:56:28.366 2 DEBUG oslo_concurrency.lockutils [req-addc2c46-c096-4b9c-bd09-c981ea42dd0e req-c9c1332e-1261-4570-9f98-9ea155a631da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f699e5fd-760a-4326-b428-c34d5e922f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Oct 11 08:56:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Oct 11 08:56:28 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Oct 11 08:56:28 compute-0 nova_compute[260935]: 2025-10-11 08:56:28.392 2 DEBUG nova.compute.manager [req-8651c3b0-2288-4532-a67a-4d81095c7d7f req-efe5f4f7-9da6-4060-9972-8f4eb968b3b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Received event network-vif-plugged-6288362b-25a4-409c-ae10-4a00943f4b89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:28 compute-0 nova_compute[260935]: 2025-10-11 08:56:28.393 2 DEBUG oslo_concurrency.lockutils [req-8651c3b0-2288-4532-a67a-4d81095c7d7f req-efe5f4f7-9da6-4060-9972-8f4eb968b3b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:28 compute-0 nova_compute[260935]: 2025-10-11 08:56:28.393 2 DEBUG oslo_concurrency.lockutils [req-8651c3b0-2288-4532-a67a-4d81095c7d7f req-efe5f4f7-9da6-4060-9972-8f4eb968b3b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:28 compute-0 nova_compute[260935]: 2025-10-11 08:56:28.394 2 DEBUG oslo_concurrency.lockutils [req-8651c3b0-2288-4532-a67a-4d81095c7d7f req-efe5f4f7-9da6-4060-9972-8f4eb968b3b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:28 compute-0 nova_compute[260935]: 2025-10-11 08:56:28.394 2 DEBUG nova.compute.manager [req-8651c3b0-2288-4532-a67a-4d81095c7d7f req-efe5f4f7-9da6-4060-9972-8f4eb968b3b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Processing event network-vif-plugged-6288362b-25a4-409c-ae10-4a00943f4b89 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:56:28 compute-0 podman[327764]: 2025-10-11 08:56:28.416081445 +0000 UTC m=+0.044016356 container create 2ac92ab9a2f195673ab9178ef9d9d3f0a33a7a18ff35de0b93c7906c10c5f90c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:56:28 compute-0 ceph-mon[74313]: osdmap e220: 3 total, 3 up, 3 in
Oct 11 08:56:28 compute-0 ceph-mon[74313]: osdmap e221: 3 total, 3 up, 3 in
Oct 11 08:56:28 compute-0 systemd[1]: Started libpod-conmon-2ac92ab9a2f195673ab9178ef9d9d3f0a33a7a18ff35de0b93c7906c10c5f90c.scope.
Oct 11 08:56:28 compute-0 podman[327764]: 2025-10-11 08:56:28.399988236 +0000 UTC m=+0.027923177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:56:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aded5994f8a78d647cc808a93fe0c3de4b2346feeda1147a5331f66c8155a48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aded5994f8a78d647cc808a93fe0c3de4b2346feeda1147a5331f66c8155a48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aded5994f8a78d647cc808a93fe0c3de4b2346feeda1147a5331f66c8155a48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aded5994f8a78d647cc808a93fe0c3de4b2346feeda1147a5331f66c8155a48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:56:28 compute-0 podman[327764]: 2025-10-11 08:56:28.548337726 +0000 UTC m=+0.176272677 container init 2ac92ab9a2f195673ab9178ef9d9d3f0a33a7a18ff35de0b93c7906c10c5f90c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:56:28 compute-0 podman[327764]: 2025-10-11 08:56:28.563512299 +0000 UTC m=+0.191447200 container start 2ac92ab9a2f195673ab9178ef9d9d3f0a33a7a18ff35de0b93c7906c10c5f90c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:56:28 compute-0 podman[327764]: 2025-10-11 08:56:28.567651117 +0000 UTC m=+0.195586078 container attach 2ac92ab9a2f195673ab9178ef9d9d3f0a33a7a18ff35de0b93c7906c10c5f90c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:56:28 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.178 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172989.178116, f699e5fd-760a-4326-b428-c34d5e922f8b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.179 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] VM Started (Lifecycle Event)
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.181 2 DEBUG nova.compute.manager [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.184 2 DEBUG nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.190 2 INFO nova.virt.libvirt.driver [-] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Instance spawned successfully.
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.190 2 DEBUG nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.207 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.210 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.220 2 DEBUG nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.220 2 DEBUG nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.220 2 DEBUG nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.221 2 DEBUG nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.221 2 DEBUG nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.221 2 DEBUG nova.virt.libvirt.driver [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.227 2 INFO nova.virt.libvirt.driver [None req-c7177d66-320e-4aee-8e12-ffa6a3e5ce83 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Snapshot image upload complete
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.228 2 INFO nova.compute.manager [None req-c7177d66-320e-4aee-8e12-ffa6a3e5ce83 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Took 5.73 seconds to snapshot the instance on the hypervisor.
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.231 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.231 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172989.1790009, f699e5fd-760a-4326-b428-c34d5e922f8b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.232 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] VM Paused (Lifecycle Event)
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.262 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.266 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760172989.1837463, f699e5fd-760a-4326-b428-c34d5e922f8b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.266 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] VM Resumed (Lifecycle Event)
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.292 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.298 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.315 2 INFO nova.compute.manager [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Took 7.35 seconds to spawn the instance on the hypervisor.
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.315 2 DEBUG nova.compute.manager [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.326 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.376 2 INFO nova.compute.manager [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Took 8.35 seconds to build instance.
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.394 2 DEBUG oslo_concurrency.lockutils [None req-6ef25eb0-80fd-41fc-82e2-bbf2a87cabdf ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "f699e5fd-760a-4326-b428-c34d5e922f8b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
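The nova-compute lines above record the full spawn sequence for instance f699e5fd ("Took 7.35 seconds to spawn", "Took 8.35 seconds to build"). When triaging such logs, the timing lines can be pulled out mechanically; the sketch below is an illustrative parser (not part of Nova), using trimmed excerpts of the lines above as sample input:

```python
import re

# Trimmed excerpts of the nova_compute journal lines above; message
# prefixes (timestamps, request ids) are elided with "..." for brevity.
SAMPLE = """\
nova_compute[260935]: ... Took 5.73 seconds to snapshot the instance on the hypervisor.
nova_compute[260935]: ... Took 7.35 seconds to spawn the instance on the hypervisor.
nova_compute[260935]: ... Took 8.35 seconds to build instance.
"""

# Nova logs these durations in a stable "Took N seconds to <verb>" form.
PATTERN = re.compile(r"Took ([\d.]+) seconds to (snapshot|spawn|build)")

def extract_timings(text):
    """Return {action: seconds} for each 'Took N seconds to ...' line."""
    return {m.group(2): float(m.group(1)) for m in PATTERN.finditer(text)}

print(extract_timings(SAMPLE))
# {'snapshot': 5.73, 'spawn': 7.35, 'build': 8.35}
```

The spawn/build gap (8.35 s build vs. 7.35 s spawn) is the scheduler and network-setup overhead before the libvirt guest is created.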
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]: {
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:     "0": [
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:         {
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "devices": [
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "/dev/loop3"
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             ],
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "lv_name": "ceph_lv0",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "lv_size": "21470642176",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "name": "ceph_lv0",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "tags": {
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.cluster_name": "ceph",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.crush_device_class": "",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.encrypted": "0",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.osd_id": "0",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.type": "block",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.vdo": "0"
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             },
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "type": "block",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "vg_name": "ceph_vg0"
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:         }
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:     ],
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:     "1": [
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:         {
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "devices": [
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "/dev/loop4"
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             ],
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "lv_name": "ceph_lv1",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "lv_size": "21470642176",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "name": "ceph_lv1",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "tags": {
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.cluster_name": "ceph",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.crush_device_class": "",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.encrypted": "0",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.osd_id": "1",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.type": "block",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.vdo": "0"
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             },
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "type": "block",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "vg_name": "ceph_vg1"
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:         }
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:     ],
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:     "2": [
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:         {
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "devices": [
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "/dev/loop5"
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             ],
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "lv_name": "ceph_lv2",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "lv_size": "21470642176",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "name": "ceph_lv2",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "tags": {
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.cluster_name": "ceph",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.crush_device_class": "",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.encrypted": "0",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.osd_id": "2",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.type": "block",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:                 "ceph.vdo": "0"
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             },
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "type": "block",
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:             "vg_name": "ceph_vg2"
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:         }
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]:     ]
Oct 11 08:56:29 compute-0 elated_chandrasekhar[327811]: }
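The JSON block emitted by the short-lived ceph container above is `ceph-volume lvm list --format json` style output, keyed by OSD id. A minimal sketch of consuming it follows; the embedded snippet is a trimmed excerpt of the output above (field names match the log), and the parser itself is illustrative, not part of cephadm:

```python
import json

# Trimmed excerpt of the ceph-volume JSON printed above (OSDs 0 and 1 only).
CEPH_VOLUME_JSON = """
{
  "0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "lv_size": "21470642176",
         "devices": ["/dev/loop3"],
         "tags": {"ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
                  "ceph.type": "block"}}],
  "1": [{"lv_path": "/dev/ceph_vg1/ceph_lv1", "lv_size": "21470642176",
         "devices": ["/dev/loop4"],
         "tags": {"ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
                  "ceph.type": "block"}}]
}
"""

def osd_block_devices(raw):
    """Map OSD id -> (lv_path, backing devices) for 'block'-type LVs."""
    data = json.loads(raw)
    return {
        int(osd_id): (lv["lv_path"], lv["devices"])
        for osd_id, lvs in data.items()
        for lv in lvs
        if lv["tags"].get("ceph.type") == "block"
    }

print(osd_block_devices(CEPH_VOLUME_JSON))
```

Each top-level key is an OSD id; each entry carries the LV path, backing physical devices (loop devices here, since this is a test deployment), and the `ceph.*` LV tags that cephadm uses to rediscover OSDs.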
Oct 11 08:56:29 compute-0 systemd[1]: libpod-2ac92ab9a2f195673ab9178ef9d9d3f0a33a7a18ff35de0b93c7906c10c5f90c.scope: Deactivated successfully.
Oct 11 08:56:29 compute-0 ceph-mon[74313]: pgmap v1577: 321 pgs: 321 active+clean; 610 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 15 MiB/s wr, 343 op/s
Oct 11 08:56:29 compute-0 podman[327824]: 2025-10-11 08:56:29.558088269 +0000 UTC m=+0.062448592 container died 2ac92ab9a2f195673ab9178ef9d9d3f0a33a7a18ff35de0b93c7906c10c5f90c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.591 2 DEBUG nova.compute.manager [None req-c7177d66-320e-4aee-8e12-ffa6a3e5ce83 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.591 2 DEBUG nova.compute.manager [None req-c7177d66-320e-4aee-8e12-ffa6a3e5ce83 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Oct 11 08:56:29 compute-0 nova_compute[260935]: 2025-10-11 08:56:29.592 2 DEBUG nova.compute.manager [None req-c7177d66-320e-4aee-8e12-ffa6a3e5ce83 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Deleting image 6885ce04-2718-45ad-80a5-64bd9611a1ad _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
Oct 11 08:56:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8aded5994f8a78d647cc808a93fe0c3de4b2346feeda1147a5331f66c8155a48-merged.mount: Deactivated successfully.
Oct 11 08:56:29 compute-0 podman[327824]: 2025-10-11 08:56:29.661641021 +0000 UTC m=+0.166001334 container remove 2ac92ab9a2f195673ab9178ef9d9d3f0a33a7a18ff35de0b93c7906c10c5f90c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 08:56:29 compute-0 systemd[1]: libpod-conmon-2ac92ab9a2f195673ab9178ef9d9d3f0a33a7a18ff35de0b93c7906c10c5f90c.scope: Deactivated successfully.
Oct 11 08:56:29 compute-0 sudo[327589]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:29 compute-0 sudo[327839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:56:29 compute-0 sudo[327839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:29 compute-0 sudo[327839]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:29 compute-0 sudo[327864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:56:29 compute-0 sudo[327864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:29 compute-0 sudo[327864]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 610 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 9.6 MiB/s wr, 218 op/s
Oct 11 08:56:30 compute-0 sudo[327889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:56:30 compute-0 sudo[327889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:30 compute-0 sudo[327889]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:30 compute-0 sudo[327914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:56:30 compute-0 sudo[327914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Oct 11 08:56:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Oct 11 08:56:30 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
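The recurring ceph-mon/ceph-mgr `pgmap` lines above summarize placement-group health in a fixed textual form. A hypothetical one-liner check on such a line (the regex and sample string are taken from the pgmap line above, but this check is a sketch, not a Ceph tool):

```python
import re

# The pgmap summary line from the ceph-mon log above, message part only.
LINE = ("pgmap v1577: 321 pgs: 321 active+clean; 610 MiB data, 849 MiB used, "
        "59 GiB / 60 GiB avail; 9.6 MiB/s rd, 15 MiB/s wr, 343 op/s")

# Capture pgmap version, total PG count, and the active+clean count.
PG = re.compile(r"pgmap v(\d+): (\d+) pgs: (\d+) active\+clean")

m = PG.search(LINE)
version, total, clean = map(int, m.groups())
# All PGs active+clean means the cluster is fully healthy at this epoch.
print(f"pgmap v{version}: {clean}/{total} PGs active+clean")
# pgmap v1577: 321/321 PGs active+clean
```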
Oct 11 08:56:30 compute-0 nova_compute[260935]: 2025-10-11 08:56:30.545 2 DEBUG nova.compute.manager [req-53a77923-9eab-4d76-a84e-af110408bc0a req-1553ff71-a05d-4398-abe1-13a928e6c5db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Received event network-vif-plugged-6288362b-25a4-409c-ae10-4a00943f4b89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:30 compute-0 nova_compute[260935]: 2025-10-11 08:56:30.547 2 DEBUG oslo_concurrency.lockutils [req-53a77923-9eab-4d76-a84e-af110408bc0a req-1553ff71-a05d-4398-abe1-13a928e6c5db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:30 compute-0 nova_compute[260935]: 2025-10-11 08:56:30.548 2 DEBUG oslo_concurrency.lockutils [req-53a77923-9eab-4d76-a84e-af110408bc0a req-1553ff71-a05d-4398-abe1-13a928e6c5db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:30 compute-0 nova_compute[260935]: 2025-10-11 08:56:30.548 2 DEBUG oslo_concurrency.lockutils [req-53a77923-9eab-4d76-a84e-af110408bc0a req-1553ff71-a05d-4398-abe1-13a928e6c5db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:30 compute-0 nova_compute[260935]: 2025-10-11 08:56:30.549 2 DEBUG nova.compute.manager [req-53a77923-9eab-4d76-a84e-af110408bc0a req-1553ff71-a05d-4398-abe1-13a928e6c5db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] No waiting events found dispatching network-vif-plugged-6288362b-25a4-409c-ae10-4a00943f4b89 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:30 compute-0 nova_compute[260935]: 2025-10-11 08:56:30.549 2 WARNING nova.compute.manager [req-53a77923-9eab-4d76-a84e-af110408bc0a req-1553ff71-a05d-4398-abe1-13a928e6c5db e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Received unexpected event network-vif-plugged-6288362b-25a4-409c-ae10-4a00943f4b89 for instance with vm_state active and task_state None.
Oct 11 08:56:30 compute-0 podman[327977]: 2025-10-11 08:56:30.624092186 +0000 UTC m=+0.058174940 container create c48d2ed1368929c2ed3418376be933961abafe8808e62f009bcdf68735a1647a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 08:56:30 compute-0 systemd[1]: Started libpod-conmon-c48d2ed1368929c2ed3418376be933961abafe8808e62f009bcdf68735a1647a.scope.
Oct 11 08:56:30 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:56:30 compute-0 podman[327977]: 2025-10-11 08:56:30.597392454 +0000 UTC m=+0.031475298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:56:30 compute-0 podman[327977]: 2025-10-11 08:56:30.709713897 +0000 UTC m=+0.143796671 container init c48d2ed1368929c2ed3418376be933961abafe8808e62f009bcdf68735a1647a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:56:30 compute-0 podman[327977]: 2025-10-11 08:56:30.718161528 +0000 UTC m=+0.152244282 container start c48d2ed1368929c2ed3418376be933961abafe8808e62f009bcdf68735a1647a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:56:30 compute-0 podman[327977]: 2025-10-11 08:56:30.720937507 +0000 UTC m=+0.155020281 container attach c48d2ed1368929c2ed3418376be933961abafe8808e62f009bcdf68735a1647a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 08:56:30 compute-0 zen_carson[327995]: 167 167
Oct 11 08:56:30 compute-0 systemd[1]: libpod-c48d2ed1368929c2ed3418376be933961abafe8808e62f009bcdf68735a1647a.scope: Deactivated successfully.
Oct 11 08:56:30 compute-0 podman[327977]: 2025-10-11 08:56:30.729160862 +0000 UTC m=+0.163243656 container died c48d2ed1368929c2ed3418376be933961abafe8808e62f009bcdf68735a1647a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 08:56:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-820383f30fce3607ef47d2282a132983d2f1de670d3c2d7b329cd3798715b5cd-merged.mount: Deactivated successfully.
Oct 11 08:56:30 compute-0 podman[327994]: 2025-10-11 08:56:30.787942878 +0000 UTC m=+0.105279673 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 11 08:56:30 compute-0 podman[327977]: 2025-10-11 08:56:30.792471397 +0000 UTC m=+0.226554161 container remove c48d2ed1368929c2ed3418376be933961abafe8808e62f009bcdf68735a1647a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_carson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:56:30 compute-0 systemd[1]: libpod-conmon-c48d2ed1368929c2ed3418376be933961abafe8808e62f009bcdf68735a1647a.scope: Deactivated successfully.
Oct 11 08:56:30 compute-0 nova_compute[260935]: 2025-10-11 08:56:30.916 2 DEBUG nova.virt.libvirt.driver [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 08:56:31 compute-0 podman[328036]: 2025-10-11 08:56:31.075009042 +0000 UTC m=+0.084348836 container create 12e02bfee1ea93b33fe5630253af29a29600471750dda8982c831b2a5c9f4fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_matsumoto, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 08:56:31 compute-0 podman[328036]: 2025-10-11 08:56:31.037865493 +0000 UTC m=+0.047205327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:56:31 compute-0 systemd[1]: Started libpod-conmon-12e02bfee1ea93b33fe5630253af29a29600471750dda8982c831b2a5c9f4fb2.scope.
Oct 11 08:56:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de368273cf4fbcb3374cb2411f1cad0f84c4b62c688ae24cdcda57cfcef9a8d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de368273cf4fbcb3374cb2411f1cad0f84c4b62c688ae24cdcda57cfcef9a8d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de368273cf4fbcb3374cb2411f1cad0f84c4b62c688ae24cdcda57cfcef9a8d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de368273cf4fbcb3374cb2411f1cad0f84c4b62c688ae24cdcda57cfcef9a8d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:56:31 compute-0 podman[328036]: 2025-10-11 08:56:31.219618946 +0000 UTC m=+0.228958750 container init 12e02bfee1ea93b33fe5630253af29a29600471750dda8982c831b2a5c9f4fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:56:31 compute-0 podman[328036]: 2025-10-11 08:56:31.233053109 +0000 UTC m=+0.242392873 container start 12e02bfee1ea93b33fe5630253af29a29600471750dda8982c831b2a5c9f4fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_matsumoto, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:56:31 compute-0 podman[328036]: 2025-10-11 08:56:31.23625097 +0000 UTC m=+0.245590734 container attach 12e02bfee1ea93b33fe5630253af29a29600471750dda8982c831b2a5c9f4fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:56:31 compute-0 ceph-mon[74313]: pgmap v1579: 321 pgs: 321 active+clean; 610 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 9.6 MiB/s wr, 218 op/s
Oct 11 08:56:31 compute-0 ceph-mon[74313]: osdmap e222: 3 total, 3 up, 3 in
Oct 11 08:56:31 compute-0 nova_compute[260935]: 2025-10-11 08:56:31.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 321 active+clean; 610 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 8.3 MiB/s wr, 189 op/s
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.193 2 DEBUG oslo_concurrency.lockutils [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "f699e5fd-760a-4326-b428-c34d5e922f8b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.194 2 DEBUG oslo_concurrency.lockutils [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "f699e5fd-760a-4326-b428-c34d5e922f8b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.194 2 DEBUG oslo_concurrency.lockutils [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.195 2 DEBUG oslo_concurrency.lockutils [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.196 2 DEBUG oslo_concurrency.lockutils [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.198 2 INFO nova.compute.manager [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Terminating instance
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.200 2 DEBUG nova.compute.manager [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:56:32 compute-0 kernel: tap6288362b-25 (unregistering): left promiscuous mode
Oct 11 08:56:32 compute-0 NetworkManager[44960]: <info>  [1760172992.2637] device (tap6288362b-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:56:32 compute-0 ovn_controller[152945]: 2025-10-11T08:56:32Z|00570|binding|INFO|Releasing lport 6288362b-25a4-409c-ae10-4a00943f4b89 from this chassis (sb_readonly=0)
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:32 compute-0 ovn_controller[152945]: 2025-10-11T08:56:32Z|00571|binding|INFO|Setting lport 6288362b-25a4-409c-ae10-4a00943f4b89 down in Southbound
Oct 11 08:56:32 compute-0 ovn_controller[152945]: 2025-10-11T08:56:32Z|00572|binding|INFO|Removing iface tap6288362b-25 ovn-installed in OVS
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:32.299 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:b0:09 10.100.0.10'], port_security=['fa:16:3e:a3:b0:09 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f699e5fd-760a-4326-b428-c34d5e922f8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '4', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=6288362b-25a4-409c-ae10-4a00943f4b89) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:56:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:32.302 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 6288362b-25a4-409c-ae10-4a00943f4b89 in datapath e075bdab-78c4-414f-b270-c41d1c82f498 unbound from our chassis
Oct 11 08:56:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:32.306 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:32.336 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d0caff-7950-4297-9ef3-c70f428874ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:32 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000042.scope: Deactivated successfully.
Oct 11 08:56:32 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000042.scope: Consumed 4.077s CPU time.
Oct 11 08:56:32 compute-0 systemd-machined[215705]: Machine qemu-74-instance-00000042 terminated.
Oct 11 08:56:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:32.380 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[17d55213-8fc8-4b74-8912-df424c97b008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:32.386 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f013cdbf-48ce-4c0f-b9e2-11faa43d0382]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]: {
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:         "osd_id": 2,
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:         "type": "bluestore"
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:     },
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:         "osd_id": 0,
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:         "type": "bluestore"
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:     },
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:         "osd_id": 1,
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:         "type": "bluestore"
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]:     }
Oct 11 08:56:32 compute-0 youthful_matsumoto[328054]: }
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:32.445 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[898f0087-c4b4-4cff-aa08-6179c7282add]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.450 2 INFO nova.virt.libvirt.driver [-] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Instance destroyed successfully.
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.451 2 DEBUG nova.objects.instance [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'resources' on Instance uuid f699e5fd-760a-4326-b428-c34d5e922f8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.467 2 DEBUG nova.virt.libvirt.vif [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:56:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-584420125',display_name='tempest-ServersTestJSON-server-584420125',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-584420125',id=66,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAk/MD4FoWprvRA2zmWxJheHU1Q6pbrfGqfP9a3/89xTMHYKqik/bQVNkLzZIS3lp+y8ZWTBvY2vCVH0mlXCADLAZoUmTQcydsCtTk6U8YJ4oWLtZkp2VZ/VYWbcHdLrQ==',key_name='tempest-key-882655236',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:56:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-h01orfad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:56:29Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=f699e5fd-760a-4326-b428-c34d5e922f8b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6288362b-25a4-409c-ae10-4a00943f4b89", "address": "fa:16:3e:a3:b0:09", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6288362b-25", "ovs_interfaceid": "6288362b-25a4-409c-ae10-4a00943f4b89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.468 2 DEBUG nova.network.os_vif_util [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "6288362b-25a4-409c-ae10-4a00943f4b89", "address": "fa:16:3e:a3:b0:09", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6288362b-25", "ovs_interfaceid": "6288362b-25a4-409c-ae10-4a00943f4b89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.469 2 DEBUG nova.network.os_vif_util [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:b0:09,bridge_name='br-int',has_traffic_filtering=True,id=6288362b-25a4-409c-ae10-4a00943f4b89,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6288362b-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.470 2 DEBUG os_vif [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:b0:09,bridge_name='br-int',has_traffic_filtering=True,id=6288362b-25a4-409c-ae10-4a00943f4b89,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6288362b-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.472 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6288362b-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:32 compute-0 systemd[1]: libpod-12e02bfee1ea93b33fe5630253af29a29600471750dda8982c831b2a5c9f4fb2.scope: Deactivated successfully.
Oct 11 08:56:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:32.477 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf28dfb-44bb-47f8-861b-b6fa963ac0cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 36159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328107, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:32 compute-0 systemd[1]: libpod-12e02bfee1ea93b33fe5630253af29a29600471750dda8982c831b2a5c9f4fb2.scope: Consumed 1.225s CPU time.
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:56:32 compute-0 podman[328036]: 2025-10-11 08:56:32.479642955 +0000 UTC m=+1.488982709 container died 12e02bfee1ea93b33fe5630253af29a29600471750dda8982c831b2a5c9f4fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.487 2 INFO os_vif [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:b0:09,bridge_name='br-int',has_traffic_filtering=True,id=6288362b-25a4-409c-ae10-4a00943f4b89,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6288362b-25')
Oct 11 08:56:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:32.510 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb72e6b-15ea-42ea-8121-9afb70aa4324]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479414, 'tstamp': 479414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328110, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479419, 'tstamp': 479419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328110, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:32.512 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:32.516 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape075bdab-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:32.516 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:32.516 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape075bdab-70, col_values=(('external_ids', {'iface-id': 'b9cf681c-9f4c-4c56-987a-55fa7aa89e1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:32.517 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-de368273cf4fbcb3374cb2411f1cad0f84c4b62c688ae24cdcda57cfcef9a8d4-merged.mount: Deactivated successfully.
Oct 11 08:56:32 compute-0 podman[328036]: 2025-10-11 08:56:32.557689611 +0000 UTC m=+1.567029375 container remove 12e02bfee1ea93b33fe5630253af29a29600471750dda8982c831b2a5c9f4fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_matsumoto, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 08:56:32 compute-0 systemd[1]: libpod-conmon-12e02bfee1ea93b33fe5630253af29a29600471750dda8982c831b2a5c9f4fb2.scope: Deactivated successfully.
Oct 11 08:56:32 compute-0 sudo[327914]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:56:32 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:56:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:56:32 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:56:32 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 0d67b0a7-a093-4d3a-9207-23ba0665ca8d does not exist
Oct 11 08:56:32 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 071a4165-b353-49fc-ab7e-5bb9481963e1 does not exist
Oct 11 08:56:32 compute-0 sudo[328140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:56:32 compute-0 sudo[328140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:32 compute-0 sudo[328140]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:32 compute-0 sudo[328165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:56:32 compute-0 sudo[328165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:56:32 compute-0 sudo[328165]: pam_unix(sudo:session): session closed for user root
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.965 2 INFO nova.virt.libvirt.driver [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Deleting instance files /var/lib/nova/instances/f699e5fd-760a-4326-b428-c34d5e922f8b_del
Oct 11 08:56:32 compute-0 nova_compute[260935]: 2025-10-11 08:56:32.966 2 INFO nova.virt.libvirt.driver [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Deletion of /var/lib/nova/instances/f699e5fd-760a-4326-b428-c34d5e922f8b_del complete
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.019 2 INFO nova.compute.manager [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.020 2 DEBUG oslo.service.loopingcall [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.021 2 DEBUG nova.compute.manager [-] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.021 2 DEBUG nova.network.neutron [-] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.192 2 DEBUG nova.compute.manager [req-5f5f1e7c-47b7-49d8-98e3-f9fdd5d2d3b4 req-52fb5ade-047b-40f6-b6fc-543a0bef250c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Received event network-vif-unplugged-6288362b-25a4-409c-ae10-4a00943f4b89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.192 2 DEBUG oslo_concurrency.lockutils [req-5f5f1e7c-47b7-49d8-98e3-f9fdd5d2d3b4 req-52fb5ade-047b-40f6-b6fc-543a0bef250c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.193 2 DEBUG oslo_concurrency.lockutils [req-5f5f1e7c-47b7-49d8-98e3-f9fdd5d2d3b4 req-52fb5ade-047b-40f6-b6fc-543a0bef250c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.193 2 DEBUG oslo_concurrency.lockutils [req-5f5f1e7c-47b7-49d8-98e3-f9fdd5d2d3b4 req-52fb5ade-047b-40f6-b6fc-543a0bef250c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.193 2 DEBUG nova.compute.manager [req-5f5f1e7c-47b7-49d8-98e3-f9fdd5d2d3b4 req-52fb5ade-047b-40f6-b6fc-543a0bef250c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] No waiting events found dispatching network-vif-unplugged-6288362b-25a4-409c-ae10-4a00943f4b89 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.193 2 DEBUG nova.compute.manager [req-5f5f1e7c-47b7-49d8-98e3-f9fdd5d2d3b4 req-52fb5ade-047b-40f6-b6fc-543a0bef250c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Received event network-vif-unplugged-6288362b-25a4-409c-ae10-4a00943f4b89 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:56:33 compute-0 kernel: tap37653b55-c0 (unregistering): left promiscuous mode
Oct 11 08:56:33 compute-0 NetworkManager[44960]: <info>  [1760172993.2552] device (tap37653b55-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:33 compute-0 ovn_controller[152945]: 2025-10-11T08:56:33Z|00573|binding|INFO|Releasing lport 37653b55-c083-4108-a42f-4bbe7778f058 from this chassis (sb_readonly=0)
Oct 11 08:56:33 compute-0 ovn_controller[152945]: 2025-10-11T08:56:33Z|00574|binding|INFO|Setting lport 37653b55-c083-4108-a42f-4bbe7778f058 down in Southbound
Oct 11 08:56:33 compute-0 ovn_controller[152945]: 2025-10-11T08:56:33Z|00575|binding|INFO|Removing iface tap37653b55-c0 ovn-installed in OVS
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:33.278 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:88:02 10.100.0.9'], port_security=['fa:16:3e:62:88:02 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '62c9b5d7-f22e-4738-b2e6-7c53fcb968ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93dd4902ce324862a38006da8e06503a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b773900e-3df7-4cb6-b9b0-3d240ff499b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69794a2f-48ab-4a0d-8725-f4a7f57172dd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=37653b55-c083-4108-a42f-4bbe7778f058) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:56:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:33.279 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 37653b55-c083-4108-a42f-4bbe7778f058 in datapath 2cb96d57-a5e9-4b38-b10e-68187a5bf82f unbound from our chassis
Oct 11 08:56:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:33.281 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:56:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:33.284 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3afaff-c222-491f-bf18-210ca565bc41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:33.285 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f namespace which is not needed anymore
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:33 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000041.scope: Deactivated successfully.
Oct 11 08:56:33 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000041.scope: Consumed 14.782s CPU time.
Oct 11 08:56:33 compute-0 systemd-machined[215705]: Machine qemu-73-instance-00000041 terminated.
Oct 11 08:56:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:56:33 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[326467]: [NOTICE]   (326471) : haproxy version is 2.8.14-c23fe91
Oct 11 08:56:33 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[326467]: [NOTICE]   (326471) : path to executable is /usr/sbin/haproxy
Oct 11 08:56:33 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[326467]: [WARNING]  (326471) : Exiting Master process...
Oct 11 08:56:33 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[326467]: [ALERT]    (326471) : Current worker (326474) exited with code 143 (Terminated)
Oct 11 08:56:33 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[326467]: [WARNING]  (326471) : All workers exited. Exiting... (0)
Oct 11 08:56:33 compute-0 systemd[1]: libpod-23f892a3d86984389f393cafe8ce70113b1bd7117fb9e54465eabd456d808c9b.scope: Deactivated successfully.
Oct 11 08:56:33 compute-0 podman[328211]: 2025-10-11 08:56:33.499167087 +0000 UTC m=+0.062957977 container died 23f892a3d86984389f393cafe8ce70113b1bd7117fb9e54465eabd456d808c9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:33 compute-0 ceph-mon[74313]: pgmap v1581: 321 pgs: 321 active+clean; 610 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 8.3 MiB/s wr, 189 op/s
Oct 11 08:56:33 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:56:33 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23f892a3d86984389f393cafe8ce70113b1bd7117fb9e54465eabd456d808c9b-userdata-shm.mount: Deactivated successfully.
Oct 11 08:56:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cd4ec95262ad04bcf8d9d7aeb62e59cea4fe95d7952747cb0697c3295b3f6e2-merged.mount: Deactivated successfully.
Oct 11 08:56:33 compute-0 podman[328211]: 2025-10-11 08:56:33.561980548 +0000 UTC m=+0.125771478 container cleanup 23f892a3d86984389f393cafe8ce70113b1bd7117fb9e54465eabd456d808c9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:56:33 compute-0 systemd[1]: libpod-conmon-23f892a3d86984389f393cafe8ce70113b1bd7117fb9e54465eabd456d808c9b.scope: Deactivated successfully.
Oct 11 08:56:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Oct 11 08:56:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Oct 11 08:56:33 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Oct 11 08:56:33 compute-0 podman[328247]: 2025-10-11 08:56:33.655638628 +0000 UTC m=+0.065837648 container remove 23f892a3d86984389f393cafe8ce70113b1bd7117fb9e54465eabd456d808c9b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 08:56:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:33.661 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6534d3-29e7-45a1-81bc-d922b12722f1]: (4, ('Sat Oct 11 08:56:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f (23f892a3d86984389f393cafe8ce70113b1bd7117fb9e54465eabd456d808c9b)\n23f892a3d86984389f393cafe8ce70113b1bd7117fb9e54465eabd456d808c9b\nSat Oct 11 08:56:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f (23f892a3d86984389f393cafe8ce70113b1bd7117fb9e54465eabd456d808c9b)\n23f892a3d86984389f393cafe8ce70113b1bd7117fb9e54465eabd456d808c9b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:33.663 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5cf215-94db-4693-9468-42421300da21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:33.665 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb96d57-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:33 compute-0 kernel: tap2cb96d57-a0: left promiscuous mode
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:33.695 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5ade0a1f-a885-448d-add2-c5f1079e2121]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:33.730 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[26a3e446-1b95-41cf-a431-144edec582cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:33.732 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d446f2ed-e9a3-461d-90c8-73a638189086]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.742 2 DEBUG nova.network.neutron [-] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:33.763 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[700e966e-bb5d-4291-9520-8c398a046e4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481878, 'reachable_time': 33021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328269, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.767 2 INFO nova.compute.manager [-] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Took 0.75 seconds to deallocate network for instance.
Oct 11 08:56:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d2cb96d57\x2da5e9\x2d4b38\x2db10e\x2d68187a5bf82f.mount: Deactivated successfully.
Oct 11 08:56:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:33.772 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:56:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:33.773 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[58f7066a-a0ae-41a8-a789-ff27bb05cca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.814 2 DEBUG nova.compute.manager [req-6b44bb53-86f7-4483-b8a3-08c03ee69f7d req-ae03bf64-47b4-4317-bb40-d83d33f8fe43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Received event network-vif-unplugged-37653b55-c083-4108-a42f-4bbe7778f058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.814 2 DEBUG oslo_concurrency.lockutils [req-6b44bb53-86f7-4483-b8a3-08c03ee69f7d req-ae03bf64-47b4-4317-bb40-d83d33f8fe43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.815 2 DEBUG oslo_concurrency.lockutils [req-6b44bb53-86f7-4483-b8a3-08c03ee69f7d req-ae03bf64-47b4-4317-bb40-d83d33f8fe43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.815 2 DEBUG oslo_concurrency.lockutils [req-6b44bb53-86f7-4483-b8a3-08c03ee69f7d req-ae03bf64-47b4-4317-bb40-d83d33f8fe43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.815 2 DEBUG nova.compute.manager [req-6b44bb53-86f7-4483-b8a3-08c03ee69f7d req-ae03bf64-47b4-4317-bb40-d83d33f8fe43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] No waiting events found dispatching network-vif-unplugged-37653b55-c083-4108-a42f-4bbe7778f058 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.816 2 WARNING nova.compute.manager [req-6b44bb53-86f7-4483-b8a3-08c03ee69f7d req-ae03bf64-47b4-4317-bb40-d83d33f8fe43 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Received unexpected event network-vif-unplugged-37653b55-c083-4108-a42f-4bbe7778f058 for instance with vm_state active and task_state shelving.
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.818 2 DEBUG oslo_concurrency.lockutils [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.818 2 DEBUG oslo_concurrency.lockutils [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.937 2 INFO nova.virt.libvirt.driver [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Instance shutdown successfully after 13 seconds.
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.948 2 INFO nova.virt.libvirt.driver [-] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Instance destroyed successfully.
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.949 2 DEBUG nova.objects.instance [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'numa_topology' on Instance uuid 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:33 compute-0 nova_compute[260935]: 2025-10-11 08:56:33.982 2 DEBUG oslo_concurrency.processutils [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 321 active+clean; 557 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 368 op/s
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.200 2 INFO nova.virt.libvirt.driver [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Beginning cold snapshot process
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.401 2 DEBUG nova.virt.libvirt.imagebackend [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 08:56:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:56:34 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1947135093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.463 2 DEBUG oslo_concurrency.processutils [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.474 2 DEBUG nova.compute.provider_tree [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.495 2 DEBUG nova.scheduler.client.report [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.518 2 DEBUG oslo_concurrency.lockutils [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.541 2 INFO nova.scheduler.client.report [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Deleted allocations for instance f699e5fd-760a-4326-b428-c34d5e922f8b
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.621 2 DEBUG oslo_concurrency.lockutils [None req-6e09fb2d-44f8-4565-b6be-b9c0c01da4ac ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "f699e5fd-760a-4326-b428-c34d5e922f8b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Oct 11 08:56:34 compute-0 ceph-mon[74313]: osdmap e223: 3 total, 3 up, 3 in
Oct 11 08:56:34 compute-0 ceph-mon[74313]: pgmap v1583: 321 pgs: 321 active+clean; 557 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 368 op/s
Oct 11 08:56:34 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1947135093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.651 2 DEBUG nova.storage.rbd_utils [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] creating snapshot(4b9519edbce24b4ba1acc58fb46dca99) on rbd image(62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:56:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Oct 11 08:56:34 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.749 2 DEBUG oslo_concurrency.lockutils [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.750 2 DEBUG oslo_concurrency.lockutils [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.750 2 DEBUG oslo_concurrency.lockutils [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.751 2 DEBUG oslo_concurrency.lockutils [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.751 2 DEBUG oslo_concurrency.lockutils [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.753 2 INFO nova.compute.manager [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Terminating instance
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.755 2 DEBUG nova.compute.manager [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:56:34 compute-0 kernel: tap33b4a30b-13 (unregistering): left promiscuous mode
Oct 11 08:56:34 compute-0 NetworkManager[44960]: <info>  [1760172994.8159] device (tap33b4a30b-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:56:34 compute-0 ovn_controller[152945]: 2025-10-11T08:56:34Z|00576|binding|INFO|Releasing lport 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e from this chassis (sb_readonly=0)
Oct 11 08:56:34 compute-0 ovn_controller[152945]: 2025-10-11T08:56:34Z|00577|binding|INFO|Setting lport 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e down in Southbound
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:34 compute-0 ovn_controller[152945]: 2025-10-11T08:56:34Z|00578|binding|INFO|Removing iface tap33b4a30b-13 ovn-installed in OVS
Oct 11 08:56:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:34.850 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:c3:49 10.100.0.13'], port_security=['fa:16:3e:ef:c3:49 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3915cf40-bdd7-4fe8-8311-834ff26aaf9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a6a3cc2a54f4a9bafcdc1304f07944b', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b3f3385e-a444-4723-a65a-0a3291b3a862 bf239c36-0899-4f19-8153-bf756139c1b8 e3ab43aa-8c9d-4578-afa6-1b85bfb9e682', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b76a516-f507-4a4f-aa31-3047cedf7049, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=33b4a30b-13cb-4c3b-a6fb-df71a1ce760e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:56:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:34.853 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 33b4a30b-13cb-4c3b-a6fb-df71a1ce760e in datapath 338aeaf8-43d5-4292-a8fa-8952dd3c508b unbound from our chassis
Oct 11 08:56:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:34.856 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 338aeaf8-43d5-4292-a8fa-8952dd3c508b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:56:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:34.857 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ad364f5c-dadf-4a60-8701-56acf8afe2eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:34.858 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b namespace which is not needed anymore
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:34 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Oct 11 08:56:34 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003b.scope: Consumed 15.378s CPU time.
Oct 11 08:56:34 compute-0 systemd-machined[215705]: Machine qemu-66-instance-0000003b terminated.
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.998 2 INFO nova.virt.libvirt.driver [-] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Instance destroyed successfully.
Oct 11 08:56:34 compute-0 nova_compute[260935]: 2025-10-11 08:56:34.999 2 DEBUG nova.objects.instance [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lazy-loading 'resources' on Instance uuid 3915cf40-bdd7-4fe8-8311-834ff26aaf9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:35 compute-0 neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b[323338]: [NOTICE]   (323342) : haproxy version is 2.8.14-c23fe91
Oct 11 08:56:35 compute-0 neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b[323338]: [NOTICE]   (323342) : path to executable is /usr/sbin/haproxy
Oct 11 08:56:35 compute-0 neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b[323338]: [WARNING]  (323342) : Exiting Master process...
Oct 11 08:56:35 compute-0 neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b[323338]: [WARNING]  (323342) : Exiting Master process...
Oct 11 08:56:35 compute-0 neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b[323338]: [ALERT]    (323342) : Current worker (323351) exited with code 143 (Terminated)
Oct 11 08:56:35 compute-0 neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b[323338]: [WARNING]  (323342) : All workers exited. Exiting... (0)
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.038 2 DEBUG nova.virt.libvirt.vif [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:55:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-604807773',display_name='tempest-SecurityGroupsTestJSON-server-604807773',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-604807773',id=59,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:55:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3a6a3cc2a54f4a9bafcdc1304f07944b',ramdisk_id='',reservation_id='r-zmgd5yiy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-2086563292',owner_user_name='tempest-SecurityGroupsTestJSON-2086563292-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:55:34Z,user_data=None,user_id='a04e2908f5a54c8f98bee8d0faf3e658',uuid=3915cf40-bdd7-4fe8-8311-834ff26aaf9c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.039 2 DEBUG nova.network.os_vif_util [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converting VIF {"id": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "address": "fa:16:3e:ef:c3:49", "network": {"id": "338aeaf8-43d5-4292-a8fa-8952dd3c508b", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-293890957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a6a3cc2a54f4a9bafcdc1304f07944b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33b4a30b-13", "ovs_interfaceid": "33b4a30b-13cb-4c3b-a6fb-df71a1ce760e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.040 2 DEBUG nova.network.os_vif_util [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ef:c3:49,bridge_name='br-int',has_traffic_filtering=True,id=33b4a30b-13cb-4c3b-a6fb-df71a1ce760e,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33b4a30b-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.040 2 DEBUG os_vif [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:c3:49,bridge_name='br-int',has_traffic_filtering=True,id=33b4a30b-13cb-4c3b-a6fb-df71a1ce760e,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33b4a30b-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:56:35 compute-0 systemd[1]: libpod-32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565.scope: Deactivated successfully.
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.043 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33b4a30b-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:56:35 compute-0 podman[328367]: 2025-10-11 08:56:35.052799277 +0000 UTC m=+0.064741947 container died 32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.056 2 INFO os_vif [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:c3:49,bridge_name='br-int',has_traffic_filtering=True,id=33b4a30b-13cb-4c3b-a6fb-df71a1ce760e,network=Network(338aeaf8-43d5-4292-a8fa-8952dd3c508b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33b4a30b-13')
Oct 11 08:56:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565-userdata-shm.mount: Deactivated successfully.
Oct 11 08:56:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-68996452ee67c233085e6e9ba704114e7118cced28fee3c4a5f9d0d8650bf5d5-merged.mount: Deactivated successfully.
Oct 11 08:56:35 compute-0 podman[328367]: 2025-10-11 08:56:35.10691927 +0000 UTC m=+0.118861910 container cleanup 32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 08:56:35 compute-0 systemd[1]: libpod-conmon-32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565.scope: Deactivated successfully.
Oct 11 08:56:35 compute-0 podman[328427]: 2025-10-11 08:56:35.193016555 +0000 UTC m=+0.053680021 container remove 32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:56:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:35.205 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4960fc82-6ef3-43d2-a89e-29d5b58fe281]: (4, ('Sat Oct 11 08:56:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b (32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565)\n32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565\nSat Oct 11 08:56:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b (32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565)\n32436912e067a4fb7ef37686671c1ef8b53abf62640fd4a0660436ebf6897565\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:35.209 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[531ad93f-3bff-40a0-a0c7-835923e42a50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:35.210 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap338aeaf8-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:35 compute-0 kernel: tap338aeaf8-40: left promiscuous mode
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:35.247 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e43316-fac8-4196-9964-2808befee1fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:35.281 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dd19927d-6af1-4aa2-85d3-7824b5883f9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:35.283 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[80d4279d-cc02-4ad0-8170-007161a3e17d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:35.301 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e751b11d-439c-40e8-9846-e26bdec21fbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477769, 'reachable_time': 28852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328445, 'error': None, 'target': 'ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:35 compute-0 systemd[1]: run-netns-ovnmeta\x2d338aeaf8\x2d43d5\x2d4292\x2da8fa\x2d8952dd3c508b.mount: Deactivated successfully.
Oct 11 08:56:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:35.306 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-338aeaf8-43d5-4292-a8fa-8952dd3c508b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:56:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:35.306 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[5561fcbf-1825-4f68-aa15-1b3725f24bb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.495 2 INFO nova.virt.libvirt.driver [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Deleting instance files /var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c_del
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.497 2 INFO nova.virt.libvirt.driver [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Deletion of /var/lib/nova/instances/3915cf40-bdd7-4fe8-8311-834ff26aaf9c_del complete
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.564 2 INFO nova.compute.manager [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.565 2 DEBUG oslo.service.loopingcall [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.565 2 DEBUG nova.compute.manager [-] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.565 2 DEBUG nova.network.neutron [-] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.610 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172980.610241, d80189d8-28e4-440b-8aed-b43c62f59dd7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.611 2 INFO nova.compute.manager [-] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] VM Stopped (Lifecycle Event)
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.638 2 DEBUG nova.compute.manager [None req-2bc14278-5b9c-450d-b4f3-017964ba6676 - - - - - -] [instance: d80189d8-28e4-440b-8aed-b43c62f59dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Oct 11 08:56:35 compute-0 ceph-mon[74313]: osdmap e224: 3 total, 3 up, 3 in
Oct 11 08:56:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Oct 11 08:56:35 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.740 2 DEBUG nova.storage.rbd_utils [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] cloning vms/62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk@4b9519edbce24b4ba1acc58fb46dca99 to images/842b58bb-88f1-4b43-9dd5-ce6009e157e3 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:56:35 compute-0 nova_compute[260935]: 2025-10-11 08:56:35.877 2 DEBUG nova.storage.rbd_utils [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] flattening images/842b58bb-88f1-4b43-9dd5-ce6009e157e3 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:56:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 557 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.7 MiB/s wr, 398 op/s
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.236 2 DEBUG nova.storage.rbd_utils [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] removing snapshot(4b9519edbce24b4ba1acc58fb46dca99) on rbd image(62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.252 2 DEBUG nova.compute.manager [req-ac2d91b6-63d7-4d7b-834d-2461879442cb req-05cee4e0-dd20-4095-9c4f-f8737f03c508 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Received event network-vif-plugged-6288362b-25a4-409c-ae10-4a00943f4b89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.253 2 DEBUG oslo_concurrency.lockutils [req-ac2d91b6-63d7-4d7b-834d-2461879442cb req-05cee4e0-dd20-4095-9c4f-f8737f03c508 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.253 2 DEBUG oslo_concurrency.lockutils [req-ac2d91b6-63d7-4d7b-834d-2461879442cb req-05cee4e0-dd20-4095-9c4f-f8737f03c508 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.254 2 DEBUG oslo_concurrency.lockutils [req-ac2d91b6-63d7-4d7b-834d-2461879442cb req-05cee4e0-dd20-4095-9c4f-f8737f03c508 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f699e5fd-760a-4326-b428-c34d5e922f8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.254 2 DEBUG nova.compute.manager [req-ac2d91b6-63d7-4d7b-834d-2461879442cb req-05cee4e0-dd20-4095-9c4f-f8737f03c508 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] No waiting events found dispatching network-vif-plugged-6288362b-25a4-409c-ae10-4a00943f4b89 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.254 2 WARNING nova.compute.manager [req-ac2d91b6-63d7-4d7b-834d-2461879442cb req-05cee4e0-dd20-4095-9c4f-f8737f03c508 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Received unexpected event network-vif-plugged-6288362b-25a4-409c-ae10-4a00943f4b89 for instance with vm_state deleted and task_state None.
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.501 2 DEBUG nova.compute.manager [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Received event network-vif-deleted-6288362b-25a4-409c-ae10-4a00943f4b89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.501 2 DEBUG nova.compute.manager [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Received event network-vif-plugged-37653b55-c083-4108-a42f-4bbe7778f058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.502 2 DEBUG oslo_concurrency.lockutils [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.502 2 DEBUG oslo_concurrency.lockutils [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.503 2 DEBUG oslo_concurrency.lockutils [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.503 2 DEBUG nova.compute.manager [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] No waiting events found dispatching network-vif-plugged-37653b55-c083-4108-a42f-4bbe7778f058 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.503 2 WARNING nova.compute.manager [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Received unexpected event network-vif-plugged-37653b55-c083-4108-a42f-4bbe7778f058 for instance with vm_state active and task_state shelving_image_uploading.
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.503 2 DEBUG nova.compute.manager [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Received event network-vif-unplugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.504 2 DEBUG oslo_concurrency.lockutils [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.504 2 DEBUG oslo_concurrency.lockutils [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.504 2 DEBUG oslo_concurrency.lockutils [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.505 2 DEBUG nova.compute.manager [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] No waiting events found dispatching network-vif-unplugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.505 2 DEBUG nova.compute.manager [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Received event network-vif-unplugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.505 2 DEBUG nova.compute.manager [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Received event network-vif-plugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.506 2 DEBUG oslo_concurrency.lockutils [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.506 2 DEBUG oslo_concurrency.lockutils [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.506 2 DEBUG oslo_concurrency.lockutils [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.507 2 DEBUG nova.compute.manager [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] No waiting events found dispatching network-vif-plugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.507 2 WARNING nova.compute.manager [req-d4a75f5a-e316-4697-923a-6365ac39f5f2 req-a6e6b62e-7bab-4e99-8d68-6fa338c06cb5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Received unexpected event network-vif-plugged-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e for instance with vm_state active and task_state deleting.
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.558 2 DEBUG nova.network.neutron [-] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.580 2 INFO nova.compute.manager [-] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Took 1.01 seconds to deallocate network for instance.
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.637 2 DEBUG oslo_concurrency.lockutils [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.638 2 DEBUG oslo_concurrency.lockutils [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Oct 11 08:56:36 compute-0 ceph-mon[74313]: osdmap e225: 3 total, 3 up, 3 in
Oct 11 08:56:36 compute-0 ceph-mon[74313]: pgmap v1586: 321 pgs: 321 active+clean; 557 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.7 MiB/s wr, 398 op/s
Oct 11 08:56:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Oct 11 08:56:36 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.733 2 DEBUG nova.storage.rbd_utils [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] creating snapshot(snap) on rbd image(842b58bb-88f1-4b43-9dd5-ce6009e157e3) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:56:36 compute-0 podman[328519]: 2025-10-11 08:56:36.813503363 +0000 UTC m=+0.103848132 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 11 08:56:36 compute-0 nova_compute[260935]: 2025-10-11 08:56:36.870 2 DEBUG oslo_concurrency.processutils [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:56:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3035486225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:37 compute-0 nova_compute[260935]: 2025-10-11 08:56:37.414 2 DEBUG oslo_concurrency.processutils [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:37 compute-0 nova_compute[260935]: 2025-10-11 08:56:37.425 2 DEBUG nova.compute.provider_tree [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:56:37 compute-0 nova_compute[260935]: 2025-10-11 08:56:37.490 2 DEBUG nova.scheduler.client.report [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:56:37 compute-0 nova_compute[260935]: 2025-10-11 08:56:37.523 2 DEBUG oslo_concurrency.lockutils [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:56:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3641526066' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:56:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:56:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3641526066' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:56:37 compute-0 nova_compute[260935]: 2025-10-11 08:56:37.560 2 INFO nova.scheduler.client.report [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Deleted allocations for instance 3915cf40-bdd7-4fe8-8311-834ff26aaf9c
Oct 11 08:56:37 compute-0 nova_compute[260935]: 2025-10-11 08:56:37.643 2 DEBUG oslo_concurrency.lockutils [None req-29da304d-4c2f-4f96-93e3-c9388bf8cbc5 a04e2908f5a54c8f98bee8d0faf3e658 3a6a3cc2a54f4a9bafcdc1304f07944b - - default default] Lock "3915cf40-bdd7-4fe8-8311-834ff26aaf9c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Oct 11 08:56:37 compute-0 ceph-mon[74313]: osdmap e226: 3 total, 3 up, 3 in
Oct 11 08:56:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3035486225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3641526066' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:56:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3641526066' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:56:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Oct 11 08:56:37 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Oct 11 08:56:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1589: 321 pgs: 321 active+clean; 358 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 478 op/s
Oct 11 08:56:38 compute-0 nova_compute[260935]: 2025-10-11 08:56:38.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:56:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Oct 11 08:56:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Oct 11 08:56:38 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Oct 11 08:56:38 compute-0 ceph-mon[74313]: osdmap e227: 3 total, 3 up, 3 in
Oct 11 08:56:38 compute-0 ceph-mon[74313]: pgmap v1589: 321 pgs: 321 active+clean; 358 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 478 op/s
Oct 11 08:56:38 compute-0 ceph-mon[74313]: osdmap e228: 3 total, 3 up, 3 in
Oct 11 08:56:39 compute-0 nova_compute[260935]: 2025-10-11 08:56:39.292 2 INFO nova.virt.libvirt.driver [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Snapshot image upload complete
Oct 11 08:56:39 compute-0 nova_compute[260935]: 2025-10-11 08:56:39.293 2 DEBUG nova.compute.manager [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:39 compute-0 nova_compute[260935]: 2025-10-11 08:56:39.315 2 DEBUG nova.compute.manager [req-34193025-cca8-49ec-8d40-e4616600296e req-324a1086-64ec-4940-8d28-2791d8eab482 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Received event network-vif-deleted-33b4a30b-13cb-4c3b-a6fb-df71a1ce760e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:39 compute-0 nova_compute[260935]: 2025-10-11 08:56:39.371 2 INFO nova.compute.manager [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Shelve offloading
Oct 11 08:56:39 compute-0 nova_compute[260935]: 2025-10-11 08:56:39.389 2 INFO nova.virt.libvirt.driver [-] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Instance destroyed successfully.
Oct 11 08:56:39 compute-0 nova_compute[260935]: 2025-10-11 08:56:39.389 2 DEBUG nova.compute.manager [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:39 compute-0 nova_compute[260935]: 2025-10-11 08:56:39.394 2 DEBUG oslo_concurrency.lockutils [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "refresh_cache-62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:39 compute-0 nova_compute[260935]: 2025-10-11 08:56:39.394 2 DEBUG oslo_concurrency.lockutils [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquired lock "refresh_cache-62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:39 compute-0 nova_compute[260935]: 2025-10-11 08:56:39.394 2 DEBUG nova.network.neutron [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:56:39 compute-0 nova_compute[260935]: 2025-10-11 08:56:39.407 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:39 compute-0 nova_compute[260935]: 2025-10-11 08:56:39.407 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:39 compute-0 nova_compute[260935]: 2025-10-11 08:56:39.427 2 DEBUG nova.compute.manager [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:56:39 compute-0 nova_compute[260935]: 2025-10-11 08:56:39.496 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:39 compute-0 nova_compute[260935]: 2025-10-11 08:56:39.497 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:39 compute-0 nova_compute[260935]: 2025-10-11 08:56:39.503 2 DEBUG nova.virt.hardware [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:56:39 compute-0 nova_compute[260935]: 2025-10-11 08:56:39.504 2 INFO nova.compute.claims [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:56:39 compute-0 nova_compute[260935]: 2025-10-11 08:56:39.651 2 DEBUG oslo_concurrency.processutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 358 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 438 op/s
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:56:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1299459672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.202 2 DEBUG oslo_concurrency.processutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.210 2 DEBUG nova.compute.provider_tree [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.231 2 DEBUG nova.scheduler.client.report [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.266 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.266 2 DEBUG nova.compute.manager [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.322 2 DEBUG nova.compute.manager [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.322 2 DEBUG nova.network.neutron [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.341 2 INFO nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.359 2 DEBUG nova.compute.manager [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.452 2 DEBUG nova.compute.manager [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.454 2 DEBUG nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.455 2 INFO nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Creating image(s)
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.488 2 DEBUG nova.storage.rbd_utils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image be4cae7e-0c2f-4c19-9a7e-58681faf9523_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.521 2 DEBUG nova.storage.rbd_utils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image be4cae7e-0c2f-4c19-9a7e-58681faf9523_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.558 2 DEBUG nova.storage.rbd_utils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image be4cae7e-0c2f-4c19-9a7e-58681faf9523_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.563 2 DEBUG oslo_concurrency.processutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.636 2 DEBUG nova.policy [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee0e5fedb9fc464eb2a9ac362f5e0749', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.687 2 DEBUG oslo_concurrency.processutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.688 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.688 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.689 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.710 2 DEBUG nova.storage.rbd_utils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image be4cae7e-0c2f-4c19-9a7e-58681faf9523_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:40 compute-0 nova_compute[260935]: 2025-10-11 08:56:40.713 2 DEBUG oslo_concurrency.processutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 be4cae7e-0c2f-4c19-9a7e-58681faf9523_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:41 compute-0 nova_compute[260935]: 2025-10-11 08:56:41.066 2 DEBUG oslo_concurrency.processutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 be4cae7e-0c2f-4c19-9a7e-58681faf9523_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:41 compute-0 ceph-mon[74313]: pgmap v1591: 321 pgs: 321 active+clean; 358 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 438 op/s
Oct 11 08:56:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1299459672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:41 compute-0 nova_compute[260935]: 2025-10-11 08:56:41.156 2 DEBUG nova.storage.rbd_utils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] resizing rbd image be4cae7e-0c2f-4c19-9a7e-58681faf9523_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:56:41 compute-0 nova_compute[260935]: 2025-10-11 08:56:41.287 2 DEBUG nova.objects.instance [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'migration_context' on Instance uuid be4cae7e-0c2f-4c19-9a7e-58681faf9523 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:41 compute-0 nova_compute[260935]: 2025-10-11 08:56:41.307 2 DEBUG nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:56:41 compute-0 nova_compute[260935]: 2025-10-11 08:56:41.307 2 DEBUG nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Ensure instance console log exists: /var/lib/nova/instances/be4cae7e-0c2f-4c19-9a7e-58681faf9523/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:56:41 compute-0 nova_compute[260935]: 2025-10-11 08:56:41.308 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:41 compute-0 nova_compute[260935]: 2025-10-11 08:56:41.309 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:41 compute-0 nova_compute[260935]: 2025-10-11 08:56:41.309 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:41 compute-0 nova_compute[260935]: 2025-10-11 08:56:41.586 2 DEBUG nova.network.neutron [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Updating instance_info_cache with network_info: [{"id": "37653b55-c083-4108-a42f-4bbe7778f058", "address": "fa:16:3e:62:88:02", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37653b55-c0", "ovs_interfaceid": "37653b55-c083-4108-a42f-4bbe7778f058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:41 compute-0 nova_compute[260935]: 2025-10-11 08:56:41.612 2 DEBUG oslo_concurrency.lockutils [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Releasing lock "refresh_cache-62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:41 compute-0 podman[328768]: 2025-10-11 08:56:41.809302744 +0000 UTC m=+0.100965920 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 08:56:41 compute-0 nova_compute[260935]: 2025-10-11 08:56:41.838 2 DEBUG nova.network.neutron [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Successfully created port: 47a6e23f-ee4e-4768-bb7e-ab5e3096976f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:56:41 compute-0 podman[328769]: 2025-10-11 08:56:41.862042758 +0000 UTC m=+0.148019152 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 08:56:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 358 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 319 op/s
Oct 11 08:56:43 compute-0 ceph-mon[74313]: pgmap v1592: 321 pgs: 321 active+clean; 358 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 319 op/s
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.188 2 INFO nova.virt.libvirt.driver [-] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Instance destroyed successfully.
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.191 2 DEBUG nova.objects.instance [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'resources' on Instance uuid 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.216 2 DEBUG nova.virt.libvirt.vif [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:56:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-352403089',display_name='tempest-DeleteServersTestJSON-server-352403089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-352403089',id=65,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:56:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-781vkij3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member',shelved_at='2025-10-11T08:56:39.293171',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='842b58bb-88f1-4b43-9dd5-ce6009e157e3'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:56:34Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=62c9b5d7-f22e-4738-b2e6-7c53fcb968ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "37653b55-c083-4108-a42f-4bbe7778f058", "address": "fa:16:3e:62:88:02", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37653b55-c0", "ovs_interfaceid": "37653b55-c083-4108-a42f-4bbe7778f058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.216 2 DEBUG nova.network.os_vif_util [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "37653b55-c083-4108-a42f-4bbe7778f058", "address": "fa:16:3e:62:88:02", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37653b55-c0", "ovs_interfaceid": "37653b55-c083-4108-a42f-4bbe7778f058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.218 2 DEBUG nova.network.os_vif_util [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:88:02,bridge_name='br-int',has_traffic_filtering=True,id=37653b55-c083-4108-a42f-4bbe7778f058,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37653b55-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.219 2 DEBUG os_vif [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:88:02,bridge_name='br-int',has_traffic_filtering=True,id=37653b55-c083-4108-a42f-4bbe7778f058,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37653b55-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.222 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37653b55-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.231 2 INFO os_vif [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:88:02,bridge_name='br-int',has_traffic_filtering=True,id=37653b55-c083-4108-a42f-4bbe7778f058,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37653b55-c0')
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.260 2 DEBUG nova.compute.manager [req-bc43af15-67c2-4360-bcaf-cb63b3874c2a req-3b7f3d0a-b346-4a95-b3df-7b6c180375c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Received event network-changed-37653b55-c083-4108-a42f-4bbe7778f058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.261 2 DEBUG nova.compute.manager [req-bc43af15-67c2-4360-bcaf-cb63b3874c2a req-3b7f3d0a-b346-4a95-b3df-7b6c180375c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Refreshing instance network info cache due to event network-changed-37653b55-c083-4108-a42f-4bbe7778f058. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.265 2 DEBUG oslo_concurrency.lockutils [req-bc43af15-67c2-4360-bcaf-cb63b3874c2a req-3b7f3d0a-b346-4a95-b3df-7b6c180375c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.265 2 DEBUG oslo_concurrency.lockutils [req-bc43af15-67c2-4360-bcaf-cb63b3874c2a req-3b7f3d0a-b346-4a95-b3df-7b6c180375c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.266 2 DEBUG nova.network.neutron [req-bc43af15-67c2-4360-bcaf-cb63b3874c2a req-3b7f3d0a-b346-4a95-b3df-7b6c180375c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Refreshing network info cache for port 37653b55-c083-4108-a42f-4bbe7778f058 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.337 2 DEBUG nova.network.neutron [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Successfully updated port: 47a6e23f-ee4e-4768-bb7e-ab5e3096976f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.353 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "refresh_cache-be4cae7e-0c2f-4c19-9a7e-58681faf9523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.354 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquired lock "refresh_cache-be4cae7e-0c2f-4c19-9a7e-58681faf9523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.354 2 DEBUG nova.network.neutron [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:56:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.675 2 DEBUG nova.network.neutron [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.732 2 INFO nova.virt.libvirt.driver [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Deleting instance files /var/lib/nova/instances/62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_del
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.734 2 INFO nova.virt.libvirt.driver [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Deletion of /var/lib/nova/instances/62c9b5d7-f22e-4738-b2e6-7c53fcb968ea_del complete
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.822 2 INFO nova.scheduler.client.report [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Deleted allocations for instance 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.866 2 DEBUG oslo_concurrency.lockutils [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.867 2 DEBUG oslo_concurrency.lockutils [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:43 compute-0 nova_compute[260935]: 2025-10-11 08:56:43.966 2 DEBUG oslo_concurrency.processutils [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1593: 321 pgs: 321 active+clean; 405 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 9.2 MiB/s wr, 326 op/s
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.208 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "5c74d1f2-66dc-474b-b669-445115cf6b1d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.209 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "5c74d1f2-66dc-474b-b669-445115cf6b1d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.227 2 DEBUG nova.compute.manager [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.283 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:56:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2631408883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.426 2 DEBUG oslo_concurrency.processutils [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.434 2 DEBUG nova.compute.provider_tree [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.455 2 DEBUG nova.scheduler.client.report [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.488 2 DEBUG oslo_concurrency.lockutils [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.492 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.502 2 DEBUG nova.virt.hardware [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.502 2 INFO nova.compute.claims [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.579 2 DEBUG oslo_concurrency.lockutils [None req-1ff6340a-99ab-4f8a-9c38-84240440232e 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 23.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:44 compute-0 ovn_controller[152945]: 2025-10-11T08:56:44Z|00579|binding|INFO|Releasing lport 056a8563-0695-415b-921f-e75fa98e60e5 from this chassis (sb_readonly=0)
Oct 11 08:56:44 compute-0 ovn_controller[152945]: 2025-10-11T08:56:44Z|00580|binding|INFO|Releasing lport b9cf681c-9f4c-4c56-987a-55fa7aa89e1a from this chassis (sb_readonly=0)
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.710 2 DEBUG oslo_concurrency.processutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.777 2 DEBUG nova.network.neutron [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Updating instance_info_cache with network_info: [{"id": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "address": "fa:16:3e:3e:18:4c", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47a6e23f-ee", "ovs_interfaceid": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.786 2 DEBUG nova.network.neutron [req-bc43af15-67c2-4360-bcaf-cb63b3874c2a req-3b7f3d0a-b346-4a95-b3df-7b6c180375c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Updated VIF entry in instance network info cache for port 37653b55-c083-4108-a42f-4bbe7778f058. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.787 2 DEBUG nova.network.neutron [req-bc43af15-67c2-4360-bcaf-cb63b3874c2a req-3b7f3d0a-b346-4a95-b3df-7b6c180375c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Updating instance_info_cache with network_info: [{"id": "37653b55-c083-4108-a42f-4bbe7778f058", "address": "fa:16:3e:62:88:02", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": null, "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap37653b55-c0", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.811 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Releasing lock "refresh_cache-be4cae7e-0c2f-4c19-9a7e-58681faf9523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.812 2 DEBUG nova.compute.manager [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Instance network_info: |[{"id": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "address": "fa:16:3e:3e:18:4c", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47a6e23f-ee", "ovs_interfaceid": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.813 2 DEBUG oslo_concurrency.lockutils [req-bc43af15-67c2-4360-bcaf-cb63b3874c2a req-3b7f3d0a-b346-4a95-b3df-7b6c180375c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-62c9b5d7-f22e-4738-b2e6-7c53fcb968ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.819 2 DEBUG nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Start _get_guest_xml network_info=[{"id": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "address": "fa:16:3e:3e:18:4c", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47a6e23f-ee", "ovs_interfaceid": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.826 2 WARNING nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.832 2 DEBUG nova.virt.libvirt.host [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.833 2 DEBUG nova.virt.libvirt.host [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.838 2 DEBUG nova.virt.libvirt.host [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.838 2 DEBUG nova.virt.libvirt.host [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.839 2 DEBUG nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.839 2 DEBUG nova.virt.hardware [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.841 2 DEBUG nova.virt.hardware [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.841 2 DEBUG nova.virt.hardware [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.842 2 DEBUG nova.virt.hardware [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.842 2 DEBUG nova.virt.hardware [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.843 2 DEBUG nova.virt.hardware [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.843 2 DEBUG nova.virt.hardware [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.844 2 DEBUG nova.virt.hardware [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.844 2 DEBUG nova.virt.hardware [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.845 2 DEBUG nova.virt.hardware [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.845 2 DEBUG nova.virt.hardware [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:56:44 compute-0 nova_compute[260935]: 2025-10-11 08:56:44.851 2 DEBUG oslo_concurrency.processutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:45 compute-0 ceph-mon[74313]: pgmap v1593: 321 pgs: 321 active+clean; 405 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 9.2 MiB/s wr, 326 op/s
Oct 11 08:56:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2631408883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:56:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3811944332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.233 2 DEBUG oslo_concurrency.processutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.241 2 DEBUG nova.compute.provider_tree [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.269 2 DEBUG nova.scheduler.client.report [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.300 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.300 2 DEBUG nova.compute.manager [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:56:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:56:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2530168886' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.362 2 DEBUG oslo_concurrency.processutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.387 2 DEBUG nova.storage.rbd_utils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image be4cae7e-0c2f-4c19-9a7e-58681faf9523_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.393 2 DEBUG oslo_concurrency.processutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.438 2 DEBUG nova.compute.manager [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.439 2 DEBUG nova.network.neutron [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.473 2 INFO nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.497 2 DEBUG nova.compute.manager [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.591 2 DEBUG nova.policy [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d5f5f07c57c467286168be7c097bf26', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '73adfb8cf0c64359b1f33a9643148ef4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.627 2 DEBUG nova.compute.manager [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.629 2 DEBUG nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.629 2 INFO nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Creating image(s)
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.657 2 DEBUG nova.storage.rbd_utils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 5c74d1f2-66dc-474b-b669-445115cf6b1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.685 2 DEBUG nova.storage.rbd_utils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 5c74d1f2-66dc-474b-b669-445115cf6b1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.716 2 DEBUG nova.storage.rbd_utils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 5c74d1f2-66dc-474b-b669-445115cf6b1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.721 2 DEBUG oslo_concurrency.processutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.801 2 DEBUG nova.compute.manager [req-0317f326-c70d-4eb9-b8d6-31b302a15788 req-dc2a0d4a-a2a5-4170-9372-ed6deebadb0c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Received event network-changed-47a6e23f-ee4e-4768-bb7e-ab5e3096976f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.802 2 DEBUG nova.compute.manager [req-0317f326-c70d-4eb9-b8d6-31b302a15788 req-dc2a0d4a-a2a5-4170-9372-ed6deebadb0c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Refreshing instance network info cache due to event network-changed-47a6e23f-ee4e-4768-bb7e-ab5e3096976f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.802 2 DEBUG oslo_concurrency.lockutils [req-0317f326-c70d-4eb9-b8d6-31b302a15788 req-dc2a0d4a-a2a5-4170-9372-ed6deebadb0c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-be4cae7e-0c2f-4c19-9a7e-58681faf9523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.803 2 DEBUG oslo_concurrency.lockutils [req-0317f326-c70d-4eb9-b8d6-31b302a15788 req-dc2a0d4a-a2a5-4170-9372-ed6deebadb0c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-be4cae7e-0c2f-4c19-9a7e-58681faf9523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.803 2 DEBUG nova.network.neutron [req-0317f326-c70d-4eb9-b8d6-31b302a15788 req-dc2a0d4a-a2a5-4170-9372-ed6deebadb0c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Refreshing network info cache for port 47a6e23f-ee4e-4768-bb7e-ab5e3096976f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.822 2 DEBUG oslo_concurrency.processutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.823 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.824 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.824 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.847 2 DEBUG nova.storage.rbd_utils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 5c74d1f2-66dc-474b-b669-445115cf6b1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.851 2 DEBUG oslo_concurrency.processutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5c74d1f2-66dc-474b-b669-445115cf6b1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:56:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2564139889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.891 2 DEBUG oslo_concurrency.processutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.894 2 DEBUG nova.virt.libvirt.vif [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:56:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-349699878',display_name='tempest-ServersTestJSON-server-349699878',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-349699878',id=67,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-g30l2goh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:56:40Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=be4cae7e-0c2f-4c19-9a7e-58681faf9523,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "address": "fa:16:3e:3e:18:4c", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47a6e23f-ee", "ovs_interfaceid": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.894 2 DEBUG nova.network.os_vif_util [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "address": "fa:16:3e:3e:18:4c", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47a6e23f-ee", "ovs_interfaceid": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.896 2 DEBUG nova.network.os_vif_util [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:18:4c,bridge_name='br-int',has_traffic_filtering=True,id=47a6e23f-ee4e-4768-bb7e-ab5e3096976f,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47a6e23f-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.898 2 DEBUG nova.objects.instance [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'pci_devices' on Instance uuid be4cae7e-0c2f-4c19-9a7e-58681faf9523 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.940 2 DEBUG nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:56:45 compute-0 nova_compute[260935]:   <uuid>be4cae7e-0c2f-4c19-9a7e-58681faf9523</uuid>
Oct 11 08:56:45 compute-0 nova_compute[260935]:   <name>instance-00000043</name>
Oct 11 08:56:45 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:56:45 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:56:45 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersTestJSON-server-349699878</nova:name>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:56:44</nova:creationTime>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:56:45 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:56:45 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:56:45 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:56:45 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:56:45 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:56:45 compute-0 nova_compute[260935]:         <nova:user uuid="ee0e5fedb9fc464eb2a9ac362f5e0749">tempest-ServersTestJSON-101172647-project-member</nova:user>
Oct 11 08:56:45 compute-0 nova_compute[260935]:         <nova:project uuid="d9864fda4f8641d8a9c1509c426cc206">tempest-ServersTestJSON-101172647</nova:project>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:56:45 compute-0 nova_compute[260935]:         <nova:port uuid="47a6e23f-ee4e-4768-bb7e-ab5e3096976f">
Oct 11 08:56:45 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:56:45 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:56:45 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <system>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <entry name="serial">be4cae7e-0c2f-4c19-9a7e-58681faf9523</entry>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <entry name="uuid">be4cae7e-0c2f-4c19-9a7e-58681faf9523</entry>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     </system>
Oct 11 08:56:45 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:56:45 compute-0 nova_compute[260935]:   <os>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:   </os>
Oct 11 08:56:45 compute-0 nova_compute[260935]:   <features>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:   </features>
Oct 11 08:56:45 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:56:45 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:56:45 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/be4cae7e-0c2f-4c19-9a7e-58681faf9523_disk">
Oct 11 08:56:45 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       </source>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:56:45 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/be4cae7e-0c2f-4c19-9a7e-58681faf9523_disk.config">
Oct 11 08:56:45 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       </source>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:56:45 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:3e:18:4c"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <target dev="tap47a6e23f-ee"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/be4cae7e-0c2f-4c19-9a7e-58681faf9523/console.log" append="off"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <video>
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     </video>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:56:45 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:56:45 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:56:45 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:56:45 compute-0 nova_compute[260935]: </domain>
Oct 11 08:56:45 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.942 2 DEBUG nova.compute.manager [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Preparing to wait for external event network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.942 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.943 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.943 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.944 2 DEBUG nova.virt.libvirt.vif [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:56:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-349699878',display_name='tempest-ServersTestJSON-server-349699878',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-349699878',id=67,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-g30l2goh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},ta
gs=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:56:40Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=be4cae7e-0c2f-4c19-9a7e-58681faf9523,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "address": "fa:16:3e:3e:18:4c", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47a6e23f-ee", "ovs_interfaceid": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.944 2 DEBUG nova.network.os_vif_util [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "address": "fa:16:3e:3e:18:4c", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47a6e23f-ee", "ovs_interfaceid": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.945 2 DEBUG nova.network.os_vif_util [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:18:4c,bridge_name='br-int',has_traffic_filtering=True,id=47a6e23f-ee4e-4768-bb7e-ab5e3096976f,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47a6e23f-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.945 2 DEBUG os_vif [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:18:4c,bridge_name='br-int',has_traffic_filtering=True,id=47a6e23f-ee4e-4768-bb7e-ab5e3096976f,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47a6e23f-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.947 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.948 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.953 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47a6e23f-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.953 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap47a6e23f-ee, col_values=(('external_ids', {'iface-id': '47a6e23f-ee4e-4768-bb7e-ab5e3096976f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:18:4c', 'vm-uuid': 'be4cae7e-0c2f-4c19-9a7e-58681faf9523'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:45 compute-0 NetworkManager[44960]: <info>  [1760173005.9561] manager: (tap47a6e23f-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:45 compute-0 nova_compute[260935]: 2025-10-11 08:56:45.964 2 INFO os_vif [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:18:4c,bridge_name='br-int',has_traffic_filtering=True,id=47a6e23f-ee4e-4768-bb7e-ab5e3096976f,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47a6e23f-ee')
Oct 11 08:56:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1594: 321 pgs: 321 active+clean; 405 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 5.9 MiB/s wr, 133 op/s
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.038 2 DEBUG nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.040 2 DEBUG nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.040 2 DEBUG nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No VIF found with MAC fa:16:3e:3e:18:4c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.041 2 INFO nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Using config drive
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.081 2 DEBUG nova.storage.rbd_utils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image be4cae7e-0c2f-4c19-9a7e-58681faf9523_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:46 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3811944332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:46 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2530168886' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:46 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2564139889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.149 2 DEBUG oslo_concurrency.processutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5c74d1f2-66dc-474b-b669-445115cf6b1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.243 2 DEBUG nova.storage.rbd_utils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] resizing rbd image 5c74d1f2-66dc-474b-b669-445115cf6b1d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.376 2 DEBUG nova.objects.instance [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'migration_context' on Instance uuid 5c74d1f2-66dc-474b-b669-445115cf6b1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.399 2 DEBUG nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.400 2 DEBUG nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Ensure instance console log exists: /var/lib/nova/instances/5c74d1f2-66dc-474b-b669-445115cf6b1d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.401 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.401 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.401 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.621 2 INFO nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Creating config drive at /var/lib/nova/instances/be4cae7e-0c2f-4c19-9a7e-58681faf9523/disk.config
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.630 2 DEBUG oslo_concurrency.processutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/be4cae7e-0c2f-4c19-9a7e-58681faf9523/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0z6a55a_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.697 2 DEBUG nova.network.neutron [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Successfully created port: c07af0d2-0bbd-43c9-b243-60698370a2d5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.806 2 DEBUG oslo_concurrency.processutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/be4cae7e-0c2f-4c19-9a7e-58681faf9523/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0z6a55a_" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.843 2 DEBUG nova.storage.rbd_utils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image be4cae7e-0c2f-4c19-9a7e-58681faf9523_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:46 compute-0 nova_compute[260935]: 2025-10-11 08:56:46.849 2 DEBUG oslo_concurrency.processutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/be4cae7e-0c2f-4c19-9a7e-58681faf9523/disk.config be4cae7e-0c2f-4c19-9a7e-58681faf9523_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:47 compute-0 nova_compute[260935]: 2025-10-11 08:56:47.047 2 DEBUG oslo_concurrency.processutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/be4cae7e-0c2f-4c19-9a7e-58681faf9523/disk.config be4cae7e-0c2f-4c19-9a7e-58681faf9523_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:47 compute-0 nova_compute[260935]: 2025-10-11 08:56:47.048 2 INFO nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Deleting local config drive /var/lib/nova/instances/be4cae7e-0c2f-4c19-9a7e-58681faf9523/disk.config because it was imported into RBD.
Oct 11 08:56:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Oct 11 08:56:47 compute-0 kernel: tap47a6e23f-ee: entered promiscuous mode
Oct 11 08:56:47 compute-0 ovn_controller[152945]: 2025-10-11T08:56:47Z|00581|binding|INFO|Claiming lport 47a6e23f-ee4e-4768-bb7e-ab5e3096976f for this chassis.
Oct 11 08:56:47 compute-0 ovn_controller[152945]: 2025-10-11T08:56:47Z|00582|binding|INFO|47a6e23f-ee4e-4768-bb7e-ab5e3096976f: Claiming fa:16:3e:3e:18:4c 10.100.0.4
Oct 11 08:56:47 compute-0 NetworkManager[44960]: <info>  [1760173007.1432] manager: (tap47a6e23f-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/270)
Oct 11 08:56:47 compute-0 nova_compute[260935]: 2025-10-11 08:56:47.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:47 compute-0 ceph-mon[74313]: pgmap v1594: 321 pgs: 321 active+clean; 405 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 5.9 MiB/s wr, 133 op/s
Oct 11 08:56:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:47.151 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:18:4c 10.100.0.4'], port_security=['fa:16:3e:3e:18:4c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'be4cae7e-0c2f-4c19-9a7e-58681faf9523', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '2', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=47a6e23f-ee4e-4768-bb7e-ab5e3096976f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:56:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:47.153 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 47a6e23f-ee4e-4768-bb7e-ab5e3096976f in datapath e075bdab-78c4-414f-b270-c41d1c82f498 bound to our chassis
Oct 11 08:56:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:47.156 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 08:56:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Oct 11 08:56:47 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Oct 11 08:56:47 compute-0 ovn_controller[152945]: 2025-10-11T08:56:47Z|00583|binding|INFO|Setting lport 47a6e23f-ee4e-4768-bb7e-ab5e3096976f ovn-installed in OVS
Oct 11 08:56:47 compute-0 ovn_controller[152945]: 2025-10-11T08:56:47Z|00584|binding|INFO|Setting lport 47a6e23f-ee4e-4768-bb7e-ab5e3096976f up in Southbound
Oct 11 08:56:47 compute-0 nova_compute[260935]: 2025-10-11 08:56:47.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:47.192 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[436a9731-35e8-4dbb-9a23-5995f26d4733]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:47 compute-0 nova_compute[260935]: 2025-10-11 08:56:47.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:47 compute-0 systemd-udevd[329180]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:56:47 compute-0 systemd-machined[215705]: New machine qemu-75-instance-00000043.
Oct 11 08:56:47 compute-0 systemd[1]: Started Virtual Machine qemu-75-instance-00000043.
Oct 11 08:56:47 compute-0 NetworkManager[44960]: <info>  [1760173007.2415] device (tap47a6e23f-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:56:47 compute-0 NetworkManager[44960]: <info>  [1760173007.2437] device (tap47a6e23f-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:56:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:47.253 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[750d4cce-91c4-4fdc-9009-9a473e6f4f43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:47.258 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[16a917a7-271b-4f91-a731-a2bdb95fd72e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:47.302 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c1082403-76be-4538-aa64-af545d83ca1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:47.330 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[23a4bc25-9caa-4342-909b-9c829e8c910a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 36159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329192, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:47.353 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8714b8e8-5eae-4921-a0de-30169a372e1f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479414, 'tstamp': 479414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329194, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479419, 'tstamp': 479419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329194, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:47.356 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:47 compute-0 nova_compute[260935]: 2025-10-11 08:56:47.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:47 compute-0 nova_compute[260935]: 2025-10-11 08:56:47.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:47.361 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape075bdab-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:47.361 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:47.361 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape075bdab-70, col_values=(('external_ids', {'iface-id': 'b9cf681c-9f4c-4c56-987a-55fa7aa89e1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:47.361 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:47 compute-0 nova_compute[260935]: 2025-10-11 08:56:47.447 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172992.4452095, f699e5fd-760a-4326-b428-c34d5e922f8b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:47 compute-0 nova_compute[260935]: 2025-10-11 08:56:47.448 2 INFO nova.compute.manager [-] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] VM Stopped (Lifecycle Event)
Oct 11 08:56:47 compute-0 nova_compute[260935]: 2025-10-11 08:56:47.473 2 DEBUG nova.compute.manager [None req-67d117f6-ca2a-46d4-8e1c-84f0ba14afac - - - - - -] [instance: f699e5fd-760a-4326-b428-c34d5e922f8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:47 compute-0 nova_compute[260935]: 2025-10-11 08:56:47.507 2 DEBUG nova.network.neutron [req-0317f326-c70d-4eb9-b8d6-31b302a15788 req-dc2a0d4a-a2a5-4170-9372-ed6deebadb0c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Updated VIF entry in instance network info cache for port 47a6e23f-ee4e-4768-bb7e-ab5e3096976f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:56:47 compute-0 nova_compute[260935]: 2025-10-11 08:56:47.508 2 DEBUG nova.network.neutron [req-0317f326-c70d-4eb9-b8d6-31b302a15788 req-dc2a0d4a-a2a5-4170-9372-ed6deebadb0c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Updating instance_info_cache with network_info: [{"id": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "address": "fa:16:3e:3e:18:4c", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47a6e23f-ee", "ovs_interfaceid": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:47 compute-0 nova_compute[260935]: 2025-10-11 08:56:47.530 2 DEBUG oslo_concurrency.lockutils [req-0317f326-c70d-4eb9-b8d6-31b302a15788 req-dc2a0d4a-a2a5-4170-9372-ed6deebadb0c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-be4cae7e-0c2f-4c19-9a7e-58681faf9523" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:47 compute-0 nova_compute[260935]: 2025-10-11 08:56:47.729 2 DEBUG nova.network.neutron [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Successfully updated port: c07af0d2-0bbd-43c9-b243-60698370a2d5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:56:47 compute-0 nova_compute[260935]: 2025-10-11 08:56:47.757 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "refresh_cache-5c74d1f2-66dc-474b-b669-445115cf6b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:47 compute-0 nova_compute[260935]: 2025-10-11 08:56:47.758 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquired lock "refresh_cache-5c74d1f2-66dc-474b-b669-445115cf6b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:47 compute-0 nova_compute[260935]: 2025-10-11 08:56:47.758 2 DEBUG nova.network.neutron [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:56:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 293 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 107 KiB/s rd, 4.4 MiB/s wr, 159 op/s
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.085 2 DEBUG nova.network.neutron [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:56:48 compute-0 ceph-mon[74313]: osdmap e229: 3 total, 3 up, 3 in
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.240 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173008.2395012, be4cae7e-0c2f-4c19-9a7e-58681faf9523 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.240 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] VM Started (Lifecycle Event)
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.263 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.267 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173008.2397099, be4cae7e-0c2f-4c19-9a7e-58681faf9523 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.268 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] VM Paused (Lifecycle Event)
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.286 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.289 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.311 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:56:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:56:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Oct 11 08:56:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Oct 11 08:56:48 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.519 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172993.5182457, 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.519 2 INFO nova.compute.manager [-] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] VM Stopped (Lifecycle Event)
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.543 2 DEBUG nova.compute.manager [None req-93afc91b-1e24-457f-a26b-00095a8c0817 - - - - - -] [instance: 62c9b5d7-f22e-4738-b2e6-7c53fcb968ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.753 2 DEBUG nova.compute.manager [req-b2d7471c-2109-4135-babd-3e54615e7dac req-fc508fe5-ffea-452a-b636-365a3fe47554 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Received event network-changed-c07af0d2-0bbd-43c9-b243-60698370a2d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.754 2 DEBUG nova.compute.manager [req-b2d7471c-2109-4135-babd-3e54615e7dac req-fc508fe5-ffea-452a-b636-365a3fe47554 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Refreshing instance network info cache due to event network-changed-c07af0d2-0bbd-43c9-b243-60698370a2d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.754 2 DEBUG oslo_concurrency.lockutils [req-b2d7471c-2109-4135-babd-3e54615e7dac req-fc508fe5-ffea-452a-b636-365a3fe47554 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-5c74d1f2-66dc-474b-b669-445115cf6b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.914 2 DEBUG nova.compute.manager [req-6a987634-5839-4fd1-ab88-58ed11a2b1af req-faf66989-313b-4df0-8067-9c12233c1a5a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Received event network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.915 2 DEBUG oslo_concurrency.lockutils [req-6a987634-5839-4fd1-ab88-58ed11a2b1af req-faf66989-313b-4df0-8067-9c12233c1a5a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.915 2 DEBUG oslo_concurrency.lockutils [req-6a987634-5839-4fd1-ab88-58ed11a2b1af req-faf66989-313b-4df0-8067-9c12233c1a5a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.915 2 DEBUG oslo_concurrency.lockutils [req-6a987634-5839-4fd1-ab88-58ed11a2b1af req-faf66989-313b-4df0-8067-9c12233c1a5a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.916 2 DEBUG nova.compute.manager [req-6a987634-5839-4fd1-ab88-58ed11a2b1af req-faf66989-313b-4df0-8067-9c12233c1a5a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Processing event network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.916 2 DEBUG nova.compute.manager [req-6a987634-5839-4fd1-ab88-58ed11a2b1af req-faf66989-313b-4df0-8067-9c12233c1a5a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Received event network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.917 2 DEBUG oslo_concurrency.lockutils [req-6a987634-5839-4fd1-ab88-58ed11a2b1af req-faf66989-313b-4df0-8067-9c12233c1a5a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.917 2 DEBUG oslo_concurrency.lockutils [req-6a987634-5839-4fd1-ab88-58ed11a2b1af req-faf66989-313b-4df0-8067-9c12233c1a5a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.917 2 DEBUG oslo_concurrency.lockutils [req-6a987634-5839-4fd1-ab88-58ed11a2b1af req-faf66989-313b-4df0-8067-9c12233c1a5a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.918 2 DEBUG nova.compute.manager [req-6a987634-5839-4fd1-ab88-58ed11a2b1af req-faf66989-313b-4df0-8067-9c12233c1a5a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] No waiting events found dispatching network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.918 2 WARNING nova.compute.manager [req-6a987634-5839-4fd1-ab88-58ed11a2b1af req-faf66989-313b-4df0-8067-9c12233c1a5a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Received unexpected event network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f for instance with vm_state building and task_state spawning.
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.919 2 DEBUG nova.compute.manager [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.923 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173008.9232996, be4cae7e-0c2f-4c19-9a7e-58681faf9523 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.924 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] VM Resumed (Lifecycle Event)
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.926 2 DEBUG nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.932 2 INFO nova.virt.libvirt.driver [-] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Instance spawned successfully.
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.933 2 DEBUG nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.952 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.960 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.966 2 DEBUG nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.967 2 DEBUG nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.967 2 DEBUG nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.968 2 DEBUG nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.969 2 DEBUG nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:48 compute-0 nova_compute[260935]: 2025-10-11 08:56:48.970 2 DEBUG nova.virt.libvirt.driver [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.000 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.040 2 DEBUG nova.network.neutron [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Updating instance_info_cache with network_info: [{"id": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "address": "fa:16:3e:fc:d6:3e", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc07af0d2-0b", "ovs_interfaceid": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.050 2 INFO nova.compute.manager [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Took 8.60 seconds to spawn the instance on the hypervisor.
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.050 2 DEBUG nova.compute.manager [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.079 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Releasing lock "refresh_cache-5c74d1f2-66dc-474b-b669-445115cf6b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.080 2 DEBUG nova.compute.manager [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Instance network_info: |[{"id": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "address": "fa:16:3e:fc:d6:3e", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc07af0d2-0b", "ovs_interfaceid": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.080 2 DEBUG oslo_concurrency.lockutils [req-b2d7471c-2109-4135-babd-3e54615e7dac req-fc508fe5-ffea-452a-b636-365a3fe47554 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-5c74d1f2-66dc-474b-b669-445115cf6b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.081 2 DEBUG nova.network.neutron [req-b2d7471c-2109-4135-babd-3e54615e7dac req-fc508fe5-ffea-452a-b636-365a3fe47554 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Refreshing network info cache for port c07af0d2-0bbd-43c9-b243-60698370a2d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.086 2 DEBUG nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Start _get_guest_xml network_info=[{"id": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "address": "fa:16:3e:fc:d6:3e", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc07af0d2-0b", "ovs_interfaceid": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.092 2 WARNING nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.100 2 DEBUG nova.virt.libvirt.host [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.100 2 DEBUG nova.virt.libvirt.host [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.105 2 DEBUG nova.virt.libvirt.host [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.106 2 DEBUG nova.virt.libvirt.host [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.107 2 DEBUG nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.107 2 DEBUG nova.virt.hardware [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.108 2 DEBUG nova.virt.hardware [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.108 2 DEBUG nova.virt.hardware [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.109 2 DEBUG nova.virt.hardware [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.109 2 DEBUG nova.virt.hardware [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.109 2 DEBUG nova.virt.hardware [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.110 2 DEBUG nova.virt.hardware [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.110 2 DEBUG nova.virt.hardware [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.111 2 DEBUG nova.virt.hardware [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.111 2 DEBUG nova.virt.hardware [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.111 2 DEBUG nova.virt.hardware [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.116 2 DEBUG oslo_concurrency.processutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.191 2 INFO nova.compute.manager [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Took 9.71 seconds to build instance.
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.213 2 DEBUG oslo_concurrency.lockutils [None req-d7529a95-9591-4434-9cc0-ba133964a1ee ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:49 compute-0 ceph-mon[74313]: pgmap v1596: 321 pgs: 321 active+clean; 293 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 107 KiB/s rd, 4.4 MiB/s wr, 159 op/s
Oct 11 08:56:49 compute-0 ceph-mon[74313]: osdmap e230: 3 total, 3 up, 3 in
Oct 11 08:56:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:56:49 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1811042274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.633 2 DEBUG oslo_concurrency.processutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.669 2 DEBUG nova.storage.rbd_utils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 5c74d1f2-66dc-474b-b669-445115cf6b1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.676 2 DEBUG oslo_concurrency.processutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.995 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760172994.994049, 3915cf40-bdd7-4fe8-8311-834ff26aaf9c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:49 compute-0 nova_compute[260935]: 2025-10-11 08:56:49.996 2 INFO nova.compute.manager [-] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] VM Stopped (Lifecycle Event)
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.026 2 DEBUG nova.compute.manager [None req-0af98e73-6bcb-4737-add5-1c9ed6fe85ba - - - - - -] [instance: 3915cf40-bdd7-4fe8-8311-834ff26aaf9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1598: 321 pgs: 321 active+clean; 293 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 129 KiB/s rd, 5.3 MiB/s wr, 191 op/s
Oct 11 08:56:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:56:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2208792946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.165 2 DEBUG oslo_concurrency.processutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.166 2 DEBUG nova.virt.libvirt.vif [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:56:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-602145985',display_name='tempest-ServerActionsTestOtherB-server-602145985',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-602145985',id=68,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='73adfb8cf0c64359b1f33a9643148ef4',ramdisk_id='',reservation_id='r-xzpjs7qm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1445504716',owner_user_name='tempest-ServerActionsTestOther
B-1445504716-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:56:45Z,user_data=None,user_id='8d5f5f07c57c467286168be7c097bf26',uuid=5c74d1f2-66dc-474b-b669-445115cf6b1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "address": "fa:16:3e:fc:d6:3e", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc07af0d2-0b", "ovs_interfaceid": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.167 2 DEBUG nova.network.os_vif_util [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converting VIF {"id": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "address": "fa:16:3e:fc:d6:3e", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc07af0d2-0b", "ovs_interfaceid": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.167 2 DEBUG nova.network.os_vif_util [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:d6:3e,bridge_name='br-int',has_traffic_filtering=True,id=c07af0d2-0bbd-43c9-b243-60698370a2d5,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc07af0d2-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.169 2 DEBUG nova.objects.instance [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c74d1f2-66dc-474b-b669-445115cf6b1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.188 2 DEBUG nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:56:50 compute-0 nova_compute[260935]:   <uuid>5c74d1f2-66dc-474b-b669-445115cf6b1d</uuid>
Oct 11 08:56:50 compute-0 nova_compute[260935]:   <name>instance-00000044</name>
Oct 11 08:56:50 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:56:50 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:56:50 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerActionsTestOtherB-server-602145985</nova:name>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:56:49</nova:creationTime>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:56:50 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:56:50 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:56:50 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:56:50 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:56:50 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:56:50 compute-0 nova_compute[260935]:         <nova:user uuid="8d5f5f07c57c467286168be7c097bf26">tempest-ServerActionsTestOtherB-1445504716-project-member</nova:user>
Oct 11 08:56:50 compute-0 nova_compute[260935]:         <nova:project uuid="73adfb8cf0c64359b1f33a9643148ef4">tempest-ServerActionsTestOtherB-1445504716</nova:project>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:56:50 compute-0 nova_compute[260935]:         <nova:port uuid="c07af0d2-0bbd-43c9-b243-60698370a2d5">
Oct 11 08:56:50 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:56:50 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:56:50 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <system>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <entry name="serial">5c74d1f2-66dc-474b-b669-445115cf6b1d</entry>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <entry name="uuid">5c74d1f2-66dc-474b-b669-445115cf6b1d</entry>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     </system>
Oct 11 08:56:50 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:56:50 compute-0 nova_compute[260935]:   <os>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:   </os>
Oct 11 08:56:50 compute-0 nova_compute[260935]:   <features>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:   </features>
Oct 11 08:56:50 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:56:50 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:56:50 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/5c74d1f2-66dc-474b-b669-445115cf6b1d_disk">
Oct 11 08:56:50 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       </source>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:56:50 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/5c74d1f2-66dc-474b-b669-445115cf6b1d_disk.config">
Oct 11 08:56:50 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       </source>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:56:50 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:fc:d6:3e"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <target dev="tapc07af0d2-0b"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/5c74d1f2-66dc-474b-b669-445115cf6b1d/console.log" append="off"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <video>
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     </video>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:56:50 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:56:50 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:56:50 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:56:50 compute-0 nova_compute[260935]: </domain>
Oct 11 08:56:50 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.190 2 DEBUG nova.compute.manager [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Preparing to wait for external event network-vif-plugged-c07af0d2-0bbd-43c9-b243-60698370a2d5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.190 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.190 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.191 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.191 2 DEBUG nova.virt.libvirt.vif [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:56:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-602145985',display_name='tempest-ServerActionsTestOtherB-server-602145985',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-602145985',id=68,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='73adfb8cf0c64359b1f33a9643148ef4',ramdisk_id='',reservation_id='r-xzpjs7qm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1445504716',owner_user_name='tempest-ServerActionsTestOtherB-1445504716-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:56:45Z,user_data=None,user_id='8d5f5f07c57c467286168be7c097bf26',uuid=5c74d1f2-66dc-474b-b669-445115cf6b1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "address": "fa:16:3e:fc:d6:3e", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc07af0d2-0b", "ovs_interfaceid": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.192 2 DEBUG nova.network.os_vif_util [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converting VIF {"id": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "address": "fa:16:3e:fc:d6:3e", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc07af0d2-0b", "ovs_interfaceid": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.193 2 DEBUG nova.network.os_vif_util [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:d6:3e,bridge_name='br-int',has_traffic_filtering=True,id=c07af0d2-0bbd-43c9-b243-60698370a2d5,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc07af0d2-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.193 2 DEBUG os_vif [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:d6:3e,bridge_name='br-int',has_traffic_filtering=True,id=c07af0d2-0bbd-43c9-b243-60698370a2d5,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc07af0d2-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.194 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.195 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc07af0d2-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc07af0d2-0b, col_values=(('external_ids', {'iface-id': 'c07af0d2-0bbd-43c9-b243-60698370a2d5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:d6:3e', 'vm-uuid': '5c74d1f2-66dc-474b-b669-445115cf6b1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:50 compute-0 NetworkManager[44960]: <info>  [1760173010.2019] manager: (tapc07af0d2-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/271)
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.208 2 INFO os_vif [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:d6:3e,bridge_name='br-int',has_traffic_filtering=True,id=c07af0d2-0bbd-43c9-b243-60698370a2d5,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc07af0d2-0b')
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.290 2 DEBUG nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.291 2 DEBUG nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.301 2 DEBUG nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No VIF found with MAC fa:16:3e:fc:d6:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.302 2 INFO nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Using config drive
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.341 2 DEBUG nova.storage.rbd_utils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 5c74d1f2-66dc-474b-b669-445115cf6b1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:50 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1811042274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:50 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2208792946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.623 2 DEBUG nova.network.neutron [req-b2d7471c-2109-4135-babd-3e54615e7dac req-fc508fe5-ffea-452a-b636-365a3fe47554 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Updated VIF entry in instance network info cache for port c07af0d2-0bbd-43c9-b243-60698370a2d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.623 2 DEBUG nova.network.neutron [req-b2d7471c-2109-4135-babd-3e54615e7dac req-fc508fe5-ffea-452a-b636-365a3fe47554 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Updating instance_info_cache with network_info: [{"id": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "address": "fa:16:3e:fc:d6:3e", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc07af0d2-0b", "ovs_interfaceid": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:50 compute-0 nova_compute[260935]: 2025-10-11 08:56:50.642 2 DEBUG oslo_concurrency.lockutils [req-b2d7471c-2109-4135-babd-3e54615e7dac req-fc508fe5-ffea-452a-b636-365a3fe47554 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-5c74d1f2-66dc-474b-b669-445115cf6b1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:51 compute-0 nova_compute[260935]: 2025-10-11 08:56:51.062 2 INFO nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Creating config drive at /var/lib/nova/instances/5c74d1f2-66dc-474b-b669-445115cf6b1d/disk.config
Oct 11 08:56:51 compute-0 nova_compute[260935]: 2025-10-11 08:56:51.070 2 DEBUG oslo_concurrency.processutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5c74d1f2-66dc-474b-b669-445115cf6b1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpngjkb3d3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:51 compute-0 nova_compute[260935]: 2025-10-11 08:56:51.236 2 DEBUG oslo_concurrency.processutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5c74d1f2-66dc-474b-b669-445115cf6b1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpngjkb3d3" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:51 compute-0 nova_compute[260935]: 2025-10-11 08:56:51.280 2 DEBUG nova.storage.rbd_utils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 5c74d1f2-66dc-474b-b669-445115cf6b1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:51 compute-0 nova_compute[260935]: 2025-10-11 08:56:51.285 2 DEBUG oslo_concurrency.processutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5c74d1f2-66dc-474b-b669-445115cf6b1d/disk.config 5c74d1f2-66dc-474b-b669-445115cf6b1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:51 compute-0 ceph-mon[74313]: pgmap v1598: 321 pgs: 321 active+clean; 293 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 129 KiB/s rd, 5.3 MiB/s wr, 191 op/s
Oct 11 08:56:51 compute-0 nova_compute[260935]: 2025-10-11 08:56:51.503 2 DEBUG oslo_concurrency.processutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5c74d1f2-66dc-474b-b669-445115cf6b1d/disk.config 5c74d1f2-66dc-474b-b669-445115cf6b1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:51 compute-0 nova_compute[260935]: 2025-10-11 08:56:51.503 2 INFO nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Deleting local config drive /var/lib/nova/instances/5c74d1f2-66dc-474b-b669-445115cf6b1d/disk.config because it was imported into RBD.
Oct 11 08:56:51 compute-0 kernel: tapc07af0d2-0b: entered promiscuous mode
Oct 11 08:56:51 compute-0 NetworkManager[44960]: <info>  [1760173011.5744] manager: (tapc07af0d2-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/272)
Oct 11 08:56:51 compute-0 nova_compute[260935]: 2025-10-11 08:56:51.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:51 compute-0 ovn_controller[152945]: 2025-10-11T08:56:51Z|00585|binding|INFO|Claiming lport c07af0d2-0bbd-43c9-b243-60698370a2d5 for this chassis.
Oct 11 08:56:51 compute-0 ovn_controller[152945]: 2025-10-11T08:56:51Z|00586|binding|INFO|c07af0d2-0bbd-43c9-b243-60698370a2d5: Claiming fa:16:3e:fc:d6:3e 10.100.0.7
Oct 11 08:56:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:51.593 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:d6:3e 10.100.0.7'], port_security=['fa:16:3e:fc:d6:3e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5c74d1f2-66dc-474b-b669-445115cf6b1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73adfb8cf0c64359b1f33a9643148ef4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '88bbbe16-d865-47df-a62a-312128d64455', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cfa1fc9-121e-4e0a-bd08-716b82275316, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c07af0d2-0bbd-43c9-b243-60698370a2d5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:56:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:51.594 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c07af0d2-0bbd-43c9-b243-60698370a2d5 in datapath 056c6769-bc97-4ae9-9759-4cc2d984a31d bound to our chassis
Oct 11 08:56:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:51.596 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 056c6769-bc97-4ae9-9759-4cc2d984a31d
Oct 11 08:56:51 compute-0 systemd-udevd[329373]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:56:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:51.621 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eea6f7c9-ffea-421e-baef-1d88fb68b09d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:51 compute-0 ovn_controller[152945]: 2025-10-11T08:56:51Z|00587|binding|INFO|Setting lport c07af0d2-0bbd-43c9-b243-60698370a2d5 ovn-installed in OVS
Oct 11 08:56:51 compute-0 ovn_controller[152945]: 2025-10-11T08:56:51Z|00588|binding|INFO|Setting lport c07af0d2-0bbd-43c9-b243-60698370a2d5 up in Southbound
Oct 11 08:56:51 compute-0 NetworkManager[44960]: <info>  [1760173011.6317] device (tapc07af0d2-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:56:51 compute-0 NetworkManager[44960]: <info>  [1760173011.6325] device (tapc07af0d2-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:56:51 compute-0 nova_compute[260935]: 2025-10-11 08:56:51.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:51 compute-0 nova_compute[260935]: 2025-10-11 08:56:51.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:51 compute-0 systemd-machined[215705]: New machine qemu-76-instance-00000044.
Oct 11 08:56:51 compute-0 systemd[1]: Started Virtual Machine qemu-76-instance-00000044.
Oct 11 08:56:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:51.671 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[99963bff-3724-4053-9a04-73cdd62ce872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:51.675 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[aade9de6-d86b-401a-b49e-424310be6416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:51.723 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef1a797-bc17-4fcd-b784-2e207a74af71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:51.756 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bb234d7f-962e-4af2-9156-afbcb2e13388]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap056c6769-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:cc:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477032, 'reachable_time': 36312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329386, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:51.784 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[820e805d-63d4-4074-88b6-d2f4c3ffa7ae]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap056c6769-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477053, 'tstamp': 477053}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329388, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap056c6769-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477058, 'tstamp': 477058}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329388, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:51.786 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap056c6769-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:51 compute-0 nova_compute[260935]: 2025-10-11 08:56:51.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:51 compute-0 nova_compute[260935]: 2025-10-11 08:56:51.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:51.791 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap056c6769-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:51.792 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:51.793 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap056c6769-b0, col_values=(('external_ids', {'iface-id': '056a8563-0695-415b-921f-e75fa98e60e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:51.793 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.024 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.025 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1599: 321 pgs: 321 active+clean; 293 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 2.7 MiB/s wr, 130 op/s
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.048 2 DEBUG nova.compute.manager [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.150 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.151 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.164 2 DEBUG nova.virt.hardware [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.165 2 INFO nova.compute.claims [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.208 2 DEBUG nova.compute.manager [req-e6964523-8e0f-4cf7-84a8-29e52067e12f req-233dadeb-a8b8-4702-9374-cf1c9932042f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Received event network-vif-plugged-c07af0d2-0bbd-43c9-b243-60698370a2d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.209 2 DEBUG oslo_concurrency.lockutils [req-e6964523-8e0f-4cf7-84a8-29e52067e12f req-233dadeb-a8b8-4702-9374-cf1c9932042f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.209 2 DEBUG oslo_concurrency.lockutils [req-e6964523-8e0f-4cf7-84a8-29e52067e12f req-233dadeb-a8b8-4702-9374-cf1c9932042f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.210 2 DEBUG oslo_concurrency.lockutils [req-e6964523-8e0f-4cf7-84a8-29e52067e12f req-233dadeb-a8b8-4702-9374-cf1c9932042f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.210 2 DEBUG nova.compute.manager [req-e6964523-8e0f-4cf7-84a8-29e52067e12f req-233dadeb-a8b8-4702-9374-cf1c9932042f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Processing event network-vif-plugged-c07af0d2-0bbd-43c9-b243-60698370a2d5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.374 2 DEBUG oslo_concurrency.processutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.494 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173012.4937117, 5c74d1f2-66dc-474b-b669-445115cf6b1d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.495 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] VM Started (Lifecycle Event)
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.502 2 DEBUG nova.compute.manager [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.510 2 DEBUG nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.515 2 INFO nova.virt.libvirt.driver [-] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Instance spawned successfully.
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.516 2 DEBUG nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.521 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.528 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.541 2 DEBUG nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.541 2 DEBUG nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.542 2 DEBUG nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.543 2 DEBUG nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.543 2 DEBUG nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.544 2 DEBUG nova.virt.libvirt.driver [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.549 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.550 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173012.4965124, 5c74d1f2-66dc-474b-b669-445115cf6b1d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.550 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] VM Paused (Lifecycle Event)
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.579 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.593 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173012.5053747, 5c74d1f2-66dc-474b-b669-445115cf6b1d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.593 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] VM Resumed (Lifecycle Event)
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.620 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.628 2 INFO nova.compute.manager [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Took 7.00 seconds to spawn the instance on the hypervisor.
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.629 2 DEBUG nova.compute.manager [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.634 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.680 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.744 2 INFO nova.compute.manager [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Took 8.48 seconds to build instance.
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.765 2 DEBUG oslo_concurrency.lockutils [None req-741ae2b0-482f-416a-b0a1-b511c367f117 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "5c74d1f2-66dc-474b-b669-445115cf6b1d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:56:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3243038305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.881 2 DEBUG oslo_concurrency.processutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.889 2 DEBUG nova.compute.provider_tree [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.906 2 DEBUG nova.scheduler.client.report [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.946 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:52 compute-0 nova_compute[260935]: 2025-10-11 08:56:52.947 2 DEBUG nova.compute.manager [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.025 2 DEBUG nova.compute.manager [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.026 2 DEBUG nova.network.neutron [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.060 2 INFO nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.086 2 DEBUG nova.compute.manager [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.256 2 DEBUG nova.compute.manager [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.258 2 DEBUG nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.259 2 INFO nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Creating image(s)
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.295 2 DEBUG nova.storage.rbd_utils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 80799b12-9add-4561-a8eb-f1cf3c1f93a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.335 2 DEBUG nova.storage.rbd_utils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 80799b12-9add-4561-a8eb-f1cf3c1f93a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.372 2 DEBUG nova.storage.rbd_utils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 80799b12-9add-4561-a8eb-f1cf3c1f93a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.376 2 DEBUG oslo_concurrency.processutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:53.384155) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173013384194, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 887, "num_deletes": 265, "total_data_size": 967379, "memory_usage": 983528, "flush_reason": "Manual Compaction"}
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173013391285, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 954163, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32818, "largest_seqno": 33704, "table_properties": {"data_size": 949640, "index_size": 2111, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10510, "raw_average_key_size": 20, "raw_value_size": 940289, "raw_average_value_size": 1808, "num_data_blocks": 92, "num_entries": 520, "num_filter_entries": 520, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760172972, "oldest_key_time": 1760172972, "file_creation_time": 1760173013, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 7153 microseconds, and 2866 cpu microseconds.
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:53.391313) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 954163 bytes OK
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:53.391328) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:53.393049) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:53.393060) EVENT_LOG_v1 {"time_micros": 1760173013393056, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:53.393075) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 962849, prev total WAL file size 962849, number of live WAL files 2.
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:53.393525) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303037' seq:72057594037927935, type:22 .. '6C6F676D0031323539' seq:0, type:0; will stop at (end)
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(931KB)], [71(8069KB)]
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173013393569, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9217140, "oldest_snapshot_seqno": -1}
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.430 2 DEBUG nova.policy [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '213e5693e94f44e7950e3dfbca04228a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '93dd4902ce324862a38006da8e06503a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:56:53 compute-0 ceph-mon[74313]: pgmap v1599: 321 pgs: 321 active+clean; 293 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 2.7 MiB/s wr, 130 op/s
Oct 11 08:56:53 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3243038305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5693 keys, 9108524 bytes, temperature: kUnknown
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173013434978, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9108524, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9068584, "index_size": 24598, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 144216, "raw_average_key_size": 25, "raw_value_size": 8964351, "raw_average_value_size": 1574, "num_data_blocks": 1000, "num_entries": 5693, "num_filter_entries": 5693, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173013, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:53.435277) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9108524 bytes
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:53.442768) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 222.3 rd, 219.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 7.9 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(19.2) write-amplify(9.5) OK, records in: 6233, records dropped: 540 output_compression: NoCompression
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:53.442783) EVENT_LOG_v1 {"time_micros": 1760173013442776, "job": 40, "event": "compaction_finished", "compaction_time_micros": 41466, "compaction_time_cpu_micros": 19756, "output_level": 6, "num_output_files": 1, "total_output_size": 9108524, "num_input_records": 6233, "num_output_records": 5693, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173013443203, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173013444469, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:53.393455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:53.444518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:53.444523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:53.444525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:53.444526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:56:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:56:53.444527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.466 2 DEBUG oslo_concurrency.processutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.466 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.467 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.468 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.497 2 DEBUG nova.storage.rbd_utils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 80799b12-9add-4561-a8eb-f1cf3c1f93a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.501 2 DEBUG oslo_concurrency.processutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 80799b12-9add-4561-a8eb-f1cf3c1f93a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.559 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.560 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.583 2 DEBUG nova.compute.manager [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.669 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.669 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.687 2 DEBUG nova.virt.hardware [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.688 2 INFO nova.compute.claims [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.777 2 DEBUG oslo_concurrency.processutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 80799b12-9add-4561-a8eb-f1cf3c1f93a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.864 2 DEBUG nova.storage.rbd_utils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] resizing rbd image 80799b12-9add-4561-a8eb-f1cf3c1f93a1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:56:53 compute-0 nova_compute[260935]: 2025-10-11 08:56:53.986 2 DEBUG nova.objects.instance [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'migration_context' on Instance uuid 80799b12-9add-4561-a8eb-f1cf3c1f93a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.005 2 DEBUG nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.006 2 DEBUG nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Ensure instance console log exists: /var/lib/nova/instances/80799b12-9add-4561-a8eb-f1cf3c1f93a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.007 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.008 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.008 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1600: 321 pgs: 321 active+clean; 293 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 241 op/s
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.075 2 DEBUG oslo_concurrency.processutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.245 2 INFO nova.compute.manager [None req-1c5a40e2-7915-4063-a296-3ce49c61eb53 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Get console output
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.257 2 INFO oslo.privsep.daemon [None req-1c5a40e2-7915-4063-a296-3ce49c61eb53 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp6a7yxndc/privsep.sock']
Oct 11 08:56:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:56:54 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2670170854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.550 2 DEBUG oslo_concurrency.processutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.558 2 DEBUG nova.compute.provider_tree [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.610 2 DEBUG nova.scheduler.client.report [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.626 2 DEBUG nova.network.neutron [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Successfully created port: b57a52c8-34c3-415f-9848-195b09be3611 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.638 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.639 2 DEBUG nova.compute.manager [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.722 2 DEBUG nova.compute.manager [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.723 2 DEBUG nova.network.neutron [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.743 2 INFO nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.766 2 DEBUG nova.compute.manager [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:56:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:56:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:56:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:56:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:56:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:56:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.804 2 DEBUG nova.compute.manager [req-58ffea35-81ff-4422-af35-e23d6c4108db req-0f9b73d0-bc09-4d9a-b89f-600808df9a17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Received event network-vif-plugged-c07af0d2-0bbd-43c9-b243-60698370a2d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.805 2 DEBUG oslo_concurrency.lockutils [req-58ffea35-81ff-4422-af35-e23d6c4108db req-0f9b73d0-bc09-4d9a-b89f-600808df9a17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.805 2 DEBUG oslo_concurrency.lockutils [req-58ffea35-81ff-4422-af35-e23d6c4108db req-0f9b73d0-bc09-4d9a-b89f-600808df9a17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.806 2 DEBUG oslo_concurrency.lockutils [req-58ffea35-81ff-4422-af35-e23d6c4108db req-0f9b73d0-bc09-4d9a-b89f-600808df9a17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.806 2 DEBUG nova.compute.manager [req-58ffea35-81ff-4422-af35-e23d6c4108db req-0f9b73d0-bc09-4d9a-b89f-600808df9a17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] No waiting events found dispatching network-vif-plugged-c07af0d2-0bbd-43c9-b243-60698370a2d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.807 2 WARNING nova.compute.manager [req-58ffea35-81ff-4422-af35-e23d6c4108db req-0f9b73d0-bc09-4d9a-b89f-600808df9a17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Received unexpected event network-vif-plugged-c07af0d2-0bbd-43c9-b243-60698370a2d5 for instance with vm_state active and task_state None.
Oct 11 08:56:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:56:54
Oct 11 08:56:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:56:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:56:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'backups', 'volumes', '.rgw.root', 'vms', 'images', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct 11 08:56:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.901 2 DEBUG nova.compute.manager [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.903 2 DEBUG nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.904 2 INFO nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Creating image(s)
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.945 2 DEBUG nova.storage.rbd_utils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image 7e87b7bb-5a07-4c5d-9998-0e1375a226f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:54 compute-0 nova_compute[260935]: 2025-10-11 08:56:54.983 2 DEBUG nova.storage.rbd_utils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image 7e87b7bb-5a07-4c5d-9998-0e1375a226f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.024 2 DEBUG nova.storage.rbd_utils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image 7e87b7bb-5a07-4c5d-9998-0e1375a226f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.033 2 DEBUG oslo_concurrency.processutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.093 2 DEBUG nova.policy [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee0e5fedb9fc464eb2a9ac362f5e0749', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.148 2 DEBUG oslo_concurrency.processutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.149 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.150 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.151 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.195 2 DEBUG nova.storage.rbd_utils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image 7e87b7bb-5a07-4c5d-9998-0e1375a226f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.199 2 DEBUG oslo_concurrency.processutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7e87b7bb-5a07-4c5d-9998-0e1375a226f1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.242 2 INFO oslo.privsep.daemon [None req-1c5a40e2-7915-4063-a296-3ce49c61eb53 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Spawned new privsep daemon via rootwrap
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.021 29289 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.035 29289 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.040 29289 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.040 29289 INFO oslo.privsep.daemon [-] privsep daemon running as pid 29289
Oct 11 08:56:55 compute-0 unix_chkpwd[329740]: password check failed for user (root)
Oct 11 08:56:55 compute-0 sshd-session[329620]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179  user=root
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.393 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 08:56:55 compute-0 ceph-mon[74313]: pgmap v1600: 321 pgs: 321 active+clean; 293 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 241 op/s
Oct 11 08:56:55 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2670170854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.508 2 DEBUG oslo_concurrency.processutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7e87b7bb-5a07-4c5d-9998-0e1375a226f1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.582 2 DEBUG nova.network.neutron [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Successfully updated port: b57a52c8-34c3-415f-9848-195b09be3611 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.592 2 DEBUG nova.storage.rbd_utils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] resizing rbd image 7e87b7bb-5a07-4c5d-9998-0e1375a226f1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.631 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "refresh_cache-80799b12-9add-4561-a8eb-f1cf3c1f93a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.631 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquired lock "refresh_cache-80799b12-9add-4561-a8eb-f1cf3c1f93a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.631 2 DEBUG nova.network.neutron [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.716 2 DEBUG nova.compute.manager [req-8a134a69-5c72-4dd7-b791-17e5ac3a488b req-8bbf51a6-60ec-4e5e-acf1-110c89ecd020 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Received event network-changed-b57a52c8-34c3-415f-9848-195b09be3611 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.717 2 DEBUG nova.compute.manager [req-8a134a69-5c72-4dd7-b791-17e5ac3a488b req-8bbf51a6-60ec-4e5e-acf1-110c89ecd020 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Refreshing instance network info cache due to event network-changed-b57a52c8-34c3-415f-9848-195b09be3611. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.717 2 DEBUG oslo_concurrency.lockutils [req-8a134a69-5c72-4dd7-b791-17e5ac3a488b req-8bbf51a6-60ec-4e5e-acf1-110c89ecd020 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-80799b12-9add-4561-a8eb-f1cf3c1f93a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.724 2 DEBUG nova.objects.instance [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'migration_context' on Instance uuid 7e87b7bb-5a07-4c5d-9998-0e1375a226f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.748 2 DEBUG nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.749 2 DEBUG nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Ensure instance console log exists: /var/lib/nova/instances/7e87b7bb-5a07-4c5d-9998-0e1375a226f1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.749 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.750 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.750 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:55 compute-0 nova_compute[260935]: 2025-10-11 08:56:55.898 2 DEBUG nova.network.neutron [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:56:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1601: 321 pgs: 321 active+clean; 293 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.4 MiB/s wr, 218 op/s
Oct 11 08:56:56 compute-0 nova_compute[260935]: 2025-10-11 08:56:56.291 2 DEBUG nova.network.neutron [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Successfully created port: 9d9032fb-a79e-4d9c-897c-84768b25ec64 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.113 2 DEBUG nova.network.neutron [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Updating instance_info_cache with network_info: [{"id": "b57a52c8-34c3-415f-9848-195b09be3611", "address": "fa:16:3e:3f:db:18", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57a52c8-34", "ovs_interfaceid": "b57a52c8-34c3-415f-9848-195b09be3611", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.168 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Releasing lock "refresh_cache-80799b12-9add-4561-a8eb-f1cf3c1f93a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.169 2 DEBUG nova.compute.manager [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Instance network_info: |[{"id": "b57a52c8-34c3-415f-9848-195b09be3611", "address": "fa:16:3e:3f:db:18", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57a52c8-34", "ovs_interfaceid": "b57a52c8-34c3-415f-9848-195b09be3611", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.169 2 DEBUG oslo_concurrency.lockutils [req-8a134a69-5c72-4dd7-b791-17e5ac3a488b req-8bbf51a6-60ec-4e5e-acf1-110c89ecd020 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-80799b12-9add-4561-a8eb-f1cf3c1f93a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.170 2 DEBUG nova.network.neutron [req-8a134a69-5c72-4dd7-b791-17e5ac3a488b req-8bbf51a6-60ec-4e5e-acf1-110c89ecd020 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Refreshing network info cache for port b57a52c8-34c3-415f-9848-195b09be3611 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.175 2 DEBUG nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Start _get_guest_xml network_info=[{"id": "b57a52c8-34c3-415f-9848-195b09be3611", "address": "fa:16:3e:3f:db:18", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57a52c8-34", "ovs_interfaceid": "b57a52c8-34c3-415f-9848-195b09be3611", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.185 2 WARNING nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.198 2 DEBUG nova.virt.libvirt.host [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.199 2 DEBUG nova.virt.libvirt.host [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.210 2 DEBUG nova.virt.libvirt.host [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.211 2 DEBUG nova.virt.libvirt.host [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.211 2 DEBUG nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.212 2 DEBUG nova.virt.hardware [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.213 2 DEBUG nova.virt.hardware [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.213 2 DEBUG nova.virt.hardware [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.213 2 DEBUG nova.virt.hardware [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.214 2 DEBUG nova.virt.hardware [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.214 2 DEBUG nova.virt.hardware [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.215 2 DEBUG nova.virt.hardware [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.215 2 DEBUG nova.virt.hardware [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.215 2 DEBUG nova.virt.hardware [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.216 2 DEBUG nova.virt.hardware [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.216 2 DEBUG nova.virt.hardware [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.222 2 DEBUG oslo_concurrency.processutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:57 compute-0 sshd-session[329620]: Failed password for root from 155.4.244.179 port 58780 ssh2
Oct 11 08:56:57 compute-0 ceph-mon[74313]: pgmap v1601: 321 pgs: 321 active+clean; 293 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.4 MiB/s wr, 218 op/s
Oct 11 08:56:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:56:57 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2283851620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.729 2 DEBUG oslo_concurrency.processutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.760 2 DEBUG nova.storage.rbd_utils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 80799b12-9add-4561-a8eb-f1cf3c1f93a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:57 compute-0 nova_compute[260935]: 2025-10-11 08:56:57.766 2 DEBUG oslo_concurrency.processutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:57 compute-0 sshd-session[329620]: Received disconnect from 155.4.244.179 port 58780:11: Bye Bye [preauth]
Oct 11 08:56:57 compute-0 sshd-session[329620]: Disconnected from authenticating user root 155.4.244.179 port 58780 [preauth]
Oct 11 08:56:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1602: 321 pgs: 321 active+clean; 385 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.3 MiB/s wr, 230 op/s
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:56:58 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2190123434' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.266 2 DEBUG oslo_concurrency.processutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.269 2 DEBUG nova.virt.libvirt.vif [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:56:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-375075055',display_name='tempest-DeleteServersTestJSON-server-375075055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-375075055',id=69,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-dj57enum',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:56:53Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=80799b12-9add-4561-a8eb-f1cf3c1f93a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b57a52c8-34c3-415f-9848-195b09be3611", "address": "fa:16:3e:3f:db:18", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57a52c8-34", "ovs_interfaceid": "b57a52c8-34c3-415f-9848-195b09be3611", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.270 2 DEBUG nova.network.os_vif_util [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "b57a52c8-34c3-415f-9848-195b09be3611", "address": "fa:16:3e:3f:db:18", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57a52c8-34", "ovs_interfaceid": "b57a52c8-34c3-415f-9848-195b09be3611", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.272 2 DEBUG nova.network.os_vif_util [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:db:18,bridge_name='br-int',has_traffic_filtering=True,id=b57a52c8-34c3-415f-9848-195b09be3611,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb57a52c8-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.274 2 DEBUG nova.objects.instance [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'pci_devices' on Instance uuid 80799b12-9add-4561-a8eb-f1cf3c1f93a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.300 2 DEBUG nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:56:58 compute-0 nova_compute[260935]:   <uuid>80799b12-9add-4561-a8eb-f1cf3c1f93a1</uuid>
Oct 11 08:56:58 compute-0 nova_compute[260935]:   <name>instance-00000045</name>
Oct 11 08:56:58 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:56:58 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:56:58 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <nova:name>tempest-DeleteServersTestJSON-server-375075055</nova:name>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:56:57</nova:creationTime>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:56:58 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:56:58 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:56:58 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:56:58 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:56:58 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:56:58 compute-0 nova_compute[260935]:         <nova:user uuid="213e5693e94f44e7950e3dfbca04228a">tempest-DeleteServersTestJSON-1019340677-project-member</nova:user>
Oct 11 08:56:58 compute-0 nova_compute[260935]:         <nova:project uuid="93dd4902ce324862a38006da8e06503a">tempest-DeleteServersTestJSON-1019340677</nova:project>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:56:58 compute-0 nova_compute[260935]:         <nova:port uuid="b57a52c8-34c3-415f-9848-195b09be3611">
Oct 11 08:56:58 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:56:58 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:56:58 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <system>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <entry name="serial">80799b12-9add-4561-a8eb-f1cf3c1f93a1</entry>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <entry name="uuid">80799b12-9add-4561-a8eb-f1cf3c1f93a1</entry>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     </system>
Oct 11 08:56:58 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:56:58 compute-0 nova_compute[260935]:   <os>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:   </os>
Oct 11 08:56:58 compute-0 nova_compute[260935]:   <features>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:   </features>
Oct 11 08:56:58 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:56:58 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:56:58 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/80799b12-9add-4561-a8eb-f1cf3c1f93a1_disk">
Oct 11 08:56:58 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       </source>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:56:58 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/80799b12-9add-4561-a8eb-f1cf3c1f93a1_disk.config">
Oct 11 08:56:58 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       </source>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:56:58 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:3f:db:18"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <target dev="tapb57a52c8-34"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/80799b12-9add-4561-a8eb-f1cf3c1f93a1/console.log" append="off"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <video>
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     </video>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:56:58 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:56:58 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:56:58 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:56:58 compute-0 nova_compute[260935]: </domain>
Oct 11 08:56:58 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.311 2 DEBUG nova.compute.manager [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Preparing to wait for external event network-vif-plugged-b57a52c8-34c3-415f-9848-195b09be3611 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.312 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.313 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.313 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.324 2 DEBUG nova.virt.libvirt.vif [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:56:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-375075055',display_name='tempest-DeleteServersTestJSON-server-375075055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-375075055',id=69,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-dj57enum',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:56:53Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=80799b12-9add-4561-a8eb-f1cf3c1f93a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b57a52c8-34c3-415f-9848-195b09be3611", "address": "fa:16:3e:3f:db:18", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57a52c8-34", "ovs_interfaceid": "b57a52c8-34c3-415f-9848-195b09be3611", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.327 2 DEBUG nova.network.os_vif_util [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "b57a52c8-34c3-415f-9848-195b09be3611", "address": "fa:16:3e:3f:db:18", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57a52c8-34", "ovs_interfaceid": "b57a52c8-34c3-415f-9848-195b09be3611", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.329 2 DEBUG nova.network.os_vif_util [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:db:18,bridge_name='br-int',has_traffic_filtering=True,id=b57a52c8-34c3-415f-9848-195b09be3611,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb57a52c8-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.331 2 DEBUG os_vif [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:db:18,bridge_name='br-int',has_traffic_filtering=True,id=b57a52c8-34c3-415f-9848-195b09be3611,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb57a52c8-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.333 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.334 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.337 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb57a52c8-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.338 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb57a52c8-34, col_values=(('external_ids', {'iface-id': 'b57a52c8-34c3-415f-9848-195b09be3611', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:db:18', 'vm-uuid': '80799b12-9add-4561-a8eb-f1cf3c1f93a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:56:58 compute-0 NetworkManager[44960]: <info>  [1760173018.3413] manager: (tapb57a52c8-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.349 2 INFO os_vif [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:db:18,bridge_name='br-int',has_traffic_filtering=True,id=b57a52c8-34c3-415f-9848-195b09be3611,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb57a52c8-34')
Oct 11 08:56:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.444 2 DEBUG nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.445 2 DEBUG nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.445 2 DEBUG nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No VIF found with MAC fa:16:3e:3f:db:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.446 2 INFO nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Using config drive
Oct 11 08:56:58 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2283851620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:58 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2190123434' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.481 2 DEBUG nova.storage.rbd_utils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 80799b12-9add-4561-a8eb-f1cf3c1f93a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.593 2 DEBUG nova.network.neutron [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Successfully updated port: 9d9032fb-a79e-4d9c-897c-84768b25ec64 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.632 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "refresh_cache-7e87b7bb-5a07-4c5d-9998-0e1375a226f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.633 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquired lock "refresh_cache-7e87b7bb-5a07-4c5d-9998-0e1375a226f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.633 2 DEBUG nova.network.neutron [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.840 2 DEBUG nova.network.neutron [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.909 2 INFO nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Creating config drive at /var/lib/nova/instances/80799b12-9add-4561-a8eb-f1cf3c1f93a1/disk.config
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.916 2 DEBUG oslo_concurrency.processutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/80799b12-9add-4561-a8eb-f1cf3c1f93a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzjqpuaye execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.981 2 DEBUG nova.compute.manager [req-01e5f7d2-2e46-4a10-9a72-bd675e157e11 req-0ffac204-6d3f-4ea5-86b4-9d6e99490eef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Received event network-changed-9d9032fb-a79e-4d9c-897c-84768b25ec64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.982 2 DEBUG nova.compute.manager [req-01e5f7d2-2e46-4a10-9a72-bd675e157e11 req-0ffac204-6d3f-4ea5-86b4-9d6e99490eef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Refreshing instance network info cache due to event network-changed-9d9032fb-a79e-4d9c-897c-84768b25ec64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:56:58 compute-0 nova_compute[260935]: 2025-10-11 08:56:58.982 2 DEBUG oslo_concurrency.lockutils [req-01e5f7d2-2e46-4a10-9a72-bd675e157e11 req-0ffac204-6d3f-4ea5-86b4-9d6e99490eef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7e87b7bb-5a07-4c5d-9998-0e1375a226f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.081 2 DEBUG oslo_concurrency.processutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/80799b12-9add-4561-a8eb-f1cf3c1f93a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzjqpuaye" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.111 2 DEBUG nova.storage.rbd_utils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 80799b12-9add-4561-a8eb-f1cf3c1f93a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.116 2 DEBUG oslo_concurrency.processutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/80799b12-9add-4561-a8eb-f1cf3c1f93a1/disk.config 80799b12-9add-4561-a8eb-f1cf3c1f93a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.377 2 DEBUG oslo_concurrency.processutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/80799b12-9add-4561-a8eb-f1cf3c1f93a1/disk.config 80799b12-9add-4561-a8eb-f1cf3c1f93a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.378 2 INFO nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Deleting local config drive /var/lib/nova/instances/80799b12-9add-4561-a8eb-f1cf3c1f93a1/disk.config because it was imported into RBD.
Oct 11 08:56:59 compute-0 kernel: tapb57a52c8-34: entered promiscuous mode
Oct 11 08:56:59 compute-0 NetworkManager[44960]: <info>  [1760173019.4490] manager: (tapb57a52c8-34): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:59 compute-0 ovn_controller[152945]: 2025-10-11T08:56:59Z|00589|binding|INFO|Claiming lport b57a52c8-34c3-415f-9848-195b09be3611 for this chassis.
Oct 11 08:56:59 compute-0 ovn_controller[152945]: 2025-10-11T08:56:59Z|00590|binding|INFO|b57a52c8-34c3-415f-9848-195b09be3611: Claiming fa:16:3e:3f:db:18 10.100.0.12
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.465 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:db:18 10.100.0.12'], port_security=['fa:16:3e:3f:db:18 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '80799b12-9add-4561-a8eb-f1cf3c1f93a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93dd4902ce324862a38006da8e06503a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b773900e-3df7-4cb6-b9b0-3d240ff499b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69794a2f-48ab-4a0d-8725-f4a7f57172dd, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b57a52c8-34c3-415f-9848-195b09be3611) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.466 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b57a52c8-34c3-415f-9848-195b09be3611 in datapath 2cb96d57-a5e9-4b38-b10e-68187a5bf82f bound to our chassis
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.469 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 08:56:59 compute-0 ceph-mon[74313]: pgmap v1602: 321 pgs: 321 active+clean; 385 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.3 MiB/s wr, 230 op/s
Oct 11 08:56:59 compute-0 ovn_controller[152945]: 2025-10-11T08:56:59Z|00591|binding|INFO|Setting lport b57a52c8-34c3-415f-9848-195b09be3611 ovn-installed in OVS
Oct 11 08:56:59 compute-0 ovn_controller[152945]: 2025-10-11T08:56:59Z|00592|binding|INFO|Setting lport b57a52c8-34c3-415f-9848-195b09be3611 up in Southbound
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.489 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b8c421d8-06bb-429d-b0a8-391b6782f20d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.490 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2cb96d57-a1 in ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.492 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2cb96d57-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.492 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[25a4ee16-8043-4ac8-bfe3-add1cab2fd60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.493 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd5e47a-5e61-4daa-9940-0ebbfd284577]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:59 compute-0 systemd-udevd[329953]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:56:59 compute-0 systemd-machined[215705]: New machine qemu-77-instance-00000045.
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.509 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[582aa922-5e34-41ec-8cb3-79917782e0e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:59 compute-0 systemd[1]: Started Virtual Machine qemu-77-instance-00000045.
Oct 11 08:56:59 compute-0 NetworkManager[44960]: <info>  [1760173019.5236] device (tapb57a52c8-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:56:59 compute-0 NetworkManager[44960]: <info>  [1760173019.5248] device (tapb57a52c8-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.540 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a055f0-a5c7-4424-9aad-1a9984fde173]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.581 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ae8337-6ff9-424d-ae11-6cb8825c8393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.589 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aa997a64-cd75-4587-88da-b8c044b26958]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:59 compute-0 NetworkManager[44960]: <info>  [1760173019.5904] manager: (tap2cb96d57-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/275)
Oct 11 08:56:59 compute-0 systemd-udevd[329957]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.597 2 DEBUG nova.network.neutron [req-8a134a69-5c72-4dd7-b791-17e5ac3a488b req-8bbf51a6-60ec-4e5e-acf1-110c89ecd020 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Updated VIF entry in instance network info cache for port b57a52c8-34c3-415f-9848-195b09be3611. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.599 2 DEBUG nova.network.neutron [req-8a134a69-5c72-4dd7-b791-17e5ac3a488b req-8bbf51a6-60ec-4e5e-acf1-110c89ecd020 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Updating instance_info_cache with network_info: [{"id": "b57a52c8-34c3-415f-9848-195b09be3611", "address": "fa:16:3e:3f:db:18", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57a52c8-34", "ovs_interfaceid": "b57a52c8-34c3-415f-9848-195b09be3611", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.632 2 DEBUG oslo_concurrency.lockutils [req-8a134a69-5c72-4dd7-b791-17e5ac3a488b req-8bbf51a6-60ec-4e5e-acf1-110c89ecd020 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-80799b12-9add-4561-a8eb-f1cf3c1f93a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.670 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa89238-9e0c-42ff-bada-7753af8577d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.675 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[dc9d5465-9531-4c5d-9403-ee0c9081901a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:59 compute-0 NetworkManager[44960]: <info>  [1760173019.7099] device (tap2cb96d57-a0): carrier: link connected
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.722 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6e1eb4f0-2b46-42bd-9c90-97fe6fb6db88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.750 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a609d89e-38f8-4bf4-868b-bf075bbd2063]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2cb96d57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c9:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486441, 'reachable_time': 20265, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329985, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.783 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[926afdd7-1f19-4775-9f25-8a790d3b3ccc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:c9b5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 486441, 'tstamp': 486441}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329986, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.831 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[24dfe312-52bd-4c90-a5ab-b2d74637ae5b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2cb96d57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c9:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486441, 'reachable_time': 20265, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329994, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:56:59.885 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5f241675-5422-4bd5-9ca1-df655c0062b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.959 2 DEBUG nova.network.neutron [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Updating instance_info_cache with network_info: [{"id": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "address": "fa:16:3e:fc:11:6a", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d9032fb-a7", "ovs_interfaceid": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.985 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Releasing lock "refresh_cache-7e87b7bb-5a07-4c5d-9998-0e1375a226f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.986 2 DEBUG nova.compute.manager [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Instance network_info: |[{"id": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "address": "fa:16:3e:fc:11:6a", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d9032fb-a7", "ovs_interfaceid": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.987 2 DEBUG oslo_concurrency.lockutils [req-01e5f7d2-2e46-4a10-9a72-bd675e157e11 req-0ffac204-6d3f-4ea5-86b4-9d6e99490eef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7e87b7bb-5a07-4c5d-9998-0e1375a226f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.987 2 DEBUG nova.network.neutron [req-01e5f7d2-2e46-4a10-9a72-bd675e157e11 req-0ffac204-6d3f-4ea5-86b4-9d6e99490eef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Refreshing network info cache for port 9d9032fb-a79e-4d9c-897c-84768b25ec64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:56:59 compute-0 nova_compute[260935]: 2025-10-11 08:56:59.991 2 DEBUG nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Start _get_guest_xml network_info=[{"id": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "address": "fa:16:3e:fc:11:6a", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d9032fb-a7", "ovs_interfaceid": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:00.001 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4622d03e-b404-4976-af9f-6ee2fb4cec55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.002 2 WARNING nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:00.003 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb96d57-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:00.004 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:00.005 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cb96d57-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:00 compute-0 NetworkManager[44960]: <info>  [1760173020.0083] manager: (tap2cb96d57-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/276)
Oct 11 08:57:00 compute-0 kernel: tap2cb96d57-a0: entered promiscuous mode
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:00.011 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2cb96d57-a0, col_values=(('external_ids', {'iface-id': 'a11e0a08-d1ab-4bff-901b-632484cc0e21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:00 compute-0 ovn_controller[152945]: 2025-10-11T08:57:00Z|00593|binding|INFO|Releasing lport a11e0a08-d1ab-4bff-901b-632484cc0e21 from this chassis (sb_readonly=0)
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 385 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.7 MiB/s wr, 197 op/s
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:00.047 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.047 2 DEBUG nova.virt.libvirt.host [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.048 2 DEBUG nova.virt.libvirt.host [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:00.048 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[95968a8f-9da2-4c78-b1fa-3ceee8eea3a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:00.049 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:57:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:00.050 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'env', 'PROCESS_TAG=haproxy-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.053 2 DEBUG nova.virt.libvirt.host [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.053 2 DEBUG nova.virt.libvirt.host [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.053 2 DEBUG nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.054 2 DEBUG nova.virt.hardware [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.054 2 DEBUG nova.virt.hardware [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.054 2 DEBUG nova.virt.hardware [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.055 2 DEBUG nova.virt.hardware [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.055 2 DEBUG nova.virt.hardware [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.055 2 DEBUG nova.virt.hardware [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.055 2 DEBUG nova.virt.hardware [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.055 2 DEBUG nova.virt.hardware [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.056 2 DEBUG nova.virt.hardware [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.056 2 DEBUG nova.virt.hardware [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.056 2 DEBUG nova.virt.hardware [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.060 2 DEBUG oslo_concurrency.processutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:57:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3792937636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:00 compute-0 podman[330079]: 2025-10-11 08:57:00.523740397 +0000 UTC m=+0.069530113 container create 0f7b494e3002f138722f5f1271d1014e9efa004f9ebbc75a7b05cf11472c9051 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.539 2 DEBUG oslo_concurrency.processutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.572 2 DEBUG nova.storage.rbd_utils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image 7e87b7bb-5a07-4c5d-9998-0e1375a226f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.577 2 DEBUG oslo_concurrency.processutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:00 compute-0 systemd[1]: Started libpod-conmon-0f7b494e3002f138722f5f1271d1014e9efa004f9ebbc75a7b05cf11472c9051.scope.
Oct 11 08:57:00 compute-0 podman[330079]: 2025-10-11 08:57:00.486833515 +0000 UTC m=+0.032623221 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:57:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:57:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ea993341fb37a6ad0ac0884d0d0c5b35572bc4b7fe43aac965c390f78f3454a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.622 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173020.544552, 80799b12-9add-4561-a8eb-f1cf3c1f93a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.623 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] VM Started (Lifecycle Event)
Oct 11 08:57:00 compute-0 podman[330079]: 2025-10-11 08:57:00.632235131 +0000 UTC m=+0.178024857 container init 0f7b494e3002f138722f5f1271d1014e9efa004f9ebbc75a7b05cf11472c9051 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:57:00 compute-0 podman[330079]: 2025-10-11 08:57:00.637789639 +0000 UTC m=+0.183579345 container start 0f7b494e3002f138722f5f1271d1014e9efa004f9ebbc75a7b05cf11472c9051 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 08:57:00 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[330114]: [NOTICE]   (330119) : New worker (330121) forked
Oct 11 08:57:00 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[330114]: [NOTICE]   (330119) : Loading success.
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.665 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.673 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173020.5462894, 80799b12-9add-4561-a8eb-f1cf3c1f93a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.673 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] VM Paused (Lifecycle Event)
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.711 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.717 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:57:00 compute-0 nova_compute[260935]: 2025-10-11 08:57:00.742 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:57:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:57:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814393010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.101 2 DEBUG oslo_concurrency.processutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.103 2 DEBUG nova.virt.libvirt.vif [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:56:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-349699878',display_name='tempest-ServersTestJSON-server-349699878',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-349699878',id=70,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-yuy9dim2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:56:54Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=7e87b7bb-5a07-4c5d-9998-0e1375a226f1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "address": "fa:16:3e:fc:11:6a", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d9032fb-a7", "ovs_interfaceid": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.103 2 DEBUG nova.network.os_vif_util [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "address": "fa:16:3e:fc:11:6a", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d9032fb-a7", "ovs_interfaceid": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.104 2 DEBUG nova.network.os_vif_util [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:11:6a,bridge_name='br-int',has_traffic_filtering=True,id=9d9032fb-a79e-4d9c-897c-84768b25ec64,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d9032fb-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.105 2 DEBUG nova.objects.instance [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e87b7bb-5a07-4c5d-9998-0e1375a226f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.121 2 DEBUG nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:57:01 compute-0 nova_compute[260935]:   <uuid>7e87b7bb-5a07-4c5d-9998-0e1375a226f1</uuid>
Oct 11 08:57:01 compute-0 nova_compute[260935]:   <name>instance-00000046</name>
Oct 11 08:57:01 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:57:01 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:57:01 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersTestJSON-server-349699878</nova:name>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:57:00</nova:creationTime>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:57:01 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:57:01 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:57:01 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:57:01 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:57:01 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:57:01 compute-0 nova_compute[260935]:         <nova:user uuid="ee0e5fedb9fc464eb2a9ac362f5e0749">tempest-ServersTestJSON-101172647-project-member</nova:user>
Oct 11 08:57:01 compute-0 nova_compute[260935]:         <nova:project uuid="d9864fda4f8641d8a9c1509c426cc206">tempest-ServersTestJSON-101172647</nova:project>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:57:01 compute-0 nova_compute[260935]:         <nova:port uuid="9d9032fb-a79e-4d9c-897c-84768b25ec64">
Oct 11 08:57:01 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:57:01 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:57:01 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <system>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <entry name="serial">7e87b7bb-5a07-4c5d-9998-0e1375a226f1</entry>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <entry name="uuid">7e87b7bb-5a07-4c5d-9998-0e1375a226f1</entry>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     </system>
Oct 11 08:57:01 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:57:01 compute-0 nova_compute[260935]:   <os>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:   </os>
Oct 11 08:57:01 compute-0 nova_compute[260935]:   <features>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:   </features>
Oct 11 08:57:01 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:57:01 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:57:01 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/7e87b7bb-5a07-4c5d-9998-0e1375a226f1_disk">
Oct 11 08:57:01 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       </source>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:57:01 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/7e87b7bb-5a07-4c5d-9998-0e1375a226f1_disk.config">
Oct 11 08:57:01 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       </source>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:57:01 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:fc:11:6a"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <target dev="tap9d9032fb-a7"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/7e87b7bb-5a07-4c5d-9998-0e1375a226f1/console.log" append="off"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <video>
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     </video>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:57:01 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:57:01 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:57:01 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:57:01 compute-0 nova_compute[260935]: </domain>
Oct 11 08:57:01 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.122 2 DEBUG nova.compute.manager [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Preparing to wait for external event network-vif-plugged-9d9032fb-a79e-4d9c-897c-84768b25ec64 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.122 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.123 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.123 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.123 2 DEBUG nova.virt.libvirt.vif [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:56:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-349699878',display_name='tempest-ServersTestJSON-server-349699878',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-349699878',id=70,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-yuy9dim2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:56:54Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=7e87b7bb-5a07-4c5d-9998-0e1375a226f1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "address": "fa:16:3e:fc:11:6a", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d9032fb-a7", "ovs_interfaceid": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.124 2 DEBUG nova.network.os_vif_util [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "address": "fa:16:3e:fc:11:6a", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d9032fb-a7", "ovs_interfaceid": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.124 2 DEBUG nova.network.os_vif_util [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:11:6a,bridge_name='br-int',has_traffic_filtering=True,id=9d9032fb-a79e-4d9c-897c-84768b25ec64,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d9032fb-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.125 2 DEBUG os_vif [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:11:6a,bridge_name='br-int',has_traffic_filtering=True,id=9d9032fb-a79e-4d9c-897c-84768b25ec64,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d9032fb-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.126 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.126 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.130 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d9032fb-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.130 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9d9032fb-a7, col_values=(('external_ids', {'iface-id': '9d9032fb-a79e-4d9c-897c-84768b25ec64', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:11:6a', 'vm-uuid': '7e87b7bb-5a07-4c5d-9998-0e1375a226f1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:01 compute-0 NetworkManager[44960]: <info>  [1760173021.1331] manager: (tap9d9032fb-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.146 2 INFO os_vif [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:11:6a,bridge_name='br-int',has_traffic_filtering=True,id=9d9032fb-a79e-4d9c-897c-84768b25ec64,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d9032fb-a7')
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.234 2 DEBUG nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.234 2 DEBUG nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.235 2 DEBUG nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No VIF found with MAC fa:16:3e:fc:11:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.236 2 INFO nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Using config drive
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.304 2 DEBUG nova.storage.rbd_utils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image 7e87b7bb-5a07-4c5d-9998-0e1375a226f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:01 compute-0 podman[330153]: 2025-10-11 08:57:01.320407394 +0000 UTC m=+0.128509145 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.380 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "7aed3949-7b95-4887-90c3-3e2c8202cf27" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.381 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "7aed3949-7b95-4887-90c3-3e2c8202cf27" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.403 2 DEBUG nova.compute.manager [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.458 2 DEBUG nova.network.neutron [req-01e5f7d2-2e46-4a10-9a72-bd675e157e11 req-0ffac204-6d3f-4ea5-86b4-9d6e99490eef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Updated VIF entry in instance network info cache for port 9d9032fb-a79e-4d9c-897c-84768b25ec64. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.459 2 DEBUG nova.network.neutron [req-01e5f7d2-2e46-4a10-9a72-bd675e157e11 req-0ffac204-6d3f-4ea5-86b4-9d6e99490eef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Updating instance_info_cache with network_info: [{"id": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "address": "fa:16:3e:fc:11:6a", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d9032fb-a7", "ovs_interfaceid": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.475 2 DEBUG oslo_concurrency.lockutils [req-01e5f7d2-2e46-4a10-9a72-bd675e157e11 req-0ffac204-6d3f-4ea5-86b4-9d6e99490eef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7e87b7bb-5a07-4c5d-9998-0e1375a226f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:01 compute-0 ceph-mon[74313]: pgmap v1603: 321 pgs: 321 active+clean; 385 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.7 MiB/s wr, 197 op/s
Oct 11 08:57:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3792937636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3814393010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.483 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.483 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.493 2 DEBUG nova.virt.hardware [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.493 2 INFO nova.compute.claims [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.675 2 INFO nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Creating config drive at /var/lib/nova/instances/7e87b7bb-5a07-4c5d-9998-0e1375a226f1/disk.config
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.685 2 DEBUG oslo_concurrency.processutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7e87b7bb-5a07-4c5d-9998-0e1375a226f1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptoyx8ts3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.783 2 DEBUG oslo_concurrency.processutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.857 2 DEBUG oslo_concurrency.processutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7e87b7bb-5a07-4c5d-9998-0e1375a226f1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptoyx8ts3" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.899 2 DEBUG nova.storage.rbd_utils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image 7e87b7bb-5a07-4c5d-9998-0e1375a226f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:01 compute-0 nova_compute[260935]: 2025-10-11 08:57:01.904 2 DEBUG oslo_concurrency.processutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7e87b7bb-5a07-4c5d-9998-0e1375a226f1/disk.config 7e87b7bb-5a07-4c5d-9998-0e1375a226f1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1604: 321 pgs: 321 active+clean; 385 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 191 op/s
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.113 2 DEBUG oslo_concurrency.processutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7e87b7bb-5a07-4c5d-9998-0e1375a226f1/disk.config 7e87b7bb-5a07-4c5d-9998-0e1375a226f1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.114 2 INFO nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Deleting local config drive /var/lib/nova/instances/7e87b7bb-5a07-4c5d-9998-0e1375a226f1/disk.config because it was imported into RBD.
Oct 11 08:57:02 compute-0 NetworkManager[44960]: <info>  [1760173022.2004] manager: (tap9d9032fb-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/278)
Oct 11 08:57:02 compute-0 kernel: tap9d9032fb-a7: entered promiscuous mode
Oct 11 08:57:02 compute-0 systemd-udevd[329978]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:57:02 compute-0 ovn_controller[152945]: 2025-10-11T08:57:02Z|00594|binding|INFO|Claiming lport 9d9032fb-a79e-4d9c-897c-84768b25ec64 for this chassis.
Oct 11 08:57:02 compute-0 ovn_controller[152945]: 2025-10-11T08:57:02Z|00595|binding|INFO|9d9032fb-a79e-4d9c-897c-84768b25ec64: Claiming fa:16:3e:fc:11:6a 10.100.0.7
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:02 compute-0 NetworkManager[44960]: <info>  [1760173022.2200] device (tap9d9032fb-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:57:02 compute-0 NetworkManager[44960]: <info>  [1760173022.2214] device (tap9d9032fb-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:57:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:02.225 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:11:6a 10.100.0.7'], port_security=['fa:16:3e:fc:11:6a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7e87b7bb-5a07-4c5d-9998-0e1375a226f1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '2', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=9d9032fb-a79e-4d9c-897c-84768b25ec64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:57:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:02.226 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 9d9032fb-a79e-4d9c-897c-84768b25ec64 in datapath e075bdab-78c4-414f-b270-c41d1c82f498 bound to our chassis
Oct 11 08:57:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:02.229 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 08:57:02 compute-0 ovn_controller[152945]: 2025-10-11T08:57:02Z|00596|binding|INFO|Setting lport 9d9032fb-a79e-4d9c-897c-84768b25ec64 ovn-installed in OVS
Oct 11 08:57:02 compute-0 ovn_controller[152945]: 2025-10-11T08:57:02Z|00597|binding|INFO|Setting lport 9d9032fb-a79e-4d9c-897c-84768b25ec64 up in Southbound
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:02.251 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[643e3997-b781-4ec0-bccd-d71392e65e0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:02 compute-0 systemd-machined[215705]: New machine qemu-78-instance-00000046.
Oct 11 08:57:02 compute-0 systemd[1]: Started Virtual Machine qemu-78-instance-00000046.
Oct 11 08:57:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:57:02 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/954575715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:02 compute-0 ovn_controller[152945]: 2025-10-11T08:57:02Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3e:18:4c 10.100.0.4
Oct 11 08:57:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:02.315 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[635500c4-d7df-4145-ba09-998dfe47b9d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:02 compute-0 ovn_controller[152945]: 2025-10-11T08:57:02Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:18:4c 10.100.0.4
Oct 11 08:57:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:02.323 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd51a99-833d-4ec7-bb33-a824e1ddeb2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.327 2 DEBUG oslo_concurrency.processutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.356 2 DEBUG nova.compute.provider_tree [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:57:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:02.376 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[39c2333b-3e89-42ce-aa8c-ee877ba45404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.378 2 DEBUG nova.scheduler.client.report [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.398 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.399 2 DEBUG nova.compute.manager [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:57:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:02.415 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ced0e91b-9b0c-40f6-9978-4bc2adc1eb6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 36159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330276, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.440 2 DEBUG nova.compute.manager [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.440 2 DEBUG nova.network.neutron [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:57:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:02.447 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0dc670-7684-46cf-b0b0-39495527ff01]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479414, 'tstamp': 479414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330277, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479419, 'tstamp': 479419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330277, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:02.451 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:02.456 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape075bdab-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:02.456 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:02.457 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape075bdab-70, col_values=(('external_ids', {'iface-id': 'b9cf681c-9f4c-4c56-987a-55fa7aa89e1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:02.458 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.464 2 INFO nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.482 2 DEBUG nova.compute.manager [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:57:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/954575715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.526 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.556 2 DEBUG nova.compute.manager [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.557 2 DEBUG nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.558 2 INFO nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Creating image(s)
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.589 2 DEBUG nova.storage.rbd_utils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 7aed3949-7b95-4887-90c3-3e2c8202cf27_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.623 2 DEBUG nova.storage.rbd_utils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 7aed3949-7b95-4887-90c3-3e2c8202cf27_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.661 2 DEBUG nova.storage.rbd_utils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 7aed3949-7b95-4887-90c3-3e2c8202cf27_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.666 2 DEBUG oslo_concurrency.processutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.759 2 DEBUG oslo_concurrency.processutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.761 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.762 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.763 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.804 2 DEBUG nova.storage.rbd_utils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 7aed3949-7b95-4887-90c3-3e2c8202cf27_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:02 compute-0 nova_compute[260935]: 2025-10-11 08:57:02.811 2 DEBUG oslo_concurrency.processutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7aed3949-7b95-4887-90c3-3e2c8202cf27_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.171 2 DEBUG oslo_concurrency.processutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7aed3949-7b95-4887-90c3-3e2c8202cf27_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.213 2 DEBUG nova.policy [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d5f5f07c57c467286168be7c097bf26', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '73adfb8cf0c64359b1f33a9643148ef4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.297 2 DEBUG nova.storage.rbd_utils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] resizing rbd image 7aed3949-7b95-4887-90c3-3e2c8202cf27_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:57:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.431 2 DEBUG nova.objects.instance [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'migration_context' on Instance uuid 7aed3949-7b95-4887-90c3-3e2c8202cf27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.452 2 DEBUG nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.453 2 DEBUG nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Ensure instance console log exists: /var/lib/nova/instances/7aed3949-7b95-4887-90c3-3e2c8202cf27/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.453 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.454 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.454 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:03 compute-0 ceph-mon[74313]: pgmap v1604: 321 pgs: 321 active+clean; 385 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 191 op/s
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.542 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173023.5415716, 7e87b7bb-5a07-4c5d-9998-0e1375a226f1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.542 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] VM Started (Lifecycle Event)
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.571 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.576 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173023.5418572, 7e87b7bb-5a07-4c5d-9998-0e1375a226f1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.576 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] VM Paused (Lifecycle Event)
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.601 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.605 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:57:03 compute-0 nova_compute[260935]: 2025-10-11 08:57:03.639 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1605: 321 pgs: 321 active+clean; 418 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 268 op/s
Oct 11 08:57:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:04.367 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:57:04 compute-0 nova_compute[260935]: 2025-10-11 08:57:04.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:04.369 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033172092734583577 of space, bias 1.0, pg target 0.9951627820375073 quantized to 32 (current 32)
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:57:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:57:04 compute-0 nova_compute[260935]: 2025-10-11 08:57:04.757 2 DEBUG nova.network.neutron [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Successfully created port: f5e1ba56-7772-43c4-a7a6-9435b3b0796c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:57:05 compute-0 ovn_controller[152945]: 2025-10-11T08:57:05Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:d6:3e 10.100.0.7
Oct 11 08:57:05 compute-0 ovn_controller[152945]: 2025-10-11T08:57:05Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:d6:3e 10.100.0.7
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.497 2 DEBUG nova.compute.manager [req-90e4772d-a40f-4cc4-82f1-427b31c59bbc req-3ccee42d-976d-4269-af0b-0966a94ad632 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Received event network-vif-plugged-b57a52c8-34c3-415f-9848-195b09be3611 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.498 2 DEBUG oslo_concurrency.lockutils [req-90e4772d-a40f-4cc4-82f1-427b31c59bbc req-3ccee42d-976d-4269-af0b-0966a94ad632 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.498 2 DEBUG oslo_concurrency.lockutils [req-90e4772d-a40f-4cc4-82f1-427b31c59bbc req-3ccee42d-976d-4269-af0b-0966a94ad632 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.499 2 DEBUG oslo_concurrency.lockutils [req-90e4772d-a40f-4cc4-82f1-427b31c59bbc req-3ccee42d-976d-4269-af0b-0966a94ad632 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.499 2 DEBUG nova.compute.manager [req-90e4772d-a40f-4cc4-82f1-427b31c59bbc req-3ccee42d-976d-4269-af0b-0966a94ad632 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Processing event network-vif-plugged-b57a52c8-34c3-415f-9848-195b09be3611 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.501 2 DEBUG nova.compute.manager [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:57:05 compute-0 ceph-mon[74313]: pgmap v1605: 321 pgs: 321 active+clean; 418 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 268 op/s
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.507 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173025.5068393, 80799b12-9add-4561-a8eb-f1cf3c1f93a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.507 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] VM Resumed (Lifecycle Event)
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.511 2 DEBUG nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.517 2 INFO nova.virt.libvirt.driver [-] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Instance spawned successfully.
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.518 2 DEBUG nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.533 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.541 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.547 2 DEBUG nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.548 2 DEBUG nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.549 2 DEBUG nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.550 2 DEBUG nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.551 2 DEBUG nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.552 2 DEBUG nova.virt.libvirt.driver [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.567 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.637 2 INFO nova.compute.manager [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Took 12.38 seconds to spawn the instance on the hypervisor.
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.638 2 DEBUG nova.compute.manager [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.702 2 INFO nova.compute.manager [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Took 13.59 seconds to build instance.
Oct 11 08:57:05 compute-0 nova_compute[260935]: 2025-10-11 08:57:05.722 2 DEBUG oslo_concurrency.lockutils [None req-8b868f60-123f-4386-b821-338e962f07b4 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1606: 321 pgs: 321 active+clean; 418 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 194 op/s
Oct 11 08:57:06 compute-0 nova_compute[260935]: 2025-10-11 08:57:06.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:06 compute-0 nova_compute[260935]: 2025-10-11 08:57:06.471 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Acquiring lock "075ec27a-70a2-49f6-a097-5738f3407605" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:06 compute-0 nova_compute[260935]: 2025-10-11 08:57:06.472 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lock "075ec27a-70a2-49f6-a097-5738f3407605" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:06 compute-0 nova_compute[260935]: 2025-10-11 08:57:06.494 2 DEBUG nova.compute.manager [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:57:06 compute-0 nova_compute[260935]: 2025-10-11 08:57:06.562 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:06 compute-0 nova_compute[260935]: 2025-10-11 08:57:06.563 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:06 compute-0 nova_compute[260935]: 2025-10-11 08:57:06.575 2 DEBUG nova.virt.hardware [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:57:06 compute-0 nova_compute[260935]: 2025-10-11 08:57:06.576 2 INFO nova.compute.claims [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:57:06 compute-0 nova_compute[260935]: 2025-10-11 08:57:06.597 2 DEBUG nova.network.neutron [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Successfully updated port: f5e1ba56-7772-43c4-a7a6-9435b3b0796c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:57:06 compute-0 nova_compute[260935]: 2025-10-11 08:57:06.633 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "refresh_cache-7aed3949-7b95-4887-90c3-3e2c8202cf27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:06 compute-0 nova_compute[260935]: 2025-10-11 08:57:06.633 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquired lock "refresh_cache-7aed3949-7b95-4887-90c3-3e2c8202cf27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:06 compute-0 nova_compute[260935]: 2025-10-11 08:57:06.634 2 DEBUG nova.network.neutron [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:57:06 compute-0 nova_compute[260935]: 2025-10-11 08:57:06.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:57:06 compute-0 nova_compute[260935]: 2025-10-11 08:57:06.860 2 DEBUG oslo_concurrency.processutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.160 2 DEBUG nova.network.neutron [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:57:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:57:07 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2880244161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:07.371 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.395 2 DEBUG oslo_concurrency.processutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.403 2 DEBUG nova.compute.provider_tree [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.429 2 DEBUG nova.scheduler.client.report [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.459 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.460 2 DEBUG nova.compute.manager [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:57:07 compute-0 ceph-mon[74313]: pgmap v1606: 321 pgs: 321 active+clean; 418 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 194 op/s
Oct 11 08:57:07 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2880244161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.524 2 DEBUG nova.compute.manager [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.525 2 DEBUG nova.network.neutron [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.548 2 INFO nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.570 2 DEBUG nova.compute.manager [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.618 2 DEBUG nova.compute.manager [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Received event network-vif-plugged-b57a52c8-34c3-415f-9848-195b09be3611 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.620 2 DEBUG oslo_concurrency.lockutils [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.621 2 DEBUG oslo_concurrency.lockutils [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.622 2 DEBUG oslo_concurrency.lockutils [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.623 2 DEBUG nova.compute.manager [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] No waiting events found dispatching network-vif-plugged-b57a52c8-34c3-415f-9848-195b09be3611 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.624 2 WARNING nova.compute.manager [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Received unexpected event network-vif-plugged-b57a52c8-34c3-415f-9848-195b09be3611 for instance with vm_state active and task_state None.
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.625 2 DEBUG nova.compute.manager [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Received event network-vif-plugged-9d9032fb-a79e-4d9c-897c-84768b25ec64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.626 2 DEBUG oslo_concurrency.lockutils [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.627 2 DEBUG oslo_concurrency.lockutils [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.628 2 DEBUG oslo_concurrency.lockutils [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.629 2 DEBUG nova.compute.manager [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Processing event network-vif-plugged-9d9032fb-a79e-4d9c-897c-84768b25ec64 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.630 2 DEBUG nova.compute.manager [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Received event network-vif-plugged-9d9032fb-a79e-4d9c-897c-84768b25ec64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.631 2 DEBUG oslo_concurrency.lockutils [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.632 2 DEBUG oslo_concurrency.lockutils [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.633 2 DEBUG oslo_concurrency.lockutils [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.634 2 DEBUG nova.compute.manager [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] No waiting events found dispatching network-vif-plugged-9d9032fb-a79e-4d9c-897c-84768b25ec64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.635 2 WARNING nova.compute.manager [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Received unexpected event network-vif-plugged-9d9032fb-a79e-4d9c-897c-84768b25ec64 for instance with vm_state building and task_state spawning.
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.636 2 DEBUG nova.compute.manager [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Received event network-changed-f5e1ba56-7772-43c4-a7a6-9435b3b0796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.638 2 DEBUG nova.compute.manager [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Refreshing instance network info cache due to event network-changed-f5e1ba56-7772-43c4-a7a6-9435b3b0796c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.638 2 DEBUG oslo_concurrency.lockutils [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7aed3949-7b95-4887-90c3-3e2c8202cf27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.646 2 DEBUG nova.compute.manager [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.665 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173027.6565945, 7e87b7bb-5a07-4c5d-9998-0e1375a226f1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.666 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] VM Resumed (Lifecycle Event)
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.670 2 DEBUG nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.675 2 INFO nova.virt.libvirt.driver [-] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Instance spawned successfully.
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.676 2 DEBUG nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.699 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.707 2 DEBUG nova.compute.manager [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.709 2 DEBUG nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.710 2 INFO nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Creating image(s)
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.756 2 DEBUG nova.storage.rbd_utils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] rbd image 075ec27a-70a2-49f6-a097-5738f3407605_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.821 2 DEBUG nova.storage.rbd_utils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] rbd image 075ec27a-70a2-49f6-a097-5738f3407605_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:07 compute-0 podman[330509]: 2025-10-11 08:57:07.851121814 +0000 UTC m=+0.131997944 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.870 2 DEBUG nova.storage.rbd_utils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] rbd image 075ec27a-70a2-49f6-a097-5738f3407605_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.876 2 DEBUG oslo_concurrency.processutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.938 2 DEBUG nova.policy [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bebcc72ba14b4d1e976076ce9fee6080', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7b0903df38254068b8566040cf90343c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.948 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.957 2 DEBUG nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.958 2 DEBUG nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.959 2 DEBUG nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.959 2 DEBUG nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.960 2 DEBUG nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.960 2 DEBUG nova.virt.libvirt.driver [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.970 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.986 2 DEBUG oslo_concurrency.processutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.987 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.988 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:07 compute-0 nova_compute[260935]: 2025-10-11 08:57:07.988 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.018 2 DEBUG nova.storage.rbd_utils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] rbd image 075ec27a-70a2-49f6-a097-5738f3407605_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.023 2 DEBUG oslo_concurrency.processutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 075ec27a-70a2-49f6-a097-5738f3407605_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 498 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 9.6 MiB/s wr, 350 op/s
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.086 2 INFO nova.compute.manager [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Took 13.18 seconds to spawn the instance on the hypervisor.
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.087 2 DEBUG nova.compute.manager [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.204 2 INFO nova.compute.manager [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Took 14.56 seconds to build instance.
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.240 2 DEBUG oslo_concurrency.lockutils [None req-8661c271-232d-4d96-8c1b-b75be873e0d0 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.385 2 DEBUG oslo_concurrency.processutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 075ec27a-70a2-49f6-a097-5738f3407605_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.456 2 DEBUG nova.network.neutron [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Updating instance_info_cache with network_info: [{"id": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "address": "fa:16:3e:39:33:ea", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5e1ba56-77", "ovs_interfaceid": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.464 2 DEBUG nova.storage.rbd_utils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] resizing rbd image 075ec27a-70a2-49f6-a097-5738f3407605_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.526 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Releasing lock "refresh_cache-7aed3949-7b95-4887-90c3-3e2c8202cf27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.527 2 DEBUG nova.compute.manager [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Instance network_info: |[{"id": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "address": "fa:16:3e:39:33:ea", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5e1ba56-77", "ovs_interfaceid": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.528 2 DEBUG oslo_concurrency.lockutils [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7aed3949-7b95-4887-90c3-3e2c8202cf27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.529 2 DEBUG nova.network.neutron [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Refreshing network info cache for port f5e1ba56-7772-43c4-a7a6-9435b3b0796c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.538 2 DEBUG nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Start _get_guest_xml network_info=[{"id": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "address": "fa:16:3e:39:33:ea", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5e1ba56-77", "ovs_interfaceid": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.546 2 WARNING nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.595 2 DEBUG nova.virt.libvirt.host [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.596 2 DEBUG nova.virt.libvirt.host [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.602 2 DEBUG nova.objects.instance [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lazy-loading 'migration_context' on Instance uuid 075ec27a-70a2-49f6-a097-5738f3407605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.606 2 DEBUG nova.virt.libvirt.host [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.607 2 DEBUG nova.virt.libvirt.host [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.607 2 DEBUG nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.607 2 DEBUG nova.virt.hardware [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.608 2 DEBUG nova.virt.hardware [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.608 2 DEBUG nova.virt.hardware [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.608 2 DEBUG nova.virt.hardware [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.609 2 DEBUG nova.virt.hardware [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.609 2 DEBUG nova.virt.hardware [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.609 2 DEBUG nova.virt.hardware [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.609 2 DEBUG nova.virt.hardware [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.609 2 DEBUG nova.virt.hardware [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.610 2 DEBUG nova.virt.hardware [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.610 2 DEBUG nova.virt.hardware [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.613 2 DEBUG oslo_concurrency.processutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.660 2 DEBUG nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.661 2 DEBUG nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Ensure instance console log exists: /var/lib/nova/instances/075ec27a-70a2-49f6-a097-5738f3407605/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.662 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.662 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.662 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.663 2 DEBUG oslo_concurrency.lockutils [None req-896d3807-46c3-4ce1-8842-872f85ece55b 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.664 2 DEBUG oslo_concurrency.lockutils [None req-896d3807-46c3-4ce1-8842-872f85ece55b 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.664 2 DEBUG nova.compute.manager [None req-896d3807-46c3-4ce1-8842-872f85ece55b 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.668 2 DEBUG nova.compute.manager [None req-896d3807-46c3-4ce1-8842-872f85ece55b 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.669 2 DEBUG nova.objects.instance [None req-896d3807-46c3-4ce1-8842-872f85ece55b 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'flavor' on Instance uuid 80799b12-9add-4561-a8eb-f1cf3c1f93a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.705 2 DEBUG nova.network.neutron [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Successfully created port: 10f3051f-8897-4e57-89f1-4a03aac94192 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:57:08 compute-0 nova_compute[260935]: 2025-10-11 08:57:08.714 2 DEBUG nova.virt.libvirt.driver [None req-896d3807-46c3-4ce1-8842-872f85ece55b 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 08:57:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:57:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1635249124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.090 2 DEBUG oslo_concurrency.processutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.131 2 DEBUG nova.storage.rbd_utils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 7aed3949-7b95-4887-90c3-3e2c8202cf27_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.139 2 DEBUG oslo_concurrency.processutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:09 compute-0 ceph-mon[74313]: pgmap v1607: 321 pgs: 321 active+clean; 498 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 9.6 MiB/s wr, 350 op/s
Oct 11 08:57:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1635249124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:57:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/955886612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.645 2 DEBUG oslo_concurrency.processutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.651 2 DEBUG nova.virt.libvirt.vif [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:57:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2012099789',display_name='tempest-ServerActionsTestOtherB-server-2012099789',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2012099789',id=71,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='73adfb8cf0c64359b1f33a9643148ef4',ramdisk_id='',reservation_id='r-3md42ked',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1445504716',owner_user_name='tempest-ServerActionsTestOtherB-1445504716-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:57:02Z,user_data=None,user_id='8d5f5f07c57c467286168be7c097bf26',uuid=7aed3949-7b95-4887-90c3-3e2c8202cf27,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "address": "fa:16:3e:39:33:ea", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5e1ba56-77", "ovs_interfaceid": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.652 2 DEBUG nova.network.os_vif_util [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converting VIF {"id": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "address": "fa:16:3e:39:33:ea", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5e1ba56-77", "ovs_interfaceid": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.654 2 DEBUG nova.network.os_vif_util [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:33:ea,bridge_name='br-int',has_traffic_filtering=True,id=f5e1ba56-7772-43c4-a7a6-9435b3b0796c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5e1ba56-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.657 2 DEBUG nova.objects.instance [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7aed3949-7b95-4887-90c3-3e2c8202cf27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.699 2 DEBUG nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:57:09 compute-0 nova_compute[260935]:   <uuid>7aed3949-7b95-4887-90c3-3e2c8202cf27</uuid>
Oct 11 08:57:09 compute-0 nova_compute[260935]:   <name>instance-00000047</name>
Oct 11 08:57:09 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:57:09 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:57:09 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerActionsTestOtherB-server-2012099789</nova:name>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:57:08</nova:creationTime>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:57:09 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:57:09 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:57:09 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:57:09 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:57:09 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:57:09 compute-0 nova_compute[260935]:         <nova:user uuid="8d5f5f07c57c467286168be7c097bf26">tempest-ServerActionsTestOtherB-1445504716-project-member</nova:user>
Oct 11 08:57:09 compute-0 nova_compute[260935]:         <nova:project uuid="73adfb8cf0c64359b1f33a9643148ef4">tempest-ServerActionsTestOtherB-1445504716</nova:project>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:57:09 compute-0 nova_compute[260935]:         <nova:port uuid="f5e1ba56-7772-43c4-a7a6-9435b3b0796c">
Oct 11 08:57:09 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:57:09 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:57:09 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <system>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <entry name="serial">7aed3949-7b95-4887-90c3-3e2c8202cf27</entry>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <entry name="uuid">7aed3949-7b95-4887-90c3-3e2c8202cf27</entry>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     </system>
Oct 11 08:57:09 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:57:09 compute-0 nova_compute[260935]:   <os>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:   </os>
Oct 11 08:57:09 compute-0 nova_compute[260935]:   <features>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:   </features>
Oct 11 08:57:09 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:57:09 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:57:09 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/7aed3949-7b95-4887-90c3-3e2c8202cf27_disk">
Oct 11 08:57:09 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       </source>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:57:09 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/7aed3949-7b95-4887-90c3-3e2c8202cf27_disk.config">
Oct 11 08:57:09 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       </source>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:57:09 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:39:33:ea"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <target dev="tapf5e1ba56-77"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/7aed3949-7b95-4887-90c3-3e2c8202cf27/console.log" append="off"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <video>
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     </video>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:57:09 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:57:09 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:57:09 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:57:09 compute-0 nova_compute[260935]: </domain>
Oct 11 08:57:09 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.714 2 DEBUG nova.compute.manager [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Preparing to wait for external event network-vif-plugged-f5e1ba56-7772-43c4-a7a6-9435b3b0796c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.714 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "7aed3949-7b95-4887-90c3-3e2c8202cf27-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.715 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "7aed3949-7b95-4887-90c3-3e2c8202cf27-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.716 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "7aed3949-7b95-4887-90c3-3e2c8202cf27-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.718 2 DEBUG nova.virt.libvirt.vif [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:57:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2012099789',display_name='tempest-ServerActionsTestOtherB-server-2012099789',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2012099789',id=71,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='73adfb8cf0c64359b1f33a9643148ef4',ramdisk_id='',reservation_id='r-3md42ked',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1445504716',owner_user_name='tempest-ServerActionsTestOtherB-1445504716-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:57:02Z,user_data=None,user_id='8d5f5f07c57c467286168be7c097bf26',uuid=7aed3949-7b95-4887-90c3-3e2c8202cf27,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "address": "fa:16:3e:39:33:ea", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5e1ba56-77", "ovs_interfaceid": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.719 2 DEBUG nova.network.os_vif_util [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converting VIF {"id": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "address": "fa:16:3e:39:33:ea", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5e1ba56-77", "ovs_interfaceid": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.721 2 DEBUG nova.network.os_vif_util [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:33:ea,bridge_name='br-int',has_traffic_filtering=True,id=f5e1ba56-7772-43c4-a7a6-9435b3b0796c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5e1ba56-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.722 2 DEBUG os_vif [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:33:ea,bridge_name='br-int',has_traffic_filtering=True,id=f5e1ba56-7772-43c4-a7a6-9435b3b0796c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5e1ba56-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.725 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.725 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.727 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.732 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5e1ba56-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf5e1ba56-77, col_values=(('external_ids', {'iface-id': 'f5e1ba56-7772-43c4-a7a6-9435b3b0796c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:33:ea', 'vm-uuid': '7aed3949-7b95-4887-90c3-3e2c8202cf27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:09 compute-0 NetworkManager[44960]: <info>  [1760173029.7360] manager: (tapf5e1ba56-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.747 2 INFO os_vif [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:33:ea,bridge_name='br-int',has_traffic_filtering=True,id=f5e1ba56-7772-43c4-a7a6-9435b3b0796c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5e1ba56-77')
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.856 2 DEBUG nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.856 2 DEBUG nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.857 2 DEBUG nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No VIF found with MAC fa:16:3e:39:33:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.858 2 INFO nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Using config drive
Oct 11 08:57:09 compute-0 nova_compute[260935]: 2025-10-11 08:57:09.891 2 DEBUG nova.storage.rbd_utils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 7aed3949-7b95-4887-90c3-3e2c8202cf27_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1608: 321 pgs: 321 active+clean; 498 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 232 op/s
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.118 2 DEBUG nova.network.neutron [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Updated VIF entry in instance network info cache for port f5e1ba56-7772-43c4-a7a6-9435b3b0796c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.119 2 DEBUG nova.network.neutron [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Updating instance_info_cache with network_info: [{"id": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "address": "fa:16:3e:39:33:ea", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5e1ba56-77", "ovs_interfaceid": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.170 2 DEBUG nova.network.neutron [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Successfully updated port: 10f3051f-8897-4e57-89f1-4a03aac94192 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.181 2 DEBUG oslo_concurrency.lockutils [req-73d32512-b507-444c-a1c8-1724672d9122 req-776cfca3-38aa-4b0d-ade7-860c510b8a34 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7aed3949-7b95-4887-90c3-3e2c8202cf27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.238 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Acquiring lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.238 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Acquired lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.239 2 DEBUG nova.network.neutron [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.330 2 INFO nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Creating config drive at /var/lib/nova/instances/7aed3949-7b95-4887-90c3-3e2c8202cf27/disk.config
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.336 2 DEBUG oslo_concurrency.processutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7aed3949-7b95-4887-90c3-3e2c8202cf27/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd_5xgto_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.408 2 DEBUG nova.network.neutron [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.510 2 DEBUG oslo_concurrency.processutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7aed3949-7b95-4887-90c3-3e2c8202cf27/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd_5xgto_" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/955886612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.563 2 DEBUG nova.storage.rbd_utils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 7aed3949-7b95-4887-90c3-3e2c8202cf27_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.568 2 DEBUG oslo_concurrency.processutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7aed3949-7b95-4887-90c3-3e2c8202cf27/disk.config 7aed3949-7b95-4887-90c3-3e2c8202cf27_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.688 2 DEBUG nova.compute.manager [req-ea64f53e-9b46-453e-8d28-4dd49fc3f764 req-c04cfbfd-a40a-462e-86aa-535cf4368c9c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Received event network-changed-10f3051f-8897-4e57-89f1-4a03aac94192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.689 2 DEBUG nova.compute.manager [req-ea64f53e-9b46-453e-8d28-4dd49fc3f764 req-c04cfbfd-a40a-462e-86aa-535cf4368c9c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Refreshing instance network info cache due to event network-changed-10f3051f-8897-4e57-89f1-4a03aac94192. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.690 2 DEBUG oslo_concurrency.lockutils [req-ea64f53e-9b46-453e-8d28-4dd49fc3f764 req-c04cfbfd-a40a-462e-86aa-535cf4368c9c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.778 2 DEBUG oslo_concurrency.processutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7aed3949-7b95-4887-90c3-3e2c8202cf27/disk.config 7aed3949-7b95-4887-90c3-3e2c8202cf27_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.779 2 INFO nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Deleting local config drive /var/lib/nova/instances/7aed3949-7b95-4887-90c3-3e2c8202cf27/disk.config because it was imported into RBD.
Oct 11 08:57:10 compute-0 kernel: tapf5e1ba56-77: entered promiscuous mode
Oct 11 08:57:10 compute-0 NetworkManager[44960]: <info>  [1760173030.8515] manager: (tapf5e1ba56-77): new Tun device (/org/freedesktop/NetworkManager/Devices/280)
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:10 compute-0 ovn_controller[152945]: 2025-10-11T08:57:10Z|00598|binding|INFO|Claiming lport f5e1ba56-7772-43c4-a7a6-9435b3b0796c for this chassis.
Oct 11 08:57:10 compute-0 ovn_controller[152945]: 2025-10-11T08:57:10Z|00599|binding|INFO|f5e1ba56-7772-43c4-a7a6-9435b3b0796c: Claiming fa:16:3e:39:33:ea 10.100.0.4
Oct 11 08:57:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:10.904 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:33:ea 10.100.0.4'], port_security=['fa:16:3e:39:33:ea 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7aed3949-7b95-4887-90c3-3e2c8202cf27', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73adfb8cf0c64359b1f33a9643148ef4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '88bbbe16-d865-47df-a62a-312128d64455', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cfa1fc9-121e-4e0a-bd08-716b82275316, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f5e1ba56-7772-43c4-a7a6-9435b3b0796c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:57:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:10.905 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f5e1ba56-7772-43c4-a7a6-9435b3b0796c in datapath 056c6769-bc97-4ae9-9759-4cc2d984a31d bound to our chassis
Oct 11 08:57:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:10.909 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 056c6769-bc97-4ae9-9759-4cc2d984a31d
Oct 11 08:57:10 compute-0 systemd-udevd[330828]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:57:10 compute-0 ovn_controller[152945]: 2025-10-11T08:57:10Z|00600|binding|INFO|Setting lport f5e1ba56-7772-43c4-a7a6-9435b3b0796c ovn-installed in OVS
Oct 11 08:57:10 compute-0 ovn_controller[152945]: 2025-10-11T08:57:10Z|00601|binding|INFO|Setting lport f5e1ba56-7772-43c4-a7a6-9435b3b0796c up in Southbound
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:10 compute-0 NetworkManager[44960]: <info>  [1760173030.9389] device (tapf5e1ba56-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:57:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:10.938 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3446fbbe-3c1d-41a5-b4fb-e3ea1aa61ddf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:10 compute-0 NetworkManager[44960]: <info>  [1760173030.9401] device (tapf5e1ba56-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:57:10 compute-0 nova_compute[260935]: 2025-10-11 08:57:10.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:10 compute-0 systemd-machined[215705]: New machine qemu-79-instance-00000047.
Oct 11 08:57:10 compute-0 systemd[1]: Started Virtual Machine qemu-79-instance-00000047.
Oct 11 08:57:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:10.982 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f83f6d23-dd54-4f44-acae-f4c96c5d4649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:10.986 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3e387857-a4f4-4ca1-83d5-8f318861334c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:11.020 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d7441961-f1f0-4dd7-a2d8-03af0ac537d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:11.049 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[383718c7-fb45-4b87-aa65-3cd12a965b52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap056c6769-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:cc:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 8, 'rx_bytes': 958, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 8, 'rx_bytes': 958, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477032, 'reachable_time': 36312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330844, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:11.072 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f329ba63-204f-4472-961c-7e32eefae82b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap056c6769-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477053, 'tstamp': 477053}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330846, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap056c6769-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477058, 'tstamp': 477058}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330846, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:11.074 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap056c6769-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:11.079 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap056c6769-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:11.080 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:11.080 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap056c6769-b0, col_values=(('external_ids', {'iface-id': '056a8563-0695-415b-921f-e75fa98e60e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:11.081 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.312 2 DEBUG nova.compute.manager [req-d797affe-eda0-4e0f-8d14-47b9ce4d3ef1 req-4fdc3719-3a97-4b23-859c-64d84d16a9b3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Received event network-vif-plugged-f5e1ba56-7772-43c4-a7a6-9435b3b0796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.313 2 DEBUG oslo_concurrency.lockutils [req-d797affe-eda0-4e0f-8d14-47b9ce4d3ef1 req-4fdc3719-3a97-4b23-859c-64d84d16a9b3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7aed3949-7b95-4887-90c3-3e2c8202cf27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.316 2 DEBUG oslo_concurrency.lockutils [req-d797affe-eda0-4e0f-8d14-47b9ce4d3ef1 req-4fdc3719-3a97-4b23-859c-64d84d16a9b3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7aed3949-7b95-4887-90c3-3e2c8202cf27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.317 2 DEBUG oslo_concurrency.lockutils [req-d797affe-eda0-4e0f-8d14-47b9ce4d3ef1 req-4fdc3719-3a97-4b23-859c-64d84d16a9b3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7aed3949-7b95-4887-90c3-3e2c8202cf27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.317 2 DEBUG nova.compute.manager [req-d797affe-eda0-4e0f-8d14-47b9ce4d3ef1 req-4fdc3719-3a97-4b23-859c-64d84d16a9b3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Processing event network-vif-plugged-f5e1ba56-7772-43c4-a7a6-9435b3b0796c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.348 2 DEBUG nova.network.neutron [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Updating instance_info_cache with network_info: [{"id": "10f3051f-8897-4e57-89f1-4a03aac94192", "address": "fa:16:3e:15:d9:b2", "network": {"id": "36e695f7-a84c-4eed-8d21-d70ce3f1f736", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-876022911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b0903df38254068b8566040cf90343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f3051f-88", "ovs_interfaceid": "10f3051f-8897-4e57-89f1-4a03aac94192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.376 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Releasing lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.376 2 DEBUG nova.compute.manager [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Instance network_info: |[{"id": "10f3051f-8897-4e57-89f1-4a03aac94192", "address": "fa:16:3e:15:d9:b2", "network": {"id": "36e695f7-a84c-4eed-8d21-d70ce3f1f736", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-876022911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b0903df38254068b8566040cf90343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f3051f-88", "ovs_interfaceid": "10f3051f-8897-4e57-89f1-4a03aac94192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.377 2 DEBUG oslo_concurrency.lockutils [req-ea64f53e-9b46-453e-8d28-4dd49fc3f764 req-c04cfbfd-a40a-462e-86aa-535cf4368c9c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.377 2 DEBUG nova.network.neutron [req-ea64f53e-9b46-453e-8d28-4dd49fc3f764 req-c04cfbfd-a40a-462e-86aa-535cf4368c9c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Refreshing network info cache for port 10f3051f-8897-4e57-89f1-4a03aac94192 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.383 2 DEBUG nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Start _get_guest_xml network_info=[{"id": "10f3051f-8897-4e57-89f1-4a03aac94192", "address": "fa:16:3e:15:d9:b2", "network": {"id": "36e695f7-a84c-4eed-8d21-d70ce3f1f736", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-876022911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b0903df38254068b8566040cf90343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f3051f-88", "ovs_interfaceid": "10f3051f-8897-4e57-89f1-4a03aac94192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.390 2 WARNING nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.399 2 DEBUG nova.virt.libvirt.host [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.400 2 DEBUG nova.virt.libvirt.host [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.404 2 DEBUG nova.virt.libvirt.host [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.405 2 DEBUG nova.virt.libvirt.host [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.405 2 DEBUG nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.406 2 DEBUG nova.virt.hardware [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.406 2 DEBUG nova.virt.hardware [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.407 2 DEBUG nova.virt.hardware [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.407 2 DEBUG nova.virt.hardware [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.407 2 DEBUG nova.virt.hardware [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.408 2 DEBUG nova.virt.hardware [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.408 2 DEBUG nova.virt.hardware [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.409 2 DEBUG nova.virt.hardware [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.409 2 DEBUG nova.virt.hardware [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.410 2 DEBUG nova.virt.hardware [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.411 2 DEBUG nova.virt.hardware [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.418 2 DEBUG oslo_concurrency.processutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:11 compute-0 ceph-mon[74313]: pgmap v1608: 321 pgs: 321 active+clean; 498 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 232 op/s
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.756 2 DEBUG oslo_concurrency.lockutils [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.756 2 DEBUG oslo_concurrency.lockutils [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.757 2 DEBUG oslo_concurrency.lockutils [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.757 2 DEBUG oslo_concurrency.lockutils [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.757 2 DEBUG oslo_concurrency.lockutils [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.759 2 INFO nova.compute.manager [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Terminating instance
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.760 2 DEBUG nova.compute.manager [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:57:11 compute-0 kernel: tap9d9032fb-a7 (unregistering): left promiscuous mode
Oct 11 08:57:11 compute-0 NetworkManager[44960]: <info>  [1760173031.8056] device (tap9d9032fb-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:11 compute-0 ovn_controller[152945]: 2025-10-11T08:57:11Z|00602|binding|INFO|Releasing lport 9d9032fb-a79e-4d9c-897c-84768b25ec64 from this chassis (sb_readonly=0)
Oct 11 08:57:11 compute-0 ovn_controller[152945]: 2025-10-11T08:57:11Z|00603|binding|INFO|Setting lport 9d9032fb-a79e-4d9c-897c-84768b25ec64 down in Southbound
Oct 11 08:57:11 compute-0 ovn_controller[152945]: 2025-10-11T08:57:11Z|00604|binding|INFO|Removing iface tap9d9032fb-a7 ovn-installed in OVS
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:11.839 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:11:6a 10.100.0.7'], port_security=['fa:16:3e:fc:11:6a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7e87b7bb-5a07-4c5d-9998-0e1375a226f1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '4', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=9d9032fb-a79e-4d9c-897c-84768b25ec64) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:57:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:11.841 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 9d9032fb-a79e-4d9c-897c-84768b25ec64 in datapath e075bdab-78c4-414f-b270-c41d1c82f498 unbound from our chassis
Oct 11 08:57:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:11.850 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 08:57:11 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000046.scope: Deactivated successfully.
Oct 11 08:57:11 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000046.scope: Consumed 5.223s CPU time.
Oct 11 08:57:11 compute-0 systemd-machined[215705]: Machine qemu-78-instance-00000046 terminated.
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:11.889 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[68bdd626-a652-4bca-96c1-33dd18094b04]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:57:11 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/363444600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:11.927 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6facd60f-d6ff-4091-b4a8-3eb29f2d8cae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:11.930 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5b792fb5-97c8-4865-af4c-f932500f72b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:11 compute-0 nova_compute[260935]: 2025-10-11 08:57:11.959 2 DEBUG oslo_concurrency.processutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:11.973 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[68bdab47-b828-4fdc-97bd-3f347b538cbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:11 compute-0 podman[330870]: 2025-10-11 08:57:11.986260506 +0000 UTC m=+0.084175652 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 11 08:57:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:12.009 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba53b49-dd59-475d-b92a-1f2cfaddd9c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 1000, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 1000, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 36159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330921, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.013 2 DEBUG nova.storage.rbd_utils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] rbd image 075ec27a-70a2-49f6-a097-5738f3407605_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.026 2 DEBUG oslo_concurrency.processutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:12.031 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[668a3501-e85e-40f8-ab7d-db81ae58bdff]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479414, 'tstamp': 479414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330954, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479419, 'tstamp': 479419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330954, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:12.034 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:12 compute-0 podman[330872]: 2025-10-11 08:57:12.036586001 +0000 UTC m=+0.133936111 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:57:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:12.040 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape075bdab-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:12.041 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:12.041 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape075bdab-70, col_values=(('external_ids', {'iface-id': 'b9cf681c-9f4c-4c56-987a-55fa7aa89e1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:12.042 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 498 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 232 op/s
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.073 2 INFO nova.virt.libvirt.driver [-] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Instance destroyed successfully.
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.073 2 DEBUG nova.objects.instance [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'resources' on Instance uuid 7e87b7bb-5a07-4c5d-9998-0e1375a226f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.092 2 DEBUG nova.virt.libvirt.vif [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:56:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-349699878',display_name='tempest-ServersTestJSON-server-349699878',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-349699878',id=70,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:57:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-yuy9dim2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:57:08Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=7e87b7bb-5a07-4c5d-9998-0e1375a226f1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "address": "fa:16:3e:fc:11:6a", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d9032fb-a7", "ovs_interfaceid": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.092 2 DEBUG nova.network.os_vif_util [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "address": "fa:16:3e:fc:11:6a", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d9032fb-a7", "ovs_interfaceid": "9d9032fb-a79e-4d9c-897c-84768b25ec64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.093 2 DEBUG nova.network.os_vif_util [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:11:6a,bridge_name='br-int',has_traffic_filtering=True,id=9d9032fb-a79e-4d9c-897c-84768b25ec64,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d9032fb-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.094 2 DEBUG os_vif [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:11:6a,bridge_name='br-int',has_traffic_filtering=True,id=9d9032fb-a79e-4d9c-897c-84768b25ec64,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d9032fb-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.096 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d9032fb-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.107 2 INFO os_vif [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:11:6a,bridge_name='br-int',has_traffic_filtering=True,id=9d9032fb-a79e-4d9c-897c-84768b25ec64,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d9032fb-a7')
Oct 11 08:57:12 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/363444600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:57:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322638207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.582 2 DEBUG oslo_concurrency.processutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.584 2 DEBUG nova.virt.libvirt.vif [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:57:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-851720095',display_name='tempest-AttachInterfacesUnderV243Test-server-851720095',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-851720095',id=72,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAGxOZfesgB/tCes3ESvhcMptBsCzTvmP/nYl8DHVYghCHuLBja1AiGfG+gOQH9P/HksjxuaZFfe8OX9vWqWGPo1XRvQOf98btMFH9J4OtTO0Jdx6gFgakiqKMlOwZh5hg==',key_name='tempest-keypair-1327467686',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7b0903df38254068b8566040cf90343c',ramdisk_id='',reservation_id='r-57egyjtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1983245910',owner_user_name='tempest-AttachInterfacesUnderV243Test-1983245910-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:57:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bebcc72ba14b4d1e976076ce9fee6080',uuid=075ec27a-70a2-49f6-a097-5738f3407605,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10f3051f-8897-4e57-89f1-4a03aac94192", "address": "fa:16:3e:15:d9:b2", "network": {"id": "36e695f7-a84c-4eed-8d21-d70ce3f1f736", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-876022911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b0903df38254068b8566040cf90343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f3051f-88", "ovs_interfaceid": "10f3051f-8897-4e57-89f1-4a03aac94192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.584 2 DEBUG nova.network.os_vif_util [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Converting VIF {"id": "10f3051f-8897-4e57-89f1-4a03aac94192", "address": "fa:16:3e:15:d9:b2", "network": {"id": "36e695f7-a84c-4eed-8d21-d70ce3f1f736", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-876022911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b0903df38254068b8566040cf90343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f3051f-88", "ovs_interfaceid": "10f3051f-8897-4e57-89f1-4a03aac94192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.585 2 DEBUG nova.network.os_vif_util [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:d9:b2,bridge_name='br-int',has_traffic_filtering=True,id=10f3051f-8897-4e57-89f1-4a03aac94192,network=Network(36e695f7-a84c-4eed-8d21-d70ce3f1f736),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f3051f-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.588 2 DEBUG nova.objects.instance [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lazy-loading 'pci_devices' on Instance uuid 075ec27a-70a2-49f6-a097-5738f3407605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.603 2 INFO nova.virt.libvirt.driver [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Deleting instance files /var/lib/nova/instances/7e87b7bb-5a07-4c5d-9998-0e1375a226f1_del
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.604 2 INFO nova.virt.libvirt.driver [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Deletion of /var/lib/nova/instances/7e87b7bb-5a07-4c5d-9998-0e1375a226f1_del complete
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.610 2 DEBUG nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:57:12 compute-0 nova_compute[260935]:   <uuid>075ec27a-70a2-49f6-a097-5738f3407605</uuid>
Oct 11 08:57:12 compute-0 nova_compute[260935]:   <name>instance-00000048</name>
Oct 11 08:57:12 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:57:12 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:57:12 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-851720095</nova:name>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:57:11</nova:creationTime>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:57:12 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:57:12 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:57:12 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:57:12 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:57:12 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:57:12 compute-0 nova_compute[260935]:         <nova:user uuid="bebcc72ba14b4d1e976076ce9fee6080">tempest-AttachInterfacesUnderV243Test-1983245910-project-member</nova:user>
Oct 11 08:57:12 compute-0 nova_compute[260935]:         <nova:project uuid="7b0903df38254068b8566040cf90343c">tempest-AttachInterfacesUnderV243Test-1983245910</nova:project>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:57:12 compute-0 nova_compute[260935]:         <nova:port uuid="10f3051f-8897-4e57-89f1-4a03aac94192">
Oct 11 08:57:12 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:57:12 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:57:12 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <system>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <entry name="serial">075ec27a-70a2-49f6-a097-5738f3407605</entry>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <entry name="uuid">075ec27a-70a2-49f6-a097-5738f3407605</entry>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     </system>
Oct 11 08:57:12 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:57:12 compute-0 nova_compute[260935]:   <os>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:   </os>
Oct 11 08:57:12 compute-0 nova_compute[260935]:   <features>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:   </features>
Oct 11 08:57:12 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:57:12 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:57:12 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/075ec27a-70a2-49f6-a097-5738f3407605_disk">
Oct 11 08:57:12 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       </source>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:57:12 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/075ec27a-70a2-49f6-a097-5738f3407605_disk.config">
Oct 11 08:57:12 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       </source>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:57:12 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:15:d9:b2"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <target dev="tap10f3051f-88"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/075ec27a-70a2-49f6-a097-5738f3407605/console.log" append="off"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <video>
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     </video>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:57:12 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:57:12 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:57:12 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:57:12 compute-0 nova_compute[260935]: </domain>
Oct 11 08:57:12 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.611 2 DEBUG nova.compute.manager [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Preparing to wait for external event network-vif-plugged-10f3051f-8897-4e57-89f1-4a03aac94192 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.611 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Acquiring lock "075ec27a-70a2-49f6-a097-5738f3407605-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.612 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lock "075ec27a-70a2-49f6-a097-5738f3407605-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.612 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lock "075ec27a-70a2-49f6-a097-5738f3407605-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.613 2 DEBUG nova.virt.libvirt.vif [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:57:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-851720095',display_name='tempest-AttachInterfacesUnderV243Test-server-851720095',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-851720095',id=72,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAGxOZfesgB/tCes3ESvhcMptBsCzTvmP/nYl8DHVYghCHuLBja1AiGfG+gOQH9P/HksjxuaZFfe8OX9vWqWGPo1XRvQOf98btMFH9J4OtTO0Jdx6gFgakiqKMlOwZh5hg==',key_name='tempest-keypair-1327467686',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7b0903df38254068b8566040cf90343c',ramdisk_id='',reservation_id='r-57egyjtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1983245910',owner_user_name='tempest-AttachInterfacesUnderV243Test-1983245910-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:57:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bebcc72ba14b4d1e976076ce9fee6080',uuid=075ec27a-70a2-49f6-a097-5738f3407605,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10f3051f-8897-4e57-89f1-4a03aac94192", "address": "fa:16:3e:15:d9:b2", "network": {"id": "36e695f7-a84c-4eed-8d21-d70ce3f1f736", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-876022911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b0903df38254068b8566040cf90343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f3051f-88", "ovs_interfaceid": "10f3051f-8897-4e57-89f1-4a03aac94192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.613 2 DEBUG nova.network.os_vif_util [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Converting VIF {"id": "10f3051f-8897-4e57-89f1-4a03aac94192", "address": "fa:16:3e:15:d9:b2", "network": {"id": "36e695f7-a84c-4eed-8d21-d70ce3f1f736", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-876022911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b0903df38254068b8566040cf90343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f3051f-88", "ovs_interfaceid": "10f3051f-8897-4e57-89f1-4a03aac94192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.614 2 DEBUG nova.network.os_vif_util [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:d9:b2,bridge_name='br-int',has_traffic_filtering=True,id=10f3051f-8897-4e57-89f1-4a03aac94192,network=Network(36e695f7-a84c-4eed-8d21-d70ce3f1f736),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f3051f-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.615 2 DEBUG os_vif [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:d9:b2,bridge_name='br-int',has_traffic_filtering=True,id=10f3051f-8897-4e57-89f1-4a03aac94192,network=Network(36e695f7-a84c-4eed-8d21-d70ce3f1f736),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f3051f-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.622 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10f3051f-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.623 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap10f3051f-88, col_values=(('external_ids', {'iface-id': '10f3051f-8897-4e57-89f1-4a03aac94192', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:d9:b2', 'vm-uuid': '075ec27a-70a2-49f6-a097-5738f3407605'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:12 compute-0 NetworkManager[44960]: <info>  [1760173032.6268] manager: (tap10f3051f-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.634 2 INFO os_vif [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:d9:b2,bridge_name='br-int',has_traffic_filtering=True,id=10f3051f-8897-4e57-89f1-4a03aac94192,network=Network(36e695f7-a84c-4eed-8d21-d70ce3f1f736),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f3051f-88')
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.653 2 INFO nova.compute.manager [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Took 0.89 seconds to destroy the instance on the hypervisor.
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.653 2 DEBUG oslo.service.loopingcall [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.654 2 DEBUG nova.compute.manager [-] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.654 2 DEBUG nova.network.neutron [-] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.697 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.711 2 DEBUG nova.network.neutron [req-ea64f53e-9b46-453e-8d28-4dd49fc3f764 req-c04cfbfd-a40a-462e-86aa-535cf4368c9c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Updated VIF entry in instance network info cache for port 10f3051f-8897-4e57-89f1-4a03aac94192. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.711 2 DEBUG nova.network.neutron [req-ea64f53e-9b46-453e-8d28-4dd49fc3f764 req-c04cfbfd-a40a-462e-86aa-535cf4368c9c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Updating instance_info_cache with network_info: [{"id": "10f3051f-8897-4e57-89f1-4a03aac94192", "address": "fa:16:3e:15:d9:b2", "network": {"id": "36e695f7-a84c-4eed-8d21-d70ce3f1f736", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-876022911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b0903df38254068b8566040cf90343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f3051f-88", "ovs_interfaceid": "10f3051f-8897-4e57-89f1-4a03aac94192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.753 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173032.7529871, 7aed3949-7b95-4887-90c3-3e2c8202cf27 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.754 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] VM Started (Lifecycle Event)
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.755 2 DEBUG nova.compute.manager [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.769 2 DEBUG nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.772 2 INFO nova.virt.libvirt.driver [-] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Instance spawned successfully.
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.773 2 DEBUG nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.803 2 DEBUG nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.803 2 DEBUG nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.804 2 DEBUG nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] No VIF found with MAC fa:16:3e:15:d9:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.817 2 INFO nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Using config drive
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.843 2 DEBUG nova.storage.rbd_utils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] rbd image 075ec27a-70a2-49f6-a097-5738f3407605_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.857 2 DEBUG oslo_concurrency.lockutils [req-ea64f53e-9b46-453e-8d28-4dd49fc3f764 req-c04cfbfd-a40a-462e-86aa-535cf4368c9c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.858 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.862 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.862 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.876 2 DEBUG nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.877 2 DEBUG nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.878 2 DEBUG nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.878 2 DEBUG nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.878 2 DEBUG nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.879 2 DEBUG nova.virt.libvirt.driver [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.883 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.915 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.915 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173032.7530751, 7aed3949-7b95-4887-90c3-3e2c8202cf27 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.915 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] VM Paused (Lifecycle Event)
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.948 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.951 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173032.7688453, 7aed3949-7b95-4887-90c3-3e2c8202cf27 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.951 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] VM Resumed (Lifecycle Event)
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.972 2 INFO nova.compute.manager [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Took 10.42 seconds to spawn the instance on the hypervisor.
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.973 2 DEBUG nova.compute.manager [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.975 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:12 compute-0 nova_compute[260935]: 2025-10-11 08:57:12.983 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.011 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.044 2 INFO nova.compute.manager [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Took 11.60 seconds to build instance.
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.057 2 DEBUG oslo_concurrency.lockutils [None req-289e433c-4e93-4b47-92d3-1e62827612b1 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "7aed3949-7b95-4887-90c3-3e2c8202cf27" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.110 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b297454f-91af-4716-b4f7-6af9f0d7e62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.110 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b297454f-91af-4716-b4f7-6af9f0d7e62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.110 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.252 2 INFO nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Creating config drive at /var/lib/nova/instances/075ec27a-70a2-49f6-a097-5738f3407605/disk.config
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.263 2 DEBUG oslo_concurrency.processutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/075ec27a-70a2-49f6-a097-5738f3407605/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzmu0i4gw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.352 2 DEBUG nova.network.neutron [-] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.369 2 INFO nova.compute.manager [-] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Took 0.72 seconds to deallocate network for instance.
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.383 2 DEBUG nova.compute.manager [req-4de725f7-5661-4487-bc36-55fa2fdbf25f req-d4a187b8-0dcd-41e6-89f8-f62b28fbe14b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Received event network-vif-deleted-9d9032fb-a79e-4d9c-897c-84768b25ec64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.383 2 INFO nova.compute.manager [req-4de725f7-5661-4487-bc36-55fa2fdbf25f req-d4a187b8-0dcd-41e6-89f8-f62b28fbe14b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Neutron deleted interface 9d9032fb-a79e-4d9c-897c-84768b25ec64; detaching it from the instance and deleting it from the info cache
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.384 2 DEBUG nova.network.neutron [req-4de725f7-5661-4487-bc36-55fa2fdbf25f req-d4a187b8-0dcd-41e6-89f8-f62b28fbe14b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.413 2 DEBUG nova.compute.manager [req-4de725f7-5661-4487-bc36-55fa2fdbf25f req-d4a187b8-0dcd-41e6-89f8-f62b28fbe14b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Detach interface failed, port_id=9d9032fb-a79e-4d9c-897c-84768b25ec64, reason: Instance 7e87b7bb-5a07-4c5d-9998-0e1375a226f1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.420 2 DEBUG oslo_concurrency.processutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/075ec27a-70a2-49f6-a097-5738f3407605/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzmu0i4gw" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.457 2 DEBUG nova.storage.rbd_utils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] rbd image 075ec27a-70a2-49f6-a097-5738f3407605_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.461 2 DEBUG oslo_concurrency.processutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/075ec27a-70a2-49f6-a097-5738f3407605/disk.config 075ec27a-70a2-49f6-a097-5738f3407605_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:13 compute-0 ceph-mon[74313]: pgmap v1609: 321 pgs: 321 active+clean; 498 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 232 op/s
Oct 11 08:57:13 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1322638207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.556 2 DEBUG nova.compute.manager [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Received event network-vif-plugged-f5e1ba56-7772-43c4-a7a6-9435b3b0796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.557 2 DEBUG oslo_concurrency.lockutils [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7aed3949-7b95-4887-90c3-3e2c8202cf27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.558 2 DEBUG oslo_concurrency.lockutils [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7aed3949-7b95-4887-90c3-3e2c8202cf27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.558 2 DEBUG oslo_concurrency.lockutils [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7aed3949-7b95-4887-90c3-3e2c8202cf27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.558 2 DEBUG nova.compute.manager [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] No waiting events found dispatching network-vif-plugged-f5e1ba56-7772-43c4-a7a6-9435b3b0796c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.558 2 WARNING nova.compute.manager [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Received unexpected event network-vif-plugged-f5e1ba56-7772-43c4-a7a6-9435b3b0796c for instance with vm_state active and task_state None.
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.559 2 DEBUG nova.compute.manager [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Received event network-vif-unplugged-9d9032fb-a79e-4d9c-897c-84768b25ec64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.559 2 DEBUG oslo_concurrency.lockutils [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.560 2 DEBUG oslo_concurrency.lockutils [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.560 2 DEBUG oslo_concurrency.lockutils [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.560 2 DEBUG nova.compute.manager [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] No waiting events found dispatching network-vif-unplugged-9d9032fb-a79e-4d9c-897c-84768b25ec64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.561 2 WARNING nova.compute.manager [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Received unexpected event network-vif-unplugged-9d9032fb-a79e-4d9c-897c-84768b25ec64 for instance with vm_state deleted and task_state None.
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.561 2 DEBUG nova.compute.manager [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Received event network-vif-plugged-9d9032fb-a79e-4d9c-897c-84768b25ec64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.561 2 DEBUG oslo_concurrency.lockutils [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.561 2 DEBUG oslo_concurrency.lockutils [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.562 2 DEBUG oslo_concurrency.lockutils [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.562 2 DEBUG nova.compute.manager [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] No waiting events found dispatching network-vif-plugged-9d9032fb-a79e-4d9c-897c-84768b25ec64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.562 2 WARNING nova.compute.manager [req-1c3ac393-7525-4847-a2f5-7174dc031341 req-f3489f05-8e81-474d-9abf-c1af5a166dda e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Received unexpected event network-vif-plugged-9d9032fb-a79e-4d9c-897c-84768b25ec64 for instance with vm_state deleted and task_state None.
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.565 2 DEBUG oslo_concurrency.lockutils [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.565 2 DEBUG oslo_concurrency.lockutils [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.670 2 DEBUG oslo_concurrency.processutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/075ec27a-70a2-49f6-a097-5738f3407605/disk.config 075ec27a-70a2-49f6-a097-5738f3407605_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.671 2 INFO nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Deleting local config drive /var/lib/nova/instances/075ec27a-70a2-49f6-a097-5738f3407605/disk.config because it was imported into RBD.
Oct 11 08:57:13 compute-0 NetworkManager[44960]: <info>  [1760173033.7339] manager: (tap10f3051f-88): new Tun device (/org/freedesktop/NetworkManager/Devices/282)
Oct 11 08:57:13 compute-0 kernel: tap10f3051f-88: entered promiscuous mode
Oct 11 08:57:13 compute-0 systemd-udevd[330834]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:57:13 compute-0 ovn_controller[152945]: 2025-10-11T08:57:13Z|00605|binding|INFO|Claiming lport 10f3051f-8897-4e57-89f1-4a03aac94192 for this chassis.
Oct 11 08:57:13 compute-0 ovn_controller[152945]: 2025-10-11T08:57:13Z|00606|binding|INFO|10f3051f-8897-4e57-89f1-4a03aac94192: Claiming fa:16:3e:15:d9:b2 10.100.0.14
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:13 compute-0 NetworkManager[44960]: <info>  [1760173033.7589] device (tap10f3051f-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:57:13 compute-0 NetworkManager[44960]: <info>  [1760173033.7612] device (tap10f3051f-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:57:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:13.782 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:d9:b2 10.100.0.14'], port_security=['fa:16:3e:15:d9:b2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '075ec27a-70a2-49f6-a097-5738f3407605', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36e695f7-a84c-4eed-8d21-d70ce3f1f736', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7b0903df38254068b8566040cf90343c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6330b3dc-321d-4ac7-887a-36f567413dc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7b3f72a-340b-411e-a57a-e434c4495732, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=10f3051f-8897-4e57-89f1-4a03aac94192) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.782 2 DEBUG oslo_concurrency.processutils [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:13 compute-0 ovn_controller[152945]: 2025-10-11T08:57:13Z|00607|binding|INFO|Setting lport 10f3051f-8897-4e57-89f1-4a03aac94192 ovn-installed in OVS
Oct 11 08:57:13 compute-0 ovn_controller[152945]: 2025-10-11T08:57:13Z|00608|binding|INFO|Setting lport 10f3051f-8897-4e57-89f1-4a03aac94192 up in Southbound
Oct 11 08:57:13 compute-0 systemd-machined[215705]: New machine qemu-80-instance-00000048.
Oct 11 08:57:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:13.785 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 10f3051f-8897-4e57-89f1-4a03aac94192 in datapath 36e695f7-a84c-4eed-8d21-d70ce3f1f736 bound to our chassis
Oct 11 08:57:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:13.791 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 36e695f7-a84c-4eed-8d21-d70ce3f1f736
Oct 11 08:57:13 compute-0 systemd[1]: Started Virtual Machine qemu-80-instance-00000048.
Oct 11 08:57:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:13.805 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[141e1bda-3cce-4945-9d1b-04e0a008633b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:13.806 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap36e695f7-a1 in ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:57:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:13.811 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap36e695f7-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:57:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:13.812 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7622ae8c-930d-4557-8840-ae398cdc0159]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:13.817 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f32a3c-eb0f-48e1-8d5a-96ebe2df2ba0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:13.835 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[f375c656-3e60-4e67-94c7-8968bd4d627c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:13 compute-0 nova_compute[260935]: 2025-10-11 08:57:13.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:13.859 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aa93cce9-b6a3-4fbb-a3a7-daad3e457845]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:13.899 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[72fedb69-8251-4b6a-a280-66d29a187778]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:13 compute-0 NetworkManager[44960]: <info>  [1760173033.9114] manager: (tap36e695f7-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/283)
Oct 11 08:57:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:13.919 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1f950bf3-4bd7-4b44-95d0-26b94913050f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:13 compute-0 systemd-udevd[331120]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:57:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:13.963 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0829367d-8557-4e13-8a00-5905e3dffbc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:13.966 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5db1a1c4-f3e8-4bb7-8337-dc9e97bcf078]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:13 compute-0 NetworkManager[44960]: <info>  [1760173033.9994] device (tap36e695f7-a0): carrier: link connected
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:14.007 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[45744916-ab55-4956-acf5-4c4abe682215]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:14.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0f95614e-7434-4701-974d-05e62eb81156]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap36e695f7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:ed:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487870, 'reachable_time': 30027, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331159, 'error': None, 'target': 'ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1610: 321 pgs: 321 active+clean; 536 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 7.9 MiB/s wr, 340 op/s
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:14.055 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[92e80cb5-b0d2-4c4d-96aa-41b748dc2afd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec0:ed23'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487870, 'tstamp': 487870}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331160, 'error': None, 'target': 'ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:14.080 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1f44964d-8761-44e6-bd8b-258887b31374]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap36e695f7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:ed:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487870, 'reachable_time': 30027, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331161, 'error': None, 'target': 'ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:14.135 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[27704d9c-24bb-4fae-8ac3-25003457ce4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:14.223 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4fad41-6ee1-4018-a250-c6c50aa5f81f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:14.225 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36e695f7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:14.226 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:14.226 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36e695f7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:14 compute-0 kernel: tap36e695f7-a0: entered promiscuous mode
Oct 11 08:57:14 compute-0 NetworkManager[44960]: <info>  [1760173034.2287] manager: (tap36e695f7-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/284)
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:14.230 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap36e695f7-a0, col_values=(('external_ids', {'iface-id': 'c084dcc6-5dbe-41b8-aad1-caa55b8cf091'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:14 compute-0 ovn_controller[152945]: 2025-10-11T08:57:14Z|00609|binding|INFO|Releasing lport c084dcc6-5dbe-41b8-aad1-caa55b8cf091 from this chassis (sb_readonly=0)
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:14.252 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/36e695f7-a84c-4eed-8d21-d70ce3f1f736.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/36e695f7-a84c-4eed-8d21-d70ce3f1f736.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:14.254 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3e18521f-55e8-48a1-b09e-803efc71bc5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:14.255 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-36e695f7-a84c-4eed-8d21-d70ce3f1f736
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/36e695f7-a84c-4eed-8d21-d70ce3f1f736.pid.haproxy
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 36e695f7-a84c-4eed-8d21-d70ce3f1f736
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:57:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:14.256 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736', 'env', 'PROCESS_TAG=haproxy-36e695f7-a84c-4eed-8d21-d70ce3f1f736', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/36e695f7-a84c-4eed-8d21-d70ce3f1f736.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:57:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:57:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/336167933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.332 2 DEBUG oslo_concurrency.processutils [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.338 2 DEBUG nova.compute.provider_tree [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.355 2 DEBUG nova.scheduler.client.report [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.377 2 DEBUG oslo_concurrency.lockutils [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.403 2 INFO nova.scheduler.client.report [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Deleted allocations for instance 7e87b7bb-5a07-4c5d-9998-0e1375a226f1
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.443 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Updating instance_info_cache with network_info: [{"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.472 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b297454f-91af-4716-b4f7-6af9f0d7e62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.473 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.474 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.474 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.483 2 DEBUG oslo_concurrency.lockutils [None req-c7265249-4664-4660-a0e2-9d673d6cd2f9 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "7e87b7bb-5a07-4c5d-9998-0e1375a226f1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.508 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.509 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.509 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.510 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.510 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:14 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/336167933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:14 compute-0 podman[331237]: 2025-10-11 08:57:14.796062415 +0000 UTC m=+0.057285684 container create f58ff28e75232a61ed5a6d658aa545c5aafba0f467cca80b788dba39e98beb23 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 08:57:14 compute-0 systemd[1]: Started libpod-conmon-f58ff28e75232a61ed5a6d658aa545c5aafba0f467cca80b788dba39e98beb23.scope.
Oct 11 08:57:14 compute-0 podman[331237]: 2025-10-11 08:57:14.766162223 +0000 UTC m=+0.027385502 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:57:14 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a77a7b73ae016cf1cbf18145d6b521c02b985ba38cb72f8238983023520ab94/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:57:14 compute-0 podman[331237]: 2025-10-11 08:57:14.896489389 +0000 UTC m=+0.157712668 container init f58ff28e75232a61ed5a6d658aa545c5aafba0f467cca80b788dba39e98beb23 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.902 2 DEBUG oslo_concurrency.lockutils [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.904 2 DEBUG oslo_concurrency.lockutils [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.904 2 DEBUG oslo_concurrency.lockutils [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:14 compute-0 podman[331237]: 2025-10-11 08:57:14.904830797 +0000 UTC m=+0.166054086 container start f58ff28e75232a61ed5a6d658aa545c5aafba0f467cca80b788dba39e98beb23 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.905 2 DEBUG oslo_concurrency.lockutils [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.905 2 DEBUG oslo_concurrency.lockutils [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.907 2 INFO nova.compute.manager [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Terminating instance
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.908 2 DEBUG nova.compute.manager [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:57:14 compute-0 neutron-haproxy-ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736[331269]: [NOTICE]   (331273) : New worker (331275) forked
Oct 11 08:57:14 compute-0 neutron-haproxy-ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736[331269]: [NOTICE]   (331273) : Loading success.
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.956 2 INFO nova.compute.manager [None req-9622f333-4270-4304-8152-941c71823a0b 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Pausing
Oct 11 08:57:14 compute-0 nova_compute[260935]: 2025-10-11 08:57:14.959 2 DEBUG nova.objects.instance [None req-9622f333-4270-4304-8152-941c71823a0b 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'flavor' on Instance uuid 7aed3949-7b95-4887-90c3-3e2c8202cf27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:14 compute-0 kernel: tap47a6e23f-ee (unregistering): left promiscuous mode
Oct 11 08:57:15 compute-0 NetworkManager[44960]: <info>  [1760173035.0033] device (tap47a6e23f-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.016 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173035.0162392, 7aed3949-7b95-4887-90c3-3e2c8202cf27 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.017 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] VM Paused (Lifecycle Event)
Oct 11 08:57:15 compute-0 ovn_controller[152945]: 2025-10-11T08:57:15Z|00610|binding|INFO|Releasing lport 47a6e23f-ee4e-4768-bb7e-ab5e3096976f from this chassis (sb_readonly=0)
Oct 11 08:57:15 compute-0 ovn_controller[152945]: 2025-10-11T08:57:15Z|00611|binding|INFO|Setting lport 47a6e23f-ee4e-4768-bb7e-ab5e3096976f down in Southbound
Oct 11 08:57:15 compute-0 ovn_controller[152945]: 2025-10-11T08:57:15Z|00612|binding|INFO|Removing iface tap47a6e23f-ee ovn-installed in OVS
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.025 2 DEBUG nova.compute.manager [None req-9622f333-4270-4304-8152-941c71823a0b 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.030 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:18:4c 10.100.0.4'], port_security=['fa:16:3e:3e:18:4c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'be4cae7e-0c2f-4c19-9a7e-58681faf9523', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '4', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=47a6e23f-ee4e-4768-bb7e-ab5e3096976f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.034 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 47a6e23f-ee4e-4768-bb7e-ab5e3096976f in datapath e075bdab-78c4-414f-b270-c41d1c82f498 unbound from our chassis
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.038 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:57:15 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1763584380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.063 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0bbd01c4-ac3d-4957-b5f3-3dce3ee4c191]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.064 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.069 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:57:15 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000043.scope: Deactivated successfully.
Oct 11 08:57:15 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000043.scope: Consumed 13.226s CPU time.
Oct 11 08:57:15 compute-0 systemd-machined[215705]: Machine qemu-75-instance-00000043 terminated.
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.094 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.102 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.108 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2135d594-6042-42be-862a-772da409fa36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.111 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6e3e78-8167-4d71-a66a-d9b6309a0e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.148 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac21cfa-c7ac-40c3-adb2-120bff436733]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 kernel: tap47a6e23f-ee: entered promiscuous mode
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:15 compute-0 systemd-udevd[331125]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:57:15 compute-0 ovn_controller[152945]: 2025-10-11T08:57:15Z|00613|binding|INFO|Claiming lport 47a6e23f-ee4e-4768-bb7e-ab5e3096976f for this chassis.
Oct 11 08:57:15 compute-0 ovn_controller[152945]: 2025-10-11T08:57:15Z|00614|binding|INFO|47a6e23f-ee4e-4768-bb7e-ab5e3096976f: Claiming fa:16:3e:3e:18:4c 10.100.0.4
Oct 11 08:57:15 compute-0 NetworkManager[44960]: <info>  [1760173035.1725] manager: (tap47a6e23f-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/285)
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.173 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:18:4c 10.100.0.4'], port_security=['fa:16:3e:3e:18:4c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'be4cae7e-0c2f-4c19-9a7e-58681faf9523', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '4', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=47a6e23f-ee4e-4768-bb7e-ab5e3096976f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.179 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[817b0f11-9561-4135-86de-5a4c63af1c29]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 19, 'rx_bytes': 1000, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 19, 'rx_bytes': 1000, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 36159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331295, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 kernel: tap47a6e23f-ee (unregistering): left promiscuous mode
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.188 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173035.1884582, 075ec27a-70a2-49f6-a097-5738f3407605 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.189 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] VM Started (Lifecycle Event)
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.192 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.192 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.193 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:15 compute-0 ovn_controller[152945]: 2025-10-11T08:57:15Z|00615|binding|INFO|Setting lport 47a6e23f-ee4e-4768-bb7e-ab5e3096976f ovn-installed in OVS
Oct 11 08:57:15 compute-0 ovn_controller[152945]: 2025-10-11T08:57:15Z|00616|binding|INFO|Setting lport 47a6e23f-ee4e-4768-bb7e-ab5e3096976f up in Southbound
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.200 2 INFO nova.virt.libvirt.driver [-] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Instance destroyed successfully.
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.201 2 DEBUG nova.objects.instance [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'resources' on Instance uuid be4cae7e-0c2f-4c19-9a7e-58681faf9523 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:15 compute-0 ovn_controller[152945]: 2025-10-11T08:57:15Z|00617|binding|INFO|Releasing lport 47a6e23f-ee4e-4768-bb7e-ab5e3096976f from this chassis (sb_readonly=1)
Oct 11 08:57:15 compute-0 ovn_controller[152945]: 2025-10-11T08:57:15Z|00618|binding|INFO|Removing iface tap47a6e23f-ee ovn-installed in OVS
Oct 11 08:57:15 compute-0 ovn_controller[152945]: 2025-10-11T08:57:15Z|00619|if_status|INFO|Dropped 2 log messages in last 145 seconds (most recently, 145 seconds ago) due to excessive rate
Oct 11 08:57:15 compute-0 ovn_controller[152945]: 2025-10-11T08:57:15Z|00620|if_status|INFO|Not setting lport 47a6e23f-ee4e-4768-bb7e-ab5e3096976f down as sb is readonly
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.206 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7bdbed59-e55c-4d6d-b6a7-b19b3eb9b148]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479414, 'tstamp': 479414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331298, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479419, 'tstamp': 479419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331298, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 ovn_controller[152945]: 2025-10-11T08:57:15Z|00621|binding|INFO|Releasing lport 47a6e23f-ee4e-4768-bb7e-ab5e3096976f from this chassis (sb_readonly=0)
Oct 11 08:57:15 compute-0 ovn_controller[152945]: 2025-10-11T08:57:15Z|00622|binding|INFO|Setting lport 47a6e23f-ee4e-4768-bb7e-ab5e3096976f down in Southbound
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.208 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.212 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:18:4c 10.100.0.4'], port_security=['fa:16:3e:3e:18:4c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'be4cae7e-0c2f-4c19-9a7e-58681faf9523', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '4', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=47a6e23f-ee4e-4768-bb7e-ab5e3096976f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.215 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.216 2 DEBUG nova.virt.libvirt.vif [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:56:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-349699878',display_name='tempest-ServersTestJSON-server-349699878',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-349699878',id=67,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:56:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-g30l2goh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:56:49Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=be4cae7e-0c2f-4c19-9a7e-58681faf9523,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "address": "fa:16:3e:3e:18:4c", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47a6e23f-ee", "ovs_interfaceid": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.216 2 DEBUG nova.network.os_vif_util [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "address": "fa:16:3e:3e:18:4c", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47a6e23f-ee", "ovs_interfaceid": "47a6e23f-ee4e-4768-bb7e-ab5e3096976f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.217 2 DEBUG nova.network.os_vif_util [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:18:4c,bridge_name='br-int',has_traffic_filtering=True,id=47a6e23f-ee4e-4768-bb7e-ab5e3096976f,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47a6e23f-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.218 2 DEBUG os_vif [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:18:4c,bridge_name='br-int',has_traffic_filtering=True,id=47a6e23f-ee4e-4768-bb7e-ab5e3096976f,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47a6e23f-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.220 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47a6e23f-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.224 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173035.1885388, 075ec27a-70a2-49f6-a097-5738f3407605 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.224 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] VM Paused (Lifecycle Event)
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.231 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape075bdab-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.232 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.232 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape075bdab-70, col_values=(('external_ids', {'iface-id': 'b9cf681c-9f4c-4c56-987a-55fa7aa89e1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.232 2 INFO os_vif [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:18:4c,bridge_name='br-int',has_traffic_filtering=True,id=47a6e23f-ee4e-4768-bb7e-ab5e3096976f,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47a6e23f-ee')
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.232 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.233 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 47a6e23f-ee4e-4768-bb7e-ab5e3096976f in datapath e075bdab-78c4-414f-b270-c41d1c82f498 unbound from our chassis
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.235 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.251 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ef652f19-7296-4225-8c7e-ed6e928a5300]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.256 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.264 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.291 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.299 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3fabb241-40de-4221-98ba-8185262037d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.303 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b55b0656-024f-4cae-91a5-6aae9c573869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.337 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.339 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.353 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.355 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.361 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.362 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.367 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5b910955-919a-4856-9c37-eb274d4bd2d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.372 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.372 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.377 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.378 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.383 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.384 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.389 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.389 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.395 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[14671236-6a04-4967-a370-e78ad070b1a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 21, 'rx_bytes': 1000, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 21, 'rx_bytes': 1000, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 36159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331325, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.436 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[60fbe814-0cb5-4869-bf05-6ad4f6bff29b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479414, 'tstamp': 479414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331326, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479419, 'tstamp': 479419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331326, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.439 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.443 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape075bdab-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.444 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.445 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape075bdab-70, col_values=(('external_ids', {'iface-id': 'b9cf681c-9f4c-4c56-987a-55fa7aa89e1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.446 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.447 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 47a6e23f-ee4e-4768-bb7e-ab5e3096976f in datapath e075bdab-78c4-414f-b270-c41d1c82f498 unbound from our chassis
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.450 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.479 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e4865b85-21e7-4c45-8c72-87a45aa1063b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.531 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[54cd3a26-c670-4f49-973f-b1669ca3d72f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.539 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[61699b3f-f714-40c7-ae7c-3a1567b48692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 ceph-mon[74313]: pgmap v1610: 321 pgs: 321 active+clean; 536 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 7.9 MiB/s wr, 340 op/s
Oct 11 08:57:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1763584380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.587 2 DEBUG nova.compute.manager [req-d5d1ed71-1195-4819-b5e7-0726306dceaf req-703dc65e-9aa5-43ac-84cc-0c98afdd6e69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Received event network-vif-plugged-10f3051f-8897-4e57-89f1-4a03aac94192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.588 2 DEBUG oslo_concurrency.lockutils [req-d5d1ed71-1195-4819-b5e7-0726306dceaf req-703dc65e-9aa5-43ac-84cc-0c98afdd6e69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "075ec27a-70a2-49f6-a097-5738f3407605-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.588 2 DEBUG oslo_concurrency.lockutils [req-d5d1ed71-1195-4819-b5e7-0726306dceaf req-703dc65e-9aa5-43ac-84cc-0c98afdd6e69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "075ec27a-70a2-49f6-a097-5738f3407605-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.588 2 DEBUG oslo_concurrency.lockutils [req-d5d1ed71-1195-4819-b5e7-0726306dceaf req-703dc65e-9aa5-43ac-84cc-0c98afdd6e69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "075ec27a-70a2-49f6-a097-5738f3407605-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.588 2 DEBUG nova.compute.manager [req-d5d1ed71-1195-4819-b5e7-0726306dceaf req-703dc65e-9aa5-43ac-84cc-0c98afdd6e69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Processing event network-vif-plugged-10f3051f-8897-4e57-89f1-4a03aac94192 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.589 2 DEBUG nova.compute.manager [req-d5d1ed71-1195-4819-b5e7-0726306dceaf req-703dc65e-9aa5-43ac-84cc-0c98afdd6e69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Received event network-vif-plugged-10f3051f-8897-4e57-89f1-4a03aac94192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.589 2 DEBUG oslo_concurrency.lockutils [req-d5d1ed71-1195-4819-b5e7-0726306dceaf req-703dc65e-9aa5-43ac-84cc-0c98afdd6e69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "075ec27a-70a2-49f6-a097-5738f3407605-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.589 2 DEBUG oslo_concurrency.lockutils [req-d5d1ed71-1195-4819-b5e7-0726306dceaf req-703dc65e-9aa5-43ac-84cc-0c98afdd6e69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "075ec27a-70a2-49f6-a097-5738f3407605-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.589 2 DEBUG oslo_concurrency.lockutils [req-d5d1ed71-1195-4819-b5e7-0726306dceaf req-703dc65e-9aa5-43ac-84cc-0c98afdd6e69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "075ec27a-70a2-49f6-a097-5738f3407605-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.590 2 DEBUG nova.compute.manager [req-d5d1ed71-1195-4819-b5e7-0726306dceaf req-703dc65e-9aa5-43ac-84cc-0c98afdd6e69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] No waiting events found dispatching network-vif-plugged-10f3051f-8897-4e57-89f1-4a03aac94192 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.590 2 WARNING nova.compute.manager [req-d5d1ed71-1195-4819-b5e7-0726306dceaf req-703dc65e-9aa5-43ac-84cc-0c98afdd6e69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Received unexpected event network-vif-plugged-10f3051f-8897-4e57-89f1-4a03aac94192 for instance with vm_state building and task_state spawning.
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.590 2 DEBUG nova.compute.manager [req-d5d1ed71-1195-4819-b5e7-0726306dceaf req-703dc65e-9aa5-43ac-84cc-0c98afdd6e69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Received event network-vif-unplugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.590 2 DEBUG oslo_concurrency.lockutils [req-d5d1ed71-1195-4819-b5e7-0726306dceaf req-703dc65e-9aa5-43ac-84cc-0c98afdd6e69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.590 2 DEBUG oslo_concurrency.lockutils [req-d5d1ed71-1195-4819-b5e7-0726306dceaf req-703dc65e-9aa5-43ac-84cc-0c98afdd6e69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.591 2 DEBUG oslo_concurrency.lockutils [req-d5d1ed71-1195-4819-b5e7-0726306dceaf req-703dc65e-9aa5-43ac-84cc-0c98afdd6e69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.591 2 DEBUG nova.compute.manager [req-d5d1ed71-1195-4819-b5e7-0726306dceaf req-703dc65e-9aa5-43ac-84cc-0c98afdd6e69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] No waiting events found dispatching network-vif-unplugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.591 2 DEBUG nova.compute.manager [req-d5d1ed71-1195-4819-b5e7-0726306dceaf req-703dc65e-9aa5-43ac-84cc-0c98afdd6e69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Received event network-vif-unplugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.593 2 DEBUG nova.compute.manager [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.598 2 DEBUG nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.600 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173035.5980887, 075ec27a-70a2-49f6-a097-5738f3407605 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.601 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] VM Resumed (Lifecycle Event)
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.607 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[acfd38a4-e588-4b24-aa24-c01ba074d6b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.611 2 INFO nova.virt.libvirt.driver [-] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Instance spawned successfully.
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.612 2 DEBUG nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.634 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.637 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e396c6-b1ed-4ba4-a4f1-b34569b5b714]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 23, 'rx_bytes': 1000, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 23, 'rx_bytes': 1000, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 36159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331333, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.642 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.646 2 DEBUG nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.646 2 DEBUG nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.647 2 DEBUG nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.647 2 DEBUG nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.647 2 DEBUG nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.648 2 DEBUG nova.virt.libvirt.driver [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.670 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ded37bf2-c89b-4ebe-b380-59c0b332bb99]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479414, 'tstamp': 479414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331334, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479419, 'tstamp': 479419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331334, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.673 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.676 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape075bdab-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.676 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.677 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape075bdab-70, col_values=(('external_ids', {'iface-id': 'b9cf681c-9f4c-4c56-987a-55fa7aa89e1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:15.677 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.685 2 INFO nova.virt.libvirt.driver [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Deleting instance files /var/lib/nova/instances/be4cae7e-0c2f-4c19-9a7e-58681faf9523_del
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.686 2 INFO nova.virt.libvirt.driver [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Deletion of /var/lib/nova/instances/be4cae7e-0c2f-4c19-9a7e-58681faf9523_del complete
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.689 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.748 2 INFO nova.compute.manager [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Took 8.04 seconds to spawn the instance on the hypervisor.
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.749 2 DEBUG nova.compute.manager [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.760 2 INFO nova.compute.manager [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Took 0.85 seconds to destroy the instance on the hypervisor.
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.760 2 DEBUG oslo.service.loopingcall [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.760 2 DEBUG nova.compute.manager [-] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.761 2 DEBUG nova.network.neutron [-] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.787 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.788 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3033MB free_disk=59.729610443115234GB free_vcpus=1 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.788 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.788 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.801 2 INFO nova.compute.manager [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Took 9.27 seconds to build instance.
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.853 2 DEBUG oslo_concurrency.lockutils [None req-65f6f63e-75c5-4f43-a2f3-f6ef8987b9d7 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lock "075ec27a-70a2-49f6-a097-5738f3407605" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.893 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 98d8ebd6-0917-49cf-8efc-a245486424bc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.894 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b297454f-91af-4716-b4f7-6af9f0d7e62d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.894 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance be4cae7e-0c2f-4c19-9a7e-58681faf9523 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.895 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 5c74d1f2-66dc-474b-b669-445115cf6b1d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.895 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 80799b12-9add-4561-a8eb-f1cf3c1f93a1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.895 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 7aed3949-7b95-4887-90c3-3e2c8202cf27 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.896 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 075ec27a-70a2-49f6-a097-5738f3407605 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.896 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:57:15 compute-0 nova_compute[260935]: 2025-10-11 08:57:15.896 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1408MB phys_disk=59GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:57:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1611: 321 pgs: 321 active+clean; 536 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 263 op/s
Oct 11 08:57:16 compute-0 nova_compute[260935]: 2025-10-11 08:57:16.134 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:16 compute-0 nova_compute[260935]: 2025-10-11 08:57:16.347 2 DEBUG nova.network.neutron [-] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:16 compute-0 nova_compute[260935]: 2025-10-11 08:57:16.368 2 INFO nova.compute.manager [-] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Took 0.61 seconds to deallocate network for instance.
Oct 11 08:57:16 compute-0 nova_compute[260935]: 2025-10-11 08:57:16.419 2 DEBUG nova.compute.manager [req-bb5f4dbf-6992-4a3d-9e43-464c3815bbb2 req-a5736477-eb95-4fb9-a985-9852cffbdb36 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Received event network-vif-deleted-47a6e23f-ee4e-4768-bb7e-ab5e3096976f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:16 compute-0 nova_compute[260935]: 2025-10-11 08:57:16.422 2 DEBUG oslo_concurrency.lockutils [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:57:16 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2373008351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:16 compute-0 nova_compute[260935]: 2025-10-11 08:57:16.661 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:16 compute-0 nova_compute[260935]: 2025-10-11 08:57:16.672 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:57:16 compute-0 nova_compute[260935]: 2025-10-11 08:57:16.696 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:57:16 compute-0 nova_compute[260935]: 2025-10-11 08:57:16.725 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:57:16 compute-0 nova_compute[260935]: 2025-10-11 08:57:16.726 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:16 compute-0 nova_compute[260935]: 2025-10-11 08:57:16.727 2 DEBUG oslo_concurrency.lockutils [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:16 compute-0 nova_compute[260935]: 2025-10-11 08:57:16.891 2 DEBUG oslo_concurrency.processutils [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:57:17 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2116410291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.376 2 DEBUG oslo_concurrency.processutils [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.385 2 DEBUG nova.compute.provider_tree [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.415 2 DEBUG nova.scheduler.client.report [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.451 2 DEBUG oslo_concurrency.lockutils [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.497 2 INFO nova.scheduler.client.report [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Deleted allocations for instance be4cae7e-0c2f-4c19-9a7e-58681faf9523
Oct 11 08:57:17 compute-0 ceph-mon[74313]: pgmap v1611: 321 pgs: 321 active+clean; 536 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 263 op/s
Oct 11 08:57:17 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2373008351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:17 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2116410291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.601 2 DEBUG oslo_concurrency.lockutils [None req-5571b786-eb56-41fc-93a7-8f3ec7962abc ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.813 2 DEBUG oslo_concurrency.lockutils [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "7aed3949-7b95-4887-90c3-3e2c8202cf27" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.814 2 DEBUG oslo_concurrency.lockutils [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "7aed3949-7b95-4887-90c3-3e2c8202cf27" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.815 2 INFO nova.compute.manager [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Shelving
Oct 11 08:57:17 compute-0 kernel: tapf5e1ba56-77 (unregistering): left promiscuous mode
Oct 11 08:57:17 compute-0 NetworkManager[44960]: <info>  [1760173037.8983] device (tapf5e1ba56-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.904 2 DEBUG nova.compute.manager [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Received event network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.904 2 DEBUG oslo_concurrency.lockutils [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.905 2 DEBUG oslo_concurrency.lockutils [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.906 2 DEBUG oslo_concurrency.lockutils [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.907 2 DEBUG nova.compute.manager [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] No waiting events found dispatching network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.907 2 WARNING nova.compute.manager [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Received unexpected event network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f for instance with vm_state deleted and task_state None.
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.907 2 DEBUG nova.compute.manager [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Received event network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.907 2 DEBUG oslo_concurrency.lockutils [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.908 2 DEBUG oslo_concurrency.lockutils [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:17 compute-0 ovn_controller[152945]: 2025-10-11T08:57:17Z|00623|binding|INFO|Releasing lport f5e1ba56-7772-43c4-a7a6-9435b3b0796c from this chassis (sb_readonly=0)
Oct 11 08:57:17 compute-0 ovn_controller[152945]: 2025-10-11T08:57:17Z|00624|binding|INFO|Setting lport f5e1ba56-7772-43c4-a7a6-9435b3b0796c down in Southbound
Oct 11 08:57:17 compute-0 ovn_controller[152945]: 2025-10-11T08:57:17Z|00625|binding|INFO|Removing iface tapf5e1ba56-77 ovn-installed in OVS
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.908 2 DEBUG oslo_concurrency.lockutils [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.920 2 DEBUG nova.compute.manager [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] No waiting events found dispatching network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.921 2 WARNING nova.compute.manager [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Received unexpected event network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f for instance with vm_state deleted and task_state None.
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.922 2 DEBUG nova.compute.manager [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Received event network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.922 2 DEBUG oslo_concurrency.lockutils [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.923 2 DEBUG oslo_concurrency.lockutils [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.923 2 DEBUG oslo_concurrency.lockutils [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.924 2 DEBUG nova.compute.manager [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] No waiting events found dispatching network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.925 2 WARNING nova.compute.manager [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Received unexpected event network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f for instance with vm_state deleted and task_state None.
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.925 2 DEBUG nova.compute.manager [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Received event network-vif-unplugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.926 2 DEBUG oslo_concurrency.lockutils [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.926 2 DEBUG oslo_concurrency.lockutils [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.927 2 DEBUG oslo_concurrency.lockutils [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.927 2 DEBUG nova.compute.manager [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] No waiting events found dispatching network-vif-unplugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.928 2 WARNING nova.compute.manager [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Received unexpected event network-vif-unplugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f for instance with vm_state deleted and task_state None.
Oct 11 08:57:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:17.928 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:33:ea 10.100.0.4'], port_security=['fa:16:3e:39:33:ea 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7aed3949-7b95-4887-90c3-3e2c8202cf27', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73adfb8cf0c64359b1f33a9643148ef4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '88bbbe16-d865-47df-a62a-312128d64455', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cfa1fc9-121e-4e0a-bd08-716b82275316, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f5e1ba56-7772-43c4-a7a6-9435b3b0796c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:57:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:17.931 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f5e1ba56-7772-43c4-a7a6-9435b3b0796c in datapath 056c6769-bc97-4ae9-9759-4cc2d984a31d unbound from our chassis
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.928 2 DEBUG nova.compute.manager [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Received event network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.929 2 DEBUG oslo_concurrency.lockutils [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.929 2 DEBUG oslo_concurrency.lockutils [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.930 2 DEBUG oslo_concurrency.lockutils [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "be4cae7e-0c2f-4c19-9a7e-58681faf9523-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.930 2 DEBUG nova.compute.manager [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] No waiting events found dispatching network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.930 2 WARNING nova.compute.manager [req-8f96a950-c528-4714-81b8-2327211f7e8b req-95f25a33-dbc5-4b06-8007-32e957620f6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Received unexpected event network-vif-plugged-47a6e23f-ee4e-4768-bb7e-ab5e3096976f for instance with vm_state deleted and task_state None.
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:17.938 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 056c6769-bc97-4ae9-9759-4cc2d984a31d
Oct 11 08:57:17 compute-0 nova_compute[260935]: 2025-10-11 08:57:17.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:17.967 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2c5655a6-a796-40eb-baad-fa690a67e3f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:17 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000047.scope: Deactivated successfully.
Oct 11 08:57:17 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000047.scope: Consumed 3.840s CPU time.
Oct 11 08:57:17 compute-0 systemd-machined[215705]: Machine qemu-79-instance-00000047 terminated.
Oct 11 08:57:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:18.009 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[15d1a9f5-0cb4-470b-af80-90683d532b63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:18.013 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bdde701b-88d7-492a-9222-992cc7400ac7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 419 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 5.7 MiB/s wr, 446 op/s
Oct 11 08:57:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:18.052 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ec32008a-5ac5-4652-89f5-0256b7d049bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:18 compute-0 NetworkManager[44960]: <info>  [1760173038.0659] manager: (tapf5e1ba56-77): new Tun device (/org/freedesktop/NetworkManager/Devices/286)
Oct 11 08:57:18 compute-0 nova_compute[260935]: 2025-10-11 08:57:18.091 2 INFO nova.virt.libvirt.driver [-] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Instance destroyed successfully.
Oct 11 08:57:18 compute-0 nova_compute[260935]: 2025-10-11 08:57:18.093 2 DEBUG nova.objects.instance [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7aed3949-7b95-4887-90c3-3e2c8202cf27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:18.096 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1236e865-921a-4517-9715-6564e0e976a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap056c6769-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:cc:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477032, 'reachable_time': 36312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331394, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:18.123 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[562569f1-f391-4bac-be98-8cce34020892]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap056c6769-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477053, 'tstamp': 477053}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331404, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap056c6769-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477058, 'tstamp': 477058}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331404, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:18.125 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap056c6769-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:18 compute-0 nova_compute[260935]: 2025-10-11 08:57:18.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:18 compute-0 nova_compute[260935]: 2025-10-11 08:57:18.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:18.197 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap056c6769-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:18.197 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:18.197 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap056c6769-b0, col_values=(('external_ids', {'iface-id': '056a8563-0695-415b-921f-e75fa98e60e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:18.198 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:18 compute-0 nova_compute[260935]: 2025-10-11 08:57:18.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:57:18 compute-0 nova_compute[260935]: 2025-10-11 08:57:18.487 2 INFO nova.virt.libvirt.driver [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Beginning cold snapshot process
Oct 11 08:57:18 compute-0 nova_compute[260935]: 2025-10-11 08:57:18.699 2 DEBUG nova.virt.libvirt.imagebackend [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 08:57:18 compute-0 nova_compute[260935]: 2025-10-11 08:57:18.801 2 DEBUG nova.virt.libvirt.driver [None req-896d3807-46c3-4ce1-8842-872f85ece55b 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 08:57:18 compute-0 nova_compute[260935]: 2025-10-11 08:57:18.991 2 DEBUG nova.storage.rbd_utils [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] creating snapshot(5c1dc56e6abc443a9f12f862683fd141) on rbd image(7aed3949-7b95-4887-90c3-3e2c8202cf27_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:57:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Oct 11 08:57:19 compute-0 ceph-mon[74313]: pgmap v1612: 321 pgs: 321 active+clean; 419 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 5.7 MiB/s wr, 446 op/s
Oct 11 08:57:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Oct 11 08:57:19 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Oct 11 08:57:19 compute-0 ovn_controller[152945]: 2025-10-11T08:57:19Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:db:18 10.100.0.12
Oct 11 08:57:19 compute-0 ovn_controller[152945]: 2025-10-11T08:57:19Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:db:18 10.100.0.12
Oct 11 08:57:19 compute-0 nova_compute[260935]: 2025-10-11 08:57:19.683 2 DEBUG nova.storage.rbd_utils [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] cloning vms/7aed3949-7b95-4887-90c3-3e2c8202cf27_disk@5c1dc56e6abc443a9f12f862683fd141 to images/9c0551a6-e9bd-422a-a7ce-23fca57baa23 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:57:19 compute-0 nova_compute[260935]: 2025-10-11 08:57:19.883 2 DEBUG nova.storage.rbd_utils [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] flattening images/9c0551a6-e9bd-422a-a7ce-23fca57baa23 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:57:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1614: 321 pgs: 321 active+clean; 419 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 2.2 MiB/s wr, 348 op/s
Oct 11 08:57:20 compute-0 nova_compute[260935]: 2025-10-11 08:57:20.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:20 compute-0 nova_compute[260935]: 2025-10-11 08:57:20.226 2 DEBUG nova.compute.manager [req-a4e20132-5d45-441f-a8f9-906aa838cee1 req-b7539a07-a495-472d-a4e7-672776953979 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Received event network-vif-unplugged-f5e1ba56-7772-43c4-a7a6-9435b3b0796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:20 compute-0 nova_compute[260935]: 2025-10-11 08:57:20.226 2 DEBUG oslo_concurrency.lockutils [req-a4e20132-5d45-441f-a8f9-906aa838cee1 req-b7539a07-a495-472d-a4e7-672776953979 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7aed3949-7b95-4887-90c3-3e2c8202cf27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:20 compute-0 nova_compute[260935]: 2025-10-11 08:57:20.227 2 DEBUG oslo_concurrency.lockutils [req-a4e20132-5d45-441f-a8f9-906aa838cee1 req-b7539a07-a495-472d-a4e7-672776953979 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7aed3949-7b95-4887-90c3-3e2c8202cf27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:20 compute-0 nova_compute[260935]: 2025-10-11 08:57:20.227 2 DEBUG oslo_concurrency.lockutils [req-a4e20132-5d45-441f-a8f9-906aa838cee1 req-b7539a07-a495-472d-a4e7-672776953979 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7aed3949-7b95-4887-90c3-3e2c8202cf27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:20 compute-0 nova_compute[260935]: 2025-10-11 08:57:20.227 2 DEBUG nova.compute.manager [req-a4e20132-5d45-441f-a8f9-906aa838cee1 req-b7539a07-a495-472d-a4e7-672776953979 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] No waiting events found dispatching network-vif-unplugged-f5e1ba56-7772-43c4-a7a6-9435b3b0796c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:20 compute-0 nova_compute[260935]: 2025-10-11 08:57:20.227 2 WARNING nova.compute.manager [req-a4e20132-5d45-441f-a8f9-906aa838cee1 req-b7539a07-a495-472d-a4e7-672776953979 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Received unexpected event network-vif-unplugged-f5e1ba56-7772-43c4-a7a6-9435b3b0796c for instance with vm_state paused and task_state shelving_image_uploading.
Oct 11 08:57:20 compute-0 nova_compute[260935]: 2025-10-11 08:57:20.228 2 DEBUG nova.compute.manager [req-a4e20132-5d45-441f-a8f9-906aa838cee1 req-b7539a07-a495-472d-a4e7-672776953979 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Received event network-vif-plugged-f5e1ba56-7772-43c4-a7a6-9435b3b0796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:20 compute-0 nova_compute[260935]: 2025-10-11 08:57:20.228 2 DEBUG oslo_concurrency.lockutils [req-a4e20132-5d45-441f-a8f9-906aa838cee1 req-b7539a07-a495-472d-a4e7-672776953979 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7aed3949-7b95-4887-90c3-3e2c8202cf27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:20 compute-0 nova_compute[260935]: 2025-10-11 08:57:20.228 2 DEBUG oslo_concurrency.lockutils [req-a4e20132-5d45-441f-a8f9-906aa838cee1 req-b7539a07-a495-472d-a4e7-672776953979 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7aed3949-7b95-4887-90c3-3e2c8202cf27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:20 compute-0 nova_compute[260935]: 2025-10-11 08:57:20.229 2 DEBUG oslo_concurrency.lockutils [req-a4e20132-5d45-441f-a8f9-906aa838cee1 req-b7539a07-a495-472d-a4e7-672776953979 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7aed3949-7b95-4887-90c3-3e2c8202cf27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:20 compute-0 nova_compute[260935]: 2025-10-11 08:57:20.229 2 DEBUG nova.compute.manager [req-a4e20132-5d45-441f-a8f9-906aa838cee1 req-b7539a07-a495-472d-a4e7-672776953979 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] No waiting events found dispatching network-vif-plugged-f5e1ba56-7772-43c4-a7a6-9435b3b0796c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:20 compute-0 nova_compute[260935]: 2025-10-11 08:57:20.229 2 WARNING nova.compute.manager [req-a4e20132-5d45-441f-a8f9-906aa838cee1 req-b7539a07-a495-472d-a4e7-672776953979 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Received unexpected event network-vif-plugged-f5e1ba56-7772-43c4-a7a6-9435b3b0796c for instance with vm_state paused and task_state shelving_image_uploading.
Oct 11 08:57:20 compute-0 nova_compute[260935]: 2025-10-11 08:57:20.248 2 DEBUG nova.storage.rbd_utils [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] removing snapshot(5c1dc56e6abc443a9f12f862683fd141) on rbd image(7aed3949-7b95-4887-90c3-3e2c8202cf27_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:57:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Oct 11 08:57:20 compute-0 ceph-mon[74313]: osdmap e231: 3 total, 3 up, 3 in
Oct 11 08:57:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Oct 11 08:57:20 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Oct 11 08:57:20 compute-0 nova_compute[260935]: 2025-10-11 08:57:20.658 2 DEBUG nova.storage.rbd_utils [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] creating snapshot(snap) on rbd image(9c0551a6-e9bd-422a-a7ce-23fca57baa23) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:57:21 compute-0 ceph-mon[74313]: pgmap v1614: 321 pgs: 321 active+clean; 419 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 2.2 MiB/s wr, 348 op/s
Oct 11 08:57:21 compute-0 ceph-mon[74313]: osdmap e232: 3 total, 3 up, 3 in
Oct 11 08:57:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Oct 11 08:57:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Oct 11 08:57:21 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Oct 11 08:57:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 419 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 34 KiB/s wr, 365 op/s
Oct 11 08:57:22 compute-0 sshd-session[331547]: Invalid user qq from 152.32.213.170 port 32860
Oct 11 08:57:22 compute-0 sshd-session[331547]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 08:57:22 compute-0 sshd-session[331547]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 08:57:22 compute-0 nova_compute[260935]: 2025-10-11 08:57:22.430 2 DEBUG nova.compute.manager [req-ec8575e1-11cf-4f63-9190-030eb37f9171 req-20383cbb-f409-4676-b2d4-f00290e1450d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Received event network-changed-10f3051f-8897-4e57-89f1-4a03aac94192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:22 compute-0 nova_compute[260935]: 2025-10-11 08:57:22.431 2 DEBUG nova.compute.manager [req-ec8575e1-11cf-4f63-9190-030eb37f9171 req-20383cbb-f409-4676-b2d4-f00290e1450d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Refreshing instance network info cache due to event network-changed-10f3051f-8897-4e57-89f1-4a03aac94192. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:57:22 compute-0 nova_compute[260935]: 2025-10-11 08:57:22.432 2 DEBUG oslo_concurrency.lockutils [req-ec8575e1-11cf-4f63-9190-030eb37f9171 req-20383cbb-f409-4676-b2d4-f00290e1450d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:22 compute-0 nova_compute[260935]: 2025-10-11 08:57:22.432 2 DEBUG oslo_concurrency.lockutils [req-ec8575e1-11cf-4f63-9190-030eb37f9171 req-20383cbb-f409-4676-b2d4-f00290e1450d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:22 compute-0 nova_compute[260935]: 2025-10-11 08:57:22.433 2 DEBUG nova.network.neutron [req-ec8575e1-11cf-4f63-9190-030eb37f9171 req-20383cbb-f409-4676-b2d4-f00290e1450d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Refreshing network info cache for port 10f3051f-8897-4e57-89f1-4a03aac94192 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:57:22 compute-0 ceph-mon[74313]: osdmap e233: 3 total, 3 up, 3 in
Oct 11 08:57:22 compute-0 ceph-mon[74313]: pgmap v1617: 321 pgs: 321 active+clean; 419 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 34 KiB/s wr, 365 op/s
Oct 11 08:57:23 compute-0 nova_compute[260935]: 2025-10-11 08:57:23.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:23 compute-0 nova_compute[260935]: 2025-10-11 08:57:23.378 2 INFO nova.virt.libvirt.driver [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Snapshot image upload complete
Oct 11 08:57:23 compute-0 nova_compute[260935]: 2025-10-11 08:57:23.380 2 DEBUG nova.compute.manager [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:57:23 compute-0 nova_compute[260935]: 2025-10-11 08:57:23.455 2 INFO nova.compute.manager [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Shelve offloading
Oct 11 08:57:23 compute-0 nova_compute[260935]: 2025-10-11 08:57:23.468 2 INFO nova.virt.libvirt.driver [-] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Instance destroyed successfully.
Oct 11 08:57:23 compute-0 nova_compute[260935]: 2025-10-11 08:57:23.468 2 DEBUG nova.compute.manager [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:23 compute-0 nova_compute[260935]: 2025-10-11 08:57:23.472 2 DEBUG oslo_concurrency.lockutils [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "refresh_cache-7aed3949-7b95-4887-90c3-3e2c8202cf27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:23 compute-0 nova_compute[260935]: 2025-10-11 08:57:23.472 2 DEBUG oslo_concurrency.lockutils [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquired lock "refresh_cache-7aed3949-7b95-4887-90c3-3e2c8202cf27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:23 compute-0 nova_compute[260935]: 2025-10-11 08:57:23.473 2 DEBUG nova.network.neutron [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:57:23 compute-0 sshd-session[331547]: Failed password for invalid user qq from 152.32.213.170 port 32860 ssh2
Oct 11 08:57:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1618: 321 pgs: 321 active+clean; 498 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 7.8 MiB/s wr, 248 op/s
Oct 11 08:57:24 compute-0 nova_compute[260935]: 2025-10-11 08:57:24.071 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:24 compute-0 nova_compute[260935]: 2025-10-11 08:57:24.072 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:24 compute-0 nova_compute[260935]: 2025-10-11 08:57:24.122 2 DEBUG nova.compute.manager [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:57:24 compute-0 nova_compute[260935]: 2025-10-11 08:57:24.225 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:24 compute-0 nova_compute[260935]: 2025-10-11 08:57:24.226 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:24 compute-0 nova_compute[260935]: 2025-10-11 08:57:24.240 2 DEBUG nova.virt.hardware [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:57:24 compute-0 nova_compute[260935]: 2025-10-11 08:57:24.242 2 INFO nova.compute.claims [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:57:24 compute-0 nova_compute[260935]: 2025-10-11 08:57:24.447 2 DEBUG nova.network.neutron [req-ec8575e1-11cf-4f63-9190-030eb37f9171 req-20383cbb-f409-4676-b2d4-f00290e1450d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Updated VIF entry in instance network info cache for port 10f3051f-8897-4e57-89f1-4a03aac94192. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:57:24 compute-0 nova_compute[260935]: 2025-10-11 08:57:24.448 2 DEBUG nova.network.neutron [req-ec8575e1-11cf-4f63-9190-030eb37f9171 req-20383cbb-f409-4676-b2d4-f00290e1450d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Updating instance_info_cache with network_info: [{"id": "10f3051f-8897-4e57-89f1-4a03aac94192", "address": "fa:16:3e:15:d9:b2", "network": {"id": "36e695f7-a84c-4eed-8d21-d70ce3f1f736", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-876022911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b0903df38254068b8566040cf90343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f3051f-88", "ovs_interfaceid": "10f3051f-8897-4e57-89f1-4a03aac94192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:24 compute-0 nova_compute[260935]: 2025-10-11 08:57:24.472 2 DEBUG oslo_concurrency.lockutils [req-ec8575e1-11cf-4f63-9190-030eb37f9171 req-20383cbb-f409-4676-b2d4-f00290e1450d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:24 compute-0 nova_compute[260935]: 2025-10-11 08:57:24.513 2 DEBUG oslo_concurrency.processutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:57:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:57:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:57:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:57:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:57:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:57:24 compute-0 sshd-session[331547]: Received disconnect from 152.32.213.170 port 32860:11: Bye Bye [preauth]
Oct 11 08:57:24 compute-0 sshd-session[331547]: Disconnected from invalid user qq 152.32.213.170 port 32860 [preauth]
Oct 11 08:57:24 compute-0 nova_compute[260935]: 2025-10-11 08:57:24.852 2 DEBUG nova.network.neutron [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Updating instance_info_cache with network_info: [{"id": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "address": "fa:16:3e:39:33:ea", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5e1ba56-77", "ovs_interfaceid": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:24 compute-0 nova_compute[260935]: 2025-10-11 08:57:24.872 2 DEBUG oslo_concurrency.lockutils [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Releasing lock "refresh_cache-7aed3949-7b95-4887-90c3-3e2c8202cf27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:57:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/529401236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:24 compute-0 nova_compute[260935]: 2025-10-11 08:57:24.998 2 DEBUG oslo_concurrency.processutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.011 2 DEBUG nova.compute.provider_tree [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.027 2 DEBUG nova.scheduler.client.report [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.052 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.053 2 DEBUG nova.compute.manager [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.102 2 DEBUG nova.compute.manager [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.103 2 DEBUG nova.network.neutron [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:57:25 compute-0 ceph-mon[74313]: pgmap v1618: 321 pgs: 321 active+clean; 498 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 7.8 MiB/s wr, 248 op/s
Oct 11 08:57:25 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/529401236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.122 2 INFO nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.141 2 DEBUG nova.compute.manager [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.221 2 DEBUG nova.compute.manager [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.223 2 DEBUG nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.223 2 INFO nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Creating image(s)
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.251 2 DEBUG nova.storage.rbd_utils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.276 2 DEBUG nova.storage.rbd_utils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.301 2 DEBUG nova.storage.rbd_utils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.308 2 DEBUG oslo_concurrency.processutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.373 2 DEBUG nova.policy [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee0e5fedb9fc464eb2a9ac362f5e0749', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.409 2 DEBUG oslo_concurrency.processutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.411 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.412 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.412 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.437 2 DEBUG nova.storage.rbd_utils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.441 2 DEBUG oslo_concurrency.processutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.758 2 INFO nova.virt.libvirt.driver [-] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Instance destroyed successfully.
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.759 2 DEBUG nova.objects.instance [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'resources' on Instance uuid 7aed3949-7b95-4887-90c3-3e2c8202cf27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.771 2 DEBUG oslo_concurrency.processutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.805 2 DEBUG nova.virt.libvirt.vif [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:57:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2012099789',display_name='tempest-ServerActionsTestOtherB-server-2012099789',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2012099789',id=71,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:57:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='73adfb8cf0c64359b1f33a9643148ef4',ramdisk_id='',reservation_id='r-3md42ked',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1445504716',owner_user_name='tempest-ServerActionsTestOtherB-1445504716-project-member',shelved_at='2025-10-11T08:57:23.379931',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='9c0551a6-e9bd-422a-a7ce-23fca57baa23'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:57:18Z,user_data=None,user_id='8d5f5f07c57c467286168be7c097bf26',uuid=7aed3949-7b95-4887-90c3-3e2c8202cf27,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "address": "fa:16:3e:39:33:ea", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5e1ba56-77", "ovs_interfaceid": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.806 2 DEBUG nova.network.os_vif_util [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converting VIF {"id": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "address": "fa:16:3e:39:33:ea", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5e1ba56-77", "ovs_interfaceid": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.807 2 DEBUG nova.network.os_vif_util [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:33:ea,bridge_name='br-int',has_traffic_filtering=True,id=f5e1ba56-7772-43c4-a7a6-9435b3b0796c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5e1ba56-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.807 2 DEBUG os_vif [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:33:ea,bridge_name='br-int',has_traffic_filtering=True,id=f5e1ba56-7772-43c4-a7a6-9435b3b0796c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5e1ba56-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.810 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5e1ba56-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.855 2 INFO os_vif [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:33:ea,bridge_name='br-int',has_traffic_filtering=True,id=f5e1ba56-7772-43c4-a7a6-9435b3b0796c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5e1ba56-77')
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.878 2 DEBUG nova.compute.manager [req-92a2bed4-d89e-40b0-af4a-77eafdf2ece2 req-86bb8237-7c05-48f1-982e-482a78893992 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Received event network-changed-f5e1ba56-7772-43c4-a7a6-9435b3b0796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.879 2 DEBUG nova.compute.manager [req-92a2bed4-d89e-40b0-af4a-77eafdf2ece2 req-86bb8237-7c05-48f1-982e-482a78893992 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Refreshing instance network info cache due to event network-changed-f5e1ba56-7772-43c4-a7a6-9435b3b0796c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.879 2 DEBUG oslo_concurrency.lockutils [req-92a2bed4-d89e-40b0-af4a-77eafdf2ece2 req-86bb8237-7c05-48f1-982e-482a78893992 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7aed3949-7b95-4887-90c3-3e2c8202cf27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.880 2 DEBUG oslo_concurrency.lockutils [req-92a2bed4-d89e-40b0-af4a-77eafdf2ece2 req-86bb8237-7c05-48f1-982e-482a78893992 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7aed3949-7b95-4887-90c3-3e2c8202cf27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.880 2 DEBUG nova.network.neutron [req-92a2bed4-d89e-40b0-af4a-77eafdf2ece2 req-86bb8237-7c05-48f1-982e-482a78893992 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Refreshing network info cache for port f5e1ba56-7772-43c4-a7a6-9435b3b0796c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.882 2 DEBUG nova.network.neutron [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Successfully created port: 569c42c9-b81d-42a2-8fc1-8fced709830d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:57:25 compute-0 nova_compute[260935]: 2025-10-11 08:57:25.891 2 DEBUG nova.storage.rbd_utils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] resizing rbd image a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.014 2 DEBUG nova.objects.instance [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'migration_context' on Instance uuid a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.031 2 DEBUG nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.031 2 DEBUG nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Ensure instance console log exists: /var/lib/nova/instances/a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.032 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.032 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.032 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1619: 321 pgs: 321 active+clean; 498 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 7.3 MiB/s wr, 231 op/s
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.279 2 INFO nova.virt.libvirt.driver [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Deleting instance files /var/lib/nova/instances/7aed3949-7b95-4887-90c3-3e2c8202cf27_del
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.280 2 INFO nova.virt.libvirt.driver [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Deletion of /var/lib/nova/instances/7aed3949-7b95-4887-90c3-3e2c8202cf27_del complete
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.381 2 INFO nova.scheduler.client.report [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Deleted allocations for instance 7aed3949-7b95-4887-90c3-3e2c8202cf27
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.446 2 DEBUG oslo_concurrency.lockutils [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.447 2 DEBUG oslo_concurrency.lockutils [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.651 2 DEBUG oslo_concurrency.processutils [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.710 2 DEBUG nova.network.neutron [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Successfully updated port: 569c42c9-b81d-42a2-8fc1-8fced709830d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.736 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "refresh_cache-a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.736 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquired lock "refresh_cache-a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.737 2 DEBUG nova.network.neutron [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.799 2 DEBUG nova.compute.manager [req-a419db7e-0a6b-45c0-b27f-b22a81811b0d req-724db7b4-8c33-4a3b-9bd0-031c1a49f17c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Received event network-changed-569c42c9-b81d-42a2-8fc1-8fced709830d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.800 2 DEBUG nova.compute.manager [req-a419db7e-0a6b-45c0-b27f-b22a81811b0d req-724db7b4-8c33-4a3b-9bd0-031c1a49f17c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Refreshing instance network info cache due to event network-changed-569c42c9-b81d-42a2-8fc1-8fced709830d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:57:26 compute-0 nova_compute[260935]: 2025-10-11 08:57:26.800 2 DEBUG oslo_concurrency.lockutils [req-a419db7e-0a6b-45c0-b27f-b22a81811b0d req-724db7b4-8c33-4a3b-9bd0-031c1a49f17c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:27 compute-0 nova_compute[260935]: 2025-10-11 08:57:27.067 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173032.0090911, 7e87b7bb-5a07-4c5d-9998-0e1375a226f1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:27 compute-0 nova_compute[260935]: 2025-10-11 08:57:27.067 2 INFO nova.compute.manager [-] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] VM Stopped (Lifecycle Event)
Oct 11 08:57:27 compute-0 nova_compute[260935]: 2025-10-11 08:57:27.094 2 DEBUG nova.network.neutron [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:57:27 compute-0 nova_compute[260935]: 2025-10-11 08:57:27.100 2 DEBUG nova.compute.manager [None req-adf08ff6-1a5f-4b45-9860-ef4b1a464c16 - - - - - -] [instance: 7e87b7bb-5a07-4c5d-9998-0e1375a226f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:27 compute-0 ceph-mon[74313]: pgmap v1619: 321 pgs: 321 active+clean; 498 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 7.3 MiB/s wr, 231 op/s
Oct 11 08:57:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:57:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2811817455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:27 compute-0 nova_compute[260935]: 2025-10-11 08:57:27.182 2 DEBUG oslo_concurrency.processutils [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:27 compute-0 nova_compute[260935]: 2025-10-11 08:57:27.190 2 DEBUG nova.compute.provider_tree [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:57:27 compute-0 nova_compute[260935]: 2025-10-11 08:57:27.210 2 DEBUG nova.scheduler.client.report [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:57:27 compute-0 nova_compute[260935]: 2025-10-11 08:57:27.233 2 DEBUG oslo_concurrency.lockutils [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:27 compute-0 nova_compute[260935]: 2025-10-11 08:57:27.290 2 DEBUG oslo_concurrency.lockutils [None req-18630ca5-6617-4c0e-b534-d2cc8727ed2f 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "7aed3949-7b95-4887-90c3-3e2c8202cf27" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 9.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:27 compute-0 ovn_controller[152945]: 2025-10-11T08:57:27Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:d9:b2 10.100.0.14
Oct 11 08:57:27 compute-0 ovn_controller[152945]: 2025-10-11T08:57:27Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:d9:b2 10.100.0.14
Oct 11 08:57:27 compute-0 nova_compute[260935]: 2025-10-11 08:57:27.670 2 DEBUG nova.network.neutron [req-92a2bed4-d89e-40b0-af4a-77eafdf2ece2 req-86bb8237-7c05-48f1-982e-482a78893992 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Updated VIF entry in instance network info cache for port f5e1ba56-7772-43c4-a7a6-9435b3b0796c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:57:27 compute-0 nova_compute[260935]: 2025-10-11 08:57:27.670 2 DEBUG nova.network.neutron [req-92a2bed4-d89e-40b0-af4a-77eafdf2ece2 req-86bb8237-7c05-48f1-982e-482a78893992 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Updating instance_info_cache with network_info: [{"id": "f5e1ba56-7772-43c4-a7a6-9435b3b0796c", "address": "fa:16:3e:39:33:ea", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": null, "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapf5e1ba56-77", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:27 compute-0 nova_compute[260935]: 2025-10-11 08:57:27.710 2 DEBUG oslo_concurrency.lockutils [req-92a2bed4-d89e-40b0-af4a-77eafdf2ece2 req-86bb8237-7c05-48f1-982e-482a78893992 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7aed3949-7b95-4887-90c3-3e2c8202cf27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1620: 321 pgs: 321 active+clean; 519 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 12 MiB/s wr, 336 op/s
Oct 11 08:57:28 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2811817455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:57:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Oct 11 08:57:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Oct 11 08:57:28 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.410 2 DEBUG nova.network.neutron [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Updating instance_info_cache with network_info: [{"id": "569c42c9-b81d-42a2-8fc1-8fced709830d", "address": "fa:16:3e:5b:be:d8", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap569c42c9-b8", "ovs_interfaceid": "569c42c9-b81d-42a2-8fc1-8fced709830d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.438 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Releasing lock "refresh_cache-a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.438 2 DEBUG nova.compute.manager [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Instance network_info: |[{"id": "569c42c9-b81d-42a2-8fc1-8fced709830d", "address": "fa:16:3e:5b:be:d8", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap569c42c9-b8", "ovs_interfaceid": "569c42c9-b81d-42a2-8fc1-8fced709830d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.438 2 DEBUG oslo_concurrency.lockutils [req-a419db7e-0a6b-45c0-b27f-b22a81811b0d req-724db7b4-8c33-4a3b-9bd0-031c1a49f17c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.439 2 DEBUG nova.network.neutron [req-a419db7e-0a6b-45c0-b27f-b22a81811b0d req-724db7b4-8c33-4a3b-9bd0-031c1a49f17c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Refreshing network info cache for port 569c42c9-b81d-42a2-8fc1-8fced709830d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.441 2 DEBUG nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Start _get_guest_xml network_info=[{"id": "569c42c9-b81d-42a2-8fc1-8fced709830d", "address": "fa:16:3e:5b:be:d8", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap569c42c9-b8", "ovs_interfaceid": "569c42c9-b81d-42a2-8fc1-8fced709830d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.446 2 WARNING nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.452 2 DEBUG nova.virt.libvirt.host [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.453 2 DEBUG nova.virt.libvirt.host [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.456 2 DEBUG nova.virt.libvirt.host [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.457 2 DEBUG nova.virt.libvirt.host [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.457 2 DEBUG nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.457 2 DEBUG nova.virt.hardware [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.458 2 DEBUG nova.virt.hardware [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.458 2 DEBUG nova.virt.hardware [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.458 2 DEBUG nova.virt.hardware [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.458 2 DEBUG nova.virt.hardware [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.458 2 DEBUG nova.virt.hardware [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.459 2 DEBUG nova.virt.hardware [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.459 2 DEBUG nova.virt.hardware [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.459 2 DEBUG nova.virt.hardware [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.459 2 DEBUG nova.virt.hardware [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.459 2 DEBUG nova.virt.hardware [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.463 2 DEBUG oslo_concurrency.processutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:57:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2152178733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.934 2 DEBUG oslo_concurrency.processutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.957 2 DEBUG nova.storage.rbd_utils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:28 compute-0 nova_compute[260935]: 2025-10-11 08:57:28.961 2 DEBUG oslo_concurrency.processutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:57:29 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/546692701' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:29 compute-0 ceph-mon[74313]: pgmap v1620: 321 pgs: 321 active+clean; 519 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 12 MiB/s wr, 336 op/s
Oct 11 08:57:29 compute-0 ceph-mon[74313]: osdmap e234: 3 total, 3 up, 3 in
Oct 11 08:57:29 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2152178733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:29 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/546692701' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.404 2 DEBUG oslo_concurrency.processutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.408 2 DEBUG nova.virt.libvirt.vif [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-636257832',display_name='tempest-ServersTestJSON-server-636257832',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-636257832',id=73,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-zxehnsia',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:57:25Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "569c42c9-b81d-42a2-8fc1-8fced709830d", "address": "fa:16:3e:5b:be:d8", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap569c42c9-b8", "ovs_interfaceid": "569c42c9-b81d-42a2-8fc1-8fced709830d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.409 2 DEBUG nova.network.os_vif_util [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "569c42c9-b81d-42a2-8fc1-8fced709830d", "address": "fa:16:3e:5b:be:d8", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap569c42c9-b8", "ovs_interfaceid": "569c42c9-b81d-42a2-8fc1-8fced709830d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.411 2 DEBUG nova.network.os_vif_util [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=569c42c9-b81d-42a2-8fc1-8fced709830d,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap569c42c9-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.413 2 DEBUG nova.objects.instance [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'pci_devices' on Instance uuid a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.441 2 DEBUG nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:57:29 compute-0 nova_compute[260935]:   <uuid>a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4</uuid>
Oct 11 08:57:29 compute-0 nova_compute[260935]:   <name>instance-00000049</name>
Oct 11 08:57:29 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:57:29 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:57:29 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersTestJSON-server-636257832</nova:name>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:57:28</nova:creationTime>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:57:29 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:57:29 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:57:29 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:57:29 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:57:29 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:57:29 compute-0 nova_compute[260935]:         <nova:user uuid="ee0e5fedb9fc464eb2a9ac362f5e0749">tempest-ServersTestJSON-101172647-project-member</nova:user>
Oct 11 08:57:29 compute-0 nova_compute[260935]:         <nova:project uuid="d9864fda4f8641d8a9c1509c426cc206">tempest-ServersTestJSON-101172647</nova:project>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:57:29 compute-0 nova_compute[260935]:         <nova:port uuid="569c42c9-b81d-42a2-8fc1-8fced709830d">
Oct 11 08:57:29 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:57:29 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:57:29 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <system>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <entry name="serial">a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4</entry>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <entry name="uuid">a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4</entry>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     </system>
Oct 11 08:57:29 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:57:29 compute-0 nova_compute[260935]:   <os>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:   </os>
Oct 11 08:57:29 compute-0 nova_compute[260935]:   <features>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:   </features>
Oct 11 08:57:29 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:57:29 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:57:29 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4_disk">
Oct 11 08:57:29 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       </source>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:57:29 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4_disk.config">
Oct 11 08:57:29 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       </source>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:57:29 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:5b:be:d8"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <target dev="tap569c42c9-b8"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4/console.log" append="off"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <video>
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     </video>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:57:29 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:57:29 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:57:29 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:57:29 compute-0 nova_compute[260935]: </domain>
Oct 11 08:57:29 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.442 2 DEBUG nova.compute.manager [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Preparing to wait for external event network-vif-plugged-569c42c9-b81d-42a2-8fc1-8fced709830d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.442 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.443 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.443 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.445 2 DEBUG nova.virt.libvirt.vif [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-636257832',display_name='tempest-ServersTestJSON-server-636257832',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-636257832',id=73,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-zxehnsia',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:57:25Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "569c42c9-b81d-42a2-8fc1-8fced709830d", "address": "fa:16:3e:5b:be:d8", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap569c42c9-b8", "ovs_interfaceid": "569c42c9-b81d-42a2-8fc1-8fced709830d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.445 2 DEBUG nova.network.os_vif_util [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "569c42c9-b81d-42a2-8fc1-8fced709830d", "address": "fa:16:3e:5b:be:d8", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap569c42c9-b8", "ovs_interfaceid": "569c42c9-b81d-42a2-8fc1-8fced709830d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.447 2 DEBUG nova.network.os_vif_util [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=569c42c9-b81d-42a2-8fc1-8fced709830d,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap569c42c9-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.448 2 DEBUG os_vif [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=569c42c9-b81d-42a2-8fc1-8fced709830d,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap569c42c9-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.450 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.451 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.455 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap569c42c9-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.456 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap569c42c9-b8, col_values=(('external_ids', {'iface-id': '569c42c9-b81d-42a2-8fc1-8fced709830d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:be:d8', 'vm-uuid': 'a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:29 compute-0 NetworkManager[44960]: <info>  [1760173049.4600] manager: (tap569c42c9-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.467 2 INFO os_vif [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=569c42c9-b81d-42a2-8fc1-8fced709830d,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap569c42c9-b8')
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.541 2 DEBUG nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.541 2 DEBUG nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.542 2 DEBUG nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No VIF found with MAC fa:16:3e:5b:be:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.542 2 INFO nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Using config drive
Oct 11 08:57:29 compute-0 nova_compute[260935]: 2025-10-11 08:57:29.573 2 DEBUG nova.storage.rbd_utils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:30 compute-0 nova_compute[260935]: 2025-10-11 08:57:30.004 2 DEBUG nova.virt.libvirt.driver [None req-896d3807-46c3-4ce1-8842-872f85ece55b 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 08:57:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1622: 321 pgs: 321 active+clean; 519 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 11 MiB/s wr, 318 op/s
Oct 11 08:57:30 compute-0 nova_compute[260935]: 2025-10-11 08:57:30.292 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173035.196303, be4cae7e-0c2f-4c19-9a7e-58681faf9523 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:30 compute-0 nova_compute[260935]: 2025-10-11 08:57:30.293 2 INFO nova.compute.manager [-] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] VM Stopped (Lifecycle Event)
Oct 11 08:57:30 compute-0 nova_compute[260935]: 2025-10-11 08:57:30.325 2 INFO nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Creating config drive at /var/lib/nova/instances/a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4/disk.config
Oct 11 08:57:30 compute-0 nova_compute[260935]: 2025-10-11 08:57:30.330 2 DEBUG oslo_concurrency.processutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt_zn4y1f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:30 compute-0 nova_compute[260935]: 2025-10-11 08:57:30.376 2 DEBUG nova.compute.manager [None req-8804ecd9-6ed6-499c-9b43-782dbcfad0f4 - - - - - -] [instance: be4cae7e-0c2f-4c19-9a7e-58681faf9523] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:30 compute-0 nova_compute[260935]: 2025-10-11 08:57:30.489 2 DEBUG oslo_concurrency.processutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt_zn4y1f" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:30 compute-0 nova_compute[260935]: 2025-10-11 08:57:30.528 2 DEBUG nova.storage.rbd_utils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:30 compute-0 nova_compute[260935]: 2025-10-11 08:57:30.533 2 DEBUG oslo_concurrency.processutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4/disk.config a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:30 compute-0 nova_compute[260935]: 2025-10-11 08:57:30.587 2 DEBUG nova.network.neutron [req-a419db7e-0a6b-45c0-b27f-b22a81811b0d req-724db7b4-8c33-4a3b-9bd0-031c1a49f17c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Updated VIF entry in instance network info cache for port 569c42c9-b81d-42a2-8fc1-8fced709830d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:57:30 compute-0 nova_compute[260935]: 2025-10-11 08:57:30.589 2 DEBUG nova.network.neutron [req-a419db7e-0a6b-45c0-b27f-b22a81811b0d req-724db7b4-8c33-4a3b-9bd0-031c1a49f17c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Updating instance_info_cache with network_info: [{"id": "569c42c9-b81d-42a2-8fc1-8fced709830d", "address": "fa:16:3e:5b:be:d8", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap569c42c9-b8", "ovs_interfaceid": "569c42c9-b81d-42a2-8fc1-8fced709830d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:30 compute-0 nova_compute[260935]: 2025-10-11 08:57:30.606 2 DEBUG oslo_concurrency.lockutils [req-a419db7e-0a6b-45c0-b27f-b22a81811b0d req-724db7b4-8c33-4a3b-9bd0-031c1a49f17c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:30 compute-0 nova_compute[260935]: 2025-10-11 08:57:30.742 2 DEBUG oslo_concurrency.processutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4/disk.config a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:30 compute-0 nova_compute[260935]: 2025-10-11 08:57:30.744 2 INFO nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Deleting local config drive /var/lib/nova/instances/a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4/disk.config because it was imported into RBD.
Oct 11 08:57:30 compute-0 NetworkManager[44960]: <info>  [1760173050.8212] manager: (tap569c42c9-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/288)
Oct 11 08:57:30 compute-0 kernel: tap569c42c9-b8: entered promiscuous mode
Oct 11 08:57:30 compute-0 systemd-udevd[331911]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:57:30 compute-0 ovn_controller[152945]: 2025-10-11T08:57:30Z|00626|binding|INFO|Claiming lport 569c42c9-b81d-42a2-8fc1-8fced709830d for this chassis.
Oct 11 08:57:30 compute-0 ovn_controller[152945]: 2025-10-11T08:57:30Z|00627|binding|INFO|569c42c9-b81d-42a2-8fc1-8fced709830d: Claiming fa:16:3e:5b:be:d8 10.100.0.7
Oct 11 08:57:30 compute-0 nova_compute[260935]: 2025-10-11 08:57:30.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:30.895 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:be:d8 10.100.0.7'], port_security=['fa:16:3e:5b:be:d8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '2', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=569c42c9-b81d-42a2-8fc1-8fced709830d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:57:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:30.896 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 569c42c9-b81d-42a2-8fc1-8fced709830d in datapath e075bdab-78c4-414f-b270-c41d1c82f498 bound to our chassis
Oct 11 08:57:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:30.898 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 08:57:30 compute-0 NetworkManager[44960]: <info>  [1760173050.9003] device (tap569c42c9-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:57:30 compute-0 NetworkManager[44960]: <info>  [1760173050.9030] device (tap569c42c9-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:57:30 compute-0 ovn_controller[152945]: 2025-10-11T08:57:30Z|00628|binding|INFO|Setting lport 569c42c9-b81d-42a2-8fc1-8fced709830d ovn-installed in OVS
Oct 11 08:57:30 compute-0 ovn_controller[152945]: 2025-10-11T08:57:30Z|00629|binding|INFO|Setting lport 569c42c9-b81d-42a2-8fc1-8fced709830d up in Southbound
Oct 11 08:57:30 compute-0 systemd-machined[215705]: New machine qemu-81-instance-00000049.
Oct 11 08:57:30 compute-0 nova_compute[260935]: 2025-10-11 08:57:30.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:30.929 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d71a34a9-b65d-40e1-9f68-703fa30206f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:30 compute-0 systemd[1]: Started Virtual Machine qemu-81-instance-00000049.
Oct 11 08:57:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:30.990 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[19ed03b5-d954-44b1-aa8c-8b167a584261]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:30.996 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[849d73b7-c647-4357-b481-5a8ad107d8e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:31.047 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4641a283-f694-44e7-baa3-eedc370d393d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:31.075 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a4764ff4-7bf5-438a-9895-c04ca7d7c429]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 25, 'rx_bytes': 1000, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 25, 'rx_bytes': 1000, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 15419, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331928, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:31.099 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd212f5-a652-4f69-a704-4077398cc4a0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479414, 'tstamp': 479414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331929, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479419, 'tstamp': 479419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331929, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:31.100 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:31 compute-0 nova_compute[260935]: 2025-10-11 08:57:31.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:31 compute-0 nova_compute[260935]: 2025-10-11 08:57:31.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:31.105 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape075bdab-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:31.105 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:31.106 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape075bdab-70, col_values=(('external_ids', {'iface-id': 'b9cf681c-9f4c-4c56-987a-55fa7aa89e1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:31.106 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:31 compute-0 nova_compute[260935]: 2025-10-11 08:57:31.217 2 DEBUG nova.compute.manager [req-c3ffb3c8-fdac-443c-b1ea-369947670dfd req-f6f485b0-57ab-420a-9be3-d5dbef1aea62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Received event network-vif-plugged-569c42c9-b81d-42a2-8fc1-8fced709830d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:31 compute-0 nova_compute[260935]: 2025-10-11 08:57:31.218 2 DEBUG oslo_concurrency.lockutils [req-c3ffb3c8-fdac-443c-b1ea-369947670dfd req-f6f485b0-57ab-420a-9be3-d5dbef1aea62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:31 compute-0 nova_compute[260935]: 2025-10-11 08:57:31.219 2 DEBUG oslo_concurrency.lockutils [req-c3ffb3c8-fdac-443c-b1ea-369947670dfd req-f6f485b0-57ab-420a-9be3-d5dbef1aea62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:31 compute-0 nova_compute[260935]: 2025-10-11 08:57:31.219 2 DEBUG oslo_concurrency.lockutils [req-c3ffb3c8-fdac-443c-b1ea-369947670dfd req-f6f485b0-57ab-420a-9be3-d5dbef1aea62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:31 compute-0 nova_compute[260935]: 2025-10-11 08:57:31.220 2 DEBUG nova.compute.manager [req-c3ffb3c8-fdac-443c-b1ea-369947670dfd req-f6f485b0-57ab-420a-9be3-d5dbef1aea62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Processing event network-vif-plugged-569c42c9-b81d-42a2-8fc1-8fced709830d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:57:31 compute-0 ceph-mon[74313]: pgmap v1622: 321 pgs: 321 active+clean; 519 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 11 MiB/s wr, 318 op/s
Oct 11 08:57:31 compute-0 nova_compute[260935]: 2025-10-11 08:57:31.753 2 DEBUG oslo_concurrency.lockutils [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:31 compute-0 nova_compute[260935]: 2025-10-11 08:57:31.754 2 DEBUG oslo_concurrency.lockutils [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:31 compute-0 nova_compute[260935]: 2025-10-11 08:57:31.754 2 INFO nova.compute.manager [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Shelving
Oct 11 08:57:31 compute-0 nova_compute[260935]: 2025-10-11 08:57:31.777 2 DEBUG nova.virt.libvirt.driver [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 08:57:31 compute-0 podman[331972]: 2025-10-11 08:57:31.803256306 +0000 UTC m=+0.092120348 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:57:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1623: 321 pgs: 321 active+clean; 519 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 9.3 MiB/s wr, 268 op/s
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.136 2 DEBUG nova.compute.manager [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.138 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173052.1357868, a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.138 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] VM Started (Lifecycle Event)
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.151 2 DEBUG nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.157 2 INFO nova.virt.libvirt.driver [-] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Instance spawned successfully.
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.158 2 DEBUG nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.173 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.182 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.192 2 DEBUG nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.192 2 DEBUG nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.193 2 DEBUG nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.194 2 DEBUG nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.195 2 DEBUG nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.196 2 DEBUG nova.virt.libvirt.driver [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.243 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.243 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173052.136171, a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.244 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] VM Paused (Lifecycle Event)
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.291 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.299 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173052.1455376, a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.299 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] VM Resumed (Lifecycle Event)
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.309 2 INFO nova.compute.manager [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Took 7.09 seconds to spawn the instance on the hypervisor.
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.309 2 DEBUG nova.compute.manager [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:32 compute-0 kernel: tapb57a52c8-34 (unregistering): left promiscuous mode
Oct 11 08:57:32 compute-0 NetworkManager[44960]: <info>  [1760173052.3228] device (tapb57a52c8-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.327 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.337 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:32 compute-0 ovn_controller[152945]: 2025-10-11T08:57:32Z|00630|binding|INFO|Releasing lport b57a52c8-34c3-415f-9848-195b09be3611 from this chassis (sb_readonly=0)
Oct 11 08:57:32 compute-0 ovn_controller[152945]: 2025-10-11T08:57:32Z|00631|binding|INFO|Setting lport b57a52c8-34c3-415f-9848-195b09be3611 down in Southbound
Oct 11 08:57:32 compute-0 ovn_controller[152945]: 2025-10-11T08:57:32Z|00632|binding|INFO|Removing iface tapb57a52c8-34 ovn-installed in OVS
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:32.350 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:db:18 10.100.0.12'], port_security=['fa:16:3e:3f:db:18 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '80799b12-9add-4561-a8eb-f1cf3c1f93a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93dd4902ce324862a38006da8e06503a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b773900e-3df7-4cb6-b9b0-3d240ff499b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69794a2f-48ab-4a0d-8725-f4a7f57172dd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b57a52c8-34c3-415f-9848-195b09be3611) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:57:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:32.352 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b57a52c8-34c3-415f-9848-195b09be3611 in datapath 2cb96d57-a5e9-4b38-b10e-68187a5bf82f unbound from our chassis
Oct 11 08:57:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:32.355 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:57:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:32.356 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd55c37-deb9-437f-86ae-9305d5f07368]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:32.357 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f namespace which is not needed anymore
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.374 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.409 2 INFO nova.compute.manager [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Took 8.22 seconds to build instance.
Oct 11 08:57:32 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000045.scope: Deactivated successfully.
Oct 11 08:57:32 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000045.scope: Consumed 15.725s CPU time.
Oct 11 08:57:32 compute-0 systemd-machined[215705]: Machine qemu-77-instance-00000045 terminated.
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.431 2 DEBUG oslo_concurrency.lockutils [None req-ae93024e-ce36-4243-9626-0432d5a9bbb7 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:32 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[330114]: [NOTICE]   (330119) : haproxy version is 2.8.14-c23fe91
Oct 11 08:57:32 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[330114]: [NOTICE]   (330119) : path to executable is /usr/sbin/haproxy
Oct 11 08:57:32 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[330114]: [WARNING]  (330119) : Exiting Master process...
Oct 11 08:57:32 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[330114]: [ALERT]    (330119) : Current worker (330121) exited with code 143 (Terminated)
Oct 11 08:57:32 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[330114]: [WARNING]  (330119) : All workers exited. Exiting... (0)
Oct 11 08:57:32 compute-0 systemd[1]: libpod-0f7b494e3002f138722f5f1271d1014e9efa004f9ebbc75a7b05cf11472c9051.scope: Deactivated successfully.
Oct 11 08:57:32 compute-0 podman[332013]: 2025-10-11 08:57:32.578154741 +0000 UTC m=+0.077716167 container died 0f7b494e3002f138722f5f1271d1014e9efa004f9ebbc75a7b05cf11472c9051 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 08:57:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0f7b494e3002f138722f5f1271d1014e9efa004f9ebbc75a7b05cf11472c9051-userdata-shm.mount: Deactivated successfully.
Oct 11 08:57:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ea993341fb37a6ad0ac0884d0d0c5b35572bc4b7fe43aac965c390f78f3454a-merged.mount: Deactivated successfully.
Oct 11 08:57:32 compute-0 podman[332013]: 2025-10-11 08:57:32.653100758 +0000 UTC m=+0.152662174 container cleanup 0f7b494e3002f138722f5f1271d1014e9efa004f9ebbc75a7b05cf11472c9051 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 08:57:32 compute-0 systemd[1]: libpod-conmon-0f7b494e3002f138722f5f1271d1014e9efa004f9ebbc75a7b05cf11472c9051.scope: Deactivated successfully.
Oct 11 08:57:32 compute-0 podman[332049]: 2025-10-11 08:57:32.747725766 +0000 UTC m=+0.065483448 container remove 0f7b494e3002f138722f5f1271d1014e9efa004f9ebbc75a7b05cf11472c9051 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 11 08:57:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:32.757 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba6dd8e-8802-4631-b99a-002c8a4d2319]: (4, ('Sat Oct 11 08:57:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f (0f7b494e3002f138722f5f1271d1014e9efa004f9ebbc75a7b05cf11472c9051)\n0f7b494e3002f138722f5f1271d1014e9efa004f9ebbc75a7b05cf11472c9051\nSat Oct 11 08:57:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f (0f7b494e3002f138722f5f1271d1014e9efa004f9ebbc75a7b05cf11472c9051)\n0f7b494e3002f138722f5f1271d1014e9efa004f9ebbc75a7b05cf11472c9051\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:32.760 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[47219e7e-3903-4f00-b7d6-351e0bc39682]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:32.761 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb96d57-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:32 compute-0 kernel: tap2cb96d57-a0: left promiscuous mode
Oct 11 08:57:32 compute-0 nova_compute[260935]: 2025-10-11 08:57:32.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:32.815 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8688e371-cef9-4ccb-979c-22e8bb31bd87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:32.837 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8172fd-a1a4-4964-9a26-7ff305e4bac2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:32.841 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6990d44b-7e12-42e0-936c-f724dbdb360c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:32.864 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e85df899-9ee9-43fb-8f3f-52fdb87c8f26]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486427, 'reachable_time': 31502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332068, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d2cb96d57\x2da5e9\x2d4b38\x2db10e\x2d68187a5bf82f.mount: Deactivated successfully.
Oct 11 08:57:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:32.872 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:57:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:32.872 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[90dc536d-1090-40e6-bb56-d5c8cc75deca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:32 compute-0 sudo[332067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:57:32 compute-0 sudo[332067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:32 compute-0 sudo[332067]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.027 2 INFO nova.virt.libvirt.driver [None req-896d3807-46c3-4ce1-8842-872f85ece55b 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Instance shutdown successfully after 24 seconds.
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.037 2 INFO nova.virt.libvirt.driver [-] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Instance destroyed successfully.
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.038 2 DEBUG nova.objects.instance [None req-896d3807-46c3-4ce1-8842-872f85ece55b 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'numa_topology' on Instance uuid 80799b12-9add-4561-a8eb-f1cf3c1f93a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:33 compute-0 sudo[332093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:57:33 compute-0 sudo[332093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.064 2 DEBUG nova.compute.manager [None req-896d3807-46c3-4ce1-8842-872f85ece55b 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:33 compute-0 sudo[332093]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.087 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173038.084394, 7aed3949-7b95-4887-90c3-3e2c8202cf27 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.087 2 INFO nova.compute.manager [-] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] VM Stopped (Lifecycle Event)
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.129 2 DEBUG nova.compute.manager [None req-c6d2f50a-6a67-4fdf-bf95-58aec0707065 - - - - - -] [instance: 7aed3949-7b95-4887-90c3-3e2c8202cf27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.146 2 DEBUG oslo_concurrency.lockutils [None req-896d3807-46c3-4ce1-8842-872f85ece55b 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.483s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:33 compute-0 sudo[332118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:57:33 compute-0 sudo[332118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:33 compute-0 sudo[332118]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:33 compute-0 sudo[332143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:57:33 compute-0 sudo[332143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.359 2 DEBUG nova.compute.manager [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Received event network-vif-plugged-569c42c9-b81d-42a2-8fc1-8fced709830d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.359 2 DEBUG oslo_concurrency.lockutils [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.359 2 DEBUG oslo_concurrency.lockutils [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.360 2 DEBUG oslo_concurrency.lockutils [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.360 2 DEBUG nova.compute.manager [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] No waiting events found dispatching network-vif-plugged-569c42c9-b81d-42a2-8fc1-8fced709830d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.360 2 WARNING nova.compute.manager [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Received unexpected event network-vif-plugged-569c42c9-b81d-42a2-8fc1-8fced709830d for instance with vm_state active and task_state None.
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.360 2 DEBUG nova.compute.manager [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Received event network-vif-unplugged-b57a52c8-34c3-415f-9848-195b09be3611 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.360 2 DEBUG oslo_concurrency.lockutils [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.360 2 DEBUG oslo_concurrency.lockutils [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.360 2 DEBUG oslo_concurrency.lockutils [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.361 2 DEBUG nova.compute.manager [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] No waiting events found dispatching network-vif-unplugged-b57a52c8-34c3-415f-9848-195b09be3611 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.361 2 WARNING nova.compute.manager [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Received unexpected event network-vif-unplugged-b57a52c8-34c3-415f-9848-195b09be3611 for instance with vm_state stopped and task_state None.
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.361 2 DEBUG nova.compute.manager [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Received event network-vif-plugged-b57a52c8-34c3-415f-9848-195b09be3611 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.361 2 DEBUG oslo_concurrency.lockutils [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.361 2 DEBUG oslo_concurrency.lockutils [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.361 2 DEBUG oslo_concurrency.lockutils [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.361 2 DEBUG nova.compute.manager [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] No waiting events found dispatching network-vif-plugged-b57a52c8-34c3-415f-9848-195b09be3611 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.362 2 WARNING nova.compute.manager [req-2455b600-eeba-410c-81f5-19d845589cff req-b0c09d09-1150-4018-9a77-1079cbff719b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Received unexpected event network-vif-plugged-b57a52c8-34c3-415f-9848-195b09be3611 for instance with vm_state stopped and task_state None.
Oct 11 08:57:33 compute-0 nova_compute[260935]: 2025-10-11 08:57:33.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:57:33 compute-0 ceph-mon[74313]: pgmap v1623: 321 pgs: 321 active+clean; 519 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 9.3 MiB/s wr, 268 op/s
Oct 11 08:57:33 compute-0 sudo[332143]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:57:34 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:57:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:57:34 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:57:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:57:34 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:57:34 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 56f6d131-1e58-4711-b71d-b952d8b4cae9 does not exist
Oct 11 08:57:34 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 8ac69409-19d8-4ab1-a0fd-7bf3ea345054 does not exist
Oct 11 08:57:34 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 94b2d954-0763-447c-acf9-5d60e02c7a4b does not exist
Oct 11 08:57:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:57:34 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:57:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:57:34 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:57:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:57:34 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:57:34 compute-0 kernel: tap10e01bbb-0d (unregistering): left promiscuous mode
Oct 11 08:57:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1624: 321 pgs: 321 active+clean; 531 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 460 KiB/s rd, 4.7 MiB/s wr, 173 op/s
Oct 11 08:57:34 compute-0 NetworkManager[44960]: <info>  [1760173054.0763] device (tap10e01bbb-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:57:34 compute-0 ovn_controller[152945]: 2025-10-11T08:57:34Z|00633|binding|INFO|Releasing lport 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c from this chassis (sb_readonly=0)
Oct 11 08:57:34 compute-0 ovn_controller[152945]: 2025-10-11T08:57:34Z|00634|binding|INFO|Setting lport 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c down in Southbound
Oct 11 08:57:34 compute-0 ovn_controller[152945]: 2025-10-11T08:57:34Z|00635|binding|INFO|Removing iface tap10e01bbb-0d ovn-installed in OVS
Oct 11 08:57:34 compute-0 nova_compute[260935]: 2025-10-11 08:57:34.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:34 compute-0 nova_compute[260935]: 2025-10-11 08:57:34.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:34 compute-0 sudo[332199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:57:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:34.108 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:0f:36 10.100.0.9'], port_security=['fa:16:3e:3f:0f:36 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '98d8ebd6-0917-49cf-8efc-a245486424bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73adfb8cf0c64359b1f33a9643148ef4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '06a0521a-9ff7-49ed-a194-a12a4a8fd551', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cfa1fc9-121e-4e0a-bd08-716b82275316, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:57:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:34.110 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c in datapath 056c6769-bc97-4ae9-9759-4cc2d984a31d unbound from our chassis
Oct 11 08:57:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:34.114 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 056c6769-bc97-4ae9-9759-4cc2d984a31d
Oct 11 08:57:34 compute-0 sudo[332199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:34 compute-0 sudo[332199]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:34 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Oct 11 08:57:34 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000003a.scope: Consumed 18.487s CPU time.
Oct 11 08:57:34 compute-0 nova_compute[260935]: 2025-10-11 08:57:34.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:34.144 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b8fc77e2-a0d8-4920-9366-7c937899ce17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:34 compute-0 systemd-machined[215705]: Machine qemu-65-instance-0000003a terminated.
Oct 11 08:57:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:34.185 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ea9cfe41-d1fc-4632-a80a-d54d128fb4b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:34.189 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4669c122-93f7-4b26-8d9d-31c6e6f307d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:34 compute-0 sudo[332228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:57:34 compute-0 sudo[332228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:34 compute-0 sudo[332228]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:34.221 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[50325f8e-5e18-48ee-bd84-c099857366be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:34.244 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[29c8029f-171e-41ee-8397-299430b98c98]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap056c6769-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:cc:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 1000, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 1000, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477032, 'reachable_time': 32693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332265, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:34.266 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9fecda88-d3fe-4dbb-b524-7d4efb5ae5f7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap056c6769-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477053, 'tstamp': 477053}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332282, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap056c6769-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477058, 'tstamp': 477058}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332282, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:34.268 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap056c6769-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:34 compute-0 nova_compute[260935]: 2025-10-11 08:57:34.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:34 compute-0 nova_compute[260935]: 2025-10-11 08:57:34.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:34.279 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap056c6769-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:34.279 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:34.280 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap056c6769-b0, col_values=(('external_ids', {'iface-id': '056a8563-0695-415b-921f-e75fa98e60e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:34.280 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:34 compute-0 sudo[332261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:57:34 compute-0 sudo[332261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:34 compute-0 sudo[332261]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:34 compute-0 nova_compute[260935]: 2025-10-11 08:57:34.373 2 DEBUG nova.compute.manager [req-f249992e-fdd9-4a40-87b3-811ceffa4649 req-3f6a278f-a044-4ca6-b2e7-db3fa09ddb7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-vif-unplugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:34 compute-0 nova_compute[260935]: 2025-10-11 08:57:34.374 2 DEBUG oslo_concurrency.lockutils [req-f249992e-fdd9-4a40-87b3-811ceffa4649 req-3f6a278f-a044-4ca6-b2e7-db3fa09ddb7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:34 compute-0 nova_compute[260935]: 2025-10-11 08:57:34.374 2 DEBUG oslo_concurrency.lockutils [req-f249992e-fdd9-4a40-87b3-811ceffa4649 req-3f6a278f-a044-4ca6-b2e7-db3fa09ddb7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:34 compute-0 nova_compute[260935]: 2025-10-11 08:57:34.374 2 DEBUG oslo_concurrency.lockutils [req-f249992e-fdd9-4a40-87b3-811ceffa4649 req-3f6a278f-a044-4ca6-b2e7-db3fa09ddb7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:34 compute-0 nova_compute[260935]: 2025-10-11 08:57:34.375 2 DEBUG nova.compute.manager [req-f249992e-fdd9-4a40-87b3-811ceffa4649 req-3f6a278f-a044-4ca6-b2e7-db3fa09ddb7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] No waiting events found dispatching network-vif-unplugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:34 compute-0 nova_compute[260935]: 2025-10-11 08:57:34.375 2 WARNING nova.compute.manager [req-f249992e-fdd9-4a40-87b3-811ceffa4649 req-3f6a278f-a044-4ca6-b2e7-db3fa09ddb7a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received unexpected event network-vif-unplugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c for instance with vm_state active and task_state shelving.
Oct 11 08:57:34 compute-0 sudo[332295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:57:34 compute-0 sudo[332295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:34 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:57:34 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:57:34 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:57:34 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:57:34 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:57:34 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:57:34 compute-0 nova_compute[260935]: 2025-10-11 08:57:34.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:34 compute-0 nova_compute[260935]: 2025-10-11 08:57:34.798 2 INFO nova.virt.libvirt.driver [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Instance shutdown successfully after 3 seconds.
Oct 11 08:57:34 compute-0 nova_compute[260935]: 2025-10-11 08:57:34.805 2 INFO nova.virt.libvirt.driver [-] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Instance destroyed successfully.
Oct 11 08:57:34 compute-0 nova_compute[260935]: 2025-10-11 08:57:34.805 2 DEBUG nova.objects.instance [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'numa_topology' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:34 compute-0 podman[332371]: 2025-10-11 08:57:34.940055941 +0000 UTC m=+0.075066321 container create d827a2e2573114d34f2ec8cc587d78e38167adee5bc3daffdd0b09e6f87a3a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:57:34 compute-0 podman[332371]: 2025-10-11 08:57:34.903106158 +0000 UTC m=+0.038116558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:57:35 compute-0 systemd[1]: Started libpod-conmon-d827a2e2573114d34f2ec8cc587d78e38167adee5bc3daffdd0b09e6f87a3a0f.scope.
Oct 11 08:57:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:57:35 compute-0 podman[332371]: 2025-10-11 08:57:35.05645812 +0000 UTC m=+0.191468520 container init d827a2e2573114d34f2ec8cc587d78e38167adee5bc3daffdd0b09e6f87a3a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:57:35 compute-0 podman[332371]: 2025-10-11 08:57:35.065248661 +0000 UTC m=+0.200259071 container start d827a2e2573114d34f2ec8cc587d78e38167adee5bc3daffdd0b09e6f87a3a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:57:35 compute-0 podman[332371]: 2025-10-11 08:57:35.069027979 +0000 UTC m=+0.204038409 container attach d827a2e2573114d34f2ec8cc587d78e38167adee5bc3daffdd0b09e6f87a3a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 08:57:35 compute-0 blissful_roentgen[332389]: 167 167
Oct 11 08:57:35 compute-0 systemd[1]: libpod-d827a2e2573114d34f2ec8cc587d78e38167adee5bc3daffdd0b09e6f87a3a0f.scope: Deactivated successfully.
Oct 11 08:57:35 compute-0 conmon[332389]: conmon d827a2e2573114d34f2e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d827a2e2573114d34f2ec8cc587d78e38167adee5bc3daffdd0b09e6f87a3a0f.scope/container/memory.events
Oct 11 08:57:35 compute-0 podman[332371]: 2025-10-11 08:57:35.074965518 +0000 UTC m=+0.209975928 container died d827a2e2573114d34f2ec8cc587d78e38167adee5bc3daffdd0b09e6f87a3a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 08:57:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e502bdfddd46cd3140cd040ced85ff8248d7cc0102646f0b782ebd6415a4952-merged.mount: Deactivated successfully.
Oct 11 08:57:35 compute-0 podman[332371]: 2025-10-11 08:57:35.121445854 +0000 UTC m=+0.256456254 container remove d827a2e2573114d34f2ec8cc587d78e38167adee5bc3daffdd0b09e6f87a3a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.148 2 INFO nova.virt.libvirt.driver [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Beginning cold snapshot process
Oct 11 08:57:35 compute-0 systemd[1]: libpod-conmon-d827a2e2573114d34f2ec8cc587d78e38167adee5bc3daffdd0b09e6f87a3a0f.scope: Deactivated successfully.
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.360 2 DEBUG nova.virt.libvirt.imagebackend [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.368 2 DEBUG oslo_concurrency.lockutils [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.368 2 DEBUG oslo_concurrency.lockutils [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.369 2 DEBUG oslo_concurrency.lockutils [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.369 2 DEBUG oslo_concurrency.lockutils [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.369 2 DEBUG oslo_concurrency.lockutils [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.371 2 INFO nova.compute.manager [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Terminating instance
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.373 2 DEBUG nova.compute.manager [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.383 2 INFO nova.virt.libvirt.driver [-] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Instance destroyed successfully.
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.383 2 DEBUG nova.objects.instance [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'resources' on Instance uuid 80799b12-9add-4561-a8eb-f1cf3c1f93a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.411 2 DEBUG nova.virt.libvirt.vif [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:56:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-375075055',display_name='tempest-DeleteServersTestJSON-server-375075055',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-375075055',id=69,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:57:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-dj57enum',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:57:33Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=80799b12-9add-4561-a8eb-f1cf3c1f93a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b57a52c8-34c3-415f-9848-195b09be3611", "address": "fa:16:3e:3f:db:18", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57a52c8-34", "ovs_interfaceid": "b57a52c8-34c3-415f-9848-195b09be3611", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.412 2 DEBUG nova.network.os_vif_util [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "b57a52c8-34c3-415f-9848-195b09be3611", "address": "fa:16:3e:3f:db:18", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57a52c8-34", "ovs_interfaceid": "b57a52c8-34c3-415f-9848-195b09be3611", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.413 2 DEBUG nova.network.os_vif_util [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:db:18,bridge_name='br-int',has_traffic_filtering=True,id=b57a52c8-34c3-415f-9848-195b09be3611,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb57a52c8-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.414 2 DEBUG os_vif [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:db:18,bridge_name='br-int',has_traffic_filtering=True,id=b57a52c8-34c3-415f-9848-195b09be3611,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb57a52c8-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.417 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb57a52c8-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:35 compute-0 podman[332442]: 2025-10-11 08:57:35.420681345 +0000 UTC m=+0.074134244 container create 9f5e717395ce4b59fa05eb57e77e0ddf86e7f86c100be7b71cd481bad166d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:57:35 compute-0 ceph-mon[74313]: pgmap v1624: 321 pgs: 321 active+clean; 531 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 460 KiB/s rd, 4.7 MiB/s wr, 173 op/s
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.435 2 INFO os_vif [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:db:18,bridge_name='br-int',has_traffic_filtering=True,id=b57a52c8-34c3-415f-9848-195b09be3611,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb57a52c8-34')
Oct 11 08:57:35 compute-0 systemd[1]: Started libpod-conmon-9f5e717395ce4b59fa05eb57e77e0ddf86e7f86c100be7b71cd481bad166d8a6.scope.
Oct 11 08:57:35 compute-0 podman[332442]: 2025-10-11 08:57:35.389335852 +0000 UTC m=+0.042788741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:57:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1afb78bf63b105cf00ee836f5e918ce175d478085d4beb35090c2f801849aab2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1afb78bf63b105cf00ee836f5e918ce175d478085d4beb35090c2f801849aab2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1afb78bf63b105cf00ee836f5e918ce175d478085d4beb35090c2f801849aab2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1afb78bf63b105cf00ee836f5e918ce175d478085d4beb35090c2f801849aab2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1afb78bf63b105cf00ee836f5e918ce175d478085d4beb35090c2f801849aab2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:57:35 compute-0 podman[332442]: 2025-10-11 08:57:35.541932063 +0000 UTC m=+0.195384952 container init 9f5e717395ce4b59fa05eb57e77e0ddf86e7f86c100be7b71cd481bad166d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 08:57:35 compute-0 podman[332442]: 2025-10-11 08:57:35.556281362 +0000 UTC m=+0.209734261 container start 9f5e717395ce4b59fa05eb57e77e0ddf86e7f86c100be7b71cd481bad166d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 08:57:35 compute-0 podman[332442]: 2025-10-11 08:57:35.562431017 +0000 UTC m=+0.215883896 container attach 9f5e717395ce4b59fa05eb57e77e0ddf86e7f86c100be7b71cd481bad166d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 08:57:35 compute-0 nova_compute[260935]: 2025-10-11 08:57:35.726 2 DEBUG nova.storage.rbd_utils [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] creating snapshot(99ad15e3c87a4c0b86d92d3f3ef7c439) on rbd image(98d8ebd6-0917-49cf-8efc-a245486424bc_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.001 2 INFO nova.virt.libvirt.driver [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Deleting instance files /var/lib/nova/instances/80799b12-9add-4561-a8eb-f1cf3c1f93a1_del
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.003 2 INFO nova.virt.libvirt.driver [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Deletion of /var/lib/nova/instances/80799b12-9add-4561-a8eb-f1cf3c1f93a1_del complete
Oct 11 08:57:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1625: 321 pgs: 321 active+clean; 531 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 460 KiB/s rd, 4.7 MiB/s wr, 173 op/s
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.060 2 DEBUG oslo_concurrency.lockutils [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.060 2 DEBUG oslo_concurrency.lockutils [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.060 2 DEBUG oslo_concurrency.lockutils [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.061 2 DEBUG oslo_concurrency.lockutils [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.061 2 DEBUG oslo_concurrency.lockutils [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.062 2 INFO nova.compute.manager [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Terminating instance
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.063 2 DEBUG nova.compute.manager [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.072 2 INFO nova.compute.manager [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.072 2 DEBUG oslo.service.loopingcall [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.073 2 DEBUG nova.compute.manager [-] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.073 2 DEBUG nova.network.neutron [-] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:57:36 compute-0 kernel: tap569c42c9-b8 (unregistering): left promiscuous mode
Oct 11 08:57:36 compute-0 NetworkManager[44960]: <info>  [1760173056.1261] device (tap569c42c9-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:36 compute-0 ovn_controller[152945]: 2025-10-11T08:57:36Z|00636|binding|INFO|Releasing lport 569c42c9-b81d-42a2-8fc1-8fced709830d from this chassis (sb_readonly=0)
Oct 11 08:57:36 compute-0 ovn_controller[152945]: 2025-10-11T08:57:36Z|00637|binding|INFO|Setting lport 569c42c9-b81d-42a2-8fc1-8fced709830d down in Southbound
Oct 11 08:57:36 compute-0 ovn_controller[152945]: 2025-10-11T08:57:36Z|00638|binding|INFO|Removing iface tap569c42c9-b8 ovn-installed in OVS
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:36.161 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:be:d8 10.100.0.7'], port_security=['fa:16:3e:5b:be:d8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '4', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=569c42c9-b81d-42a2-8fc1-8fced709830d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:57:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:36.165 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 569c42c9-b81d-42a2-8fc1-8fced709830d in datapath e075bdab-78c4-414f-b270-c41d1c82f498 unbound from our chassis
Oct 11 08:57:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:36.168 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 08:57:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:36.196 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[44a2d90e-e300-4174-afe9-b0de7f105180]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:36 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000049.scope: Deactivated successfully.
Oct 11 08:57:36 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000049.scope: Consumed 4.994s CPU time.
Oct 11 08:57:36 compute-0 systemd-machined[215705]: Machine qemu-81-instance-00000049 terminated.
Oct 11 08:57:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:36.235 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1a8eb42b-b1d3-4620-9958-c92448ba7ed1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:36.238 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2dfbea10-1a31-4e09-8403-bd4c05a39d02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:36.284 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a265a633-d0b4-4b46-95a8-c7dec9e5c869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.317 2 INFO nova.virt.libvirt.driver [-] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Instance destroyed successfully.
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.317 2 DEBUG nova.objects.instance [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'resources' on Instance uuid a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:36.324 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f2458ebf-fcc9-4c13-93cf-9692bc0e2f20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 27, 'rx_bytes': 1000, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 27, 'rx_bytes': 1000, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 15419, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332521, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.336 2 DEBUG nova.virt.libvirt.vif [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-636257832',display_name='tempest-ServersTestJSON-server-636257832',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-636257832',id=73,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:57:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-zxehnsia',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:57:34Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "569c42c9-b81d-42a2-8fc1-8fced709830d", "address": "fa:16:3e:5b:be:d8", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap569c42c9-b8", "ovs_interfaceid": "569c42c9-b81d-42a2-8fc1-8fced709830d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.337 2 DEBUG nova.network.os_vif_util [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "569c42c9-b81d-42a2-8fc1-8fced709830d", "address": "fa:16:3e:5b:be:d8", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap569c42c9-b8", "ovs_interfaceid": "569c42c9-b81d-42a2-8fc1-8fced709830d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.337 2 DEBUG nova.network.os_vif_util [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=569c42c9-b81d-42a2-8fc1-8fced709830d,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap569c42c9-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.338 2 DEBUG os_vif [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=569c42c9-b81d-42a2-8fc1-8fced709830d,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap569c42c9-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.339 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap569c42c9-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.350 2 INFO os_vif [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:be:d8,bridge_name='br-int',has_traffic_filtering=True,id=569c42c9-b81d-42a2-8fc1-8fced709830d,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap569c42c9-b8')
Oct 11 08:57:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:36.356 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[353da671-b446-42bd-ba08-96eabcc361dc]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479414, 'tstamp': 479414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332534, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479419, 'tstamp': 479419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332534, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:36.358 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:36.365 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape075bdab-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:36.365 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:36.366 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape075bdab-70, col_values=(('external_ids', {'iface-id': 'b9cf681c-9f4c-4c56-987a-55fa7aa89e1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:36.366 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Oct 11 08:57:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Oct 11 08:57:36 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.501 2 DEBUG nova.compute.manager [req-ef76a74a-a503-4195-ab07-6c8d2ad6a19f req-a9e9a093-2416-4758-a30e-12a1eb29f99a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.502 2 DEBUG oslo_concurrency.lockutils [req-ef76a74a-a503-4195-ab07-6c8d2ad6a19f req-a9e9a093-2416-4758-a30e-12a1eb29f99a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.502 2 DEBUG oslo_concurrency.lockutils [req-ef76a74a-a503-4195-ab07-6c8d2ad6a19f req-a9e9a093-2416-4758-a30e-12a1eb29f99a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.503 2 DEBUG oslo_concurrency.lockutils [req-ef76a74a-a503-4195-ab07-6c8d2ad6a19f req-a9e9a093-2416-4758-a30e-12a1eb29f99a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.504 2 DEBUG nova.compute.manager [req-ef76a74a-a503-4195-ab07-6c8d2ad6a19f req-a9e9a093-2416-4758-a30e-12a1eb29f99a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] No waiting events found dispatching network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.505 2 WARNING nova.compute.manager [req-ef76a74a-a503-4195-ab07-6c8d2ad6a19f req-a9e9a093-2416-4758-a30e-12a1eb29f99a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received unexpected event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c for instance with vm_state active and task_state shelving_image_uploading.
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.518 2 DEBUG nova.storage.rbd_utils [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] cloning vms/98d8ebd6-0917-49cf-8efc-a245486424bc_disk@99ad15e3c87a4c0b86d92d3f3ef7c439 to images/aff10a5e-fdc1-44d7-be39-f9f9dc19ad9d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:57:36 compute-0 nova_compute[260935]: 2025-10-11 08:57:36.662 2 DEBUG nova.storage.rbd_utils [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] flattening images/aff10a5e-fdc1-44d7-be39-f9f9dc19ad9d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:57:36 compute-0 wizardly_lovelace[332478]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:57:36 compute-0 wizardly_lovelace[332478]: --> relative data size: 1.0
Oct 11 08:57:36 compute-0 wizardly_lovelace[332478]: --> All data devices are unavailable
Oct 11 08:57:36 compute-0 podman[332442]: 2025-10-11 08:57:36.869027425 +0000 UTC m=+1.522480304 container died 9f5e717395ce4b59fa05eb57e77e0ddf86e7f86c100be7b71cd481bad166d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 08:57:36 compute-0 systemd[1]: libpod-9f5e717395ce4b59fa05eb57e77e0ddf86e7f86c100be7b71cd481bad166d8a6.scope: Deactivated successfully.
Oct 11 08:57:36 compute-0 systemd[1]: libpod-9f5e717395ce4b59fa05eb57e77e0ddf86e7f86c100be7b71cd481bad166d8a6.scope: Consumed 1.127s CPU time.
Oct 11 08:57:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-1afb78bf63b105cf00ee836f5e918ce175d478085d4beb35090c2f801849aab2-merged.mount: Deactivated successfully.
Oct 11 08:57:36 compute-0 podman[332442]: 2025-10-11 08:57:36.985928328 +0000 UTC m=+1.639381217 container remove 9f5e717395ce4b59fa05eb57e77e0ddf86e7f86c100be7b71cd481bad166d8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 08:57:36 compute-0 systemd[1]: libpod-conmon-9f5e717395ce4b59fa05eb57e77e0ddf86e7f86c100be7b71cd481bad166d8a6.scope: Deactivated successfully.
Oct 11 08:57:37 compute-0 sudo[332295]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:37 compute-0 nova_compute[260935]: 2025-10-11 08:57:37.063 2 INFO nova.virt.libvirt.driver [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Deleting instance files /var/lib/nova/instances/a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4_del
Oct 11 08:57:37 compute-0 nova_compute[260935]: 2025-10-11 08:57:37.064 2 INFO nova.virt.libvirt.driver [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Deletion of /var/lib/nova/instances/a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4_del complete
Oct 11 08:57:37 compute-0 nova_compute[260935]: 2025-10-11 08:57:37.082 2 DEBUG nova.storage.rbd_utils [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] removing snapshot(99ad15e3c87a4c0b86d92d3f3ef7c439) on rbd image(98d8ebd6-0917-49cf-8efc-a245486424bc_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 08:57:37 compute-0 sudo[332658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:57:37 compute-0 sudo[332658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:37 compute-0 sudo[332658]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:37 compute-0 nova_compute[260935]: 2025-10-11 08:57:37.148 2 INFO nova.compute.manager [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Took 1.08 seconds to destroy the instance on the hypervisor.
Oct 11 08:57:37 compute-0 nova_compute[260935]: 2025-10-11 08:57:37.149 2 DEBUG oslo.service.loopingcall [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:57:37 compute-0 nova_compute[260935]: 2025-10-11 08:57:37.152 2 DEBUG nova.compute.manager [-] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:57:37 compute-0 nova_compute[260935]: 2025-10-11 08:57:37.152 2 DEBUG nova.network.neutron [-] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:57:37 compute-0 sudo[332686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:57:37 compute-0 sudo[332686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:37 compute-0 sudo[332686]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:37 compute-0 sudo[332711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:57:37 compute-0 sudo[332711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:37 compute-0 sudo[332711]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:37 compute-0 sudo[332736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:57:37 compute-0 sudo[332736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:37 compute-0 ceph-mon[74313]: pgmap v1625: 321 pgs: 321 active+clean; 531 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 460 KiB/s rd, 4.7 MiB/s wr, 173 op/s
Oct 11 08:57:37 compute-0 ceph-mon[74313]: osdmap e235: 3 total, 3 up, 3 in
Oct 11 08:57:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Oct 11 08:57:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Oct 11 08:57:37 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Oct 11 08:57:37 compute-0 nova_compute[260935]: 2025-10-11 08:57:37.499 2 DEBUG nova.storage.rbd_utils [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] creating snapshot(snap) on rbd image(aff10a5e-fdc1-44d7-be39-f9f9dc19ad9d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 08:57:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:57:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1426219286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:57:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:57:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1426219286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:57:37 compute-0 podman[332820]: 2025-10-11 08:57:37.717531319 +0000 UTC m=+0.053879177 container create 70882b38f2160b851f48448973abb2702a84eaf6aed8a9017dd6daa47f7618df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 08:57:37 compute-0 systemd[1]: Started libpod-conmon-70882b38f2160b851f48448973abb2702a84eaf6aed8a9017dd6daa47f7618df.scope.
Oct 11 08:57:37 compute-0 podman[332820]: 2025-10-11 08:57:37.695354957 +0000 UTC m=+0.031702815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:57:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:57:37 compute-0 podman[332820]: 2025-10-11 08:57:37.837911402 +0000 UTC m=+0.174259310 container init 70882b38f2160b851f48448973abb2702a84eaf6aed8a9017dd6daa47f7618df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 08:57:37 compute-0 podman[332820]: 2025-10-11 08:57:37.853075574 +0000 UTC m=+0.189423432 container start 70882b38f2160b851f48448973abb2702a84eaf6aed8a9017dd6daa47f7618df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 08:57:37 compute-0 podman[332820]: 2025-10-11 08:57:37.858147229 +0000 UTC m=+0.194495127 container attach 70882b38f2160b851f48448973abb2702a84eaf6aed8a9017dd6daa47f7618df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:57:37 compute-0 agitated_khorana[332836]: 167 167
Oct 11 08:57:37 compute-0 systemd[1]: libpod-70882b38f2160b851f48448973abb2702a84eaf6aed8a9017dd6daa47f7618df.scope: Deactivated successfully.
Oct 11 08:57:37 compute-0 podman[332820]: 2025-10-11 08:57:37.864258573 +0000 UTC m=+0.200606431 container died 70882b38f2160b851f48448973abb2702a84eaf6aed8a9017dd6daa47f7618df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-5abfdd10f41c815c51708a6a7c5cfd7d55baab888b9b008a0df822fcf39982ff-merged.mount: Deactivated successfully.
Oct 11 08:57:37 compute-0 podman[332820]: 2025-10-11 08:57:37.929978037 +0000 UTC m=+0.266325865 container remove 70882b38f2160b851f48448973abb2702a84eaf6aed8a9017dd6daa47f7618df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 08:57:37 compute-0 systemd[1]: libpod-conmon-70882b38f2160b851f48448973abb2702a84eaf6aed8a9017dd6daa47f7618df.scope: Deactivated successfully.
Oct 11 08:57:37 compute-0 podman[332842]: 2025-10-11 08:57:37.994830817 +0000 UTC m=+0.092243952 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:57:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 321 active+clean; 484 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 9.1 MiB/s rd, 6.0 MiB/s wr, 348 op/s
Oct 11 08:57:38 compute-0 podman[332877]: 2025-10-11 08:57:38.167248803 +0000 UTC m=+0.074274889 container create 863dfeaf3ac0087331f623dbcfadf72ad1c3dd623da4a7b3dc008e3921c4b7c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:57:38 compute-0 systemd[1]: Started libpod-conmon-863dfeaf3ac0087331f623dbcfadf72ad1c3dd623da4a7b3dc008e3921c4b7c2.scope.
Oct 11 08:57:38 compute-0 podman[332877]: 2025-10-11 08:57:38.139605205 +0000 UTC m=+0.046631301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:57:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d341406c02256722a2e7c6dbb661095bc6168cbb5cdc2b7cf22b8c8bd57f7ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d341406c02256722a2e7c6dbb661095bc6168cbb5cdc2b7cf22b8c8bd57f7ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d341406c02256722a2e7c6dbb661095bc6168cbb5cdc2b7cf22b8c8bd57f7ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d341406c02256722a2e7c6dbb661095bc6168cbb5cdc2b7cf22b8c8bd57f7ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:57:38 compute-0 podman[332877]: 2025-10-11 08:57:38.294408479 +0000 UTC m=+0.201434585 container init 863dfeaf3ac0087331f623dbcfadf72ad1c3dd623da4a7b3dc008e3921c4b7c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wu, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:57:38 compute-0 podman[332877]: 2025-10-11 08:57:38.312172465 +0000 UTC m=+0.219198531 container start 863dfeaf3ac0087331f623dbcfadf72ad1c3dd623da4a7b3dc008e3921c4b7c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:57:38 compute-0 podman[332877]: 2025-10-11 08:57:38.316430767 +0000 UTC m=+0.223456833 container attach 863dfeaf3ac0087331f623dbcfadf72ad1c3dd623da4a7b3dc008e3921c4b7c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 08:57:38 compute-0 nova_compute[260935]: 2025-10-11 08:57:38.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:57:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Oct 11 08:57:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Oct 11 08:57:38 compute-0 ceph-mon[74313]: osdmap e236: 3 total, 3 up, 3 in
Oct 11 08:57:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1426219286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:57:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1426219286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:57:38 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Oct 11 08:57:39 compute-0 stoic_wu[332893]: {
Oct 11 08:57:39 compute-0 stoic_wu[332893]:     "0": [
Oct 11 08:57:39 compute-0 stoic_wu[332893]:         {
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "devices": [
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "/dev/loop3"
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             ],
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "lv_name": "ceph_lv0",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "lv_size": "21470642176",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "name": "ceph_lv0",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "tags": {
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.cluster_name": "ceph",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.crush_device_class": "",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.encrypted": "0",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.osd_id": "0",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.type": "block",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.vdo": "0"
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             },
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "type": "block",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "vg_name": "ceph_vg0"
Oct 11 08:57:39 compute-0 stoic_wu[332893]:         }
Oct 11 08:57:39 compute-0 stoic_wu[332893]:     ],
Oct 11 08:57:39 compute-0 stoic_wu[332893]:     "1": [
Oct 11 08:57:39 compute-0 stoic_wu[332893]:         {
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "devices": [
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "/dev/loop4"
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             ],
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "lv_name": "ceph_lv1",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "lv_size": "21470642176",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "name": "ceph_lv1",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "tags": {
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.cluster_name": "ceph",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.crush_device_class": "",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.encrypted": "0",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.osd_id": "1",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.type": "block",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.vdo": "0"
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             },
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "type": "block",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "vg_name": "ceph_vg1"
Oct 11 08:57:39 compute-0 stoic_wu[332893]:         }
Oct 11 08:57:39 compute-0 stoic_wu[332893]:     ],
Oct 11 08:57:39 compute-0 stoic_wu[332893]:     "2": [
Oct 11 08:57:39 compute-0 stoic_wu[332893]:         {
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "devices": [
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "/dev/loop5"
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             ],
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "lv_name": "ceph_lv2",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "lv_size": "21470642176",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "name": "ceph_lv2",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "tags": {
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.cluster_name": "ceph",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.crush_device_class": "",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.encrypted": "0",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.osd_id": "2",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.type": "block",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:                 "ceph.vdo": "0"
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             },
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "type": "block",
Oct 11 08:57:39 compute-0 stoic_wu[332893]:             "vg_name": "ceph_vg2"
Oct 11 08:57:39 compute-0 stoic_wu[332893]:         }
Oct 11 08:57:39 compute-0 stoic_wu[332893]:     ]
Oct 11 08:57:39 compute-0 stoic_wu[332893]: }
Oct 11 08:57:39 compute-0 systemd[1]: libpod-863dfeaf3ac0087331f623dbcfadf72ad1c3dd623da4a7b3dc008e3921c4b7c2.scope: Deactivated successfully.
Oct 11 08:57:39 compute-0 podman[332877]: 2025-10-11 08:57:39.10284401 +0000 UTC m=+1.009870096 container died 863dfeaf3ac0087331f623dbcfadf72ad1c3dd623da4a7b3dc008e3921c4b7c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wu, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 08:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d341406c02256722a2e7c6dbb661095bc6168cbb5cdc2b7cf22b8c8bd57f7ba-merged.mount: Deactivated successfully.
Oct 11 08:57:39 compute-0 podman[332877]: 2025-10-11 08:57:39.18910318 +0000 UTC m=+1.096129266 container remove 863dfeaf3ac0087331f623dbcfadf72ad1c3dd623da4a7b3dc008e3921c4b7c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 08:57:39 compute-0 systemd[1]: libpod-conmon-863dfeaf3ac0087331f623dbcfadf72ad1c3dd623da4a7b3dc008e3921c4b7c2.scope: Deactivated successfully.
Oct 11 08:57:39 compute-0 sudo[332736]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:39 compute-0 sudo[332916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:57:39 compute-0 sudo[332916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:39 compute-0 sudo[332916]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:39 compute-0 sudo[332941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:57:39 compute-0 sudo[332941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:39 compute-0 sudo[332941]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:39 compute-0 ceph-mon[74313]: pgmap v1628: 321 pgs: 321 active+clean; 484 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 9.1 MiB/s rd, 6.0 MiB/s wr, 348 op/s
Oct 11 08:57:39 compute-0 ceph-mon[74313]: osdmap e237: 3 total, 3 up, 3 in
Oct 11 08:57:39 compute-0 nova_compute[260935]: 2025-10-11 08:57:39.480 2 DEBUG nova.network.neutron [-] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:39 compute-0 nova_compute[260935]: 2025-10-11 08:57:39.518 2 INFO nova.compute.manager [-] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Took 3.45 seconds to deallocate network for instance.
Oct 11 08:57:39 compute-0 sudo[332966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:57:39 compute-0 sudo[332966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:39 compute-0 sudo[332966]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:39 compute-0 nova_compute[260935]: 2025-10-11 08:57:39.578 2 DEBUG oslo_concurrency.lockutils [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:39 compute-0 nova_compute[260935]: 2025-10-11 08:57:39.579 2 DEBUG oslo_concurrency.lockutils [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:39 compute-0 sudo[332991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:57:39 compute-0 sudo[332991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:39 compute-0 nova_compute[260935]: 2025-10-11 08:57:39.804 2 DEBUG oslo_concurrency.processutils [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:39 compute-0 nova_compute[260935]: 2025-10-11 08:57:39.861 2 DEBUG nova.network.neutron [-] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:39 compute-0 nova_compute[260935]: 2025-10-11 08:57:39.884 2 INFO nova.compute.manager [-] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Took 2.73 seconds to deallocate network for instance.
Oct 11 08:57:39 compute-0 nova_compute[260935]: 2025-10-11 08:57:39.897 2 DEBUG nova.compute.manager [req-f2c9eaca-ad79-42eb-8f8d-2f2917902754 req-69f559f7-238e-44d6-a6d0-258d8a5a1707 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Received event network-vif-unplugged-569c42c9-b81d-42a2-8fc1-8fced709830d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:39 compute-0 nova_compute[260935]: 2025-10-11 08:57:39.897 2 DEBUG oslo_concurrency.lockutils [req-f2c9eaca-ad79-42eb-8f8d-2f2917902754 req-69f559f7-238e-44d6-a6d0-258d8a5a1707 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:39 compute-0 nova_compute[260935]: 2025-10-11 08:57:39.898 2 DEBUG oslo_concurrency.lockutils [req-f2c9eaca-ad79-42eb-8f8d-2f2917902754 req-69f559f7-238e-44d6-a6d0-258d8a5a1707 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:39 compute-0 nova_compute[260935]: 2025-10-11 08:57:39.898 2 DEBUG oslo_concurrency.lockutils [req-f2c9eaca-ad79-42eb-8f8d-2f2917902754 req-69f559f7-238e-44d6-a6d0-258d8a5a1707 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:39 compute-0 nova_compute[260935]: 2025-10-11 08:57:39.898 2 DEBUG nova.compute.manager [req-f2c9eaca-ad79-42eb-8f8d-2f2917902754 req-69f559f7-238e-44d6-a6d0-258d8a5a1707 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] No waiting events found dispatching network-vif-unplugged-569c42c9-b81d-42a2-8fc1-8fced709830d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:39 compute-0 nova_compute[260935]: 2025-10-11 08:57:39.899 2 DEBUG nova.compute.manager [req-f2c9eaca-ad79-42eb-8f8d-2f2917902754 req-69f559f7-238e-44d6-a6d0-258d8a5a1707 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Received event network-vif-unplugged-569c42c9-b81d-42a2-8fc1-8fced709830d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:57:39 compute-0 nova_compute[260935]: 2025-10-11 08:57:39.944 2 DEBUG oslo_concurrency.lockutils [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:39 compute-0 nova_compute[260935]: 2025-10-11 08:57:39.952 2 DEBUG nova.compute.manager [req-d8a9dbdf-ed84-41e9-b3a0-140e874e3bd3 req-d68cd1c6-d65f-42fd-a161-a4b2e087cf28 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Received event network-vif-deleted-b57a52c8-34c3-415f-9848-195b09be3611 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1630: 321 pgs: 321 active+clean; 484 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 374 op/s
Oct 11 08:57:40 compute-0 podman[333077]: 2025-10-11 08:57:40.262907209 +0000 UTC m=+0.067771693 container create 02735ecd5e990328abfb71af4205f6b6f781eea87bf24053d321c6e6c26cb986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:57:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:57:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1588345491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:40 compute-0 systemd[1]: Started libpod-conmon-02735ecd5e990328abfb71af4205f6b6f781eea87bf24053d321c6e6c26cb986.scope.
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.317 2 DEBUG oslo_concurrency.processutils [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.324 2 DEBUG nova.compute.provider_tree [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:57:40 compute-0 podman[333077]: 2025-10-11 08:57:40.239181583 +0000 UTC m=+0.044046067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.336 2 DEBUG nova.objects.instance [None req-57ae2581-de5e-4bd6-90b3-bbff0514a800 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lazy-loading 'flavor' on Instance uuid 075ec27a-70a2-49f6-a097-5738f3407605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:57:40 compute-0 podman[333077]: 2025-10-11 08:57:40.355932702 +0000 UTC m=+0.160797186 container init 02735ecd5e990328abfb71af4205f6b6f781eea87bf24053d321c6e6c26cb986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.357 2 DEBUG nova.scheduler.client.report [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:57:40 compute-0 podman[333077]: 2025-10-11 08:57:40.362675344 +0000 UTC m=+0.167539818 container start 02735ecd5e990328abfb71af4205f6b6f781eea87bf24053d321c6e6c26cb986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 08:57:40 compute-0 podman[333077]: 2025-10-11 08:57:40.366456252 +0000 UTC m=+0.171320736 container attach 02735ecd5e990328abfb71af4205f6b6f781eea87bf24053d321c6e6c26cb986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 08:57:40 compute-0 vibrant_antonelli[333095]: 167 167
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.369 2 DEBUG oslo_concurrency.lockutils [None req-57ae2581-de5e-4bd6-90b3-bbff0514a800 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Acquiring lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.369 2 DEBUG oslo_concurrency.lockutils [None req-57ae2581-de5e-4bd6-90b3-bbff0514a800 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Acquired lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:40 compute-0 systemd[1]: libpod-02735ecd5e990328abfb71af4205f6b6f781eea87bf24053d321c6e6c26cb986.scope: Deactivated successfully.
Oct 11 08:57:40 compute-0 podman[333077]: 2025-10-11 08:57:40.370396474 +0000 UTC m=+0.175260958 container died 02735ecd5e990328abfb71af4205f6b6f781eea87bf24053d321c6e6c26cb986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.391 2 DEBUG oslo_concurrency.lockutils [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.393 2 DEBUG oslo_concurrency.lockutils [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.449s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab34b206dc36720e516f1e2ddb8af9f97020dfcf0a4f4cafa8bd08ad387d3025-merged.mount: Deactivated successfully.
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.412 2 INFO nova.scheduler.client.report [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Deleted allocations for instance 80799b12-9add-4561-a8eb-f1cf3c1f93a1
Oct 11 08:57:40 compute-0 podman[333077]: 2025-10-11 08:57:40.424049254 +0000 UTC m=+0.228913728 container remove 02735ecd5e990328abfb71af4205f6b6f781eea87bf24053d321c6e6c26cb986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 08:57:40 compute-0 systemd[1]: libpod-conmon-02735ecd5e990328abfb71af4205f6b6f781eea87bf24053d321c6e6c26cb986.scope: Deactivated successfully.
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.475 2 INFO nova.virt.libvirt.driver [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Snapshot image upload complete
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.476 2 DEBUG nova.compute.manager [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:40 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1588345491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.521 2 DEBUG oslo_concurrency.lockutils [None req-88a713e6-8e9d-4f15-a13f-6d4d6b227dda 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "80799b12-9add-4561-a8eb-f1cf3c1f93a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.558 2 INFO nova.compute.manager [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Shelve offloading
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.575 2 INFO nova.virt.libvirt.driver [-] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Instance destroyed successfully.
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.576 2 DEBUG nova.compute.manager [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.579 2 DEBUG oslo_concurrency.processutils [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.630 2 DEBUG oslo_concurrency.lockutils [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.631 2 DEBUG oslo_concurrency.lockutils [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquired lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:40 compute-0 nova_compute[260935]: 2025-10-11 08:57:40.631 2 DEBUG nova.network.neutron [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:57:40 compute-0 podman[333121]: 2025-10-11 08:57:40.674791174 +0000 UTC m=+0.071130370 container create 8492bab7ab81057148808cb4345a1942272f07becf9a49b19c66885775f5b84a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:57:40 compute-0 systemd[1]: Started libpod-conmon-8492bab7ab81057148808cb4345a1942272f07becf9a49b19c66885775f5b84a.scope.
Oct 11 08:57:40 compute-0 podman[333121]: 2025-10-11 08:57:40.646509217 +0000 UTC m=+0.042848483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:57:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9dbcc679b9335317abd5f8c8ea20dcea5bce96b81c871a23edabcb897fe7ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9dbcc679b9335317abd5f8c8ea20dcea5bce96b81c871a23edabcb897fe7ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9dbcc679b9335317abd5f8c8ea20dcea5bce96b81c871a23edabcb897fe7ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9dbcc679b9335317abd5f8c8ea20dcea5bce96b81c871a23edabcb897fe7ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:57:40 compute-0 podman[333121]: 2025-10-11 08:57:40.782357841 +0000 UTC m=+0.178697027 container init 8492bab7ab81057148808cb4345a1942272f07becf9a49b19c66885775f5b84a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:57:40 compute-0 podman[333121]: 2025-10-11 08:57:40.796979608 +0000 UTC m=+0.193318814 container start 8492bab7ab81057148808cb4345a1942272f07becf9a49b19c66885775f5b84a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct 11 08:57:40 compute-0 podman[333121]: 2025-10-11 08:57:40.800757076 +0000 UTC m=+0.197096262 container attach 8492bab7ab81057148808cb4345a1942272f07becf9a49b19c66885775f5b84a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rosalind, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 08:57:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:57:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3748489939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:41 compute-0 nova_compute[260935]: 2025-10-11 08:57:41.042 2 DEBUG oslo_concurrency.processutils [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:41 compute-0 nova_compute[260935]: 2025-10-11 08:57:41.049 2 DEBUG nova.compute.provider_tree [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:57:41 compute-0 nova_compute[260935]: 2025-10-11 08:57:41.074 2 DEBUG nova.scheduler.client.report [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:57:41 compute-0 nova_compute[260935]: 2025-10-11 08:57:41.120 2 DEBUG oslo_concurrency.lockutils [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:41 compute-0 nova_compute[260935]: 2025-10-11 08:57:41.180 2 INFO nova.scheduler.client.report [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Deleted allocations for instance a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4
Oct 11 08:57:41 compute-0 nova_compute[260935]: 2025-10-11 08:57:41.259 2 DEBUG oslo_concurrency.lockutils [None req-49474401-22be-48bc-a781-b65169c97a5a ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:41 compute-0 nova_compute[260935]: 2025-10-11 08:57:41.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:41 compute-0 ceph-mon[74313]: pgmap v1630: 321 pgs: 321 active+clean; 484 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 374 op/s
Oct 11 08:57:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3748489939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]: {
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:         "osd_id": 2,
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:         "type": "bluestore"
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:     },
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:         "osd_id": 0,
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:         "type": "bluestore"
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:     },
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:         "osd_id": 1,
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:         "type": "bluestore"
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]:     }
Oct 11 08:57:41 compute-0 trusting_rosalind[333157]: }
Oct 11 08:57:41 compute-0 systemd[1]: libpod-8492bab7ab81057148808cb4345a1942272f07becf9a49b19c66885775f5b84a.scope: Deactivated successfully.
Oct 11 08:57:41 compute-0 systemd[1]: libpod-8492bab7ab81057148808cb4345a1942272f07becf9a49b19c66885775f5b84a.scope: Consumed 1.038s CPU time.
Oct 11 08:57:41 compute-0 podman[333121]: 2025-10-11 08:57:41.828856992 +0000 UTC m=+1.225196208 container died 8492bab7ab81057148808cb4345a1942272f07becf9a49b19c66885775f5b84a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rosalind, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 08:57:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc9dbcc679b9335317abd5f8c8ea20dcea5bce96b81c871a23edabcb897fe7ee-merged.mount: Deactivated successfully.
Oct 11 08:57:41 compute-0 podman[333121]: 2025-10-11 08:57:41.908310787 +0000 UTC m=+1.304650013 container remove 8492bab7ab81057148808cb4345a1942272f07becf9a49b19c66885775f5b84a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 08:57:41 compute-0 systemd[1]: libpod-conmon-8492bab7ab81057148808cb4345a1942272f07becf9a49b19c66885775f5b84a.scope: Deactivated successfully.
Oct 11 08:57:41 compute-0 sudo[332991]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:57:41 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:57:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:57:41 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:57:41 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 89c78909-0cd1-44f5-ab78-716308c44bd5 does not exist
Oct 11 08:57:41 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev dd334ab8-7202-4576-8077-edb25bc17ffb does not exist
Oct 11 08:57:41 compute-0 nova_compute[260935]: 2025-10-11 08:57:41.981 2 DEBUG nova.compute.manager [req-1e566fc1-861b-4edc-b8e5-aefb27327205 req-e8250a7f-d50d-4b8d-a432-82e8cea53f6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Received event network-vif-plugged-569c42c9-b81d-42a2-8fc1-8fced709830d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:41 compute-0 nova_compute[260935]: 2025-10-11 08:57:41.983 2 DEBUG oslo_concurrency.lockutils [req-1e566fc1-861b-4edc-b8e5-aefb27327205 req-e8250a7f-d50d-4b8d-a432-82e8cea53f6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:41 compute-0 nova_compute[260935]: 2025-10-11 08:57:41.983 2 DEBUG oslo_concurrency.lockutils [req-1e566fc1-861b-4edc-b8e5-aefb27327205 req-e8250a7f-d50d-4b8d-a432-82e8cea53f6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:41 compute-0 nova_compute[260935]: 2025-10-11 08:57:41.984 2 DEBUG oslo_concurrency.lockutils [req-1e566fc1-861b-4edc-b8e5-aefb27327205 req-e8250a7f-d50d-4b8d-a432-82e8cea53f6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:41 compute-0 nova_compute[260935]: 2025-10-11 08:57:41.984 2 DEBUG nova.compute.manager [req-1e566fc1-861b-4edc-b8e5-aefb27327205 req-e8250a7f-d50d-4b8d-a432-82e8cea53f6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] No waiting events found dispatching network-vif-plugged-569c42c9-b81d-42a2-8fc1-8fced709830d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:41 compute-0 nova_compute[260935]: 2025-10-11 08:57:41.985 2 WARNING nova.compute.manager [req-1e566fc1-861b-4edc-b8e5-aefb27327205 req-e8250a7f-d50d-4b8d-a432-82e8cea53f6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Received unexpected event network-vif-plugged-569c42c9-b81d-42a2-8fc1-8fced709830d for instance with vm_state deleted and task_state None.
Oct 11 08:57:41 compute-0 nova_compute[260935]: 2025-10-11 08:57:41.985 2 DEBUG nova.compute.manager [req-1e566fc1-861b-4edc-b8e5-aefb27327205 req-e8250a7f-d50d-4b8d-a432-82e8cea53f6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Received event network-vif-deleted-569c42c9-b81d-42a2-8fc1-8fced709830d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1631: 321 pgs: 321 active+clean; 484 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 374 op/s
Oct 11 08:57:42 compute-0 sudo[333207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:57:42 compute-0 sudo[333207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:42 compute-0 sudo[333207]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:42 compute-0 sudo[333244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:57:42 compute-0 sudo[333244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:57:42 compute-0 sudo[333244]: pam_unix(sudo:session): session closed for user root
Oct 11 08:57:42 compute-0 podman[333232]: 2025-10-11 08:57:42.253783758 +0000 UTC m=+0.132256412 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 08:57:42 compute-0 podman[333231]: 2025-10-11 08:57:42.254801468 +0000 UTC m=+0.133689084 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 08:57:42 compute-0 nova_compute[260935]: 2025-10-11 08:57:42.520 2 DEBUG nova.network.neutron [None req-57ae2581-de5e-4bd6-90b3-bbff0514a800 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:57:42 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:57:42 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:57:42 compute-0 ceph-mon[74313]: pgmap v1631: 321 pgs: 321 active+clean; 484 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 374 op/s
Oct 11 08:57:43 compute-0 nova_compute[260935]: 2025-10-11 08:57:43.147 2 DEBUG nova.network.neutron [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updating instance_info_cache with network_info: [{"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:43 compute-0 nova_compute[260935]: 2025-10-11 08:57:43.187 2 DEBUG oslo_concurrency.lockutils [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Releasing lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:43 compute-0 nova_compute[260935]: 2025-10-11 08:57:43.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:57:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1632: 321 pgs: 321 active+clean; 484 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 6.1 MiB/s wr, 323 op/s
Oct 11 08:57:44 compute-0 nova_compute[260935]: 2025-10-11 08:57:44.442 2 DEBUG nova.compute.manager [req-3a36cc4b-0c4d-4b4d-8942-e895c3d79809 req-aa8cef49-1b53-442d-8094-2f96c64f6125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Received event network-changed-10f3051f-8897-4e57-89f1-4a03aac94192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:44 compute-0 nova_compute[260935]: 2025-10-11 08:57:44.443 2 DEBUG nova.compute.manager [req-3a36cc4b-0c4d-4b4d-8942-e895c3d79809 req-aa8cef49-1b53-442d-8094-2f96c64f6125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Refreshing instance network info cache due to event network-changed-10f3051f-8897-4e57-89f1-4a03aac94192. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:57:44 compute-0 nova_compute[260935]: 2025-10-11 08:57:44.443 2 DEBUG oslo_concurrency.lockutils [req-3a36cc4b-0c4d-4b4d-8942-e895c3d79809 req-aa8cef49-1b53-442d-8094-2f96c64f6125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:44 compute-0 nova_compute[260935]: 2025-10-11 08:57:44.693 2 INFO nova.virt.libvirt.driver [-] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Instance destroyed successfully.
Oct 11 08:57:44 compute-0 nova_compute[260935]: 2025-10-11 08:57:44.694 2 DEBUG nova.objects.instance [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'resources' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:44 compute-0 nova_compute[260935]: 2025-10-11 08:57:44.716 2 DEBUG nova.virt.libvirt.vif [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:55:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-400269969',display_name='tempest-ServerActionsTestOtherB-server-400269969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-400269969',id=58,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeZ9ALPs3Oq9pCizSQrA3R5ihPZilMrzZLelKqG/iqyw53khzMkIgHWfcYBLUTF7AIKY9B5CPoZLdbz5r+wTvSZysVCKTRxhaIofirBhYLqgm1OKLLxBwrJrQAKSAriNg==',key_name='tempest-keypair-1264143465',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:55:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='73adfb8cf0c64359b1f33a9643148ef4',ramdisk_id='',reservation_id='r-bh4aj5da',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1445504716',owner_user_name='tempest-ServerActionsTestOtherB-1445504716-project-member',shelved_at='2025-10-11T08:57:40.476674',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='aff10a5e-fdc1-44d7-be39-f9f9dc19ad9d'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:57:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d5f5f07c57c467286168be7c097bf26',uuid=98d8ebd6-0917-49cf-8efc-a245486424bc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:57:44 compute-0 nova_compute[260935]: 2025-10-11 08:57:44.717 2 DEBUG nova.network.os_vif_util [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converting VIF {"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:44 compute-0 nova_compute[260935]: 2025-10-11 08:57:44.718 2 DEBUG nova.network.os_vif_util [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:44 compute-0 nova_compute[260935]: 2025-10-11 08:57:44.719 2 DEBUG os_vif [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:57:44 compute-0 nova_compute[260935]: 2025-10-11 08:57:44.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:44 compute-0 nova_compute[260935]: 2025-10-11 08:57:44.723 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10e01bbb-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:44 compute-0 nova_compute[260935]: 2025-10-11 08:57:44.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:44 compute-0 nova_compute[260935]: 2025-10-11 08:57:44.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:44 compute-0 nova_compute[260935]: 2025-10-11 08:57:44.785 2 INFO os_vif [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d')
Oct 11 08:57:45 compute-0 ceph-mon[74313]: pgmap v1632: 321 pgs: 321 active+clean; 484 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 6.1 MiB/s wr, 323 op/s
Oct 11 08:57:45 compute-0 nova_compute[260935]: 2025-10-11 08:57:45.224 2 INFO nova.virt.libvirt.driver [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Deleting instance files /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc_del
Oct 11 08:57:45 compute-0 nova_compute[260935]: 2025-10-11 08:57:45.225 2 INFO nova.virt.libvirt.driver [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Deletion of /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc_del complete
Oct 11 08:57:45 compute-0 nova_compute[260935]: 2025-10-11 08:57:45.271 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "937f10b2-422f-42bc-b13e-5995310b461c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:45 compute-0 nova_compute[260935]: 2025-10-11 08:57:45.272 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "937f10b2-422f-42bc-b13e-5995310b461c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:45 compute-0 nova_compute[260935]: 2025-10-11 08:57:45.288 2 DEBUG nova.compute.manager [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:57:45 compute-0 nova_compute[260935]: 2025-10-11 08:57:45.421 2 INFO nova.scheduler.client.report [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Deleted allocations for instance 98d8ebd6-0917-49cf-8efc-a245486424bc
Oct 11 08:57:45 compute-0 nova_compute[260935]: 2025-10-11 08:57:45.439 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:45 compute-0 nova_compute[260935]: 2025-10-11 08:57:45.440 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:45 compute-0 nova_compute[260935]: 2025-10-11 08:57:45.447 2 DEBUG nova.virt.hardware [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:57:45 compute-0 nova_compute[260935]: 2025-10-11 08:57:45.448 2 INFO nova.compute.claims [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:57:45 compute-0 nova_compute[260935]: 2025-10-11 08:57:45.518 2 DEBUG oslo_concurrency.lockutils [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:45 compute-0 nova_compute[260935]: 2025-10-11 08:57:45.623 2 DEBUG oslo_concurrency.processutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:45 compute-0 nova_compute[260935]: 2025-10-11 08:57:45.849 2 DEBUG nova.compute.manager [req-92366f17-1061-4b7c-a405-a6cce0d9ba31 req-22117d87-2840-488f-9ac0-765a71537489 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-changed-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:45 compute-0 nova_compute[260935]: 2025-10-11 08:57:45.850 2 DEBUG nova.compute.manager [req-92366f17-1061-4b7c-a405-a6cce0d9ba31 req-22117d87-2840-488f-9ac0-765a71537489 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Refreshing instance network info cache due to event network-changed-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:57:45 compute-0 nova_compute[260935]: 2025-10-11 08:57:45.850 2 DEBUG oslo_concurrency.lockutils [req-92366f17-1061-4b7c-a405-a6cce0d9ba31 req-22117d87-2840-488f-9ac0-765a71537489 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:45 compute-0 nova_compute[260935]: 2025-10-11 08:57:45.850 2 DEBUG oslo_concurrency.lockutils [req-92366f17-1061-4b7c-a405-a6cce0d9ba31 req-22117d87-2840-488f-9ac0-765a71537489 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:45 compute-0 nova_compute[260935]: 2025-10-11 08:57:45.851 2 DEBUG nova.network.neutron [req-92366f17-1061-4b7c-a405-a6cce0d9ba31 req-22117d87-2840-488f-9ac0-765a71537489 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Refreshing network info cache for port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:57:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1633: 321 pgs: 321 active+clean; 484 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 832 B/s wr, 25 op/s
Oct 11 08:57:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:57:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/694663705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.101 2 DEBUG oslo_concurrency.processutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.107 2 DEBUG nova.compute.provider_tree [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:57:46 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/694663705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.143 2 DEBUG nova.scheduler.client.report [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.179 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.180 2 DEBUG nova.compute.manager [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.184 2 DEBUG oslo_concurrency.lockutils [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.201 2 DEBUG nova.network.neutron [None req-57ae2581-de5e-4bd6-90b3-bbff0514a800 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Updating instance_info_cache with network_info: [{"id": "10f3051f-8897-4e57-89f1-4a03aac94192", "address": "fa:16:3e:15:d9:b2", "network": {"id": "36e695f7-a84c-4eed-8d21-d70ce3f1f736", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-876022911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b0903df38254068b8566040cf90343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f3051f-88", "ovs_interfaceid": "10f3051f-8897-4e57-89f1-4a03aac94192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.236 2 DEBUG oslo_concurrency.lockutils [None req-57ae2581-de5e-4bd6-90b3-bbff0514a800 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Releasing lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.236 2 DEBUG nova.compute.manager [None req-57ae2581-de5e-4bd6-90b3-bbff0514a800 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.237 2 DEBUG nova.compute.manager [None req-57ae2581-de5e-4bd6-90b3-bbff0514a800 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] network_info to inject: |[{"id": "10f3051f-8897-4e57-89f1-4a03aac94192", "address": "fa:16:3e:15:d9:b2", "network": {"id": "36e695f7-a84c-4eed-8d21-d70ce3f1f736", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-876022911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b0903df38254068b8566040cf90343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f3051f-88", "ovs_interfaceid": "10f3051f-8897-4e57-89f1-4a03aac94192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.240 2 DEBUG oslo_concurrency.lockutils [req-3a36cc4b-0c4d-4b4d-8942-e895c3d79809 req-aa8cef49-1b53-442d-8094-2f96c64f6125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.241 2 DEBUG nova.network.neutron [req-3a36cc4b-0c4d-4b4d-8942-e895c3d79809 req-aa8cef49-1b53-442d-8094-2f96c64f6125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Refreshing network info cache for port 10f3051f-8897-4e57-89f1-4a03aac94192 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.273 2 DEBUG nova.compute.manager [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.274 2 DEBUG nova.network.neutron [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.301 2 INFO nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.323 2 DEBUG oslo_concurrency.processutils [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.389 2 DEBUG nova.compute.manager [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.487 2 DEBUG nova.policy [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '213e5693e94f44e7950e3dfbca04228a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '93dd4902ce324862a38006da8e06503a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.535 2 DEBUG nova.compute.manager [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.537 2 DEBUG nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.538 2 INFO nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Creating image(s)
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.575 2 DEBUG nova.storage.rbd_utils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 937f10b2-422f-42bc-b13e-5995310b461c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.614 2 DEBUG nova.storage.rbd_utils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 937f10b2-422f-42bc-b13e-5995310b461c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.652 2 DEBUG nova.storage.rbd_utils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 937f10b2-422f-42bc-b13e-5995310b461c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.658 2 DEBUG oslo_concurrency.processutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.721 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.722 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.755 2 DEBUG oslo_concurrency.processutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.755 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.756 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.756 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.779 2 DEBUG nova.storage.rbd_utils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 937f10b2-422f-42bc-b13e-5995310b461c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.782 2 DEBUG oslo_concurrency.processutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 937f10b2-422f-42bc-b13e-5995310b461c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:57:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3272666615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.833 2 DEBUG nova.compute.manager [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.838 2 DEBUG oslo_concurrency.processutils [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.843 2 DEBUG nova.compute.provider_tree [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.860 2 DEBUG nova.scheduler.client.report [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.902 2 DEBUG oslo_concurrency.lockutils [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.919 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.919 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.926 2 DEBUG nova.virt.hardware [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.926 2 INFO nova.compute.claims [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:57:46 compute-0 nova_compute[260935]: 2025-10-11 08:57:46.974 2 DEBUG oslo_concurrency.lockutils [None req-8d0bab69-a07b-442e-bea1-1078b3bbd2b0 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 15.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.088 2 DEBUG oslo_concurrency.processutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 937f10b2-422f-42bc-b13e-5995310b461c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:47 compute-0 ceph-mon[74313]: pgmap v1633: 321 pgs: 321 active+clean; 484 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 832 B/s wr, 25 op/s
Oct 11 08:57:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3272666615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.190 2 DEBUG oslo_concurrency.processutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.255 2 DEBUG nova.storage.rbd_utils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] resizing rbd image 937f10b2-422f-42bc-b13e-5995310b461c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.364 2 DEBUG nova.objects.instance [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'migration_context' on Instance uuid 937f10b2-422f-42bc-b13e-5995310b461c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.383 2 DEBUG nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.383 2 DEBUG nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Ensure instance console log exists: /var/lib/nova/instances/937f10b2-422f-42bc-b13e-5995310b461c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.384 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.385 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.385 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.557 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173052.5561845, 80799b12-9add-4561-a8eb-f1cf3c1f93a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.558 2 INFO nova.compute.manager [-] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] VM Stopped (Lifecycle Event)
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.579 2 DEBUG nova.compute.manager [None req-927d6752-aaf6-48f9-9522-c56f17ca0537 - - - - - -] [instance: 80799b12-9add-4561-a8eb-f1cf3c1f93a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.662 2 DEBUG nova.network.neutron [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Successfully created port: 228ffaff-8879-4872-9923-a5a8270cb291 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:57:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:57:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1475470789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.699 2 DEBUG oslo_concurrency.processutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.708 2 DEBUG nova.compute.provider_tree [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.726 2 DEBUG nova.scheduler.client.report [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.751 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.752 2 DEBUG nova.compute.manager [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.793 2 DEBUG nova.compute.manager [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.793 2 DEBUG nova.network.neutron [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.813 2 INFO nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.832 2 DEBUG nova.compute.manager [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.926 2 DEBUG nova.compute.manager [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.929 2 DEBUG nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.929 2 INFO nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Creating image(s)
Oct 11 08:57:47 compute-0 nova_compute[260935]: 2025-10-11 08:57:47.963 2 DEBUG nova.storage.rbd_utils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.003 2 DEBUG nova.storage.rbd_utils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.041 2 DEBUG nova.storage.rbd_utils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.048 2 DEBUG oslo_concurrency.processutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1634: 321 pgs: 321 active+clean; 451 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct 11 08:57:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1475470789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.142 2 DEBUG oslo_concurrency.processutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.144 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.145 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.146 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.184 2 DEBUG nova.storage.rbd_utils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.190 2 DEBUG oslo_concurrency.processutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.237 2 DEBUG nova.policy [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee0e5fedb9fc464eb2a9ac362f5e0749', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.243 2 DEBUG nova.network.neutron [req-92366f17-1061-4b7c-a405-a6cce0d9ba31 req-22117d87-2840-488f-9ac0-765a71537489 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updated VIF entry in instance network info cache for port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.244 2 DEBUG nova.network.neutron [req-92366f17-1061-4b7c-a405-a6cce0d9ba31 req-22117d87-2840-488f-9ac0-765a71537489 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updating instance_info_cache with network_info: [{"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": null, "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.273 2 DEBUG oslo_concurrency.lockutils [req-92366f17-1061-4b7c-a405-a6cce0d9ba31 req-22117d87-2840-488f-9ac0-765a71537489 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:57:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Oct 11 08:57:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Oct 11 08:57:48 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.479 2 DEBUG oslo_concurrency.processutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.544 2 DEBUG nova.storage.rbd_utils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] resizing rbd image 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.578 2 DEBUG nova.objects.instance [None req-0c3afd71-0671-4219-8cec-79d362d8eeca bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lazy-loading 'flavor' on Instance uuid 075ec27a-70a2-49f6-a097-5738f3407605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.641 2 DEBUG oslo_concurrency.lockutils [None req-0c3afd71-0671-4219-8cec-79d362d8eeca bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Acquiring lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.647 2 DEBUG nova.objects.instance [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'migration_context' on Instance uuid 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.665 2 DEBUG nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.665 2 DEBUG nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Ensure instance console log exists: /var/lib/nova/instances/8f609d1a-7eaa-41dc-9f3b-9acf7abe4239/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.666 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.666 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:48 compute-0 nova_compute[260935]: 2025-10-11 08:57:48.666 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:49 compute-0 nova_compute[260935]: 2025-10-11 08:57:49.183 2 DEBUG nova.network.neutron [req-3a36cc4b-0c4d-4b4d-8942-e895c3d79809 req-aa8cef49-1b53-442d-8094-2f96c64f6125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Updated VIF entry in instance network info cache for port 10f3051f-8897-4e57-89f1-4a03aac94192. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:57:49 compute-0 nova_compute[260935]: 2025-10-11 08:57:49.185 2 DEBUG nova.network.neutron [req-3a36cc4b-0c4d-4b4d-8942-e895c3d79809 req-aa8cef49-1b53-442d-8094-2f96c64f6125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Updating instance_info_cache with network_info: [{"id": "10f3051f-8897-4e57-89f1-4a03aac94192", "address": "fa:16:3e:15:d9:b2", "network": {"id": "36e695f7-a84c-4eed-8d21-d70ce3f1f736", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-876022911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b0903df38254068b8566040cf90343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f3051f-88", "ovs_interfaceid": "10f3051f-8897-4e57-89f1-4a03aac94192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:49 compute-0 nova_compute[260935]: 2025-10-11 08:57:49.212 2 DEBUG oslo_concurrency.lockutils [req-3a36cc4b-0c4d-4b4d-8942-e895c3d79809 req-aa8cef49-1b53-442d-8094-2f96c64f6125 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:49 compute-0 nova_compute[260935]: 2025-10-11 08:57:49.214 2 DEBUG oslo_concurrency.lockutils [None req-0c3afd71-0671-4219-8cec-79d362d8eeca bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Acquired lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:49 compute-0 nova_compute[260935]: 2025-10-11 08:57:49.327 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173054.326404, 98d8ebd6-0917-49cf-8efc-a245486424bc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:49 compute-0 nova_compute[260935]: 2025-10-11 08:57:49.328 2 INFO nova.compute.manager [-] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] VM Stopped (Lifecycle Event)
Oct 11 08:57:49 compute-0 nova_compute[260935]: 2025-10-11 08:57:49.353 2 DEBUG nova.compute.manager [None req-1e1bbfaf-f45c-4849-8dcd-a7fa49302762 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:49 compute-0 ceph-mon[74313]: pgmap v1634: 321 pgs: 321 active+clean; 451 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct 11 08:57:49 compute-0 ceph-mon[74313]: osdmap e238: 3 total, 3 up, 3 in
Oct 11 08:57:49 compute-0 nova_compute[260935]: 2025-10-11 08:57:49.433 2 DEBUG nova.network.neutron [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Successfully updated port: 228ffaff-8879-4872-9923-a5a8270cb291 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:57:49 compute-0 nova_compute[260935]: 2025-10-11 08:57:49.458 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "refresh_cache-937f10b2-422f-42bc-b13e-5995310b461c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:49 compute-0 nova_compute[260935]: 2025-10-11 08:57:49.458 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquired lock "refresh_cache-937f10b2-422f-42bc-b13e-5995310b461c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:49 compute-0 nova_compute[260935]: 2025-10-11 08:57:49.458 2 DEBUG nova.network.neutron [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:57:49 compute-0 nova_compute[260935]: 2025-10-11 08:57:49.561 2 DEBUG nova.compute.manager [req-ad48c44c-d982-4e6c-9ecb-80aa3f71eb10 req-e518f61e-1836-4328-9ac0-3aac55eea9f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Received event network-changed-228ffaff-8879-4872-9923-a5a8270cb291 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:49 compute-0 nova_compute[260935]: 2025-10-11 08:57:49.562 2 DEBUG nova.compute.manager [req-ad48c44c-d982-4e6c-9ecb-80aa3f71eb10 req-e518f61e-1836-4328-9ac0-3aac55eea9f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Refreshing instance network info cache due to event network-changed-228ffaff-8879-4872-9923-a5a8270cb291. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:57:49 compute-0 nova_compute[260935]: 2025-10-11 08:57:49.563 2 DEBUG oslo_concurrency.lockutils [req-ad48c44c-d982-4e6c-9ecb-80aa3f71eb10 req-e518f61e-1836-4328-9ac0-3aac55eea9f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-937f10b2-422f-42bc-b13e-5995310b461c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:49 compute-0 nova_compute[260935]: 2025-10-11 08:57:49.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1636: 321 pgs: 321 active+clean; 451 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct 11 08:57:50 compute-0 nova_compute[260935]: 2025-10-11 08:57:50.132 2 DEBUG nova.network.neutron [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:57:50 compute-0 nova_compute[260935]: 2025-10-11 08:57:50.173 2 DEBUG nova.network.neutron [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Successfully created port: 5fbde773-dc50-43d9-ae02-a499b8096537 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:57:50 compute-0 nova_compute[260935]: 2025-10-11 08:57:50.800 2 DEBUG nova.network.neutron [None req-0c3afd71-0671-4219-8cec-79d362d8eeca bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.313 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173056.311751, a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.313 2 INFO nova.compute.manager [-] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] VM Stopped (Lifecycle Event)
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.339 2 DEBUG nova.compute.manager [None req-62537eab-1956-4907-886f-74b375aa0830 - - - - - -] [instance: a4114e5c-7c87-48a4-b73b-01ec4a9dd9b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:51 compute-0 ceph-mon[74313]: pgmap v1636: 321 pgs: 321 active+clean; 451 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.484 2 DEBUG nova.network.neutron [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Updating instance_info_cache with network_info: [{"id": "228ffaff-8879-4872-9923-a5a8270cb291", "address": "fa:16:3e:0a:d9:4d", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228ffaff-88", "ovs_interfaceid": "228ffaff-8879-4872-9923-a5a8270cb291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.503 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Releasing lock "refresh_cache-937f10b2-422f-42bc-b13e-5995310b461c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.504 2 DEBUG nova.compute.manager [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Instance network_info: |[{"id": "228ffaff-8879-4872-9923-a5a8270cb291", "address": "fa:16:3e:0a:d9:4d", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228ffaff-88", "ovs_interfaceid": "228ffaff-8879-4872-9923-a5a8270cb291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.505 2 DEBUG oslo_concurrency.lockutils [req-ad48c44c-d982-4e6c-9ecb-80aa3f71eb10 req-e518f61e-1836-4328-9ac0-3aac55eea9f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-937f10b2-422f-42bc-b13e-5995310b461c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.505 2 DEBUG nova.network.neutron [req-ad48c44c-d982-4e6c-9ecb-80aa3f71eb10 req-e518f61e-1836-4328-9ac0-3aac55eea9f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Refreshing network info cache for port 228ffaff-8879-4872-9923-a5a8270cb291 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.509 2 DEBUG nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Start _get_guest_xml network_info=[{"id": "228ffaff-8879-4872-9923-a5a8270cb291", "address": "fa:16:3e:0a:d9:4d", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228ffaff-88", "ovs_interfaceid": "228ffaff-8879-4872-9923-a5a8270cb291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.516 2 WARNING nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.523 2 DEBUG nova.virt.libvirt.host [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.524 2 DEBUG nova.virt.libvirt.host [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.535 2 DEBUG nova.virt.libvirt.host [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.535 2 DEBUG nova.virt.libvirt.host [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.536 2 DEBUG nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.536 2 DEBUG nova.virt.hardware [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.537 2 DEBUG nova.virt.hardware [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.538 2 DEBUG nova.virt.hardware [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.538 2 DEBUG nova.virt.hardware [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.538 2 DEBUG nova.virt.hardware [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.539 2 DEBUG nova.virt.hardware [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.539 2 DEBUG nova.virt.hardware [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.539 2 DEBUG nova.virt.hardware [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.540 2 DEBUG nova.virt.hardware [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.540 2 DEBUG nova.virt.hardware [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.540 2 DEBUG nova.virt.hardware [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.546 2 DEBUG oslo_concurrency.processutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.607 2 DEBUG nova.compute.manager [req-a6a6c420-d010-4983-bf31-91d966314f4b req-bcab1330-0e44-42ad-8114-b9123deedc1a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Received event network-changed-10f3051f-8897-4e57-89f1-4a03aac94192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.608 2 DEBUG nova.compute.manager [req-a6a6c420-d010-4983-bf31-91d966314f4b req-bcab1330-0e44-42ad-8114-b9123deedc1a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Refreshing instance network info cache due to event network-changed-10f3051f-8897-4e57-89f1-4a03aac94192. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:57:51 compute-0 nova_compute[260935]: 2025-10-11 08:57:51.608 2 DEBUG oslo_concurrency.lockutils [req-a6a6c420-d010-4983-bf31-91d966314f4b req-bcab1330-0e44-42ad-8114-b9123deedc1a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:57:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2743044679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.060 2 DEBUG oslo_concurrency.processutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1637: 321 pgs: 321 active+clean; 451 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.100 2 DEBUG nova.storage.rbd_utils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 937f10b2-422f-42bc-b13e-5995310b461c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.106 2 DEBUG oslo_concurrency.processutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.237 2 DEBUG nova.network.neutron [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Successfully updated port: 5fbde773-dc50-43d9-ae02-a499b8096537 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.260 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "refresh_cache-8f609d1a-7eaa-41dc-9f3b-9acf7abe4239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.260 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquired lock "refresh_cache-8f609d1a-7eaa-41dc-9f3b-9acf7abe4239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.260 2 DEBUG nova.network.neutron [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.417 2 DEBUG nova.network.neutron [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:57:52 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2743044679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:57:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3328688018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.560 2 DEBUG oslo_concurrency.processutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.564 2 DEBUG nova.virt.libvirt.vif [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:57:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2072321326',display_name='tempest-DeleteServersTestJSON-server-2072321326',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2072321326',id=74,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-q6sypizy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:57:46Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=937f10b2-422f-42bc-b13e-5995310b461c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "228ffaff-8879-4872-9923-a5a8270cb291", "address": "fa:16:3e:0a:d9:4d", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228ffaff-88", "ovs_interfaceid": "228ffaff-8879-4872-9923-a5a8270cb291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.565 2 DEBUG nova.network.os_vif_util [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "228ffaff-8879-4872-9923-a5a8270cb291", "address": "fa:16:3e:0a:d9:4d", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228ffaff-88", "ovs_interfaceid": "228ffaff-8879-4872-9923-a5a8270cb291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.568 2 DEBUG nova.network.os_vif_util [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:d9:4d,bridge_name='br-int',has_traffic_filtering=True,id=228ffaff-8879-4872-9923-a5a8270cb291,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap228ffaff-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.570 2 DEBUG nova.objects.instance [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'pci_devices' on Instance uuid 937f10b2-422f-42bc-b13e-5995310b461c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.600 2 DEBUG nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:57:52 compute-0 nova_compute[260935]:   <uuid>937f10b2-422f-42bc-b13e-5995310b461c</uuid>
Oct 11 08:57:52 compute-0 nova_compute[260935]:   <name>instance-0000004a</name>
Oct 11 08:57:52 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:57:52 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:57:52 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <nova:name>tempest-DeleteServersTestJSON-server-2072321326</nova:name>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:57:51</nova:creationTime>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:57:52 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:57:52 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:57:52 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:57:52 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:57:52 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:57:52 compute-0 nova_compute[260935]:         <nova:user uuid="213e5693e94f44e7950e3dfbca04228a">tempest-DeleteServersTestJSON-1019340677-project-member</nova:user>
Oct 11 08:57:52 compute-0 nova_compute[260935]:         <nova:project uuid="93dd4902ce324862a38006da8e06503a">tempest-DeleteServersTestJSON-1019340677</nova:project>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:57:52 compute-0 nova_compute[260935]:         <nova:port uuid="228ffaff-8879-4872-9923-a5a8270cb291">
Oct 11 08:57:52 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:57:52 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:57:52 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <system>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <entry name="serial">937f10b2-422f-42bc-b13e-5995310b461c</entry>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <entry name="uuid">937f10b2-422f-42bc-b13e-5995310b461c</entry>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     </system>
Oct 11 08:57:52 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:57:52 compute-0 nova_compute[260935]:   <os>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:   </os>
Oct 11 08:57:52 compute-0 nova_compute[260935]:   <features>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:   </features>
Oct 11 08:57:52 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:57:52 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:57:52 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/937f10b2-422f-42bc-b13e-5995310b461c_disk">
Oct 11 08:57:52 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       </source>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:57:52 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/937f10b2-422f-42bc-b13e-5995310b461c_disk.config">
Oct 11 08:57:52 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       </source>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:57:52 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:0a:d9:4d"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <target dev="tap228ffaff-88"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/937f10b2-422f-42bc-b13e-5995310b461c/console.log" append="off"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <video>
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     </video>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:57:52 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:57:52 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:57:52 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:57:52 compute-0 nova_compute[260935]: </domain>
Oct 11 08:57:52 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.602 2 DEBUG nova.compute.manager [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Preparing to wait for external event network-vif-plugged-228ffaff-8879-4872-9923-a5a8270cb291 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.603 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "937f10b2-422f-42bc-b13e-5995310b461c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.603 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "937f10b2-422f-42bc-b13e-5995310b461c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.604 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "937f10b2-422f-42bc-b13e-5995310b461c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.605 2 DEBUG nova.virt.libvirt.vif [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:57:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2072321326',display_name='tempest-DeleteServersTestJSON-server-2072321326',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2072321326',id=74,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-q6sypizy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:57:46Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=937f10b2-422f-42bc-b13e-5995310b461c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "228ffaff-8879-4872-9923-a5a8270cb291", "address": "fa:16:3e:0a:d9:4d", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228ffaff-88", "ovs_interfaceid": "228ffaff-8879-4872-9923-a5a8270cb291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.606 2 DEBUG nova.network.os_vif_util [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "228ffaff-8879-4872-9923-a5a8270cb291", "address": "fa:16:3e:0a:d9:4d", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228ffaff-88", "ovs_interfaceid": "228ffaff-8879-4872-9923-a5a8270cb291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.607 2 DEBUG nova.network.os_vif_util [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:d9:4d,bridge_name='br-int',has_traffic_filtering=True,id=228ffaff-8879-4872-9923-a5a8270cb291,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap228ffaff-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.608 2 DEBUG os_vif [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:d9:4d,bridge_name='br-int',has_traffic_filtering=True,id=228ffaff-8879-4872-9923-a5a8270cb291,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap228ffaff-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.610 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.611 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.615 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap228ffaff-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap228ffaff-88, col_values=(('external_ids', {'iface-id': '228ffaff-8879-4872-9923-a5a8270cb291', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0a:d9:4d', 'vm-uuid': '937f10b2-422f-42bc-b13e-5995310b461c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:52 compute-0 NetworkManager[44960]: <info>  [1760173072.6203] manager: (tap228ffaff-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.629 2 INFO os_vif [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:d9:4d,bridge_name='br-int',has_traffic_filtering=True,id=228ffaff-8879-4872-9923-a5a8270cb291,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap228ffaff-88')
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.705 2 DEBUG nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.706 2 DEBUG nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.707 2 DEBUG nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] No VIF found with MAC fa:16:3e:0a:d9:4d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.707 2 INFO nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Using config drive
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.734 2 DEBUG nova.storage.rbd_utils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 937f10b2-422f-42bc-b13e-5995310b461c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.987 2 DEBUG nova.network.neutron [None req-0c3afd71-0671-4219-8cec-79d362d8eeca bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Updating instance_info_cache with network_info: [{"id": "10f3051f-8897-4e57-89f1-4a03aac94192", "address": "fa:16:3e:15:d9:b2", "network": {"id": "36e695f7-a84c-4eed-8d21-d70ce3f1f736", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-876022911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b0903df38254068b8566040cf90343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f3051f-88", "ovs_interfaceid": "10f3051f-8897-4e57-89f1-4a03aac94192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.990 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.991 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:52 compute-0 nova_compute[260935]: 2025-10-11 08:57:52.991 2 INFO nova.compute.manager [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Unshelving
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.019 2 DEBUG oslo_concurrency.lockutils [None req-0c3afd71-0671-4219-8cec-79d362d8eeca bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Releasing lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.019 2 DEBUG nova.compute.manager [None req-0c3afd71-0671-4219-8cec-79d362d8eeca bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.019 2 DEBUG nova.compute.manager [None req-0c3afd71-0671-4219-8cec-79d362d8eeca bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] network_info to inject: |[{"id": "10f3051f-8897-4e57-89f1-4a03aac94192", "address": "fa:16:3e:15:d9:b2", "network": {"id": "36e695f7-a84c-4eed-8d21-d70ce3f1f736", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-876022911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b0903df38254068b8566040cf90343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f3051f-88", "ovs_interfaceid": "10f3051f-8897-4e57-89f1-4a03aac94192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.022 2 DEBUG oslo_concurrency.lockutils [req-a6a6c420-d010-4983-bf31-91d966314f4b req-bcab1330-0e44-42ad-8114-b9123deedc1a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.022 2 DEBUG nova.network.neutron [req-a6a6c420-d010-4983-bf31-91d966314f4b req-bcab1330-0e44-42ad-8114-b9123deedc1a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Refreshing network info cache for port 10f3051f-8897-4e57-89f1-4a03aac94192 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.101 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.101 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.110 2 DEBUG nova.objects.instance [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'pci_requests' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.119 2 DEBUG nova.network.neutron [req-ad48c44c-d982-4e6c-9ecb-80aa3f71eb10 req-e518f61e-1836-4328-9ac0-3aac55eea9f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Updated VIF entry in instance network info cache for port 228ffaff-8879-4872-9923-a5a8270cb291. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.119 2 DEBUG nova.network.neutron [req-ad48c44c-d982-4e6c-9ecb-80aa3f71eb10 req-e518f61e-1836-4328-9ac0-3aac55eea9f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Updating instance_info_cache with network_info: [{"id": "228ffaff-8879-4872-9923-a5a8270cb291", "address": "fa:16:3e:0a:d9:4d", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228ffaff-88", "ovs_interfaceid": "228ffaff-8879-4872-9923-a5a8270cb291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.131 2 DEBUG nova.objects.instance [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'numa_topology' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.137 2 DEBUG oslo_concurrency.lockutils [req-ad48c44c-d982-4e6c-9ecb-80aa3f71eb10 req-e518f61e-1836-4328-9ac0-3aac55eea9f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-937f10b2-422f-42bc-b13e-5995310b461c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.155 2 DEBUG nova.virt.hardware [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.155 2 INFO nova.compute.claims [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.197 2 INFO nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Creating config drive at /var/lib/nova/instances/937f10b2-422f-42bc-b13e-5995310b461c/disk.config
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.204 2 DEBUG oslo_concurrency.processutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/937f10b2-422f-42bc-b13e-5995310b461c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp359_2suv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.365 2 DEBUG oslo_concurrency.processutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/937f10b2-422f-42bc-b13e-5995310b461c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp359_2suv" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.396 2 DEBUG nova.storage.rbd_utils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] rbd image 937f10b2-422f-42bc-b13e-5995310b461c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.401 2 DEBUG oslo_concurrency.processutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/937f10b2-422f-42bc-b13e-5995310b461c/disk.config 937f10b2-422f-42bc-b13e-5995310b461c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:57:53 compute-0 ceph-mon[74313]: pgmap v1637: 321 pgs: 321 active+clean; 451 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct 11 08:57:53 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3328688018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.463 2 DEBUG nova.network.neutron [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Updating instance_info_cache with network_info: [{"id": "5fbde773-dc50-43d9-ae02-a499b8096537", "address": "fa:16:3e:e5:dc:a1", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fbde773-dc", "ovs_interfaceid": "5fbde773-dc50-43d9-ae02-a499b8096537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.503 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Releasing lock "refresh_cache-8f609d1a-7eaa-41dc-9f3b-9acf7abe4239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.504 2 DEBUG nova.compute.manager [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Instance network_info: |[{"id": "5fbde773-dc50-43d9-ae02-a499b8096537", "address": "fa:16:3e:e5:dc:a1", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fbde773-dc", "ovs_interfaceid": "5fbde773-dc50-43d9-ae02-a499b8096537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.507 2 DEBUG nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Start _get_guest_xml network_info=[{"id": "5fbde773-dc50-43d9-ae02-a499b8096537", "address": "fa:16:3e:e5:dc:a1", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fbde773-dc", "ovs_interfaceid": "5fbde773-dc50-43d9-ae02-a499b8096537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.514 2 WARNING nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.520 2 DEBUG nova.virt.libvirt.host [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.521 2 DEBUG nova.virt.libvirt.host [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.529 2 DEBUG nova.virt.libvirt.host [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.529 2 DEBUG nova.virt.libvirt.host [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.530 2 DEBUG nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.530 2 DEBUG nova.virt.hardware [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.531 2 DEBUG nova.virt.hardware [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.531 2 DEBUG nova.virt.hardware [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.531 2 DEBUG nova.virt.hardware [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.531 2 DEBUG nova.virt.hardware [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.531 2 DEBUG nova.virt.hardware [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.532 2 DEBUG nova.virt.hardware [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.532 2 DEBUG nova.virt.hardware [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.532 2 DEBUG nova.virt.hardware [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.532 2 DEBUG nova.virt.hardware [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.533 2 DEBUG nova.virt.hardware [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.537 2 DEBUG oslo_concurrency.processutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.607 2 DEBUG oslo_concurrency.processutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.666 2 DEBUG oslo_concurrency.processutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/937f10b2-422f-42bc-b13e-5995310b461c/disk.config 937f10b2-422f-42bc-b13e-5995310b461c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.667 2 INFO nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Deleting local config drive /var/lib/nova/instances/937f10b2-422f-42bc-b13e-5995310b461c/disk.config because it was imported into RBD.
Oct 11 08:57:53 compute-0 kernel: tap228ffaff-88: entered promiscuous mode
Oct 11 08:57:53 compute-0 NetworkManager[44960]: <info>  [1760173073.7303] manager: (tap228ffaff-88): new Tun device (/org/freedesktop/NetworkManager/Devices/290)
Oct 11 08:57:53 compute-0 ovn_controller[152945]: 2025-10-11T08:57:53Z|00639|binding|INFO|Claiming lport 228ffaff-8879-4872-9923-a5a8270cb291 for this chassis.
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:53 compute-0 ovn_controller[152945]: 2025-10-11T08:57:53Z|00640|binding|INFO|228ffaff-8879-4872-9923-a5a8270cb291: Claiming fa:16:3e:0a:d9:4d 10.100.0.9
Oct 11 08:57:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:53.744 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:d9:4d 10.100.0.9'], port_security=['fa:16:3e:0a:d9:4d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '937f10b2-422f-42bc-b13e-5995310b461c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93dd4902ce324862a38006da8e06503a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b773900e-3df7-4cb6-b9b0-3d240ff499b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69794a2f-48ab-4a0d-8725-f4a7f57172dd, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=228ffaff-8879-4872-9923-a5a8270cb291) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:57:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:53.747 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 228ffaff-8879-4872-9923-a5a8270cb291 in datapath 2cb96d57-a5e9-4b38-b10e-68187a5bf82f bound to our chassis
Oct 11 08:57:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:53.750 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 08:57:53 compute-0 systemd-udevd[333872]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:57:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:53.769 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f89a97f3-fcfb-48b3-9ad5-b2ccfaf066df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:53.771 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2cb96d57-a1 in ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:57:53 compute-0 systemd-machined[215705]: New machine qemu-82-instance-0000004a.
Oct 11 08:57:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:53.773 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2cb96d57-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:57:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:53.773 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[37631047-626b-4e59-8c01-23f5d8696126]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:53.775 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1a33cfdc-fdc9-43a6-9073-64aad641d30f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:53 compute-0 ovn_controller[152945]: 2025-10-11T08:57:53Z|00641|binding|INFO|Setting lport 228ffaff-8879-4872-9923-a5a8270cb291 ovn-installed in OVS
Oct 11 08:57:53 compute-0 ovn_controller[152945]: 2025-10-11T08:57:53Z|00642|binding|INFO|Setting lport 228ffaff-8879-4872-9923-a5a8270cb291 up in Southbound
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:53 compute-0 systemd[1]: Started Virtual Machine qemu-82-instance-0000004a.
Oct 11 08:57:53 compute-0 NetworkManager[44960]: <info>  [1760173073.7928] device (tap228ffaff-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:57:53 compute-0 NetworkManager[44960]: <info>  [1760173073.7941] device (tap228ffaff-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:53.801 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[15757a7a-528c-4175-97b5-0993403aee9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:53.828 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a8db069b-2247-460f-a8dd-061ba988d502]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:53.872 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a3a959-8abf-4298-b7fc-26112c59c0b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.875 2 DEBUG nova.compute.manager [req-d3a77f99-6391-4aa4-9702-c29728dff171 req-40c6854e-a053-4ff5-a4d5-0a4725bc2fa9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Received event network-changed-5fbde773-dc50-43d9-ae02-a499b8096537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.875 2 DEBUG nova.compute.manager [req-d3a77f99-6391-4aa4-9702-c29728dff171 req-40c6854e-a053-4ff5-a4d5-0a4725bc2fa9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Refreshing instance network info cache due to event network-changed-5fbde773-dc50-43d9-ae02-a499b8096537. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.875 2 DEBUG oslo_concurrency.lockutils [req-d3a77f99-6391-4aa4-9702-c29728dff171 req-40c6854e-a053-4ff5-a4d5-0a4725bc2fa9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8f609d1a-7eaa-41dc-9f3b-9acf7abe4239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.875 2 DEBUG oslo_concurrency.lockutils [req-d3a77f99-6391-4aa4-9702-c29728dff171 req-40c6854e-a053-4ff5-a4d5-0a4725bc2fa9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8f609d1a-7eaa-41dc-9f3b-9acf7abe4239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:53 compute-0 nova_compute[260935]: 2025-10-11 08:57:53.876 2 DEBUG nova.network.neutron [req-d3a77f99-6391-4aa4-9702-c29728dff171 req-40c6854e-a053-4ff5-a4d5-0a4725bc2fa9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Refreshing network info cache for port 5fbde773-dc50-43d9-ae02-a499b8096537 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:57:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:53.879 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d15e3220-efbf-4c2b-90c2-fb1755ea4610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:53 compute-0 NetworkManager[44960]: <info>  [1760173073.8806] manager: (tap2cb96d57-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/291)
Oct 11 08:57:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:53.922 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[16ab23d1-d206-4aa8-9ce6-77ecbcaf05c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:53.928 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[833a178d-0767-4686-b603-7320fbaf07bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:53 compute-0 NetworkManager[44960]: <info>  [1760173073.9577] device (tap2cb96d57-a0): carrier: link connected
Oct 11 08:57:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:53.967 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4d1417-6f1e-4a47-bfbb-ff00f9b0cc6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:53.988 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf8d6d5-ce22-4c9c-8361-0a185dfb1fb4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2cb96d57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c9:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491866, 'reachable_time': 33620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333932, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:54.009 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0867bdc7-8b84-46c9-99e9-e0185b9f444d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:c9b5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491866, 'tstamp': 491866}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333933, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:54.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d212d9c4-d810-45bd-950a-c0fcc30028d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2cb96d57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c9:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491866, 'reachable_time': 33620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333934, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1638: 321 pgs: 321 active+clean; 497 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 4.3 MiB/s wr, 98 op/s
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:54.080 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f12fac56-ad8b-435a-83f6-bd71b4f963a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:57:54 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3634568175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:57:54 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1337655550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.161 2 DEBUG oslo_concurrency.processutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:54.161 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[da8fd2fc-084f-4bb6-96c2-64297cbbc7de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.162 2 DEBUG oslo_concurrency.processutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:54.162 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb96d57-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:54.163 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:54.163 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2cb96d57-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:54 compute-0 NetworkManager[44960]: <info>  [1760173074.1656] manager: (tap2cb96d57-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/292)
Oct 11 08:57:54 compute-0 kernel: tap2cb96d57-a0: entered promiscuous mode
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:54.168 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2cb96d57-a0, col_values=(('external_ids', {'iface-id': 'a11e0a08-d1ab-4bff-901b-632484cc0e21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:54 compute-0 ovn_controller[152945]: 2025-10-11T08:57:54Z|00643|binding|INFO|Releasing lport a11e0a08-d1ab-4bff-901b-632484cc0e21 from this chassis (sb_readonly=0)
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.184 2 DEBUG nova.storage.rbd_utils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.188 2 DEBUG oslo_concurrency.processutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:54.197 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:54.198 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[55970941-ee0c-4a78-a43e-aa6bab110097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:54.199 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.pid.haproxy
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 2cb96d57-a5e9-4b38-b10e-68187a5bf82f
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:54.200 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'env', 'PROCESS_TAG=haproxy-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2cb96d57-a5e9-4b38-b10e-68187a5bf82f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.236 2 DEBUG nova.compute.provider_tree [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.250 2 DEBUG nova.scheduler.client.report [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.273 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.291 2 DEBUG oslo_concurrency.lockutils [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Acquiring lock "075ec27a-70a2-49f6-a097-5738f3407605" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.292 2 DEBUG oslo_concurrency.lockutils [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lock "075ec27a-70a2-49f6-a097-5738f3407605" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.292 2 DEBUG oslo_concurrency.lockutils [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Acquiring lock "075ec27a-70a2-49f6-a097-5738f3407605-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.292 2 DEBUG oslo_concurrency.lockutils [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lock "075ec27a-70a2-49f6-a097-5738f3407605-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.292 2 DEBUG oslo_concurrency.lockutils [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lock "075ec27a-70a2-49f6-a097-5738f3407605-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.294 2 INFO nova.compute.manager [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Terminating instance
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.295 2 DEBUG nova.compute.manager [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:57:54 compute-0 kernel: tap10f3051f-88 (unregistering): left promiscuous mode
Oct 11 08:57:54 compute-0 NetworkManager[44960]: <info>  [1760173074.3484] device (tap10f3051f-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:57:54 compute-0 ovn_controller[152945]: 2025-10-11T08:57:54Z|00644|binding|INFO|Releasing lport 10f3051f-8897-4e57-89f1-4a03aac94192 from this chassis (sb_readonly=0)
Oct 11 08:57:54 compute-0 ovn_controller[152945]: 2025-10-11T08:57:54Z|00645|binding|INFO|Setting lport 10f3051f-8897-4e57-89f1-4a03aac94192 down in Southbound
Oct 11 08:57:54 compute-0 ovn_controller[152945]: 2025-10-11T08:57:54Z|00646|binding|INFO|Removing iface tap10f3051f-88 ovn-installed in OVS
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.374 2 DEBUG nova.compute.manager [req-aa7d9681-2cf1-49b0-858f-699434f31e19 req-0cf9bfb5-0c1a-41f1-8def-31e3c195c951 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Received event network-vif-plugged-228ffaff-8879-4872-9923-a5a8270cb291 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.374 2 DEBUG oslo_concurrency.lockutils [req-aa7d9681-2cf1-49b0-858f-699434f31e19 req-0cf9bfb5-0c1a-41f1-8def-31e3c195c951 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "937f10b2-422f-42bc-b13e-5995310b461c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.375 2 DEBUG oslo_concurrency.lockutils [req-aa7d9681-2cf1-49b0-858f-699434f31e19 req-0cf9bfb5-0c1a-41f1-8def-31e3c195c951 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "937f10b2-422f-42bc-b13e-5995310b461c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.376 2 DEBUG oslo_concurrency.lockutils [req-aa7d9681-2cf1-49b0-858f-699434f31e19 req-0cf9bfb5-0c1a-41f1-8def-31e3c195c951 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "937f10b2-422f-42bc-b13e-5995310b461c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.376 2 DEBUG nova.compute.manager [req-aa7d9681-2cf1-49b0-858f-699434f31e19 req-0cf9bfb5-0c1a-41f1-8def-31e3c195c951 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Processing event network-vif-plugged-228ffaff-8879-4872-9923-a5a8270cb291 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:54.383 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:d9:b2 10.100.0.14'], port_security=['fa:16:3e:15:d9:b2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '075ec27a-70a2-49f6-a097-5738f3407605', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36e695f7-a84c-4eed-8d21-d70ce3f1f736', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7b0903df38254068b8566040cf90343c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6330b3dc-321d-4ac7-887a-36f567413dc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7b3f72a-340b-411e-a57a-e434c4495732, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=10f3051f-8897-4e57-89f1-4a03aac94192) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:54 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000048.scope: Deactivated successfully.
Oct 11 08:57:54 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000048.scope: Consumed 14.677s CPU time.
Oct 11 08:57:54 compute-0 systemd-machined[215705]: Machine qemu-80-instance-00000048 terminated.
Oct 11 08:57:54 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3634568175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:54 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1337655550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.475 2 INFO nova.network.neutron [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updating port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 11 08:57:54 compute-0 NetworkManager[44960]: <info>  [1760173074.5216] manager: (tap10f3051f-88): new Tun device (/org/freedesktop/NetworkManager/Devices/293)
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.544 2 INFO nova.virt.libvirt.driver [-] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Instance destroyed successfully.
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.545 2 DEBUG nova.objects.instance [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lazy-loading 'resources' on Instance uuid 075ec27a-70a2-49f6-a097-5738f3407605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.562 2 DEBUG nova.virt.libvirt.vif [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:57:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-851720095',display_name='tempest-AttachInterfacesUnderV243Test-server-851720095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-851720095',id=72,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAGxOZfesgB/tCes3ESvhcMptBsCzTvmP/nYl8DHVYghCHuLBja1AiGfG+gOQH9P/HksjxuaZFfe8OX9vWqWGPo1XRvQOf98btMFH9J4OtTO0Jdx6gFgakiqKMlOwZh5hg==',key_name='tempest-keypair-1327467686',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:57:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7b0903df38254068b8566040cf90343c',ramdisk_id='',reservation_id='r-57egyjtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1983245910',owner_user_name='tempest-AttachInterfacesUnderV243Test-1983245910-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:57:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bebcc72ba14b4d1e976076ce9fee6080',uuid=075ec27a-70a2-49f6-a097-5738f3407605,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "10f3051f-8897-4e57-89f1-4a03aac94192", "address": "fa:16:3e:15:d9:b2", "network": {"id": "36e695f7-a84c-4eed-8d21-d70ce3f1f736", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-876022911-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b0903df38254068b8566040cf90343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f3051f-88", "ovs_interfaceid": "10f3051f-8897-4e57-89f1-4a03aac94192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.563 2 DEBUG nova.network.os_vif_util [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Converting VIF {"id": "10f3051f-8897-4e57-89f1-4a03aac94192", "address": "fa:16:3e:15:d9:b2", "network": {"id": "36e695f7-a84c-4eed-8d21-d70ce3f1f736", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-876022911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b0903df38254068b8566040cf90343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f3051f-88", "ovs_interfaceid": "10f3051f-8897-4e57-89f1-4a03aac94192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.565 2 DEBUG nova.network.os_vif_util [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:d9:b2,bridge_name='br-int',has_traffic_filtering=True,id=10f3051f-8897-4e57-89f1-4a03aac94192,network=Network(36e695f7-a84c-4eed-8d21-d70ce3f1f736),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f3051f-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.565 2 DEBUG os_vif [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:d9:b2,bridge_name='br-int',has_traffic_filtering=True,id=10f3051f-8897-4e57-89f1-4a03aac94192,network=Network(36e695f7-a84c-4eed-8d21-d70ce3f1f736),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f3051f-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.569 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10f3051f-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:57:54 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/695193846' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.627 2 INFO os_vif [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:d9:b2,bridge_name='br-int',has_traffic_filtering=True,id=10f3051f-8897-4e57-89f1-4a03aac94192,network=Network(36e695f7-a84c-4eed-8d21-d70ce3f1f736),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f3051f-88')
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.647 2 DEBUG oslo_concurrency.processutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.649 2 DEBUG nova.virt.libvirt.vif [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:57:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-918260000',display_name='tempest-ServersTestJSON-server-918260000',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-918260000',id=75,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-dps7r5jq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:57:47Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=8f609d1a-7eaa-41dc-9f3b-9acf7abe4239,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fbde773-dc50-43d9-ae02-a499b8096537", "address": "fa:16:3e:e5:dc:a1", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fbde773-dc", "ovs_interfaceid": "5fbde773-dc50-43d9-ae02-a499b8096537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.649 2 DEBUG nova.network.os_vif_util [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "5fbde773-dc50-43d9-ae02-a499b8096537", "address": "fa:16:3e:e5:dc:a1", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fbde773-dc", "ovs_interfaceid": "5fbde773-dc50-43d9-ae02-a499b8096537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.650 2 DEBUG nova.network.os_vif_util [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:dc:a1,bridge_name='br-int',has_traffic_filtering=True,id=5fbde773-dc50-43d9-ae02-a499b8096537,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fbde773-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.652 2 DEBUG nova.objects.instance [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.667 2 DEBUG nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:57:54 compute-0 nova_compute[260935]:   <uuid>8f609d1a-7eaa-41dc-9f3b-9acf7abe4239</uuid>
Oct 11 08:57:54 compute-0 nova_compute[260935]:   <name>instance-0000004b</name>
Oct 11 08:57:54 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:57:54 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:57:54 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersTestJSON-server-918260000</nova:name>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:57:53</nova:creationTime>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:57:54 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:57:54 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:57:54 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:57:54 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:57:54 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:57:54 compute-0 nova_compute[260935]:         <nova:user uuid="ee0e5fedb9fc464eb2a9ac362f5e0749">tempest-ServersTestJSON-101172647-project-member</nova:user>
Oct 11 08:57:54 compute-0 nova_compute[260935]:         <nova:project uuid="d9864fda4f8641d8a9c1509c426cc206">tempest-ServersTestJSON-101172647</nova:project>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:57:54 compute-0 nova_compute[260935]:         <nova:port uuid="5fbde773-dc50-43d9-ae02-a499b8096537">
Oct 11 08:57:54 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:57:54 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:57:54 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <system>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <entry name="serial">8f609d1a-7eaa-41dc-9f3b-9acf7abe4239</entry>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <entry name="uuid">8f609d1a-7eaa-41dc-9f3b-9acf7abe4239</entry>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     </system>
Oct 11 08:57:54 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:57:54 compute-0 nova_compute[260935]:   <os>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:   </os>
Oct 11 08:57:54 compute-0 nova_compute[260935]:   <features>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:   </features>
Oct 11 08:57:54 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:57:54 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:57:54 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/8f609d1a-7eaa-41dc-9f3b-9acf7abe4239_disk">
Oct 11 08:57:54 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       </source>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:57:54 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/8f609d1a-7eaa-41dc-9f3b-9acf7abe4239_disk.config">
Oct 11 08:57:54 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       </source>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:57:54 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:e5:dc:a1"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <target dev="tap5fbde773-dc"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/8f609d1a-7eaa-41dc-9f3b-9acf7abe4239/console.log" append="off"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <video>
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     </video>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:57:54 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:57:54 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:57:54 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:57:54 compute-0 nova_compute[260935]: </domain>
Oct 11 08:57:54 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.667 2 DEBUG nova.compute.manager [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Preparing to wait for external event network-vif-plugged-5fbde773-dc50-43d9-ae02-a499b8096537 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.667 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.668 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.668 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.668 2 DEBUG nova.virt.libvirt.vif [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:57:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-918260000',display_name='tempest-ServersTestJSON-server-918260000',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-918260000',id=75,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-dps7r5jq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},ta
gs=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:57:47Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=8f609d1a-7eaa-41dc-9f3b-9acf7abe4239,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fbde773-dc50-43d9-ae02-a499b8096537", "address": "fa:16:3e:e5:dc:a1", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fbde773-dc", "ovs_interfaceid": "5fbde773-dc50-43d9-ae02-a499b8096537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.669 2 DEBUG nova.network.os_vif_util [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "5fbde773-dc50-43d9-ae02-a499b8096537", "address": "fa:16:3e:e5:dc:a1", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fbde773-dc", "ovs_interfaceid": "5fbde773-dc50-43d9-ae02-a499b8096537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.669 2 DEBUG nova.network.os_vif_util [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:dc:a1,bridge_name='br-int',has_traffic_filtering=True,id=5fbde773-dc50-43d9-ae02-a499b8096537,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fbde773-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.670 2 DEBUG os_vif [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:dc:a1,bridge_name='br-int',has_traffic_filtering=True,id=5fbde773-dc50-43d9-ae02-a499b8096537,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fbde773-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.670 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.671 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.673 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fbde773-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.674 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5fbde773-dc, col_values=(('external_ids', {'iface-id': '5fbde773-dc50-43d9-ae02-a499b8096537', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:dc:a1', 'vm-uuid': '8f609d1a-7eaa-41dc-9f3b-9acf7abe4239'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:57:54 compute-0 NetworkManager[44960]: <info>  [1760173074.6775] manager: (tap5fbde773-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.684 2 INFO os_vif [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:dc:a1,bridge_name='br-int',has_traffic_filtering=True,id=5fbde773-dc50-43d9-ae02-a499b8096537,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fbde773-dc')
Oct 11 08:57:54 compute-0 podman[334068]: 2025-10-11 08:57:54.722208979 +0000 UTC m=+0.065796037 container create 15002fe25ea5d8642f3bccc904794cccf67a4a4a965e4e9d4f6c21989230796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.751 2 DEBUG nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.752 2 DEBUG nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.752 2 DEBUG nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] No VIF found with MAC fa:16:3e:e5:dc:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.752 2 INFO nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Using config drive
Oct 11 08:57:54 compute-0 systemd[1]: Started libpod-conmon-15002fe25ea5d8642f3bccc904794cccf67a4a4a965e4e9d4f6c21989230796c.scope.
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.774 2 DEBUG nova.storage.rbd_utils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:54 compute-0 podman[334068]: 2025-10-11 08:57:54.689218398 +0000 UTC m=+0.032805466 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:57:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:57:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:57:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:57:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:57:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30e257640a1d89eff57910701e7f783a8af724620b6f99795860ab48b3b2dcd2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:57:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:57:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.805 2 DEBUG nova.network.neutron [req-a6a6c420-d010-4983-bf31-91d966314f4b req-bcab1330-0e44-42ad-8114-b9123deedc1a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Updated VIF entry in instance network info cache for port 10f3051f-8897-4e57-89f1-4a03aac94192. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.806 2 DEBUG nova.network.neutron [req-a6a6c420-d010-4983-bf31-91d966314f4b req-bcab1330-0e44-42ad-8114-b9123deedc1a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Updating instance_info_cache with network_info: [{"id": "10f3051f-8897-4e57-89f1-4a03aac94192", "address": "fa:16:3e:15:d9:b2", "network": {"id": "36e695f7-a84c-4eed-8d21-d70ce3f1f736", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-876022911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b0903df38254068b8566040cf90343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f3051f-88", "ovs_interfaceid": "10f3051f-8897-4e57-89f1-4a03aac94192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:54 compute-0 podman[334068]: 2025-10-11 08:57:54.814432329 +0000 UTC m=+0.158019417 container init 15002fe25ea5d8642f3bccc904794cccf67a4a4a965e4e9d4f6c21989230796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:57:54 compute-0 podman[334068]: 2025-10-11 08:57:54.827079199 +0000 UTC m=+0.170666257 container start 15002fe25ea5d8642f3bccc904794cccf67a4a4a965e4e9d4f6c21989230796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 08:57:54 compute-0 nova_compute[260935]: 2025-10-11 08:57:54.827 2 DEBUG oslo_concurrency.lockutils [req-a6a6c420-d010-4983-bf31-91d966314f4b req-bcab1330-0e44-42ad-8114-b9123deedc1a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-075ec27a-70a2-49f6-a097-5738f3407605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:57:54
Oct 11 08:57:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:57:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:57:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', '.mgr', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta']
Oct 11 08:57:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:57:54 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[334114]: [NOTICE]   (334121) : New worker (334123) forked
Oct 11 08:57:54 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[334114]: [NOTICE]   (334121) : Loading success.
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:54.893 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 10f3051f-8897-4e57-89f1-4a03aac94192 in datapath 36e695f7-a84c-4eed-8d21-d70ce3f1f736 unbound from our chassis
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:54.896 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 36e695f7-a84c-4eed-8d21-d70ce3f1f736, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:54.897 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[069d214a-162a-4eca-8786-58ba5273929c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:54.898 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736 namespace which is not needed anymore
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.065 2 INFO nova.virt.libvirt.driver [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Deleting instance files /var/lib/nova/instances/075ec27a-70a2-49f6-a097-5738f3407605_del
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.066 2 INFO nova.virt.libvirt.driver [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Deletion of /var/lib/nova/instances/075ec27a-70a2-49f6-a097-5738f3407605_del complete
Oct 11 08:57:55 compute-0 neutron-haproxy-ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736[331269]: [NOTICE]   (331273) : haproxy version is 2.8.14-c23fe91
Oct 11 08:57:55 compute-0 neutron-haproxy-ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736[331269]: [NOTICE]   (331273) : path to executable is /usr/sbin/haproxy
Oct 11 08:57:55 compute-0 neutron-haproxy-ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736[331269]: [WARNING]  (331273) : Exiting Master process...
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.088 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173075.088195, 937f10b2-422f-42bc-b13e-5995310b461c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.089 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] VM Started (Lifecycle Event)
Oct 11 08:57:55 compute-0 neutron-haproxy-ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736[331269]: [ALERT]    (331273) : Current worker (331275) exited with code 143 (Terminated)
Oct 11 08:57:55 compute-0 neutron-haproxy-ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736[331269]: [WARNING]  (331273) : All workers exited. Exiting... (0)
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.092 2 DEBUG nova.compute.manager [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:57:55 compute-0 systemd[1]: libpod-f58ff28e75232a61ed5a6d658aa545c5aafba0f467cca80b788dba39e98beb23.scope: Deactivated successfully.
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.097 2 DEBUG nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:57:55 compute-0 podman[334151]: 2025-10-11 08:57:55.098269552 +0000 UTC m=+0.065062306 container died f58ff28e75232a61ed5a6d658aa545c5aafba0f467cca80b788dba39e98beb23 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.101 2 INFO nova.virt.libvirt.driver [-] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Instance spawned successfully.
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.102 2 DEBUG nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.119 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.124 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:57:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f58ff28e75232a61ed5a6d658aa545c5aafba0f467cca80b788dba39e98beb23-userdata-shm.mount: Deactivated successfully.
Oct 11 08:57:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a77a7b73ae016cf1cbf18145d6b521c02b985ba38cb72f8238983023520ab94-merged.mount: Deactivated successfully.
Oct 11 08:57:55 compute-0 podman[334151]: 2025-10-11 08:57:55.141042562 +0000 UTC m=+0.107835306 container cleanup f58ff28e75232a61ed5a6d658aa545c5aafba0f467cca80b788dba39e98beb23 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.153 2 DEBUG nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.154 2 DEBUG nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.154 2 DEBUG nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.155 2 DEBUG nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.156 2 DEBUG nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.156 2 DEBUG nova.virt.libvirt.driver [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.165 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.165 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173075.0893023, 937f10b2-422f-42bc-b13e-5995310b461c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.165 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] VM Paused (Lifecycle Event)
Oct 11 08:57:55 compute-0 systemd[1]: libpod-conmon-f58ff28e75232a61ed5a6d658aa545c5aafba0f467cca80b788dba39e98beb23.scope: Deactivated successfully.
Oct 11 08:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:57:55 compute-0 podman[334179]: 2025-10-11 08:57:55.240000304 +0000 UTC m=+0.061191236 container remove f58ff28e75232a61ed5a6d658aa545c5aafba0f467cca80b788dba39e98beb23 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 11 08:57:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:55.249 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[25ed3737-3a01-4ad8-a15c-dbfe2d83bbf7]: (4, ('Sat Oct 11 08:57:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736 (f58ff28e75232a61ed5a6d658aa545c5aafba0f467cca80b788dba39e98beb23)\nf58ff28e75232a61ed5a6d658aa545c5aafba0f467cca80b788dba39e98beb23\nSat Oct 11 08:57:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736 (f58ff28e75232a61ed5a6d658aa545c5aafba0f467cca80b788dba39e98beb23)\nf58ff28e75232a61ed5a6d658aa545c5aafba0f467cca80b788dba39e98beb23\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:55.252 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2958999c-ea09-4fe2-8d62-83105e6602a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:55.253 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36e695f7-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:55 compute-0 kernel: tap36e695f7-a0: left promiscuous mode
Oct 11 08:57:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:55.287 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ffdf6b-3473-4ca2-8f98-4f0cb346cac5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:55.319 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b404e861-980b-4565-9d66-fe8f8cfba15a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.322 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.323 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquired lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.323 2 DEBUG nova.network.neutron [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.326 2 DEBUG nova.network.neutron [req-d3a77f99-6391-4aa4-9702-c29728dff171 req-40c6854e-a053-4ff5-a4d5-0a4725bc2fa9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Updated VIF entry in instance network info cache for port 5fbde773-dc50-43d9-ae02-a499b8096537. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.326 2 DEBUG nova.network.neutron [req-d3a77f99-6391-4aa4-9702-c29728dff171 req-40c6854e-a053-4ff5-a4d5-0a4725bc2fa9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Updating instance_info_cache with network_info: [{"id": "5fbde773-dc50-43d9-ae02-a499b8096537", "address": "fa:16:3e:e5:dc:a1", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fbde773-dc", "ovs_interfaceid": "5fbde773-dc50-43d9-ae02-a499b8096537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.328 2 INFO nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Creating config drive at /var/lib/nova/instances/8f609d1a-7eaa-41dc-9f3b-9acf7abe4239/disk.config
Oct 11 08:57:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:55.328 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[08c99736-b5a8-4063-a98f-61c5fd9542ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.333 2 DEBUG oslo_concurrency.processutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8f609d1a-7eaa-41dc-9f3b-9acf7abe4239/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp29puhjwr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:55.355 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cd4d65d8-7d32-48d5-8b6f-704a95d658cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487859, 'reachable_time': 32805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334196, 'error': None, 'target': 'ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d36e695f7\x2da84c\x2d4eed\x2d8d21\x2dd70ce3f1f736.mount: Deactivated successfully.
Oct 11 08:57:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:55.359 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-36e695f7-a84c-4eed-8d21-d70ce3f1f736 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:57:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:55.359 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[295115f7-ec16-4a2a-9e7a-4db77ed9c800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.383 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.385 2 INFO nova.compute.manager [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Took 8.85 seconds to spawn the instance on the hypervisor.
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.386 2 DEBUG nova.compute.manager [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.391 2 INFO nova.compute.manager [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Took 1.10 seconds to destroy the instance on the hypervisor.
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.392 2 DEBUG oslo.service.loopingcall [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.392 2 DEBUG nova.compute.manager [-] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.393 2 DEBUG nova.network.neutron [-] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.397 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173075.097233, 937f10b2-422f-42bc-b13e-5995310b461c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.398 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] VM Resumed (Lifecycle Event)
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.411 2 DEBUG oslo_concurrency.lockutils [req-d3a77f99-6391-4aa4-9702-c29728dff171 req-40c6854e-a053-4ff5-a4d5-0a4725bc2fa9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8f609d1a-7eaa-41dc-9f3b-9acf7abe4239" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.438 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.441 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:57:55 compute-0 ceph-mon[74313]: pgmap v1638: 321 pgs: 321 active+clean; 497 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 4.3 MiB/s wr, 98 op/s
Oct 11 08:57:55 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/695193846' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.474 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.486 2 INFO nova.compute.manager [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Took 10.07 seconds to build instance.
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.489 2 DEBUG oslo_concurrency.processutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8f609d1a-7eaa-41dc-9f3b-9acf7abe4239/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp29puhjwr" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.516 2 DEBUG nova.storage.rbd_utils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] rbd image 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.520 2 DEBUG oslo_concurrency.processutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8f609d1a-7eaa-41dc-9f3b-9acf7abe4239/disk.config 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.580 2 DEBUG oslo_concurrency.lockutils [None req-b313b5b1-d87d-4afa-99ed-53b854e421fc 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "937f10b2-422f-42bc-b13e-5995310b461c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.741 2 DEBUG oslo_concurrency.processutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8f609d1a-7eaa-41dc-9f3b-9acf7abe4239/disk.config 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.742 2 INFO nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Deleting local config drive /var/lib/nova/instances/8f609d1a-7eaa-41dc-9f3b-9acf7abe4239/disk.config because it was imported into RBD.
Oct 11 08:57:55 compute-0 kernel: tap5fbde773-dc: entered promiscuous mode
Oct 11 08:57:55 compute-0 NetworkManager[44960]: <info>  [1760173075.8335] manager: (tap5fbde773-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Oct 11 08:57:55 compute-0 systemd-udevd[333927]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:55 compute-0 ovn_controller[152945]: 2025-10-11T08:57:55Z|00647|binding|INFO|Claiming lport 5fbde773-dc50-43d9-ae02-a499b8096537 for this chassis.
Oct 11 08:57:55 compute-0 ovn_controller[152945]: 2025-10-11T08:57:55Z|00648|binding|INFO|5fbde773-dc50-43d9-ae02-a499b8096537: Claiming fa:16:3e:e5:dc:a1 10.100.0.8
Oct 11 08:57:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:55.892 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:dc:a1 10.100.0.8'], port_security=['fa:16:3e:e5:dc:a1 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8f609d1a-7eaa-41dc-9f3b-9acf7abe4239', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '2', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=5fbde773-dc50-43d9-ae02-a499b8096537) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:57:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:55.894 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 5fbde773-dc50-43d9-ae02-a499b8096537 in datapath e075bdab-78c4-414f-b270-c41d1c82f498 bound to our chassis
Oct 11 08:57:55 compute-0 NetworkManager[44960]: <info>  [1760173075.8963] device (tap5fbde773-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:57:55 compute-0 NetworkManager[44960]: <info>  [1760173075.8973] device (tap5fbde773-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:57:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:55.896 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 08:57:55 compute-0 ovn_controller[152945]: 2025-10-11T08:57:55Z|00649|binding|INFO|Setting lport 5fbde773-dc50-43d9-ae02-a499b8096537 ovn-installed in OVS
Oct 11 08:57:55 compute-0 ovn_controller[152945]: 2025-10-11T08:57:55Z|00650|binding|INFO|Setting lport 5fbde773-dc50-43d9-ae02-a499b8096537 up in Southbound
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:55.919 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[30bdbd0c-e45c-4fea-9b0e-eeb69083e9fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:55 compute-0 systemd-machined[215705]: New machine qemu-83-instance-0000004b.
Oct 11 08:57:55 compute-0 systemd[1]: Started Virtual Machine qemu-83-instance-0000004b.
Oct 11 08:57:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:55.960 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f8913b90-d3e2-4122-96f4-d81dc216ba7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:55.965 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e742aad0-27e3-4ef0-a98a-8e88054f3b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.981 2 DEBUG nova.compute.manager [req-ec3c6260-7406-4e1b-8bf8-c75a3de26cb6 req-ceb17c19-8d4f-47f7-845d-7a5fa18e5645 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-changed-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.982 2 DEBUG nova.compute.manager [req-ec3c6260-7406-4e1b-8bf8-c75a3de26cb6 req-ceb17c19-8d4f-47f7-845d-7a5fa18e5645 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Refreshing instance network info cache due to event network-changed-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:57:55 compute-0 nova_compute[260935]: 2025-10-11 08:57:55.982 2 DEBUG oslo_concurrency.lockutils [req-ec3c6260-7406-4e1b-8bf8-c75a3de26cb6 req-ceb17c19-8d4f-47f7-845d-7a5fa18e5645 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:57:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:56.005 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c1735ca7-c651-40f6-8bba-ae57d97728df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:56.030 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbcb6b5-9177-4b73-8867-8c2f8661bef9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 29, 'rx_bytes': 1000, 'tx_bytes': 1362, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 29, 'rx_bytes': 1000, 'tx_bytes': 1362, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 15419, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334260, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:56.057 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9938aebf-2f8c-4682-a452-fe23a131dc2c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479414, 'tstamp': 479414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334261, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479419, 'tstamp': 479419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334261, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:56.058 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:56.062 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape075bdab-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:56.062 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:56.063 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape075bdab-70, col_values=(('external_ids', {'iface-id': 'b9cf681c-9f4c-4c56-987a-55fa7aa89e1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:56.063 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1639: 321 pgs: 321 active+clean; 497 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 4.3 MiB/s wr, 98 op/s
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.399 2 DEBUG nova.network.neutron [-] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.427 2 INFO nova.compute.manager [-] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Took 1.03 seconds to deallocate network for instance.
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.496 2 DEBUG oslo_concurrency.lockutils [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.498 2 DEBUG oslo_concurrency.lockutils [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.596 2 DEBUG nova.compute.manager [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Received event network-vif-plugged-228ffaff-8879-4872-9923-a5a8270cb291 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.598 2 DEBUG oslo_concurrency.lockutils [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "937f10b2-422f-42bc-b13e-5995310b461c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.599 2 DEBUG oslo_concurrency.lockutils [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "937f10b2-422f-42bc-b13e-5995310b461c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.600 2 DEBUG oslo_concurrency.lockutils [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "937f10b2-422f-42bc-b13e-5995310b461c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.602 2 DEBUG nova.compute.manager [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] No waiting events found dispatching network-vif-plugged-228ffaff-8879-4872-9923-a5a8270cb291 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.602 2 WARNING nova.compute.manager [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Received unexpected event network-vif-plugged-228ffaff-8879-4872-9923-a5a8270cb291 for instance with vm_state active and task_state None.
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.603 2 DEBUG nova.compute.manager [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Received event network-vif-unplugged-10f3051f-8897-4e57-89f1-4a03aac94192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.604 2 DEBUG oslo_concurrency.lockutils [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "075ec27a-70a2-49f6-a097-5738f3407605-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.604 2 DEBUG oslo_concurrency.lockutils [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "075ec27a-70a2-49f6-a097-5738f3407605-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.605 2 DEBUG oslo_concurrency.lockutils [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "075ec27a-70a2-49f6-a097-5738f3407605-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.606 2 DEBUG nova.compute.manager [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] No waiting events found dispatching network-vif-unplugged-10f3051f-8897-4e57-89f1-4a03aac94192 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.607 2 WARNING nova.compute.manager [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Received unexpected event network-vif-unplugged-10f3051f-8897-4e57-89f1-4a03aac94192 for instance with vm_state deleted and task_state None.
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.607 2 DEBUG nova.compute.manager [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Received event network-vif-plugged-10f3051f-8897-4e57-89f1-4a03aac94192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.608 2 DEBUG oslo_concurrency.lockutils [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "075ec27a-70a2-49f6-a097-5738f3407605-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.609 2 DEBUG oslo_concurrency.lockutils [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "075ec27a-70a2-49f6-a097-5738f3407605-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.610 2 DEBUG oslo_concurrency.lockutils [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "075ec27a-70a2-49f6-a097-5738f3407605-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.610 2 DEBUG nova.compute.manager [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] No waiting events found dispatching network-vif-plugged-10f3051f-8897-4e57-89f1-4a03aac94192 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.611 2 WARNING nova.compute.manager [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Received unexpected event network-vif-plugged-10f3051f-8897-4e57-89f1-4a03aac94192 for instance with vm_state deleted and task_state None.
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.612 2 DEBUG nova.compute.manager [req-12a06f29-9446-4472-9557-11473a8294e9 req-a714830a-b707-4b1e-ba44-8eb74581cd73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Received event network-vif-deleted-10f3051f-8897-4e57-89f1-4a03aac94192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.681 2 DEBUG oslo_concurrency.processutils [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.939 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173076.9387891, 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.940 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] VM Started (Lifecycle Event)
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.966 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.971 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173076.940147, 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.971 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] VM Paused (Lifecycle Event)
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.990 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:56 compute-0 nova_compute[260935]: 2025-10-11 08:57:56.996 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.025 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:57:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:57:57 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1692513452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.156 2 DEBUG oslo_concurrency.processutils [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.163 2 DEBUG nova.compute.provider_tree [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.180 2 DEBUG nova.scheduler.client.report [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.212 2 DEBUG nova.network.neutron [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updating instance_info_cache with network_info: [{"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.216 2 DEBUG oslo_concurrency.lockutils [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.235 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Releasing lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.236 2 DEBUG nova.virt.libvirt.driver [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.237 2 INFO nova.virt.libvirt.driver [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Creating image(s)
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.260 2 DEBUG nova.storage.rbd_utils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.264 2 DEBUG nova.objects.instance [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.266 2 INFO nova.scheduler.client.report [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Deleted allocations for instance 075ec27a-70a2-49f6-a097-5738f3407605
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.267 2 DEBUG oslo_concurrency.lockutils [req-ec3c6260-7406-4e1b-8bf8-c75a3de26cb6 req-ceb17c19-8d4f-47f7-845d-7a5fa18e5645 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.267 2 DEBUG nova.network.neutron [req-ec3c6260-7406-4e1b-8bf8-c75a3de26cb6 req-ceb17c19-8d4f-47f7-845d-7a5fa18e5645 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Refreshing network info cache for port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.367 2 DEBUG nova.storage.rbd_utils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.421 2 DEBUG nova.storage.rbd_utils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.426 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "99ed4b2133c5056c5b0fd23a4e6ea2374e37ede3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.428 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "99ed4b2133c5056c5b0fd23a4e6ea2374e37ede3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:57 compute-0 ceph-mon[74313]: pgmap v1639: 321 pgs: 321 active+clean; 497 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 4.3 MiB/s wr, 98 op/s
Oct 11 08:57:57 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1692513452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.504 2 DEBUG oslo_concurrency.lockutils [None req-225c3c13-aa0d-4a92-a713-8f5635710409 bebcc72ba14b4d1e976076ce9fee6080 7b0903df38254068b8566040cf90343c - - default default] Lock "075ec27a-70a2-49f6-a097-5738f3407605" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.589 2 DEBUG nova.objects.instance [None req-5ca97171-320f-42eb-9f6d-b38df18ee7e5 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'pci_devices' on Instance uuid 937f10b2-422f-42bc-b13e-5995310b461c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.613 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173077.6135836, 937f10b2-422f-42bc-b13e-5995310b461c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.614 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] VM Paused (Lifecycle Event)
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.644 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.647 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.685 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.810 2 DEBUG nova.virt.libvirt.imagebackend [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Image locations are: [{'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/aff10a5e-fdc1-44d7-be39-f9f9dc19ad9d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/aff10a5e-fdc1-44d7-be39-f9f9dc19ad9d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.889 2 DEBUG nova.virt.libvirt.imagebackend [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Selected location: {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/aff10a5e-fdc1-44d7-be39-f9f9dc19ad9d/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 11 08:57:57 compute-0 nova_compute[260935]: 2025-10-11 08:57:57.890 2 DEBUG nova.storage.rbd_utils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] cloning images/aff10a5e-fdc1-44d7-be39-f9f9dc19ad9d@snap to None/98d8ebd6-0917-49cf-8efc-a245486424bc_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 08:57:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1640: 321 pgs: 321 active+clean; 418 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 168 op/s
Oct 11 08:57:58 compute-0 kernel: tap228ffaff-88 (unregistering): left promiscuous mode
Oct 11 08:57:58 compute-0 NetworkManager[44960]: <info>  [1760173078.1172] device (tap228ffaff-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:57:58 compute-0 ovn_controller[152945]: 2025-10-11T08:57:58Z|00651|binding|INFO|Releasing lport 228ffaff-8879-4872-9923-a5a8270cb291 from this chassis (sb_readonly=0)
Oct 11 08:57:58 compute-0 ovn_controller[152945]: 2025-10-11T08:57:58Z|00652|binding|INFO|Setting lport 228ffaff-8879-4872-9923-a5a8270cb291 down in Southbound
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:58 compute-0 ovn_controller[152945]: 2025-10-11T08:57:58Z|00653|binding|INFO|Removing iface tap228ffaff-88 ovn-installed in OVS
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.156 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "99ed4b2133c5056c5b0fd23a4e6ea2374e37ede3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:58.174 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:d9:4d 10.100.0.9'], port_security=['fa:16:3e:0a:d9:4d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '937f10b2-422f-42bc-b13e-5995310b461c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93dd4902ce324862a38006da8e06503a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b773900e-3df7-4cb6-b9b0-3d240ff499b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69794a2f-48ab-4a0d-8725-f4a7f57172dd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=228ffaff-8879-4872-9923-a5a8270cb291) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:57:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:58.176 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 228ffaff-8879-4872-9923-a5a8270cb291 in datapath 2cb96d57-a5e9-4b38-b10e-68187a5bf82f unbound from our chassis
Oct 11 08:57:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:58.179 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2cb96d57-a5e9-4b38-b10e-68187a5bf82f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:57:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:58.180 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[120d03d5-2b5f-436d-bc79-cb4076b2d807]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:58.181 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f namespace which is not needed anymore
Oct 11 08:57:58 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Oct 11 08:57:58 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d0000004a.scope: Consumed 3.823s CPU time.
Oct 11 08:57:58 compute-0 systemd-machined[215705]: Machine qemu-82-instance-0000004a terminated.
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.368 2 DEBUG nova.compute.manager [None req-5ca97171-320f-42eb-9f6d-b38df18ee7e5 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.384 2 DEBUG nova.objects.instance [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'migration_context' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:57:58 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[334114]: [NOTICE]   (334121) : haproxy version is 2.8.14-c23fe91
Oct 11 08:57:58 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[334114]: [NOTICE]   (334121) : path to executable is /usr/sbin/haproxy
Oct 11 08:57:58 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[334114]: [WARNING]  (334121) : Exiting Master process...
Oct 11 08:57:58 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[334114]: [ALERT]    (334121) : Current worker (334123) exited with code 143 (Terminated)
Oct 11 08:57:58 compute-0 neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f[334114]: [WARNING]  (334121) : All workers exited. Exiting... (0)
Oct 11 08:57:58 compute-0 systemd[1]: libpod-15002fe25ea5d8642f3bccc904794cccf67a4a4a965e4e9d4f6c21989230796c.scope: Deactivated successfully.
Oct 11 08:57:58 compute-0 podman[334541]: 2025-10-11 08:57:58.435285665 +0000 UTC m=+0.068602727 container died 15002fe25ea5d8642f3bccc904794cccf67a4a4a965e4e9d4f6c21989230796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.479 2 DEBUG nova.storage.rbd_utils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] flattening vms/98d8ebd6-0917-49cf-8efc-a245486424bc_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 08:57:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-15002fe25ea5d8642f3bccc904794cccf67a4a4a965e4e9d4f6c21989230796c-userdata-shm.mount: Deactivated successfully.
Oct 11 08:57:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-30e257640a1d89eff57910701e7f783a8af724620b6f99795860ab48b3b2dcd2-merged.mount: Deactivated successfully.
Oct 11 08:57:58 compute-0 podman[334541]: 2025-10-11 08:57:58.500888786 +0000 UTC m=+0.134205818 container cleanup 15002fe25ea5d8642f3bccc904794cccf67a4a4a965e4e9d4f6c21989230796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:57:58 compute-0 systemd[1]: libpod-conmon-15002fe25ea5d8642f3bccc904794cccf67a4a4a965e4e9d4f6c21989230796c.scope: Deactivated successfully.
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:58 compute-0 podman[334606]: 2025-10-11 08:57:58.65567953 +0000 UTC m=+0.119876210 container remove 15002fe25ea5d8642f3bccc904794cccf67a4a4a965e4e9d4f6c21989230796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 11 08:57:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:58.667 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b52182-1ac9-4314-9ce5-f7033c9ec5e1]: (4, ('Sat Oct 11 08:57:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f (15002fe25ea5d8642f3bccc904794cccf67a4a4a965e4e9d4f6c21989230796c)\n15002fe25ea5d8642f3bccc904794cccf67a4a4a965e4e9d4f6c21989230796c\nSat Oct 11 08:57:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f (15002fe25ea5d8642f3bccc904794cccf67a4a4a965e4e9d4f6c21989230796c)\n15002fe25ea5d8642f3bccc904794cccf67a4a4a965e4e9d4f6c21989230796c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:58.670 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7607eb3a-aa8e-414c-8694-b5f62828e9ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:58.672 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2cb96d57-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:58 compute-0 kernel: tap2cb96d57-a0: left promiscuous mode
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:58.741 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[162961eb-eaae-4080-835a-526a0a7df157]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:58.778 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ca1131cb-773f-46ff-a14c-31b680a90dec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:58.779 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[27eb12db-5f3b-4a79-9040-7729ddcc43bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.794 2 DEBUG nova.virt.libvirt.driver [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Image rbd:vms/98d8ebd6-0917-49cf-8efc-a245486424bc_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.795 2 DEBUG nova.virt.libvirt.driver [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.795 2 DEBUG nova.virt.libvirt.driver [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Ensure instance console log exists: /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.796 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.796 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.797 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.799 2 DEBUG nova.virt.libvirt.driver [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Start _get_guest_xml network_info=[{"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T08:57:31Z,direct_url=<?>,disk_format='raw',id=aff10a5e-fdc1-44d7-be39-f9f9dc19ad9d,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-400269969-shelved',owner='73adfb8cf0c64359b1f33a9643148ef4',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T08:57:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:57:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:58.802 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[67c04625-490f-411c-9b3e-e380bf1b4690]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491856, 'reachable_time': 26054, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334624, 'error': None, 'target': 'ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.803 2 WARNING nova.virt.libvirt.driver [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:57:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d2cb96d57\x2da5e9\x2d4b38\x2db10e\x2d68187a5bf82f.mount: Deactivated successfully.
Oct 11 08:57:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:58.804 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2cb96d57-a5e9-4b38-b10e-68187a5bf82f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:57:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:57:58.804 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[58352cf0-c0c8-4428-a34e-64b21ae08392]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.810 2 DEBUG nova.virt.libvirt.host [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.811 2 DEBUG nova.virt.libvirt.host [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.814 2 DEBUG nova.virt.libvirt.host [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.814 2 DEBUG nova.virt.libvirt.host [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.814 2 DEBUG nova.virt.libvirt.driver [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.815 2 DEBUG nova.virt.hardware [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T08:57:31Z,direct_url=<?>,disk_format='raw',id=aff10a5e-fdc1-44d7-be39-f9f9dc19ad9d,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-400269969-shelved',owner='73adfb8cf0c64359b1f33a9643148ef4',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T08:57:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.815 2 DEBUG nova.virt.hardware [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.816 2 DEBUG nova.virt.hardware [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.816 2 DEBUG nova.virt.hardware [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.816 2 DEBUG nova.virt.hardware [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.816 2 DEBUG nova.virt.hardware [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.817 2 DEBUG nova.virt.hardware [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.817 2 DEBUG nova.virt.hardware [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.817 2 DEBUG nova.virt.hardware [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.818 2 DEBUG nova.virt.hardware [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.818 2 DEBUG nova.virt.hardware [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.818 2 DEBUG nova.objects.instance [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.844 2 DEBUG oslo_concurrency.processutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.933 2 DEBUG nova.compute.manager [req-18d7e2db-d98d-455f-b714-96357c1c557e req-e862072c-9598-46ba-9a83-853f5fb2caea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Received event network-vif-plugged-5fbde773-dc50-43d9-ae02-a499b8096537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.934 2 DEBUG oslo_concurrency.lockutils [req-18d7e2db-d98d-455f-b714-96357c1c557e req-e862072c-9598-46ba-9a83-853f5fb2caea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.934 2 DEBUG oslo_concurrency.lockutils [req-18d7e2db-d98d-455f-b714-96357c1c557e req-e862072c-9598-46ba-9a83-853f5fb2caea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.935 2 DEBUG oslo_concurrency.lockutils [req-18d7e2db-d98d-455f-b714-96357c1c557e req-e862072c-9598-46ba-9a83-853f5fb2caea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.935 2 DEBUG nova.compute.manager [req-18d7e2db-d98d-455f-b714-96357c1c557e req-e862072c-9598-46ba-9a83-853f5fb2caea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Processing event network-vif-plugged-5fbde773-dc50-43d9-ae02-a499b8096537 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.935 2 DEBUG nova.compute.manager [req-18d7e2db-d98d-455f-b714-96357c1c557e req-e862072c-9598-46ba-9a83-853f5fb2caea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Received event network-vif-plugged-5fbde773-dc50-43d9-ae02-a499b8096537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.935 2 DEBUG oslo_concurrency.lockutils [req-18d7e2db-d98d-455f-b714-96357c1c557e req-e862072c-9598-46ba-9a83-853f5fb2caea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.936 2 DEBUG oslo_concurrency.lockutils [req-18d7e2db-d98d-455f-b714-96357c1c557e req-e862072c-9598-46ba-9a83-853f5fb2caea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.936 2 DEBUG oslo_concurrency.lockutils [req-18d7e2db-d98d-455f-b714-96357c1c557e req-e862072c-9598-46ba-9a83-853f5fb2caea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.936 2 DEBUG nova.compute.manager [req-18d7e2db-d98d-455f-b714-96357c1c557e req-e862072c-9598-46ba-9a83-853f5fb2caea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] No waiting events found dispatching network-vif-plugged-5fbde773-dc50-43d9-ae02-a499b8096537 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.937 2 WARNING nova.compute.manager [req-18d7e2db-d98d-455f-b714-96357c1c557e req-e862072c-9598-46ba-9a83-853f5fb2caea e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Received unexpected event network-vif-plugged-5fbde773-dc50-43d9-ae02-a499b8096537 for instance with vm_state building and task_state spawning.
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.937 2 DEBUG nova.compute.manager [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.946 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173078.9451537, 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.947 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] VM Resumed (Lifecycle Event)
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.949 2 DEBUG nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.954 2 INFO nova.virt.libvirt.driver [-] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Instance spawned successfully.
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.954 2 DEBUG nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.971 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.980 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.988 2 DEBUG nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.989 2 DEBUG nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.990 2 DEBUG nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.991 2 DEBUG nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.992 2 DEBUG nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:58 compute-0 nova_compute[260935]: 2025-10-11 08:57:58.993 2 DEBUG nova.virt.libvirt.driver [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.021 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.035 2 DEBUG nova.compute.manager [req-9ef995f3-484f-443d-8fe5-43f716063997 req-953996c3-76b5-4480-985d-e2da915e3c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Received event network-vif-unplugged-228ffaff-8879-4872-9923-a5a8270cb291 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.036 2 DEBUG oslo_concurrency.lockutils [req-9ef995f3-484f-443d-8fe5-43f716063997 req-953996c3-76b5-4480-985d-e2da915e3c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "937f10b2-422f-42bc-b13e-5995310b461c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.036 2 DEBUG oslo_concurrency.lockutils [req-9ef995f3-484f-443d-8fe5-43f716063997 req-953996c3-76b5-4480-985d-e2da915e3c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "937f10b2-422f-42bc-b13e-5995310b461c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.036 2 DEBUG oslo_concurrency.lockutils [req-9ef995f3-484f-443d-8fe5-43f716063997 req-953996c3-76b5-4480-985d-e2da915e3c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "937f10b2-422f-42bc-b13e-5995310b461c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.037 2 DEBUG nova.compute.manager [req-9ef995f3-484f-443d-8fe5-43f716063997 req-953996c3-76b5-4480-985d-e2da915e3c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] No waiting events found dispatching network-vif-unplugged-228ffaff-8879-4872-9923-a5a8270cb291 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.037 2 WARNING nova.compute.manager [req-9ef995f3-484f-443d-8fe5-43f716063997 req-953996c3-76b5-4480-985d-e2da915e3c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Received unexpected event network-vif-unplugged-228ffaff-8879-4872-9923-a5a8270cb291 for instance with vm_state suspended and task_state None.
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.061 2 INFO nova.compute.manager [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Took 11.13 seconds to spawn the instance on the hypervisor.
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.061 2 DEBUG nova.compute.manager [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.112 2 INFO nova.compute.manager [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Took 12.22 seconds to build instance.
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.127 2 DEBUG oslo_concurrency.lockutils [None req-9ab933dd-16af-4259-a3ee-7f6c4dbb3e2e ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.183 2 DEBUG nova.network.neutron [req-ec3c6260-7406-4e1b-8bf8-c75a3de26cb6 req-ceb17c19-8d4f-47f7-845d-7a5fa18e5645 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updated VIF entry in instance network info cache for port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.183 2 DEBUG nova.network.neutron [req-ec3c6260-7406-4e1b-8bf8-c75a3de26cb6 req-ceb17c19-8d4f-47f7-845d-7a5fa18e5645 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updating instance_info_cache with network_info: [{"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.195 2 DEBUG oslo_concurrency.lockutils [req-ec3c6260-7406-4e1b-8bf8-c75a3de26cb6 req-ceb17c19-8d4f-47f7-845d-7a5fa18e5645 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-98d8ebd6-0917-49cf-8efc-a245486424bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:57:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:57:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3247351359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.314 2 DEBUG oslo_concurrency.processutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.338 2 DEBUG nova.storage.rbd_utils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.342 2 DEBUG oslo_concurrency.processutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:57:59 compute-0 ceph-mon[74313]: pgmap v1640: 321 pgs: 321 active+clean; 418 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 168 op/s
Oct 11 08:57:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3247351359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:57:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3225954883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.845 2 DEBUG oslo_concurrency.processutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.847 2 DEBUG nova.virt.libvirt.vif [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:55:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-400269969',display_name='tempest-ServerActionsTestOtherB-server-400269969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-400269969',id=58,image_ref='aff10a5e-fdc1-44d7-be39-f9f9dc19ad9d',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1264143465',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:55:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='73adfb8cf0c64359b1f33a9643148ef4',ramdisk_id='',reservation_id='r-bh4aj5da',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1445504716',owner_user_name='tempest-ServerActionsTestOtherB-1445504716-project-member',shelved_at='2025-10-11T08:57:40.476674',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='aff10a5e-fdc1-44d7-be39-f9f9dc19ad9d'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:57:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d5f5f07c57c467286168be7c097bf26',uuid=98d8ebd6-0917-49cf-8efc-a245486424bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.848 2 DEBUG nova.network.os_vif_util [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converting VIF {"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.850 2 DEBUG nova.network.os_vif_util [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.852 2 DEBUG nova.objects.instance [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.875 2 DEBUG nova.virt.libvirt.driver [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:57:59 compute-0 nova_compute[260935]:   <uuid>98d8ebd6-0917-49cf-8efc-a245486424bc</uuid>
Oct 11 08:57:59 compute-0 nova_compute[260935]:   <name>instance-0000003a</name>
Oct 11 08:57:59 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:57:59 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:57:59 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerActionsTestOtherB-server-400269969</nova:name>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:57:58</nova:creationTime>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:57:59 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:57:59 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:57:59 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:57:59 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:57:59 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:57:59 compute-0 nova_compute[260935]:         <nova:user uuid="8d5f5f07c57c467286168be7c097bf26">tempest-ServerActionsTestOtherB-1445504716-project-member</nova:user>
Oct 11 08:57:59 compute-0 nova_compute[260935]:         <nova:project uuid="73adfb8cf0c64359b1f33a9643148ef4">tempest-ServerActionsTestOtherB-1445504716</nova:project>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="aff10a5e-fdc1-44d7-be39-f9f9dc19ad9d"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:57:59 compute-0 nova_compute[260935]:         <nova:port uuid="10e01bbb-0d2c-4f76-8f81-c90e20d3e54c">
Oct 11 08:57:59 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:57:59 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:57:59 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <system>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <entry name="serial">98d8ebd6-0917-49cf-8efc-a245486424bc</entry>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <entry name="uuid">98d8ebd6-0917-49cf-8efc-a245486424bc</entry>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     </system>
Oct 11 08:57:59 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:57:59 compute-0 nova_compute[260935]:   <os>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:   </os>
Oct 11 08:57:59 compute-0 nova_compute[260935]:   <features>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:   </features>
Oct 11 08:57:59 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:57:59 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:57:59 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/98d8ebd6-0917-49cf-8efc-a245486424bc_disk">
Oct 11 08:57:59 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       </source>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:57:59 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config">
Oct 11 08:57:59 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       </source>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:57:59 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:3f:0f:36"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <target dev="tap10e01bbb-0d"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/console.log" append="off"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <video>
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     </video>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <input type="keyboard" bus="usb"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:57:59 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:57:59 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:57:59 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:57:59 compute-0 nova_compute[260935]: </domain>
Oct 11 08:57:59 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.878 2 DEBUG nova.compute.manager [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Preparing to wait for external event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.878 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.879 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.879 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.880 2 DEBUG nova.virt.libvirt.vif [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:55:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-400269969',display_name='tempest-ServerActionsTestOtherB-server-400269969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-400269969',id=58,image_ref='aff10a5e-fdc1-44d7-be39-f9f9dc19ad9d',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1264143465',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:55:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='73adfb8cf0c64359b1f33a9643148ef4',ramdisk_id='',reservation_id='r-bh4aj5da',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',
image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1445504716',owner_user_name='tempest-ServerActionsTestOtherB-1445504716-project-member',shelved_at='2025-10-11T08:57:40.476674',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='aff10a5e-fdc1-44d7-be39-f9f9dc19ad9d'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:57:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d5f5f07c57c467286168be7c097bf26',uuid=98d8ebd6-0917-49cf-8efc-a245486424bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.881 2 DEBUG nova.network.os_vif_util [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converting VIF {"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.882 2 DEBUG nova.network.os_vif_util [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.883 2 DEBUG os_vif [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.885 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.885 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.890 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10e01bbb-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.891 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap10e01bbb-0d, col_values=(('external_ids', {'iface-id': '10e01bbb-0d2c-4f76-8f81-c90e20d3e54c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:0f:36', 'vm-uuid': '98d8ebd6-0917-49cf-8efc-a245486424bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:59 compute-0 NetworkManager[44960]: <info>  [1760173079.8958] manager: (tap10e01bbb-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/296)
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.912 2 INFO os_vif [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d')
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.975 2 DEBUG nova.virt.libvirt.driver [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.976 2 DEBUG nova.virt.libvirt.driver [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.976 2 DEBUG nova.virt.libvirt.driver [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] No VIF found with MAC fa:16:3e:3f:0f:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:57:59 compute-0 nova_compute[260935]: 2025-10-11 08:57:59.977 2 INFO nova.virt.libvirt.driver [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Using config drive
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.011 2 DEBUG nova.storage.rbd_utils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.038 2 DEBUG nova.objects.instance [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1641: 321 pgs: 321 active+clean; 418 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 144 op/s
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.100 2 DEBUG nova.objects.instance [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'keypairs' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.458 2 INFO nova.virt.libvirt.driver [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Creating config drive at /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.469 2 DEBUG oslo_concurrency.processutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjvj2oiis execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:00 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3225954883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.652 2 DEBUG oslo_concurrency.processutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjvj2oiis" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.692 2 DEBUG nova.storage.rbd_utils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] rbd image 98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.697 2 DEBUG oslo_concurrency.processutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config 98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.799 2 DEBUG oslo_concurrency.lockutils [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "937f10b2-422f-42bc-b13e-5995310b461c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.800 2 DEBUG oslo_concurrency.lockutils [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "937f10b2-422f-42bc-b13e-5995310b461c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.800 2 DEBUG oslo_concurrency.lockutils [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "937f10b2-422f-42bc-b13e-5995310b461c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.801 2 DEBUG oslo_concurrency.lockutils [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "937f10b2-422f-42bc-b13e-5995310b461c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.801 2 DEBUG oslo_concurrency.lockutils [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "937f10b2-422f-42bc-b13e-5995310b461c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.804 2 INFO nova.compute.manager [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Terminating instance
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.806 2 DEBUG nova.compute.manager [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.817 2 INFO nova.virt.libvirt.driver [-] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Instance destroyed successfully.
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.817 2 DEBUG nova.objects.instance [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lazy-loading 'resources' on Instance uuid 937f10b2-422f-42bc-b13e-5995310b461c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.833 2 DEBUG nova.virt.libvirt.vif [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:57:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2072321326',display_name='tempest-DeleteServersTestJSON-server-2072321326',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2072321326',id=74,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:57:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='93dd4902ce324862a38006da8e06503a',ramdisk_id='',reservation_id='r-q6sypizy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1019340677',owner_user_name='tempest-DeleteServersTestJSON-1019340677-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:57:58Z,user_data=None,user_id='213e5693e94f44e7950e3dfbca04228a',uuid=937f10b2-422f-42bc-b13e-5995310b461c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "228ffaff-8879-4872-9923-a5a8270cb291", "address": "fa:16:3e:0a:d9:4d", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228ffaff-88", "ovs_interfaceid": "228ffaff-8879-4872-9923-a5a8270cb291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.834 2 DEBUG nova.network.os_vif_util [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converting VIF {"id": "228ffaff-8879-4872-9923-a5a8270cb291", "address": "fa:16:3e:0a:d9:4d", "network": {"id": "2cb96d57-a5e9-4b38-b10e-68187a5bf82f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2000648338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93dd4902ce324862a38006da8e06503a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228ffaff-88", "ovs_interfaceid": "228ffaff-8879-4872-9923-a5a8270cb291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.835 2 DEBUG nova.network.os_vif_util [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:d9:4d,bridge_name='br-int',has_traffic_filtering=True,id=228ffaff-8879-4872-9923-a5a8270cb291,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap228ffaff-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.836 2 DEBUG os_vif [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:d9:4d,bridge_name='br-int',has_traffic_filtering=True,id=228ffaff-8879-4872-9923-a5a8270cb291,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap228ffaff-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.840 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap228ffaff-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.855 2 INFO os_vif [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:d9:4d,bridge_name='br-int',has_traffic_filtering=True,id=228ffaff-8879-4872-9923-a5a8270cb291,network=Network(2cb96d57-a5e9-4b38-b10e-68187a5bf82f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap228ffaff-88')
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.888 2 DEBUG oslo_concurrency.processutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config 98d8ebd6-0917-49cf-8efc-a245486424bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.889 2 INFO nova.virt.libvirt.driver [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Deleting local config drive /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc/disk.config because it was imported into RBD.
Oct 11 08:58:00 compute-0 kernel: tap10e01bbb-0d: entered promiscuous mode
Oct 11 08:58:00 compute-0 systemd-udevd[334463]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:58:00 compute-0 NetworkManager[44960]: <info>  [1760173080.9717] manager: (tap10e01bbb-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/297)
Oct 11 08:58:00 compute-0 nova_compute[260935]: 2025-10-11 08:58:00.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:00 compute-0 ovn_controller[152945]: 2025-10-11T08:58:00Z|00654|binding|INFO|Claiming lport 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c for this chassis.
Oct 11 08:58:00 compute-0 ovn_controller[152945]: 2025-10-11T08:58:00Z|00655|binding|INFO|10e01bbb-0d2c-4f76-8f81-c90e20d3e54c: Claiming fa:16:3e:3f:0f:36 10.100.0.9
Oct 11 08:58:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:00.991 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:0f:36 10.100.0.9'], port_security=['fa:16:3e:3f:0f:36 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '98d8ebd6-0917-49cf-8efc-a245486424bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73adfb8cf0c64359b1f33a9643148ef4', 'neutron:revision_number': '7', 'neutron:security_group_ids': '06a0521a-9ff7-49ed-a194-a12a4a8fd551', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cfa1fc9-121e-4e0a-bd08-716b82275316, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:58:00 compute-0 NetworkManager[44960]: <info>  [1760173080.9976] device (tap10e01bbb-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:58:00 compute-0 NetworkManager[44960]: <info>  [1760173080.9986] device (tap10e01bbb-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:58:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:00.992 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c in datapath 056c6769-bc97-4ae9-9759-4cc2d984a31d bound to our chassis
Oct 11 08:58:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:00.998 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 056c6769-bc97-4ae9-9759-4cc2d984a31d
Oct 11 08:58:01 compute-0 ovn_controller[152945]: 2025-10-11T08:58:01Z|00656|binding|INFO|Setting lport 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c ovn-installed in OVS
Oct 11 08:58:01 compute-0 ovn_controller[152945]: 2025-10-11T08:58:01Z|00657|binding|INFO|Setting lport 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c up in Southbound
Oct 11 08:58:01 compute-0 systemd-machined[215705]: New machine qemu-84-instance-0000003a.
Oct 11 08:58:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:01.021 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[55aee8b2-26a9-4f5a-bf48-913e40115661]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:01 compute-0 systemd[1]: Started Virtual Machine qemu-84-instance-0000003a.
Oct 11 08:58:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:01.130 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3b20df0b-2334-431a-82a0-431ac8a1d6db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:01.137 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c5728a13-a343-4487-adda-5837b9c98be6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.138 2 DEBUG nova.compute.manager [req-80f476c2-e0aa-41e6-be37-6d6cce733a3c req-3afe3ebb-83f0-4d03-9bc7-5e1e921ac98f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Received event network-vif-plugged-228ffaff-8879-4872-9923-a5a8270cb291 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.138 2 DEBUG oslo_concurrency.lockutils [req-80f476c2-e0aa-41e6-be37-6d6cce733a3c req-3afe3ebb-83f0-4d03-9bc7-5e1e921ac98f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "937f10b2-422f-42bc-b13e-5995310b461c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.139 2 DEBUG oslo_concurrency.lockutils [req-80f476c2-e0aa-41e6-be37-6d6cce733a3c req-3afe3ebb-83f0-4d03-9bc7-5e1e921ac98f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "937f10b2-422f-42bc-b13e-5995310b461c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.139 2 DEBUG oslo_concurrency.lockutils [req-80f476c2-e0aa-41e6-be37-6d6cce733a3c req-3afe3ebb-83f0-4d03-9bc7-5e1e921ac98f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "937f10b2-422f-42bc-b13e-5995310b461c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.140 2 DEBUG nova.compute.manager [req-80f476c2-e0aa-41e6-be37-6d6cce733a3c req-3afe3ebb-83f0-4d03-9bc7-5e1e921ac98f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] No waiting events found dispatching network-vif-plugged-228ffaff-8879-4872-9923-a5a8270cb291 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.140 2 WARNING nova.compute.manager [req-80f476c2-e0aa-41e6-be37-6d6cce733a3c req-3afe3ebb-83f0-4d03-9bc7-5e1e921ac98f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Received unexpected event network-vif-plugged-228ffaff-8879-4872-9923-a5a8270cb291 for instance with vm_state suspended and task_state deleting.
Oct 11 08:58:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:01.177 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[34de46a2-d36d-4063-852f-07126815f2d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:01.205 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[050a88c8-af78-48d2-bef9-1c023520e5ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap056c6769-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:cc:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 14, 'rx_bytes': 1000, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 14, 'rx_bytes': 1000, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477032, 'reachable_time': 32693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334793, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:01.232 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[73cea656-663f-4b66-915f-c8c0fdb1c391]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap056c6769-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477053, 'tstamp': 477053}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334794, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap056c6769-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477058, 'tstamp': 477058}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334794, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:01.234 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap056c6769-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:01.237 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap056c6769-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:01.238 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:58:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:01.238 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap056c6769-b0, col_values=(('external_ids', {'iface-id': '056a8563-0695-415b-921f-e75fa98e60e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:01.239 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.261 2 DEBUG nova.compute.manager [req-98e1d1f4-4f41-4776-9995-81d3f43e3644 req-0f3c301a-4d10-466a-aef5-6271df1d0b55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.262 2 DEBUG oslo_concurrency.lockutils [req-98e1d1f4-4f41-4776-9995-81d3f43e3644 req-0f3c301a-4d10-466a-aef5-6271df1d0b55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.262 2 DEBUG oslo_concurrency.lockutils [req-98e1d1f4-4f41-4776-9995-81d3f43e3644 req-0f3c301a-4d10-466a-aef5-6271df1d0b55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.263 2 DEBUG oslo_concurrency.lockutils [req-98e1d1f4-4f41-4776-9995-81d3f43e3644 req-0f3c301a-4d10-466a-aef5-6271df1d0b55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.263 2 DEBUG nova.compute.manager [req-98e1d1f4-4f41-4776-9995-81d3f43e3644 req-0f3c301a-4d10-466a-aef5-6271df1d0b55 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Processing event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.311 2 INFO nova.virt.libvirt.driver [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Deleting instance files /var/lib/nova/instances/937f10b2-422f-42bc-b13e-5995310b461c_del
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.312 2 INFO nova.virt.libvirt.driver [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Deletion of /var/lib/nova/instances/937f10b2-422f-42bc-b13e-5995310b461c_del complete
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.430 2 INFO nova.compute.manager [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Took 0.62 seconds to destroy the instance on the hypervisor.
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.431 2 DEBUG oslo.service.loopingcall [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.431 2 DEBUG nova.compute.manager [-] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.432 2 DEBUG nova.network.neutron [-] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:58:01 compute-0 ceph-mon[74313]: pgmap v1641: 321 pgs: 321 active+clean; 418 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 144 op/s
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.714 2 DEBUG oslo_concurrency.lockutils [None req-116a3b37-1b59-4226-b2a5-21f7ac3496cb ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.715 2 DEBUG oslo_concurrency.lockutils [None req-116a3b37-1b59-4226-b2a5-21f7ac3496cb ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.715 2 DEBUG nova.compute.manager [None req-116a3b37-1b59-4226-b2a5-21f7ac3496cb ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.721 2 DEBUG nova.compute.manager [None req-116a3b37-1b59-4226-b2a5-21f7ac3496cb ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.723 2 DEBUG nova.objects.instance [None req-116a3b37-1b59-4226-b2a5-21f7ac3496cb ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'flavor' on Instance uuid 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:01 compute-0 nova_compute[260935]: 2025-10-11 08:58:01.759 2 DEBUG nova.virt.libvirt.driver [None req-116a3b37-1b59-4226-b2a5-21f7ac3496cb ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 08:58:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1642: 321 pgs: 321 active+clean; 425 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.1 MiB/s wr, 165 op/s
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.288 2 DEBUG nova.network.neutron [-] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.311 2 INFO nova.compute.manager [-] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Took 0.88 seconds to deallocate network for instance.
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.321 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173082.3193476, 98d8ebd6-0917-49cf-8efc-a245486424bc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.322 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] VM Started (Lifecycle Event)
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.326 2 DEBUG nova.compute.manager [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.331 2 DEBUG nova.virt.libvirt.driver [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.337 2 INFO nova.virt.libvirt.driver [-] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Instance spawned successfully.
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.364 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.370 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.373 2 DEBUG oslo_concurrency.lockutils [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.373 2 DEBUG oslo_concurrency.lockutils [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.393 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.393 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173082.320839, 98d8ebd6-0917-49cf-8efc-a245486424bc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.393 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] VM Paused (Lifecycle Event)
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.415 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.418 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173082.3297532, 98d8ebd6-0917-49cf-8efc-a245486424bc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.418 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] VM Resumed (Lifecycle Event)
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.439 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.443 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.461 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.586 2 DEBUG oslo_concurrency.processutils [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:02 compute-0 podman[334839]: 2025-10-11 08:58:02.760160487 +0000 UTC m=+0.065430407 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 11 08:58:02 compute-0 nova_compute[260935]: 2025-10-11 08:58:02.957 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:58:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:58:03 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4021489269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:03 compute-0 nova_compute[260935]: 2025-10-11 08:58:03.081 2 DEBUG oslo_concurrency.processutils [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:03 compute-0 nova_compute[260935]: 2025-10-11 08:58:03.089 2 DEBUG nova.compute.provider_tree [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:58:03 compute-0 nova_compute[260935]: 2025-10-11 08:58:03.114 2 DEBUG nova.scheduler.client.report [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:58:03 compute-0 nova_compute[260935]: 2025-10-11 08:58:03.154 2 DEBUG oslo_concurrency.lockutils [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:03 compute-0 nova_compute[260935]: 2025-10-11 08:58:03.195 2 INFO nova.scheduler.client.report [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Deleted allocations for instance 937f10b2-422f-42bc-b13e-5995310b461c
Oct 11 08:58:03 compute-0 nova_compute[260935]: 2025-10-11 08:58:03.302 2 DEBUG oslo_concurrency.lockutils [None req-6c5590db-f2f6-42f7-97de-1f10a2fe6e48 213e5693e94f44e7950e3dfbca04228a 93dd4902ce324862a38006da8e06503a - - default default] Lock "937f10b2-422f-42bc-b13e-5995310b461c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:03 compute-0 ovn_controller[152945]: 2025-10-11T08:58:03Z|00658|binding|INFO|Releasing lport 056a8563-0695-415b-921f-e75fa98e60e5 from this chassis (sb_readonly=0)
Oct 11 08:58:03 compute-0 ovn_controller[152945]: 2025-10-11T08:58:03Z|00659|binding|INFO|Releasing lport b9cf681c-9f4c-4c56-987a-55fa7aa89e1a from this chassis (sb_readonly=0)
Oct 11 08:58:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:58:03 compute-0 nova_compute[260935]: 2025-10-11 08:58:03.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Oct 11 08:58:03 compute-0 ceph-mon[74313]: pgmap v1642: 321 pgs: 321 active+clean; 425 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.1 MiB/s wr, 165 op/s
Oct 11 08:58:03 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4021489269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Oct 11 08:58:03 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Oct 11 08:58:03 compute-0 nova_compute[260935]: 2025-10-11 08:58:03.787 2 DEBUG nova.compute.manager [req-5236356f-a063-4410-a5d1-080f79848eeb req-c972b40a-056c-46e2-8c56-1b90dc92b590 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:03 compute-0 nova_compute[260935]: 2025-10-11 08:58:03.788 2 DEBUG oslo_concurrency.lockutils [req-5236356f-a063-4410-a5d1-080f79848eeb req-c972b40a-056c-46e2-8c56-1b90dc92b590 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:03 compute-0 nova_compute[260935]: 2025-10-11 08:58:03.789 2 DEBUG oslo_concurrency.lockutils [req-5236356f-a063-4410-a5d1-080f79848eeb req-c972b40a-056c-46e2-8c56-1b90dc92b590 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:03 compute-0 nova_compute[260935]: 2025-10-11 08:58:03.789 2 DEBUG oslo_concurrency.lockutils [req-5236356f-a063-4410-a5d1-080f79848eeb req-c972b40a-056c-46e2-8c56-1b90dc92b590 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:03 compute-0 nova_compute[260935]: 2025-10-11 08:58:03.789 2 DEBUG nova.compute.manager [req-5236356f-a063-4410-a5d1-080f79848eeb req-c972b40a-056c-46e2-8c56-1b90dc92b590 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] No waiting events found dispatching network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:03 compute-0 nova_compute[260935]: 2025-10-11 08:58:03.790 2 WARNING nova.compute.manager [req-5236356f-a063-4410-a5d1-080f79848eeb req-c972b40a-056c-46e2-8c56-1b90dc92b590 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received unexpected event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c for instance with vm_state shelved_offloaded and task_state spawning.
Oct 11 08:58:03 compute-0 nova_compute[260935]: 2025-10-11 08:58:03.790 2 DEBUG nova.compute.manager [req-5236356f-a063-4410-a5d1-080f79848eeb req-c972b40a-056c-46e2-8c56-1b90dc92b590 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Received event network-vif-deleted-228ffaff-8879-4872-9923-a5a8270cb291 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:03 compute-0 nova_compute[260935]: 2025-10-11 08:58:03.971 2 DEBUG nova.compute.manager [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:04 compute-0 nova_compute[260935]: 2025-10-11 08:58:04.064 2 DEBUG oslo_concurrency.lockutils [None req-52b41a59-dd6d-4f95-968d-9f8bdd1c5fa9 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 11.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1644: 321 pgs: 321 active+clean; 451 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 9.4 MiB/s rd, 4.7 MiB/s wr, 345 op/s
Oct 11 08:58:04 compute-0 ceph-mon[74313]: osdmap e239: 3 total, 3 up, 3 in
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026262326422803934 of space, bias 1.0, pg target 0.787869792684118 quantized to 32 (current 32)
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0017703017150159537 of space, bias 1.0, pg target 0.5310905145047862 quantized to 32 (current 32)
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:58:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:58:05 compute-0 ceph-mon[74313]: pgmap v1644: 321 pgs: 321 active+clean; 451 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 9.4 MiB/s rd, 4.7 MiB/s wr, 345 op/s
Oct 11 08:58:05 compute-0 nova_compute[260935]: 2025-10-11 08:58:05.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1645: 321 pgs: 321 active+clean; 451 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 9.4 MiB/s rd, 4.7 MiB/s wr, 345 op/s
Oct 11 08:58:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:06.616 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:58:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:06.617 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 08:58:06 compute-0 nova_compute[260935]: 2025-10-11 08:58:06.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:06 compute-0 nova_compute[260935]: 2025-10-11 08:58:06.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:58:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Oct 11 08:58:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Oct 11 08:58:07 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Oct 11 08:58:07 compute-0 ceph-mon[74313]: pgmap v1645: 321 pgs: 321 active+clean; 451 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 9.4 MiB/s rd, 4.7 MiB/s wr, 345 op/s
Oct 11 08:58:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1647: 321 pgs: 321 active+clean; 372 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 5.9 MiB/s wr, 408 op/s
Oct 11 08:58:08 compute-0 ovn_controller[152945]: 2025-10-11T08:58:08Z|00660|binding|INFO|Releasing lport 056a8563-0695-415b-921f-e75fa98e60e5 from this chassis (sb_readonly=0)
Oct 11 08:58:08 compute-0 ovn_controller[152945]: 2025-10-11T08:58:08Z|00661|binding|INFO|Releasing lport b9cf681c-9f4c-4c56-987a-55fa7aa89e1a from this chassis (sb_readonly=0)
Oct 11 08:58:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:58:08 compute-0 nova_compute[260935]: 2025-10-11 08:58:08.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:08 compute-0 nova_compute[260935]: 2025-10-11 08:58:08.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:08 compute-0 ceph-mon[74313]: osdmap e240: 3 total, 3 up, 3 in
Oct 11 08:58:08 compute-0 nova_compute[260935]: 2025-10-11 08:58:08.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:58:08 compute-0 nova_compute[260935]: 2025-10-11 08:58:08.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:58:08 compute-0 podman[334876]: 2025-10-11 08:58:08.796397485 +0000 UTC m=+0.098944082 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 08:58:09 compute-0 nova_compute[260935]: 2025-10-11 08:58:09.537 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173074.5347724, 075ec27a-70a2-49f6-a097-5738f3407605 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:09 compute-0 nova_compute[260935]: 2025-10-11 08:58:09.539 2 INFO nova.compute.manager [-] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] VM Stopped (Lifecycle Event)
Oct 11 08:58:09 compute-0 nova_compute[260935]: 2025-10-11 08:58:09.575 2 DEBUG nova.compute.manager [None req-b54e8ba3-db5f-43c8-8b06-89c30aff64ec - - - - - -] [instance: 075ec27a-70a2-49f6-a097-5738f3407605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:09 compute-0 ceph-mon[74313]: pgmap v1647: 321 pgs: 321 active+clean; 372 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 5.9 MiB/s wr, 408 op/s
Oct 11 08:58:09 compute-0 nova_compute[260935]: 2025-10-11 08:58:09.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:58:09 compute-0 nova_compute[260935]: 2025-10-11 08:58:09.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:58:09 compute-0 nova_compute[260935]: 2025-10-11 08:58:09.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 08:58:09 compute-0 nova_compute[260935]: 2025-10-11 08:58:09.837 2 DEBUG oslo_concurrency.lockutils [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "5c74d1f2-66dc-474b-b669-445115cf6b1d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:09 compute-0 nova_compute[260935]: 2025-10-11 08:58:09.838 2 DEBUG oslo_concurrency.lockutils [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "5c74d1f2-66dc-474b-b669-445115cf6b1d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:09 compute-0 nova_compute[260935]: 2025-10-11 08:58:09.839 2 DEBUG oslo_concurrency.lockutils [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:09 compute-0 nova_compute[260935]: 2025-10-11 08:58:09.839 2 DEBUG oslo_concurrency.lockutils [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:09 compute-0 nova_compute[260935]: 2025-10-11 08:58:09.840 2 DEBUG oslo_concurrency.lockutils [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:09 compute-0 nova_compute[260935]: 2025-10-11 08:58:09.842 2 INFO nova.compute.manager [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Terminating instance
Oct 11 08:58:09 compute-0 nova_compute[260935]: 2025-10-11 08:58:09.845 2 DEBUG nova.compute.manager [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:58:09 compute-0 kernel: tapc07af0d2-0b (unregistering): left promiscuous mode
Oct 11 08:58:09 compute-0 NetworkManager[44960]: <info>  [1760173089.9297] device (tapc07af0d2-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:58:09 compute-0 ovn_controller[152945]: 2025-10-11T08:58:09Z|00662|binding|INFO|Releasing lport c07af0d2-0bbd-43c9-b243-60698370a2d5 from this chassis (sb_readonly=0)
Oct 11 08:58:09 compute-0 ovn_controller[152945]: 2025-10-11T08:58:09Z|00663|binding|INFO|Setting lport c07af0d2-0bbd-43c9-b243-60698370a2d5 down in Southbound
Oct 11 08:58:09 compute-0 ovn_controller[152945]: 2025-10-11T08:58:09Z|00664|binding|INFO|Removing iface tapc07af0d2-0b ovn-installed in OVS
Oct 11 08:58:09 compute-0 nova_compute[260935]: 2025-10-11 08:58:09.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:09.954 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:d6:3e 10.100.0.7'], port_security=['fa:16:3e:fc:d6:3e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5c74d1f2-66dc-474b-b669-445115cf6b1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73adfb8cf0c64359b1f33a9643148ef4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '88bbbe16-d865-47df-a62a-312128d64455', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cfa1fc9-121e-4e0a-bd08-716b82275316, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c07af0d2-0bbd-43c9-b243-60698370a2d5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:58:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:09.955 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c07af0d2-0bbd-43c9-b243-60698370a2d5 in datapath 056c6769-bc97-4ae9-9759-4cc2d984a31d unbound from our chassis
Oct 11 08:58:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:09.956 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 056c6769-bc97-4ae9-9759-4cc2d984a31d
Oct 11 08:58:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:09.981 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5b11151a-95f7-4232-81ca-57e540291440]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:09 compute-0 nova_compute[260935]: 2025-10-11 08:58:09.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:10 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000044.scope: Deactivated successfully.
Oct 11 08:58:10 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000044.scope: Consumed 16.254s CPU time.
Oct 11 08:58:10 compute-0 systemd-machined[215705]: Machine qemu-76-instance-00000044 terminated.
Oct 11 08:58:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:10.029 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c91277-0057-492e-b6fb-f165f11fe888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:10.032 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e094fa8b-c702-4680-ae01-92ed04daf969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:10.071 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bc4c5161-c7f6-4372-ac4e-2c0f52108888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1648: 321 pgs: 321 active+clean; 372 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 9.5 MiB/s rd, 3.9 MiB/s wr, 369 op/s
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.085 2 INFO nova.virt.libvirt.driver [-] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Instance destroyed successfully.
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.086 2 DEBUG nova.objects.instance [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'resources' on Instance uuid 5c74d1f2-66dc-474b-b669-445115cf6b1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.103 2 DEBUG nova.virt.libvirt.vif [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:56:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-602145985',display_name='tempest-ServerActionsTestOtherB-server-602145985',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-602145985',id=68,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:56:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='73adfb8cf0c64359b1f33a9643148ef4',ramdisk_id='',reservation_id='r-xzpjs7qm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1445504716',owner_user_name='tempest-ServerActionsTestOtherB-1445504716-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:56:52Z,user_data=None,user_id='8d5f5f07c57c467286168be7c097bf26',uuid=5c74d1f2-66dc-474b-b669-445115cf6b1d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "address": "fa:16:3e:fc:d6:3e", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc07af0d2-0b", "ovs_interfaceid": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.104 2 DEBUG nova.network.os_vif_util [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converting VIF {"id": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "address": "fa:16:3e:fc:d6:3e", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc07af0d2-0b", "ovs_interfaceid": "c07af0d2-0bbd-43c9-b243-60698370a2d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.106 2 DEBUG nova.network.os_vif_util [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:d6:3e,bridge_name='br-int',has_traffic_filtering=True,id=c07af0d2-0bbd-43c9-b243-60698370a2d5,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc07af0d2-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.106 2 DEBUG os_vif [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:d6:3e,bridge_name='br-int',has_traffic_filtering=True,id=c07af0d2-0bbd-43c9-b243-60698370a2d5,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc07af0d2-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.110 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc07af0d2-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:10.102 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[36f16568-cfd3-47d2-a26b-7d5ba1a1575f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap056c6769-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:cc:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 16, 'rx_bytes': 1000, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 16, 'rx_bytes': 1000, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477032, 'reachable_time': 32693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334915, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.118 2 INFO os_vif [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:d6:3e,bridge_name='br-int',has_traffic_filtering=True,id=c07af0d2-0bbd-43c9-b243-60698370a2d5,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc07af0d2-0b')
Oct 11 08:58:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:10.134 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a73c83-ad18-4d5d-b29d-7f3015582354]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap056c6769-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477053, 'tstamp': 477053}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334920, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap056c6769-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477058, 'tstamp': 477058}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334920, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:10.136 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap056c6769-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:10.140 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap056c6769-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:10.141 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:58:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:10.142 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap056c6769-b0, col_values=(('external_ids', {'iface-id': '056a8563-0695-415b-921f-e75fa98e60e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:10.142 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.456 2 INFO nova.virt.libvirt.driver [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Deleting instance files /var/lib/nova/instances/5c74d1f2-66dc-474b-b669-445115cf6b1d_del
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.457 2 INFO nova.virt.libvirt.driver [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Deletion of /var/lib/nova/instances/5c74d1f2-66dc-474b-b669-445115cf6b1d_del complete
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.547 2 INFO nova.compute.manager [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.548 2 DEBUG oslo.service.loopingcall [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.549 2 DEBUG nova.compute.manager [-] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.549 2 DEBUG nova.network.neutron [-] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:58:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:10.619 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:10 compute-0 nova_compute[260935]: 2025-10-11 08:58:10.721 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.352 2 DEBUG nova.network.neutron [-] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.372 2 INFO nova.compute.manager [-] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Took 0.82 seconds to deallocate network for instance.
Oct 11 08:58:11 compute-0 ovn_controller[152945]: 2025-10-11T08:58:11Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e5:dc:a1 10.100.0.8
Oct 11 08:58:11 compute-0 ovn_controller[152945]: 2025-10-11T08:58:11Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e5:dc:a1 10.100.0.8
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.450 2 DEBUG oslo_concurrency.lockutils [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.451 2 DEBUG oslo_concurrency.lockutils [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.480 2 DEBUG nova.compute.manager [req-c57d6723-b8ea-4143-bb12-960b2c292042 req-baef67e8-f4f6-4c3d-9733-666753961031 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Received event network-vif-deleted-c07af0d2-0bbd-43c9-b243-60698370a2d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:11 compute-0 ceph-mon[74313]: pgmap v1648: 321 pgs: 321 active+clean; 372 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 9.5 MiB/s rd, 3.9 MiB/s wr, 369 op/s
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.586 2 DEBUG nova.compute.manager [req-a363a6d0-0f56-4181-8caa-7f5889cbdf6c req-58b5594e-4805-4be3-bfa8-fc425e955a08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Received event network-vif-unplugged-c07af0d2-0bbd-43c9-b243-60698370a2d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.587 2 DEBUG oslo_concurrency.lockutils [req-a363a6d0-0f56-4181-8caa-7f5889cbdf6c req-58b5594e-4805-4be3-bfa8-fc425e955a08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.588 2 DEBUG oslo_concurrency.lockutils [req-a363a6d0-0f56-4181-8caa-7f5889cbdf6c req-58b5594e-4805-4be3-bfa8-fc425e955a08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.588 2 DEBUG oslo_concurrency.lockutils [req-a363a6d0-0f56-4181-8caa-7f5889cbdf6c req-58b5594e-4805-4be3-bfa8-fc425e955a08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.588 2 DEBUG nova.compute.manager [req-a363a6d0-0f56-4181-8caa-7f5889cbdf6c req-58b5594e-4805-4be3-bfa8-fc425e955a08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] No waiting events found dispatching network-vif-unplugged-c07af0d2-0bbd-43c9-b243-60698370a2d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.589 2 WARNING nova.compute.manager [req-a363a6d0-0f56-4181-8caa-7f5889cbdf6c req-58b5594e-4805-4be3-bfa8-fc425e955a08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Received unexpected event network-vif-unplugged-c07af0d2-0bbd-43c9-b243-60698370a2d5 for instance with vm_state deleted and task_state None.
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.589 2 DEBUG nova.compute.manager [req-a363a6d0-0f56-4181-8caa-7f5889cbdf6c req-58b5594e-4805-4be3-bfa8-fc425e955a08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Received event network-vif-plugged-c07af0d2-0bbd-43c9-b243-60698370a2d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.589 2 DEBUG oslo_concurrency.lockutils [req-a363a6d0-0f56-4181-8caa-7f5889cbdf6c req-58b5594e-4805-4be3-bfa8-fc425e955a08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.590 2 DEBUG oslo_concurrency.lockutils [req-a363a6d0-0f56-4181-8caa-7f5889cbdf6c req-58b5594e-4805-4be3-bfa8-fc425e955a08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.590 2 DEBUG oslo_concurrency.lockutils [req-a363a6d0-0f56-4181-8caa-7f5889cbdf6c req-58b5594e-4805-4be3-bfa8-fc425e955a08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5c74d1f2-66dc-474b-b669-445115cf6b1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.590 2 DEBUG nova.compute.manager [req-a363a6d0-0f56-4181-8caa-7f5889cbdf6c req-58b5594e-4805-4be3-bfa8-fc425e955a08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] No waiting events found dispatching network-vif-plugged-c07af0d2-0bbd-43c9-b243-60698370a2d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.591 2 WARNING nova.compute.manager [req-a363a6d0-0f56-4181-8caa-7f5889cbdf6c req-58b5594e-4805-4be3-bfa8-fc425e955a08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Received unexpected event network-vif-plugged-c07af0d2-0bbd-43c9-b243-60698370a2d5 for instance with vm_state deleted and task_state None.
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.815 2 DEBUG nova.virt.libvirt.driver [None req-116a3b37-1b59-4226-b2a5-21f7ac3496cb ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 08:58:11 compute-0 nova_compute[260935]: 2025-10-11 08:58:11.881 2 DEBUG oslo_concurrency.processutils [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1649: 321 pgs: 321 active+clean; 352 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 187 op/s
Oct 11 08:58:12 compute-0 nova_compute[260935]: 2025-10-11 08:58:12.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:58:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/953222184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:12 compute-0 nova_compute[260935]: 2025-10-11 08:58:12.323 2 DEBUG oslo_concurrency.processutils [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:12 compute-0 nova_compute[260935]: 2025-10-11 08:58:12.334 2 DEBUG nova.compute.provider_tree [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:58:12 compute-0 nova_compute[260935]: 2025-10-11 08:58:12.358 2 DEBUG nova.scheduler.client.report [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:58:12 compute-0 nova_compute[260935]: 2025-10-11 08:58:12.387 2 DEBUG oslo_concurrency.lockutils [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:12 compute-0 nova_compute[260935]: 2025-10-11 08:58:12.423 2 INFO nova.scheduler.client.report [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Deleted allocations for instance 5c74d1f2-66dc-474b-b669-445115cf6b1d
Oct 11 08:58:12 compute-0 nova_compute[260935]: 2025-10-11 08:58:12.503 2 DEBUG oslo_concurrency.lockutils [None req-a25c66f6-aa34-436c-9b5c-fa84b6a5e85e 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "5c74d1f2-66dc-474b-b669-445115cf6b1d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:12 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/953222184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:12 compute-0 nova_compute[260935]: 2025-10-11 08:58:12.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:58:12 compute-0 nova_compute[260935]: 2025-10-11 08:58:12.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:58:12 compute-0 nova_compute[260935]: 2025-10-11 08:58:12.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:58:12 compute-0 nova_compute[260935]: 2025-10-11 08:58:12.735 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:12 compute-0 nova_compute[260935]: 2025-10-11 08:58:12.735 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:12 compute-0 nova_compute[260935]: 2025-10-11 08:58:12.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:12 compute-0 nova_compute[260935]: 2025-10-11 08:58:12.736 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:58:12 compute-0 nova_compute[260935]: 2025-10-11 08:58:12.736 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:12 compute-0 podman[334962]: 2025-10-11 08:58:12.820733737 +0000 UTC m=+0.118306494 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 08:58:12 compute-0 podman[334963]: 2025-10-11 08:58:12.88816084 +0000 UTC m=+0.180863738 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 08:58:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:58:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1179931574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.298 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.369 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173078.2986257, 937f10b2-422f-42bc-b13e-5995310b461c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.370 2 INFO nova.compute.manager [-] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] VM Stopped (Lifecycle Event)
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.397 2 DEBUG nova.compute.manager [None req-e49b46e7-fa56-4ce6-8861-af72514c5ed6 - - - - - -] [instance: 937f10b2-422f-42bc-b13e-5995310b461c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.404 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.405 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.410 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.410 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.416 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.418 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:58:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:58:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Oct 11 08:58:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Oct 11 08:58:13 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:13 compute-0 ceph-mon[74313]: pgmap v1649: 321 pgs: 321 active+clean; 352 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 187 op/s
Oct 11 08:58:13 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1179931574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:13 compute-0 ceph-mon[74313]: osdmap e241: 3 total, 3 up, 3 in
Oct 11 08:58:13 compute-0 ovn_controller[152945]: 2025-10-11T08:58:13Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:0f:36 10.100.0.9
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.732 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.734 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3279MB free_disk=59.8353385925293GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.735 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.735 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.824 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b297454f-91af-4716-b4f7-6af9f0d7e62d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.825 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.825 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 98d8ebd6-0917-49cf-8efc-a245486424bc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.826 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.826 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.902 2 DEBUG oslo_concurrency.lockutils [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.902 2 DEBUG oslo_concurrency.lockutils [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.903 2 DEBUG oslo_concurrency.lockutils [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.903 2 DEBUG oslo_concurrency.lockutils [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.904 2 DEBUG oslo_concurrency.lockutils [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.906 2 INFO nova.compute.manager [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Terminating instance
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.907 2 DEBUG nova.compute.manager [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.943 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:13 compute-0 kernel: tap10e01bbb-0d (unregistering): left promiscuous mode
Oct 11 08:58:13 compute-0 NetworkManager[44960]: <info>  [1760173093.9655] device (tap10e01bbb-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:58:13 compute-0 ovn_controller[152945]: 2025-10-11T08:58:13Z|00665|binding|INFO|Releasing lport 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c from this chassis (sb_readonly=0)
Oct 11 08:58:13 compute-0 ovn_controller[152945]: 2025-10-11T08:58:13Z|00666|binding|INFO|Setting lport 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c down in Southbound
Oct 11 08:58:13 compute-0 ovn_controller[152945]: 2025-10-11T08:58:13Z|00667|binding|INFO|Removing iface tap10e01bbb-0d ovn-installed in OVS
Oct 11 08:58:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:13.989 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:0f:36 10.100.0.9'], port_security=['fa:16:3e:3f:0f:36 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '98d8ebd6-0917-49cf-8efc-a245486424bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73adfb8cf0c64359b1f33a9643148ef4', 'neutron:revision_number': '9', 'neutron:security_group_ids': '06a0521a-9ff7-49ed-a194-a12a4a8fd551', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.249', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cfa1fc9-121e-4e0a-bd08-716b82275316, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:58:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:13.991 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 10e01bbb-0d2c-4f76-8f81-c90e20d3e54c in datapath 056c6769-bc97-4ae9-9759-4cc2d984a31d unbound from our chassis
Oct 11 08:58:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:13.995 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 056c6769-bc97-4ae9-9759-4cc2d984a31d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:58:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:13.996 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f6c5003-2453-4f19-a147-42097e5b5493]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:13.998 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d namespace which is not needed anymore
Oct 11 08:58:13 compute-0 nova_compute[260935]: 2025-10-11 08:58:13.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:14 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Oct 11 08:58:14 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d0000003a.scope: Consumed 12.554s CPU time.
Oct 11 08:58:14 compute-0 systemd-machined[215705]: Machine qemu-84-instance-0000003a terminated.
Oct 11 08:58:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1651: 321 pgs: 321 active+clean; 279 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 306 op/s
Oct 11 08:58:14 compute-0 kernel: tap5fbde773-dc (unregistering): left promiscuous mode
Oct 11 08:58:14 compute-0 NetworkManager[44960]: <info>  [1760173094.1180] device (tap5fbde773-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:58:14 compute-0 ovn_controller[152945]: 2025-10-11T08:58:14Z|00668|binding|INFO|Releasing lport 5fbde773-dc50-43d9-ae02-a499b8096537 from this chassis (sb_readonly=0)
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:14 compute-0 ovn_controller[152945]: 2025-10-11T08:58:14Z|00669|binding|INFO|Setting lport 5fbde773-dc50-43d9-ae02-a499b8096537 down in Southbound
Oct 11 08:58:14 compute-0 ovn_controller[152945]: 2025-10-11T08:58:14Z|00670|binding|INFO|Removing iface tap5fbde773-dc ovn-installed in OVS
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.150 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:dc:a1 10.100.0.8'], port_security=['fa:16:3e:e5:dc:a1 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8f609d1a-7eaa-41dc-9f3b-9acf7abe4239', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '4', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=5fbde773-dc50-43d9-ae02-a499b8096537) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.162 2 INFO nova.virt.libvirt.driver [-] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Instance destroyed successfully.
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.164 2 DEBUG nova.objects.instance [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lazy-loading 'resources' on Instance uuid 98d8ebd6-0917-49cf-8efc-a245486424bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.181 2 DEBUG nova.virt.libvirt.vif [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:55:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-400269969',display_name='tempest-ServerActionsTestOtherB-server-400269969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-400269969',id=58,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeZ9ALPs3Oq9pCizSQrA3R5ihPZilMrzZLelKqG/iqyw53khzMkIgHWfcYBLUTF7AIKY9B5CPoZLdbz5r+wTvSZysVCKTRxhaIofirBhYLqgm1OKLLxBwrJrQAKSAriNg==',key_name='tempest-keypair-1264143465',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:58:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='73adfb8cf0c64359b1f33a9643148ef4',ramdisk_id='',reservation_id='r-bh4aj5da',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1445504716',owner_user_name='tempest-ServerActionsTestOtherB-1445504716-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:58:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d5f5f07c57c467286168be7c097bf26',uuid=98d8ebd6-0917-49cf-8efc-a245486424bc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.181 2 DEBUG nova.network.os_vif_util [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converting VIF {"id": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "address": "fa:16:3e:3f:0f:36", "network": {"id": "056c6769-bc97-4ae9-9759-4cc2d984a31d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2094705751-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73adfb8cf0c64359b1f33a9643148ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10e01bbb-0d", "ovs_interfaceid": "10e01bbb-0d2c-4f76-8f81-c90e20d3e54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.182 2 DEBUG nova.network.os_vif_util [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.182 2 DEBUG os_vif [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10e01bbb-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:14 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:58:14 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d0000004b.scope: Consumed 12.824s CPU time.
Oct 11 08:58:14 compute-0 systemd-machined[215705]: Machine qemu-83-instance-0000004b terminated.
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.200 2 INFO os_vif [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:0f:36,bridge_name='br-int',has_traffic_filtering=True,id=10e01bbb-0d2c-4f76-8f81-c90e20d3e54c,network=Network(056c6769-bc97-4ae9-9759-4cc2d984a31d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10e01bbb-0d')
Oct 11 08:58:14 compute-0 neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d[322672]: [NOTICE]   (322691) : haproxy version is 2.8.14-c23fe91
Oct 11 08:58:14 compute-0 neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d[322672]: [NOTICE]   (322691) : path to executable is /usr/sbin/haproxy
Oct 11 08:58:14 compute-0 neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d[322672]: [WARNING]  (322691) : Exiting Master process...
Oct 11 08:58:14 compute-0 neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d[322672]: [ALERT]    (322691) : Current worker (322696) exited with code 143 (Terminated)
Oct 11 08:58:14 compute-0 neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d[322672]: [WARNING]  (322691) : All workers exited. Exiting... (0)
Oct 11 08:58:14 compute-0 systemd[1]: libpod-ee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d.scope: Deactivated successfully.
Oct 11 08:58:14 compute-0 podman[335084]: 2025-10-11 08:58:14.245390471 +0000 UTC m=+0.072592261 container died ee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 08:58:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d-userdata-shm.mount: Deactivated successfully.
Oct 11 08:58:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-65f5d931826b904a40c4e8f3faa087cfd21b33725873fe141d1cea9369cc5ab3-merged.mount: Deactivated successfully.
Oct 11 08:58:14 compute-0 podman[335084]: 2025-10-11 08:58:14.330170459 +0000 UTC m=+0.157372259 container cleanup ee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:14 compute-0 systemd[1]: libpod-conmon-ee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d.scope: Deactivated successfully.
Oct 11 08:58:14 compute-0 podman[335141]: 2025-10-11 08:58:14.454993068 +0000 UTC m=+0.071199211 container remove ee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 08:58:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.466 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[835b9bd1-128f-4a26-aa40-71eb86114d7c]: (4, ('Sat Oct 11 08:58:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d (ee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d)\nee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d\nSat Oct 11 08:58:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d (ee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d)\nee71ccc73d4cb3e1954d0b370d20ab59432e766b0286a99e46c4f66196d6860d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2211999067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.468 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b3d604-7589-4b35-8441-2dd1c718915b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.470 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap056c6769-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:14 compute-0 kernel: tap056c6769-b0: left promiscuous mode
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.500 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.512 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.520 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[265f861f-a602-4944-b1ba-80724289330c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.531 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.539 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[54a3d7f4-0a75-400f-96f4-a9e2ce9616fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.540 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1b762ee4-ac22-4f10-bcc6-23e8d7ebb91e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.559 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.560 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.562 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c4b98724-a197-490e-87a8-5740cbf32bae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477019, 'reachable_time': 22333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335170, 'error': None, 'target': 'ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d056c6769\x2dbc97\x2d4ae9\x2d9759\x2d4cc2d984a31d.mount: Deactivated successfully.
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.565 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-056c6769-bc97-4ae9-9759-4cc2d984a31d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.566 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b667d57a-c816-457b-8a2e-46465c0120d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.568 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 5fbde773-dc50-43d9-ae02-a499b8096537 in datapath e075bdab-78c4-414f-b270-c41d1c82f498 unbound from our chassis
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.569 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e075bdab-78c4-414f-b270-c41d1c82f498
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.591 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[73e6cd02-9ebd-469b-89a3-6c222ffc4746]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:14 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2211999067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.645 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[78e5af90-28f4-4104-869a-ef10bc3c88c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.649 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d7cb04-dc63-4a00-b3a2-9b95af1b27c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.693 2 INFO nova.virt.libvirt.driver [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Deleting instance files /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc_del
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.694 2 INFO nova.virt.libvirt.driver [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Deletion of /var/lib/nova/instances/98d8ebd6-0917-49cf-8efc-a245486424bc_del complete
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.700 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1cf0dec1-78d8-4cac-b021-a326954f9ebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.729 2 DEBUG nova.compute.manager [req-9dac0f91-bbe0-4687-8c55-b458956ff2df req-a20c95f9-53ca-4052-8fa9-eb53cab31197 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-vif-unplugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.730 2 DEBUG oslo_concurrency.lockutils [req-9dac0f91-bbe0-4687-8c55-b458956ff2df req-a20c95f9-53ca-4052-8fa9-eb53cab31197 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.731 2 DEBUG oslo_concurrency.lockutils [req-9dac0f91-bbe0-4687-8c55-b458956ff2df req-a20c95f9-53ca-4052-8fa9-eb53cab31197 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.731 2 DEBUG oslo_concurrency.lockutils [req-9dac0f91-bbe0-4687-8c55-b458956ff2df req-a20c95f9-53ca-4052-8fa9-eb53cab31197 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.731 2 DEBUG nova.compute.manager [req-9dac0f91-bbe0-4687-8c55-b458956ff2df req-a20c95f9-53ca-4052-8fa9-eb53cab31197 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] No waiting events found dispatching network-vif-unplugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.732 2 DEBUG nova.compute.manager [req-9dac0f91-bbe0-4687-8c55-b458956ff2df req-a20c95f9-53ca-4052-8fa9-eb53cab31197 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-vif-unplugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.733 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[960d4dc3-5790-4358-ac52-a6eea2c3d793]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape075bdab-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:b9:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 31, 'rx_bytes': 1000, 'tx_bytes': 1446, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 31, 'rx_bytes': 1000, 'tx_bytes': 1446, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479394, 'reachable_time': 15419, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335176, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.756 2 INFO nova.compute.manager [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Took 0.85 seconds to destroy the instance on the hypervisor.
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.757 2 DEBUG oslo.service.loopingcall [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.757 2 DEBUG nova.compute.manager [-] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.758 2 DEBUG nova.network.neutron [-] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.763 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[485b9b08-19ea-4776-b4d1-19ae617fb300]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479414, 'tstamp': 479414}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335177, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape075bdab-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479419, 'tstamp': 479419}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335177, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.766 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.777 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape075bdab-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.778 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.779 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape075bdab-70, col_values=(('external_ids', {'iface-id': 'b9cf681c-9f4c-4c56-987a-55fa7aa89e1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:14.779 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.838 2 INFO nova.virt.libvirt.driver [None req-116a3b37-1b59-4226-b2a5-21f7ac3496cb ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Instance shutdown successfully after 13 seconds.
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.844 2 INFO nova.virt.libvirt.driver [-] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Instance destroyed successfully.
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.845 2 DEBUG nova.objects.instance [None req-116a3b37-1b59-4226-b2a5-21f7ac3496cb ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.857 2 DEBUG nova.compute.manager [None req-116a3b37-1b59-4226-b2a5-21f7ac3496cb ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:14 compute-0 nova_compute[260935]: 2025-10-11 08:58:14.906 2 DEBUG oslo_concurrency.lockutils [None req-116a3b37-1b59-4226-b2a5-21f7ac3496cb ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:15.193 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:15.193 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:15.194 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:15 compute-0 ceph-mon[74313]: pgmap v1651: 321 pgs: 321 active+clean; 279 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 306 op/s
Oct 11 08:58:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1652: 321 pgs: 321 active+clean; 279 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 507 KiB/s rd, 3.0 MiB/s wr, 153 op/s
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.473 2 DEBUG nova.network.neutron [-] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.496 2 INFO nova.compute.manager [-] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Took 1.74 seconds to deallocate network for instance.
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.556 2 DEBUG oslo_concurrency.lockutils [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.557 2 DEBUG oslo_concurrency.lockutils [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.561 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.562 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.596 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.604 2 DEBUG nova.compute.manager [req-2d5a691b-f730-4d60-ac2c-7354fab4811d req-f50bc25f-71e8-4f37-90e3-5a1fb5d31580 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-vif-deleted-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:16 compute-0 ceph-mon[74313]: pgmap v1652: 321 pgs: 321 active+clean; 279 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 507 KiB/s rd, 3.0 MiB/s wr, 153 op/s
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.684 2 DEBUG oslo_concurrency.processutils [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.856 2 DEBUG nova.compute.manager [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.857 2 DEBUG oslo_concurrency.lockutils [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.859 2 DEBUG oslo_concurrency.lockutils [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.860 2 DEBUG oslo_concurrency.lockutils [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.860 2 DEBUG nova.compute.manager [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] No waiting events found dispatching network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.861 2 WARNING nova.compute.manager [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Received unexpected event network-vif-plugged-10e01bbb-0d2c-4f76-8f81-c90e20d3e54c for instance with vm_state deleted and task_state None.
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.862 2 DEBUG nova.compute.manager [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Received event network-vif-unplugged-5fbde773-dc50-43d9-ae02-a499b8096537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.863 2 DEBUG oslo_concurrency.lockutils [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.863 2 DEBUG oslo_concurrency.lockutils [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.864 2 DEBUG oslo_concurrency.lockutils [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.864 2 DEBUG nova.compute.manager [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] No waiting events found dispatching network-vif-unplugged-5fbde773-dc50-43d9-ae02-a499b8096537 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.865 2 WARNING nova.compute.manager [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Received unexpected event network-vif-unplugged-5fbde773-dc50-43d9-ae02-a499b8096537 for instance with vm_state stopped and task_state None.
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.865 2 DEBUG nova.compute.manager [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Received event network-vif-plugged-5fbde773-dc50-43d9-ae02-a499b8096537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.866 2 DEBUG oslo_concurrency.lockutils [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.867 2 DEBUG oslo_concurrency.lockutils [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.867 2 DEBUG oslo_concurrency.lockutils [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.868 2 DEBUG nova.compute.manager [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] No waiting events found dispatching network-vif-plugged-5fbde773-dc50-43d9-ae02-a499b8096537 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:16 compute-0 nova_compute[260935]: 2025-10-11 08:58:16.869 2 WARNING nova.compute.manager [req-e3dee11b-c10b-418e-bc14-62ee3cb01ba7 req-17431301-c960-4b41-a8fe-1603444aa5f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Received unexpected event network-vif-plugged-5fbde773-dc50-43d9-ae02-a499b8096537 for instance with vm_state stopped and task_state None.
Oct 11 08:58:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:58:17 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/233616483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:17 compute-0 nova_compute[260935]: 2025-10-11 08:58:17.167 2 DEBUG oslo_concurrency.processutils [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:17 compute-0 nova_compute[260935]: 2025-10-11 08:58:17.175 2 DEBUG nova.compute.provider_tree [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:58:17 compute-0 nova_compute[260935]: 2025-10-11 08:58:17.202 2 DEBUG nova.scheduler.client.report [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:58:17 compute-0 nova_compute[260935]: 2025-10-11 08:58:17.247 2 DEBUG oslo_concurrency.lockutils [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:17 compute-0 nova_compute[260935]: 2025-10-11 08:58:17.276 2 INFO nova.scheduler.client.report [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Deleted allocations for instance 98d8ebd6-0917-49cf-8efc-a245486424bc
Oct 11 08:58:17 compute-0 nova_compute[260935]: 2025-10-11 08:58:17.355 2 DEBUG oslo_concurrency.lockutils [None req-d405db3b-bfab-4994-b670-79e16d792bf7 8d5f5f07c57c467286168be7c097bf26 73adfb8cf0c64359b1f33a9643148ef4 - - default default] Lock "98d8ebd6-0917-49cf-8efc-a245486424bc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:17 compute-0 nova_compute[260935]: 2025-10-11 08:58:17.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:17 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/233616483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.071 2 DEBUG oslo_concurrency.lockutils [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.072 2 DEBUG oslo_concurrency.lockutils [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.073 2 DEBUG oslo_concurrency.lockutils [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1653: 321 pgs: 321 active+clean; 200 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.6 MiB/s wr, 216 op/s
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.073 2 DEBUG oslo_concurrency.lockutils [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.074 2 DEBUG oslo_concurrency.lockutils [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.076 2 INFO nova.compute.manager [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Terminating instance
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.078 2 DEBUG nova.compute.manager [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.089 2 INFO nova.virt.libvirt.driver [-] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Instance destroyed successfully.
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.089 2 DEBUG nova.objects.instance [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'resources' on Instance uuid 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.109 2 DEBUG nova.virt.libvirt.vif [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:57:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-918260000',display_name='tempest-Íñstáñcé-2118755249',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-918260000',id=75,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:57:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-dps7r5jq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:58:16Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=8f609d1a-7eaa-41dc-9f3b-9acf7abe4239,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5fbde773-dc50-43d9-ae02-a499b8096537", "address": "fa:16:3e:e5:dc:a1", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fbde773-dc", "ovs_interfaceid": "5fbde773-dc50-43d9-ae02-a499b8096537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.110 2 DEBUG nova.network.os_vif_util [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "5fbde773-dc50-43d9-ae02-a499b8096537", "address": "fa:16:3e:e5:dc:a1", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fbde773-dc", "ovs_interfaceid": "5fbde773-dc50-43d9-ae02-a499b8096537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.112 2 DEBUG nova.network.os_vif_util [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:dc:a1,bridge_name='br-int',has_traffic_filtering=True,id=5fbde773-dc50-43d9-ae02-a499b8096537,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fbde773-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.113 2 DEBUG os_vif [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:dc:a1,bridge_name='br-int',has_traffic_filtering=True,id=5fbde773-dc50-43d9-ae02-a499b8096537,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fbde773-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.119 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fbde773-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.129 2 INFO os_vif [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:dc:a1,bridge_name='br-int',has_traffic_filtering=True,id=5fbde773-dc50-43d9-ae02-a499b8096537,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fbde773-dc')
Oct 11 08:58:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:58:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Oct 11 08:58:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Oct 11 08:58:18 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.547 2 INFO nova.virt.libvirt.driver [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Deleting instance files /var/lib/nova/instances/8f609d1a-7eaa-41dc-9f3b-9acf7abe4239_del
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.548 2 INFO nova.virt.libvirt.driver [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Deletion of /var/lib/nova/instances/8f609d1a-7eaa-41dc-9f3b-9acf7abe4239_del complete
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.633 2 INFO nova.compute.manager [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Took 0.55 seconds to destroy the instance on the hypervisor.
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.634 2 DEBUG oslo.service.loopingcall [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.635 2 DEBUG nova.compute.manager [-] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.635 2 DEBUG nova.network.neutron [-] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 08:58:18 compute-0 nova_compute[260935]: 2025-10-11 08:58:18.742 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 08:58:19 compute-0 nova_compute[260935]: 2025-10-11 08:58:19.402 2 DEBUG nova.network.neutron [-] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:58:19 compute-0 nova_compute[260935]: 2025-10-11 08:58:19.425 2 INFO nova.compute.manager [-] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Took 0.79 seconds to deallocate network for instance.
Oct 11 08:58:19 compute-0 ceph-mon[74313]: pgmap v1653: 321 pgs: 321 active+clean; 200 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.6 MiB/s wr, 216 op/s
Oct 11 08:58:19 compute-0 ceph-mon[74313]: osdmap e242: 3 total, 3 up, 3 in
Oct 11 08:58:19 compute-0 nova_compute[260935]: 2025-10-11 08:58:19.486 2 DEBUG oslo_concurrency.lockutils [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:19 compute-0 nova_compute[260935]: 2025-10-11 08:58:19.487 2 DEBUG oslo_concurrency.lockutils [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:19 compute-0 nova_compute[260935]: 2025-10-11 08:58:19.491 2 DEBUG nova.compute.manager [req-3e534871-70b2-400f-89ce-856425d99128 req-60a06a4b-d3e9-4748-b75f-7cfb2f72e741 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Received event network-vif-deleted-5fbde773-dc50-43d9-ae02-a499b8096537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:19 compute-0 nova_compute[260935]: 2025-10-11 08:58:19.574 2 DEBUG oslo_concurrency.processutils [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:58:20 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1267841657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1655: 321 pgs: 321 active+clean; 200 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.4 MiB/s wr, 213 op/s
Oct 11 08:58:20 compute-0 nova_compute[260935]: 2025-10-11 08:58:20.082 2 DEBUG oslo_concurrency.processutils [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:20 compute-0 nova_compute[260935]: 2025-10-11 08:58:20.090 2 DEBUG nova.compute.provider_tree [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:58:20 compute-0 nova_compute[260935]: 2025-10-11 08:58:20.110 2 DEBUG nova.scheduler.client.report [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:58:20 compute-0 nova_compute[260935]: 2025-10-11 08:58:20.144 2 DEBUG oslo_concurrency.lockutils [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:20 compute-0 nova_compute[260935]: 2025-10-11 08:58:20.187 2 INFO nova.scheduler.client.report [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Deleted allocations for instance 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239
Oct 11 08:58:20 compute-0 nova_compute[260935]: 2025-10-11 08:58:20.257 2 DEBUG oslo_concurrency.lockutils [None req-fbf91486-6fd1-4dff-8468-ad75ad6914e5 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "8f609d1a-7eaa-41dc-9f3b-9acf7abe4239" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:20 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1267841657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:20 compute-0 nova_compute[260935]: 2025-10-11 08:58:20.894 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Acquiring lock "6208d031-a043-4761-ba4d-f3564e22723d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:20 compute-0 nova_compute[260935]: 2025-10-11 08:58:20.894 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lock "6208d031-a043-4761-ba4d-f3564e22723d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:20 compute-0 nova_compute[260935]: 2025-10-11 08:58:20.915 2 DEBUG nova.compute.manager [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:58:21 compute-0 nova_compute[260935]: 2025-10-11 08:58:21.006 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:21 compute-0 nova_compute[260935]: 2025-10-11 08:58:21.007 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:21 compute-0 nova_compute[260935]: 2025-10-11 08:58:21.016 2 DEBUG nova.virt.hardware [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:58:21 compute-0 nova_compute[260935]: 2025-10-11 08:58:21.017 2 INFO nova.compute.claims [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:58:21 compute-0 nova_compute[260935]: 2025-10-11 08:58:21.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:21 compute-0 nova_compute[260935]: 2025-10-11 08:58:21.189 2 DEBUG oslo_concurrency.processutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:21 compute-0 ceph-mon[74313]: pgmap v1655: 321 pgs: 321 active+clean; 200 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.4 MiB/s wr, 213 op/s
Oct 11 08:58:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:58:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2746905996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:21 compute-0 nova_compute[260935]: 2025-10-11 08:58:21.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:58:21 compute-0 nova_compute[260935]: 2025-10-11 08:58:21.722 2 DEBUG oslo_concurrency.processutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:21 compute-0 nova_compute[260935]: 2025-10-11 08:58:21.730 2 DEBUG nova.compute.provider_tree [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:58:21 compute-0 nova_compute[260935]: 2025-10-11 08:58:21.753 2 DEBUG nova.scheduler.client.report [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:58:21 compute-0 nova_compute[260935]: 2025-10-11 08:58:21.789 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:21 compute-0 nova_compute[260935]: 2025-10-11 08:58:21.789 2 DEBUG nova.compute.manager [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:58:21 compute-0 nova_compute[260935]: 2025-10-11 08:58:21.860 2 DEBUG nova.compute.manager [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:58:21 compute-0 nova_compute[260935]: 2025-10-11 08:58:21.861 2 DEBUG nova.network.neutron [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:58:21 compute-0 nova_compute[260935]: 2025-10-11 08:58:21.905 2 INFO nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:58:21 compute-0 nova_compute[260935]: 2025-10-11 08:58:21.947 2 DEBUG nova.compute.manager [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.060 2 DEBUG nova.policy [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7195b797fdf14f41b559c905c46027f8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1570574742514675b82edd1079304340', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:58:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1656: 321 pgs: 321 active+clean; 167 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 770 KiB/s rd, 19 KiB/s wr, 117 op/s
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.097 2 DEBUG nova.compute.manager [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.099 2 DEBUG nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.099 2 INFO nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Creating image(s)
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.134 2 DEBUG nova.storage.rbd_utils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] rbd image 6208d031-a043-4761-ba4d-f3564e22723d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.171 2 DEBUG nova.storage.rbd_utils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] rbd image 6208d031-a043-4761-ba4d-f3564e22723d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.206 2 DEBUG nova.storage.rbd_utils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] rbd image 6208d031-a043-4761-ba4d-f3564e22723d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.211 2 DEBUG oslo_concurrency.processutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.314 2 DEBUG oslo_concurrency.processutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.315 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.316 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.317 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.343 2 DEBUG nova.storage.rbd_utils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] rbd image 6208d031-a043-4761-ba4d-f3564e22723d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.348 2 DEBUG oslo_concurrency.processutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 6208d031-a043-4761-ba4d-f3564e22723d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:22 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2746905996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.691 2 DEBUG oslo_concurrency.processutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 6208d031-a043-4761-ba4d-f3564e22723d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.805 2 DEBUG oslo_concurrency.lockutils [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "b297454f-91af-4716-b4f7-6af9f0d7e62d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.806 2 DEBUG oslo_concurrency.lockutils [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.807 2 DEBUG oslo_concurrency.lockutils [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.808 2 DEBUG oslo_concurrency.lockutils [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.809 2 DEBUG oslo_concurrency.lockutils [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.815 2 INFO nova.compute.manager [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Terminating instance
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.817 2 DEBUG nova.compute.manager [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.828 2 DEBUG nova.storage.rbd_utils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] resizing rbd image 6208d031-a043-4761-ba4d-f3564e22723d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:58:22 compute-0 kernel: tap7e1e0a2e-11 (unregistering): left promiscuous mode
Oct 11 08:58:22 compute-0 NetworkManager[44960]: <info>  [1760173102.9471] device (tap7e1e0a2e-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:58:22 compute-0 ovn_controller[152945]: 2025-10-11T08:58:22Z|00671|binding|INFO|Releasing lport 7e1e0a2e-111a-44a6-8191-45964dbea604 from this chassis (sb_readonly=0)
Oct 11 08:58:22 compute-0 ovn_controller[152945]: 2025-10-11T08:58:22Z|00672|binding|INFO|Setting lport 7e1e0a2e-111a-44a6-8191-45964dbea604 down in Southbound
Oct 11 08:58:22 compute-0 ovn_controller[152945]: 2025-10-11T08:58:22Z|00673|binding|INFO|Removing iface tap7e1e0a2e-11 ovn-installed in OVS
Oct 11 08:58:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:22.976 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:a4:99 10.100.0.5'], port_security=['fa:16:3e:45:a4:99 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b297454f-91af-4716-b4f7-6af9f0d7e62d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e075bdab-78c4-414f-b270-c41d1c82f498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9864fda4f8641d8a9c1509c426cc206', 'neutron:revision_number': '4', 'neutron:security_group_ids': '708d19b6-1edc-4b98-ad73-f234668f1633', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b07dbe7-131d-4b4e-9a25-dea5d7b28985, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=7e1e0a2e-111a-44a6-8191-45964dbea604) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:58:22 compute-0 nova_compute[260935]: 2025-10-11 08:58:22.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:22.982 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 7e1e0a2e-111a-44a6-8191-45964dbea604 in datapath e075bdab-78c4-414f-b270-c41d1c82f498 unbound from our chassis
Oct 11 08:58:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:22.987 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e075bdab-78c4-414f-b270-c41d1c82f498, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:58:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:22.991 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eb635cff-8fb6-426e-87a1-7704580e7b73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:22.992 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498 namespace which is not needed anymore
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.001 2 DEBUG nova.objects.instance [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lazy-loading 'migration_context' on Instance uuid 6208d031-a043-4761-ba4d-f3564e22723d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:23 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Oct 11 08:58:23 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003d.scope: Consumed 19.013s CPU time.
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.019 2 DEBUG nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.019 2 DEBUG nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Ensure instance console log exists: /var/lib/nova/instances/6208d031-a043-4761-ba4d-f3564e22723d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.020 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.020 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.021 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:23 compute-0 systemd-machined[215705]: Machine qemu-68-instance-0000003d terminated.
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.125 2 DEBUG nova.network.neutron [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Successfully created port: 6c74faab-60f6-485c-b31f-ea324cc34f9c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.130 2 INFO nova.virt.libvirt.driver [-] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Instance destroyed successfully.
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.130 2 DEBUG nova.objects.instance [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lazy-loading 'resources' on Instance uuid b297454f-91af-4716-b4f7-6af9f0d7e62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.152 2 DEBUG nova.virt.libvirt.vif [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:55:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-268621522',display_name='tempest-₡-268621522',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--268621522',id=61,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:55:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9864fda4f8641d8a9c1509c426cc206',ramdisk_id='',reservation_id='r-w76lxkw4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-101172647',owner_user_name='tempest-ServersTestJSON-101172647-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:55:50Z,user_data=None,user_id='ee0e5fedb9fc464eb2a9ac362f5e0749',uuid=b297454f-91af-4716-b4f7-6af9f0d7e62d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.152 2 DEBUG nova.network.os_vif_util [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converting VIF {"id": "7e1e0a2e-111a-44a6-8191-45964dbea604", "address": "fa:16:3e:45:a4:99", "network": {"id": "e075bdab-78c4-414f-b270-c41d1c82f498", "bridge": "br-int", "label": "tempest-ServersTestJSON-1401783070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9864fda4f8641d8a9c1509c426cc206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e1e0a2e-11", "ovs_interfaceid": "7e1e0a2e-111a-44a6-8191-45964dbea604", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.153 2 DEBUG nova.network.os_vif_util [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:a4:99,bridge_name='br-int',has_traffic_filtering=True,id=7e1e0a2e-111a-44a6-8191-45964dbea604,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e1e0a2e-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.154 2 DEBUG os_vif [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:a4:99,bridge_name='br-int',has_traffic_filtering=True,id=7e1e0a2e-111a-44a6-8191-45964dbea604,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e1e0a2e-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.156 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e1e0a2e-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.165 2 INFO os_vif [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:a4:99,bridge_name='br-int',has_traffic_filtering=True,id=7e1e0a2e-111a-44a6-8191-45964dbea604,network=Network(e075bdab-78c4-414f-b270-c41d1c82f498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e1e0a2e-11')
Oct 11 08:58:23 compute-0 neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498[324426]: [NOTICE]   (324430) : haproxy version is 2.8.14-c23fe91
Oct 11 08:58:23 compute-0 neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498[324426]: [NOTICE]   (324430) : path to executable is /usr/sbin/haproxy
Oct 11 08:58:23 compute-0 neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498[324426]: [WARNING]  (324430) : Exiting Master process...
Oct 11 08:58:23 compute-0 neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498[324426]: [WARNING]  (324430) : Exiting Master process...
Oct 11 08:58:23 compute-0 neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498[324426]: [ALERT]    (324430) : Current worker (324432) exited with code 143 (Terminated)
Oct 11 08:58:23 compute-0 neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498[324426]: [WARNING]  (324430) : All workers exited. Exiting... (0)
Oct 11 08:58:23 compute-0 systemd[1]: libpod-7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b.scope: Deactivated successfully.
Oct 11 08:58:23 compute-0 podman[335459]: 2025-10-11 08:58:23.240487791 +0000 UTC m=+0.091394327 container died 7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 08:58:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b-userdata-shm.mount: Deactivated successfully.
Oct 11 08:58:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf198163e725340e667f14571b9dc05439b8f7c35370125da99319b5667780ac-merged.mount: Deactivated successfully.
Oct 11 08:58:23 compute-0 podman[335459]: 2025-10-11 08:58:23.295139289 +0000 UTC m=+0.146045815 container cleanup 7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:58:23 compute-0 systemd[1]: libpod-conmon-7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b.scope: Deactivated successfully.
Oct 11 08:58:23 compute-0 podman[335506]: 2025-10-11 08:58:23.409677535 +0000 UTC m=+0.066571289 container remove 7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 08:58:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:23.421 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea17bf8-add3-4936-b06e-266478e81ff5]: (4, ('Sat Oct 11 08:58:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498 (7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b)\n7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b\nSat Oct 11 08:58:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498 (7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b)\n7365a64d78a5203cb6a4f7973d3225d3cf0f9d03377cb6f3325f2e530419040b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:23.426 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[efbb1dcc-746e-4035-886f-cd45e8fca8b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:23.427 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape075bdab-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:23 compute-0 kernel: tape075bdab-70: left promiscuous mode
Oct 11 08:58:23 compute-0 ceph-mon[74313]: pgmap v1656: 321 pgs: 321 active+clean; 167 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 770 KiB/s rd, 19 KiB/s wr, 117 op/s
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:23.482 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[452587c0-9a9d-4230-9ccc-aed75b35e2c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:23.513 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1eb0be-4c17-497f-8d86-233666254b2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:23.515 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8daf10d9-947e-435c-b2fc-23271c0dce93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:23.538 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6d51d1dd-40e4-4cc7-954e-25eeaff1ca74]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479384, 'reachable_time': 22264, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335522, 'error': None, 'target': 'ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:23 compute-0 systemd[1]: run-netns-ovnmeta\x2de075bdab\x2d78c4\x2d414f\x2db270\x2dc41d1c82f498.mount: Deactivated successfully.
Oct 11 08:58:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:23.545 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e075bdab-78c4-414f-b270-c41d1c82f498 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:58:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:23.545 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[754fa73b-9d08-4760-a9c1-b7c628e86a4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.686 2 INFO nova.virt.libvirt.driver [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Deleting instance files /var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d_del
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.688 2 INFO nova.virt.libvirt.driver [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Deletion of /var/lib/nova/instances/b297454f-91af-4716-b4f7-6af9f0d7e62d_del complete
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.767 2 INFO nova.compute.manager [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Took 0.95 seconds to destroy the instance on the hypervisor.
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.768 2 DEBUG oslo.service.loopingcall [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.768 2 DEBUG nova.compute.manager [-] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:58:23 compute-0 nova_compute[260935]: 2025-10-11 08:58:23.769 2 DEBUG nova.network.neutron [-] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:58:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1657: 321 pgs: 321 active+clean; 121 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 18 KiB/s wr, 119 op/s
Oct 11 08:58:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:58:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:58:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:58:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:58:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:58:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.084 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173090.082208, 5c74d1f2-66dc-474b-b669-445115cf6b1d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.084 2 INFO nova.compute.manager [-] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] VM Stopped (Lifecycle Event)
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.087 2 DEBUG nova.network.neutron [-] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.133 2 DEBUG nova.compute.manager [None req-30a5c471-d171-4202-8f48-18375e72873b - - - - - -] [instance: 5c74d1f2-66dc-474b-b669-445115cf6b1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.134 2 INFO nova.compute.manager [-] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Took 1.37 seconds to deallocate network for instance.
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.186 2 DEBUG oslo_concurrency.lockutils [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.187 2 DEBUG oslo_concurrency.lockutils [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.264 2 DEBUG oslo_concurrency.processutils [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.340 2 DEBUG nova.compute.manager [req-f6c0455e-a19a-48a7-80c1-730473aab52e req-acfcf181-9459-4ddf-9091-5687d54dabcd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Received event network-vif-unplugged-7e1e0a2e-111a-44a6-8191-45964dbea604 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.341 2 DEBUG oslo_concurrency.lockutils [req-f6c0455e-a19a-48a7-80c1-730473aab52e req-acfcf181-9459-4ddf-9091-5687d54dabcd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.342 2 DEBUG oslo_concurrency.lockutils [req-f6c0455e-a19a-48a7-80c1-730473aab52e req-acfcf181-9459-4ddf-9091-5687d54dabcd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.342 2 DEBUG oslo_concurrency.lockutils [req-f6c0455e-a19a-48a7-80c1-730473aab52e req-acfcf181-9459-4ddf-9091-5687d54dabcd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.343 2 DEBUG nova.compute.manager [req-f6c0455e-a19a-48a7-80c1-730473aab52e req-acfcf181-9459-4ddf-9091-5687d54dabcd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] No waiting events found dispatching network-vif-unplugged-7e1e0a2e-111a-44a6-8191-45964dbea604 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.343 2 WARNING nova.compute.manager [req-f6c0455e-a19a-48a7-80c1-730473aab52e req-acfcf181-9459-4ddf-9091-5687d54dabcd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Received unexpected event network-vif-unplugged-7e1e0a2e-111a-44a6-8191-45964dbea604 for instance with vm_state deleted and task_state None.
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.432 2 DEBUG nova.compute.manager [req-1c5bbae8-9a4c-4617-a51a-2fb48ce1d1d0 req-939142e1-e0f1-4b31-bd82-5a7ecc476753 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Received event network-vif-deleted-7e1e0a2e-111a-44a6-8191-45964dbea604 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:25 compute-0 ceph-mon[74313]: pgmap v1657: 321 pgs: 321 active+clean; 121 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 18 KiB/s wr, 119 op/s
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.607 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Acquiring lock "b7a3385b-c043-4cc1-963e-3421188270ac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.607 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lock "b7a3385b-c043-4cc1-963e-3421188270ac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.632 2 DEBUG nova.compute.manager [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.715 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:58:25 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1670780642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.755 2 DEBUG oslo_concurrency.processutils [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.766 2 DEBUG nova.compute.provider_tree [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.783 2 DEBUG nova.scheduler.client.report [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.805 2 DEBUG oslo_concurrency.lockutils [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.812 2 DEBUG nova.network.neutron [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Successfully updated port: 6c74faab-60f6-485c-b31f-ea324cc34f9c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.815 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.825 2 DEBUG nova.virt.hardware [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.825 2 INFO nova.compute.claims [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.833 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Acquiring lock "refresh_cache-6208d031-a043-4761-ba4d-f3564e22723d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.834 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Acquired lock "refresh_cache-6208d031-a043-4761-ba4d-f3564e22723d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.835 2 DEBUG nova.network.neutron [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.839 2 INFO nova.scheduler.client.report [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Deleted allocations for instance b297454f-91af-4716-b4f7-6af9f0d7e62d
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.939 2 DEBUG oslo_concurrency.lockutils [None req-b638e826-a118-48ac-83d6-ea0d7f501c74 ee0e5fedb9fc464eb2a9ac362f5e0749 d9864fda4f8641d8a9c1509c426cc206 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:25 compute-0 nova_compute[260935]: 2025-10-11 08:58:25.973 2 DEBUG oslo_concurrency.processutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.042 2 DEBUG nova.network.neutron [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:58:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1658: 321 pgs: 321 active+clean; 121 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 18 KiB/s wr, 119 op/s
Oct 11 08:58:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:58:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1148720577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.440 2 DEBUG oslo_concurrency.processutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.449 2 DEBUG nova.compute.provider_tree [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.480 2 DEBUG nova.scheduler.client.report [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:58:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1670780642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1148720577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.517 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.518 2 DEBUG nova.compute.manager [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.575 2 DEBUG nova.compute.manager [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.576 2 DEBUG nova.network.neutron [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.600 2 INFO nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.625 2 DEBUG nova.compute.manager [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.728 2 DEBUG nova.compute.manager [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.731 2 DEBUG nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.732 2 INFO nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Creating image(s)
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.766 2 DEBUG nova.storage.rbd_utils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] rbd image b7a3385b-c043-4cc1-963e-3421188270ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.801 2 DEBUG nova.storage.rbd_utils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] rbd image b7a3385b-c043-4cc1-963e-3421188270ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.834 2 DEBUG nova.storage.rbd_utils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] rbd image b7a3385b-c043-4cc1-963e-3421188270ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.840 2 DEBUG oslo_concurrency.processutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.950 2 DEBUG oslo_concurrency.processutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.951 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.952 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.953 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.985 2 DEBUG nova.storage.rbd_utils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] rbd image b7a3385b-c043-4cc1-963e-3421188270ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:26 compute-0 nova_compute[260935]: 2025-10-11 08:58:26.990 2 DEBUG oslo_concurrency.processutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b7a3385b-c043-4cc1-963e-3421188270ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.051 2 DEBUG nova.policy [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '24dea95ec8d047639f1121e457f54149', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9bdaad8dfb9419b870adbb9b7034af4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.104 2 DEBUG nova.network.neutron [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Updating instance_info_cache with network_info: [{"id": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "address": "fa:16:3e:b4:8a:29", "network": {"id": "c4ceb461-f642-4cfa-bc70-91212708aac4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2068762966-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1570574742514675b82edd1079304340", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c74faab-60", "ovs_interfaceid": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.163 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Releasing lock "refresh_cache-6208d031-a043-4761-ba4d-f3564e22723d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.164 2 DEBUG nova.compute.manager [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Instance network_info: |[{"id": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "address": "fa:16:3e:b4:8a:29", "network": {"id": "c4ceb461-f642-4cfa-bc70-91212708aac4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2068762966-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1570574742514675b82edd1079304340", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c74faab-60", "ovs_interfaceid": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.166 2 DEBUG nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Start _get_guest_xml network_info=[{"id": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "address": "fa:16:3e:b4:8a:29", "network": {"id": "c4ceb461-f642-4cfa-bc70-91212708aac4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2068762966-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1570574742514675b82edd1079304340", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c74faab-60", "ovs_interfaceid": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.172 2 WARNING nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.179 2 DEBUG nova.virt.libvirt.host [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.180 2 DEBUG nova.virt.libvirt.host [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.186 2 DEBUG nova.virt.libvirt.host [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.187 2 DEBUG nova.virt.libvirt.host [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.188 2 DEBUG nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.188 2 DEBUG nova.virt.hardware [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.188 2 DEBUG nova.virt.hardware [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.188 2 DEBUG nova.virt.hardware [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.189 2 DEBUG nova.virt.hardware [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.189 2 DEBUG nova.virt.hardware [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.189 2 DEBUG nova.virt.hardware [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.189 2 DEBUG nova.virt.hardware [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.190 2 DEBUG nova.virt.hardware [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.190 2 DEBUG nova.virt.hardware [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.190 2 DEBUG nova.virt.hardware [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.190 2 DEBUG nova.virt.hardware [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.193 2 DEBUG oslo_concurrency.processutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.336 2 DEBUG oslo_concurrency.processutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b7a3385b-c043-4cc1-963e-3421188270ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.432 2 DEBUG nova.storage.rbd_utils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] resizing rbd image b7a3385b-c043-4cc1-963e-3421188270ac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:58:27 compute-0 ceph-mon[74313]: pgmap v1658: 321 pgs: 321 active+clean; 121 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 18 KiB/s wr, 119 op/s
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.562 2 DEBUG nova.objects.instance [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lazy-loading 'migration_context' on Instance uuid b7a3385b-c043-4cc1-963e-3421188270ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.581 2 DEBUG nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.582 2 DEBUG nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Ensure instance console log exists: /var/lib/nova/instances/b7a3385b-c043-4cc1-963e-3421188270ac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.582 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.583 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.583 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.587 2 DEBUG nova.compute.manager [req-24d24f88-0746-46fa-b4be-b78745a0fcf7 req-869749ef-02ff-4e71-b6be-46e8b2a5ff6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Received event network-vif-plugged-7e1e0a2e-111a-44a6-8191-45964dbea604 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.588 2 DEBUG oslo_concurrency.lockutils [req-24d24f88-0746-46fa-b4be-b78745a0fcf7 req-869749ef-02ff-4e71-b6be-46e8b2a5ff6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.588 2 DEBUG oslo_concurrency.lockutils [req-24d24f88-0746-46fa-b4be-b78745a0fcf7 req-869749ef-02ff-4e71-b6be-46e8b2a5ff6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.588 2 DEBUG oslo_concurrency.lockutils [req-24d24f88-0746-46fa-b4be-b78745a0fcf7 req-869749ef-02ff-4e71-b6be-46e8b2a5ff6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b297454f-91af-4716-b4f7-6af9f0d7e62d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.589 2 DEBUG nova.compute.manager [req-24d24f88-0746-46fa-b4be-b78745a0fcf7 req-869749ef-02ff-4e71-b6be-46e8b2a5ff6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] No waiting events found dispatching network-vif-plugged-7e1e0a2e-111a-44a6-8191-45964dbea604 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.589 2 WARNING nova.compute.manager [req-24d24f88-0746-46fa-b4be-b78745a0fcf7 req-869749ef-02ff-4e71-b6be-46e8b2a5ff6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Received unexpected event network-vif-plugged-7e1e0a2e-111a-44a6-8191-45964dbea604 for instance with vm_state deleted and task_state None.
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.589 2 DEBUG nova.compute.manager [req-24d24f88-0746-46fa-b4be-b78745a0fcf7 req-869749ef-02ff-4e71-b6be-46e8b2a5ff6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Received event network-changed-6c74faab-60f6-485c-b31f-ea324cc34f9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.589 2 DEBUG nova.compute.manager [req-24d24f88-0746-46fa-b4be-b78745a0fcf7 req-869749ef-02ff-4e71-b6be-46e8b2a5ff6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Refreshing instance network info cache due to event network-changed-6c74faab-60f6-485c-b31f-ea324cc34f9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.590 2 DEBUG oslo_concurrency.lockutils [req-24d24f88-0746-46fa-b4be-b78745a0fcf7 req-869749ef-02ff-4e71-b6be-46e8b2a5ff6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-6208d031-a043-4761-ba4d-f3564e22723d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.590 2 DEBUG oslo_concurrency.lockutils [req-24d24f88-0746-46fa-b4be-b78745a0fcf7 req-869749ef-02ff-4e71-b6be-46e8b2a5ff6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-6208d031-a043-4761-ba4d-f3564e22723d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.590 2 DEBUG nova.network.neutron [req-24d24f88-0746-46fa-b4be-b78745a0fcf7 req-869749ef-02ff-4e71-b6be-46e8b2a5ff6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Refreshing network info cache for port 6c74faab-60f6-485c-b31f-ea324cc34f9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:58:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:58:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/144093183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.710 2 DEBUG oslo_concurrency.processutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.740 2 DEBUG nova.storage.rbd_utils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] rbd image 6208d031-a043-4761-ba4d-f3564e22723d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:27 compute-0 nova_compute[260935]: 2025-10-11 08:58:27.745 2 DEBUG oslo_concurrency.processutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1659: 321 pgs: 321 active+clean; 98 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 2.3 MiB/s wr, 102 op/s
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:58:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4166802508' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.239 2 DEBUG oslo_concurrency.processutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.242 2 DEBUG nova.virt.libvirt.vif [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:58:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-470334281',display_name='tempest-InstanceActionsNegativeTestJSON-server-470334281',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-470334281',id=76,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1570574742514675b82edd1079304340',ramdisk_id='',reservation_id='r-xbr7uzg0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1759188145',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1759188145-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:58:22Z,user_data=None,user_id='7195b797fdf14f41b559c905c46027f8',uuid=6208d031-a043-4761-ba4d-f3564e22723d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "address": "fa:16:3e:b4:8a:29", "network": {"id": "c4ceb461-f642-4cfa-bc70-91212708aac4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2068762966-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1570574742514675b82edd1079304340", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c74faab-60", "ovs_interfaceid": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.243 2 DEBUG nova.network.os_vif_util [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Converting VIF {"id": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "address": "fa:16:3e:b4:8a:29", "network": {"id": "c4ceb461-f642-4cfa-bc70-91212708aac4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2068762966-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1570574742514675b82edd1079304340", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c74faab-60", "ovs_interfaceid": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.245 2 DEBUG nova.network.os_vif_util [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:8a:29,bridge_name='br-int',has_traffic_filtering=True,id=6c74faab-60f6-485c-b31f-ea324cc34f9c,network=Network(c4ceb461-f642-4cfa-bc70-91212708aac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c74faab-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.248 2 DEBUG nova.objects.instance [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6208d031-a043-4761-ba4d-f3564e22723d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.268 2 DEBUG nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:58:28 compute-0 nova_compute[260935]:   <uuid>6208d031-a043-4761-ba4d-f3564e22723d</uuid>
Oct 11 08:58:28 compute-0 nova_compute[260935]:   <name>instance-0000004c</name>
Oct 11 08:58:28 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:58:28 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:58:28 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <nova:name>tempest-InstanceActionsNegativeTestJSON-server-470334281</nova:name>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:58:27</nova:creationTime>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:58:28 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:58:28 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:58:28 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:58:28 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:58:28 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:58:28 compute-0 nova_compute[260935]:         <nova:user uuid="7195b797fdf14f41b559c905c46027f8">tempest-InstanceActionsNegativeTestJSON-1759188145-project-member</nova:user>
Oct 11 08:58:28 compute-0 nova_compute[260935]:         <nova:project uuid="1570574742514675b82edd1079304340">tempest-InstanceActionsNegativeTestJSON-1759188145</nova:project>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:58:28 compute-0 nova_compute[260935]:         <nova:port uuid="6c74faab-60f6-485c-b31f-ea324cc34f9c">
Oct 11 08:58:28 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:58:28 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:58:28 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <system>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <entry name="serial">6208d031-a043-4761-ba4d-f3564e22723d</entry>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <entry name="uuid">6208d031-a043-4761-ba4d-f3564e22723d</entry>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     </system>
Oct 11 08:58:28 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:58:28 compute-0 nova_compute[260935]:   <os>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:   </os>
Oct 11 08:58:28 compute-0 nova_compute[260935]:   <features>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:   </features>
Oct 11 08:58:28 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:58:28 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:58:28 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/6208d031-a043-4761-ba4d-f3564e22723d_disk">
Oct 11 08:58:28 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       </source>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:58:28 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/6208d031-a043-4761-ba4d-f3564e22723d_disk.config">
Oct 11 08:58:28 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       </source>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:58:28 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:b4:8a:29"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <target dev="tap6c74faab-60"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/6208d031-a043-4761-ba4d-f3564e22723d/console.log" append="off"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <video>
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     </video>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:58:28 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:58:28 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:58:28 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:58:28 compute-0 nova_compute[260935]: </domain>
Oct 11 08:58:28 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.271 2 DEBUG nova.compute.manager [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Preparing to wait for external event network-vif-plugged-6c74faab-60f6-485c-b31f-ea324cc34f9c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.272 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Acquiring lock "6208d031-a043-4761-ba4d-f3564e22723d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.272 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lock "6208d031-a043-4761-ba4d-f3564e22723d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.273 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lock "6208d031-a043-4761-ba4d-f3564e22723d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.274 2 DEBUG nova.virt.libvirt.vif [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:58:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-470334281',display_name='tempest-InstanceActionsNegativeTestJSON-server-470334281',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-470334281',id=76,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1570574742514675b82edd1079304340',ramdisk_id='',reservation_id='r-xbr7uzg0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1759188145',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1759188145-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:58:22Z,user_data=None,user_id='7195b797fdf14f41b559c905c46027f8',uuid=6208d031-a043-4761-ba4d-f3564e22723d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "address": "fa:16:3e:b4:8a:29", "network": {"id": "c4ceb461-f642-4cfa-bc70-91212708aac4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2068762966-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1570574742514675b82edd1079304340", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c74faab-60", "ovs_interfaceid": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.275 2 DEBUG nova.network.os_vif_util [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Converting VIF {"id": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "address": "fa:16:3e:b4:8a:29", "network": {"id": "c4ceb461-f642-4cfa-bc70-91212708aac4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2068762966-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1570574742514675b82edd1079304340", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c74faab-60", "ovs_interfaceid": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.276 2 DEBUG nova.network.os_vif_util [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:8a:29,bridge_name='br-int',has_traffic_filtering=True,id=6c74faab-60f6-485c-b31f-ea324cc34f9c,network=Network(c4ceb461-f642-4cfa-bc70-91212708aac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c74faab-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.277 2 DEBUG os_vif [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:8a:29,bridge_name='br-int',has_traffic_filtering=True,id=6c74faab-60f6-485c-b31f-ea324cc34f9c,network=Network(c4ceb461-f642-4cfa-bc70-91212708aac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c74faab-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.279 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.279 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.285 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c74faab-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.286 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6c74faab-60, col_values=(('external_ids', {'iface-id': '6c74faab-60f6-485c-b31f-ea324cc34f9c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:8a:29', 'vm-uuid': '6208d031-a043-4761-ba4d-f3564e22723d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:28 compute-0 NetworkManager[44960]: <info>  [1760173108.2901] manager: (tap6c74faab-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.296 2 INFO os_vif [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:8a:29,bridge_name='br-int',has_traffic_filtering=True,id=6c74faab-60f6-485c-b31f-ea324cc34f9c,network=Network(c4ceb461-f642-4cfa-bc70-91212708aac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c74faab-60')
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.354 2 DEBUG nova.network.neutron [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Successfully created port: 2717cdba-7e23-4eb0-81a8-fa322def9612 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.370 2 DEBUG nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.371 2 DEBUG nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.371 2 DEBUG nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] No VIF found with MAC fa:16:3e:b4:8a:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.372 2 INFO nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Using config drive
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.406 2 DEBUG nova.storage.rbd_utils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] rbd image 6208d031-a043-4761-ba4d-f3564e22723d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:28 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/144093183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:58:28 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4166802508' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.938 2 INFO nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Creating config drive at /var/lib/nova/instances/6208d031-a043-4761-ba4d-f3564e22723d/disk.config
Oct 11 08:58:28 compute-0 nova_compute[260935]: 2025-10-11 08:58:28.948 2 DEBUG oslo_concurrency.processutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6208d031-a043-4761-ba4d-f3564e22723d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbnfchpxu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.110 2 DEBUG nova.network.neutron [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Successfully updated port: 2717cdba-7e23-4eb0-81a8-fa322def9612 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.125 2 DEBUG oslo_concurrency.processutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6208d031-a043-4761-ba4d-f3564e22723d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbnfchpxu" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.169 2 DEBUG nova.storage.rbd_utils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] rbd image 6208d031-a043-4761-ba4d-f3564e22723d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.175 2 DEBUG oslo_concurrency.processutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6208d031-a043-4761-ba4d-f3564e22723d/disk.config 6208d031-a043-4761-ba4d-f3564e22723d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.234 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173094.1590986, 98d8ebd6-0917-49cf-8efc-a245486424bc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.235 2 INFO nova.compute.manager [-] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] VM Stopped (Lifecycle Event)
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.241 2 DEBUG nova.compute.manager [req-3948dcaa-7634-443a-b2dc-62738c1ac0a0 req-a4f1d5ec-4348-40e4-aa7f-6c894f366a50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Received event network-changed-2717cdba-7e23-4eb0-81a8-fa322def9612 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.241 2 DEBUG nova.compute.manager [req-3948dcaa-7634-443a-b2dc-62738c1ac0a0 req-a4f1d5ec-4348-40e4-aa7f-6c894f366a50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Refreshing instance network info cache due to event network-changed-2717cdba-7e23-4eb0-81a8-fa322def9612. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.242 2 DEBUG oslo_concurrency.lockutils [req-3948dcaa-7634-443a-b2dc-62738c1ac0a0 req-a4f1d5ec-4348-40e4-aa7f-6c894f366a50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b7a3385b-c043-4cc1-963e-3421188270ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.242 2 DEBUG oslo_concurrency.lockutils [req-3948dcaa-7634-443a-b2dc-62738c1ac0a0 req-a4f1d5ec-4348-40e4-aa7f-6c894f366a50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b7a3385b-c043-4cc1-963e-3421188270ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.243 2 DEBUG nova.network.neutron [req-3948dcaa-7634-443a-b2dc-62738c1ac0a0 req-a4f1d5ec-4348-40e4-aa7f-6c894f366a50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Refreshing network info cache for port 2717cdba-7e23-4eb0-81a8-fa322def9612 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.245 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Acquiring lock "refresh_cache-b7a3385b-c043-4cc1-963e-3421188270ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.339 2 DEBUG nova.compute.manager [None req-d01e0e13-2e18-416f-ac35-ea006f932e2e - - - - - -] [instance: 98d8ebd6-0917-49cf-8efc-a245486424bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.368 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173094.366909, 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.368 2 INFO nova.compute.manager [-] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] VM Stopped (Lifecycle Event)
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.394 2 DEBUG nova.compute.manager [None req-37bfdd51-d44d-48bf-87f5-fc3360564b57 - - - - - -] [instance: 8f609d1a-7eaa-41dc-9f3b-9acf7abe4239] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.397 2 DEBUG oslo_concurrency.processutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6208d031-a043-4761-ba4d-f3564e22723d/disk.config 6208d031-a043-4761-ba4d-f3564e22723d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.398 2 INFO nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Deleting local config drive /var/lib/nova/instances/6208d031-a043-4761-ba4d-f3564e22723d/disk.config because it was imported into RBD.
Oct 11 08:58:29 compute-0 kernel: tap6c74faab-60: entered promiscuous mode
Oct 11 08:58:29 compute-0 NetworkManager[44960]: <info>  [1760173109.4990] manager: (tap6c74faab-60): new Tun device (/org/freedesktop/NetworkManager/Devices/299)
Oct 11 08:58:29 compute-0 ovn_controller[152945]: 2025-10-11T08:58:29Z|00674|binding|INFO|Claiming lport 6c74faab-60f6-485c-b31f-ea324cc34f9c for this chassis.
Oct 11 08:58:29 compute-0 ovn_controller[152945]: 2025-10-11T08:58:29Z|00675|binding|INFO|6c74faab-60f6-485c-b31f-ea324cc34f9c: Claiming fa:16:3e:b4:8a:29 10.100.0.14
Oct 11 08:58:29 compute-0 ceph-mon[74313]: pgmap v1659: 321 pgs: 321 active+clean; 98 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 2.3 MiB/s wr, 102 op/s
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.549 2 DEBUG nova.network.neutron [req-3948dcaa-7634-443a-b2dc-62738c1ac0a0 req-a4f1d5ec-4348-40e4-aa7f-6c894f366a50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.570 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:8a:29 10.100.0.14'], port_security=['fa:16:3e:b4:8a:29 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6208d031-a043-4761-ba4d-f3564e22723d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4ceb461-f642-4cfa-bc70-91212708aac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1570574742514675b82edd1079304340', 'neutron:revision_number': '2', 'neutron:security_group_ids': '38579ec6-1bc1-482c-9459-13b8a5bdaa01', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=033968fa-77cf-4651-a7a6-8d92fdc428f0, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=6c74faab-60f6-485c-b31f-ea324cc34f9c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.572 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 6c74faab-60f6-485c-b31f-ea324cc34f9c in datapath c4ceb461-f642-4cfa-bc70-91212708aac4 bound to our chassis
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.575 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4ceb461-f642-4cfa-bc70-91212708aac4
Oct 11 08:58:29 compute-0 systemd-udevd[335867]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.596 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9053a0e8-8b78-4c59-9444-deca49da6b7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.597 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc4ceb461-f1 in ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.600 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc4ceb461-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.600 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[401dcba0-9f5f-4826-a93e-fdf9576b3d9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:29 compute-0 NetworkManager[44960]: <info>  [1760173109.6021] device (tap6c74faab-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.602 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ce1e00-f076-4f7d-ba74-e00878739de1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:29 compute-0 NetworkManager[44960]: <info>  [1760173109.6046] device (tap6c74faab-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.618 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[fd3f653e-22e5-4d20-96a1-8daca7a6001d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:29 compute-0 systemd-machined[215705]: New machine qemu-85-instance-0000004c.
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.651 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ac575d54-eb41-4ced-8dd8-acc35d6a7159]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:29 compute-0 systemd[1]: Started Virtual Machine qemu-85-instance-0000004c.
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:29 compute-0 ovn_controller[152945]: 2025-10-11T08:58:29Z|00676|binding|INFO|Setting lport 6c74faab-60f6-485c-b31f-ea324cc34f9c ovn-installed in OVS
Oct 11 08:58:29 compute-0 ovn_controller[152945]: 2025-10-11T08:58:29Z|00677|binding|INFO|Setting lport 6c74faab-60f6-485c-b31f-ea324cc34f9c up in Southbound
Oct 11 08:58:29 compute-0 nova_compute[260935]: 2025-10-11 08:58:29.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.705 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bbab29ba-b383-473c-8ae7-36976aa5e5e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:29 compute-0 NetworkManager[44960]: <info>  [1760173109.7128] manager: (tapc4ceb461-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/300)
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.711 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1c2e9d7f-a152-44da-b064-54ebf30f4f45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.766 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d561e8a2-f63c-4685-badc-b5eead98f01a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.770 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2bfa0d-ebab-4109-aa61-b509a576f91d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:29 compute-0 NetworkManager[44960]: <info>  [1760173109.8029] device (tapc4ceb461-f0): carrier: link connected
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.813 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2f7035aa-192b-4087-afe1-6a03eaf390e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.847 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[763f82e3-5e2a-438c-9003-2435ddd54c1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4ceb461-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:7e:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495450, 'reachable_time': 30443, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335903, 'error': None, 'target': 'ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.873 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b096fef7-d821-44b5-a735-3c217bede542]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:7e09'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495450, 'tstamp': 495450}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335904, 'error': None, 'target': 'ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.907 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[999d025e-9720-4412-b908-edc9c6333c0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4ceb461-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:7e:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495450, 'reachable_time': 30443, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335905, 'error': None, 'target': 'ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:29.961 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5e06889d-7255-46d0-889d-f89d9bce2d46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:30.061 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5db7541e-dea9-4d53-9885-7da300904984]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:30.063 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4ceb461-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:30.064 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:30.065 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4ceb461-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:30 compute-0 NetworkManager[44960]: <info>  [1760173110.0684] manager: (tapc4ceb461-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Oct 11 08:58:30 compute-0 nova_compute[260935]: 2025-10-11 08:58:30.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:30 compute-0 kernel: tapc4ceb461-f0: entered promiscuous mode
Oct 11 08:58:30 compute-0 nova_compute[260935]: 2025-10-11 08:58:30.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:30.074 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4ceb461-f0, col_values=(('external_ids', {'iface-id': '8709bc34-b5e9-43a8-8949-41637bcfbcf5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:30 compute-0 nova_compute[260935]: 2025-10-11 08:58:30.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:30 compute-0 ovn_controller[152945]: 2025-10-11T08:58:30Z|00678|binding|INFO|Releasing lport 8709bc34-b5e9-43a8-8949-41637bcfbcf5 from this chassis (sb_readonly=0)
Oct 11 08:58:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1660: 321 pgs: 321 active+clean; 98 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 2.0 MiB/s wr, 88 op/s
Oct 11 08:58:30 compute-0 nova_compute[260935]: 2025-10-11 08:58:30.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:30.108 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c4ceb461-f642-4cfa-bc70-91212708aac4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c4ceb461-f642-4cfa-bc70-91212708aac4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:58:30 compute-0 nova_compute[260935]: 2025-10-11 08:58:30.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:30.109 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c333e3af-71a0-4bad-8889-d297bdb50152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:30.111 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-c4ceb461-f642-4cfa-bc70-91212708aac4
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/c4ceb461-f642-4cfa-bc70-91212708aac4.pid.haproxy
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID c4ceb461-f642-4cfa-bc70-91212708aac4
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:58:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:30.113 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4', 'env', 'PROCESS_TAG=haproxy-c4ceb461-f642-4cfa-bc70-91212708aac4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c4ceb461-f642-4cfa-bc70-91212708aac4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:58:30 compute-0 nova_compute[260935]: 2025-10-11 08:58:30.280 2 DEBUG nova.network.neutron [req-24d24f88-0746-46fa-b4be-b78745a0fcf7 req-869749ef-02ff-4e71-b6be-46e8b2a5ff6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Updated VIF entry in instance network info cache for port 6c74faab-60f6-485c-b31f-ea324cc34f9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:58:30 compute-0 nova_compute[260935]: 2025-10-11 08:58:30.281 2 DEBUG nova.network.neutron [req-24d24f88-0746-46fa-b4be-b78745a0fcf7 req-869749ef-02ff-4e71-b6be-46e8b2a5ff6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Updating instance_info_cache with network_info: [{"id": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "address": "fa:16:3e:b4:8a:29", "network": {"id": "c4ceb461-f642-4cfa-bc70-91212708aac4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2068762966-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1570574742514675b82edd1079304340", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c74faab-60", "ovs_interfaceid": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:58:30 compute-0 nova_compute[260935]: 2025-10-11 08:58:30.312 2 DEBUG oslo_concurrency.lockutils [req-24d24f88-0746-46fa-b4be-b78745a0fcf7 req-869749ef-02ff-4e71-b6be-46e8b2a5ff6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-6208d031-a043-4761-ba4d-f3564e22723d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:58:30 compute-0 podman[335979]: 2025-10-11 08:58:30.587643671 +0000 UTC m=+0.085982063 container create 1b20f96ad882bfb250701de8504ceb0add1c51515c7a0928878cf96ec1ed7fc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 08:58:30 compute-0 nova_compute[260935]: 2025-10-11 08:58:30.625 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173110.6255405, 6208d031-a043-4761-ba4d-f3564e22723d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:30 compute-0 nova_compute[260935]: 2025-10-11 08:58:30.627 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] VM Started (Lifecycle Event)
Oct 11 08:58:30 compute-0 podman[335979]: 2025-10-11 08:58:30.553860208 +0000 UTC m=+0.052198610 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:58:30 compute-0 systemd[1]: Started libpod-conmon-1b20f96ad882bfb250701de8504ceb0add1c51515c7a0928878cf96ec1ed7fc8.scope.
Oct 11 08:58:30 compute-0 nova_compute[260935]: 2025-10-11 08:58:30.659 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:30 compute-0 nova_compute[260935]: 2025-10-11 08:58:30.667 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173110.6263769, 6208d031-a043-4761-ba4d-f3564e22723d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:30 compute-0 nova_compute[260935]: 2025-10-11 08:58:30.668 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] VM Paused (Lifecycle Event)
Oct 11 08:58:30 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a8fa0a4237181abd6bdcdda937bed1536718e20d4dca1ae664421b36fdf681/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:58:30 compute-0 podman[335979]: 2025-10-11 08:58:30.708466666 +0000 UTC m=+0.206805088 container init 1b20f96ad882bfb250701de8504ceb0add1c51515c7a0928878cf96ec1ed7fc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 08:58:30 compute-0 nova_compute[260935]: 2025-10-11 08:58:30.710 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:30 compute-0 nova_compute[260935]: 2025-10-11 08:58:30.714 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:58:30 compute-0 podman[335979]: 2025-10-11 08:58:30.71631728 +0000 UTC m=+0.214655682 container start 1b20f96ad882bfb250701de8504ceb0add1c51515c7a0928878cf96ec1ed7fc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 08:58:30 compute-0 neutron-haproxy-ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4[335994]: [NOTICE]   (335998) : New worker (336000) forked
Oct 11 08:58:30 compute-0 neutron-haproxy-ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4[335994]: [NOTICE]   (335998) : Loading success.
Oct 11 08:58:30 compute-0 nova_compute[260935]: 2025-10-11 08:58:30.751 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.036 2 DEBUG nova.network.neutron [req-3948dcaa-7634-443a-b2dc-62738c1ac0a0 req-a4f1d5ec-4348-40e4-aa7f-6c894f366a50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.073 2 DEBUG oslo_concurrency.lockutils [req-3948dcaa-7634-443a-b2dc-62738c1ac0a0 req-a4f1d5ec-4348-40e4-aa7f-6c894f366a50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b7a3385b-c043-4cc1-963e-3421188270ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.074 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Acquired lock "refresh_cache-b7a3385b-c043-4cc1-963e-3421188270ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.074 2 DEBUG nova.network.neutron [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.327 2 DEBUG nova.network.neutron [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.356 2 DEBUG nova.compute.manager [req-ae68b114-bc1d-434f-b56b-72ee1bf73506 req-37605f7f-31c7-444f-8b38-7bf00088c25e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Received event network-vif-plugged-6c74faab-60f6-485c-b31f-ea324cc34f9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.357 2 DEBUG oslo_concurrency.lockutils [req-ae68b114-bc1d-434f-b56b-72ee1bf73506 req-37605f7f-31c7-444f-8b38-7bf00088c25e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6208d031-a043-4761-ba4d-f3564e22723d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.357 2 DEBUG oslo_concurrency.lockutils [req-ae68b114-bc1d-434f-b56b-72ee1bf73506 req-37605f7f-31c7-444f-8b38-7bf00088c25e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6208d031-a043-4761-ba4d-f3564e22723d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.358 2 DEBUG oslo_concurrency.lockutils [req-ae68b114-bc1d-434f-b56b-72ee1bf73506 req-37605f7f-31c7-444f-8b38-7bf00088c25e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6208d031-a043-4761-ba4d-f3564e22723d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.358 2 DEBUG nova.compute.manager [req-ae68b114-bc1d-434f-b56b-72ee1bf73506 req-37605f7f-31c7-444f-8b38-7bf00088c25e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Processing event network-vif-plugged-6c74faab-60f6-485c-b31f-ea324cc34f9c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.359 2 DEBUG nova.compute.manager [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.365 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173111.36476, 6208d031-a043-4761-ba4d-f3564e22723d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.365 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] VM Resumed (Lifecycle Event)
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.368 2 DEBUG nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.373 2 INFO nova.virt.libvirt.driver [-] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Instance spawned successfully.
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.374 2 DEBUG nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.397 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.411 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.420 2 DEBUG nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.421 2 DEBUG nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.422 2 DEBUG nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.423 2 DEBUG nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.424 2 DEBUG nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.425 2 DEBUG nova.virt.libvirt.driver [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.479 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.511 2 INFO nova.compute.manager [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Took 9.41 seconds to spawn the instance on the hypervisor.
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.512 2 DEBUG nova.compute.manager [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:31 compute-0 ceph-mon[74313]: pgmap v1660: 321 pgs: 321 active+clean; 98 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 2.0 MiB/s wr, 88 op/s
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.580 2 INFO nova.compute.manager [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Took 10.60 seconds to build instance.
Oct 11 08:58:31 compute-0 nova_compute[260935]: 2025-10-11 08:58:31.599 2 DEBUG oslo_concurrency.lockutils [None req-cda31545-9081-4dd8-af54-651e70371514 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lock "6208d031-a043-4761-ba4d-f3564e22723d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1661: 321 pgs: 321 active+clean; 110 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 164 KiB/s rd, 2.6 MiB/s wr, 93 op/s
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.323 2 DEBUG nova.network.neutron [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Updating instance_info_cache with network_info: [{"id": "2717cdba-7e23-4eb0-81a8-fa322def9612", "address": "fa:16:3e:3b:fa:9f", "network": {"id": "bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-2102542273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9bdaad8dfb9419b870adbb9b7034af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2717cdba-7e", "ovs_interfaceid": "2717cdba-7e23-4eb0-81a8-fa322def9612", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.349 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Releasing lock "refresh_cache-b7a3385b-c043-4cc1-963e-3421188270ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.350 2 DEBUG nova.compute.manager [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Instance network_info: |[{"id": "2717cdba-7e23-4eb0-81a8-fa322def9612", "address": "fa:16:3e:3b:fa:9f", "network": {"id": "bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-2102542273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9bdaad8dfb9419b870adbb9b7034af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2717cdba-7e", "ovs_interfaceid": "2717cdba-7e23-4eb0-81a8-fa322def9612", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.355 2 DEBUG nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Start _get_guest_xml network_info=[{"id": "2717cdba-7e23-4eb0-81a8-fa322def9612", "address": "fa:16:3e:3b:fa:9f", "network": {"id": "bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-2102542273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9bdaad8dfb9419b870adbb9b7034af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2717cdba-7e", "ovs_interfaceid": "2717cdba-7e23-4eb0-81a8-fa322def9612", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.363 2 WARNING nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.374 2 DEBUG nova.virt.libvirt.host [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.375 2 DEBUG nova.virt.libvirt.host [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.380 2 DEBUG nova.virt.libvirt.host [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.381 2 DEBUG nova.virt.libvirt.host [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.382 2 DEBUG nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.383 2 DEBUG nova.virt.hardware [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.385 2 DEBUG nova.virt.hardware [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.385 2 DEBUG nova.virt.hardware [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.386 2 DEBUG nova.virt.hardware [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.387 2 DEBUG nova.virt.hardware [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.388 2 DEBUG nova.virt.hardware [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.389 2 DEBUG nova.virt.hardware [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.390 2 DEBUG nova.virt.hardware [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.391 2 DEBUG nova.virt.hardware [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.392 2 DEBUG nova.virt.hardware [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.393 2 DEBUG nova.virt.hardware [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.400 2 DEBUG oslo_concurrency.processutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:58:32 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4084959531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.962 2 DEBUG oslo_concurrency.processutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:32 compute-0 nova_compute[260935]: 2025-10-11 08:58:32.998 2 DEBUG nova.storage.rbd_utils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] rbd image b7a3385b-c043-4cc1-963e-3421188270ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.004 2 DEBUG oslo_concurrency.processutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:58:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:58:33 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/525933288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.513 2 DEBUG nova.compute.manager [req-64803c77-570c-45e1-949f-b735d61b696b req-a1046cd0-e4b2-4aeb-b30f-bfb48644eb8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Received event network-vif-plugged-6c74faab-60f6-485c-b31f-ea324cc34f9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.514 2 DEBUG oslo_concurrency.lockutils [req-64803c77-570c-45e1-949f-b735d61b696b req-a1046cd0-e4b2-4aeb-b30f-bfb48644eb8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6208d031-a043-4761-ba4d-f3564e22723d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.515 2 DEBUG oslo_concurrency.lockutils [req-64803c77-570c-45e1-949f-b735d61b696b req-a1046cd0-e4b2-4aeb-b30f-bfb48644eb8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6208d031-a043-4761-ba4d-f3564e22723d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.515 2 DEBUG oslo_concurrency.lockutils [req-64803c77-570c-45e1-949f-b735d61b696b req-a1046cd0-e4b2-4aeb-b30f-bfb48644eb8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6208d031-a043-4761-ba4d-f3564e22723d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.516 2 DEBUG nova.compute.manager [req-64803c77-570c-45e1-949f-b735d61b696b req-a1046cd0-e4b2-4aeb-b30f-bfb48644eb8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] No waiting events found dispatching network-vif-plugged-6c74faab-60f6-485c-b31f-ea324cc34f9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.516 2 WARNING nova.compute.manager [req-64803c77-570c-45e1-949f-b735d61b696b req-a1046cd0-e4b2-4aeb-b30f-bfb48644eb8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Received unexpected event network-vif-plugged-6c74faab-60f6-485c-b31f-ea324cc34f9c for instance with vm_state active and task_state None.
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.534 2 DEBUG oslo_concurrency.processutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.536 2 DEBUG nova.virt.libvirt.vif [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:58:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-692932297',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-692932297',id=77,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9bdaad8dfb9419b870adbb9b7034af4',ramdisk_id='',reservation_id='r-nqt260vz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-1885783580',owner_user_name='tempest-InstanceActionsV221TestJSON-1885783580-project-member'},
tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:58:26Z,user_data=None,user_id='24dea95ec8d047639f1121e457f54149',uuid=b7a3385b-c043-4cc1-963e-3421188270ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2717cdba-7e23-4eb0-81a8-fa322def9612", "address": "fa:16:3e:3b:fa:9f", "network": {"id": "bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-2102542273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9bdaad8dfb9419b870adbb9b7034af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2717cdba-7e", "ovs_interfaceid": "2717cdba-7e23-4eb0-81a8-fa322def9612", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.537 2 DEBUG nova.network.os_vif_util [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Converting VIF {"id": "2717cdba-7e23-4eb0-81a8-fa322def9612", "address": "fa:16:3e:3b:fa:9f", "network": {"id": "bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-2102542273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9bdaad8dfb9419b870adbb9b7034af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2717cdba-7e", "ovs_interfaceid": "2717cdba-7e23-4eb0-81a8-fa322def9612", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.538 2 DEBUG nova.network.os_vif_util [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:fa:9f,bridge_name='br-int',has_traffic_filtering=True,id=2717cdba-7e23-4eb0-81a8-fa322def9612,network=Network(bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2717cdba-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.541 2 DEBUG nova.objects.instance [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lazy-loading 'pci_devices' on Instance uuid b7a3385b-c043-4cc1-963e-3421188270ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.564 2 DEBUG nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:58:33 compute-0 nova_compute[260935]:   <uuid>b7a3385b-c043-4cc1-963e-3421188270ac</uuid>
Oct 11 08:58:33 compute-0 nova_compute[260935]:   <name>instance-0000004d</name>
Oct 11 08:58:33 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:58:33 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:58:33 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <nova:name>tempest-InstanceActionsV221TestJSON-server-692932297</nova:name>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:58:32</nova:creationTime>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:58:33 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:58:33 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:58:33 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:58:33 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:58:33 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:58:33 compute-0 nova_compute[260935]:         <nova:user uuid="24dea95ec8d047639f1121e457f54149">tempest-InstanceActionsV221TestJSON-1885783580-project-member</nova:user>
Oct 11 08:58:33 compute-0 nova_compute[260935]:         <nova:project uuid="a9bdaad8dfb9419b870adbb9b7034af4">tempest-InstanceActionsV221TestJSON-1885783580</nova:project>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:58:33 compute-0 nova_compute[260935]:         <nova:port uuid="2717cdba-7e23-4eb0-81a8-fa322def9612">
Oct 11 08:58:33 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:58:33 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:58:33 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <system>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <entry name="serial">b7a3385b-c043-4cc1-963e-3421188270ac</entry>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <entry name="uuid">b7a3385b-c043-4cc1-963e-3421188270ac</entry>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     </system>
Oct 11 08:58:33 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:58:33 compute-0 nova_compute[260935]:   <os>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:   </os>
Oct 11 08:58:33 compute-0 nova_compute[260935]:   <features>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:   </features>
Oct 11 08:58:33 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:58:33 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:58:33 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b7a3385b-c043-4cc1-963e-3421188270ac_disk">
Oct 11 08:58:33 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       </source>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:58:33 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b7a3385b-c043-4cc1-963e-3421188270ac_disk.config">
Oct 11 08:58:33 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       </source>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:58:33 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:3b:fa:9f"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <target dev="tap2717cdba-7e"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/b7a3385b-c043-4cc1-963e-3421188270ac/console.log" append="off"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <video>
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     </video>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:58:33 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:58:33 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:58:33 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:58:33 compute-0 nova_compute[260935]: </domain>
Oct 11 08:58:33 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.564 2 DEBUG nova.compute.manager [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Preparing to wait for external event network-vif-plugged-2717cdba-7e23-4eb0-81a8-fa322def9612 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.565 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Acquiring lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.565 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.566 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:33 compute-0 ceph-mon[74313]: pgmap v1661: 321 pgs: 321 active+clean; 110 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 164 KiB/s rd, 2.6 MiB/s wr, 93 op/s
Oct 11 08:58:33 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4084959531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:58:33 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/525933288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.567 2 DEBUG nova.virt.libvirt.vif [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:58:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-692932297',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-692932297',id=77,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9bdaad8dfb9419b870adbb9b7034af4',ramdisk_id='',reservation_id='r-nqt260vz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-1885783580',owner_user_name='tempest-InstanceActionsV221TestJSON-1885783580-project
-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:58:26Z,user_data=None,user_id='24dea95ec8d047639f1121e457f54149',uuid=b7a3385b-c043-4cc1-963e-3421188270ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2717cdba-7e23-4eb0-81a8-fa322def9612", "address": "fa:16:3e:3b:fa:9f", "network": {"id": "bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-2102542273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9bdaad8dfb9419b870adbb9b7034af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2717cdba-7e", "ovs_interfaceid": "2717cdba-7e23-4eb0-81a8-fa322def9612", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.568 2 DEBUG nova.network.os_vif_util [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Converting VIF {"id": "2717cdba-7e23-4eb0-81a8-fa322def9612", "address": "fa:16:3e:3b:fa:9f", "network": {"id": "bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-2102542273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9bdaad8dfb9419b870adbb9b7034af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2717cdba-7e", "ovs_interfaceid": "2717cdba-7e23-4eb0-81a8-fa322def9612", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.569 2 DEBUG nova.network.os_vif_util [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:fa:9f,bridge_name='br-int',has_traffic_filtering=True,id=2717cdba-7e23-4eb0-81a8-fa322def9612,network=Network(bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2717cdba-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.570 2 DEBUG os_vif [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:fa:9f,bridge_name='br-int',has_traffic_filtering=True,id=2717cdba-7e23-4eb0-81a8-fa322def9612,network=Network(bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2717cdba-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.572 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.573 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.579 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2717cdba-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.580 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2717cdba-7e, col_values=(('external_ids', {'iface-id': '2717cdba-7e23-4eb0-81a8-fa322def9612', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:fa:9f', 'vm-uuid': 'b7a3385b-c043-4cc1-963e-3421188270ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:33 compute-0 NetworkManager[44960]: <info>  [1760173113.5838] manager: (tap2717cdba-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.595 2 INFO os_vif [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:fa:9f,bridge_name='br-int',has_traffic_filtering=True,id=2717cdba-7e23-4eb0-81a8-fa322def9612,network=Network(bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2717cdba-7e')
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.686 2 DEBUG nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.686 2 DEBUG nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.687 2 DEBUG nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] No VIF found with MAC fa:16:3e:3b:fa:9f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.688 2 INFO nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Using config drive
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.728 2 DEBUG nova.storage.rbd_utils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] rbd image b7a3385b-c043-4cc1-963e-3421188270ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:33 compute-0 podman[336073]: 2025-10-11 08:58:33.737370804 +0000 UTC m=+0.088726861 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.796 2 DEBUG oslo_concurrency.lockutils [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Acquiring lock "6208d031-a043-4761-ba4d-f3564e22723d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.797 2 DEBUG oslo_concurrency.lockutils [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lock "6208d031-a043-4761-ba4d-f3564e22723d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.797 2 DEBUG oslo_concurrency.lockutils [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Acquiring lock "6208d031-a043-4761-ba4d-f3564e22723d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.798 2 DEBUG oslo_concurrency.lockutils [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lock "6208d031-a043-4761-ba4d-f3564e22723d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.798 2 DEBUG oslo_concurrency.lockutils [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lock "6208d031-a043-4761-ba4d-f3564e22723d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.800 2 INFO nova.compute.manager [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Terminating instance
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.802 2 DEBUG nova.compute.manager [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:58:33 compute-0 kernel: tap6c74faab-60 (unregistering): left promiscuous mode
Oct 11 08:58:33 compute-0 NetworkManager[44960]: <info>  [1760173113.8513] device (tap6c74faab-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:33 compute-0 ovn_controller[152945]: 2025-10-11T08:58:33Z|00679|binding|INFO|Releasing lport 6c74faab-60f6-485c-b31f-ea324cc34f9c from this chassis (sb_readonly=0)
Oct 11 08:58:33 compute-0 ovn_controller[152945]: 2025-10-11T08:58:33Z|00680|binding|INFO|Setting lport 6c74faab-60f6-485c-b31f-ea324cc34f9c down in Southbound
Oct 11 08:58:33 compute-0 ovn_controller[152945]: 2025-10-11T08:58:33Z|00681|binding|INFO|Removing iface tap6c74faab-60 ovn-installed in OVS
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:33.884 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:8a:29 10.100.0.14'], port_security=['fa:16:3e:b4:8a:29 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6208d031-a043-4761-ba4d-f3564e22723d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4ceb461-f642-4cfa-bc70-91212708aac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1570574742514675b82edd1079304340', 'neutron:revision_number': '4', 'neutron:security_group_ids': '38579ec6-1bc1-482c-9459-13b8a5bdaa01', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=033968fa-77cf-4651-a7a6-8d92fdc428f0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=6c74faab-60f6-485c-b31f-ea324cc34f9c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:58:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:33.887 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 6c74faab-60f6-485c-b31f-ea324cc34f9c in datapath c4ceb461-f642-4cfa-bc70-91212708aac4 unbound from our chassis
Oct 11 08:58:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:33.889 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c4ceb461-f642-4cfa-bc70-91212708aac4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:58:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:33.891 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6b7ad107-7788-488f-bba4-f6de0c05c65d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:33.892 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4 namespace which is not needed anymore
Oct 11 08:58:33 compute-0 nova_compute[260935]: 2025-10-11 08:58:33.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:33 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Oct 11 08:58:33 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d0000004c.scope: Consumed 3.424s CPU time.
Oct 11 08:58:33 compute-0 systemd-machined[215705]: Machine qemu-85-instance-0000004c terminated.
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.059 2 INFO nova.virt.libvirt.driver [-] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Instance destroyed successfully.
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.060 2 DEBUG nova.objects.instance [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lazy-loading 'resources' on Instance uuid 6208d031-a043-4761-ba4d-f3564e22723d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 134 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 459 KiB/s rd, 3.6 MiB/s wr, 120 op/s
Oct 11 08:58:34 compute-0 neutron-haproxy-ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4[335994]: [NOTICE]   (335998) : haproxy version is 2.8.14-c23fe91
Oct 11 08:58:34 compute-0 neutron-haproxy-ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4[335994]: [NOTICE]   (335998) : path to executable is /usr/sbin/haproxy
Oct 11 08:58:34 compute-0 neutron-haproxy-ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4[335994]: [WARNING]  (335998) : Exiting Master process...
Oct 11 08:58:34 compute-0 neutron-haproxy-ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4[335994]: [WARNING]  (335998) : Exiting Master process...
Oct 11 08:58:34 compute-0 neutron-haproxy-ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4[335994]: [ALERT]    (335998) : Current worker (336000) exited with code 143 (Terminated)
Oct 11 08:58:34 compute-0 neutron-haproxy-ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4[335994]: [WARNING]  (335998) : All workers exited. Exiting... (0)
Oct 11 08:58:34 compute-0 systemd[1]: libpod-1b20f96ad882bfb250701de8504ceb0add1c51515c7a0928878cf96ec1ed7fc8.scope: Deactivated successfully.
Oct 11 08:58:34 compute-0 podman[336136]: 2025-10-11 08:58:34.098592414 +0000 UTC m=+0.074539457 container died 1b20f96ad882bfb250701de8504ceb0add1c51515c7a0928878cf96ec1ed7fc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.107 2 DEBUG nova.virt.libvirt.vif [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:58:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-470334281',display_name='tempest-InstanceActionsNegativeTestJSON-server-470334281',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-470334281',id=76,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:58:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1570574742514675b82edd1079304340',ramdisk_id='',reservation_id='r-xbr7uzg0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1759188145',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1759188145-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:58:31Z,user_data=None,user_id='7195b797fdf14f41b559c905c46027f8',uuid=6208d031-a043-4761-ba4d-f3564e22723d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "address": "fa:16:3e:b4:8a:29", "network": {"id": "c4ceb461-f642-4cfa-bc70-91212708aac4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2068762966-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1570574742514675b82edd1079304340", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c74faab-60", "ovs_interfaceid": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.108 2 DEBUG nova.network.os_vif_util [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Converting VIF {"id": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "address": "fa:16:3e:b4:8a:29", "network": {"id": "c4ceb461-f642-4cfa-bc70-91212708aac4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2068762966-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1570574742514675b82edd1079304340", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c74faab-60", "ovs_interfaceid": "6c74faab-60f6-485c-b31f-ea324cc34f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.109 2 DEBUG nova.network.os_vif_util [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:8a:29,bridge_name='br-int',has_traffic_filtering=True,id=6c74faab-60f6-485c-b31f-ea324cc34f9c,network=Network(c4ceb461-f642-4cfa-bc70-91212708aac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c74faab-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.109 2 DEBUG os_vif [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:8a:29,bridge_name='br-int',has_traffic_filtering=True,id=6c74faab-60f6-485c-b31f-ea324cc34f9c,network=Network(c4ceb461-f642-4cfa-bc70-91212708aac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c74faab-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.113 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c74faab-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.125 2 INFO os_vif [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:8a:29,bridge_name='br-int',has_traffic_filtering=True,id=6c74faab-60f6-485c-b31f-ea324cc34f9c,network=Network(c4ceb461-f642-4cfa-bc70-91212708aac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c74faab-60')
Oct 11 08:58:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b20f96ad882bfb250701de8504ceb0add1c51515c7a0928878cf96ec1ed7fc8-userdata-shm.mount: Deactivated successfully.
Oct 11 08:58:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-55a8fa0a4237181abd6bdcdda937bed1536718e20d4dca1ae664421b36fdf681-merged.mount: Deactivated successfully.
Oct 11 08:58:34 compute-0 podman[336136]: 2025-10-11 08:58:34.159916242 +0000 UTC m=+0.135863295 container cleanup 1b20f96ad882bfb250701de8504ceb0add1c51515c7a0928878cf96ec1ed7fc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 08:58:34 compute-0 systemd[1]: libpod-conmon-1b20f96ad882bfb250701de8504ceb0add1c51515c7a0928878cf96ec1ed7fc8.scope: Deactivated successfully.
Oct 11 08:58:34 compute-0 podman[336194]: 2025-10-11 08:58:34.259694198 +0000 UTC m=+0.070283646 container remove 1b20f96ad882bfb250701de8504ceb0add1c51515c7a0928878cf96ec1ed7fc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 08:58:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:34.268 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1860f34a-b4f9-4e16-b6f8-755461c20ba4]: (4, ('Sat Oct 11 08:58:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4 (1b20f96ad882bfb250701de8504ceb0add1c51515c7a0928878cf96ec1ed7fc8)\n1b20f96ad882bfb250701de8504ceb0add1c51515c7a0928878cf96ec1ed7fc8\nSat Oct 11 08:58:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4 (1b20f96ad882bfb250701de8504ceb0add1c51515c7a0928878cf96ec1ed7fc8)\n1b20f96ad882bfb250701de8504ceb0add1c51515c7a0928878cf96ec1ed7fc8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:34.270 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[497f39b0-6479-44a4-800a-07d020e4ebbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:34.272 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4ceb461-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:34 compute-0 kernel: tapc4ceb461-f0: left promiscuous mode
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:34.322 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[889b75c5-f622-4b69-a396-c10d1cd13ca4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:34.361 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[310c1deb-7e42-4065-9b72-2f23d17c9923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:34.364 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[be4147d4-88c9-42b7-9307-dc4994ba3c67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:34.388 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[afead290-9121-4e36-9ee9-6cb0eea88811]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495439, 'reachable_time': 18208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336217, 'error': None, 'target': 'ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:34.391 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c4ceb461-f642-4cfa-bc70-91212708aac4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:58:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:34.391 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[789afe15-0e5e-4398-88f9-dcdfe672b451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:34 compute-0 systemd[1]: run-netns-ovnmeta\x2dc4ceb461\x2df642\x2d4cfa\x2dbc70\x2d91212708aac4.mount: Deactivated successfully.
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.420 2 INFO nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Creating config drive at /var/lib/nova/instances/b7a3385b-c043-4cc1-963e-3421188270ac/disk.config
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.430 2 DEBUG oslo_concurrency.processutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b7a3385b-c043-4cc1-963e-3421188270ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd3hae80w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.590 2 DEBUG oslo_concurrency.processutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b7a3385b-c043-4cc1-963e-3421188270ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd3hae80w" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.630 2 DEBUG nova.storage.rbd_utils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] rbd image b7a3385b-c043-4cc1-963e-3421188270ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.636 2 DEBUG oslo_concurrency.processutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b7a3385b-c043-4cc1-963e-3421188270ac/disk.config b7a3385b-c043-4cc1-963e-3421188270ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.699 2 INFO nova.virt.libvirt.driver [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Deleting instance files /var/lib/nova/instances/6208d031-a043-4761-ba4d-f3564e22723d_del
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.701 2 INFO nova.virt.libvirt.driver [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Deletion of /var/lib/nova/instances/6208d031-a043-4761-ba4d-f3564e22723d_del complete
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.762 2 INFO nova.compute.manager [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Took 0.96 seconds to destroy the instance on the hypervisor.
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.763 2 DEBUG oslo.service.loopingcall [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.764 2 DEBUG nova.compute.manager [-] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.765 2 DEBUG nova.network.neutron [-] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.848 2 DEBUG oslo_concurrency.processutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b7a3385b-c043-4cc1-963e-3421188270ac/disk.config b7a3385b-c043-4cc1-963e-3421188270ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.849 2 INFO nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Deleting local config drive /var/lib/nova/instances/b7a3385b-c043-4cc1-963e-3421188270ac/disk.config because it was imported into RBD.
Oct 11 08:58:34 compute-0 kernel: tap2717cdba-7e: entered promiscuous mode
Oct 11 08:58:34 compute-0 NetworkManager[44960]: <info>  [1760173114.9525] manager: (tap2717cdba-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/303)
Oct 11 08:58:34 compute-0 systemd-udevd[336114]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:34 compute-0 ovn_controller[152945]: 2025-10-11T08:58:34Z|00682|binding|INFO|Claiming lport 2717cdba-7e23-4eb0-81a8-fa322def9612 for this chassis.
Oct 11 08:58:34 compute-0 ovn_controller[152945]: 2025-10-11T08:58:34Z|00683|binding|INFO|2717cdba-7e23-4eb0-81a8-fa322def9612: Claiming fa:16:3e:3b:fa:9f 10.100.0.13
Oct 11 08:58:34 compute-0 nova_compute[260935]: 2025-10-11 08:58:34.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:34 compute-0 NetworkManager[44960]: <info>  [1760173114.9739] device (tap2717cdba-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:58:34 compute-0 NetworkManager[44960]: <info>  [1760173114.9746] device (tap2717cdba-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:58:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:34.975 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:fa:9f 10.100.0.13'], port_security=['fa:16:3e:3b:fa:9f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'b7a3385b-c043-4cc1-963e-3421188270ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9bdaad8dfb9419b870adbb9b7034af4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e241bbdd-2a8f-4ebd-9342-e71a7cc5dbc9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b18009b0-719a-4191-81a0-8bc7fea35127, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=2717cdba-7e23-4eb0-81a8-fa322def9612) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:58:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:34.977 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 2717cdba-7e23-4eb0-81a8-fa322def9612 in datapath bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1 bound to our chassis
Oct 11 08:58:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:34.980 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.004 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[08208f96-135c-4446-b368-2039e6ef847f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.005 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbb3668ff-e1 in ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.007 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbb3668ff-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.008 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ecc352-9353-4b7c-85f9-92e58104990c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.009 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fd139870-b339-485a-b261-bb06df3e5de6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:35 compute-0 systemd-machined[215705]: New machine qemu-86-instance-0000004d.
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.030 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[72a0db09-54e0-406c-940d-2334957f749f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:35 compute-0 systemd[1]: Started Virtual Machine qemu-86-instance-0000004d.
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.055 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e9f2c10f-53cf-48cb-8132-710df7342170]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:35 compute-0 ovn_controller[152945]: 2025-10-11T08:58:35Z|00684|binding|INFO|Setting lport 2717cdba-7e23-4eb0-81a8-fa322def9612 ovn-installed in OVS
Oct 11 08:58:35 compute-0 ovn_controller[152945]: 2025-10-11T08:58:35Z|00685|binding|INFO|Setting lport 2717cdba-7e23-4eb0-81a8-fa322def9612 up in Southbound
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.113 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3e0387-3920-4d89-8db1-34fd26426c1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.121 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b76bff-36b7-46fb-bc6e-42ff8af1aec5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:35 compute-0 NetworkManager[44960]: <info>  [1760173115.1235] manager: (tapbb3668ff-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/304)
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.181 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a92da9-dc4e-4473-92f3-222c62390848]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.185 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9211940c-c4cf-4dfe-9719-d6cc266b51cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:35 compute-0 NetworkManager[44960]: <info>  [1760173115.2284] device (tapbb3668ff-e0): carrier: link connected
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.241 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[28b199fe-8043-4cc9-bac6-2a626f5ac345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.280 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[012d349c-ec71-4159-9036-b4b72e39d665]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb3668ff-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:1a:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495993, 'reachable_time': 19908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336301, 'error': None, 'target': 'ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.310 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[71d2ae8b-bdf2-4a6c-8474-995f755e7fad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:1a28'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495993, 'tstamp': 495993}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336302, 'error': None, 'target': 'ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.343 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[af53125c-b3d1-4bb1-b282-86cb419e47c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb3668ff-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:1a:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495993, 'reachable_time': 19908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336303, 'error': None, 'target': 'ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.391 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bb19e398-2842-49af-85ae-0551c08ed0c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.508 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f3d9d1f5-1531-4278-a32f-03d624a6e758]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.509 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb3668ff-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.510 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.510 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb3668ff-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:35 compute-0 kernel: tapbb3668ff-e0: entered promiscuous mode
Oct 11 08:58:35 compute-0 NetworkManager[44960]: <info>  [1760173115.5143] manager: (tapbb3668ff-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.518 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb3668ff-e0, col_values=(('external_ids', {'iface-id': '32ff24c3-9e9d-405b-8dba-dced4cc9bdc9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:35 compute-0 ovn_controller[152945]: 2025-10-11T08:58:35Z|00686|binding|INFO|Releasing lport 32ff24c3-9e9d-405b-8dba-dced4cc9bdc9 from this chassis (sb_readonly=0)
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.561 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.562 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c38bcf23-34d3-4054-9bca-bc73d351ed89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.563 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1.pid.haproxy
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:58:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:35.563 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1', 'env', 'PROCESS_TAG=haproxy-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:58:35 compute-0 ceph-mon[74313]: pgmap v1662: 321 pgs: 321 active+clean; 134 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 459 KiB/s rd, 3.6 MiB/s wr, 120 op/s
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.684 2 DEBUG nova.compute.manager [req-d07a191b-c310-4780-9561-7b91d6e54736 req-db38c018-2542-4158-b0c7-0eba41b05bae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Received event network-vif-unplugged-6c74faab-60f6-485c-b31f-ea324cc34f9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.685 2 DEBUG oslo_concurrency.lockutils [req-d07a191b-c310-4780-9561-7b91d6e54736 req-db38c018-2542-4158-b0c7-0eba41b05bae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6208d031-a043-4761-ba4d-f3564e22723d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.685 2 DEBUG oslo_concurrency.lockutils [req-d07a191b-c310-4780-9561-7b91d6e54736 req-db38c018-2542-4158-b0c7-0eba41b05bae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6208d031-a043-4761-ba4d-f3564e22723d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.685 2 DEBUG oslo_concurrency.lockutils [req-d07a191b-c310-4780-9561-7b91d6e54736 req-db38c018-2542-4158-b0c7-0eba41b05bae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6208d031-a043-4761-ba4d-f3564e22723d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.686 2 DEBUG nova.compute.manager [req-d07a191b-c310-4780-9561-7b91d6e54736 req-db38c018-2542-4158-b0c7-0eba41b05bae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] No waiting events found dispatching network-vif-unplugged-6c74faab-60f6-485c-b31f-ea324cc34f9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.686 2 DEBUG nova.compute.manager [req-d07a191b-c310-4780-9561-7b91d6e54736 req-db38c018-2542-4158-b0c7-0eba41b05bae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Received event network-vif-unplugged-6c74faab-60f6-485c-b31f-ea324cc34f9c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.686 2 DEBUG nova.compute.manager [req-d07a191b-c310-4780-9561-7b91d6e54736 req-db38c018-2542-4158-b0c7-0eba41b05bae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Received event network-vif-plugged-6c74faab-60f6-485c-b31f-ea324cc34f9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.687 2 DEBUG oslo_concurrency.lockutils [req-d07a191b-c310-4780-9561-7b91d6e54736 req-db38c018-2542-4158-b0c7-0eba41b05bae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6208d031-a043-4761-ba4d-f3564e22723d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.687 2 DEBUG oslo_concurrency.lockutils [req-d07a191b-c310-4780-9561-7b91d6e54736 req-db38c018-2542-4158-b0c7-0eba41b05bae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6208d031-a043-4761-ba4d-f3564e22723d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.687 2 DEBUG oslo_concurrency.lockutils [req-d07a191b-c310-4780-9561-7b91d6e54736 req-db38c018-2542-4158-b0c7-0eba41b05bae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6208d031-a043-4761-ba4d-f3564e22723d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.688 2 DEBUG nova.compute.manager [req-d07a191b-c310-4780-9561-7b91d6e54736 req-db38c018-2542-4158-b0c7-0eba41b05bae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] No waiting events found dispatching network-vif-plugged-6c74faab-60f6-485c-b31f-ea324cc34f9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.688 2 WARNING nova.compute.manager [req-d07a191b-c310-4780-9561-7b91d6e54736 req-db38c018-2542-4158-b0c7-0eba41b05bae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Received unexpected event network-vif-plugged-6c74faab-60f6-485c-b31f-ea324cc34f9c for instance with vm_state active and task_state deleting.
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.688 2 DEBUG nova.compute.manager [req-d07a191b-c310-4780-9561-7b91d6e54736 req-db38c018-2542-4158-b0c7-0eba41b05bae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Received event network-vif-plugged-2717cdba-7e23-4eb0-81a8-fa322def9612 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.689 2 DEBUG oslo_concurrency.lockutils [req-d07a191b-c310-4780-9561-7b91d6e54736 req-db38c018-2542-4158-b0c7-0eba41b05bae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.689 2 DEBUG oslo_concurrency.lockutils [req-d07a191b-c310-4780-9561-7b91d6e54736 req-db38c018-2542-4158-b0c7-0eba41b05bae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.689 2 DEBUG oslo_concurrency.lockutils [req-d07a191b-c310-4780-9561-7b91d6e54736 req-db38c018-2542-4158-b0c7-0eba41b05bae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.689 2 DEBUG nova.compute.manager [req-d07a191b-c310-4780-9561-7b91d6e54736 req-db38c018-2542-4158-b0c7-0eba41b05bae e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Processing event network-vif-plugged-2717cdba-7e23-4eb0-81a8-fa322def9612 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.965 2 DEBUG nova.network.neutron [-] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:58:35 compute-0 nova_compute[260935]: 2025-10-11 08:58:35.983 2 INFO nova.compute.manager [-] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Took 1.22 seconds to deallocate network for instance.
Oct 11 08:58:36 compute-0 podman[336377]: 2025-10-11 08:58:36.056919975 +0000 UTC m=+0.066427665 container create bc5874d383e6d985d73c54086a80624f21a7984b25aad5a75c8775dffddd269f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.061 2 DEBUG oslo_concurrency.lockutils [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.062 2 DEBUG oslo_concurrency.lockutils [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1663: 321 pgs: 321 active+clean; 134 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 449 KiB/s rd, 3.6 MiB/s wr, 105 op/s
Oct 11 08:58:36 compute-0 podman[336377]: 2025-10-11 08:58:36.020430054 +0000 UTC m=+0.029937724 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:58:36 compute-0 systemd[1]: Started libpod-conmon-bc5874d383e6d985d73c54086a80624f21a7984b25aad5a75c8775dffddd269f.scope.
Oct 11 08:58:36 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6a7046389d867a3d28fb34193ff9660fbfd12726720b91fa73dee28f3989f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.183 2 DEBUG oslo_concurrency.processutils [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:36 compute-0 podman[336377]: 2025-10-11 08:58:36.186099298 +0000 UTC m=+0.195606988 container init bc5874d383e6d985d73c54086a80624f21a7984b25aad5a75c8775dffddd269f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:58:36 compute-0 podman[336377]: 2025-10-11 08:58:36.19700978 +0000 UTC m=+0.206517480 container start bc5874d383e6d985d73c54086a80624f21a7984b25aad5a75c8775dffddd269f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 08:58:36 compute-0 neutron-haproxy-ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1[336392]: [NOTICE]   (336396) : New worker (336399) forked
Oct 11 08:58:36 compute-0 neutron-haproxy-ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1[336392]: [NOTICE]   (336396) : Loading success.
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.279 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173116.2782288, b7a3385b-c043-4cc1-963e-3421188270ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.282 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] VM Started (Lifecycle Event)
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.287 2 DEBUG nova.compute.manager [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.292 2 DEBUG nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.300 2 INFO nova.virt.libvirt.driver [-] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Instance spawned successfully.
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.301 2 DEBUG nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.306 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.311 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.325 2 DEBUG nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.326 2 DEBUG nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.326 2 DEBUG nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.327 2 DEBUG nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.327 2 DEBUG nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.327 2 DEBUG nova.virt.libvirt.driver [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.332 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.332 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173116.2785757, b7a3385b-c043-4cc1-963e-3421188270ac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.333 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] VM Paused (Lifecycle Event)
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.373 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.378 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173116.2901042, b7a3385b-c043-4cc1-963e-3421188270ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.378 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] VM Resumed (Lifecycle Event)
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.391 2 INFO nova.compute.manager [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Took 9.66 seconds to spawn the instance on the hypervisor.
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.392 2 DEBUG nova.compute.manager [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.402 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.407 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.435 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.458 2 INFO nova.compute.manager [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Took 10.76 seconds to build instance.
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.474 2 DEBUG oslo_concurrency.lockutils [None req-576d934c-3bea-49e4-87f0-7ea035c6381d 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lock "b7a3385b-c043-4cc1-963e-3421188270ac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:58:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4229355518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.679 2 DEBUG oslo_concurrency.processutils [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.687 2 DEBUG nova.compute.provider_tree [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.706 2 DEBUG nova.scheduler.client.report [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.737 2 DEBUG oslo_concurrency.lockutils [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.774 2 INFO nova.scheduler.client.report [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Deleted allocations for instance 6208d031-a043-4761-ba4d-f3564e22723d
Oct 11 08:58:36 compute-0 nova_compute[260935]: 2025-10-11 08:58:36.863 2 DEBUG oslo_concurrency.lockutils [None req-480e185c-6169-4b39-b01e-201406aac779 7195b797fdf14f41b559c905c46027f8 1570574742514675b82edd1079304340 - - default default] Lock "6208d031-a043-4761-ba4d-f3564e22723d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:58:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/410105533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:58:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:58:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/410105533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:58:37 compute-0 ceph-mon[74313]: pgmap v1663: 321 pgs: 321 active+clean; 134 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 449 KiB/s rd, 3.6 MiB/s wr, 105 op/s
Oct 11 08:58:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4229355518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/410105533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:58:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/410105533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:58:37 compute-0 nova_compute[260935]: 2025-10-11 08:58:37.822 2 DEBUG nova.compute.manager [req-d110718a-76fa-4969-83cb-f587f6754dbe req-ead56fd1-b2e4-4fed-a94b-f2594c74ad80 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Received event network-vif-plugged-2717cdba-7e23-4eb0-81a8-fa322def9612 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:37 compute-0 nova_compute[260935]: 2025-10-11 08:58:37.822 2 DEBUG oslo_concurrency.lockutils [req-d110718a-76fa-4969-83cb-f587f6754dbe req-ead56fd1-b2e4-4fed-a94b-f2594c74ad80 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:37 compute-0 nova_compute[260935]: 2025-10-11 08:58:37.824 2 DEBUG oslo_concurrency.lockutils [req-d110718a-76fa-4969-83cb-f587f6754dbe req-ead56fd1-b2e4-4fed-a94b-f2594c74ad80 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:37 compute-0 nova_compute[260935]: 2025-10-11 08:58:37.824 2 DEBUG oslo_concurrency.lockutils [req-d110718a-76fa-4969-83cb-f587f6754dbe req-ead56fd1-b2e4-4fed-a94b-f2594c74ad80 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:37 compute-0 nova_compute[260935]: 2025-10-11 08:58:37.825 2 DEBUG nova.compute.manager [req-d110718a-76fa-4969-83cb-f587f6754dbe req-ead56fd1-b2e4-4fed-a94b-f2594c74ad80 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] No waiting events found dispatching network-vif-plugged-2717cdba-7e23-4eb0-81a8-fa322def9612 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:37 compute-0 nova_compute[260935]: 2025-10-11 08:58:37.825 2 WARNING nova.compute.manager [req-d110718a-76fa-4969-83cb-f587f6754dbe req-ead56fd1-b2e4-4fed-a94b-f2594c74ad80 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Received unexpected event network-vif-plugged-2717cdba-7e23-4eb0-81a8-fa322def9612 for instance with vm_state active and task_state None.
Oct 11 08:58:37 compute-0 nova_compute[260935]: 2025-10-11 08:58:37.825 2 DEBUG nova.compute.manager [req-d110718a-76fa-4969-83cb-f587f6754dbe req-ead56fd1-b2e4-4fed-a94b-f2594c74ad80 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Received event network-vif-deleted-6c74faab-60f6-485c-b31f-ea324cc34f9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 88 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.118 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173103.116693, b297454f-91af-4716-b4f7-6af9f0d7e62d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.119 2 INFO nova.compute.manager [-] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] VM Stopped (Lifecycle Event)
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.149 2 DEBUG nova.compute.manager [None req-00405f78-bccf-457c-9d1f-531806fd65ba - - - - - -] [instance: b297454f-91af-4716-b4f7-6af9f0d7e62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.408 2 DEBUG oslo_concurrency.lockutils [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Acquiring lock "b7a3385b-c043-4cc1-963e-3421188270ac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.408 2 DEBUG oslo_concurrency.lockutils [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lock "b7a3385b-c043-4cc1-963e-3421188270ac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.408 2 DEBUG oslo_concurrency.lockutils [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Acquiring lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.408 2 DEBUG oslo_concurrency.lockutils [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.409 2 DEBUG oslo_concurrency.lockutils [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.410 2 INFO nova.compute.manager [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Terminating instance
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.410 2 DEBUG nova.compute.manager [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:58:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:58:38 compute-0 kernel: tap2717cdba-7e (unregistering): left promiscuous mode
Oct 11 08:58:38 compute-0 NetworkManager[44960]: <info>  [1760173118.4612] device (tap2717cdba-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:58:38 compute-0 ovn_controller[152945]: 2025-10-11T08:58:38Z|00687|binding|INFO|Releasing lport 2717cdba-7e23-4eb0-81a8-fa322def9612 from this chassis (sb_readonly=0)
Oct 11 08:58:38 compute-0 ovn_controller[152945]: 2025-10-11T08:58:38Z|00688|binding|INFO|Setting lport 2717cdba-7e23-4eb0-81a8-fa322def9612 down in Southbound
Oct 11 08:58:38 compute-0 ovn_controller[152945]: 2025-10-11T08:58:38Z|00689|binding|INFO|Removing iface tap2717cdba-7e ovn-installed in OVS
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:38.493 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:fa:9f 10.100.0.13'], port_security=['fa:16:3e:3b:fa:9f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'b7a3385b-c043-4cc1-963e-3421188270ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9bdaad8dfb9419b870adbb9b7034af4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e241bbdd-2a8f-4ebd-9342-e71a7cc5dbc9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b18009b0-719a-4191-81a0-8bc7fea35127, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=2717cdba-7e23-4eb0-81a8-fa322def9612) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:58:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:38.495 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 2717cdba-7e23-4eb0-81a8-fa322def9612 in datapath bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1 unbound from our chassis
Oct 11 08:58:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:38.497 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:58:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:38.499 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0e85e98f-885b-4f5e-99f3-0a4b801b458c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:38.499 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1 namespace which is not needed anymore
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:38 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Oct 11 08:58:38 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000004d.scope: Consumed 3.334s CPU time.
Oct 11 08:58:38 compute-0 systemd-machined[215705]: Machine qemu-86-instance-0000004d terminated.
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.663 2 INFO nova.virt.libvirt.driver [-] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Instance destroyed successfully.
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.664 2 DEBUG nova.objects.instance [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lazy-loading 'resources' on Instance uuid b7a3385b-c043-4cc1-963e-3421188270ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.678 2 DEBUG nova.virt.libvirt.vif [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:58:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-692932297',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-692932297',id=77,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9bdaad8dfb9419b870adbb9b7034af4',ramdisk_id='',reservation_id='r-nqt260vz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsV221TestJSON-1885783580',owner_user_name='tempest-InstanceActionsV221TestJSON-1885783580-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:58:36Z,user_data=None,user_id='24dea95ec8d047639f1121e457f54149',uuid=b7a3385b-c043-4cc1-963e-3421188270ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2717cdba-7e23-4eb0-81a8-fa322def9612", "address": "fa:16:3e:3b:fa:9f", "network": {"id": "bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-2102542273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9bdaad8dfb9419b870adbb9b7034af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2717cdba-7e", "ovs_interfaceid": "2717cdba-7e23-4eb0-81a8-fa322def9612", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.678 2 DEBUG nova.network.os_vif_util [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Converting VIF {"id": "2717cdba-7e23-4eb0-81a8-fa322def9612", "address": "fa:16:3e:3b:fa:9f", "network": {"id": "bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-2102542273-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9bdaad8dfb9419b870adbb9b7034af4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2717cdba-7e", "ovs_interfaceid": "2717cdba-7e23-4eb0-81a8-fa322def9612", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.679 2 DEBUG nova.network.os_vif_util [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:fa:9f,bridge_name='br-int',has_traffic_filtering=True,id=2717cdba-7e23-4eb0-81a8-fa322def9612,network=Network(bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2717cdba-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.680 2 DEBUG os_vif [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:fa:9f,bridge_name='br-int',has_traffic_filtering=True,id=2717cdba-7e23-4eb0-81a8-fa322def9612,network=Network(bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2717cdba-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.682 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2717cdba-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:58:38 compute-0 neutron-haproxy-ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1[336392]: [NOTICE]   (336396) : haproxy version is 2.8.14-c23fe91
Oct 11 08:58:38 compute-0 neutron-haproxy-ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1[336392]: [NOTICE]   (336396) : path to executable is /usr/sbin/haproxy
Oct 11 08:58:38 compute-0 neutron-haproxy-ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1[336392]: [WARNING]  (336396) : Exiting Master process...
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.688 2 INFO os_vif [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:fa:9f,bridge_name='br-int',has_traffic_filtering=True,id=2717cdba-7e23-4eb0-81a8-fa322def9612,network=Network(bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2717cdba-7e')
Oct 11 08:58:38 compute-0 neutron-haproxy-ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1[336392]: [ALERT]    (336396) : Current worker (336399) exited with code 143 (Terminated)
Oct 11 08:58:38 compute-0 neutron-haproxy-ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1[336392]: [WARNING]  (336396) : All workers exited. Exiting... (0)
Oct 11 08:58:38 compute-0 systemd[1]: libpod-bc5874d383e6d985d73c54086a80624f21a7984b25aad5a75c8775dffddd269f.scope: Deactivated successfully.
Oct 11 08:58:38 compute-0 conmon[336392]: conmon bc5874d383e6d985d73c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc5874d383e6d985d73c54086a80624f21a7984b25aad5a75c8775dffddd269f.scope/container/memory.events
Oct 11 08:58:38 compute-0 podman[336454]: 2025-10-11 08:58:38.701808942 +0000 UTC m=+0.059649542 container died bc5874d383e6d985d73c54086a80624f21a7984b25aad5a75c8775dffddd269f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:58:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc5874d383e6d985d73c54086a80624f21a7984b25aad5a75c8775dffddd269f-userdata-shm.mount: Deactivated successfully.
Oct 11 08:58:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc6a7046389d867a3d28fb34193ff9660fbfd12726720b91fa73dee28f3989f3-merged.mount: Deactivated successfully.
Oct 11 08:58:38 compute-0 podman[336454]: 2025-10-11 08:58:38.751024055 +0000 UTC m=+0.108864675 container cleanup bc5874d383e6d985d73c54086a80624f21a7984b25aad5a75c8775dffddd269f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 08:58:38 compute-0 systemd[1]: libpod-conmon-bc5874d383e6d985d73c54086a80624f21a7984b25aad5a75c8775dffddd269f.scope: Deactivated successfully.
Oct 11 08:58:38 compute-0 podman[336510]: 2025-10-11 08:58:38.853835107 +0000 UTC m=+0.064754607 container remove bc5874d383e6d985d73c54086a80624f21a7984b25aad5a75c8775dffddd269f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:58:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:38.867 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc7ba61-2397-45bd-9b61-332b4e26c4bd]: (4, ('Sat Oct 11 08:58:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1 (bc5874d383e6d985d73c54086a80624f21a7984b25aad5a75c8775dffddd269f)\nbc5874d383e6d985d73c54086a80624f21a7984b25aad5a75c8775dffddd269f\nSat Oct 11 08:58:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1 (bc5874d383e6d985d73c54086a80624f21a7984b25aad5a75c8775dffddd269f)\nbc5874d383e6d985d73c54086a80624f21a7984b25aad5a75c8775dffddd269f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:38.869 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d5890a70-77b7-40c0-9fad-fc8f3688cdcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:38.870 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb3668ff-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:38 compute-0 kernel: tapbb3668ff-e0: left promiscuous mode
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:38 compute-0 nova_compute[260935]: 2025-10-11 08:58:38.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:38.914 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[236ace49-779e-4560-8033-66570fad372d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:38.946 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[36aaa8a2-f57d-47dd-9db2-40cf86d5580f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:38.948 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[145ab80c-9f3d-4345-b000-a992dcd9e3f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:38.973 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c7472adc-ce35-4db5-b15b-689459ecd5fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495980, 'reachable_time': 34817, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336538, 'error': None, 'target': 'ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:38.976 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bb3668ff-ecbd-4f84-9f47-fd2ac2c443f1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:58:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:38.976 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[df43c5e8-3e70-4b16-8320-05c89991c24f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:38 compute-0 systemd[1]: run-netns-ovnmeta\x2dbb3668ff\x2decbd\x2d4f84\x2d9f47\x2dfd2ac2c443f1.mount: Deactivated successfully.
Oct 11 08:58:39 compute-0 podman[336525]: 2025-10-11 08:58:39.009631509 +0000 UTC m=+0.084381547 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 08:58:39 compute-0 nova_compute[260935]: 2025-10-11 08:58:39.140 2 INFO nova.virt.libvirt.driver [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Deleting instance files /var/lib/nova/instances/b7a3385b-c043-4cc1-963e-3421188270ac_del
Oct 11 08:58:39 compute-0 nova_compute[260935]: 2025-10-11 08:58:39.141 2 INFO nova.virt.libvirt.driver [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Deletion of /var/lib/nova/instances/b7a3385b-c043-4cc1-963e-3421188270ac_del complete
Oct 11 08:58:39 compute-0 nova_compute[260935]: 2025-10-11 08:58:39.197 2 INFO nova.compute.manager [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 11 08:58:39 compute-0 nova_compute[260935]: 2025-10-11 08:58:39.198 2 DEBUG oslo.service.loopingcall [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:58:39 compute-0 nova_compute[260935]: 2025-10-11 08:58:39.198 2 DEBUG nova.compute.manager [-] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:58:39 compute-0 nova_compute[260935]: 2025-10-11 08:58:39.198 2 DEBUG nova.network.neutron [-] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:58:39 compute-0 ceph-mon[74313]: pgmap v1664: 321 pgs: 321 active+clean; 88 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Oct 11 08:58:39 compute-0 nova_compute[260935]: 2025-10-11 08:58:39.948 2 DEBUG nova.network.neutron [-] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:58:39 compute-0 nova_compute[260935]: 2025-10-11 08:58:39.975 2 INFO nova.compute.manager [-] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Took 0.78 seconds to deallocate network for instance.
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.017 2 DEBUG nova.compute.manager [req-ce662d18-8a05-4383-b953-0f3231289d3d req-ea6671e9-8524-48a3-b980-d201a85ff1f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Received event network-vif-unplugged-2717cdba-7e23-4eb0-81a8-fa322def9612 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.018 2 DEBUG oslo_concurrency.lockutils [req-ce662d18-8a05-4383-b953-0f3231289d3d req-ea6671e9-8524-48a3-b980-d201a85ff1f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.018 2 DEBUG oslo_concurrency.lockutils [req-ce662d18-8a05-4383-b953-0f3231289d3d req-ea6671e9-8524-48a3-b980-d201a85ff1f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.019 2 DEBUG oslo_concurrency.lockutils [req-ce662d18-8a05-4383-b953-0f3231289d3d req-ea6671e9-8524-48a3-b980-d201a85ff1f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.019 2 DEBUG nova.compute.manager [req-ce662d18-8a05-4383-b953-0f3231289d3d req-ea6671e9-8524-48a3-b980-d201a85ff1f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] No waiting events found dispatching network-vif-unplugged-2717cdba-7e23-4eb0-81a8-fa322def9612 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.020 2 DEBUG nova.compute.manager [req-ce662d18-8a05-4383-b953-0f3231289d3d req-ea6671e9-8524-48a3-b980-d201a85ff1f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Received event network-vif-unplugged-2717cdba-7e23-4eb0-81a8-fa322def9612 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.020 2 DEBUG nova.compute.manager [req-ce662d18-8a05-4383-b953-0f3231289d3d req-ea6671e9-8524-48a3-b980-d201a85ff1f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Received event network-vif-plugged-2717cdba-7e23-4eb0-81a8-fa322def9612 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.020 2 DEBUG oslo_concurrency.lockutils [req-ce662d18-8a05-4383-b953-0f3231289d3d req-ea6671e9-8524-48a3-b980-d201a85ff1f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.021 2 DEBUG oslo_concurrency.lockutils [req-ce662d18-8a05-4383-b953-0f3231289d3d req-ea6671e9-8524-48a3-b980-d201a85ff1f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.021 2 DEBUG oslo_concurrency.lockutils [req-ce662d18-8a05-4383-b953-0f3231289d3d req-ea6671e9-8524-48a3-b980-d201a85ff1f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b7a3385b-c043-4cc1-963e-3421188270ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.022 2 DEBUG nova.compute.manager [req-ce662d18-8a05-4383-b953-0f3231289d3d req-ea6671e9-8524-48a3-b980-d201a85ff1f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] No waiting events found dispatching network-vif-plugged-2717cdba-7e23-4eb0-81a8-fa322def9612 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.022 2 WARNING nova.compute.manager [req-ce662d18-8a05-4383-b953-0f3231289d3d req-ea6671e9-8524-48a3-b980-d201a85ff1f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Received unexpected event network-vif-plugged-2717cdba-7e23-4eb0-81a8-fa322def9612 for instance with vm_state active and task_state deleting.
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.025 2 DEBUG oslo_concurrency.lockutils [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.025 2 DEBUG oslo_concurrency.lockutils [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.046 2 DEBUG nova.compute.manager [req-aeae9d2e-9970-4e35-b803-2098b58e254d req-3b385dd4-1358-43ed-b45b-48e702dfcf8e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Received event network-vif-deleted-2717cdba-7e23-4eb0-81a8-fa322def9612 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.072 2 DEBUG oslo_concurrency.processutils [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 88 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 145 op/s
Oct 11 08:58:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:58:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3107946619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.533 2 DEBUG oslo_concurrency.processutils [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.541 2 DEBUG nova.compute.provider_tree [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.564 2 DEBUG nova.scheduler.client.report [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.599 2 DEBUG oslo_concurrency.lockutils [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:40 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3107946619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.628 2 INFO nova.scheduler.client.report [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Deleted allocations for instance b7a3385b-c043-4cc1-963e-3421188270ac
Oct 11 08:58:40 compute-0 nova_compute[260935]: 2025-10-11 08:58:40.713 2 DEBUG oslo_concurrency.lockutils [None req-f65d1a23-5307-4fe3-8d5c-df108b044ea8 24dea95ec8d047639f1121e457f54149 a9bdaad8dfb9419b870adbb9b7034af4 - - default default] Lock "b7a3385b-c043-4cc1-963e-3421188270ac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:41 compute-0 nova_compute[260935]: 2025-10-11 08:58:41.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:41 compute-0 ceph-mon[74313]: pgmap v1665: 321 pgs: 321 active+clean; 88 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 145 op/s
Oct 11 08:58:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1666: 321 pgs: 321 active+clean; 75 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.6 MiB/s wr, 175 op/s
Oct 11 08:58:42 compute-0 sudo[336568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:58:42 compute-0 sudo[336568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:42 compute-0 sudo[336568]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:42 compute-0 sudo[336593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:58:42 compute-0 sudo[336593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:42 compute-0 sudo[336593]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:42 compute-0 sudo[336618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:58:42 compute-0 sudo[336618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:42 compute-0 sudo[336618]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:42 compute-0 sudo[336643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:58:42 compute-0 sudo[336643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:43 compute-0 sudo[336643]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:58:43 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:58:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:58:43 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:58:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:58:43 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:58:43 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev d65c4ac6-5046-46ff-9aff-eb4a176d1a53 does not exist
Oct 11 08:58:43 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev bc430c7c-04a8-4555-bc48-c1889c149dc2 does not exist
Oct 11 08:58:43 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 86514884-4034-49aa-84c6-1825a89a87b1 does not exist
Oct 11 08:58:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:58:43 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:58:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:58:43 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:58:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:58:43 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:58:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:58:43 compute-0 sudo[336699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:58:43 compute-0 sudo[336699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:43 compute-0 sudo[336699]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:43 compute-0 nova_compute[260935]: 2025-10-11 08:58:43.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:43 compute-0 sudo[336731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:58:43 compute-0 sudo[336731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:43 compute-0 sudo[336731]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:43 compute-0 podman[336723]: 2025-10-11 08:58:43.565090895 +0000 UTC m=+0.112316303 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 11 08:58:43 compute-0 podman[336724]: 2025-10-11 08:58:43.577023726 +0000 UTC m=+0.112780527 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 08:58:43 compute-0 ceph-mon[74313]: pgmap v1666: 321 pgs: 321 active+clean; 75 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.6 MiB/s wr, 175 op/s
Oct 11 08:58:43 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:58:43 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:58:43 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:58:43 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:58:43 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:58:43 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:58:43 compute-0 sudo[336789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:58:43 compute-0 sudo[336789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:43 compute-0 sudo[336789]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:43 compute-0 nova_compute[260935]: 2025-10-11 08:58:43.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:43 compute-0 sudo[336817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:58:43 compute-0 sudo[336817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:44 compute-0 nova_compute[260935]: 2025-10-11 08:58:44.056 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Acquiring lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:44 compute-0 nova_compute[260935]: 2025-10-11 08:58:44.058 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:44 compute-0 nova_compute[260935]: 2025-10-11 08:58:44.084 2 DEBUG nova.compute.manager [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:58:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 41 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 977 KiB/s wr, 215 op/s
Oct 11 08:58:44 compute-0 podman[336883]: 2025-10-11 08:58:44.166371692 +0000 UTC m=+0.078072198 container create d3886a5f0c224a073ad156f599f6e35f65163dad85174d3f28ddfa023bc3f09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 08:58:44 compute-0 nova_compute[260935]: 2025-10-11 08:58:44.185 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:44 compute-0 nova_compute[260935]: 2025-10-11 08:58:44.186 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:44 compute-0 nova_compute[260935]: 2025-10-11 08:58:44.195 2 DEBUG nova.virt.hardware [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:58:44 compute-0 nova_compute[260935]: 2025-10-11 08:58:44.196 2 INFO nova.compute.claims [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:58:44 compute-0 systemd[1]: Started libpod-conmon-d3886a5f0c224a073ad156f599f6e35f65163dad85174d3f28ddfa023bc3f09d.scope.
Oct 11 08:58:44 compute-0 podman[336883]: 2025-10-11 08:58:44.132563147 +0000 UTC m=+0.044263674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:58:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:58:44 compute-0 podman[336883]: 2025-10-11 08:58:44.284886411 +0000 UTC m=+0.196586997 container init d3886a5f0c224a073ad156f599f6e35f65163dad85174d3f28ddfa023bc3f09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 08:58:44 compute-0 podman[336883]: 2025-10-11 08:58:44.299393735 +0000 UTC m=+0.211094241 container start d3886a5f0c224a073ad156f599f6e35f65163dad85174d3f28ddfa023bc3f09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:58:44 compute-0 podman[336883]: 2025-10-11 08:58:44.303916474 +0000 UTC m=+0.215617070 container attach d3886a5f0c224a073ad156f599f6e35f65163dad85174d3f28ddfa023bc3f09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:58:44 compute-0 sharp_raman[336899]: 167 167
Oct 11 08:58:44 compute-0 systemd[1]: libpod-d3886a5f0c224a073ad156f599f6e35f65163dad85174d3f28ddfa023bc3f09d.scope: Deactivated successfully.
Oct 11 08:58:44 compute-0 podman[336883]: 2025-10-11 08:58:44.311729877 +0000 UTC m=+0.223430383 container died d3886a5f0c224a073ad156f599f6e35f65163dad85174d3f28ddfa023bc3f09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:58:44 compute-0 nova_compute[260935]: 2025-10-11 08:58:44.335 2 DEBUG oslo_concurrency.processutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-65f337c6dc582fa088124e2da01ec1c63e0f7660f282d6fc311ea3f2c13baf29-merged.mount: Deactivated successfully.
Oct 11 08:58:44 compute-0 podman[336883]: 2025-10-11 08:58:44.364728848 +0000 UTC m=+0.276429334 container remove d3886a5f0c224a073ad156f599f6e35f65163dad85174d3f28ddfa023bc3f09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:58:44 compute-0 systemd[1]: libpod-conmon-d3886a5f0c224a073ad156f599f6e35f65163dad85174d3f28ddfa023bc3f09d.scope: Deactivated successfully.
Oct 11 08:58:44 compute-0 podman[336943]: 2025-10-11 08:58:44.623392084 +0000 UTC m=+0.064531731 container create 2661bb226deb4ee13d8585ec4a38a5879686f2b3acf66b8abd3e37bf63baf1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wilbur, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 11 08:58:44 compute-0 systemd[1]: Started libpod-conmon-2661bb226deb4ee13d8585ec4a38a5879686f2b3acf66b8abd3e37bf63baf1de.scope.
Oct 11 08:58:44 compute-0 podman[336943]: 2025-10-11 08:58:44.59100169 +0000 UTC m=+0.032141387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:58:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66bb85f938532f832796e1aec4717a71ca4128b909769d8dd501a804e2655e7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66bb85f938532f832796e1aec4717a71ca4128b909769d8dd501a804e2655e7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66bb85f938532f832796e1aec4717a71ca4128b909769d8dd501a804e2655e7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66bb85f938532f832796e1aec4717a71ca4128b909769d8dd501a804e2655e7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66bb85f938532f832796e1aec4717a71ca4128b909769d8dd501a804e2655e7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:58:44 compute-0 podman[336943]: 2025-10-11 08:58:44.738680761 +0000 UTC m=+0.179820448 container init 2661bb226deb4ee13d8585ec4a38a5879686f2b3acf66b8abd3e37bf63baf1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wilbur, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 08:58:44 compute-0 podman[336943]: 2025-10-11 08:58:44.751891028 +0000 UTC m=+0.193030635 container start 2661bb226deb4ee13d8585ec4a38a5879686f2b3acf66b8abd3e37bf63baf1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wilbur, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 08:58:44 compute-0 podman[336943]: 2025-10-11 08:58:44.755385837 +0000 UTC m=+0.196525534 container attach 2661bb226deb4ee13d8585ec4a38a5879686f2b3acf66b8abd3e37bf63baf1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wilbur, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:58:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:58:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3381773082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:44 compute-0 nova_compute[260935]: 2025-10-11 08:58:44.845 2 DEBUG oslo_concurrency.processutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:44 compute-0 nova_compute[260935]: 2025-10-11 08:58:44.855 2 DEBUG nova.compute.provider_tree [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:58:44 compute-0 nova_compute[260935]: 2025-10-11 08:58:44.876 2 DEBUG nova.scheduler.client.report [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:58:44 compute-0 nova_compute[260935]: 2025-10-11 08:58:44.904 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:44 compute-0 nova_compute[260935]: 2025-10-11 08:58:44.905 2 DEBUG nova.compute.manager [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:58:44 compute-0 nova_compute[260935]: 2025-10-11 08:58:44.962 2 DEBUG nova.compute.manager [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:58:44 compute-0 nova_compute[260935]: 2025-10-11 08:58:44.963 2 DEBUG nova.network.neutron [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:58:44 compute-0 nova_compute[260935]: 2025-10-11 08:58:44.987 2 INFO nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.010 2 DEBUG nova.compute.manager [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.111 2 DEBUG nova.compute.manager [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.113 2 DEBUG nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.113 2 INFO nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Creating image(s)
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.147 2 DEBUG nova.storage.rbd_utils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] rbd image 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.182 2 DEBUG nova.storage.rbd_utils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] rbd image 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.214 2 DEBUG nova.storage.rbd_utils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] rbd image 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.218 2 DEBUG oslo_concurrency.processutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.311 2 DEBUG oslo_concurrency.processutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.313 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.314 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.314 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.346 2 DEBUG nova.storage.rbd_utils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] rbd image 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.350 2 DEBUG oslo_concurrency.processutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:45 compute-0 ceph-mon[74313]: pgmap v1667: 321 pgs: 321 active+clean; 41 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 977 KiB/s wr, 215 op/s
Oct 11 08:58:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3381773082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.660 2 DEBUG oslo_concurrency.processutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.750 2 DEBUG nova.policy [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4b13011561414dddaa3b82b6a74b610b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '37bfd840a383483889a90bbdefd0f194', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.760 2 DEBUG nova.storage.rbd_utils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] resizing rbd image 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.879 2 DEBUG nova.objects.instance [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lazy-loading 'migration_context' on Instance uuid 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.913 2 DEBUG nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.914 2 DEBUG nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Ensure instance console log exists: /var/lib/nova/instances/7f16d3ba-54f0-496c-bbd3-09f4d10a41f2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.915 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.916 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:45 compute-0 nova_compute[260935]: 2025-10-11 08:58:45.917 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:45 compute-0 zealous_wilbur[336959]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:58:45 compute-0 zealous_wilbur[336959]: --> relative data size: 1.0
Oct 11 08:58:45 compute-0 zealous_wilbur[336959]: --> All data devices are unavailable
Oct 11 08:58:45 compute-0 systemd[1]: libpod-2661bb226deb4ee13d8585ec4a38a5879686f2b3acf66b8abd3e37bf63baf1de.scope: Deactivated successfully.
Oct 11 08:58:45 compute-0 systemd[1]: libpod-2661bb226deb4ee13d8585ec4a38a5879686f2b3acf66b8abd3e37bf63baf1de.scope: Consumed 1.124s CPU time.
Oct 11 08:58:45 compute-0 podman[336943]: 2025-10-11 08:58:45.964245238 +0000 UTC m=+1.405384885 container died 2661bb226deb4ee13d8585ec4a38a5879686f2b3acf66b8abd3e37bf63baf1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wilbur, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:58:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-66bb85f938532f832796e1aec4717a71ca4128b909769d8dd501a804e2655e7f-merged.mount: Deactivated successfully.
Oct 11 08:58:46 compute-0 podman[336943]: 2025-10-11 08:58:46.03971722 +0000 UTC m=+1.480856867 container remove 2661bb226deb4ee13d8585ec4a38a5879686f2b3acf66b8abd3e37bf63baf1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:58:46 compute-0 systemd[1]: libpod-conmon-2661bb226deb4ee13d8585ec4a38a5879686f2b3acf66b8abd3e37bf63baf1de.scope: Deactivated successfully.
Oct 11 08:58:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1668: 321 pgs: 321 active+clean; 41 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 15 KiB/s wr, 176 op/s
Oct 11 08:58:46 compute-0 sudo[336817]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:46 compute-0 sudo[337171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:58:46 compute-0 sudo[337171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:46 compute-0 sudo[337171]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:46 compute-0 sudo[337196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:58:46 compute-0 sudo[337196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:46 compute-0 sudo[337196]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:46 compute-0 sudo[337221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:58:46 compute-0 sudo[337221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:46 compute-0 sudo[337221]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:46 compute-0 sudo[337246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:58:46 compute-0 sudo[337246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:46 compute-0 nova_compute[260935]: 2025-10-11 08:58:46.548 2 DEBUG nova.network.neutron [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Successfully created port: 1fe7496f-f823-4f38-b7e1-8e30255a1ff9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:58:46 compute-0 ceph-mon[74313]: pgmap v1668: 321 pgs: 321 active+clean; 41 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 15 KiB/s wr, 176 op/s
Oct 11 08:58:46 compute-0 podman[337311]: 2025-10-11 08:58:46.97751163 +0000 UTC m=+0.072018734 container create f0fcfe95aa2676a2a6616e9960e79dfee213e42eedb056fa471c037ca224d103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:58:47 compute-0 systemd[1]: Started libpod-conmon-f0fcfe95aa2676a2a6616e9960e79dfee213e42eedb056fa471c037ca224d103.scope.
Oct 11 08:58:47 compute-0 podman[337311]: 2025-10-11 08:58:46.945896699 +0000 UTC m=+0.040403873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:58:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:58:47 compute-0 podman[337311]: 2025-10-11 08:58:47.08728552 +0000 UTC m=+0.181792614 container init f0fcfe95aa2676a2a6616e9960e79dfee213e42eedb056fa471c037ca224d103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 08:58:47 compute-0 podman[337311]: 2025-10-11 08:58:47.092959211 +0000 UTC m=+0.187466295 container start f0fcfe95aa2676a2a6616e9960e79dfee213e42eedb056fa471c037ca224d103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:58:47 compute-0 podman[337311]: 2025-10-11 08:58:47.096140492 +0000 UTC m=+0.190647596 container attach f0fcfe95aa2676a2a6616e9960e79dfee213e42eedb056fa471c037ca224d103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:58:47 compute-0 boring_tu[337327]: 167 167
Oct 11 08:58:47 compute-0 systemd[1]: libpod-f0fcfe95aa2676a2a6616e9960e79dfee213e42eedb056fa471c037ca224d103.scope: Deactivated successfully.
Oct 11 08:58:47 compute-0 podman[337311]: 2025-10-11 08:58:47.098089228 +0000 UTC m=+0.192596302 container died f0fcfe95aa2676a2a6616e9960e79dfee213e42eedb056fa471c037ca224d103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:58:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-870aadbca2ea0cd84bc41e2ef2fd7d81007a85ac64f627820fe2c9ec78a9a1ea-merged.mount: Deactivated successfully.
Oct 11 08:58:47 compute-0 podman[337311]: 2025-10-11 08:58:47.132356125 +0000 UTC m=+0.226863199 container remove f0fcfe95aa2676a2a6616e9960e79dfee213e42eedb056fa471c037ca224d103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:58:47 compute-0 systemd[1]: libpod-conmon-f0fcfe95aa2676a2a6616e9960e79dfee213e42eedb056fa471c037ca224d103.scope: Deactivated successfully.
Oct 11 08:58:47 compute-0 sshd-session[337169]: Invalid user apache from 152.32.213.170 port 46790
Oct 11 08:58:47 compute-0 sshd-session[337169]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 08:58:47 compute-0 sshd-session[337169]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 08:58:47 compute-0 podman[337350]: 2025-10-11 08:58:47.40887367 +0000 UTC m=+0.070474961 container create d005a74c8b113fba43925fdfe5565a4e8bc6962c635bdacd16de7a024d98c34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:58:47 compute-0 systemd[1]: Started libpod-conmon-d005a74c8b113fba43925fdfe5565a4e8bc6962c635bdacd16de7a024d98c34a.scope.
Oct 11 08:58:47 compute-0 podman[337350]: 2025-10-11 08:58:47.382988412 +0000 UTC m=+0.044589773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:58:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a489d99d1fc88503ac2b4ba4d28025fdae37f7cc4521417c83d345e908cee3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a489d99d1fc88503ac2b4ba4d28025fdae37f7cc4521417c83d345e908cee3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a489d99d1fc88503ac2b4ba4d28025fdae37f7cc4521417c83d345e908cee3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a489d99d1fc88503ac2b4ba4d28025fdae37f7cc4521417c83d345e908cee3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:58:47 compute-0 podman[337350]: 2025-10-11 08:58:47.518054883 +0000 UTC m=+0.179656244 container init d005a74c8b113fba43925fdfe5565a4e8bc6962c635bdacd16de7a024d98c34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:58:47 compute-0 podman[337350]: 2025-10-11 08:58:47.526770011 +0000 UTC m=+0.188371332 container start d005a74c8b113fba43925fdfe5565a4e8bc6962c635bdacd16de7a024d98c34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:58:47 compute-0 podman[337350]: 2025-10-11 08:58:47.530556289 +0000 UTC m=+0.192157670 container attach d005a74c8b113fba43925fdfe5565a4e8bc6962c635bdacd16de7a024d98c34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 08:58:47 compute-0 nova_compute[260935]: 2025-10-11 08:58:47.660 2 DEBUG nova.network.neutron [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Successfully updated port: 1fe7496f-f823-4f38-b7e1-8e30255a1ff9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:58:47 compute-0 nova_compute[260935]: 2025-10-11 08:58:47.701 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Acquiring lock "refresh_cache-7f16d3ba-54f0-496c-bbd3-09f4d10a41f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:58:47 compute-0 nova_compute[260935]: 2025-10-11 08:58:47.702 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Acquired lock "refresh_cache-7f16d3ba-54f0-496c-bbd3-09f4d10a41f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:58:47 compute-0 nova_compute[260935]: 2025-10-11 08:58:47.702 2 DEBUG nova.network.neutron [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:58:47 compute-0 nova_compute[260935]: 2025-10-11 08:58:47.841 2 DEBUG nova.compute.manager [req-8d4e27af-9ba8-4bd7-b2e2-fa6821b8cfd1 req-cca0ea6c-24f0-4d05-994c-1319c44716de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Received event network-changed-1fe7496f-f823-4f38-b7e1-8e30255a1ff9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:47 compute-0 nova_compute[260935]: 2025-10-11 08:58:47.842 2 DEBUG nova.compute.manager [req-8d4e27af-9ba8-4bd7-b2e2-fa6821b8cfd1 req-cca0ea6c-24f0-4d05-994c-1319c44716de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Refreshing instance network info cache due to event network-changed-1fe7496f-f823-4f38-b7e1-8e30255a1ff9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:58:47 compute-0 nova_compute[260935]: 2025-10-11 08:58:47.842 2 DEBUG oslo_concurrency.lockutils [req-8d4e27af-9ba8-4bd7-b2e2-fa6821b8cfd1 req-cca0ea6c-24f0-4d05-994c-1319c44716de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7f16d3ba-54f0-496c-bbd3-09f4d10a41f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:58:47 compute-0 nova_compute[260935]: 2025-10-11 08:58:47.901 2 DEBUG nova.network.neutron [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:58:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 88 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 203 op/s
Oct 11 08:58:48 compute-0 amazing_wing[337367]: {
Oct 11 08:58:48 compute-0 amazing_wing[337367]:     "0": [
Oct 11 08:58:48 compute-0 amazing_wing[337367]:         {
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "devices": [
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "/dev/loop3"
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             ],
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "lv_name": "ceph_lv0",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "lv_size": "21470642176",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "name": "ceph_lv0",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "tags": {
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.cluster_name": "ceph",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.crush_device_class": "",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.encrypted": "0",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.osd_id": "0",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.type": "block",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.vdo": "0"
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             },
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "type": "block",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "vg_name": "ceph_vg0"
Oct 11 08:58:48 compute-0 amazing_wing[337367]:         }
Oct 11 08:58:48 compute-0 amazing_wing[337367]:     ],
Oct 11 08:58:48 compute-0 amazing_wing[337367]:     "1": [
Oct 11 08:58:48 compute-0 amazing_wing[337367]:         {
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "devices": [
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "/dev/loop4"
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             ],
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "lv_name": "ceph_lv1",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "lv_size": "21470642176",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "name": "ceph_lv1",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "tags": {
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.cluster_name": "ceph",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.crush_device_class": "",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.encrypted": "0",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.osd_id": "1",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.type": "block",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.vdo": "0"
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             },
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "type": "block",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "vg_name": "ceph_vg1"
Oct 11 08:58:48 compute-0 amazing_wing[337367]:         }
Oct 11 08:58:48 compute-0 amazing_wing[337367]:     ],
Oct 11 08:58:48 compute-0 amazing_wing[337367]:     "2": [
Oct 11 08:58:48 compute-0 amazing_wing[337367]:         {
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "devices": [
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "/dev/loop5"
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             ],
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "lv_name": "ceph_lv2",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "lv_size": "21470642176",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "name": "ceph_lv2",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "tags": {
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.cluster_name": "ceph",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.crush_device_class": "",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.encrypted": "0",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.osd_id": "2",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.type": "block",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:                 "ceph.vdo": "0"
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             },
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "type": "block",
Oct 11 08:58:48 compute-0 amazing_wing[337367]:             "vg_name": "ceph_vg2"
Oct 11 08:58:48 compute-0 amazing_wing[337367]:         }
Oct 11 08:58:48 compute-0 amazing_wing[337367]:     ]
Oct 11 08:58:48 compute-0 amazing_wing[337367]: }
Oct 11 08:58:48 compute-0 systemd[1]: libpod-d005a74c8b113fba43925fdfe5565a4e8bc6962c635bdacd16de7a024d98c34a.scope: Deactivated successfully.
Oct 11 08:58:48 compute-0 podman[337350]: 2025-10-11 08:58:48.304396245 +0000 UTC m=+0.965997556 container died d005a74c8b113fba43925fdfe5565a4e8bc6962c635bdacd16de7a024d98c34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 11 08:58:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8a489d99d1fc88503ac2b4ba4d28025fdae37f7cc4521417c83d345e908cee3-merged.mount: Deactivated successfully.
Oct 11 08:58:48 compute-0 podman[337350]: 2025-10-11 08:58:48.388085162 +0000 UTC m=+1.049686473 container remove d005a74c8b113fba43925fdfe5565a4e8bc6962c635bdacd16de7a024d98c34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 08:58:48 compute-0 systemd[1]: libpod-conmon-d005a74c8b113fba43925fdfe5565a4e8bc6962c635bdacd16de7a024d98c34a.scope: Deactivated successfully.
Oct 11 08:58:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:58:48 compute-0 sudo[337246]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:48 compute-0 sudo[337392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:58:48 compute-0 nova_compute[260935]: 2025-10-11 08:58:48.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:48 compute-0 sudo[337392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:48 compute-0 sudo[337392]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:48 compute-0 sudo[337417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:58:48 compute-0 nova_compute[260935]: 2025-10-11 08:58:48.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:48 compute-0 sudo[337417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:48 compute-0 sudo[337417]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:48 compute-0 sudo[337442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:58:48 compute-0 sudo[337442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:48 compute-0 sudo[337442]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:48 compute-0 sudo[337467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:58:48 compute-0 sudo[337467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:48 compute-0 sshd-session[337169]: Failed password for invalid user apache from 152.32.213.170 port 46790 ssh2
Oct 11 08:58:49 compute-0 nova_compute[260935]: 2025-10-11 08:58:49.057 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173114.0558534, 6208d031-a043-4761-ba4d-f3564e22723d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:49 compute-0 nova_compute[260935]: 2025-10-11 08:58:49.058 2 INFO nova.compute.manager [-] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] VM Stopped (Lifecycle Event)
Oct 11 08:58:49 compute-0 nova_compute[260935]: 2025-10-11 08:58:49.076 2 DEBUG nova.compute.manager [None req-6adb0e04-f030-4820-adba-1fc9942c1cf3 - - - - - -] [instance: 6208d031-a043-4761-ba4d-f3564e22723d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:49 compute-0 ceph-mon[74313]: pgmap v1669: 321 pgs: 321 active+clean; 88 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 203 op/s
Oct 11 08:58:49 compute-0 podman[337534]: 2025-10-11 08:58:49.410093814 +0000 UTC m=+0.058665894 container create 0e1e86f8ca4fe3946b7f80023fba13ce8c7ce698d52c9b2fe0bec8ab08dc84d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 08:58:49 compute-0 systemd[1]: Started libpod-conmon-0e1e86f8ca4fe3946b7f80023fba13ce8c7ce698d52c9b2fe0bec8ab08dc84d4.scope.
Oct 11 08:58:49 compute-0 podman[337534]: 2025-10-11 08:58:49.381872679 +0000 UTC m=+0.030444799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:58:49 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:58:49 compute-0 podman[337534]: 2025-10-11 08:58:49.525388271 +0000 UTC m=+0.173960471 container init 0e1e86f8ca4fe3946b7f80023fba13ce8c7ce698d52c9b2fe0bec8ab08dc84d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 08:58:49 compute-0 podman[337534]: 2025-10-11 08:58:49.538476735 +0000 UTC m=+0.187048795 container start 0e1e86f8ca4fe3946b7f80023fba13ce8c7ce698d52c9b2fe0bec8ab08dc84d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct 11 08:58:49 compute-0 podman[337534]: 2025-10-11 08:58:49.542397066 +0000 UTC m=+0.190969196 container attach 0e1e86f8ca4fe3946b7f80023fba13ce8c7ce698d52c9b2fe0bec8ab08dc84d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:58:49 compute-0 vigilant_poitras[337550]: 167 167
Oct 11 08:58:49 compute-0 systemd[1]: libpod-0e1e86f8ca4fe3946b7f80023fba13ce8c7ce698d52c9b2fe0bec8ab08dc84d4.scope: Deactivated successfully.
Oct 11 08:58:49 compute-0 podman[337534]: 2025-10-11 08:58:49.548625404 +0000 UTC m=+0.197197464 container died 0e1e86f8ca4fe3946b7f80023fba13ce8c7ce698d52c9b2fe0bec8ab08dc84d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:58:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3167d2adb8667aee278bdc845a9656117fc03006860dbb5987d3381f09bed0ba-merged.mount: Deactivated successfully.
Oct 11 08:58:49 compute-0 podman[337534]: 2025-10-11 08:58:49.602076098 +0000 UTC m=+0.250648138 container remove 0e1e86f8ca4fe3946b7f80023fba13ce8c7ce698d52c9b2fe0bec8ab08dc84d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:58:49 compute-0 systemd[1]: libpod-conmon-0e1e86f8ca4fe3946b7f80023fba13ce8c7ce698d52c9b2fe0bec8ab08dc84d4.scope: Deactivated successfully.
Oct 11 08:58:49 compute-0 podman[337574]: 2025-10-11 08:58:49.816956765 +0000 UTC m=+0.048973457 container create 9c20d04640ae663b1de22862664f8d8153b3d02ca86d9ccafefe172a1c338e85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 08:58:49 compute-0 sshd-session[337169]: Received disconnect from 152.32.213.170 port 46790:11: Bye Bye [preauth]
Oct 11 08:58:49 compute-0 sshd-session[337169]: Disconnected from invalid user apache 152.32.213.170 port 46790 [preauth]
Oct 11 08:58:49 compute-0 systemd[1]: Started libpod-conmon-9c20d04640ae663b1de22862664f8d8153b3d02ca86d9ccafefe172a1c338e85.scope.
Oct 11 08:58:49 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:58:49 compute-0 podman[337574]: 2025-10-11 08:58:49.797287505 +0000 UTC m=+0.029304237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/189f8b5cb2ac0671e267c696fe08675cfb04e362767f752d8790f50b251ace9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/189f8b5cb2ac0671e267c696fe08675cfb04e362767f752d8790f50b251ace9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/189f8b5cb2ac0671e267c696fe08675cfb04e362767f752d8790f50b251ace9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/189f8b5cb2ac0671e267c696fe08675cfb04e362767f752d8790f50b251ace9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:58:49 compute-0 sshd-session[337383]: Invalid user devops from 155.4.244.179 port 47173
Oct 11 08:58:49 compute-0 sshd-session[337383]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 08:58:49 compute-0 sshd-session[337383]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 08:58:49 compute-0 podman[337574]: 2025-10-11 08:58:49.918584603 +0000 UTC m=+0.150601325 container init 9c20d04640ae663b1de22862664f8d8153b3d02ca86d9ccafefe172a1c338e85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:58:49 compute-0 podman[337574]: 2025-10-11 08:58:49.935007692 +0000 UTC m=+0.167024424 container start 9c20d04640ae663b1de22862664f8d8153b3d02ca86d9ccafefe172a1c338e85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:58:49 compute-0 podman[337574]: 2025-10-11 08:58:49.939560142 +0000 UTC m=+0.171576854 container attach 9c20d04640ae663b1de22862664f8d8153b3d02ca86d9ccafefe172a1c338e85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:58:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1670: 321 pgs: 321 active+clean; 88 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.455 2 DEBUG nova.network.neutron [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Updating instance_info_cache with network_info: [{"id": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "address": "fa:16:3e:b8:a6:83", "network": {"id": "6572b80b-ede8-44a7-b186-9e0f82cb1cf7", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-822924085-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37bfd840a383483889a90bbdefd0f194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe7496f-f8", "ovs_interfaceid": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.474 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Releasing lock "refresh_cache-7f16d3ba-54f0-496c-bbd3-09f4d10a41f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.475 2 DEBUG nova.compute.manager [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Instance network_info: |[{"id": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "address": "fa:16:3e:b8:a6:83", "network": {"id": "6572b80b-ede8-44a7-b186-9e0f82cb1cf7", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-822924085-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37bfd840a383483889a90bbdefd0f194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe7496f-f8", "ovs_interfaceid": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.476 2 DEBUG oslo_concurrency.lockutils [req-8d4e27af-9ba8-4bd7-b2e2-fa6821b8cfd1 req-cca0ea6c-24f0-4d05-994c-1319c44716de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7f16d3ba-54f0-496c-bbd3-09f4d10a41f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.476 2 DEBUG nova.network.neutron [req-8d4e27af-9ba8-4bd7-b2e2-fa6821b8cfd1 req-cca0ea6c-24f0-4d05-994c-1319c44716de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Refreshing network info cache for port 1fe7496f-f823-4f38-b7e1-8e30255a1ff9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.481 2 DEBUG nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Start _get_guest_xml network_info=[{"id": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "address": "fa:16:3e:b8:a6:83", "network": {"id": "6572b80b-ede8-44a7-b186-9e0f82cb1cf7", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-822924085-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37bfd840a383483889a90bbdefd0f194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe7496f-f8", "ovs_interfaceid": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.489 2 WARNING nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.498 2 DEBUG nova.virt.libvirt.host [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.499 2 DEBUG nova.virt.libvirt.host [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.505 2 DEBUG nova.virt.libvirt.host [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.506 2 DEBUG nova.virt.libvirt.host [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.507 2 DEBUG nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.508 2 DEBUG nova.virt.hardware [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.509 2 DEBUG nova.virt.hardware [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.509 2 DEBUG nova.virt.hardware [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.510 2 DEBUG nova.virt.hardware [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.510 2 DEBUG nova.virt.hardware [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.511 2 DEBUG nova.virt.hardware [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.511 2 DEBUG nova.virt.hardware [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.512 2 DEBUG nova.virt.hardware [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.512 2 DEBUG nova.virt.hardware [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.513 2 DEBUG nova.virt.hardware [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.513 2 DEBUG nova.virt.hardware [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:58:50 compute-0 nova_compute[260935]: 2025-10-11 08:58:50.518 2 DEBUG oslo_concurrency.processutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]: {
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:         "osd_id": 2,
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:         "type": "bluestore"
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:     },
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:         "osd_id": 0,
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:         "type": "bluestore"
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:     },
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:         "osd_id": 1,
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:         "type": "bluestore"
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]:     }
Oct 11 08:58:50 compute-0 thirsty_shirley[337591]: }
Oct 11 08:58:50 compute-0 systemd[1]: libpod-9c20d04640ae663b1de22862664f8d8153b3d02ca86d9ccafefe172a1c338e85.scope: Deactivated successfully.
Oct 11 08:58:50 compute-0 systemd[1]: libpod-9c20d04640ae663b1de22862664f8d8153b3d02ca86d9ccafefe172a1c338e85.scope: Consumed 1.040s CPU time.
Oct 11 08:58:50 compute-0 podman[337574]: 2025-10-11 08:58:50.972481124 +0000 UTC m=+1.204497846 container died 9c20d04640ae663b1de22862664f8d8153b3d02ca86d9ccafefe172a1c338e85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 08:58:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:58:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1204367569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:58:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-189f8b5cb2ac0671e267c696fe08675cfb04e362767f752d8790f50b251ace9d-merged.mount: Deactivated successfully.
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.020 2 DEBUG oslo_concurrency.processutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:51 compute-0 podman[337574]: 2025-10-11 08:58:51.052050303 +0000 UTC m=+1.284067005 container remove 9c20d04640ae663b1de22862664f8d8153b3d02ca86d9ccafefe172a1c338e85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.064 2 DEBUG nova.storage.rbd_utils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] rbd image 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:51 compute-0 systemd[1]: libpod-conmon-9c20d04640ae663b1de22862664f8d8153b3d02ca86d9ccafefe172a1c338e85.scope: Deactivated successfully.
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.074 2 DEBUG oslo_concurrency.processutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:51 compute-0 sudo[337467]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 08:58:51 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:58:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 08:58:51 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:58:51 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 0a8dcdc5-c9b9-4ab2-bba7-a0e6157030a4 does not exist
Oct 11 08:58:51 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 3661037e-a48e-410b-947e-06b59595c49a does not exist
Oct 11 08:58:51 compute-0 ceph-mon[74313]: pgmap v1670: 321 pgs: 321 active+clean; 88 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 11 08:58:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1204367569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:58:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:58:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:58:51 compute-0 sudo[337678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:58:51 compute-0 sudo[337678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:51 compute-0 sudo[337678]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:51 compute-0 sudo[337720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 08:58:51 compute-0 sudo[337720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:58:51 compute-0 sudo[337720]: pam_unix(sudo:session): session closed for user root
Oct 11 08:58:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:58:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4256605576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.513 2 DEBUG oslo_concurrency.processutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.517 2 DEBUG nova.virt.libvirt.vif [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:58:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1870753311',display_name='tempest-ServerAddressesNegativeTestJSON-server-1870753311',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1870753311',id=78,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='37bfd840a383483889a90bbdefd0f194',ramdisk_id='',reservation_id='r-vgzchiv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1679036360',owner_user_n
ame='tempest-ServerAddressesNegativeTestJSON-1679036360-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:58:45Z,user_data=None,user_id='4b13011561414dddaa3b82b6a74b610b',uuid=7f16d3ba-54f0-496c-bbd3-09f4d10a41f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "address": "fa:16:3e:b8:a6:83", "network": {"id": "6572b80b-ede8-44a7-b186-9e0f82cb1cf7", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-822924085-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37bfd840a383483889a90bbdefd0f194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe7496f-f8", "ovs_interfaceid": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.517 2 DEBUG nova.network.os_vif_util [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Converting VIF {"id": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "address": "fa:16:3e:b8:a6:83", "network": {"id": "6572b80b-ede8-44a7-b186-9e0f82cb1cf7", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-822924085-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37bfd840a383483889a90bbdefd0f194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe7496f-f8", "ovs_interfaceid": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.519 2 DEBUG nova.network.os_vif_util [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:a6:83,bridge_name='br-int',has_traffic_filtering=True,id=1fe7496f-f823-4f38-b7e1-8e30255a1ff9,network=Network(6572b80b-ede8-44a7-b186-9e0f82cb1cf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fe7496f-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.521 2 DEBUG nova.objects.instance [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.547 2 DEBUG nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:58:51 compute-0 nova_compute[260935]:   <uuid>7f16d3ba-54f0-496c-bbd3-09f4d10a41f2</uuid>
Oct 11 08:58:51 compute-0 nova_compute[260935]:   <name>instance-0000004e</name>
Oct 11 08:58:51 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:58:51 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:58:51 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerAddressesNegativeTestJSON-server-1870753311</nova:name>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:58:50</nova:creationTime>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:58:51 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:58:51 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:58:51 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:58:51 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:58:51 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:58:51 compute-0 nova_compute[260935]:         <nova:user uuid="4b13011561414dddaa3b82b6a74b610b">tempest-ServerAddressesNegativeTestJSON-1679036360-project-member</nova:user>
Oct 11 08:58:51 compute-0 nova_compute[260935]:         <nova:project uuid="37bfd840a383483889a90bbdefd0f194">tempest-ServerAddressesNegativeTestJSON-1679036360</nova:project>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:58:51 compute-0 nova_compute[260935]:         <nova:port uuid="1fe7496f-f823-4f38-b7e1-8e30255a1ff9">
Oct 11 08:58:51 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:58:51 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:58:51 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <system>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <entry name="serial">7f16d3ba-54f0-496c-bbd3-09f4d10a41f2</entry>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <entry name="uuid">7f16d3ba-54f0-496c-bbd3-09f4d10a41f2</entry>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     </system>
Oct 11 08:58:51 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:58:51 compute-0 nova_compute[260935]:   <os>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:   </os>
Oct 11 08:58:51 compute-0 nova_compute[260935]:   <features>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:   </features>
Oct 11 08:58:51 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:58:51 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:58:51 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/7f16d3ba-54f0-496c-bbd3-09f4d10a41f2_disk">
Oct 11 08:58:51 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       </source>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:58:51 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/7f16d3ba-54f0-496c-bbd3-09f4d10a41f2_disk.config">
Oct 11 08:58:51 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       </source>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:58:51 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:b8:a6:83"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <target dev="tap1fe7496f-f8"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/7f16d3ba-54f0-496c-bbd3-09f4d10a41f2/console.log" append="off"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <video>
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     </video>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:58:51 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:58:51 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:58:51 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:58:51 compute-0 nova_compute[260935]: </domain>
Oct 11 08:58:51 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.549 2 DEBUG nova.compute.manager [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Preparing to wait for external event network-vif-plugged-1fe7496f-f823-4f38-b7e1-8e30255a1ff9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.550 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Acquiring lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.550 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.551 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.552 2 DEBUG nova.virt.libvirt.vif [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:58:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1870753311',display_name='tempest-ServerAddressesNegativeTestJSON-server-1870753311',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1870753311',id=78,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='37bfd840a383483889a90bbdefd0f194',ramdisk_id='',reservation_id='r-vgzchiv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1679036360',ow
ner_user_name='tempest-ServerAddressesNegativeTestJSON-1679036360-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:58:45Z,user_data=None,user_id='4b13011561414dddaa3b82b6a74b610b',uuid=7f16d3ba-54f0-496c-bbd3-09f4d10a41f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "address": "fa:16:3e:b8:a6:83", "network": {"id": "6572b80b-ede8-44a7-b186-9e0f82cb1cf7", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-822924085-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37bfd840a383483889a90bbdefd0f194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe7496f-f8", "ovs_interfaceid": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.552 2 DEBUG nova.network.os_vif_util [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Converting VIF {"id": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "address": "fa:16:3e:b8:a6:83", "network": {"id": "6572b80b-ede8-44a7-b186-9e0f82cb1cf7", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-822924085-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37bfd840a383483889a90bbdefd0f194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe7496f-f8", "ovs_interfaceid": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.553 2 DEBUG nova.network.os_vif_util [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:a6:83,bridge_name='br-int',has_traffic_filtering=True,id=1fe7496f-f823-4f38-b7e1-8e30255a1ff9,network=Network(6572b80b-ede8-44a7-b186-9e0f82cb1cf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fe7496f-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.554 2 DEBUG os_vif [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:a6:83,bridge_name='br-int',has_traffic_filtering=True,id=1fe7496f-f823-4f38-b7e1-8e30255a1ff9,network=Network(6572b80b-ede8-44a7-b186-9e0f82cb1cf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fe7496f-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.558 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.559 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.564 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1fe7496f-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.565 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1fe7496f-f8, col_values=(('external_ids', {'iface-id': '1fe7496f-f823-4f38-b7e1-8e30255a1ff9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:a6:83', 'vm-uuid': '7f16d3ba-54f0-496c-bbd3-09f4d10a41f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:51 compute-0 NetworkManager[44960]: <info>  [1760173131.5700] manager: (tap1fe7496f-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.579 2 INFO os_vif [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:a6:83,bridge_name='br-int',has_traffic_filtering=True,id=1fe7496f-f823-4f38-b7e1-8e30255a1ff9,network=Network(6572b80b-ede8-44a7-b186-9e0f82cb1cf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fe7496f-f8')
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.687 2 DEBUG nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.689 2 DEBUG nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.690 2 DEBUG nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] No VIF found with MAC fa:16:3e:b8:a6:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.691 2 INFO nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Using config drive
Oct 11 08:58:51 compute-0 nova_compute[260935]: 2025-10-11 08:58:51.727 2 DEBUG nova.storage.rbd_utils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] rbd image 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1671: 321 pgs: 321 active+clean; 88 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 11 08:58:52 compute-0 sshd-session[337383]: Failed password for invalid user devops from 155.4.244.179 port 47173 ssh2
Oct 11 08:58:52 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4256605576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:58:52 compute-0 nova_compute[260935]: 2025-10-11 08:58:52.409 2 INFO nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Creating config drive at /var/lib/nova/instances/7f16d3ba-54f0-496c-bbd3-09f4d10a41f2/disk.config
Oct 11 08:58:52 compute-0 nova_compute[260935]: 2025-10-11 08:58:52.414 2 DEBUG oslo_concurrency.processutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7f16d3ba-54f0-496c-bbd3-09f4d10a41f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphe0z3vun execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:52 compute-0 nova_compute[260935]: 2025-10-11 08:58:52.581 2 DEBUG oslo_concurrency.processutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7f16d3ba-54f0-496c-bbd3-09f4d10a41f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphe0z3vun" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:52 compute-0 nova_compute[260935]: 2025-10-11 08:58:52.610 2 DEBUG nova.storage.rbd_utils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] rbd image 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:52 compute-0 nova_compute[260935]: 2025-10-11 08:58:52.614 2 DEBUG oslo_concurrency.processutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7f16d3ba-54f0-496c-bbd3-09f4d10a41f2/disk.config 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:52 compute-0 nova_compute[260935]: 2025-10-11 08:58:52.820 2 DEBUG oslo_concurrency.processutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7f16d3ba-54f0-496c-bbd3-09f4d10a41f2/disk.config 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:52 compute-0 nova_compute[260935]: 2025-10-11 08:58:52.822 2 INFO nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Deleting local config drive /var/lib/nova/instances/7f16d3ba-54f0-496c-bbd3-09f4d10a41f2/disk.config because it was imported into RBD.
Oct 11 08:58:52 compute-0 kernel: tap1fe7496f-f8: entered promiscuous mode
Oct 11 08:58:52 compute-0 ovn_controller[152945]: 2025-10-11T08:58:52Z|00690|binding|INFO|Claiming lport 1fe7496f-f823-4f38-b7e1-8e30255a1ff9 for this chassis.
Oct 11 08:58:52 compute-0 ovn_controller[152945]: 2025-10-11T08:58:52Z|00691|binding|INFO|1fe7496f-f823-4f38-b7e1-8e30255a1ff9: Claiming fa:16:3e:b8:a6:83 10.100.0.14
Oct 11 08:58:52 compute-0 NetworkManager[44960]: <info>  [1760173132.9049] manager: (tap1fe7496f-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/307)
Oct 11 08:58:52 compute-0 nova_compute[260935]: 2025-10-11 08:58:52.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:52.922 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:a6:83 10.100.0.14'], port_security=['fa:16:3e:b8:a6:83 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7f16d3ba-54f0-496c-bbd3-09f4d10a41f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6572b80b-ede8-44a7-b186-9e0f82cb1cf7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37bfd840a383483889a90bbdefd0f194', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aaa4b18d-6b73-4c5e-af64-b5b3a4d5cb9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3e430929-6cd7-45f9-8de3-0a2cbebc57e5, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=1fe7496f-f823-4f38-b7e1-8e30255a1ff9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:58:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:52.925 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 1fe7496f-f823-4f38-b7e1-8e30255a1ff9 in datapath 6572b80b-ede8-44a7-b186-9e0f82cb1cf7 bound to our chassis
Oct 11 08:58:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:52.928 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6572b80b-ede8-44a7-b186-9e0f82cb1cf7
Oct 11 08:58:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:52.949 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[16473df9-130c-49f9-b095-c3d2b745eacf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:52.951 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6572b80b-e1 in ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:58:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:52.957 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6572b80b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:58:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:52.957 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb38f02-bf37-4cde-9db7-d804aedbd902]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:52 compute-0 systemd-machined[215705]: New machine qemu-87-instance-0000004e.
Oct 11 08:58:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:52.959 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[232a77e6-4644-48da-9368-0ef4172b9f86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:52.980 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e57b019a-2e8a-45e3-838b-9c7df0ecfbf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:52 compute-0 systemd[1]: Started Virtual Machine qemu-87-instance-0000004e.
Oct 11 08:58:52 compute-0 nova_compute[260935]: 2025-10-11 08:58:52.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:53 compute-0 systemd-udevd[337826]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:53 compute-0 ovn_controller[152945]: 2025-10-11T08:58:53Z|00692|binding|INFO|Setting lport 1fe7496f-f823-4f38-b7e1-8e30255a1ff9 ovn-installed in OVS
Oct 11 08:58:53 compute-0 ovn_controller[152945]: 2025-10-11T08:58:53Z|00693|binding|INFO|Setting lport 1fe7496f-f823-4f38-b7e1-8e30255a1ff9 up in Southbound
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.012 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5510d425-047a-487e-9958-971d460f7c55]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:53 compute-0 NetworkManager[44960]: <info>  [1760173133.0180] device (tap1fe7496f-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:58:53 compute-0 NetworkManager[44960]: <info>  [1760173133.0204] device (tap1fe7496f-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.058 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[76e485b6-c5a9-4f07-9da3-d46c286e1e2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.067 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c849ed6c-83b1-4e48-8f78-579f680884be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:53 compute-0 NetworkManager[44960]: <info>  [1760173133.0695] manager: (tap6572b80b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/308)
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.113 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d3cd83-6f80-4ac6-9c49-0e47e609a964]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.117 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9a963b41-08cd-46b2-8794-8e46e343cfe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:53 compute-0 NetworkManager[44960]: <info>  [1760173133.1506] device (tap6572b80b-e0): carrier: link connected
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.157 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9c066046-ca40-476d-8279-a4143a9ce1dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.183 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc4ffcd-b93d-4058-8d9a-2e67e8f1d099]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6572b80b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:3e:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497785, 'reachable_time': 28735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337856, 'error': None, 'target': 'ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:53 compute-0 ceph-mon[74313]: pgmap v1671: 321 pgs: 321 active+clean; 88 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.214 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[11a3cc5f-70de-4482-beb4-960269fa6fe1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:3e59'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497785, 'tstamp': 497785}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337857, 'error': None, 'target': 'ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.245 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f922cb7b-54ee-4f9a-9c20-e2f06c5bb2d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6572b80b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:3e:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 220, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497785, 'reachable_time': 28735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 337858, 'error': None, 'target': 'ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.298 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ed23081b-2e56-497f-96d2-68689fb0358d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.393 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae499a3-b676-4f2c-a119-ada1207ff3e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.395 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6572b80b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.396 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.396 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6572b80b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:53 compute-0 NetworkManager[44960]: <info>  [1760173133.3994] manager: (tap6572b80b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/309)
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:53 compute-0 kernel: tap6572b80b-e0: entered promiscuous mode
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.403 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6572b80b-e0, col_values=(('external_ids', {'iface-id': '2a3e6896-3515-4045-9136-66dd136d2b03'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:53 compute-0 ovn_controller[152945]: 2025-10-11T08:58:53Z|00694|binding|INFO|Releasing lport 2a3e6896-3515-4045-9136-66dd136d2b03 from this chassis (sb_readonly=0)
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.408 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6572b80b-ede8-44a7-b186-9e0f82cb1cf7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6572b80b-ede8-44a7-b186-9e0f82cb1cf7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.410 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e78dd7ef-c452-43b7-be1c-06028088aa10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.411 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-6572b80b-ede8-44a7-b186-9e0f82cb1cf7
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/6572b80b-ede8-44a7-b186-9e0f82cb1cf7.pid.haproxy
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 6572b80b-ede8-44a7-b186-9e0f82cb1cf7
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:58:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:53.412 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7', 'env', 'PROCESS_TAG=haproxy-6572b80b-ede8-44a7-b186-9e0f82cb1cf7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6572b80b-ede8-44a7-b186-9e0f82cb1cf7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.593 2 DEBUG nova.compute.manager [req-9c275057-a01c-4d99-9a59-c9b800cecd09 req-39f1b45b-f33c-4d35-89f7-7452dc111832 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Received event network-vif-plugged-1fe7496f-f823-4f38-b7e1-8e30255a1ff9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.594 2 DEBUG oslo_concurrency.lockutils [req-9c275057-a01c-4d99-9a59-c9b800cecd09 req-39f1b45b-f33c-4d35-89f7-7452dc111832 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.594 2 DEBUG oslo_concurrency.lockutils [req-9c275057-a01c-4d99-9a59-c9b800cecd09 req-39f1b45b-f33c-4d35-89f7-7452dc111832 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.595 2 DEBUG oslo_concurrency.lockutils [req-9c275057-a01c-4d99-9a59-c9b800cecd09 req-39f1b45b-f33c-4d35-89f7-7452dc111832 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.595 2 DEBUG nova.compute.manager [req-9c275057-a01c-4d99-9a59-c9b800cecd09 req-39f1b45b-f33c-4d35-89f7-7452dc111832 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Processing event network-vif-plugged-1fe7496f-f823-4f38-b7e1-8e30255a1ff9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.604 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "8848c29f-c82a-4f50-82f4-b2e317161489" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.605 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.634 2 DEBUG nova.compute.manager [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.647 2 DEBUG nova.network.neutron [req-8d4e27af-9ba8-4bd7-b2e2-fa6821b8cfd1 req-cca0ea6c-24f0-4d05-994c-1319c44716de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Updated VIF entry in instance network info cache for port 1fe7496f-f823-4f38-b7e1-8e30255a1ff9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.648 2 DEBUG nova.network.neutron [req-8d4e27af-9ba8-4bd7-b2e2-fa6821b8cfd1 req-cca0ea6c-24f0-4d05-994c-1319c44716de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Updating instance_info_cache with network_info: [{"id": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "address": "fa:16:3e:b8:a6:83", "network": {"id": "6572b80b-ede8-44a7-b186-9e0f82cb1cf7", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-822924085-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37bfd840a383483889a90bbdefd0f194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe7496f-f8", "ovs_interfaceid": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.655 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173118.654752, b7a3385b-c043-4cc1-963e-3421188270ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.656 2 INFO nova.compute.manager [-] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] VM Stopped (Lifecycle Event)
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.680 2 DEBUG oslo_concurrency.lockutils [req-8d4e27af-9ba8-4bd7-b2e2-fa6821b8cfd1 req-cca0ea6c-24f0-4d05-994c-1319c44716de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7f16d3ba-54f0-496c-bbd3-09f4d10a41f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.686 2 DEBUG nova.compute.manager [None req-c31d7ccb-eaf4-4f9c-9b70-a9da94c99069 - - - - - -] [instance: b7a3385b-c043-4cc1-963e-3421188270ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.811 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.813 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.821 2 DEBUG nova.virt.hardware [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.822 2 INFO nova.compute.claims [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:58:53 compute-0 podman[337933]: 2025-10-11 08:58:53.862006048 +0000 UTC m=+0.062038590 container create 5e02bd6425045c30df2f766c553b6ab59a6b3e9b259943657d4de9d507eaae1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.883 2 DEBUG nova.compute.manager [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.885 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173133.8828578, 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.886 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] VM Started (Lifecycle Event)
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.892 2 DEBUG nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.896 2 INFO nova.virt.libvirt.driver [-] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Instance spawned successfully.
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.896 2 DEBUG nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:58:53 compute-0 sshd-session[337383]: Received disconnect from 155.4.244.179 port 47173:11: Bye Bye [preauth]
Oct 11 08:58:53 compute-0 sshd-session[337383]: Disconnected from invalid user devops 155.4.244.179 port 47173 [preauth]
Oct 11 08:58:53 compute-0 systemd[1]: Started libpod-conmon-5e02bd6425045c30df2f766c553b6ab59a6b3e9b259943657d4de9d507eaae1c.scope.
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.913 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.925 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:58:53 compute-0 podman[337933]: 2025-10-11 08:58:53.835959475 +0000 UTC m=+0.035992047 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:58:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.941 2 DEBUG nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.942 2 DEBUG nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.942 2 DEBUG nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.943 2 DEBUG nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.944 2 DEBUG nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.945 2 DEBUG nova.virt.libvirt.driver [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300968a36cdce69fce13bc17553f18712450fb589cb3b78268edf69db906b4fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.952 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.952 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173133.883021, 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.952 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] VM Paused (Lifecycle Event)
Oct 11 08:58:53 compute-0 podman[337933]: 2025-10-11 08:58:53.965039216 +0000 UTC m=+0.165071868 container init 5e02bd6425045c30df2f766c553b6ab59a6b3e9b259943657d4de9d507eaae1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:58:53 compute-0 podman[337933]: 2025-10-11 08:58:53.972693344 +0000 UTC m=+0.172725926 container start 5e02bd6425045c30df2f766c553b6ab59a6b3e9b259943657d4de9d507eaae1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.982 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.987 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173133.8912241, 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:58:53 compute-0 nova_compute[260935]: 2025-10-11 08:58:53.987 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] VM Resumed (Lifecycle Event)
Oct 11 08:58:54 compute-0 neutron-haproxy-ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7[337948]: [NOTICE]   (337952) : New worker (337954) forked
Oct 11 08:58:54 compute-0 neutron-haproxy-ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7[337948]: [NOTICE]   (337952) : Loading success.
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.017 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.020 2 INFO nova.compute.manager [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Took 8.91 seconds to spawn the instance on the hypervisor.
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.021 2 DEBUG nova.compute.manager [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.027 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.039 2 DEBUG oslo_concurrency.processutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.089 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:58:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1672: 321 pgs: 321 active+clean; 88 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 985 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.160 2 INFO nova.compute.manager [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Took 10.01 seconds to build instance.
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.184 2 DEBUG oslo_concurrency.lockutils [None req-9634aa28-cdb2-4dce-81a1-8c89786153cd 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:58:54 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1252416261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.500 2 DEBUG oslo_concurrency.processutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.511 2 DEBUG nova.compute.provider_tree [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.533 2 DEBUG nova.scheduler.client.report [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.566 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.567 2 DEBUG nova.compute.manager [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.641 2 DEBUG nova.compute.manager [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.642 2 DEBUG nova.network.neutron [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.667 2 INFO nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.688 2 DEBUG nova.compute.manager [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:58:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:58:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:58:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:58:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:58:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:58:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.803 2 DEBUG nova.compute.manager [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.806 2 DEBUG nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.807 2 INFO nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Creating image(s)
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.845 2 DEBUG nova.storage.rbd_utils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image 8848c29f-c82a-4f50-82f4-b2e317161489_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:58:54
Oct 11 08:58:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:58:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:58:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta']
Oct 11 08:58:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.876 2 DEBUG nova.storage.rbd_utils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image 8848c29f-c82a-4f50-82f4-b2e317161489_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.915 2 DEBUG nova.storage.rbd_utils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image 8848c29f-c82a-4f50-82f4-b2e317161489_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:54 compute-0 nova_compute[260935]: 2025-10-11 08:58:54.921 2 DEBUG oslo_concurrency.processutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.022 2 DEBUG oslo_concurrency.processutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.024 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.025 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.026 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.058 2 DEBUG nova.storage.rbd_utils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image 8848c29f-c82a-4f50-82f4-b2e317161489_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.064 2 DEBUG oslo_concurrency.processutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8848c29f-c82a-4f50-82f4-b2e317161489_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:58:55 compute-0 ceph-mon[74313]: pgmap v1672: 321 pgs: 321 active+clean; 88 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 985 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 11 08:58:55 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1252416261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.340 2 DEBUG nova.policy [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9734241540ac484291686e1d189d4eea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0430d49d70a46c2b29abef177f8ccb3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.385 2 DEBUG oslo_concurrency.processutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8848c29f-c82a-4f50-82f4-b2e317161489_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.494 2 DEBUG nova.storage.rbd_utils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] resizing rbd image 8848c29f-c82a-4f50-82f4-b2e317161489_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.612 2 DEBUG nova.objects.instance [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lazy-loading 'migration_context' on Instance uuid 8848c29f-c82a-4f50-82f4-b2e317161489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.637 2 DEBUG nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.637 2 DEBUG nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Ensure instance console log exists: /var/lib/nova/instances/8848c29f-c82a-4f50-82f4-b2e317161489/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.638 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.639 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.639 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.820 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "f64218b5-ea49-4ed7-9945-1b8e056e4161" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.821 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "f64218b5-ea49-4ed7-9945-1b8e056e4161" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.840 2 DEBUG nova.compute.manager [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.951 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.952 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.962 2 DEBUG nova.virt.hardware [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.963 2 INFO nova.compute.claims [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.991 2 DEBUG nova.compute.manager [req-28c95429-57cf-4ed9-9e8c-b3b40a385599 req-241e16f6-109d-47e4-83fa-7ced1da17586 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Received event network-vif-plugged-1fe7496f-f823-4f38-b7e1-8e30255a1ff9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.992 2 DEBUG oslo_concurrency.lockutils [req-28c95429-57cf-4ed9-9e8c-b3b40a385599 req-241e16f6-109d-47e4-83fa-7ced1da17586 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.992 2 DEBUG oslo_concurrency.lockutils [req-28c95429-57cf-4ed9-9e8c-b3b40a385599 req-241e16f6-109d-47e4-83fa-7ced1da17586 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.993 2 DEBUG oslo_concurrency.lockutils [req-28c95429-57cf-4ed9-9e8c-b3b40a385599 req-241e16f6-109d-47e4-83fa-7ced1da17586 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.993 2 DEBUG nova.compute.manager [req-28c95429-57cf-4ed9-9e8c-b3b40a385599 req-241e16f6-109d-47e4-83fa-7ced1da17586 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] No waiting events found dispatching network-vif-plugged-1fe7496f-f823-4f38-b7e1-8e30255a1ff9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:55 compute-0 nova_compute[260935]: 2025-10-11 08:58:55.994 2 WARNING nova.compute.manager [req-28c95429-57cf-4ed9-9e8c-b3b40a385599 req-241e16f6-109d-47e4-83fa-7ced1da17586 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Received unexpected event network-vif-plugged-1fe7496f-f823-4f38-b7e1-8e30255a1ff9 for instance with vm_state active and task_state None.
Oct 11 08:58:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1673: 321 pgs: 321 active+clean; 88 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.185 2 DEBUG oslo_concurrency.processutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.244 2 DEBUG nova.network.neutron [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Successfully created port: e121c426-7734-4def-be42-b69b19dcbf29 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.666 2 DEBUG oslo_concurrency.lockutils [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Acquiring lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.667 2 DEBUG oslo_concurrency.lockutils [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.667 2 DEBUG oslo_concurrency.lockutils [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Acquiring lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.672 2 DEBUG oslo_concurrency.lockutils [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.672 2 DEBUG oslo_concurrency.lockutils [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.674 2 INFO nova.compute.manager [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Terminating instance
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.675 2 DEBUG nova.compute.manager [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:58:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:58:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1521224477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.706 2 DEBUG oslo_concurrency.processutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.714 2 DEBUG nova.compute.provider_tree [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:58:56 compute-0 kernel: tap1fe7496f-f8 (unregistering): left promiscuous mode
Oct 11 08:58:56 compute-0 NetworkManager[44960]: <info>  [1760173136.7273] device (tap1fe7496f-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.734 2 DEBUG nova.scheduler.client.report [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:56 compute-0 ovn_controller[152945]: 2025-10-11T08:58:56Z|00695|binding|INFO|Releasing lport 1fe7496f-f823-4f38-b7e1-8e30255a1ff9 from this chassis (sb_readonly=0)
Oct 11 08:58:56 compute-0 ovn_controller[152945]: 2025-10-11T08:58:56Z|00696|binding|INFO|Setting lport 1fe7496f-f823-4f38-b7e1-8e30255a1ff9 down in Southbound
Oct 11 08:58:56 compute-0 ovn_controller[152945]: 2025-10-11T08:58:56Z|00697|binding|INFO|Removing iface tap1fe7496f-f8 ovn-installed in OVS
Oct 11 08:58:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:56.751 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:a6:83 10.100.0.14'], port_security=['fa:16:3e:b8:a6:83 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7f16d3ba-54f0-496c-bbd3-09f4d10a41f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6572b80b-ede8-44a7-b186-9e0f82cb1cf7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37bfd840a383483889a90bbdefd0f194', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aaa4b18d-6b73-4c5e-af64-b5b3a4d5cb9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3e430929-6cd7-45f9-8de3-0a2cbebc57e5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=1fe7496f-f823-4f38-b7e1-8e30255a1ff9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:58:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:56.753 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 1fe7496f-f823-4f38-b7e1-8e30255a1ff9 in datapath 6572b80b-ede8-44a7-b186-9e0f82cb1cf7 unbound from our chassis
Oct 11 08:58:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:56.756 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6572b80b-ede8-44a7-b186-9e0f82cb1cf7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:58:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:56.757 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33515775-37c6-421f-80ac-78cde539cd26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:56.758 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7 namespace which is not needed anymore
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.760 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.761 2 DEBUG nova.compute.manager [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:56 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Oct 11 08:58:56 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004e.scope: Consumed 3.643s CPU time.
Oct 11 08:58:56 compute-0 systemd-machined[215705]: Machine qemu-87-instance-0000004e terminated.
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.816 2 DEBUG nova.compute.manager [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.818 2 DEBUG nova.network.neutron [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.836 2 INFO nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.863 2 DEBUG nova.compute.manager [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:58:56 compute-0 neutron-haproxy-ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7[337948]: [NOTICE]   (337952) : haproxy version is 2.8.14-c23fe91
Oct 11 08:58:56 compute-0 neutron-haproxy-ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7[337948]: [NOTICE]   (337952) : path to executable is /usr/sbin/haproxy
Oct 11 08:58:56 compute-0 neutron-haproxy-ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7[337948]: [ALERT]    (337952) : Current worker (337954) exited with code 143 (Terminated)
Oct 11 08:58:56 compute-0 neutron-haproxy-ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7[337948]: [WARNING]  (337952) : All workers exited. Exiting... (0)
Oct 11 08:58:56 compute-0 systemd[1]: libpod-5e02bd6425045c30df2f766c553b6ab59a6b3e9b259943657d4de9d507eaae1c.scope: Deactivated successfully.
Oct 11 08:58:56 compute-0 podman[338198]: 2025-10-11 08:58:56.917220486 +0000 UTC m=+0.057293365 container died 5e02bd6425045c30df2f766c553b6ab59a6b3e9b259943657d4de9d507eaae1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.925 2 INFO nova.virt.libvirt.driver [-] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Instance destroyed successfully.
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.926 2 DEBUG nova.objects.instance [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lazy-loading 'resources' on Instance uuid 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.946 2 DEBUG nova.virt.libvirt.vif [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:58:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1870753311',display_name='tempest-ServerAddressesNegativeTestJSON-server-1870753311',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1870753311',id=78,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:58:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='37bfd840a383483889a90bbdefd0f194',ramdisk_id='',reservation_id='r-vgzchiv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1679036360',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1679036360-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:58:54Z,user_data=None,user_id='4b13011561414dddaa3b82b6a74b610b',uuid=7f16d3ba-54f0-496c-bbd3-09f4d10a41f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "address": "fa:16:3e:b8:a6:83", "network": {"id": "6572b80b-ede8-44a7-b186-9e0f82cb1cf7", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-822924085-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37bfd840a383483889a90bbdefd0f194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe7496f-f8", "ovs_interfaceid": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.947 2 DEBUG nova.network.os_vif_util [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Converting VIF {"id": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "address": "fa:16:3e:b8:a6:83", "network": {"id": "6572b80b-ede8-44a7-b186-9e0f82cb1cf7", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-822924085-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37bfd840a383483889a90bbdefd0f194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fe7496f-f8", "ovs_interfaceid": "1fe7496f-f823-4f38-b7e1-8e30255a1ff9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.948 2 DEBUG nova.network.os_vif_util [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:a6:83,bridge_name='br-int',has_traffic_filtering=True,id=1fe7496f-f823-4f38-b7e1-8e30255a1ff9,network=Network(6572b80b-ede8-44a7-b186-9e0f82cb1cf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fe7496f-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.949 2 DEBUG os_vif [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:a6:83,bridge_name='br-int',has_traffic_filtering=True,id=1fe7496f-f823-4f38-b7e1-8e30255a1ff9,network=Network(6572b80b-ede8-44a7-b186-9e0f82cb1cf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fe7496f-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.952 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1fe7496f-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.962 2 DEBUG nova.compute.manager [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.964 2 DEBUG nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.965 2 INFO nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Creating image(s)
Oct 11 08:58:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5e02bd6425045c30df2f766c553b6ab59a6b3e9b259943657d4de9d507eaae1c-userdata-shm.mount: Deactivated successfully.
Oct 11 08:58:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-300968a36cdce69fce13bc17553f18712450fb589cb3b78268edf69db906b4fd-merged.mount: Deactivated successfully.
Oct 11 08:58:56 compute-0 podman[338198]: 2025-10-11 08:58:56.985480162 +0000 UTC m=+0.125553041 container cleanup 5e02bd6425045c30df2f766c553b6ab59a6b3e9b259943657d4de9d507eaae1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 08:58:56 compute-0 nova_compute[260935]: 2025-10-11 08:58:56.994 2 DEBUG nova.storage.rbd_utils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image f64218b5-ea49-4ed7-9945-1b8e056e4161_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:56 compute-0 systemd[1]: libpod-conmon-5e02bd6425045c30df2f766c553b6ab59a6b3e9b259943657d4de9d507eaae1c.scope: Deactivated successfully.
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.028 2 DEBUG nova.storage.rbd_utils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image f64218b5-ea49-4ed7-9945-1b8e056e4161_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.061 2 DEBUG nova.storage.rbd_utils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image f64218b5-ea49-4ed7-9945-1b8e056e4161_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.066 2 DEBUG oslo_concurrency.processutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:57 compute-0 podman[338254]: 2025-10-11 08:58:57.077878717 +0000 UTC m=+0.054566647 container remove 5e02bd6425045c30df2f766c553b6ab59a6b3e9b259943657d4de9d507eaae1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:58:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:57.088 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1c36ff-2812-46e5-a1a0-396df9e78093]: (4, ('Sat Oct 11 08:58:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7 (5e02bd6425045c30df2f766c553b6ab59a6b3e9b259943657d4de9d507eaae1c)\n5e02bd6425045c30df2f766c553b6ab59a6b3e9b259943657d4de9d507eaae1c\nSat Oct 11 08:58:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7 (5e02bd6425045c30df2f766c553b6ab59a6b3e9b259943657d4de9d507eaae1c)\n5e02bd6425045c30df2f766c553b6ab59a6b3e9b259943657d4de9d507eaae1c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:57.090 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2be3ac06-10aa-4c7e-bf4c-390cb2ff8eaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:57.091 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6572b80b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:58:57 compute-0 kernel: tap6572b80b-e0: left promiscuous mode
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.114 2 DEBUG nova.policy [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9734241540ac484291686e1d189d4eea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0430d49d70a46c2b29abef177f8ccb3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.117 2 INFO os_vif [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:a6:83,bridge_name='br-int',has_traffic_filtering=True,id=1fe7496f-f823-4f38-b7e1-8e30255a1ff9,network=Network(6572b80b-ede8-44a7-b186-9e0f82cb1cf7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fe7496f-f8')
Oct 11 08:58:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:57.131 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cc129276-b480-4aba-b961-faa317cc79ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:57.171 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c792a4c1-6c10-4a8d-816d-a212580d2f49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:57.173 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[da525276-1ba0-4386-a121-2a35eadc21f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.174 2 DEBUG oslo_concurrency.processutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.175 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.175 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.176 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.202 2 DEBUG nova.storage.rbd_utils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image f64218b5-ea49-4ed7-9945-1b8e056e4161_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:58:57 compute-0 ceph-mon[74313]: pgmap v1673: 321 pgs: 321 active+clean; 88 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 08:58:57 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1521224477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.207 2 DEBUG oslo_concurrency.processutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 f64218b5-ea49-4ed7-9945-1b8e056e4161_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:57.206 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c3934ab5-572b-4f9d-9995-dec0b075a2c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497775, 'reachable_time': 21959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338330, 'error': None, 'target': 'ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:57.213 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6572b80b-ede8-44a7-b186-9e0f82cb1cf7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:58:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d6572b80b\x2dede8\x2d44a7\x2db186\x2d9e0f82cb1cf7.mount: Deactivated successfully.
Oct 11 08:58:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:58:57.213 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed7df3b-1af1-434d-954d-90b8d89d651c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.581 2 DEBUG oslo_concurrency.processutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 f64218b5-ea49-4ed7-9945-1b8e056e4161_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.693 2 INFO nova.virt.libvirt.driver [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Deleting instance files /var/lib/nova/instances/7f16d3ba-54f0-496c-bbd3-09f4d10a41f2_del
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.695 2 INFO nova.virt.libvirt.driver [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Deletion of /var/lib/nova/instances/7f16d3ba-54f0-496c-bbd3-09f4d10a41f2_del complete
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.711 2 DEBUG nova.storage.rbd_utils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] resizing rbd image f64218b5-ea49-4ed7-9945-1b8e056e4161_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.791 2 INFO nova.compute.manager [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Took 1.12 seconds to destroy the instance on the hypervisor.
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.792 2 DEBUG oslo.service.loopingcall [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.793 2 DEBUG nova.compute.manager [-] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.793 2 DEBUG nova.network.neutron [-] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.844 2 DEBUG nova.objects.instance [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lazy-loading 'migration_context' on Instance uuid f64218b5-ea49-4ed7-9945-1b8e056e4161 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.890 2 DEBUG nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.891 2 DEBUG nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Ensure instance console log exists: /var/lib/nova/instances/f64218b5-ea49-4ed7-9945-1b8e056e4161/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.891 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.891 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.892 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.906 2 DEBUG nova.network.neutron [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Successfully updated port: e121c426-7734-4def-be42-b69b19dcbf29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.943 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "refresh_cache-8848c29f-c82a-4f50-82f4-b2e317161489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.944 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquired lock "refresh_cache-8848c29f-c82a-4f50-82f4-b2e317161489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.944 2 DEBUG nova.network.neutron [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:58:57 compute-0 nova_compute[260935]: 2025-10-11 08:58:57.956 2 DEBUG nova.network.neutron [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Successfully created port: 2d377d55-39e9-4780-bc6f-8bd84e5a93a7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:58:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 321 active+clean; 110 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 129 op/s
Oct 11 08:58:58 compute-0 nova_compute[260935]: 2025-10-11 08:58:58.183 2 DEBUG nova.compute.manager [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Received event network-vif-unplugged-1fe7496f-f823-4f38-b7e1-8e30255a1ff9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:58 compute-0 nova_compute[260935]: 2025-10-11 08:58:58.184 2 DEBUG oslo_concurrency.lockutils [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:58 compute-0 nova_compute[260935]: 2025-10-11 08:58:58.185 2 DEBUG oslo_concurrency.lockutils [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:58 compute-0 nova_compute[260935]: 2025-10-11 08:58:58.185 2 DEBUG oslo_concurrency.lockutils [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:58 compute-0 nova_compute[260935]: 2025-10-11 08:58:58.186 2 DEBUG nova.compute.manager [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] No waiting events found dispatching network-vif-unplugged-1fe7496f-f823-4f38-b7e1-8e30255a1ff9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:58 compute-0 nova_compute[260935]: 2025-10-11 08:58:58.186 2 DEBUG nova.compute.manager [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Received event network-vif-unplugged-1fe7496f-f823-4f38-b7e1-8e30255a1ff9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:58:58 compute-0 nova_compute[260935]: 2025-10-11 08:58:58.187 2 DEBUG nova.compute.manager [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Received event network-vif-plugged-1fe7496f-f823-4f38-b7e1-8e30255a1ff9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:58 compute-0 nova_compute[260935]: 2025-10-11 08:58:58.187 2 DEBUG oslo_concurrency.lockutils [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:58 compute-0 nova_compute[260935]: 2025-10-11 08:58:58.187 2 DEBUG oslo_concurrency.lockutils [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:58 compute-0 nova_compute[260935]: 2025-10-11 08:58:58.188 2 DEBUG oslo_concurrency.lockutils [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:58:58 compute-0 nova_compute[260935]: 2025-10-11 08:58:58.188 2 DEBUG nova.compute.manager [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] No waiting events found dispatching network-vif-plugged-1fe7496f-f823-4f38-b7e1-8e30255a1ff9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:58:58 compute-0 nova_compute[260935]: 2025-10-11 08:58:58.189 2 WARNING nova.compute.manager [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Received unexpected event network-vif-plugged-1fe7496f-f823-4f38-b7e1-8e30255a1ff9 for instance with vm_state active and task_state deleting.
Oct 11 08:58:58 compute-0 nova_compute[260935]: 2025-10-11 08:58:58.189 2 DEBUG nova.compute.manager [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Received event network-changed-e121c426-7734-4def-be42-b69b19dcbf29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:58 compute-0 nova_compute[260935]: 2025-10-11 08:58:58.190 2 DEBUG nova.compute.manager [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Refreshing instance network info cache due to event network-changed-e121c426-7734-4def-be42-b69b19dcbf29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:58:58 compute-0 nova_compute[260935]: 2025-10-11 08:58:58.190 2 DEBUG oslo_concurrency.lockutils [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8848c29f-c82a-4f50-82f4-b2e317161489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:58:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:58:58 compute-0 nova_compute[260935]: 2025-10-11 08:58:58.566 2 DEBUG nova.network.neutron [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:58:58 compute-0 nova_compute[260935]: 2025-10-11 08:58:58.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.099 2 DEBUG nova.network.neutron [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Successfully updated port: 2d377d55-39e9-4780-bc6f-8bd84e5a93a7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.123 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "refresh_cache-f64218b5-ea49-4ed7-9945-1b8e056e4161" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.124 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquired lock "refresh_cache-f64218b5-ea49-4ed7-9945-1b8e056e4161" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.124 2 DEBUG nova.network.neutron [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:58:59 compute-0 ceph-mon[74313]: pgmap v1674: 321 pgs: 321 active+clean; 110 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 129 op/s
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.232 2 DEBUG nova.compute.manager [req-8b7fd642-f32e-4650-bf68-05fedc24c577 req-587207e2-40e4-48d4-a472-dd288647222f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Received event network-changed-2d377d55-39e9-4780-bc6f-8bd84e5a93a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.233 2 DEBUG nova.compute.manager [req-8b7fd642-f32e-4650-bf68-05fedc24c577 req-587207e2-40e4-48d4-a472-dd288647222f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Refreshing instance network info cache due to event network-changed-2d377d55-39e9-4780-bc6f-8bd84e5a93a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.233 2 DEBUG oslo_concurrency.lockutils [req-8b7fd642-f32e-4650-bf68-05fedc24c577 req-587207e2-40e4-48d4-a472-dd288647222f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f64218b5-ea49-4ed7-9945-1b8e056e4161" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.345 2 DEBUG nova.network.neutron [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.471 2 DEBUG nova.network.neutron [-] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.490 2 INFO nova.compute.manager [-] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Took 1.70 seconds to deallocate network for instance.
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.503 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.504 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.519 2 DEBUG nova.compute.manager [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.546 2 DEBUG oslo_concurrency.lockutils [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.547 2 DEBUG oslo_concurrency.lockutils [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.603 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.656 2 DEBUG oslo_concurrency.processutils [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:58:59 compute-0 nova_compute[260935]: 2025-10-11 08:58:59.985 2 DEBUG nova.network.neutron [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Updating instance_info_cache with network_info: [{"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.014 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Releasing lock "refresh_cache-8848c29f-c82a-4f50-82f4-b2e317161489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.014 2 DEBUG nova.compute.manager [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Instance network_info: |[{"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.015 2 DEBUG oslo_concurrency.lockutils [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8848c29f-c82a-4f50-82f4-b2e317161489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.016 2 DEBUG nova.network.neutron [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Refreshing network info cache for port e121c426-7734-4def-be42-b69b19dcbf29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.021 2 DEBUG nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Start _get_guest_xml network_info=[{"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.030 2 WARNING nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.039 2 DEBUG nova.virt.libvirt.host [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.040 2 DEBUG nova.virt.libvirt.host [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.055 2 DEBUG nova.virt.libvirt.host [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.056 2 DEBUG nova.virt.libvirt.host [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.056 2 DEBUG nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.057 2 DEBUG nova.virt.hardware [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.058 2 DEBUG nova.virt.hardware [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.059 2 DEBUG nova.virt.hardware [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.059 2 DEBUG nova.virt.hardware [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.060 2 DEBUG nova.virt.hardware [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.060 2 DEBUG nova.virt.hardware [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.061 2 DEBUG nova.virt.hardware [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.061 2 DEBUG nova.virt.hardware [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.062 2 DEBUG nova.virt.hardware [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.062 2 DEBUG nova.virt.hardware [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.063 2 DEBUG nova.virt.hardware [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.069 2 DEBUG oslo_concurrency.processutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 321 active+clean; 110 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 11 08:59:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:59:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1710621558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.166 2 DEBUG oslo_concurrency.processutils [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.174 2 DEBUG nova.compute.provider_tree [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.209 2 DEBUG nova.scheduler.client.report [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:59:00 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1710621558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.239 2 DEBUG oslo_concurrency.lockutils [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.244 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.258 2 DEBUG nova.virt.hardware [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.258 2 INFO nova.compute.claims [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.281 2 DEBUG nova.compute.manager [req-a19735aa-774c-4564-87b7-87615b78e336 req-59638d49-075a-4bbf-8d35-589127ecfe23 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Received event network-vif-deleted-1fe7496f-f823-4f38-b7e1-8e30255a1ff9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.284 2 INFO nova.scheduler.client.report [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Deleted allocations for instance 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.358 2 DEBUG oslo_concurrency.lockutils [None req-9b8069e8-fb7a-4b31-8d8f-1f002cede09a 4b13011561414dddaa3b82b6a74b610b 37bfd840a383483889a90bbdefd0f194 - - default default] Lock "7f16d3ba-54f0-496c-bbd3-09f4d10a41f2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.429 2 DEBUG nova.network.neutron [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Updating instance_info_cache with network_info: [{"id": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "address": "fa:16:3e:d5:15:a9", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d377d55-39", "ovs_interfaceid": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.431 2 DEBUG oslo_concurrency.processutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.479 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Releasing lock "refresh_cache-f64218b5-ea49-4ed7-9945-1b8e056e4161" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.481 2 DEBUG nova.compute.manager [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Instance network_info: |[{"id": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "address": "fa:16:3e:d5:15:a9", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d377d55-39", "ovs_interfaceid": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.482 2 DEBUG oslo_concurrency.lockutils [req-8b7fd642-f32e-4650-bf68-05fedc24c577 req-587207e2-40e4-48d4-a472-dd288647222f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f64218b5-ea49-4ed7-9945-1b8e056e4161" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.483 2 DEBUG nova.network.neutron [req-8b7fd642-f32e-4650-bf68-05fedc24c577 req-587207e2-40e4-48d4-a472-dd288647222f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Refreshing network info cache for port 2d377d55-39e9-4780-bc6f-8bd84e5a93a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.490 2 DEBUG nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Start _get_guest_xml network_info=[{"id": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "address": "fa:16:3e:d5:15:a9", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d377d55-39", "ovs_interfaceid": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '95632eb9-5895-4e20-b760-0f149aadf400'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.499 2 WARNING nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.520 2 DEBUG nova.virt.libvirt.host [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.521 2 DEBUG nova.virt.libvirt.host [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.526 2 DEBUG nova.virt.libvirt.host [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.527 2 DEBUG nova.virt.libvirt.host [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.527 2 DEBUG nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.528 2 DEBUG nova.virt.hardware [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.529 2 DEBUG nova.virt.hardware [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.530 2 DEBUG nova.virt.hardware [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.530 2 DEBUG nova.virt.hardware [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.531 2 DEBUG nova.virt.hardware [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.532 2 DEBUG nova.virt.hardware [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.532 2 DEBUG nova.virt.hardware [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.533 2 DEBUG nova.virt.hardware [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.534 2 DEBUG nova.virt.hardware [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.534 2 DEBUG nova.virt.hardware [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.535 2 DEBUG nova.virt.hardware [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.541 2 DEBUG oslo_concurrency.processutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:59:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3821872773' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.604 2 DEBUG oslo_concurrency.processutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.644 2 DEBUG nova.storage.rbd_utils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image 8848c29f-c82a-4f50-82f4-b2e317161489_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.651 2 DEBUG oslo_concurrency.processutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.725 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:59:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:59:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1214567546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.954 2 DEBUG oslo_concurrency.processutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:00 compute-0 nova_compute[260935]: 2025-10-11 08:59:00.964 2 DEBUG nova.compute.provider_tree [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.016 2 DEBUG nova.scheduler.client.report [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.051 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.052 2 DEBUG nova.compute.manager [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:59:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:59:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1249750289' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:59:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1707466229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.095 2 DEBUG oslo_concurrency.processutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.133 2 DEBUG nova.storage.rbd_utils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image f64218b5-ea49-4ed7-9945-1b8e056e4161_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.140 2 DEBUG oslo_concurrency.processutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.204 2 DEBUG oslo_concurrency.processutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.214 2 DEBUG nova.virt.libvirt.vif [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:58:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1291030350',display_name='tempest-ListServerFiltersTestJSON-instance-1291030350',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1291030350',id=79,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0430d49d70a46c2b29abef177f8ccb3',ramdisk_id='',reservation_id='r-q7why563',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-867232992',owner_user_name='tempest-ListSe
rverFiltersTestJSON-867232992-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:58:54Z,user_data=None,user_id='9734241540ac484291686e1d189d4eea',uuid=8848c29f-c82a-4f50-82f4-b2e317161489,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.216 2 DEBUG nova.network.os_vif_util [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converting VIF {"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.217 2 DEBUG nova.network.os_vif_util [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:27:e3,bridge_name='br-int',has_traffic_filtering=True,id=e121c426-7734-4def-be42-b69b19dcbf29,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape121c426-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.218 2 DEBUG nova.objects.instance [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8848c29f-c82a-4f50-82f4-b2e317161489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.220 2 DEBUG nova.compute.manager [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.221 2 DEBUG nova.network.neutron [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:59:01 compute-0 ceph-mon[74313]: pgmap v1675: 321 pgs: 321 active+clean; 110 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 11 08:59:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3821872773' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1214567546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1249750289' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1707466229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.250 2 INFO nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.256 2 DEBUG nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <uuid>8848c29f-c82a-4f50-82f4-b2e317161489</uuid>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <name>instance-0000004f</name>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1291030350</nova:name>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:59:00</nova:creationTime>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <nova:user uuid="9734241540ac484291686e1d189d4eea">tempest-ListServerFiltersTestJSON-867232992-project-member</nova:user>
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <nova:project uuid="b0430d49d70a46c2b29abef177f8ccb3">tempest-ListServerFiltersTestJSON-867232992</nova:project>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <nova:port uuid="e121c426-7734-4def-be42-b69b19dcbf29">
Oct 11 08:59:01 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <system>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <entry name="serial">8848c29f-c82a-4f50-82f4-b2e317161489</entry>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <entry name="uuid">8848c29f-c82a-4f50-82f4-b2e317161489</entry>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </system>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <os>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   </os>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <features>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   </features>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/8848c29f-c82a-4f50-82f4-b2e317161489_disk">
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       </source>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/8848c29f-c82a-4f50-82f4-b2e317161489_disk.config">
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       </source>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:e4:27:e3"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <target dev="tape121c426-77"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/8848c29f-c82a-4f50-82f4-b2e317161489/console.log" append="off"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <video>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </video>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:59:01 compute-0 nova_compute[260935]: </domain>
Oct 11 08:59:01 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.264 2 DEBUG nova.compute.manager [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Preparing to wait for external event network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.264 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.265 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.265 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.266 2 DEBUG nova.virt.libvirt.vif [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:58:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1291030350',display_name='tempest-ListServerFiltersTestJSON-instance-1291030350',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1291030350',id=79,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0430d49d70a46c2b29abef177f8ccb3',ramdisk_id='',reservation_id='r-q7why563',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-867232992',owner_user_name='tempest-ListServerFiltersTestJSON-867232992-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:58:54Z,user_data=None,user_id='9734241540ac484291686e1d189d4eea',uuid=8848c29f-c82a-4f50-82f4-b2e317161489,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.266 2 DEBUG nova.network.os_vif_util [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converting VIF {"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.267 2 DEBUG nova.network.os_vif_util [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:27:e3,bridge_name='br-int',has_traffic_filtering=True,id=e121c426-7734-4def-be42-b69b19dcbf29,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape121c426-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.268 2 DEBUG os_vif [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:27:e3,bridge_name='br-int',has_traffic_filtering=True,id=e121c426-7734-4def-be42-b69b19dcbf29,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape121c426-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.272 2 DEBUG nova.compute.manager [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.276 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape121c426-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.277 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape121c426-77, col_values=(('external_ids', {'iface-id': 'e121c426-7734-4def-be42-b69b19dcbf29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:27:e3', 'vm-uuid': '8848c29f-c82a-4f50-82f4-b2e317161489'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:01 compute-0 NetworkManager[44960]: <info>  [1760173141.2947] manager: (tape121c426-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/310)
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.303 2 INFO os_vif [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:27:e3,bridge_name='br-int',has_traffic_filtering=True,id=e121c426-7734-4def-be42-b69b19dcbf29,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape121c426-77')
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.371 2 DEBUG nova.compute.manager [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.374 2 DEBUG nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.375 2 INFO nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Creating image(s)
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.413 2 DEBUG nova.storage.rbd_utils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image 045df9b6-6c73-44ec-aa65-2c736eb5c00c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.456 2 DEBUG nova.storage.rbd_utils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image 045df9b6-6c73-44ec-aa65-2c736eb5c00c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.522 2 DEBUG nova.storage.rbd_utils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image 045df9b6-6c73-44ec-aa65-2c736eb5c00c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.528 2 DEBUG oslo_concurrency.processutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.587 2 DEBUG nova.policy [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9734241540ac484291686e1d189d4eea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0430d49d70a46c2b29abef177f8ccb3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:59:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:59:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2990062162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.616 2 DEBUG nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.616 2 DEBUG nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.617 2 DEBUG nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] No VIF found with MAC fa:16:3e:e4:27:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.617 2 INFO nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Using config drive
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.646 2 DEBUG nova.storage.rbd_utils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image 8848c29f-c82a-4f50-82f4-b2e317161489_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.656 2 DEBUG oslo_concurrency.processutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.657 2 DEBUG oslo_concurrency.processutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.657 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.658 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.658 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.684 2 DEBUG nova.storage.rbd_utils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image 045df9b6-6c73-44ec-aa65-2c736eb5c00c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.688 2 DEBUG oslo_concurrency.processutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 045df9b6-6c73-44ec-aa65-2c736eb5c00c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.738 2 DEBUG nova.virt.libvirt.vif [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:58:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1557190648',display_name='tempest-ListServerFiltersTestJSON-instance-1557190648',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1557190648',id=80,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0430d49d70a46c2b29abef177f8ccb3',ramdisk_id='',reservation_id='r-rrp9whin',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-867232992',owner_user_name='tempest-ListServerFiltersTestJSON-867232992-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:58:56Z,user_data=None,user_id='9734241540ac484291686e1d189d4eea',uuid=f64218b5-ea49-4ed7-9945-1b8e056e4161,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "address": "fa:16:3e:d5:15:a9", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d377d55-39", "ovs_interfaceid": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.739 2 DEBUG nova.network.os_vif_util [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converting VIF {"id": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "address": "fa:16:3e:d5:15:a9", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d377d55-39", "ovs_interfaceid": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.741 2 DEBUG nova.network.os_vif_util [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:15:a9,bridge_name='br-int',has_traffic_filtering=True,id=2d377d55-39e9-4780-bc6f-8bd84e5a93a7,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d377d55-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.742 2 DEBUG nova.objects.instance [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid f64218b5-ea49-4ed7-9945-1b8e056e4161 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.759 2 DEBUG nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <uuid>f64218b5-ea49-4ed7-9945-1b8e056e4161</uuid>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <name>instance-00000050</name>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1557190648</nova:name>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:59:00</nova:creationTime>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <nova:user uuid="9734241540ac484291686e1d189d4eea">tempest-ListServerFiltersTestJSON-867232992-project-member</nova:user>
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <nova:project uuid="b0430d49d70a46c2b29abef177f8ccb3">tempest-ListServerFiltersTestJSON-867232992</nova:project>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <nova:port uuid="2d377d55-39e9-4780-bc6f-8bd84e5a93a7">
Oct 11 08:59:01 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <system>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <entry name="serial">f64218b5-ea49-4ed7-9945-1b8e056e4161</entry>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <entry name="uuid">f64218b5-ea49-4ed7-9945-1b8e056e4161</entry>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </system>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <os>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   </os>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <features>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   </features>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f64218b5-ea49-4ed7-9945-1b8e056e4161_disk">
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       </source>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f64218b5-ea49-4ed7-9945-1b8e056e4161_disk.config">
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       </source>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:59:01 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:d5:15:a9"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <target dev="tap2d377d55-39"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/f64218b5-ea49-4ed7-9945-1b8e056e4161/console.log" append="off"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <video>
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </video>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:59:01 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:59:01 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:59:01 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:59:01 compute-0 nova_compute[260935]: </domain>
Oct 11 08:59:01 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.760 2 DEBUG nova.compute.manager [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Preparing to wait for external event network-vif-plugged-2d377d55-39e9-4780-bc6f-8bd84e5a93a7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.761 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.762 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.762 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.763 2 DEBUG nova.virt.libvirt.vif [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:58:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1557190648',display_name='tempest-ListServerFiltersTestJSON-instance-1557190648',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1557190648',id=80,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0430d49d70a46c2b29abef177f8ccb3',ramdisk_id='',reservation_id='r-rrp9whin',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-867232992',owner_user_name='tempest-ListServerFiltersTestJSON-867232992-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:58:56Z,user_data=None,user_id='9734241540ac484291686e1d189d4eea',uuid=f64218b5-ea49-4ed7-9945-1b8e056e4161,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "address": "fa:16:3e:d5:15:a9", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d377d55-39", "ovs_interfaceid": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.764 2 DEBUG nova.network.os_vif_util [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converting VIF {"id": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "address": "fa:16:3e:d5:15:a9", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d377d55-39", "ovs_interfaceid": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.765 2 DEBUG nova.network.os_vif_util [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:15:a9,bridge_name='br-int',has_traffic_filtering=True,id=2d377d55-39e9-4780-bc6f-8bd84e5a93a7,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d377d55-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.766 2 DEBUG os_vif [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:15:a9,bridge_name='br-int',has_traffic_filtering=True,id=2d377d55-39e9-4780-bc6f-8bd84e5a93a7,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d377d55-39') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.770 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.771 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.780 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d377d55-39, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.781 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d377d55-39, col_values=(('external_ids', {'iface-id': '2d377d55-39e9-4780-bc6f-8bd84e5a93a7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:15:a9', 'vm-uuid': 'f64218b5-ea49-4ed7-9945-1b8e056e4161'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:01 compute-0 NetworkManager[44960]: <info>  [1760173141.7856] manager: (tap2d377d55-39): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.799 2 INFO os_vif [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:15:a9,bridge_name='br-int',has_traffic_filtering=True,id=2d377d55-39e9-4780-bc6f-8bd84e5a93a7,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d377d55-39')
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.882 2 DEBUG nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.883 2 DEBUG nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.885 2 DEBUG nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] No VIF found with MAC fa:16:3e:d5:15:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.887 2 INFO nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Using config drive
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.948 2 DEBUG nova.storage.rbd_utils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image f64218b5-ea49-4ed7-9945-1b8e056e4161_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:01 compute-0 nova_compute[260935]: 2025-10-11 08:59:01.998 2 DEBUG oslo_concurrency.processutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 045df9b6-6c73-44ec-aa65-2c736eb5c00c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.086 2 DEBUG nova.storage.rbd_utils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] resizing rbd image 045df9b6-6c73-44ec-aa65-2c736eb5c00c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:59:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1676: 321 pgs: 321 active+clean; 125 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 133 op/s
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.198 2 DEBUG nova.objects.instance [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lazy-loading 'migration_context' on Instance uuid 045df9b6-6c73-44ec-aa65-2c736eb5c00c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.215 2 DEBUG nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.215 2 DEBUG nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Ensure instance console log exists: /var/lib/nova/instances/045df9b6-6c73-44ec-aa65-2c736eb5c00c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.216 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.216 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.216 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2990062162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.269 2 DEBUG nova.network.neutron [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Updated VIF entry in instance network info cache for port e121c426-7734-4def-be42-b69b19dcbf29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.269 2 DEBUG nova.network.neutron [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Updating instance_info_cache with network_info: [{"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.292 2 DEBUG oslo_concurrency.lockutils [req-753dac88-12f6-4336-975d-5e50898e9e0d req-95a1a4f4-05d3-4242-808e-7ba1fbd066ef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8848c29f-c82a-4f50-82f4-b2e317161489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.488 2 INFO nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Creating config drive at /var/lib/nova/instances/8848c29f-c82a-4f50-82f4-b2e317161489/disk.config
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.494 2 DEBUG oslo_concurrency.processutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8848c29f-c82a-4f50-82f4-b2e317161489/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzlgkppol execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.661 2 DEBUG oslo_concurrency.processutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8848c29f-c82a-4f50-82f4-b2e317161489/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzlgkppol" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.701 2 DEBUG nova.storage.rbd_utils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image 8848c29f-c82a-4f50-82f4-b2e317161489_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.706 2 DEBUG oslo_concurrency.processutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8848c29f-c82a-4f50-82f4-b2e317161489/disk.config 8848c29f-c82a-4f50-82f4-b2e317161489_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.823 2 DEBUG nova.network.neutron [req-8b7fd642-f32e-4650-bf68-05fedc24c577 req-587207e2-40e4-48d4-a472-dd288647222f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Updated VIF entry in instance network info cache for port 2d377d55-39e9-4780-bc6f-8bd84e5a93a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.825 2 DEBUG nova.network.neutron [req-8b7fd642-f32e-4650-bf68-05fedc24c577 req-587207e2-40e4-48d4-a472-dd288647222f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Updating instance_info_cache with network_info: [{"id": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "address": "fa:16:3e:d5:15:a9", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d377d55-39", "ovs_interfaceid": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.834 2 INFO nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Creating config drive at /var/lib/nova/instances/f64218b5-ea49-4ed7-9945-1b8e056e4161/disk.config
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.840 2 DEBUG oslo_concurrency.processutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f64218b5-ea49-4ed7-9945-1b8e056e4161/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprndlieor execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.909 2 DEBUG oslo_concurrency.lockutils [req-8b7fd642-f32e-4650-bf68-05fedc24c577 req-587207e2-40e4-48d4-a472-dd288647222f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f64218b5-ea49-4ed7-9945-1b8e056e4161" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.927 2 DEBUG oslo_concurrency.processutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8848c29f-c82a-4f50-82f4-b2e317161489/disk.config 8848c29f-c82a-4f50-82f4-b2e317161489_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:02 compute-0 nova_compute[260935]: 2025-10-11 08:59:02.927 2 INFO nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Deleting local config drive /var/lib/nova/instances/8848c29f-c82a-4f50-82f4-b2e317161489/disk.config because it was imported into RBD.
Oct 11 08:59:03 compute-0 NetworkManager[44960]: <info>  [1760173143.0171] manager: (tape121c426-77): new Tun device (/org/freedesktop/NetworkManager/Devices/312)
Oct 11 08:59:03 compute-0 nova_compute[260935]: 2025-10-11 08:59:03.017 2 DEBUG oslo_concurrency.processutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f64218b5-ea49-4ed7-9945-1b8e056e4161/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprndlieor" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:03 compute-0 kernel: tape121c426-77: entered promiscuous mode
Oct 11 08:59:03 compute-0 ovn_controller[152945]: 2025-10-11T08:59:03Z|00698|binding|INFO|Claiming lport e121c426-7734-4def-be42-b69b19dcbf29 for this chassis.
Oct 11 08:59:03 compute-0 ovn_controller[152945]: 2025-10-11T08:59:03Z|00699|binding|INFO|e121c426-7734-4def-be42-b69b19dcbf29: Claiming fa:16:3e:e4:27:e3 10.100.0.7
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.044 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:27:e3 10.100.0.7'], port_security=['fa:16:3e:e4:27:e3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8848c29f-c82a-4f50-82f4-b2e317161489', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0430d49d70a46c2b29abef177f8ccb3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf97e47b-d339-42a8-ae53-936a69d74b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d7c2c31-fb9d-4875-91fa-aedc5fb45092, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e121c426-7734-4def-be42-b69b19dcbf29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.046 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e121c426-7734-4def-be42-b69b19dcbf29 in datapath b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5 bound to our chassis
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.049 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5
Oct 11 08:59:03 compute-0 systemd-udevd[338887]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.075 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0d1e283b-745a-42a3-b53b-09fe6fe23a60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.076 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb4b8fb64-b1 in ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:59:03 compute-0 systemd-machined[215705]: New machine qemu-88-instance-0000004f.
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.080 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb4b8fb64-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.081 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d168e35e-17e2-49b0-b985-3359f5e30e73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.083 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[00a897f9-7ebc-4d5b-8925-36ff2339365b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:03 compute-0 nova_compute[260935]: 2025-10-11 08:59:03.086 2 DEBUG nova.storage.rbd_utils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image f64218b5-ea49-4ed7-9945-1b8e056e4161_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:03 compute-0 systemd[1]: Started Virtual Machine qemu-88-instance-0000004f.
Oct 11 08:59:03 compute-0 NetworkManager[44960]: <info>  [1760173143.1071] device (tape121c426-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:59:03 compute-0 NetworkManager[44960]: <info>  [1760173143.1092] device (tape121c426-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:59:03 compute-0 ovn_controller[152945]: 2025-10-11T08:59:03Z|00700|binding|INFO|Setting lport e121c426-7734-4def-be42-b69b19dcbf29 ovn-installed in OVS
Oct 11 08:59:03 compute-0 ovn_controller[152945]: 2025-10-11T08:59:03Z|00701|binding|INFO|Setting lport e121c426-7734-4def-be42-b69b19dcbf29 up in Southbound
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.110 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa97e8f-9b0c-47a4-8a04-3bd8c58758e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:03 compute-0 nova_compute[260935]: 2025-10-11 08:59:03.118 2 DEBUG oslo_concurrency.processutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f64218b5-ea49-4ed7-9945-1b8e056e4161/disk.config f64218b5-ea49-4ed7-9945-1b8e056e4161_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.148 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fa4d71fe-a5cf-4e17-8c33-7c3e6b4f3b88]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:03 compute-0 nova_compute[260935]: 2025-10-11 08:59:03.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.195 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5185fcef-bfa8-491a-97dc-efd06e57b860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:03 compute-0 NetworkManager[44960]: <info>  [1760173143.2108] manager: (tapb4b8fb64-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/313)
Oct 11 08:59:03 compute-0 systemd-udevd[338894]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.212 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fcabccf8-68a0-476c-a6a3-7921c702300b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:03 compute-0 ceph-mon[74313]: pgmap v1676: 321 pgs: 321 active+clean; 125 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 133 op/s
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.277 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[084eed72-fbf3-4652-83dc-329acd6ac943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.281 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d1eef3-6113-42b2-b3ab-360598255692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:03 compute-0 NetworkManager[44960]: <info>  [1760173143.3213] device (tapb4b8fb64-b0): carrier: link connected
Oct 11 08:59:03 compute-0 nova_compute[260935]: 2025-10-11 08:59:03.323 2 DEBUG oslo_concurrency.processutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f64218b5-ea49-4ed7-9945-1b8e056e4161/disk.config f64218b5-ea49-4ed7-9945-1b8e056e4161_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:03 compute-0 nova_compute[260935]: 2025-10-11 08:59:03.325 2 INFO nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Deleting local config drive /var/lib/nova/instances/f64218b5-ea49-4ed7-9945-1b8e056e4161/disk.config because it was imported into RBD.
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.333 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[380e31ab-9043-4667-9aaf-11df00dc1440]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.366 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e1106965-95d3-47c5-9474-eef6d8ba57d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4b8fb64-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:65:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498802, 'reachable_time': 37318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338945, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:03 compute-0 kernel: tap2d377d55-39: entered promiscuous mode
Oct 11 08:59:03 compute-0 NetworkManager[44960]: <info>  [1760173143.3985] manager: (tap2d377d55-39): new Tun device (/org/freedesktop/NetworkManager/Devices/314)
Oct 11 08:59:03 compute-0 ovn_controller[152945]: 2025-10-11T08:59:03Z|00702|binding|INFO|Claiming lport 2d377d55-39e9-4780-bc6f-8bd84e5a93a7 for this chassis.
Oct 11 08:59:03 compute-0 nova_compute[260935]: 2025-10-11 08:59:03.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:03 compute-0 ovn_controller[152945]: 2025-10-11T08:59:03Z|00703|binding|INFO|2d377d55-39e9-4780-bc6f-8bd84e5a93a7: Claiming fa:16:3e:d5:15:a9 10.100.0.4
Oct 11 08:59:03 compute-0 systemd-udevd[338934]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:59:03 compute-0 NetworkManager[44960]: <info>  [1760173143.4171] device (tap2d377d55-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:59:03 compute-0 NetworkManager[44960]: <info>  [1760173143.4189] device (tap2d377d55-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.400 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1aff8298-6cbb-4b9d-9f34-e015102399e4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8d:6514'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498802, 'tstamp': 498802}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338958, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.417 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:15:a9 10.100.0.4'], port_security=['fa:16:3e:d5:15:a9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f64218b5-ea49-4ed7-9945-1b8e056e4161', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0430d49d70a46c2b29abef177f8ccb3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf97e47b-d339-42a8-ae53-936a69d74b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d7c2c31-fb9d-4875-91fa-aedc5fb45092, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=2d377d55-39e9-4780-bc6f-8bd84e5a93a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:59:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:59:03 compute-0 ovn_controller[152945]: 2025-10-11T08:59:03Z|00704|binding|INFO|Setting lport 2d377d55-39e9-4780-bc6f-8bd84e5a93a7 ovn-installed in OVS
Oct 11 08:59:03 compute-0 ovn_controller[152945]: 2025-10-11T08:59:03Z|00705|binding|INFO|Setting lport 2d377d55-39e9-4780-bc6f-8bd84e5a93a7 up in Southbound
Oct 11 08:59:03 compute-0 nova_compute[260935]: 2025-10-11 08:59:03.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.446 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a50dec60-d886-4523-9baf-b2d7bac3efc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4b8fb64-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:65:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498802, 'reachable_time': 37318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338973, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:03 compute-0 nova_compute[260935]: 2025-10-11 08:59:03.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:03 compute-0 systemd-machined[215705]: New machine qemu-89-instance-00000050.
Oct 11 08:59:03 compute-0 systemd[1]: Started Virtual Machine qemu-89-instance-00000050.
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.482 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f2906afd-a05d-4fd8-a97d-a42ba2548c65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.578 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c608b4ee-cf50-4701-9240-d00351773653]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.581 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4b8fb64-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.581 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.581 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4b8fb64-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:03 compute-0 kernel: tapb4b8fb64-b0: entered promiscuous mode
Oct 11 08:59:03 compute-0 NetworkManager[44960]: <info>  [1760173143.6216] manager: (tapb4b8fb64-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Oct 11 08:59:03 compute-0 nova_compute[260935]: 2025-10-11 08:59:03.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.625 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4b8fb64-b0, col_values=(('external_ids', {'iface-id': '59c88b9d-e04e-4ca9-8c74-9510a5f4ab83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:03 compute-0 nova_compute[260935]: 2025-10-11 08:59:03.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:03 compute-0 ovn_controller[152945]: 2025-10-11T08:59:03Z|00706|binding|INFO|Releasing lport 59c88b9d-e04e-4ca9-8c74-9510a5f4ab83 from this chassis (sb_readonly=0)
Oct 11 08:59:03 compute-0 nova_compute[260935]: 2025-10-11 08:59:03.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.629 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.632 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[84bd7f9d-d2fe-40da-a110-9ff72a4f77f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.632 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5.pid.haproxy
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:59:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:03.634 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'env', 'PROCESS_TAG=haproxy-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:59:03 compute-0 nova_compute[260935]: 2025-10-11 08:59:03.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:03 compute-0 nova_compute[260935]: 2025-10-11 08:59:03.969 2 DEBUG nova.network.neutron [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Successfully created port: be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.015 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173144.0140615, 8848c29f-c82a-4f50-82f4-b2e317161489 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.016 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] VM Started (Lifecycle Event)
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.048 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.056 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173144.0158482, 8848c29f-c82a-4f50-82f4-b2e317161489 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.057 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] VM Paused (Lifecycle Event)
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.088 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.094 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1677: 321 pgs: 321 active+clean; 180 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 178 op/s
Oct 11 08:59:04 compute-0 podman[339040]: 2025-10-11 08:59:04.118029453 +0000 UTC m=+0.072931431 container create a2d5aa78c7bc858f490fe7f3260a94155a261a5c31f673c7123501db88b53f40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.135 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:59:04 compute-0 podman[339040]: 2025-10-11 08:59:04.075939762 +0000 UTC m=+0.030841800 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:59:04 compute-0 systemd[1]: Started libpod-conmon-a2d5aa78c7bc858f490fe7f3260a94155a261a5c31f673c7123501db88b53f40.scope.
Oct 11 08:59:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:59:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06a2a110a650e7c639a749dbbc88aa287fcc6d71397aecfe32bd21fa828a5cec/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:59:04 compute-0 podman[339054]: 2025-10-11 08:59:04.238725734 +0000 UTC m=+0.084091859 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 08:59:04 compute-0 podman[339040]: 2025-10-11 08:59:04.255912634 +0000 UTC m=+0.210814672 container init a2d5aa78c7bc858f490fe7f3260a94155a261a5c31f673c7123501db88b53f40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:59:04 compute-0 podman[339040]: 2025-10-11 08:59:04.268211395 +0000 UTC m=+0.223113343 container start a2d5aa78c7bc858f490fe7f3260a94155a261a5c31f673c7123501db88b53f40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 11 08:59:04 compute-0 neutron-haproxy-ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5[339066]: [NOTICE]   (339079) : New worker (339081) forked
Oct 11 08:59:04 compute-0 neutron-haproxy-ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5[339066]: [NOTICE]   (339079) : Loading success.
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.320 2 DEBUG nova.compute.manager [req-ca01d187-a37e-4c28-b173-79fc85ef2815 req-cd93327e-b3f6-4b42-8637-1a40558bfec9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Received event network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.321 2 DEBUG oslo_concurrency.lockutils [req-ca01d187-a37e-4c28-b173-79fc85ef2815 req-cd93327e-b3f6-4b42-8637-1a40558bfec9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.321 2 DEBUG oslo_concurrency.lockutils [req-ca01d187-a37e-4c28-b173-79fc85ef2815 req-cd93327e-b3f6-4b42-8637-1a40558bfec9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.322 2 DEBUG oslo_concurrency.lockutils [req-ca01d187-a37e-4c28-b173-79fc85ef2815 req-cd93327e-b3f6-4b42-8637-1a40558bfec9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.322 2 DEBUG nova.compute.manager [req-ca01d187-a37e-4c28-b173-79fc85ef2815 req-cd93327e-b3f6-4b42-8637-1a40558bfec9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Processing event network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.323 2 DEBUG nova.compute.manager [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.329 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173144.329273, 8848c29f-c82a-4f50-82f4-b2e317161489 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.330 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] VM Resumed (Lifecycle Event)
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.332 2 DEBUG nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.337 2 INFO nova.virt.libvirt.driver [-] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Instance spawned successfully.
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.337 2 DEBUG nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.362 2 DEBUG nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.363 2 DEBUG nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.364 2 DEBUG nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.365 2 DEBUG nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.366 2 DEBUG nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.367 2 DEBUG nova.virt.libvirt.driver [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.370 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:04.371 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 2d377d55-39e9-4780-bc6f-8bd84e5a93a7 in datapath b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5 unbound from our chassis
Oct 11 08:59:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:04.373 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.375 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:59:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:04.397 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[579d9ef3-2415-49cc-a1b1-502b09af7002]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.412 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:59:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:04.450 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d92ed214-c38c-42cf-b1e3-c8ec08216bdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:04.454 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2c4ac5-0759-46de-b196-50d7c1cdcb8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.486 2 INFO nova.compute.manager [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Took 9.68 seconds to spawn the instance on the hypervisor.
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.487 2 DEBUG nova.compute.manager [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:04.502 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d69753-a778-4a72-a0eb-ec454b625981]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:04.533 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9da17fe1-4352-4618-a3a8-a1618b00018e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4b8fb64-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:65:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 612, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 612, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498802, 'reachable_time': 37318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 528, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 528, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339136, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:04.560 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ee028dc6-af2e-4871-aed7-f5492b9bbd66]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb4b8fb64-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498822, 'tstamp': 498822}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339138, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb4b8fb64-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498827, 'tstamp': 498827}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339138, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:04.566 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4b8fb64-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:04.575 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4b8fb64-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:04.575 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:04.585 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4b8fb64-b0, col_values=(('external_ids', {'iface-id': '59c88b9d-e04e-4ca9-8c74-9510a5f4ab83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:04.585 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.589 2 INFO nova.compute.manager [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Took 10.82 seconds to build instance.
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.610 2 DEBUG oslo_concurrency.lockutils [None req-5cb69133-b00e-4832-9d04-9fce79d65f74 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.704 2 DEBUG nova.compute.manager [req-ae996bdf-c9dc-4b61-ae25-d2277eb337fa req-5daf67f5-644c-44d8-9765-ff247a3dc0d8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Received event network-vif-plugged-2d377d55-39e9-4780-bc6f-8bd84e5a93a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.705 2 DEBUG oslo_concurrency.lockutils [req-ae996bdf-c9dc-4b61-ae25-d2277eb337fa req-5daf67f5-644c-44d8-9765-ff247a3dc0d8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.705 2 DEBUG oslo_concurrency.lockutils [req-ae996bdf-c9dc-4b61-ae25-d2277eb337fa req-5daf67f5-644c-44d8-9765-ff247a3dc0d8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.705 2 DEBUG oslo_concurrency.lockutils [req-ae996bdf-c9dc-4b61-ae25-d2277eb337fa req-5daf67f5-644c-44d8-9765-ff247a3dc0d8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:04 compute-0 nova_compute[260935]: 2025-10-11 08:59:04.706 2 DEBUG nova.compute.manager [req-ae996bdf-c9dc-4b61-ae25-d2277eb337fa req-5daf67f5-644c-44d8-9765-ff247a3dc0d8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Processing event network-vif-plugged-2d377d55-39e9-4780-bc6f-8bd84e5a93a7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010378639423389985 of space, bias 1.0, pg target 0.31135918270169954 quantized to 32 (current 32)
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 08:59:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.039 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173145.0389626, f64218b5-ea49-4ed7-9945-1b8e056e4161 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.043 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] VM Started (Lifecycle Event)
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.047 2 DEBUG nova.compute.manager [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.054 2 DEBUG nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.061 2 INFO nova.virt.libvirt.driver [-] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Instance spawned successfully.
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.062 2 DEBUG nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.068 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.074 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.099 2 DEBUG nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.100 2 DEBUG nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.101 2 DEBUG nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.102 2 DEBUG nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.103 2 DEBUG nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.104 2 DEBUG nova.virt.libvirt.driver [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.110 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.111 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173145.0391285, f64218b5-ea49-4ed7-9945-1b8e056e4161 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.111 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] VM Paused (Lifecycle Event)
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.172 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.176 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173145.0516057, f64218b5-ea49-4ed7-9945-1b8e056e4161 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.176 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] VM Resumed (Lifecycle Event)
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.205 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.209 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.217 2 INFO nova.compute.manager [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Took 8.25 seconds to spawn the instance on the hypervisor.
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.218 2 DEBUG nova.compute.manager [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.254 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:59:05 compute-0 ceph-mon[74313]: pgmap v1677: 321 pgs: 321 active+clean; 180 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 178 op/s
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.292 2 INFO nova.compute.manager [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Took 9.40 seconds to build instance.
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.313 2 DEBUG oslo_concurrency.lockutils [None req-8989a0c1-08ec-4ad1-95fb-7c34d2091511 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "f64218b5-ea49-4ed7-9945-1b8e056e4161" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.492s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:05 compute-0 ovn_controller[152945]: 2025-10-11T08:59:05Z|00707|binding|INFO|Releasing lport 59c88b9d-e04e-4ca9-8c74-9510a5f4ab83 from this chassis (sb_readonly=0)
Oct 11 08:59:05 compute-0 nova_compute[260935]: 2025-10-11 08:59:05.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.046 2 DEBUG nova.network.neutron [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Successfully updated port: be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.080 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "refresh_cache-045df9b6-6c73-44ec-aa65-2c736eb5c00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.081 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquired lock "refresh_cache-045df9b6-6c73-44ec-aa65-2c736eb5c00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.081 2 DEBUG nova.network.neutron [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:59:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 180 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 178 op/s
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:59:06.271956) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173146272014, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1627, "num_deletes": 255, "total_data_size": 2262141, "memory_usage": 2292608, "flush_reason": "Manual Compaction"}
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173146283300, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 2224671, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33705, "largest_seqno": 35331, "table_properties": {"data_size": 2217231, "index_size": 4318, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16672, "raw_average_key_size": 20, "raw_value_size": 2201837, "raw_average_value_size": 2725, "num_data_blocks": 191, "num_entries": 808, "num_filter_entries": 808, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173013, "oldest_key_time": 1760173013, "file_creation_time": 1760173146, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 11388 microseconds, and 5355 cpu microseconds.
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:59:06.283349) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 2224671 bytes OK
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:59:06.283370) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:59:06.284571) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:59:06.284587) EVENT_LOG_v1 {"time_micros": 1760173146284582, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:59:06.284607) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2254937, prev total WAL file size 2254937, number of live WAL files 2.
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:59:06.285760) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2172KB)], [74(8895KB)]
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173146285850, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11333195, "oldest_snapshot_seqno": -1}
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 5977 keys, 9671509 bytes, temperature: kUnknown
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173146332632, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 9671509, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9629155, "index_size": 26308, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14981, "raw_key_size": 151071, "raw_average_key_size": 25, "raw_value_size": 9519441, "raw_average_value_size": 1592, "num_data_blocks": 1066, "num_entries": 5977, "num_filter_entries": 5977, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173146, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:59:06.333063) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 9671509 bytes
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:59:06.334919) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 241.6 rd, 206.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 8.7 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(9.4) write-amplify(4.3) OK, records in: 6501, records dropped: 524 output_compression: NoCompression
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:59:06.334949) EVENT_LOG_v1 {"time_micros": 1760173146334935, "job": 42, "event": "compaction_finished", "compaction_time_micros": 46909, "compaction_time_cpu_micros": 29146, "output_level": 6, "num_output_files": 1, "total_output_size": 9671509, "num_input_records": 6501, "num_output_records": 5977, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173146335899, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173146338907, "job": 42, "event": "table_file_deletion", "file_number": 74}
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:59:06.285592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:59:06.338997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:59:06.339005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:59:06.339008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:59:06.339010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:59:06 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-08:59:06.339012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.510 2 DEBUG nova.compute.manager [req-9cc7ac83-cb44-465f-8fc8-340dae41c930 req-cfc97c5f-fe1f-446c-99be-cc8336746603 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Received event network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.511 2 DEBUG oslo_concurrency.lockutils [req-9cc7ac83-cb44-465f-8fc8-340dae41c930 req-cfc97c5f-fe1f-446c-99be-cc8336746603 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.512 2 DEBUG oslo_concurrency.lockutils [req-9cc7ac83-cb44-465f-8fc8-340dae41c930 req-cfc97c5f-fe1f-446c-99be-cc8336746603 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.512 2 DEBUG oslo_concurrency.lockutils [req-9cc7ac83-cb44-465f-8fc8-340dae41c930 req-cfc97c5f-fe1f-446c-99be-cc8336746603 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.513 2 DEBUG nova.compute.manager [req-9cc7ac83-cb44-465f-8fc8-340dae41c930 req-cfc97c5f-fe1f-446c-99be-cc8336746603 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] No waiting events found dispatching network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.513 2 WARNING nova.compute.manager [req-9cc7ac83-cb44-465f-8fc8-340dae41c930 req-cfc97c5f-fe1f-446c-99be-cc8336746603 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Received unexpected event network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 for instance with vm_state active and task_state None.
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.514 2 DEBUG nova.compute.manager [req-9cc7ac83-cb44-465f-8fc8-340dae41c930 req-cfc97c5f-fe1f-446c-99be-cc8336746603 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Received event network-changed-be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.515 2 DEBUG nova.compute.manager [req-9cc7ac83-cb44-465f-8fc8-340dae41c930 req-cfc97c5f-fe1f-446c-99be-cc8336746603 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Refreshing instance network info cache due to event network-changed-be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.515 2 DEBUG oslo_concurrency.lockutils [req-9cc7ac83-cb44-465f-8fc8-340dae41c930 req-cfc97c5f-fe1f-446c-99be-cc8336746603 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-045df9b6-6c73-44ec-aa65-2c736eb5c00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.560 2 DEBUG nova.network.neutron [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.815 2 DEBUG nova.compute.manager [req-b8b30810-250c-4474-9831-2467c0218109 req-e9b8a597-ac83-49da-b845-f684b5578ae5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Received event network-vif-plugged-2d377d55-39e9-4780-bc6f-8bd84e5a93a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.816 2 DEBUG oslo_concurrency.lockutils [req-b8b30810-250c-4474-9831-2467c0218109 req-e9b8a597-ac83-49da-b845-f684b5578ae5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.816 2 DEBUG oslo_concurrency.lockutils [req-b8b30810-250c-4474-9831-2467c0218109 req-e9b8a597-ac83-49da-b845-f684b5578ae5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.816 2 DEBUG oslo_concurrency.lockutils [req-b8b30810-250c-4474-9831-2467c0218109 req-e9b8a597-ac83-49da-b845-f684b5578ae5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.817 2 DEBUG nova.compute.manager [req-b8b30810-250c-4474-9831-2467c0218109 req-e9b8a597-ac83-49da-b845-f684b5578ae5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] No waiting events found dispatching network-vif-plugged-2d377d55-39e9-4780-bc6f-8bd84e5a93a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:59:06 compute-0 nova_compute[260935]: 2025-10-11 08:59:06.817 2 WARNING nova.compute.manager [req-b8b30810-250c-4474-9831-2467c0218109 req-e9b8a597-ac83-49da-b845-f684b5578ae5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Received unexpected event network-vif-plugged-2d377d55-39e9-4780-bc6f-8bd84e5a93a7 for instance with vm_state active and task_state None.
Oct 11 08:59:07 compute-0 ceph-mon[74313]: pgmap v1678: 321 pgs: 321 active+clean; 180 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 178 op/s
Oct 11 08:59:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 181 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.4 MiB/s wr, 328 op/s
Oct 11 08:59:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.581 2 DEBUG nova.network.neutron [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Updating instance_info_cache with network_info: [{"id": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "address": "fa:16:3e:d1:e2:b4", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe9e3f2f-e8", "ovs_interfaceid": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.611 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Releasing lock "refresh_cache-045df9b6-6c73-44ec-aa65-2c736eb5c00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.611 2 DEBUG nova.compute.manager [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Instance network_info: |[{"id": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "address": "fa:16:3e:d1:e2:b4", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe9e3f2f-e8", "ovs_interfaceid": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.612 2 DEBUG oslo_concurrency.lockutils [req-9cc7ac83-cb44-465f-8fc8-340dae41c930 req-cfc97c5f-fe1f-446c-99be-cc8336746603 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-045df9b6-6c73-44ec-aa65-2c736eb5c00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.613 2 DEBUG nova.network.neutron [req-9cc7ac83-cb44-465f-8fc8-340dae41c930 req-cfc97c5f-fe1f-446c-99be-cc8336746603 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Refreshing network info cache for port be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.617 2 DEBUG nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Start _get_guest_xml network_info=[{"id": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "address": "fa:16:3e:d1:e2:b4", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe9e3f2f-e8", "ovs_interfaceid": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.626 2 WARNING nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.643 2 DEBUG nova.virt.libvirt.host [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.644 2 DEBUG nova.virt.libvirt.host [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.648 2 DEBUG nova.virt.libvirt.host [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.648 2 DEBUG nova.virt.libvirt.host [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.649 2 DEBUG nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.649 2 DEBUG nova.virt.hardware [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ec8b7fd6-88cc-4be7-9c90-462cbf59d314',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.650 2 DEBUG nova.virt.hardware [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.650 2 DEBUG nova.virt.hardware [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.651 2 DEBUG nova.virt.hardware [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.651 2 DEBUG nova.virt.hardware [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.652 2 DEBUG nova.virt.hardware [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.652 2 DEBUG nova.virt.hardware [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.653 2 DEBUG nova.virt.hardware [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.653 2 DEBUG nova.virt.hardware [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.653 2 DEBUG nova.virt.hardware [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.654 2 DEBUG nova.virt.hardware [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:59:08 compute-0 nova_compute[260935]: 2025-10-11 08:59:08.658 2 DEBUG oslo_concurrency.processutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:09.130 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:09.132 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 08:59:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:59:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1041965683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.163 2 DEBUG oslo_concurrency.processutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.189 2 DEBUG nova.storage.rbd_utils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image 045df9b6-6c73-44ec-aa65-2c736eb5c00c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.194 2 DEBUG oslo_concurrency.processutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:09 compute-0 ceph-mon[74313]: pgmap v1679: 321 pgs: 321 active+clean; 181 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.4 MiB/s wr, 328 op/s
Oct 11 08:59:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1041965683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:59:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3461609093' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.769 2 DEBUG oslo_concurrency.processutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.772 2 DEBUG nova.virt.libvirt.vif [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:58:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1046676690',display_name='tempest-ListServerFiltersTestJSON-instance-1046676690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1046676690',id=81,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0430d49d70a46c2b29abef177f8ccb3',ramdisk_id='',reservation_id='r-5z4044t2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-867232992',owner_user_name='tempest-ListServerFiltersTestJSON-867232992-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:59:01Z,user_data=None,user_id='9734241540ac484291686e1d189d4eea',uuid=045df9b6-6c73-44ec-aa65-2c736eb5c00c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "address": "fa:16:3e:d1:e2:b4", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe9e3f2f-e8", "ovs_interfaceid": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.773 2 DEBUG nova.network.os_vif_util [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converting VIF {"id": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "address": "fa:16:3e:d1:e2:b4", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe9e3f2f-e8", "ovs_interfaceid": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.774 2 DEBUG nova.network.os_vif_util [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:e2:b4,bridge_name='br-int',has_traffic_filtering=True,id=be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe9e3f2f-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.777 2 DEBUG nova.objects.instance [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 045df9b6-6c73-44ec-aa65-2c736eb5c00c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:09 compute-0 podman[339199]: 2025-10-11 08:59:09.792791765 +0000 UTC m=+0.086596961 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.806 2 DEBUG nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:59:09 compute-0 nova_compute[260935]:   <uuid>045df9b6-6c73-44ec-aa65-2c736eb5c00c</uuid>
Oct 11 08:59:09 compute-0 nova_compute[260935]:   <name>instance-00000051</name>
Oct 11 08:59:09 compute-0 nova_compute[260935]:   <memory>196608</memory>
Oct 11 08:59:09 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:59:09 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1046676690</nova:name>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:59:08</nova:creationTime>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <nova:flavor name="m1.micro">
Oct 11 08:59:09 compute-0 nova_compute[260935]:         <nova:memory>192</nova:memory>
Oct 11 08:59:09 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:59:09 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:59:09 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:59:09 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:59:09 compute-0 nova_compute[260935]:         <nova:user uuid="9734241540ac484291686e1d189d4eea">tempest-ListServerFiltersTestJSON-867232992-project-member</nova:user>
Oct 11 08:59:09 compute-0 nova_compute[260935]:         <nova:project uuid="b0430d49d70a46c2b29abef177f8ccb3">tempest-ListServerFiltersTestJSON-867232992</nova:project>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:59:09 compute-0 nova_compute[260935]:         <nova:port uuid="be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8">
Oct 11 08:59:09 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:59:09 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:59:09 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <system>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <entry name="serial">045df9b6-6c73-44ec-aa65-2c736eb5c00c</entry>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <entry name="uuid">045df9b6-6c73-44ec-aa65-2c736eb5c00c</entry>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     </system>
Oct 11 08:59:09 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:59:09 compute-0 nova_compute[260935]:   <os>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:   </os>
Oct 11 08:59:09 compute-0 nova_compute[260935]:   <features>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:   </features>
Oct 11 08:59:09 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:59:09 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:59:09 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/045df9b6-6c73-44ec-aa65-2c736eb5c00c_disk">
Oct 11 08:59:09 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       </source>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:59:09 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/045df9b6-6c73-44ec-aa65-2c736eb5c00c_disk.config">
Oct 11 08:59:09 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       </source>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:59:09 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:d1:e2:b4"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <target dev="tapbe9e3f2f-e8"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/045df9b6-6c73-44ec-aa65-2c736eb5c00c/console.log" append="off"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <video>
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     </video>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:59:09 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:59:09 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:59:09 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:59:09 compute-0 nova_compute[260935]: </domain>
Oct 11 08:59:09 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.817 2 DEBUG nova.compute.manager [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Preparing to wait for external event network-vif-plugged-be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.818 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.818 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.818 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.819 2 DEBUG nova.virt.libvirt.vif [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:58:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1046676690',display_name='tempest-ListServerFiltersTestJSON-instance-1046676690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1046676690',id=81,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0430d49d70a46c2b29abef177f8ccb3',ramdisk_id='',reservation_id='r-5z4044t2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-867232992',owner_user_name='tempest-ListServerFiltersTestJSON-867232992-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:59:01Z,user_data=None,user_id='9734241540ac484291686e1d189d4eea',uuid=045df9b6-6c73-44ec-aa65-2c736eb5c00c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "address": "fa:16:3e:d1:e2:b4", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe9e3f2f-e8", "ovs_interfaceid": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.819 2 DEBUG nova.network.os_vif_util [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converting VIF {"id": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "address": "fa:16:3e:d1:e2:b4", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe9e3f2f-e8", "ovs_interfaceid": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.820 2 DEBUG nova.network.os_vif_util [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:e2:b4,bridge_name='br-int',has_traffic_filtering=True,id=be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe9e3f2f-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.820 2 DEBUG os_vif [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:e2:b4,bridge_name='br-int',has_traffic_filtering=True,id=be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe9e3f2f-e8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.822 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.822 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.825 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe9e3f2f-e8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.826 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe9e3f2f-e8, col_values=(('external_ids', {'iface-id': 'be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:e2:b4', 'vm-uuid': '045df9b6-6c73-44ec-aa65-2c736eb5c00c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:09 compute-0 NetworkManager[44960]: <info>  [1760173149.8286] manager: (tapbe9e3f2f-e8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.838 2 INFO os_vif [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:e2:b4,bridge_name='br-int',has_traffic_filtering=True,id=be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe9e3f2f-e8')
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.899 2 DEBUG nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.901 2 DEBUG nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.901 2 DEBUG nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] No VIF found with MAC fa:16:3e:d1:e2:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.902 2 INFO nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Using config drive
Oct 11 08:59:09 compute-0 nova_compute[260935]: 2025-10-11 08:59:09.928 2 DEBUG nova.storage.rbd_utils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image 045df9b6-6c73-44ec-aa65-2c736eb5c00c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 181 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 226 op/s
Oct 11 08:59:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3461609093' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:10 compute-0 nova_compute[260935]: 2025-10-11 08:59:10.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:59:10 compute-0 nova_compute[260935]: 2025-10-11 08:59:10.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 08:59:10 compute-0 nova_compute[260935]: 2025-10-11 08:59:10.707 2 INFO nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Creating config drive at /var/lib/nova/instances/045df9b6-6c73-44ec-aa65-2c736eb5c00c/disk.config
Oct 11 08:59:10 compute-0 nova_compute[260935]: 2025-10-11 08:59:10.717 2 DEBUG oslo_concurrency.processutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/045df9b6-6c73-44ec-aa65-2c736eb5c00c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp30fy6b7q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:10 compute-0 nova_compute[260935]: 2025-10-11 08:59:10.883 2 DEBUG oslo_concurrency.processutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/045df9b6-6c73-44ec-aa65-2c736eb5c00c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp30fy6b7q" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:10 compute-0 nova_compute[260935]: 2025-10-11 08:59:10.924 2 DEBUG nova.storage.rbd_utils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] rbd image 045df9b6-6c73-44ec-aa65-2c736eb5c00c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:10 compute-0 nova_compute[260935]: 2025-10-11 08:59:10.933 2 DEBUG oslo_concurrency.processutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/045df9b6-6c73-44ec-aa65-2c736eb5c00c/disk.config 045df9b6-6c73-44ec-aa65-2c736eb5c00c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.104 2 DEBUG oslo_concurrency.processutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/045df9b6-6c73-44ec-aa65-2c736eb5c00c/disk.config 045df9b6-6c73-44ec-aa65-2c736eb5c00c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.105 2 INFO nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Deleting local config drive /var/lib/nova/instances/045df9b6-6c73-44ec-aa65-2c736eb5c00c/disk.config because it was imported into RBD.
Oct 11 08:59:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:11.137 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.152 2 DEBUG nova.network.neutron [req-9cc7ac83-cb44-465f-8fc8-340dae41c930 req-cfc97c5f-fe1f-446c-99be-cc8336746603 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Updated VIF entry in instance network info cache for port be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.153 2 DEBUG nova.network.neutron [req-9cc7ac83-cb44-465f-8fc8-340dae41c930 req-cfc97c5f-fe1f-446c-99be-cc8336746603 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Updating instance_info_cache with network_info: [{"id": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "address": "fa:16:3e:d1:e2:b4", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe9e3f2f-e8", "ovs_interfaceid": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:59:11 compute-0 NetworkManager[44960]: <info>  [1760173151.1752] manager: (tapbe9e3f2f-e8): new Tun device (/org/freedesktop/NetworkManager/Devices/317)
Oct 11 08:59:11 compute-0 kernel: tapbe9e3f2f-e8: entered promiscuous mode
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.178 2 DEBUG oslo_concurrency.lockutils [req-9cc7ac83-cb44-465f-8fc8-340dae41c930 req-cfc97c5f-fe1f-446c-99be-cc8336746603 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-045df9b6-6c73-44ec-aa65-2c736eb5c00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:11 compute-0 ovn_controller[152945]: 2025-10-11T08:59:11Z|00708|binding|INFO|Claiming lport be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 for this chassis.
Oct 11 08:59:11 compute-0 ovn_controller[152945]: 2025-10-11T08:59:11Z|00709|binding|INFO|be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8: Claiming fa:16:3e:d1:e2:b4 10.100.0.6
Oct 11 08:59:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:11.190 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:e2:b4 10.100.0.6'], port_security=['fa:16:3e:d1:e2:b4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '045df9b6-6c73-44ec-aa65-2c736eb5c00c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0430d49d70a46c2b29abef177f8ccb3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf97e47b-d339-42a8-ae53-936a69d74b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d7c2c31-fb9d-4875-91fa-aedc5fb45092, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:59:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:11.192 162815 INFO neutron.agent.ovn.metadata.agent [-] Port be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 in datapath b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5 bound to our chassis
Oct 11 08:59:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:11.194 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5
Oct 11 08:59:11 compute-0 systemd-udevd[339292]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:59:11 compute-0 NetworkManager[44960]: <info>  [1760173151.2219] device (tapbe9e3f2f-e8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:59:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:11.220 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dbcec824-ff46-43bc-b35c-a57fc50f8083]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:11 compute-0 NetworkManager[44960]: <info>  [1760173151.2262] device (tapbe9e3f2f-e8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:59:11 compute-0 ovn_controller[152945]: 2025-10-11T08:59:11Z|00710|binding|INFO|Setting lport be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 ovn-installed in OVS
Oct 11 08:59:11 compute-0 ovn_controller[152945]: 2025-10-11T08:59:11Z|00711|binding|INFO|Setting lport be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 up in Southbound
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:11 compute-0 systemd-machined[215705]: New machine qemu-90-instance-00000051.
Oct 11 08:59:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:11.259 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7fcd285c-1d78-4d49-b3db-3b98289b931f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:11.262 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac8eefc-f9c7-46d7-8100-778745f5ac28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:11 compute-0 systemd[1]: Started Virtual Machine qemu-90-instance-00000051.
Oct 11 08:59:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:11.294 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2681bf36-2fea-4ea3-a44c-11dd1809c48a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:11 compute-0 ceph-mon[74313]: pgmap v1680: 321 pgs: 321 active+clean; 181 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 226 op/s
Oct 11 08:59:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:11.316 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c28c4766-52ee-4cf4-b2bb-ab0486dfed53]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4b8fb64-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:65:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498802, 'reachable_time': 37318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339303, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:11.342 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8e657c98-884a-43bc-80bf-86c346fa26e9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb4b8fb64-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498822, 'tstamp': 498822}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339309, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb4b8fb64-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498827, 'tstamp': 498827}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339309, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:11.353 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4b8fb64-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:11.368 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4b8fb64-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:11.369 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:11.370 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4b8fb64-b0, col_values=(('external_ids', {'iface-id': '59c88b9d-e04e-4ca9-8c74-9510a5f4ab83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:11.370 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.550 2 DEBUG nova.compute.manager [req-be2bd880-075a-4ccb-94fa-e131e7c4f3fb req-5b65fc2d-dd0f-4e46-99e3-c02b57f69ce8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Received event network-vif-plugged-be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.551 2 DEBUG oslo_concurrency.lockutils [req-be2bd880-075a-4ccb-94fa-e131e7c4f3fb req-5b65fc2d-dd0f-4e46-99e3-c02b57f69ce8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.552 2 DEBUG oslo_concurrency.lockutils [req-be2bd880-075a-4ccb-94fa-e131e7c4f3fb req-5b65fc2d-dd0f-4e46-99e3-c02b57f69ce8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.552 2 DEBUG oslo_concurrency.lockutils [req-be2bd880-075a-4ccb-94fa-e131e7c4f3fb req-5b65fc2d-dd0f-4e46-99e3-c02b57f69ce8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.553 2 DEBUG nova.compute.manager [req-be2bd880-075a-4ccb-94fa-e131e7c4f3fb req-5b65fc2d-dd0f-4e46-99e3-c02b57f69ce8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Processing event network-vif-plugged-be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.923 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173136.9221926, 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.923 2 INFO nova.compute.manager [-] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] VM Stopped (Lifecycle Event)
Oct 11 08:59:11 compute-0 nova_compute[260935]: 2025-10-11 08:59:11.952 2 DEBUG nova.compute.manager [None req-bc0f6789-9e11-4e64-b615-9dc548b9c6a9 - - - - - -] [instance: 7f16d3ba-54f0-496c-bbd3-09f4d10a41f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1681: 321 pgs: 321 active+clean; 181 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 227 op/s
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.198 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173152.1978087, 045df9b6-6c73-44ec-aa65-2c736eb5c00c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.199 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] VM Started (Lifecycle Event)
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.201 2 DEBUG nova.compute.manager [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.207 2 DEBUG nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.210 2 INFO nova.virt.libvirt.driver [-] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Instance spawned successfully.
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.210 2 DEBUG nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.219 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.223 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.249 2 DEBUG nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.250 2 DEBUG nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.251 2 DEBUG nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.251 2 DEBUG nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.252 2 DEBUG nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.252 2 DEBUG nova.virt.libvirt.driver [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.255 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.255 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173152.1979365, 045df9b6-6c73-44ec-aa65-2c736eb5c00c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.256 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] VM Paused (Lifecycle Event)
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.288 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.292 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173152.204994, 045df9b6-6c73-44ec-aa65-2c736eb5c00c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.293 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] VM Resumed (Lifecycle Event)
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.322 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.326 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.330 2 INFO nova.compute.manager [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Took 10.96 seconds to spawn the instance on the hypervisor.
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.331 2 DEBUG nova.compute.manager [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.345 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.393 2 INFO nova.compute.manager [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Took 12.83 seconds to build instance.
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.416 2 DEBUG oslo_concurrency.lockutils [None req-9f2a296a-27ec-4721-87d1-cf557a7a87b0 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:12 compute-0 nova_compute[260935]: 2025-10-11 08:59:12.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:59:13 compute-0 ceph-mon[74313]: pgmap v1681: 321 pgs: 321 active+clean; 181 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 227 op/s
Oct 11 08:59:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:59:13 compute-0 nova_compute[260935]: 2025-10-11 08:59:13.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:13 compute-0 nova_compute[260935]: 2025-10-11 08:59:13.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:59:13 compute-0 nova_compute[260935]: 2025-10-11 08:59:13.746 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:13 compute-0 nova_compute[260935]: 2025-10-11 08:59:13.747 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:13 compute-0 nova_compute[260935]: 2025-10-11 08:59:13.747 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:13 compute-0 nova_compute[260935]: 2025-10-11 08:59:13.747 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 08:59:13 compute-0 nova_compute[260935]: 2025-10-11 08:59:13.747 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:13 compute-0 podman[339353]: 2025-10-11 08:59:13.782485609 +0000 UTC m=+0.080360913 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 08:59:13 compute-0 podman[339354]: 2025-10-11 08:59:13.817463986 +0000 UTC m=+0.113654032 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 08:59:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1682: 321 pgs: 321 active+clean; 181 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 204 op/s
Oct 11 08:59:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:59:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162288833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.191 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.303 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.304 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.312 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.313 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:59:14 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4162288833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.318 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.318 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.504 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.505 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3385MB free_disk=59.92572784423828GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.505 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.505 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.517 2 DEBUG nova.compute.manager [req-80620d2e-8a59-4c9f-9570-fc486988224f req-1d959fac-c7da-490b-a69f-6a28de8e0b28 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Received event network-vif-plugged-be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.517 2 DEBUG oslo_concurrency.lockutils [req-80620d2e-8a59-4c9f-9570-fc486988224f req-1d959fac-c7da-490b-a69f-6a28de8e0b28 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.518 2 DEBUG oslo_concurrency.lockutils [req-80620d2e-8a59-4c9f-9570-fc486988224f req-1d959fac-c7da-490b-a69f-6a28de8e0b28 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.518 2 DEBUG oslo_concurrency.lockutils [req-80620d2e-8a59-4c9f-9570-fc486988224f req-1d959fac-c7da-490b-a69f-6a28de8e0b28 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.518 2 DEBUG nova.compute.manager [req-80620d2e-8a59-4c9f-9570-fc486988224f req-1d959fac-c7da-490b-a69f-6a28de8e0b28 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] No waiting events found dispatching network-vif-plugged-be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.518 2 WARNING nova.compute.manager [req-80620d2e-8a59-4c9f-9570-fc486988224f req-1d959fac-c7da-490b-a69f-6a28de8e0b28 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Received unexpected event network-vif-plugged-be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 for instance with vm_state active and task_state None.
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.625 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 8848c29f-c82a-4f50-82f4-b2e317161489 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.626 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance f64218b5-ea49-4ed7-9945-1b8e056e4161 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.626 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 045df9b6-6c73-44ec-aa65-2c736eb5c00c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.626 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.627 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=960MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.648 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.677 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.678 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.709 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.735 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.819 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:14 compute-0 nova_compute[260935]: 2025-10-11 08:59:14.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:15.194 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:15.194 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:15.195 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:59:15 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3558711211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:15 compute-0 ceph-mon[74313]: pgmap v1682: 321 pgs: 321 active+clean; 181 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 204 op/s
Oct 11 08:59:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3558711211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:15 compute-0 nova_compute[260935]: 2025-10-11 08:59:15.350 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:15 compute-0 nova_compute[260935]: 2025-10-11 08:59:15.356 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:59:15 compute-0 nova_compute[260935]: 2025-10-11 08:59:15.385 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:59:15 compute-0 nova_compute[260935]: 2025-10-11 08:59:15.423 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 08:59:15 compute-0 nova_compute[260935]: 2025-10-11 08:59:15.424 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:15 compute-0 nova_compute[260935]: 2025-10-11 08:59:15.673 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:15 compute-0 nova_compute[260935]: 2025-10-11 08:59:15.675 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:15 compute-0 nova_compute[260935]: 2025-10-11 08:59:15.708 2 DEBUG nova.compute.manager [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:59:15 compute-0 nova_compute[260935]: 2025-10-11 08:59:15.809 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:15 compute-0 nova_compute[260935]: 2025-10-11 08:59:15.810 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:15 compute-0 nova_compute[260935]: 2025-10-11 08:59:15.818 2 DEBUG nova.virt.hardware [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:59:15 compute-0 nova_compute[260935]: 2025-10-11 08:59:15.818 2 INFO nova.compute.claims [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:59:15 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.008 2 DEBUG oslo_concurrency.processutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:16 compute-0 ovn_controller[152945]: 2025-10-11T08:59:16Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:27:e3 10.100.0.7
Oct 11 08:59:16 compute-0 ovn_controller[152945]: 2025-10-11T08:59:16Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:27:e3 10.100.0.7
Oct 11 08:59:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1683: 321 pgs: 321 active+clean; 181 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 37 KiB/s wr, 159 op/s
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.419 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:59:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:59:16 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/947885211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.556 2 DEBUG oslo_concurrency.processutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.564 2 DEBUG nova.compute.provider_tree [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.595 2 DEBUG nova.scheduler.client.report [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.640 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.642 2 DEBUG nova.compute.manager [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.728 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.729 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.729 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.738 2 DEBUG nova.compute.manager [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.739 2 DEBUG nova.network.neutron [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.768 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.778 2 INFO nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.806 2 DEBUG nova.compute.manager [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.920 2 DEBUG nova.compute.manager [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.922 2 DEBUG nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.923 2 INFO nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Creating image(s)
Oct 11 08:59:16 compute-0 nova_compute[260935]: 2025-10-11 08:59:16.970 2 DEBUG nova.storage.rbd_utils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image c176845c-89c0-4038-ba22-4ee79bd3ebfe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.008 2 DEBUG nova.storage.rbd_utils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image c176845c-89c0-4038-ba22-4ee79bd3ebfe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.036 2 DEBUG nova.storage.rbd_utils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image c176845c-89c0-4038-ba22-4ee79bd3ebfe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.041 2 DEBUG oslo_concurrency.processutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.098 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-8848c29f-c82a-4f50-82f4-b2e317161489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.099 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-8848c29f-c82a-4f50-82f4-b2e317161489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.099 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.099 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8848c29f-c82a-4f50-82f4-b2e317161489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.141 2 DEBUG oslo_concurrency.processutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.142 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.142 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.143 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.176 2 DEBUG nova.storage.rbd_utils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image c176845c-89c0-4038-ba22-4ee79bd3ebfe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.182 2 DEBUG oslo_concurrency.processutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 c176845c-89c0-4038-ba22-4ee79bd3ebfe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.251 2 DEBUG nova.policy [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d211063ed874837bead2e13898b31d4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd33b48586acf4e6c8254f2a1213b001c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:59:17 compute-0 ceph-mon[74313]: pgmap v1683: 321 pgs: 321 active+clean; 181 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 37 KiB/s wr, 159 op/s
Oct 11 08:59:17 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/947885211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.531 2 DEBUG oslo_concurrency.processutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 c176845c-89c0-4038-ba22-4ee79bd3ebfe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:17 compute-0 ovn_controller[152945]: 2025-10-11T08:59:17Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:15:a9 10.100.0.4
Oct 11 08:59:17 compute-0 ovn_controller[152945]: 2025-10-11T08:59:17Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:15:a9 10.100.0.4
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.604 2 DEBUG nova.storage.rbd_utils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] resizing rbd image c176845c-89c0-4038-ba22-4ee79bd3ebfe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.704 2 DEBUG nova.objects.instance [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lazy-loading 'migration_context' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.721 2 DEBUG nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.721 2 DEBUG nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Ensure instance console log exists: /var/lib/nova/instances/c176845c-89c0-4038-ba22-4ee79bd3ebfe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.723 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.724 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:17 compute-0 nova_compute[260935]: 2025-10-11 08:59:17.725 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1684: 321 pgs: 321 active+clean; 235 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 4.2 MiB/s wr, 329 op/s
Oct 11 08:59:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:59:18 compute-0 nova_compute[260935]: 2025-10-11 08:59:18.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:18 compute-0 nova_compute[260935]: 2025-10-11 08:59:18.804 2 DEBUG nova.network.neutron [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Successfully created port: e61ae661-47c6-4317-a2c2-6e7a5b567441 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:59:19 compute-0 ceph-mon[74313]: pgmap v1684: 321 pgs: 321 active+clean; 235 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 4.2 MiB/s wr, 329 op/s
Oct 11 08:59:19 compute-0 nova_compute[260935]: 2025-10-11 08:59:19.381 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Updating instance_info_cache with network_info: [{"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:59:19 compute-0 nova_compute[260935]: 2025-10-11 08:59:19.415 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-8848c29f-c82a-4f50-82f4-b2e317161489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:59:19 compute-0 nova_compute[260935]: 2025-10-11 08:59:19.416 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 08:59:19 compute-0 nova_compute[260935]: 2025-10-11 08:59:19.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1685: 321 pgs: 321 active+clean; 235 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 180 op/s
Oct 11 08:59:20 compute-0 nova_compute[260935]: 2025-10-11 08:59:20.147 2 DEBUG nova.network.neutron [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Successfully updated port: e61ae661-47c6-4317-a2c2-6e7a5b567441 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:59:20 compute-0 nova_compute[260935]: 2025-10-11 08:59:20.167 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:59:20 compute-0 nova_compute[260935]: 2025-10-11 08:59:20.168 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:59:20 compute-0 nova_compute[260935]: 2025-10-11 08:59:20.168 2 DEBUG nova.network.neutron [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:59:20 compute-0 nova_compute[260935]: 2025-10-11 08:59:20.366 2 DEBUG nova.network.neutron [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:59:20 compute-0 nova_compute[260935]: 2025-10-11 08:59:20.445 2 DEBUG nova.compute.manager [req-40fccef8-150f-42ab-b0f8-1bc87e8cab08 req-bfb821db-5eda-424e-9e5c-f5065bd0a1f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Received event network-changed-e61ae661-47c6-4317-a2c2-6e7a5b567441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:20 compute-0 nova_compute[260935]: 2025-10-11 08:59:20.446 2 DEBUG nova.compute.manager [req-40fccef8-150f-42ab-b0f8-1bc87e8cab08 req-bfb821db-5eda-424e-9e5c-f5065bd0a1f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Refreshing instance network info cache due to event network-changed-e61ae661-47c6-4317-a2c2-6e7a5b567441. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:59:20 compute-0 nova_compute[260935]: 2025-10-11 08:59:20.447 2 DEBUG oslo_concurrency.lockutils [req-40fccef8-150f-42ab-b0f8-1bc87e8cab08 req-bfb821db-5eda-424e-9e5c-f5065bd0a1f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:59:21 compute-0 ceph-mon[74313]: pgmap v1685: 321 pgs: 321 active+clean; 235 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 180 op/s
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.815 2 DEBUG nova.network.neutron [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.838 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.838 2 DEBUG nova.compute.manager [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Instance network_info: |[{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.839 2 DEBUG oslo_concurrency.lockutils [req-40fccef8-150f-42ab-b0f8-1bc87e8cab08 req-bfb821db-5eda-424e-9e5c-f5065bd0a1f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.839 2 DEBUG nova.network.neutron [req-40fccef8-150f-42ab-b0f8-1bc87e8cab08 req-bfb821db-5eda-424e-9e5c-f5065bd0a1f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Refreshing network info cache for port e61ae661-47c6-4317-a2c2-6e7a5b567441 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.843 2 DEBUG nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Start _get_guest_xml network_info=[{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.849 2 WARNING nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.855 2 DEBUG nova.virt.libvirt.host [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.856 2 DEBUG nova.virt.libvirt.host [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.858 2 DEBUG nova.virt.libvirt.host [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.859 2 DEBUG nova.virt.libvirt.host [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.860 2 DEBUG nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.860 2 DEBUG nova.virt.hardware [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.861 2 DEBUG nova.virt.hardware [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.861 2 DEBUG nova.virt.hardware [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.861 2 DEBUG nova.virt.hardware [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.862 2 DEBUG nova.virt.hardware [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.862 2 DEBUG nova.virt.hardware [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.863 2 DEBUG nova.virt.hardware [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.863 2 DEBUG nova.virt.hardware [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.863 2 DEBUG nova.virt.hardware [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.864 2 DEBUG nova.virt.hardware [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.864 2 DEBUG nova.virt.hardware [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:59:21 compute-0 nova_compute[260935]: 2025-10-11 08:59:21.868 2 DEBUG oslo_concurrency.processutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 268 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.8 MiB/s wr, 189 op/s
Oct 11 08:59:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:59:22 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2727248694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.360 2 DEBUG oslo_concurrency.processutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:22 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2727248694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.407 2 DEBUG nova.storage.rbd_utils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image c176845c-89c0-4038-ba22-4ee79bd3ebfe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.415 2 DEBUG oslo_concurrency.processutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.732 2 DEBUG oslo_concurrency.lockutils [None req-17fdfed9-b47d-4c7f-aeab-151977a88dbc 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "8848c29f-c82a-4f50-82f4-b2e317161489" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.734 2 DEBUG oslo_concurrency.lockutils [None req-17fdfed9-b47d-4c7f-aeab-151977a88dbc 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.735 2 DEBUG nova.compute.manager [None req-17fdfed9-b47d-4c7f-aeab-151977a88dbc 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.742 2 DEBUG nova.compute.manager [None req-17fdfed9-b47d-4c7f-aeab-151977a88dbc 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.745 2 DEBUG nova.objects.instance [None req-17fdfed9-b47d-4c7f-aeab-151977a88dbc 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lazy-loading 'flavor' on Instance uuid 8848c29f-c82a-4f50-82f4-b2e317161489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.784 2 DEBUG nova.virt.libvirt.driver [None req-17fdfed9-b47d-4c7f-aeab-151977a88dbc 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 08:59:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:59:22 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1968486637' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.925 2 DEBUG oslo_concurrency.processutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.929 2 DEBUG nova.virt.libvirt.vif [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:59:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1251385331',display_name='tempest-ServerActionsTestOtherA-server-1251385331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1251385331',id=82,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0zd9vOn1MoClsAHXDeWFPP5kO+VNofuvu89K7qYloOUWW4N93cF9QhhUyaB1pmFmHZjCIiPEyZ5cYnUAirQfqKcPyMnKcAjeovnFjGTt2K03Doe1dtzSfmqbvVdrCpkQ==',key_name='tempest-keypair-168857508',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d33b48586acf4e6c8254f2a1213b001c',ramdisk_id='',reservation_id='r-vuepvjtl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-620268335',owner_user_name='tempest-ServerActionsTestOtherA-620268335-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:59:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d211063ed874837bead2e13898b31d4',uuid=c176845c-89c0-4038-ba22-4ee79bd3ebfe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.930 2 DEBUG nova.network.os_vif_util [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converting VIF {"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.932 2 DEBUG nova.network.os_vif_util [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:82:58,bridge_name='br-int',has_traffic_filtering=True,id=e61ae661-47c6-4317-a2c2-6e7a5b567441,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ae661-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.935 2 DEBUG nova.objects.instance [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lazy-loading 'pci_devices' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.950 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Acquiring lock "6b3e0b89-ecfe-48b4-b347-521aca186a52" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.951 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lock "6b3e0b89-ecfe-48b4-b347-521aca186a52" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.961 2 DEBUG nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:59:22 compute-0 nova_compute[260935]:   <uuid>c176845c-89c0-4038-ba22-4ee79bd3ebfe</uuid>
Oct 11 08:59:22 compute-0 nova_compute[260935]:   <name>instance-00000052</name>
Oct 11 08:59:22 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:59:22 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:59:22 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerActionsTestOtherA-server-1251385331</nova:name>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:59:21</nova:creationTime>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:59:22 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:59:22 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:59:22 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:59:22 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:59:22 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:59:22 compute-0 nova_compute[260935]:         <nova:user uuid="8d211063ed874837bead2e13898b31d4">tempest-ServerActionsTestOtherA-620268335-project-member</nova:user>
Oct 11 08:59:22 compute-0 nova_compute[260935]:         <nova:project uuid="d33b48586acf4e6c8254f2a1213b001c">tempest-ServerActionsTestOtherA-620268335</nova:project>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:59:22 compute-0 nova_compute[260935]:         <nova:port uuid="e61ae661-47c6-4317-a2c2-6e7a5b567441">
Oct 11 08:59:22 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:59:22 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:59:22 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <system>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <entry name="serial">c176845c-89c0-4038-ba22-4ee79bd3ebfe</entry>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <entry name="uuid">c176845c-89c0-4038-ba22-4ee79bd3ebfe</entry>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     </system>
Oct 11 08:59:22 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:59:22 compute-0 nova_compute[260935]:   <os>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:   </os>
Oct 11 08:59:22 compute-0 nova_compute[260935]:   <features>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:   </features>
Oct 11 08:59:22 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:59:22 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:59:22 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/c176845c-89c0-4038-ba22-4ee79bd3ebfe_disk">
Oct 11 08:59:22 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       </source>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:59:22 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/c176845c-89c0-4038-ba22-4ee79bd3ebfe_disk.config">
Oct 11 08:59:22 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       </source>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:59:22 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:1e:82:58"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <target dev="tape61ae661-47"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/c176845c-89c0-4038-ba22-4ee79bd3ebfe/console.log" append="off"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <video>
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     </video>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:59:22 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:59:22 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:59:22 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:59:22 compute-0 nova_compute[260935]: </domain>
Oct 11 08:59:22 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.963 2 DEBUG nova.compute.manager [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Preparing to wait for external event network-vif-plugged-e61ae661-47c6-4317-a2c2-6e7a5b567441 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.964 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.964 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.964 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.965 2 DEBUG nova.virt.libvirt.vif [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:59:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1251385331',display_name='tempest-ServerActionsTestOtherA-server-1251385331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1251385331',id=82,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0zd9vOn1MoClsAHXDeWFPP5kO+VNofuvu89K7qYloOUWW4N93cF9QhhUyaB1pmFmHZjCIiPEyZ5cYnUAirQfqKcPyMnKcAjeovnFjGTt2K03Doe1dtzSfmqbvVdrCpkQ==',key_name='tempest-keypair-168857508',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d33b48586acf4e6c8254f2a1213b001c',ramdisk_id='',reservation_id='r-vuepvjtl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-620268335',owner_user_name='tempest-ServerActionsTestOtherA-620268335-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:59:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d211063ed874837bead2e13898b31d4',uuid=c176845c-89c0-4038-ba22-4ee79bd3ebfe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.965 2 DEBUG nova.network.os_vif_util [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converting VIF {"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.966 2 DEBUG nova.network.os_vif_util [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:82:58,bridge_name='br-int',has_traffic_filtering=True,id=e61ae661-47c6-4317-a2c2-6e7a5b567441,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ae661-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.966 2 DEBUG os_vif [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:82:58,bridge_name='br-int',has_traffic_filtering=True,id=e61ae661-47c6-4317-a2c2-6e7a5b567441,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ae661-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.967 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.968 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.970 2 DEBUG nova.compute.manager [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.973 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape61ae661-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.974 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape61ae661-47, col_values=(('external_ids', {'iface-id': 'e61ae661-47c6-4317-a2c2-6e7a5b567441', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:82:58', 'vm-uuid': 'c176845c-89c0-4038-ba22-4ee79bd3ebfe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:59:22 compute-0 NetworkManager[44960]: <info>  [1760173162.9921] manager: (tape61ae661-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:22 compute-0 nova_compute[260935]: 2025-10-11 08:59:22.998 2 INFO os_vif [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:82:58,bridge_name='br-int',has_traffic_filtering=True,id=e61ae661-47c6-4317-a2c2-6e7a5b567441,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ae661-47')
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.056 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.057 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.064 2 DEBUG nova.virt.hardware [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.064 2 INFO nova.compute.claims [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.142 2 DEBUG nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.143 2 DEBUG nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.143 2 DEBUG nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] No VIF found with MAC fa:16:3e:1e:82:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.144 2 INFO nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Using config drive
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.170 2 DEBUG nova.storage.rbd_utils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image c176845c-89c0-4038-ba22-4ee79bd3ebfe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.341 2 DEBUG oslo_concurrency.processutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:23 compute-0 ceph-mon[74313]: pgmap v1686: 321 pgs: 321 active+clean; 268 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.8 MiB/s wr, 189 op/s
Oct 11 08:59:23 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1968486637' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.602 2 INFO nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Creating config drive at /var/lib/nova/instances/c176845c-89c0-4038-ba22-4ee79bd3ebfe/disk.config
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.617 2 DEBUG oslo_concurrency.processutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c176845c-89c0-4038-ba22-4ee79bd3ebfe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb3mjdrwl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.678 2 DEBUG nova.network.neutron [req-40fccef8-150f-42ab-b0f8-1bc87e8cab08 req-bfb821db-5eda-424e-9e5c-f5065bd0a1f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated VIF entry in instance network info cache for port e61ae661-47c6-4317-a2c2-6e7a5b567441. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.678 2 DEBUG nova.network.neutron [req-40fccef8-150f-42ab-b0f8-1bc87e8cab08 req-bfb821db-5eda-424e-9e5c-f5065bd0a1f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.699 2 DEBUG oslo_concurrency.lockutils [req-40fccef8-150f-42ab-b0f8-1bc87e8cab08 req-bfb821db-5eda-424e-9e5c-f5065bd0a1f9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.786 2 DEBUG oslo_concurrency.processutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c176845c-89c0-4038-ba22-4ee79bd3ebfe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb3mjdrwl" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.826 2 DEBUG nova.storage.rbd_utils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image c176845c-89c0-4038-ba22-4ee79bd3ebfe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:23 compute-0 nova_compute[260935]: 2025-10-11 08:59:23.831 2 DEBUG oslo_concurrency.processutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c176845c-89c0-4038-ba22-4ee79bd3ebfe/disk.config c176845c-89c0-4038-ba22-4ee79bd3ebfe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:59:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3680995992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.068 2 DEBUG oslo_concurrency.processutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c176845c-89c0-4038-ba22-4ee79bd3ebfe/disk.config c176845c-89c0-4038-ba22-4ee79bd3ebfe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.070 2 INFO nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Deleting local config drive /var/lib/nova/instances/c176845c-89c0-4038-ba22-4ee79bd3ebfe/disk.config because it was imported into RBD.
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.093 2 DEBUG oslo_concurrency.processutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.753s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.100 2 DEBUG nova.compute.provider_tree [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:59:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1687: 321 pgs: 321 active+clean; 293 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 224 op/s
Oct 11 08:59:24 compute-0 NetworkManager[44960]: <info>  [1760173164.1200] manager: (tape61ae661-47): new Tun device (/org/freedesktop/NetworkManager/Devices/319)
Oct 11 08:59:24 compute-0 kernel: tape61ae661-47: entered promiscuous mode
Oct 11 08:59:24 compute-0 ovn_controller[152945]: 2025-10-11T08:59:24Z|00712|binding|INFO|Claiming lport e61ae661-47c6-4317-a2c2-6e7a5b567441 for this chassis.
Oct 11 08:59:24 compute-0 ovn_controller[152945]: 2025-10-11T08:59:24Z|00713|binding|INFO|e61ae661-47c6-4317-a2c2-6e7a5b567441: Claiming fa:16:3e:1e:82:58 10.100.0.14
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.129 2 DEBUG nova.scheduler.client.report [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.149 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:82:58 10.100.0.14'], port_security=['fa:16:3e:1e:82:58 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c176845c-89c0-4038-ba22-4ee79bd3ebfe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd33b48586acf4e6c8254f2a1213b001c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd2a92dd4-4e3a-46e0-b2c1-347b2512c6a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68f3e6c4-f574-4830-9133-912bb9cd6132, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e61ae661-47c6-4317-a2c2-6e7a5b567441) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.151 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e61ae661-47c6-4317-a2c2-6e7a5b567441 in datapath 164a664d-5e52-48b9-8b00-f73d0851a4cc bound to our chassis
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.155 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 164a664d-5e52-48b9-8b00-f73d0851a4cc
Oct 11 08:59:24 compute-0 systemd-udevd[339782]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.176 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b2fda43a-25f4-4bcc-b438-a984d50fee11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.178 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap164a664d-51 in ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.180 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap164a664d-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.180 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[28248877-7a36-4738-86e2-c88fbb01a5ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:24 compute-0 NetworkManager[44960]: <info>  [1760173164.1815] device (tape61ae661-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:59:24 compute-0 NetworkManager[44960]: <info>  [1760173164.1824] device (tape61ae661-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.183 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.183 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0b8079-1b21-42e7-bae3-4999f46a6e5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.184 2 DEBUG nova.compute.manager [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:59:24 compute-0 systemd-machined[215705]: New machine qemu-91-instance-00000052.
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.201 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1d77fe7f-1733-4501-9795-a60f8ef8094b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:24 compute-0 systemd[1]: Started Virtual Machine qemu-91-instance-00000052.
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.226 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a2334ee7-9768-4af6-b757-c24a16680a5a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:24 compute-0 ovn_controller[152945]: 2025-10-11T08:59:24Z|00714|binding|INFO|Setting lport e61ae661-47c6-4317-a2c2-6e7a5b567441 ovn-installed in OVS
Oct 11 08:59:24 compute-0 ovn_controller[152945]: 2025-10-11T08:59:24Z|00715|binding|INFO|Setting lport e61ae661-47c6-4317-a2c2-6e7a5b567441 up in Southbound
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.258 2 DEBUG nova.compute.manager [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.258 2 DEBUG nova.network.neutron [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.277 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3951f46c-7290-4646-aeec-e108faee639d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:24 compute-0 NetworkManager[44960]: <info>  [1760173164.2858] manager: (tap164a664d-50): new Veth device (/org/freedesktop/NetworkManager/Devices/320)
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.286 2 INFO nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.285 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3f0dc762-7249-40d1-87b8-6c31951f1cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.309 2 DEBUG nova.compute.manager [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.333 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec56ef6-f19c-43f2-907f-b6ac2d3c127a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.336 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[56b3cbcf-7a6a-4bb5-86cf-7a0356ec7715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:24 compute-0 NetworkManager[44960]: <info>  [1760173164.3606] device (tap164a664d-50): carrier: link connected
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.367 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1d84619c-a835-4c41-a2f4-cbd87336bd93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.383 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8e5e49c5-10c0-42fe-ad1d-af5372a236ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap164a664d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:a0:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500906, 'reachable_time': 17587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339817, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.399 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[856dd0e5-179d-4a7a-8578-cf6e1ebff0d1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:a0ed'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500906, 'tstamp': 500906}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339818, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.422 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[234f8cdc-9115-4c77-ab8c-0207222d665a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap164a664d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:a0:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500906, 'reachable_time': 17587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339826, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.435 2 DEBUG nova.compute.manager [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.437 2 DEBUG nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.438 2 INFO nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Creating image(s)
Oct 11 08:59:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3680995992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.456 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[582f8015-bbb5-4f07-9156-86e748619a96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.477 2 DEBUG nova.storage.rbd_utils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] rbd image 6b3e0b89-ecfe-48b4-b347-521aca186a52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.522 2 DEBUG nova.storage.rbd_utils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] rbd image 6b3e0b89-ecfe-48b4-b347-521aca186a52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.532 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b581aed9-4fdb-48a2-85e3-e15cab31e7a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.533 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap164a664d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.533 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.534 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap164a664d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:24 compute-0 NetworkManager[44960]: <info>  [1760173164.5366] manager: (tap164a664d-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Oct 11 08:59:24 compute-0 kernel: tap164a664d-50: entered promiscuous mode
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.539 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap164a664d-50, col_values=(('external_ids', {'iface-id': 'e23cd806-8523-4e59-ba27-db15cee52548'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:24 compute-0 ovn_controller[152945]: 2025-10-11T08:59:24Z|00716|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.564 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/164a664d-5e52-48b9-8b00-f73d0851a4cc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/164a664d-5e52-48b9-8b00-f73d0851a4cc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.565 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[06f2f8bf-449b-4f82-bde0-620a8710474b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.566 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-164a664d-5e52-48b9-8b00-f73d0851a4cc
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/164a664d-5e52-48b9-8b00-f73d0851a4cc.pid.haproxy
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 164a664d-5e52-48b9-8b00-f73d0851a4cc
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:59:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:24.566 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'env', 'PROCESS_TAG=haproxy-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/164a664d-5e52-48b9-8b00-f73d0851a4cc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.569 2 DEBUG nova.storage.rbd_utils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] rbd image 6b3e0b89-ecfe-48b4-b347-521aca186a52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.575 2 DEBUG oslo_concurrency.processutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.683 2 DEBUG oslo_concurrency.processutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.684 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.685 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.686 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.719 2 DEBUG nova.storage.rbd_utils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] rbd image 6b3e0b89-ecfe-48b4-b347-521aca186a52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.724 2 DEBUG oslo_concurrency.processutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 6b3e0b89-ecfe-48b4-b347-521aca186a52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:24 compute-0 nova_compute[260935]: 2025-10-11 08:59:24.777 2 DEBUG nova.policy [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '88847169865e4e08abccff94205f100d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2e00bda8b86d430782170f8ef1fc58e7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:59:24 compute-0 ovn_controller[152945]: 2025-10-11T08:59:24Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d1:e2:b4 10.100.0.6
Oct 11 08:59:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:59:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:59:24 compute-0 ovn_controller[152945]: 2025-10-11T08:59:24Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d1:e2:b4 10.100.0.6
Oct 11 08:59:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:59:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:59:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:59:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:59:25 compute-0 podman[339987]: 2025-10-11 08:59:25.040741292 +0000 UTC m=+0.111218923 container create 398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:59:25 compute-0 podman[339987]: 2025-10-11 08:59:24.977086837 +0000 UTC m=+0.047564508 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.083 2 DEBUG oslo_concurrency.processutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 6b3e0b89-ecfe-48b4-b347-521aca186a52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:25 compute-0 systemd[1]: Started libpod-conmon-398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829.scope.
Oct 11 08:59:25 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.124 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173165.1086004, c176845c-89c0-4038-ba22-4ee79bd3ebfe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.124 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] VM Started (Lifecycle Event)
Oct 11 08:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88ff3d2b1d366c767d0b7303eead81accb675a1c28ccb112da1864efc018408e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:59:25 compute-0 podman[339987]: 2025-10-11 08:59:25.20235896 +0000 UTC m=+0.272836571 container init 398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.202 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:25 compute-0 podman[339987]: 2025-10-11 08:59:25.2114492 +0000 UTC m=+0.281926811 container start 398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.211 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173165.1087759, c176845c-89c0-4038-ba22-4ee79bd3ebfe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.212 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] VM Paused (Lifecycle Event)
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.227 2 DEBUG nova.storage.rbd_utils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] resizing rbd image 6b3e0b89-ecfe-48b4-b347-521aca186a52_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:59:25 compute-0 neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc[340003]: [NOTICE]   (340042) : New worker (340059) forked
Oct 11 08:59:25 compute-0 neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc[340003]: [NOTICE]   (340042) : Loading success.
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.263 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.268 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.317 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.324 2 DEBUG nova.objects.instance [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lazy-loading 'migration_context' on Instance uuid 6b3e0b89-ecfe-48b4-b347-521aca186a52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.337 2 DEBUG nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.337 2 DEBUG nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Ensure instance console log exists: /var/lib/nova/instances/6b3e0b89-ecfe-48b4-b347-521aca186a52/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.338 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.338 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.338 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:25 compute-0 ceph-mon[74313]: pgmap v1687: 321 pgs: 321 active+clean; 293 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 224 op/s
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.573 2 DEBUG nova.network.neutron [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Successfully created port: 09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.630 2 DEBUG nova.compute.manager [req-96903e11-8063-4e9e-8bca-63650fdb3f58 req-48d941d6-fec2-415b-b543-d041f54890ca e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Received event network-vif-plugged-e61ae661-47c6-4317-a2c2-6e7a5b567441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.630 2 DEBUG oslo_concurrency.lockutils [req-96903e11-8063-4e9e-8bca-63650fdb3f58 req-48d941d6-fec2-415b-b543-d041f54890ca e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.631 2 DEBUG oslo_concurrency.lockutils [req-96903e11-8063-4e9e-8bca-63650fdb3f58 req-48d941d6-fec2-415b-b543-d041f54890ca e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.631 2 DEBUG oslo_concurrency.lockutils [req-96903e11-8063-4e9e-8bca-63650fdb3f58 req-48d941d6-fec2-415b-b543-d041f54890ca e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.631 2 DEBUG nova.compute.manager [req-96903e11-8063-4e9e-8bca-63650fdb3f58 req-48d941d6-fec2-415b-b543-d041f54890ca e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Processing event network-vif-plugged-e61ae661-47c6-4317-a2c2-6e7a5b567441 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.633 2 DEBUG nova.compute.manager [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.637 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173165.637027, c176845c-89c0-4038-ba22-4ee79bd3ebfe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.638 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] VM Resumed (Lifecycle Event)
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.641 2 DEBUG nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.645 2 INFO nova.virt.libvirt.driver [-] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Instance spawned successfully.
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.646 2 DEBUG nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.673 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.683 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.690 2 DEBUG nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.691 2 DEBUG nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.691 2 DEBUG nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.692 2 DEBUG nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.693 2 DEBUG nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.694 2 DEBUG nova.virt.libvirt.driver [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.726 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.769 2 INFO nova.compute.manager [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Took 8.85 seconds to spawn the instance on the hypervisor.
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.770 2 DEBUG nova.compute.manager [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.842 2 INFO nova.compute.manager [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Took 10.07 seconds to build instance.
Oct 11 08:59:25 compute-0 nova_compute[260935]: 2025-10-11 08:59:25.871 2 DEBUG oslo_concurrency.lockutils [None req-63d228ec-d98f-4af4-90ec-67bd708ffac8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:26 compute-0 kernel: tape121c426-77 (unregistering): left promiscuous mode
Oct 11 08:59:26 compute-0 NetworkManager[44960]: <info>  [1760173166.0601] device (tape121c426-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:59:26 compute-0 ovn_controller[152945]: 2025-10-11T08:59:26Z|00717|binding|INFO|Releasing lport e121c426-7734-4def-be42-b69b19dcbf29 from this chassis (sb_readonly=0)
Oct 11 08:59:26 compute-0 ovn_controller[152945]: 2025-10-11T08:59:26Z|00718|binding|INFO|Setting lport e121c426-7734-4def-be42-b69b19dcbf29 down in Southbound
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:26 compute-0 ovn_controller[152945]: 2025-10-11T08:59:26Z|00719|binding|INFO|Removing iface tape121c426-77 ovn-installed in OVS
Oct 11 08:59:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:26.087 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:27:e3 10.100.0.7'], port_security=['fa:16:3e:e4:27:e3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8848c29f-c82a-4f50-82f4-b2e317161489', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0430d49d70a46c2b29abef177f8ccb3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf97e47b-d339-42a8-ae53-936a69d74b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d7c2c31-fb9d-4875-91fa-aedc5fb45092, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e121c426-7734-4def-be42-b69b19dcbf29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:59:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:26.088 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e121c426-7734-4def-be42-b69b19dcbf29 in datapath b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5 unbound from our chassis
Oct 11 08:59:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:26.091 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5
Oct 11 08:59:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 293 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 216 op/s
Oct 11 08:59:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:26.116 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[19b99923-7a96-4787-93b0-49a9801b75c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:26 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Oct 11 08:59:26 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004f.scope: Consumed 12.954s CPU time.
Oct 11 08:59:26 compute-0 systemd-machined[215705]: Machine qemu-88-instance-0000004f terminated.
Oct 11 08:59:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:26.160 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[51d334e9-0381-41b7-a60b-b2b3412f3214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:26.164 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b918613b-d75f-49ea-b174-52207945221d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:26.210 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2f96ad-abfb-4fe1-9da1-d0eb0576e3bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:26.237 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ecede25-2ff9-4dcc-a4fa-646d3fa3cc99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4b8fb64-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:65:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498802, 'reachable_time': 37318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340098, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:26.256 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[212d71ba-52fb-4723-847e-53278fd27a44]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb4b8fb64-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498822, 'tstamp': 498822}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340099, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb4b8fb64-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498827, 'tstamp': 498827}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340099, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:26.257 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4b8fb64-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:26.310 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4b8fb64-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:26.311 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:26.311 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4b8fb64-b0, col_values=(('external_ids', {'iface-id': '59c88b9d-e04e-4ca9-8c74-9510a5f4ab83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:26.312 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.419 2 DEBUG nova.compute.manager [req-ca53b99e-7707-45f9-a53e-d432201ca6c1 req-aa443c32-eac3-4149-8009-37b8ab01f90b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Received event network-vif-unplugged-e121c426-7734-4def-be42-b69b19dcbf29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.420 2 DEBUG oslo_concurrency.lockutils [req-ca53b99e-7707-45f9-a53e-d432201ca6c1 req-aa443c32-eac3-4149-8009-37b8ab01f90b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.421 2 DEBUG oslo_concurrency.lockutils [req-ca53b99e-7707-45f9-a53e-d432201ca6c1 req-aa443c32-eac3-4149-8009-37b8ab01f90b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.421 2 DEBUG oslo_concurrency.lockutils [req-ca53b99e-7707-45f9-a53e-d432201ca6c1 req-aa443c32-eac3-4149-8009-37b8ab01f90b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.422 2 DEBUG nova.compute.manager [req-ca53b99e-7707-45f9-a53e-d432201ca6c1 req-aa443c32-eac3-4149-8009-37b8ab01f90b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] No waiting events found dispatching network-vif-unplugged-e121c426-7734-4def-be42-b69b19dcbf29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.422 2 WARNING nova.compute.manager [req-ca53b99e-7707-45f9-a53e-d432201ca6c1 req-aa443c32-eac3-4149-8009-37b8ab01f90b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Received unexpected event network-vif-unplugged-e121c426-7734-4def-be42-b69b19dcbf29 for instance with vm_state active and task_state powering-off.
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.702 2 DEBUG nova.network.neutron [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Successfully updated port: 09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.725 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Acquiring lock "refresh_cache-6b3e0b89-ecfe-48b4-b347-521aca186a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.726 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Acquired lock "refresh_cache-6b3e0b89-ecfe-48b4-b347-521aca186a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.726 2 DEBUG nova.network.neutron [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.897 2 INFO nova.virt.libvirt.driver [None req-17fdfed9-b47d-4c7f-aeab-151977a88dbc 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Instance shutdown successfully after 4 seconds.
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.904 2 INFO nova.virt.libvirt.driver [-] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Instance destroyed successfully.
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.904 2 DEBUG nova.objects.instance [None req-17fdfed9-b47d-4c7f-aeab-151977a88dbc 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8848c29f-c82a-4f50-82f4-b2e317161489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.924 2 DEBUG nova.compute.manager [None req-17fdfed9-b47d-4c7f-aeab-151977a88dbc 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.979 2 DEBUG oslo_concurrency.lockutils [None req-17fdfed9-b47d-4c7f-aeab-151977a88dbc 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 4.244s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:26 compute-0 nova_compute[260935]: 2025-10-11 08:59:26.991 2 DEBUG nova.network.neutron [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:59:27 compute-0 ceph-mon[74313]: pgmap v1688: 321 pgs: 321 active+clean; 293 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 216 op/s
Oct 11 08:59:27 compute-0 nova_compute[260935]: 2025-10-11 08:59:27.738 2 DEBUG nova.compute.manager [req-56f0ebb9-b481-4613-848b-9d998789fda1 req-a031691d-866d-426a-a535-c83d4684a723 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Received event network-vif-plugged-e61ae661-47c6-4317-a2c2-6e7a5b567441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:27 compute-0 nova_compute[260935]: 2025-10-11 08:59:27.739 2 DEBUG oslo_concurrency.lockutils [req-56f0ebb9-b481-4613-848b-9d998789fda1 req-a031691d-866d-426a-a535-c83d4684a723 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:27 compute-0 nova_compute[260935]: 2025-10-11 08:59:27.739 2 DEBUG oslo_concurrency.lockutils [req-56f0ebb9-b481-4613-848b-9d998789fda1 req-a031691d-866d-426a-a535-c83d4684a723 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:27 compute-0 nova_compute[260935]: 2025-10-11 08:59:27.740 2 DEBUG oslo_concurrency.lockutils [req-56f0ebb9-b481-4613-848b-9d998789fda1 req-a031691d-866d-426a-a535-c83d4684a723 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:27 compute-0 nova_compute[260935]: 2025-10-11 08:59:27.740 2 DEBUG nova.compute.manager [req-56f0ebb9-b481-4613-848b-9d998789fda1 req-a031691d-866d-426a-a535-c83d4684a723 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] No waiting events found dispatching network-vif-plugged-e61ae661-47c6-4317-a2c2-6e7a5b567441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:59:27 compute-0 nova_compute[260935]: 2025-10-11 08:59:27.741 2 WARNING nova.compute.manager [req-56f0ebb9-b481-4613-848b-9d998789fda1 req-a031691d-866d-426a-a535-c83d4684a723 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Received unexpected event network-vif-plugged-e61ae661-47c6-4317-a2c2-6e7a5b567441 for instance with vm_state active and task_state None.
Oct 11 08:59:27 compute-0 nova_compute[260935]: 2025-10-11 08:59:27.741 2 DEBUG nova.compute.manager [req-56f0ebb9-b481-4613-848b-9d998789fda1 req-a031691d-866d-426a-a535-c83d4684a723 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Received event network-changed-09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:27 compute-0 nova_compute[260935]: 2025-10-11 08:59:27.741 2 DEBUG nova.compute.manager [req-56f0ebb9-b481-4613-848b-9d998789fda1 req-a031691d-866d-426a-a535-c83d4684a723 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Refreshing instance network info cache due to event network-changed-09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:59:27 compute-0 nova_compute[260935]: 2025-10-11 08:59:27.742 2 DEBUG oslo_concurrency.lockutils [req-56f0ebb9-b481-4613-848b-9d998789fda1 req-a031691d-866d-426a-a535-c83d4684a723 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-6b3e0b89-ecfe-48b4-b347-521aca186a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:59:27 compute-0 nova_compute[260935]: 2025-10-11 08:59:27.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1689: 321 pgs: 321 active+clean; 372 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 10 MiB/s wr, 375 op/s
Oct 11 08:59:28 compute-0 NetworkManager[44960]: <info>  [1760173168.2967] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Oct 11 08:59:28 compute-0 NetworkManager[44960]: <info>  [1760173168.2977] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/323)
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.325 2 DEBUG nova.network.neutron [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Updating instance_info_cache with network_info: [{"id": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "address": "fa:16:3e:1d:c1:6a", "network": {"id": "97485920-7a19-4e77-b2e1-57668a58b3d6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1098551677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e00bda8b86d430782170f8ef1fc58e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e90a9c-f1", "ovs_interfaceid": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.350 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Releasing lock "refresh_cache-6b3e0b89-ecfe-48b4-b347-521aca186a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.350 2 DEBUG nova.compute.manager [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Instance network_info: |[{"id": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "address": "fa:16:3e:1d:c1:6a", "network": {"id": "97485920-7a19-4e77-b2e1-57668a58b3d6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1098551677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e00bda8b86d430782170f8ef1fc58e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e90a9c-f1", "ovs_interfaceid": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.352 2 DEBUG oslo_concurrency.lockutils [req-56f0ebb9-b481-4613-848b-9d998789fda1 req-a031691d-866d-426a-a535-c83d4684a723 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-6b3e0b89-ecfe-48b4-b347-521aca186a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.352 2 DEBUG nova.network.neutron [req-56f0ebb9-b481-4613-848b-9d998789fda1 req-a031691d-866d-426a-a535-c83d4684a723 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Refreshing network info cache for port 09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.357 2 DEBUG nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Start _get_guest_xml network_info=[{"id": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "address": "fa:16:3e:1d:c1:6a", "network": {"id": "97485920-7a19-4e77-b2e1-57668a58b3d6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1098551677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e00bda8b86d430782170f8ef1fc58e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e90a9c-f1", "ovs_interfaceid": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.365 2 WARNING nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.374 2 DEBUG nova.virt.libvirt.host [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.375 2 DEBUG nova.virt.libvirt.host [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.384 2 DEBUG nova.virt.libvirt.host [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.384 2 DEBUG nova.virt.libvirt.host [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.385 2 DEBUG nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.385 2 DEBUG nova.virt.hardware [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.385 2 DEBUG nova.virt.hardware [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.385 2 DEBUG nova.virt.hardware [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.386 2 DEBUG nova.virt.hardware [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.386 2 DEBUG nova.virt.hardware [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.386 2 DEBUG nova.virt.hardware [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.386 2 DEBUG nova.virt.hardware [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.386 2 DEBUG nova.virt.hardware [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.386 2 DEBUG nova.virt.hardware [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.386 2 DEBUG nova.virt.hardware [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.387 2 DEBUG nova.virt.hardware [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.389 2 DEBUG oslo_concurrency.processutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:28 compute-0 ovn_controller[152945]: 2025-10-11T08:59:28Z|00720|binding|INFO|Releasing lport 59c88b9d-e04e-4ca9-8c74-9510a5f4ab83 from this chassis (sb_readonly=0)
Oct 11 08:59:28 compute-0 ovn_controller[152945]: 2025-10-11T08:59:28Z|00721|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 08:59:28 compute-0 ovn_controller[152945]: 2025-10-11T08:59:28Z|00722|binding|INFO|Releasing lport 59c88b9d-e04e-4ca9-8c74-9510a5f4ab83 from this chassis (sb_readonly=0)
Oct 11 08:59:28 compute-0 ovn_controller[152945]: 2025-10-11T08:59:28Z|00723|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.594 2 DEBUG nova.compute.manager [req-66296246-43de-40b1-8fa6-e62838d92573 req-53dd6150-e1cd-48a2-9cbe-626c338208c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Received event network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.595 2 DEBUG oslo_concurrency.lockutils [req-66296246-43de-40b1-8fa6-e62838d92573 req-53dd6150-e1cd-48a2-9cbe-626c338208c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.596 2 DEBUG oslo_concurrency.lockutils [req-66296246-43de-40b1-8fa6-e62838d92573 req-53dd6150-e1cd-48a2-9cbe-626c338208c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.596 2 DEBUG oslo_concurrency.lockutils [req-66296246-43de-40b1-8fa6-e62838d92573 req-53dd6150-e1cd-48a2-9cbe-626c338208c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.597 2 DEBUG nova.compute.manager [req-66296246-43de-40b1-8fa6-e62838d92573 req-53dd6150-e1cd-48a2-9cbe-626c338208c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] No waiting events found dispatching network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.597 2 WARNING nova.compute.manager [req-66296246-43de-40b1-8fa6-e62838d92573 req-53dd6150-e1cd-48a2-9cbe-626c338208c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Received unexpected event network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 for instance with vm_state stopped and task_state None.
Oct 11 08:59:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:59:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2399810328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.868 2 DEBUG oslo_concurrency.processutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.891 2 DEBUG nova.storage.rbd_utils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] rbd image 6b3e0b89-ecfe-48b4-b347-521aca186a52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:28 compute-0 nova_compute[260935]: 2025-10-11 08:59:28.897 2 DEBUG oslo_concurrency.processutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.245 2 DEBUG nova.objects.instance [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lazy-loading 'flavor' on Instance uuid 8848c29f-c82a-4f50-82f4-b2e317161489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.282 2 DEBUG oslo_concurrency.lockutils [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "refresh_cache-8848c29f-c82a-4f50-82f4-b2e317161489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.283 2 DEBUG oslo_concurrency.lockutils [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquired lock "refresh_cache-8848c29f-c82a-4f50-82f4-b2e317161489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.283 2 DEBUG nova.network.neutron [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.283 2 DEBUG nova.objects.instance [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lazy-loading 'info_cache' on Instance uuid 8848c29f-c82a-4f50-82f4-b2e317161489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:59:29 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3619509957' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.357 2 DEBUG oslo_concurrency.processutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.358 2 DEBUG nova.virt.libvirt.vif [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:59:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1241750620',display_name='tempest-ServerMetadataTestJSON-server-1241750620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1241750620',id=83,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2e00bda8b86d430782170f8ef1fc58e7',ramdisk_id='',reservation_id='r-rwj6wspy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1151660333',owner_user_name='tempest-ServerMetadataTestJSON-
1151660333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:59:24Z,user_data=None,user_id='88847169865e4e08abccff94205f100d',uuid=6b3e0b89-ecfe-48b4-b347-521aca186a52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "address": "fa:16:3e:1d:c1:6a", "network": {"id": "97485920-7a19-4e77-b2e1-57668a58b3d6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1098551677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e00bda8b86d430782170f8ef1fc58e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e90a9c-f1", "ovs_interfaceid": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.358 2 DEBUG nova.network.os_vif_util [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Converting VIF {"id": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "address": "fa:16:3e:1d:c1:6a", "network": {"id": "97485920-7a19-4e77-b2e1-57668a58b3d6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1098551677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e00bda8b86d430782170f8ef1fc58e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e90a9c-f1", "ovs_interfaceid": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.359 2 DEBUG nova.network.os_vif_util [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:c1:6a,bridge_name='br-int',has_traffic_filtering=True,id=09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7,network=Network(97485920-7a19-4e77-b2e1-57668a58b3d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e90a9c-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.360 2 DEBUG nova.objects.instance [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b3e0b89-ecfe-48b4-b347-521aca186a52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.377 2 DEBUG nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:59:29 compute-0 nova_compute[260935]:   <uuid>6b3e0b89-ecfe-48b4-b347-521aca186a52</uuid>
Oct 11 08:59:29 compute-0 nova_compute[260935]:   <name>instance-00000053</name>
Oct 11 08:59:29 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:59:29 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:59:29 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerMetadataTestJSON-server-1241750620</nova:name>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:59:28</nova:creationTime>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:59:29 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:59:29 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:59:29 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:59:29 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:59:29 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:59:29 compute-0 nova_compute[260935]:         <nova:user uuid="88847169865e4e08abccff94205f100d">tempest-ServerMetadataTestJSON-1151660333-project-member</nova:user>
Oct 11 08:59:29 compute-0 nova_compute[260935]:         <nova:project uuid="2e00bda8b86d430782170f8ef1fc58e7">tempest-ServerMetadataTestJSON-1151660333</nova:project>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:59:29 compute-0 nova_compute[260935]:         <nova:port uuid="09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7">
Oct 11 08:59:29 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:59:29 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:59:29 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <system>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <entry name="serial">6b3e0b89-ecfe-48b4-b347-521aca186a52</entry>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <entry name="uuid">6b3e0b89-ecfe-48b4-b347-521aca186a52</entry>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     </system>
Oct 11 08:59:29 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:59:29 compute-0 nova_compute[260935]:   <os>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:   </os>
Oct 11 08:59:29 compute-0 nova_compute[260935]:   <features>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:   </features>
Oct 11 08:59:29 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:59:29 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:59:29 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/6b3e0b89-ecfe-48b4-b347-521aca186a52_disk">
Oct 11 08:59:29 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       </source>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:59:29 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/6b3e0b89-ecfe-48b4-b347-521aca186a52_disk.config">
Oct 11 08:59:29 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       </source>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:59:29 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:1d:c1:6a"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <target dev="tap09e90a9c-f1"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/6b3e0b89-ecfe-48b4-b347-521aca186a52/console.log" append="off"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <video>
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     </video>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:59:29 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:59:29 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:59:29 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:59:29 compute-0 nova_compute[260935]: </domain>
Oct 11 08:59:29 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.377 2 DEBUG nova.compute.manager [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Preparing to wait for external event network-vif-plugged-09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.378 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Acquiring lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.378 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.378 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.379 2 DEBUG nova.virt.libvirt.vif [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:59:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1241750620',display_name='tempest-ServerMetadataTestJSON-server-1241750620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1241750620',id=83,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2e00bda8b86d430782170f8ef1fc58e7',ramdisk_id='',reservation_id='r-rwj6wspy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1151660333',owner_user_name='tempest-ServerMetadataTestJSON-1151660333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:59:24Z,user_data=None,user_id='88847169865e4e08abccff94205f100d',uuid=6b3e0b89-ecfe-48b4-b347-521aca186a52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "address": "fa:16:3e:1d:c1:6a", "network": {"id": "97485920-7a19-4e77-b2e1-57668a58b3d6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1098551677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e00bda8b86d430782170f8ef1fc58e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e90a9c-f1", "ovs_interfaceid": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.379 2 DEBUG nova.network.os_vif_util [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Converting VIF {"id": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "address": "fa:16:3e:1d:c1:6a", "network": {"id": "97485920-7a19-4e77-b2e1-57668a58b3d6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1098551677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e00bda8b86d430782170f8ef1fc58e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e90a9c-f1", "ovs_interfaceid": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.380 2 DEBUG nova.network.os_vif_util [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:c1:6a,bridge_name='br-int',has_traffic_filtering=True,id=09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7,network=Network(97485920-7a19-4e77-b2e1-57668a58b3d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e90a9c-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.380 2 DEBUG os_vif [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:c1:6a,bridge_name='br-int',has_traffic_filtering=True,id=09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7,network=Network(97485920-7a19-4e77-b2e1-57668a58b3d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e90a9c-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.381 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09e90a9c-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.385 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09e90a9c-f1, col_values=(('external_ids', {'iface-id': '09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:c1:6a', 'vm-uuid': '6b3e0b89-ecfe-48b4-b347-521aca186a52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:29 compute-0 NetworkManager[44960]: <info>  [1760173169.3879] manager: (tap09e90a9c-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.396 2 INFO os_vif [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:c1:6a,bridge_name='br-int',has_traffic_filtering=True,id=09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7,network=Network(97485920-7a19-4e77-b2e1-57668a58b3d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e90a9c-f1')
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.450 2 DEBUG nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.451 2 DEBUG nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.451 2 DEBUG nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] No VIF found with MAC fa:16:3e:1d:c1:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.452 2 INFO nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Using config drive
Oct 11 08:59:29 compute-0 ceph-mon[74313]: pgmap v1689: 321 pgs: 321 active+clean; 372 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 10 MiB/s wr, 375 op/s
Oct 11 08:59:29 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2399810328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:29 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3619509957' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:29 compute-0 nova_compute[260935]: 2025-10-11 08:59:29.477 2 DEBUG nova.storage.rbd_utils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] rbd image 6b3e0b89-ecfe-48b4-b347-521aca186a52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1690: 321 pgs: 321 active+clean; 372 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.8 MiB/s wr, 205 op/s
Oct 11 08:59:31 compute-0 nova_compute[260935]: 2025-10-11 08:59:31.151 2 INFO nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Creating config drive at /var/lib/nova/instances/6b3e0b89-ecfe-48b4-b347-521aca186a52/disk.config
Oct 11 08:59:31 compute-0 nova_compute[260935]: 2025-10-11 08:59:31.161 2 DEBUG oslo_concurrency.processutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b3e0b89-ecfe-48b4-b347-521aca186a52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_pq6yurx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:31 compute-0 nova_compute[260935]: 2025-10-11 08:59:31.294 2 DEBUG nova.compute.manager [req-faa47902-1fb3-43b5-bf23-728c0ef18ffb req-50cd2628-d07c-4c52-a67b-90dce614255d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Received event network-changed-e61ae661-47c6-4317-a2c2-6e7a5b567441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:31 compute-0 nova_compute[260935]: 2025-10-11 08:59:31.295 2 DEBUG nova.compute.manager [req-faa47902-1fb3-43b5-bf23-728c0ef18ffb req-50cd2628-d07c-4c52-a67b-90dce614255d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Refreshing instance network info cache due to event network-changed-e61ae661-47c6-4317-a2c2-6e7a5b567441. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:59:31 compute-0 nova_compute[260935]: 2025-10-11 08:59:31.296 2 DEBUG oslo_concurrency.lockutils [req-faa47902-1fb3-43b5-bf23-728c0ef18ffb req-50cd2628-d07c-4c52-a67b-90dce614255d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:59:31 compute-0 nova_compute[260935]: 2025-10-11 08:59:31.297 2 DEBUG oslo_concurrency.lockutils [req-faa47902-1fb3-43b5-bf23-728c0ef18ffb req-50cd2628-d07c-4c52-a67b-90dce614255d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:59:31 compute-0 nova_compute[260935]: 2025-10-11 08:59:31.297 2 DEBUG nova.network.neutron [req-faa47902-1fb3-43b5-bf23-728c0ef18ffb req-50cd2628-d07c-4c52-a67b-90dce614255d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Refreshing network info cache for port e61ae661-47c6-4317-a2c2-6e7a5b567441 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 08:59:31 compute-0 nova_compute[260935]: 2025-10-11 08:59:31.312 2 DEBUG oslo_concurrency.processutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b3e0b89-ecfe-48b4-b347-521aca186a52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_pq6yurx" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:31 compute-0 nova_compute[260935]: 2025-10-11 08:59:31.352 2 DEBUG nova.storage.rbd_utils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] rbd image 6b3e0b89-ecfe-48b4-b347-521aca186a52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:31 compute-0 nova_compute[260935]: 2025-10-11 08:59:31.358 2 DEBUG oslo_concurrency.processutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6b3e0b89-ecfe-48b4-b347-521aca186a52/disk.config 6b3e0b89-ecfe-48b4-b347-521aca186a52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:31 compute-0 ceph-mon[74313]: pgmap v1690: 321 pgs: 321 active+clean; 372 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.8 MiB/s wr, 205 op/s
Oct 11 08:59:31 compute-0 nova_compute[260935]: 2025-10-11 08:59:31.573 2 DEBUG oslo_concurrency.processutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6b3e0b89-ecfe-48b4-b347-521aca186a52/disk.config 6b3e0b89-ecfe-48b4-b347-521aca186a52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:31 compute-0 nova_compute[260935]: 2025-10-11 08:59:31.575 2 INFO nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Deleting local config drive /var/lib/nova/instances/6b3e0b89-ecfe-48b4-b347-521aca186a52/disk.config because it was imported into RBD.
Oct 11 08:59:31 compute-0 NetworkManager[44960]: <info>  [1760173171.6437] manager: (tap09e90a9c-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/325)
Oct 11 08:59:31 compute-0 kernel: tap09e90a9c-f1: entered promiscuous mode
Oct 11 08:59:31 compute-0 ovn_controller[152945]: 2025-10-11T08:59:31Z|00724|binding|INFO|Claiming lport 09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 for this chassis.
Oct 11 08:59:31 compute-0 ovn_controller[152945]: 2025-10-11T08:59:31Z|00725|binding|INFO|09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7: Claiming fa:16:3e:1d:c1:6a 10.100.0.10
Oct 11 08:59:31 compute-0 nova_compute[260935]: 2025-10-11 08:59:31.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:31.676 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:c1:6a 10.100.0.10'], port_security=['fa:16:3e:1d:c1:6a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6b3e0b89-ecfe-48b4-b347-521aca186a52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97485920-7a19-4e77-b2e1-57668a58b3d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e00bda8b86d430782170f8ef1fc58e7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc6b6507-0961-4217-8315-bb813b2de697', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ffea8f1-3897-4c69-bf4a-783147f4af36, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:59:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:31.678 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 in datapath 97485920-7a19-4e77-b2e1-57668a58b3d6 bound to our chassis
Oct 11 08:59:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:31.682 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 97485920-7a19-4e77-b2e1-57668a58b3d6
Oct 11 08:59:31 compute-0 systemd-machined[215705]: New machine qemu-92-instance-00000053.
Oct 11 08:59:31 compute-0 systemd[1]: Started Virtual Machine qemu-92-instance-00000053.
Oct 11 08:59:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:31.702 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0c47e32b-7a51-4e6e-a4f0-39dade4981e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:31.704 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap97485920-71 in ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 08:59:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:31.712 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap97485920-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 08:59:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:31.712 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3b599496-a94d-48b6-b0f3-b64dad8fe341]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:31.714 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[36cef878-49b1-46ab-aac3-9c48bc9e94cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:31 compute-0 ovn_controller[152945]: 2025-10-11T08:59:31Z|00726|binding|INFO|Setting lport 09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 ovn-installed in OVS
Oct 11 08:59:31 compute-0 ovn_controller[152945]: 2025-10-11T08:59:31Z|00727|binding|INFO|Setting lport 09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 up in Southbound
Oct 11 08:59:31 compute-0 nova_compute[260935]: 2025-10-11 08:59:31.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:31 compute-0 nova_compute[260935]: 2025-10-11 08:59:31.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:31 compute-0 systemd-udevd[340255]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:59:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:31.749 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[c71e14e5-cf07-408f-a089-5b1630ef689f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:31 compute-0 NetworkManager[44960]: <info>  [1760173171.7730] device (tap09e90a9c-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:59:31 compute-0 NetworkManager[44960]: <info>  [1760173171.7740] device (tap09e90a9c-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:59:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:31.791 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[85684476-3096-454c-96f0-0c354e8aaac1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:31.851 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3e4154-8c24-48c5-8d13-00dfe8ea51e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:31 compute-0 NetworkManager[44960]: <info>  [1760173171.8633] manager: (tap97485920-70): new Veth device (/org/freedesktop/NetworkManager/Devices/326)
Oct 11 08:59:31 compute-0 systemd-udevd[340261]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:59:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:31.866 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[837d703b-fc85-4a7d-bda9-c845c570f91f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:31.926 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b382b533-53b4-48a4-98e3-e2c39f3bffd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:31.932 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[804f3ff8-d297-4875-b4dc-4d5a43793aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:31 compute-0 NetworkManager[44960]: <info>  [1760173171.9693] device (tap97485920-70): carrier: link connected
Oct 11 08:59:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:31.977 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[373f4d68-db77-4d24-8c2e-0cf18d618ff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:32.009 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6dc7e5-bc0d-4fd5-acca-9b18f0d6d264]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97485920-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:af:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501667, 'reachable_time': 32749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340302, 'error': None, 'target': 'ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:32.041 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea4d8a7-e1cc-4d5f-ae8f-8edfe40b3afb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:af13'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501667, 'tstamp': 501667}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340307, 'error': None, 'target': 'ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:32.077 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7761e03d-edf9-411b-9ef4-af53d64ead84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97485920-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:af:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501667, 'reachable_time': 32749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340324, 'error': None, 'target': 'ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1691: 321 pgs: 321 active+clean; 372 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.8 MiB/s wr, 205 op/s
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:32.120 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[59be936a-702b-4679-acc3-fd5e31896e30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:32.228 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf438a5-0098-4823-8a6e-51a9f6e33781]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:32.230 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97485920-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:32.231 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:32.232 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97485920-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:32 compute-0 nova_compute[260935]: 2025-10-11 08:59:32.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:32 compute-0 NetworkManager[44960]: <info>  [1760173172.2929] manager: (tap97485920-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/327)
Oct 11 08:59:32 compute-0 kernel: tap97485920-70: entered promiscuous mode
Oct 11 08:59:32 compute-0 nova_compute[260935]: 2025-10-11 08:59:32.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:32.311 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap97485920-70, col_values=(('external_ids', {'iface-id': 'f58c3218-66f6-4eda-be89-23d8ab3d253c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:32 compute-0 nova_compute[260935]: 2025-10-11 08:59:32.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:32 compute-0 ovn_controller[152945]: 2025-10-11T08:59:32Z|00728|binding|INFO|Releasing lport f58c3218-66f6-4eda-be89-23d8ab3d253c from this chassis (sb_readonly=0)
Oct 11 08:59:32 compute-0 nova_compute[260935]: 2025-10-11 08:59:32.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:32 compute-0 nova_compute[260935]: 2025-10-11 08:59:32.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:32.354 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/97485920-7a19-4e77-b2e1-57668a58b3d6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/97485920-7a19-4e77-b2e1-57668a58b3d6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:32.355 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[84f42835-223a-4ae3-b288-e68f91b98277]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:32.356 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: global
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-97485920-7a19-4e77-b2e1-57668a58b3d6
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/97485920-7a19-4e77-b2e1-57668a58b3d6.pid.haproxy
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: 
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 97485920-7a19-4e77-b2e1-57668a58b3d6
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 08:59:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:32.357 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6', 'env', 'PROCESS_TAG=haproxy-97485920-7a19-4e77-b2e1-57668a58b3d6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/97485920-7a19-4e77-b2e1-57668a58b3d6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 08:59:32 compute-0 nova_compute[260935]: 2025-10-11 08:59:32.750 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173172.749749, 6b3e0b89-ecfe-48b4-b347-521aca186a52 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:32 compute-0 nova_compute[260935]: 2025-10-11 08:59:32.751 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] VM Started (Lifecycle Event)
Oct 11 08:59:32 compute-0 podman[340369]: 2025-10-11 08:59:32.849931457 +0000 UTC m=+0.075724751 container create f5604a11977078659c63d00737d34682488f83f4ce79c12fc90b95ae62176afe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:59:32 compute-0 systemd[1]: Started libpod-conmon-f5604a11977078659c63d00737d34682488f83f4ce79c12fc90b95ae62176afe.scope.
Oct 11 08:59:32 compute-0 podman[340369]: 2025-10-11 08:59:32.808178596 +0000 UTC m=+0.033971990 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 08:59:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:59:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde826cc5b23d639ee46dbfc6a9d95b5babf6288365b1ca4a04538e8e738b901/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 08:59:32 compute-0 podman[340369]: 2025-10-11 08:59:32.957938046 +0000 UTC m=+0.183731360 container init f5604a11977078659c63d00737d34682488f83f4ce79c12fc90b95ae62176afe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 08:59:32 compute-0 nova_compute[260935]: 2025-10-11 08:59:32.965 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:32 compute-0 podman[340369]: 2025-10-11 08:59:32.972380268 +0000 UTC m=+0.198173562 container start f5604a11977078659c63d00737d34682488f83f4ce79c12fc90b95ae62176afe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:59:32 compute-0 nova_compute[260935]: 2025-10-11 08:59:32.973 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173172.7500815, 6b3e0b89-ecfe-48b4-b347-521aca186a52 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:32 compute-0 nova_compute[260935]: 2025-10-11 08:59:32.974 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] VM Paused (Lifecycle Event)
Oct 11 08:59:32 compute-0 neutron-haproxy-ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6[340384]: [NOTICE]   (340388) : New worker (340390) forked
Oct 11 08:59:32 compute-0 neutron-haproxy-ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6[340384]: [NOTICE]   (340388) : Loading success.
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:32.999 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.004 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.028 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.390 2 DEBUG nova.network.neutron [req-56f0ebb9-b481-4613-848b-9d998789fda1 req-a031691d-866d-426a-a535-c83d4684a723 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Updated VIF entry in instance network info cache for port 09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.391 2 DEBUG nova.network.neutron [req-56f0ebb9-b481-4613-848b-9d998789fda1 req-a031691d-866d-426a-a535-c83d4684a723 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Updating instance_info_cache with network_info: [{"id": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "address": "fa:16:3e:1d:c1:6a", "network": {"id": "97485920-7a19-4e77-b2e1-57668a58b3d6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1098551677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e00bda8b86d430782170f8ef1fc58e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e90a9c-f1", "ovs_interfaceid": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.416 2 DEBUG oslo_concurrency.lockutils [req-56f0ebb9-b481-4613-848b-9d998789fda1 req-a031691d-866d-426a-a535-c83d4684a723 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-6b3e0b89-ecfe-48b4-b347-521aca186a52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:59:33 compute-0 ceph-mon[74313]: pgmap v1691: 321 pgs: 321 active+clean; 372 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.8 MiB/s wr, 205 op/s
Oct 11 08:59:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.787 2 DEBUG nova.compute.manager [req-12b8b1b4-1212-4e72-83fe-b07db337a193 req-2a87c71c-3977-4ee7-9b53-3653bc058de0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Received event network-vif-plugged-09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.787 2 DEBUG oslo_concurrency.lockutils [req-12b8b1b4-1212-4e72-83fe-b07db337a193 req-2a87c71c-3977-4ee7-9b53-3653bc058de0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.787 2 DEBUG oslo_concurrency.lockutils [req-12b8b1b4-1212-4e72-83fe-b07db337a193 req-2a87c71c-3977-4ee7-9b53-3653bc058de0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.788 2 DEBUG oslo_concurrency.lockutils [req-12b8b1b4-1212-4e72-83fe-b07db337a193 req-2a87c71c-3977-4ee7-9b53-3653bc058de0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.788 2 DEBUG nova.compute.manager [req-12b8b1b4-1212-4e72-83fe-b07db337a193 req-2a87c71c-3977-4ee7-9b53-3653bc058de0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Processing event network-vif-plugged-09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.788 2 DEBUG nova.compute.manager [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.804 2 DEBUG nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.804 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173173.7922592, 6b3e0b89-ecfe-48b4-b347-521aca186a52 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.804 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] VM Resumed (Lifecycle Event)
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.809 2 INFO nova.virt.libvirt.driver [-] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Instance spawned successfully.
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.809 2 DEBUG nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.844 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.848 2 DEBUG nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.848 2 DEBUG nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.848 2 DEBUG nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.849 2 DEBUG nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.849 2 DEBUG nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.849 2 DEBUG nova.virt.libvirt.driver [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.865 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.909 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.940 2 INFO nova.compute.manager [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Took 9.50 seconds to spawn the instance on the hypervisor.
Oct 11 08:59:33 compute-0 nova_compute[260935]: 2025-10-11 08:59:33.940 2 DEBUG nova.compute.manager [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.021 2 INFO nova.compute.manager [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Took 11.00 seconds to build instance.
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.042 2 DEBUG oslo_concurrency.lockutils [None req-b9f724ee-7060-4253-8b52-7460ec69ccdb 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lock "6b3e0b89-ecfe-48b4-b347-521aca186a52" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.083 2 DEBUG nova.network.neutron [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Updating instance_info_cache with network_info: [{"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.107 2 DEBUG oslo_concurrency.lockutils [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Releasing lock "refresh_cache-8848c29f-c82a-4f50-82f4-b2e317161489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:59:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1692: 321 pgs: 321 active+clean; 372 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.2 MiB/s wr, 206 op/s
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.146 2 INFO nova.virt.libvirt.driver [-] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Instance destroyed successfully.
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.147 2 DEBUG nova.objects.instance [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8848c29f-c82a-4f50-82f4-b2e317161489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.173 2 DEBUG nova.objects.instance [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lazy-loading 'resources' on Instance uuid 8848c29f-c82a-4f50-82f4-b2e317161489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.197 2 DEBUG nova.virt.libvirt.vif [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:58:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1291030350',display_name='tempest-ListServerFiltersTestJSON-instance-1291030350',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1291030350',id=79,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:59:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b0430d49d70a46c2b29abef177f8ccb3',ramdisk_id='',reservation_id='r-q7why563',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-867232992',owner_user_name='tempest-ListServerFiltersTestJSON-867232992-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:59:26Z,user_data=None,user_id='9734241540ac484291686e1d189d4eea',uuid=8848c29f-c82a-4f50-82f4-b2e317161489,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.197 2 DEBUG nova.network.os_vif_util [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converting VIF {"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.199 2 DEBUG nova.network.os_vif_util [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:27:e3,bridge_name='br-int',has_traffic_filtering=True,id=e121c426-7734-4def-be42-b69b19dcbf29,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape121c426-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.199 2 DEBUG os_vif [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:27:e3,bridge_name='br-int',has_traffic_filtering=True,id=e121c426-7734-4def-be42-b69b19dcbf29,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape121c426-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.202 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape121c426-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.210 2 INFO os_vif [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:27:e3,bridge_name='br-int',has_traffic_filtering=True,id=e121c426-7734-4def-be42-b69b19dcbf29,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape121c426-77')
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.221 2 DEBUG nova.virt.libvirt.driver [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Start _get_guest_xml network_info=[{"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.231 2 WARNING nova.virt.libvirt.driver [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.239 2 DEBUG nova.virt.libvirt.host [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.240 2 DEBUG nova.virt.libvirt.host [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.245 2 DEBUG nova.virt.libvirt.host [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.246 2 DEBUG nova.virt.libvirt.host [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.246 2 DEBUG nova.virt.libvirt.driver [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.247 2 DEBUG nova.virt.hardware [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.247 2 DEBUG nova.virt.hardware [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.248 2 DEBUG nova.virt.hardware [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.248 2 DEBUG nova.virt.hardware [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.249 2 DEBUG nova.virt.hardware [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.249 2 DEBUG nova.virt.hardware [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.249 2 DEBUG nova.virt.hardware [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.250 2 DEBUG nova.virt.hardware [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.250 2 DEBUG nova.virt.hardware [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.250 2 DEBUG nova.virt.hardware [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.251 2 DEBUG nova.virt.hardware [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.251 2 DEBUG nova.objects.instance [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8848c29f-c82a-4f50-82f4-b2e317161489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.270 2 DEBUG oslo_concurrency.processutils [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.684 2 DEBUG nova.network.neutron [req-faa47902-1fb3-43b5-bf23-728c0ef18ffb req-50cd2628-d07c-4c52-a67b-90dce614255d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated VIF entry in instance network info cache for port e61ae661-47c6-4317-a2c2-6e7a5b567441. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.685 2 DEBUG nova.network.neutron [req-faa47902-1fb3-43b5-bf23-728c0ef18ffb req-50cd2628-d07c-4c52-a67b-90dce614255d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.732 2 DEBUG oslo_concurrency.lockutils [req-faa47902-1fb3-43b5-bf23-728c0ef18ffb req-50cd2628-d07c-4c52-a67b-90dce614255d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 08:59:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:59:34 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1229979888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.789 2 DEBUG oslo_concurrency.processutils [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:34 compute-0 podman[340419]: 2025-10-11 08:59:34.843903443 +0000 UTC m=+0.132775347 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 08:59:34 compute-0 nova_compute[260935]: 2025-10-11 08:59:34.845 2 DEBUG oslo_concurrency.processutils [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 08:59:35 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/884950320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.373 2 DEBUG oslo_concurrency.processutils [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.375 2 DEBUG nova.virt.libvirt.vif [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:58:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1291030350',display_name='tempest-ListServerFiltersTestJSON-instance-1291030350',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1291030350',id=79,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:59:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b0430d49d70a46c2b29abef177f8ccb3',ramdisk_id='',reservation_id='r-q7why563',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-867232992',owner_user_name='tempest-ListServerFiltersTestJSON-867232992-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:59:26Z,user_data=None,user_id='9734241540ac484291686e1d189d4eea',uuid=8848c29f-c82a-4f50-82f4-b2e317161489,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.376 2 DEBUG nova.network.os_vif_util [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converting VIF {"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.377 2 DEBUG nova.network.os_vif_util [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:27:e3,bridge_name='br-int',has_traffic_filtering=True,id=e121c426-7734-4def-be42-b69b19dcbf29,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape121c426-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.379 2 DEBUG nova.objects.instance [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8848c29f-c82a-4f50-82f4-b2e317161489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.401 2 DEBUG nova.virt.libvirt.driver [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] End _get_guest_xml xml=<domain type="kvm">
Oct 11 08:59:35 compute-0 nova_compute[260935]:   <uuid>8848c29f-c82a-4f50-82f4-b2e317161489</uuid>
Oct 11 08:59:35 compute-0 nova_compute[260935]:   <name>instance-0000004f</name>
Oct 11 08:59:35 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 08:59:35 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 08:59:35 compute-0 nova_compute[260935]:   <metadata>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1291030350</nova:name>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 08:59:34</nova:creationTime>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 08:59:35 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 08:59:35 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 08:59:35 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 08:59:35 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 08:59:35 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 08:59:35 compute-0 nova_compute[260935]:         <nova:user uuid="9734241540ac484291686e1d189d4eea">tempest-ListServerFiltersTestJSON-867232992-project-member</nova:user>
Oct 11 08:59:35 compute-0 nova_compute[260935]:         <nova:project uuid="b0430d49d70a46c2b29abef177f8ccb3">tempest-ListServerFiltersTestJSON-867232992</nova:project>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 08:59:35 compute-0 nova_compute[260935]:         <nova:port uuid="e121c426-7734-4def-be42-b69b19dcbf29">
Oct 11 08:59:35 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 08:59:35 compute-0 nova_compute[260935]:   </metadata>
Oct 11 08:59:35 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <system>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <entry name="serial">8848c29f-c82a-4f50-82f4-b2e317161489</entry>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <entry name="uuid">8848c29f-c82a-4f50-82f4-b2e317161489</entry>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     </system>
Oct 11 08:59:35 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 08:59:35 compute-0 nova_compute[260935]:   <os>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:   </os>
Oct 11 08:59:35 compute-0 nova_compute[260935]:   <features>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <apic/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:   </features>
Oct 11 08:59:35 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:   </clock>
Oct 11 08:59:35 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:   </cpu>
Oct 11 08:59:35 compute-0 nova_compute[260935]:   <devices>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/8848c29f-c82a-4f50-82f4-b2e317161489_disk">
Oct 11 08:59:35 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       </source>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:59:35 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/8848c29f-c82a-4f50-82f4-b2e317161489_disk.config">
Oct 11 08:59:35 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       </source>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 08:59:35 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       </auth>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     </disk>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:e4:27:e3"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <target dev="tape121c426-77"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     </interface>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/8848c29f-c82a-4f50-82f4-b2e317161489/console.log" append="off"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     </serial>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <video>
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     </video>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <input type="keyboard" bus="usb"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     </rng>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 08:59:35 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 08:59:35 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 08:59:35 compute-0 nova_compute[260935]:   </devices>
Oct 11 08:59:35 compute-0 nova_compute[260935]: </domain>
Oct 11 08:59:35 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.402 2 DEBUG nova.virt.libvirt.driver [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.403 2 DEBUG nova.virt.libvirt.driver [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.404 2 DEBUG nova.virt.libvirt.vif [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:58:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1291030350',display_name='tempest-ListServerFiltersTestJSON-instance-1291030350',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1291030350',id=79,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:59:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='b0430d49d70a46c2b29abef177f8ccb3',ramdisk_id='',reservation_id='r-q7why563',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-867232992',owner_user_name='tempest-ListServerFiltersTestJSON-867232992-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:59:26Z,user_data=None,user_id='9734241540ac484291686e1d189d4eea',uuid=8848c29f-c82a-4f50-82f4-b2e317161489,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.404 2 DEBUG nova.network.os_vif_util [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converting VIF {"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.405 2 DEBUG nova.network.os_vif_util [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:27:e3,bridge_name='br-int',has_traffic_filtering=True,id=e121c426-7734-4def-be42-b69b19dcbf29,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape121c426-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.405 2 DEBUG os_vif [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:27:e3,bridge_name='br-int',has_traffic_filtering=True,id=e121c426-7734-4def-be42-b69b19dcbf29,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape121c426-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.407 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.407 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape121c426-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.412 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape121c426-77, col_values=(('external_ids', {'iface-id': 'e121c426-7734-4def-be42-b69b19dcbf29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:27:e3', 'vm-uuid': '8848c29f-c82a-4f50-82f4-b2e317161489'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:35 compute-0 NetworkManager[44960]: <info>  [1760173175.4155] manager: (tape121c426-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.422 2 INFO os_vif [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:27:e3,bridge_name='br-int',has_traffic_filtering=True,id=e121c426-7734-4def-be42-b69b19dcbf29,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape121c426-77')
Oct 11 08:59:35 compute-0 ceph-mon[74313]: pgmap v1692: 321 pgs: 321 active+clean; 372 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.2 MiB/s wr, 206 op/s
Oct 11 08:59:35 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1229979888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:35 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/884950320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 08:59:35 compute-0 NetworkManager[44960]: <info>  [1760173175.5155] manager: (tape121c426-77): new Tun device (/org/freedesktop/NetworkManager/Devices/329)
Oct 11 08:59:35 compute-0 kernel: tape121c426-77: entered promiscuous mode
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:35 compute-0 ovn_controller[152945]: 2025-10-11T08:59:35Z|00729|binding|INFO|Claiming lport e121c426-7734-4def-be42-b69b19dcbf29 for this chassis.
Oct 11 08:59:35 compute-0 ovn_controller[152945]: 2025-10-11T08:59:35Z|00730|binding|INFO|e121c426-7734-4def-be42-b69b19dcbf29: Claiming fa:16:3e:e4:27:e3 10.100.0.7
Oct 11 08:59:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:35.534 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:27:e3 10.100.0.7'], port_security=['fa:16:3e:e4:27:e3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8848c29f-c82a-4f50-82f4-b2e317161489', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0430d49d70a46c2b29abef177f8ccb3', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'cf97e47b-d339-42a8-ae53-936a69d74b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d7c2c31-fb9d-4875-91fa-aedc5fb45092, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e121c426-7734-4def-be42-b69b19dcbf29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:59:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:35.536 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e121c426-7734-4def-be42-b69b19dcbf29 in datapath b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5 bound to our chassis
Oct 11 08:59:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:35.540 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5
Oct 11 08:59:35 compute-0 systemd-udevd[340494]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 08:59:35 compute-0 ovn_controller[152945]: 2025-10-11T08:59:35Z|00731|binding|INFO|Setting lport e121c426-7734-4def-be42-b69b19dcbf29 ovn-installed in OVS
Oct 11 08:59:35 compute-0 ovn_controller[152945]: 2025-10-11T08:59:35Z|00732|binding|INFO|Setting lport e121c426-7734-4def-be42-b69b19dcbf29 up in Southbound
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:35 compute-0 systemd-machined[215705]: New machine qemu-93-instance-0000004f.
Oct 11 08:59:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:35.569 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[37f69c2b-7495-4fb1-a8d9-ed27bbbe56fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:35 compute-0 NetworkManager[44960]: <info>  [1760173175.5712] device (tape121c426-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 08:59:35 compute-0 NetworkManager[44960]: <info>  [1760173175.5727] device (tape121c426-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 08:59:35 compute-0 systemd[1]: Started Virtual Machine qemu-93-instance-0000004f.
Oct 11 08:59:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:35.618 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[68c51ef1-88ab-4033-961a-5e62fc2ff4d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:35.630 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ea56a0-ee52-48a1-a782-2386e38e89bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:35.677 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a2c5cd-93fa-4f0d-a1a3-d740306c7b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:35.707 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[35e31b30-56e4-4cec-b442-4c8a8f7129c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4b8fb64-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:65:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498802, 'reachable_time': 37318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340508, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:35.727 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b7cddcfd-2be8-4dce-961d-458c4523e3b7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb4b8fb64-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498822, 'tstamp': 498822}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340509, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb4b8fb64-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498827, 'tstamp': 498827}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340509, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:35.729 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4b8fb64-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:35.734 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4b8fb64-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:35.734 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:35.735 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4b8fb64-b0, col_values=(('external_ids', {'iface-id': '59c88b9d-e04e-4ca9-8c74-9510a5f4ab83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:35.735 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.860 2 DEBUG nova.compute.manager [req-ad65c552-5f2a-4226-8813-fb02f250f972 req-214f5280-a1d5-4744-a1ff-388099f407c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Received event network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.862 2 DEBUG oslo_concurrency.lockutils [req-ad65c552-5f2a-4226-8813-fb02f250f972 req-214f5280-a1d5-4744-a1ff-388099f407c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.863 2 DEBUG oslo_concurrency.lockutils [req-ad65c552-5f2a-4226-8813-fb02f250f972 req-214f5280-a1d5-4744-a1ff-388099f407c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.863 2 DEBUG oslo_concurrency.lockutils [req-ad65c552-5f2a-4226-8813-fb02f250f972 req-214f5280-a1d5-4744-a1ff-388099f407c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.864 2 DEBUG nova.compute.manager [req-ad65c552-5f2a-4226-8813-fb02f250f972 req-214f5280-a1d5-4744-a1ff-388099f407c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] No waiting events found dispatching network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.865 2 WARNING nova.compute.manager [req-ad65c552-5f2a-4226-8813-fb02f250f972 req-214f5280-a1d5-4744-a1ff-388099f407c4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Received unexpected event network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 for instance with vm_state stopped and task_state powering-on.
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.921 2 DEBUG nova.compute.manager [req-ef7b6dd3-3849-4ebf-aa13-db9439c359c6 req-42d1b979-0666-43f1-8913-b5985a15b321 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Received event network-vif-plugged-09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.923 2 DEBUG oslo_concurrency.lockutils [req-ef7b6dd3-3849-4ebf-aa13-db9439c359c6 req-42d1b979-0666-43f1-8913-b5985a15b321 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.923 2 DEBUG oslo_concurrency.lockutils [req-ef7b6dd3-3849-4ebf-aa13-db9439c359c6 req-42d1b979-0666-43f1-8913-b5985a15b321 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.924 2 DEBUG oslo_concurrency.lockutils [req-ef7b6dd3-3849-4ebf-aa13-db9439c359c6 req-42d1b979-0666-43f1-8913-b5985a15b321 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.925 2 DEBUG nova.compute.manager [req-ef7b6dd3-3849-4ebf-aa13-db9439c359c6 req-42d1b979-0666-43f1-8913-b5985a15b321 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] No waiting events found dispatching network-vif-plugged-09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:59:35 compute-0 nova_compute[260935]: 2025-10-11 08:59:35.925 2 WARNING nova.compute.manager [req-ef7b6dd3-3849-4ebf-aa13-db9439c359c6 req-42d1b979-0666-43f1-8913-b5985a15b321 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Received unexpected event network-vif-plugged-09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 for instance with vm_state active and task_state None.
Oct 11 08:59:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1693: 321 pgs: 321 active+clean; 372 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 169 op/s
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.123 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 8848c29f-c82a-4f50-82f4-b2e317161489 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.124 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173177.122794, 8848c29f-c82a-4f50-82f4-b2e317161489 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.125 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] VM Resumed (Lifecycle Event)
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.128 2 DEBUG nova.compute.manager [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.135 2 INFO nova.virt.libvirt.driver [-] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Instance rebooted successfully.
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.135 2 DEBUG nova.compute.manager [None req-dabea842-f2bb-4202-84ba-0749121bd47c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.185 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.190 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.227 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173177.123721, 8848c29f-c82a-4f50-82f4-b2e317161489 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.227 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] VM Started (Lifecycle Event)
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.260 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.267 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 08:59:37 compute-0 ceph-mon[74313]: pgmap v1693: 321 pgs: 321 active+clean; 372 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 169 op/s
Oct 11 08:59:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 08:59:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3428325464' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:59:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 08:59:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3428325464' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:59:37 compute-0 ovn_controller[152945]: 2025-10-11T08:59:37Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:82:58 10.100.0.14
Oct 11 08:59:37 compute-0 ovn_controller[152945]: 2025-10-11T08:59:37Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:82:58 10.100.0.14
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.982 2 DEBUG nova.compute.manager [req-a3a5854d-01d9-4fd8-94f1-95bf5343caf3 req-ad0690fc-7089-48f9-9d2a-d6e187a6f00b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Received event network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.985 2 DEBUG oslo_concurrency.lockutils [req-a3a5854d-01d9-4fd8-94f1-95bf5343caf3 req-ad0690fc-7089-48f9-9d2a-d6e187a6f00b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.987 2 DEBUG oslo_concurrency.lockutils [req-a3a5854d-01d9-4fd8-94f1-95bf5343caf3 req-ad0690fc-7089-48f9-9d2a-d6e187a6f00b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.988 2 DEBUG oslo_concurrency.lockutils [req-a3a5854d-01d9-4fd8-94f1-95bf5343caf3 req-ad0690fc-7089-48f9-9d2a-d6e187a6f00b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.989 2 DEBUG nova.compute.manager [req-a3a5854d-01d9-4fd8-94f1-95bf5343caf3 req-ad0690fc-7089-48f9-9d2a-d6e187a6f00b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] No waiting events found dispatching network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:59:37 compute-0 nova_compute[260935]: 2025-10-11 08:59:37.989 2 WARNING nova.compute.manager [req-a3a5854d-01d9-4fd8-94f1-95bf5343caf3 req-ad0690fc-7089-48f9-9d2a-d6e187a6f00b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Received unexpected event network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 for instance with vm_state active and task_state None.
Oct 11 08:59:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1694: 321 pgs: 321 active+clean; 393 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.0 MiB/s wr, 282 op/s
Oct 11 08:59:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3428325464' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 08:59:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3428325464' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 08:59:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:59:38 compute-0 nova_compute[260935]: 2025-10-11 08:59:38.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:38 compute-0 nova_compute[260935]: 2025-10-11 08:59:38.950 2 DEBUG oslo_concurrency.lockutils [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Acquiring lock "6b3e0b89-ecfe-48b4-b347-521aca186a52" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:38 compute-0 nova_compute[260935]: 2025-10-11 08:59:38.951 2 DEBUG oslo_concurrency.lockutils [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lock "6b3e0b89-ecfe-48b4-b347-521aca186a52" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:38 compute-0 nova_compute[260935]: 2025-10-11 08:59:38.951 2 DEBUG oslo_concurrency.lockutils [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Acquiring lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:38 compute-0 nova_compute[260935]: 2025-10-11 08:59:38.951 2 DEBUG oslo_concurrency.lockutils [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:38 compute-0 nova_compute[260935]: 2025-10-11 08:59:38.952 2 DEBUG oslo_concurrency.lockutils [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:38 compute-0 nova_compute[260935]: 2025-10-11 08:59:38.954 2 INFO nova.compute.manager [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Terminating instance
Oct 11 08:59:38 compute-0 nova_compute[260935]: 2025-10-11 08:59:38.955 2 DEBUG nova.compute.manager [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:59:38 compute-0 kernel: tap09e90a9c-f1 (unregistering): left promiscuous mode
Oct 11 08:59:39 compute-0 NetworkManager[44960]: <info>  [1760173179.0062] device (tap09e90a9c-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:59:39 compute-0 ovn_controller[152945]: 2025-10-11T08:59:39Z|00733|binding|INFO|Releasing lport 09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 from this chassis (sb_readonly=0)
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:39 compute-0 ovn_controller[152945]: 2025-10-11T08:59:39Z|00734|binding|INFO|Setting lport 09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 down in Southbound
Oct 11 08:59:39 compute-0 ovn_controller[152945]: 2025-10-11T08:59:39Z|00735|binding|INFO|Removing iface tap09e90a9c-f1 ovn-installed in OVS
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:39.026 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:c1:6a 10.100.0.10'], port_security=['fa:16:3e:1d:c1:6a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6b3e0b89-ecfe-48b4-b347-521aca186a52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97485920-7a19-4e77-b2e1-57668a58b3d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e00bda8b86d430782170f8ef1fc58e7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc6b6507-0961-4217-8315-bb813b2de697', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ffea8f1-3897-4c69-bf4a-783147f4af36, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:59:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:39.028 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 in datapath 97485920-7a19-4e77-b2e1-57668a58b3d6 unbound from our chassis
Oct 11 08:59:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:39.030 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 97485920-7a19-4e77-b2e1-57668a58b3d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:59:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:39.032 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4363c11f-f62d-4093-9e12-c95189bce64b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:39.033 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6 namespace which is not needed anymore
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:39 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d00000053.scope: Deactivated successfully.
Oct 11 08:59:39 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d00000053.scope: Consumed 5.998s CPU time.
Oct 11 08:59:39 compute-0 systemd-machined[215705]: Machine qemu-92-instance-00000053 terminated.
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.205 2 INFO nova.virt.libvirt.driver [-] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Instance destroyed successfully.
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.206 2 DEBUG nova.objects.instance [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lazy-loading 'resources' on Instance uuid 6b3e0b89-ecfe-48b4-b347-521aca186a52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.228 2 DEBUG nova.virt.libvirt.vif [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:59:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1241750620',display_name='tempest-ServerMetadataTestJSON-server-1241750620',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1241750620',id=83,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:59:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2e00bda8b86d430782170f8ef1fc58e7',ramdisk_id='',reservation_id='r-rwj6wspy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',
image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-1151660333',owner_user_name='tempest-ServerMetadataTestJSON-1151660333-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:59:38Z,user_data=None,user_id='88847169865e4e08abccff94205f100d',uuid=6b3e0b89-ecfe-48b4-b347-521aca186a52,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "address": "fa:16:3e:1d:c1:6a", "network": {"id": "97485920-7a19-4e77-b2e1-57668a58b3d6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1098551677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e00bda8b86d430782170f8ef1fc58e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e90a9c-f1", "ovs_interfaceid": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.229 2 DEBUG nova.network.os_vif_util [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Converting VIF {"id": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "address": "fa:16:3e:1d:c1:6a", "network": {"id": "97485920-7a19-4e77-b2e1-57668a58b3d6", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1098551677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e00bda8b86d430782170f8ef1fc58e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e90a9c-f1", "ovs_interfaceid": "09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.229 2 DEBUG nova.network.os_vif_util [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:c1:6a,bridge_name='br-int',has_traffic_filtering=True,id=09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7,network=Network(97485920-7a19-4e77-b2e1-57668a58b3d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e90a9c-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.230 2 DEBUG os_vif [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:c1:6a,bridge_name='br-int',has_traffic_filtering=True,id=09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7,network=Network(97485920-7a19-4e77-b2e1-57668a58b3d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e90a9c-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09e90a9c-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:39 compute-0 neutron-haproxy-ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6[340384]: [NOTICE]   (340388) : haproxy version is 2.8.14-c23fe91
Oct 11 08:59:39 compute-0 neutron-haproxy-ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6[340384]: [NOTICE]   (340388) : path to executable is /usr/sbin/haproxy
Oct 11 08:59:39 compute-0 neutron-haproxy-ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6[340384]: [WARNING]  (340388) : Exiting Master process...
Oct 11 08:59:39 compute-0 neutron-haproxy-ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6[340384]: [WARNING]  (340388) : Exiting Master process...
Oct 11 08:59:39 compute-0 neutron-haproxy-ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6[340384]: [ALERT]    (340388) : Current worker (340390) exited with code 143 (Terminated)
Oct 11 08:59:39 compute-0 neutron-haproxy-ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6[340384]: [WARNING]  (340388) : All workers exited. Exiting... (0)
Oct 11 08:59:39 compute-0 systemd[1]: libpod-f5604a11977078659c63d00737d34682488f83f4ce79c12fc90b95ae62176afe.scope: Deactivated successfully.
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.298 2 INFO os_vif [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:c1:6a,bridge_name='br-int',has_traffic_filtering=True,id=09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7,network=Network(97485920-7a19-4e77-b2e1-57668a58b3d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e90a9c-f1')
Oct 11 08:59:39 compute-0 podman[340576]: 2025-10-11 08:59:39.308171732 +0000 UTC m=+0.127019166 container died f5604a11977078659c63d00737d34682488f83f4ce79c12fc90b95ae62176afe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 08:59:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f5604a11977078659c63d00737d34682488f83f4ce79c12fc90b95ae62176afe-userdata-shm.mount: Deactivated successfully.
Oct 11 08:59:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-dde826cc5b23d639ee46dbfc6a9d95b5babf6288365b1ca4a04538e8e738b901-merged.mount: Deactivated successfully.
Oct 11 08:59:39 compute-0 podman[340576]: 2025-10-11 08:59:39.361304436 +0000 UTC m=+0.180151870 container cleanup f5604a11977078659c63d00737d34682488f83f4ce79c12fc90b95ae62176afe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 08:59:39 compute-0 systemd[1]: libpod-conmon-f5604a11977078659c63d00737d34682488f83f4ce79c12fc90b95ae62176afe.scope: Deactivated successfully.
Oct 11 08:59:39 compute-0 podman[340633]: 2025-10-11 08:59:39.458343312 +0000 UTC m=+0.060661621 container remove f5604a11977078659c63d00737d34682488f83f4ce79c12fc90b95ae62176afe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 08:59:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:39.468 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[28c4e617-6f07-44f6-99c7-3e4443fc8e1a]: (4, ('Sat Oct 11 08:59:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6 (f5604a11977078659c63d00737d34682488f83f4ce79c12fc90b95ae62176afe)\nf5604a11977078659c63d00737d34682488f83f4ce79c12fc90b95ae62176afe\nSat Oct 11 08:59:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6 (f5604a11977078659c63d00737d34682488f83f4ce79c12fc90b95ae62176afe)\nf5604a11977078659c63d00737d34682488f83f4ce79c12fc90b95ae62176afe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:39.471 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ec800791-adf4-4326-b864-873891be3a62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:39.473 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97485920-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:39 compute-0 kernel: tap97485920-70: left promiscuous mode
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:39.494 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[236ea645-7104-4f3a-a2a6-6f1b46f24a31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:39.528 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ea33d8ba-ce50-4e96-93b8-ad05970ce4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:39.530 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4c319026-6468-4bb8-9bbc-2e112ae06195]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:39 compute-0 ceph-mon[74313]: pgmap v1694: 321 pgs: 321 active+clean; 393 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.0 MiB/s wr, 282 op/s
Oct 11 08:59:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:39.553 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[34876e80-96a7-4d42-b837-c4b6c916dba2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501654, 'reachable_time': 31648, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340648, 'error': None, 'target': 'ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:39.556 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-97485920-7a19-4e77-b2e1-57668a58b3d6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:59:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:39.556 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[8df1b57e-71ef-47f3-a041-2297cea3e9ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d97485920\x2d7a19\x2d4e77\x2db2e1\x2d57668a58b3d6.mount: Deactivated successfully.
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.615 2 DEBUG nova.compute.manager [req-9f442988-1602-47ac-b45d-c38ff964fdc2 req-63eb0141-aa88-472d-8d17-d2cae2534a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Received event network-vif-unplugged-09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.615 2 DEBUG oslo_concurrency.lockutils [req-9f442988-1602-47ac-b45d-c38ff964fdc2 req-63eb0141-aa88-472d-8d17-d2cae2534a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.615 2 DEBUG oslo_concurrency.lockutils [req-9f442988-1602-47ac-b45d-c38ff964fdc2 req-63eb0141-aa88-472d-8d17-d2cae2534a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.615 2 DEBUG oslo_concurrency.lockutils [req-9f442988-1602-47ac-b45d-c38ff964fdc2 req-63eb0141-aa88-472d-8d17-d2cae2534a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.615 2 DEBUG nova.compute.manager [req-9f442988-1602-47ac-b45d-c38ff964fdc2 req-63eb0141-aa88-472d-8d17-d2cae2534a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] No waiting events found dispatching network-vif-unplugged-09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.616 2 DEBUG nova.compute.manager [req-9f442988-1602-47ac-b45d-c38ff964fdc2 req-63eb0141-aa88-472d-8d17-d2cae2534a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Received event network-vif-unplugged-09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.755 2 INFO nova.virt.libvirt.driver [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Deleting instance files /var/lib/nova/instances/6b3e0b89-ecfe-48b4-b347-521aca186a52_del
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.756 2 INFO nova.virt.libvirt.driver [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Deletion of /var/lib/nova/instances/6b3e0b89-ecfe-48b4-b347-521aca186a52_del complete
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.815 2 INFO nova.compute.manager [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Took 0.86 seconds to destroy the instance on the hypervisor.
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.816 2 DEBUG oslo.service.loopingcall [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.816 2 DEBUG nova.compute.manager [-] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:59:39 compute-0 nova_compute[260935]: 2025-10-11 08:59:39.816 2 DEBUG nova.network.neutron [-] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:59:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1695: 321 pgs: 321 active+clean; 393 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Oct 11 08:59:40 compute-0 nova_compute[260935]: 2025-10-11 08:59:40.730 2 DEBUG nova.network.neutron [-] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:59:40 compute-0 nova_compute[260935]: 2025-10-11 08:59:40.760 2 INFO nova.compute.manager [-] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Took 0.94 seconds to deallocate network for instance.
Oct 11 08:59:40 compute-0 podman[340651]: 2025-10-11 08:59:40.807403208 +0000 UTC m=+0.099892729 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 11 08:59:40 compute-0 nova_compute[260935]: 2025-10-11 08:59:40.838 2 DEBUG oslo_concurrency.lockutils [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:40 compute-0 nova_compute[260935]: 2025-10-11 08:59:40.838 2 DEBUG oslo_concurrency.lockutils [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:40 compute-0 nova_compute[260935]: 2025-10-11 08:59:40.969 2 DEBUG oslo_concurrency.processutils [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:59:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2810498369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:41 compute-0 nova_compute[260935]: 2025-10-11 08:59:41.431 2 DEBUG oslo_concurrency.processutils [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:41 compute-0 nova_compute[260935]: 2025-10-11 08:59:41.442 2 DEBUG nova.compute.provider_tree [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:59:41 compute-0 nova_compute[260935]: 2025-10-11 08:59:41.465 2 DEBUG nova.scheduler.client.report [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:59:41 compute-0 nova_compute[260935]: 2025-10-11 08:59:41.495 2 DEBUG oslo_concurrency.lockutils [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:41 compute-0 nova_compute[260935]: 2025-10-11 08:59:41.525 2 INFO nova.scheduler.client.report [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Deleted allocations for instance 6b3e0b89-ecfe-48b4-b347-521aca186a52
Oct 11 08:59:41 compute-0 ceph-mon[74313]: pgmap v1695: 321 pgs: 321 active+clean; 393 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Oct 11 08:59:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2810498369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:41 compute-0 nova_compute[260935]: 2025-10-11 08:59:41.634 2 DEBUG oslo_concurrency.lockutils [None req-cf81193d-4592-418f-8d87-e4339f3a0b10 88847169865e4e08abccff94205f100d 2e00bda8b86d430782170f8ef1fc58e7 - - default default] Lock "6b3e0b89-ecfe-48b4-b347-521aca186a52" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:41 compute-0 nova_compute[260935]: 2025-10-11 08:59:41.845 2 DEBUG nova.compute.manager [req-63b80750-fd0f-471a-99b6-f161aaed406b req-f72e392a-0468-4f15-be38-ec7e25c4b7d1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Received event network-vif-plugged-09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:41 compute-0 nova_compute[260935]: 2025-10-11 08:59:41.845 2 DEBUG oslo_concurrency.lockutils [req-63b80750-fd0f-471a-99b6-f161aaed406b req-f72e392a-0468-4f15-be38-ec7e25c4b7d1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:41 compute-0 nova_compute[260935]: 2025-10-11 08:59:41.846 2 DEBUG oslo_concurrency.lockutils [req-63b80750-fd0f-471a-99b6-f161aaed406b req-f72e392a-0468-4f15-be38-ec7e25c4b7d1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:41 compute-0 nova_compute[260935]: 2025-10-11 08:59:41.846 2 DEBUG oslo_concurrency.lockutils [req-63b80750-fd0f-471a-99b6-f161aaed406b req-f72e392a-0468-4f15-be38-ec7e25c4b7d1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6b3e0b89-ecfe-48b4-b347-521aca186a52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:41 compute-0 nova_compute[260935]: 2025-10-11 08:59:41.847 2 DEBUG nova.compute.manager [req-63b80750-fd0f-471a-99b6-f161aaed406b req-f72e392a-0468-4f15-be38-ec7e25c4b7d1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] No waiting events found dispatching network-vif-plugged-09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:59:41 compute-0 nova_compute[260935]: 2025-10-11 08:59:41.847 2 WARNING nova.compute.manager [req-63b80750-fd0f-471a-99b6-f161aaed406b req-f72e392a-0468-4f15-be38-ec7e25c4b7d1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Received unexpected event network-vif-plugged-09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 for instance with vm_state deleted and task_state None.
Oct 11 08:59:41 compute-0 nova_compute[260935]: 2025-10-11 08:59:41.847 2 DEBUG nova.compute.manager [req-63b80750-fd0f-471a-99b6-f161aaed406b req-f72e392a-0468-4f15-be38-ec7e25c4b7d1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Received event network-vif-deleted-09e90a9c-f11c-4d6a-b0e1-b1ce18f73aa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1696: 321 pgs: 321 active+clean; 388 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 177 op/s
Oct 11 08:59:43 compute-0 ceph-mon[74313]: pgmap v1696: 321 pgs: 321 active+clean; 388 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 177 op/s
Oct 11 08:59:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:59:43 compute-0 nova_compute[260935]: 2025-10-11 08:59:43.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 359 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 237 op/s
Oct 11 08:59:44 compute-0 nova_compute[260935]: 2025-10-11 08:59:44.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:44 compute-0 podman[340694]: 2025-10-11 08:59:44.814258182 +0000 UTC m=+0.110126492 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 08:59:44 compute-0 podman[340695]: 2025-10-11 08:59:44.900503971 +0000 UTC m=+0.190324198 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 11 08:59:45 compute-0 ceph-mon[74313]: pgmap v1697: 321 pgs: 321 active+clean; 359 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 237 op/s
Oct 11 08:59:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1698: 321 pgs: 321 active+clean; 359 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 227 op/s
Oct 11 08:59:46 compute-0 ovn_controller[152945]: 2025-10-11T08:59:46Z|00736|binding|INFO|Releasing lport 59c88b9d-e04e-4ca9-8c74-9510a5f4ab83 from this chassis (sb_readonly=0)
Oct 11 08:59:46 compute-0 ovn_controller[152945]: 2025-10-11T08:59:46Z|00737|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 08:59:46 compute-0 nova_compute[260935]: 2025-10-11 08:59:46.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:47 compute-0 ceph-mon[74313]: pgmap v1698: 321 pgs: 321 active+clean; 359 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 227 op/s
Oct 11 08:59:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 359 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 228 op/s
Oct 11 08:59:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:59:48 compute-0 nova_compute[260935]: 2025-10-11 08:59:48.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.454 2 DEBUG oslo_concurrency.lockutils [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.455 2 DEBUG oslo_concurrency.lockutils [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.456 2 DEBUG oslo_concurrency.lockutils [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.457 2 DEBUG oslo_concurrency.lockutils [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.457 2 DEBUG oslo_concurrency.lockutils [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.461 2 INFO nova.compute.manager [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Terminating instance
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.463 2 DEBUG nova.compute.manager [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:59:49 compute-0 kernel: tapbe9e3f2f-e8 (unregistering): left promiscuous mode
Oct 11 08:59:49 compute-0 NetworkManager[44960]: <info>  [1760173189.5345] device (tapbe9e3f2f-e8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:49 compute-0 ovn_controller[152945]: 2025-10-11T08:59:49Z|00738|binding|INFO|Releasing lport be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 from this chassis (sb_readonly=0)
Oct 11 08:59:49 compute-0 ovn_controller[152945]: 2025-10-11T08:59:49Z|00739|binding|INFO|Setting lport be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 down in Southbound
Oct 11 08:59:49 compute-0 ovn_controller[152945]: 2025-10-11T08:59:49Z|00740|binding|INFO|Removing iface tapbe9e3f2f-e8 ovn-installed in OVS
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:49.557 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:e2:b4 10.100.0.6'], port_security=['fa:16:3e:d1:e2:b4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '045df9b6-6c73-44ec-aa65-2c736eb5c00c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0430d49d70a46c2b29abef177f8ccb3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf97e47b-d339-42a8-ae53-936a69d74b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d7c2c31-fb9d-4875-91fa-aedc5fb45092, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:59:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:49.561 162815 INFO neutron.agent.ovn.metadata.agent [-] Port be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 in datapath b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5 unbound from our chassis
Oct 11 08:59:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:49.566 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:49 compute-0 ceph-mon[74313]: pgmap v1699: 321 pgs: 321 active+clean; 359 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 228 op/s
Oct 11 08:59:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:49.603 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5706c6f1-719d-4f9c-8eea-2db1f2e3afee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:49 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d00000051.scope: Deactivated successfully.
Oct 11 08:59:49 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d00000051.scope: Consumed 13.808s CPU time.
Oct 11 08:59:49 compute-0 systemd-machined[215705]: Machine qemu-90-instance-00000051 terminated.
Oct 11 08:59:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:49.659 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6a44b86d-b038-4b88-bad4-cf1e2956598a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:49.665 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[805c5a25-fff4-4da2-baf1-6a4f17a37616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.715 2 INFO nova.virt.libvirt.driver [-] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Instance destroyed successfully.
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.717 2 DEBUG nova.objects.instance [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lazy-loading 'resources' on Instance uuid 045df9b6-6c73-44ec-aa65-2c736eb5c00c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:49.725 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[63da760b-271a-4eee-83ce-696c94295018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.733 2 DEBUG nova.virt.libvirt.vif [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:58:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1046676690',display_name='tempest-ListServerFiltersTestJSON-instance-1046676690',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1046676690',id=81,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:59:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b0430d49d70a46c2b29abef177f8ccb3',ramdisk_id='',reservation_id='r-5z4044t2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-867232992',owner_user_name='tempest-ListServerFiltersTestJSON-867232992-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:59:12Z,user_data=None,user_id='9734241540ac484291686e1d189d4eea',uuid=045df9b6-6c73-44ec-aa65-2c736eb5c00c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "address": "fa:16:3e:d1:e2:b4", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe9e3f2f-e8", "ovs_interfaceid": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.733 2 DEBUG nova.network.os_vif_util [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converting VIF {"id": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "address": "fa:16:3e:d1:e2:b4", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe9e3f2f-e8", "ovs_interfaceid": "be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.734 2 DEBUG nova.network.os_vif_util [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:e2:b4,bridge_name='br-int',has_traffic_filtering=True,id=be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe9e3f2f-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.734 2 DEBUG os_vif [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:e2:b4,bridge_name='br-int',has_traffic_filtering=True,id=be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe9e3f2f-e8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe9e3f2f-e8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:49.755 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7315e4a2-391a-44a6-baf2-e28c7aefd0db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4b8fb64-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:65:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 1084, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 1084, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498802, 'reachable_time': 37318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340760, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.772 2 INFO os_vif [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:e2:b4,bridge_name='br-int',has_traffic_filtering=True,id=be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe9e3f2f-e8')
Oct 11 08:59:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:49.781 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[54a0e91e-35dc-4869-bde0-4ce9340808f6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb4b8fb64-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498822, 'tstamp': 498822}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340761, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb4b8fb64-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498827, 'tstamp': 498827}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340761, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:49.784 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4b8fb64-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:49 compute-0 ovn_controller[152945]: 2025-10-11T08:59:49Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:27:e3 10.100.0.7
Oct 11 08:59:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:49.790 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4b8fb64-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:49.790 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:49.791 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4b8fb64-b0, col_values=(('external_ids', {'iface-id': '59c88b9d-e04e-4ca9-8c74-9510a5f4ab83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:49.791 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:49 compute-0 nova_compute[260935]: 2025-10-11 08:59:49.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 359 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 110 KiB/s wr, 115 op/s
Oct 11 08:59:50 compute-0 nova_compute[260935]: 2025-10-11 08:59:50.214 2 INFO nova.virt.libvirt.driver [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Deleting instance files /var/lib/nova/instances/045df9b6-6c73-44ec-aa65-2c736eb5c00c_del
Oct 11 08:59:50 compute-0 nova_compute[260935]: 2025-10-11 08:59:50.215 2 INFO nova.virt.libvirt.driver [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Deletion of /var/lib/nova/instances/045df9b6-6c73-44ec-aa65-2c736eb5c00c_del complete
Oct 11 08:59:50 compute-0 nova_compute[260935]: 2025-10-11 08:59:50.268 2 INFO nova.compute.manager [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 11 08:59:50 compute-0 nova_compute[260935]: 2025-10-11 08:59:50.269 2 DEBUG oslo.service.loopingcall [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:59:50 compute-0 nova_compute[260935]: 2025-10-11 08:59:50.269 2 DEBUG nova.compute.manager [-] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:59:50 compute-0 nova_compute[260935]: 2025-10-11 08:59:50.270 2 DEBUG nova.network.neutron [-] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:59:50 compute-0 nova_compute[260935]: 2025-10-11 08:59:50.571 2 DEBUG nova.compute.manager [req-524e615c-106a-4583-b41e-ef36006171c9 req-5b7a675f-27aa-481b-afbf-8b7c3dbf77dc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Received event network-vif-unplugged-be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:50 compute-0 nova_compute[260935]: 2025-10-11 08:59:50.571 2 DEBUG oslo_concurrency.lockutils [req-524e615c-106a-4583-b41e-ef36006171c9 req-5b7a675f-27aa-481b-afbf-8b7c3dbf77dc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:50 compute-0 nova_compute[260935]: 2025-10-11 08:59:50.572 2 DEBUG oslo_concurrency.lockutils [req-524e615c-106a-4583-b41e-ef36006171c9 req-5b7a675f-27aa-481b-afbf-8b7c3dbf77dc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:50 compute-0 nova_compute[260935]: 2025-10-11 08:59:50.572 2 DEBUG oslo_concurrency.lockutils [req-524e615c-106a-4583-b41e-ef36006171c9 req-5b7a675f-27aa-481b-afbf-8b7c3dbf77dc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:50 compute-0 nova_compute[260935]: 2025-10-11 08:59:50.573 2 DEBUG nova.compute.manager [req-524e615c-106a-4583-b41e-ef36006171c9 req-5b7a675f-27aa-481b-afbf-8b7c3dbf77dc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] No waiting events found dispatching network-vif-unplugged-be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:59:50 compute-0 nova_compute[260935]: 2025-10-11 08:59:50.573 2 DEBUG nova.compute.manager [req-524e615c-106a-4583-b41e-ef36006171c9 req-5b7a675f-27aa-481b-afbf-8b7c3dbf77dc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Received event network-vif-unplugged-be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:59:51 compute-0 sudo[340781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:59:51 compute-0 sudo[340781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:59:51 compute-0 sudo[340781]: pam_unix(sudo:session): session closed for user root
Oct 11 08:59:51 compute-0 sudo[340806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:59:51 compute-0 sudo[340806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:59:51 compute-0 sudo[340806]: pam_unix(sudo:session): session closed for user root
Oct 11 08:59:51 compute-0 nova_compute[260935]: 2025-10-11 08:59:51.561 2 DEBUG nova.network.neutron [-] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:59:51 compute-0 ceph-mon[74313]: pgmap v1700: 321 pgs: 321 active+clean; 359 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 110 KiB/s wr, 115 op/s
Oct 11 08:59:51 compute-0 nova_compute[260935]: 2025-10-11 08:59:51.592 2 INFO nova.compute.manager [-] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Took 1.32 seconds to deallocate network for instance.
Oct 11 08:59:51 compute-0 sudo[340831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:59:51 compute-0 sudo[340831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:59:51 compute-0 sudo[340831]: pam_unix(sudo:session): session closed for user root
Oct 11 08:59:51 compute-0 nova_compute[260935]: 2025-10-11 08:59:51.669 2 DEBUG oslo_concurrency.lockutils [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:51 compute-0 nova_compute[260935]: 2025-10-11 08:59:51.669 2 DEBUG oslo_concurrency.lockutils [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:51 compute-0 sudo[340856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 08:59:51 compute-0 sudo[340856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:59:51 compute-0 nova_compute[260935]: 2025-10-11 08:59:51.818 2 DEBUG oslo_concurrency.processutils [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 331 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 115 KiB/s wr, 129 op/s
Oct 11 08:59:52 compute-0 nova_compute[260935]: 2025-10-11 08:59:52.155 2 DEBUG nova.compute.manager [req-8621b4c4-0a98-4d5d-a5bb-93b72d56d642 req-ade104e7-94b7-4fa9-802f-5ec6ca39da08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Received event network-vif-deleted-be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:59:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/279963949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:52 compute-0 nova_compute[260935]: 2025-10-11 08:59:52.307 2 DEBUG oslo_concurrency.processutils [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:52 compute-0 nova_compute[260935]: 2025-10-11 08:59:52.317 2 DEBUG nova.compute.provider_tree [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:59:52 compute-0 nova_compute[260935]: 2025-10-11 08:59:52.357 2 DEBUG nova.scheduler.client.report [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:59:52 compute-0 nova_compute[260935]: 2025-10-11 08:59:52.381 2 DEBUG oslo_concurrency.lockutils [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:52 compute-0 sudo[340856]: pam_unix(sudo:session): session closed for user root
Oct 11 08:59:52 compute-0 nova_compute[260935]: 2025-10-11 08:59:52.421 2 INFO nova.scheduler.client.report [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Deleted allocations for instance 045df9b6-6c73-44ec-aa65-2c736eb5c00c
Oct 11 08:59:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:59:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:59:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 08:59:52 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:59:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 08:59:52 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:59:52 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 793742d8-4d0d-43d7-88ed-c8a5a0f6f75b does not exist
Oct 11 08:59:52 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 5c98a317-de8b-4796-8b48-7180d55ce934 does not exist
Oct 11 08:59:52 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9355d83b-4dad-43d3-8f87-346567475597 does not exist
Oct 11 08:59:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 08:59:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:59:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 08:59:52 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:59:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 08:59:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:59:52 compute-0 nova_compute[260935]: 2025-10-11 08:59:52.533 2 DEBUG oslo_concurrency.lockutils [None req-52acd3e5-a2f7-47c1-94eb-4184dd7fdb88 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:52 compute-0 sudo[340932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:59:52 compute-0 sudo[340932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:59:52 compute-0 sudo[340932]: pam_unix(sudo:session): session closed for user root
Oct 11 08:59:52 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/279963949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:59:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 08:59:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 08:59:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 08:59:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 08:59:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 08:59:52 compute-0 sudo[340957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:59:52 compute-0 sudo[340957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:59:52 compute-0 sudo[340957]: pam_unix(sudo:session): session closed for user root
Oct 11 08:59:52 compute-0 nova_compute[260935]: 2025-10-11 08:59:52.761 2 DEBUG nova.compute.manager [req-ff9525df-8053-4fdd-a485-a197d788d574 req-1ea59e87-ae56-4150-94f2-9677ae62c427 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Received event network-vif-plugged-be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:52 compute-0 nova_compute[260935]: 2025-10-11 08:59:52.762 2 DEBUG oslo_concurrency.lockutils [req-ff9525df-8053-4fdd-a485-a197d788d574 req-1ea59e87-ae56-4150-94f2-9677ae62c427 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:52 compute-0 nova_compute[260935]: 2025-10-11 08:59:52.763 2 DEBUG oslo_concurrency.lockutils [req-ff9525df-8053-4fdd-a485-a197d788d574 req-1ea59e87-ae56-4150-94f2-9677ae62c427 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:52 compute-0 nova_compute[260935]: 2025-10-11 08:59:52.763 2 DEBUG oslo_concurrency.lockutils [req-ff9525df-8053-4fdd-a485-a197d788d574 req-1ea59e87-ae56-4150-94f2-9677ae62c427 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "045df9b6-6c73-44ec-aa65-2c736eb5c00c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:52 compute-0 nova_compute[260935]: 2025-10-11 08:59:52.764 2 DEBUG nova.compute.manager [req-ff9525df-8053-4fdd-a485-a197d788d574 req-1ea59e87-ae56-4150-94f2-9677ae62c427 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] No waiting events found dispatching network-vif-plugged-be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:59:52 compute-0 nova_compute[260935]: 2025-10-11 08:59:52.764 2 WARNING nova.compute.manager [req-ff9525df-8053-4fdd-a485-a197d788d574 req-1ea59e87-ae56-4150-94f2-9677ae62c427 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Received unexpected event network-vif-plugged-be9e3f2f-e8b8-4731-be9a-1b84d50bb5f8 for instance with vm_state deleted and task_state None.
Oct 11 08:59:52 compute-0 sudo[340982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:59:52 compute-0 sudo[340982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:59:52 compute-0 sudo[340982]: pam_unix(sudo:session): session closed for user root
Oct 11 08:59:52 compute-0 sudo[341007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 08:59:52 compute-0 sudo[341007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:59:53 compute-0 podman[341073]: 2025-10-11 08:59:53.397606211 +0000 UTC m=+0.059175719 container create 4998d1dd062fe4f96ec6a56e35e15913a2d30d1c8ed555eae1b622f9221fa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:59:53 compute-0 systemd[1]: Started libpod-conmon-4998d1dd062fe4f96ec6a56e35e15913a2d30d1c8ed555eae1b622f9221fa979.scope.
Oct 11 08:59:53 compute-0 podman[341073]: 2025-10-11 08:59:53.376031456 +0000 UTC m=+0.037600974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:59:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:59:53 compute-0 podman[341073]: 2025-10-11 08:59:53.510064608 +0000 UTC m=+0.171634126 container init 4998d1dd062fe4f96ec6a56e35e15913a2d30d1c8ed555eae1b622f9221fa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:59:53 compute-0 podman[341073]: 2025-10-11 08:59:53.519690982 +0000 UTC m=+0.181260510 container start 4998d1dd062fe4f96ec6a56e35e15913a2d30d1c8ed555eae1b622f9221fa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 08:59:53 compute-0 podman[341073]: 2025-10-11 08:59:53.524905741 +0000 UTC m=+0.186475269 container attach 4998d1dd062fe4f96ec6a56e35e15913a2d30d1c8ed555eae1b622f9221fa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.522 2 DEBUG oslo_concurrency.lockutils [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "f64218b5-ea49-4ed7-9945-1b8e056e4161" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.523 2 DEBUG oslo_concurrency.lockutils [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "f64218b5-ea49-4ed7-9945-1b8e056e4161" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.524 2 DEBUG oslo_concurrency.lockutils [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.524 2 DEBUG oslo_concurrency.lockutils [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.524 2 DEBUG oslo_concurrency.lockutils [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.526 2 INFO nova.compute.manager [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Terminating instance
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.527 2 DEBUG nova.compute.manager [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:59:53 compute-0 priceless_faraday[341089]: 167 167
Oct 11 08:59:53 compute-0 systemd[1]: libpod-4998d1dd062fe4f96ec6a56e35e15913a2d30d1c8ed555eae1b622f9221fa979.scope: Deactivated successfully.
Oct 11 08:59:53 compute-0 podman[341073]: 2025-10-11 08:59:53.533073434 +0000 UTC m=+0.194643002 container died 4998d1dd062fe4f96ec6a56e35e15913a2d30d1c8ed555eae1b622f9221fa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 08:59:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-62efda5babeaed189f6e48f2b97b3be77604d68dbe1a059fbd591b23b58328e0-merged.mount: Deactivated successfully.
Oct 11 08:59:53 compute-0 podman[341073]: 2025-10-11 08:59:53.584072578 +0000 UTC m=+0.245642076 container remove 4998d1dd062fe4f96ec6a56e35e15913a2d30d1c8ed555eae1b622f9221fa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 08:59:53 compute-0 kernel: tap2d377d55-39 (unregistering): left promiscuous mode
Oct 11 08:59:53 compute-0 NetworkManager[44960]: <info>  [1760173193.6050] device (tap2d377d55-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:59:53 compute-0 systemd[1]: libpod-conmon-4998d1dd062fe4f96ec6a56e35e15913a2d30d1c8ed555eae1b622f9221fa979.scope: Deactivated successfully.
Oct 11 08:59:53 compute-0 ceph-mon[74313]: pgmap v1701: 321 pgs: 321 active+clean; 331 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 115 KiB/s wr, 129 op/s
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:53 compute-0 ovn_controller[152945]: 2025-10-11T08:59:53Z|00741|binding|INFO|Releasing lport 2d377d55-39e9-4780-bc6f-8bd84e5a93a7 from this chassis (sb_readonly=0)
Oct 11 08:59:53 compute-0 ovn_controller[152945]: 2025-10-11T08:59:53Z|00742|binding|INFO|Setting lport 2d377d55-39e9-4780-bc6f-8bd84e5a93a7 down in Southbound
Oct 11 08:59:53 compute-0 ovn_controller[152945]: 2025-10-11T08:59:53Z|00743|binding|INFO|Removing iface tap2d377d55-39 ovn-installed in OVS
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:53.641 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:15:a9 10.100.0.4'], port_security=['fa:16:3e:d5:15:a9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f64218b5-ea49-4ed7-9945-1b8e056e4161', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0430d49d70a46c2b29abef177f8ccb3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf97e47b-d339-42a8-ae53-936a69d74b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d7c2c31-fb9d-4875-91fa-aedc5fb45092, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=2d377d55-39e9-4780-bc6f-8bd84e5a93a7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:59:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:53.643 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 2d377d55-39e9-4780-bc6f-8bd84e5a93a7 in datapath b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5 unbound from our chassis
Oct 11 08:59:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:53.645 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:53.669 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d013d4-cbf7-4259-a3d0-6279f7cee88f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:53 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d00000050.scope: Deactivated successfully.
Oct 11 08:59:53 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d00000050.scope: Consumed 15.478s CPU time.
Oct 11 08:59:53 compute-0 systemd-machined[215705]: Machine qemu-89-instance-00000050 terminated.
Oct 11 08:59:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:53.707 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[11cc84f9-885b-4a51-9990-7dc289995644]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:53.713 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3001b848-b3e9-4381-a5dc-35d8eba1d8cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:53.748 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a8912fc8-b431-4c06-ad64-cc200fd7b8ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:53.773 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1d47d14f-0c9a-417d-a764-14e27d0399e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4b8fb64-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:65:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 16, 'rx_bytes': 1084, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 16, 'rx_bytes': 1084, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498802, 'reachable_time': 37318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341134, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.784 2 INFO nova.virt.libvirt.driver [-] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Instance destroyed successfully.
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.785 2 DEBUG nova.objects.instance [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lazy-loading 'resources' on Instance uuid f64218b5-ea49-4ed7-9945-1b8e056e4161 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:53.794 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b4512cbe-575a-4cf5-8ddc-05f31834dce1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb4b8fb64-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498822, 'tstamp': 498822}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341143, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb4b8fb64-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498827, 'tstamp': 498827}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341143, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:53.796 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4b8fb64-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:53.803 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4b8fb64-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:53.804 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:53.805 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4b8fb64-b0, col_values=(('external_ids', {'iface-id': '59c88b9d-e04e-4ca9-8c74-9510a5f4ab83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:53.806 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.808 2 DEBUG nova.virt.libvirt.vif [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:58:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1557190648',display_name='tempest-ListServerFiltersTestJSON-instance-1557190648',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1557190648',id=80,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:59:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b0430d49d70a46c2b29abef177f8ccb3',ramdisk_id='',reservation_id='r-rrp9whin',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-867232992',owner_user_name='tempest-ListServerFiltersTestJSON-867232992-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:59:05Z,user_data=None,user_id='9734241540ac484291686e1d189d4eea',uuid=f64218b5-ea49-4ed7-9945-1b8e056e4161,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "address": "fa:16:3e:d5:15:a9", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d377d55-39", "ovs_interfaceid": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.808 2 DEBUG nova.network.os_vif_util [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converting VIF {"id": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "address": "fa:16:3e:d5:15:a9", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d377d55-39", "ovs_interfaceid": "2d377d55-39e9-4780-bc6f-8bd84e5a93a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:59:53 compute-0 podman[341125]: 2025-10-11 08:59:53.80789543 +0000 UTC m=+0.053574478 container create 4e949d1dfc85978455136136370234e4cfd570a3f569cf15708680f8812bdd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.809 2 DEBUG nova.network.os_vif_util [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:15:a9,bridge_name='br-int',has_traffic_filtering=True,id=2d377d55-39e9-4780-bc6f-8bd84e5a93a7,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d377d55-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.809 2 DEBUG os_vif [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:15:a9,bridge_name='br-int',has_traffic_filtering=True,id=2d377d55-39e9-4780-bc6f-8bd84e5a93a7,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d377d55-39') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.813 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d377d55-39, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.823 2 INFO os_vif [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:15:a9,bridge_name='br-int',has_traffic_filtering=True,id=2d377d55-39e9-4780-bc6f-8bd84e5a93a7,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d377d55-39')
Oct 11 08:59:53 compute-0 systemd[1]: Started libpod-conmon-4e949d1dfc85978455136136370234e4cfd570a3f569cf15708680f8812bdd45.scope.
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.856 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.858 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.874 2 DEBUG nova.compute.manager [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 08:59:53 compute-0 podman[341125]: 2025-10-11 08:59:53.78262063 +0000 UTC m=+0.028299688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:59:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d7e03667c6325bbb0c710134689de46174a0a6ef8e4e96a97c322cdec5b0dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d7e03667c6325bbb0c710134689de46174a0a6ef8e4e96a97c322cdec5b0dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d7e03667c6325bbb0c710134689de46174a0a6ef8e4e96a97c322cdec5b0dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d7e03667c6325bbb0c710134689de46174a0a6ef8e4e96a97c322cdec5b0dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7d7e03667c6325bbb0c710134689de46174a0a6ef8e4e96a97c322cdec5b0dc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 08:59:53 compute-0 podman[341125]: 2025-10-11 08:59:53.92043598 +0000 UTC m=+0.166115028 container init 4e949d1dfc85978455136136370234e4cfd570a3f569cf15708680f8812bdd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 08:59:53 compute-0 podman[341125]: 2025-10-11 08:59:53.933482272 +0000 UTC m=+0.179161320 container start 4e949d1dfc85978455136136370234e4cfd570a3f569cf15708680f8812bdd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:59:53 compute-0 podman[341125]: 2025-10-11 08:59:53.937419854 +0000 UTC m=+0.183098992 container attach 4e949d1dfc85978455136136370234e4cfd570a3f569cf15708680f8812bdd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.949 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.950 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.964 2 DEBUG nova.virt.hardware [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 08:59:53 compute-0 nova_compute[260935]: 2025-10-11 08:59:53.964 2 INFO nova.compute.claims [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Claim successful on node compute-0.ctlplane.example.com
Oct 11 08:59:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1702: 321 pgs: 321 active+clean; 279 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 50 KiB/s wr, 132 op/s
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.170 2 DEBUG oslo_concurrency.processutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.243 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173179.1956296, 6b3e0b89-ecfe-48b4-b347-521aca186a52 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.244 2 INFO nova.compute.manager [-] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] VM Stopped (Lifecycle Event)
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.289 2 DEBUG nova.compute.manager [None req-2ee2fa97-18e2-4df0-b749-c1dedcb7793f - - - - - -] [instance: 6b3e0b89-ecfe-48b4-b347-521aca186a52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.304 2 INFO nova.virt.libvirt.driver [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Deleting instance files /var/lib/nova/instances/f64218b5-ea49-4ed7-9945-1b8e056e4161_del
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.306 2 INFO nova.virt.libvirt.driver [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Deletion of /var/lib/nova/instances/f64218b5-ea49-4ed7-9945-1b8e056e4161_del complete
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.392 2 DEBUG nova.compute.manager [req-52b73ee1-03c2-45e5-bc5b-ba3916e9dbaa req-60e5bf0d-ed56-4da1-9bd7-7c03ee283501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Received event network-vif-unplugged-2d377d55-39e9-4780-bc6f-8bd84e5a93a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.393 2 DEBUG oslo_concurrency.lockutils [req-52b73ee1-03c2-45e5-bc5b-ba3916e9dbaa req-60e5bf0d-ed56-4da1-9bd7-7c03ee283501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.393 2 DEBUG oslo_concurrency.lockutils [req-52b73ee1-03c2-45e5-bc5b-ba3916e9dbaa req-60e5bf0d-ed56-4da1-9bd7-7c03ee283501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.394 2 DEBUG oslo_concurrency.lockutils [req-52b73ee1-03c2-45e5-bc5b-ba3916e9dbaa req-60e5bf0d-ed56-4da1-9bd7-7c03ee283501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.394 2 DEBUG nova.compute.manager [req-52b73ee1-03c2-45e5-bc5b-ba3916e9dbaa req-60e5bf0d-ed56-4da1-9bd7-7c03ee283501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] No waiting events found dispatching network-vif-unplugged-2d377d55-39e9-4780-bc6f-8bd84e5a93a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.394 2 DEBUG nova.compute.manager [req-52b73ee1-03c2-45e5-bc5b-ba3916e9dbaa req-60e5bf0d-ed56-4da1-9bd7-7c03ee283501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Received event network-vif-unplugged-2d377d55-39e9-4780-bc6f-8bd84e5a93a7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.395 2 DEBUG nova.compute.manager [req-52b73ee1-03c2-45e5-bc5b-ba3916e9dbaa req-60e5bf0d-ed56-4da1-9bd7-7c03ee283501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Received event network-vif-plugged-2d377d55-39e9-4780-bc6f-8bd84e5a93a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.395 2 DEBUG oslo_concurrency.lockutils [req-52b73ee1-03c2-45e5-bc5b-ba3916e9dbaa req-60e5bf0d-ed56-4da1-9bd7-7c03ee283501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.396 2 DEBUG oslo_concurrency.lockutils [req-52b73ee1-03c2-45e5-bc5b-ba3916e9dbaa req-60e5bf0d-ed56-4da1-9bd7-7c03ee283501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.396 2 DEBUG oslo_concurrency.lockutils [req-52b73ee1-03c2-45e5-bc5b-ba3916e9dbaa req-60e5bf0d-ed56-4da1-9bd7-7c03ee283501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f64218b5-ea49-4ed7-9945-1b8e056e4161-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.396 2 DEBUG nova.compute.manager [req-52b73ee1-03c2-45e5-bc5b-ba3916e9dbaa req-60e5bf0d-ed56-4da1-9bd7-7c03ee283501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] No waiting events found dispatching network-vif-plugged-2d377d55-39e9-4780-bc6f-8bd84e5a93a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.397 2 WARNING nova.compute.manager [req-52b73ee1-03c2-45e5-bc5b-ba3916e9dbaa req-60e5bf0d-ed56-4da1-9bd7-7c03ee283501 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Received unexpected event network-vif-plugged-2d377d55-39e9-4780-bc6f-8bd84e5a93a7 for instance with vm_state active and task_state deleting.
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.409 2 INFO nova.compute.manager [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Took 0.88 seconds to destroy the instance on the hypervisor.
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.410 2 DEBUG oslo.service.loopingcall [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.411 2 DEBUG nova.compute.manager [-] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.411 2 DEBUG nova.network.neutron [-] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:59:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:59:54 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2410347759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.719 2 DEBUG oslo_concurrency.processutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.761 2 DEBUG nova.compute.provider_tree [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:59:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:59:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:59:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:59:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:59:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 08:59:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.830 2 DEBUG nova.scheduler.client.report [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.858 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.859 2 DEBUG nova.compute.manager [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 08:59:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_08:59:54
Oct 11 08:59:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 08:59:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 08:59:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'images', 'default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.meta', '.rgw.root']
Oct 11 08:59:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.929 2 DEBUG nova.compute.manager [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.929 2 DEBUG nova.network.neutron [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.957 2 INFO nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 08:59:54 compute-0 nova_compute[260935]: 2025-10-11 08:59:54.979 2 DEBUG nova.compute.manager [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.112 2 DEBUG nova.compute.manager [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.114 2 DEBUG nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.115 2 INFO nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Creating image(s)
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.150 2 DEBUG nova.storage.rbd_utils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:55 compute-0 magical_perlman[341167]: --> passed data devices: 0 physical, 3 LVM
Oct 11 08:59:55 compute-0 magical_perlman[341167]: --> relative data size: 1.0
Oct 11 08:59:55 compute-0 magical_perlman[341167]: --> All data devices are unavailable
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.190 2 DEBUG nova.storage.rbd_utils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 08:59:55 compute-0 systemd[1]: libpod-4e949d1dfc85978455136136370234e4cfd570a3f569cf15708680f8812bdd45.scope: Deactivated successfully.
Oct 11 08:59:55 compute-0 systemd[1]: libpod-4e949d1dfc85978455136136370234e4cfd570a3f569cf15708680f8812bdd45.scope: Consumed 1.144s CPU time.
Oct 11 08:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 08:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 08:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 08:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 08:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.232 2 DEBUG nova.storage.rbd_utils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.250 2 DEBUG oslo_concurrency.processutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:55 compute-0 podman[341258]: 2025-10-11 08:59:55.256165617 +0000 UTC m=+0.037000606 container died 4e949d1dfc85978455136136370234e4cfd570a3f569cf15708680f8812bdd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 08:59:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7d7e03667c6325bbb0c710134689de46174a0a6ef8e4e96a97c322cdec5b0dc-merged.mount: Deactivated successfully.
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.307 2 DEBUG nova.policy [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d211063ed874837bead2e13898b31d4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd33b48586acf4e6c8254f2a1213b001c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 08:59:55 compute-0 podman[341258]: 2025-10-11 08:59:55.323974981 +0000 UTC m=+0.104809970 container remove 4e949d1dfc85978455136136370234e4cfd570a3f569cf15708680f8812bdd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.328 2 DEBUG nova.network.neutron [-] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:59:55 compute-0 systemd[1]: libpod-conmon-4e949d1dfc85978455136136370234e4cfd570a3f569cf15708680f8812bdd45.scope: Deactivated successfully.
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.352 2 INFO nova.compute.manager [-] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Took 0.94 seconds to deallocate network for instance.
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.361 2 DEBUG oslo_concurrency.processutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.362 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.363 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.364 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:55 compute-0 sudo[341007]: pam_unix(sudo:session): session closed for user root
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.405 2 DEBUG nova.storage.rbd_utils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.411 2 DEBUG oslo_concurrency.processutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.477 2 DEBUG oslo_concurrency.lockutils [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.478 2 DEBUG oslo_concurrency.lockutils [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:55 compute-0 sudo[341310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:59:55 compute-0 sudo[341310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:59:55 compute-0 sudo[341310]: pam_unix(sudo:session): session closed for user root
Oct 11 08:59:55 compute-0 sudo[341338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:59:55 compute-0 sudo[341338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:59:55 compute-0 sudo[341338]: pam_unix(sudo:session): session closed for user root
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.605 2 DEBUG oslo_concurrency.processutils [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 08:59:55 compute-0 ceph-mon[74313]: pgmap v1702: 321 pgs: 321 active+clean; 279 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 50 KiB/s wr, 132 op/s
Oct 11 08:59:55 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2410347759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:55 compute-0 sudo[341381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:59:55 compute-0 sudo[341381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:59:55 compute-0 sudo[341381]: pam_unix(sudo:session): session closed for user root
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.759 2 DEBUG oslo_concurrency.processutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:55 compute-0 sudo[341407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 08:59:55 compute-0 sudo[341407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.844 2 DEBUG nova.storage.rbd_utils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] resizing rbd image 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.960 2 DEBUG nova.objects.instance [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lazy-loading 'migration_context' on Instance uuid 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.984 2 DEBUG nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.985 2 DEBUG nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Ensure instance console log exists: /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.985 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.986 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:55 compute-0 nova_compute[260935]: 2025-10-11 08:59:55.986 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 279 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 30 KiB/s wr, 71 op/s
Oct 11 08:59:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 08:59:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3969470119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:56 compute-0 nova_compute[260935]: 2025-10-11 08:59:56.150 2 DEBUG oslo_concurrency.processutils [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 08:59:56 compute-0 nova_compute[260935]: 2025-10-11 08:59:56.157 2 DEBUG nova.compute.provider_tree [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 08:59:56 compute-0 nova_compute[260935]: 2025-10-11 08:59:56.172 2 DEBUG nova.scheduler.client.report [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 08:59:56 compute-0 podman[341564]: 2025-10-11 08:59:56.178283061 +0000 UTC m=+0.046361053 container create fe3d8f22efff58a540852aa0c0d945d9d6becfe30212fc2429f9101db8f981f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldberg, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 08:59:56 compute-0 nova_compute[260935]: 2025-10-11 08:59:56.194 2 DEBUG oslo_concurrency.lockutils [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:56 compute-0 systemd[1]: Started libpod-conmon-fe3d8f22efff58a540852aa0c0d945d9d6becfe30212fc2429f9101db8f981f8.scope.
Oct 11 08:59:56 compute-0 nova_compute[260935]: 2025-10-11 08:59:56.235 2 INFO nova.scheduler.client.report [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Deleted allocations for instance f64218b5-ea49-4ed7-9945-1b8e056e4161
Oct 11 08:59:56 compute-0 podman[341564]: 2025-10-11 08:59:56.154871364 +0000 UTC m=+0.022949436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:59:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:59:56 compute-0 podman[341564]: 2025-10-11 08:59:56.268247846 +0000 UTC m=+0.136325928 container init fe3d8f22efff58a540852aa0c0d945d9d6becfe30212fc2429f9101db8f981f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct 11 08:59:56 compute-0 podman[341564]: 2025-10-11 08:59:56.279432495 +0000 UTC m=+0.147510517 container start fe3d8f22efff58a540852aa0c0d945d9d6becfe30212fc2429f9101db8f981f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldberg, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:59:56 compute-0 adoring_goldberg[341582]: 167 167
Oct 11 08:59:56 compute-0 podman[341564]: 2025-10-11 08:59:56.284195691 +0000 UTC m=+0.152273713 container attach fe3d8f22efff58a540852aa0c0d945d9d6becfe30212fc2429f9101db8f981f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldberg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:59:56 compute-0 systemd[1]: libpod-fe3d8f22efff58a540852aa0c0d945d9d6becfe30212fc2429f9101db8f981f8.scope: Deactivated successfully.
Oct 11 08:59:56 compute-0 podman[341564]: 2025-10-11 08:59:56.289021319 +0000 UTC m=+0.157099311 container died fe3d8f22efff58a540852aa0c0d945d9d6becfe30212fc2429f9101db8f981f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct 11 08:59:56 compute-0 nova_compute[260935]: 2025-10-11 08:59:56.316 2 DEBUG oslo_concurrency.lockutils [None req-7dcf3a41-2c62-4b10-958c-3c9fbde4cf0c 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "f64218b5-ea49-4ed7-9945-1b8e056e4161" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b5986d9de16a25da7efc8758dc3f5a5486b991ce9219638c499901ced9823d8-merged.mount: Deactivated successfully.
Oct 11 08:59:56 compute-0 podman[341564]: 2025-10-11 08:59:56.333085205 +0000 UTC m=+0.201163197 container remove fe3d8f22efff58a540852aa0c0d945d9d6becfe30212fc2429f9101db8f981f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 08:59:56 compute-0 systemd[1]: libpod-conmon-fe3d8f22efff58a540852aa0c0d945d9d6becfe30212fc2429f9101db8f981f8.scope: Deactivated successfully.
Oct 11 08:59:56 compute-0 nova_compute[260935]: 2025-10-11 08:59:56.544 2 DEBUG nova.compute.manager [req-dfe6dc89-a1f5-4943-9557-a23ae3c9a56e req-b6167f59-52da-425e-8b63-5cf083ae43d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Received event network-vif-deleted-2d377d55-39e9-4780-bc6f-8bd84e5a93a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:56 compute-0 podman[341607]: 2025-10-11 08:59:56.570948248 +0000 UTC m=+0.059046865 container create bedc96443b2c1f5246d25f457364430f00f61a99efb1b2f507521d4b6d000e31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 08:59:56 compute-0 systemd[1]: Started libpod-conmon-bedc96443b2c1f5246d25f457364430f00f61a99efb1b2f507521d4b6d000e31.scope.
Oct 11 08:59:56 compute-0 podman[341607]: 2025-10-11 08:59:56.538852313 +0000 UTC m=+0.026951000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:59:56 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3969470119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 08:59:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7cd0c2e9363f3386d5e2ca5827d23cfa89f96348e9373d77744b70f59f0cd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7cd0c2e9363f3386d5e2ca5827d23cfa89f96348e9373d77744b70f59f0cd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7cd0c2e9363f3386d5e2ca5827d23cfa89f96348e9373d77744b70f59f0cd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7cd0c2e9363f3386d5e2ca5827d23cfa89f96348e9373d77744b70f59f0cd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:59:56 compute-0 nova_compute[260935]: 2025-10-11 08:59:56.689 2 DEBUG nova.network.neutron [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Successfully created port: bcfaf217-8703-4c1e-bf80-d24ab0e642bd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 08:59:56 compute-0 podman[341607]: 2025-10-11 08:59:56.709674514 +0000 UTC m=+0.197773181 container init bedc96443b2c1f5246d25f457364430f00f61a99efb1b2f507521d4b6d000e31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 08:59:56 compute-0 podman[341607]: 2025-10-11 08:59:56.719188225 +0000 UTC m=+0.207286822 container start bedc96443b2c1f5246d25f457364430f00f61a99efb1b2f507521d4b6d000e31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:59:56 compute-0 podman[341607]: 2025-10-11 08:59:56.72321368 +0000 UTC m=+0.211312307 container attach bedc96443b2c1f5246d25f457364430f00f61a99efb1b2f507521d4b6d000e31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]: {
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:     "0": [
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:         {
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "devices": [
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "/dev/loop3"
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             ],
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "lv_name": "ceph_lv0",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "lv_size": "21470642176",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "name": "ceph_lv0",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "tags": {
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.cluster_name": "ceph",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.crush_device_class": "",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.encrypted": "0",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.osd_id": "0",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.type": "block",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.vdo": "0"
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             },
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "type": "block",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "vg_name": "ceph_vg0"
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:         }
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:     ],
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:     "1": [
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:         {
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "devices": [
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "/dev/loop4"
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             ],
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "lv_name": "ceph_lv1",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "lv_size": "21470642176",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "name": "ceph_lv1",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "tags": {
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.cluster_name": "ceph",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.crush_device_class": "",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.encrypted": "0",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.osd_id": "1",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.type": "block",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.vdo": "0"
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             },
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "type": "block",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "vg_name": "ceph_vg1"
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:         }
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:     ],
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:     "2": [
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:         {
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "devices": [
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "/dev/loop5"
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             ],
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "lv_name": "ceph_lv2",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "lv_size": "21470642176",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "name": "ceph_lv2",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "tags": {
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.cluster_name": "ceph",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.crush_device_class": "",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.encrypted": "0",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.osd_id": "2",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.type": "block",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:                 "ceph.vdo": "0"
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             },
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "type": "block",
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:             "vg_name": "ceph_vg2"
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:         }
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]:     ]
Oct 11 08:59:57 compute-0 mystifying_varahamihira[341624]: }
Oct 11 08:59:57 compute-0 systemd[1]: libpod-bedc96443b2c1f5246d25f457364430f00f61a99efb1b2f507521d4b6d000e31.scope: Deactivated successfully.
Oct 11 08:59:57 compute-0 podman[341607]: 2025-10-11 08:59:57.547766882 +0000 UTC m=+1.035865499 container died bedc96443b2c1f5246d25f457364430f00f61a99efb1b2f507521d4b6d000e31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 08:59:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e7cd0c2e9363f3386d5e2ca5827d23cfa89f96348e9373d77744b70f59f0cd6-merged.mount: Deactivated successfully.
Oct 11 08:59:57 compute-0 podman[341607]: 2025-10-11 08:59:57.617342416 +0000 UTC m=+1.105441003 container remove bedc96443b2c1f5246d25f457364430f00f61a99efb1b2f507521d4b6d000e31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_varahamihira, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:59:57 compute-0 systemd[1]: libpod-conmon-bedc96443b2c1f5246d25f457364430f00f61a99efb1b2f507521d4b6d000e31.scope: Deactivated successfully.
Oct 11 08:59:57 compute-0 ceph-mon[74313]: pgmap v1703: 321 pgs: 321 active+clean; 279 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 30 KiB/s wr, 71 op/s
Oct 11 08:59:57 compute-0 sudo[341407]: pam_unix(sudo:session): session closed for user root
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.709 2 DEBUG oslo_concurrency.lockutils [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "8848c29f-c82a-4f50-82f4-b2e317161489" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.710 2 DEBUG oslo_concurrency.lockutils [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.710 2 DEBUG oslo_concurrency.lockutils [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.711 2 DEBUG oslo_concurrency.lockutils [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.711 2 DEBUG oslo_concurrency.lockutils [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.713 2 INFO nova.compute.manager [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Terminating instance
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.714 2 DEBUG nova.compute.manager [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.730 2 DEBUG nova.network.neutron [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Successfully updated port: bcfaf217-8703-4c1e-bf80-d24ab0e642bd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.765 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "refresh_cache-2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.765 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquired lock "refresh_cache-2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.766 2 DEBUG nova.network.neutron [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 08:59:57 compute-0 kernel: tape121c426-77 (unregistering): left promiscuous mode
Oct 11 08:59:57 compute-0 NetworkManager[44960]: <info>  [1760173197.7955] device (tape121c426-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 08:59:57 compute-0 sudo[341645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:59:57 compute-0 sudo[341645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:59:57 compute-0 sudo[341645]: pam_unix(sudo:session): session closed for user root
Oct 11 08:59:57 compute-0 ovn_controller[152945]: 2025-10-11T08:59:57Z|00744|binding|INFO|Releasing lport e121c426-7734-4def-be42-b69b19dcbf29 from this chassis (sb_readonly=0)
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:57 compute-0 ovn_controller[152945]: 2025-10-11T08:59:57Z|00745|binding|INFO|Setting lport e121c426-7734-4def-be42-b69b19dcbf29 down in Southbound
Oct 11 08:59:57 compute-0 ovn_controller[152945]: 2025-10-11T08:59:57Z|00746|binding|INFO|Removing iface tape121c426-77 ovn-installed in OVS
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:57.832 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:27:e3 10.100.0.7'], port_security=['fa:16:3e:e4:27:e3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8848c29f-c82a-4f50-82f4-b2e317161489', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0430d49d70a46c2b29abef177f8ccb3', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'cf97e47b-d339-42a8-ae53-936a69d74b51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d7c2c31-fb9d-4875-91fa-aedc5fb45092, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e121c426-7734-4def-be42-b69b19dcbf29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 08:59:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:57.834 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e121c426-7734-4def-be42-b69b19dcbf29 in datapath b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5 unbound from our chassis
Oct 11 08:59:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:57.836 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 08:59:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:57.839 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3d436c26-b387-4708-8e19-258bc396f0fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:57.840 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5 namespace which is not needed anymore
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:57 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Oct 11 08:59:57 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004f.scope: Consumed 14.612s CPU time.
Oct 11 08:59:57 compute-0 systemd-machined[215705]: Machine qemu-93-instance-0000004f terminated.
Oct 11 08:59:57 compute-0 sudo[341672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 08:59:57 compute-0 sudo[341672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:59:57 compute-0 sudo[341672]: pam_unix(sudo:session): session closed for user root
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.973 2 INFO nova.virt.libvirt.driver [-] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Instance destroyed successfully.
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.974 2 DEBUG nova.objects.instance [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lazy-loading 'resources' on Instance uuid 8848c29f-c82a-4f50-82f4-b2e317161489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.993 2 DEBUG nova.virt.libvirt.vif [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:58:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1291030350',display_name='tempest-ListServerFiltersTestJSON-instance-1291030350',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1291030350',id=79,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:59:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b0430d49d70a46c2b29abef177f8ccb3',ramdisk_id='',reservation_id='r-q7why563',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-867232992',owner_user_name='tempest-ListServerFiltersTestJSON-867232992-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:59:37Z,user_data=None,user_id='9734241540ac484291686e1d189d4eea',uuid=8848c29f-c82a-4f50-82f4-b2e317161489,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.994 2 DEBUG nova.network.os_vif_util [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converting VIF {"id": "e121c426-7734-4def-be42-b69b19dcbf29", "address": "fa:16:3e:e4:27:e3", "network": {"id": "b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2015711970-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0430d49d70a46c2b29abef177f8ccb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape121c426-77", "ovs_interfaceid": "e121c426-7734-4def-be42-b69b19dcbf29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.995 2 DEBUG nova.network.os_vif_util [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:27:e3,bridge_name='br-int',has_traffic_filtering=True,id=e121c426-7734-4def-be42-b69b19dcbf29,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape121c426-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.996 2 DEBUG os_vif [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:27:e3,bridge_name='br-int',has_traffic_filtering=True,id=e121c426-7734-4def-be42-b69b19dcbf29,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape121c426-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 08:59:57 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:58 compute-0 nova_compute[260935]: 2025-10-11 08:59:57.999 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape121c426-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:58 compute-0 nova_compute[260935]: 2025-10-11 08:59:58.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:58 compute-0 nova_compute[260935]: 2025-10-11 08:59:58.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:58 compute-0 nova_compute[260935]: 2025-10-11 08:59:58.005 2 INFO os_vif [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:27:e3,bridge_name='br-int',has_traffic_filtering=True,id=e121c426-7734-4def-be42-b69b19dcbf29,network=Network(b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape121c426-77')
Oct 11 08:59:58 compute-0 sudo[341723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 08:59:58 compute-0 sudo[341723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:59:58 compute-0 sudo[341723]: pam_unix(sudo:session): session closed for user root
Oct 11 08:59:58 compute-0 neutron-haproxy-ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5[339066]: [NOTICE]   (339079) : haproxy version is 2.8.14-c23fe91
Oct 11 08:59:58 compute-0 neutron-haproxy-ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5[339066]: [NOTICE]   (339079) : path to executable is /usr/sbin/haproxy
Oct 11 08:59:58 compute-0 neutron-haproxy-ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5[339066]: [WARNING]  (339079) : Exiting Master process...
Oct 11 08:59:58 compute-0 neutron-haproxy-ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5[339066]: [WARNING]  (339079) : Exiting Master process...
Oct 11 08:59:58 compute-0 neutron-haproxy-ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5[339066]: [ALERT]    (339079) : Current worker (339081) exited with code 143 (Terminated)
Oct 11 08:59:58 compute-0 neutron-haproxy-ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5[339066]: [WARNING]  (339079) : All workers exited. Exiting... (0)
Oct 11 08:59:58 compute-0 systemd[1]: libpod-a2d5aa78c7bc858f490fe7f3260a94155a261a5c31f673c7123501db88b53f40.scope: Deactivated successfully.
Oct 11 08:59:58 compute-0 podman[341728]: 2025-10-11 08:59:58.075897261 +0000 UTC m=+0.079692163 container died a2d5aa78c7bc858f490fe7f3260a94155a261a5c31f673c7123501db88b53f40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 11 08:59:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2d5aa78c7bc858f490fe7f3260a94155a261a5c31f673c7123501db88b53f40-userdata-shm.mount: Deactivated successfully.
Oct 11 08:59:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-06a2a110a650e7c639a749dbbc88aa287fcc6d71397aecfe32bd21fa828a5cec-merged.mount: Deactivated successfully.
Oct 11 08:59:58 compute-0 podman[341728]: 2025-10-11 08:59:58.122055618 +0000 UTC m=+0.125850510 container cleanup a2d5aa78c7bc858f490fe7f3260a94155a261a5c31f673c7123501db88b53f40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 11 08:59:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1704: 321 pgs: 321 active+clean; 248 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 579 KiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 11 08:59:58 compute-0 sudo[341786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 08:59:58 compute-0 sudo[341786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 08:59:58 compute-0 systemd[1]: libpod-conmon-a2d5aa78c7bc858f490fe7f3260a94155a261a5c31f673c7123501db88b53f40.scope: Deactivated successfully.
Oct 11 08:59:58 compute-0 nova_compute[260935]: 2025-10-11 08:59:58.211 2 DEBUG nova.network.neutron [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 08:59:58 compute-0 podman[341826]: 2025-10-11 08:59:58.224872079 +0000 UTC m=+0.068833063 container remove a2d5aa78c7bc858f490fe7f3260a94155a261a5c31f673c7123501db88b53f40 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 08:59:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:58.232 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ffd4ec72-3dd6-473a-ba45-8432d314773e]: (4, ('Sat Oct 11 08:59:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5 (a2d5aa78c7bc858f490fe7f3260a94155a261a5c31f673c7123501db88b53f40)\na2d5aa78c7bc858f490fe7f3260a94155a261a5c31f673c7123501db88b53f40\nSat Oct 11 08:59:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5 (a2d5aa78c7bc858f490fe7f3260a94155a261a5c31f673c7123501db88b53f40)\na2d5aa78c7bc858f490fe7f3260a94155a261a5c31f673c7123501db88b53f40\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:58.240 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4604ba41-afc1-4cce-b97c-2352a37d8c68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:58.241 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4b8fb64-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 08:59:58 compute-0 nova_compute[260935]: 2025-10-11 08:59:58.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:58 compute-0 kernel: tapb4b8fb64-b0: left promiscuous mode
Oct 11 08:59:58 compute-0 nova_compute[260935]: 2025-10-11 08:59:58.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:58.324 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c8fd12-6b0a-40e2-9558-946d6fbd11eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:58 compute-0 nova_compute[260935]: 2025-10-11 08:59:58.432 2 DEBUG nova.compute.manager [req-ecc9314c-81fb-43c3-9620-56450e87a1b9 req-8e54e414-fe6b-44a1-9429-bf8515f7da0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-changed-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:58 compute-0 nova_compute[260935]: 2025-10-11 08:59:58.432 2 DEBUG nova.compute.manager [req-ecc9314c-81fb-43c3-9620-56450e87a1b9 req-8e54e414-fe6b-44a1-9429-bf8515f7da0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Refreshing instance network info cache due to event network-changed-bcfaf217-8703-4c1e-bf80-d24ab0e642bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 08:59:58 compute-0 nova_compute[260935]: 2025-10-11 08:59:58.432 2 DEBUG oslo_concurrency.lockutils [req-ecc9314c-81fb-43c3-9620-56450e87a1b9 req-8e54e414-fe6b-44a1-9429-bf8515f7da0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 08:59:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:58.494 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fde9ccf5-d071-4681-a9a2-e2ddba5424a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:58.495 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7252f2-55a3-4006-bf49-bc18ed32cc1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:58 compute-0 nova_compute[260935]: 2025-10-11 08:59:58.497 2 INFO nova.virt.libvirt.driver [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Deleting instance files /var/lib/nova/instances/8848c29f-c82a-4f50-82f4-b2e317161489_del
Oct 11 08:59:58 compute-0 nova_compute[260935]: 2025-10-11 08:59:58.498 2 INFO nova.virt.libvirt.driver [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Deletion of /var/lib/nova/instances/8848c29f-c82a-4f50-82f4-b2e317161489_del complete
Oct 11 08:59:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:58.514 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[048f4397-e6a8-4557-b230-8995e2589910]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498788, 'reachable_time': 44882, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341876, 'error': None, 'target': 'ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:58.516 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b4b8fb64-b3cb-4cee-8a22-4a3947fba8e5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 08:59:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 08:59:58.517 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6816d33d-8395-4b6d-a273-82af35da94dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 08:59:58 compute-0 systemd[1]: run-netns-ovnmeta\x2db4b8fb64\x2db3cb\x2d4cee\x2d8a22\x2d4a3947fba8e5.mount: Deactivated successfully.
Oct 11 08:59:58 compute-0 nova_compute[260935]: 2025-10-11 08:59:58.579 2 INFO nova.compute.manager [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Took 0.86 seconds to destroy the instance on the hypervisor.
Oct 11 08:59:58 compute-0 nova_compute[260935]: 2025-10-11 08:59:58.580 2 DEBUG oslo.service.loopingcall [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 08:59:58 compute-0 nova_compute[260935]: 2025-10-11 08:59:58.580 2 DEBUG nova.compute.manager [-] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 08:59:58 compute-0 nova_compute[260935]: 2025-10-11 08:59:58.580 2 DEBUG nova.network.neutron [-] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 08:59:58 compute-0 podman[341884]: 2025-10-11 08:59:58.601371784 +0000 UTC m=+0.045231369 container create bfb0d665591138d750fb932175bdb22d7fc65d1469197a4eacbd59f07b732d69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 11 08:59:58 compute-0 systemd[1]: Started libpod-conmon-bfb0d665591138d750fb932175bdb22d7fc65d1469197a4eacbd59f07b732d69.scope.
Oct 11 08:59:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:59:58 compute-0 podman[341884]: 2025-10-11 08:59:58.583424652 +0000 UTC m=+0.027284287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:59:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 08:59:58 compute-0 podman[341884]: 2025-10-11 08:59:58.68222475 +0000 UTC m=+0.126084425 container init bfb0d665591138d750fb932175bdb22d7fc65d1469197a4eacbd59f07b732d69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 08:59:58 compute-0 nova_compute[260935]: 2025-10-11 08:59:58.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:58 compute-0 podman[341884]: 2025-10-11 08:59:58.694442968 +0000 UTC m=+0.138302563 container start bfb0d665591138d750fb932175bdb22d7fc65d1469197a4eacbd59f07b732d69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 08:59:58 compute-0 podman[341884]: 2025-10-11 08:59:58.698323579 +0000 UTC m=+0.142183264 container attach bfb0d665591138d750fb932175bdb22d7fc65d1469197a4eacbd59f07b732d69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 08:59:58 compute-0 ceph-mon[74313]: pgmap v1704: 321 pgs: 321 active+clean; 248 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 579 KiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 11 08:59:58 compute-0 happy_engelbart[341901]: 167 167
Oct 11 08:59:58 compute-0 systemd[1]: libpod-bfb0d665591138d750fb932175bdb22d7fc65d1469197a4eacbd59f07b732d69.scope: Deactivated successfully.
Oct 11 08:59:58 compute-0 podman[341884]: 2025-10-11 08:59:58.705115503 +0000 UTC m=+0.148975098 container died bfb0d665591138d750fb932175bdb22d7fc65d1469197a4eacbd59f07b732d69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 08:59:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-625f31a76bd67647b7176484ff1b0362a875e04e81f7e75fe7c08120f8cb4fa1-merged.mount: Deactivated successfully.
Oct 11 08:59:58 compute-0 podman[341884]: 2025-10-11 08:59:58.749810067 +0000 UTC m=+0.193669662 container remove bfb0d665591138d750fb932175bdb22d7fc65d1469197a4eacbd59f07b732d69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 08:59:58 compute-0 systemd[1]: libpod-conmon-bfb0d665591138d750fb932175bdb22d7fc65d1469197a4eacbd59f07b732d69.scope: Deactivated successfully.
Oct 11 08:59:58 compute-0 podman[341926]: 2025-10-11 08:59:58.973989769 +0000 UTC m=+0.064297754 container create b5781c3c54e0fdbc34eff95822116d1c5f9e7bc247e3693dad911953cd15a5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sanderson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 08:59:59 compute-0 nova_compute[260935]: 2025-10-11 08:59:59.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 08:59:59 compute-0 systemd[1]: Started libpod-conmon-b5781c3c54e0fdbc34eff95822116d1c5f9e7bc247e3693dad911953cd15a5e3.scope.
Oct 11 08:59:59 compute-0 podman[341926]: 2025-10-11 08:59:58.944049176 +0000 UTC m=+0.034357241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 08:59:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 08:59:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19162af01d678c4eebcdc7c4f2b4eb52623787e6a40fe584e954e738ad2eea1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 08:59:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19162af01d678c4eebcdc7c4f2b4eb52623787e6a40fe584e954e738ad2eea1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 08:59:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19162af01d678c4eebcdc7c4f2b4eb52623787e6a40fe584e954e738ad2eea1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 08:59:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19162af01d678c4eebcdc7c4f2b4eb52623787e6a40fe584e954e738ad2eea1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 08:59:59 compute-0 podman[341926]: 2025-10-11 08:59:59.086007854 +0000 UTC m=+0.176315909 container init b5781c3c54e0fdbc34eff95822116d1c5f9e7bc247e3693dad911953cd15a5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sanderson, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 08:59:59 compute-0 podman[341926]: 2025-10-11 08:59:59.097327356 +0000 UTC m=+0.187635381 container start b5781c3c54e0fdbc34eff95822116d1c5f9e7bc247e3693dad911953cd15a5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 08:59:59 compute-0 podman[341926]: 2025-10-11 08:59:59.101125725 +0000 UTC m=+0.191433790 container attach b5781c3c54e0fdbc34eff95822116d1c5f9e7bc247e3693dad911953cd15a5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sanderson, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 11 08:59:59 compute-0 nova_compute[260935]: 2025-10-11 08:59:59.551 2 DEBUG nova.network.neutron [-] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 08:59:59 compute-0 nova_compute[260935]: 2025-10-11 08:59:59.580 2 INFO nova.compute.manager [-] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Took 1.00 seconds to deallocate network for instance.
Oct 11 08:59:59 compute-0 nova_compute[260935]: 2025-10-11 08:59:59.633 2 DEBUG oslo_concurrency.lockutils [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 08:59:59 compute-0 nova_compute[260935]: 2025-10-11 08:59:59.634 2 DEBUG oslo_concurrency.lockutils [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 08:59:59 compute-0 nova_compute[260935]: 2025-10-11 08:59:59.707 2 DEBUG nova.compute.manager [req-b29a3338-5c3b-41b1-a2e6-bc30ba14bfaf req-8472a373-813a-429d-8196-8550c96b7f99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Received event network-vif-deleted-e121c426-7734-4def-be42-b69b19dcbf29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 08:59:59 compute-0 nova_compute[260935]: 2025-10-11 08:59:59.749 2 DEBUG oslo_concurrency.processutils [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:00 compute-0 tender_sanderson[341942]: {
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:         "osd_id": 2,
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:         "type": "bluestore"
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:     },
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:         "osd_id": 0,
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:         "type": "bluestore"
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:     },
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:         "osd_id": 1,
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:         "type": "bluestore"
Oct 11 09:00:00 compute-0 tender_sanderson[341942]:     }
Oct 11 09:00:00 compute-0 tender_sanderson[341942]: }
Oct 11 09:00:00 compute-0 systemd[1]: libpod-b5781c3c54e0fdbc34eff95822116d1c5f9e7bc247e3693dad911953cd15a5e3.scope: Deactivated successfully.
Oct 11 09:00:00 compute-0 podman[341926]: 2025-10-11 09:00:00.065432922 +0000 UTC m=+1.155740947 container died b5781c3c54e0fdbc34eff95822116d1c5f9e7bc247e3693dad911953cd15a5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sanderson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 09:00:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-19162af01d678c4eebcdc7c4f2b4eb52623787e6a40fe584e954e738ad2eea1a-merged.mount: Deactivated successfully.
Oct 11 09:00:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1705: 321 pgs: 321 active+clean; 248 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 578 KiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct 11 09:00:00 compute-0 podman[341926]: 2025-10-11 09:00:00.14605931 +0000 UTC m=+1.236367305 container remove b5781c3c54e0fdbc34eff95822116d1c5f9e7bc247e3693dad911953cd15a5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sanderson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:00:00 compute-0 systemd[1]: libpod-conmon-b5781c3c54e0fdbc34eff95822116d1c5f9e7bc247e3693dad911953cd15a5e3.scope: Deactivated successfully.
Oct 11 09:00:00 compute-0 sudo[341786]: pam_unix(sudo:session): session closed for user root
Oct 11 09:00:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:00:00 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:00:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:00:00 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:00:00 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev e0cf059d-2c54-4cc1-9bf8-d5d3d1c856dd does not exist
Oct 11 09:00:00 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ee0291bf-6622-4960-aec4-305a5e13d752 does not exist
Oct 11 09:00:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:00:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2937361287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.273 2 DEBUG oslo_concurrency.processutils [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.280 2 DEBUG nova.network.neutron [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Updating instance_info_cache with network_info: [{"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.284 2 DEBUG nova.compute.provider_tree [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:00:00 compute-0 sudo[342010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:00:00 compute-0 sudo[342010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:00:00 compute-0 sudo[342010]: pam_unix(sudo:session): session closed for user root
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.308 2 DEBUG nova.scheduler.client.report [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.312 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Releasing lock "refresh_cache-2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.313 2 DEBUG nova.compute.manager [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance network_info: |[{"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.313 2 DEBUG oslo_concurrency.lockutils [req-ecc9314c-81fb-43c3-9620-56450e87a1b9 req-8e54e414-fe6b-44a1-9429-bf8515f7da0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.313 2 DEBUG nova.network.neutron [req-ecc9314c-81fb-43c3-9620-56450e87a1b9 req-8e54e414-fe6b-44a1-9429-bf8515f7da0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Refreshing network info cache for port bcfaf217-8703-4c1e-bf80-d24ab0e642bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.316 2 DEBUG nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Start _get_guest_xml network_info=[{"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.320 2 WARNING nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.326 2 DEBUG nova.virt.libvirt.host [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.327 2 DEBUG nova.virt.libvirt.host [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.335 2 DEBUG oslo_concurrency.lockutils [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.338 2 DEBUG nova.virt.libvirt.host [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.339 2 DEBUG nova.virt.libvirt.host [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.339 2 DEBUG nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.339 2 DEBUG nova.virt.hardware [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.340 2 DEBUG nova.virt.hardware [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.340 2 DEBUG nova.virt.hardware [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.340 2 DEBUG nova.virt.hardware [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.340 2 DEBUG nova.virt.hardware [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.340 2 DEBUG nova.virt.hardware [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.341 2 DEBUG nova.virt.hardware [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.341 2 DEBUG nova.virt.hardware [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.341 2 DEBUG nova.virt.hardware [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.341 2 DEBUG nova.virt.hardware [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.341 2 DEBUG nova.virt.hardware [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.344 2 DEBUG oslo_concurrency.processutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:00 compute-0 sudo[342037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:00:00 compute-0 sudo[342037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:00:00 compute-0 sudo[342037]: pam_unix(sudo:session): session closed for user root
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.435 2 INFO nova.scheduler.client.report [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Deleted allocations for instance 8848c29f-c82a-4f50-82f4-b2e317161489
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.529 2 DEBUG oslo_concurrency.lockutils [None req-2db123d2-d298-4b3c-863b-4f3e9d4f3311 9734241540ac484291686e1d189d4eea b0430d49d70a46c2b29abef177f8ccb3 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.590 2 DEBUG nova.compute.manager [req-2fcbee7f-2d7e-45e3-b3ad-596f2554e544 req-e41c0ec4-dd0e-4041-93e6-1d7c618a4bbd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Received event network-vif-unplugged-e121c426-7734-4def-be42-b69b19dcbf29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.591 2 DEBUG oslo_concurrency.lockutils [req-2fcbee7f-2d7e-45e3-b3ad-596f2554e544 req-e41c0ec4-dd0e-4041-93e6-1d7c618a4bbd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.591 2 DEBUG oslo_concurrency.lockutils [req-2fcbee7f-2d7e-45e3-b3ad-596f2554e544 req-e41c0ec4-dd0e-4041-93e6-1d7c618a4bbd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.591 2 DEBUG oslo_concurrency.lockutils [req-2fcbee7f-2d7e-45e3-b3ad-596f2554e544 req-e41c0ec4-dd0e-4041-93e6-1d7c618a4bbd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.591 2 DEBUG nova.compute.manager [req-2fcbee7f-2d7e-45e3-b3ad-596f2554e544 req-e41c0ec4-dd0e-4041-93e6-1d7c618a4bbd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] No waiting events found dispatching network-vif-unplugged-e121c426-7734-4def-be42-b69b19dcbf29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.591 2 WARNING nova.compute.manager [req-2fcbee7f-2d7e-45e3-b3ad-596f2554e544 req-e41c0ec4-dd0e-4041-93e6-1d7c618a4bbd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Received unexpected event network-vif-unplugged-e121c426-7734-4def-be42-b69b19dcbf29 for instance with vm_state deleted and task_state None.
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.592 2 DEBUG nova.compute.manager [req-2fcbee7f-2d7e-45e3-b3ad-596f2554e544 req-e41c0ec4-dd0e-4041-93e6-1d7c618a4bbd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Received event network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.592 2 DEBUG oslo_concurrency.lockutils [req-2fcbee7f-2d7e-45e3-b3ad-596f2554e544 req-e41c0ec4-dd0e-4041-93e6-1d7c618a4bbd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.592 2 DEBUG oslo_concurrency.lockutils [req-2fcbee7f-2d7e-45e3-b3ad-596f2554e544 req-e41c0ec4-dd0e-4041-93e6-1d7c618a4bbd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.592 2 DEBUG oslo_concurrency.lockutils [req-2fcbee7f-2d7e-45e3-b3ad-596f2554e544 req-e41c0ec4-dd0e-4041-93e6-1d7c618a4bbd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8848c29f-c82a-4f50-82f4-b2e317161489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.592 2 DEBUG nova.compute.manager [req-2fcbee7f-2d7e-45e3-b3ad-596f2554e544 req-e41c0ec4-dd0e-4041-93e6-1d7c618a4bbd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] No waiting events found dispatching network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.592 2 WARNING nova.compute.manager [req-2fcbee7f-2d7e-45e3-b3ad-596f2554e544 req-e41c0ec4-dd0e-4041-93e6-1d7c618a4bbd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Received unexpected event network-vif-plugged-e121c426-7734-4def-be42-b69b19dcbf29 for instance with vm_state deleted and task_state None.
Oct 11 09:00:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:00:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3816986270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.817 2 DEBUG oslo_concurrency.processutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.838 2 DEBUG nova.storage.rbd_utils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:00 compute-0 nova_compute[260935]: 2025-10-11 09:00:00.842 2 DEBUG oslo_concurrency.processutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:01 compute-0 ceph-mon[74313]: pgmap v1705: 321 pgs: 321 active+clean; 248 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 578 KiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct 11 09:00:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:00:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:00:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2937361287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:00:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3816986270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:00:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3640315760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.301 2 DEBUG oslo_concurrency.processutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.304 2 DEBUG nova.virt.libvirt.vif [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:59:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1432792747',display_name='tempest-tempest.common.compute-instance-1432792747',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1432792747',id=84,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d33b48586acf4e6c8254f2a1213b001c',ramdisk_id='',reservation_id='r-dj88erzq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-620268335',owner_user_name='tempest-ServerActionsTest
OtherA-620268335-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:59:55Z,user_data=None,user_id='8d211063ed874837bead2e13898b31d4',uuid=2ca1b1c6-7cd2-42f9-a24b-5c20bb567361,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.305 2 DEBUG nova.network.os_vif_util [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converting VIF {"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.307 2 DEBUG nova.network.os_vif_util [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:b9:d1,bridge_name='br-int',has_traffic_filtering=True,id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcfaf217-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.309 2 DEBUG nova.objects.instance [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.330 2 DEBUG nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:00:01 compute-0 nova_compute[260935]:   <uuid>2ca1b1c6-7cd2-42f9-a24b-5c20bb567361</uuid>
Oct 11 09:00:01 compute-0 nova_compute[260935]:   <name>instance-00000054</name>
Oct 11 09:00:01 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:00:01 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:00:01 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <nova:name>tempest-tempest.common.compute-instance-1432792747</nova:name>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:00:00</nova:creationTime>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:00:01 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:00:01 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:00:01 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:00:01 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:00:01 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:00:01 compute-0 nova_compute[260935]:         <nova:user uuid="8d211063ed874837bead2e13898b31d4">tempest-ServerActionsTestOtherA-620268335-project-member</nova:user>
Oct 11 09:00:01 compute-0 nova_compute[260935]:         <nova:project uuid="d33b48586acf4e6c8254f2a1213b001c">tempest-ServerActionsTestOtherA-620268335</nova:project>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:00:01 compute-0 nova_compute[260935]:         <nova:port uuid="bcfaf217-8703-4c1e-bf80-d24ab0e642bd">
Oct 11 09:00:01 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:00:01 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:00:01 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <system>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <entry name="serial">2ca1b1c6-7cd2-42f9-a24b-5c20bb567361</entry>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <entry name="uuid">2ca1b1c6-7cd2-42f9-a24b-5c20bb567361</entry>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     </system>
Oct 11 09:00:01 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:00:01 compute-0 nova_compute[260935]:   <os>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:   </os>
Oct 11 09:00:01 compute-0 nova_compute[260935]:   <features>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:   </features>
Oct 11 09:00:01 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:00:01 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:00:01 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk">
Oct 11 09:00:01 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       </source>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:00:01 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk.config">
Oct 11 09:00:01 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       </source>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:00:01 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:16:b9:d1"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <target dev="tapbcfaf217-87"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361/console.log" append="off"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <video>
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     </video>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:00:01 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:00:01 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:00:01 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:00:01 compute-0 nova_compute[260935]: </domain>
Oct 11 09:00:01 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.332 2 DEBUG nova.compute.manager [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Preparing to wait for external event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.333 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.333 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.333 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.334 2 DEBUG nova.virt.libvirt.vif [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T08:59:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1432792747',display_name='tempest-tempest.common.compute-instance-1432792747',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1432792747',id=84,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d33b48586acf4e6c8254f2a1213b001c',ramdisk_id='',reservation_id='r-dj88erzq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-620268335',owner_user_name='tempest-ServerActionsTestOtherA-620268335-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T08:59:55Z,user_data=None,user_id='8d211063ed874837bead2e13898b31d4',uuid=2ca1b1c6-7cd2-42f9-a24b-5c20bb567361,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.335 2 DEBUG nova.network.os_vif_util [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converting VIF {"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.335 2 DEBUG nova.network.os_vif_util [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:b9:d1,bridge_name='br-int',has_traffic_filtering=True,id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcfaf217-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.336 2 DEBUG os_vif [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:b9:d1,bridge_name='br-int',has_traffic_filtering=True,id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcfaf217-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.337 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.337 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.341 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbcfaf217-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.341 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbcfaf217-87, col_values=(('external_ids', {'iface-id': 'bcfaf217-8703-4c1e-bf80-d24ab0e642bd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:b9:d1', 'vm-uuid': '2ca1b1c6-7cd2-42f9-a24b-5c20bb567361'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:01 compute-0 NetworkManager[44960]: <info>  [1760173201.3442] manager: (tapbcfaf217-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.353 2 INFO os_vif [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:b9:d1,bridge_name='br-int',has_traffic_filtering=True,id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcfaf217-87')
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.431 2 DEBUG nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.431 2 DEBUG nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.431 2 DEBUG nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] No VIF found with MAC fa:16:3e:16:b9:d1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.432 2 INFO nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Using config drive
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.452 2 DEBUG nova.storage.rbd_utils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:01 compute-0 nova_compute[260935]: 2025-10-11 09:00:01.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:00:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 206 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 1.8 MiB/s wr, 136 op/s
Oct 11 09:00:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3640315760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:02 compute-0 nova_compute[260935]: 2025-10-11 09:00:02.373 2 DEBUG nova.network.neutron [req-ecc9314c-81fb-43c3-9620-56450e87a1b9 req-8e54e414-fe6b-44a1-9429-bf8515f7da0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Updated VIF entry in instance network info cache for port bcfaf217-8703-4c1e-bf80-d24ab0e642bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:00:02 compute-0 nova_compute[260935]: 2025-10-11 09:00:02.374 2 DEBUG nova.network.neutron [req-ecc9314c-81fb-43c3-9620-56450e87a1b9 req-8e54e414-fe6b-44a1-9429-bf8515f7da0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Updating instance_info_cache with network_info: [{"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:00:02 compute-0 nova_compute[260935]: 2025-10-11 09:00:02.401 2 DEBUG oslo_concurrency.lockutils [req-ecc9314c-81fb-43c3-9620-56450e87a1b9 req-8e54e414-fe6b-44a1-9429-bf8515f7da0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:00:02 compute-0 nova_compute[260935]: 2025-10-11 09:00:02.662 2 INFO nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Creating config drive at /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361/disk.config
Oct 11 09:00:02 compute-0 nova_compute[260935]: 2025-10-11 09:00:02.667 2 DEBUG oslo_concurrency.processutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2bj_hv3d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:02 compute-0 nova_compute[260935]: 2025-10-11 09:00:02.808 2 DEBUG oslo_concurrency.processutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2bj_hv3d" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:02 compute-0 nova_compute[260935]: 2025-10-11 09:00:02.840 2 DEBUG nova.storage.rbd_utils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:02 compute-0 nova_compute[260935]: 2025-10-11 09:00:02.845 2 DEBUG oslo_concurrency.processutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361/disk.config 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:02 compute-0 ovn_controller[152945]: 2025-10-11T09:00:02Z|00747|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.050 2 DEBUG oslo_concurrency.processutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361/disk.config 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.052 2 INFO nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Deleting local config drive /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361/disk.config because it was imported into RBD.
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:03 compute-0 kernel: tapbcfaf217-87: entered promiscuous mode
Oct 11 09:00:03 compute-0 ovn_controller[152945]: 2025-10-11T09:00:03Z|00748|binding|INFO|Claiming lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd for this chassis.
Oct 11 09:00:03 compute-0 ovn_controller[152945]: 2025-10-11T09:00:03Z|00749|binding|INFO|bcfaf217-8703-4c1e-bf80-d24ab0e642bd: Claiming fa:16:3e:16:b9:d1 10.100.0.11
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:03 compute-0 NetworkManager[44960]: <info>  [1760173203.1369] manager: (tapbcfaf217-87): new Tun device (/org/freedesktop/NetworkManager/Devices/331)
Oct 11 09:00:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:03.144 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:b9:d1 10.100.0.11'], port_security=['fa:16:3e:16:b9:d1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2ca1b1c6-7cd2-42f9-a24b-5c20bb567361', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd33b48586acf4e6c8254f2a1213b001c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7c2dc1cf-8ac0-4645-86fa-d32df3cf1552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68f3e6c4-f574-4830-9133-912bb9cd6132, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=bcfaf217-8703-4c1e-bf80-d24ab0e642bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:00:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:03.146 162815 INFO neutron.agent.ovn.metadata.agent [-] Port bcfaf217-8703-4c1e-bf80-d24ab0e642bd in datapath 164a664d-5e52-48b9-8b00-f73d0851a4cc bound to our chassis
Oct 11 09:00:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:03.149 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 164a664d-5e52-48b9-8b00-f73d0851a4cc
Oct 11 09:00:03 compute-0 ovn_controller[152945]: 2025-10-11T09:00:03Z|00750|binding|INFO|Setting lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd ovn-installed in OVS
Oct 11 09:00:03 compute-0 ovn_controller[152945]: 2025-10-11T09:00:03Z|00751|binding|INFO|Setting lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd up in Southbound
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:03.196 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f877361b-d4b6-4a8b-a6c6-1a95f0de8d4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:03 compute-0 ceph-mon[74313]: pgmap v1706: 321 pgs: 321 active+clean; 206 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 1.8 MiB/s wr, 136 op/s
Oct 11 09:00:03 compute-0 systemd-udevd[342200]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:00:03 compute-0 systemd-machined[215705]: New machine qemu-94-instance-00000054.
Oct 11 09:00:03 compute-0 NetworkManager[44960]: <info>  [1760173203.2360] device (tapbcfaf217-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:00:03 compute-0 NetworkManager[44960]: <info>  [1760173203.2371] device (tapbcfaf217-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:00:03 compute-0 systemd[1]: Started Virtual Machine qemu-94-instance-00000054.
Oct 11 09:00:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:03.241 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fbdd419f-c689-48b7-9a42-53cced4a8fdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:03.247 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d8a9d7b0-36a8-4d79-9d35-dec9a0e9e8f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:03.278 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[54c8728e-d3ac-4eea-abe7-f917802ddbd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:03.308 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b265f9a4-fdb9-4cd1-83e3-f5588e6b6d15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap164a664d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:a0:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500906, 'reachable_time': 17587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342208, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:03.328 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3418834c-67ea-40cc-847c-c1c72dc6f281]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap164a664d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500919, 'tstamp': 500919}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342212, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap164a664d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500922, 'tstamp': 500922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342212, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:03.330 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap164a664d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:03.333 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap164a664d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:03.334 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:00:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:03.334 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap164a664d-50, col_values=(('external_ids', {'iface-id': 'e23cd806-8523-4e59-ba27-db15cee52548'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:03.335 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:00:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.693 2 DEBUG nova.compute.manager [req-be837a9a-4b6e-432b-994e-01ff3133f9fd req-eb1f2e85-fa12-426b-81f0-570ca6548146 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.693 2 DEBUG oslo_concurrency.lockutils [req-be837a9a-4b6e-432b-994e-01ff3133f9fd req-eb1f2e85-fa12-426b-81f0-570ca6548146 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.694 2 DEBUG oslo_concurrency.lockutils [req-be837a9a-4b6e-432b-994e-01ff3133f9fd req-eb1f2e85-fa12-426b-81f0-570ca6548146 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.695 2 DEBUG oslo_concurrency.lockutils [req-be837a9a-4b6e-432b-994e-01ff3133f9fd req-eb1f2e85-fa12-426b-81f0-570ca6548146 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.695 2 DEBUG nova.compute.manager [req-be837a9a-4b6e-432b-994e-01ff3133f9fd req-eb1f2e85-fa12-426b-81f0-570ca6548146 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Processing event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.703 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.703 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.731 2 DEBUG nova.compute.manager [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.825 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.825 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.846 2 DEBUG nova.virt.hardware [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:00:03 compute-0 nova_compute[260935]: 2025-10-11 09:00:03.846 2 INFO nova.compute.claims [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.006 2 DEBUG oslo_concurrency.processutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.069 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173204.0549061, 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.070 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] VM Started (Lifecycle Event)
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.075 2 DEBUG nova.compute.manager [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.082 2 DEBUG nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.087 2 INFO nova.virt.libvirt.driver [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance spawned successfully.
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.088 2 DEBUG nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.112 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.123 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.131 2 DEBUG nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.132 2 DEBUG nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.133 2 DEBUG nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 167 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 472 KiB/s rd, 1.8 MiB/s wr, 139 op/s
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.134 2 DEBUG nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.135 2 DEBUG nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.136 2 DEBUG nova.virt.libvirt.driver [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.146 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.147 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173204.0560832, 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.147 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] VM Paused (Lifecycle Event)
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.174 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.178 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173204.081225, 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.179 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] VM Resumed (Lifecycle Event)
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.204 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.210 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.216 2 INFO nova.compute.manager [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Took 9.10 seconds to spawn the instance on the hypervisor.
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.216 2 DEBUG nova.compute.manager [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.262 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.311 2 INFO nova.compute.manager [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Took 10.38 seconds to build instance.
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.335 2 DEBUG oslo_concurrency.lockutils [None req-fdd5a53b-44a4-431b-b877-4d332eb82860 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:00:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1098586771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.495 2 DEBUG oslo_concurrency.processutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.504 2 DEBUG nova.compute.provider_tree [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.523 2 DEBUG nova.scheduler.client.report [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.550 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.552 2 DEBUG nova.compute.manager [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.605 2 DEBUG nova.compute.manager [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.606 2 DEBUG nova.network.neutron [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.626 2 INFO nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.645 2 DEBUG nova.compute.manager [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.708 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173189.7076836, 045df9b6-6c73-44ec-aa65-2c736eb5c00c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.709 2 INFO nova.compute.manager [-] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] VM Stopped (Lifecycle Event)
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.739 2 DEBUG nova.compute.manager [None req-d9cd4df9-519e-4a42-86ab-24fae69a3e36 - - - - - -] [instance: 045df9b6-6c73-44ec-aa65-2c736eb5c00c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.763 2 DEBUG nova.compute.manager [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.764 2 DEBUG nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.765 2 INFO nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Creating image(s)
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001105905999706974 of space, bias 1.0, pg target 0.3317717999120922 quantized to 32 (current 32)
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:00:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.812 2 DEBUG nova.storage.rbd_utils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image b75d8ded-515b-48ff-a6b6-28df88878996_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.849 2 DEBUG nova.storage.rbd_utils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image b75d8ded-515b-48ff-a6b6-28df88878996_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.889 2 DEBUG nova.storage.rbd_utils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image b75d8ded-515b-48ff-a6b6-28df88878996_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.895 2 DEBUG oslo_concurrency.processutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:04 compute-0 nova_compute[260935]: 2025-10-11 09:00:04.958 2 DEBUG nova.policy [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'df5a3c3a5d68473aa2e2950de45ebce1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '11b44ad9193e4e43838d52056ccf413e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:00:05 compute-0 nova_compute[260935]: 2025-10-11 09:00:05.009 2 DEBUG oslo_concurrency.processutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:05 compute-0 nova_compute[260935]: 2025-10-11 09:00:05.011 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:05 compute-0 nova_compute[260935]: 2025-10-11 09:00:05.012 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:05 compute-0 nova_compute[260935]: 2025-10-11 09:00:05.012 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:05 compute-0 nova_compute[260935]: 2025-10-11 09:00:05.044 2 DEBUG nova.storage.rbd_utils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image b75d8ded-515b-48ff-a6b6-28df88878996_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:05 compute-0 nova_compute[260935]: 2025-10-11 09:00:05.052 2 DEBUG oslo_concurrency.processutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b75d8ded-515b-48ff-a6b6-28df88878996_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:05 compute-0 ceph-mon[74313]: pgmap v1707: 321 pgs: 321 active+clean; 167 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 472 KiB/s rd, 1.8 MiB/s wr, 139 op/s
Oct 11 09:00:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1098586771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:00:05 compute-0 nova_compute[260935]: 2025-10-11 09:00:05.325 2 DEBUG oslo_concurrency.processutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b75d8ded-515b-48ff-a6b6-28df88878996_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:05 compute-0 nova_compute[260935]: 2025-10-11 09:00:05.407 2 DEBUG nova.storage.rbd_utils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] resizing rbd image b75d8ded-515b-48ff-a6b6-28df88878996_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:00:05 compute-0 nova_compute[260935]: 2025-10-11 09:00:05.511 2 DEBUG nova.objects.instance [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lazy-loading 'migration_context' on Instance uuid b75d8ded-515b-48ff-a6b6-28df88878996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:05 compute-0 nova_compute[260935]: 2025-10-11 09:00:05.534 2 DEBUG nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:00:05 compute-0 nova_compute[260935]: 2025-10-11 09:00:05.534 2 DEBUG nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Ensure instance console log exists: /var/lib/nova/instances/b75d8ded-515b-48ff-a6b6-28df88878996/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:00:05 compute-0 nova_compute[260935]: 2025-10-11 09:00:05.534 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:05 compute-0 nova_compute[260935]: 2025-10-11 09:00:05.535 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:05 compute-0 nova_compute[260935]: 2025-10-11 09:00:05.535 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:05 compute-0 nova_compute[260935]: 2025-10-11 09:00:05.755 2 DEBUG nova.network.neutron [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Successfully created port: 99e74dca-1d94-446c-ac4b-bc16dc028d2b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:00:05 compute-0 podman[342444]: 2025-10-11 09:00:05.807283317 +0000 UTC m=+0.101504815 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:00:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1708: 321 pgs: 321 active+clean; 167 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Oct 11 09:00:06 compute-0 nova_compute[260935]: 2025-10-11 09:00:06.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:06 compute-0 nova_compute[260935]: 2025-10-11 09:00:06.882 2 DEBUG nova.compute.manager [req-440ef9eb-dd8b-43a2-9b4d-297087bf5660 req-f619c386-6e73-4730-b3c6-8b28fbcb2fdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:06 compute-0 nova_compute[260935]: 2025-10-11 09:00:06.882 2 DEBUG oslo_concurrency.lockutils [req-440ef9eb-dd8b-43a2-9b4d-297087bf5660 req-f619c386-6e73-4730-b3c6-8b28fbcb2fdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:06 compute-0 nova_compute[260935]: 2025-10-11 09:00:06.882 2 DEBUG oslo_concurrency.lockutils [req-440ef9eb-dd8b-43a2-9b4d-297087bf5660 req-f619c386-6e73-4730-b3c6-8b28fbcb2fdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:06 compute-0 nova_compute[260935]: 2025-10-11 09:00:06.882 2 DEBUG oslo_concurrency.lockutils [req-440ef9eb-dd8b-43a2-9b4d-297087bf5660 req-f619c386-6e73-4730-b3c6-8b28fbcb2fdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:06 compute-0 nova_compute[260935]: 2025-10-11 09:00:06.883 2 DEBUG nova.compute.manager [req-440ef9eb-dd8b-43a2-9b4d-297087bf5660 req-f619c386-6e73-4730-b3c6-8b28fbcb2fdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] No waiting events found dispatching network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:06 compute-0 nova_compute[260935]: 2025-10-11 09:00:06.883 2 WARNING nova.compute.manager [req-440ef9eb-dd8b-43a2-9b4d-297087bf5660 req-f619c386-6e73-4730-b3c6-8b28fbcb2fdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received unexpected event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd for instance with vm_state active and task_state None.
Oct 11 09:00:06 compute-0 nova_compute[260935]: 2025-10-11 09:00:06.953 2 DEBUG nova.network.neutron [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Successfully updated port: 99e74dca-1d94-446c-ac4b-bc16dc028d2b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:00:06 compute-0 nova_compute[260935]: 2025-10-11 09:00:06.974 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:00:06 compute-0 nova_compute[260935]: 2025-10-11 09:00:06.975 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:00:06 compute-0 nova_compute[260935]: 2025-10-11 09:00:06.975 2 DEBUG nova.network.neutron [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:00:07 compute-0 nova_compute[260935]: 2025-10-11 09:00:07.252 2 DEBUG nova.network.neutron [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:00:07 compute-0 ceph-mon[74313]: pgmap v1708: 321 pgs: 321 active+clean; 167 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Oct 11 09:00:07 compute-0 nova_compute[260935]: 2025-10-11 09:00:07.396 2 DEBUG oslo_concurrency.lockutils [None req-645c919b-3eac-43f3-b7db-1327b3724a83 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:07 compute-0 nova_compute[260935]: 2025-10-11 09:00:07.397 2 DEBUG oslo_concurrency.lockutils [None req-645c919b-3eac-43f3-b7db-1327b3724a83 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:07 compute-0 nova_compute[260935]: 2025-10-11 09:00:07.398 2 DEBUG nova.compute.manager [None req-645c919b-3eac-43f3-b7db-1327b3724a83 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:07 compute-0 nova_compute[260935]: 2025-10-11 09:00:07.403 2 DEBUG nova.compute.manager [None req-645c919b-3eac-43f3-b7db-1327b3724a83 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 11 09:00:07 compute-0 nova_compute[260935]: 2025-10-11 09:00:07.405 2 DEBUG nova.objects.instance [None req-645c919b-3eac-43f3-b7db-1327b3724a83 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lazy-loading 'flavor' on Instance uuid 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:07 compute-0 nova_compute[260935]: 2025-10-11 09:00:07.437 2 DEBUG nova.virt.libvirt.driver [None req-645c919b-3eac-43f3-b7db-1327b3724a83 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 09:00:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:00:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 7901 writes, 35K keys, 7901 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 7901 writes, 7901 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1719 writes, 7952 keys, 1719 commit groups, 1.0 writes per commit group, ingest: 10.03 MB, 0.02 MB/s
                                           Interval WAL: 1719 writes, 1719 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     90.4      0.47              0.16        21    0.022       0      0       0.0       0.0
                                             L6      1/0    9.22 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.5    166.5    136.5      1.09              0.64        20    0.055    100K    11K       0.0       0.0
                                            Sum      1/0    9.22 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   4.5    116.8    122.7      1.56              0.80        41    0.038    100K    11K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.7    139.8    142.5      0.36              0.19        10    0.036     31K   3145       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    166.5    136.5      1.09              0.64        20    0.055    100K    11K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     91.1      0.46              0.16        20    0.023       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.041, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.19 GB write, 0.06 MB/s write, 0.18 GB read, 0.06 MB/s read, 1.6 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 304.00 MB usage: 21.78 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000261 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1440,21.00 MB,6.9065%) FilterBlock(42,288.42 KB,0.092652%) IndexBlock(42,511.12 KB,0.164193%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 09:00:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1709: 321 pgs: 321 active+clean; 213 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 184 op/s
Oct 11 09:00:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:00:08 compute-0 nova_compute[260935]: 2025-10-11 09:00:08.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:08 compute-0 nova_compute[260935]: 2025-10-11 09:00:08.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:00:08 compute-0 nova_compute[260935]: 2025-10-11 09:00:08.780 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173193.779023, f64218b5-ea49-4ed7-9945-1b8e056e4161 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:08 compute-0 nova_compute[260935]: 2025-10-11 09:00:08.781 2 INFO nova.compute.manager [-] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] VM Stopped (Lifecycle Event)
Oct 11 09:00:08 compute-0 nova_compute[260935]: 2025-10-11 09:00:08.810 2 DEBUG nova.compute.manager [None req-97972fe1-bace-43e4-a208-5590b48c74e4 - - - - - -] [instance: f64218b5-ea49-4ed7-9945-1b8e056e4161] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.007 2 DEBUG nova.compute.manager [req-6903f328-a7eb-4204-b76a-bdc6ca033fd5 req-cb539685-fa21-499d-8ad7-7bf9f1fa49e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Received event network-changed-99e74dca-1d94-446c-ac4b-bc16dc028d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.008 2 DEBUG nova.compute.manager [req-6903f328-a7eb-4204-b76a-bdc6ca033fd5 req-cb539685-fa21-499d-8ad7-7bf9f1fa49e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Refreshing instance network info cache due to event network-changed-99e74dca-1d94-446c-ac4b-bc16dc028d2b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.009 2 DEBUG oslo_concurrency.lockutils [req-6903f328-a7eb-4204-b76a-bdc6ca033fd5 req-cb539685-fa21-499d-8ad7-7bf9f1fa49e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:00:09 compute-0 ceph-mon[74313]: pgmap v1709: 321 pgs: 321 active+clean; 213 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 184 op/s
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.313 2 DEBUG nova.network.neutron [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.358 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.359 2 DEBUG nova.compute.manager [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance network_info: |[{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.361 2 DEBUG oslo_concurrency.lockutils [req-6903f328-a7eb-4204-b76a-bdc6ca033fd5 req-cb539685-fa21-499d-8ad7-7bf9f1fa49e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.362 2 DEBUG nova.network.neutron [req-6903f328-a7eb-4204-b76a-bdc6ca033fd5 req-cb539685-fa21-499d-8ad7-7bf9f1fa49e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Refreshing network info cache for port 99e74dca-1d94-446c-ac4b-bc16dc028d2b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.368 2 DEBUG nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Start _get_guest_xml network_info=[{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.375 2 WARNING nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.381 2 DEBUG nova.virt.libvirt.host [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.383 2 DEBUG nova.virt.libvirt.host [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.387 2 DEBUG nova.virt.libvirt.host [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.388 2 DEBUG nova.virt.libvirt.host [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.389 2 DEBUG nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.390 2 DEBUG nova.virt.hardware [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.391 2 DEBUG nova.virt.hardware [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.391 2 DEBUG nova.virt.hardware [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.392 2 DEBUG nova.virt.hardware [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.392 2 DEBUG nova.virt.hardware [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.393 2 DEBUG nova.virt.hardware [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.394 2 DEBUG nova.virt.hardware [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.395 2 DEBUG nova.virt.hardware [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.395 2 DEBUG nova.virt.hardware [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.396 2 DEBUG nova.virt.hardware [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.397 2 DEBUG nova.virt.hardware [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.402 2 DEBUG oslo_concurrency.processutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:09.535 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:00:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:09.537 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:00:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/535352107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.885 2 DEBUG oslo_concurrency.processutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.926 2 DEBUG nova.storage.rbd_utils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image b75d8ded-515b-48ff-a6b6-28df88878996_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:09 compute-0 nova_compute[260935]: 2025-10-11 09:00:09.934 2 DEBUG oslo_concurrency.processutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 321 active+clean; 213 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 11 09:00:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/535352107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:00:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3792069021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.433 2 DEBUG oslo_concurrency.processutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.437 2 DEBUG nova.virt.libvirt.vif [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:00:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-217285829',display_name='tempest-ServerRescueTestJSON-server-217285829',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-217285829',id=85,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-i0umpniu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-1667208638-pr
oject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:00:04Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=b75d8ded-515b-48ff-a6b6-28df88878996,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.438 2 DEBUG nova.network.os_vif_util [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converting VIF {"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.440 2 DEBUG nova.network.os_vif_util [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:9b:26,bridge_name='br-int',has_traffic_filtering=True,id=99e74dca-1d94-446c-ac4b-bc16dc028d2b,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99e74dca-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.443 2 DEBUG nova.objects.instance [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lazy-loading 'pci_devices' on Instance uuid b75d8ded-515b-48ff-a6b6-28df88878996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.464 2 DEBUG nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:00:10 compute-0 nova_compute[260935]:   <uuid>b75d8ded-515b-48ff-a6b6-28df88878996</uuid>
Oct 11 09:00:10 compute-0 nova_compute[260935]:   <name>instance-00000055</name>
Oct 11 09:00:10 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:00:10 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:00:10 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerRescueTestJSON-server-217285829</nova:name>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:00:09</nova:creationTime>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:00:10 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:00:10 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:00:10 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:00:10 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:00:10 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:00:10 compute-0 nova_compute[260935]:         <nova:user uuid="df5a3c3a5d68473aa2e2950de45ebce1">tempest-ServerRescueTestJSON-1667208638-project-member</nova:user>
Oct 11 09:00:10 compute-0 nova_compute[260935]:         <nova:project uuid="11b44ad9193e4e43838d52056ccf413e">tempest-ServerRescueTestJSON-1667208638</nova:project>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:00:10 compute-0 nova_compute[260935]:         <nova:port uuid="99e74dca-1d94-446c-ac4b-bc16dc028d2b">
Oct 11 09:00:10 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:00:10 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:00:10 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <system>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <entry name="serial">b75d8ded-515b-48ff-a6b6-28df88878996</entry>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <entry name="uuid">b75d8ded-515b-48ff-a6b6-28df88878996</entry>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     </system>
Oct 11 09:00:10 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:00:10 compute-0 nova_compute[260935]:   <os>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:   </os>
Oct 11 09:00:10 compute-0 nova_compute[260935]:   <features>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:   </features>
Oct 11 09:00:10 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:00:10 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:00:10 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b75d8ded-515b-48ff-a6b6-28df88878996_disk">
Oct 11 09:00:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       </source>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:00:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b75d8ded-515b-48ff-a6b6-28df88878996_disk.config">
Oct 11 09:00:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       </source>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:00:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:ab:9b:26"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <target dev="tap99e74dca-1d"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/b75d8ded-515b-48ff-a6b6-28df88878996/console.log" append="off"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <video>
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     </video>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:00:10 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:00:10 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:00:10 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:00:10 compute-0 nova_compute[260935]: </domain>
Oct 11 09:00:10 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.471 2 DEBUG nova.compute.manager [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Preparing to wait for external event network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.472 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.472 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.472 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.473 2 DEBUG nova.virt.libvirt.vif [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:00:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-217285829',display_name='tempest-ServerRescueTestJSON-server-217285829',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-217285829',id=85,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-i0umpniu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-166
7208638-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:00:04Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=b75d8ded-515b-48ff-a6b6-28df88878996,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.473 2 DEBUG nova.network.os_vif_util [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converting VIF {"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.475 2 DEBUG nova.network.os_vif_util [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:9b:26,bridge_name='br-int',has_traffic_filtering=True,id=99e74dca-1d94-446c-ac4b-bc16dc028d2b,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99e74dca-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.475 2 DEBUG os_vif [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:9b:26,bridge_name='br-int',has_traffic_filtering=True,id=99e74dca-1d94-446c-ac4b-bc16dc028d2b,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99e74dca-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.477 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.485 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99e74dca-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.485 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap99e74dca-1d, col_values=(('external_ids', {'iface-id': '99e74dca-1d94-446c-ac4b-bc16dc028d2b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:9b:26', 'vm-uuid': 'b75d8ded-515b-48ff-a6b6-28df88878996'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:10 compute-0 NetworkManager[44960]: <info>  [1760173210.4896] manager: (tap99e74dca-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.496 2 INFO os_vif [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:9b:26,bridge_name='br-int',has_traffic_filtering=True,id=99e74dca-1d94-446c-ac4b-bc16dc028d2b,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99e74dca-1d')
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.594 2 DEBUG nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.594 2 DEBUG nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.594 2 DEBUG nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] No VIF found with MAC fa:16:3e:ab:9b:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.595 2 INFO nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Using config drive
Oct 11 09:00:10 compute-0 nova_compute[260935]: 2025-10-11 09:00:10.621 2 DEBUG nova.storage.rbd_utils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image b75d8ded-515b-48ff-a6b6-28df88878996_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:11 compute-0 nova_compute[260935]: 2025-10-11 09:00:11.239 2 INFO nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Creating config drive at /var/lib/nova/instances/b75d8ded-515b-48ff-a6b6-28df88878996/disk.config
Oct 11 09:00:11 compute-0 nova_compute[260935]: 2025-10-11 09:00:11.245 2 DEBUG oslo_concurrency.processutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b75d8ded-515b-48ff-a6b6-28df88878996/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmj_s1w9o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:11 compute-0 ceph-mon[74313]: pgmap v1710: 321 pgs: 321 active+clean; 213 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 11 09:00:11 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3792069021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:11 compute-0 nova_compute[260935]: 2025-10-11 09:00:11.396 2 DEBUG oslo_concurrency.processutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b75d8ded-515b-48ff-a6b6-28df88878996/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmj_s1w9o" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:11 compute-0 nova_compute[260935]: 2025-10-11 09:00:11.427 2 DEBUG nova.storage.rbd_utils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image b75d8ded-515b-48ff-a6b6-28df88878996_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:11 compute-0 nova_compute[260935]: 2025-10-11 09:00:11.432 2 DEBUG oslo_concurrency.processutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b75d8ded-515b-48ff-a6b6-28df88878996/disk.config b75d8ded-515b-48ff-a6b6-28df88878996_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:11 compute-0 nova_compute[260935]: 2025-10-11 09:00:11.615 2 DEBUG oslo_concurrency.processutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b75d8ded-515b-48ff-a6b6-28df88878996/disk.config b75d8ded-515b-48ff-a6b6-28df88878996_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:11 compute-0 nova_compute[260935]: 2025-10-11 09:00:11.618 2 INFO nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Deleting local config drive /var/lib/nova/instances/b75d8ded-515b-48ff-a6b6-28df88878996/disk.config because it was imported into RBD.
Oct 11 09:00:11 compute-0 kernel: tap99e74dca-1d: entered promiscuous mode
Oct 11 09:00:11 compute-0 NetworkManager[44960]: <info>  [1760173211.6847] manager: (tap99e74dca-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/333)
Oct 11 09:00:11 compute-0 ovn_controller[152945]: 2025-10-11T09:00:11Z|00752|binding|INFO|Claiming lport 99e74dca-1d94-446c-ac4b-bc16dc028d2b for this chassis.
Oct 11 09:00:11 compute-0 ovn_controller[152945]: 2025-10-11T09:00:11Z|00753|binding|INFO|99e74dca-1d94-446c-ac4b-bc16dc028d2b: Claiming fa:16:3e:ab:9b:26 10.100.0.3
Oct 11 09:00:11 compute-0 nova_compute[260935]: 2025-10-11 09:00:11.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:11.698 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:9b:26 10.100.0.3'], port_security=['fa:16:3e:ab:9b:26 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b75d8ded-515b-48ff-a6b6-28df88878996', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4686205-cbf0-4221-bc49-ebb890c4a59f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '11b44ad9193e4e43838d52056ccf413e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4b0cfb76-aebd-4cfc-96f5-00bacc345d65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9601b6e8-d9bc-46ca-99e8-33ec15f713e5, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=99e74dca-1d94-446c-ac4b-bc16dc028d2b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:00:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:11.700 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 99e74dca-1d94-446c-ac4b-bc16dc028d2b in datapath e4686205-cbf0-4221-bc49-ebb890c4a59f bound to our chassis
Oct 11 09:00:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:11.701 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e4686205-cbf0-4221-bc49-ebb890c4a59f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 11 09:00:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:11.702 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[20808fd8-f57a-43c5-b559-8881380d1bb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:11 compute-0 nova_compute[260935]: 2025-10-11 09:00:11.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:00:11 compute-0 nova_compute[260935]: 2025-10-11 09:00:11.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:00:11 compute-0 nova_compute[260935]: 2025-10-11 09:00:11.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:00:11 compute-0 nova_compute[260935]: 2025-10-11 09:00:11.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:00:11 compute-0 ovn_controller[152945]: 2025-10-11T09:00:11Z|00754|binding|INFO|Setting lport 99e74dca-1d94-446c-ac4b-bc16dc028d2b up in Southbound
Oct 11 09:00:11 compute-0 ovn_controller[152945]: 2025-10-11T09:00:11Z|00755|binding|INFO|Setting lport 99e74dca-1d94-446c-ac4b-bc16dc028d2b ovn-installed in OVS
Oct 11 09:00:11 compute-0 nova_compute[260935]: 2025-10-11 09:00:11.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:11 compute-0 nova_compute[260935]: 2025-10-11 09:00:11.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:11 compute-0 systemd-udevd[342609]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:00:11 compute-0 NetworkManager[44960]: <info>  [1760173211.7463] device (tap99e74dca-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:00:11 compute-0 NetworkManager[44960]: <info>  [1760173211.7485] device (tap99e74dca-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:00:11 compute-0 systemd-machined[215705]: New machine qemu-95-instance-00000055.
Oct 11 09:00:11 compute-0 systemd[1]: Started Virtual Machine qemu-95-instance-00000055.
Oct 11 09:00:11 compute-0 podman[342593]: 2025-10-11 09:00:11.789192698 +0000 UTC m=+0.092726285 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:00:11 compute-0 nova_compute[260935]: 2025-10-11 09:00:11.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.117 2 DEBUG nova.compute.manager [req-f9bc9816-41d9-4656-9524-88041794e421 req-9cf9b6da-78d1-43e3-aa2f-7ad671257e1b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Received event network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.117 2 DEBUG oslo_concurrency.lockutils [req-f9bc9816-41d9-4656-9524-88041794e421 req-9cf9b6da-78d1-43e3-aa2f-7ad671257e1b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.118 2 DEBUG oslo_concurrency.lockutils [req-f9bc9816-41d9-4656-9524-88041794e421 req-9cf9b6da-78d1-43e3-aa2f-7ad671257e1b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.118 2 DEBUG oslo_concurrency.lockutils [req-f9bc9816-41d9-4656-9524-88041794e421 req-9cf9b6da-78d1-43e3-aa2f-7ad671257e1b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.119 2 DEBUG nova.compute.manager [req-f9bc9816-41d9-4656-9524-88041794e421 req-9cf9b6da-78d1-43e3-aa2f-7ad671257e1b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Processing event network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:00:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1711: 321 pgs: 321 active+clean; 214 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.265 2 DEBUG nova.network.neutron [req-6903f328-a7eb-4204-b76a-bdc6ca033fd5 req-cb539685-fa21-499d-8ad7-7bf9f1fa49e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated VIF entry in instance network info cache for port 99e74dca-1d94-446c-ac4b-bc16dc028d2b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.266 2 DEBUG nova.network.neutron [req-6903f328-a7eb-4204-b76a-bdc6ca033fd5 req-cb539685-fa21-499d-8ad7-7bf9f1fa49e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.292 2 DEBUG oslo_concurrency.lockutils [req-6903f328-a7eb-4204-b76a-bdc6ca033fd5 req-cb539685-fa21-499d-8ad7-7bf9f1fa49e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.796 2 DEBUG nova.compute.manager [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.797 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173212.795359, b75d8ded-515b-48ff-a6b6-28df88878996 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.797 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] VM Started (Lifecycle Event)
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.801 2 DEBUG nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.805 2 INFO nova.virt.libvirt.driver [-] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance spawned successfully.
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.805 2 DEBUG nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.853 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.860 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.869 2 DEBUG nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.870 2 DEBUG nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.871 2 DEBUG nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.873 2 DEBUG nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.873 2 DEBUG nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.874 2 DEBUG nova.virt.libvirt.driver [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.910 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.910 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173212.8004985, b75d8ded-515b-48ff-a6b6-28df88878996 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.911 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] VM Paused (Lifecycle Event)
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.945 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.951 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173212.8006685, b75d8ded-515b-48ff-a6b6-28df88878996 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.951 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] VM Resumed (Lifecycle Event)
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.962 2 INFO nova.compute.manager [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Took 8.20 seconds to spawn the instance on the hypervisor.
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.963 2 DEBUG nova.compute.manager [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.970 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173197.9668672, 8848c29f-c82a-4f50-82f4-b2e317161489 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.970 2 INFO nova.compute.manager [-] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] VM Stopped (Lifecycle Event)
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.972 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:12 compute-0 nova_compute[260935]: 2025-10-11 09:00:12.978 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:00:13 compute-0 nova_compute[260935]: 2025-10-11 09:00:13.013 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:00:13 compute-0 nova_compute[260935]: 2025-10-11 09:00:13.024 2 DEBUG nova.compute.manager [None req-8c9594bd-cf62-4ed8-8395-373ba6ab589b - - - - - -] [instance: 8848c29f-c82a-4f50-82f4-b2e317161489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:13 compute-0 nova_compute[260935]: 2025-10-11 09:00:13.041 2 INFO nova.compute.manager [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Took 9.26 seconds to build instance.
Oct 11 09:00:13 compute-0 nova_compute[260935]: 2025-10-11 09:00:13.073 2 DEBUG oslo_concurrency.lockutils [None req-b29b2c60-5c6d-47bd-834f-dfdd81e28227 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:13 compute-0 ceph-mon[74313]: pgmap v1711: 321 pgs: 321 active+clean; 214 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Oct 11 09:00:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:00:13 compute-0 nova_compute[260935]: 2025-10-11 09:00:13.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:13 compute-0 nova_compute[260935]: 2025-10-11 09:00:13.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:00:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1712: 321 pgs: 321 active+clean; 214 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 11 09:00:14 compute-0 sshd-session[342672]: Invalid user matthew from 152.32.213.170 port 38148
Oct 11 09:00:14 compute-0 sshd-session[342672]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:00:14 compute-0 sshd-session[342672]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:00:15 compute-0 podman[342674]: 2025-10-11 09:00:15.054007302 +0000 UTC m=+0.064277443 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 09:00:15 compute-0 podman[342675]: 2025-10-11 09:00:15.121749364 +0000 UTC m=+0.125363226 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.160 2 DEBUG nova.compute.manager [req-5e2849d7-000b-47ef-8ce6-91978639c0ac req-8dd7bf9b-81b5-4441-a38e-edb7743ee534 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Received event network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.160 2 DEBUG oslo_concurrency.lockutils [req-5e2849d7-000b-47ef-8ce6-91978639c0ac req-8dd7bf9b-81b5-4441-a38e-edb7743ee534 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.161 2 DEBUG oslo_concurrency.lockutils [req-5e2849d7-000b-47ef-8ce6-91978639c0ac req-8dd7bf9b-81b5-4441-a38e-edb7743ee534 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.161 2 DEBUG oslo_concurrency.lockutils [req-5e2849d7-000b-47ef-8ce6-91978639c0ac req-8dd7bf9b-81b5-4441-a38e-edb7743ee534 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.162 2 DEBUG nova.compute.manager [req-5e2849d7-000b-47ef-8ce6-91978639c0ac req-8dd7bf9b-81b5-4441-a38e-edb7743ee534 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] No waiting events found dispatching network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.162 2 WARNING nova.compute.manager [req-5e2849d7-000b-47ef-8ce6-91978639c0ac req-8dd7bf9b-81b5-4441-a38e-edb7743ee534 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Received unexpected event network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b for instance with vm_state active and task_state None.
Oct 11 09:00:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:15.195 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:15.195 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:15.196 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.296 2 INFO nova.compute.manager [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Rescuing
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.297 2 DEBUG oslo_concurrency.lockutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.298 2 DEBUG oslo_concurrency.lockutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.298 2 DEBUG nova.network.neutron [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:00:15 compute-0 ceph-mon[74313]: pgmap v1712: 321 pgs: 321 active+clean; 214 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.725 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.725 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.726 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.726 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:00:15 compute-0 nova_compute[260935]: 2025-10-11 09:00:15.727 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1713: 321 pgs: 321 active+clean; 214 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Oct 11 09:00:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:00:16 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4141083374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:00:16 compute-0 nova_compute[260935]: 2025-10-11 09:00:16.227 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:16 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4141083374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:00:16 compute-0 nova_compute[260935]: 2025-10-11 09:00:16.319 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:00:16 compute-0 nova_compute[260935]: 2025-10-11 09:00:16.320 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:00:16 compute-0 nova_compute[260935]: 2025-10-11 09:00:16.325 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:00:16 compute-0 nova_compute[260935]: 2025-10-11 09:00:16.325 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:00:16 compute-0 nova_compute[260935]: 2025-10-11 09:00:16.330 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:00:16 compute-0 nova_compute[260935]: 2025-10-11 09:00:16.331 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:00:16 compute-0 nova_compute[260935]: 2025-10-11 09:00:16.582 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:00:16 compute-0 nova_compute[260935]: 2025-10-11 09:00:16.584 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3376MB free_disk=59.90089416503906GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:00:16 compute-0 nova_compute[260935]: 2025-10-11 09:00:16.584 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:16 compute-0 nova_compute[260935]: 2025-10-11 09:00:16.585 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:16 compute-0 nova_compute[260935]: 2025-10-11 09:00:16.700 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:00:16 compute-0 nova_compute[260935]: 2025-10-11 09:00:16.700 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:00:16 compute-0 nova_compute[260935]: 2025-10-11 09:00:16.701 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:00:16 compute-0 nova_compute[260935]: 2025-10-11 09:00:16.701 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:00:16 compute-0 nova_compute[260935]: 2025-10-11 09:00:16.702 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:00:16 compute-0 nova_compute[260935]: 2025-10-11 09:00:16.801 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:00:17 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/177225758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:00:17 compute-0 sshd-session[342672]: Failed password for invalid user matthew from 152.32.213.170 port 38148 ssh2
Oct 11 09:00:17 compute-0 nova_compute[260935]: 2025-10-11 09:00:17.250 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:17 compute-0 nova_compute[260935]: 2025-10-11 09:00:17.258 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:00:17 compute-0 nova_compute[260935]: 2025-10-11 09:00:17.285 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:00:17 compute-0 nova_compute[260935]: 2025-10-11 09:00:17.331 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:00:17 compute-0 nova_compute[260935]: 2025-10-11 09:00:17.332 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:17 compute-0 ceph-mon[74313]: pgmap v1713: 321 pgs: 321 active+clean; 214 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Oct 11 09:00:17 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/177225758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:00:17 compute-0 ovn_controller[152945]: 2025-10-11T09:00:17Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:16:b9:d1 10.100.0.11
Oct 11 09:00:17 compute-0 ovn_controller[152945]: 2025-10-11T09:00:17Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:b9:d1 10.100.0.11
Oct 11 09:00:17 compute-0 nova_compute[260935]: 2025-10-11 09:00:17.544 2 DEBUG nova.virt.libvirt.driver [None req-645c919b-3eac-43f3-b7db-1327b3724a83 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 09:00:17 compute-0 nova_compute[260935]: 2025-10-11 09:00:17.638 2 DEBUG nova.network.neutron [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:00:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 246 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 237 op/s
Oct 11 09:00:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:00:18 compute-0 nova_compute[260935]: 2025-10-11 09:00:18.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:19 compute-0 nova_compute[260935]: 2025-10-11 09:00:19.111 2 DEBUG oslo_concurrency.lockutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:00:19 compute-0 ceph-mon[74313]: pgmap v1714: 321 pgs: 321 active+clean; 246 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 237 op/s
Oct 11 09:00:19 compute-0 nova_compute[260935]: 2025-10-11 09:00:19.531 2 DEBUG nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 09:00:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:19.539 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:19 compute-0 sshd-session[342672]: Received disconnect from 152.32.213.170 port 38148:11: Bye Bye [preauth]
Oct 11 09:00:19 compute-0 sshd-session[342672]: Disconnected from invalid user matthew 152.32.213.170 port 38148 [preauth]
Oct 11 09:00:20 compute-0 kernel: tapbcfaf217-87 (unregistering): left promiscuous mode
Oct 11 09:00:20 compute-0 NetworkManager[44960]: <info>  [1760173220.0285] device (tapbcfaf217-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:00:20 compute-0 ovn_controller[152945]: 2025-10-11T09:00:20Z|00756|binding|INFO|Releasing lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd from this chassis (sb_readonly=0)
Oct 11 09:00:20 compute-0 ovn_controller[152945]: 2025-10-11T09:00:20Z|00757|binding|INFO|Setting lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd down in Southbound
Oct 11 09:00:20 compute-0 ovn_controller[152945]: 2025-10-11T09:00:20Z|00758|binding|INFO|Removing iface tapbcfaf217-87 ovn-installed in OVS
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.084 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:b9:d1 10.100.0.11'], port_security=['fa:16:3e:16:b9:d1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2ca1b1c6-7cd2-42f9-a24b-5c20bb567361', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd33b48586acf4e6c8254f2a1213b001c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7c2dc1cf-8ac0-4645-86fa-d32df3cf1552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68f3e6c4-f574-4830-9133-912bb9cd6132, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=bcfaf217-8703-4c1e-bf80-d24ab0e642bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.086 162815 INFO neutron.agent.ovn.metadata.agent [-] Port bcfaf217-8703-4c1e-bf80-d24ab0e642bd in datapath 164a664d-5e52-48b9-8b00-f73d0851a4cc unbound from our chassis
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.090 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 164a664d-5e52-48b9-8b00-f73d0851a4cc
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.121 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2fd853-8923-41bd-af16-91b246e21e97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d00000054.scope: Deactivated successfully.
Oct 11 09:00:20 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d00000054.scope: Consumed 13.495s CPU time.
Oct 11 09:00:20 compute-0 systemd-machined[215705]: Machine qemu-94-instance-00000054 terminated.
Oct 11 09:00:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 246 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.161 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea6d154-1f6b-48ed-a697-b0d2b1c3c4f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.166 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c70b345f-86c3-4eac-a79b-c2bd69f23ecb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.203 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[46201030-aba1-4cd3-99a4-9424169f592c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.228 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[52fd0fb3-c2fb-45be-9bd6-0cd78fd0dd5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap164a664d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:a0:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500906, 'reachable_time': 17587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342773, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.253 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[287d914f-87e7-4b63-b865-a5f189a34f9b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap164a664d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500919, 'tstamp': 500919}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342774, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap164a664d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500922, 'tstamp': 500922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342774, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.256 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap164a664d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:20 compute-0 kernel: tapbcfaf217-87: entered promiscuous mode
Oct 11 09:00:20 compute-0 NetworkManager[44960]: <info>  [1760173220.2587] manager: (tapbcfaf217-87): new Tun device (/org/freedesktop/NetworkManager/Devices/334)
Oct 11 09:00:20 compute-0 kernel: tapbcfaf217-87 (unregistering): left promiscuous mode
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:20 compute-0 ovn_controller[152945]: 2025-10-11T09:00:20Z|00759|binding|INFO|Claiming lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd for this chassis.
Oct 11 09:00:20 compute-0 ovn_controller[152945]: 2025-10-11T09:00:20Z|00760|binding|INFO|bcfaf217-8703-4c1e-bf80-d24ab0e642bd: Claiming fa:16:3e:16:b9:d1 10.100.0.11
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.291 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:b9:d1 10.100.0.11'], port_security=['fa:16:3e:16:b9:d1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2ca1b1c6-7cd2-42f9-a24b-5c20bb567361', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd33b48586acf4e6c8254f2a1213b001c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7c2dc1cf-8ac0-4645-86fa-d32df3cf1552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68f3e6c4-f574-4830-9133-912bb9cd6132, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=bcfaf217-8703-4c1e-bf80-d24ab0e642bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:00:20 compute-0 ovn_controller[152945]: 2025-10-11T09:00:20Z|00761|binding|INFO|Setting lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd ovn-installed in OVS
Oct 11 09:00:20 compute-0 ovn_controller[152945]: 2025-10-11T09:00:20Z|00762|binding|INFO|Setting lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd up in Southbound
Oct 11 09:00:20 compute-0 ovn_controller[152945]: 2025-10-11T09:00:20Z|00763|binding|INFO|Releasing lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd from this chassis (sb_readonly=1)
Oct 11 09:00:20 compute-0 ovn_controller[152945]: 2025-10-11T09:00:20Z|00764|if_status|INFO|Dropped 1 log messages in last 186 seconds (most recently, 186 seconds ago) due to excessive rate
Oct 11 09:00:20 compute-0 ovn_controller[152945]: 2025-10-11T09:00:20Z|00765|if_status|INFO|Not setting lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd down as sb is readonly
Oct 11 09:00:20 compute-0 ovn_controller[152945]: 2025-10-11T09:00:20Z|00766|binding|INFO|Removing iface tapbcfaf217-87 ovn-installed in OVS
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:20 compute-0 ovn_controller[152945]: 2025-10-11T09:00:20Z|00767|binding|INFO|Releasing lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd from this chassis (sb_readonly=0)
Oct 11 09:00:20 compute-0 ovn_controller[152945]: 2025-10-11T09:00:20Z|00768|binding|INFO|Setting lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd down in Southbound
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.333 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.334 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.333 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap164a664d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.337 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.339 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap164a664d-50, col_values=(('external_ids', {'iface-id': 'e23cd806-8523-4e59-ba27-db15cee52548'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.341 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.343 162815 INFO neutron.agent.ovn.metadata.agent [-] Port bcfaf217-8703-4c1e-bf80-d24ab0e642bd in datapath 164a664d-5e52-48b9-8b00-f73d0851a4cc unbound from our chassis
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.345 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:b9:d1 10.100.0.11'], port_security=['fa:16:3e:16:b9:d1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2ca1b1c6-7cd2-42f9-a24b-5c20bb567361', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd33b48586acf4e6c8254f2a1213b001c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7c2dc1cf-8ac0-4645-86fa-d32df3cf1552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68f3e6c4-f574-4830-9133-912bb9cd6132, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=bcfaf217-8703-4c1e-bf80-d24ab0e642bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.351 2 DEBUG nova.compute.manager [req-b57a7511-7c5a-4c0a-b4cb-42560edbcf21 req-53e6f6f3-6089-4f7a-bce0-ace86b37c962 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-unplugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.353 2 DEBUG oslo_concurrency.lockutils [req-b57a7511-7c5a-4c0a-b4cb-42560edbcf21 req-53e6f6f3-6089-4f7a-bce0-ace86b37c962 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.353 2 DEBUG oslo_concurrency.lockutils [req-b57a7511-7c5a-4c0a-b4cb-42560edbcf21 req-53e6f6f3-6089-4f7a-bce0-ace86b37c962 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.354 2 DEBUG oslo_concurrency.lockutils [req-b57a7511-7c5a-4c0a-b4cb-42560edbcf21 req-53e6f6f3-6089-4f7a-bce0-ace86b37c962 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.355 2 DEBUG nova.compute.manager [req-b57a7511-7c5a-4c0a-b4cb-42560edbcf21 req-53e6f6f3-6089-4f7a-bce0-ace86b37c962 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] No waiting events found dispatching network-vif-unplugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.355 2 WARNING nova.compute.manager [req-b57a7511-7c5a-4c0a-b4cb-42560edbcf21 req-53e6f6f3-6089-4f7a-bce0-ace86b37c962 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received unexpected event network-vif-unplugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd for instance with vm_state active and task_state powering-off.
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.358 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 164a664d-5e52-48b9-8b00-f73d0851a4cc
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.381 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.388 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f47fb1-c626-4731-8058-682c4f3f725e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.441 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1f9f2126-c900-4ed9-9148-89a0c349eb6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.446 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d6104aa9-5e7f-4f72-8c2b-2963757b0718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.495 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[874431c1-8534-4171-9c15-bd5d7ae52126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.523 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[34016e69-295b-4bed-9e50-69d3759aa6bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap164a664d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:a0:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500906, 'reachable_time': 17587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342785, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.554 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b41ef9-0a36-4ee5-b80b-98dc2a52cd4a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap164a664d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500919, 'tstamp': 500919}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342786, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap164a664d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500922, 'tstamp': 500922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342786, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.556 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap164a664d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.562 2 INFO nova.virt.libvirt.driver [None req-645c919b-3eac-43f3-b7db-1327b3724a83 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance shutdown successfully after 13 seconds.
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.564 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap164a664d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.565 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.565 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap164a664d-50, col_values=(('external_ids', {'iface-id': 'e23cd806-8523-4e59-ba27-db15cee52548'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.566 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.568 162815 INFO neutron.agent.ovn.metadata.agent [-] Port bcfaf217-8703-4c1e-bf80-d24ab0e642bd in datapath 164a664d-5e52-48b9-8b00-f73d0851a4cc unbound from our chassis
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.569 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 164a664d-5e52-48b9-8b00-f73d0851a4cc
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.571 2 INFO nova.virt.libvirt.driver [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance destroyed successfully.
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.572 2 DEBUG nova.objects.instance [None req-645c919b-3eac-43f3-b7db-1327b3724a83 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lazy-loading 'numa_topology' on Instance uuid 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.585 2 DEBUG nova.compute.manager [None req-645c919b-3eac-43f3-b7db-1327b3724a83 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.593 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[936e0a35-131e-472f-8894-e20515f03040]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.636 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf4f99c-3d0b-4ef4-82a6-ccf254eb58e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.640 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b024f252-a71d-4662-8550-7e6247ee6182]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.647 2 DEBUG oslo_concurrency.lockutils [None req-645c919b-3eac-43f3-b7db-1327b3724a83 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.681 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ff78c4bc-fe6e-4b27-a29f-530b783aa392]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.708 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f256c7-f506-4b5d-9226-9bf3981a0125]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap164a664d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:a0:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500906, 'reachable_time': 17587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342793, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.737 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[09d44e5a-8f1c-491c-84f4-c36dc7df9b28]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap164a664d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500919, 'tstamp': 500919}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342794, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap164a664d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500922, 'tstamp': 500922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342794, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.741 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap164a664d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:20 compute-0 nova_compute[260935]: 2025-10-11 09:00:20.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.751 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap164a664d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.752 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.753 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap164a664d-50, col_values=(('external_ids', {'iface-id': 'e23cd806-8523-4e59-ba27-db15cee52548'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:20.754 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:00:21 compute-0 ceph-mon[74313]: pgmap v1715: 321 pgs: 321 active+clean; 246 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Oct 11 09:00:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1716: 321 pgs: 321 active+clean; 246 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.687 2 DEBUG nova.compute.manager [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.687 2 DEBUG oslo_concurrency.lockutils [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.688 2 DEBUG oslo_concurrency.lockutils [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.688 2 DEBUG oslo_concurrency.lockutils [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.689 2 DEBUG nova.compute.manager [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] No waiting events found dispatching network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.689 2 WARNING nova.compute.manager [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received unexpected event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd for instance with vm_state stopped and task_state rebuilding.
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.689 2 DEBUG nova.compute.manager [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.690 2 DEBUG oslo_concurrency.lockutils [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.690 2 DEBUG oslo_concurrency.lockutils [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.691 2 DEBUG oslo_concurrency.lockutils [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.691 2 DEBUG nova.compute.manager [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] No waiting events found dispatching network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.692 2 WARNING nova.compute.manager [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received unexpected event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd for instance with vm_state stopped and task_state rebuilding.
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.692 2 DEBUG nova.compute.manager [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.693 2 DEBUG oslo_concurrency.lockutils [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.693 2 DEBUG oslo_concurrency.lockutils [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.694 2 DEBUG oslo_concurrency.lockutils [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.694 2 DEBUG nova.compute.manager [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] No waiting events found dispatching network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.695 2 WARNING nova.compute.manager [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received unexpected event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd for instance with vm_state stopped and task_state rebuilding.
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.695 2 DEBUG nova.compute.manager [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-unplugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.695 2 DEBUG oslo_concurrency.lockutils [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.696 2 DEBUG oslo_concurrency.lockutils [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.696 2 DEBUG oslo_concurrency.lockutils [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.697 2 DEBUG nova.compute.manager [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] No waiting events found dispatching network-vif-unplugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.697 2 WARNING nova.compute.manager [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received unexpected event network-vif-unplugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd for instance with vm_state stopped and task_state rebuilding.
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.698 2 DEBUG nova.compute.manager [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.698 2 DEBUG oslo_concurrency.lockutils [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.699 2 DEBUG oslo_concurrency.lockutils [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.699 2 DEBUG oslo_concurrency.lockutils [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.700 2 DEBUG nova.compute.manager [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] No waiting events found dispatching network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:22 compute-0 nova_compute[260935]: 2025-10-11 09:00:22.700 2 WARNING nova.compute.manager [req-809e2310-3a88-41ea-b896-72e3fac3c923 req-5d2e59fc-fd00-4efb-9982-c660798ff717 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received unexpected event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd for instance with vm_state stopped and task_state rebuilding.
Oct 11 09:00:23 compute-0 ceph-mon[74313]: pgmap v1716: 321 pgs: 321 active+clean; 246 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Oct 11 09:00:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:00:23 compute-0 nova_compute[260935]: 2025-10-11 09:00:23.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 246 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 11 09:00:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:00:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:00:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:00:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:00:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:00:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:00:25 compute-0 ceph-mon[74313]: pgmap v1717: 321 pgs: 321 active+clean; 246 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 11 09:00:25 compute-0 nova_compute[260935]: 2025-10-11 09:00:25.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1718: 321 pgs: 321 active+clean; 246 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 11 09:00:27 compute-0 ceph-mon[74313]: pgmap v1718: 321 pgs: 321 active+clean; 246 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 11 09:00:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1719: 321 pgs: 321 active+clean; 279 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 192 op/s
Oct 11 09:00:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:00:28 compute-0 nova_compute[260935]: 2025-10-11 09:00:28.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:29 compute-0 ceph-mon[74313]: pgmap v1719: 321 pgs: 321 active+clean; 279 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 192 op/s
Oct 11 09:00:29 compute-0 nova_compute[260935]: 2025-10-11 09:00:29.591 2 DEBUG nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 09:00:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 279 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Oct 11 09:00:30 compute-0 nova_compute[260935]: 2025-10-11 09:00:30.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:31 compute-0 ceph-mon[74313]: pgmap v1720: 321 pgs: 321 active+clean; 279 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Oct 11 09:00:31 compute-0 kernel: tap99e74dca-1d (unregistering): left promiscuous mode
Oct 11 09:00:31 compute-0 NetworkManager[44960]: <info>  [1760173231.9250] device (tap99e74dca-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:00:31 compute-0 nova_compute[260935]: 2025-10-11 09:00:31.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:31 compute-0 ovn_controller[152945]: 2025-10-11T09:00:31Z|00769|binding|INFO|Releasing lport 99e74dca-1d94-446c-ac4b-bc16dc028d2b from this chassis (sb_readonly=0)
Oct 11 09:00:31 compute-0 ovn_controller[152945]: 2025-10-11T09:00:31Z|00770|binding|INFO|Setting lport 99e74dca-1d94-446c-ac4b-bc16dc028d2b down in Southbound
Oct 11 09:00:31 compute-0 ovn_controller[152945]: 2025-10-11T09:00:31Z|00771|binding|INFO|Removing iface tap99e74dca-1d ovn-installed in OVS
Oct 11 09:00:31 compute-0 nova_compute[260935]: 2025-10-11 09:00:31.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:31 compute-0 nova_compute[260935]: 2025-10-11 09:00:31.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:31 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d00000055.scope: Deactivated successfully.
Oct 11 09:00:31 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d00000055.scope: Consumed 13.022s CPU time.
Oct 11 09:00:32 compute-0 systemd-machined[215705]: Machine qemu-95-instance-00000055 terminated.
Oct 11 09:00:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1721: 321 pgs: 321 active+clean; 279 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Oct 11 09:00:32 compute-0 nova_compute[260935]: 2025-10-11 09:00:32.608 2 INFO nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance shutdown successfully after 13 seconds.
Oct 11 09:00:32 compute-0 nova_compute[260935]: 2025-10-11 09:00:32.616 2 INFO nova.virt.libvirt.driver [-] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance destroyed successfully.
Oct 11 09:00:32 compute-0 nova_compute[260935]: 2025-10-11 09:00:32.617 2 DEBUG nova.objects.instance [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lazy-loading 'numa_topology' on Instance uuid b75d8ded-515b-48ff-a6b6-28df88878996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:33 compute-0 ceph-mon[74313]: pgmap v1721: 321 pgs: 321 active+clean; 279 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Oct 11 09:00:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:00:33 compute-0 nova_compute[260935]: 2025-10-11 09:00:33.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1722: 321 pgs: 321 active+clean; 279 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Oct 11 09:00:35 compute-0 nova_compute[260935]: 2025-10-11 09:00:35.280 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173220.2782521, 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:35 compute-0 nova_compute[260935]: 2025-10-11 09:00:35.280 2 INFO nova.compute.manager [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] VM Stopped (Lifecycle Event)
Oct 11 09:00:35 compute-0 ceph-mon[74313]: pgmap v1722: 321 pgs: 321 active+clean; 279 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Oct 11 09:00:35 compute-0 nova_compute[260935]: 2025-10-11 09:00:35.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1723: 321 pgs: 321 active+clean; 279 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Oct 11 09:00:36 compute-0 podman[342822]: 2025-10-11 09:00:36.809510378 +0000 UTC m=+0.106264581 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 09:00:37 compute-0 ceph-mon[74313]: pgmap v1723: 321 pgs: 321 active+clean; 279 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Oct 11 09:00:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:00:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3570339183' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:00:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:00:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3570339183' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:00:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 279 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Oct 11 09:00:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3570339183' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:00:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3570339183' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:00:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:00:38 compute-0 nova_compute[260935]: 2025-10-11 09:00:38.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:39 compute-0 ceph-mon[74313]: pgmap v1724: 321 pgs: 321 active+clean; 279 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Oct 11 09:00:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:39.663 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:9b:26 10.100.0.3'], port_security=['fa:16:3e:ab:9b:26 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b75d8ded-515b-48ff-a6b6-28df88878996', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4686205-cbf0-4221-bc49-ebb890c4a59f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '11b44ad9193e4e43838d52056ccf413e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4b0cfb76-aebd-4cfc-96f5-00bacc345d65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9601b6e8-d9bc-46ca-99e8-33ec15f713e5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=99e74dca-1d94-446c-ac4b-bc16dc028d2b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:00:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:39.665 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 99e74dca-1d94-446c-ac4b-bc16dc028d2b in datapath e4686205-cbf0-4221-bc49-ebb890c4a59f unbound from our chassis
Oct 11 09:00:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:39.667 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e4686205-cbf0-4221-bc49-ebb890c4a59f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 11 09:00:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:39.668 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c3efd92a-3268-4cf3-a3c7-3f25b3891f44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.678 2 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 5.43 sec
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.700 2 DEBUG nova.compute.manager [None req-5fecd48f-6f90-44d4-a11c-ba3248170dd6 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.702 2 INFO nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Attempting rescue
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.703 2 DEBUG nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.707 2 DEBUG nova.compute.manager [None req-5fecd48f-6f90-44d4-a11c-ba3248170dd6 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: rebuilding, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.715 2 DEBUG nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.716 2 INFO nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Creating image(s)
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.751 2 DEBUG nova.storage.rbd_utils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image b75d8ded-515b-48ff-a6b6-28df88878996_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.757 2 DEBUG nova.objects.instance [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lazy-loading 'trusted_certs' on Instance uuid b75d8ded-515b-48ff-a6b6-28df88878996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.760 2 INFO nova.compute.manager [None req-5fecd48f-6f90-44d4-a11c-ba3248170dd6 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] During sync_power_state the instance has a pending task (rebuilding). Skip.
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.821 2 DEBUG nova.storage.rbd_utils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image b75d8ded-515b-48ff-a6b6-28df88878996_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.861 2 DEBUG nova.storage.rbd_utils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image b75d8ded-515b-48ff-a6b6-28df88878996_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.866 2 DEBUG oslo_concurrency.processutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.919 2 INFO nova.compute.manager [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Rebuilding instance
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.971 2 DEBUG oslo_concurrency.processutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.972 2 DEBUG oslo_concurrency.lockutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.973 2 DEBUG oslo_concurrency.lockutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:39 compute-0 nova_compute[260935]: 2025-10-11 09:00:39.973 2 DEBUG oslo_concurrency.lockutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.003 2 DEBUG nova.storage.rbd_utils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image b75d8ded-515b-48ff-a6b6-28df88878996_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.008 2 DEBUG oslo_concurrency.processutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b75d8ded-515b-48ff-a6b6-28df88878996_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.073 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.074 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.089 2 DEBUG nova.compute.manager [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.097 2 DEBUG nova.compute.manager [req-7b173ca2-553d-497a-9883-72c0c9b5a140 req-5bb03c11-a21b-4059-8d5c-2887939e59d7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Received event network-vif-unplugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.097 2 DEBUG oslo_concurrency.lockutils [req-7b173ca2-553d-497a-9883-72c0c9b5a140 req-5bb03c11-a21b-4059-8d5c-2887939e59d7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.098 2 DEBUG oslo_concurrency.lockutils [req-7b173ca2-553d-497a-9883-72c0c9b5a140 req-5bb03c11-a21b-4059-8d5c-2887939e59d7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.098 2 DEBUG oslo_concurrency.lockutils [req-7b173ca2-553d-497a-9883-72c0c9b5a140 req-5bb03c11-a21b-4059-8d5c-2887939e59d7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.098 2 DEBUG nova.compute.manager [req-7b173ca2-553d-497a-9883-72c0c9b5a140 req-5bb03c11-a21b-4059-8d5c-2887939e59d7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] No waiting events found dispatching network-vif-unplugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.098 2 WARNING nova.compute.manager [req-7b173ca2-553d-497a-9883-72c0c9b5a140 req-5bb03c11-a21b-4059-8d5c-2887939e59d7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Received unexpected event network-vif-unplugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b for instance with vm_state active and task_state rescuing.
Oct 11 09:00:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1725: 321 pgs: 321 active+clean; 279 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 24 KiB/s wr, 3 op/s
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.173 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.173 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.187 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.187 2 INFO nova.compute.claims [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.340 2 DEBUG nova.objects.instance [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.357 2 DEBUG nova.compute.manager [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.360 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.420 2 DEBUG oslo_concurrency.processutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b75d8ded-515b-48ff-a6b6-28df88878996_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.422 2 DEBUG nova.objects.instance [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lazy-loading 'migration_context' on Instance uuid b75d8ded-515b-48ff-a6b6-28df88878996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.445 2 DEBUG nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.446 2 DEBUG nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Start _get_guest_xml network_info=[{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1553544744-network", "vif_mac": "fa:16:3e:ab:9b:26"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.447 2 DEBUG nova.objects.instance [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lazy-loading 'resources' on Instance uuid b75d8ded-515b-48ff-a6b6-28df88878996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.469 2 WARNING nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.475 2 DEBUG nova.virt.libvirt.host [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.476 2 DEBUG nova.virt.libvirt.host [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.481 2 DEBUG nova.virt.libvirt.host [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.481 2 DEBUG nova.virt.libvirt.host [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.482 2 DEBUG nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.483 2 DEBUG nova.virt.hardware [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.484 2 DEBUG nova.virt.hardware [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.484 2 DEBUG nova.virt.hardware [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.485 2 DEBUG nova.virt.hardware [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.485 2 DEBUG nova.virt.hardware [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.486 2 DEBUG nova.virt.hardware [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.487 2 DEBUG nova.virt.hardware [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.487 2 DEBUG nova.virt.hardware [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.488 2 DEBUG nova.virt.hardware [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.488 2 DEBUG nova.virt.hardware [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.489 2 DEBUG nova.virt.hardware [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.490 2 DEBUG nova.objects.instance [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lazy-loading 'vcpu_model' on Instance uuid b75d8ded-515b-48ff-a6b6-28df88878996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.495 2 DEBUG nova.objects.instance [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lazy-loading 'pci_requests' on Instance uuid 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.518 2 DEBUG oslo_concurrency.processutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.576 2 DEBUG nova.objects.instance [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.599 2 DEBUG nova.objects.instance [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lazy-loading 'resources' on Instance uuid 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.612 2 DEBUG nova.objects.instance [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lazy-loading 'migration_context' on Instance uuid 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.627 2 DEBUG nova.objects.instance [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.633 2 INFO nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance already shutdown.
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.642 2 INFO nova.virt.libvirt.driver [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance destroyed successfully.
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.650 2 INFO nova.virt.libvirt.driver [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance destroyed successfully.
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.652 2 DEBUG nova.virt.libvirt.vif [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:59:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1432792747',display_name='tempest-tempest.common.compute-instance-1432792747',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1432792747',id=84,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:00:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d33b48586acf4e6c8254f2a1213b001c',ramdisk_id='',reservation_id='r-dj88erzq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-620268335',owner_user_name='tempest-ServerActionsTestOtherA-620268335-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:00:22Z,user_data=None,user_id='8d211063ed874837bead2e13898b31d4',uuid=2ca1b1c6-7cd2-42f9-a24b-5c20bb567361,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.653 2 DEBUG nova.network.os_vif_util [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converting VIF {"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.655 2 DEBUG nova.network.os_vif_util [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:b9:d1,bridge_name='br-int',has_traffic_filtering=True,id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcfaf217-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.656 2 DEBUG os_vif [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:b9:d1,bridge_name='br-int',has_traffic_filtering=True,id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcfaf217-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.660 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbcfaf217-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.676 2 INFO os_vif [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:b9:d1,bridge_name='br-int',has_traffic_filtering=True,id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcfaf217-87')
Oct 11 09:00:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:00:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1661328014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.987 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:40 compute-0 nova_compute[260935]: 2025-10-11 09:00:40.995 2 DEBUG nova.compute.provider_tree [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.011 2 DEBUG nova.scheduler.client.report [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.035 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.037 2 DEBUG nova.compute.manager [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.099 2 DEBUG nova.compute.manager [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.100 2 DEBUG nova.network.neutron [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.115 2 INFO nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Deleting instance files /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_del
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.116 2 INFO nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Deletion of /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_del complete
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.131 2 INFO nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.159 2 DEBUG nova.compute.manager [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:00:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:00:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3767028327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.257 2 DEBUG oslo_concurrency.processutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.739s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.259 2 DEBUG oslo_concurrency.processutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.327 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.329 2 INFO nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Creating image(s)
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.363 2 DEBUG nova.storage.rbd_utils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.399 2 DEBUG nova.storage.rbd_utils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.439 2 DEBUG nova.storage.rbd_utils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.448 2 DEBUG oslo_concurrency.processutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.492 2 DEBUG nova.compute.manager [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.495 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.496 2 INFO nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Creating image(s)
Oct 11 09:00:41 compute-0 ceph-mon[74313]: pgmap v1725: 321 pgs: 321 active+clean; 279 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 24 KiB/s wr, 3 op/s
Oct 11 09:00:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1661328014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:00:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3767028327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.539 2 DEBUG nova.storage.rbd_utils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] rbd image 52be16b4-343a-4fd4-9041-39069a1fde2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.573 2 DEBUG nova.storage.rbd_utils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] rbd image 52be16b4-343a-4fd4-9041-39069a1fde2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.607 2 DEBUG nova.storage.rbd_utils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] rbd image 52be16b4-343a-4fd4-9041-39069a1fde2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.612 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.667 2 DEBUG nova.policy [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2e604a7f01ba42f8a2f2a90bf14cafba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0ba95f2514ce4fe4b00f245335eaeb01', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.673 2 DEBUG oslo_concurrency.processutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.675 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.676 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.676 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.712 2 DEBUG nova.storage.rbd_utils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.717 2 DEBUG oslo_concurrency.processutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:00:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2566765516' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.779 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.781 2 DEBUG oslo_concurrency.processutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.781 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.782 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.783 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.817 2 DEBUG nova.storage.rbd_utils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] rbd image 52be16b4-343a-4fd4-9041-39069a1fde2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.822 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 52be16b4-343a-4fd4-9041-39069a1fde2a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:41 compute-0 nova_compute[260935]: 2025-10-11 09:00:41.889 2 DEBUG oslo_concurrency.processutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.071 2 DEBUG oslo_concurrency.processutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1726: 321 pgs: 321 active+clean; 273 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 20 op/s
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.202 2 DEBUG nova.compute.manager [req-38185548-c55c-4f95-a331-ab65dd200afa req-d6d2f913-74af-40cd-942b-a325572e5b73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Received event network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.203 2 DEBUG oslo_concurrency.lockutils [req-38185548-c55c-4f95-a331-ab65dd200afa req-d6d2f913-74af-40cd-942b-a325572e5b73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.204 2 DEBUG oslo_concurrency.lockutils [req-38185548-c55c-4f95-a331-ab65dd200afa req-d6d2f913-74af-40cd-942b-a325572e5b73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.205 2 DEBUG oslo_concurrency.lockutils [req-38185548-c55c-4f95-a331-ab65dd200afa req-d6d2f913-74af-40cd-942b-a325572e5b73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.205 2 DEBUG nova.compute.manager [req-38185548-c55c-4f95-a331-ab65dd200afa req-d6d2f913-74af-40cd-942b-a325572e5b73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] No waiting events found dispatching network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.205 2 WARNING nova.compute.manager [req-38185548-c55c-4f95-a331-ab65dd200afa req-d6d2f913-74af-40cd-942b-a325572e5b73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Received unexpected event network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b for instance with vm_state active and task_state rescuing.
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.213 2 DEBUG nova.storage.rbd_utils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] resizing rbd image 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.255 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 52be16b4-343a-4fd4-9041-39069a1fde2a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.369 2 DEBUG nova.storage.rbd_utils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] resizing rbd image 52be16b4-343a-4fd4-9041-39069a1fde2a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.406 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.407 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Ensure instance console log exists: /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.407 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.408 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.408 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.412 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Start _get_guest_xml network_info=[{"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.416 2 WARNING nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.423 2 DEBUG nova.virt.libvirt.host [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.425 2 DEBUG nova.virt.libvirt.host [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.428 2 DEBUG nova.virt.libvirt.host [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.428 2 DEBUG nova.virt.libvirt.host [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.429 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.429 2 DEBUG nova.virt.hardware [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.430 2 DEBUG nova.virt.hardware [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.430 2 DEBUG nova.virt.hardware [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.430 2 DEBUG nova.virt.hardware [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.431 2 DEBUG nova.virt.hardware [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.431 2 DEBUG nova.virt.hardware [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:00:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.431 2 DEBUG nova.virt.hardware [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.432 2 DEBUG nova.virt.hardware [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.432 2 DEBUG nova.virt.hardware [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.432 2 DEBUG nova.virt.hardware [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.433 2 DEBUG nova.virt.hardware [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.433 2 DEBUG nova.objects.instance [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/270844653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.474 2 DEBUG oslo_concurrency.processutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.510 2 DEBUG oslo_concurrency.processutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.514 2 DEBUG nova.virt.libvirt.vif [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:00:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-217285829',display_name='tempest-ServerRescueTestJSON-server-217285829',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-217285829',id=85,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:00:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-i0umpniu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',
image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-1667208638-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:00:13Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=b75d8ded-515b-48ff-a6b6-28df88878996,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1553544744-network", "vif_mac": "fa:16:3e:ab:9b:26"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.514 2 DEBUG nova.network.os_vif_util [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converting VIF {"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1553544744-network", "vif_mac": "fa:16:3e:ab:9b:26"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.515 2 DEBUG nova.network.os_vif_util [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ab:9b:26,bridge_name='br-int',has_traffic_filtering=True,id=99e74dca-1d94-446c-ac4b-bc16dc028d2b,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99e74dca-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.517 2 DEBUG nova.objects.instance [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lazy-loading 'pci_devices' on Instance uuid b75d8ded-515b-48ff-a6b6-28df88878996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2566765516' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/270844653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.527 2 DEBUG nova.objects.instance [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Lazy-loading 'migration_context' on Instance uuid 52be16b4-343a-4fd4-9041-39069a1fde2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.558 2 DEBUG nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:00:42 compute-0 nova_compute[260935]:   <uuid>b75d8ded-515b-48ff-a6b6-28df88878996</uuid>
Oct 11 09:00:42 compute-0 nova_compute[260935]:   <name>instance-00000055</name>
Oct 11 09:00:42 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:00:42 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:00:42 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerRescueTestJSON-server-217285829</nova:name>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:00:40</nova:creationTime>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:00:42 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:00:42 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:00:42 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:00:42 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:00:42 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:00:42 compute-0 nova_compute[260935]:         <nova:user uuid="df5a3c3a5d68473aa2e2950de45ebce1">tempest-ServerRescueTestJSON-1667208638-project-member</nova:user>
Oct 11 09:00:42 compute-0 nova_compute[260935]:         <nova:project uuid="11b44ad9193e4e43838d52056ccf413e">tempest-ServerRescueTestJSON-1667208638</nova:project>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:00:42 compute-0 nova_compute[260935]:         <nova:port uuid="99e74dca-1d94-446c-ac4b-bc16dc028d2b">
Oct 11 09:00:42 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:00:42 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:00:42 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <system>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <entry name="serial">b75d8ded-515b-48ff-a6b6-28df88878996</entry>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <entry name="uuid">b75d8ded-515b-48ff-a6b6-28df88878996</entry>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     </system>
Oct 11 09:00:42 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:00:42 compute-0 nova_compute[260935]:   <os>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:   </os>
Oct 11 09:00:42 compute-0 nova_compute[260935]:   <features>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:   </features>
Oct 11 09:00:42 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:00:42 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:00:42 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b75d8ded-515b-48ff-a6b6-28df88878996_disk.rescue">
Oct 11 09:00:42 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       </source>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:00:42 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b75d8ded-515b-48ff-a6b6-28df88878996_disk">
Oct 11 09:00:42 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       </source>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:00:42 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <target dev="vdb" bus="virtio"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b75d8ded-515b-48ff-a6b6-28df88878996_disk.config.rescue">
Oct 11 09:00:42 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       </source>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:00:42 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:ab:9b:26"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <target dev="tap99e74dca-1d"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/b75d8ded-515b-48ff-a6b6-28df88878996/console.log" append="off"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <video>
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     </video>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:00:42 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:00:42 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:00:42 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:00:42 compute-0 nova_compute[260935]: </domain>
Oct 11 09:00:42 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.565 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.565 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Ensure instance console log exists: /var/lib/nova/instances/52be16b4-343a-4fd4-9041-39069a1fde2a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.565 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.566 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.566 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.575 2 INFO nova.virt.libvirt.driver [-] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance destroyed successfully.
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.646 2 DEBUG nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.647 2 DEBUG nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.647 2 DEBUG nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.647 2 DEBUG nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] No VIF found with MAC fa:16:3e:ab:9b:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.647 2 INFO nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Using config drive
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.673 2 DEBUG nova.storage.rbd_utils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image b75d8ded-515b-48ff-a6b6-28df88878996_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.707 2 DEBUG nova.objects.instance [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lazy-loading 'ec2_ids' on Instance uuid b75d8ded-515b-48ff-a6b6-28df88878996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:42 compute-0 podman[343381]: 2025-10-11 09:00:42.723732168 +0000 UTC m=+0.105390166 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3)
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.766 2 DEBUG nova.objects.instance [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lazy-loading 'keypairs' on Instance uuid b75d8ded-515b-48ff-a6b6-28df88878996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:00:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446902666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.934 2 DEBUG oslo_concurrency.processutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.969 2 DEBUG nova.storage.rbd_utils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:42 compute-0 nova_compute[260935]: 2025-10-11 09:00:42.975 2 DEBUG oslo_concurrency.processutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:00:43 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/320656599' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.445 2 DEBUG oslo_concurrency.processutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.448 2 DEBUG nova.virt.libvirt.vif [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:59:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1432792747',display_name='tempest-tempest.common.compute-instance-1432792747',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1432792747',id=84,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:00:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d33b48586acf4e6c8254f2a1213b001c',ramdisk_id='',reservation_id='r-dj88erzq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-620268335',owner_user_name='tempest-Server
ActionsTestOtherA-620268335-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:00:41Z,user_data=None,user_id='8d211063ed874837bead2e13898b31d4',uuid=2ca1b1c6-7cd2-42f9-a24b-5c20bb567361,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.448 2 DEBUG nova.network.os_vif_util [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converting VIF {"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.450 2 DEBUG nova.network.os_vif_util [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:b9:d1,bridge_name='br-int',has_traffic_filtering=True,id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcfaf217-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.455 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:00:43 compute-0 nova_compute[260935]:   <uuid>2ca1b1c6-7cd2-42f9-a24b-5c20bb567361</uuid>
Oct 11 09:00:43 compute-0 nova_compute[260935]:   <name>instance-00000054</name>
Oct 11 09:00:43 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:00:43 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:00:43 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <nova:name>tempest-tempest.common.compute-instance-1432792747</nova:name>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:00:42</nova:creationTime>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:00:43 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:00:43 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:00:43 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:00:43 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:00:43 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:00:43 compute-0 nova_compute[260935]:         <nova:user uuid="8d211063ed874837bead2e13898b31d4">tempest-ServerActionsTestOtherA-620268335-project-member</nova:user>
Oct 11 09:00:43 compute-0 nova_compute[260935]:         <nova:project uuid="d33b48586acf4e6c8254f2a1213b001c">tempest-ServerActionsTestOtherA-620268335</nova:project>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:00:43 compute-0 nova_compute[260935]:         <nova:port uuid="bcfaf217-8703-4c1e-bf80-d24ab0e642bd">
Oct 11 09:00:43 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:00:43 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:00:43 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <system>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <entry name="serial">2ca1b1c6-7cd2-42f9-a24b-5c20bb567361</entry>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <entry name="uuid">2ca1b1c6-7cd2-42f9-a24b-5c20bb567361</entry>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     </system>
Oct 11 09:00:43 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:00:43 compute-0 nova_compute[260935]:   <os>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:   </os>
Oct 11 09:00:43 compute-0 nova_compute[260935]:   <features>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:   </features>
Oct 11 09:00:43 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:00:43 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:00:43 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk">
Oct 11 09:00:43 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       </source>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:00:43 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk.config">
Oct 11 09:00:43 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       </source>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:00:43 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:16:b9:d1"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <target dev="tapbcfaf217-87"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361/console.log" append="off"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <video>
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     </video>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:00:43 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:00:43 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:00:43 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:00:43 compute-0 nova_compute[260935]: </domain>
Oct 11 09:00:43 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.456 2 DEBUG nova.compute.manager [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Preparing to wait for external event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.457 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.457 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.458 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.459 2 DEBUG nova.virt.libvirt.vif [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:59:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1432792747',display_name='tempest-tempest.common.compute-instance-1432792747',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1432792747',id=84,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:00:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d33b48586acf4e6c8254f2a1213b001c',ramdisk_id='',reservation_id='r-dj88erzq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-620268335',owner_user_name='tempest-Server
ActionsTestOtherA-620268335-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:00:41Z,user_data=None,user_id='8d211063ed874837bead2e13898b31d4',uuid=2ca1b1c6-7cd2-42f9-a24b-5c20bb567361,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.459 2 DEBUG nova.network.os_vif_util [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converting VIF {"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.460 2 DEBUG nova.network.os_vif_util [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:b9:d1,bridge_name='br-int',has_traffic_filtering=True,id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcfaf217-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.461 2 DEBUG os_vif [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:b9:d1,bridge_name='br-int',has_traffic_filtering=True,id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcfaf217-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.463 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.464 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.468 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbcfaf217-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.469 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbcfaf217-87, col_values=(('external_ids', {'iface-id': 'bcfaf217-8703-4c1e-bf80-d24ab0e642bd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:b9:d1', 'vm-uuid': '2ca1b1c6-7cd2-42f9-a24b-5c20bb567361'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:43 compute-0 NetworkManager[44960]: <info>  [1760173243.4733] manager: (tapbcfaf217-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.485 2 INFO os_vif [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:b9:d1,bridge_name='br-int',has_traffic_filtering=True,id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcfaf217-87')
Oct 11 09:00:43 compute-0 ceph-mon[74313]: pgmap v1726: 321 pgs: 321 active+clean; 273 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 20 op/s
Oct 11 09:00:43 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1446902666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:43 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/320656599' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.569 2 DEBUG nova.network.neutron [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Successfully created port: c992d6e3-ef59-42a0-80c5-109fe0c056cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.577 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.578 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.578 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] No VIF found with MAC fa:16:3e:16:b9:d1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.579 2 INFO nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Using config drive
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.612 2 DEBUG nova.storage.rbd_utils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.623 2 INFO nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Creating config drive at /var/lib/nova/instances/b75d8ded-515b-48ff-a6b6-28df88878996/disk.config.rescue
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.633 2 DEBUG oslo_concurrency.processutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b75d8ded-515b-48ff-a6b6-28df88878996/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptkmmi17g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.707 2 DEBUG nova.objects.instance [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lazy-loading 'ec2_ids' on Instance uuid 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.746 2 DEBUG nova.objects.instance [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lazy-loading 'keypairs' on Instance uuid 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.798 2 DEBUG oslo_concurrency.processutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b75d8ded-515b-48ff-a6b6-28df88878996/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptkmmi17g" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.838 2 DEBUG nova.storage.rbd_utils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image b75d8ded-515b-48ff-a6b6-28df88878996_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:43 compute-0 nova_compute[260935]: 2025-10-11 09:00:43.842 2 DEBUG oslo_concurrency.processutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b75d8ded-515b-48ff-a6b6-28df88878996/disk.config.rescue b75d8ded-515b-48ff-a6b6-28df88878996_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.046 2 DEBUG oslo_concurrency.processutils [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b75d8ded-515b-48ff-a6b6-28df88878996/disk.config.rescue b75d8ded-515b-48ff-a6b6-28df88878996_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.048 2 INFO nova.virt.libvirt.driver [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Deleting local config drive /var/lib/nova/instances/b75d8ded-515b-48ff-a6b6-28df88878996/disk.config.rescue because it was imported into RBD.
Oct 11 09:00:44 compute-0 kernel: tap99e74dca-1d: entered promiscuous mode
Oct 11 09:00:44 compute-0 NetworkManager[44960]: <info>  [1760173244.1282] manager: (tap99e74dca-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/336)
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:44 compute-0 ovn_controller[152945]: 2025-10-11T09:00:44Z|00772|binding|INFO|Claiming lport 99e74dca-1d94-446c-ac4b-bc16dc028d2b for this chassis.
Oct 11 09:00:44 compute-0 ovn_controller[152945]: 2025-10-11T09:00:44Z|00773|binding|INFO|99e74dca-1d94-446c-ac4b-bc16dc028d2b: Claiming fa:16:3e:ab:9b:26 10.100.0.3
Oct 11 09:00:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:44.145 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:9b:26 10.100.0.3'], port_security=['fa:16:3e:ab:9b:26 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b75d8ded-515b-48ff-a6b6-28df88878996', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4686205-cbf0-4221-bc49-ebb890c4a59f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '11b44ad9193e4e43838d52056ccf413e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4b0cfb76-aebd-4cfc-96f5-00bacc345d65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9601b6e8-d9bc-46ca-99e8-33ec15f713e5, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=99e74dca-1d94-446c-ac4b-bc16dc028d2b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:00:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:44.147 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 99e74dca-1d94-446c-ac4b-bc16dc028d2b in datapath e4686205-cbf0-4221-bc49-ebb890c4a59f bound to our chassis
Oct 11 09:00:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:44.149 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e4686205-cbf0-4221-bc49-ebb890c4a59f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 11 09:00:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:44.150 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f736b308-40f7-4cc0-a03d-be5472a6d3eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1727: 321 pgs: 321 active+clean; 315 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 4.5 MiB/s wr, 83 op/s
Oct 11 09:00:44 compute-0 systemd-udevd[343553]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:00:44 compute-0 systemd-machined[215705]: New machine qemu-96-instance-00000055.
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:44 compute-0 ovn_controller[152945]: 2025-10-11T09:00:44Z|00774|binding|INFO|Setting lport 99e74dca-1d94-446c-ac4b-bc16dc028d2b ovn-installed in OVS
Oct 11 09:00:44 compute-0 ovn_controller[152945]: 2025-10-11T09:00:44Z|00775|binding|INFO|Setting lport 99e74dca-1d94-446c-ac4b-bc16dc028d2b up in Southbound
Oct 11 09:00:44 compute-0 NetworkManager[44960]: <info>  [1760173244.1927] device (tap99e74dca-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:00:44 compute-0 systemd[1]: Started Virtual Machine qemu-96-instance-00000055.
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:44 compute-0 NetworkManager[44960]: <info>  [1760173244.1973] device (tap99e74dca-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.415 2 INFO nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Creating config drive at /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361/disk.config
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.424 2 DEBUG oslo_concurrency.processutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa0dl5fg2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.600 2 DEBUG oslo_concurrency.processutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa0dl5fg2" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.641 2 DEBUG nova.storage.rbd_utils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] rbd image 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.646 2 DEBUG oslo_concurrency.processutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361/disk.config 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.701 2 DEBUG nova.compute.manager [req-aaa00bee-17d7-4598-9f2b-119ff101def2 req-09edfea5-d868-470e-a07f-e28babd836b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Received event network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.702 2 DEBUG oslo_concurrency.lockutils [req-aaa00bee-17d7-4598-9f2b-119ff101def2 req-09edfea5-d868-470e-a07f-e28babd836b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.703 2 DEBUG oslo_concurrency.lockutils [req-aaa00bee-17d7-4598-9f2b-119ff101def2 req-09edfea5-d868-470e-a07f-e28babd836b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.703 2 DEBUG oslo_concurrency.lockutils [req-aaa00bee-17d7-4598-9f2b-119ff101def2 req-09edfea5-d868-470e-a07f-e28babd836b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.703 2 DEBUG nova.compute.manager [req-aaa00bee-17d7-4598-9f2b-119ff101def2 req-09edfea5-d868-470e-a07f-e28babd836b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] No waiting events found dispatching network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.704 2 WARNING nova.compute.manager [req-aaa00bee-17d7-4598-9f2b-119ff101def2 req-09edfea5-d868-470e-a07f-e28babd836b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Received unexpected event network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b for instance with vm_state active and task_state rescuing.
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.707 2 DEBUG nova.network.neutron [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Successfully updated port: c992d6e3-ef59-42a0-80c5-109fe0c056cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.738 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.738 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.739 2 DEBUG nova.network.neutron [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.841 2 DEBUG oslo_concurrency.processutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361/disk.config 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.842 2 INFO nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Deleting local config drive /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361/disk.config because it was imported into RBD.
Oct 11 09:00:44 compute-0 kernel: tapbcfaf217-87: entered promiscuous mode
Oct 11 09:00:44 compute-0 NetworkManager[44960]: <info>  [1760173244.8945] manager: (tapbcfaf217-87): new Tun device (/org/freedesktop/NetworkManager/Devices/337)
Oct 11 09:00:44 compute-0 systemd-udevd[343557]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:00:44 compute-0 NetworkManager[44960]: <info>  [1760173244.9124] device (tapbcfaf217-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:00:44 compute-0 NetworkManager[44960]: <info>  [1760173244.9137] device (tapbcfaf217-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.930 2 DEBUG nova.network.neutron [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:44 compute-0 ovn_controller[152945]: 2025-10-11T09:00:44Z|00776|binding|INFO|Claiming lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd for this chassis.
Oct 11 09:00:44 compute-0 ovn_controller[152945]: 2025-10-11T09:00:44Z|00777|binding|INFO|bcfaf217-8703-4c1e-bf80-d24ab0e642bd: Claiming fa:16:3e:16:b9:d1 10.100.0.11
Oct 11 09:00:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:44.970 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:b9:d1 10.100.0.11'], port_security=['fa:16:3e:16:b9:d1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2ca1b1c6-7cd2-42f9-a24b-5c20bb567361', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd33b48586acf4e6c8254f2a1213b001c', 'neutron:revision_number': '7', 'neutron:security_group_ids': '7c2dc1cf-8ac0-4645-86fa-d32df3cf1552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68f3e6c4-f574-4830-9133-912bb9cd6132, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=bcfaf217-8703-4c1e-bf80-d24ab0e642bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:00:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:44.971 162815 INFO neutron.agent.ovn.metadata.agent [-] Port bcfaf217-8703-4c1e-bf80-d24ab0e642bd in datapath 164a664d-5e52-48b9-8b00-f73d0851a4cc bound to our chassis
Oct 11 09:00:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:44.973 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 164a664d-5e52-48b9-8b00-f73d0851a4cc
Oct 11 09:00:44 compute-0 systemd-machined[215705]: New machine qemu-97-instance-00000054.
Oct 11 09:00:44 compute-0 systemd[1]: Started Virtual Machine qemu-97-instance-00000054.
Oct 11 09:00:44 compute-0 ovn_controller[152945]: 2025-10-11T09:00:44Z|00778|binding|INFO|Setting lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd ovn-installed in OVS
Oct 11 09:00:44 compute-0 ovn_controller[152945]: 2025-10-11T09:00:44Z|00779|binding|INFO|Setting lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd up in Southbound
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:44 compute-0 nova_compute[260935]: 2025-10-11 09:00:44.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:44.994 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[887c9808-4298-41bd-b562-064278badeca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.026 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[39b4ef0a-8e50-44ee-bc56-d130da2bca5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.029 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c2e310-382f-4722-a514-2911be86046b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.068 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[47928288-143e-4028-a065-effe0f1dcc6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.087 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[06f6008b-5962-4be3-9ad2-813ed6e1945e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap164a664d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:a0:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500906, 'reachable_time': 17587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343688, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.105 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[65a5c6ee-7b42-4979-a148-0e8ccba449f3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap164a664d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500919, 'tstamp': 500919}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343689, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap164a664d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500922, 'tstamp': 500922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343689, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.107 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap164a664d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:45 compute-0 nova_compute[260935]: 2025-10-11 09:00:45.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:45 compute-0 nova_compute[260935]: 2025-10-11 09:00:45.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.110 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap164a664d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.110 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:00:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.110 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap164a664d-50, col_values=(('external_ids', {'iface-id': 'e23cd806-8523-4e59-ba27-db15cee52548'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:45.111 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:00:45 compute-0 rsyslogd[1003]: imjournal: 13554 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 11 09:00:45 compute-0 ceph-mon[74313]: pgmap v1727: 321 pgs: 321 active+clean; 315 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 4.5 MiB/s wr, 83 op/s
Oct 11 09:00:45 compute-0 nova_compute[260935]: 2025-10-11 09:00:45.556 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for b75d8ded-515b-48ff-a6b6-28df88878996 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 09:00:45 compute-0 nova_compute[260935]: 2025-10-11 09:00:45.556 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173245.555472, b75d8ded-515b-48ff-a6b6-28df88878996 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:45 compute-0 nova_compute[260935]: 2025-10-11 09:00:45.557 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] VM Resumed (Lifecycle Event)
Oct 11 09:00:45 compute-0 nova_compute[260935]: 2025-10-11 09:00:45.569 2 DEBUG nova.compute.manager [None req-794a5152-5f55-4f4d-9857-6ffb9281654c df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:45 compute-0 nova_compute[260935]: 2025-10-11 09:00:45.585 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:45 compute-0 nova_compute[260935]: 2025-10-11 09:00:45.590 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:00:45 compute-0 podman[343692]: 2025-10-11 09:00:45.770088983 +0000 UTC m=+0.076198324 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 09:00:45 compute-0 podman[343693]: 2025-10-11 09:00:45.808584941 +0000 UTC m=+0.114702752 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:00:45 compute-0 nova_compute[260935]: 2025-10-11 09:00:45.884 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 11 09:00:45 compute-0 nova_compute[260935]: 2025-10-11 09:00:45.886 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173245.5641122, b75d8ded-515b-48ff-a6b6-28df88878996 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:45 compute-0 nova_compute[260935]: 2025-10-11 09:00:45.886 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] VM Started (Lifecycle Event)
Oct 11 09:00:45 compute-0 nova_compute[260935]: 2025-10-11 09:00:45.912 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:45 compute-0 nova_compute[260935]: 2025-10-11 09:00:45.922 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:00:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 315 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 4.4 MiB/s wr, 81 op/s
Oct 11 09:00:46 compute-0 nova_compute[260935]: 2025-10-11 09:00:46.427 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173246.4262223, 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:46 compute-0 nova_compute[260935]: 2025-10-11 09:00:46.427 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] VM Started (Lifecycle Event)
Oct 11 09:00:46 compute-0 nova_compute[260935]: 2025-10-11 09:00:46.461 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:46 compute-0 nova_compute[260935]: 2025-10-11 09:00:46.468 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173246.4264927, 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:46 compute-0 nova_compute[260935]: 2025-10-11 09:00:46.468 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] VM Paused (Lifecycle Event)
Oct 11 09:00:46 compute-0 nova_compute[260935]: 2025-10-11 09:00:46.493 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:46 compute-0 nova_compute[260935]: 2025-10-11 09:00:46.499 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:00:46 compute-0 nova_compute[260935]: 2025-10-11 09:00:46.524 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 09:00:46 compute-0 nova_compute[260935]: 2025-10-11 09:00:46.820 2 DEBUG nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Received event network-changed-c992d6e3-ef59-42a0-80c5-109fe0c056cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:46 compute-0 nova_compute[260935]: 2025-10-11 09:00:46.821 2 DEBUG nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Refreshing instance network info cache due to event network-changed-c992d6e3-ef59-42a0-80c5-109fe0c056cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:00:46 compute-0 nova_compute[260935]: 2025-10-11 09:00:46.822 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:00:46 compute-0 sshd-session[343690]: Invalid user chaos from 155.4.244.179 port 61060
Oct 11 09:00:46 compute-0 sshd-session[343690]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:00:46 compute-0 sshd-session[343690]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.223 2 DEBUG nova.network.neutron [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.254 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.255 2 DEBUG nova.compute.manager [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Instance network_info: |[{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.255 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.256 2 DEBUG nova.network.neutron [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Refreshing network info cache for port c992d6e3-ef59-42a0-80c5-109fe0c056cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.261 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Start _get_guest_xml network_info=[{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.267 2 WARNING nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.273 2 DEBUG nova.virt.libvirt.host [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.274 2 DEBUG nova.virt.libvirt.host [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.282 2 DEBUG nova.virt.libvirt.host [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.283 2 DEBUG nova.virt.libvirt.host [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.284 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.285 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.286 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.286 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.287 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.287 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.288 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.288 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.289 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.289 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.290 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.290 2 DEBUG nova.virt.hardware [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.296 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:47 compute-0 ceph-mon[74313]: pgmap v1728: 321 pgs: 321 active+clean; 315 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 4.4 MiB/s wr, 81 op/s
Oct 11 09:00:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:00:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1540332784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.830 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.865 2 DEBUG nova.storage.rbd_utils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] rbd image 52be16b4-343a-4fd4-9041-39069a1fde2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:47 compute-0 nova_compute[260935]: 2025-10-11 09:00:47.870 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1729: 321 pgs: 321 active+clean; 339 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 183 op/s
Oct 11 09:00:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:00:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2669296274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.298 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.300 2 DEBUG nova.virt.libvirt.vif [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:00:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1975294956',display_name='tempest-ServerActionsTestJSON-server-1975294956',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1975294956',id=86,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCldfXJbbZ7d23yZCI3wgMskc62RJ3W+h+Bujyoq+l99HIouQoz2ogsrxnyNOy7JQrwu2S23uZGGnM/6kJAmk9ewWoiaLMeddrGku0Zod7LFIlcm/esb5hA9IKL9pBW3cA==',key_name='tempest-keypair-1522598924',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0ba95f2514ce4fe4b00f245335eaeb01',ramdisk_id='',reservation_id='r-4yee06pj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-332398676',owner_user_name='tempest-ServerActionsTestJSON-332398676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:00:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2e604a7f01ba42f8a2f2a90bf14cafba',uuid=52be16b4-343a-4fd4-9041-39069a1fde2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.300 2 DEBUG nova.network.os_vif_util [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Converting VIF {"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.301 2 DEBUG nova.network.os_vif_util [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:b5:ce,bridge_name='br-int',has_traffic_filtering=True,id=c992d6e3-ef59-42a0-80c5-109fe0c056cd,network=Network(7c40ad6c-6e2c-4d8e-a70f-72c8786fa745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc992d6e3-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.302 2 DEBUG nova.objects.instance [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Lazy-loading 'pci_devices' on Instance uuid 52be16b4-343a-4fd4-9041-39069a1fde2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.320 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:00:48 compute-0 nova_compute[260935]:   <uuid>52be16b4-343a-4fd4-9041-39069a1fde2a</uuid>
Oct 11 09:00:48 compute-0 nova_compute[260935]:   <name>instance-00000056</name>
Oct 11 09:00:48 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:00:48 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:00:48 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerActionsTestJSON-server-1975294956</nova:name>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:00:47</nova:creationTime>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:00:48 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:00:48 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:00:48 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:00:48 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:00:48 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:00:48 compute-0 nova_compute[260935]:         <nova:user uuid="2e604a7f01ba42f8a2f2a90bf14cafba">tempest-ServerActionsTestJSON-332398676-project-member</nova:user>
Oct 11 09:00:48 compute-0 nova_compute[260935]:         <nova:project uuid="0ba95f2514ce4fe4b00f245335eaeb01">tempest-ServerActionsTestJSON-332398676</nova:project>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:00:48 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:00:48 compute-0 nova_compute[260935]:         <nova:port uuid="c992d6e3-ef59-42a0-80c5-109fe0c056cd">
Oct 11 09:00:48 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:00:48 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:00:48 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <system>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <entry name="serial">52be16b4-343a-4fd4-9041-39069a1fde2a</entry>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <entry name="uuid">52be16b4-343a-4fd4-9041-39069a1fde2a</entry>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     </system>
Oct 11 09:00:48 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:00:48 compute-0 nova_compute[260935]:   <os>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:   </os>
Oct 11 09:00:48 compute-0 nova_compute[260935]:   <features>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:   </features>
Oct 11 09:00:48 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:00:48 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:00:48 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/52be16b4-343a-4fd4-9041-39069a1fde2a_disk">
Oct 11 09:00:48 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       </source>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:00:48 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/52be16b4-343a-4fd4-9041-39069a1fde2a_disk.config">
Oct 11 09:00:48 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       </source>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:00:48 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:d3:b5:ce"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <target dev="tapc992d6e3-ef"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/52be16b4-343a-4fd4-9041-39069a1fde2a/console.log" append="off"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <video>
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     </video>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:00:48 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:00:48 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:00:48 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:00:48 compute-0 nova_compute[260935]: </domain>
Oct 11 09:00:48 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.330 2 DEBUG nova.compute.manager [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Preparing to wait for external event network-vif-plugged-c992d6e3-ef59-42a0-80c5-109fe0c056cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.330 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.330 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.330 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.331 2 DEBUG nova.virt.libvirt.vif [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:00:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1975294956',display_name='tempest-ServerActionsTestJSON-server-1975294956',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1975294956',id=86,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCldfXJbbZ7d23yZCI3wgMskc62RJ3W+h+Bujyoq+l99HIouQoz2ogsrxnyNOy7JQrwu2S23uZGGnM/6kJAmk9ewWoiaLMeddrGku0Zod7LFIlcm/esb5hA9IKL9pBW3cA==',key_name='tempest-keypair-1522598924',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0ba95f2514ce4fe4b00f245335eaeb01',ramdisk_id='',reservation_id='r-4yee06pj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-332398676',owner_user_name='tempest-ServerActionsTestJSON-332398676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:00:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2e604a7f01ba42f8a2f2a90bf14cafba',uuid=52be16b4-343a-4fd4-9041-39069a1fde2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.331 2 DEBUG nova.network.os_vif_util [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Converting VIF {"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.332 2 DEBUG nova.network.os_vif_util [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:b5:ce,bridge_name='br-int',has_traffic_filtering=True,id=c992d6e3-ef59-42a0-80c5-109fe0c056cd,network=Network(7c40ad6c-6e2c-4d8e-a70f-72c8786fa745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc992d6e3-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.332 2 DEBUG os_vif [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:b5:ce,bridge_name='br-int',has_traffic_filtering=True,id=c992d6e3-ef59-42a0-80c5-109fe0c056cd,network=Network(7c40ad6c-6e2c-4d8e-a70f-72c8786fa745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc992d6e3-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.333 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.333 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.338 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc992d6e3-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.338 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc992d6e3-ef, col_values=(('external_ids', {'iface-id': 'c992d6e3-ef59-42a0-80c5-109fe0c056cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:b5:ce', 'vm-uuid': '52be16b4-343a-4fd4-9041-39069a1fde2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:48 compute-0 NetworkManager[44960]: <info>  [1760173248.3409] manager: (tapc992d6e3-ef): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.352 2 INFO os_vif [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:b5:ce,bridge_name='br-int',has_traffic_filtering=True,id=c992d6e3-ef59-42a0-80c5-109fe0c056cd,network=Network(7c40ad6c-6e2c-4d8e-a70f-72c8786fa745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc992d6e3-ef')
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.406 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.407 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.407 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] No VIF found with MAC fa:16:3e:d3:b5:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.408 2 INFO nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Using config drive
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.438 2 DEBUG nova.storage.rbd_utils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] rbd image 52be16b4-343a-4fd4-9041-39069a1fde2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1540332784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2669296274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:00:48 compute-0 sshd-session[343690]: Failed password for invalid user chaos from 155.4.244.179 port 61060 ssh2
Oct 11 09:00:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:00:48 compute-0 nova_compute[260935]: 2025-10-11 09:00:48.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.156 2 INFO nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Creating config drive at /var/lib/nova/instances/52be16b4-343a-4fd4-9041-39069a1fde2a/disk.config
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.165 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52be16b4-343a-4fd4-9041-39069a1fde2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkcischal execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.331 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52be16b4-343a-4fd4-9041-39069a1fde2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkcischal" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.369 2 DEBUG nova.storage.rbd_utils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] rbd image 52be16b4-343a-4fd4-9041-39069a1fde2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.375 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52be16b4-343a-4fd4-9041-39069a1fde2a/disk.config 52be16b4-343a-4fd4-9041-39069a1fde2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.438 2 DEBUG nova.network.neutron [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated VIF entry in instance network info cache for port c992d6e3-ef59-42a0-80c5-109fe0c056cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.439 2 DEBUG nova.network.neutron [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.457 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.458 2 DEBUG nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Received event network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.458 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.459 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.459 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b75d8ded-515b-48ff-a6b6-28df88878996-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.460 2 DEBUG nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] No waiting events found dispatching network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.460 2 WARNING nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Received unexpected event network-vif-plugged-99e74dca-1d94-446c-ac4b-bc16dc028d2b for instance with vm_state rescued and task_state None.
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.461 2 DEBUG nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.461 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.462 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.462 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.462 2 DEBUG nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Processing event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.463 2 DEBUG nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.463 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.464 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.464 2 DEBUG oslo_concurrency.lockutils [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.465 2 DEBUG nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] No waiting events found dispatching network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.465 2 WARNING nova.compute.manager [req-aa14fb0e-377e-45a3-a06f-c7b8d6a9cffd req-75cc260b-05c3-4438-8cfd-0aa40c8f120a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received unexpected event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd for instance with vm_state stopped and task_state rebuild_spawning.
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.466 2 DEBUG nova.compute.manager [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.470 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173249.4698424, 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.470 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] VM Resumed (Lifecycle Event)
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.473 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.477 2 INFO nova.virt.libvirt.driver [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance spawned successfully.
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.477 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.496 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.508 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.513 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.514 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.514 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.515 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.515 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.516 2 DEBUG nova.virt.libvirt.driver [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.543 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 09:00:49 compute-0 ceph-mon[74313]: pgmap v1729: 321 pgs: 321 active+clean; 339 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 183 op/s
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.582 2 DEBUG nova.compute.manager [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.603 2 DEBUG oslo_concurrency.processutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52be16b4-343a-4fd4-9041-39069a1fde2a/disk.config 52be16b4-343a-4fd4-9041-39069a1fde2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.604 2 INFO nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Deleting local config drive /var/lib/nova/instances/52be16b4-343a-4fd4-9041-39069a1fde2a/disk.config because it was imported into RBD.
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.646 2 INFO nova.compute.manager [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] bringing vm to original state: 'stopped'
Oct 11 09:00:49 compute-0 NetworkManager[44960]: <info>  [1760173249.6757] manager: (tapc992d6e3-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/339)
Oct 11 09:00:49 compute-0 kernel: tapc992d6e3-ef: entered promiscuous mode
Oct 11 09:00:49 compute-0 ovn_controller[152945]: 2025-10-11T09:00:49Z|00780|binding|INFO|Claiming lport c992d6e3-ef59-42a0-80c5-109fe0c056cd for this chassis.
Oct 11 09:00:49 compute-0 ovn_controller[152945]: 2025-10-11T09:00:49Z|00781|binding|INFO|c992d6e3-ef59-42a0-80c5-109fe0c056cd: Claiming fa:16:3e:d3:b5:ce 10.100.0.14
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.689 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:b5:ce 10.100.0.14'], port_security=['fa:16:3e:d3:b5:ce 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '52be16b4-343a-4fd4-9041-39069a1fde2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ba95f2514ce4fe4b00f245335eaeb01', 'neutron:revision_number': '2', 'neutron:security_group_ids': '462a25ad-d94b-4af1-ba28-eaa1a993c459', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22e9d786-1ab0-4026-8b17-f42f91b9280f, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c992d6e3-ef59-42a0-80c5-109fe0c056cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.691 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c992d6e3-ef59-42a0-80c5-109fe0c056cd in datapath 7c40ad6c-6e2c-4d8e-a70f-72c8786fa745 bound to our chassis
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.695 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c40ad6c-6e2c-4d8e-a70f-72c8786fa745
Oct 11 09:00:49 compute-0 systemd-udevd[343915]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.711 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f08efd3-b75f-4c50-9fb1-64770a948c7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.714 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c40ad6c-61 in ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.716 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c40ad6c-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.716 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[73479237-8be1-41fc-87bc-bdedcfaafb87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.717 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[50a5d27e-b93e-4bf9-b82f-50335d15d82e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:49 compute-0 NetworkManager[44960]: <info>  [1760173249.7278] device (tapc992d6e3-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:00:49 compute-0 NetworkManager[44960]: <info>  [1760173249.7288] device (tapc992d6e3-ef): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.730 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.731 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.731 2 DEBUG nova.compute.manager [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:49 compute-0 ovn_controller[152945]: 2025-10-11T09:00:49Z|00782|binding|INFO|Setting lport c992d6e3-ef59-42a0-80c5-109fe0c056cd ovn-installed in OVS
Oct 11 09:00:49 compute-0 ovn_controller[152945]: 2025-10-11T09:00:49Z|00783|binding|INFO|Setting lport c992d6e3-ef59-42a0-80c5-109fe0c056cd up in Southbound
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:49 compute-0 systemd-machined[215705]: New machine qemu-98-instance-00000056.
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.739 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[83206be6-1440-465a-838a-d134e0673d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.748 2 DEBUG nova.compute.manager [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 11 09:00:49 compute-0 systemd[1]: Started Virtual Machine qemu-98-instance-00000056.
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.772 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bac42852-7c34-46fb-8526-ef9aba0e8fc9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:49 compute-0 kernel: tapbcfaf217-87 (unregistering): left promiscuous mode
Oct 11 09:00:49 compute-0 NetworkManager[44960]: <info>  [1760173249.8056] device (tapbcfaf217-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:49 compute-0 ovn_controller[152945]: 2025-10-11T09:00:49Z|00784|binding|INFO|Releasing lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd from this chassis (sb_readonly=0)
Oct 11 09:00:49 compute-0 ovn_controller[152945]: 2025-10-11T09:00:49Z|00785|binding|INFO|Setting lport bcfaf217-8703-4c1e-bf80-d24ab0e642bd down in Southbound
Oct 11 09:00:49 compute-0 ovn_controller[152945]: 2025-10-11T09:00:49Z|00786|binding|INFO|Removing iface tapbcfaf217-87 ovn-installed in OVS
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.822 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb89c06-d6d0-4905-b82c-8e70ae47c809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.833 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:b9:d1 10.100.0.11'], port_security=['fa:16:3e:16:b9:d1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2ca1b1c6-7cd2-42f9-a24b-5c20bb567361', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd33b48586acf4e6c8254f2a1213b001c', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7c2dc1cf-8ac0-4645-86fa-d32df3cf1552', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68f3e6c4-f574-4830-9133-912bb9cd6132, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=bcfaf217-8703-4c1e-bf80-d24ab0e642bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:00:49 compute-0 systemd-udevd[343918]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:00:49 compute-0 NetworkManager[44960]: <info>  [1760173249.8369] manager: (tap7c40ad6c-60): new Veth device (/org/freedesktop/NetworkManager/Devices/340)
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.834 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a453d9-6e04-4b47-ad61-78371dd71fdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:49 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000054.scope: Deactivated successfully.
Oct 11 09:00:49 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000054.scope: Consumed 1.503s CPU time.
Oct 11 09:00:49 compute-0 systemd-machined[215705]: Machine qemu-97-instance-00000054 terminated.
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.874 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5e6e4b-d86f-4060-89b1-fefe53a6e573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.877 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d239be38-9b5b-4f5e-a657-a85581e8c84e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:49 compute-0 NetworkManager[44960]: <info>  [1760173249.9064] device (tap7c40ad6c-60): carrier: link connected
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.916 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0e6b71-bb93-4191-b417-8bee1db24493]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.935 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f42b8b33-f4db-4d8d-b053-368d2d68d0aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c40ad6c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:9b:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509460, 'reachable_time': 18721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343954, 'error': None, 'target': 'ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.953 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fecc542e-a656-4903-8bbd-57b449b63fee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5b:9b0f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 509460, 'tstamp': 509460}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343955, 'error': None, 'target': 'ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:49.970 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[60f2e2ff-c249-43e5-81d6-3f0ff3544754]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c40ad6c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:9b:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509460, 'reachable_time': 18721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343956, 'error': None, 'target': 'ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:49 compute-0 NetworkManager[44960]: <info>  [1760173249.9770] manager: (tapbcfaf217-87): new Tun device (/org/freedesktop/NetworkManager/Devices/341)
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.995 2 INFO nova.virt.libvirt.driver [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance destroyed successfully.
Oct 11 09:00:49 compute-0 nova_compute[260935]: 2025-10-11 09:00:49.995 2 DEBUG nova.compute.manager [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.006 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d70239ba-5c62-451c-981d-2faff33919f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.084 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[abc87846-175a-4f3f-a4f6-b36d0ad58192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.086 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c40ad6c-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.087 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.087 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c40ad6c-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:50 compute-0 NetworkManager[44960]: <info>  [1760173250.0906] manager: (tap7c40ad6c-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Oct 11 09:00:50 compute-0 nova_compute[260935]: 2025-10-11 09:00:50.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:50 compute-0 kernel: tap7c40ad6c-60: entered promiscuous mode
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.099 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c40ad6c-60, col_values=(('external_ids', {'iface-id': 'f45dd889-4db0-488b-b6d3-356bc191844e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:50 compute-0 ovn_controller[152945]: 2025-10-11T09:00:50Z|00787|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:00:50 compute-0 nova_compute[260935]: 2025-10-11 09:00:50.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:50 compute-0 nova_compute[260935]: 2025-10-11 09:00:50.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:50 compute-0 nova_compute[260935]: 2025-10-11 09:00:50.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.135 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c40ad6c-6e2c-4d8e-a70f-72c8786fa745.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c40ad6c-6e2c-4d8e-a70f-72c8786fa745.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.136 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b3beb447-0ef8-461c-b428-f1ca47ec9835]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.137 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/7c40ad6c-6e2c-4d8e-a70f-72c8786fa745.pid.haproxy
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 7c40ad6c-6e2c-4d8e-a70f-72c8786fa745
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.139 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745', 'env', 'PROCESS_TAG=haproxy-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c40ad6c-6e2c-4d8e-a70f-72c8786fa745.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:00:50 compute-0 sshd-session[343690]: Received disconnect from 155.4.244.179 port 61060:11: Bye Bye [preauth]
Oct 11 09:00:50 compute-0 sshd-session[343690]: Disconnected from invalid user chaos 155.4.244.179 port 61060 [preauth]
Oct 11 09:00:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 339 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 183 op/s
Oct 11 09:00:50 compute-0 nova_compute[260935]: 2025-10-11 09:00:50.265 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:50 compute-0 nova_compute[260935]: 2025-10-11 09:00:50.318 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:50 compute-0 nova_compute[260935]: 2025-10-11 09:00:50.318 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:50 compute-0 nova_compute[260935]: 2025-10-11 09:00:50.319 2 DEBUG nova.objects.instance [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 09:00:50 compute-0 nova_compute[260935]: 2025-10-11 09:00:50.415 2 DEBUG nova.compute.manager [req-ed853ab6-1dea-4840-bdb4-6fd91cb3585d req-6c7922f9-8668-443b-9e8f-b7210e5a7d91 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Received event network-vif-plugged-c992d6e3-ef59-42a0-80c5-109fe0c056cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:50 compute-0 nova_compute[260935]: 2025-10-11 09:00:50.416 2 DEBUG oslo_concurrency.lockutils [req-ed853ab6-1dea-4840-bdb4-6fd91cb3585d req-6c7922f9-8668-443b-9e8f-b7210e5a7d91 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:50 compute-0 nova_compute[260935]: 2025-10-11 09:00:50.416 2 DEBUG oslo_concurrency.lockutils [req-ed853ab6-1dea-4840-bdb4-6fd91cb3585d req-6c7922f9-8668-443b-9e8f-b7210e5a7d91 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:50 compute-0 nova_compute[260935]: 2025-10-11 09:00:50.416 2 DEBUG oslo_concurrency.lockutils [req-ed853ab6-1dea-4840-bdb4-6fd91cb3585d req-6c7922f9-8668-443b-9e8f-b7210e5a7d91 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:50 compute-0 nova_compute[260935]: 2025-10-11 09:00:50.417 2 DEBUG nova.compute.manager [req-ed853ab6-1dea-4840-bdb4-6fd91cb3585d req-6c7922f9-8668-443b-9e8f-b7210e5a7d91 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Processing event network-vif-plugged-c992d6e3-ef59-42a0-80c5-109fe0c056cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:00:50 compute-0 nova_compute[260935]: 2025-10-11 09:00:50.419 2 DEBUG oslo_concurrency.lockutils [None req-272738df-6af2-4acf-8251-7329bafbeba8 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:50 compute-0 podman[344002]: 2025-10-11 09:00:50.570795853 +0000 UTC m=+0.071359976 container create b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:00:50 compute-0 systemd[1]: Started libpod-conmon-b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b.scope.
Oct 11 09:00:50 compute-0 podman[344002]: 2025-10-11 09:00:50.534018754 +0000 UTC m=+0.034582947 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:00:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:00:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d388700494a60a977f53213e9adaae75b3b656a3610a624cd88ae77b82267d55/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:00:50 compute-0 podman[344002]: 2025-10-11 09:00:50.684712341 +0000 UTC m=+0.185276554 container init b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:00:50 compute-0 podman[344002]: 2025-10-11 09:00:50.698108803 +0000 UTC m=+0.198672956 container start b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 11 09:00:50 compute-0 neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745[344017]: [NOTICE]   (344035) : New worker (344056) forked
Oct 11 09:00:50 compute-0 neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745[344017]: [NOTICE]   (344035) : Loading success.
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.787 162815 INFO neutron.agent.ovn.metadata.agent [-] Port bcfaf217-8703-4c1e-bf80-d24ab0e642bd in datapath 164a664d-5e52-48b9-8b00-f73d0851a4cc unbound from our chassis
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.791 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 164a664d-5e52-48b9-8b00-f73d0851a4cc
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.807 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[caa3dafd-d9aa-4f46-bb2c-ad9b63eebbc4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.845 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f14a08-b056-438d-b0e1-0f76505368ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.853 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3dcc5a5c-8522-47e2-9851-adece4e015ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.898 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2be529bd-0c8f-4499-80a3-576c8f281241]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.925 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0097c04b-93b7-45dd-ae8b-a5c2fcf22046]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap164a664d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:a0:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500906, 'reachable_time': 17587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344081, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.956 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4027ec-f001-4601-a5ca-aa9a79b309d4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap164a664d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500919, 'tstamp': 500919}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344082, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap164a664d-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500922, 'tstamp': 500922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344082, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.959 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap164a664d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:50 compute-0 nova_compute[260935]: 2025-10-11 09:00:50.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:50 compute-0 nova_compute[260935]: 2025-10-11 09:00:50.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.968 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap164a664d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.969 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.970 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap164a664d-50, col_values=(('external_ids', {'iface-id': 'e23cd806-8523-4e59-ba27-db15cee52548'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:00:50.970 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.201 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "15633aee-234a-4417-b5ea-f35f13820404" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.202 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.224 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.312 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.313 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.324 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.325 2 INFO nova.compute.claims [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.399 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173251.399392, 52be16b4-343a-4fd4-9041-39069a1fde2a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.401 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] VM Started (Lifecycle Event)
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.405 2 DEBUG nova.compute.manager [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.422 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.429 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.448 2 INFO nova.virt.libvirt.driver [-] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Instance spawned successfully.
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.450 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.454 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.491 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.492 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173251.3994942, 52be16b4-343a-4fd4-9041-39069a1fde2a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.492 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] VM Paused (Lifecycle Event)
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.511 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.512 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.512 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.513 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.514 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.514 2 DEBUG nova.virt.libvirt.driver [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.524 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.529 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173251.4118798, 52be16b4-343a-4fd4-9041-39069a1fde2a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.529 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] VM Resumed (Lifecycle Event)
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.559 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.564 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:00:51 compute-0 ceph-mon[74313]: pgmap v1730: 321 pgs: 321 active+clean; 339 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 183 op/s
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.589 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.645 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.649 2 INFO nova.compute.manager [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Took 10.16 seconds to spawn the instance on the hypervisor.
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.650 2 DEBUG nova.compute.manager [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.734 2 INFO nova.compute.manager [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Took 11.59 seconds to build instance.
Oct 11 09:00:51 compute-0 nova_compute[260935]: 2025-10-11 09:00:51.758 2 DEBUG oslo_concurrency.lockutils [None req-9972abff-c76e-4a7a-9404-0b8dd9c06fcd 2e604a7f01ba42f8a2f2a90bf14cafba 0ba95f2514ce4fe4b00f245335eaeb01 - - default default] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:00:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3365592419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.112 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.119 2 DEBUG nova.compute.provider_tree [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:00:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 339 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.4 MiB/s wr, 189 op/s
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.163 2 DEBUG nova.scheduler.client.report [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.220 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.221 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.306 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.306 2 DEBUG nova.network.neutron [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.333 2 INFO nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.367 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.488 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.490 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.491 2 INFO nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Creating image(s)
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.524 2 DEBUG nova.storage.rbd_utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image 15633aee-234a-4417-b5ea-f35f13820404_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.565 2 DEBUG nova.storage.rbd_utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image 15633aee-234a-4417-b5ea-f35f13820404_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:52 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3365592419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.607 2 DEBUG nova.storage.rbd_utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image 15633aee-234a-4417-b5ea-f35f13820404_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.614 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.674 2 DEBUG nova.policy [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'df5a3c3a5d68473aa2e2950de45ebce1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '11b44ad9193e4e43838d52056ccf413e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.722 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.723 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.724 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.725 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.758 2 DEBUG nova.storage.rbd_utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image 15633aee-234a-4417-b5ea-f35f13820404_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:00:52 compute-0 nova_compute[260935]: 2025-10-11 09:00:52.763 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 15633aee-234a-4417-b5ea-f35f13820404_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:00:52 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.093 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 15633aee-234a-4417-b5ea-f35f13820404_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.177 2 DEBUG nova.storage.rbd_utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] resizing rbd image 15633aee-234a-4417-b5ea-f35f13820404_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.299 2 DEBUG nova.objects.instance [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lazy-loading 'migration_context' on Instance uuid 15633aee-234a-4417-b5ea-f35f13820404 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.325 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.326 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Ensure instance console log exists: /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.327 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.327 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.328 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.392 2 DEBUG nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Received event network-vif-plugged-c992d6e3-ef59-42a0-80c5-109fe0c056cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.393 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.393 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.394 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.394 2 DEBUG nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] No waiting events found dispatching network-vif-plugged-c992d6e3-ef59-42a0-80c5-109fe0c056cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.395 2 WARNING nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Received unexpected event network-vif-plugged-c992d6e3-ef59-42a0-80c5-109fe0c056cd for instance with vm_state active and task_state None.
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.396 2 DEBUG nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-unplugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.396 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.397 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.397 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.398 2 DEBUG nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] No waiting events found dispatching network-vif-unplugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.398 2 WARNING nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received unexpected event network-vif-unplugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd for instance with vm_state stopped and task_state None.
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.399 2 DEBUG nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.399 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.400 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.400 2 DEBUG oslo_concurrency.lockutils [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.401 2 DEBUG nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] No waiting events found dispatching network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.402 2 WARNING nova.compute.manager [req-1a63c503-5835-4b47-98e7-1cae2f859516 req-f5afc13b-8d4d-4b88-92c6-2749e598bd21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received unexpected event network-vif-plugged-bcfaf217-8703-4c1e-bf80-d24ab0e642bd for instance with vm_state stopped and task_state None.
Oct 11 09:00:53 compute-0 ceph-mon[74313]: pgmap v1731: 321 pgs: 321 active+clean; 339 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.4 MiB/s wr, 189 op/s
Oct 11 09:00:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:53 compute-0 nova_compute[260935]: 2025-10-11 09:00:53.925 2 DEBUG nova.network.neutron [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Successfully created port: 074db183-8679-40f2-b39d-06759a8dfceb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:00:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 339 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 205 op/s
Oct 11 09:00:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:00:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:00:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:00:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:00:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:00:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:00:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:00:54
Oct 11 09:00:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:00:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:00:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'backups', 'volumes', 'images', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data']
Oct 11 09:00:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.932 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.934 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.934 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.934 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.935 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.937 2 INFO nova.compute.manager [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Terminating instance
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.938 2 DEBUG nova.compute.manager [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.945 2 INFO nova.virt.libvirt.driver [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Instance destroyed successfully.
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.946 2 DEBUG nova.objects.instance [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lazy-loading 'resources' on Instance uuid 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.977 2 DEBUG nova.virt.libvirt.vif [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T08:59:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1432792747',display_name='tempest-tempest.common.compute-instance-1432792747',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1432792747',id=84,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:00:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d33b48586acf4e6c8254f2a1213b001c',ramdisk_id='',reservation_id='r-dj88erzq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-620268335',owner_user_name='tempest-ServerActionsTestOtherA-620268335-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:00:50Z,user_data=None,user_id='8d211063ed874837bead2e13898b31d4',uuid=2ca1b1c6-7cd2-42f9-a24b-5c20bb567361,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.978 2 DEBUG nova.network.os_vif_util [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converting VIF {"id": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "address": "fa:16:3e:16:b9:d1", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcfaf217-87", "ovs_interfaceid": "bcfaf217-8703-4c1e-bf80-d24ab0e642bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.979 2 DEBUG nova.network.os_vif_util [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:b9:d1,bridge_name='br-int',has_traffic_filtering=True,id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcfaf217-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.980 2 DEBUG os_vif [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:b9:d1,bridge_name='br-int',has_traffic_filtering=True,id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcfaf217-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.983 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbcfaf217-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:00:54 compute-0 nova_compute[260935]: 2025-10-11 09:00:54.991 2 INFO os_vif [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:b9:d1,bridge_name='br-int',has_traffic_filtering=True,id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcfaf217-87')
Oct 11 09:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:00:55 compute-0 nova_compute[260935]: 2025-10-11 09:00:55.461 2 DEBUG nova.compute.manager [req-6217c10a-82d7-4f3c-a0f0-e5f6d20ca543 req-d8d81b36-c6d1-45d0-bf25-b4cf99a6ec13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Received event network-changed-c992d6e3-ef59-42a0-80c5-109fe0c056cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:00:55 compute-0 nova_compute[260935]: 2025-10-11 09:00:55.462 2 DEBUG nova.compute.manager [req-6217c10a-82d7-4f3c-a0f0-e5f6d20ca543 req-d8d81b36-c6d1-45d0-bf25-b4cf99a6ec13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Refreshing instance network info cache due to event network-changed-c992d6e3-ef59-42a0-80c5-109fe0c056cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:00:55 compute-0 nova_compute[260935]: 2025-10-11 09:00:55.463 2 DEBUG oslo_concurrency.lockutils [req-6217c10a-82d7-4f3c-a0f0-e5f6d20ca543 req-d8d81b36-c6d1-45d0-bf25-b4cf99a6ec13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:00:55 compute-0 nova_compute[260935]: 2025-10-11 09:00:55.464 2 DEBUG oslo_concurrency.lockutils [req-6217c10a-82d7-4f3c-a0f0-e5f6d20ca543 req-d8d81b36-c6d1-45d0-bf25-b4cf99a6ec13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:00:55 compute-0 nova_compute[260935]: 2025-10-11 09:00:55.464 2 DEBUG nova.network.neutron [req-6217c10a-82d7-4f3c-a0f0-e5f6d20ca543 req-d8d81b36-c6d1-45d0-bf25-b4cf99a6ec13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Refreshing network info cache for port c992d6e3-ef59-42a0-80c5-109fe0c056cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:00:55 compute-0 nova_compute[260935]: 2025-10-11 09:00:55.514 2 INFO nova.virt.libvirt.driver [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Deleting instance files /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_del
Oct 11 09:00:55 compute-0 nova_compute[260935]: 2025-10-11 09:00:55.516 2 INFO nova.virt.libvirt.driver [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Deletion of /var/lib/nova/instances/2ca1b1c6-7cd2-42f9-a24b-5c20bb567361_del complete
Oct 11 09:00:55 compute-0 nova_compute[260935]: 2025-10-11 09:00:55.595 2 INFO nova.compute.manager [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Took 0.66 seconds to destroy the instance on the hypervisor.
Oct 11 09:00:55 compute-0 nova_compute[260935]: 2025-10-11 09:00:55.596 2 DEBUG oslo.service.loopingcall [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:00:55 compute-0 nova_compute[260935]: 2025-10-11 09:00:55.597 2 DEBUG nova.compute.manager [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:00:55 compute-0 nova_compute[260935]: 2025-10-11 09:00:55.598 2 DEBUG nova.network.neutron [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:00:55 compute-0 ceph-mon[74313]: pgmap v1732: 321 pgs: 321 active+clean; 339 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 205 op/s
Oct 11 09:00:55 compute-0 nova_compute[260935]: 2025-10-11 09:00:55.721 2 DEBUG nova.network.neutron [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Successfully updated port: 074db183-8679-40f2-b39d-06759a8dfceb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:00:55 compute-0 nova_compute[260935]: 2025-10-11 09:00:55.743 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "refresh_cache-15633aee-234a-4417-b5ea-f35f13820404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:00:55 compute-0 nova_compute[260935]: 2025-10-11 09:00:55.744 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquired lock "refresh_cache-15633aee-234a-4417-b5ea-f35f13820404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:00:55 compute-0 nova_compute[260935]: 2025-10-11 09:00:55.745 2 DEBUG nova.network.neutron [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:00:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 339 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 935 KiB/s wr, 142 op/s
Oct 11 09:00:56 compute-0 nova_compute[260935]: 2025-10-11 09:00:56.270 2 DEBUG nova.network.neutron [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:00:57 compute-0 ceph-mon[74313]: pgmap v1733: 321 pgs: 321 active+clean; 339 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 935 KiB/s wr, 142 op/s
Oct 11 09:00:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 339 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.7 MiB/s wr, 285 op/s
Oct 11 09:00:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:00:58 compute-0 nova_compute[260935]: 2025-10-11 09:00:58.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:00:59 compute-0 ceph-mon[74313]: pgmap v1734: 321 pgs: 321 active+clean; 339 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.7 MiB/s wr, 285 op/s
Oct 11 09:00:59 compute-0 nova_compute[260935]: 2025-10-11 09:00:59.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 339 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 182 op/s
Oct 11 09:01:00 compute-0 sudo[344291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:01:00 compute-0 sudo[344291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:00 compute-0 sudo[344291]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:00 compute-0 sudo[344316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:01:00 compute-0 sudo[344316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:00 compute-0 sudo[344316]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:00 compute-0 sudo[344341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:01:00 compute-0 sudo[344341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:00 compute-0 sudo[344341]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:00 compute-0 sudo[344366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:01:00 compute-0 sudo[344366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:01 compute-0 sudo[344366]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:01 compute-0 ceph-mon[74313]: pgmap v1735: 321 pgs: 321 active+clean; 339 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 182 op/s
Oct 11 09:01:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:01:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:01:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:01:01 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:01:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:01:01 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:01:01 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev b2be673f-a1cd-49c1-af1a-e64def204cbf does not exist
Oct 11 09:01:01 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 7d908ca6-53b7-41fa-a31d-ba7c3e8b20c6 does not exist
Oct 11 09:01:01 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 64454929-abb3-4677-bf5d-548ab40eaff8 does not exist
Oct 11 09:01:01 compute-0 nova_compute[260935]: 2025-10-11 09:01:01.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:01:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:01:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:01:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:01:01 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:01:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:01:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:01:01 compute-0 CROND[344437]: (root) CMD (run-parts /etc/cron.hourly)
Oct 11 09:01:01 compute-0 run-parts[344449]: (/etc/cron.hourly) starting 0anacron
Oct 11 09:01:01 compute-0 sudo[344422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:01:01 compute-0 sudo[344422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:01 compute-0 run-parts[344456]: (/etc/cron.hourly) finished 0anacron
Oct 11 09:01:01 compute-0 sudo[344422]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:01 compute-0 CROND[344430]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 11 09:01:01 compute-0 sudo[344458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:01:01 compute-0 sudo[344458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:01 compute-0 sudo[344458]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:01 compute-0 sudo[344483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:01:01 compute-0 sudo[344483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:02 compute-0 sudo[344483]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:02 compute-0 sudo[344508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:01:02 compute-0 sudo[344508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 352 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.1 MiB/s wr, 201 op/s
Oct 11 09:01:02 compute-0 podman[344575]: 2025-10-11 09:01:02.559288188 +0000 UTC m=+0.056133872 container create 5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 09:01:02 compute-0 systemd[1]: Started libpod-conmon-5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334.scope.
Oct 11 09:01:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:01:02 compute-0 podman[344575]: 2025-10-11 09:01:02.536179379 +0000 UTC m=+0.033025073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:01:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:01:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:01:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:01:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:01:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:01:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:01:02 compute-0 podman[344575]: 2025-10-11 09:01:02.643572571 +0000 UTC m=+0.140418255 container init 5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 09:01:02 compute-0 podman[344575]: 2025-10-11 09:01:02.652210457 +0000 UTC m=+0.149056111 container start 5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 09:01:02 compute-0 podman[344575]: 2025-10-11 09:01:02.656094268 +0000 UTC m=+0.152939932 container attach 5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 09:01:02 compute-0 inspiring_dijkstra[344592]: 167 167
Oct 11 09:01:02 compute-0 systemd[1]: libpod-5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334.scope: Deactivated successfully.
Oct 11 09:01:02 compute-0 conmon[344592]: conmon 5a17e4d32f870f23355c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334.scope/container/memory.events
Oct 11 09:01:02 compute-0 podman[344575]: 2025-10-11 09:01:02.659960118 +0000 UTC m=+0.156805822 container died 5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:01:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-693e0739eed15ba8fb17fdd2654d067539a8e18ad1f92eadf453db4037ad46d8-merged.mount: Deactivated successfully.
Oct 11 09:01:02 compute-0 podman[344575]: 2025-10-11 09:01:02.706804614 +0000 UTC m=+0.203650268 container remove 5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 09:01:02 compute-0 systemd[1]: libpod-conmon-5a17e4d32f870f23355cd771892c78d8a48239ca6296fb1fd0165cfe206b1334.scope: Deactivated successfully.
Oct 11 09:01:02 compute-0 podman[344614]: 2025-10-11 09:01:02.954377384 +0000 UTC m=+0.065062767 container create 087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 11 09:01:03 compute-0 systemd[1]: Started libpod-conmon-087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7.scope.
Oct 11 09:01:03 compute-0 podman[344614]: 2025-10-11 09:01:02.928533387 +0000 UTC m=+0.039218810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:01:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:01:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923fe524f927e63842d907eea50a54d45ebdc29b3407a0f858f3a1768ee448f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:01:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923fe524f927e63842d907eea50a54d45ebdc29b3407a0f858f3a1768ee448f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:01:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923fe524f927e63842d907eea50a54d45ebdc29b3407a0f858f3a1768ee448f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:01:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923fe524f927e63842d907eea50a54d45ebdc29b3407a0f858f3a1768ee448f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:01:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923fe524f927e63842d907eea50a54d45ebdc29b3407a0f858f3a1768ee448f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:01:03 compute-0 podman[344614]: 2025-10-11 09:01:03.058071929 +0000 UTC m=+0.168757272 container init 087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamarr, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 09:01:03 compute-0 podman[344614]: 2025-10-11 09:01:03.069857615 +0000 UTC m=+0.180542958 container start 087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamarr, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 09:01:03 compute-0 podman[344614]: 2025-10-11 09:01:03.074359404 +0000 UTC m=+0.185044777 container attach 087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamarr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 09:01:03 compute-0 ovn_controller[152945]: 2025-10-11T09:01:03Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:b5:ce 10.100.0.14
Oct 11 09:01:03 compute-0 ovn_controller[152945]: 2025-10-11T09:01:03Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:b5:ce 10.100.0.14
Oct 11 09:01:03 compute-0 ceph-mon[74313]: pgmap v1736: 321 pgs: 321 active+clean; 352 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.1 MiB/s wr, 201 op/s
Oct 11 09:01:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:01:03 compute-0 nova_compute[260935]: 2025-10-11 09:01:03.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 362 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.8 MiB/s wr, 214 op/s
Oct 11 09:01:04 compute-0 mystifying_lamarr[344631]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:01:04 compute-0 mystifying_lamarr[344631]: --> relative data size: 1.0
Oct 11 09:01:04 compute-0 mystifying_lamarr[344631]: --> All data devices are unavailable
Oct 11 09:01:04 compute-0 systemd[1]: libpod-087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7.scope: Deactivated successfully.
Oct 11 09:01:04 compute-0 podman[344614]: 2025-10-11 09:01:04.397502254 +0000 UTC m=+1.508187607 container died 087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamarr, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 09:01:04 compute-0 systemd[1]: libpod-087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7.scope: Consumed 1.278s CPU time.
Oct 11 09:01:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-923fe524f927e63842d907eea50a54d45ebdc29b3407a0f858f3a1768ee448f9-merged.mount: Deactivated successfully.
Oct 11 09:01:04 compute-0 podman[344614]: 2025-10-11 09:01:04.466631655 +0000 UTC m=+1.577317008 container remove 087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 09:01:04 compute-0 systemd[1]: libpod-conmon-087a0e515dcbd23596daca268386fa62048d7ef4dd5cc009a22bb0b427db03c7.scope: Deactivated successfully.
Oct 11 09:01:04 compute-0 sudo[344508]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:04 compute-0 sudo[344671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:01:04 compute-0 sudo[344671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:04 compute-0 sudo[344671]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:04 compute-0 sudo[344696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:01:04 compute-0 sudo[344696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:04 compute-0 sudo[344696]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029572858671941134 of space, bias 1.0, pg target 0.887185760158234 quantized to 32 (current 32)
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:01:04 compute-0 sudo[344721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:01:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:01:04 compute-0 sudo[344721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:04 compute-0 sudo[344721]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:04 compute-0 sudo[344746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:01:04 compute-0 sudo[344746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:04 compute-0 nova_compute[260935]: 2025-10-11 09:01:04.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:04 compute-0 nova_compute[260935]: 2025-10-11 09:01:04.991 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173249.9902055, 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:01:04 compute-0 nova_compute[260935]: 2025-10-11 09:01:04.991 2 INFO nova.compute.manager [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] VM Stopped (Lifecycle Event)
Oct 11 09:01:05 compute-0 podman[344812]: 2025-10-11 09:01:05.323153369 +0000 UTC m=+0.035900645 container create 6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 09:01:05 compute-0 systemd[1]: Started libpod-conmon-6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e.scope.
Oct 11 09:01:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:01:05 compute-0 podman[344812]: 2025-10-11 09:01:05.309363796 +0000 UTC m=+0.022111092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:01:05 compute-0 podman[344812]: 2025-10-11 09:01:05.407575436 +0000 UTC m=+0.120322732 container init 6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 09:01:05 compute-0 podman[344812]: 2025-10-11 09:01:05.417921231 +0000 UTC m=+0.130668527 container start 6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:01:05 compute-0 podman[344812]: 2025-10-11 09:01:05.421135623 +0000 UTC m=+0.133882919 container attach 6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:01:05 compute-0 youthful_ishizaka[344829]: 167 167
Oct 11 09:01:05 compute-0 systemd[1]: libpod-6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e.scope: Deactivated successfully.
Oct 11 09:01:05 compute-0 podman[344812]: 2025-10-11 09:01:05.424827438 +0000 UTC m=+0.137574714 container died 6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 09:01:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b5901b482c3972a01413ea0df29e518e76c9c99e0c2eb227c52dcb7b23e9bd6-merged.mount: Deactivated successfully.
Oct 11 09:01:05 compute-0 podman[344812]: 2025-10-11 09:01:05.463997185 +0000 UTC m=+0.176744471 container remove 6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 09:01:05 compute-0 systemd[1]: libpod-conmon-6ca13abe5a5ce42045f502244752454057853ede43dd936b2efc5eadb8a2b52e.scope: Deactivated successfully.
Oct 11 09:01:05 compute-0 ceph-mon[74313]: pgmap v1737: 321 pgs: 321 active+clean; 362 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.8 MiB/s wr, 214 op/s
Oct 11 09:01:05 compute-0 podman[344852]: 2025-10-11 09:01:05.736032772 +0000 UTC m=+0.082452142 container create b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:01:05 compute-0 podman[344852]: 2025-10-11 09:01:05.697361049 +0000 UTC m=+0.043780479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:01:05 compute-0 systemd[1]: Started libpod-conmon-b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5.scope.
Oct 11 09:01:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:01:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2f461d224b5337077d4bb0acd9785eeee8f7a6737b5ec4f3aa506f6bfdfe1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:01:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2f461d224b5337077d4bb0acd9785eeee8f7a6737b5ec4f3aa506f6bfdfe1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:01:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2f461d224b5337077d4bb0acd9785eeee8f7a6737b5ec4f3aa506f6bfdfe1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:01:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2f461d224b5337077d4bb0acd9785eeee8f7a6737b5ec4f3aa506f6bfdfe1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:01:05 compute-0 podman[344852]: 2025-10-11 09:01:05.856706633 +0000 UTC m=+0.203126003 container init b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 09:01:05 compute-0 podman[344852]: 2025-10-11 09:01:05.874090629 +0000 UTC m=+0.220509969 container start b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:01:05 compute-0 podman[344852]: 2025-10-11 09:01:05.877744413 +0000 UTC m=+0.224163833 container attach b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 09:01:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 362 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.8 MiB/s wr, 181 op/s
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]: {
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:     "0": [
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:         {
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "devices": [
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "/dev/loop3"
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             ],
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "lv_name": "ceph_lv0",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "lv_size": "21470642176",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "name": "ceph_lv0",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "tags": {
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.cluster_name": "ceph",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.crush_device_class": "",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.encrypted": "0",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.osd_id": "0",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.type": "block",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.vdo": "0"
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             },
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "type": "block",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "vg_name": "ceph_vg0"
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:         }
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:     ],
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:     "1": [
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:         {
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "devices": [
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "/dev/loop4"
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             ],
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "lv_name": "ceph_lv1",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "lv_size": "21470642176",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "name": "ceph_lv1",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "tags": {
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.cluster_name": "ceph",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.crush_device_class": "",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.encrypted": "0",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.osd_id": "1",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.type": "block",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.vdo": "0"
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             },
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "type": "block",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "vg_name": "ceph_vg1"
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:         }
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:     ],
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:     "2": [
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:         {
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "devices": [
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "/dev/loop5"
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             ],
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "lv_name": "ceph_lv2",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "lv_size": "21470642176",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "name": "ceph_lv2",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "tags": {
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.cluster_name": "ceph",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.crush_device_class": "",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.encrypted": "0",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.osd_id": "2",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.type": "block",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:                 "ceph.vdo": "0"
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             },
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "type": "block",
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:             "vg_name": "ceph_vg2"
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:         }
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]:     ]
Oct 11 09:01:06 compute-0 mystifying_lewin[344868]: }
Oct 11 09:01:06 compute-0 systemd[1]: libpod-b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5.scope: Deactivated successfully.
Oct 11 09:01:06 compute-0 podman[344877]: 2025-10-11 09:01:06.694378298 +0000 UTC m=+0.035742210 container died b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 09:01:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b2f461d224b5337077d4bb0acd9785eeee8f7a6737b5ec4f3aa506f6bfdfe1e-merged.mount: Deactivated successfully.
Oct 11 09:01:06 compute-0 podman[344877]: 2025-10-11 09:01:06.773481823 +0000 UTC m=+0.114845715 container remove b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 09:01:06 compute-0 systemd[1]: libpod-conmon-b6359e7f726a3e6e7f91633ccc4338cf79359af53e74167e33b2cea4958a62c5.scope: Deactivated successfully.
Oct 11 09:01:06 compute-0 sudo[344746]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:06 compute-0 sudo[344893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:01:06 compute-0 sudo[344893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:06 compute-0 sudo[344893]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:07 compute-0 sudo[344924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:01:07 compute-0 sudo[344924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:07 compute-0 sudo[344924]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:07 compute-0 podman[344917]: 2025-10-11 09:01:07.101026503 +0000 UTC m=+0.122468133 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 11 09:01:07 compute-0 sudo[344963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:01:07 compute-0 sudo[344963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:07 compute-0 sudo[344963]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:07 compute-0 sudo[344988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:01:07 compute-0 sudo[344988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:07 compute-0 ceph-mon[74313]: pgmap v1738: 321 pgs: 321 active+clean; 362 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.8 MiB/s wr, 181 op/s
Oct 11 09:01:07 compute-0 podman[345056]: 2025-10-11 09:01:07.696626497 +0000 UTC m=+0.063921024 container create 7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 09:01:07 compute-0 systemd[1]: Started libpod-conmon-7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c.scope.
Oct 11 09:01:07 compute-0 podman[345056]: 2025-10-11 09:01:07.665646553 +0000 UTC m=+0.032941120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:01:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:01:07 compute-0 podman[345056]: 2025-10-11 09:01:07.799723186 +0000 UTC m=+0.167017753 container init 7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 09:01:07 compute-0 podman[345056]: 2025-10-11 09:01:07.811176713 +0000 UTC m=+0.178471240 container start 7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:01:07 compute-0 podman[345056]: 2025-10-11 09:01:07.815240279 +0000 UTC m=+0.182534806 container attach 7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 09:01:07 compute-0 frosty_lumiere[345072]: 167 167
Oct 11 09:01:07 compute-0 systemd[1]: libpod-7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c.scope: Deactivated successfully.
Oct 11 09:01:07 compute-0 podman[345056]: 2025-10-11 09:01:07.819576932 +0000 UTC m=+0.186871459 container died 7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 09:01:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f59be9c02687326753ec136097e93e23c5abe808e4d7240e7233d83a5f20bfa2-merged.mount: Deactivated successfully.
Oct 11 09:01:07 compute-0 podman[345056]: 2025-10-11 09:01:07.863295469 +0000 UTC m=+0.230589986 container remove 7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:01:07 compute-0 systemd[1]: libpod-conmon-7c92bbd7fd0257760ffe1c0add5565ba91d0b4cf415b5744554724181e8f0c3c.scope: Deactivated successfully.
Oct 11 09:01:08 compute-0 podman[345095]: 2025-10-11 09:01:08.118153976 +0000 UTC m=+0.053973630 container create a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:01:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 211 op/s
Oct 11 09:01:08 compute-0 systemd[1]: Started libpod-conmon-a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577.scope.
Oct 11 09:01:08 compute-0 podman[345095]: 2025-10-11 09:01:08.100126142 +0000 UTC m=+0.035945796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:01:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:01:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0005dc978592679f94a179c27d9da8000016c228c6a382ea9925e687e41206/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:01:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0005dc978592679f94a179c27d9da8000016c228c6a382ea9925e687e41206/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:01:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0005dc978592679f94a179c27d9da8000016c228c6a382ea9925e687e41206/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:01:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0005dc978592679f94a179c27d9da8000016c228c6a382ea9925e687e41206/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:01:08 compute-0 podman[345095]: 2025-10-11 09:01:08.229617865 +0000 UTC m=+0.165437509 container init a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:01:08 compute-0 podman[345095]: 2025-10-11 09:01:08.236142571 +0000 UTC m=+0.171962195 container start a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 09:01:08 compute-0 podman[345095]: 2025-10-11 09:01:08.23926081 +0000 UTC m=+0.175080434 container attach a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 09:01:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:01:08 compute-0 nova_compute[260935]: 2025-10-11 09:01:08.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:01:08 compute-0 nova_compute[260935]: 2025-10-11 09:01:08.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:09 compute-0 exciting_khorana[345111]: {
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:         "osd_id": 2,
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:         "type": "bluestore"
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:     },
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:         "osd_id": 0,
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:         "type": "bluestore"
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:     },
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:         "osd_id": 1,
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:         "type": "bluestore"
Oct 11 09:01:09 compute-0 exciting_khorana[345111]:     }
Oct 11 09:01:09 compute-0 exciting_khorana[345111]: }
Oct 11 09:01:09 compute-0 systemd[1]: libpod-a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577.scope: Deactivated successfully.
Oct 11 09:01:09 compute-0 podman[345095]: 2025-10-11 09:01:09.380691107 +0000 UTC m=+1.316510771 container died a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:01:09 compute-0 systemd[1]: libpod-a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577.scope: Consumed 1.143s CPU time.
Oct 11 09:01:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd0005dc978592679f94a179c27d9da8000016c228c6a382ea9925e687e41206-merged.mount: Deactivated successfully.
Oct 11 09:01:09 compute-0 podman[345095]: 2025-10-11 09:01:09.45478679 +0000 UTC m=+1.390606454 container remove a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:01:09 compute-0 systemd[1]: libpod-conmon-a64f9c2d870da33ecd0412f20b4644537199e46bbea233e746182967fdf6b577.scope: Deactivated successfully.
Oct 11 09:01:09 compute-0 sudo[344988]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:01:09 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:01:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:01:09 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:01:09 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev d1e120c9-acd2-4969-be20-4c7b336d5443 does not exist
Oct 11 09:01:09 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 7aec89cb-18e4-4dbd-95c9-0db948c7a7a1 does not exist
Oct 11 09:01:09 compute-0 sudo[345158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:01:09 compute-0 sudo[345158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:09 compute-0 sudo[345158]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:09 compute-0 ceph-mon[74313]: pgmap v1739: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 211 op/s
Oct 11 09:01:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:01:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:01:09 compute-0 sudo[345183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:01:09 compute-0 sudo[345183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:01:09 compute-0 sudo[345183]: pam_unix(sudo:session): session closed for user root
Oct 11 09:01:10 compute-0 nova_compute[260935]: 2025-10-11 09:01:10.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 11 09:01:10 compute-0 ceph-mon[74313]: pgmap v1740: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 11 09:01:11 compute-0 nova_compute[260935]: 2025-10-11 09:01:11.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:01:11 compute-0 nova_compute[260935]: 2025-10-11 09:01:11.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:01:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 11 09:01:12 compute-0 nova_compute[260935]: 2025-10-11 09:01:12.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:01:12 compute-0 nova_compute[260935]: 2025-10-11 09:01:12.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:01:13 compute-0 ceph-mon[74313]: pgmap v1741: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 11 09:01:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:01:13 compute-0 nova_compute[260935]: 2025-10-11 09:01:13.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:01:13 compute-0 nova_compute[260935]: 2025-10-11 09:01:13.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:13 compute-0 podman[345208]: 2025-10-11 09:01:13.811719425 +0000 UTC m=+0.107981609 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 09:01:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 199 KiB/s rd, 919 KiB/s wr, 50 op/s
Oct 11 09:01:15 compute-0 nova_compute[260935]: 2025-10-11 09:01:15.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:01:15.196 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:01:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:01:15.197 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:01:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:01:15.198 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:01:15 compute-0 ceph-mon[74313]: pgmap v1742: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 199 KiB/s rd, 919 KiB/s wr, 50 op/s
Oct 11 09:01:15 compute-0 nova_compute[260935]: 2025-10-11 09:01:15.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:01:15 compute-0 nova_compute[260935]: 2025-10-11 09:01:15.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:01:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1743: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 110 KiB/s wr, 31 op/s
Oct 11 09:01:16 compute-0 podman[345229]: 2025-10-11 09:01:16.76827138 +0000 UTC m=+0.072382115 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:01:16 compute-0 podman[345230]: 2025-10-11 09:01:16.85000166 +0000 UTC m=+0.140088245 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 09:01:17 compute-0 ceph-mon[74313]: pgmap v1743: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 110 KiB/s wr, 31 op/s
Oct 11 09:01:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 111 KiB/s wr, 31 op/s
Oct 11 09:01:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:01:18 compute-0 nova_compute[260935]: 2025-10-11 09:01:18.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:19 compute-0 ceph-mon[74313]: pgmap v1744: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 111 KiB/s wr, 31 op/s
Oct 11 09:01:20 compute-0 nova_compute[260935]: 2025-10-11 09:01:20.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 15 KiB/s wr, 0 op/s
Oct 11 09:01:21 compute-0 ceph-mon[74313]: pgmap v1745: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 15 KiB/s wr, 0 op/s
Oct 11 09:01:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 15 KiB/s wr, 0 op/s
Oct 11 09:01:23 compute-0 ceph-mon[74313]: pgmap v1746: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 15 KiB/s wr, 0 op/s
Oct 11 09:01:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:01:23 compute-0 nova_compute[260935]: 2025-10-11 09:01:23.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 15 KiB/s wr, 0 op/s
Oct 11 09:01:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:01:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:01:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:01:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:01:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:01:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:01:24 compute-0 ovn_controller[152945]: 2025-10-11T09:01:24Z|00788|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 11 09:01:25 compute-0 nova_compute[260935]: 2025-10-11 09:01:25.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:25 compute-0 ceph-mon[74313]: pgmap v1747: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 15 KiB/s wr, 0 op/s
Oct 11 09:01:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:01:27 compute-0 ceph-mon[74313]: pgmap v1748: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:01:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Oct 11 09:01:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:01:28 compute-0 nova_compute[260935]: 2025-10-11 09:01:28.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:29 compute-0 ceph-mon[74313]: pgmap v1749: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Oct 11 09:01:30 compute-0 nova_compute[260935]: 2025-10-11 09:01:30.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Oct 11 09:01:31 compute-0 ceph-mon[74313]: pgmap v1750: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Oct 11 09:01:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Oct 11 09:01:33 compute-0 ceph-mon[74313]: pgmap v1751: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Oct 11 09:01:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.706936) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173293706993, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1473, "num_deletes": 250, "total_data_size": 2208065, "memory_usage": 2247112, "flush_reason": "Manual Compaction"}
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173293723151, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1294245, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35332, "largest_seqno": 36804, "table_properties": {"data_size": 1289201, "index_size": 2312, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13773, "raw_average_key_size": 20, "raw_value_size": 1277954, "raw_average_value_size": 1933, "num_data_blocks": 105, "num_entries": 661, "num_filter_entries": 661, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173147, "oldest_key_time": 1760173147, "file_creation_time": 1760173293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 16300 microseconds, and 7069 cpu microseconds.
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.723229) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1294245 bytes OK
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.723263) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.725141) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.725161) EVENT_LOG_v1 {"time_micros": 1760173293725154, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.725189) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2201560, prev total WAL file size 2201560, number of live WAL files 2.
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.726733) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323534' seq:72057594037927935, type:22 .. '6D6772737461740031353035' seq:0, type:0; will stop at (end)
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(1263KB)], [77(9444KB)]
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173293726917, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 10965754, "oldest_snapshot_seqno": -1}
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6193 keys, 8543085 bytes, temperature: kUnknown
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173293791173, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 8543085, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8502285, "index_size": 24242, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15493, "raw_key_size": 155791, "raw_average_key_size": 25, "raw_value_size": 8391715, "raw_average_value_size": 1355, "num_data_blocks": 986, "num_entries": 6193, "num_filter_entries": 6193, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.792037) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 8543085 bytes
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.793546) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.4 rd, 132.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.2 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(15.1) write-amplify(6.6) OK, records in: 6638, records dropped: 445 output_compression: NoCompression
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.793569) EVENT_LOG_v1 {"time_micros": 1760173293793560, "job": 44, "event": "compaction_finished", "compaction_time_micros": 64348, "compaction_time_cpu_micros": 49131, "output_level": 6, "num_output_files": 1, "total_output_size": 8543085, "num_input_records": 6638, "num_output_records": 6193, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173293795146, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173293797464, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.726560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.797617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.797635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.797639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.797643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:01:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:01:33.797646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:01:33 compute-0 nova_compute[260935]: 2025-10-11 09:01:33.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Oct 11 09:01:34 compute-0 ceph-mon[74313]: pgmap v1752: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Oct 11 09:01:35 compute-0 nova_compute[260935]: 2025-10-11 09:01:35.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1753: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:01:37 compute-0 ceph-mon[74313]: pgmap v1753: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:01:37 compute-0 podman[345277]: 2025-10-11 09:01:37.806438093 +0000 UTC m=+0.097746278 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:01:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Oct 11 09:01:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:01:38 compute-0 nova_compute[260935]: 2025-10-11 09:01:38.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:39 compute-0 ceph-mon[74313]: pgmap v1754: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Oct 11 09:01:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1755: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 09:01:40 compute-0 nova_compute[260935]: 2025-10-11 09:01:40.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:41 compute-0 ceph-mon[74313]: pgmap v1755: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 09:01:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 09:01:43 compute-0 ceph-mon[74313]: pgmap v1756: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 09:01:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:01:43 compute-0 nova_compute[260935]: 2025-10-11 09:01:43.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 09:01:44 compute-0 sshd-session[345297]: Invalid user validator from 152.32.213.170 port 58802
Oct 11 09:01:44 compute-0 sshd-session[345297]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:01:44 compute-0 sshd-session[345297]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:01:44 compute-0 podman[345299]: 2025-10-11 09:01:44.450778638 +0000 UTC m=+0.088183196 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, tcib_managed=true)
Oct 11 09:01:45 compute-0 nova_compute[260935]: 2025-10-11 09:01:45.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:45 compute-0 ceph-mon[74313]: pgmap v1757: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 09:01:45 compute-0 sshd-session[345297]: Failed password for invalid user validator from 152.32.213.170 port 58802 ssh2
Oct 11 09:01:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 09:01:47 compute-0 ceph-mon[74313]: pgmap v1758: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 09:01:47 compute-0 sshd-session[345297]: Received disconnect from 152.32.213.170 port 58802:11: Bye Bye [preauth]
Oct 11 09:01:47 compute-0 sshd-session[345297]: Disconnected from invalid user validator 152.32.213.170 port 58802 [preauth]
Oct 11 09:01:47 compute-0 podman[345319]: 2025-10-11 09:01:47.762893671 +0000 UTC m=+0.069055520 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 09:01:47 compute-0 podman[345320]: 2025-10-11 09:01:47.861428691 +0000 UTC m=+0.157630796 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct 11 09:01:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 09:01:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:01:48 compute-0 nova_compute[260935]: 2025-10-11 09:01:48.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:49 compute-0 ceph-mon[74313]: pgmap v1759: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 09:01:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:01:50 compute-0 nova_compute[260935]: 2025-10-11 09:01:50.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:51 compute-0 ceph-mon[74313]: pgmap v1760: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:01:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:01:53 compute-0 ceph-mon[74313]: pgmap v1761: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:01:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:01:53 compute-0 nova_compute[260935]: 2025-10-11 09:01:53.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1762: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Oct 11 09:01:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:01:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:01:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:01:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:01:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:01:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:01:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:01:54
Oct 11 09:01:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:01:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:01:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'vms', 'images', 'volumes', 'default.rgw.control', 'backups', '.rgw.root']
Oct 11 09:01:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:01:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:01:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 27K writes, 106K keys, 27K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 27K writes, 9390 syncs, 2.88 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 43.34 MB, 0.07 MB/s
                                           Interval WAL: 10K writes, 4289 syncs, 2.51 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 09:01:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:01:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:01:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:01:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:01:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:01:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:01:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:01:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:01:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:01:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:01:55 compute-0 nova_compute[260935]: 2025-10-11 09:01:55.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:55 compute-0 ceph-mon[74313]: pgmap v1762: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Oct 11 09:01:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1763: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Oct 11 09:01:57 compute-0 ceph-mon[74313]: pgmap v1763: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Oct 11 09:01:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Oct 11 09:01:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:01:58 compute-0 nova_compute[260935]: 2025-10-11 09:01:58.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:01:59 compute-0 ceph-mon[74313]: pgmap v1764: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Oct 11 09:02:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Oct 11 09:02:00 compute-0 nova_compute[260935]: 2025-10-11 09:02:00.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:02:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 28K writes, 106K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 28K writes, 10K syncs, 2.82 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 40K keys, 11K commit groups, 1.0 writes per commit group, ingest: 38.99 MB, 0.06 MB/s
                                           Interval WAL: 11K writes, 4591 syncs, 2.43 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 09:02:01 compute-0 ceph-mon[74313]: pgmap v1765: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Oct 11 09:02:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1766: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 9.4 KiB/s wr, 1 op/s
Oct 11 09:02:03 compute-0 ceph-mon[74313]: pgmap v1766: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 9.4 KiB/s wr, 1 op/s
Oct 11 09:02:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:02:03 compute-0 nova_compute[260935]: 2025-10-11 09:02:03.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 14 KiB/s wr, 2 op/s
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029729927720257864 of space, bias 1.0, pg target 0.891897831607736 quantized to 32 (current 32)
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:02:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:02:05 compute-0 nova_compute[260935]: 2025-10-11 09:02:05.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:05 compute-0 ceph-mon[74313]: pgmap v1767: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 14 KiB/s wr, 2 op/s
Oct 11 09:02:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1768: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 11 09:02:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:02:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.3 total, 600.0 interval
                                           Cumulative writes: 22K writes, 88K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 22K writes, 7557 syncs, 2.94 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8223 writes, 32K keys, 8223 commit groups, 1.0 writes per commit group, ingest: 34.46 MB, 0.06 MB/s
                                           Interval WAL: 8223 writes, 3263 syncs, 2.52 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 09:02:07 compute-0 ceph-mon[74313]: pgmap v1768: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 11 09:02:07 compute-0 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 09:02:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1769: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 11 09:02:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:02:08 compute-0 podman[345362]: 2025-10-11 09:02:08.759091146 +0000 UTC m=+0.065283442 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 11 09:02:08 compute-0 nova_compute[260935]: 2025-10-11 09:02:08.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:09 compute-0 ceph-mon[74313]: pgmap v1769: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 11 09:02:09 compute-0 sudo[345382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:02:09 compute-0 sudo[345382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:09 compute-0 sudo[345382]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:10 compute-0 sudo[345407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:02:10 compute-0 sudo[345407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:10 compute-0 sudo[345407]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:10 compute-0 sudo[345432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:02:10 compute-0 sudo[345432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:10 compute-0 sudo[345432]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:10 compute-0 sudo[345457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:02:10 compute-0 sudo[345457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 11 09:02:10 compute-0 nova_compute[260935]: 2025-10-11 09:02:10.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:10 compute-0 sudo[345457]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:02:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:02:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:02:10 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:02:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:02:10 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:02:10 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a53fb34f-5f92-4742-bc08-44b3facb55f0 does not exist
Oct 11 09:02:10 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 56048202-f037-4d84-9a64-f49698cdf53e does not exist
Oct 11 09:02:10 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 92cebac6-7a17-4899-8b95-5dbd2bcfdb4d does not exist
Oct 11 09:02:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:02:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:02:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:02:10 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:02:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:02:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:02:11 compute-0 sudo[345512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:02:11 compute-0 sudo[345512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:11 compute-0 sudo[345512]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:11 compute-0 sudo[345537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:02:11 compute-0 sudo[345537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:11 compute-0 sudo[345537]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:11 compute-0 sudo[345562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:02:11 compute-0 sudo[345562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:11 compute-0 sudo[345562]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:11 compute-0 sudo[345587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:02:11 compute-0 sudo[345587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:11 compute-0 ceph-mon[74313]: pgmap v1770: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 11 09:02:11 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:02:11 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:02:11 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:02:11 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:02:11 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:02:11 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:02:11 compute-0 podman[345652]: 2025-10-11 09:02:11.694618087 +0000 UTC m=+0.041268489 container create d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 11 09:02:11 compute-0 systemd[1]: Started libpod-conmon-d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e.scope.
Oct 11 09:02:11 compute-0 podman[345652]: 2025-10-11 09:02:11.675910033 +0000 UTC m=+0.022560455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:02:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:02:11 compute-0 podman[345652]: 2025-10-11 09:02:11.7980467 +0000 UTC m=+0.144697112 container init d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 09:02:11 compute-0 podman[345652]: 2025-10-11 09:02:11.811220926 +0000 UTC m=+0.157871338 container start d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:02:11 compute-0 podman[345652]: 2025-10-11 09:02:11.816522247 +0000 UTC m=+0.163172679 container attach d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:02:11 compute-0 nervous_hofstadter[345668]: 167 167
Oct 11 09:02:11 compute-0 systemd[1]: libpod-d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e.scope: Deactivated successfully.
Oct 11 09:02:11 compute-0 podman[345652]: 2025-10-11 09:02:11.822747655 +0000 UTC m=+0.169398057 container died d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:02:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebc73cb1761e18435a8efcc62018132e92e58b49af15d5e916dbc11d852864d2-merged.mount: Deactivated successfully.
Oct 11 09:02:11 compute-0 nova_compute[260935]: 2025-10-11 09:02:11.862 2 DEBUG nova.compute.manager [req-b3a49696-6fde-49ee-932a-08193025f2ee req-fcbdbf43-b827-47e3-b560-4c19f9fd1de3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Received event network-changed-074db183-8679-40f2-b39d-06759a8dfceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:02:11 compute-0 podman[345652]: 2025-10-11 09:02:11.866623977 +0000 UTC m=+0.213274369 container remove d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:02:11 compute-0 nova_compute[260935]: 2025-10-11 09:02:11.873 2 DEBUG nova.compute.manager [req-b3a49696-6fde-49ee-932a-08193025f2ee req-fcbdbf43-b827-47e3-b560-4c19f9fd1de3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Refreshing instance network info cache due to event network-changed-074db183-8679-40f2-b39d-06759a8dfceb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:02:11 compute-0 nova_compute[260935]: 2025-10-11 09:02:11.874 2 DEBUG oslo_concurrency.lockutils [req-b3a49696-6fde-49ee-932a-08193025f2ee req-fcbdbf43-b827-47e3-b560-4c19f9fd1de3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-15633aee-234a-4417-b5ea-f35f13820404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:02:11 compute-0 systemd[1]: libpod-conmon-d6fc928b8eb683d6fe8bcdce343a400dd8e7c9ecc6fddcacd162ed805228683e.scope: Deactivated successfully.
Oct 11 09:02:12 compute-0 podman[345692]: 2025-10-11 09:02:12.100469003 +0000 UTC m=+0.069881896 container create 993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:02:12 compute-0 systemd[1]: Started libpod-conmon-993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46.scope.
Oct 11 09:02:12 compute-0 podman[345692]: 2025-10-11 09:02:12.07267738 +0000 UTC m=+0.042090383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:02:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/156c11dfc6f1868ac6c40d8c5aa639b018c5db964ba60f15e0a0284b41557f95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/156c11dfc6f1868ac6c40d8c5aa639b018c5db964ba60f15e0a0284b41557f95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/156c11dfc6f1868ac6c40d8c5aa639b018c5db964ba60f15e0a0284b41557f95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/156c11dfc6f1868ac6c40d8c5aa639b018c5db964ba60f15e0a0284b41557f95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/156c11dfc6f1868ac6c40d8c5aa639b018c5db964ba60f15e0a0284b41557f95/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:02:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1771: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 11 09:02:12 compute-0 podman[345692]: 2025-10-11 09:02:12.214450267 +0000 UTC m=+0.183863170 container init 993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:02:12 compute-0 podman[345692]: 2025-10-11 09:02:12.228277902 +0000 UTC m=+0.197690835 container start 993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:02:12 compute-0 podman[345692]: 2025-10-11 09:02:12.232656937 +0000 UTC m=+0.202069850 container attach 993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 09:02:13 compute-0 gallant_boyd[345708]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:02:13 compute-0 gallant_boyd[345708]: --> relative data size: 1.0
Oct 11 09:02:13 compute-0 gallant_boyd[345708]: --> All data devices are unavailable
Oct 11 09:02:13 compute-0 ceph-mon[74313]: pgmap v1771: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct 11 09:02:13 compute-0 systemd[1]: libpod-993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46.scope: Deactivated successfully.
Oct 11 09:02:13 compute-0 systemd[1]: libpod-993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46.scope: Consumed 1.099s CPU time.
Oct 11 09:02:13 compute-0 podman[345737]: 2025-10-11 09:02:13.512417301 +0000 UTC m=+0.035509375 container died 993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 09:02:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-156c11dfc6f1868ac6c40d8c5aa639b018c5db964ba60f15e0a0284b41557f95-merged.mount: Deactivated successfully.
Oct 11 09:02:13 compute-0 podman[345737]: 2025-10-11 09:02:13.580980708 +0000 UTC m=+0.104072772 container remove 993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:02:13 compute-0 systemd[1]: libpod-conmon-993156546e0aa24d57838b94a3640208c2fff4c0f1c5bd39b87e4194d3b77e46.scope: Deactivated successfully.
Oct 11 09:02:13 compute-0 sudo[345587]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:13 compute-0 sudo[345750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:02:13 compute-0 sudo[345750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:13 compute-0 sudo[345750]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:02:13 compute-0 sudo[345775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:02:13 compute-0 sudo[345775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:13 compute-0 sudo[345775]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:13 compute-0 nova_compute[260935]: 2025-10-11 09:02:13.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:13 compute-0 sudo[345800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:02:13 compute-0 sudo[345800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:13 compute-0 sudo[345800]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:13 compute-0 sudo[345825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:02:13 compute-0 sudo[345825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Oct 11 09:02:14 compute-0 podman[345891]: 2025-10-11 09:02:14.339566974 +0000 UTC m=+0.059555181 container create 80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:02:14 compute-0 systemd[1]: Started libpod-conmon-80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014.scope.
Oct 11 09:02:14 compute-0 podman[345891]: 2025-10-11 09:02:14.310097992 +0000 UTC m=+0.030086249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:02:14 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:02:14 compute-0 podman[345891]: 2025-10-11 09:02:14.452123437 +0000 UTC m=+0.172111624 container init 80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:02:14 compute-0 podman[345891]: 2025-10-11 09:02:14.461915176 +0000 UTC m=+0.181903383 container start 80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:02:14 compute-0 podman[345891]: 2025-10-11 09:02:14.465744516 +0000 UTC m=+0.185732783 container attach 80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:02:14 compute-0 crazy_ramanujan[345908]: 167 167
Oct 11 09:02:14 compute-0 systemd[1]: libpod-80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014.scope: Deactivated successfully.
Oct 11 09:02:14 compute-0 podman[345891]: 2025-10-11 09:02:14.47150675 +0000 UTC m=+0.191495007 container died 80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 09:02:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8f4c7cb55e9bdcecc10416263b1a5d796e4789d839099c8b227e156fa7c0a09-merged.mount: Deactivated successfully.
Oct 11 09:02:14 compute-0 podman[345891]: 2025-10-11 09:02:14.534361234 +0000 UTC m=+0.254349441 container remove 80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 09:02:14 compute-0 systemd[1]: libpod-conmon-80a2229b6d2d8cd6238b407e7251e267f26f781a270b5c8fe1c69d9d81ed4014.scope: Deactivated successfully.
Oct 11 09:02:14 compute-0 podman[345914]: 2025-10-11 09:02:14.643795127 +0000 UTC m=+0.122313451 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 09:02:14 compute-0 podman[345951]: 2025-10-11 09:02:14.807223503 +0000 UTC m=+0.074206420 container create 7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 09:02:14 compute-0 podman[345951]: 2025-10-11 09:02:14.768660982 +0000 UTC m=+0.035643949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:02:14 compute-0 systemd[1]: Started libpod-conmon-7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586.scope.
Oct 11 09:02:14 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:02:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278cd0a6b40fe7547858b9e04af7e8b2734607ec3e191647be754a5ff36b2d97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:02:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278cd0a6b40fe7547858b9e04af7e8b2734607ec3e191647be754a5ff36b2d97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:02:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278cd0a6b40fe7547858b9e04af7e8b2734607ec3e191647be754a5ff36b2d97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:02:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278cd0a6b40fe7547858b9e04af7e8b2734607ec3e191647be754a5ff36b2d97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:02:14 compute-0 podman[345951]: 2025-10-11 09:02:14.941871057 +0000 UTC m=+0.208854014 container init 7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 09:02:14 compute-0 podman[345951]: 2025-10-11 09:02:14.956481394 +0000 UTC m=+0.223464271 container start 7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 09:02:14 compute-0 podman[345951]: 2025-10-11 09:02:14.960341154 +0000 UTC m=+0.227324111 container attach 7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:02:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:02:15.198 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:02:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:02:15.200 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:02:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:02:15.201 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:02:15 compute-0 nova_compute[260935]: 2025-10-11 09:02:15.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:15 compute-0 ceph-mon[74313]: pgmap v1772: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Oct 11 09:02:15 compute-0 loving_carson[345968]: {
Oct 11 09:02:15 compute-0 loving_carson[345968]:     "0": [
Oct 11 09:02:15 compute-0 loving_carson[345968]:         {
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "devices": [
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "/dev/loop3"
Oct 11 09:02:15 compute-0 loving_carson[345968]:             ],
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "lv_name": "ceph_lv0",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "lv_size": "21470642176",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "name": "ceph_lv0",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "tags": {
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.cluster_name": "ceph",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.crush_device_class": "",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.encrypted": "0",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.osd_id": "0",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.type": "block",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.vdo": "0"
Oct 11 09:02:15 compute-0 loving_carson[345968]:             },
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "type": "block",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "vg_name": "ceph_vg0"
Oct 11 09:02:15 compute-0 loving_carson[345968]:         }
Oct 11 09:02:15 compute-0 loving_carson[345968]:     ],
Oct 11 09:02:15 compute-0 loving_carson[345968]:     "1": [
Oct 11 09:02:15 compute-0 loving_carson[345968]:         {
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "devices": [
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "/dev/loop4"
Oct 11 09:02:15 compute-0 loving_carson[345968]:             ],
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "lv_name": "ceph_lv1",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "lv_size": "21470642176",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "name": "ceph_lv1",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "tags": {
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.cluster_name": "ceph",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.crush_device_class": "",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.encrypted": "0",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.osd_id": "1",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.type": "block",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.vdo": "0"
Oct 11 09:02:15 compute-0 loving_carson[345968]:             },
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "type": "block",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "vg_name": "ceph_vg1"
Oct 11 09:02:15 compute-0 loving_carson[345968]:         }
Oct 11 09:02:15 compute-0 loving_carson[345968]:     ],
Oct 11 09:02:15 compute-0 loving_carson[345968]:     "2": [
Oct 11 09:02:15 compute-0 loving_carson[345968]:         {
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "devices": [
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "/dev/loop5"
Oct 11 09:02:15 compute-0 loving_carson[345968]:             ],
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "lv_name": "ceph_lv2",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "lv_size": "21470642176",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "name": "ceph_lv2",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "tags": {
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.cluster_name": "ceph",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.crush_device_class": "",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.encrypted": "0",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.osd_id": "2",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.type": "block",
Oct 11 09:02:15 compute-0 loving_carson[345968]:                 "ceph.vdo": "0"
Oct 11 09:02:15 compute-0 loving_carson[345968]:             },
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "type": "block",
Oct 11 09:02:15 compute-0 loving_carson[345968]:             "vg_name": "ceph_vg2"
Oct 11 09:02:15 compute-0 loving_carson[345968]:         }
Oct 11 09:02:15 compute-0 loving_carson[345968]:     ]
Oct 11 09:02:15 compute-0 loving_carson[345968]: }
Oct 11 09:02:15 compute-0 systemd[1]: libpod-7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586.scope: Deactivated successfully.
Oct 11 09:02:15 compute-0 podman[345951]: 2025-10-11 09:02:15.845127132 +0000 UTC m=+1.112110029 container died 7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 09:02:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-278cd0a6b40fe7547858b9e04af7e8b2734607ec3e191647be754a5ff36b2d97-merged.mount: Deactivated successfully.
Oct 11 09:02:16 compute-0 podman[345951]: 2025-10-11 09:02:16.040718956 +0000 UTC m=+1.307701863 container remove 7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:02:16 compute-0 systemd[1]: libpod-conmon-7ced772c3be8065f4b233d546dfb079c87036c1b14bbcffe2c8607163890c586.scope: Deactivated successfully.
Oct 11 09:02:16 compute-0 sudo[345825]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:16 compute-0 sudo[345991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:02:16 compute-0 sudo[345991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:16 compute-0 sudo[345991]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:16 compute-0 sudo[346016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:02:16 compute-0 sudo[346016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:16 compute-0 sudo[346016]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:16 compute-0 sudo[346041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:02:16 compute-0 sudo[346041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:16 compute-0 sudo[346041]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:16 compute-0 sudo[346066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:02:16 compute-0 sudo[346066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:16 compute-0 podman[346134]: 2025-10-11 09:02:16.918521805 +0000 UTC m=+0.097234557 container create 84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 09:02:16 compute-0 podman[346134]: 2025-10-11 09:02:16.849927896 +0000 UTC m=+0.028640668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:02:17 compute-0 systemd[1]: Started libpod-conmon-84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692.scope.
Oct 11 09:02:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:02:17 compute-0 podman[346134]: 2025-10-11 09:02:17.15061047 +0000 UTC m=+0.329323202 container init 84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ptolemy, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:02:17 compute-0 podman[346134]: 2025-10-11 09:02:17.163581041 +0000 UTC m=+0.342293763 container start 84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:02:17 compute-0 eloquent_ptolemy[346150]: 167 167
Oct 11 09:02:17 compute-0 systemd[1]: libpod-84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692.scope: Deactivated successfully.
Oct 11 09:02:17 compute-0 conmon[346150]: conmon 84ffff01ce1ab4826c0e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692.scope/container/memory.events
Oct 11 09:02:17 compute-0 podman[346134]: 2025-10-11 09:02:17.216176982 +0000 UTC m=+0.394889714 container attach 84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:02:17 compute-0 podman[346134]: 2025-10-11 09:02:17.216728918 +0000 UTC m=+0.395441630 container died 84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 09:02:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8afd1d32b759a12e6f3d0387ddc9ea80dc9150ab81255c2224edfd5903dd732-merged.mount: Deactivated successfully.
Oct 11 09:02:17 compute-0 podman[346134]: 2025-10-11 09:02:17.501535378 +0000 UTC m=+0.680248110 container remove 84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct 11 09:02:17 compute-0 systemd[1]: libpod-conmon-84ffff01ce1ab4826c0ec9da1929d510f385dfdc589e99dc0375eaa2170f7692.scope: Deactivated successfully.
Oct 11 09:02:17 compute-0 ceph-mon[74313]: pgmap v1773: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:17 compute-0 podman[346174]: 2025-10-11 09:02:17.708196838 +0000 UTC m=+0.027778404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:02:17 compute-0 podman[346174]: 2025-10-11 09:02:17.817349193 +0000 UTC m=+0.136930739 container create 45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 09:02:17 compute-0 systemd[1]: Started libpod-conmon-45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e.scope.
Oct 11 09:02:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb077074efce836e5f959ba4b92713fa27aa13fdea4d940d66415d4a1de8858/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb077074efce836e5f959ba4b92713fa27aa13fdea4d940d66415d4a1de8858/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb077074efce836e5f959ba4b92713fa27aa13fdea4d940d66415d4a1de8858/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb077074efce836e5f959ba4b92713fa27aa13fdea4d940d66415d4a1de8858/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:02:18 compute-0 podman[346174]: 2025-10-11 09:02:18.107791675 +0000 UTC m=+0.427373321 container init 45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 09:02:18 compute-0 podman[346174]: 2025-10-11 09:02:18.1205748 +0000 UTC m=+0.440156346 container start 45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ardinghelli, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:02:18 compute-0 podman[346188]: 2025-10-11 09:02:18.121583898 +0000 UTC m=+0.246806916 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 09:02:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1774: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:18 compute-0 podman[346174]: 2025-10-11 09:02:18.225960617 +0000 UTC m=+0.545542213 container attach 45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ardinghelli, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct 11 09:02:18 compute-0 podman[346213]: 2025-10-11 09:02:18.375107455 +0000 UTC m=+0.371539317 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 09:02:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:02:18 compute-0 nova_compute[260935]: 2025-10-11 09:02:18.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]: {
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:         "osd_id": 2,
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:         "type": "bluestore"
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:     },
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:         "osd_id": 0,
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:         "type": "bluestore"
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:     },
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:         "osd_id": 1,
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:         "type": "bluestore"
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]:     }
Oct 11 09:02:19 compute-0 nifty_ardinghelli[346211]: }
Oct 11 09:02:19 compute-0 systemd[1]: libpod-45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e.scope: Deactivated successfully.
Oct 11 09:02:19 compute-0 systemd[1]: libpod-45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e.scope: Consumed 1.193s CPU time.
Oct 11 09:02:19 compute-0 podman[346273]: 2025-10-11 09:02:19.3775027 +0000 UTC m=+0.045212001 container died 45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ardinghelli, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 09:02:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bb077074efce836e5f959ba4b92713fa27aa13fdea4d940d66415d4a1de8858-merged.mount: Deactivated successfully.
Oct 11 09:02:19 compute-0 podman[346273]: 2025-10-11 09:02:19.449459714 +0000 UTC m=+0.117168925 container remove 45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 09:02:19 compute-0 systemd[1]: libpod-conmon-45b6947b021359043977d85186ac1866079f8763341eaccfcfa63c1f32684c9e.scope: Deactivated successfully.
Oct 11 09:02:19 compute-0 sudo[346066]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:02:19 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:02:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:02:19 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:02:19 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a410bbd8-fb0d-497c-8ae0-4041827a06a2 does not exist
Oct 11 09:02:19 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 5abf3b44-2651-4b50-860f-0e63cf0e6f93 does not exist
Oct 11 09:02:19 compute-0 ceph-mon[74313]: pgmap v1774: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:19 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:02:19 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:02:19 compute-0 sudo[346288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:02:19 compute-0 sudo[346288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:19 compute-0 sudo[346288]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:19 compute-0 sudo[346313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:02:19 compute-0 sudo[346313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:02:19 compute-0 sudo[346313]: pam_unix(sudo:session): session closed for user root
Oct 11 09:02:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1775: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:20 compute-0 nova_compute[260935]: 2025-10-11 09:02:20.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:21 compute-0 ceph-mon[74313]: pgmap v1775: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:23 compute-0 ceph-mon[74313]: pgmap v1776: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:02:23 compute-0 nova_compute[260935]: 2025-10-11 09:02:23.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:02:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:02:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:02:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:02:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:02:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:02:25 compute-0 nova_compute[260935]: 2025-10-11 09:02:25.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:25 compute-0 ceph-mon[74313]: pgmap v1777: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:27 compute-0 ceph-mon[74313]: pgmap v1778: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1779: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:02:28 compute-0 nova_compute[260935]: 2025-10-11 09:02:28.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:29 compute-0 ceph-mon[74313]: pgmap v1779: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:30 compute-0 nova_compute[260935]: 2025-10-11 09:02:30.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:31 compute-0 ceph-mon[74313]: pgmap v1780: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:33 compute-0 ceph-mon[74313]: pgmap v1781: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:02:33 compute-0 nova_compute[260935]: 2025-10-11 09:02:33.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:34 compute-0 ceph-mon[74313]: pgmap v1782: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:35 compute-0 nova_compute[260935]: 2025-10-11 09:02:35.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1783: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:37 compute-0 ceph-mon[74313]: pgmap v1783: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:37 compute-0 ovn_controller[152945]: 2025-10-11T09:02:37Z|00789|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:02:37 compute-0 ovn_controller[152945]: 2025-10-11T09:02:37Z|00790|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:02:37 compute-0 nova_compute[260935]: 2025-10-11 09:02:37.682 2 DEBUG nova.compute.manager [req-1cce769b-0f1a-488b-a39e-3ce12748e312 req-d486c419-e27a-413a-8e58-70eb54824d6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Received event network-vif-deleted-bcfaf217-8703-4c1e-bf80-d24ab0e642bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:02:37 compute-0 nova_compute[260935]: 2025-10-11 09:02:37.683 2 INFO nova.compute.manager [req-1cce769b-0f1a-488b-a39e-3ce12748e312 req-d486c419-e27a-413a-8e58-70eb54824d6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Neutron deleted interface bcfaf217-8703-4c1e-bf80-d24ab0e642bd; detaching it from the instance and deleting it from the info cache
Oct 11 09:02:37 compute-0 nova_compute[260935]: 2025-10-11 09:02:37.683 2 DEBUG nova.network.neutron [req-1cce769b-0f1a-488b-a39e-3ce12748e312 req-d486c419-e27a-413a-8e58-70eb54824d6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:02:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:02:38 compute-0 nova_compute[260935]: 2025-10-11 09:02:38.740 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:02:38 compute-0 nova_compute[260935]: 2025-10-11 09:02:38.742 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:02:38 compute-0 nova_compute[260935]: 2025-10-11 09:02:38.743 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:02:38 compute-0 nova_compute[260935]: 2025-10-11 09:02:38.743 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:02:38 compute-0 nova_compute[260935]: 2025-10-11 09:02:38.744 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:02:38 compute-0 nova_compute[260935]: 2025-10-11 09:02:38.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:02:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/862062399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:02:39 compute-0 nova_compute[260935]: 2025-10-11 09:02:39.232 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:02:39 compute-0 ceph-mon[74313]: pgmap v1784: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:39 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/862062399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:02:39 compute-0 podman[346361]: 2025-10-11 09:02:39.436241954 +0000 UTC m=+0.123637020 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:02:40 compute-0 nova_compute[260935]: 2025-10-11 09:02:40.186 2 DEBUG nova.compute.manager [None req-5832882f-6ec9-43d0-8673-3453baa1ea4d - - - - - -] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:02:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1785: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:40 compute-0 nova_compute[260935]: 2025-10-11 09:02:40.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:41 compute-0 nova_compute[260935]: 2025-10-11 09:02:41.203 2 DEBUG nova.network.neutron [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:02:41 compute-0 nova_compute[260935]: 2025-10-11 09:02:41.209 2 DEBUG nova.compute.manager [req-1cce769b-0f1a-488b-a39e-3ce12748e312 req-d486c419-e27a-413a-8e58-70eb54824d6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Detach interface failed, port_id=bcfaf217-8703-4c1e-bf80-d24ab0e642bd, reason: Instance 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 09:02:41 compute-0 ceph-mon[74313]: pgmap v1785: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:02:41 compute-0 nova_compute[260935]: 2025-10-11 09:02:41.598 2 INFO nova.compute.manager [-] [instance: 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361] Took 106.00 seconds to deallocate network for instance.
Oct 11 09:02:41 compute-0 nova_compute[260935]: 2025-10-11 09:02:41.623 2 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 91.94 sec
Oct 11 09:02:41 compute-0 nova_compute[260935]: 2025-10-11 09:02:41.814 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:02:41 compute-0 nova_compute[260935]: 2025-10-11 09:02:41.815 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:02:41 compute-0 nova_compute[260935]: 2025-10-11 09:02:41.815 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:02:41 compute-0 nova_compute[260935]: 2025-10-11 09:02:41.821 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:02:41 compute-0 nova_compute[260935]: 2025-10-11 09:02:41.821 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:02:41 compute-0 nova_compute[260935]: 2025-10-11 09:02:41.827 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:02:41 compute-0 nova_compute[260935]: 2025-10-11 09:02:41.827 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:02:42 compute-0 nova_compute[260935]: 2025-10-11 09:02:42.022 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:02:42 compute-0 nova_compute[260935]: 2025-10-11 09:02:42.023 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:02:42 compute-0 nova_compute[260935]: 2025-10-11 09:02:42.182 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:02:42 compute-0 nova_compute[260935]: 2025-10-11 09:02:42.184 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3269MB free_disk=59.8099365234375GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:02:42 compute-0 nova_compute[260935]: 2025-10-11 09:02:42.184 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:02:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.227 2 DEBUG nova.network.neutron [req-6217c10a-82d7-4f3c-a0f0-e5f6d20ca543 req-d8d81b36-c6d1-45d0-bf25-b4cf99a6ec13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated VIF entry in instance network info cache for port c992d6e3-ef59-42a0-80c5-109fe0c056cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.228 2 DEBUG nova.network.neutron [req-6217c10a-82d7-4f3c-a0f0-e5f6d20ca543 req-d8d81b36-c6d1-45d0-bf25-b4cf99a6ec13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.230 2 DEBUG nova.network.neutron [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Updating instance_info_cache with network_info: [{"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:02:43 compute-0 ceph-mon[74313]: pgmap v1786: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.524 2 DEBUG oslo_concurrency.lockutils [req-6217c10a-82d7-4f3c-a0f0-e5f6d20ca543 req-d8d81b36-c6d1-45d0-bf25-b4cf99a6ec13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:02:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.723 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Releasing lock "refresh_cache-15633aee-234a-4417-b5ea-f35f13820404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.724 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Instance network_info: |[{"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.725 2 DEBUG oslo_concurrency.lockutils [req-b3a49696-6fde-49ee-932a-08193025f2ee req-fcbdbf43-b827-47e3-b560-4c19f9fd1de3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-15633aee-234a-4417-b5ea-f35f13820404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.726 2 DEBUG nova.network.neutron [req-b3a49696-6fde-49ee-932a-08193025f2ee req-fcbdbf43-b827-47e3-b560-4c19f9fd1de3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Refreshing network info cache for port 074db183-8679-40f2-b39d-06759a8dfceb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.732 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Start _get_guest_xml network_info=[{"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.738 2 WARNING nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.747 2 DEBUG nova.virt.libvirt.host [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.749 2 DEBUG nova.virt.libvirt.host [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.751 2 DEBUG oslo_concurrency.processutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.825 2 DEBUG nova.virt.libvirt.host [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.827 2 DEBUG nova.virt.libvirt.host [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.828 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.828 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.830 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.830 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.831 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.831 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.832 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.832 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.833 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.833 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.834 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.834 2 DEBUG nova.virt.hardware [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.842 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:02:43 compute-0 nova_compute[260935]: 2025-10-11 09:02:43.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1787: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 11 09:02:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:02:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3003496757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:02:44 compute-0 nova_compute[260935]: 2025-10-11 09:02:44.251 2 DEBUG oslo_concurrency.processutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:02:44 compute-0 nova_compute[260935]: 2025-10-11 09:02:44.259 2 DEBUG nova.compute.provider_tree [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:02:44 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3003496757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:02:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:02:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1427179780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:02:44 compute-0 nova_compute[260935]: 2025-10-11 09:02:44.351 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:02:44 compute-0 nova_compute[260935]: 2025-10-11 09:02:44.389 2 DEBUG nova.storage.rbd_utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image 15633aee-234a-4417-b5ea-f35f13820404_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:02:44 compute-0 nova_compute[260935]: 2025-10-11 09:02:44.394 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:02:44 compute-0 nova_compute[260935]: 2025-10-11 09:02:44.435 2 DEBUG nova.scheduler.client.report [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:02:44 compute-0 nova_compute[260935]: 2025-10-11 09:02:44.701 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:02:44 compute-0 nova_compute[260935]: 2025-10-11 09:02:44.706 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 2.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:02:44 compute-0 podman[346463]: 2025-10-11 09:02:44.795150915 +0000 UTC m=+0.087939852 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct 11 09:02:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:02:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3146808339' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:02:44 compute-0 nova_compute[260935]: 2025-10-11 09:02:44.843 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:02:44 compute-0 nova_compute[260935]: 2025-10-11 09:02:44.845 2 DEBUG nova.virt.libvirt.vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:00:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2012807637',display_name='tempest-ServerRescueTestJSON-server-2012807637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2012807637',id=87,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-es16o3e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-1667208638
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:00:52Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=15633aee-234a-4417-b5ea-f35f13820404,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:02:44 compute-0 nova_compute[260935]: 2025-10-11 09:02:44.846 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converting VIF {"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:02:44 compute-0 nova_compute[260935]: 2025-10-11 09:02:44.847 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:02:44 compute-0 nova_compute[260935]: 2025-10-11 09:02:44.848 2 DEBUG nova.objects.instance [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lazy-loading 'pci_devices' on Instance uuid 15633aee-234a-4417-b5ea-f35f13820404 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:02:44 compute-0 nova_compute[260935]: 2025-10-11 09:02:44.988 2 INFO nova.scheduler.client.report [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Deleted allocations for instance 2ca1b1c6-7cd2-42f9-a24b-5c20bb567361
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.066 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:02:45 compute-0 nova_compute[260935]:   <uuid>15633aee-234a-4417-b5ea-f35f13820404</uuid>
Oct 11 09:02:45 compute-0 nova_compute[260935]:   <name>instance-00000057</name>
Oct 11 09:02:45 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:02:45 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:02:45 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerRescueTestJSON-server-2012807637</nova:name>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:02:43</nova:creationTime>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:02:45 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:02:45 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:02:45 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:02:45 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:02:45 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:02:45 compute-0 nova_compute[260935]:         <nova:user uuid="df5a3c3a5d68473aa2e2950de45ebce1">tempest-ServerRescueTestJSON-1667208638-project-member</nova:user>
Oct 11 09:02:45 compute-0 nova_compute[260935]:         <nova:project uuid="11b44ad9193e4e43838d52056ccf413e">tempest-ServerRescueTestJSON-1667208638</nova:project>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:02:45 compute-0 nova_compute[260935]:         <nova:port uuid="074db183-8679-40f2-b39d-06759a8dfceb">
Oct 11 09:02:45 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:02:45 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:02:45 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <system>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <entry name="serial">15633aee-234a-4417-b5ea-f35f13820404</entry>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <entry name="uuid">15633aee-234a-4417-b5ea-f35f13820404</entry>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     </system>
Oct 11 09:02:45 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:02:45 compute-0 nova_compute[260935]:   <os>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:   </os>
Oct 11 09:02:45 compute-0 nova_compute[260935]:   <features>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:   </features>
Oct 11 09:02:45 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:02:45 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:02:45 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/15633aee-234a-4417-b5ea-f35f13820404_disk">
Oct 11 09:02:45 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       </source>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:02:45 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/15633aee-234a-4417-b5ea-f35f13820404_disk.config">
Oct 11 09:02:45 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       </source>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:02:45 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:33:2b:94"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <target dev="tap074db183-86"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404/console.log" append="off"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <video>
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     </video>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:02:45 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:02:45 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:02:45 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:02:45 compute-0 nova_compute[260935]: </domain>
Oct 11 09:02:45 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.068 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Preparing to wait for external event network-vif-plugged-074db183-8679-40f2-b39d-06759a8dfceb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.068 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "15633aee-234a-4417-b5ea-f35f13820404-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.068 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.069 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.070 2 DEBUG nova.virt.libvirt.vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:00:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2012807637',display_name='tempest-ServerRescueTestJSON-server-2012807637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2012807637',id=87,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-es16o3e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-
1667208638-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:00:52Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=15633aee-234a-4417-b5ea-f35f13820404,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.070 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converting VIF {"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.071 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.072 2 DEBUG os_vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.075 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.076 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.092 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap074db183-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.093 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap074db183-86, col_values=(('external_ids', {'iface-id': '074db183-8679-40f2-b39d-06759a8dfceb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:2b:94', 'vm-uuid': '15633aee-234a-4417-b5ea-f35f13820404'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:45 compute-0 NetworkManager[44960]: <info>  [1760173365.0967] manager: (tap074db183-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.107 2 INFO os_vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86')
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.164 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.164 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.164 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.164 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 15633aee-234a-4417-b5ea-f35f13820404 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.165 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.165 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:02:45 compute-0 ceph-mon[74313]: pgmap v1787: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 11 09:02:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1427179780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:02:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3146808339' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.501 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.570 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.571 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.571 2 DEBUG nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] No VIF found with MAC fa:16:3e:33:2b:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.572 2 INFO nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Using config drive
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.608 2 DEBUG nova.storage.rbd_utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image 15633aee-234a-4417-b5ea-f35f13820404_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.634 2 DEBUG oslo_concurrency.lockutils [None req-ce634aba-2255-4bb3-84bd-81dc9c6307c7 8d211063ed874837bead2e13898b31d4 d33b48586acf4e6c8254f2a1213b001c - - default default] Lock "2ca1b1c6-7cd2-42f9-a24b-5c20bb567361" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 110.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:02:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:02:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3292923213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.980 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:02:45 compute-0 nova_compute[260935]: 2025-10-11 09:02:45.987 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:02:46 compute-0 nova_compute[260935]: 2025-10-11 09:02:46.114 2 DEBUG oslo_concurrency.lockutils [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "15633aee-234a-4417-b5ea-f35f13820404" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:02:46 compute-0 nova_compute[260935]: 2025-10-11 09:02:46.120 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:02:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 11 09:02:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/550488350' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 11 09:02:46 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.19097 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 11 09:02:46 compute-0 ceph-mgr[74605]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 09:02:46 compute-0 ceph-mgr[74605]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 11 09:02:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 11 09:02:46 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3292923213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:02:46 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/550488350' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 11 09:02:46 compute-0 nova_compute[260935]: 2025-10-11 09:02:46.646 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:02:46 compute-0 nova_compute[260935]: 2025-10-11 09:02:46.646 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:02:47 compute-0 nova_compute[260935]: 2025-10-11 09:02:47.256 2 INFO nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Creating config drive at /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404/disk.config
Oct 11 09:02:47 compute-0 nova_compute[260935]: 2025-10-11 09:02:47.264 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyxsfcnce execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:02:47 compute-0 ceph-mon[74313]: from='client.19097 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 11 09:02:47 compute-0 ceph-mon[74313]: pgmap v1788: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 11 09:02:47 compute-0 nova_compute[260935]: 2025-10-11 09:02:47.424 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyxsfcnce" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:02:47 compute-0 nova_compute[260935]: 2025-10-11 09:02:47.456 2 DEBUG nova.storage.rbd_utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] rbd image 15633aee-234a-4417-b5ea-f35f13820404_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:02:47 compute-0 nova_compute[260935]: 2025-10-11 09:02:47.459 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404/disk.config 15633aee-234a-4417-b5ea-f35f13820404_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:02:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:02:47.524 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:02:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:02:47.525 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:02:47 compute-0 nova_compute[260935]: 2025-10-11 09:02:47.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:47 compute-0 nova_compute[260935]: 2025-10-11 09:02:47.604 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404/disk.config 15633aee-234a-4417-b5ea-f35f13820404_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:02:47 compute-0 nova_compute[260935]: 2025-10-11 09:02:47.605 2 INFO nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Deleting local config drive /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404/disk.config because it was imported into RBD.
Oct 11 09:02:47 compute-0 kernel: tap074db183-86: entered promiscuous mode
Oct 11 09:02:47 compute-0 NetworkManager[44960]: <info>  [1760173367.6797] manager: (tap074db183-86): new Tun device (/org/freedesktop/NetworkManager/Devices/344)
Oct 11 09:02:47 compute-0 ovn_controller[152945]: 2025-10-11T09:02:47Z|00791|if_status|INFO|Not updating pb chassis for 074db183-8679-40f2-b39d-06759a8dfceb now as sb is readonly
Oct 11 09:02:47 compute-0 nova_compute[260935]: 2025-10-11 09:02:47.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:47 compute-0 nova_compute[260935]: 2025-10-11 09:02:47.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:47 compute-0 systemd-udevd[346580]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:02:47 compute-0 nova_compute[260935]: 2025-10-11 09:02:47.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:47 compute-0 systemd-machined[215705]: New machine qemu-99-instance-00000057.
Oct 11 09:02:47 compute-0 NetworkManager[44960]: <info>  [1760173367.7467] device (tap074db183-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:02:47 compute-0 NetworkManager[44960]: <info>  [1760173367.7475] device (tap074db183-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:02:47 compute-0 systemd[1]: Started Virtual Machine qemu-99-instance-00000057.
Oct 11 09:02:48 compute-0 ovn_controller[152945]: 2025-10-11T09:02:48Z|00792|binding|INFO|Claiming lport 074db183-8679-40f2-b39d-06759a8dfceb for this chassis.
Oct 11 09:02:48 compute-0 ovn_controller[152945]: 2025-10-11T09:02:48Z|00793|binding|INFO|074db183-8679-40f2-b39d-06759a8dfceb: Claiming fa:16:3e:33:2b:94 10.100.0.8
Oct 11 09:02:48 compute-0 ovn_controller[152945]: 2025-10-11T09:02:48Z|00794|binding|INFO|Setting lport 074db183-8679-40f2-b39d-06759a8dfceb ovn-installed in OVS
Oct 11 09:02:48 compute-0 nova_compute[260935]: 2025-10-11 09:02:48.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1789: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1.7 KiB/s wr, 0 op/s
Oct 11 09:02:48 compute-0 ovn_controller[152945]: 2025-10-11T09:02:48Z|00795|binding|INFO|Setting lport 074db183-8679-40f2-b39d-06759a8dfceb up in Southbound
Oct 11 09:02:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:02:48.619 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:2b:94 10.100.0.8'], port_security=['fa:16:3e:33:2b:94 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '15633aee-234a-4417-b5ea-f35f13820404', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4686205-cbf0-4221-bc49-ebb890c4a59f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '11b44ad9193e4e43838d52056ccf413e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4b0cfb76-aebd-4cfc-96f5-00bacc345d65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9601b6e8-d9bc-46ca-99e8-33ec15f713e5, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=074db183-8679-40f2-b39d-06759a8dfceb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:02:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:02:48.620 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 074db183-8679-40f2-b39d-06759a8dfceb in datapath e4686205-cbf0-4221-bc49-ebb890c4a59f bound to our chassis
Oct 11 09:02:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:02:48.622 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e4686205-cbf0-4221-bc49-ebb890c4a59f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 11 09:02:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:02:48.623 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[82e9519b-9c42-4cda-857a-7560bbecc39b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:02:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.729029) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173368729070, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 837, "num_deletes": 251, "total_data_size": 1129490, "memory_usage": 1144160, "flush_reason": "Manual Compaction"}
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173368747154, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 1119042, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36805, "largest_seqno": 37641, "table_properties": {"data_size": 1114767, "index_size": 1991, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9392, "raw_average_key_size": 19, "raw_value_size": 1106203, "raw_average_value_size": 2304, "num_data_blocks": 89, "num_entries": 480, "num_filter_entries": 480, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173294, "oldest_key_time": 1760173294, "file_creation_time": 1760173368, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 18189 microseconds, and 8071 cpu microseconds.
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.747213) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 1119042 bytes OK
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.747241) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.750077) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.750128) EVENT_LOG_v1 {"time_micros": 1760173368750120, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.750155) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 1125348, prev total WAL file size 1125348, number of live WAL files 2.
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.751521) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(1092KB)], [80(8342KB)]
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173368751671, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 9662127, "oldest_snapshot_seqno": -1}
Oct 11 09:02:48 compute-0 podman[346633]: 2025-10-11 09:02:48.789453899 +0000 UTC m=+0.095000223 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6157 keys, 8067508 bytes, temperature: kUnknown
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173368813160, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8067508, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8027354, "index_size": 23660, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15429, "raw_key_size": 155707, "raw_average_key_size": 25, "raw_value_size": 7917848, "raw_average_value_size": 1285, "num_data_blocks": 954, "num_entries": 6157, "num_filter_entries": 6157, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173368, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.813503) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8067508 bytes
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.815214) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.8 rd, 130.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.1 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(15.8) write-amplify(7.2) OK, records in: 6673, records dropped: 516 output_compression: NoCompression
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.815245) EVENT_LOG_v1 {"time_micros": 1760173368815231, "job": 46, "event": "compaction_finished", "compaction_time_micros": 61615, "compaction_time_cpu_micros": 37315, "output_level": 6, "num_output_files": 1, "total_output_size": 8067508, "num_input_records": 6673, "num_output_records": 6157, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173368816008, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173368819619, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.751359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.819843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.819848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.819850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.819852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:02:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:02:48.819854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:02:48 compute-0 podman[346634]: 2025-10-11 09:02:48.841423833 +0000 UTC m=+0.139407341 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 09:02:48 compute-0 nova_compute[260935]: 2025-10-11 09:02:48.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:49 compute-0 nova_compute[260935]: 2025-10-11 09:02:49.104 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173369.10403, 15633aee-234a-4417-b5ea-f35f13820404 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:02:49 compute-0 nova_compute[260935]: 2025-10-11 09:02:49.105 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] VM Started (Lifecycle Event)
Oct 11 09:02:49 compute-0 nova_compute[260935]: 2025-10-11 09:02:49.171 2 DEBUG nova.network.neutron [req-b3a49696-6fde-49ee-932a-08193025f2ee req-fcbdbf43-b827-47e3-b560-4c19f9fd1de3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Updated VIF entry in instance network info cache for port 074db183-8679-40f2-b39d-06759a8dfceb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:02:49 compute-0 nova_compute[260935]: 2025-10-11 09:02:49.172 2 DEBUG nova.network.neutron [req-b3a49696-6fde-49ee-932a-08193025f2ee req-fcbdbf43-b827-47e3-b560-4c19f9fd1de3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Updating instance_info_cache with network_info: [{"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:02:49 compute-0 nova_compute[260935]: 2025-10-11 09:02:49.217 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:02:49 compute-0 nova_compute[260935]: 2025-10-11 09:02:49.224 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173369.1061523, 15633aee-234a-4417-b5ea-f35f13820404 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:02:49 compute-0 nova_compute[260935]: 2025-10-11 09:02:49.224 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] VM Paused (Lifecycle Event)
Oct 11 09:02:49 compute-0 ceph-mon[74313]: pgmap v1789: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1.7 KiB/s wr, 0 op/s
Oct 11 09:02:49 compute-0 nova_compute[260935]: 2025-10-11 09:02:49.573 2 DEBUG oslo_concurrency.lockutils [req-b3a49696-6fde-49ee-932a-08193025f2ee req-fcbdbf43-b827-47e3-b560-4c19f9fd1de3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-15633aee-234a-4417-b5ea-f35f13820404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:02:49 compute-0 nova_compute[260935]: 2025-10-11 09:02:49.648 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:02:49 compute-0 nova_compute[260935]: 2025-10-11 09:02:49.653 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: deleting, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:02:50 compute-0 nova_compute[260935]: 2025-10-11 09:02:50.079 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] During sync_power_state the instance has a pending task (deleting). Skip.
Oct 11 09:02:50 compute-0 nova_compute[260935]: 2025-10-11 09:02:50.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1.7 KiB/s wr, 0 op/s
Oct 11 09:02:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:02:50.528 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:02:50 compute-0 nova_compute[260935]: 2025-10-11 09:02:50.645 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:02:50 compute-0 nova_compute[260935]: 2025-10-11 09:02:50.646 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:02:51 compute-0 sshd-session[346677]: Invalid user hardy from 155.4.244.179 port 5969
Oct 11 09:02:51 compute-0 sshd-session[346677]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:02:51 compute-0 sshd-session[346677]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:02:51 compute-0 ceph-mon[74313]: pgmap v1790: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1.7 KiB/s wr, 0 op/s
Oct 11 09:02:51 compute-0 nova_compute[260935]: 2025-10-11 09:02:51.638 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:02:51 compute-0 nova_compute[260935]: 2025-10-11 09:02:51.639 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:02:51 compute-0 nova_compute[260935]: 2025-10-11 09:02:51.639 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 09:02:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1791: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 1.5 KiB/s rd, 2.1 KiB/s wr, 2 op/s
Oct 11 09:02:52 compute-0 nova_compute[260935]: 2025-10-11 09:02:52.834 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 09:02:53 compute-0 sshd-session[346677]: Failed password for invalid user hardy from 155.4.244.179 port 5969 ssh2
Oct 11 09:02:53 compute-0 ceph-mon[74313]: pgmap v1791: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 1.5 KiB/s rd, 2.1 KiB/s wr, 2 op/s
Oct 11 09:02:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:02:53 compute-0 nova_compute[260935]: 2025-10-11 09:02:53.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:53 compute-0 sshd-session[346677]: Received disconnect from 155.4.244.179 port 5969:11: Bye Bye [preauth]
Oct 11 09:02:53 compute-0 sshd-session[346677]: Disconnected from invalid user hardy 155.4.244.179 port 5969 [preauth]
Oct 11 09:02:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1792: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 14 KiB/s wr, 10 op/s
Oct 11 09:02:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:02:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:02:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:02:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:02:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:02:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:02:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:02:54
Oct 11 09:02:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:02:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:02:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.control', 'backups', 'images', '.mgr', 'default.rgw.log', 'volumes']
Oct 11 09:02:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:02:55 compute-0 nova_compute[260935]: 2025-10-11 09:02:55.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:02:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:02:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:02:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:02:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:02:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:02:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:02:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:02:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:02:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:02:55 compute-0 ceph-mon[74313]: pgmap v1792: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 14 KiB/s wr, 10 op/s
Oct 11 09:02:55 compute-0 nova_compute[260935]: 2025-10-11 09:02:55.768 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:02:55 compute-0 nova_compute[260935]: 2025-10-11 09:02:55.769 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:02:55 compute-0 nova_compute[260935]: 2025-10-11 09:02:55.770 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:02:55 compute-0 nova_compute[260935]: 2025-10-11 09:02:55.770 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:02:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 14 KiB/s wr, 10 op/s
Oct 11 09:02:57 compute-0 ceph-mon[74313]: pgmap v1793: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 14 KiB/s wr, 10 op/s
Oct 11 09:02:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:02:57 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/345437772' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:02:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:02:57 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/345437772' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:02:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:02:58 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2455969037' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:02:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:02:58 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2455969037' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:02:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 14 KiB/s wr, 10 op/s
Oct 11 09:02:58 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/345437772' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:02:58 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/345437772' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:02:58 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2455969037' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:02:58 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2455969037' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:02:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:02:58 compute-0 nova_compute[260935]: 2025-10-11 09:02:58.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:02:59 compute-0 ceph-mon[74313]: pgmap v1794: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 14 KiB/s wr, 10 op/s
Oct 11 09:03:00 compute-0 nova_compute[260935]: 2025-10-11 09:03:00.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 13 KiB/s wr, 10 op/s
Oct 11 09:03:01 compute-0 ceph-mon[74313]: pgmap v1795: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 13 KiB/s wr, 10 op/s
Oct 11 09:03:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1796: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 13 KiB/s wr, 10 op/s
Oct 11 09:03:03 compute-0 ceph-mon[74313]: pgmap v1796: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 13 KiB/s wr, 10 op/s
Oct 11 09:03:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:03:03 compute-0 nova_compute[260935]: 2025-10-11 09:03:03.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1797: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 12 KiB/s wr, 8 op/s
Oct 11 09:03:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:03:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/957849191' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:03:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:03:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/957849191' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:03:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:03:05 compute-0 nova_compute[260935]: 2025-10-11 09:03:05.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:05 compute-0 ceph-mon[74313]: pgmap v1797: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 12 KiB/s wr, 8 op/s
Oct 11 09:03:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/957849191' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:03:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/957849191' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:03:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:03:07 compute-0 ceph-mon[74313]: pgmap v1798: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:03:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:03:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:03:08 compute-0 sshd-session[346679]: Invalid user diego from 152.32.213.170 port 46702
Oct 11 09:03:08 compute-0 sshd-session[346679]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:03:08 compute-0 sshd-session[346679]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:03:08 compute-0 nova_compute[260935]: 2025-10-11 09:03:08.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:09 compute-0 ceph-mon[74313]: pgmap v1799: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:03:09 compute-0 nova_compute[260935]: 2025-10-11 09:03:09.625 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:03:09 compute-0 podman[346681]: 2025-10-11 09:03:09.840269997 +0000 UTC m=+0.122278922 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 09:03:10 compute-0 nova_compute[260935]: 2025-10-11 09:03:10.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:03:10 compute-0 nova_compute[260935]: 2025-10-11 09:03:10.969 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:03:10 compute-0 nova_compute[260935]: 2025-10-11 09:03:10.969 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:03:10 compute-0 nova_compute[260935]: 2025-10-11 09:03:10.970 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:03:10 compute-0 nova_compute[260935]: 2025-10-11 09:03:10.970 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:03:10 compute-0 nova_compute[260935]: 2025-10-11 09:03:10.971 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:03:10 compute-0 nova_compute[260935]: 2025-10-11 09:03:10.971 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:03:10 compute-0 nova_compute[260935]: 2025-10-11 09:03:10.971 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:03:10 compute-0 nova_compute[260935]: 2025-10-11 09:03:10.971 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:03:10 compute-0 nova_compute[260935]: 2025-10-11 09:03:10.971 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:03:10 compute-0 nova_compute[260935]: 2025-10-11 09:03:10.972 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:03:11 compute-0 sshd-session[346679]: Failed password for invalid user diego from 152.32.213.170 port 46702 ssh2
Oct 11 09:03:11 compute-0 nova_compute[260935]: 2025-10-11 09:03:11.194 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:03:11 compute-0 nova_compute[260935]: 2025-10-11 09:03:11.195 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:03:11 compute-0 nova_compute[260935]: 2025-10-11 09:03:11.195 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:03:11 compute-0 nova_compute[260935]: 2025-10-11 09:03:11.196 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:03:11 compute-0 nova_compute[260935]: 2025-10-11 09:03:11.196 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:03:11 compute-0 ceph-mon[74313]: pgmap v1800: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:03:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:03:11 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1374859541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:03:11 compute-0 nova_compute[260935]: 2025-10-11 09:03:11.705 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:03:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1801: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 426 B/s wr, 0 op/s
Oct 11 09:03:12 compute-0 sshd-session[346679]: Received disconnect from 152.32.213.170 port 46702:11: Bye Bye [preauth]
Oct 11 09:03:12 compute-0 sshd-session[346679]: Disconnected from invalid user diego 152.32.213.170 port 46702 [preauth]
Oct 11 09:03:12 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1374859541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:03:13 compute-0 ceph-mon[74313]: pgmap v1801: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 426 B/s wr, 0 op/s
Oct 11 09:03:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:03:13 compute-0 nova_compute[260935]: 2025-10-11 09:03:13.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 426 B/s wr, 0 op/s
Oct 11 09:03:14 compute-0 nova_compute[260935]: 2025-10-11 09:03:14.522 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:03:14 compute-0 nova_compute[260935]: 2025-10-11 09:03:14.523 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:03:14 compute-0 nova_compute[260935]: 2025-10-11 09:03:14.523 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:03:14 compute-0 nova_compute[260935]: 2025-10-11 09:03:14.525 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:03:14 compute-0 nova_compute[260935]: 2025-10-11 09:03:14.525 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:03:14 compute-0 nova_compute[260935]: 2025-10-11 09:03:14.528 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:03:14 compute-0 nova_compute[260935]: 2025-10-11 09:03:14.528 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:03:14 compute-0 nova_compute[260935]: 2025-10-11 09:03:14.531 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:03:14 compute-0 nova_compute[260935]: 2025-10-11 09:03:14.531 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:03:14 compute-0 nova_compute[260935]: 2025-10-11 09:03:14.720 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:03:14 compute-0 nova_compute[260935]: 2025-10-11 09:03:14.721 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3225MB free_disk=59.80977249145508GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:03:14 compute-0 nova_compute[260935]: 2025-10-11 09:03:14.722 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:03:14 compute-0 nova_compute[260935]: 2025-10-11 09:03:14.722 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:03:15 compute-0 nova_compute[260935]: 2025-10-11 09:03:15.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:03:15.198 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:03:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:03:15.200 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:03:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:03:15.201 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:03:15 compute-0 ceph-mon[74313]: pgmap v1802: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 426 B/s wr, 0 op/s
Oct 11 09:03:15 compute-0 podman[346723]: 2025-10-11 09:03:15.796541578 +0000 UTC m=+0.088996312 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:03:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 426 B/s wr, 0 op/s
Oct 11 09:03:17 compute-0 ceph-mon[74313]: pgmap v1803: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 426 B/s wr, 0 op/s
Oct 11 09:03:18 compute-0 nova_compute[260935]: 2025-10-11 09:03:18.032 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:03:18 compute-0 nova_compute[260935]: 2025-10-11 09:03:18.033 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:03:18 compute-0 nova_compute[260935]: 2025-10-11 09:03:18.033 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:03:18 compute-0 nova_compute[260935]: 2025-10-11 09:03:18.034 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 15633aee-234a-4417-b5ea-f35f13820404 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:03:18 compute-0 nova_compute[260935]: 2025-10-11 09:03:18.034 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:03:18 compute-0 nova_compute[260935]: 2025-10-11 09:03:18.035 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:03:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Oct 11 09:03:18 compute-0 nova_compute[260935]: 2025-10-11 09:03:18.254 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:03:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:03:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:03:18 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190328056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:03:18 compute-0 nova_compute[260935]: 2025-10-11 09:03:18.754 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:03:18 compute-0 nova_compute[260935]: 2025-10-11 09:03:18.763 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:03:18 compute-0 nova_compute[260935]: 2025-10-11 09:03:18.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:19 compute-0 ceph-mon[74313]: pgmap v1804: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Oct 11 09:03:19 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3190328056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:03:19 compute-0 podman[346765]: 2025-10-11 09:03:19.787610112 +0000 UTC m=+0.085742829 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:03:19 compute-0 podman[346766]: 2025-10-11 09:03:19.865247648 +0000 UTC m=+0.157021673 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 09:03:19 compute-0 sudo[346807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:03:19 compute-0 sudo[346807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:19 compute-0 sudo[346807]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:19 compute-0 sudo[346835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:03:19 compute-0 sudo[346835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:20 compute-0 sudo[346835]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:20 compute-0 sudo[346860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:03:20 compute-0 sudo[346860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:20 compute-0 sudo[346860]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:20 compute-0 nova_compute[260935]: 2025-10-11 09:03:20.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:20 compute-0 sudo[346885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 09:03:20 compute-0 sudo[346885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Oct 11 09:03:20 compute-0 sudo[346885]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:03:20 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:03:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:03:20 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:03:20 compute-0 sudo[346932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:03:20 compute-0 sudo[346932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:20 compute-0 sudo[346932]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:20 compute-0 sudo[346957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:03:20 compute-0 sudo[346957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:20 compute-0 sudo[346957]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:20 compute-0 sudo[346982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:03:20 compute-0 sudo[346982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:20 compute-0 sudo[346982]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:20 compute-0 sudo[347007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:03:20 compute-0 sudo[347007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:21 compute-0 nova_compute[260935]: 2025-10-11 09:03:21.014 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:03:21 compute-0 ceph-mon[74313]: pgmap v1805: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Oct 11 09:03:21 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:03:21 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:03:21 compute-0 sudo[347007]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:03:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:03:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:03:21 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:03:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:03:21 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:03:21 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 147dcab4-25d7-459e-a68f-397366782ce6 does not exist
Oct 11 09:03:21 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 2a22582f-8416-4602-8c94-2c3c7a855fbf does not exist
Oct 11 09:03:21 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev f9fd6c57-e21e-4d3c-8004-f04b18751943 does not exist
Oct 11 09:03:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:03:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:03:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:03:21 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:03:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:03:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:03:21 compute-0 sudo[347064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:03:21 compute-0 sudo[347064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:21 compute-0 sudo[347064]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:21 compute-0 sudo[347089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:03:21 compute-0 sudo[347089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:21 compute-0 sudo[347089]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:22 compute-0 sudo[347114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:03:22 compute-0 sudo[347114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:22 compute-0 sudo[347114]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:22 compute-0 sudo[347139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:03:22 compute-0 sudo[347139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1806: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Oct 11 09:03:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:03:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:03:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:03:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:03:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:03:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:03:22 compute-0 podman[347205]: 2025-10-11 09:03:22.578003529 +0000 UTC m=+0.055412252 container create 197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:03:22 compute-0 systemd[1]: Started libpod-conmon-197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194.scope.
Oct 11 09:03:22 compute-0 podman[347205]: 2025-10-11 09:03:22.550260547 +0000 UTC m=+0.027669290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:03:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:03:22 compute-0 podman[347205]: 2025-10-11 09:03:22.712169529 +0000 UTC m=+0.189578262 container init 197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:03:22 compute-0 podman[347205]: 2025-10-11 09:03:22.725353075 +0000 UTC m=+0.202761808 container start 197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 09:03:22 compute-0 podman[347205]: 2025-10-11 09:03:22.72973309 +0000 UTC m=+0.207141813 container attach 197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:03:22 compute-0 vibrant_gould[347222]: 167 167
Oct 11 09:03:22 compute-0 systemd[1]: libpod-197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194.scope: Deactivated successfully.
Oct 11 09:03:22 compute-0 conmon[347222]: conmon 197d4b73bba0c7bc540f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194.scope/container/memory.events
Oct 11 09:03:22 compute-0 podman[347227]: 2025-10-11 09:03:22.808691344 +0000 UTC m=+0.050306147 container died 197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:03:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6633c0162577a91651029c5517da94a83b54b326b5b8108a125f3ffdd8bcccf6-merged.mount: Deactivated successfully.
Oct 11 09:03:22 compute-0 podman[347227]: 2025-10-11 09:03:22.863113257 +0000 UTC m=+0.104728000 container remove 197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct 11 09:03:22 compute-0 systemd[1]: libpod-conmon-197d4b73bba0c7bc540f5f9145898fb939421384ebe8658dd0ae65b90d2c7194.scope: Deactivated successfully.
Oct 11 09:03:23 compute-0 podman[347250]: 2025-10-11 09:03:23.126678662 +0000 UTC m=+0.066058687 container create a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 09:03:23 compute-0 systemd[1]: Started libpod-conmon-a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8.scope.
Oct 11 09:03:23 compute-0 podman[347250]: 2025-10-11 09:03:23.093565286 +0000 UTC m=+0.032945371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:03:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:03:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e228aa8a66acf5ecec5ff097e7f3f73459b9ca57e628acf90fb983ffb93285/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:03:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e228aa8a66acf5ecec5ff097e7f3f73459b9ca57e628acf90fb983ffb93285/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:03:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e228aa8a66acf5ecec5ff097e7f3f73459b9ca57e628acf90fb983ffb93285/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:03:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e228aa8a66acf5ecec5ff097e7f3f73459b9ca57e628acf90fb983ffb93285/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:03:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e228aa8a66acf5ecec5ff097e7f3f73459b9ca57e628acf90fb983ffb93285/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:03:23 compute-0 podman[347250]: 2025-10-11 09:03:23.244883306 +0000 UTC m=+0.184263371 container init a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:03:23 compute-0 podman[347250]: 2025-10-11 09:03:23.256474487 +0000 UTC m=+0.195854482 container start a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:03:23 compute-0 podman[347250]: 2025-10-11 09:03:23.260210684 +0000 UTC m=+0.199590699 container attach a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:03:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Oct 11 09:03:23 compute-0 ceph-mon[74313]: pgmap v1806: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Oct 11 09:03:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Oct 11 09:03:23 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Oct 11 09:03:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:03:23 compute-0 nova_compute[260935]: 2025-10-11 09:03:23.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Oct 11 09:03:24 compute-0 nostalgic_hodgkin[347266]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:03:24 compute-0 nostalgic_hodgkin[347266]: --> relative data size: 1.0
Oct 11 09:03:24 compute-0 nostalgic_hodgkin[347266]: --> All data devices are unavailable
Oct 11 09:03:24 compute-0 systemd[1]: libpod-a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8.scope: Deactivated successfully.
Oct 11 09:03:24 compute-0 systemd[1]: libpod-a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8.scope: Consumed 1.101s CPU time.
Oct 11 09:03:24 compute-0 podman[347295]: 2025-10-11 09:03:24.479617374 +0000 UTC m=+0.052146209 container died a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:03:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7e228aa8a66acf5ecec5ff097e7f3f73459b9ca57e628acf90fb983ffb93285-merged.mount: Deactivated successfully.
Oct 11 09:03:24 compute-0 podman[347295]: 2025-10-11 09:03:24.55163848 +0000 UTC m=+0.124167225 container remove a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 09:03:24 compute-0 systemd[1]: libpod-conmon-a4532cda1f1174878f98566f089095d007cafb2ee668d1a724d479704f5052e8.scope: Deactivated successfully.
Oct 11 09:03:24 compute-0 sudo[347139]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:24 compute-0 sudo[347311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:03:24 compute-0 sudo[347311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:24 compute-0 sudo[347311]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:24 compute-0 ceph-mon[74313]: osdmap e243: 3 total, 3 up, 3 in
Oct 11 09:03:24 compute-0 ceph-mon[74313]: pgmap v1808: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Oct 11 09:03:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:03:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:03:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:03:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:03:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:03:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:03:24 compute-0 sudo[347336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:03:24 compute-0 sudo[347336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:24 compute-0 sudo[347336]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:24 compute-0 sudo[347361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:03:24 compute-0 sudo[347361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:24 compute-0 sudo[347361]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:25 compute-0 sudo[347386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:03:25 compute-0 sudo[347386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:25 compute-0 nova_compute[260935]: 2025-10-11 09:03:25.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:25 compute-0 podman[347452]: 2025-10-11 09:03:25.549329292 +0000 UTC m=+0.072624404 container create 1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_perlman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:03:25 compute-0 systemd[1]: Started libpod-conmon-1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3.scope.
Oct 11 09:03:25 compute-0 podman[347452]: 2025-10-11 09:03:25.520208891 +0000 UTC m=+0.043504013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:03:25 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:03:25 compute-0 podman[347452]: 2025-10-11 09:03:25.652521148 +0000 UTC m=+0.175816280 container init 1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:03:25 compute-0 podman[347452]: 2025-10-11 09:03:25.664042627 +0000 UTC m=+0.187337739 container start 1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:03:25 compute-0 podman[347452]: 2025-10-11 09:03:25.668333609 +0000 UTC m=+0.191628761 container attach 1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:03:25 compute-0 lucid_perlman[347469]: 167 167
Oct 11 09:03:25 compute-0 systemd[1]: libpod-1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3.scope: Deactivated successfully.
Oct 11 09:03:25 compute-0 conmon[347469]: conmon 1e28c258e13ae81f1136 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3.scope/container/memory.events
Oct 11 09:03:25 compute-0 podman[347452]: 2025-10-11 09:03:25.677743518 +0000 UTC m=+0.201038620 container died 1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 09:03:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a840339e6770782ed1a56847eb1f053c3e2ce365aefe5f5d207baa991bd9bdba-merged.mount: Deactivated successfully.
Oct 11 09:03:25 compute-0 podman[347452]: 2025-10-11 09:03:25.728287021 +0000 UTC m=+0.251582133 container remove 1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_perlman, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 09:03:25 compute-0 systemd[1]: libpod-conmon-1e28c258e13ae81f1136f51213d3a1de73361d62538a934ddebff8130d3346a3.scope: Deactivated successfully.
Oct 11 09:03:25 compute-0 podman[347492]: 2025-10-11 09:03:25.99554249 +0000 UTC m=+0.069292809 container create 7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hawking, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 09:03:26 compute-0 systemd[1]: Started libpod-conmon-7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d.scope.
Oct 11 09:03:26 compute-0 podman[347492]: 2025-10-11 09:03:25.963544127 +0000 UTC m=+0.037294536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:03:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488255750ac8ea871c0584088c075afd5753ac8c8f2a024b6a2c453da81bbf7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488255750ac8ea871c0584088c075afd5753ac8c8f2a024b6a2c453da81bbf7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488255750ac8ea871c0584088c075afd5753ac8c8f2a024b6a2c453da81bbf7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488255750ac8ea871c0584088c075afd5753ac8c8f2a024b6a2c453da81bbf7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:03:26 compute-0 podman[347492]: 2025-10-11 09:03:26.097205822 +0000 UTC m=+0.170956181 container init 7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:03:26 compute-0 podman[347492]: 2025-10-11 09:03:26.111297655 +0000 UTC m=+0.185048004 container start 7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:03:26 compute-0 podman[347492]: 2025-10-11 09:03:26.115068062 +0000 UTC m=+0.188818401 container attach 7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:03:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Oct 11 09:03:26 compute-0 agitated_hawking[347509]: {
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:     "0": [
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:         {
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "devices": [
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "/dev/loop3"
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             ],
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "lv_name": "ceph_lv0",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "lv_size": "21470642176",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "name": "ceph_lv0",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "tags": {
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.cluster_name": "ceph",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.crush_device_class": "",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.encrypted": "0",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.osd_id": "0",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.type": "block",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.vdo": "0"
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             },
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "type": "block",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "vg_name": "ceph_vg0"
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:         }
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:     ],
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:     "1": [
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:         {
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "devices": [
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "/dev/loop4"
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             ],
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "lv_name": "ceph_lv1",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "lv_size": "21470642176",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "name": "ceph_lv1",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "tags": {
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.cluster_name": "ceph",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.crush_device_class": "",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.encrypted": "0",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.osd_id": "1",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.type": "block",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.vdo": "0"
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             },
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "type": "block",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "vg_name": "ceph_vg1"
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:         }
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:     ],
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:     "2": [
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:         {
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "devices": [
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "/dev/loop5"
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             ],
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "lv_name": "ceph_lv2",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "lv_size": "21470642176",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "name": "ceph_lv2",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "tags": {
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.cluster_name": "ceph",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.crush_device_class": "",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.encrypted": "0",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.osd_id": "2",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.type": "block",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:                 "ceph.vdo": "0"
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             },
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "type": "block",
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:             "vg_name": "ceph_vg2"
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:         }
Oct 11 09:03:26 compute-0 agitated_hawking[347509]:     ]
Oct 11 09:03:26 compute-0 agitated_hawking[347509]: }
Oct 11 09:03:26 compute-0 systemd[1]: libpod-7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d.scope: Deactivated successfully.
Oct 11 09:03:26 compute-0 podman[347518]: 2025-10-11 09:03:26.965482938 +0000 UTC m=+0.042050961 container died 7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:03:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-488255750ac8ea871c0584088c075afd5753ac8c8f2a024b6a2c453da81bbf7b-merged.mount: Deactivated successfully.
Oct 11 09:03:27 compute-0 podman[347518]: 2025-10-11 09:03:27.042759044 +0000 UTC m=+0.119326967 container remove 7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 09:03:27 compute-0 systemd[1]: libpod-conmon-7fd28de74f2aabc6403fde5c203e4c1d553b9187f497b0482a05d71fe31a9e2d.scope: Deactivated successfully.
Oct 11 09:03:27 compute-0 sudo[347386]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:27 compute-0 nova_compute[260935]: 2025-10-11 09:03:27.156 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:03:27 compute-0 nova_compute[260935]: 2025-10-11 09:03:27.157 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 12.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:03:27 compute-0 nova_compute[260935]: 2025-10-11 09:03:27.158 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:03:27 compute-0 nova_compute[260935]: 2025-10-11 09:03:27.158 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 09:03:27 compute-0 sudo[347533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:03:27 compute-0 sudo[347533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:27 compute-0 sudo[347533]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:27 compute-0 sudo[347558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:03:27 compute-0 sudo[347558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:27 compute-0 sudo[347558]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:27 compute-0 ceph-mon[74313]: pgmap v1809: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Oct 11 09:03:27 compute-0 sudo[347583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:03:27 compute-0 sudo[347583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:27 compute-0 sudo[347583]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:27 compute-0 sudo[347608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:03:27 compute-0 sudo[347608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:27 compute-0 podman[347672]: 2025-10-11 09:03:27.934542012 +0000 UTC m=+0.072590303 container create 156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:03:27 compute-0 systemd[1]: Started libpod-conmon-156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba.scope.
Oct 11 09:03:27 compute-0 podman[347672]: 2025-10-11 09:03:27.902202489 +0000 UTC m=+0.040250830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:03:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:03:28 compute-0 podman[347672]: 2025-10-11 09:03:28.024583192 +0000 UTC m=+0.162631543 container init 156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:03:28 compute-0 podman[347672]: 2025-10-11 09:03:28.031445338 +0000 UTC m=+0.169493599 container start 156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:03:28 compute-0 podman[347672]: 2025-10-11 09:03:28.034769653 +0000 UTC m=+0.172817934 container attach 156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:03:28 compute-0 compassionate_franklin[347688]: 167 167
Oct 11 09:03:28 compute-0 podman[347672]: 2025-10-11 09:03:28.036187974 +0000 UTC m=+0.174236225 container died 156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 09:03:28 compute-0 systemd[1]: libpod-156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba.scope: Deactivated successfully.
Oct 11 09:03:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4566db413e532aedcb0082282ba9bea73505e413f4e3051aa1713e91ecb1465-merged.mount: Deactivated successfully.
Oct 11 09:03:28 compute-0 podman[347672]: 2025-10-11 09:03:28.069570336 +0000 UTC m=+0.207618577 container remove 156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:03:28 compute-0 systemd[1]: libpod-conmon-156a458c51af34a596f3069c55a52d55511dfa29193a8d26808a03b134ac52ba.scope: Deactivated successfully.
Oct 11 09:03:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1810: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 09:03:28 compute-0 podman[347712]: 2025-10-11 09:03:28.298221984 +0000 UTC m=+0.064949345 container create 7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:03:28 compute-0 podman[347712]: 2025-10-11 09:03:28.268900007 +0000 UTC m=+0.035627408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:03:28 compute-0 systemd[1]: Started libpod-conmon-7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732.scope.
Oct 11 09:03:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f797dd5d0d85dff5e844c4790e14704c3d98d6e88e07fdb76f134d2b3fcb3a3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f797dd5d0d85dff5e844c4790e14704c3d98d6e88e07fdb76f134d2b3fcb3a3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f797dd5d0d85dff5e844c4790e14704c3d98d6e88e07fdb76f134d2b3fcb3a3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:03:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f797dd5d0d85dff5e844c4790e14704c3d98d6e88e07fdb76f134d2b3fcb3a3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:03:28 compute-0 podman[347712]: 2025-10-11 09:03:28.423687925 +0000 UTC m=+0.190415376 container init 7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:03:28 compute-0 podman[347712]: 2025-10-11 09:03:28.444662844 +0000 UTC m=+0.211390225 container start 7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:03:28 compute-0 podman[347712]: 2025-10-11 09:03:28.449607075 +0000 UTC m=+0.216334506 container attach 7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:03:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:03:28 compute-0 nova_compute[260935]: 2025-10-11 09:03:28.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:29 compute-0 ceph-mon[74313]: pgmap v1810: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]: {
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:         "osd_id": 2,
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:         "type": "bluestore"
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:     },
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:         "osd_id": 0,
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:         "type": "bluestore"
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:     },
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:         "osd_id": 1,
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:         "type": "bluestore"
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]:     }
Oct 11 09:03:29 compute-0 admiring_hypatia[347728]: }
Oct 11 09:03:29 compute-0 systemd[1]: libpod-7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732.scope: Deactivated successfully.
Oct 11 09:03:29 compute-0 systemd[1]: libpod-7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732.scope: Consumed 1.145s CPU time.
Oct 11 09:03:29 compute-0 podman[347712]: 2025-10-11 09:03:29.589122835 +0000 UTC m=+1.355850196 container died 7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:03:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f797dd5d0d85dff5e844c4790e14704c3d98d6e88e07fdb76f134d2b3fcb3a3e-merged.mount: Deactivated successfully.
Oct 11 09:03:29 compute-0 podman[347712]: 2025-10-11 09:03:29.657504217 +0000 UTC m=+1.424231568 container remove 7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hypatia, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:03:29 compute-0 systemd[1]: libpod-conmon-7888c0ac2fc4f17a7e4b1197efb4db53865da1bd257e30de480033252c8b5732.scope: Deactivated successfully.
Oct 11 09:03:29 compute-0 sudo[347608]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:03:29 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:03:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:03:29 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:03:29 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev e21c66a3-b944-402e-a2b3-f8a72986f86a does not exist
Oct 11 09:03:29 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 692a29df-cf03-4623-9ce3-5d98d1aab078 does not exist
Oct 11 09:03:29 compute-0 sudo[347776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:03:29 compute-0 sudo[347776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:29 compute-0 sudo[347776]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:29 compute-0 sudo[347801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:03:29 compute-0 sudo[347801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:03:29 compute-0 sudo[347801]: pam_unix(sudo:session): session closed for user root
Oct 11 09:03:30 compute-0 nova_compute[260935]: 2025-10-11 09:03:30.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1811: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 09:03:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:03:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:03:31 compute-0 ceph-mon[74313]: pgmap v1811: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 09:03:31 compute-0 ovn_controller[152945]: 2025-10-11T09:03:31Z|00796|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:03:31 compute-0 ovn_controller[152945]: 2025-10-11T09:03:31Z|00797|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:03:31 compute-0 nova_compute[260935]: 2025-10-11 09:03:31.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:32 compute-0 ovn_controller[152945]: 2025-10-11T09:03:32Z|00798|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:03:32 compute-0 ovn_controller[152945]: 2025-10-11T09:03:32Z|00799|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:03:32 compute-0 nova_compute[260935]: 2025-10-11 09:03:32.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1812: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.6 KiB/s wr, 63 op/s
Oct 11 09:03:32 compute-0 nova_compute[260935]: 2025-10-11 09:03:32.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:03:32 compute-0 nova_compute[260935]: 2025-10-11 09:03:32.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:03:32 compute-0 ceph-mon[74313]: pgmap v1812: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.6 KiB/s wr, 63 op/s
Oct 11 09:03:32 compute-0 nova_compute[260935]: 2025-10-11 09:03:32.785 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:03:32 compute-0 nova_compute[260935]: 2025-10-11 09:03:32.786 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:03:33 compute-0 nova_compute[260935]: 2025-10-11 09:03:33.161 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:03:33 compute-0 nova_compute[260935]: 2025-10-11 09:03:33.162 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:03:33 compute-0 nova_compute[260935]: 2025-10-11 09:03:33.162 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:03:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.737885) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173413738000, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 665, "num_deletes": 255, "total_data_size": 755202, "memory_usage": 767520, "flush_reason": "Manual Compaction"}
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173413751699, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 737418, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37642, "largest_seqno": 38306, "table_properties": {"data_size": 733905, "index_size": 1357, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8080, "raw_average_key_size": 18, "raw_value_size": 726701, "raw_average_value_size": 1705, "num_data_blocks": 60, "num_entries": 426, "num_filter_entries": 426, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173369, "oldest_key_time": 1760173369, "file_creation_time": 1760173413, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 13901 microseconds, and 6334 cpu microseconds.
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.751792) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 737418 bytes OK
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.751872) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.753927) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.753952) EVENT_LOG_v1 {"time_micros": 1760173413753942, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.753983) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 751648, prev total WAL file size 751648, number of live WAL files 2.
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.754789) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323538' seq:72057594037927935, type:22 .. '6C6F676D0031353039' seq:0, type:0; will stop at (end)
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(720KB)], [83(7878KB)]
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173413754877, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 8804926, "oldest_snapshot_seqno": -1}
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6057 keys, 8673095 bytes, temperature: kUnknown
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173413811740, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8673095, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8632449, "index_size": 24408, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 154569, "raw_average_key_size": 25, "raw_value_size": 8523509, "raw_average_value_size": 1407, "num_data_blocks": 983, "num_entries": 6057, "num_filter_entries": 6057, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173413, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.812206) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8673095 bytes
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.813537) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.3 rd, 152.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 7.7 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(23.7) write-amplify(11.8) OK, records in: 6583, records dropped: 526 output_compression: NoCompression
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.813568) EVENT_LOG_v1 {"time_micros": 1760173413813555, "job": 48, "event": "compaction_finished", "compaction_time_micros": 57063, "compaction_time_cpu_micros": 38154, "output_level": 6, "num_output_files": 1, "total_output_size": 8673095, "num_input_records": 6583, "num_output_records": 6057, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173413814065, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173413816683, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.754681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.816911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.816929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.816935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.816940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:03:33 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:03:33.816944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:03:33 compute-0 nova_compute[260935]: 2025-10-11 09:03:33.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1813: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 1.5 KiB/s wr, 82 op/s
Oct 11 09:03:34 compute-0 ceph-mon[74313]: pgmap v1813: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 1.5 KiB/s wr, 82 op/s
Oct 11 09:03:35 compute-0 nova_compute[260935]: 2025-10-11 09:03:35.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Oct 11 09:03:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Oct 11 09:03:35 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Oct 11 09:03:35 compute-0 nova_compute[260935]: 2025-10-11 09:03:35.796 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:03:36 compute-0 nova_compute[260935]: 2025-10-11 09:03:36.124 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:03:36 compute-0 nova_compute[260935]: 2025-10-11 09:03:36.124 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:03:36 compute-0 nova_compute[260935]: 2025-10-11 09:03:36.125 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:03:36 compute-0 nova_compute[260935]: 2025-10-11 09:03:36.125 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:03:36 compute-0 nova_compute[260935]: 2025-10-11 09:03:36.126 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:03:36 compute-0 nova_compute[260935]: 2025-10-11 09:03:36.126 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:03:36 compute-0 nova_compute[260935]: 2025-10-11 09:03:36.126 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:03:36 compute-0 nova_compute[260935]: 2025-10-11 09:03:36.127 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:03:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1815: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.6 KiB/s wr, 86 op/s
Oct 11 09:03:36 compute-0 nova_compute[260935]: 2025-10-11 09:03:36.253 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:03:36 compute-0 nova_compute[260935]: 2025-10-11 09:03:36.253 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:03:36 compute-0 nova_compute[260935]: 2025-10-11 09:03:36.254 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:03:36 compute-0 nova_compute[260935]: 2025-10-11 09:03:36.254 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:03:36 compute-0 nova_compute[260935]: 2025-10-11 09:03:36.254 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:03:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:03:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1826216385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:03:36 compute-0 nova_compute[260935]: 2025-10-11 09:03:36.720 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:03:36 compute-0 ceph-mon[74313]: osdmap e244: 3 total, 3 up, 3 in
Oct 11 09:03:36 compute-0 ceph-mon[74313]: pgmap v1815: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.6 KiB/s wr, 86 op/s
Oct 11 09:03:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1826216385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:03:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 1.6 KiB/s wr, 102 op/s
Oct 11 09:03:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:03:38 compute-0 nova_compute[260935]: 2025-10-11 09:03:38.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:39 compute-0 ceph-mon[74313]: pgmap v1816: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 1.6 KiB/s wr, 102 op/s
Oct 11 09:03:40 compute-0 nova_compute[260935]: 2025-10-11 09:03:40.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 1.6 KiB/s wr, 102 op/s
Oct 11 09:03:40 compute-0 podman[347849]: 2025-10-11 09:03:40.79901462 +0000 UTC m=+0.090406712 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:03:41 compute-0 ceph-mon[74313]: pgmap v1817: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 1.6 KiB/s wr, 102 op/s
Oct 11 09:03:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1818: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.6 KiB/s wr, 53 op/s
Oct 11 09:03:43 compute-0 ceph-mon[74313]: pgmap v1818: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.6 KiB/s wr, 53 op/s
Oct 11 09:03:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:03:43 compute-0 nova_compute[260935]: 2025-10-11 09:03:43.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.6 KiB/s wr, 30 op/s
Oct 11 09:03:45 compute-0 nova_compute[260935]: 2025-10-11 09:03:45.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:45 compute-0 ceph-mon[74313]: pgmap v1819: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.6 KiB/s wr, 30 op/s
Oct 11 09:03:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1820: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.5 KiB/s wr, 29 op/s
Oct 11 09:03:46 compute-0 podman[347868]: 2025-10-11 09:03:46.81580467 +0000 UTC m=+0.096455134 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, managed_by=edpm_ansible)
Oct 11 09:03:47 compute-0 ceph-mon[74313]: pgmap v1820: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.5 KiB/s wr, 29 op/s
Oct 11 09:03:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.3 KiB/s wr, 25 op/s
Oct 11 09:03:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:03:48 compute-0 nova_compute[260935]: 2025-10-11 09:03:48.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:49 compute-0 ceph-mon[74313]: pgmap v1821: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.3 KiB/s wr, 25 op/s
Oct 11 09:03:50 compute-0 nova_compute[260935]: 2025-10-11 09:03:50.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1822: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:03:50 compute-0 podman[347891]: 2025-10-11 09:03:50.824473314 +0000 UTC m=+0.112035129 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:03:50 compute-0 podman[347890]: 2025-10-11 09:03:50.83061836 +0000 UTC m=+0.112876744 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 09:03:51 compute-0 ceph-mon[74313]: pgmap v1822: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:03:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1823: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:03:53 compute-0 ceph-mon[74313]: pgmap v1823: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:03:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:03:53 compute-0 nova_compute[260935]: 2025-10-11 09:03:53.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1824: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:03:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:03:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:03:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:03:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:03:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:03:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:03:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:03:54
Oct 11 09:03:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:03:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:03:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'volumes', 'images', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta']
Oct 11 09:03:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:03:55 compute-0 nova_compute[260935]: 2025-10-11 09:03:55.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:03:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:03:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:03:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:03:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:03:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:03:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:03:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:03:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:03:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:03:55 compute-0 ceph-mon[74313]: pgmap v1824: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:03:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:03:56 compute-0 ceph-mon[74313]: pgmap v1825: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:03:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1826: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:03:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:03:58 compute-0 nova_compute[260935]: 2025-10-11 09:03:58.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:03:59 compute-0 ceph-mon[74313]: pgmap v1826: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:00 compute-0 nova_compute[260935]: 2025-10-11 09:04:00.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:01 compute-0 ceph-mon[74313]: pgmap v1827: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:02 compute-0 ceph-mon[74313]: pgmap v1828: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:04:03 compute-0 nova_compute[260935]: 2025-10-11 09:04:03.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006666213900826984 of space, bias 1.0, pg target 0.19998641702480952 quantized to 32 (current 32)
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:04:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:04:05 compute-0 nova_compute[260935]: 2025-10-11 09:04:05.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:05 compute-0 ceph-mon[74313]: pgmap v1829: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:06 compute-0 ovn_controller[152945]: 2025-10-11T09:04:06Z|00800|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Oct 11 09:04:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:07 compute-0 ceph-mon[74313]: pgmap v1830: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:04:08 compute-0 nova_compute[260935]: 2025-10-11 09:04:08.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:09 compute-0 ceph-mon[74313]: pgmap v1831: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:10 compute-0 nova_compute[260935]: 2025-10-11 09:04:10.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:11 compute-0 ceph-mon[74313]: pgmap v1832: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:11 compute-0 podman[347934]: 2025-10-11 09:04:11.737697189 +0000 UTC m=+0.045348216 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:04:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1833: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:13 compute-0 ceph-mon[74313]: pgmap v1833: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:04:13 compute-0 nova_compute[260935]: 2025-10-11 09:04:13.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:15 compute-0 nova_compute[260935]: 2025-10-11 09:04:15.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:15.199 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:15.200 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:15.200 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:15 compute-0 ceph-mon[74313]: pgmap v1834: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1835: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:17 compute-0 ceph-mon[74313]: pgmap v1835: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:17 compute-0 nova_compute[260935]: 2025-10-11 09:04:17.699 2 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 26.07 sec
Oct 11 09:04:17 compute-0 nova_compute[260935]: 2025-10-11 09:04:17.749 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:17 compute-0 nova_compute[260935]: 2025-10-11 09:04:17.750 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:17 compute-0 nova_compute[260935]: 2025-10-11 09:04:17.750 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:17 compute-0 nova_compute[260935]: 2025-10-11 09:04:17.758 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:17 compute-0 nova_compute[260935]: 2025-10-11 09:04:17.758 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:17 compute-0 nova_compute[260935]: 2025-10-11 09:04:17.764 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:17 compute-0 nova_compute[260935]: 2025-10-11 09:04:17.765 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:17 compute-0 nova_compute[260935]: 2025-10-11 09:04:17.771 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:17 compute-0 nova_compute[260935]: 2025-10-11 09:04:17.771 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:17 compute-0 podman[347953]: 2025-10-11 09:04:17.794877722 +0000 UTC m=+0.088420355 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 09:04:18 compute-0 nova_compute[260935]: 2025-10-11 09:04:18.029 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:04:18 compute-0 nova_compute[260935]: 2025-10-11 09:04:18.031 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3208MB free_disk=59.80977249145508GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:04:18 compute-0 nova_compute[260935]: 2025-10-11 09:04:18.032 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:18 compute-0 nova_compute[260935]: 2025-10-11 09:04:18.032 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:04:18 compute-0 nova_compute[260935]: 2025-10-11 09:04:18.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:19 compute-0 nova_compute[260935]: 2025-10-11 09:04:19.255 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:04:19 compute-0 nova_compute[260935]: 2025-10-11 09:04:19.256 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:04:19 compute-0 nova_compute[260935]: 2025-10-11 09:04:19.256 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:04:19 compute-0 nova_compute[260935]: 2025-10-11 09:04:19.257 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 15633aee-234a-4417-b5ea-f35f13820404 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:04:19 compute-0 nova_compute[260935]: 2025-10-11 09:04:19.257 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:04:19 compute-0 nova_compute[260935]: 2025-10-11 09:04:19.258 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:04:19 compute-0 nova_compute[260935]: 2025-10-11 09:04:19.304 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 09:04:19 compute-0 nova_compute[260935]: 2025-10-11 09:04:19.360 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 09:04:19 compute-0 nova_compute[260935]: 2025-10-11 09:04:19.361 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 09:04:19 compute-0 nova_compute[260935]: 2025-10-11 09:04:19.422 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 09:04:19 compute-0 ceph-mon[74313]: pgmap v1836: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:19 compute-0 nova_compute[260935]: 2025-10-11 09:04:19.468 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 09:04:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:19.594 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:04:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:19.595 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:04:19 compute-0 nova_compute[260935]: 2025-10-11 09:04:19.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:19 compute-0 nova_compute[260935]: 2025-10-11 09:04:19.805 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:20 compute-0 nova_compute[260935]: 2025-10-11 09:04:20.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:04:20 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905249276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:20 compute-0 nova_compute[260935]: 2025-10-11 09:04:20.325 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:20 compute-0 nova_compute[260935]: 2025-10-11 09:04:20.332 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:04:20 compute-0 nova_compute[260935]: 2025-10-11 09:04:20.389 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:04:20 compute-0 nova_compute[260935]: 2025-10-11 09:04:20.392 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:04:20 compute-0 nova_compute[260935]: 2025-10-11 09:04:20.393 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:20 compute-0 nova_compute[260935]: 2025-10-11 09:04:20.394 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:04:20 compute-0 nova_compute[260935]: 2025-10-11 09:04:20.444 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:04:20 compute-0 nova_compute[260935]: 2025-10-11 09:04:20.445 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 09:04:20 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/905249276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:20 compute-0 nova_compute[260935]: 2025-10-11 09:04:20.512 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 09:04:20 compute-0 nova_compute[260935]: 2025-10-11 09:04:20.513 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:04:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Oct 11 09:04:21 compute-0 ceph-mon[74313]: pgmap v1837: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:04:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Oct 11 09:04:21 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Oct 11 09:04:21 compute-0 nova_compute[260935]: 2025-10-11 09:04:21.542 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:21 compute-0 nova_compute[260935]: 2025-10-11 09:04:21.542 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:21 compute-0 nova_compute[260935]: 2025-10-11 09:04:21.626 2 DEBUG nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:04:21 compute-0 nova_compute[260935]: 2025-10-11 09:04:21.792 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:21 compute-0 nova_compute[260935]: 2025-10-11 09:04:21.793 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:21 compute-0 podman[347996]: 2025-10-11 09:04:21.79841049 +0000 UTC m=+0.087212940 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:04:21 compute-0 nova_compute[260935]: 2025-10-11 09:04:21.801 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:04:21 compute-0 nova_compute[260935]: 2025-10-11 09:04:21.801 2 INFO nova.compute.claims [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:04:21 compute-0 podman[347997]: 2025-10-11 09:04:21.856320503 +0000 UTC m=+0.136965871 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:04:22 compute-0 nova_compute[260935]: 2025-10-11 09:04:22.186 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1839: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Oct 11 09:04:22 compute-0 nova_compute[260935]: 2025-10-11 09:04:22.455 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquiring lock "14b020dc-ae35-4a87-87c8-ed8504968319" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:22 compute-0 nova_compute[260935]: 2025-10-11 09:04:22.456 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "14b020dc-ae35-4a87-87c8-ed8504968319" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:22 compute-0 ceph-mon[74313]: osdmap e245: 3 total, 3 up, 3 in
Oct 11 09:04:22 compute-0 nova_compute[260935]: 2025-10-11 09:04:22.504 2 DEBUG nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:04:22 compute-0 nova_compute[260935]: 2025-10-11 09:04:22.621 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:04:22 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69219880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:22 compute-0 nova_compute[260935]: 2025-10-11 09:04:22.665 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:22 compute-0 nova_compute[260935]: 2025-10-11 09:04:22.672 2 DEBUG nova.compute.provider_tree [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:04:22 compute-0 nova_compute[260935]: 2025-10-11 09:04:22.710 2 DEBUG nova.scheduler.client.report [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:04:22 compute-0 nova_compute[260935]: 2025-10-11 09:04:22.808 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:22 compute-0 nova_compute[260935]: 2025-10-11 09:04:22.809 2 DEBUG nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:04:22 compute-0 nova_compute[260935]: 2025-10-11 09:04:22.814 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:22 compute-0 nova_compute[260935]: 2025-10-11 09:04:22.824 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:04:22 compute-0 nova_compute[260935]: 2025-10-11 09:04:22.825 2 INFO nova.compute.claims [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:04:22 compute-0 nova_compute[260935]: 2025-10-11 09:04:22.913 2 DEBUG nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.039 2 INFO nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.133 2 DEBUG nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.291 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.428 2 DEBUG nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.431 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.432 2 INFO nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Creating image(s)
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.478 2 DEBUG nova.storage.rbd_utils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Oct 11 09:04:23 compute-0 ceph-mon[74313]: pgmap v1839: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Oct 11 09:04:23 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/69219880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Oct 11 09:04:23 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.535 2 DEBUG nova.storage.rbd_utils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.572 2 DEBUG nova.storage.rbd_utils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.579 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.655 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.656 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.657 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.657 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.685 2 DEBUG nova.storage.rbd_utils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.690 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:04:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:04:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1875438717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.776 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.785 2 DEBUG nova.compute.provider_tree [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.825 2 DEBUG nova.scheduler.client.report [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.881 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.882 2 DEBUG nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:04:23 compute-0 nova_compute[260935]: 2025-10-11 09:04:23.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.058 2 DEBUG nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.081 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:04:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/464112611' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:04:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:04:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/464112611' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.172 2 DEBUG nova.storage.rbd_utils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] resizing rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:04:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:04:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2742869319' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:04:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:04:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2742869319' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:04:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.309 2 DEBUG nova.objects.instance [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'migration_context' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.324 2 INFO nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.349 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.349 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Ensure instance console log exists: /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.350 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.350 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.351 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.353 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.359 2 WARNING nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.366 2 DEBUG nova.virt.libvirt.host [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.367 2 DEBUG nova.virt.libvirt.host [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.372 2 DEBUG nova.virt.libvirt.host [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.373 2 DEBUG nova.virt.libvirt.host [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.374 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.374 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.375 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.375 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.376 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.376 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.376 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.377 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.377 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.378 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.378 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.378 2 DEBUG nova.virt.hardware [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.383 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.429 2 DEBUG nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:04:24 compute-0 ceph-mon[74313]: osdmap e246: 3 total, 3 up, 3 in
Oct 11 09:04:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1875438717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/464112611' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:04:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/464112611' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:04:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2742869319' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:04:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2742869319' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:04:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:04:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3468149435' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:04:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:04:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3468149435' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.637 2 DEBUG nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.639 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.639 2 INFO nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Creating image(s)
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.676 2 DEBUG nova.storage.rbd_utils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] rbd image 14b020dc-ae35-4a87-87c8-ed8504968319_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.712 2 DEBUG nova.storage.rbd_utils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] rbd image 14b020dc-ae35-4a87-87c8-ed8504968319_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.745 2 DEBUG nova.storage.rbd_utils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] rbd image 14b020dc-ae35-4a87-87c8-ed8504968319_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.749 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:04:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:04:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:04:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:04:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:04:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:04:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:04:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2840872353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.850 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.854 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.855 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.856 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.889 2 DEBUG nova.storage.rbd_utils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] rbd image 14b020dc-ae35-4a87-87c8-ed8504968319_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.898 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 14b020dc-ae35-4a87-87c8-ed8504968319_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.942 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.977 2 DEBUG nova.storage.rbd_utils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:24 compute-0 nova_compute[260935]: 2025-10-11 09:04:24.983 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.264 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 14b020dc-ae35-4a87-87c8-ed8504968319_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.364 2 DEBUG nova.storage.rbd_utils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] resizing rbd image 14b020dc-ae35-4a87-87c8-ed8504968319_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.498 2 DEBUG nova.objects.instance [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lazy-loading 'migration_context' on Instance uuid 14b020dc-ae35-4a87-87c8-ed8504968319 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:04:25 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/531030210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:25 compute-0 ceph-mon[74313]: pgmap v1841: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 11 09:04:25 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3468149435' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:04:25 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3468149435' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:04:25 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2840872353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:25 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/531030210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.535 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.538 2 DEBUG nova.objects.instance [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'pci_devices' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.556 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.556 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Ensure instance console log exists: /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.557 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.558 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.558 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.561 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.568 2 WARNING nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.574 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:04:25 compute-0 nova_compute[260935]:   <uuid>cb40f13d-eaf0-4d7f-8745-acdd85fa0e86</uuid>
Oct 11 09:04:25 compute-0 nova_compute[260935]:   <name>instance-00000058</name>
Oct 11 09:04:25 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:04:25 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:04:25 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerShowV257Test-server-919774712</nova:name>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:04:24</nova:creationTime>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:04:25 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:04:25 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:04:25 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:04:25 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:04:25 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:04:25 compute-0 nova_compute[260935]:         <nova:user uuid="a90fe9900cc64109bfeb61e3bc71fb95">tempest-ServerShowV257Test-315301913-project-member</nova:user>
Oct 11 09:04:25 compute-0 nova_compute[260935]:         <nova:project uuid="f83a57424c0643a1b6a9b84fe208cb0e">tempest-ServerShowV257Test-315301913</nova:project>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:04:25 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:04:25 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <system>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <entry name="serial">cb40f13d-eaf0-4d7f-8745-acdd85fa0e86</entry>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <entry name="uuid">cb40f13d-eaf0-4d7f-8745-acdd85fa0e86</entry>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     </system>
Oct 11 09:04:25 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:04:25 compute-0 nova_compute[260935]:   <os>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:   </os>
Oct 11 09:04:25 compute-0 nova_compute[260935]:   <features>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:   </features>
Oct 11 09:04:25 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:04:25 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:04:25 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk">
Oct 11 09:04:25 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       </source>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:04:25 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config">
Oct 11 09:04:25 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       </source>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:04:25 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/console.log" append="off"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <video>
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     </video>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:04:25 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:04:25 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:04:25 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:04:25 compute-0 nova_compute[260935]: </domain>
Oct 11 09:04:25 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.578 2 DEBUG nova.virt.libvirt.host [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.579 2 DEBUG nova.virt.libvirt.host [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.589 2 DEBUG nova.virt.libvirt.host [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.590 2 DEBUG nova.virt.libvirt.host [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.590 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.591 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.592 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.592 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.592 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.593 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.593 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.594 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.594 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.595 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.595 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.595 2 DEBUG nova.virt.hardware [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:04:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:25.597 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.602 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.784 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.785 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.787 2 INFO nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Using config drive
Oct 11 09:04:25 compute-0 nova_compute[260935]: 2025-10-11 09:04:25.832 2 DEBUG nova.storage.rbd_utils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:26 compute-0 nova_compute[260935]: 2025-10-11 09:04:26.076 2 INFO nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Creating config drive at /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config
Oct 11 09:04:26 compute-0 nova_compute[260935]: 2025-10-11 09:04:26.085 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9y54wy51 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:04:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2169207255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:26 compute-0 nova_compute[260935]: 2025-10-11 09:04:26.134 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:26 compute-0 nova_compute[260935]: 2025-10-11 09:04:26.172 2 DEBUG nova.storage.rbd_utils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] rbd image 14b020dc-ae35-4a87-87c8-ed8504968319_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:26 compute-0 nova_compute[260935]: 2025-10-11 09:04:26.177 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:26 compute-0 nova_compute[260935]: 2025-10-11 09:04:26.243 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9y54wy51" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:26 compute-0 nova_compute[260935]: 2025-10-11 09:04:26.272 2 DEBUG nova.storage.rbd_utils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:26 compute-0 nova_compute[260935]: 2025-10-11 09:04:26.278 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1842: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 11 09:04:26 compute-0 nova_compute[260935]: 2025-10-11 09:04:26.490 2 DEBUG oslo_concurrency.processutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:26 compute-0 nova_compute[260935]: 2025-10-11 09:04:26.492 2 INFO nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Deleting local config drive /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config because it was imported into RBD.
Oct 11 09:04:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2169207255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:26 compute-0 systemd-machined[215705]: New machine qemu-100-instance-00000058.
Oct 11 09:04:26 compute-0 systemd[1]: Started Virtual Machine qemu-100-instance-00000058.
Oct 11 09:04:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:04:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1923217079' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:26 compute-0 nova_compute[260935]: 2025-10-11 09:04:26.750 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:26 compute-0 nova_compute[260935]: 2025-10-11 09:04:26.752 2 DEBUG nova.objects.instance [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lazy-loading 'pci_devices' on Instance uuid 14b020dc-ae35-4a87-87c8-ed8504968319 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:26 compute-0 nova_compute[260935]: 2025-10-11 09:04:26.792 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:04:26 compute-0 nova_compute[260935]:   <uuid>14b020dc-ae35-4a87-87c8-ed8504968319</uuid>
Oct 11 09:04:26 compute-0 nova_compute[260935]:   <name>instance-00000059</name>
Oct 11 09:04:26 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:04:26 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:04:26 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersAaction247Test-server-101545139</nova:name>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:04:25</nova:creationTime>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:04:26 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:04:26 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:04:26 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:04:26 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:04:26 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:04:26 compute-0 nova_compute[260935]:         <nova:user uuid="2930ce5b9093473e9d0f4e49fb1e934b">tempest-ServersAaction247Test-1311798005-project-member</nova:user>
Oct 11 09:04:26 compute-0 nova_compute[260935]:         <nova:project uuid="5ae7015798d64fdeb69d4d5d5fda3326">tempest-ServersAaction247Test-1311798005</nova:project>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:04:26 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:04:26 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <system>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <entry name="serial">14b020dc-ae35-4a87-87c8-ed8504968319</entry>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <entry name="uuid">14b020dc-ae35-4a87-87c8-ed8504968319</entry>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     </system>
Oct 11 09:04:26 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:04:26 compute-0 nova_compute[260935]:   <os>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:   </os>
Oct 11 09:04:26 compute-0 nova_compute[260935]:   <features>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:   </features>
Oct 11 09:04:26 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:04:26 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:04:26 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/14b020dc-ae35-4a87-87c8-ed8504968319_disk">
Oct 11 09:04:26 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       </source>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:04:26 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/14b020dc-ae35-4a87-87c8-ed8504968319_disk.config">
Oct 11 09:04:26 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       </source>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:04:26 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319/console.log" append="off"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <video>
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     </video>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:04:26 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:04:26 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:04:26 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:04:26 compute-0 nova_compute[260935]: </domain>
Oct 11 09:04:26 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:04:26 compute-0 nova_compute[260935]: 2025-10-11 09:04:26.916 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:04:26 compute-0 nova_compute[260935]: 2025-10-11 09:04:26.916 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:04:26 compute-0 nova_compute[260935]: 2025-10-11 09:04:26.917 2 INFO nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Using config drive
Oct 11 09:04:26 compute-0 nova_compute[260935]: 2025-10-11 09:04:26.951 2 DEBUG nova.storage.rbd_utils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] rbd image 14b020dc-ae35-4a87-87c8-ed8504968319_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.350 2 INFO nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Creating config drive at /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319/disk.config
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.359 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyogygpqx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.531 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyogygpqx" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:27 compute-0 ceph-mon[74313]: pgmap v1842: 321 pgs: 321 active+clean; 374 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 11 09:04:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1923217079' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.573 2 DEBUG nova.storage.rbd_utils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] rbd image 14b020dc-ae35-4a87-87c8-ed8504968319_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.578 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319/disk.config 14b020dc-ae35-4a87-87c8-ed8504968319_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.631 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173467.6297598, cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.632 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] VM Resumed (Lifecycle Event)
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.638 2 DEBUG nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.639 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.644 2 INFO nova.virt.libvirt.driver [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance spawned successfully.
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.645 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.733 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.740 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.741 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.741 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.742 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.743 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.743 2 DEBUG nova.virt.libvirt.driver [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.751 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.787 2 DEBUG oslo_concurrency.processutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319/disk.config 14b020dc-ae35-4a87-87c8-ed8504968319_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.788 2 INFO nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Deleting local config drive /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319/disk.config because it was imported into RBD.
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.816 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.818 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173467.631032, cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.818 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] VM Started (Lifecycle Event)
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.869 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.875 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.884 2 INFO nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Took 4.45 seconds to spawn the instance on the hypervisor.
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.885 2 DEBUG nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.907 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:04:27 compute-0 systemd-machined[215705]: New machine qemu-101-instance-00000059.
Oct 11 09:04:27 compute-0 systemd[1]: Started Virtual Machine qemu-101-instance-00000059.
Oct 11 09:04:27 compute-0 nova_compute[260935]: 2025-10-11 09:04:27.979 2 INFO nova.compute.manager [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Took 6.21 seconds to build instance.
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.032 2 DEBUG oslo_concurrency.lockutils [None req-bb71e3ff-0a93-4fe0-8fc7-ba9fb278dd93 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 467 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 5.3 MiB/s wr, 158 op/s
Oct 11 09:04:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:04:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Oct 11 09:04:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Oct 11 09:04:28 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.854 2 DEBUG nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.856 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.856 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173468.852929, 14b020dc-ae35-4a87-87c8-ed8504968319 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.857 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] VM Resumed (Lifecycle Event)
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.869 2 INFO nova.virt.libvirt.driver [-] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Instance spawned successfully.
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.869 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.901 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.905 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.934 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.934 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.935 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.935 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.936 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.936 2 DEBUG nova.virt.libvirt.driver [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.975 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.976 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173468.8553307, 14b020dc-ae35-4a87-87c8-ed8504968319 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:04:28 compute-0 nova_compute[260935]: 2025-10-11 09:04:28.976 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] VM Started (Lifecycle Event)
Oct 11 09:04:29 compute-0 nova_compute[260935]: 2025-10-11 09:04:29.032 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:29 compute-0 nova_compute[260935]: 2025-10-11 09:04:29.039 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:04:29 compute-0 nova_compute[260935]: 2025-10-11 09:04:29.046 2 INFO nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Took 4.41 seconds to spawn the instance on the hypervisor.
Oct 11 09:04:29 compute-0 nova_compute[260935]: 2025-10-11 09:04:29.047 2 DEBUG nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:29 compute-0 nova_compute[260935]: 2025-10-11 09:04:29.081 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:04:29 compute-0 nova_compute[260935]: 2025-10-11 09:04:29.199 2 INFO nova.compute.manager [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Took 6.60 seconds to build instance.
Oct 11 09:04:29 compute-0 nova_compute[260935]: 2025-10-11 09:04:29.238 2 DEBUG oslo_concurrency.lockutils [None req-0bf42a87-3fff-4e4e-b766-81235b468a20 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "14b020dc-ae35-4a87-87c8-ed8504968319" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:29 compute-0 ceph-mon[74313]: pgmap v1843: 321 pgs: 321 active+clean; 467 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 5.3 MiB/s wr, 158 op/s
Oct 11 09:04:29 compute-0 ceph-mon[74313]: osdmap e247: 3 total, 3 up, 3 in
Oct 11 09:04:29 compute-0 sudo[348774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:04:29 compute-0 sudo[348774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:29 compute-0 sudo[348774]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:30 compute-0 sudo[348799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:04:30 compute-0 sudo[348799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:30 compute-0 sudo[348799]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:30 compute-0 sudo[348824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:04:30 compute-0 nova_compute[260935]: 2025-10-11 09:04:30.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:30 compute-0 sudo[348824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:30 compute-0 sudo[348824]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 467 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 5.3 MiB/s wr, 133 op/s
Oct 11 09:04:30 compute-0 sudo[348849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:04:30 compute-0 sudo[348849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:30 compute-0 nova_compute[260935]: 2025-10-11 09:04:30.388 2 INFO nova.compute.manager [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Rebuilding instance
Oct 11 09:04:30 compute-0 nova_compute[260935]: 2025-10-11 09:04:30.692 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'trusted_certs' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:30 compute-0 nova_compute[260935]: 2025-10-11 09:04:30.735 2 DEBUG nova.compute.manager [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:30 compute-0 nova_compute[260935]: 2025-10-11 09:04:30.816 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'pci_requests' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:30 compute-0 nova_compute[260935]: 2025-10-11 09:04:30.843 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'pci_devices' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:30 compute-0 nova_compute[260935]: 2025-10-11 09:04:30.870 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'resources' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:30 compute-0 nova_compute[260935]: 2025-10-11 09:04:30.891 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'migration_context' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:30 compute-0 nova_compute[260935]: 2025-10-11 09:04:30.922 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 09:04:30 compute-0 nova_compute[260935]: 2025-10-11 09:04:30.932 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 09:04:31 compute-0 sudo[348849]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 09:04:31 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 09:04:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:04:31 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:04:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:04:31 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:04:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:04:31 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:04:31 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev e27992bb-8e9b-4eba-91dc-57828e7c9820 does not exist
Oct 11 09:04:31 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ae0f2143-4069-48af-bfe6-80fe0052a1f5 does not exist
Oct 11 09:04:31 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 6aa71a5c-42a0-4e5d-8a35-27e92455345c does not exist
Oct 11 09:04:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:04:31 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:04:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:04:31 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:04:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:04:31 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:04:31 compute-0 sudo[348905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:04:31 compute-0 sudo[348905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:31 compute-0 sudo[348905]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:31 compute-0 sudo[348930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:04:31 compute-0 sudo[348930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:31 compute-0 sudo[348930]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:31 compute-0 nova_compute[260935]: 2025-10-11 09:04:31.290 2 DEBUG nova.compute.manager [None req-3132466f-c37e-4b3f-9ed4-f18e908a762b 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:31 compute-0 nova_compute[260935]: 2025-10-11 09:04:31.350 2 INFO nova.compute.manager [None req-3132466f-c37e-4b3f-9ed4-f18e908a762b 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] instance snapshotting
Oct 11 09:04:31 compute-0 nova_compute[260935]: 2025-10-11 09:04:31.351 2 DEBUG nova.objects.instance [None req-3132466f-c37e-4b3f-9ed4-f18e908a762b 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lazy-loading 'flavor' on Instance uuid 14b020dc-ae35-4a87-87c8-ed8504968319 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:31 compute-0 sudo[348955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:04:31 compute-0 sudo[348955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:31 compute-0 sudo[348955]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:31 compute-0 sudo[348980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:04:31 compute-0 sudo[348980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:31 compute-0 nova_compute[260935]: 2025-10-11 09:04:31.540 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquiring lock "14b020dc-ae35-4a87-87c8-ed8504968319" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:31 compute-0 nova_compute[260935]: 2025-10-11 09:04:31.540 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "14b020dc-ae35-4a87-87c8-ed8504968319" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:31 compute-0 nova_compute[260935]: 2025-10-11 09:04:31.541 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquiring lock "14b020dc-ae35-4a87-87c8-ed8504968319-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:31 compute-0 nova_compute[260935]: 2025-10-11 09:04:31.541 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "14b020dc-ae35-4a87-87c8-ed8504968319-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:31 compute-0 nova_compute[260935]: 2025-10-11 09:04:31.542 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "14b020dc-ae35-4a87-87c8-ed8504968319-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:31 compute-0 nova_compute[260935]: 2025-10-11 09:04:31.544 2 INFO nova.compute.manager [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Terminating instance
Oct 11 09:04:31 compute-0 nova_compute[260935]: 2025-10-11 09:04:31.545 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquiring lock "refresh_cache-14b020dc-ae35-4a87-87c8-ed8504968319" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:04:31 compute-0 nova_compute[260935]: 2025-10-11 09:04:31.546 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquired lock "refresh_cache-14b020dc-ae35-4a87-87c8-ed8504968319" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:04:31 compute-0 nova_compute[260935]: 2025-10-11 09:04:31.546 2 DEBUG nova.network.neutron [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:04:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Oct 11 09:04:31 compute-0 ceph-mon[74313]: pgmap v1845: 321 pgs: 321 active+clean; 467 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 5.3 MiB/s wr, 133 op/s
Oct 11 09:04:31 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 09:04:31 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:04:31 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:04:31 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:04:31 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:04:31 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:04:31 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:04:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Oct 11 09:04:31 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Oct 11 09:04:31 compute-0 sshd-session[348881]: Invalid user database from 152.32.213.170 port 47422
Oct 11 09:04:31 compute-0 sshd-session[348881]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:04:31 compute-0 sshd-session[348881]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:04:31 compute-0 nova_compute[260935]: 2025-10-11 09:04:31.743 2 INFO nova.virt.libvirt.driver [None req-3132466f-c37e-4b3f-9ed4-f18e908a762b 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Beginning live snapshot process
Oct 11 09:04:31 compute-0 nova_compute[260935]: 2025-10-11 09:04:31.762 2 DEBUG nova.network.neutron [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:04:31 compute-0 nova_compute[260935]: 2025-10-11 09:04:31.803 2 DEBUG nova.compute.manager [None req-3132466f-c37e-4b3f-9ed4-f18e908a762b 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Oct 11 09:04:32 compute-0 podman[349045]: 2025-10-11 09:04:32.061051336 +0000 UTC m=+0.067008924 container create 65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:04:32 compute-0 systemd[1]: Started libpod-conmon-65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9.scope.
Oct 11 09:04:32 compute-0 podman[349045]: 2025-10-11 09:04:32.037283607 +0000 UTC m=+0.043241285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:04:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:04:32 compute-0 podman[349045]: 2025-10-11 09:04:32.167969848 +0000 UTC m=+0.173927476 container init 65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 09:04:32 compute-0 podman[349045]: 2025-10-11 09:04:32.177079298 +0000 UTC m=+0.183036896 container start 65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dewdney, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:04:32 compute-0 podman[349045]: 2025-10-11 09:04:32.180897277 +0000 UTC m=+0.186854935 container attach 65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dewdney, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 09:04:32 compute-0 competent_dewdney[349061]: 167 167
Oct 11 09:04:32 compute-0 systemd[1]: libpod-65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9.scope: Deactivated successfully.
Oct 11 09:04:32 compute-0 conmon[349061]: conmon 65fcf7aad827aa6574b4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9.scope/container/memory.events
Oct 11 09:04:32 compute-0 podman[349045]: 2025-10-11 09:04:32.185673953 +0000 UTC m=+0.191631561 container died 65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dewdney, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 09:04:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-256db1653e8460fe33fd2801446bb8001323d73d87d758e5bab8208900c57072-merged.mount: Deactivated successfully.
Oct 11 09:04:32 compute-0 podman[349045]: 2025-10-11 09:04:32.235896707 +0000 UTC m=+0.241854305 container remove 65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_dewdney, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:04:32 compute-0 systemd[1]: libpod-conmon-65fcf7aad827aa6574b4051e0c99310775b76e600af6c2d0d113972fe6446da9.scope: Deactivated successfully.
Oct 11 09:04:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 467 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.4 MiB/s wr, 264 op/s
Oct 11 09:04:32 compute-0 ceph-mon[74313]: osdmap e248: 3 total, 3 up, 3 in
Oct 11 09:04:32 compute-0 podman[349088]: 2025-10-11 09:04:32.59471579 +0000 UTC m=+0.080519799 container create aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:04:32 compute-0 podman[349088]: 2025-10-11 09:04:32.54775872 +0000 UTC m=+0.033562789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:04:32 compute-0 systemd[1]: Started libpod-conmon-aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428.scope.
Oct 11 09:04:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7b799685fe55c48ce64d469eb1a0d124f65159749f40a34bcd681fdbfe54df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7b799685fe55c48ce64d469eb1a0d124f65159749f40a34bcd681fdbfe54df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7b799685fe55c48ce64d469eb1a0d124f65159749f40a34bcd681fdbfe54df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7b799685fe55c48ce64d469eb1a0d124f65159749f40a34bcd681fdbfe54df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7b799685fe55c48ce64d469eb1a0d124f65159749f40a34bcd681fdbfe54df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:04:32 compute-0 podman[349088]: 2025-10-11 09:04:32.71556951 +0000 UTC m=+0.201373589 container init aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_solomon, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:04:32 compute-0 podman[349088]: 2025-10-11 09:04:32.725686669 +0000 UTC m=+0.211490698 container start aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_solomon, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:04:32 compute-0 podman[349088]: 2025-10-11 09:04:32.730198358 +0000 UTC m=+0.216002427 container attach aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 09:04:33 compute-0 nova_compute[260935]: 2025-10-11 09:04:33.039 2 DEBUG nova.network.neutron [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:04:33 compute-0 nova_compute[260935]: 2025-10-11 09:04:33.107 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Releasing lock "refresh_cache-14b020dc-ae35-4a87-87c8-ed8504968319" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:04:33 compute-0 nova_compute[260935]: 2025-10-11 09:04:33.108 2 DEBUG nova.compute.manager [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:04:33 compute-0 nova_compute[260935]: 2025-10-11 09:04:33.188 2 DEBUG nova.compute.manager [None req-3132466f-c37e-4b3f-9ed4-f18e908a762b 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 11 09:04:33 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000059.scope: Deactivated successfully.
Oct 11 09:04:33 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000059.scope: Consumed 5.113s CPU time.
Oct 11 09:04:33 compute-0 systemd-machined[215705]: Machine qemu-101-instance-00000059 terminated.
Oct 11 09:04:33 compute-0 nova_compute[260935]: 2025-10-11 09:04:33.331 2 INFO nova.virt.libvirt.driver [-] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Instance destroyed successfully.
Oct 11 09:04:33 compute-0 nova_compute[260935]: 2025-10-11 09:04:33.332 2 DEBUG nova.objects.instance [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lazy-loading 'resources' on Instance uuid 14b020dc-ae35-4a87-87c8-ed8504968319 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Oct 11 09:04:33 compute-0 ceph-mon[74313]: pgmap v1847: 321 pgs: 321 active+clean; 467 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.4 MiB/s wr, 264 op/s
Oct 11 09:04:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Oct 11 09:04:33 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Oct 11 09:04:33 compute-0 sshd-session[348881]: Failed password for invalid user database from 152.32.213.170 port 47422 ssh2
Oct 11 09:04:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:04:33 compute-0 nova_compute[260935]: 2025-10-11 09:04:33.764 2 INFO nova.virt.libvirt.driver [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Deleting instance files /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319_del
Oct 11 09:04:33 compute-0 nova_compute[260935]: 2025-10-11 09:04:33.765 2 INFO nova.virt.libvirt.driver [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Deletion of /var/lib/nova/instances/14b020dc-ae35-4a87-87c8-ed8504968319_del complete
Oct 11 09:04:33 compute-0 nova_compute[260935]: 2025-10-11 09:04:33.844 2 INFO nova.compute.manager [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 11 09:04:33 compute-0 nova_compute[260935]: 2025-10-11 09:04:33.844 2 DEBUG oslo.service.loopingcall [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:04:33 compute-0 nova_compute[260935]: 2025-10-11 09:04:33.844 2 DEBUG nova.compute.manager [-] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:04:33 compute-0 nova_compute[260935]: 2025-10-11 09:04:33.844 2 DEBUG nova.network.neutron [-] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:04:33 compute-0 nova_compute[260935]: 2025-10-11 09:04:33.922 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:33 compute-0 nova_compute[260935]: 2025-10-11 09:04:33.923 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:33 compute-0 ecstatic_solomon[349106]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:04:33 compute-0 ecstatic_solomon[349106]: --> relative data size: 1.0
Oct 11 09:04:33 compute-0 ecstatic_solomon[349106]: --> All data devices are unavailable
Oct 11 09:04:33 compute-0 systemd[1]: libpod-aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428.scope: Deactivated successfully.
Oct 11 09:04:33 compute-0 systemd[1]: libpod-aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428.scope: Consumed 1.115s CPU time.
Oct 11 09:04:33 compute-0 podman[349088]: 2025-10-11 09:04:33.958625407 +0000 UTC m=+1.444429396 container died aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_solomon, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 09:04:33 compute-0 nova_compute[260935]: 2025-10-11 09:04:33.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff7b799685fe55c48ce64d469eb1a0d124f65159749f40a34bcd681fdbfe54df-merged.mount: Deactivated successfully.
Oct 11 09:04:34 compute-0 nova_compute[260935]: 2025-10-11 09:04:34.019 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:04:34 compute-0 podman[349088]: 2025-10-11 09:04:34.023684034 +0000 UTC m=+1.509488033 container remove aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_solomon, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 11 09:04:34 compute-0 systemd[1]: libpod-conmon-aa2a3b7a78d788a6e7ae69d039c6dadc7563910859c17e72cdeb561a90499428.scope: Deactivated successfully.
Oct 11 09:04:34 compute-0 sudo[348980]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:34 compute-0 nova_compute[260935]: 2025-10-11 09:04:34.141 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:34 compute-0 nova_compute[260935]: 2025-10-11 09:04:34.142 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:34 compute-0 nova_compute[260935]: 2025-10-11 09:04:34.152 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:04:34 compute-0 nova_compute[260935]: 2025-10-11 09:04:34.153 2 INFO nova.compute.claims [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:04:34 compute-0 sudo[349170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:04:34 compute-0 sudo[349170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:34 compute-0 sudo[349170]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:34 compute-0 nova_compute[260935]: 2025-10-11 09:04:34.170 2 DEBUG nova.network.neutron [-] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:04:34 compute-0 nova_compute[260935]: 2025-10-11 09:04:34.233 2 DEBUG nova.network.neutron [-] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:04:34 compute-0 sudo[349195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:04:34 compute-0 sudo[349195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:34 compute-0 sudo[349195]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 467 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 28 KiB/s wr, 300 op/s
Oct 11 09:04:34 compute-0 nova_compute[260935]: 2025-10-11 09:04:34.305 2 INFO nova.compute.manager [-] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Took 0.46 seconds to deallocate network for instance.
Oct 11 09:04:34 compute-0 sudo[349220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:04:34 compute-0 sudo[349220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:34 compute-0 sudo[349220]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:34 compute-0 nova_compute[260935]: 2025-10-11 09:04:34.382 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:34 compute-0 sudo[349245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:04:34 compute-0 sudo[349245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:34 compute-0 nova_compute[260935]: 2025-10-11 09:04:34.519 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:34 compute-0 ceph-mon[74313]: osdmap e249: 3 total, 3 up, 3 in
Oct 11 09:04:34 compute-0 podman[349329]: 2025-10-11 09:04:34.908404479 +0000 UTC m=+0.059820529 container create 4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 09:04:34 compute-0 systemd[1]: Started libpod-conmon-4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff.scope.
Oct 11 09:04:34 compute-0 podman[349329]: 2025-10-11 09:04:34.882707166 +0000 UTC m=+0.034123236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:04:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:04:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:04:35 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2043618827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:35 compute-0 podman[349329]: 2025-10-11 09:04:35.038273777 +0000 UTC m=+0.189689887 container init 4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.041 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:35 compute-0 podman[349329]: 2025-10-11 09:04:35.047542151 +0000 UTC m=+0.198958201 container start 4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.053 2 DEBUG nova.compute.provider_tree [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:04:35 compute-0 mystifying_chatelet[349346]: 167 167
Oct 11 09:04:35 compute-0 systemd[1]: libpod-4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff.scope: Deactivated successfully.
Oct 11 09:04:35 compute-0 podman[349329]: 2025-10-11 09:04:35.070666681 +0000 UTC m=+0.222082741 container attach 4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:04:35 compute-0 podman[349329]: 2025-10-11 09:04:35.071803754 +0000 UTC m=+0.223219794 container died 4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.105 2 DEBUG nova.scheduler.client.report [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:04:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6e910854422a861075278aec4597b51bbdd397969e914f669dcc1171876d234-merged.mount: Deactivated successfully.
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.175 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.177 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:35 compute-0 podman[349329]: 2025-10-11 09:04:35.222555567 +0000 UTC m=+0.373971597 container remove 4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chatelet, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 09:04:35 compute-0 systemd[1]: libpod-conmon-4634c42738ac7ac078fdd7ad3b4f39ba1fc07b62cb949b83197e96038cce19ff.scope: Deactivated successfully.
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.309 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "7bf7bbee-16b0-4eaa-b3f1-a25e58d9b292" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.310 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "7bf7bbee-16b0-4eaa-b3f1-a25e58d9b292" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.349 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "7bf7bbee-16b0-4eaa-b3f1-a25e58d9b292" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.351 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.412 2 DEBUG oslo_concurrency.processutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.508 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.510 2 DEBUG nova.network.neutron [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:04:35 compute-0 podman[349375]: 2025-10-11 09:04:35.552109465 +0000 UTC m=+0.086981124 container create 78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sinoussi, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.575 2 INFO nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:04:35 compute-0 podman[349375]: 2025-10-11 09:04:35.519558966 +0000 UTC m=+0.054430665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:04:35 compute-0 ceph-mon[74313]: pgmap v1849: 321 pgs: 321 active+clean; 467 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 28 KiB/s wr, 300 op/s
Oct 11 09:04:35 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2043618827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:35 compute-0 systemd[1]: Started libpod-conmon-78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d.scope.
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.618 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:04:35 compute-0 sshd-session[348881]: Received disconnect from 152.32.213.170 port 47422:11: Bye Bye [preauth]
Oct 11 09:04:35 compute-0 sshd-session[348881]: Disconnected from invalid user database 152.32.213.170 port 47422 [preauth]
Oct 11 09:04:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dab66d3398a2695a961d4f08361fbec6e2ab4a823b0ea4911f4063c29e85dd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dab66d3398a2695a961d4f08361fbec6e2ab4a823b0ea4911f4063c29e85dd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dab66d3398a2695a961d4f08361fbec6e2ab4a823b0ea4911f4063c29e85dd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dab66d3398a2695a961d4f08361fbec6e2ab4a823b0ea4911f4063c29e85dd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:04:35 compute-0 podman[349375]: 2025-10-11 09:04:35.681849269 +0000 UTC m=+0.216720928 container init 78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 09:04:35 compute-0 podman[349375]: 2025-10-11 09:04:35.697694481 +0000 UTC m=+0.232566140 container start 78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sinoussi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 09:04:35 compute-0 podman[349375]: 2025-10-11 09:04:35.702972602 +0000 UTC m=+0.237844231 container attach 78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.733 2 DEBUG nova.policy [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f7e5365d09e140aaa4289b21435cbd70', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '884a8b5cd18948009939a3ab6cb1d42a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.793 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.795 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.795 2 INFO nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Creating image(s)
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.822 2 DEBUG nova.storage.rbd_utils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] rbd image dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.850 2 DEBUG nova.storage.rbd_utils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] rbd image dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.877 2 DEBUG nova.storage.rbd_utils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] rbd image dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.882 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:04:35 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1752793777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.969 2 DEBUG oslo_concurrency.processutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.972 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.973 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.974 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:35 compute-0 nova_compute[260935]: 2025-10-11 09:04:35.974 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:36 compute-0 nova_compute[260935]: 2025-10-11 09:04:36.006 2 DEBUG nova.storage.rbd_utils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] rbd image dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:36 compute-0 nova_compute[260935]: 2025-10-11 09:04:36.009 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:36 compute-0 nova_compute[260935]: 2025-10-11 09:04:36.071 2 DEBUG nova.compute.provider_tree [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:04:36 compute-0 nova_compute[260935]: 2025-10-11 09:04:36.106 2 DEBUG nova.scheduler.client.report [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:04:36 compute-0 nova_compute[260935]: 2025-10-11 09:04:36.170 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:36 compute-0 nova_compute[260935]: 2025-10-11 09:04:36.243 2 INFO nova.scheduler.client.report [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Deleted allocations for instance 14b020dc-ae35-4a87-87c8-ed8504968319
Oct 11 09:04:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 467 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 22 KiB/s wr, 239 op/s
Oct 11 09:04:36 compute-0 nova_compute[260935]: 2025-10-11 09:04:36.414 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:36 compute-0 nova_compute[260935]: 2025-10-11 09:04:36.461 2 DEBUG oslo_concurrency.lockutils [None req-bdc924b6-5434-44f6-bf04-aa9e1098f5c4 2930ce5b9093473e9d0f4e49fb1e934b 5ae7015798d64fdeb69d4d5d5fda3326 - - default default] Lock "14b020dc-ae35-4a87-87c8-ed8504968319" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]: {
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:     "0": [
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:         {
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "devices": [
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "/dev/loop3"
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             ],
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "lv_name": "ceph_lv0",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "lv_size": "21470642176",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "name": "ceph_lv0",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "tags": {
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.cluster_name": "ceph",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.crush_device_class": "",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.encrypted": "0",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.osd_id": "0",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.type": "block",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.vdo": "0"
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             },
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "type": "block",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "vg_name": "ceph_vg0"
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:         }
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:     ],
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:     "1": [
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:         {
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "devices": [
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "/dev/loop4"
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             ],
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "lv_name": "ceph_lv1",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "lv_size": "21470642176",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "name": "ceph_lv1",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "tags": {
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.cluster_name": "ceph",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.crush_device_class": "",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.encrypted": "0",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.osd_id": "1",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.type": "block",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.vdo": "0"
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             },
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "type": "block",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "vg_name": "ceph_vg1"
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:         }
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:     ],
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:     "2": [
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:         {
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "devices": [
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "/dev/loop5"
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             ],
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "lv_name": "ceph_lv2",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "lv_size": "21470642176",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "name": "ceph_lv2",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "tags": {
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.cluster_name": "ceph",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.crush_device_class": "",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.encrypted": "0",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.osd_id": "2",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.type": "block",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:                 "ceph.vdo": "0"
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             },
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "type": "block",
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:             "vg_name": "ceph_vg2"
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:         }
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]:     ]
Oct 11 09:04:36 compute-0 affectionate_sinoussi[349410]: }
Oct 11 09:04:36 compute-0 nova_compute[260935]: 2025-10-11 09:04:36.514 2 DEBUG nova.storage.rbd_utils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] resizing rbd image dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:04:36 compute-0 systemd[1]: libpod-78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d.scope: Deactivated successfully.
Oct 11 09:04:36 compute-0 podman[349375]: 2025-10-11 09:04:36.524583666 +0000 UTC m=+1.059455345 container died 78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sinoussi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 09:04:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dab66d3398a2695a961d4f08361fbec6e2ab4a823b0ea4911f4063c29e85dd6-merged.mount: Deactivated successfully.
Oct 11 09:04:36 compute-0 nova_compute[260935]: 2025-10-11 09:04:36.569 2 DEBUG nova.network.neutron [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Successfully created port: 63a1eeb0-05f1-4f8a-b115-a1323d4d119d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:04:36 compute-0 podman[349375]: 2025-10-11 09:04:36.597019524 +0000 UTC m=+1.131891153 container remove 78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 09:04:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1752793777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:36 compute-0 systemd[1]: libpod-conmon-78f0b0f12b9a67bf1606237f6567c6cce42d14b5f98bbc1682180ea419780b6d.scope: Deactivated successfully.
Oct 11 09:04:36 compute-0 sudo[349245]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:36 compute-0 nova_compute[260935]: 2025-10-11 09:04:36.712 2 DEBUG nova.objects.instance [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lazy-loading 'migration_context' on Instance uuid dffb6f2b-b5a8-4d28-ae9b-aca7728ce617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:36 compute-0 sudo[349585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:04:36 compute-0 sudo[349585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:36 compute-0 sudo[349585]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:36 compute-0 nova_compute[260935]: 2025-10-11 09:04:36.759 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:04:36 compute-0 nova_compute[260935]: 2025-10-11 09:04:36.760 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Ensure instance console log exists: /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:04:36 compute-0 nova_compute[260935]: 2025-10-11 09:04:36.760 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:36 compute-0 nova_compute[260935]: 2025-10-11 09:04:36.761 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:36 compute-0 nova_compute[260935]: 2025-10-11 09:04:36.761 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:36 compute-0 sudo[349621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:04:36 compute-0 sudo[349621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:36 compute-0 sudo[349621]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:36 compute-0 sudo[349646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:04:36 compute-0 sudo[349646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:36 compute-0 sudo[349646]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:37 compute-0 sudo[349671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:04:37 compute-0 sudo[349671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:37 compute-0 podman[349737]: 2025-10-11 09:04:37.499341153 +0000 UTC m=+0.053977532 container create c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 09:04:37 compute-0 systemd[1]: Started libpod-conmon-c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99.scope.
Oct 11 09:04:37 compute-0 nova_compute[260935]: 2025-10-11 09:04:37.545 2 DEBUG nova.network.neutron [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Successfully updated port: 63a1eeb0-05f1-4f8a-b115-a1323d4d119d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:04:37 compute-0 podman[349737]: 2025-10-11 09:04:37.474254077 +0000 UTC m=+0.028890496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:04:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:04:37 compute-0 nova_compute[260935]: 2025-10-11 09:04:37.583 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "refresh_cache-dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:04:37 compute-0 nova_compute[260935]: 2025-10-11 09:04:37.584 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquired lock "refresh_cache-dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:04:37 compute-0 nova_compute[260935]: 2025-10-11 09:04:37.584 2 DEBUG nova.network.neutron [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:04:37 compute-0 podman[349737]: 2025-10-11 09:04:37.605250796 +0000 UTC m=+0.159887185 container init c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:04:37 compute-0 podman[349737]: 2025-10-11 09:04:37.616412485 +0000 UTC m=+0.171048864 container start c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 09:04:37 compute-0 podman[349737]: 2025-10-11 09:04:37.621534491 +0000 UTC m=+0.176170880 container attach c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 09:04:37 compute-0 wizardly_mayer[349753]: 167 167
Oct 11 09:04:37 compute-0 systemd[1]: libpod-c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99.scope: Deactivated successfully.
Oct 11 09:04:37 compute-0 ceph-mon[74313]: pgmap v1850: 321 pgs: 321 active+clean; 467 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 22 KiB/s wr, 239 op/s
Oct 11 09:04:37 compute-0 podman[349758]: 2025-10-11 09:04:37.67824958 +0000 UTC m=+0.034091994 container died c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 09:04:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bc9546443f31a8baf04d62be30abcaddb7e75e7d2a33936aa87cf5f9e2c1cac-merged.mount: Deactivated successfully.
Oct 11 09:04:37 compute-0 podman[349758]: 2025-10-11 09:04:37.7293956 +0000 UTC m=+0.085238004 container remove c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 09:04:37 compute-0 systemd[1]: libpod-conmon-c63f16dba182e14e9a1ed1b7db0b2e23427c97381af89b7835ff5e46d437ee99.scope: Deactivated successfully.
Oct 11 09:04:37 compute-0 nova_compute[260935]: 2025-10-11 09:04:37.753 2 DEBUG nova.compute.manager [req-0aa97448-4e72-4d82-b44b-5ee6d7e4a670 req-dabcf617-b846-49f0-b919-99615cdcb0fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received event network-changed-63a1eeb0-05f1-4f8a-b115-a1323d4d119d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:04:37 compute-0 nova_compute[260935]: 2025-10-11 09:04:37.759 2 DEBUG nova.compute.manager [req-0aa97448-4e72-4d82-b44b-5ee6d7e4a670 req-dabcf617-b846-49f0-b919-99615cdcb0fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Refreshing instance network info cache due to event network-changed-63a1eeb0-05f1-4f8a-b115-a1323d4d119d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:04:37 compute-0 nova_compute[260935]: 2025-10-11 09:04:37.761 2 DEBUG oslo_concurrency.lockutils [req-0aa97448-4e72-4d82-b44b-5ee6d7e4a670 req-dabcf617-b846-49f0-b919-99615cdcb0fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:04:37 compute-0 nova_compute[260935]: 2025-10-11 09:04:37.844 2 DEBUG nova.network.neutron [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:04:38 compute-0 podman[349780]: 2025-10-11 09:04:38.023013681 +0000 UTC m=+0.071437010 container create 1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 09:04:38 compute-0 systemd[1]: Started libpod-conmon-1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71.scope.
Oct 11 09:04:38 compute-0 podman[349780]: 2025-10-11 09:04:37.997942386 +0000 UTC m=+0.046365705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:04:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:04:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccdfa78297bbecd3acac2ff229188486fb3237de5c832862110009a2326b14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:04:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccdfa78297bbecd3acac2ff229188486fb3237de5c832862110009a2326b14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:04:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccdfa78297bbecd3acac2ff229188486fb3237de5c832862110009a2326b14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:04:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccdfa78297bbecd3acac2ff229188486fb3237de5c832862110009a2326b14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:04:38 compute-0 podman[349780]: 2025-10-11 09:04:38.140827985 +0000 UTC m=+0.189251294 container init 1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:04:38 compute-0 podman[349780]: 2025-10-11 09:04:38.154897666 +0000 UTC m=+0.203320995 container start 1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:04:38 compute-0 podman[349780]: 2025-10-11 09:04:38.15994436 +0000 UTC m=+0.208367679 container attach 1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 09:04:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1851: 321 pgs: 321 active+clean; 467 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 336 op/s
Oct 11 09:04:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:04:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Oct 11 09:04:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Oct 11 09:04:38 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Oct 11 09:04:38 compute-0 nova_compute[260935]: 2025-10-11 09:04:38.892 2 DEBUG nova.network.neutron [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Updating instance_info_cache with network_info: [{"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:04:38 compute-0 nova_compute[260935]: 2025-10-11 09:04:38.966 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Releasing lock "refresh_cache-dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:04:38 compute-0 nova_compute[260935]: 2025-10-11 09:04:38.967 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Instance network_info: |[{"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:04:38 compute-0 nova_compute[260935]: 2025-10-11 09:04:38.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:38 compute-0 nova_compute[260935]: 2025-10-11 09:04:38.972 2 DEBUG oslo_concurrency.lockutils [req-0aa97448-4e72-4d82-b44b-5ee6d7e4a670 req-dabcf617-b846-49f0-b919-99615cdcb0fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:04:38 compute-0 nova_compute[260935]: 2025-10-11 09:04:38.972 2 DEBUG nova.network.neutron [req-0aa97448-4e72-4d82-b44b-5ee6d7e4a670 req-dabcf617-b846-49f0-b919-99615cdcb0fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Refreshing network info cache for port 63a1eeb0-05f1-4f8a-b115-a1323d4d119d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:04:38 compute-0 nova_compute[260935]: 2025-10-11 09:04:38.978 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Start _get_guest_xml network_info=[{"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:04:38 compute-0 nova_compute[260935]: 2025-10-11 09:04:38.986 2 WARNING nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:04:38 compute-0 nova_compute[260935]: 2025-10-11 09:04:38.993 2 DEBUG nova.virt.libvirt.host [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:04:38 compute-0 nova_compute[260935]: 2025-10-11 09:04:38.994 2 DEBUG nova.virt.libvirt.host [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:04:38 compute-0 nova_compute[260935]: 2025-10-11 09:04:38.999 2 DEBUG nova.virt.libvirt.host [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:04:39 compute-0 nova_compute[260935]: 2025-10-11 09:04:38.999 2 DEBUG nova.virt.libvirt.host [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:04:39 compute-0 nova_compute[260935]: 2025-10-11 09:04:39.000 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:04:39 compute-0 nova_compute[260935]: 2025-10-11 09:04:39.001 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:04:39 compute-0 nova_compute[260935]: 2025-10-11 09:04:39.002 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:04:39 compute-0 nova_compute[260935]: 2025-10-11 09:04:39.003 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:04:39 compute-0 nova_compute[260935]: 2025-10-11 09:04:39.003 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:04:39 compute-0 nova_compute[260935]: 2025-10-11 09:04:39.004 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:04:39 compute-0 nova_compute[260935]: 2025-10-11 09:04:39.004 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:04:39 compute-0 nova_compute[260935]: 2025-10-11 09:04:39.005 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:04:39 compute-0 nova_compute[260935]: 2025-10-11 09:04:39.005 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:04:39 compute-0 nova_compute[260935]: 2025-10-11 09:04:39.006 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:04:39 compute-0 nova_compute[260935]: 2025-10-11 09:04:39.006 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:04:39 compute-0 nova_compute[260935]: 2025-10-11 09:04:39.007 2 DEBUG nova.virt.hardware [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:04:39 compute-0 nova_compute[260935]: 2025-10-11 09:04:39.012 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:39 compute-0 peaceful_panini[349797]: {
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:         "osd_id": 2,
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:         "type": "bluestore"
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:     },
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:         "osd_id": 0,
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:         "type": "bluestore"
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:     },
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:         "osd_id": 1,
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:         "type": "bluestore"
Oct 11 09:04:39 compute-0 peaceful_panini[349797]:     }
Oct 11 09:04:39 compute-0 peaceful_panini[349797]: }
Oct 11 09:04:39 compute-0 systemd[1]: libpod-1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71.scope: Deactivated successfully.
Oct 11 09:04:39 compute-0 podman[349780]: 2025-10-11 09:04:39.244626065 +0000 UTC m=+1.293049394 container died 1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 09:04:39 compute-0 systemd[1]: libpod-1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71.scope: Consumed 1.061s CPU time.
Oct 11 09:04:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1ccdfa78297bbecd3acac2ff229188486fb3237de5c832862110009a2326b14-merged.mount: Deactivated successfully.
Oct 11 09:04:39 compute-0 podman[349780]: 2025-10-11 09:04:39.315241271 +0000 UTC m=+1.363664570 container remove 1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 09:04:39 compute-0 systemd[1]: libpod-conmon-1069f11545fd213da9c9771073a8de6484c9569daffd22568870d65cfd4d0b71.scope: Deactivated successfully.
Oct 11 09:04:39 compute-0 sudo[349671]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:04:39 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:04:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:04:39 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:04:39 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 2acd76f1-6426-434e-a8af-d8b99e4d5cc6 does not exist
Oct 11 09:04:39 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 36be4cfb-bb9c-4728-b032-ede1d93103f5 does not exist
Oct 11 09:04:39 compute-0 sudo[349863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:04:39 compute-0 sudo[349863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:39 compute-0 sudo[349863]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:04:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1292809805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:39 compute-0 nova_compute[260935]: 2025-10-11 09:04:39.500 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:39 compute-0 sudo[349888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:04:39 compute-0 sudo[349888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:04:39 compute-0 sudo[349888]: pam_unix(sudo:session): session closed for user root
Oct 11 09:04:39 compute-0 nova_compute[260935]: 2025-10-11 09:04:39.548 2 DEBUG nova.storage.rbd_utils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] rbd image dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:39 compute-0 nova_compute[260935]: 2025-10-11 09:04:39.553 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:39 compute-0 ceph-mon[74313]: pgmap v1851: 321 pgs: 321 active+clean; 467 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 336 op/s
Oct 11 09:04:39 compute-0 ceph-mon[74313]: osdmap e250: 3 total, 3 up, 3 in
Oct 11 09:04:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:04:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:04:39 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1292809805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:40 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 11 09:04:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:04:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3839639460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.075 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.078 2 DEBUG nova.virt.libvirt.vif [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:04:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-236424263',display_name='tempest-ServerGroupTestJSON-server-236424263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-236424263',id=90,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='884a8b5cd18948009939a3ab6cb1d42a',ramdisk_id='',reservation_id='r-4ccwpn6r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1907510446',owner_user_name='tempest-ServerGroupTestJSON-1907510446-project
-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:04:35Z,user_data=None,user_id='f7e5365d09e140aaa4289b21435cbd70',uuid=dffb6f2b-b5a8-4d28-ae9b-aca7728ce617,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.078 2 DEBUG nova.network.os_vif_util [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Converting VIF {"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.079 2 DEBUG nova.network.os_vif_util [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:89:76,bridge_name='br-int',has_traffic_filtering=True,id=63a1eeb0-05f1-4f8a-b115-a1323d4d119d,network=Network(80aba2f5-1646-44f0-b2f3-d9c000867c6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a1eeb0-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.081 2 DEBUG nova.objects.instance [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lazy-loading 'pci_devices' on Instance uuid dffb6f2b-b5a8-4d28-ae9b-aca7728ce617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.119 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:04:40 compute-0 nova_compute[260935]:   <uuid>dffb6f2b-b5a8-4d28-ae9b-aca7728ce617</uuid>
Oct 11 09:04:40 compute-0 nova_compute[260935]:   <name>instance-0000005a</name>
Oct 11 09:04:40 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:04:40 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:04:40 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerGroupTestJSON-server-236424263</nova:name>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:04:38</nova:creationTime>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:04:40 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:04:40 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:04:40 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:04:40 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:04:40 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:04:40 compute-0 nova_compute[260935]:         <nova:user uuid="f7e5365d09e140aaa4289b21435cbd70">tempest-ServerGroupTestJSON-1907510446-project-member</nova:user>
Oct 11 09:04:40 compute-0 nova_compute[260935]:         <nova:project uuid="884a8b5cd18948009939a3ab6cb1d42a">tempest-ServerGroupTestJSON-1907510446</nova:project>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:04:40 compute-0 nova_compute[260935]:         <nova:port uuid="63a1eeb0-05f1-4f8a-b115-a1323d4d119d">
Oct 11 09:04:40 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:04:40 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:04:40 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <system>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <entry name="serial">dffb6f2b-b5a8-4d28-ae9b-aca7728ce617</entry>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <entry name="uuid">dffb6f2b-b5a8-4d28-ae9b-aca7728ce617</entry>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     </system>
Oct 11 09:04:40 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:04:40 compute-0 nova_compute[260935]:   <os>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:   </os>
Oct 11 09:04:40 compute-0 nova_compute[260935]:   <features>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:   </features>
Oct 11 09:04:40 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:04:40 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:04:40 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk">
Oct 11 09:04:40 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       </source>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:04:40 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk.config">
Oct 11 09:04:40 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       </source>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:04:40 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:a1:89:76"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <target dev="tap63a1eeb0-05"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617/console.log" append="off"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <video>
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     </video>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:04:40 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:04:40 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:04:40 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:04:40 compute-0 nova_compute[260935]: </domain>
Oct 11 09:04:40 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.120 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Preparing to wait for external event network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.121 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.124 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.124 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.125 2 DEBUG nova.virt.libvirt.vif [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:04:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-236424263',display_name='tempest-ServerGroupTestJSON-server-236424263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-236424263',id=90,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='884a8b5cd18948009939a3ab6cb1d42a',ramdisk_id='',reservation_id='r-4ccwpn6r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1907510446',owner_user_name='tempest-ServerGroupTestJSON-19075104
46-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:04:35Z,user_data=None,user_id='f7e5365d09e140aaa4289b21435cbd70',uuid=dffb6f2b-b5a8-4d28-ae9b-aca7728ce617,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.126 2 DEBUG nova.network.os_vif_util [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Converting VIF {"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.128 2 DEBUG nova.network.os_vif_util [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:89:76,bridge_name='br-int',has_traffic_filtering=True,id=63a1eeb0-05f1-4f8a-b115-a1323d4d119d,network=Network(80aba2f5-1646-44f0-b2f3-d9c000867c6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a1eeb0-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.128 2 DEBUG os_vif [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:89:76,bridge_name='br-int',has_traffic_filtering=True,id=63a1eeb0-05f1-4f8a-b115-a1323d4d119d,network=Network(80aba2f5-1646-44f0-b2f3-d9c000867c6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a1eeb0-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.130 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63a1eeb0-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.140 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63a1eeb0-05, col_values=(('external_ids', {'iface-id': '63a1eeb0-05f1-4f8a-b115-a1323d4d119d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a1:89:76', 'vm-uuid': 'dffb6f2b-b5a8-4d28-ae9b-aca7728ce617'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:40 compute-0 NetworkManager[44960]: <info>  [1760173480.1442] manager: (tap63a1eeb0-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.156 2 INFO os_vif [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:89:76,bridge_name='br-int',has_traffic_filtering=True,id=63a1eeb0-05f1-4f8a-b115-a1323d4d119d,network=Network(80aba2f5-1646-44f0-b2f3-d9c000867c6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a1eeb0-05')
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.268 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.269 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.269 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] No VIF found with MAC fa:16:3e:a1:89:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.270 2 INFO nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Using config drive
Oct 11 09:04:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 321 active+clean; 467 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 199 op/s
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.298 2 DEBUG nova.storage.rbd_utils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] rbd image dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Oct 11 09:04:40 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3839639460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Oct 11 09:04:40 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.730 2 DEBUG nova.network.neutron [req-0aa97448-4e72-4d82-b44b-5ee6d7e4a670 req-dabcf617-b846-49f0-b919-99615cdcb0fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Updated VIF entry in instance network info cache for port 63a1eeb0-05f1-4f8a-b115-a1323d4d119d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.730 2 DEBUG nova.network.neutron [req-0aa97448-4e72-4d82-b44b-5ee6d7e4a670 req-dabcf617-b846-49f0-b919-99615cdcb0fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Updating instance_info_cache with network_info: [{"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:04:40 compute-0 nova_compute[260935]: 2025-10-11 09:04:40.760 2 DEBUG oslo_concurrency.lockutils [req-0aa97448-4e72-4d82-b44b-5ee6d7e4a670 req-dabcf617-b846-49f0-b919-99615cdcb0fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:04:41 compute-0 nova_compute[260935]: 2025-10-11 09:04:41.012 2 INFO nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Creating config drive at /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617/disk.config
Oct 11 09:04:41 compute-0 nova_compute[260935]: 2025-10-11 09:04:41.020 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6_t6w6s3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:41 compute-0 nova_compute[260935]: 2025-10-11 09:04:41.107 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 09:04:41 compute-0 nova_compute[260935]: 2025-10-11 09:04:41.194 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6_t6w6s3" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:41 compute-0 nova_compute[260935]: 2025-10-11 09:04:41.237 2 DEBUG nova.storage.rbd_utils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] rbd image dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:41 compute-0 nova_compute[260935]: 2025-10-11 09:04:41.242 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617/disk.config dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:41 compute-0 nova_compute[260935]: 2025-10-11 09:04:41.504 2 DEBUG oslo_concurrency.processutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617/disk.config dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:41 compute-0 nova_compute[260935]: 2025-10-11 09:04:41.505 2 INFO nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Deleting local config drive /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617/disk.config because it was imported into RBD.
Oct 11 09:04:41 compute-0 kernel: tap63a1eeb0-05: entered promiscuous mode
Oct 11 09:04:41 compute-0 ovn_controller[152945]: 2025-10-11T09:04:41Z|00801|binding|INFO|Claiming lport 63a1eeb0-05f1-4f8a-b115-a1323d4d119d for this chassis.
Oct 11 09:04:41 compute-0 ovn_controller[152945]: 2025-10-11T09:04:41Z|00802|binding|INFO|63a1eeb0-05f1-4f8a-b115-a1323d4d119d: Claiming fa:16:3e:a1:89:76 10.100.0.3
Oct 11 09:04:41 compute-0 NetworkManager[44960]: <info>  [1760173481.5832] manager: (tap63a1eeb0-05): new Tun device (/org/freedesktop/NetworkManager/Devices/346)
Oct 11 09:04:41 compute-0 nova_compute[260935]: 2025-10-11 09:04:41.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:41 compute-0 nova_compute[260935]: 2025-10-11 09:04:41.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.609 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:89:76 10.100.0.3'], port_security=['fa:16:3e:a1:89:76 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dffb6f2b-b5a8-4d28-ae9b-aca7728ce617', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80aba2f5-1646-44f0-b2f3-d9c000867c6d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '884a8b5cd18948009939a3ab6cb1d42a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f1f8aaee-5285-4faa-914e-87133500808c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57af006d-3f68-4651-b79e-9b2e72269c0e, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=63a1eeb0-05f1-4f8a-b115-a1323d4d119d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.610 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 63a1eeb0-05f1-4f8a-b115-a1323d4d119d in datapath 80aba2f5-1646-44f0-b2f3-d9c000867c6d bound to our chassis
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.613 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80aba2f5-1646-44f0-b2f3-d9c000867c6d
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.665 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[19e83f7f-7751-46b6-bb77-9a12eb0c1951]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.667 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap80aba2f5-11 in ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:04:41 compute-0 ceph-mon[74313]: pgmap v1853: 321 pgs: 321 active+clean; 467 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 199 op/s
Oct 11 09:04:41 compute-0 systemd-udevd[350030]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:04:41 compute-0 ceph-mon[74313]: osdmap e251: 3 total, 3 up, 3 in
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.671 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap80aba2f5-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.671 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[450179d4-98bc-4345-8874-699ccff75981]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.672 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bffad023-107e-448f-883a-6eb599f5d9c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:41 compute-0 systemd-machined[215705]: New machine qemu-102-instance-0000005a.
Oct 11 09:04:41 compute-0 NetworkManager[44960]: <info>  [1760173481.6826] device (tap63a1eeb0-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:04:41 compute-0 NetworkManager[44960]: <info>  [1760173481.6832] device (tap63a1eeb0-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:04:41 compute-0 systemd[1]: Started Virtual Machine qemu-102-instance-0000005a.
Oct 11 09:04:41 compute-0 ovn_controller[152945]: 2025-10-11T09:04:41Z|00803|binding|INFO|Setting lport 63a1eeb0-05f1-4f8a-b115-a1323d4d119d ovn-installed in OVS
Oct 11 09:04:41 compute-0 ovn_controller[152945]: 2025-10-11T09:04:41Z|00804|binding|INFO|Setting lport 63a1eeb0-05f1-4f8a-b115-a1323d4d119d up in Southbound
Oct 11 09:04:41 compute-0 nova_compute[260935]: 2025-10-11 09:04:41.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.696 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[2e5cdbc1-5bec-4059-89cf-00a8fcf315cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.715 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d969edba-102e-41b7-b65a-02d6fa719a22]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.771 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2dfc7d82-71b6-4204-a740-6acfb66251e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.779 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eedfb6e4-3522-4026-9d38-39d31be4b864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:41 compute-0 NetworkManager[44960]: <info>  [1760173481.7803] manager: (tap80aba2f5-10): new Veth device (/org/freedesktop/NetworkManager/Devices/347)
Oct 11 09:04:41 compute-0 systemd-udevd[350034]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.837 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0d13f969-e0de-4f47-8878-9c293dddc8d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.841 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[87ba1c7f-2604-4fc3-8e4c-f773a99a29c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:41 compute-0 NetworkManager[44960]: <info>  [1760173481.8714] device (tap80aba2f5-10): carrier: link connected
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.878 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[79f3f930-73c8-4e88-b565-4edcbc9c2a95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.904 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[038eadf4-952b-4aef-9938-c2d03255b4fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80aba2f5-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:e7:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532657, 'reachable_time': 30369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350073, 'error': None, 'target': 'ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.925 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[626e9b6c-a34b-4f20-acf4-8629f5f2fb38]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:e739'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 532657, 'tstamp': 532657}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350079, 'error': None, 'target': 'ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:41 compute-0 podman[350047]: 2025-10-11 09:04:41.928776258 +0000 UTC m=+0.093461139 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.949 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[03a8519f-61f4-4973-9c61-ced46c670fea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80aba2f5-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:e7:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532657, 'reachable_time': 30369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350082, 'error': None, 'target': 'ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:41.983 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c7cee455-20ce-4012-8580-b6050c4fa90e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.071 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e7993875-1481-492b-b970-a7deb04e3bda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.073 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80aba2f5-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.074 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.074 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80aba2f5-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:04:42 compute-0 nova_compute[260935]: 2025-10-11 09:04:42.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:42 compute-0 NetworkManager[44960]: <info>  [1760173482.1088] manager: (tap80aba2f5-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/348)
Oct 11 09:04:42 compute-0 kernel: tap80aba2f5-10: entered promiscuous mode
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.113 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80aba2f5-10, col_values=(('external_ids', {'iface-id': '0ca342cd-acfa-447a-940e-c8b97b9bebfe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:04:42 compute-0 nova_compute[260935]: 2025-10-11 09:04:42.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:42 compute-0 ovn_controller[152945]: 2025-10-11T09:04:42Z|00805|binding|INFO|Releasing lport 0ca342cd-acfa-447a-940e-c8b97b9bebfe from this chassis (sb_readonly=0)
Oct 11 09:04:42 compute-0 nova_compute[260935]: 2025-10-11 09:04:42.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.146 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/80aba2f5-1646-44f0-b2f3-d9c000867c6d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/80aba2f5-1646-44f0-b2f3-d9c000867c6d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.147 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[511d9182-b198-49fd-a99c-b91d35fd45fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.148 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-80aba2f5-1646-44f0-b2f3-d9c000867c6d
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/80aba2f5-1646-44f0-b2f3-d9c000867c6d.pid.haproxy
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 80aba2f5-1646-44f0-b2f3-d9c000867c6d
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:04:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:42.149 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d', 'env', 'PROCESS_TAG=haproxy-80aba2f5-1646-44f0-b2f3-d9c000867c6d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/80aba2f5-1646-44f0-b2f3-d9c000867c6d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:04:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1855: 321 pgs: 321 active+clean; 491 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 5.2 MiB/s wr, 189 op/s
Oct 11 09:04:42 compute-0 podman[350156]: 2025-10-11 09:04:42.576437247 +0000 UTC m=+0.078631675 container create 1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 09:04:42 compute-0 systemd[1]: Started libpod-conmon-1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e.scope.
Oct 11 09:04:42 compute-0 podman[350156]: 2025-10-11 09:04:42.537228208 +0000 UTC m=+0.039422726 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:04:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:04:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35f771fc789c94264e2ade21343c20941b7339208f940dc9d09191577d5f9af6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:04:42 compute-0 podman[350156]: 2025-10-11 09:04:42.688356282 +0000 UTC m=+0.190550730 container init 1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 09:04:42 compute-0 podman[350156]: 2025-10-11 09:04:42.698055699 +0000 UTC m=+0.200250117 container start 1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:04:42 compute-0 neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d[350171]: [NOTICE]   (350175) : New worker (350177) forked
Oct 11 09:04:42 compute-0 neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d[350171]: [NOTICE]   (350175) : Loading success.
Oct 11 09:04:42 compute-0 nova_compute[260935]: 2025-10-11 09:04:42.803 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173482.8031628, dffb6f2b-b5a8-4d28-ae9b-aca7728ce617 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:04:42 compute-0 nova_compute[260935]: 2025-10-11 09:04:42.804 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] VM Started (Lifecycle Event)
Oct 11 09:04:42 compute-0 nova_compute[260935]: 2025-10-11 09:04:42.844 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:42 compute-0 nova_compute[260935]: 2025-10-11 09:04:42.850 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173482.8074305, dffb6f2b-b5a8-4d28-ae9b-aca7728ce617 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:04:42 compute-0 nova_compute[260935]: 2025-10-11 09:04:42.850 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] VM Paused (Lifecycle Event)
Oct 11 09:04:42 compute-0 nova_compute[260935]: 2025-10-11 09:04:42.947 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:42 compute-0 nova_compute[260935]: 2025-10-11 09:04:42.952 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.014 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.223 2 DEBUG nova.compute.manager [req-bc174533-a154-4025-8085-e11cf7c03da0 req-c0d0569f-2de4-4c74-985d-97c9be3c56c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received event network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.223 2 DEBUG oslo_concurrency.lockutils [req-bc174533-a154-4025-8085-e11cf7c03da0 req-c0d0569f-2de4-4c74-985d-97c9be3c56c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.224 2 DEBUG oslo_concurrency.lockutils [req-bc174533-a154-4025-8085-e11cf7c03da0 req-c0d0569f-2de4-4c74-985d-97c9be3c56c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.224 2 DEBUG oslo_concurrency.lockutils [req-bc174533-a154-4025-8085-e11cf7c03da0 req-c0d0569f-2de4-4c74-985d-97c9be3c56c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.224 2 DEBUG nova.compute.manager [req-bc174533-a154-4025-8085-e11cf7c03da0 req-c0d0569f-2de4-4c74-985d-97c9be3c56c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Processing event network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.225 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.228 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173483.2284052, dffb6f2b-b5a8-4d28-ae9b-aca7728ce617 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.228 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] VM Resumed (Lifecycle Event)
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.233 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.237 2 INFO nova.virt.libvirt.driver [-] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Instance spawned successfully.
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.237 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.312 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.319 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.319 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.320 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.320 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.321 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.321 2 DEBUG nova.virt.libvirt.driver [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.326 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.382 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.429 2 INFO nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Took 7.63 seconds to spawn the instance on the hypervisor.
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.429 2 DEBUG nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:43 compute-0 ceph-mon[74313]: pgmap v1855: 321 pgs: 321 active+clean; 491 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 5.2 MiB/s wr, 189 op/s
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.706 2 INFO nova.compute.manager [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Took 9.59 seconds to build instance.
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.746 2 DEBUG oslo_concurrency.lockutils [None req-03665052-f111-4d21-b479-e4ce66ca71fa f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.840 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquiring lock "4af522a5-ae89-49de-a956-50cfa7dd136a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.840 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "4af522a5-ae89-49de-a956-50cfa7dd136a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.902 2 DEBUG nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:04:43 compute-0 nova_compute[260935]: 2025-10-11 09:04:43.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:44 compute-0 nova_compute[260935]: 2025-10-11 09:04:44.131 2 INFO nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance shutdown successfully after 13 seconds.
Oct 11 09:04:44 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000058.scope: Deactivated successfully.
Oct 11 09:04:44 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000058.scope: Consumed 14.406s CPU time.
Oct 11 09:04:44 compute-0 systemd-machined[215705]: Machine qemu-100-instance-00000058 terminated.
Oct 11 09:04:44 compute-0 nova_compute[260935]: 2025-10-11 09:04:44.277 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:44 compute-0 nova_compute[260935]: 2025-10-11 09:04:44.279 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:44 compute-0 nova_compute[260935]: 2025-10-11 09:04:44.287 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:04:44 compute-0 nova_compute[260935]: 2025-10-11 09:04:44.288 2 INFO nova.compute.claims [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:04:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 500 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 583 KiB/s rd, 5.9 MiB/s wr, 241 op/s
Oct 11 09:04:44 compute-0 nova_compute[260935]: 2025-10-11 09:04:44.354 2 INFO nova.virt.libvirt.driver [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance destroyed successfully.
Oct 11 09:04:44 compute-0 nova_compute[260935]: 2025-10-11 09:04:44.359 2 INFO nova.virt.libvirt.driver [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance destroyed successfully.
Oct 11 09:04:44 compute-0 nova_compute[260935]: 2025-10-11 09:04:44.782 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:44 compute-0 nova_compute[260935]: 2025-10-11 09:04:44.875 2 INFO nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Deleting instance files /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_del
Oct 11 09:04:44 compute-0 nova_compute[260935]: 2025-10-11 09:04:44.877 2 INFO nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Deletion of /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_del complete
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.152 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.153 2 INFO nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Creating image(s)
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.190 2 DEBUG nova.storage.rbd_utils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.229 2 DEBUG nova.storage.rbd_utils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:04:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2136822517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.269 2 DEBUG nova.storage.rbd_utils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.275 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.328 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.338 2 DEBUG nova.compute.provider_tree [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.368 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.369 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.370 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.371 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.412 2 DEBUG nova.storage.rbd_utils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.419 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.471 2 DEBUG nova.scheduler.client.report [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.482 2 DEBUG nova.compute.manager [req-b72134e1-d1df-4463-aee5-baece2e829cc req-ae5cd6db-e820-4b8c-9734-273c2352e5fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received event network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.482 2 DEBUG oslo_concurrency.lockutils [req-b72134e1-d1df-4463-aee5-baece2e829cc req-ae5cd6db-e820-4b8c-9734-273c2352e5fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.483 2 DEBUG oslo_concurrency.lockutils [req-b72134e1-d1df-4463-aee5-baece2e829cc req-ae5cd6db-e820-4b8c-9734-273c2352e5fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.483 2 DEBUG oslo_concurrency.lockutils [req-b72134e1-d1df-4463-aee5-baece2e829cc req-ae5cd6db-e820-4b8c-9734-273c2352e5fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.483 2 DEBUG nova.compute.manager [req-b72134e1-d1df-4463-aee5-baece2e829cc req-ae5cd6db-e820-4b8c-9734-273c2352e5fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] No waiting events found dispatching network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.484 2 WARNING nova.compute.manager [req-b72134e1-d1df-4463-aee5-baece2e829cc req-ae5cd6db-e820-4b8c-9734-273c2352e5fd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received unexpected event network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d for instance with vm_state active and task_state None.
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.525 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.526 2 DEBUG nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:04:45 compute-0 ceph-mon[74313]: pgmap v1856: 321 pgs: 321 active+clean; 500 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 583 KiB/s rd, 5.9 MiB/s wr, 241 op/s
Oct 11 09:04:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2136822517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.745 2 DEBUG nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.746 2 DEBUG nova.network.neutron [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.757 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.852 2 DEBUG nova.storage.rbd_utils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] resizing rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.900 2 INFO nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.977 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.978 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Ensure instance console log exists: /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.979 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.980 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.981 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.983 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.989 2 WARNING nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.994 2 DEBUG nova.virt.libvirt.host [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:04:45 compute-0 nova_compute[260935]: 2025-10-11 09:04:45.995 2 DEBUG nova.virt.libvirt.host [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.000 2 DEBUG nova.virt.libvirt.host [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.001 2 DEBUG nova.virt.libvirt.host [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.002 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.003 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.004 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.004 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.005 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.005 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.006 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.007 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.007 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.008 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.009 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.009 2 DEBUG nova.virt.hardware [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.010 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'vcpu_model' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.047 2 DEBUG nova.network.neutron [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.048 2 DEBUG nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:04:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 500 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 506 KiB/s rd, 3.2 MiB/s wr, 129 op/s
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.398 2 DEBUG nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.528 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.575 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.576 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.577 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.577 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.577 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.579 2 INFO nova.compute.manager [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Terminating instance
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.580 2 DEBUG nova.compute.manager [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:04:46 compute-0 kernel: tap63a1eeb0-05 (unregistering): left promiscuous mode
Oct 11 09:04:46 compute-0 NetworkManager[44960]: <info>  [1760173486.6274] device (tap63a1eeb0-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.641 2 DEBUG nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:04:46 compute-0 ovn_controller[152945]: 2025-10-11T09:04:46Z|00806|binding|INFO|Releasing lport 63a1eeb0-05f1-4f8a-b115-a1323d4d119d from this chassis (sb_readonly=0)
Oct 11 09:04:46 compute-0 ovn_controller[152945]: 2025-10-11T09:04:46Z|00807|binding|INFO|Setting lport 63a1eeb0-05f1-4f8a-b115-a1323d4d119d down in Southbound
Oct 11 09:04:46 compute-0 ovn_controller[152945]: 2025-10-11T09:04:46Z|00808|binding|INFO|Removing iface tap63a1eeb0-05 ovn-installed in OVS
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.643 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.644 2 INFO nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Creating image(s)
Oct 11 09:04:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:46.653 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:89:76 10.100.0.3'], port_security=['fa:16:3e:a1:89:76 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dffb6f2b-b5a8-4d28-ae9b-aca7728ce617', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80aba2f5-1646-44f0-b2f3-d9c000867c6d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '884a8b5cd18948009939a3ab6cb1d42a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f1f8aaee-5285-4faa-914e-87133500808c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57af006d-3f68-4651-b79e-9b2e72269c0e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=63a1eeb0-05f1-4f8a-b115-a1323d4d119d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:04:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:46.655 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 63a1eeb0-05f1-4f8a-b115-a1323d4d119d in datapath 80aba2f5-1646-44f0-b2f3-d9c000867c6d unbound from our chassis
Oct 11 09:04:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:46.657 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 80aba2f5-1646-44f0-b2f3-d9c000867c6d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:04:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:46.658 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[94a07826-5292-423c-af90-d3705e936d5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:46.659 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d namespace which is not needed anymore
Oct 11 09:04:46 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Oct 11 09:04:46 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d0000005a.scope: Consumed 4.412s CPU time.
Oct 11 09:04:46 compute-0 systemd-machined[215705]: Machine qemu-102-instance-0000005a terminated.
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.688 2 DEBUG nova.storage.rbd_utils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] rbd image 4af522a5-ae89-49de-a956-50cfa7dd136a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.719 2 DEBUG nova.storage.rbd_utils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] rbd image 4af522a5-ae89-49de-a956-50cfa7dd136a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.759 2 DEBUG nova.storage.rbd_utils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] rbd image 4af522a5-ae89-49de-a956-50cfa7dd136a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.768 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquiring lock "24fb2f8914224986c1a288ff195771cacbd87d4d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.769 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "24fb2f8914224986c1a288ff195771cacbd87d4d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.815 2 INFO nova.virt.libvirt.driver [-] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Instance destroyed successfully.
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.816 2 DEBUG nova.objects.instance [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lazy-loading 'resources' on Instance uuid dffb6f2b-b5a8-4d28-ae9b-aca7728ce617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.839 2 DEBUG nova.virt.libvirt.vif [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:04:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-236424263',display_name='tempest-ServerGroupTestJSON-server-236424263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-236424263',id=90,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:04:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='884a8b5cd18948009939a3ab6cb1d42a',ramdisk_id='',reservation_id='r-4ccwpn6r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-1907510446',owner_user_name='tempest-ServerGroupTestJSON-1907510446-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:04:43Z,user_data=None,user_id='f7e5365d09e140aaa4289b21435cbd70',uuid=dffb6f2b-b5a8-4d28-ae9b-aca7728ce617,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.844 2 DEBUG nova.network.os_vif_util [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Converting VIF {"id": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "address": "fa:16:3e:a1:89:76", "network": {"id": "80aba2f5-1646-44f0-b2f3-d9c000867c6d", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2046079859-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "884a8b5cd18948009939a3ab6cb1d42a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a1eeb0-05", "ovs_interfaceid": "63a1eeb0-05f1-4f8a-b115-a1323d4d119d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:04:46 compute-0 neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d[350171]: [NOTICE]   (350175) : haproxy version is 2.8.14-c23fe91
Oct 11 09:04:46 compute-0 neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d[350171]: [NOTICE]   (350175) : path to executable is /usr/sbin/haproxy
Oct 11 09:04:46 compute-0 neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d[350171]: [WARNING]  (350175) : Exiting Master process...
Oct 11 09:04:46 compute-0 neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d[350171]: [WARNING]  (350175) : Exiting Master process...
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.845 2 DEBUG nova.network.os_vif_util [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:89:76,bridge_name='br-int',has_traffic_filtering=True,id=63a1eeb0-05f1-4f8a-b115-a1323d4d119d,network=Network(80aba2f5-1646-44f0-b2f3-d9c000867c6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a1eeb0-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.846 2 DEBUG os_vif [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:89:76,bridge_name='br-int',has_traffic_filtering=True,id=63a1eeb0-05f1-4f8a-b115-a1323d4d119d,network=Network(80aba2f5-1646-44f0-b2f3-d9c000867c6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a1eeb0-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:04:46 compute-0 neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d[350171]: [ALERT]    (350175) : Current worker (350177) exited with code 143 (Terminated)
Oct 11 09:04:46 compute-0 neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d[350171]: [WARNING]  (350175) : All workers exited. Exiting... (0)
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.850 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63a1eeb0-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:04:46 compute-0 systemd[1]: libpod-1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e.scope: Deactivated successfully.
Oct 11 09:04:46 compute-0 podman[350495]: 2025-10-11 09:04:46.857107317 +0000 UTC m=+0.063151304 container died 1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.906 2 INFO os_vif [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:89:76,bridge_name='br-int',has_traffic_filtering=True,id=63a1eeb0-05f1-4f8a-b115-a1323d4d119d,network=Network(80aba2f5-1646-44f0-b2f3-d9c000867c6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a1eeb0-05')
Oct 11 09:04:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e-userdata-shm.mount: Deactivated successfully.
Oct 11 09:04:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-35f771fc789c94264e2ade21343c20941b7339208f940dc9d09191577d5f9af6-merged.mount: Deactivated successfully.
Oct 11 09:04:46 compute-0 podman[350495]: 2025-10-11 09:04:46.931899452 +0000 UTC m=+0.137943459 container cleanup 1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:04:46 compute-0 systemd[1]: libpod-conmon-1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e.scope: Deactivated successfully.
Oct 11 09:04:46 compute-0 nova_compute[260935]: 2025-10-11 09:04:46.990 2 DEBUG nova.virt.libvirt.imagebackend [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Image locations are: [{'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/d6bf8b58-0442-4653-84ed-9390a999a943/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/d6bf8b58-0442-4653-84ed-9390a999a943/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 11 09:04:47 compute-0 podman[350550]: 2025-10-11 09:04:47.015046506 +0000 UTC m=+0.054627781 container remove 1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 09:04:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.031 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[063a8bf5-d519-4be2-8c8c-e7673e1da34d]: (4, ('Sat Oct 11 09:04:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d (1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e)\n1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e\nSat Oct 11 09:04:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d (1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e)\n1557ff322d0f03bbb771928ada4ce745c19a84b2b08942388e4af507e34aa64e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1f8acf-3f0d-4618-8c04-e50a47b2726a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.035 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80aba2f5-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.047 2 DEBUG nova.virt.libvirt.imagebackend [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Selected location: {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/d6bf8b58-0442-4653-84ed-9390a999a943/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.047 2 DEBUG nova.storage.rbd_utils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] cloning images/d6bf8b58-0442-4653-84ed-9390a999a943@snap to None/4af522a5-ae89-49de-a956-50cfa7dd136a_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 09:04:47 compute-0 kernel: tap80aba2f5-10: left promiscuous mode
Oct 11 09:04:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:04:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1299647705' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.063 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5817d724-69a3-4754-bde2-1b320412730c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.087 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9956c49e-0ed7-46b6-b191-4d03fb981216]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.088 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[031b384d-134c-4067-8ea3-3875128af63d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.094 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.111 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7c54d727-bf98-4761-9d1f-e57f487b4d0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532646, 'reachable_time': 36885, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350634, 'error': None, 'target': 'ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.114 2 DEBUG nova.storage.rbd_utils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.115 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-80aba2f5-1646-44f0-b2f3-d9c000867c6d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:04:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:04:47.115 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[88f66bd8-7c9a-4b05-bf6e-3a84d7f0d93c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:04:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d80aba2f5\x2d1646\x2d44f0\x2db2f3\x2dd9c000867c6d.mount: Deactivated successfully.
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.121 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:47 compute-0 sshd-session[350395]: Invalid user shashank from 155.4.244.179 port 8996
Oct 11 09:04:47 compute-0 sshd-session[350395]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:04:47 compute-0 sshd-session[350395]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.214 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "24fb2f8914224986c1a288ff195771cacbd87d4d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.363 2 INFO nova.virt.libvirt.driver [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Deleting instance files /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_del
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.364 2 INFO nova.virt.libvirt.driver [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Deletion of /var/lib/nova/instances/dffb6f2b-b5a8-4d28-ae9b-aca7728ce617_del complete
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.371 2 DEBUG nova.storage.rbd_utils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] resizing rbd image 4af522a5-ae89-49de-a956-50cfa7dd136a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.447 2 DEBUG nova.objects.instance [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lazy-loading 'migration_context' on Instance uuid 4af522a5-ae89-49de-a956-50cfa7dd136a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.480 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.480 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Ensure instance console log exists: /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.481 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.481 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.481 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.483 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='762f3d8f1f3fcf09a35e950f82f57acd',container_format='bare',created_at=2025-10-11T09:04:39Z,direct_url=<?>,disk_format='raw',id=d6bf8b58-0442-4653-84ed-9390a999a943,min_disk=0,min_ram=0,name='tempest-image-dependency-test-269879602',owner='001deb6d2eb5499b8e24cf16d6c2dbaa',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-11T09:04:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': 'd6bf8b58-0442-4653-84ed-9390a999a943'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.484 2 INFO nova.compute.manager [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Took 0.90 seconds to destroy the instance on the hypervisor.
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.485 2 DEBUG oslo.service.loopingcall [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.485 2 DEBUG nova.compute.manager [-] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.486 2 DEBUG nova.network.neutron [-] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.492 2 WARNING nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.500 2 DEBUG nova.virt.libvirt.host [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.501 2 DEBUG nova.virt.libvirt.host [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.506 2 DEBUG nova.virt.libvirt.host [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.506 2 DEBUG nova.virt.libvirt.host [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.507 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.507 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='762f3d8f1f3fcf09a35e950f82f57acd',container_format='bare',created_at=2025-10-11T09:04:39Z,direct_url=<?>,disk_format='raw',id=d6bf8b58-0442-4653-84ed-9390a999a943,min_disk=0,min_ram=0,name='tempest-image-dependency-test-269879602',owner='001deb6d2eb5499b8e24cf16d6c2dbaa',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-11T09:04:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.507 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.508 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.508 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.508 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.508 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.508 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.509 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.509 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.509 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.509 2 DEBUG nova.virt.hardware [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.512 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:04:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1945827982' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.607 2 DEBUG nova.compute.manager [req-70c296bb-50aa-48c8-bb1f-9fca9f282295 req-3a7f33fc-5156-48d2-92a4-5916f821bf04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received event network-vif-unplugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.608 2 DEBUG oslo_concurrency.lockutils [req-70c296bb-50aa-48c8-bb1f-9fca9f282295 req-3a7f33fc-5156-48d2-92a4-5916f821bf04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.609 2 DEBUG oslo_concurrency.lockutils [req-70c296bb-50aa-48c8-bb1f-9fca9f282295 req-3a7f33fc-5156-48d2-92a4-5916f821bf04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.609 2 DEBUG oslo_concurrency.lockutils [req-70c296bb-50aa-48c8-bb1f-9fca9f282295 req-3a7f33fc-5156-48d2-92a4-5916f821bf04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.609 2 DEBUG nova.compute.manager [req-70c296bb-50aa-48c8-bb1f-9fca9f282295 req-3a7f33fc-5156-48d2-92a4-5916f821bf04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] No waiting events found dispatching network-vif-unplugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.609 2 DEBUG nova.compute.manager [req-70c296bb-50aa-48c8-bb1f-9fca9f282295 req-3a7f33fc-5156-48d2-92a4-5916f821bf04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received event network-vif-unplugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.621 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.625 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:04:47 compute-0 nova_compute[260935]:   <uuid>cb40f13d-eaf0-4d7f-8745-acdd85fa0e86</uuid>
Oct 11 09:04:47 compute-0 nova_compute[260935]:   <name>instance-00000058</name>
Oct 11 09:04:47 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:04:47 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:04:47 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerShowV257Test-server-919774712</nova:name>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:04:45</nova:creationTime>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:04:47 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:04:47 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:04:47 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:04:47 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:04:47 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:04:47 compute-0 nova_compute[260935]:         <nova:user uuid="a90fe9900cc64109bfeb61e3bc71fb95">tempest-ServerShowV257Test-315301913-project-member</nova:user>
Oct 11 09:04:47 compute-0 nova_compute[260935]:         <nova:project uuid="f83a57424c0643a1b6a9b84fe208cb0e">tempest-ServerShowV257Test-315301913</nova:project>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:04:47 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:04:47 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <system>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <entry name="serial">cb40f13d-eaf0-4d7f-8745-acdd85fa0e86</entry>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <entry name="uuid">cb40f13d-eaf0-4d7f-8745-acdd85fa0e86</entry>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     </system>
Oct 11 09:04:47 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:04:47 compute-0 nova_compute[260935]:   <os>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:   </os>
Oct 11 09:04:47 compute-0 nova_compute[260935]:   <features>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:   </features>
Oct 11 09:04:47 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:04:47 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:04:47 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk">
Oct 11 09:04:47 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       </source>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:04:47 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config">
Oct 11 09:04:47 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       </source>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:04:47 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/console.log" append="off"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <video>
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     </video>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:04:47 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:04:47 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:04:47 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:04:47 compute-0 nova_compute[260935]: </domain>
Oct 11 09:04:47 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:04:47 compute-0 ceph-mon[74313]: pgmap v1857: 321 pgs: 321 active+clean; 500 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 506 KiB/s rd, 3.2 MiB/s wr, 129 op/s
Oct 11 09:04:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1299647705' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1945827982' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.736 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.736 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.737 2 INFO nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Using config drive
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.757 2 DEBUG nova.storage.rbd_utils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.787 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'ec2_ids' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:47 compute-0 nova_compute[260935]: 2025-10-11 09:04:47.888 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'keypairs' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:04:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280813713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.017 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.051 2 DEBUG nova.storage.rbd_utils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] rbd image 4af522a5-ae89-49de-a956-50cfa7dd136a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.056 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.126 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.127 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.127 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:04:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 421 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.0 MiB/s wr, 301 op/s
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.329 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173473.329016, 14b020dc-ae35-4a87-87c8-ed8504968319 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.329 2 INFO nova.compute.manager [-] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] VM Stopped (Lifecycle Event)
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.357 2 INFO nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Creating config drive at /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.362 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvvj9s1rw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.409 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.409 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.409 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.411 2 DEBUG nova.compute.manager [None req-cfaa289a-694b-4301-bf10-efcea187848b - - - - - -] [instance: 14b020dc-ae35-4a87-87c8-ed8504968319] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.520 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvvj9s1rw" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:48 compute-0 sshd-session[350395]: Failed password for invalid user shashank from 155.4.244.179 port 8996 ssh2
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.557 2 DEBUG nova.storage.rbd_utils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] rbd image cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.562 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:04:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3947318784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.615 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.618 2 DEBUG nova.objects.instance [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lazy-loading 'pci_devices' on Instance uuid 4af522a5-ae89-49de-a956-50cfa7dd136a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.666 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:04:48 compute-0 nova_compute[260935]:   <uuid>4af522a5-ae89-49de-a956-50cfa7dd136a</uuid>
Oct 11 09:04:48 compute-0 nova_compute[260935]:   <name>instance-0000005b</name>
Oct 11 09:04:48 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:04:48 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:04:48 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <nova:name>instance-depend-image</nova:name>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:04:47</nova:creationTime>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:04:48 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:04:48 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:04:48 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:04:48 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:04:48 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:04:48 compute-0 nova_compute[260935]:         <nova:user uuid="1c9cf27bf1c44ef1ba0c291e3d5818be">tempest-ImageDependencyTests-755665872-project-member</nova:user>
Oct 11 09:04:48 compute-0 nova_compute[260935]:         <nova:project uuid="001deb6d2eb5499b8e24cf16d6c2dbaa">tempest-ImageDependencyTests-755665872</nova:project>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="d6bf8b58-0442-4653-84ed-9390a999a943"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:04:48 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:04:48 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <system>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <entry name="serial">4af522a5-ae89-49de-a956-50cfa7dd136a</entry>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <entry name="uuid">4af522a5-ae89-49de-a956-50cfa7dd136a</entry>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     </system>
Oct 11 09:04:48 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:04:48 compute-0 nova_compute[260935]:   <os>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:   </os>
Oct 11 09:04:48 compute-0 nova_compute[260935]:   <features>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:   </features>
Oct 11 09:04:48 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:04:48 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:04:48 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/4af522a5-ae89-49de-a956-50cfa7dd136a_disk">
Oct 11 09:04:48 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       </source>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:04:48 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/4af522a5-ae89-49de-a956-50cfa7dd136a_disk.config">
Oct 11 09:04:48 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       </source>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:04:48 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a/console.log" append="off"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <video>
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     </video>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:04:48 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:04:48 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:04:48 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:04:48 compute-0 nova_compute[260935]: </domain>
Oct 11 09:04:48 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:04:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3280813713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3947318784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:04:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.791 2 DEBUG oslo_concurrency.processutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.792 2 INFO nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Deleting local config drive /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86/disk.config because it was imported into RBD.
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.803 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.803 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.804 2 INFO nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Using config drive
Oct 11 09:04:48 compute-0 podman[350864]: 2025-10-11 09:04:48.819086265 +0000 UTC m=+0.115669023 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.830 2 DEBUG nova.storage.rbd_utils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] rbd image 4af522a5-ae89-49de-a956-50cfa7dd136a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:48 compute-0 systemd-machined[215705]: New machine qemu-103-instance-00000058.
Oct 11 09:04:48 compute-0 systemd[1]: Started Virtual Machine qemu-103-instance-00000058.
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.912 2 DEBUG nova.network.neutron [-] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:04:48 compute-0 nova_compute[260935]: 2025-10-11 09:04:48.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.100 2 INFO nova.compute.manager [-] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Took 1.61 seconds to deallocate network for instance.
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.105 2 INFO nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Creating config drive at /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a/disk.config
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.117 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps9wil9ay execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.290 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps9wil9ay" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.317 2 DEBUG nova.storage.rbd_utils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] rbd image 4af522a5-ae89-49de-a956-50cfa7dd136a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.322 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a/disk.config 4af522a5-ae89-49de-a956-50cfa7dd136a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.395 2 DEBUG nova.compute.manager [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Received event network-changed-c992d6e3-ef59-42a0-80c5-109fe0c056cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.396 2 DEBUG nova.compute.manager [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Refreshing instance network info cache due to event network-changed-c992d6e3-ef59-42a0-80c5-109fe0c056cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.396 2 DEBUG oslo_concurrency.lockutils [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.398 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.398 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.508 2 DEBUG oslo_concurrency.processutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a/disk.config 4af522a5-ae89-49de-a956-50cfa7dd136a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.509 2 INFO nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Deleting local config drive /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a/disk.config because it was imported into RBD.
Oct 11 09:04:49 compute-0 systemd-machined[215705]: New machine qemu-104-instance-0000005b.
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.602 2 DEBUG oslo_concurrency.processutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:49 compute-0 systemd[1]: Started Virtual Machine qemu-104-instance-0000005b.
Oct 11 09:04:49 compute-0 ceph-mon[74313]: pgmap v1858: 321 pgs: 321 active+clean; 421 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.0 MiB/s wr, 301 op/s
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.805 2 DEBUG nova.compute.manager [req-9a8506a8-37cb-4247-a0c5-8ffcffd270e5 req-629a986b-716e-419f-8020-802f37ba8fb7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received event network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.806 2 DEBUG oslo_concurrency.lockutils [req-9a8506a8-37cb-4247-a0c5-8ffcffd270e5 req-629a986b-716e-419f-8020-802f37ba8fb7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.806 2 DEBUG oslo_concurrency.lockutils [req-9a8506a8-37cb-4247-a0c5-8ffcffd270e5 req-629a986b-716e-419f-8020-802f37ba8fb7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.806 2 DEBUG oslo_concurrency.lockutils [req-9a8506a8-37cb-4247-a0c5-8ffcffd270e5 req-629a986b-716e-419f-8020-802f37ba8fb7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.806 2 DEBUG nova.compute.manager [req-9a8506a8-37cb-4247-a0c5-8ffcffd270e5 req-629a986b-716e-419f-8020-802f37ba8fb7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] No waiting events found dispatching network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.807 2 WARNING nova.compute.manager [req-9a8506a8-37cb-4247-a0c5-8ffcffd270e5 req-629a986b-716e-419f-8020-802f37ba8fb7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received unexpected event network-vif-plugged-63a1eeb0-05f1-4f8a-b115-a1323d4d119d for instance with vm_state deleted and task_state None.
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.858 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.859 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173489.85819, cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.859 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] VM Resumed (Lifecycle Event)
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.861 2 DEBUG nova.compute.manager [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.862 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.873 2 INFO nova.virt.libvirt.driver [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance spawned successfully.
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.873 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.916 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.921 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.938 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.938 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.939 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.939 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.940 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:49 compute-0 nova_compute[260935]: 2025-10-11 09:04:49.940 2 DEBUG nova.virt.libvirt.driver [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:49 compute-0 sshd-session[350395]: Received disconnect from 155.4.244.179 port 8996:11: Bye Bye [preauth]
Oct 11 09:04:49 compute-0 sshd-session[350395]: Disconnected from invalid user shashank 155.4.244.179 port 8996 [preauth]
Oct 11 09:04:50 compute-0 nova_compute[260935]: 2025-10-11 09:04:50.027 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 09:04:50 compute-0 nova_compute[260935]: 2025-10-11 09:04:50.027 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173489.8625157, cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:04:50 compute-0 nova_compute[260935]: 2025-10-11 09:04:50.028 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] VM Started (Lifecycle Event)
Oct 11 09:04:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:04:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2737087605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:50 compute-0 nova_compute[260935]: 2025-10-11 09:04:50.105 2 DEBUG oslo_concurrency.processutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:50 compute-0 nova_compute[260935]: 2025-10-11 09:04:50.111 2 DEBUG nova.compute.provider_tree [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:04:50 compute-0 nova_compute[260935]: 2025-10-11 09:04:50.147 2 DEBUG nova.compute.manager [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:50 compute-0 nova_compute[260935]: 2025-10-11 09:04:50.171 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:50 compute-0 nova_compute[260935]: 2025-10-11 09:04:50.181 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:04:50 compute-0 nova_compute[260935]: 2025-10-11 09:04:50.221 2 DEBUG nova.scheduler.client.report [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:04:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1859: 321 pgs: 321 active+clean; 421 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 287 op/s
Oct 11 09:04:50 compute-0 nova_compute[260935]: 2025-10-11 09:04:50.322 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 09:04:50 compute-0 nova_compute[260935]: 2025-10-11 09:04:50.409 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:50 compute-0 nova_compute[260935]: 2025-10-11 09:04:50.478 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:50 compute-0 nova_compute[260935]: 2025-10-11 09:04:50.482 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:50 compute-0 nova_compute[260935]: 2025-10-11 09:04:50.482 2 DEBUG nova.objects.instance [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 09:04:50 compute-0 nova_compute[260935]: 2025-10-11 09:04:50.644 2 INFO nova.scheduler.client.report [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Deleted allocations for instance dffb6f2b-b5a8-4d28-ae9b-aca7728ce617
Oct 11 09:04:50 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2737087605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:50 compute-0 ceph-mon[74313]: pgmap v1859: 321 pgs: 321 active+clean; 421 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 287 op/s
Oct 11 09:04:50 compute-0 nova_compute[260935]: 2025-10-11 09:04:50.840 2 DEBUG oslo_concurrency.lockutils [None req-edf5c013-a2de-433b-b5dc-cd9b80326402 a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:50 compute-0 nova_compute[260935]: 2025-10-11 09:04:50.920 2 DEBUG oslo_concurrency.lockutils [None req-2f97a1e7-8cbd-40ad-9350-e718e78c356b f7e5365d09e140aaa4289b21435cbd70 884a8b5cd18948009939a3ab6cb1d42a - - default default] Lock "dffb6f2b-b5a8-4d28-ae9b-aca7728ce617" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.282 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173491.2821743, 4af522a5-ae89-49de-a956-50cfa7dd136a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.283 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] VM Resumed (Lifecycle Event)
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.285 2 DEBUG nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.286 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.289 2 INFO nova.virt.libvirt.driver [-] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Instance spawned successfully.
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.289 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.353 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.357 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.387 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.388 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.388 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.389 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.389 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.389 2 DEBUG nova.virt.libvirt.driver [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.415 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.415 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173491.2832425, 4af522a5-ae89-49de-a956-50cfa7dd136a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.415 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] VM Started (Lifecycle Event)
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.521 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.527 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.531 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.537 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.538 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.538 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.538 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.539 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.540 2 INFO nova.compute.manager [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Terminating instance
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.541 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "refresh_cache-cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.542 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquired lock "refresh_cache-cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.542 2 DEBUG nova.network.neutron [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.554 2 INFO nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Took 4.91 seconds to spawn the instance on the hypervisor.
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.555 2 DEBUG nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.592 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.665 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.666 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.667 2 DEBUG oslo_concurrency.lockutils [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.667 2 DEBUG nova.network.neutron [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Refreshing network info cache for port c992d6e3-ef59-42a0-80c5-109fe0c056cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.668 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.669 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.670 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.670 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.670 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.671 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.725 2 INFO nova.compute.manager [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Took 7.70 seconds to build instance.
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.749 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.750 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid b75d8ded-515b-48ff-a6b6-28df88878996 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.750 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 52be16b4-343a-4fd4-9041-39069a1fde2a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.750 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 15633aee-234a-4417-b5ea-f35f13820404 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.751 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.751 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 4af522a5-ae89-49de-a956-50cfa7dd136a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.751 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.752 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.753 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.754 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.754 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.755 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.755 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "15633aee-234a-4417-b5ea-f35f13820404" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.755 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.756 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "4af522a5-ae89-49de-a956-50cfa7dd136a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.756 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.756 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.757 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.821 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.838 2 DEBUG oslo_concurrency.lockutils [None req-c80e8326-bca5-4522-bb48-4252d11da3c9 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "4af522a5-ae89-49de-a956-50cfa7dd136a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.839 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "4af522a5-ae89-49de-a956-50cfa7dd136a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.841 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.843 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.950 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "4af522a5-ae89-49de-a956-50cfa7dd136a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.970 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.970 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.971 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.971 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:04:51 compute-0 nova_compute[260935]: 2025-10-11 09:04:51.971 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 421 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.1 MiB/s wr, 323 op/s
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.370 2 DEBUG nova.network.neutron [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:04:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:04:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3934776140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.480 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.606 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.606 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.610 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.610 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.611 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.615 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.616 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.622 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.623 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.629 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.629 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.633 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.633 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:04:52 compute-0 podman[351113]: 2025-10-11 09:04:52.635510032 +0000 UTC m=+0.097331040 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 09:04:52 compute-0 podman[351114]: 2025-10-11 09:04:52.710521793 +0000 UTC m=+0.165864166 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.740 2 DEBUG nova.network.neutron [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.797 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Releasing lock "refresh_cache-cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.798 2 DEBUG nova.compute.manager [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:04:52 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000058.scope: Deactivated successfully.
Oct 11 09:04:52 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000058.scope: Consumed 3.822s CPU time.
Oct 11 09:04:52 compute-0 systemd-machined[215705]: Machine qemu-103-instance-00000058 terminated.
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.933 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.934 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3010MB free_disk=59.788700103759766GB free_vcpus=2 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.935 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:52 compute-0 nova_compute[260935]: 2025-10-11 09:04:52.935 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.037 2 INFO nova.virt.libvirt.driver [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance destroyed successfully.
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.038 2 DEBUG nova.objects.instance [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lazy-loading 'resources' on Instance uuid cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.117 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.118 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.118 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.118 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 15633aee-234a-4417-b5ea-f35f13820404 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.118 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.119 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 4af522a5-ae89-49de-a956-50cfa7dd136a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.119 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.119 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.338 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:53 compute-0 ceph-mon[74313]: pgmap v1860: 321 pgs: 321 active+clean; 421 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.1 MiB/s wr, 323 op/s
Oct 11 09:04:53 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3934776140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.624 2 DEBUG nova.network.neutron [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated VIF entry in instance network info cache for port c992d6e3-ef59-42a0-80c5-109fe0c056cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.625 2 DEBUG nova.network.neutron [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.683 2 INFO nova.virt.libvirt.driver [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Deleting instance files /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_del
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.684 2 INFO nova.virt.libvirt.driver [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Deletion of /var/lib/nova/instances/cb40f13d-eaf0-4d7f-8745-acdd85fa0e86_del complete
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.739 2 DEBUG oslo_concurrency.lockutils [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.740 2 DEBUG nova.compute.manager [req-b647b5f5-fc2b-4b7c-a6c3-ccd745319210 req-5cec846d-2f04-4e86-8482-2a3ae209da46 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Received event network-vif-deleted-63a1eeb0-05f1-4f8a-b115-a1323d4d119d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:04:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:04:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:04:53 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1130079266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.848 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.862 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.869 2 INFO nova.compute.manager [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Took 1.07 seconds to destroy the instance on the hypervisor.
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.869 2 DEBUG oslo.service.loopingcall [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.870 2 DEBUG nova.compute.manager [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.870 2 DEBUG nova.network.neutron [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.910 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.970 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.971 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:53 compute-0 nova_compute[260935]: 2025-10-11 09:04:53.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:54 compute-0 nova_compute[260935]: 2025-10-11 09:04:54.057 2 DEBUG nova.network.neutron [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:04:54 compute-0 nova_compute[260935]: 2025-10-11 09:04:54.211 2 DEBUG nova.network.neutron [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:04:54 compute-0 nova_compute[260935]: 2025-10-11 09:04:54.286 2 INFO nova.compute.manager [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Took 0.42 seconds to deallocate network for instance.
Oct 11 09:04:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1861: 321 pgs: 321 active+clean; 421 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 301 op/s
Oct 11 09:04:54 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1130079266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:54 compute-0 nova_compute[260935]: 2025-10-11 09:04:54.521 2 DEBUG nova.compute.manager [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:04:54 compute-0 nova_compute[260935]: 2025-10-11 09:04:54.538 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:04:54 compute-0 nova_compute[260935]: 2025-10-11 09:04:54.539 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:54 compute-0 nova_compute[260935]: 2025-10-11 09:04:54.620 2 INFO nova.compute.manager [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] instance snapshotting
Oct 11 09:04:54 compute-0 nova_compute[260935]: 2025-10-11 09:04:54.725 2 DEBUG oslo_concurrency.processutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:04:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:04:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:04:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:04:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:04:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:04:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:04:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:04:54
Oct 11 09:04:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:04:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:04:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'images', '.rgw.root', '.mgr', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms']
Oct 11 09:04:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:04:54 compute-0 nova_compute[260935]: 2025-10-11 09:04:54.975 2 INFO nova.virt.libvirt.driver [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Beginning live snapshot process
Oct 11 09:04:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:04:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4224717697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:55 compute-0 nova_compute[260935]: 2025-10-11 09:04:55.186 2 DEBUG oslo_concurrency.processutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:04:55 compute-0 nova_compute[260935]: 2025-10-11 09:04:55.194 2 DEBUG nova.compute.provider_tree [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:04:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:04:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:04:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:04:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:04:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:04:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:04:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:04:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:04:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:04:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:04:55 compute-0 nova_compute[260935]: 2025-10-11 09:04:55.370 2 DEBUG nova.scheduler.client.report [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:04:55 compute-0 ceph-mon[74313]: pgmap v1861: 321 pgs: 321 active+clean; 421 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 301 op/s
Oct 11 09:04:55 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4224717697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:04:55 compute-0 nova_compute[260935]: 2025-10-11 09:04:55.383 2 DEBUG nova.storage.rbd_utils [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] creating snapshot(2e7a082774e34d0eaf6df91effc02bf7) on rbd image(4af522a5-ae89-49de-a956-50cfa7dd136a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 09:04:55 compute-0 nova_compute[260935]: 2025-10-11 09:04:55.507 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 421 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 266 op/s
Oct 11 09:04:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Oct 11 09:04:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Oct 11 09:04:56 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Oct 11 09:04:56 compute-0 nova_compute[260935]: 2025-10-11 09:04:56.401 2 INFO nova.scheduler.client.report [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Deleted allocations for instance cb40f13d-eaf0-4d7f-8745-acdd85fa0e86
Oct 11 09:04:56 compute-0 nova_compute[260935]: 2025-10-11 09:04:56.504 2 DEBUG nova.storage.rbd_utils [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] cloning vms/4af522a5-ae89-49de-a956-50cfa7dd136a_disk@2e7a082774e34d0eaf6df91effc02bf7 to images/d9a855f1-b63e-4cbc-895e-23378797f027 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 09:04:56 compute-0 nova_compute[260935]: 2025-10-11 09:04:56.605 2 DEBUG oslo_concurrency.lockutils [None req-7641b456-eabb-4f11-b264-7d16c458804f a90fe9900cc64109bfeb61e3bc71fb95 f83a57424c0643a1b6a9b84fe208cb0e - - default default] Lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:56 compute-0 nova_compute[260935]: 2025-10-11 09:04:56.607 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 4.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:04:56 compute-0 nova_compute[260935]: 2025-10-11 09:04:56.607 2 INFO nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] During sync_power_state the instance has a pending task (deleting). Skip.
Oct 11 09:04:56 compute-0 nova_compute[260935]: 2025-10-11 09:04:56.608 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "cb40f13d-eaf0-4d7f-8745-acdd85fa0e86" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:04:56 compute-0 nova_compute[260935]: 2025-10-11 09:04:56.745 2 DEBUG nova.storage.rbd_utils [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] flattening images/d9a855f1-b63e-4cbc-895e-23378797f027 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 09:04:56 compute-0 nova_compute[260935]: 2025-10-11 09:04:56.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:56 compute-0 nova_compute[260935]: 2025-10-11 09:04:56.975 2 DEBUG nova.storage.rbd_utils [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] removing snapshot(2e7a082774e34d0eaf6df91effc02bf7) on rbd image(4af522a5-ae89-49de-a956-50cfa7dd136a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 09:04:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Oct 11 09:04:57 compute-0 ceph-mon[74313]: pgmap v1862: 321 pgs: 321 active+clean; 421 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 266 op/s
Oct 11 09:04:57 compute-0 ceph-mon[74313]: osdmap e252: 3 total, 3 up, 3 in
Oct 11 09:04:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Oct 11 09:04:57 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Oct 11 09:04:57 compute-0 nova_compute[260935]: 2025-10-11 09:04:57.457 2 DEBUG nova.storage.rbd_utils [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] creating snapshot(snap) on rbd image(d9a855f1-b63e-4cbc-895e-23378797f027) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 09:04:58 compute-0 ovn_controller[152945]: 2025-10-11T09:04:58Z|00809|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:04:58 compute-0 ovn_controller[152945]: 2025-10-11T09:04:58Z|00810|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:04:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 47 KiB/s wr, 283 op/s
Oct 11 09:04:58 compute-0 nova_compute[260935]: 2025-10-11 09:04:58.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Oct 11 09:04:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Oct 11 09:04:58 compute-0 ceph-mon[74313]: osdmap e253: 3 total, 3 up, 3 in
Oct 11 09:04:58 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Oct 11 09:04:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:04:58 compute-0 nova_compute[260935]: 2025-10-11 09:04:58.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:04:59 compute-0 ceph-mon[74313]: pgmap v1865: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 47 KiB/s wr, 283 op/s
Oct 11 09:04:59 compute-0 ceph-mon[74313]: osdmap e254: 3 total, 3 up, 3 in
Oct 11 09:05:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1867: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 6.5 KiB/s wr, 150 op/s
Oct 11 09:05:00 compute-0 nova_compute[260935]: 2025-10-11 09:05:00.389 2 INFO nova.virt.libvirt.driver [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Snapshot image upload complete
Oct 11 09:05:00 compute-0 nova_compute[260935]: 2025-10-11 09:05:00.390 2 INFO nova.compute.manager [None req-46588549-d987-4215-831e-39457622aab4 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Took 5.77 seconds to snapshot the instance on the hypervisor.
Oct 11 09:05:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Oct 11 09:05:01 compute-0 ceph-mon[74313]: pgmap v1867: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 6.5 KiB/s wr, 150 op/s
Oct 11 09:05:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Oct 11 09:05:01 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Oct 11 09:05:01 compute-0 nova_compute[260935]: 2025-10-11 09:05:01.815 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173486.8129892, dffb6f2b-b5a8-4d28-ae9b-aca7728ce617 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:05:01 compute-0 nova_compute[260935]: 2025-10-11 09:05:01.815 2 INFO nova.compute.manager [-] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] VM Stopped (Lifecycle Event)
Oct 11 09:05:01 compute-0 nova_compute[260935]: 2025-10-11 09:05:01.896 2 DEBUG nova.compute.manager [None req-df9fa7c4-6291-4d17-8451-8b7cd6a83ba3 - - - - - -] [instance: dffb6f2b-b5a8-4d28-ae9b-aca7728ce617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:05:01 compute-0 nova_compute[260935]: 2025-10-11 09:05:01.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 199 KiB/s rd, 11 KiB/s wr, 259 op/s
Oct 11 09:05:02 compute-0 ceph-mon[74313]: osdmap e255: 3 total, 3 up, 3 in
Oct 11 09:05:02 compute-0 nova_compute[260935]: 2025-10-11 09:05:02.995 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquiring lock "4af522a5-ae89-49de-a956-50cfa7dd136a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:02 compute-0 nova_compute[260935]: 2025-10-11 09:05:02.996 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "4af522a5-ae89-49de-a956-50cfa7dd136a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:02 compute-0 nova_compute[260935]: 2025-10-11 09:05:02.996 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquiring lock "4af522a5-ae89-49de-a956-50cfa7dd136a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:02 compute-0 nova_compute[260935]: 2025-10-11 09:05:02.997 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "4af522a5-ae89-49de-a956-50cfa7dd136a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:02 compute-0 nova_compute[260935]: 2025-10-11 09:05:02.997 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "4af522a5-ae89-49de-a956-50cfa7dd136a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:02 compute-0 nova_compute[260935]: 2025-10-11 09:05:02.999 2 INFO nova.compute.manager [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Terminating instance
Oct 11 09:05:03 compute-0 nova_compute[260935]: 2025-10-11 09:05:03.000 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquiring lock "refresh_cache-4af522a5-ae89-49de-a956-50cfa7dd136a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:05:03 compute-0 nova_compute[260935]: 2025-10-11 09:05:03.001 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquired lock "refresh_cache-4af522a5-ae89-49de-a956-50cfa7dd136a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:05:03 compute-0 nova_compute[260935]: 2025-10-11 09:05:03.001 2 DEBUG nova.network.neutron [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:05:03 compute-0 nova_compute[260935]: 2025-10-11 09:05:03.404 2 DEBUG nova.network.neutron [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:05:03 compute-0 ceph-mon[74313]: pgmap v1869: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 199 KiB/s rd, 11 KiB/s wr, 259 op/s
Oct 11 09:05:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:05:03 compute-0 nova_compute[260935]: 2025-10-11 09:05:03.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 6.5 KiB/s wr, 160 op/s
Oct 11 09:05:04 compute-0 nova_compute[260935]: 2025-10-11 09:05:04.423 2 DEBUG nova.network.neutron [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:05:04 compute-0 nova_compute[260935]: 2025-10-11 09:05:04.464 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Releasing lock "refresh_cache-4af522a5-ae89-49de-a956-50cfa7dd136a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:05:04 compute-0 nova_compute[260935]: 2025-10-11 09:05:04.466 2 DEBUG nova.compute.manager [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:05:04 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Oct 11 09:05:04 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d0000005b.scope: Consumed 2.001s CPU time.
Oct 11 09:05:04 compute-0 systemd-machined[215705]: Machine qemu-104-instance-0000005b terminated.
Oct 11 09:05:04 compute-0 nova_compute[260935]: 2025-10-11 09:05:04.697 2 INFO nova.virt.libvirt.driver [-] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Instance destroyed successfully.
Oct 11 09:05:04 compute-0 nova_compute[260935]: 2025-10-11 09:05:04.698 2 DEBUG nova.objects.instance [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lazy-loading 'resources' on Instance uuid 4af522a5-ae89-49de-a956-50cfa7dd136a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029782708007748907 of space, bias 1.0, pg target 0.8934812402324672 quantized to 32 (current 32)
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663034365435958 of space, bias 1.0, pg target 0.19989103096307873 quantized to 32 (current 32)
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:05:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:05:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Oct 11 09:05:05 compute-0 ceph-mon[74313]: pgmap v1870: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 6.5 KiB/s wr, 160 op/s
Oct 11 09:05:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Oct 11 09:05:05 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Oct 11 09:05:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1872: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 3.2 KiB/s wr, 86 op/s
Oct 11 09:05:06 compute-0 nova_compute[260935]: 2025-10-11 09:05:06.386 2 INFO nova.virt.libvirt.driver [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Deleting instance files /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a_del
Oct 11 09:05:06 compute-0 nova_compute[260935]: 2025-10-11 09:05:06.389 2 INFO nova.virt.libvirt.driver [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Deletion of /var/lib/nova/instances/4af522a5-ae89-49de-a956-50cfa7dd136a_del complete
Oct 11 09:05:06 compute-0 nova_compute[260935]: 2025-10-11 09:05:06.506 2 INFO nova.compute.manager [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Took 2.04 seconds to destroy the instance on the hypervisor.
Oct 11 09:05:06 compute-0 nova_compute[260935]: 2025-10-11 09:05:06.507 2 DEBUG oslo.service.loopingcall [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:05:06 compute-0 nova_compute[260935]: 2025-10-11 09:05:06.508 2 DEBUG nova.compute.manager [-] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:05:06 compute-0 nova_compute[260935]: 2025-10-11 09:05:06.509 2 DEBUG nova.network.neutron [-] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:05:06 compute-0 ceph-mon[74313]: osdmap e256: 3 total, 3 up, 3 in
Oct 11 09:05:07 compute-0 nova_compute[260935]: 2025-10-11 09:05:07.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:07 compute-0 ceph-mon[74313]: pgmap v1872: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 3.2 KiB/s wr, 86 op/s
Oct 11 09:05:08 compute-0 nova_compute[260935]: 2025-10-11 09:05:08.034 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173493.0332146, cb40f13d-eaf0-4d7f-8745-acdd85fa0e86 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:05:08 compute-0 nova_compute[260935]: 2025-10-11 09:05:08.035 2 INFO nova.compute.manager [-] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] VM Stopped (Lifecycle Event)
Oct 11 09:05:08 compute-0 nova_compute[260935]: 2025-10-11 09:05:08.196 2 DEBUG nova.compute.manager [None req-de80e430-7f56-4f70-81c0-c30aafbeb67f - - - - - -] [instance: cb40f13d-eaf0-4d7f-8745-acdd85fa0e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:05:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 4.9 KiB/s wr, 129 op/s
Oct 11 09:05:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:05:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Oct 11 09:05:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Oct 11 09:05:08 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Oct 11 09:05:08 compute-0 nova_compute[260935]: 2025-10-11 09:05:08.885 2 DEBUG nova.network.neutron [-] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:05:08 compute-0 nova_compute[260935]: 2025-10-11 09:05:08.943 2 DEBUG nova.network.neutron [-] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:05:08 compute-0 nova_compute[260935]: 2025-10-11 09:05:08.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:09 compute-0 nova_compute[260935]: 2025-10-11 09:05:09.031 2 INFO nova.compute.manager [-] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Took 2.52 seconds to deallocate network for instance.
Oct 11 09:05:09 compute-0 nova_compute[260935]: 2025-10-11 09:05:09.143 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:09 compute-0 nova_compute[260935]: 2025-10-11 09:05:09.144 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:09 compute-0 nova_compute[260935]: 2025-10-11 09:05:09.275 2 DEBUG oslo_concurrency.processutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:09 compute-0 ceph-mon[74313]: pgmap v1873: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 4.9 KiB/s wr, 129 op/s
Oct 11 09:05:09 compute-0 ceph-mon[74313]: osdmap e257: 3 total, 3 up, 3 in
Oct 11 09:05:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:05:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1398104273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:05:09 compute-0 nova_compute[260935]: 2025-10-11 09:05:09.734 2 DEBUG oslo_concurrency.processutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:09 compute-0 nova_compute[260935]: 2025-10-11 09:05:09.741 2 DEBUG nova.compute.provider_tree [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:05:09 compute-0 nova_compute[260935]: 2025-10-11 09:05:09.808 2 DEBUG nova.scheduler.client.report [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:05:10 compute-0 nova_compute[260935]: 2025-10-11 09:05:10.067 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:10 compute-0 nova_compute[260935]: 2025-10-11 09:05:10.165 2 INFO nova.scheduler.client.report [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Deleted allocations for instance 4af522a5-ae89-49de-a956-50cfa7dd136a
Oct 11 09:05:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.9 KiB/s wr, 51 op/s
Oct 11 09:05:10 compute-0 nova_compute[260935]: 2025-10-11 09:05:10.426 2 DEBUG oslo_concurrency.lockutils [None req-6755e971-7c8b-442c-b622-a7d1567419fc 1c9cf27bf1c44ef1ba0c291e3d5818be 001deb6d2eb5499b8e24cf16d6c2dbaa - - default default] Lock "4af522a5-ae89-49de-a956-50cfa7dd136a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1398104273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:05:11 compute-0 ceph-mon[74313]: pgmap v1875: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.9 KiB/s wr, 51 op/s
Oct 11 09:05:12 compute-0 nova_compute[260935]: 2025-10-11 09:05:12.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1876: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.7 KiB/s wr, 44 op/s
Oct 11 09:05:12 compute-0 podman[351410]: 2025-10-11 09:05:12.795787205 +0000 UTC m=+0.093554492 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 09:05:13 compute-0 ceph-mon[74313]: pgmap v1876: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.7 KiB/s wr, 44 op/s
Oct 11 09:05:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:05:13 compute-0 nova_compute[260935]: 2025-10-11 09:05:13.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.6 KiB/s wr, 40 op/s
Oct 11 09:05:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:15.200 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:15.201 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:15.203 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:15 compute-0 ceph-mon[74313]: pgmap v1877: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.6 KiB/s wr, 40 op/s
Oct 11 09:05:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1878: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 KiB/s wr, 35 op/s
Oct 11 09:05:17 compute-0 nova_compute[260935]: 2025-10-11 09:05:17.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:17 compute-0 ceph-mon[74313]: pgmap v1878: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 KiB/s wr, 35 op/s
Oct 11 09:05:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1879: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:05:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:05:18 compute-0 nova_compute[260935]: 2025-10-11 09:05:18.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:19 compute-0 ceph-mon[74313]: pgmap v1879: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:05:19 compute-0 nova_compute[260935]: 2025-10-11 09:05:19.694 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173504.6933997, 4af522a5-ae89-49de-a956-50cfa7dd136a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:05:19 compute-0 nova_compute[260935]: 2025-10-11 09:05:19.695 2 INFO nova.compute.manager [-] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] VM Stopped (Lifecycle Event)
Oct 11 09:05:19 compute-0 podman[351429]: 2025-10-11 09:05:19.790082894 +0000 UTC m=+0.089868317 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, managed_by=edpm_ansible)
Oct 11 09:05:19 compute-0 nova_compute[260935]: 2025-10-11 09:05:19.883 2 DEBUG nova.compute.manager [None req-05e39b50-7a52-494c-b8be-961d85758b5b - - - - - -] [instance: 4af522a5-ae89-49de-a956-50cfa7dd136a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:05:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:05:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:20.949 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:05:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:20.952 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:05:20 compute-0 nova_compute[260935]: 2025-10-11 09:05:20.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:21 compute-0 ceph-mon[74313]: pgmap v1880: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:05:22 compute-0 nova_compute[260935]: 2025-10-11 09:05:22.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:05:22 compute-0 podman[351450]: 2025-10-11 09:05:22.784592308 +0000 UTC m=+0.085107950 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:05:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:22.954 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:22 compute-0 podman[351470]: 2025-10-11 09:05:22.991909886 +0000 UTC m=+0.162505190 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 11 09:05:23 compute-0 ceph-mon[74313]: pgmap v1881: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:05:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:05:23 compute-0 nova_compute[260935]: 2025-10-11 09:05:23.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1882: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:05:24 compute-0 nova_compute[260935]: 2025-10-11 09:05:24.455 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:24 compute-0 nova_compute[260935]: 2025-10-11 09:05:24.455 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:24 compute-0 nova_compute[260935]: 2025-10-11 09:05:24.544 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:05:24 compute-0 nova_compute[260935]: 2025-10-11 09:05:24.708 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:24 compute-0 nova_compute[260935]: 2025-10-11 09:05:24.709 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:24 compute-0 nova_compute[260935]: 2025-10-11 09:05:24.717 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:05:24 compute-0 nova_compute[260935]: 2025-10-11 09:05:24.717 2 INFO nova.compute.claims [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:05:24 compute-0 ceph-mon[74313]: pgmap v1882: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:05:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:05:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:05:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:05:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:05:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:05:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:05:25 compute-0 nova_compute[260935]: 2025-10-11 09:05:25.061 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:05:25 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/313735075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:05:25 compute-0 nova_compute[260935]: 2025-10-11 09:05:25.563 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:25 compute-0 nova_compute[260935]: 2025-10-11 09:05:25.574 2 DEBUG nova.compute.provider_tree [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:05:25 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/313735075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:05:25 compute-0 nova_compute[260935]: 2025-10-11 09:05:25.821 2 DEBUG nova.scheduler.client.report [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:05:25 compute-0 nova_compute[260935]: 2025-10-11 09:05:25.957 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:25 compute-0 nova_compute[260935]: 2025-10-11 09:05:25.959 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.106 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.106 2 DEBUG nova.network.neutron [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.163 2 INFO nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.238 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:05:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.423 2 DEBUG nova.policy [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7730a035fdf47498398e20e5aaf9ba4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a5d578da5e746caa535eef295e1a67d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.531 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.533 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.534 2 INFO nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Creating image(s)
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.568 2 DEBUG nova.storage.rbd_utils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.603 2 DEBUG nova.storage.rbd_utils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.640 2 DEBUG nova.storage.rbd_utils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.645 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:05:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2529107901' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:05:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:05:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2529107901' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.773 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.775 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:26 compute-0 ceph-mon[74313]: pgmap v1883: 321 pgs: 321 active+clean; 374 MiB data, 838 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:05:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2529107901' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:05:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2529107901' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.776 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.777 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.810 2 DEBUG nova.storage.rbd_utils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:26 compute-0 nova_compute[260935]: 2025-10-11 09:05:26.815 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:27 compute-0 nova_compute[260935]: 2025-10-11 09:05:27.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:27 compute-0 nova_compute[260935]: 2025-10-11 09:05:27.145 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:27 compute-0 nova_compute[260935]: 2025-10-11 09:05:27.232 2 DEBUG nova.storage.rbd_utils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] resizing rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:05:27 compute-0 nova_compute[260935]: 2025-10-11 09:05:27.320 2 DEBUG nova.network.neutron [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Successfully created port: f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:05:27 compute-0 nova_compute[260935]: 2025-10-11 09:05:27.378 2 DEBUG nova.objects.instance [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'migration_context' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:05:27 compute-0 nova_compute[260935]: 2025-10-11 09:05:27.477 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:05:27 compute-0 nova_compute[260935]: 2025-10-11 09:05:27.478 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Ensure instance console log exists: /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:05:27 compute-0 nova_compute[260935]: 2025-10-11 09:05:27.479 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:27 compute-0 nova_compute[260935]: 2025-10-11 09:05:27.480 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:27 compute-0 nova_compute[260935]: 2025-10-11 09:05:27.481 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.184 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "81e243b6-6e9c-428e-a7b7-743f60f94728" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.185 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.240 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:05:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1884: 321 pgs: 321 active+clean; 420 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.343 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.343 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.354 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.355 2 INFO nova.compute.claims [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.452 2 DEBUG nova.network.neutron [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Successfully updated port: f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.498 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.498 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquired lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.499 2 DEBUG nova.network.neutron [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.545 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.546 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.599 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.599 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.600 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.652 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.652 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.653 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.754 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.807 2 DEBUG nova.network.neutron [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.857 2 DEBUG nova.compute.manager [req-cc78e6de-1276-4c9a-ad98-664820c9438c req-6cf85b23-11c9-4542-92a4-3231365a2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.858 2 DEBUG nova.compute.manager [req-cc78e6de-1276-4c9a-ad98-664820c9438c req-6cf85b23-11c9-4542-92a4-3231365a2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing instance network info cache due to event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.858 2 DEBUG oslo_concurrency.lockutils [req-cc78e6de-1276-4c9a-ad98-664820c9438c req-6cf85b23-11c9-4542-92a4-3231365a2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.885 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.885 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.886 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.886 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:05:28 compute-0 nova_compute[260935]: 2025-10-11 09:05:28.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:05:29 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/659624736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:05:29 compute-0 nova_compute[260935]: 2025-10-11 09:05:29.238 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:29 compute-0 nova_compute[260935]: 2025-10-11 09:05:29.245 2 DEBUG nova.compute.provider_tree [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:05:29 compute-0 nova_compute[260935]: 2025-10-11 09:05:29.290 2 DEBUG nova.scheduler.client.report [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:05:29 compute-0 ceph-mon[74313]: pgmap v1884: 321 pgs: 321 active+clean; 420 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Oct 11 09:05:29 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/659624736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:05:29 compute-0 nova_compute[260935]: 2025-10-11 09:05:29.432 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:29 compute-0 nova_compute[260935]: 2025-10-11 09:05:29.432 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:05:29 compute-0 nova_compute[260935]: 2025-10-11 09:05:29.777 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:05:29 compute-0 nova_compute[260935]: 2025-10-11 09:05:29.778 2 DEBUG nova.network.neutron [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:05:29 compute-0 nova_compute[260935]: 2025-10-11 09:05:29.890 2 INFO nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:05:30 compute-0 nova_compute[260935]: 2025-10-11 09:05:30.094 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:05:30 compute-0 nova_compute[260935]: 2025-10-11 09:05:30.235 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:05:30 compute-0 nova_compute[260935]: 2025-10-11 09:05:30.237 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:05:30 compute-0 nova_compute[260935]: 2025-10-11 09:05:30.237 2 INFO nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Creating image(s)
Oct 11 09:05:30 compute-0 nova_compute[260935]: 2025-10-11 09:05:30.269 2 DEBUG nova.storage.rbd_utils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] rbd image 81e243b6-6e9c-428e-a7b7-743f60f94728_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:30 compute-0 nova_compute[260935]: 2025-10-11 09:05:30.306 2 DEBUG nova.storage.rbd_utils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] rbd image 81e243b6-6e9c-428e-a7b7-743f60f94728_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1885: 321 pgs: 321 active+clean; 420 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Oct 11 09:05:30 compute-0 nova_compute[260935]: 2025-10-11 09:05:30.342 2 DEBUG nova.storage.rbd_utils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] rbd image 81e243b6-6e9c-428e-a7b7-743f60f94728_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:30 compute-0 nova_compute[260935]: 2025-10-11 09:05:30.347 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:30 compute-0 nova_compute[260935]: 2025-10-11 09:05:30.455 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:30 compute-0 nova_compute[260935]: 2025-10-11 09:05:30.457 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:30 compute-0 nova_compute[260935]: 2025-10-11 09:05:30.458 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:30 compute-0 nova_compute[260935]: 2025-10-11 09:05:30.458 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:30 compute-0 nova_compute[260935]: 2025-10-11 09:05:30.490 2 DEBUG nova.storage.rbd_utils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] rbd image 81e243b6-6e9c-428e-a7b7-743f60f94728_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:30 compute-0 nova_compute[260935]: 2025-10-11 09:05:30.495 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 81e243b6-6e9c-428e-a7b7-743f60f94728_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:30 compute-0 nova_compute[260935]: 2025-10-11 09:05:30.761 2 DEBUG nova.policy [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '727ffe9af08040179ad1981f985d61cc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '72f2c977b4a147e3b45be9903a0afdf1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:05:30 compute-0 nova_compute[260935]: 2025-10-11 09:05:30.829 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 81e243b6-6e9c-428e-a7b7-743f60f94728_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:30 compute-0 nova_compute[260935]: 2025-10-11 09:05:30.915 2 DEBUG nova.storage.rbd_utils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] resizing rbd image 81e243b6-6e9c-428e-a7b7-743f60f94728_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.047 2 DEBUG nova.objects.instance [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lazy-loading 'migration_context' on Instance uuid 81e243b6-6e9c-428e-a7b7-743f60f94728 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.094 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.095 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Ensure instance console log exists: /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.096 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.097 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.097 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:31 compute-0 ceph-mon[74313]: pgmap v1885: 321 pgs: 321 active+clean; 420 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.459 2 DEBUG nova.network.neutron [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updating instance_info_cache with network_info: [{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.754 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Releasing lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.754 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance network_info: |[{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.755 2 DEBUG oslo_concurrency.lockutils [req-cc78e6de-1276-4c9a-ad98-664820c9438c req-6cf85b23-11c9-4542-92a4-3231365a2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.756 2 DEBUG nova.network.neutron [req-cc78e6de-1276-4c9a-ad98-664820c9438c req-6cf85b23-11c9-4542-92a4-3231365a2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.761 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Start _get_guest_xml network_info=[{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.768 2 WARNING nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.773 2 DEBUG nova.virt.libvirt.host [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.774 2 DEBUG nova.virt.libvirt.host [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.778 2 DEBUG nova.virt.libvirt.host [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.779 2 DEBUG nova.virt.libvirt.host [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.779 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.780 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.780 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.781 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.781 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.781 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.782 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.782 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.783 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.783 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.783 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.784 2 DEBUG nova.virt.hardware [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:05:31 compute-0 nova_compute[260935]: 2025-10-11 09:05:31.788 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 458 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.2 MiB/s wr, 53 op/s
Oct 11 09:05:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:05:32 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1115873053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.329 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.352 2 DEBUG nova.storage.rbd_utils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.356 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:32 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1115873053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:05:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:05:32 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1525929425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.876 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.877 2 DEBUG nova.virt.libvirt.vif [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-942927350',display_name='tempest-ServerRescueTestJSONUnderV235-server-942927350',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-942927350',id=92,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a5d578da5e746caa535eef295e1a67d',ramdisk_id='',reservation_id='r-i0emrtot',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2035879439',owner_user_name='tempest-ServerRescueTestJSONUnderV235-2035879439-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:05:26Z,user_data=None,user_id='b7730a035fdf47498398e20e5aaf9ba4',uuid=a83f40e4-c852-4b45-a3d2-1cd65e9aaa31,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.878 2 DEBUG nova.network.os_vif_util [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Converting VIF {"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.879 2 DEBUG nova.network.os_vif_util [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:bf:b7,bridge_name='br-int',has_traffic_filtering=True,id=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31,network=Network(7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8dc388c-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.880 2 DEBUG nova.objects.instance [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'pci_devices' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.917 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:05:32 compute-0 nova_compute[260935]:   <uuid>a83f40e4-c852-4b45-a3d2-1cd65e9aaa31</uuid>
Oct 11 09:05:32 compute-0 nova_compute[260935]:   <name>instance-0000005c</name>
Oct 11 09:05:32 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:05:32 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:05:32 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-942927350</nova:name>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:05:31</nova:creationTime>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:05:32 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:05:32 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:05:32 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:05:32 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:05:32 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:05:32 compute-0 nova_compute[260935]:         <nova:user uuid="b7730a035fdf47498398e20e5aaf9ba4">tempest-ServerRescueTestJSONUnderV235-2035879439-project-member</nova:user>
Oct 11 09:05:32 compute-0 nova_compute[260935]:         <nova:project uuid="0a5d578da5e746caa535eef295e1a67d">tempest-ServerRescueTestJSONUnderV235-2035879439</nova:project>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:05:32 compute-0 nova_compute[260935]:         <nova:port uuid="f8dc388c-9e5a-43c2-8dd9-8fc28768ec31">
Oct 11 09:05:32 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:05:32 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:05:32 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <system>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <entry name="serial">a83f40e4-c852-4b45-a3d2-1cd65e9aaa31</entry>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <entry name="uuid">a83f40e4-c852-4b45-a3d2-1cd65e9aaa31</entry>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     </system>
Oct 11 09:05:32 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:05:32 compute-0 nova_compute[260935]:   <os>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:   </os>
Oct 11 09:05:32 compute-0 nova_compute[260935]:   <features>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:   </features>
Oct 11 09:05:32 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:05:32 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:05:32 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk">
Oct 11 09:05:32 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       </source>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:05:32 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config">
Oct 11 09:05:32 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       </source>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:05:32 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:b4:bf:b7"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <target dev="tapf8dc388c-9e"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/console.log" append="off"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <video>
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     </video>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:05:32 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:05:32 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:05:32 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:05:32 compute-0 nova_compute[260935]: </domain>
Oct 11 09:05:32 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.918 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Preparing to wait for external event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.918 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.918 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.919 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.919 2 DEBUG nova.virt.libvirt.vif [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-942927350',display_name='tempest-ServerRescueTestJSONUnderV235-server-942927350',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-942927350',id=92,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a5d578da5e746caa535eef295e1a67d',ramdisk_id='',reservation_id='r-i0emrtot',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2035879439',owner_user_name='tempest-ServerRescueTestJSONUnderV235-2035879439-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:05:26Z,user_data=None,user_id='b7730a035fdf47498398e20e5aaf9ba4',uuid=a83f40e4-c852-4b45-a3d2-1cd65e9aaa31,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.920 2 DEBUG nova.network.os_vif_util [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Converting VIF {"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.920 2 DEBUG nova.network.os_vif_util [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:bf:b7,bridge_name='br-int',has_traffic_filtering=True,id=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31,network=Network(7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8dc388c-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.920 2 DEBUG os_vif [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:bf:b7,bridge_name='br-int',has_traffic_filtering=True,id=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31,network=Network(7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8dc388c-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.922 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.925 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8dc388c-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.926 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf8dc388c-9e, col_values=(('external_ids', {'iface-id': 'f8dc388c-9e5a-43c2-8dd9-8fc28768ec31', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:bf:b7', 'vm-uuid': 'a83f40e4-c852-4b45-a3d2-1cd65e9aaa31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:32 compute-0 NetworkManager[44960]: <info>  [1760173532.9293] manager: (tapf8dc388c-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:32 compute-0 nova_compute[260935]: 2025-10-11 09:05:32.941 2 INFO os_vif [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:bf:b7,bridge_name='br-int',has_traffic_filtering=True,id=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31,network=Network(7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8dc388c-9e')
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.162 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.163 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.163 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] No VIF found with MAC fa:16:3e:b4:bf:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.164 2 INFO nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Using config drive
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.201 2 DEBUG nova.storage.rbd_utils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.280 2 DEBUG nova.network.neutron [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Successfully created port: 406fc421-7a8c-4974-8410-c31cab730ea9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.318 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.380 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.381 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.381 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.382 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.382 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.383 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.383 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.383 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.384 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.384 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:05:33 compute-0 ceph-mon[74313]: pgmap v1886: 321 pgs: 321 active+clean; 458 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.2 MiB/s wr, 53 op/s
Oct 11 09:05:33 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1525929425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.450 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.451 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.452 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.452 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.452 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:05:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:05:33 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/290285277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.945 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:33 compute-0 nova_compute[260935]: 2025-10-11 09:05:33.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.093 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.094 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.100 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.100 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.101 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.107 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.107 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.113 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.113 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.118 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.118 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:05:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 467 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.412 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.414 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3117MB free_disk=59.77231216430664GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.414 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.414 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:34 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/290285277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.618 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.618 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.619 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.619 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 15633aee-234a-4417-b5ea-f35f13820404 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.619 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.620 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 81e243b6-6e9c-428e-a7b7-743f60f94728 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.620 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.621 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.739 2 INFO nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Creating config drive at /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.749 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjdtg4xrk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.906 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:34 compute-0 nova_compute[260935]: 2025-10-11 09:05:34.967 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjdtg4xrk" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:35 compute-0 nova_compute[260935]: 2025-10-11 09:05:35.008 2 DEBUG nova.storage.rbd_utils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:35 compute-0 nova_compute[260935]: 2025-10-11 09:05:35.014 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:35 compute-0 nova_compute[260935]: 2025-10-11 09:05:35.219 2 DEBUG oslo_concurrency.processutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:35 compute-0 nova_compute[260935]: 2025-10-11 09:05:35.221 2 INFO nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Deleting local config drive /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config because it was imported into RBD.
Oct 11 09:05:35 compute-0 kernel: tapf8dc388c-9e: entered promiscuous mode
Oct 11 09:05:35 compute-0 ovn_controller[152945]: 2025-10-11T09:05:35Z|00811|binding|INFO|Claiming lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for this chassis.
Oct 11 09:05:35 compute-0 ovn_controller[152945]: 2025-10-11T09:05:35Z|00812|binding|INFO|f8dc388c-9e5a-43c2-8dd9-8fc28768ec31: Claiming fa:16:3e:b4:bf:b7 10.100.0.12
Oct 11 09:05:35 compute-0 NetworkManager[44960]: <info>  [1760173535.2883] manager: (tapf8dc388c-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/350)
Oct 11 09:05:35 compute-0 nova_compute[260935]: 2025-10-11 09:05:35.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:35 compute-0 systemd-udevd[352048]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:05:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:35.340 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:bf:b7 10.100.0.12'], port_security=['fa:16:3e:b4:bf:b7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a83f40e4-c852-4b45-a3d2-1cd65e9aaa31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a5d578da5e746caa535eef295e1a67d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e8a3473d-4723-4d4d-be0a-f96b6f35a4ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae01bff6-0f5a-48e3-a011-50bc6bac19c7, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:05:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:35.342 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 in datapath 7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75 bound to our chassis
Oct 11 09:05:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:35.343 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 11 09:05:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:35.343 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb4c68b-6d7e-46f8-a4a1-9e09afd70ad9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:35 compute-0 NetworkManager[44960]: <info>  [1760173535.3563] device (tapf8dc388c-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:05:35 compute-0 NetworkManager[44960]: <info>  [1760173535.3579] device (tapf8dc388c-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:05:35 compute-0 systemd-machined[215705]: New machine qemu-105-instance-0000005c.
Oct 11 09:05:35 compute-0 nova_compute[260935]: 2025-10-11 09:05:35.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:35 compute-0 ovn_controller[152945]: 2025-10-11T09:05:35Z|00813|binding|INFO|Setting lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 ovn-installed in OVS
Oct 11 09:05:35 compute-0 systemd[1]: Started Virtual Machine qemu-105-instance-0000005c.
Oct 11 09:05:35 compute-0 ovn_controller[152945]: 2025-10-11T09:05:35Z|00814|binding|INFO|Setting lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 up in Southbound
Oct 11 09:05:35 compute-0 nova_compute[260935]: 2025-10-11 09:05:35.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:05:35 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4224136825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:05:35 compute-0 nova_compute[260935]: 2025-10-11 09:05:35.419 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:35 compute-0 nova_compute[260935]: 2025-10-11 09:05:35.427 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:05:35 compute-0 ceph-mon[74313]: pgmap v1887: 321 pgs: 321 active+clean; 467 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 09:05:35 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4224136825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:05:35 compute-0 nova_compute[260935]: 2025-10-11 09:05:35.482 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:05:35 compute-0 nova_compute[260935]: 2025-10-11 09:05:35.568 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:05:35 compute-0 nova_compute[260935]: 2025-10-11 09:05:35.568 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.178 2 DEBUG nova.network.neutron [req-cc78e6de-1276-4c9a-ad98-664820c9438c req-6cf85b23-11c9-4542-92a4-3231365a2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updated VIF entry in instance network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.179 2 DEBUG nova.network.neutron [req-cc78e6de-1276-4c9a-ad98-664820c9438c req-6cf85b23-11c9-4542-92a4-3231365a2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updating instance_info_cache with network_info: [{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.235 2 DEBUG oslo_concurrency.lockutils [req-cc78e6de-1276-4c9a-ad98-664820c9438c req-6cf85b23-11c9-4542-92a4-3231365a2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.263 2 DEBUG nova.network.neutron [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Successfully updated port: 406fc421-7a8c-4974-8410-c31cab730ea9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:05:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1888: 321 pgs: 321 active+clean; 467 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.321 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "refresh_cache-81e243b6-6e9c-428e-a7b7-743f60f94728" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.321 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquired lock "refresh_cache-81e243b6-6e9c-428e-a7b7-743f60f94728" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.322 2 DEBUG nova.network.neutron [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.364 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173536.3634126, a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.365 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] VM Started (Lifecycle Event)
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.437 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.445 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173536.3635485, a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.445 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] VM Paused (Lifecycle Event)
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.489 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.493 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.503 2 DEBUG nova.compute.manager [req-e30a5b44-e1c2-4b5a-a0e6-2456047c966e req-fcc04171-65cd-4b45-b7b4-0b6a6d8e7574 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received event network-changed-406fc421-7a8c-4974-8410-c31cab730ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.504 2 DEBUG nova.compute.manager [req-e30a5b44-e1c2-4b5a-a0e6-2456047c966e req-fcc04171-65cd-4b45-b7b4-0b6a6d8e7574 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Refreshing instance network info cache due to event network-changed-406fc421-7a8c-4974-8410-c31cab730ea9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.504 2 DEBUG oslo_concurrency.lockutils [req-e30a5b44-e1c2-4b5a-a0e6-2456047c966e req-fcc04171-65cd-4b45-b7b4-0b6a6d8e7574 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-81e243b6-6e9c-428e-a7b7-743f60f94728" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.522 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.588 2 DEBUG nova.compute.manager [req-62936306-aa36-4aa1-bc02-e1b09bcc3b97 req-6b0dbc56-8118-4412-96ed-fee8b4a6b0d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.589 2 DEBUG oslo_concurrency.lockutils [req-62936306-aa36-4aa1-bc02-e1b09bcc3b97 req-6b0dbc56-8118-4412-96ed-fee8b4a6b0d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.589 2 DEBUG oslo_concurrency.lockutils [req-62936306-aa36-4aa1-bc02-e1b09bcc3b97 req-6b0dbc56-8118-4412-96ed-fee8b4a6b0d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.590 2 DEBUG oslo_concurrency.lockutils [req-62936306-aa36-4aa1-bc02-e1b09bcc3b97 req-6b0dbc56-8118-4412-96ed-fee8b4a6b0d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.590 2 DEBUG nova.compute.manager [req-62936306-aa36-4aa1-bc02-e1b09bcc3b97 req-6b0dbc56-8118-4412-96ed-fee8b4a6b0d6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Processing event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.591 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.599 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173536.5992155, a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.600 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] VM Resumed (Lifecycle Event)
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.602 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.606 2 INFO nova.virt.libvirt.driver [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance spawned successfully.
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.607 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.736 2 DEBUG nova.network.neutron [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.743 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.753 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "d813afc2-c844-45eb-b1ec-efbbf95498ef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.753 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.760 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.765 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.766 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.767 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.768 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.769 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.770 2 DEBUG nova.virt.libvirt.driver [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.841 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:05:36 compute-0 nova_compute[260935]: 2025-10-11 09:05:36.843 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:05:37 compute-0 nova_compute[260935]: 2025-10-11 09:05:37.072 2 INFO nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Took 10.54 seconds to spawn the instance on the hypervisor.
Oct 11 09:05:37 compute-0 nova_compute[260935]: 2025-10-11 09:05:37.073 2 DEBUG nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:05:37 compute-0 nova_compute[260935]: 2025-10-11 09:05:37.105 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:37 compute-0 nova_compute[260935]: 2025-10-11 09:05:37.105 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:37 compute-0 nova_compute[260935]: 2025-10-11 09:05:37.113 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:05:37 compute-0 nova_compute[260935]: 2025-10-11 09:05:37.113 2 INFO nova.compute.claims [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:05:37 compute-0 nova_compute[260935]: 2025-10-11 09:05:37.237 2 INFO nova.compute.manager [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Took 12.57 seconds to build instance.
Oct 11 09:05:37 compute-0 nova_compute[260935]: 2025-10-11 09:05:37.301 2 DEBUG oslo_concurrency.lockutils [None req-46756192-3102-4c02-ab31-97e0ad11d9b7 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:37 compute-0 ceph-mon[74313]: pgmap v1888: 321 pgs: 321 active+clean; 467 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 09:05:37 compute-0 nova_compute[260935]: 2025-10-11 09:05:37.490 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:37 compute-0 nova_compute[260935]: 2025-10-11 09:05:37.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:05:38 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204038342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.083 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.090 2 DEBUG nova.compute.provider_tree [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.121 2 DEBUG nova.scheduler.client.report [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.130 2 DEBUG nova.network.neutron [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Updating instance_info_cache with network_info: [{"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.197 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.198 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.234 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Releasing lock "refresh_cache-81e243b6-6e9c-428e-a7b7-743f60f94728" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.235 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Instance network_info: |[{"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.236 2 DEBUG oslo_concurrency.lockutils [req-e30a5b44-e1c2-4b5a-a0e6-2456047c966e req-fcc04171-65cd-4b45-b7b4-0b6a6d8e7574 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-81e243b6-6e9c-428e-a7b7-743f60f94728" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.236 2 DEBUG nova.network.neutron [req-e30a5b44-e1c2-4b5a-a0e6-2456047c966e req-fcc04171-65cd-4b45-b7b4-0b6a6d8e7574 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Refreshing network info cache for port 406fc421-7a8c-4974-8410-c31cab730ea9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.243 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Start _get_guest_xml network_info=[{"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.250 2 WARNING nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.259 2 DEBUG nova.virt.libvirt.host [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.261 2 DEBUG nova.virt.libvirt.host [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.265 2 DEBUG nova.virt.libvirt.host [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.266 2 DEBUG nova.virt.libvirt.host [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.266 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.267 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.268 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.268 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.269 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.269 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.269 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.270 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.270 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.271 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.271 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.272 2 DEBUG nova.virt.hardware [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.276 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1889: 321 pgs: 321 active+clean; 467 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 160 KiB/s rd, 3.6 MiB/s wr, 68 op/s
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.332 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.334 2 DEBUG nova.network.neutron [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.397 2 INFO nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:05:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4204038342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.456 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.697 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.699 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.700 2 INFO nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Creating image(s)
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.734 2 DEBUG nova.storage.rbd_utils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] rbd image d813afc2-c844-45eb-b1ec-efbbf95498ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:05:38 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3640757075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.769 2 DEBUG nova.storage.rbd_utils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] rbd image d813afc2-c844-45eb-b1ec-efbbf95498ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.791 2 DEBUG nova.storage.rbd_utils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] rbd image d813afc2-c844-45eb-b1ec-efbbf95498ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.793 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.823 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.849 2 DEBUG nova.storage.rbd_utils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] rbd image 81e243b6-6e9c-428e-a7b7-743f60f94728_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.853 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.887 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.888 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.889 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.889 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.913 2 DEBUG nova.storage.rbd_utils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] rbd image d813afc2-c844-45eb-b1ec-efbbf95498ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.917 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d813afc2-c844-45eb-b1ec-efbbf95498ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.970 2 DEBUG nova.policy [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ac3a19a7426e4e51aa4f94b016decc82', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0eed0ccef3b34c4db44e88ebe1aef9f0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:05:38 compute-0 nova_compute[260935]: 2025-10-11 09:05:38.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.005 2 DEBUG nova.compute.manager [req-e62fb6ec-2ee4-4318-8757-2073701ea5c1 req-71ef2b50-6610-4d69-8bfa-dc822e2b2200 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.006 2 DEBUG oslo_concurrency.lockutils [req-e62fb6ec-2ee4-4318-8757-2073701ea5c1 req-71ef2b50-6610-4d69-8bfa-dc822e2b2200 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.007 2 DEBUG oslo_concurrency.lockutils [req-e62fb6ec-2ee4-4318-8757-2073701ea5c1 req-71ef2b50-6610-4d69-8bfa-dc822e2b2200 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.008 2 DEBUG oslo_concurrency.lockutils [req-e62fb6ec-2ee4-4318-8757-2073701ea5c1 req-71ef2b50-6610-4d69-8bfa-dc822e2b2200 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.008 2 DEBUG nova.compute.manager [req-e62fb6ec-2ee4-4318-8757-2073701ea5c1 req-71ef2b50-6610-4d69-8bfa-dc822e2b2200 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] No waiting events found dispatching network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.009 2 WARNING nova.compute.manager [req-e62fb6ec-2ee4-4318-8757-2073701ea5c1 req-71ef2b50-6610-4d69-8bfa-dc822e2b2200 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received unexpected event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for instance with vm_state active and task_state None.
Oct 11 09:05:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:05:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4268169213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.318 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d813afc2-c844-45eb-b1ec-efbbf95498ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.355 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.356 2 DEBUG nova.virt.libvirt.vif [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:05:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-2126714831',display_name='tempest-ServerPasswordTestJSON-server-2126714831',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-2126714831',id=93,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='72f2c977b4a147e3b45be9903a0afdf1',ramdisk_id='',reservation_id='r-ig7c58si',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-624321625',owner_user_name='tempest-ServerPasswordTestJSON-624321625-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:05:30Z,user_data=None,user_id='727ffe9af08040179ad1981f985d61cc',uuid=81e243b6-6e9c-428e-a7b7-743f60f94728,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.357 2 DEBUG nova.network.os_vif_util [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Converting VIF {"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.358 2 DEBUG nova.network.os_vif_util [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:a8:86,bridge_name='br-int',has_traffic_filtering=True,id=406fc421-7a8c-4974-8410-c31cab730ea9,network=Network(6273b59c-68e6-46d1-ac04-7afa70039429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406fc421-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.359 2 DEBUG nova.objects.instance [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 81e243b6-6e9c-428e-a7b7-743f60f94728 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.403 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:05:39 compute-0 nova_compute[260935]:   <uuid>81e243b6-6e9c-428e-a7b7-743f60f94728</uuid>
Oct 11 09:05:39 compute-0 nova_compute[260935]:   <name>instance-0000005d</name>
Oct 11 09:05:39 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:05:39 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:05:39 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerPasswordTestJSON-server-2126714831</nova:name>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:05:38</nova:creationTime>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:05:39 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:05:39 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:05:39 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:05:39 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:05:39 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:05:39 compute-0 nova_compute[260935]:         <nova:user uuid="727ffe9af08040179ad1981f985d61cc">tempest-ServerPasswordTestJSON-624321625-project-member</nova:user>
Oct 11 09:05:39 compute-0 nova_compute[260935]:         <nova:project uuid="72f2c977b4a147e3b45be9903a0afdf1">tempest-ServerPasswordTestJSON-624321625</nova:project>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:05:39 compute-0 nova_compute[260935]:         <nova:port uuid="406fc421-7a8c-4974-8410-c31cab730ea9">
Oct 11 09:05:39 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:05:39 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:05:39 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <system>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <entry name="serial">81e243b6-6e9c-428e-a7b7-743f60f94728</entry>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <entry name="uuid">81e243b6-6e9c-428e-a7b7-743f60f94728</entry>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     </system>
Oct 11 09:05:39 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:05:39 compute-0 nova_compute[260935]:   <os>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:   </os>
Oct 11 09:05:39 compute-0 nova_compute[260935]:   <features>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:   </features>
Oct 11 09:05:39 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:05:39 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:05:39 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/81e243b6-6e9c-428e-a7b7-743f60f94728_disk">
Oct 11 09:05:39 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       </source>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:05:39 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/81e243b6-6e9c-428e-a7b7-743f60f94728_disk.config">
Oct 11 09:05:39 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       </source>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:05:39 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:fe:a8:86"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <target dev="tap406fc421-7a"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728/console.log" append="off"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <video>
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     </video>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:05:39 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:05:39 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:05:39 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:05:39 compute-0 nova_compute[260935]: </domain>
Oct 11 09:05:39 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.409 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Preparing to wait for external event network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.409 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.409 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.410 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.410 2 DEBUG nova.virt.libvirt.vif [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:05:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-2126714831',display_name='tempest-ServerPasswordTestJSON-server-2126714831',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-2126714831',id=93,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='72f2c977b4a147e3b45be9903a0afdf1',ramdisk_id='',reservation_id='r-ig7c58si',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-624321625',owner_user_name='tempest-ServerPassword
TestJSON-624321625-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:05:30Z,user_data=None,user_id='727ffe9af08040179ad1981f985d61cc',uuid=81e243b6-6e9c-428e-a7b7-743f60f94728,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.410 2 DEBUG nova.network.os_vif_util [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Converting VIF {"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.411 2 DEBUG nova.network.os_vif_util [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:a8:86,bridge_name='br-int',has_traffic_filtering=True,id=406fc421-7a8c-4974-8410-c31cab730ea9,network=Network(6273b59c-68e6-46d1-ac04-7afa70039429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406fc421-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.411 2 DEBUG os_vif [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:a8:86,bridge_name='br-int',has_traffic_filtering=True,id=406fc421-7a8c-4974-8410-c31cab730ea9,network=Network(6273b59c-68e6-46d1-ac04-7afa70039429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406fc421-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap406fc421-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.420 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap406fc421-7a, col_values=(('external_ids', {'iface-id': '406fc421-7a8c-4974-8410-c31cab730ea9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:a8:86', 'vm-uuid': '81e243b6-6e9c-428e-a7b7-743f60f94728'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.448 2 DEBUG nova.storage.rbd_utils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] resizing rbd image d813afc2-c844-45eb-b1ec-efbbf95498ef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:05:39 compute-0 NetworkManager[44960]: <info>  [1760173539.4493] manager: (tap406fc421-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Oct 11 09:05:39 compute-0 ceph-mon[74313]: pgmap v1889: 321 pgs: 321 active+clean; 467 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 160 KiB/s rd, 3.6 MiB/s wr, 68 op/s
Oct 11 09:05:39 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3640757075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:05:39 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4268169213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.505 2 INFO os_vif [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:a8:86,bridge_name='br-int',has_traffic_filtering=True,id=406fc421-7a8c-4974-8410-c31cab730ea9,network=Network(6273b59c-68e6-46d1-ac04-7afa70039429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406fc421-7a')
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.511 2 INFO nova.compute.manager [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Rescuing
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.512 2 DEBUG oslo_concurrency.lockutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.514 2 DEBUG oslo_concurrency.lockutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquired lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.514 2 DEBUG nova.network.neutron [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.595 2 DEBUG nova.objects.instance [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lazy-loading 'migration_context' on Instance uuid d813afc2-c844-45eb-b1ec-efbbf95498ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.635 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.635 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.635 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] No VIF found with MAC fa:16:3e:fe:a8:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.636 2 INFO nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Using config drive
Oct 11 09:05:39 compute-0 sudo[352338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:05:39 compute-0 sudo[352338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:39 compute-0 sudo[352338]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.661 2 DEBUG nova.storage.rbd_utils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] rbd image 81e243b6-6e9c-428e-a7b7-743f60f94728_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.671 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.672 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Ensure instance console log exists: /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.673 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.673 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:39 compute-0 nova_compute[260935]: 2025-10-11 09:05:39.673 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:39 compute-0 sudo[352397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:05:39 compute-0 sudo[352397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:39 compute-0 sudo[352397]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:39 compute-0 sudo[352424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:05:39 compute-0 sudo[352424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:39 compute-0 sudo[352424]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:39 compute-0 sudo[352449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 09:05:39 compute-0 sudo[352449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:40 compute-0 nova_compute[260935]: 2025-10-11 09:05:40.106 2 INFO nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Creating config drive at /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728/disk.config
Oct 11 09:05:40 compute-0 nova_compute[260935]: 2025-10-11 09:05:40.119 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy41g0n3w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:40 compute-0 nova_compute[260935]: 2025-10-11 09:05:40.266 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy41g0n3w" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:40 compute-0 nova_compute[260935]: 2025-10-11 09:05:40.307 2 DEBUG nova.storage.rbd_utils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] rbd image 81e243b6-6e9c-428e-a7b7-743f60f94728_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 467 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 151 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Oct 11 09:05:40 compute-0 nova_compute[260935]: 2025-10-11 09:05:40.320 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728/disk.config 81e243b6-6e9c-428e-a7b7-743f60f94728_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:40 compute-0 nova_compute[260935]: 2025-10-11 09:05:40.452 2 DEBUG nova.network.neutron [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Successfully created port: 4d16ebf4-64d7-4476-8898-f62d856c729b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:05:40 compute-0 nova_compute[260935]: 2025-10-11 09:05:40.552 2 DEBUG oslo_concurrency.processutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728/disk.config 81e243b6-6e9c-428e-a7b7-743f60f94728_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:40 compute-0 nova_compute[260935]: 2025-10-11 09:05:40.553 2 INFO nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Deleting local config drive /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728/disk.config because it was imported into RBD.
Oct 11 09:05:40 compute-0 podman[352585]: 2025-10-11 09:05:40.626494127 +0000 UTC m=+0.092012828 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:05:40 compute-0 kernel: tap406fc421-7a: entered promiscuous mode
Oct 11 09:05:40 compute-0 NetworkManager[44960]: <info>  [1760173540.6326] manager: (tap406fc421-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/352)
Oct 11 09:05:40 compute-0 systemd-udevd[352615]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:05:40 compute-0 ovn_controller[152945]: 2025-10-11T09:05:40Z|00815|binding|INFO|Claiming lport 406fc421-7a8c-4974-8410-c31cab730ea9 for this chassis.
Oct 11 09:05:40 compute-0 ovn_controller[152945]: 2025-10-11T09:05:40Z|00816|binding|INFO|406fc421-7a8c-4974-8410-c31cab730ea9: Claiming fa:16:3e:fe:a8:86 10.100.0.9
Oct 11 09:05:40 compute-0 nova_compute[260935]: 2025-10-11 09:05:40.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:40 compute-0 nova_compute[260935]: 2025-10-11 09:05:40.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:40 compute-0 NetworkManager[44960]: <info>  [1760173540.7068] device (tap406fc421-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:05:40 compute-0 NetworkManager[44960]: <info>  [1760173540.7083] device (tap406fc421-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:05:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.723 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:a8:86 10.100.0.9'], port_security=['fa:16:3e:fe:a8:86 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '81e243b6-6e9c-428e-a7b7-743f60f94728', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6273b59c-68e6-46d1-ac04-7afa70039429', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '72f2c977b4a147e3b45be9903a0afdf1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6b8ac1d7-d0d6-46d1-9dc3-fb48cbd12dd5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbffcb7d-7dd8-41b0-903b-e11c21f9092c, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=406fc421-7a8c-4974-8410-c31cab730ea9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:05:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.724 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 406fc421-7a8c-4974-8410-c31cab730ea9 in datapath 6273b59c-68e6-46d1-ac04-7afa70039429 bound to our chassis
Oct 11 09:05:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.727 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6273b59c-68e6-46d1-ac04-7afa70039429
Oct 11 09:05:40 compute-0 podman[352585]: 2025-10-11 09:05:40.732078531 +0000 UTC m=+0.197597252 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:05:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.745 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3b212c04-666c-4438-bb04-79bea638e819]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:40 compute-0 systemd-machined[215705]: New machine qemu-106-instance-0000005d.
Oct 11 09:05:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.749 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6273b59c-61 in ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:05:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.752 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6273b59c-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:05:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.753 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ba13cf30-8caa-49bb-9c6e-6c24962cbdca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.754 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b863be8c-1a75-40d0-bdaa-f951ebc470d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:40 compute-0 systemd[1]: Started Virtual Machine qemu-106-instance-0000005d.
Oct 11 09:05:40 compute-0 ovn_controller[152945]: 2025-10-11T09:05:40Z|00817|binding|INFO|Setting lport 406fc421-7a8c-4974-8410-c31cab730ea9 ovn-installed in OVS
Oct 11 09:05:40 compute-0 ovn_controller[152945]: 2025-10-11T09:05:40Z|00818|binding|INFO|Setting lport 406fc421-7a8c-4974-8410-c31cab730ea9 up in Southbound
Oct 11 09:05:40 compute-0 nova_compute[260935]: 2025-10-11 09:05:40.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.772 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6ccd1650-66ce-4a80-a86e-039b33c0420f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.800 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f45e60-d27c-4a5a-90bf-55558cb1ea78]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.844 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9bfce720-67f3-421a-bfea-838ff2156a6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:40 compute-0 NetworkManager[44960]: <info>  [1760173540.8520] manager: (tap6273b59c-60): new Veth device (/org/freedesktop/NetworkManager/Devices/353)
Oct 11 09:05:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.850 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e19546-589a-4b14-ba8b-eb710fb4d98c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:40 compute-0 systemd-udevd[352619]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:05:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.892 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7eec5254-3b0a-471e-a063-19f89cbdb4a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.898 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0f7b63-bf78-412e-a0bc-6b8247f9df83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:40 compute-0 NetworkManager[44960]: <info>  [1760173540.9283] device (tap6273b59c-60): carrier: link connected
Oct 11 09:05:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.941 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e90790-c741-4a52-87dd-57081552dd74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.965 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b12f712e-3351-46e5-85f6-0d5395d51a59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6273b59c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:44:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538563, 'reachable_time': 40063, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352682, 'error': None, 'target': 'ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:40.986 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[34fe9fb4-9767-4fa4-9de5-c00a443b9b92]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:448c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538563, 'tstamp': 538563}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352690, 'error': None, 'target': 'ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.014 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3f124662-73db-41d4-8ca1-02df76ee12db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6273b59c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:44:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538563, 'reachable_time': 40063, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352695, 'error': None, 'target': 'ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.077 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eae6f127-610f-4f1d-9f85-bc5ec5fe978a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.155 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ec86d8fd-c942-4dbf-8b98-e155e9b36490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.157 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6273b59c-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.157 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.158 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6273b59c-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:41 compute-0 NetworkManager[44960]: <info>  [1760173541.1602] manager: (tap6273b59c-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Oct 11 09:05:41 compute-0 kernel: tap6273b59c-60: entered promiscuous mode
Oct 11 09:05:41 compute-0 nova_compute[260935]: 2025-10-11 09:05:41.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.162 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6273b59c-60, col_values=(('external_ids', {'iface-id': '7e357d1d-ae9b-4fa9-ab3c-8d0e604ab2fd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:41 compute-0 ovn_controller[152945]: 2025-10-11T09:05:41Z|00819|binding|INFO|Releasing lport 7e357d1d-ae9b-4fa9-ab3c-8d0e604ab2fd from this chassis (sb_readonly=0)
Oct 11 09:05:41 compute-0 nova_compute[260935]: 2025-10-11 09:05:41.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.182 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6273b59c-68e6-46d1-ac04-7afa70039429.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6273b59c-68e6-46d1-ac04-7afa70039429.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.183 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d22087c3-7806-476b-b145-894ec6251c40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.184 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-6273b59c-68e6-46d1-ac04-7afa70039429
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/6273b59c-68e6-46d1-ac04-7afa70039429.pid.haproxy
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 6273b59c-68e6-46d1-ac04-7afa70039429
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:05:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:41.185 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429', 'env', 'PROCESS_TAG=haproxy-6273b59c-68e6-46d1-ac04-7afa70039429', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6273b59c-68e6-46d1-ac04-7afa70039429.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:05:41 compute-0 ceph-mon[74313]: pgmap v1890: 321 pgs: 321 active+clean; 467 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 151 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Oct 11 09:05:41 compute-0 sudo[352449]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:05:41 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:05:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:05:41 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:05:41 compute-0 sudo[352854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:05:41 compute-0 sudo[352854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:41 compute-0 sudo[352854]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:41 compute-0 podman[352853]: 2025-10-11 09:05:41.627495312 +0000 UTC m=+0.069970228 container create 684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 09:05:41 compute-0 systemd[1]: Started libpod-conmon-684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15.scope.
Oct 11 09:05:41 compute-0 podman[352853]: 2025-10-11 09:05:41.598728941 +0000 UTC m=+0.041203887 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:05:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:05:41 compute-0 sudo[352891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:05:41 compute-0 sudo[352891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:41 compute-0 sudo[352891]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38285c17898e47fb52b792120b4ee9bb04913e8a53c1010175d0aebd20a4d016/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:05:41 compute-0 podman[352853]: 2025-10-11 09:05:41.717538663 +0000 UTC m=+0.160013599 container init 684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 09:05:41 compute-0 podman[352853]: 2025-10-11 09:05:41.724658196 +0000 UTC m=+0.167133112 container start 684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:05:41 compute-0 neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429[352915]: [NOTICE]   (352930) : New worker (352947) forked
Oct 11 09:05:41 compute-0 neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429[352915]: [NOTICE]   (352930) : Loading success.
Oct 11 09:05:41 compute-0 sudo[352921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:05:41 compute-0 sudo[352921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:41 compute-0 sudo[352921]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:41 compute-0 sudo[352958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:05:41 compute-0 sudo[352958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:41 compute-0 nova_compute[260935]: 2025-10-11 09:05:41.952 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173541.9510348, 81e243b6-6e9c-428e-a7b7-743f60f94728 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:05:41 compute-0 nova_compute[260935]: 2025-10-11 09:05:41.953 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] VM Started (Lifecycle Event)
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.011 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.016 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173541.9513943, 81e243b6-6e9c-428e-a7b7-743f60f94728 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.016 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] VM Paused (Lifecycle Event)
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.031 2 DEBUG nova.network.neutron [req-e30a5b44-e1c2-4b5a-a0e6-2456047c966e req-fcc04171-65cd-4b45-b7b4-0b6a6d8e7574 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Updated VIF entry in instance network info cache for port 406fc421-7a8c-4974-8410-c31cab730ea9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.032 2 DEBUG nova.network.neutron [req-e30a5b44-e1c2-4b5a-a0e6-2456047c966e req-fcc04171-65cd-4b45-b7b4-0b6a6d8e7574 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Updating instance_info_cache with network_info: [{"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.080 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.083 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.095 2 DEBUG oslo_concurrency.lockutils [req-e30a5b44-e1c2-4b5a-a0e6-2456047c966e req-fcc04171-65cd-4b45-b7b4-0b6a6d8e7574 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-81e243b6-6e9c-428e-a7b7-743f60f94728" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.189 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.265 2 DEBUG nova.compute.manager [req-a036fa2f-54f3-4a22-b174-b888b56df61f req-02a45eb3-8aa2-4a28-af7a-7b602968e0e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received event network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.265 2 DEBUG oslo_concurrency.lockutils [req-a036fa2f-54f3-4a22-b174-b888b56df61f req-02a45eb3-8aa2-4a28-af7a-7b602968e0e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.266 2 DEBUG oslo_concurrency.lockutils [req-a036fa2f-54f3-4a22-b174-b888b56df61f req-02a45eb3-8aa2-4a28-af7a-7b602968e0e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.266 2 DEBUG oslo_concurrency.lockutils [req-a036fa2f-54f3-4a22-b174-b888b56df61f req-02a45eb3-8aa2-4a28-af7a-7b602968e0e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.266 2 DEBUG nova.compute.manager [req-a036fa2f-54f3-4a22-b174-b888b56df61f req-02a45eb3-8aa2-4a28-af7a-7b602968e0e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Processing event network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.267 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.272 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173542.2724977, 81e243b6-6e9c-428e-a7b7-743f60f94728 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.274 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] VM Resumed (Lifecycle Event)
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.276 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.281 2 INFO nova.virt.libvirt.driver [-] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Instance spawned successfully.
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.282 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:05:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1891: 321 pgs: 321 active+clean; 498 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.1 MiB/s wr, 136 op/s
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.387 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.391 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.422 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.423 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.423 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.424 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.425 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.426 2 DEBUG nova.virt.libvirt.driver [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.461 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:05:42 compute-0 sudo[352958]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:42 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:05:42 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:05:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:05:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.554 2 INFO nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Took 12.32 seconds to spawn the instance on the hypervisor.
Oct 11 09:05:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:05:42 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.556 2 DEBUG nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:05:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:05:42 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:05:42 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev d079bdc9-6cad-42ad-9794-2d0b0e94d0da does not exist
Oct 11 09:05:42 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 93da4e52-7f4c-4939-9ca3-c867c9a0be41 does not exist
Oct 11 09:05:42 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 17af2677-1c92-468e-88c7-1f1a362f899c does not exist
Oct 11 09:05:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:05:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:05:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:05:42 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:05:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:05:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.638 2 DEBUG nova.network.neutron [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updating instance_info_cache with network_info: [{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:05:42 compute-0 sudo[353016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:05:42 compute-0 sudo[353016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.671 2 INFO nova.compute.manager [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Took 14.35 seconds to build instance.
Oct 11 09:05:42 compute-0 sudo[353016]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.702 2 DEBUG oslo_concurrency.lockutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Releasing lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:05:42 compute-0 nova_compute[260935]: 2025-10-11 09:05:42.723 2 DEBUG oslo_concurrency.lockutils [None req-cb7f5307-269b-4aa4-9170-94eb5f315201 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:42 compute-0 sudo[353041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:05:42 compute-0 sudo[353041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:42 compute-0 sudo[353041]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:42 compute-0 sudo[353066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:05:42 compute-0 sudo[353066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:42 compute-0 sudo[353066]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:42 compute-0 podman[353090]: 2025-10-11 09:05:42.968535334 +0000 UTC m=+0.085426460 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Oct 11 09:05:42 compute-0 sudo[353097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:05:42 compute-0 sudo[353097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:43 compute-0 nova_compute[260935]: 2025-10-11 09:05:43.156 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 09:05:43 compute-0 podman[353172]: 2025-10-11 09:05:43.430311067 +0000 UTC m=+0.054246880 container create 63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcclintock, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 09:05:43 compute-0 systemd[1]: Started libpod-conmon-63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a.scope.
Oct 11 09:05:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:05:43 compute-0 podman[353172]: 2025-10-11 09:05:43.407041702 +0000 UTC m=+0.030977525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:05:43 compute-0 podman[353172]: 2025-10-11 09:05:43.509362773 +0000 UTC m=+0.133298616 container init 63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 09:05:43 compute-0 podman[353172]: 2025-10-11 09:05:43.518607077 +0000 UTC m=+0.142542920 container start 63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 09:05:43 compute-0 podman[353172]: 2025-10-11 09:05:43.523149257 +0000 UTC m=+0.147085110 container attach 63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcclintock, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:05:43 compute-0 affectionate_mcclintock[353188]: 167 167
Oct 11 09:05:43 compute-0 systemd[1]: libpod-63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a.scope: Deactivated successfully.
Oct 11 09:05:43 compute-0 podman[353172]: 2025-10-11 09:05:43.527889572 +0000 UTC m=+0.151825435 container died 63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 09:05:43 compute-0 ceph-mon[74313]: pgmap v1891: 321 pgs: 321 active+clean; 498 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.1 MiB/s wr, 136 op/s
Oct 11 09:05:43 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:05:43 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:05:43 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:05:43 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:05:43 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:05:43 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:05:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-04b81ab6612f6cc4bebc39542880487a00c371e56db58da4af147f804723695f-merged.mount: Deactivated successfully.
Oct 11 09:05:43 compute-0 podman[353172]: 2025-10-11 09:05:43.576245783 +0000 UTC m=+0.200181586 container remove 63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:05:43 compute-0 systemd[1]: libpod-conmon-63268ac528dda4463fb02d65897d580530bf0b1d32c739fbf717783e08c0b88a.scope: Deactivated successfully.
Oct 11 09:05:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:05:43 compute-0 podman[353211]: 2025-10-11 09:05:43.818112758 +0000 UTC m=+0.058007067 container create 741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 09:05:43 compute-0 systemd[1]: Started libpod-conmon-741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176.scope.
Oct 11 09:05:43 compute-0 podman[353211]: 2025-10-11 09:05:43.792849427 +0000 UTC m=+0.032743776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:05:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cc36f4a5c6c245982bff7ef571483fe9a8b371de51605ff15338529ffbba78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cc36f4a5c6c245982bff7ef571483fe9a8b371de51605ff15338529ffbba78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cc36f4a5c6c245982bff7ef571483fe9a8b371de51605ff15338529ffbba78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cc36f4a5c6c245982bff7ef571483fe9a8b371de51605ff15338529ffbba78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cc36f4a5c6c245982bff7ef571483fe9a8b371de51605ff15338529ffbba78/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:05:43 compute-0 podman[353211]: 2025-10-11 09:05:43.923461805 +0000 UTC m=+0.163356104 container init 741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 09:05:43 compute-0 podman[353211]: 2025-10-11 09:05:43.935639663 +0000 UTC m=+0.175533952 container start 741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 09:05:43 compute-0 podman[353211]: 2025-10-11 09:05:43.938734881 +0000 UTC m=+0.178629170 container attach 741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 09:05:43 compute-0 nova_compute[260935]: 2025-10-11 09:05:43.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 513 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Oct 11 09:05:44 compute-0 nova_compute[260935]: 2025-10-11 09:05:44.409 2 DEBUG nova.network.neutron [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Successfully updated port: 4d16ebf4-64d7-4476-8898-f62d856c729b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:05:44 compute-0 nova_compute[260935]: 2025-10-11 09:05:44.507 2 DEBUG nova.compute.manager [req-10681af1-9d21-4910-b014-759dcd7bac42 req-3b41aaae-17e3-4264-aff7-140fca51137e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received event network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:05:44 compute-0 nova_compute[260935]: 2025-10-11 09:05:44.507 2 DEBUG oslo_concurrency.lockutils [req-10681af1-9d21-4910-b014-759dcd7bac42 req-3b41aaae-17e3-4264-aff7-140fca51137e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:44 compute-0 nova_compute[260935]: 2025-10-11 09:05:44.508 2 DEBUG oslo_concurrency.lockutils [req-10681af1-9d21-4910-b014-759dcd7bac42 req-3b41aaae-17e3-4264-aff7-140fca51137e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:44 compute-0 nova_compute[260935]: 2025-10-11 09:05:44.513 2 DEBUG oslo_concurrency.lockutils [req-10681af1-9d21-4910-b014-759dcd7bac42 req-3b41aaae-17e3-4264-aff7-140fca51137e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:44 compute-0 nova_compute[260935]: 2025-10-11 09:05:44.513 2 DEBUG nova.compute.manager [req-10681af1-9d21-4910-b014-759dcd7bac42 req-3b41aaae-17e3-4264-aff7-140fca51137e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] No waiting events found dispatching network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:05:44 compute-0 nova_compute[260935]: 2025-10-11 09:05:44.513 2 WARNING nova.compute.manager [req-10681af1-9d21-4910-b014-759dcd7bac42 req-3b41aaae-17e3-4264-aff7-140fca51137e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received unexpected event network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 for instance with vm_state active and task_state None.
Oct 11 09:05:44 compute-0 nova_compute[260935]: 2025-10-11 09:05:44.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:44 compute-0 nova_compute[260935]: 2025-10-11 09:05:44.517 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "refresh_cache-d813afc2-c844-45eb-b1ec-efbbf95498ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:05:44 compute-0 nova_compute[260935]: 2025-10-11 09:05:44.517 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquired lock "refresh_cache-d813afc2-c844-45eb-b1ec-efbbf95498ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:05:44 compute-0 nova_compute[260935]: 2025-10-11 09:05:44.517 2 DEBUG nova.network.neutron [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:05:45 compute-0 funny_swanson[353228]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:05:45 compute-0 funny_swanson[353228]: --> relative data size: 1.0
Oct 11 09:05:45 compute-0 funny_swanson[353228]: --> All data devices are unavailable
Oct 11 09:05:45 compute-0 systemd[1]: libpod-741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176.scope: Deactivated successfully.
Oct 11 09:05:45 compute-0 systemd[1]: libpod-741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176.scope: Consumed 1.256s CPU time.
Oct 11 09:05:45 compute-0 podman[353257]: 2025-10-11 09:05:45.336623697 +0000 UTC m=+0.051814220 container died 741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 09:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-27cc36f4a5c6c245982bff7ef571483fe9a8b371de51605ff15338529ffbba78-merged.mount: Deactivated successfully.
Oct 11 09:05:45 compute-0 nova_compute[260935]: 2025-10-11 09:05:45.372 2 DEBUG nova.network.neutron [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:05:45 compute-0 podman[353257]: 2025-10-11 09:05:45.403418623 +0000 UTC m=+0.118609096 container remove 741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:05:45 compute-0 systemd[1]: libpod-conmon-741002d399bae12a4c47b7c98bce94a889b79450a9baa31311e3368d61805176.scope: Deactivated successfully.
Oct 11 09:05:45 compute-0 sudo[353097]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:45 compute-0 sudo[353270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:05:45 compute-0 sudo[353270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:45 compute-0 sudo[353270]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:45 compute-0 ceph-mon[74313]: pgmap v1892: 321 pgs: 321 active+clean; 513 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Oct 11 09:05:45 compute-0 sudo[353295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:05:45 compute-0 sudo[353295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:45 compute-0 sudo[353295]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:45 compute-0 sudo[353320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:05:45 compute-0 sudo[353320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:45 compute-0 sudo[353320]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:45 compute-0 sudo[353345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:05:45 compute-0 sudo[353345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:45 compute-0 nova_compute[260935]: 2025-10-11 09:05:45.830 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "81e243b6-6e9c-428e-a7b7-743f60f94728" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:45 compute-0 nova_compute[260935]: 2025-10-11 09:05:45.831 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:45 compute-0 nova_compute[260935]: 2025-10-11 09:05:45.832 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:45 compute-0 nova_compute[260935]: 2025-10-11 09:05:45.832 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:45 compute-0 nova_compute[260935]: 2025-10-11 09:05:45.832 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:45 compute-0 nova_compute[260935]: 2025-10-11 09:05:45.834 2 INFO nova.compute.manager [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Terminating instance
Oct 11 09:05:45 compute-0 nova_compute[260935]: 2025-10-11 09:05:45.836 2 DEBUG nova.compute.manager [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:05:45 compute-0 kernel: tap406fc421-7a (unregistering): left promiscuous mode
Oct 11 09:05:45 compute-0 NetworkManager[44960]: <info>  [1760173545.9039] device (tap406fc421-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:05:45 compute-0 ovn_controller[152945]: 2025-10-11T09:05:45Z|00820|binding|INFO|Releasing lport 406fc421-7a8c-4974-8410-c31cab730ea9 from this chassis (sb_readonly=0)
Oct 11 09:05:45 compute-0 ovn_controller[152945]: 2025-10-11T09:05:45Z|00821|binding|INFO|Setting lport 406fc421-7a8c-4974-8410-c31cab730ea9 down in Southbound
Oct 11 09:05:45 compute-0 nova_compute[260935]: 2025-10-11 09:05:45.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:45 compute-0 ovn_controller[152945]: 2025-10-11T09:05:45Z|00822|binding|INFO|Removing iface tap406fc421-7a ovn-installed in OVS
Oct 11 09:05:45 compute-0 nova_compute[260935]: 2025-10-11 09:05:45.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:45 compute-0 nova_compute[260935]: 2025-10-11 09:05:45.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:45.960 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:a8:86 10.100.0.9'], port_security=['fa:16:3e:fe:a8:86 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '81e243b6-6e9c-428e-a7b7-743f60f94728', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6273b59c-68e6-46d1-ac04-7afa70039429', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '72f2c977b4a147e3b45be9903a0afdf1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6b8ac1d7-d0d6-46d1-9dc3-fb48cbd12dd5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbffcb7d-7dd8-41b0-903b-e11c21f9092c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=406fc421-7a8c-4974-8410-c31cab730ea9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:05:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:45.961 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 406fc421-7a8c-4974-8410-c31cab730ea9 in datapath 6273b59c-68e6-46d1-ac04-7afa70039429 unbound from our chassis
Oct 11 09:05:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:45.963 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6273b59c-68e6-46d1-ac04-7afa70039429, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:05:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:45.964 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ee67e4-1a25-41da-bc32-5881a726f2fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:45.965 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429 namespace which is not needed anymore
Oct 11 09:05:45 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct 11 09:05:45 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d0000005d.scope: Consumed 4.566s CPU time.
Oct 11 09:05:45 compute-0 systemd-machined[215705]: Machine qemu-106-instance-0000005d terminated.
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.084 2 INFO nova.virt.libvirt.driver [-] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Instance destroyed successfully.
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.084 2 DEBUG nova.objects.instance [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lazy-loading 'resources' on Instance uuid 81e243b6-6e9c-428e-a7b7-743f60f94728 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:05:46 compute-0 neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429[352915]: [NOTICE]   (352930) : haproxy version is 2.8.14-c23fe91
Oct 11 09:05:46 compute-0 neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429[352915]: [NOTICE]   (352930) : path to executable is /usr/sbin/haproxy
Oct 11 09:05:46 compute-0 neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429[352915]: [WARNING]  (352930) : Exiting Master process...
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.140 2 DEBUG nova.virt.libvirt.vif [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:05:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-2126714831',display_name='tempest-ServerPasswordTestJSON-server-2126714831',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-2126714831',id=93,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:05:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='72f2c977b4a147e3b45be9903a0afdf1',ramdisk_id='',reservation_id='r-ig7c58si',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-624321625',owner_user_name='tempest-ServerPasswordTestJSON-624321625-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:05:45Z,user_data=None,user_id='727ffe9af08040179ad1981f985d61cc',uuid=81e243b6-6e9c-428e-a7b7-743f60f94728,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.141 2 DEBUG nova.network.os_vif_util [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Converting VIF {"id": "406fc421-7a8c-4974-8410-c31cab730ea9", "address": "fa:16:3e:fe:a8:86", "network": {"id": "6273b59c-68e6-46d1-ac04-7afa70039429", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-29199496-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f2c977b4a147e3b45be9903a0afdf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406fc421-7a", "ovs_interfaceid": "406fc421-7a8c-4974-8410-c31cab730ea9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.142 2 DEBUG nova.network.os_vif_util [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:a8:86,bridge_name='br-int',has_traffic_filtering=True,id=406fc421-7a8c-4974-8410-c31cab730ea9,network=Network(6273b59c-68e6-46d1-ac04-7afa70039429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406fc421-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:05:46 compute-0 neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429[352915]: [ALERT]    (352930) : Current worker (352947) exited with code 143 (Terminated)
Oct 11 09:05:46 compute-0 neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429[352915]: [WARNING]  (352930) : All workers exited. Exiting... (0)
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.142 2 DEBUG os_vif [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:a8:86,bridge_name='br-int',has_traffic_filtering=True,id=406fc421-7a8c-4974-8410-c31cab730ea9,network=Network(6273b59c-68e6-46d1-ac04-7afa70039429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406fc421-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:05:46 compute-0 systemd[1]: libpod-684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15.scope: Deactivated successfully.
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.144 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap406fc421-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:05:46 compute-0 podman[353438]: 2025-10-11 09:05:46.151058605 +0000 UTC m=+0.047313971 container died 684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.154 2 INFO os_vif [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:a8:86,bridge_name='br-int',has_traffic_filtering=True,id=406fc421-7a8c-4974-8410-c31cab730ea9,network=Network(6273b59c-68e6-46d1-ac04-7afa70039429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406fc421-7a')
Oct 11 09:05:46 compute-0 podman[353439]: 2025-10-11 09:05:46.175156733 +0000 UTC m=+0.059551021 container create dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cannon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:05:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15-userdata-shm.mount: Deactivated successfully.
Oct 11 09:05:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-38285c17898e47fb52b792120b4ee9bb04913e8a53c1010175d0aebd20a4d016-merged.mount: Deactivated successfully.
Oct 11 09:05:46 compute-0 podman[353438]: 2025-10-11 09:05:46.208247428 +0000 UTC m=+0.104502794 container cleanup 684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:05:46 compute-0 systemd[1]: Started libpod-conmon-dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd.scope.
Oct 11 09:05:46 compute-0 systemd[1]: libpod-conmon-684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15.scope: Deactivated successfully.
Oct 11 09:05:46 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:05:46 compute-0 podman[353439]: 2025-10-11 09:05:46.140780202 +0000 UTC m=+0.025174510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:05:46 compute-0 podman[353439]: 2025-10-11 09:05:46.251907694 +0000 UTC m=+0.136301992 container init dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 09:05:46 compute-0 podman[353439]: 2025-10-11 09:05:46.260392037 +0000 UTC m=+0.144786335 container start dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cannon, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:05:46 compute-0 podman[353439]: 2025-10-11 09:05:46.264226116 +0000 UTC m=+0.148620404 container attach dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:05:46 compute-0 jovial_cannon[353501]: 167 167
Oct 11 09:05:46 compute-0 systemd[1]: libpod-dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd.scope: Deactivated successfully.
Oct 11 09:05:46 compute-0 podman[353439]: 2025-10-11 09:05:46.267334195 +0000 UTC m=+0.151728503 container died dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cannon, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:05:46 compute-0 podman[353505]: 2025-10-11 09:05:46.294797639 +0000 UTC m=+0.053289612 container remove 684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 09:05:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9c57ab65fdaa015f4f1ce372af127816abc7ceaf7e71b73a7076ddc42c7637c-merged.mount: Deactivated successfully.
Oct 11 09:05:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.306 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c28ed99d-82eb-4d85-a160-db6b9f23e0bd]: (4, ('Sat Oct 11 09:05:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429 (684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15)\n684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15\nSat Oct 11 09:05:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429 (684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15)\n684862f00538c46291fa1579d08116448704da1e44bd0a36fd2799fc3cd69c15\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.310 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[39681526-5e33-4ec5-b4b0-de7e1938c712]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.311 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6273b59c-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:46 compute-0 kernel: tap6273b59c-60: left promiscuous mode
Oct 11 09:05:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1893: 321 pgs: 321 active+clean; 513 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Oct 11 09:05:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.319 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[17098208-6736-4095-b445-8902fce379de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:46 compute-0 podman[353439]: 2025-10-11 09:05:46.325833025 +0000 UTC m=+0.210227303 container remove dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cannon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.353 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7bb1036f-6a35-41a2-90b3-b53212d93745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.356 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[91fb4888-42a1-4ba7-ad93-c93f108722f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:46 compute-0 systemd[1]: libpod-conmon-dd7dc82b72e482770bc053cec497e5e3cc595ff263cb17f4297d1cb0260970fd.scope: Deactivated successfully.
Oct 11 09:05:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.376 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a60f09-bb42-4cba-9709-0b67c7616d5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538553, 'reachable_time': 21140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353534, 'error': None, 'target': 'ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d6273b59c\x2d68e6\x2d46d1\x2dac04\x2d7afa70039429.mount: Deactivated successfully.
Oct 11 09:05:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.379 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6273b59c-68e6-46d1-ac04-7afa70039429 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:05:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:46.379 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e0cec0f5-3a71-4adb-8ae9-2bfc404f2f5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:46 compute-0 podman[353541]: 2025-10-11 09:05:46.533624507 +0000 UTC m=+0.056663419 container create 8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wing, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:05:46 compute-0 systemd[1]: Started libpod-conmon-8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988.scope.
Oct 11 09:05:46 compute-0 podman[353541]: 2025-10-11 09:05:46.515592002 +0000 UTC m=+0.038630944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:05:46 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.615 2 INFO nova.virt.libvirt.driver [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Deleting instance files /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728_del
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.616 2 INFO nova.virt.libvirt.driver [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Deletion of /var/lib/nova/instances/81e243b6-6e9c-428e-a7b7-743f60f94728_del complete
Oct 11 09:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baee55a1e3748b9bd8cd712664663de6349b8a4622b3886a36b62e8234b6995d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baee55a1e3748b9bd8cd712664663de6349b8a4622b3886a36b62e8234b6995d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baee55a1e3748b9bd8cd712664663de6349b8a4622b3886a36b62e8234b6995d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baee55a1e3748b9bd8cd712664663de6349b8a4622b3886a36b62e8234b6995d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:05:46 compute-0 podman[353541]: 2025-10-11 09:05:46.637647426 +0000 UTC m=+0.160686378 container init 8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:05:46 compute-0 podman[353541]: 2025-10-11 09:05:46.65075097 +0000 UTC m=+0.173789892 container start 8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:05:46 compute-0 podman[353541]: 2025-10-11 09:05:46.65844496 +0000 UTC m=+0.181483882 container attach 8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wing, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.849 2 DEBUG nova.compute.manager [req-af78b1c6-3dbb-4ecd-b4ea-e18596b3966e req-7b53d93e-6296-458d-b9fd-eb00d8460ce1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received event network-changed-4d16ebf4-64d7-4476-8898-f62d856c729b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.849 2 DEBUG nova.compute.manager [req-af78b1c6-3dbb-4ecd-b4ea-e18596b3966e req-7b53d93e-6296-458d-b9fd-eb00d8460ce1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Refreshing instance network info cache due to event network-changed-4d16ebf4-64d7-4476-8898-f62d856c729b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.849 2 DEBUG oslo_concurrency.lockutils [req-af78b1c6-3dbb-4ecd-b4ea-e18596b3966e req-7b53d93e-6296-458d-b9fd-eb00d8460ce1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d813afc2-c844-45eb-b1ec-efbbf95498ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.852 2 INFO nova.compute.manager [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Took 1.02 seconds to destroy the instance on the hypervisor.
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.853 2 DEBUG oslo.service.loopingcall [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.853 2 DEBUG nova.compute.manager [-] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:05:46 compute-0 nova_compute[260935]: 2025-10-11 09:05:46.853 2 DEBUG nova.network.neutron [-] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:05:47 compute-0 great_wing[353558]: {
Oct 11 09:05:47 compute-0 great_wing[353558]:     "0": [
Oct 11 09:05:47 compute-0 great_wing[353558]:         {
Oct 11 09:05:47 compute-0 great_wing[353558]:             "devices": [
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "/dev/loop3"
Oct 11 09:05:47 compute-0 great_wing[353558]:             ],
Oct 11 09:05:47 compute-0 great_wing[353558]:             "lv_name": "ceph_lv0",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "lv_size": "21470642176",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "name": "ceph_lv0",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "tags": {
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.cluster_name": "ceph",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.crush_device_class": "",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.encrypted": "0",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.osd_id": "0",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.type": "block",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.vdo": "0"
Oct 11 09:05:47 compute-0 great_wing[353558]:             },
Oct 11 09:05:47 compute-0 great_wing[353558]:             "type": "block",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "vg_name": "ceph_vg0"
Oct 11 09:05:47 compute-0 great_wing[353558]:         }
Oct 11 09:05:47 compute-0 great_wing[353558]:     ],
Oct 11 09:05:47 compute-0 great_wing[353558]:     "1": [
Oct 11 09:05:47 compute-0 great_wing[353558]:         {
Oct 11 09:05:47 compute-0 great_wing[353558]:             "devices": [
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "/dev/loop4"
Oct 11 09:05:47 compute-0 great_wing[353558]:             ],
Oct 11 09:05:47 compute-0 great_wing[353558]:             "lv_name": "ceph_lv1",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "lv_size": "21470642176",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "name": "ceph_lv1",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "tags": {
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.cluster_name": "ceph",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.crush_device_class": "",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.encrypted": "0",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.osd_id": "1",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.type": "block",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.vdo": "0"
Oct 11 09:05:47 compute-0 great_wing[353558]:             },
Oct 11 09:05:47 compute-0 great_wing[353558]:             "type": "block",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "vg_name": "ceph_vg1"
Oct 11 09:05:47 compute-0 great_wing[353558]:         }
Oct 11 09:05:47 compute-0 great_wing[353558]:     ],
Oct 11 09:05:47 compute-0 great_wing[353558]:     "2": [
Oct 11 09:05:47 compute-0 great_wing[353558]:         {
Oct 11 09:05:47 compute-0 great_wing[353558]:             "devices": [
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "/dev/loop5"
Oct 11 09:05:47 compute-0 great_wing[353558]:             ],
Oct 11 09:05:47 compute-0 great_wing[353558]:             "lv_name": "ceph_lv2",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "lv_size": "21470642176",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "name": "ceph_lv2",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "tags": {
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.cluster_name": "ceph",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.crush_device_class": "",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.encrypted": "0",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.osd_id": "2",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.type": "block",
Oct 11 09:05:47 compute-0 great_wing[353558]:                 "ceph.vdo": "0"
Oct 11 09:05:47 compute-0 great_wing[353558]:             },
Oct 11 09:05:47 compute-0 great_wing[353558]:             "type": "block",
Oct 11 09:05:47 compute-0 great_wing[353558]:             "vg_name": "ceph_vg2"
Oct 11 09:05:47 compute-0 great_wing[353558]:         }
Oct 11 09:05:47 compute-0 great_wing[353558]:     ]
Oct 11 09:05:47 compute-0 great_wing[353558]: }
Oct 11 09:05:47 compute-0 systemd[1]: libpod-8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988.scope: Deactivated successfully.
Oct 11 09:05:47 compute-0 podman[353541]: 2025-10-11 09:05:47.484528752 +0000 UTC m=+1.007567694 container died 8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 09:05:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-baee55a1e3748b9bd8cd712664663de6349b8a4622b3886a36b62e8234b6995d-merged.mount: Deactivated successfully.
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.564 2 DEBUG nova.network.neutron [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Updating instance_info_cache with network_info: [{"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:05:47 compute-0 ceph-mon[74313]: pgmap v1893: 321 pgs: 321 active+clean; 513 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Oct 11 09:05:47 compute-0 podman[353541]: 2025-10-11 09:05:47.57728395 +0000 UTC m=+1.100322902 container remove 8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:05:47 compute-0 systemd[1]: libpod-conmon-8e5208a5397319c0f84e76de8073b0ab50ac7bcd07ebece966238a6ebbb96988.scope: Deactivated successfully.
Oct 11 09:05:47 compute-0 sudo[353345]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.705 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Releasing lock "refresh_cache-d813afc2-c844-45eb-b1ec-efbbf95498ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.706 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Instance network_info: |[{"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.707 2 DEBUG oslo_concurrency.lockutils [req-af78b1c6-3dbb-4ecd-b4ea-e18596b3966e req-7b53d93e-6296-458d-b9fd-eb00d8460ce1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d813afc2-c844-45eb-b1ec-efbbf95498ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.707 2 DEBUG nova.network.neutron [req-af78b1c6-3dbb-4ecd-b4ea-e18596b3966e req-7b53d93e-6296-458d-b9fd-eb00d8460ce1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Refreshing network info cache for port 4d16ebf4-64d7-4476-8898-f62d856c729b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.711 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Start _get_guest_xml network_info=[{"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.719 2 WARNING nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:05:47 compute-0 sudo[353581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:05:47 compute-0 sudo[353581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.727 2 DEBUG nova.virt.libvirt.host [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.728 2 DEBUG nova.virt.libvirt.host [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:05:47 compute-0 sudo[353581]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.732 2 DEBUG nova.virt.libvirt.host [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.732 2 DEBUG nova.virt.libvirt.host [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.732 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.733 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.733 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.734 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.734 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.734 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.734 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.734 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.735 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.735 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.735 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.735 2 DEBUG nova.virt.hardware [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:05:47 compute-0 nova_compute[260935]: 2025-10-11 09:05:47.739 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:47 compute-0 sudo[353606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:05:47 compute-0 sudo[353606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:47 compute-0 sudo[353606]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:47 compute-0 sudo[353632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:05:47 compute-0 sudo[353632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:47 compute-0 sudo[353632]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:47 compute-0 sudo[353658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:05:47 compute-0 sudo[353658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:05:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357432966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.237 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.262 2 DEBUG nova.storage.rbd_utils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] rbd image d813afc2-c844-45eb-b1ec-efbbf95498ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.272 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 467 MiB data, 866 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Oct 11 09:05:48 compute-0 podman[353762]: 2025-10-11 09:05:48.346394646 +0000 UTC m=+0.062230408 container create 598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:05:48 compute-0 systemd[1]: Started libpod-conmon-598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a.scope.
Oct 11 09:05:48 compute-0 podman[353762]: 2025-10-11 09:05:48.317015897 +0000 UTC m=+0.032851679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:05:48 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.437 2 DEBUG nova.network.neutron [-] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:05:48 compute-0 podman[353762]: 2025-10-11 09:05:48.442632783 +0000 UTC m=+0.158468615 container init 598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ptolemy, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 09:05:48 compute-0 podman[353762]: 2025-10-11 09:05:48.454122331 +0000 UTC m=+0.169958163 container start 598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 09:05:48 compute-0 cranky_ptolemy[353779]: 167 167
Oct 11 09:05:48 compute-0 systemd[1]: libpod-598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a.scope: Deactivated successfully.
Oct 11 09:05:48 compute-0 podman[353762]: 2025-10-11 09:05:48.467925115 +0000 UTC m=+0.183760917 container attach 598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 09:05:48 compute-0 podman[353762]: 2025-10-11 09:05:48.468378938 +0000 UTC m=+0.184214740 container died 598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ptolemy, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 09:05:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2daa7a28f2dc543d1c8305c45e7d36d5ff7dedcddcbe923d13161148674a30f9-merged.mount: Deactivated successfully.
Oct 11 09:05:48 compute-0 podman[353762]: 2025-10-11 09:05:48.512950801 +0000 UTC m=+0.228786583 container remove 598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ptolemy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:05:48 compute-0 systemd[1]: libpod-conmon-598213dc38dc76b7d6fc56a063ee1a103cee9da81fc8a3e8f4037bfa7ab2412a.scope: Deactivated successfully.
Oct 11 09:05:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/357432966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:05:48 compute-0 podman[353821]: 2025-10-11 09:05:48.726991761 +0000 UTC m=+0.044769399 container create 4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_meninsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.739 2 INFO nova.compute.manager [-] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Took 1.89 seconds to deallocate network for instance.
Oct 11 09:05:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:05:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2419311604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:05:48 compute-0 systemd[1]: Started libpod-conmon-4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b.scope.
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.779 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.781 2 DEBUG nova.virt.libvirt.vif [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:05:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-393316733',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-393316733',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-393316733',id=94,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0eed0ccef3b34c4db44e88ebe1aef9f0',ramdisk_id='',reservation_id='r-ikgw00lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1815816531',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1815816531-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:05:38Z,user_data=None,user_id='ac3a19a7426e4e51aa4f94b016decc82',uuid=d813afc2-c844-45eb-b1ec-efbbf95498ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.781 2 DEBUG nova.network.os_vif_util [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Converting VIF {"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.782 2 DEBUG nova.network.os_vif_util [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:2f:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d16ebf4-64d7-4476-8898-f62d856c729b,network=Network(da65ea75-f60d-4001-894b-df4408baa99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d16ebf4-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.783 2 DEBUG nova.objects.instance [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid d813afc2-c844-45eb-b1ec-efbbf95498ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:05:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:05:48 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:05:48 compute-0 podman[353821]: 2025-10-11 09:05:48.709207983 +0000 UTC m=+0.026985631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34bc5ce5b16c8534d034dcd628bea28fa529c800c6322af5f691982cb6a39ffd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34bc5ce5b16c8534d034dcd628bea28fa529c800c6322af5f691982cb6a39ffd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34bc5ce5b16c8534d034dcd628bea28fa529c800c6322af5f691982cb6a39ffd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34bc5ce5b16c8534d034dcd628bea28fa529c800c6322af5f691982cb6a39ffd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:05:48 compute-0 podman[353821]: 2025-10-11 09:05:48.838315689 +0000 UTC m=+0.156093407 container init 4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 09:05:48 compute-0 podman[353821]: 2025-10-11 09:05:48.853366499 +0000 UTC m=+0.171144137 container start 4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_meninsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 09:05:48 compute-0 podman[353821]: 2025-10-11 09:05:48.857657591 +0000 UTC m=+0.175435309 container attach 4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_meninsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.870 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:05:48 compute-0 nova_compute[260935]:   <uuid>d813afc2-c844-45eb-b1ec-efbbf95498ef</uuid>
Oct 11 09:05:48 compute-0 nova_compute[260935]:   <name>instance-0000005e</name>
Oct 11 09:05:48 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:05:48 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:05:48 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-393316733</nova:name>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:05:47</nova:creationTime>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:05:48 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:05:48 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:05:48 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:05:48 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:05:48 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:05:48 compute-0 nova_compute[260935]:         <nova:user uuid="ac3a19a7426e4e51aa4f94b016decc82">tempest-ServersNegativeTestMultiTenantJSON-1815816531-project-member</nova:user>
Oct 11 09:05:48 compute-0 nova_compute[260935]:         <nova:project uuid="0eed0ccef3b34c4db44e88ebe1aef9f0">tempest-ServersNegativeTestMultiTenantJSON-1815816531</nova:project>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:05:48 compute-0 nova_compute[260935]:         <nova:port uuid="4d16ebf4-64d7-4476-8898-f62d856c729b">
Oct 11 09:05:48 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:05:48 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:05:48 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <system>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <entry name="serial">d813afc2-c844-45eb-b1ec-efbbf95498ef</entry>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <entry name="uuid">d813afc2-c844-45eb-b1ec-efbbf95498ef</entry>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     </system>
Oct 11 09:05:48 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:05:48 compute-0 nova_compute[260935]:   <os>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:   </os>
Oct 11 09:05:48 compute-0 nova_compute[260935]:   <features>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:   </features>
Oct 11 09:05:48 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:05:48 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:05:48 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/d813afc2-c844-45eb-b1ec-efbbf95498ef_disk">
Oct 11 09:05:48 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       </source>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:05:48 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/d813afc2-c844-45eb-b1ec-efbbf95498ef_disk.config">
Oct 11 09:05:48 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       </source>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:05:48 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:62:2f:b2"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <target dev="tap4d16ebf4-64"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef/console.log" append="off"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <video>
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     </video>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:05:48 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:05:48 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:05:48 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:05:48 compute-0 nova_compute[260935]: </domain>
Oct 11 09:05:48 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.871 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Preparing to wait for external event network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.871 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.872 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.872 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.873 2 DEBUG nova.virt.libvirt.vif [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:05:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-393316733',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-393316733',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-393316733',id=94,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0eed0ccef3b34c4db44e88ebe1aef9f0',ramdisk_id='',reservation_id='r-ikgw00lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1815816531',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1815816531-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:05:38Z,user_data=None,user_id='ac3a19a7426e4e51aa4f94b016decc82',uuid=d813afc2-c844-45eb-b1ec-efbbf95498ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.873 2 DEBUG nova.network.os_vif_util [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Converting VIF {"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.874 2 DEBUG nova.network.os_vif_util [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:2f:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d16ebf4-64d7-4476-8898-f62d856c729b,network=Network(da65ea75-f60d-4001-894b-df4408baa99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d16ebf4-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.874 2 DEBUG os_vif [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2f:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d16ebf4-64d7-4476-8898-f62d856c729b,network=Network(da65ea75-f60d-4001-894b-df4408baa99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d16ebf4-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.876 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.877 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.882 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d16ebf4-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.883 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4d16ebf4-64, col_values=(('external_ids', {'iface-id': '4d16ebf4-64d7-4476-8898-f62d856c729b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:2f:b2', 'vm-uuid': 'd813afc2-c844-45eb-b1ec-efbbf95498ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:48 compute-0 NetworkManager[44960]: <info>  [1760173548.8859] manager: (tap4d16ebf4-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.896 2 INFO os_vif [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2f:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d16ebf4-64d7-4476-8898-f62d856c729b,network=Network(da65ea75-f60d-4001-894b-df4408baa99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d16ebf4-64')
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.906 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.906 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:48 compute-0 nova_compute[260935]: 2025-10-11 09:05:48.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.015 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.015 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.016 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] No VIF found with MAC fa:16:3e:62:2f:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.016 2 INFO nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Using config drive
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.040 2 DEBUG nova.storage.rbd_utils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] rbd image d813afc2-c844-45eb-b1ec-efbbf95498ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.059 2 DEBUG nova.compute.manager [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received event network-vif-unplugged-406fc421-7a8c-4974-8410-c31cab730ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.060 2 DEBUG oslo_concurrency.lockutils [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.060 2 DEBUG oslo_concurrency.lockutils [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.061 2 DEBUG oslo_concurrency.lockutils [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.062 2 DEBUG nova.compute.manager [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] No waiting events found dispatching network-vif-unplugged-406fc421-7a8c-4974-8410-c31cab730ea9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.063 2 WARNING nova.compute.manager [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received unexpected event network-vif-unplugged-406fc421-7a8c-4974-8410-c31cab730ea9 for instance with vm_state deleted and task_state None.
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.063 2 DEBUG nova.compute.manager [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received event network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.063 2 DEBUG oslo_concurrency.lockutils [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.064 2 DEBUG oslo_concurrency.lockutils [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.064 2 DEBUG oslo_concurrency.lockutils [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.064 2 DEBUG nova.compute.manager [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] No waiting events found dispatching network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.064 2 WARNING nova.compute.manager [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received unexpected event network-vif-plugged-406fc421-7a8c-4974-8410-c31cab730ea9 for instance with vm_state deleted and task_state None.
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.064 2 DEBUG nova.compute.manager [req-961d1691-4ea0-4af7-8d80-bb563e36055b req-9d9c081b-6cdd-4cd6-8d11-c5721a8343d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Received event network-vif-deleted-406fc421-7a8c-4974-8410-c31cab730ea9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.138 2 DEBUG oslo_concurrency.processutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:05:49 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2447797774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:05:49 compute-0 ceph-mon[74313]: pgmap v1894: 321 pgs: 321 active+clean; 467 MiB data, 866 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Oct 11 09:05:49 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2419311604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:05:49 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2447797774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.615 2 DEBUG oslo_concurrency.processutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.622 2 DEBUG nova.compute.provider_tree [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.686 2 DEBUG nova.scheduler.client.report [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.710 2 INFO nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Creating config drive at /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef/disk.config
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.716 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46qvn6gw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.768 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.850 2 INFO nova.scheduler.client.report [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Deleted allocations for instance 81e243b6-6e9c-428e-a7b7-743f60f94728
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.870 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46qvn6gw" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:49 compute-0 loving_meninsky[353839]: {
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:         "osd_id": 2,
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:         "type": "bluestore"
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:     },
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:         "osd_id": 0,
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:         "type": "bluestore"
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:     },
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:         "osd_id": 1,
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:         "type": "bluestore"
Oct 11 09:05:49 compute-0 loving_meninsky[353839]:     }
Oct 11 09:05:49 compute-0 loving_meninsky[353839]: }
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.909 2 DEBUG nova.storage.rbd_utils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] rbd image d813afc2-c844-45eb-b1ec-efbbf95498ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:49 compute-0 nova_compute[260935]: 2025-10-11 09:05:49.914 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef/disk.config d813afc2-c844-45eb-b1ec-efbbf95498ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:49 compute-0 systemd[1]: libpod-4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b.scope: Deactivated successfully.
Oct 11 09:05:49 compute-0 systemd[1]: libpod-4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b.scope: Consumed 1.067s CPU time.
Oct 11 09:05:49 compute-0 conmon[353839]: conmon 4b65afe3160030538973 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b.scope/container/memory.events
Oct 11 09:05:49 compute-0 podman[353821]: 2025-10-11 09:05:49.938494735 +0000 UTC m=+1.256272393 container died 4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_meninsky, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 09:05:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-34bc5ce5b16c8534d034dcd628bea28fa529c800c6322af5f691982cb6a39ffd-merged.mount: Deactivated successfully.
Oct 11 09:05:50 compute-0 podman[353821]: 2025-10-11 09:05:50.01469003 +0000 UTC m=+1.332467668 container remove 4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_meninsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 09:05:50 compute-0 systemd[1]: libpod-conmon-4b65afe31600305389737a4aba147272010165a63f46799fabc19c2e7177721b.scope: Deactivated successfully.
Oct 11 09:05:50 compute-0 sudo[353658]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:05:50 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:05:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:05:50 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:05:50 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 5a7f2fab-7892-4c01-b33b-f02aa5df2a09 does not exist
Oct 11 09:05:50 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 14f6292b-9012-445c-84ee-22f08709343a does not exist
Oct 11 09:05:50 compute-0 podman[353937]: 2025-10-11 09:05:50.087675004 +0000 UTC m=+0.102487557 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:05:50 compute-0 nova_compute[260935]: 2025-10-11 09:05:50.087 2 DEBUG nova.network.neutron [req-af78b1c6-3dbb-4ecd-b4ea-e18596b3966e req-7b53d93e-6296-458d-b9fd-eb00d8460ce1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Updated VIF entry in instance network info cache for port 4d16ebf4-64d7-4476-8898-f62d856c729b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:05:50 compute-0 nova_compute[260935]: 2025-10-11 09:05:50.088 2 DEBUG nova.network.neutron [req-af78b1c6-3dbb-4ecd-b4ea-e18596b3966e req-7b53d93e-6296-458d-b9fd-eb00d8460ce1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Updating instance_info_cache with network_info: [{"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:05:50 compute-0 nova_compute[260935]: 2025-10-11 09:05:50.122 2 DEBUG oslo_concurrency.processutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef/disk.config d813afc2-c844-45eb-b1ec-efbbf95498ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:50 compute-0 nova_compute[260935]: 2025-10-11 09:05:50.122 2 INFO nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Deleting local config drive /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef/disk.config because it was imported into RBD.
Oct 11 09:05:50 compute-0 sudo[353982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:05:50 compute-0 sudo[353982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:50 compute-0 sudo[353982]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:50 compute-0 nova_compute[260935]: 2025-10-11 09:05:50.154 2 DEBUG oslo_concurrency.lockutils [req-af78b1c6-3dbb-4ecd-b4ea-e18596b3966e req-7b53d93e-6296-458d-b9fd-eb00d8460ce1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d813afc2-c844-45eb-b1ec-efbbf95498ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:05:50 compute-0 NetworkManager[44960]: <info>  [1760173550.1917] manager: (tap4d16ebf4-64): new Tun device (/org/freedesktop/NetworkManager/Devices/356)
Oct 11 09:05:50 compute-0 kernel: tap4d16ebf4-64: entered promiscuous mode
Oct 11 09:05:50 compute-0 ovn_controller[152945]: 2025-10-11T09:05:50Z|00823|binding|INFO|Claiming lport 4d16ebf4-64d7-4476-8898-f62d856c729b for this chassis.
Oct 11 09:05:50 compute-0 ovn_controller[152945]: 2025-10-11T09:05:50Z|00824|binding|INFO|4d16ebf4-64d7-4476-8898-f62d856c729b: Claiming fa:16:3e:62:2f:b2 10.100.0.7
Oct 11 09:05:50 compute-0 nova_compute[260935]: 2025-10-11 09:05:50.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:50 compute-0 nova_compute[260935]: 2025-10-11 09:05:50.206 2 DEBUG oslo_concurrency.lockutils [None req-aff4ef40-b159-4e3d-afbe-eb0ed60088f4 727ffe9af08040179ad1981f985d61cc 72f2c977b4a147e3b45be9903a0afdf1 - - default default] Lock "81e243b6-6e9c-428e-a7b7-743f60f94728" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.214 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:2f:b2 10.100.0.7'], port_security=['fa:16:3e:62:2f:b2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd813afc2-c844-45eb-b1ec-efbbf95498ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da65ea75-f60d-4001-894b-df4408baa99c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0eed0ccef3b34c4db44e88ebe1aef9f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27e50c86-89d7-49fd-89cc-35770f4ed881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04f380b3-4aca-47a6-b068-b7f58cfe8d61, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=4d16ebf4-64d7-4476-8898-f62d856c729b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.215 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 4d16ebf4-64d7-4476-8898-f62d856c729b in datapath da65ea75-f60d-4001-894b-df4408baa99c bound to our chassis
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.218 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network da65ea75-f60d-4001-894b-df4408baa99c
Oct 11 09:05:50 compute-0 sudo[354015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:05:50 compute-0 sudo[354015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:05:50 compute-0 sudo[354015]: pam_unix(sudo:session): session closed for user root
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.236 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a58ec2cb-f719-4aa9-8bef-d7ca264a5d6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.237 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapda65ea75-f1 in ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.239 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapda65ea75-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.239 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[589ac9a0-848a-4572-9746-ecf16ebb7128]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.240 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d44659-6718-4cab-a07c-899416116da6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:50 compute-0 systemd-udevd[354050]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:05:50 compute-0 systemd-machined[215705]: New machine qemu-107-instance-0000005e.
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.255 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[74fbf0d1-430c-4825-a83e-100acef2e4bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:50 compute-0 NetworkManager[44960]: <info>  [1760173550.2573] device (tap4d16ebf4-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:05:50 compute-0 NetworkManager[44960]: <info>  [1760173550.2585] device (tap4d16ebf4-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:05:50 compute-0 systemd[1]: Started Virtual Machine qemu-107-instance-0000005e.
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.285 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[37f24eb9-cbda-4363-9f36-25279f374f61]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:50 compute-0 ovn_controller[152945]: 2025-10-11T09:05:50Z|00825|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:05:50 compute-0 ovn_controller[152945]: 2025-10-11T09:05:50Z|00826|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:05:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 467 MiB data, 866 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 186 op/s
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.323 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[33b37dc6-28bf-45f2-9ad2-3885c9c242fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:50 compute-0 nova_compute[260935]: 2025-10-11 09:05:50.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:50 compute-0 systemd-udevd[354053]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:05:50 compute-0 NetworkManager[44960]: <info>  [1760173550.3411] manager: (tapda65ea75-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/357)
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.339 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b70f32-fc63-454f-87ce-164a894b2900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:50 compute-0 ovn_controller[152945]: 2025-10-11T09:05:50Z|00827|binding|INFO|Setting lport 4d16ebf4-64d7-4476-8898-f62d856c729b ovn-installed in OVS
Oct 11 09:05:50 compute-0 ovn_controller[152945]: 2025-10-11T09:05:50Z|00828|binding|INFO|Setting lport 4d16ebf4-64d7-4476-8898-f62d856c729b up in Southbound
Oct 11 09:05:50 compute-0 nova_compute[260935]: 2025-10-11 09:05:50.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.391 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5b8a23a5-9025-4087-883d-d5aa10e6bf65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.397 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[de01ff64-d1da-448e-9026-2f7c2524d07c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:50 compute-0 NetworkManager[44960]: <info>  [1760173550.4223] device (tapda65ea75-f0): carrier: link connected
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.430 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[039d6108-2bb1-4fa2-8dd1-9e77fe07cb1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.450 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[929811e4-460f-433d-844b-edde672d684a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda65ea75-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:1a:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539512, 'reachable_time': 23095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354082, 'error': None, 'target': 'ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.475 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2886cd-5f28-4421-8f7d-cdd47c4a9508]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:1ab2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539512, 'tstamp': 539512}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354083, 'error': None, 'target': 'ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.497 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[18df5aaa-a0ba-4f8a-8104-b59d3f2e4129]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda65ea75-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:1a:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539512, 'reachable_time': 23095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354084, 'error': None, 'target': 'ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.546 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e3e27bcf-aba4-46ab-974a-e5ca775ce805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.625 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cd68cf19-8c4f-4ca7-9126-33d54ced22c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.627 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda65ea75-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.628 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.628 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda65ea75-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:50 compute-0 NetworkManager[44960]: <info>  [1760173550.6319] manager: (tapda65ea75-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/358)
Oct 11 09:05:50 compute-0 nova_compute[260935]: 2025-10-11 09:05:50.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:50 compute-0 kernel: tapda65ea75-f0: entered promiscuous mode
Oct 11 09:05:50 compute-0 nova_compute[260935]: 2025-10-11 09:05:50.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.636 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapda65ea75-f0, col_values=(('external_ids', {'iface-id': '938161fc-259e-4821-9e55-2e083e89770d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:05:50 compute-0 nova_compute[260935]: 2025-10-11 09:05:50.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:50 compute-0 ovn_controller[152945]: 2025-10-11T09:05:50Z|00829|binding|INFO|Releasing lport 938161fc-259e-4821-9e55-2e083e89770d from this chassis (sb_readonly=0)
Oct 11 09:05:50 compute-0 nova_compute[260935]: 2025-10-11 09:05:50.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.671 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/da65ea75-f60d-4001-894b-df4408baa99c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/da65ea75-f60d-4001-894b-df4408baa99c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.672 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa53667-f6d9-49cc-b551-a1d3d37db7dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.673 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-da65ea75-f60d-4001-894b-df4408baa99c
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/da65ea75-f60d-4001-894b-df4408baa99c.pid.haproxy
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID da65ea75-f60d-4001-894b-df4408baa99c
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:05:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:50.674 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c', 'env', 'PROCESS_TAG=haproxy-da65ea75-f60d-4001-894b-df4408baa99c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/da65ea75-f60d-4001-894b-df4408baa99c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:05:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:05:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:05:51 compute-0 ceph-mon[74313]: pgmap v1895: 321 pgs: 321 active+clean; 467 MiB data, 866 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 186 op/s
Oct 11 09:05:51 compute-0 podman[354158]: 2025-10-11 09:05:51.113437386 +0000 UTC m=+0.056080462 container create fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:05:51 compute-0 systemd[1]: Started libpod-conmon-fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1.scope.
Oct 11 09:05:51 compute-0 podman[354158]: 2025-10-11 09:05:51.085170229 +0000 UTC m=+0.027813325 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:05:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7944f1711c8b20f7abfb4df9886afc904490f332f531da6a73509c2289aa5d79/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:05:51 compute-0 podman[354158]: 2025-10-11 09:05:51.198872935 +0000 UTC m=+0.141516041 container init fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:05:51 compute-0 podman[354158]: 2025-10-11 09:05:51.204672351 +0000 UTC m=+0.147315427 container start fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 09:05:51 compute-0 neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c[354173]: [NOTICE]   (354177) : New worker (354179) forked
Oct 11 09:05:51 compute-0 neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c[354173]: [NOTICE]   (354177) : Loading success.
Oct 11 09:05:51 compute-0 nova_compute[260935]: 2025-10-11 09:05:51.233 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173551.232891, d813afc2-c844-45eb-b1ec-efbbf95498ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:05:51 compute-0 nova_compute[260935]: 2025-10-11 09:05:51.233 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] VM Started (Lifecycle Event)
Oct 11 09:05:51 compute-0 nova_compute[260935]: 2025-10-11 09:05:51.331 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:05:51 compute-0 nova_compute[260935]: 2025-10-11 09:05:51.336 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173551.233206, d813afc2-c844-45eb-b1ec-efbbf95498ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:05:51 compute-0 nova_compute[260935]: 2025-10-11 09:05:51.336 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] VM Paused (Lifecycle Event)
Oct 11 09:05:51 compute-0 nova_compute[260935]: 2025-10-11 09:05:51.380 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:05:51 compute-0 nova_compute[260935]: 2025-10-11 09:05:51.383 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:05:51 compute-0 nova_compute[260935]: 2025-10-11 09:05:51.430 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:05:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 490 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 243 op/s
Oct 11 09:05:53 compute-0 nova_compute[260935]: 2025-10-11 09:05:53.271 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 09:05:53 compute-0 ceph-mon[74313]: pgmap v1896: 321 pgs: 321 active+clean; 490 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 243 op/s
Oct 11 09:05:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:05:53 compute-0 podman[354189]: 2025-10-11 09:05:53.823367035 +0000 UTC m=+0.111097252 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:05:53 compute-0 podman[354190]: 2025-10-11 09:05:53.865065606 +0000 UTC m=+0.148325055 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 09:05:53 compute-0 nova_compute[260935]: 2025-10-11 09:05:53.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:54 compute-0 nova_compute[260935]: 2025-10-11 09:05:54.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1897: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 173 op/s
Oct 11 09:05:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:05:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:05:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:05:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:05:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:05:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:05:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:05:54
Oct 11 09:05:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:05:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:05:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'default.rgw.log', 'default.rgw.meta', 'backups', '.mgr', 'volumes', '.rgw.root']
Oct 11 09:05:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:05:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:05:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:05:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:05:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:05:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:05:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:05:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:05:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:05:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:05:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:05:55 compute-0 ceph-mon[74313]: pgmap v1897: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 173 op/s
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.292 2 INFO nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance shutdown successfully after 13 seconds.
Oct 11 09:05:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.443 2 DEBUG nova.compute.manager [req-eb6115b7-788c-4d0a-a832-519e45e0c2b6 req-03d06685-cde1-4a75-bab4-6ae0be3198b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received event network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.444 2 DEBUG oslo_concurrency.lockutils [req-eb6115b7-788c-4d0a-a832-519e45e0c2b6 req-03d06685-cde1-4a75-bab4-6ae0be3198b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.444 2 DEBUG oslo_concurrency.lockutils [req-eb6115b7-788c-4d0a-a832-519e45e0c2b6 req-03d06685-cde1-4a75-bab4-6ae0be3198b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.444 2 DEBUG oslo_concurrency.lockutils [req-eb6115b7-788c-4d0a-a832-519e45e0c2b6 req-03d06685-cde1-4a75-bab4-6ae0be3198b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.445 2 DEBUG nova.compute.manager [req-eb6115b7-788c-4d0a-a832-519e45e0c2b6 req-03d06685-cde1-4a75-bab4-6ae0be3198b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Processing event network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.445 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.450 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173556.4498963, d813afc2-c844-45eb-b1ec-efbbf95498ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.450 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] VM Resumed (Lifecycle Event)
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.451 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.454 2 INFO nova.virt.libvirt.driver [-] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Instance spawned successfully.
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.455 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.495 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.500 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.518 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.518 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.519 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.519 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.520 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.521 2 DEBUG nova.virt.libvirt.driver [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.571 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:05:56 compute-0 kernel: tapf8dc388c-9e (unregistering): left promiscuous mode
Oct 11 09:05:56 compute-0 NetworkManager[44960]: <info>  [1760173556.5891] device (tapf8dc388c-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:05:56 compute-0 ovn_controller[152945]: 2025-10-11T09:05:56Z|00830|binding|INFO|Releasing lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 from this chassis (sb_readonly=0)
Oct 11 09:05:56 compute-0 ovn_controller[152945]: 2025-10-11T09:05:56Z|00831|binding|INFO|Setting lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 down in Southbound
Oct 11 09:05:56 compute-0 ovn_controller[152945]: 2025-10-11T09:05:56Z|00832|binding|INFO|Removing iface tapf8dc388c-9e ovn-installed in OVS
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:56.627 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:bf:b7 10.100.0.12'], port_security=['fa:16:3e:b4:bf:b7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a83f40e4-c852-4b45-a3d2-1cd65e9aaa31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a5d578da5e746caa535eef295e1a67d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e8a3473d-4723-4d4d-be0a-f96b6f35a4ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae01bff6-0f5a-48e3-a011-50bc6bac19c7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:05:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:56.628 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 in datapath 7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75 unbound from our chassis
Oct 11 09:05:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:56.629 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 11 09:05:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:05:56.630 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c6a0ea-e5d2-4bac-bfc7-27df2a07ff4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.647 2 INFO nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Took 17.95 seconds to spawn the instance on the hypervisor.
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.647 2 DEBUG nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:05:56 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Oct 11 09:05:56 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d0000005c.scope: Consumed 14.693s CPU time.
Oct 11 09:05:56 compute-0 systemd-machined[215705]: Machine qemu-105-instance-0000005c terminated.
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.734 2 INFO nova.virt.libvirt.driver [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance destroyed successfully.
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.735 2 DEBUG nova.objects.instance [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'numa_topology' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.737 2 INFO nova.compute.manager [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Took 19.65 seconds to build instance.
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.787 2 INFO nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Attempting rescue
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.789 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.794 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.794 2 INFO nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Creating image(s)
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.817 2 DEBUG nova.storage.rbd_utils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.820 2 DEBUG nova.objects.instance [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'trusted_certs' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.822 2 DEBUG oslo_concurrency.lockutils [None req-67191ef9-234c-418d-8bd8-d769ba8e6f62 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.853 2 DEBUG nova.storage.rbd_utils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.875 2 DEBUG nova.storage.rbd_utils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.878 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.966 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.968 2 DEBUG oslo_concurrency.lockutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.969 2 DEBUG oslo_concurrency.lockutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.969 2 DEBUG oslo_concurrency.lockutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:56 compute-0 nova_compute[260935]: 2025-10-11 09:05:56.994 2 DEBUG nova.storage.rbd_utils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.000 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.046 2 DEBUG nova.compute.manager [req-d367a1e0-54f1-4229-8273-845f25e732f1 req-e2651ff5-951d-42ba-a0b1-00d809ae8a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-unplugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.046 2 DEBUG oslo_concurrency.lockutils [req-d367a1e0-54f1-4229-8273-845f25e732f1 req-e2651ff5-951d-42ba-a0b1-00d809ae8a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.047 2 DEBUG oslo_concurrency.lockutils [req-d367a1e0-54f1-4229-8273-845f25e732f1 req-e2651ff5-951d-42ba-a0b1-00d809ae8a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.047 2 DEBUG oslo_concurrency.lockutils [req-d367a1e0-54f1-4229-8273-845f25e732f1 req-e2651ff5-951d-42ba-a0b1-00d809ae8a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.048 2 DEBUG nova.compute.manager [req-d367a1e0-54f1-4229-8273-845f25e732f1 req-e2651ff5-951d-42ba-a0b1-00d809ae8a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] No waiting events found dispatching network-vif-unplugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.048 2 WARNING nova.compute.manager [req-d367a1e0-54f1-4229-8273-845f25e732f1 req-e2651ff5-951d-42ba-a0b1-00d809ae8a6e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received unexpected event network-vif-unplugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for instance with vm_state active and task_state rescuing.
Oct 11 09:05:57 compute-0 ceph-mon[74313]: pgmap v1898: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.792 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.792s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.793 2 DEBUG nova.objects.instance [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'migration_context' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.879 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.881 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Start _get_guest_xml network_info=[{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "vif_mac": "fa:16:3e:b4:bf:b7"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.881 2 DEBUG nova.objects.instance [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'resources' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.908 2 WARNING nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.914 2 DEBUG nova.virt.libvirt.host [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.915 2 DEBUG nova.virt.libvirt.host [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.919 2 DEBUG nova.virt.libvirt.host [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.919 2 DEBUG nova.virt.libvirt.host [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.920 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.920 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.921 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.921 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.921 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.922 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.922 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.923 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.923 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.923 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.924 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.924 2 DEBUG nova.virt.hardware [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.924 2 DEBUG nova.objects.instance [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'vcpu_model' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:05:57 compute-0 nova_compute[260935]: 2025-10-11 09:05:57.958 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 185 op/s
Oct 11 09:05:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:05:58 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/259846511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:05:58 compute-0 nova_compute[260935]: 2025-10-11 09:05:58.388 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:58 compute-0 nova_compute[260935]: 2025-10-11 09:05:58.390 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:58 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/259846511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:05:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:05:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:05:58 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/690254663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:05:58 compute-0 nova_compute[260935]: 2025-10-11 09:05:58.870 2 DEBUG nova.compute.manager [req-bce0cced-5d0d-4e3b-b1b2-dfe2f5fab09e req-53bb1e06-cc33-4738-a551-d3151298d75c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received event network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:05:58 compute-0 nova_compute[260935]: 2025-10-11 09:05:58.871 2 DEBUG oslo_concurrency.lockutils [req-bce0cced-5d0d-4e3b-b1b2-dfe2f5fab09e req-53bb1e06-cc33-4738-a551-d3151298d75c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:58 compute-0 nova_compute[260935]: 2025-10-11 09:05:58.871 2 DEBUG oslo_concurrency.lockutils [req-bce0cced-5d0d-4e3b-b1b2-dfe2f5fab09e req-53bb1e06-cc33-4738-a551-d3151298d75c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:58 compute-0 nova_compute[260935]: 2025-10-11 09:05:58.872 2 DEBUG oslo_concurrency.lockutils [req-bce0cced-5d0d-4e3b-b1b2-dfe2f5fab09e req-53bb1e06-cc33-4738-a551-d3151298d75c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:58 compute-0 nova_compute[260935]: 2025-10-11 09:05:58.872 2 DEBUG nova.compute.manager [req-bce0cced-5d0d-4e3b-b1b2-dfe2f5fab09e req-53bb1e06-cc33-4738-a551-d3151298d75c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] No waiting events found dispatching network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:05:58 compute-0 nova_compute[260935]: 2025-10-11 09:05:58.872 2 WARNING nova.compute.manager [req-bce0cced-5d0d-4e3b-b1b2-dfe2f5fab09e req-53bb1e06-cc33-4738-a551-d3151298d75c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received unexpected event network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b for instance with vm_state active and task_state None.
Oct 11 09:05:58 compute-0 nova_compute[260935]: 2025-10-11 09:05:58.876 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:58 compute-0 nova_compute[260935]: 2025-10-11 09:05:58.877 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:05:58 compute-0 nova_compute[260935]: 2025-10-11 09:05:58.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.101 2 DEBUG nova.compute.manager [req-dd5db1cc-3183-4374-8363-6c0ccec1d7e4 req-99bbce1c-4890-4cbd-b0fb-8bc39368718c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.102 2 DEBUG oslo_concurrency.lockutils [req-dd5db1cc-3183-4374-8363-6c0ccec1d7e4 req-99bbce1c-4890-4cbd-b0fb-8bc39368718c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.102 2 DEBUG oslo_concurrency.lockutils [req-dd5db1cc-3183-4374-8363-6c0ccec1d7e4 req-99bbce1c-4890-4cbd-b0fb-8bc39368718c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.103 2 DEBUG oslo_concurrency.lockutils [req-dd5db1cc-3183-4374-8363-6c0ccec1d7e4 req-99bbce1c-4890-4cbd-b0fb-8bc39368718c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.103 2 DEBUG nova.compute.manager [req-dd5db1cc-3183-4374-8363-6c0ccec1d7e4 req-99bbce1c-4890-4cbd-b0fb-8bc39368718c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] No waiting events found dispatching network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.103 2 WARNING nova.compute.manager [req-dd5db1cc-3183-4374-8363-6c0ccec1d7e4 req-99bbce1c-4890-4cbd-b0fb-8bc39368718c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received unexpected event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for instance with vm_state active and task_state rescuing.
Oct 11 09:05:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:05:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2676679246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.322 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.323 2 DEBUG nova.virt.libvirt.vif [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-942927350',display_name='tempest-ServerRescueTestJSONUnderV235-server-942927350',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-942927350',id=92,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:05:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a5d578da5e746caa535eef295e1a67d',ramdisk_id='',reservation_id='r-i0emrtot',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2035879439',owner_user_name='tempest-ServerRescueTestJSONUnderV235-2035879439-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:05:37Z,user_data=None,user_id='b7730a035fdf47498398e20e5aaf9ba4',uuid=a83f40e4-c852-4b45-a3d2-1cd65e9aaa31,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "vif_mac": "fa:16:3e:b4:bf:b7"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.323 2 DEBUG nova.network.os_vif_util [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Converting VIF {"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "vif_mac": "fa:16:3e:b4:bf:b7"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.324 2 DEBUG nova.network.os_vif_util [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b4:bf:b7,bridge_name='br-int',has_traffic_filtering=True,id=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31,network=Network(7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8dc388c-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.325 2 DEBUG nova.objects.instance [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'pci_devices' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.372 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:05:59 compute-0 nova_compute[260935]:   <uuid>a83f40e4-c852-4b45-a3d2-1cd65e9aaa31</uuid>
Oct 11 09:05:59 compute-0 nova_compute[260935]:   <name>instance-0000005c</name>
Oct 11 09:05:59 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:05:59 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:05:59 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-942927350</nova:name>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:05:57</nova:creationTime>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:05:59 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:05:59 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:05:59 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:05:59 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:05:59 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:05:59 compute-0 nova_compute[260935]:         <nova:user uuid="b7730a035fdf47498398e20e5aaf9ba4">tempest-ServerRescueTestJSONUnderV235-2035879439-project-member</nova:user>
Oct 11 09:05:59 compute-0 nova_compute[260935]:         <nova:project uuid="0a5d578da5e746caa535eef295e1a67d">tempest-ServerRescueTestJSONUnderV235-2035879439</nova:project>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:05:59 compute-0 nova_compute[260935]:         <nova:port uuid="f8dc388c-9e5a-43c2-8dd9-8fc28768ec31">
Oct 11 09:05:59 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:05:59 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:05:59 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <system>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <entry name="serial">a83f40e4-c852-4b45-a3d2-1cd65e9aaa31</entry>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <entry name="uuid">a83f40e4-c852-4b45-a3d2-1cd65e9aaa31</entry>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     </system>
Oct 11 09:05:59 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:05:59 compute-0 nova_compute[260935]:   <os>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:   </os>
Oct 11 09:05:59 compute-0 nova_compute[260935]:   <features>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:   </features>
Oct 11 09:05:59 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:05:59 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:05:59 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.rescue">
Oct 11 09:05:59 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       </source>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:05:59 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk">
Oct 11 09:05:59 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       </source>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:05:59 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <target dev="vdb" bus="virtio"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config.rescue">
Oct 11 09:05:59 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       </source>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:05:59 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:b4:bf:b7"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <target dev="tapf8dc388c-9e"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/console.log" append="off"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <video>
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     </video>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:05:59 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:05:59 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:05:59 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:05:59 compute-0 nova_compute[260935]: </domain>
Oct 11 09:05:59 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.382 2 INFO nova.virt.libvirt.driver [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance destroyed successfully.
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.493 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.494 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.494 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.495 2 DEBUG nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] No VIF found with MAC fa:16:3e:b4:bf:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.495 2 INFO nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Using config drive
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.530 2 DEBUG nova.storage.rbd_utils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.565 2 DEBUG nova.objects.instance [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'ec2_ids' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:05:59 compute-0 nova_compute[260935]: 2025-10-11 09:05:59.622 2 DEBUG nova.objects.instance [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'keypairs' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:05:59 compute-0 ceph-mon[74313]: pgmap v1899: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 185 op/s
Oct 11 09:05:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/690254663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:05:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2676679246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:06:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1900: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 812 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.059 2 INFO nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Creating config drive at /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config.rescue
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.064 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppi_gl5x7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.108 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173546.0733197, 81e243b6-6e9c-428e-a7b7-743f60f94728 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.109 2 INFO nova.compute.manager [-] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] VM Stopped (Lifecycle Event)
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.164 2 DEBUG nova.compute.manager [None req-b7a6eaac-a604-4839-a6c9-72ed785bf8b8 - - - - - -] [instance: 81e243b6-6e9c-428e-a7b7-743f60f94728] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.222 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppi_gl5x7" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.244 2 DEBUG nova.storage.rbd_utils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] rbd image a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.247 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config.rescue a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.416 2 DEBUG oslo_concurrency.processutils [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config.rescue a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.417 2 INFO nova.virt.libvirt.driver [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Deleting local config drive /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31/disk.config.rescue because it was imported into RBD.
Oct 11 09:06:01 compute-0 kernel: tapf8dc388c-9e: entered promiscuous mode
Oct 11 09:06:01 compute-0 ovn_controller[152945]: 2025-10-11T09:06:01Z|00833|binding|INFO|Claiming lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for this chassis.
Oct 11 09:06:01 compute-0 ovn_controller[152945]: 2025-10-11T09:06:01Z|00834|binding|INFO|f8dc388c-9e5a-43c2-8dd9-8fc28768ec31: Claiming fa:16:3e:b4:bf:b7 10.100.0.12
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:01 compute-0 NetworkManager[44960]: <info>  [1760173561.4871] manager: (tapf8dc388c-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/359)
Oct 11 09:06:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.515 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:bf:b7 10.100.0.12'], port_security=['fa:16:3e:b4:bf:b7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a83f40e4-c852-4b45-a3d2-1cd65e9aaa31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a5d578da5e746caa535eef295e1a67d', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e8a3473d-4723-4d4d-be0a-f96b6f35a4ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae01bff6-0f5a-48e3-a011-50bc6bac19c7, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:06:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.517 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 in datapath 7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75 bound to our chassis
Oct 11 09:06:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.519 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 11 09:06:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.521 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e0919731-933f-4ff4-a83c-0d30a1c76288]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:01 compute-0 ovn_controller[152945]: 2025-10-11T09:06:01Z|00835|binding|INFO|Setting lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 ovn-installed in OVS
Oct 11 09:06:01 compute-0 ovn_controller[152945]: 2025-10-11T09:06:01Z|00836|binding|INFO|Setting lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 up in Southbound
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:01 compute-0 systemd-machined[215705]: New machine qemu-108-instance-0000005c.
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:01 compute-0 systemd[1]: Started Virtual Machine qemu-108-instance-0000005c.
Oct 11 09:06:01 compute-0 systemd-udevd[354487]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:06:01 compute-0 NetworkManager[44960]: <info>  [1760173561.5670] device (tapf8dc388c-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:06:01 compute-0 NetworkManager[44960]: <info>  [1760173561.5683] device (tapf8dc388c-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.695 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "d813afc2-c844-45eb-b1ec-efbbf95498ef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.696 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.697 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.698 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.698 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.701 2 INFO nova.compute.manager [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Terminating instance
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.703 2 DEBUG nova.compute.manager [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:06:01 compute-0 ceph-mon[74313]: pgmap v1900: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 812 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct 11 09:06:01 compute-0 kernel: tap4d16ebf4-64 (unregistering): left promiscuous mode
Oct 11 09:06:01 compute-0 NetworkManager[44960]: <info>  [1760173561.7654] device (tap4d16ebf4-64): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:06:01 compute-0 ovn_controller[152945]: 2025-10-11T09:06:01Z|00837|binding|INFO|Releasing lport 4d16ebf4-64d7-4476-8898-f62d856c729b from this chassis (sb_readonly=0)
Oct 11 09:06:01 compute-0 ovn_controller[152945]: 2025-10-11T09:06:01Z|00838|binding|INFO|Setting lport 4d16ebf4-64d7-4476-8898-f62d856c729b down in Southbound
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:01 compute-0 ovn_controller[152945]: 2025-10-11T09:06:01Z|00839|binding|INFO|Removing iface tap4d16ebf4-64 ovn-installed in OVS
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:01 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Oct 11 09:06:01 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d0000005e.scope: Consumed 6.039s CPU time.
Oct 11 09:06:01 compute-0 systemd-machined[215705]: Machine qemu-107-instance-0000005e terminated.
Oct 11 09:06:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.830 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:2f:b2 10.100.0.7'], port_security=['fa:16:3e:62:2f:b2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd813afc2-c844-45eb-b1ec-efbbf95498ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da65ea75-f60d-4001-894b-df4408baa99c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0eed0ccef3b34c4db44e88ebe1aef9f0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '27e50c86-89d7-49fd-89cc-35770f4ed881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04f380b3-4aca-47a6-b068-b7f58cfe8d61, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=4d16ebf4-64d7-4476-8898-f62d856c729b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:06:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.832 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 4d16ebf4-64d7-4476-8898-f62d856c729b in datapath da65ea75-f60d-4001-894b-df4408baa99c unbound from our chassis
Oct 11 09:06:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.834 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network da65ea75-f60d-4001-894b-df4408baa99c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:06:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.836 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc56190-acfd-4a5e-922d-e72961ef5601]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:01.837 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c namespace which is not needed anymore
Oct 11 09:06:01 compute-0 sshd-session[354432]: Invalid user transfer from 152.32.213.170 port 33466
Oct 11 09:06:01 compute-0 sshd-session[354432]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:06:01 compute-0 sshd-session[354432]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.953 2 INFO nova.virt.libvirt.driver [-] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Instance destroyed successfully.
Oct 11 09:06:01 compute-0 nova_compute[260935]: 2025-10-11 09:06:01.954 2 DEBUG nova.objects.instance [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lazy-loading 'resources' on Instance uuid d813afc2-c844-45eb-b1ec-efbbf95498ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.002 2 DEBUG nova.virt.libvirt.vif [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:05:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-393316733',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-393316733',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-393316733',id=94,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:05:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0eed0ccef3b34c4db44e88ebe1aef9f0',ramdisk_id='',reservation_id='r-ikgw00lh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1815816531',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1815816531-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:05:56Z,user_data=None,user_id='ac3a19a7426e4e51aa4f94b016decc82',uuid=d813afc2-c844-45eb-b1ec-efbbf95498ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.003 2 DEBUG nova.network.os_vif_util [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Converting VIF {"id": "4d16ebf4-64d7-4476-8898-f62d856c729b", "address": "fa:16:3e:62:2f:b2", "network": {"id": "da65ea75-f60d-4001-894b-df4408baa99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-908406648-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0eed0ccef3b34c4db44e88ebe1aef9f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d16ebf4-64", "ovs_interfaceid": "4d16ebf4-64d7-4476-8898-f62d856c729b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.003 2 DEBUG nova.network.os_vif_util [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:2f:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d16ebf4-64d7-4476-8898-f62d856c729b,network=Network(da65ea75-f60d-4001-894b-df4408baa99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d16ebf4-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.004 2 DEBUG os_vif [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2f:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d16ebf4-64d7-4476-8898-f62d856c729b,network=Network(da65ea75-f60d-4001-894b-df4408baa99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d16ebf4-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.006 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d16ebf4-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.013 2 INFO os_vif [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2f:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d16ebf4-64d7-4476-8898-f62d856c729b,network=Network(da65ea75-f60d-4001-894b-df4408baa99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d16ebf4-64')
Oct 11 09:06:02 compute-0 neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c[354173]: [NOTICE]   (354177) : haproxy version is 2.8.14-c23fe91
Oct 11 09:06:02 compute-0 neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c[354173]: [NOTICE]   (354177) : path to executable is /usr/sbin/haproxy
Oct 11 09:06:02 compute-0 neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c[354173]: [ALERT]    (354177) : Current worker (354179) exited with code 143 (Terminated)
Oct 11 09:06:02 compute-0 neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c[354173]: [WARNING]  (354177) : All workers exited. Exiting... (0)
Oct 11 09:06:02 compute-0 systemd[1]: libpod-fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1.scope: Deactivated successfully.
Oct 11 09:06:02 compute-0 podman[354523]: 2025-10-11 09:06:02.088735667 +0000 UTC m=+0.130526267 container died fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:06:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1-userdata-shm.mount: Deactivated successfully.
Oct 11 09:06:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7944f1711c8b20f7abfb4df9886afc904490f332f531da6a73509c2289aa5d79-merged.mount: Deactivated successfully.
Oct 11 09:06:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 533 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 160 op/s
Oct 11 09:06:02 compute-0 podman[354523]: 2025-10-11 09:06:02.53546687 +0000 UTC m=+0.577257430 container cleanup fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:06:02 compute-0 systemd[1]: libpod-conmon-fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1.scope: Deactivated successfully.
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.550 2 DEBUG nova.compute.manager [req-32cbd574-dd41-4d4e-b764-d338cd366cf2 req-e1bba347-a58e-43f5-ba09-b3bc7e7919b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received event network-vif-unplugged-4d16ebf4-64d7-4476-8898-f62d856c729b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.552 2 DEBUG oslo_concurrency.lockutils [req-32cbd574-dd41-4d4e-b764-d338cd366cf2 req-e1bba347-a58e-43f5-ba09-b3bc7e7919b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.553 2 DEBUG oslo_concurrency.lockutils [req-32cbd574-dd41-4d4e-b764-d338cd366cf2 req-e1bba347-a58e-43f5-ba09-b3bc7e7919b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.553 2 DEBUG oslo_concurrency.lockutils [req-32cbd574-dd41-4d4e-b764-d338cd366cf2 req-e1bba347-a58e-43f5-ba09-b3bc7e7919b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.554 2 DEBUG nova.compute.manager [req-32cbd574-dd41-4d4e-b764-d338cd366cf2 req-e1bba347-a58e-43f5-ba09-b3bc7e7919b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] No waiting events found dispatching network-vif-unplugged-4d16ebf4-64d7-4476-8898-f62d856c729b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.554 2 DEBUG nova.compute.manager [req-32cbd574-dd41-4d4e-b764-d338cd366cf2 req-e1bba347-a58e-43f5-ba09-b3bc7e7919b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received event network-vif-unplugged-4d16ebf4-64d7-4476-8898-f62d856c729b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.889 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.890 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173562.8885512, a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.890 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] VM Resumed (Lifecycle Event)
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.900 2 DEBUG nova.compute.manager [None req-caf85ff0-b2b9-4f98-9639-b907a658f8d4 b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.928 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.932 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:06:02 compute-0 ceph-mon[74313]: pgmap v1901: 321 pgs: 321 active+clean; 533 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 160 op/s
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.998 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173562.896696, a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:06:02 compute-0 nova_compute[260935]: 2025-10-11 09:06:02.998 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] VM Started (Lifecycle Event)
Oct 11 09:06:03 compute-0 nova_compute[260935]: 2025-10-11 09:06:03.054 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:06:03 compute-0 nova_compute[260935]: 2025-10-11 09:06:03.059 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:06:03 compute-0 podman[354635]: 2025-10-11 09:06:03.130433844 +0000 UTC m=+0.558414012 container remove fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:06:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.139 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc255948-0482-4b67-8d64-65c012cd3bdf]: (4, ('Sat Oct 11 09:06:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c (fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1)\nfe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1\nSat Oct 11 09:06:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c (fe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1)\nfe1073206b8884e40d5fa95be894fa4fa71d43f68f6ba8ef4b1c8249e73078f1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.141 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[137aa968-0294-4af6-a27f-2d1706da5d9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.144 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda65ea75-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:06:03 compute-0 nova_compute[260935]: 2025-10-11 09:06:03.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:03 compute-0 kernel: tapda65ea75-f0: left promiscuous mode
Oct 11 09:06:03 compute-0 nova_compute[260935]: 2025-10-11 09:06:03.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.185 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[338c2800-12ea-4cc6-b371-1a5c56c9cab6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.215 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f93fa9-e1a3-4f2e-992c-fd1d7ab3d4d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.218 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b4336244-da3a-4a4c-9195-e4ae768ccbe9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.244 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0f8f71b1-be11-457f-bd0d-b569babf21e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539501, 'reachable_time': 41708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354651, 'error': None, 'target': 'ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.248 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-da65ea75-f60d-4001-894b-df4408baa99c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:06:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:03.249 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b02f2b3e-a4b9-4701-bab7-58c702aceb9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:03 compute-0 systemd[1]: run-netns-ovnmeta\x2dda65ea75\x2df60d\x2d4001\x2d894b\x2ddf4408baa99c.mount: Deactivated successfully.
Oct 11 09:06:03 compute-0 sshd-session[354432]: Failed password for invalid user transfer from 152.32.213.170 port 33466 ssh2
Oct 11 09:06:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:06:04 compute-0 nova_compute[260935]: 2025-10-11 09:06:04.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 546 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 114 op/s
Oct 11 09:06:04 compute-0 nova_compute[260935]: 2025-10-11 09:06:04.708 2 DEBUG nova.compute.manager [req-99d192d0-d489-4fae-8f8a-9c4362410b4d req-762d23eb-778d-4845-a678-f19b664007a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:06:04 compute-0 nova_compute[260935]: 2025-10-11 09:06:04.709 2 DEBUG oslo_concurrency.lockutils [req-99d192d0-d489-4fae-8f8a-9c4362410b4d req-762d23eb-778d-4845-a678-f19b664007a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:04 compute-0 nova_compute[260935]: 2025-10-11 09:06:04.710 2 DEBUG oslo_concurrency.lockutils [req-99d192d0-d489-4fae-8f8a-9c4362410b4d req-762d23eb-778d-4845-a678-f19b664007a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:04 compute-0 nova_compute[260935]: 2025-10-11 09:06:04.710 2 DEBUG oslo_concurrency.lockutils [req-99d192d0-d489-4fae-8f8a-9c4362410b4d req-762d23eb-778d-4845-a678-f19b664007a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:04 compute-0 nova_compute[260935]: 2025-10-11 09:06:04.710 2 DEBUG nova.compute.manager [req-99d192d0-d489-4fae-8f8a-9c4362410b4d req-762d23eb-778d-4845-a678-f19b664007a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] No waiting events found dispatching network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:06:04 compute-0 nova_compute[260935]: 2025-10-11 09:06:04.711 2 WARNING nova.compute.manager [req-99d192d0-d489-4fae-8f8a-9c4362410b4d req-762d23eb-778d-4845-a678-f19b664007a3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received unexpected event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for instance with vm_state rescued and task_state None.
Oct 11 09:06:04 compute-0 nova_compute[260935]: 2025-10-11 09:06:04.839 2 DEBUG nova.compute.manager [req-8ad2ffc6-9463-4705-8a83-0c7e8ec312aa req-67b8f32e-ee30-4f5b-b17e-dc435f2d6317 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received event network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:06:04 compute-0 nova_compute[260935]: 2025-10-11 09:06:04.840 2 DEBUG oslo_concurrency.lockutils [req-8ad2ffc6-9463-4705-8a83-0c7e8ec312aa req-67b8f32e-ee30-4f5b-b17e-dc435f2d6317 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:04 compute-0 nova_compute[260935]: 2025-10-11 09:06:04.840 2 DEBUG oslo_concurrency.lockutils [req-8ad2ffc6-9463-4705-8a83-0c7e8ec312aa req-67b8f32e-ee30-4f5b-b17e-dc435f2d6317 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:04 compute-0 nova_compute[260935]: 2025-10-11 09:06:04.840 2 DEBUG oslo_concurrency.lockutils [req-8ad2ffc6-9463-4705-8a83-0c7e8ec312aa req-67b8f32e-ee30-4f5b-b17e-dc435f2d6317 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:04 compute-0 nova_compute[260935]: 2025-10-11 09:06:04.841 2 DEBUG nova.compute.manager [req-8ad2ffc6-9463-4705-8a83-0c7e8ec312aa req-67b8f32e-ee30-4f5b-b17e-dc435f2d6317 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] No waiting events found dispatching network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:06:04 compute-0 nova_compute[260935]: 2025-10-11 09:06:04.841 2 WARNING nova.compute.manager [req-8ad2ffc6-9463-4705-8a83-0c7e8ec312aa req-67b8f32e-ee30-4f5b-b17e-dc435f2d6317 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received unexpected event network-vif-plugged-4d16ebf4-64d7-4476-8898-f62d856c729b for instance with vm_state active and task_state deleting.
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004430873339519437 of space, bias 1.0, pg target 1.329262001855831 quantized to 32 (current 32)
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:06:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:06:05 compute-0 sshd-session[354432]: Received disconnect from 152.32.213.170 port 33466:11: Bye Bye [preauth]
Oct 11 09:06:05 compute-0 sshd-session[354432]: Disconnected from invalid user transfer 152.32.213.170 port 33466 [preauth]
Oct 11 09:06:05 compute-0 ceph-mon[74313]: pgmap v1902: 321 pgs: 321 active+clean; 546 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 114 op/s
Oct 11 09:06:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1903: 321 pgs: 321 active+clean; 546 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Oct 11 09:06:06 compute-0 nova_compute[260935]: 2025-10-11 09:06:06.885 2 DEBUG nova.compute.manager [req-e57b61e6-fa18-43ff-aa6c-ae2867f68cc4 req-92e3955c-daef-4495-b593-b277a5b22de8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:06:06 compute-0 nova_compute[260935]: 2025-10-11 09:06:06.885 2 DEBUG oslo_concurrency.lockutils [req-e57b61e6-fa18-43ff-aa6c-ae2867f68cc4 req-92e3955c-daef-4495-b593-b277a5b22de8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:06 compute-0 nova_compute[260935]: 2025-10-11 09:06:06.886 2 DEBUG oslo_concurrency.lockutils [req-e57b61e6-fa18-43ff-aa6c-ae2867f68cc4 req-92e3955c-daef-4495-b593-b277a5b22de8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:06 compute-0 nova_compute[260935]: 2025-10-11 09:06:06.886 2 DEBUG oslo_concurrency.lockutils [req-e57b61e6-fa18-43ff-aa6c-ae2867f68cc4 req-92e3955c-daef-4495-b593-b277a5b22de8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:06 compute-0 nova_compute[260935]: 2025-10-11 09:06:06.886 2 DEBUG nova.compute.manager [req-e57b61e6-fa18-43ff-aa6c-ae2867f68cc4 req-92e3955c-daef-4495-b593-b277a5b22de8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] No waiting events found dispatching network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:06:06 compute-0 nova_compute[260935]: 2025-10-11 09:06:06.886 2 WARNING nova.compute.manager [req-e57b61e6-fa18-43ff-aa6c-ae2867f68cc4 req-92e3955c-daef-4495-b593-b277a5b22de8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received unexpected event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for instance with vm_state rescued and task_state None.
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.009692) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173567009771, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1745, "num_deletes": 257, "total_data_size": 2594584, "memory_usage": 2639072, "flush_reason": "Manual Compaction"}
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Oct 11 09:06:07 compute-0 nova_compute[260935]: 2025-10-11 09:06:07.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:07 compute-0 ceph-mon[74313]: pgmap v1903: 321 pgs: 321 active+clean; 546 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173567262497, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2543747, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38307, "largest_seqno": 40051, "table_properties": {"data_size": 2535700, "index_size": 4861, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17320, "raw_average_key_size": 20, "raw_value_size": 2519308, "raw_average_value_size": 2995, "num_data_blocks": 215, "num_entries": 841, "num_filter_entries": 841, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173414, "oldest_key_time": 1760173414, "file_creation_time": 1760173567, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 252873 microseconds, and 8069 cpu microseconds.
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.262568) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2543747 bytes OK
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.262601) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.281036) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.281068) EVENT_LOG_v1 {"time_micros": 1760173567281057, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.281095) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2586960, prev total WAL file size 2586960, number of live WAL files 2.
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.282297) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2484KB)], [86(8469KB)]
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173567282391, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11216842, "oldest_snapshot_seqno": -1}
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6372 keys, 9580481 bytes, temperature: kUnknown
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173567415892, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9580481, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9536811, "index_size": 26664, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15941, "raw_key_size": 161978, "raw_average_key_size": 25, "raw_value_size": 9421353, "raw_average_value_size": 1478, "num_data_blocks": 1074, "num_entries": 6372, "num_filter_entries": 6372, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173567, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.416145) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9580481 bytes
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.420222) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 84.0 rd, 71.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 8.3 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 6898, records dropped: 526 output_compression: NoCompression
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.420241) EVENT_LOG_v1 {"time_micros": 1760173567420232, "job": 50, "event": "compaction_finished", "compaction_time_micros": 133573, "compaction_time_cpu_micros": 23795, "output_level": 6, "num_output_files": 1, "total_output_size": 9580481, "num_input_records": 6898, "num_output_records": 6372, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173567420845, "job": 50, "event": "table_file_deletion", "file_number": 88}
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173567422508, "job": 50, "event": "table_file_deletion", "file_number": 86}
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.282140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.422557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.422564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.422566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.422567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:06:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:06:07.422569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:06:07 compute-0 nova_compute[260935]: 2025-10-11 09:06:07.823 2 INFO nova.virt.libvirt.driver [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Deleting instance files /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef_del
Oct 11 09:06:07 compute-0 nova_compute[260935]: 2025-10-11 09:06:07.824 2 INFO nova.virt.libvirt.driver [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Deletion of /var/lib/nova/instances/d813afc2-c844-45eb-b1ec-efbbf95498ef_del complete
Oct 11 09:06:07 compute-0 nova_compute[260935]: 2025-10-11 09:06:07.979 2 INFO nova.compute.manager [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Took 6.28 seconds to destroy the instance on the hypervisor.
Oct 11 09:06:07 compute-0 nova_compute[260935]: 2025-10-11 09:06:07.980 2 DEBUG oslo.service.loopingcall [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:06:07 compute-0 nova_compute[260935]: 2025-10-11 09:06:07.980 2 DEBUG nova.compute.manager [-] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:06:07 compute-0 nova_compute[260935]: 2025-10-11 09:06:07.980 2 DEBUG nova.network.neutron [-] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:06:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Oct 11 09:06:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:06:09 compute-0 nova_compute[260935]: 2025-10-11 09:06:09.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:09 compute-0 nova_compute[260935]: 2025-10-11 09:06:09.067 2 DEBUG nova.compute.manager [req-7b0044d4-61ed-413f-a6ae-4045767783f5 req-a9fc5dc2-cd68-4fe5-877f-d4e1d2fe4622 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:06:09 compute-0 nova_compute[260935]: 2025-10-11 09:06:09.067 2 DEBUG nova.compute.manager [req-7b0044d4-61ed-413f-a6ae-4045767783f5 req-a9fc5dc2-cd68-4fe5-877f-d4e1d2fe4622 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing instance network info cache due to event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:06:09 compute-0 nova_compute[260935]: 2025-10-11 09:06:09.068 2 DEBUG oslo_concurrency.lockutils [req-7b0044d4-61ed-413f-a6ae-4045767783f5 req-a9fc5dc2-cd68-4fe5-877f-d4e1d2fe4622 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:06:09 compute-0 nova_compute[260935]: 2025-10-11 09:06:09.069 2 DEBUG oslo_concurrency.lockutils [req-7b0044d4-61ed-413f-a6ae-4045767783f5 req-a9fc5dc2-cd68-4fe5-877f-d4e1d2fe4622 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:06:09 compute-0 nova_compute[260935]: 2025-10-11 09:06:09.069 2 DEBUG nova.network.neutron [req-7b0044d4-61ed-413f-a6ae-4045767783f5 req-a9fc5dc2-cd68-4fe5-877f-d4e1d2fe4622 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:06:09 compute-0 ceph-mon[74313]: pgmap v1904: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Oct 11 09:06:09 compute-0 nova_compute[260935]: 2025-10-11 09:06:09.628 2 DEBUG nova.network.neutron [-] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:06:09 compute-0 nova_compute[260935]: 2025-10-11 09:06:09.727 2 INFO nova.compute.manager [-] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Took 1.75 seconds to deallocate network for instance.
Oct 11 09:06:09 compute-0 nova_compute[260935]: 2025-10-11 09:06:09.802 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:09 compute-0 nova_compute[260935]: 2025-10-11 09:06:09.803 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:10 compute-0 nova_compute[260935]: 2025-10-11 09:06:10.025 2 DEBUG oslo_concurrency.processutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:06:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Oct 11 09:06:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:06:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3825419757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:06:10 compute-0 nova_compute[260935]: 2025-10-11 09:06:10.522 2 DEBUG oslo_concurrency.processutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:06:10 compute-0 nova_compute[260935]: 2025-10-11 09:06:10.531 2 DEBUG nova.compute.provider_tree [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:06:10 compute-0 nova_compute[260935]: 2025-10-11 09:06:10.560 2 DEBUG nova.scheduler.client.report [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:06:10 compute-0 nova_compute[260935]: 2025-10-11 09:06:10.643 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:10 compute-0 nova_compute[260935]: 2025-10-11 09:06:10.724 2 INFO nova.scheduler.client.report [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Deleted allocations for instance d813afc2-c844-45eb-b1ec-efbbf95498ef
Oct 11 09:06:10 compute-0 nova_compute[260935]: 2025-10-11 09:06:10.939 2 DEBUG oslo_concurrency.lockutils [None req-7088ee2a-f426-4885-8621-49d560c830e1 ac3a19a7426e4e51aa4f94b016decc82 0eed0ccef3b34c4db44e88ebe1aef9f0 - - default default] Lock "d813afc2-c844-45eb-b1ec-efbbf95498ef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:11 compute-0 nova_compute[260935]: 2025-10-11 09:06:11.228 2 DEBUG nova.compute.manager [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Received event network-vif-deleted-4d16ebf4-64d7-4476-8898-f62d856c729b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:06:11 compute-0 nova_compute[260935]: 2025-10-11 09:06:11.228 2 DEBUG nova.compute.manager [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:06:11 compute-0 nova_compute[260935]: 2025-10-11 09:06:11.229 2 DEBUG nova.compute.manager [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing instance network info cache due to event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:06:11 compute-0 nova_compute[260935]: 2025-10-11 09:06:11.229 2 DEBUG oslo_concurrency.lockutils [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:06:11 compute-0 ceph-mon[74313]: pgmap v1905: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Oct 11 09:06:11 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3825419757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:06:11 compute-0 nova_compute[260935]: 2025-10-11 09:06:11.467 2 DEBUG nova.network.neutron [req-7b0044d4-61ed-413f-a6ae-4045767783f5 req-a9fc5dc2-cd68-4fe5-877f-d4e1d2fe4622 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updated VIF entry in instance network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:06:11 compute-0 nova_compute[260935]: 2025-10-11 09:06:11.468 2 DEBUG nova.network.neutron [req-7b0044d4-61ed-413f-a6ae-4045767783f5 req-a9fc5dc2-cd68-4fe5-877f-d4e1d2fe4622 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updating instance_info_cache with network_info: [{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:06:11 compute-0 nova_compute[260935]: 2025-10-11 09:06:11.519 2 DEBUG oslo_concurrency.lockutils [req-7b0044d4-61ed-413f-a6ae-4045767783f5 req-a9fc5dc2-cd68-4fe5-877f-d4e1d2fe4622 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:06:11 compute-0 nova_compute[260935]: 2025-10-11 09:06:11.520 2 DEBUG oslo_concurrency.lockutils [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:06:11 compute-0 nova_compute[260935]: 2025-10-11 09:06:11.521 2 DEBUG nova.network.neutron [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:06:12 compute-0 nova_compute[260935]: 2025-10-11 09:06:12.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Oct 11 09:06:13 compute-0 ceph-mon[74313]: pgmap v1906: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Oct 11 09:06:13 compute-0 podman[354675]: 2025-10-11 09:06:13.770961648 +0000 UTC m=+0.070284407 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 11 09:06:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:06:14 compute-0 nova_compute[260935]: 2025-10-11 09:06:14.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 734 KiB/s wr, 99 op/s
Oct 11 09:06:14 compute-0 nova_compute[260935]: 2025-10-11 09:06:14.340 2 DEBUG nova.network.neutron [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updated VIF entry in instance network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:06:14 compute-0 nova_compute[260935]: 2025-10-11 09:06:14.340 2 DEBUG nova.network.neutron [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updating instance_info_cache with network_info: [{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:06:14 compute-0 nova_compute[260935]: 2025-10-11 09:06:14.392 2 DEBUG oslo_concurrency.lockutils [req-4170ac78-8c28-4209-b5e8-a139551dc92c req-e40c567e-0498-4f87-a222-0c51203f7fdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:06:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:15.202 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:15.202 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:15.203 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:15 compute-0 ceph-mon[74313]: pgmap v1907: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 734 KiB/s wr, 99 op/s
Oct 11 09:06:15 compute-0 nova_compute[260935]: 2025-10-11 09:06:15.535 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "079324f3-2fba-431a-9b8a-6b755af3fe74" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:15 compute-0 nova_compute[260935]: 2025-10-11 09:06:15.536 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:15 compute-0 nova_compute[260935]: 2025-10-11 09:06:15.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:15 compute-0 NetworkManager[44960]: <info>  [1760173575.5395] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Oct 11 09:06:15 compute-0 NetworkManager[44960]: <info>  [1760173575.5411] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Oct 11 09:06:15 compute-0 nova_compute[260935]: 2025-10-11 09:06:15.691 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:06:15 compute-0 nova_compute[260935]: 2025-10-11 09:06:15.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:15 compute-0 ovn_controller[152945]: 2025-10-11T09:06:15Z|00840|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:06:15 compute-0 ovn_controller[152945]: 2025-10-11T09:06:15Z|00841|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:06:15 compute-0 nova_compute[260935]: 2025-10-11 09:06:15.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:15 compute-0 nova_compute[260935]: 2025-10-11 09:06:15.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:15 compute-0 nova_compute[260935]: 2025-10-11 09:06:15.865 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:15 compute-0 nova_compute[260935]: 2025-10-11 09:06:15.866 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:15 compute-0 nova_compute[260935]: 2025-10-11 09:06:15.875 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:06:15 compute-0 nova_compute[260935]: 2025-10-11 09:06:15.876 2 INFO nova.compute.claims [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:06:16 compute-0 nova_compute[260935]: 2025-10-11 09:06:16.201 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:06:16 compute-0 nova_compute[260935]: 2025-10-11 09:06:16.273 2 DEBUG nova.compute.manager [req-0d2c80a8-44e1-4ed8-9680-e43ee35cd8d6 req-8e764754-5f3f-439e-8565-766ccedd0815 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:06:16 compute-0 nova_compute[260935]: 2025-10-11 09:06:16.274 2 DEBUG nova.compute.manager [req-0d2c80a8-44e1-4ed8-9680-e43ee35cd8d6 req-8e764754-5f3f-439e-8565-766ccedd0815 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing instance network info cache due to event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:06:16 compute-0 nova_compute[260935]: 2025-10-11 09:06:16.275 2 DEBUG oslo_concurrency.lockutils [req-0d2c80a8-44e1-4ed8-9680-e43ee35cd8d6 req-8e764754-5f3f-439e-8565-766ccedd0815 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:06:16 compute-0 nova_compute[260935]: 2025-10-11 09:06:16.275 2 DEBUG oslo_concurrency.lockutils [req-0d2c80a8-44e1-4ed8-9680-e43ee35cd8d6 req-8e764754-5f3f-439e-8565-766ccedd0815 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:06:16 compute-0 nova_compute[260935]: 2025-10-11 09:06:16.276 2 DEBUG nova.network.neutron [req-0d2c80a8-44e1-4ed8-9680-e43ee35cd8d6 req-8e764754-5f3f-439e-8565-766ccedd0815 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:06:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 88 op/s
Oct 11 09:06:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:06:16 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/934959076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:06:16 compute-0 nova_compute[260935]: 2025-10-11 09:06:16.747 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:06:16 compute-0 nova_compute[260935]: 2025-10-11 09:06:16.755 2 DEBUG nova.compute.provider_tree [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:06:16 compute-0 nova_compute[260935]: 2025-10-11 09:06:16.804 2 DEBUG nova.scheduler.client.report [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:06:16 compute-0 nova_compute[260935]: 2025-10-11 09:06:16.944 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173561.9432986, d813afc2-c844-45eb-b1ec-efbbf95498ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:06:16 compute-0 nova_compute[260935]: 2025-10-11 09:06:16.945 2 INFO nova.compute.manager [-] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] VM Stopped (Lifecycle Event)
Oct 11 09:06:16 compute-0 nova_compute[260935]: 2025-10-11 09:06:16.996 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:16 compute-0 nova_compute[260935]: 2025-10-11 09:06:16.998 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.022 2 DEBUG nova.compute.manager [None req-347f5c8d-42ba-4b21-8b5d-43f93fc6b508 - - - - - -] [instance: d813afc2-c844-45eb-b1ec-efbbf95498ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.229 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.230 2 DEBUG nova.network.neutron [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.333 2 INFO nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.394 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:06:17 compute-0 ceph-mon[74313]: pgmap v1908: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 88 op/s
Oct 11 09:06:17 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/934959076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.622 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.625 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.626 2 INFO nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Creating image(s)
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.659 2 DEBUG nova.storage.rbd_utils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] rbd image 079324f3-2fba-431a-9b8a-6b755af3fe74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.703 2 DEBUG nova.storage.rbd_utils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] rbd image 079324f3-2fba-431a-9b8a-6b755af3fe74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.746 2 DEBUG nova.storage.rbd_utils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] rbd image 079324f3-2fba-431a-9b8a-6b755af3fe74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.754 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.828 2 DEBUG nova.policy [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c555ec17274647ed83d33852feed0fa6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b86be458d54543fcae9df9884591edf3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.868 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.870 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.871 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.871 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.908 2 DEBUG nova.storage.rbd_utils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] rbd image 079324f3-2fba-431a-9b8a-6b755af3fe74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:06:17 compute-0 nova_compute[260935]: 2025-10-11 09:06:17.914 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 079324f3-2fba-431a-9b8a-6b755af3fe74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:06:18 compute-0 nova_compute[260935]: 2025-10-11 09:06:18.286 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 079324f3-2fba-431a-9b8a-6b755af3fe74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:06:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1909: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 12 KiB/s wr, 147 op/s
Oct 11 09:06:18 compute-0 nova_compute[260935]: 2025-10-11 09:06:18.375 2 DEBUG nova.storage.rbd_utils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] resizing rbd image 079324f3-2fba-431a-9b8a-6b755af3fe74_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:06:18 compute-0 nova_compute[260935]: 2025-10-11 09:06:18.501 2 DEBUG nova.objects.instance [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lazy-loading 'migration_context' on Instance uuid 079324f3-2fba-431a-9b8a-6b755af3fe74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:06:18 compute-0 nova_compute[260935]: 2025-10-11 09:06:18.537 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:06:18 compute-0 nova_compute[260935]: 2025-10-11 09:06:18.537 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Ensure instance console log exists: /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:06:18 compute-0 nova_compute[260935]: 2025-10-11 09:06:18.538 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:18 compute-0 nova_compute[260935]: 2025-10-11 09:06:18.538 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:18 compute-0 nova_compute[260935]: 2025-10-11 09:06:18.539 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:18 compute-0 ovn_controller[152945]: 2025-10-11T09:06:18Z|00842|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:06:18 compute-0 ovn_controller[152945]: 2025-10-11T09:06:18Z|00843|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:06:18 compute-0 nova_compute[260935]: 2025-10-11 09:06:18.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:06:18 compute-0 nova_compute[260935]: 2025-10-11 09:06:18.992 2 DEBUG nova.network.neutron [req-0d2c80a8-44e1-4ed8-9680-e43ee35cd8d6 req-8e764754-5f3f-439e-8565-766ccedd0815 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updated VIF entry in instance network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:06:18 compute-0 nova_compute[260935]: 2025-10-11 09:06:18.993 2 DEBUG nova.network.neutron [req-0d2c80a8-44e1-4ed8-9680-e43ee35cd8d6 req-8e764754-5f3f-439e-8565-766ccedd0815 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updating instance_info_cache with network_info: [{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:06:19 compute-0 nova_compute[260935]: 2025-10-11 09:06:19.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:19 compute-0 nova_compute[260935]: 2025-10-11 09:06:19.030 2 DEBUG oslo_concurrency.lockutils [req-0d2c80a8-44e1-4ed8-9680-e43ee35cd8d6 req-8e764754-5f3f-439e-8565-766ccedd0815 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:06:19 compute-0 nova_compute[260935]: 2025-10-11 09:06:19.245 2 DEBUG nova.network.neutron [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Successfully created port: f44dbabf-c6b7-4aa2-ac55-6df262611e5e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:06:19 compute-0 ceph-mon[74313]: pgmap v1909: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 12 KiB/s wr, 147 op/s
Oct 11 09:06:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 615 KiB/s rd, 12 KiB/s wr, 67 op/s
Oct 11 09:06:20 compute-0 podman[354884]: 2025-10-11 09:06:20.77911865 +0000 UTC m=+0.078112911 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 09:06:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:21.215 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:06:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:21.217 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:06:21 compute-0 nova_compute[260935]: 2025-10-11 09:06:21.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:21 compute-0 nova_compute[260935]: 2025-10-11 09:06:21.387 2 DEBUG nova.network.neutron [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Successfully updated port: f44dbabf-c6b7-4aa2-ac55-6df262611e5e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:06:21 compute-0 nova_compute[260935]: 2025-10-11 09:06:21.453 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "refresh_cache-079324f3-2fba-431a-9b8a-6b755af3fe74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:06:21 compute-0 nova_compute[260935]: 2025-10-11 09:06:21.453 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquired lock "refresh_cache-079324f3-2fba-431a-9b8a-6b755af3fe74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:06:21 compute-0 nova_compute[260935]: 2025-10-11 09:06:21.454 2 DEBUG nova.network.neutron [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:06:21 compute-0 ceph-mon[74313]: pgmap v1910: 321 pgs: 321 active+clean; 500 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 615 KiB/s rd, 12 KiB/s wr, 67 op/s
Oct 11 09:06:21 compute-0 nova_compute[260935]: 2025-10-11 09:06:21.534 2 DEBUG nova.compute.manager [req-865f2bcd-2c62-4aa8-9a02-ef624eeddc8f req-7ac852fb-6a64-4a90-b1a7-66b99e151061 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Received event network-changed-f44dbabf-c6b7-4aa2-ac55-6df262611e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:06:21 compute-0 nova_compute[260935]: 2025-10-11 09:06:21.535 2 DEBUG nova.compute.manager [req-865f2bcd-2c62-4aa8-9a02-ef624eeddc8f req-7ac852fb-6a64-4a90-b1a7-66b99e151061 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Refreshing instance network info cache due to event network-changed-f44dbabf-c6b7-4aa2-ac55-6df262611e5e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:06:21 compute-0 nova_compute[260935]: 2025-10-11 09:06:21.535 2 DEBUG oslo_concurrency.lockutils [req-865f2bcd-2c62-4aa8-9a02-ef624eeddc8f req-7ac852fb-6a64-4a90-b1a7-66b99e151061 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-079324f3-2fba-431a-9b8a-6b755af3fe74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:06:21 compute-0 nova_compute[260935]: 2025-10-11 09:06:21.678 2 DEBUG nova.network.neutron [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:06:22 compute-0 nova_compute[260935]: 2025-10-11 09:06:22.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 541 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 632 KiB/s rd, 1.5 MiB/s wr, 93 op/s
Oct 11 09:06:23 compute-0 ceph-mon[74313]: pgmap v1911: 321 pgs: 321 active+clean; 541 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 632 KiB/s rd, 1.5 MiB/s wr, 93 op/s
Oct 11 09:06:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.224 2 DEBUG nova.compute.manager [req-708a7f1d-21de-4345-aa85-c6656682bf64 req-fa701e60-c1b2-427c-951f-79449e96fc05 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.224 2 DEBUG nova.compute.manager [req-708a7f1d-21de-4345-aa85-c6656682bf64 req-fa701e60-c1b2-427c-951f-79449e96fc05 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing instance network info cache due to event network-changed-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.225 2 DEBUG oslo_concurrency.lockutils [req-708a7f1d-21de-4345-aa85-c6656682bf64 req-fa701e60-c1b2-427c-951f-79449e96fc05 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.225 2 DEBUG oslo_concurrency.lockutils [req-708a7f1d-21de-4345-aa85-c6656682bf64 req-fa701e60-c1b2-427c-951f-79449e96fc05 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.225 2 DEBUG nova.network.neutron [req-708a7f1d-21de-4345-aa85-c6656682bf64 req-fa701e60-c1b2-427c-951f-79449e96fc05 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Refreshing network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:06:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 548 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 625 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.367 2 DEBUG nova.network.neutron [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Updating instance_info_cache with network_info: [{"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.426 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Releasing lock "refresh_cache-079324f3-2fba-431a-9b8a-6b755af3fe74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.427 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Instance network_info: |[{"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.427 2 DEBUG oslo_concurrency.lockutils [req-865f2bcd-2c62-4aa8-9a02-ef624eeddc8f req-7ac852fb-6a64-4a90-b1a7-66b99e151061 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-079324f3-2fba-431a-9b8a-6b755af3fe74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.428 2 DEBUG nova.network.neutron [req-865f2bcd-2c62-4aa8-9a02-ef624eeddc8f req-7ac852fb-6a64-4a90-b1a7-66b99e151061 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Refreshing network info cache for port f44dbabf-c6b7-4aa2-ac55-6df262611e5e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.433 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Start _get_guest_xml network_info=[{"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.440 2 WARNING nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.446 2 DEBUG nova.virt.libvirt.host [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.447 2 DEBUG nova.virt.libvirt.host [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.451 2 DEBUG nova.virt.libvirt.host [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.452 2 DEBUG nova.virt.libvirt.host [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.452 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.453 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.453 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.454 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.454 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.455 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.455 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.455 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.456 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.456 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.456 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.457 2 DEBUG nova.virt.hardware [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.461 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:06:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:06:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:06:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:06:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:06:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:06:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:06:24 compute-0 podman[354924]: 2025-10-11 09:06:24.813546458 +0000 UTC m=+0.107669675 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true)
Oct 11 09:06:24 compute-0 podman[354925]: 2025-10-11 09:06:24.853837488 +0000 UTC m=+0.140700838 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 11 09:06:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:06:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714150570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.963 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.988 2 DEBUG nova.storage.rbd_utils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] rbd image 079324f3-2fba-431a-9b8a-6b755af3fe74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:06:24 compute-0 nova_compute[260935]: 2025-10-11 09:06:24.993 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:06:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:06:25 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2926475377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.460 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.463 2 DEBUG nova.virt.libvirt.vif [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:06:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1151527895',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1151527895',id=95,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b86be458d54543fcae9df9884591edf3',ramdisk_id='',reservation_id='r-7frv4663',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-363741472',owner_user_name='tempest-ServerTagsTestJSON-363741472-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:06:17Z,user_data=None,user_id='c555ec17274647ed83d33852feed0fa6',uuid=079324f3-2fba-431a-9b8a-6b755af3fe74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.464 2 DEBUG nova.network.os_vif_util [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Converting VIF {"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.465 2 DEBUG nova.network.os_vif_util [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:2d,bridge_name='br-int',has_traffic_filtering=True,id=f44dbabf-c6b7-4aa2-ac55-6df262611e5e,network=Network(c4ebf6c9-009d-4565-ba58-6b9452636d41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44dbabf-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.469 2 DEBUG nova.objects.instance [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 079324f3-2fba-431a-9b8a-6b755af3fe74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.498 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:06:25 compute-0 nova_compute[260935]:   <uuid>079324f3-2fba-431a-9b8a-6b755af3fe74</uuid>
Oct 11 09:06:25 compute-0 nova_compute[260935]:   <name>instance-0000005f</name>
Oct 11 09:06:25 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:06:25 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:06:25 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerTagsTestJSON-server-1151527895</nova:name>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:06:24</nova:creationTime>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:06:25 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:06:25 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:06:25 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:06:25 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:06:25 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:06:25 compute-0 nova_compute[260935]:         <nova:user uuid="c555ec17274647ed83d33852feed0fa6">tempest-ServerTagsTestJSON-363741472-project-member</nova:user>
Oct 11 09:06:25 compute-0 nova_compute[260935]:         <nova:project uuid="b86be458d54543fcae9df9884591edf3">tempest-ServerTagsTestJSON-363741472</nova:project>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:06:25 compute-0 nova_compute[260935]:         <nova:port uuid="f44dbabf-c6b7-4aa2-ac55-6df262611e5e">
Oct 11 09:06:25 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:06:25 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:06:25 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <system>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <entry name="serial">079324f3-2fba-431a-9b8a-6b755af3fe74</entry>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <entry name="uuid">079324f3-2fba-431a-9b8a-6b755af3fe74</entry>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     </system>
Oct 11 09:06:25 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:06:25 compute-0 nova_compute[260935]:   <os>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:   </os>
Oct 11 09:06:25 compute-0 nova_compute[260935]:   <features>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:   </features>
Oct 11 09:06:25 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:06:25 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:06:25 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/079324f3-2fba-431a-9b8a-6b755af3fe74_disk">
Oct 11 09:06:25 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       </source>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:06:25 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/079324f3-2fba-431a-9b8a-6b755af3fe74_disk.config">
Oct 11 09:06:25 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       </source>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:06:25 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:47:6a:2d"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <target dev="tapf44dbabf-c6"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74/console.log" append="off"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <video>
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     </video>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:06:25 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:06:25 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:06:25 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:06:25 compute-0 nova_compute[260935]: </domain>
Oct 11 09:06:25 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.499 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Preparing to wait for external event network-vif-plugged-f44dbabf-c6b7-4aa2-ac55-6df262611e5e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.499 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.500 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.500 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.502 2 DEBUG nova.virt.libvirt.vif [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:06:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1151527895',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1151527895',id=95,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b86be458d54543fcae9df9884591edf3',ramdisk_id='',reservation_id='r-7frv4663',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-363741472',owner_user_name='tempest-ServerTagsTestJSON-363741472-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:06:17Z,user_data=None,user_id='c555ec17274647ed83d33852feed0fa6',uuid=079324f3-2fba-431a-9b8a-6b755af3fe74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.503 2 DEBUG nova.network.os_vif_util [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Converting VIF {"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.504 2 DEBUG nova.network.os_vif_util [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:2d,bridge_name='br-int',has_traffic_filtering=True,id=f44dbabf-c6b7-4aa2-ac55-6df262611e5e,network=Network(c4ebf6c9-009d-4565-ba58-6b9452636d41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44dbabf-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.505 2 DEBUG os_vif [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:2d,bridge_name='br-int',has_traffic_filtering=True,id=f44dbabf-c6b7-4aa2-ac55-6df262611e5e,network=Network(c4ebf6c9-009d-4565-ba58-6b9452636d41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44dbabf-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.507 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.507 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.512 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf44dbabf-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf44dbabf-c6, col_values=(('external_ids', {'iface-id': 'f44dbabf-c6b7-4aa2-ac55-6df262611e5e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:6a:2d', 'vm-uuid': '079324f3-2fba-431a-9b8a-6b755af3fe74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:25 compute-0 NetworkManager[44960]: <info>  [1760173585.5162] manager: (tapf44dbabf-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.528 2 INFO os_vif [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:2d,bridge_name='br-int',has_traffic_filtering=True,id=f44dbabf-c6b7-4aa2-ac55-6df262611e5e,network=Network(c4ebf6c9-009d-4565-ba58-6b9452636d41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44dbabf-c6')
Oct 11 09:06:25 compute-0 ceph-mon[74313]: pgmap v1912: 321 pgs: 321 active+clean; 548 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 625 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 11 09:06:25 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3714150570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:06:25 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2926475377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.616 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.616 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.617 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] No VIF found with MAC fa:16:3e:47:6a:2d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.617 2 INFO nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Using config drive
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.651 2 DEBUG nova.storage.rbd_utils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] rbd image 079324f3-2fba-431a-9b8a-6b755af3fe74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.660 2 DEBUG nova.network.neutron [req-708a7f1d-21de-4345-aa85-c6656682bf64 req-fa701e60-c1b2-427c-951f-79449e96fc05 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updated VIF entry in instance network info cache for port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.661 2 DEBUG nova.network.neutron [req-708a7f1d-21de-4345-aa85-c6656682bf64 req-fa701e60-c1b2-427c-951f-79449e96fc05 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updating instance_info_cache with network_info: [{"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:06:25 compute-0 nova_compute[260935]: 2025-10-11 09:06:25.759 2 DEBUG oslo_concurrency.lockutils [req-708a7f1d-21de-4345-aa85-c6656682bf64 req-fa701e60-c1b2-427c-951f-79449e96fc05 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:06:26 compute-0 nova_compute[260935]: 2025-10-11 09:06:26.118 2 INFO nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Creating config drive at /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74/disk.config
Oct 11 09:06:26 compute-0 nova_compute[260935]: 2025-10-11 09:06:26.123 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyakuuedm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:06:26 compute-0 nova_compute[260935]: 2025-10-11 09:06:26.290 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyakuuedm" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:06:26 compute-0 nova_compute[260935]: 2025-10-11 09:06:26.314 2 DEBUG nova.storage.rbd_utils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] rbd image 079324f3-2fba-431a-9b8a-6b755af3fe74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:06:26 compute-0 nova_compute[260935]: 2025-10-11 09:06:26.321 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74/disk.config 079324f3-2fba-431a-9b8a-6b755af3fe74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:06:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 548 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 623 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 11 09:06:26 compute-0 nova_compute[260935]: 2025-10-11 09:06:26.527 2 DEBUG oslo_concurrency.processutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74/disk.config 079324f3-2fba-431a-9b8a-6b755af3fe74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:06:26 compute-0 nova_compute[260935]: 2025-10-11 09:06:26.529 2 INFO nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Deleting local config drive /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74/disk.config because it was imported into RBD.
Oct 11 09:06:26 compute-0 kernel: tapf44dbabf-c6: entered promiscuous mode
Oct 11 09:06:26 compute-0 NetworkManager[44960]: <info>  [1760173586.6094] manager: (tapf44dbabf-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/363)
Oct 11 09:06:26 compute-0 nova_compute[260935]: 2025-10-11 09:06:26.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:26 compute-0 ovn_controller[152945]: 2025-10-11T09:06:26Z|00844|binding|INFO|Claiming lport f44dbabf-c6b7-4aa2-ac55-6df262611e5e for this chassis.
Oct 11 09:06:26 compute-0 ovn_controller[152945]: 2025-10-11T09:06:26Z|00845|binding|INFO|f44dbabf-c6b7-4aa2-ac55-6df262611e5e: Claiming fa:16:3e:47:6a:2d 10.100.0.3
Oct 11 09:06:26 compute-0 nova_compute[260935]: 2025-10-11 09:06:26.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:26 compute-0 ovn_controller[152945]: 2025-10-11T09:06:26Z|00846|binding|INFO|Setting lport f44dbabf-c6b7-4aa2-ac55-6df262611e5e ovn-installed in OVS
Oct 11 09:06:26 compute-0 ovn_controller[152945]: 2025-10-11T09:06:26Z|00847|binding|INFO|Setting lport f44dbabf-c6b7-4aa2-ac55-6df262611e5e up in Southbound
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.657 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:6a:2d 10.100.0.3'], port_security=['fa:16:3e:47:6a:2d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '079324f3-2fba-431a-9b8a-6b755af3fe74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4ebf6c9-009d-4565-ba58-6b9452636d41', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b86be458d54543fcae9df9884591edf3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '97cd21ff-67df-467b-bb0c-629509de3f23', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=820964f1-3ad5-4fec-b6e5-3d57a1853278, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f44dbabf-c6b7-4aa2-ac55-6df262611e5e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.659 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f44dbabf-c6b7-4aa2-ac55-6df262611e5e in datapath c4ebf6c9-009d-4565-ba58-6b9452636d41 bound to our chassis
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.661 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4ebf6c9-009d-4565-ba58-6b9452636d41
Oct 11 09:06:26 compute-0 nova_compute[260935]: 2025-10-11 09:06:26.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:26 compute-0 systemd-udevd[355084]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:06:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:06:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3834400119' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:06:26 compute-0 systemd-machined[215705]: New machine qemu-109-instance-0000005f.
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.676 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c3d0dd-e1b1-432c-9567-b889b5698cab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.678 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc4ebf6c9-01 in ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.680 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc4ebf6c9-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.681 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc6323db-6fbd-4e2b-b1df-9effd6e7a210]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:06:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3834400119' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.682 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bda94d8b-cd2a-46d6-a2ed-0b1c4914ab83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:26 compute-0 NetworkManager[44960]: <info>  [1760173586.6957] device (tapf44dbabf-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:06:26 compute-0 NetworkManager[44960]: <info>  [1760173586.6965] device (tapf44dbabf-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:06:26 compute-0 systemd[1]: Started Virtual Machine qemu-109-instance-0000005f.
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.699 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[368554e5-7f59-4684-87b8-8f8afc5ec0a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.715 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[082797a3-ab94-4e07-872d-7b654602f959]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.757 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[416d1f22-c2a3-47b1-9d39-d573c75a26a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.762 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6180c8-307e-448c-9ab5-86180f74f273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:26 compute-0 NetworkManager[44960]: <info>  [1760173586.7634] manager: (tapc4ebf6c9-00): new Veth device (/org/freedesktop/NetworkManager/Devices/364)
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.812 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7a69f9-32df-4948-a5d1-bdf28f1be944]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.817 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[18c4ca42-8957-43d4-b3b8-2ebc6f2dd822]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:26 compute-0 NetworkManager[44960]: <info>  [1760173586.8576] device (tapc4ebf6c9-00): carrier: link connected
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.867 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[eae7a1f8-b716-4577-9d2e-2f3e5266d4fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.890 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[774a4fe5-2f7a-4222-badd-6a642db04dff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4ebf6c9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:00:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543156, 'reachable_time': 40561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355117, 'error': None, 'target': 'ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.912 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[365b7d57-3cef-4768-9aba-d5d7bc5d0880]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543156, 'tstamp': 543156}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355118, 'error': None, 'target': 'ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.934 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ec941c9a-28f8-4d94-80e4-1f142cbf1033]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4ebf6c9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:00:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543156, 'reachable_time': 40561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355119, 'error': None, 'target': 'ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:26 compute-0 nova_compute[260935]: 2025-10-11 09:06:26.957 2 DEBUG nova.network.neutron [req-865f2bcd-2c62-4aa8-9a02-ef624eeddc8f req-7ac852fb-6a64-4a90-b1a7-66b99e151061 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Updated VIF entry in instance network info cache for port f44dbabf-c6b7-4aa2-ac55-6df262611e5e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:06:26 compute-0 nova_compute[260935]: 2025-10-11 09:06:26.958 2 DEBUG nova.network.neutron [req-865f2bcd-2c62-4aa8-9a02-ef624eeddc8f req-7ac852fb-6a64-4a90-b1a7-66b99e151061 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Updating instance_info_cache with network_info: [{"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:06:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:26.987 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[101f8500-3feb-4505-ad76-28fc4f814222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.008 2 DEBUG oslo_concurrency.lockutils [req-865f2bcd-2c62-4aa8-9a02-ef624eeddc8f req-7ac852fb-6a64-4a90-b1a7-66b99e151061 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-079324f3-2fba-431a-9b8a-6b755af3fe74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:06:27 compute-0 ovn_controller[152945]: 2025-10-11T09:06:27Z|00848|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:06:27 compute-0 ovn_controller[152945]: 2025-10-11T09:06:27Z|00849|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.084 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e1559799-3887-4f49-af2e-9d0cb31953b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.086 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4ebf6c9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.087 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.087 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4ebf6c9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:27 compute-0 NetworkManager[44960]: <info>  [1760173587.0904] manager: (tapc4ebf6c9-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Oct 11 09:06:27 compute-0 kernel: tapc4ebf6c9-00: entered promiscuous mode
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.141 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4ebf6c9-00, col_values=(('external_ids', {'iface-id': '07f4b6ae-9260-4c66-a561-df5bb8d757b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.146 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c4ebf6c9-009d-4565-ba58-6b9452636d41.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c4ebf6c9-009d-4565-ba58-6b9452636d41.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.147 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5683be3-de48-4245-8360-ecee887ed4c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.148 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-c4ebf6c9-009d-4565-ba58-6b9452636d41
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/c4ebf6c9-009d-4565-ba58-6b9452636d41.pid.haproxy
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID c4ebf6c9-009d-4565-ba58-6b9452636d41
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:06:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:27.148 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41', 'env', 'PROCESS_TAG=haproxy-c4ebf6c9-009d-4565-ba58-6b9452636d41', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c4ebf6c9-009d-4565-ba58-6b9452636d41.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:06:27 compute-0 ovn_controller[152945]: 2025-10-11T09:06:27Z|00850|binding|INFO|Releasing lport 07f4b6ae-9260-4c66-a561-df5bb8d757b4 from this chassis (sb_readonly=0)
Oct 11 09:06:27 compute-0 ovn_controller[152945]: 2025-10-11T09:06:27Z|00851|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:06:27 compute-0 ovn_controller[152945]: 2025-10-11T09:06:27Z|00852|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.155 2 DEBUG nova.compute.manager [req-b538ec5c-3aa9-4b2c-b4f7-b991142a7e2a req-5173cd97-b3dd-49c3-98d4-cc9757768553 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Received event network-vif-plugged-f44dbabf-c6b7-4aa2-ac55-6df262611e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.155 2 DEBUG oslo_concurrency.lockutils [req-b538ec5c-3aa9-4b2c-b4f7-b991142a7e2a req-5173cd97-b3dd-49c3-98d4-cc9757768553 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.156 2 DEBUG oslo_concurrency.lockutils [req-b538ec5c-3aa9-4b2c-b4f7-b991142a7e2a req-5173cd97-b3dd-49c3-98d4-cc9757768553 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.156 2 DEBUG oslo_concurrency.lockutils [req-b538ec5c-3aa9-4b2c-b4f7-b991142a7e2a req-5173cd97-b3dd-49c3-98d4-cc9757768553 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.156 2 DEBUG nova.compute.manager [req-b538ec5c-3aa9-4b2c-b4f7-b991142a7e2a req-5173cd97-b3dd-49c3-98d4-cc9757768553 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Processing event network-vif-plugged-f44dbabf-c6b7-4aa2-ac55-6df262611e5e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:27 compute-0 podman[355193]: 2025-10-11 09:06:27.548939134 +0000 UTC m=+0.062147365 container create 430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:06:27 compute-0 ceph-mon[74313]: pgmap v1913: 321 pgs: 321 active+clean; 548 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 623 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 11 09:06:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3834400119' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:06:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3834400119' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:06:27 compute-0 systemd[1]: Started libpod-conmon-430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446.scope.
Oct 11 09:06:27 compute-0 podman[355193]: 2025-10-11 09:06:27.514629285 +0000 UTC m=+0.027837556 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:06:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5116281b771ae2e0fec1c2aeaecd4d786f9db2d4ea87dc93adfd474008700f33/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:06:27 compute-0 podman[355193]: 2025-10-11 09:06:27.64125814 +0000 UTC m=+0.154466401 container init 430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 09:06:27 compute-0 podman[355193]: 2025-10-11 09:06:27.647287052 +0000 UTC m=+0.160495283 container start 430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:06:27 compute-0 neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41[355209]: [NOTICE]   (355213) : New worker (355215) forked
Oct 11 09:06:27 compute-0 neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41[355209]: [NOTICE]   (355213) : Loading success.
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.776 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173587.7765043, 079324f3-2fba-431a-9b8a-6b755af3fe74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.777 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] VM Started (Lifecycle Event)
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.780 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.784 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.788 2 INFO nova.virt.libvirt.driver [-] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Instance spawned successfully.
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.789 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.837 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.842 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.852 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.853 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.854 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.854 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.855 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.856 2 DEBUG nova.virt.libvirt.driver [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.915 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.915 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173587.77752, 079324f3-2fba-431a-9b8a-6b755af3fe74 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.916 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] VM Paused (Lifecycle Event)
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.962 2 INFO nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Took 10.34 seconds to spawn the instance on the hypervisor.
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.963 2 DEBUG nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.967 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.979 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173587.7838619, 079324f3-2fba-431a-9b8a-6b755af3fe74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:06:27 compute-0 nova_compute[260935]: 2025-10-11 09:06:27.980 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] VM Resumed (Lifecycle Event)
Oct 11 09:06:28 compute-0 nova_compute[260935]: 2025-10-11 09:06:28.036 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:06:28 compute-0 nova_compute[260935]: 2025-10-11 09:06:28.040 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:06:28 compute-0 nova_compute[260935]: 2025-10-11 09:06:28.081 2 INFO nova.compute.manager [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Took 12.27 seconds to build instance.
Oct 11 09:06:28 compute-0 nova_compute[260935]: 2025-10-11 09:06:28.193 2 DEBUG oslo_concurrency.lockutils [None req-71e95a42-81bc-46fa-9b05-06608d4cfeb7 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 548 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 630 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct 11 09:06:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:29.219 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.345 2 DEBUG nova.compute.manager [req-5632545d-104a-47d3-afb4-cc3517583f70 req-02f0ff8b-ce22-4eb1-a477-7faed60faca1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Received event network-vif-plugged-f44dbabf-c6b7-4aa2-ac55-6df262611e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.345 2 DEBUG oslo_concurrency.lockutils [req-5632545d-104a-47d3-afb4-cc3517583f70 req-02f0ff8b-ce22-4eb1-a477-7faed60faca1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.345 2 DEBUG oslo_concurrency.lockutils [req-5632545d-104a-47d3-afb4-cc3517583f70 req-02f0ff8b-ce22-4eb1-a477-7faed60faca1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.346 2 DEBUG oslo_concurrency.lockutils [req-5632545d-104a-47d3-afb4-cc3517583f70 req-02f0ff8b-ce22-4eb1-a477-7faed60faca1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.346 2 DEBUG nova.compute.manager [req-5632545d-104a-47d3-afb4-cc3517583f70 req-02f0ff8b-ce22-4eb1-a477-7faed60faca1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] No waiting events found dispatching network-vif-plugged-f44dbabf-c6b7-4aa2-ac55-6df262611e5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.346 2 WARNING nova.compute.manager [req-5632545d-104a-47d3-afb4-cc3517583f70 req-02f0ff8b-ce22-4eb1-a477-7faed60faca1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Received unexpected event network-vif-plugged-f44dbabf-c6b7-4aa2-ac55-6df262611e5e for instance with vm_state active and task_state None.
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.364 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.364 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.365 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.365 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.366 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.367 2 INFO nova.compute.manager [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Terminating instance
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.368 2 DEBUG nova.compute.manager [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:06:29 compute-0 kernel: tapf8dc388c-9e (unregistering): left promiscuous mode
Oct 11 09:06:29 compute-0 NetworkManager[44960]: <info>  [1760173589.4461] device (tapf8dc388c-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:29 compute-0 ovn_controller[152945]: 2025-10-11T09:06:29Z|00853|binding|INFO|Releasing lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 from this chassis (sb_readonly=0)
Oct 11 09:06:29 compute-0 ovn_controller[152945]: 2025-10-11T09:06:29Z|00854|binding|INFO|Setting lport f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 down in Southbound
Oct 11 09:06:29 compute-0 ovn_controller[152945]: 2025-10-11T09:06:29Z|00855|binding|INFO|Removing iface tapf8dc388c-9e ovn-installed in OVS
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:29 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Oct 11 09:06:29 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d0000005c.scope: Consumed 12.937s CPU time.
Oct 11 09:06:29 compute-0 ceph-mon[74313]: pgmap v1914: 321 pgs: 321 active+clean; 548 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 630 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct 11 09:06:29 compute-0 systemd-machined[215705]: Machine qemu-108-instance-0000005c terminated.
Oct 11 09:06:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:29.566 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:bf:b7 10.100.0.12'], port_security=['fa:16:3e:b4:bf:b7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a83f40e4-c852-4b45-a3d2-1cd65e9aaa31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a5d578da5e746caa535eef295e1a67d', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e8a3473d-4723-4d4d-be0a-f96b6f35a4ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae01bff6-0f5a-48e3-a011-50bc6bac19c7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:06:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:29.574 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 in datapath 7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75 unbound from our chassis
Oct 11 09:06:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:29.576 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 11 09:06:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:29.578 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[11dd83b4-edc4-471e-8d16-8943a296d41a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.612 2 INFO nova.virt.libvirt.driver [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Instance destroyed successfully.
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.612 2 DEBUG nova.objects.instance [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lazy-loading 'resources' on Instance uuid a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.673 2 DEBUG nova.virt.libvirt.vif [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:05:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-942927350',display_name='tempest-ServerRescueTestJSONUnderV235-server-942927350',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-942927350',id=92,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:06:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a5d578da5e746caa535eef295e1a67d',ramdisk_id='',reservation_id='r-i0emrtot',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-2035879439',owner_user_name='tempest-ServerRescueTestJSONUnderV235-2035879439-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:06:02Z,user_data=None,user_id='b7730a035fdf47498398e20e5aaf9ba4',uuid=a83f40e4-c852-4b45-a3d2-1cd65e9aaa31,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.674 2 DEBUG nova.network.os_vif_util [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Converting VIF {"id": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "address": "fa:16:3e:b4:bf:b7", "network": {"id": "7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1957321638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "0a5d578da5e746caa535eef295e1a67d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8dc388c-9e", "ovs_interfaceid": "f8dc388c-9e5a-43c2-8dd9-8fc28768ec31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.675 2 DEBUG nova.network.os_vif_util [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b4:bf:b7,bridge_name='br-int',has_traffic_filtering=True,id=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31,network=Network(7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8dc388c-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.676 2 DEBUG os_vif [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:bf:b7,bridge_name='br-int',has_traffic_filtering=True,id=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31,network=Network(7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8dc388c-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.679 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8dc388c-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:06:29 compute-0 nova_compute[260935]: 2025-10-11 09:06:29.686 2 INFO os_vif [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:bf:b7,bridge_name='br-int',has_traffic_filtering=True,id=f8dc388c-9e5a-43c2-8dd9-8fc28768ec31,network=Network(7ac98e7f-e9cd-45cf-8bf4-dcecaa65ca75),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8dc388c-9e')
Oct 11 09:06:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 548 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:06:30 compute-0 nova_compute[260935]: 2025-10-11 09:06:30.448 2 INFO nova.virt.libvirt.driver [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Deleting instance files /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_del
Oct 11 09:06:30 compute-0 nova_compute[260935]: 2025-10-11 09:06:30.449 2 INFO nova.virt.libvirt.driver [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Deletion of /var/lib/nova/instances/a83f40e4-c852-4b45-a3d2-1cd65e9aaa31_del complete
Oct 11 09:06:30 compute-0 nova_compute[260935]: 2025-10-11 09:06:30.655 2 INFO nova.compute.manager [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Took 1.29 seconds to destroy the instance on the hypervisor.
Oct 11 09:06:30 compute-0 nova_compute[260935]: 2025-10-11 09:06:30.656 2 DEBUG oslo.service.loopingcall [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:06:30 compute-0 nova_compute[260935]: 2025-10-11 09:06:30.656 2 DEBUG nova.compute.manager [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:06:30 compute-0 nova_compute[260935]: 2025-10-11 09:06:30.657 2 DEBUG nova.network.neutron [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:06:31 compute-0 nova_compute[260935]: 2025-10-11 09:06:31.555 2 DEBUG nova.compute.manager [req-d7ab80e6-7e08-4aca-a72a-bd86076d93ac req-da0f480b-8a6b-4a78-9edc-4b82097c89e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-unplugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:06:31 compute-0 nova_compute[260935]: 2025-10-11 09:06:31.556 2 DEBUG oslo_concurrency.lockutils [req-d7ab80e6-7e08-4aca-a72a-bd86076d93ac req-da0f480b-8a6b-4a78-9edc-4b82097c89e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:31 compute-0 nova_compute[260935]: 2025-10-11 09:06:31.556 2 DEBUG oslo_concurrency.lockutils [req-d7ab80e6-7e08-4aca-a72a-bd86076d93ac req-da0f480b-8a6b-4a78-9edc-4b82097c89e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:31 compute-0 nova_compute[260935]: 2025-10-11 09:06:31.557 2 DEBUG oslo_concurrency.lockutils [req-d7ab80e6-7e08-4aca-a72a-bd86076d93ac req-da0f480b-8a6b-4a78-9edc-4b82097c89e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:31 compute-0 nova_compute[260935]: 2025-10-11 09:06:31.557 2 DEBUG nova.compute.manager [req-d7ab80e6-7e08-4aca-a72a-bd86076d93ac req-da0f480b-8a6b-4a78-9edc-4b82097c89e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] No waiting events found dispatching network-vif-unplugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:06:31 compute-0 nova_compute[260935]: 2025-10-11 09:06:31.557 2 DEBUG nova.compute.manager [req-d7ab80e6-7e08-4aca-a72a-bd86076d93ac req-da0f480b-8a6b-4a78-9edc-4b82097c89e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-unplugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:06:31 compute-0 ceph-mon[74313]: pgmap v1915: 321 pgs: 321 active+clean; 548 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:06:31 compute-0 nova_compute[260935]: 2025-10-11 09:06:31.725 2 DEBUG nova.network.neutron [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:06:31 compute-0 nova_compute[260935]: 2025-10-11 09:06:31.779 2 INFO nova.compute.manager [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Took 1.12 seconds to deallocate network for instance.
Oct 11 09:06:31 compute-0 nova_compute[260935]: 2025-10-11 09:06:31.856 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:31 compute-0 nova_compute[260935]: 2025-10-11 09:06:31.857 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:32 compute-0 nova_compute[260935]: 2025-10-11 09:06:32.068 2 DEBUG oslo_concurrency.processutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:06:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 454 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 11 09:06:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:06:32 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592571589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:06:32 compute-0 nova_compute[260935]: 2025-10-11 09:06:32.612 2 DEBUG oslo_concurrency.processutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:06:32 compute-0 nova_compute[260935]: 2025-10-11 09:06:32.618 2 DEBUG nova.compute.provider_tree [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:06:32 compute-0 nova_compute[260935]: 2025-10-11 09:06:32.658 2 DEBUG nova.scheduler.client.report [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:06:32 compute-0 nova_compute[260935]: 2025-10-11 09:06:32.741 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:32 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/592571589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:06:32 compute-0 nova_compute[260935]: 2025-10-11 09:06:32.803 2 INFO nova.scheduler.client.report [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Deleted allocations for instance a83f40e4-c852-4b45-a3d2-1cd65e9aaa31
Oct 11 09:06:33 compute-0 nova_compute[260935]: 2025-10-11 09:06:33.018 2 DEBUG oslo_concurrency.lockutils [None req-6a40a529-0650-4691-8e19-6221ac47d0be b7730a035fdf47498398e20e5aaf9ba4 0a5d578da5e746caa535eef295e1a67d - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:06:33 compute-0 ceph-mon[74313]: pgmap v1916: 321 pgs: 321 active+clean; 454 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.030 2 DEBUG nova.compute.manager [req-8ba58660-a261-4f22-8c0b-711330870007 req-52c9f5c3-060f-47e8-beea-49ee22aefae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.031 2 DEBUG oslo_concurrency.lockutils [req-8ba58660-a261-4f22-8c0b-711330870007 req-52c9f5c3-060f-47e8-beea-49ee22aefae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.032 2 DEBUG oslo_concurrency.lockutils [req-8ba58660-a261-4f22-8c0b-711330870007 req-52c9f5c3-060f-47e8-beea-49ee22aefae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.032 2 DEBUG oslo_concurrency.lockutils [req-8ba58660-a261-4f22-8c0b-711330870007 req-52c9f5c3-060f-47e8-beea-49ee22aefae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a83f40e4-c852-4b45-a3d2-1cd65e9aaa31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.033 2 DEBUG nova.compute.manager [req-8ba58660-a261-4f22-8c0b-711330870007 req-52c9f5c3-060f-47e8-beea-49ee22aefae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] No waiting events found dispatching network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.033 2 WARNING nova.compute.manager [req-8ba58660-a261-4f22-8c0b-711330870007 req-52c9f5c3-060f-47e8-beea-49ee22aefae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received unexpected event network-vif-plugged-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 for instance with vm_state deleted and task_state None.
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.034 2 DEBUG nova.compute.manager [req-8ba58660-a261-4f22-8c0b-711330870007 req-52c9f5c3-060f-47e8-beea-49ee22aefae8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Received event network-vif-deleted-f8dc388c-9e5a-43c2-8dd9-8fc28768ec31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:06:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 421 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 275 KiB/s wr, 129 op/s
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.742 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "079324f3-2fba-431a-9b8a-6b755af3fe74" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.743 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.744 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.744 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.745 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.747 2 INFO nova.compute.manager [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Terminating instance
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.749 2 DEBUG nova.compute.manager [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:06:34 compute-0 kernel: tapf44dbabf-c6 (unregistering): left promiscuous mode
Oct 11 09:06:34 compute-0 NetworkManager[44960]: <info>  [1760173594.7905] device (tapf44dbabf-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:06:34 compute-0 ovn_controller[152945]: 2025-10-11T09:06:34Z|00856|binding|INFO|Releasing lport f44dbabf-c6b7-4aa2-ac55-6df262611e5e from this chassis (sb_readonly=0)
Oct 11 09:06:34 compute-0 ovn_controller[152945]: 2025-10-11T09:06:34Z|00857|binding|INFO|Setting lport f44dbabf-c6b7-4aa2-ac55-6df262611e5e down in Southbound
Oct 11 09:06:34 compute-0 ceph-mon[74313]: pgmap v1917: 321 pgs: 321 active+clean; 421 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 275 KiB/s wr, 129 op/s
Oct 11 09:06:34 compute-0 ovn_controller[152945]: 2025-10-11T09:06:34Z|00858|binding|INFO|Removing iface tapf44dbabf-c6 ovn-installed in OVS
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:34 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Oct 11 09:06:34 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000005f.scope: Consumed 8.007s CPU time.
Oct 11 09:06:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:34.858 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:6a:2d 10.100.0.3'], port_security=['fa:16:3e:47:6a:2d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '079324f3-2fba-431a-9b8a-6b755af3fe74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4ebf6c9-009d-4565-ba58-6b9452636d41', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b86be458d54543fcae9df9884591edf3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '97cd21ff-67df-467b-bb0c-629509de3f23', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=820964f1-3ad5-4fec-b6e5-3d57a1853278, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f44dbabf-c6b7-4aa2-ac55-6df262611e5e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:06:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:34.860 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f44dbabf-c6b7-4aa2-ac55-6df262611e5e in datapath c4ebf6c9-009d-4565-ba58-6b9452636d41 unbound from our chassis
Oct 11 09:06:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:34.863 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c4ebf6c9-009d-4565-ba58-6b9452636d41, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:06:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:34.864 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a2fd342d-49b0-49b2-b123-d5bac162604a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:34.865 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41 namespace which is not needed anymore
Oct 11 09:06:34 compute-0 systemd-machined[215705]: Machine qemu-109-instance-0000005f terminated.
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.992 2 INFO nova.virt.libvirt.driver [-] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Instance destroyed successfully.
Oct 11 09:06:34 compute-0 nova_compute[260935]: 2025-10-11 09:06:34.993 2 DEBUG nova.objects.instance [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lazy-loading 'resources' on Instance uuid 079324f3-2fba-431a-9b8a-6b755af3fe74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:06:35 compute-0 neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41[355209]: [NOTICE]   (355213) : haproxy version is 2.8.14-c23fe91
Oct 11 09:06:35 compute-0 neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41[355209]: [NOTICE]   (355213) : path to executable is /usr/sbin/haproxy
Oct 11 09:06:35 compute-0 neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41[355209]: [WARNING]  (355213) : Exiting Master process...
Oct 11 09:06:35 compute-0 neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41[355209]: [ALERT]    (355213) : Current worker (355215) exited with code 143 (Terminated)
Oct 11 09:06:35 compute-0 neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41[355209]: [WARNING]  (355213) : All workers exited. Exiting... (0)
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.063 2 DEBUG nova.virt.libvirt.vif [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:06:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1151527895',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1151527895',id=95,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:06:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b86be458d54543fcae9df9884591edf3',ramdisk_id='',reservation_id='r-7frv4663',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-363741472',owner_user_name='tempest-ServerTagsTestJSON-363741472-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:06:28Z,user_data=None,user_id='c555ec17274647ed83d33852feed0fa6',uuid=079324f3-2fba-431a-9b8a-6b755af3fe74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.063 2 DEBUG nova.network.os_vif_util [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Converting VIF {"id": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "address": "fa:16:3e:47:6a:2d", "network": {"id": "c4ebf6c9-009d-4565-ba58-6b9452636d41", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2078761724-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b86be458d54543fcae9df9884591edf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf44dbabf-c6", "ovs_interfaceid": "f44dbabf-c6b7-4aa2-ac55-6df262611e5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:06:35 compute-0 systemd[1]: libpod-430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446.scope: Deactivated successfully.
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.065 2 DEBUG nova.network.os_vif_util [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:2d,bridge_name='br-int',has_traffic_filtering=True,id=f44dbabf-c6b7-4aa2-ac55-6df262611e5e,network=Network(c4ebf6c9-009d-4565-ba58-6b9452636d41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44dbabf-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.065 2 DEBUG os_vif [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:2d,bridge_name='br-int',has_traffic_filtering=True,id=f44dbabf-c6b7-4aa2-ac55-6df262611e5e,network=Network(c4ebf6c9-009d-4565-ba58-6b9452636d41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44dbabf-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf44dbabf-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:35 compute-0 podman[355312]: 2025-10-11 09:06:35.074044673 +0000 UTC m=+0.063863955 container died 430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.075 2 INFO os_vif [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:2d,bridge_name='br-int',has_traffic_filtering=True,id=f44dbabf-c6b7-4aa2-ac55-6df262611e5e,network=Network(c4ebf6c9-009d-4565-ba58-6b9452636d41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf44dbabf-c6')
Oct 11 09:06:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446-userdata-shm.mount: Deactivated successfully.
Oct 11 09:06:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5116281b771ae2e0fec1c2aeaecd4d786f9db2d4ea87dc93adfd474008700f33-merged.mount: Deactivated successfully.
Oct 11 09:06:35 compute-0 podman[355312]: 2025-10-11 09:06:35.136946778 +0000 UTC m=+0.126766080 container cleanup 430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:06:35 compute-0 systemd[1]: libpod-conmon-430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446.scope: Deactivated successfully.
Oct 11 09:06:35 compute-0 podman[355363]: 2025-10-11 09:06:35.233115144 +0000 UTC m=+0.059123829 container remove 430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 11 09:06:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.244 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[befe9dc5-789c-4dcc-8854-a65396586f2a]: (4, ('Sat Oct 11 09:06:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41 (430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446)\n430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446\nSat Oct 11 09:06:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41 (430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446)\n430ab45702f3269ff635d2a687f1fa6c64c61e0b8b3ea0d1d1139c16c4b33446\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.247 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[05b8a7c9-1132-4d68-86b7-eec5c92d0269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.248 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4ebf6c9-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:35 compute-0 kernel: tapc4ebf6c9-00: left promiscuous mode
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.277 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[90540d08-3810-44e0-9790-3fc7bdb9c802]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.300 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9351cf-98b6-4179-b304-807f3eed23f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.303 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2de8fe03-18b2-48ac-ae38-708828b92857]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.340 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[60da574d-ea12-4632-aad0-ccf3d219f655]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543145, 'reachable_time': 38993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355380, 'error': None, 'target': 'ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:35 compute-0 systemd[1]: run-netns-ovnmeta\x2dc4ebf6c9\x2d009d\x2d4565\x2dba58\x2d6b9452636d41.mount: Deactivated successfully.
Oct 11 09:06:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.345 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c4ebf6c9-009d-4565-ba58-6b9452636d41 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:06:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:06:35.345 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f5bfd1-07fc-4400-9c1c-92531d6f5fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.522 2 INFO nova.virt.libvirt.driver [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Deleting instance files /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74_del
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.524 2 INFO nova.virt.libvirt.driver [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Deletion of /var/lib/nova/instances/079324f3-2fba-431a-9b8a-6b755af3fe74_del complete
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.569 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.570 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.570 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.681 2 INFO nova.compute.manager [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Took 0.93 seconds to destroy the instance on the hypervisor.
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.682 2 DEBUG oslo.service.loopingcall [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.683 2 DEBUG nova.compute.manager [-] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.684 2 DEBUG nova.network.neutron [-] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.919 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.919 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:06:35 compute-0 nova_compute[260935]: 2025-10-11 09:06:35.920 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:06:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 421 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 127 op/s
Oct 11 09:06:36 compute-0 nova_compute[260935]: 2025-10-11 09:06:36.747 2 DEBUG nova.network.neutron [-] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:06:36 compute-0 nova_compute[260935]: 2025-10-11 09:06:36.796 2 INFO nova.compute.manager [-] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Took 1.11 seconds to deallocate network for instance.
Oct 11 09:06:36 compute-0 nova_compute[260935]: 2025-10-11 09:06:36.858 2 DEBUG nova.compute.manager [req-08f9846a-1750-4858-87d3-e0166b5eeb3c req-be75b409-ee1a-4220-a251-199506322aa6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Received event network-vif-deleted-f44dbabf-c6b7-4aa2-ac55-6df262611e5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:06:36 compute-0 nova_compute[260935]: 2025-10-11 09:06:36.882 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:36 compute-0 nova_compute[260935]: 2025-10-11 09:06:36.882 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.057 2 DEBUG oslo_concurrency.processutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:06:37 compute-0 ceph-mon[74313]: pgmap v1918: 321 pgs: 321 active+clean; 421 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 127 op/s
Oct 11 09:06:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:06:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067880928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.557 2 DEBUG oslo_concurrency.processutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.566 2 DEBUG nova.compute.provider_tree [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.626 2 DEBUG nova.scheduler.client.report [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.636 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.672 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.673 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.675 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.676 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.681 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.682 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.683 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.684 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.685 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.686 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.686 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.716 2 INFO nova.scheduler.client.report [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Deleted allocations for instance 079324f3-2fba-431a-9b8a-6b755af3fe74
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.750 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.750 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.751 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.752 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:06:37 compute-0 nova_compute[260935]: 2025-10-11 09:06:37.919 2 DEBUG oslo_concurrency.lockutils [None req-1d4a41c2-0404-42b2-96af-2a766c7d1a71 c555ec17274647ed83d33852feed0fa6 b86be458d54543fcae9df9884591edf3 - - default default] Lock "079324f3-2fba-431a-9b8a-6b755af3fe74" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:38 compute-0 ovn_controller[152945]: 2025-10-11T09:06:38Z|00859|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:06:38 compute-0 ovn_controller[152945]: 2025-10-11T09:06:38Z|00860|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:06:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:06:38 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376962690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.252 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 154 op/s
Oct 11 09:06:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3067880928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:06:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2376962690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.483 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.484 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.484 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.490 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.490 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.496 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.496 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.501 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.501 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.778 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.780 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3146MB free_disk=59.788875579833984GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.781 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.781 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:06:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.969 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.969 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.969 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.970 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 15633aee-234a-4417-b5ea-f35f13820404 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.970 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:06:38 compute-0 nova_compute[260935]: 2025-10-11 09:06:38.970 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:06:39 compute-0 nova_compute[260935]: 2025-10-11 09:06:39.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:39 compute-0 nova_compute[260935]: 2025-10-11 09:06:39.112 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:06:39 compute-0 ceph-mon[74313]: pgmap v1919: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 154 op/s
Oct 11 09:06:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:06:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578979883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:06:39 compute-0 nova_compute[260935]: 2025-10-11 09:06:39.694 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:06:39 compute-0 nova_compute[260935]: 2025-10-11 09:06:39.705 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:06:39 compute-0 nova_compute[260935]: 2025-10-11 09:06:39.771 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:06:39 compute-0 nova_compute[260935]: 2025-10-11 09:06:39.875 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:06:39 compute-0 nova_compute[260935]: 2025-10-11 09:06:39.876 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:06:40 compute-0 nova_compute[260935]: 2025-10-11 09:06:40.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 KiB/s wr, 144 op/s
Oct 11 09:06:40 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2578979883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:06:41 compute-0 ceph-mon[74313]: pgmap v1920: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 KiB/s wr, 144 op/s
Oct 11 09:06:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 KiB/s wr, 144 op/s
Oct 11 09:06:43 compute-0 ceph-mon[74313]: pgmap v1921: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 KiB/s wr, 144 op/s
Oct 11 09:06:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:06:44 compute-0 nova_compute[260935]: 2025-10-11 09:06:44.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 1.5 KiB/s wr, 51 op/s
Oct 11 09:06:44 compute-0 nova_compute[260935]: 2025-10-11 09:06:44.607 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173589.6062007, a83f40e4-c852-4b45-a3d2-1cd65e9aaa31 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:06:44 compute-0 nova_compute[260935]: 2025-10-11 09:06:44.608 2 INFO nova.compute.manager [-] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] VM Stopped (Lifecycle Event)
Oct 11 09:06:44 compute-0 nova_compute[260935]: 2025-10-11 09:06:44.658 2 DEBUG nova.compute.manager [None req-d0ec6287-caab-4f20-80b0-0aeaf5d41526 - - - - - -] [instance: a83f40e4-c852-4b45-a3d2-1cd65e9aaa31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:06:44 compute-0 podman[355448]: 2025-10-11 09:06:44.791493554 +0000 UTC m=+0.090941447 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 11 09:06:45 compute-0 nova_compute[260935]: 2025-10-11 09:06:45.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:45 compute-0 ceph-mon[74313]: pgmap v1922: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 1.5 KiB/s wr, 51 op/s
Oct 11 09:06:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 11 09:06:46 compute-0 ceph-mon[74313]: pgmap v1923: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 11 09:06:47 compute-0 sshd-session[355467]: Invalid user taha from 155.4.244.179 port 22875
Oct 11 09:06:47 compute-0 sshd-session[355467]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:06:47 compute-0 sshd-session[355467]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:06:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 11 09:06:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:06:48 compute-0 sshd-session[355467]: Failed password for invalid user taha from 155.4.244.179 port 22875 ssh2
Oct 11 09:06:49 compute-0 nova_compute[260935]: 2025-10-11 09:06:49.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:49 compute-0 sshd-session[355467]: Received disconnect from 155.4.244.179 port 22875:11: Bye Bye [preauth]
Oct 11 09:06:49 compute-0 sshd-session[355467]: Disconnected from invalid user taha 155.4.244.179 port 22875 [preauth]
Oct 11 09:06:49 compute-0 ceph-mon[74313]: pgmap v1924: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 11 09:06:49 compute-0 nova_compute[260935]: 2025-10-11 09:06:49.987 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173594.9862652, 079324f3-2fba-431a-9b8a-6b755af3fe74 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:06:49 compute-0 nova_compute[260935]: 2025-10-11 09:06:49.988 2 INFO nova.compute.manager [-] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] VM Stopped (Lifecycle Event)
Oct 11 09:06:50 compute-0 nova_compute[260935]: 2025-10-11 09:06:50.044 2 DEBUG nova.compute.manager [None req-27a818a9-b487-4c54-9e1a-6c489f013290 - - - - - -] [instance: 079324f3-2fba-431a-9b8a-6b755af3fe74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:06:50 compute-0 nova_compute[260935]: 2025-10-11 09:06:50.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:06:50 compute-0 sudo[355469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:06:50 compute-0 sudo[355469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:50 compute-0 sudo[355469]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:50 compute-0 sudo[355494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:06:50 compute-0 sudo[355494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:50 compute-0 sudo[355494]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:50 compute-0 sudo[355519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:06:50 compute-0 sudo[355519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:50 compute-0 sudo[355519]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:50 compute-0 sudo[355544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:06:50 compute-0 sudo[355544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:51 compute-0 sudo[355544]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:06:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:06:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:06:51 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:06:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:06:51 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:06:51 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 6938261c-e0e3-46cf-95c3-fe921f11058e does not exist
Oct 11 09:06:51 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9f3c5bb3-4126-41d9-89a1-da16ae0f37e5 does not exist
Oct 11 09:06:51 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a6761191-011f-4788-a267-d508a01b7f23 does not exist
Oct 11 09:06:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:06:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:06:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:06:51 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:06:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:06:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:06:51 compute-0 ceph-mon[74313]: pgmap v1925: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:06:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:06:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:06:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:06:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:06:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:06:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:06:51 compute-0 sudo[355600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:06:51 compute-0 sudo[355600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:51 compute-0 sudo[355600]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:51 compute-0 sudo[355626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:06:51 compute-0 sudo[355626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:51 compute-0 sudo[355626]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:51 compute-0 podman[355624]: 2025-10-11 09:06:51.574163068 +0000 UTC m=+0.093640404 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 09:06:51 compute-0 sudo[355668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:06:51 compute-0 sudo[355668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:51 compute-0 sudo[355668]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:51 compute-0 sudo[355693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:06:51 compute-0 sudo[355693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:52 compute-0 podman[355761]: 2025-10-11 09:06:52.220499649 +0000 UTC m=+0.056139994 container create 9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kilby, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 09:06:52 compute-0 systemd[1]: Started libpod-conmon-9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941.scope.
Oct 11 09:06:52 compute-0 podman[355761]: 2025-10-11 09:06:52.193013954 +0000 UTC m=+0.028654379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:06:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:06:52 compute-0 podman[355761]: 2025-10-11 09:06:52.331018864 +0000 UTC m=+0.166659229 container init 9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kilby, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 09:06:52 compute-0 podman[355761]: 2025-10-11 09:06:52.337725395 +0000 UTC m=+0.173365740 container start 9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:06:52 compute-0 podman[355761]: 2025-10-11 09:06:52.340913566 +0000 UTC m=+0.176553911 container attach 9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 09:06:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:06:52 compute-0 busy_kilby[355777]: 167 167
Oct 11 09:06:52 compute-0 systemd[1]: libpod-9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941.scope: Deactivated successfully.
Oct 11 09:06:52 compute-0 podman[355761]: 2025-10-11 09:06:52.348617576 +0000 UTC m=+0.184257951 container died 9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kilby, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:06:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d85bfd72ca7cd1bc68309e8b2d2eaeedcfc89cc59a9c05ab40ce3902dff2fd68-merged.mount: Deactivated successfully.
Oct 11 09:06:52 compute-0 podman[355761]: 2025-10-11 09:06:52.399460278 +0000 UTC m=+0.235100673 container remove 9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 09:06:52 compute-0 systemd[1]: libpod-conmon-9299614edfb0bbb8af5c42942b3de8088002772559ef10f263bd9dbfa1370941.scope: Deactivated successfully.
Oct 11 09:06:52 compute-0 podman[355801]: 2025-10-11 09:06:52.653494989 +0000 UTC m=+0.053220620 container create 1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:06:52 compute-0 systemd[1]: Started libpod-conmon-1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1.scope.
Oct 11 09:06:52 compute-0 podman[355801]: 2025-10-11 09:06:52.624316546 +0000 UTC m=+0.024042227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:06:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7314b1faef6b12828acf593b1ee4ae303b65c7390c67a5fed77dc34b0003c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7314b1faef6b12828acf593b1ee4ae303b65c7390c67a5fed77dc34b0003c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7314b1faef6b12828acf593b1ee4ae303b65c7390c67a5fed77dc34b0003c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7314b1faef6b12828acf593b1ee4ae303b65c7390c67a5fed77dc34b0003c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7314b1faef6b12828acf593b1ee4ae303b65c7390c67a5fed77dc34b0003c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:06:52 compute-0 podman[355801]: 2025-10-11 09:06:52.757749966 +0000 UTC m=+0.157475657 container init 1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:06:52 compute-0 podman[355801]: 2025-10-11 09:06:52.772730473 +0000 UTC m=+0.172456064 container start 1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 09:06:52 compute-0 podman[355801]: 2025-10-11 09:06:52.776998965 +0000 UTC m=+0.176724596 container attach 1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_joliot, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 09:06:53 compute-0 ceph-mon[74313]: pgmap v1926: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:06:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:06:53 compute-0 zen_joliot[355817]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:06:53 compute-0 zen_joliot[355817]: --> relative data size: 1.0
Oct 11 09:06:53 compute-0 zen_joliot[355817]: --> All data devices are unavailable
Oct 11 09:06:54 compute-0 systemd[1]: libpod-1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1.scope: Deactivated successfully.
Oct 11 09:06:54 compute-0 systemd[1]: libpod-1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1.scope: Consumed 1.166s CPU time.
Oct 11 09:06:54 compute-0 podman[355846]: 2025-10-11 09:06:54.061906514 +0000 UTC m=+0.033224529 container died 1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 09:06:54 compute-0 nova_compute[260935]: 2025-10-11 09:06:54.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d7314b1faef6b12828acf593b1ee4ae303b65c7390c67a5fed77dc34b0003c4-merged.mount: Deactivated successfully.
Oct 11 09:06:54 compute-0 podman[355846]: 2025-10-11 09:06:54.132097888 +0000 UTC m=+0.103415863 container remove 1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_joliot, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:06:54 compute-0 systemd[1]: libpod-conmon-1bf005b848b727bc1f7f8751b6dfdc45970ecb033382539c1e9348fbf2a975b1.scope: Deactivated successfully.
Oct 11 09:06:54 compute-0 sudo[355693]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:54 compute-0 sudo[355862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:06:54 compute-0 sudo[355862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:54 compute-0 sudo[355862]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:06:54 compute-0 sudo[355887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:06:54 compute-0 sudo[355887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:54 compute-0 sudo[355887]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:54 compute-0 sudo[355912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:06:54 compute-0 sudo[355912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:54 compute-0 sudo[355912]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:54 compute-0 sudo[355937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:06:54 compute-0 sudo[355937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:06:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:06:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:06:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:06:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:06:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:06:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:06:54
Oct 11 09:06:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:06:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:06:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['backups', 'volumes', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', '.rgw.root', 'default.rgw.log', 'vms', 'default.rgw.control']
Oct 11 09:06:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:06:55 compute-0 podman[356003]: 2025-10-11 09:06:55.058424022 +0000 UTC m=+0.069117104 container create 21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 09:06:55 compute-0 podman[356003]: 2025-10-11 09:06:55.033305515 +0000 UTC m=+0.043998687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:06:55 compute-0 systemd[1]: Started libpod-conmon-21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194.scope.
Oct 11 09:06:55 compute-0 nova_compute[260935]: 2025-10-11 09:06:55.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:06:55 compute-0 podman[356003]: 2025-10-11 09:06:55.206339825 +0000 UTC m=+0.217032927 container init 21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:06:55 compute-0 podman[356003]: 2025-10-11 09:06:55.214664643 +0000 UTC m=+0.225357725 container start 21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_carson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 09:06:55 compute-0 podman[356003]: 2025-10-11 09:06:55.219018497 +0000 UTC m=+0.229711649 container attach 21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 09:06:55 compute-0 youthful_carson[356041]: 167 167
Oct 11 09:06:55 compute-0 systemd[1]: libpod-21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194.scope: Deactivated successfully.
Oct 11 09:06:55 compute-0 podman[356003]: 2025-10-11 09:06:55.222773894 +0000 UTC m=+0.233466996 container died 21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_carson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:06:55 compute-0 podman[356017]: 2025-10-11 09:06:55.250735493 +0000 UTC m=+0.148726067 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible)
Oct 11 09:06:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-29d43d04394a56214b59efdcadc71fd935e126fd919b889ad5907a0dad388df1-merged.mount: Deactivated successfully.
Oct 11 09:06:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:06:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:06:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:06:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:06:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:06:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:06:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:06:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:06:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:06:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:06:55 compute-0 podman[356020]: 2025-10-11 09:06:55.270126066 +0000 UTC m=+0.158099684 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:06:55 compute-0 podman[356003]: 2025-10-11 09:06:55.280934165 +0000 UTC m=+0.291627277 container remove 21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_carson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 09:06:55 compute-0 systemd[1]: libpod-conmon-21018926948b6223e1cafd88f1706061caaad17d2da24f13d9bc5e5a2b9ab194.scope: Deactivated successfully.
Oct 11 09:06:55 compute-0 ceph-mon[74313]: pgmap v1927: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:06:55 compute-0 podman[356085]: 2025-10-11 09:06:55.527421021 +0000 UTC m=+0.042843244 container create 027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:06:55 compute-0 systemd[1]: Started libpod-conmon-027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46.scope.
Oct 11 09:06:55 compute-0 podman[356085]: 2025-10-11 09:06:55.507150452 +0000 UTC m=+0.022572665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:06:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b3d3f72eecbc537a3ec37abd640584b7e50128cf01d15f9774e58fc1967097/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b3d3f72eecbc537a3ec37abd640584b7e50128cf01d15f9774e58fc1967097/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b3d3f72eecbc537a3ec37abd640584b7e50128cf01d15f9774e58fc1967097/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b3d3f72eecbc537a3ec37abd640584b7e50128cf01d15f9774e58fc1967097/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:06:55 compute-0 podman[356085]: 2025-10-11 09:06:55.664299728 +0000 UTC m=+0.179722021 container init 027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 09:06:55 compute-0 podman[356085]: 2025-10-11 09:06:55.672634036 +0000 UTC m=+0.188056259 container start 027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bassi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:06:55 compute-0 podman[356085]: 2025-10-11 09:06:55.677990169 +0000 UTC m=+0.193412392 container attach 027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 09:06:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:06:56 compute-0 elastic_bassi[356102]: {
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:     "0": [
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:         {
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "devices": [
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "/dev/loop3"
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             ],
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "lv_name": "ceph_lv0",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "lv_size": "21470642176",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "name": "ceph_lv0",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "tags": {
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.cluster_name": "ceph",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.crush_device_class": "",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.encrypted": "0",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.osd_id": "0",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.type": "block",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.vdo": "0"
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             },
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "type": "block",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "vg_name": "ceph_vg0"
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:         }
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:     ],
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:     "1": [
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:         {
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "devices": [
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "/dev/loop4"
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             ],
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "lv_name": "ceph_lv1",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "lv_size": "21470642176",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "name": "ceph_lv1",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "tags": {
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.cluster_name": "ceph",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.crush_device_class": "",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.encrypted": "0",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.osd_id": "1",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.type": "block",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.vdo": "0"
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             },
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "type": "block",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "vg_name": "ceph_vg1"
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:         }
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:     ],
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:     "2": [
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:         {
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "devices": [
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "/dev/loop5"
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             ],
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "lv_name": "ceph_lv2",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "lv_size": "21470642176",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "name": "ceph_lv2",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "tags": {
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.cluster_name": "ceph",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.crush_device_class": "",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.encrypted": "0",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.osd_id": "2",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.type": "block",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:                 "ceph.vdo": "0"
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             },
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "type": "block",
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:             "vg_name": "ceph_vg2"
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:         }
Oct 11 09:06:56 compute-0 elastic_bassi[356102]:     ]
Oct 11 09:06:56 compute-0 elastic_bassi[356102]: }
Oct 11 09:06:56 compute-0 systemd[1]: libpod-027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46.scope: Deactivated successfully.
Oct 11 09:06:56 compute-0 podman[356085]: 2025-10-11 09:06:56.427147716 +0000 UTC m=+0.942569979 container died 027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bassi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 09:06:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-81b3d3f72eecbc537a3ec37abd640584b7e50128cf01d15f9774e58fc1967097-merged.mount: Deactivated successfully.
Oct 11 09:06:56 compute-0 podman[356085]: 2025-10-11 09:06:56.49665157 +0000 UTC m=+1.012073773 container remove 027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 09:06:56 compute-0 systemd[1]: libpod-conmon-027c5e9c93f814dedd5124a4e542a26cf422c967e124a9429e199ae374b7ea46.scope: Deactivated successfully.
Oct 11 09:06:56 compute-0 sudo[355937]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:56 compute-0 sudo[356125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:06:56 compute-0 sudo[356125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:56 compute-0 sudo[356125]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:56 compute-0 sudo[356150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:06:56 compute-0 sudo[356150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:56 compute-0 sudo[356150]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:56 compute-0 sudo[356175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:06:56 compute-0 sudo[356175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:56 compute-0 sudo[356175]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:56 compute-0 sudo[356200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:06:56 compute-0 sudo[356200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:57 compute-0 podman[356266]: 2025-10-11 09:06:57.342370412 +0000 UTC m=+0.061954879 container create aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goodall, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 09:06:57 compute-0 systemd[1]: Started libpod-conmon-aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5.scope.
Oct 11 09:06:57 compute-0 podman[356266]: 2025-10-11 09:06:57.312929052 +0000 UTC m=+0.032513569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:06:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:06:57 compute-0 ceph-mon[74313]: pgmap v1928: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:06:57 compute-0 podman[356266]: 2025-10-11 09:06:57.454463592 +0000 UTC m=+0.174048119 container init aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:06:57 compute-0 podman[356266]: 2025-10-11 09:06:57.461227725 +0000 UTC m=+0.180812162 container start aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:06:57 compute-0 podman[356266]: 2025-10-11 09:06:57.465910758 +0000 UTC m=+0.185495215 container attach aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 09:06:57 compute-0 optimistic_goodall[356282]: 167 167
Oct 11 09:06:57 compute-0 systemd[1]: libpod-aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5.scope: Deactivated successfully.
Oct 11 09:06:57 compute-0 podman[356266]: 2025-10-11 09:06:57.469943583 +0000 UTC m=+0.189528040 container died aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:06:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-46644da71927bc24422585f4cbe36740bbf08fd8c91856bbee0275886165c180-merged.mount: Deactivated successfully.
Oct 11 09:06:57 compute-0 podman[356266]: 2025-10-11 09:06:57.522350239 +0000 UTC m=+0.241934666 container remove aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 11 09:06:57 compute-0 systemd[1]: libpod-conmon-aa3b47c1670aec9ca748e2cd582b9b44fe76ba00910df0dd6bc3be89ec05a3a5.scope: Deactivated successfully.
Oct 11 09:06:57 compute-0 podman[356306]: 2025-10-11 09:06:57.810709821 +0000 UTC m=+0.065439059 container create a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:06:57 compute-0 systemd[1]: Started libpod-conmon-a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6.scope.
Oct 11 09:06:57 compute-0 podman[356306]: 2025-10-11 09:06:57.779434809 +0000 UTC m=+0.034164107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:06:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fb0b1e107adcb00515a81c4b0b9653838cc8fbade640f8b723e769f08a38f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fb0b1e107adcb00515a81c4b0b9653838cc8fbade640f8b723e769f08a38f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fb0b1e107adcb00515a81c4b0b9653838cc8fbade640f8b723e769f08a38f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fb0b1e107adcb00515a81c4b0b9653838cc8fbade640f8b723e769f08a38f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:06:57 compute-0 podman[356306]: 2025-10-11 09:06:57.911403246 +0000 UTC m=+0.166132544 container init a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:06:57 compute-0 podman[356306]: 2025-10-11 09:06:57.925852478 +0000 UTC m=+0.180581736 container start a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:06:57 compute-0 podman[356306]: 2025-10-11 09:06:57.932525639 +0000 UTC m=+0.187254937 container attach a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 09:06:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:06:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]: {
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:         "osd_id": 2,
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:         "type": "bluestore"
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:     },
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:         "osd_id": 0,
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:         "type": "bluestore"
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:     },
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:         "osd_id": 1,
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:         "type": "bluestore"
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]:     }
Oct 11 09:06:58 compute-0 interesting_bhaskara[356323]: }
Oct 11 09:06:58 compute-0 systemd[1]: libpod-a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6.scope: Deactivated successfully.
Oct 11 09:06:58 compute-0 systemd[1]: libpod-a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6.scope: Consumed 1.053s CPU time.
Oct 11 09:06:58 compute-0 podman[356306]: 2025-10-11 09:06:58.984101729 +0000 UTC m=+1.238830987 container died a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:06:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-22fb0b1e107adcb00515a81c4b0b9653838cc8fbade640f8b723e769f08a38f0-merged.mount: Deactivated successfully.
Oct 11 09:06:59 compute-0 podman[356306]: 2025-10-11 09:06:59.050979898 +0000 UTC m=+1.305709126 container remove a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhaskara, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 09:06:59 compute-0 systemd[1]: libpod-conmon-a32e2e5cffa902ab2897365d240d73823c697878a77026df1129dfafc74a59a6.scope: Deactivated successfully.
Oct 11 09:06:59 compute-0 nova_compute[260935]: 2025-10-11 09:06:59.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:06:59 compute-0 sudo[356200]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:06:59 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:06:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:06:59 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:06:59 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 83d8e14b-df29-42ec-9d9c-ae7cd4202c97 does not exist
Oct 11 09:06:59 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 0fc927ec-5bf5-44b4-a109-213d3fa10367 does not exist
Oct 11 09:06:59 compute-0 sudo[356368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:06:59 compute-0 sudo[356368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:59 compute-0 sudo[356368]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:59 compute-0 sudo[356393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:06:59 compute-0 sudo[356393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:06:59 compute-0 sudo[356393]: pam_unix(sudo:session): session closed for user root
Oct 11 09:06:59 compute-0 ceph-mon[74313]: pgmap v1929: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:06:59 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:06:59 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:07:00 compute-0 nova_compute[260935]: 2025-10-11 09:07:00.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:07:01 compute-0 ceph-mon[74313]: pgmap v1930: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:07:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:07:02 compute-0 nova_compute[260935]: 2025-10-11 09:07:02.552 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "6520fc43-79ed-4060-85bb-dcdff5f5c101" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:02 compute-0 nova_compute[260935]: 2025-10-11 09:07:02.552 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:02 compute-0 nova_compute[260935]: 2025-10-11 09:07:02.590 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:07:02 compute-0 nova_compute[260935]: 2025-10-11 09:07:02.716 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:02 compute-0 nova_compute[260935]: 2025-10-11 09:07:02.717 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:02 compute-0 nova_compute[260935]: 2025-10-11 09:07:02.729 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:07:02 compute-0 nova_compute[260935]: 2025-10-11 09:07:02.730 2 INFO nova.compute.claims [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:07:02 compute-0 nova_compute[260935]: 2025-10-11 09:07:02.981 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:07:03 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/218263712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:03 compute-0 nova_compute[260935]: 2025-10-11 09:07:03.457 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:03 compute-0 nova_compute[260935]: 2025-10-11 09:07:03.465 2 DEBUG nova.compute.provider_tree [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:07:03 compute-0 ceph-mon[74313]: pgmap v1931: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:07:03 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/218263712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:03 compute-0 nova_compute[260935]: 2025-10-11 09:07:03.560 2 DEBUG nova.scheduler.client.report [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:07:03 compute-0 nova_compute[260935]: 2025-10-11 09:07:03.624 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:03 compute-0 nova_compute[260935]: 2025-10-11 09:07:03.626 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:07:03 compute-0 nova_compute[260935]: 2025-10-11 09:07:03.732 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:07:03 compute-0 nova_compute[260935]: 2025-10-11 09:07:03.733 2 DEBUG nova.network.neutron [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:07:03 compute-0 nova_compute[260935]: 2025-10-11 09:07:03.798 2 INFO nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:07:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:07:03 compute-0 nova_compute[260935]: 2025-10-11 09:07:03.874 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.096 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.099 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.100 2 INFO nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Creating image(s)
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.133 2 DEBUG nova.storage.rbd_utils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.173 2 DEBUG nova.storage.rbd_utils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.207 2 DEBUG nova.storage.rbd_utils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.212 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.267 2 DEBUG nova.policy [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f52fcc072b5843a197064a063d4a5d30', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c221c92f100b42fbb2581f0c7035540b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.320 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.321 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.322 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.323 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1932: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.348 2 DEBUG nova.storage.rbd_utils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.353 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.718 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.804 2 DEBUG nova.storage.rbd_utils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] resizing rbd image 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:07:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.942 2 DEBUG nova.objects.instance [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'migration_context' on Instance uuid 6520fc43-79ed-4060-85bb-dcdff5f5c101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.991 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.992 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Ensure instance console log exists: /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.993 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.993 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:04 compute-0 nova_compute[260935]: 2025-10-11 09:07:04.994 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:05 compute-0 nova_compute[260935]: 2025-10-11 09:07:05.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:05 compute-0 nova_compute[260935]: 2025-10-11 09:07:05.246 2 DEBUG nova.network.neutron [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Successfully created port: 7f81893d-380a-42b4-88a7-76a98c30a38d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:07:05 compute-0 ceph-mon[74313]: pgmap v1932: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.003 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.004 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.074 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.207 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.207 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.216 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.216 2 INFO nova.compute.claims [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:07:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.416 2 DEBUG nova.network.neutron [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Successfully updated port: 7f81893d-380a-42b4-88a7-76a98c30a38d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.454 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "refresh_cache-6520fc43-79ed-4060-85bb-dcdff5f5c101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.454 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquired lock "refresh_cache-6520fc43-79ed-4060-85bb-dcdff5f5c101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.455 2 DEBUG nova.network.neutron [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.513 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.514 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.581 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.636 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.658 2 DEBUG nova.compute.manager [req-cb9d72b0-cdd6-406b-8cab-854d0fbef8d9 req-7abf82ba-745c-4997-8ef7-a175fb946a24 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received event network-changed-7f81893d-380a-42b4-88a7-76a98c30a38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.659 2 DEBUG nova.compute.manager [req-cb9d72b0-cdd6-406b-8cab-854d0fbef8d9 req-7abf82ba-745c-4997-8ef7-a175fb946a24 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Refreshing instance network info cache due to event network-changed-7f81893d-380a-42b4-88a7-76a98c30a38d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.660 2 DEBUG oslo_concurrency.lockutils [req-cb9d72b0-cdd6-406b-8cab-854d0fbef8d9 req-7abf82ba-745c-4997-8ef7-a175fb946a24 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-6520fc43-79ed-4060-85bb-dcdff5f5c101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.767 2 DEBUG nova.network.neutron [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:07:06 compute-0 nova_compute[260935]: 2025-10-11 09:07:06.842 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:07:07 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3679868914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.044 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.051 2 DEBUG nova.compute.provider_tree [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.081 2 DEBUG nova.scheduler.client.report [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.147 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.148 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.150 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.158 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.158 2 INFO nova.compute.claims [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.414 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.415 2 DEBUG nova.network.neutron [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:07:07 compute-0 ceph-mon[74313]: pgmap v1933: 321 pgs: 321 active+clean; 374 MiB data, 826 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:07:07 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3679868914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.522 2 INFO nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.585 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.744 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.793 2 DEBUG nova.policy [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f52fcc072b5843a197064a063d4a5d30', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c221c92f100b42fbb2581f0c7035540b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.801 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.804 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.806 2 INFO nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Creating image(s)
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.841 2 DEBUG nova.storage.rbd_utils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.869 2 DEBUG nova.storage.rbd_utils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.905 2 DEBUG nova.storage.rbd_utils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:07 compute-0 nova_compute[260935]: 2025-10-11 09:07:07.909 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.003 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.004 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.005 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.005 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.022 2 DEBUG nova.storage.rbd_utils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.025 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:07:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4079266758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.182 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.191 2 DEBUG nova.compute.provider_tree [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.230 2 DEBUG nova.scheduler.client.report [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.274 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.276 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.347 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.348 2 DEBUG nova.network.neutron [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:07:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 420 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.359 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.407 2 INFO nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.456 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.466 2 DEBUG nova.storage.rbd_utils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] resizing rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:07:08 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4079266758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.580 2 DEBUG nova.objects.instance [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'migration_context' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.658 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.660 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.660 2 INFO nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Creating image(s)
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.684 2 DEBUG nova.storage.rbd_utils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.711 2 DEBUG nova.storage.rbd_utils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.738 2 DEBUG nova.storage.rbd_utils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.742 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.790 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.791 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Ensure instance console log exists: /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.791 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.792 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.792 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.794 2 DEBUG nova.network.neutron [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Updating instance_info_cache with network_info: [{"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.798 2 DEBUG nova.policy [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6b8d9d5ab01d48ae81a09f922875ea3e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '21f163e616ee4917a580701d466f7dc9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:07:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.841 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.842 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.843 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.843 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.866 2 DEBUG nova.storage.rbd_utils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.870 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:08 compute-0 nova_compute[260935]: 2025-10-11 09:07:08.950 2 DEBUG nova.network.neutron [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Successfully created port: 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.068 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Releasing lock "refresh_cache-6520fc43-79ed-4060-85bb-dcdff5f5c101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.069 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Instance network_info: |[{"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.071 2 DEBUG oslo_concurrency.lockutils [req-cb9d72b0-cdd6-406b-8cab-854d0fbef8d9 req-7abf82ba-745c-4997-8ef7-a175fb946a24 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-6520fc43-79ed-4060-85bb-dcdff5f5c101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.071 2 DEBUG nova.network.neutron [req-cb9d72b0-cdd6-406b-8cab-854d0fbef8d9 req-7abf82ba-745c-4997-8ef7-a175fb946a24 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Refreshing network info cache for port 7f81893d-380a-42b4-88a7-76a98c30a38d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.078 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Start _get_guest_xml network_info=[{"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.085 2 WARNING nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.097 2 DEBUG nova.virt.libvirt.host [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.098 2 DEBUG nova.virt.libvirt.host [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.111 2 DEBUG nova.virt.libvirt.host [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.113 2 DEBUG nova.virt.libvirt.host [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.113 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.114 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.115 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.115 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.116 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.116 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.117 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.117 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.118 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.118 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.119 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.119 2 DEBUG nova.virt.hardware [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.125 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.191 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.260 2 DEBUG nova.storage.rbd_utils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] resizing rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.364 2 DEBUG nova.objects.instance [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'migration_context' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.407 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.407 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Ensure instance console log exists: /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.408 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.408 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.408 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:09 compute-0 ceph-mon[74313]: pgmap v1934: 321 pgs: 321 active+clean; 420 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:07:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:07:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/299898137' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.601 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.620 2 DEBUG nova.storage.rbd_utils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:09 compute-0 nova_compute[260935]: 2025-10-11 09:07:09.626 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:07:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/734158486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.060 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.064 2 DEBUG nova.virt.libvirt.vif [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:07:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-653793177',display_name='tempest-ServerRescueNegativeTestJSON-server-653793177',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-653793177',id=96,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c221c92f100b42fbb2581f0c7035540b',ramdisk_id='',reservation_id='r-id4jerug',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1667632067',owner_user_name='tempest-ServerRescueNegativeTestJSON-1667632067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:03Z,user_data=None,user_id='f52fcc072b5843a197064a063d4a5d30',uuid=6520fc43-79ed-4060-85bb-dcdff5f5c101,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.065 2 DEBUG nova.network.os_vif_util [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converting VIF {"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.066 2 DEBUG nova.network.os_vif_util [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:f1:b7,bridge_name='br-int',has_traffic_filtering=True,id=7f81893d-380a-42b4-88a7-76a98c30a38d,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f81893d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.069 2 DEBUG nova.objects.instance [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'pci_devices' on Instance uuid 6520fc43-79ed-4060-85bb-dcdff5f5c101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.113 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:07:10 compute-0 nova_compute[260935]:   <uuid>6520fc43-79ed-4060-85bb-dcdff5f5c101</uuid>
Oct 11 09:07:10 compute-0 nova_compute[260935]:   <name>instance-00000060</name>
Oct 11 09:07:10 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:07:10 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:07:10 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-653793177</nova:name>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:07:09</nova:creationTime>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:07:10 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:07:10 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:07:10 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:07:10 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:07:10 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:07:10 compute-0 nova_compute[260935]:         <nova:user uuid="f52fcc072b5843a197064a063d4a5d30">tempest-ServerRescueNegativeTestJSON-1667632067-project-member</nova:user>
Oct 11 09:07:10 compute-0 nova_compute[260935]:         <nova:project uuid="c221c92f100b42fbb2581f0c7035540b">tempest-ServerRescueNegativeTestJSON-1667632067</nova:project>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:07:10 compute-0 nova_compute[260935]:         <nova:port uuid="7f81893d-380a-42b4-88a7-76a98c30a38d">
Oct 11 09:07:10 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:07:10 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:07:10 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <system>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <entry name="serial">6520fc43-79ed-4060-85bb-dcdff5f5c101</entry>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <entry name="uuid">6520fc43-79ed-4060-85bb-dcdff5f5c101</entry>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     </system>
Oct 11 09:07:10 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:07:10 compute-0 nova_compute[260935]:   <os>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:   </os>
Oct 11 09:07:10 compute-0 nova_compute[260935]:   <features>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:   </features>
Oct 11 09:07:10 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:07:10 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:07:10 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/6520fc43-79ed-4060-85bb-dcdff5f5c101_disk">
Oct 11 09:07:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       </source>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:07:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/6520fc43-79ed-4060-85bb-dcdff5f5c101_disk.config">
Oct 11 09:07:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       </source>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:07:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:61:f1:b7"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <target dev="tap7f81893d-38"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101/console.log" append="off"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <video>
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     </video>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:07:10 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:07:10 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:07:10 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:07:10 compute-0 nova_compute[260935]: </domain>
Oct 11 09:07:10 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.114 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Preparing to wait for external event network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.115 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.115 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.116 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.117 2 DEBUG nova.virt.libvirt.vif [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:07:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-653793177',display_name='tempest-ServerRescueNegativeTestJSON-server-653793177',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-653793177',id=96,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c221c92f100b42fbb2581f0c7035540b',ramdisk_id='',reservation_id='r-id4jerug',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1667632067',owner_user_name='tempest-ServerRescueNegativeTestJSON-1667632067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:03Z,user_data=None,user_id='f52fcc072b5843a197064a063d4a5d30',uuid=6520fc43-79ed-4060-85bb-dcdff5f5c101,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.118 2 DEBUG nova.network.os_vif_util [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converting VIF {"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.119 2 DEBUG nova.network.os_vif_util [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:f1:b7,bridge_name='br-int',has_traffic_filtering=True,id=7f81893d-380a-42b4-88a7-76a98c30a38d,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f81893d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.120 2 DEBUG os_vif [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:f1:b7,bridge_name='br-int',has_traffic_filtering=True,id=7f81893d-380a-42b4-88a7-76a98c30a38d,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f81893d-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.122 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.123 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.129 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f81893d-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.130 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f81893d-38, col_values=(('external_ids', {'iface-id': '7f81893d-380a-42b4-88a7-76a98c30a38d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:f1:b7', 'vm-uuid': '6520fc43-79ed-4060-85bb-dcdff5f5c101'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:10 compute-0 NetworkManager[44960]: <info>  [1760173630.1677] manager: (tap7f81893d-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.180 2 INFO os_vif [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:f1:b7,bridge_name='br-int',has_traffic_filtering=True,id=7f81893d-380a-42b4-88a7-76a98c30a38d,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f81893d-38')
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.301 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.301 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.301 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No VIF found with MAC fa:16:3e:61:f1:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.302 2 INFO nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Using config drive
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.333 2 DEBUG nova.storage.rbd_utils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 420 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:07:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/299898137' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/734158486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:10 compute-0 nova_compute[260935]: 2025-10-11 09:07:10.914 2 DEBUG nova.network.neutron [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Successfully created port: a611854c-0a61-41b8-91ce-0c0f893aa54c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:07:11 compute-0 nova_compute[260935]: 2025-10-11 09:07:11.426 2 INFO nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Creating config drive at /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101/disk.config
Oct 11 09:07:11 compute-0 nova_compute[260935]: 2025-10-11 09:07:11.435 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplt37ypx8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:11 compute-0 ceph-mon[74313]: pgmap v1935: 321 pgs: 321 active+clean; 420 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:07:11 compute-0 nova_compute[260935]: 2025-10-11 09:07:11.604 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplt37ypx8" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:11 compute-0 nova_compute[260935]: 2025-10-11 09:07:11.648 2 DEBUG nova.storage.rbd_utils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:11 compute-0 nova_compute[260935]: 2025-10-11 09:07:11.653 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101/disk.config 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:11 compute-0 nova_compute[260935]: 2025-10-11 09:07:11.711 2 DEBUG nova.network.neutron [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Successfully updated port: 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:07:11 compute-0 nova_compute[260935]: 2025-10-11 09:07:11.742 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:07:11 compute-0 nova_compute[260935]: 2025-10-11 09:07:11.742 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquired lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:07:11 compute-0 nova_compute[260935]: 2025-10-11 09:07:11.742 2 DEBUG nova.network.neutron [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:07:11 compute-0 nova_compute[260935]: 2025-10-11 09:07:11.870 2 DEBUG oslo_concurrency.processutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101/disk.config 6520fc43-79ed-4060-85bb-dcdff5f5c101_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:11 compute-0 nova_compute[260935]: 2025-10-11 09:07:11.871 2 INFO nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Deleting local config drive /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101/disk.config because it was imported into RBD.
Oct 11 09:07:11 compute-0 nova_compute[260935]: 2025-10-11 09:07:11.946 2 DEBUG nova.compute.manager [req-63bb350f-1c02-48b5-b2b5-971ef214e168 req-c9f499ff-daf4-4848-bd71-77f92dec20e0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-changed-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:11 compute-0 nova_compute[260935]: 2025-10-11 09:07:11.947 2 DEBUG nova.compute.manager [req-63bb350f-1c02-48b5-b2b5-971ef214e168 req-c9f499ff-daf4-4848-bd71-77f92dec20e0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Refreshing instance network info cache due to event network-changed-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:07:11 compute-0 nova_compute[260935]: 2025-10-11 09:07:11.947 2 DEBUG oslo_concurrency.lockutils [req-63bb350f-1c02-48b5-b2b5-971ef214e168 req-c9f499ff-daf4-4848-bd71-77f92dec20e0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:07:11 compute-0 kernel: tap7f81893d-38: entered promiscuous mode
Oct 11 09:07:11 compute-0 NetworkManager[44960]: <info>  [1760173631.9516] manager: (tap7f81893d-38): new Tun device (/org/freedesktop/NetworkManager/Devices/367)
Oct 11 09:07:11 compute-0 nova_compute[260935]: 2025-10-11 09:07:11.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:11 compute-0 ovn_controller[152945]: 2025-10-11T09:07:11Z|00861|binding|INFO|Claiming lport 7f81893d-380a-42b4-88a7-76a98c30a38d for this chassis.
Oct 11 09:07:11 compute-0 ovn_controller[152945]: 2025-10-11T09:07:11Z|00862|binding|INFO|7f81893d-380a-42b4-88a7-76a98c30a38d: Claiming fa:16:3e:61:f1:b7 10.100.0.6
Oct 11 09:07:11 compute-0 nova_compute[260935]: 2025-10-11 09:07:11.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:11.988 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:f1:b7 10.100.0.6'], port_security=['fa:16:3e:61:f1:b7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6520fc43-79ed-4060-85bb-dcdff5f5c101', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c221c92f100b42fbb2581f0c7035540b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '933852e8-c082-4590-9200-b967a1691dcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f1c5e2c-b27b-4d23-ad04-5134938386e4, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=7f81893d-380a-42b4-88a7-76a98c30a38d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:07:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:11.990 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 7f81893d-380a-42b4-88a7-76a98c30a38d in datapath 76432838-bd4d-4fb8-8e44-6e230b5868b8 bound to our chassis
Oct 11 09:07:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:11.993 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76432838-bd4d-4fb8-8e44-6e230b5868b8
Oct 11 09:07:11 compute-0 systemd-udevd[357117]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.035 2 DEBUG nova.network.neutron [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.037 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d60f87ad-7920-4886-b651-474eb322deab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.038 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap76432838-b1 in ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:07:12 compute-0 NetworkManager[44960]: <info>  [1760173632.0400] device (tap7f81893d-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.040 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap76432838-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.040 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9c78ed11-8398-4c9a-8cb9-bc2ff0f6f427]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:12 compute-0 NetworkManager[44960]: <info>  [1760173632.0415] device (tap7f81893d-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:07:12 compute-0 systemd-machined[215705]: New machine qemu-110-instance-00000060.
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.041 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[457339be-1501-4b7a-a5e3-33d62d2c8483]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.043 2 DEBUG nova.network.neutron [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Successfully updated port: a611854c-0a61-41b8-91ce-0c0f893aa54c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.056 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[3f8e8994-039c-4b83-b71b-a3654715ddf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:12 compute-0 ovn_controller[152945]: 2025-10-11T09:07:12Z|00863|binding|INFO|Setting lport 7f81893d-380a-42b4-88a7-76a98c30a38d ovn-installed in OVS
Oct 11 09:07:12 compute-0 ovn_controller[152945]: 2025-10-11T09:07:12Z|00864|binding|INFO|Setting lport 7f81893d-380a-42b4-88a7-76a98c30a38d up in Southbound
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:12 compute-0 systemd[1]: Started Virtual Machine qemu-110-instance-00000060.
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.083 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9da250-c0ae-413d-9bac-8ea677b7a5fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.088 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.089 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquired lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.089 2 DEBUG nova.network.neutron [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.117 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c6ec29-5460-4242-8ba0-48304b1bb47e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.125 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a12a7830-a7b9-49cd-b392-aad41dfa00d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:12 compute-0 NetworkManager[44960]: <info>  [1760173632.1258] manager: (tap76432838-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/368)
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.179 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3439922c-f0e9-401d-921c-10292398a924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.183 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d8778af9-6339-411d-ab59-135f4c2854c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.194 2 DEBUG nova.compute.manager [req-0aadb39a-5069-4869-bf34-1b9c7a7a1b96 req-2f083799-105c-4728-9368-6bf82f2baf9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-changed-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.194 2 DEBUG nova.compute.manager [req-0aadb39a-5069-4869-bf34-1b9c7a7a1b96 req-2f083799-105c-4728-9368-6bf82f2baf9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Refreshing instance network info cache due to event network-changed-a611854c-0a61-41b8-91ce-0c0f893aa54c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.195 2 DEBUG oslo_concurrency.lockutils [req-0aadb39a-5069-4869-bf34-1b9c7a7a1b96 req-2f083799-105c-4728-9368-6bf82f2baf9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:07:12 compute-0 NetworkManager[44960]: <info>  [1760173632.2155] device (tap76432838-b0): carrier: link connected
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.223 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[44a8e5eb-d06b-4baf-8f4d-bcd847281c06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.248 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[247eca88-e4c7-4f17-9868-887ef10ab394]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76432838-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:2f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547691, 'reachable_time': 27608, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357151, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.271 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[298aa085-6819-42f0-a2be-9306c2f2fd61]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe06:2fa3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547691, 'tstamp': 547691}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357152, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.295 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[69e65564-17ea-4d2f-87bb-8e31161af436]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76432838-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:2f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547691, 'reachable_time': 27608, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357153, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.336 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8166c2-16f8-4c4f-8876-7c68474ac0e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 491 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 4.3 MiB/s wr, 83 op/s
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.420 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c17db1-e4e2-437f-8d7a-9163c5a57cde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.421 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76432838-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.421 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.422 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76432838-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:12 compute-0 NetworkManager[44960]: <info>  [1760173632.4250] manager: (tap76432838-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Oct 11 09:07:12 compute-0 kernel: tap76432838-b0: entered promiscuous mode
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.430 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76432838-b0, col_values=(('external_ids', {'iface-id': 'b6ca7d68-6b07-4260-8964-929cc77a92b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:12 compute-0 ovn_controller[152945]: 2025-10-11T09:07:12Z|00865|binding|INFO|Releasing lport b6ca7d68-6b07-4260-8964-929cc77a92b2 from this chassis (sb_readonly=0)
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.457 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/76432838-bd4d-4fb8-8e44-6e230b5868b8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/76432838-bd4d-4fb8-8e44-6e230b5868b8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.458 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[033374da-61d9-4be3-85b9-07fd99e1e097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.459 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-76432838-bd4d-4fb8-8e44-6e230b5868b8
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/76432838-bd4d-4fb8-8e44-6e230b5868b8.pid.haproxy
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 76432838-bd4d-4fb8-8e44-6e230b5868b8
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:07:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:12.461 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'env', 'PROCESS_TAG=haproxy-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/76432838-bd4d-4fb8-8e44-6e230b5868b8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.486 2 DEBUG nova.network.neutron [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.870 2 DEBUG nova.network.neutron [req-cb9d72b0-cdd6-406b-8cab-854d0fbef8d9 req-7abf82ba-745c-4997-8ef7-a175fb946a24 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Updated VIF entry in instance network info cache for port 7f81893d-380a-42b4-88a7-76a98c30a38d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.871 2 DEBUG nova.network.neutron [req-cb9d72b0-cdd6-406b-8cab-854d0fbef8d9 req-7abf82ba-745c-4997-8ef7-a175fb946a24 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Updating instance_info_cache with network_info: [{"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:07:12 compute-0 nova_compute[260935]: 2025-10-11 09:07:12.903 2 DEBUG oslo_concurrency.lockutils [req-cb9d72b0-cdd6-406b-8cab-854d0fbef8d9 req-7abf82ba-745c-4997-8ef7-a175fb946a24 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-6520fc43-79ed-4060-85bb-dcdff5f5c101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:07:12 compute-0 podman[357227]: 2025-10-11 09:07:12.925531704 +0000 UTC m=+0.073085057 container create e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:07:12 compute-0 podman[357227]: 2025-10-11 09:07:12.895092496 +0000 UTC m=+0.042645919 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:07:12 compute-0 systemd[1]: Started libpod-conmon-e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819.scope.
Oct 11 09:07:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f42249320cbd9f6be1af07a22f94351bbe874386f708642b1d430373ffaae9fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:07:13 compute-0 podman[357227]: 2025-10-11 09:07:13.052479938 +0000 UTC m=+0.200033371 container init e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 09:07:13 compute-0 podman[357227]: 2025-10-11 09:07:13.062161865 +0000 UTC m=+0.209715238 container start e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.090 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173633.0890605, 6520fc43-79ed-4060-85bb-dcdff5f5c101 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.090 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] VM Started (Lifecycle Event)
Oct 11 09:07:13 compute-0 neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8[357243]: [NOTICE]   (357247) : New worker (357249) forked
Oct 11 09:07:13 compute-0 neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8[357243]: [NOTICE]   (357247) : Loading success.
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.129 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.135 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173633.0894687, 6520fc43-79ed-4060-85bb-dcdff5f5c101 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.135 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] VM Paused (Lifecycle Event)
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.172 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.177 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.230 2 DEBUG nova.network.neutron [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Updating instance_info_cache with network_info: [{"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.246 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.293 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Releasing lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.294 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance network_info: |[{"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.294 2 DEBUG oslo_concurrency.lockutils [req-63bb350f-1c02-48b5-b2b5-971ef214e168 req-c9f499ff-daf4-4848-bd71-77f92dec20e0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.295 2 DEBUG nova.network.neutron [req-63bb350f-1c02-48b5-b2b5-971ef214e168 req-c9f499ff-daf4-4848-bd71-77f92dec20e0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Refreshing network info cache for port 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.303 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Start _get_guest_xml network_info=[{"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.309 2 WARNING nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.317 2 DEBUG nova.virt.libvirt.host [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.318 2 DEBUG nova.virt.libvirt.host [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.323 2 DEBUG nova.virt.libvirt.host [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.323 2 DEBUG nova.virt.libvirt.host [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.324 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.324 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.325 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.325 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.326 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.326 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.326 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.327 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.327 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.327 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.328 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.328 2 DEBUG nova.virt.hardware [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.333 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:13 compute-0 ceph-mon[74313]: pgmap v1936: 321 pgs: 321 active+clean; 491 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 4.3 MiB/s wr, 83 op/s
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.716 2 DEBUG nova.network.neutron [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating instance_info_cache with network_info: [{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:07:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:07:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:07:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3591828787' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.842 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.864 2 DEBUG nova.storage.rbd_utils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.867 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.909 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Releasing lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.909 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance network_info: |[{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.910 2 DEBUG oslo_concurrency.lockutils [req-0aadb39a-5069-4869-bf34-1b9c7a7a1b96 req-2f083799-105c-4728-9368-6bf82f2baf9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.910 2 DEBUG nova.network.neutron [req-0aadb39a-5069-4869-bf34-1b9c7a7a1b96 req-2f083799-105c-4728-9368-6bf82f2baf9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Refreshing network info cache for port a611854c-0a61-41b8-91ce-0c0f893aa54c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.913 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Start _get_guest_xml network_info=[{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.917 2 WARNING nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.922 2 DEBUG nova.virt.libvirt.host [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.922 2 DEBUG nova.virt.libvirt.host [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.925 2 DEBUG nova.virt.libvirt.host [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.926 2 DEBUG nova.virt.libvirt.host [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.926 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.926 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.926 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.927 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.927 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.927 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.927 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.927 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.927 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.928 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.928 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.928 2 DEBUG nova.virt.hardware [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:07:13 compute-0 nova_compute[260935]: 2025-10-11 09:07:13.930 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.110 2 DEBUG nova.compute.manager [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received event network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.111 2 DEBUG oslo_concurrency.lockutils [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.111 2 DEBUG oslo_concurrency.lockutils [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.112 2 DEBUG oslo_concurrency.lockutils [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.112 2 DEBUG nova.compute.manager [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Processing event network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.113 2 DEBUG nova.compute.manager [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received event network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.113 2 DEBUG oslo_concurrency.lockutils [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.113 2 DEBUG oslo_concurrency.lockutils [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.114 2 DEBUG oslo_concurrency.lockutils [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.115 2 DEBUG nova.compute.manager [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] No waiting events found dispatching network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.116 2 WARNING nova.compute.manager [req-db52cbf2-52e1-46e6-9cf1-e4dce8df12f0 req-a830075b-86b9-46e4-8efa-1d1c8d7fe0a4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received unexpected event network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d for instance with vm_state building and task_state spawning.
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.118 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.122 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173634.1212041, 6520fc43-79ed-4060-85bb-dcdff5f5c101 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.122 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] VM Resumed (Lifecycle Event)
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.123 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.126 2 INFO nova.virt.libvirt.driver [-] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Instance spawned successfully.
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.127 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.169 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.172 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.181 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.181 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.182 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.182 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.182 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.183 2 DEBUG nova.virt.libvirt.driver [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.246 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.301 2 INFO nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Took 10.20 seconds to spawn the instance on the hypervisor.
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.302 2 DEBUG nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:07:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3364657527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.352 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 513 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 5.3 MiB/s wr, 88 op/s
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.354 2 DEBUG nova.virt.libvirt.vif [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1694931715',display_name='tempest-ServerRescueNegativeTestJSON-server-1694931715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1694931715',id=97,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c221c92f100b42fbb2581f0c7035540b',ramdisk_id='',reservation_id='r-w0ylr4kw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1667632067',owner_user_name='tempest-ServerRescueNegativeTestJSON-1667632067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:07Z,user_data=None,user_id='f52fcc072b5843a197064a063d4a5d30',uuid=f5dfbe0b-a1ff-4001-abe4-a4493c9124f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.355 2 DEBUG nova.network.os_vif_util [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converting VIF {"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.356 2 DEBUG nova.network.os_vif_util [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:5f:9b,bridge_name='br-int',has_traffic_filtering=True,id=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34749eb0-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.358 2 DEBUG nova.objects.instance [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'pci_devices' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:07:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/377748573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.376 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.413 2 DEBUG nova.storage.rbd_utils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.418 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.497 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <uuid>f5dfbe0b-a1ff-4001-abe4-a4493c9124f2</uuid>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <name>instance-00000061</name>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1694931715</nova:name>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:07:13</nova:creationTime>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <nova:user uuid="f52fcc072b5843a197064a063d4a5d30">tempest-ServerRescueNegativeTestJSON-1667632067-project-member</nova:user>
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <nova:project uuid="c221c92f100b42fbb2581f0c7035540b">tempest-ServerRescueNegativeTestJSON-1667632067</nova:project>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <nova:port uuid="34749eb0-e6d2-4c4b-a2b8-9fc709c1caca">
Oct 11 09:07:14 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <system>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <entry name="serial">f5dfbe0b-a1ff-4001-abe4-a4493c9124f2</entry>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <entry name="uuid">f5dfbe0b-a1ff-4001-abe4-a4493c9124f2</entry>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </system>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <os>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   </os>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <features>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   </features>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk">
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       </source>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config">
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       </source>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:88:5f:9b"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <target dev="tap34749eb0-e6"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/console.log" append="off"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <video>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </video>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:07:14 compute-0 nova_compute[260935]: </domain>
Oct 11 09:07:14 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.498 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Preparing to wait for external event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.499 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.499 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.500 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.501 2 DEBUG nova.virt.libvirt.vif [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1694931715',display_name='tempest-ServerRescueNegativeTestJSON-server-1694931715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1694931715',id=97,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c221c92f100b42fbb2581f0c7035540b',ramdisk_id='',reservation_id='r-w0ylr4kw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1667632067',owner_user_name='tempest-ServerRescueNegativeTestJSON-1667632067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:07Z,user_data=None,user_id='f52fcc072b5843a197064a063d4a5d30',uuid=f5dfbe0b-a1ff-4001-abe4-a4493c9124f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.502 2 DEBUG nova.network.os_vif_util [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converting VIF {"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.503 2 DEBUG nova.network.os_vif_util [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:5f:9b,bridge_name='br-int',has_traffic_filtering=True,id=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34749eb0-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.504 2 DEBUG os_vif [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:5f:9b,bridge_name='br-int',has_traffic_filtering=True,id=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34749eb0-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.514 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.518 2 INFO nova.compute.manager [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Took 11.85 seconds to build instance.
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.519 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34749eb0-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap34749eb0-e6, col_values=(('external_ids', {'iface-id': '34749eb0-e6d2-4c4b-a2b8-9fc709c1caca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:5f:9b', 'vm-uuid': 'f5dfbe0b-a1ff-4001-abe4-a4493c9124f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:14 compute-0 NetworkManager[44960]: <info>  [1760173634.5237] manager: (tap34749eb0-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.534 2 INFO os_vif [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:5f:9b,bridge_name='br-int',has_traffic_filtering=True,id=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34749eb0-e6')
Oct 11 09:07:14 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3591828787' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:14 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3364657527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:14 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/377748573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.596 2 DEBUG oslo_concurrency.lockutils [None req-a53b906c-dd89-46cd-ab9f-94c4cd475ba8 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.700 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.701 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.701 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No VIF found with MAC fa:16:3e:88:5f:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.701 2 INFO nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Using config drive
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.726 2 DEBUG nova.storage.rbd_utils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:07:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2464163947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.944 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.947 2 DEBUG nova.virt.libvirt.vif [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:07:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-581429551',display_name='tempest-ServersNegativeTestJSON-server-581429551',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-581429551',id=98,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-wp4zx0a9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:08Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.948 2 DEBUG nova.network.os_vif_util [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.951 2 DEBUG nova.network.os_vif_util [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.954 2 DEBUG nova.objects.instance [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.992 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <uuid>0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c</uuid>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <name>instance-00000062</name>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersNegativeTestJSON-server-581429551</nova:name>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:07:13</nova:creationTime>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <nova:user uuid="6b8d9d5ab01d48ae81a09f922875ea3e">tempest-ServersNegativeTestJSON-414353187-project-member</nova:user>
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <nova:project uuid="21f163e616ee4917a580701d466f7dc9">tempest-ServersNegativeTestJSON-414353187</nova:project>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <nova:port uuid="a611854c-0a61-41b8-91ce-0c0f893aa54c">
Oct 11 09:07:14 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <system>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <entry name="serial">0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c</entry>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <entry name="uuid">0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c</entry>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </system>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <os>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   </os>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <features>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   </features>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk">
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       </source>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config">
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       </source>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:07:14 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:9d:da:b0"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <target dev="tapa611854c-0a"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/console.log" append="off"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <video>
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </video>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:07:14 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:07:14 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:07:14 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:07:14 compute-0 nova_compute[260935]: </domain>
Oct 11 09:07:14 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.993 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Preparing to wait for external event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.994 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.995 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.996 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:14 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.998 2 DEBUG nova.virt.libvirt.vif [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:07:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-581429551',display_name='tempest-ServersNegativeTestJSON-server-581429551',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-581429551',id=98,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-wp4zx0a9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:08Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:14.999 2 DEBUG nova.network.os_vif_util [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.002 2 DEBUG nova.network.os_vif_util [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.003 2 DEBUG os_vif [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.004 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.005 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.008 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa611854c-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.009 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa611854c-0a, col_values=(('external_ids', {'iface-id': 'a611854c-0a61-41b8-91ce-0c0f893aa54c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:da:b0', 'vm-uuid': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:15 compute-0 NetworkManager[44960]: <info>  [1760173635.0121] manager: (tapa611854c-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/371)
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.021 2 INFO os_vif [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a')
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.101 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.102 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.102 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No VIF found with MAC fa:16:3e:9d:da:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.103 2 INFO nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Using config drive
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.134 2 DEBUG nova.storage.rbd_utils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.168 2 INFO nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Creating config drive at /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.183 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppo_5ys1n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.202 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.203 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.204 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.347 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppo_5ys1n" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.378 2 DEBUG nova.storage.rbd_utils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.383 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.445 2 DEBUG nova.network.neutron [req-63bb350f-1c02-48b5-b2b5-971ef214e168 req-c9f499ff-daf4-4848-bd71-77f92dec20e0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Updated VIF entry in instance network info cache for port 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.446 2 DEBUG nova.network.neutron [req-63bb350f-1c02-48b5-b2b5-971ef214e168 req-c9f499ff-daf4-4848-bd71-77f92dec20e0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Updating instance_info_cache with network_info: [{"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.471 2 DEBUG oslo_concurrency.lockutils [req-63bb350f-1c02-48b5-b2b5-971ef214e168 req-c9f499ff-daf4-4848-bd71-77f92dec20e0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:07:15 compute-0 ceph-mon[74313]: pgmap v1937: 321 pgs: 321 active+clean; 513 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 5.3 MiB/s wr, 88 op/s
Oct 11 09:07:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2464163947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.570 2 DEBUG oslo_concurrency.processutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.571 2 INFO nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Deleting local config drive /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config because it was imported into RBD.
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.606 2 INFO nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Creating config drive at /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.617 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9e8o79bs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:15 compute-0 NetworkManager[44960]: <info>  [1760173635.6308] manager: (tap34749eb0-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/372)
Oct 11 09:07:15 compute-0 kernel: tap34749eb0-e6: entered promiscuous mode
Oct 11 09:07:15 compute-0 ovn_controller[152945]: 2025-10-11T09:07:15Z|00866|binding|INFO|Claiming lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for this chassis.
Oct 11 09:07:15 compute-0 ovn_controller[152945]: 2025-10-11T09:07:15Z|00867|binding|INFO|34749eb0-e6d2-4c4b-a2b8-9fc709c1caca: Claiming fa:16:3e:88:5f:9b 10.100.0.12
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:15 compute-0 ovn_controller[152945]: 2025-10-11T09:07:15Z|00868|binding|INFO|Setting lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca ovn-installed in OVS
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.681 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:5f:9b 10.100.0.12'], port_security=['fa:16:3e:88:5f:9b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f5dfbe0b-a1ff-4001-abe4-a4493c9124f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c221c92f100b42fbb2581f0c7035540b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '933852e8-c082-4590-9200-b967a1691dcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f1c5e2c-b27b-4d23-ad04-5134938386e4, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:07:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.682 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca in datapath 76432838-bd4d-4fb8-8e44-6e230b5868b8 bound to our chassis
Oct 11 09:07:15 compute-0 ovn_controller[152945]: 2025-10-11T09:07:15Z|00869|binding|INFO|Setting lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca up in Southbound
Oct 11 09:07:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.683 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76432838-bd4d-4fb8-8e44-6e230b5868b8
Oct 11 09:07:15 compute-0 systemd-machined[215705]: New machine qemu-111-instance-00000061.
Oct 11 09:07:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.702 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1b746c-bc0a-444d-a2b8-8c4cea687bec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:15 compute-0 systemd[1]: Started Virtual Machine qemu-111-instance-00000061.
Oct 11 09:07:15 compute-0 systemd-udevd[357501]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:07:15 compute-0 NetworkManager[44960]: <info>  [1760173635.7470] device (tap34749eb0-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:07:15 compute-0 NetworkManager[44960]: <info>  [1760173635.7484] device (tap34749eb0-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:07:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.752 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8cdcbf-146e-4e2e-98b5-7f9e81490bd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.756 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2884b045-1b67-4230-95ca-f238338b9ac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:15 compute-0 podman[357473]: 2025-10-11 09:07:15.757657603 +0000 UTC m=+0.090390482 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.781 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9e8o79bs" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.797 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[674d6e1d-cc25-43a1-84d9-eeaf1be98813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.808 2 DEBUG nova.storage.rbd_utils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.813 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.817 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7486d2bf-ca6a-46dc-a101-fa9c510c81f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76432838-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:2f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547691, 'reachable_time': 27608, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357527, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.842 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c16d672d-98ea-4c6e-b1b6-080b3225222d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap76432838-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547707, 'tstamp': 547707}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357531, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap76432838-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547711, 'tstamp': 547711}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357531, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.843 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76432838-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.851 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76432838-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.851 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.851 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76432838-b0, col_values=(('external_ids', {'iface-id': 'b6ca7d68-6b07-4260-8964-929cc77a92b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:15.852 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.974 2 DEBUG nova.network.neutron [req-0aadb39a-5069-4869-bf34-1b9c7a7a1b96 req-2f083799-105c-4728-9368-6bf82f2baf9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updated VIF entry in instance network info cache for port a611854c-0a61-41b8-91ce-0c0f893aa54c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.975 2 DEBUG nova.network.neutron [req-0aadb39a-5069-4869-bf34-1b9c7a7a1b96 req-2f083799-105c-4728-9368-6bf82f2baf9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating instance_info_cache with network_info: [{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.984 2 DEBUG oslo_concurrency.processutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:15 compute-0 nova_compute[260935]: 2025-10-11 09:07:15.984 2 INFO nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Deleting local config drive /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config because it was imported into RBD.
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.012 2 DEBUG oslo_concurrency.lockutils [req-0aadb39a-5069-4869-bf34-1b9c7a7a1b96 req-2f083799-105c-4728-9368-6bf82f2baf9f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:07:16 compute-0 NetworkManager[44960]: <info>  [1760173636.0494] manager: (tapa611854c-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/373)
Oct 11 09:07:16 compute-0 kernel: tapa611854c-0a: entered promiscuous mode
Oct 11 09:07:16 compute-0 NetworkManager[44960]: <info>  [1760173636.0627] device (tapa611854c-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:07:16 compute-0 NetworkManager[44960]: <info>  [1760173636.0633] device (tapa611854c-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:07:16 compute-0 ovn_controller[152945]: 2025-10-11T09:07:16Z|00870|binding|INFO|Claiming lport a611854c-0a61-41b8-91ce-0c0f893aa54c for this chassis.
Oct 11 09:07:16 compute-0 ovn_controller[152945]: 2025-10-11T09:07:16Z|00871|binding|INFO|a611854c-0a61-41b8-91ce-0c0f893aa54c: Claiming fa:16:3e:9d:da:b0 10.100.0.12
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:16 compute-0 systemd-machined[215705]: New machine qemu-112-instance-00000062.
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.140 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:da:b0 10.100.0.12'], port_security=['fa:16:3e:9d:da:b0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a611854c-0a61-41b8-91ce-0c0f893aa54c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.141 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a611854c-0a61-41b8-91ce-0c0f893aa54c in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec bound to our chassis
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.143 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 09:07:16 compute-0 systemd[1]: Started Virtual Machine qemu-112-instance-00000062.
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.147 2 DEBUG nova.compute.manager [req-a96ce835-9cc2-4d8f-af67-4f14e913ad85 req-34cf876f-d892-43f1-b85b-fdd75506d7a8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.148 2 DEBUG oslo_concurrency.lockutils [req-a96ce835-9cc2-4d8f-af67-4f14e913ad85 req-34cf876f-d892-43f1-b85b-fdd75506d7a8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.148 2 DEBUG oslo_concurrency.lockutils [req-a96ce835-9cc2-4d8f-af67-4f14e913ad85 req-34cf876f-d892-43f1-b85b-fdd75506d7a8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.148 2 DEBUG oslo_concurrency.lockutils [req-a96ce835-9cc2-4d8f-af67-4f14e913ad85 req-34cf876f-d892-43f1-b85b-fdd75506d7a8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.149 2 DEBUG nova.compute.manager [req-a96ce835-9cc2-4d8f-af67-4f14e913ad85 req-34cf876f-d892-43f1-b85b-fdd75506d7a8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Processing event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.154 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4436517f-1ec2-4ec4-835b-d7fb2f9f6660]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.155 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14e82eeb-71 in ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.161 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14e82eeb-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.162 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[43701903-1a24-424d-bd63-e80e5b715833]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.165 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[063dd97f-2b49-41ce-886a-8dfca189be60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.183 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7a3ef5-2711-4549-9672-10745536cf9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:16 compute-0 ovn_controller[152945]: 2025-10-11T09:07:16Z|00872|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c ovn-installed in OVS
Oct 11 09:07:16 compute-0 ovn_controller[152945]: 2025-10-11T09:07:16Z|00873|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c up in Southbound
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.197 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cb830089-cf0d-4047-b2e2-d2295917ef6e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.240 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4ade678d-759c-43d7-83de-496e98f72302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:16 compute-0 NetworkManager[44960]: <info>  [1760173636.2471] manager: (tap14e82eeb-70): new Veth device (/org/freedesktop/NetworkManager/Devices/374)
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.245 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[56b800d9-793c-4c12-ac47-5fce38541ec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.286 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[95cd6fd1-74f3-41f1-aa76-d49333489313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.290 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc5b6ad-3fee-422b-a58d-340afc5c350e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:16 compute-0 NetworkManager[44960]: <info>  [1760173636.3161] device (tap14e82eeb-70): carrier: link connected
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.322 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3281a3f4-4bc5-4088-91f9-6b68ad219ee8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.343 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0a17facc-c17d-46d4-93da-2e93bd2d6652]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548101, 'reachable_time': 18729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357637, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 513 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 5.3 MiB/s wr, 88 op/s
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.358 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[909902b6-32c9-48a5-9b0c-44336a548ce2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:56f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548101, 'tstamp': 548101}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357638, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.380 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[69b57796-f135-44c0-bbb0-a275081b8c23]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548101, 'reachable_time': 18729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357639, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.414 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[47b59b35-02b9-4a6b-98ed-b033f5ee2607]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.470 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[210a8ee3-08b0-453d-a035-e0042cc4a012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.471 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.472 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.472 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e82eeb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:16 compute-0 NetworkManager[44960]: <info>  [1760173636.4746] manager: (tap14e82eeb-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Oct 11 09:07:16 compute-0 kernel: tap14e82eeb-70: entered promiscuous mode
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.477 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e82eeb-70, col_values=(('external_ids', {'iface-id': 'c44760fe-71c5-482d-aee7-a0b0513681b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:16 compute-0 ovn_controller[152945]: 2025-10-11T09:07:16Z|00874|binding|INFO|Releasing lport c44760fe-71c5-482d-aee7-a0b0513681b7 from this chassis (sb_readonly=0)
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.498 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.499 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0429739d-3d79-4d03-82c8-2b84232c3c79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.500 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:07:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:16.501 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'env', 'PROCESS_TAG=haproxy-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14e82eeb-74e2-4de3-9047-74da777fe1ec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.714 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173636.7137704, f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.714 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] VM Started (Lifecycle Event)
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.717 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.720 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.724 2 INFO nova.virt.libvirt.driver [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance spawned successfully.
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.724 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.758 2 DEBUG nova.compute.manager [req-c9377d82-fb2e-4f30-ace0-228b0b66d784 req-ecca4dea-a122-4f7d-b747-810797baadfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.758 2 DEBUG oslo_concurrency.lockutils [req-c9377d82-fb2e-4f30-ace0-228b0b66d784 req-ecca4dea-a122-4f7d-b747-810797baadfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.758 2 DEBUG oslo_concurrency.lockutils [req-c9377d82-fb2e-4f30-ace0-228b0b66d784 req-ecca4dea-a122-4f7d-b747-810797baadfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.758 2 DEBUG oslo_concurrency.lockutils [req-c9377d82-fb2e-4f30-ace0-228b0b66d784 req-ecca4dea-a122-4f7d-b747-810797baadfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.759 2 DEBUG nova.compute.manager [req-c9377d82-fb2e-4f30-ace0-228b0b66d784 req-ecca4dea-a122-4f7d-b747-810797baadfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Processing event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.775 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.778 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.785 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.785 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.786 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.786 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.787 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.787 2 DEBUG nova.virt.libvirt.driver [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.826 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.826 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173636.7150145, f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.826 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] VM Paused (Lifecycle Event)
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.927 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.932 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173636.7192333, f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.932 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] VM Resumed (Lifecycle Event)
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.969 2 INFO nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Took 9.17 seconds to spawn the instance on the hypervisor.
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.970 2 DEBUG nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:16 compute-0 podman[357670]: 2025-10-11 09:07:16.97799207 +0000 UTC m=+0.055576628 container create 80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:07:16 compute-0 nova_compute[260935]: 2025-10-11 09:07:16.995 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:17 compute-0 nova_compute[260935]: 2025-10-11 09:07:17.000 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:07:17 compute-0 systemd[1]: Started libpod-conmon-80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748.scope.
Oct 11 09:07:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:07:17 compute-0 podman[357670]: 2025-10-11 09:07:16.950097333 +0000 UTC m=+0.027681921 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:07:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046b66f7870015e0b115ddde4088213d8c930e4e0ddec3540ecd817899022603/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:07:17 compute-0 podman[357670]: 2025-10-11 09:07:17.069336517 +0000 UTC m=+0.146921085 container init 80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 09:07:17 compute-0 podman[357670]: 2025-10-11 09:07:17.075126343 +0000 UTC m=+0.152710901 container start 80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 11 09:07:17 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[357685]: [NOTICE]   (357689) : New worker (357691) forked
Oct 11 09:07:17 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[357685]: [NOTICE]   (357689) : Loading success.
Oct 11 09:07:17 compute-0 nova_compute[260935]: 2025-10-11 09:07:17.372 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:07:17 compute-0 nova_compute[260935]: 2025-10-11 09:07:17.480 2 INFO nova.compute.manager [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Took 11.29 seconds to build instance.
Oct 11 09:07:17 compute-0 nova_compute[260935]: 2025-10-11 09:07:17.543 2 DEBUG oslo_concurrency.lockutils [None req-c1c53135-8b25-424a-af01-9028ba12c254 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:17 compute-0 ceph-mon[74313]: pgmap v1938: 321 pgs: 321 active+clean; 513 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 5.3 MiB/s wr, 88 op/s
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.329 2 DEBUG nova.compute.manager [req-2f0309e3-30f2-4d53-a9e6-2987dcb33822 req-72c2f569-8757-42b4-80cd-2b97d7dbe50c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.329 2 DEBUG oslo_concurrency.lockutils [req-2f0309e3-30f2-4d53-a9e6-2987dcb33822 req-72c2f569-8757-42b4-80cd-2b97d7dbe50c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.331 2 DEBUG oslo_concurrency.lockutils [req-2f0309e3-30f2-4d53-a9e6-2987dcb33822 req-72c2f569-8757-42b4-80cd-2b97d7dbe50c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.332 2 DEBUG oslo_concurrency.lockutils [req-2f0309e3-30f2-4d53-a9e6-2987dcb33822 req-72c2f569-8757-42b4-80cd-2b97d7dbe50c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.332 2 DEBUG nova.compute.manager [req-2f0309e3-30f2-4d53-a9e6-2987dcb33822 req-72c2f569-8757-42b4-80cd-2b97d7dbe50c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] No waiting events found dispatching network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.332 2 WARNING nova.compute.manager [req-2f0309e3-30f2-4d53-a9e6-2987dcb33822 req-72c2f569-8757-42b4-80cd-2b97d7dbe50c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received unexpected event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for instance with vm_state active and task_state None.
Oct 11 09:07:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 514 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.4 MiB/s wr, 175 op/s
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.462 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173638.4623191, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.463 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Started (Lifecycle Event)
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.465 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.472 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.476 2 INFO nova.virt.libvirt.driver [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance spawned successfully.
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.476 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.553 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.558 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.614 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.615 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.616 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.616 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.617 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.618 2 DEBUG nova.virt.libvirt.driver [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.679 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.679 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173638.4625487, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.680 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Paused (Lifecycle Event)
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.770 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.776 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173638.4676518, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.776 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Resumed (Lifecycle Event)
Oct 11 09:07:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.933 2 INFO nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Took 10.27 seconds to spawn the instance on the hypervisor.
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.933 2 DEBUG nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.946 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:18 compute-0 nova_compute[260935]: 2025-10-11 09:07:18.951 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:07:19 compute-0 nova_compute[260935]: 2025-10-11 09:07:19.062 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:07:19 compute-0 nova_compute[260935]: 2025-10-11 09:07:19.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:19 compute-0 nova_compute[260935]: 2025-10-11 09:07:19.148 2 INFO nova.compute.manager [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Took 12.33 seconds to build instance.
Oct 11 09:07:19 compute-0 nova_compute[260935]: 2025-10-11 09:07:19.176 2 DEBUG nova.compute.manager [req-0d989d29-a205-422a-9078-6c00fd632446 req-7b47c299-07de-4b94-b462-6a114e7d95c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:19 compute-0 nova_compute[260935]: 2025-10-11 09:07:19.176 2 DEBUG oslo_concurrency.lockutils [req-0d989d29-a205-422a-9078-6c00fd632446 req-7b47c299-07de-4b94-b462-6a114e7d95c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:19 compute-0 nova_compute[260935]: 2025-10-11 09:07:19.177 2 DEBUG oslo_concurrency.lockutils [req-0d989d29-a205-422a-9078-6c00fd632446 req-7b47c299-07de-4b94-b462-6a114e7d95c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:19 compute-0 nova_compute[260935]: 2025-10-11 09:07:19.177 2 DEBUG oslo_concurrency.lockutils [req-0d989d29-a205-422a-9078-6c00fd632446 req-7b47c299-07de-4b94-b462-6a114e7d95c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:19 compute-0 nova_compute[260935]: 2025-10-11 09:07:19.178 2 DEBUG nova.compute.manager [req-0d989d29-a205-422a-9078-6c00fd632446 req-7b47c299-07de-4b94-b462-6a114e7d95c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:07:19 compute-0 nova_compute[260935]: 2025-10-11 09:07:19.178 2 WARNING nova.compute.manager [req-0d989d29-a205-422a-9078-6c00fd632446 req-7b47c299-07de-4b94-b462-6a114e7d95c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state active and task_state None.
Oct 11 09:07:19 compute-0 nova_compute[260935]: 2025-10-11 09:07:19.241 2 DEBUG oslo_concurrency.lockutils [None req-b6692d21-50b0-4c84-aa61-2f92accd1c83 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:19 compute-0 nova_compute[260935]: 2025-10-11 09:07:19.455 2 INFO nova.compute.manager [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Rescuing
Oct 11 09:07:19 compute-0 nova_compute[260935]: 2025-10-11 09:07:19.456 2 DEBUG oslo_concurrency.lockutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:07:19 compute-0 nova_compute[260935]: 2025-10-11 09:07:19.456 2 DEBUG oslo_concurrency.lockutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquired lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:07:19 compute-0 nova_compute[260935]: 2025-10-11 09:07:19.456 2 DEBUG nova.network.neutron [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:07:19 compute-0 ceph-mon[74313]: pgmap v1939: 321 pgs: 321 active+clean; 514 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.4 MiB/s wr, 175 op/s
Oct 11 09:07:20 compute-0 nova_compute[260935]: 2025-10-11 09:07:20.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 514 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 148 op/s
Oct 11 09:07:21 compute-0 ceph-mon[74313]: pgmap v1940: 321 pgs: 321 active+clean; 514 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 148 op/s
Oct 11 09:07:21 compute-0 nova_compute[260935]: 2025-10-11 09:07:21.663 2 DEBUG nova.network.neutron [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Updating instance_info_cache with network_info: [{"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:07:21 compute-0 nova_compute[260935]: 2025-10-11 09:07:21.757 2 DEBUG oslo_concurrency.lockutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Releasing lock "refresh_cache-f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:07:21 compute-0 podman[357742]: 2025-10-11 09:07:21.869792496 +0000 UTC m=+0.152919167 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:07:22 compute-0 nova_compute[260935]: 2025-10-11 09:07:22.196 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 09:07:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 514 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 237 op/s
Oct 11 09:07:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:22.605 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:07:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:22.607 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:07:22 compute-0 nova_compute[260935]: 2025-10-11 09:07:22.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:23 compute-0 ceph-mon[74313]: pgmap v1941: 321 pgs: 321 active+clean; 514 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 237 op/s
Oct 11 09:07:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:07:24 compute-0 nova_compute[260935]: 2025-10-11 09:07:24.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1942: 321 pgs: 321 active+clean; 514 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 1.1 MiB/s wr, 218 op/s
Oct 11 09:07:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:07:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:07:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:07:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:07:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:07:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:07:25 compute-0 nova_compute[260935]: 2025-10-11 09:07:25.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:25 compute-0 ceph-mon[74313]: pgmap v1942: 321 pgs: 321 active+clean; 514 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 1.1 MiB/s wr, 218 op/s
Oct 11 09:07:25 compute-0 podman[357759]: 2025-10-11 09:07:25.806144086 +0000 UTC m=+0.104071852 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct 11 09:07:25 compute-0 podman[357760]: 2025-10-11 09:07:25.84797223 +0000 UTC m=+0.145685480 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 11 09:07:26 compute-0 nova_compute[260935]: 2025-10-11 09:07:26.068 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "02489367-ca19-4428-9727-6d8ab66e1b46" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:26 compute-0 nova_compute[260935]: 2025-10-11 09:07:26.069 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:26 compute-0 nova_compute[260935]: 2025-10-11 09:07:26.172 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:07:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 514 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 25 KiB/s wr, 213 op/s
Oct 11 09:07:26 compute-0 nova_compute[260935]: 2025-10-11 09:07:26.421 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:26 compute-0 nova_compute[260935]: 2025-10-11 09:07:26.422 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:26 compute-0 nova_compute[260935]: 2025-10-11 09:07:26.433 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:07:26 compute-0 nova_compute[260935]: 2025-10-11 09:07:26.433 2 INFO nova.compute.claims [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:07:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:26.611 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:26 compute-0 nova_compute[260935]: 2025-10-11 09:07:26.768 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:07:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1560192760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:27 compute-0 ovn_controller[152945]: 2025-10-11T09:07:27Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:61:f1:b7 10.100.0.6
Oct 11 09:07:27 compute-0 ovn_controller[152945]: 2025-10-11T09:07:27Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:f1:b7 10.100.0.6
Oct 11 09:07:27 compute-0 nova_compute[260935]: 2025-10-11 09:07:27.314 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:27 compute-0 nova_compute[260935]: 2025-10-11 09:07:27.325 2 DEBUG nova.compute.provider_tree [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:07:27 compute-0 nova_compute[260935]: 2025-10-11 09:07:27.379 2 DEBUG nova.scheduler.client.report [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:07:27 compute-0 nova_compute[260935]: 2025-10-11 09:07:27.454 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:27 compute-0 nova_compute[260935]: 2025-10-11 09:07:27.455 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:07:27 compute-0 nova_compute[260935]: 2025-10-11 09:07:27.600 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:07:27 compute-0 nova_compute[260935]: 2025-10-11 09:07:27.601 2 DEBUG nova.network.neutron [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:07:27 compute-0 ceph-mon[74313]: pgmap v1943: 321 pgs: 321 active+clean; 514 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 25 KiB/s wr, 213 op/s
Oct 11 09:07:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1560192760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:27 compute-0 nova_compute[260935]: 2025-10-11 09:07:27.708 2 INFO nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:07:27 compute-0 nova_compute[260935]: 2025-10-11 09:07:27.808 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.005 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.005 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.063 2 DEBUG nova.policy [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6b8d9d5ab01d48ae81a09f922875ea3e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '21f163e616ee4917a580701d466f7dc9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.093 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.093 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.100 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.101 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.101 2 INFO nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Creating image(s)
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.125 2 DEBUG nova.storage.rbd_utils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 02489367-ca19-4428-9727-6d8ab66e1b46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.146 2 DEBUG nova.storage.rbd_utils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 02489367-ca19-4428-9727-6d8ab66e1b46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.171 2 DEBUG nova.storage.rbd_utils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 02489367-ca19-4428-9727-6d8ab66e1b46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.175 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.276 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.277 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.278 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.278 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.302 2 DEBUG nova.storage.rbd_utils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 02489367-ca19-4428-9727-6d8ab66e1b46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.306 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 02489367-ca19-4428-9727-6d8ab66e1b46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 546 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 2.2 MiB/s wr, 276 op/s
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.440 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.441 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.441 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.620 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 02489367-ca19-4428-9727-6d8ab66e1b46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.723 2 DEBUG nova.storage.rbd_utils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] resizing rbd image 02489367-ca19-4428-9727-6d8ab66e1b46_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:07:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.859 2 DEBUG nova.objects.instance [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'migration_context' on Instance uuid 02489367-ca19-4428-9727-6d8ab66e1b46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.902 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.903 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Ensure instance console log exists: /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.904 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.905 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.906 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:28 compute-0 nova_compute[260935]: 2025-10-11 09:07:28.908 2 DEBUG nova.network.neutron [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Successfully created port: 91e3c288-76e0-4c61-9a40-fde0087f647b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:07:29 compute-0 nova_compute[260935]: 2025-10-11 09:07:29.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:29 compute-0 ceph-mon[74313]: pgmap v1944: 321 pgs: 321 active+clean; 546 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 2.2 MiB/s wr, 276 op/s
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.283 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.332 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.333 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.334 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.334 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.334 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.335 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.335 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.335 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.335 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.336 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:07:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1945: 321 pgs: 321 active+clean; 546 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 189 op/s
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.383 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.383 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.384 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.384 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.385 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.547 2 DEBUG nova.network.neutron [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Successfully updated port: 91e3c288-76e0-4c61-9a40-fde0087f647b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.597 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "refresh_cache-02489367-ca19-4428-9727-6d8ab66e1b46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.597 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquired lock "refresh_cache-02489367-ca19-4428-9727-6d8ab66e1b46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.598 2 DEBUG nova.network.neutron [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.836 2 DEBUG nova.compute.manager [req-788b4548-ea66-4fd6-8770-9372dd1ad601 req-3e74e85b-384a-4ccf-b061-89097d89dba2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received event network-changed-91e3c288-76e0-4c61-9a40-fde0087f647b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.837 2 DEBUG nova.compute.manager [req-788b4548-ea66-4fd6-8770-9372dd1ad601 req-3e74e85b-384a-4ccf-b061-89097d89dba2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Refreshing instance network info cache due to event network-changed-91e3c288-76e0-4c61-9a40-fde0087f647b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.837 2 DEBUG oslo_concurrency.lockutils [req-788b4548-ea66-4fd6-8770-9372dd1ad601 req-3e74e85b-384a-4ccf-b061-89097d89dba2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-02489367-ca19-4428-9727-6d8ab66e1b46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.852 2 DEBUG nova.network.neutron [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:07:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:07:30 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2730568505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:30 compute-0 nova_compute[260935]: 2025-10-11 09:07:30.992 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.174 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.175 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.181 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.181 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.186 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.187 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.192 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.193 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.197 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.197 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.197 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.200 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.200 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.204 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.204 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.489 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.490 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2567MB free_disk=59.72262954711914GB free_vcpus=1 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.490 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.491 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:31 compute-0 ceph-mon[74313]: pgmap v1945: 321 pgs: 321 active+clean; 546 MiB data, 915 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 189 op/s
Oct 11 09:07:31 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2730568505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.640 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.641 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.642 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.642 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 15633aee-234a-4417-b5ea-f35f13820404 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.643 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 6520fc43-79ed-4060-85bb-dcdff5f5c101 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.643 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.643 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.644 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 02489367-ca19-4428-9727-6d8ab66e1b46 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.644 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 8 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.645 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=8 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:07:31 compute-0 ovn_controller[152945]: 2025-10-11T09:07:31Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:5f:9b 10.100.0.12
Oct 11 09:07:31 compute-0 ovn_controller[152945]: 2025-10-11T09:07:31Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:5f:9b 10.100.0.12
Oct 11 09:07:31 compute-0 nova_compute[260935]: 2025-10-11 09:07:31.866 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.264 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 09:07:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 613 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.2 MiB/s wr, 294 op/s
Oct 11 09:07:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:07:32 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2240840697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.420 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.427 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.439 2 DEBUG nova.network.neutron [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Updating instance_info_cache with network_info: [{"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.482 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.573 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Releasing lock "refresh_cache-02489367-ca19-4428-9727-6d8ab66e1b46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.573 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Instance network_info: |[{"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.574 2 DEBUG oslo_concurrency.lockutils [req-788b4548-ea66-4fd6-8770-9372dd1ad601 req-3e74e85b-384a-4ccf-b061-89097d89dba2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-02489367-ca19-4428-9727-6d8ab66e1b46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.574 2 DEBUG nova.network.neutron [req-788b4548-ea66-4fd6-8770-9372dd1ad601 req-3e74e85b-384a-4ccf-b061-89097d89dba2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Refreshing network info cache for port 91e3c288-76e0-4c61-9a40-fde0087f647b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.576 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Start _get_guest_xml network_info=[{"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.580 2 WARNING nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.584 2 DEBUG nova.virt.libvirt.host [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.584 2 DEBUG nova.virt.libvirt.host [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.587 2 DEBUG nova.virt.libvirt.host [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.588 2 DEBUG nova.virt.libvirt.host [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.588 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.588 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.588 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.588 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.589 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.589 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.589 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.589 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.589 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.589 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.590 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.590 2 DEBUG nova.virt.hardware [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.592 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:32 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2240840697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.768 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:07:32 compute-0 nova_compute[260935]: 2025-10-11 09:07:32.769 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:07:33 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4214189064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.075 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.113 2 DEBUG nova.storage.rbd_utils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 02489367-ca19-4428-9727-6d8ab66e1b46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.120 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:07:33 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4175653657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.612 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.615 2 DEBUG nova.virt.libvirt.vif [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-322685699',display_name='tempest-ServersNegativeTestJSON-server-322685699',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-322685699',id=99,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-m7jzw6nr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:27Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=02489367-ca19-4428-9727-6d8ab66e1b46,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.616 2 DEBUG nova.network.os_vif_util [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.619 2 DEBUG nova.network.os_vif_util [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:f9:d8,bridge_name='br-int',has_traffic_filtering=True,id=91e3c288-76e0-4c61-9a40-fde0087f647b,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3c288-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.622 2 DEBUG nova.objects.instance [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 02489367-ca19-4428-9727-6d8ab66e1b46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:33 compute-0 ceph-mon[74313]: pgmap v1946: 321 pgs: 321 active+clean; 613 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.2 MiB/s wr, 294 op/s
Oct 11 09:07:33 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4214189064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:33 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4175653657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:33 compute-0 ovn_controller[152945]: 2025-10-11T09:07:33Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:da:b0 10.100.0.12
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.680 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:07:33 compute-0 nova_compute[260935]:   <uuid>02489367-ca19-4428-9727-6d8ab66e1b46</uuid>
Oct 11 09:07:33 compute-0 nova_compute[260935]:   <name>instance-00000063</name>
Oct 11 09:07:33 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:07:33 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:07:33 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersNegativeTestJSON-server-322685699</nova:name>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:07:32</nova:creationTime>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:07:33 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:07:33 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:07:33 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:07:33 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:07:33 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:07:33 compute-0 nova_compute[260935]:         <nova:user uuid="6b8d9d5ab01d48ae81a09f922875ea3e">tempest-ServersNegativeTestJSON-414353187-project-member</nova:user>
Oct 11 09:07:33 compute-0 nova_compute[260935]:         <nova:project uuid="21f163e616ee4917a580701d466f7dc9">tempest-ServersNegativeTestJSON-414353187</nova:project>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:07:33 compute-0 nova_compute[260935]:         <nova:port uuid="91e3c288-76e0-4c61-9a40-fde0087f647b">
Oct 11 09:07:33 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:07:33 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:07:33 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <system>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <entry name="serial">02489367-ca19-4428-9727-6d8ab66e1b46</entry>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <entry name="uuid">02489367-ca19-4428-9727-6d8ab66e1b46</entry>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     </system>
Oct 11 09:07:33 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:07:33 compute-0 nova_compute[260935]:   <os>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:   </os>
Oct 11 09:07:33 compute-0 nova_compute[260935]:   <features>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:   </features>
Oct 11 09:07:33 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:07:33 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:07:33 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/02489367-ca19-4428-9727-6d8ab66e1b46_disk">
Oct 11 09:07:33 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       </source>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:07:33 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/02489367-ca19-4428-9727-6d8ab66e1b46_disk.config">
Oct 11 09:07:33 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       </source>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:07:33 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:56:f9:d8"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <target dev="tap91e3c288-76"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46/console.log" append="off"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <video>
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     </video>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:07:33 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:07:33 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:07:33 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:07:33 compute-0 nova_compute[260935]: </domain>
Oct 11 09:07:33 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.681 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Preparing to wait for external event network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.681 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.682 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.682 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.683 2 DEBUG nova.virt.libvirt.vif [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-322685699',display_name='tempest-ServersNegativeTestJSON-server-322685699',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-322685699',id=99,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-m7jzw6nr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:27Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=02489367-ca19-4428-9727-6d8ab66e1b46,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.683 2 DEBUG nova.network.os_vif_util [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.684 2 DEBUG nova.network.os_vif_util [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:f9:d8,bridge_name='br-int',has_traffic_filtering=True,id=91e3c288-76e0-4c61-9a40-fde0087f647b,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3c288-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:07:33 compute-0 ovn_controller[152945]: 2025-10-11T09:07:33Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:da:b0 10.100.0.12
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.684 2 DEBUG os_vif [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:f9:d8,bridge_name='br-int',has_traffic_filtering=True,id=91e3c288-76e0-4c61-9a40-fde0087f647b,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3c288-76') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.689 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.691 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.703 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91e3c288-76, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap91e3c288-76, col_values=(('external_ids', {'iface-id': '91e3c288-76e0-4c61-9a40-fde0087f647b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:f9:d8', 'vm-uuid': '02489367-ca19-4428-9727-6d8ab66e1b46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:33 compute-0 NetworkManager[44960]: <info>  [1760173653.7107] manager: (tap91e3c288-76): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.721 2 INFO os_vif [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:f9:d8,bridge_name='br-int',has_traffic_filtering=True,id=91e3c288-76e0-4c61-9a40-fde0087f647b,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3c288-76')
Oct 11 09:07:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.812 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.813 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.813 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No VIF found with MAC fa:16:3e:56:f9:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.813 2 INFO nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Using config drive
Oct 11 09:07:33 compute-0 nova_compute[260935]: 2025-10-11 09:07:33.849 2 DEBUG nova.storage.rbd_utils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 02489367-ca19-4428-9727-6d8ab66e1b46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:34 compute-0 nova_compute[260935]: 2025-10-11 09:07:34.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1947: 321 pgs: 321 active+clean; 647 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 8.1 MiB/s wr, 229 op/s
Oct 11 09:07:34 compute-0 kernel: tap34749eb0-e6 (unregistering): left promiscuous mode
Oct 11 09:07:34 compute-0 NetworkManager[44960]: <info>  [1760173654.5544] device (tap34749eb0-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:07:34 compute-0 ovn_controller[152945]: 2025-10-11T09:07:34Z|00875|binding|INFO|Releasing lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca from this chassis (sb_readonly=0)
Oct 11 09:07:34 compute-0 ovn_controller[152945]: 2025-10-11T09:07:34Z|00876|binding|INFO|Setting lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca down in Southbound
Oct 11 09:07:34 compute-0 ovn_controller[152945]: 2025-10-11T09:07:34Z|00877|binding|INFO|Removing iface tap34749eb0-e6 ovn-installed in OVS
Oct 11 09:07:34 compute-0 nova_compute[260935]: 2025-10-11 09:07:34.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:34 compute-0 nova_compute[260935]: 2025-10-11 09:07:34.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.585 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:5f:9b 10.100.0.12'], port_security=['fa:16:3e:88:5f:9b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f5dfbe0b-a1ff-4001-abe4-a4493c9124f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c221c92f100b42fbb2581f0c7035540b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '933852e8-c082-4590-9200-b967a1691dcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f1c5e2c-b27b-4d23-ad04-5134938386e4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:07:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.588 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca in datapath 76432838-bd4d-4fb8-8e44-6e230b5868b8 unbound from our chassis
Oct 11 09:07:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.590 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76432838-bd4d-4fb8-8e44-6e230b5868b8
Oct 11 09:07:34 compute-0 nova_compute[260935]: 2025-10-11 09:07:34.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.608 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8e00e19f-1549-4d41-a1a5-6b3f4b5d57b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:34 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000061.scope: Deactivated successfully.
Oct 11 09:07:34 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000061.scope: Consumed 14.713s CPU time.
Oct 11 09:07:34 compute-0 systemd-machined[215705]: Machine qemu-111-instance-00000061 terminated.
Oct 11 09:07:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.642 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[890b325a-3a44-4e54-bb1e-7be5c237e6c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.646 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a95a576e-69b1-4388-b138-29e962f10c4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:34 compute-0 nova_compute[260935]: 2025-10-11 09:07:34.656 2 INFO nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Creating config drive at /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46/disk.config
Oct 11 09:07:34 compute-0 nova_compute[260935]: 2025-10-11 09:07:34.667 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyzie0oqd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.688 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5f84f57c-1cd2-42d2-bd30-e38909f11404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.712 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[adfbe43c-9a1e-4680-9140-0229a7d18589]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76432838-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:2f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547691, 'reachable_time': 43408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358128, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.730 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ad99086b-c8f1-4453-bba1-8823bcacf3b4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap76432838-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547707, 'tstamp': 547707}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358129, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap76432838-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547711, 'tstamp': 547711}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358129, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.733 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76432838-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:34 compute-0 nova_compute[260935]: 2025-10-11 09:07:34.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.744 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76432838-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.745 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.745 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76432838-b0, col_values=(('external_ids', {'iface-id': 'b6ca7d68-6b07-4260-8964-929cc77a92b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:34.745 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:34 compute-0 nova_compute[260935]: 2025-10-11 09:07:34.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:34 compute-0 nova_compute[260935]: 2025-10-11 09:07:34.833 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyzie0oqd" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:34 compute-0 nova_compute[260935]: 2025-10-11 09:07:34.873 2 DEBUG nova.storage.rbd_utils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 02489367-ca19-4428-9727-6d8ab66e1b46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:34 compute-0 nova_compute[260935]: 2025-10-11 09:07:34.878 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46/disk.config 02489367-ca19-4428-9727-6d8ab66e1b46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.052 2 DEBUG nova.compute.manager [req-16c99b88-665e-4db1-8994-77379c416931 req-a1bf17bd-ddc4-4503-885c-73ed172b1c99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-unplugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.053 2 DEBUG oslo_concurrency.lockutils [req-16c99b88-665e-4db1-8994-77379c416931 req-a1bf17bd-ddc4-4503-885c-73ed172b1c99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.054 2 DEBUG oslo_concurrency.lockutils [req-16c99b88-665e-4db1-8994-77379c416931 req-a1bf17bd-ddc4-4503-885c-73ed172b1c99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.054 2 DEBUG oslo_concurrency.lockutils [req-16c99b88-665e-4db1-8994-77379c416931 req-a1bf17bd-ddc4-4503-885c-73ed172b1c99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.054 2 DEBUG nova.compute.manager [req-16c99b88-665e-4db1-8994-77379c416931 req-a1bf17bd-ddc4-4503-885c-73ed172b1c99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] No waiting events found dispatching network-vif-unplugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.054 2 WARNING nova.compute.manager [req-16c99b88-665e-4db1-8994-77379c416931 req-a1bf17bd-ddc4-4503-885c-73ed172b1c99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received unexpected event network-vif-unplugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for instance with vm_state active and task_state rescuing.
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.068 2 DEBUG oslo_concurrency.processutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46/disk.config 02489367-ca19-4428-9727-6d8ab66e1b46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.068 2 INFO nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Deleting local config drive /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46/disk.config because it was imported into RBD.
Oct 11 09:07:35 compute-0 systemd-udevd[358120]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:07:35 compute-0 kernel: tap91e3c288-76: entered promiscuous mode
Oct 11 09:07:35 compute-0 NetworkManager[44960]: <info>  [1760173655.1588] manager: (tap91e3c288-76): new Tun device (/org/freedesktop/NetworkManager/Devices/377)
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:35 compute-0 ovn_controller[152945]: 2025-10-11T09:07:35Z|00878|binding|INFO|Claiming lport 91e3c288-76e0-4c61-9a40-fde0087f647b for this chassis.
Oct 11 09:07:35 compute-0 ovn_controller[152945]: 2025-10-11T09:07:35Z|00879|binding|INFO|91e3c288-76e0-4c61-9a40-fde0087f647b: Claiming fa:16:3e:56:f9:d8 10.100.0.11
Oct 11 09:07:35 compute-0 NetworkManager[44960]: <info>  [1760173655.1773] device (tap91e3c288-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:07:35 compute-0 NetworkManager[44960]: <info>  [1760173655.1792] device (tap91e3c288-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:07:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.184 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:f9:d8 10.100.0.11'], port_security=['fa:16:3e:56:f9:d8 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '02489367-ca19-4428-9727-6d8ab66e1b46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=91e3c288-76e0-4c61-9a40-fde0087f647b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:07:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.186 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 91e3c288-76e0-4c61-9a40-fde0087f647b in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec bound to our chassis
Oct 11 09:07:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.189 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 09:07:35 compute-0 systemd-machined[215705]: New machine qemu-113-instance-00000063.
Oct 11 09:07:35 compute-0 ovn_controller[152945]: 2025-10-11T09:07:35Z|00880|binding|INFO|Setting lport 91e3c288-76e0-4c61-9a40-fde0087f647b ovn-installed in OVS
Oct 11 09:07:35 compute-0 ovn_controller[152945]: 2025-10-11T09:07:35Z|00881|binding|INFO|Setting lport 91e3c288-76e0-4c61-9a40-fde0087f647b up in Southbound
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.223 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f44c63c9-6f75-4c04-a443-bfaac2848ec2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:35 compute-0 systemd[1]: Started Virtual Machine qemu-113-instance-00000063.
Oct 11 09:07:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.273 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8894f231-ab51-44cd-ad93-fc253bcc3fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.280 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e0ebcf33-095f-4073-83d4-644edd7d19f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.286 2 INFO nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance shutdown successfully after 13 seconds.
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.304 2 INFO nova.virt.libvirt.driver [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance destroyed successfully.
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.305 2 DEBUG nova.objects.instance [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'numa_topology' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.334 2 INFO nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Attempting rescue
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.336 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 11 09:07:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.336 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0b95137e-856a-48e7-a646-8dca04214103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.344 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.345 2 INFO nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Creating image(s)
Oct 11 09:07:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.367 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[228d4389-1c2a-414c-8b92-563928cf68af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548101, 'reachable_time': 37420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358209, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.384 2 DEBUG nova.storage.rbd_utils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.394 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3892e0f5-6b3b-4973-bd16-4a68178d6799]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14e82eeb-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548114, 'tstamp': 548114}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358225, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14e82eeb-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548116, 'tstamp': 548116}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358225, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.397 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.401 2 DEBUG nova.objects.instance [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'trusted_certs' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.406 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e82eeb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.406 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.407 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e82eeb-70, col_values=(('external_ids', {'iface-id': 'c44760fe-71c5-482d-aee7-a0b0513681b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:35.408 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.473 2 DEBUG nova.storage.rbd_utils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.509 2 DEBUG nova.storage.rbd_utils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.514 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.566 2 DEBUG nova.network.neutron [req-788b4548-ea66-4fd6-8770-9372dd1ad601 req-3e74e85b-384a-4ccf-b061-89097d89dba2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Updated VIF entry in instance network info cache for port 91e3c288-76e0-4c61-9a40-fde0087f647b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.567 2 DEBUG nova.network.neutron [req-788b4548-ea66-4fd6-8770-9372dd1ad601 req-3e74e85b-384a-4ccf-b061-89097d89dba2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Updating instance_info_cache with network_info: [{"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.597 2 DEBUG oslo_concurrency.lockutils [req-788b4548-ea66-4fd6-8770-9372dd1ad601 req-3e74e85b-384a-4ccf-b061-89097d89dba2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-02489367-ca19-4428-9727-6d8ab66e1b46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.611 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.612 2 DEBUG oslo_concurrency.lockutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.613 2 DEBUG oslo_concurrency.lockutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.613 2 DEBUG oslo_concurrency.lockutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.639 2 DEBUG nova.storage.rbd_utils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.643 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:35 compute-0 ceph-mon[74313]: pgmap v1947: 321 pgs: 321 active+clean; 647 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 8.1 MiB/s wr, 229 op/s
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.989 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:35 compute-0 nova_compute[260935]: 2025-10-11 09:07:35.990 2 DEBUG nova.objects.instance [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'migration_context' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.033 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.034 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Start _get_guest_xml network_info=[{"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "vif_mac": "fa:16:3e:88:5f:9b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.035 2 DEBUG nova.objects.instance [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'resources' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.107 2 WARNING nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.116 2 DEBUG nova.virt.libvirt.host [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.117 2 DEBUG nova.virt.libvirt.host [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.122 2 DEBUG nova.virt.libvirt.host [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.123 2 DEBUG nova.virt.libvirt.host [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.123 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.124 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.125 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.125 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.126 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.126 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.126 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.127 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.127 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.128 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.128 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.129 2 DEBUG nova.virt.hardware [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.129 2 DEBUG nova.objects.instance [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'vcpu_model' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.171 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.343 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173656.3427625, 02489367-ca19-4428-9727-6d8ab66e1b46 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.344 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] VM Started (Lifecycle Event)
Oct 11 09:07:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 647 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 881 KiB/s rd, 8.1 MiB/s wr, 192 op/s
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.398 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.404 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173656.3429835, 02489367-ca19-4428-9727-6d8ab66e1b46 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.405 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] VM Paused (Lifecycle Event)
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.453 2 DEBUG nova.compute.manager [req-b2f38212-1c50-488f-8b15-dac8f7997e0b req-1f2d7cbc-86a6-496c-814a-e6ac947db80d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received event network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.453 2 DEBUG oslo_concurrency.lockutils [req-b2f38212-1c50-488f-8b15-dac8f7997e0b req-1f2d7cbc-86a6-496c-814a-e6ac947db80d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.454 2 DEBUG oslo_concurrency.lockutils [req-b2f38212-1c50-488f-8b15-dac8f7997e0b req-1f2d7cbc-86a6-496c-814a-e6ac947db80d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.455 2 DEBUG oslo_concurrency.lockutils [req-b2f38212-1c50-488f-8b15-dac8f7997e0b req-1f2d7cbc-86a6-496c-814a-e6ac947db80d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.455 2 DEBUG nova.compute.manager [req-b2f38212-1c50-488f-8b15-dac8f7997e0b req-1f2d7cbc-86a6-496c-814a-e6ac947db80d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Processing event network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.456 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.460 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.464 2 INFO nova.virt.libvirt.driver [-] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Instance spawned successfully.
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.465 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.476 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.486 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173656.4601095, 02489367-ca19-4428-9727-6d8ab66e1b46 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.486 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] VM Resumed (Lifecycle Event)
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.562 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.570 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.570 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.571 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.572 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.574 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.575 2 DEBUG nova.virt.libvirt.driver [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.585 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:07:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:07:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3357456046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3357456046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.662 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.663 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.709 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.729 2 INFO nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Took 8.63 seconds to spawn the instance on the hypervisor.
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.730 2 DEBUG nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.886 2 INFO nova.compute.manager [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Took 10.53 seconds to build instance.
Oct 11 09:07:36 compute-0 nova_compute[260935]: 2025-10-11 09:07:36.943 2 DEBUG oslo_concurrency.lockutils [None req-18ed1bb4-2807-4456-bd6d-09e6368d595b 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:07:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2550403082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.170 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.171 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.221 2 DEBUG nova.compute.manager [req-c91df565-b811-4d5b-b0ab-16aed7688eb5 req-ed2b8c0e-2954-497b-911a-60c823ceac0b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.222 2 DEBUG oslo_concurrency.lockutils [req-c91df565-b811-4d5b-b0ab-16aed7688eb5 req-ed2b8c0e-2954-497b-911a-60c823ceac0b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.222 2 DEBUG oslo_concurrency.lockutils [req-c91df565-b811-4d5b-b0ab-16aed7688eb5 req-ed2b8c0e-2954-497b-911a-60c823ceac0b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.223 2 DEBUG oslo_concurrency.lockutils [req-c91df565-b811-4d5b-b0ab-16aed7688eb5 req-ed2b8c0e-2954-497b-911a-60c823ceac0b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.223 2 DEBUG nova.compute.manager [req-c91df565-b811-4d5b-b0ab-16aed7688eb5 req-ed2b8c0e-2954-497b-911a-60c823ceac0b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] No waiting events found dispatching network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.223 2 WARNING nova.compute.manager [req-c91df565-b811-4d5b-b0ab-16aed7688eb5 req-ed2b8c0e-2954-497b-911a-60c823ceac0b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received unexpected event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for instance with vm_state active and task_state rescuing.
Oct 11 09:07:37 compute-0 sshd-session[358350]: Invalid user huang from 152.32.213.170 port 60332
Oct 11 09:07:37 compute-0 sshd-session[358350]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:07:37 compute-0 sshd-session[358350]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:07:37 compute-0 ceph-mon[74313]: pgmap v1948: 321 pgs: 321 active+clean; 647 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 881 KiB/s rd, 8.1 MiB/s wr, 192 op/s
Oct 11 09:07:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2550403082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:07:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3449958463' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.696 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.698 2 DEBUG nova.virt.libvirt.vif [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1694931715',display_name='tempest-ServerRescueNegativeTestJSON-server-1694931715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1694931715',id=97,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:07:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c221c92f100b42fbb2581f0c7035540b',ramdisk_id='',reservation_id='r-w0ylr4kw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1667632067',owner_user_name='tempest-ServerRescueNegativeTestJSON-1667632067-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:17Z,user_data=None,user_id='f52fcc072b5843a197064a063d4a5d30',uuid=f5dfbe0b-a1ff-4001-abe4-a4493c9124f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "vif_mac": "fa:16:3e:88:5f:9b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.699 2 DEBUG nova.network.os_vif_util [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converting VIF {"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "vif_mac": "fa:16:3e:88:5f:9b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.700 2 DEBUG nova.network.os_vif_util [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:5f:9b,bridge_name='br-int',has_traffic_filtering=True,id=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34749eb0-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.702 2 DEBUG nova.objects.instance [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'pci_devices' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.746 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:07:37 compute-0 nova_compute[260935]:   <uuid>f5dfbe0b-a1ff-4001-abe4-a4493c9124f2</uuid>
Oct 11 09:07:37 compute-0 nova_compute[260935]:   <name>instance-00000061</name>
Oct 11 09:07:37 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:07:37 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:07:37 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1694931715</nova:name>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:07:36</nova:creationTime>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:07:37 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:07:37 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:07:37 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:07:37 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:07:37 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:07:37 compute-0 nova_compute[260935]:         <nova:user uuid="f52fcc072b5843a197064a063d4a5d30">tempest-ServerRescueNegativeTestJSON-1667632067-project-member</nova:user>
Oct 11 09:07:37 compute-0 nova_compute[260935]:         <nova:project uuid="c221c92f100b42fbb2581f0c7035540b">tempest-ServerRescueNegativeTestJSON-1667632067</nova:project>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:07:37 compute-0 nova_compute[260935]:         <nova:port uuid="34749eb0-e6d2-4c4b-a2b8-9fc709c1caca">
Oct 11 09:07:37 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:07:37 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:07:37 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <system>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <entry name="serial">f5dfbe0b-a1ff-4001-abe4-a4493c9124f2</entry>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <entry name="uuid">f5dfbe0b-a1ff-4001-abe4-a4493c9124f2</entry>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     </system>
Oct 11 09:07:37 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:07:37 compute-0 nova_compute[260935]:   <os>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:   </os>
Oct 11 09:07:37 compute-0 nova_compute[260935]:   <features>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:   </features>
Oct 11 09:07:37 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:07:37 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:07:37 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.rescue">
Oct 11 09:07:37 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       </source>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:07:37 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk">
Oct 11 09:07:37 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       </source>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:07:37 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <target dev="vdb" bus="virtio"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config.rescue">
Oct 11 09:07:37 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       </source>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:07:37 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:88:5f:9b"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <target dev="tap34749eb0-e6"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/console.log" append="off"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <video>
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     </video>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:07:37 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:07:37 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:07:37 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:07:37 compute-0 nova_compute[260935]: </domain>
Oct 11 09:07:37 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.753 2 INFO nova.virt.libvirt.driver [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance destroyed successfully.
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.979 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.980 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.980 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.980 2 DEBUG nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] No VIF found with MAC fa:16:3e:88:5f:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:07:37 compute-0 nova_compute[260935]: 2025-10-11 09:07:37.981 2 INFO nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Using config drive
Oct 11 09:07:38 compute-0 nova_compute[260935]: 2025-10-11 09:07:38.006 2 DEBUG nova.storage.rbd_utils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:38 compute-0 nova_compute[260935]: 2025-10-11 09:07:38.099 2 DEBUG nova.objects.instance [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'ec2_ids' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:38 compute-0 nova_compute[260935]: 2025-10-11 09:07:38.210 2 DEBUG nova.objects.instance [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'keypairs' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 705 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 10 MiB/s wr, 265 op/s
Oct 11 09:07:38 compute-0 nova_compute[260935]: 2025-10-11 09:07:38.636 2 DEBUG nova.compute.manager [req-eaf3687b-023b-49f0-823d-b66a13c227c9 req-516e0026-cc4c-4b59-870c-ddb789fab311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received event network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:38 compute-0 nova_compute[260935]: 2025-10-11 09:07:38.637 2 DEBUG oslo_concurrency.lockutils [req-eaf3687b-023b-49f0-823d-b66a13c227c9 req-516e0026-cc4c-4b59-870c-ddb789fab311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:38 compute-0 nova_compute[260935]: 2025-10-11 09:07:38.637 2 DEBUG oslo_concurrency.lockutils [req-eaf3687b-023b-49f0-823d-b66a13c227c9 req-516e0026-cc4c-4b59-870c-ddb789fab311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:38 compute-0 nova_compute[260935]: 2025-10-11 09:07:38.637 2 DEBUG oslo_concurrency.lockutils [req-eaf3687b-023b-49f0-823d-b66a13c227c9 req-516e0026-cc4c-4b59-870c-ddb789fab311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:38 compute-0 nova_compute[260935]: 2025-10-11 09:07:38.637 2 DEBUG nova.compute.manager [req-eaf3687b-023b-49f0-823d-b66a13c227c9 req-516e0026-cc4c-4b59-870c-ddb789fab311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] No waiting events found dispatching network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:07:38 compute-0 nova_compute[260935]: 2025-10-11 09:07:38.638 2 WARNING nova.compute.manager [req-eaf3687b-023b-49f0-823d-b66a13c227c9 req-516e0026-cc4c-4b59-870c-ddb789fab311 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received unexpected event network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b for instance with vm_state active and task_state None.
Oct 11 09:07:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3449958463' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:07:38 compute-0 nova_compute[260935]: 2025-10-11 09:07:38.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:07:38 compute-0 nova_compute[260935]: 2025-10-11 09:07:38.809 2 INFO nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Creating config drive at /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config.rescue
Oct 11 09:07:38 compute-0 nova_compute[260935]: 2025-10-11 09:07:38.814 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplq_hy6r9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:38 compute-0 nova_compute[260935]: 2025-10-11 09:07:38.960 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplq_hy6r9" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:38 compute-0 nova_compute[260935]: 2025-10-11 09:07:38.987 2 DEBUG nova.storage.rbd_utils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] rbd image f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:07:38 compute-0 nova_compute[260935]: 2025-10-11 09:07:38.991 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config.rescue f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.029 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "02489367-ca19-4428-9727-6d8ab66e1b46" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.030 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.030 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.031 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.031 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.033 2 INFO nova.compute.manager [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Terminating instance
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.034 2 DEBUG nova.compute.manager [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:07:39 compute-0 kernel: tap91e3c288-76 (unregistering): left promiscuous mode
Oct 11 09:07:39 compute-0 NetworkManager[44960]: <info>  [1760173659.0872] device (tap91e3c288-76): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:39 compute-0 ovn_controller[152945]: 2025-10-11T09:07:39Z|00882|binding|INFO|Releasing lport 91e3c288-76e0-4c61-9a40-fde0087f647b from this chassis (sb_readonly=0)
Oct 11 09:07:39 compute-0 ovn_controller[152945]: 2025-10-11T09:07:39Z|00883|binding|INFO|Setting lport 91e3c288-76e0-4c61-9a40-fde0087f647b down in Southbound
Oct 11 09:07:39 compute-0 ovn_controller[152945]: 2025-10-11T09:07:39Z|00884|binding|INFO|Removing iface tap91e3c288-76 ovn-installed in OVS
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.136 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:f9:d8 10.100.0.11'], port_security=['fa:16:3e:56:f9:d8 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '02489367-ca19-4428-9727-6d8ab66e1b46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=91e3c288-76e0-4c61-9a40-fde0087f647b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.139 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 91e3c288-76e0-4c61-9a40-fde0087f647b in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec unbound from our chassis
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.142 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 09:07:39 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d00000063.scope: Deactivated successfully.
Oct 11 09:07:39 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d00000063.scope: Consumed 3.582s CPU time.
Oct 11 09:07:39 compute-0 systemd-machined[215705]: Machine qemu-113-instance-00000063 terminated.
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.175 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6f28e77f-040a-477f-a8a9-7487fda5125b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.197 2 DEBUG oslo_concurrency.processutils [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config.rescue f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.198 2 INFO nova.virt.libvirt.driver [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Deleting local config drive /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2/disk.config.rescue because it was imported into RBD.
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.218 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[039715c1-148c-4ba9-bb9b-2b6c86571aa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.222 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a104a861-5ba5-4bae-ad62-a375d62c3132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.257 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[517a96a1-bc24-4290-88b9-e0bbbfc16679]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 systemd-udevd[358482]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.276 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[156e84a1-2109-4107-b09e-e33871ea97eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 874, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 874, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548101, 'reachable_time': 37420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358497, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 NetworkManager[44960]: <info>  [1760173659.2808] manager: (tap91e3c288-76): new Tun device (/org/freedesktop/NetworkManager/Devices/378)
Oct 11 09:07:39 compute-0 kernel: tap91e3c288-76: entered promiscuous mode
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:39 compute-0 kernel: tap91e3c288-76 (unregistering): left promiscuous mode
Oct 11 09:07:39 compute-0 ovn_controller[152945]: 2025-10-11T09:07:39Z|00885|binding|INFO|Claiming lport 91e3c288-76e0-4c61-9a40-fde0087f647b for this chassis.
Oct 11 09:07:39 compute-0 ovn_controller[152945]: 2025-10-11T09:07:39Z|00886|binding|INFO|91e3c288-76e0-4c61-9a40-fde0087f647b: Claiming fa:16:3e:56:f9:d8 10.100.0.11
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.308 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4a22e422-ca00-4639-ab36-11f944f730b2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14e82eeb-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548114, 'tstamp': 548114}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358502, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14e82eeb-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548116, 'tstamp': 548116}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358502, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.311 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.320 2 INFO nova.virt.libvirt.driver [-] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Instance destroyed successfully.
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.320 2 DEBUG nova.objects.instance [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'resources' on Instance uuid 02489367-ca19-4428-9727-6d8ab66e1b46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:39 compute-0 ovn_controller[152945]: 2025-10-11T09:07:39Z|00887|binding|INFO|Setting lport 91e3c288-76e0-4c61-9a40-fde0087f647b ovn-installed in OVS
Oct 11 09:07:39 compute-0 NetworkManager[44960]: <info>  [1760173659.3273] manager: (tap34749eb0-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/379)
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:39 compute-0 ovn_controller[152945]: 2025-10-11T09:07:39Z|00888|if_status|INFO|Dropped 2 log messages in last 439 seconds (most recently, 439 seconds ago) due to excessive rate
Oct 11 09:07:39 compute-0 ovn_controller[152945]: 2025-10-11T09:07:39Z|00889|if_status|INFO|Not setting lport 91e3c288-76e0-4c61-9a40-fde0087f647b down as sb is readonly
Oct 11 09:07:39 compute-0 kernel: tap34749eb0-e6: entered promiscuous mode
Oct 11 09:07:39 compute-0 ovn_controller[152945]: 2025-10-11T09:07:39Z|00890|binding|INFO|Releasing lport 91e3c288-76e0-4c61-9a40-fde0087f647b from this chassis (sb_readonly=0)
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.335 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:f9:d8 10.100.0.11'], port_security=['fa:16:3e:56:f9:d8 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '02489367-ca19-4428-9727-6d8ab66e1b46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=91e3c288-76e0-4c61-9a40-fde0087f647b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:07:39 compute-0 NetworkManager[44960]: <info>  [1760173659.3510] device (tap34749eb0-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:07:39 compute-0 NetworkManager[44960]: <info>  [1760173659.3525] device (tap34749eb0-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:07:39 compute-0 ovn_controller[152945]: 2025-10-11T09:07:39Z|00891|binding|INFO|Claiming lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for this chassis.
Oct 11 09:07:39 compute-0 ovn_controller[152945]: 2025-10-11T09:07:39Z|00892|binding|INFO|34749eb0-e6d2-4c4b-a2b8-9fc709c1caca: Claiming fa:16:3e:88:5f:9b 10.100.0.12
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.359 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:f9:d8 10.100.0.11'], port_security=['fa:16:3e:56:f9:d8 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '02489367-ca19-4428-9727-6d8ab66e1b46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=91e3c288-76e0-4c61-9a40-fde0087f647b) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.361 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e82eeb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.361 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.362 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e82eeb-70, col_values=(('external_ids', {'iface-id': 'c44760fe-71c5-482d-aee7-a0b0513681b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.363 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.363 2 DEBUG nova.virt.libvirt.vif [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:07:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-322685699',display_name='tempest-ServersNegativeTestJSON-server-322685699',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-322685699',id=99,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:07:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-m7jzw6nr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:07:36Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=02489367-ca19-4428-9727-6d8ab66e1b46,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.364 2 DEBUG nova.network.os_vif_util [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "91e3c288-76e0-4c61-9a40-fde0087f647b", "address": "fa:16:3e:56:f9:d8", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3c288-76", "ovs_interfaceid": "91e3c288-76e0-4c61-9a40-fde0087f647b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.365 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 91e3c288-76e0-4c61-9a40-fde0087f647b in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec unbound from our chassis
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.365 2 DEBUG nova.network.os_vif_util [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:f9:d8,bridge_name='br-int',has_traffic_filtering=True,id=91e3c288-76e0-4c61-9a40-fde0087f647b,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3c288-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.366 2 DEBUG os_vif [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:f9:d8,bridge_name='br-int',has_traffic_filtering=True,id=91e3c288-76e0-4c61-9a40-fde0087f647b,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3c288-76') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.368 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.369 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91e3c288-76, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:07:39 compute-0 ovn_controller[152945]: 2025-10-11T09:07:39Z|00893|binding|INFO|Setting lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca ovn-installed in OVS
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:39 compute-0 sshd-session[358350]: Failed password for invalid user huang from 152.32.213.170 port 60332 ssh2
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.384 2 INFO os_vif [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:f9:d8,bridge_name='br-int',has_traffic_filtering=True,id=91e3c288-76e0-4c61-9a40-fde0087f647b,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3c288-76')
Oct 11 09:07:39 compute-0 systemd-machined[215705]: New machine qemu-114-instance-00000061.
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.395 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c83326d3-b833-4a92-85dc-d7026f590bcf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 systemd[1]: Started Virtual Machine qemu-114-instance-00000061.
Oct 11 09:07:39 compute-0 ovn_controller[152945]: 2025-10-11T09:07:39Z|00894|binding|INFO|Setting lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca up in Southbound
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.410 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:5f:9b 10.100.0.12'], port_security=['fa:16:3e:88:5f:9b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f5dfbe0b-a1ff-4001-abe4-a4493c9124f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c221c92f100b42fbb2581f0c7035540b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '933852e8-c082-4590-9200-b967a1691dcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f1c5e2c-b27b-4d23-ad04-5134938386e4, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.447 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e019f1ee-37fe-4cd0-a617-3150475ec890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.454 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[56ccd447-9a81-4097-974f-8e6fab801124]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.508 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[53f2b591-665c-445c-8069-56d1c19a161b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.548 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4afb42ed-1045-41a8-a4d3-a310415da124]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 874, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 874, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548101, 'reachable_time': 37420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358544, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.578 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab485c5-bbf7-42f0-a347-847612fdc1a2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14e82eeb-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548114, 'tstamp': 548114}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358545, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14e82eeb-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548116, 'tstamp': 548116}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358545, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.581 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.584 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e82eeb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.585 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.586 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e82eeb-70, col_values=(('external_ids', {'iface-id': 'c44760fe-71c5-482d-aee7-a0b0513681b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.586 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.589 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 91e3c288-76e0-4c61-9a40-fde0087f647b in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec unbound from our chassis
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.592 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.620 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0af62b12-b0bb-4cbd-a3cd-479b2d0d86f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 ceph-mon[74313]: pgmap v1949: 321 pgs: 321 active+clean; 705 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 10 MiB/s wr, 265 op/s
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.696 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c7989b-9c2d-4998-862f-a095c8a23b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.705 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b8c8b3e3-5bac-4a6d-bb1c-d7b4a35de931]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.755 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c9537497-93ce-469e-a8c6-06bd30af4b48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 sshd-session[358350]: Received disconnect from 152.32.213.170 port 60332:11: Bye Bye [preauth]
Oct 11 09:07:39 compute-0 sshd-session[358350]: Disconnected from invalid user huang 152.32.213.170 port 60332 [preauth]
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.788 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb3b651-7138-4a6a-b793-5ffcf157b81e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 874, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 874, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548101, 'reachable_time': 37420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358552, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.824 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[95189f4c-4c0a-4499-9477-9d8556f333c1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14e82eeb-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548114, 'tstamp': 548114}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358553, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14e82eeb-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548116, 'tstamp': 548116}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358553, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.828 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.833 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e82eeb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.833 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.834 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e82eeb-70, col_values=(('external_ids', {'iface-id': 'c44760fe-71c5-482d-aee7-a0b0513681b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.835 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.841 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca in datapath 76432838-bd4d-4fb8-8e44-6e230b5868b8 unbound from our chassis
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.844 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76432838-bd4d-4fb8-8e44-6e230b5868b8
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.850 2 DEBUG nova.compute.manager [req-3e04bbb9-0b7a-4771-9618-71c7208f474c req-199ac017-85ee-4f2d-b64a-9bdcec49da71 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.852 2 DEBUG oslo_concurrency.lockutils [req-3e04bbb9-0b7a-4771-9618-71c7208f474c req-199ac017-85ee-4f2d-b64a-9bdcec49da71 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.852 2 DEBUG oslo_concurrency.lockutils [req-3e04bbb9-0b7a-4771-9618-71c7208f474c req-199ac017-85ee-4f2d-b64a-9bdcec49da71 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.853 2 DEBUG oslo_concurrency.lockutils [req-3e04bbb9-0b7a-4771-9618-71c7208f474c req-199ac017-85ee-4f2d-b64a-9bdcec49da71 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.854 2 DEBUG nova.compute.manager [req-3e04bbb9-0b7a-4771-9618-71c7208f474c req-199ac017-85ee-4f2d-b64a-9bdcec49da71 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] No waiting events found dispatching network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.855 2 WARNING nova.compute.manager [req-3e04bbb9-0b7a-4771-9618-71c7208f474c req-199ac017-85ee-4f2d-b64a-9bdcec49da71 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received unexpected event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for instance with vm_state active and task_state rescuing.
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.885 2 INFO nova.virt.libvirt.driver [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Deleting instance files /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46_del
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.885 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e6409ba5-dd58-4058-87e2-580b199f0b9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 nova_compute[260935]: 2025-10-11 09:07:39.887 2 INFO nova.virt.libvirt.driver [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Deletion of /var/lib/nova/instances/02489367-ca19-4428-9727-6d8ab66e1b46_del complete
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.930 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ea9ec12a-973b-4b5b-9242-57d308f43b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.936 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a6babe-b15d-458d-854d-06ddbcd198cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:39.990 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5964d806-6cb0-4c09-ba29-aad136f55bca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:40.016 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[67f71799-df17-466a-86c7-f43cedc4836f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76432838-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:2f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547691, 'reachable_time': 43408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358618, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:40.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[566ca55a-abda-4d03-9d70-1ac349a607ad]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap76432838-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547707, 'tstamp': 547707}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358619, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap76432838-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547711, 'tstamp': 547711}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358619, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:40.035 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76432838-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:40.089 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76432838-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:40.089 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:40.090 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76432838-b0, col_values=(('external_ids', {'iface-id': 'b6ca7d68-6b07-4260-8964-929cc77a92b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:40.090 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.093 2 INFO nova.compute.manager [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Took 1.06 seconds to destroy the instance on the hypervisor.
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.094 2 DEBUG oslo.service.loopingcall [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.095 2 DEBUG nova.compute.manager [-] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.096 2 DEBUG nova.network.neutron [-] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:07:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 705 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 7.8 MiB/s wr, 202 op/s
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.574 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.576 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173660.5741084, f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.577 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] VM Resumed (Lifecycle Event)
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.582 2 DEBUG nova.compute.manager [None req-25246455-e498-45d1-a432-7e80017b2ff9 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.738 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.743 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.809 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173660.5744529, f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.810 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] VM Started (Lifecycle Event)
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.859 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.863 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.894 2 DEBUG nova.compute.manager [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received event network-vif-unplugged-91e3c288-76e0-4c61-9a40-fde0087f647b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.895 2 DEBUG oslo_concurrency.lockutils [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.896 2 DEBUG oslo_concurrency.lockutils [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.896 2 DEBUG oslo_concurrency.lockutils [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.897 2 DEBUG nova.compute.manager [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] No waiting events found dispatching network-vif-unplugged-91e3c288-76e0-4c61-9a40-fde0087f647b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.897 2 DEBUG nova.compute.manager [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received event network-vif-unplugged-91e3c288-76e0-4c61-9a40-fde0087f647b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.897 2 DEBUG nova.compute.manager [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received event network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.898 2 DEBUG oslo_concurrency.lockutils [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.898 2 DEBUG oslo_concurrency.lockutils [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.898 2 DEBUG oslo_concurrency.lockutils [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.899 2 DEBUG nova.compute.manager [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] No waiting events found dispatching network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.899 2 WARNING nova.compute.manager [req-5377cf6c-8f78-4f84-b711-614667f27f4d req-5d78ac9d-dc3c-489e-acec-0a57e8b521b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received unexpected event network-vif-plugged-91e3c288-76e0-4c61-9a40-fde0087f647b for instance with vm_state active and task_state deleting.
Oct 11 09:07:40 compute-0 nova_compute[260935]: 2025-10-11 09:07:40.954 2 DEBUG nova.network.neutron [-] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:07:41 compute-0 nova_compute[260935]: 2025-10-11 09:07:41.032 2 INFO nova.compute.manager [-] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Took 0.94 seconds to deallocate network for instance.
Oct 11 09:07:41 compute-0 nova_compute[260935]: 2025-10-11 09:07:41.194 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:41 compute-0 nova_compute[260935]: 2025-10-11 09:07:41.195 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:41 compute-0 nova_compute[260935]: 2025-10-11 09:07:41.411 2 DEBUG oslo_concurrency.processutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:41 compute-0 ceph-mon[74313]: pgmap v1950: 321 pgs: 321 active+clean; 705 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 7.8 MiB/s wr, 202 op/s
Oct 11 09:07:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:07:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2743563293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:41 compute-0 nova_compute[260935]: 2025-10-11 09:07:41.963 2 DEBUG oslo_concurrency.processutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:41 compute-0 nova_compute[260935]: 2025-10-11 09:07:41.972 2 DEBUG nova.compute.provider_tree [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:07:42 compute-0 nova_compute[260935]: 2025-10-11 09:07:42.009 2 DEBUG nova.compute.manager [req-88f2c59c-2a23-4fae-82f0-5a81075fb305 req-7840a850-cbc2-4d20-8eaf-afb7b99bb814 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:42 compute-0 nova_compute[260935]: 2025-10-11 09:07:42.010 2 DEBUG oslo_concurrency.lockutils [req-88f2c59c-2a23-4fae-82f0-5a81075fb305 req-7840a850-cbc2-4d20-8eaf-afb7b99bb814 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:42 compute-0 nova_compute[260935]: 2025-10-11 09:07:42.011 2 DEBUG oslo_concurrency.lockutils [req-88f2c59c-2a23-4fae-82f0-5a81075fb305 req-7840a850-cbc2-4d20-8eaf-afb7b99bb814 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:42 compute-0 nova_compute[260935]: 2025-10-11 09:07:42.011 2 DEBUG oslo_concurrency.lockutils [req-88f2c59c-2a23-4fae-82f0-5a81075fb305 req-7840a850-cbc2-4d20-8eaf-afb7b99bb814 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:42 compute-0 nova_compute[260935]: 2025-10-11 09:07:42.011 2 DEBUG nova.compute.manager [req-88f2c59c-2a23-4fae-82f0-5a81075fb305 req-7840a850-cbc2-4d20-8eaf-afb7b99bb814 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] No waiting events found dispatching network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:07:42 compute-0 nova_compute[260935]: 2025-10-11 09:07:42.012 2 WARNING nova.compute.manager [req-88f2c59c-2a23-4fae-82f0-5a81075fb305 req-7840a850-cbc2-4d20-8eaf-afb7b99bb814 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received unexpected event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for instance with vm_state rescued and task_state None.
Oct 11 09:07:42 compute-0 nova_compute[260935]: 2025-10-11 09:07:42.012 2 DEBUG nova.compute.manager [req-88f2c59c-2a23-4fae-82f0-5a81075fb305 req-7840a850-cbc2-4d20-8eaf-afb7b99bb814 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Received event network-vif-deleted-91e3c288-76e0-4c61-9a40-fde0087f647b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:42 compute-0 nova_compute[260935]: 2025-10-11 09:07:42.061 2 DEBUG nova.scheduler.client.report [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:07:42 compute-0 nova_compute[260935]: 2025-10-11 09:07:42.142 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:42 compute-0 nova_compute[260935]: 2025-10-11 09:07:42.197 2 INFO nova.scheduler.client.report [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Deleted allocations for instance 02489367-ca19-4428-9727-6d8ab66e1b46
Oct 11 09:07:42 compute-0 nova_compute[260935]: 2025-10-11 09:07:42.354 2 DEBUG oslo_concurrency.lockutils [None req-4e0ad04b-15ca-4ec2-a8c0-a7cd32858035 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "02489367-ca19-4428-9727-6d8ab66e1b46" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.324s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 674 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.9 MiB/s wr, 316 op/s
Oct 11 09:07:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2743563293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:43 compute-0 ceph-mon[74313]: pgmap v1951: 321 pgs: 321 active+clean; 674 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.9 MiB/s wr, 316 op/s
Oct 11 09:07:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:07:44 compute-0 nova_compute[260935]: 2025-10-11 09:07:44.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:44 compute-0 nova_compute[260935]: 2025-10-11 09:07:44.176 2 INFO nova.compute.manager [None req-f7cb1d73-c488-44d7-b3cb-1394c9065d01 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Pausing
Oct 11 09:07:44 compute-0 nova_compute[260935]: 2025-10-11 09:07:44.177 2 DEBUG nova.objects.instance [None req-f7cb1d73-c488-44d7-b3cb-1394c9065d01 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'flavor' on Instance uuid 6520fc43-79ed-4060-85bb-dcdff5f5c101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:44 compute-0 nova_compute[260935]: 2025-10-11 09:07:44.252 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173664.2511065, 6520fc43-79ed-4060-85bb-dcdff5f5c101 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:07:44 compute-0 nova_compute[260935]: 2025-10-11 09:07:44.252 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] VM Paused (Lifecycle Event)
Oct 11 09:07:44 compute-0 nova_compute[260935]: 2025-10-11 09:07:44.255 2 DEBUG nova.compute.manager [None req-f7cb1d73-c488-44d7-b3cb-1394c9065d01 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:44 compute-0 nova_compute[260935]: 2025-10-11 09:07:44.302 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:44 compute-0 nova_compute[260935]: 2025-10-11 09:07:44.311 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:07:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 658 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.8 MiB/s wr, 246 op/s
Oct 11 09:07:44 compute-0 nova_compute[260935]: 2025-10-11 09:07:44.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:44 compute-0 nova_compute[260935]: 2025-10-11 09:07:44.400 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 11 09:07:45 compute-0 ceph-mon[74313]: pgmap v1952: 321 pgs: 321 active+clean; 658 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.8 MiB/s wr, 246 op/s
Oct 11 09:07:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 658 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.9 MiB/s wr, 222 op/s
Oct 11 09:07:46 compute-0 podman[358643]: 2025-10-11 09:07:46.801229709 +0000 UTC m=+0.094426046 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 09:07:47 compute-0 ceph-mon[74313]: pgmap v1953: 321 pgs: 321 active+clean; 658 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.9 MiB/s wr, 222 op/s
Oct 11 09:07:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 658 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.9 MiB/s wr, 222 op/s
Oct 11 09:07:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.111 2 WARNING nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Timeout waiting for ['network-vif-plugged-074db183-8679-40f2-b39d-06759a8dfceb'] for instance with vm_state building and task_state spawning. Event states are: network-vif-plugged-074db183-8679-40f2-b39d-06759a8dfceb: timed out after 300.00 seconds: eventlet.timeout.Timeout: 300 seconds
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.143 2 INFO nova.compute.manager [None req-4bb4a7d3-a6d1-470b-99df-bf48b915bce2 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Unpausing
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.144 2 DEBUG nova.objects.instance [None req-4bb4a7d3-a6d1-470b-99df-bf48b915bce2 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'flavor' on Instance uuid 6520fc43-79ed-4060-85bb-dcdff5f5c101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:49 compute-0 kernel: tap074db183-86 (unregistering): left promiscuous mode
Oct 11 09:07:49 compute-0 NetworkManager[44960]: <info>  [1760173669.1597] device (tap074db183-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:49 compute-0 ovn_controller[152945]: 2025-10-11T09:07:49Z|00895|binding|INFO|Releasing lport 074db183-8679-40f2-b39d-06759a8dfceb from this chassis (sb_readonly=0)
Oct 11 09:07:49 compute-0 ovn_controller[152945]: 2025-10-11T09:07:49Z|00896|binding|INFO|Setting lport 074db183-8679-40f2-b39d-06759a8dfceb down in Southbound
Oct 11 09:07:49 compute-0 ovn_controller[152945]: 2025-10-11T09:07:49Z|00897|binding|INFO|Removing iface tap074db183-86 ovn-installed in OVS
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:49 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000057.scope: Deactivated successfully.
Oct 11 09:07:49 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000057.scope: Consumed 1.435s CPU time.
Oct 11 09:07:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:49.231 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:2b:94 10.100.0.8'], port_security=['fa:16:3e:33:2b:94 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '15633aee-234a-4417-b5ea-f35f13820404', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4686205-cbf0-4221-bc49-ebb890c4a59f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '11b44ad9193e4e43838d52056ccf413e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4b0cfb76-aebd-4cfc-96f5-00bacc345d65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9601b6e8-d9bc-46ca-99e8-33ec15f713e5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=074db183-8679-40f2-b39d-06759a8dfceb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:07:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:49.234 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 074db183-8679-40f2-b39d-06759a8dfceb in datapath e4686205-cbf0-4221-bc49-ebb890c4a59f unbound from our chassis
Oct 11 09:07:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:49.236 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e4686205-cbf0-4221-bc49-ebb890c4a59f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 11 09:07:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:49.237 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fda9f397-8df4-4f48-8db2-dbe17f7d3ad6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:49 compute-0 systemd-machined[215705]: Machine qemu-99-instance-00000057 terminated.
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.249 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173669.2490623, 6520fc43-79ed-4060-85bb-dcdff5f5c101 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.249 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] VM Resumed (Lifecycle Event)
Oct 11 09:07:49 compute-0 virtqemud[260524]: argument unsupported: QEMU guest agent is not configured
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.256 2 DEBUG nova.virt.libvirt.guest [None req-4bb4a7d3-a6d1-470b-99df-bf48b915bce2 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.256 2 DEBUG nova.compute.manager [None req-4bb4a7d3-a6d1-470b-99df-bf48b915bce2 f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.316 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.326 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:07:49 compute-0 NetworkManager[44960]: <info>  [1760173669.3361] manager: (tap074db183-86): new Tun device (/org/freedesktop/NetworkManager/Devices/380)
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.355 2 DEBUG nova.virt.libvirt.vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:00:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2012807637',display_name='tempest-ServerRescueTestJSON-server-2012807637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2012807637',id=87,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-es16o3e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-1667208638-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:00:52Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=15633aee-234a-4417-b5ea-f35f13820404,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.355 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converting VIF {"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.356 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.357 2 DEBUG os_vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.364 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap074db183-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.369 2 INFO os_vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86')
Oct 11 09:07:49 compute-0 ceph-mon[74313]: pgmap v1954: 321 pgs: 321 active+clean; 658 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.9 MiB/s wr, 222 op/s
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.812 2 INFO nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Deleting instance files /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404_del
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.812 2 INFO nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Deletion of /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404_del complete
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.954 2 DEBUG nova.compute.manager [req-4b68501c-3cf2-44d0-ba9b-a1165a3b4936 req-cd2dda02-ff6a-45d7-a39b-067380bf2bd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Received event network-vif-unplugged-074db183-8679-40f2-b39d-06759a8dfceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.955 2 DEBUG oslo_concurrency.lockutils [req-4b68501c-3cf2-44d0-ba9b-a1165a3b4936 req-cd2dda02-ff6a-45d7-a39b-067380bf2bd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "15633aee-234a-4417-b5ea-f35f13820404-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.955 2 DEBUG oslo_concurrency.lockutils [req-4b68501c-3cf2-44d0-ba9b-a1165a3b4936 req-cd2dda02-ff6a-45d7-a39b-067380bf2bd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.955 2 DEBUG oslo_concurrency.lockutils [req-4b68501c-3cf2-44d0-ba9b-a1165a3b4936 req-cd2dda02-ff6a-45d7-a39b-067380bf2bd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.955 2 DEBUG nova.compute.manager [req-4b68501c-3cf2-44d0-ba9b-a1165a3b4936 req-cd2dda02-ff6a-45d7-a39b-067380bf2bd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] No event matching network-vif-unplugged-074db183-8679-40f2-b39d-06759a8dfceb in dict_keys([('network-vif-plugged', '074db183-8679-40f2-b39d-06759a8dfceb')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.955 2 DEBUG nova.compute.manager [req-4b68501c-3cf2-44d0-ba9b-a1165a3b4936 req-cd2dda02-ff6a-45d7-a39b-067380bf2bd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Received event network-vif-unplugged-074db183-8679-40f2-b39d-06759a8dfceb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Instance failed to spawn: nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] Traceback (most recent call last):
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7750, in _create_guest_with_network
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     guest = self._create_guest(
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib64/python3.9/contextlib.py", line 126, in __exit__
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     next(self.gen)
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 559, in wait_for_instance_event
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     self._wait_for_instance_events(
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 471, in _wait_for_instance_events
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     actual_event = event.wait()
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 436, in wait
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     instance_event = self.event.wait()
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/eventlet/event.py", line 125, in wait
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     result = hub.switch()
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/eventlet/hubs/hub.py", line 313, in switch
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     return self.greenlet.switch()
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] eventlet.timeout.Timeout: 300 seconds
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] 
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] During handling of the above exception, another exception occurred:
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] 
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] Traceback (most recent call last):
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2864, in _build_resources
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     yield resources
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2611, in _build_and_run_instance
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     self.driver.spawn(context, instance, image_meta,
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 4411, in spawn
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     self._create_guest_with_network(
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7768, in _create_guest_with_network
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     raise exception.VirtualInterfaceCreateException()
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.963 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] 
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.972 2 INFO nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Terminating instance
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.974 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.977 2 DEBUG nova.virt.libvirt.driver [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] During wait destroy, instance disappeared. _wait_for_destroy /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1527
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.977 2 INFO nova.virt.libvirt.driver [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Instance destroyed successfully.
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.977 2 DEBUG nova.virt.libvirt.vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='',created_at=2025-10-11T09:00:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2012807637',display_name='tempest-ServerRescueTestJSON-server-2012807637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2012807637',id=87,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-es16o3e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-1667208638-project-member'},tags=TagList,task_state='deleting',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:02:45Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=15633aee-234a-4417-b5ea-f35f13820404,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.978 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converting VIF {"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.978 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.979 2 DEBUG os_vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.980 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap074db183-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.981 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:49 compute-0 nova_compute[260935]: 2025-10-11 09:07:49.986 2 INFO os_vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86')
Oct 11 09:07:50 compute-0 nova_compute[260935]: 2025-10-11 09:07:50.011 2 INFO nova.virt.libvirt.driver [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Deletion of /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404_del complete
Oct 11 09:07:50 compute-0 nova_compute[260935]: 2025-10-11 09:07:50.227 2 INFO nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Took 0.25 seconds to destroy the instance on the hypervisor.
Oct 11 09:07:50 compute-0 nova_compute[260935]: 2025-10-11 09:07:50.228 2 DEBUG nova.compute.claims [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Aborting claim: <nova.compute.claims.Claim object at 0x7f1e55ff1c10> abort /usr/lib/python3.9/site-packages/nova/compute/claims.py:85
Oct 11 09:07:50 compute-0 nova_compute[260935]: 2025-10-11 09:07:50.229 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:50 compute-0 nova_compute[260935]: 2025-10-11 09:07:50.229 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 658 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 27 KiB/s wr, 149 op/s
Oct 11 09:07:50 compute-0 nova_compute[260935]: 2025-10-11 09:07:50.531 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:07:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3089977427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.039 2 DEBUG oslo_concurrency.processutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.052 2 DEBUG nova.compute.provider_tree [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.115 2 DEBUG nova.scheduler.client.report [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.224 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Failed to allocate network(s): nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] Traceback (most recent call last):
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7750, in _create_guest_with_network
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     guest = self._create_guest(
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib64/python3.9/contextlib.py", line 126, in __exit__
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     next(self.gen)
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 559, in wait_for_instance_event
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     self._wait_for_instance_events(
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 471, in _wait_for_instance_events
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     actual_event = event.wait()
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 436, in wait
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     instance_event = self.event.wait()
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/eventlet/event.py", line 125, in wait
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     result = hub.switch()
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/eventlet/hubs/hub.py", line 313, in switch
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     return self.greenlet.switch()
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] eventlet.timeout.Timeout: 300 seconds
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] 
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] During handling of the above exception, another exception occurred:
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] 
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] Traceback (most recent call last):
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2611, in _build_and_run_instance
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     self.driver.spawn(context, instance, image_meta,
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 4411, in spawn
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     self._create_guest_with_network(
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 7768, in _create_guest_with_network
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404]     raise exception.VirtualInterfaceCreateException()
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.225 2 ERROR nova.compute.manager [instance: 15633aee-234a-4417-b5ea-f35f13820404] 
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.228 2 DEBUG nova.compute.utils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Virtual Interface creation failed notify_about_instance_usage /usr/lib/python3.9/site-packages/nova/compute/utils.py:430
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.229 2 ERROR nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Build of instance 15633aee-234a-4417-b5ea-f35f13820404 aborted: Failed to allocate the network(s), not rescheduling.: nova.exception.BuildAbortException: Build of instance 15633aee-234a-4417-b5ea-f35f13820404 aborted: Failed to allocate the network(s), not rescheduling.
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.230 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Unplugging VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:2976
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.231 2 DEBUG nova.virt.libvirt.vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='',created_at=2025-10-11T09:00:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2012807637',display_name='tempest-ServerRescueTestJSON-server-2012807637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host=None,hostname='tempest-serverrescuetestjson-server-2012807637',id=87,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node=None,numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-es16o3e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-1667208638-project-member'},tags=TagList,task_state='deleting',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:07:50Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=15633aee-234a-4417-b5ea-f35f13820404,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.231 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converting VIF {"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.233 2 DEBUG nova.network.os_vif_util [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.233 2 DEBUG os_vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap074db183-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.237 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.241 2 INFO os_vif [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86')
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.242 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Unplugged VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:3012
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.243 2 DEBUG nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:07:51 compute-0 nova_compute[260935]: 2025-10-11 09:07:51.243 2 DEBUG nova.network.neutron [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:07:51 compute-0 ovn_controller[152945]: 2025-10-11T09:07:51Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:5f:9b 10.100.0.12
Oct 11 09:07:51 compute-0 ovn_controller[152945]: 2025-10-11T09:07:51Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:5f:9b 10.100.0.12
Oct 11 09:07:51 compute-0 ceph-mon[74313]: pgmap v1955: 321 pgs: 321 active+clean; 658 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 27 KiB/s wr, 149 op/s
Oct 11 09:07:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3089977427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 618 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 40 KiB/s wr, 217 op/s
Oct 11 09:07:52 compute-0 nova_compute[260935]: 2025-10-11 09:07:52.576 2 DEBUG nova.network.neutron [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:07:52 compute-0 nova_compute[260935]: 2025-10-11 09:07:52.764 2 INFO nova.compute.manager [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Took 1.52 seconds to deallocate network for instance.
Oct 11 09:07:52 compute-0 podman[358731]: 2025-10-11 09:07:52.825537244 +0000 UTC m=+0.108253181 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, io.buildah.version=1.41.3)
Oct 11 09:07:52 compute-0 nova_compute[260935]: 2025-10-11 09:07:52.960 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:52 compute-0 nova_compute[260935]: 2025-10-11 09:07:52.962 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:52 compute-0 nova_compute[260935]: 2025-10-11 09:07:52.962 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:52 compute-0 nova_compute[260935]: 2025-10-11 09:07:52.963 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:52 compute-0 nova_compute[260935]: 2025-10-11 09:07:52.963 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:52 compute-0 nova_compute[260935]: 2025-10-11 09:07:52.964 2 INFO nova.compute.manager [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Terminating instance
Oct 11 09:07:52 compute-0 nova_compute[260935]: 2025-10-11 09:07:52.965 2 DEBUG nova.compute.manager [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:07:53 compute-0 kernel: tap34749eb0-e6 (unregistering): left promiscuous mode
Oct 11 09:07:53 compute-0 NetworkManager[44960]: <info>  [1760173673.0397] device (tap34749eb0-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:07:53 compute-0 ovn_controller[152945]: 2025-10-11T09:07:53Z|00898|binding|INFO|Releasing lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca from this chassis (sb_readonly=0)
Oct 11 09:07:53 compute-0 ovn_controller[152945]: 2025-10-11T09:07:53Z|00899|binding|INFO|Setting lport 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca down in Southbound
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:53 compute-0 ovn_controller[152945]: 2025-10-11T09:07:53Z|00900|binding|INFO|Removing iface tap34749eb0-e6 ovn-installed in OVS
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.068 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:5f:9b 10.100.0.12'], port_security=['fa:16:3e:88:5f:9b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f5dfbe0b-a1ff-4001-abe4-a4493c9124f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c221c92f100b42fbb2581f0c7035540b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '933852e8-c082-4590-9200-b967a1691dcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f1c5e2c-b27b-4d23-ad04-5134938386e4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:07:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.070 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 34749eb0-e6d2-4c4b-a2b8-9fc709c1caca in datapath 76432838-bd4d-4fb8-8e44-6e230b5868b8 unbound from our chassis
Oct 11 09:07:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.073 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76432838-bd4d-4fb8-8e44-6e230b5868b8
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.100 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a942457e-dede-4290-999f-71e6f11e004c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:53 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d00000061.scope: Deactivated successfully.
Oct 11 09:07:53 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d00000061.scope: Consumed 12.544s CPU time.
Oct 11 09:07:53 compute-0 systemd-machined[215705]: Machine qemu-114-instance-00000061 terminated.
Oct 11 09:07:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.153 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3e4a1048-8cbb-491f-997f-fbc889c5e1aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.156 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae6a108-1dc0-4933-be0b-a60e961e1d15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.207 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d353fc9e-27e9-4621-a15d-9ef6e07a9468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.214 2 INFO nova.virt.libvirt.driver [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Instance destroyed successfully.
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.215 2 DEBUG nova.objects.instance [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'resources' on Instance uuid f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.219 2 INFO nova.scheduler.client.report [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Deleted allocations for instance 15633aee-234a-4417-b5ea-f35f13820404
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.220 2 DEBUG oslo_concurrency.lockutils [None req-21f60ae5-4d4b-4bf1-a957-74c05ca1d226 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 422.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.221 2 DEBUG oslo_concurrency.lockutils [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 307.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.221 2 DEBUG oslo_concurrency.lockutils [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Acquiring lock "15633aee-234a-4417-b5ea-f35f13820404-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.222 2 DEBUG oslo_concurrency.lockutils [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.222 2 DEBUG oslo_concurrency.lockutils [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.223 2 DEBUG nova.compute.manager [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Events pending at deletion: network-vif-plugged-074db183-8679-40f2-b39d-06759a8dfceb _delete_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3237
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.224 2 INFO nova.compute.manager [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Terminating instance
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.226 2 DEBUG nova.compute.manager [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.230 2 DEBUG nova.virt.libvirt.driver [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] During wait destroy, instance disappeared. _wait_for_destroy /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1527
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.230 2 INFO nova.virt.libvirt.driver [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Instance destroyed successfully.
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.231 2 DEBUG nova.objects.instance [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lazy-loading 'resources' on Instance uuid 15633aee-234a-4417-b5ea-f35f13820404 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.237 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4965d8f3-419d-4cfb-a789-8c77a18c0bbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76432838-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:2f:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547691, 'reachable_time': 43408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358773, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.255 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd84883-e90c-4206-bffa-db8da0daec8a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap76432838-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547707, 'tstamp': 547707}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358775, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap76432838-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547711, 'tstamp': 547711}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358775, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.256 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76432838-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.263 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76432838-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.263 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.264 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76432838-b0, col_values=(('external_ids', {'iface-id': 'b6ca7d68-6b07-4260-8964-929cc77a92b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:53.264 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.265 2 DEBUG nova.virt.libvirt.vif [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1694931715',display_name='tempest-ServerRescueNegativeTestJSON-server-1694931715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1694931715',id=97,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:07:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c221c92f100b42fbb2581f0c7035540b',ramdisk_id='',reservation_id='r-w0ylr4kw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1667632067',owner_user_name='tempest-ServerRescueNegativeTestJSON-1667632067-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:07:40Z,user_data=None,user_id='f52fcc072b5843a197064a063d4a5d30',uuid=f5dfbe0b-a1ff-4001-abe4-a4493c9124f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.265 2 DEBUG nova.network.os_vif_util [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converting VIF {"id": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "address": "fa:16:3e:88:5f:9b", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34749eb0-e6", "ovs_interfaceid": "34749eb0-e6d2-4c4b-a2b8-9fc709c1caca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.266 2 DEBUG nova.network.os_vif_util [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:5f:9b,bridge_name='br-int',has_traffic_filtering=True,id=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34749eb0-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.266 2 DEBUG os_vif [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:5f:9b,bridge_name='br-int',has_traffic_filtering=True,id=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34749eb0-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34749eb0-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.273 2 INFO os_vif [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:5f:9b,bridge_name='br-int',has_traffic_filtering=True,id=34749eb0-e6d2-4c4b-a2b8-9fc709c1caca,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34749eb0-e6')
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.289 2 DEBUG nova.virt.libvirt.vif [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:00:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-2012807637',display_name='tempest-ServerRescueTestJSON-server-2012807637',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-2012807637',id=87,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=0,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-es16o3e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-1667208638-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=
<?>,updated_at=2025-10-11T09:00:52Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=15633aee-234a-4417-b5ea-f35f13820404,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.290 2 DEBUG nova.network.os_vif_util [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converting VIF {"id": "074db183-8679-40f2-b39d-06759a8dfceb", "address": "fa:16:3e:33:2b:94", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074db183-86", "ovs_interfaceid": "074db183-8679-40f2-b39d-06759a8dfceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.291 2 DEBUG nova.network.os_vif_util [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.291 2 DEBUG os_vif [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.293 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap074db183-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.293 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.296 2 INFO os_vif [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=074db183-8679-40f2-b39d-06759a8dfceb,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074db183-86')
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.329 2 INFO nova.virt.libvirt.driver [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Deletion of /var/lib/nova/instances/15633aee-234a-4417-b5ea-f35f13820404_del complete
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.477 2 INFO nova.compute.manager [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Took 0.25 seconds to destroy the instance on the hypervisor.
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.478 2 DEBUG oslo.service.loopingcall [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.478 2 DEBUG nova.compute.manager [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.479 2 DEBUG nova.network.neutron [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.492 2 DEBUG nova.compute.manager [req-fde38772-a79f-4ffc-bd49-37d94ffc0c79 req-855ee445-19fe-42d7-b132-7300e4e7f87d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-unplugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.492 2 DEBUG oslo_concurrency.lockutils [req-fde38772-a79f-4ffc-bd49-37d94ffc0c79 req-855ee445-19fe-42d7-b132-7300e4e7f87d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.493 2 DEBUG oslo_concurrency.lockutils [req-fde38772-a79f-4ffc-bd49-37d94ffc0c79 req-855ee445-19fe-42d7-b132-7300e4e7f87d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.494 2 DEBUG oslo_concurrency.lockutils [req-fde38772-a79f-4ffc-bd49-37d94ffc0c79 req-855ee445-19fe-42d7-b132-7300e4e7f87d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.494 2 DEBUG nova.compute.manager [req-fde38772-a79f-4ffc-bd49-37d94ffc0c79 req-855ee445-19fe-42d7-b132-7300e4e7f87d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] No waiting events found dispatching network-vif-unplugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.495 2 DEBUG nova.compute.manager [req-fde38772-a79f-4ffc-bd49-37d94ffc0c79 req-855ee445-19fe-42d7-b132-7300e4e7f87d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-unplugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.742 2 DEBUG nova.network.neutron [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:07:53 compute-0 ceph-mon[74313]: pgmap v1956: 321 pgs: 321 active+clean; 618 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 40 KiB/s wr, 217 op/s
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.774 2 DEBUG nova.network.neutron [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:07:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:07:53 compute-0 nova_compute[260935]: 2025-10-11 09:07:53.842 2 INFO nova.compute.manager [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Took 0.36 seconds to deallocate network for instance.
Oct 11 09:07:54 compute-0 nova_compute[260935]: 2025-10-11 09:07:54.131 2 INFO nova.virt.libvirt.driver [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Deleting instance files /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_del
Oct 11 09:07:54 compute-0 nova_compute[260935]: 2025-10-11 09:07:54.133 2 INFO nova.virt.libvirt.driver [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Deletion of /var/lib/nova/instances/f5dfbe0b-a1ff-4001-abe4-a4493c9124f2_del complete
Oct 11 09:07:54 compute-0 nova_compute[260935]: 2025-10-11 09:07:54.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:54 compute-0 nova_compute[260935]: 2025-10-11 09:07:54.246 2 DEBUG oslo_concurrency.lockutils [None req-a9b118e5-7aae-466a-a2e8-73259e1e4185 df5a3c3a5d68473aa2e2950de45ebce1 11b44ad9193e4e43838d52056ccf413e - - default default] Lock "15633aee-234a-4417-b5ea-f35f13820404" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:54 compute-0 nova_compute[260935]: 2025-10-11 09:07:54.249 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "15633aee-234a-4417-b5ea-f35f13820404" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 182.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:54 compute-0 nova_compute[260935]: 2025-10-11 09:07:54.249 2 INFO nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] During sync_power_state the instance has a pending task (deleting). Skip.
Oct 11 09:07:54 compute-0 nova_compute[260935]: 2025-10-11 09:07:54.250 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "15633aee-234a-4417-b5ea-f35f13820404" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:54 compute-0 nova_compute[260935]: 2025-10-11 09:07:54.268 2 INFO nova.compute.manager [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Took 1.30 seconds to destroy the instance on the hypervisor.
Oct 11 09:07:54 compute-0 nova_compute[260935]: 2025-10-11 09:07:54.269 2 DEBUG oslo.service.loopingcall [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:07:54 compute-0 nova_compute[260935]: 2025-10-11 09:07:54.270 2 DEBUG nova.compute.manager [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:07:54 compute-0 nova_compute[260935]: 2025-10-11 09:07:54.270 2 DEBUG nova.network.neutron [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:07:54 compute-0 nova_compute[260935]: 2025-10-11 09:07:54.313 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173659.3121285, 02489367-ca19-4428-9727-6d8ab66e1b46 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:07:54 compute-0 nova_compute[260935]: 2025-10-11 09:07:54.314 2 INFO nova.compute.manager [-] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] VM Stopped (Lifecycle Event)
Oct 11 09:07:54 compute-0 nova_compute[260935]: 2025-10-11 09:07:54.356 2 DEBUG nova.compute.manager [None req-359a0feb-6a4f-4893-a2bb-a010a3acb2a8 - - - - - -] [instance: 02489367-ca19-4428-9727-6d8ab66e1b46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:07:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 612 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 15 KiB/s wr, 121 op/s
Oct 11 09:07:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:07:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:07:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:07:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:07:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:07:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:07:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:07:54
Oct 11 09:07:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:07:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:07:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'images', 'vms', 'backups']
Oct 11 09:07:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:07:55 compute-0 nova_compute[260935]: 2025-10-11 09:07:55.163 2 DEBUG nova.network.neutron [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:07:55 compute-0 nova_compute[260935]: 2025-10-11 09:07:55.216 2 INFO nova.compute.manager [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Took 0.95 seconds to deallocate network for instance.
Oct 11 09:07:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:07:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:07:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:07:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:07:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:07:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:07:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:07:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:07:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:07:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:07:55 compute-0 nova_compute[260935]: 2025-10-11 09:07:55.304 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:55 compute-0 nova_compute[260935]: 2025-10-11 09:07:55.305 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:55 compute-0 nova_compute[260935]: 2025-10-11 09:07:55.518 2 DEBUG oslo_concurrency.processutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:07:55 compute-0 nova_compute[260935]: 2025-10-11 09:07:55.600 2 DEBUG nova.compute.manager [req-70e8d8b0-3bc9-4306-9e1b-2026275c81c5 req-5fc69e3b-6124-4b93-b264-218ba03d1f17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:55 compute-0 nova_compute[260935]: 2025-10-11 09:07:55.601 2 DEBUG oslo_concurrency.lockutils [req-70e8d8b0-3bc9-4306-9e1b-2026275c81c5 req-5fc69e3b-6124-4b93-b264-218ba03d1f17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:55 compute-0 nova_compute[260935]: 2025-10-11 09:07:55.601 2 DEBUG oslo_concurrency.lockutils [req-70e8d8b0-3bc9-4306-9e1b-2026275c81c5 req-5fc69e3b-6124-4b93-b264-218ba03d1f17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:55 compute-0 nova_compute[260935]: 2025-10-11 09:07:55.602 2 DEBUG oslo_concurrency.lockutils [req-70e8d8b0-3bc9-4306-9e1b-2026275c81c5 req-5fc69e3b-6124-4b93-b264-218ba03d1f17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:55 compute-0 nova_compute[260935]: 2025-10-11 09:07:55.602 2 DEBUG nova.compute.manager [req-70e8d8b0-3bc9-4306-9e1b-2026275c81c5 req-5fc69e3b-6124-4b93-b264-218ba03d1f17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] No waiting events found dispatching network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:07:55 compute-0 nova_compute[260935]: 2025-10-11 09:07:55.603 2 WARNING nova.compute.manager [req-70e8d8b0-3bc9-4306-9e1b-2026275c81c5 req-5fc69e3b-6124-4b93-b264-218ba03d1f17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received unexpected event network-vif-plugged-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca for instance with vm_state deleted and task_state None.
Oct 11 09:07:55 compute-0 ceph-mon[74313]: pgmap v1957: 321 pgs: 321 active+clean; 612 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 15 KiB/s wr, 121 op/s
Oct 11 09:07:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:07:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1005458417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:55 compute-0 nova_compute[260935]: 2025-10-11 09:07:55.998 2 DEBUG oslo_concurrency.processutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:07:56 compute-0 nova_compute[260935]: 2025-10-11 09:07:56.006 2 DEBUG nova.compute.provider_tree [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:07:56 compute-0 nova_compute[260935]: 2025-10-11 09:07:56.036 2 DEBUG nova.scheduler.client.report [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:07:56 compute-0 nova_compute[260935]: 2025-10-11 09:07:56.082 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:56 compute-0 nova_compute[260935]: 2025-10-11 09:07:56.143 2 INFO nova.scheduler.client.report [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Deleted allocations for instance f5dfbe0b-a1ff-4001-abe4-a4493c9124f2
Oct 11 09:07:56 compute-0 nova_compute[260935]: 2025-10-11 09:07:56.317 2 DEBUG oslo_concurrency.lockutils [None req-016b3afb-e77e-442b-b195-3639acdd94fb f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "f5dfbe0b-a1ff-4001-abe4-a4493c9124f2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 612 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 630 KiB/s rd, 14 KiB/s wr, 87 op/s
Oct 11 09:07:56 compute-0 nova_compute[260935]: 2025-10-11 09:07:56.577 2 DEBUG nova.compute.manager [req-ead7e16f-f1a0-4a4f-a202-93d06c23d717 req-f1d0bba6-1d4e-4e60-93f6-773861b1b2ce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Received event network-vif-deleted-34749eb0-e6d2-4c4b-a2b8-9fc709c1caca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:56 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1005458417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:07:56 compute-0 podman[358833]: 2025-10-11 09:07:56.804743798 +0000 UTC m=+0.092188743 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 09:07:56 compute-0 podman[358834]: 2025-10-11 09:07:56.835179757 +0000 UTC m=+0.121024476 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 09:07:57 compute-0 nova_compute[260935]: 2025-10-11 09:07:57.764 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "6520fc43-79ed-4060-85bb-dcdff5f5c101" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:57 compute-0 nova_compute[260935]: 2025-10-11 09:07:57.766 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:57 compute-0 nova_compute[260935]: 2025-10-11 09:07:57.767 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:57 compute-0 nova_compute[260935]: 2025-10-11 09:07:57.767 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:57 compute-0 nova_compute[260935]: 2025-10-11 09:07:57.768 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:57 compute-0 nova_compute[260935]: 2025-10-11 09:07:57.770 2 INFO nova.compute.manager [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Terminating instance
Oct 11 09:07:57 compute-0 nova_compute[260935]: 2025-10-11 09:07:57.772 2 DEBUG nova.compute.manager [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:07:57 compute-0 ceph-mon[74313]: pgmap v1958: 321 pgs: 321 active+clean; 612 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 630 KiB/s rd, 14 KiB/s wr, 87 op/s
Oct 11 09:07:57 compute-0 kernel: tap7f81893d-38 (unregistering): left promiscuous mode
Oct 11 09:07:57 compute-0 NetworkManager[44960]: <info>  [1760173677.8320] device (tap7f81893d-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:07:57 compute-0 ovn_controller[152945]: 2025-10-11T09:07:57Z|00901|binding|INFO|Releasing lport 7f81893d-380a-42b4-88a7-76a98c30a38d from this chassis (sb_readonly=0)
Oct 11 09:07:57 compute-0 nova_compute[260935]: 2025-10-11 09:07:57.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:57 compute-0 ovn_controller[152945]: 2025-10-11T09:07:57Z|00902|binding|INFO|Setting lport 7f81893d-380a-42b4-88a7-76a98c30a38d down in Southbound
Oct 11 09:07:57 compute-0 ovn_controller[152945]: 2025-10-11T09:07:57Z|00903|binding|INFO|Removing iface tap7f81893d-38 ovn-installed in OVS
Oct 11 09:07:57 compute-0 nova_compute[260935]: 2025-10-11 09:07:57.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:57.891 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:f1:b7 10.100.0.6'], port_security=['fa:16:3e:61:f1:b7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6520fc43-79ed-4060-85bb-dcdff5f5c101', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c221c92f100b42fbb2581f0c7035540b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '933852e8-c082-4590-9200-b967a1691dcb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f1c5e2c-b27b-4d23-ad04-5134938386e4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=7f81893d-380a-42b4-88a7-76a98c30a38d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:07:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:57.893 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 7f81893d-380a-42b4-88a7-76a98c30a38d in datapath 76432838-bd4d-4fb8-8e44-6e230b5868b8 unbound from our chassis
Oct 11 09:07:57 compute-0 nova_compute[260935]: 2025-10-11 09:07:57.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:57.897 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76432838-bd4d-4fb8-8e44-6e230b5868b8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:07:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:57.899 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc64fc95-63bd-40d9-a3ae-3a622c373675]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:57.900 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8 namespace which is not needed anymore
Oct 11 09:07:57 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000060.scope: Deactivated successfully.
Oct 11 09:07:57 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000060.scope: Consumed 13.615s CPU time.
Oct 11 09:07:57 compute-0 systemd-machined[215705]: Machine qemu-110-instance-00000060 terminated.
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.019 2 INFO nova.virt.libvirt.driver [-] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Instance destroyed successfully.
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.019 2 DEBUG nova.objects.instance [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lazy-loading 'resources' on Instance uuid 6520fc43-79ed-4060-85bb-dcdff5f5c101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.055 2 DEBUG nova.virt.libvirt.vif [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:07:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-653793177',display_name='tempest-ServerRescueNegativeTestJSON-server-653793177',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-653793177',id=96,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:07:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c221c92f100b42fbb2581f0c7035540b',ramdisk_id='',reservation_id='r-id4jerug',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1667632067',owner_user_name='tempest-ServerRescueNegativeTestJSON-1667632067-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:07:49Z,user_data=None,user_id='f52fcc072b5843a197064a063d4a5d30',uuid=6520fc43-79ed-4060-85bb-dcdff5f5c101,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.055 2 DEBUG nova.network.os_vif_util [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converting VIF {"id": "7f81893d-380a-42b4-88a7-76a98c30a38d", "address": "fa:16:3e:61:f1:b7", "network": {"id": "76432838-bd4d-4fb8-8e44-6e230b5868b8", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-372698809-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c221c92f100b42fbb2581f0c7035540b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f81893d-38", "ovs_interfaceid": "7f81893d-380a-42b4-88a7-76a98c30a38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.056 2 DEBUG nova.network.os_vif_util [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:f1:b7,bridge_name='br-int',has_traffic_filtering=True,id=7f81893d-380a-42b4-88a7-76a98c30a38d,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f81893d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.056 2 DEBUG os_vif [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:f1:b7,bridge_name='br-int',has_traffic_filtering=True,id=7f81893d-380a-42b4-88a7-76a98c30a38d,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f81893d-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.059 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f81893d-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.064 2 INFO os_vif [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:f1:b7,bridge_name='br-int',has_traffic_filtering=True,id=7f81893d-380a-42b4-88a7-76a98c30a38d,network=Network(76432838-bd4d-4fb8-8e44-6e230b5868b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f81893d-38')
Oct 11 09:07:58 compute-0 neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8[357243]: [NOTICE]   (357247) : haproxy version is 2.8.14-c23fe91
Oct 11 09:07:58 compute-0 neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8[357243]: [NOTICE]   (357247) : path to executable is /usr/sbin/haproxy
Oct 11 09:07:58 compute-0 neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8[357243]: [WARNING]  (357247) : Exiting Master process...
Oct 11 09:07:58 compute-0 neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8[357243]: [WARNING]  (357247) : Exiting Master process...
Oct 11 09:07:58 compute-0 neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8[357243]: [ALERT]    (357247) : Current worker (357249) exited with code 143 (Terminated)
Oct 11 09:07:58 compute-0 neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8[357243]: [WARNING]  (357247) : All workers exited. Exiting... (0)
Oct 11 09:07:58 compute-0 systemd[1]: libpod-e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819.scope: Deactivated successfully.
Oct 11 09:07:58 compute-0 podman[358911]: 2025-10-11 09:07:58.092692135 +0000 UTC m=+0.046413456 container died e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 11 09:07:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819-userdata-shm.mount: Deactivated successfully.
Oct 11 09:07:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f42249320cbd9f6be1af07a22f94351bbe874386f708642b1d430373ffaae9fb-merged.mount: Deactivated successfully.
Oct 11 09:07:58 compute-0 podman[358911]: 2025-10-11 09:07:58.132549673 +0000 UTC m=+0.086270994 container cleanup e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 09:07:58 compute-0 systemd[1]: libpod-conmon-e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819.scope: Deactivated successfully.
Oct 11 09:07:58 compute-0 podman[358960]: 2025-10-11 09:07:58.200991115 +0000 UTC m=+0.044824659 container remove e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 09:07:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.212 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc4fb653-f9fb-4963-bc9e-4240a0cc8ba2]: (4, ('Sat Oct 11 09:07:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8 (e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819)\ne40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819\nSat Oct 11 09:07:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8 (e40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819)\ne40b14f688a76c580a6de35085da7c0a9566389ef2c8a28953b020fd0054b819\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.214 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec5641b-77ca-483c-80fb-32e0cffea2fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.215 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76432838-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:07:58 compute-0 kernel: tap76432838-b0: left promiscuous mode
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.249 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[821b7a3e-5e9f-4e59-884c-71b4899d7945]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.281 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[65f3bae7-365e-48a1-9f4a-4c157e22229f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.282 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b7ab67-c463-4de5-8d17-00c6df932902]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.302 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b17c1b9e-aa60-434a-865a-06e8b2945813]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547681, 'reachable_time': 32528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358976, 'error': None, 'target': 'ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d76432838\x2dbd4d\x2d4fb8\x2d8e44\x2d6e230b5868b8.mount: Deactivated successfully.
Oct 11 09:07:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.309 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-76432838-bd4d-4fb8-8e44-6e230b5868b8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:07:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:07:58.309 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[49d6a1b3-58e6-409b-a015-1ece1f8b258e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:07:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 486 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 16 KiB/s wr, 141 op/s
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.441 2 INFO nova.virt.libvirt.driver [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Deleting instance files /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101_del
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.441 2 INFO nova.virt.libvirt.driver [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Deletion of /var/lib/nova/instances/6520fc43-79ed-4060-85bb-dcdff5f5c101_del complete
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.579 2 INFO nova.compute.manager [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.580 2 DEBUG oslo.service.loopingcall [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.580 2 DEBUG nova.compute.manager [-] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:07:58 compute-0 nova_compute[260935]: 2025-10-11 09:07:58.580 2 DEBUG nova.network.neutron [-] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:07:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:07:58 compute-0 ceph-mon[74313]: pgmap v1959: 321 pgs: 321 active+clean; 486 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 16 KiB/s wr, 141 op/s
Oct 11 09:07:59 compute-0 nova_compute[260935]: 2025-10-11 09:07:59.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:07:59 compute-0 nova_compute[260935]: 2025-10-11 09:07:59.293 2 DEBUG nova.compute.manager [req-41e78be9-1f72-47c9-aa63-71d843742f54 req-1495bcf5-27dd-430e-9daf-5438d1e8c385 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received event network-vif-unplugged-7f81893d-380a-42b4-88a7-76a98c30a38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:07:59 compute-0 nova_compute[260935]: 2025-10-11 09:07:59.293 2 DEBUG oslo_concurrency.lockutils [req-41e78be9-1f72-47c9-aa63-71d843742f54 req-1495bcf5-27dd-430e-9daf-5438d1e8c385 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:59 compute-0 nova_compute[260935]: 2025-10-11 09:07:59.294 2 DEBUG oslo_concurrency.lockutils [req-41e78be9-1f72-47c9-aa63-71d843742f54 req-1495bcf5-27dd-430e-9daf-5438d1e8c385 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:07:59 compute-0 nova_compute[260935]: 2025-10-11 09:07:59.294 2 DEBUG oslo_concurrency.lockutils [req-41e78be9-1f72-47c9-aa63-71d843742f54 req-1495bcf5-27dd-430e-9daf-5438d1e8c385 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:07:59 compute-0 nova_compute[260935]: 2025-10-11 09:07:59.294 2 DEBUG nova.compute.manager [req-41e78be9-1f72-47c9-aa63-71d843742f54 req-1495bcf5-27dd-430e-9daf-5438d1e8c385 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] No waiting events found dispatching network-vif-unplugged-7f81893d-380a-42b4-88a7-76a98c30a38d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:07:59 compute-0 nova_compute[260935]: 2025-10-11 09:07:59.295 2 DEBUG nova.compute.manager [req-41e78be9-1f72-47c9-aa63-71d843742f54 req-1495bcf5-27dd-430e-9daf-5438d1e8c385 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received event network-vif-unplugged-7f81893d-380a-42b4-88a7-76a98c30a38d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:07:59 compute-0 sudo[358977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:07:59 compute-0 sudo[358977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:07:59 compute-0 sudo[358977]: pam_unix(sudo:session): session closed for user root
Oct 11 09:07:59 compute-0 sudo[359002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:07:59 compute-0 sudo[359002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:07:59 compute-0 sudo[359002]: pam_unix(sudo:session): session closed for user root
Oct 11 09:07:59 compute-0 sudo[359027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:07:59 compute-0 sudo[359027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:07:59 compute-0 sudo[359027]: pam_unix(sudo:session): session closed for user root
Oct 11 09:07:59 compute-0 sudo[359052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:07:59 compute-0 sudo[359052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:07:59 compute-0 nova_compute[260935]: 2025-10-11 09:07:59.714 2 DEBUG nova.network.neutron [-] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:07:59 compute-0 nova_compute[260935]: 2025-10-11 09:07:59.796 2 INFO nova.compute.manager [-] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Took 1.22 seconds to deallocate network for instance.
Oct 11 09:07:59 compute-0 nova_compute[260935]: 2025-10-11 09:07:59.874 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:07:59 compute-0 nova_compute[260935]: 2025-10-11 09:07:59.876 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:00 compute-0 nova_compute[260935]: 2025-10-11 09:08:00.019 2 DEBUG oslo_concurrency.processutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:00 compute-0 sudo[359052]: pam_unix(sudo:session): session closed for user root
Oct 11 09:08:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 486 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 16 KiB/s wr, 141 op/s
Oct 11 09:08:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:08:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:08:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:08:00 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:08:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:08:00 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:08:00 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 128e326b-94cc-4cd4-8ebe-16dddb985bcc does not exist
Oct 11 09:08:00 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 1d3b29ef-35f7-42cf-98f2-b78201c0966f does not exist
Oct 11 09:08:00 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev e2b30296-bbde-4a49-a28e-819efc420246 does not exist
Oct 11 09:08:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:08:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:08:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:08:00 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:08:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:08:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:08:00 compute-0 sudo[359128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:08:00 compute-0 sudo[359128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:08:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:08:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/42263550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:08:00 compute-0 sudo[359128]: pam_unix(sudo:session): session closed for user root
Oct 11 09:08:00 compute-0 nova_compute[260935]: 2025-10-11 09:08:00.571 2 DEBUG oslo_concurrency.processutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:00 compute-0 nova_compute[260935]: 2025-10-11 09:08:00.583 2 DEBUG nova.compute.provider_tree [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:08:00 compute-0 nova_compute[260935]: 2025-10-11 09:08:00.608 2 DEBUG nova.scheduler.client.report [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:08:00 compute-0 sudo[359155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:08:00 compute-0 sudo[359155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:08:00 compute-0 sudo[359155]: pam_unix(sudo:session): session closed for user root
Oct 11 09:08:00 compute-0 nova_compute[260935]: 2025-10-11 09:08:00.656 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:00 compute-0 nova_compute[260935]: 2025-10-11 09:08:00.694 2 INFO nova.scheduler.client.report [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Deleted allocations for instance 6520fc43-79ed-4060-85bb-dcdff5f5c101
Oct 11 09:08:00 compute-0 sudo[359180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:08:00 compute-0 sudo[359180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:08:00 compute-0 sudo[359180]: pam_unix(sudo:session): session closed for user root
Oct 11 09:08:00 compute-0 nova_compute[260935]: 2025-10-11 09:08:00.798 2 DEBUG oslo_concurrency.lockutils [None req-b119b161-266e-4039-a587-323785b07d0d f52fcc072b5843a197064a063d4a5d30 c221c92f100b42fbb2581f0c7035540b - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:00 compute-0 sudo[359205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:08:00 compute-0 sudo[359205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:08:01 compute-0 podman[359271]: 2025-10-11 09:08:01.307019424 +0000 UTC m=+0.059102528 container create 273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:08:01 compute-0 systemd[1]: Started libpod-conmon-273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5.scope.
Oct 11 09:08:01 compute-0 podman[359271]: 2025-10-11 09:08:01.277270554 +0000 UTC m=+0.029353698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:08:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:08:01 compute-0 nova_compute[260935]: 2025-10-11 09:08:01.401 2 DEBUG nova.compute.manager [req-3b9fb120-85cc-4787-ba36-d6272d0b6300 req-a293f33a-6e88-441f-bbb0-df4b30deb537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received event network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:08:01 compute-0 nova_compute[260935]: 2025-10-11 09:08:01.401 2 DEBUG oslo_concurrency.lockutils [req-3b9fb120-85cc-4787-ba36-d6272d0b6300 req-a293f33a-6e88-441f-bbb0-df4b30deb537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:01 compute-0 nova_compute[260935]: 2025-10-11 09:08:01.402 2 DEBUG oslo_concurrency.lockutils [req-3b9fb120-85cc-4787-ba36-d6272d0b6300 req-a293f33a-6e88-441f-bbb0-df4b30deb537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:01 compute-0 nova_compute[260935]: 2025-10-11 09:08:01.402 2 DEBUG oslo_concurrency.lockutils [req-3b9fb120-85cc-4787-ba36-d6272d0b6300 req-a293f33a-6e88-441f-bbb0-df4b30deb537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6520fc43-79ed-4060-85bb-dcdff5f5c101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:01 compute-0 nova_compute[260935]: 2025-10-11 09:08:01.403 2 DEBUG nova.compute.manager [req-3b9fb120-85cc-4787-ba36-d6272d0b6300 req-a293f33a-6e88-441f-bbb0-df4b30deb537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] No waiting events found dispatching network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:08:01 compute-0 nova_compute[260935]: 2025-10-11 09:08:01.403 2 WARNING nova.compute.manager [req-3b9fb120-85cc-4787-ba36-d6272d0b6300 req-a293f33a-6e88-441f-bbb0-df4b30deb537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received unexpected event network-vif-plugged-7f81893d-380a-42b4-88a7-76a98c30a38d for instance with vm_state deleted and task_state None.
Oct 11 09:08:01 compute-0 nova_compute[260935]: 2025-10-11 09:08:01.403 2 DEBUG nova.compute.manager [req-3b9fb120-85cc-4787-ba36-d6272d0b6300 req-a293f33a-6e88-441f-bbb0-df4b30deb537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Received event network-vif-deleted-7f81893d-380a-42b4-88a7-76a98c30a38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:08:01 compute-0 podman[359271]: 2025-10-11 09:08:01.419407882 +0000 UTC m=+0.171491026 container init 273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wu, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:08:01 compute-0 podman[359271]: 2025-10-11 09:08:01.433759862 +0000 UTC m=+0.185842966 container start 273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 09:08:01 compute-0 ceph-mon[74313]: pgmap v1960: 321 pgs: 321 active+clean; 486 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 16 KiB/s wr, 141 op/s
Oct 11 09:08:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:08:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:08:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:08:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:08:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:08:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:08:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/42263550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:08:01 compute-0 podman[359271]: 2025-10-11 09:08:01.440195786 +0000 UTC m=+0.192278950 container attach 273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wu, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:08:01 compute-0 distracted_wu[359288]: 167 167
Oct 11 09:08:01 compute-0 systemd[1]: libpod-273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5.scope: Deactivated successfully.
Oct 11 09:08:01 compute-0 podman[359271]: 2025-10-11 09:08:01.444048886 +0000 UTC m=+0.196132010 container died 273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 09:08:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2408be3c77edc026a0feae089920d458e96849660abb6d78ec91ad4f294f2bbd-merged.mount: Deactivated successfully.
Oct 11 09:08:01 compute-0 podman[359271]: 2025-10-11 09:08:01.492294313 +0000 UTC m=+0.244377407 container remove 273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:08:01 compute-0 systemd[1]: libpod-conmon-273bb35f3d5c6a5882d20befa86a3a4de5e0e9b5308011946efe649885662af5.scope: Deactivated successfully.
Oct 11 09:08:01 compute-0 podman[359311]: 2025-10-11 09:08:01.810110075 +0000 UTC m=+0.062408352 container create 4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:08:01 compute-0 systemd[1]: Started libpod-conmon-4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82.scope.
Oct 11 09:08:01 compute-0 podman[359311]: 2025-10-11 09:08:01.788959331 +0000 UTC m=+0.041257648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:08:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929dd5316c2eb0f30415f3e82041505b7f33de98927a734b19837d8c645e9cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929dd5316c2eb0f30415f3e82041505b7f33de98927a734b19837d8c645e9cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929dd5316c2eb0f30415f3e82041505b7f33de98927a734b19837d8c645e9cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929dd5316c2eb0f30415f3e82041505b7f33de98927a734b19837d8c645e9cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9929dd5316c2eb0f30415f3e82041505b7f33de98927a734b19837d8c645e9cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:08:01 compute-0 podman[359311]: 2025-10-11 09:08:01.914768832 +0000 UTC m=+0.167067189 container init 4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 09:08:01 compute-0 podman[359311]: 2025-10-11 09:08:01.926069185 +0000 UTC m=+0.178367482 container start 4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:08:01 compute-0 podman[359311]: 2025-10-11 09:08:01.931275283 +0000 UTC m=+0.183573640 container attach 4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:08:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 437 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 680 KiB/s rd, 18 KiB/s wr, 160 op/s
Oct 11 09:08:03 compute-0 silly_banzai[359328]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:08:03 compute-0 silly_banzai[359328]: --> relative data size: 1.0
Oct 11 09:08:03 compute-0 silly_banzai[359328]: --> All data devices are unavailable
Oct 11 09:08:03 compute-0 systemd[1]: libpod-4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82.scope: Deactivated successfully.
Oct 11 09:08:03 compute-0 systemd[1]: libpod-4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82.scope: Consumed 1.075s CPU time.
Oct 11 09:08:03 compute-0 podman[359311]: 2025-10-11 09:08:03.10253879 +0000 UTC m=+1.354837097 container died 4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:08:03 compute-0 nova_compute[260935]: 2025-10-11 09:08:03.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-9929dd5316c2eb0f30415f3e82041505b7f33de98927a734b19837d8c645e9cf-merged.mount: Deactivated successfully.
Oct 11 09:08:03 compute-0 podman[359311]: 2025-10-11 09:08:03.159973699 +0000 UTC m=+1.412271986 container remove 4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 09:08:03 compute-0 systemd[1]: libpod-conmon-4ea8433c87bb4efe4fad25a7d8744d47b45473d864298615590df4f586dcce82.scope: Deactivated successfully.
Oct 11 09:08:03 compute-0 sudo[359205]: pam_unix(sudo:session): session closed for user root
Oct 11 09:08:03 compute-0 sudo[359371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:08:03 compute-0 sudo[359371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:08:03 compute-0 sudo[359371]: pam_unix(sudo:session): session closed for user root
Oct 11 09:08:03 compute-0 sudo[359396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:08:03 compute-0 sudo[359396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:08:03 compute-0 sudo[359396]: pam_unix(sudo:session): session closed for user root
Oct 11 09:08:03 compute-0 ceph-mon[74313]: pgmap v1961: 321 pgs: 321 active+clean; 437 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 680 KiB/s rd, 18 KiB/s wr, 160 op/s
Oct 11 09:08:03 compute-0 sudo[359421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:08:03 compute-0 sudo[359421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:08:03 compute-0 sudo[359421]: pam_unix(sudo:session): session closed for user root
Oct 11 09:08:03 compute-0 sudo[359446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:08:03 compute-0 sudo[359446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:08:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:08:04 compute-0 podman[359511]: 2025-10-11 09:08:04.038320264 +0000 UTC m=+0.045074158 container create d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ramanujan, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:08:04 compute-0 systemd[1]: Started libpod-conmon-d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee.scope.
Oct 11 09:08:04 compute-0 podman[359511]: 2025-10-11 09:08:04.017658374 +0000 UTC m=+0.024412308 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:08:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:08:04 compute-0 nova_compute[260935]: 2025-10-11 09:08:04.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:04 compute-0 podman[359511]: 2025-10-11 09:08:04.179678788 +0000 UTC m=+0.186432752 container init d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:08:04 compute-0 podman[359511]: 2025-10-11 09:08:04.190924709 +0000 UTC m=+0.197678623 container start d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 09:08:04 compute-0 podman[359511]: 2025-10-11 09:08:04.195242203 +0000 UTC m=+0.201996127 container attach d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 09:08:04 compute-0 mystifying_ramanujan[359528]: 167 167
Oct 11 09:08:04 compute-0 systemd[1]: libpod-d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee.scope: Deactivated successfully.
Oct 11 09:08:04 compute-0 podman[359511]: 2025-10-11 09:08:04.198757263 +0000 UTC m=+0.205511187 container died d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ramanujan, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:08:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-32335e376981d49e5832637acef4164b6d6e09cfed266a93f404a01aed786282-merged.mount: Deactivated successfully.
Oct 11 09:08:04 compute-0 podman[359511]: 2025-10-11 09:08:04.250974224 +0000 UTC m=+0.257728148 container remove d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ramanujan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:08:04 compute-0 systemd[1]: libpod-conmon-d2b695695c4aeaa7e9e680128692651c2b881ae4e300fab203298c39703ee3ee.scope: Deactivated successfully.
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 407 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 239 KiB/s rd, 4.2 KiB/s wr, 101 op/s
Oct 11 09:08:04 compute-0 nova_compute[260935]: 2025-10-11 09:08:04.392 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173669.354265, 15633aee-234a-4417-b5ea-f35f13820404 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:08:04 compute-0 nova_compute[260935]: 2025-10-11 09:08:04.394 2 INFO nova.compute.manager [-] [instance: 15633aee-234a-4417-b5ea-f35f13820404] VM Stopped (Lifecycle Event)
Oct 11 09:08:04 compute-0 nova_compute[260935]: 2025-10-11 09:08:04.432 2 DEBUG nova.compute.manager [None req-a11257c6-af69-4aa0-894e-5e01768e7172 - - - - - -] [instance: 15633aee-234a-4417-b5ea-f35f13820404] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:04 compute-0 podman[359551]: 2025-10-11 09:08:04.552135211 +0000 UTC m=+0.076491174 container create 813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:08:04 compute-0 systemd[1]: Started libpod-conmon-813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944.scope.
Oct 11 09:08:04 compute-0 podman[359551]: 2025-10-11 09:08:04.522387822 +0000 UTC m=+0.046743835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:08:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c631370fd333bba102789d8496e36a7ab2dac6c18b1a9aed948d393c2e0fc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c631370fd333bba102789d8496e36a7ab2dac6c18b1a9aed948d393c2e0fc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c631370fd333bba102789d8496e36a7ab2dac6c18b1a9aed948d393c2e0fc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7c631370fd333bba102789d8496e36a7ab2dac6c18b1a9aed948d393c2e0fc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:08:04 compute-0 podman[359551]: 2025-10-11 09:08:04.666664981 +0000 UTC m=+0.191020954 container init 813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:08:04 compute-0 podman[359551]: 2025-10-11 09:08:04.67959824 +0000 UTC m=+0.203954213 container start 813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:08:04 compute-0 podman[359551]: 2025-10-11 09:08:04.684325175 +0000 UTC m=+0.208681148 container attach 813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:08:04 compute-0 ovn_controller[152945]: 2025-10-11T09:08:04Z|00904|binding|INFO|Releasing lport c44760fe-71c5-482d-aee7-a0b0513681b7 from this chassis (sb_readonly=0)
Oct 11 09:08:04 compute-0 ovn_controller[152945]: 2025-10-11T09:08:04Z|00905|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:08:04 compute-0 ovn_controller[152945]: 2025-10-11T09:08:04Z|00906|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:08:04 compute-0 nova_compute[260935]: 2025-10-11 09:08:04.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033857600564888976 of space, bias 1.0, pg target 1.0157280169466694 quantized to 32 (current 32)
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:08:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:08:05 compute-0 ceph-mon[74313]: pgmap v1962: 321 pgs: 321 active+clean; 407 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 239 KiB/s rd, 4.2 KiB/s wr, 101 op/s
Oct 11 09:08:05 compute-0 strange_pare[359568]: {
Oct 11 09:08:05 compute-0 strange_pare[359568]:     "0": [
Oct 11 09:08:05 compute-0 strange_pare[359568]:         {
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "devices": [
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "/dev/loop3"
Oct 11 09:08:05 compute-0 strange_pare[359568]:             ],
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "lv_name": "ceph_lv0",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "lv_size": "21470642176",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "name": "ceph_lv0",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "tags": {
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.cluster_name": "ceph",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.crush_device_class": "",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.encrypted": "0",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.osd_id": "0",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.type": "block",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.vdo": "0"
Oct 11 09:08:05 compute-0 strange_pare[359568]:             },
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "type": "block",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "vg_name": "ceph_vg0"
Oct 11 09:08:05 compute-0 strange_pare[359568]:         }
Oct 11 09:08:05 compute-0 strange_pare[359568]:     ],
Oct 11 09:08:05 compute-0 strange_pare[359568]:     "1": [
Oct 11 09:08:05 compute-0 strange_pare[359568]:         {
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "devices": [
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "/dev/loop4"
Oct 11 09:08:05 compute-0 strange_pare[359568]:             ],
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "lv_name": "ceph_lv1",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "lv_size": "21470642176",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "name": "ceph_lv1",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "tags": {
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.cluster_name": "ceph",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.crush_device_class": "",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.encrypted": "0",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.osd_id": "1",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.type": "block",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.vdo": "0"
Oct 11 09:08:05 compute-0 strange_pare[359568]:             },
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "type": "block",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "vg_name": "ceph_vg1"
Oct 11 09:08:05 compute-0 strange_pare[359568]:         }
Oct 11 09:08:05 compute-0 strange_pare[359568]:     ],
Oct 11 09:08:05 compute-0 strange_pare[359568]:     "2": [
Oct 11 09:08:05 compute-0 strange_pare[359568]:         {
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "devices": [
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "/dev/loop5"
Oct 11 09:08:05 compute-0 strange_pare[359568]:             ],
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "lv_name": "ceph_lv2",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "lv_size": "21470642176",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "name": "ceph_lv2",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "tags": {
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.cluster_name": "ceph",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.crush_device_class": "",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.encrypted": "0",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.osd_id": "2",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.type": "block",
Oct 11 09:08:05 compute-0 strange_pare[359568]:                 "ceph.vdo": "0"
Oct 11 09:08:05 compute-0 strange_pare[359568]:             },
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "type": "block",
Oct 11 09:08:05 compute-0 strange_pare[359568]:             "vg_name": "ceph_vg2"
Oct 11 09:08:05 compute-0 strange_pare[359568]:         }
Oct 11 09:08:05 compute-0 strange_pare[359568]:     ]
Oct 11 09:08:05 compute-0 strange_pare[359568]: }
Oct 11 09:08:05 compute-0 systemd[1]: libpod-813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944.scope: Deactivated successfully.
Oct 11 09:08:05 compute-0 podman[359551]: 2025-10-11 09:08:05.516197882 +0000 UTC m=+1.040553875 container died 813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 09:08:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7c631370fd333bba102789d8496e36a7ab2dac6c18b1a9aed948d393c2e0fc5-merged.mount: Deactivated successfully.
Oct 11 09:08:05 compute-0 podman[359551]: 2025-10-11 09:08:05.588744212 +0000 UTC m=+1.113100145 container remove 813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:08:05 compute-0 systemd[1]: libpod-conmon-813edca0b6fbe1d20dc77f6913c1f17d41e9599ab495fa88d820cacecbc91944.scope: Deactivated successfully.
Oct 11 09:08:05 compute-0 sudo[359446]: pam_unix(sudo:session): session closed for user root
Oct 11 09:08:05 compute-0 sudo[359591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:08:05 compute-0 sudo[359591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:08:05 compute-0 sudo[359591]: pam_unix(sudo:session): session closed for user root
Oct 11 09:08:05 compute-0 sudo[359616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:08:05 compute-0 sudo[359616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:08:05 compute-0 sudo[359616]: pam_unix(sudo:session): session closed for user root
Oct 11 09:08:05 compute-0 sudo[359641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:08:05 compute-0 sudo[359641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:08:05 compute-0 sudo[359641]: pam_unix(sudo:session): session closed for user root
Oct 11 09:08:05 compute-0 sudo[359666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:08:05 compute-0 sudo[359666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:08:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 407 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 3.5 KiB/s wr, 82 op/s
Oct 11 09:08:06 compute-0 podman[359731]: 2025-10-11 09:08:06.383016137 +0000 UTC m=+0.066701965 container create 089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:08:06 compute-0 systemd[1]: Started libpod-conmon-089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca.scope.
Oct 11 09:08:06 compute-0 podman[359731]: 2025-10-11 09:08:06.354565905 +0000 UTC m=+0.038251803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:08:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:08:06 compute-0 podman[359731]: 2025-10-11 09:08:06.49275383 +0000 UTC m=+0.176439738 container init 089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 09:08:06 compute-0 podman[359731]: 2025-10-11 09:08:06.507228443 +0000 UTC m=+0.190914301 container start 089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 09:08:06 compute-0 podman[359731]: 2025-10-11 09:08:06.511308609 +0000 UTC m=+0.194994527 container attach 089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct 11 09:08:06 compute-0 eager_bose[359747]: 167 167
Oct 11 09:08:06 compute-0 systemd[1]: libpod-089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca.scope: Deactivated successfully.
Oct 11 09:08:06 compute-0 podman[359731]: 2025-10-11 09:08:06.514304235 +0000 UTC m=+0.197990083 container died 089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:08:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-044540c52eb96aef9b479ee0ae5be58dcb5ff700d86c9a49c56d95b17c3d68e6-merged.mount: Deactivated successfully.
Oct 11 09:08:06 compute-0 podman[359731]: 2025-10-11 09:08:06.567533714 +0000 UTC m=+0.251219572 container remove 089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:08:06 compute-0 systemd[1]: libpod-conmon-089dd3eca2b81224119c2d379c703e1124955c08409b1c9803b5bcac7e6273ca.scope: Deactivated successfully.
Oct 11 09:08:06 compute-0 podman[359770]: 2025-10-11 09:08:06.823232444 +0000 UTC m=+0.044153022 container create 26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 09:08:06 compute-0 systemd[1]: Started libpod-conmon-26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744.scope.
Oct 11 09:08:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14eb0b8d8ea89158a01eccca5783607782f22945e082da0bb6645b426867c650/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:08:06 compute-0 podman[359770]: 2025-10-11 09:08:06.805085786 +0000 UTC m=+0.026006394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14eb0b8d8ea89158a01eccca5783607782f22945e082da0bb6645b426867c650/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14eb0b8d8ea89158a01eccca5783607782f22945e082da0bb6645b426867c650/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14eb0b8d8ea89158a01eccca5783607782f22945e082da0bb6645b426867c650/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:08:06 compute-0 podman[359770]: 2025-10-11 09:08:06.919733049 +0000 UTC m=+0.140653707 container init 26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 09:08:06 compute-0 podman[359770]: 2025-10-11 09:08:06.931866695 +0000 UTC m=+0.152787293 container start 26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:08:06 compute-0 podman[359770]: 2025-10-11 09:08:06.935802627 +0000 UTC m=+0.156723235 container attach 26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:08:07 compute-0 ceph-mon[74313]: pgmap v1963: 321 pgs: 321 active+clean; 407 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 3.5 KiB/s wr, 82 op/s
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]: {
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:         "osd_id": 2,
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:         "type": "bluestore"
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:     },
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:         "osd_id": 0,
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:         "type": "bluestore"
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:     },
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:         "osd_id": 1,
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:         "type": "bluestore"
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]:     }
Oct 11 09:08:07 compute-0 sleepy_roentgen[359787]: }
Oct 11 09:08:07 compute-0 systemd[1]: libpod-26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744.scope: Deactivated successfully.
Oct 11 09:08:07 compute-0 systemd[1]: libpod-26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744.scope: Consumed 1.053s CPU time.
Oct 11 09:08:08 compute-0 podman[359820]: 2025-10-11 09:08:08.025954748 +0000 UTC m=+0.030649706 container died 26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 09:08:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-14eb0b8d8ea89158a01eccca5783607782f22945e082da0bb6645b426867c650-merged.mount: Deactivated successfully.
Oct 11 09:08:08 compute-0 nova_compute[260935]: 2025-10-11 09:08:08.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:08 compute-0 podman[359820]: 2025-10-11 09:08:08.111594623 +0000 UTC m=+0.116289531 container remove 26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_roentgen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 09:08:08 compute-0 systemd[1]: libpod-conmon-26fe89bad18cf6bcd8dbb2be1ac123c656b76930a028c0cc8cbd2ecbf6caf744.scope: Deactivated successfully.
Oct 11 09:08:08 compute-0 sudo[359666]: pam_unix(sudo:session): session closed for user root
Oct 11 09:08:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:08:08 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:08:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:08:08 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:08:08 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 2760c6e6-a1f9-4a76-9fdd-3664e946f6a2 does not exist
Oct 11 09:08:08 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev d38f01bc-206b-43ce-a5d0-453e690a6534 does not exist
Oct 11 09:08:08 compute-0 nova_compute[260935]: 2025-10-11 09:08:08.210 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173673.2089772, f5dfbe0b-a1ff-4001-abe4-a4493c9124f2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:08:08 compute-0 nova_compute[260935]: 2025-10-11 09:08:08.211 2 INFO nova.compute.manager [-] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] VM Stopped (Lifecycle Event)
Oct 11 09:08:08 compute-0 nova_compute[260935]: 2025-10-11 09:08:08.233 2 DEBUG nova.compute.manager [None req-52d3d7a1-73d2-44ff-9891-f5779a784348 - - - - - -] [instance: f5dfbe0b-a1ff-4001-abe4-a4493c9124f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:08 compute-0 sudo[359836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:08:08 compute-0 sudo[359836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:08:08 compute-0 sudo[359836]: pam_unix(sudo:session): session closed for user root
Oct 11 09:08:08 compute-0 sudo[359861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:08:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 407 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 5.8 KiB/s wr, 83 op/s
Oct 11 09:08:08 compute-0 sudo[359861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:08:08 compute-0 sudo[359861]: pam_unix(sudo:session): session closed for user root
Oct 11 09:08:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:08:09 compute-0 nova_compute[260935]: 2025-10-11 09:08:09.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:08:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:08:09 compute-0 ceph-mon[74313]: pgmap v1964: 321 pgs: 321 active+clean; 407 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 5.8 KiB/s wr, 83 op/s
Oct 11 09:08:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 407 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Oct 11 09:08:11 compute-0 ceph-mon[74313]: pgmap v1965: 321 pgs: 321 active+clean; 407 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Oct 11 09:08:11 compute-0 nova_compute[260935]: 2025-10-11 09:08:11.855 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "1a90a348-da49-4ba3-8bae-5b426e9d7424" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:11 compute-0 nova_compute[260935]: 2025-10-11 09:08:11.855 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "1a90a348-da49-4ba3-8bae-5b426e9d7424" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:11 compute-0 nova_compute[260935]: 2025-10-11 09:08:11.875 2 DEBUG nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:08:11 compute-0 nova_compute[260935]: 2025-10-11 09:08:11.965 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:11 compute-0 nova_compute[260935]: 2025-10-11 09:08:11.965 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:11 compute-0 nova_compute[260935]: 2025-10-11 09:08:11.976 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:08:11 compute-0 nova_compute[260935]: 2025-10-11 09:08:11.976 2 INFO nova.compute.claims [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:08:12 compute-0 nova_compute[260935]: 2025-10-11 09:08:12.193 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 407 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Oct 11 09:08:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:08:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3739722464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:08:12 compute-0 nova_compute[260935]: 2025-10-11 09:08:12.732 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:12 compute-0 nova_compute[260935]: 2025-10-11 09:08:12.743 2 DEBUG nova.compute.provider_tree [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:08:12 compute-0 nova_compute[260935]: 2025-10-11 09:08:12.897 2 DEBUG nova.scheduler.client.report [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:08:12 compute-0 nova_compute[260935]: 2025-10-11 09:08:12.991 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:12 compute-0 nova_compute[260935]: 2025-10-11 09:08:12.993 2 DEBUG nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.018 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173678.017132, 6520fc43-79ed-4060-85bb-dcdff5f5c101 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.019 2 INFO nova.compute.manager [-] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] VM Stopped (Lifecycle Event)
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.152 2 DEBUG nova.compute.manager [None req-a0612205-2e22-43c0-a43e-a2cf44eb818d - - - - - -] [instance: 6520fc43-79ed-4060-85bb-dcdff5f5c101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.172 2 DEBUG nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.253 2 INFO nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.286 2 DEBUG nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.397 2 DEBUG nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.399 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.400 2 INFO nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Creating image(s)
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.434 2 DEBUG nova.storage.rbd_utils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:13 compute-0 ceph-mon[74313]: pgmap v1966: 321 pgs: 321 active+clean; 407 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Oct 11 09:08:13 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3739722464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.469 2 DEBUG nova.storage.rbd_utils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.503 2 DEBUG nova.storage.rbd_utils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.508 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.571 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "df634e60-d790-4a39-adcc-c6345a12e9df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.572 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "df634e60-d790-4a39-adcc-c6345a12e9df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.612 2 DEBUG nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.618 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.618 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.619 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.620 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.653 2 DEBUG nova.storage.rbd_utils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.657 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.753 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.754 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.763 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.764 2 INFO nova.compute.claims [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:08:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:08:13 compute-0 nova_compute[260935]: 2025-10-11 09:08:13.957 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.030 2 DEBUG nova.storage.rbd_utils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] resizing rbd image 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.068 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.159 2 DEBUG nova.objects.instance [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'migration_context' on Instance uuid 1a90a348-da49-4ba3-8bae-5b426e9d7424 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.178 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.179 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Ensure instance console log exists: /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.180 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.180 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.181 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.183 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.188 2 WARNING nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.193 2 DEBUG nova.virt.libvirt.host [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.194 2 DEBUG nova.virt.libvirt.host [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.198 2 DEBUG nova.virt.libvirt.host [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.199 2 DEBUG nova.virt.libvirt.host [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.199 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.200 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.200 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.201 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.201 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.202 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.203 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.204 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.205 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.205 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.205 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.206 2 DEBUG nova.virt.hardware [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.209 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 407 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 2.3 KiB/s wr, 10 op/s
Oct 11 09:08:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:08:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/709926393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.519 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.524 2 DEBUG nova.compute.provider_tree [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.545 2 DEBUG nova.scheduler.client.report [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.587 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.588 2 DEBUG nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.648 2 DEBUG nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.664 2 INFO nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:08:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:08:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/855587875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.680 2 DEBUG nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.687 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.723 2 DEBUG nova.storage.rbd_utils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.729 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.839 2 DEBUG nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.842 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.843 2 INFO nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Creating image(s)
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.880 2 DEBUG nova.storage.rbd_utils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.923 2 DEBUG nova.storage.rbd_utils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.957 2 DEBUG nova.storage.rbd_utils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:14 compute-0 nova_compute[260935]: 2025-10-11 09:08:14.962 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.056 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.058 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.059 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.059 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.091 2 DEBUG nova.storage.rbd_utils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.095 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 df634e60-d790-4a39-adcc-c6345a12e9df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:15.203 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:15.204 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:15.205 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:08:15 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487155864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.231 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.235 2 DEBUG nova.objects.instance [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1a90a348-da49-4ba3-8bae-5b426e9d7424 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.261 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:08:15 compute-0 nova_compute[260935]:   <uuid>1a90a348-da49-4ba3-8bae-5b426e9d7424</uuid>
Oct 11 09:08:15 compute-0 nova_compute[260935]:   <name>instance-00000064</name>
Oct 11 09:08:15 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:08:15 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:08:15 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerShowV247Test-server-1283643768</nova:name>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:08:14</nova:creationTime>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:08:15 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:08:15 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:08:15 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:08:15 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:08:15 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:08:15 compute-0 nova_compute[260935]:         <nova:user uuid="2273281c2fb74ffc95506b1aaa8874ed">tempest-ServerShowV247Test-462364684-project-member</nova:user>
Oct 11 09:08:15 compute-0 nova_compute[260935]:         <nova:project uuid="f4d141b93bd54c8f859a3ec1283a9a71">tempest-ServerShowV247Test-462364684</nova:project>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:08:15 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:08:15 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <system>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <entry name="serial">1a90a348-da49-4ba3-8bae-5b426e9d7424</entry>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <entry name="uuid">1a90a348-da49-4ba3-8bae-5b426e9d7424</entry>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     </system>
Oct 11 09:08:15 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:08:15 compute-0 nova_compute[260935]:   <os>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:   </os>
Oct 11 09:08:15 compute-0 nova_compute[260935]:   <features>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:   </features>
Oct 11 09:08:15 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:08:15 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:08:15 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/1a90a348-da49-4ba3-8bae-5b426e9d7424_disk">
Oct 11 09:08:15 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       </source>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:08:15 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/1a90a348-da49-4ba3-8bae-5b426e9d7424_disk.config">
Oct 11 09:08:15 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       </source>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:08:15 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424/console.log" append="off"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <video>
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     </video>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:08:15 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:08:15 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:08:15 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:08:15 compute-0 nova_compute[260935]: </domain>
Oct 11 09:08:15 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.364 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.364 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.365 2 INFO nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Using config drive
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.390 2 DEBUG nova.storage.rbd_utils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.397 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 df634e60-d790-4a39-adcc-c6345a12e9df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:15 compute-0 ceph-mon[74313]: pgmap v1967: 321 pgs: 321 active+clean; 407 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 2.3 KiB/s wr, 10 op/s
Oct 11 09:08:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/709926393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:08:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/855587875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:08:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1487155864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.479 2 DEBUG nova.storage.rbd_utils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] resizing rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.573 2 DEBUG nova.objects.instance [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'migration_context' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.596 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.596 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Ensure instance console log exists: /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.597 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.597 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.597 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.599 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.603 2 WARNING nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.609 2 DEBUG nova.virt.libvirt.host [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.609 2 DEBUG nova.virt.libvirt.host [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.614 2 DEBUG nova.virt.libvirt.host [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.614 2 DEBUG nova.virt.libvirt.host [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.614 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.615 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.615 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.615 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.616 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.616 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.616 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.616 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.616 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.617 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.617 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.617 2 DEBUG nova.virt.hardware [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.619 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.937 2 INFO nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Creating config drive at /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424/disk.config
Oct 11 09:08:15 compute-0 nova_compute[260935]: 2025-10-11 09:08:15.947 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb267h23a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:08:16 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/328235877' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:08:16 compute-0 nova_compute[260935]: 2025-10-11 09:08:16.107 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:16 compute-0 nova_compute[260935]: 2025-10-11 09:08:16.143 2 DEBUG nova.storage.rbd_utils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:16 compute-0 nova_compute[260935]: 2025-10-11 09:08:16.148 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:16 compute-0 nova_compute[260935]: 2025-10-11 09:08:16.187 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb267h23a" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:16 compute-0 nova_compute[260935]: 2025-10-11 09:08:16.228 2 DEBUG nova.storage.rbd_utils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:16 compute-0 nova_compute[260935]: 2025-10-11 09:08:16.234 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424/disk.config 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 407 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 09:08:16 compute-0 nova_compute[260935]: 2025-10-11 09:08:16.448 2 DEBUG oslo_concurrency.processutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424/disk.config 1a90a348-da49-4ba3-8bae-5b426e9d7424_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:16 compute-0 nova_compute[260935]: 2025-10-11 09:08:16.450 2 INFO nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Deleting local config drive /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424/disk.config because it was imported into RBD.
Oct 11 09:08:16 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/328235877' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:08:16 compute-0 systemd-machined[215705]: New machine qemu-115-instance-00000064.
Oct 11 09:08:16 compute-0 systemd[1]: Started Virtual Machine qemu-115-instance-00000064.
Oct 11 09:08:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:08:16 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3934649351' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:08:16 compute-0 nova_compute[260935]: 2025-10-11 09:08:16.610 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:16 compute-0 nova_compute[260935]: 2025-10-11 09:08:16.614 2 DEBUG nova.objects.instance [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'pci_devices' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:16 compute-0 nova_compute[260935]: 2025-10-11 09:08:16.640 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:08:16 compute-0 nova_compute[260935]:   <uuid>df634e60-d790-4a39-adcc-c6345a12e9df</uuid>
Oct 11 09:08:16 compute-0 nova_compute[260935]:   <name>instance-00000065</name>
Oct 11 09:08:16 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:08:16 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:08:16 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerShowV247Test-server-32539140</nova:name>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:08:15</nova:creationTime>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:08:16 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:08:16 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:08:16 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:08:16 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:08:16 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:08:16 compute-0 nova_compute[260935]:         <nova:user uuid="2273281c2fb74ffc95506b1aaa8874ed">tempest-ServerShowV247Test-462364684-project-member</nova:user>
Oct 11 09:08:16 compute-0 nova_compute[260935]:         <nova:project uuid="f4d141b93bd54c8f859a3ec1283a9a71">tempest-ServerShowV247Test-462364684</nova:project>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:08:16 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:08:16 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <system>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <entry name="serial">df634e60-d790-4a39-adcc-c6345a12e9df</entry>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <entry name="uuid">df634e60-d790-4a39-adcc-c6345a12e9df</entry>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     </system>
Oct 11 09:08:16 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:08:16 compute-0 nova_compute[260935]:   <os>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:   </os>
Oct 11 09:08:16 compute-0 nova_compute[260935]:   <features>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:   </features>
Oct 11 09:08:16 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:08:16 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:08:16 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/df634e60-d790-4a39-adcc-c6345a12e9df_disk">
Oct 11 09:08:16 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       </source>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:08:16 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/df634e60-d790-4a39-adcc-c6345a12e9df_disk.config">
Oct 11 09:08:16 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       </source>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:08:16 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/console.log" append="off"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <video>
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     </video>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:08:16 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:08:16 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:08:16 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:08:16 compute-0 nova_compute[260935]: </domain>
Oct 11 09:08:16 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:08:16 compute-0 nova_compute[260935]: 2025-10-11 09:08:16.708 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:08:16 compute-0 nova_compute[260935]: 2025-10-11 09:08:16.709 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:08:16 compute-0 nova_compute[260935]: 2025-10-11 09:08:16.710 2 INFO nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Using config drive
Oct 11 09:08:16 compute-0 nova_compute[260935]: 2025-10-11 09:08:16.742 2 DEBUG nova.storage.rbd_utils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:17 compute-0 podman[360517]: 2025-10-11 09:08:17.072852607 +0000 UTC m=+0.073081357 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.094 2 INFO nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Creating config drive at /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.101 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk75cggww execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.263 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk75cggww" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.291 2 DEBUG nova.storage.rbd_utils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.295 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config df634e60-d790-4a39-adcc-c6345a12e9df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:17 compute-0 ceph-mon[74313]: pgmap v1968: 321 pgs: 321 active+clean; 407 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 11 09:08:17 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3934649351' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.503 2 DEBUG oslo_concurrency.processutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config df634e60-d790-4a39-adcc-c6345a12e9df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.505 2 INFO nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Deleting local config drive /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config because it was imported into RBD.
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.507 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173697.5022328, 1a90a348-da49-4ba3-8bae-5b426e9d7424 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.509 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] VM Resumed (Lifecycle Event)
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.515 2 DEBUG nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.516 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.524 2 INFO nova.virt.libvirt.driver [-] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Instance spawned successfully.
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.525 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.540 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.552 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.557 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.558 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.559 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.559 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.560 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.561 2 DEBUG nova.virt.libvirt.driver [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.590 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.591 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173697.502834, 1a90a348-da49-4ba3-8bae-5b426e9d7424 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.591 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] VM Started (Lifecycle Event)
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.617 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:17 compute-0 systemd-machined[215705]: New machine qemu-116-instance-00000065.
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.622 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.630 2 INFO nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Took 4.23 seconds to spawn the instance on the hypervisor.
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.631 2 DEBUG nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:17 compute-0 systemd[1]: Started Virtual Machine qemu-116-instance-00000065.
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.640 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.700 2 INFO nova.compute.manager [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Took 5.77 seconds to build instance.
Oct 11 09:08:17 compute-0 nova_compute[260935]: 2025-10-11 09:08:17.720 2 DEBUG oslo_concurrency.lockutils [None req-3b7369a5-ec76-44c3-95de-d12671f2656a 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "1a90a348-da49-4ba3-8bae-5b426e9d7424" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.6 MiB/s wr, 68 op/s
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.786 2 DEBUG nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.787 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.787 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173698.7856712, df634e60-d790-4a39-adcc-c6345a12e9df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.788 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] VM Resumed (Lifecycle Event)
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.794 2 INFO nova.virt.libvirt.driver [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance spawned successfully.
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.795 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:08:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.829 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.838 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.844 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.844 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.845 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.846 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.846 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.847 2 DEBUG nova.virt.libvirt.driver [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.874 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.875 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173698.785767, df634e60-d790-4a39-adcc-c6345a12e9df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.875 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] VM Started (Lifecycle Event)
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.905 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.910 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.917 2 INFO nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Took 4.08 seconds to spawn the instance on the hypervisor.
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.918 2 DEBUG nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.952 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:08:18 compute-0 nova_compute[260935]: 2025-10-11 09:08:18.987 2 INFO nova.compute.manager [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Took 5.28 seconds to build instance.
Oct 11 09:08:19 compute-0 nova_compute[260935]: 2025-10-11 09:08:19.006 2 DEBUG oslo_concurrency.lockutils [None req-db07f152-9077-48ec-a572-328138ea3e15 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "df634e60-d790-4a39-adcc-c6345a12e9df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:19 compute-0 nova_compute[260935]: 2025-10-11 09:08:19.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:19 compute-0 ceph-mon[74313]: pgmap v1969: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.6 MiB/s wr, 68 op/s
Oct 11 09:08:19 compute-0 nova_compute[260935]: 2025-10-11 09:08:19.731 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:08:19 compute-0 nova_compute[260935]: 2025-10-11 09:08:19.732 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:08:19 compute-0 nova_compute[260935]: 2025-10-11 09:08:19.733 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:08:19 compute-0 nova_compute[260935]: 2025-10-11 09:08:19.733 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:08:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.6 MiB/s wr, 68 op/s
Oct 11 09:08:20 compute-0 nova_compute[260935]: 2025-10-11 09:08:20.429 2 INFO nova.compute.manager [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Rebuilding instance
Oct 11 09:08:20 compute-0 nova_compute[260935]: 2025-10-11 09:08:20.909 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'trusted_certs' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:20 compute-0 nova_compute[260935]: 2025-10-11 09:08:20.934 2 DEBUG nova.compute.manager [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:21 compute-0 nova_compute[260935]: 2025-10-11 09:08:21.005 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'pci_requests' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:21 compute-0 nova_compute[260935]: 2025-10-11 09:08:21.022 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'pci_devices' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:21 compute-0 nova_compute[260935]: 2025-10-11 09:08:21.038 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'resources' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:21 compute-0 nova_compute[260935]: 2025-10-11 09:08:21.049 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'migration_context' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:21 compute-0 nova_compute[260935]: 2025-10-11 09:08:21.063 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 09:08:21 compute-0 nova_compute[260935]: 2025-10-11 09:08:21.067 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 09:08:21 compute-0 ceph-mon[74313]: pgmap v1970: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.6 MiB/s wr, 68 op/s
Oct 11 09:08:21 compute-0 nova_compute[260935]: 2025-10-11 09:08:21.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:08:21 compute-0 nova_compute[260935]: 2025-10-11 09:08:21.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:08:21 compute-0 nova_compute[260935]: 2025-10-11 09:08:21.735 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:21 compute-0 nova_compute[260935]: 2025-10-11 09:08:21.735 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:21 compute-0 nova_compute[260935]: 2025-10-11 09:08:21.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:21 compute-0 nova_compute[260935]: 2025-10-11 09:08:21.736 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:08:21 compute-0 nova_compute[260935]: 2025-10-11 09:08:21.737 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:08:22 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3340733308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.233 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.358 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.359 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.364 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.364 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.368 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.369 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.373 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.374 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.379 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.379 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.380 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:08:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 136 op/s
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.384 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.384 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:08:22 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3340733308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.598 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.599 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2643MB free_disk=59.74338150024414GB free_vcpus=2 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.600 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.600 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:22.840 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:08:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:22.840 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.880 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.881 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.881 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.881 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.882 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 1a90a348-da49-4ba3-8bae-5b426e9d7424 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.882 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance df634e60-d790-4a39-adcc-c6345a12e9df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.883 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.883 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:08:22 compute-0 nova_compute[260935]: 2025-10-11 09:08:22.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:23 compute-0 nova_compute[260935]: 2025-10-11 09:08:23.076 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:23 compute-0 nova_compute[260935]: 2025-10-11 09:08:23.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:23 compute-0 ceph-mon[74313]: pgmap v1971: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 136 op/s
Oct 11 09:08:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:08:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3950555897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:08:23 compute-0 nova_compute[260935]: 2025-10-11 09:08:23.551 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:23 compute-0 nova_compute[260935]: 2025-10-11 09:08:23.559 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:08:23 compute-0 nova_compute[260935]: 2025-10-11 09:08:23.588 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:08:23 compute-0 nova_compute[260935]: 2025-10-11 09:08:23.635 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:08:23 compute-0 nova_compute[260935]: 2025-10-11 09:08:23.636 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:23 compute-0 podman[360682]: 2025-10-11 09:08:23.800024476 +0000 UTC m=+0.101336924 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 09:08:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:08:24 compute-0 nova_compute[260935]: 2025-10-11 09:08:24.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Oct 11 09:08:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3950555897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:08:24 compute-0 nova_compute[260935]: 2025-10-11 09:08:24.512 2 INFO nova.compute.manager [None req-08cfeb84-ea6e-40f1-86cc-2bb0c9615a8f 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Pausing
Oct 11 09:08:24 compute-0 nova_compute[260935]: 2025-10-11 09:08:24.514 2 DEBUG nova.objects.instance [None req-08cfeb84-ea6e-40f1-86cc-2bb0c9615a8f 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'flavor' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:24 compute-0 nova_compute[260935]: 2025-10-11 09:08:24.583 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173704.58298, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:08:24 compute-0 nova_compute[260935]: 2025-10-11 09:08:24.586 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Paused (Lifecycle Event)
Oct 11 09:08:24 compute-0 nova_compute[260935]: 2025-10-11 09:08:24.588 2 DEBUG nova.compute.manager [None req-08cfeb84-ea6e-40f1-86cc-2bb0c9615a8f 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:24 compute-0 nova_compute[260935]: 2025-10-11 09:08:24.628 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:24 compute-0 nova_compute[260935]: 2025-10-11 09:08:24.634 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:08:24 compute-0 nova_compute[260935]: 2025-10-11 09:08:24.666 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 11 09:08:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:08:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:08:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:08:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:08:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:08:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:08:25 compute-0 ceph-mon[74313]: pgmap v1972: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Oct 11 09:08:25 compute-0 nova_compute[260935]: 2025-10-11 09:08:25.638 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:08:25 compute-0 nova_compute[260935]: 2025-10-11 09:08:25.640 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:08:25 compute-0 nova_compute[260935]: 2025-10-11 09:08:25.640 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 09:08:25 compute-0 nova_compute[260935]: 2025-10-11 09:08:25.920 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:08:25 compute-0 nova_compute[260935]: 2025-10-11 09:08:25.921 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:08:25 compute-0 nova_compute[260935]: 2025-10-11 09:08:25.921 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:08:25 compute-0 nova_compute[260935]: 2025-10-11 09:08:25.922 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Oct 11 09:08:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:08:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/433786690' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:08:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:08:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/433786690' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:08:27 compute-0 ceph-mon[74313]: pgmap v1973: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Oct 11 09:08:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/433786690' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:08:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/433786690' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:08:27 compute-0 nova_compute[260935]: 2025-10-11 09:08:27.638 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:08:27 compute-0 nova_compute[260935]: 2025-10-11 09:08:27.666 2 INFO nova.compute.manager [None req-5b3142f5-12df-4068-ace6-b88cbfd5eb73 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Unpausing
Oct 11 09:08:27 compute-0 nova_compute[260935]: 2025-10-11 09:08:27.667 2 DEBUG nova.objects.instance [None req-5b3142f5-12df-4068-ace6-b88cbfd5eb73 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'flavor' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:27 compute-0 nova_compute[260935]: 2025-10-11 09:08:27.672 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:08:27 compute-0 nova_compute[260935]: 2025-10-11 09:08:27.672 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:08:27 compute-0 nova_compute[260935]: 2025-10-11 09:08:27.714 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173707.713921, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:08:27 compute-0 nova_compute[260935]: 2025-10-11 09:08:27.714 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Resumed (Lifecycle Event)
Oct 11 09:08:27 compute-0 virtqemud[260524]: argument unsupported: QEMU guest agent is not configured
Oct 11 09:08:27 compute-0 nova_compute[260935]: 2025-10-11 09:08:27.728 2 DEBUG nova.virt.libvirt.guest [None req-5b3142f5-12df-4068-ace6-b88cbfd5eb73 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 11 09:08:27 compute-0 nova_compute[260935]: 2025-10-11 09:08:27.728 2 DEBUG nova.compute.manager [None req-5b3142f5-12df-4068-ace6-b88cbfd5eb73 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:27 compute-0 nova_compute[260935]: 2025-10-11 09:08:27.742 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:27 compute-0 nova_compute[260935]: 2025-10-11 09:08:27.747 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:08:27 compute-0 nova_compute[260935]: 2025-10-11 09:08:27.783 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] During sync_power_state the instance has a pending task (unpausing). Skip.
Oct 11 09:08:27 compute-0 podman[360701]: 2025-10-11 09:08:27.808189617 +0000 UTC m=+0.110112085 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:08:27 compute-0 podman[360702]: 2025-10-11 09:08:27.890433484 +0000 UTC m=+0.164265100 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 11 09:08:28 compute-0 nova_compute[260935]: 2025-10-11 09:08:28.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Oct 11 09:08:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:08:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:28.842 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:08:29 compute-0 nova_compute[260935]: 2025-10-11 09:08:29.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:29 compute-0 ceph-mon[74313]: pgmap v1974: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Oct 11 09:08:29 compute-0 nova_compute[260935]: 2025-10-11 09:08:29.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:08:29 compute-0 nova_compute[260935]: 2025-10-11 09:08:29.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 09:08:29 compute-0 nova_compute[260935]: 2025-10-11 09:08:29.719 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 09:08:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 8.2 KiB/s wr, 135 op/s
Oct 11 09:08:31 compute-0 nova_compute[260935]: 2025-10-11 09:08:31.212 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 09:08:31 compute-0 ceph-mon[74313]: pgmap v1975: 321 pgs: 321 active+clean; 500 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 8.2 KiB/s wr, 135 op/s
Oct 11 09:08:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 544 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.9 MiB/s wr, 227 op/s
Oct 11 09:08:33 compute-0 nova_compute[260935]: 2025-10-11 09:08:33.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:33 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct 11 09:08:33 compute-0 ceph-mon[74313]: pgmap v1976: 321 pgs: 321 active+clean; 544 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.9 MiB/s wr, 227 op/s
Oct 11 09:08:33 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d00000065.scope: Consumed 12.380s CPU time.
Oct 11 09:08:33 compute-0 systemd-machined[215705]: Machine qemu-116-instance-00000065 terminated.
Oct 11 09:08:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:08:34 compute-0 nova_compute[260935]: 2025-10-11 09:08:34.228 2 INFO nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance shutdown successfully after 13 seconds.
Oct 11 09:08:34 compute-0 nova_compute[260935]: 2025-10-11 09:08:34.236 2 INFO nova.virt.libvirt.driver [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance destroyed successfully.
Oct 11 09:08:34 compute-0 nova_compute[260935]: 2025-10-11 09:08:34.242 2 INFO nova.virt.libvirt.driver [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance destroyed successfully.
Oct 11 09:08:34 compute-0 nova_compute[260935]: 2025-10-11 09:08:34.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 566 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 193 op/s
Oct 11 09:08:34 compute-0 nova_compute[260935]: 2025-10-11 09:08:34.713 2 INFO nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Deleting instance files /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df_del
Oct 11 09:08:34 compute-0 nova_compute[260935]: 2025-10-11 09:08:34.714 2 INFO nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Deletion of /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df_del complete
Oct 11 09:08:34 compute-0 nova_compute[260935]: 2025-10-11 09:08:34.891 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:08:34 compute-0 nova_compute[260935]: 2025-10-11 09:08:34.892 2 INFO nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Creating image(s)
Oct 11 09:08:34 compute-0 nova_compute[260935]: 2025-10-11 09:08:34.926 2 DEBUG nova.storage.rbd_utils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:34 compute-0 nova_compute[260935]: 2025-10-11 09:08:34.961 2 DEBUG nova.storage.rbd_utils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:34 compute-0 nova_compute[260935]: 2025-10-11 09:08:34.988 2 DEBUG nova.storage.rbd_utils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:34 compute-0 nova_compute[260935]: 2025-10-11 09:08:34.994 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.076 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.077 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.078 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.078 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.107 2 DEBUG nova.storage.rbd_utils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.111 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 df634e60-d790-4a39-adcc-c6345a12e9df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.403 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 df634e60-d790-4a39-adcc-c6345a12e9df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.483 2 DEBUG nova.storage.rbd_utils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] resizing rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:08:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Oct 11 09:08:35 compute-0 ceph-mon[74313]: pgmap v1977: 321 pgs: 321 active+clean; 566 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 193 op/s
Oct 11 09:08:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Oct 11 09:08:35 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.607 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.608 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Ensure instance console log exists: /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.609 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.609 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.610 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.611 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.616 2 WARNING nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.624 2 DEBUG nova.virt.libvirt.host [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.624 2 DEBUG nova.virt.libvirt.host [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.629 2 DEBUG nova.virt.libvirt.host [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.629 2 DEBUG nova.virt.libvirt.host [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.629 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.630 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.630 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.630 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.631 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.631 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.631 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.632 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.632 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.632 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.633 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.633 2 DEBUG nova.virt.hardware [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.633 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'vcpu_model' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:35 compute-0 nova_compute[260935]: 2025-10-11 09:08:35.657 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:08:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3185534938' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:08:36 compute-0 nova_compute[260935]: 2025-10-11 09:08:36.138 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:36 compute-0 nova_compute[260935]: 2025-10-11 09:08:36.172 2 DEBUG nova.storage.rbd_utils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:36 compute-0 nova_compute[260935]: 2025-10-11 09:08:36.179 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 566 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 772 KiB/s rd, 5.1 MiB/s wr, 152 op/s
Oct 11 09:08:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Oct 11 09:08:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Oct 11 09:08:36 compute-0 ceph-mon[74313]: osdmap e258: 3 total, 3 up, 3 in
Oct 11 09:08:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3185534938' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:08:36 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Oct 11 09:08:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:08:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1845454379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:08:36 compute-0 nova_compute[260935]: 2025-10-11 09:08:36.673 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:36 compute-0 nova_compute[260935]: 2025-10-11 09:08:36.682 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:08:36 compute-0 nova_compute[260935]:   <uuid>df634e60-d790-4a39-adcc-c6345a12e9df</uuid>
Oct 11 09:08:36 compute-0 nova_compute[260935]:   <name>instance-00000065</name>
Oct 11 09:08:36 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:08:36 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:08:36 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerShowV247Test-server-32539140</nova:name>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:08:35</nova:creationTime>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:08:36 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:08:36 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:08:36 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:08:36 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:08:36 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:08:36 compute-0 nova_compute[260935]:         <nova:user uuid="2273281c2fb74ffc95506b1aaa8874ed">tempest-ServerShowV247Test-462364684-project-member</nova:user>
Oct 11 09:08:36 compute-0 nova_compute[260935]:         <nova:project uuid="f4d141b93bd54c8f859a3ec1283a9a71">tempest-ServerShowV247Test-462364684</nova:project>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:08:36 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:08:36 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <system>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <entry name="serial">df634e60-d790-4a39-adcc-c6345a12e9df</entry>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <entry name="uuid">df634e60-d790-4a39-adcc-c6345a12e9df</entry>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     </system>
Oct 11 09:08:36 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:08:36 compute-0 nova_compute[260935]:   <os>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:   </os>
Oct 11 09:08:36 compute-0 nova_compute[260935]:   <features>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:   </features>
Oct 11 09:08:36 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:08:36 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:08:36 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/df634e60-d790-4a39-adcc-c6345a12e9df_disk">
Oct 11 09:08:36 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       </source>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:08:36 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/df634e60-d790-4a39-adcc-c6345a12e9df_disk.config">
Oct 11 09:08:36 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       </source>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:08:36 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/console.log" append="off"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <video>
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     </video>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:08:36 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:08:36 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:08:36 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:08:36 compute-0 nova_compute[260935]: </domain>
Oct 11 09:08:36 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:08:36 compute-0 nova_compute[260935]: 2025-10-11 09:08:36.770 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:08:36 compute-0 nova_compute[260935]: 2025-10-11 09:08:36.771 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:08:36 compute-0 nova_compute[260935]: 2025-10-11 09:08:36.772 2 INFO nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Using config drive
Oct 11 09:08:36 compute-0 nova_compute[260935]: 2025-10-11 09:08:36.805 2 DEBUG nova.storage.rbd_utils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:36 compute-0 nova_compute[260935]: 2025-10-11 09:08:36.839 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'ec2_ids' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:36 compute-0 nova_compute[260935]: 2025-10-11 09:08:36.876 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'keypairs' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:37 compute-0 nova_compute[260935]: 2025-10-11 09:08:37.215 2 INFO nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Creating config drive at /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config
Oct 11 09:08:37 compute-0 nova_compute[260935]: 2025-10-11 09:08:37.225 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo2qe_sr0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:37 compute-0 nova_compute[260935]: 2025-10-11 09:08:37.395 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo2qe_sr0" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:37 compute-0 nova_compute[260935]: 2025-10-11 09:08:37.433 2 DEBUG nova.storage.rbd_utils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] rbd image df634e60-d790-4a39-adcc-c6345a12e9df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:37 compute-0 nova_compute[260935]: 2025-10-11 09:08:37.437 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config df634e60-d790-4a39-adcc-c6345a12e9df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Oct 11 09:08:37 compute-0 ceph-mon[74313]: pgmap v1979: 321 pgs: 321 active+clean; 566 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 772 KiB/s rd, 5.1 MiB/s wr, 152 op/s
Oct 11 09:08:37 compute-0 ceph-mon[74313]: osdmap e259: 3 total, 3 up, 3 in
Oct 11 09:08:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1845454379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:08:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Oct 11 09:08:37 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Oct 11 09:08:37 compute-0 nova_compute[260935]: 2025-10-11 09:08:37.647 2 DEBUG oslo_concurrency.processutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config df634e60-d790-4a39-adcc-c6345a12e9df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:37 compute-0 nova_compute[260935]: 2025-10-11 09:08:37.649 2 INFO nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Deleting local config drive /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df/disk.config because it was imported into RBD.
Oct 11 09:08:37 compute-0 systemd-machined[215705]: New machine qemu-117-instance-00000065.
Oct 11 09:08:37 compute-0 systemd[1]: Started Virtual Machine qemu-117-instance-00000065.
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 533 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 6.3 MiB/s wr, 241 op/s
Oct 11 09:08:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Oct 11 09:08:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Oct 11 09:08:38 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Oct 11 09:08:38 compute-0 ceph-mon[74313]: osdmap e260: 3 total, 3 up, 3 in
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:08:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.828 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for df634e60-d790-4a39-adcc-c6345a12e9df due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.829 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173718.827769, df634e60-d790-4a39-adcc-c6345a12e9df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.829 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] VM Resumed (Lifecycle Event)
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.833 2 DEBUG nova.compute.manager [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.834 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.840 2 INFO nova.virt.libvirt.driver [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance spawned successfully.
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.841 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.865 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.875 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.882 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.883 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.884 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.884 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.885 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.886 2 DEBUG nova.virt.libvirt.driver [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.926 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.927 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173718.8327749, df634e60-d790-4a39-adcc-c6345a12e9df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.928 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] VM Started (Lifecycle Event)
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.959 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.965 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:08:38 compute-0 nova_compute[260935]: 2025-10-11 09:08:38.972 2 DEBUG nova.compute.manager [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:39 compute-0 nova_compute[260935]: 2025-10-11 09:08:39.004 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 09:08:39 compute-0 nova_compute[260935]: 2025-10-11 09:08:39.037 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:39 compute-0 nova_compute[260935]: 2025-10-11 09:08:39.037 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:39 compute-0 nova_compute[260935]: 2025-10-11 09:08:39.038 2 DEBUG nova.objects.instance [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 09:08:39 compute-0 nova_compute[260935]: 2025-10-11 09:08:39.101 2 DEBUG oslo_concurrency.lockutils [None req-751950c1-bd6b-4ff4-a050-b5bf3552245e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:39 compute-0 nova_compute[260935]: 2025-10-11 09:08:39.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Oct 11 09:08:39 compute-0 ceph-mon[74313]: pgmap v1982: 321 pgs: 321 active+clean; 533 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 6.3 MiB/s wr, 241 op/s
Oct 11 09:08:39 compute-0 ceph-mon[74313]: osdmap e261: 3 total, 3 up, 3 in
Oct 11 09:08:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Oct 11 09:08:39 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Oct 11 09:08:40 compute-0 nova_compute[260935]: 2025-10-11 09:08:40.371 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "df634e60-d790-4a39-adcc-c6345a12e9df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:40 compute-0 nova_compute[260935]: 2025-10-11 09:08:40.371 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "df634e60-d790-4a39-adcc-c6345a12e9df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:40 compute-0 nova_compute[260935]: 2025-10-11 09:08:40.372 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "df634e60-d790-4a39-adcc-c6345a12e9df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:40 compute-0 nova_compute[260935]: 2025-10-11 09:08:40.372 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "df634e60-d790-4a39-adcc-c6345a12e9df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:40 compute-0 nova_compute[260935]: 2025-10-11 09:08:40.372 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "df634e60-d790-4a39-adcc-c6345a12e9df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:40 compute-0 nova_compute[260935]: 2025-10-11 09:08:40.374 2 INFO nova.compute.manager [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Terminating instance
Oct 11 09:08:40 compute-0 nova_compute[260935]: 2025-10-11 09:08:40.375 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "refresh_cache-df634e60-d790-4a39-adcc-c6345a12e9df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:08:40 compute-0 nova_compute[260935]: 2025-10-11 09:08:40.376 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquired lock "refresh_cache-df634e60-d790-4a39-adcc-c6345a12e9df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:08:40 compute-0 nova_compute[260935]: 2025-10-11 09:08:40.376 2 DEBUG nova.network.neutron [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:08:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 533 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 176 KiB/s rd, 5.4 MiB/s wr, 261 op/s
Oct 11 09:08:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Oct 11 09:08:40 compute-0 ceph-mon[74313]: osdmap e262: 3 total, 3 up, 3 in
Oct 11 09:08:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Oct 11 09:08:40 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Oct 11 09:08:40 compute-0 nova_compute[260935]: 2025-10-11 09:08:40.788 2 DEBUG nova.network.neutron [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:08:41 compute-0 nova_compute[260935]: 2025-10-11 09:08:41.257 2 DEBUG nova.network.neutron [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:08:41 compute-0 nova_compute[260935]: 2025-10-11 09:08:41.274 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Releasing lock "refresh_cache-df634e60-d790-4a39-adcc-c6345a12e9df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:08:41 compute-0 nova_compute[260935]: 2025-10-11 09:08:41.275 2 DEBUG nova.compute.manager [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:08:41 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct 11 09:08:41 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d00000065.scope: Consumed 3.511s CPU time.
Oct 11 09:08:41 compute-0 systemd-machined[215705]: Machine qemu-117-instance-00000065 terminated.
Oct 11 09:08:41 compute-0 nova_compute[260935]: 2025-10-11 09:08:41.501 2 INFO nova.virt.libvirt.driver [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance destroyed successfully.
Oct 11 09:08:41 compute-0 nova_compute[260935]: 2025-10-11 09:08:41.502 2 DEBUG nova.objects.instance [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'resources' on Instance uuid df634e60-d790-4a39-adcc-c6345a12e9df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:41 compute-0 ceph-mon[74313]: pgmap v1985: 321 pgs: 321 active+clean; 533 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 176 KiB/s rd, 5.4 MiB/s wr, 261 op/s
Oct 11 09:08:41 compute-0 ceph-mon[74313]: osdmap e263: 3 total, 3 up, 3 in
Oct 11 09:08:42 compute-0 nova_compute[260935]: 2025-10-11 09:08:42.010 2 INFO nova.virt.libvirt.driver [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Deleting instance files /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df_del
Oct 11 09:08:42 compute-0 nova_compute[260935]: 2025-10-11 09:08:42.011 2 INFO nova.virt.libvirt.driver [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Deletion of /var/lib/nova/instances/df634e60-d790-4a39-adcc-c6345a12e9df_del complete
Oct 11 09:08:42 compute-0 nova_compute[260935]: 2025-10-11 09:08:42.077 2 INFO nova.compute.manager [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 11 09:08:42 compute-0 nova_compute[260935]: 2025-10-11 09:08:42.077 2 DEBUG oslo.service.loopingcall [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:08:42 compute-0 nova_compute[260935]: 2025-10-11 09:08:42.078 2 DEBUG nova.compute.manager [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:08:42 compute-0 nova_compute[260935]: 2025-10-11 09:08:42.078 2 DEBUG nova.network.neutron [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:08:42 compute-0 nova_compute[260935]: 2025-10-11 09:08:42.261 2 DEBUG nova.network.neutron [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:08:42 compute-0 nova_compute[260935]: 2025-10-11 09:08:42.277 2 DEBUG nova.network.neutron [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:08:42 compute-0 nova_compute[260935]: 2025-10-11 09:08:42.292 2 INFO nova.compute.manager [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Took 0.21 seconds to deallocate network for instance.
Oct 11 09:08:42 compute-0 nova_compute[260935]: 2025-10-11 09:08:42.340 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:42 compute-0 nova_compute[260935]: 2025-10-11 09:08:42.341 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 490 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 40 KiB/s wr, 277 op/s
Oct 11 09:08:42 compute-0 nova_compute[260935]: 2025-10-11 09:08:42.483 2 DEBUG oslo_concurrency.processutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:08:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1017674053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:08:43 compute-0 nova_compute[260935]: 2025-10-11 09:08:43.007 2 DEBUG oslo_concurrency.processutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:43 compute-0 nova_compute[260935]: 2025-10-11 09:08:43.017 2 DEBUG nova.compute.provider_tree [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:08:43 compute-0 nova_compute[260935]: 2025-10-11 09:08:43.043 2 DEBUG nova.scheduler.client.report [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:08:43 compute-0 nova_compute[260935]: 2025-10-11 09:08:43.075 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:43 compute-0 nova_compute[260935]: 2025-10-11 09:08:43.105 2 INFO nova.scheduler.client.report [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Deleted allocations for instance df634e60-d790-4a39-adcc-c6345a12e9df
Oct 11 09:08:43 compute-0 nova_compute[260935]: 2025-10-11 09:08:43.212 2 DEBUG oslo_concurrency.lockutils [None req-332a8af7-387c-46cf-88fe-bdcd5676e252 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "df634e60-d790-4a39-adcc-c6345a12e9df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:43 compute-0 nova_compute[260935]: 2025-10-11 09:08:43.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:43 compute-0 ceph-mon[74313]: pgmap v1987: 321 pgs: 321 active+clean; 490 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 40 KiB/s wr, 277 op/s
Oct 11 09:08:43 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1017674053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:08:43 compute-0 nova_compute[260935]: 2025-10-11 09:08:43.704 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "1a90a348-da49-4ba3-8bae-5b426e9d7424" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:43 compute-0 nova_compute[260935]: 2025-10-11 09:08:43.705 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "1a90a348-da49-4ba3-8bae-5b426e9d7424" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:43 compute-0 nova_compute[260935]: 2025-10-11 09:08:43.705 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "1a90a348-da49-4ba3-8bae-5b426e9d7424-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:43 compute-0 nova_compute[260935]: 2025-10-11 09:08:43.705 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "1a90a348-da49-4ba3-8bae-5b426e9d7424-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:43 compute-0 nova_compute[260935]: 2025-10-11 09:08:43.706 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "1a90a348-da49-4ba3-8bae-5b426e9d7424-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:43 compute-0 nova_compute[260935]: 2025-10-11 09:08:43.708 2 INFO nova.compute.manager [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Terminating instance
Oct 11 09:08:43 compute-0 nova_compute[260935]: 2025-10-11 09:08:43.710 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "refresh_cache-1a90a348-da49-4ba3-8bae-5b426e9d7424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:08:43 compute-0 nova_compute[260935]: 2025-10-11 09:08:43.710 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquired lock "refresh_cache-1a90a348-da49-4ba3-8bae-5b426e9d7424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:08:43 compute-0 nova_compute[260935]: 2025-10-11 09:08:43.710 2 DEBUG nova.network.neutron [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:08:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:08:44 compute-0 nova_compute[260935]: 2025-10-11 09:08:43.996 2 DEBUG nova.network.neutron [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:08:44 compute-0 nova_compute[260935]: 2025-10-11 09:08:44.255 2 DEBUG nova.network.neutron [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:08:44 compute-0 nova_compute[260935]: 2025-10-11 09:08:44.273 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Releasing lock "refresh_cache-1a90a348-da49-4ba3-8bae-5b426e9d7424" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:08:44 compute-0 nova_compute[260935]: 2025-10-11 09:08:44.274 2 DEBUG nova.compute.manager [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:08:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 486 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 38 KiB/s wr, 286 op/s
Oct 11 09:08:44 compute-0 nova_compute[260935]: 2025-10-11 09:08:44.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:44 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d00000064.scope: Deactivated successfully.
Oct 11 09:08:44 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d00000064.scope: Consumed 12.881s CPU time.
Oct 11 09:08:44 compute-0 systemd-machined[215705]: Machine qemu-115-instance-00000064 terminated.
Oct 11 09:08:44 compute-0 nova_compute[260935]: 2025-10-11 09:08:44.499 2 INFO nova.virt.libvirt.driver [-] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Instance destroyed successfully.
Oct 11 09:08:44 compute-0 nova_compute[260935]: 2025-10-11 09:08:44.500 2 DEBUG nova.objects.instance [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lazy-loading 'resources' on Instance uuid 1a90a348-da49-4ba3-8bae-5b426e9d7424 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Oct 11 09:08:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Oct 11 09:08:44 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Oct 11 09:08:45 compute-0 nova_compute[260935]: 2025-10-11 09:08:45.081 2 INFO nova.virt.libvirt.driver [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Deleting instance files /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424_del
Oct 11 09:08:45 compute-0 nova_compute[260935]: 2025-10-11 09:08:45.083 2 INFO nova.virt.libvirt.driver [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Deletion of /var/lib/nova/instances/1a90a348-da49-4ba3-8bae-5b426e9d7424_del complete
Oct 11 09:08:45 compute-0 nova_compute[260935]: 2025-10-11 09:08:45.173 2 INFO nova.compute.manager [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Took 0.90 seconds to destroy the instance on the hypervisor.
Oct 11 09:08:45 compute-0 nova_compute[260935]: 2025-10-11 09:08:45.173 2 DEBUG oslo.service.loopingcall [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:08:45 compute-0 nova_compute[260935]: 2025-10-11 09:08:45.174 2 DEBUG nova.compute.manager [-] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:08:45 compute-0 nova_compute[260935]: 2025-10-11 09:08:45.174 2 DEBUG nova.network.neutron [-] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:08:45 compute-0 nova_compute[260935]: 2025-10-11 09:08:45.361 2 DEBUG nova.network.neutron [-] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:08:45 compute-0 nova_compute[260935]: 2025-10-11 09:08:45.387 2 DEBUG nova.network.neutron [-] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:08:45 compute-0 nova_compute[260935]: 2025-10-11 09:08:45.416 2 INFO nova.compute.manager [-] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Took 0.24 seconds to deallocate network for instance.
Oct 11 09:08:45 compute-0 nova_compute[260935]: 2025-10-11 09:08:45.502 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:45 compute-0 nova_compute[260935]: 2025-10-11 09:08:45.503 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Oct 11 09:08:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Oct 11 09:08:45 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Oct 11 09:08:45 compute-0 ceph-mon[74313]: pgmap v1988: 321 pgs: 321 active+clean; 486 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 38 KiB/s wr, 286 op/s
Oct 11 09:08:45 compute-0 ceph-mon[74313]: osdmap e264: 3 total, 3 up, 3 in
Oct 11 09:08:45 compute-0 nova_compute[260935]: 2025-10-11 09:08:45.707 2 DEBUG oslo_concurrency.processutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:08:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2571303775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:08:46 compute-0 nova_compute[260935]: 2025-10-11 09:08:46.183 2 DEBUG oslo_concurrency.processutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:46 compute-0 nova_compute[260935]: 2025-10-11 09:08:46.194 2 DEBUG nova.compute.provider_tree [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:08:46 compute-0 nova_compute[260935]: 2025-10-11 09:08:46.218 2 DEBUG nova.scheduler.client.report [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:08:46 compute-0 nova_compute[260935]: 2025-10-11 09:08:46.287 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:46 compute-0 nova_compute[260935]: 2025-10-11 09:08:46.332 2 INFO nova.scheduler.client.report [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Deleted allocations for instance 1a90a348-da49-4ba3-8bae-5b426e9d7424
Oct 11 09:08:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 486 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 38 KiB/s wr, 286 op/s
Oct 11 09:08:46 compute-0 nova_compute[260935]: 2025-10-11 09:08:46.437 2 DEBUG oslo_concurrency.lockutils [None req-662555af-b93b-4cde-974d-5ac176d92c5e 2273281c2fb74ffc95506b1aaa8874ed f4d141b93bd54c8f859a3ec1283a9a71 - - default default] Lock "1a90a348-da49-4ba3-8bae-5b426e9d7424" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Oct 11 09:08:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Oct 11 09:08:46 compute-0 ceph-mon[74313]: osdmap e265: 3 total, 3 up, 3 in
Oct 11 09:08:46 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2571303775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:08:46 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Oct 11 09:08:46 compute-0 nova_compute[260935]: 2025-10-11 09:08:46.760 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:46 compute-0 nova_compute[260935]: 2025-10-11 09:08:46.761 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:46 compute-0 nova_compute[260935]: 2025-10-11 09:08:46.761 2 INFO nova.compute.manager [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Shelving
Oct 11 09:08:46 compute-0 nova_compute[260935]: 2025-10-11 09:08:46.793 2 DEBUG nova.virt.libvirt.driver [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 09:08:47 compute-0 sshd-session[361178]: Invalid user rena from 155.4.244.179 port 38260
Oct 11 09:08:47 compute-0 sshd-session[361178]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:08:47 compute-0 sshd-session[361178]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:08:47 compute-0 podman[361202]: 2025-10-11 09:08:47.435929997 +0000 UTC m=+0.093884841 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 09:08:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Oct 11 09:08:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Oct 11 09:08:47 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Oct 11 09:08:47 compute-0 ceph-mon[74313]: pgmap v1991: 321 pgs: 321 active+clean; 486 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 38 KiB/s wr, 286 op/s
Oct 11 09:08:47 compute-0 ceph-mon[74313]: osdmap e266: 3 total, 3 up, 3 in
Oct 11 09:08:48 compute-0 nova_compute[260935]: 2025-10-11 09:08:48.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 407 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 209 KiB/s rd, 40 KiB/s wr, 292 op/s
Oct 11 09:08:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Oct 11 09:08:48 compute-0 ceph-mon[74313]: osdmap e267: 3 total, 3 up, 3 in
Oct 11 09:08:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Oct 11 09:08:48 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Oct 11 09:08:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:08:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Oct 11 09:08:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Oct 11 09:08:48 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Oct 11 09:08:49 compute-0 kernel: tapa611854c-0a (unregistering): left promiscuous mode
Oct 11 09:08:49 compute-0 NetworkManager[44960]: <info>  [1760173729.1776] device (tapa611854c-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:08:49 compute-0 nova_compute[260935]: 2025-10-11 09:08:49.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:49 compute-0 ovn_controller[152945]: 2025-10-11T09:08:49Z|00907|binding|INFO|Releasing lport a611854c-0a61-41b8-91ce-0c0f893aa54c from this chassis (sb_readonly=0)
Oct 11 09:08:49 compute-0 ovn_controller[152945]: 2025-10-11T09:08:49Z|00908|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c down in Southbound
Oct 11 09:08:49 compute-0 ovn_controller[152945]: 2025-10-11T09:08:49Z|00909|binding|INFO|Removing iface tapa611854c-0a ovn-installed in OVS
Oct 11 09:08:49 compute-0 nova_compute[260935]: 2025-10-11 09:08:49.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.193 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:da:b0 10.100.0.12'], port_security=['fa:16:3e:9d:da:b0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a611854c-0a61-41b8-91ce-0c0f893aa54c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:08:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.195 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a611854c-0a61-41b8-91ce-0c0f893aa54c in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec unbound from our chassis
Oct 11 09:08:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.197 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14e82eeb-74e2-4de3-9047-74da777fe1ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:08:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.200 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a11e6570-a690-47d1-a88e-a5d0b3fde613]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:08:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.200 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec namespace which is not needed anymore
Oct 11 09:08:49 compute-0 nova_compute[260935]: 2025-10-11 09:08:49.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:49 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d00000062.scope: Deactivated successfully.
Oct 11 09:08:49 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d00000062.scope: Consumed 20.462s CPU time.
Oct 11 09:08:49 compute-0 systemd-machined[215705]: Machine qemu-112-instance-00000062 terminated.
Oct 11 09:08:49 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[357685]: [NOTICE]   (357689) : haproxy version is 2.8.14-c23fe91
Oct 11 09:08:49 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[357685]: [NOTICE]   (357689) : path to executable is /usr/sbin/haproxy
Oct 11 09:08:49 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[357685]: [WARNING]  (357689) : Exiting Master process...
Oct 11 09:08:49 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[357685]: [ALERT]    (357689) : Current worker (357691) exited with code 143 (Terminated)
Oct 11 09:08:49 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[357685]: [WARNING]  (357689) : All workers exited. Exiting... (0)
Oct 11 09:08:49 compute-0 nova_compute[260935]: 2025-10-11 09:08:49.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:49 compute-0 systemd[1]: libpod-80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748.scope: Deactivated successfully.
Oct 11 09:08:49 compute-0 podman[361245]: 2025-10-11 09:08:49.405234534 +0000 UTC m=+0.063164545 container died 80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:08:49 compute-0 nova_compute[260935]: 2025-10-11 09:08:49.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:49 compute-0 nova_compute[260935]: 2025-10-11 09:08:49.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748-userdata-shm.mount: Deactivated successfully.
Oct 11 09:08:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-046b66f7870015e0b115ddde4088213d8c930e4e0ddec3540ecd817899022603-merged.mount: Deactivated successfully.
Oct 11 09:08:49 compute-0 podman[361245]: 2025-10-11 09:08:49.488790849 +0000 UTC m=+0.146720840 container cleanup 80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:08:49 compute-0 systemd[1]: libpod-conmon-80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748.scope: Deactivated successfully.
Oct 11 09:08:49 compute-0 podman[361285]: 2025-10-11 09:08:49.608409124 +0000 UTC m=+0.077869404 container remove 80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 09:08:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.619 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e44a9158-cb14-48d5-b043-1741601ce9d1]: (4, ('Sat Oct 11 09:08:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec (80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748)\n80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748\nSat Oct 11 09:08:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec (80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748)\n80ced7c6a3c16ccdbe23725730c8db7aeed4183f2fc1bc06e5ea3e07111ac748\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:08:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.622 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c0994e31-0d45-434e-8054-1e534a77e1fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:08:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.623 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:08:49 compute-0 nova_compute[260935]: 2025-10-11 09:08:49.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:49 compute-0 kernel: tap14e82eeb-70: left promiscuous mode
Oct 11 09:08:49 compute-0 nova_compute[260935]: 2025-10-11 09:08:49.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.668 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a9258649-eec0-4356-8d72-f68aabc7fcbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:08:49 compute-0 sshd-session[361178]: Failed password for invalid user rena from 155.4.244.179 port 38260 ssh2
Oct 11 09:08:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.703 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7c43e6db-656f-47a6-b07e-77098619a562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:08:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.705 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b829d98d-2696-4433-865c-e2c1327c8e82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:08:49 compute-0 ceph-mon[74313]: pgmap v1994: 321 pgs: 321 active+clean; 407 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 209 KiB/s rd, 40 KiB/s wr, 292 op/s
Oct 11 09:08:49 compute-0 ceph-mon[74313]: osdmap e268: 3 total, 3 up, 3 in
Oct 11 09:08:49 compute-0 ceph-mon[74313]: osdmap e269: 3 total, 3 up, 3 in
Oct 11 09:08:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.736 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[488026de-1b84-4661-a920-fa3ca4dc846f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548093, 'reachable_time': 21473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361304, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:08:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d14e82eeb\x2d74e2\x2d4de3\x2d9047\x2d74da777fe1ec.mount: Deactivated successfully.
Oct 11 09:08:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.740 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:08:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:08:49.740 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d71945dd-d8f4-46c3-b59b-08cec0fff12c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:08:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Oct 11 09:08:49 compute-0 nova_compute[260935]: 2025-10-11 09:08:49.823 2 INFO nova.virt.libvirt.driver [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance shutdown successfully after 3 seconds.
Oct 11 09:08:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Oct 11 09:08:49 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Oct 11 09:08:49 compute-0 nova_compute[260935]: 2025-10-11 09:08:49.833 2 INFO nova.virt.libvirt.driver [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance destroyed successfully.
Oct 11 09:08:49 compute-0 nova_compute[260935]: 2025-10-11 09:08:49.834 2 DEBUG nova.objects.instance [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:49 compute-0 nova_compute[260935]: 2025-10-11 09:08:49.856 2 DEBUG nova.compute.manager [req-dad44b56-18dd-42d9-9af3-37e58e23c8d9 req-a3aaf233-e15b-46bc-bb49-4ca0b182a66f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:08:49 compute-0 nova_compute[260935]: 2025-10-11 09:08:49.857 2 DEBUG oslo_concurrency.lockutils [req-dad44b56-18dd-42d9-9af3-37e58e23c8d9 req-a3aaf233-e15b-46bc-bb49-4ca0b182a66f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:49 compute-0 nova_compute[260935]: 2025-10-11 09:08:49.857 2 DEBUG oslo_concurrency.lockutils [req-dad44b56-18dd-42d9-9af3-37e58e23c8d9 req-a3aaf233-e15b-46bc-bb49-4ca0b182a66f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:49 compute-0 nova_compute[260935]: 2025-10-11 09:08:49.858 2 DEBUG oslo_concurrency.lockutils [req-dad44b56-18dd-42d9-9af3-37e58e23c8d9 req-a3aaf233-e15b-46bc-bb49-4ca0b182a66f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:49 compute-0 nova_compute[260935]: 2025-10-11 09:08:49.860 2 DEBUG nova.compute.manager [req-dad44b56-18dd-42d9-9af3-37e58e23c8d9 req-a3aaf233-e15b-46bc-bb49-4ca0b182a66f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:08:49 compute-0 nova_compute[260935]: 2025-10-11 09:08:49.860 2 WARNING nova.compute.manager [req-dad44b56-18dd-42d9-9af3-37e58e23c8d9 req-a3aaf233-e15b-46bc-bb49-4ca0b182a66f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state active and task_state shelving.
Oct 11 09:08:50 compute-0 nova_compute[260935]: 2025-10-11 09:08:50.312 2 INFO nova.virt.libvirt.driver [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Beginning cold snapshot process
Oct 11 09:08:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 407 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 226 KiB/s rd, 43 KiB/s wr, 316 op/s
Oct 11 09:08:50 compute-0 nova_compute[260935]: 2025-10-11 09:08:50.513 2 DEBUG nova.virt.libvirt.imagebackend [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 09:08:50 compute-0 ceph-mon[74313]: osdmap e270: 3 total, 3 up, 3 in
Oct 11 09:08:50 compute-0 ceph-mon[74313]: pgmap v1998: 321 pgs: 321 active+clean; 407 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 226 KiB/s rd, 43 KiB/s wr, 316 op/s
Oct 11 09:08:50 compute-0 sshd-session[361178]: Received disconnect from 155.4.244.179 port 38260:11: Bye Bye [preauth]
Oct 11 09:08:50 compute-0 sshd-session[361178]: Disconnected from invalid user rena 155.4.244.179 port 38260 [preauth]
Oct 11 09:08:50 compute-0 nova_compute[260935]: 2025-10-11 09:08:50.994 2 DEBUG nova.storage.rbd_utils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] creating snapshot(6ad053a64b5d4ae4adc6ab6154a5ffde) on rbd image(0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 09:08:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Oct 11 09:08:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Oct 11 09:08:51 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Oct 11 09:08:51 compute-0 nova_compute[260935]: 2025-10-11 09:08:51.927 2 DEBUG nova.storage.rbd_utils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] cloning vms/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk@6ad053a64b5d4ae4adc6ab6154a5ffde to images/cccd490d-35ae-4410-9dbd-f711e72912ef clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 09:08:51 compute-0 nova_compute[260935]: 2025-10-11 09:08:51.994 2 DEBUG nova.compute.manager [req-70d3d542-3bb1-4b50-8a1c-4c67902fc749 req-48d766d7-245a-4c8e-80b8-557ea5259861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:08:51 compute-0 nova_compute[260935]: 2025-10-11 09:08:51.995 2 DEBUG oslo_concurrency.lockutils [req-70d3d542-3bb1-4b50-8a1c-4c67902fc749 req-48d766d7-245a-4c8e-80b8-557ea5259861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:51 compute-0 nova_compute[260935]: 2025-10-11 09:08:51.996 2 DEBUG oslo_concurrency.lockutils [req-70d3d542-3bb1-4b50-8a1c-4c67902fc749 req-48d766d7-245a-4c8e-80b8-557ea5259861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:51 compute-0 nova_compute[260935]: 2025-10-11 09:08:51.996 2 DEBUG oslo_concurrency.lockutils [req-70d3d542-3bb1-4b50-8a1c-4c67902fc749 req-48d766d7-245a-4c8e-80b8-557ea5259861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:51 compute-0 nova_compute[260935]: 2025-10-11 09:08:51.997 2 DEBUG nova.compute.manager [req-70d3d542-3bb1-4b50-8a1c-4c67902fc749 req-48d766d7-245a-4c8e-80b8-557ea5259861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:08:51 compute-0 nova_compute[260935]: 2025-10-11 09:08:51.998 2 WARNING nova.compute.manager [req-70d3d542-3bb1-4b50-8a1c-4c67902fc749 req-48d766d7-245a-4c8e-80b8-557ea5259861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state active and task_state shelving_image_uploading.
Oct 11 09:08:52 compute-0 nova_compute[260935]: 2025-10-11 09:08:52.085 2 DEBUG nova.storage.rbd_utils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] flattening images/cccd490d-35ae-4410-9dbd-f711e72912ef flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 09:08:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 420 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 1.9 MiB/s wr, 184 op/s
Oct 11 09:08:52 compute-0 nova_compute[260935]: 2025-10-11 09:08:52.495 2 DEBUG nova.storage.rbd_utils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] removing snapshot(6ad053a64b5d4ae4adc6ab6154a5ffde) on rbd image(0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 09:08:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Oct 11 09:08:52 compute-0 ceph-mon[74313]: osdmap e271: 3 total, 3 up, 3 in
Oct 11 09:08:52 compute-0 ceph-mon[74313]: pgmap v2000: 321 pgs: 321 active+clean; 420 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 1.9 MiB/s wr, 184 op/s
Oct 11 09:08:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Oct 11 09:08:52 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Oct 11 09:08:52 compute-0 nova_compute[260935]: 2025-10-11 09:08:52.897 2 DEBUG nova.storage.rbd_utils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] creating snapshot(snap) on rbd image(cccd490d-35ae-4410-9dbd-f711e72912ef) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 09:08:53 compute-0 nova_compute[260935]: 2025-10-11 09:08:53.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:08:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Oct 11 09:08:53 compute-0 ceph-mon[74313]: osdmap e272: 3 total, 3 up, 3 in
Oct 11 09:08:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Oct 11 09:08:53 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Oct 11 09:08:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 444 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 4.5 MiB/s wr, 248 op/s
Oct 11 09:08:54 compute-0 nova_compute[260935]: 2025-10-11 09:08:54.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:54 compute-0 podman[361448]: 2025-10-11 09:08:54.769715854 +0000 UTC m=+0.067757216 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 09:08:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:08:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:08:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:08:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:08:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:08:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:08:54 compute-0 ceph-mon[74313]: osdmap e273: 3 total, 3 up, 3 in
Oct 11 09:08:54 compute-0 ceph-mon[74313]: pgmap v2003: 321 pgs: 321 active+clean; 444 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 4.5 MiB/s wr, 248 op/s
Oct 11 09:08:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:08:54
Oct 11 09:08:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:08:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:08:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'backups', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', '.mgr', 'volumes', 'default.rgw.log']
Oct 11 09:08:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:08:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:08:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:08:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:08:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:08:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:08:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:08:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:08:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:08:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:08:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.239 2 INFO nova.virt.libvirt.driver [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Snapshot image upload complete
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.240 2 DEBUG nova.compute.manager [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.249 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.249 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.300 2 DEBUG nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.311 2 INFO nova.compute.manager [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Shelve offloading
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.323 2 INFO nova.virt.libvirt.driver [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance destroyed successfully.
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.324 2 DEBUG nova.compute.manager [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.328 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.328 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquired lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.329 2 DEBUG nova.network.neutron [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:08:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2004: 321 pgs: 321 active+clean; 444 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 3.4 MiB/s wr, 188 op/s
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.398 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.399 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.411 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.411 2 INFO nova.compute.claims [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.499 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173721.4977875, df634e60-d790-4a39-adcc-c6345a12e9df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.500 2 INFO nova.compute.manager [-] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] VM Stopped (Lifecycle Event)
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.520 2 DEBUG nova.compute.manager [None req-8f22c9a1-c474-4201-b756-5d4aec567cf0 - - - - - -] [instance: df634e60-d790-4a39-adcc-c6345a12e9df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:56 compute-0 nova_compute[260935]: 2025-10-11 09:08:56.616 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:08:57 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3150851700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.136 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.142 2 DEBUG nova.compute.provider_tree [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.160 2 DEBUG nova.scheduler.client.report [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.183 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.183 2 DEBUG nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.239 2 DEBUG nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.263 2 INFO nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.289 2 DEBUG nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.421 2 DEBUG nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.423 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.424 2 INFO nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Creating image(s)
Oct 11 09:08:57 compute-0 ceph-mon[74313]: pgmap v2004: 321 pgs: 321 active+clean; 444 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 3.4 MiB/s wr, 188 op/s
Oct 11 09:08:57 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3150851700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.459 2 DEBUG nova.storage.rbd_utils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.494 2 DEBUG nova.storage.rbd_utils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.523 2 DEBUG nova.storage.rbd_utils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.528 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.622 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.623 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.624 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.624 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.655 2 DEBUG nova.storage.rbd_utils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.659 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:57 compute-0 nova_compute[260935]: 2025-10-11 09:08:57.972 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.047 2 DEBUG nova.storage.rbd_utils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] resizing rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.130 2 DEBUG nova.network.neutron [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating instance_info_cache with network_info: [{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.137 2 DEBUG nova.objects.instance [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'migration_context' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.154 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.155 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Ensure instance console log exists: /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.155 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.155 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.156 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.157 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.159 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Releasing lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.165 2 WARNING nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.169 2 DEBUG nova.virt.libvirt.host [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.170 2 DEBUG nova.virt.libvirt.host [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.173 2 DEBUG nova.virt.libvirt.host [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.174 2 DEBUG nova.virt.libvirt.host [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.174 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.174 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.175 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.175 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.175 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.175 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.176 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.176 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.176 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.176 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.176 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.177 2 DEBUG nova.virt.hardware [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.180 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 486 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 7.1 MiB/s wr, 263 op/s
Oct 11 09:08:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:08:58 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2116364323' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.711 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.750 2 DEBUG nova.storage.rbd_utils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:58 compute-0 nova_compute[260935]: 2025-10-11 09:08:58.762 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:58 compute-0 podman[361676]: 2025-10-11 09:08:58.813202882 +0000 UTC m=+0.112090870 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 09:08:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:08:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Oct 11 09:08:58 compute-0 podman[361679]: 2025-10-11 09:08:58.826297256 +0000 UTC m=+0.124367941 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:08:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Oct 11 09:08:58 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Oct 11 09:08:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:08:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842180243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.207 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.211 2 DEBUG nova.objects.instance [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.238 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:08:59 compute-0 nova_compute[260935]:   <uuid>b3697ef4-2e67-4d7f-ad14-ae6ab4055454</uuid>
Oct 11 09:08:59 compute-0 nova_compute[260935]:   <name>instance-00000066</name>
Oct 11 09:08:59 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:08:59 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:08:59 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerShowV254Test-server-186124796</nova:name>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:08:58</nova:creationTime>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:08:59 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:08:59 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:08:59 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:08:59 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:08:59 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:08:59 compute-0 nova_compute[260935]:         <nova:user uuid="18ec7b6a713d43e9880a338abc3182b1">tempest-ServerShowV254Test-827203150-project-member</nova:user>
Oct 11 09:08:59 compute-0 nova_compute[260935]:         <nova:project uuid="31a5173de7e7470ab2566a226b8ba071">tempest-ServerShowV254Test-827203150</nova:project>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:08:59 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:08:59 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <system>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <entry name="serial">b3697ef4-2e67-4d7f-ad14-ae6ab4055454</entry>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <entry name="uuid">b3697ef4-2e67-4d7f-ad14-ae6ab4055454</entry>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     </system>
Oct 11 09:08:59 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:08:59 compute-0 nova_compute[260935]:   <os>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:   </os>
Oct 11 09:08:59 compute-0 nova_compute[260935]:   <features>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:   </features>
Oct 11 09:08:59 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:08:59 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:08:59 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk">
Oct 11 09:08:59 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       </source>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:08:59 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config">
Oct 11 09:08:59 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       </source>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:08:59 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/console.log" append="off"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <video>
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     </video>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:08:59 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:08:59 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:08:59 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:08:59 compute-0 nova_compute[260935]: </domain>
Oct 11 09:08:59 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.334 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.335 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.335 2 INFO nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Using config drive
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.375 2 DEBUG nova.storage.rbd_utils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:59 compute-0 ceph-mon[74313]: pgmap v2005: 321 pgs: 321 active+clean; 486 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 7.1 MiB/s wr, 263 op/s
Oct 11 09:08:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2116364323' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:08:59 compute-0 ceph-mon[74313]: osdmap e274: 3 total, 3 up, 3 in
Oct 11 09:08:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1842180243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.467 2 INFO nova.virt.libvirt.driver [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance destroyed successfully.
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.469 2 DEBUG nova.objects.instance [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'resources' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.498 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173724.4976034, 1a90a348-da49-4ba3-8bae-5b426e9d7424 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.498 2 INFO nova.compute.manager [-] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] VM Stopped (Lifecycle Event)
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.500 2 DEBUG nova.virt.libvirt.vif [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:07:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-581429551',display_name='tempest-ServersNegativeTestJSON-server-581429551',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-581429551',id=98,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:07:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-wp4zx0a9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member',shelved_at='2025-10-11T09:08:56.240286',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='cccd490d-35ae-4410-9dbd-f711e72912ef'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:08:50Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.501 2 DEBUG nova.network.os_vif_util [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.502 2 DEBUG nova.network.os_vif_util [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.502 2 DEBUG os_vif [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.504 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa611854c-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.509 2 INFO os_vif [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a')
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.538 2 DEBUG nova.compute.manager [None req-ad9a494b-d208-4dba-a96e-80e4511b93ca - - - - - -] [instance: 1a90a348-da49-4ba3-8bae-5b426e9d7424] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.691 2 DEBUG nova.compute.manager [req-f745fa9a-b1f7-4ab6-811e-237e3b5d4c33 req-811c4b79-38fa-45a8-820d-eebac48a4167 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-changed-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.691 2 DEBUG nova.compute.manager [req-f745fa9a-b1f7-4ab6-811e-237e3b5d4c33 req-811c4b79-38fa-45a8-820d-eebac48a4167 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Refreshing instance network info cache due to event network-changed-a611854c-0a61-41b8-91ce-0c0f893aa54c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.691 2 DEBUG oslo_concurrency.lockutils [req-f745fa9a-b1f7-4ab6-811e-237e3b5d4c33 req-811c4b79-38fa-45a8-820d-eebac48a4167 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.692 2 DEBUG oslo_concurrency.lockutils [req-f745fa9a-b1f7-4ab6-811e-237e3b5d4c33 req-811c4b79-38fa-45a8-820d-eebac48a4167 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.692 2 DEBUG nova.network.neutron [req-f745fa9a-b1f7-4ab6-811e-237e3b5d4c33 req-811c4b79-38fa-45a8-820d-eebac48a4167 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Refreshing network info cache for port a611854c-0a61-41b8-91ce-0c0f893aa54c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.748 2 INFO nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Creating config drive at /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.759 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3dlw3xny execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.924 2 INFO nova.virt.libvirt.driver [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Deleting instance files /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_del
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.925 2 INFO nova.virt.libvirt.driver [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Deletion of /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_del complete
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.929 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3dlw3xny" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.957 2 DEBUG nova.storage.rbd_utils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:08:59 compute-0 nova_compute[260935]: 2025-10-11 09:08:59.961 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:00 compute-0 nova_compute[260935]: 2025-10-11 09:09:00.052 2 INFO nova.scheduler.client.report [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Deleted allocations for instance 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c
Oct 11 09:09:00 compute-0 nova_compute[260935]: 2025-10-11 09:09:00.097 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:00 compute-0 nova_compute[260935]: 2025-10-11 09:09:00.097 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:00 compute-0 nova_compute[260935]: 2025-10-11 09:09:00.153 2 DEBUG oslo_concurrency.processutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:00 compute-0 nova_compute[260935]: 2025-10-11 09:09:00.155 2 INFO nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Deleting local config drive /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config because it was imported into RBD.
Oct 11 09:09:00 compute-0 nova_compute[260935]: 2025-10-11 09:09:00.223 2 DEBUG oslo_concurrency.processutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:00 compute-0 systemd-machined[215705]: New machine qemu-118-instance-00000066.
Oct 11 09:09:00 compute-0 systemd[1]: Started Virtual Machine qemu-118-instance-00000066.
Oct 11 09:09:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 486 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 78 op/s
Oct 11 09:09:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:09:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1091932551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:09:00 compute-0 nova_compute[260935]: 2025-10-11 09:09:00.695 2 DEBUG oslo_concurrency.processutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:00 compute-0 nova_compute[260935]: 2025-10-11 09:09:00.706 2 DEBUG nova.compute.provider_tree [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:09:00 compute-0 nova_compute[260935]: 2025-10-11 09:09:00.735 2 DEBUG nova.scheduler.client.report [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:09:00 compute-0 nova_compute[260935]: 2025-10-11 09:09:00.774 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:00 compute-0 nova_compute[260935]: 2025-10-11 09:09:00.839 2 DEBUG oslo_concurrency.lockutils [None req-f615746d-55a5-4eb1-8fb0-04d3283753bd 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.227 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173741.2266312, b3697ef4-2e67-4d7f-ad14-ae6ab4055454 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.227 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] VM Resumed (Lifecycle Event)
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.229 2 DEBUG nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.230 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.234 2 INFO nova.virt.libvirt.driver [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance spawned successfully.
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.234 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.260 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.264 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.265 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.266 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.267 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.267 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.268 2 DEBUG nova.virt.libvirt.driver [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.272 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.304 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.304 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173741.2296371, b3697ef4-2e67-4d7f-ad14-ae6ab4055454 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.305 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] VM Started (Lifecycle Event)
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.333 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.338 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.343 2 INFO nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Took 3.92 seconds to spawn the instance on the hypervisor.
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.344 2 DEBUG nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.356 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.416 2 INFO nova.compute.manager [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Took 5.05 seconds to build instance.
Oct 11 09:09:01 compute-0 ceph-mon[74313]: pgmap v2007: 321 pgs: 321 active+clean; 486 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 78 op/s
Oct 11 09:09:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1091932551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.485 2 DEBUG oslo_concurrency.lockutils [None req-6c586fcc-5e53-410d-be96-8df6fbf601a8 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.491 2 DEBUG nova.network.neutron [req-f745fa9a-b1f7-4ab6-811e-237e3b5d4c33 req-811c4b79-38fa-45a8-820d-eebac48a4167 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updated VIF entry in instance network info cache for port a611854c-0a61-41b8-91ce-0c0f893aa54c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.492 2 DEBUG nova.network.neutron [req-f745fa9a-b1f7-4ab6-811e-237e3b5d4c33 req-811c4b79-38fa-45a8-820d-eebac48a4167 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating instance_info_cache with network_info: [{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": null, "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapa611854c-0a", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:09:01 compute-0 nova_compute[260935]: 2025-10-11 09:09:01.515 2 DEBUG oslo_concurrency.lockutils [req-f745fa9a-b1f7-4ab6-811e-237e3b5d4c33 req-811c4b79-38fa-45a8-820d-eebac48a4167 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:09:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2008: 321 pgs: 321 active+clean; 447 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.7 MiB/s wr, 173 op/s
Oct 11 09:09:03 compute-0 ceph-mon[74313]: pgmap v2008: 321 pgs: 321 active+clean; 447 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.7 MiB/s wr, 173 op/s
Oct 11 09:09:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 453 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 195 op/s
Oct 11 09:09:04 compute-0 nova_compute[260935]: 2025-10-11 09:09:04.434 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173729.433575, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:09:04 compute-0 nova_compute[260935]: 2025-10-11 09:09:04.435 2 INFO nova.compute.manager [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Stopped (Lifecycle Event)
Oct 11 09:09:04 compute-0 nova_compute[260935]: 2025-10-11 09:09:04.459 2 INFO nova.compute.manager [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Rebuilding instance
Oct 11 09:09:04 compute-0 nova_compute[260935]: 2025-10-11 09:09:04.465 2 DEBUG nova.compute.manager [None req-abed385a-bb63-4264-a041-df792a746ff2 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:04 compute-0 nova_compute[260935]: 2025-10-11 09:09:04.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:09:04 compute-0 nova_compute[260935]: 2025-10-11 09:09:04.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:09:04 compute-0 nova_compute[260935]: 2025-10-11 09:09:04.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:09:04 compute-0 nova_compute[260935]: 2025-10-11 09:09:04.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:09:04 compute-0 nova_compute[260935]: 2025-10-11 09:09:04.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:09:04 compute-0 nova_compute[260935]: 2025-10-11 09:09:04.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014232236317314579 of space, bias 1.0, pg target 0.42696708951943735 quantized to 32 (current 32)
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:09:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:09:05 compute-0 nova_compute[260935]: 2025-10-11 09:09:05.117 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:05 compute-0 nova_compute[260935]: 2025-10-11 09:09:05.146 2 DEBUG nova.compute.manager [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:05 compute-0 nova_compute[260935]: 2025-10-11 09:09:05.229 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'pci_requests' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:05 compute-0 nova_compute[260935]: 2025-10-11 09:09:05.243 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:05 compute-0 nova_compute[260935]: 2025-10-11 09:09:05.259 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'resources' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:05 compute-0 nova_compute[260935]: 2025-10-11 09:09:05.276 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'migration_context' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:05 compute-0 nova_compute[260935]: 2025-10-11 09:09:05.290 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 09:09:05 compute-0 nova_compute[260935]: 2025-10-11 09:09:05.297 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 09:09:05 compute-0 ceph-mon[74313]: pgmap v2009: 321 pgs: 321 active+clean; 453 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 195 op/s
Oct 11 09:09:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 453 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 195 op/s
Oct 11 09:09:06 compute-0 nova_compute[260935]: 2025-10-11 09:09:06.527 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:06 compute-0 nova_compute[260935]: 2025-10-11 09:09:06.528 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:06 compute-0 nova_compute[260935]: 2025-10-11 09:09:06.529 2 INFO nova.compute.manager [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Unshelving
Oct 11 09:09:06 compute-0 nova_compute[260935]: 2025-10-11 09:09:06.627 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:06 compute-0 nova_compute[260935]: 2025-10-11 09:09:06.628 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:06 compute-0 nova_compute[260935]: 2025-10-11 09:09:06.636 2 DEBUG nova.objects.instance [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'pci_requests' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:06 compute-0 nova_compute[260935]: 2025-10-11 09:09:06.659 2 DEBUG nova.objects.instance [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:06 compute-0 nova_compute[260935]: 2025-10-11 09:09:06.675 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:09:06 compute-0 nova_compute[260935]: 2025-10-11 09:09:06.676 2 INFO nova.compute.claims [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:09:06 compute-0 nova_compute[260935]: 2025-10-11 09:09:06.724 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:09:06 compute-0 nova_compute[260935]: 2025-10-11 09:09:06.898 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:09:07 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109422311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:09:07 compute-0 nova_compute[260935]: 2025-10-11 09:09:07.402 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:07 compute-0 nova_compute[260935]: 2025-10-11 09:09:07.408 2 DEBUG nova.compute.provider_tree [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:09:07 compute-0 nova_compute[260935]: 2025-10-11 09:09:07.442 2 DEBUG nova.scheduler.client.report [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:09:07 compute-0 nova_compute[260935]: 2025-10-11 09:09:07.476 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:07 compute-0 ceph-mon[74313]: pgmap v2010: 321 pgs: 321 active+clean; 453 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 195 op/s
Oct 11 09:09:07 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1109422311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:09:07 compute-0 nova_compute[260935]: 2025-10-11 09:09:07.722 2 INFO nova.network.neutron [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating port a611854c-0a61-41b8-91ce-0c0f893aa54c with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 11 09:09:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 453 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Oct 11 09:09:08 compute-0 sudo[361943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:09:08 compute-0 sudo[361943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:08 compute-0 sudo[361943]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:08 compute-0 sudo[361968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:09:08 compute-0 sudo[361968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:08 compute-0 sudo[361968]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:08 compute-0 sudo[361993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:09:08 compute-0 sudo[361993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:08 compute-0 sudo[361993]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:08 compute-0 sudo[362018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:09:08 compute-0 sudo[362018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:09:08 compute-0 sshd-session[361941]: Invalid user bad from 152.32.213.170 port 34116
Oct 11 09:09:08 compute-0 sshd-session[361941]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:09:08 compute-0 sshd-session[361941]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:09:09 compute-0 nova_compute[260935]: 2025-10-11 09:09:09.077 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:09:09 compute-0 nova_compute[260935]: 2025-10-11 09:09:09.080 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquired lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:09:09 compute-0 nova_compute[260935]: 2025-10-11 09:09:09.080 2 DEBUG nova.network.neutron [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:09:09 compute-0 nova_compute[260935]: 2025-10-11 09:09:09.235 2 DEBUG nova.compute.manager [req-0fa1a883-467e-44b2-9c97-bd3403659cc3 req-dc9d79be-fb59-4206-be11-87d88e2e3517 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-changed-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:09 compute-0 nova_compute[260935]: 2025-10-11 09:09:09.237 2 DEBUG nova.compute.manager [req-0fa1a883-467e-44b2-9c97-bd3403659cc3 req-dc9d79be-fb59-4206-be11-87d88e2e3517 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Refreshing instance network info cache due to event network-changed-a611854c-0a61-41b8-91ce-0c0f893aa54c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:09:09 compute-0 nova_compute[260935]: 2025-10-11 09:09:09.237 2 DEBUG oslo_concurrency.lockutils [req-0fa1a883-467e-44b2-9c97-bd3403659cc3 req-dc9d79be-fb59-4206-be11-87d88e2e3517 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:09:09 compute-0 sudo[362018]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:09:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:09:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:09:09 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:09:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:09:09 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:09:09 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 455e5a33-e8ff-4ed4-aa65-a8d8439c4d2f does not exist
Oct 11 09:09:09 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev f197b979-3563-4e4e-9efb-7c89cf4ebdcc does not exist
Oct 11 09:09:09 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 7aa4292e-ed65-4aa6-99bb-19c91cf1dbde does not exist
Oct 11 09:09:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:09:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:09:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:09:09 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:09:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:09:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:09:09 compute-0 sudo[362075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:09:09 compute-0 sudo[362075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:09 compute-0 sudo[362075]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:09 compute-0 nova_compute[260935]: 2025-10-11 09:09:09.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:09:09 compute-0 nova_compute[260935]: 2025-10-11 09:09:09.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:09:09 compute-0 nova_compute[260935]: 2025-10-11 09:09:09.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:09:09 compute-0 nova_compute[260935]: 2025-10-11 09:09:09.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:09:09 compute-0 nova_compute[260935]: 2025-10-11 09:09:09.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:09 compute-0 nova_compute[260935]: 2025-10-11 09:09:09.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:09:09 compute-0 ceph-mon[74313]: pgmap v2011: 321 pgs: 321 active+clean; 453 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Oct 11 09:09:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:09:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:09:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:09:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:09:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:09:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:09:09 compute-0 sudo[362100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:09:09 compute-0 sudo[362100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:09 compute-0 sudo[362100]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:09 compute-0 sudo[362125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:09:09 compute-0 sudo[362125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:09 compute-0 sudo[362125]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:09 compute-0 sudo[362150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:09:09 compute-0 sudo[362150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:10 compute-0 podman[362214]: 2025-10-11 09:09:10.366967865 +0000 UTC m=+0.067168719 container create 6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cerf, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 09:09:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2012: 321 pgs: 321 active+clean; 453 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 132 op/s
Oct 11 09:09:10 compute-0 podman[362214]: 2025-10-11 09:09:10.332353427 +0000 UTC m=+0.032554301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:09:10 compute-0 systemd[1]: Started libpod-conmon-6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355.scope.
Oct 11 09:09:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:09:10 compute-0 podman[362214]: 2025-10-11 09:09:10.499697384 +0000 UTC m=+0.199898308 container init 6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cerf, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 09:09:10 compute-0 podman[362214]: 2025-10-11 09:09:10.512649284 +0000 UTC m=+0.212850148 container start 6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cerf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 09:09:10 compute-0 podman[362214]: 2025-10-11 09:09:10.51705708 +0000 UTC m=+0.217258004 container attach 6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cerf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 09:09:10 compute-0 stupefied_cerf[362231]: 167 167
Oct 11 09:09:10 compute-0 systemd[1]: libpod-6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355.scope: Deactivated successfully.
Oct 11 09:09:10 compute-0 podman[362214]: 2025-10-11 09:09:10.524052869 +0000 UTC m=+0.224253763 container died 6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:09:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-cde6f1aef68fb15bff8ae86b68b74f91061d8d4c96fbdeb877a05ba90480cb7b-merged.mount: Deactivated successfully.
Oct 11 09:09:10 compute-0 podman[362214]: 2025-10-11 09:09:10.59169122 +0000 UTC m=+0.291892084 container remove 6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cerf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:09:10 compute-0 systemd[1]: libpod-conmon-6a7a9d24838506d085807d241aaa56aef9dd4b68fb8bbf64dcc21e49cc207355.scope: Deactivated successfully.
Oct 11 09:09:10 compute-0 podman[362256]: 2025-10-11 09:09:10.822943062 +0000 UTC m=+0.053484638 container create 2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_robinson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 09:09:10 compute-0 systemd[1]: Started libpod-conmon-2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4.scope.
Oct 11 09:09:10 compute-0 podman[362256]: 2025-10-11 09:09:10.795723885 +0000 UTC m=+0.026265451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:09:10 compute-0 sshd-session[361941]: Failed password for invalid user bad from 152.32.213.170 port 34116 ssh2
Oct 11 09:09:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19f94917f751e6a7ea85a02a73d7b097fd74c5314b239da4e603d38bf310209/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19f94917f751e6a7ea85a02a73d7b097fd74c5314b239da4e603d38bf310209/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19f94917f751e6a7ea85a02a73d7b097fd74c5314b239da4e603d38bf310209/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19f94917f751e6a7ea85a02a73d7b097fd74c5314b239da4e603d38bf310209/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19f94917f751e6a7ea85a02a73d7b097fd74c5314b239da4e603d38bf310209/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:09:10 compute-0 podman[362256]: 2025-10-11 09:09:10.927391733 +0000 UTC m=+0.157933369 container init 2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_robinson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:09:10 compute-0 podman[362256]: 2025-10-11 09:09:10.937957355 +0000 UTC m=+0.168498891 container start 2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_robinson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:09:10 compute-0 podman[362256]: 2025-10-11 09:09:10.941921078 +0000 UTC m=+0.172462714 container attach 2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_robinson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 09:09:11 compute-0 ceph-mon[74313]: pgmap v2012: 321 pgs: 321 active+clean; 453 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 132 op/s
Oct 11 09:09:11 compute-0 nova_compute[260935]: 2025-10-11 09:09:11.890 2 DEBUG nova.network.neutron [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating instance_info_cache with network_info: [{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:09:11 compute-0 nova_compute[260935]: 2025-10-11 09:09:11.923 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Releasing lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:09:11 compute-0 nova_compute[260935]: 2025-10-11 09:09:11.925 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:09:11 compute-0 nova_compute[260935]: 2025-10-11 09:09:11.926 2 INFO nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Creating image(s)
Oct 11 09:09:11 compute-0 sshd-session[361941]: Received disconnect from 152.32.213.170 port 34116:11: Bye Bye [preauth]
Oct 11 09:09:11 compute-0 sshd-session[361941]: Disconnected from invalid user bad 152.32.213.170 port 34116 [preauth]
Oct 11 09:09:11 compute-0 nova_compute[260935]: 2025-10-11 09:09:11.964 2 DEBUG nova.storage.rbd_utils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:11 compute-0 nova_compute[260935]: 2025-10-11 09:09:11.979 2 DEBUG nova.objects.instance [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:11 compute-0 nova_compute[260935]: 2025-10-11 09:09:11.980 2 DEBUG oslo_concurrency.lockutils [req-0fa1a883-467e-44b2-9c97-bd3403659cc3 req-dc9d79be-fb59-4206-be11-87d88e2e3517 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:09:11 compute-0 nova_compute[260935]: 2025-10-11 09:09:11.981 2 DEBUG nova.network.neutron [req-0fa1a883-467e-44b2-9c97-bd3403659cc3 req-dc9d79be-fb59-4206-be11-87d88e2e3517 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Refreshing network info cache for port a611854c-0a61-41b8-91ce-0c0f893aa54c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:09:12 compute-0 nova_compute[260935]: 2025-10-11 09:09:12.028 2 DEBUG nova.storage.rbd_utils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:12 compute-0 nova_compute[260935]: 2025-10-11 09:09:12.060 2 DEBUG nova.storage.rbd_utils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:12 compute-0 nova_compute[260935]: 2025-10-11 09:09:12.063 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "fde2d6dcc32e12f0cf33b731a0ab6c7e3dd9c214" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:12 compute-0 nova_compute[260935]: 2025-10-11 09:09:12.064 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "fde2d6dcc32e12f0cf33b731a0ab6c7e3dd9c214" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:12 compute-0 bold_robinson[362272]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:09:12 compute-0 bold_robinson[362272]: --> relative data size: 1.0
Oct 11 09:09:12 compute-0 bold_robinson[362272]: --> All data devices are unavailable
Oct 11 09:09:12 compute-0 systemd[1]: libpod-2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4.scope: Deactivated successfully.
Oct 11 09:09:12 compute-0 systemd[1]: libpod-2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4.scope: Consumed 1.147s CPU time.
Oct 11 09:09:12 compute-0 podman[362355]: 2025-10-11 09:09:12.27169892 +0000 UTC m=+0.033261621 container died 2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 09:09:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d19f94917f751e6a7ea85a02a73d7b097fd74c5314b239da4e603d38bf310209-merged.mount: Deactivated successfully.
Oct 11 09:09:12 compute-0 podman[362355]: 2025-10-11 09:09:12.354763311 +0000 UTC m=+0.116325952 container remove 2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:09:12 compute-0 systemd[1]: libpod-conmon-2042fd801196570e6b32100387af91a2ba2d7df27c5ed416651995b593fb4ac4.scope: Deactivated successfully.
Oct 11 09:09:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 471 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 158 op/s
Oct 11 09:09:12 compute-0 sudo[362150]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:12 compute-0 sudo[362370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:09:12 compute-0 sudo[362370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:12 compute-0 sudo[362370]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:12 compute-0 nova_compute[260935]: 2025-10-11 09:09:12.532 2 DEBUG nova.virt.libvirt.imagebackend [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Image locations are: [{'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/cccd490d-35ae-4410-9dbd-f711e72912ef/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/cccd490d-35ae-4410-9dbd-f711e72912ef/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 11 09:09:12 compute-0 sudo[362395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:09:12 compute-0 sudo[362395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:12 compute-0 sudo[362395]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:12 compute-0 nova_compute[260935]: 2025-10-11 09:09:12.639 2 DEBUG nova.virt.libvirt.imagebackend [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Selected location: {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/cccd490d-35ae-4410-9dbd-f711e72912ef/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 11 09:09:12 compute-0 nova_compute[260935]: 2025-10-11 09:09:12.640 2 DEBUG nova.storage.rbd_utils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] cloning images/cccd490d-35ae-4410-9dbd-f711e72912ef@snap to None/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 09:09:12 compute-0 sudo[362453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:09:12 compute-0 sudo[362453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:12 compute-0 sudo[362453]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:12 compute-0 nova_compute[260935]: 2025-10-11 09:09:12.791 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "fde2d6dcc32e12f0cf33b731a0ab6c7e3dd9c214" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:12 compute-0 sudo[362511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:09:12 compute-0 sudo[362511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:12 compute-0 nova_compute[260935]: 2025-10-11 09:09:12.911 2 DEBUG nova.objects.instance [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'migration_context' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:12 compute-0 nova_compute[260935]: 2025-10-11 09:09:12.965 2 DEBUG nova.storage.rbd_utils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] flattening vms/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 09:09:13 compute-0 podman[362668]: 2025-10-11 09:09:13.277997156 +0000 UTC m=+0.080433777 container create 2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.306 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Image rbd:vms/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.307 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.307 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Ensure instance console log exists: /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.308 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.308 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.308 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.310 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Start _get_guest_xml network_info=[{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T09:08:46Z,direct_url=<?>,disk_format='raw',id=cccd490d-35ae-4410-9dbd-f711e72912ef,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-581429551-shelved',owner='21f163e616ee4917a580701d466f7dc9',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T09:08:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.318 2 WARNING nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.324 2 DEBUG nova.virt.libvirt.host [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.324 2 DEBUG nova.virt.libvirt.host [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.328 2 DEBUG nova.virt.libvirt.host [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.329 2 DEBUG nova.virt.libvirt.host [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.329 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.329 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T09:08:46Z,direct_url=<?>,disk_format='raw',id=cccd490d-35ae-4410-9dbd-f711e72912ef,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-581429551-shelved',owner='21f163e616ee4917a580701d466f7dc9',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T09:08:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:09:13 compute-0 podman[362668]: 2025-10-11 09:09:13.238096037 +0000 UTC m=+0.040532698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.330 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.330 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.330 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.331 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.331 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.332 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.332 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.332 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.332 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.333 2 DEBUG nova.virt.hardware [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.334 2 DEBUG nova.objects.instance [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:13 compute-0 systemd[1]: Started libpod-conmon-2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4.scope.
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.354 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:09:13 compute-0 podman[362668]: 2025-10-11 09:09:13.395962423 +0000 UTC m=+0.198399054 container init 2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:09:13 compute-0 podman[362668]: 2025-10-11 09:09:13.402363626 +0000 UTC m=+0.204800237 container start 2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:09:13 compute-0 practical_lovelace[362684]: 167 167
Oct 11 09:09:13 compute-0 systemd[1]: libpod-2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4.scope: Deactivated successfully.
Oct 11 09:09:13 compute-0 conmon[362684]: conmon 2be89783b3eee08660ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4.scope/container/memory.events
Oct 11 09:09:13 compute-0 podman[362668]: 2025-10-11 09:09:13.408525542 +0000 UTC m=+0.210962173 container attach 2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 09:09:13 compute-0 podman[362668]: 2025-10-11 09:09:13.409128599 +0000 UTC m=+0.211565210 container died 2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 09:09:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-feeaf0159f00d574e79f4bde67bd9547b11c6dad3bdc9a2c0af26860dfd879ef-merged.mount: Deactivated successfully.
Oct 11 09:09:13 compute-0 podman[362668]: 2025-10-11 09:09:13.450030506 +0000 UTC m=+0.252467117 container remove 2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 09:09:13 compute-0 systemd[1]: libpod-conmon-2be89783b3eee08660ac910f59f7be2123933d3d051c36cf70abd8e87253d8d4.scope: Deactivated successfully.
Oct 11 09:09:13 compute-0 ceph-mon[74313]: pgmap v2013: 321 pgs: 321 active+clean; 471 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 158 op/s
Oct 11 09:09:13 compute-0 podman[362728]: 2025-10-11 09:09:13.645115195 +0000 UTC m=+0.045515060 container create ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.671 2 DEBUG nova.network.neutron [req-0fa1a883-467e-44b2-9c97-bd3403659cc3 req-dc9d79be-fb59-4206-be11-87d88e2e3517 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updated VIF entry in instance network info cache for port a611854c-0a61-41b8-91ce-0c0f893aa54c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.672 2 DEBUG nova.network.neutron [req-0fa1a883-467e-44b2-9c97-bd3403659cc3 req-dc9d79be-fb59-4206-be11-87d88e2e3517 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating instance_info_cache with network_info: [{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:09:13 compute-0 systemd[1]: Started libpod-conmon-ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6.scope.
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.698 2 DEBUG oslo_concurrency.lockutils [req-0fa1a883-467e-44b2-9c97-bd3403659cc3 req-dc9d79be-fb59-4206-be11-87d88e2e3517 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:09:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:09:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a3f30b73f8db5ec63ad05258f7ceac39388b9248f2a763e61ca0d84573210d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:09:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a3f30b73f8db5ec63ad05258f7ceac39388b9248f2a763e61ca0d84573210d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:09:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a3f30b73f8db5ec63ad05258f7ceac39388b9248f2a763e61ca0d84573210d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:09:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a3f30b73f8db5ec63ad05258f7ceac39388b9248f2a763e61ca0d84573210d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:09:13 compute-0 podman[362728]: 2025-10-11 09:09:13.62565837 +0000 UTC m=+0.026058315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:09:13 compute-0 podman[362728]: 2025-10-11 09:09:13.727043274 +0000 UTC m=+0.127443139 container init ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:09:13 compute-0 podman[362728]: 2025-10-11 09:09:13.734504847 +0000 UTC m=+0.134904712 container start ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 09:09:13 compute-0 podman[362728]: 2025-10-11 09:09:13.738051379 +0000 UTC m=+0.138451244 container attach ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 09:09:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:09:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/336213904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:09:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.827 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.834519) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173753834585, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1984, "num_deletes": 257, "total_data_size": 3050823, "memory_usage": 3114016, "flush_reason": "Manual Compaction"}
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173753849743, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 3007356, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40052, "largest_seqno": 42035, "table_properties": {"data_size": 2998141, "index_size": 5773, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18191, "raw_average_key_size": 19, "raw_value_size": 2979716, "raw_average_value_size": 3193, "num_data_blocks": 253, "num_entries": 933, "num_filter_entries": 933, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173568, "oldest_key_time": 1760173568, "file_creation_time": 1760173753, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 15255 microseconds, and 6086 cpu microseconds.
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.849778) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 3007356 bytes OK
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.849795) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.851119) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.851131) EVENT_LOG_v1 {"time_micros": 1760173753851127, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.851148) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 3042282, prev total WAL file size 3042282, number of live WAL files 2.
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.851852) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(2936KB)], [89(9355KB)]
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173753851931, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 12587837, "oldest_snapshot_seqno": -1}
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.854 2 DEBUG nova.storage.rbd_utils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:13 compute-0 nova_compute[260935]: 2025-10-11 09:09:13.866 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6780 keys, 11873936 bytes, temperature: kUnknown
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173753909482, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 11873936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11824589, "index_size": 31315, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16965, "raw_key_size": 172466, "raw_average_key_size": 25, "raw_value_size": 11699001, "raw_average_value_size": 1725, "num_data_blocks": 1253, "num_entries": 6780, "num_filter_entries": 6780, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173753, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.909994) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 11873936 bytes
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.912644) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.4 rd, 205.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 9.1 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(8.1) write-amplify(3.9) OK, records in: 7305, records dropped: 525 output_compression: NoCompression
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.912663) EVENT_LOG_v1 {"time_micros": 1760173753912654, "job": 52, "event": "compaction_finished", "compaction_time_micros": 57899, "compaction_time_cpu_micros": 23079, "output_level": 6, "num_output_files": 1, "total_output_size": 11873936, "num_input_records": 7305, "num_output_records": 6780, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173753913366, "job": 52, "event": "table_file_deletion", "file_number": 91}
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173753915095, "job": 52, "event": "table_file_deletion", "file_number": 89}
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.851788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.915137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.915143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.915146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.915149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:13.915152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:09:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1550355202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.300 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.303 2 DEBUG nova.virt.libvirt.vif [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:07:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-581429551',display_name='tempest-ServersNegativeTestJSON-server-581429551',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-581429551',id=98,image_ref='cccd490d-35ae-4410-9dbd-f711e72912ef',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:07:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-wp4zx0a9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member',shelved_at='2025-10-11T09:08:56.240286',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='cccd490d-35ae-4410-9dbd-f711e72912ef'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:09:06Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.304 2 DEBUG nova.network.os_vif_util [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.306 2 DEBUG nova.network.os_vif_util [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.310 2 DEBUG nova.objects.instance [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.338 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:09:14 compute-0 nova_compute[260935]:   <uuid>0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c</uuid>
Oct 11 09:09:14 compute-0 nova_compute[260935]:   <name>instance-00000062</name>
Oct 11 09:09:14 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:09:14 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:09:14 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <nova:name>tempest-ServersNegativeTestJSON-server-581429551</nova:name>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:09:13</nova:creationTime>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:09:14 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:09:14 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:09:14 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:09:14 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:09:14 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:09:14 compute-0 nova_compute[260935]:         <nova:user uuid="6b8d9d5ab01d48ae81a09f922875ea3e">tempest-ServersNegativeTestJSON-414353187-project-member</nova:user>
Oct 11 09:09:14 compute-0 nova_compute[260935]:         <nova:project uuid="21f163e616ee4917a580701d466f7dc9">tempest-ServersNegativeTestJSON-414353187</nova:project>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="cccd490d-35ae-4410-9dbd-f711e72912ef"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:09:14 compute-0 nova_compute[260935]:         <nova:port uuid="a611854c-0a61-41b8-91ce-0c0f893aa54c">
Oct 11 09:09:14 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:09:14 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:09:14 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <system>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <entry name="serial">0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c</entry>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <entry name="uuid">0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c</entry>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     </system>
Oct 11 09:09:14 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:09:14 compute-0 nova_compute[260935]:   <os>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:   </os>
Oct 11 09:09:14 compute-0 nova_compute[260935]:   <features>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:   </features>
Oct 11 09:09:14 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:09:14 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:09:14 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk">
Oct 11 09:09:14 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       </source>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:09:14 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config">
Oct 11 09:09:14 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       </source>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:09:14 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:9d:da:b0"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <target dev="tapa611854c-0a"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/console.log" append="off"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <video>
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     </video>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <input type="keyboard" bus="usb"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:09:14 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:09:14 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:09:14 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:09:14 compute-0 nova_compute[260935]: </domain>
Oct 11 09:09:14 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.339 2 DEBUG nova.compute.manager [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Preparing to wait for external event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.340 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.340 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.341 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.342 2 DEBUG nova.virt.libvirt.vif [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:07:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-581429551',display_name='tempest-ServersNegativeTestJSON-server-581429551',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-581429551',id=98,image_ref='cccd490d-35ae-4410-9dbd-f711e72912ef',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:07:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-wp4zx0a9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member',shelved_at='2025-10-11T09:08:56.240286',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='cccd490d-35ae-4410-9dbd-f711e72912ef'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:09:06Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.342 2 DEBUG nova.network.os_vif_util [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.343 2 DEBUG nova.network.os_vif_util [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.344 2 DEBUG os_vif [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.347 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.352 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa611854c-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.352 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa611854c-0a, col_values=(('external_ids', {'iface-id': 'a611854c-0a61-41b8-91ce-0c0f893aa54c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:da:b0', 'vm-uuid': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:14 compute-0 NetworkManager[44960]: <info>  [1760173754.3969] manager: (tapa611854c-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.404 2 INFO os_vif [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a')
Oct 11 09:09:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2014: 321 pgs: 321 active+clean; 475 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 92 op/s
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]: {
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:     "0": [
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:         {
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "devices": [
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "/dev/loop3"
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             ],
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "lv_name": "ceph_lv0",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "lv_size": "21470642176",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "name": "ceph_lv0",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "tags": {
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.cluster_name": "ceph",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.crush_device_class": "",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.encrypted": "0",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.osd_id": "0",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.type": "block",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.vdo": "0"
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             },
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "type": "block",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "vg_name": "ceph_vg0"
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:         }
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:     ],
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:     "1": [
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:         {
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "devices": [
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "/dev/loop4"
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             ],
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "lv_name": "ceph_lv1",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "lv_size": "21470642176",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "name": "ceph_lv1",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "tags": {
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.cluster_name": "ceph",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.crush_device_class": "",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.encrypted": "0",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.osd_id": "1",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.type": "block",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.vdo": "0"
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             },
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "type": "block",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "vg_name": "ceph_vg1"
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:         }
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:     ],
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:     "2": [
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:         {
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "devices": [
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "/dev/loop5"
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             ],
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "lv_name": "ceph_lv2",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "lv_size": "21470642176",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "name": "ceph_lv2",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "tags": {
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.cluster_name": "ceph",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.crush_device_class": "",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.encrypted": "0",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.osd_id": "2",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.type": "block",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:                 "ceph.vdo": "0"
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             },
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "type": "block",
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:             "vg_name": "ceph_vg2"
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:         }
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]:     ]
Oct 11 09:09:14 compute-0 peaceful_cannon[362743]: }
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.499 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.500 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.501 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] No VIF found with MAC fa:16:3e:9d:da:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.502 2 INFO nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Using config drive
Oct 11 09:09:14 compute-0 systemd[1]: libpod-ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6.scope: Deactivated successfully.
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.540 2 DEBUG nova.storage.rbd_utils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:14 compute-0 podman[362805]: 2025-10-11 09:09:14.569888865 +0000 UTC m=+0.037513912 container died ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.570 2 DEBUG nova.objects.instance [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-20a3f30b73f8db5ec63ad05258f7ceac39388b9248f2a763e61ca0d84573210d-merged.mount: Deactivated successfully.
Oct 11 09:09:14 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/336213904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:09:14 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1550355202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:09:14 compute-0 nova_compute[260935]: 2025-10-11 09:09:14.638 2 DEBUG nova.objects.instance [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'keypairs' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:14 compute-0 podman[362805]: 2025-10-11 09:09:14.642290012 +0000 UTC m=+0.109915009 container remove ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:09:14 compute-0 systemd[1]: libpod-conmon-ec3416ba826e115b9a8baa76cec0a234cdedb480aae8c79bb9c7c08b90cac9c6.scope: Deactivated successfully.
Oct 11 09:09:14 compute-0 sudo[362511]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:14 compute-0 sudo[362830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:09:14 compute-0 sudo[362830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:14 compute-0 sudo[362830]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:14 compute-0 sudo[362855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:09:14 compute-0 sudo[362855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:14 compute-0 sudo[362855]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:14 compute-0 sudo[362880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:09:14 compute-0 sudo[362880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:14 compute-0 sudo[362880]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:15 compute-0 sudo[362905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:09:15 compute-0 sudo[362905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:15 compute-0 nova_compute[260935]: 2025-10-11 09:09:15.199 2 INFO nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Creating config drive at /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config
Oct 11 09:09:15 compute-0 nova_compute[260935]: 2025-10-11 09:09:15.204 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfy3d3dx1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.204 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.204 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.205 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:15 compute-0 nova_compute[260935]: 2025-10-11 09:09:15.344 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfy3d3dx1" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:15 compute-0 nova_compute[260935]: 2025-10-11 09:09:15.370 2 DEBUG nova.storage.rbd_utils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] rbd image 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:15 compute-0 nova_compute[260935]: 2025-10-11 09:09:15.374 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:15 compute-0 podman[362973]: 2025-10-11 09:09:15.400694302 +0000 UTC m=+0.051264994 container create 49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 09:09:15 compute-0 nova_compute[260935]: 2025-10-11 09:09:15.429 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 11 09:09:15 compute-0 systemd[1]: Started libpod-conmon-49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b.scope.
Oct 11 09:09:15 compute-0 podman[362973]: 2025-10-11 09:09:15.374722381 +0000 UTC m=+0.025293093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:09:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:09:15 compute-0 podman[362973]: 2025-10-11 09:09:15.503075305 +0000 UTC m=+0.153646007 container init 49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_panini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 11 09:09:15 compute-0 podman[362973]: 2025-10-11 09:09:15.514704537 +0000 UTC m=+0.165275239 container start 49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 09:09:15 compute-0 podman[362973]: 2025-10-11 09:09:15.519418852 +0000 UTC m=+0.169989584 container attach 49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:09:15 compute-0 upbeat_panini[363008]: 167 167
Oct 11 09:09:15 compute-0 systemd[1]: libpod-49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b.scope: Deactivated successfully.
Oct 11 09:09:15 compute-0 conmon[363008]: conmon 49272595ba8c8913b04d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b.scope/container/memory.events
Oct 11 09:09:15 compute-0 podman[362973]: 2025-10-11 09:09:15.526555735 +0000 UTC m=+0.177126457 container died 49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_panini, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:09:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a17367d388fd32ca1ecc0e3ccac09a756fbb372a442b3296aa2a964e0b55872-merged.mount: Deactivated successfully.
Oct 11 09:09:15 compute-0 podman[362973]: 2025-10-11 09:09:15.578133238 +0000 UTC m=+0.228703960 container remove 49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_panini, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:09:15 compute-0 systemd[1]: libpod-conmon-49272595ba8c8913b04d2db88122970f6aad46584c8eb048d21ad8da34883d8b.scope: Deactivated successfully.
Oct 11 09:09:15 compute-0 ceph-mon[74313]: pgmap v2014: 321 pgs: 321 active+clean; 475 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 92 op/s
Oct 11 09:09:15 compute-0 nova_compute[260935]: 2025-10-11 09:09:15.621 2 DEBUG oslo_concurrency.processutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:15 compute-0 nova_compute[260935]: 2025-10-11 09:09:15.622 2 INFO nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Deleting local config drive /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c/disk.config because it was imported into RBD.
Oct 11 09:09:15 compute-0 kernel: tapa611854c-0a: entered promiscuous mode
Oct 11 09:09:15 compute-0 NetworkManager[44960]: <info>  [1760173755.6852] manager: (tapa611854c-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/382)
Oct 11 09:09:15 compute-0 ovn_controller[152945]: 2025-10-11T09:09:15Z|00910|binding|INFO|Claiming lport a611854c-0a61-41b8-91ce-0c0f893aa54c for this chassis.
Oct 11 09:09:15 compute-0 nova_compute[260935]: 2025-10-11 09:09:15.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:15 compute-0 ovn_controller[152945]: 2025-10-11T09:09:15Z|00911|binding|INFO|a611854c-0a61-41b8-91ce-0c0f893aa54c: Claiming fa:16:3e:9d:da:b0 10.100.0.12
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.724 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:da:b0 10.100.0.12'], port_security=['fa:16:3e:9d:da:b0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a611854c-0a61-41b8-91ce-0c0f893aa54c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.726 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a611854c-0a61-41b8-91ce-0c0f893aa54c in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec bound to our chassis
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.727 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 09:09:15 compute-0 systemd-udevd[363056]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.743 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5853c69-27ce-4b9e-8f0f-c1f2da85facb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.744 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14e82eeb-71 in ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.746 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14e82eeb-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.746 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[24b15751-84f6-4798-a456-1505c24249eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.747 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[54dfdb7b-07b4-4a82-830e-74cc9b4f8d64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:15 compute-0 systemd-machined[215705]: New machine qemu-119-instance-00000062.
Oct 11 09:09:15 compute-0 NetworkManager[44960]: <info>  [1760173755.7690] device (tapa611854c-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:09:15 compute-0 ovn_controller[152945]: 2025-10-11T09:09:15Z|00912|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c ovn-installed in OVS
Oct 11 09:09:15 compute-0 ovn_controller[152945]: 2025-10-11T09:09:15Z|00913|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c up in Southbound
Oct 11 09:09:15 compute-0 NetworkManager[44960]: <info>  [1760173755.7714] device (tapa611854c-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:09:15 compute-0 nova_compute[260935]: 2025-10-11 09:09:15.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.758 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[04defa4e-0482-4e6a-a831-a6690d476d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:15 compute-0 nova_compute[260935]: 2025-10-11 09:09:15.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.774 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[82b071dd-98e7-4837-b8b4-827270918ccf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:15 compute-0 systemd[1]: Started Virtual Machine qemu-119-instance-00000062.
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.810 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3fc1017e-c9b7-4e47-ae4c-36420dffc097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:15 compute-0 systemd-udevd[363070]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.816 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0dcc40a3-8518-4a6c-995b-29c5dabfbfbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:15 compute-0 NetworkManager[44960]: <info>  [1760173755.8195] manager: (tap14e82eeb-70): new Veth device (/org/freedesktop/NetworkManager/Devices/383)
Oct 11 09:09:15 compute-0 podman[363067]: 2025-10-11 09:09:15.851970955 +0000 UTC m=+0.068579229 container create 55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.855 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3a628f1f-d59f-4bee-9dfd-f4557191ff79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.858 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[39b79d13-d0a4-49b6-ab5d-80c9912ed42f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:15 compute-0 NetworkManager[44960]: <info>  [1760173755.8895] device (tap14e82eeb-70): carrier: link connected
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.897 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0af448db-f4e8-486e-8dbc-2a9f52dfadcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:15 compute-0 systemd[1]: Started libpod-conmon-55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20.scope.
Oct 11 09:09:15 compute-0 podman[363067]: 2025-10-11 09:09:15.825336125 +0000 UTC m=+0.041944409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.919 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1741b652-7cc6-4b2c-b77f-3824e1797fe8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560059, 'reachable_time': 15853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363113, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.942 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6c05a238-958f-4f6c-8da0-d9eec5f1bc45]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:56f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 560059, 'tstamp': 560059}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363117, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab14f4565140bf8b3c9c5e8b4bb775adc00d7ad339c188c8da533151e8f7e6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab14f4565140bf8b3c9c5e8b4bb775adc00d7ad339c188c8da533151e8f7e6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab14f4565140bf8b3c9c5e8b4bb775adc00d7ad339c188c8da533151e8f7e6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab14f4565140bf8b3c9c5e8b4bb775adc00d7ad339c188c8da533151e8f7e6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:09:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:15.961 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b0dffba4-bbe5-4467-8917-c623fd59020e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560059, 'reachable_time': 15853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 363118, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:15 compute-0 podman[363067]: 2025-10-11 09:09:15.966640618 +0000 UTC m=+0.183248902 container init 55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct 11 09:09:15 compute-0 podman[363067]: 2025-10-11 09:09:15.976484559 +0000 UTC m=+0.193092823 container start 55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:09:15 compute-0 podman[363067]: 2025-10-11 09:09:15.986950288 +0000 UTC m=+0.203558642 container attach 55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.005 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0c07c5-0355-44ab-b167-437d6b91cebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.089 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bc96da74-2dc7-4bdd-b7f1-061f0caa298d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.091 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.091 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.091 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e82eeb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:16 compute-0 NetworkManager[44960]: <info>  [1760173756.0941] manager: (tap14e82eeb-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Oct 11 09:09:16 compute-0 kernel: tap14e82eeb-70: entered promiscuous mode
Oct 11 09:09:16 compute-0 nova_compute[260935]: 2025-10-11 09:09:16.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.098 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e82eeb-70, col_values=(('external_ids', {'iface-id': 'c44760fe-71c5-482d-aee7-a0b0513681b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:16 compute-0 ovn_controller[152945]: 2025-10-11T09:09:16Z|00914|binding|INFO|Releasing lport c44760fe-71c5-482d-aee7-a0b0513681b7 from this chassis (sb_readonly=0)
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.104 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:09:16 compute-0 nova_compute[260935]: 2025-10-11 09:09:16.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.105 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d6241346-7f55-467e-951b-fc0eef07bd3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.106 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:09:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:16.107 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'env', 'PROCESS_TAG=haproxy-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14e82eeb-74e2-4de3-9047-74da777fe1ec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:09:16 compute-0 nova_compute[260935]: 2025-10-11 09:09:16.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2015: 321 pgs: 321 active+clean; 475 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Oct 11 09:09:16 compute-0 nova_compute[260935]: 2025-10-11 09:09:16.564 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173756.564318, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:09:16 compute-0 nova_compute[260935]: 2025-10-11 09:09:16.565 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Started (Lifecycle Event)
Oct 11 09:09:16 compute-0 podman[363194]: 2025-10-11 09:09:16.579896455 +0000 UTC m=+0.065987205 container create 4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:09:16 compute-0 nova_compute[260935]: 2025-10-11 09:09:16.591 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:16 compute-0 nova_compute[260935]: 2025-10-11 09:09:16.596 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173756.5682006, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:09:16 compute-0 nova_compute[260935]: 2025-10-11 09:09:16.596 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Paused (Lifecycle Event)
Oct 11 09:09:16 compute-0 nova_compute[260935]: 2025-10-11 09:09:16.622 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:16 compute-0 systemd[1]: Started libpod-conmon-4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901.scope.
Oct 11 09:09:16 compute-0 nova_compute[260935]: 2025-10-11 09:09:16.625 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:09:16 compute-0 podman[363194]: 2025-10-11 09:09:16.550759073 +0000 UTC m=+0.036849843 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:09:16 compute-0 nova_compute[260935]: 2025-10-11 09:09:16.647 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:09:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:09:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56bddb7740329636355fcc11f4dc0b2142ccfefe44ada4320f628f0155b9b1ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:09:16 compute-0 podman[363194]: 2025-10-11 09:09:16.694031603 +0000 UTC m=+0.180122373 container init 4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:09:16 compute-0 podman[363194]: 2025-10-11 09:09:16.699338845 +0000 UTC m=+0.185429595 container start 4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:09:16 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[363209]: [NOTICE]   (363218) : New worker (363222) forked
Oct 11 09:09:16 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[363209]: [NOTICE]   (363218) : Loading success.
Oct 11 09:09:16 compute-0 zealous_greider[363114]: {
Oct 11 09:09:16 compute-0 zealous_greider[363114]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:09:16 compute-0 zealous_greider[363114]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:09:16 compute-0 zealous_greider[363114]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:09:16 compute-0 zealous_greider[363114]:         "osd_id": 2,
Oct 11 09:09:16 compute-0 zealous_greider[363114]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:09:16 compute-0 zealous_greider[363114]:         "type": "bluestore"
Oct 11 09:09:16 compute-0 zealous_greider[363114]:     },
Oct 11 09:09:16 compute-0 zealous_greider[363114]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:09:16 compute-0 zealous_greider[363114]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:09:16 compute-0 zealous_greider[363114]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:09:16 compute-0 zealous_greider[363114]:         "osd_id": 0,
Oct 11 09:09:16 compute-0 zealous_greider[363114]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:09:16 compute-0 zealous_greider[363114]:         "type": "bluestore"
Oct 11 09:09:16 compute-0 zealous_greider[363114]:     },
Oct 11 09:09:16 compute-0 zealous_greider[363114]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:09:16 compute-0 zealous_greider[363114]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:09:16 compute-0 zealous_greider[363114]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:09:16 compute-0 zealous_greider[363114]:         "osd_id": 1,
Oct 11 09:09:16 compute-0 zealous_greider[363114]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:09:16 compute-0 zealous_greider[363114]:         "type": "bluestore"
Oct 11 09:09:16 compute-0 zealous_greider[363114]:     }
Oct 11 09:09:16 compute-0 zealous_greider[363114]: }
Oct 11 09:09:16 compute-0 systemd[1]: libpod-55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20.scope: Deactivated successfully.
Oct 11 09:09:16 compute-0 podman[363067]: 2025-10-11 09:09:16.964912405 +0000 UTC m=+1.181520729 container died 55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 11 09:09:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ab14f4565140bf8b3c9c5e8b4bb775adc00d7ad339c188c8da533151e8f7e6e-merged.mount: Deactivated successfully.
Oct 11 09:09:17 compute-0 podman[363067]: 2025-10-11 09:09:17.038427794 +0000 UTC m=+1.255036048 container remove 55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_greider, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct 11 09:09:17 compute-0 systemd[1]: libpod-conmon-55f390b0d9cb9154736e42fc7d1b2032109cf916732b2c22b97063f23b3ecb20.scope: Deactivated successfully.
Oct 11 09:09:17 compute-0 sudo[362905]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:09:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:09:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:09:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:09:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 11fef533-3242-4416-80bf-147785e6f072 does not exist
Oct 11 09:09:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev b7d29961-5b95-4599-8a55-080db646447b does not exist
Oct 11 09:09:17 compute-0 sudo[363266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:09:17 compute-0 sudo[363266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:17 compute-0 sudo[363266]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:17 compute-0 sudo[363291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:09:17 compute-0 sudo[363291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:09:17 compute-0 sudo[363291]: pam_unix(sudo:session): session closed for user root
Oct 11 09:09:17 compute-0 ceph-mon[74313]: pgmap v2015: 321 pgs: 321 active+clean; 475 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Oct 11 09:09:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:09:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:09:17 compute-0 nova_compute[260935]: 2025-10-11 09:09:17.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:09:17 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000066.scope: Deactivated successfully.
Oct 11 09:09:17 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000066.scope: Consumed 12.875s CPU time.
Oct 11 09:09:17 compute-0 systemd-machined[215705]: Machine qemu-118-instance-00000066 terminated.
Oct 11 09:09:17 compute-0 podman[363316]: 2025-10-11 09:09:17.80125352 +0000 UTC m=+0.096149885 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:09:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2016: 321 pgs: 321 active+clean; 565 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.0 MiB/s wr, 164 op/s
Oct 11 09:09:18 compute-0 nova_compute[260935]: 2025-10-11 09:09:18.449 2 INFO nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance shutdown successfully after 13 seconds.
Oct 11 09:09:18 compute-0 nova_compute[260935]: 2025-10-11 09:09:18.457 2 INFO nova.virt.libvirt.driver [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance destroyed successfully.
Oct 11 09:09:18 compute-0 nova_compute[260935]: 2025-10-11 09:09:18.463 2 INFO nova.virt.libvirt.driver [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance destroyed successfully.
Oct 11 09:09:18 compute-0 nova_compute[260935]: 2025-10-11 09:09:18.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:09:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:09:18 compute-0 nova_compute[260935]: 2025-10-11 09:09:18.902 2 INFO nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Deleting instance files /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454_del
Oct 11 09:09:18 compute-0 nova_compute[260935]: 2025-10-11 09:09:18.903 2 INFO nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Deletion of /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454_del complete
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.101 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.102 2 INFO nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Creating image(s)
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.133 2 DEBUG nova.storage.rbd_utils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.170 2 DEBUG nova.storage.rbd_utils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.205 2 DEBUG nova.storage.rbd_utils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.210 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.317 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.319 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.320 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.321 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.360 2 DEBUG nova.storage.rbd_utils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.367 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:19 compute-0 ceph-mon[74313]: pgmap v2016: 321 pgs: 321 active+clean; 565 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.0 MiB/s wr, 164 op/s
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.708 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.762 2 DEBUG nova.storage.rbd_utils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] resizing rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.855 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.856 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Ensure instance console log exists: /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.857 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.858 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.858 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.861 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.866 2 WARNING nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.876 2 DEBUG nova.virt.libvirt.host [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.877 2 DEBUG nova.virt.libvirt.host [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.881 2 DEBUG nova.virt.libvirt.host [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.882 2 DEBUG nova.virt.libvirt.host [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.882 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.883 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.883 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.884 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.884 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.884 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.885 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.885 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.885 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.886 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.886 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.886 2 DEBUG nova.virt.hardware [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.887 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:19 compute-0 nova_compute[260935]: 2025-10-11 09:09:19.907 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:09:20 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/208600249' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:09:20 compute-0 nova_compute[260935]: 2025-10-11 09:09:20.388 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2017: 321 pgs: 321 active+clean; 565 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.0 MiB/s wr, 149 op/s
Oct 11 09:09:20 compute-0 nova_compute[260935]: 2025-10-11 09:09:20.421 2 DEBUG nova.storage.rbd_utils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:20 compute-0 nova_compute[260935]: 2025-10-11 09:09:20.426 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:20 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/208600249' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:09:20 compute-0 nova_compute[260935]: 2025-10-11 09:09:20.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:09:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:09:20 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4039969589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:09:20 compute-0 nova_compute[260935]: 2025-10-11 09:09:20.934 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:20 compute-0 nova_compute[260935]: 2025-10-11 09:09:20.935 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:20 compute-0 nova_compute[260935]: 2025-10-11 09:09:20.940 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:20 compute-0 nova_compute[260935]: 2025-10-11 09:09:20.943 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:09:20 compute-0 nova_compute[260935]:   <uuid>b3697ef4-2e67-4d7f-ad14-ae6ab4055454</uuid>
Oct 11 09:09:20 compute-0 nova_compute[260935]:   <name>instance-00000066</name>
Oct 11 09:09:20 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:09:20 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:09:20 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <nova:name>tempest-ServerShowV254Test-server-186124796</nova:name>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:09:19</nova:creationTime>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:09:20 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:09:20 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:09:20 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:09:20 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:09:20 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:09:20 compute-0 nova_compute[260935]:         <nova:user uuid="18ec7b6a713d43e9880a338abc3182b1">tempest-ServerShowV254Test-827203150-project-member</nova:user>
Oct 11 09:09:20 compute-0 nova_compute[260935]:         <nova:project uuid="31a5173de7e7470ab2566a226b8ba071">tempest-ServerShowV254Test-827203150</nova:project>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:09:20 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:09:20 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <system>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <entry name="serial">b3697ef4-2e67-4d7f-ad14-ae6ab4055454</entry>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <entry name="uuid">b3697ef4-2e67-4d7f-ad14-ae6ab4055454</entry>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     </system>
Oct 11 09:09:20 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:09:20 compute-0 nova_compute[260935]:   <os>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:   </os>
Oct 11 09:09:20 compute-0 nova_compute[260935]:   <features>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:   </features>
Oct 11 09:09:20 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:09:20 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:09:20 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk">
Oct 11 09:09:20 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       </source>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:09:20 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config">
Oct 11 09:09:20 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       </source>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:09:20 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/console.log" append="off"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <video>
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     </video>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:09:20 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:09:20 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:09:20 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:09:20 compute-0 nova_compute[260935]: </domain>
Oct 11 09:09:20 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:09:20 compute-0 nova_compute[260935]: 2025-10-11 09:09:20.961 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.034 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.034 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.035 2 INFO nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Using config drive
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.068 2 DEBUG nova.storage.rbd_utils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.107 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'ec2_ids' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.124 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.125 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.134 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.135 2 INFO nova.compute.claims [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.231 2 DEBUG nova.scheduler.client.report [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.262 2 DEBUG nova.scheduler.client.report [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.263 2 DEBUG nova.compute.provider_tree [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.286 2 DEBUG nova.scheduler.client.report [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.308 2 INFO nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Creating config drive at /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.316 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfqtx9cos execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.376 2 DEBUG nova.scheduler.client.report [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.485 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfqtx9cos" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.522 2 DEBUG nova.storage.rbd_utils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] rbd image b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.527 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.633 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:21 compute-0 ceph-mon[74313]: pgmap v2017: 321 pgs: 321 active+clean; 565 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.0 MiB/s wr, 149 op/s
Oct 11 09:09:21 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4039969589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.749 2 DEBUG oslo_concurrency.processutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config b3697ef4-2e67-4d7f-ad14-ae6ab4055454_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:21 compute-0 nova_compute[260935]: 2025-10-11 09:09:21.750 2 INFO nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Deleting local config drive /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454/disk.config because it was imported into RBD.
Oct 11 09:09:21 compute-0 systemd-machined[215705]: New machine qemu-120-instance-00000066.
Oct 11 09:09:21 compute-0 systemd[1]: Started Virtual Machine qemu-120-instance-00000066.
Oct 11 09:09:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:09:22 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4046967867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.177 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.188 2 DEBUG nova.compute.provider_tree [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.212 2 DEBUG nova.scheduler.client.report [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.249 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.251 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.334 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.334 2 DEBUG nova.network.neutron [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.364 2 INFO nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.400 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:09:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2018: 321 pgs: 321 active+clean; 540 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 7.0 MiB/s wr, 204 op/s
Oct 11 09:09:22 compute-0 sshd-session[363721]: Connection closed by 165.232.82.252 port 54086
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.550 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.552 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.552 2 INFO nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Creating image(s)
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.589 2 DEBUG nova.storage.rbd_utils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.627 2 DEBUG nova.storage.rbd_utils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.664 2 DEBUG nova.storage.rbd_utils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.670 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:22 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4046967867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.729 2 DEBUG nova.policy [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a213c3877fc144a3af0be3c3d853f999', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca4b15770e784f45910b630937562cb6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.734 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.736 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for b3697ef4-2e67-4d7f-ad14-ae6ab4055454 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.736 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173762.7353868, b3697ef4-2e67-4d7f-ad14-ae6ab4055454 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.737 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] VM Resumed (Lifecycle Event)
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.739 2 DEBUG nova.compute.manager [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.740 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.747 2 INFO nova.virt.libvirt.driver [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance spawned successfully.
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.748 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.769 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.777 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.779 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.782 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.782 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.782 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.783 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.783 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.784 2 DEBUG nova.virt.libvirt.driver [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.788 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.788 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.789 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.819 2 DEBUG nova.storage.rbd_utils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.823 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.865 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.866 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173762.7391198, b3697ef4-2e67-4d7f-ad14-ae6ab4055454 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.866 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] VM Started (Lifecycle Event)
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.905 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.908 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.931 2 DEBUG nova.compute.manager [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:22 compute-0 nova_compute[260935]: 2025-10-11 09:09:22.932 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.026 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.027 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.027 2 DEBUG nova.objects.instance [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.101 2 DEBUG oslo_concurrency.lockutils [None req-5e67c9aa-8b99-4747-a5aa-eef9566275f3 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.132 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.204 2 DEBUG nova.storage.rbd_utils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] resizing rbd image 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.311 2 DEBUG nova.objects.instance [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'migration_context' on Instance uuid 1e10d958-167c-4caf-8ff0-67f50e39ec3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.338 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.339 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Ensure instance console log exists: /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.339 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.340 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.340 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:23.589 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:09:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:23.593 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:09:23 compute-0 ceph-mon[74313]: pgmap v2018: 321 pgs: 321 active+clean; 540 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 7.0 MiB/s wr, 204 op/s
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.739 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.764 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.764 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.764 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.765 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.765 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.806 2 DEBUG nova.compute.manager [req-b1b9a9df-eccf-42e9-a026-9448b2791d62 req-db7d8b89-dc0a-4747-a2e1-dea0954847d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.807 2 DEBUG oslo_concurrency.lockutils [req-b1b9a9df-eccf-42e9-a026-9448b2791d62 req-db7d8b89-dc0a-4747-a2e1-dea0954847d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.807 2 DEBUG oslo_concurrency.lockutils [req-b1b9a9df-eccf-42e9-a026-9448b2791d62 req-db7d8b89-dc0a-4747-a2e1-dea0954847d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.807 2 DEBUG oslo_concurrency.lockutils [req-b1b9a9df-eccf-42e9-a026-9448b2791d62 req-db7d8b89-dc0a-4747-a2e1-dea0954847d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.807 2 DEBUG nova.compute.manager [req-b1b9a9df-eccf-42e9-a026-9448b2791d62 req-db7d8b89-dc0a-4747-a2e1-dea0954847d2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Processing event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.808 2 DEBUG nova.compute.manager [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.811 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173763.8115203, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.812 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Resumed (Lifecycle Event)
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.814 2 DEBUG nova.virt.libvirt.driver [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.818 2 INFO nova.virt.libvirt.driver [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance spawned successfully.
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.834 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.837 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:09:23 compute-0 nova_compute[260935]: 2025-10-11 09:09:23.857 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:09:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:09:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1583032190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.249 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.380 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.381 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.387 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.394 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.394 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.399 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.400 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.406 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.406 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.407 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:09:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2019: 321 pgs: 321 active+clean; 533 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.3 MiB/s wr, 183 op/s
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.629 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.630 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2800MB free_disk=59.752681732177734GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.630 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.631 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.632 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.632 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.632 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.633 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.633 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.634 2 INFO nova.compute.manager [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Terminating instance
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.635 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "refresh_cache-b3697ef4-2e67-4d7f-ad14-ae6ab4055454" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.635 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquired lock "refresh_cache-b3697ef4-2e67-4d7f-ad14-ae6ab4055454" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.635 2 DEBUG nova.network.neutron [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.637 2 DEBUG nova.network.neutron [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Successfully created port: 38159e48-a8a0-4daa-95d3-19be7346d2cb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:09:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Oct 11 09:09:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Oct 11 09:09:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1583032190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:09:24 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.741 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.741 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.741 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.743 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b3697ef4-2e67-4d7f-ad14-ae6ab4055454 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.743 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.744 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 1e10d958-167c-4caf-8ff0-67f50e39ec3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.744 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.744 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:09:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:09:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:09:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:09:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:09:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:09:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.894 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:24 compute-0 nova_compute[260935]: 2025-10-11 09:09:24.948 2 DEBUG nova.network.neutron [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.137 2 DEBUG nova.compute.manager [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.221 2 DEBUG oslo_concurrency.lockutils [None req-2e95498c-bcac-43e1-b077-459f5740c860 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 18.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:09:25 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1239512463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.342 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.346 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.390 2 DEBUG nova.network.neutron [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.392 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.415 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Releasing lock "refresh_cache-b3697ef4-2e67-4d7f-ad14-ae6ab4055454" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.416 2 DEBUG nova.compute.manager [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.420 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.420 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:25 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000066.scope: Deactivated successfully.
Oct 11 09:09:25 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000066.scope: Consumed 3.476s CPU time.
Oct 11 09:09:25 compute-0 systemd-machined[215705]: Machine qemu-120-instance-00000066 terminated.
Oct 11 09:09:25 compute-0 podman[363933]: 2025-10-11 09:09:25.542736185 +0000 UTC m=+0.065464410 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.649 2 INFO nova.virt.libvirt.driver [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance destroyed successfully.
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.650 2 DEBUG nova.objects.instance [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lazy-loading 'resources' on Instance uuid b3697ef4-2e67-4d7f-ad14-ae6ab4055454 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:25 compute-0 ceph-mon[74313]: pgmap v2019: 321 pgs: 321 active+clean; 533 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.3 MiB/s wr, 183 op/s
Oct 11 09:09:25 compute-0 ceph-mon[74313]: osdmap e275: 3 total, 3 up, 3 in
Oct 11 09:09:25 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1239512463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.738 2 DEBUG nova.network.neutron [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Successfully updated port: 38159e48-a8a0-4daa-95d3-19be7346d2cb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.763 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.763 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquired lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.763 2 DEBUG nova.network.neutron [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.866 2 DEBUG nova.compute.manager [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.866 2 DEBUG oslo_concurrency.lockutils [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.867 2 DEBUG oslo_concurrency.lockutils [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.867 2 DEBUG oslo_concurrency.lockutils [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.867 2 DEBUG nova.compute.manager [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.868 2 WARNING nova.compute.manager [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state active and task_state None.
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.868 2 DEBUG nova.compute.manager [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-changed-38159e48-a8a0-4daa-95d3-19be7346d2cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.869 2 DEBUG nova.compute.manager [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Refreshing instance network info cache due to event network-changed-38159e48-a8a0-4daa-95d3-19be7346d2cb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:09:25 compute-0 nova_compute[260935]: 2025-10-11 09:09:25.869 2 DEBUG oslo_concurrency.lockutils [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:09:26 compute-0 nova_compute[260935]: 2025-10-11 09:09:26.010 2 INFO nova.virt.libvirt.driver [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Deleting instance files /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454_del
Oct 11 09:09:26 compute-0 nova_compute[260935]: 2025-10-11 09:09:26.011 2 INFO nova.virt.libvirt.driver [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Deletion of /var/lib/nova/instances/b3697ef4-2e67-4d7f-ad14-ae6ab4055454_del complete
Oct 11 09:09:26 compute-0 nova_compute[260935]: 2025-10-11 09:09:26.219 2 DEBUG nova.network.neutron [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:09:26 compute-0 nova_compute[260935]: 2025-10-11 09:09:26.232 2 INFO nova.compute.manager [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 11 09:09:26 compute-0 nova_compute[260935]: 2025-10-11 09:09:26.233 2 DEBUG oslo.service.loopingcall [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:09:26 compute-0 nova_compute[260935]: 2025-10-11 09:09:26.233 2 DEBUG nova.compute.manager [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:09:26 compute-0 nova_compute[260935]: 2025-10-11 09:09:26.234 2 DEBUG nova.network.neutron [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:09:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2021: 321 pgs: 321 active+clean; 533 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 6.9 MiB/s wr, 210 op/s
Oct 11 09:09:26 compute-0 nova_compute[260935]: 2025-10-11 09:09:26.433 2 DEBUG nova.network.neutron [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:09:26 compute-0 nova_compute[260935]: 2025-10-11 09:09:26.456 2 DEBUG nova.network.neutron [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:09:26 compute-0 nova_compute[260935]: 2025-10-11 09:09:26.476 2 INFO nova.compute.manager [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Took 0.24 seconds to deallocate network for instance.
Oct 11 09:09:26 compute-0 nova_compute[260935]: 2025-10-11 09:09:26.534 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:26 compute-0 nova_compute[260935]: 2025-10-11 09:09:26.535 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:09:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/99429675' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:09:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:09:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/99429675' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:09:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/99429675' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:09:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/99429675' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:09:26 compute-0 nova_compute[260935]: 2025-10-11 09:09:26.713 2 DEBUG oslo_concurrency.processutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:09:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3612448378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.175 2 DEBUG oslo_concurrency.processutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.182 2 DEBUG nova.compute.provider_tree [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.205 2 DEBUG nova.scheduler.client.report [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.231 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.272 2 INFO nova.scheduler.client.report [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Deleted allocations for instance b3697ef4-2e67-4d7f-ad14-ae6ab4055454
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.343 2 DEBUG oslo_concurrency.lockutils [None req-c6c35388-6f65-484e-ba37-54027b958af1 18ec7b6a713d43e9880a338abc3182b1 31a5173de7e7470ab2566a226b8ba071 - - default default] Lock "b3697ef4-2e67-4d7f-ad14-ae6ab4055454" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.384 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.384 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.567 2 DEBUG nova.network.neutron [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Updating instance_info_cache with network_info: [{"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.596 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Releasing lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.596 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Instance network_info: |[{"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.597 2 DEBUG oslo_concurrency.lockutils [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.597 2 DEBUG nova.network.neutron [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Refreshing network info cache for port 38159e48-a8a0-4daa-95d3-19be7346d2cb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.605 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Start _get_guest_xml network_info=[{"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.607 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.607 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.608 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.614 2 WARNING nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.635 2 DEBUG nova.virt.libvirt.host [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.636 2 DEBUG nova.virt.libvirt.host [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.641 2 DEBUG nova.virt.libvirt.host [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.642 2 DEBUG nova.virt.libvirt.host [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.642 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.643 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.644 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.644 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.644 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.645 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.645 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.645 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.646 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.646 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.646 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.647 2 DEBUG nova.virt.hardware [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:09:27 compute-0 nova_compute[260935]: 2025-10-11 09:09:27.651 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:27 compute-0 ceph-mon[74313]: pgmap v2021: 321 pgs: 321 active+clean; 533 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 6.9 MiB/s wr, 210 op/s
Oct 11 09:09:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3612448378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:09:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:09:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2118813889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.151 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.179 2 DEBUG nova.storage.rbd_utils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.185 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2022: 321 pgs: 321 active+clean; 453 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 323 op/s
Oct 11 09:09:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:09:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1556775564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.640 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.643 2 DEBUG nova.virt.libvirt.vif [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:09:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1369305797',display_name='tempest-TestNetworkAdvancedServerOps-server-1369305797',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1369305797',id=103,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3xSdxOkU3R8spG6Ar0C2b6V4kJp9/fLj3qIKqVjHr+W8RWdFE2Y04thwsEaHG/Pjnq/p0OokkgP0kv89kAu1w3IytsdPugHW10i6+zWfhjEg7DZ4e+0dQC7E2fiV/TCA==',key_name='tempest-TestNetworkAdvancedServerOps-1897339183',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-0mc689ih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:09:22Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=1e10d958-167c-4caf-8ff0-67f50e39ec3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.644 2 DEBUG nova.network.os_vif_util [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.646 2 DEBUG nova.network.os_vif_util [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:70:56,bridge_name='br-int',has_traffic_filtering=True,id=38159e48-a8a0-4daa-95d3-19be7346d2cb,network=Network(3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38159e48-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.648 2 DEBUG nova.objects.instance [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e10d958-167c-4caf-8ff0-67f50e39ec3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.674 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:09:28 compute-0 nova_compute[260935]:   <uuid>1e10d958-167c-4caf-8ff0-67f50e39ec3c</uuid>
Oct 11 09:09:28 compute-0 nova_compute[260935]:   <name>instance-00000067</name>
Oct 11 09:09:28 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:09:28 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:09:28 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1369305797</nova:name>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:09:27</nova:creationTime>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:09:28 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:09:28 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:09:28 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:09:28 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:09:28 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:09:28 compute-0 nova_compute[260935]:         <nova:user uuid="a213c3877fc144a3af0be3c3d853f999">tempest-TestNetworkAdvancedServerOps-1304559157-project-member</nova:user>
Oct 11 09:09:28 compute-0 nova_compute[260935]:         <nova:project uuid="ca4b15770e784f45910b630937562cb6">tempest-TestNetworkAdvancedServerOps-1304559157</nova:project>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:09:28 compute-0 nova_compute[260935]:         <nova:port uuid="38159e48-a8a0-4daa-95d3-19be7346d2cb">
Oct 11 09:09:28 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:09:28 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:09:28 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <system>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <entry name="serial">1e10d958-167c-4caf-8ff0-67f50e39ec3c</entry>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <entry name="uuid">1e10d958-167c-4caf-8ff0-67f50e39ec3c</entry>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     </system>
Oct 11 09:09:28 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:09:28 compute-0 nova_compute[260935]:   <os>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:   </os>
Oct 11 09:09:28 compute-0 nova_compute[260935]:   <features>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:   </features>
Oct 11 09:09:28 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:09:28 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:09:28 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk">
Oct 11 09:09:28 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       </source>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:09:28 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk.config">
Oct 11 09:09:28 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       </source>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:09:28 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:7c:70:56"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <target dev="tap38159e48-a8"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c/console.log" append="off"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <video>
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     </video>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:09:28 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:09:28 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:09:28 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:09:28 compute-0 nova_compute[260935]: </domain>
Oct 11 09:09:28 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.676 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Preparing to wait for external event network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.677 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.678 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.678 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.679 2 DEBUG nova.virt.libvirt.vif [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:09:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1369305797',display_name='tempest-TestNetworkAdvancedServerOps-server-1369305797',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1369305797',id=103,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3xSdxOkU3R8spG6Ar0C2b6V4kJp9/fLj3qIKqVjHr+W8RWdFE2Y04thwsEaHG/Pjnq/p0OokkgP0kv89kAu1w3IytsdPugHW10i6+zWfhjEg7DZ4e+0dQC7E2fiV/TCA==',key_name='tempest-TestNetworkAdvancedServerOps-1897339183',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-0mc689ih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:09:22Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=1e10d958-167c-4caf-8ff0-67f50e39ec3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.680 2 DEBUG nova.network.os_vif_util [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.681 2 DEBUG nova.network.os_vif_util [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:70:56,bridge_name='br-int',has_traffic_filtering=True,id=38159e48-a8a0-4daa-95d3-19be7346d2cb,network=Network(3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38159e48-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.682 2 DEBUG os_vif [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:70:56,bridge_name='br-int',has_traffic_filtering=True,id=38159e48-a8a0-4daa-95d3-19be7346d2cb,network=Network(3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38159e48-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.685 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.690 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38159e48-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.691 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap38159e48-a8, col_values=(('external_ids', {'iface-id': '38159e48-a8a0-4daa-95d3-19be7346d2cb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:70:56', 'vm-uuid': '1e10d958-167c-4caf-8ff0-67f50e39ec3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:28 compute-0 NetworkManager[44960]: <info>  [1760173768.6962] manager: (tap38159e48-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.706 2 INFO os_vif [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:70:56,bridge_name='br-int',has_traffic_filtering=True,id=38159e48-a8a0-4daa-95d3-19be7346d2cb,network=Network(3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38159e48-a8')
Oct 11 09:09:28 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2118813889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:09:28 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1556775564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.769 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.769 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.770 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No VIF found with MAC fa:16:3e:7c:70:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.770 2 INFO nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Using config drive
Oct 11 09:09:28 compute-0 nova_compute[260935]: 2025-10-11 09:09:28.798 2 DEBUG nova.storage.rbd_utils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:09:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Oct 11 09:09:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Oct 11 09:09:28 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Oct 11 09:09:29 compute-0 nova_compute[260935]: 2025-10-11 09:09:29.385 2 INFO nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Creating config drive at /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c/disk.config
Oct 11 09:09:29 compute-0 nova_compute[260935]: 2025-10-11 09:09:29.393 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprd_3esig execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:29 compute-0 nova_compute[260935]: 2025-10-11 09:09:29.457 2 DEBUG nova.network.neutron [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Updated VIF entry in instance network info cache for port 38159e48-a8a0-4daa-95d3-19be7346d2cb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:09:29 compute-0 nova_compute[260935]: 2025-10-11 09:09:29.460 2 DEBUG nova.network.neutron [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Updating instance_info_cache with network_info: [{"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:09:29 compute-0 nova_compute[260935]: 2025-10-11 09:09:29.489 2 DEBUG oslo_concurrency.lockutils [req-e820127e-880c-4f3a-88ab-93ce9c8e3b4c req-273d0166-2656-4318-80ed-e4d9acc7e7ed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:09:29 compute-0 nova_compute[260935]: 2025-10-11 09:09:29.513 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:09:29 compute-0 nova_compute[260935]: 2025-10-11 09:09:29.540 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:09:29 compute-0 nova_compute[260935]: 2025-10-11 09:09:29.540 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:09:29 compute-0 nova_compute[260935]: 2025-10-11 09:09:29.558 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprd_3esig" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:29 compute-0 nova_compute[260935]: 2025-10-11 09:09:29.631 2 DEBUG nova.storage.rbd_utils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:09:29 compute-0 nova_compute[260935]: 2025-10-11 09:09:29.638 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c/disk.config 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:29 compute-0 nova_compute[260935]: 2025-10-11 09:09:29.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:29 compute-0 ceph-mon[74313]: pgmap v2022: 321 pgs: 321 active+clean; 453 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 323 op/s
Oct 11 09:09:29 compute-0 ceph-mon[74313]: osdmap e276: 3 total, 3 up, 3 in
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.745167) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173769745201, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 454, "num_deletes": 252, "total_data_size": 329063, "memory_usage": 337464, "flush_reason": "Manual Compaction"}
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173769749893, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 325692, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42036, "largest_seqno": 42489, "table_properties": {"data_size": 323047, "index_size": 681, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6718, "raw_average_key_size": 19, "raw_value_size": 317592, "raw_average_value_size": 917, "num_data_blocks": 29, "num_entries": 346, "num_filter_entries": 346, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173754, "oldest_key_time": 1760173754, "file_creation_time": 1760173769, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 4754 microseconds, and 1656 cpu microseconds.
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.749920) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 325692 bytes OK
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.749937) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.751700) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.751716) EVENT_LOG_v1 {"time_micros": 1760173769751711, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.751734) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 326281, prev total WAL file size 326281, number of live WAL files 2.
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.753312) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(318KB)], [92(11MB)]
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173769753357, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 12199628, "oldest_snapshot_seqno": -1}
Oct 11 09:09:29 compute-0 podman[364101]: 2025-10-11 09:09:29.781616192 +0000 UTC m=+0.076062513 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6608 keys, 10509838 bytes, temperature: kUnknown
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173769820710, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 10509838, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10462900, "index_size": 29290, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 169630, "raw_average_key_size": 25, "raw_value_size": 10341598, "raw_average_value_size": 1565, "num_data_blocks": 1160, "num_entries": 6608, "num_filter_entries": 6608, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173769, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.820956) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 10509838 bytes
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.822178) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.0 rd, 155.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.3 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(69.7) write-amplify(32.3) OK, records in: 7126, records dropped: 518 output_compression: NoCompression
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.822201) EVENT_LOG_v1 {"time_micros": 1760173769822191, "job": 54, "event": "compaction_finished", "compaction_time_micros": 67415, "compaction_time_cpu_micros": 30092, "output_level": 6, "num_output_files": 1, "total_output_size": 10509838, "num_input_records": 7126, "num_output_records": 6608, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173769822391, "job": 54, "event": "table_file_deletion", "file_number": 94}
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173769825017, "job": 54, "event": "table_file_deletion", "file_number": 92}
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.753255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.825051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:29 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 09:09:29 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.825056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.825057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.825058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:29.825060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:29 compute-0 podman[364102]: 2025-10-11 09:09:29.83443058 +0000 UTC m=+0.128730856 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 09:09:29 compute-0 nova_compute[260935]: 2025-10-11 09:09:29.908 2 DEBUG oslo_concurrency.processutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c/disk.config 1e10d958-167c-4caf-8ff0-67f50e39ec3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:29 compute-0 nova_compute[260935]: 2025-10-11 09:09:29.909 2 INFO nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Deleting local config drive /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c/disk.config because it was imported into RBD.
Oct 11 09:09:29 compute-0 kernel: tap38159e48-a8: entered promiscuous mode
Oct 11 09:09:29 compute-0 NetworkManager[44960]: <info>  [1760173769.9775] manager: (tap38159e48-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/386)
Oct 11 09:09:29 compute-0 ovn_controller[152945]: 2025-10-11T09:09:29Z|00915|binding|INFO|Claiming lport 38159e48-a8a0-4daa-95d3-19be7346d2cb for this chassis.
Oct 11 09:09:29 compute-0 ovn_controller[152945]: 2025-10-11T09:09:29Z|00916|binding|INFO|38159e48-a8a0-4daa-95d3-19be7346d2cb: Claiming fa:16:3e:7c:70:56 10.100.0.11
Oct 11 09:09:29 compute-0 nova_compute[260935]: 2025-10-11 09:09:29.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:29.994 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:70:56 10.100.0.11'], port_security=['fa:16:3e:7c:70:56 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1e10d958-167c-4caf-8ff0-67f50e39ec3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5fe33852-f012-4e6c-8c5b-0eacc3c5c3e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3216995-d47a-4bef-959f-e3536ac62432, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=38159e48-a8a0-4daa-95d3-19be7346d2cb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:09:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:29.996 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 38159e48-a8a0-4daa-95d3-19be7346d2cb in datapath 3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e bound to our chassis
Oct 11 09:09:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:29.998 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.020 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3b03c8bd-8858-42d3-84c8-0e487fd99536]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.022 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3e150cf5-f1 in ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.024 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3e150cf5-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.024 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d4370182-81d8-4c22-b747-898b3adaa540]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.026 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[823b9b21-12e4-475b-a89c-c0cccaf18175]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:30 compute-0 systemd-udevd[364179]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.044 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[a66719db-111b-4d9b-81c5-468e8f0a0d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:30 compute-0 NetworkManager[44960]: <info>  [1760173770.0538] device (tap38159e48-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:09:30 compute-0 NetworkManager[44960]: <info>  [1760173770.0562] device (tap38159e48-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:09:30 compute-0 systemd-machined[215705]: New machine qemu-121-instance-00000067.
Oct 11 09:09:30 compute-0 systemd[1]: Started Virtual Machine qemu-121-instance-00000067.
Oct 11 09:09:30 compute-0 nova_compute[260935]: 2025-10-11 09:09:30.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.077 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[133dfbb3-1cd9-4ecf-9fd4-cd39a018691f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:30 compute-0 ovn_controller[152945]: 2025-10-11T09:09:30Z|00917|binding|INFO|Setting lport 38159e48-a8a0-4daa-95d3-19be7346d2cb ovn-installed in OVS
Oct 11 09:09:30 compute-0 ovn_controller[152945]: 2025-10-11T09:09:30Z|00918|binding|INFO|Setting lport 38159e48-a8a0-4daa-95d3-19be7346d2cb up in Southbound
Oct 11 09:09:30 compute-0 nova_compute[260935]: 2025-10-11 09:09:30.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.119 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4f175af7-f1ee-404e-9c0f-b831e42023dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:30 compute-0 systemd-udevd[364184]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:09:30 compute-0 NetworkManager[44960]: <info>  [1760173770.1283] manager: (tap3e150cf5-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/387)
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.127 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0adc8786-eff7-446b-9456-39f77a535990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.181 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1bdd44ec-1b69-4be0-bc2d-c22c7a9b1dff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.183 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2575c78a-e142-49c5-9d79-e27d82f8637f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:30 compute-0 NetworkManager[44960]: <info>  [1760173770.2144] device (tap3e150cf5-f0): carrier: link connected
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.221 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7e59cd65-0a7e-44bd-8485-0252c26e5c02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.242 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b4bc6c46-5db8-4a2b-91d3-6a1bb6b1b429]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e150cf5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:ac:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561491, 'reachable_time': 20501, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364212, 'error': None, 'target': 'ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.262 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f09c1306-2f39-4671-87b4-21aba4c0405e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:ac30'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561491, 'tstamp': 561491}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364213, 'error': None, 'target': 'ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.282 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a53b751b-3187-407b-9469-b280392acd96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e150cf5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:ac:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561491, 'reachable_time': 20501, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364214, 'error': None, 'target': 'ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.320 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fd16a658-e57b-418f-9107-777a31a30943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.394 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e5f857-5105-4851-b9a9-c564c493304d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.396 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e150cf5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.396 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.397 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e150cf5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:30 compute-0 nova_compute[260935]: 2025-10-11 09:09:30.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:30 compute-0 NetworkManager[44960]: <info>  [1760173770.4007] manager: (tap3e150cf5-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Oct 11 09:09:30 compute-0 kernel: tap3e150cf5-f0: entered promiscuous mode
Oct 11 09:09:30 compute-0 nova_compute[260935]: 2025-10-11 09:09:30.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.407 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e150cf5-f0, col_values=(('external_ids', {'iface-id': 'd058d94f-2c43-4ada-876d-4d89d5169811'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:30 compute-0 nova_compute[260935]: 2025-10-11 09:09:30.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:30 compute-0 ovn_controller[152945]: 2025-10-11T09:09:30Z|00919|binding|INFO|Releasing lport d058d94f-2c43-4ada-876d-4d89d5169811 from this chassis (sb_readonly=0)
Oct 11 09:09:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 453 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.9 MiB/s wr, 320 op/s
Oct 11 09:09:30 compute-0 nova_compute[260935]: 2025-10-11 09:09:30.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.451 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.452 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a676ecd8-96b4-4a20-86f9-a9b44f15571e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.453 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e.pid.haproxy
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:09:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:30.454 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e', 'env', 'PROCESS_TAG=haproxy-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:09:30 compute-0 podman[364288]: 2025-10-11 09:09:30.942154762 +0000 UTC m=+0.082104585 container create 6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 09:09:30 compute-0 podman[364288]: 2025-10-11 09:09:30.893017269 +0000 UTC m=+0.032967122 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:09:31 compute-0 systemd[1]: Started libpod-conmon-6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6.scope.
Oct 11 09:09:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:09:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6608af26482039c5816a02e8765c53d58b01bc30687269e83d429b9f655fb4c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:09:31 compute-0 podman[364288]: 2025-10-11 09:09:31.078976948 +0000 UTC m=+0.218926831 container init 6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.088 2 DEBUG nova.compute.manager [req-236749c3-eeae-42dc-9990-1aba432b5703 req-459fb2e9-93fb-422a-89b0-e2b61fff1973 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:31 compute-0 podman[364288]: 2025-10-11 09:09:31.090410584 +0000 UTC m=+0.230360407 container start 6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.090 2 DEBUG oslo_concurrency.lockutils [req-236749c3-eeae-42dc-9990-1aba432b5703 req-459fb2e9-93fb-422a-89b0-e2b61fff1973 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.091 2 DEBUG oslo_concurrency.lockutils [req-236749c3-eeae-42dc-9990-1aba432b5703 req-459fb2e9-93fb-422a-89b0-e2b61fff1973 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.092 2 DEBUG oslo_concurrency.lockutils [req-236749c3-eeae-42dc-9990-1aba432b5703 req-459fb2e9-93fb-422a-89b0-e2b61fff1973 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.092 2 DEBUG nova.compute.manager [req-236749c3-eeae-42dc-9990-1aba432b5703 req-459fb2e9-93fb-422a-89b0-e2b61fff1973 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Processing event network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:09:31 compute-0 neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e[364303]: [NOTICE]   (364307) : New worker (364309) forked
Oct 11 09:09:31 compute-0 neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e[364303]: [NOTICE]   (364307) : Loading success.
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.210 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.211 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173771.2108932, 1e10d958-167c-4caf-8ff0-67f50e39ec3c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.211 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] VM Started (Lifecycle Event)
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.219 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.223 2 INFO nova.virt.libvirt.driver [-] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Instance spawned successfully.
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.224 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.240 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.251 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.257 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.258 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.259 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.260 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.260 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.261 2 DEBUG nova.virt.libvirt.driver [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.275 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.275 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173771.2154548, 1e10d958-167c-4caf-8ff0-67f50e39ec3c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.276 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] VM Paused (Lifecycle Event)
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.325 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.330 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173771.2175663, 1e10d958-167c-4caf-8ff0-67f50e39ec3c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.330 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] VM Resumed (Lifecycle Event)
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.341 2 INFO nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Took 8.79 seconds to spawn the instance on the hypervisor.
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.342 2 DEBUG nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.352 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.356 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.388 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.424 2 INFO nova.compute.manager [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Took 10.35 seconds to build instance.
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.442 2 DEBUG oslo_concurrency.lockutils [None req-4a715ce3-3819-4b8e-820e-809ccea8612a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:31.595 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:31 compute-0 ceph-mon[74313]: pgmap v2024: 321 pgs: 321 active+clean; 453 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.9 MiB/s wr, 320 op/s
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.822 2 DEBUG nova.objects.instance [None req-d17933ce-c67d-44bc-b91c-be59c84776d2 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.865 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173771.8650405, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.868 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Paused (Lifecycle Event)
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.896 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.904 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:09:31 compute-0 nova_compute[260935]: 2025-10-11 09:09:31.945 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 11 09:09:32 compute-0 kernel: tapa611854c-0a (unregistering): left promiscuous mode
Oct 11 09:09:32 compute-0 NetworkManager[44960]: <info>  [1760173772.2763] device (tapa611854c-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:09:32 compute-0 nova_compute[260935]: 2025-10-11 09:09:32.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:32 compute-0 ovn_controller[152945]: 2025-10-11T09:09:32Z|00920|binding|INFO|Releasing lport a611854c-0a61-41b8-91ce-0c0f893aa54c from this chassis (sb_readonly=0)
Oct 11 09:09:32 compute-0 ovn_controller[152945]: 2025-10-11T09:09:32Z|00921|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c down in Southbound
Oct 11 09:09:32 compute-0 ovn_controller[152945]: 2025-10-11T09:09:32Z|00922|binding|INFO|Removing iface tapa611854c-0a ovn-installed in OVS
Oct 11 09:09:32 compute-0 nova_compute[260935]: 2025-10-11 09:09:32.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.327 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:da:b0 10.100.0.12'], port_security=['fa:16:3e:9d:da:b0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a611854c-0a61-41b8-91ce-0c0f893aa54c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:09:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.329 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a611854c-0a61-41b8-91ce-0c0f893aa54c in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec unbound from our chassis
Oct 11 09:09:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.333 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14e82eeb-74e2-4de3-9047-74da777fe1ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:09:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.334 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1ebc11-bb90-4a9d-a16c-9e4481c38b94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.335 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec namespace which is not needed anymore
Oct 11 09:09:32 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000062.scope: Deactivated successfully.
Oct 11 09:09:32 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000062.scope: Consumed 9.168s CPU time.
Oct 11 09:09:32 compute-0 nova_compute[260935]: 2025-10-11 09:09:32.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:32 compute-0 systemd-machined[215705]: Machine qemu-119-instance-00000062 terminated.
Oct 11 09:09:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2025: 321 pgs: 321 active+clean; 453 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 2.7 MiB/s wr, 348 op/s
Oct 11 09:09:32 compute-0 nova_compute[260935]: 2025-10-11 09:09:32.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:32 compute-0 nova_compute[260935]: 2025-10-11 09:09:32.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:32 compute-0 nova_compute[260935]: 2025-10-11 09:09:32.474 2 DEBUG nova.compute.manager [None req-d17933ce-c67d-44bc-b91c-be59c84776d2 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:32 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[363209]: [NOTICE]   (363218) : haproxy version is 2.8.14-c23fe91
Oct 11 09:09:32 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[363209]: [NOTICE]   (363218) : path to executable is /usr/sbin/haproxy
Oct 11 09:09:32 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[363209]: [WARNING]  (363218) : Exiting Master process...
Oct 11 09:09:32 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[363209]: [WARNING]  (363218) : Exiting Master process...
Oct 11 09:09:32 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[363209]: [ALERT]    (363218) : Current worker (363222) exited with code 143 (Terminated)
Oct 11 09:09:32 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[363209]: [WARNING]  (363218) : All workers exited. Exiting... (0)
Oct 11 09:09:32 compute-0 systemd[1]: libpod-4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901.scope: Deactivated successfully.
Oct 11 09:09:32 compute-0 podman[364351]: 2025-10-11 09:09:32.560641254 +0000 UTC m=+0.062694101 container died 4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:09:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901-userdata-shm.mount: Deactivated successfully.
Oct 11 09:09:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-56bddb7740329636355fcc11f4dc0b2142ccfefe44ada4320f628f0155b9b1ac-merged.mount: Deactivated successfully.
Oct 11 09:09:32 compute-0 podman[364351]: 2025-10-11 09:09:32.621033008 +0000 UTC m=+0.123085845 container cleanup 4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 09:09:32 compute-0 systemd[1]: libpod-conmon-4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901.scope: Deactivated successfully.
Oct 11 09:09:32 compute-0 podman[364380]: 2025-10-11 09:09:32.716349159 +0000 UTC m=+0.061028033 container remove 4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 11 09:09:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.727 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c98660e0-b8ed-40dd-bcb1-a903bbcb46e2]: (4, ('Sat Oct 11 09:09:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec (4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901)\n4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901\nSat Oct 11 09:09:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec (4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901)\n4c5576afb82b4ccd0ccde2b6eb55108563fef3f9bc2ce3ee3984b2f7786bc901\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.729 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1e58284f-ab38-4daa-afa0-3f61e349aada]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.730 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:32 compute-0 kernel: tap14e82eeb-70: left promiscuous mode
Oct 11 09:09:32 compute-0 nova_compute[260935]: 2025-10-11 09:09:32.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:32 compute-0 nova_compute[260935]: 2025-10-11 09:09:32.740 2 DEBUG nova.compute.manager [req-de395aca-1d6c-4b0b-8ef3-733e4e01d72d req-538a1f19-92c5-4efd-89a3-594825b10aa4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:32 compute-0 nova_compute[260935]: 2025-10-11 09:09:32.740 2 DEBUG oslo_concurrency.lockutils [req-de395aca-1d6c-4b0b-8ef3-733e4e01d72d req-538a1f19-92c5-4efd-89a3-594825b10aa4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:32 compute-0 nova_compute[260935]: 2025-10-11 09:09:32.741 2 DEBUG oslo_concurrency.lockutils [req-de395aca-1d6c-4b0b-8ef3-733e4e01d72d req-538a1f19-92c5-4efd-89a3-594825b10aa4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:32 compute-0 nova_compute[260935]: 2025-10-11 09:09:32.741 2 DEBUG oslo_concurrency.lockutils [req-de395aca-1d6c-4b0b-8ef3-733e4e01d72d req-538a1f19-92c5-4efd-89a3-594825b10aa4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:32 compute-0 nova_compute[260935]: 2025-10-11 09:09:32.741 2 DEBUG nova.compute.manager [req-de395aca-1d6c-4b0b-8ef3-733e4e01d72d req-538a1f19-92c5-4efd-89a3-594825b10aa4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:09:32 compute-0 nova_compute[260935]: 2025-10-11 09:09:32.742 2 WARNING nova.compute.manager [req-de395aca-1d6c-4b0b-8ef3-733e4e01d72d req-538a1f19-92c5-4efd-89a3-594825b10aa4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state suspended and task_state None.
Oct 11 09:09:32 compute-0 nova_compute[260935]: 2025-10-11 09:09:32.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.765 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d9679889-e34a-4dbb-ac8d-45d6d12dbbde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.799 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f092f369-8f49-4338-83f9-ceaa609b9369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.800 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[80287d0d-a34e-4649-9052-472e9b37d2ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.825 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[31f18c00-3a9f-4249-9150-ba4067ad4d9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560050, 'reachable_time': 40627, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364397, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d14e82eeb\x2d74e2\x2d4de3\x2d9047\x2d74da777fe1ec.mount: Deactivated successfully.
Oct 11 09:09:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.830 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:09:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:32.830 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[c170e288-5ffb-4202-9b76-66056c5e3683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:33 compute-0 nova_compute[260935]: 2025-10-11 09:09:33.268 2 DEBUG nova.compute.manager [req-e9f8d358-b9d6-4595-a372-7c9375186eac req-f2102ee9-34e9-41ee-b3f2-3bd0d61378c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:33 compute-0 nova_compute[260935]: 2025-10-11 09:09:33.268 2 DEBUG oslo_concurrency.lockutils [req-e9f8d358-b9d6-4595-a372-7c9375186eac req-f2102ee9-34e9-41ee-b3f2-3bd0d61378c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:33 compute-0 nova_compute[260935]: 2025-10-11 09:09:33.270 2 DEBUG oslo_concurrency.lockutils [req-e9f8d358-b9d6-4595-a372-7c9375186eac req-f2102ee9-34e9-41ee-b3f2-3bd0d61378c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:33 compute-0 nova_compute[260935]: 2025-10-11 09:09:33.270 2 DEBUG oslo_concurrency.lockutils [req-e9f8d358-b9d6-4595-a372-7c9375186eac req-f2102ee9-34e9-41ee-b3f2-3bd0d61378c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:33 compute-0 nova_compute[260935]: 2025-10-11 09:09:33.271 2 DEBUG nova.compute.manager [req-e9f8d358-b9d6-4595-a372-7c9375186eac req-f2102ee9-34e9-41ee-b3f2-3bd0d61378c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] No waiting events found dispatching network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:09:33 compute-0 nova_compute[260935]: 2025-10-11 09:09:33.272 2 WARNING nova.compute.manager [req-e9f8d358-b9d6-4595-a372-7c9375186eac req-f2102ee9-34e9-41ee-b3f2-3bd0d61378c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received unexpected event network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb for instance with vm_state active and task_state None.
Oct 11 09:09:33 compute-0 nova_compute[260935]: 2025-10-11 09:09:33.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:33 compute-0 ceph-mon[74313]: pgmap v2025: 321 pgs: 321 active+clean; 453 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 2.7 MiB/s wr, 348 op/s
Oct 11 09:09:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:09:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2026: 321 pgs: 321 active+clean; 453 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 2.2 MiB/s wr, 306 op/s
Oct 11 09:09:34 compute-0 nova_compute[260935]: 2025-10-11 09:09:34.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:34 compute-0 nova_compute[260935]: 2025-10-11 09:09:34.879 2 DEBUG nova.compute.manager [req-4b9ac9b1-0f95-4606-b436-eaff9cdbcddd req-57f473e6-d249-4095-b085-5c6b972f53e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:34 compute-0 nova_compute[260935]: 2025-10-11 09:09:34.879 2 DEBUG oslo_concurrency.lockutils [req-4b9ac9b1-0f95-4606-b436-eaff9cdbcddd req-57f473e6-d249-4095-b085-5c6b972f53e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:34 compute-0 nova_compute[260935]: 2025-10-11 09:09:34.880 2 DEBUG oslo_concurrency.lockutils [req-4b9ac9b1-0f95-4606-b436-eaff9cdbcddd req-57f473e6-d249-4095-b085-5c6b972f53e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:34 compute-0 nova_compute[260935]: 2025-10-11 09:09:34.880 2 DEBUG oslo_concurrency.lockutils [req-4b9ac9b1-0f95-4606-b436-eaff9cdbcddd req-57f473e6-d249-4095-b085-5c6b972f53e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:34 compute-0 nova_compute[260935]: 2025-10-11 09:09:34.881 2 DEBUG nova.compute.manager [req-4b9ac9b1-0f95-4606-b436-eaff9cdbcddd req-57f473e6-d249-4095-b085-5c6b972f53e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:09:34 compute-0 nova_compute[260935]: 2025-10-11 09:09:34.881 2 WARNING nova.compute.manager [req-4b9ac9b1-0f95-4606-b436-eaff9cdbcddd req-57f473e6-d249-4095-b085-5c6b972f53e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state suspended and task_state None.
Oct 11 09:09:35 compute-0 nova_compute[260935]: 2025-10-11 09:09:35.198 2 INFO nova.compute.manager [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Resuming
Oct 11 09:09:35 compute-0 nova_compute[260935]: 2025-10-11 09:09:35.200 2 DEBUG nova.objects.instance [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'flavor' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:35 compute-0 nova_compute[260935]: 2025-10-11 09:09:35.246 2 DEBUG oslo_concurrency.lockutils [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:09:35 compute-0 nova_compute[260935]: 2025-10-11 09:09:35.247 2 DEBUG oslo_concurrency.lockutils [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquired lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:09:35 compute-0 nova_compute[260935]: 2025-10-11 09:09:35.249 2 DEBUG nova.network.neutron [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:09:35 compute-0 NetworkManager[44960]: <info>  [1760173775.2702] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Oct 11 09:09:35 compute-0 NetworkManager[44960]: <info>  [1760173775.2713] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Oct 11 09:09:35 compute-0 ovn_controller[152945]: 2025-10-11T09:09:35Z|00923|binding|INFO|Releasing lport d058d94f-2c43-4ada-876d-4d89d5169811 from this chassis (sb_readonly=0)
Oct 11 09:09:35 compute-0 ovn_controller[152945]: 2025-10-11T09:09:35Z|00924|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:09:35 compute-0 ovn_controller[152945]: 2025-10-11T09:09:35Z|00925|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:09:35 compute-0 nova_compute[260935]: 2025-10-11 09:09:35.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:35 compute-0 nova_compute[260935]: 2025-10-11 09:09:35.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:35 compute-0 ovn_controller[152945]: 2025-10-11T09:09:35Z|00926|binding|INFO|Releasing lport d058d94f-2c43-4ada-876d-4d89d5169811 from this chassis (sb_readonly=0)
Oct 11 09:09:35 compute-0 ovn_controller[152945]: 2025-10-11T09:09:35Z|00927|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:09:35 compute-0 ovn_controller[152945]: 2025-10-11T09:09:35Z|00928|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:09:35 compute-0 nova_compute[260935]: 2025-10-11 09:09:35.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:35 compute-0 ceph-mon[74313]: pgmap v2026: 321 pgs: 321 active+clean; 453 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 2.2 MiB/s wr, 306 op/s
Oct 11 09:09:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 453 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.1 MiB/s wr, 298 op/s
Oct 11 09:09:37 compute-0 nova_compute[260935]: 2025-10-11 09:09:37.008 2 DEBUG nova.compute.manager [req-1ee22aea-8ff6-4934-b395-dfb5998193fb req-4131591f-8b69-4e99-8763-44f98c4ff5a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-changed-38159e48-a8a0-4daa-95d3-19be7346d2cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:37 compute-0 nova_compute[260935]: 2025-10-11 09:09:37.008 2 DEBUG nova.compute.manager [req-1ee22aea-8ff6-4934-b395-dfb5998193fb req-4131591f-8b69-4e99-8763-44f98c4ff5a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Refreshing instance network info cache due to event network-changed-38159e48-a8a0-4daa-95d3-19be7346d2cb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:09:37 compute-0 nova_compute[260935]: 2025-10-11 09:09:37.009 2 DEBUG oslo_concurrency.lockutils [req-1ee22aea-8ff6-4934-b395-dfb5998193fb req-4131591f-8b69-4e99-8763-44f98c4ff5a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:09:37 compute-0 nova_compute[260935]: 2025-10-11 09:09:37.009 2 DEBUG oslo_concurrency.lockutils [req-1ee22aea-8ff6-4934-b395-dfb5998193fb req-4131591f-8b69-4e99-8763-44f98c4ff5a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:09:37 compute-0 nova_compute[260935]: 2025-10-11 09:09:37.009 2 DEBUG nova.network.neutron [req-1ee22aea-8ff6-4934-b395-dfb5998193fb req-4131591f-8b69-4e99-8763-44f98c4ff5a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Refreshing network info cache for port 38159e48-a8a0-4daa-95d3-19be7346d2cb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:09:37 compute-0 ceph-mon[74313]: pgmap v2027: 321 pgs: 321 active+clean; 453 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.1 MiB/s wr, 298 op/s
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.308 2 DEBUG nova.network.neutron [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating instance_info_cache with network_info: [{"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.331 2 DEBUG oslo_concurrency.lockutils [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Releasing lock "refresh_cache-0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.338 2 DEBUG nova.virt.libvirt.vif [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:07:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-581429551',display_name='tempest-ServersNegativeTestJSON-server-581429551',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-581429551',id=98,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:09:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-wp4zx0a9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:09:32Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.338 2 DEBUG nova.network.os_vif_util [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.339 2 DEBUG nova.network.os_vif_util [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.340 2 DEBUG os_vif [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.341 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.342 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa611854c-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa611854c-0a, col_values=(('external_ids', {'iface-id': 'a611854c-0a61-41b8-91ce-0c0f893aa54c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:da:b0', 'vm-uuid': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.347 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.347 2 INFO os_vif [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a')
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.380 2 DEBUG nova.objects.instance [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2028: 321 pgs: 321 active+clean; 453 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Oct 11 09:09:38 compute-0 NetworkManager[44960]: <info>  [1760173778.4727] manager: (tapa611854c-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/391)
Oct 11 09:09:38 compute-0 kernel: tapa611854c-0a: entered promiscuous mode
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:38 compute-0 ovn_controller[152945]: 2025-10-11T09:09:38Z|00929|binding|INFO|Claiming lport a611854c-0a61-41b8-91ce-0c0f893aa54c for this chassis.
Oct 11 09:09:38 compute-0 ovn_controller[152945]: 2025-10-11T09:09:38Z|00930|binding|INFO|a611854c-0a61-41b8-91ce-0c0f893aa54c: Claiming fa:16:3e:9d:da:b0 10.100.0.12
Oct 11 09:09:38 compute-0 systemd-udevd[364411]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.525 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:da:b0 10.100.0.12'], port_security=['fa:16:3e:9d:da:b0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a611854c-0a61-41b8-91ce-0c0f893aa54c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.526 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a611854c-0a61-41b8-91ce-0c0f893aa54c in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec bound to our chassis
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.528 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 09:09:38 compute-0 ovn_controller[152945]: 2025-10-11T09:09:38Z|00931|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c ovn-installed in OVS
Oct 11 09:09:38 compute-0 ovn_controller[152945]: 2025-10-11T09:09:38Z|00932|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c up in Southbound
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.545 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[00349b22-8700-474a-a0ae-949e8a8d00e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.545 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14e82eeb-71 in ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:09:38 compute-0 NetworkManager[44960]: <info>  [1760173778.5475] device (tapa611854c-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.548 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14e82eeb-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.548 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad478bb-574e-4c9b-9152-f28e7b9d2ab3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.549 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5d4eb88f-ab8b-4be5-8a74-ae0b08938165]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:38 compute-0 NetworkManager[44960]: <info>  [1760173778.5505] device (tapa611854c-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:09:38 compute-0 systemd-machined[215705]: New machine qemu-122-instance-00000062.
Oct 11 09:09:38 compute-0 systemd[1]: Started Virtual Machine qemu-122-instance-00000062.
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.574 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[be09fb2f-645b-4bc5-9679-c09dd6449fe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.606 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[db481803-cc16-4b48-86eb-efeed8dd4390]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.665 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f9862c8f-d330-4afb-91b4-f24f862408c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:38 compute-0 NetworkManager[44960]: <info>  [1760173778.6727] manager: (tap14e82eeb-70): new Veth device (/org/freedesktop/NetworkManager/Devices/392)
Oct 11 09:09:38 compute-0 systemd-udevd[364418]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.670 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[36040262-2c0e-4d8d-b8d4-5ccba852d843]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.723 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[230149dd-dc63-416a-b9b4-740fd2bffeb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.726 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[37c0a4ef-cd04-415f-adf8-f335cbc9e86c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:38 compute-0 NetworkManager[44960]: <info>  [1760173778.7634] device (tap14e82eeb-70): carrier: link connected
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.773 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ee603908-d14a-48e9-ac81-5166aa9a84eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.808 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[61bbf444-9f7d-43a6-abe6-3262f05445ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562346, 'reachable_time': 21818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364447, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.829 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b187d892-3b92-460b-bab5-70df9ca539b0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:56f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562346, 'tstamp': 562346}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364448, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.852 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1001a8-7de8-40a4-9052-5eee6d93f68e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e82eeb-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:56:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562346, 'reachable_time': 21818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364449, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.896 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2f0919-5fb3-43bd-bb6a-0dc60abd0835]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.988 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fcd4a2da-d881-4494-bbd0-e6e2c58bcc82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.990 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.990 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.991 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e82eeb-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:38 compute-0 NetworkManager[44960]: <info>  [1760173778.9946] manager: (tap14e82eeb-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Oct 11 09:09:38 compute-0 kernel: tap14e82eeb-70: entered promiscuous mode
Oct 11 09:09:38 compute-0 nova_compute[260935]: 2025-10-11 09:09:38.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:38.999 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e82eeb-70, col_values=(('external_ids', {'iface-id': 'c44760fe-71c5-482d-aee7-a0b0513681b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:39 compute-0 ovn_controller[152945]: 2025-10-11T09:09:39Z|00933|binding|INFO|Releasing lport c44760fe-71c5-482d-aee7-a0b0513681b7 from this chassis (sb_readonly=0)
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:39.032 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:39.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc8415b-365d-493e-b2a7-b6338c613a22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:39.035 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/14e82eeb-74e2-4de3-9047-74da777fe1ec.pid.haproxy
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 14e82eeb-74e2-4de3-9047-74da777fe1ec
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:09:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:39.037 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'env', 'PROCESS_TAG=haproxy-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14e82eeb-74e2-4de3-9047-74da777fe1ec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.297 2 DEBUG nova.compute.manager [req-9078b05b-c7de-4bcc-86d9-b008abb62e02 req-3b71c798-617b-45c5-9798-81ca7fed664d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.298 2 DEBUG oslo_concurrency.lockutils [req-9078b05b-c7de-4bcc-86d9-b008abb62e02 req-3b71c798-617b-45c5-9798-81ca7fed664d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.298 2 DEBUG oslo_concurrency.lockutils [req-9078b05b-c7de-4bcc-86d9-b008abb62e02 req-3b71c798-617b-45c5-9798-81ca7fed664d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.299 2 DEBUG oslo_concurrency.lockutils [req-9078b05b-c7de-4bcc-86d9-b008abb62e02 req-3b71c798-617b-45c5-9798-81ca7fed664d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.300 2 DEBUG nova.compute.manager [req-9078b05b-c7de-4bcc-86d9-b008abb62e02 req-3b71c798-617b-45c5-9798-81ca7fed664d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.300 2 WARNING nova.compute.manager [req-9078b05b-c7de-4bcc-86d9-b008abb62e02 req-3b71c798-617b-45c5-9798-81ca7fed664d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state suspended and task_state resuming.
Oct 11 09:09:39 compute-0 podman[364521]: 2025-10-11 09:09:39.470327704 +0000 UTC m=+0.069353041 container create 064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 09:09:39 compute-0 systemd[1]: Started libpod-conmon-064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b.scope.
Oct 11 09:09:39 compute-0 podman[364521]: 2025-10-11 09:09:39.424686271 +0000 UTC m=+0.023711618 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:09:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:09:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fd75aade0337d682b14081c1eccf706df4989bd7a9205a59aaf2a4e8230fe7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.576 2 DEBUG nova.network.neutron [req-1ee22aea-8ff6-4934-b395-dfb5998193fb req-4131591f-8b69-4e99-8763-44f98c4ff5a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Updated VIF entry in instance network info cache for port 38159e48-a8a0-4daa-95d3-19be7346d2cb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.577 2 DEBUG nova.network.neutron [req-1ee22aea-8ff6-4934-b395-dfb5998193fb req-4131591f-8b69-4e99-8763-44f98c4ff5a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Updating instance_info_cache with network_info: [{"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:09:39 compute-0 podman[364521]: 2025-10-11 09:09:39.582717402 +0000 UTC m=+0.181742769 container init 064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:09:39 compute-0 podman[364521]: 2025-10-11 09:09:39.59033812 +0000 UTC m=+0.189363457 container start 064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.602 2 DEBUG oslo_concurrency.lockutils [req-1ee22aea-8ff6-4934-b395-dfb5998193fb req-4131591f-8b69-4e99-8763-44f98c4ff5a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:09:39 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[364536]: [NOTICE]   (364540) : New worker (364542) forked
Oct 11 09:09:39 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[364536]: [NOTICE]   (364540) : Loading success.
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:39 compute-0 ceph-mon[74313]: pgmap v2028: 321 pgs: 321 active+clean; 453 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.846 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.846 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173779.8458936, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.846 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Started (Lifecycle Event)
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.880 2 DEBUG nova.compute.manager [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.881 2 DEBUG nova.objects.instance [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.892 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.897 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.901 2 INFO nova.virt.libvirt.driver [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance running successfully.
Oct 11 09:09:39 compute-0 virtqemud[260524]: argument unsupported: QEMU guest agent is not configured
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.904 2 DEBUG nova.virt.libvirt.guest [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.904 2 DEBUG nova.compute.manager [None req-eaad3c70-20f5-4317-883e-0f5f692ea002 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.916 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.917 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173779.853199, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.918 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Resumed (Lifecycle Event)
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.955 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:39 compute-0 nova_compute[260935]: 2025-10-11 09:09:39.958 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:09:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2029: 321 pgs: 321 active+clean; 453 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 11 09:09:40 compute-0 nova_compute[260935]: 2025-10-11 09:09:40.646 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173765.644571, b3697ef4-2e67-4d7f-ad14-ae6ab4055454 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:09:40 compute-0 nova_compute[260935]: 2025-10-11 09:09:40.648 2 INFO nova.compute.manager [-] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] VM Stopped (Lifecycle Event)
Oct 11 09:09:40 compute-0 nova_compute[260935]: 2025-10-11 09:09:40.684 2 DEBUG nova.compute.manager [None req-b17f70c3-9df2-4665-b1ce-1cd415db256b - - - - - -] [instance: b3697ef4-2e67-4d7f-ad14-ae6ab4055454] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:41 compute-0 nova_compute[260935]: 2025-10-11 09:09:41.611 2 DEBUG nova.compute.manager [req-3681bd0a-b112-4d98-9112-114865dd07f3 req-fbe04f97-aa8e-4e8e-a171-fcef083e5245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:41 compute-0 nova_compute[260935]: 2025-10-11 09:09:41.612 2 DEBUG oslo_concurrency.lockutils [req-3681bd0a-b112-4d98-9112-114865dd07f3 req-fbe04f97-aa8e-4e8e-a171-fcef083e5245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:41 compute-0 nova_compute[260935]: 2025-10-11 09:09:41.612 2 DEBUG oslo_concurrency.lockutils [req-3681bd0a-b112-4d98-9112-114865dd07f3 req-fbe04f97-aa8e-4e8e-a171-fcef083e5245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:41 compute-0 nova_compute[260935]: 2025-10-11 09:09:41.613 2 DEBUG oslo_concurrency.lockutils [req-3681bd0a-b112-4d98-9112-114865dd07f3 req-fbe04f97-aa8e-4e8e-a171-fcef083e5245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:41 compute-0 nova_compute[260935]: 2025-10-11 09:09:41.613 2 DEBUG nova.compute.manager [req-3681bd0a-b112-4d98-9112-114865dd07f3 req-fbe04f97-aa8e-4e8e-a171-fcef083e5245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:09:41 compute-0 nova_compute[260935]: 2025-10-11 09:09:41.614 2 WARNING nova.compute.manager [req-3681bd0a-b112-4d98-9112-114865dd07f3 req-fbe04f97-aa8e-4e8e-a171-fcef083e5245 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state active and task_state None.
Oct 11 09:09:41 compute-0 ceph-mon[74313]: pgmap v2029: 321 pgs: 321 active+clean; 453 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 11 09:09:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 466 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 93 op/s
Oct 11 09:09:42 compute-0 ovn_controller[152945]: 2025-10-11T09:09:42Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7c:70:56 10.100.0.11
Oct 11 09:09:42 compute-0 ovn_controller[152945]: 2025-10-11T09:09:42Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7c:70:56 10.100.0.11
Oct 11 09:09:43 compute-0 nova_compute[260935]: 2025-10-11 09:09:43.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:43 compute-0 ceph-mon[74313]: pgmap v2030: 321 pgs: 321 active+clean; 466 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 93 op/s
Oct 11 09:09:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:09:44 compute-0 ovn_controller[152945]: 2025-10-11T09:09:44Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:da:b0 10.100.0.12
Oct 11 09:09:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2031: 321 pgs: 321 active+clean; 475 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.0 MiB/s wr, 86 op/s
Oct 11 09:09:44 compute-0 nova_compute[260935]: 2025-10-11 09:09:44.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:45 compute-0 ceph-mon[74313]: pgmap v2031: 321 pgs: 321 active+clean; 475 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.0 MiB/s wr, 86 op/s
Oct 11 09:09:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2032: 321 pgs: 321 active+clean; 475 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 70 op/s
Oct 11 09:09:47 compute-0 ceph-mon[74313]: pgmap v2032: 321 pgs: 321 active+clean; 475 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 70 op/s
Oct 11 09:09:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2033: 321 pgs: 321 active+clean; 486 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 11 09:09:48 compute-0 nova_compute[260935]: 2025-10-11 09:09:48.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:48 compute-0 podman[364551]: 2025-10-11 09:09:48.832740522 +0000 UTC m=+0.121128439 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 09:09:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:09:48 compute-0 ceph-mon[74313]: pgmap v2033: 321 pgs: 321 active+clean; 486 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 11 09:09:48 compute-0 nova_compute[260935]: 2025-10-11 09:09:48.947 2 INFO nova.compute.manager [None req-6d729b98-7574-4a3e-a10a-615181061c68 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Get console output
Oct 11 09:09:48 compute-0 nova_compute[260935]: 2025-10-11 09:09:48.953 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:09:49 compute-0 nova_compute[260935]: 2025-10-11 09:09:49.398 2 INFO nova.compute.manager [None req-3831b164-722a-4dc1-8161-39e57de17105 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Pausing
Oct 11 09:09:49 compute-0 nova_compute[260935]: 2025-10-11 09:09:49.399 2 DEBUG nova.objects.instance [None req-3831b164-722a-4dc1-8161-39e57de17105 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'flavor' on Instance uuid 1e10d958-167c-4caf-8ff0-67f50e39ec3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:49 compute-0 nova_compute[260935]: 2025-10-11 09:09:49.444 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173789.4443357, 1e10d958-167c-4caf-8ff0-67f50e39ec3c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:09:49 compute-0 nova_compute[260935]: 2025-10-11 09:09:49.445 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] VM Paused (Lifecycle Event)
Oct 11 09:09:49 compute-0 nova_compute[260935]: 2025-10-11 09:09:49.447 2 DEBUG nova.compute.manager [None req-3831b164-722a-4dc1-8161-39e57de17105 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:49 compute-0 nova_compute[260935]: 2025-10-11 09:09:49.486 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:49 compute-0 nova_compute[260935]: 2025-10-11 09:09:49.490 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:09:49 compute-0 nova_compute[260935]: 2025-10-11 09:09:49.523 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 11 09:09:49 compute-0 nova_compute[260935]: 2025-10-11 09:09:49.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2034: 321 pgs: 321 active+clean; 486 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 857 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 11 09:09:51 compute-0 ceph-mon[74313]: pgmap v2034: 321 pgs: 321 active+clean; 486 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 857 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 11 09:09:51 compute-0 nova_compute[260935]: 2025-10-11 09:09:51.806 2 INFO nova.compute.manager [None req-89ed1b6e-9584-48cb-8eb0-3924a268bc62 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Get console output
Oct 11 09:09:51 compute-0 nova_compute[260935]: 2025-10-11 09:09:51.814 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:09:51 compute-0 nova_compute[260935]: 2025-10-11 09:09:51.977 2 INFO nova.compute.manager [None req-102b1e5c-bbd9-4e76-a6d7-88d3553b4b2a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Unpausing
Oct 11 09:09:51 compute-0 nova_compute[260935]: 2025-10-11 09:09:51.979 2 DEBUG nova.objects.instance [None req-102b1e5c-bbd9-4e76-a6d7-88d3553b4b2a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'flavor' on Instance uuid 1e10d958-167c-4caf-8ff0-67f50e39ec3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:52 compute-0 nova_compute[260935]: 2025-10-11 09:09:52.018 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173792.0181463, 1e10d958-167c-4caf-8ff0-67f50e39ec3c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:09:52 compute-0 nova_compute[260935]: 2025-10-11 09:09:52.019 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] VM Resumed (Lifecycle Event)
Oct 11 09:09:52 compute-0 virtqemud[260524]: argument unsupported: QEMU guest agent is not configured
Oct 11 09:09:52 compute-0 nova_compute[260935]: 2025-10-11 09:09:52.023 2 DEBUG nova.virt.libvirt.guest [None req-102b1e5c-bbd9-4e76-a6d7-88d3553b4b2a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 11 09:09:52 compute-0 nova_compute[260935]: 2025-10-11 09:09:52.024 2 DEBUG nova.compute.manager [None req-102b1e5c-bbd9-4e76-a6d7-88d3553b4b2a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:52 compute-0 nova_compute[260935]: 2025-10-11 09:09:52.056 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:09:52 compute-0 nova_compute[260935]: 2025-10-11 09:09:52.063 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:09:52 compute-0 nova_compute[260935]: 2025-10-11 09:09:52.112 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] During sync_power_state the instance has a pending task (unpausing). Skip.
Oct 11 09:09:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 488 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 857 KiB/s rd, 2.2 MiB/s wr, 111 op/s
Oct 11 09:09:53 compute-0 ceph-mon[74313]: pgmap v2035: 321 pgs: 321 active+clean; 488 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 857 KiB/s rd, 2.2 MiB/s wr, 111 op/s
Oct 11 09:09:53 compute-0 nova_compute[260935]: 2025-10-11 09:09:53.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.852735) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173793852772, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 425, "num_deletes": 250, "total_data_size": 351427, "memory_usage": 360144, "flush_reason": "Manual Compaction"}
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173793858500, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 269199, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42490, "largest_seqno": 42914, "table_properties": {"data_size": 266814, "index_size": 485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6353, "raw_average_key_size": 20, "raw_value_size": 262099, "raw_average_value_size": 834, "num_data_blocks": 22, "num_entries": 314, "num_filter_entries": 314, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173770, "oldest_key_time": 1760173770, "file_creation_time": 1760173793, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 5800 microseconds, and 1668 cpu microseconds.
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.858536) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 269199 bytes OK
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.858551) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.861058) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.861073) EVENT_LOG_v1 {"time_micros": 1760173793861068, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.861089) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 348802, prev total WAL file size 348802, number of live WAL files 2.
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.861630) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353034' seq:72057594037927935, type:22 .. '6D6772737461740031373535' seq:0, type:0; will stop at (end)
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(262KB)], [95(10MB)]
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173793861713, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 10779037, "oldest_snapshot_seqno": -1}
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6420 keys, 7568892 bytes, temperature: kUnknown
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173793916994, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 7568892, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7527814, "index_size": 23944, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 165894, "raw_average_key_size": 25, "raw_value_size": 7414319, "raw_average_value_size": 1154, "num_data_blocks": 937, "num_entries": 6420, "num_filter_entries": 6420, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173793, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.917311) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 7568892 bytes
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.918808) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.8 rd, 136.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.0 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(68.2) write-amplify(28.1) OK, records in: 6922, records dropped: 502 output_compression: NoCompression
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.918866) EVENT_LOG_v1 {"time_micros": 1760173793918852, "job": 56, "event": "compaction_finished", "compaction_time_micros": 55345, "compaction_time_cpu_micros": 37485, "output_level": 6, "num_output_files": 1, "total_output_size": 7568892, "num_input_records": 6922, "num_output_records": 6420, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173793919169, "job": 56, "event": "table_file_deletion", "file_number": 97}
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173793922636, "job": 56, "event": "table_file_deletion", "file_number": 95}
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.861523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.922753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.922761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.922763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.922765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:53 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:09:53.922767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:09:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 488 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 807 KiB/s rd, 916 KiB/s wr, 91 op/s
Oct 11 09:09:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:09:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:09:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:09:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:09:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:09:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:09:54 compute-0 nova_compute[260935]: 2025-10-11 09:09:54.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:54 compute-0 ceph-mon[74313]: pgmap v2036: 321 pgs: 321 active+clean; 488 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 807 KiB/s rd, 916 KiB/s wr, 91 op/s
Oct 11 09:09:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:09:54
Oct 11 09:09:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:09:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:09:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', '.mgr', 'volumes', 'backups', 'default.rgw.meta', 'default.rgw.control']
Oct 11 09:09:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:09:55 compute-0 nova_compute[260935]: 2025-10-11 09:09:55.072 2 INFO nova.compute.manager [None req-b1212c07-58c0-4534-942f-b1eca044470f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Get console output
Oct 11 09:09:55 compute-0 nova_compute[260935]: 2025-10-11 09:09:55.078 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:09:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:09:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:09:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:09:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:09:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:09:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:09:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:09:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:09:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:09:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:09:55 compute-0 podman[364571]: 2025-10-11 09:09:55.798329447 +0000 UTC m=+0.096254069 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.145 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.146 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.146 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.147 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.147 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.149 2 INFO nova.compute.manager [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Terminating instance
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.150 2 DEBUG nova.compute.manager [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:09:56 compute-0 kernel: tapa611854c-0a (unregistering): left promiscuous mode
Oct 11 09:09:56 compute-0 NetworkManager[44960]: <info>  [1760173796.2183] device (tapa611854c-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:09:56 compute-0 ovn_controller[152945]: 2025-10-11T09:09:56Z|00934|binding|INFO|Releasing lport a611854c-0a61-41b8-91ce-0c0f893aa54c from this chassis (sb_readonly=0)
Oct 11 09:09:56 compute-0 ovn_controller[152945]: 2025-10-11T09:09:56Z|00935|binding|INFO|Setting lport a611854c-0a61-41b8-91ce-0c0f893aa54c down in Southbound
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:56 compute-0 ovn_controller[152945]: 2025-10-11T09:09:56Z|00936|binding|INFO|Removing iface tapa611854c-0a ovn-installed in OVS
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.245 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:da:b0 10.100.0.12'], port_security=['fa:16:3e:9d:da:b0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21f163e616ee4917a580701d466f7dc9', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'df67e917-90ec-4dfb-a7e0-e0c1517579e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0adf864-e0f8-4e06-afb7-e390a634b36e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a611854c-0a61-41b8-91ce-0c0f893aa54c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.248 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a611854c-0a61-41b8-91ce-0c0f893aa54c in datapath 14e82eeb-74e2-4de3-9047-74da777fe1ec unbound from our chassis
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.252 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14e82eeb-74e2-4de3-9047-74da777fe1ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.254 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bad36584-7c0a-4230-ad57-7ba66a3ced90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.255 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec namespace which is not needed anymore
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:56 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000062.scope: Deactivated successfully.
Oct 11 09:09:56 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000062.scope: Consumed 5.647s CPU time.
Oct 11 09:09:56 compute-0 systemd-machined[215705]: Machine qemu-122-instance-00000062 terminated.
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.388 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.393 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.394 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.394 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.395 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.397 2 INFO nova.compute.manager [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Terminating instance
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.399 2 DEBUG nova.compute.manager [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.409 2 INFO nova.virt.libvirt.driver [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Instance destroyed successfully.
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.411 2 DEBUG nova.objects.instance [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lazy-loading 'resources' on Instance uuid 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2037: 321 pgs: 321 active+clean; 488 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 653 KiB/s rd, 129 KiB/s wr, 70 op/s
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.432 2 DEBUG nova.virt.libvirt.vif [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:07:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-581429551',display_name='tempest-ServersNegativeTestJSON-server-581429551',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-581429551',id=98,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:09:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='21f163e616ee4917a580701d466f7dc9',ramdisk_id='',reservation_id='r-wp4zx0a9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-414353187',owner_user_name='tempest-ServersNegativeTestJSON-414353187-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:09:39Z,user_data=None,user_id='6b8d9d5ab01d48ae81a09f922875ea3e',uuid=0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.433 2 DEBUG nova.network.os_vif_util [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converting VIF {"id": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "address": "fa:16:3e:9d:da:b0", "network": {"id": "14e82eeb-74e2-4de3-9047-74da777fe1ec", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-461409610-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21f163e616ee4917a580701d466f7dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa611854c-0a", "ovs_interfaceid": "a611854c-0a61-41b8-91ce-0c0f893aa54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.434 2 DEBUG nova.network.os_vif_util [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.435 2 DEBUG os_vif [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa611854c-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.449 2 INFO os_vif [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:da:b0,bridge_name='br-int',has_traffic_filtering=True,id=a611854c-0a61-41b8-91ce-0c0f893aa54c,network=Network(14e82eeb-74e2-4de3-9047-74da777fe1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa611854c-0a')
Oct 11 09:09:56 compute-0 kernel: tap38159e48-a8 (unregistering): left promiscuous mode
Oct 11 09:09:56 compute-0 NetworkManager[44960]: <info>  [1760173796.4658] device (tap38159e48-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:09:56 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[364536]: [NOTICE]   (364540) : haproxy version is 2.8.14-c23fe91
Oct 11 09:09:56 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[364536]: [NOTICE]   (364540) : path to executable is /usr/sbin/haproxy
Oct 11 09:09:56 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[364536]: [WARNING]  (364540) : Exiting Master process...
Oct 11 09:09:56 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[364536]: [ALERT]    (364540) : Current worker (364542) exited with code 143 (Terminated)
Oct 11 09:09:56 compute-0 neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec[364536]: [WARNING]  (364540) : All workers exited. Exiting... (0)
Oct 11 09:09:56 compute-0 systemd[1]: libpod-064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b.scope: Deactivated successfully.
Oct 11 09:09:56 compute-0 conmon[364536]: conmon 064e59958f6ed1df696e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b.scope/container/memory.events
Oct 11 09:09:56 compute-0 podman[364620]: 2025-10-11 09:09:56.482800666 +0000 UTC m=+0.078884883 container died 064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:09:56 compute-0 ovn_controller[152945]: 2025-10-11T09:09:56Z|00937|binding|INFO|Releasing lport 38159e48-a8a0-4daa-95d3-19be7346d2cb from this chassis (sb_readonly=0)
Oct 11 09:09:56 compute-0 ovn_controller[152945]: 2025-10-11T09:09:56Z|00938|binding|INFO|Setting lport 38159e48-a8a0-4daa-95d3-19be7346d2cb down in Southbound
Oct 11 09:09:56 compute-0 ovn_controller[152945]: 2025-10-11T09:09:56Z|00939|binding|INFO|Removing iface tap38159e48-a8 ovn-installed in OVS
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.487 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:70:56 10.100.0.11'], port_security=['fa:16:3e:7c:70:56 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1e10d958-167c-4caf-8ff0-67f50e39ec3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5fe33852-f012-4e6c-8c5b-0eacc3c5c3e8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3216995-d47a-4bef-959f-e3536ac62432, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=38159e48-a8a0-4daa-95d3-19be7346d2cb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b-userdata-shm.mount: Deactivated successfully.
Oct 11 09:09:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-45fd75aade0337d682b14081c1eccf706df4989bd7a9205a59aaf2a4e8230fe7-merged.mount: Deactivated successfully.
Oct 11 09:09:56 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000067.scope: Deactivated successfully.
Oct 11 09:09:56 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000067.scope: Consumed 13.333s CPU time.
Oct 11 09:09:56 compute-0 systemd-machined[215705]: Machine qemu-121-instance-00000067 terminated.
Oct 11 09:09:56 compute-0 podman[364620]: 2025-10-11 09:09:56.533217765 +0000 UTC m=+0.129301982 container cleanup 064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:09:56 compute-0 systemd[1]: libpod-conmon-064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b.scope: Deactivated successfully.
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.610 2 DEBUG nova.compute.manager [req-754e631f-5595-48d6-9752-ea5452073819 req-d6c75318-dd8d-4bc4-9b76-d9f79a2cef4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-changed-38159e48-a8a0-4daa-95d3-19be7346d2cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.612 2 DEBUG nova.compute.manager [req-754e631f-5595-48d6-9752-ea5452073819 req-d6c75318-dd8d-4bc4-9b76-d9f79a2cef4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Refreshing instance network info cache due to event network-changed-38159e48-a8a0-4daa-95d3-19be7346d2cb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.612 2 DEBUG oslo_concurrency.lockutils [req-754e631f-5595-48d6-9752-ea5452073819 req-d6c75318-dd8d-4bc4-9b76-d9f79a2cef4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.613 2 DEBUG oslo_concurrency.lockutils [req-754e631f-5595-48d6-9752-ea5452073819 req-d6c75318-dd8d-4bc4-9b76-d9f79a2cef4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.614 2 DEBUG nova.network.neutron [req-754e631f-5595-48d6-9752-ea5452073819 req-d6c75318-dd8d-4bc4-9b76-d9f79a2cef4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Refreshing network info cache for port 38159e48-a8a0-4daa-95d3-19be7346d2cb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:09:56 compute-0 podman[364677]: 2025-10-11 09:09:56.624091379 +0000 UTC m=+0.062499425 container remove 064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.636 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e56e18cf-80ed-4b3d-b4f8-d1532b6a1a65]: (4, ('Sat Oct 11 09:09:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec (064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b)\n064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b\nSat Oct 11 09:09:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec (064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b)\n064e59958f6ed1df696e290d0df260334f8f6cf4ec230c7a95673bc57167428b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.639 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9404f6-ea08-42de-bea1-4bf05b5cbaaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.641 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e82eeb-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.654 2 INFO nova.virt.libvirt.driver [-] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Instance destroyed successfully.
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.656 2 DEBUG nova.objects.instance [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'resources' on Instance uuid 1e10d958-167c-4caf-8ff0-67f50e39ec3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:56 compute-0 kernel: tap14e82eeb-70: left promiscuous mode
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.676 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f6386a-e674-4a1b-a6a6-456b4e1debfb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.680 2 DEBUG nova.virt.libvirt.vif [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:09:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1369305797',display_name='tempest-TestNetworkAdvancedServerOps-server-1369305797',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1369305797',id=103,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB3xSdxOkU3R8spG6Ar0C2b6V4kJp9/fLj3qIKqVjHr+W8RWdFE2Y04thwsEaHG/Pjnq/p0OokkgP0kv89kAu1w3IytsdPugHW10i6+zWfhjEg7DZ4e+0dQC7E2fiV/TCA==',key_name='tempest-TestNetworkAdvancedServerOps-1897339183',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:09:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-0mc689ih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:09:52Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=1e10d958-167c-4caf-8ff0-67f50e39ec3c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.682 2 DEBUG nova.network.os_vif_util [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.683 2 DEBUG nova.network.os_vif_util [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7c:70:56,bridge_name='br-int',has_traffic_filtering=True,id=38159e48-a8a0-4daa-95d3-19be7346d2cb,network=Network(3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38159e48-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.684 2 DEBUG os_vif [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:70:56,bridge_name='br-int',has_traffic_filtering=True,id=38159e48-a8a0-4daa-95d3-19be7346d2cb,network=Network(3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38159e48-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.686 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38159e48-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.693 2 INFO os_vif [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:70:56,bridge_name='br-int',has_traffic_filtering=True,id=38159e48-a8a0-4daa-95d3-19be7346d2cb,network=Network(3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap38159e48-a8')
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.699 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[006ee0d5-2c60-4b40-bb82-fc8ffdf3dff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.700 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b6545ce8-7af6-4f3c-9781-911f919be545]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.718 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9ef5ab-833a-4e00-b2ae-9b93e7b39dd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562335, 'reachable_time': 27980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364717, 'error': None, 'target': 'ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d14e82eeb\x2d74e2\x2d4de3\x2d9047\x2d74da777fe1ec.mount: Deactivated successfully.
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.723 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14e82eeb-74e2-4de3-9047-74da777fe1ec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.723 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b5885cb2-761f-4706-9a43-caa40316cbcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.724 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 38159e48-a8a0-4daa-95d3-19be7346d2cb in datapath 3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e unbound from our chassis
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.725 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.726 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[11c255ef-45ee-4aa3-8b26-7d489c1b121d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:56.726 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e namespace which is not needed anymore
Oct 11 09:09:56 compute-0 neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e[364303]: [NOTICE]   (364307) : haproxy version is 2.8.14-c23fe91
Oct 11 09:09:56 compute-0 neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e[364303]: [NOTICE]   (364307) : path to executable is /usr/sbin/haproxy
Oct 11 09:09:56 compute-0 neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e[364303]: [WARNING]  (364307) : Exiting Master process...
Oct 11 09:09:56 compute-0 neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e[364303]: [WARNING]  (364307) : Exiting Master process...
Oct 11 09:09:56 compute-0 neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e[364303]: [ALERT]    (364307) : Current worker (364309) exited with code 143 (Terminated)
Oct 11 09:09:56 compute-0 neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e[364303]: [WARNING]  (364307) : All workers exited. Exiting... (0)
Oct 11 09:09:56 compute-0 systemd[1]: libpod-6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6.scope: Deactivated successfully.
Oct 11 09:09:56 compute-0 podman[364743]: 2025-10-11 09:09:56.933288536 +0000 UTC m=+0.067405265 container died 6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.939 2 INFO nova.virt.libvirt.driver [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Deleting instance files /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_del
Oct 11 09:09:56 compute-0 nova_compute[260935]: 2025-10-11 09:09:56.940 2 INFO nova.virt.libvirt.driver [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Deletion of /var/lib/nova/instances/0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c_del complete
Oct 11 09:09:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6-userdata-shm.mount: Deactivated successfully.
Oct 11 09:09:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6608af26482039c5816a02e8765c53d58b01bc30687269e83d429b9f655fb4c-merged.mount: Deactivated successfully.
Oct 11 09:09:56 compute-0 podman[364743]: 2025-10-11 09:09:56.976201651 +0000 UTC m=+0.110318370 container cleanup 6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 09:09:56 compute-0 systemd[1]: libpod-conmon-6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6.scope: Deactivated successfully.
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.021 2 INFO nova.compute.manager [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Took 0.87 seconds to destroy the instance on the hypervisor.
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.022 2 DEBUG oslo.service.loopingcall [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.023 2 DEBUG nova.compute.manager [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.023 2 DEBUG nova.network.neutron [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:09:57 compute-0 podman[364775]: 2025-10-11 09:09:57.061139356 +0000 UTC m=+0.058867622 container remove 6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:09:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.068 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[915b4db4-8ecd-4e49-bcc3-f3ad6bf5cad8]: (4, ('Sat Oct 11 09:09:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e (6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6)\n6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6\nSat Oct 11 09:09:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e (6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6)\n6815214ed4d7b783ad1ba6031bca85e6eb433cdd2124489969f3b2a6f5b588c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.070 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5b632b-675c-48d3-9a58-c02326ee8aff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.070 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e150cf5-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:09:57 compute-0 kernel: tap3e150cf5-f0: left promiscuous mode
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.092 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8c9228-1622-419b-809a-2a346f13c5ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.109 2 INFO nova.virt.libvirt.driver [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Deleting instance files /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c_del
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.110 2 INFO nova.virt.libvirt.driver [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Deletion of /var/lib/nova/instances/1e10d958-167c-4caf-8ff0-67f50e39ec3c_del complete
Oct 11 09:09:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.110 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[696e7050-980c-42da-9b8d-b7155c568cde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.111 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c342ca9a-25bd-4c2b-8ca2-6ffc3e465596]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:09:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.129 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[df5422b7-0c50-4674-9e73-89d7552873ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561481, 'reachable_time': 38751, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364789, 'error': None, 'target': 'ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.131 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:09:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:09:57.131 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b0548319-947f-4b55-9ba8-e4a6bb804877]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.176 2 INFO nova.compute.manager [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.176 2 DEBUG oslo.service.loopingcall [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.177 2 DEBUG nova.compute.manager [-] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.177 2 DEBUG nova.network.neutron [-] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:09:57 compute-0 ceph-mon[74313]: pgmap v2037: 321 pgs: 321 active+clean; 488 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 653 KiB/s rd, 129 KiB/s wr, 70 op/s
Oct 11 09:09:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d3e150cf5\x2df6e5\x2d4cd4\x2dbd30\x2d1cc73cc8f72e.mount: Deactivated successfully.
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.575 2 DEBUG nova.compute.manager [req-9713534e-02a7-4dd3-8657-4bdf3cd9916d req-41481262-f05b-4860-ace6-2b7d77cff014 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-vif-unplugged-38159e48-a8a0-4daa-95d3-19be7346d2cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.575 2 DEBUG oslo_concurrency.lockutils [req-9713534e-02a7-4dd3-8657-4bdf3cd9916d req-41481262-f05b-4860-ace6-2b7d77cff014 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.576 2 DEBUG oslo_concurrency.lockutils [req-9713534e-02a7-4dd3-8657-4bdf3cd9916d req-41481262-f05b-4860-ace6-2b7d77cff014 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.576 2 DEBUG oslo_concurrency.lockutils [req-9713534e-02a7-4dd3-8657-4bdf3cd9916d req-41481262-f05b-4860-ace6-2b7d77cff014 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.576 2 DEBUG nova.compute.manager [req-9713534e-02a7-4dd3-8657-4bdf3cd9916d req-41481262-f05b-4860-ace6-2b7d77cff014 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] No waiting events found dispatching network-vif-unplugged-38159e48-a8a0-4daa-95d3-19be7346d2cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:09:57 compute-0 nova_compute[260935]: 2025-10-11 09:09:57.576 2 DEBUG nova.compute.manager [req-9713534e-02a7-4dd3-8657-4bdf3cd9916d req-41481262-f05b-4860-ace6-2b7d77cff014 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-vif-unplugged-38159e48-a8a0-4daa-95d3-19be7346d2cb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.049 2 DEBUG nova.network.neutron [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.074 2 INFO nova.compute.manager [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Took 1.05 seconds to deallocate network for instance.
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.124 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.124 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.196 2 DEBUG nova.network.neutron [-] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.225 2 INFO nova.compute.manager [-] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Took 1.05 seconds to deallocate network for instance.
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.259 2 DEBUG oslo_concurrency.processutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.308 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2038: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 691 KiB/s rd, 131 KiB/s wr, 126 op/s
Oct 11 09:09:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:09:58 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2714491320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.719 2 DEBUG oslo_concurrency.processutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.726 2 DEBUG nova.compute.provider_tree [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.743 2 DEBUG nova.scheduler.client.report [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.763 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.765 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.780 2 DEBUG nova.network.neutron [req-754e631f-5595-48d6-9752-ea5452073819 req-d6c75318-dd8d-4bc4-9b76-d9f79a2cef4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Updated VIF entry in instance network info cache for port 38159e48-a8a0-4daa-95d3-19be7346d2cb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.780 2 DEBUG nova.network.neutron [req-754e631f-5595-48d6-9752-ea5452073819 req-d6c75318-dd8d-4bc4-9b76-d9f79a2cef4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Updating instance_info_cache with network_info: [{"id": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "address": "fa:16:3e:7c:70:56", "network": {"id": "3e150cf5-f6e5-4cd4-bd30-1cc73cc8f72e", "bridge": "br-int", "label": "tempest-network-smoke--7754663", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap38159e48-a8", "ovs_interfaceid": "38159e48-a8a0-4daa-95d3-19be7346d2cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.807 2 DEBUG oslo_concurrency.lockutils [req-754e631f-5595-48d6-9752-ea5452073819 req-d6c75318-dd8d-4bc4-9b76-d9f79a2cef4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1e10d958-167c-4caf-8ff0-67f50e39ec3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.810 2 INFO nova.scheduler.client.report [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Deleted allocations for instance 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.820 2 DEBUG nova.compute.manager [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.820 2 DEBUG oslo_concurrency.lockutils [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.821 2 DEBUG oslo_concurrency.lockutils [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.821 2 DEBUG oslo_concurrency.lockutils [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.821 2 DEBUG nova.compute.manager [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.821 2 WARNING nova.compute.manager [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-unplugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state deleted and task_state None.
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.822 2 DEBUG nova.compute.manager [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.822 2 DEBUG oslo_concurrency.lockutils [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.822 2 DEBUG oslo_concurrency.lockutils [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.822 2 DEBUG oslo_concurrency.lockutils [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.823 2 DEBUG nova.compute.manager [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] No waiting events found dispatching network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.823 2 WARNING nova.compute.manager [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received unexpected event network-vif-plugged-a611854c-0a61-41b8-91ce-0c0f893aa54c for instance with vm_state deleted and task_state None.
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.823 2 DEBUG nova.compute.manager [req-eafdf0ad-21dc-4c43-9764-bb10e0162bcb req-409c0d1b-3674-42ec-bd18-7e68d035afdd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Received event network-vif-deleted-a611854c-0a61-41b8-91ce-0c0f893aa54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.881 2 DEBUG oslo_concurrency.lockutils [None req-b8fd29cc-e044-452b-a4e6-5433f96ee35a 6b8d9d5ab01d48ae81a09f922875ea3e 21f163e616ee4917a580701d466f7dc9 - - default default] Lock "0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:58 compute-0 nova_compute[260935]: 2025-10-11 09:09:58.885 2 DEBUG oslo_concurrency.processutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:09:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:09:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3613070824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:09:59 compute-0 nova_compute[260935]: 2025-10-11 09:09:59.360 2 DEBUG oslo_concurrency.processutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:09:59 compute-0 nova_compute[260935]: 2025-10-11 09:09:59.369 2 DEBUG nova.compute.provider_tree [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:09:59 compute-0 nova_compute[260935]: 2025-10-11 09:09:59.404 2 DEBUG nova.scheduler.client.report [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:09:59 compute-0 nova_compute[260935]: 2025-10-11 09:09:59.457 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:59 compute-0 nova_compute[260935]: 2025-10-11 09:09:59.486 2 INFO nova.scheduler.client.report [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Deleted allocations for instance 1e10d958-167c-4caf-8ff0-67f50e39ec3c
Oct 11 09:09:59 compute-0 ceph-mon[74313]: pgmap v2038: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 691 KiB/s rd, 131 KiB/s wr, 126 op/s
Oct 11 09:09:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2714491320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:09:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3613070824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:09:59 compute-0 nova_compute[260935]: 2025-10-11 09:09:59.566 2 DEBUG oslo_concurrency.lockutils [None req-45f1a6d9-7500-4b6b-bab9-43b85db08961 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:59 compute-0 nova_compute[260935]: 2025-10-11 09:09:59.782 2 DEBUG nova.compute.manager [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:59 compute-0 nova_compute[260935]: 2025-10-11 09:09:59.783 2 DEBUG oslo_concurrency.lockutils [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:09:59 compute-0 nova_compute[260935]: 2025-10-11 09:09:59.784 2 DEBUG oslo_concurrency.lockutils [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:09:59 compute-0 nova_compute[260935]: 2025-10-11 09:09:59.785 2 DEBUG oslo_concurrency.lockutils [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1e10d958-167c-4caf-8ff0-67f50e39ec3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:09:59 compute-0 nova_compute[260935]: 2025-10-11 09:09:59.785 2 DEBUG nova.compute.manager [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] No waiting events found dispatching network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:09:59 compute-0 nova_compute[260935]: 2025-10-11 09:09:59.786 2 WARNING nova.compute.manager [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received unexpected event network-vif-plugged-38159e48-a8a0-4daa-95d3-19be7346d2cb for instance with vm_state deleted and task_state None.
Oct 11 09:09:59 compute-0 nova_compute[260935]: 2025-10-11 09:09:59.786 2 DEBUG nova.compute.manager [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Received event network-vif-deleted-38159e48-a8a0-4daa-95d3-19be7346d2cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:09:59 compute-0 nova_compute[260935]: 2025-10-11 09:09:59.787 2 INFO nova.compute.manager [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Neutron deleted interface 38159e48-a8a0-4daa-95d3-19be7346d2cb; detaching it from the instance and deleting it from the info cache
Oct 11 09:09:59 compute-0 nova_compute[260935]: 2025-10-11 09:09:59.787 2 DEBUG nova.network.neutron [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct 11 09:09:59 compute-0 nova_compute[260935]: 2025-10-11 09:09:59.790 2 DEBUG nova.compute.manager [req-08a80288-5e58-45fe-86b8-d24bbe5422df req-cf3c546e-1d73-4aa3-806e-504b63a3d9da e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Detach interface failed, port_id=38159e48-a8a0-4daa-95d3-19be7346d2cb, reason: Instance 1e10d958-167c-4caf-8ff0-67f50e39ec3c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 09:09:59 compute-0 nova_compute[260935]: 2025-10-11 09:09:59.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 25 KiB/s wr, 57 op/s
Oct 11 09:10:00 compute-0 podman[364834]: 2025-10-11 09:10:00.824255651 +0000 UTC m=+0.108663313 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Oct 11 09:10:00 compute-0 podman[364835]: 2025-10-11 09:10:00.872181789 +0000 UTC m=+0.153803991 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 11 09:10:01 compute-0 ceph-mon[74313]: pgmap v2039: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 25 KiB/s wr, 57 op/s
Oct 11 09:10:01 compute-0 nova_compute[260935]: 2025-10-11 09:10:01.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2040: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 25 KiB/s wr, 57 op/s
Oct 11 09:10:03 compute-0 ceph-mon[74313]: pgmap v2040: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 25 KiB/s wr, 57 op/s
Oct 11 09:10:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:10:03 compute-0 ovn_controller[152945]: 2025-10-11T09:10:03Z|00940|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:10:03 compute-0 ovn_controller[152945]: 2025-10-11T09:10:03Z|00941|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:10:03 compute-0 nova_compute[260935]: 2025-10-11 09:10:03.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:04 compute-0 ovn_controller[152945]: 2025-10-11T09:10:04Z|00942|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:10:04 compute-0 ovn_controller[152945]: 2025-10-11T09:10:04Z|00943|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:10:04 compute-0 nova_compute[260935]: 2025-10-11 09:10:04.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 13 KiB/s wr, 56 op/s
Oct 11 09:10:04 compute-0 nova_compute[260935]: 2025-10-11 09:10:04.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002627377275021163 of space, bias 1.0, pg target 0.788213182506349 quantized to 32 (current 32)
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:10:04 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:10:05 compute-0 ceph-mon[74313]: pgmap v2041: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 13 KiB/s wr, 56 op/s
Oct 11 09:10:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Oct 11 09:10:06 compute-0 nova_compute[260935]: 2025-10-11 09:10:06.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:07 compute-0 ceph-mon[74313]: pgmap v2042: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Oct 11 09:10:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:10:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 9375 writes, 42K keys, 9375 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 9375 writes, 9375 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1474 writes, 7155 keys, 1474 commit groups, 1.0 writes per commit group, ingest: 9.21 MB, 0.02 MB/s
                                           Interval WAL: 1474 writes, 1474 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     64.2      0.79              0.20        28    0.028       0      0       0.0       0.0
                                             L6      1/0    7.22 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    160.1    132.7      1.59              0.88        27    0.059    149K    15K       0.0       0.0
                                            Sum      1/0    7.22 MB   0.0      0.2     0.0      0.2       0.3      0.1       0.0   5.1    106.9    109.9      2.38              1.08        55    0.043    149K    15K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.0     88.2     85.7      0.82              0.28        14    0.059     48K   3558       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    160.1    132.7      1.59              0.88        27    0.059    149K    15K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     64.5      0.79              0.20        27    0.029       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.050, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.26 GB write, 0.07 MB/s write, 0.25 GB read, 0.07 MB/s read, 2.4 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 304.00 MB usage: 28.69 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000236 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1901,27.57 MB,9.06923%) FilterBlock(56,419.61 KB,0.134794%) IndexBlock(56,722.28 KB,0.232024%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 09:10:07 compute-0 nova_compute[260935]: 2025-10-11 09:10:07.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:10:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Oct 11 09:10:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:10:09 compute-0 ceph-mon[74313]: pgmap v2043: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Oct 11 09:10:09 compute-0 nova_compute[260935]: 2025-10-11 09:10:09.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:11 compute-0 nova_compute[260935]: 2025-10-11 09:10:11.403 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173796.3939128, 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:10:11 compute-0 nova_compute[260935]: 2025-10-11 09:10:11.404 2 INFO nova.compute.manager [-] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] VM Stopped (Lifecycle Event)
Oct 11 09:10:11 compute-0 nova_compute[260935]: 2025-10-11 09:10:11.447 2 DEBUG nova.compute.manager [None req-6af58e5d-f4d9-4382-8911-1633837d27dc - - - - - -] [instance: 0522f7b9-e2e7-4eb7-9e56-377ac0aaea7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:10:11 compute-0 ceph-mon[74313]: pgmap v2044: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:11 compute-0 nova_compute[260935]: 2025-10-11 09:10:11.643 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173796.6409194, 1e10d958-167c-4caf-8ff0-67f50e39ec3c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:10:11 compute-0 nova_compute[260935]: 2025-10-11 09:10:11.644 2 INFO nova.compute.manager [-] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] VM Stopped (Lifecycle Event)
Oct 11 09:10:11 compute-0 nova_compute[260935]: 2025-10-11 09:10:11.692 2 DEBUG nova.compute.manager [None req-88d40329-589f-4e52-b075-e4781ff797b8 - - - - - -] [instance: 1e10d958-167c-4caf-8ff0-67f50e39ec3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:10:11 compute-0 nova_compute[260935]: 2025-10-11 09:10:11.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:13 compute-0 ceph-mon[74313]: pgmap v2045: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:10:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:14 compute-0 nova_compute[260935]: 2025-10-11 09:10:14.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:15.206 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:10:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:15.206 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:10:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:15.207 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:10:15 compute-0 ceph-mon[74313]: pgmap v2046: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:16 compute-0 nova_compute[260935]: 2025-10-11 09:10:16.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:17 compute-0 sudo[364880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:10:17 compute-0 sudo[364880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:17 compute-0 sudo[364880]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:17 compute-0 sudo[364905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:10:17 compute-0 sudo[364905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:17 compute-0 sudo[364905]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:17 compute-0 ceph-mon[74313]: pgmap v2047: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:17 compute-0 sudo[364930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:10:17 compute-0 sudo[364930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:17 compute-0 sudo[364930]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:17 compute-0 sudo[364955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:10:17 compute-0 sudo[364955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:18 compute-0 sudo[364955]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:10:18 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:10:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:10:18 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:10:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:10:18 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:10:18 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 14d56d60-e999-4767-8e9b-6015fc59f0cc does not exist
Oct 11 09:10:18 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 4c6f9512-0d00-4228-836a-4ca6cbe640a1 does not exist
Oct 11 09:10:18 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 66e99c38-ef81-4040-b02e-14d65fb218cb does not exist
Oct 11 09:10:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:10:18 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:10:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:10:18 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:10:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:10:18 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:10:18 compute-0 sudo[365011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:10:18 compute-0 sudo[365011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:18 compute-0 sudo[365011]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:18 compute-0 sudo[365036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:10:18 compute-0 sudo[365036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:18 compute-0 sudo[365036]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:10:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:10:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:10:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:10:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:10:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:10:18 compute-0 sudo[365061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:10:18 compute-0 sudo[365061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:18 compute-0 sudo[365061]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:18 compute-0 nova_compute[260935]: 2025-10-11 09:10:18.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:10:18 compute-0 sudo[365086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:10:18 compute-0 sudo[365086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.862278) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173818862339, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 449, "num_deletes": 255, "total_data_size": 356565, "memory_usage": 365272, "flush_reason": "Manual Compaction"}
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173818867780, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 353519, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42915, "largest_seqno": 43363, "table_properties": {"data_size": 350895, "index_size": 660, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6143, "raw_average_key_size": 18, "raw_value_size": 345688, "raw_average_value_size": 1025, "num_data_blocks": 30, "num_entries": 337, "num_filter_entries": 337, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173794, "oldest_key_time": 1760173794, "file_creation_time": 1760173818, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 5580 microseconds, and 2639 cpu microseconds.
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.867858) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 353519 bytes OK
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.867884) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.869841) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.869866) EVENT_LOG_v1 {"time_micros": 1760173818869858, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.869890) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 353815, prev total WAL file size 353815, number of live WAL files 2.
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.870452) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353038' seq:72057594037927935, type:22 .. '6C6F676D0031373539' seq:0, type:0; will stop at (end)
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(345KB)], [98(7391KB)]
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173818870512, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 7922411, "oldest_snapshot_seqno": -1}
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6238 keys, 7798204 bytes, temperature: kUnknown
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173818910832, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 7798204, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7757522, "index_size": 23996, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 163059, "raw_average_key_size": 26, "raw_value_size": 7646403, "raw_average_value_size": 1225, "num_data_blocks": 936, "num_entries": 6238, "num_filter_entries": 6238, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173818, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.911015) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 7798204 bytes
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.912243) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.2 rd, 193.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 7.2 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(44.5) write-amplify(22.1) OK, records in: 6757, records dropped: 519 output_compression: NoCompression
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.912258) EVENT_LOG_v1 {"time_micros": 1760173818912251, "job": 58, "event": "compaction_finished", "compaction_time_micros": 40383, "compaction_time_cpu_micros": 16782, "output_level": 6, "num_output_files": 1, "total_output_size": 7798204, "num_input_records": 6757, "num_output_records": 6238, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173818912400, "job": 58, "event": "table_file_deletion", "file_number": 100}
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173818913563, "job": 58, "event": "table_file_deletion", "file_number": 98}
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.870338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.913683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.913692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.913695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.913699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:10:18 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:10:18.913703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:10:19 compute-0 podman[365155]: 2025-10-11 09:10:19.202174311 +0000 UTC m=+0.071017208 container create ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:10:19 compute-0 systemd[1]: Started libpod-conmon-ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad.scope.
Oct 11 09:10:19 compute-0 podman[365155]: 2025-10-11 09:10:19.171034432 +0000 UTC m=+0.039877359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:10:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:10:19 compute-0 podman[365155]: 2025-10-11 09:10:19.314569489 +0000 UTC m=+0.183412416 container init ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 09:10:19 compute-0 podman[365155]: 2025-10-11 09:10:19.324631447 +0000 UTC m=+0.193474344 container start ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 09:10:19 compute-0 podman[365155]: 2025-10-11 09:10:19.328906049 +0000 UTC m=+0.197748996 container attach ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 09:10:19 compute-0 gallant_davinci[365172]: 167 167
Oct 11 09:10:19 compute-0 systemd[1]: libpod-ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad.scope: Deactivated successfully.
Oct 11 09:10:19 compute-0 podman[365155]: 2025-10-11 09:10:19.331688418 +0000 UTC m=+0.200531305 container died ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 09:10:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbc8914c3f8483a9766a63e34e4dca6aee7e860ee2d8883254ffddedd143dd42-merged.mount: Deactivated successfully.
Oct 11 09:10:19 compute-0 podman[365155]: 2025-10-11 09:10:19.383219969 +0000 UTC m=+0.252062856 container remove ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:10:19 compute-0 podman[365169]: 2025-10-11 09:10:19.395393657 +0000 UTC m=+0.139900355 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 09:10:19 compute-0 systemd[1]: libpod-conmon-ec9f8cbbdac5db722090db2038612cd05fdac0dcc77b77d99fa11f491fd248ad.scope: Deactivated successfully.
Oct 11 09:10:19 compute-0 ceph-mon[74313]: pgmap v2048: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:19 compute-0 podman[365213]: 2025-10-11 09:10:19.682795361 +0000 UTC m=+0.074239190 container create bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 09:10:19 compute-0 nova_compute[260935]: 2025-10-11 09:10:19.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:10:19 compute-0 systemd[1]: Started libpod-conmon-bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2.scope.
Oct 11 09:10:19 compute-0 podman[365213]: 2025-10-11 09:10:19.653578867 +0000 UTC m=+0.045022706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:10:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad521fb5f7e01149c5bd79acaf181922a5ad027e69066e187c21675835a9b09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad521fb5f7e01149c5bd79acaf181922a5ad027e69066e187c21675835a9b09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad521fb5f7e01149c5bd79acaf181922a5ad027e69066e187c21675835a9b09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad521fb5f7e01149c5bd79acaf181922a5ad027e69066e187c21675835a9b09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad521fb5f7e01149c5bd79acaf181922a5ad027e69066e187c21675835a9b09/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:10:19 compute-0 podman[365213]: 2025-10-11 09:10:19.821668786 +0000 UTC m=+0.213112665 container init bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 09:10:19 compute-0 podman[365213]: 2025-10-11 09:10:19.838203428 +0000 UTC m=+0.229647257 container start bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:10:19 compute-0 podman[365213]: 2025-10-11 09:10:19.842335186 +0000 UTC m=+0.233779075 container attach bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:10:19 compute-0 nova_compute[260935]: 2025-10-11 09:10:19.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2049: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:21 compute-0 agitated_moser[365230]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:10:21 compute-0 agitated_moser[365230]: --> relative data size: 1.0
Oct 11 09:10:21 compute-0 agitated_moser[365230]: --> All data devices are unavailable
Oct 11 09:10:21 compute-0 systemd[1]: libpod-bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2.scope: Deactivated successfully.
Oct 11 09:10:21 compute-0 podman[365213]: 2025-10-11 09:10:21.117023444 +0000 UTC m=+1.508467263 container died bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 09:10:21 compute-0 systemd[1]: libpod-bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2.scope: Consumed 1.208s CPU time.
Oct 11 09:10:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ad521fb5f7e01149c5bd79acaf181922a5ad027e69066e187c21675835a9b09-merged.mount: Deactivated successfully.
Oct 11 09:10:21 compute-0 podman[365213]: 2025-10-11 09:10:21.181535096 +0000 UTC m=+1.572978935 container remove bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:10:21 compute-0 systemd[1]: libpod-conmon-bc29eac582fd0db7dd5cfa08278c645bd7b4ad7cfd36f2d188135a02762041c2.scope: Deactivated successfully.
Oct 11 09:10:21 compute-0 sudo[365086]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:21 compute-0 sudo[365270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:10:21 compute-0 sudo[365270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:21 compute-0 sudo[365270]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:21 compute-0 sudo[365295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:10:21 compute-0 sudo[365295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:21 compute-0 sudo[365295]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:21 compute-0 sudo[365320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:10:21 compute-0 sudo[365320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:21 compute-0 sudo[365320]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:21 compute-0 sudo[365345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:10:21 compute-0 sudo[365345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:21 compute-0 ceph-mon[74313]: pgmap v2049: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:21 compute-0 nova_compute[260935]: 2025-10-11 09:10:21.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:10:21 compute-0 nova_compute[260935]: 2025-10-11 09:10:21.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:10:21 compute-0 nova_compute[260935]: 2025-10-11 09:10:21.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:21 compute-0 podman[365411]: 2025-10-11 09:10:21.967305586 +0000 UTC m=+0.068366302 container create dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 09:10:22 compute-0 systemd[1]: Started libpod-conmon-dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02.scope.
Oct 11 09:10:22 compute-0 podman[365411]: 2025-10-11 09:10:21.93801667 +0000 UTC m=+0.039077426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:10:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:10:22 compute-0 podman[365411]: 2025-10-11 09:10:22.08162619 +0000 UTC m=+0.182686916 container init dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:10:22 compute-0 podman[365411]: 2025-10-11 09:10:22.094129197 +0000 UTC m=+0.195189883 container start dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 09:10:22 compute-0 podman[365411]: 2025-10-11 09:10:22.098413959 +0000 UTC m=+0.199474735 container attach dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:10:22 compute-0 zealous_morse[365427]: 167 167
Oct 11 09:10:22 compute-0 systemd[1]: libpod-dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02.scope: Deactivated successfully.
Oct 11 09:10:22 compute-0 podman[365411]: 2025-10-11 09:10:22.104292097 +0000 UTC m=+0.205352813 container died dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:10:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c51acee5a03c69c0199a888fac0644f757d7d8aaddf4f3ef5a7c30e48de00ebd-merged.mount: Deactivated successfully.
Oct 11 09:10:22 compute-0 podman[365411]: 2025-10-11 09:10:22.163637791 +0000 UTC m=+0.264698497 container remove dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 09:10:22 compute-0 systemd[1]: libpod-conmon-dc04bef7807f627849125c51e98e538f6eecad5832fc7f7b16ac096ce01d0a02.scope: Deactivated successfully.
Oct 11 09:10:22 compute-0 podman[365450]: 2025-10-11 09:10:22.425331692 +0000 UTC m=+0.047296532 container create 8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 09:10:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:22 compute-0 systemd[1]: Started libpod-conmon-8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d.scope.
Oct 11 09:10:22 compute-0 podman[365450]: 2025-10-11 09:10:22.404131576 +0000 UTC m=+0.026096396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:10:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79313149ddb2dcbadc89daa23b4b8d9e1f23296ca8946376c1620440d55b6ba9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79313149ddb2dcbadc89daa23b4b8d9e1f23296ca8946376c1620440d55b6ba9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79313149ddb2dcbadc89daa23b4b8d9e1f23296ca8946376c1620440d55b6ba9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79313149ddb2dcbadc89daa23b4b8d9e1f23296ca8946376c1620440d55b6ba9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:10:22 compute-0 podman[365450]: 2025-10-11 09:10:22.54402275 +0000 UTC m=+0.165987640 container init 8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 09:10:22 compute-0 podman[365450]: 2025-10-11 09:10:22.557160985 +0000 UTC m=+0.179125805 container start 8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:10:22 compute-0 podman[365450]: 2025-10-11 09:10:22.561122458 +0000 UTC m=+0.183087348 container attach 8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 09:10:22 compute-0 nova_compute[260935]: 2025-10-11 09:10:22.889 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:10:22 compute-0 nova_compute[260935]: 2025-10-11 09:10:22.896 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:10:22 compute-0 nova_compute[260935]: 2025-10-11 09:10:22.926 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:10:23 compute-0 nova_compute[260935]: 2025-10-11 09:10:23.035 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:10:23 compute-0 nova_compute[260935]: 2025-10-11 09:10:23.036 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:10:23 compute-0 nova_compute[260935]: 2025-10-11 09:10:23.048 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:10:23 compute-0 nova_compute[260935]: 2025-10-11 09:10:23.049 2 INFO nova.compute.claims [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:10:23 compute-0 nova_compute[260935]: 2025-10-11 09:10:23.271 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:10:23 compute-0 infallible_banzai[365467]: {
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:     "0": [
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:         {
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "devices": [
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "/dev/loop3"
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             ],
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "lv_name": "ceph_lv0",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "lv_size": "21470642176",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "name": "ceph_lv0",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "tags": {
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.cluster_name": "ceph",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.crush_device_class": "",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.encrypted": "0",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.osd_id": "0",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.type": "block",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.vdo": "0"
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             },
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "type": "block",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "vg_name": "ceph_vg0"
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:         }
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:     ],
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:     "1": [
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:         {
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "devices": [
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "/dev/loop4"
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             ],
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "lv_name": "ceph_lv1",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "lv_size": "21470642176",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "name": "ceph_lv1",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "tags": {
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.cluster_name": "ceph",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.crush_device_class": "",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.encrypted": "0",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.osd_id": "1",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.type": "block",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.vdo": "0"
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             },
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "type": "block",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "vg_name": "ceph_vg1"
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:         }
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:     ],
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:     "2": [
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:         {
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "devices": [
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "/dev/loop5"
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             ],
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "lv_name": "ceph_lv2",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "lv_size": "21470642176",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "name": "ceph_lv2",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "tags": {
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.cluster_name": "ceph",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.crush_device_class": "",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.encrypted": "0",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.osd_id": "2",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.type": "block",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:                 "ceph.vdo": "0"
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             },
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "type": "block",
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:             "vg_name": "ceph_vg2"
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:         }
Oct 11 09:10:23 compute-0 infallible_banzai[365467]:     ]
Oct 11 09:10:23 compute-0 infallible_banzai[365467]: }
Oct 11 09:10:23 compute-0 systemd[1]: libpod-8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d.scope: Deactivated successfully.
Oct 11 09:10:23 compute-0 podman[365450]: 2025-10-11 09:10:23.369400332 +0000 UTC m=+0.991365192 container died 8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 09:10:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-79313149ddb2dcbadc89daa23b4b8d9e1f23296ca8946376c1620440d55b6ba9-merged.mount: Deactivated successfully.
Oct 11 09:10:23 compute-0 podman[365450]: 2025-10-11 09:10:23.44780572 +0000 UTC m=+1.069770560 container remove 8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:10:23 compute-0 systemd[1]: libpod-conmon-8a4ee84b75e89166056a2acfa5a8b785341b6a6a125a09f5a9e1abf26023ef7d.scope: Deactivated successfully.
Oct 11 09:10:23 compute-0 sudo[365345]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:23 compute-0 sudo[365509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:10:23 compute-0 sudo[365509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:23 compute-0 sudo[365509]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:23 compute-0 ceph-mon[74313]: pgmap v2050: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:23 compute-0 sudo[365534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:10:23 compute-0 sudo[365534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:23 compute-0 sudo[365534]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:23 compute-0 nova_compute[260935]: 2025-10-11 09:10:23.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:10:23 compute-0 nova_compute[260935]: 2025-10-11 09:10:23.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:10:23 compute-0 sudo[365559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:10:23 compute-0 sudo[365559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:23 compute-0 sudo[365559]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:10:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3168893775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:10:23 compute-0 nova_compute[260935]: 2025-10-11 09:10:23.808 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:10:23 compute-0 nova_compute[260935]: 2025-10-11 09:10:23.815 2 DEBUG nova.compute.provider_tree [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:10:23 compute-0 nova_compute[260935]: 2025-10-11 09:10:23.832 2 DEBUG nova.scheduler.client.report [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:10:23 compute-0 nova_compute[260935]: 2025-10-11 09:10:23.855 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:10:23 compute-0 nova_compute[260935]: 2025-10-11 09:10:23.856 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:10:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:10:23 compute-0 sudo[365584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:10:23 compute-0 sudo[365584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:23 compute-0 nova_compute[260935]: 2025-10-11 09:10:23.898 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:10:23 compute-0 nova_compute[260935]: 2025-10-11 09:10:23.898 2 DEBUG nova.network.neutron [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:10:23 compute-0 nova_compute[260935]: 2025-10-11 09:10:23.917 2 INFO nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:10:23 compute-0 nova_compute[260935]: 2025-10-11 09:10:23.933 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.026 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.027 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.027 2 INFO nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Creating image(s)
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.053 2 DEBUG nova.storage.rbd_utils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.093 2 DEBUG nova.storage.rbd_utils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.123 2 DEBUG nova.storage.rbd_utils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.129 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.231 2 DEBUG nova.policy [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a213c3877fc144a3af0be3c3d853f999', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca4b15770e784f45910b630937562cb6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.233 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.234 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.234 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.235 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.255 2 DEBUG nova.storage.rbd_utils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.258 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:10:24 compute-0 podman[365728]: 2025-10-11 09:10:24.381661838 +0000 UTC m=+0.065271055 container create 1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 09:10:24 compute-0 systemd[1]: Started libpod-conmon-1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f.scope.
Oct 11 09:10:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:24 compute-0 podman[365728]: 2025-10-11 09:10:24.355313586 +0000 UTC m=+0.038922903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:10:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:10:24 compute-0 podman[365728]: 2025-10-11 09:10:24.513536193 +0000 UTC m=+0.197145410 container init 1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:10:24 compute-0 podman[365728]: 2025-10-11 09:10:24.520691367 +0000 UTC m=+0.204300584 container start 1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 09:10:24 compute-0 xenodochial_bardeen[365762]: 167 167
Oct 11 09:10:24 compute-0 systemd[1]: libpod-1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f.scope: Deactivated successfully.
Oct 11 09:10:24 compute-0 podman[365728]: 2025-10-11 09:10:24.53306023 +0000 UTC m=+0.216669447 container attach 1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:10:24 compute-0 podman[365728]: 2025-10-11 09:10:24.53339488 +0000 UTC m=+0.217004107 container died 1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:10:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-76367e33aed37aaf8092eb3e4550fa26061beefdd2f13263c60abc0ee97e3683-merged.mount: Deactivated successfully.
Oct 11 09:10:24 compute-0 podman[365728]: 2025-10-11 09:10:24.569164941 +0000 UTC m=+0.252774158 container remove 1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.572 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:10:24 compute-0 systemd[1]: libpod-conmon-1a60a99eae6186a15073f5b3b3aa4b215ba9cd11fa73e3ab0f4af85840f5cd0f.scope: Deactivated successfully.
Oct 11 09:10:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3168893775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.645 2 DEBUG nova.storage.rbd_utils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] resizing rbd image 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:10:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:24.716 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:10:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:24.717 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.741 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.741 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.741 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.742 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.742 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:10:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:10:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:10:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.812 2 DEBUG nova.objects.instance [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'migration_context' on Instance uuid 0b3da34e-478e-44fc-a1ec-2601998d2b0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:10:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:10:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.836 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.837 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Ensure instance console log exists: /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.837 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.838 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.838 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:10:24 compute-0 podman[365856]: 2025-10-11 09:10:24.840139176 +0000 UTC m=+0.054561958 container create dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 09:10:24 compute-0 systemd[1]: Started libpod-conmon-dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46.scope.
Oct 11 09:10:24 compute-0 nova_compute[260935]: 2025-10-11 09:10:24.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:10:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714a74fd693a6ef9103a7f0aeae32096958dcb04fcab7e56975b8e17321273f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:10:24 compute-0 podman[365856]: 2025-10-11 09:10:24.825310463 +0000 UTC m=+0.039733275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:10:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714a74fd693a6ef9103a7f0aeae32096958dcb04fcab7e56975b8e17321273f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:10:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714a74fd693a6ef9103a7f0aeae32096958dcb04fcab7e56975b8e17321273f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:10:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714a74fd693a6ef9103a7f0aeae32096958dcb04fcab7e56975b8e17321273f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:10:24 compute-0 podman[365856]: 2025-10-11 09:10:24.936415374 +0000 UTC m=+0.150838176 container init dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 09:10:24 compute-0 podman[365856]: 2025-10-11 09:10:24.943747363 +0000 UTC m=+0.158170185 container start dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:10:24 compute-0 podman[365856]: 2025-10-11 09:10:24.948479749 +0000 UTC m=+0.162902551 container attach dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 09:10:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:10:25 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3710635512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.240 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.351 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.351 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.351 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.357 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.357 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.363 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.363 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:10:25 compute-0 ceph-mon[74313]: pgmap v2051: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:25 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3710635512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.624 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.625 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3005MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.625 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.625 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.720 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.720 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.720 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.720 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 0b3da34e-478e-44fc-a1ec-2601998d2b0d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.720 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.721 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.841 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:10:25 compute-0 nova_compute[260935]: 2025-10-11 09:10:25.886 2 DEBUG nova.network.neutron [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Successfully created port: 673c8285-440e-4cb6-b81b-1bd2ddb7375a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]: {
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:         "osd_id": 2,
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:         "type": "bluestore"
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:     },
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:         "osd_id": 0,
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:         "type": "bluestore"
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:     },
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:         "osd_id": 1,
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:         "type": "bluestore"
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]:     }
Oct 11 09:10:26 compute-0 stupefied_sutherland[365872]: }
Oct 11 09:10:26 compute-0 systemd[1]: libpod-dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46.scope: Deactivated successfully.
Oct 11 09:10:26 compute-0 systemd[1]: libpod-dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46.scope: Consumed 1.165s CPU time.
Oct 11 09:10:26 compute-0 podman[365856]: 2025-10-11 09:10:26.150223855 +0000 UTC m=+1.364646667 container died dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:10:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-714a74fd693a6ef9103a7f0aeae32096958dcb04fcab7e56975b8e17321273f5-merged.mount: Deactivated successfully.
Oct 11 09:10:26 compute-0 podman[365856]: 2025-10-11 09:10:26.227997755 +0000 UTC m=+1.442420547 container remove dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:10:26 compute-0 systemd[1]: libpod-conmon-dc8b5ead0f6f56094c5c21e096b7e4583b8b6e66cd8ff2a3de1f4617b6ddfe46.scope: Deactivated successfully.
Oct 11 09:10:26 compute-0 sudo[365584]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:10:26 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:10:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:10:26 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:10:26 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 7850faf4-3b34-4fdc-a23e-46776fb0a3be does not exist
Oct 11 09:10:26 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ba43fd61-a9cf-498f-93a2-8de44e9f3180 does not exist
Oct 11 09:10:26 compute-0 podman[365947]: 2025-10-11 09:10:26.294647908 +0000 UTC m=+0.109098356 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid)
Oct 11 09:10:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:10:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2975616066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:10:26 compute-0 nova_compute[260935]: 2025-10-11 09:10:26.358 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:10:26 compute-0 sudo[365978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:10:26 compute-0 nova_compute[260935]: 2025-10-11 09:10:26.365 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:10:26 compute-0 sudo[365978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:26 compute-0 sudo[365978]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:26 compute-0 nova_compute[260935]: 2025-10-11 09:10:26.386 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:10:26 compute-0 nova_compute[260935]: 2025-10-11 09:10:26.419 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:10:26 compute-0 nova_compute[260935]: 2025-10-11 09:10:26.420 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:10:26 compute-0 sudo[366005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:10:26 compute-0 sudo[366005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:10:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:26 compute-0 sudo[366005]: pam_unix(sudo:session): session closed for user root
Oct 11 09:10:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:10:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1384408300' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:10:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:10:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1384408300' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:10:26 compute-0 nova_compute[260935]: 2025-10-11 09:10:26.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:27 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:10:27 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:10:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2975616066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:10:27 compute-0 ceph-mon[74313]: pgmap v2052: 321 pgs: 321 active+clean; 328 MiB data, 846 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:10:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1384408300' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:10:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1384408300' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:10:27 compute-0 nova_compute[260935]: 2025-10-11 09:10:27.369 2 DEBUG nova.network.neutron [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Successfully updated port: 673c8285-440e-4cb6-b81b-1bd2ddb7375a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:10:27 compute-0 nova_compute[260935]: 2025-10-11 09:10:27.393 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:10:27 compute-0 nova_compute[260935]: 2025-10-11 09:10:27.394 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquired lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:10:27 compute-0 nova_compute[260935]: 2025-10-11 09:10:27.394 2 DEBUG nova.network.neutron [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:10:27 compute-0 nova_compute[260935]: 2025-10-11 09:10:27.631 2 DEBUG nova.compute.manager [req-8db12eed-84e2-4226-bc1d-003dc9a18b3e req-0e9209f5-37b8-4fff-b779-68e00cd8594a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-changed-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:10:27 compute-0 nova_compute[260935]: 2025-10-11 09:10:27.631 2 DEBUG nova.compute.manager [req-8db12eed-84e2-4226-bc1d-003dc9a18b3e req-0e9209f5-37b8-4fff-b779-68e00cd8594a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Refreshing instance network info cache due to event network-changed-673c8285-440e-4cb6-b81b-1bd2ddb7375a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:10:27 compute-0 nova_compute[260935]: 2025-10-11 09:10:27.632 2 DEBUG oslo_concurrency.lockutils [req-8db12eed-84e2-4226-bc1d-003dc9a18b3e req-0e9209f5-37b8-4fff-b779-68e00cd8594a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:10:27 compute-0 nova_compute[260935]: 2025-10-11 09:10:27.854 2 DEBUG nova.network.neutron [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:10:28 compute-0 nova_compute[260935]: 2025-10-11 09:10:28.421 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:10:28 compute-0 nova_compute[260935]: 2025-10-11 09:10:28.421 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:10:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:10:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:10:28 compute-0 nova_compute[260935]: 2025-10-11 09:10:28.917 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:10:28 compute-0 nova_compute[260935]: 2025-10-11 09:10:28.918 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:10:28 compute-0 nova_compute[260935]: 2025-10-11 09:10:28.918 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:10:29 compute-0 ceph-mon[74313]: pgmap v2053: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:10:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:29.719 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:10:29 compute-0 nova_compute[260935]: 2025-10-11 09:10:29.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.484 2 DEBUG nova.network.neutron [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updating instance_info_cache with network_info: [{"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.516 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Releasing lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.517 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Instance network_info: |[{"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.518 2 DEBUG oslo_concurrency.lockutils [req-8db12eed-84e2-4226-bc1d-003dc9a18b3e req-0e9209f5-37b8-4fff-b779-68e00cd8594a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.518 2 DEBUG nova.network.neutron [req-8db12eed-84e2-4226-bc1d-003dc9a18b3e req-0e9209f5-37b8-4fff-b779-68e00cd8594a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Refreshing network info cache for port 673c8285-440e-4cb6-b81b-1bd2ddb7375a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.523 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Start _get_guest_xml network_info=[{"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.531 2 WARNING nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.547 2 DEBUG nova.virt.libvirt.host [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.548 2 DEBUG nova.virt.libvirt.host [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.554 2 DEBUG nova.virt.libvirt.host [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.555 2 DEBUG nova.virt.libvirt.host [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.556 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.556 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.557 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.558 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.558 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.559 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.559 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.560 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.561 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.561 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.562 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.563 2 DEBUG nova.virt.hardware [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:10:30 compute-0 nova_compute[260935]: 2025-10-11 09:10:30.568 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:10:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:10:31 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3119043267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.082 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.121 2 DEBUG nova.storage.rbd_utils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.128 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:10:31 compute-0 ceph-mon[74313]: pgmap v2054: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:10:31 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3119043267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:10:31 compute-0 podman[366092]: 2025-10-11 09:10:31.558314767 +0000 UTC m=+0.102694242 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:10:31 compute-0 podman[366093]: 2025-10-11 09:10:31.60639944 +0000 UTC m=+0.145308849 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Oct 11 09:10:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:10:31 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2812675974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.638 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.641 2 DEBUG nova.virt.libvirt.vif [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:10:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2087957653',display_name='tempest-TestNetworkAdvancedServerOps-server-2087957653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2087957653',id=104,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBElmuTcJVI/ZkWAz+5kv9/dAzHDbPqJ/RLoEbpcbzY+bQ/5rlnwJAiwfIH5I2AByz1siErLhAl04MipJZMi/HRmixiLKf1bIaWO7+9o5POQWUQay494cNjZNh9+hKmeEUA==',key_name='tempest-TestNetworkAdvancedServerOps-746953215',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-ng6vrpfk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:10:23Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0b3da34e-478e-44fc-a1ec-2601998d2b0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.642 2 DEBUG nova.network.os_vif_util [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.643 2 DEBUG nova.network.os_vif_util [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:22:4a,bridge_name='br-int',has_traffic_filtering=True,id=673c8285-440e-4cb6-b81b-1bd2ddb7375a,network=Network(2de6a2d5-4c4a-403f-9eb5-27dc7562315f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap673c8285-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.646 2 DEBUG nova.objects.instance [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0b3da34e-478e-44fc-a1ec-2601998d2b0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.716 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:10:31 compute-0 nova_compute[260935]:   <uuid>0b3da34e-478e-44fc-a1ec-2601998d2b0d</uuid>
Oct 11 09:10:31 compute-0 nova_compute[260935]:   <name>instance-00000068</name>
Oct 11 09:10:31 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:10:31 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:10:31 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-2087957653</nova:name>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:10:30</nova:creationTime>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:10:31 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:10:31 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:10:31 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:10:31 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:10:31 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:10:31 compute-0 nova_compute[260935]:         <nova:user uuid="a213c3877fc144a3af0be3c3d853f999">tempest-TestNetworkAdvancedServerOps-1304559157-project-member</nova:user>
Oct 11 09:10:31 compute-0 nova_compute[260935]:         <nova:project uuid="ca4b15770e784f45910b630937562cb6">tempest-TestNetworkAdvancedServerOps-1304559157</nova:project>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:10:31 compute-0 nova_compute[260935]:         <nova:port uuid="673c8285-440e-4cb6-b81b-1bd2ddb7375a">
Oct 11 09:10:31 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:10:31 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:10:31 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <system>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <entry name="serial">0b3da34e-478e-44fc-a1ec-2601998d2b0d</entry>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <entry name="uuid">0b3da34e-478e-44fc-a1ec-2601998d2b0d</entry>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     </system>
Oct 11 09:10:31 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:10:31 compute-0 nova_compute[260935]:   <os>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:   </os>
Oct 11 09:10:31 compute-0 nova_compute[260935]:   <features>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:   </features>
Oct 11 09:10:31 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:10:31 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:10:31 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk">
Oct 11 09:10:31 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       </source>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:10:31 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk.config">
Oct 11 09:10:31 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       </source>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:10:31 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:ac:22:4a"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <target dev="tap673c8285-44"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d/console.log" append="off"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <video>
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     </video>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:10:31 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:10:31 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:10:31 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:10:31 compute-0 nova_compute[260935]: </domain>
Oct 11 09:10:31 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.720 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Preparing to wait for external event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.721 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.721 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.722 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.723 2 DEBUG nova.virt.libvirt.vif [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:10:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2087957653',display_name='tempest-TestNetworkAdvancedServerOps-server-2087957653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2087957653',id=104,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBElmuTcJVI/ZkWAz+5kv9/dAzHDbPqJ/RLoEbpcbzY+bQ/5rlnwJAiwfIH5I2AByz1siErLhAl04MipJZMi/HRmixiLKf1bIaWO7+9o5POQWUQay494cNjZNh9+hKmeEUA==',key_name='tempest-TestNetworkAdvancedServerOps-746953215',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-ng6vrpfk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:10:23Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0b3da34e-478e-44fc-a1ec-2601998d2b0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.724 2 DEBUG nova.network.os_vif_util [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.725 2 DEBUG nova.network.os_vif_util [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:22:4a,bridge_name='br-int',has_traffic_filtering=True,id=673c8285-440e-4cb6-b81b-1bd2ddb7375a,network=Network(2de6a2d5-4c4a-403f-9eb5-27dc7562315f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap673c8285-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.726 2 DEBUG os_vif [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:22:4a,bridge_name='br-int',has_traffic_filtering=True,id=673c8285-440e-4cb6-b81b-1bd2ddb7375a,network=Network(2de6a2d5-4c4a-403f-9eb5-27dc7562315f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap673c8285-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.727 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.728 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap673c8285-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap673c8285-44, col_values=(('external_ids', {'iface-id': '673c8285-440e-4cb6-b81b-1bd2ddb7375a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:22:4a', 'vm-uuid': '0b3da34e-478e-44fc-a1ec-2601998d2b0d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:10:31 compute-0 NetworkManager[44960]: <info>  [1760173831.7396] manager: (tap673c8285-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.751 2 INFO os_vif [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:22:4a,bridge_name='br-int',has_traffic_filtering=True,id=673c8285-440e-4cb6-b81b-1bd2ddb7375a,network=Network(2de6a2d5-4c4a-403f-9eb5-27dc7562315f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap673c8285-44')
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.848 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.848 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.849 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No VIF found with MAC fa:16:3e:ac:22:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.849 2 INFO nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Using config drive
Oct 11 09:10:31 compute-0 nova_compute[260935]: 2025-10-11 09:10:31.886 2 DEBUG nova.storage.rbd_utils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:10:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:10:32 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2812675974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:10:32 compute-0 unix_chkpwd[366158]: password check failed for user (bin)
Oct 11 09:10:32 compute-0 sshd-session[366090]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170  user=bin
Oct 11 09:10:33 compute-0 ceph-mon[74313]: pgmap v2055: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:10:33 compute-0 nova_compute[260935]: 2025-10-11 09:10:33.636 2 INFO nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Creating config drive at /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d/disk.config
Oct 11 09:10:33 compute-0 nova_compute[260935]: 2025-10-11 09:10:33.644 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa6bxkdrr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:10:33 compute-0 nova_compute[260935]: 2025-10-11 09:10:33.804 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa6bxkdrr" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:10:33 compute-0 nova_compute[260935]: 2025-10-11 09:10:33.844 2 DEBUG nova.storage.rbd_utils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:10:33 compute-0 nova_compute[260935]: 2025-10-11 09:10:33.849 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d/disk.config 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:10:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:10:33 compute-0 nova_compute[260935]: 2025-10-11 09:10:33.918 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:10:33 compute-0 nova_compute[260935]: 2025-10-11 09:10:33.948 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:10:33 compute-0 nova_compute[260935]: 2025-10-11 09:10:33.949 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.024 2 DEBUG oslo_concurrency.processutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d/disk.config 0b3da34e-478e-44fc-a1ec-2601998d2b0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.025 2 INFO nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Deleting local config drive /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d/disk.config because it was imported into RBD.
Oct 11 09:10:34 compute-0 kernel: tap673c8285-44: entered promiscuous mode
Oct 11 09:10:34 compute-0 ovn_controller[152945]: 2025-10-11T09:10:34Z|00944|binding|INFO|Claiming lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a for this chassis.
Oct 11 09:10:34 compute-0 ovn_controller[152945]: 2025-10-11T09:10:34Z|00945|binding|INFO|673c8285-440e-4cb6-b81b-1bd2ddb7375a: Claiming fa:16:3e:ac:22:4a 10.100.0.9
Oct 11 09:10:34 compute-0 NetworkManager[44960]: <info>  [1760173834.0892] manager: (tap673c8285-44): new Tun device (/org/freedesktop/NetworkManager/Devices/395)
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.111 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:22:4a 10.100.0.9'], port_security=['fa:16:3e:ac:22:4a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0b3da34e-478e-44fc-a1ec-2601998d2b0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8ebe05f7-a7b2-491d-bdc1-80d46e3deb2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9218e07c-cf9e-412a-8b5f-2d45bacdf8cb, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=673c8285-440e-4cb6-b81b-1bd2ddb7375a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.114 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 673c8285-440e-4cb6-b81b-1bd2ddb7375a in datapath 2de6a2d5-4c4a-403f-9eb5-27dc7562315f bound to our chassis
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.117 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2de6a2d5-4c4a-403f-9eb5-27dc7562315f
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.142 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1e57ef-d006-433a-ac3f-d5df8640b227]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.143 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2de6a2d5-41 in ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.145 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2de6a2d5-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.145 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[759da2c9-568c-4560-ae13-8138638fb2e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:10:34 compute-0 systemd-machined[215705]: New machine qemu-123-instance-00000068.
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.146 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8327c2cb-8490-409a-88f3-fca9be2577fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.166 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e969d3-4bc7-4c6f-ac89-f1206ed651f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:10:34 compute-0 systemd[1]: Started Virtual Machine qemu-123-instance-00000068.
Oct 11 09:10:34 compute-0 systemd-udevd[366215]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.198 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc81d62e-c89a-4f5e-ab8f-858eb687da8a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:10:34 compute-0 NetworkManager[44960]: <info>  [1760173834.2101] device (tap673c8285-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:10:34 compute-0 NetworkManager[44960]: <info>  [1760173834.2114] device (tap673c8285-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:34 compute-0 ovn_controller[152945]: 2025-10-11T09:10:34Z|00946|binding|INFO|Setting lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a ovn-installed in OVS
Oct 11 09:10:34 compute-0 ovn_controller[152945]: 2025-10-11T09:10:34Z|00947|binding|INFO|Setting lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a up in Southbound
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.243 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b35078dd-7280-477f-811b-33032c1f1993]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:10:34 compute-0 NetworkManager[44960]: <info>  [1760173834.2539] manager: (tap2de6a2d5-40): new Veth device (/org/freedesktop/NetworkManager/Devices/396)
Oct 11 09:10:34 compute-0 systemd-udevd[366218]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.253 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3e25d09e-02ba-418e-bbb1-34f3609cbd5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.304 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3151bccb-1c7c-4bfc-bc8d-13059c22bad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.309 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[dd5ea99d-0d28-4390-9a40-11750511a3be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:10:34 compute-0 NetworkManager[44960]: <info>  [1760173834.3503] device (tap2de6a2d5-40): carrier: link connected
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.359 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[59e8c91f-9245-42b7-88a4-ea31a98405f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.389 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[715e244a-0c23-436c-ba14-9797404146db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2de6a2d5-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:6d:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 282], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 567905, 'reachable_time': 34835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366245, 'error': None, 'target': 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.408 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f5413fba-d3eb-4f09-816d-d670ed250173]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:6d2f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 567905, 'tstamp': 567905}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366246, 'error': None, 'target': 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.436 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e3727ee4-7750-49b2-bdf8-56a917ff0037]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2de6a2d5-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:6d:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 282], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 567905, 'reachable_time': 34835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366247, 'error': None, 'target': 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:10:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.490 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[acd5efa2-8812-4f9e-931c-306811ded906]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.600 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d2913251-577b-4006-8b40-c02e377de212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.603 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2de6a2d5-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.603 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.604 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2de6a2d5-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:34 compute-0 kernel: tap2de6a2d5-40: entered promiscuous mode
Oct 11 09:10:34 compute-0 NetworkManager[44960]: <info>  [1760173834.6396] manager: (tap2de6a2d5-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Oct 11 09:10:34 compute-0 sshd-session[366090]: Failed password for bin from 152.32.213.170 port 36626 ssh2
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.656 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2de6a2d5-40, col_values=(('external_ids', {'iface-id': 'f34fe4de-2979-4f48-9b40-4531a5420e53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:34 compute-0 ovn_controller[152945]: 2025-10-11T09:10:34Z|00948|binding|INFO|Releasing lport f34fe4de-2979-4f48-9b40-4531a5420e53 from this chassis (sb_readonly=0)
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.663 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2de6a2d5-4c4a-403f-9eb5-27dc7562315f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2de6a2d5-4c4a-403f-9eb5-27dc7562315f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.664 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2c5d3890-94eb-4464-9717-a7c289b8844b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.666 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-2de6a2d5-4c4a-403f-9eb5-27dc7562315f
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/2de6a2d5-4c4a-403f-9eb5-27dc7562315f.pid.haproxy
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 2de6a2d5-4c4a-403f-9eb5-27dc7562315f
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:10:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:10:34.667 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'env', 'PROCESS_TAG=haproxy-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2de6a2d5-4c4a-403f-9eb5-27dc7562315f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.740 2 DEBUG nova.compute.manager [req-f8502b11-a306-4183-8458-10f7de90de4b req-39c1e905-d96e-4ff0-b1a0-d85941d0a849 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.741 2 DEBUG oslo_concurrency.lockutils [req-f8502b11-a306-4183-8458-10f7de90de4b req-39c1e905-d96e-4ff0-b1a0-d85941d0a849 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.741 2 DEBUG oslo_concurrency.lockutils [req-f8502b11-a306-4183-8458-10f7de90de4b req-39c1e905-d96e-4ff0-b1a0-d85941d0a849 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.742 2 DEBUG oslo_concurrency.lockutils [req-f8502b11-a306-4183-8458-10f7de90de4b req-39c1e905-d96e-4ff0-b1a0-d85941d0a849 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.742 2 DEBUG nova.compute.manager [req-f8502b11-a306-4183-8458-10f7de90de4b req-39c1e905-d96e-4ff0-b1a0-d85941d0a849 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Processing event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.898 2 DEBUG nova.network.neutron [req-8db12eed-84e2-4226-bc1d-003dc9a18b3e req-0e9209f5-37b8-4fff-b779-68e00cd8594a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updated VIF entry in instance network info cache for port 673c8285-440e-4cb6-b81b-1bd2ddb7375a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.898 2 DEBUG nova.network.neutron [req-8db12eed-84e2-4226-bc1d-003dc9a18b3e req-0e9209f5-37b8-4fff-b779-68e00cd8594a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updating instance_info_cache with network_info: [{"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.927 2 DEBUG oslo_concurrency.lockutils [req-8db12eed-84e2-4226-bc1d-003dc9a18b3e req-0e9209f5-37b8-4fff-b779-68e00cd8594a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:10:34 compute-0 nova_compute[260935]: 2025-10-11 09:10:34.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:35 compute-0 podman[366279]: 2025-10-11 09:10:35.19172526 +0000 UTC m=+0.088034164 container create c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:10:35 compute-0 podman[366279]: 2025-10-11 09:10:35.146852079 +0000 UTC m=+0.043161023 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:10:35 compute-0 systemd[1]: Started libpod-conmon-c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f.scope.
Oct 11 09:10:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d2d7c4ba9cfd39626d89e94aa92614f628d67bec419a0105d23bb5bb0122f8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:10:35 compute-0 podman[366279]: 2025-10-11 09:10:35.313594009 +0000 UTC m=+0.209902973 container init c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 09:10:35 compute-0 podman[366279]: 2025-10-11 09:10:35.319634062 +0000 UTC m=+0.215942966 container start c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 11 09:10:35 compute-0 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366336]: [NOTICE]   (366341) : New worker (366343) forked
Oct 11 09:10:35 compute-0 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366336]: [NOTICE]   (366341) : Loading success.
Oct 11 09:10:35 compute-0 ceph-mon[74313]: pgmap v2056: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.823 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173835.8224127, 0b3da34e-478e-44fc-a1ec-2601998d2b0d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.823 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] VM Started (Lifecycle Event)
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.825 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.830 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.835 2 INFO nova.virt.libvirt.driver [-] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Instance spawned successfully.
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.835 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.865 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.870 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.885 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.885 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.886 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.887 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.888 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.888 2 DEBUG nova.virt.libvirt.driver [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.895 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.895 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173835.8226926, 0b3da34e-478e-44fc-a1ec-2601998d2b0d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.896 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] VM Paused (Lifecycle Event)
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.933 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.943 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173835.8288982, 0b3da34e-478e-44fc-a1ec-2601998d2b0d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.944 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] VM Resumed (Lifecycle Event)
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.964 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.968 2 INFO nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Took 11.94 seconds to spawn the instance on the hypervisor.
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.969 2 DEBUG nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:10:35 compute-0 nova_compute[260935]: 2025-10-11 09:10:35.971 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:10:36 compute-0 nova_compute[260935]: 2025-10-11 09:10:36.010 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:10:36 compute-0 nova_compute[260935]: 2025-10-11 09:10:36.044 2 INFO nova.compute.manager [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Took 13.04 seconds to build instance.
Oct 11 09:10:36 compute-0 nova_compute[260935]: 2025-10-11 09:10:36.071 2 DEBUG oslo_concurrency.lockutils [None req-402cdad3-0a70-4b41-8df4-a529d58cfc49 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:10:36 compute-0 unix_chkpwd[366354]: password check failed for user (root)
Oct 11 09:10:36 compute-0 sshd-session[366352]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:10:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:10:36 compute-0 sshd-session[366090]: Received disconnect from 152.32.213.170 port 36626:11: Bye Bye [preauth]
Oct 11 09:10:36 compute-0 sshd-session[366090]: Disconnected from authenticating user bin 152.32.213.170 port 36626 [preauth]
Oct 11 09:10:36 compute-0 nova_compute[260935]: 2025-10-11 09:10:36.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:36 compute-0 nova_compute[260935]: 2025-10-11 09:10:36.989 2 DEBUG nova.compute.manager [req-efb96965-7fcf-4304-b10c-a1954faac3c9 req-8632e11f-8f8e-493d-8557-3b403362eda3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:10:36 compute-0 nova_compute[260935]: 2025-10-11 09:10:36.990 2 DEBUG oslo_concurrency.lockutils [req-efb96965-7fcf-4304-b10c-a1954faac3c9 req-8632e11f-8f8e-493d-8557-3b403362eda3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:10:36 compute-0 nova_compute[260935]: 2025-10-11 09:10:36.990 2 DEBUG oslo_concurrency.lockutils [req-efb96965-7fcf-4304-b10c-a1954faac3c9 req-8632e11f-8f8e-493d-8557-3b403362eda3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:10:36 compute-0 nova_compute[260935]: 2025-10-11 09:10:36.991 2 DEBUG oslo_concurrency.lockutils [req-efb96965-7fcf-4304-b10c-a1954faac3c9 req-8632e11f-8f8e-493d-8557-3b403362eda3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:10:36 compute-0 nova_compute[260935]: 2025-10-11 09:10:36.991 2 DEBUG nova.compute.manager [req-efb96965-7fcf-4304-b10c-a1954faac3c9 req-8632e11f-8f8e-493d-8557-3b403362eda3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] No waiting events found dispatching network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:10:36 compute-0 nova_compute[260935]: 2025-10-11 09:10:36.991 2 WARNING nova.compute.manager [req-efb96965-7fcf-4304-b10c-a1954faac3c9 req-8632e11f-8f8e-493d-8557-3b403362eda3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received unexpected event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a for instance with vm_state active and task_state None.
Oct 11 09:10:37 compute-0 ceph-mon[74313]: pgmap v2057: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:10:37 compute-0 sshd-session[366352]: Failed password for root from 165.232.82.252 port 39218 ssh2
Oct 11 09:10:38 compute-0 sshd-session[366352]: Connection closed by authenticating user root 165.232.82.252 port 39218 [preauth]
Oct 11 09:10:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:10:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:10:39 compute-0 ceph-mon[74313]: pgmap v2058: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:10:39 compute-0 nova_compute[260935]: 2025-10-11 09:10:39.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2059: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:10:41 compute-0 sshd-session[366355]: Invalid user video from 155.4.244.179 port 50153
Oct 11 09:10:41 compute-0 sshd-session[366355]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:10:41 compute-0 sshd-session[366355]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:10:41 compute-0 ceph-mon[74313]: pgmap v2059: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:10:41 compute-0 nova_compute[260935]: 2025-10-11 09:10:41.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:41 compute-0 ovn_controller[152945]: 2025-10-11T09:10:41Z|00949|binding|INFO|Releasing lport f34fe4de-2979-4f48-9b40-4531a5420e53 from this chassis (sb_readonly=0)
Oct 11 09:10:41 compute-0 ovn_controller[152945]: 2025-10-11T09:10:41Z|00950|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:10:41 compute-0 ovn_controller[152945]: 2025-10-11T09:10:41Z|00951|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:10:41 compute-0 nova_compute[260935]: 2025-10-11 09:10:41.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:41 compute-0 NetworkManager[44960]: <info>  [1760173841.9429] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Oct 11 09:10:41 compute-0 NetworkManager[44960]: <info>  [1760173841.9438] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Oct 11 09:10:41 compute-0 ovn_controller[152945]: 2025-10-11T09:10:41Z|00952|binding|INFO|Releasing lport f34fe4de-2979-4f48-9b40-4531a5420e53 from this chassis (sb_readonly=0)
Oct 11 09:10:41 compute-0 nova_compute[260935]: 2025-10-11 09:10:41.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:41 compute-0 ovn_controller[152945]: 2025-10-11T09:10:41Z|00953|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:10:41 compute-0 ovn_controller[152945]: 2025-10-11T09:10:41Z|00954|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:10:41 compute-0 nova_compute[260935]: 2025-10-11 09:10:41.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:10:42 compute-0 sshd-session[366355]: Failed password for invalid user video from 155.4.244.179 port 50153 ssh2
Oct 11 09:10:42 compute-0 nova_compute[260935]: 2025-10-11 09:10:42.655 2 DEBUG nova.compute.manager [req-8a91c098-4ca1-4722-9508-a9b2606f3d67 req-1b8dbe9f-e828-46bc-b4cc-37bd496cf529 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-changed-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:10:42 compute-0 nova_compute[260935]: 2025-10-11 09:10:42.656 2 DEBUG nova.compute.manager [req-8a91c098-4ca1-4722-9508-a9b2606f3d67 req-1b8dbe9f-e828-46bc-b4cc-37bd496cf529 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Refreshing instance network info cache due to event network-changed-673c8285-440e-4cb6-b81b-1bd2ddb7375a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:10:42 compute-0 nova_compute[260935]: 2025-10-11 09:10:42.656 2 DEBUG oslo_concurrency.lockutils [req-8a91c098-4ca1-4722-9508-a9b2606f3d67 req-1b8dbe9f-e828-46bc-b4cc-37bd496cf529 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:10:42 compute-0 nova_compute[260935]: 2025-10-11 09:10:42.657 2 DEBUG oslo_concurrency.lockutils [req-8a91c098-4ca1-4722-9508-a9b2606f3d67 req-1b8dbe9f-e828-46bc-b4cc-37bd496cf529 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:10:42 compute-0 nova_compute[260935]: 2025-10-11 09:10:42.657 2 DEBUG nova.network.neutron [req-8a91c098-4ca1-4722-9508-a9b2606f3d67 req-1b8dbe9f-e828-46bc-b4cc-37bd496cf529 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Refreshing network info cache for port 673c8285-440e-4cb6-b81b-1bd2ddb7375a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:10:42 compute-0 sshd-session[366355]: Received disconnect from 155.4.244.179 port 50153:11: Bye Bye [preauth]
Oct 11 09:10:42 compute-0 sshd-session[366355]: Disconnected from invalid user video 155.4.244.179 port 50153 [preauth]
Oct 11 09:10:43 compute-0 ceph-mon[74313]: pgmap v2060: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:10:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:10:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2061: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:10:44 compute-0 nova_compute[260935]: 2025-10-11 09:10:44.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:45 compute-0 ceph-mon[74313]: pgmap v2061: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:10:45 compute-0 nova_compute[260935]: 2025-10-11 09:10:45.896 2 DEBUG nova.network.neutron [req-8a91c098-4ca1-4722-9508-a9b2606f3d67 req-1b8dbe9f-e828-46bc-b4cc-37bd496cf529 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updated VIF entry in instance network info cache for port 673c8285-440e-4cb6-b81b-1bd2ddb7375a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:10:45 compute-0 nova_compute[260935]: 2025-10-11 09:10:45.897 2 DEBUG nova.network.neutron [req-8a91c098-4ca1-4722-9508-a9b2606f3d67 req-1b8dbe9f-e828-46bc-b4cc-37bd496cf529 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updating instance_info_cache with network_info: [{"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:10:45 compute-0 nova_compute[260935]: 2025-10-11 09:10:45.915 2 DEBUG oslo_concurrency.lockutils [req-8a91c098-4ca1-4722-9508-a9b2606f3d67 req-1b8dbe9f-e828-46bc-b4cc-37bd496cf529 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:10:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2062: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:10:46 compute-0 nova_compute[260935]: 2025-10-11 09:10:46.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:47 compute-0 ceph-mon[74313]: pgmap v2062: 321 pgs: 321 active+clean; 374 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:10:48 compute-0 ovn_controller[152945]: 2025-10-11T09:10:48Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:22:4a 10.100.0.9
Oct 11 09:10:48 compute-0 ovn_controller[152945]: 2025-10-11T09:10:48Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:22:4a 10.100.0.9
Oct 11 09:10:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 395 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 119 op/s
Oct 11 09:10:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:10:49 compute-0 ceph-mon[74313]: pgmap v2063: 321 pgs: 321 active+clean; 395 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 119 op/s
Oct 11 09:10:49 compute-0 podman[366358]: 2025-10-11 09:10:49.811781177 +0000 UTC m=+0.095627287 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:10:49 compute-0 nova_compute[260935]: 2025-10-11 09:10:49.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 395 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Oct 11 09:10:51 compute-0 ceph-mon[74313]: pgmap v2064: 321 pgs: 321 active+clean; 395 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Oct 11 09:10:51 compute-0 nova_compute[260935]: 2025-10-11 09:10:51.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 404 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 11 09:10:53 compute-0 ceph-mon[74313]: pgmap v2065: 321 pgs: 321 active+clean; 404 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 11 09:10:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:10:54 compute-0 nova_compute[260935]: 2025-10-11 09:10:54.442 2 INFO nova.compute.manager [None req-16fdadf7-ac51-4a4e-a01b-bd586a212c12 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Get console output
Oct 11 09:10:54 compute-0 nova_compute[260935]: 2025-10-11 09:10:54.449 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:10:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2066: 321 pgs: 321 active+clean; 407 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:10:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:10:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:10:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:10:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:10:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:10:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:10:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:10:54
Oct 11 09:10:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:10:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:10:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['vms', 'images', 'backups', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr']
Oct 11 09:10:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:10:54 compute-0 nova_compute[260935]: 2025-10-11 09:10:54.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:10:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:10:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:10:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:10:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:10:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:10:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:10:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:10:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:10:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:10:55 compute-0 nova_compute[260935]: 2025-10-11 09:10:55.600 2 DEBUG oslo_concurrency.lockutils [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:10:55 compute-0 nova_compute[260935]: 2025-10-11 09:10:55.600 2 DEBUG oslo_concurrency.lockutils [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:10:55 compute-0 nova_compute[260935]: 2025-10-11 09:10:55.601 2 INFO nova.compute.manager [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Rebooting instance
Oct 11 09:10:55 compute-0 ceph-mon[74313]: pgmap v2066: 321 pgs: 321 active+clean; 407 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:10:55 compute-0 nova_compute[260935]: 2025-10-11 09:10:55.622 2 DEBUG oslo_concurrency.lockutils [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:10:55 compute-0 nova_compute[260935]: 2025-10-11 09:10:55.623 2 DEBUG oslo_concurrency.lockutils [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquired lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:10:55 compute-0 nova_compute[260935]: 2025-10-11 09:10:55.624 2 DEBUG nova.network.neutron [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:10:55 compute-0 ceph-mgr[74605]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2898047278
Oct 11 09:10:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2067: 321 pgs: 321 active+clean; 407 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:10:56 compute-0 nova_compute[260935]: 2025-10-11 09:10:56.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:10:56 compute-0 podman[366378]: 2025-10-11 09:10:56.829791939 +0000 UTC m=+0.083611663 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 09:10:57 compute-0 ceph-mon[74313]: pgmap v2067: 321 pgs: 321 active+clean; 407 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:10:58 compute-0 nova_compute[260935]: 2025-10-11 09:10:58.435 2 DEBUG nova.network.neutron [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updating instance_info_cache with network_info: [{"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:10:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2068: 321 pgs: 321 active+clean; 407 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:10:58 compute-0 nova_compute[260935]: 2025-10-11 09:10:58.457 2 DEBUG oslo_concurrency.lockutils [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Releasing lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:10:58 compute-0 nova_compute[260935]: 2025-10-11 09:10:58.459 2 DEBUG nova.compute.manager [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:10:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Oct 11 09:10:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Oct 11 09:10:58 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Oct 11 09:10:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:10:59 compute-0 ceph-mon[74313]: pgmap v2068: 321 pgs: 321 active+clean; 407 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:10:59 compute-0 ceph-mon[74313]: osdmap e277: 3 total, 3 up, 3 in
Oct 11 09:10:59 compute-0 nova_compute[260935]: 2025-10-11 09:10:59.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 321 active+clean; 407 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 130 KiB/s wr, 21 op/s
Oct 11 09:11:00 compute-0 kernel: tap673c8285-44 (unregistering): left promiscuous mode
Oct 11 09:11:00 compute-0 NetworkManager[44960]: <info>  [1760173860.9421] device (tap673c8285-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:11:00 compute-0 ovn_controller[152945]: 2025-10-11T09:11:00Z|00955|binding|INFO|Releasing lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a from this chassis (sb_readonly=0)
Oct 11 09:11:00 compute-0 ovn_controller[152945]: 2025-10-11T09:11:00Z|00956|binding|INFO|Setting lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a down in Southbound
Oct 11 09:11:00 compute-0 nova_compute[260935]: 2025-10-11 09:11:00.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:00 compute-0 ovn_controller[152945]: 2025-10-11T09:11:00Z|00957|binding|INFO|Removing iface tap673c8285-44 ovn-installed in OVS
Oct 11 09:11:00 compute-0 nova_compute[260935]: 2025-10-11 09:11:00.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:00.963 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:22:4a 10.100.0.9'], port_security=['fa:16:3e:ac:22:4a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0b3da34e-478e-44fc-a1ec-2601998d2b0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8ebe05f7-a7b2-491d-bdc1-80d46e3deb2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9218e07c-cf9e-412a-8b5f-2d45bacdf8cb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=673c8285-440e-4cb6-b81b-1bd2ddb7375a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:11:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:00.966 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 673c8285-440e-4cb6-b81b-1bd2ddb7375a in datapath 2de6a2d5-4c4a-403f-9eb5-27dc7562315f unbound from our chassis
Oct 11 09:11:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:00.969 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2de6a2d5-4c4a-403f-9eb5-27dc7562315f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:11:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:00.971 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0577ff14-774b-4202-8536-375b593f85e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:00.975 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f namespace which is not needed anymore
Oct 11 09:11:00 compute-0 nova_compute[260935]: 2025-10-11 09:11:00.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:01 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000068.scope: Deactivated successfully.
Oct 11 09:11:01 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000068.scope: Consumed 14.431s CPU time.
Oct 11 09:11:01 compute-0 systemd-machined[215705]: Machine qemu-123-instance-00000068 terminated.
Oct 11 09:11:01 compute-0 nova_compute[260935]: 2025-10-11 09:11:01.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:01 compute-0 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366336]: [NOTICE]   (366341) : haproxy version is 2.8.14-c23fe91
Oct 11 09:11:01 compute-0 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366336]: [NOTICE]   (366341) : path to executable is /usr/sbin/haproxy
Oct 11 09:11:01 compute-0 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366336]: [ALERT]    (366341) : Current worker (366343) exited with code 143 (Terminated)
Oct 11 09:11:01 compute-0 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366336]: [WARNING]  (366341) : All workers exited. Exiting... (0)
Oct 11 09:11:01 compute-0 nova_compute[260935]: 2025-10-11 09:11:01.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:01 compute-0 systemd[1]: libpod-c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f.scope: Deactivated successfully.
Oct 11 09:11:01 compute-0 podman[366421]: 2025-10-11 09:11:01.209109086 +0000 UTC m=+0.079455611 container died c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:11:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f-userdata-shm.mount: Deactivated successfully.
Oct 11 09:11:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8d2d7c4ba9cfd39626d89e94aa92614f628d67bec419a0105d23bb5bb0122f8-merged.mount: Deactivated successfully.
Oct 11 09:11:01 compute-0 podman[366421]: 2025-10-11 09:11:01.27285874 +0000 UTC m=+0.143205265 container cleanup c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 09:11:01 compute-0 systemd[1]: libpod-conmon-c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f.scope: Deactivated successfully.
Oct 11 09:11:01 compute-0 podman[366459]: 2025-10-11 09:11:01.376142539 +0000 UTC m=+0.060758913 container remove c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.386 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f884a26f-b666-4d04-86a4-5c5669fb331e]: (4, ('Sat Oct 11 09:11:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f (c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f)\nc053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f\nSat Oct 11 09:11:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f (c053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f)\nc053f96544bf638240d0d54ddc97e5f3e213ef72bbd8494d356db148f8011b3f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.388 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[49555e5b-0211-439b-9aba-b722fbc500a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.389 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2de6a2d5-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:11:01 compute-0 nova_compute[260935]: 2025-10-11 09:11:01.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:01 compute-0 kernel: tap2de6a2d5-40: left promiscuous mode
Oct 11 09:11:01 compute-0 nova_compute[260935]: 2025-10-11 09:11:01.401 2 DEBUG nova.compute.manager [req-6bc8a049-b0d3-4ead-bd14-0f48ae292217 req-99db68cb-1f21-458d-9270-f42f448eec96 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-unplugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:11:01 compute-0 nova_compute[260935]: 2025-10-11 09:11:01.402 2 DEBUG oslo_concurrency.lockutils [req-6bc8a049-b0d3-4ead-bd14-0f48ae292217 req-99db68cb-1f21-458d-9270-f42f448eec96 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:01 compute-0 nova_compute[260935]: 2025-10-11 09:11:01.402 2 DEBUG oslo_concurrency.lockutils [req-6bc8a049-b0d3-4ead-bd14-0f48ae292217 req-99db68cb-1f21-458d-9270-f42f448eec96 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:01 compute-0 nova_compute[260935]: 2025-10-11 09:11:01.403 2 DEBUG oslo_concurrency.lockutils [req-6bc8a049-b0d3-4ead-bd14-0f48ae292217 req-99db68cb-1f21-458d-9270-f42f448eec96 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:01 compute-0 nova_compute[260935]: 2025-10-11 09:11:01.403 2 DEBUG nova.compute.manager [req-6bc8a049-b0d3-4ead-bd14-0f48ae292217 req-99db68cb-1f21-458d-9270-f42f448eec96 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] No waiting events found dispatching network-vif-unplugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:11:01 compute-0 nova_compute[260935]: 2025-10-11 09:11:01.403 2 WARNING nova.compute.manager [req-6bc8a049-b0d3-4ead-bd14-0f48ae292217 req-99db68cb-1f21-458d-9270-f42f448eec96 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received unexpected event network-vif-unplugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a for instance with vm_state active and task_state reboot_started.
Oct 11 09:11:01 compute-0 nova_compute[260935]: 2025-10-11 09:11:01.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.418 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[83cd76e8-c11c-4877-8eb9-58acfc08b6fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.455 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2945ca4b-8c22-40ae-a387-9772b791f583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.457 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5694c5c0-3044-47d4-84c3-6ac041ebddd2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.485 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cdab48dc-153e-4df6-a723-49bb30e551cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 567893, 'reachable_time': 15283, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366477, 'error': None, 'target': 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.488 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.489 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[4c33669c-3f99-4bd5-8a20-286979d5f2b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d2de6a2d5\x2d4c4a\x2d403f\x2d9eb5\x2d27dc7562315f.mount: Deactivated successfully.
Oct 11 09:11:01 compute-0 ceph-mon[74313]: pgmap v2070: 321 pgs: 321 active+clean; 407 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 130 KiB/s wr, 21 op/s
Oct 11 09:11:01 compute-0 nova_compute[260935]: 2025-10-11 09:11:01.644 2 INFO nova.virt.libvirt.driver [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Instance shutdown successfully.
Oct 11 09:11:01 compute-0 kernel: tap673c8285-44: entered promiscuous mode
Oct 11 09:11:01 compute-0 systemd-udevd[366402]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:11:01 compute-0 NetworkManager[44960]: <info>  [1760173861.7360] manager: (tap673c8285-44): new Tun device (/org/freedesktop/NetworkManager/Devices/400)
Oct 11 09:11:01 compute-0 ovn_controller[152945]: 2025-10-11T09:11:01Z|00958|binding|INFO|Claiming lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a for this chassis.
Oct 11 09:11:01 compute-0 ovn_controller[152945]: 2025-10-11T09:11:01Z|00959|binding|INFO|673c8285-440e-4cb6-b81b-1bd2ddb7375a: Claiming fa:16:3e:ac:22:4a 10.100.0.9
Oct 11 09:11:01 compute-0 nova_compute[260935]: 2025-10-11 09:11:01.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.748 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:22:4a 10.100.0.9'], port_security=['fa:16:3e:ac:22:4a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0b3da34e-478e-44fc-a1ec-2601998d2b0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8ebe05f7-a7b2-491d-bdc1-80d46e3deb2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9218e07c-cf9e-412a-8b5f-2d45bacdf8cb, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=673c8285-440e-4cb6-b81b-1bd2ddb7375a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.750 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 673c8285-440e-4cb6-b81b-1bd2ddb7375a in datapath 2de6a2d5-4c4a-403f-9eb5-27dc7562315f bound to our chassis
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.754 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2de6a2d5-4c4a-403f-9eb5-27dc7562315f
Oct 11 09:11:01 compute-0 NetworkManager[44960]: <info>  [1760173861.7552] device (tap673c8285-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:11:01 compute-0 ovn_controller[152945]: 2025-10-11T09:11:01Z|00960|binding|INFO|Setting lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a ovn-installed in OVS
Oct 11 09:11:01 compute-0 ovn_controller[152945]: 2025-10-11T09:11:01Z|00961|binding|INFO|Setting lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a up in Southbound
Oct 11 09:11:01 compute-0 nova_compute[260935]: 2025-10-11 09:11:01.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.770 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4066c7c0-1f11-45c2-857a-b5338612fb58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.771 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2de6a2d5-41 in ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:11:01 compute-0 NetworkManager[44960]: <info>  [1760173861.7739] device (tap673c8285-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.774 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2de6a2d5-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.774 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7837f526-211f-4c88-b9f1-ae5c480a4ba9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 nova_compute[260935]: 2025-10-11 09:11:01.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.776 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1d6a6000-71cb-48d0-a2c6-6609e67ba7aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 nova_compute[260935]: 2025-10-11 09:11:01.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.795 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[bd973dcc-cbcd-4fc9-966c-12ea7fb57adb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 systemd-machined[215705]: New machine qemu-124-instance-00000068.
Oct 11 09:11:01 compute-0 podman[366479]: 2025-10-11 09:11:01.826845864 +0000 UTC m=+0.130727369 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.829 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e5adc779-e21c-4a64-a235-7fd258704c76]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 systemd[1]: Started Virtual Machine qemu-124-instance-00000068.
Oct 11 09:11:01 compute-0 podman[366481]: 2025-10-11 09:11:01.862446489 +0000 UTC m=+0.164019031 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.864 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc07a3b-cc8b-4c60-88c0-150497905f72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.871 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b5b046-0878-4661-81d9-a48afba45caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 NetworkManager[44960]: <info>  [1760173861.8733] manager: (tap2de6a2d5-40): new Veth device (/org/freedesktop/NetworkManager/Devices/401)
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.912 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6a6ba076-7b5c-4a51-8964-8072e43e7954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.915 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8da6a919-5b2b-487a-872a-5b6a899017e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 NetworkManager[44960]: <info>  [1760173861.9392] device (tap2de6a2d5-40): carrier: link connected
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.946 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[685f3b40-487b-449e-b0c8-56c7720427a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.966 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[61741135-5998-4da4-8fec-20f00c63e390]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2de6a2d5-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:6d:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570664, 'reachable_time': 31214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366566, 'error': None, 'target': 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:01.983 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[721d7e6f-1c8c-44b1-a68b-61d236cbbec3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:6d2f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570664, 'tstamp': 570664}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366567, 'error': None, 'target': 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.006 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e5964ccc-f313-406c-963e-e2be3a2fd730]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2de6a2d5-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:6d:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570664, 'reachable_time': 31214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366568, 'error': None, 'target': 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.045 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[36d44ed0-bba9-4be2-b5e2-3691d3e02851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.136 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b0860bcf-2bcc-4045-988e-c1c9035cc8ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.138 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2de6a2d5-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.138 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.139 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2de6a2d5-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:11:02 compute-0 nova_compute[260935]: 2025-10-11 09:11:02.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:02 compute-0 kernel: tap2de6a2d5-40: entered promiscuous mode
Oct 11 09:11:02 compute-0 NetworkManager[44960]: <info>  [1760173862.1419] manager: (tap2de6a2d5-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.146 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2de6a2d5-40, col_values=(('external_ids', {'iface-id': 'f34fe4de-2979-4f48-9b40-4531a5420e53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:11:02 compute-0 ovn_controller[152945]: 2025-10-11T09:11:02Z|00962|binding|INFO|Releasing lport f34fe4de-2979-4f48-9b40-4531a5420e53 from this chassis (sb_readonly=0)
Oct 11 09:11:02 compute-0 nova_compute[260935]: 2025-10-11 09:11:02.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:02 compute-0 nova_compute[260935]: 2025-10-11 09:11:02.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:02 compute-0 nova_compute[260935]: 2025-10-11 09:11:02.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.180 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2de6a2d5-4c4a-403f-9eb5-27dc7562315f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2de6a2d5-4c4a-403f-9eb5-27dc7562315f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.181 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2125fcc7-f19f-4c0d-95a2-5c9f5c308257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.182 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-2de6a2d5-4c4a-403f-9eb5-27dc7562315f
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/2de6a2d5-4c4a-403f-9eb5-27dc7562315f.pid.haproxy
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 2de6a2d5-4c4a-403f-9eb5-27dc7562315f
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:11:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:02.183 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'env', 'PROCESS_TAG=haproxy-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2de6a2d5-4c4a-403f-9eb5-27dc7562315f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:11:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 321 active+clean; 407 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 62 KiB/s wr, 30 op/s
Oct 11 09:11:02 compute-0 podman[366642]: 2025-10-11 09:11:02.668311615 +0000 UTC m=+0.056331890 container create 390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 11 09:11:02 compute-0 systemd[1]: Started libpod-conmon-390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45.scope.
Oct 11 09:11:02 compute-0 podman[366642]: 2025-10-11 09:11:02.636930747 +0000 UTC m=+0.024951022 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:11:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:11:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0faee35d9e2350e7eab68b1a8768c75dd4c537a8039875f94be2481264f64f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:11:02 compute-0 podman[366642]: 2025-10-11 09:11:02.773856087 +0000 UTC m=+0.161876352 container init 390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:11:02 compute-0 podman[366642]: 2025-10-11 09:11:02.781078187 +0000 UTC m=+0.169098422 container start 390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 09:11:02 compute-0 nova_compute[260935]: 2025-10-11 09:11:02.803 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 0b3da34e-478e-44fc-a1ec-2601998d2b0d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 09:11:02 compute-0 nova_compute[260935]: 2025-10-11 09:11:02.804 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173862.8030243, 0b3da34e-478e-44fc-a1ec-2601998d2b0d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:11:02 compute-0 nova_compute[260935]: 2025-10-11 09:11:02.804 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] VM Resumed (Lifecycle Event)
Oct 11 09:11:02 compute-0 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366657]: [NOTICE]   (366661) : New worker (366663) forked
Oct 11 09:11:02 compute-0 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366657]: [NOTICE]   (366661) : Loading success.
Oct 11 09:11:02 compute-0 nova_compute[260935]: 2025-10-11 09:11:02.813 2 INFO nova.virt.libvirt.driver [-] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Instance running successfully.
Oct 11 09:11:02 compute-0 nova_compute[260935]: 2025-10-11 09:11:02.813 2 INFO nova.virt.libvirt.driver [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Instance soft rebooted successfully.
Oct 11 09:11:02 compute-0 nova_compute[260935]: 2025-10-11 09:11:02.814 2 DEBUG nova.compute.manager [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.268 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.272 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.318 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] During sync_power_state the instance has a pending task (reboot_started). Skip.
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.319 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173862.8045547, 0b3da34e-478e-44fc-a1ec-2601998d2b0d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.319 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] VM Started (Lifecycle Event)
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.330 2 DEBUG oslo_concurrency.lockutils [None req-5eb2966d-fec5-4d19-98fc-0401849181fa a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.345 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.349 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:11:03 compute-0 ceph-mon[74313]: pgmap v2071: 321 pgs: 321 active+clean; 407 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 62 KiB/s wr, 30 op/s
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.802 2 DEBUG nova.compute.manager [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.803 2 DEBUG oslo_concurrency.lockutils [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.804 2 DEBUG oslo_concurrency.lockutils [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.804 2 DEBUG oslo_concurrency.lockutils [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.805 2 DEBUG nova.compute.manager [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] No waiting events found dispatching network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.806 2 WARNING nova.compute.manager [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received unexpected event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a for instance with vm_state active and task_state None.
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.806 2 DEBUG nova.compute.manager [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.807 2 DEBUG oslo_concurrency.lockutils [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.808 2 DEBUG oslo_concurrency.lockutils [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.808 2 DEBUG oslo_concurrency.lockutils [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.809 2 DEBUG nova.compute.manager [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] No waiting events found dispatching network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:11:03 compute-0 nova_compute[260935]: 2025-10-11 09:11:03.810 2 WARNING nova.compute.manager [req-7d3522dc-64d4-488f-85fc-217ba7f93ceb req-ac3a1a01-9bb6-4257-922f-9bbc761c4819 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received unexpected event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a for instance with vm_state active and task_state None.
Oct 11 09:11:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:11:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 415 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 858 KiB/s wr, 27 op/s
Oct 11 09:11:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Oct 11 09:11:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Oct 11 09:11:04 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Oct 11 09:11:04 compute-0 nova_compute[260935]: 2025-10-11 09:11:04.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033876677777235136 of space, bias 1.0, pg target 1.0163003333170542 quantized to 32 (current 32)
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0007967915689913394 of space, bias 1.0, pg target 0.2382406791284105 quantized to 32 (current 32)
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:11:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:11:05 compute-0 ceph-mon[74313]: pgmap v2072: 321 pgs: 321 active+clean; 415 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 858 KiB/s wr, 27 op/s
Oct 11 09:11:05 compute-0 ceph-mon[74313]: osdmap e278: 3 total, 3 up, 3 in
Oct 11 09:11:06 compute-0 nova_compute[260935]: 2025-10-11 09:11:06.222 2 DEBUG nova.compute.manager [req-23acae09-ca70-4343-a0c5-e1b5bf4ad3ab req-08b5312c-e46b-44cb-ad6c-762fb4bc0533 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:11:06 compute-0 nova_compute[260935]: 2025-10-11 09:11:06.223 2 DEBUG oslo_concurrency.lockutils [req-23acae09-ca70-4343-a0c5-e1b5bf4ad3ab req-08b5312c-e46b-44cb-ad6c-762fb4bc0533 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:06 compute-0 nova_compute[260935]: 2025-10-11 09:11:06.224 2 DEBUG oslo_concurrency.lockutils [req-23acae09-ca70-4343-a0c5-e1b5bf4ad3ab req-08b5312c-e46b-44cb-ad6c-762fb4bc0533 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:06 compute-0 nova_compute[260935]: 2025-10-11 09:11:06.225 2 DEBUG oslo_concurrency.lockutils [req-23acae09-ca70-4343-a0c5-e1b5bf4ad3ab req-08b5312c-e46b-44cb-ad6c-762fb4bc0533 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:06 compute-0 nova_compute[260935]: 2025-10-11 09:11:06.225 2 DEBUG nova.compute.manager [req-23acae09-ca70-4343-a0c5-e1b5bf4ad3ab req-08b5312c-e46b-44cb-ad6c-762fb4bc0533 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] No waiting events found dispatching network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:11:06 compute-0 nova_compute[260935]: 2025-10-11 09:11:06.226 2 WARNING nova.compute.manager [req-23acae09-ca70-4343-a0c5-e1b5bf4ad3ab req-08b5312c-e46b-44cb-ad6c-762fb4bc0533 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received unexpected event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a for instance with vm_state active and task_state None.
Oct 11 09:11:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 415 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.0 MiB/s wr, 33 op/s
Oct 11 09:11:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Oct 11 09:11:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Oct 11 09:11:06 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Oct 11 09:11:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 11 09:11:06 compute-0 nova_compute[260935]: 2025-10-11 09:11:06.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:07 compute-0 ceph-mon[74313]: pgmap v2074: 321 pgs: 321 active+clean; 415 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.0 MiB/s wr, 33 op/s
Oct 11 09:11:07 compute-0 ceph-mon[74313]: osdmap e279: 3 total, 3 up, 3 in
Oct 11 09:11:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Oct 11 09:11:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Oct 11 09:11:07 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Oct 11 09:11:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 407 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 19 MiB/s wr, 233 op/s
Oct 11 09:11:08 compute-0 ceph-mon[74313]: osdmap e280: 3 total, 3 up, 3 in
Oct 11 09:11:08 compute-0 nova_compute[260935]: 2025-10-11 09:11:08.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:11:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:11:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Oct 11 09:11:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Oct 11 09:11:08 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Oct 11 09:11:09 compute-0 ceph-mon[74313]: pgmap v2077: 321 pgs: 321 active+clean; 407 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 19 MiB/s wr, 233 op/s
Oct 11 09:11:09 compute-0 ceph-mon[74313]: osdmap e281: 3 total, 3 up, 3 in
Oct 11 09:11:09 compute-0 nova_compute[260935]: 2025-10-11 09:11:09.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 407 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 18 MiB/s wr, 233 op/s
Oct 11 09:11:11 compute-0 ceph-mon[74313]: pgmap v2079: 321 pgs: 321 active+clean; 407 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 18 MiB/s wr, 233 op/s
Oct 11 09:11:11 compute-0 nova_compute[260935]: 2025-10-11 09:11:11.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 407 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 17 MiB/s wr, 267 op/s
Oct 11 09:11:13 compute-0 ceph-mon[74313]: pgmap v2080: 321 pgs: 321 active+clean; 407 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 17 MiB/s wr, 267 op/s
Oct 11 09:11:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:11:13 compute-0 ovn_controller[152945]: 2025-10-11T09:11:13Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:22:4a 10.100.0.9
Oct 11 09:11:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 407 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 13 MiB/s wr, 210 op/s
Oct 11 09:11:14 compute-0 nova_compute[260935]: 2025-10-11 09:11:14.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:15.206 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:15.207 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:15.209 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:15 compute-0 ceph-mon[74313]: pgmap v2081: 321 pgs: 321 active+clean; 407 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 13 MiB/s wr, 210 op/s
Oct 11 09:11:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 407 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 1.6 KiB/s wr, 32 op/s
Oct 11 09:11:16 compute-0 nova_compute[260935]: 2025-10-11 09:11:16.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:17 compute-0 ceph-mon[74313]: pgmap v2082: 321 pgs: 321 active+clean; 407 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 1.6 KiB/s wr, 32 op/s
Oct 11 09:11:18 compute-0 unix_chkpwd[366675]: password check failed for user (root)
Oct 11 09:11:18 compute-0 sshd-session[366673]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:11:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2083: 321 pgs: 321 active+clean; 407 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 16 KiB/s wr, 77 op/s
Oct 11 09:11:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:11:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Oct 11 09:11:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Oct 11 09:11:18 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Oct 11 09:11:19 compute-0 nova_compute[260935]: 2025-10-11 09:11:19.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:11:19 compute-0 ceph-mon[74313]: pgmap v2083: 321 pgs: 321 active+clean; 407 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 16 KiB/s wr, 77 op/s
Oct 11 09:11:19 compute-0 ceph-mon[74313]: osdmap e282: 3 total, 3 up, 3 in
Oct 11 09:11:19 compute-0 nova_compute[260935]: 2025-10-11 09:11:19.757 2 INFO nova.compute.manager [None req-f5ca0e01-6f59-4bdb-a392-298635601835 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Get console output
Oct 11 09:11:19 compute-0 nova_compute[260935]: 2025-10-11 09:11:19.765 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:11:19 compute-0 nova_compute[260935]: 2025-10-11 09:11:19.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 407 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 16 KiB/s wr, 77 op/s
Oct 11 09:11:20 compute-0 sshd-session[366673]: Failed password for root from 165.232.82.252 port 43692 ssh2
Oct 11 09:11:20 compute-0 sshd-session[366673]: Connection closed by authenticating user root 165.232.82.252 port 43692 [preauth]
Oct 11 09:11:20 compute-0 nova_compute[260935]: 2025-10-11 09:11:20.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:11:20 compute-0 podman[366676]: 2025-10-11 09:11:20.809887346 +0000 UTC m=+0.094314731 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.514 2 DEBUG nova.compute.manager [req-4945e1cc-71bb-47d6-aefc-300ceab5f058 req-788d1078-bae1-48c2-8ae4-7639ba2f0cad e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-changed-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.515 2 DEBUG nova.compute.manager [req-4945e1cc-71bb-47d6-aefc-300ceab5f058 req-788d1078-bae1-48c2-8ae4-7639ba2f0cad e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Refreshing instance network info cache due to event network-changed-673c8285-440e-4cb6-b81b-1bd2ddb7375a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.515 2 DEBUG oslo_concurrency.lockutils [req-4945e1cc-71bb-47d6-aefc-300ceab5f058 req-788d1078-bae1-48c2-8ae4-7639ba2f0cad e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.516 2 DEBUG oslo_concurrency.lockutils [req-4945e1cc-71bb-47d6-aefc-300ceab5f058 req-788d1078-bae1-48c2-8ae4-7639ba2f0cad e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.516 2 DEBUG nova.network.neutron [req-4945e1cc-71bb-47d6-aefc-300ceab5f058 req-788d1078-bae1-48c2-8ae4-7639ba2f0cad e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Refreshing network info cache for port 673c8285-440e-4cb6-b81b-1bd2ddb7375a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.543 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.543 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.544 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.544 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.544 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.545 2 INFO nova.compute.manager [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Terminating instance
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.546 2 DEBUG nova.compute.manager [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:11:21 compute-0 kernel: tap673c8285-44 (unregistering): left promiscuous mode
Oct 11 09:11:21 compute-0 NetworkManager[44960]: <info>  [1760173881.6128] device (tap673c8285-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:21 compute-0 ovn_controller[152945]: 2025-10-11T09:11:21Z|00963|binding|INFO|Releasing lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a from this chassis (sb_readonly=0)
Oct 11 09:11:21 compute-0 ovn_controller[152945]: 2025-10-11T09:11:21Z|00964|binding|INFO|Setting lport 673c8285-440e-4cb6-b81b-1bd2ddb7375a down in Southbound
Oct 11 09:11:21 compute-0 ovn_controller[152945]: 2025-10-11T09:11:21Z|00965|binding|INFO|Removing iface tap673c8285-44 ovn-installed in OVS
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:21.643 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:22:4a 10.100.0.9'], port_security=['fa:16:3e:ac:22:4a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0b3da34e-478e-44fc-a1ec-2601998d2b0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8ebe05f7-a7b2-491d-bdc1-80d46e3deb2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9218e07c-cf9e-412a-8b5f-2d45bacdf8cb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=673c8285-440e-4cb6-b81b-1bd2ddb7375a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:11:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:21.647 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 673c8285-440e-4cb6-b81b-1bd2ddb7375a in datapath 2de6a2d5-4c4a-403f-9eb5-27dc7562315f unbound from our chassis
Oct 11 09:11:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:21.652 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2de6a2d5-4c4a-403f-9eb5-27dc7562315f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:11:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:21.654 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[37fa09b8-f404-4126-b373-02cdaa404a68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:21.655 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f namespace which is not needed anymore
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:11:21 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000068.scope: Deactivated successfully.
Oct 11 09:11:21 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000068.scope: Consumed 12.247s CPU time.
Oct 11 09:11:21 compute-0 systemd-machined[215705]: Machine qemu-124-instance-00000068 terminated.
Oct 11 09:11:21 compute-0 ceph-mon[74313]: pgmap v2085: 321 pgs: 321 active+clean; 407 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 16 KiB/s wr, 77 op/s
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.824 2 INFO nova.virt.libvirt.driver [-] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Instance destroyed successfully.
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.824 2 DEBUG nova.objects.instance [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'resources' on Instance uuid 0b3da34e-478e-44fc-a1ec-2601998d2b0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.843 2 DEBUG nova.virt.libvirt.vif [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:10:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2087957653',display_name='tempest-TestNetworkAdvancedServerOps-server-2087957653',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2087957653',id=104,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBElmuTcJVI/ZkWAz+5kv9/dAzHDbPqJ/RLoEbpcbzY+bQ/5rlnwJAiwfIH5I2AByz1siErLhAl04MipJZMi/HRmixiLKf1bIaWO7+9o5POQWUQay494cNjZNh9+hKmeEUA==',key_name='tempest-TestNetworkAdvancedServerOps-746953215',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:10:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-ng6vrpfk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:11:03Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0b3da34e-478e-44fc-a1ec-2601998d2b0d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.843 2 DEBUG nova.network.os_vif_util [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.844 2 DEBUG nova.network.os_vif_util [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:22:4a,bridge_name='br-int',has_traffic_filtering=True,id=673c8285-440e-4cb6-b81b-1bd2ddb7375a,network=Network(2de6a2d5-4c4a-403f-9eb5-27dc7562315f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap673c8285-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.844 2 DEBUG os_vif [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:22:4a,bridge_name='br-int',has_traffic_filtering=True,id=673c8285-440e-4cb6-b81b-1bd2ddb7375a,network=Network(2de6a2d5-4c4a-403f-9eb5-27dc7562315f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap673c8285-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.847 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap673c8285-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:11:21 compute-0 nova_compute[260935]: 2025-10-11 09:11:21.855 2 INFO os_vif [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:22:4a,bridge_name='br-int',has_traffic_filtering=True,id=673c8285-440e-4cb6-b81b-1bd2ddb7375a,network=Network(2de6a2d5-4c4a-403f-9eb5-27dc7562315f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap673c8285-44')
Oct 11 09:11:21 compute-0 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366657]: [NOTICE]   (366661) : haproxy version is 2.8.14-c23fe91
Oct 11 09:11:21 compute-0 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366657]: [NOTICE]   (366661) : path to executable is /usr/sbin/haproxy
Oct 11 09:11:21 compute-0 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366657]: [WARNING]  (366661) : Exiting Master process...
Oct 11 09:11:21 compute-0 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366657]: [WARNING]  (366661) : Exiting Master process...
Oct 11 09:11:21 compute-0 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366657]: [ALERT]    (366661) : Current worker (366663) exited with code 143 (Terminated)
Oct 11 09:11:21 compute-0 neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f[366657]: [WARNING]  (366661) : All workers exited. Exiting... (0)
Oct 11 09:11:21 compute-0 systemd[1]: libpod-390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45.scope: Deactivated successfully.
Oct 11 09:11:21 compute-0 podman[366728]: 2025-10-11 09:11:21.911306503 +0000 UTC m=+0.066303747 container died 390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 09:11:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45-userdata-shm.mount: Deactivated successfully.
Oct 11 09:11:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e0faee35d9e2350e7eab68b1a8768c75dd4c537a8039875f94be2481264f64f-merged.mount: Deactivated successfully.
Oct 11 09:11:21 compute-0 podman[366728]: 2025-10-11 09:11:21.963447116 +0000 UTC m=+0.118444360 container cleanup 390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 09:11:21 compute-0 systemd[1]: libpod-conmon-390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45.scope: Deactivated successfully.
Oct 11 09:11:22 compute-0 nova_compute[260935]: 2025-10-11 09:11:22.031 2 DEBUG nova.compute.manager [req-466d8e69-b97b-4211-a55c-99c6f61daa91 req-16ddeec1-42f0-4f88-8801-1aeaad7253cc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-unplugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:11:22 compute-0 nova_compute[260935]: 2025-10-11 09:11:22.032 2 DEBUG oslo_concurrency.lockutils [req-466d8e69-b97b-4211-a55c-99c6f61daa91 req-16ddeec1-42f0-4f88-8801-1aeaad7253cc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:22 compute-0 nova_compute[260935]: 2025-10-11 09:11:22.032 2 DEBUG oslo_concurrency.lockutils [req-466d8e69-b97b-4211-a55c-99c6f61daa91 req-16ddeec1-42f0-4f88-8801-1aeaad7253cc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:22 compute-0 nova_compute[260935]: 2025-10-11 09:11:22.032 2 DEBUG oslo_concurrency.lockutils [req-466d8e69-b97b-4211-a55c-99c6f61daa91 req-16ddeec1-42f0-4f88-8801-1aeaad7253cc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:22 compute-0 nova_compute[260935]: 2025-10-11 09:11:22.033 2 DEBUG nova.compute.manager [req-466d8e69-b97b-4211-a55c-99c6f61daa91 req-16ddeec1-42f0-4f88-8801-1aeaad7253cc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] No waiting events found dispatching network-vif-unplugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:11:22 compute-0 nova_compute[260935]: 2025-10-11 09:11:22.033 2 DEBUG nova.compute.manager [req-466d8e69-b97b-4211-a55c-99c6f61daa91 req-16ddeec1-42f0-4f88-8801-1aeaad7253cc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-unplugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:11:22 compute-0 podman[366779]: 2025-10-11 09:11:22.044428357 +0000 UTC m=+0.052107103 container remove 390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:11:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.053 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[63a18d1c-5535-4498-b7c8-e16496d0d675]: (4, ('Sat Oct 11 09:11:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f (390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45)\n390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45\nSat Oct 11 09:11:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f (390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45)\n390c567af3f4f9b17c471a557ae52e1fc53a30a1ce7d52271270e72501fa6d45\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.056 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[abdd2684-a3bc-4ff5-b2d5-c3476200935b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.058 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2de6a2d5-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:11:22 compute-0 nova_compute[260935]: 2025-10-11 09:11:22.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:22 compute-0 nova_compute[260935]: 2025-10-11 09:11:22.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:22 compute-0 kernel: tap2de6a2d5-40: left promiscuous mode
Oct 11 09:11:22 compute-0 nova_compute[260935]: 2025-10-11 09:11:22.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.084 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4a85c9f0-d2aa-4b38-bf8b-c27b8cfd9e93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.113 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e08cc8-b8e3-4515-bbc2-86b221c577e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.114 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[02fc31ea-c7e3-4850-863b-92ba96a298e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.139 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e666e34e-2233-4d07-9c89-3ba6e63c52e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570656, 'reachable_time': 43356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366795, 'error': None, 'target': 'ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d2de6a2d5\x2d4c4a\x2d403f\x2d9eb5\x2d27dc7562315f.mount: Deactivated successfully.
Oct 11 09:11:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.144 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2de6a2d5-4c4a-403f-9eb5-27dc7562315f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:11:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:22.144 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b560725f-60ef-4ba1-9fd3-53d5e82748e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:22 compute-0 nova_compute[260935]: 2025-10-11 09:11:22.345 2 INFO nova.virt.libvirt.driver [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Deleting instance files /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d_del
Oct 11 09:11:22 compute-0 nova_compute[260935]: 2025-10-11 09:11:22.347 2 INFO nova.virt.libvirt.driver [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Deletion of /var/lib/nova/instances/0b3da34e-478e-44fc-a1ec-2601998d2b0d_del complete
Oct 11 09:11:22 compute-0 nova_compute[260935]: 2025-10-11 09:11:22.398 2 INFO nova.compute.manager [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Took 0.85 seconds to destroy the instance on the hypervisor.
Oct 11 09:11:22 compute-0 nova_compute[260935]: 2025-10-11 09:11:22.399 2 DEBUG oslo.service.loopingcall [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:11:22 compute-0 nova_compute[260935]: 2025-10-11 09:11:22.399 2 DEBUG nova.compute.manager [-] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:11:22 compute-0 nova_compute[260935]: 2025-10-11 09:11:22.400 2 DEBUG nova.network.neutron [-] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:11:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 356 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 649 KiB/s rd, 27 KiB/s wr, 76 op/s
Oct 11 09:11:23 compute-0 nova_compute[260935]: 2025-10-11 09:11:23.357 2 DEBUG nova.network.neutron [-] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:11:23 compute-0 nova_compute[260935]: 2025-10-11 09:11:23.392 2 INFO nova.compute.manager [-] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Took 0.99 seconds to deallocate network for instance.
Oct 11 09:11:23 compute-0 nova_compute[260935]: 2025-10-11 09:11:23.414 2 DEBUG nova.network.neutron [req-4945e1cc-71bb-47d6-aefc-300ceab5f058 req-788d1078-bae1-48c2-8ae4-7639ba2f0cad e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updated VIF entry in instance network info cache for port 673c8285-440e-4cb6-b81b-1bd2ddb7375a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:11:23 compute-0 nova_compute[260935]: 2025-10-11 09:11:23.415 2 DEBUG nova.network.neutron [req-4945e1cc-71bb-47d6-aefc-300ceab5f058 req-788d1078-bae1-48c2-8ae4-7639ba2f0cad e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updating instance_info_cache with network_info: [{"id": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "address": "fa:16:3e:ac:22:4a", "network": {"id": "2de6a2d5-4c4a-403f-9eb5-27dc7562315f", "bridge": "br-int", "label": "tempest-network-smoke--795331470", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap673c8285-44", "ovs_interfaceid": "673c8285-440e-4cb6-b81b-1bd2ddb7375a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:11:23 compute-0 nova_compute[260935]: 2025-10-11 09:11:23.446 2 DEBUG oslo_concurrency.lockutils [req-4945e1cc-71bb-47d6-aefc-300ceab5f058 req-788d1078-bae1-48c2-8ae4-7639ba2f0cad e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0b3da34e-478e-44fc-a1ec-2601998d2b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:11:23 compute-0 nova_compute[260935]: 2025-10-11 09:11:23.452 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:23 compute-0 nova_compute[260935]: 2025-10-11 09:11:23.453 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:23 compute-0 nova_compute[260935]: 2025-10-11 09:11:23.616 2 DEBUG oslo_concurrency.processutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:11:23 compute-0 nova_compute[260935]: 2025-10-11 09:11:23.675 2 DEBUG nova.compute.manager [req-14a1f755-df81-44d3-9935-ab1447a2f179 req-afb0180c-3980-4b59-985f-44c4a093d16a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-deleted-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:11:23 compute-0 nova_compute[260935]: 2025-10-11 09:11:23.676 2 INFO nova.compute.manager [req-14a1f755-df81-44d3-9935-ab1447a2f179 req-afb0180c-3980-4b59-985f-44c4a093d16a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Neutron deleted interface 673c8285-440e-4cb6-b81b-1bd2ddb7375a; detaching it from the instance and deleting it from the info cache
Oct 11 09:11:23 compute-0 nova_compute[260935]: 2025-10-11 09:11:23.677 2 DEBUG nova.network.neutron [req-14a1f755-df81-44d3-9935-ab1447a2f179 req-afb0180c-3980-4b59-985f-44c4a093d16a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:11:23 compute-0 nova_compute[260935]: 2025-10-11 09:11:23.702 2 DEBUG nova.compute.manager [req-14a1f755-df81-44d3-9935-ab1447a2f179 req-afb0180c-3980-4b59-985f-44c4a093d16a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Detach interface failed, port_id=673c8285-440e-4cb6-b81b-1bd2ddb7375a, reason: Instance 0b3da34e-478e-44fc-a1ec-2601998d2b0d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 09:11:23 compute-0 ceph-mon[74313]: pgmap v2086: 321 pgs: 321 active+clean; 356 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 649 KiB/s rd, 27 KiB/s wr, 76 op/s
Oct 11 09:11:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:11:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:11:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3961666844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:11:24 compute-0 nova_compute[260935]: 2025-10-11 09:11:24.106 2 DEBUG oslo_concurrency.processutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:11:24 compute-0 nova_compute[260935]: 2025-10-11 09:11:24.115 2 DEBUG nova.compute.provider_tree [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:11:24 compute-0 nova_compute[260935]: 2025-10-11 09:11:24.135 2 DEBUG nova.scheduler.client.report [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:11:24 compute-0 nova_compute[260935]: 2025-10-11 09:11:24.160 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:24 compute-0 nova_compute[260935]: 2025-10-11 09:11:24.200 2 INFO nova.scheduler.client.report [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Deleted allocations for instance 0b3da34e-478e-44fc-a1ec-2601998d2b0d
Oct 11 09:11:24 compute-0 nova_compute[260935]: 2025-10-11 09:11:24.277 2 DEBUG nova.compute.manager [req-4532ac89-a18a-43c3-8669-4aeb3b03e114 req-f5ffd120-fc2a-453b-a6bf-28d2aa14e313 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:11:24 compute-0 nova_compute[260935]: 2025-10-11 09:11:24.278 2 DEBUG oslo_concurrency.lockutils [req-4532ac89-a18a-43c3-8669-4aeb3b03e114 req-f5ffd120-fc2a-453b-a6bf-28d2aa14e313 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:24 compute-0 nova_compute[260935]: 2025-10-11 09:11:24.278 2 DEBUG oslo_concurrency.lockutils [req-4532ac89-a18a-43c3-8669-4aeb3b03e114 req-f5ffd120-fc2a-453b-a6bf-28d2aa14e313 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:24 compute-0 nova_compute[260935]: 2025-10-11 09:11:24.279 2 DEBUG oslo_concurrency.lockutils [req-4532ac89-a18a-43c3-8669-4aeb3b03e114 req-f5ffd120-fc2a-453b-a6bf-28d2aa14e313 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:24 compute-0 nova_compute[260935]: 2025-10-11 09:11:24.279 2 DEBUG nova.compute.manager [req-4532ac89-a18a-43c3-8669-4aeb3b03e114 req-f5ffd120-fc2a-453b-a6bf-28d2aa14e313 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] No waiting events found dispatching network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:11:24 compute-0 nova_compute[260935]: 2025-10-11 09:11:24.280 2 WARNING nova.compute.manager [req-4532ac89-a18a-43c3-8669-4aeb3b03e114 req-f5ffd120-fc2a-453b-a6bf-28d2aa14e313 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Received unexpected event network-vif-plugged-673c8285-440e-4cb6-b81b-1bd2ddb7375a for instance with vm_state deleted and task_state None.
Oct 11 09:11:24 compute-0 nova_compute[260935]: 2025-10-11 09:11:24.284 2 DEBUG oslo_concurrency.lockutils [None req-7b47f653-3159-488e-a698-078bcab638c1 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0b3da34e-478e-44fc-a1ec-2601998d2b0d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 28 KiB/s wr, 77 op/s
Oct 11 09:11:24 compute-0 nova_compute[260935]: 2025-10-11 09:11:24.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:11:24 compute-0 nova_compute[260935]: 2025-10-11 09:11:24.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:11:24 compute-0 nova_compute[260935]: 2025-10-11 09:11:24.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:11:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3961666844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:11:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:11:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:11:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:11:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:11:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:11:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:11:24 compute-0 nova_compute[260935]: 2025-10-11 09:11:24.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:25 compute-0 nova_compute[260935]: 2025-10-11 09:11:25.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:25.486 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:11:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:25.488 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:11:25 compute-0 nova_compute[260935]: 2025-10-11 09:11:25.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:11:25 compute-0 nova_compute[260935]: 2025-10-11 09:11:25.732 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:25 compute-0 nova_compute[260935]: 2025-10-11 09:11:25.732 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:25 compute-0 nova_compute[260935]: 2025-10-11 09:11:25.733 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:25 compute-0 nova_compute[260935]: 2025-10-11 09:11:25.733 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:11:25 compute-0 nova_compute[260935]: 2025-10-11 09:11:25.734 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:11:25 compute-0 ceph-mon[74313]: pgmap v2087: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 28 KiB/s wr, 77 op/s
Oct 11 09:11:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:11:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/761223916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.257 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.378 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.379 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.379 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.392 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.392 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:11:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 28 KiB/s wr, 77 op/s
Oct 11 09:11:26 compute-0 sudo[366841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:11:26 compute-0 sudo[366841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:26 compute-0 sudo[366841]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:26 compute-0 sudo[366866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:11:26 compute-0 sudo[366866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:26 compute-0 sudo[366866]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:11:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266745744' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:11:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:11:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266745744' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.744 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.748 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3106MB free_disk=59.830528259277344GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.750 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:26 compute-0 sudo[366891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:11:26 compute-0 sudo[366891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:26 compute-0 sudo[366891]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/761223916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:11:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1266745744' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:11:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1266745744' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:26 compute-0 sudo[366916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:11:26 compute-0 sudo[366916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.873 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.873 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.874 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.874 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.874 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:11:26 compute-0 podman[366940]: 2025-10-11 09:11:26.963206663 +0000 UTC m=+0.085062516 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 09:11:26 compute-0 nova_compute[260935]: 2025-10-11 09:11:26.980 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:11:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:11:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3666519937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:11:27 compute-0 nova_compute[260935]: 2025-10-11 09:11:27.445 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:11:27 compute-0 nova_compute[260935]: 2025-10-11 09:11:27.454 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:11:27 compute-0 nova_compute[260935]: 2025-10-11 09:11:27.481 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:11:27 compute-0 sudo[366916]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:27 compute-0 nova_compute[260935]: 2025-10-11 09:11:27.515 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:11:27 compute-0 nova_compute[260935]: 2025-10-11 09:11:27.516 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:27 compute-0 sudo[367013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:11:27 compute-0 sudo[367013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:27 compute-0 sudo[367013]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:27 compute-0 sudo[367038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:11:27 compute-0 sudo[367038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:27 compute-0 sudo[367038]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:27 compute-0 sudo[367063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:11:27 compute-0 sudo[367063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:27 compute-0 sudo[367063]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:27 compute-0 ceph-mon[74313]: pgmap v2088: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 28 KiB/s wr, 77 op/s
Oct 11 09:11:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3666519937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:11:27 compute-0 sudo[367088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 11 09:11:27 compute-0 sudo[367088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:28 compute-0 sudo[367088]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:11:28 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:11:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:11:28 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:11:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:11:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:11:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:11:28 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:11:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:11:28 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:11:28 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 846c8ff3-2ccb-47e3-826a-7a0e7ccbb9db does not exist
Oct 11 09:11:28 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev c5434dcd-861b-4075-bd16-62120523cf83 does not exist
Oct 11 09:11:28 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 3cb2f74c-db94-494f-8993-c2883eb7ea06 does not exist
Oct 11 09:11:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:11:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:11:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:11:28 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:11:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:11:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:11:28 compute-0 sudo[367131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:11:28 compute-0 sudo[367131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:28 compute-0 sudo[367131]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:28 compute-0 sudo[367156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:11:28 compute-0 sudo[367156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:28 compute-0 sudo[367156]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:28 compute-0 ovn_controller[152945]: 2025-10-11T09:11:28Z|00966|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:11:28 compute-0 ovn_controller[152945]: 2025-10-11T09:11:28Z|00967|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:11:28 compute-0 nova_compute[260935]: 2025-10-11 09:11:28.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:28 compute-0 sudo[367181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:11:28 compute-0 sudo[367181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:28 compute-0 sudo[367181]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:28 compute-0 sudo[367206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:11:28 compute-0 ovn_controller[152945]: 2025-10-11T09:11:28Z|00968|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:11:28 compute-0 ovn_controller[152945]: 2025-10-11T09:11:28Z|00969|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:11:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2089: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 14 KiB/s wr, 34 op/s
Oct 11 09:11:28 compute-0 sudo[367206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:28 compute-0 nova_compute[260935]: 2025-10-11 09:11:28.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:28 compute-0 nova_compute[260935]: 2025-10-11 09:11:28.512 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:11:28 compute-0 nova_compute[260935]: 2025-10-11 09:11:28.543 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:11:28 compute-0 nova_compute[260935]: 2025-10-11 09:11:28.544 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:11:28 compute-0 nova_compute[260935]: 2025-10-11 09:11:28.611 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 09:11:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:11:28 compute-0 podman[367272]: 2025-10-11 09:11:28.952515445 +0000 UTC m=+0.063981162 container create 1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:11:28 compute-0 systemd[1]: Started libpod-conmon-1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1.scope.
Oct 11 09:11:29 compute-0 podman[367272]: 2025-10-11 09:11:28.926850725 +0000 UTC m=+0.038316472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:11:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:11:29 compute-0 podman[367272]: 2025-10-11 09:11:29.042593369 +0000 UTC m=+0.154059116 container init 1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 09:11:29 compute-0 podman[367272]: 2025-10-11 09:11:29.059409854 +0000 UTC m=+0.170875571 container start 1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 09:11:29 compute-0 podman[367272]: 2025-10-11 09:11:29.062739306 +0000 UTC m=+0.174205063 container attach 1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 09:11:29 compute-0 exciting_mestorf[367289]: 167 167
Oct 11 09:11:29 compute-0 systemd[1]: libpod-1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1.scope: Deactivated successfully.
Oct 11 09:11:29 compute-0 podman[367272]: 2025-10-11 09:11:29.066707576 +0000 UTC m=+0.178173323 container died 1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 09:11:29 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:11:29 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:11:29 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:11:29 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:11:29 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:11:29 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:11:29 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:11:29 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:11:29 compute-0 ceph-mon[74313]: pgmap v2089: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 14 KiB/s wr, 34 op/s
Oct 11 09:11:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6a2775652ca2131ba9553fcdbbc8621f0aef780ccbca67dc59bef8d3bef9f7c-merged.mount: Deactivated successfully.
Oct 11 09:11:29 compute-0 podman[367272]: 2025-10-11 09:11:29.130172973 +0000 UTC m=+0.241638710 container remove 1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:11:29 compute-0 systemd[1]: libpod-conmon-1344808d8463297da901a5dec9a35c10b4d72edd9b93d38004490c81650c7bb1.scope: Deactivated successfully.
Oct 11 09:11:29 compute-0 podman[367315]: 2025-10-11 09:11:29.396514204 +0000 UTC m=+0.066568883 container create c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 09:11:29 compute-0 systemd[1]: Started libpod-conmon-c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27.scope.
Oct 11 09:11:29 compute-0 podman[367315]: 2025-10-11 09:11:29.373021954 +0000 UTC m=+0.043076633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:11:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4e0b44162b0a251d73c2d9858f7878c2e811d37d7ad9f2c9b22a6651dd5cde/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4e0b44162b0a251d73c2d9858f7878c2e811d37d7ad9f2c9b22a6651dd5cde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4e0b44162b0a251d73c2d9858f7878c2e811d37d7ad9f2c9b22a6651dd5cde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4e0b44162b0a251d73c2d9858f7878c2e811d37d7ad9f2c9b22a6651dd5cde/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4e0b44162b0a251d73c2d9858f7878c2e811d37d7ad9f2c9b22a6651dd5cde/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:11:29 compute-0 podman[367315]: 2025-10-11 09:11:29.513829831 +0000 UTC m=+0.183884530 container init c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 09:11:29 compute-0 podman[367315]: 2025-10-11 09:11:29.531740267 +0000 UTC m=+0.201794946 container start c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:11:29 compute-0 podman[367315]: 2025-10-11 09:11:29.536493848 +0000 UTC m=+0.206548497 container attach c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 09:11:29 compute-0 nova_compute[260935]: 2025-10-11 09:11:29.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2090: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 12 KiB/s wr, 30 op/s
Oct 11 09:11:30 compute-0 peaceful_mahavira[367331]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:11:30 compute-0 peaceful_mahavira[367331]: --> relative data size: 1.0
Oct 11 09:11:30 compute-0 peaceful_mahavira[367331]: --> All data devices are unavailable
Oct 11 09:11:30 compute-0 systemd[1]: libpod-c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27.scope: Deactivated successfully.
Oct 11 09:11:30 compute-0 systemd[1]: libpod-c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27.scope: Consumed 1.063s CPU time.
Oct 11 09:11:30 compute-0 podman[367315]: 2025-10-11 09:11:30.676627047 +0000 UTC m=+1.346681766 container died c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:11:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f4e0b44162b0a251d73c2d9858f7878c2e811d37d7ad9f2c9b22a6651dd5cde-merged.mount: Deactivated successfully.
Oct 11 09:11:30 compute-0 podman[367315]: 2025-10-11 09:11:30.761594648 +0000 UTC m=+1.431649337 container remove c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct 11 09:11:30 compute-0 systemd[1]: libpod-conmon-c68cd01edb2d5be213b929b8d089cff7e502e1af2439b64f60215cae56158f27.scope: Deactivated successfully.
Oct 11 09:11:30 compute-0 sudo[367206]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:30 compute-0 sudo[367374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:11:30 compute-0 sudo[367374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:30 compute-0 sudo[367374]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:31 compute-0 sudo[367399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:11:31 compute-0 sudo[367399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:31 compute-0 sudo[367399]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:31 compute-0 sudo[367424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:11:31 compute-0 sudo[367424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:31 compute-0 sudo[367424]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:31 compute-0 sudo[367449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:11:31 compute-0 sudo[367449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:31 compute-0 ceph-mon[74313]: pgmap v2090: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 12 KiB/s wr, 30 op/s
Oct 11 09:11:31 compute-0 podman[367515]: 2025-10-11 09:11:31.728329087 +0000 UTC m=+0.064242639 container create 0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_curie, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 11 09:11:31 compute-0 systemd[1]: Started libpod-conmon-0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17.scope.
Oct 11 09:11:31 compute-0 podman[367515]: 2025-10-11 09:11:31.703304474 +0000 UTC m=+0.039218066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:11:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:11:31 compute-0 podman[367515]: 2025-10-11 09:11:31.833374244 +0000 UTC m=+0.169287836 container init 0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_curie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 09:11:31 compute-0 podman[367515]: 2025-10-11 09:11:31.84333658 +0000 UTC m=+0.179250122 container start 0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:11:31 compute-0 podman[367515]: 2025-10-11 09:11:31.847884476 +0000 UTC m=+0.183798048 container attach 0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_curie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:11:31 compute-0 cool_curie[367532]: 167 167
Oct 11 09:11:31 compute-0 systemd[1]: libpod-0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17.scope: Deactivated successfully.
Oct 11 09:11:31 compute-0 podman[367515]: 2025-10-11 09:11:31.850364965 +0000 UTC m=+0.186278507 container died 0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:11:31 compute-0 nova_compute[260935]: 2025-10-11 09:11:31.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9baec829c3182bd4a312656579301074f89f2cfe95ee5134ae29bbd1397ef7c3-merged.mount: Deactivated successfully.
Oct 11 09:11:31 compute-0 podman[367515]: 2025-10-11 09:11:31.909394669 +0000 UTC m=+0.245308221 container remove 0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_curie, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 09:11:31 compute-0 systemd[1]: libpod-conmon-0718606399c49ea856a66fa2be08f8ec259208f20603f5a7380ebe1a92395c17.scope: Deactivated successfully.
Oct 11 09:11:32 compute-0 podman[367538]: 2025-10-11 09:11:32.01892437 +0000 UTC m=+0.119634692 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible)
Oct 11 09:11:32 compute-0 podman[367546]: 2025-10-11 09:11:32.050459883 +0000 UTC m=+0.140831649 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 09:11:32 compute-0 podman[367600]: 2025-10-11 09:11:32.150887283 +0000 UTC m=+0.054505840 container create 61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:11:32 compute-0 systemd[1]: Started libpod-conmon-61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8.scope.
Oct 11 09:11:32 compute-0 podman[367600]: 2025-10-11 09:11:32.130254532 +0000 UTC m=+0.033873059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:11:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:11:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4734989fb6aba1cf2bd0cd2b0b811a1d1b66410fbbbca3466a96b26daf115d99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:11:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4734989fb6aba1cf2bd0cd2b0b811a1d1b66410fbbbca3466a96b26daf115d99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:11:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4734989fb6aba1cf2bd0cd2b0b811a1d1b66410fbbbca3466a96b26daf115d99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:11:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4734989fb6aba1cf2bd0cd2b0b811a1d1b66410fbbbca3466a96b26daf115d99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:11:32 compute-0 podman[367600]: 2025-10-11 09:11:32.283984827 +0000 UTC m=+0.187603414 container init 61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 09:11:32 compute-0 podman[367600]: 2025-10-11 09:11:32.296200065 +0000 UTC m=+0.199818572 container start 61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 09:11:32 compute-0 podman[367600]: 2025-10-11 09:11:32.299949249 +0000 UTC m=+0.203567806 container attach 61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:11:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 28 op/s
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]: {
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:     "0": [
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:         {
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "devices": [
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "/dev/loop3"
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             ],
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "lv_name": "ceph_lv0",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "lv_size": "21470642176",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "name": "ceph_lv0",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "tags": {
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.cluster_name": "ceph",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.crush_device_class": "",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.encrypted": "0",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.osd_id": "0",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.type": "block",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.vdo": "0"
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             },
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "type": "block",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "vg_name": "ceph_vg0"
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:         }
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:     ],
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:     "1": [
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:         {
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "devices": [
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "/dev/loop4"
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             ],
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "lv_name": "ceph_lv1",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "lv_size": "21470642176",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "name": "ceph_lv1",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "tags": {
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.cluster_name": "ceph",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.crush_device_class": "",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.encrypted": "0",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.osd_id": "1",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.type": "block",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.vdo": "0"
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             },
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "type": "block",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "vg_name": "ceph_vg1"
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:         }
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:     ],
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:     "2": [
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:         {
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "devices": [
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "/dev/loop5"
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             ],
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "lv_name": "ceph_lv2",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "lv_size": "21470642176",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "name": "ceph_lv2",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "tags": {
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.cluster_name": "ceph",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.crush_device_class": "",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.encrypted": "0",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.osd_id": "2",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.type": "block",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:                 "ceph.vdo": "0"
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             },
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "type": "block",
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:             "vg_name": "ceph_vg2"
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:         }
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]:     ]
Oct 11 09:11:33 compute-0 elastic_lamarr[367616]: }
Oct 11 09:11:33 compute-0 systemd[1]: libpod-61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8.scope: Deactivated successfully.
Oct 11 09:11:33 compute-0 podman[367600]: 2025-10-11 09:11:33.129991493 +0000 UTC m=+1.033610030 container died 61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 09:11:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4734989fb6aba1cf2bd0cd2b0b811a1d1b66410fbbbca3466a96b26daf115d99-merged.mount: Deactivated successfully.
Oct 11 09:11:33 compute-0 podman[367600]: 2025-10-11 09:11:33.206028597 +0000 UTC m=+1.109647154 container remove 61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 09:11:33 compute-0 systemd[1]: libpod-conmon-61563d162d002c17e679dbd3d29cdb240492a79eed2ece2859d3e33db9fe2cb8.scope: Deactivated successfully.
Oct 11 09:11:33 compute-0 sudo[367449]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:33 compute-0 sudo[367640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:11:33 compute-0 sudo[367640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:33 compute-0 sudo[367640]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:33 compute-0 sudo[367665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:11:33 compute-0 sudo[367665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:33 compute-0 sudo[367665]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:33 compute-0 ceph-mon[74313]: pgmap v2091: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 28 op/s
Oct 11 09:11:33 compute-0 sudo[367690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:11:33 compute-0 sudo[367690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:33 compute-0 sudo[367690]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:33 compute-0 sudo[367715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:11:33 compute-0 sudo[367715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:11:34 compute-0 podman[367779]: 2025-10-11 09:11:34.055677145 +0000 UTC m=+0.046189380 container create af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 09:11:34 compute-0 systemd[1]: Started libpod-conmon-af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202.scope.
Oct 11 09:11:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:11:34 compute-0 podman[367779]: 2025-10-11 09:11:34.036466183 +0000 UTC m=+0.026978448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:11:34 compute-0 podman[367779]: 2025-10-11 09:11:34.151683122 +0000 UTC m=+0.142195447 container init af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 09:11:34 compute-0 podman[367779]: 2025-10-11 09:11:34.161577316 +0000 UTC m=+0.152089571 container start af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:11:34 compute-0 podman[367779]: 2025-10-11 09:11:34.166279686 +0000 UTC m=+0.156791951 container attach af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 09:11:34 compute-0 relaxed_bouman[367796]: 167 167
Oct 11 09:11:34 compute-0 podman[367779]: 2025-10-11 09:11:34.169894986 +0000 UTC m=+0.160407251 container died af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 09:11:34 compute-0 systemd[1]: libpod-af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202.scope: Deactivated successfully.
Oct 11 09:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc53acc7d5985cc7a43820902b781d3a6135ccd70fa205ae659107aba912596e-merged.mount: Deactivated successfully.
Oct 11 09:11:34 compute-0 podman[367779]: 2025-10-11 09:11:34.224977401 +0000 UTC m=+0.215489676 container remove af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 09:11:34 compute-0 systemd[1]: libpod-conmon-af35bff4549b5f4c46cb930b2b32e0918335e11d083ceadcd9e855ccb64a1202.scope: Deactivated successfully.
Oct 11 09:11:34 compute-0 podman[367820]: 2025-10-11 09:11:34.476352619 +0000 UTC m=+0.072830927 container create 0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 09:11:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 5.6 KiB/s rd, 682 B/s wr, 9 op/s
Oct 11 09:11:34 compute-0 systemd[1]: Started libpod-conmon-0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af.scope.
Oct 11 09:11:34 compute-0 podman[367820]: 2025-10-11 09:11:34.446707368 +0000 UTC m=+0.043185746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:11:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5438a642c54e083513f332d51858fc774c756581c63a9a701d675939eb6cd55c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5438a642c54e083513f332d51858fc774c756581c63a9a701d675939eb6cd55c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5438a642c54e083513f332d51858fc774c756581c63a9a701d675939eb6cd55c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5438a642c54e083513f332d51858fc774c756581c63a9a701d675939eb6cd55c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:11:34 compute-0 podman[367820]: 2025-10-11 09:11:34.599422065 +0000 UTC m=+0.195900403 container init 0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:11:34 compute-0 podman[367820]: 2025-10-11 09:11:34.616462847 +0000 UTC m=+0.212941175 container start 0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:11:34 compute-0 podman[367820]: 2025-10-11 09:11:34.620765316 +0000 UTC m=+0.217243654 container attach 0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:11:34 compute-0 nova_compute[260935]: 2025-10-11 09:11:34.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:35.491 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:11:35 compute-0 ceph-mon[74313]: pgmap v2092: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 5.6 KiB/s rd, 682 B/s wr, 9 op/s
Oct 11 09:11:35 compute-0 great_snyder[367837]: {
Oct 11 09:11:35 compute-0 great_snyder[367837]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:11:35 compute-0 great_snyder[367837]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:11:35 compute-0 great_snyder[367837]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:11:35 compute-0 great_snyder[367837]:         "osd_id": 2,
Oct 11 09:11:35 compute-0 great_snyder[367837]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:11:35 compute-0 great_snyder[367837]:         "type": "bluestore"
Oct 11 09:11:35 compute-0 great_snyder[367837]:     },
Oct 11 09:11:35 compute-0 great_snyder[367837]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:11:35 compute-0 great_snyder[367837]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:11:35 compute-0 great_snyder[367837]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:11:35 compute-0 great_snyder[367837]:         "osd_id": 0,
Oct 11 09:11:35 compute-0 great_snyder[367837]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:11:35 compute-0 great_snyder[367837]:         "type": "bluestore"
Oct 11 09:11:35 compute-0 great_snyder[367837]:     },
Oct 11 09:11:35 compute-0 great_snyder[367837]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:11:35 compute-0 great_snyder[367837]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:11:35 compute-0 great_snyder[367837]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:11:35 compute-0 great_snyder[367837]:         "osd_id": 1,
Oct 11 09:11:35 compute-0 great_snyder[367837]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:11:35 compute-0 great_snyder[367837]:         "type": "bluestore"
Oct 11 09:11:35 compute-0 great_snyder[367837]:     }
Oct 11 09:11:35 compute-0 great_snyder[367837]: }
Oct 11 09:11:35 compute-0 systemd[1]: libpod-0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af.scope: Deactivated successfully.
Oct 11 09:11:35 compute-0 podman[367820]: 2025-10-11 09:11:35.713532043 +0000 UTC m=+1.310010381 container died 0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:11:35 compute-0 systemd[1]: libpod-0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af.scope: Consumed 1.105s CPU time.
Oct 11 09:11:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5438a642c54e083513f332d51858fc774c756581c63a9a701d675939eb6cd55c-merged.mount: Deactivated successfully.
Oct 11 09:11:35 compute-0 podman[367820]: 2025-10-11 09:11:35.785283299 +0000 UTC m=+1.381761607 container remove 0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_snyder, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:11:35 compute-0 systemd[1]: libpod-conmon-0110d9d0bbe8920e14de7052fb9814eb727c0ae00da8b380970ea9853aae09af.scope: Deactivated successfully.
Oct 11 09:11:35 compute-0 sudo[367715]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:11:35 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:11:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:11:35 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:11:35 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 1dd096a8-c3b3-4dd6-824c-1f8779428b53 does not exist
Oct 11 09:11:35 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 92cec2f9-607b-4756-845c-13ebc86319de does not exist
Oct 11 09:11:35 compute-0 sudo[367884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:11:35 compute-0 sudo[367884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:35 compute-0 sudo[367884]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:36 compute-0 sudo[367909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:11:36 compute-0 sudo[367909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:11:36 compute-0 sudo[367909]: pam_unix(sudo:session): session closed for user root
Oct 11 09:11:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 341 B/s wr, 4 op/s
Oct 11 09:11:36 compute-0 nova_compute[260935]: 2025-10-11 09:11:36.792 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173881.7909877, 0b3da34e-478e-44fc-a1ec-2601998d2b0d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:11:36 compute-0 nova_compute[260935]: 2025-10-11 09:11:36.793 2 INFO nova.compute.manager [-] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] VM Stopped (Lifecycle Event)
Oct 11 09:11:36 compute-0 nova_compute[260935]: 2025-10-11 09:11:36.820 2 DEBUG nova.compute.manager [None req-2a0d7db5-a1fc-4746-b921-0f93b8001a07 - - - - - -] [instance: 0b3da34e-478e-44fc-a1ec-2601998d2b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:11:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:11:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:11:36 compute-0 nova_compute[260935]: 2025-10-11 09:11:36.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:37 compute-0 ceph-mon[74313]: pgmap v2093: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 341 B/s wr, 4 op/s
Oct 11 09:11:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 341 B/s wr, 4 op/s
Oct 11 09:11:38 compute-0 ceph-mon[74313]: pgmap v2094: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 341 B/s wr, 4 op/s
Oct 11 09:11:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:11:39 compute-0 nova_compute[260935]: 2025-10-11 09:11:39.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2095: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:11:41 compute-0 ceph-mon[74313]: pgmap v2095: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:11:41 compute-0 nova_compute[260935]: 2025-10-11 09:11:41.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:11:43 compute-0 ceph-mon[74313]: pgmap v2096: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:11:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:11:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2097: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:11:44 compute-0 nova_compute[260935]: 2025-10-11 09:11:44.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:45 compute-0 ceph-mon[74313]: pgmap v2097: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:11:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2098: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:11:46 compute-0 nova_compute[260935]: 2025-10-11 09:11:46.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:47 compute-0 nova_compute[260935]: 2025-10-11 09:11:47.139 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:47 compute-0 nova_compute[260935]: 2025-10-11 09:11:47.140 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:47 compute-0 nova_compute[260935]: 2025-10-11 09:11:47.161 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:11:47 compute-0 nova_compute[260935]: 2025-10-11 09:11:47.266 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:47 compute-0 nova_compute[260935]: 2025-10-11 09:11:47.267 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:47 compute-0 nova_compute[260935]: 2025-10-11 09:11:47.279 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:11:47 compute-0 nova_compute[260935]: 2025-10-11 09:11:47.280 2 INFO nova.compute.claims [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:11:47 compute-0 nova_compute[260935]: 2025-10-11 09:11:47.524 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:11:47 compute-0 ceph-mon[74313]: pgmap v2098: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:11:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:11:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2240100118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.027 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.035 2 DEBUG nova.compute.provider_tree [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.056 2 DEBUG nova.scheduler.client.report [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.092 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.093 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.192 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.193 2 DEBUG nova.network.neutron [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.219 2 INFO nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.259 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.361 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.364 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.365 2 INFO nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Creating image(s)
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.403 2 DEBUG nova.storage.rbd_utils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.447 2 DEBUG nova.storage.rbd_utils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:11:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2099: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.487 2 DEBUG nova.storage.rbd_utils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.493 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:11:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2240100118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.598 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.599 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.600 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.600 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.625 2 DEBUG nova.storage.rbd_utils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.629 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.781 2 DEBUG nova.policy [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a213c3877fc144a3af0be3c3d853f999', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca4b15770e784f45910b630937562cb6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:11:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:11:48 compute-0 nova_compute[260935]: 2025-10-11 09:11:48.918 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:11:49 compute-0 nova_compute[260935]: 2025-10-11 09:11:49.011 2 DEBUG nova.storage.rbd_utils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] resizing rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:11:49 compute-0 nova_compute[260935]: 2025-10-11 09:11:49.143 2 DEBUG nova.objects.instance [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'migration_context' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:11:49 compute-0 nova_compute[260935]: 2025-10-11 09:11:49.161 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:11:49 compute-0 nova_compute[260935]: 2025-10-11 09:11:49.162 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Ensure instance console log exists: /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:11:49 compute-0 nova_compute[260935]: 2025-10-11 09:11:49.162 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:49 compute-0 nova_compute[260935]: 2025-10-11 09:11:49.163 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:49 compute-0 nova_compute[260935]: 2025-10-11 09:11:49.164 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:49 compute-0 ceph-mon[74313]: pgmap v2099: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:11:49 compute-0 nova_compute[260935]: 2025-10-11 09:11:49.819 2 DEBUG nova.network.neutron [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Successfully created port: 0ae3e094-fe06-40c1-8840-cf16ba40f7fb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:11:49 compute-0 nova_compute[260935]: 2025-10-11 09:11:49.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:11:51 compute-0 nova_compute[260935]: 2025-10-11 09:11:51.185 2 DEBUG nova.network.neutron [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Successfully updated port: 0ae3e094-fe06-40c1-8840-cf16ba40f7fb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:11:51 compute-0 nova_compute[260935]: 2025-10-11 09:11:51.206 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:11:51 compute-0 nova_compute[260935]: 2025-10-11 09:11:51.206 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquired lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:11:51 compute-0 nova_compute[260935]: 2025-10-11 09:11:51.206 2 DEBUG nova.network.neutron [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:11:51 compute-0 nova_compute[260935]: 2025-10-11 09:11:51.475 2 DEBUG nova.compute.manager [req-1a83ff77-b968-4564-b5e9-880c091bddcd req-d3483574-f53a-49e6-8398-0a644fbc8cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-changed-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:11:51 compute-0 nova_compute[260935]: 2025-10-11 09:11:51.476 2 DEBUG nova.compute.manager [req-1a83ff77-b968-4564-b5e9-880c091bddcd req-d3483574-f53a-49e6-8398-0a644fbc8cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Refreshing instance network info cache due to event network-changed-0ae3e094-fe06-40c1-8840-cf16ba40f7fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:11:51 compute-0 nova_compute[260935]: 2025-10-11 09:11:51.476 2 DEBUG oslo_concurrency.lockutils [req-1a83ff77-b968-4564-b5e9-880c091bddcd req-d3483574-f53a-49e6-8398-0a644fbc8cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:11:51 compute-0 nova_compute[260935]: 2025-10-11 09:11:51.574 2 DEBUG nova.network.neutron [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:11:51 compute-0 ceph-mon[74313]: pgmap v2100: 321 pgs: 321 active+clean; 328 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:11:51 compute-0 podman[368123]: 2025-10-11 09:11:51.804260504 +0000 UTC m=+0.099392612 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 09:11:51 compute-0 nova_compute[260935]: 2025-10-11 09:11:51.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2101: 321 pgs: 321 active+clean; 370 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.5 MiB/s wr, 24 op/s
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.931 2 DEBUG nova.network.neutron [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Updating instance_info_cache with network_info: [{"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.964 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Releasing lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.965 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance network_info: |[{"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.966 2 DEBUG oslo_concurrency.lockutils [req-1a83ff77-b968-4564-b5e9-880c091bddcd req-d3483574-f53a-49e6-8398-0a644fbc8cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.966 2 DEBUG nova.network.neutron [req-1a83ff77-b968-4564-b5e9-880c091bddcd req-d3483574-f53a-49e6-8398-0a644fbc8cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Refreshing network info cache for port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.970 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Start _get_guest_xml network_info=[{"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.974 2 WARNING nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.982 2 DEBUG nova.virt.libvirt.host [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.983 2 DEBUG nova.virt.libvirt.host [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.989 2 DEBUG nova.virt.libvirt.host [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.989 2 DEBUG nova.virt.libvirt.host [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.990 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.991 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.992 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.992 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.992 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.993 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.993 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.994 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.994 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.995 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.995 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:11:52 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.996 2 DEBUG nova.virt.hardware [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:11:53 compute-0 nova_compute[260935]: 2025-10-11 09:11:52.999 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:11:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:11:53 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2727017568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:11:53 compute-0 nova_compute[260935]: 2025-10-11 09:11:53.512 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:11:53 compute-0 nova_compute[260935]: 2025-10-11 09:11:53.548 2 DEBUG nova.storage.rbd_utils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:11:53 compute-0 nova_compute[260935]: 2025-10-11 09:11:53.551 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:11:53 compute-0 ceph-mon[74313]: pgmap v2101: 321 pgs: 321 active+clean; 370 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.5 MiB/s wr, 24 op/s
Oct 11 09:11:53 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2727017568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:11:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:11:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:11:53 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712901151' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.006 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.008 2 DEBUG nova.virt.libvirt.vif [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1973542017',display_name='tempest-TestNetworkAdvancedServerOps-server-1973542017',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1973542017',id=105,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjiTuLZzmAZ35tMhCOSVitFlf9QVc7XZ8MVeNKbs8Bxh7N8pYPwy6nmkT0CJ4moptetANiA+6a5Y0trCDigpxU0NxSZrm3lKn8b9iWNg2gUkyVkLT8AlFALq9EjTDW32g==',key_name='tempest-TestNetworkAdvancedServerOps-2078355019',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-7vnn79ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:11:48Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0dc02e8f-5afd-40a3-8de3-e5550e4ab57e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.008 2 DEBUG nova.network.os_vif_util [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.009 2 DEBUG nova.network.os_vif_util [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.010 2 DEBUG nova.objects.instance [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.026 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:11:54 compute-0 nova_compute[260935]:   <uuid>0dc02e8f-5afd-40a3-8de3-e5550e4ab57e</uuid>
Oct 11 09:11:54 compute-0 nova_compute[260935]:   <name>instance-00000069</name>
Oct 11 09:11:54 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:11:54 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:11:54 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1973542017</nova:name>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:11:52</nova:creationTime>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:11:54 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:11:54 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:11:54 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:11:54 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:11:54 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:11:54 compute-0 nova_compute[260935]:         <nova:user uuid="a213c3877fc144a3af0be3c3d853f999">tempest-TestNetworkAdvancedServerOps-1304559157-project-member</nova:user>
Oct 11 09:11:54 compute-0 nova_compute[260935]:         <nova:project uuid="ca4b15770e784f45910b630937562cb6">tempest-TestNetworkAdvancedServerOps-1304559157</nova:project>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:11:54 compute-0 nova_compute[260935]:         <nova:port uuid="0ae3e094-fe06-40c1-8840-cf16ba40f7fb">
Oct 11 09:11:54 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:11:54 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:11:54 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <system>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <entry name="serial">0dc02e8f-5afd-40a3-8de3-e5550e4ab57e</entry>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <entry name="uuid">0dc02e8f-5afd-40a3-8de3-e5550e4ab57e</entry>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     </system>
Oct 11 09:11:54 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:11:54 compute-0 nova_compute[260935]:   <os>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:   </os>
Oct 11 09:11:54 compute-0 nova_compute[260935]:   <features>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:   </features>
Oct 11 09:11:54 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:11:54 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:11:54 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk">
Oct 11 09:11:54 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       </source>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:11:54 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config">
Oct 11 09:11:54 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       </source>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:11:54 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:36:24:98"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <target dev="tap0ae3e094-fe"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/console.log" append="off"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <video>
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     </video>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:11:54 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:11:54 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:11:54 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:11:54 compute-0 nova_compute[260935]: </domain>
Oct 11 09:11:54 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.027 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Preparing to wait for external event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.027 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.028 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.028 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.028 2 DEBUG nova.virt.libvirt.vif [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1973542017',display_name='tempest-TestNetworkAdvancedServerOps-server-1973542017',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1973542017',id=105,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjiTuLZzmAZ35tMhCOSVitFlf9QVc7XZ8MVeNKbs8Bxh7N8pYPwy6nmkT0CJ4moptetANiA+6a5Y0trCDigpxU0NxSZrm3lKn8b9iWNg2gUkyVkLT8AlFALq9EjTDW32g==',key_name='tempest-TestNetworkAdvancedServerOps-2078355019',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-7vnn79ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:11:48Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0dc02e8f-5afd-40a3-8de3-e5550e4ab57e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.029 2 DEBUG nova.network.os_vif_util [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.029 2 DEBUG nova.network.os_vif_util [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.030 2 DEBUG os_vif [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.030 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.033 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ae3e094-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.034 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ae3e094-fe, col_values=(('external_ids', {'iface-id': '0ae3e094-fe06-40c1-8840-cf16ba40f7fb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:24:98', 'vm-uuid': '0dc02e8f-5afd-40a3-8de3-e5550e4ab57e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:54 compute-0 NetworkManager[44960]: <info>  [1760173914.0363] manager: (tap0ae3e094-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.047 2 INFO os_vif [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe')
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.101 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.101 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.102 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No VIF found with MAC fa:16:3e:36:24:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.103 2 INFO nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Using config drive
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.136 2 DEBUG nova.storage.rbd_utils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:11:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:11:54 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/712901151' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.739 2 INFO nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Creating config drive at /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.749 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxd_4ys_n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:11:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:11:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:11:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:11:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:11:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:11:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.900 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxd_4ys_n" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:11:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:11:54
Oct 11 09:11:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:11:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:11:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'images', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'backups', '.mgr']
Oct 11 09:11:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.936 2 DEBUG nova.storage.rbd_utils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:11:54 compute-0 nova_compute[260935]: 2025-10-11 09:11:54.942 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:11:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 32K writes, 126K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 32K writes, 11K syncs, 2.78 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5554 writes, 19K keys, 5554 commit groups, 1.0 writes per commit group, ingest: 17.50 MB, 0.03 MB/s
                                           Interval WAL: 5554 writes, 2320 syncs, 2.39 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.177 2 DEBUG oslo_concurrency.processutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.178 2 INFO nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Deleting local config drive /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config because it was imported into RBD.
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.275 2 DEBUG nova.network.neutron [req-1a83ff77-b968-4564-b5e9-880c091bddcd req-d3483574-f53a-49e6-8398-0a644fbc8cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Updated VIF entry in instance network info cache for port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.276 2 DEBUG nova.network.neutron [req-1a83ff77-b968-4564-b5e9-880c091bddcd req-d3483574-f53a-49e6-8398-0a644fbc8cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Updating instance_info_cache with network_info: [{"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:11:55 compute-0 kernel: tap0ae3e094-fe: entered promiscuous mode
Oct 11 09:11:55 compute-0 ovn_controller[152945]: 2025-10-11T09:11:55Z|00970|binding|INFO|Claiming lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb for this chassis.
Oct 11 09:11:55 compute-0 ovn_controller[152945]: 2025-10-11T09:11:55Z|00971|binding|INFO|0ae3e094-fe06-40c1-8840-cf16ba40f7fb: Claiming fa:16:3e:36:24:98 10.100.0.9
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:55 compute-0 NetworkManager[44960]: <info>  [1760173915.2928] manager: (tap0ae3e094-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.296 2 DEBUG oslo_concurrency.lockutils [req-1a83ff77-b968-4564-b5e9-880c091bddcd req-d3483574-f53a-49e6-8398-0a644fbc8cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:11:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:11:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:11:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:11:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:11:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:11:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:11:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:11:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:11:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:11:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.322 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:24:98 10.100.0.9'], port_security=['fa:16:3e:36:24:98 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0dc02e8f-5afd-40a3-8de3-e5550e4ab57e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fba72350-6e1a-4542-9fe5-a0774a411936', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8dd2bd74-fe29-451a-969b-10f8c65ff537', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4107ca00-6bae-4723-b2f7-08200810b548, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0ae3e094-fe06-40c1-8840-cf16ba40f7fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.324 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb in datapath fba72350-6e1a-4542-9fe5-a0774a411936 bound to our chassis
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.327 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fba72350-6e1a-4542-9fe5-a0774a411936
Oct 11 09:11:55 compute-0 systemd-machined[215705]: New machine qemu-125-instance-00000069.
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.355 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ae9f272e-38ba-4ae2-b5c5-47f1478a7fe6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.356 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfba72350-61 in ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.359 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfba72350-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.359 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[59b93252-2d4c-4612-856c-4112837b583b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.360 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6695ffd0-fbcc-48a9-aad8-4e72d045ccff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.383 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d675da1e-e7e6-4043-8898-092cfb36249b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:55 compute-0 systemd[1]: Started Virtual Machine qemu-125-instance-00000069.
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:55 compute-0 ovn_controller[152945]: 2025-10-11T09:11:55Z|00972|binding|INFO|Setting lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb ovn-installed in OVS
Oct 11 09:11:55 compute-0 ovn_controller[152945]: 2025-10-11T09:11:55Z|00973|binding|INFO|Setting lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb up in Southbound
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.412 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6e57e5-0c4d-42a0-b7ad-e9f5e9288727]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:55 compute-0 systemd-udevd[368281]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:11:55 compute-0 NetworkManager[44960]: <info>  [1760173915.4493] device (tap0ae3e094-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:11:55 compute-0 NetworkManager[44960]: <info>  [1760173915.4509] device (tap0ae3e094-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.460 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b758fe1c-2f38-4dfe-910c-71393363755b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.466 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb80155-49ae-4ea4-8124-1f2d2e1ade2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:55 compute-0 NetworkManager[44960]: <info>  [1760173915.4707] manager: (tapfba72350-60): new Veth device (/org/freedesktop/NetworkManager/Devices/405)
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.522 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a20f9e56-8af2-42fa-a3f2-034877195d6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.526 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d8add052-d652-437b-8935-a442cc37bacd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:55 compute-0 NetworkManager[44960]: <info>  [1760173915.5715] device (tapfba72350-60): carrier: link connected
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.581 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e67b717a-111d-4c5c-aecb-1e99681d10bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.610 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[037cb446-7205-4d2e-98e5-d7d74e268373]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfba72350-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:dc:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576027, 'reachable_time': 31590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368311, 'error': None, 'target': 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:55 compute-0 ceph-mon[74313]: pgmap v2102: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.637 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a54d5fa-6b17-4e25-96c5-526f937810ca]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:dcbd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576027, 'tstamp': 576027}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368312, 'error': None, 'target': 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.659 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5f89d375-25e0-40a4-a131-18a31d94eab4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfba72350-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:dc:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576027, 'reachable_time': 31590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368313, 'error': None, 'target': 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.701 2 DEBUG nova.compute.manager [req-a28e75ca-b4c9-4e14-af8b-f78022f21853 req-17789a9d-248f-47a3-a4f9-24649112cb95 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.701 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6c85fc7a-f3e1-4a6b-809f-fb40fedd044c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.702 2 DEBUG oslo_concurrency.lockutils [req-a28e75ca-b4c9-4e14-af8b-f78022f21853 req-17789a9d-248f-47a3-a4f9-24649112cb95 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.702 2 DEBUG oslo_concurrency.lockutils [req-a28e75ca-b4c9-4e14-af8b-f78022f21853 req-17789a9d-248f-47a3-a4f9-24649112cb95 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.703 2 DEBUG oslo_concurrency.lockutils [req-a28e75ca-b4c9-4e14-af8b-f78022f21853 req-17789a9d-248f-47a3-a4f9-24649112cb95 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.703 2 DEBUG nova.compute.manager [req-a28e75ca-b4c9-4e14-af8b-f78022f21853 req-17789a9d-248f-47a3-a4f9-24649112cb95 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Processing event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.793 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bbde9201-a284-4b3f-8741-bf4c1861dafc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.794 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba72350-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.795 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.795 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfba72350-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:55 compute-0 NetworkManager[44960]: <info>  [1760173915.7985] manager: (tapfba72350-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Oct 11 09:11:55 compute-0 kernel: tapfba72350-60: entered promiscuous mode
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.803 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfba72350-60, col_values=(('external_ids', {'iface-id': '2e151f7b-2e1d-434e-9750-cdf44a5aa034'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:55 compute-0 ovn_controller[152945]: 2025-10-11T09:11:55Z|00974|binding|INFO|Releasing lport 2e151f7b-2e1d-434e-9750-cdf44a5aa034 from this chassis (sb_readonly=0)
Oct 11 09:11:55 compute-0 nova_compute[260935]: 2025-10-11 09:11:55.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.835 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fba72350-6e1a-4542-9fe5-a0774a411936.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fba72350-6e1a-4542-9fe5-a0774a411936.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.838 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5c62f5-c303-4695-9a6e-ebfd72bbf14a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.839 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-fba72350-6e1a-4542-9fe5-a0774a411936
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/fba72350-6e1a-4542-9fe5-a0774a411936.pid.haproxy
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID fba72350-6e1a-4542-9fe5-a0774a411936
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:11:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:11:55.840 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'env', 'PROCESS_TAG=haproxy-fba72350-6e1a-4542-9fe5-a0774a411936', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fba72350-6e1a-4542-9fe5-a0774a411936.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:11:56 compute-0 podman[368382]: 2025-10-11 09:11:56.285098259 +0000 UTC m=+0.063719805 container create ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 09:11:56 compute-0 systemd[1]: Started libpod-conmon-ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0.scope.
Oct 11 09:11:56 compute-0 podman[368382]: 2025-10-11 09:11:56.250905982 +0000 UTC m=+0.029527598 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:11:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:11:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79b21d042abfb89eb21b26b040500e0f55cb793382691f819975fafe3db3b783/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:11:56 compute-0 podman[368382]: 2025-10-11 09:11:56.391953647 +0000 UTC m=+0.170575203 container init ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 09:11:56 compute-0 podman[368382]: 2025-10-11 09:11:56.401317736 +0000 UTC m=+0.179939282 container start ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:11:56 compute-0 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[368402]: [NOTICE]   (368406) : New worker (368408) forked
Oct 11 09:11:56 compute-0 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[368402]: [NOTICE]   (368406) : Loading success.
Oct 11 09:11:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2103: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.882 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.883 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173916.8814278, 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.883 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] VM Started (Lifecycle Event)
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.887 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.892 2 INFO nova.virt.libvirt.driver [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance spawned successfully.
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.893 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.908 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.917 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.923 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.923 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.924 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.925 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.926 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.926 2 DEBUG nova.virt.libvirt.driver [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.936 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.936 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173916.8869085, 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.936 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] VM Paused (Lifecycle Event)
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.959 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.963 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173916.8874564, 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.964 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] VM Resumed (Lifecycle Event)
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.993 2 INFO nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Took 8.63 seconds to spawn the instance on the hypervisor.
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.994 2 DEBUG nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:11:56 compute-0 nova_compute[260935]: 2025-10-11 09:11:56.995 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:11:57 compute-0 nova_compute[260935]: 2025-10-11 09:11:57.003 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:11:57 compute-0 nova_compute[260935]: 2025-10-11 09:11:57.043 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:11:57 compute-0 nova_compute[260935]: 2025-10-11 09:11:57.071 2 INFO nova.compute.manager [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Took 9.86 seconds to build instance.
Oct 11 09:11:57 compute-0 nova_compute[260935]: 2025-10-11 09:11:57.092 2 DEBUG oslo_concurrency.lockutils [None req-a8cd5c17-8312-46bb-b227-88dd68700724 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:57 compute-0 ceph-mon[74313]: pgmap v2103: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:11:57 compute-0 sshd-session[368417]: Invalid user zan from 152.32.213.170 port 33822
Oct 11 09:11:57 compute-0 sshd-session[368417]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:11:57 compute-0 sshd-session[368417]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:11:57 compute-0 podman[368419]: 2025-10-11 09:11:57.801055739 +0000 UTC m=+0.104596266 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001)
Oct 11 09:11:57 compute-0 nova_compute[260935]: 2025-10-11 09:11:57.862 2 DEBUG nova.compute.manager [req-310a12d4-2ce7-445e-8daf-b6b5ef0ab15b req-365613d6-9304-4671-abca-e4ea1a5b6933 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:11:57 compute-0 nova_compute[260935]: 2025-10-11 09:11:57.862 2 DEBUG oslo_concurrency.lockutils [req-310a12d4-2ce7-445e-8daf-b6b5ef0ab15b req-365613d6-9304-4671-abca-e4ea1a5b6933 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:11:57 compute-0 nova_compute[260935]: 2025-10-11 09:11:57.863 2 DEBUG oslo_concurrency.lockutils [req-310a12d4-2ce7-445e-8daf-b6b5ef0ab15b req-365613d6-9304-4671-abca-e4ea1a5b6933 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:11:57 compute-0 nova_compute[260935]: 2025-10-11 09:11:57.863 2 DEBUG oslo_concurrency.lockutils [req-310a12d4-2ce7-445e-8daf-b6b5ef0ab15b req-365613d6-9304-4671-abca-e4ea1a5b6933 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:11:57 compute-0 nova_compute[260935]: 2025-10-11 09:11:57.863 2 DEBUG nova.compute.manager [req-310a12d4-2ce7-445e-8daf-b6b5ef0ab15b req-365613d6-9304-4671-abca-e4ea1a5b6933 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] No waiting events found dispatching network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:11:57 compute-0 nova_compute[260935]: 2025-10-11 09:11:57.863 2 WARNING nova.compute.manager [req-310a12d4-2ce7-445e-8daf-b6b5ef0ab15b req-365613d6-9304-4671-abca-e4ea1a5b6933 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received unexpected event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb for instance with vm_state active and task_state None.
Oct 11 09:11:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:11:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:11:59 compute-0 nova_compute[260935]: 2025-10-11 09:11:59.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:11:59 compute-0 ceph-mon[74313]: pgmap v2104: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:11:59 compute-0 sshd-session[368417]: Failed password for invalid user zan from 152.32.213.170 port 33822 ssh2
Oct 11 09:12:00 compute-0 nova_compute[260935]: 2025-10-11 09:12:00.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:12:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 34K writes, 126K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 34K writes, 12K syncs, 2.74 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5812 writes, 20K keys, 5812 commit groups, 1.0 writes per commit group, ingest: 19.71 MB, 0.03 MB/s
                                           Interval WAL: 5812 writes, 2422 syncs, 2.40 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 09:12:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2105: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:12:00 compute-0 unix_chkpwd[368443]: password check failed for user (root)
Oct 11 09:12:00 compute-0 sshd-session[368441]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:12:01 compute-0 ceph-mon[74313]: pgmap v2105: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:12:01 compute-0 ovn_controller[152945]: 2025-10-11T09:12:01Z|00975|binding|INFO|Releasing lport 2e151f7b-2e1d-434e-9750-cdf44a5aa034 from this chassis (sb_readonly=0)
Oct 11 09:12:01 compute-0 ovn_controller[152945]: 2025-10-11T09:12:01Z|00976|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:12:01 compute-0 ovn_controller[152945]: 2025-10-11T09:12:01Z|00977|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:12:01 compute-0 NetworkManager[44960]: <info>  [1760173921.7488] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/407)
Oct 11 09:12:01 compute-0 nova_compute[260935]: 2025-10-11 09:12:01.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:01 compute-0 NetworkManager[44960]: <info>  [1760173921.7504] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Oct 11 09:12:01 compute-0 ovn_controller[152945]: 2025-10-11T09:12:01Z|00978|binding|INFO|Releasing lport 2e151f7b-2e1d-434e-9750-cdf44a5aa034 from this chassis (sb_readonly=0)
Oct 11 09:12:01 compute-0 ovn_controller[152945]: 2025-10-11T09:12:01Z|00979|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:12:01 compute-0 ovn_controller[152945]: 2025-10-11T09:12:01Z|00980|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:12:01 compute-0 nova_compute[260935]: 2025-10-11 09:12:01.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:01 compute-0 nova_compute[260935]: 2025-10-11 09:12:01.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:01 compute-0 sshd-session[368417]: Received disconnect from 152.32.213.170 port 33822:11: Bye Bye [preauth]
Oct 11 09:12:01 compute-0 sshd-session[368417]: Disconnected from invalid user zan 152.32.213.170 port 33822 [preauth]
Oct 11 09:12:02 compute-0 nova_compute[260935]: 2025-10-11 09:12:02.294 2 DEBUG nova.compute.manager [req-fa3f3a5f-1c6d-47ad-9cc5-cc5c2ff2126d req-aec50289-e3dc-41ee-afeb-c6259a1a1cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-changed-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:12:02 compute-0 nova_compute[260935]: 2025-10-11 09:12:02.295 2 DEBUG nova.compute.manager [req-fa3f3a5f-1c6d-47ad-9cc5-cc5c2ff2126d req-aec50289-e3dc-41ee-afeb-c6259a1a1cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Refreshing instance network info cache due to event network-changed-0ae3e094-fe06-40c1-8840-cf16ba40f7fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:12:02 compute-0 nova_compute[260935]: 2025-10-11 09:12:02.296 2 DEBUG oslo_concurrency.lockutils [req-fa3f3a5f-1c6d-47ad-9cc5-cc5c2ff2126d req-aec50289-e3dc-41ee-afeb-c6259a1a1cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:12:02 compute-0 nova_compute[260935]: 2025-10-11 09:12:02.296 2 DEBUG oslo_concurrency.lockutils [req-fa3f3a5f-1c6d-47ad-9cc5-cc5c2ff2126d req-aec50289-e3dc-41ee-afeb-c6259a1a1cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:12:02 compute-0 nova_compute[260935]: 2025-10-11 09:12:02.297 2 DEBUG nova.network.neutron [req-fa3f3a5f-1c6d-47ad-9cc5-cc5c2ff2126d req-aec50289-e3dc-41ee-afeb-c6259a1a1cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Refreshing network info cache for port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:12:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct 11 09:12:02 compute-0 podman[368445]: 2025-10-11 09:12:02.793611398 +0000 UTC m=+0.093396047 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:12:02 compute-0 podman[368446]: 2025-10-11 09:12:02.807644266 +0000 UTC m=+0.110855719 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 09:12:03 compute-0 sshd-session[368441]: Failed password for root from 165.232.82.252 port 32812 ssh2
Oct 11 09:12:03 compute-0 ceph-mon[74313]: pgmap v2106: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct 11 09:12:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:12:04 compute-0 nova_compute[260935]: 2025-10-11 09:12:04.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:04 compute-0 nova_compute[260935]: 2025-10-11 09:12:04.109 2 DEBUG nova.network.neutron [req-fa3f3a5f-1c6d-47ad-9cc5-cc5c2ff2126d req-aec50289-e3dc-41ee-afeb-c6259a1a1cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Updated VIF entry in instance network info cache for port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:12:04 compute-0 nova_compute[260935]: 2025-10-11 09:12:04.110 2 DEBUG nova.network.neutron [req-fa3f3a5f-1c6d-47ad-9cc5-cc5c2ff2126d req-aec50289-e3dc-41ee-afeb-c6259a1a1cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Updating instance_info_cache with network_info: [{"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:12:04 compute-0 nova_compute[260935]: 2025-10-11 09:12:04.137 2 DEBUG oslo_concurrency.lockutils [req-fa3f3a5f-1c6d-47ad-9cc5-cc5c2ff2126d req-aec50289-e3dc-41ee-afeb-c6259a1a1cab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:12:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2107: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 268 KiB/s wr, 76 op/s
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:12:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:12:05 compute-0 nova_compute[260935]: 2025-10-11 09:12:05.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:05 compute-0 sshd-session[368441]: Connection closed by authenticating user root 165.232.82.252 port 32812 [preauth]
Oct 11 09:12:05 compute-0 ceph-mon[74313]: pgmap v2107: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 268 KiB/s wr, 76 op/s
Oct 11 09:12:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:12:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.3 total, 600.0 interval
                                           Cumulative writes: 26K writes, 105K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 26K writes, 9284 syncs, 2.85 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4253 writes, 16K keys, 4253 commit groups, 1.0 writes per commit group, ingest: 15.90 MB, 0.03 MB/s
                                           Interval WAL: 4253 writes, 1727 syncs, 2.46 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 09:12:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2108: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:12:07 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 11 09:12:07 compute-0 ceph-mon[74313]: pgmap v2108: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:12:07 compute-0 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 09:12:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 100 KiB/s wr, 81 op/s
Oct 11 09:12:08 compute-0 nova_compute[260935]: 2025-10-11 09:12:08.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:12:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:12:09 compute-0 nova_compute[260935]: 2025-10-11 09:12:09.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:09 compute-0 ceph-mon[74313]: pgmap v2109: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 100 KiB/s wr, 81 op/s
Oct 11 09:12:09 compute-0 ovn_controller[152945]: 2025-10-11T09:12:09Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:24:98 10.100.0.9
Oct 11 09:12:09 compute-0 ovn_controller[152945]: 2025-10-11T09:12:09Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:24:98 10.100.0.9
Oct 11 09:12:10 compute-0 nova_compute[260935]: 2025-10-11 09:12:10.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2110: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 88 KiB/s wr, 71 op/s
Oct 11 09:12:11 compute-0 ceph-mon[74313]: pgmap v2110: 321 pgs: 321 active+clean; 374 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 88 KiB/s wr, 71 op/s
Oct 11 09:12:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2111: 321 pgs: 321 active+clean; 400 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Oct 11 09:12:13 compute-0 ceph-mon[74313]: pgmap v2111: 321 pgs: 321 active+clean; 400 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Oct 11 09:12:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:12:14 compute-0 nova_compute[260935]: 2025-10-11 09:12:14.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 407 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 589 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 11 09:12:15 compute-0 nova_compute[260935]: 2025-10-11 09:12:15.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:15.207 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:12:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:15.209 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:12:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:15.210 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:12:15 compute-0 ceph-mon[74313]: pgmap v2112: 321 pgs: 321 active+clean; 407 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 589 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 11 09:12:16 compute-0 nova_compute[260935]: 2025-10-11 09:12:16.351 2 INFO nova.compute.manager [None req-35f7fdbf-915f-4e01-9594-8f08e657df71 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Get console output
Oct 11 09:12:16 compute-0 nova_compute[260935]: 2025-10-11 09:12:16.357 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:12:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2113: 321 pgs: 321 active+clean; 407 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:12:17 compute-0 nova_compute[260935]: 2025-10-11 09:12:17.665 2 INFO nova.compute.manager [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Rebuilding instance
Oct 11 09:12:17 compute-0 ceph-mon[74313]: pgmap v2113: 321 pgs: 321 active+clean; 407 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:12:18 compute-0 nova_compute[260935]: 2025-10-11 09:12:18.008 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:12:18 compute-0 nova_compute[260935]: 2025-10-11 09:12:18.023 2 DEBUG nova.compute.manager [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:12:18 compute-0 nova_compute[260935]: 2025-10-11 09:12:18.092 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_requests' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:12:18 compute-0 nova_compute[260935]: 2025-10-11 09:12:18.111 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:12:18 compute-0 nova_compute[260935]: 2025-10-11 09:12:18.128 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'resources' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:12:18 compute-0 nova_compute[260935]: 2025-10-11 09:12:18.148 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'migration_context' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:12:18 compute-0 nova_compute[260935]: 2025-10-11 09:12:18.162 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 09:12:18 compute-0 nova_compute[260935]: 2025-10-11 09:12:18.166 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 09:12:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 407 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:12:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:12:19 compute-0 nova_compute[260935]: 2025-10-11 09:12:19.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:19 compute-0 ceph-mon[74313]: pgmap v2114: 321 pgs: 321 active+clean; 407 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:12:20 compute-0 nova_compute[260935]: 2025-10-11 09:12:20.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:20 compute-0 kernel: tap0ae3e094-fe (unregistering): left promiscuous mode
Oct 11 09:12:20 compute-0 NetworkManager[44960]: <info>  [1760173940.4649] device (tap0ae3e094-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:12:20 compute-0 nova_compute[260935]: 2025-10-11 09:12:20.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:20 compute-0 ovn_controller[152945]: 2025-10-11T09:12:20Z|00981|binding|INFO|Releasing lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb from this chassis (sb_readonly=0)
Oct 11 09:12:20 compute-0 ovn_controller[152945]: 2025-10-11T09:12:20Z|00982|binding|INFO|Setting lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb down in Southbound
Oct 11 09:12:20 compute-0 ovn_controller[152945]: 2025-10-11T09:12:20Z|00983|binding|INFO|Removing iface tap0ae3e094-fe ovn-installed in OVS
Oct 11 09:12:20 compute-0 nova_compute[260935]: 2025-10-11 09:12:20.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.492 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:24:98 10.100.0.9'], port_security=['fa:16:3e:36:24:98 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0dc02e8f-5afd-40a3-8de3-e5550e4ab57e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fba72350-6e1a-4542-9fe5-a0774a411936', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8dd2bd74-fe29-451a-969b-10f8c65ff537', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4107ca00-6bae-4723-b2f7-08200810b548, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0ae3e094-fe06-40c1-8840-cf16ba40f7fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:12:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.495 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb in datapath fba72350-6e1a-4542-9fe5-a0774a411936 unbound from our chassis
Oct 11 09:12:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.499 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fba72350-6e1a-4542-9fe5-a0774a411936, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:12:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 407 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 11 09:12:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.501 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3acafd0f-2703-4ec0-a0fc-43943e370025]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.503 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 namespace which is not needed anymore
Oct 11 09:12:20 compute-0 nova_compute[260935]: 2025-10-11 09:12:20.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:20 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000069.scope: Deactivated successfully.
Oct 11 09:12:20 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000069.scope: Consumed 13.530s CPU time.
Oct 11 09:12:20 compute-0 systemd-machined[215705]: Machine qemu-125-instance-00000069 terminated.
Oct 11 09:12:20 compute-0 nova_compute[260935]: 2025-10-11 09:12:20.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:12:20 compute-0 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[368402]: [NOTICE]   (368406) : haproxy version is 2.8.14-c23fe91
Oct 11 09:12:20 compute-0 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[368402]: [NOTICE]   (368406) : path to executable is /usr/sbin/haproxy
Oct 11 09:12:20 compute-0 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[368402]: [WARNING]  (368406) : Exiting Master process...
Oct 11 09:12:20 compute-0 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[368402]: [WARNING]  (368406) : Exiting Master process...
Oct 11 09:12:20 compute-0 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[368402]: [ALERT]    (368406) : Current worker (368408) exited with code 143 (Terminated)
Oct 11 09:12:20 compute-0 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[368402]: [WARNING]  (368406) : All workers exited. Exiting... (0)
Oct 11 09:12:20 compute-0 systemd[1]: libpod-ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0.scope: Deactivated successfully.
Oct 11 09:12:20 compute-0 podman[368517]: 2025-10-11 09:12:20.728645715 +0000 UTC m=+0.077300710 container died ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:12:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0-userdata-shm.mount: Deactivated successfully.
Oct 11 09:12:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-79b21d042abfb89eb21b26b040500e0f55cb793382691f819975fafe3db3b783-merged.mount: Deactivated successfully.
Oct 11 09:12:20 compute-0 podman[368517]: 2025-10-11 09:12:20.795244549 +0000 UTC m=+0.143899554 container cleanup ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 09:12:20 compute-0 systemd[1]: libpod-conmon-ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0.scope: Deactivated successfully.
Oct 11 09:12:20 compute-0 podman[368559]: 2025-10-11 09:12:20.900717718 +0000 UTC m=+0.067236142 container remove ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 09:12:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.911 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[512e3ac5-2a73-4c04-b204-e81e818d501e]: (4, ('Sat Oct 11 09:12:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 (ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0)\nec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0\nSat Oct 11 09:12:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 (ec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0)\nec1149314a3d07372ddbc52429d17cff61449fe11903c2a988709a542e875bb0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.913 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a49772b3-333a-457a-9707-2a988d08abce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.914 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba72350-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:12:20 compute-0 nova_compute[260935]: 2025-10-11 09:12:20.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:20 compute-0 kernel: tapfba72350-60: left promiscuous mode
Oct 11 09:12:20 compute-0 nova_compute[260935]: 2025-10-11 09:12:20.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.946 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[848b16b2-08f6-4172-a6be-5ce305871953]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.975 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[64ca7b3e-78e6-4fb7-b5f2-8e6d9d330b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:20.979 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b53c71e1-6290-48cf-b7c2-b05c13de0db7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:21.006 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[24c9f837-6b6f-4b09-9654-97c785b0d54f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576015, 'reachable_time': 15086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368577, 'error': None, 'target': 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:21 compute-0 systemd[1]: run-netns-ovnmeta\x2dfba72350\x2d6e1a\x2d4542\x2d9fe5\x2da0774a411936.mount: Deactivated successfully.
Oct 11 09:12:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:21.010 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:12:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:21.010 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[354a2bb7-1357-435f-bbe8-1634fe125dfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.070 2 DEBUG nova.compute.manager [req-272ce3f7-2b29-4c54-b078-c07b8411df51 req-0da5a748-6301-4a2f-a469-25402bf191c6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-unplugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.070 2 DEBUG oslo_concurrency.lockutils [req-272ce3f7-2b29-4c54-b078-c07b8411df51 req-0da5a748-6301-4a2f-a469-25402bf191c6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.071 2 DEBUG oslo_concurrency.lockutils [req-272ce3f7-2b29-4c54-b078-c07b8411df51 req-0da5a748-6301-4a2f-a469-25402bf191c6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.071 2 DEBUG oslo_concurrency.lockutils [req-272ce3f7-2b29-4c54-b078-c07b8411df51 req-0da5a748-6301-4a2f-a469-25402bf191c6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.072 2 DEBUG nova.compute.manager [req-272ce3f7-2b29-4c54-b078-c07b8411df51 req-0da5a748-6301-4a2f-a469-25402bf191c6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] No waiting events found dispatching network-vif-unplugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.072 2 WARNING nova.compute.manager [req-272ce3f7-2b29-4c54-b078-c07b8411df51 req-0da5a748-6301-4a2f-a469-25402bf191c6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received unexpected event network-vif-unplugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb for instance with vm_state active and task_state rebuilding.
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.189 2 INFO nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance shutdown successfully after 3 seconds.
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.197 2 INFO nova.virt.libvirt.driver [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance destroyed successfully.
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.204 2 INFO nova.virt.libvirt.driver [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance destroyed successfully.
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.206 2 DEBUG nova.virt.libvirt.vif [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1973542017',display_name='tempest-TestNetworkAdvancedServerOps-server-1973542017',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1973542017',id=105,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjiTuLZzmAZ35tMhCOSVitFlf9QVc7XZ8MVeNKbs8Bxh7N8pYPwy6nmkT0CJ4moptetANiA+6a5Y0trCDigpxU0NxSZrm3lKn8b9iWNg2gUkyVkLT8AlFALq9EjTDW32g==',key_name='tempest-TestNetworkAdvancedServerOps-2078355019',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:11:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-7vnn79ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:12:17Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0dc02e8f-5afd-40a3-8de3-e5550e4ab57e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.206 2 DEBUG nova.network.os_vif_util [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.208 2 DEBUG nova.network.os_vif_util [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.208 2 DEBUG os_vif [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.211 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ae3e094-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.259 2 INFO os_vif [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe')
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.665 2 INFO nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Deleting instance files /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_del
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.666 2 INFO nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Deletion of /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_del complete
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:12:21 compute-0 ceph-mon[74313]: pgmap v2115: 321 pgs: 321 active+clean; 407 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.849 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.850 2 INFO nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Creating image(s)
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.884 2 DEBUG nova.storage.rbd_utils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.921 2 DEBUG nova.storage.rbd_utils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.957 2 DEBUG nova.storage.rbd_utils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:12:21 compute-0 nova_compute[260935]: 2025-10-11 09:12:21.963 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.076 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.077 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.078 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.078 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "d427ed36e4acfaf36d5cf36bd49361b1db4ee571" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.104 2 DEBUG nova.storage.rbd_utils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.108 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.376 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.429 2 DEBUG nova.storage.rbd_utils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] resizing rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:12:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2116: 321 pgs: 321 active+clean; 351 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 2.9 MiB/s wr, 83 op/s
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.530 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.530 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Ensure instance console log exists: /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.531 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.532 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.532 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.535 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Start _get_guest_xml network_info=[{"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.538 2 WARNING nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.548 2 DEBUG nova.virt.libvirt.host [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.548 2 DEBUG nova.virt.libvirt.host [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.552 2 DEBUG nova.virt.libvirt.host [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.552 2 DEBUG nova.virt.libvirt.host [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.553 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.553 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:29Z,direct_url=<?>,disk_format='qcow2',id=95632eb9-5895-4e20-b760-0f149aadf400,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.553 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.554 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.554 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.554 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.554 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.554 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.555 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.555 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.555 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.555 2 DEBUG nova.virt.hardware [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.556 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:12:22 compute-0 nova_compute[260935]: 2025-10-11 09:12:22.575 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:12:22 compute-0 podman[368774]: 2025-10-11 09:12:22.816320121 +0000 UTC m=+0.067895281 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:12:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:12:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2288654728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.033 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.072 2 DEBUG nova.storage.rbd_utils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.080 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.177 2 DEBUG nova.compute.manager [req-9f18507a-0d49-482d-afbf-f6d4821d9939 req-75a64cf9-a273-4897-9146-86a55ffb686f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.179 2 DEBUG oslo_concurrency.lockutils [req-9f18507a-0d49-482d-afbf-f6d4821d9939 req-75a64cf9-a273-4897-9146-86a55ffb686f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.179 2 DEBUG oslo_concurrency.lockutils [req-9f18507a-0d49-482d-afbf-f6d4821d9939 req-75a64cf9-a273-4897-9146-86a55ffb686f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.180 2 DEBUG oslo_concurrency.lockutils [req-9f18507a-0d49-482d-afbf-f6d4821d9939 req-75a64cf9-a273-4897-9146-86a55ffb686f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.180 2 DEBUG nova.compute.manager [req-9f18507a-0d49-482d-afbf-f6d4821d9939 req-75a64cf9-a273-4897-9146-86a55ffb686f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] No waiting events found dispatching network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.181 2 WARNING nova.compute.manager [req-9f18507a-0d49-482d-afbf-f6d4821d9939 req-75a64cf9-a273-4897-9146-86a55ffb686f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received unexpected event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb for instance with vm_state active and task_state rebuild_spawning.
Oct 11 09:12:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:12:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1435080860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.582 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.583 2 DEBUG nova.virt.libvirt.vif [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1973542017',display_name='tempest-TestNetworkAdvancedServerOps-server-1973542017',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1973542017',id=105,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjiTuLZzmAZ35tMhCOSVitFlf9QVc7XZ8MVeNKbs8Bxh7N8pYPwy6nmkT0CJ4moptetANiA+6a5Y0trCDigpxU0NxSZrm3lKn8b9iWNg2gUkyVkLT8AlFALq9EjTDW32g==',key_name='tempest-TestNetworkAdvancedServerOps-2078355019',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:11:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-7vnn79ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:12:21Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0dc02e8f-5afd-40a3-8de3-e5550e4ab57e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.584 2 DEBUG nova.network.os_vif_util [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.585 2 DEBUG nova.network.os_vif_util [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.588 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:12:23 compute-0 nova_compute[260935]:   <uuid>0dc02e8f-5afd-40a3-8de3-e5550e4ab57e</uuid>
Oct 11 09:12:23 compute-0 nova_compute[260935]:   <name>instance-00000069</name>
Oct 11 09:12:23 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:12:23 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:12:23 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1973542017</nova:name>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:12:22</nova:creationTime>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:12:23 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:12:23 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:12:23 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:12:23 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:12:23 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:12:23 compute-0 nova_compute[260935]:         <nova:user uuid="a213c3877fc144a3af0be3c3d853f999">tempest-TestNetworkAdvancedServerOps-1304559157-project-member</nova:user>
Oct 11 09:12:23 compute-0 nova_compute[260935]:         <nova:project uuid="ca4b15770e784f45910b630937562cb6">tempest-TestNetworkAdvancedServerOps-1304559157</nova:project>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="95632eb9-5895-4e20-b760-0f149aadf400"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:12:23 compute-0 nova_compute[260935]:         <nova:port uuid="0ae3e094-fe06-40c1-8840-cf16ba40f7fb">
Oct 11 09:12:23 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:12:23 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:12:23 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <system>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <entry name="serial">0dc02e8f-5afd-40a3-8de3-e5550e4ab57e</entry>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <entry name="uuid">0dc02e8f-5afd-40a3-8de3-e5550e4ab57e</entry>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     </system>
Oct 11 09:12:23 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:12:23 compute-0 nova_compute[260935]:   <os>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:   </os>
Oct 11 09:12:23 compute-0 nova_compute[260935]:   <features>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:   </features>
Oct 11 09:12:23 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:12:23 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:12:23 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk">
Oct 11 09:12:23 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       </source>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:12:23 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config">
Oct 11 09:12:23 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       </source>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:12:23 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:36:24:98"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <target dev="tap0ae3e094-fe"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/console.log" append="off"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <video>
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     </video>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:12:23 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:12:23 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:12:23 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:12:23 compute-0 nova_compute[260935]: </domain>
Oct 11 09:12:23 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.591 2 DEBUG nova.virt.libvirt.vif [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1973542017',display_name='tempest-TestNetworkAdvancedServerOps-server-1973542017',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1973542017',id=105,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjiTuLZzmAZ35tMhCOSVitFlf9QVc7XZ8MVeNKbs8Bxh7N8pYPwy6nmkT0CJ4moptetANiA+6a5Y0trCDigpxU0NxSZrm3lKn8b9iWNg2gUkyVkLT8AlFALq9EjTDW32g==',key_name='tempest-TestNetworkAdvancedServerOps-2078355019',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:11:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-7vnn79ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:12:21Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0dc02e8f-5afd-40a3-8de3-e5550e4ab57e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.591 2 DEBUG nova.network.os_vif_util [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.592 2 DEBUG nova.network.os_vif_util [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.593 2 DEBUG os_vif [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.596 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ae3e094-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.602 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ae3e094-fe, col_values=(('external_ids', {'iface-id': '0ae3e094-fe06-40c1-8840-cf16ba40f7fb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:24:98', 'vm-uuid': '0dc02e8f-5afd-40a3-8de3-e5550e4ab57e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:12:23 compute-0 NetworkManager[44960]: <info>  [1760173943.6055] manager: (tap0ae3e094-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.611 2 INFO os_vif [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe')
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.691 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.692 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.692 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No VIF found with MAC fa:16:3e:36:24:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.693 2 INFO nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Using config drive
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.730 2 DEBUG nova.storage.rbd_utils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.742 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.744 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:12:23 compute-0 ceph-mon[74313]: pgmap v2116: 321 pgs: 321 active+clean; 351 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 2.9 MiB/s wr, 83 op/s
Oct 11 09:12:23 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2288654728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:12:23 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1435080860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.768 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:12:23 compute-0 nova_compute[260935]: 2025-10-11 09:12:23.810 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'keypairs' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:12:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:12:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 346 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.1 MiB/s wr, 46 op/s
Oct 11 09:12:24 compute-0 nova_compute[260935]: 2025-10-11 09:12:24.506 2 INFO nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Creating config drive at /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config
Oct 11 09:12:24 compute-0 nova_compute[260935]: 2025-10-11 09:12:24.519 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb8w5wtef execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:12:24 compute-0 nova_compute[260935]: 2025-10-11 09:12:24.695 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb8w5wtef" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:12:24 compute-0 nova_compute[260935]: 2025-10-11 09:12:24.743 2 DEBUG nova.storage.rbd_utils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:12:24 compute-0 nova_compute[260935]: 2025-10-11 09:12:24.750 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:12:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:12:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:12:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:12:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:12:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:12:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.004 2 DEBUG oslo_concurrency.processutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.006 2 INFO nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Deleting local config drive /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e/disk.config because it was imported into RBD.
Oct 11 09:12:25 compute-0 kernel: tap0ae3e094-fe: entered promiscuous mode
Oct 11 09:12:25 compute-0 NetworkManager[44960]: <info>  [1760173945.0842] manager: (tap0ae3e094-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/410)
Oct 11 09:12:25 compute-0 ovn_controller[152945]: 2025-10-11T09:12:25Z|00984|binding|INFO|Claiming lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb for this chassis.
Oct 11 09:12:25 compute-0 ovn_controller[152945]: 2025-10-11T09:12:25Z|00985|binding|INFO|0ae3e094-fe06-40c1-8840-cf16ba40f7fb: Claiming fa:16:3e:36:24:98 10.100.0.9
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.134 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:24:98 10.100.0.9'], port_security=['fa:16:3e:36:24:98 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0dc02e8f-5afd-40a3-8de3-e5550e4ab57e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fba72350-6e1a-4542-9fe5-a0774a411936', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8dd2bd74-fe29-451a-969b-10f8c65ff537', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4107ca00-6bae-4723-b2f7-08200810b548, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0ae3e094-fe06-40c1-8840-cf16ba40f7fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.135 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb in datapath fba72350-6e1a-4542-9fe5-a0774a411936 bound to our chassis
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.138 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fba72350-6e1a-4542-9fe5-a0774a411936
Oct 11 09:12:25 compute-0 ovn_controller[152945]: 2025-10-11T09:12:25Z|00986|binding|INFO|Setting lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb ovn-installed in OVS
Oct 11 09:12:25 compute-0 ovn_controller[152945]: 2025-10-11T09:12:25Z|00987|binding|INFO|Setting lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb up in Southbound
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.158 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[be67eadc-084e-4f53-aa39-29e648ef9c1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.159 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfba72350-61 in ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:12:25 compute-0 systemd-machined[215705]: New machine qemu-126-instance-00000069.
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.161 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfba72350-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.162 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[83c0d177-bd1b-4afb-b03d-1b960dfeb4eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.163 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b7bdc562-2b5a-460d-95f5-b43e60b3fa43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:25 compute-0 systemd[1]: Started Virtual Machine qemu-126-instance-00000069.
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.184 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[72ffbe3d-d63b-462d-b42c-8a37032f593e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:25 compute-0 systemd-udevd[368920]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:12:25 compute-0 NetworkManager[44960]: <info>  [1760173945.2027] device (tap0ae3e094-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:12:25 compute-0 NetworkManager[44960]: <info>  [1760173945.2046] device (tap0ae3e094-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.218 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e932bfb5-da0e-4e8b-8c01-3f003dea5350]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.267 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[beca9cc4-ca7b-4b4b-8119-fb6057a156fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.276 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8a326c4b-c757-4365-b9fe-3f7fedf80489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:25 compute-0 NetworkManager[44960]: <info>  [1760173945.2782] manager: (tapfba72350-60): new Veth device (/org/freedesktop/NetworkManager/Devices/411)
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.321 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2c3890-22bc-4476-8267-aa5c8899d7c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.325 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bed1eb83-7d11-4e9c-88f4-aa2b5b951daa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:25 compute-0 NetworkManager[44960]: <info>  [1760173945.3563] device (tapfba72350-60): carrier: link connected
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.371 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b27411a0-5f8a-463e-b9bd-b16c6c30c883]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.394 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dd253e41-43b0-492b-882b-f6abb3a238f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfba72350-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:dc:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579005, 'reachable_time': 39809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368951, 'error': None, 'target': 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.413 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cd31cde5-44e8-4d1d-8818-6bc0aed98a76]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:dcbd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 579005, 'tstamp': 579005}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368952, 'error': None, 'target': 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.437 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[254cba19-fc77-4a0a-85df-640c66fb83ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfba72350-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:dc:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579005, 'reachable_time': 39809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368953, 'error': None, 'target': 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.492 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bcbd078f-3ce8-45f7-bbae-3fef1b93d67b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.589 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[467133c9-5bf9-4adc-b233-86e49055d19f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.591 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba72350-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.591 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.592 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfba72350-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:25 compute-0 NetworkManager[44960]: <info>  [1760173945.5961] manager: (tapfba72350-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Oct 11 09:12:25 compute-0 kernel: tapfba72350-60: entered promiscuous mode
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.601 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfba72350-60, col_values=(('external_ids', {'iface-id': '2e151f7b-2e1d-434e-9750-cdf44a5aa034'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:25 compute-0 ovn_controller[152945]: 2025-10-11T09:12:25Z|00988|binding|INFO|Releasing lport 2e151f7b-2e1d-434e-9750-cdf44a5aa034 from this chassis (sb_readonly=0)
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.634 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fba72350-6e1a-4542-9fe5-a0774a411936.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fba72350-6e1a-4542-9fe5-a0774a411936.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.638 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2743efe1-7994-47a4-abfe-319ff7ed7695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.639 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-fba72350-6e1a-4542-9fe5-a0774a411936
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/fba72350-6e1a-4542-9fe5-a0774a411936.pid.haproxy
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID fba72350-6e1a-4542-9fe5-a0774a411936
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:12:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:25.639 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'env', 'PROCESS_TAG=haproxy-fba72350-6e1a-4542-9fe5-a0774a411936', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fba72350-6e1a-4542-9fe5-a0774a411936.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:12:25 compute-0 ceph-mon[74313]: pgmap v2117: 321 pgs: 321 active+clean; 346 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.1 MiB/s wr, 46 op/s
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.946 2 DEBUG nova.compute.manager [req-10e5d2f0-7f08-4ae5-bfa3-f28ff11d6c4a req-0b684556-7d42-4a05-a986-a5e3db48efd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.946 2 DEBUG oslo_concurrency.lockutils [req-10e5d2f0-7f08-4ae5-bfa3-f28ff11d6c4a req-0b684556-7d42-4a05-a986-a5e3db48efd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.946 2 DEBUG oslo_concurrency.lockutils [req-10e5d2f0-7f08-4ae5-bfa3-f28ff11d6c4a req-0b684556-7d42-4a05-a986-a5e3db48efd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.947 2 DEBUG oslo_concurrency.lockutils [req-10e5d2f0-7f08-4ae5-bfa3-f28ff11d6c4a req-0b684556-7d42-4a05-a986-a5e3db48efd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.947 2 DEBUG nova.compute.manager [req-10e5d2f0-7f08-4ae5-bfa3-f28ff11d6c4a req-0b684556-7d42-4a05-a986-a5e3db48efd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] No waiting events found dispatching network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:12:25 compute-0 nova_compute[260935]: 2025-10-11 09:12:25.947 2 WARNING nova.compute.manager [req-10e5d2f0-7f08-4ae5-bfa3-f28ff11d6c4a req-0b684556-7d42-4a05-a986-a5e3db48efd3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received unexpected event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb for instance with vm_state active and task_state rebuild_spawning.
Oct 11 09:12:26 compute-0 podman[369025]: 2025-10-11 09:12:26.075478741 +0000 UTC m=+0.054508470 container create cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 09:12:26 compute-0 systemd[1]: Started libpod-conmon-cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1.scope.
Oct 11 09:12:26 compute-0 podman[369025]: 2025-10-11 09:12:26.045866352 +0000 UTC m=+0.024896171 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:12:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7f85520362d24f3ac3e0a81a81c4d2407edd9ee81684410d35e390bcf954224/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.165 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.166 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173946.1648345, 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.166 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] VM Resumed (Lifecycle Event)
Oct 11 09:12:26 compute-0 podman[369025]: 2025-10-11 09:12:26.167331854 +0000 UTC m=+0.146361603 container init cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.169 2 DEBUG nova.compute.manager [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.169 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.173 2 INFO nova.virt.libvirt.driver [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance spawned successfully.
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.174 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:12:26 compute-0 podman[369025]: 2025-10-11 09:12:26.174833621 +0000 UTC m=+0.153863350 container start cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.190 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.194 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:12:26 compute-0 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[369040]: [NOTICE]   (369044) : New worker (369046) forked
Oct 11 09:12:26 compute-0 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[369040]: [NOTICE]   (369044) : Loading success.
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.217 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.217 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.217 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.218 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.218 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.219 2 DEBUG nova.virt.libvirt.driver [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.223 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.223 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760173946.1676824, 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.223 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] VM Started (Lifecycle Event)
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.263 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.267 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.286 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.295 2 DEBUG nova.compute.manager [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.360 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.360 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.361 2 DEBUG nova.objects.instance [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.413 2 DEBUG oslo_concurrency.lockutils [None req-879333dc-604f-41f6-a4d2-442a21b9455d a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:26.448 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:12:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:26.450 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:12:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 346 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.1 MiB/s wr, 45 op/s
Oct 11 09:12:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:12:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3378638996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:12:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:12:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3378638996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.739 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.739 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.740 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:12:26 compute-0 nova_compute[260935]: 2025-10-11 09:12:26.740 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:12:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3378638996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:12:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3378638996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:12:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:12:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/911322854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.353 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.437 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.437 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.437 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.441 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.441 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.445 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.445 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.449 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.450 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.638 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.639 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2962MB free_disk=59.818275451660156GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.639 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.640 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:12:27 compute-0 ceph-mon[74313]: pgmap v2118: 321 pgs: 321 active+clean; 346 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.1 MiB/s wr, 45 op/s
Oct 11 09:12:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/911322854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.828 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.829 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.829 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.829 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.829 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.829 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:12:27 compute-0 nova_compute[260935]: 2025-10-11 09:12:27.971 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:12:28 compute-0 nova_compute[260935]: 2025-10-11 09:12:28.105 2 DEBUG nova.compute.manager [req-8ab5e81e-7d82-45eb-bf0c-ee25b6cf3985 req-7731e587-1ae5-4791-bd8c-3f7af6b9d4ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:12:28 compute-0 nova_compute[260935]: 2025-10-11 09:12:28.107 2 DEBUG oslo_concurrency.lockutils [req-8ab5e81e-7d82-45eb-bf0c-ee25b6cf3985 req-7731e587-1ae5-4791-bd8c-3f7af6b9d4ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:12:28 compute-0 nova_compute[260935]: 2025-10-11 09:12:28.108 2 DEBUG oslo_concurrency.lockutils [req-8ab5e81e-7d82-45eb-bf0c-ee25b6cf3985 req-7731e587-1ae5-4791-bd8c-3f7af6b9d4ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:12:28 compute-0 nova_compute[260935]: 2025-10-11 09:12:28.108 2 DEBUG oslo_concurrency.lockutils [req-8ab5e81e-7d82-45eb-bf0c-ee25b6cf3985 req-7731e587-1ae5-4791-bd8c-3f7af6b9d4ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:12:28 compute-0 nova_compute[260935]: 2025-10-11 09:12:28.109 2 DEBUG nova.compute.manager [req-8ab5e81e-7d82-45eb-bf0c-ee25b6cf3985 req-7731e587-1ae5-4791-bd8c-3f7af6b9d4ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] No waiting events found dispatching network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:12:28 compute-0 nova_compute[260935]: 2025-10-11 09:12:28.110 2 WARNING nova.compute.manager [req-8ab5e81e-7d82-45eb-bf0c-ee25b6cf3985 req-7731e587-1ae5-4791-bd8c-3f7af6b9d4ec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received unexpected event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb for instance with vm_state active and task_state None.
Oct 11 09:12:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:12:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/527898686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:12:28 compute-0 nova_compute[260935]: 2025-10-11 09:12:28.485 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:12:28 compute-0 nova_compute[260935]: 2025-10-11 09:12:28.493 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:12:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2119: 321 pgs: 321 active+clean; 374 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 11 09:12:28 compute-0 nova_compute[260935]: 2025-10-11 09:12:28.516 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:12:28 compute-0 nova_compute[260935]: 2025-10-11 09:12:28.555 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:12:28 compute-0 nova_compute[260935]: 2025-10-11 09:12:28.555 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:12:28 compute-0 nova_compute[260935]: 2025-10-11 09:12:28.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:28 compute-0 podman[369100]: 2025-10-11 09:12:28.771100883 +0000 UTC m=+0.071057868 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 11 09:12:28 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/527898686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:12:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:12:29 compute-0 ceph-mon[74313]: pgmap v2119: 321 pgs: 321 active+clean; 374 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 11 09:12:30 compute-0 nova_compute[260935]: 2025-10-11 09:12:30.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 374 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 11 09:12:30 compute-0 nova_compute[260935]: 2025-10-11 09:12:30.555 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:12:30 compute-0 nova_compute[260935]: 2025-10-11 09:12:30.556 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:12:30 compute-0 nova_compute[260935]: 2025-10-11 09:12:30.557 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 09:12:30 compute-0 nova_compute[260935]: 2025-10-11 09:12:30.776 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:12:30 compute-0 nova_compute[260935]: 2025-10-11 09:12:30.777 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:12:30 compute-0 nova_compute[260935]: 2025-10-11 09:12:30.778 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:12:30 compute-0 nova_compute[260935]: 2025-10-11 09:12:30.778 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:12:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:31.453 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:12:31 compute-0 ceph-mon[74313]: pgmap v2120: 321 pgs: 321 active+clean; 374 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 11 09:12:31 compute-0 nova_compute[260935]: 2025-10-11 09:12:31.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 374 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 11 09:12:33 compute-0 nova_compute[260935]: 2025-10-11 09:12:33.314 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:12:33 compute-0 nova_compute[260935]: 2025-10-11 09:12:33.341 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:12:33 compute-0 nova_compute[260935]: 2025-10-11 09:12:33.342 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:12:33 compute-0 nova_compute[260935]: 2025-10-11 09:12:33.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:33 compute-0 podman[369120]: 2025-10-11 09:12:33.774903553 +0000 UTC m=+0.070459131 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 09:12:33 compute-0 ceph-mon[74313]: pgmap v2121: 321 pgs: 321 active+clean; 374 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 11 09:12:33 compute-0 podman[369121]: 2025-10-11 09:12:33.869203882 +0000 UTC m=+0.166280682 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 11 09:12:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:12:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 374 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 993 KiB/s wr, 104 op/s
Oct 11 09:12:35 compute-0 nova_compute[260935]: 2025-10-11 09:12:35.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:35 compute-0 nova_compute[260935]: 2025-10-11 09:12:35.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:35 compute-0 ceph-mon[74313]: pgmap v2122: 321 pgs: 321 active+clean; 374 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 993 KiB/s wr, 104 op/s
Oct 11 09:12:35 compute-0 sshd-session[369165]: Invalid user siteadmin from 155.4.244.179 port 1189
Oct 11 09:12:35 compute-0 sshd-session[369165]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:12:35 compute-0 sshd-session[369165]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:12:36 compute-0 sudo[369167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:12:36 compute-0 sudo[369167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:36 compute-0 sudo[369167]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:36 compute-0 sudo[369192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:12:36 compute-0 sudo[369192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:36 compute-0 sudo[369192]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:36 compute-0 sudo[369217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:12:36 compute-0 sudo[369217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:36 compute-0 sudo[369217]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:36 compute-0 sudo[369242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:12:36 compute-0 sudo[369242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 374 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 743 KiB/s wr, 86 op/s
Oct 11 09:12:36 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 11 09:12:36 compute-0 ovn_controller[152945]: 2025-10-11T09:12:36Z|00989|binding|INFO|Releasing lport 2e151f7b-2e1d-434e-9750-cdf44a5aa034 from this chassis (sb_readonly=0)
Oct 11 09:12:36 compute-0 ovn_controller[152945]: 2025-10-11T09:12:36Z|00990|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:12:36 compute-0 ovn_controller[152945]: 2025-10-11T09:12:36Z|00991|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:12:36 compute-0 nova_compute[260935]: 2025-10-11 09:12:36.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:37 compute-0 sudo[369242]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:37 compute-0 sudo[369298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:12:37 compute-0 sudo[369298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:37 compute-0 sudo[369298]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:37 compute-0 sudo[369323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:12:37 compute-0 sudo[369323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:37 compute-0 sudo[369323]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:37 compute-0 sudo[369348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:12:37 compute-0 sudo[369348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:37 compute-0 sudo[369348]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:37 compute-0 sudo[369373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- inventory --format=json-pretty --filter-for-batch
Oct 11 09:12:37 compute-0 sudo[369373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:37 compute-0 ovn_controller[152945]: 2025-10-11T09:12:37Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:24:98 10.100.0.9
Oct 11 09:12:37 compute-0 ovn_controller[152945]: 2025-10-11T09:12:37Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:24:98 10.100.0.9
Oct 11 09:12:37 compute-0 ceph-mon[74313]: pgmap v2123: 321 pgs: 321 active+clean; 374 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 743 KiB/s wr, 86 op/s
Oct 11 09:12:37 compute-0 podman[369438]: 2025-10-11 09:12:37.900176106 +0000 UTC m=+0.067644524 container create bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:12:37 compute-0 sshd-session[369165]: Failed password for invalid user siteadmin from 155.4.244.179 port 1189 ssh2
Oct 11 09:12:37 compute-0 podman[369438]: 2025-10-11 09:12:37.868711455 +0000 UTC m=+0.036179943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:12:37 compute-0 systemd[1]: Started libpod-conmon-bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7.scope.
Oct 11 09:12:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:12:38 compute-0 podman[369438]: 2025-10-11 09:12:38.017528104 +0000 UTC m=+0.184996612 container init bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 09:12:38 compute-0 podman[369438]: 2025-10-11 09:12:38.025871035 +0000 UTC m=+0.193339473 container start bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:12:38 compute-0 podman[369438]: 2025-10-11 09:12:38.029370832 +0000 UTC m=+0.196839330 container attach bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 09:12:38 compute-0 brave_kilby[369454]: 167 167
Oct 11 09:12:38 compute-0 systemd[1]: libpod-bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7.scope: Deactivated successfully.
Oct 11 09:12:38 compute-0 conmon[369454]: conmon bdf9399e238f6e578a37 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7.scope/container/memory.events
Oct 11 09:12:38 compute-0 podman[369438]: 2025-10-11 09:12:38.037282921 +0000 UTC m=+0.204751369 container died bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:12:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-21c62a690f6168543425e8c60489777c09a6ffd06c39245337545ce0f121a193-merged.mount: Deactivated successfully.
Oct 11 09:12:38 compute-0 podman[369438]: 2025-10-11 09:12:38.089156257 +0000 UTC m=+0.256624705 container remove bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 09:12:38 compute-0 systemd[1]: libpod-conmon-bdf9399e238f6e578a374853b87c29d693ba9b5c618673ac7c48eecf1825ecb7.scope: Deactivated successfully.
Oct 11 09:12:38 compute-0 podman[369477]: 2025-10-11 09:12:38.38660796 +0000 UTC m=+0.087020080 container create 2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:12:38 compute-0 systemd[1]: Started libpod-conmon-2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53.scope.
Oct 11 09:12:38 compute-0 podman[369477]: 2025-10-11 09:12:38.357055112 +0000 UTC m=+0.057467312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:12:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33adebc7114fa456ae86066a354759e171629270519347b59951f8e464540dd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33adebc7114fa456ae86066a354759e171629270519347b59951f8e464540dd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33adebc7114fa456ae86066a354759e171629270519347b59951f8e464540dd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33adebc7114fa456ae86066a354759e171629270519347b59951f8e464540dd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:38 compute-0 podman[369477]: 2025-10-11 09:12:38.502986171 +0000 UTC m=+0.203398361 container init 2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:12:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 395 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 129 op/s
Oct 11 09:12:38 compute-0 podman[369477]: 2025-10-11 09:12:38.509040059 +0000 UTC m=+0.209452179 container start 2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:12:38 compute-0 podman[369477]: 2025-10-11 09:12:38.513566174 +0000 UTC m=+0.213978294 container attach 2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 09:12:38 compute-0 nova_compute[260935]: 2025-10-11 09:12:38.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:38 compute-0 ovn_controller[152945]: 2025-10-11T09:12:38Z|00992|binding|INFO|Releasing lport 2e151f7b-2e1d-434e-9750-cdf44a5aa034 from this chassis (sb_readonly=0)
Oct 11 09:12:38 compute-0 ovn_controller[152945]: 2025-10-11T09:12:38Z|00993|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:12:38 compute-0 ovn_controller[152945]: 2025-10-11T09:12:38Z|00994|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:12:38 compute-0 nova_compute[260935]: 2025-10-11 09:12:38.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:12:38 compute-0 sshd-session[369165]: Received disconnect from 155.4.244.179 port 1189:11: Bye Bye [preauth]
Oct 11 09:12:38 compute-0 sshd-session[369165]: Disconnected from invalid user siteadmin 155.4.244.179 port 1189 [preauth]
Oct 11 09:12:39 compute-0 ceph-mon[74313]: pgmap v2124: 321 pgs: 321 active+clean; 395 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 129 op/s
Oct 11 09:12:40 compute-0 nova_compute[260935]: 2025-10-11 09:12:40.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]: [
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:     {
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:         "available": false,
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:         "ceph_device": false,
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:         "lsm_data": {},
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:         "lvs": [],
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:         "path": "/dev/sr0",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:         "rejected_reasons": [
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "Insufficient space (<5GB)",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "Has a FileSystem"
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:         ],
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:         "sys_api": {
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "actuators": null,
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "device_nodes": "sr0",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "devname": "sr0",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "human_readable_size": "482.00 KB",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "id_bus": "ata",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "model": "QEMU DVD-ROM",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "nr_requests": "2",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "parent": "/dev/sr0",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "partitions": {},
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "path": "/dev/sr0",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "removable": "1",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "rev": "2.5+",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "ro": "0",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "rotational": "0",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "sas_address": "",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "sas_device_handle": "",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "scheduler_mode": "mq-deadline",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "sectors": 0,
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "sectorsize": "2048",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "size": 493568.0,
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "support_discard": "2048",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "type": "disk",
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:             "vendor": "QEMU"
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:         }
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]:     }
Oct 11 09:12:40 compute-0 stupefied_nightingale[369493]: ]
Oct 11 09:12:40 compute-0 systemd[1]: libpod-2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53.scope: Deactivated successfully.
Oct 11 09:12:40 compute-0 podman[369477]: 2025-10-11 09:12:40.445263752 +0000 UTC m=+2.145675872 container died 2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nightingale, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:12:40 compute-0 systemd[1]: libpod-2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53.scope: Consumed 2.014s CPU time.
Oct 11 09:12:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-33adebc7114fa456ae86066a354759e171629270519347b59951f8e464540dd3-merged.mount: Deactivated successfully.
Oct 11 09:12:40 compute-0 podman[369477]: 2025-10-11 09:12:40.504490811 +0000 UTC m=+2.204902921 container remove 2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nightingale, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:12:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 395 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.0 MiB/s wr, 68 op/s
Oct 11 09:12:40 compute-0 systemd[1]: libpod-conmon-2f5855283a76abacd3363befce745fd39d7007f94e0628738d0584b306de7a53.scope: Deactivated successfully.
Oct 11 09:12:40 compute-0 sudo[369373]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:12:40 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:12:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:12:40 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:12:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:12:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:12:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:12:40 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:12:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:12:40 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:12:40 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 7fce1dbd-60dd-41f0-a48b-7b690750ec82 does not exist
Oct 11 09:12:40 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 13192085-f10b-4996-a65c-9d264e846671 does not exist
Oct 11 09:12:40 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev b618ddd0-6be9-4499-ab34-f544a8c32942 does not exist
Oct 11 09:12:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:12:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:12:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:12:40 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:12:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:12:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:12:40 compute-0 sudo[371953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:12:40 compute-0 sudo[371953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:40 compute-0 sudo[371953]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:40 compute-0 sudo[371978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:12:40 compute-0 sudo[371978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:40 compute-0 sudo[371978]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:40 compute-0 sudo[372003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:12:40 compute-0 sudo[372003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:40 compute-0 sudo[372003]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:40 compute-0 sudo[372028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:12:40 compute-0 sudo[372028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:41 compute-0 podman[372096]: 2025-10-11 09:12:41.467455684 +0000 UTC m=+0.064537197 container create 555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 11 09:12:41 compute-0 systemd[1]: Started libpod-conmon-555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329.scope.
Oct 11 09:12:41 compute-0 podman[372096]: 2025-10-11 09:12:41.441214968 +0000 UTC m=+0.038296561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:12:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:12:41 compute-0 ceph-mon[74313]: pgmap v2125: 321 pgs: 321 active+clean; 395 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.0 MiB/s wr, 68 op/s
Oct 11 09:12:41 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:12:41 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:12:41 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:12:41 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:12:41 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:12:41 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:12:41 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:12:41 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:12:41 compute-0 podman[372096]: 2025-10-11 09:12:41.570240999 +0000 UTC m=+0.167322582 container init 555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 11 09:12:41 compute-0 podman[372096]: 2025-10-11 09:12:41.58219353 +0000 UTC m=+0.179275073 container start 555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:12:41 compute-0 podman[372096]: 2025-10-11 09:12:41.585990395 +0000 UTC m=+0.183071988 container attach 555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:12:41 compute-0 jovial_haibt[372112]: 167 167
Oct 11 09:12:41 compute-0 systemd[1]: libpod-555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329.scope: Deactivated successfully.
Oct 11 09:12:41 compute-0 podman[372096]: 2025-10-11 09:12:41.588526345 +0000 UTC m=+0.185607888 container died 555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 09:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf8296d0c8d71531457f45c3ac1eff0372c1514b82000e7f17f7481f24751edf-merged.mount: Deactivated successfully.
Oct 11 09:12:41 compute-0 podman[372096]: 2025-10-11 09:12:41.637344227 +0000 UTC m=+0.234425760 container remove 555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:12:41 compute-0 systemd[1]: libpod-conmon-555930723cd5acba86cad25228c5c85a1641dd053ef2ae52a8cddb62a66c2329.scope: Deactivated successfully.
Oct 11 09:12:41 compute-0 podman[372135]: 2025-10-11 09:12:41.892438057 +0000 UTC m=+0.061283887 container create bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:12:41 compute-0 systemd[1]: Started libpod-conmon-bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88.scope.
Oct 11 09:12:41 compute-0 podman[372135]: 2025-10-11 09:12:41.869300997 +0000 UTC m=+0.038146877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:12:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75cf768eaa14592f7594c4db80d34fcead89bd99914c3d82feaf8d5c75848c20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75cf768eaa14592f7594c4db80d34fcead89bd99914c3d82feaf8d5c75848c20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75cf768eaa14592f7594c4db80d34fcead89bd99914c3d82feaf8d5c75848c20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75cf768eaa14592f7594c4db80d34fcead89bd99914c3d82feaf8d5c75848c20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75cf768eaa14592f7594c4db80d34fcead89bd99914c3d82feaf8d5c75848c20/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:42 compute-0 podman[372135]: 2025-10-11 09:12:42.023960938 +0000 UTC m=+0.192806798 container init bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 09:12:42 compute-0 podman[372135]: 2025-10-11 09:12:42.031969379 +0000 UTC m=+0.200815219 container start bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 09:12:42 compute-0 podman[372135]: 2025-10-11 09:12:42.037529203 +0000 UTC m=+0.206375083 container attach bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 09:12:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 406 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 11 09:12:42 compute-0 unix_chkpwd[372172]: password check failed for user (root)
Oct 11 09:12:42 compute-0 sshd-session[372156]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:12:43 compute-0 epic_mayer[372151]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:12:43 compute-0 epic_mayer[372151]: --> relative data size: 1.0
Oct 11 09:12:43 compute-0 epic_mayer[372151]: --> All data devices are unavailable
Oct 11 09:12:43 compute-0 systemd[1]: libpod-bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88.scope: Deactivated successfully.
Oct 11 09:12:43 compute-0 podman[372135]: 2025-10-11 09:12:43.233350043 +0000 UTC m=+1.402195853 container died bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 09:12:43 compute-0 systemd[1]: libpod-bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88.scope: Consumed 1.129s CPU time.
Oct 11 09:12:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-75cf768eaa14592f7594c4db80d34fcead89bd99914c3d82feaf8d5c75848c20-merged.mount: Deactivated successfully.
Oct 11 09:12:43 compute-0 podman[372135]: 2025-10-11 09:12:43.291683737 +0000 UTC m=+1.460529547 container remove bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:12:43 compute-0 systemd[1]: libpod-conmon-bc15640458bd10f5ed0f940ea1a15d214ae3c5c17a2b38bc4f19c992ffcd6a88.scope: Deactivated successfully.
Oct 11 09:12:43 compute-0 sudo[372028]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:43 compute-0 sudo[372195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:12:43 compute-0 sudo[372195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:43 compute-0 sudo[372195]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:43 compute-0 sudo[372220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:12:43 compute-0 sudo[372220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:43 compute-0 sudo[372220]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:43 compute-0 sudo[372245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:12:43 compute-0 sudo[372245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:43 compute-0 ceph-mon[74313]: pgmap v2126: 321 pgs: 321 active+clean; 406 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 11 09:12:43 compute-0 sudo[372245]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:43 compute-0 sudo[372270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:12:43 compute-0 sudo[372270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.702 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.703 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.704 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.704 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.704 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.704 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.740 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.774 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.776 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Image id 95632eb9-5895-4e20-b760-0f149aadf400 yields fingerprint d427ed36e4acfaf36d5cf36bd49361b1db4ee571 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.776 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] image 95632eb9-5895-4e20-b760-0f149aadf400 at (/var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571): checking
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.777 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] image 95632eb9-5895-4e20-b760-0f149aadf400 at (/var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.780 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.781 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Image id 03f2fef0-11c0-48e1-b3a0-3e02d898739e yields fingerprint 9811042c7d73cc51997f7c966840f4b7728169a1 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.781 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] image 03f2fef0-11c0-48e1-b3a0-3e02d898739e at (/var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1): checking
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.781 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] image 03f2fef0-11c0-48e1-b3a0-3e02d898739e at (/var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.783 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] c176845c-89c0-4038-ba22-4ee79bd3ebfe is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.784 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] b75d8ded-515b-48ff-a6b6-28df88878996 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.784 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] 52be16b4-343a-4fd4-9041-39069a1fde2a is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.785 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.785 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Active base files: /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571 /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.785 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.785 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.786 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.786 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Oct 11 09:12:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:12:43 compute-0 nova_compute[260935]: 2025-10-11 09:12:43.994 2 INFO nova.compute.manager [None req-279cc5eb-9669-44e6-9f9b-a1a3947ef73e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Get console output
Oct 11 09:12:44 compute-0 nova_compute[260935]: 2025-10-11 09:12:44.002 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:12:44 compute-0 podman[372336]: 2025-10-11 09:12:44.101273136 +0000 UTC m=+0.077492326 container create f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 09:12:44 compute-0 systemd[1]: Started libpod-conmon-f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da.scope.
Oct 11 09:12:44 compute-0 podman[372336]: 2025-10-11 09:12:44.060045805 +0000 UTC m=+0.036265065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:12:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:12:44 compute-0 podman[372336]: 2025-10-11 09:12:44.20941731 +0000 UTC m=+0.185636530 container init f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:12:44 compute-0 podman[372336]: 2025-10-11 09:12:44.221582596 +0000 UTC m=+0.197801806 container start f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 09:12:44 compute-0 podman[372336]: 2025-10-11 09:12:44.22642317 +0000 UTC m=+0.202642340 container attach f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:12:44 compute-0 pedantic_wilbur[372353]: 167 167
Oct 11 09:12:44 compute-0 systemd[1]: libpod-f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da.scope: Deactivated successfully.
Oct 11 09:12:44 compute-0 podman[372336]: 2025-10-11 09:12:44.231064559 +0000 UTC m=+0.207283769 container died f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 09:12:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f28d6adb7ef11f9483590958c837af5f44a8f197e5a6c91a9dbe16b5ead5d83-merged.mount: Deactivated successfully.
Oct 11 09:12:44 compute-0 podman[372336]: 2025-10-11 09:12:44.29721219 +0000 UTC m=+0.273431400 container remove f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:12:44 compute-0 systemd[1]: libpod-conmon-f07f6f730d4eedfcd8b7a62c4499cf1a4ae6320069341beb97359d33a7c960da.scope: Deactivated successfully.
Oct 11 09:12:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 407 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:12:44 compute-0 podman[372376]: 2025-10-11 09:12:44.579016959 +0000 UTC m=+0.067262472 container create 6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 09:12:44 compute-0 podman[372376]: 2025-10-11 09:12:44.552292139 +0000 UTC m=+0.040537762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:12:44 compute-0 systemd[1]: Started libpod-conmon-6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c.scope.
Oct 11 09:12:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf93721894837aac5457766c172ba39cfd6abb9e33a4547025828c22500270b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf93721894837aac5457766c172ba39cfd6abb9e33a4547025828c22500270b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf93721894837aac5457766c172ba39cfd6abb9e33a4547025828c22500270b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf93721894837aac5457766c172ba39cfd6abb9e33a4547025828c22500270b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:44 compute-0 podman[372376]: 2025-10-11 09:12:44.713594404 +0000 UTC m=+0.201839977 container init 6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 09:12:44 compute-0 podman[372376]: 2025-10-11 09:12:44.733946007 +0000 UTC m=+0.222191540 container start 6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:12:44 compute-0 podman[372376]: 2025-10-11 09:12:44.739703836 +0000 UTC m=+0.227949409 container attach 6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 09:12:44 compute-0 sshd-session[372156]: Failed password for root from 165.232.82.252 port 53422 ssh2
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.013 2 DEBUG nova.compute.manager [req-6fa2d102-2289-4f5a-978d-098a0402b1b9 req-5ce0ac04-dcc0-4016-acfa-aad8d88b5ada e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-changed-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.013 2 DEBUG nova.compute.manager [req-6fa2d102-2289-4f5a-978d-098a0402b1b9 req-5ce0ac04-dcc0-4016-acfa-aad8d88b5ada e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Refreshing instance network info cache due to event network-changed-0ae3e094-fe06-40c1-8840-cf16ba40f7fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.014 2 DEBUG oslo_concurrency.lockutils [req-6fa2d102-2289-4f5a-978d-098a0402b1b9 req-5ce0ac04-dcc0-4016-acfa-aad8d88b5ada e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.014 2 DEBUG oslo_concurrency.lockutils [req-6fa2d102-2289-4f5a-978d-098a0402b1b9 req-5ce0ac04-dcc0-4016-acfa-aad8d88b5ada e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.014 2 DEBUG nova.network.neutron [req-6fa2d102-2289-4f5a-978d-098a0402b1b9 req-5ce0ac04-dcc0-4016-acfa-aad8d88b5ada e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Refreshing network info cache for port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:12:45 compute-0 sshd-session[372156]: Connection closed by authenticating user root 165.232.82.252 port 53422 [preauth]
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.136 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.137 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.137 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.137 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.138 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.139 2 INFO nova.compute.manager [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Terminating instance
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.140 2 DEBUG nova.compute.manager [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:45 compute-0 kernel: tap0ae3e094-fe (unregistering): left promiscuous mode
Oct 11 09:12:45 compute-0 NetworkManager[44960]: <info>  [1760173965.2054] device (tap0ae3e094-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:45 compute-0 ovn_controller[152945]: 2025-10-11T09:12:45Z|00995|binding|INFO|Releasing lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb from this chassis (sb_readonly=0)
Oct 11 09:12:45 compute-0 ovn_controller[152945]: 2025-10-11T09:12:45Z|00996|binding|INFO|Setting lport 0ae3e094-fe06-40c1-8840-cf16ba40f7fb down in Southbound
Oct 11 09:12:45 compute-0 ovn_controller[152945]: 2025-10-11T09:12:45Z|00997|binding|INFO|Removing iface tap0ae3e094-fe ovn-installed in OVS
Oct 11 09:12:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.233 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:24:98 10.100.0.9'], port_security=['fa:16:3e:36:24:98 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0dc02e8f-5afd-40a3-8de3-e5550e4ab57e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fba72350-6e1a-4542-9fe5-a0774a411936', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8dd2bd74-fe29-451a-969b-10f8c65ff537', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4107ca00-6bae-4723-b2f7-08200810b548, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0ae3e094-fe06-40c1-8840-cf16ba40f7fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:12:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.235 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb in datapath fba72350-6e1a-4542-9fe5-a0774a411936 unbound from our chassis
Oct 11 09:12:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.238 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fba72350-6e1a-4542-9fe5-a0774a411936, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:12:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.241 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2198b792-6611-494a-9724-a612433f1d8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.242 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 namespace which is not needed anymore
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:45 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000069.scope: Deactivated successfully.
Oct 11 09:12:45 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000069.scope: Consumed 12.775s CPU time.
Oct 11 09:12:45 compute-0 systemd-machined[215705]: Machine qemu-126-instance-00000069 terminated.
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.383 2 INFO nova.virt.libvirt.driver [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Instance destroyed successfully.
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.384 2 DEBUG nova.objects.instance [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'resources' on Instance uuid 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.419 2 DEBUG nova.virt.libvirt.vif [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:11:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1973542017',display_name='tempest-TestNetworkAdvancedServerOps-server-1973542017',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1973542017',id=105,image_ref='95632eb9-5895-4e20-b760-0f149aadf400',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjiTuLZzmAZ35tMhCOSVitFlf9QVc7XZ8MVeNKbs8Bxh7N8pYPwy6nmkT0CJ4moptetANiA+6a5Y0trCDigpxU0NxSZrm3lKn8b9iWNg2gUkyVkLT8AlFALq9EjTDW32g==',key_name='tempest-TestNetworkAdvancedServerOps-2078355019',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:12:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-7vnn79ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='95632eb9-5895-4e20-b760-0f149aadf400',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:12:26Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=0dc02e8f-5afd-40a3-8de3-e5550e4ab57e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.420 2 DEBUG nova.network.os_vif_util [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.420 2 DEBUG nova.network.os_vif_util [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.421 2 DEBUG os_vif [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.422 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ae3e094-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.428 2 INFO os_vif [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:24:98,bridge_name='br-int',has_traffic_filtering=True,id=0ae3e094-fe06-40c1-8840-cf16ba40f7fb,network=Network(fba72350-6e1a-4542-9fe5-a0774a411936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ae3e094-fe')
Oct 11 09:12:45 compute-0 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[369040]: [NOTICE]   (369044) : haproxy version is 2.8.14-c23fe91
Oct 11 09:12:45 compute-0 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[369040]: [NOTICE]   (369044) : path to executable is /usr/sbin/haproxy
Oct 11 09:12:45 compute-0 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[369040]: [WARNING]  (369044) : Exiting Master process...
Oct 11 09:12:45 compute-0 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[369040]: [WARNING]  (369044) : Exiting Master process...
Oct 11 09:12:45 compute-0 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[369040]: [ALERT]    (369044) : Current worker (369046) exited with code 143 (Terminated)
Oct 11 09:12:45 compute-0 neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936[369040]: [WARNING]  (369044) : All workers exited. Exiting... (0)
Oct 11 09:12:45 compute-0 systemd[1]: libpod-cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1.scope: Deactivated successfully.
Oct 11 09:12:45 compute-0 podman[372427]: 2025-10-11 09:12:45.465731382 +0000 UTC m=+0.077664450 container died cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:12:45 compute-0 quirky_chaum[372393]: {
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:     "0": [
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:         {
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "devices": [
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "/dev/loop3"
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             ],
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "lv_name": "ceph_lv0",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "lv_size": "21470642176",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "name": "ceph_lv0",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "tags": {
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.cluster_name": "ceph",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.crush_device_class": "",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.encrypted": "0",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.osd_id": "0",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.type": "block",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.vdo": "0"
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             },
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "type": "block",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "vg_name": "ceph_vg0"
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:         }
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:     ],
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:     "1": [
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:         {
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "devices": [
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "/dev/loop4"
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             ],
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "lv_name": "ceph_lv1",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "lv_size": "21470642176",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "name": "ceph_lv1",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "tags": {
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.cluster_name": "ceph",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.crush_device_class": "",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.encrypted": "0",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.osd_id": "1",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.type": "block",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.vdo": "0"
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             },
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "type": "block",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "vg_name": "ceph_vg1"
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:         }
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:     ],
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:     "2": [
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:         {
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "devices": [
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "/dev/loop5"
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             ],
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "lv_name": "ceph_lv2",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "lv_size": "21470642176",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "name": "ceph_lv2",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "tags": {
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.cluster_name": "ceph",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.crush_device_class": "",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.encrypted": "0",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.osd_id": "2",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.type": "block",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:                 "ceph.vdo": "0"
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             },
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "type": "block",
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:             "vg_name": "ceph_vg2"
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:         }
Oct 11 09:12:45 compute-0 quirky_chaum[372393]:     ]
Oct 11 09:12:45 compute-0 quirky_chaum[372393]: }
Oct 11 09:12:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7f85520362d24f3ac3e0a81a81c4d2407edd9ee81684410d35e390bcf954224-merged.mount: Deactivated successfully.
Oct 11 09:12:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1-userdata-shm.mount: Deactivated successfully.
Oct 11 09:12:45 compute-0 systemd[1]: libpod-6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c.scope: Deactivated successfully.
Oct 11 09:12:45 compute-0 podman[372376]: 2025-10-11 09:12:45.519399068 +0000 UTC m=+1.007644621 container died 6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:12:45 compute-0 podman[372427]: 2025-10-11 09:12:45.535453202 +0000 UTC m=+0.147386270 container cleanup cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 09:12:45 compute-0 systemd[1]: libpod-conmon-cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1.scope: Deactivated successfully.
Oct 11 09:12:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cf93721894837aac5457766c172ba39cfd6abb9e33a4547025828c22500270b-merged.mount: Deactivated successfully.
Oct 11 09:12:45 compute-0 ceph-mon[74313]: pgmap v2127: 321 pgs: 321 active+clean; 407 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:12:45 compute-0 podman[372376]: 2025-10-11 09:12:45.587339148 +0000 UTC m=+1.075584671 container remove 6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:12:45 compute-0 systemd[1]: libpod-conmon-6286ee8c120f154e75e417e7067df83a1cfcc896b2fc1f20e9b3a025f5b1bc7c.scope: Deactivated successfully.
Oct 11 09:12:45 compute-0 sudo[372270]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:45 compute-0 podman[372497]: 2025-10-11 09:12:45.640207762 +0000 UTC m=+0.067399257 container remove cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:12:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.655 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f247cab7-3750-40e7-99e0-83078f28bbc4]: (4, ('Sat Oct 11 09:12:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 (cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1)\ncca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1\nSat Oct 11 09:12:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 (cca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1)\ncca8b69c19e14fed71092700160d6e1286225281cd0deb548025e0d9e80d71b1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.658 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc68744d-b468-4e2e-bbf3-6ea0fc8ec1d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.659 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba72350-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:45 compute-0 kernel: tapfba72350-60: left promiscuous mode
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.678 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[189cec1f-0008-4ac3-8ef1-918030db234b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:45 compute-0 sudo[372510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:12:45 compute-0 sudo[372510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:45 compute-0 sudo[372510]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.704 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bb194699-3cf1-4e95-8154-690fb260f8a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.705 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[267670bb-f8b7-484c-aa72-1f59ad19e697]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.719 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d641136f-5141-4070-aa05-2ef5c1044879]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578996, 'reachable_time': 23273, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372541, 'error': None, 'target': 'ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.721 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fba72350-6e1a-4542-9fe5-a0774a411936 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:12:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:12:45.721 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb68f68-7067-4df3-b9bf-1466015c4ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:12:45 compute-0 systemd[1]: run-netns-ovnmeta\x2dfba72350\x2d6e1a\x2d4542\x2d9fe5\x2da0774a411936.mount: Deactivated successfully.
Oct 11 09:12:45 compute-0 sudo[372538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:12:45 compute-0 sudo[372538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:45 compute-0 sudo[372538]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.791 2 DEBUG nova.compute.manager [req-6c4479b2-9e9c-4163-bd53-07a1809b6b95 req-210832e3-f2dc-4d24-a4a4-d15f3510efed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-unplugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.791 2 DEBUG oslo_concurrency.lockutils [req-6c4479b2-9e9c-4163-bd53-07a1809b6b95 req-210832e3-f2dc-4d24-a4a4-d15f3510efed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.792 2 DEBUG oslo_concurrency.lockutils [req-6c4479b2-9e9c-4163-bd53-07a1809b6b95 req-210832e3-f2dc-4d24-a4a4-d15f3510efed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.792 2 DEBUG oslo_concurrency.lockutils [req-6c4479b2-9e9c-4163-bd53-07a1809b6b95 req-210832e3-f2dc-4d24-a4a4-d15f3510efed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.792 2 DEBUG nova.compute.manager [req-6c4479b2-9e9c-4163-bd53-07a1809b6b95 req-210832e3-f2dc-4d24-a4a4-d15f3510efed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] No waiting events found dispatching network-vif-unplugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.792 2 DEBUG nova.compute.manager [req-6c4479b2-9e9c-4163-bd53-07a1809b6b95 req-210832e3-f2dc-4d24-a4a4-d15f3510efed e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-unplugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:12:45 compute-0 sudo[372564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:12:45 compute-0 sudo[372564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:45 compute-0 sudo[372564]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.860 2 INFO nova.virt.libvirt.driver [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Deleting instance files /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_del
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.861 2 INFO nova.virt.libvirt.driver [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Deletion of /var/lib/nova/instances/0dc02e8f-5afd-40a3-8de3-e5550e4ab57e_del complete
Oct 11 09:12:45 compute-0 sudo[372589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:12:45 compute-0 sudo[372589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.936 2 INFO nova.compute.manager [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.937 2 DEBUG oslo.service.loopingcall [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.938 2 DEBUG nova.compute.manager [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:12:45 compute-0 nova_compute[260935]: 2025-10-11 09:12:45.938 2 DEBUG nova.network.neutron [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:12:46 compute-0 podman[372656]: 2025-10-11 09:12:46.353750022 +0000 UTC m=+0.066508082 container create b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:12:46 compute-0 systemd[1]: Started libpod-conmon-b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386.scope.
Oct 11 09:12:46 compute-0 podman[372656]: 2025-10-11 09:12:46.327248358 +0000 UTC m=+0.040006418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:12:46 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:12:46 compute-0 podman[372656]: 2025-10-11 09:12:46.455710604 +0000 UTC m=+0.168468704 container init b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 09:12:46 compute-0 podman[372656]: 2025-10-11 09:12:46.467091539 +0000 UTC m=+0.179849589 container start b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:12:46 compute-0 podman[372656]: 2025-10-11 09:12:46.47074644 +0000 UTC m=+0.183504610 container attach b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 09:12:46 compute-0 exciting_chatelet[372673]: 167 167
Oct 11 09:12:46 compute-0 systemd[1]: libpod-b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386.scope: Deactivated successfully.
Oct 11 09:12:46 compute-0 podman[372656]: 2025-10-11 09:12:46.477184979 +0000 UTC m=+0.189943069 container died b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:12:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccc4fbb4d6cff7e2413430ea8793c414c3476aef0a2e71c4f3621404ca12387c-merged.mount: Deactivated successfully.
Oct 11 09:12:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 407 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:12:46 compute-0 podman[372656]: 2025-10-11 09:12:46.525346932 +0000 UTC m=+0.238105012 container remove b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chatelet, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 09:12:46 compute-0 systemd[1]: libpod-conmon-b52b5752a82029f6ce3f01e6e0122175631f17bd3623bd0f46783ab0f8e08386.scope: Deactivated successfully.
Oct 11 09:12:46 compute-0 podman[372697]: 2025-10-11 09:12:46.776981547 +0000 UTC m=+0.061198045 container create 67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:12:46 compute-0 systemd[1]: Started libpod-conmon-67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b.scope.
Oct 11 09:12:46 compute-0 podman[372697]: 2025-10-11 09:12:46.754795603 +0000 UTC m=+0.039012151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:12:46 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7ad4d5a95dd5ce376dffcbb492274a75e8ddf6791aa75e9eba9e07d207395c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7ad4d5a95dd5ce376dffcbb492274a75e8ddf6791aa75e9eba9e07d207395c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7ad4d5a95dd5ce376dffcbb492274a75e8ddf6791aa75e9eba9e07d207395c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7ad4d5a95dd5ce376dffcbb492274a75e8ddf6791aa75e9eba9e07d207395c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:12:46 compute-0 podman[372697]: 2025-10-11 09:12:46.881625613 +0000 UTC m=+0.165842141 container init 67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_booth, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 09:12:46 compute-0 podman[372697]: 2025-10-11 09:12:46.889031618 +0000 UTC m=+0.173248116 container start 67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_booth, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:12:46 compute-0 podman[372697]: 2025-10-11 09:12:46.892595937 +0000 UTC m=+0.176812435 container attach 67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:12:47 compute-0 nova_compute[260935]: 2025-10-11 09:12:47.206 2 DEBUG nova.network.neutron [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:12:47 compute-0 nova_compute[260935]: 2025-10-11 09:12:47.246 2 INFO nova.compute.manager [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Took 1.31 seconds to deallocate network for instance.
Oct 11 09:12:47 compute-0 nova_compute[260935]: 2025-10-11 09:12:47.325 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:12:47 compute-0 nova_compute[260935]: 2025-10-11 09:12:47.325 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:12:47 compute-0 nova_compute[260935]: 2025-10-11 09:12:47.424 2 DEBUG nova.compute.manager [req-0ec9682b-abe9-4e62-81fe-84ac3cfdf630 req-1bca8125-a39d-4329-ba2f-a35ad0229401 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-deleted-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:12:47 compute-0 ceph-mon[74313]: pgmap v2128: 321 pgs: 321 active+clean; 407 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:12:47 compute-0 nova_compute[260935]: 2025-10-11 09:12:47.814 2 DEBUG oslo_concurrency.processutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:12:47 compute-0 nova_compute[260935]: 2025-10-11 09:12:47.876 2 DEBUG nova.network.neutron [req-6fa2d102-2289-4f5a-978d-098a0402b1b9 req-5ce0ac04-dcc0-4016-acfa-aad8d88b5ada e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Updated VIF entry in instance network info cache for port 0ae3e094-fe06-40c1-8840-cf16ba40f7fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:12:47 compute-0 nova_compute[260935]: 2025-10-11 09:12:47.878 2 DEBUG nova.network.neutron [req-6fa2d102-2289-4f5a-978d-098a0402b1b9 req-5ce0ac04-dcc0-4016-acfa-aad8d88b5ada e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Updating instance_info_cache with network_info: [{"id": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "address": "fa:16:3e:36:24:98", "network": {"id": "fba72350-6e1a-4542-9fe5-a0774a411936", "bridge": "br-int", "label": "tempest-network-smoke--284994200", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ae3e094-fe", "ovs_interfaceid": "0ae3e094-fe06-40c1-8840-cf16ba40f7fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:12:47 compute-0 clever_booth[372714]: {
Oct 11 09:12:47 compute-0 clever_booth[372714]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:12:47 compute-0 clever_booth[372714]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:12:47 compute-0 clever_booth[372714]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:12:47 compute-0 clever_booth[372714]:         "osd_id": 2,
Oct 11 09:12:47 compute-0 clever_booth[372714]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:12:47 compute-0 clever_booth[372714]:         "type": "bluestore"
Oct 11 09:12:47 compute-0 clever_booth[372714]:     },
Oct 11 09:12:47 compute-0 clever_booth[372714]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:12:47 compute-0 clever_booth[372714]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:12:47 compute-0 clever_booth[372714]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:12:47 compute-0 clever_booth[372714]:         "osd_id": 0,
Oct 11 09:12:47 compute-0 clever_booth[372714]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:12:47 compute-0 clever_booth[372714]:         "type": "bluestore"
Oct 11 09:12:47 compute-0 clever_booth[372714]:     },
Oct 11 09:12:47 compute-0 clever_booth[372714]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:12:47 compute-0 clever_booth[372714]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:12:47 compute-0 clever_booth[372714]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:12:47 compute-0 clever_booth[372714]:         "osd_id": 1,
Oct 11 09:12:47 compute-0 clever_booth[372714]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:12:47 compute-0 clever_booth[372714]:         "type": "bluestore"
Oct 11 09:12:47 compute-0 clever_booth[372714]:     }
Oct 11 09:12:47 compute-0 clever_booth[372714]: }
Oct 11 09:12:47 compute-0 nova_compute[260935]: 2025-10-11 09:12:47.958 2 DEBUG oslo_concurrency.lockutils [req-6fa2d102-2289-4f5a-978d-098a0402b1b9 req-5ce0ac04-dcc0-4016-acfa-aad8d88b5ada e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:12:47 compute-0 systemd[1]: libpod-67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b.scope: Deactivated successfully.
Oct 11 09:12:47 compute-0 systemd[1]: libpod-67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b.scope: Consumed 1.073s CPU time.
Oct 11 09:12:47 compute-0 nova_compute[260935]: 2025-10-11 09:12:47.981 2 DEBUG nova.compute.manager [req-38d8bc27-7a7a-4a74-91ba-14c1fd3ff606 req-e307caf5-70ae-4164-ace9-6d8124c40ead e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:12:47 compute-0 nova_compute[260935]: 2025-10-11 09:12:47.982 2 DEBUG oslo_concurrency.lockutils [req-38d8bc27-7a7a-4a74-91ba-14c1fd3ff606 req-e307caf5-70ae-4164-ace9-6d8124c40ead e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:12:47 compute-0 nova_compute[260935]: 2025-10-11 09:12:47.982 2 DEBUG oslo_concurrency.lockutils [req-38d8bc27-7a7a-4a74-91ba-14c1fd3ff606 req-e307caf5-70ae-4164-ace9-6d8124c40ead e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:12:47 compute-0 nova_compute[260935]: 2025-10-11 09:12:47.983 2 DEBUG oslo_concurrency.lockutils [req-38d8bc27-7a7a-4a74-91ba-14c1fd3ff606 req-e307caf5-70ae-4164-ace9-6d8124c40ead e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:12:47 compute-0 nova_compute[260935]: 2025-10-11 09:12:47.983 2 DEBUG nova.compute.manager [req-38d8bc27-7a7a-4a74-91ba-14c1fd3ff606 req-e307caf5-70ae-4164-ace9-6d8124c40ead e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] No waiting events found dispatching network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:12:47 compute-0 nova_compute[260935]: 2025-10-11 09:12:47.983 2 WARNING nova.compute.manager [req-38d8bc27-7a7a-4a74-91ba-14c1fd3ff606 req-e307caf5-70ae-4164-ace9-6d8124c40ead e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Received unexpected event network-vif-plugged-0ae3e094-fe06-40c1-8840-cf16ba40f7fb for instance with vm_state deleted and task_state None.
Oct 11 09:12:48 compute-0 podman[372748]: 2025-10-11 09:12:48.012585948 +0000 UTC m=+0.035893065 container died 67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_booth, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:12:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff7ad4d5a95dd5ce376dffcbb492274a75e8ddf6791aa75e9eba9e07d207395c-merged.mount: Deactivated successfully.
Oct 11 09:12:48 compute-0 podman[372748]: 2025-10-11 09:12:48.065924514 +0000 UTC m=+0.089231611 container remove 67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_booth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:12:48 compute-0 systemd[1]: libpod-conmon-67b6266006fe5b3a3992b945ca77b6ef36ea3e8ef70a5925f39edaebf1fd255b.scope: Deactivated successfully.
Oct 11 09:12:48 compute-0 sudo[372589]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:12:48 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:12:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:12:48 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:12:48 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 765d1411-4811-4afa-9fc9-0d80c03d3d07 does not exist
Oct 11 09:12:48 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev aaf70e23-50fb-464f-a3f2-aaa56030cc1a does not exist
Oct 11 09:12:48 compute-0 sudo[372781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:12:48 compute-0 sudo[372781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:48 compute-0 sudo[372781]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:48 compute-0 sudo[372806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:12:48 compute-0 sudo[372806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:12:48 compute-0 sudo[372806]: pam_unix(sudo:session): session closed for user root
Oct 11 09:12:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:12:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3440920978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:12:48 compute-0 nova_compute[260935]: 2025-10-11 09:12:48.390 2 DEBUG oslo_concurrency.processutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:12:48 compute-0 nova_compute[260935]: 2025-10-11 09:12:48.400 2 DEBUG nova.compute.provider_tree [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:12:48 compute-0 nova_compute[260935]: 2025-10-11 09:12:48.418 2 DEBUG nova.scheduler.client.report [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:12:48 compute-0 nova_compute[260935]: 2025-10-11 09:12:48.443 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:12:48 compute-0 nova_compute[260935]: 2025-10-11 09:12:48.470 2 INFO nova.scheduler.client.report [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Deleted allocations for instance 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e
Oct 11 09:12:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 09:12:48 compute-0 nova_compute[260935]: 2025-10-11 09:12:48.563 2 DEBUG oslo_concurrency.lockutils [None req-ed823c19-8ddf-431b-ba34-91b643184e33 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "0dc02e8f-5afd-40a3-8de3-e5550e4ab57e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:12:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:12:49 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:12:49 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:12:49 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3440920978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:12:49 compute-0 ceph-mon[74313]: pgmap v2129: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 09:12:50 compute-0 nova_compute[260935]: 2025-10-11 09:12:50.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:50 compute-0 nova_compute[260935]: 2025-10-11 09:12:50.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2130: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 108 KiB/s wr, 49 op/s
Oct 11 09:12:51 compute-0 ceph-mon[74313]: pgmap v2130: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 108 KiB/s wr, 49 op/s
Oct 11 09:12:51 compute-0 ovn_controller[152945]: 2025-10-11T09:12:51Z|00998|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:12:51 compute-0 ovn_controller[152945]: 2025-10-11T09:12:51Z|00999|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:12:51 compute-0 nova_compute[260935]: 2025-10-11 09:12:51.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:51 compute-0 ovn_controller[152945]: 2025-10-11T09:12:51Z|01000|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:12:51 compute-0 ovn_controller[152945]: 2025-10-11T09:12:51Z|01001|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:12:51 compute-0 nova_compute[260935]: 2025-10-11 09:12:51.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 108 KiB/s wr, 49 op/s
Oct 11 09:12:53 compute-0 ceph-mon[74313]: pgmap v2131: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 108 KiB/s wr, 49 op/s
Oct 11 09:12:53 compute-0 podman[372834]: 2025-10-11 09:12:53.81046951 +0000 UTC m=+0.095483824 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 09:12:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:12:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2132: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 19 KiB/s wr, 33 op/s
Oct 11 09:12:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:12:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:12:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:12:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:12:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:12:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:12:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:12:54
Oct 11 09:12:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:12:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:12:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'backups', '.mgr']
Oct 11 09:12:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:12:55 compute-0 nova_compute[260935]: 2025-10-11 09:12:55.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:12:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:12:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:12:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:12:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:12:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:12:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:12:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:12:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:12:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:12:55 compute-0 nova_compute[260935]: 2025-10-11 09:12:55.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:12:55 compute-0 ceph-mon[74313]: pgmap v2132: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 19 KiB/s wr, 33 op/s
Oct 11 09:12:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2133: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Oct 11 09:12:57 compute-0 ceph-mon[74313]: pgmap v2133: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Oct 11 09:12:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Oct 11 09:12:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:12:59 compute-0 ceph-mon[74313]: pgmap v2134: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Oct 11 09:12:59 compute-0 podman[372853]: 2025-10-11 09:12:59.778448969 +0000 UTC m=+0.077719503 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 09:13:00 compute-0 nova_compute[260935]: 2025-10-11 09:13:00.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:00 compute-0 nova_compute[260935]: 2025-10-11 09:13:00.381 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760173965.3812284, 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:13:00 compute-0 nova_compute[260935]: 2025-10-11 09:13:00.382 2 INFO nova.compute.manager [-] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] VM Stopped (Lifecycle Event)
Oct 11 09:13:00 compute-0 nova_compute[260935]: 2025-10-11 09:13:00.413 2 DEBUG nova.compute.manager [None req-a55a8cd3-09a1-468a-a91f-edce24b0989b - - - - - -] [instance: 0dc02e8f-5afd-40a3-8de3-e5550e4ab57e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:13:00 compute-0 nova_compute[260935]: 2025-10-11 09:13:00.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2135: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:01 compute-0 ceph-mon[74313]: pgmap v2135: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:03 compute-0 ceph-mon[74313]: pgmap v2136: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:13:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:04 compute-0 podman[372873]: 2025-10-11 09:13:04.793048058 +0000 UTC m=+0.096056720 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 11 09:13:04 compute-0 podman[372874]: 2025-10-11 09:13:04.857093451 +0000 UTC m=+0.139620466 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002627377275021163 of space, bias 1.0, pg target 0.788213182506349 quantized to 32 (current 32)
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:13:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:13:05 compute-0 nova_compute[260935]: 2025-10-11 09:13:05.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:05 compute-0 nova_compute[260935]: 2025-10-11 09:13:05.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:05 compute-0 ceph-mon[74313]: pgmap v2137: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2138: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:07 compute-0 ceph-mon[74313]: pgmap v2138: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.700436) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173987700530, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1712, "num_deletes": 253, "total_data_size": 2756032, "memory_usage": 2788768, "flush_reason": "Manual Compaction"}
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173987718076, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2684307, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43364, "largest_seqno": 45075, "table_properties": {"data_size": 2676323, "index_size": 4862, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16416, "raw_average_key_size": 20, "raw_value_size": 2660367, "raw_average_value_size": 3276, "num_data_blocks": 216, "num_entries": 812, "num_filter_entries": 812, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173819, "oldest_key_time": 1760173819, "file_creation_time": 1760173987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 17688 microseconds, and 11220 cpu microseconds.
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.718136) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2684307 bytes OK
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.718161) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.719738) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.719764) EVENT_LOG_v1 {"time_micros": 1760173987719755, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.719792) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2748643, prev total WAL file size 2748643, number of live WAL files 2.
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.721413) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2621KB)], [101(7615KB)]
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173987721479, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10482511, "oldest_snapshot_seqno": -1}
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6529 keys, 8786356 bytes, temperature: kUnknown
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173987779287, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8786356, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8742680, "index_size": 26220, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 169799, "raw_average_key_size": 26, "raw_value_size": 8625435, "raw_average_value_size": 1321, "num_data_blocks": 1026, "num_entries": 6529, "num_filter_entries": 6529, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760173987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.779672) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8786356 bytes
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.781355) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.9 rd, 151.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 7.4 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 7050, records dropped: 521 output_compression: NoCompression
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.781384) EVENT_LOG_v1 {"time_micros": 1760173987781371, "job": 60, "event": "compaction_finished", "compaction_time_micros": 57949, "compaction_time_cpu_micros": 41810, "output_level": 6, "num_output_files": 1, "total_output_size": 8786356, "num_input_records": 7050, "num_output_records": 6529, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173987782658, "job": 60, "event": "table_file_deletion", "file_number": 103}
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760173987785427, "job": 60, "event": "table_file_deletion", "file_number": 101}
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.721267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.785622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.785632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.785638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.785642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:13:07 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:13:07.785646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:13:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:08 compute-0 nova_compute[260935]: 2025-10-11 09:13:08.787 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:13:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:13:09 compute-0 ceph-mon[74313]: pgmap v2139: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:10 compute-0 nova_compute[260935]: 2025-10-11 09:13:10.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:10 compute-0 nova_compute[260935]: 2025-10-11 09:13:10.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:11 compute-0 ceph-mon[74313]: pgmap v2140: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2141: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:13 compute-0 ceph-mon[74313]: pgmap v2141: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:13:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:15 compute-0 nova_compute[260935]: 2025-10-11 09:13:15.054 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:13:15 compute-0 nova_compute[260935]: 2025-10-11 09:13:15.055 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:13:15 compute-0 nova_compute[260935]: 2025-10-11 09:13:15.078 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:13:15 compute-0 nova_compute[260935]: 2025-10-11 09:13:15.186 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:13:15 compute-0 nova_compute[260935]: 2025-10-11 09:13:15.187 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:13:15 compute-0 nova_compute[260935]: 2025-10-11 09:13:15.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:15 compute-0 nova_compute[260935]: 2025-10-11 09:13:15.200 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:13:15 compute-0 nova_compute[260935]: 2025-10-11 09:13:15.201 2 INFO nova.compute.claims [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:13:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:15.208 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:13:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:15.209 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:13:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:15.210 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:13:15 compute-0 nova_compute[260935]: 2025-10-11 09:13:15.419 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:13:15 compute-0 nova_compute[260935]: 2025-10-11 09:13:15.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:15 compute-0 ceph-mon[74313]: pgmap v2142: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:13:15 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1103627397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:13:15 compute-0 nova_compute[260935]: 2025-10-11 09:13:15.926 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:13:15 compute-0 nova_compute[260935]: 2025-10-11 09:13:15.932 2 DEBUG nova.compute.provider_tree [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:13:15 compute-0 nova_compute[260935]: 2025-10-11 09:13:15.950 2 DEBUG nova.scheduler.client.report [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:13:15 compute-0 nova_compute[260935]: 2025-10-11 09:13:15.973 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:13:15 compute-0 nova_compute[260935]: 2025-10-11 09:13:15.974 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.025 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.026 2 DEBUG nova.network.neutron [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.054 2 INFO nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.089 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.203 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.205 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.206 2 INFO nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Creating image(s)
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.241 2 DEBUG nova.storage.rbd_utils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 4b5bb809-c821-466d-9a47-b5a6aa337212_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.280 2 DEBUG nova.storage.rbd_utils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 4b5bb809-c821-466d-9a47-b5a6aa337212_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.317 2 DEBUG nova.storage.rbd_utils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 4b5bb809-c821-466d-9a47-b5a6aa337212_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.322 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.370 2 DEBUG nova.policy [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a213c3877fc144a3af0be3c3d853f999', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca4b15770e784f45910b630937562cb6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.416 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.417 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.418 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.418 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.449 2 DEBUG nova.storage.rbd_utils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 4b5bb809-c821-466d-9a47-b5a6aa337212_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.454 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 4b5bb809-c821-466d-9a47-b5a6aa337212_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:13:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.789 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 4b5bb809-c821-466d-9a47-b5a6aa337212_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:13:16 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1103627397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:13:16 compute-0 nova_compute[260935]: 2025-10-11 09:13:16.889 2 DEBUG nova.storage.rbd_utils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] resizing rbd image 4b5bb809-c821-466d-9a47-b5a6aa337212_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:13:17 compute-0 nova_compute[260935]: 2025-10-11 09:13:17.029 2 DEBUG nova.objects.instance [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'migration_context' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:13:17 compute-0 nova_compute[260935]: 2025-10-11 09:13:17.058 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:13:17 compute-0 nova_compute[260935]: 2025-10-11 09:13:17.059 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Ensure instance console log exists: /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:13:17 compute-0 nova_compute[260935]: 2025-10-11 09:13:17.061 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:13:17 compute-0 nova_compute[260935]: 2025-10-11 09:13:17.062 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:13:17 compute-0 nova_compute[260935]: 2025-10-11 09:13:17.062 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:13:17 compute-0 nova_compute[260935]: 2025-10-11 09:13:17.471 2 DEBUG nova.network.neutron [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Successfully created port: 072bf760-1ffd-4b15-bdd9-58b9b670feb8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:13:17 compute-0 ceph-mon[74313]: pgmap v2143: 321 pgs: 321 active+clean; 328 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:13:18 compute-0 nova_compute[260935]: 2025-10-11 09:13:18.501 2 DEBUG nova.network.neutron [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Successfully updated port: 072bf760-1ffd-4b15-bdd9-58b9b670feb8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:13:18 compute-0 nova_compute[260935]: 2025-10-11 09:13:18.522 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:13:18 compute-0 nova_compute[260935]: 2025-10-11 09:13:18.522 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquired lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:13:18 compute-0 nova_compute[260935]: 2025-10-11 09:13:18.522 2 DEBUG nova.network.neutron [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:13:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2144: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:13:18 compute-0 nova_compute[260935]: 2025-10-11 09:13:18.699 2 DEBUG nova.compute.manager [req-a287590b-9955-4caa-b838-4fbdab33613b req-4cf07e00-67be-412e-b923-2e02fd29b96b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-changed-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:13:18 compute-0 nova_compute[260935]: 2025-10-11 09:13:18.700 2 DEBUG nova.compute.manager [req-a287590b-9955-4caa-b838-4fbdab33613b req-4cf07e00-67be-412e-b923-2e02fd29b96b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Refreshing instance network info cache due to event network-changed-072bf760-1ffd-4b15-bdd9-58b9b670feb8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:13:18 compute-0 nova_compute[260935]: 2025-10-11 09:13:18.700 2 DEBUG oslo_concurrency.lockutils [req-a287590b-9955-4caa-b838-4fbdab33613b req-4cf07e00-67be-412e-b923-2e02fd29b96b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:13:18 compute-0 nova_compute[260935]: 2025-10-11 09:13:18.871 2 DEBUG nova.network.neutron [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:13:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:13:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:18.940 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:14:7e 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-38f0ab9f-8af5-45d6-b50f-d8f0d954376a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38f0ab9f-8af5-45d6-b50f-d8f0d954376a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7020c6a7808745b3bfdbd16acc7ff39e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e4e40f64-b9d8-48f8-9bae-4a183b1a906f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=697c1af8-d05b-4380-bc71-c07e452471c7) old=Port_Binding(mac=['fa:16:3e:5c:14:7e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-38f0ab9f-8af5-45d6-b50f-d8f0d954376a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38f0ab9f-8af5-45d6-b50f-d8f0d954376a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7020c6a7808745b3bfdbd16acc7ff39e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:13:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:18.942 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 697c1af8-d05b-4380-bc71-c07e452471c7 in datapath 38f0ab9f-8af5-45d6-b50f-d8f0d954376a updated
Oct 11 09:13:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:18.946 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 38f0ab9f-8af5-45d6-b50f-d8f0d954376a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:13:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:18.947 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[75ef80d6-e79e-4689-afe5-1a9b6cf1da88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:19 compute-0 ceph-mon[74313]: pgmap v2144: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.193 2 DEBUG nova.network.neutron [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updating instance_info_cache with network_info: [{"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.229 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Releasing lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.230 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance network_info: |[{"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.231 2 DEBUG oslo_concurrency.lockutils [req-a287590b-9955-4caa-b838-4fbdab33613b req-4cf07e00-67be-412e-b923-2e02fd29b96b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.231 2 DEBUG nova.network.neutron [req-a287590b-9955-4caa-b838-4fbdab33613b req-4cf07e00-67be-412e-b923-2e02fd29b96b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Refreshing network info cache for port 072bf760-1ffd-4b15-bdd9-58b9b670feb8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.237 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Start _get_guest_xml network_info=[{"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.246 2 WARNING nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.252 2 DEBUG nova.virt.libvirt.host [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.252 2 DEBUG nova.virt.libvirt.host [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.264 2 DEBUG nova.virt.libvirt.host [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.265 2 DEBUG nova.virt.libvirt.host [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.265 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.266 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.267 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.267 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.267 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.268 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.268 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.268 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.269 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.269 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.269 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.270 2 DEBUG nova.virt.hardware [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.274 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:13:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:13:20 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2231420734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.731 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.765 2 DEBUG nova.storage.rbd_utils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 4b5bb809-c821-466d-9a47-b5a6aa337212_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:13:20 compute-0 nova_compute[260935]: 2025-10-11 09:13:20.771 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:13:20 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2231420734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:13:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:13:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/174029542' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.271 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.275 2 DEBUG nova.virt.libvirt.vif [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:13:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-429546179',display_name='tempest-TestNetworkAdvancedServerOps-server-429546179',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-429546179',id=106,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI+dqizmLaYC2433qIpQLnnx/a3P6lrbi1ICEZMm0vIsSZegV5kK7h8RBLfeGIs6fvNJtxm5wiio76URdFktX8daCex/YjqNpM8rlSnMQuWZwdYT6LukjvR6b7qleLKRfA==',key_name='tempest-TestNetworkAdvancedServerOps-1430850582',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-2d0dbaqa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:13:16Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=4b5bb809-c821-466d-9a47-b5a6aa337212,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.276 2 DEBUG nova.network.os_vif_util [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.277 2 DEBUG nova.network.os_vif_util [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.279 2 DEBUG nova.objects.instance [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.297 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:13:21 compute-0 nova_compute[260935]:   <uuid>4b5bb809-c821-466d-9a47-b5a6aa337212</uuid>
Oct 11 09:13:21 compute-0 nova_compute[260935]:   <name>instance-0000006a</name>
Oct 11 09:13:21 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:13:21 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:13:21 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-429546179</nova:name>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:13:20</nova:creationTime>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:13:21 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:13:21 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:13:21 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:13:21 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:13:21 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:13:21 compute-0 nova_compute[260935]:         <nova:user uuid="a213c3877fc144a3af0be3c3d853f999">tempest-TestNetworkAdvancedServerOps-1304559157-project-member</nova:user>
Oct 11 09:13:21 compute-0 nova_compute[260935]:         <nova:project uuid="ca4b15770e784f45910b630937562cb6">tempest-TestNetworkAdvancedServerOps-1304559157</nova:project>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:13:21 compute-0 nova_compute[260935]:         <nova:port uuid="072bf760-1ffd-4b15-bdd9-58b9b670feb8">
Oct 11 09:13:21 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:13:21 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:13:21 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <system>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <entry name="serial">4b5bb809-c821-466d-9a47-b5a6aa337212</entry>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <entry name="uuid">4b5bb809-c821-466d-9a47-b5a6aa337212</entry>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     </system>
Oct 11 09:13:21 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:13:21 compute-0 nova_compute[260935]:   <os>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:   </os>
Oct 11 09:13:21 compute-0 nova_compute[260935]:   <features>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:   </features>
Oct 11 09:13:21 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:13:21 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:13:21 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/4b5bb809-c821-466d-9a47-b5a6aa337212_disk">
Oct 11 09:13:21 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       </source>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:13:21 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/4b5bb809-c821-466d-9a47-b5a6aa337212_disk.config">
Oct 11 09:13:21 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       </source>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:13:21 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:c7:58:7f"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <target dev="tap072bf760-1f"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/console.log" append="off"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <video>
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     </video>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:13:21 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:13:21 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:13:21 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:13:21 compute-0 nova_compute[260935]: </domain>
Oct 11 09:13:21 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.299 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Preparing to wait for external event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.299 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.300 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.301 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.302 2 DEBUG nova.virt.libvirt.vif [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:13:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-429546179',display_name='tempest-TestNetworkAdvancedServerOps-server-429546179',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-429546179',id=106,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI+dqizmLaYC2433qIpQLnnx/a3P6lrbi1ICEZMm0vIsSZegV5kK7h8RBLfeGIs6fvNJtxm5wiio76URdFktX8daCex/YjqNpM8rlSnMQuWZwdYT6LukjvR6b7qleLKRfA==',key_name='tempest-TestNetworkAdvancedServerOps-1430850582',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-2d0dbaqa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:13:16Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=4b5bb809-c821-466d-9a47-b5a6aa337212,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.302 2 DEBUG nova.network.os_vif_util [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.303 2 DEBUG nova.network.os_vif_util [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.304 2 DEBUG os_vif [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.306 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.307 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.312 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap072bf760-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.313 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap072bf760-1f, col_values=(('external_ids', {'iface-id': '072bf760-1ffd-4b15-bdd9-58b9b670feb8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:58:7f', 'vm-uuid': '4b5bb809-c821-466d-9a47-b5a6aa337212'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:13:21 compute-0 NetworkManager[44960]: <info>  [1760174001.3475] manager: (tap072bf760-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/413)
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.358 2 INFO os_vif [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f')
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.425 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.425 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.426 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No VIF found with MAC fa:16:3e:c7:58:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.427 2 INFO nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Using config drive
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.453 2 DEBUG nova.storage.rbd_utils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 4b5bb809-c821-466d-9a47-b5a6aa337212_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.838 2 DEBUG nova.network.neutron [req-a287590b-9955-4caa-b838-4fbdab33613b req-4cf07e00-67be-412e-b923-2e02fd29b96b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updated VIF entry in instance network info cache for port 072bf760-1ffd-4b15-bdd9-58b9b670feb8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.839 2 DEBUG nova.network.neutron [req-a287590b-9955-4caa-b838-4fbdab33613b req-4cf07e00-67be-412e-b923-2e02fd29b96b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updating instance_info_cache with network_info: [{"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:13:21 compute-0 ceph-mon[74313]: pgmap v2145: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:13:21 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/174029542' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:13:21 compute-0 nova_compute[260935]: 2025-10-11 09:13:21.857 2 DEBUG oslo_concurrency.lockutils [req-a287590b-9955-4caa-b838-4fbdab33613b req-4cf07e00-67be-412e-b923-2e02fd29b96b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:13:22 compute-0 nova_compute[260935]: 2025-10-11 09:13:22.009 2 INFO nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Creating config drive at /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/disk.config
Oct 11 09:13:22 compute-0 nova_compute[260935]: 2025-10-11 09:13:22.019 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1_d2wzhx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:13:22 compute-0 nova_compute[260935]: 2025-10-11 09:13:22.188 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1_d2wzhx" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:13:22 compute-0 nova_compute[260935]: 2025-10-11 09:13:22.226 2 DEBUG nova.storage.rbd_utils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image 4b5bb809-c821-466d-9a47-b5a6aa337212_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:13:22 compute-0 nova_compute[260935]: 2025-10-11 09:13:22.231 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/disk.config 4b5bb809-c821-466d-9a47-b5a6aa337212_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:13:22 compute-0 nova_compute[260935]: 2025-10-11 09:13:22.390 2 DEBUG oslo_concurrency.processutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/disk.config 4b5bb809-c821-466d-9a47-b5a6aa337212_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:13:22 compute-0 nova_compute[260935]: 2025-10-11 09:13:22.393 2 INFO nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Deleting local config drive /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/disk.config because it was imported into RBD.
Oct 11 09:13:22 compute-0 kernel: tap072bf760-1f: entered promiscuous mode
Oct 11 09:13:22 compute-0 NetworkManager[44960]: <info>  [1760174002.4720] manager: (tap072bf760-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/414)
Oct 11 09:13:22 compute-0 nova_compute[260935]: 2025-10-11 09:13:22.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:22 compute-0 ovn_controller[152945]: 2025-10-11T09:13:22Z|01002|binding|INFO|Claiming lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 for this chassis.
Oct 11 09:13:22 compute-0 ovn_controller[152945]: 2025-10-11T09:13:22Z|01003|binding|INFO|072bf760-1ffd-4b15-bdd9-58b9b670feb8: Claiming fa:16:3e:c7:58:7f 10.100.0.11
Oct 11 09:13:22 compute-0 systemd-udevd[373238]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:13:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2146: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.536 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:58:7f 10.100.0.11'], port_security=['fa:16:3e:c7:58:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4b5bb809-c821-466d-9a47-b5a6aa337212', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-824eb213-11b4-4a1c-ba73-2f66376f326f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '17c338b8-146b-462b-bba0-743df78b6636', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8beef9e7-87f9-4509-b665-58f91eb3377a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=072bf760-1ffd-4b15-bdd9-58b9b670feb8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.538 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 072bf760-1ffd-4b15-bdd9-58b9b670feb8 in datapath 824eb213-11b4-4a1c-ba73-2f66376f326f bound to our chassis
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.541 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 824eb213-11b4-4a1c-ba73-2f66376f326f
Oct 11 09:13:22 compute-0 NetworkManager[44960]: <info>  [1760174002.5493] device (tap072bf760-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:13:22 compute-0 NetworkManager[44960]: <info>  [1760174002.5521] device (tap072bf760-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.560 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[95a0f141-8f23-4019-9169-7231c5db8685]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.562 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap824eb213-11 in ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:13:22 compute-0 systemd-machined[215705]: New machine qemu-127-instance-0000006a.
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.565 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap824eb213-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.565 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c331db11-ec38-4ec5-be21-1a018e1e677a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.567 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6a5d7f-86f7-4b00-8f1a-e06cfff9a7a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.585 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[a2367b3f-9ba5-4dc9-b7e8-a59c31f3b7de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:22 compute-0 systemd[1]: Started Virtual Machine qemu-127-instance-0000006a.
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.617 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d8e05c-d32c-4ed9-82da-3c60d2027fc1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:22 compute-0 ovn_controller[152945]: 2025-10-11T09:13:22Z|01004|binding|INFO|Setting lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 ovn-installed in OVS
Oct 11 09:13:22 compute-0 ovn_controller[152945]: 2025-10-11T09:13:22Z|01005|binding|INFO|Setting lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 up in Southbound
Oct 11 09:13:22 compute-0 nova_compute[260935]: 2025-10-11 09:13:22.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.647 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[49b20e7b-53d2-4493-ac1c-21bd15f7ac8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:22 compute-0 systemd-udevd[373244]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:13:22 compute-0 NetworkManager[44960]: <info>  [1760174002.6577] manager: (tap824eb213-10): new Veth device (/org/freedesktop/NetworkManager/Devices/415)
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.655 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbe27b1-7c9a-4ccb-8ab6-344dfc9fad86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:22 compute-0 nova_compute[260935]: 2025-10-11 09:13:22.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.710 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[17f8fbbc-4c82-494a-8405-4d31e68065da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.715 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[de3ff4ee-006e-4935-91b9-11182626352b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:22 compute-0 NetworkManager[44960]: <info>  [1760174002.7514] device (tap824eb213-10): carrier: link connected
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.757 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ab99c1e1-342a-4587-94ec-1a1307026c40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.776 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[711afd9a-33bc-4daa-a19f-cd6c650e8caa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap824eb213-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:a7:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584745, 'reachable_time': 41318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373274, 'error': None, 'target': 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.805 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2151a5f3-dbcc-464c-852a-8c3858bce448]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:a71c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 584745, 'tstamp': 584745}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373275, 'error': None, 'target': 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.830 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cc71ca3c-64ee-4ab2-b952-9e73d0072dfb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap824eb213-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:a7:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584745, 'reachable_time': 41318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373276, 'error': None, 'target': 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.870 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c0cce818-2c1b-4b87-b895-b6c53250a9f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.954 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0083b82f-6e5f-4cc9-9473-54d5340b01f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.956 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap824eb213-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.957 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.958 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap824eb213-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:13:22 compute-0 kernel: tap824eb213-10: entered promiscuous mode
Oct 11 09:13:22 compute-0 NetworkManager[44960]: <info>  [1760174002.9619] manager: (tap824eb213-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Oct 11 09:13:22 compute-0 nova_compute[260935]: 2025-10-11 09:13:22.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.967 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap824eb213-10, col_values=(('external_ids', {'iface-id': 'f4946461-9ac5-43c5-81c4-5ba2f19b9f2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:13:22 compute-0 ovn_controller[152945]: 2025-10-11T09:13:22Z|01006|binding|INFO|Releasing lport f4946461-9ac5-43c5-81c4-5ba2f19b9f2d from this chassis (sb_readonly=0)
Oct 11 09:13:22 compute-0 nova_compute[260935]: 2025-10-11 09:13:22.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:22 compute-0 nova_compute[260935]: 2025-10-11 09:13:22.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:22.999 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/824eb213-11b4-4a1c-ba73-2f66376f326f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/824eb213-11b4-4a1c-ba73-2f66376f326f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:23.001 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8e968643-f6b6-4231-8e81-79b970bf23f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:23.003 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-824eb213-11b4-4a1c-ba73-2f66376f326f
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/824eb213-11b4-4a1c-ba73-2f66376f326f.pid.haproxy
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 824eb213-11b4-4a1c-ba73-2f66376f326f
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:13:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:23.005 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'env', 'PROCESS_TAG=haproxy-824eb213-11b4-4a1c-ba73-2f66376f326f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/824eb213-11b4-4a1c-ba73-2f66376f326f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:13:23 compute-0 nova_compute[260935]: 2025-10-11 09:13:23.010 2 DEBUG nova.compute.manager [req-dea19ea0-9ebb-4f25-a718-7b574e3a3987 req-eb7cd729-b0bf-4204-959c-1f5ebe04e00f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:13:23 compute-0 nova_compute[260935]: 2025-10-11 09:13:23.011 2 DEBUG oslo_concurrency.lockutils [req-dea19ea0-9ebb-4f25-a718-7b574e3a3987 req-eb7cd729-b0bf-4204-959c-1f5ebe04e00f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:13:23 compute-0 nova_compute[260935]: 2025-10-11 09:13:23.011 2 DEBUG oslo_concurrency.lockutils [req-dea19ea0-9ebb-4f25-a718-7b574e3a3987 req-eb7cd729-b0bf-4204-959c-1f5ebe04e00f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:13:23 compute-0 nova_compute[260935]: 2025-10-11 09:13:23.012 2 DEBUG oslo_concurrency.lockutils [req-dea19ea0-9ebb-4f25-a718-7b574e3a3987 req-eb7cd729-b0bf-4204-959c-1f5ebe04e00f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:13:23 compute-0 nova_compute[260935]: 2025-10-11 09:13:23.013 2 DEBUG nova.compute.manager [req-dea19ea0-9ebb-4f25-a718-7b574e3a3987 req-eb7cd729-b0bf-4204-959c-1f5ebe04e00f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Processing event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:13:23 compute-0 podman[373308]: 2025-10-11 09:13:23.482103174 +0000 UTC m=+0.061485643 container create 78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:13:23 compute-0 systemd[1]: Started libpod-conmon-78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483.scope.
Oct 11 09:13:23 compute-0 podman[373308]: 2025-10-11 09:13:23.453929564 +0000 UTC m=+0.033312023 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:13:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac5649964aaf0918143f43f0172e8f20ee7f3bca61099013f301b1e0f7916bd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:13:23 compute-0 podman[373308]: 2025-10-11 09:13:23.585410063 +0000 UTC m=+0.164792552 container init 78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 09:13:23 compute-0 podman[373308]: 2025-10-11 09:13:23.592083268 +0000 UTC m=+0.171465707 container start 78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:13:23 compute-0 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[373363]: [NOTICE]   (373369) : New worker (373371) forked
Oct 11 09:13:23 compute-0 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[373363]: [NOTICE]   (373369) : Loading success.
Oct 11 09:13:23 compute-0 ceph-mon[74313]: pgmap v2146: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 11 09:13:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.023 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.026 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174004.0253658, 4b5bb809-c821-466d-9a47-b5a6aa337212 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.026 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] VM Started (Lifecycle Event)
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.031 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.036 2 INFO nova.virt.libvirt.driver [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance spawned successfully.
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.037 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.057 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.066 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.073 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.074 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.074 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.075 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.076 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.077 2 DEBUG nova.virt.libvirt.driver [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.133 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.134 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174004.0255444, 4b5bb809-c821-466d-9a47-b5a6aa337212 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.134 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] VM Paused (Lifecycle Event)
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.168 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.173 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174004.0282097, 4b5bb809-c821-466d-9a47-b5a6aa337212 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.173 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] VM Resumed (Lifecycle Event)
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.182 2 INFO nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Took 7.98 seconds to spawn the instance on the hypervisor.
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.183 2 DEBUG nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.202 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.206 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.223 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.253 2 INFO nova.compute.manager [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Took 9.11 seconds to build instance.
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.271 2 DEBUG oslo_concurrency.lockutils [None req-551d129e-f5db-4b7e-b6ac-91b088d13b98 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:13:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2147: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:13:24 compute-0 nova_compute[260935]: 2025-10-11 09:13:24.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 09:13:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:13:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:13:24 compute-0 podman[373380]: 2025-10-11 09:13:24.810175454 +0000 UTC m=+0.096289456 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:13:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:13:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:13:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:13:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:13:25 compute-0 nova_compute[260935]: 2025-10-11 09:13:25.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:25 compute-0 unix_chkpwd[373402]: password check failed for user (root)
Oct 11 09:13:25 compute-0 sshd-session[373396]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:13:25 compute-0 nova_compute[260935]: 2025-10-11 09:13:25.336 2 DEBUG nova.compute.manager [req-e2eb0558-ecac-4692-8e91-3b4ba7ca3d4e req-6f420f3e-f7e8-4eda-af5e-aef09d936510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:13:25 compute-0 nova_compute[260935]: 2025-10-11 09:13:25.338 2 DEBUG oslo_concurrency.lockutils [req-e2eb0558-ecac-4692-8e91-3b4ba7ca3d4e req-6f420f3e-f7e8-4eda-af5e-aef09d936510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:13:25 compute-0 nova_compute[260935]: 2025-10-11 09:13:25.338 2 DEBUG oslo_concurrency.lockutils [req-e2eb0558-ecac-4692-8e91-3b4ba7ca3d4e req-6f420f3e-f7e8-4eda-af5e-aef09d936510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:13:25 compute-0 nova_compute[260935]: 2025-10-11 09:13:25.338 2 DEBUG oslo_concurrency.lockutils [req-e2eb0558-ecac-4692-8e91-3b4ba7ca3d4e req-6f420f3e-f7e8-4eda-af5e-aef09d936510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:13:25 compute-0 nova_compute[260935]: 2025-10-11 09:13:25.339 2 DEBUG nova.compute.manager [req-e2eb0558-ecac-4692-8e91-3b4ba7ca3d4e req-6f420f3e-f7e8-4eda-af5e-aef09d936510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] No waiting events found dispatching network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:13:25 compute-0 nova_compute[260935]: 2025-10-11 09:13:25.339 2 WARNING nova.compute.manager [req-e2eb0558-ecac-4692-8e91-3b4ba7ca3d4e req-6f420f3e-f7e8-4eda-af5e-aef09d936510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received unexpected event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 for instance with vm_state active and task_state None.
Oct 11 09:13:25 compute-0 nova_compute[260935]: 2025-10-11 09:13:25.718 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:13:25 compute-0 nova_compute[260935]: 2025-10-11 09:13:25.719 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:13:25 compute-0 nova_compute[260935]: 2025-10-11 09:13:25.719 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:13:25 compute-0 nova_compute[260935]: 2025-10-11 09:13:25.719 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:13:25 compute-0 ceph-mon[74313]: pgmap v2147: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 11 09:13:26 compute-0 nova_compute[260935]: 2025-10-11 09:13:26.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:26 compute-0 nova_compute[260935]: 2025-10-11 09:13:26.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:26.507 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:13:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:26.509 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:13:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 11 09:13:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:13:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2181799174' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:13:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:13:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2181799174' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:13:26 compute-0 nova_compute[260935]: 2025-10-11 09:13:26.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:13:26 compute-0 nova_compute[260935]: 2025-10-11 09:13:26.730 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:13:26 compute-0 nova_compute[260935]: 2025-10-11 09:13:26.731 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:13:26 compute-0 nova_compute[260935]: 2025-10-11 09:13:26.732 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:13:26 compute-0 nova_compute[260935]: 2025-10-11 09:13:26.732 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:13:26 compute-0 nova_compute[260935]: 2025-10-11 09:13:26.733 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:13:26 compute-0 sshd-session[373403]: Invalid user katie from 152.32.213.170 port 42552
Oct 11 09:13:26 compute-0 sshd-session[373403]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:13:26 compute-0 sshd-session[373403]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:13:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2181799174' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:13:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2181799174' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:13:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:13:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589256496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:13:27 compute-0 nova_compute[260935]: 2025-10-11 09:13:27.239 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:13:27 compute-0 nova_compute[260935]: 2025-10-11 09:13:27.337 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:13:27 compute-0 nova_compute[260935]: 2025-10-11 09:13:27.338 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:13:27 compute-0 nova_compute[260935]: 2025-10-11 09:13:27.338 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:13:27 compute-0 nova_compute[260935]: 2025-10-11 09:13:27.344 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:13:27 compute-0 nova_compute[260935]: 2025-10-11 09:13:27.344 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:13:27 compute-0 nova_compute[260935]: 2025-10-11 09:13:27.350 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:13:27 compute-0 nova_compute[260935]: 2025-10-11 09:13:27.350 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:13:27 compute-0 nova_compute[260935]: 2025-10-11 09:13:27.355 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:13:27 compute-0 nova_compute[260935]: 2025-10-11 09:13:27.356 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:13:27 compute-0 sshd-session[373396]: Failed password for root from 165.232.82.252 port 45406 ssh2
Oct 11 09:13:27 compute-0 nova_compute[260935]: 2025-10-11 09:13:27.664 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:13:27 compute-0 nova_compute[260935]: 2025-10-11 09:13:27.666 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2924MB free_disk=59.80990982055664GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:13:27 compute-0 nova_compute[260935]: 2025-10-11 09:13:27.667 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:13:27 compute-0 nova_compute[260935]: 2025-10-11 09:13:27.667 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:13:27 compute-0 ceph-mon[74313]: pgmap v2148: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 11 09:13:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/589256496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:28 compute-0 ovn_controller[152945]: 2025-10-11T09:13:28Z|01007|binding|INFO|Releasing lport f4946461-9ac5-43c5-81c4-5ba2f19b9f2d from this chassis (sb_readonly=0)
Oct 11 09:13:28 compute-0 ovn_controller[152945]: 2025-10-11T09:13:28Z|01008|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:13:28 compute-0 ovn_controller[152945]: 2025-10-11T09:13:28Z|01009|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:13:28 compute-0 NetworkManager[44960]: <info>  [1760174008.0127] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Oct 11 09:13:28 compute-0 NetworkManager[44960]: <info>  [1760174008.0139] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Oct 11 09:13:28 compute-0 ovn_controller[152945]: 2025-10-11T09:13:28Z|01010|binding|INFO|Releasing lport f4946461-9ac5-43c5-81c4-5ba2f19b9f2d from this chassis (sb_readonly=0)
Oct 11 09:13:28 compute-0 ovn_controller[152945]: 2025-10-11T09:13:28Z|01011|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:13:28 compute-0 ovn_controller[152945]: 2025-10-11T09:13:28Z|01012|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.091 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.091 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.092 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.092 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 4b5bb809-c821-466d-9a47-b5a6aa337212 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.093 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.093 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.351 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:13:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:13:28 compute-0 sshd-session[373403]: Failed password for invalid user katie from 152.32.213.170 port 42552 ssh2
Oct 11 09:13:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:13:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3078186894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.843 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.850 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.869 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:13:28 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3078186894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.896 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.898 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:13:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.977 2 DEBUG nova.compute.manager [req-754ef831-fa74-4bb0-bef7-b77b27d312a1 req-3f7d2d3d-272b-499c-b4f8-dc2273584b9d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-changed-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.977 2 DEBUG nova.compute.manager [req-754ef831-fa74-4bb0-bef7-b77b27d312a1 req-3f7d2d3d-272b-499c-b4f8-dc2273584b9d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Refreshing instance network info cache due to event network-changed-072bf760-1ffd-4b15-bdd9-58b9b670feb8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.978 2 DEBUG oslo_concurrency.lockutils [req-754ef831-fa74-4bb0-bef7-b77b27d312a1 req-3f7d2d3d-272b-499c-b4f8-dc2273584b9d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.978 2 DEBUG oslo_concurrency.lockutils [req-754ef831-fa74-4bb0-bef7-b77b27d312a1 req-3f7d2d3d-272b-499c-b4f8-dc2273584b9d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:13:28 compute-0 nova_compute[260935]: 2025-10-11 09:13:28.979 2 DEBUG nova.network.neutron [req-754ef831-fa74-4bb0-bef7-b77b27d312a1 req-3f7d2d3d-272b-499c-b4f8-dc2273584b9d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Refreshing network info cache for port 072bf760-1ffd-4b15-bdd9-58b9b670feb8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:13:29 compute-0 sshd-session[373396]: Connection closed by authenticating user root 165.232.82.252 port 45406 [preauth]
Oct 11 09:13:29 compute-0 ceph-mon[74313]: pgmap v2149: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:13:29 compute-0 nova_compute[260935]: 2025-10-11 09:13:29.898 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:13:29 compute-0 nova_compute[260935]: 2025-10-11 09:13:29.934 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:13:29 compute-0 nova_compute[260935]: 2025-10-11 09:13:29.934 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:13:30 compute-0 nova_compute[260935]: 2025-10-11 09:13:30.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:30 compute-0 nova_compute[260935]: 2025-10-11 09:13:30.276 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:13:30 compute-0 nova_compute[260935]: 2025-10-11 09:13:30.276 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:13:30 compute-0 nova_compute[260935]: 2025-10-11 09:13:30.276 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:13:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:13:30 compute-0 podman[373451]: 2025-10-11 09:13:30.790798663 +0000 UTC m=+0.082387811 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 09:13:30 compute-0 sshd-session[373403]: Received disconnect from 152.32.213.170 port 42552:11: Bye Bye [preauth]
Oct 11 09:13:30 compute-0 sshd-session[373403]: Disconnected from invalid user katie 152.32.213.170 port 42552 [preauth]
Oct 11 09:13:30 compute-0 ceph-mon[74313]: pgmap v2150: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:13:31 compute-0 nova_compute[260935]: 2025-10-11 09:13:31.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:31.512 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:13:31 compute-0 nova_compute[260935]: 2025-10-11 09:13:31.863 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:13:31 compute-0 nova_compute[260935]: 2025-10-11 09:13:31.882 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:13:31 compute-0 nova_compute[260935]: 2025-10-11 09:13:31.882 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:13:31 compute-0 nova_compute[260935]: 2025-10-11 09:13:31.882 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:13:31 compute-0 nova_compute[260935]: 2025-10-11 09:13:31.930 2 DEBUG nova.network.neutron [req-754ef831-fa74-4bb0-bef7-b77b27d312a1 req-3f7d2d3d-272b-499c-b4f8-dc2273584b9d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updated VIF entry in instance network info cache for port 072bf760-1ffd-4b15-bdd9-58b9b670feb8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:13:31 compute-0 nova_compute[260935]: 2025-10-11 09:13:31.930 2 DEBUG nova.network.neutron [req-754ef831-fa74-4bb0-bef7-b77b27d312a1 req-3f7d2d3d-272b-499c-b4f8-dc2273584b9d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updating instance_info_cache with network_info: [{"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:13:31 compute-0 nova_compute[260935]: 2025-10-11 09:13:31.955 2 DEBUG oslo_concurrency.lockutils [req-754ef831-fa74-4bb0-bef7-b77b27d312a1 req-3f7d2d3d-272b-499c-b4f8-dc2273584b9d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:13:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 374 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 124 op/s
Oct 11 09:13:33 compute-0 ceph-mon[74313]: pgmap v2151: 321 pgs: 321 active+clean; 374 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 124 op/s
Oct 11 09:13:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:13:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2152: 321 pgs: 321 active+clean; 374 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 143 op/s
Oct 11 09:13:35 compute-0 nova_compute[260935]: 2025-10-11 09:13:35.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:35 compute-0 ovn_controller[152945]: 2025-10-11T09:13:35Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c7:58:7f 10.100.0.11
Oct 11 09:13:35 compute-0 ovn_controller[152945]: 2025-10-11T09:13:35Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:58:7f 10.100.0.11
Oct 11 09:13:35 compute-0 ceph-mon[74313]: pgmap v2152: 321 pgs: 321 active+clean; 374 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 143 op/s
Oct 11 09:13:35 compute-0 podman[373473]: 2025-10-11 09:13:35.779410412 +0000 UTC m=+0.086470945 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:13:35 compute-0 podman[373474]: 2025-10-11 09:13:35.831953396 +0000 UTC m=+0.127810298 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 09:13:36 compute-0 nova_compute[260935]: 2025-10-11 09:13:36.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2153: 321 pgs: 321 active+clean; 374 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 142 op/s
Oct 11 09:13:37 compute-0 ceph-mon[74313]: pgmap v2153: 321 pgs: 321 active+clean; 374 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 142 op/s
Oct 11 09:13:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 407 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 205 op/s
Oct 11 09:13:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:13:39 compute-0 ceph-mon[74313]: pgmap v2154: 321 pgs: 321 active+clean; 407 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 205 op/s
Oct 11 09:13:40 compute-0 nova_compute[260935]: 2025-10-11 09:13:40.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 407 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 371 KiB/s rd, 2.1 MiB/s wr, 136 op/s
Oct 11 09:13:41 compute-0 nova_compute[260935]: 2025-10-11 09:13:41.201 2 INFO nova.compute.manager [None req-d1fd82f4-8d7b-42ef-a107-375a860fafd4 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Get console output
Oct 11 09:13:41 compute-0 nova_compute[260935]: 2025-10-11 09:13:41.209 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:13:41 compute-0 nova_compute[260935]: 2025-10-11 09:13:41.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:41 compute-0 nova_compute[260935]: 2025-10-11 09:13:41.582 2 DEBUG oslo_concurrency.lockutils [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:13:41 compute-0 nova_compute[260935]: 2025-10-11 09:13:41.583 2 DEBUG oslo_concurrency.lockutils [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:13:41 compute-0 nova_compute[260935]: 2025-10-11 09:13:41.583 2 DEBUG nova.compute.manager [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:13:41 compute-0 nova_compute[260935]: 2025-10-11 09:13:41.588 2 DEBUG nova.compute.manager [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 11 09:13:41 compute-0 nova_compute[260935]: 2025-10-11 09:13:41.590 2 DEBUG nova.objects.instance [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'flavor' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:13:41 compute-0 nova_compute[260935]: 2025-10-11 09:13:41.618 2 DEBUG nova.virt.libvirt.driver [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 09:13:41 compute-0 ceph-mon[74313]: pgmap v2155: 321 pgs: 321 active+clean; 407 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 371 KiB/s rd, 2.1 MiB/s wr, 136 op/s
Oct 11 09:13:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.2 MiB/s wr, 140 op/s
Oct 11 09:13:42 compute-0 nova_compute[260935]: 2025-10-11 09:13:42.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:13:42 compute-0 nova_compute[260935]: 2025-10-11 09:13:42.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 09:13:42 compute-0 nova_compute[260935]: 2025-10-11 09:13:42.739 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 09:13:43 compute-0 ceph-mon[74313]: pgmap v2156: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.2 MiB/s wr, 140 op/s
Oct 11 09:13:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:13:43 compute-0 kernel: tap072bf760-1f (unregistering): left promiscuous mode
Oct 11 09:13:43 compute-0 NetworkManager[44960]: <info>  [1760174023.9654] device (tap072bf760-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:13:43 compute-0 ovn_controller[152945]: 2025-10-11T09:13:43Z|01013|binding|INFO|Releasing lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 from this chassis (sb_readonly=0)
Oct 11 09:13:43 compute-0 ovn_controller[152945]: 2025-10-11T09:13:43Z|01014|binding|INFO|Setting lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 down in Southbound
Oct 11 09:13:43 compute-0 nova_compute[260935]: 2025-10-11 09:13:43.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:43 compute-0 ovn_controller[152945]: 2025-10-11T09:13:43Z|01015|binding|INFO|Removing iface tap072bf760-1f ovn-installed in OVS
Oct 11 09:13:43 compute-0 nova_compute[260935]: 2025-10-11 09:13:43.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.003 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:58:7f 10.100.0.11'], port_security=['fa:16:3e:c7:58:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4b5bb809-c821-466d-9a47-b5a6aa337212', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-824eb213-11b4-4a1c-ba73-2f66376f326f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17c338b8-146b-462b-bba0-743df78b6636', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8beef9e7-87f9-4509-b665-58f91eb3377a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=072bf760-1ffd-4b15-bdd9-58b9b670feb8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:13:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.005 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 072bf760-1ffd-4b15-bdd9-58b9b670feb8 in datapath 824eb213-11b4-4a1c-ba73-2f66376f326f unbound from our chassis
Oct 11 09:13:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.009 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 824eb213-11b4-4a1c-ba73-2f66376f326f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:13:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.011 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[76324ef5-7a6f-40d3-a08b-9f165a7fe86e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.012 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f namespace which is not needed anymore
Oct 11 09:13:44 compute-0 nova_compute[260935]: 2025-10-11 09:13:44.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:44 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Oct 11 09:13:44 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d0000006a.scope: Consumed 13.451s CPU time.
Oct 11 09:13:44 compute-0 systemd-machined[215705]: Machine qemu-127-instance-0000006a terminated.
Oct 11 09:13:44 compute-0 nova_compute[260935]: 2025-10-11 09:13:44.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:44 compute-0 nova_compute[260935]: 2025-10-11 09:13:44.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:44 compute-0 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[373363]: [NOTICE]   (373369) : haproxy version is 2.8.14-c23fe91
Oct 11 09:13:44 compute-0 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[373363]: [NOTICE]   (373369) : path to executable is /usr/sbin/haproxy
Oct 11 09:13:44 compute-0 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[373363]: [WARNING]  (373369) : Exiting Master process...
Oct 11 09:13:44 compute-0 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[373363]: [WARNING]  (373369) : Exiting Master process...
Oct 11 09:13:44 compute-0 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[373363]: [ALERT]    (373369) : Current worker (373371) exited with code 143 (Terminated)
Oct 11 09:13:44 compute-0 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[373363]: [WARNING]  (373369) : All workers exited. Exiting... (0)
Oct 11 09:13:44 compute-0 systemd[1]: libpod-78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483.scope: Deactivated successfully.
Oct 11 09:13:44 compute-0 podman[373541]: 2025-10-11 09:13:44.25218248 +0000 UTC m=+0.095715231 container died 78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 09:13:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483-userdata-shm.mount: Deactivated successfully.
Oct 11 09:13:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ac5649964aaf0918143f43f0172e8f20ee7f3bca61099013f301b1e0f7916bd-merged.mount: Deactivated successfully.
Oct 11 09:13:44 compute-0 podman[373541]: 2025-10-11 09:13:44.325557221 +0000 UTC m=+0.169089922 container cleanup 78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:13:44 compute-0 systemd[1]: libpod-conmon-78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483.scope: Deactivated successfully.
Oct 11 09:13:44 compute-0 podman[373582]: 2025-10-11 09:13:44.43644059 +0000 UTC m=+0.070181454 container remove 78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 09:13:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.447 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[167216b9-7578-4c3d-a069-c3028d9d0d0a]: (4, ('Sat Oct 11 09:13:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f (78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483)\n78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483\nSat Oct 11 09:13:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f (78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483)\n78f0fd3b5de59331d822e226bd70d7d4fef1aa6cadfaada96296472857cf5483\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.451 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1771f9-80ec-41e6-9d45-58d39bf7f5bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.452 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap824eb213-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:13:44 compute-0 nova_compute[260935]: 2025-10-11 09:13:44.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:44 compute-0 kernel: tap824eb213-10: left promiscuous mode
Oct 11 09:13:44 compute-0 nova_compute[260935]: 2025-10-11 09:13:44.487 2 DEBUG nova.compute.manager [req-f3aa1294-7828-4c33-99e9-e33e2cc7e45b req-d8751403-cf58-4735-ba35-07bfe16d2b01 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-unplugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:13:44 compute-0 nova_compute[260935]: 2025-10-11 09:13:44.489 2 DEBUG oslo_concurrency.lockutils [req-f3aa1294-7828-4c33-99e9-e33e2cc7e45b req-d8751403-cf58-4735-ba35-07bfe16d2b01 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:13:44 compute-0 nova_compute[260935]: 2025-10-11 09:13:44.490 2 DEBUG oslo_concurrency.lockutils [req-f3aa1294-7828-4c33-99e9-e33e2cc7e45b req-d8751403-cf58-4735-ba35-07bfe16d2b01 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:13:44 compute-0 nova_compute[260935]: 2025-10-11 09:13:44.491 2 DEBUG oslo_concurrency.lockutils [req-f3aa1294-7828-4c33-99e9-e33e2cc7e45b req-d8751403-cf58-4735-ba35-07bfe16d2b01 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:13:44 compute-0 nova_compute[260935]: 2025-10-11 09:13:44.491 2 DEBUG nova.compute.manager [req-f3aa1294-7828-4c33-99e9-e33e2cc7e45b req-d8751403-cf58-4735-ba35-07bfe16d2b01 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] No waiting events found dispatching network-vif-unplugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:13:44 compute-0 nova_compute[260935]: 2025-10-11 09:13:44.492 2 WARNING nova.compute.manager [req-f3aa1294-7828-4c33-99e9-e33e2cc7e45b req-d8751403-cf58-4735-ba35-07bfe16d2b01 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received unexpected event network-vif-unplugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 for instance with vm_state active and task_state powering-off.
Oct 11 09:13:44 compute-0 nova_compute[260935]: 2025-10-11 09:13:44.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.493 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f130ca7-66a1-4b8d-88b3-234c667fd7eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.523 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dae36fcc-c8a6-410c-8a9f-9d9b5dd41bfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.526 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[278a7e07-4c57-4815-8e3f-3c3eb2336619]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 2.2 MiB/s wr, 90 op/s
Oct 11 09:13:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.558 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d3893abf-f868-4dbb-8cfe-9a8b3738b45a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584734, 'reachable_time': 30150, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373600, 'error': None, 'target': 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d824eb213\x2d11b4\x2d4a1c\x2dba73\x2d2f66376f326f.mount: Deactivated successfully.
Oct 11 09:13:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.567 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:13:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:44.567 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[25853aeb-2171-4fbb-8795-fd6694e027b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:44 compute-0 nova_compute[260935]: 2025-10-11 09:13:44.665 2 INFO nova.virt.libvirt.driver [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance shutdown successfully after 3 seconds.
Oct 11 09:13:44 compute-0 nova_compute[260935]: 2025-10-11 09:13:44.674 2 INFO nova.virt.libvirt.driver [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance destroyed successfully.
Oct 11 09:13:44 compute-0 nova_compute[260935]: 2025-10-11 09:13:44.675 2 DEBUG nova.objects.instance [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:13:44 compute-0 nova_compute[260935]: 2025-10-11 09:13:44.696 2 DEBUG nova.compute.manager [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:13:44 compute-0 nova_compute[260935]: 2025-10-11 09:13:44.765 2 DEBUG oslo_concurrency.lockutils [None req-d9a68e0f-f019-40d9-b349-5e578780b11e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:13:45 compute-0 nova_compute[260935]: 2025-10-11 09:13:45.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:45 compute-0 ceph-mon[74313]: pgmap v2157: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 2.2 MiB/s wr, 90 op/s
Oct 11 09:13:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct 11 09:13:46 compute-0 nova_compute[260935]: 2025-10-11 09:13:46.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:46 compute-0 nova_compute[260935]: 2025-10-11 09:13:46.629 2 DEBUG nova.compute.manager [req-e04a8058-85a7-4a43-9d32-16f00d8c576e req-9eafffb4-a794-4386-b329-f6686c4c90bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:13:46 compute-0 nova_compute[260935]: 2025-10-11 09:13:46.630 2 DEBUG oslo_concurrency.lockutils [req-e04a8058-85a7-4a43-9d32-16f00d8c576e req-9eafffb4-a794-4386-b329-f6686c4c90bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:13:46 compute-0 nova_compute[260935]: 2025-10-11 09:13:46.630 2 DEBUG oslo_concurrency.lockutils [req-e04a8058-85a7-4a43-9d32-16f00d8c576e req-9eafffb4-a794-4386-b329-f6686c4c90bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:13:46 compute-0 nova_compute[260935]: 2025-10-11 09:13:46.631 2 DEBUG oslo_concurrency.lockutils [req-e04a8058-85a7-4a43-9d32-16f00d8c576e req-9eafffb4-a794-4386-b329-f6686c4c90bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:13:46 compute-0 nova_compute[260935]: 2025-10-11 09:13:46.631 2 DEBUG nova.compute.manager [req-e04a8058-85a7-4a43-9d32-16f00d8c576e req-9eafffb4-a794-4386-b329-f6686c4c90bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] No waiting events found dispatching network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:13:46 compute-0 nova_compute[260935]: 2025-10-11 09:13:46.632 2 WARNING nova.compute.manager [req-e04a8058-85a7-4a43-9d32-16f00d8c576e req-9eafffb4-a794-4386-b329-f6686c4c90bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received unexpected event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 for instance with vm_state stopped and task_state None.
Oct 11 09:13:47 compute-0 nova_compute[260935]: 2025-10-11 09:13:47.672 2 INFO nova.compute.manager [None req-7f30aee0-7767-42ae-a7e9-21ec7d59af3e a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Get console output
Oct 11 09:13:47 compute-0 ceph-mon[74313]: pgmap v2158: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct 11 09:13:47 compute-0 nova_compute[260935]: 2025-10-11 09:13:47.708 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:13:47 compute-0 nova_compute[260935]: 2025-10-11 09:13:47.939 2 DEBUG nova.objects.instance [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'flavor' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:13:47 compute-0 nova_compute[260935]: 2025-10-11 09:13:47.966 2 DEBUG oslo_concurrency.lockutils [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:13:47 compute-0 nova_compute[260935]: 2025-10-11 09:13:47.967 2 DEBUG oslo_concurrency.lockutils [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquired lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:13:47 compute-0 nova_compute[260935]: 2025-10-11 09:13:47.968 2 DEBUG nova.network.neutron [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:13:47 compute-0 nova_compute[260935]: 2025-10-11 09:13:47.968 2 DEBUG nova.objects.instance [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'info_cache' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:13:48 compute-0 sudo[373601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:13:48 compute-0 sudo[373601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:48 compute-0 sudo[373601]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:48 compute-0 sudo[373626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:13:48 compute-0 sudo[373626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:48 compute-0 sudo[373626]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:48 compute-0 sudo[373651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:13:48 compute-0 sudo[373651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:48 compute-0 sudo[373651]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct 11 09:13:48 compute-0 sudo[373676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 09:13:48 compute-0 sudo[373676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:48 compute-0 sudo[373676]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:13:48 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:13:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:13:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:48.887 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:cd:e5 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-77797f2a-6d47-4af7-aba0-0c6e896bb489', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77797f2a-6d47-4af7-aba0-0c6e896bb489', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7020c6a7808745b3bfdbd16acc7ff39e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01bd402c-a60d-400e-bb8e-f9e8f782e47c, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f3e66d3d-6090-4c0d-ae59-e7c3b5cba99f) old=Port_Binding(mac=['fa:16:3e:57:cd:e5 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-77797f2a-6d47-4af7-aba0-0c6e896bb489', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77797f2a-6d47-4af7-aba0-0c6e896bb489', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7020c6a7808745b3bfdbd16acc7ff39e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches _event_match /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:13:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:48.889 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f3e66d3d-6090-4c0d-ae59-e7c3b5cba99f in datapath 77797f2a-6d47-4af7-aba0-0c6e896bb489 updated
Oct 11 09:13:48 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:13:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:48.892 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77797f2a-6d47-4af7-aba0-0c6e896bb489, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:13:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:48.894 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1076f3b1-39fd-4bbd-9054-6a38f04fecad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:13:48 compute-0 sudo[373721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:13:48 compute-0 sudo[373721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:48 compute-0 sudo[373721]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:49 compute-0 sudo[373746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:13:49 compute-0 sudo[373746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:49 compute-0 sudo[373746]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:49 compute-0 sudo[373771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:13:49 compute-0 sudo[373771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:49 compute-0 sudo[373771]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:49 compute-0 sudo[373796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:13:49 compute-0 sudo[373796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.441 2 DEBUG nova.network.neutron [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updating instance_info_cache with network_info: [{"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.464 2 DEBUG oslo_concurrency.lockutils [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Releasing lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.494 2 INFO nova.virt.libvirt.driver [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance destroyed successfully.
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.495 2 DEBUG nova.objects.instance [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.506 2 DEBUG nova.objects.instance [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'resources' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.524 2 DEBUG nova.virt.libvirt.vif [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:13:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-429546179',display_name='tempest-TestNetworkAdvancedServerOps-server-429546179',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-429546179',id=106,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI+dqizmLaYC2433qIpQLnnx/a3P6lrbi1ICEZMm0vIsSZegV5kK7h8RBLfeGIs6fvNJtxm5wiio76URdFktX8daCex/YjqNpM8rlSnMQuWZwdYT6LukjvR6b7qleLKRfA==',key_name='tempest-TestNetworkAdvancedServerOps-1430850582',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:13:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-2d0dbaqa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:13:44Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=4b5bb809-c821-466d-9a47-b5a6aa337212,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.525 2 DEBUG nova.network.os_vif_util [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.526 2 DEBUG nova.network.os_vif_util [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.526 2 DEBUG os_vif [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.530 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap072bf760-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.540 2 INFO os_vif [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f')
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.550 2 DEBUG nova.virt.libvirt.driver [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Start _get_guest_xml network_info=[{"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.556 2 WARNING nova.virt.libvirt.driver [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.564 2 DEBUG nova.virt.libvirt.host [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.565 2 DEBUG nova.virt.libvirt.host [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.569 2 DEBUG nova.virt.libvirt.host [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.569 2 DEBUG nova.virt.libvirt.host [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.570 2 DEBUG nova.virt.libvirt.driver [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.570 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.571 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.571 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.572 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.572 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.572 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.573 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.573 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.573 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.574 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.574 2 DEBUG nova.virt.hardware [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.575 2 DEBUG nova.objects.instance [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:13:49 compute-0 nova_compute[260935]: 2025-10-11 09:13:49.592 2 DEBUG oslo_concurrency.processutils [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:13:49 compute-0 ceph-mon[74313]: pgmap v2159: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct 11 09:13:49 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:13:49 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:13:49 compute-0 sudo[373796]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:13:49 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:13:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:13:49 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:13:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:13:49 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:13:49 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 5b97b3ec-f186-422f-b1b8-6a9904163145 does not exist
Oct 11 09:13:49 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev cac8406a-d85c-4ecc-9f6a-d27b5946f7c5 does not exist
Oct 11 09:13:49 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a607b788-caf3-4841-9c52-e2433ff9f578 does not exist
Oct 11 09:13:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:13:49 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:13:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:13:49 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:13:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:13:49 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:13:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:13:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2885213427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.049 2 DEBUG oslo_concurrency.processutils [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:13:50 compute-0 sudo[373872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:13:50 compute-0 sudo[373872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:50 compute-0 sudo[373872]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.108 2 DEBUG oslo_concurrency.processutils [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:13:50 compute-0 sudo[373917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:13:50 compute-0 sudo[373917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:50 compute-0 sudo[373917]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:50 compute-0 sudo[373943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:13:50 compute-0 sudo[373943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:50 compute-0 sudo[373943]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:50 compute-0 sudo[373969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:13:50 compute-0 sudo[373969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 30 KiB/s wr, 4 op/s
Oct 11 09:13:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:13:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3511507650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.641 2 DEBUG oslo_concurrency.processutils [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.643 2 DEBUG nova.virt.libvirt.vif [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:13:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-429546179',display_name='tempest-TestNetworkAdvancedServerOps-server-429546179',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-429546179',id=106,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI+dqizmLaYC2433qIpQLnnx/a3P6lrbi1ICEZMm0vIsSZegV5kK7h8RBLfeGIs6fvNJtxm5wiio76URdFktX8daCex/YjqNpM8rlSnMQuWZwdYT6LukjvR6b7qleLKRfA==',key_name='tempest-TestNetworkAdvancedServerOps-1430850582',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:13:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-2d0dbaqa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:13:44Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=4b5bb809-c821-466d-9a47-b5a6aa337212,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.644 2 DEBUG nova.network.os_vif_util [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.645 2 DEBUG nova.network.os_vif_util [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.647 2 DEBUG nova.objects.instance [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.672 2 DEBUG nova.virt.libvirt.driver [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:13:50 compute-0 nova_compute[260935]:   <uuid>4b5bb809-c821-466d-9a47-b5a6aa337212</uuid>
Oct 11 09:13:50 compute-0 nova_compute[260935]:   <name>instance-0000006a</name>
Oct 11 09:13:50 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:13:50 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:13:50 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-429546179</nova:name>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:13:49</nova:creationTime>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:13:50 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:13:50 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:13:50 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:13:50 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:13:50 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:13:50 compute-0 nova_compute[260935]:         <nova:user uuid="a213c3877fc144a3af0be3c3d853f999">tempest-TestNetworkAdvancedServerOps-1304559157-project-member</nova:user>
Oct 11 09:13:50 compute-0 nova_compute[260935]:         <nova:project uuid="ca4b15770e784f45910b630937562cb6">tempest-TestNetworkAdvancedServerOps-1304559157</nova:project>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:13:50 compute-0 nova_compute[260935]:         <nova:port uuid="072bf760-1ffd-4b15-bdd9-58b9b670feb8">
Oct 11 09:13:50 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:13:50 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:13:50 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <system>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <entry name="serial">4b5bb809-c821-466d-9a47-b5a6aa337212</entry>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <entry name="uuid">4b5bb809-c821-466d-9a47-b5a6aa337212</entry>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     </system>
Oct 11 09:13:50 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:13:50 compute-0 nova_compute[260935]:   <os>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:   </os>
Oct 11 09:13:50 compute-0 nova_compute[260935]:   <features>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:   </features>
Oct 11 09:13:50 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:13:50 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:13:50 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/4b5bb809-c821-466d-9a47-b5a6aa337212_disk">
Oct 11 09:13:50 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       </source>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:13:50 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/4b5bb809-c821-466d-9a47-b5a6aa337212_disk.config">
Oct 11 09:13:50 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       </source>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:13:50 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:c7:58:7f"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <target dev="tap072bf760-1f"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212/console.log" append="off"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <video>
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     </video>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <input type="keyboard" bus="usb"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:13:50 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:13:50 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:13:50 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:13:50 compute-0 nova_compute[260935]: </domain>
Oct 11 09:13:50 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.674 2 DEBUG nova.virt.libvirt.driver [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.675 2 DEBUG nova.virt.libvirt.driver [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.676 2 DEBUG nova.virt.libvirt.vif [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:13:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-429546179',display_name='tempest-TestNetworkAdvancedServerOps-server-429546179',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-429546179',id=106,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI+dqizmLaYC2433qIpQLnnx/a3P6lrbi1ICEZMm0vIsSZegV5kK7h8RBLfeGIs6fvNJtxm5wiio76URdFktX8daCex/YjqNpM8rlSnMQuWZwdYT6LukjvR6b7qleLKRfA==',key_name='tempest-TestNetworkAdvancedServerOps-1430850582',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:13:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-2d0dbaqa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:13:44Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=4b5bb809-c821-466d-9a47-b5a6aa337212,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.676 2 DEBUG nova.network.os_vif_util [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.677 2 DEBUG nova.network.os_vif_util [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.677 2 DEBUG os_vif [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.678 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.679 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.683 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap072bf760-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap072bf760-1f, col_values=(('external_ids', {'iface-id': '072bf760-1ffd-4b15-bdd9-58b9b670feb8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:58:7f', 'vm-uuid': '4b5bb809-c821-466d-9a47-b5a6aa337212'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:13:50 compute-0 NetworkManager[44960]: <info>  [1760174030.6859] manager: (tap072bf760-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/419)
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.694 2 INFO os_vif [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f')
Oct 11 09:13:50 compute-0 kernel: tap072bf760-1f: entered promiscuous mode
Oct 11 09:13:50 compute-0 ovn_controller[152945]: 2025-10-11T09:13:50Z|01016|binding|INFO|Claiming lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 for this chassis.
Oct 11 09:13:50 compute-0 ovn_controller[152945]: 2025-10-11T09:13:50Z|01017|binding|INFO|072bf760-1ffd-4b15-bdd9-58b9b670feb8: Claiming fa:16:3e:c7:58:7f 10.100.0.11
Oct 11 09:13:50 compute-0 NetworkManager[44960]: <info>  [1760174030.8063] manager: (tap072bf760-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/420)
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.815 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:58:7f 10.100.0.11'], port_security=['fa:16:3e:c7:58:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4b5bb809-c821-466d-9a47-b5a6aa337212', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-824eb213-11b4-4a1c-ba73-2f66376f326f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '17c338b8-146b-462b-bba0-743df78b6636', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8beef9e7-87f9-4509-b665-58f91eb3377a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=072bf760-1ffd-4b15-bdd9-58b9b670feb8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:13:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.817 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 072bf760-1ffd-4b15-bdd9-58b9b670feb8 in datapath 824eb213-11b4-4a1c-ba73-2f66376f326f bound to our chassis
Oct 11 09:13:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.819 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 824eb213-11b4-4a1c-ba73-2f66376f326f
Oct 11 09:13:50 compute-0 ovn_controller[152945]: 2025-10-11T09:13:50Z|01018|binding|INFO|Setting lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 ovn-installed in OVS
Oct 11 09:13:50 compute-0 ovn_controller[152945]: 2025-10-11T09:13:50Z|01019|binding|INFO|Setting lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 up in Southbound
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:50 compute-0 systemd-udevd[374083]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:13:50 compute-0 podman[374060]: 2025-10-11 09:13:50.839146821 +0000 UTC m=+0.069807253 container create 1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hypatia, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 09:13:50 compute-0 nova_compute[260935]: 2025-10-11 09:13:50.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.841 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[29f2dcea-4b0a-4140-9586-3b8e665ede0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.842 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap824eb213-11 in ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:13:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.844 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap824eb213-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:13:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.844 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8dfe82-e9e0-4ecb-af62-b4d4cfa9d8d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.845 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6b0a47ce-d909-4981-be84-4d240931e064]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:50 compute-0 NetworkManager[44960]: <info>  [1760174030.8550] device (tap072bf760-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:13:50 compute-0 NetworkManager[44960]: <info>  [1760174030.8567] device (tap072bf760-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:13:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.867 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[2c74180d-02c2-4b19-a79f-7f81ada88931]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:50 compute-0 systemd-machined[215705]: New machine qemu-128-instance-0000006a.
Oct 11 09:13:50 compute-0 podman[374060]: 2025-10-11 09:13:50.788931321 +0000 UTC m=+0.019591733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:13:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:13:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:13:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:13:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:13:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:13:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:13:50 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2885213427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:13:50 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3511507650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:13:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.894 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[93df4a64-2523-4a1f-b2c0-94dab9cd1d99]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:50 compute-0 systemd[1]: Started libpod-conmon-1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2.scope.
Oct 11 09:13:50 compute-0 systemd[1]: Started Virtual Machine qemu-128-instance-0000006a.
Oct 11 09:13:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.936 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[94914c90-faee-438b-bce7-4893f21569b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:13:50 compute-0 auditd[700]: Audit daemon rotating log files
Oct 11 09:13:50 compute-0 NetworkManager[44960]: <info>  [1760174030.9504] manager: (tap824eb213-10): new Veth device (/org/freedesktop/NetworkManager/Devices/421)
Oct 11 09:13:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:50.948 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0d503f4f-c748-4330-a851-eaaecee717f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:50 compute-0 podman[374060]: 2025-10-11 09:13:50.988204487 +0000 UTC m=+0.218864909 container init 1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hypatia, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:13:51 compute-0 podman[374060]: 2025-10-11 09:13:51.003911281 +0000 UTC m=+0.234571713 container start 1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 09:13:51 compute-0 podman[374060]: 2025-10-11 09:13:51.009367772 +0000 UTC m=+0.240028254 container attach 1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hypatia, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:13:51 compute-0 determined_hypatia[374094]: 167 167
Oct 11 09:13:51 compute-0 systemd[1]: libpod-1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2.scope: Deactivated successfully.
Oct 11 09:13:51 compute-0 podman[374060]: 2025-10-11 09:13:51.01866999 +0000 UTC m=+0.249330422 container died 1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.014 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[aae1dd39-2a35-4f59-99d4-720b5e2913c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.023 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f03e8087-16db-4437-92b6-cb16b51ea12b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-046c3d7ec6a3e1ca8f36a32ed7d98d29bf0df08dd897555605e8fc0425975e6a-merged.mount: Deactivated successfully.
Oct 11 09:13:51 compute-0 NetworkManager[44960]: <info>  [1760174031.0859] device (tap824eb213-10): carrier: link connected
Oct 11 09:13:51 compute-0 podman[374060]: 2025-10-11 09:13:51.087436583 +0000 UTC m=+0.318096985 container remove 1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.097 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7140e0e5-e298-44fc-b26b-2a0f617365e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:51 compute-0 systemd[1]: libpod-conmon-1181b9fd8dd2977eaf121da177a897146d0bb712ef39c57e8701c61f5a599bc2.scope: Deactivated successfully.
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.126 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7ecc06-ca0c-45f6-aeb3-f159ee1646b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap824eb213-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:a7:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587578, 'reachable_time': 18087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374141, 'error': None, 'target': 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.151 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e505e4-1ad0-434e-b631-2028970a0d09]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:a71c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587578, 'tstamp': 587578}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374142, 'error': None, 'target': 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.178 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e8499fa9-53ec-4e28-8b5a-9514c1131190]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap824eb213-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:a7:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587578, 'reachable_time': 18087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374143, 'error': None, 'target': 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.212 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd85785-c96e-4ae0-96f1-532ee922c9f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.304 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[be543a97-4921-418d-95d3-8d29af03abc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.306 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap824eb213-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.307 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.308 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap824eb213-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:13:51 compute-0 nova_compute[260935]: 2025-10-11 09:13:51.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:51 compute-0 NetworkManager[44960]: <info>  [1760174031.3113] manager: (tap824eb213-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/422)
Oct 11 09:13:51 compute-0 kernel: tap824eb213-10: entered promiscuous mode
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.315 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap824eb213-10, col_values=(('external_ids', {'iface-id': 'f4946461-9ac5-43c5-81c4-5ba2f19b9f2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:13:51 compute-0 ovn_controller[152945]: 2025-10-11T09:13:51Z|01020|binding|INFO|Releasing lport f4946461-9ac5-43c5-81c4-5ba2f19b9f2d from this chassis (sb_readonly=0)
Oct 11 09:13:51 compute-0 nova_compute[260935]: 2025-10-11 09:13:51.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.340 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/824eb213-11b4-4a1c-ba73-2f66376f326f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/824eb213-11b4-4a1c-ba73-2f66376f326f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.342 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8d7f4abd-9e7a-4c4e-b668-e6194e4d6d85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.343 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-824eb213-11b4-4a1c-ba73-2f66376f326f
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/824eb213-11b4-4a1c-ba73-2f66376f326f.pid.haproxy
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 824eb213-11b4-4a1c-ba73-2f66376f326f
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:13:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:51.344 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'env', 'PROCESS_TAG=haproxy-824eb213-11b4-4a1c-ba73-2f66376f326f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/824eb213-11b4-4a1c-ba73-2f66376f326f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:13:51 compute-0 podman[374154]: 2025-10-11 09:13:51.392676372 +0000 UTC m=+0.098721834 container create 65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 09:13:51 compute-0 podman[374154]: 2025-10-11 09:13:51.359203636 +0000 UTC m=+0.065249148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:13:51 compute-0 systemd[1]: Started libpod-conmon-65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147.scope.
Oct 11 09:13:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef148b6cbb84017ffdfebfc86e683029c7463dfb6677f0f5eadfc146e25f1da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef148b6cbb84017ffdfebfc86e683029c7463dfb6677f0f5eadfc146e25f1da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef148b6cbb84017ffdfebfc86e683029c7463dfb6677f0f5eadfc146e25f1da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef148b6cbb84017ffdfebfc86e683029c7463dfb6677f0f5eadfc146e25f1da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef148b6cbb84017ffdfebfc86e683029c7463dfb6677f0f5eadfc146e25f1da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:13:51 compute-0 podman[374154]: 2025-10-11 09:13:51.542742696 +0000 UTC m=+0.248788218 container init 65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:13:51 compute-0 podman[374154]: 2025-10-11 09:13:51.557029821 +0000 UTC m=+0.263075243 container start 65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:13:51 compute-0 podman[374154]: 2025-10-11 09:13:51.560429775 +0000 UTC m=+0.266475287 container attach 65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:13:51 compute-0 podman[374243]: 2025-10-11 09:13:51.838312987 +0000 UTC m=+0.078222806 container create fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:13:51 compute-0 systemd[1]: Started libpod-conmon-fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595.scope.
Oct 11 09:13:51 compute-0 podman[374243]: 2025-10-11 09:13:51.797570599 +0000 UTC m=+0.037480498 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:13:51 compute-0 ceph-mon[74313]: pgmap v2160: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 30 KiB/s wr, 4 op/s
Oct 11 09:13:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15941a0eabc12ef18aa237435bb737824ea9fdbe58e73a7127e8fa87ef5c01c5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:13:51 compute-0 podman[374243]: 2025-10-11 09:13:51.949756072 +0000 UTC m=+0.189665901 container init fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 09:13:51 compute-0 podman[374243]: 2025-10-11 09:13:51.95763106 +0000 UTC m=+0.197540889 container start fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:13:51 compute-0 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[374257]: [NOTICE]   (374261) : New worker (374263) forked
Oct 11 09:13:51 compute-0 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[374257]: [NOTICE]   (374261) : Loading success.
Oct 11 09:13:52 compute-0 nova_compute[260935]: 2025-10-11 09:13:52.110 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 4b5bb809-c821-466d-9a47-b5a6aa337212 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 09:13:52 compute-0 nova_compute[260935]: 2025-10-11 09:13:52.110 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174032.1075108, 4b5bb809-c821-466d-9a47-b5a6aa337212 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:13:52 compute-0 nova_compute[260935]: 2025-10-11 09:13:52.112 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] VM Resumed (Lifecycle Event)
Oct 11 09:13:52 compute-0 nova_compute[260935]: 2025-10-11 09:13:52.113 2 DEBUG nova.compute.manager [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:13:52 compute-0 nova_compute[260935]: 2025-10-11 09:13:52.119 2 INFO nova.virt.libvirt.driver [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance rebooted successfully.
Oct 11 09:13:52 compute-0 nova_compute[260935]: 2025-10-11 09:13:52.120 2 DEBUG nova.compute.manager [None req-e3a51370-caa5-45fb-ac9a-270d84af9010 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:13:52 compute-0 nova_compute[260935]: 2025-10-11 09:13:52.160 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:13:52 compute-0 nova_compute[260935]: 2025-10-11 09:13:52.179 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:13:52 compute-0 nova_compute[260935]: 2025-10-11 09:13:52.217 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174032.1087902, 4b5bb809-c821-466d-9a47-b5a6aa337212 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:13:52 compute-0 nova_compute[260935]: 2025-10-11 09:13:52.217 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] VM Started (Lifecycle Event)
Oct 11 09:13:52 compute-0 nova_compute[260935]: 2025-10-11 09:13:52.243 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:13:52 compute-0 nova_compute[260935]: 2025-10-11 09:13:52.247 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:13:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2161: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 7.8 KiB/s rd, 30 KiB/s wr, 11 op/s
Oct 11 09:13:52 compute-0 loving_davinci[374181]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:13:52 compute-0 loving_davinci[374181]: --> relative data size: 1.0
Oct 11 09:13:52 compute-0 loving_davinci[374181]: --> All data devices are unavailable
Oct 11 09:13:52 compute-0 systemd[1]: libpod-65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147.scope: Deactivated successfully.
Oct 11 09:13:52 compute-0 systemd[1]: libpod-65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147.scope: Consumed 1.107s CPU time.
Oct 11 09:13:52 compute-0 podman[374296]: 2025-10-11 09:13:52.833869012 +0000 UTC m=+0.030385512 container died 65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 09:13:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ef148b6cbb84017ffdfebfc86e683029c7463dfb6677f0f5eadfc146e25f1da-merged.mount: Deactivated successfully.
Oct 11 09:13:52 compute-0 ceph-mon[74313]: pgmap v2161: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 7.8 KiB/s rd, 30 KiB/s wr, 11 op/s
Oct 11 09:13:52 compute-0 podman[374296]: 2025-10-11 09:13:52.924986034 +0000 UTC m=+0.121502544 container remove 65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 09:13:52 compute-0 systemd[1]: libpod-conmon-65a9e29df67dd2d39ae721425c864410f3e649c440827d896f418b8e32dd0147.scope: Deactivated successfully.
Oct 11 09:13:52 compute-0 sudo[373969]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:53 compute-0 sudo[374311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:13:53 compute-0 sudo[374311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:53 compute-0 sudo[374311]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:53 compute-0 sudo[374336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:13:53 compute-0 sudo[374336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:53 compute-0 sudo[374336]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:53 compute-0 sudo[374361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:13:53 compute-0 sudo[374361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:53 compute-0 sudo[374361]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:53 compute-0 sudo[374386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:13:53 compute-0 sudo[374386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:53 compute-0 podman[374451]: 2025-10-11 09:13:53.846885522 +0000 UTC m=+0.048403881 container create c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:13:53 compute-0 systemd[1]: Started libpod-conmon-c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570.scope.
Oct 11 09:13:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:13:53 compute-0 podman[374451]: 2025-10-11 09:13:53.827560747 +0000 UTC m=+0.029079126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:13:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:13:53 compute-0 podman[374451]: 2025-10-11 09:13:53.949491312 +0000 UTC m=+0.151009701 container init c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gates, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:13:53 compute-0 podman[374451]: 2025-10-11 09:13:53.961144274 +0000 UTC m=+0.162662633 container start c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 09:13:53 compute-0 podman[374451]: 2025-10-11 09:13:53.964323542 +0000 UTC m=+0.165841901 container attach c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gates, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:13:53 compute-0 epic_gates[374467]: 167 167
Oct 11 09:13:53 compute-0 systemd[1]: libpod-c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570.scope: Deactivated successfully.
Oct 11 09:13:53 compute-0 podman[374451]: 2025-10-11 09:13:53.970478753 +0000 UTC m=+0.171997152 container died c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:13:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c909e26b5ce3f4f13f640151274a5237d0989ccadd83bacf7f68229970cd4cfd-merged.mount: Deactivated successfully.
Oct 11 09:13:54 compute-0 podman[374451]: 2025-10-11 09:13:54.026518714 +0000 UTC m=+0.228037093 container remove c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gates, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:13:54 compute-0 systemd[1]: libpod-conmon-c0e38af6f9783064fc93649e55639f78149caf0f15744e566ba6811e33106570.scope: Deactivated successfully.
Oct 11 09:13:54 compute-0 podman[374490]: 2025-10-11 09:13:54.33167408 +0000 UTC m=+0.084308184 container create 6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 09:13:54 compute-0 systemd[1]: Started libpod-conmon-6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413.scope.
Oct 11 09:13:54 compute-0 nova_compute[260935]: 2025-10-11 09:13:54.378 2 DEBUG nova.compute.manager [req-a2d94969-bd01-4ea9-aaab-cd1e0fa0d1b6 req-e08a926e-d1b6-41d9-820e-60f735de703d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:13:54 compute-0 nova_compute[260935]: 2025-10-11 09:13:54.380 2 DEBUG oslo_concurrency.lockutils [req-a2d94969-bd01-4ea9-aaab-cd1e0fa0d1b6 req-e08a926e-d1b6-41d9-820e-60f735de703d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:13:54 compute-0 podman[374490]: 2025-10-11 09:13:54.291702244 +0000 UTC m=+0.044336428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:13:54 compute-0 nova_compute[260935]: 2025-10-11 09:13:54.380 2 DEBUG oslo_concurrency.lockutils [req-a2d94969-bd01-4ea9-aaab-cd1e0fa0d1b6 req-e08a926e-d1b6-41d9-820e-60f735de703d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:13:54 compute-0 nova_compute[260935]: 2025-10-11 09:13:54.384 2 DEBUG oslo_concurrency.lockutils [req-a2d94969-bd01-4ea9-aaab-cd1e0fa0d1b6 req-e08a926e-d1b6-41d9-820e-60f735de703d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:13:54 compute-0 nova_compute[260935]: 2025-10-11 09:13:54.384 2 DEBUG nova.compute.manager [req-a2d94969-bd01-4ea9-aaab-cd1e0fa0d1b6 req-e08a926e-d1b6-41d9-820e-60f735de703d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] No waiting events found dispatching network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:13:54 compute-0 nova_compute[260935]: 2025-10-11 09:13:54.385 2 WARNING nova.compute.manager [req-a2d94969-bd01-4ea9-aaab-cd1e0fa0d1b6 req-e08a926e-d1b6-41d9-820e-60f735de703d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received unexpected event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 for instance with vm_state active and task_state None.
Oct 11 09:13:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8634b20a56ccebe2680c75d526f4bc221f25677dc6d6e1b63cd912ee96e650cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8634b20a56ccebe2680c75d526f4bc221f25677dc6d6e1b63cd912ee96e650cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8634b20a56ccebe2680c75d526f4bc221f25677dc6d6e1b63cd912ee96e650cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8634b20a56ccebe2680c75d526f4bc221f25677dc6d6e1b63cd912ee96e650cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:13:54 compute-0 podman[374490]: 2025-10-11 09:13:54.428804449 +0000 UTC m=+0.181438543 container init 6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:13:54 compute-0 podman[374490]: 2025-10-11 09:13:54.442374354 +0000 UTC m=+0.195008478 container start 6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 09:13:54 compute-0 podman[374490]: 2025-10-11 09:13:54.446310293 +0000 UTC m=+0.198944407 container attach 6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 09:13:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 0 B/s wr, 7 op/s
Oct 11 09:13:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:13:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:13:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:13:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:13:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:13:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:13:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:13:54
Oct 11 09:13:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:13:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:13:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['vms', 'backups', 'volumes', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', '.mgr', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct 11 09:13:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]: {
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:     "0": [
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:         {
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "devices": [
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "/dev/loop3"
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             ],
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "lv_name": "ceph_lv0",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "lv_size": "21470642176",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "name": "ceph_lv0",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "tags": {
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.cluster_name": "ceph",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.crush_device_class": "",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.encrypted": "0",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.osd_id": "0",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.type": "block",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.vdo": "0"
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             },
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "type": "block",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "vg_name": "ceph_vg0"
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:         }
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:     ],
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:     "1": [
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:         {
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "devices": [
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "/dev/loop4"
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             ],
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "lv_name": "ceph_lv1",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "lv_size": "21470642176",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "name": "ceph_lv1",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "tags": {
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.cluster_name": "ceph",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.crush_device_class": "",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.encrypted": "0",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.osd_id": "1",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.type": "block",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.vdo": "0"
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             },
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "type": "block",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "vg_name": "ceph_vg1"
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:         }
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:     ],
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:     "2": [
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:         {
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "devices": [
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "/dev/loop5"
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             ],
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "lv_name": "ceph_lv2",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "lv_size": "21470642176",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "name": "ceph_lv2",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "tags": {
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.cluster_name": "ceph",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.crush_device_class": "",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.encrypted": "0",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.osd_id": "2",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.type": "block",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:                 "ceph.vdo": "0"
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             },
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "type": "block",
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:             "vg_name": "ceph_vg2"
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:         }
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]:     ]
Oct 11 09:13:55 compute-0 wonderful_engelbart[374507]: }
Oct 11 09:13:55 compute-0 systemd[1]: libpod-6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413.scope: Deactivated successfully.
Oct 11 09:13:55 compute-0 podman[374490]: 2025-10-11 09:13:55.290540841 +0000 UTC m=+1.043174975 container died 6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 09:13:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:13:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:13:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:13:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:13:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:13:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:13:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:13:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:13:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:13:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:13:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-8634b20a56ccebe2680c75d526f4bc221f25677dc6d6e1b63cd912ee96e650cc-merged.mount: Deactivated successfully.
Oct 11 09:13:55 compute-0 nova_compute[260935]: 2025-10-11 09:13:55.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:55 compute-0 podman[374490]: 2025-10-11 09:13:55.386857907 +0000 UTC m=+1.139492011 container remove 6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 09:13:55 compute-0 systemd[1]: libpod-conmon-6fe9f5811e8ce5522a659f2f54ba7dbb066334c1220d82b74b9ad6ee66a25413.scope: Deactivated successfully.
Oct 11 09:13:55 compute-0 sudo[374386]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:55 compute-0 podman[374517]: 2025-10-11 09:13:55.47079212 +0000 UTC m=+0.145171329 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 09:13:55 compute-0 sudo[374546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:13:55 compute-0 sudo[374546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:55 compute-0 sudo[374546]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:55 compute-0 sudo[374572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:13:55 compute-0 sudo[374572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:55 compute-0 sudo[374572]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:55 compute-0 ceph-mon[74313]: pgmap v2162: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 0 B/s wr, 7 op/s
Oct 11 09:13:55 compute-0 sudo[374597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:13:55 compute-0 sudo[374597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:55 compute-0 sudo[374597]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:55 compute-0 nova_compute[260935]: 2025-10-11 09:13:55.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:13:55 compute-0 sudo[374622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:13:55 compute-0 sudo[374622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:56 compute-0 podman[374686]: 2025-10-11 09:13:56.180848913 +0000 UTC m=+0.056857885 container create 785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 09:13:56 compute-0 systemd[1]: Started libpod-conmon-785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe.scope.
Oct 11 09:13:56 compute-0 podman[374686]: 2025-10-11 09:13:56.158061732 +0000 UTC m=+0.034070684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:13:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:13:56 compute-0 podman[374686]: 2025-10-11 09:13:56.282673651 +0000 UTC m=+0.158682663 container init 785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 11 09:13:56 compute-0 podman[374686]: 2025-10-11 09:13:56.299144017 +0000 UTC m=+0.175152989 container start 785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:13:56 compute-0 podman[374686]: 2025-10-11 09:13:56.30359311 +0000 UTC m=+0.179602092 container attach 785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:13:56 compute-0 nifty_zhukovsky[374703]: 167 167
Oct 11 09:13:56 compute-0 systemd[1]: libpod-785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe.scope: Deactivated successfully.
Oct 11 09:13:56 compute-0 podman[374686]: 2025-10-11 09:13:56.310042969 +0000 UTC m=+0.186051931 container died 785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct 11 09:13:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-cac356a4a4743ac1b5926d654e4032faf8165de18b51f4360f5332caf6550791-merged.mount: Deactivated successfully.
Oct 11 09:13:56 compute-0 podman[374686]: 2025-10-11 09:13:56.392015738 +0000 UTC m=+0.268024680 container remove 785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 09:13:56 compute-0 systemd[1]: libpod-conmon-785e467b4f5529d8d591e577721e27ce87bcdb711d821aa96a0ed9bf0906acbe.scope: Deactivated successfully.
Oct 11 09:13:56 compute-0 nova_compute[260935]: 2025-10-11 09:13:56.512 2 DEBUG nova.compute.manager [req-4c83a98e-f310-40eb-9bba-ce546f5116b1 req-1694c42d-09b9-4e2b-b213-76b0abf388eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:13:56 compute-0 nova_compute[260935]: 2025-10-11 09:13:56.513 2 DEBUG oslo_concurrency.lockutils [req-4c83a98e-f310-40eb-9bba-ce546f5116b1 req-1694c42d-09b9-4e2b-b213-76b0abf388eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:13:56 compute-0 nova_compute[260935]: 2025-10-11 09:13:56.513 2 DEBUG oslo_concurrency.lockutils [req-4c83a98e-f310-40eb-9bba-ce546f5116b1 req-1694c42d-09b9-4e2b-b213-76b0abf388eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:13:56 compute-0 nova_compute[260935]: 2025-10-11 09:13:56.514 2 DEBUG oslo_concurrency.lockutils [req-4c83a98e-f310-40eb-9bba-ce546f5116b1 req-1694c42d-09b9-4e2b-b213-76b0abf388eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:13:56 compute-0 nova_compute[260935]: 2025-10-11 09:13:56.514 2 DEBUG nova.compute.manager [req-4c83a98e-f310-40eb-9bba-ce546f5116b1 req-1694c42d-09b9-4e2b-b213-76b0abf388eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] No waiting events found dispatching network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:13:56 compute-0 nova_compute[260935]: 2025-10-11 09:13:56.514 2 WARNING nova.compute.manager [req-4c83a98e-f310-40eb-9bba-ce546f5116b1 req-1694c42d-09b9-4e2b-b213-76b0abf388eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received unexpected event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 for instance with vm_state active and task_state None.
Oct 11 09:13:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 0 B/s wr, 7 op/s
Oct 11 09:13:56 compute-0 podman[374727]: 2025-10-11 09:13:56.647391186 +0000 UTC m=+0.080483278 container create 9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:13:56 compute-0 podman[374727]: 2025-10-11 09:13:56.610963838 +0000 UTC m=+0.044056020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:13:56 compute-0 systemd[1]: Started libpod-conmon-9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d.scope.
Oct 11 09:13:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd4f0210e3a8e390e1428446e7b5430c3ebc18baad2d99766567c5fcf70369d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd4f0210e3a8e390e1428446e7b5430c3ebc18baad2d99766567c5fcf70369d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd4f0210e3a8e390e1428446e7b5430c3ebc18baad2d99766567c5fcf70369d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd4f0210e3a8e390e1428446e7b5430c3ebc18baad2d99766567c5fcf70369d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:13:56 compute-0 podman[374727]: 2025-10-11 09:13:56.780334726 +0000 UTC m=+0.213426888 container init 9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sammet, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:13:56 compute-0 podman[374727]: 2025-10-11 09:13:56.795965779 +0000 UTC m=+0.229057891 container start 9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 09:13:56 compute-0 podman[374727]: 2025-10-11 09:13:56.800212676 +0000 UTC m=+0.233304858 container attach 9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sammet, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:13:57 compute-0 ceph-mon[74313]: pgmap v2163: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 0 B/s wr, 7 op/s
Oct 11 09:13:57 compute-0 nice_sammet[374743]: {
Oct 11 09:13:57 compute-0 nice_sammet[374743]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:13:57 compute-0 nice_sammet[374743]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:13:57 compute-0 nice_sammet[374743]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:13:57 compute-0 nice_sammet[374743]:         "osd_id": 2,
Oct 11 09:13:57 compute-0 nice_sammet[374743]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:13:57 compute-0 nice_sammet[374743]:         "type": "bluestore"
Oct 11 09:13:57 compute-0 nice_sammet[374743]:     },
Oct 11 09:13:57 compute-0 nice_sammet[374743]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:13:57 compute-0 nice_sammet[374743]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:13:57 compute-0 nice_sammet[374743]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:13:57 compute-0 nice_sammet[374743]:         "osd_id": 0,
Oct 11 09:13:57 compute-0 nice_sammet[374743]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:13:57 compute-0 nice_sammet[374743]:         "type": "bluestore"
Oct 11 09:13:57 compute-0 nice_sammet[374743]:     },
Oct 11 09:13:57 compute-0 nice_sammet[374743]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:13:57 compute-0 nice_sammet[374743]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:13:57 compute-0 nice_sammet[374743]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:13:57 compute-0 nice_sammet[374743]:         "osd_id": 1,
Oct 11 09:13:57 compute-0 nice_sammet[374743]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:13:57 compute-0 nice_sammet[374743]:         "type": "bluestore"
Oct 11 09:13:57 compute-0 nice_sammet[374743]:     }
Oct 11 09:13:57 compute-0 nice_sammet[374743]: }
Oct 11 09:13:57 compute-0 systemd[1]: libpod-9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d.scope: Deactivated successfully.
Oct 11 09:13:57 compute-0 podman[374727]: 2025-10-11 09:13:57.770051221 +0000 UTC m=+1.203143333 container died 9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:13:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbd4f0210e3a8e390e1428446e7b5430c3ebc18baad2d99766567c5fcf70369d-merged.mount: Deactivated successfully.
Oct 11 09:13:57 compute-0 podman[374727]: 2025-10-11 09:13:57.844920503 +0000 UTC m=+1.278012595 container remove 9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sammet, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:13:57 compute-0 systemd[1]: libpod-conmon-9f1cb4f830036fa8c95f4a0aafac1589c3ea57a684af86b84ccd67212788319d.scope: Deactivated successfully.
Oct 11 09:13:57 compute-0 sudo[374622]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:13:57 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:13:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:13:57 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:13:57 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 17b8ea58-ced7-4802-bb22-0ef879decb02 does not exist
Oct 11 09:13:57 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 4b88a664-b9cc-4637-aa93-826f5b013ab1 does not exist
Oct 11 09:13:58 compute-0 sudo[374789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:13:58 compute-0 sudo[374789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:58 compute-0 sudo[374789]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:58 compute-0 sudo[374814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:13:58 compute-0 sudo[374814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:13:58 compute-0 sudo[374814]: pam_unix(sudo:session): session closed for user root
Oct 11 09:13:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2164: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Oct 11 09:13:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:58.808 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:cd:e5 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-77797f2a-6d47-4af7-aba0-0c6e896bb489', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77797f2a-6d47-4af7-aba0-0c6e896bb489', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7020c6a7808745b3bfdbd16acc7ff39e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01bd402c-a60d-400e-bb8e-f9e8f782e47c, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f3e66d3d-6090-4c0d-ae59-e7c3b5cba99f) old=Port_Binding(mac=['fa:16:3e:57:cd:e5 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-77797f2a-6d47-4af7-aba0-0c6e896bb489', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77797f2a-6d47-4af7-aba0-0c6e896bb489', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7020c6a7808745b3bfdbd16acc7ff39e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:13:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:58.811 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f3e66d3d-6090-4c0d-ae59-e7c3b5cba99f in datapath 77797f2a-6d47-4af7-aba0-0c6e896bb489 updated
Oct 11 09:13:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:58.814 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77797f2a-6d47-4af7-aba0-0c6e896bb489, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:13:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:13:58.815 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ede057e-d99b-4501-8827-d0bd0a4f84df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:13:58 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:13:58 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:13:58 compute-0 ceph-mon[74313]: pgmap v2164: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Oct 11 09:13:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:14:00 compute-0 nova_compute[260935]: 2025-10-11 09:14:00.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 71 op/s
Oct 11 09:14:00 compute-0 nova_compute[260935]: 2025-10-11 09:14:00.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:01 compute-0 ceph-mon[74313]: pgmap v2165: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 71 op/s
Oct 11 09:14:01 compute-0 podman[374839]: 2025-10-11 09:14:01.831295383 +0000 UTC m=+0.117445562 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:14:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:02.340 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2 2001:db8::f816:3eff:fe6f:967e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 2001:db8::f816:3eff:fe6f:967e'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:14:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:02.342 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 updated
Oct 11 09:14:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:02.344 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:14:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:02.345 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eb119a2d-4223-4435-8489-0c16437ec814]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2166: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 71 op/s
Oct 11 09:14:03 compute-0 ceph-mon[74313]: pgmap v2166: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 71 op/s
Oct 11 09:14:03 compute-0 ovn_controller[152945]: 2025-10-11T09:14:03Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:58:7f 10.100.0.11
Oct 11 09:14:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:14:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033874134148922314 of space, bias 1.0, pg target 1.0162240244676695 quantized to 32 (current 32)
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:14:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:14:05 compute-0 nova_compute[260935]: 2025-10-11 09:14:05.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:05 compute-0 ceph-mon[74313]: pgmap v2167: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 11 09:14:05 compute-0 nova_compute[260935]: 2025-10-11 09:14:05.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:06.024 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 2001:db8::f816:3eff:fe6f:967e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2 2001:db8::f816:3eff:fe6f:967e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:14:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:06.024 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 updated
Oct 11 09:14:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:06.026 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:14:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:06.026 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[42a41ca8-1f61-441d-a71b-a7deed1cdef7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:06 compute-0 unix_chkpwd[374861]: password check failed for user (root)
Oct 11 09:14:06 compute-0 sshd-session[374859]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:14:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:14:06 compute-0 podman[374862]: 2025-10-11 09:14:06.822122564 +0000 UTC m=+0.105503271 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 09:14:06 compute-0 podman[374863]: 2025-10-11 09:14:06.863804037 +0000 UTC m=+0.142266048 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 09:14:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:07.443 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2 2001:db8::f816:3eff:fe6f:967e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 2001:db8::f816:3eff:fe6f:967e'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:14:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:07.444 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 updated
Oct 11 09:14:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:07.447 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:14:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:07.448 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9d49f0-c156-4eb6-80e2-2539dcaa0de5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:07 compute-0 ceph-mon[74313]: pgmap v2168: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:14:08 compute-0 sshd-session[374859]: Failed password for root from 165.232.82.252 port 57226 ssh2
Oct 11 09:14:08 compute-0 sshd-session[374859]: Connection closed by authenticating user root 165.232.82.252 port 57226 [preauth]
Oct 11 09:14:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 12 KiB/s wr, 109 op/s
Oct 11 09:14:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:14:09 compute-0 ceph-mon[74313]: pgmap v2169: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 12 KiB/s wr, 109 op/s
Oct 11 09:14:10 compute-0 nova_compute[260935]: 2025-10-11 09:14:10.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2170: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 45 op/s
Oct 11 09:14:10 compute-0 nova_compute[260935]: 2025-10-11 09:14:10.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:10 compute-0 nova_compute[260935]: 2025-10-11 09:14:10.719 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:14:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:10.819 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2 2001:db8::f816:3eff:fe6f:967e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:14:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:10.821 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 updated
Oct 11 09:14:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:10.823 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:14:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:10.823 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[60f2c391-7c0a-4392-813d-f712d37c06bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:11 compute-0 nova_compute[260935]: 2025-10-11 09:14:11.340 2 INFO nova.compute.manager [None req-a9ccf037-b415-4fd8-ba5b-1a9737c06444 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Get console output
Oct 11 09:14:11 compute-0 nova_compute[260935]: 2025-10-11 09:14:11.348 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:14:11 compute-0 ceph-mon[74313]: pgmap v2170: 321 pgs: 321 active+clean; 407 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 45 op/s
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.361 2 DEBUG nova.compute.manager [req-e89c302b-5525-4ad8-aee1-13001d738e99 req-24334ce0-0517-4372-b3a7-f475361380c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-changed-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.361 2 DEBUG nova.compute.manager [req-e89c302b-5525-4ad8-aee1-13001d738e99 req-24334ce0-0517-4372-b3a7-f475361380c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Refreshing instance network info cache due to event network-changed-072bf760-1ffd-4b15-bdd9-58b9b670feb8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.362 2 DEBUG oslo_concurrency.lockutils [req-e89c302b-5525-4ad8-aee1-13001d738e99 req-24334ce0-0517-4372-b3a7-f475361380c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.362 2 DEBUG oslo_concurrency.lockutils [req-e89c302b-5525-4ad8-aee1-13001d738e99 req-24334ce0-0517-4372-b3a7-f475361380c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.363 2 DEBUG nova.network.neutron [req-e89c302b-5525-4ad8-aee1-13001d738e99 req-24334ce0-0517-4372-b3a7-f475361380c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Refreshing network info cache for port 072bf760-1ffd-4b15-bdd9-58b9b670feb8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.406 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.406 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.407 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.408 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.408 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.410 2 INFO nova.compute.manager [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Terminating instance
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.412 2 DEBUG nova.compute.manager [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:14:12 compute-0 kernel: tap072bf760-1f (unregistering): left promiscuous mode
Oct 11 09:14:12 compute-0 NetworkManager[44960]: <info>  [1760174052.4857] device (tap072bf760-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:12 compute-0 ovn_controller[152945]: 2025-10-11T09:14:12Z|01021|binding|INFO|Releasing lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 from this chassis (sb_readonly=0)
Oct 11 09:14:12 compute-0 ovn_controller[152945]: 2025-10-11T09:14:12Z|01022|binding|INFO|Setting lport 072bf760-1ffd-4b15-bdd9-58b9b670feb8 down in Southbound
Oct 11 09:14:12 compute-0 ovn_controller[152945]: 2025-10-11T09:14:12Z|01023|binding|INFO|Removing iface tap072bf760-1f ovn-installed in OVS
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.510 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:58:7f 10.100.0.11'], port_security=['fa:16:3e:c7:58:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4b5bb809-c821-466d-9a47-b5a6aa337212', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-824eb213-11b4-4a1c-ba73-2f66376f326f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '17c338b8-146b-462b-bba0-743df78b6636', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8beef9e7-87f9-4509-b665-58f91eb3377a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=072bf760-1ffd-4b15-bdd9-58b9b670feb8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:14:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.511 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 072bf760-1ffd-4b15-bdd9-58b9b670feb8 in datapath 824eb213-11b4-4a1c-ba73-2f66376f326f unbound from our chassis
Oct 11 09:14:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.513 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 824eb213-11b4-4a1c-ba73-2f66376f326f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:14:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.513 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[af39a327-df3f-45f9-bf37-bc6a4d56dbd4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.514 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f namespace which is not needed anymore
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 409 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 22 KiB/s wr, 46 op/s
Oct 11 09:14:12 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Oct 11 09:14:12 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d0000006a.scope: Consumed 13.468s CPU time.
Oct 11 09:14:12 compute-0 systemd-machined[215705]: Machine qemu-128-instance-0000006a terminated.
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.663 2 INFO nova.virt.libvirt.driver [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Instance destroyed successfully.
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.664 2 DEBUG nova.objects.instance [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'resources' on Instance uuid 4b5bb809-c821-466d-9a47-b5a6aa337212 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.684 2 DEBUG nova.virt.libvirt.vif [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:13:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-429546179',display_name='tempest-TestNetworkAdvancedServerOps-server-429546179',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-429546179',id=106,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI+dqizmLaYC2433qIpQLnnx/a3P6lrbi1ICEZMm0vIsSZegV5kK7h8RBLfeGIs6fvNJtxm5wiio76URdFktX8daCex/YjqNpM8rlSnMQuWZwdYT6LukjvR6b7qleLKRfA==',key_name='tempest-TestNetworkAdvancedServerOps-1430850582',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:13:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-2d0dbaqa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:13:52Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=4b5bb809-c821-466d-9a47-b5a6aa337212,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.686 2 DEBUG nova.network.os_vif_util [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.688 2 DEBUG nova.network.os_vif_util [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.688 2 DEBUG os_vif [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.692 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap072bf760-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.700 2 INFO os_vif [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:58:7f,bridge_name='br-int',has_traffic_filtering=True,id=072bf760-1ffd-4b15-bdd9-58b9b670feb8,network=Network(824eb213-11b4-4a1c-ba73-2f66376f326f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap072bf760-1f')
Oct 11 09:14:12 compute-0 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[374257]: [NOTICE]   (374261) : haproxy version is 2.8.14-c23fe91
Oct 11 09:14:12 compute-0 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[374257]: [NOTICE]   (374261) : path to executable is /usr/sbin/haproxy
Oct 11 09:14:12 compute-0 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[374257]: [WARNING]  (374261) : Exiting Master process...
Oct 11 09:14:12 compute-0 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[374257]: [ALERT]    (374261) : Current worker (374263) exited with code 143 (Terminated)
Oct 11 09:14:12 compute-0 neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f[374257]: [WARNING]  (374261) : All workers exited. Exiting... (0)
Oct 11 09:14:12 compute-0 systemd[1]: libpod-fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595.scope: Deactivated successfully.
Oct 11 09:14:12 compute-0 podman[374937]: 2025-10-11 09:14:12.747564576 +0000 UTC m=+0.075073059 container died fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 09:14:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.774 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2 2001:db8::f816:3eff:fe6f:967e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595-userdata-shm.mount: Deactivated successfully.
Oct 11 09:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-15941a0eabc12ef18aa237435bb737824ea9fdbe58e73a7127e8fa87ef5c01c5-merged.mount: Deactivated successfully.
Oct 11 09:14:12 compute-0 podman[374937]: 2025-10-11 09:14:12.818349216 +0000 UTC m=+0.145857729 container cleanup fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 09:14:12 compute-0 systemd[1]: libpod-conmon-fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595.scope: Deactivated successfully.
Oct 11 09:14:12 compute-0 podman[374991]: 2025-10-11 09:14:12.928789642 +0000 UTC m=+0.079441689 container remove fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:14:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.936 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33bef591-5136-4adc-b1b6-0a44a315c060]: (4, ('Sat Oct 11 09:14:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f (fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595)\nfe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595\nSat Oct 11 09:14:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f (fe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595)\nfe44361210902a7d2aa4fa61cd2749a87305dac2f61418b347db2edef02fb595\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.938 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5d99ea83-9fc5-42b7-913d-f1b84ba88096]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.940 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap824eb213-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:12 compute-0 kernel: tap824eb213-10: left promiscuous mode
Oct 11 09:14:12 compute-0 nova_compute[260935]: 2025-10-11 09:14:12.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.964 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4dbdbd-e751-4aba-9111-d8d6d66b228a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.993 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[345f9357-c1d2-43c6-bf5b-24fb2a521484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:12.995 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[643170e7-1f24-4f99-8213-8c96a68a3b96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:13.017 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fd752f6e-38fb-4924-bb3b-8d33355702e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587563, 'reachable_time': 42835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375008, 'error': None, 'target': 'ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d824eb213\x2d11b4\x2d4a1c\x2dba73\x2d2f66376f326f.mount: Deactivated successfully.
Oct 11 09:14:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:13.023 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-824eb213-11b4-4a1c-ba73-2f66376f326f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:14:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:13.023 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec416a3-94b7-4153-9ab4-abc219320fa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:13.024 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 unbound from our chassis
Oct 11 09:14:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:13.026 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:14:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:13.027 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[62357795-a2f7-47b3-840e-9abfacb32d3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:13 compute-0 nova_compute[260935]: 2025-10-11 09:14:13.124 2 INFO nova.virt.libvirt.driver [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Deleting instance files /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212_del
Oct 11 09:14:13 compute-0 nova_compute[260935]: 2025-10-11 09:14:13.125 2 INFO nova.virt.libvirt.driver [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Deletion of /var/lib/nova/instances/4b5bb809-c821-466d-9a47-b5a6aa337212_del complete
Oct 11 09:14:13 compute-0 nova_compute[260935]: 2025-10-11 09:14:13.190 2 INFO nova.compute.manager [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 11 09:14:13 compute-0 nova_compute[260935]: 2025-10-11 09:14:13.190 2 DEBUG oslo.service.loopingcall [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:14:13 compute-0 nova_compute[260935]: 2025-10-11 09:14:13.191 2 DEBUG nova.compute.manager [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:14:13 compute-0 nova_compute[260935]: 2025-10-11 09:14:13.191 2 DEBUG nova.network.neutron [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:14:13 compute-0 ceph-mon[74313]: pgmap v2171: 321 pgs: 321 active+clean; 409 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 22 KiB/s wr, 46 op/s
Oct 11 09:14:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.523 2 DEBUG nova.compute.manager [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-unplugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.524 2 DEBUG oslo_concurrency.lockutils [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.524 2 DEBUG oslo_concurrency.lockutils [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.524 2 DEBUG oslo_concurrency.lockutils [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.525 2 DEBUG nova.compute.manager [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] No waiting events found dispatching network-vif-unplugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.525 2 DEBUG nova.compute.manager [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-unplugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.526 2 DEBUG nova.compute.manager [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.526 2 DEBUG oslo_concurrency.lockutils [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.527 2 DEBUG oslo_concurrency.lockutils [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.527 2 DEBUG oslo_concurrency.lockutils [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.527 2 DEBUG nova.compute.manager [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] No waiting events found dispatching network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.528 2 WARNING nova.compute.manager [req-04961ed3-0438-4ac5-b1e2-e15155688bc2 req-2ff45222-9a4d-4742-9e0f-a00981f15663 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received unexpected event network-vif-plugged-072bf760-1ffd-4b15-bdd9-58b9b670feb8 for instance with vm_state active and task_state deleting.
Oct 11 09:14:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2172: 321 pgs: 321 active+clean; 409 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 22 KiB/s wr, 46 op/s
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.654 2 DEBUG nova.network.neutron [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.683 2 DEBUG nova.network.neutron [req-e89c302b-5525-4ad8-aee1-13001d738e99 req-24334ce0-0517-4372-b3a7-f475361380c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updated VIF entry in instance network info cache for port 072bf760-1ffd-4b15-bdd9-58b9b670feb8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.684 2 DEBUG nova.network.neutron [req-e89c302b-5525-4ad8-aee1-13001d738e99 req-24334ce0-0517-4372-b3a7-f475361380c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updating instance_info_cache with network_info: [{"id": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "address": "fa:16:3e:c7:58:7f", "network": {"id": "824eb213-11b4-4a1c-ba73-2f66376f326f", "bridge": "br-int", "label": "tempest-network-smoke--404577306", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap072bf760-1f", "ovs_interfaceid": "072bf760-1ffd-4b15-bdd9-58b9b670feb8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.690 2 INFO nova.compute.manager [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Took 1.50 seconds to deallocate network for instance.
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.724 2 DEBUG oslo_concurrency.lockutils [req-e89c302b-5525-4ad8-aee1-13001d738e99 req-24334ce0-0517-4372-b3a7-f475361380c0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-4b5bb809-c821-466d-9a47-b5a6aa337212" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.761 2 DEBUG nova.compute.manager [req-89526ed6-16f0-4a40-9549-5de0e3a834f6 req-7c19813a-8672-4645-b615-d6b730ba9229 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Received event network-vif-deleted-072bf760-1ffd-4b15-bdd9-58b9b670feb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.761 2 INFO nova.compute.manager [req-89526ed6-16f0-4a40-9549-5de0e3a834f6 req-7c19813a-8672-4645-b615-d6b730ba9229 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Neutron deleted interface 072bf760-1ffd-4b15-bdd9-58b9b670feb8; detaching it from the instance and deleting it from the info cache
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.762 2 DEBUG nova.network.neutron [req-89526ed6-16f0-4a40-9549-5de0e3a834f6 req-7c19813a-8672-4645-b615-d6b730ba9229 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.765 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.766 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.791 2 DEBUG nova.compute.manager [req-89526ed6-16f0-4a40-9549-5de0e3a834f6 req-7c19813a-8672-4645-b615-d6b730ba9229 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Detach interface failed, port_id=072bf760-1ffd-4b15-bdd9-58b9b670feb8, reason: Instance 4b5bb809-c821-466d-9a47-b5a6aa337212 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 09:14:14 compute-0 nova_compute[260935]: 2025-10-11 09:14:14.916 2 DEBUG oslo_concurrency.processutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:14:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:15.209 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:15.210 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:15.212 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:14:15 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2032363935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:14:15 compute-0 nova_compute[260935]: 2025-10-11 09:14:15.416 2 DEBUG oslo_concurrency.processutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:14:15 compute-0 nova_compute[260935]: 2025-10-11 09:14:15.424 2 DEBUG nova.compute.provider_tree [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:14:15 compute-0 nova_compute[260935]: 2025-10-11 09:14:15.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:15 compute-0 nova_compute[260935]: 2025-10-11 09:14:15.445 2 DEBUG nova.scheduler.client.report [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:14:15 compute-0 nova_compute[260935]: 2025-10-11 09:14:15.584 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:15 compute-0 ceph-mon[74313]: pgmap v2172: 321 pgs: 321 active+clean; 409 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 22 KiB/s wr, 46 op/s
Oct 11 09:14:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2032363935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:14:15 compute-0 nova_compute[260935]: 2025-10-11 09:14:15.743 2 INFO nova.scheduler.client.report [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Deleted allocations for instance 4b5bb809-c821-466d-9a47-b5a6aa337212
Oct 11 09:14:15 compute-0 nova_compute[260935]: 2025-10-11 09:14:15.864 2 DEBUG oslo_concurrency.lockutils [None req-5f616836-f1e6-48c3-96cc-2b7cadb0680a a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "4b5bb809-c821-466d-9a47-b5a6aa337212" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 409 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 22 KiB/s wr, 46 op/s
Oct 11 09:14:17 compute-0 ceph-mon[74313]: pgmap v2173: 321 pgs: 321 active+clean; 409 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 22 KiB/s wr, 46 op/s
Oct 11 09:14:17 compute-0 nova_compute[260935]: 2025-10-11 09:14:17.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 549 KiB/s rd, 23 KiB/s wr, 74 op/s
Oct 11 09:14:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:18.652 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2 2001:db8::f816:3eff:fe6f:967e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:14:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:18.654 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 updated
Oct 11 09:14:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:18.656 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:14:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:18.657 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9254b364-92f9-47bf-8a05-98fb15d2bc39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:14:19 compute-0 ceph-mon[74313]: pgmap v2174: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 549 KiB/s rd, 23 KiB/s wr, 74 op/s
Oct 11 09:14:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:19.758 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2 2001:db8::f816:3eff:fe6f:967e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:14:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:19.760 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 updated
Oct 11 09:14:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:19.762 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:14:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:19.763 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7887cc6e-b6e5-4b5e-bc38-2273ff91d787]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:20 compute-0 ovn_controller[152945]: 2025-10-11T09:14:20Z|01024|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:14:20 compute-0 ovn_controller[152945]: 2025-10-11T09:14:20Z|01025|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:14:20 compute-0 nova_compute[260935]: 2025-10-11 09:14:20.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:20 compute-0 ovn_controller[152945]: 2025-10-11T09:14:20Z|01026|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:14:20 compute-0 ovn_controller[152945]: 2025-10-11T09:14:20Z|01027|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:14:20 compute-0 nova_compute[260935]: 2025-10-11 09:14:20.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:20 compute-0 nova_compute[260935]: 2025-10-11 09:14:20.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 28 op/s
Oct 11 09:14:21 compute-0 nova_compute[260935]: 2025-10-11 09:14:21.661 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:14:21 compute-0 nova_compute[260935]: 2025-10-11 09:14:21.696 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:14:21 compute-0 ceph-mon[74313]: pgmap v2175: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 28 op/s
Oct 11 09:14:21 compute-0 nova_compute[260935]: 2025-10-11 09:14:21.697 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid b75d8ded-515b-48ff-a6b6-28df88878996 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:14:21 compute-0 nova_compute[260935]: 2025-10-11 09:14:21.698 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 52be16b4-343a-4fd4-9041-39069a1fde2a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:14:21 compute-0 nova_compute[260935]: 2025-10-11 09:14:21.698 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:21 compute-0 nova_compute[260935]: 2025-10-11 09:14:21.699 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:21 compute-0 nova_compute[260935]: 2025-10-11 09:14:21.699 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:21 compute-0 nova_compute[260935]: 2025-10-11 09:14:21.700 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:21 compute-0 nova_compute[260935]: 2025-10-11 09:14:21.701 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:21 compute-0 nova_compute[260935]: 2025-10-11 09:14:21.701 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:21 compute-0 nova_compute[260935]: 2025-10-11 09:14:21.799 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:21 compute-0 nova_compute[260935]: 2025-10-11 09:14:21.800 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:21 compute-0 nova_compute[260935]: 2025-10-11 09:14:21.803 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 28 op/s
Oct 11 09:14:22 compute-0 nova_compute[260935]: 2025-10-11 09:14:22.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:22 compute-0 nova_compute[260935]: 2025-10-11 09:14:22.743 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:14:23 compute-0 nova_compute[260935]: 2025-10-11 09:14:23.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:14:23 compute-0 ceph-mon[74313]: pgmap v2176: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 28 op/s
Oct 11 09:14:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:14:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2177: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:14:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:14:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:14:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:14:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:14:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:14:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:14:25 compute-0 nova_compute[260935]: 2025-10-11 09:14:25.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:25.684 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 2001:db8::f816:3eff:fe6f:967e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 10.100.0.2 2001:db8::f816:3eff:fe6f:967e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:14:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:25.685 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 updated
Oct 11 09:14:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:25.688 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:14:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:25.689 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f012c683-6c20-4edd-8218-95eca14b936c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:25 compute-0 nova_compute[260935]: 2025-10-11 09:14:25.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:14:25 compute-0 nova_compute[260935]: 2025-10-11 09:14:25.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:14:25 compute-0 nova_compute[260935]: 2025-10-11 09:14:25.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:14:25 compute-0 ceph-mon[74313]: pgmap v2177: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:14:25 compute-0 podman[375032]: 2025-10-11 09:14:25.804227461 +0000 UTC m=+0.100178684 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:14:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:14:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:14:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3223603160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:14:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:14:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3223603160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:14:26 compute-0 nova_compute[260935]: 2025-10-11 09:14:26.700 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:14:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:26.701 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:14:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:26.703 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:14:26 compute-0 nova_compute[260935]: 2025-10-11 09:14:26.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3223603160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:14:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3223603160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:14:27 compute-0 nova_compute[260935]: 2025-10-11 09:14:27.659 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174052.6581135, 4b5bb809-c821-466d-9a47-b5a6aa337212 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:14:27 compute-0 nova_compute[260935]: 2025-10-11 09:14:27.660 2 INFO nova.compute.manager [-] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] VM Stopped (Lifecycle Event)
Oct 11 09:14:27 compute-0 nova_compute[260935]: 2025-10-11 09:14:27.689 2 DEBUG nova.compute.manager [None req-0fd74466-5629-43e7-9801-53bda3281b2a - - - - - -] [instance: 4b5bb809-c821-466d-9a47-b5a6aa337212] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:14:27 compute-0 nova_compute[260935]: 2025-10-11 09:14:27.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:27 compute-0 ceph-mon[74313]: pgmap v2178: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:14:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:14:28 compute-0 nova_compute[260935]: 2025-10-11 09:14:28.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:14:28 compute-0 nova_compute[260935]: 2025-10-11 09:14:28.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:28 compute-0 nova_compute[260935]: 2025-10-11 09:14:28.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:28 compute-0 nova_compute[260935]: 2025-10-11 09:14:28.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:28 compute-0 nova_compute[260935]: 2025-10-11 09:14:28.739 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:14:28 compute-0 nova_compute[260935]: 2025-10-11 09:14:28.739 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:14:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:14:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:14:29 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/704317858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.215 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.328 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.328 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.329 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.335 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.336 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.341 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.342 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.609 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.610 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3030MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.611 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.611 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.734 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.735 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.735 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.736 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.736 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:14:29 compute-0 ceph-mon[74313]: pgmap v2179: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:14:29 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/704317858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.757 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.782 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.782 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.802 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.834 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 09:14:29 compute-0 nova_compute[260935]: 2025-10-11 09:14:29.951 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:14:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:14:30 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2134330022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:14:30 compute-0 nova_compute[260935]: 2025-10-11 09:14:30.434 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:14:30 compute-0 nova_compute[260935]: 2025-10-11 09:14:30.444 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:14:30 compute-0 nova_compute[260935]: 2025-10-11 09:14:30.471 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:14:30 compute-0 nova_compute[260935]: 2025-10-11 09:14:30.544 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:14:30 compute-0 nova_compute[260935]: 2025-10-11 09:14:30.545 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:30 compute-0 nova_compute[260935]: 2025-10-11 09:14:30.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:14:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:30.705 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:14:30 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2134330022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:14:31 compute-0 nova_compute[260935]: 2025-10-11 09:14:31.547 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:14:31 compute-0 nova_compute[260935]: 2025-10-11 09:14:31.548 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:14:31 compute-0 ceph-mon[74313]: pgmap v2180: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:14:31 compute-0 nova_compute[260935]: 2025-10-11 09:14:31.864 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:14:31 compute-0 nova_compute[260935]: 2025-10-11 09:14:31.864 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:14:31 compute-0 nova_compute[260935]: 2025-10-11 09:14:31.865 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:14:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:14:32 compute-0 nova_compute[260935]: 2025-10-11 09:14:32.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:32 compute-0 podman[375100]: 2025-10-11 09:14:32.792844879 +0000 UTC m=+0.090201848 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:14:33 compute-0 sshd-session[375098]: Invalid user oriol from 155.4.244.179 port 45872
Oct 11 09:14:33 compute-0 sshd-session[375098]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:14:33 compute-0 sshd-session[375098]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:14:33 compute-0 nova_compute[260935]: 2025-10-11 09:14:33.716 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:14:33 compute-0 nova_compute[260935]: 2025-10-11 09:14:33.743 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:14:33 compute-0 nova_compute[260935]: 2025-10-11 09:14:33.744 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:14:33 compute-0 nova_compute[260935]: 2025-10-11 09:14:33.745 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:14:33 compute-0 ceph-mon[74313]: pgmap v2181: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:14:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:14:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:14:35 compute-0 sshd-session[375098]: Failed password for invalid user oriol from 155.4.244.179 port 45872 ssh2
Oct 11 09:14:35 compute-0 nova_compute[260935]: 2025-10-11 09:14:35.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:35 compute-0 ceph-mon[74313]: pgmap v2182: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:14:36 compute-0 sshd-session[375098]: Received disconnect from 155.4.244.179 port 45872:11: Bye Bye [preauth]
Oct 11 09:14:36 compute-0 sshd-session[375098]: Disconnected from invalid user oriol 155.4.244.179 port 45872 [preauth]
Oct 11 09:14:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2183: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:14:37 compute-0 nova_compute[260935]: 2025-10-11 09:14:37.677 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:37 compute-0 nova_compute[260935]: 2025-10-11 09:14:37.678 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:37 compute-0 nova_compute[260935]: 2025-10-11 09:14:37.750 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:14:37 compute-0 nova_compute[260935]: 2025-10-11 09:14:37.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:37 compute-0 ceph-mon[74313]: pgmap v2183: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:14:37 compute-0 podman[375120]: 2025-10-11 09:14:37.86714685 +0000 UTC m=+0.160831003 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 09:14:37 compute-0 nova_compute[260935]: 2025-10-11 09:14:37.879 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:37 compute-0 nova_compute[260935]: 2025-10-11 09:14:37.879 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:37 compute-0 podman[375121]: 2025-10-11 09:14:37.884191752 +0000 UTC m=+0.174113961 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 11 09:14:37 compute-0 nova_compute[260935]: 2025-10-11 09:14:37.887 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:14:37 compute-0 nova_compute[260935]: 2025-10-11 09:14:37.887 2 INFO nova.compute.claims [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:14:38 compute-0 nova_compute[260935]: 2025-10-11 09:14:38.287 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:14:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:14:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:14:38 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3791196006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:14:38 compute-0 nova_compute[260935]: 2025-10-11 09:14:38.764 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:14:38 compute-0 nova_compute[260935]: 2025-10-11 09:14:38.773 2 DEBUG nova.compute.provider_tree [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:14:38 compute-0 nova_compute[260935]: 2025-10-11 09:14:38.799 2 DEBUG nova.scheduler.client.report [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:14:38 compute-0 nova_compute[260935]: 2025-10-11 09:14:38.839 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:38 compute-0 nova_compute[260935]: 2025-10-11 09:14:38.840 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:14:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3791196006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:14:38 compute-0 nova_compute[260935]: 2025-10-11 09:14:38.904 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:14:38 compute-0 nova_compute[260935]: 2025-10-11 09:14:38.904 2 DEBUG nova.network.neutron [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:14:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:14:38 compute-0 nova_compute[260935]: 2025-10-11 09:14:38.923 2 INFO nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:14:38 compute-0 nova_compute[260935]: 2025-10-11 09:14:38.948 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.076 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.078 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.079 2 INFO nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Creating image(s)
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.113 2 DEBUG nova.storage.rbd_utils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.145 2 DEBUG nova.storage.rbd_utils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.178 2 DEBUG nova.storage.rbd_utils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.182 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.230 2 DEBUG nova.policy [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a213c3877fc144a3af0be3c3d853f999', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca4b15770e784f45910b630937562cb6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.280 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.281 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.282 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.283 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.317 2 DEBUG nova.storage.rbd_utils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.321 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.621 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.712 2 DEBUG nova.storage.rbd_utils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] resizing rbd image dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.843 2 DEBUG nova.objects.instance [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'migration_context' on Instance uuid dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:14:39 compute-0 ceph-mon[74313]: pgmap v2184: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.872 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.872 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Ensure instance console log exists: /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.873 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.874 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:39 compute-0 nova_compute[260935]: 2025-10-11 09:14:39.875 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:40 compute-0 nova_compute[260935]: 2025-10-11 09:14:40.291 2 DEBUG nova.network.neutron [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Successfully created port: ef5b5080-0657-4874-a32f-f79c370fbb6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:14:40 compute-0 nova_compute[260935]: 2025-10-11 09:14:40.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:14:41 compute-0 nova_compute[260935]: 2025-10-11 09:14:41.213 2 DEBUG nova.network.neutron [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Successfully updated port: ef5b5080-0657-4874-a32f-f79c370fbb6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:14:41 compute-0 nova_compute[260935]: 2025-10-11 09:14:41.230 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:14:41 compute-0 nova_compute[260935]: 2025-10-11 09:14:41.231 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquired lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:14:41 compute-0 nova_compute[260935]: 2025-10-11 09:14:41.231 2 DEBUG nova.network.neutron [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:14:41 compute-0 nova_compute[260935]: 2025-10-11 09:14:41.355 2 DEBUG nova.compute.manager [req-d802011d-7444-4195-8808-4e802c2bdcd5 req-3cb1222f-4264-4034-b06b-f7c161d7d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-changed-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:14:41 compute-0 nova_compute[260935]: 2025-10-11 09:14:41.356 2 DEBUG nova.compute.manager [req-d802011d-7444-4195-8808-4e802c2bdcd5 req-3cb1222f-4264-4034-b06b-f7c161d7d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Refreshing instance network info cache due to event network-changed-ef5b5080-0657-4874-a32f-f79c370fbb6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:14:41 compute-0 nova_compute[260935]: 2025-10-11 09:14:41.357 2 DEBUG oslo_concurrency.lockutils [req-d802011d-7444-4195-8808-4e802c2bdcd5 req-3cb1222f-4264-4034-b06b-f7c161d7d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:14:41 compute-0 nova_compute[260935]: 2025-10-11 09:14:41.731 2 DEBUG nova.network.neutron [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:14:41 compute-0 ceph-mon[74313]: pgmap v2185: 321 pgs: 321 active+clean; 328 MiB data, 899 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:14:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 358 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 1.5 MiB/s wr, 14 op/s
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.781 2 DEBUG nova.network.neutron [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updating instance_info_cache with network_info: [{"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.808 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Releasing lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.809 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Instance network_info: |[{"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.810 2 DEBUG oslo_concurrency.lockutils [req-d802011d-7444-4195-8808-4e802c2bdcd5 req-3cb1222f-4264-4034-b06b-f7c161d7d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.810 2 DEBUG nova.network.neutron [req-d802011d-7444-4195-8808-4e802c2bdcd5 req-3cb1222f-4264-4034-b06b-f7c161d7d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Refreshing network info cache for port ef5b5080-0657-4874-a32f-f79c370fbb6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.815 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Start _get_guest_xml network_info=[{"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.823 2 WARNING nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.831 2 DEBUG nova.virt.libvirt.host [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.832 2 DEBUG nova.virt.libvirt.host [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.840 2 DEBUG nova.virt.libvirt.host [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.841 2 DEBUG nova.virt.libvirt.host [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.842 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.842 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.843 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.843 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.844 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.844 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.845 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.845 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.846 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.846 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.847 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.847 2 DEBUG nova.virt.hardware [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:14:42 compute-0 nova_compute[260935]: 2025-10-11 09:14:42.852 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:14:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:14:43 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3489057043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.367 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.404 2 DEBUG nova.storage.rbd_utils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.410 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:14:43 compute-0 ceph-mon[74313]: pgmap v2186: 321 pgs: 321 active+clean; 358 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 1.5 MiB/s wr, 14 op/s
Oct 11 09:14:43 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3489057043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:14:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:14:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:14:43 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2814180833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.950 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.952 2 DEBUG nova.virt.libvirt.vif [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:14:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1538313355',display_name='tempest-TestNetworkAdvancedServerOps-server-1538313355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1538313355',id=107,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIm35lVc/qb0x6y/uUYwsQgGwxPTQjIAOVresR65zlGyxGEfXr+JB+WnXq5JHTOUB0BIvi8UrAYCHxEhMebttYysUf8+o2zlwFp3FSTpdihQDj9CLBgCJ2HNvxQ7hPPL2g==',key_name='tempest-TestNetworkAdvancedServerOps-1022296660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-nhg7hkl0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:14:39Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=dd4f6627-2bd4-4ad6-8e1e-c5bec1228096,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.952 2 DEBUG nova.network.os_vif_util [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.953 2 DEBUG nova.network.os_vif_util [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.954 2 DEBUG nova.objects.instance [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.989 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:14:43 compute-0 nova_compute[260935]:   <uuid>dd4f6627-2bd4-4ad6-8e1e-c5bec1228096</uuid>
Oct 11 09:14:43 compute-0 nova_compute[260935]:   <name>instance-0000006b</name>
Oct 11 09:14:43 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:14:43 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:14:43 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1538313355</nova:name>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:14:42</nova:creationTime>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:14:43 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:14:43 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:14:43 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:14:43 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:14:43 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:14:43 compute-0 nova_compute[260935]:         <nova:user uuid="a213c3877fc144a3af0be3c3d853f999">tempest-TestNetworkAdvancedServerOps-1304559157-project-member</nova:user>
Oct 11 09:14:43 compute-0 nova_compute[260935]:         <nova:project uuid="ca4b15770e784f45910b630937562cb6">tempest-TestNetworkAdvancedServerOps-1304559157</nova:project>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:14:43 compute-0 nova_compute[260935]:         <nova:port uuid="ef5b5080-0657-4874-a32f-f79c370fbb6f">
Oct 11 09:14:43 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:14:43 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:14:43 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <system>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <entry name="serial">dd4f6627-2bd4-4ad6-8e1e-c5bec1228096</entry>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <entry name="uuid">dd4f6627-2bd4-4ad6-8e1e-c5bec1228096</entry>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     </system>
Oct 11 09:14:43 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:14:43 compute-0 nova_compute[260935]:   <os>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:   </os>
Oct 11 09:14:43 compute-0 nova_compute[260935]:   <features>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:   </features>
Oct 11 09:14:43 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:14:43 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:14:43 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk">
Oct 11 09:14:43 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       </source>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:14:43 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk.config">
Oct 11 09:14:43 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       </source>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:14:43 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:73:4b:2f"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <target dev="tapef5b5080-06"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096/console.log" append="off"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <video>
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     </video>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:14:43 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:14:43 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:14:43 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:14:43 compute-0 nova_compute[260935]: </domain>
Oct 11 09:14:43 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.992 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Preparing to wait for external event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.993 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.993 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.994 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.995 2 DEBUG nova.virt.libvirt.vif [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:14:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1538313355',display_name='tempest-TestNetworkAdvancedServerOps-server-1538313355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1538313355',id=107,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIm35lVc/qb0x6y/uUYwsQgGwxPTQjIAOVresR65zlGyxGEfXr+JB+WnXq5JHTOUB0BIvi8UrAYCHxEhMebttYysUf8+o2zlwFp3FSTpdihQDj9CLBgCJ2HNvxQ7hPPL2g==',key_name='tempest-TestNetworkAdvancedServerOps-1022296660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-nhg7hkl0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:14:39Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=dd4f6627-2bd4-4ad6-8e1e-c5bec1228096,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.996 2 DEBUG nova.network.os_vif_util [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.997 2 DEBUG nova.network.os_vif_util [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.998 2 DEBUG os_vif [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:14:43 compute-0 nova_compute[260935]: 2025-10-11 09:14:43.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.000 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.001 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.006 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef5b5080-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.007 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapef5b5080-06, col_values=(('external_ids', {'iface-id': 'ef5b5080-0657-4874-a32f-f79c370fbb6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:4b:2f', 'vm-uuid': 'dd4f6627-2bd4-4ad6-8e1e-c5bec1228096'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:44 compute-0 NetworkManager[44960]: <info>  [1760174084.0116] manager: (tapef5b5080-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/423)
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.016 2 INFO os_vif [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06')
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.085 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.085 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.086 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] No VIF found with MAC fa:16:3e:73:4b:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.086 2 INFO nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Using config drive
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.119 2 DEBUG nova.storage.rbd_utils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.428 2 DEBUG nova.network.neutron [req-d802011d-7444-4195-8808-4e802c2bdcd5 req-3cb1222f-4264-4034-b06b-f7c161d7d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updated VIF entry in instance network info cache for port ef5b5080-0657-4874-a32f-f79c370fbb6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.429 2 DEBUG nova.network.neutron [req-d802011d-7444-4195-8808-4e802c2bdcd5 req-3cb1222f-4264-4034-b06b-f7c161d7d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updating instance_info_cache with network_info: [{"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.452 2 DEBUG oslo_concurrency.lockutils [req-d802011d-7444-4195-8808-4e802c2bdcd5 req-3cb1222f-4264-4034-b06b-f7c161d7d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:14:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.610 2 INFO nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Creating config drive at /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096/disk.config
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.617 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx9os0ois execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.785 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx9os0ois" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.824 2 DEBUG nova.storage.rbd_utils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] rbd image dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.829 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096/disk.config dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:14:44 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2814180833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.988 2 DEBUG oslo_concurrency.processutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096/disk.config dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:14:44 compute-0 nova_compute[260935]: 2025-10-11 09:14:44.989 2 INFO nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Deleting local config drive /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096/disk.config because it was imported into RBD.
Oct 11 09:14:45 compute-0 kernel: tapef5b5080-06: entered promiscuous mode
Oct 11 09:14:45 compute-0 NetworkManager[44960]: <info>  [1760174085.0677] manager: (tapef5b5080-06): new Tun device (/org/freedesktop/NetworkManager/Devices/424)
Oct 11 09:14:45 compute-0 ovn_controller[152945]: 2025-10-11T09:14:45Z|01028|binding|INFO|Claiming lport ef5b5080-0657-4874-a32f-f79c370fbb6f for this chassis.
Oct 11 09:14:45 compute-0 ovn_controller[152945]: 2025-10-11T09:14:45Z|01029|binding|INFO|ef5b5080-0657-4874-a32f-f79c370fbb6f: Claiming fa:16:3e:73:4b:2f 10.100.0.4
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.092 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:4b:2f 10.100.0.4'], port_security=['fa:16:3e:73:4b:2f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dd4f6627-2bd4-4ad6-8e1e-c5bec1228096', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '69b43b9b-1747-4199-b611-82604b25c02c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a61dfb2-d260-491a-87d9-80daad2fd54f, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ef5b5080-0657-4874-a32f-f79c370fbb6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.093 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ef5b5080-0657-4874-a32f-f79c370fbb6f in datapath 4b2dcf8f-2d20-4207-8c49-d03814ca4c89 bound to our chassis
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.095 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4b2dcf8f-2d20-4207-8c49-d03814ca4c89
Oct 11 09:14:45 compute-0 systemd-udevd[375489]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.112 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[059f12fc-b34b-4db0-ad09-42ca38e5c3ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.114 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4b2dcf8f-21 in ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.117 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4b2dcf8f-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.117 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb6dd39-6ac6-4647-9e29-4eaecf27f3e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.119 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1c13d017-e9a7-472e-8e90-763edb2df2bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:45 compute-0 NetworkManager[44960]: <info>  [1760174085.1266] device (tapef5b5080-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:14:45 compute-0 NetworkManager[44960]: <info>  [1760174085.1282] device (tapef5b5080-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.133 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0d5390-5573-48af-8373-17ff87a30372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:45 compute-0 systemd-machined[215705]: New machine qemu-129-instance-0000006b.
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:45 compute-0 ovn_controller[152945]: 2025-10-11T09:14:45Z|01030|binding|INFO|Setting lport ef5b5080-0657-4874-a32f-f79c370fbb6f ovn-installed in OVS
Oct 11 09:14:45 compute-0 ovn_controller[152945]: 2025-10-11T09:14:45Z|01031|binding|INFO|Setting lport ef5b5080-0657-4874-a32f-f79c370fbb6f up in Southbound
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:45 compute-0 systemd[1]: Started Virtual Machine qemu-129-instance-0000006b.
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.168 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0e63af39-f7a6-4180-b7ba-cc8370e9d13a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.203 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1e996c4e-adcd-4836-bacf-968017459999]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.207 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf51311-fff6-48d2-aa78-09227ec611f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:45 compute-0 systemd-udevd[375496]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:14:45 compute-0 NetworkManager[44960]: <info>  [1760174085.2088] manager: (tap4b2dcf8f-20): new Veth device (/org/freedesktop/NetworkManager/Devices/425)
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.238 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fcedf2d3-0da2-449d-856e-5c6ae0416643]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:45 compute-0 unix_chkpwd[375522]: password check failed for user (root)
Oct 11 09:14:45 compute-0 sshd-session[375440]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.241 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[08525a40-37dc-4559-82ee-1d481c398cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:45 compute-0 NetworkManager[44960]: <info>  [1760174085.2599] device (tap4b2dcf8f-20): carrier: link connected
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.268 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6d0fac29-210e-452a-8f3d-e0db9759e3ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.288 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ba751bc8-73a1-4e82-bac2-6b02d789f2d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b2dcf8f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:48:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592996, 'reachable_time': 36712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375526, 'error': None, 'target': 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.307 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d85f8659-2657-4937-a565-b4ab9e1968e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:4874'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592996, 'tstamp': 592996}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375527, 'error': None, 'target': 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.333 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[469f5481-b2b7-45ea-bd4a-ff59c2f1c023]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b2dcf8f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:48:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592996, 'reachable_time': 36712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375528, 'error': None, 'target': 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.388 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0a4a32-b956-4c93-8b17-73a1e8605b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.491 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[80eae5aa-9dd1-4155-935b-8a4b18f67fa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.493 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b2dcf8f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.493 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.494 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b2dcf8f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:14:45 compute-0 NetworkManager[44960]: <info>  [1760174085.5341] manager: (tap4b2dcf8f-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:45 compute-0 kernel: tap4b2dcf8f-20: entered promiscuous mode
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.538 2 DEBUG nova.compute.manager [req-3d4257d0-3299-4939-8036-707dd7743df8 req-5c524de4-7e65-45f2-be75-1904211e34b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.538 2 DEBUG oslo_concurrency.lockutils [req-3d4257d0-3299-4939-8036-707dd7743df8 req-5c524de4-7e65-45f2-be75-1904211e34b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.539 2 DEBUG oslo_concurrency.lockutils [req-3d4257d0-3299-4939-8036-707dd7743df8 req-5c524de4-7e65-45f2-be75-1904211e34b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.539 2 DEBUG oslo_concurrency.lockutils [req-3d4257d0-3299-4939-8036-707dd7743df8 req-5c524de4-7e65-45f2-be75-1904211e34b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.539 2 DEBUG nova.compute.manager [req-3d4257d0-3299-4939-8036-707dd7743df8 req-5c524de4-7e65-45f2-be75-1904211e34b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Processing event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.541 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4b2dcf8f-20, col_values=(('external_ids', {'iface-id': '0c6a8cda-d5e3-42f3-82f4-69be33ec4ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:45 compute-0 ovn_controller[152945]: 2025-10-11T09:14:45Z|01032|binding|INFO|Releasing lport 0c6a8cda-d5e3-42f3-82f4-69be33ec4ca6 from this chassis (sb_readonly=0)
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.575 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4b2dcf8f-2d20-4207-8c49-d03814ca4c89.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4b2dcf8f-2d20-4207-8c49-d03814ca4c89.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.576 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b5006f02-218b-449b-a901-aa8ad0621860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.578 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-4b2dcf8f-2d20-4207-8c49-d03814ca4c89
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/4b2dcf8f-2d20-4207-8c49-d03814ca4c89.pid.haproxy
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 4b2dcf8f-2d20-4207-8c49-d03814ca4c89
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:14:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:45.581 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'env', 'PROCESS_TAG=haproxy-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4b2dcf8f-2d20-4207-8c49-d03814ca4c89.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:14:45 compute-0 ceph-mon[74313]: pgmap v2187: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.978 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174085.9776804, dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.980 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] VM Started (Lifecycle Event)
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.984 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.989 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.995 2 INFO nova.virt.libvirt.driver [-] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Instance spawned successfully.
Oct 11 09:14:45 compute-0 nova_compute[260935]: 2025-10-11 09:14:45.995 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.015 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.026 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:14:46 compute-0 podman[375602]: 2025-10-11 09:14:46.028791637 +0000 UTC m=+0.086066483 container create 9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.034 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.035 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.035 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.036 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.037 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.038 2 DEBUG nova.virt.libvirt.driver [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.063 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.063 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174085.978961, dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.063 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] VM Paused (Lifecycle Event)
Oct 11 09:14:46 compute-0 podman[375602]: 2025-10-11 09:14:45.984325066 +0000 UTC m=+0.041599992 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:14:46 compute-0 systemd[1]: Started libpod-conmon-9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9.scope.
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.090 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.098 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174085.9867253, dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.099 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] VM Resumed (Lifecycle Event)
Oct 11 09:14:46 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.132 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93c774767af60c322abb2e50f9ecf5a65e5ec0268f3808529aa2cf4be24924c9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.137 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.142 2 INFO nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Took 7.06 seconds to spawn the instance on the hypervisor.
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.142 2 DEBUG nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:14:46 compute-0 podman[375602]: 2025-10-11 09:14:46.167890947 +0000 UTC m=+0.225165873 container init 9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:14:46 compute-0 podman[375602]: 2025-10-11 09:14:46.178197402 +0000 UTC m=+0.235472278 container start 9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:14:46 compute-0 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[375618]: [NOTICE]   (375622) : New worker (375624) forked
Oct 11 09:14:46 compute-0 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[375618]: [NOTICE]   (375622) : Loading success.
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.371 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.429 2 INFO nova.compute.manager [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Took 8.63 seconds to build instance.
Oct 11 09:14:46 compute-0 nova_compute[260935]: 2025-10-11 09:14:46.447 2 DEBUG oslo_concurrency.lockutils [None req-30fb0cf0-66ad-425d-9ca1-638b26026e31 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:14:47 compute-0 sshd-session[375440]: Failed password for root from 165.232.82.252 port 53768 ssh2
Oct 11 09:14:47 compute-0 nova_compute[260935]: 2025-10-11 09:14:47.605 2 DEBUG nova.compute.manager [req-a02ede09-7bc9-41a7-a404-46193785a408 req-f4175bac-607a-4989-84a8-b3cd3eae1b57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:14:47 compute-0 nova_compute[260935]: 2025-10-11 09:14:47.605 2 DEBUG oslo_concurrency.lockutils [req-a02ede09-7bc9-41a7-a404-46193785a408 req-f4175bac-607a-4989-84a8-b3cd3eae1b57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:14:47 compute-0 nova_compute[260935]: 2025-10-11 09:14:47.606 2 DEBUG oslo_concurrency.lockutils [req-a02ede09-7bc9-41a7-a404-46193785a408 req-f4175bac-607a-4989-84a8-b3cd3eae1b57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:14:47 compute-0 nova_compute[260935]: 2025-10-11 09:14:47.606 2 DEBUG oslo_concurrency.lockutils [req-a02ede09-7bc9-41a7-a404-46193785a408 req-f4175bac-607a-4989-84a8-b3cd3eae1b57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:14:47 compute-0 nova_compute[260935]: 2025-10-11 09:14:47.606 2 DEBUG nova.compute.manager [req-a02ede09-7bc9-41a7-a404-46193785a408 req-f4175bac-607a-4989-84a8-b3cd3eae1b57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] No waiting events found dispatching network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:14:47 compute-0 nova_compute[260935]: 2025-10-11 09:14:47.607 2 WARNING nova.compute.manager [req-a02ede09-7bc9-41a7-a404-46193785a408 req-f4175bac-607a-4989-84a8-b3cd3eae1b57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received unexpected event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f for instance with vm_state active and task_state None.
Oct 11 09:14:47 compute-0 ceph-mon[74313]: pgmap v2188: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:14:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 11 09:14:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:14:49 compute-0 nova_compute[260935]: 2025-10-11 09:14:49.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:49 compute-0 sshd-session[375440]: Connection closed by authenticating user root 165.232.82.252 port 53768 [preauth]
Oct 11 09:14:49 compute-0 ceph-mon[74313]: pgmap v2189: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 11 09:14:50 compute-0 ovn_controller[152945]: 2025-10-11T09:14:50Z|01033|binding|INFO|Releasing lport 0c6a8cda-d5e3-42f3-82f4-69be33ec4ca6 from this chassis (sb_readonly=0)
Oct 11 09:14:50 compute-0 ovn_controller[152945]: 2025-10-11T09:14:50Z|01034|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:14:50 compute-0 ovn_controller[152945]: 2025-10-11T09:14:50Z|01035|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:14:50 compute-0 NetworkManager[44960]: <info>  [1760174090.2440] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Oct 11 09:14:50 compute-0 NetworkManager[44960]: <info>  [1760174090.2450] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Oct 11 09:14:50 compute-0 nova_compute[260935]: 2025-10-11 09:14:50.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:50 compute-0 ovn_controller[152945]: 2025-10-11T09:14:50Z|01036|binding|INFO|Releasing lport 0c6a8cda-d5e3-42f3-82f4-69be33ec4ca6 from this chassis (sb_readonly=0)
Oct 11 09:14:50 compute-0 nova_compute[260935]: 2025-10-11 09:14:50.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:50 compute-0 ovn_controller[152945]: 2025-10-11T09:14:50Z|01037|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:14:50 compute-0 ovn_controller[152945]: 2025-10-11T09:14:50Z|01038|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:14:50 compute-0 nova_compute[260935]: 2025-10-11 09:14:50.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 11 09:14:50 compute-0 nova_compute[260935]: 2025-10-11 09:14:50.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:50 compute-0 nova_compute[260935]: 2025-10-11 09:14:50.948 2 DEBUG nova.compute.manager [req-29adb0d9-b49d-4894-abe2-e79324eba3ae req-f336c83e-f12b-424d-b696-69a9e8ac1f8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-changed-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:14:50 compute-0 nova_compute[260935]: 2025-10-11 09:14:50.948 2 DEBUG nova.compute.manager [req-29adb0d9-b49d-4894-abe2-e79324eba3ae req-f336c83e-f12b-424d-b696-69a9e8ac1f8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Refreshing instance network info cache due to event network-changed-ef5b5080-0657-4874-a32f-f79c370fbb6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:14:50 compute-0 nova_compute[260935]: 2025-10-11 09:14:50.949 2 DEBUG oslo_concurrency.lockutils [req-29adb0d9-b49d-4894-abe2-e79324eba3ae req-f336c83e-f12b-424d-b696-69a9e8ac1f8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:14:50 compute-0 nova_compute[260935]: 2025-10-11 09:14:50.949 2 DEBUG oslo_concurrency.lockutils [req-29adb0d9-b49d-4894-abe2-e79324eba3ae req-f336c83e-f12b-424d-b696-69a9e8ac1f8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:14:50 compute-0 nova_compute[260935]: 2025-10-11 09:14:50.950 2 DEBUG nova.network.neutron [req-29adb0d9-b49d-4894-abe2-e79324eba3ae req-f336c83e-f12b-424d-b696-69a9e8ac1f8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Refreshing network info cache for port ef5b5080-0657-4874-a32f-f79c370fbb6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:14:50 compute-0 ceph-mon[74313]: pgmap v2190: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 11 09:14:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:14:53 compute-0 ceph-mon[74313]: pgmap v2191: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:14:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:14:53 compute-0 sshd-session[375635]: Invalid user public from 152.32.213.170 port 39184
Oct 11 09:14:53 compute-0 sshd-session[375635]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:14:53 compute-0 sshd-session[375635]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:14:54 compute-0 nova_compute[260935]: 2025-10-11 09:14:54.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:54 compute-0 nova_compute[260935]: 2025-10-11 09:14:54.113 2 DEBUG nova.network.neutron [req-29adb0d9-b49d-4894-abe2-e79324eba3ae req-f336c83e-f12b-424d-b696-69a9e8ac1f8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updated VIF entry in instance network info cache for port ef5b5080-0657-4874-a32f-f79c370fbb6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:14:54 compute-0 nova_compute[260935]: 2025-10-11 09:14:54.113 2 DEBUG nova.network.neutron [req-29adb0d9-b49d-4894-abe2-e79324eba3ae req-f336c83e-f12b-424d-b696-69a9e8ac1f8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updating instance_info_cache with network_info: [{"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:14:54 compute-0 nova_compute[260935]: 2025-10-11 09:14:54.138 2 DEBUG oslo_concurrency.lockutils [req-29adb0d9-b49d-4894-abe2-e79324eba3ae req-f336c83e-f12b-424d-b696-69a9e8ac1f8b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:14:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 292 KiB/s wr, 86 op/s
Oct 11 09:14:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:14:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:14:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:14:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:14:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:14:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:14:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:14:54
Oct 11 09:14:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:14:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:14:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'images', 'vms', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes']
Oct 11 09:14:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:14:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:14:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:14:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:14:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:14:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:14:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:14:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:14:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:14:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:14:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:14:55 compute-0 nova_compute[260935]: 2025-10-11 09:14:55.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:55 compute-0 ceph-mon[74313]: pgmap v2192: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 292 KiB/s wr, 86 op/s
Oct 11 09:14:55 compute-0 sshd-session[375635]: Failed password for invalid user public from 152.32.213.170 port 39184 ssh2
Oct 11 09:14:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:56.488 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:7e 2001:db8:0:1:f816:3eff:fe6f:967e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=237c95b9-58c9-4db6-9d3a-13740240295d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2a00a6d8-ae77-4ece-ac60-480f6b95d6a8) old=Port_Binding(mac=['fa:16:3e:6f:96:7e 2001:db8::f816:3eff:fe6f:967e'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6f:967e/64', 'neutron:device_id': 'ovnmeta-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64420f4d-abed-43af-8d6f-2156706c7174', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b9fef5c3096478daba4f4d85fddf9e9', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:14:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:56.490 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2a00a6d8-ae77-4ece-ac60-480f6b95d6a8 in datapath 64420f4d-abed-43af-8d6f-2156706c7174 updated
Oct 11 09:14:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:56.492 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64420f4d-abed-43af-8d6f-2156706c7174, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:14:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:14:56.494 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa6f894-e56c-492a-ab5c-045821fad8d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:14:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:14:56 compute-0 podman[375637]: 2025-10-11 09:14:56.810463772 +0000 UTC m=+0.091521134 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:14:57 compute-0 sshd-session[375635]: Received disconnect from 152.32.213.170 port 39184:11: Bye Bye [preauth]
Oct 11 09:14:57 compute-0 sshd-session[375635]: Disconnected from invalid user public 152.32.213.170 port 39184 [preauth]
Oct 11 09:14:57 compute-0 ovn_controller[152945]: 2025-10-11T09:14:57Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:73:4b:2f 10.100.0.4
Oct 11 09:14:57 compute-0 ovn_controller[152945]: 2025-10-11T09:14:57Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:73:4b:2f 10.100.0.4
Oct 11 09:14:57 compute-0 ceph-mon[74313]: pgmap v2193: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:14:58 compute-0 sudo[375657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:14:58 compute-0 sudo[375657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:14:58 compute-0 sudo[375657]: pam_unix(sudo:session): session closed for user root
Oct 11 09:14:58 compute-0 sudo[375682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:14:58 compute-0 sudo[375682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:14:58 compute-0 sudo[375682]: pam_unix(sudo:session): session closed for user root
Oct 11 09:14:58 compute-0 sudo[375707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:14:58 compute-0 sudo[375707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:14:58 compute-0 sudo[375707]: pam_unix(sudo:session): session closed for user root
Oct 11 09:14:58 compute-0 sudo[375732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:14:58 compute-0 sudo[375732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:14:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 400 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 11 09:14:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:14:59 compute-0 nova_compute[260935]: 2025-10-11 09:14:59.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:14:59 compute-0 sudo[375732]: pam_unix(sudo:session): session closed for user root
Oct 11 09:14:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 09:14:59 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 09:14:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:14:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:14:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:14:59 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:14:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:14:59 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:14:59 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 0c52ef51-3194-4ac1-af8f-80b2fbd174b5 does not exist
Oct 11 09:14:59 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 87be00fa-e806-47f4-8fb4-7e80c85e9cb4 does not exist
Oct 11 09:14:59 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a241172c-b74e-4068-b0e2-2c41b0c77ce2 does not exist
Oct 11 09:14:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:14:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:14:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:14:59 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:14:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:14:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:14:59 compute-0 sudo[375789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:14:59 compute-0 sudo[375789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:14:59 compute-0 sudo[375789]: pam_unix(sudo:session): session closed for user root
Oct 11 09:14:59 compute-0 sudo[375814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:14:59 compute-0 sudo[375814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:14:59 compute-0 sudo[375814]: pam_unix(sudo:session): session closed for user root
Oct 11 09:14:59 compute-0 sudo[375839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:14:59 compute-0 sudo[375839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:14:59 compute-0 sudo[375839]: pam_unix(sudo:session): session closed for user root
Oct 11 09:14:59 compute-0 sudo[375864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:14:59 compute-0 sudo[375864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:14:59 compute-0 ceph-mon[74313]: pgmap v2194: 321 pgs: 321 active+clean; 400 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 11 09:14:59 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 09:14:59 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:14:59 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:14:59 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:14:59 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:14:59 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:14:59 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:14:59 compute-0 podman[375930]: 2025-10-11 09:14:59.940666223 +0000 UTC m=+0.064571828 container create 0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhaskara, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:14:59 compute-0 systemd[1]: Started libpod-conmon-0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8.scope.
Oct 11 09:15:00 compute-0 podman[375930]: 2025-10-11 09:14:59.914997753 +0000 UTC m=+0.038903388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:15:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:15:00 compute-0 podman[375930]: 2025-10-11 09:15:00.05289929 +0000 UTC m=+0.176804915 container init 0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 09:15:00 compute-0 podman[375930]: 2025-10-11 09:15:00.062529926 +0000 UTC m=+0.186435541 container start 0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct 11 09:15:00 compute-0 podman[375930]: 2025-10-11 09:15:00.06662393 +0000 UTC m=+0.190529555 container attach 0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 09:15:00 compute-0 goofy_bhaskara[375947]: 167 167
Oct 11 09:15:00 compute-0 systemd[1]: libpod-0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8.scope: Deactivated successfully.
Oct 11 09:15:00 compute-0 podman[375930]: 2025-10-11 09:15:00.070022934 +0000 UTC m=+0.193928559 container died 0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 09:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8c75588465589cdf7a6897c7d068254530256967745fc3bef4c89ace7b80eb7-merged.mount: Deactivated successfully.
Oct 11 09:15:00 compute-0 podman[375930]: 2025-10-11 09:15:00.121397366 +0000 UTC m=+0.245302981 container remove 0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 09:15:00 compute-0 systemd[1]: libpod-conmon-0a3b75c4235a2633d6dc57c5ee08a5be89cb7af45cbaa76442bd4f86969d91f8.scope: Deactivated successfully.
Oct 11 09:15:00 compute-0 podman[375974]: 2025-10-11 09:15:00.414038506 +0000 UTC m=+0.076695374 container create dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jennings, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 09:15:00 compute-0 podman[375974]: 2025-10-11 09:15:00.379472559 +0000 UTC m=+0.042129487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:15:00 compute-0 systemd[1]: Started libpod-conmon-dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa.scope.
Oct 11 09:15:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015f3917fcfc94d4f282fe6160309442617a172d5e1156bc5d4feda1b4d893ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015f3917fcfc94d4f282fe6160309442617a172d5e1156bc5d4feda1b4d893ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015f3917fcfc94d4f282fe6160309442617a172d5e1156bc5d4feda1b4d893ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015f3917fcfc94d4f282fe6160309442617a172d5e1156bc5d4feda1b4d893ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015f3917fcfc94d4f282fe6160309442617a172d5e1156bc5d4feda1b4d893ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:15:00 compute-0 podman[375974]: 2025-10-11 09:15:00.560754636 +0000 UTC m=+0.223411504 container init dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jennings, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:15:00 compute-0 podman[375974]: 2025-10-11 09:15:00.575980627 +0000 UTC m=+0.238637505 container start dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jennings, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:15:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 400 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:15:00 compute-0 podman[375974]: 2025-10-11 09:15:00.580312457 +0000 UTC m=+0.242969325 container attach dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 09:15:00 compute-0 nova_compute[260935]: 2025-10-11 09:15:00.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:01 compute-0 stupefied_jennings[375991]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:15:01 compute-0 stupefied_jennings[375991]: --> relative data size: 1.0
Oct 11 09:15:01 compute-0 stupefied_jennings[375991]: --> All data devices are unavailable
Oct 11 09:15:01 compute-0 systemd[1]: libpod-dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa.scope: Deactivated successfully.
Oct 11 09:15:01 compute-0 podman[376020]: 2025-10-11 09:15:01.641291514 +0000 UTC m=+0.043917356 container died dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jennings, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 09:15:01 compute-0 ceph-mon[74313]: pgmap v2195: 321 pgs: 321 active+clean; 400 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:15:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-015f3917fcfc94d4f282fe6160309442617a172d5e1156bc5d4feda1b4d893ce-merged.mount: Deactivated successfully.
Oct 11 09:15:01 compute-0 podman[376020]: 2025-10-11 09:15:01.707099656 +0000 UTC m=+0.109725418 container remove dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jennings, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 09:15:01 compute-0 systemd[1]: libpod-conmon-dd621cd9b7a84ab1df23646b3dfa8813e9bfbb982075050cc9246c63bf4daaaa.scope: Deactivated successfully.
Oct 11 09:15:01 compute-0 sudo[375864]: pam_unix(sudo:session): session closed for user root
Oct 11 09:15:01 compute-0 sudo[376035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:15:01 compute-0 sudo[376035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:15:01 compute-0 sudo[376035]: pam_unix(sudo:session): session closed for user root
Oct 11 09:15:01 compute-0 sudo[376060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:15:01 compute-0 sudo[376060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:15:01 compute-0 sudo[376060]: pam_unix(sudo:session): session closed for user root
Oct 11 09:15:02 compute-0 sudo[376085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:15:02 compute-0 sudo[376085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:15:02 compute-0 sudo[376085]: pam_unix(sudo:session): session closed for user root
Oct 11 09:15:02 compute-0 sudo[376110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:15:02 compute-0 sudo[376110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:15:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 616 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 11 09:15:02 compute-0 podman[376179]: 2025-10-11 09:15:02.649915522 +0000 UTC m=+0.071933442 container create 5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 09:15:02 compute-0 systemd[1]: Started libpod-conmon-5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8.scope.
Oct 11 09:15:02 compute-0 podman[376179]: 2025-10-11 09:15:02.621375402 +0000 UTC m=+0.043393362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:15:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:15:02 compute-0 podman[376179]: 2025-10-11 09:15:02.766706935 +0000 UTC m=+0.188724885 container init 5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:15:02 compute-0 podman[376179]: 2025-10-11 09:15:02.773506573 +0000 UTC m=+0.195524493 container start 5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 09:15:02 compute-0 podman[376179]: 2025-10-11 09:15:02.778040659 +0000 UTC m=+0.200058569 container attach 5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 09:15:02 compute-0 friendly_wozniak[376196]: 167 167
Oct 11 09:15:02 compute-0 systemd[1]: libpod-5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8.scope: Deactivated successfully.
Oct 11 09:15:02 compute-0 conmon[376196]: conmon 5d5fc40996ad7915ec90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8.scope/container/memory.events
Oct 11 09:15:02 compute-0 podman[376179]: 2025-10-11 09:15:02.784159798 +0000 UTC m=+0.206177678 container died 5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 09:15:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8bc64e3e1c74dcdfe3a5bf6d178453a24558646e27efdb8672768d03b2a317d-merged.mount: Deactivated successfully.
Oct 11 09:15:02 compute-0 podman[376179]: 2025-10-11 09:15:02.847267805 +0000 UTC m=+0.269285715 container remove 5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 09:15:02 compute-0 systemd[1]: libpod-conmon-5d5fc40996ad7915ec9004c88ad3f846eab7c5c36e8124a9fae472176aa332f8.scope: Deactivated successfully.
Oct 11 09:15:02 compute-0 podman[376213]: 2025-10-11 09:15:02.933305416 +0000 UTC m=+0.073409073 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 09:15:03 compute-0 podman[376240]: 2025-10-11 09:15:03.086471906 +0000 UTC m=+0.062418409 container create 2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 09:15:03 compute-0 systemd[1]: Started libpod-conmon-2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86.scope.
Oct 11 09:15:03 compute-0 podman[376240]: 2025-10-11 09:15:03.056537247 +0000 UTC m=+0.032483820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:15:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fd513a2427299e7f5e3b26367a3aa0700cbc4052f619c33affb7a8ddfc3f1f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fd513a2427299e7f5e3b26367a3aa0700cbc4052f619c33affb7a8ddfc3f1f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fd513a2427299e7f5e3b26367a3aa0700cbc4052f619c33affb7a8ddfc3f1f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fd513a2427299e7f5e3b26367a3aa0700cbc4052f619c33affb7a8ddfc3f1f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:15:03 compute-0 podman[376240]: 2025-10-11 09:15:03.201291624 +0000 UTC m=+0.177238197 container init 2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 09:15:03 compute-0 podman[376240]: 2025-10-11 09:15:03.214324485 +0000 UTC m=+0.190270998 container start 2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:15:03 compute-0 podman[376240]: 2025-10-11 09:15:03.220242008 +0000 UTC m=+0.196188531 container attach 2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:15:03 compute-0 ceph-mon[74313]: pgmap v2196: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 616 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 11 09:15:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:15:03 compute-0 priceless_clarke[376257]: {
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:     "0": [
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:         {
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "devices": [
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "/dev/loop3"
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             ],
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "lv_name": "ceph_lv0",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "lv_size": "21470642176",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "name": "ceph_lv0",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "tags": {
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.cluster_name": "ceph",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.crush_device_class": "",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.encrypted": "0",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.osd_id": "0",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.type": "block",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.vdo": "0"
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             },
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "type": "block",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "vg_name": "ceph_vg0"
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:         }
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:     ],
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:     "1": [
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:         {
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "devices": [
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "/dev/loop4"
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             ],
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "lv_name": "ceph_lv1",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "lv_size": "21470642176",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "name": "ceph_lv1",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "tags": {
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.cluster_name": "ceph",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.crush_device_class": "",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.encrypted": "0",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.osd_id": "1",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.type": "block",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.vdo": "0"
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             },
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "type": "block",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "vg_name": "ceph_vg1"
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:         }
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:     ],
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:     "2": [
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:         {
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "devices": [
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "/dev/loop5"
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             ],
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "lv_name": "ceph_lv2",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "lv_size": "21470642176",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "name": "ceph_lv2",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "tags": {
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.cluster_name": "ceph",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.crush_device_class": "",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.encrypted": "0",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.osd_id": "2",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.type": "block",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:                 "ceph.vdo": "0"
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             },
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "type": "block",
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:             "vg_name": "ceph_vg2"
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:         }
Oct 11 09:15:03 compute-0 priceless_clarke[376257]:     ]
Oct 11 09:15:03 compute-0 priceless_clarke[376257]: }
Oct 11 09:15:04 compute-0 systemd[1]: libpod-2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86.scope: Deactivated successfully.
Oct 11 09:15:04 compute-0 podman[376240]: 2025-10-11 09:15:04.002237133 +0000 UTC m=+0.978183596 container died 2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 09:15:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fd513a2427299e7f5e3b26367a3aa0700cbc4052f619c33affb7a8ddfc3f1f0-merged.mount: Deactivated successfully.
Oct 11 09:15:04 compute-0 nova_compute[260935]: 2025-10-11 09:15:04.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:04 compute-0 podman[376240]: 2025-10-11 09:15:04.070311958 +0000 UTC m=+1.046258471 container remove 2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 09:15:04 compute-0 systemd[1]: libpod-conmon-2c37a8f9d2a33cb46ae9366c9de7751840effc6e79278de92a0cf21abec62a86.scope: Deactivated successfully.
Oct 11 09:15:04 compute-0 sudo[376110]: pam_unix(sudo:session): session closed for user root
Oct 11 09:15:04 compute-0 sudo[376281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:15:04 compute-0 sudo[376281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:15:04 compute-0 sudo[376281]: pam_unix(sudo:session): session closed for user root
Oct 11 09:15:04 compute-0 sudo[376306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:15:04 compute-0 sudo[376306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:15:04 compute-0 sudo[376306]: pam_unix(sudo:session): session closed for user root
Oct 11 09:15:04 compute-0 sudo[376331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:15:04 compute-0 sudo[376331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:15:04 compute-0 sudo[376331]: pam_unix(sudo:session): session closed for user root
Oct 11 09:15:04 compute-0 sudo[376356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:15:04 compute-0 sudo[376356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:15:04 compute-0 nova_compute[260935]: 2025-10-11 09:15:04.497 2 INFO nova.compute.manager [None req-537c3019-fecd-4ec8-af26-cf075784eff4 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Get console output
Oct 11 09:15:04 compute-0 nova_compute[260935]: 2025-10-11 09:15:04.504 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:15:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:15:04 compute-0 podman[376421]: 2025-10-11 09:15:04.908279621 +0000 UTC m=+0.057534143 container create f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 09:15:04 compute-0 nova_compute[260935]: 2025-10-11 09:15:04.923 2 DEBUG nova.objects.instance [None req-2f012b15-b6c8-4501-89aa-611311630683 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:15:04 compute-0 nova_compute[260935]: 2025-10-11 09:15:04.951 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174104.951147, dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:15:04 compute-0 nova_compute[260935]: 2025-10-11 09:15:04.952 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] VM Paused (Lifecycle Event)
Oct 11 09:15:04 compute-0 systemd[1]: Started libpod-conmon-f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48.scope.
Oct 11 09:15:04 compute-0 podman[376421]: 2025-10-11 09:15:04.880803321 +0000 UTC m=+0.030057913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:15:04 compute-0 nova_compute[260935]: 2025-10-11 09:15:04.975 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:15:04 compute-0 nova_compute[260935]: 2025-10-11 09:15:04.984 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:15:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:15:05 compute-0 nova_compute[260935]: 2025-10-11 09:15:05.012 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 11 09:15:05 compute-0 podman[376421]: 2025-10-11 09:15:05.027377058 +0000 UTC m=+0.176631590 container init f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 11 09:15:05 compute-0 podman[376421]: 2025-10-11 09:15:05.039570695 +0000 UTC m=+0.188825227 container start f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:15:05 compute-0 podman[376421]: 2025-10-11 09:15:05.045090758 +0000 UTC m=+0.194345340 container attach f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:15:05 compute-0 reverent_clarke[376439]: 167 167
Oct 11 09:15:05 compute-0 systemd[1]: libpod-f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48.scope: Deactivated successfully.
Oct 11 09:15:05 compute-0 podman[376421]: 2025-10-11 09:15:05.051168376 +0000 UTC m=+0.200422908 container died f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033839795166699226 of space, bias 1.0, pg target 1.0151938550009767 quantized to 32 (current 32)
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:15:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:15:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fa93331277ca4f40f339565e6122fe3a57002e00e02aad7f56d1a5059d1f5dd-merged.mount: Deactivated successfully.
Oct 11 09:15:05 compute-0 podman[376421]: 2025-10-11 09:15:05.102117786 +0000 UTC m=+0.251372288 container remove f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:15:05 compute-0 systemd[1]: libpod-conmon-f8570dba1a8563122c08cd230341a142c2e600fe60749aecf979867105328c48.scope: Deactivated successfully.
Oct 11 09:15:05 compute-0 podman[376464]: 2025-10-11 09:15:05.366896545 +0000 UTC m=+0.056730621 container create 1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 09:15:05 compute-0 systemd[1]: Started libpod-conmon-1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c.scope.
Oct 11 09:15:05 compute-0 podman[376464]: 2025-10-11 09:15:05.339383654 +0000 UTC m=+0.029217800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:15:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59c468e57361f6447cf724a331b19adb8637614bec64ece422f15578f1ab270/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59c468e57361f6447cf724a331b19adb8637614bec64ece422f15578f1ab270/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59c468e57361f6447cf724a331b19adb8637614bec64ece422f15578f1ab270/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59c468e57361f6447cf724a331b19adb8637614bec64ece422f15578f1ab270/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:15:05 compute-0 podman[376464]: 2025-10-11 09:15:05.53363144 +0000 UTC m=+0.223465526 container init 1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 09:15:05 compute-0 podman[376464]: 2025-10-11 09:15:05.547158855 +0000 UTC m=+0.236992931 container start 1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 09:15:05 compute-0 podman[376464]: 2025-10-11 09:15:05.551274879 +0000 UTC m=+0.241108975 container attach 1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 09:15:05 compute-0 kernel: tapef5b5080-06 (unregistering): left promiscuous mode
Oct 11 09:15:05 compute-0 NetworkManager[44960]: <info>  [1760174105.5954] device (tapef5b5080-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:15:05 compute-0 ovn_controller[152945]: 2025-10-11T09:15:05Z|01039|binding|INFO|Releasing lport ef5b5080-0657-4874-a32f-f79c370fbb6f from this chassis (sb_readonly=0)
Oct 11 09:15:05 compute-0 nova_compute[260935]: 2025-10-11 09:15:05.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:05 compute-0 ovn_controller[152945]: 2025-10-11T09:15:05Z|01040|binding|INFO|Setting lport ef5b5080-0657-4874-a32f-f79c370fbb6f down in Southbound
Oct 11 09:15:05 compute-0 ovn_controller[152945]: 2025-10-11T09:15:05Z|01041|binding|INFO|Removing iface tapef5b5080-06 ovn-installed in OVS
Oct 11 09:15:05 compute-0 nova_compute[260935]: 2025-10-11 09:15:05.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:05.626 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:4b:2f 10.100.0.4'], port_security=['fa:16:3e:73:4b:2f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dd4f6627-2bd4-4ad6-8e1e-c5bec1228096', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '69b43b9b-1747-4199-b611-82604b25c02c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.242'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a61dfb2-d260-491a-87d9-80daad2fd54f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ef5b5080-0657-4874-a32f-f79c370fbb6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:15:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:05.628 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ef5b5080-0657-4874-a32f-f79c370fbb6f in datapath 4b2dcf8f-2d20-4207-8c49-d03814ca4c89 unbound from our chassis
Oct 11 09:15:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:05.632 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4b2dcf8f-2d20-4207-8c49-d03814ca4c89, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:15:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:05.634 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[145eafbb-65af-4782-881c-685e5ab47003]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:05.635 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 namespace which is not needed anymore
Oct 11 09:15:05 compute-0 nova_compute[260935]: 2025-10-11 09:15:05.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:05 compute-0 nova_compute[260935]: 2025-10-11 09:15:05.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:05 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Oct 11 09:15:05 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d0000006b.scope: Consumed 12.515s CPU time.
Oct 11 09:15:05 compute-0 systemd-machined[215705]: Machine qemu-129-instance-0000006b terminated.
Oct 11 09:15:05 compute-0 ceph-mon[74313]: pgmap v2197: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:15:05 compute-0 nova_compute[260935]: 2025-10-11 09:15:05.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:05 compute-0 nova_compute[260935]: 2025-10-11 09:15:05.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:05 compute-0 nova_compute[260935]: 2025-10-11 09:15:05.776 2 DEBUG nova.compute.manager [None req-2f012b15-b6c8-4501-89aa-611311630683 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:15:05 compute-0 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[375618]: [NOTICE]   (375622) : haproxy version is 2.8.14-c23fe91
Oct 11 09:15:05 compute-0 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[375618]: [NOTICE]   (375622) : path to executable is /usr/sbin/haproxy
Oct 11 09:15:05 compute-0 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[375618]: [ALERT]    (375622) : Current worker (375624) exited with code 143 (Terminated)
Oct 11 09:15:05 compute-0 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[375618]: [WARNING]  (375622) : All workers exited. Exiting... (0)
Oct 11 09:15:05 compute-0 systemd[1]: libpod-9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9.scope: Deactivated successfully.
Oct 11 09:15:05 compute-0 conmon[375618]: conmon 9c990cd252a00b3aae10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9.scope/container/memory.events
Oct 11 09:15:05 compute-0 podman[376511]: 2025-10-11 09:15:05.838364465 +0000 UTC m=+0.066377168 container died 9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 09:15:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9-userdata-shm.mount: Deactivated successfully.
Oct 11 09:15:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-93c774767af60c322abb2e50f9ecf5a65e5ec0268f3808529aa2cf4be24924c9-merged.mount: Deactivated successfully.
Oct 11 09:15:05 compute-0 podman[376511]: 2025-10-11 09:15:05.904027243 +0000 UTC m=+0.132039916 container cleanup 9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 09:15:05 compute-0 systemd[1]: libpod-conmon-9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9.scope: Deactivated successfully.
Oct 11 09:15:05 compute-0 podman[376545]: 2025-10-11 09:15:05.981369163 +0000 UTC m=+0.045971863 container remove 9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:15:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:05.990 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d9cebc7e-55f4-4bae-9d60-92a5f00394d1]: (4, ('Sat Oct 11 09:15:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 (9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9)\n9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9\nSat Oct 11 09:15:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 (9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9)\n9c990cd252a00b3aae108b75a4f292bd7c93e59da0428ed1c037f6870491fac9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:05.993 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e7dd5456-fc4f-40a7-a915-fe6d5a308ee6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:05.994 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b2dcf8f-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:15:05 compute-0 nova_compute[260935]: 2025-10-11 09:15:05.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:05 compute-0 kernel: tap4b2dcf8f-20: left promiscuous mode
Oct 11 09:15:06 compute-0 nova_compute[260935]: 2025-10-11 09:15:06.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:06.028 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[835ca78a-1776-4373-ab11-42c9a8c62be7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:06.058 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[44e5dc23-8c91-4450-932d-58256a0ce875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:06.060 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[47359233-c670-422a-9438-447a31632b3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:06.088 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdece7a-9d83-4ff8-893f-084ef027d0ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592989, 'reachable_time': 23448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376563, 'error': None, 'target': 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d4b2dcf8f\x2d2d20\x2d4207\x2d8c49\x2dd03814ca4c89.mount: Deactivated successfully.
Oct 11 09:15:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:06.093 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:15:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:06.094 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d941dbbb-752d-4e58-be55-ed6edd37321d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:06 compute-0 nova_compute[260935]: 2025-10-11 09:15:06.176 2 DEBUG nova.compute.manager [req-099bdbfc-d3e8-44d0-b05b-c8f1c74e4193 req-b5cd1cb2-7bc1-4d45-95ce-bde2cb856c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-unplugged-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:15:06 compute-0 nova_compute[260935]: 2025-10-11 09:15:06.178 2 DEBUG oslo_concurrency.lockutils [req-099bdbfc-d3e8-44d0-b05b-c8f1c74e4193 req-b5cd1cb2-7bc1-4d45-95ce-bde2cb856c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:15:06 compute-0 nova_compute[260935]: 2025-10-11 09:15:06.179 2 DEBUG oslo_concurrency.lockutils [req-099bdbfc-d3e8-44d0-b05b-c8f1c74e4193 req-b5cd1cb2-7bc1-4d45-95ce-bde2cb856c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:15:06 compute-0 nova_compute[260935]: 2025-10-11 09:15:06.180 2 DEBUG oslo_concurrency.lockutils [req-099bdbfc-d3e8-44d0-b05b-c8f1c74e4193 req-b5cd1cb2-7bc1-4d45-95ce-bde2cb856c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:15:06 compute-0 nova_compute[260935]: 2025-10-11 09:15:06.181 2 DEBUG nova.compute.manager [req-099bdbfc-d3e8-44d0-b05b-c8f1c74e4193 req-b5cd1cb2-7bc1-4d45-95ce-bde2cb856c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] No waiting events found dispatching network-vif-unplugged-ef5b5080-0657-4874-a32f-f79c370fbb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:15:06 compute-0 nova_compute[260935]: 2025-10-11 09:15:06.181 2 WARNING nova.compute.manager [req-099bdbfc-d3e8-44d0-b05b-c8f1c74e4193 req-b5cd1cb2-7bc1-4d45-95ce-bde2cb856c0d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received unexpected event network-vif-unplugged-ef5b5080-0657-4874-a32f-f79c370fbb6f for instance with vm_state suspended and task_state None.
Oct 11 09:15:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:15:06 compute-0 brave_euler[376481]: {
Oct 11 09:15:06 compute-0 brave_euler[376481]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:15:06 compute-0 brave_euler[376481]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:15:06 compute-0 brave_euler[376481]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:15:06 compute-0 brave_euler[376481]:         "osd_id": 2,
Oct 11 09:15:06 compute-0 brave_euler[376481]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:15:06 compute-0 brave_euler[376481]:         "type": "bluestore"
Oct 11 09:15:06 compute-0 brave_euler[376481]:     },
Oct 11 09:15:06 compute-0 brave_euler[376481]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:15:06 compute-0 brave_euler[376481]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:15:06 compute-0 brave_euler[376481]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:15:06 compute-0 brave_euler[376481]:         "osd_id": 0,
Oct 11 09:15:06 compute-0 brave_euler[376481]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:15:06 compute-0 brave_euler[376481]:         "type": "bluestore"
Oct 11 09:15:06 compute-0 brave_euler[376481]:     },
Oct 11 09:15:06 compute-0 brave_euler[376481]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:15:06 compute-0 brave_euler[376481]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:15:06 compute-0 brave_euler[376481]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:15:06 compute-0 brave_euler[376481]:         "osd_id": 1,
Oct 11 09:15:06 compute-0 brave_euler[376481]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:15:06 compute-0 brave_euler[376481]:         "type": "bluestore"
Oct 11 09:15:06 compute-0 brave_euler[376481]:     }
Oct 11 09:15:06 compute-0 brave_euler[376481]: }
Oct 11 09:15:06 compute-0 systemd[1]: libpod-1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c.scope: Deactivated successfully.
Oct 11 09:15:06 compute-0 systemd[1]: libpod-1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c.scope: Consumed 1.161s CPU time.
Oct 11 09:15:06 compute-0 conmon[376481]: conmon 1c6d2d20fd65e152956c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c.scope/container/memory.events
Oct 11 09:15:06 compute-0 podman[376464]: 2025-10-11 09:15:06.737987316 +0000 UTC m=+1.427821422 container died 1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 09:15:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-a59c468e57361f6447cf724a331b19adb8637614bec64ece422f15578f1ab270-merged.mount: Deactivated successfully.
Oct 11 09:15:06 compute-0 podman[376464]: 2025-10-11 09:15:06.80967655 +0000 UTC m=+1.499510626 container remove 1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:15:06 compute-0 systemd[1]: libpod-conmon-1c6d2d20fd65e152956cc7569b6db81ce7e7aafa05930cd624d6e3c6245dc84c.scope: Deactivated successfully.
Oct 11 09:15:06 compute-0 sudo[376356]: pam_unix(sudo:session): session closed for user root
Oct 11 09:15:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:15:06 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:15:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:15:06 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:15:06 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 8f128bb1-1ca3-4f16-b212-bc09d4eaee0d does not exist
Oct 11 09:15:06 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev cefe412f-19a5-4789-8f8c-33bfc14006da does not exist
Oct 11 09:15:06 compute-0 sudo[376603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:15:06 compute-0 sudo[376603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:15:06 compute-0 sudo[376603]: pam_unix(sudo:session): session closed for user root
Oct 11 09:15:07 compute-0 sudo[376628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:15:07 compute-0 sudo[376628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:15:07 compute-0 sudo[376628]: pam_unix(sudo:session): session closed for user root
Oct 11 09:15:07 compute-0 ceph-mon[74313]: pgmap v2198: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:15:07 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:15:07 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:15:08 compute-0 nova_compute[260935]: 2025-10-11 09:15:08.297 2 DEBUG nova.compute.manager [req-3adfc58f-c2df-4f5b-a430-08e252b682c3 req-a9a98f69-38c9-4076-8cd1-dc8f21f1194e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:15:08 compute-0 nova_compute[260935]: 2025-10-11 09:15:08.298 2 DEBUG oslo_concurrency.lockutils [req-3adfc58f-c2df-4f5b-a430-08e252b682c3 req-a9a98f69-38c9-4076-8cd1-dc8f21f1194e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:15:08 compute-0 nova_compute[260935]: 2025-10-11 09:15:08.299 2 DEBUG oslo_concurrency.lockutils [req-3adfc58f-c2df-4f5b-a430-08e252b682c3 req-a9a98f69-38c9-4076-8cd1-dc8f21f1194e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:15:08 compute-0 nova_compute[260935]: 2025-10-11 09:15:08.299 2 DEBUG oslo_concurrency.lockutils [req-3adfc58f-c2df-4f5b-a430-08e252b682c3 req-a9a98f69-38c9-4076-8cd1-dc8f21f1194e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:15:08 compute-0 nova_compute[260935]: 2025-10-11 09:15:08.300 2 DEBUG nova.compute.manager [req-3adfc58f-c2df-4f5b-a430-08e252b682c3 req-a9a98f69-38c9-4076-8cd1-dc8f21f1194e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] No waiting events found dispatching network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:15:08 compute-0 nova_compute[260935]: 2025-10-11 09:15:08.300 2 WARNING nova.compute.manager [req-3adfc58f-c2df-4f5b-a430-08e252b682c3 req-a9a98f69-38c9-4076-8cd1-dc8f21f1194e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received unexpected event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f for instance with vm_state suspended and task_state None.
Oct 11 09:15:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:15:08 compute-0 podman[376653]: 2025-10-11 09:15:08.831659746 +0000 UTC m=+0.120994610 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 11 09:15:08 compute-0 podman[376654]: 2025-10-11 09:15:08.871011536 +0000 UTC m=+0.158709764 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 09:15:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:15:09 compute-0 nova_compute[260935]: 2025-10-11 09:15:09.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:09 compute-0 nova_compute[260935]: 2025-10-11 09:15:09.424 2 INFO nova.compute.manager [None req-dfbaa716-6f44-48a1-ac51-f0fa18a5e265 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Get console output
Oct 11 09:15:09 compute-0 nova_compute[260935]: 2025-10-11 09:15:09.621 2 INFO nova.compute.manager [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Resuming
Oct 11 09:15:09 compute-0 nova_compute[260935]: 2025-10-11 09:15:09.622 2 DEBUG nova.objects.instance [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'flavor' on Instance uuid dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:15:09 compute-0 nova_compute[260935]: 2025-10-11 09:15:09.663 2 DEBUG oslo_concurrency.lockutils [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:15:09 compute-0 nova_compute[260935]: 2025-10-11 09:15:09.664 2 DEBUG oslo_concurrency.lockutils [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquired lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:15:09 compute-0 nova_compute[260935]: 2025-10-11 09:15:09.664 2 DEBUG nova.network.neutron [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:15:09 compute-0 ceph-mon[74313]: pgmap v2199: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:15:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 55 KiB/s wr, 10 op/s
Oct 11 09:15:10 compute-0 nova_compute[260935]: 2025-10-11 09:15:10.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:11 compute-0 nova_compute[260935]: 2025-10-11 09:15:11.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:15:11 compute-0 ceph-mon[74313]: pgmap v2200: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 55 KiB/s wr, 10 op/s
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.335 2 DEBUG nova.network.neutron [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updating instance_info_cache with network_info: [{"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.433 2 DEBUG oslo_concurrency.lockutils [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Releasing lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.441 2 DEBUG nova.virt.libvirt.vif [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:14:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1538313355',display_name='tempest-TestNetworkAdvancedServerOps-server-1538313355',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1538313355',id=107,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIm35lVc/qb0x6y/uUYwsQgGwxPTQjIAOVresR65zlGyxGEfXr+JB+WnXq5JHTOUB0BIvi8UrAYCHxEhMebttYysUf8+o2zlwFp3FSTpdihQDj9CLBgCJ2HNvxQ7hPPL2g==',key_name='tempest-TestNetworkAdvancedServerOps-1022296660',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:14:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-nhg7hkl0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:15:05Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=dd4f6627-2bd4-4ad6-8e1e-c5bec1228096,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.441 2 DEBUG nova.network.os_vif_util [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.443 2 DEBUG nova.network.os_vif_util [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.444 2 DEBUG os_vif [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.445 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.451 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef5b5080-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapef5b5080-06, col_values=(('external_ids', {'iface-id': 'ef5b5080-0657-4874-a32f-f79c370fbb6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:4b:2f', 'vm-uuid': 'dd4f6627-2bd4-4ad6-8e1e-c5bec1228096'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.453 2 INFO os_vif [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06')
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.487 2 DEBUG nova.objects.instance [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'numa_topology' on Instance uuid dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:15:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 55 KiB/s wr, 10 op/s
Oct 11 09:15:12 compute-0 kernel: tapef5b5080-06: entered promiscuous mode
Oct 11 09:15:12 compute-0 NetworkManager[44960]: <info>  [1760174112.6353] manager: (tapef5b5080-06): new Tun device (/org/freedesktop/NetworkManager/Devices/429)
Oct 11 09:15:12 compute-0 ovn_controller[152945]: 2025-10-11T09:15:12Z|01042|binding|INFO|Claiming lport ef5b5080-0657-4874-a32f-f79c370fbb6f for this chassis.
Oct 11 09:15:12 compute-0 ovn_controller[152945]: 2025-10-11T09:15:12Z|01043|binding|INFO|ef5b5080-0657-4874-a32f-f79c370fbb6f: Claiming fa:16:3e:73:4b:2f 10.100.0.4
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.651 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:4b:2f 10.100.0.4'], port_security=['fa:16:3e:73:4b:2f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dd4f6627-2bd4-4ad6-8e1e-c5bec1228096', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '69b43b9b-1747-4199-b611-82604b25c02c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.242'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a61dfb2-d260-491a-87d9-80daad2fd54f, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ef5b5080-0657-4874-a32f-f79c370fbb6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.654 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ef5b5080-0657-4874-a32f-f79c370fbb6f in datapath 4b2dcf8f-2d20-4207-8c49-d03814ca4c89 bound to our chassis
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.656 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4b2dcf8f-2d20-4207-8c49-d03814ca4c89
Oct 11 09:15:12 compute-0 ovn_controller[152945]: 2025-10-11T09:15:12Z|01044|binding|INFO|Setting lport ef5b5080-0657-4874-a32f-f79c370fbb6f ovn-installed in OVS
Oct 11 09:15:12 compute-0 ovn_controller[152945]: 2025-10-11T09:15:12Z|01045|binding|INFO|Setting lport ef5b5080-0657-4874-a32f-f79c370fbb6f up in Southbound
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:12 compute-0 nova_compute[260935]: 2025-10-11 09:15:12.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.677 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[55d9757e-1d1f-4b80-92c4-15b964a5ef15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.680 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4b2dcf8f-21 in ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.683 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4b2dcf8f-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.683 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[014ddf90-030c-422f-8add-783023c9b52a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.684 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d04ff11a-3cda-4a71-a269-a48a041aa455]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:12 compute-0 systemd-udevd[376714]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.701 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[82411115-b2e1-4e0c-9db7-d70c5077c07b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:12 compute-0 systemd-machined[215705]: New machine qemu-130-instance-0000006b.
Oct 11 09:15:12 compute-0 NetworkManager[44960]: <info>  [1760174112.7154] device (tapef5b5080-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:15:12 compute-0 NetworkManager[44960]: <info>  [1760174112.7162] device (tapef5b5080-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.726 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee86388-624c-4159-a785-87ff731a637b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:12 compute-0 systemd[1]: Started Virtual Machine qemu-130-instance-0000006b.
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.787 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9417bd50-1bfb-43bd-ac4f-f78f9ba9e06d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:12 compute-0 NetworkManager[44960]: <info>  [1760174112.7985] manager: (tap4b2dcf8f-20): new Veth device (/org/freedesktop/NetworkManager/Devices/430)
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.800 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[194392ea-9d09-4c52-94f4-e643f00d537d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.860 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6848b532-e9c2-4d78-9475-9d50ad45593a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.865 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a634aa-e2b8-49ec-accc-2cdd7765c5be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:12 compute-0 NetworkManager[44960]: <info>  [1760174112.9047] device (tap4b2dcf8f-20): carrier: link connected
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.915 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[38c6b1bd-f979-4b21-8c0f-f6fba2a9fa9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.943 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb53c70-25a1-4ae6-9eca-39a349f9d327]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b2dcf8f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:48:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595760, 'reachable_time': 33957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376746, 'error': None, 'target': 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.962 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[75dd6ad9-03cb-4817-9d95-c55825831f99]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:4874'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 595760, 'tstamp': 595760}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376747, 'error': None, 'target': 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:12.985 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d18dc929-d042-4f02-af23-480e219e1606]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b2dcf8f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:48:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595760, 'reachable_time': 33957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376748, 'error': None, 'target': 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.013 2 DEBUG nova.compute.manager [req-5e6815dc-a9c3-474b-af4d-e3163a266e57 req-99f64454-eb9f-4892-b197-63cd335d179e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.013 2 DEBUG oslo_concurrency.lockutils [req-5e6815dc-a9c3-474b-af4d-e3163a266e57 req-99f64454-eb9f-4892-b197-63cd335d179e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.014 2 DEBUG oslo_concurrency.lockutils [req-5e6815dc-a9c3-474b-af4d-e3163a266e57 req-99f64454-eb9f-4892-b197-63cd335d179e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.015 2 DEBUG oslo_concurrency.lockutils [req-5e6815dc-a9c3-474b-af4d-e3163a266e57 req-99f64454-eb9f-4892-b197-63cd335d179e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.015 2 DEBUG nova.compute.manager [req-5e6815dc-a9c3-474b-af4d-e3163a266e57 req-99f64454-eb9f-4892-b197-63cd335d179e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] No waiting events found dispatching network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.016 2 WARNING nova.compute.manager [req-5e6815dc-a9c3-474b-af4d-e3163a266e57 req-99f64454-eb9f-4892-b197-63cd335d179e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received unexpected event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f for instance with vm_state suspended and task_state resuming.
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9f21a3-55da-4208-8006-5a2a0693513a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.121 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2461dad8-1e9e-4801-a2eb-d4159b2c859e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.123 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b2dcf8f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.124 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.124 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b2dcf8f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:13 compute-0 kernel: tap4b2dcf8f-20: entered promiscuous mode
Oct 11 09:15:13 compute-0 NetworkManager[44960]: <info>  [1760174113.1294] manager: (tap4b2dcf8f-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.138 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4b2dcf8f-20, col_values=(('external_ids', {'iface-id': '0c6a8cda-d5e3-42f3-82f4-69be33ec4ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:15:13 compute-0 ovn_controller[152945]: 2025-10-11T09:15:13Z|01046|binding|INFO|Releasing lport 0c6a8cda-d5e3-42f3-82f4-69be33ec4ca6 from this chassis (sb_readonly=0)
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.169 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4b2dcf8f-2d20-4207-8c49-d03814ca4c89.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4b2dcf8f-2d20-4207-8c49-d03814ca4c89.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.171 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b0fdf880-c951-4c97-bb30-85960a4a67a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.172 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-4b2dcf8f-2d20-4207-8c49-d03814ca4c89
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/4b2dcf8f-2d20-4207-8c49-d03814ca4c89.pid.haproxy
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 4b2dcf8f-2d20-4207-8c49-d03814ca4c89
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:15:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:13.173 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'env', 'PROCESS_TAG=haproxy-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4b2dcf8f-2d20-4207-8c49-d03814ca4c89.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:15:13 compute-0 podman[376823]: 2025-10-11 09:15:13.600538165 +0000 UTC m=+0.043887206 container create 758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 09:15:13 compute-0 systemd[1]: Started libpod-conmon-758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda.scope.
Oct 11 09:15:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f44778f1e684e02e2c766e9adf76da6209b32f1329c4249ef2256f9967f26385/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:15:13 compute-0 podman[376823]: 2025-10-11 09:15:13.673139844 +0000 UTC m=+0.116488895 container init 758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 09:15:13 compute-0 podman[376823]: 2025-10-11 09:15:13.578272438 +0000 UTC m=+0.021621499 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:15:13 compute-0 podman[376823]: 2025-10-11 09:15:13.682968356 +0000 UTC m=+0.126317397 container start 758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 09:15:13 compute-0 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[376840]: [NOTICE]   (376844) : New worker (376846) forked
Oct 11 09:15:13 compute-0 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[376840]: [NOTICE]   (376844) : Loading success.
Oct 11 09:15:13 compute-0 ceph-mon[74313]: pgmap v2201: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 55 KiB/s wr, 10 op/s
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.919 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.920 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174113.9191809, dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.921 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] VM Started (Lifecycle Event)
Oct 11 09:15:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.949 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.972 2 DEBUG nova.compute.manager [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.973 2 DEBUG nova.objects.instance [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.976 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.996 2 INFO nova.virt.libvirt.driver [-] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Instance running successfully.
Oct 11 09:15:13 compute-0 virtqemud[260524]: argument unsupported: QEMU guest agent is not configured
Oct 11 09:15:13 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.999 2 DEBUG nova.virt.libvirt.guest [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 11 09:15:14 compute-0 nova_compute[260935]: 2025-10-11 09:15:13.999 2 DEBUG nova.compute.manager [None req-5bce991b-4bd0-467a-b8cd-da5a138e2600 a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:15:14 compute-0 nova_compute[260935]: 2025-10-11 09:15:14.002 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 11 09:15:14 compute-0 nova_compute[260935]: 2025-10-11 09:15:14.003 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174113.9242215, dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:15:14 compute-0 nova_compute[260935]: 2025-10-11 09:15:14.003 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] VM Resumed (Lifecycle Event)
Oct 11 09:15:14 compute-0 nova_compute[260935]: 2025-10-11 09:15:14.032 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:15:14 compute-0 nova_compute[260935]: 2025-10-11 09:15:14.038 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:15:14 compute-0 nova_compute[260935]: 2025-10-11 09:15:14.071 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 11 09:15:14 compute-0 nova_compute[260935]: 2025-10-11 09:15:14.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 11 09:15:15 compute-0 nova_compute[260935]: 2025-10-11 09:15:15.094 2 DEBUG nova.compute.manager [req-43736221-b6b3-44ed-b3e4-5ad61dc5cebc req-d6d76eac-b806-4228-9b3b-40e74fa23952 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:15:15 compute-0 nova_compute[260935]: 2025-10-11 09:15:15.095 2 DEBUG oslo_concurrency.lockutils [req-43736221-b6b3-44ed-b3e4-5ad61dc5cebc req-d6d76eac-b806-4228-9b3b-40e74fa23952 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:15:15 compute-0 nova_compute[260935]: 2025-10-11 09:15:15.095 2 DEBUG oslo_concurrency.lockutils [req-43736221-b6b3-44ed-b3e4-5ad61dc5cebc req-d6d76eac-b806-4228-9b3b-40e74fa23952 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:15:15 compute-0 nova_compute[260935]: 2025-10-11 09:15:15.096 2 DEBUG oslo_concurrency.lockutils [req-43736221-b6b3-44ed-b3e4-5ad61dc5cebc req-d6d76eac-b806-4228-9b3b-40e74fa23952 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:15:15 compute-0 nova_compute[260935]: 2025-10-11 09:15:15.096 2 DEBUG nova.compute.manager [req-43736221-b6b3-44ed-b3e4-5ad61dc5cebc req-d6d76eac-b806-4228-9b3b-40e74fa23952 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] No waiting events found dispatching network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:15:15 compute-0 nova_compute[260935]: 2025-10-11 09:15:15.097 2 WARNING nova.compute.manager [req-43736221-b6b3-44ed-b3e4-5ad61dc5cebc req-d6d76eac-b806-4228-9b3b-40e74fa23952 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received unexpected event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f for instance with vm_state active and task_state None.
Oct 11 09:15:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:15.210 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:15:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:15.211 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:15:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:15.212 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:15:15 compute-0 nova_compute[260935]: 2025-10-11 09:15:15.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:15 compute-0 ceph-mon[74313]: pgmap v2202: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 11 09:15:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 852 B/s rd, 12 KiB/s wr, 0 op/s
Oct 11 09:15:16 compute-0 ceph-mon[74313]: pgmap v2203: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 852 B/s rd, 12 KiB/s wr, 0 op/s
Oct 11 09:15:17 compute-0 nova_compute[260935]: 2025-10-11 09:15:17.147 2 INFO nova.compute.manager [None req-056281e6-4d65-4f3f-8017-5989b5f18f6b a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Get console output
Oct 11 09:15:17 compute-0 nova_compute[260935]: 2025-10-11 09:15:17.154 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.313 2 DEBUG nova.compute.manager [req-c003a7df-5071-42b6-aeeb-1a605ba7c5be req-cd611666-e864-4272-8691-8aa0c171026a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-changed-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.314 2 DEBUG nova.compute.manager [req-c003a7df-5071-42b6-aeeb-1a605ba7c5be req-cd611666-e864-4272-8691-8aa0c171026a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Refreshing instance network info cache due to event network-changed-ef5b5080-0657-4874-a32f-f79c370fbb6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.315 2 DEBUG oslo_concurrency.lockutils [req-c003a7df-5071-42b6-aeeb-1a605ba7c5be req-cd611666-e864-4272-8691-8aa0c171026a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.315 2 DEBUG oslo_concurrency.lockutils [req-c003a7df-5071-42b6-aeeb-1a605ba7c5be req-cd611666-e864-4272-8691-8aa0c171026a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.315 2 DEBUG nova.network.neutron [req-c003a7df-5071-42b6-aeeb-1a605ba7c5be req-cd611666-e864-4272-8691-8aa0c171026a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Refreshing network info cache for port ef5b5080-0657-4874-a32f-f79c370fbb6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.511 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.512 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.512 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.513 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.514 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.516 2 INFO nova.compute.manager [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Terminating instance
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.518 2 DEBUG nova.compute.manager [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:15:18 compute-0 kernel: tapef5b5080-06 (unregistering): left promiscuous mode
Oct 11 09:15:18 compute-0 NetworkManager[44960]: <info>  [1760174118.5833] device (tapef5b5080-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:15:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 5.1 KiB/s rd, 12 KiB/s wr, 5 op/s
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:18 compute-0 ovn_controller[152945]: 2025-10-11T09:15:18Z|01047|binding|INFO|Releasing lport ef5b5080-0657-4874-a32f-f79c370fbb6f from this chassis (sb_readonly=0)
Oct 11 09:15:18 compute-0 ovn_controller[152945]: 2025-10-11T09:15:18Z|01048|binding|INFO|Setting lport ef5b5080-0657-4874-a32f-f79c370fbb6f down in Southbound
Oct 11 09:15:18 compute-0 ovn_controller[152945]: 2025-10-11T09:15:18Z|01049|binding|INFO|Removing iface tapef5b5080-06 ovn-installed in OVS
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:18.613 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:4b:2f 10.100.0.4'], port_security=['fa:16:3e:73:4b:2f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dd4f6627-2bd4-4ad6-8e1e-c5bec1228096', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca4b15770e784f45910b630937562cb6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '69b43b9b-1747-4199-b611-82604b25c02c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a61dfb2-d260-491a-87d9-80daad2fd54f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ef5b5080-0657-4874-a32f-f79c370fbb6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:15:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:18.615 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ef5b5080-0657-4874-a32f-f79c370fbb6f in datapath 4b2dcf8f-2d20-4207-8c49-d03814ca4c89 unbound from our chassis
Oct 11 09:15:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:18.618 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4b2dcf8f-2d20-4207-8c49-d03814ca4c89, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:15:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:18.620 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[943efbc0-eff3-476b-9c74-2863ae3c4807]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:18.620 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 namespace which is not needed anymore
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:18 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Oct 11 09:15:18 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d0000006b.scope: Consumed 1.329s CPU time.
Oct 11 09:15:18 compute-0 systemd-machined[215705]: Machine qemu-130-instance-0000006b terminated.
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.758 2 INFO nova.virt.libvirt.driver [-] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Instance destroyed successfully.
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.760 2 DEBUG nova.objects.instance [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lazy-loading 'resources' on Instance uuid dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.783 2 DEBUG nova.virt.libvirt.vif [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:14:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1538313355',display_name='tempest-TestNetworkAdvancedServerOps-server-1538313355',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1538313355',id=107,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIm35lVc/qb0x6y/uUYwsQgGwxPTQjIAOVresR65zlGyxGEfXr+JB+WnXq5JHTOUB0BIvi8UrAYCHxEhMebttYysUf8+o2zlwFp3FSTpdihQDj9CLBgCJ2HNvxQ7hPPL2g==',key_name='tempest-TestNetworkAdvancedServerOps-1022296660',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:14:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca4b15770e784f45910b630937562cb6',ramdisk_id='',reservation_id='r-nhg7hkl0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1304559157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1304559157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:15:14Z,user_data=None,user_id='a213c3877fc144a3af0be3c3d853f999',uuid=dd4f6627-2bd4-4ad6-8e1e-c5bec1228096,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.785 2 DEBUG nova.network.os_vif_util [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converting VIF {"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.787 2 DEBUG nova.network.os_vif_util [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.787 2 DEBUG os_vif [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.790 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef5b5080-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.798 2 INFO os_vif [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:4b:2f,bridge_name='br-int',has_traffic_filtering=True,id=ef5b5080-0657-4874-a32f-f79c370fbb6f,network=Network(4b2dcf8f-2d20-4207-8c49-d03814ca4c89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef5b5080-06')
Oct 11 09:15:18 compute-0 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[376840]: [NOTICE]   (376844) : haproxy version is 2.8.14-c23fe91
Oct 11 09:15:18 compute-0 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[376840]: [NOTICE]   (376844) : path to executable is /usr/sbin/haproxy
Oct 11 09:15:18 compute-0 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[376840]: [WARNING]  (376844) : Exiting Master process...
Oct 11 09:15:18 compute-0 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[376840]: [WARNING]  (376844) : Exiting Master process...
Oct 11 09:15:18 compute-0 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[376840]: [ALERT]    (376844) : Current worker (376846) exited with code 143 (Terminated)
Oct 11 09:15:18 compute-0 neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89[376840]: [WARNING]  (376844) : All workers exited. Exiting... (0)
Oct 11 09:15:18 compute-0 systemd[1]: libpod-758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda.scope: Deactivated successfully.
Oct 11 09:15:18 compute-0 conmon[376840]: conmon 758743012bf5f9089c8d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda.scope/container/memory.events
Oct 11 09:15:18 compute-0 podman[376880]: 2025-10-11 09:15:18.813292431 +0000 UTC m=+0.062355467 container died 758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 09:15:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda-userdata-shm.mount: Deactivated successfully.
Oct 11 09:15:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f44778f1e684e02e2c766e9adf76da6209b32f1329c4249ef2256f9967f26385-merged.mount: Deactivated successfully.
Oct 11 09:15:18 compute-0 podman[376880]: 2025-10-11 09:15:18.874136565 +0000 UTC m=+0.123199581 container cleanup 758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:15:18 compute-0 systemd[1]: libpod-conmon-758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda.scope: Deactivated successfully.
Oct 11 09:15:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:15:18 compute-0 podman[376933]: 2025-10-11 09:15:18.96285176 +0000 UTC m=+0.050667083 container remove 758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 11 09:15:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:18.972 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea1ce45-df02-4694-aab1-5fae9ae28d92]: (4, ('Sat Oct 11 09:15:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 (758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda)\n758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda\nSat Oct 11 09:15:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 (758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda)\n758743012bf5f9089c8d9eaa120435dbf2e8a515465430d45860b329e93b9eda\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:18.974 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[badcebe3-10f9-4b07-aed0-5483083db895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:18.976 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b2dcf8f-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:15:18 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:18 compute-0 kernel: tap4b2dcf8f-20: left promiscuous mode
Oct 11 09:15:19 compute-0 nova_compute[260935]: 2025-10-11 09:15:18.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:19.005 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[06d451dc-f2cd-4771-a2c9-e2c64806ea96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:19.031 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f74e57-ea1a-43b2-9a6c-6adc9d2e7c2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:19.032 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba8542e-c414-4f15-89a5-bc9bbd0bd329]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:19.055 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb30987-a532-4ca4-8b9a-a13eac69c60e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595747, 'reachable_time': 20292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376945, 'error': None, 'target': 'ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:19.058 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4b2dcf8f-2d20-4207-8c49-d03814ca4c89 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:15:19 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:19.058 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[5508e2dc-ea4c-4592-9e3b-a6964add191b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d4b2dcf8f\x2d2d20\x2d4207\x2d8c49\x2dd03814ca4c89.mount: Deactivated successfully.
Oct 11 09:15:19 compute-0 nova_compute[260935]: 2025-10-11 09:15:19.256 2 INFO nova.virt.libvirt.driver [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Deleting instance files /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_del
Oct 11 09:15:19 compute-0 nova_compute[260935]: 2025-10-11 09:15:19.257 2 INFO nova.virt.libvirt.driver [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Deletion of /var/lib/nova/instances/dd4f6627-2bd4-4ad6-8e1e-c5bec1228096_del complete
Oct 11 09:15:19 compute-0 nova_compute[260935]: 2025-10-11 09:15:19.355 2 INFO nova.compute.manager [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Took 0.84 seconds to destroy the instance on the hypervisor.
Oct 11 09:15:19 compute-0 nova_compute[260935]: 2025-10-11 09:15:19.356 2 DEBUG oslo.service.loopingcall [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:15:19 compute-0 nova_compute[260935]: 2025-10-11 09:15:19.361 2 DEBUG nova.compute.manager [-] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:15:19 compute-0 nova_compute[260935]: 2025-10-11 09:15:19.361 2 DEBUG nova.network.neutron [-] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:15:19 compute-0 ceph-mon[74313]: pgmap v2204: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 5.1 KiB/s rd, 12 KiB/s wr, 5 op/s
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.250 2 DEBUG nova.network.neutron [-] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.273 2 INFO nova.compute.manager [-] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Took 0.91 seconds to deallocate network for instance.
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.299 2 DEBUG nova.network.neutron [req-c003a7df-5071-42b6-aeeb-1a605ba7c5be req-cd611666-e864-4272-8691-8aa0c171026a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updated VIF entry in instance network info cache for port ef5b5080-0657-4874-a32f-f79c370fbb6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.300 2 DEBUG nova.network.neutron [req-c003a7df-5071-42b6-aeeb-1a605ba7c5be req-cd611666-e864-4272-8691-8aa0c171026a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Updating instance_info_cache with network_info: [{"id": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "address": "fa:16:3e:73:4b:2f", "network": {"id": "4b2dcf8f-2d20-4207-8c49-d03814ca4c89", "bridge": "br-int", "label": "tempest-network-smoke--1714222013", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca4b15770e784f45910b630937562cb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef5b5080-06", "ovs_interfaceid": "ef5b5080-0657-4874-a32f-f79c370fbb6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.370 2 DEBUG oslo_concurrency.lockutils [req-c003a7df-5071-42b6-aeeb-1a605ba7c5be req-cd611666-e864-4272-8691-8aa0c171026a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.376 2 DEBUG nova.compute.manager [req-af268778-64de-40f4-8843-eac9115a3a2d req-6036156c-ae43-4249-8013-20282b63bdf2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-deleted-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.378 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.378 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.467 2 DEBUG nova.compute.manager [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-unplugged-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.467 2 DEBUG oslo_concurrency.lockutils [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.468 2 DEBUG oslo_concurrency.lockutils [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.468 2 DEBUG oslo_concurrency.lockutils [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.469 2 DEBUG nova.compute.manager [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] No waiting events found dispatching network-vif-unplugged-ef5b5080-0657-4874-a32f-f79c370fbb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.469 2 WARNING nova.compute.manager [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received unexpected event network-vif-unplugged-ef5b5080-0657-4874-a32f-f79c370fbb6f for instance with vm_state deleted and task_state None.
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.469 2 DEBUG nova.compute.manager [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.470 2 DEBUG oslo_concurrency.lockutils [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.470 2 DEBUG oslo_concurrency.lockutils [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.471 2 DEBUG oslo_concurrency.lockutils [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.471 2 DEBUG nova.compute.manager [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] No waiting events found dispatching network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.471 2 WARNING nova.compute.manager [req-87282bf4-2052-412c-be3a-64512721df33 req-c1ac9d01-f8ed-44d4-ae82-e54ff4442b09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Received unexpected event network-vif-plugged-ef5b5080-0657-4874-a32f-f79c370fbb6f for instance with vm_state deleted and task_state None.
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.556 2 DEBUG oslo_concurrency.processutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:15:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Oct 11 09:15:20 compute-0 nova_compute[260935]: 2025-10-11 09:15:20.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:15:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3364576966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:15:21 compute-0 nova_compute[260935]: 2025-10-11 09:15:21.032 2 DEBUG oslo_concurrency.processutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:15:21 compute-0 nova_compute[260935]: 2025-10-11 09:15:21.041 2 DEBUG nova.compute.provider_tree [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:15:21 compute-0 nova_compute[260935]: 2025-10-11 09:15:21.063 2 DEBUG nova.scheduler.client.report [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:15:21 compute-0 nova_compute[260935]: 2025-10-11 09:15:21.099 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:15:21 compute-0 nova_compute[260935]: 2025-10-11 09:15:21.179 2 INFO nova.scheduler.client.report [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Deleted allocations for instance dd4f6627-2bd4-4ad6-8e1e-c5bec1228096
Oct 11 09:15:21 compute-0 nova_compute[260935]: 2025-10-11 09:15:21.267 2 DEBUG oslo_concurrency.lockutils [None req-c53a938f-bb46-4741-9e60-3b88069acf3f a213c3877fc144a3af0be3c3d853f999 ca4b15770e784f45910b630937562cb6 - - default default] Lock "dd4f6627-2bd4-4ad6-8e1e-c5bec1228096" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:15:21 compute-0 ceph-mon[74313]: pgmap v2205: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Oct 11 09:15:21 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3364576966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:15:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 358 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Oct 11 09:15:23 compute-0 ceph-mon[74313]: pgmap v2206: 321 pgs: 321 active+clean; 358 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Oct 11 09:15:23 compute-0 nova_compute[260935]: 2025-10-11 09:15:23.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:15:23 compute-0 nova_compute[260935]: 2025-10-11 09:15:23.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:15:24 compute-0 ovn_controller[152945]: 2025-10-11T09:15:24Z|01050|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:15:24 compute-0 ovn_controller[152945]: 2025-10-11T09:15:24Z|01051|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:15:24 compute-0 nova_compute[260935]: 2025-10-11 09:15:24.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 11 09:15:24 compute-0 ovn_controller[152945]: 2025-10-11T09:15:24Z|01052|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:15:24 compute-0 ovn_controller[152945]: 2025-10-11T09:15:24Z|01053|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:15:24 compute-0 nova_compute[260935]: 2025-10-11 09:15:24.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:24 compute-0 nova_compute[260935]: 2025-10-11 09:15:24.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:15:24 compute-0 unix_chkpwd[376972]: password check failed for user (root)
Oct 11 09:15:24 compute-0 sshd-session[376969]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:15:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:15:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:15:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:15:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:15:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:15:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:15:25 compute-0 ceph-mon[74313]: pgmap v2207: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 11 09:15:25 compute-0 nova_compute[260935]: 2025-10-11 09:15:25.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 11 09:15:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:15:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1071036871' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:15:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:15:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1071036871' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:15:26 compute-0 nova_compute[260935]: 2025-10-11 09:15:26.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:15:26 compute-0 nova_compute[260935]: 2025-10-11 09:15:26.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:15:26 compute-0 sshd-session[376969]: Failed password for root from 165.232.82.252 port 41264 ssh2
Oct 11 09:15:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:27.103 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:15:27 compute-0 nova_compute[260935]: 2025-10-11 09:15:27.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:27.105 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:15:27 compute-0 ceph-mon[74313]: pgmap v2208: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 11 09:15:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1071036871' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:15:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1071036871' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:15:27 compute-0 nova_compute[260935]: 2025-10-11 09:15:27.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:15:27 compute-0 nova_compute[260935]: 2025-10-11 09:15:27.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:15:27 compute-0 podman[376973]: 2025-10-11 09:15:27.808228965 +0000 UTC m=+0.098235700 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 11 09:15:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 11 09:15:28 compute-0 nova_compute[260935]: 2025-10-11 09:15:28.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:15:29 compute-0 sshd-session[376969]: Connection closed by authenticating user root 165.232.82.252 port 41264 [preauth]
Oct 11 09:15:29 compute-0 ceph-mon[74313]: pgmap v2209: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 11 09:15:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:15:30 compute-0 nova_compute[260935]: 2025-10-11 09:15:30.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:15:30 compute-0 nova_compute[260935]: 2025-10-11 09:15:30.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:30 compute-0 nova_compute[260935]: 2025-10-11 09:15:30.876 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:15:30 compute-0 nova_compute[260935]: 2025-10-11 09:15:30.876 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:15:30 compute-0 nova_compute[260935]: 2025-10-11 09:15:30.907 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 09:15:30 compute-0 nova_compute[260935]: 2025-10-11 09:15:30.908 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:15:30 compute-0 nova_compute[260935]: 2025-10-11 09:15:30.941 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:15:30 compute-0 nova_compute[260935]: 2025-10-11 09:15:30.942 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:15:30 compute-0 nova_compute[260935]: 2025-10-11 09:15:30.943 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:15:30 compute-0 nova_compute[260935]: 2025-10-11 09:15:30.943 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:15:30 compute-0 nova_compute[260935]: 2025-10-11 09:15:30.945 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:15:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:15:31 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/59155469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:15:31 compute-0 nova_compute[260935]: 2025-10-11 09:15:31.447 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:15:31 compute-0 nova_compute[260935]: 2025-10-11 09:15:31.560 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:15:31 compute-0 nova_compute[260935]: 2025-10-11 09:15:31.560 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:15:31 compute-0 nova_compute[260935]: 2025-10-11 09:15:31.561 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:15:31 compute-0 nova_compute[260935]: 2025-10-11 09:15:31.567 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:15:31 compute-0 nova_compute[260935]: 2025-10-11 09:15:31.568 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:15:31 compute-0 nova_compute[260935]: 2025-10-11 09:15:31.573 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:15:31 compute-0 nova_compute[260935]: 2025-10-11 09:15:31.574 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:15:31 compute-0 ceph-mon[74313]: pgmap v2210: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:15:31 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/59155469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:15:31 compute-0 nova_compute[260935]: 2025-10-11 09:15:31.901 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:15:31 compute-0 nova_compute[260935]: 2025-10-11 09:15:31.904 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3016MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:15:31 compute-0 nova_compute[260935]: 2025-10-11 09:15:31.905 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:15:31 compute-0 nova_compute[260935]: 2025-10-11 09:15:31.906 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:15:32 compute-0 nova_compute[260935]: 2025-10-11 09:15:32.020 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:15:32 compute-0 nova_compute[260935]: 2025-10-11 09:15:32.020 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:15:32 compute-0 nova_compute[260935]: 2025-10-11 09:15:32.021 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:15:32 compute-0 nova_compute[260935]: 2025-10-11 09:15:32.021 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:15:32 compute-0 nova_compute[260935]: 2025-10-11 09:15:32.022 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:15:32 compute-0 nova_compute[260935]: 2025-10-11 09:15:32.107 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:15:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:15:32 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/836855172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:15:32 compute-0 nova_compute[260935]: 2025-10-11 09:15:32.577 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:15:32 compute-0 nova_compute[260935]: 2025-10-11 09:15:32.584 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:15:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:15:32 compute-0 nova_compute[260935]: 2025-10-11 09:15:32.608 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:15:32 compute-0 nova_compute[260935]: 2025-10-11 09:15:32.636 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:15:32 compute-0 nova_compute[260935]: 2025-10-11 09:15:32.637 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:15:32 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/836855172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:15:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:33.107 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:15:33 compute-0 nova_compute[260935]: 2025-10-11 09:15:33.433 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:15:33 compute-0 ceph-mon[74313]: pgmap v2211: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:15:33 compute-0 nova_compute[260935]: 2025-10-11 09:15:33.757 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174118.7554939, dd4f6627-2bd4-4ad6-8e1e-c5bec1228096 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:15:33 compute-0 nova_compute[260935]: 2025-10-11 09:15:33.757 2 INFO nova.compute.manager [-] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] VM Stopped (Lifecycle Event)
Oct 11 09:15:33 compute-0 nova_compute[260935]: 2025-10-11 09:15:33.786 2 DEBUG nova.compute.manager [None req-68e62540-cac9-4fbd-983b-35747801efd6 - - - - - -] [instance: dd4f6627-2bd4-4ad6-8e1e-c5bec1228096] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:15:33 compute-0 nova_compute[260935]: 2025-10-11 09:15:33.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:33 compute-0 podman[377037]: 2025-10-11 09:15:33.803730804 +0000 UTC m=+0.097063538 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:15:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:15:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 9 op/s
Oct 11 09:15:35 compute-0 ceph-mon[74313]: pgmap v2212: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 9 op/s
Oct 11 09:15:35 compute-0 nova_compute[260935]: 2025-10-11 09:15:35.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2213: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:37 compute-0 ceph-mon[74313]: pgmap v2213: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:38 compute-0 nova_compute[260935]: 2025-10-11 09:15:38.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:15:39 compute-0 systemd[1]: Starting dnf makecache...
Oct 11 09:15:39 compute-0 ceph-mon[74313]: pgmap v2214: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:39 compute-0 podman[377057]: 2025-10-11 09:15:39.79802885 +0000 UTC m=+0.091535364 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 09:15:39 compute-0 podman[377058]: 2025-10-11 09:15:39.809378105 +0000 UTC m=+0.102934021 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:15:40 compute-0 dnf[377059]: Metadata cache refreshed recently.
Oct 11 09:15:40 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 11 09:15:40 compute-0 systemd[1]: Finished dnf makecache.
Oct 11 09:15:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:40 compute-0 nova_compute[260935]: 2025-10-11 09:15:40.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:41 compute-0 ceph-mon[74313]: pgmap v2215: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:43 compute-0 ceph-mon[74313]: pgmap v2216: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:43 compute-0 nova_compute[260935]: 2025-10-11 09:15:43.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:15:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:45 compute-0 ceph-mon[74313]: pgmap v2217: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:45 compute-0 nova_compute[260935]: 2025-10-11 09:15:45.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:47 compute-0 ceph-mon[74313]: pgmap v2218: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:48 compute-0 nova_compute[260935]: 2025-10-11 09:15:48.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:15:49 compute-0 ceph-mon[74313]: pgmap v2219: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:50 compute-0 nova_compute[260935]: 2025-10-11 09:15:50.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:51 compute-0 nova_compute[260935]: 2025-10-11 09:15:51.612 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "8354434f-daa4-4745-9755-bd2465f5459b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:15:51 compute-0 nova_compute[260935]: 2025-10-11 09:15:51.613 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:15:51 compute-0 nova_compute[260935]: 2025-10-11 09:15:51.647 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:15:51 compute-0 nova_compute[260935]: 2025-10-11 09:15:51.772 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:15:51 compute-0 nova_compute[260935]: 2025-10-11 09:15:51.773 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:15:51 compute-0 nova_compute[260935]: 2025-10-11 09:15:51.785 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:15:51 compute-0 nova_compute[260935]: 2025-10-11 09:15:51.786 2 INFO nova.compute.claims [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:15:51 compute-0 ceph-mon[74313]: pgmap v2220: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:51 compute-0 nova_compute[260935]: 2025-10-11 09:15:51.976 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:15:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:15:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/89263273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.471 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.481 2 DEBUG nova.compute.provider_tree [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.508 2 DEBUG nova.scheduler.client.report [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.547 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.548 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:15:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.612 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.613 2 DEBUG nova.network.neutron [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.635 2 INFO nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.655 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.785 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.787 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.788 2 INFO nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Creating image(s)
Oct 11 09:15:52 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/89263273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.815 2 DEBUG nova.storage.rbd_utils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 8354434f-daa4-4745-9755-bd2465f5459b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.849 2 DEBUG nova.storage.rbd_utils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 8354434f-daa4-4745-9755-bd2465f5459b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.884 2 DEBUG nova.storage.rbd_utils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 8354434f-daa4-4745-9755-bd2465f5459b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.890 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.942 2 DEBUG nova.policy [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.988 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.988 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.990 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:15:52 compute-0 nova_compute[260935]: 2025-10-11 09:15:52.991 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:15:53 compute-0 nova_compute[260935]: 2025-10-11 09:15:53.021 2 DEBUG nova.storage.rbd_utils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 8354434f-daa4-4745-9755-bd2465f5459b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:15:53 compute-0 nova_compute[260935]: 2025-10-11 09:15:53.027 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8354434f-daa4-4745-9755-bd2465f5459b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:15:53 compute-0 nova_compute[260935]: 2025-10-11 09:15:53.342 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8354434f-daa4-4745-9755-bd2465f5459b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:15:53 compute-0 nova_compute[260935]: 2025-10-11 09:15:53.422 2 DEBUG nova.storage.rbd_utils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image 8354434f-daa4-4745-9755-bd2465f5459b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:15:53 compute-0 nova_compute[260935]: 2025-10-11 09:15:53.519 2 DEBUG nova.objects.instance [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid 8354434f-daa4-4745-9755-bd2465f5459b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:15:53 compute-0 nova_compute[260935]: 2025-10-11 09:15:53.536 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:15:53 compute-0 nova_compute[260935]: 2025-10-11 09:15:53.537 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Ensure instance console log exists: /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:15:53 compute-0 nova_compute[260935]: 2025-10-11 09:15:53.537 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:15:53 compute-0 nova_compute[260935]: 2025-10-11 09:15:53.538 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:15:53 compute-0 nova_compute[260935]: 2025-10-11 09:15:53.538 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:15:53 compute-0 nova_compute[260935]: 2025-10-11 09:15:53.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:53 compute-0 ceph-mon[74313]: pgmap v2221: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:53 compute-0 nova_compute[260935]: 2025-10-11 09:15:53.926 2 DEBUG nova.network.neutron [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Successfully created port: cabc92ec-b61c-42d8-80b5-444ec773a568 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:15:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:15:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:15:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:15:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:15:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:15:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:15:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:15:54 compute-0 nova_compute[260935]: 2025-10-11 09:15:54.852 2 DEBUG nova.network.neutron [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Successfully updated port: cabc92ec-b61c-42d8-80b5-444ec773a568 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:15:54 compute-0 nova_compute[260935]: 2025-10-11 09:15:54.869 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:15:54 compute-0 nova_compute[260935]: 2025-10-11 09:15:54.869 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:15:54 compute-0 nova_compute[260935]: 2025-10-11 09:15:54.869 2 DEBUG nova.network.neutron [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:15:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:15:54
Oct 11 09:15:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:15:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:15:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'vms', 'images', '.mgr', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups']
Oct 11 09:15:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:15:55 compute-0 nova_compute[260935]: 2025-10-11 09:15:55.004 2 DEBUG nova.compute.manager [req-e7e4e3da-6d72-45bb-8e2c-9307289a221b req-c5734003-c844-4cc4-a2c2-4485688827a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-changed-cabc92ec-b61c-42d8-80b5-444ec773a568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:15:55 compute-0 nova_compute[260935]: 2025-10-11 09:15:55.004 2 DEBUG nova.compute.manager [req-e7e4e3da-6d72-45bb-8e2c-9307289a221b req-c5734003-c844-4cc4-a2c2-4485688827a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Refreshing instance network info cache due to event network-changed-cabc92ec-b61c-42d8-80b5-444ec773a568. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:15:55 compute-0 nova_compute[260935]: 2025-10-11 09:15:55.005 2 DEBUG oslo_concurrency.lockutils [req-e7e4e3da-6d72-45bb-8e2c-9307289a221b req-c5734003-c844-4cc4-a2c2-4485688827a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:15:55 compute-0 nova_compute[260935]: 2025-10-11 09:15:55.125 2 DEBUG nova.network.neutron [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:15:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:15:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:15:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:15:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:15:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:15:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:15:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:15:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:15:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:15:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:15:55 compute-0 ceph-mon[74313]: pgmap v2222: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:55 compute-0 nova_compute[260935]: 2025-10-11 09:15:55.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.561 2 DEBUG nova.network.neutron [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Updating instance_info_cache with network_info: [{"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.585 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.585 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Instance network_info: |[{"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.586 2 DEBUG oslo_concurrency.lockutils [req-e7e4e3da-6d72-45bb-8e2c-9307289a221b req-c5734003-c844-4cc4-a2c2-4485688827a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.587 2 DEBUG nova.network.neutron [req-e7e4e3da-6d72-45bb-8e2c-9307289a221b req-c5734003-c844-4cc4-a2c2-4485688827a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Refreshing network info cache for port cabc92ec-b61c-42d8-80b5-444ec773a568 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.592 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Start _get_guest_xml network_info=[{"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.600 2 WARNING nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:15:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.616 2 DEBUG nova.virt.libvirt.host [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.617 2 DEBUG nova.virt.libvirt.host [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.622 2 DEBUG nova.virt.libvirt.host [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.623 2 DEBUG nova.virt.libvirt.host [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.624 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.624 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.625 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.626 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.626 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.627 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.628 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.628 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.629 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.629 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.630 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.630 2 DEBUG nova.virt.hardware [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:15:56 compute-0 nova_compute[260935]: 2025-10-11 09:15:56.635 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:15:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:15:57 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1569846386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.149 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.177 2 DEBUG nova.storage.rbd_utils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 8354434f-daa4-4745-9755-bd2465f5459b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.182 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:15:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:15:57 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/620990903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.648 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.651 2 DEBUG nova.virt.libvirt.vif [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:15:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-901907022',display_name='tempest-TestNetworkBasicOps-server-901907022',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-901907022',id=108,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNSUDuwrUo8mPnl46KJgqVMBI/oUdDQJiZKLg3PcXdEoEbBCfynVBzjq5RcgXvWBbi7yER9lhbMWC7+xjyS7CKL6WdpVIbjuziOiJFncdqOQt0YKcX1oXGLKj/54Eha0rg==',key_name='tempest-TestNetworkBasicOps-1207558742',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-svb3ck8j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:15:52Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=8354434f-daa4-4745-9755-bd2465f5459b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.652 2 DEBUG nova.network.os_vif_util [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.654 2 DEBUG nova.network.os_vif_util [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:41:56,bridge_name='br-int',has_traffic_filtering=True,id=cabc92ec-b61c-42d8-80b5-444ec773a568,network=Network(3f56857a-dc0a-4d4f-92ae-d6806961b854),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcabc92ec-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.656 2 DEBUG nova.objects.instance [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8354434f-daa4-4745-9755-bd2465f5459b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.682 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:15:57 compute-0 nova_compute[260935]:   <uuid>8354434f-daa4-4745-9755-bd2465f5459b</uuid>
Oct 11 09:15:57 compute-0 nova_compute[260935]:   <name>instance-0000006c</name>
Oct 11 09:15:57 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:15:57 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:15:57 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkBasicOps-server-901907022</nova:name>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:15:56</nova:creationTime>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:15:57 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:15:57 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:15:57 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:15:57 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:15:57 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:15:57 compute-0 nova_compute[260935]:         <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:15:57 compute-0 nova_compute[260935]:         <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:15:57 compute-0 nova_compute[260935]:         <nova:port uuid="cabc92ec-b61c-42d8-80b5-444ec773a568">
Oct 11 09:15:57 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:15:57 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:15:57 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <system>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <entry name="serial">8354434f-daa4-4745-9755-bd2465f5459b</entry>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <entry name="uuid">8354434f-daa4-4745-9755-bd2465f5459b</entry>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     </system>
Oct 11 09:15:57 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:15:57 compute-0 nova_compute[260935]:   <os>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:   </os>
Oct 11 09:15:57 compute-0 nova_compute[260935]:   <features>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:   </features>
Oct 11 09:15:57 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:15:57 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:15:57 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/8354434f-daa4-4745-9755-bd2465f5459b_disk">
Oct 11 09:15:57 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       </source>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:15:57 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/8354434f-daa4-4745-9755-bd2465f5459b_disk.config">
Oct 11 09:15:57 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       </source>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:15:57 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:ec:41:56"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <target dev="tapcabc92ec-b6"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b/console.log" append="off"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <video>
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     </video>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:15:57 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:15:57 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:15:57 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:15:57 compute-0 nova_compute[260935]: </domain>
Oct 11 09:15:57 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.685 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Preparing to wait for external event network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.686 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "8354434f-daa4-4745-9755-bd2465f5459b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.686 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.687 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.689 2 DEBUG nova.virt.libvirt.vif [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:15:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-901907022',display_name='tempest-TestNetworkBasicOps-server-901907022',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-901907022',id=108,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNSUDuwrUo8mPnl46KJgqVMBI/oUdDQJiZKLg3PcXdEoEbBCfynVBzjq5RcgXvWBbi7yER9lhbMWC7+xjyS7CKL6WdpVIbjuziOiJFncdqOQt0YKcX1oXGLKj/54Eha0rg==',key_name='tempest-TestNetworkBasicOps-1207558742',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-svb3ck8j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:15:52Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=8354434f-daa4-4745-9755-bd2465f5459b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.690 2 DEBUG nova.network.os_vif_util [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.691 2 DEBUG nova.network.os_vif_util [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:41:56,bridge_name='br-int',has_traffic_filtering=True,id=cabc92ec-b61c-42d8-80b5-444ec773a568,network=Network(3f56857a-dc0a-4d4f-92ae-d6806961b854),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcabc92ec-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.692 2 DEBUG os_vif [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:41:56,bridge_name='br-int',has_traffic_filtering=True,id=cabc92ec-b61c-42d8-80b5-444ec773a568,network=Network(3f56857a-dc0a-4d4f-92ae-d6806961b854),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcabc92ec-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.694 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.696 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.701 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcabc92ec-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.703 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcabc92ec-b6, col_values=(('external_ids', {'iface-id': 'cabc92ec-b61c-42d8-80b5-444ec773a568', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:41:56', 'vm-uuid': '8354434f-daa4-4745-9755-bd2465f5459b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:57 compute-0 NetworkManager[44960]: <info>  [1760174157.7065] manager: (tapcabc92ec-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/432)
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.717 2 INFO os_vif [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:41:56,bridge_name='br-int',has_traffic_filtering=True,id=cabc92ec-b61c-42d8-80b5-444ec773a568,network=Network(3f56857a-dc0a-4d4f-92ae-d6806961b854),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcabc92ec-b6')
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.779 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.780 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.780 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:ec:41:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.781 2 INFO nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Using config drive
Oct 11 09:15:57 compute-0 nova_compute[260935]: 2025-10-11 09:15:57.817 2 DEBUG nova.storage.rbd_utils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 8354434f-daa4-4745-9755-bd2465f5459b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:15:57 compute-0 ceph-mon[74313]: pgmap v2223: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:15:57 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1569846386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:15:57 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/620990903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:15:58 compute-0 nova_compute[260935]: 2025-10-11 09:15:58.328 2 INFO nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Creating config drive at /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b/disk.config
Oct 11 09:15:58 compute-0 nova_compute[260935]: 2025-10-11 09:15:58.333 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp87o6q4cu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:15:58 compute-0 nova_compute[260935]: 2025-10-11 09:15:58.483 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp87o6q4cu" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:15:58 compute-0 nova_compute[260935]: 2025-10-11 09:15:58.511 2 DEBUG nova.storage.rbd_utils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 8354434f-daa4-4745-9755-bd2465f5459b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:15:58 compute-0 nova_compute[260935]: 2025-10-11 09:15:58.517 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b/disk.config 8354434f-daa4-4745-9755-bd2465f5459b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:15:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2224: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:15:58 compute-0 nova_compute[260935]: 2025-10-11 09:15:58.721 2 DEBUG oslo_concurrency.processutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b/disk.config 8354434f-daa4-4745-9755-bd2465f5459b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:15:58 compute-0 nova_compute[260935]: 2025-10-11 09:15:58.723 2 INFO nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Deleting local config drive /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b/disk.config because it was imported into RBD.
Oct 11 09:15:58 compute-0 podman[377414]: 2025-10-11 09:15:58.782702558 +0000 UTC m=+0.079972265 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:15:58 compute-0 kernel: tapcabc92ec-b6: entered promiscuous mode
Oct 11 09:15:58 compute-0 NetworkManager[44960]: <info>  [1760174158.7882] manager: (tapcabc92ec-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/433)
Oct 11 09:15:58 compute-0 ovn_controller[152945]: 2025-10-11T09:15:58Z|01054|binding|INFO|Claiming lport cabc92ec-b61c-42d8-80b5-444ec773a568 for this chassis.
Oct 11 09:15:58 compute-0 ovn_controller[152945]: 2025-10-11T09:15:58Z|01055|binding|INFO|cabc92ec-b61c-42d8-80b5-444ec773a568: Claiming fa:16:3e:ec:41:56 10.100.0.8
Oct 11 09:15:58 compute-0 nova_compute[260935]: 2025-10-11 09:15:58.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.802 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:41:56 10.100.0.8'], port_security=['fa:16:3e:ec:41:56 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8354434f-daa4-4745-9755-bd2465f5459b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f56857a-dc0a-4d4f-92ae-d6806961b854', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '437c3d45-aba0-41e1-8469-ba1a8b534642', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94b69aa0-e5aa-4772-b64f-f38dc39ac5a0, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=cabc92ec-b61c-42d8-80b5-444ec773a568) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:15:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.803 162815 INFO neutron.agent.ovn.metadata.agent [-] Port cabc92ec-b61c-42d8-80b5-444ec773a568 in datapath 3f56857a-dc0a-4d4f-92ae-d6806961b854 bound to our chassis
Oct 11 09:15:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.804 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3f56857a-dc0a-4d4f-92ae-d6806961b854
Oct 11 09:15:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.822 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[929a3f1c-643c-4c0d-86e4-60b31786f42f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.823 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3f56857a-d1 in ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:15:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.825 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3f56857a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:15:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.826 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f7198e72-6a2b-499e-a741-941e50c40e0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.826 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f4cb2a26-c651-4965-a237-3ba9d4a77230]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:58 compute-0 systemd-udevd[377446]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:15:58 compute-0 systemd-machined[215705]: New machine qemu-131-instance-0000006c.
Oct 11 09:15:58 compute-0 NetworkManager[44960]: <info>  [1760174158.8452] device (tapcabc92ec-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:15:58 compute-0 NetworkManager[44960]: <info>  [1760174158.8464] device (tapcabc92ec-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:15:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.844 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[7204d881-8fcc-4997-b0ec-aab2a4b5cf2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:58 compute-0 systemd[1]: Started Virtual Machine qemu-131-instance-0000006c.
Oct 11 09:15:58 compute-0 nova_compute[260935]: 2025-10-11 09:15:58.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.861 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4e5495-3b2d-4536-9a24-82ef4c484504]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:58 compute-0 ovn_controller[152945]: 2025-10-11T09:15:58Z|01056|binding|INFO|Setting lport cabc92ec-b61c-42d8-80b5-444ec773a568 ovn-installed in OVS
Oct 11 09:15:58 compute-0 ovn_controller[152945]: 2025-10-11T09:15:58Z|01057|binding|INFO|Setting lport cabc92ec-b61c-42d8-80b5-444ec773a568 up in Southbound
Oct 11 09:15:58 compute-0 nova_compute[260935]: 2025-10-11 09:15:58.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.893 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[43f63a93-6baf-487e-b193-c7f2d51d270f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:58 compute-0 NetworkManager[44960]: <info>  [1760174158.9001] manager: (tap3f56857a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/434)
Oct 11 09:15:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.898 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b3268674-2298-42d6-95dc-e0984102c207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:15:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.936 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0e5c79f6-d00e-4492-b8a8-ac8424039b52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.939 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc21416-e600-42df-99b8-d255e4fcfae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:58 compute-0 NetworkManager[44960]: <info>  [1760174158.9621] device (tap3f56857a-d0): carrier: link connected
Oct 11 09:15:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.967 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9b4ace29-cb32-42a7-b65e-62e5df464475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:58.987 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb9a21b-5568-4e88-b899-d07292af3669]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f56857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:81:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 306], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600366, 'reachable_time': 27415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377479, 'error': None, 'target': 'ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.004 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ba283a04-7e26-4ebd-9638-c6494bcb62a3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:81b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600366, 'tstamp': 600366}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377480, 'error': None, 'target': 'ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.030 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3402dc0f-75c1-4102-bc65-862ba15d6c28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f56857a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:81:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 306], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600366, 'reachable_time': 27415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377481, 'error': None, 'target': 'ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.068 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[70a0ad37-8d8a-46e6-9dfb-360794e389f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.120 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b77161e0-70ca-4526-ad71-7ac4954058cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.122 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f56857a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.122 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.122 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f56857a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:15:59 compute-0 nova_compute[260935]: 2025-10-11 09:15:59.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:59 compute-0 NetworkManager[44960]: <info>  [1760174159.1246] manager: (tap3f56857a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Oct 11 09:15:59 compute-0 kernel: tap3f56857a-d0: entered promiscuous mode
Oct 11 09:15:59 compute-0 nova_compute[260935]: 2025-10-11 09:15:59.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.129 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3f56857a-d0, col_values=(('external_ids', {'iface-id': '58ef9b4a-8b66-4d5d-ac05-f694b2a9b216'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:15:59 compute-0 ovn_controller[152945]: 2025-10-11T09:15:59Z|01058|binding|INFO|Releasing lport 58ef9b4a-8b66-4d5d-ac05-f694b2a9b216 from this chassis (sb_readonly=0)
Oct 11 09:15:59 compute-0 nova_compute[260935]: 2025-10-11 09:15:59.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:59 compute-0 nova_compute[260935]: 2025-10-11 09:15:59.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.134 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3f56857a-dc0a-4d4f-92ae-d6806961b854.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3f56857a-dc0a-4d4f-92ae-d6806961b854.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.134 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c813ae73-b210-41a5-abbb-b18d8d00ae36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.135 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-3f56857a-dc0a-4d4f-92ae-d6806961b854
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/3f56857a-dc0a-4d4f-92ae-d6806961b854.pid.haproxy
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 3f56857a-dc0a-4d4f-92ae-d6806961b854
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:15:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:15:59.137 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854', 'env', 'PROCESS_TAG=haproxy-3f56857a-dc0a-4d4f-92ae-d6806961b854', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3f56857a-dc0a-4d4f-92ae-d6806961b854.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:15:59 compute-0 nova_compute[260935]: 2025-10-11 09:15:59.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:15:59 compute-0 nova_compute[260935]: 2025-10-11 09:15:59.240 2 DEBUG nova.network.neutron [req-e7e4e3da-6d72-45bb-8e2c-9307289a221b req-c5734003-c844-4cc4-a2c2-4485688827a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Updated VIF entry in instance network info cache for port cabc92ec-b61c-42d8-80b5-444ec773a568. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:15:59 compute-0 nova_compute[260935]: 2025-10-11 09:15:59.241 2 DEBUG nova.network.neutron [req-e7e4e3da-6d72-45bb-8e2c-9307289a221b req-c5734003-c844-4cc4-a2c2-4485688827a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Updating instance_info_cache with network_info: [{"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:15:59 compute-0 nova_compute[260935]: 2025-10-11 09:15:59.262 2 DEBUG oslo_concurrency.lockutils [req-e7e4e3da-6d72-45bb-8e2c-9307289a221b req-c5734003-c844-4cc4-a2c2-4485688827a1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:15:59 compute-0 podman[377555]: 2025-10-11 09:15:59.504737863 +0000 UTC m=+0.050805917 container create 56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 09:15:59 compute-0 systemd[1]: Started libpod-conmon-56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97.scope.
Oct 11 09:15:59 compute-0 nova_compute[260935]: 2025-10-11 09:15:59.564 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174159.5637765, 8354434f-daa4-4745-9755-bd2465f5459b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:15:59 compute-0 nova_compute[260935]: 2025-10-11 09:15:59.565 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] VM Started (Lifecycle Event)
Oct 11 09:15:59 compute-0 podman[377555]: 2025-10-11 09:15:59.479077483 +0000 UTC m=+0.025145567 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:15:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:15:59 compute-0 nova_compute[260935]: 2025-10-11 09:15:59.586 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ea3fde6a616ac783e0a41154d62589ff39ccb058913321360e01f0736e2297/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:15:59 compute-0 nova_compute[260935]: 2025-10-11 09:15:59.591 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174159.5639136, 8354434f-daa4-4745-9755-bd2465f5459b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:15:59 compute-0 nova_compute[260935]: 2025-10-11 09:15:59.592 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] VM Paused (Lifecycle Event)
Oct 11 09:15:59 compute-0 podman[377555]: 2025-10-11 09:15:59.604619048 +0000 UTC m=+0.150687152 container init 56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 11 09:15:59 compute-0 podman[377555]: 2025-10-11 09:15:59.611295643 +0000 UTC m=+0.157363707 container start 56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 09:15:59 compute-0 nova_compute[260935]: 2025-10-11 09:15:59.613 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:15:59 compute-0 nova_compute[260935]: 2025-10-11 09:15:59.618 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:15:59 compute-0 neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854[377571]: [NOTICE]   (377575) : New worker (377577) forked
Oct 11 09:15:59 compute-0 neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854[377571]: [NOTICE]   (377575) : Loading success.
Oct 11 09:15:59 compute-0 nova_compute[260935]: 2025-10-11 09:15:59.641 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:15:59 compute-0 ceph-mon[74313]: pgmap v2224: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:16:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2225: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:16:00 compute-0 nova_compute[260935]: 2025-10-11 09:16:00.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:01 compute-0 ceph-mon[74313]: pgmap v2225: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:16:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 11 09:16:02 compute-0 nova_compute[260935]: 2025-10-11 09:16:02.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:03 compute-0 ceph-mon[74313]: pgmap v2226: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 11 09:16:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:16:04 compute-0 unix_chkpwd[377588]: password check failed for user (root)
Oct 11 09:16:04 compute-0 sshd-session[377586]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:16:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2227: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:16:04 compute-0 podman[377589]: 2025-10-11 09:16:04.78727333 +0000 UTC m=+0.082278368 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid)
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:16:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.221 2 DEBUG nova.compute.manager [req-eb52a615-6a2b-443b-ab9a-733d3cefb827 req-ad588793-81df-43cf-81fd-7894e5f95be3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.221 2 DEBUG oslo_concurrency.lockutils [req-eb52a615-6a2b-443b-ab9a-733d3cefb827 req-ad588793-81df-43cf-81fd-7894e5f95be3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8354434f-daa4-4745-9755-bd2465f5459b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.221 2 DEBUG oslo_concurrency.lockutils [req-eb52a615-6a2b-443b-ab9a-733d3cefb827 req-ad588793-81df-43cf-81fd-7894e5f95be3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.222 2 DEBUG oslo_concurrency.lockutils [req-eb52a615-6a2b-443b-ab9a-733d3cefb827 req-ad588793-81df-43cf-81fd-7894e5f95be3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.222 2 DEBUG nova.compute.manager [req-eb52a615-6a2b-443b-ab9a-733d3cefb827 req-ad588793-81df-43cf-81fd-7894e5f95be3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Processing event network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.223 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.226 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174165.2262444, 8354434f-daa4-4745-9755-bd2465f5459b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.226 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] VM Resumed (Lifecycle Event)
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.229 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.232 2 INFO nova.virt.libvirt.driver [-] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Instance spawned successfully.
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.232 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.252 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.258 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.262 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.263 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.263 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.263 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.264 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.264 2 DEBUG nova.virt.libvirt.driver [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.292 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.330 2 INFO nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Took 12.54 seconds to spawn the instance on the hypervisor.
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.331 2 DEBUG nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.410 2 INFO nova.compute.manager [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Took 13.68 seconds to build instance.
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.427 2 DEBUG oslo_concurrency.lockutils [None req-b7745654-f0c1-45ec-b80e-65effeedb649 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:05 compute-0 sshd-session[377586]: Failed password for root from 165.232.82.252 port 38988 ssh2
Oct 11 09:16:05 compute-0 ceph-mon[74313]: pgmap v2227: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:16:05 compute-0 nova_compute[260935]: 2025-10-11 09:16:05.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:06 compute-0 sshd-session[377586]: Connection closed by authenticating user root 165.232.82.252 port 38988 [preauth]
Oct 11 09:16:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:16:07 compute-0 sudo[377611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:16:07 compute-0 sudo[377611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:07 compute-0 sudo[377611]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:07 compute-0 sudo[377636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:16:07 compute-0 sudo[377636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:07 compute-0 sudo[377636]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:07 compute-0 nova_compute[260935]: 2025-10-11 09:16:07.346 2 DEBUG nova.compute.manager [req-30f62ef4-ded5-4c0e-8488-740caddeae9d req-7e6c287a-2033-4479-ac7c-2154e35474f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:16:07 compute-0 nova_compute[260935]: 2025-10-11 09:16:07.348 2 DEBUG oslo_concurrency.lockutils [req-30f62ef4-ded5-4c0e-8488-740caddeae9d req-7e6c287a-2033-4479-ac7c-2154e35474f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8354434f-daa4-4745-9755-bd2465f5459b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:07 compute-0 nova_compute[260935]: 2025-10-11 09:16:07.349 2 DEBUG oslo_concurrency.lockutils [req-30f62ef4-ded5-4c0e-8488-740caddeae9d req-7e6c287a-2033-4479-ac7c-2154e35474f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:07 compute-0 nova_compute[260935]: 2025-10-11 09:16:07.349 2 DEBUG oslo_concurrency.lockutils [req-30f62ef4-ded5-4c0e-8488-740caddeae9d req-7e6c287a-2033-4479-ac7c-2154e35474f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:07 compute-0 nova_compute[260935]: 2025-10-11 09:16:07.350 2 DEBUG nova.compute.manager [req-30f62ef4-ded5-4c0e-8488-740caddeae9d req-7e6c287a-2033-4479-ac7c-2154e35474f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] No waiting events found dispatching network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:16:07 compute-0 nova_compute[260935]: 2025-10-11 09:16:07.351 2 WARNING nova.compute.manager [req-30f62ef4-ded5-4c0e-8488-740caddeae9d req-7e6c287a-2033-4479-ac7c-2154e35474f8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received unexpected event network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 for instance with vm_state active and task_state None.
Oct 11 09:16:07 compute-0 sudo[377661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:16:07 compute-0 sudo[377661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:07 compute-0 sudo[377661]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:07 compute-0 sudo[377686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 09:16:07 compute-0 sudo[377686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:07 compute-0 nova_compute[260935]: 2025-10-11 09:16:07.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:07 compute-0 ceph-mon[74313]: pgmap v2228: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:16:08 compute-0 podman[377783]: 2025-10-11 09:16:08.098036469 +0000 UTC m=+0.071930992 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 09:16:08 compute-0 podman[377783]: 2025-10-11 09:16:08.212168898 +0000 UTC m=+0.186063431 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:16:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:16:08 compute-0 nova_compute[260935]: 2025-10-11 09:16:08.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:08 compute-0 NetworkManager[44960]: <info>  [1760174168.7469] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/436)
Oct 11 09:16:08 compute-0 NetworkManager[44960]: <info>  [1760174168.7495] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/437)
Oct 11 09:16:08 compute-0 nova_compute[260935]: 2025-10-11 09:16:08.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:08 compute-0 ovn_controller[152945]: 2025-10-11T09:16:08Z|01059|binding|INFO|Releasing lport 58ef9b4a-8b66-4d5d-ac05-f694b2a9b216 from this chassis (sb_readonly=0)
Oct 11 09:16:08 compute-0 ovn_controller[152945]: 2025-10-11T09:16:08Z|01060|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:16:08 compute-0 ovn_controller[152945]: 2025-10-11T09:16:08Z|01061|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:16:08 compute-0 nova_compute[260935]: 2025-10-11 09:16:08.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:16:09 compute-0 sudo[377686]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:16:09 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:16:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:16:09 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:16:09 compute-0 sudo[377939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:16:09 compute-0 sudo[377939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:09 compute-0 sudo[377939]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:09 compute-0 sudo[377964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:16:09 compute-0 sudo[377964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:09 compute-0 sudo[377964]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:09 compute-0 sudo[377989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:16:09 compute-0 sudo[377989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:09 compute-0 sudo[377989]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:09 compute-0 sudo[378014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:16:09 compute-0 sudo[378014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:09 compute-0 nova_compute[260935]: 2025-10-11 09:16:09.486 2 DEBUG nova.compute.manager [req-f203fb46-db47-4782-8995-e686bccb57ba req-958a762b-fa78-4de4-9b7c-6730cedc6591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-changed-cabc92ec-b61c-42d8-80b5-444ec773a568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:16:09 compute-0 nova_compute[260935]: 2025-10-11 09:16:09.487 2 DEBUG nova.compute.manager [req-f203fb46-db47-4782-8995-e686bccb57ba req-958a762b-fa78-4de4-9b7c-6730cedc6591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Refreshing instance network info cache due to event network-changed-cabc92ec-b61c-42d8-80b5-444ec773a568. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:16:09 compute-0 nova_compute[260935]: 2025-10-11 09:16:09.488 2 DEBUG oslo_concurrency.lockutils [req-f203fb46-db47-4782-8995-e686bccb57ba req-958a762b-fa78-4de4-9b7c-6730cedc6591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:16:09 compute-0 nova_compute[260935]: 2025-10-11 09:16:09.489 2 DEBUG oslo_concurrency.lockutils [req-f203fb46-db47-4782-8995-e686bccb57ba req-958a762b-fa78-4de4-9b7c-6730cedc6591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:16:09 compute-0 nova_compute[260935]: 2025-10-11 09:16:09.489 2 DEBUG nova.network.neutron [req-f203fb46-db47-4782-8995-e686bccb57ba req-958a762b-fa78-4de4-9b7c-6730cedc6591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Refreshing network info cache for port cabc92ec-b61c-42d8-80b5-444ec773a568 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:16:09 compute-0 ceph-mon[74313]: pgmap v2229: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:16:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:16:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:16:10 compute-0 sudo[378014]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:16:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:16:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:16:10 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:16:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:16:10 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:16:10 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev dbac6785-7197-44e2-88f5-426d09db3d69 does not exist
Oct 11 09:16:10 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev c18e0a9c-21bd-443c-a0b6-907040c6e159 does not exist
Oct 11 09:16:10 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 98913d9e-4926-4ae6-8a63-c153a5b44158 does not exist
Oct 11 09:16:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:16:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:16:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:16:10 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:16:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:16:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:16:10 compute-0 sudo[378070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:16:10 compute-0 sudo[378070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:10 compute-0 sudo[378070]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:10 compute-0 podman[378094]: 2025-10-11 09:16:10.457398543 +0000 UTC m=+0.104911675 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:16:10 compute-0 sudo[378107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:16:10 compute-0 sudo[378107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:10 compute-0 sudo[378107]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:10 compute-0 sudo[378162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:16:10 compute-0 sudo[378162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:10 compute-0 sudo[378162]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:10 compute-0 podman[378095]: 2025-10-11 09:16:10.60572086 +0000 UTC m=+0.251482943 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:16:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:16:10 compute-0 sudo[378190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:16:10 compute-0 sudo[378190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:10 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:16:10 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:16:10 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:16:10 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:16:10 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:16:10 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:16:10 compute-0 nova_compute[260935]: 2025-10-11 09:16:10.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:11 compute-0 podman[378256]: 2025-10-11 09:16:11.025964221 +0000 UTC m=+0.071750887 container create 63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:16:11 compute-0 systemd[1]: Started libpod-conmon-63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d.scope.
Oct 11 09:16:11 compute-0 podman[378256]: 2025-10-11 09:16:10.995203809 +0000 UTC m=+0.040990535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:16:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:16:11 compute-0 podman[378256]: 2025-10-11 09:16:11.125407943 +0000 UTC m=+0.171194669 container init 63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 09:16:11 compute-0 podman[378256]: 2025-10-11 09:16:11.133186128 +0000 UTC m=+0.178972774 container start 63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:16:11 compute-0 podman[378256]: 2025-10-11 09:16:11.13687247 +0000 UTC m=+0.182659146 container attach 63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:16:11 compute-0 relaxed_volhard[378273]: 167 167
Oct 11 09:16:11 compute-0 systemd[1]: libpod-63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d.scope: Deactivated successfully.
Oct 11 09:16:11 compute-0 conmon[378273]: conmon 63db8bdf6c92ffebbc62 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d.scope/container/memory.events
Oct 11 09:16:11 compute-0 podman[378256]: 2025-10-11 09:16:11.144211674 +0000 UTC m=+0.189998320 container died 63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 09:16:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c29e61b4a39f08640b08433077ec266d2e94403b82e3514f818275fdcf19b5f-merged.mount: Deactivated successfully.
Oct 11 09:16:11 compute-0 podman[378256]: 2025-10-11 09:16:11.197434117 +0000 UTC m=+0.243220773 container remove 63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:16:11 compute-0 systemd[1]: libpod-conmon-63db8bdf6c92ffebbc62993debdcb316bcd48fa310f74eb62b3542754b59b24d.scope: Deactivated successfully.
Oct 11 09:16:11 compute-0 podman[378299]: 2025-10-11 09:16:11.475314238 +0000 UTC m=+0.088770828 container create 6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:16:11 compute-0 podman[378299]: 2025-10-11 09:16:11.433962874 +0000 UTC m=+0.047419444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:16:11 compute-0 systemd[1]: Started libpod-conmon-6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564.scope.
Oct 11 09:16:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767e5cdb46a400d8460bb807259ff44a2d1a9bf6a37cef13ec6e38775f6eb82c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767e5cdb46a400d8460bb807259ff44a2d1a9bf6a37cef13ec6e38775f6eb82c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767e5cdb46a400d8460bb807259ff44a2d1a9bf6a37cef13ec6e38775f6eb82c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767e5cdb46a400d8460bb807259ff44a2d1a9bf6a37cef13ec6e38775f6eb82c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767e5cdb46a400d8460bb807259ff44a2d1a9bf6a37cef13ec6e38775f6eb82c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:16:11 compute-0 podman[378299]: 2025-10-11 09:16:11.616580478 +0000 UTC m=+0.230037128 container init 6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 09:16:11 compute-0 podman[378299]: 2025-10-11 09:16:11.628079957 +0000 UTC m=+0.241536517 container start 6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:16:11 compute-0 podman[378299]: 2025-10-11 09:16:11.631587544 +0000 UTC m=+0.245044194 container attach 6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:16:11 compute-0 nova_compute[260935]: 2025-10-11 09:16:11.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:16:11 compute-0 ceph-mon[74313]: pgmap v2230: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:16:12 compute-0 nova_compute[260935]: 2025-10-11 09:16:12.173 2 DEBUG nova.network.neutron [req-f203fb46-db47-4782-8995-e686bccb57ba req-958a762b-fa78-4de4-9b7c-6730cedc6591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Updated VIF entry in instance network info cache for port cabc92ec-b61c-42d8-80b5-444ec773a568. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:16:12 compute-0 nova_compute[260935]: 2025-10-11 09:16:12.173 2 DEBUG nova.network.neutron [req-f203fb46-db47-4782-8995-e686bccb57ba req-958a762b-fa78-4de4-9b7c-6730cedc6591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Updating instance_info_cache with network_info: [{"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:16:12 compute-0 ovn_controller[152945]: 2025-10-11T09:16:12Z|01062|binding|INFO|Releasing lport 58ef9b4a-8b66-4d5d-ac05-f694b2a9b216 from this chassis (sb_readonly=0)
Oct 11 09:16:12 compute-0 ovn_controller[152945]: 2025-10-11T09:16:12Z|01063|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:16:12 compute-0 ovn_controller[152945]: 2025-10-11T09:16:12Z|01064|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:16:12 compute-0 nova_compute[260935]: 2025-10-11 09:16:12.208 2 DEBUG oslo_concurrency.lockutils [req-f203fb46-db47-4782-8995-e686bccb57ba req-958a762b-fa78-4de4-9b7c-6730cedc6591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:16:12 compute-0 nova_compute[260935]: 2025-10-11 09:16:12.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2231: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:16:12 compute-0 nova_compute[260935]: 2025-10-11 09:16:12.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:12 compute-0 busy_clarke[378316]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:16:12 compute-0 busy_clarke[378316]: --> relative data size: 1.0
Oct 11 09:16:12 compute-0 busy_clarke[378316]: --> All data devices are unavailable
Oct 11 09:16:12 compute-0 systemd[1]: libpod-6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564.scope: Deactivated successfully.
Oct 11 09:16:12 compute-0 systemd[1]: libpod-6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564.scope: Consumed 1.052s CPU time.
Oct 11 09:16:12 compute-0 podman[378299]: 2025-10-11 09:16:12.760339596 +0000 UTC m=+1.373796186 container died 6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 09:16:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-767e5cdb46a400d8460bb807259ff44a2d1a9bf6a37cef13ec6e38775f6eb82c-merged.mount: Deactivated successfully.
Oct 11 09:16:12 compute-0 podman[378299]: 2025-10-11 09:16:12.844977469 +0000 UTC m=+1.458434019 container remove 6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 09:16:12 compute-0 systemd[1]: libpod-conmon-6687c6354db2ea962a3feda63e2846d9a87759b36985dd5c40014dbf666f1564.scope: Deactivated successfully.
Oct 11 09:16:12 compute-0 sudo[378190]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:12 compute-0 ceph-mon[74313]: pgmap v2231: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:16:12 compute-0 sudo[378357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:16:12 compute-0 sudo[378357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:12 compute-0 sudo[378357]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:13 compute-0 sudo[378382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:16:13 compute-0 sudo[378382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:13 compute-0 sudo[378382]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:13 compute-0 sudo[378407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:16:13 compute-0 sudo[378407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:13 compute-0 sudo[378407]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:13 compute-0 sudo[378432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:16:13 compute-0 sudo[378432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:13 compute-0 podman[378497]: 2025-10-11 09:16:13.573064861 +0000 UTC m=+0.037794327 container create b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_saha, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 09:16:13 compute-0 systemd[1]: Started libpod-conmon-b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a.scope.
Oct 11 09:16:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:16:13 compute-0 podman[378497]: 2025-10-11 09:16:13.638712308 +0000 UTC m=+0.103441834 container init b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_saha, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 09:16:13 compute-0 podman[378497]: 2025-10-11 09:16:13.65031392 +0000 UTC m=+0.115043396 container start b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_saha, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 09:16:13 compute-0 podman[378497]: 2025-10-11 09:16:13.556108242 +0000 UTC m=+0.020837728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:16:13 compute-0 podman[378497]: 2025-10-11 09:16:13.654081094 +0000 UTC m=+0.118810660 container attach b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_saha, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:16:13 compute-0 naughty_saha[378513]: 167 167
Oct 11 09:16:13 compute-0 systemd[1]: libpod-b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a.scope: Deactivated successfully.
Oct 11 09:16:13 compute-0 podman[378497]: 2025-10-11 09:16:13.655675548 +0000 UTC m=+0.120405014 container died b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_saha, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 11 09:16:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec8f18c7e3465ab83c7dd6e1c7f90421731946949830374c6bc51214b00f7bee-merged.mount: Deactivated successfully.
Oct 11 09:16:13 compute-0 podman[378497]: 2025-10-11 09:16:13.691310344 +0000 UTC m=+0.156039820 container remove b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_saha, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:16:13 compute-0 systemd[1]: libpod-conmon-b912405845f5c0875a009661421e2c3327933f6c1a4334c0e3175c0f009f138a.scope: Deactivated successfully.
Oct 11 09:16:13 compute-0 podman[378536]: 2025-10-11 09:16:13.890374024 +0000 UTC m=+0.052827013 container create f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_rubin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 09:16:13 compute-0 systemd[1]: Started libpod-conmon-f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486.scope.
Oct 11 09:16:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:16:13 compute-0 podman[378536]: 2025-10-11 09:16:13.862330608 +0000 UTC m=+0.024783617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:16:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8023906f7774a1cd2f2fc89ed07d92384bfda036e56be970260f7d6774f260a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8023906f7774a1cd2f2fc89ed07d92384bfda036e56be970260f7d6774f260a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8023906f7774a1cd2f2fc89ed07d92384bfda036e56be970260f7d6774f260a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8023906f7774a1cd2f2fc89ed07d92384bfda036e56be970260f7d6774f260a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:16:13 compute-0 podman[378536]: 2025-10-11 09:16:13.991288357 +0000 UTC m=+0.153741296 container init f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_rubin, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:16:14 compute-0 podman[378536]: 2025-10-11 09:16:14.007490516 +0000 UTC m=+0.169943495 container start f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_rubin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:16:14 compute-0 podman[378536]: 2025-10-11 09:16:14.01160899 +0000 UTC m=+0.174061929 container attach f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_rubin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:16:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2232: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Oct 11 09:16:14 compute-0 infallible_rubin[378553]: {
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:     "0": [
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:         {
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "devices": [
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "/dev/loop3"
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             ],
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "lv_name": "ceph_lv0",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "lv_size": "21470642176",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "name": "ceph_lv0",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "tags": {
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.cluster_name": "ceph",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.crush_device_class": "",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.encrypted": "0",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.osd_id": "0",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.type": "block",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.vdo": "0"
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             },
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "type": "block",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "vg_name": "ceph_vg0"
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:         }
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:     ],
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:     "1": [
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:         {
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "devices": [
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "/dev/loop4"
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             ],
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "lv_name": "ceph_lv1",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "lv_size": "21470642176",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "name": "ceph_lv1",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "tags": {
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.cluster_name": "ceph",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.crush_device_class": "",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.encrypted": "0",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.osd_id": "1",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.type": "block",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.vdo": "0"
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             },
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "type": "block",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "vg_name": "ceph_vg1"
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:         }
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:     ],
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:     "2": [
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:         {
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "devices": [
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "/dev/loop5"
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             ],
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "lv_name": "ceph_lv2",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "lv_size": "21470642176",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "name": "ceph_lv2",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "tags": {
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.cluster_name": "ceph",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.crush_device_class": "",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.encrypted": "0",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.osd_id": "2",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.type": "block",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:                 "ceph.vdo": "0"
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             },
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "type": "block",
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:             "vg_name": "ceph_vg2"
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:         }
Oct 11 09:16:14 compute-0 infallible_rubin[378553]:     ]
Oct 11 09:16:14 compute-0 infallible_rubin[378553]: }
Oct 11 09:16:14 compute-0 systemd[1]: libpod-f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486.scope: Deactivated successfully.
Oct 11 09:16:14 compute-0 podman[378562]: 2025-10-11 09:16:14.941681804 +0000 UTC m=+0.071461889 container died f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_rubin, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:16:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8023906f7774a1cd2f2fc89ed07d92384bfda036e56be970260f7d6774f260a-merged.mount: Deactivated successfully.
Oct 11 09:16:15 compute-0 podman[378562]: 2025-10-11 09:16:15.007419473 +0000 UTC m=+0.137199478 container remove f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_rubin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 09:16:15 compute-0 systemd[1]: libpod-conmon-f280f480b116bf3205125bfcefb11bbdafed2368ef8bc9eb70598f4a6a745486.scope: Deactivated successfully.
Oct 11 09:16:15 compute-0 sudo[378432]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:15 compute-0 sudo[378576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:16:15 compute-0 sudo[378576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:15 compute-0 sudo[378576]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:15.211 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:15.212 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:15.213 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:15 compute-0 sudo[378601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:16:15 compute-0 sudo[378601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:15 compute-0 sudo[378601]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:15 compute-0 sudo[378626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:16:15 compute-0 sudo[378626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:15 compute-0 sudo[378626]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:15 compute-0 sudo[378651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:16:15 compute-0 sudo[378651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:15 compute-0 ceph-mon[74313]: pgmap v2232: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Oct 11 09:16:15 compute-0 podman[378716]: 2025-10-11 09:16:15.944414567 +0000 UTC m=+0.056982328 container create 0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 09:16:15 compute-0 nova_compute[260935]: 2025-10-11 09:16:15.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:15 compute-0 systemd[1]: Started libpod-conmon-0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f.scope.
Oct 11 09:16:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:16:16 compute-0 podman[378716]: 2025-10-11 09:16:15.92571322 +0000 UTC m=+0.038281001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:16:16 compute-0 podman[378716]: 2025-10-11 09:16:16.03987375 +0000 UTC m=+0.152441521 container init 0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 09:16:16 compute-0 podman[378716]: 2025-10-11 09:16:16.048961111 +0000 UTC m=+0.161528872 container start 0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 09:16:16 compute-0 podman[378716]: 2025-10-11 09:16:16.052451668 +0000 UTC m=+0.165019469 container attach 0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:16:16 compute-0 agitated_mayer[378732]: 167 167
Oct 11 09:16:16 compute-0 systemd[1]: libpod-0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f.scope: Deactivated successfully.
Oct 11 09:16:16 compute-0 podman[378716]: 2025-10-11 09:16:16.055752029 +0000 UTC m=+0.168319790 container died 0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct 11 09:16:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1189a7b68178d54d997bff7cc8fdcb847acf3b81d34b192383ed19ba71faaa5e-merged.mount: Deactivated successfully.
Oct 11 09:16:16 compute-0 podman[378716]: 2025-10-11 09:16:16.09299754 +0000 UTC m=+0.205565301 container remove 0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 09:16:16 compute-0 systemd[1]: libpod-conmon-0fdcc8bda2d7904c47042b20c19d525d0c78aa04c620eec9ddad8bb14e96400f.scope: Deactivated successfully.
Oct 11 09:16:16 compute-0 podman[378758]: 2025-10-11 09:16:16.288047819 +0000 UTC m=+0.044474282 container create f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:16:16 compute-0 systemd[1]: Started libpod-conmon-f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5.scope.
Oct 11 09:16:16 compute-0 podman[378758]: 2025-10-11 09:16:16.27147215 +0000 UTC m=+0.027898633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:16:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7d8eb77559754a46c4f7429b6327fd22af0b0701378c374b2f5a359e1c0cf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7d8eb77559754a46c4f7429b6327fd22af0b0701378c374b2f5a359e1c0cf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7d8eb77559754a46c4f7429b6327fd22af0b0701378c374b2f5a359e1c0cf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7d8eb77559754a46c4f7429b6327fd22af0b0701378c374b2f5a359e1c0cf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:16:16 compute-0 podman[378758]: 2025-10-11 09:16:16.407753482 +0000 UTC m=+0.164179965 container init f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:16:16 compute-0 podman[378758]: 2025-10-11 09:16:16.419168268 +0000 UTC m=+0.175594731 container start f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:16:16 compute-0 podman[378758]: 2025-10-11 09:16:16.422215203 +0000 UTC m=+0.178641666 container attach f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:16:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]: {
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:         "osd_id": 2,
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:         "type": "bluestore"
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:     },
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:         "osd_id": 0,
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:         "type": "bluestore"
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:     },
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:         "osd_id": 1,
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:         "type": "bluestore"
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]:     }
Oct 11 09:16:17 compute-0 reverent_sutherland[378774]: }
Oct 11 09:16:17 compute-0 systemd[1]: libpod-f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5.scope: Deactivated successfully.
Oct 11 09:16:17 compute-0 podman[378758]: 2025-10-11 09:16:17.368747622 +0000 UTC m=+1.125174125 container died f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:16:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a7d8eb77559754a46c4f7429b6327fd22af0b0701378c374b2f5a359e1c0cf1-merged.mount: Deactivated successfully.
Oct 11 09:16:17 compute-0 podman[378758]: 2025-10-11 09:16:17.423007494 +0000 UTC m=+1.179433967 container remove f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:16:17 compute-0 systemd[1]: libpod-conmon-f7666642b0860ef73fecc1e2b77765d459baac86834af4f0fabc9119fb7433a5.scope: Deactivated successfully.
Oct 11 09:16:17 compute-0 sudo[378651]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:16:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:16:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:16:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:16:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev c9492410-593d-4f69-9238-9bad0a9c9580 does not exist
Oct 11 09:16:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev c56147f0-4ebf-4b49-8ca3-61f24c511ef6 does not exist
Oct 11 09:16:17 compute-0 sudo[378818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:16:17 compute-0 sudo[378818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:17 compute-0 sudo[378818]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:17 compute-0 sudo[378843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:16:17 compute-0 sudo[378843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:16:17 compute-0 sudo[378843]: pam_unix(sudo:session): session closed for user root
Oct 11 09:16:17 compute-0 ceph-mon[74313]: pgmap v2233: 321 pgs: 321 active+clean; 374 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:16:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:16:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:16:17 compute-0 nova_compute[260935]: 2025-10-11 09:16:17.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:17 compute-0 ovn_controller[152945]: 2025-10-11T09:16:17Z|01065|binding|INFO|Releasing lport 58ef9b4a-8b66-4d5d-ac05-f694b2a9b216 from this chassis (sb_readonly=0)
Oct 11 09:16:17 compute-0 ovn_controller[152945]: 2025-10-11T09:16:17Z|01066|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:16:17 compute-0 ovn_controller[152945]: 2025-10-11T09:16:17Z|01067|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:16:17 compute-0 nova_compute[260935]: 2025-10-11 09:16:17.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:18 compute-0 ovn_controller[152945]: 2025-10-11T09:16:18Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:41:56 10.100.0.8
Oct 11 09:16:18 compute-0 ovn_controller[152945]: 2025-10-11T09:16:18Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:41:56 10.100.0.8
Oct 11 09:16:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2234: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Oct 11 09:16:18 compute-0 sshd-session[378868]: Invalid user rahul from 152.32.213.170 port 39300
Oct 11 09:16:18 compute-0 sshd-session[378868]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:16:18 compute-0 sshd-session[378868]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:16:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:16:19 compute-0 ceph-mon[74313]: pgmap v2234: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Oct 11 09:16:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 11 09:16:20 compute-0 sshd-session[378868]: Failed password for invalid user rahul from 152.32.213.170 port 39300 ssh2
Oct 11 09:16:20 compute-0 nova_compute[260935]: 2025-10-11 09:16:20.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:21 compute-0 sshd-session[378868]: Received disconnect from 152.32.213.170 port 39300:11: Bye Bye [preauth]
Oct 11 09:16:21 compute-0 sshd-session[378868]: Disconnected from invalid user rahul 152.32.213.170 port 39300 [preauth]
Oct 11 09:16:21 compute-0 ceph-mon[74313]: pgmap v2235: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 11 09:16:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:16:22 compute-0 nova_compute[260935]: 2025-10-11 09:16:22.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:23 compute-0 ceph-mon[74313]: pgmap v2236: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:16:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:16:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:16:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:16:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:16:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:16:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:16:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:16:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:16:24 compute-0 nova_compute[260935]: 2025-10-11 09:16:24.856 2 INFO nova.compute.manager [None req-ed4a3496-921f-4e8d-a19a-35da0aa2d8e0 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Get console output
Oct 11 09:16:24 compute-0 nova_compute[260935]: 2025-10-11 09:16:24.870 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:16:25 compute-0 nova_compute[260935]: 2025-10-11 09:16:25.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:16:25 compute-0 nova_compute[260935]: 2025-10-11 09:16:25.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:16:25 compute-0 ceph-mon[74313]: pgmap v2237: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:16:26 compute-0 nova_compute[260935]: 2025-10-11 09:16:26.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:26 compute-0 sshd-session[378870]: Invalid user mati from 155.4.244.179 port 61226
Oct 11 09:16:26 compute-0 sshd-session[378870]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:16:26 compute-0 sshd-session[378870]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:16:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:16:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:16:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2767573846' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:16:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:16:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2767573846' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:16:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2767573846' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:16:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2767573846' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:16:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:27.359 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:16:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:27.360 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:16:27 compute-0 nova_compute[260935]: 2025-10-11 09:16:27.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:27 compute-0 nova_compute[260935]: 2025-10-11 09:16:27.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:16:27 compute-0 nova_compute[260935]: 2025-10-11 09:16:27.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:27 compute-0 ceph-mon[74313]: pgmap v2238: 321 pgs: 321 active+clean; 407 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:16:28 compute-0 sshd-session[378870]: Failed password for invalid user mati from 155.4.244.179 port 61226 ssh2
Oct 11 09:16:28 compute-0 sshd-session[378870]: Received disconnect from 155.4.244.179 port 61226:11: Bye Bye [preauth]
Oct 11 09:16:28 compute-0 sshd-session[378870]: Disconnected from invalid user mati 155.4.244.179 port 61226 [preauth]
Oct 11 09:16:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2239: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:16:28 compute-0 nova_compute[260935]: 2025-10-11 09:16:28.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:16:28 compute-0 nova_compute[260935]: 2025-10-11 09:16:28.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:16:28 compute-0 nova_compute[260935]: 2025-10-11 09:16:28.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:16:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:16:29 compute-0 nova_compute[260935]: 2025-10-11 09:16:29.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:29 compute-0 podman[378872]: 2025-10-11 09:16:29.795584483 +0000 UTC m=+0.087982127 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 09:16:29 compute-0 ceph-mon[74313]: pgmap v2239: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:16:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 11 KiB/s wr, 2 op/s
Oct 11 09:16:31 compute-0 nova_compute[260935]: 2025-10-11 09:16:31.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:31 compute-0 nova_compute[260935]: 2025-10-11 09:16:31.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:31.362 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:16:31 compute-0 nova_compute[260935]: 2025-10-11 09:16:31.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:16:31 compute-0 nova_compute[260935]: 2025-10-11 09:16:31.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:16:31 compute-0 nova_compute[260935]: 2025-10-11 09:16:31.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 09:16:31 compute-0 ceph-mon[74313]: pgmap v2240: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 11 KiB/s wr, 2 op/s
Oct 11 09:16:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2241: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 12 KiB/s wr, 2 op/s
Oct 11 09:16:32 compute-0 nova_compute[260935]: 2025-10-11 09:16:32.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:32 compute-0 nova_compute[260935]: 2025-10-11 09:16:32.917 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:16:32 compute-0 nova_compute[260935]: 2025-10-11 09:16:32.917 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:16:32 compute-0 nova_compute[260935]: 2025-10-11 09:16:32.918 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:16:32 compute-0 nova_compute[260935]: 2025-10-11 09:16:32.918 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:16:33 compute-0 ceph-mon[74313]: pgmap v2241: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 12 KiB/s wr, 2 op/s
Oct 11 09:16:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:33.886 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:43:a2 10.100.0.2 2001:db8::f816:3eff:fe74:43a2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe74:43a2/64', 'neutron:device_id': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f4ce403-2596-441d-805b-ba15e2f385a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec00ec81-0492-43bf-b21e-a8398ff551c2, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=722d4b4c-2d64-4e8c-b343-4ac25259f23b) old=Port_Binding(mac=['fa:16:3e:74:43:a2 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f4ce403-2596-441d-805b-ba15e2f385a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:16:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:33.889 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 722d4b4c-2d64-4e8c-b343-4ac25259f23b in datapath 2f4ce403-2596-441d-805b-ba15e2f385a1 updated
Oct 11 09:16:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:33.892 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f4ce403-2596-441d-805b-ba15e2f385a1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:16:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:33.893 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[92a6be89-542e-47bf-914d-bb40636e942f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:16:34 compute-0 nova_compute[260935]: 2025-10-11 09:16:34.444 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:16:34 compute-0 nova_compute[260935]: 2025-10-11 09:16:34.466 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:16:34 compute-0 nova_compute[260935]: 2025-10-11 09:16:34.467 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:16:34 compute-0 nova_compute[260935]: 2025-10-11 09:16:34.467 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:16:34 compute-0 nova_compute[260935]: 2025-10-11 09:16:34.468 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:16:34 compute-0 nova_compute[260935]: 2025-10-11 09:16:34.499 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:34 compute-0 nova_compute[260935]: 2025-10-11 09:16:34.500 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:34 compute-0 nova_compute[260935]: 2025-10-11 09:16:34.500 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:34 compute-0 nova_compute[260935]: 2025-10-11 09:16:34.501 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:16:34 compute-0 nova_compute[260935]: 2025-10-11 09:16:34.501 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:16:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2242: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 17 KiB/s wr, 0 op/s
Oct 11 09:16:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:16:34 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4186119559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:16:34 compute-0 nova_compute[260935]: 2025-10-11 09:16:34.984 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:16:35 compute-0 podman[378912]: 2025-10-11 09:16:35.10886622 +0000 UTC m=+0.057228415 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.130 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.131 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.131 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.136 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.137 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.141 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.141 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.146 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.146 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.388 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.391 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2791MB free_disk=59.78509521484375GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.391 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.392 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.539 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.540 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.540 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.540 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 8354434f-daa4-4745-9755-bd2465f5459b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.541 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.541 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:16:35 compute-0 nova_compute[260935]: 2025-10-11 09:16:35.666 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:16:35 compute-0 ceph-mon[74313]: pgmap v2242: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 17 KiB/s wr, 0 op/s
Oct 11 09:16:35 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4186119559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:16:36 compute-0 nova_compute[260935]: 2025-10-11 09:16:36.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:16:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1105510431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:16:36 compute-0 nova_compute[260935]: 2025-10-11 09:16:36.135 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:16:36 compute-0 nova_compute[260935]: 2025-10-11 09:16:36.143 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:16:36 compute-0 nova_compute[260935]: 2025-10-11 09:16:36.169 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:16:36 compute-0 nova_compute[260935]: 2025-10-11 09:16:36.203 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:16:36 compute-0 nova_compute[260935]: 2025-10-11 09:16:36.204 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2243: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 6.7 KiB/s wr, 0 op/s
Oct 11 09:16:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1105510431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:16:36 compute-0 nova_compute[260935]: 2025-10-11 09:16:36.888 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "a1232bdc-1728-423e-91ef-f46614fcec43" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:36 compute-0 nova_compute[260935]: 2025-10-11 09:16:36.888 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:36 compute-0 nova_compute[260935]: 2025-10-11 09:16:36.909 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:16:36 compute-0 nova_compute[260935]: 2025-10-11 09:16:36.985 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:36 compute-0 nova_compute[260935]: 2025-10-11 09:16:36.986 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:36 compute-0 nova_compute[260935]: 2025-10-11 09:16:36.996 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:16:36 compute-0 nova_compute[260935]: 2025-10-11 09:16:36.997 2 INFO nova.compute.claims [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.173 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:16:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:16:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/922198867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.604 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.613 2 DEBUG nova.compute.provider_tree [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.633 2 DEBUG nova.scheduler.client.report [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.662 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.664 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.729 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.730 2 DEBUG nova.network.neutron [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.756 2 INFO nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.780 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:16:37 compute-0 ceph-mon[74313]: pgmap v2243: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 6.7 KiB/s wr, 0 op/s
Oct 11 09:16:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/922198867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.876 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.877 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.878 2 INFO nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Creating image(s)
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.904 2 DEBUG nova.storage.rbd_utils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image a1232bdc-1728-423e-91ef-f46614fcec43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.933 2 DEBUG nova.storage.rbd_utils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image a1232bdc-1728-423e-91ef-f46614fcec43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.959 2 DEBUG nova.storage.rbd_utils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image a1232bdc-1728-423e-91ef-f46614fcec43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:16:37 compute-0 nova_compute[260935]: 2025-10-11 09:16:37.963 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:16:38 compute-0 nova_compute[260935]: 2025-10-11 09:16:38.012 2 DEBUG nova.policy [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:16:38 compute-0 nova_compute[260935]: 2025-10-11 09:16:38.063 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:16:38 compute-0 nova_compute[260935]: 2025-10-11 09:16:38.064 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:38 compute-0 nova_compute[260935]: 2025-10-11 09:16:38.065 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:38 compute-0 nova_compute[260935]: 2025-10-11 09:16:38.065 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:38 compute-0 nova_compute[260935]: 2025-10-11 09:16:38.094 2 DEBUG nova.storage.rbd_utils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image a1232bdc-1728-423e-91ef-f46614fcec43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:16:38 compute-0 nova_compute[260935]: 2025-10-11 09:16:38.102 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a1232bdc-1728-423e-91ef-f46614fcec43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:16:38 compute-0 nova_compute[260935]: 2025-10-11 09:16:38.380 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a1232bdc-1728-423e-91ef-f46614fcec43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:16:38 compute-0 nova_compute[260935]: 2025-10-11 09:16:38.444 2 DEBUG nova.storage.rbd_utils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image a1232bdc-1728-423e-91ef-f46614fcec43_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:16:38 compute-0 nova_compute[260935]: 2025-10-11 09:16:38.539 2 DEBUG nova.objects.instance [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid a1232bdc-1728-423e-91ef-f46614fcec43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:16:38 compute-0 nova_compute[260935]: 2025-10-11 09:16:38.587 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:16:38 compute-0 nova_compute[260935]: 2025-10-11 09:16:38.587 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Ensure instance console log exists: /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:16:38 compute-0 nova_compute[260935]: 2025-10-11 09:16:38.588 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:38 compute-0 nova_compute[260935]: 2025-10-11 09:16:38.588 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:38 compute-0 nova_compute[260935]: 2025-10-11 09:16:38.589 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2244: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 7.7 KiB/s wr, 1 op/s
Oct 11 09:16:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:16:39 compute-0 nova_compute[260935]: 2025-10-11 09:16:39.825 2 DEBUG nova.network.neutron [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Successfully created port: e89831fc-f646-401f-8562-959bb36ec0e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:16:39 compute-0 ceph-mon[74313]: pgmap v2244: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 7.7 KiB/s wr, 1 op/s
Oct 11 09:16:40 compute-0 nova_compute[260935]: 2025-10-11 09:16:40.304 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:40 compute-0 nova_compute[260935]: 2025-10-11 09:16:40.305 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:40 compute-0 nova_compute[260935]: 2025-10-11 09:16:40.343 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:16:40 compute-0 nova_compute[260935]: 2025-10-11 09:16:40.419 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:40 compute-0 nova_compute[260935]: 2025-10-11 09:16:40.419 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:40 compute-0 nova_compute[260935]: 2025-10-11 09:16:40.428 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:16:40 compute-0 nova_compute[260935]: 2025-10-11 09:16:40.429 2 INFO nova.compute.claims [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:16:40 compute-0 nova_compute[260935]: 2025-10-11 09:16:40.501 2 DEBUG nova.network.neutron [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Successfully updated port: e89831fc-f646-401f-8562-959bb36ec0e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:16:40 compute-0 nova_compute[260935]: 2025-10-11 09:16:40.513 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-a1232bdc-1728-423e-91ef-f46614fcec43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:16:40 compute-0 nova_compute[260935]: 2025-10-11 09:16:40.514 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-a1232bdc-1728-423e-91ef-f46614fcec43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:16:40 compute-0 nova_compute[260935]: 2025-10-11 09:16:40.514 2 DEBUG nova.network.neutron [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:16:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2245: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s wr, 0 op/s
Oct 11 09:16:40 compute-0 nova_compute[260935]: 2025-10-11 09:16:40.639 2 DEBUG nova.compute.manager [req-15770475-065f-44bc-a417-b0321fb1d7b1 req-27d635b4-5c00-4e32-92e7-c2ebad038784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received event network-changed-e89831fc-f646-401f-8562-959bb36ec0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:16:40 compute-0 nova_compute[260935]: 2025-10-11 09:16:40.640 2 DEBUG nova.compute.manager [req-15770475-065f-44bc-a417-b0321fb1d7b1 req-27d635b4-5c00-4e32-92e7-c2ebad038784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Refreshing instance network info cache due to event network-changed-e89831fc-f646-401f-8562-959bb36ec0e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:16:40 compute-0 nova_compute[260935]: 2025-10-11 09:16:40.640 2 DEBUG oslo_concurrency.lockutils [req-15770475-065f-44bc-a417-b0321fb1d7b1 req-27d635b4-5c00-4e32-92e7-c2ebad038784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a1232bdc-1728-423e-91ef-f46614fcec43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:16:40 compute-0 nova_compute[260935]: 2025-10-11 09:16:40.652 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:16:40 compute-0 nova_compute[260935]: 2025-10-11 09:16:40.709 2 DEBUG nova.network.neutron [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:16:40 compute-0 podman[379144]: 2025-10-11 09:16:40.817012146 +0000 UTC m=+0.096289465 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 09:16:40 compute-0 podman[379145]: 2025-10-11 09:16:40.881068429 +0000 UTC m=+0.156095600 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:16:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1105601677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.127 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.135 2 DEBUG nova.compute.provider_tree [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.160 2 DEBUG nova.scheduler.client.report [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.194 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.195 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.253 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.254 2 DEBUG nova.network.neutron [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.276 2 INFO nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.293 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.381 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.383 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.384 2 INFO nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Creating image(s)
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.421 2 DEBUG nova.storage.rbd_utils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.456 2 DEBUG nova.storage.rbd_utils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.490 2 DEBUG nova.storage.rbd_utils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.495 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.600 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.602 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.603 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.603 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.634 2 DEBUG nova.storage.rbd_utils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.639 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:16:41 compute-0 ceph-mon[74313]: pgmap v2245: 321 pgs: 321 active+clean; 407 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s wr, 0 op/s
Oct 11 09:16:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1105601677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.931 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:16:41 compute-0 nova_compute[260935]: 2025-10-11 09:16:41.971 2 DEBUG nova.policy [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.018 2 DEBUG nova.storage.rbd_utils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.132 2 DEBUG nova.objects.instance [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 187076f6-221b-4a35-a7a8-9ba7c2a546b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.149 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.149 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Ensure instance console log exists: /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.150 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.151 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.151 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.293 2 DEBUG nova.network.neutron [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Updating instance_info_cache with network_info: [{"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.318 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-a1232bdc-1728-423e-91ef-f46614fcec43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.319 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Instance network_info: |[{"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.319 2 DEBUG oslo_concurrency.lockutils [req-15770475-065f-44bc-a417-b0321fb1d7b1 req-27d635b4-5c00-4e32-92e7-c2ebad038784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a1232bdc-1728-423e-91ef-f46614fcec43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.320 2 DEBUG nova.network.neutron [req-15770475-065f-44bc-a417-b0321fb1d7b1 req-27d635b4-5c00-4e32-92e7-c2ebad038784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Refreshing network info cache for port e89831fc-f646-401f-8562-959bb36ec0e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.325 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Start _get_guest_xml network_info=[{"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.330 2 WARNING nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.337 2 DEBUG nova.virt.libvirt.host [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.338 2 DEBUG nova.virt.libvirt.host [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.341 2 DEBUG nova.virt.libvirt.host [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.342 2 DEBUG nova.virt.libvirt.host [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.343 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.343 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.344 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.344 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.345 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.345 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.345 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.346 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.346 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.347 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.347 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.347 2 DEBUG nova.virt.hardware [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.352 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:16:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 321 active+clean; 460 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.6 MiB/s wr, 49 op/s
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:42 compute-0 unix_chkpwd[379399]: password check failed for user (root)
Oct 11 09:16:42 compute-0 sshd-session[379377]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:16:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:16:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2037161715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.834 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.866 2 DEBUG nova.storage.rbd_utils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image a1232bdc-1728-423e-91ef-f46614fcec43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:16:42 compute-0 nova_compute[260935]: 2025-10-11 09:16:42.871 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:16:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2037161715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:16:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:16:43 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3934545857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.339 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.342 2 DEBUG nova.virt.libvirt.vif [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-272732865',display_name='tempest-TestNetworkBasicOps-server-272732865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-272732865',id=109,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7FX3U/5RP2hr/RRWH8Uzs1pxIw1BlrlGqbe9sm3AIGCWu3yNU1SvlSTOKsJU+AC7mmnOtgeA/3XVmXVb4brBN1OHPjhKFiHZVW+4rKWNrrCcpToIxg/52KW4sZLUbqeQ==',key_name='tempest-TestNetworkBasicOps-1996793419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-obg5p04x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:16:37Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=a1232bdc-1728-423e-91ef-f46614fcec43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.343 2 DEBUG nova.network.os_vif_util [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.344 2 DEBUG nova.network.os_vif_util [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:8f:cc,bridge_name='br-int',has_traffic_filtering=True,id=e89831fc-f646-401f-8562-959bb36ec0e9,network=Network(3f778265-a3b7-4c18-be8e-648917a97a03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89831fc-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.347 2 DEBUG nova.objects.instance [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid a1232bdc-1728-423e-91ef-f46614fcec43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.365 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:16:43 compute-0 nova_compute[260935]:   <uuid>a1232bdc-1728-423e-91ef-f46614fcec43</uuid>
Oct 11 09:16:43 compute-0 nova_compute[260935]:   <name>instance-0000006d</name>
Oct 11 09:16:43 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:16:43 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:16:43 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkBasicOps-server-272732865</nova:name>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:16:42</nova:creationTime>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:16:43 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:16:43 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:16:43 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:16:43 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:16:43 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:16:43 compute-0 nova_compute[260935]:         <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:16:43 compute-0 nova_compute[260935]:         <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:16:43 compute-0 nova_compute[260935]:         <nova:port uuid="e89831fc-f646-401f-8562-959bb36ec0e9">
Oct 11 09:16:43 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:16:43 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:16:43 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <system>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <entry name="serial">a1232bdc-1728-423e-91ef-f46614fcec43</entry>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <entry name="uuid">a1232bdc-1728-423e-91ef-f46614fcec43</entry>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     </system>
Oct 11 09:16:43 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:16:43 compute-0 nova_compute[260935]:   <os>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:   </os>
Oct 11 09:16:43 compute-0 nova_compute[260935]:   <features>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:   </features>
Oct 11 09:16:43 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:16:43 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:16:43 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/a1232bdc-1728-423e-91ef-f46614fcec43_disk">
Oct 11 09:16:43 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       </source>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:16:43 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/a1232bdc-1728-423e-91ef-f46614fcec43_disk.config">
Oct 11 09:16:43 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       </source>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:16:43 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:48:8f:cc"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <target dev="tape89831fc-f6"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43/console.log" append="off"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <video>
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     </video>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:16:43 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:16:43 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:16:43 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:16:43 compute-0 nova_compute[260935]: </domain>
Oct 11 09:16:43 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.367 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Preparing to wait for external event network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.368 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.368 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.368 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.369 2 DEBUG nova.virt.libvirt.vif [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-272732865',display_name='tempest-TestNetworkBasicOps-server-272732865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-272732865',id=109,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7FX3U/5RP2hr/RRWH8Uzs1pxIw1BlrlGqbe9sm3AIGCWu3yNU1SvlSTOKsJU+AC7mmnOtgeA/3XVmXVb4brBN1OHPjhKFiHZVW+4rKWNrrCcpToIxg/52KW4sZLUbqeQ==',key_name='tempest-TestNetworkBasicOps-1996793419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-obg5p04x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:16:37Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=a1232bdc-1728-423e-91ef-f46614fcec43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.369 2 DEBUG nova.network.os_vif_util [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.370 2 DEBUG nova.network.os_vif_util [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:8f:cc,bridge_name='br-int',has_traffic_filtering=True,id=e89831fc-f646-401f-8562-959bb36ec0e9,network=Network(3f778265-a3b7-4c18-be8e-648917a97a03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89831fc-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.370 2 DEBUG os_vif [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:8f:cc,bridge_name='br-int',has_traffic_filtering=True,id=e89831fc-f646-401f-8562-959bb36ec0e9,network=Network(3f778265-a3b7-4c18-be8e-648917a97a03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89831fc-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.372 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.372 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape89831fc-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.377 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape89831fc-f6, col_values=(('external_ids', {'iface-id': 'e89831fc-f646-401f-8562-959bb36ec0e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:8f:cc', 'vm-uuid': 'a1232bdc-1728-423e-91ef-f46614fcec43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:16:43 compute-0 NetworkManager[44960]: <info>  [1760174203.4220] manager: (tape89831fc-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/438)
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.433 2 INFO os_vif [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:8f:cc,bridge_name='br-int',has_traffic_filtering=True,id=e89831fc-f646-401f-8562-959bb36ec0e9,network=Network(3f778265-a3b7-4c18-be8e-648917a97a03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89831fc-f6')
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.469 2 DEBUG nova.network.neutron [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Successfully created port: 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.496 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.497 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.497 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:48:8f:cc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.498 2 INFO nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Using config drive
Oct 11 09:16:43 compute-0 nova_compute[260935]: 2025-10-11 09:16:43.539 2 DEBUG nova.storage.rbd_utils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image a1232bdc-1728-423e-91ef-f46614fcec43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:16:43 compute-0 ceph-mon[74313]: pgmap v2246: 321 pgs: 321 active+clean; 460 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.6 MiB/s wr, 49 op/s
Oct 11 09:16:43 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3934545857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:16:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.047 2 INFO nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Creating config drive at /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43/disk.config
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.059 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxd1e1h8b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.225 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxd1e1h8b" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.269 2 DEBUG nova.storage.rbd_utils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image a1232bdc-1728-423e-91ef-f46614fcec43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.274 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43/disk.config a1232bdc-1728-423e-91ef-f46614fcec43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.330 2 DEBUG nova.network.neutron [req-15770475-065f-44bc-a417-b0321fb1d7b1 req-27d635b4-5c00-4e32-92e7-c2ebad038784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Updated VIF entry in instance network info cache for port e89831fc-f646-401f-8562-959bb36ec0e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.332 2 DEBUG nova.network.neutron [req-15770475-065f-44bc-a417-b0321fb1d7b1 req-27d635b4-5c00-4e32-92e7-c2ebad038784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Updating instance_info_cache with network_info: [{"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.356 2 DEBUG oslo_concurrency.lockutils [req-15770475-065f-44bc-a417-b0321fb1d7b1 req-27d635b4-5c00-4e32-92e7-c2ebad038784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a1232bdc-1728-423e-91ef-f46614fcec43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.499 2 DEBUG oslo_concurrency.processutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43/disk.config a1232bdc-1728-423e-91ef-f46614fcec43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.500 2 INFO nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Deleting local config drive /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43/disk.config because it was imported into RBD.
Oct 11 09:16:44 compute-0 kernel: tape89831fc-f6: entered promiscuous mode
Oct 11 09:16:44 compute-0 NetworkManager[44960]: <info>  [1760174204.5771] manager: (tape89831fc-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/439)
Oct 11 09:16:44 compute-0 ovn_controller[152945]: 2025-10-11T09:16:44Z|01068|binding|INFO|Claiming lport e89831fc-f646-401f-8562-959bb36ec0e9 for this chassis.
Oct 11 09:16:44 compute-0 ovn_controller[152945]: 2025-10-11T09:16:44Z|01069|binding|INFO|e89831fc-f646-401f-8562-959bb36ec0e9: Claiming fa:16:3e:48:8f:cc 10.100.0.21
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 321 active+clean; 500 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 3.5 MiB/s wr, 52 op/s
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.631 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:8f:cc 10.100.0.21'], port_security=['fa:16:3e:48:8f:cc 10.100.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': 'a1232bdc-1728-423e-91ef-f46614fcec43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f778265-a3b7-4c18-be8e-648917a97a03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f4801ae-575f-43c2-a03b-ceeeb136f93e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=704f698c-f7f9-44d3-83d1-ae6d1824d8ad, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e89831fc-f646-401f-8562-959bb36ec0e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.632 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e89831fc-f646-401f-8562-959bb36ec0e9 in datapath 3f778265-a3b7-4c18-be8e-648917a97a03 bound to our chassis
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.635 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3f778265-a3b7-4c18-be8e-648917a97a03
Oct 11 09:16:44 compute-0 systemd-udevd[379514]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.653 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5685adc9-0237-4467-93ae-057c07466026]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.654 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3f778265-a1 in ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.657 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3f778265-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:16:44 compute-0 systemd-machined[215705]: New machine qemu-132-instance-0000006d.
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.658 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1f7006c7-2cab-4161-a6d0-bcb9be28e5cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.659 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[22ef5904-b044-4819-a400-9730993276c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:44 compute-0 NetworkManager[44960]: <info>  [1760174204.6701] device (tape89831fc-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:16:44 compute-0 NetworkManager[44960]: <info>  [1760174204.6719] device (tape89831fc-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:16:44 compute-0 ovn_controller[152945]: 2025-10-11T09:16:44Z|01070|binding|INFO|Setting lport e89831fc-f646-401f-8562-959bb36ec0e9 ovn-installed in OVS
Oct 11 09:16:44 compute-0 ovn_controller[152945]: 2025-10-11T09:16:44Z|01071|binding|INFO|Setting lport e89831fc-f646-401f-8562-959bb36ec0e9 up in Southbound
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:44 compute-0 systemd[1]: Started Virtual Machine qemu-132-instance-0000006d.
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.688 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[0b68a41e-29d1-4fce-a48c-3f9d28017e0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:44 compute-0 sshd-session[379377]: Failed password for root from 165.232.82.252 port 34200 ssh2
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.711 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[40b307d9-9404-4613-889f-4e0442fb4641]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.751 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[20af9bce-387f-453c-aa0e-770458d4ca13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.757 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f0db63-6a5a-4f9d-8af1-e8cc264f320d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:44 compute-0 NetworkManager[44960]: <info>  [1760174204.7589] manager: (tap3f778265-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/440)
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.791 2 DEBUG nova.network.neutron [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Successfully updated port: 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.798 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1e04ac94-bb93-4bde-a5c0-d6d719da92f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.802 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6364cfbd-4155-4256-bbda-2e4b71c08e84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:44 compute-0 NetworkManager[44960]: <info>  [1760174204.8374] device (tap3f778265-a0): carrier: link connected
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.846 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0688bdb7-56a9-47ef-8c5f-0ad5846ee117]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.850 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.851 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.851 2 DEBUG nova.network.neutron [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.871 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9ebf009d-8fc5-49fe-afec-dc704bf53983]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f778265-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:ed:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 308], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604954, 'reachable_time': 36448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379548, 'error': None, 'target': 'ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.897 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6de1f229-650a-4c59-b32c-68d4582b2931]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:edd9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604954, 'tstamp': 604954}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379549, 'error': None, 'target': 'ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.926 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a5f9b886-522e-406d-86bb-dbd5bac454e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f778265-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:ed:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 308], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604954, 'reachable_time': 36448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379550, 'error': None, 'target': 'ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.966 2 DEBUG nova.compute.manager [req-bd89893c-bfc7-4ca1-82bc-ddb8316b997d req-1026d221-a472-473d-b396-ca4b464f1d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Received event network-changed-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.967 2 DEBUG nova.compute.manager [req-bd89893c-bfc7-4ca1-82bc-ddb8316b997d req-1026d221-a472-473d-b396-ca4b464f1d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Refreshing instance network info cache due to event network-changed-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:16:44 compute-0 nova_compute[260935]: 2025-10-11 09:16:44.967 2 DEBUG oslo_concurrency.lockutils [req-bd89893c-bfc7-4ca1-82bc-ddb8316b997d req-1026d221-a472-473d-b396-ca4b464f1d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:16:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:44.977 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[11f80536-8b44-4afd-9cad-a3aea173a8f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:45 compute-0 sshd-session[379377]: Connection closed by authenticating user root 165.232.82.252 port 34200 [preauth]
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.048 2 DEBUG nova.compute.manager [req-e5f5c232-7522-4941-9b51-7b2359db5161 req-226341a8-0e93-4429-967e-686ca8808296 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received event network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.049 2 DEBUG oslo_concurrency.lockutils [req-e5f5c232-7522-4941-9b51-7b2359db5161 req-226341a8-0e93-4429-967e-686ca8808296 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.049 2 DEBUG oslo_concurrency.lockutils [req-e5f5c232-7522-4941-9b51-7b2359db5161 req-226341a8-0e93-4429-967e-686ca8808296 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.050 2 DEBUG oslo_concurrency.lockutils [req-e5f5c232-7522-4941-9b51-7b2359db5161 req-226341a8-0e93-4429-967e-686ca8808296 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.050 2 DEBUG nova.compute.manager [req-e5f5c232-7522-4941-9b51-7b2359db5161 req-226341a8-0e93-4429-967e-686ca8808296 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Processing event network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.084 2 DEBUG nova.network.neutron [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.090 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc57be4f-e966-4f15-8b19-fd72ae2c747e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.092 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f778265-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.092 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.093 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f778265-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:45 compute-0 kernel: tap3f778265-a0: entered promiscuous mode
Oct 11 09:16:45 compute-0 NetworkManager[44960]: <info>  [1760174205.0961] manager: (tap3f778265-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.103 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3f778265-a0, col_values=(('external_ids', {'iface-id': 'cbaa5fc6-b10a-49d7-b190-bda5c8465d38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:16:45 compute-0 ovn_controller[152945]: 2025-10-11T09:16:45Z|01072|binding|INFO|Releasing lport cbaa5fc6-b10a-49d7-b190-bda5c8465d38 from this chassis (sb_readonly=0)
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.108 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3f778265-a3b7-4c18-be8e-648917a97a03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3f778265-a3b7-4c18-be8e-648917a97a03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.109 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8e822726-5f49-4083-a746-ee8c4a8c49b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.110 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-3f778265-a3b7-4c18-be8e-648917a97a03
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/3f778265-a3b7-4c18-be8e-648917a97a03.pid.haproxy
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 3f778265-a3b7-4c18-be8e-648917a97a03
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:16:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:45.111 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03', 'env', 'PROCESS_TAG=haproxy-3f778265-a3b7-4c18-be8e-648917a97a03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3f778265-a3b7-4c18-be8e-648917a97a03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:45 compute-0 podman[379624]: 2025-10-11 09:16:45.595345006 +0000 UTC m=+0.082948907 container create cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 11 09:16:45 compute-0 systemd[1]: Started libpod-conmon-cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6.scope.
Oct 11 09:16:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdb927a6e1809dd643e00a3746dcecf9a780cac7ca509077562678f25a5dc5f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:16:45 compute-0 podman[379624]: 2025-10-11 09:16:45.558628169 +0000 UTC m=+0.046232110 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:16:45 compute-0 podman[379624]: 2025-10-11 09:16:45.668190172 +0000 UTC m=+0.155794163 container init cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:16:45 compute-0 podman[379624]: 2025-10-11 09:16:45.67821439 +0000 UTC m=+0.165818321 container start cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 11 09:16:45 compute-0 neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03[379640]: [NOTICE]   (379644) : New worker (379646) forked
Oct 11 09:16:45 compute-0 neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03[379640]: [NOTICE]   (379644) : Loading success.
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.742 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174205.7420537, a1232bdc-1728-423e-91ef-f46614fcec43 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.743 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] VM Started (Lifecycle Event)
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.746 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.750 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.754 2 INFO nova.virt.libvirt.driver [-] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Instance spawned successfully.
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.755 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.781 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.791 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.796 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.797 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.797 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.798 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.799 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.799 2 DEBUG nova.virt.libvirt.driver [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.842 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.842 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174205.7423146, a1232bdc-1728-423e-91ef-f46614fcec43 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.843 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] VM Paused (Lifecycle Event)
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.875 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.886 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174205.7488785, a1232bdc-1728-423e-91ef-f46614fcec43 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.887 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] VM Resumed (Lifecycle Event)
Oct 11 09:16:45 compute-0 ceph-mon[74313]: pgmap v2247: 321 pgs: 321 active+clean; 500 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 3.5 MiB/s wr, 52 op/s
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.899 2 INFO nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Took 8.02 seconds to spawn the instance on the hypervisor.
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.900 2 DEBUG nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.910 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.922 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:16:45 compute-0 nova_compute[260935]: 2025-10-11 09:16:45.975 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:16:46 compute-0 nova_compute[260935]: 2025-10-11 09:16:46.009 2 INFO nova.compute.manager [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Took 9.05 seconds to build instance.
Oct 11 09:16:46 compute-0 nova_compute[260935]: 2025-10-11 09:16:46.039 2 DEBUG oslo_concurrency.lockutils [None req-2554d751-df02-4f4c-a977-409d61bd8161 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:46 compute-0 nova_compute[260935]: 2025-10-11 09:16:46.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2248: 321 pgs: 321 active+clean; 500 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 3.5 MiB/s wr, 52 op/s
Oct 11 09:16:46 compute-0 nova_compute[260935]: 2025-10-11 09:16:46.973 2 DEBUG nova.network.neutron [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Updating instance_info_cache with network_info: [{"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.000 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.001 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Instance network_info: |[{"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.002 2 DEBUG oslo_concurrency.lockutils [req-bd89893c-bfc7-4ca1-82bc-ddb8316b997d req-1026d221-a472-473d-b396-ca4b464f1d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.002 2 DEBUG nova.network.neutron [req-bd89893c-bfc7-4ca1-82bc-ddb8316b997d req-1026d221-a472-473d-b396-ca4b464f1d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Refreshing network info cache for port 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.007 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Start _get_guest_xml network_info=[{"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.014 2 WARNING nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.023 2 DEBUG nova.virt.libvirt.host [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.024 2 DEBUG nova.virt.libvirt.host [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.035 2 DEBUG nova.virt.libvirt.host [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.036 2 DEBUG nova.virt.libvirt.host [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.036 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.037 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.038 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.038 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.039 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.039 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.039 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.040 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.040 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.041 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.041 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.041 2 DEBUG nova.virt.hardware [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.046 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.167 2 DEBUG nova.compute.manager [req-3f172184-926a-4be2-bbab-1b56387ce661 req-a2cb0f3a-9a23-4253-a068-53261fbff1de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received event network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.168 2 DEBUG oslo_concurrency.lockutils [req-3f172184-926a-4be2-bbab-1b56387ce661 req-a2cb0f3a-9a23-4253-a068-53261fbff1de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.168 2 DEBUG oslo_concurrency.lockutils [req-3f172184-926a-4be2-bbab-1b56387ce661 req-a2cb0f3a-9a23-4253-a068-53261fbff1de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.168 2 DEBUG oslo_concurrency.lockutils [req-3f172184-926a-4be2-bbab-1b56387ce661 req-a2cb0f3a-9a23-4253-a068-53261fbff1de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.169 2 DEBUG nova.compute.manager [req-3f172184-926a-4be2-bbab-1b56387ce661 req-a2cb0f3a-9a23-4253-a068-53261fbff1de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] No waiting events found dispatching network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.169 2 WARNING nova.compute.manager [req-3f172184-926a-4be2-bbab-1b56387ce661 req-a2cb0f3a-9a23-4253-a068-53261fbff1de e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received unexpected event network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 for instance with vm_state active and task_state None.
Oct 11 09:16:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:16:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/628114044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.555 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.590 2 DEBUG nova.storage.rbd_utils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:16:47 compute-0 nova_compute[260935]: 2025-10-11 09:16:47.596 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:16:47 compute-0 ceph-mon[74313]: pgmap v2248: 321 pgs: 321 active+clean; 500 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 3.5 MiB/s wr, 52 op/s
Oct 11 09:16:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/628114044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:16:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:16:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/751411432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.130 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.132 2 DEBUG nova.virt.libvirt.vif [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:16:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-354741605',display_name='tempest-TestGettingAddress-server-354741605',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-354741605',id=110,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIEEroIcQI6yjPYNFRbpO55PkieYAc+yLELOXjNZohffuvQJuC/A538gnzESmYvfIV1iCxTeQxcsCPGRPplzY6F4Y6cKL3td1D8v0hhGFKNU2eGLTpxtQxcU6ACWQ1bQPg==',key_name='tempest-TestGettingAddress-609422538',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-f6psl0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:16:41Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=187076f6-221b-4a35-a7a8-9ba7c2a546b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.133 2 DEBUG nova.network.os_vif_util [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.135 2 DEBUG nova.network.os_vif_util [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:3e:b6,bridge_name='br-int',has_traffic_filtering=True,id=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f9dc2d8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.137 2 DEBUG nova.objects.instance [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 187076f6-221b-4a35-a7a8-9ba7c2a546b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.164 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:16:48 compute-0 nova_compute[260935]:   <uuid>187076f6-221b-4a35-a7a8-9ba7c2a546b5</uuid>
Oct 11 09:16:48 compute-0 nova_compute[260935]:   <name>instance-0000006e</name>
Oct 11 09:16:48 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:16:48 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:16:48 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <nova:name>tempest-TestGettingAddress-server-354741605</nova:name>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:16:47</nova:creationTime>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:16:48 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:16:48 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:16:48 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:16:48 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:16:48 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:16:48 compute-0 nova_compute[260935]:         <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 09:16:48 compute-0 nova_compute[260935]:         <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:16:48 compute-0 nova_compute[260935]:         <nova:port uuid="1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54">
Oct 11 09:16:48 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe4b:3eb6" ipVersion="6"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:16:48 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:16:48 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <system>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <entry name="serial">187076f6-221b-4a35-a7a8-9ba7c2a546b5</entry>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <entry name="uuid">187076f6-221b-4a35-a7a8-9ba7c2a546b5</entry>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     </system>
Oct 11 09:16:48 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:16:48 compute-0 nova_compute[260935]:   <os>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:   </os>
Oct 11 09:16:48 compute-0 nova_compute[260935]:   <features>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:   </features>
Oct 11 09:16:48 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:16:48 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:16:48 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk">
Oct 11 09:16:48 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       </source>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:16:48 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk.config">
Oct 11 09:16:48 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       </source>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:16:48 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:4b:3e:b6"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <target dev="tap1f9dc2d8-70"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5/console.log" append="off"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <video>
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     </video>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:16:48 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:16:48 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:16:48 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:16:48 compute-0 nova_compute[260935]: </domain>
Oct 11 09:16:48 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.165 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Preparing to wait for external event network-vif-plugged-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.165 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.166 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.166 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.167 2 DEBUG nova.virt.libvirt.vif [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:16:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-354741605',display_name='tempest-TestGettingAddress-server-354741605',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-354741605',id=110,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIEEroIcQI6yjPYNFRbpO55PkieYAc+yLELOXjNZohffuvQJuC/A538gnzESmYvfIV1iCxTeQxcsCPGRPplzY6F4Y6cKL3td1D8v0hhGFKNU2eGLTpxtQxcU6ACWQ1bQPg==',key_name='tempest-TestGettingAddress-609422538',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-f6psl0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:16:41Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=187076f6-221b-4a35-a7a8-9ba7c2a546b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.167 2 DEBUG nova.network.os_vif_util [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.169 2 DEBUG nova.network.os_vif_util [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:3e:b6,bridge_name='br-int',has_traffic_filtering=True,id=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f9dc2d8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.169 2 DEBUG os_vif [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:3e:b6,bridge_name='br-int',has_traffic_filtering=True,id=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f9dc2d8-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.172 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.178 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f9dc2d8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.179 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1f9dc2d8-70, col_values=(('external_ids', {'iface-id': '1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:3e:b6', 'vm-uuid': '187076f6-221b-4a35-a7a8-9ba7c2a546b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:48 compute-0 NetworkManager[44960]: <info>  [1760174208.2313] manager: (tap1f9dc2d8-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/442)
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.238 2 INFO os_vif [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:3e:b6,bridge_name='br-int',has_traffic_filtering=True,id=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f9dc2d8-70')
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.322 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.323 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.323 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:4b:3e:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.324 2 INFO nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Using config drive
Oct 11 09:16:48 compute-0 nova_compute[260935]: 2025-10-11 09:16:48.355 2 DEBUG nova.storage.rbd_utils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:16:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 500 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 128 op/s
Oct 11 09:16:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/751411432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.921920) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174208921999, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2058, "num_deletes": 251, "total_data_size": 3409258, "memory_usage": 3469688, "flush_reason": "Manual Compaction"}
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174208938776, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3331164, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45076, "largest_seqno": 47133, "table_properties": {"data_size": 3321840, "index_size": 5882, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18933, "raw_average_key_size": 20, "raw_value_size": 3303283, "raw_average_value_size": 3514, "num_data_blocks": 261, "num_entries": 940, "num_filter_entries": 940, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760173988, "oldest_key_time": 1760173988, "file_creation_time": 1760174208, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 16930 microseconds, and 8956 cpu microseconds.
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.938856) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3331164 bytes OK
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.938882) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.940428) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.940483) EVENT_LOG_v1 {"time_micros": 1760174208940471, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.940514) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3400625, prev total WAL file size 3400625, number of live WAL files 2.
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.942092) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3253KB)], [104(8580KB)]
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174208942147, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 12117520, "oldest_snapshot_seqno": -1}
Oct 11 09:16:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 6955 keys, 10406128 bytes, temperature: kUnknown
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174208988895, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10406128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10358204, "index_size": 29444, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17413, "raw_key_size": 179305, "raw_average_key_size": 25, "raw_value_size": 10232184, "raw_average_value_size": 1471, "num_data_blocks": 1160, "num_entries": 6955, "num_filter_entries": 6955, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760174208, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.989080) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10406128 bytes
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.990609) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 258.9 rd, 222.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7469, records dropped: 514 output_compression: NoCompression
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.990624) EVENT_LOG_v1 {"time_micros": 1760174208990616, "job": 62, "event": "compaction_finished", "compaction_time_micros": 46806, "compaction_time_cpu_micros": 23033, "output_level": 6, "num_output_files": 1, "total_output_size": 10406128, "num_input_records": 7469, "num_output_records": 6955, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174208991181, "job": 62, "event": "table_file_deletion", "file_number": 106}
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174208992482, "job": 62, "event": "table_file_deletion", "file_number": 104}
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.941989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.992542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.992550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.992554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.992557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:16:48 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:16:48.992561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:16:49 compute-0 nova_compute[260935]: 2025-10-11 09:16:49.185 2 INFO nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Creating config drive at /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5/disk.config
Oct 11 09:16:49 compute-0 nova_compute[260935]: 2025-10-11 09:16:49.194 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3e1y8h8e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:16:49 compute-0 nova_compute[260935]: 2025-10-11 09:16:49.364 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3e1y8h8e" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:16:49 compute-0 nova_compute[260935]: 2025-10-11 09:16:49.403 2 DEBUG nova.storage.rbd_utils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:16:49 compute-0 nova_compute[260935]: 2025-10-11 09:16:49.410 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5/disk.config 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:16:49 compute-0 nova_compute[260935]: 2025-10-11 09:16:49.630 2 DEBUG oslo_concurrency.processutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5/disk.config 187076f6-221b-4a35-a7a8-9ba7c2a546b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:16:49 compute-0 nova_compute[260935]: 2025-10-11 09:16:49.632 2 INFO nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Deleting local config drive /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5/disk.config because it was imported into RBD.
Oct 11 09:16:49 compute-0 nova_compute[260935]: 2025-10-11 09:16:49.636 2 DEBUG nova.network.neutron [req-bd89893c-bfc7-4ca1-82bc-ddb8316b997d req-1026d221-a472-473d-b396-ca4b464f1d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Updated VIF entry in instance network info cache for port 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:16:49 compute-0 nova_compute[260935]: 2025-10-11 09:16:49.637 2 DEBUG nova.network.neutron [req-bd89893c-bfc7-4ca1-82bc-ddb8316b997d req-1026d221-a472-473d-b396-ca4b464f1d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Updating instance_info_cache with network_info: [{"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:16:49 compute-0 nova_compute[260935]: 2025-10-11 09:16:49.666 2 DEBUG oslo_concurrency.lockutils [req-bd89893c-bfc7-4ca1-82bc-ddb8316b997d req-1026d221-a472-473d-b396-ca4b464f1d38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:16:49 compute-0 kernel: tap1f9dc2d8-70: entered promiscuous mode
Oct 11 09:16:49 compute-0 NetworkManager[44960]: <info>  [1760174209.7367] manager: (tap1f9dc2d8-70): new Tun device (/org/freedesktop/NetworkManager/Devices/443)
Oct 11 09:16:49 compute-0 ovn_controller[152945]: 2025-10-11T09:16:49Z|01073|binding|INFO|Claiming lport 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 for this chassis.
Oct 11 09:16:49 compute-0 ovn_controller[152945]: 2025-10-11T09:16:49Z|01074|binding|INFO|1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54: Claiming fa:16:3e:4b:3e:b6 10.100.0.12 2001:db8::f816:3eff:fe4b:3eb6
Oct 11 09:16:49 compute-0 nova_compute[260935]: 2025-10-11 09:16:49.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.797 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:3e:b6 10.100.0.12 2001:db8::f816:3eff:fe4b:3eb6'], port_security=['fa:16:3e:4b:3e:b6 10.100.0.12 2001:db8::f816:3eff:fe4b:3eb6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe4b:3eb6/64', 'neutron:device_id': '187076f6-221b-4a35-a7a8-9ba7c2a546b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f4ce403-2596-441d-805b-ba15e2f385a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '89a40070-a057-46b1-9d97-ea387c29c959', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec00ec81-0492-43bf-b21e-a8398ff551c2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:16:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.798 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 in datapath 2f4ce403-2596-441d-805b-ba15e2f385a1 bound to our chassis
Oct 11 09:16:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.800 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f4ce403-2596-441d-805b-ba15e2f385a1
Oct 11 09:16:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.815 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a08658-f34a-4899-b8e5-edc7192e10b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.816 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2f4ce403-21 in ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:16:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.818 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2f4ce403-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:16:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.818 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0789bfe9-7951-40e9-9835-510fdda14e96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.819 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[afd8aacd-c610-48d5-8935-c3e318e6e962]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:49 compute-0 systemd-udevd[379792]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:16:49 compute-0 ovn_controller[152945]: 2025-10-11T09:16:49Z|01075|binding|INFO|Setting lport 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 ovn-installed in OVS
Oct 11 09:16:49 compute-0 ovn_controller[152945]: 2025-10-11T09:16:49Z|01076|binding|INFO|Setting lport 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 up in Southbound
Oct 11 09:16:49 compute-0 nova_compute[260935]: 2025-10-11 09:16:49.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:49 compute-0 nova_compute[260935]: 2025-10-11 09:16:49.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:49 compute-0 systemd-machined[215705]: New machine qemu-133-instance-0000006e.
Oct 11 09:16:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.835 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[4c00d225-2d8f-4540-aaeb-6cb112c98ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:49 compute-0 NetworkManager[44960]: <info>  [1760174209.8389] device (tap1f9dc2d8-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:16:49 compute-0 NetworkManager[44960]: <info>  [1760174209.8398] device (tap1f9dc2d8-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:16:49 compute-0 systemd[1]: Started Virtual Machine qemu-133-instance-0000006e.
Oct 11 09:16:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.864 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[259726b1-48e8-40b7-9497-aa18f2022af2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.905 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[de1e1cb0-feb7-42bd-b962-17533eab5e3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.912 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9515063e-dd63-4706-8b77-55483005de18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:49 compute-0 systemd-udevd[379795]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:16:49 compute-0 NetworkManager[44960]: <info>  [1760174209.9144] manager: (tap2f4ce403-20): new Veth device (/org/freedesktop/NetworkManager/Devices/444)
Oct 11 09:16:49 compute-0 ceph-mon[74313]: pgmap v2249: 321 pgs: 321 active+clean; 500 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 128 op/s
Oct 11 09:16:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.977 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[18b39002-8557-4056-abb6-63b2b4222d6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:49.981 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b9aa40f3-687b-4b02-a022-7f38987390be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:50 compute-0 NetworkManager[44960]: <info>  [1760174210.0146] device (tap2f4ce403-20): carrier: link connected
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.021 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[25aa5aef-c7c5-45ff-ae4e-152b89f019fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.045 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[87f9fcd7-af83-4da2-8440-834a43604fa0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f4ce403-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:43:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605471, 'reachable_time': 19417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379824, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.066 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e6496a5f-2175-41db-b292-a4bee978654d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:43a2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605471, 'tstamp': 605471}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379825, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.092 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8556622b-23da-4c62-975e-f35c24413003]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f4ce403-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:43:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605471, 'reachable_time': 19417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379826, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.132 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6360c4f2-a90b-4d6e-b351-4f51ffa817f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:50 compute-0 nova_compute[260935]: 2025-10-11 09:16:50.145 2 DEBUG nova.compute.manager [req-8f009638-a121-43f6-90f1-44ab8943c3ab req-385a7dd7-8b07-40b6-afcf-07082062d459 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Received event network-vif-plugged-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:16:50 compute-0 nova_compute[260935]: 2025-10-11 09:16:50.146 2 DEBUG oslo_concurrency.lockutils [req-8f009638-a121-43f6-90f1-44ab8943c3ab req-385a7dd7-8b07-40b6-afcf-07082062d459 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:50 compute-0 nova_compute[260935]: 2025-10-11 09:16:50.146 2 DEBUG oslo_concurrency.lockutils [req-8f009638-a121-43f6-90f1-44ab8943c3ab req-385a7dd7-8b07-40b6-afcf-07082062d459 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:50 compute-0 nova_compute[260935]: 2025-10-11 09:16:50.147 2 DEBUG oslo_concurrency.lockutils [req-8f009638-a121-43f6-90f1-44ab8943c3ab req-385a7dd7-8b07-40b6-afcf-07082062d459 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:50 compute-0 nova_compute[260935]: 2025-10-11 09:16:50.148 2 DEBUG nova.compute.manager [req-8f009638-a121-43f6-90f1-44ab8943c3ab req-385a7dd7-8b07-40b6-afcf-07082062d459 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Processing event network-vif-plugged-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.218 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[64fde1c6-741e-4cb3-8143-2be18237f452]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.221 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f4ce403-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.221 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.222 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f4ce403-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:16:50 compute-0 NetworkManager[44960]: <info>  [1760174210.2255] manager: (tap2f4ce403-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Oct 11 09:16:50 compute-0 kernel: tap2f4ce403-20: entered promiscuous mode
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.231 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f4ce403-20, col_values=(('external_ids', {'iface-id': '722d4b4c-2d64-4e8c-b343-4ac25259f23b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:16:50 compute-0 ovn_controller[152945]: 2025-10-11T09:16:50Z|01077|binding|INFO|Releasing lport 722d4b4c-2d64-4e8c-b343-4ac25259f23b from this chassis (sb_readonly=0)
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.238 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2f4ce403-2596-441d-805b-ba15e2f385a1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2f4ce403-2596-441d-805b-ba15e2f385a1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:16:50 compute-0 nova_compute[260935]: 2025-10-11 09:16:50.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.239 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1200ee52-91f8-4683-8140-3ec2785a7da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.240 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-2f4ce403-2596-441d-805b-ba15e2f385a1
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/2f4ce403-2596-441d-805b-ba15e2f385a1.pid.haproxy
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 2f4ce403-2596-441d-805b-ba15e2f385a1
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:16:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:16:50.242 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'env', 'PROCESS_TAG=haproxy-2f4ce403-2596-441d-805b-ba15e2f385a1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2f4ce403-2596-441d-805b-ba15e2f385a1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:16:50 compute-0 nova_compute[260935]: 2025-10-11 09:16:50.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2250: 321 pgs: 321 active+clean; 500 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Oct 11 09:16:50 compute-0 podman[379898]: 2025-10-11 09:16:50.719078015 +0000 UTC m=+0.102598060 container create 3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 09:16:50 compute-0 podman[379898]: 2025-10-11 09:16:50.670022828 +0000 UTC m=+0.053542943 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:16:50 compute-0 systemd[1]: Started libpod-conmon-3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a.scope.
Oct 11 09:16:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425c0630029d1aed8583bcc3941c7da79348c9c8a0eadb7928e597fdeee04cbf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:16:50 compute-0 podman[379898]: 2025-10-11 09:16:50.846687138 +0000 UTC m=+0.230207203 container init 3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 09:16:50 compute-0 podman[379898]: 2025-10-11 09:16:50.857016004 +0000 UTC m=+0.240536019 container start 3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 09:16:50 compute-0 neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1[379911]: [NOTICE]   (379915) : New worker (379917) forked
Oct 11 09:16:50 compute-0 neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1[379911]: [NOTICE]   (379915) : Loading success.
Oct 11 09:16:50 compute-0 nova_compute[260935]: 2025-10-11 09:16:50.968 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:16:50 compute-0 nova_compute[260935]: 2025-10-11 09:16:50.969 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174210.9673986, 187076f6-221b-4a35-a7a8-9ba7c2a546b5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:16:50 compute-0 nova_compute[260935]: 2025-10-11 09:16:50.970 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] VM Started (Lifecycle Event)
Oct 11 09:16:50 compute-0 nova_compute[260935]: 2025-10-11 09:16:50.974 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:16:50 compute-0 nova_compute[260935]: 2025-10-11 09:16:50.979 2 INFO nova.virt.libvirt.driver [-] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Instance spawned successfully.
Oct 11 09:16:50 compute-0 nova_compute[260935]: 2025-10-11 09:16:50.980 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:16:50 compute-0 nova_compute[260935]: 2025-10-11 09:16:50.995 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.001 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.017 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.018 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.019 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.020 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.020 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.021 2 DEBUG nova.virt.libvirt.driver [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.028 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.029 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174210.967625, 187076f6-221b-4a35-a7a8-9ba7c2a546b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.029 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] VM Paused (Lifecycle Event)
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.058 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.064 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174210.9742355, 187076f6-221b-4a35-a7a8-9ba7c2a546b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.064 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] VM Resumed (Lifecycle Event)
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.086 2 INFO nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Took 9.70 seconds to spawn the instance on the hypervisor.
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.086 2 DEBUG nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.099 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.103 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.129 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.168 2 INFO nova.compute.manager [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Took 10.77 seconds to build instance.
Oct 11 09:16:51 compute-0 nova_compute[260935]: 2025-10-11 09:16:51.184 2 DEBUG oslo_concurrency.lockutils [None req-5b53aa05-737d-40fc-9b53-30f5af6d2f88 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:51 compute-0 ceph-mon[74313]: pgmap v2250: 321 pgs: 321 active+clean; 500 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Oct 11 09:16:52 compute-0 nova_compute[260935]: 2025-10-11 09:16:52.247 2 DEBUG nova.compute.manager [req-1fdc7c63-4613-426c-bf28-0648debe82f4 req-5a629357-4532-4d73-88ec-c975161d5cfe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Received event network-vif-plugged-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:16:52 compute-0 nova_compute[260935]: 2025-10-11 09:16:52.248 2 DEBUG oslo_concurrency.lockutils [req-1fdc7c63-4613-426c-bf28-0648debe82f4 req-5a629357-4532-4d73-88ec-c975161d5cfe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:16:52 compute-0 nova_compute[260935]: 2025-10-11 09:16:52.248 2 DEBUG oslo_concurrency.lockutils [req-1fdc7c63-4613-426c-bf28-0648debe82f4 req-5a629357-4532-4d73-88ec-c975161d5cfe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:16:52 compute-0 nova_compute[260935]: 2025-10-11 09:16:52.248 2 DEBUG oslo_concurrency.lockutils [req-1fdc7c63-4613-426c-bf28-0648debe82f4 req-5a629357-4532-4d73-88ec-c975161d5cfe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:16:52 compute-0 nova_compute[260935]: 2025-10-11 09:16:52.248 2 DEBUG nova.compute.manager [req-1fdc7c63-4613-426c-bf28-0648debe82f4 req-5a629357-4532-4d73-88ec-c975161d5cfe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] No waiting events found dispatching network-vif-plugged-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:16:52 compute-0 nova_compute[260935]: 2025-10-11 09:16:52.249 2 WARNING nova.compute.manager [req-1fdc7c63-4613-426c-bf28-0648debe82f4 req-5a629357-4532-4d73-88ec-c975161d5cfe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Received unexpected event network-vif-plugged-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 for instance with vm_state active and task_state None.
Oct 11 09:16:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2251: 321 pgs: 321 active+clean; 500 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 164 op/s
Oct 11 09:16:53 compute-0 nova_compute[260935]: 2025-10-11 09:16:53.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:16:53 compute-0 ceph-mon[74313]: pgmap v2251: 321 pgs: 321 active+clean; 500 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 164 op/s
Oct 11 09:16:54 compute-0 nova_compute[260935]: 2025-10-11 09:16:54.315 2 DEBUG nova.compute.manager [req-17341210-0060-4152-9050-57789c938675 req-28a12971-a734-41e2-a5b0-2ba90587bc11 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Received event network-changed-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:16:54 compute-0 nova_compute[260935]: 2025-10-11 09:16:54.316 2 DEBUG nova.compute.manager [req-17341210-0060-4152-9050-57789c938675 req-28a12971-a734-41e2-a5b0-2ba90587bc11 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Refreshing instance network info cache due to event network-changed-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:16:54 compute-0 nova_compute[260935]: 2025-10-11 09:16:54.317 2 DEBUG oslo_concurrency.lockutils [req-17341210-0060-4152-9050-57789c938675 req-28a12971-a734-41e2-a5b0-2ba90587bc11 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:16:54 compute-0 nova_compute[260935]: 2025-10-11 09:16:54.317 2 DEBUG oslo_concurrency.lockutils [req-17341210-0060-4152-9050-57789c938675 req-28a12971-a734-41e2-a5b0-2ba90587bc11 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:16:54 compute-0 nova_compute[260935]: 2025-10-11 09:16:54.318 2 DEBUG nova.network.neutron [req-17341210-0060-4152-9050-57789c938675 req-28a12971-a734-41e2-a5b0-2ba90587bc11 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Refreshing network info cache for port 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:16:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 500 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.0 MiB/s wr, 147 op/s
Oct 11 09:16:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:16:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:16:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:16:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:16:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:16:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:16:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:16:54
Oct 11 09:16:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:16:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:16:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'images', 'volumes', 'default.rgw.log', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'cephfs.cephfs.meta']
Oct 11 09:16:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:16:55 compute-0 ceph-mon[74313]: pgmap v2252: 321 pgs: 321 active+clean; 500 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.0 MiB/s wr, 147 op/s
Oct 11 09:16:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:16:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:16:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:16:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:16:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:16:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:16:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:16:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:16:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:16:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:16:56 compute-0 nova_compute[260935]: 2025-10-11 09:16:56.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:56 compute-0 nova_compute[260935]: 2025-10-11 09:16:56.254 2 DEBUG nova.network.neutron [req-17341210-0060-4152-9050-57789c938675 req-28a12971-a734-41e2-a5b0-2ba90587bc11 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Updated VIF entry in instance network info cache for port 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:16:56 compute-0 nova_compute[260935]: 2025-10-11 09:16:56.255 2 DEBUG nova.network.neutron [req-17341210-0060-4152-9050-57789c938675 req-28a12971-a734-41e2-a5b0-2ba90587bc11 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Updating instance_info_cache with network_info: [{"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:16:56 compute-0 nova_compute[260935]: 2025-10-11 09:16:56.279 2 DEBUG oslo_concurrency.lockutils [req-17341210-0060-4152-9050-57789c938675 req-28a12971-a734-41e2-a5b0-2ba90587bc11 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:16:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2253: 321 pgs: 321 active+clean; 500 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 29 KiB/s wr, 143 op/s
Oct 11 09:16:57 compute-0 ceph-mon[74313]: pgmap v2253: 321 pgs: 321 active+clean; 500 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 29 KiB/s wr, 143 op/s
Oct 11 09:16:58 compute-0 nova_compute[260935]: 2025-10-11 09:16:58.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:16:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 521 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 185 op/s
Oct 11 09:16:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:16:59 compute-0 ovn_controller[152945]: 2025-10-11T09:16:59Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:8f:cc 10.100.0.21
Oct 11 09:16:59 compute-0 ovn_controller[152945]: 2025-10-11T09:16:59Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:8f:cc 10.100.0.21
Oct 11 09:16:59 compute-0 ceph-mon[74313]: pgmap v2254: 321 pgs: 321 active+clean; 521 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 185 op/s
Oct 11 09:17:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2255: 321 pgs: 321 active+clean; 521 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 11 09:17:00 compute-0 podman[379928]: 2025-10-11 09:17:00.815614688 +0000 UTC m=+0.107708333 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:17:01 compute-0 nova_compute[260935]: 2025-10-11 09:17:01.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:01 compute-0 ceph-mon[74313]: pgmap v2255: 321 pgs: 321 active+clean; 521 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 11 09:17:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 532 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 11 09:17:03 compute-0 nova_compute[260935]: 2025-10-11 09:17:03.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:03 compute-0 ceph-mon[74313]: pgmap v2256: 321 pgs: 321 active+clean; 532 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct 11 09:17:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:17:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 533 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 103 op/s
Oct 11 09:17:05 compute-0 ovn_controller[152945]: 2025-10-11T09:17:05Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:3e:b6 10.100.0.12
Oct 11 09:17:05 compute-0 ovn_controller[152945]: 2025-10-11T09:17:05Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:3e:b6 10.100.0.12
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004499487713257797 of space, bias 1.0, pg target 1.3498463139773391 quantized to 32 (current 32)
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:17:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:17:05 compute-0 ceph-mon[74313]: pgmap v2257: 321 pgs: 321 active+clean; 533 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 103 op/s
Oct 11 09:17:05 compute-0 podman[379949]: 2025-10-11 09:17:05.80670123 +0000 UTC m=+0.095587097 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 09:17:06 compute-0 nova_compute[260935]: 2025-10-11 09:17:06.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 533 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 11 09:17:07 compute-0 ceph-mon[74313]: pgmap v2258: 321 pgs: 321 active+clean; 533 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 11 09:17:08 compute-0 nova_compute[260935]: 2025-10-11 09:17:08.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2259: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 791 KiB/s rd, 4.3 MiB/s wr, 133 op/s
Oct 11 09:17:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.425 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "a1232bdc-1728-423e-91ef-f46614fcec43" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.426 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.427 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.427 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.428 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.430 2 INFO nova.compute.manager [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Terminating instance
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.432 2 DEBUG nova.compute.manager [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:17:09 compute-0 kernel: tape89831fc-f6 (unregistering): left promiscuous mode
Oct 11 09:17:09 compute-0 NetworkManager[44960]: <info>  [1760174229.5017] device (tape89831fc-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:09 compute-0 ovn_controller[152945]: 2025-10-11T09:17:09Z|01078|binding|INFO|Releasing lport e89831fc-f646-401f-8562-959bb36ec0e9 from this chassis (sb_readonly=0)
Oct 11 09:17:09 compute-0 ovn_controller[152945]: 2025-10-11T09:17:09Z|01079|binding|INFO|Setting lport e89831fc-f646-401f-8562-959bb36ec0e9 down in Southbound
Oct 11 09:17:09 compute-0 ovn_controller[152945]: 2025-10-11T09:17:09Z|01080|binding|INFO|Removing iface tape89831fc-f6 ovn-installed in OVS
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.532 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:8f:cc 10.100.0.21'], port_security=['fa:16:3e:48:8f:cc 10.100.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': 'a1232bdc-1728-423e-91ef-f46614fcec43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f778265-a3b7-4c18-be8e-648917a97a03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0f4801ae-575f-43c2-a03b-ceeeb136f93e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=704f698c-f7f9-44d3-83d1-ae6d1824d8ad, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e89831fc-f646-401f-8562-959bb36ec0e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:17:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.534 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e89831fc-f646-401f-8562-959bb36ec0e9 in datapath 3f778265-a3b7-4c18-be8e-648917a97a03 unbound from our chassis
Oct 11 09:17:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.538 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f778265-a3b7-4c18-be8e-648917a97a03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:17:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.540 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5e639041-39a4-4a8a-87f7-9097fd056de3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.541 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03 namespace which is not needed anymore
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:09 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Oct 11 09:17:09 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d0000006d.scope: Consumed 13.671s CPU time.
Oct 11 09:17:09 compute-0 systemd-machined[215705]: Machine qemu-132-instance-0000006d terminated.
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.728 2 INFO nova.virt.libvirt.driver [-] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Instance destroyed successfully.
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.728 2 DEBUG nova.objects.instance [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid a1232bdc-1728-423e-91ef-f46614fcec43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:17:09 compute-0 neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03[379640]: [NOTICE]   (379644) : haproxy version is 2.8.14-c23fe91
Oct 11 09:17:09 compute-0 neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03[379640]: [NOTICE]   (379644) : path to executable is /usr/sbin/haproxy
Oct 11 09:17:09 compute-0 neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03[379640]: [WARNING]  (379644) : Exiting Master process...
Oct 11 09:17:09 compute-0 neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03[379640]: [WARNING]  (379644) : Exiting Master process...
Oct 11 09:17:09 compute-0 neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03[379640]: [ALERT]    (379644) : Current worker (379646) exited with code 143 (Terminated)
Oct 11 09:17:09 compute-0 neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03[379640]: [WARNING]  (379644) : All workers exited. Exiting... (0)
Oct 11 09:17:09 compute-0 ceph-mon[74313]: pgmap v2259: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 791 KiB/s rd, 4.3 MiB/s wr, 133 op/s
Oct 11 09:17:09 compute-0 systemd[1]: libpod-cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6.scope: Deactivated successfully.
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.749 2 DEBUG nova.virt.libvirt.vif [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-272732865',display_name='tempest-TestNetworkBasicOps-server-272732865',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-272732865',id=109,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF7FX3U/5RP2hr/RRWH8Uzs1pxIw1BlrlGqbe9sm3AIGCWu3yNU1SvlSTOKsJU+AC7mmnOtgeA/3XVmXVb4brBN1OHPjhKFiHZVW+4rKWNrrCcpToIxg/52KW4sZLUbqeQ==',key_name='tempest-TestNetworkBasicOps-1996793419',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:16:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-obg5p04x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:16:45Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=a1232bdc-1728-423e-91ef-f46614fcec43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.750 2 DEBUG nova.network.os_vif_util [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "e89831fc-f646-401f-8562-959bb36ec0e9", "address": "fa:16:3e:48:8f:cc", "network": {"id": "3f778265-a3b7-4c18-be8e-648917a97a03", "bridge": "br-int", "label": "tempest-network-smoke--1276900070", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89831fc-f6", "ovs_interfaceid": "e89831fc-f646-401f-8562-959bb36ec0e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:17:09 compute-0 podman[379995]: 2025-10-11 09:17:09.754841881 +0000 UTC m=+0.070965236 container died cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.752 2 DEBUG nova.network.os_vif_util [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:8f:cc,bridge_name='br-int',has_traffic_filtering=True,id=e89831fc-f646-401f-8562-959bb36ec0e9,network=Network(3f778265-a3b7-4c18-be8e-648917a97a03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89831fc-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.753 2 DEBUG os_vif [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:8f:cc,bridge_name='br-int',has_traffic_filtering=True,id=e89831fc-f646-401f-8562-959bb36ec0e9,network=Network(3f778265-a3b7-4c18-be8e-648917a97a03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89831fc-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.758 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape89831fc-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.770 2 INFO os_vif [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:8f:cc,bridge_name='br-int',has_traffic_filtering=True,id=e89831fc-f646-401f-8562-959bb36ec0e9,network=Network(3f778265-a3b7-4c18-be8e-648917a97a03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89831fc-f6')
Oct 11 09:17:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6-userdata-shm.mount: Deactivated successfully.
Oct 11 09:17:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-efdb927a6e1809dd643e00a3746dcecf9a780cac7ca509077562678f25a5dc5f-merged.mount: Deactivated successfully.
Oct 11 09:17:09 compute-0 podman[379995]: 2025-10-11 09:17:09.827119231 +0000 UTC m=+0.143242556 container cleanup cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:17:09 compute-0 systemd[1]: libpod-conmon-cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6.scope: Deactivated successfully.
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.851 2 DEBUG nova.compute.manager [req-9529cb3d-a545-4ab4-9287-ea56a2d5356f req-e9313062-43d1-42c7-a537-4382a53ddbc7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received event network-vif-unplugged-e89831fc-f646-401f-8562-959bb36ec0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.852 2 DEBUG oslo_concurrency.lockutils [req-9529cb3d-a545-4ab4-9287-ea56a2d5356f req-e9313062-43d1-42c7-a537-4382a53ddbc7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.853 2 DEBUG oslo_concurrency.lockutils [req-9529cb3d-a545-4ab4-9287-ea56a2d5356f req-e9313062-43d1-42c7-a537-4382a53ddbc7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.853 2 DEBUG oslo_concurrency.lockutils [req-9529cb3d-a545-4ab4-9287-ea56a2d5356f req-e9313062-43d1-42c7-a537-4382a53ddbc7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.854 2 DEBUG nova.compute.manager [req-9529cb3d-a545-4ab4-9287-ea56a2d5356f req-e9313062-43d1-42c7-a537-4382a53ddbc7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] No waiting events found dispatching network-vif-unplugged-e89831fc-f646-401f-8562-959bb36ec0e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.854 2 DEBUG nova.compute.manager [req-9529cb3d-a545-4ab4-9287-ea56a2d5356f req-e9313062-43d1-42c7-a537-4382a53ddbc7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received event network-vif-unplugged-e89831fc-f646-401f-8562-959bb36ec0e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:17:09 compute-0 podman[380049]: 2025-10-11 09:17:09.925961117 +0000 UTC m=+0.061724319 container remove cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 09:17:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.933 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[55295607-7acf-4fb8-a487-21a74e410ded]: (4, ('Sat Oct 11 09:17:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03 (cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6)\ncec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6\nSat Oct 11 09:17:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03 (cec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6)\ncec0bb5800761f72b32b04dda9e4e3462d4da48888e056643a262d83e6e3bdb6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.934 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5c8bdf12-f6bb-4d95-a775-cf71950aebdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.936 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f778265-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:09 compute-0 kernel: tap3f778265-a0: left promiscuous mode
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.948 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33a0630e-69e4-49d9-b760-b21f13bd3444]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:09 compute-0 nova_compute[260935]: 2025-10-11 09:17:09.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.981 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[df970beb-2db6-4d7f-950b-2e03d07b5785]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:09.983 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[04843a9c-b56b-4285-b351-c5ab60e1adfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:10.005 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[87bd1beb-aa6a-4f39-b17a-9c6a4aeac894]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604944, 'reachable_time': 34489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380065, 'error': None, 'target': 'ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:10.008 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3f778265-a3b7-4c18-be8e-648917a97a03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:17:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:10.008 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[96cc13a3-f99b-43d3-b25b-f44177387d20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d3f778265\x2da3b7\x2d4c18\x2dbe8e\x2d648917a97a03.mount: Deactivated successfully.
Oct 11 09:17:10 compute-0 nova_compute[260935]: 2025-10-11 09:17:10.197 2 INFO nova.virt.libvirt.driver [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Deleting instance files /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43_del
Oct 11 09:17:10 compute-0 nova_compute[260935]: 2025-10-11 09:17:10.198 2 INFO nova.virt.libvirt.driver [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Deletion of /var/lib/nova/instances/a1232bdc-1728-423e-91ef-f46614fcec43_del complete
Oct 11 09:17:10 compute-0 nova_compute[260935]: 2025-10-11 09:17:10.279 2 INFO nova.compute.manager [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Took 0.85 seconds to destroy the instance on the hypervisor.
Oct 11 09:17:10 compute-0 nova_compute[260935]: 2025-10-11 09:17:10.280 2 DEBUG oslo.service.loopingcall [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:17:10 compute-0 nova_compute[260935]: 2025-10-11 09:17:10.281 2 DEBUG nova.compute.manager [-] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:17:10 compute-0 nova_compute[260935]: 2025-10-11 09:17:10.281 2 DEBUG nova.network.neutron [-] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:17:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2260: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 473 KiB/s rd, 2.2 MiB/s wr, 91 op/s
Oct 11 09:17:10 compute-0 nova_compute[260935]: 2025-10-11 09:17:10.933 2 DEBUG nova.network.neutron [-] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:17:10 compute-0 nova_compute[260935]: 2025-10-11 09:17:10.954 2 INFO nova.compute.manager [-] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Took 0.67 seconds to deallocate network for instance.
Oct 11 09:17:11 compute-0 nova_compute[260935]: 2025-10-11 09:17:11.008 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:11 compute-0 nova_compute[260935]: 2025-10-11 09:17:11.009 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:11 compute-0 nova_compute[260935]: 2025-10-11 09:17:11.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:11 compute-0 nova_compute[260935]: 2025-10-11 09:17:11.192 2 DEBUG oslo_concurrency.processutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:11 compute-0 sshd-session[380067]: Invalid user support from 78.128.112.74 port 35234
Oct 11 09:17:11 compute-0 podman[380089]: 2025-10-11 09:17:11.656911118 +0000 UTC m=+0.073319740 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 11 09:17:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:17:11 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3939161694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:17:11 compute-0 podman[380090]: 2025-10-11 09:17:11.705139743 +0000 UTC m=+0.108936486 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:17:11 compute-0 sshd-session[380067]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:17:11 compute-0 sshd-session[380067]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=78.128.112.74
Oct 11 09:17:11 compute-0 nova_compute[260935]: 2025-10-11 09:17:11.734 2 DEBUG oslo_concurrency.processutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:11 compute-0 nova_compute[260935]: 2025-10-11 09:17:11.746 2 DEBUG nova.compute.provider_tree [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:17:11 compute-0 ceph-mon[74313]: pgmap v2260: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 473 KiB/s rd, 2.2 MiB/s wr, 91 op/s
Oct 11 09:17:11 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3939161694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:17:11 compute-0 nova_compute[260935]: 2025-10-11 09:17:11.768 2 DEBUG nova.scheduler.client.report [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:17:11 compute-0 nova_compute[260935]: 2025-10-11 09:17:11.795 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:11 compute-0 nova_compute[260935]: 2025-10-11 09:17:11.828 2 INFO nova.scheduler.client.report [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance a1232bdc-1728-423e-91ef-f46614fcec43
Oct 11 09:17:11 compute-0 nova_compute[260935]: 2025-10-11 09:17:11.901 2 DEBUG oslo_concurrency.lockutils [None req-bc309bb7-c7fc-47c1-a648-27936f8b600c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:11 compute-0 nova_compute[260935]: 2025-10-11 09:17:11.934 2 DEBUG nova.compute.manager [req-e6df145e-0c93-48d1-887b-d513054dc6ba req-1bb08df6-dc6e-4dfb-b90c-6b440529399b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received event network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:17:11 compute-0 nova_compute[260935]: 2025-10-11 09:17:11.935 2 DEBUG oslo_concurrency.lockutils [req-e6df145e-0c93-48d1-887b-d513054dc6ba req-1bb08df6-dc6e-4dfb-b90c-6b440529399b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:11 compute-0 nova_compute[260935]: 2025-10-11 09:17:11.936 2 DEBUG oslo_concurrency.lockutils [req-e6df145e-0c93-48d1-887b-d513054dc6ba req-1bb08df6-dc6e-4dfb-b90c-6b440529399b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:11 compute-0 nova_compute[260935]: 2025-10-11 09:17:11.936 2 DEBUG oslo_concurrency.lockutils [req-e6df145e-0c93-48d1-887b-d513054dc6ba req-1bb08df6-dc6e-4dfb-b90c-6b440529399b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a1232bdc-1728-423e-91ef-f46614fcec43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:11 compute-0 nova_compute[260935]: 2025-10-11 09:17:11.937 2 DEBUG nova.compute.manager [req-e6df145e-0c93-48d1-887b-d513054dc6ba req-1bb08df6-dc6e-4dfb-b90c-6b440529399b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] No waiting events found dispatching network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:17:11 compute-0 nova_compute[260935]: 2025-10-11 09:17:11.938 2 WARNING nova.compute.manager [req-e6df145e-0c93-48d1-887b-d513054dc6ba req-1bb08df6-dc6e-4dfb-b90c-6b440529399b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received unexpected event network-vif-plugged-e89831fc-f646-401f-8562-959bb36ec0e9 for instance with vm_state deleted and task_state None.
Oct 11 09:17:11 compute-0 nova_compute[260935]: 2025-10-11 09:17:11.938 2 DEBUG nova.compute.manager [req-e6df145e-0c93-48d1-887b-d513054dc6ba req-1bb08df6-dc6e-4dfb-b90c-6b440529399b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Received event network-vif-deleted-e89831fc-f646-401f-8562-959bb36ec0e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:17:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 509 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 492 KiB/s rd, 2.2 MiB/s wr, 118 op/s
Oct 11 09:17:13 compute-0 nova_compute[260935]: 2025-10-11 09:17:13.440 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:17:13 compute-0 ceph-mon[74313]: pgmap v2261: 321 pgs: 321 active+clean; 509 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 492 KiB/s rd, 2.2 MiB/s wr, 118 op/s
Oct 11 09:17:13 compute-0 sshd-session[380067]: Failed password for invalid user support from 78.128.112.74 port 35234 ssh2
Oct 11 09:17:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:17:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 486 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Oct 11 09:17:14 compute-0 sshd-session[380067]: Connection closed by invalid user support 78.128.112.74 port 35234 [preauth]
Oct 11 09:17:14 compute-0 nova_compute[260935]: 2025-10-11 09:17:14.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:14 compute-0 ovn_controller[152945]: 2025-10-11T09:17:14Z|01081|binding|INFO|Releasing lport 58ef9b4a-8b66-4d5d-ac05-f694b2a9b216 from this chassis (sb_readonly=0)
Oct 11 09:17:14 compute-0 ovn_controller[152945]: 2025-10-11T09:17:14Z|01082|binding|INFO|Releasing lport 722d4b4c-2d64-4e8c-b343-4ac25259f23b from this chassis (sb_readonly=0)
Oct 11 09:17:14 compute-0 ovn_controller[152945]: 2025-10-11T09:17:14Z|01083|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:17:14 compute-0 ovn_controller[152945]: 2025-10-11T09:17:14Z|01084|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:17:14 compute-0 nova_compute[260935]: 2025-10-11 09:17:14.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:15.211 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:15.212 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:15.213 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:15 compute-0 ceph-mon[74313]: pgmap v2262: 321 pgs: 321 active+clean; 486 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.342 2 DEBUG nova.compute.manager [req-2fc3de9d-758c-4505-91b9-bb270d019091 req-16e90f6d-4595-4d73-9fd4-601d7cfbf617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-changed-cabc92ec-b61c-42d8-80b5-444ec773a568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.343 2 DEBUG nova.compute.manager [req-2fc3de9d-758c-4505-91b9-bb270d019091 req-16e90f6d-4595-4d73-9fd4-601d7cfbf617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Refreshing instance network info cache due to event network-changed-cabc92ec-b61c-42d8-80b5-444ec773a568. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.343 2 DEBUG oslo_concurrency.lockutils [req-2fc3de9d-758c-4505-91b9-bb270d019091 req-16e90f6d-4595-4d73-9fd4-601d7cfbf617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.344 2 DEBUG oslo_concurrency.lockutils [req-2fc3de9d-758c-4505-91b9-bb270d019091 req-16e90f6d-4595-4d73-9fd4-601d7cfbf617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.344 2 DEBUG nova.network.neutron [req-2fc3de9d-758c-4505-91b9-bb270d019091 req-16e90f6d-4595-4d73-9fd4-601d7cfbf617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Refreshing network info cache for port cabc92ec-b61c-42d8-80b5-444ec773a568 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.502 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "8354434f-daa4-4745-9755-bd2465f5459b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.503 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.503 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "8354434f-daa4-4745-9755-bd2465f5459b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.504 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.504 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.506 2 INFO nova.compute.manager [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Terminating instance
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.508 2 DEBUG nova.compute.manager [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:17:16 compute-0 kernel: tapcabc92ec-b6 (unregistering): left promiscuous mode
Oct 11 09:17:16 compute-0 NetworkManager[44960]: <info>  [1760174236.5813] device (tapcabc92ec-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:17:16 compute-0 ovn_controller[152945]: 2025-10-11T09:17:16Z|01085|binding|INFO|Releasing lport cabc92ec-b61c-42d8-80b5-444ec773a568 from this chassis (sb_readonly=0)
Oct 11 09:17:16 compute-0 ovn_controller[152945]: 2025-10-11T09:17:16Z|01086|binding|INFO|Setting lport cabc92ec-b61c-42d8-80b5-444ec773a568 down in Southbound
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:16 compute-0 ovn_controller[152945]: 2025-10-11T09:17:16Z|01087|binding|INFO|Removing iface tapcabc92ec-b6 ovn-installed in OVS
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:16.615 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:41:56 10.100.0.8'], port_security=['fa:16:3e:ec:41:56 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8354434f-daa4-4745-9755-bd2465f5459b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f56857a-dc0a-4d4f-92ae-d6806961b854', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '437c3d45-aba0-41e1-8469-ba1a8b534642', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94b69aa0-e5aa-4772-b64f-f38dc39ac5a0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=cabc92ec-b61c-42d8-80b5-444ec773a568) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:17:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:16.616 162815 INFO neutron.agent.ovn.metadata.agent [-] Port cabc92ec-b61c-42d8-80b5-444ec773a568 in datapath 3f56857a-dc0a-4d4f-92ae-d6806961b854 unbound from our chassis
Oct 11 09:17:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:16.620 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f56857a-dc0a-4d4f-92ae-d6806961b854, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:17:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:16.622 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2c186c-ec5f-4a5a-9f3a-765aa256139a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:16.623 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854 namespace which is not needed anymore
Oct 11 09:17:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 486 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 2.2 MiB/s wr, 90 op/s
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:16 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Oct 11 09:17:16 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d0000006c.scope: Consumed 15.612s CPU time.
Oct 11 09:17:16 compute-0 systemd-machined[215705]: Machine qemu-131-instance-0000006c terminated.
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.764 2 INFO nova.virt.libvirt.driver [-] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Instance destroyed successfully.
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.764 2 DEBUG nova.objects.instance [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid 8354434f-daa4-4745-9755-bd2465f5459b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.787 2 DEBUG nova.virt.libvirt.vif [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:15:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-901907022',display_name='tempest-TestNetworkBasicOps-server-901907022',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-901907022',id=108,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNSUDuwrUo8mPnl46KJgqVMBI/oUdDQJiZKLg3PcXdEoEbBCfynVBzjq5RcgXvWBbi7yER9lhbMWC7+xjyS7CKL6WdpVIbjuziOiJFncdqOQt0YKcX1oXGLKj/54Eha0rg==',key_name='tempest-TestNetworkBasicOps-1207558742',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:16:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-svb3ck8j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:16:05Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=8354434f-daa4-4745-9755-bd2465f5459b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.788 2 DEBUG nova.network.os_vif_util [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.789 2 DEBUG nova.network.os_vif_util [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:41:56,bridge_name='br-int',has_traffic_filtering=True,id=cabc92ec-b61c-42d8-80b5-444ec773a568,network=Network(3f56857a-dc0a-4d4f-92ae-d6806961b854),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcabc92ec-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.789 2 DEBUG os_vif [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:41:56,bridge_name='br-int',has_traffic_filtering=True,id=cabc92ec-b61c-42d8-80b5-444ec773a568,network=Network(3f56857a-dc0a-4d4f-92ae-d6806961b854),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcabc92ec-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.791 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcabc92ec-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.798 2 INFO os_vif [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:41:56,bridge_name='br-int',has_traffic_filtering=True,id=cabc92ec-b61c-42d8-80b5-444ec773a568,network=Network(3f56857a-dc0a-4d4f-92ae-d6806961b854),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcabc92ec-b6')
Oct 11 09:17:16 compute-0 neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854[377571]: [NOTICE]   (377575) : haproxy version is 2.8.14-c23fe91
Oct 11 09:17:16 compute-0 neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854[377571]: [NOTICE]   (377575) : path to executable is /usr/sbin/haproxy
Oct 11 09:17:16 compute-0 neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854[377571]: [WARNING]  (377575) : Exiting Master process...
Oct 11 09:17:16 compute-0 neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854[377571]: [WARNING]  (377575) : Exiting Master process...
Oct 11 09:17:16 compute-0 neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854[377571]: [ALERT]    (377575) : Current worker (377577) exited with code 143 (Terminated)
Oct 11 09:17:16 compute-0 neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854[377571]: [WARNING]  (377575) : All workers exited. Exiting... (0)
Oct 11 09:17:16 compute-0 systemd[1]: libpod-56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97.scope: Deactivated successfully.
Oct 11 09:17:16 compute-0 conmon[377571]: conmon 56a2f43e4dc043e957cb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97.scope/container/memory.events
Oct 11 09:17:16 compute-0 podman[380165]: 2025-10-11 09:17:16.86672394 +0000 UTC m=+0.075155731 container died 56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.894 2 DEBUG nova.compute.manager [req-5d11c3b9-3f2f-4fbc-b1f1-620833c9a62a req-24a363bc-77c9-4a2a-b7a3-010edb3d853d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-vif-unplugged-cabc92ec-b61c-42d8-80b5-444ec773a568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.894 2 DEBUG oslo_concurrency.lockutils [req-5d11c3b9-3f2f-4fbc-b1f1-620833c9a62a req-24a363bc-77c9-4a2a-b7a3-010edb3d853d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8354434f-daa4-4745-9755-bd2465f5459b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.895 2 DEBUG oslo_concurrency.lockutils [req-5d11c3b9-3f2f-4fbc-b1f1-620833c9a62a req-24a363bc-77c9-4a2a-b7a3-010edb3d853d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.895 2 DEBUG oslo_concurrency.lockutils [req-5d11c3b9-3f2f-4fbc-b1f1-620833c9a62a req-24a363bc-77c9-4a2a-b7a3-010edb3d853d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.896 2 DEBUG nova.compute.manager [req-5d11c3b9-3f2f-4fbc-b1f1-620833c9a62a req-24a363bc-77c9-4a2a-b7a3-010edb3d853d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] No waiting events found dispatching network-vif-unplugged-cabc92ec-b61c-42d8-80b5-444ec773a568 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:17:16 compute-0 nova_compute[260935]: 2025-10-11 09:17:16.897 2 DEBUG nova.compute.manager [req-5d11c3b9-3f2f-4fbc-b1f1-620833c9a62a req-24a363bc-77c9-4a2a-b7a3-010edb3d853d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-vif-unplugged-cabc92ec-b61c-42d8-80b5-444ec773a568 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:17:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97-userdata-shm.mount: Deactivated successfully.
Oct 11 09:17:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-16ea3fde6a616ac783e0a41154d62589ff39ccb058913321360e01f0736e2297-merged.mount: Deactivated successfully.
Oct 11 09:17:16 compute-0 podman[380165]: 2025-10-11 09:17:16.917186197 +0000 UTC m=+0.125617978 container cleanup 56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 09:17:16 compute-0 systemd[1]: libpod-conmon-56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97.scope: Deactivated successfully.
Oct 11 09:17:17 compute-0 podman[380211]: 2025-10-11 09:17:17.009855892 +0000 UTC m=+0.063360525 container remove 56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:17:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.015 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3a01691a-d0f3-45b0-89f3-73ed6ce06c99]: (4, ('Sat Oct 11 09:17:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854 (56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97)\n56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97\nSat Oct 11 09:17:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854 (56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97)\n56a2f43e4dc043e957cb609a6cf1764396e6ef23226a986d93d157a36a8cfc97\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.017 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[086ec4e0-18ca-4ebe-97e5-5e2b4bb6f824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.019 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f56857a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:17:17 compute-0 nova_compute[260935]: 2025-10-11 09:17:17.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:17 compute-0 kernel: tap3f56857a-d0: left promiscuous mode
Oct 11 09:17:17 compute-0 nova_compute[260935]: 2025-10-11 09:17:17.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.060 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[af7ff1c3-75cf-4f1e-8ab4-734c176f569d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.093 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6b0a27-17eb-41a8-88e0-fd1018ecc6a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.094 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7938283e-7490-4901-b8d1-86f0ab19e3cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.112 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9eade0db-61e2-4bb1-95f5-31b129ff2c65]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600359, 'reachable_time': 16373, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380227, 'error': None, 'target': 'ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.114 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3f56857a-dc0a-4d4f-92ae-d6806961b854 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:17:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:17.115 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d989db6e-59cc-4ad2-9d35-f0f56f0cab24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d3f56857a\x2ddc0a\x2d4d4f\x2d92ae\x2dd6806961b854.mount: Deactivated successfully.
Oct 11 09:17:17 compute-0 nova_compute[260935]: 2025-10-11 09:17:17.252 2 INFO nova.virt.libvirt.driver [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Deleting instance files /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b_del
Oct 11 09:17:17 compute-0 nova_compute[260935]: 2025-10-11 09:17:17.253 2 INFO nova.virt.libvirt.driver [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Deletion of /var/lib/nova/instances/8354434f-daa4-4745-9755-bd2465f5459b_del complete
Oct 11 09:17:17 compute-0 nova_compute[260935]: 2025-10-11 09:17:17.312 2 INFO nova.compute.manager [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 11 09:17:17 compute-0 nova_compute[260935]: 2025-10-11 09:17:17.313 2 DEBUG oslo.service.loopingcall [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:17:17 compute-0 nova_compute[260935]: 2025-10-11 09:17:17.314 2 DEBUG nova.compute.manager [-] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:17:17 compute-0 nova_compute[260935]: 2025-10-11 09:17:17.314 2 DEBUG nova.network.neutron [-] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:17:17 compute-0 sudo[380228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:17:17 compute-0 sudo[380228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:17 compute-0 sudo[380228]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:17 compute-0 ceph-mon[74313]: pgmap v2263: 321 pgs: 321 active+clean; 486 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 2.2 MiB/s wr, 90 op/s
Oct 11 09:17:17 compute-0 sudo[380253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:17:17 compute-0 sudo[380253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:17 compute-0 sudo[380253]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:17 compute-0 sudo[380278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:17:17 compute-0 sudo[380278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:17 compute-0 sudo[380278]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:18 compute-0 sudo[380303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:17:18 compute-0 sudo[380303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:18 compute-0 nova_compute[260935]: 2025-10-11 09:17:18.602 2 DEBUG nova.network.neutron [req-2fc3de9d-758c-4505-91b9-bb270d019091 req-16e90f6d-4595-4d73-9fd4-601d7cfbf617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Updated VIF entry in instance network info cache for port cabc92ec-b61c-42d8-80b5-444ec773a568. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:17:18 compute-0 nova_compute[260935]: 2025-10-11 09:17:18.604 2 DEBUG nova.network.neutron [req-2fc3de9d-758c-4505-91b9-bb270d019091 req-16e90f6d-4595-4d73-9fd4-601d7cfbf617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Updating instance_info_cache with network_info: [{"id": "cabc92ec-b61c-42d8-80b5-444ec773a568", "address": "fa:16:3e:ec:41:56", "network": {"id": "3f56857a-dc0a-4d4f-92ae-d6806961b854", "bridge": "br-int", "label": "tempest-network-smoke--985507468", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcabc92ec-b6", "ovs_interfaceid": "cabc92ec-b61c-42d8-80b5-444ec773a568", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:17:18 compute-0 nova_compute[260935]: 2025-10-11 09:17:18.630 2 DEBUG oslo_concurrency.lockutils [req-2fc3de9d-758c-4505-91b9-bb270d019091 req-16e90f6d-4595-4d73-9fd4-601d7cfbf617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8354434f-daa4-4745-9755-bd2465f5459b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:17:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2264: 321 pgs: 321 active+clean; 407 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 298 KiB/s rd, 2.2 MiB/s wr, 118 op/s
Oct 11 09:17:18 compute-0 sudo[380303]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:18 compute-0 nova_compute[260935]: 2025-10-11 09:17:18.698 2 DEBUG nova.network.neutron [-] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:17:18 compute-0 nova_compute[260935]: 2025-10-11 09:17:18.720 2 INFO nova.compute.manager [-] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Took 1.41 seconds to deallocate network for instance.
Oct 11 09:17:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:17:18 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:17:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:17:18 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:17:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:17:18 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:17:18 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 94488ee0-29db-48cf-a0f6-a86e9b7fa3c1 does not exist
Oct 11 09:17:18 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 15502bc0-2a38-4f34-b9c4-b31f0c76dddd does not exist
Oct 11 09:17:18 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev bea7cf29-e528-416f-9829-f960f040152f does not exist
Oct 11 09:17:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:17:18 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:17:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:17:18 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:17:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:17:18 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:17:18 compute-0 nova_compute[260935]: 2025-10-11 09:17:18.781 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:18 compute-0 nova_compute[260935]: 2025-10-11 09:17:18.781 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:17:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:17:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:17:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:17:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:17:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:17:18 compute-0 sudo[380361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:17:18 compute-0 sudo[380361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:18 compute-0 sudo[380361]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:18 compute-0 sudo[380386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:17:18 compute-0 sudo[380386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:18 compute-0 sudo[380386]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:18 compute-0 nova_compute[260935]: 2025-10-11 09:17:18.933 2 DEBUG oslo_concurrency.processutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:17:18 compute-0 sudo[380411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:17:18 compute-0 sudo[380411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:19 compute-0 sudo[380411]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:19 compute-0 nova_compute[260935]: 2025-10-11 09:17:19.033 2 DEBUG nova.compute.manager [req-e876c5d5-f4c9-4f11-99fd-4d872f86de7b req-eb84fffd-e93c-4a2d-815f-d893f5cbcdb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:17:19 compute-0 nova_compute[260935]: 2025-10-11 09:17:19.034 2 DEBUG oslo_concurrency.lockutils [req-e876c5d5-f4c9-4f11-99fd-4d872f86de7b req-eb84fffd-e93c-4a2d-815f-d893f5cbcdb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8354434f-daa4-4745-9755-bd2465f5459b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:19 compute-0 nova_compute[260935]: 2025-10-11 09:17:19.034 2 DEBUG oslo_concurrency.lockutils [req-e876c5d5-f4c9-4f11-99fd-4d872f86de7b req-eb84fffd-e93c-4a2d-815f-d893f5cbcdb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:19 compute-0 nova_compute[260935]: 2025-10-11 09:17:19.034 2 DEBUG oslo_concurrency.lockutils [req-e876c5d5-f4c9-4f11-99fd-4d872f86de7b req-eb84fffd-e93c-4a2d-815f-d893f5cbcdb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:19 compute-0 nova_compute[260935]: 2025-10-11 09:17:19.034 2 DEBUG nova.compute.manager [req-e876c5d5-f4c9-4f11-99fd-4d872f86de7b req-eb84fffd-e93c-4a2d-815f-d893f5cbcdb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] No waiting events found dispatching network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:17:19 compute-0 nova_compute[260935]: 2025-10-11 09:17:19.035 2 WARNING nova.compute.manager [req-e876c5d5-f4c9-4f11-99fd-4d872f86de7b req-eb84fffd-e93c-4a2d-815f-d893f5cbcdb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received unexpected event network-vif-plugged-cabc92ec-b61c-42d8-80b5-444ec773a568 for instance with vm_state deleted and task_state None.
Oct 11 09:17:19 compute-0 nova_compute[260935]: 2025-10-11 09:17:19.035 2 DEBUG nova.compute.manager [req-e876c5d5-f4c9-4f11-99fd-4d872f86de7b req-eb84fffd-e93c-4a2d-815f-d893f5cbcdb3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Received event network-vif-deleted-cabc92ec-b61c-42d8-80b5-444ec773a568 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:17:19 compute-0 sudo[380437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:17:19 compute-0 sudo[380437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:17:19 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/972597374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:17:19 compute-0 nova_compute[260935]: 2025-10-11 09:17:19.395 2 DEBUG oslo_concurrency.processutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:19 compute-0 nova_compute[260935]: 2025-10-11 09:17:19.405 2 DEBUG nova.compute.provider_tree [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:17:19 compute-0 nova_compute[260935]: 2025-10-11 09:17:19.439 2 DEBUG nova.scheduler.client.report [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:17:19 compute-0 nova_compute[260935]: 2025-10-11 09:17:19.462 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:19 compute-0 nova_compute[260935]: 2025-10-11 09:17:19.488 2 INFO nova.scheduler.client.report [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance 8354434f-daa4-4745-9755-bd2465f5459b
Oct 11 09:17:19 compute-0 podman[380526]: 2025-10-11 09:17:19.522991464 +0000 UTC m=+0.061558045 container create 25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 09:17:19 compute-0 nova_compute[260935]: 2025-10-11 09:17:19.571 2 DEBUG oslo_concurrency.lockutils [None req-aa12f155-3a65-4148-a44d-7d46f8eae8e5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "8354434f-daa4-4745-9755-bd2465f5459b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:19 compute-0 systemd[1]: Started libpod-conmon-25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4.scope.
Oct 11 09:17:19 compute-0 podman[380526]: 2025-10-11 09:17:19.495357849 +0000 UTC m=+0.033924480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:17:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:17:19 compute-0 podman[380526]: 2025-10-11 09:17:19.645125224 +0000 UTC m=+0.183691845 container init 25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shirley, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 09:17:19 compute-0 podman[380526]: 2025-10-11 09:17:19.657644221 +0000 UTC m=+0.196210772 container start 25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 09:17:19 compute-0 podman[380526]: 2025-10-11 09:17:19.661676153 +0000 UTC m=+0.200242784 container attach 25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:17:19 compute-0 clever_shirley[380542]: 167 167
Oct 11 09:17:19 compute-0 systemd[1]: libpod-25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4.scope: Deactivated successfully.
Oct 11 09:17:19 compute-0 conmon[380542]: conmon 25e91ab114911d93b50b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4.scope/container/memory.events
Oct 11 09:17:19 compute-0 podman[380526]: 2025-10-11 09:17:19.668409209 +0000 UTC m=+0.206975800 container died 25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shirley, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:17:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-aed7f6645d639396f1cfa780beb4bf542cec88ab3e058d3bc6c670cb1568e58a-merged.mount: Deactivated successfully.
Oct 11 09:17:19 compute-0 podman[380526]: 2025-10-11 09:17:19.723235596 +0000 UTC m=+0.261802167 container remove 25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 09:17:19 compute-0 systemd[1]: libpod-conmon-25e91ab114911d93b50be2dbbd2299d9ed1160cc806be27370e36013bc3225b4.scope: Deactivated successfully.
Oct 11 09:17:19 compute-0 ceph-mon[74313]: pgmap v2264: 321 pgs: 321 active+clean; 407 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 298 KiB/s rd, 2.2 MiB/s wr, 118 op/s
Oct 11 09:17:19 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/972597374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:17:20 compute-0 podman[380565]: 2025-10-11 09:17:20.005354955 +0000 UTC m=+0.069317629 container create 87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 09:17:20 compute-0 systemd[1]: Started libpod-conmon-87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1.scope.
Oct 11 09:17:20 compute-0 podman[380565]: 2025-10-11 09:17:19.979423868 +0000 UTC m=+0.043386562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:17:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd3b041be12d13438af6f67a6773c234b4846b5bd375c72e5cfeafb17cd44569/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd3b041be12d13438af6f67a6773c234b4846b5bd375c72e5cfeafb17cd44569/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd3b041be12d13438af6f67a6773c234b4846b5bd375c72e5cfeafb17cd44569/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd3b041be12d13438af6f67a6773c234b4846b5bd375c72e5cfeafb17cd44569/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd3b041be12d13438af6f67a6773c234b4846b5bd375c72e5cfeafb17cd44569/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:17:20 compute-0 podman[380565]: 2025-10-11 09:17:20.109307523 +0000 UTC m=+0.173270247 container init 87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:17:20 compute-0 podman[380565]: 2025-10-11 09:17:20.129969864 +0000 UTC m=+0.193932538 container start 87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:17:20 compute-0 podman[380565]: 2025-10-11 09:17:20.134145219 +0000 UTC m=+0.198107943 container attach 87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:17:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 407 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 19 KiB/s wr, 56 op/s
Oct 11 09:17:21 compute-0 nova_compute[260935]: 2025-10-11 09:17:21.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:21 compute-0 gallant_knuth[380582]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:17:21 compute-0 gallant_knuth[380582]: --> relative data size: 1.0
Oct 11 09:17:21 compute-0 gallant_knuth[380582]: --> All data devices are unavailable
Oct 11 09:17:21 compute-0 systemd[1]: libpod-87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1.scope: Deactivated successfully.
Oct 11 09:17:21 compute-0 systemd[1]: libpod-87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1.scope: Consumed 1.137s CPU time.
Oct 11 09:17:21 compute-0 podman[380613]: 2025-10-11 09:17:21.427782096 +0000 UTC m=+0.038159127 container died 87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:17:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd3b041be12d13438af6f67a6773c234b4846b5bd375c72e5cfeafb17cd44569-merged.mount: Deactivated successfully.
Oct 11 09:17:21 compute-0 podman[380613]: 2025-10-11 09:17:21.519384421 +0000 UTC m=+0.129761432 container remove 87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 09:17:21 compute-0 systemd[1]: libpod-conmon-87f8fe6b43146246da4c1d2bc6122723b1e5deee09a7542717a6db4b4b21fbc1.scope: Deactivated successfully.
Oct 11 09:17:21 compute-0 sudo[380437]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:21 compute-0 unix_chkpwd[380651]: password check failed for user (root)
Oct 11 09:17:21 compute-0 sshd-session[380602]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:17:21 compute-0 sudo[380628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:17:21 compute-0 sudo[380628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:21 compute-0 sudo[380628]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:21 compute-0 sudo[380654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:17:21 compute-0 sudo[380654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:21 compute-0 sudo[380654]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:21 compute-0 nova_compute[260935]: 2025-10-11 09:17:21.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:21 compute-0 ceph-mon[74313]: pgmap v2265: 321 pgs: 321 active+clean; 407 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 19 KiB/s wr, 56 op/s
Oct 11 09:17:21 compute-0 sudo[380679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:17:21 compute-0 sudo[380679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:21 compute-0 sudo[380679]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:21 compute-0 sudo[380704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:17:21 compute-0 sudo[380704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:22 compute-0 podman[380771]: 2025-10-11 09:17:22.471061513 +0000 UTC m=+0.068210499 container create f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 09:17:22 compute-0 systemd[1]: Started libpod-conmon-f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513.scope.
Oct 11 09:17:22 compute-0 podman[380771]: 2025-10-11 09:17:22.441864705 +0000 UTC m=+0.039013741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:17:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:17:22 compute-0 podman[380771]: 2025-10-11 09:17:22.584363639 +0000 UTC m=+0.181512655 container init f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 09:17:22 compute-0 podman[380771]: 2025-10-11 09:17:22.594035827 +0000 UTC m=+0.191184783 container start f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haibt, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 09:17:22 compute-0 podman[380771]: 2025-10-11 09:17:22.597932745 +0000 UTC m=+0.195081791 container attach f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haibt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:17:22 compute-0 blissful_haibt[380787]: 167 167
Oct 11 09:17:22 compute-0 systemd[1]: libpod-f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513.scope: Deactivated successfully.
Oct 11 09:17:22 compute-0 podman[380771]: 2025-10-11 09:17:22.602123151 +0000 UTC m=+0.199272107 container died f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haibt, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:17:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-81a14d524d29f7dd68896a9170a24e49f7c067c9b91946793d79c087bfac2a2a-merged.mount: Deactivated successfully.
Oct 11 09:17:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 19 KiB/s wr, 56 op/s
Oct 11 09:17:22 compute-0 podman[380771]: 2025-10-11 09:17:22.65195961 +0000 UTC m=+0.249108596 container remove f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_haibt, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:17:22 compute-0 systemd[1]: libpod-conmon-f81fdd7c224aa502815d5919af3a1e5e4cbbd406fc41632054b4baf480080513.scope: Deactivated successfully.
Oct 11 09:17:22 compute-0 podman[380811]: 2025-10-11 09:17:22.954681189 +0000 UTC m=+0.069589877 container create d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 09:17:22 compute-0 ovn_controller[152945]: 2025-10-11T09:17:22Z|01088|binding|INFO|Releasing lport 722d4b4c-2d64-4e8c-b343-4ac25259f23b from this chassis (sb_readonly=0)
Oct 11 09:17:22 compute-0 ovn_controller[152945]: 2025-10-11T09:17:22Z|01089|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:17:22 compute-0 ovn_controller[152945]: 2025-10-11T09:17:22Z|01090|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:17:23 compute-0 systemd[1]: Started libpod-conmon-d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58.scope.
Oct 11 09:17:23 compute-0 podman[380811]: 2025-10-11 09:17:22.92976214 +0000 UTC m=+0.044670818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:17:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:17:23 compute-0 nova_compute[260935]: 2025-10-11 09:17:23.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdcfea61c9b5a061119127222cae8c5239f18e93390059d7f1e316a15f874694/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdcfea61c9b5a061119127222cae8c5239f18e93390059d7f1e316a15f874694/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdcfea61c9b5a061119127222cae8c5239f18e93390059d7f1e316a15f874694/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdcfea61c9b5a061119127222cae8c5239f18e93390059d7f1e316a15f874694/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:17:23 compute-0 podman[380811]: 2025-10-11 09:17:23.115802149 +0000 UTC m=+0.230710837 container init d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:17:23 compute-0 podman[380811]: 2025-10-11 09:17:23.128183912 +0000 UTC m=+0.243092560 container start d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 09:17:23 compute-0 podman[380811]: 2025-10-11 09:17:23.131757541 +0000 UTC m=+0.246666199 container attach d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:17:23 compute-0 sshd-session[380602]: Failed password for root from 165.232.82.252 port 56478 ssh2
Oct 11 09:17:23 compute-0 ceph-mon[74313]: pgmap v2266: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 19 KiB/s wr, 56 op/s
Oct 11 09:17:23 compute-0 sshd-session[380602]: Connection closed by authenticating user root 165.232.82.252 port 56478 [preauth]
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]: {
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:     "0": [
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:         {
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "devices": [
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "/dev/loop3"
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             ],
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "lv_name": "ceph_lv0",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "lv_size": "21470642176",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "name": "ceph_lv0",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "tags": {
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.cluster_name": "ceph",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.crush_device_class": "",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.encrypted": "0",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.osd_id": "0",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.type": "block",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.vdo": "0"
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             },
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "type": "block",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "vg_name": "ceph_vg0"
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:         }
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:     ],
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:     "1": [
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:         {
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "devices": [
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "/dev/loop4"
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             ],
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "lv_name": "ceph_lv1",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "lv_size": "21470642176",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "name": "ceph_lv1",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "tags": {
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.cluster_name": "ceph",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.crush_device_class": "",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.encrypted": "0",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.osd_id": "1",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.type": "block",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.vdo": "0"
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             },
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "type": "block",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "vg_name": "ceph_vg1"
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:         }
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:     ],
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:     "2": [
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:         {
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "devices": [
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "/dev/loop5"
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             ],
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "lv_name": "ceph_lv2",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "lv_size": "21470642176",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "name": "ceph_lv2",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "tags": {
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.cluster_name": "ceph",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.crush_device_class": "",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.encrypted": "0",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.osd_id": "2",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.type": "block",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:                 "ceph.vdo": "0"
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             },
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "type": "block",
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:             "vg_name": "ceph_vg2"
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:         }
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]:     ]
Oct 11 09:17:23 compute-0 confident_chatterjee[380827]: }
Oct 11 09:17:23 compute-0 systemd[1]: libpod-d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58.scope: Deactivated successfully.
Oct 11 09:17:23 compute-0 podman[380811]: 2025-10-11 09:17:23.938347535 +0000 UTC m=+1.053256213 container died d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chatterjee, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:17:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:17:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdcfea61c9b5a061119127222cae8c5239f18e93390059d7f1e316a15f874694-merged.mount: Deactivated successfully.
Oct 11 09:17:24 compute-0 podman[380811]: 2025-10-11 09:17:24.013082264 +0000 UTC m=+1.127990942 container remove d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 09:17:24 compute-0 systemd[1]: libpod-conmon-d14d69468097342456b847fbc788f2020c0798c1773e26b80f49d878ab1b8b58.scope: Deactivated successfully.
Oct 11 09:17:24 compute-0 sudo[380704]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:24 compute-0 sudo[380847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:17:24 compute-0 sudo[380847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:24 compute-0 sudo[380847]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:24 compute-0 sudo[380872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:17:24 compute-0 sudo[380872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:24 compute-0 sudo[380872]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:24 compute-0 sudo[380897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:17:24 compute-0 sudo[380897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:24 compute-0 sudo[380897]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:24 compute-0 sudo[380922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:17:24 compute-0 sudo[380922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2267: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 16 KiB/s wr, 29 op/s
Oct 11 09:17:24 compute-0 nova_compute[260935]: 2025-10-11 09:17:24.725 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174229.7236707, a1232bdc-1728-423e-91ef-f46614fcec43 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:17:24 compute-0 nova_compute[260935]: 2025-10-11 09:17:24.726 2 INFO nova.compute.manager [-] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] VM Stopped (Lifecycle Event)
Oct 11 09:17:24 compute-0 nova_compute[260935]: 2025-10-11 09:17:24.750 2 DEBUG nova.compute.manager [None req-6a04600b-dd74-4839-8704-15b7ab546ab5 - - - - - -] [instance: a1232bdc-1728-423e-91ef-f46614fcec43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:17:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:17:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:17:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:17:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:17:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:17:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:17:24 compute-0 podman[380987]: 2025-10-11 09:17:24.890624254 +0000 UTC m=+0.067344505 container create c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:17:24 compute-0 systemd[1]: Started libpod-conmon-c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf.scope.
Oct 11 09:17:24 compute-0 podman[380987]: 2025-10-11 09:17:24.861790246 +0000 UTC m=+0.038510527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:17:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:17:24 compute-0 podman[380987]: 2025-10-11 09:17:24.999455856 +0000 UTC m=+0.176176147 container init c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 09:17:25 compute-0 podman[380987]: 2025-10-11 09:17:25.010288686 +0000 UTC m=+0.187008937 container start c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:17:25 compute-0 podman[380987]: 2025-10-11 09:17:25.014864403 +0000 UTC m=+0.191584724 container attach c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_maxwell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:17:25 compute-0 unruffled_maxwell[381003]: 167 167
Oct 11 09:17:25 compute-0 systemd[1]: libpod-c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf.scope: Deactivated successfully.
Oct 11 09:17:25 compute-0 podman[380987]: 2025-10-11 09:17:25.018259697 +0000 UTC m=+0.194979938 container died c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_maxwell, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:17:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc9414be89defe3bc867d47246018cb176d81d11da0418071126616fb2a16478-merged.mount: Deactivated successfully.
Oct 11 09:17:25 compute-0 podman[380987]: 2025-10-11 09:17:25.067619043 +0000 UTC m=+0.244339294 container remove c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:17:25 compute-0 systemd[1]: libpod-conmon-c57c04e63ef36cbc155540c24cd71fcac92407e9c265433609143bb59edfdfcf.scope: Deactivated successfully.
Oct 11 09:17:25 compute-0 podman[381027]: 2025-10-11 09:17:25.317050927 +0000 UTC m=+0.058803839 container create 5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_golick, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 09:17:25 compute-0 podman[381027]: 2025-10-11 09:17:25.288771244 +0000 UTC m=+0.030524166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:17:25 compute-0 systemd[1]: Started libpod-conmon-5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384.scope.
Oct 11 09:17:25 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65142affbd865170ec4f5fc19e8e95d9a4ac669a9dc9829b4db6989af2c173e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65142affbd865170ec4f5fc19e8e95d9a4ac669a9dc9829b4db6989af2c173e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65142affbd865170ec4f5fc19e8e95d9a4ac669a9dc9829b4db6989af2c173e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65142affbd865170ec4f5fc19e8e95d9a4ac669a9dc9829b4db6989af2c173e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:17:25 compute-0 podman[381027]: 2025-10-11 09:17:25.452424824 +0000 UTC m=+0.194177796 container init 5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 09:17:25 compute-0 podman[381027]: 2025-10-11 09:17:25.467558433 +0000 UTC m=+0.209311335 container start 5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 09:17:25 compute-0 podman[381027]: 2025-10-11 09:17:25.472171581 +0000 UTC m=+0.213924553 container attach 5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:17:25 compute-0 ceph-mon[74313]: pgmap v2267: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 16 KiB/s wr, 29 op/s
Oct 11 09:17:26 compute-0 nova_compute[260935]: 2025-10-11 09:17:26.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:26 compute-0 upbeat_golick[381044]: {
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:         "osd_id": 2,
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:         "type": "bluestore"
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:     },
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:         "osd_id": 0,
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:         "type": "bluestore"
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:     },
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:         "osd_id": 1,
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:         "type": "bluestore"
Oct 11 09:17:26 compute-0 upbeat_golick[381044]:     }
Oct 11 09:17:26 compute-0 upbeat_golick[381044]: }
Oct 11 09:17:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 4.5 KiB/s wr, 28 op/s
Oct 11 09:17:26 compute-0 systemd[1]: libpod-5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384.scope: Deactivated successfully.
Oct 11 09:17:26 compute-0 systemd[1]: libpod-5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384.scope: Consumed 1.193s CPU time.
Oct 11 09:17:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:17:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1594780645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:17:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:17:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1594780645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:17:26 compute-0 podman[381027]: 2025-10-11 09:17:26.659871865 +0000 UTC m=+1.401624777 container died 5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 09:17:26 compute-0 nova_compute[260935]: 2025-10-11 09:17:26.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:17:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d65142affbd865170ec4f5fc19e8e95d9a4ac669a9dc9829b4db6989af2c173e-merged.mount: Deactivated successfully.
Oct 11 09:17:26 compute-0 nova_compute[260935]: 2025-10-11 09:17:26.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:26 compute-0 podman[381027]: 2025-10-11 09:17:26.748021715 +0000 UTC m=+1.489774577 container remove 5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:17:26 compute-0 systemd[1]: libpod-conmon-5099cd8aadc93a4b5bc29e5d7c1d57d0972964096eaefa6ee201588c2c3ec384.scope: Deactivated successfully.
Oct 11 09:17:26 compute-0 sudo[380922]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:26 compute-0 nova_compute[260935]: 2025-10-11 09:17:26.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:17:26 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:17:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:17:26 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:17:26 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 0c57ba51-073e-4350-88d2-e32ba346f31c does not exist
Oct 11 09:17:26 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 62dcae6e-e495-47bb-83d2-195ccb4f99ed does not exist
Oct 11 09:17:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1594780645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:17:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1594780645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:17:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:17:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:17:26 compute-0 sudo[381091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:17:26 compute-0 sudo[381091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:26 compute-0 sudo[381091]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:26 compute-0 sudo[381116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:17:27 compute-0 sudo[381116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:17:27 compute-0 sudo[381116]: pam_unix(sudo:session): session closed for user root
Oct 11 09:17:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:27.503 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:17:27 compute-0 nova_compute[260935]: 2025-10-11 09:17:27.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:27.506 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:17:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:27.508 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:17:27 compute-0 nova_compute[260935]: 2025-10-11 09:17:27.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:17:27 compute-0 ceph-mon[74313]: pgmap v2268: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 4.5 KiB/s wr, 28 op/s
Oct 11 09:17:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2269: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 4.5 KiB/s wr, 28 op/s
Oct 11 09:17:28 compute-0 nova_compute[260935]: 2025-10-11 09:17:28.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:17:28 compute-0 nova_compute[260935]: 2025-10-11 09:17:28.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:17:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:17:29 compute-0 nova_compute[260935]: 2025-10-11 09:17:29.451 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "25c0d908-6200-4e9e-8914-ed531abe14bf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:29 compute-0 nova_compute[260935]: 2025-10-11 09:17:29.452 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:29 compute-0 nova_compute[260935]: 2025-10-11 09:17:29.470 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:17:29 compute-0 nova_compute[260935]: 2025-10-11 09:17:29.556 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:29 compute-0 nova_compute[260935]: 2025-10-11 09:17:29.556 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:29 compute-0 nova_compute[260935]: 2025-10-11 09:17:29.571 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:17:29 compute-0 nova_compute[260935]: 2025-10-11 09:17:29.571 2 INFO nova.compute.claims [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:17:29 compute-0 nova_compute[260935]: 2025-10-11 09:17:29.787 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:29 compute-0 ceph-mon[74313]: pgmap v2269: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 4.5 KiB/s wr, 28 op/s
Oct 11 09:17:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:17:30 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2284915098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.284 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.293 2 DEBUG nova.compute.provider_tree [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.317 2 DEBUG nova.scheduler.client.report [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.354 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.356 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.425 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.426 2 DEBUG nova.network.neutron [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.449 2 INFO nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.472 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.603 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.606 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.606 2 INFO nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Creating image(s)
Oct 11 09:17:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2270: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 852 B/s rd, 0 B/s wr, 0 op/s
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.642 2 DEBUG nova.storage.rbd_utils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 25c0d908-6200-4e9e-8914-ed531abe14bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.680 2 DEBUG nova.storage.rbd_utils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 25c0d908-6200-4e9e-8914-ed531abe14bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.722 2 DEBUG nova.storage.rbd_utils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 25c0d908-6200-4e9e-8914-ed531abe14bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.729 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.791 2 DEBUG nova.policy [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.796 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.796 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.840 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.841 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.842 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.843 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:30 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2284915098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.877 2 DEBUG nova.storage.rbd_utils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 25c0d908-6200-4e9e-8914-ed531abe14bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:17:30 compute-0 nova_compute[260935]: 2025-10-11 09:17:30.882 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 25c0d908-6200-4e9e-8914-ed531abe14bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:31 compute-0 nova_compute[260935]: 2025-10-11 09:17:31.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:31 compute-0 nova_compute[260935]: 2025-10-11 09:17:31.251 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 25c0d908-6200-4e9e-8914-ed531abe14bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.370s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:31 compute-0 nova_compute[260935]: 2025-10-11 09:17:31.334 2 DEBUG nova.storage.rbd_utils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 25c0d908-6200-4e9e-8914-ed531abe14bf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:17:31 compute-0 nova_compute[260935]: 2025-10-11 09:17:31.450 2 DEBUG nova.objects.instance [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 25c0d908-6200-4e9e-8914-ed531abe14bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:17:31 compute-0 nova_compute[260935]: 2025-10-11 09:17:31.469 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:17:31 compute-0 nova_compute[260935]: 2025-10-11 09:17:31.469 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Ensure instance console log exists: /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:17:31 compute-0 nova_compute[260935]: 2025-10-11 09:17:31.470 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:31 compute-0 nova_compute[260935]: 2025-10-11 09:17:31.470 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:31 compute-0 nova_compute[260935]: 2025-10-11 09:17:31.470 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:31 compute-0 nova_compute[260935]: 2025-10-11 09:17:31.760 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174236.7596006, 8354434f-daa4-4745-9755-bd2465f5459b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:17:31 compute-0 nova_compute[260935]: 2025-10-11 09:17:31.761 2 INFO nova.compute.manager [-] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] VM Stopped (Lifecycle Event)
Oct 11 09:17:31 compute-0 nova_compute[260935]: 2025-10-11 09:17:31.779 2 DEBUG nova.compute.manager [None req-c5c4e698-b6f5-456d-9a0c-9c5d2643dc9f - - - - - -] [instance: 8354434f-daa4-4745-9755-bd2465f5459b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:17:31 compute-0 nova_compute[260935]: 2025-10-11 09:17:31.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:31 compute-0 podman[381329]: 2025-10-11 09:17:31.800586254 +0000 UTC m=+0.092662266 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:17:31 compute-0 nova_compute[260935]: 2025-10-11 09:17:31.837 2 DEBUG nova.network.neutron [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Successfully created port: 25dc7747-60d0-4a38-8938-dfa9f55068b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:17:31 compute-0 ceph-mon[74313]: pgmap v2270: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 852 B/s rd, 0 B/s wr, 0 op/s
Oct 11 09:17:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 321 active+clean; 435 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 1.0 MiB/s wr, 12 op/s
Oct 11 09:17:32 compute-0 nova_compute[260935]: 2025-10-11 09:17:32.700 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:17:32 compute-0 nova_compute[260935]: 2025-10-11 09:17:32.727 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:17:32 compute-0 nova_compute[260935]: 2025-10-11 09:17:32.727 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:17:32 compute-0 nova_compute[260935]: 2025-10-11 09:17:32.826 2 DEBUG nova.network.neutron [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Successfully updated port: 25dc7747-60d0-4a38-8938-dfa9f55068b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:17:32 compute-0 nova_compute[260935]: 2025-10-11 09:17:32.841 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:17:32 compute-0 nova_compute[260935]: 2025-10-11 09:17:32.841 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:17:32 compute-0 nova_compute[260935]: 2025-10-11 09:17:32.842 2 DEBUG nova.network.neutron [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:17:32 compute-0 nova_compute[260935]: 2025-10-11 09:17:32.939 2 DEBUG nova.compute.manager [req-f98e1868-53db-4ce1-9ff2-db865e9e6823 req-48a38adc-d014-4c15-b42b-d088179ab618 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received event network-changed-25dc7747-60d0-4a38-8938-dfa9f55068b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:17:32 compute-0 nova_compute[260935]: 2025-10-11 09:17:32.940 2 DEBUG nova.compute.manager [req-f98e1868-53db-4ce1-9ff2-db865e9e6823 req-48a38adc-d014-4c15-b42b-d088179ab618 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Refreshing instance network info cache due to event network-changed-25dc7747-60d0-4a38-8938-dfa9f55068b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:17:32 compute-0 nova_compute[260935]: 2025-10-11 09:17:32.941 2 DEBUG oslo_concurrency.lockutils [req-f98e1868-53db-4ce1-9ff2-db865e9e6823 req-48a38adc-d014-4c15-b42b-d088179ab618 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:17:32 compute-0 nova_compute[260935]: 2025-10-11 09:17:32.969 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:17:32 compute-0 nova_compute[260935]: 2025-10-11 09:17:32.969 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:17:32 compute-0 nova_compute[260935]: 2025-10-11 09:17:32.969 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:17:33 compute-0 nova_compute[260935]: 2025-10-11 09:17:33.350 2 DEBUG nova.network.neutron [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:17:33 compute-0 ceph-mon[74313]: pgmap v2271: 321 pgs: 321 active+clean; 435 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 1.0 MiB/s wr, 12 op/s
Oct 11 09:17:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:17:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:17:34 compute-0 nova_compute[260935]: 2025-10-11 09:17:34.979 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:34.999 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.000 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.001 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.001 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.034 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.035 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.035 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.035 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.036 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.385 2 DEBUG nova.network.neutron [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Updating instance_info_cache with network_info: [{"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.409 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.410 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Instance network_info: |[{"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.411 2 DEBUG oslo_concurrency.lockutils [req-f98e1868-53db-4ce1-9ff2-db865e9e6823 req-48a38adc-d014-4c15-b42b-d088179ab618 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.412 2 DEBUG nova.network.neutron [req-f98e1868-53db-4ce1-9ff2-db865e9e6823 req-48a38adc-d014-4c15-b42b-d088179ab618 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Refreshing network info cache for port 25dc7747-60d0-4a38-8938-dfa9f55068b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.419 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Start _get_guest_xml network_info=[{"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.428 2 WARNING nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.434 2 DEBUG nova.virt.libvirt.host [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.436 2 DEBUG nova.virt.libvirt.host [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.453 2 DEBUG nova.virt.libvirt.host [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.454 2 DEBUG nova.virt.libvirt.host [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.455 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.455 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.457 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.457 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.458 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.458 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.459 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.460 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.460 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.461 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.461 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.462 2 DEBUG nova.virt.hardware [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.468 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:17:35 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159254145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.561 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.660 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.660 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.661 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.665 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.665 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.669 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.669 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.673 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.673 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:17:35 compute-0 ceph-mon[74313]: pgmap v2272: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:17:35 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2159254145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.934 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.936 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2731MB free_disk=59.76434326171875GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.937 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.937 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:17:35 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3103790310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:17:35 compute-0 nova_compute[260935]: 2025-10-11 09:17:35.982 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.016 2 DEBUG nova.storage.rbd_utils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 25c0d908-6200-4e9e-8914-ed531abe14bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.021 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.125 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.126 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.126 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.126 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 187076f6-221b-4a35-a7a8-9ba7c2a546b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.127 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 25c0d908-6200-4e9e-8914-ed531abe14bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.127 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.127 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.272 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:17:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2726868061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.528 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.531 2 DEBUG nova.virt.libvirt.vif [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:17:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-637088263',display_name='tempest-TestGettingAddress-server-637088263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-637088263',id=111,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIEEroIcQI6yjPYNFRbpO55PkieYAc+yLELOXjNZohffuvQJuC/A538gnzESmYvfIV1iCxTeQxcsCPGRPplzY6F4Y6cKL3td1D8v0hhGFKNU2eGLTpxtQxcU6ACWQ1bQPg==',key_name='tempest-TestGettingAddress-609422538',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-249bfkz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:17:30Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=25c0d908-6200-4e9e-8914-ed531abe14bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.532 2 DEBUG nova.network.os_vif_util [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.533 2 DEBUG nova.network.os_vif_util [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:fc:a5,bridge_name='br-int',has_traffic_filtering=True,id=25dc7747-60d0-4a38-8938-dfa9f55068b3,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25dc7747-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.536 2 DEBUG nova.objects.instance [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 25c0d908-6200-4e9e-8914-ed531abe14bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.555 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:17:36 compute-0 nova_compute[260935]:   <uuid>25c0d908-6200-4e9e-8914-ed531abe14bf</uuid>
Oct 11 09:17:36 compute-0 nova_compute[260935]:   <name>instance-0000006f</name>
Oct 11 09:17:36 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:17:36 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:17:36 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <nova:name>tempest-TestGettingAddress-server-637088263</nova:name>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:17:35</nova:creationTime>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:17:36 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:17:36 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:17:36 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:17:36 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:17:36 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:17:36 compute-0 nova_compute[260935]:         <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 09:17:36 compute-0 nova_compute[260935]:         <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:17:36 compute-0 nova_compute[260935]:         <nova:port uuid="25dc7747-60d0-4a38-8938-dfa9f55068b3">
Oct 11 09:17:36 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe8e:fca5" ipVersion="6"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:17:36 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:17:36 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <system>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <entry name="serial">25c0d908-6200-4e9e-8914-ed531abe14bf</entry>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <entry name="uuid">25c0d908-6200-4e9e-8914-ed531abe14bf</entry>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     </system>
Oct 11 09:17:36 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:17:36 compute-0 nova_compute[260935]:   <os>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:   </os>
Oct 11 09:17:36 compute-0 nova_compute[260935]:   <features>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:   </features>
Oct 11 09:17:36 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:17:36 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:17:36 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/25c0d908-6200-4e9e-8914-ed531abe14bf_disk">
Oct 11 09:17:36 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       </source>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:17:36 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/25c0d908-6200-4e9e-8914-ed531abe14bf_disk.config">
Oct 11 09:17:36 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       </source>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:17:36 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:8e:fc:a5"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <target dev="tap25dc7747-60"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf/console.log" append="off"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <video>
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     </video>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:17:36 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:17:36 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:17:36 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:17:36 compute-0 nova_compute[260935]: </domain>
Oct 11 09:17:36 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.556 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Preparing to wait for external event network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.556 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.557 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.557 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.558 2 DEBUG nova.virt.libvirt.vif [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:17:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-637088263',display_name='tempest-TestGettingAddress-server-637088263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-637088263',id=111,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIEEroIcQI6yjPYNFRbpO55PkieYAc+yLELOXjNZohffuvQJuC/A538gnzESmYvfIV1iCxTeQxcsCPGRPplzY6F4Y6cKL3td1D8v0hhGFKNU2eGLTpxtQxcU6ACWQ1bQPg==',key_name='tempest-TestGettingAddress-609422538',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-249bfkz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:17:30Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=25c0d908-6200-4e9e-8914-ed531abe14bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.559 2 DEBUG nova.network.os_vif_util [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.560 2 DEBUG nova.network.os_vif_util [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:fc:a5,bridge_name='br-int',has_traffic_filtering=True,id=25dc7747-60d0-4a38-8938-dfa9f55068b3,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25dc7747-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.560 2 DEBUG os_vif [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:fc:a5,bridge_name='br-int',has_traffic_filtering=True,id=25dc7747-60d0-4a38-8938-dfa9f55068b3,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25dc7747-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.561 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.562 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.566 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25dc7747-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.567 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25dc7747-60, col_values=(('external_ids', {'iface-id': '25dc7747-60d0-4a38-8938-dfa9f55068b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:fc:a5', 'vm-uuid': '25c0d908-6200-4e9e-8914-ed531abe14bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:36 compute-0 NetworkManager[44960]: <info>  [1760174256.5709] manager: (tap25dc7747-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/446)
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.580 2 INFO os_vif [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:fc:a5,bridge_name='br-int',has_traffic_filtering=True,id=25dc7747-60d0-4a38-8938-dfa9f55068b3,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25dc7747-60')
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.639 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.640 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.640 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:8e:fc:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.641 2 INFO nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Using config drive
Oct 11 09:17:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.674 2 DEBUG nova.storage.rbd_utils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 25c0d908-6200-4e9e-8914-ed531abe14bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:17:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:17:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1308882637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.757 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.765 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:17:36 compute-0 podman[381472]: 2025-10-11 09:17:36.781599874 +0000 UTC m=+0.086224338 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid)
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.785 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.819 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:17:36 compute-0 nova_compute[260935]: 2025-10-11 09:17:36.819 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3103790310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:17:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2726868061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:17:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1308882637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:17:37 compute-0 nova_compute[260935]: 2025-10-11 09:17:37.258 2 DEBUG nova.network.neutron [req-f98e1868-53db-4ce1-9ff2-db865e9e6823 req-48a38adc-d014-4c15-b42b-d088179ab618 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Updated VIF entry in instance network info cache for port 25dc7747-60d0-4a38-8938-dfa9f55068b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:17:37 compute-0 nova_compute[260935]: 2025-10-11 09:17:37.259 2 DEBUG nova.network.neutron [req-f98e1868-53db-4ce1-9ff2-db865e9e6823 req-48a38adc-d014-4c15-b42b-d088179ab618 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Updating instance_info_cache with network_info: [{"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:17:37 compute-0 nova_compute[260935]: 2025-10-11 09:17:37.275 2 DEBUG oslo_concurrency.lockutils [req-f98e1868-53db-4ce1-9ff2-db865e9e6823 req-48a38adc-d014-4c15-b42b-d088179ab618 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:17:37 compute-0 nova_compute[260935]: 2025-10-11 09:17:37.328 2 INFO nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Creating config drive at /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf/disk.config
Oct 11 09:17:37 compute-0 nova_compute[260935]: 2025-10-11 09:17:37.337 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz_1zzlzt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:37 compute-0 nova_compute[260935]: 2025-10-11 09:17:37.499 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz_1zzlzt" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:37 compute-0 nova_compute[260935]: 2025-10-11 09:17:37.541 2 DEBUG nova.storage.rbd_utils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 25c0d908-6200-4e9e-8914-ed531abe14bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:17:37 compute-0 nova_compute[260935]: 2025-10-11 09:17:37.548 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf/disk.config 25c0d908-6200-4e9e-8914-ed531abe14bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:37 compute-0 nova_compute[260935]: 2025-10-11 09:17:37.761 2 DEBUG oslo_concurrency.processutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf/disk.config 25c0d908-6200-4e9e-8914-ed531abe14bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:37 compute-0 nova_compute[260935]: 2025-10-11 09:17:37.763 2 INFO nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Deleting local config drive /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf/disk.config because it was imported into RBD.
Oct 11 09:17:37 compute-0 kernel: tap25dc7747-60: entered promiscuous mode
Oct 11 09:17:37 compute-0 NetworkManager[44960]: <info>  [1760174257.8427] manager: (tap25dc7747-60): new Tun device (/org/freedesktop/NetworkManager/Devices/447)
Oct 11 09:17:37 compute-0 nova_compute[260935]: 2025-10-11 09:17:37.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:37 compute-0 ovn_controller[152945]: 2025-10-11T09:17:37Z|01091|binding|INFO|Claiming lport 25dc7747-60d0-4a38-8938-dfa9f55068b3 for this chassis.
Oct 11 09:17:37 compute-0 ovn_controller[152945]: 2025-10-11T09:17:37Z|01092|binding|INFO|25dc7747-60d0-4a38-8938-dfa9f55068b3: Claiming fa:16:3e:8e:fc:a5 10.100.0.3 2001:db8::f816:3eff:fe8e:fca5
Oct 11 09:17:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:37.858 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:fc:a5 10.100.0.3 2001:db8::f816:3eff:fe8e:fca5'], port_security=['fa:16:3e:8e:fc:a5 10.100.0.3 2001:db8::f816:3eff:fe8e:fca5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe8e:fca5/64', 'neutron:device_id': '25c0d908-6200-4e9e-8914-ed531abe14bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f4ce403-2596-441d-805b-ba15e2f385a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '89a40070-a057-46b1-9d97-ea387c29c959', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec00ec81-0492-43bf-b21e-a8398ff551c2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=25dc7747-60d0-4a38-8938-dfa9f55068b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:17:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:37.860 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 25dc7747-60d0-4a38-8938-dfa9f55068b3 in datapath 2f4ce403-2596-441d-805b-ba15e2f385a1 bound to our chassis
Oct 11 09:17:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:37.863 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f4ce403-2596-441d-805b-ba15e2f385a1
Oct 11 09:17:37 compute-0 ovn_controller[152945]: 2025-10-11T09:17:37Z|01093|binding|INFO|Setting lport 25dc7747-60d0-4a38-8938-dfa9f55068b3 ovn-installed in OVS
Oct 11 09:17:37 compute-0 ovn_controller[152945]: 2025-10-11T09:17:37Z|01094|binding|INFO|Setting lport 25dc7747-60d0-4a38-8938-dfa9f55068b3 up in Southbound
Oct 11 09:17:37 compute-0 nova_compute[260935]: 2025-10-11 09:17:37.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:37.899 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d338ee-653d-4a9d-b51b-fca6b7af64fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:37 compute-0 systemd-machined[215705]: New machine qemu-134-instance-0000006f.
Oct 11 09:17:37 compute-0 ceph-mon[74313]: pgmap v2273: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:17:37 compute-0 systemd[1]: Started Virtual Machine qemu-134-instance-0000006f.
Oct 11 09:17:37 compute-0 systemd-udevd[381552]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:17:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:37.951 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[218c46ba-2485-4cc0-aee8-4e419f94f801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:37.955 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8805a5-754d-4af0-8ac0-2bb4ab54f918]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:37 compute-0 NetworkManager[44960]: <info>  [1760174257.9599] device (tap25dc7747-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:17:37 compute-0 NetworkManager[44960]: <info>  [1760174257.9613] device (tap25dc7747-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:17:37 compute-0 nova_compute[260935]: 2025-10-11 09:17:37.962 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:37 compute-0 nova_compute[260935]: 2025-10-11 09:17:37.963 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:37 compute-0 nova_compute[260935]: 2025-10-11 09:17:37.983 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:17:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:38.007 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3feaca5d-8bcf-441a-b374-d50fdc26a080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:38.036 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[44a25d93-a323-496c-9b2e-8accb14509b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f4ce403-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:43:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1886, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1886, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605471, 'reachable_time': 19417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 19, 'inoctets': 1536, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 19, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1536, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 19, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381562, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.062 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.063 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:38.066 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[da76c7fa-3508-4370-824a-4fb9fee9d4a6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2f4ce403-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605487, 'tstamp': 605487}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381564, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2f4ce403-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605491, 'tstamp': 605491}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381564, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:38.069 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f4ce403-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.072 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:17:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:38.073 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f4ce403-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:17:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:38.073 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.073 2 INFO nova.compute.claims [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:17:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:38.074 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f4ce403-20, col_values=(('external_ids', {'iface-id': '722d4b4c-2d64-4e8c-b343-4ac25259f23b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:17:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:38.074 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.334 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 11 09:17:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:17:38 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2439670451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.849 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.857 2 DEBUG nova.compute.provider_tree [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.878 2 DEBUG nova.scheduler.client.report [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.910 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.911 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.914 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174258.911845, 25c0d908-6200-4e9e-8914-ed531abe14bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.915 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] VM Started (Lifecycle Event)
Oct 11 09:17:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2439670451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:17:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:38.962747) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174258962883, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 682, "num_deletes": 256, "total_data_size": 779644, "memory_usage": 792776, "flush_reason": "Manual Compaction"}
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.968 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174258973276, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 772323, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47134, "largest_seqno": 47815, "table_properties": {"data_size": 768740, "index_size": 1427, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8114, "raw_average_key_size": 18, "raw_value_size": 761500, "raw_average_value_size": 1775, "num_data_blocks": 63, "num_entries": 429, "num_filter_entries": 429, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760174209, "oldest_key_time": 1760174209, "file_creation_time": 1760174258, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 10559 microseconds, and 6168 cpu microseconds.
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.974 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174258.914773, 25c0d908-6200-4e9e-8914-ed531abe14bf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.974 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] VM Paused (Lifecycle Event)
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:38.973323) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 772323 bytes OK
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:38.973349) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:38.975206) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:38.975220) EVENT_LOG_v1 {"time_micros": 1760174258975216, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:38.975242) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 776030, prev total WAL file size 776030, number of live WAL files 2.
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:38.975868) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373538' seq:72057594037927935, type:22 .. '6C6F676D0032303130' seq:0, type:0; will stop at (end)
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(754KB)], [107(10162KB)]
Oct 11 09:17:38 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174258975924, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 11178451, "oldest_snapshot_seqno": -1}
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.986 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.986 2 DEBUG nova.network.neutron [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:17:38 compute-0 nova_compute[260935]: 2025-10-11 09:17:38.996 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.002 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.007 2 INFO nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:17:39 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6860 keys, 11055345 bytes, temperature: kUnknown
Oct 11 09:17:39 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174259021343, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 11055345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11006807, "index_size": 30314, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 178275, "raw_average_key_size": 25, "raw_value_size": 10881192, "raw_average_value_size": 1586, "num_data_blocks": 1195, "num_entries": 6860, "num_filter_entries": 6860, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760174258, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:17:39 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:17:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:39.021583) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 11055345 bytes
Oct 11 09:17:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:39.022675) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 245.7 rd, 243.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.9 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(28.8) write-amplify(14.3) OK, records in: 7384, records dropped: 524 output_compression: NoCompression
Oct 11 09:17:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:39.022690) EVENT_LOG_v1 {"time_micros": 1760174259022683, "job": 64, "event": "compaction_finished", "compaction_time_micros": 45493, "compaction_time_cpu_micros": 24127, "output_level": 6, "num_output_files": 1, "total_output_size": 11055345, "num_input_records": 7384, "num_output_records": 6860, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:17:39 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:17:39 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174259022934, "job": 64, "event": "table_file_deletion", "file_number": 109}
Oct 11 09:17:39 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:17:39 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174259024702, "job": 64, "event": "table_file_deletion", "file_number": 107}
Oct 11 09:17:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:38.975716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:17:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:39.024863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:17:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:39.024873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:17:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:39.024876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:17:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:39.024878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:17:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:17:39.024880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.036 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.039 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.135 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.136 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.136 2 INFO nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Creating image(s)
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.161 2 DEBUG nova.storage.rbd_utils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.193 2 DEBUG nova.storage.rbd_utils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.216 2 DEBUG nova.storage.rbd_utils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.221 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.284 2 DEBUG nova.policy [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.329 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.330 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.330 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.331 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.356 2 DEBUG nova.storage.rbd_utils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.362 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.651 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.737 2 DEBUG nova.storage.rbd_utils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.850 2 DEBUG nova.objects.instance [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid 0c16a8df-379f-45ee-b8a2-930ab997e47b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.873 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.873 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Ensure instance console log exists: /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.874 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.874 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:39 compute-0 nova_compute[260935]: 2025-10-11 09:17:39.875 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:39 compute-0 ceph-mon[74313]: pgmap v2274: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 11 09:17:40 compute-0 nova_compute[260935]: 2025-10-11 09:17:40.582 2 DEBUG nova.network.neutron [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Successfully created port: 2284d8f8-63ae-4cf4-b954-c08472abd3ed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:17:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2275: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 11 09:17:40 compute-0 nova_compute[260935]: 2025-10-11 09:17:40.977 2 DEBUG nova.compute.manager [req-fa3f69e6-f355-44fc-963f-7ad7c49d72e7 req-4d45a33a-2081-4cd8-9256-4553d3588359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received event network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:17:40 compute-0 nova_compute[260935]: 2025-10-11 09:17:40.978 2 DEBUG oslo_concurrency.lockutils [req-fa3f69e6-f355-44fc-963f-7ad7c49d72e7 req-4d45a33a-2081-4cd8-9256-4553d3588359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:40 compute-0 nova_compute[260935]: 2025-10-11 09:17:40.978 2 DEBUG oslo_concurrency.lockutils [req-fa3f69e6-f355-44fc-963f-7ad7c49d72e7 req-4d45a33a-2081-4cd8-9256-4553d3588359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:40 compute-0 nova_compute[260935]: 2025-10-11 09:17:40.979 2 DEBUG oslo_concurrency.lockutils [req-fa3f69e6-f355-44fc-963f-7ad7c49d72e7 req-4d45a33a-2081-4cd8-9256-4553d3588359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:40 compute-0 nova_compute[260935]: 2025-10-11 09:17:40.980 2 DEBUG nova.compute.manager [req-fa3f69e6-f355-44fc-963f-7ad7c49d72e7 req-4d45a33a-2081-4cd8-9256-4553d3588359 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Processing event network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:17:40 compute-0 nova_compute[260935]: 2025-10-11 09:17:40.981 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:17:40 compute-0 nova_compute[260935]: 2025-10-11 09:17:40.986 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174260.9860694, 25c0d908-6200-4e9e-8914-ed531abe14bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:17:40 compute-0 nova_compute[260935]: 2025-10-11 09:17:40.987 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] VM Resumed (Lifecycle Event)
Oct 11 09:17:40 compute-0 nova_compute[260935]: 2025-10-11 09:17:40.992 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.000 2 INFO nova.virt.libvirt.driver [-] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Instance spawned successfully.
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.001 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.029 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.038 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.044 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.044 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.045 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.046 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.047 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.048 2 DEBUG nova.virt.libvirt.driver [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.109 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.163 2 INFO nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Took 10.56 seconds to spawn the instance on the hypervisor.
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.164 2 DEBUG nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.238 2 INFO nova.compute.manager [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Took 11.72 seconds to build instance.
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.262 2 DEBUG oslo_concurrency.lockutils [None req-7b9fe19d-1144-4473-b7ea-5ac8c22507e4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.434 2 DEBUG nova.network.neutron [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Successfully updated port: 2284d8f8-63ae-4cf4-b954-c08472abd3ed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.449 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.449 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.450 2 DEBUG nova.network.neutron [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:41 compute-0 nova_compute[260935]: 2025-10-11 09:17:41.667 2 DEBUG nova.network.neutron [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:17:41 compute-0 ceph-mon[74313]: pgmap v2275: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 11 09:17:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 485 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.1 MiB/s wr, 96 op/s
Oct 11 09:17:42 compute-0 podman[381797]: 2025-10-11 09:17:42.816786604 +0000 UTC m=+0.108253268 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3)
Oct 11 09:17:42 compute-0 podman[381798]: 2025-10-11 09:17:42.855386412 +0000 UTC m=+0.145486698 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 11 09:17:42 compute-0 nova_compute[260935]: 2025-10-11 09:17:42.976 2 DEBUG nova.network.neutron [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.009 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.010 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Instance network_info: |[{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.014 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Start _get_guest_xml network_info=[{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.020 2 WARNING nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.027 2 DEBUG nova.virt.libvirt.host [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.028 2 DEBUG nova.virt.libvirt.host [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.031 2 DEBUG nova.virt.libvirt.host [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.032 2 DEBUG nova.virt.libvirt.host [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.032 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.032 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.033 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.033 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.033 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.034 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.034 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.034 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.034 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.035 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.035 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.035 2 DEBUG nova.virt.hardware [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.038 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:43 compute-0 sshd-session[381795]: Invalid user debian1 from 152.32.213.170 port 40590
Oct 11 09:17:43 compute-0 sshd-session[381795]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:17:43 compute-0 sshd-session[381795]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:17:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:17:43 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1643253741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.538 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.571 2 DEBUG nova.storage.rbd_utils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.576 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.634 2 DEBUG nova.compute.manager [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received event network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.635 2 DEBUG oslo_concurrency.lockutils [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.635 2 DEBUG oslo_concurrency.lockutils [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.636 2 DEBUG oslo_concurrency.lockutils [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.636 2 DEBUG nova.compute.manager [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] No waiting events found dispatching network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.636 2 WARNING nova.compute.manager [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received unexpected event network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 for instance with vm_state active and task_state None.
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.637 2 DEBUG nova.compute.manager [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-changed-2284d8f8-63ae-4cf4-b954-c08472abd3ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.637 2 DEBUG nova.compute.manager [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Refreshing instance network info cache due to event network-changed-2284d8f8-63ae-4cf4-b954-c08472abd3ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.638 2 DEBUG oslo_concurrency.lockutils [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.638 2 DEBUG oslo_concurrency.lockutils [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:17:43 compute-0 nova_compute[260935]: 2025-10-11 09:17:43.638 2 DEBUG nova.network.neutron [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Refreshing network info cache for port 2284d8f8-63ae-4cf4-b954-c08472abd3ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:17:43 compute-0 ceph-mon[74313]: pgmap v2276: 321 pgs: 321 active+clean; 485 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.1 MiB/s wr, 96 op/s
Oct 11 09:17:43 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1643253741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:17:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:17:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:17:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/99311287' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.064 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.067 2 DEBUG nova.virt.libvirt.vif [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:17:39Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.068 2 DEBUG nova.network.os_vif_util [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.069 2 DEBUG nova.network.os_vif_util [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:58:d3,bridge_name='br-int',has_traffic_filtering=True,id=2284d8f8-63ae-4cf4-b954-c08472abd3ed,network=Network(3f472aff-4044-47cd-b539-ffd0a15c2851),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2284d8f8-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.072 2 DEBUG nova.objects.instance [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0c16a8df-379f-45ee-b8a2-930ab997e47b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.095 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:17:44 compute-0 nova_compute[260935]:   <uuid>0c16a8df-379f-45ee-b8a2-930ab997e47b</uuid>
Oct 11 09:17:44 compute-0 nova_compute[260935]:   <name>instance-00000070</name>
Oct 11 09:17:44 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:17:44 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:17:44 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkBasicOps-server-1002613129</nova:name>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:17:43</nova:creationTime>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:17:44 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:17:44 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:17:44 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:17:44 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:17:44 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:17:44 compute-0 nova_compute[260935]:         <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:17:44 compute-0 nova_compute[260935]:         <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:17:44 compute-0 nova_compute[260935]:         <nova:port uuid="2284d8f8-63ae-4cf4-b954-c08472abd3ed">
Oct 11 09:17:44 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:17:44 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:17:44 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <system>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <entry name="serial">0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <entry name="uuid">0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     </system>
Oct 11 09:17:44 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:17:44 compute-0 nova_compute[260935]:   <os>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:   </os>
Oct 11 09:17:44 compute-0 nova_compute[260935]:   <features>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:   </features>
Oct 11 09:17:44 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:17:44 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:17:44 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk">
Oct 11 09:17:44 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       </source>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:17:44 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config">
Oct 11 09:17:44 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       </source>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:17:44 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:c9:58:d3"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <target dev="tap2284d8f8-63"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log" append="off"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <video>
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     </video>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:17:44 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:17:44 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:17:44 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:17:44 compute-0 nova_compute[260935]: </domain>
Oct 11 09:17:44 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.102 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Preparing to wait for external event network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.102 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.102 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.103 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.103 2 DEBUG nova.virt.libvirt.vif [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:17:39Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.103 2 DEBUG nova.network.os_vif_util [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.104 2 DEBUG nova.network.os_vif_util [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:58:d3,bridge_name='br-int',has_traffic_filtering=True,id=2284d8f8-63ae-4cf4-b954-c08472abd3ed,network=Network(3f472aff-4044-47cd-b539-ffd0a15c2851),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2284d8f8-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.104 2 DEBUG os_vif [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:58:d3,bridge_name='br-int',has_traffic_filtering=True,id=2284d8f8-63ae-4cf4-b954-c08472abd3ed,network=Network(3f472aff-4044-47cd-b539-ffd0a15c2851),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2284d8f8-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.105 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.106 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.109 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2284d8f8-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.110 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2284d8f8-63, col_values=(('external_ids', {'iface-id': '2284d8f8-63ae-4cf4-b954-c08472abd3ed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:58:d3', 'vm-uuid': '0c16a8df-379f-45ee-b8a2-930ab997e47b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:17:44 compute-0 NetworkManager[44960]: <info>  [1760174264.1126] manager: (tap2284d8f8-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/448)
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.119 2 INFO os_vif [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:58:d3,bridge_name='br-int',has_traffic_filtering=True,id=2284d8f8-63ae-4cf4-b954-c08472abd3ed,network=Network(3f472aff-4044-47cd-b539-ffd0a15c2851),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2284d8f8-63')
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.179 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.179 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.179 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:c9:58:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.180 2 INFO nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Using config drive
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.201 2 DEBUG nova.storage.rbd_utils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.591 2 INFO nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Creating config drive at /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/disk.config
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.600 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6j14a8af execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2277: 321 pgs: 321 active+clean; 500 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 116 op/s
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.772 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6j14a8af" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.814 2 DEBUG nova.storage.rbd_utils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.817 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/disk.config 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.951 2 DEBUG nova.network.neutron [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updated VIF entry in instance network info cache for port 2284d8f8-63ae-4cf4-b954-c08472abd3ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.952 2 DEBUG nova.network.neutron [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:17:44 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/99311287' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:17:44 compute-0 ceph-mon[74313]: pgmap v2277: 321 pgs: 321 active+clean; 500 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 116 op/s
Oct 11 09:17:44 compute-0 nova_compute[260935]: 2025-10-11 09:17:44.976 2 DEBUG oslo_concurrency.lockutils [req-5791cfbf-80dd-43bc-9479-c8ff34bbc78a req-6eb259e0-14c1-4981-80d7-c660ef55db72 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.011 2 DEBUG oslo_concurrency.processutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/disk.config 0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.012 2 INFO nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Deleting local config drive /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/disk.config because it was imported into RBD.
Oct 11 09:17:45 compute-0 kernel: tap2284d8f8-63: entered promiscuous mode
Oct 11 09:17:45 compute-0 NetworkManager[44960]: <info>  [1760174265.0571] manager: (tap2284d8f8-63): new Tun device (/org/freedesktop/NetworkManager/Devices/449)
Oct 11 09:17:45 compute-0 ovn_controller[152945]: 2025-10-11T09:17:45Z|01095|binding|INFO|Claiming lport 2284d8f8-63ae-4cf4-b954-c08472abd3ed for this chassis.
Oct 11 09:17:45 compute-0 ovn_controller[152945]: 2025-10-11T09:17:45Z|01096|binding|INFO|2284d8f8-63ae-4cf4-b954-c08472abd3ed: Claiming fa:16:3e:c9:58:d3 10.100.0.14
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.072 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:58:d3 10.100.0.14'], port_security=['fa:16:3e:c9:58:d3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0c16a8df-379f-45ee-b8a2-930ab997e47b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f472aff-4044-47cd-b539-ffd0a15c2851', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19e2c24e-234b-4dae-9978-109acb79adf0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83048660-eea0-4997-8f0a-503095730c3f, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=2284d8f8-63ae-4cf4-b954-c08472abd3ed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.073 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 2284d8f8-63ae-4cf4-b954-c08472abd3ed in datapath 3f472aff-4044-47cd-b539-ffd0a15c2851 bound to our chassis
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.075 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3f472aff-4044-47cd-b539-ffd0a15c2851
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.093 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[abf070f0-3729-4078-bbe1-31bdb73ff364]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.094 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3f472aff-41 in ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:17:45 compute-0 systemd-machined[215705]: New machine qemu-135-instance-00000070.
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.098 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3f472aff-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.098 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a5643077-2188-4ec6-b411-81de4315f3e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.099 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf03568-91fb-4493-87bb-f5c8f42e5ba8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:45 compute-0 ovn_controller[152945]: 2025-10-11T09:17:45Z|01097|binding|INFO|Setting lport 2284d8f8-63ae-4cf4-b954-c08472abd3ed ovn-installed in OVS
Oct 11 09:17:45 compute-0 ovn_controller[152945]: 2025-10-11T09:17:45Z|01098|binding|INFO|Setting lport 2284d8f8-63ae-4cf4-b954-c08472abd3ed up in Southbound
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.115 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[51270ea0-0b54-470f-ab41-d4450042b9ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:45 compute-0 systemd[1]: Started Virtual Machine qemu-135-instance-00000070.
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:45 compute-0 systemd-udevd[381980]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:17:45 compute-0 NetworkManager[44960]: <info>  [1760174265.1377] device (tap2284d8f8-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:17:45 compute-0 NetworkManager[44960]: <info>  [1760174265.1385] device (tap2284d8f8-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.156 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6f674e60-5782-4ee2-bffc-67c055de3844]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.195 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c56fc655-4179-4ad3-bcfd-18999ae98886]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.203 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6c163da6-154c-4e5d-aae7-b7e4d09bc9ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:45 compute-0 systemd-udevd[381983]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:17:45 compute-0 NetworkManager[44960]: <info>  [1760174265.2039] manager: (tap3f472aff-40): new Veth device (/org/freedesktop/NetworkManager/Devices/450)
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.227 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea70187-2079-4644-b80b-6029ad835dd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.232 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[95d37ba9-e797-4842-a210-ff83561f6d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:45 compute-0 NetworkManager[44960]: <info>  [1760174265.2579] device (tap3f472aff-40): carrier: link connected
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.269 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7cc32314-e130-43d7-ac79-fa18b55ede57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.293 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a230fcc4-78ef-4923-9501-ceaf115a7484]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f472aff-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:0e:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 315], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610996, 'reachable_time': 15084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382011, 'error': None, 'target': 'ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.315 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb5bb51-2e18-4c4a-a2f3-000065e356b2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:ec0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 610996, 'tstamp': 610996}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382012, 'error': None, 'target': 'ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.342 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[054451a7-38d8-49d0-b4b5-11da65a4c8e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f472aff-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:0e:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 315], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610996, 'reachable_time': 15084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382013, 'error': None, 'target': 'ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:45 compute-0 sshd-session[381795]: Failed password for invalid user debian1 from 152.32.213.170 port 40590 ssh2
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.391 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c8b8e300-1dba-4f54-952e-03a39f907a47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.465 2 DEBUG nova.compute.manager [req-fe95fe1e-2817-431f-88ac-e0b0b8b048d0 req-66bf1b17-2eac-46bd-83b8-5dafebbbad12 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.465 2 DEBUG oslo_concurrency.lockutils [req-fe95fe1e-2817-431f-88ac-e0b0b8b048d0 req-66bf1b17-2eac-46bd-83b8-5dafebbbad12 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.466 2 DEBUG oslo_concurrency.lockutils [req-fe95fe1e-2817-431f-88ac-e0b0b8b048d0 req-66bf1b17-2eac-46bd-83b8-5dafebbbad12 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.466 2 DEBUG oslo_concurrency.lockutils [req-fe95fe1e-2817-431f-88ac-e0b0b8b048d0 req-66bf1b17-2eac-46bd-83b8-5dafebbbad12 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.466 2 DEBUG nova.compute.manager [req-fe95fe1e-2817-431f-88ac-e0b0b8b048d0 req-66bf1b17-2eac-46bd-83b8-5dafebbbad12 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Processing event network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.476 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0e301a-df5a-4951-badb-4bbb3a8c7617]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.477 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f472aff-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.477 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.477 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f472aff-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:17:45 compute-0 kernel: tap3f472aff-40: entered promiscuous mode
Oct 11 09:17:45 compute-0 NetworkManager[44960]: <info>  [1760174265.4807] manager: (tap3f472aff-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.483 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3f472aff-40, col_values=(('external_ids', {'iface-id': '117fa814-7b44-4449-adf6-8726d78cffe9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.489 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3f472aff-4044-47cd-b539-ffd0a15c2851.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3f472aff-4044-47cd-b539-ffd0a15c2851.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.490 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[46ac8abd-2957-491b-8d5d-3855ee8fd40e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.491 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-3f472aff-4044-47cd-b539-ffd0a15c2851
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/3f472aff-4044-47cd-b539-ffd0a15c2851.pid.haproxy
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 3f472aff-4044-47cd-b539-ffd0a15c2851
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:17:45 compute-0 ovn_controller[152945]: 2025-10-11T09:17:45Z|01099|binding|INFO|Releasing lport 117fa814-7b44-4449-adf6-8726d78cffe9 from this chassis (sb_readonly=0)
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:17:45.491 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851', 'env', 'PROCESS_TAG=haproxy-3f472aff-4044-47cd-b539-ffd0a15c2851', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3f472aff-4044-47cd-b539-ffd0a15c2851.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.874 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174265.8736994, 0c16a8df-379f-45ee-b8a2-930ab997e47b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.876 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] VM Started (Lifecycle Event)
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.879 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.883 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.887 2 INFO nova.virt.libvirt.driver [-] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Instance spawned successfully.
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.888 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.914 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:17:45 compute-0 podman[382085]: 2025-10-11 09:17:45.917992092 +0000 UTC m=+0.071681555 container create 1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.925 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.932 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.933 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.934 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.935 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.936 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.937 2 DEBUG nova.virt.libvirt.driver [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.949 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.950 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174265.8739605, 0c16a8df-379f-45ee-b8a2-930ab997e47b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.950 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] VM Paused (Lifecycle Event)
Oct 11 09:17:45 compute-0 systemd[1]: Started libpod-conmon-1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390.scope.
Oct 11 09:17:45 compute-0 podman[382085]: 2025-10-11 09:17:45.874868748 +0000 UTC m=+0.028558251 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.988 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.993 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174265.8822107, 0c16a8df-379f-45ee-b8a2-930ab997e47b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:17:45 compute-0 nova_compute[260935]: 2025-10-11 09:17:45.994 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] VM Resumed (Lifecycle Event)
Oct 11 09:17:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7493b7450df163711f70cb19b9368d1526437ef35376e11493a1c6025bea22/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:17:46 compute-0 podman[382085]: 2025-10-11 09:17:46.028258104 +0000 UTC m=+0.181947677 container init 1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 09:17:46 compute-0 nova_compute[260935]: 2025-10-11 09:17:46.032 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:17:46 compute-0 podman[382085]: 2025-10-11 09:17:46.035207006 +0000 UTC m=+0.188896469 container start 1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 09:17:46 compute-0 nova_compute[260935]: 2025-10-11 09:17:46.038 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:17:46 compute-0 nova_compute[260935]: 2025-10-11 09:17:46.050 2 INFO nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Took 6.92 seconds to spawn the instance on the hypervisor.
Oct 11 09:17:46 compute-0 nova_compute[260935]: 2025-10-11 09:17:46.051 2 DEBUG nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:17:46 compute-0 nova_compute[260935]: 2025-10-11 09:17:46.067 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:17:46 compute-0 neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851[382100]: [NOTICE]   (382104) : New worker (382106) forked
Oct 11 09:17:46 compute-0 neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851[382100]: [NOTICE]   (382104) : Loading success.
Oct 11 09:17:46 compute-0 nova_compute[260935]: 2025-10-11 09:17:46.157 2 INFO nova.compute.manager [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Took 8.12 seconds to build instance.
Oct 11 09:17:46 compute-0 nova_compute[260935]: 2025-10-11 09:17:46.186 2 DEBUG oslo_concurrency.lockutils [None req-9b5ab1ab-dc57-4b80-9c9b-8d4857fce611 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:46 compute-0 nova_compute[260935]: 2025-10-11 09:17:46.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 500 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:17:47 compute-0 sshd-session[381795]: Received disconnect from 152.32.213.170 port 40590:11: Bye Bye [preauth]
Oct 11 09:17:47 compute-0 sshd-session[381795]: Disconnected from invalid user debian1 152.32.213.170 port 40590 [preauth]
Oct 11 09:17:47 compute-0 nova_compute[260935]: 2025-10-11 09:17:47.594 2 DEBUG nova.compute.manager [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received event network-changed-25dc7747-60d0-4a38-8938-dfa9f55068b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:17:47 compute-0 nova_compute[260935]: 2025-10-11 09:17:47.595 2 DEBUG nova.compute.manager [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Refreshing instance network info cache due to event network-changed-25dc7747-60d0-4a38-8938-dfa9f55068b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:17:47 compute-0 nova_compute[260935]: 2025-10-11 09:17:47.595 2 DEBUG oslo_concurrency.lockutils [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:17:47 compute-0 nova_compute[260935]: 2025-10-11 09:17:47.595 2 DEBUG oslo_concurrency.lockutils [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:17:47 compute-0 nova_compute[260935]: 2025-10-11 09:17:47.595 2 DEBUG nova.network.neutron [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Refreshing network info cache for port 25dc7747-60d0-4a38-8938-dfa9f55068b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:17:47 compute-0 ceph-mon[74313]: pgmap v2278: 321 pgs: 321 active+clean; 500 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:17:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2279: 321 pgs: 321 active+clean; 500 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 170 op/s
Oct 11 09:17:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:17:49 compute-0 nova_compute[260935]: 2025-10-11 09:17:49.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:49 compute-0 nova_compute[260935]: 2025-10-11 09:17:49.584 2 DEBUG nova.network.neutron [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Updated VIF entry in instance network info cache for port 25dc7747-60d0-4a38-8938-dfa9f55068b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:17:49 compute-0 nova_compute[260935]: 2025-10-11 09:17:49.584 2 DEBUG nova.network.neutron [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Updating instance_info_cache with network_info: [{"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:17:49 compute-0 nova_compute[260935]: 2025-10-11 09:17:49.603 2 DEBUG oslo_concurrency.lockutils [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:17:49 compute-0 nova_compute[260935]: 2025-10-11 09:17:49.604 2 DEBUG nova.compute.manager [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:17:49 compute-0 nova_compute[260935]: 2025-10-11 09:17:49.604 2 DEBUG oslo_concurrency.lockutils [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:17:49 compute-0 nova_compute[260935]: 2025-10-11 09:17:49.605 2 DEBUG oslo_concurrency.lockutils [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:17:49 compute-0 nova_compute[260935]: 2025-10-11 09:17:49.605 2 DEBUG oslo_concurrency.lockutils [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:17:49 compute-0 nova_compute[260935]: 2025-10-11 09:17:49.605 2 DEBUG nova.compute.manager [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] No waiting events found dispatching network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:17:49 compute-0 nova_compute[260935]: 2025-10-11 09:17:49.606 2 WARNING nova.compute.manager [req-a5ec8735-7fe6-44da-85b7-3fa90ce03299 req-7ff89a89-0b1d-4857-bd2f-f44db6ad07eb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received unexpected event network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed for instance with vm_state active and task_state None.
Oct 11 09:17:49 compute-0 ceph-mon[74313]: pgmap v2279: 321 pgs: 321 active+clean; 500 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 170 op/s
Oct 11 09:17:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 500 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Oct 11 09:17:50 compute-0 nova_compute[260935]: 2025-10-11 09:17:50.948 2 DEBUG nova.compute.manager [req-6524c5d1-44fe-4134-b654-cbd0ae999a75 req-179eefd8-a3b4-4bc4-8538-f3e2b929c8a5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-changed-2284d8f8-63ae-4cf4-b954-c08472abd3ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:17:50 compute-0 nova_compute[260935]: 2025-10-11 09:17:50.949 2 DEBUG nova.compute.manager [req-6524c5d1-44fe-4134-b654-cbd0ae999a75 req-179eefd8-a3b4-4bc4-8538-f3e2b929c8a5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Refreshing instance network info cache due to event network-changed-2284d8f8-63ae-4cf4-b954-c08472abd3ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:17:50 compute-0 nova_compute[260935]: 2025-10-11 09:17:50.949 2 DEBUG oslo_concurrency.lockutils [req-6524c5d1-44fe-4134-b654-cbd0ae999a75 req-179eefd8-a3b4-4bc4-8538-f3e2b929c8a5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:17:50 compute-0 nova_compute[260935]: 2025-10-11 09:17:50.949 2 DEBUG oslo_concurrency.lockutils [req-6524c5d1-44fe-4134-b654-cbd0ae999a75 req-179eefd8-a3b4-4bc4-8538-f3e2b929c8a5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:17:50 compute-0 nova_compute[260935]: 2025-10-11 09:17:50.950 2 DEBUG nova.network.neutron [req-6524c5d1-44fe-4134-b654-cbd0ae999a75 req-179eefd8-a3b4-4bc4-8538-f3e2b929c8a5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Refreshing network info cache for port 2284d8f8-63ae-4cf4-b954-c08472abd3ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:17:51 compute-0 nova_compute[260935]: 2025-10-11 09:17:51.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:51 compute-0 ceph-mon[74313]: pgmap v2280: 321 pgs: 321 active+clean; 500 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Oct 11 09:17:52 compute-0 nova_compute[260935]: 2025-10-11 09:17:52.345 2 DEBUG nova.network.neutron [req-6524c5d1-44fe-4134-b654-cbd0ae999a75 req-179eefd8-a3b4-4bc4-8538-f3e2b929c8a5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updated VIF entry in instance network info cache for port 2284d8f8-63ae-4cf4-b954-c08472abd3ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:17:52 compute-0 nova_compute[260935]: 2025-10-11 09:17:52.346 2 DEBUG nova.network.neutron [req-6524c5d1-44fe-4134-b654-cbd0ae999a75 req-179eefd8-a3b4-4bc4-8538-f3e2b929c8a5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:17:52 compute-0 nova_compute[260935]: 2025-10-11 09:17:52.370 2 DEBUG oslo_concurrency.lockutils [req-6524c5d1-44fe-4134-b654-cbd0ae999a75 req-179eefd8-a3b4-4bc4-8538-f3e2b929c8a5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:17:52 compute-0 ovn_controller[152945]: 2025-10-11T09:17:52Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8e:fc:a5 10.100.0.3
Oct 11 09:17:52 compute-0 ovn_controller[152945]: 2025-10-11T09:17:52Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:fc:a5 10.100.0.3
Oct 11 09:17:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 519 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.5 MiB/s wr, 198 op/s
Oct 11 09:17:53 compute-0 ceph-mon[74313]: pgmap v2281: 321 pgs: 321 active+clean; 519 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.5 MiB/s wr, 198 op/s
Oct 11 09:17:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:17:54 compute-0 nova_compute[260935]: 2025-10-11 09:17:54.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 521 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.5 MiB/s wr, 151 op/s
Oct 11 09:17:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:17:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:17:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:17:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:17:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:17:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:17:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:17:54
Oct 11 09:17:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:17:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:17:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.log', 'vms', 'volumes', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct 11 09:17:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:17:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:17:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:17:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:17:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:17:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:17:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:17:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:17:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:17:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:17:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:17:55 compute-0 ceph-mon[74313]: pgmap v2282: 321 pgs: 321 active+clean; 521 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.5 MiB/s wr, 151 op/s
Oct 11 09:17:56 compute-0 nova_compute[260935]: 2025-10-11 09:17:56.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2283: 321 pgs: 321 active+clean; 521 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Oct 11 09:17:57 compute-0 ceph-mon[74313]: pgmap v2283: 321 pgs: 321 active+clean; 521 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Oct 11 09:17:57 compute-0 ovn_controller[152945]: 2025-10-11T09:17:57Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:58:d3 10.100.0.14
Oct 11 09:17:57 compute-0 ovn_controller[152945]: 2025-10-11T09:17:57Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:58:d3 10.100.0.14
Oct 11 09:17:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 554 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.2 MiB/s wr, 180 op/s
Oct 11 09:17:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:17:59 compute-0 nova_compute[260935]: 2025-10-11 09:17:59.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:17:59 compute-0 unix_chkpwd[382119]: password check failed for user (root)
Oct 11 09:17:59 compute-0 sshd-session[382117]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:17:59 compute-0 ceph-mon[74313]: pgmap v2284: 321 pgs: 321 active+clean; 554 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.2 MiB/s wr, 180 op/s
Oct 11 09:18:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 554 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 699 KiB/s rd, 4.2 MiB/s wr, 110 op/s
Oct 11 09:18:01 compute-0 sshd-session[382117]: Failed password for root from 165.232.82.252 port 39888 ssh2
Oct 11 09:18:01 compute-0 nova_compute[260935]: 2025-10-11 09:18:01.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:01 compute-0 sshd-session[382117]: Connection closed by authenticating user root 165.232.82.252 port 39888 [preauth]
Oct 11 09:18:01 compute-0 ceph-mon[74313]: pgmap v2285: 321 pgs: 321 active+clean; 554 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 699 KiB/s rd, 4.2 MiB/s wr, 110 op/s
Oct 11 09:18:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 564 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 797 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Oct 11 09:18:02 compute-0 podman[382120]: 2025-10-11 09:18:02.790533427 +0000 UTC m=+0.089520719 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Oct 11 09:18:03 compute-0 ceph-mon[74313]: pgmap v2286: 321 pgs: 321 active+clean; 564 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 797 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Oct 11 09:18:03 compute-0 nova_compute[260935]: 2025-10-11 09:18:03.852 2 INFO nova.compute.manager [None req-5fdfb1e5-c9d9-4152-aefb-49c027715127 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Get console output
Oct 11 09:18:03 compute-0 nova_compute[260935]: 2025-10-11 09:18:03.860 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:18:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.438 2 DEBUG nova.compute.manager [req-b86ce366-5075-4ffe-ac0c-a1f57144e9c0 req-239d32f5-f450-4a03-83e0-6f4bf83a0dec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received event network-changed-25dc7747-60d0-4a38-8938-dfa9f55068b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.439 2 DEBUG nova.compute.manager [req-b86ce366-5075-4ffe-ac0c-a1f57144e9c0 req-239d32f5-f450-4a03-83e0-6f4bf83a0dec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Refreshing instance network info cache due to event network-changed-25dc7747-60d0-4a38-8938-dfa9f55068b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.439 2 DEBUG oslo_concurrency.lockutils [req-b86ce366-5075-4ffe-ac0c-a1f57144e9c0 req-239d32f5-f450-4a03-83e0-6f4bf83a0dec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.440 2 DEBUG oslo_concurrency.lockutils [req-b86ce366-5075-4ffe-ac0c-a1f57144e9c0 req-239d32f5-f450-4a03-83e0-6f4bf83a0dec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.440 2 DEBUG nova.network.neutron [req-b86ce366-5075-4ffe-ac0c-a1f57144e9c0 req-239d32f5-f450-4a03-83e0-6f4bf83a0dec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Refreshing network info cache for port 25dc7747-60d0-4a38-8938-dfa9f55068b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.488 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "25c0d908-6200-4e9e-8914-ed531abe14bf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.489 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.490 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.490 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.490 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.492 2 INFO nova.compute.manager [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Terminating instance
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.493 2 DEBUG nova.compute.manager [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:18:04 compute-0 kernel: tap25dc7747-60 (unregistering): left promiscuous mode
Oct 11 09:18:04 compute-0 NetworkManager[44960]: <info>  [1760174284.5709] device (tap25dc7747-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:18:04 compute-0 ovn_controller[152945]: 2025-10-11T09:18:04Z|01100|binding|INFO|Releasing lport 25dc7747-60d0-4a38-8938-dfa9f55068b3 from this chassis (sb_readonly=0)
Oct 11 09:18:04 compute-0 ovn_controller[152945]: 2025-10-11T09:18:04Z|01101|binding|INFO|Setting lport 25dc7747-60d0-4a38-8938-dfa9f55068b3 down in Southbound
Oct 11 09:18:04 compute-0 ovn_controller[152945]: 2025-10-11T09:18:04Z|01102|binding|INFO|Removing iface tap25dc7747-60 ovn-installed in OVS
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 623 KiB/s rd, 2.5 MiB/s wr, 105 op/s
Oct 11 09:18:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.659 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:fc:a5 10.100.0.3 2001:db8::f816:3eff:fe8e:fca5'], port_security=['fa:16:3e:8e:fc:a5 10.100.0.3 2001:db8::f816:3eff:fe8e:fca5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe8e:fca5/64', 'neutron:device_id': '25c0d908-6200-4e9e-8914-ed531abe14bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f4ce403-2596-441d-805b-ba15e2f385a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '89a40070-a057-46b1-9d97-ea387c29c959', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec00ec81-0492-43bf-b21e-a8398ff551c2, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=25dc7747-60d0-4a38-8938-dfa9f55068b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:18:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.667 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 25dc7747-60d0-4a38-8938-dfa9f55068b3 in datapath 2f4ce403-2596-441d-805b-ba15e2f385a1 unbound from our chassis
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.670 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f4ce403-2596-441d-805b-ba15e2f385a1
Oct 11 09:18:04 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Oct 11 09:18:04 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006f.scope: Consumed 13.062s CPU time.
Oct 11 09:18:04 compute-0 systemd-machined[215705]: Machine qemu-134-instance-0000006f terminated.
Oct 11 09:18:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.697 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d98dd0da-13b6-4e36-8224-25dcea8cc4ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.747 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e6e089eb-16bb-4848-a275-511f1387df4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.748 2 INFO nova.virt.libvirt.driver [-] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Instance destroyed successfully.
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.749 2 DEBUG nova.objects.instance [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 25c0d908-6200-4e9e-8914-ed531abe14bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:18:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.752 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a922da3d-34b1-4e0c-874b-524c6d65218d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.768 2 DEBUG nova.virt.libvirt.vif [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:17:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-637088263',display_name='tempest-TestGettingAddress-server-637088263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-637088263',id=111,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIEEroIcQI6yjPYNFRbpO55PkieYAc+yLELOXjNZohffuvQJuC/A538gnzESmYvfIV1iCxTeQxcsCPGRPplzY6F4Y6cKL3td1D8v0hhGFKNU2eGLTpxtQxcU6ACWQ1bQPg==',key_name='tempest-TestGettingAddress-609422538',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:17:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-249bfkz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:17:41Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=25c0d908-6200-4e9e-8914-ed531abe14bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.769 2 DEBUG nova.network.os_vif_util [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.770 2 DEBUG nova.network.os_vif_util [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:fc:a5,bridge_name='br-int',has_traffic_filtering=True,id=25dc7747-60d0-4a38-8938-dfa9f55068b3,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25dc7747-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.771 2 DEBUG os_vif [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:fc:a5,bridge_name='br-int',has_traffic_filtering=True,id=25dc7747-60d0-4a38-8938-dfa9f55068b3,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25dc7747-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.774 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25dc7747-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.781 2 INFO os_vif [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:fc:a5,bridge_name='br-int',has_traffic_filtering=True,id=25dc7747-60d0-4a38-8938-dfa9f55068b3,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25dc7747-60')
Oct 11 09:18:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.799 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[18700001-b054-4283-bdd1-55b9a87c5673]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.833 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f90e31f1-df8c-47d5-8f54-6f55ff702010]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f4ce403-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:43:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2940, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2940, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605471, 'reachable_time': 32877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2352, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2352, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382176, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.871 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[78b597dc-d76b-445d-8041-2f73869981d7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2f4ce403-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605487, 'tstamp': 605487}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382180, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2f4ce403-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605491, 'tstamp': 605491}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382180, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.873 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f4ce403-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.878 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f4ce403-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:04 compute-0 nova_compute[260935]: 2025-10-11 09:18:04.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.878 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:18:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.879 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f4ce403-20, col_values=(('external_ids', {'iface-id': '722d4b4c-2d64-4e8c-b343-4ac25259f23b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:04.880 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0049026528008400076 of space, bias 1.0, pg target 1.4707958402520023 quantized to 32 (current 32)
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:18:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:18:05 compute-0 nova_compute[260935]: 2025-10-11 09:18:05.242 2 INFO nova.virt.libvirt.driver [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Deleting instance files /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf_del
Oct 11 09:18:05 compute-0 nova_compute[260935]: 2025-10-11 09:18:05.243 2 INFO nova.virt.libvirt.driver [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Deletion of /var/lib/nova/instances/25c0d908-6200-4e9e-8914-ed531abe14bf_del complete
Oct 11 09:18:05 compute-0 nova_compute[260935]: 2025-10-11 09:18:05.301 2 INFO nova.compute.manager [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 11 09:18:05 compute-0 nova_compute[260935]: 2025-10-11 09:18:05.302 2 DEBUG oslo.service.loopingcall [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:18:05 compute-0 nova_compute[260935]: 2025-10-11 09:18:05.302 2 DEBUG nova.compute.manager [-] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:18:05 compute-0 nova_compute[260935]: 2025-10-11 09:18:05.302 2 DEBUG nova.network.neutron [-] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:18:05 compute-0 ceph-mon[74313]: pgmap v2287: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 623 KiB/s rd, 2.5 MiB/s wr, 105 op/s
Oct 11 09:18:06 compute-0 nova_compute[260935]: 2025-10-11 09:18:06.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 438 KiB/s rd, 2.2 MiB/s wr, 84 op/s
Oct 11 09:18:07 compute-0 nova_compute[260935]: 2025-10-11 09:18:07.237 2 DEBUG nova.network.neutron [-] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:18:07 compute-0 nova_compute[260935]: 2025-10-11 09:18:07.278 2 INFO nova.compute.manager [-] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Took 1.98 seconds to deallocate network for instance.
Oct 11 09:18:07 compute-0 nova_compute[260935]: 2025-10-11 09:18:07.311 2 DEBUG nova.compute.manager [req-35afe52e-f59e-4fd6-b887-1e801a137168 req-e70de97c-6723-4bbe-8c4a-6ec0106bd353 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received event network-vif-deleted-25dc7747-60d0-4a38-8938-dfa9f55068b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:18:07 compute-0 nova_compute[260935]: 2025-10-11 09:18:07.360 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:07 compute-0 nova_compute[260935]: 2025-10-11 09:18:07.361 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:07 compute-0 nova_compute[260935]: 2025-10-11 09:18:07.396 2 DEBUG nova.compute.manager [req-8c363077-edd2-40ba-a3e2-311fac81293f req-59db0dde-9448-4e88-bdcb-b5c4c74724f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received event network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:18:07 compute-0 nova_compute[260935]: 2025-10-11 09:18:07.397 2 DEBUG oslo_concurrency.lockutils [req-8c363077-edd2-40ba-a3e2-311fac81293f req-59db0dde-9448-4e88-bdcb-b5c4c74724f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:07 compute-0 nova_compute[260935]: 2025-10-11 09:18:07.398 2 DEBUG oslo_concurrency.lockutils [req-8c363077-edd2-40ba-a3e2-311fac81293f req-59db0dde-9448-4e88-bdcb-b5c4c74724f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:07 compute-0 nova_compute[260935]: 2025-10-11 09:18:07.398 2 DEBUG oslo_concurrency.lockutils [req-8c363077-edd2-40ba-a3e2-311fac81293f req-59db0dde-9448-4e88-bdcb-b5c4c74724f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:07 compute-0 nova_compute[260935]: 2025-10-11 09:18:07.399 2 DEBUG nova.compute.manager [req-8c363077-edd2-40ba-a3e2-311fac81293f req-59db0dde-9448-4e88-bdcb-b5c4c74724f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] No waiting events found dispatching network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:18:07 compute-0 nova_compute[260935]: 2025-10-11 09:18:07.399 2 WARNING nova.compute.manager [req-8c363077-edd2-40ba-a3e2-311fac81293f req-59db0dde-9448-4e88-bdcb-b5c4c74724f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Received unexpected event network-vif-plugged-25dc7747-60d0-4a38-8938-dfa9f55068b3 for instance with vm_state deleted and task_state None.
Oct 11 09:18:07 compute-0 nova_compute[260935]: 2025-10-11 09:18:07.556 2 DEBUG oslo_concurrency.processutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:18:07 compute-0 podman[382183]: 2025-10-11 09:18:07.791038935 +0000 UTC m=+0.078277018 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 09:18:07 compute-0 ceph-mon[74313]: pgmap v2288: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 438 KiB/s rd, 2.2 MiB/s wr, 84 op/s
Oct 11 09:18:07 compute-0 nova_compute[260935]: 2025-10-11 09:18:07.866 2 DEBUG nova.network.neutron [req-b86ce366-5075-4ffe-ac0c-a1f57144e9c0 req-239d32f5-f450-4a03-83e0-6f4bf83a0dec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Updated VIF entry in instance network info cache for port 25dc7747-60d0-4a38-8938-dfa9f55068b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:18:07 compute-0 nova_compute[260935]: 2025-10-11 09:18:07.867 2 DEBUG nova.network.neutron [req-b86ce366-5075-4ffe-ac0c-a1f57144e9c0 req-239d32f5-f450-4a03-83e0-6f4bf83a0dec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Updating instance_info_cache with network_info: [{"id": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "address": "fa:16:3e:8e:fc:a5", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:fca5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25dc7747-60", "ovs_interfaceid": "25dc7747-60d0-4a38-8938-dfa9f55068b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:18:07 compute-0 nova_compute[260935]: 2025-10-11 09:18:07.889 2 DEBUG oslo_concurrency.lockutils [req-b86ce366-5075-4ffe-ac0c-a1f57144e9c0 req-239d32f5-f450-4a03-83e0-6f4bf83a0dec e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-25c0d908-6200-4e9e-8914-ed531abe14bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:18:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:18:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/119341820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:18:08 compute-0 nova_compute[260935]: 2025-10-11 09:18:08.048 2 DEBUG oslo_concurrency.processutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:18:08 compute-0 nova_compute[260935]: 2025-10-11 09:18:08.055 2 DEBUG nova.compute.provider_tree [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:18:08 compute-0 nova_compute[260935]: 2025-10-11 09:18:08.083 2 DEBUG nova.scheduler.client.report [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:18:08 compute-0 nova_compute[260935]: 2025-10-11 09:18:08.106 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:08 compute-0 nova_compute[260935]: 2025-10-11 09:18:08.155 2 INFO nova.scheduler.client.report [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 25c0d908-6200-4e9e-8914-ed531abe14bf
Oct 11 09:18:08 compute-0 nova_compute[260935]: 2025-10-11 09:18:08.259 2 DEBUG oslo_concurrency.lockutils [None req-fc281bd7-2763-41e2-8050-a15c9b60b36c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "25c0d908-6200-4e9e-8914-ed531abe14bf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:08 compute-0 nova_compute[260935]: 2025-10-11 09:18:08.317 2 DEBUG oslo_concurrency.lockutils [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "interface-0c16a8df-379f-45ee-b8a2-930ab997e47b-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:08 compute-0 nova_compute[260935]: 2025-10-11 09:18:08.318 2 DEBUG oslo_concurrency.lockutils [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "interface-0c16a8df-379f-45ee-b8a2-930ab997e47b-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:08 compute-0 nova_compute[260935]: 2025-10-11 09:18:08.319 2 DEBUG nova.objects.instance [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'flavor' on Instance uuid 0c16a8df-379f-45ee-b8a2-930ab997e47b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:18:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 486 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 463 KiB/s rd, 2.2 MiB/s wr, 114 op/s
Oct 11 09:18:08 compute-0 nova_compute[260935]: 2025-10-11 09:18:08.734 2 DEBUG nova.objects.instance [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_requests' on Instance uuid 0c16a8df-379f-45ee-b8a2-930ab997e47b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:18:08 compute-0 nova_compute[260935]: 2025-10-11 09:18:08.754 2 DEBUG nova.network.neutron [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:18:08 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/119341820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:18:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.011 2 DEBUG nova.policy [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.312 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.313 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.314 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.314 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.315 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.316 2 INFO nova.compute.manager [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Terminating instance
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.318 2 DEBUG nova.compute.manager [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:18:09 compute-0 kernel: tap1f9dc2d8-70 (unregistering): left promiscuous mode
Oct 11 09:18:09 compute-0 NetworkManager[44960]: <info>  [1760174289.3866] device (tap1f9dc2d8-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:18:09 compute-0 ovn_controller[152945]: 2025-10-11T09:18:09Z|01103|binding|INFO|Releasing lport 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 from this chassis (sb_readonly=0)
Oct 11 09:18:09 compute-0 ovn_controller[152945]: 2025-10-11T09:18:09Z|01104|binding|INFO|Setting lport 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 down in Southbound
Oct 11 09:18:09 compute-0 ovn_controller[152945]: 2025-10-11T09:18:09Z|01105|binding|INFO|Removing iface tap1f9dc2d8-70 ovn-installed in OVS
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:09.473 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:3e:b6 10.100.0.12 2001:db8::f816:3eff:fe4b:3eb6'], port_security=['fa:16:3e:4b:3e:b6 10.100.0.12 2001:db8::f816:3eff:fe4b:3eb6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe4b:3eb6/64', 'neutron:device_id': '187076f6-221b-4a35-a7a8-9ba7c2a546b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f4ce403-2596-441d-805b-ba15e2f385a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '89a40070-a057-46b1-9d97-ea387c29c959', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec00ec81-0492-43bf-b21e-a8398ff551c2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:18:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:09.475 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 in datapath 2f4ce403-2596-441d-805b-ba15e2f385a1 unbound from our chassis
Oct 11 09:18:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:09.477 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f4ce403-2596-441d-805b-ba15e2f385a1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:18:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:09.477 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3383f22f-8e04-407e-87ae-904207914853]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:09.478 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1 namespace which is not needed anymore
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:09 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Oct 11 09:18:09 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006e.scope: Consumed 17.414s CPU time.
Oct 11 09:18:09 compute-0 systemd-machined[215705]: Machine qemu-133-instance-0000006e terminated.
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.560 2 INFO nova.virt.libvirt.driver [-] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Instance destroyed successfully.
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.561 2 DEBUG nova.objects.instance [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 187076f6-221b-4a35-a7a8-9ba7c2a546b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.576 2 DEBUG nova.virt.libvirt.vif [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:16:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-354741605',display_name='tempest-TestGettingAddress-server-354741605',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-354741605',id=110,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIEEroIcQI6yjPYNFRbpO55PkieYAc+yLELOXjNZohffuvQJuC/A538gnzESmYvfIV1iCxTeQxcsCPGRPplzY6F4Y6cKL3td1D8v0hhGFKNU2eGLTpxtQxcU6ACWQ1bQPg==',key_name='tempest-TestGettingAddress-609422538',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:16:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-f6psl0gg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:16:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=187076f6-221b-4a35-a7a8-9ba7c2a546b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.577 2 DEBUG nova.network.os_vif_util [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.579 2 DEBUG nova.network.os_vif_util [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4b:3e:b6,bridge_name='br-int',has_traffic_filtering=True,id=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f9dc2d8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.580 2 DEBUG os_vif [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:3e:b6,bridge_name='br-int',has_traffic_filtering=True,id=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f9dc2d8-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.585 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f9dc2d8-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:09 compute-0 nova_compute[260935]: 2025-10-11 09:18:09.594 2 INFO os_vif [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:3e:b6,bridge_name='br-int',has_traffic_filtering=True,id=1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54,network=Network(2f4ce403-2596-441d-805b-ba15e2f385a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f9dc2d8-70')
Oct 11 09:18:09 compute-0 neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1[379911]: [NOTICE]   (379915) : haproxy version is 2.8.14-c23fe91
Oct 11 09:18:09 compute-0 neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1[379911]: [NOTICE]   (379915) : path to executable is /usr/sbin/haproxy
Oct 11 09:18:09 compute-0 neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1[379911]: [WARNING]  (379915) : Exiting Master process...
Oct 11 09:18:09 compute-0 neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1[379911]: [ALERT]    (379915) : Current worker (379917) exited with code 143 (Terminated)
Oct 11 09:18:09 compute-0 neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1[379911]: [WARNING]  (379915) : All workers exited. Exiting... (0)
Oct 11 09:18:09 compute-0 systemd[1]: libpod-3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a.scope: Deactivated successfully.
Oct 11 09:18:09 compute-0 podman[382258]: 2025-10-11 09:18:09.684082973 +0000 UTC m=+0.076328034 container died 3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 09:18:09 compute-0 ceph-mon[74313]: pgmap v2289: 321 pgs: 321 active+clean; 486 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 463 KiB/s rd, 2.2 MiB/s wr, 114 op/s
Oct 11 09:18:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a-userdata-shm.mount: Deactivated successfully.
Oct 11 09:18:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-425c0630029d1aed8583bcc3941c7da79348c9c8a0eadb7928e597fdeee04cbf-merged.mount: Deactivated successfully.
Oct 11 09:18:09 compute-0 podman[382258]: 2025-10-11 09:18:09.88848667 +0000 UTC m=+0.280731751 container cleanup 3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:18:09 compute-0 systemd[1]: libpod-conmon-3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a.scope: Deactivated successfully.
Oct 11 09:18:09 compute-0 podman[382307]: 2025-10-11 09:18:09.994103414 +0000 UTC m=+0.065687979 container remove 3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 09:18:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.007 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[183a8375-9a8f-4bd1-a97e-b0019e4fc920]: (4, ('Sat Oct 11 09:18:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1 (3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a)\n3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a\nSat Oct 11 09:18:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1 (3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a)\n3697c62e0c6431967367d91d5b22adc88dc70ab8ef5c5a60c6aad4d4a948029a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.011 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8c16ad71-6a14-47b8-bf13-452e312fe1fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.012 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f4ce403-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:10 compute-0 nova_compute[260935]: 2025-10-11 09:18:10.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:10 compute-0 kernel: tap2f4ce403-20: left promiscuous mode
Oct 11 09:18:10 compute-0 nova_compute[260935]: 2025-10-11 09:18:10.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:10 compute-0 nova_compute[260935]: 2025-10-11 09:18:10.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.045 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ada9bd3f-0d55-4b59-a86f-39bc81888329]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.073 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[83a299c8-55d4-44d9-b231-5dd6354c5050]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.074 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[772593f4-0c5c-4fb7-b097-59353cce083a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.103 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9901b9cd-093c-4a2f-b8b4-59c07577e6cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605460, 'reachable_time': 42176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382323, 'error': None, 'target': 'ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d2f4ce403\x2d2596\x2d441d\x2d805b\x2dba15e2f385a1.mount: Deactivated successfully.
Oct 11 09:18:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.110 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2f4ce403-2596-441d-805b-ba15e2f385a1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:18:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:10.110 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[139902fb-1aae-4841-850e-65c35ef22fa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:10 compute-0 nova_compute[260935]: 2025-10-11 09:18:10.114 2 DEBUG nova.compute.manager [req-a5a09b40-57c4-4fb9-9014-ae43d23fb90a req-42507238-05dc-4792-85ae-528cc94e2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Received event network-changed-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:18:10 compute-0 nova_compute[260935]: 2025-10-11 09:18:10.115 2 DEBUG nova.compute.manager [req-a5a09b40-57c4-4fb9-9014-ae43d23fb90a req-42507238-05dc-4792-85ae-528cc94e2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Refreshing instance network info cache due to event network-changed-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:18:10 compute-0 nova_compute[260935]: 2025-10-11 09:18:10.116 2 DEBUG oslo_concurrency.lockutils [req-a5a09b40-57c4-4fb9-9014-ae43d23fb90a req-42507238-05dc-4792-85ae-528cc94e2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:18:10 compute-0 nova_compute[260935]: 2025-10-11 09:18:10.116 2 DEBUG oslo_concurrency.lockutils [req-a5a09b40-57c4-4fb9-9014-ae43d23fb90a req-42507238-05dc-4792-85ae-528cc94e2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:18:10 compute-0 nova_compute[260935]: 2025-10-11 09:18:10.117 2 DEBUG nova.network.neutron [req-a5a09b40-57c4-4fb9-9014-ae43d23fb90a req-42507238-05dc-4792-85ae-528cc94e2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Refreshing network info cache for port 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:18:10 compute-0 nova_compute[260935]: 2025-10-11 09:18:10.141 2 DEBUG nova.network.neutron [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Successfully created port: 442b50ff-f920-42c8-b400-ec09aade7cd6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:18:10 compute-0 nova_compute[260935]: 2025-10-11 09:18:10.254 2 INFO nova.virt.libvirt.driver [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Deleting instance files /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5_del
Oct 11 09:18:10 compute-0 nova_compute[260935]: 2025-10-11 09:18:10.256 2 INFO nova.virt.libvirt.driver [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Deletion of /var/lib/nova/instances/187076f6-221b-4a35-a7a8-9ba7c2a546b5_del complete
Oct 11 09:18:10 compute-0 nova_compute[260935]: 2025-10-11 09:18:10.314 2 INFO nova.compute.manager [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Took 1.00 seconds to destroy the instance on the hypervisor.
Oct 11 09:18:10 compute-0 nova_compute[260935]: 2025-10-11 09:18:10.315 2 DEBUG oslo.service.loopingcall [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:18:10 compute-0 nova_compute[260935]: 2025-10-11 09:18:10.315 2 DEBUG nova.compute.manager [-] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:18:10 compute-0 nova_compute[260935]: 2025-10-11 09:18:10.316 2 DEBUG nova.network.neutron [-] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:18:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 486 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 190 KiB/s rd, 121 KiB/s wr, 53 op/s
Oct 11 09:18:11 compute-0 nova_compute[260935]: 2025-10-11 09:18:11.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:11 compute-0 nova_compute[260935]: 2025-10-11 09:18:11.687 2 DEBUG nova.network.neutron [-] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:18:11 compute-0 nova_compute[260935]: 2025-10-11 09:18:11.705 2 INFO nova.compute.manager [-] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Took 1.39 seconds to deallocate network for instance.
Oct 11 09:18:11 compute-0 nova_compute[260935]: 2025-10-11 09:18:11.782 2 DEBUG nova.network.neutron [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Successfully updated port: 442b50ff-f920-42c8-b400-ec09aade7cd6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:18:11 compute-0 nova_compute[260935]: 2025-10-11 09:18:11.785 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:11 compute-0 nova_compute[260935]: 2025-10-11 09:18:11.786 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:11 compute-0 nova_compute[260935]: 2025-10-11 09:18:11.799 2 DEBUG oslo_concurrency.lockutils [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:18:11 compute-0 nova_compute[260935]: 2025-10-11 09:18:11.799 2 DEBUG oslo_concurrency.lockutils [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:18:11 compute-0 nova_compute[260935]: 2025-10-11 09:18:11.800 2 DEBUG nova.network.neutron [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:18:11 compute-0 ceph-mon[74313]: pgmap v2290: 321 pgs: 321 active+clean; 486 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 190 KiB/s rd, 121 KiB/s wr, 53 op/s
Oct 11 09:18:11 compute-0 nova_compute[260935]: 2025-10-11 09:18:11.959 2 DEBUG nova.compute.manager [req-dd6619f6-93a9-42ea-9701-0bf39815851d req-145f050e-eab2-4fe7-88e7-2d0a7c16d0f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-changed-442b50ff-f920-42c8-b400-ec09aade7cd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:18:11 compute-0 nova_compute[260935]: 2025-10-11 09:18:11.960 2 DEBUG nova.compute.manager [req-dd6619f6-93a9-42ea-9701-0bf39815851d req-145f050e-eab2-4fe7-88e7-2d0a7c16d0f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Refreshing instance network info cache due to event network-changed-442b50ff-f920-42c8-b400-ec09aade7cd6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:18:11 compute-0 nova_compute[260935]: 2025-10-11 09:18:11.962 2 DEBUG oslo_concurrency.lockutils [req-dd6619f6-93a9-42ea-9701-0bf39815851d req-145f050e-eab2-4fe7-88e7-2d0a7c16d0f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:18:11 compute-0 nova_compute[260935]: 2025-10-11 09:18:11.997 2 DEBUG oslo_concurrency.processutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:18:12 compute-0 nova_compute[260935]: 2025-10-11 09:18:12.218 2 DEBUG nova.compute.manager [req-3ae73e13-5de6-4a21-a578-5a339e8f70f0 req-55798aa1-35fb-445a-bf6a-d7393139033c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Received event network-vif-deleted-1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:18:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:18:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1987145678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:18:12 compute-0 nova_compute[260935]: 2025-10-11 09:18:12.501 2 DEBUG oslo_concurrency.processutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:18:12 compute-0 nova_compute[260935]: 2025-10-11 09:18:12.507 2 DEBUG nova.compute.provider_tree [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:18:12 compute-0 nova_compute[260935]: 2025-10-11 09:18:12.527 2 DEBUG nova.scheduler.client.report [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:18:12 compute-0 nova_compute[260935]: 2025-10-11 09:18:12.554 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:12 compute-0 nova_compute[260935]: 2025-10-11 09:18:12.599 2 INFO nova.scheduler.client.report [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 187076f6-221b-4a35-a7a8-9ba7c2a546b5
Oct 11 09:18:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 434 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 209 KiB/s rd, 124 KiB/s wr, 79 op/s
Oct 11 09:18:12 compute-0 nova_compute[260935]: 2025-10-11 09:18:12.727 2 DEBUG oslo_concurrency.lockutils [None req-b79524c1-4228-4b0b-9879-2e9046f6cf49 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "187076f6-221b-4a35-a7a8-9ba7c2a546b5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:12 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1987145678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:18:12 compute-0 nova_compute[260935]: 2025-10-11 09:18:12.883 2 DEBUG nova.network.neutron [req-a5a09b40-57c4-4fb9-9014-ae43d23fb90a req-42507238-05dc-4792-85ae-528cc94e2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Updated VIF entry in instance network info cache for port 1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:18:12 compute-0 nova_compute[260935]: 2025-10-11 09:18:12.883 2 DEBUG nova.network.neutron [req-a5a09b40-57c4-4fb9-9014-ae43d23fb90a req-42507238-05dc-4792-85ae-528cc94e2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Updating instance_info_cache with network_info: [{"id": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "address": "fa:16:3e:4b:3e:b6", "network": {"id": "2f4ce403-2596-441d-805b-ba15e2f385a1", "bridge": "br-int", "label": "tempest-network-smoke--109925739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4b:3eb6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f9dc2d8-70", "ovs_interfaceid": "1f9dc2d8-70e2-4b9d-acc8-8f1bf8cd7f54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:18:12 compute-0 nova_compute[260935]: 2025-10-11 09:18:12.902 2 DEBUG oslo_concurrency.lockutils [req-a5a09b40-57c4-4fb9-9014-ae43d23fb90a req-42507238-05dc-4792-85ae-528cc94e2c7b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-187076f6-221b-4a35-a7a8-9ba7c2a546b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:18:13 compute-0 podman[382346]: 2025-10-11 09:18:13.819382253 +0000 UTC m=+0.102770755 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 09:18:13 compute-0 podman[382347]: 2025-10-11 09:18:13.855572832 +0000 UTC m=+0.135606488 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 09:18:13 compute-0 ceph-mon[74313]: pgmap v2291: 321 pgs: 321 active+clean; 434 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 209 KiB/s rd, 124 KiB/s wr, 79 op/s
Oct 11 09:18:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:13.974496) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174293974569, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 516, "num_deletes": 250, "total_data_size": 494195, "memory_usage": 503312, "flush_reason": "Manual Compaction"}
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174293980925, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 344518, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47816, "largest_seqno": 48331, "table_properties": {"data_size": 341944, "index_size": 610, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6959, "raw_average_key_size": 20, "raw_value_size": 336713, "raw_average_value_size": 981, "num_data_blocks": 28, "num_entries": 343, "num_filter_entries": 343, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760174259, "oldest_key_time": 1760174259, "file_creation_time": 1760174293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 6498 microseconds, and 3126 cpu microseconds.
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:13.980997) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 344518 bytes OK
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:13.981020) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:13.982885) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:13.982907) EVENT_LOG_v1 {"time_micros": 1760174293982900, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:13.982927) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 491245, prev total WAL file size 491245, number of live WAL files 2.
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:13.983562) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373534' seq:72057594037927935, type:22 .. '6D6772737461740032303035' seq:0, type:0; will stop at (end)
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(336KB)], [110(10MB)]
Oct 11 09:18:13 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174293983616, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11399863, "oldest_snapshot_seqno": -1}
Oct 11 09:18:14 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6707 keys, 8256527 bytes, temperature: kUnknown
Oct 11 09:18:14 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174294036110, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 8256527, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8213455, "index_size": 25232, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 175269, "raw_average_key_size": 26, "raw_value_size": 8094881, "raw_average_value_size": 1206, "num_data_blocks": 983, "num_entries": 6707, "num_filter_entries": 6707, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760174293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:18:14 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:18:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:14.036504) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 8256527 bytes
Oct 11 09:18:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:14.038871) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.7 rd, 156.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.5 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(57.1) write-amplify(24.0) OK, records in: 7203, records dropped: 496 output_compression: NoCompression
Oct 11 09:18:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:14.038903) EVENT_LOG_v1 {"time_micros": 1760174294038889, "job": 66, "event": "compaction_finished", "compaction_time_micros": 52614, "compaction_time_cpu_micros": 39225, "output_level": 6, "num_output_files": 1, "total_output_size": 8256527, "num_input_records": 7203, "num_output_records": 6707, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:18:14 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:18:14 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174294039159, "job": 66, "event": "table_file_deletion", "file_number": 112}
Oct 11 09:18:14 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:18:14 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174294044125, "job": 66, "event": "table_file_deletion", "file_number": 110}
Oct 11 09:18:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:13.983475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:18:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:14.044210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:18:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:14.044218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:18:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:14.044221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:18:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:14.044224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:18:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:18:14.044227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.103 2 DEBUG nova.network.neutron [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.133 2 DEBUG oslo_concurrency.lockutils [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.134 2 DEBUG oslo_concurrency.lockutils [req-dd6619f6-93a9-42ea-9701-0bf39815851d req-145f050e-eab2-4fe7-88e7-2d0a7c16d0f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.134 2 DEBUG nova.network.neutron [req-dd6619f6-93a9-42ea-9701-0bf39815851d req-145f050e-eab2-4fe7-88e7-2d0a7c16d0f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Refreshing network info cache for port 442b50ff-f920-42c8-b400-ec09aade7cd6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.138 2 DEBUG nova.virt.libvirt.vif [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:17:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:17:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.138 2 DEBUG nova.network.os_vif_util [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.139 2 DEBUG nova.network.os_vif_util [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.139 2 DEBUG os_vif [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.140 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.141 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.144 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap442b50ff-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.145 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap442b50ff-f9, col_values=(('external_ids', {'iface-id': '442b50ff-f920-42c8-b400-ec09aade7cd6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:8d:be', 'vm-uuid': '0c16a8df-379f-45ee-b8a2-930ab997e47b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:14 compute-0 NetworkManager[44960]: <info>  [1760174294.1482] manager: (tap442b50ff-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.161 2 INFO os_vif [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9')
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.163 2 DEBUG nova.virt.libvirt.vif [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:17:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:17:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.163 2 DEBUG nova.network.os_vif_util [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.165 2 DEBUG nova.network.os_vif_util [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.169 2 DEBUG nova.virt.libvirt.guest [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] attach device xml: <interface type="ethernet">
Oct 11 09:18:14 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:4f:8d:be"/>
Oct 11 09:18:14 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 09:18:14 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:18:14 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 09:18:14 compute-0 nova_compute[260935]:   <target dev="tap442b50ff-f9"/>
Oct 11 09:18:14 compute-0 nova_compute[260935]: </interface>
Oct 11 09:18:14 compute-0 nova_compute[260935]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 09:18:14 compute-0 kernel: tap442b50ff-f9: entered promiscuous mode
Oct 11 09:18:14 compute-0 NetworkManager[44960]: <info>  [1760174294.1900] manager: (tap442b50ff-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/453)
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:14 compute-0 ovn_controller[152945]: 2025-10-11T09:18:14Z|01106|binding|INFO|Claiming lport 442b50ff-f920-42c8-b400-ec09aade7cd6 for this chassis.
Oct 11 09:18:14 compute-0 ovn_controller[152945]: 2025-10-11T09:18:14Z|01107|binding|INFO|442b50ff-f920-42c8-b400-ec09aade7cd6: Claiming fa:16:3e:4f:8d:be 10.100.0.27
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.209 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:8d:be 10.100.0.27'], port_security=['fa:16:3e:4f:8d:be 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '0c16a8df-379f-45ee-b8a2-930ab997e47b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ea2d228-974d-4a28-901f-89593554c6f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a1ce1a9-36d9-465e-a0cc-5bbbfcd5b496, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=442b50ff-f920-42c8-b400-ec09aade7cd6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.210 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 442b50ff-f920-42c8-b400-ec09aade7cd6 in datapath 1ea2d228-974d-4a28-901f-89593554c6f8 bound to our chassis
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.212 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ea2d228-974d-4a28-901f-89593554c6f8
Oct 11 09:18:14 compute-0 systemd-udevd[382398]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.230 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[caac18a9-1acd-4d4a-a43e-fb0bd78ba30a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.231 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1ea2d228-91 in ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.233 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1ea2d228-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.233 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bf608b37-c271-49a1-9c3c-5ef97651d9df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.234 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5958c52c-93b3-4ed7-9339-27159c2b3003]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:14 compute-0 NetworkManager[44960]: <info>  [1760174294.2443] device (tap442b50ff-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:18:14 compute-0 NetworkManager[44960]: <info>  [1760174294.2459] device (tap442b50ff-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.251 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf2a29e-886c-4d88-bff8-300fc13e0f01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.268 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed210e9-2358-472f-9a8e-faa8f39e399c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:14 compute-0 ovn_controller[152945]: 2025-10-11T09:18:14Z|01108|binding|INFO|Setting lport 442b50ff-f920-42c8-b400-ec09aade7cd6 ovn-installed in OVS
Oct 11 09:18:14 compute-0 ovn_controller[152945]: 2025-10-11T09:18:14Z|01109|binding|INFO|Setting lport 442b50ff-f920-42c8-b400-ec09aade7cd6 up in Southbound
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.291 2 DEBUG nova.virt.libvirt.driver [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.291 2 DEBUG nova.virt.libvirt.driver [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.292 2 DEBUG nova.virt.libvirt.driver [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:c9:58:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.292 2 DEBUG nova.virt.libvirt.driver [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:4f:8d:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.310 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[59e7f0ba-ab6c-4221-ae62-8c014729fab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.317 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9fbda176-d1d7-4c72-b6aa-e89883b1804e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:14 compute-0 NetworkManager[44960]: <info>  [1760174294.3182] manager: (tap1ea2d228-90): new Veth device (/org/freedesktop/NetworkManager/Devices/454)
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.320 2 DEBUG nova.virt.libvirt.guest [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:18:14 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:18:14 compute-0 nova_compute[260935]:   <nova:name>tempest-TestNetworkBasicOps-server-1002613129</nova:name>
Oct 11 09:18:14 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 09:18:14</nova:creationTime>
Oct 11 09:18:14 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 09:18:14 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 09:18:14 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 09:18:14 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 09:18:14 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:18:14 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 09:18:14 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 09:18:14 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 09:18:14 compute-0 nova_compute[260935]:     <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:18:14 compute-0 nova_compute[260935]:     <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:18:14 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 09:18:14 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:18:14 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 09:18:14 compute-0 nova_compute[260935]:     <nova:port uuid="2284d8f8-63ae-4cf4-b954-c08472abd3ed">
Oct 11 09:18:14 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 09:18:14 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:18:14 compute-0 nova_compute[260935]:     <nova:port uuid="442b50ff-f920-42c8-b400-ec09aade7cd6">
Oct 11 09:18:14 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct 11 09:18:14 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:18:14 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 09:18:14 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 09:18:14 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.352 2 DEBUG oslo_concurrency.lockutils [None req-b08a428b-4ca4-4928-ae68-d827b2d2724f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "interface-0c16a8df-379f-45ee-b8a2-930ab997e47b-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.369 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c15fc9-3a7f-45f5-8111-d9c5f3c77edc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.374 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa72ba2-5fb2-4c93-b119-e6fd9bfd5f3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:14 compute-0 NetworkManager[44960]: <info>  [1760174294.4036] device (tap1ea2d228-90): carrier: link connected
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.412 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8f52e6a0-fca5-4f70-a77c-e285837bcd1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.432 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f4544124-0fff-4bd6-8dea-cf2179563969]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ea2d228-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:88:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613910, 'reachable_time': 19837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382424, 'error': None, 'target': 'ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.454 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a062629d-2c16-472f-ad08-7ac0e69d3f79]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee3:8822'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613910, 'tstamp': 613910}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382425, 'error': None, 'target': 'ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.482 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a383efe5-91b8-481b-8834-29af85ecc16d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ea2d228-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:88:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613910, 'reachable_time': 19837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382426, 'error': None, 'target': 'ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.522 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[02468298-e1dd-4055-8e9f-349b818039dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.609 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[10d4c9fa-46c9-4a83-8203-890e64d5ebe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.610 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ea2d228-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.611 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.611 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ea2d228-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:14 compute-0 kernel: tap1ea2d228-90: entered promiscuous mode
Oct 11 09:18:14 compute-0 NetworkManager[44960]: <info>  [1760174294.6581] manager: (tap1ea2d228-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/455)
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.661 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ea2d228-90, col_values=(('external_ids', {'iface-id': '3fc4d2ee-2b2f-40fb-b79c-5db1c1c53bb8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:14 compute-0 ovn_controller[152945]: 2025-10-11T09:18:14Z|01110|binding|INFO|Releasing lport 3fc4d2ee-2b2f-40fb-b79c-5db1c1c53bb8 from this chassis (sb_readonly=0)
Oct 11 09:18:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 407 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 112 KiB/s rd, 46 KiB/s wr, 65 op/s
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.696 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1ea2d228-974d-4a28-901f-89593554c6f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1ea2d228-974d-4a28-901f-89593554c6f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.707 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a48519e5-2ca4-43a9-ac3d-b37bc2416a93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.708 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-1ea2d228-974d-4a28-901f-89593554c6f8
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/1ea2d228-974d-4a28-901f-89593554c6f8.pid.haproxy
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 1ea2d228-974d-4a28-901f-89593554c6f8
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:18:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:14.709 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8', 'env', 'PROCESS_TAG=haproxy-1ea2d228-974d-4a28-901f-89593554c6f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1ea2d228-974d-4a28-901f-89593554c6f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.812 2 DEBUG nova.compute.manager [req-88742c7a-3f72-4d27-b678-8eac7b3aa835 req-12661665-8462-47ad-9aa5-08338fc104e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-vif-plugged-442b50ff-f920-42c8-b400-ec09aade7cd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.812 2 DEBUG oslo_concurrency.lockutils [req-88742c7a-3f72-4d27-b678-8eac7b3aa835 req-12661665-8462-47ad-9aa5-08338fc104e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.813 2 DEBUG oslo_concurrency.lockutils [req-88742c7a-3f72-4d27-b678-8eac7b3aa835 req-12661665-8462-47ad-9aa5-08338fc104e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.813 2 DEBUG oslo_concurrency.lockutils [req-88742c7a-3f72-4d27-b678-8eac7b3aa835 req-12661665-8462-47ad-9aa5-08338fc104e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.814 2 DEBUG nova.compute.manager [req-88742c7a-3f72-4d27-b678-8eac7b3aa835 req-12661665-8462-47ad-9aa5-08338fc104e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] No waiting events found dispatching network-vif-plugged-442b50ff-f920-42c8-b400-ec09aade7cd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:18:14 compute-0 nova_compute[260935]: 2025-10-11 09:18:14.814 2 WARNING nova.compute.manager [req-88742c7a-3f72-4d27-b678-8eac7b3aa835 req-12661665-8462-47ad-9aa5-08338fc104e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received unexpected event network-vif-plugged-442b50ff-f920-42c8-b400-ec09aade7cd6 for instance with vm_state active and task_state None.
Oct 11 09:18:14 compute-0 ceph-mon[74313]: pgmap v2292: 321 pgs: 321 active+clean; 407 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 112 KiB/s rd, 46 KiB/s wr, 65 op/s
Oct 11 09:18:15 compute-0 podman[382459]: 2025-10-11 09:18:15.203738793 +0000 UTC m=+0.084002581 container create 587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:18:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:15.216 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:15.218 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:15.220 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:15 compute-0 podman[382459]: 2025-10-11 09:18:15.163965412 +0000 UTC m=+0.044229260 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:18:15 compute-0 systemd[1]: Started libpod-conmon-587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e.scope.
Oct 11 09:18:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4a08369183d1c19d0437930f9c9a68155223cbc285edf48d39b1d8e5c0ceb4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:18:15 compute-0 podman[382459]: 2025-10-11 09:18:15.330398697 +0000 UTC m=+0.210662565 container init 587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:18:15 compute-0 podman[382459]: 2025-10-11 09:18:15.340702221 +0000 UTC m=+0.220966009 container start 587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 09:18:15 compute-0 neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8[382475]: [NOTICE]   (382479) : New worker (382481) forked
Oct 11 09:18:15 compute-0 neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8[382475]: [NOTICE]   (382479) : Loading success.
Oct 11 09:18:15 compute-0 nova_compute[260935]: 2025-10-11 09:18:15.521 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:18:15 compute-0 nova_compute[260935]: 2025-10-11 09:18:15.702 2 DEBUG nova.network.neutron [req-dd6619f6-93a9-42ea-9701-0bf39815851d req-145f050e-eab2-4fe7-88e7-2d0a7c16d0f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updated VIF entry in instance network info cache for port 442b50ff-f920-42c8-b400-ec09aade7cd6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:18:15 compute-0 nova_compute[260935]: 2025-10-11 09:18:15.703 2 DEBUG nova.network.neutron [req-dd6619f6-93a9-42ea-9701-0bf39815851d req-145f050e-eab2-4fe7-88e7-2d0a7c16d0f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:18:15 compute-0 nova_compute[260935]: 2025-10-11 09:18:15.728 2 DEBUG oslo_concurrency.lockutils [req-dd6619f6-93a9-42ea-9701-0bf39815851d req-145f050e-eab2-4fe7-88e7-2d0a7c16d0f2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:18:15 compute-0 ovn_controller[152945]: 2025-10-11T09:18:15Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4f:8d:be 10.100.0.27
Oct 11 09:18:15 compute-0 ovn_controller[152945]: 2025-10-11T09:18:15Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4f:8d:be 10.100.0.27
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.004 2 DEBUG oslo_concurrency.lockutils [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "interface-0c16a8df-379f-45ee-b8a2-930ab997e47b-442b50ff-f920-42c8-b400-ec09aade7cd6" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.004 2 DEBUG oslo_concurrency.lockutils [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "interface-0c16a8df-379f-45ee-b8a2-930ab997e47b-442b50ff-f920-42c8-b400-ec09aade7cd6" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.030 2 DEBUG nova.objects.instance [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'flavor' on Instance uuid 0c16a8df-379f-45ee-b8a2-930ab997e47b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.057 2 DEBUG nova.virt.libvirt.vif [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:17:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:17:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.057 2 DEBUG nova.network.os_vif_util [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.058 2 DEBUG nova.network.os_vif_util [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.063 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.067 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.071 2 DEBUG nova.virt.libvirt.driver [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Attempting to detach device tap442b50ff-f9 from instance 0c16a8df-379f-45ee-b8a2-930ab997e47b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.072 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] detach device xml: <interface type="ethernet">
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:4f:8d:be"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <target dev="tap442b50ff-f9"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]: </interface>
Oct 11 09:18:16 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.081 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.085 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface>not found in domain: <domain type='kvm' id='135'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <name>instance-00000070</name>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <uuid>0c16a8df-379f-45ee-b8a2-930ab997e47b</uuid>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:name>tempest-TestNetworkBasicOps-server-1002613129</nova:name>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 09:18:14</nova:creationTime>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:port uuid="2284d8f8-63ae-4cf4-b954-c08472abd3ed">
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:port uuid="442b50ff-f920-42c8-b400-ec09aade7cd6">
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 09:18:16 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <resource>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <partition>/machine</partition>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </resource>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <system>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <entry name='serial'>0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <entry name='uuid'>0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </system>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <os>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </os>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <features>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </features>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <cpu mode='custom' match='exact' check='full'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <vendor>AMD</vendor>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='x2apic'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc-deadline'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='hypervisor'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc_adjust'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='spec-ctrl'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='stibp'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='arch-capabilities'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='ssbd'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='cmp_legacy'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='overflow-recov'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='succor'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='ibrs'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='amd-ssbd'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='virt-ssbd'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='lbrv'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='tsc-scale'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='vmcb-clean'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='flushbyasid'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='pause-filter'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='pfthreshold'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='rdctl-no'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='mds-no'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='pschange-mc-no'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='gds-no'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='rfds-no'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='xsaves'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='svm'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='topoext'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='npt'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='nrip-save'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk' index='2'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       </source>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='virtio-disk0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config' index='1'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       </source>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='sata0-0-0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pcie.0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.1'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.2'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.3'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.4'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.5'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.6'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.7'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.8'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.9'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.10'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.11'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.12'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.13'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.14'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.15'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.16'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.17'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.18'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.19'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.20'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.21'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.22'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.23'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.24'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.25'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.26'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='usb'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='ide'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:c9:58:d3'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target dev='tap2284d8f8-63'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='net0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:4f:8d:be'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target dev='tap442b50ff-f9'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='net1'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <source path='/dev/pts/4'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log' append='off'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       </target>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <console type='pty' tty='/dev/pts/4'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <source path='/dev/pts/4'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log' append='off'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </console>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='input0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </input>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='input1'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </input>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='input2'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </input>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </graphics>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <video>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='video0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </video>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='watchdog0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </watchdog>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='balloon0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='rng0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <label>system_u:system_r:svirt_t:s0:c440,c514</label>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c440,c514</imagelabel>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <label>+107:+107</label>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <imagelabel>+107:+107</imagelabel>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 09:18:16 compute-0 nova_compute[260935]: </domain>
Oct 11 09:18:16 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.085 2 INFO nova.virt.libvirt.driver [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully detached device tap442b50ff-f9 from instance 0c16a8df-379f-45ee-b8a2-930ab997e47b from the persistent domain config.
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.086 2 DEBUG nova.virt.libvirt.driver [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] (1/8): Attempting to detach device tap442b50ff-f9 with device alias net1 from instance 0c16a8df-379f-45ee-b8a2-930ab997e47b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.086 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] detach device xml: <interface type="ethernet">
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:4f:8d:be"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <target dev="tap442b50ff-f9"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]: </interface>
Oct 11 09:18:16 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 09:18:16 compute-0 kernel: tap442b50ff-f9 (unregistering): left promiscuous mode
Oct 11 09:18:16 compute-0 NetworkManager[44960]: <info>  [1760174296.2062] device (tap442b50ff-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:16 compute-0 ovn_controller[152945]: 2025-10-11T09:18:16Z|01111|binding|INFO|Releasing lport 442b50ff-f920-42c8-b400-ec09aade7cd6 from this chassis (sb_readonly=0)
Oct 11 09:18:16 compute-0 ovn_controller[152945]: 2025-10-11T09:18:16Z|01112|binding|INFO|Setting lport 442b50ff-f920-42c8-b400-ec09aade7cd6 down in Southbound
Oct 11 09:18:16 compute-0 ovn_controller[152945]: 2025-10-11T09:18:16Z|01113|binding|INFO|Removing iface tap442b50ff-f9 ovn-installed in OVS
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.225 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Received event <DeviceRemovedEvent: 1760174296.2250836, 0c16a8df-379f-45ee-b8a2-930ab997e47b => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.226 2 DEBUG nova.virt.libvirt.driver [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Start waiting for the detach event from libvirt for device tap442b50ff-f9 with device alias net1 for instance 0c16a8df-379f-45ee-b8a2-930ab997e47b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.227 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.231 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface>not found in domain: <domain type='kvm' id='135'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <name>instance-00000070</name>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <uuid>0c16a8df-379f-45ee-b8a2-930ab997e47b</uuid>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:name>tempest-TestNetworkBasicOps-server-1002613129</nova:name>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 09:18:14</nova:creationTime>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:port uuid="2284d8f8-63ae-4cf4-b954-c08472abd3ed">
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:port uuid="442b50ff-f920-42c8-b400-ec09aade7cd6">
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 09:18:16 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <resource>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <partition>/machine</partition>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </resource>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <system>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <entry name='serial'>0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <entry name='uuid'>0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </system>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <os>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </os>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <features>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </features>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <cpu mode='custom' match='exact' check='full'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <vendor>AMD</vendor>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='x2apic'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc-deadline'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='hypervisor'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc_adjust'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='spec-ctrl'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='stibp'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='arch-capabilities'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='ssbd'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='cmp_legacy'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='overflow-recov'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='succor'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='ibrs'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='amd-ssbd'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='virt-ssbd'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='lbrv'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='tsc-scale'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='vmcb-clean'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='flushbyasid'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='pause-filter'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='pfthreshold'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='rdctl-no'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='mds-no'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='pschange-mc-no'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='gds-no'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='rfds-no'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='xsaves'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='svm'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='require' name='topoext'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='npt'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <feature policy='disable' name='nrip-save'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk' index='2'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       </source>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='virtio-disk0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config' index='1'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       </source>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='sata0-0-0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pcie.0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.1'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.2'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.3'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.4'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.5'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.6'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.7'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.8'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.9'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.10'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.11'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.12'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.13'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.14'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.15'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.16'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.17'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.18'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.19'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.20'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.21'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.22'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.23'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.24'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.25'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='pci.26'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='usb'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='ide'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:c9:58:d3'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target dev='tap2284d8f8-63'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='net0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <source path='/dev/pts/4'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log' append='off'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       </target>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <console type='pty' tty='/dev/pts/4'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <source path='/dev/pts/4'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log' append='off'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </console>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='input0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </input>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='input1'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </input>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='input2'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </input>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </graphics>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <video>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='video0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </video>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='watchdog0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </watchdog>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='balloon0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <alias name='rng0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <label>system_u:system_r:svirt_t:s0:c440,c514</label>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c440,c514</imagelabel>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <label>+107:+107</label>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <imagelabel>+107:+107</imagelabel>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 09:18:16 compute-0 nova_compute[260935]: </domain>
Oct 11 09:18:16 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.231 2 INFO nova.virt.libvirt.driver [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully detached device tap442b50ff-f9 from instance 0c16a8df-379f-45ee-b8a2-930ab997e47b from the live domain config.
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.232 2 DEBUG nova.virt.libvirt.vif [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:17:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:17:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.233 2 DEBUG nova.network.os_vif_util [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.233 2 DEBUG nova.network.os_vif_util [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.234 2 DEBUG os_vif [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap442b50ff-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.237 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:8d:be 10.100.0.27'], port_security=['fa:16:3e:4f:8d:be 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '0c16a8df-379f-45ee-b8a2-930ab997e47b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ea2d228-974d-4a28-901f-89593554c6f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a1ce1a9-36d9-465e-a0cc-5bbbfcd5b496, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=442b50ff-f920-42c8-b400-ec09aade7cd6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:18:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.239 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 442b50ff-f920-42c8-b400-ec09aade7cd6 in datapath 1ea2d228-974d-4a28-901f-89593554c6f8 unbound from our chassis
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.242 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ea2d228-974d-4a28-901f-89593554c6f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:18:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.243 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[53aa2d22-97da-482f-a089-181b96546bdd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.244 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8 namespace which is not needed anymore
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.257 2 INFO os_vif [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9')
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.258 2 DEBUG nova.virt.libvirt.guest [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:name>tempest-TestNetworkBasicOps-server-1002613129</nova:name>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 09:18:16</nova:creationTime>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     <nova:port uuid="2284d8f8-63ae-4cf4-b954-c08472abd3ed">
Oct 11 09:18:16 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 09:18:16 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:18:16 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 09:18:16 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 09:18:16 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:16 compute-0 neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8[382475]: [NOTICE]   (382479) : haproxy version is 2.8.14-c23fe91
Oct 11 09:18:16 compute-0 neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8[382475]: [NOTICE]   (382479) : path to executable is /usr/sbin/haproxy
Oct 11 09:18:16 compute-0 neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8[382475]: [WARNING]  (382479) : Exiting Master process...
Oct 11 09:18:16 compute-0 neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8[382475]: [ALERT]    (382479) : Current worker (382481) exited with code 143 (Terminated)
Oct 11 09:18:16 compute-0 neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8[382475]: [WARNING]  (382479) : All workers exited. Exiting... (0)
Oct 11 09:18:16 compute-0 systemd[1]: libpod-587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e.scope: Deactivated successfully.
Oct 11 09:18:16 compute-0 podman[382511]: 2025-10-11 09:18:16.472617408 +0000 UTC m=+0.073250915 container died 587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:18:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e-userdata-shm.mount: Deactivated successfully.
Oct 11 09:18:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd4a08369183d1c19d0437930f9c9a68155223cbc285edf48d39b1d8e5c0ceb4-merged.mount: Deactivated successfully.
Oct 11 09:18:16 compute-0 podman[382511]: 2025-10-11 09:18:16.530438473 +0000 UTC m=+0.131071980 container cleanup 587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:18:16 compute-0 systemd[1]: libpod-conmon-587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e.scope: Deactivated successfully.
Oct 11 09:18:16 compute-0 podman[382542]: 2025-10-11 09:18:16.626197288 +0000 UTC m=+0.061836081 container remove 587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 09:18:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.633 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e9564ab3-0d49-461b-abf2-06bbfe8792ad]: (4, ('Sat Oct 11 09:18:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8 (587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e)\n587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e\nSat Oct 11 09:18:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8 (587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e)\n587638c82f350fbd62d701e29e65ee0bc9197b8e2311b00da2a8821c45d0571e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.635 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[05af4713-6dc5-4dc4-9d5b-7311822649f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.637 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ea2d228-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:16 compute-0 kernel: tap1ea2d228-90: left promiscuous mode
Oct 11 09:18:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 407 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 19 KiB/s wr, 57 op/s
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.675 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7e4fec08-3fff-4b30-b271-d7872517c579]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.708 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[062eb7dc-62bd-416d-86e7-630fbece3101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.709 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[374abceb-c7e7-4b6f-bc10-8770ec2f87aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.736 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[31b6cea8-deeb-4c11-9adf-63edc66c9f35]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613900, 'reachable_time': 41736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382557, 'error': None, 'target': 'ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.739 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1ea2d228-974d-4a28-901f-89593554c6f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:18:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:16.739 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[83b3a893-4a70-4b4a-8cb4-db4c484852eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d1ea2d228\x2d974d\x2d4a28\x2d901f\x2d89593554c6f8.mount: Deactivated successfully.
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.937 2 DEBUG nova.compute.manager [req-42fe97ad-d35d-4265-9764-2a3ff1f39df9 req-2749b787-2d4a-453f-abb1-3fd967bd17b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-vif-plugged-442b50ff-f920-42c8-b400-ec09aade7cd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.938 2 DEBUG oslo_concurrency.lockutils [req-42fe97ad-d35d-4265-9764-2a3ff1f39df9 req-2749b787-2d4a-453f-abb1-3fd967bd17b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.939 2 DEBUG oslo_concurrency.lockutils [req-42fe97ad-d35d-4265-9764-2a3ff1f39df9 req-2749b787-2d4a-453f-abb1-3fd967bd17b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.940 2 DEBUG oslo_concurrency.lockutils [req-42fe97ad-d35d-4265-9764-2a3ff1f39df9 req-2749b787-2d4a-453f-abb1-3fd967bd17b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.940 2 DEBUG nova.compute.manager [req-42fe97ad-d35d-4265-9764-2a3ff1f39df9 req-2749b787-2d4a-453f-abb1-3fd967bd17b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] No waiting events found dispatching network-vif-plugged-442b50ff-f920-42c8-b400-ec09aade7cd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:18:16 compute-0 nova_compute[260935]: 2025-10-11 09:18:16.941 2 WARNING nova.compute.manager [req-42fe97ad-d35d-4265-9764-2a3ff1f39df9 req-2749b787-2d4a-453f-abb1-3fd967bd17b0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received unexpected event network-vif-plugged-442b50ff-f920-42c8-b400-ec09aade7cd6 for instance with vm_state active and task_state None.
Oct 11 09:18:17 compute-0 nova_compute[260935]: 2025-10-11 09:18:17.372 2 DEBUG oslo_concurrency.lockutils [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:18:17 compute-0 nova_compute[260935]: 2025-10-11 09:18:17.374 2 DEBUG oslo_concurrency.lockutils [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:18:17 compute-0 nova_compute[260935]: 2025-10-11 09:18:17.374 2 DEBUG nova.network.neutron [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:18:17 compute-0 ceph-mon[74313]: pgmap v2293: 321 pgs: 321 active+clean; 407 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 19 KiB/s wr, 57 op/s
Oct 11 09:18:17 compute-0 ovn_controller[152945]: 2025-10-11T09:18:17Z|01114|binding|INFO|Releasing lport 117fa814-7b44-4449-adf6-8726d78cffe9 from this chassis (sb_readonly=0)
Oct 11 09:18:17 compute-0 ovn_controller[152945]: 2025-10-11T09:18:17Z|01115|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:18:17 compute-0 ovn_controller[152945]: 2025-10-11T09:18:17Z|01116|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:18:18 compute-0 nova_compute[260935]: 2025-10-11 09:18:18.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 25 KiB/s wr, 58 op/s
Oct 11 09:18:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.040 2 DEBUG nova.compute.manager [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-vif-deleted-442b50ff-f920-42c8-b400-ec09aade7cd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.041 2 INFO nova.compute.manager [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Neutron deleted interface 442b50ff-f920-42c8-b400-ec09aade7cd6; detaching it from the instance and deleting it from the info cache
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.041 2 DEBUG nova.network.neutron [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.071 2 DEBUG nova.objects.instance [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'system_metadata' on Instance uuid 0c16a8df-379f-45ee-b8a2-930ab997e47b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.098 2 DEBUG nova.objects.instance [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'flavor' on Instance uuid 0c16a8df-379f-45ee-b8a2-930ab997e47b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.123 2 DEBUG nova.virt.libvirt.vif [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:17:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:17:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.123 2 DEBUG nova.network.os_vif_util [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.124 2 DEBUG nova.network.os_vif_util [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.129 2 DEBUG nova.virt.libvirt.guest [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.134 2 DEBUG nova.virt.libvirt.guest [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface>not found in domain: <domain type='kvm' id='135'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <name>instance-00000070</name>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <uuid>0c16a8df-379f-45ee-b8a2-930ab997e47b</uuid>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:name>tempest-TestNetworkBasicOps-server-1002613129</nova:name>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 09:18:16</nova:creationTime>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:port uuid="2284d8f8-63ae-4cf4-b954-c08472abd3ed">
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 09:18:19 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <resource>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <partition>/machine</partition>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </resource>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <system>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <entry name='serial'>0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <entry name='uuid'>0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </system>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <os>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </os>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <features>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </features>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <cpu mode='custom' match='exact' check='full'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <vendor>AMD</vendor>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='x2apic'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc-deadline'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='hypervisor'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc_adjust'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='spec-ctrl'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='stibp'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='arch-capabilities'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='ssbd'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='cmp_legacy'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='overflow-recov'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='succor'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='ibrs'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='amd-ssbd'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='virt-ssbd'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='lbrv'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='tsc-scale'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='vmcb-clean'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='flushbyasid'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='pause-filter'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='pfthreshold'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='rdctl-no'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='mds-no'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='pschange-mc-no'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='gds-no'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='rfds-no'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='xsaves'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='svm'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='topoext'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='npt'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='nrip-save'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk' index='2'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       </source>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='virtio-disk0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config' index='1'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       </source>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='sata0-0-0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pcie.0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.1'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.2'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.3'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.4'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.5'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.6'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.7'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.8'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.9'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.10'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.11'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.12'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.13'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.14'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.15'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.16'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.17'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.18'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.19'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.20'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.21'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.22'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.23'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.24'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.25'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.26'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='usb'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='ide'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:c9:58:d3'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target dev='tap2284d8f8-63'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='net0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <source path='/dev/pts/4'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log' append='off'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       </target>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <console type='pty' tty='/dev/pts/4'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <source path='/dev/pts/4'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log' append='off'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </console>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='input0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </input>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='input1'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </input>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='input2'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </input>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </graphics>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <video>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='video0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </video>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='watchdog0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </watchdog>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='balloon0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='rng0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <label>system_u:system_r:svirt_t:s0:c440,c514</label>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c440,c514</imagelabel>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <label>+107:+107</label>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <imagelabel>+107:+107</imagelabel>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 09:18:19 compute-0 nova_compute[260935]: </domain>
Oct 11 09:18:19 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.135 2 DEBUG nova.virt.libvirt.guest [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.140 2 DEBUG nova.virt.libvirt.guest [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4f:8d:be"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap442b50ff-f9"/></interface>not found in domain: <domain type='kvm' id='135'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <name>instance-00000070</name>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <uuid>0c16a8df-379f-45ee-b8a2-930ab997e47b</uuid>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:name>tempest-TestNetworkBasicOps-server-1002613129</nova:name>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 09:18:16</nova:creationTime>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:port uuid="2284d8f8-63ae-4cf4-b954-c08472abd3ed">
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 09:18:19 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <resource>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <partition>/machine</partition>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </resource>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <system>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <entry name='serial'>0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <entry name='uuid'>0c16a8df-379f-45ee-b8a2-930ab997e47b</entry>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </system>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <os>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </os>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <features>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </features>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <cpu mode='custom' match='exact' check='full'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <vendor>AMD</vendor>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='x2apic'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc-deadline'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='hypervisor'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc_adjust'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='spec-ctrl'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='stibp'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='arch-capabilities'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='ssbd'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='cmp_legacy'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='overflow-recov'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='succor'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='ibrs'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='amd-ssbd'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='virt-ssbd'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='lbrv'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='tsc-scale'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='vmcb-clean'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='flushbyasid'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='pause-filter'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='pfthreshold'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='rdctl-no'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='mds-no'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='pschange-mc-no'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='gds-no'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='rfds-no'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='xsaves'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='svm'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='require' name='topoext'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='npt'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <feature policy='disable' name='nrip-save'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk' index='2'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       </source>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='virtio-disk0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/0c16a8df-379f-45ee-b8a2-930ab997e47b_disk.config' index='1'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       </source>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='sata0-0-0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pcie.0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.1'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.2'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.3'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.4'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.5'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.6'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.7'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.8'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.9'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.10'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.11'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.12'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.13'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.14'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.15'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.16'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.17'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.18'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.19'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.20'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.21'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.22'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.23'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.24'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.25'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='pci.26'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='usb'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='ide'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:c9:58:d3'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target dev='tap2284d8f8-63'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='net0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <source path='/dev/pts/4'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log' append='off'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       </target>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <console type='pty' tty='/dev/pts/4'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <source path='/dev/pts/4'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b/console.log' append='off'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </console>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='input0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </input>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='input1'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </input>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='input2'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </input>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </graphics>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <video>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='video0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </video>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='watchdog0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </watchdog>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='balloon0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <alias name='rng0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <label>system_u:system_r:svirt_t:s0:c440,c514</label>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c440,c514</imagelabel>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <label>+107:+107</label>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <imagelabel>+107:+107</imagelabel>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 09:18:19 compute-0 nova_compute[260935]: </domain>
Oct 11 09:18:19 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.141 2 WARNING nova.virt.libvirt.driver [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Detaching interface fa:16:3e:4f:8d:be failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap442b50ff-f9' not found.
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.143 2 DEBUG nova.virt.libvirt.vif [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:17:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:17:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.143 2 DEBUG nova.network.os_vif_util [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "442b50ff-f920-42c8-b400-ec09aade7cd6", "address": "fa:16:3e:4f:8d:be", "network": {"id": "1ea2d228-974d-4a28-901f-89593554c6f8", "bridge": "br-int", "label": "tempest-network-smoke--1003241548", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap442b50ff-f9", "ovs_interfaceid": "442b50ff-f920-42c8-b400-ec09aade7cd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.145 2 DEBUG nova.network.os_vif_util [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.146 2 DEBUG os_vif [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.150 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap442b50ff-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.151 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.155 2 INFO os_vif [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:8d:be,bridge_name='br-int',has_traffic_filtering=True,id=442b50ff-f920-42c8-b400-ec09aade7cd6,network=Network(1ea2d228-974d-4a28-901f-89593554c6f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap442b50ff-f9')
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.156 2 DEBUG nova.virt.libvirt.guest [req-17c94f15-0cf5-49bc-a95c-7915fcc76bb5 req-d0d81a92-80b8-4cdd-9904-01e74fe18328 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:name>tempest-TestNetworkBasicOps-server-1002613129</nova:name>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 09:18:19</nova:creationTime>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     <nova:port uuid="2284d8f8-63ae-4cf4-b954-c08472abd3ed">
Oct 11 09:18:19 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 09:18:19 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:18:19 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 09:18:19 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 09:18:19 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.249 2 INFO nova.network.neutron [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Port 442b50ff-f920-42c8-b400-ec09aade7cd6 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.250 2 DEBUG nova.network.neutron [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.272 2 DEBUG oslo_concurrency.lockutils [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.295 2 DEBUG oslo_concurrency.lockutils [None req-f9151682-0317-4648-ab5e-92d0cfdc252e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "interface-0c16a8df-379f-45ee-b8a2-930ab997e47b-442b50ff-f920-42c8-b400-ec09aade7cd6" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:19 compute-0 ceph-mon[74313]: pgmap v2294: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 25 KiB/s wr, 58 op/s
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.746 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174284.745655, 25c0d908-6200-4e9e-8914-ed531abe14bf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.747 2 INFO nova.compute.manager [-] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] VM Stopped (Lifecycle Event)
Oct 11 09:18:19 compute-0 nova_compute[260935]: 2025-10-11 09:18:19.783 2 DEBUG nova.compute.manager [None req-76a6863e-dfa6-4dc8-85ce-b7fdcf9e2c69 - - - - - -] [instance: 25c0d908-6200-4e9e-8914-ed531abe14bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.589 2 DEBUG nova.compute.manager [req-abf91524-11cf-49d3-a708-e981ba96d05b req-39b5b8d5-1dbe-4878-9a9a-0eb3321d6944 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-changed-2284d8f8-63ae-4cf4-b954-c08472abd3ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.589 2 DEBUG nova.compute.manager [req-abf91524-11cf-49d3-a708-e981ba96d05b req-39b5b8d5-1dbe-4878-9a9a-0eb3321d6944 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Refreshing instance network info cache due to event network-changed-2284d8f8-63ae-4cf4-b954-c08472abd3ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.590 2 DEBUG oslo_concurrency.lockutils [req-abf91524-11cf-49d3-a708-e981ba96d05b req-39b5b8d5-1dbe-4878-9a9a-0eb3321d6944 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.590 2 DEBUG oslo_concurrency.lockutils [req-abf91524-11cf-49d3-a708-e981ba96d05b req-39b5b8d5-1dbe-4878-9a9a-0eb3321d6944 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.591 2 DEBUG nova.network.neutron [req-abf91524-11cf-49d3-a708-e981ba96d05b req-39b5b8d5-1dbe-4878-9a9a-0eb3321d6944 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Refreshing network info cache for port 2284d8f8-63ae-4cf4-b954-c08472abd3ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.617 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.617 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.618 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.618 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.618 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.620 2 INFO nova.compute.manager [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Terminating instance
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.621 2 DEBUG nova.compute.manager [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:18:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 11 KiB/s wr, 29 op/s
Oct 11 09:18:20 compute-0 kernel: tap2284d8f8-63 (unregistering): left promiscuous mode
Oct 11 09:18:20 compute-0 NetworkManager[44960]: <info>  [1760174300.6793] device (tap2284d8f8-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:18:20 compute-0 ovn_controller[152945]: 2025-10-11T09:18:20Z|01117|binding|INFO|Releasing lport 2284d8f8-63ae-4cf4-b954-c08472abd3ed from this chassis (sb_readonly=0)
Oct 11 09:18:20 compute-0 ovn_controller[152945]: 2025-10-11T09:18:20Z|01118|binding|INFO|Setting lport 2284d8f8-63ae-4cf4-b954-c08472abd3ed down in Southbound
Oct 11 09:18:20 compute-0 ovn_controller[152945]: 2025-10-11T09:18:20Z|01119|binding|INFO|Removing iface tap2284d8f8-63 ovn-installed in OVS
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:20.760 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:58:d3 10.100.0.14'], port_security=['fa:16:3e:c9:58:d3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0c16a8df-379f-45ee-b8a2-930ab997e47b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f472aff-4044-47cd-b539-ffd0a15c2851', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19e2c24e-234b-4dae-9978-109acb79adf0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83048660-eea0-4997-8f0a-503095730c3f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=2284d8f8-63ae-4cf4-b954-c08472abd3ed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:18:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:20.762 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 2284d8f8-63ae-4cf4-b954-c08472abd3ed in datapath 3f472aff-4044-47cd-b539-ffd0a15c2851 unbound from our chassis
Oct 11 09:18:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:20.764 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f472aff-4044-47cd-b539-ffd0a15c2851, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:18:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:20.765 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[069888c8-8433-4788-87e4-f77130fec871]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:20.766 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851 namespace which is not needed anymore
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:20 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d00000070.scope: Deactivated successfully.
Oct 11 09:18:20 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d00000070.scope: Consumed 14.567s CPU time.
Oct 11 09:18:20 compute-0 systemd-machined[215705]: Machine qemu-135-instance-00000070 terminated.
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.865 2 INFO nova.virt.libvirt.driver [-] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Instance destroyed successfully.
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.866 2 DEBUG nova.objects.instance [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid 0c16a8df-379f-45ee-b8a2-930ab997e47b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.883 2 DEBUG nova.virt.libvirt.vif [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1002613129',display_name='tempest-TestNetworkBasicOps-server-1002613129',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1002613129',id=112,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1DLyePqzD5XeXq/VEty+kauBvZ2ILHF8of/ToD91IxehxiiNemh7q1yZYimrdmB0i6DGy2R/mJVWVMcXEoPylnExeWpiVIgoXv8lQJTgtQjj42wL1vsxK2jjFlUxPqYA==',key_name='tempest-TestNetworkBasicOps-1545772393',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:17:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-c03kw6on',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:17:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=0c16a8df-379f-45ee-b8a2-930ab997e47b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.884 2 DEBUG nova.network.os_vif_util [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.884 2 DEBUG nova.network.os_vif_util [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c9:58:d3,bridge_name='br-int',has_traffic_filtering=True,id=2284d8f8-63ae-4cf4-b954-c08472abd3ed,network=Network(3f472aff-4044-47cd-b539-ffd0a15c2851),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2284d8f8-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.885 2 DEBUG os_vif [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:58:d3,bridge_name='br-int',has_traffic_filtering=True,id=2284d8f8-63ae-4cf4-b954-c08472abd3ed,network=Network(3f472aff-4044-47cd-b539-ffd0a15c2851),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2284d8f8-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.887 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2284d8f8-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:20 compute-0 nova_compute[260935]: 2025-10-11 09:18:20.895 2 INFO os_vif [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:58:d3,bridge_name='br-int',has_traffic_filtering=True,id=2284d8f8-63ae-4cf4-b954-c08472abd3ed,network=Network(3f472aff-4044-47cd-b539-ffd0a15c2851),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2284d8f8-63')
Oct 11 09:18:20 compute-0 neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851[382100]: [NOTICE]   (382104) : haproxy version is 2.8.14-c23fe91
Oct 11 09:18:20 compute-0 neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851[382100]: [NOTICE]   (382104) : path to executable is /usr/sbin/haproxy
Oct 11 09:18:20 compute-0 neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851[382100]: [WARNING]  (382104) : Exiting Master process...
Oct 11 09:18:20 compute-0 neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851[382100]: [ALERT]    (382104) : Current worker (382106) exited with code 143 (Terminated)
Oct 11 09:18:20 compute-0 neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851[382100]: [WARNING]  (382104) : All workers exited. Exiting... (0)
Oct 11 09:18:20 compute-0 systemd[1]: libpod-1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390.scope: Deactivated successfully.
Oct 11 09:18:20 compute-0 podman[382592]: 2025-10-11 09:18:20.984615652 +0000 UTC m=+0.085981857 container died 1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:18:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390-userdata-shm.mount: Deactivated successfully.
Oct 11 09:18:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d7493b7450df163711f70cb19b9368d1526437ef35376e11493a1c6025bea22-merged.mount: Deactivated successfully.
Oct 11 09:18:21 compute-0 podman[382592]: 2025-10-11 09:18:21.037713832 +0000 UTC m=+0.139080027 container cleanup 1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 09:18:21 compute-0 systemd[1]: libpod-conmon-1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390.scope: Deactivated successfully.
Oct 11 09:18:21 compute-0 sshd-session[382558]: Invalid user fedora from 155.4.244.179 port 50376
Oct 11 09:18:21 compute-0 sshd-session[382558]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:18:21 compute-0 sshd-session[382558]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:18:21 compute-0 podman[382641]: 2025-10-11 09:18:21.158472399 +0000 UTC m=+0.087849791 container remove 1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 09:18:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.172 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ab189a-1645-48b4-9bfb-340a3381ce35]: (4, ('Sat Oct 11 09:18:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851 (1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390)\n1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390\nSat Oct 11 09:18:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851 (1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390)\n1ecdb8b07380fa6c69f2dee9d9e0b0d27bd9b74547b9ea0ac2f30e8fde8e5390\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.174 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7a4763c4-6149-42c0-a3cc-b587b9605893]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.176 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f472aff-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:21 compute-0 kernel: tap3f472aff-40: left promiscuous mode
Oct 11 09:18:21 compute-0 nova_compute[260935]: 2025-10-11 09:18:21.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:21 compute-0 nova_compute[260935]: 2025-10-11 09:18:21.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.214 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[27f5b310-6bb1-49cc-aa69-c8744432f508]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.242 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b939e357-a2c2-4847-b2e7-44f0f7c5fb1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.243 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[76b42fa9-9c4d-4395-b5cd-d4ac9636d864]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.263 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e2fecd17-307d-4147-b71d-5b2724a3a065]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610989, 'reachable_time': 43254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382657, 'error': None, 'target': 'ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.266 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3f472aff-4044-47cd-b539-ffd0a15c2851 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:18:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:21.266 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[21796f90-73d2-4a78-b6ea-aea8c6f70977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d3f472aff\x2d4044\x2d47cd\x2db539\x2dffd0a15c2851.mount: Deactivated successfully.
Oct 11 09:18:21 compute-0 nova_compute[260935]: 2025-10-11 09:18:21.387 2 INFO nova.virt.libvirt.driver [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Deleting instance files /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b_del
Oct 11 09:18:21 compute-0 nova_compute[260935]: 2025-10-11 09:18:21.388 2 INFO nova.virt.libvirt.driver [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Deletion of /var/lib/nova/instances/0c16a8df-379f-45ee-b8a2-930ab997e47b_del complete
Oct 11 09:18:21 compute-0 nova_compute[260935]: 2025-10-11 09:18:21.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:21 compute-0 nova_compute[260935]: 2025-10-11 09:18:21.462 2 INFO nova.compute.manager [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Took 0.84 seconds to destroy the instance on the hypervisor.
Oct 11 09:18:21 compute-0 nova_compute[260935]: 2025-10-11 09:18:21.463 2 DEBUG oslo.service.loopingcall [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:18:21 compute-0 nova_compute[260935]: 2025-10-11 09:18:21.463 2 DEBUG nova.compute.manager [-] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:18:21 compute-0 nova_compute[260935]: 2025-10-11 09:18:21.464 2 DEBUG nova.network.neutron [-] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:18:21 compute-0 ceph-mon[74313]: pgmap v2295: 321 pgs: 321 active+clean; 407 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 11 KiB/s wr, 29 op/s
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.351 2 DEBUG nova.network.neutron [req-abf91524-11cf-49d3-a708-e981ba96d05b req-39b5b8d5-1dbe-4878-9a9a-0eb3321d6944 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updated VIF entry in instance network info cache for port 2284d8f8-63ae-4cf4-b954-c08472abd3ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.352 2 DEBUG nova.network.neutron [req-abf91524-11cf-49d3-a708-e981ba96d05b req-39b5b8d5-1dbe-4878-9a9a-0eb3321d6944 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [{"id": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "address": "fa:16:3e:c9:58:d3", "network": {"id": "3f472aff-4044-47cd-b539-ffd0a15c2851", "bridge": "br-int", "label": "tempest-network-smoke--74572185", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2284d8f8-63", "ovs_interfaceid": "2284d8f8-63ae-4cf4-b954-c08472abd3ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.384 2 DEBUG oslo_concurrency.lockutils [req-abf91524-11cf-49d3-a708-e981ba96d05b req-39b5b8d5-1dbe-4878-9a9a-0eb3321d6944 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-0c16a8df-379f-45ee-b8a2-930ab997e47b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.471 2 DEBUG nova.network.neutron [-] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.501 2 INFO nova.compute.manager [-] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Took 1.04 seconds to deallocate network for instance.
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.550 2 DEBUG nova.compute.manager [req-8bc39084-ece2-45f1-b149-ac4b50b13b08 req-3222e011-1176-45d7-892e-66a712a90587 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-vif-deleted-2284d8f8-63ae-4cf4-b954-c08472abd3ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.559 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.560 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 349 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 12 KiB/s wr, 52 op/s
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.679 2 DEBUG nova.compute.manager [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-vif-unplugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.680 2 DEBUG oslo_concurrency.lockutils [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.680 2 DEBUG oslo_concurrency.lockutils [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.680 2 DEBUG oslo_concurrency.lockutils [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.680 2 DEBUG nova.compute.manager [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] No waiting events found dispatching network-vif-unplugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.681 2 WARNING nova.compute.manager [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received unexpected event network-vif-unplugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed for instance with vm_state deleted and task_state None.
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.681 2 DEBUG nova.compute.manager [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received event network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.681 2 DEBUG oslo_concurrency.lockutils [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.681 2 DEBUG oslo_concurrency.lockutils [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.682 2 DEBUG oslo_concurrency.lockutils [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.682 2 DEBUG nova.compute.manager [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] No waiting events found dispatching network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.682 2 WARNING nova.compute.manager [req-1048d2f7-8948-4707-8970-27ff73a2e945 req-6aa6ffc2-d1bc-4689-a939-0ce794bdc3d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Received unexpected event network-vif-plugged-2284d8f8-63ae-4cf4-b954-c08472abd3ed for instance with vm_state deleted and task_state None.
Oct 11 09:18:22 compute-0 nova_compute[260935]: 2025-10-11 09:18:22.727 2 DEBUG oslo_concurrency.processutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:18:22 compute-0 sshd-session[382558]: Failed password for invalid user fedora from 155.4.244.179 port 50376 ssh2
Oct 11 09:18:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:18:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3623176638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:18:23 compute-0 nova_compute[260935]: 2025-10-11 09:18:23.169 2 DEBUG oslo_concurrency.processutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:18:23 compute-0 nova_compute[260935]: 2025-10-11 09:18:23.178 2 DEBUG nova.compute.provider_tree [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:18:23 compute-0 nova_compute[260935]: 2025-10-11 09:18:23.199 2 DEBUG nova.scheduler.client.report [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:18:23 compute-0 nova_compute[260935]: 2025-10-11 09:18:23.229 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:23 compute-0 sshd-session[382558]: Received disconnect from 155.4.244.179 port 50376:11: Bye Bye [preauth]
Oct 11 09:18:23 compute-0 sshd-session[382558]: Disconnected from invalid user fedora 155.4.244.179 port 50376 [preauth]
Oct 11 09:18:23 compute-0 nova_compute[260935]: 2025-10-11 09:18:23.256 2 INFO nova.scheduler.client.report [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance 0c16a8df-379f-45ee-b8a2-930ab997e47b
Oct 11 09:18:23 compute-0 nova_compute[260935]: 2025-10-11 09:18:23.324 2 DEBUG oslo_concurrency.lockutils [None req-0660e251-f1e0-459e-bcfb-768d67651959 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "0c16a8df-379f-45ee-b8a2-930ab997e47b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:23 compute-0 ceph-mon[74313]: pgmap v2296: 321 pgs: 321 active+clean; 349 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 12 KiB/s wr, 52 op/s
Oct 11 09:18:23 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3623176638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:18:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:18:24 compute-0 nova_compute[260935]: 2025-10-11 09:18:24.557 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174289.5557923, 187076f6-221b-4a35-a7a8-9ba7c2a546b5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:18:24 compute-0 nova_compute[260935]: 2025-10-11 09:18:24.558 2 INFO nova.compute.manager [-] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] VM Stopped (Lifecycle Event)
Oct 11 09:18:24 compute-0 nova_compute[260935]: 2025-10-11 09:18:24.581 2 DEBUG nova.compute.manager [None req-0c6a74ec-dcf1-41ed-88f5-59104dd262d5 - - - - - -] [instance: 187076f6-221b-4a35-a7a8-9ba7c2a546b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:18:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 9.2 KiB/s wr, 30 op/s
Oct 11 09:18:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:18:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:18:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:18:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:18:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:18:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:18:25 compute-0 ceph-mon[74313]: pgmap v2297: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 9.2 KiB/s wr, 30 op/s
Oct 11 09:18:25 compute-0 nova_compute[260935]: 2025-10-11 09:18:25.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:26 compute-0 nova_compute[260935]: 2025-10-11 09:18:26.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:18:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3186847443' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:18:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:18:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3186847443' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:18:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2298: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Oct 11 09:18:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3186847443' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:18:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3186847443' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:18:27 compute-0 sudo[382680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:18:27 compute-0 sudo[382680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:27 compute-0 sudo[382680]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:27 compute-0 sudo[382705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:18:27 compute-0 sudo[382705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:27 compute-0 sudo[382705]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:27 compute-0 sudo[382730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:18:27 compute-0 sudo[382730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:27 compute-0 sudo[382730]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:27 compute-0 sudo[382755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:18:27 compute-0 sudo[382755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:27 compute-0 nova_compute[260935]: 2025-10-11 09:18:27.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:18:27 compute-0 ovn_controller[152945]: 2025-10-11T09:18:27Z|01120|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:18:27 compute-0 ovn_controller[152945]: 2025-10-11T09:18:27Z|01121|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:18:27 compute-0 nova_compute[260935]: 2025-10-11 09:18:27.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:27.782 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:18:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:27.783 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:18:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:27.784 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:27 compute-0 ceph-mon[74313]: pgmap v2298: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Oct 11 09:18:27 compute-0 ovn_controller[152945]: 2025-10-11T09:18:27Z|01122|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:18:27 compute-0 ovn_controller[152945]: 2025-10-11T09:18:27Z|01123|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:18:27 compute-0 nova_compute[260935]: 2025-10-11 09:18:27.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:28 compute-0 sudo[382755]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:18:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:18:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:18:28 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:18:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:18:28 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:18:28 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev c8b1aa77-283d-4bde-bcbf-17a7fb258e6c does not exist
Oct 11 09:18:28 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev e02b600c-3fa0-4a86-bdd1-28555387a6bc does not exist
Oct 11 09:18:28 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ebfc13ca-0c1e-42a5-9047-46a0aef5605a does not exist
Oct 11 09:18:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:18:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:18:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:18:28 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:18:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:18:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:18:28 compute-0 sudo[382812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:18:28 compute-0 sudo[382812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:28 compute-0 sudo[382812]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:28 compute-0 sudo[382837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:18:28 compute-0 sudo[382837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:28 compute-0 sudo[382837]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:28 compute-0 sudo[382862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:18:28 compute-0 sudo[382862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:28 compute-0 sudo[382862]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:28 compute-0 sudo[382887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:18:28 compute-0 sudo[382887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Oct 11 09:18:28 compute-0 nova_compute[260935]: 2025-10-11 09:18:28.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:18:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:18:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:18:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:18:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:18:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:18:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:18:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:18:29 compute-0 podman[382952]: 2025-10-11 09:18:29.144315248 +0000 UTC m=+0.073082241 container create 65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct 11 09:18:29 compute-0 podman[382952]: 2025-10-11 09:18:29.114756817 +0000 UTC m=+0.043523900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:18:29 compute-0 systemd[1]: Started libpod-conmon-65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec.scope.
Oct 11 09:18:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:18:29 compute-0 podman[382952]: 2025-10-11 09:18:29.292864374 +0000 UTC m=+0.221631407 container init 65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 09:18:29 compute-0 podman[382952]: 2025-10-11 09:18:29.299462672 +0000 UTC m=+0.228229675 container start 65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:18:29 compute-0 podman[382952]: 2025-10-11 09:18:29.303702453 +0000 UTC m=+0.232469456 container attach 65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:18:29 compute-0 modest_keldysh[382969]: 167 167
Oct 11 09:18:29 compute-0 systemd[1]: libpod-65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec.scope: Deactivated successfully.
Oct 11 09:18:29 compute-0 conmon[382969]: conmon 65df244736213d402273 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec.scope/container/memory.events
Oct 11 09:18:29 compute-0 podman[382952]: 2025-10-11 09:18:29.312167354 +0000 UTC m=+0.240934387 container died 65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 09:18:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-84a1bcea237e3d130ca9f03135ee76398edbc63ec558a302a79eacc28f08660a-merged.mount: Deactivated successfully.
Oct 11 09:18:29 compute-0 podman[382952]: 2025-10-11 09:18:29.366072478 +0000 UTC m=+0.294839521 container remove 65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:18:29 compute-0 systemd[1]: libpod-conmon-65df244736213d40227337331a7ff352423569cc8525419ec6d1e69c502290ec.scope: Deactivated successfully.
Oct 11 09:18:29 compute-0 podman[382995]: 2025-10-11 09:18:29.623382689 +0000 UTC m=+0.071223147 container create f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct 11 09:18:29 compute-0 systemd[1]: Started libpod-conmon-f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b.scope.
Oct 11 09:18:29 compute-0 podman[382995]: 2025-10-11 09:18:29.594270241 +0000 UTC m=+0.042110749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:18:29 compute-0 nova_compute[260935]: 2025-10-11 09:18:29.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:18:29 compute-0 nova_compute[260935]: 2025-10-11 09:18:29.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:18:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8476e761e0047f48bdfdaf9fbeaa0052c96c9fc00bd64739a10707240ad49b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8476e761e0047f48bdfdaf9fbeaa0052c96c9fc00bd64739a10707240ad49b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8476e761e0047f48bdfdaf9fbeaa0052c96c9fc00bd64739a10707240ad49b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8476e761e0047f48bdfdaf9fbeaa0052c96c9fc00bd64739a10707240ad49b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8476e761e0047f48bdfdaf9fbeaa0052c96c9fc00bd64739a10707240ad49b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:18:29 compute-0 podman[382995]: 2025-10-11 09:18:29.738919507 +0000 UTC m=+0.186760005 container init f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 09:18:29 compute-0 podman[382995]: 2025-10-11 09:18:29.751247227 +0000 UTC m=+0.199087655 container start f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 09:18:29 compute-0 podman[382995]: 2025-10-11 09:18:29.754631944 +0000 UTC m=+0.202472472 container attach f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 09:18:29 compute-0 ceph-mon[74313]: pgmap v2299: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Oct 11 09:18:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:18:30 compute-0 nova_compute[260935]: 2025-10-11 09:18:30.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:30 compute-0 dreamy_ishizaka[383011]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:18:30 compute-0 dreamy_ishizaka[383011]: --> relative data size: 1.0
Oct 11 09:18:30 compute-0 dreamy_ishizaka[383011]: --> All data devices are unavailable
Oct 11 09:18:30 compute-0 systemd[1]: libpod-f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b.scope: Deactivated successfully.
Oct 11 09:18:30 compute-0 systemd[1]: libpod-f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b.scope: Consumed 1.155s CPU time.
Oct 11 09:18:30 compute-0 podman[382995]: 2025-10-11 09:18:30.974778162 +0000 UTC m=+1.422618660 container died f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 09:18:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-be8476e761e0047f48bdfdaf9fbeaa0052c96c9fc00bd64739a10707240ad49b-merged.mount: Deactivated successfully.
Oct 11 09:18:31 compute-0 podman[382995]: 2025-10-11 09:18:31.052213156 +0000 UTC m=+1.500053604 container remove f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:18:31 compute-0 systemd[1]: libpod-conmon-f1d896c1d6f89892cf12c1d999d3b7030b8fcae2b71cdf487040da121098484b.scope: Deactivated successfully.
Oct 11 09:18:31 compute-0 sudo[382887]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:31 compute-0 sudo[383054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:18:31 compute-0 sudo[383054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:31 compute-0 sudo[383054]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:31 compute-0 sudo[383079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:18:31 compute-0 sudo[383079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:31 compute-0 sudo[383079]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:31 compute-0 sudo[383104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:18:31 compute-0 sudo[383104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:31 compute-0 sudo[383104]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:31 compute-0 nova_compute[260935]: 2025-10-11 09:18:31.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:31 compute-0 sudo[383129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:18:31 compute-0 sudo[383129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:31 compute-0 ceph-mon[74313]: pgmap v2300: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:18:31 compute-0 podman[383195]: 2025-10-11 09:18:31.958610266 +0000 UTC m=+0.063901520 container create f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:18:32 compute-0 systemd[1]: Started libpod-conmon-f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33.scope.
Oct 11 09:18:32 compute-0 podman[383195]: 2025-10-11 09:18:31.932191574 +0000 UTC m=+0.037482878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:18:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:18:32 compute-0 podman[383195]: 2025-10-11 09:18:32.083999674 +0000 UTC m=+0.189290958 container init f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:18:32 compute-0 podman[383195]: 2025-10-11 09:18:32.097274701 +0000 UTC m=+0.202565925 container start f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:18:32 compute-0 podman[383195]: 2025-10-11 09:18:32.101645736 +0000 UTC m=+0.206936990 container attach f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:18:32 compute-0 optimistic_hoover[383211]: 167 167
Oct 11 09:18:32 compute-0 systemd[1]: libpod-f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33.scope: Deactivated successfully.
Oct 11 09:18:32 compute-0 conmon[383211]: conmon f67302f0dec71d9cfe90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33.scope/container/memory.events
Oct 11 09:18:32 compute-0 podman[383195]: 2025-10-11 09:18:32.104374913 +0000 UTC m=+0.209666197 container died f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 09:18:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-524b74e54162358b96dcb2845f5e643925c96004f677c1d8ad027e3ae70ac886-merged.mount: Deactivated successfully.
Oct 11 09:18:32 compute-0 podman[383195]: 2025-10-11 09:18:32.158341459 +0000 UTC m=+0.263632723 container remove f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:18:32 compute-0 systemd[1]: libpod-conmon-f67302f0dec71d9cfe90fcfe7db16bc9f2ed1f53210924ff0a0464506708af33.scope: Deactivated successfully.
Oct 11 09:18:32 compute-0 podman[383235]: 2025-10-11 09:18:32.42266641 +0000 UTC m=+0.070247180 container create d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:18:32 compute-0 systemd[1]: Started libpod-conmon-d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b.scope.
Oct 11 09:18:32 compute-0 podman[383235]: 2025-10-11 09:18:32.393523631 +0000 UTC m=+0.041104461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:18:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b7728b0295d8d67002324cd8e6171a5a65bdf00cfad34e07a5fbd007b66549/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b7728b0295d8d67002324cd8e6171a5a65bdf00cfad34e07a5fbd007b66549/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b7728b0295d8d67002324cd8e6171a5a65bdf00cfad34e07a5fbd007b66549/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b7728b0295d8d67002324cd8e6171a5a65bdf00cfad34e07a5fbd007b66549/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:18:32 compute-0 podman[383235]: 2025-10-11 09:18:32.530267732 +0000 UTC m=+0.177848542 container init d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:18:32 compute-0 podman[383235]: 2025-10-11 09:18:32.544141237 +0000 UTC m=+0.191722027 container start d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 09:18:32 compute-0 podman[383235]: 2025-10-11 09:18:32.548249193 +0000 UTC m=+0.195830003 container attach d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 09:18:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2301: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:18:32 compute-0 nova_compute[260935]: 2025-10-11 09:18:32.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:18:32 compute-0 nova_compute[260935]: 2025-10-11 09:18:32.705 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:18:32 compute-0 nova_compute[260935]: 2025-10-11 09:18:32.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:18:32 compute-0 nova_compute[260935]: 2025-10-11 09:18:32.706 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 09:18:33 compute-0 musing_kirch[383251]: {
Oct 11 09:18:33 compute-0 musing_kirch[383251]:     "0": [
Oct 11 09:18:33 compute-0 musing_kirch[383251]:         {
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "devices": [
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "/dev/loop3"
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             ],
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "lv_name": "ceph_lv0",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "lv_size": "21470642176",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "name": "ceph_lv0",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "tags": {
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.cluster_name": "ceph",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.crush_device_class": "",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.encrypted": "0",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.osd_id": "0",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.type": "block",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.vdo": "0"
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             },
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "type": "block",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "vg_name": "ceph_vg0"
Oct 11 09:18:33 compute-0 musing_kirch[383251]:         }
Oct 11 09:18:33 compute-0 musing_kirch[383251]:     ],
Oct 11 09:18:33 compute-0 musing_kirch[383251]:     "1": [
Oct 11 09:18:33 compute-0 musing_kirch[383251]:         {
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "devices": [
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "/dev/loop4"
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             ],
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "lv_name": "ceph_lv1",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "lv_size": "21470642176",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "name": "ceph_lv1",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "tags": {
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.cluster_name": "ceph",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.crush_device_class": "",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.encrypted": "0",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.osd_id": "1",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.type": "block",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.vdo": "0"
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             },
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "type": "block",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "vg_name": "ceph_vg1"
Oct 11 09:18:33 compute-0 musing_kirch[383251]:         }
Oct 11 09:18:33 compute-0 musing_kirch[383251]:     ],
Oct 11 09:18:33 compute-0 musing_kirch[383251]:     "2": [
Oct 11 09:18:33 compute-0 musing_kirch[383251]:         {
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "devices": [
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "/dev/loop5"
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             ],
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "lv_name": "ceph_lv2",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "lv_size": "21470642176",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "name": "ceph_lv2",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "tags": {
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.cluster_name": "ceph",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.crush_device_class": "",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.encrypted": "0",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.osd_id": "2",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.type": "block",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:                 "ceph.vdo": "0"
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             },
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "type": "block",
Oct 11 09:18:33 compute-0 musing_kirch[383251]:             "vg_name": "ceph_vg2"
Oct 11 09:18:33 compute-0 musing_kirch[383251]:         }
Oct 11 09:18:33 compute-0 musing_kirch[383251]:     ]
Oct 11 09:18:33 compute-0 musing_kirch[383251]: }
Oct 11 09:18:33 compute-0 systemd[1]: libpod-d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b.scope: Deactivated successfully.
Oct 11 09:18:33 compute-0 podman[383235]: 2025-10-11 09:18:33.407535984 +0000 UTC m=+1.055116774 container died d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3b7728b0295d8d67002324cd8e6171a5a65bdf00cfad34e07a5fbd007b66549-merged.mount: Deactivated successfully.
Oct 11 09:18:33 compute-0 podman[383235]: 2025-10-11 09:18:33.495026394 +0000 UTC m=+1.142607194 container remove d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:18:33 compute-0 systemd[1]: libpod-conmon-d4fe01d12073eb7b4a7035acd61d220d32c89f4b877949262be3caae12e5726b.scope: Deactivated successfully.
Oct 11 09:18:33 compute-0 podman[383260]: 2025-10-11 09:18:33.517773721 +0000 UTC m=+0.075094368 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 11 09:18:33 compute-0 sudo[383129]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:33 compute-0 sudo[383291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:18:33 compute-0 sudo[383291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:33 compute-0 sudo[383291]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:33 compute-0 sudo[383316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:18:33 compute-0 sudo[383316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:33 compute-0 sudo[383316]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:33 compute-0 sudo[383341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:18:33 compute-0 sudo[383341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:33 compute-0 sudo[383341]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:33 compute-0 ceph-mon[74313]: pgmap v2301: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:18:33 compute-0 sudo[383366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:18:33 compute-0 sudo[383366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:18:34 compute-0 podman[383431]: 2025-10-11 09:18:34.348455137 +0000 UTC m=+0.072094532 container create 5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:18:34 compute-0 systemd[1]: Started libpod-conmon-5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819.scope.
Oct 11 09:18:34 compute-0 podman[383431]: 2025-10-11 09:18:34.31656375 +0000 UTC m=+0.040203205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:18:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:18:34 compute-0 podman[383431]: 2025-10-11 09:18:34.458315823 +0000 UTC m=+0.181955238 container init 5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 09:18:34 compute-0 podman[383431]: 2025-10-11 09:18:34.466838966 +0000 UTC m=+0.190478371 container start 5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nash, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:18:34 compute-0 podman[383431]: 2025-10-11 09:18:34.471268592 +0000 UTC m=+0.194908057 container attach 5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:18:34 compute-0 blissful_nash[383448]: 167 167
Oct 11 09:18:34 compute-0 systemd[1]: libpod-5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819.scope: Deactivated successfully.
Oct 11 09:18:34 compute-0 podman[383431]: 2025-10-11 09:18:34.476164361 +0000 UTC m=+0.199803756 container died 5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nash, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 09:18:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-119e11540d14b2a2aa4a9553259a5d4971cbcaa6bc8d91b2b6e5036c364c30d0-merged.mount: Deactivated successfully.
Oct 11 09:18:34 compute-0 podman[383431]: 2025-10-11 09:18:34.530524168 +0000 UTC m=+0.254163573 container remove 5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:18:34 compute-0 systemd[1]: libpod-conmon-5b2a222b001e280421855f67f5df6297a2f253b6efd74bac05b5cabe4633e819.scope: Deactivated successfully.
Oct 11 09:18:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 341 B/s wr, 4 op/s
Oct 11 09:18:34 compute-0 nova_compute[260935]: 2025-10-11 09:18:34.722 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:18:34 compute-0 nova_compute[260935]: 2025-10-11 09:18:34.723 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:18:34 compute-0 podman[383472]: 2025-10-11 09:18:34.748438869 +0000 UTC m=+0.046504225 container create 65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:18:34 compute-0 systemd[1]: Started libpod-conmon-65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a.scope.
Oct 11 09:18:34 compute-0 podman[383472]: 2025-10-11 09:18:34.724031684 +0000 UTC m=+0.022097030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:18:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc5248a20165b55786a475faa96dc9dc6ad01b3822a8be08e1941c82b6dff60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc5248a20165b55786a475faa96dc9dc6ad01b3822a8be08e1941c82b6dff60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc5248a20165b55786a475faa96dc9dc6ad01b3822a8be08e1941c82b6dff60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dc5248a20165b55786a475faa96dc9dc6ad01b3822a8be08e1941c82b6dff60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:18:34 compute-0 podman[383472]: 2025-10-11 09:18:34.870272985 +0000 UTC m=+0.168338421 container init 65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 09:18:34 compute-0 podman[383472]: 2025-10-11 09:18:34.88345037 +0000 UTC m=+0.181515756 container start 65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:18:34 compute-0 podman[383472]: 2025-10-11 09:18:34.888230486 +0000 UTC m=+0.186295872 container attach 65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 09:18:34 compute-0 nova_compute[260935]: 2025-10-11 09:18:34.941 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:18:34 compute-0 nova_compute[260935]: 2025-10-11 09:18:34.942 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:18:34 compute-0 nova_compute[260935]: 2025-10-11 09:18:34.942 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:18:35 compute-0 ceph-mon[74313]: pgmap v2302: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 341 B/s wr, 4 op/s
Oct 11 09:18:35 compute-0 nova_compute[260935]: 2025-10-11 09:18:35.864 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174300.8629994, 0c16a8df-379f-45ee-b8a2-930ab997e47b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:18:35 compute-0 nova_compute[260935]: 2025-10-11 09:18:35.864 2 INFO nova.compute.manager [-] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] VM Stopped (Lifecycle Event)
Oct 11 09:18:35 compute-0 nova_compute[260935]: 2025-10-11 09:18:35.890 2 DEBUG nova.compute.manager [None req-28bd4e1d-5078-4fd8-adfd-73f10a398d97 - - - - - -] [instance: 0c16a8df-379f-45ee-b8a2-930ab997e47b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:18:35 compute-0 nova_compute[260935]: 2025-10-11 09:18:35.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]: {
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:         "osd_id": 2,
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:         "type": "bluestore"
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:     },
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:         "osd_id": 0,
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:         "type": "bluestore"
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:     },
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:         "osd_id": 1,
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:         "type": "bluestore"
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]:     }
Oct 11 09:18:35 compute-0 flamboyant_poincare[383489]: }
Oct 11 09:18:36 compute-0 systemd[1]: libpod-65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a.scope: Deactivated successfully.
Oct 11 09:18:36 compute-0 podman[383472]: 2025-10-11 09:18:36.052648318 +0000 UTC m=+1.350713674 container died 65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:18:36 compute-0 systemd[1]: libpod-65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a.scope: Consumed 1.165s CPU time.
Oct 11 09:18:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dc5248a20165b55786a475faa96dc9dc6ad01b3822a8be08e1941c82b6dff60-merged.mount: Deactivated successfully.
Oct 11 09:18:36 compute-0 podman[383472]: 2025-10-11 09:18:36.120726475 +0000 UTC m=+1.418791831 container remove 65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:18:36 compute-0 systemd[1]: libpod-conmon-65b2229fe2008500a4b36fef65974723ce7eb12836904d29c55e2a30e58c878a.scope: Deactivated successfully.
Oct 11 09:18:36 compute-0 sudo[383366]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:18:36 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:18:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:18:36 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:18:36 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 922bdb0b-234e-4b2c-8527-2140d17f4ade does not exist
Oct 11 09:18:36 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 2eeab56b-9d81-4e00-87f4-56da59487b99 does not exist
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.232 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.265 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.265 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.266 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.266 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:18:36 compute-0 sudo[383536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:18:36 compute-0 sudo[383536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:36 compute-0 sudo[383536]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.289 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.290 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.290 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.290 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.291 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:18:36 compute-0 sudo[383561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:18:36 compute-0 sudo[383561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:18:36 compute-0 sudo[383561]: pam_unix(sudo:session): session closed for user root
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2303: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:18:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:18:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69984512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.808 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.925 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.927 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.928 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.934 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.935 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.941 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:18:36 compute-0 nova_compute[260935]: 2025-10-11 09:18:36.941 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:18:37 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:18:37 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:18:37 compute-0 ceph-mon[74313]: pgmap v2303: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:18:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/69984512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:18:37 compute-0 nova_compute[260935]: 2025-10-11 09:18:37.233 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:18:37 compute-0 nova_compute[260935]: 2025-10-11 09:18:37.235 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2937MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:18:37 compute-0 nova_compute[260935]: 2025-10-11 09:18:37.235 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:37 compute-0 nova_compute[260935]: 2025-10-11 09:18:37.236 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:37 compute-0 nova_compute[260935]: 2025-10-11 09:18:37.363 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:18:37 compute-0 nova_compute[260935]: 2025-10-11 09:18:37.364 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:18:37 compute-0 nova_compute[260935]: 2025-10-11 09:18:37.364 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:18:37 compute-0 nova_compute[260935]: 2025-10-11 09:18:37.364 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:18:37 compute-0 nova_compute[260935]: 2025-10-11 09:18:37.364 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:18:37 compute-0 nova_compute[260935]: 2025-10-11 09:18:37.539 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:18:37 compute-0 unix_chkpwd[383612]: password check failed for user (root)
Oct 11 09:18:37 compute-0 sshd-session[383609]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:18:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:18:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/24222150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:18:37 compute-0 nova_compute[260935]: 2025-10-11 09:18:37.994 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:18:38 compute-0 nova_compute[260935]: 2025-10-11 09:18:38.000 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:18:38 compute-0 nova_compute[260935]: 2025-10-11 09:18:38.018 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:18:38 compute-0 nova_compute[260935]: 2025-10-11 09:18:38.051 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:18:38 compute-0 nova_compute[260935]: 2025-10-11 09:18:38.051 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/24222150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:18:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:18:38 compute-0 podman[383634]: 2025-10-11 09:18:38.817206482 +0000 UTC m=+0.104550246 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 09:18:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:18:39 compute-0 ceph-mon[74313]: pgmap v2304: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:18:39 compute-0 sshd-session[383609]: Failed password for root from 165.232.82.252 port 41106 ssh2
Oct 11 09:18:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:18:40 compute-0 nova_compute[260935]: 2025-10-11 09:18:40.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:41 compute-0 nova_compute[260935]: 2025-10-11 09:18:41.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:41 compute-0 ceph-mon[74313]: pgmap v2305: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:18:41 compute-0 sshd-session[383609]: Connection closed by authenticating user root 165.232.82.252 port 41106 [preauth]
Oct 11 09:18:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2306: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:18:43 compute-0 ceph-mon[74313]: pgmap v2306: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:18:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:18:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2307: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:18:44 compute-0 podman[383656]: 2025-10-11 09:18:44.805578775 +0000 UTC m=+0.097469864 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:18:44 compute-0 podman[383657]: 2025-10-11 09:18:44.835806775 +0000 UTC m=+0.127829768 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 11 09:18:45 compute-0 ceph-mon[74313]: pgmap v2307: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:18:45 compute-0 nova_compute[260935]: 2025-10-11 09:18:45.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:46 compute-0 nova_compute[260935]: 2025-10-11 09:18:46.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2308: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:18:47 compute-0 nova_compute[260935]: 2025-10-11 09:18:47.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:18:47 compute-0 nova_compute[260935]: 2025-10-11 09:18:47.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 09:18:47 compute-0 nova_compute[260935]: 2025-10-11 09:18:47.726 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 09:18:47 compute-0 ceph-mon[74313]: pgmap v2308: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:18:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:18:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:18:49 compute-0 nova_compute[260935]: 2025-10-11 09:18:49.360 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "69860e17-caac-461a-a4a5-34ca72c0ee09" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:49 compute-0 nova_compute[260935]: 2025-10-11 09:18:49.361 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:49 compute-0 nova_compute[260935]: 2025-10-11 09:18:49.387 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:18:49 compute-0 nova_compute[260935]: 2025-10-11 09:18:49.487 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:49 compute-0 nova_compute[260935]: 2025-10-11 09:18:49.488 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:49 compute-0 nova_compute[260935]: 2025-10-11 09:18:49.500 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:18:49 compute-0 nova_compute[260935]: 2025-10-11 09:18:49.501 2 INFO nova.compute.claims [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:18:49 compute-0 nova_compute[260935]: 2025-10-11 09:18:49.700 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:18:49 compute-0 ceph-mon[74313]: pgmap v2309: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:18:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:18:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/322561464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.223 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.232 2 DEBUG nova.compute.provider_tree [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.258 2 DEBUG nova.scheduler.client.report [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.280 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.281 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.329 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.330 2 DEBUG nova.network.neutron [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.361 2 INFO nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.392 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.423 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.424 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.455 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.572 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.574 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.575 2 INFO nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Creating image(s)
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.606 2 DEBUG nova.storage.rbd_utils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 69860e17-caac-461a-a4a5-34ca72c0ee09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.640 2 DEBUG nova.storage.rbd_utils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 69860e17-caac-461a-a4a5-34ca72c0ee09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.670 2 DEBUG nova.storage.rbd_utils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 69860e17-caac-461a-a4a5-34ca72c0ee09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.674 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:18:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2310: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.725 2 DEBUG nova.policy [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.757 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.757 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.765 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.766 2 INFO nova.compute.claims [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.773 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.774 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.774 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.775 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:50 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/322561464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.802 2 DEBUG nova.storage.rbd_utils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 69860e17-caac-461a-a4a5-34ca72c0ee09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.807 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 69860e17-caac-461a-a4a5-34ca72c0ee09_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:18:50 compute-0 nova_compute[260935]: 2025-10-11 09:18:50.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.005 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.185 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 69860e17-caac-461a-a4a5-34ca72c0ee09_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.284 2 DEBUG nova.storage.rbd_utils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image 69860e17-caac-461a-a4a5-34ca72c0ee09_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.419 2 DEBUG nova.objects.instance [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid 69860e17-caac-461a-a4a5-34ca72c0ee09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.443 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.444 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Ensure instance console log exists: /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.445 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.445 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.446 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:18:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/131644788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.557 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.565 2 DEBUG nova.compute.provider_tree [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.586 2 DEBUG nova.scheduler.client.report [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.612 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.613 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.669 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.669 2 DEBUG nova.network.neutron [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.693 2 INFO nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.714 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:18:51 compute-0 ceph-mon[74313]: pgmap v2310: 321 pgs: 321 active+clean; 328 MiB data, 903 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:18:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/131644788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.835 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.837 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.838 2 INFO nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Creating image(s)
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.874 2 DEBUG nova.storage.rbd_utils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 564a3027-0f98-40fd-a495-1c13a103ea39_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.940 2 DEBUG nova.storage.rbd_utils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 564a3027-0f98-40fd-a495-1c13a103ea39_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.971 2 DEBUG nova.storage.rbd_utils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 564a3027-0f98-40fd-a495-1c13a103ea39_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:18:51 compute-0 nova_compute[260935]: 2025-10-11 09:18:51.975 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:18:52 compute-0 nova_compute[260935]: 2025-10-11 09:18:52.022 2 DEBUG nova.policy [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:18:52 compute-0 nova_compute[260935]: 2025-10-11 09:18:52.069 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:18:52 compute-0 nova_compute[260935]: 2025-10-11 09:18:52.070 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:52 compute-0 nova_compute[260935]: 2025-10-11 09:18:52.071 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:52 compute-0 nova_compute[260935]: 2025-10-11 09:18:52.071 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:52 compute-0 nova_compute[260935]: 2025-10-11 09:18:52.104 2 DEBUG nova.storage.rbd_utils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 564a3027-0f98-40fd-a495-1c13a103ea39_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:18:52 compute-0 nova_compute[260935]: 2025-10-11 09:18:52.111 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 564a3027-0f98-40fd-a495-1c13a103ea39_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:18:52 compute-0 nova_compute[260935]: 2025-10-11 09:18:52.172 2 DEBUG nova.network.neutron [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Successfully created port: a1864eda-bf8d-42a1-b315-967521604391 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:18:52 compute-0 nova_compute[260935]: 2025-10-11 09:18:52.482 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 564a3027-0f98-40fd-a495-1c13a103ea39_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:18:52 compute-0 nova_compute[260935]: 2025-10-11 09:18:52.551 2 DEBUG nova.storage.rbd_utils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 564a3027-0f98-40fd-a495-1c13a103ea39_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:18:52 compute-0 nova_compute[260935]: 2025-10-11 09:18:52.670 2 DEBUG nova.objects.instance [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 564a3027-0f98-40fd-a495-1c13a103ea39 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:18:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 374 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.5 MiB/s wr, 36 op/s
Oct 11 09:18:52 compute-0 nova_compute[260935]: 2025-10-11 09:18:52.690 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:18:52 compute-0 nova_compute[260935]: 2025-10-11 09:18:52.691 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Ensure instance console log exists: /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:18:52 compute-0 nova_compute[260935]: 2025-10-11 09:18:52.691 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:52 compute-0 nova_compute[260935]: 2025-10-11 09:18:52.692 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:52 compute-0 nova_compute[260935]: 2025-10-11 09:18:52.692 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:53 compute-0 nova_compute[260935]: 2025-10-11 09:18:53.396 2 DEBUG nova.network.neutron [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Successfully created port: dea05614-b04a-4078-a98e-428065514f37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:18:53 compute-0 nova_compute[260935]: 2025-10-11 09:18:53.400 2 DEBUG nova.network.neutron [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Successfully updated port: a1864eda-bf8d-42a1-b315-967521604391 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:18:53 compute-0 nova_compute[260935]: 2025-10-11 09:18:53.422 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:18:53 compute-0 nova_compute[260935]: 2025-10-11 09:18:53.422 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:18:53 compute-0 nova_compute[260935]: 2025-10-11 09:18:53.423 2 DEBUG nova.network.neutron [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:18:53 compute-0 ceph-mon[74313]: pgmap v2311: 321 pgs: 321 active+clean; 374 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.5 MiB/s wr, 36 op/s
Oct 11 09:18:53 compute-0 nova_compute[260935]: 2025-10-11 09:18:53.964 2 DEBUG nova.network.neutron [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:18:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:18:54 compute-0 nova_compute[260935]: 2025-10-11 09:18:54.104 2 DEBUG nova.compute.manager [req-679c847c-f1de-456f-b018-ce46809e9f30 req-6a8a27dd-8a3b-4ec7-8580-5753d87c5cef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Received event network-changed-a1864eda-bf8d-42a1-b315-967521604391 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:18:54 compute-0 nova_compute[260935]: 2025-10-11 09:18:54.105 2 DEBUG nova.compute.manager [req-679c847c-f1de-456f-b018-ce46809e9f30 req-6a8a27dd-8a3b-4ec7-8580-5753d87c5cef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Refreshing instance network info cache due to event network-changed-a1864eda-bf8d-42a1-b315-967521604391. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:18:54 compute-0 nova_compute[260935]: 2025-10-11 09:18:54.105 2 DEBUG oslo_concurrency.lockutils [req-679c847c-f1de-456f-b018-ce46809e9f30 req-6a8a27dd-8a3b-4ec7-8580-5753d87c5cef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:18:54 compute-0 nova_compute[260935]: 2025-10-11 09:18:54.399 2 DEBUG nova.network.neutron [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Successfully created port: c8bc0542-b12a-4eb6-898d-5fd663184c82 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:18:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 399 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.8 MiB/s wr, 40 op/s
Oct 11 09:18:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:18:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:18:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:18:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:18:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:18:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:18:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:18:54
Oct 11 09:18:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:18:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:18:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'vms']
Oct 11 09:18:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:18:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:18:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:18:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:18:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:18:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:18:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:18:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:18:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:18:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:18:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.505 2 DEBUG nova.network.neutron [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Updating instance_info_cache with network_info: [{"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.523 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.524 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Instance network_info: |[{"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.525 2 DEBUG oslo_concurrency.lockutils [req-679c847c-f1de-456f-b018-ce46809e9f30 req-6a8a27dd-8a3b-4ec7-8580-5753d87c5cef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.525 2 DEBUG nova.network.neutron [req-679c847c-f1de-456f-b018-ce46809e9f30 req-6a8a27dd-8a3b-4ec7-8580-5753d87c5cef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Refreshing network info cache for port a1864eda-bf8d-42a1-b315-967521604391 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.531 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Start _get_guest_xml network_info=[{"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.538 2 WARNING nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.549 2 DEBUG nova.virt.libvirt.host [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.550 2 DEBUG nova.virt.libvirt.host [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.555 2 DEBUG nova.virt.libvirt.host [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.556 2 DEBUG nova.virt.libvirt.host [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.557 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.557 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.558 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.559 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.559 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.559 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.560 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.560 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.561 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.561 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.562 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.562 2 DEBUG nova.virt.hardware [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.567 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:18:55 compute-0 ceph-mon[74313]: pgmap v2312: 321 pgs: 321 active+clean; 399 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.8 MiB/s wr, 40 op/s
Oct 11 09:18:55 compute-0 nova_compute[260935]: 2025-10-11 09:18:55.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:18:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2495784309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.123 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.147 2 DEBUG nova.storage.rbd_utils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 69860e17-caac-461a-a4a5-34ca72c0ee09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.152 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.198 2 DEBUG nova.network.neutron [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Successfully updated port: dea05614-b04a-4078-a98e-428065514f37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.250 2 DEBUG nova.compute.manager [req-5b90bd63-1501-4c8e-ba41-f956ba27584f req-1bb9c8fd-a99f-4f99-ad8e-d50955ae592f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-changed-dea05614-b04a-4078-a98e-428065514f37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.250 2 DEBUG nova.compute.manager [req-5b90bd63-1501-4c8e-ba41-f956ba27584f req-1bb9c8fd-a99f-4f99-ad8e-d50955ae592f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Refreshing instance network info cache due to event network-changed-dea05614-b04a-4078-a98e-428065514f37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.252 2 DEBUG oslo_concurrency.lockutils [req-5b90bd63-1501-4c8e-ba41-f956ba27584f req-1bb9c8fd-a99f-4f99-ad8e-d50955ae592f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.252 2 DEBUG oslo_concurrency.lockutils [req-5b90bd63-1501-4c8e-ba41-f956ba27584f req-1bb9c8fd-a99f-4f99-ad8e-d50955ae592f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.252 2 DEBUG nova.network.neutron [req-5b90bd63-1501-4c8e-ba41-f956ba27584f req-1bb9c8fd-a99f-4f99-ad8e-d50955ae592f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Refreshing network info cache for port dea05614-b04a-4078-a98e-428065514f37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:18:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1934886053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.614 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.616 2 DEBUG nova.virt.libvirt.vif [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:18:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-372874397',display_name='tempest-TestNetworkBasicOps-server-372874397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-372874397',id=113,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIx2RsPvJwPys7fB6mymA8gM4JxRMjGXoxlum0FnOgNOoalsj2xjCE+J+1HgjSPsznufYemTb9pcBC69nUdop6linXie2N/WBdlfI1xGC4f2xXUMk1ZGsW/ToHE3DY3muA==',key_name='tempest-TestNetworkBasicOps-55635867',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-vrz2alhg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:18:50Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=69860e17-caac-461a-a4a5-34ca72c0ee09,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.616 2 DEBUG nova.network.os_vif_util [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.617 2 DEBUG nova.network.os_vif_util [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a2:5e,bridge_name='br-int',has_traffic_filtering=True,id=a1864eda-bf8d-42a1-b315-967521604391,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1864eda-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.618 2 DEBUG nova.objects.instance [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 69860e17-caac-461a-a4a5-34ca72c0ee09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.656 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:18:56 compute-0 nova_compute[260935]:   <uuid>69860e17-caac-461a-a4a5-34ca72c0ee09</uuid>
Oct 11 09:18:56 compute-0 nova_compute[260935]:   <name>instance-00000071</name>
Oct 11 09:18:56 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:18:56 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:18:56 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkBasicOps-server-372874397</nova:name>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:18:55</nova:creationTime>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:18:56 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:18:56 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:18:56 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:18:56 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:18:56 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:18:56 compute-0 nova_compute[260935]:         <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:18:56 compute-0 nova_compute[260935]:         <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:18:56 compute-0 nova_compute[260935]:         <nova:port uuid="a1864eda-bf8d-42a1-b315-967521604391">
Oct 11 09:18:56 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:18:56 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:18:56 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <system>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <entry name="serial">69860e17-caac-461a-a4a5-34ca72c0ee09</entry>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <entry name="uuid">69860e17-caac-461a-a4a5-34ca72c0ee09</entry>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     </system>
Oct 11 09:18:56 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:18:56 compute-0 nova_compute[260935]:   <os>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:   </os>
Oct 11 09:18:56 compute-0 nova_compute[260935]:   <features>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:   </features>
Oct 11 09:18:56 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:18:56 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:18:56 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/69860e17-caac-461a-a4a5-34ca72c0ee09_disk">
Oct 11 09:18:56 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       </source>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:18:56 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/69860e17-caac-461a-a4a5-34ca72c0ee09_disk.config">
Oct 11 09:18:56 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       </source>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:18:56 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:8e:a2:5e"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <target dev="tapa1864eda-bf"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09/console.log" append="off"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <video>
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     </video>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:18:56 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:18:56 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:18:56 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:18:56 compute-0 nova_compute[260935]: </domain>
Oct 11 09:18:56 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.658 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Preparing to wait for external event network-vif-plugged-a1864eda-bf8d-42a1-b315-967521604391 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.659 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.659 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.660 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.661 2 DEBUG nova.virt.libvirt.vif [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:18:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-372874397',display_name='tempest-TestNetworkBasicOps-server-372874397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-372874397',id=113,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIx2RsPvJwPys7fB6mymA8gM4JxRMjGXoxlum0FnOgNOoalsj2xjCE+J+1HgjSPsznufYemTb9pcBC69nUdop6linXie2N/WBdlfI1xGC4f2xXUMk1ZGsW/ToHE3DY3muA==',key_name='tempest-TestNetworkBasicOps-55635867',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-vrz2alhg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:18:50Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=69860e17-caac-461a-a4a5-34ca72c0ee09,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.662 2 DEBUG nova.network.os_vif_util [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.663 2 DEBUG nova.network.os_vif_util [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a2:5e,bridge_name='br-int',has_traffic_filtering=True,id=a1864eda-bf8d-42a1-b315-967521604391,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1864eda-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.663 2 DEBUG os_vif [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a2:5e,bridge_name='br-int',has_traffic_filtering=True,id=a1864eda-bf8d-42a1-b315-967521604391,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1864eda-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.665 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.666 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.676 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1864eda-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.677 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa1864eda-bf, col_values=(('external_ids', {'iface-id': 'a1864eda-bf8d-42a1-b315-967521604391', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:a2:5e', 'vm-uuid': '69860e17-caac-461a-a4a5-34ca72c0ee09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:56 compute-0 NetworkManager[44960]: <info>  [1760174336.6821] manager: (tapa1864eda-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/456)
Oct 11 09:18:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 399 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.8 MiB/s wr, 40 op/s
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.695 2 INFO os_vif [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a2:5e,bridge_name='br-int',has_traffic_filtering=True,id=a1864eda-bf8d-42a1-b315-967521604391,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1864eda-bf')
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.771 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.772 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.772 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:8e:a2:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.773 2 INFO nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Using config drive
Oct 11 09:18:56 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 09:18:56 compute-0 nova_compute[260935]: 2025-10-11 09:18:56.801 2 DEBUG nova.storage.rbd_utils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 69860e17-caac-461a-a4a5-34ca72c0ee09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:18:56 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2495784309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:18:56 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1934886053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:18:57 compute-0 nova_compute[260935]: 2025-10-11 09:18:57.043 2 DEBUG nova.network.neutron [req-5b90bd63-1501-4c8e-ba41-f956ba27584f req-1bb9c8fd-a99f-4f99-ad8e-d50955ae592f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:18:57 compute-0 nova_compute[260935]: 2025-10-11 09:18:57.469 2 INFO nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Creating config drive at /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09/disk.config
Oct 11 09:18:57 compute-0 nova_compute[260935]: 2025-10-11 09:18:57.474 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1wf5x582 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:18:57 compute-0 nova_compute[260935]: 2025-10-11 09:18:57.614 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1wf5x582" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:18:57 compute-0 nova_compute[260935]: 2025-10-11 09:18:57.659 2 DEBUG nova.storage.rbd_utils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 69860e17-caac-461a-a4a5-34ca72c0ee09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:18:57 compute-0 nova_compute[260935]: 2025-10-11 09:18:57.664 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09/disk.config 69860e17-caac-461a-a4a5-34ca72c0ee09_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:18:57 compute-0 nova_compute[260935]: 2025-10-11 09:18:57.720 2 DEBUG nova.network.neutron [req-5b90bd63-1501-4c8e-ba41-f956ba27584f req-1bb9c8fd-a99f-4f99-ad8e-d50955ae592f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:18:57 compute-0 nova_compute[260935]: 2025-10-11 09:18:57.726 2 DEBUG nova.network.neutron [req-679c847c-f1de-456f-b018-ce46809e9f30 req-6a8a27dd-8a3b-4ec7-8580-5753d87c5cef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Updated VIF entry in instance network info cache for port a1864eda-bf8d-42a1-b315-967521604391. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:18:57 compute-0 nova_compute[260935]: 2025-10-11 09:18:57.727 2 DEBUG nova.network.neutron [req-679c847c-f1de-456f-b018-ce46809e9f30 req-6a8a27dd-8a3b-4ec7-8580-5753d87c5cef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Updating instance_info_cache with network_info: [{"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:18:57 compute-0 nova_compute[260935]: 2025-10-11 09:18:57.753 2 DEBUG oslo_concurrency.lockutils [req-679c847c-f1de-456f-b018-ce46809e9f30 req-6a8a27dd-8a3b-4ec7-8580-5753d87c5cef e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:18:57 compute-0 nova_compute[260935]: 2025-10-11 09:18:57.755 2 DEBUG oslo_concurrency.lockutils [req-5b90bd63-1501-4c8e-ba41-f956ba27584f req-1bb9c8fd-a99f-4f99-ad8e-d50955ae592f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:18:57 compute-0 ceph-mon[74313]: pgmap v2313: 321 pgs: 321 active+clean; 399 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.8 MiB/s wr, 40 op/s
Oct 11 09:18:57 compute-0 nova_compute[260935]: 2025-10-11 09:18:57.873 2 DEBUG oslo_concurrency.processutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09/disk.config 69860e17-caac-461a-a4a5-34ca72c0ee09_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:18:57 compute-0 nova_compute[260935]: 2025-10-11 09:18:57.873 2 INFO nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Deleting local config drive /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09/disk.config because it was imported into RBD.
Oct 11 09:18:57 compute-0 kernel: tapa1864eda-bf: entered promiscuous mode
Oct 11 09:18:57 compute-0 nova_compute[260935]: 2025-10-11 09:18:57.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:57 compute-0 ovn_controller[152945]: 2025-10-11T09:18:57Z|01124|binding|INFO|Claiming lport a1864eda-bf8d-42a1-b315-967521604391 for this chassis.
Oct 11 09:18:57 compute-0 ovn_controller[152945]: 2025-10-11T09:18:57Z|01125|binding|INFO|a1864eda-bf8d-42a1-b315-967521604391: Claiming fa:16:3e:8e:a2:5e 10.100.0.3
Oct 11 09:18:57 compute-0 NetworkManager[44960]: <info>  [1760174337.9455] manager: (tapa1864eda-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/457)
Oct 11 09:18:57 compute-0 nova_compute[260935]: 2025-10-11 09:18:57.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:57.963 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:a2:5e 10.100.0.3'], port_security=['fa:16:3e:8e:a2:5e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '69860e17-caac-461a-a4a5-34ca72c0ee09', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04903712-3ca4-4ffc-b1ee-8e3bb7ff59e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e842406f-d65b-48f7-9a65-50c6608add8c, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a1864eda-bf8d-42a1-b315-967521604391) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:18:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:57.964 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a1864eda-bf8d-42a1-b315-967521604391 in datapath fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 bound to our chassis
Oct 11 09:18:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:57.967 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448
Oct 11 09:18:57 compute-0 nova_compute[260935]: 2025-10-11 09:18:57.973 2 DEBUG nova.network.neutron [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Successfully updated port: c8bc0542-b12a-4eb6-898d-5fd663184c82 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:18:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:57.984 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f917d39-d343-4919-9ae4-fead3d7dc066]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:57.985 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfbf3e0c3-41 in ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:18:57 compute-0 systemd-udevd[384217]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:18:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:57.987 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfbf3e0c3-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:18:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:57.987 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[beba6bcc-59e3-4d42-a2b7-9e4802f9d0a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:57.988 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d07ccf06-c550-4d73-ae60-3f38701d88e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:58 compute-0 systemd-machined[215705]: New machine qemu-136-instance-00000071.
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.005 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5049b7-70d7-4adb-9ca5-d034c571fc2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:58 compute-0 NetworkManager[44960]: <info>  [1760174338.0129] device (tapa1864eda-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:18:58 compute-0 NetworkManager[44960]: <info>  [1760174338.0147] device (tapa1864eda-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.024 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.024 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.024 2 DEBUG nova.network.neutron [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1f374e-2ce8-44bc-942d-ca979c6184cc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:58 compute-0 systemd[1]: Started Virtual Machine qemu-136-instance-00000071.
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:58 compute-0 ovn_controller[152945]: 2025-10-11T09:18:58Z|01126|binding|INFO|Setting lport a1864eda-bf8d-42a1-b315-967521604391 ovn-installed in OVS
Oct 11 09:18:58 compute-0 ovn_controller[152945]: 2025-10-11T09:18:58Z|01127|binding|INFO|Setting lport a1864eda-bf8d-42a1-b315-967521604391 up in Southbound
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.080 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8168db52-e3a6-43bc-982d-de3d7e218e4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.086 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b92ad869-a3a2-46a0-9d46-61be8b2ebe3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:58 compute-0 NetworkManager[44960]: <info>  [1760174338.0871] manager: (tapfbf3e0c3-40): new Veth device (/org/freedesktop/NetworkManager/Devices/458)
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.123 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9734d3-9a5c-4d7d-a56e-12c6da115851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.129 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[96fa9c90-6592-4e93-aa08-e73972b0d76d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:58 compute-0 NetworkManager[44960]: <info>  [1760174338.1570] device (tapfbf3e0c3-40): carrier: link connected
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.167 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f35f6398-1c98-48ca-aa02-b7dbf7bcbe22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.185 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee71ddc-631c-4d1e-bfcd-9f2005eb7229]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbf3e0c3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:5d:03'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 322], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618285, 'reachable_time': 33588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384250, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.198 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b4c5a8-7594-46ea-ba02-479920437e61]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:5d03'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618285, 'tstamp': 618285}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384251, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.220 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c2086022-036d-4629-a0c8-4b0a9014fbd4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbf3e0c3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:5d:03'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 322], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618285, 'reachable_time': 33588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384252, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.243 2 DEBUG nova.network.neutron [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.263 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b2415060-e577-4600-ad7e-4b27e9e6f228]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.278 2 DEBUG nova.compute.manager [req-90a7ebb9-4f9c-43ed-8e40-2dad29652b4d req-403631c2-d5fc-4de5-981f-847f9b1a3dcc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Received event network-vif-plugged-a1864eda-bf8d-42a1-b315-967521604391 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.278 2 DEBUG oslo_concurrency.lockutils [req-90a7ebb9-4f9c-43ed-8e40-2dad29652b4d req-403631c2-d5fc-4de5-981f-847f9b1a3dcc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.279 2 DEBUG oslo_concurrency.lockutils [req-90a7ebb9-4f9c-43ed-8e40-2dad29652b4d req-403631c2-d5fc-4de5-981f-847f9b1a3dcc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.279 2 DEBUG oslo_concurrency.lockutils [req-90a7ebb9-4f9c-43ed-8e40-2dad29652b4d req-403631c2-d5fc-4de5-981f-847f9b1a3dcc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.279 2 DEBUG nova.compute.manager [req-90a7ebb9-4f9c-43ed-8e40-2dad29652b4d req-403631c2-d5fc-4de5-981f-847f9b1a3dcc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Processing event network-vif-plugged-a1864eda-bf8d-42a1-b315-967521604391 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.334 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0858b0cc-8c75-472d-904b-55b9171fd625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.336 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbf3e0c3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.336 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.337 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbf3e0c3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:58 compute-0 kernel: tapfbf3e0c3-40: entered promiscuous mode
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:58 compute-0 NetworkManager[44960]: <info>  [1760174338.3417] manager: (tapfbf3e0c3-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/459)
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.344 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbf3e0c3-40, col_values=(('external_ids', {'iface-id': 'fb1da81b-c31d-4f26-a595-cb8eaad2e189'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:58 compute-0 ovn_controller[152945]: 2025-10-11T09:18:58Z|01128|binding|INFO|Releasing lport fb1da81b-c31d-4f26-a595-cb8eaad2e189 from this chassis (sb_readonly=0)
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.349 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.350 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2bbd9c9f-e8a9-45b8-963d-0e0634786f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.351 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448.pid.haproxy
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:18:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:18:58.353 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'env', 'PROCESS_TAG=haproxy-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.374 2 DEBUG nova.compute.manager [req-02c4584f-0806-4fe0-af7c-f678c37d6ff7 req-3c22c189-e51e-4cff-94db-a6cdb182e3d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-changed-c8bc0542-b12a-4eb6-898d-5fd663184c82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.374 2 DEBUG nova.compute.manager [req-02c4584f-0806-4fe0-af7c-f678c37d6ff7 req-3c22c189-e51e-4cff-94db-a6cdb182e3d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Refreshing instance network info cache due to event network-changed-c8bc0542-b12a-4eb6-898d-5fd663184c82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.374 2 DEBUG oslo_concurrency.lockutils [req-02c4584f-0806-4fe0-af7c-f678c37d6ff7 req-3c22c189-e51e-4cff-94db-a6cdb182e3d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:18:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 420 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 09:18:58 compute-0 podman[384326]: 2025-10-11 09:18:58.80876543 +0000 UTC m=+0.083359733 container create 01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:18:58 compute-0 systemd[1]: Started libpod-conmon-01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1.scope.
Oct 11 09:18:58 compute-0 podman[384326]: 2025-10-11 09:18:58.763378249 +0000 UTC m=+0.037972652 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:18:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c06fe5bbac5efd6711a8c9e5d7b4f200ffa3fc048d59e854178cb58c74576a86/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:18:58 compute-0 podman[384326]: 2025-10-11 09:18:58.907179801 +0000 UTC m=+0.181774204 container init 01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:18:58 compute-0 podman[384326]: 2025-10-11 09:18:58.916426904 +0000 UTC m=+0.191021247 container start 01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.942 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.944 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174338.9445179, 69860e17-caac-461a-a4a5-34ca72c0ee09 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.945 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] VM Started (Lifecycle Event)
Oct 11 09:18:58 compute-0 neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448[384342]: [NOTICE]   (384346) : New worker (384348) forked
Oct 11 09:18:58 compute-0 neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448[384342]: [NOTICE]   (384346) : Loading success.
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.954 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.959 2 INFO nova.virt.libvirt.driver [-] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Instance spawned successfully.
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.959 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:18:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.989 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.997 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.997 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.998 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.998 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.999 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:18:58 compute-0 nova_compute[260935]: 2025-10-11 09:18:58.999 2 DEBUG nova.virt.libvirt.driver [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:18:59 compute-0 nova_compute[260935]: 2025-10-11 09:18:59.034 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:18:59 compute-0 nova_compute[260935]: 2025-10-11 09:18:59.062 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:18:59 compute-0 nova_compute[260935]: 2025-10-11 09:18:59.063 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174338.9451833, 69860e17-caac-461a-a4a5-34ca72c0ee09 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:18:59 compute-0 nova_compute[260935]: 2025-10-11 09:18:59.063 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] VM Paused (Lifecycle Event)
Oct 11 09:18:59 compute-0 nova_compute[260935]: 2025-10-11 09:18:59.074 2 INFO nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Took 8.50 seconds to spawn the instance on the hypervisor.
Oct 11 09:18:59 compute-0 nova_compute[260935]: 2025-10-11 09:18:59.075 2 DEBUG nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:18:59 compute-0 nova_compute[260935]: 2025-10-11 09:18:59.081 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:18:59 compute-0 nova_compute[260935]: 2025-10-11 09:18:59.089 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174338.9536877, 69860e17-caac-461a-a4a5-34ca72c0ee09 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:18:59 compute-0 nova_compute[260935]: 2025-10-11 09:18:59.089 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] VM Resumed (Lifecycle Event)
Oct 11 09:18:59 compute-0 nova_compute[260935]: 2025-10-11 09:18:59.111 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:18:59 compute-0 nova_compute[260935]: 2025-10-11 09:18:59.115 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:18:59 compute-0 nova_compute[260935]: 2025-10-11 09:18:59.149 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:18:59 compute-0 nova_compute[260935]: 2025-10-11 09:18:59.152 2 INFO nova.compute.manager [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Took 9.70 seconds to build instance.
Oct 11 09:18:59 compute-0 nova_compute[260935]: 2025-10-11 09:18:59.179 2 DEBUG oslo_concurrency.lockutils [None req-bbbe524c-0004-40ea-81d8-fcbb2ccf9469 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:18:59 compute-0 ceph-mon[74313]: pgmap v2314: 321 pgs: 321 active+clean; 420 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.196 2 DEBUG nova.network.neutron [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updating instance_info_cache with network_info: [{"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.221 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.222 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Instance network_info: |[{"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.223 2 DEBUG oslo_concurrency.lockutils [req-02c4584f-0806-4fe0-af7c-f678c37d6ff7 req-3c22c189-e51e-4cff-94db-a6cdb182e3d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.223 2 DEBUG nova.network.neutron [req-02c4584f-0806-4fe0-af7c-f678c37d6ff7 req-3c22c189-e51e-4cff-94db-a6cdb182e3d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Refreshing network info cache for port c8bc0542-b12a-4eb6-898d-5fd663184c82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.227 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Start _get_guest_xml network_info=[{"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.234 2 WARNING nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.246 2 DEBUG nova.virt.libvirt.host [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.247 2 DEBUG nova.virt.libvirt.host [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.252 2 DEBUG nova.virt.libvirt.host [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.253 2 DEBUG nova.virt.libvirt.host [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.254 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.254 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.255 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.255 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.255 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.256 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.256 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.256 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.257 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.257 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.258 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.258 2 DEBUG nova.virt.hardware [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.261 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.411 2 DEBUG nova.compute.manager [req-4203aa03-1ee1-44c5-a29f-b9bbce319c05 req-167cd235-2413-46c5-96a8-d540712b0081 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Received event network-vif-plugged-a1864eda-bf8d-42a1-b315-967521604391 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.412 2 DEBUG oslo_concurrency.lockutils [req-4203aa03-1ee1-44c5-a29f-b9bbce319c05 req-167cd235-2413-46c5-96a8-d540712b0081 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.413 2 DEBUG oslo_concurrency.lockutils [req-4203aa03-1ee1-44c5-a29f-b9bbce319c05 req-167cd235-2413-46c5-96a8-d540712b0081 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.414 2 DEBUG oslo_concurrency.lockutils [req-4203aa03-1ee1-44c5-a29f-b9bbce319c05 req-167cd235-2413-46c5-96a8-d540712b0081 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.414 2 DEBUG nova.compute.manager [req-4203aa03-1ee1-44c5-a29f-b9bbce319c05 req-167cd235-2413-46c5-96a8-d540712b0081 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] No waiting events found dispatching network-vif-plugged-a1864eda-bf8d-42a1-b315-967521604391 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.414 2 WARNING nova.compute.manager [req-4203aa03-1ee1-44c5-a29f-b9bbce319c05 req-167cd235-2413-46c5-96a8-d540712b0081 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Received unexpected event network-vif-plugged-a1864eda-bf8d-42a1-b315-967521604391 for instance with vm_state active and task_state None.
Oct 11 09:19:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2315: 321 pgs: 321 active+clean; 420 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:19:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:19:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2683852569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.757 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.796 2 DEBUG nova.storage.rbd_utils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 564a3027-0f98-40fd-a495-1c13a103ea39_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:19:00 compute-0 nova_compute[260935]: 2025-10-11 09:19:00.802 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:00 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2683852569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:19:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:19:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/544037921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.361 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.364 2 DEBUG nova.virt.libvirt.vif [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:18:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1664284742',display_name='tempest-TestGettingAddress-server-1664284742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1664284742',id=114,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-2sdrmv3t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:18:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=564a3027-0f98-40fd-a495-1c13a103ea39,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.365 2 DEBUG nova.network.os_vif_util [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.367 2 DEBUG nova.network.os_vif_util [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:3b:17,bridge_name='br-int',has_traffic_filtering=True,id=dea05614-b04a-4078-a98e-428065514f37,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea05614-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.369 2 DEBUG nova.virt.libvirt.vif [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:18:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1664284742',display_name='tempest-TestGettingAddress-server-1664284742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1664284742',id=114,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-2sdrmv3t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:18:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=564a3027-0f98-40fd-a495-1c13a103ea39,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.369 2 DEBUG nova.network.os_vif_util [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.371 2 DEBUG nova.network.os_vif_util [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:b9:1a,bridge_name='br-int',has_traffic_filtering=True,id=c8bc0542-b12a-4eb6-898d-5fd663184c82,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8bc0542-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.374 2 DEBUG nova.objects.instance [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 564a3027-0f98-40fd-a495-1c13a103ea39 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.394 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:19:01 compute-0 nova_compute[260935]:   <uuid>564a3027-0f98-40fd-a495-1c13a103ea39</uuid>
Oct 11 09:19:01 compute-0 nova_compute[260935]:   <name>instance-00000072</name>
Oct 11 09:19:01 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:19:01 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:19:01 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <nova:name>tempest-TestGettingAddress-server-1664284742</nova:name>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:19:00</nova:creationTime>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:19:01 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:19:01 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:19:01 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:19:01 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:19:01 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:19:01 compute-0 nova_compute[260935]:         <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 09:19:01 compute-0 nova_compute[260935]:         <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:19:01 compute-0 nova_compute[260935]:         <nova:port uuid="dea05614-b04a-4078-a98e-428065514f37">
Oct 11 09:19:01 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:19:01 compute-0 nova_compute[260935]:         <nova:port uuid="c8bc0542-b12a-4eb6-898d-5fd663184c82">
Oct 11 09:19:01 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe78:b91a" ipVersion="6"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:19:01 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:19:01 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <system>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <entry name="serial">564a3027-0f98-40fd-a495-1c13a103ea39</entry>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <entry name="uuid">564a3027-0f98-40fd-a495-1c13a103ea39</entry>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     </system>
Oct 11 09:19:01 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:19:01 compute-0 nova_compute[260935]:   <os>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:   </os>
Oct 11 09:19:01 compute-0 nova_compute[260935]:   <features>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:   </features>
Oct 11 09:19:01 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:19:01 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:19:01 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/564a3027-0f98-40fd-a495-1c13a103ea39_disk">
Oct 11 09:19:01 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       </source>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:19:01 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/564a3027-0f98-40fd-a495-1c13a103ea39_disk.config">
Oct 11 09:19:01 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       </source>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:19:01 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:00:3b:17"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <target dev="tapdea05614-b0"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:78:b9:1a"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <target dev="tapc8bc0542-b1"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39/console.log" append="off"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <video>
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     </video>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:19:01 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:19:01 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:19:01 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:19:01 compute-0 nova_compute[260935]: </domain>
Oct 11 09:19:01 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.397 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Preparing to wait for external event network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.397 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.398 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.398 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.399 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Preparing to wait for external event network-vif-plugged-c8bc0542-b12a-4eb6-898d-5fd663184c82 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.399 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.400 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.401 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.402 2 DEBUG nova.virt.libvirt.vif [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:18:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1664284742',display_name='tempest-TestGettingAddress-server-1664284742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1664284742',id=114,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-2sdrmv3t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:18:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=564a3027-0f98-40fd-a495-1c13a103ea39,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.403 2 DEBUG nova.network.os_vif_util [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.404 2 DEBUG nova.network.os_vif_util [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:3b:17,bridge_name='br-int',has_traffic_filtering=True,id=dea05614-b04a-4078-a98e-428065514f37,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea05614-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.405 2 DEBUG os_vif [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:3b:17,bridge_name='br-int',has_traffic_filtering=True,id=dea05614-b04a-4078-a98e-428065514f37,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea05614-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.407 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.409 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdea05614-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.415 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdea05614-b0, col_values=(('external_ids', {'iface-id': 'dea05614-b04a-4078-a98e-428065514f37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:3b:17', 'vm-uuid': '564a3027-0f98-40fd-a495-1c13a103ea39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:01 compute-0 NetworkManager[44960]: <info>  [1760174341.4189] manager: (tapdea05614-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.427 2 INFO os_vif [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:3b:17,bridge_name='br-int',has_traffic_filtering=True,id=dea05614-b04a-4078-a98e-428065514f37,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea05614-b0')
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.429 2 DEBUG nova.virt.libvirt.vif [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:18:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1664284742',display_name='tempest-TestGettingAddress-server-1664284742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1664284742',id=114,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-2sdrmv3t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:18:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=564a3027-0f98-40fd-a495-1c13a103ea39,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.430 2 DEBUG nova.network.os_vif_util [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.431 2 DEBUG nova.network.os_vif_util [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:b9:1a,bridge_name='br-int',has_traffic_filtering=True,id=c8bc0542-b12a-4eb6-898d-5fd663184c82,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8bc0542-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.432 2 DEBUG os_vif [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:b9:1a,bridge_name='br-int',has_traffic_filtering=True,id=c8bc0542-b12a-4eb6-898d-5fd663184c82,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8bc0542-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.434 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.438 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8bc0542-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc8bc0542-b1, col_values=(('external_ids', {'iface-id': 'c8bc0542-b12a-4eb6-898d-5fd663184c82', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:b9:1a', 'vm-uuid': '564a3027-0f98-40fd-a495-1c13a103ea39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:01 compute-0 NetworkManager[44960]: <info>  [1760174341.4472] manager: (tapc8bc0542-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.456 2 INFO os_vif [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:b9:1a,bridge_name='br-int',has_traffic_filtering=True,id=c8bc0542-b12a-4eb6-898d-5fd663184c82,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8bc0542-b1')
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.525 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.526 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.526 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:00:3b:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.526 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:78:b9:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.527 2 INFO nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Using config drive
Oct 11 09:19:01 compute-0 nova_compute[260935]: 2025-10-11 09:19:01.556 2 DEBUG nova.storage.rbd_utils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 564a3027-0f98-40fd-a495-1c13a103ea39_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:19:01 compute-0 ceph-mon[74313]: pgmap v2315: 321 pgs: 321 active+clean; 420 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 09:19:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/544037921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:19:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2316: 321 pgs: 321 active+clean; 420 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 103 op/s
Oct 11 09:19:03 compute-0 nova_compute[260935]: 2025-10-11 09:19:03.000 2 INFO nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Creating config drive at /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39/disk.config
Oct 11 09:19:03 compute-0 nova_compute[260935]: 2025-10-11 09:19:03.006 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8k2mfp3n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:03 compute-0 nova_compute[260935]: 2025-10-11 09:19:03.181 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8k2mfp3n" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:03 compute-0 nova_compute[260935]: 2025-10-11 09:19:03.216 2 DEBUG nova.storage.rbd_utils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 564a3027-0f98-40fd-a495-1c13a103ea39_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:19:03 compute-0 nova_compute[260935]: 2025-10-11 09:19:03.221 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39/disk.config 564a3027-0f98-40fd-a495-1c13a103ea39_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:03 compute-0 nova_compute[260935]: 2025-10-11 09:19:03.440 2 DEBUG oslo_concurrency.processutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39/disk.config 564a3027-0f98-40fd-a495-1c13a103ea39_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:03 compute-0 nova_compute[260935]: 2025-10-11 09:19:03.442 2 INFO nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Deleting local config drive /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39/disk.config because it was imported into RBD.
Oct 11 09:19:03 compute-0 NetworkManager[44960]: <info>  [1760174343.5030] manager: (tapdea05614-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/462)
Oct 11 09:19:03 compute-0 kernel: tapdea05614-b0: entered promiscuous mode
Oct 11 09:19:03 compute-0 nova_compute[260935]: 2025-10-11 09:19:03.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:03 compute-0 ovn_controller[152945]: 2025-10-11T09:19:03Z|01129|binding|INFO|Claiming lport dea05614-b04a-4078-a98e-428065514f37 for this chassis.
Oct 11 09:19:03 compute-0 ovn_controller[152945]: 2025-10-11T09:19:03Z|01130|binding|INFO|dea05614-b04a-4078-a98e-428065514f37: Claiming fa:16:3e:00:3b:17 10.100.0.3
Oct 11 09:19:03 compute-0 NetworkManager[44960]: <info>  [1760174343.5288] manager: (tapc8bc0542-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/463)
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.546 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:3b:17 10.100.0.3'], port_security=['fa:16:3e:00:3b:17 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '564a3027-0f98-40fd-a495-1c13a103ea39', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a975f5dd-d207-4e6b-9401-45f24b820c2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0785b003-8951-44fc-bd21-f1c5c3076102, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=dea05614-b04a-4078-a98e-428065514f37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.551 162815 INFO neutron.agent.ovn.metadata.agent [-] Port dea05614-b04a-4078-a98e-428065514f37 in datapath 6346ea52-07fc-49ad-8f2d-fcfed9769241 bound to our chassis
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.555 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6346ea52-07fc-49ad-8f2d-fcfed9769241
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.574 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fae0f0aa-f24c-49c3-b6c7-3cd824a83093]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.575 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6346ea52-01 in ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.579 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6346ea52-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.580 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3dd495c2-7c93-4444-b2af-c69b2e9790c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.582 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e86caa39-98a6-46a6-8843-ac81e2be6305]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.603 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[041f7cd5-1aa5-4ee4-9e38-68c3c134efec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:03 compute-0 systemd-machined[215705]: New machine qemu-137-instance-00000072.
Oct 11 09:19:03 compute-0 systemd[1]: Started Virtual Machine qemu-137-instance-00000072.
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.628 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[63df9585-939d-432d-9fdc-4b893c04fa3b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:03 compute-0 kernel: tapc8bc0542-b1: entered promiscuous mode
Oct 11 09:19:03 compute-0 systemd-udevd[384512]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:19:03 compute-0 systemd-udevd[384511]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:19:03 compute-0 ovn_controller[152945]: 2025-10-11T09:19:03Z|01131|binding|INFO|Claiming lport c8bc0542-b12a-4eb6-898d-5fd663184c82 for this chassis.
Oct 11 09:19:03 compute-0 ovn_controller[152945]: 2025-10-11T09:19:03Z|01132|binding|INFO|c8bc0542-b12a-4eb6-898d-5fd663184c82: Claiming fa:16:3e:78:b9:1a 2001:db8::f816:3eff:fe78:b91a
Oct 11 09:19:03 compute-0 nova_compute[260935]: 2025-10-11 09:19:03.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.645 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:b9:1a 2001:db8::f816:3eff:fe78:b91a'], port_security=['fa:16:3e:78:b9:1a 2001:db8::f816:3eff:fe78:b91a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe78:b91a/64', 'neutron:device_id': '564a3027-0f98-40fd-a495-1c13a103ea39', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a975f5dd-d207-4e6b-9401-45f24b820c2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7413cee9-eaf1-4090-8a8e-9bf9dac36a0a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c8bc0542-b12a-4eb6-898d-5fd663184c82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:19:03 compute-0 ovn_controller[152945]: 2025-10-11T09:19:03Z|01133|binding|INFO|Setting lport dea05614-b04a-4078-a98e-428065514f37 ovn-installed in OVS
Oct 11 09:19:03 compute-0 ovn_controller[152945]: 2025-10-11T09:19:03Z|01134|binding|INFO|Setting lport dea05614-b04a-4078-a98e-428065514f37 up in Southbound
Oct 11 09:19:03 compute-0 nova_compute[260935]: 2025-10-11 09:19:03.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:03 compute-0 ovn_controller[152945]: 2025-10-11T09:19:03Z|01135|binding|INFO|Setting lport c8bc0542-b12a-4eb6-898d-5fd663184c82 ovn-installed in OVS
Oct 11 09:19:03 compute-0 ovn_controller[152945]: 2025-10-11T09:19:03Z|01136|binding|INFO|Setting lport c8bc0542-b12a-4eb6-898d-5fd663184c82 up in Southbound
Oct 11 09:19:03 compute-0 NetworkManager[44960]: <info>  [1760174343.6608] device (tapc8bc0542-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:19:03 compute-0 nova_compute[260935]: 2025-10-11 09:19:03.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:03 compute-0 NetworkManager[44960]: <info>  [1760174343.6626] device (tapc8bc0542-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:19:03 compute-0 NetworkManager[44960]: <info>  [1760174343.6696] device (tapdea05614-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.669 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2f56af59-1edf-4066-a74e-df2b795af6b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:03 compute-0 NetworkManager[44960]: <info>  [1760174343.6705] device (tapdea05614-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:19:03 compute-0 NetworkManager[44960]: <info>  [1760174343.6757] manager: (tap6346ea52-00): new Veth device (/org/freedesktop/NetworkManager/Devices/464)
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.675 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fade1765-0c56-4ae6-9360-ac464d73cb55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:03 compute-0 podman[384497]: 2025-10-11 09:19:03.698206855 +0000 UTC m=+0.107481879 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.721 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b2b21f-bbfb-4ce2-88dc-67800dfb7f33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.725 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cca34fe9-f86f-47af-a938-c81b83b43687]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:03 compute-0 NetworkManager[44960]: <info>  [1760174343.7445] device (tap6346ea52-00): carrier: link connected
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.749 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6a6128-e53d-420e-b851-e9e7ba1f3129]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.766 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[758262ea-a0ad-4be6-b026-c137d34c33c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6346ea52-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:67:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618844, 'reachable_time': 24736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384550, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.782 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b32d6a8f-8499-4319-af93-5072a92e5f8c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef5:67fa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618844, 'tstamp': 618844}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384551, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.798 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[da43e569-4257-40ed-8979-68b7d62b7b4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6346ea52-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:67:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618844, 'reachable_time': 24736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384552, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.837 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ffac1703-5763-43af-839b-1858d2ce5b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:03 compute-0 ceph-mon[74313]: pgmap v2316: 321 pgs: 321 active+clean; 420 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 103 op/s
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.930 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec7b970-f9e2-46e2-9eb5-1d072b77152b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.933 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6346ea52-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.934 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.935 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6346ea52-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:03 compute-0 nova_compute[260935]: 2025-10-11 09:19:03.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:03 compute-0 NetworkManager[44960]: <info>  [1760174343.9377] manager: (tap6346ea52-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/465)
Oct 11 09:19:03 compute-0 kernel: tap6346ea52-00: entered promiscuous mode
Oct 11 09:19:03 compute-0 nova_compute[260935]: 2025-10-11 09:19:03.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.941 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6346ea52-00, col_values=(('external_ids', {'iface-id': 'f4a0a09a-d174-44d3-b63d-753fefd94646'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:03 compute-0 nova_compute[260935]: 2025-10-11 09:19:03.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:03 compute-0 ovn_controller[152945]: 2025-10-11T09:19:03Z|01137|binding|INFO|Releasing lport f4a0a09a-d174-44d3-b63d-753fefd94646 from this chassis (sb_readonly=0)
Oct 11 09:19:03 compute-0 nova_compute[260935]: 2025-10-11 09:19:03.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.962 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6346ea52-07fc-49ad-8f2d-fcfed9769241.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6346ea52-07fc-49ad-8f2d-fcfed9769241.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.966 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f49a1fc1-c85f-4beb-b7a5-9680613ef402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.967 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-6346ea52-07fc-49ad-8f2d-fcfed9769241
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/6346ea52-07fc-49ad-8f2d-fcfed9769241.pid.haproxy
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 6346ea52-07fc-49ad-8f2d-fcfed9769241
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:19:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:03.971 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'env', 'PROCESS_TAG=haproxy-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6346ea52-07fc-49ad-8f2d-fcfed9769241.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:19:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.154 2 DEBUG nova.compute.manager [req-019006d6-51d8-4285-9e57-1e363f245d11 req-113219a6-4aef-48a9-a354-032ae87407fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-vif-plugged-c8bc0542-b12a-4eb6-898d-5fd663184c82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.154 2 DEBUG oslo_concurrency.lockutils [req-019006d6-51d8-4285-9e57-1e363f245d11 req-113219a6-4aef-48a9-a354-032ae87407fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.154 2 DEBUG oslo_concurrency.lockutils [req-019006d6-51d8-4285-9e57-1e363f245d11 req-113219a6-4aef-48a9-a354-032ae87407fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.155 2 DEBUG oslo_concurrency.lockutils [req-019006d6-51d8-4285-9e57-1e363f245d11 req-113219a6-4aef-48a9-a354-032ae87407fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.155 2 DEBUG nova.compute.manager [req-019006d6-51d8-4285-9e57-1e363f245d11 req-113219a6-4aef-48a9-a354-032ae87407fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Processing event network-vif-plugged-c8bc0542-b12a-4eb6-898d-5fd663184c82 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:19:04 compute-0 podman[384621]: 2025-10-11 09:19:04.408468644 +0000 UTC m=+0.055237382 container create 35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.442 2 DEBUG nova.network.neutron [req-02c4584f-0806-4fe0-af7c-f678c37d6ff7 req-3c22c189-e51e-4cff-94db-a6cdb182e3d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updated VIF entry in instance network info cache for port c8bc0542-b12a-4eb6-898d-5fd663184c82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.443 2 DEBUG nova.network.neutron [req-02c4584f-0806-4fe0-af7c-f678c37d6ff7 req-3c22c189-e51e-4cff-94db-a6cdb182e3d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updating instance_info_cache with network_info: [{"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:19:04 compute-0 systemd[1]: Started libpod-conmon-35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2.scope.
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.464 2 DEBUG oslo_concurrency.lockutils [req-02c4584f-0806-4fe0-af7c-f678c37d6ff7 req-3c22c189-e51e-4cff-94db-a6cdb182e3d9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:19:04 compute-0 podman[384621]: 2025-10-11 09:19:04.383512324 +0000 UTC m=+0.030281102 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:19:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.496 2 DEBUG nova.compute.manager [req-44a374fb-8cb5-4af0-ac07-488663623fde req-8c1e3b1d-8f35-4131-8215-737f5e5bcd3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.496 2 DEBUG oslo_concurrency.lockutils [req-44a374fb-8cb5-4af0-ac07-488663623fde req-8c1e3b1d-8f35-4131-8215-737f5e5bcd3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.496 2 DEBUG oslo_concurrency.lockutils [req-44a374fb-8cb5-4af0-ac07-488663623fde req-8c1e3b1d-8f35-4131-8215-737f5e5bcd3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.496 2 DEBUG oslo_concurrency.lockutils [req-44a374fb-8cb5-4af0-ac07-488663623fde req-8c1e3b1d-8f35-4131-8215-737f5e5bcd3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.497 2 DEBUG nova.compute.manager [req-44a374fb-8cb5-4af0-ac07-488663623fde req-8c1e3b1d-8f35-4131-8215-737f5e5bcd3e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Processing event network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adfdfffff4362d598c2ef7afc8f136cd17a5b46b0bac4f37ab2cb909f4afc6f2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:19:04 compute-0 podman[384621]: 2025-10-11 09:19:04.515681025 +0000 UTC m=+0.162449793 container init 35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:19:04 compute-0 podman[384621]: 2025-10-11 09:19:04.52604901 +0000 UTC m=+0.172817758 container start 35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 11 09:19:04 compute-0 neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241[384643]: [NOTICE]   (384647) : New worker (384649) forked
Oct 11 09:19:04 compute-0 neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241[384643]: [NOTICE]   (384647) : Loading success.
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.587 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c8bc0542-b12a-4eb6-898d-5fd663184c82 in datapath ce353a4c-7280-46bb-ac7d-157bb5dc08cb unbound from our chassis
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.594 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce353a4c-7280-46bb-ac7d-157bb5dc08cb
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.610 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[07441b2f-f1d3-4e98-8ab9-d08d660aa803]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.611 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce353a4c-71 in ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.614 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce353a4c-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.614 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b96e7a8e-80e7-4d26-a7a2-aa254734100f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.616 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[da7e1521-e9ba-4432-9c62-2dca2e1ecaa0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.632 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0f0eed-b45f-43fa-b022-c198d8cdc305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.646 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[96d71ede-c7d3-44f0-a898-0c2cb012b0ad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 420 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 91 op/s
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.689 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[57e39484-fdc0-4bed-8fe9-5f031741ddc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:04 compute-0 systemd-udevd[384540]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:19:04 compute-0 NetworkManager[44960]: <info>  [1760174344.7013] manager: (tapce353a4c-70): new Veth device (/org/freedesktop/NetworkManager/Devices/466)
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.705 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[30ad8543-2469-4743-93a5-4dc98bb08243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.752 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6a19fc12-4efd-45a3-8ad1-955fa7c8a8cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.757 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[df77b8b5-e8a4-46f9-8cf9-03ba9e96e4a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:04 compute-0 NetworkManager[44960]: <info>  [1760174344.7883] device (tapce353a4c-70): carrier: link connected
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.797 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd39355-5795-420f-97d5-d8f07dcb3abd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.821 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ae7ff0a2-97e2-432a-8470-b96783a44db6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce353a4c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:23:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618949, 'reachable_time': 20488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384668, 'error': None, 'target': 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.848 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6e816665-39e8-4b9e-932f-9bb2ad68a8a6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:2362'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618949, 'tstamp': 618949}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384669, 'error': None, 'target': 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.877 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c60c5dde-479e-4920-ae91-1b9b760c3d80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce353a4c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:23:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618949, 'reachable_time': 20488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384670, 'error': None, 'target': 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.915 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a86ed9e1-5787-42cd-85f7-c2729622294a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.930 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.931 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174344.9302588, 564a3027-0f98-40fd-a495-1c13a103ea39 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.932 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] VM Started (Lifecycle Event)
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.950 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.954 2 INFO nova.virt.libvirt.driver [-] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Instance spawned successfully.
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.954 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.957 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4597f233-c9ad-4c7b-a4df-daa16271ab6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.959 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce353a4c-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.960 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.961 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce353a4c-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.963 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:04 compute-0 kernel: tapce353a4c-70: entered promiscuous mode
Oct 11 09:19:04 compute-0 NetworkManager[44960]: <info>  [1760174344.9648] manager: (tapce353a4c-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/467)
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.968 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce353a4c-70, col_values=(('external_ids', {'iface-id': 'bf7f7fa8-f1d9-4202-bec4-06d2178d548d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.970 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:19:04 compute-0 ovn_controller[152945]: 2025-10-11T09:19:04Z|01138|binding|INFO|Releasing lport bf7f7fa8-f1d9-4202-bec4-06d2178d548d from this chassis (sb_readonly=0)
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.974 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce353a4c-7280-46bb-ac7d-157bb5dc08cb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce353a4c-7280-46bb-ac7d-157bb5dc08cb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.977 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ee29ddf3-8c19-4f82-a587-9db4e25ce3d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.979 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-ce353a4c-7280-46bb-ac7d-157bb5dc08cb
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/ce353a4c-7280-46bb-ac7d-157bb5dc08cb.pid.haproxy
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID ce353a4c-7280-46bb-ac7d-157bb5dc08cb
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.980 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.980 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.981 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.981 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.981 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.982 2 DEBUG nova.virt.libvirt.driver [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:04 compute-0 nova_compute[260935]: 2025-10-11 09:19:04.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:04 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:04.983 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'env', 'PROCESS_TAG=haproxy-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce353a4c-7280-46bb-ac7d-157bb5dc08cb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:19:05 compute-0 nova_compute[260935]: 2025-10-11 09:19:05.012 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:19:05 compute-0 nova_compute[260935]: 2025-10-11 09:19:05.013 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174344.931285, 564a3027-0f98-40fd-a495-1c13a103ea39 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:19:05 compute-0 nova_compute[260935]: 2025-10-11 09:19:05.013 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] VM Paused (Lifecycle Event)
Oct 11 09:19:05 compute-0 nova_compute[260935]: 2025-10-11 09:19:05.050 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:19:05 compute-0 nova_compute[260935]: 2025-10-11 09:19:05.053 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174344.9355233, 564a3027-0f98-40fd-a495-1c13a103ea39 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:19:05 compute-0 nova_compute[260935]: 2025-10-11 09:19:05.053 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] VM Resumed (Lifecycle Event)
Oct 11 09:19:05 compute-0 nova_compute[260935]: 2025-10-11 09:19:05.066 2 INFO nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Took 13.23 seconds to spawn the instance on the hypervisor.
Oct 11 09:19:05 compute-0 nova_compute[260935]: 2025-10-11 09:19:05.067 2 DEBUG nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:19:05 compute-0 nova_compute[260935]: 2025-10-11 09:19:05.069 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:19:05 compute-0 nova_compute[260935]: 2025-10-11 09:19:05.074 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:19:05 compute-0 nova_compute[260935]: 2025-10-11 09:19:05.104 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:19:05 compute-0 nova_compute[260935]: 2025-10-11 09:19:05.119 2 INFO nova.compute.manager [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Took 14.39 seconds to build instance.
Oct 11 09:19:05 compute-0 nova_compute[260935]: 2025-10-11 09:19:05.131 2 DEBUG oslo_concurrency.lockutils [None req-d8798150-4c23-4dc3-830b-d5e7985ac3ad 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033216606230057955 of space, bias 1.0, pg target 0.9964981869017386 quantized to 32 (current 32)
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:19:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:19:05 compute-0 podman[384701]: 2025-10-11 09:19:05.400885233 +0000 UTC m=+0.069475698 container create 4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 11 09:19:05 compute-0 systemd[1]: Started libpod-conmon-4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4.scope.
Oct 11 09:19:05 compute-0 podman[384701]: 2025-10-11 09:19:05.377080536 +0000 UTC m=+0.045671001 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:19:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c33ef9ddc9558d0573d134c73c574d03ca7402b4d4ad1531c574c341a884ee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:19:05 compute-0 podman[384701]: 2025-10-11 09:19:05.515236217 +0000 UTC m=+0.183826702 container init 4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:19:05 compute-0 podman[384701]: 2025-10-11 09:19:05.525646083 +0000 UTC m=+0.194236548 container start 4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 09:19:05 compute-0 neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb[384716]: [NOTICE]   (384720) : New worker (384722) forked
Oct 11 09:19:05 compute-0 neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb[384716]: [NOTICE]   (384720) : Loading success.
Oct 11 09:19:05 compute-0 ceph-mon[74313]: pgmap v2317: 321 pgs: 321 active+clean; 420 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 91 op/s
Oct 11 09:19:06 compute-0 nova_compute[260935]: 2025-10-11 09:19:06.250 2 DEBUG nova.compute.manager [req-89ea2b94-d7d4-48f4-9e18-a6cf5f6fea6a req-1a1d31e6-df8e-468c-bf3c-2cec2cb4ef9a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-vif-plugged-c8bc0542-b12a-4eb6-898d-5fd663184c82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:06 compute-0 nova_compute[260935]: 2025-10-11 09:19:06.251 2 DEBUG oslo_concurrency.lockutils [req-89ea2b94-d7d4-48f4-9e18-a6cf5f6fea6a req-1a1d31e6-df8e-468c-bf3c-2cec2cb4ef9a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:06 compute-0 nova_compute[260935]: 2025-10-11 09:19:06.252 2 DEBUG oslo_concurrency.lockutils [req-89ea2b94-d7d4-48f4-9e18-a6cf5f6fea6a req-1a1d31e6-df8e-468c-bf3c-2cec2cb4ef9a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:06 compute-0 nova_compute[260935]: 2025-10-11 09:19:06.252 2 DEBUG oslo_concurrency.lockutils [req-89ea2b94-d7d4-48f4-9e18-a6cf5f6fea6a req-1a1d31e6-df8e-468c-bf3c-2cec2cb4ef9a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:06 compute-0 nova_compute[260935]: 2025-10-11 09:19:06.253 2 DEBUG nova.compute.manager [req-89ea2b94-d7d4-48f4-9e18-a6cf5f6fea6a req-1a1d31e6-df8e-468c-bf3c-2cec2cb4ef9a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] No waiting events found dispatching network-vif-plugged-c8bc0542-b12a-4eb6-898d-5fd663184c82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:19:06 compute-0 nova_compute[260935]: 2025-10-11 09:19:06.254 2 WARNING nova.compute.manager [req-89ea2b94-d7d4-48f4-9e18-a6cf5f6fea6a req-1a1d31e6-df8e-468c-bf3c-2cec2cb4ef9a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received unexpected event network-vif-plugged-c8bc0542-b12a-4eb6-898d-5fd663184c82 for instance with vm_state active and task_state None.
Oct 11 09:19:06 compute-0 ovn_controller[152945]: 2025-10-11T09:19:06Z|01139|binding|INFO|Releasing lport bf7f7fa8-f1d9-4202-bec4-06d2178d548d from this chassis (sb_readonly=0)
Oct 11 09:19:06 compute-0 ovn_controller[152945]: 2025-10-11T09:19:06Z|01140|binding|INFO|Releasing lport fb1da81b-c31d-4f26-a595-cb8eaad2e189 from this chassis (sb_readonly=0)
Oct 11 09:19:06 compute-0 ovn_controller[152945]: 2025-10-11T09:19:06Z|01141|binding|INFO|Releasing lport f4a0a09a-d174-44d3-b63d-753fefd94646 from this chassis (sb_readonly=0)
Oct 11 09:19:06 compute-0 ovn_controller[152945]: 2025-10-11T09:19:06Z|01142|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:19:06 compute-0 ovn_controller[152945]: 2025-10-11T09:19:06Z|01143|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:19:06 compute-0 nova_compute[260935]: 2025-10-11 09:19:06.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:06 compute-0 NetworkManager[44960]: <info>  [1760174346.4264] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Oct 11 09:19:06 compute-0 NetworkManager[44960]: <info>  [1760174346.4278] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/469)
Oct 11 09:19:06 compute-0 nova_compute[260935]: 2025-10-11 09:19:06.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:06 compute-0 ovn_controller[152945]: 2025-10-11T09:19:06Z|01144|binding|INFO|Releasing lport bf7f7fa8-f1d9-4202-bec4-06d2178d548d from this chassis (sb_readonly=0)
Oct 11 09:19:06 compute-0 ovn_controller[152945]: 2025-10-11T09:19:06Z|01145|binding|INFO|Releasing lport fb1da81b-c31d-4f26-a595-cb8eaad2e189 from this chassis (sb_readonly=0)
Oct 11 09:19:06 compute-0 ovn_controller[152945]: 2025-10-11T09:19:06Z|01146|binding|INFO|Releasing lport f4a0a09a-d174-44d3-b63d-753fefd94646 from this chassis (sb_readonly=0)
Oct 11 09:19:06 compute-0 ovn_controller[152945]: 2025-10-11T09:19:06Z|01147|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:19:06 compute-0 ovn_controller[152945]: 2025-10-11T09:19:06Z|01148|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:19:06 compute-0 nova_compute[260935]: 2025-10-11 09:19:06.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:06 compute-0 nova_compute[260935]: 2025-10-11 09:19:06.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:06 compute-0 nova_compute[260935]: 2025-10-11 09:19:06.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2318: 321 pgs: 321 active+clean; 420 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 741 KiB/s wr, 87 op/s
Oct 11 09:19:06 compute-0 nova_compute[260935]: 2025-10-11 09:19:06.785 2 DEBUG nova.compute.manager [req-91eea30b-31e6-4908-8765-0e59d19f068c req-0f55932a-ed17-4a7f-b328-ca27872a57f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:06 compute-0 nova_compute[260935]: 2025-10-11 09:19:06.786 2 DEBUG oslo_concurrency.lockutils [req-91eea30b-31e6-4908-8765-0e59d19f068c req-0f55932a-ed17-4a7f-b328-ca27872a57f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:06 compute-0 nova_compute[260935]: 2025-10-11 09:19:06.787 2 DEBUG oslo_concurrency.lockutils [req-91eea30b-31e6-4908-8765-0e59d19f068c req-0f55932a-ed17-4a7f-b328-ca27872a57f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:06 compute-0 nova_compute[260935]: 2025-10-11 09:19:06.788 2 DEBUG oslo_concurrency.lockutils [req-91eea30b-31e6-4908-8765-0e59d19f068c req-0f55932a-ed17-4a7f-b328-ca27872a57f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:06 compute-0 nova_compute[260935]: 2025-10-11 09:19:06.788 2 DEBUG nova.compute.manager [req-91eea30b-31e6-4908-8765-0e59d19f068c req-0f55932a-ed17-4a7f-b328-ca27872a57f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] No waiting events found dispatching network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:19:06 compute-0 nova_compute[260935]: 2025-10-11 09:19:06.789 2 WARNING nova.compute.manager [req-91eea30b-31e6-4908-8765-0e59d19f068c req-0f55932a-ed17-4a7f-b328-ca27872a57f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received unexpected event network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 for instance with vm_state active and task_state None.
Oct 11 09:19:07 compute-0 ceph-mon[74313]: pgmap v2318: 321 pgs: 321 active+clean; 420 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 741 KiB/s wr, 87 op/s
Oct 11 09:19:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2319: 321 pgs: 321 active+clean; 421 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 754 KiB/s wr, 160 op/s
Oct 11 09:19:08 compute-0 nova_compute[260935]: 2025-10-11 09:19:08.905 2 DEBUG nova.compute.manager [req-211794b5-7183-4893-9361-4cdce30a649e req-bb245872-1cf5-4880-8bfe-1214470401d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Received event network-changed-a1864eda-bf8d-42a1-b315-967521604391 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:08 compute-0 nova_compute[260935]: 2025-10-11 09:19:08.907 2 DEBUG nova.compute.manager [req-211794b5-7183-4893-9361-4cdce30a649e req-bb245872-1cf5-4880-8bfe-1214470401d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Refreshing instance network info cache due to event network-changed-a1864eda-bf8d-42a1-b315-967521604391. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:19:08 compute-0 nova_compute[260935]: 2025-10-11 09:19:08.907 2 DEBUG oslo_concurrency.lockutils [req-211794b5-7183-4893-9361-4cdce30a649e req-bb245872-1cf5-4880-8bfe-1214470401d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:19:08 compute-0 nova_compute[260935]: 2025-10-11 09:19:08.908 2 DEBUG oslo_concurrency.lockutils [req-211794b5-7183-4893-9361-4cdce30a649e req-bb245872-1cf5-4880-8bfe-1214470401d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:19:08 compute-0 nova_compute[260935]: 2025-10-11 09:19:08.909 2 DEBUG nova.network.neutron [req-211794b5-7183-4893-9361-4cdce30a649e req-bb245872-1cf5-4880-8bfe-1214470401d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Refreshing network info cache for port a1864eda-bf8d-42a1-b315-967521604391 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:19:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:19:09 compute-0 podman[384732]: 2025-10-11 09:19:09.80751065 +0000 UTC m=+0.101943282 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:19:09 compute-0 ceph-mon[74313]: pgmap v2319: 321 pgs: 321 active+clean; 421 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 754 KiB/s wr, 160 op/s
Oct 11 09:19:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2320: 321 pgs: 321 active+clean; 421 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Oct 11 09:19:10 compute-0 ovn_controller[152945]: 2025-10-11T09:19:10Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8e:a2:5e 10.100.0.3
Oct 11 09:19:10 compute-0 ovn_controller[152945]: 2025-10-11T09:19:10Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:a2:5e 10.100.0.3
Oct 11 09:19:11 compute-0 nova_compute[260935]: 2025-10-11 09:19:11.012 2 DEBUG nova.network.neutron [req-211794b5-7183-4893-9361-4cdce30a649e req-bb245872-1cf5-4880-8bfe-1214470401d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Updated VIF entry in instance network info cache for port a1864eda-bf8d-42a1-b315-967521604391. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:19:11 compute-0 nova_compute[260935]: 2025-10-11 09:19:11.014 2 DEBUG nova.network.neutron [req-211794b5-7183-4893-9361-4cdce30a649e req-bb245872-1cf5-4880-8bfe-1214470401d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Updating instance_info_cache with network_info: [{"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:19:11 compute-0 nova_compute[260935]: 2025-10-11 09:19:11.044 2 DEBUG oslo_concurrency.lockutils [req-211794b5-7183-4893-9361-4cdce30a649e req-bb245872-1cf5-4880-8bfe-1214470401d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:19:11 compute-0 nova_compute[260935]: 2025-10-11 09:19:11.059 2 DEBUG nova.compute.manager [req-df0f2baa-3fc3-42cc-960d-d4609582f845 req-20de248d-bfdb-4002-b190-50b7160d8829 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-changed-dea05614-b04a-4078-a98e-428065514f37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:11 compute-0 nova_compute[260935]: 2025-10-11 09:19:11.060 2 DEBUG nova.compute.manager [req-df0f2baa-3fc3-42cc-960d-d4609582f845 req-20de248d-bfdb-4002-b190-50b7160d8829 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Refreshing instance network info cache due to event network-changed-dea05614-b04a-4078-a98e-428065514f37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:19:11 compute-0 nova_compute[260935]: 2025-10-11 09:19:11.061 2 DEBUG oslo_concurrency.lockutils [req-df0f2baa-3fc3-42cc-960d-d4609582f845 req-20de248d-bfdb-4002-b190-50b7160d8829 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:19:11 compute-0 nova_compute[260935]: 2025-10-11 09:19:11.062 2 DEBUG oslo_concurrency.lockutils [req-df0f2baa-3fc3-42cc-960d-d4609582f845 req-20de248d-bfdb-4002-b190-50b7160d8829 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:19:11 compute-0 nova_compute[260935]: 2025-10-11 09:19:11.063 2 DEBUG nova.network.neutron [req-df0f2baa-3fc3-42cc-960d-d4609582f845 req-20de248d-bfdb-4002-b190-50b7160d8829 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Refreshing network info cache for port dea05614-b04a-4078-a98e-428065514f37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:19:11 compute-0 nova_compute[260935]: 2025-10-11 09:19:11.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:11 compute-0 nova_compute[260935]: 2025-10-11 09:19:11.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:11 compute-0 ceph-mon[74313]: pgmap v2320: 321 pgs: 321 active+clean; 421 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Oct 11 09:19:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2321: 321 pgs: 321 active+clean; 448 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.9 MiB/s wr, 203 op/s
Oct 11 09:19:13 compute-0 nova_compute[260935]: 2025-10-11 09:19:13.186 2 DEBUG nova.network.neutron [req-df0f2baa-3fc3-42cc-960d-d4609582f845 req-20de248d-bfdb-4002-b190-50b7160d8829 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updated VIF entry in instance network info cache for port dea05614-b04a-4078-a98e-428065514f37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:19:13 compute-0 nova_compute[260935]: 2025-10-11 09:19:13.187 2 DEBUG nova.network.neutron [req-df0f2baa-3fc3-42cc-960d-d4609582f845 req-20de248d-bfdb-4002-b190-50b7160d8829 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updating instance_info_cache with network_info: [{"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:19:13 compute-0 nova_compute[260935]: 2025-10-11 09:19:13.206 2 DEBUG oslo_concurrency.lockutils [req-df0f2baa-3fc3-42cc-960d-d4609582f845 req-20de248d-bfdb-4002-b190-50b7160d8829 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:19:13 compute-0 ceph-mon[74313]: pgmap v2321: 321 pgs: 321 active+clean; 448 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.9 MiB/s wr, 203 op/s
Oct 11 09:19:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:19:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2322: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 164 op/s
Oct 11 09:19:14 compute-0 sshd-session[384752]: Invalid user testadmin from 152.32.213.170 port 32996
Oct 11 09:19:14 compute-0 sshd-session[384752]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:19:14 compute-0 sshd-session[384752]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:19:15 compute-0 podman[384754]: 2025-10-11 09:19:15.009504637 +0000 UTC m=+0.101992923 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:19:15 compute-0 podman[384755]: 2025-10-11 09:19:15.063257747 +0000 UTC m=+0.142180697 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct 11 09:19:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:15.217 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:15.218 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:15.219 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:15 compute-0 nova_compute[260935]: 2025-10-11 09:19:15.860 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:19:15 compute-0 ceph-mon[74313]: pgmap v2322: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 164 op/s
Oct 11 09:19:16 compute-0 nova_compute[260935]: 2025-10-11 09:19:16.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:16 compute-0 nova_compute[260935]: 2025-10-11 09:19:16.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:16 compute-0 unix_chkpwd[384802]: password check failed for user (root)
Oct 11 09:19:16 compute-0 sshd-session[384800]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:19:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2323: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 11 09:19:17 compute-0 sshd-session[384752]: Failed password for invalid user testadmin from 152.32.213.170 port 32996 ssh2
Oct 11 09:19:17 compute-0 nova_compute[260935]: 2025-10-11 09:19:17.830 2 INFO nova.compute.manager [None req-021c4a4b-0393-4dc8-b868-dc4f6767b3b9 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Get console output
Oct 11 09:19:17 compute-0 nova_compute[260935]: 2025-10-11 09:19:17.839 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:19:17 compute-0 ceph-mon[74313]: pgmap v2323: 321 pgs: 321 active+clean; 453 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 11 09:19:18 compute-0 ovn_controller[152945]: 2025-10-11T09:19:18Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:3b:17 10.100.0.3
Oct 11 09:19:18 compute-0 ovn_controller[152945]: 2025-10-11T09:19:18Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:3b:17 10.100.0.3
Oct 11 09:19:18 compute-0 sshd-session[384800]: Failed password for root from 165.232.82.252 port 41732 ssh2
Oct 11 09:19:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2324: 321 pgs: 321 active+clean; 484 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 201 op/s
Oct 11 09:19:18 compute-0 sshd-session[384800]: Connection closed by authenticating user root 165.232.82.252 port 41732 [preauth]
Oct 11 09:19:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:19:19 compute-0 nova_compute[260935]: 2025-10-11 09:19:19.617 2 DEBUG nova.compute.manager [req-cb0535f6-30c9-4209-a88d-d214166fd672 req-76a6e508-1427-4773-a9b8-4866a4390a69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Received event network-changed-a1864eda-bf8d-42a1-b315-967521604391 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:19 compute-0 nova_compute[260935]: 2025-10-11 09:19:19.617 2 DEBUG nova.compute.manager [req-cb0535f6-30c9-4209-a88d-d214166fd672 req-76a6e508-1427-4773-a9b8-4866a4390a69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Refreshing instance network info cache due to event network-changed-a1864eda-bf8d-42a1-b315-967521604391. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:19:19 compute-0 nova_compute[260935]: 2025-10-11 09:19:19.618 2 DEBUG oslo_concurrency.lockutils [req-cb0535f6-30c9-4209-a88d-d214166fd672 req-76a6e508-1427-4773-a9b8-4866a4390a69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:19:19 compute-0 nova_compute[260935]: 2025-10-11 09:19:19.619 2 DEBUG oslo_concurrency.lockutils [req-cb0535f6-30c9-4209-a88d-d214166fd672 req-76a6e508-1427-4773-a9b8-4866a4390a69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:19:19 compute-0 nova_compute[260935]: 2025-10-11 09:19:19.619 2 DEBUG nova.network.neutron [req-cb0535f6-30c9-4209-a88d-d214166fd672 req-76a6e508-1427-4773-a9b8-4866a4390a69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Refreshing network info cache for port a1864eda-bf8d-42a1-b315-967521604391 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:19:19 compute-0 ceph-mon[74313]: pgmap v2324: 321 pgs: 321 active+clean; 484 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 201 op/s
Oct 11 09:19:20 compute-0 sshd-session[384752]: Received disconnect from 152.32.213.170 port 32996:11: Bye Bye [preauth]
Oct 11 09:19:20 compute-0 sshd-session[384752]: Disconnected from invalid user testadmin 152.32.213.170 port 32996 [preauth]
Oct 11 09:19:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2325: 321 pgs: 321 active+clean; 484 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 712 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Oct 11 09:19:21 compute-0 nova_compute[260935]: 2025-10-11 09:19:21.163 2 DEBUG nova.network.neutron [req-cb0535f6-30c9-4209-a88d-d214166fd672 req-76a6e508-1427-4773-a9b8-4866a4390a69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Updated VIF entry in instance network info cache for port a1864eda-bf8d-42a1-b315-967521604391. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:19:21 compute-0 nova_compute[260935]: 2025-10-11 09:19:21.164 2 DEBUG nova.network.neutron [req-cb0535f6-30c9-4209-a88d-d214166fd672 req-76a6e508-1427-4773-a9b8-4866a4390a69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Updating instance_info_cache with network_info: [{"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:19:21 compute-0 nova_compute[260935]: 2025-10-11 09:19:21.190 2 DEBUG oslo_concurrency.lockutils [req-cb0535f6-30c9-4209-a88d-d214166fd672 req-76a6e508-1427-4773-a9b8-4866a4390a69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-69860e17-caac-461a-a4a5-34ca72c0ee09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:19:21 compute-0 nova_compute[260935]: 2025-10-11 09:19:21.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:19:21 compute-0 nova_compute[260935]: 2025-10-11 09:19:21.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:19:21 compute-0 nova_compute[260935]: 2025-10-11 09:19:21.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5029 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:19:21 compute-0 nova_compute[260935]: 2025-10-11 09:19:21.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:19:21 compute-0 nova_compute[260935]: 2025-10-11 09:19:21.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:21 compute-0 nova_compute[260935]: 2025-10-11 09:19:21.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:19:21 compute-0 ceph-mon[74313]: pgmap v2325: 321 pgs: 321 active+clean; 484 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 712 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Oct 11 09:19:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2326: 321 pgs: 321 active+clean; 486 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 720 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Oct 11 09:19:23 compute-0 ceph-mon[74313]: pgmap v2326: 321 pgs: 321 active+clean; 486 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 720 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Oct 11 09:19:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:19:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 486 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Oct 11 09:19:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:19:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:19:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:19:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:19:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:19:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:19:25 compute-0 ceph-mon[74313]: pgmap v2327: 321 pgs: 321 active+clean; 486 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Oct 11 09:19:26 compute-0 nova_compute[260935]: 2025-10-11 09:19:26.493 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:26 compute-0 nova_compute[260935]: 2025-10-11 09:19:26.494 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:26 compute-0 nova_compute[260935]: 2025-10-11 09:19:26.509 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:19:26 compute-0 nova_compute[260935]: 2025-10-11 09:19:26.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:26 compute-0 nova_compute[260935]: 2025-10-11 09:19:26.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:26 compute-0 nova_compute[260935]: 2025-10-11 09:19:26.583 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:26 compute-0 nova_compute[260935]: 2025-10-11 09:19:26.583 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:26 compute-0 nova_compute[260935]: 2025-10-11 09:19:26.596 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:19:26 compute-0 nova_compute[260935]: 2025-10-11 09:19:26.597 2 INFO nova.compute.claims [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:19:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:19:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4070051561' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:19:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:19:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4070051561' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:19:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 486 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Oct 11 09:19:26 compute-0 nova_compute[260935]: 2025-10-11 09:19:26.870 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/4070051561' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:19:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/4070051561' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:19:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:19:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1950226892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.378 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.389 2 DEBUG nova.compute.provider_tree [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.483 2 DEBUG nova.scheduler.client.report [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.507 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.509 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.559 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.560 2 DEBUG nova.network.neutron [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.581 2 INFO nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.600 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.686 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.687 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.688 2 INFO nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Creating image(s)
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.713 2 DEBUG nova.storage.rbd_utils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.743 2 DEBUG nova.storage.rbd_utils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.780 2 DEBUG nova.storage.rbd_utils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.785 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.856 2 DEBUG nova.policy [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.895 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.896 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.897 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.897 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.928 2 DEBUG nova.storage.rbd_utils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:19:27 compute-0 nova_compute[260935]: 2025-10-11 09:19:27.933 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:27 compute-0 ceph-mon[74313]: pgmap v2328: 321 pgs: 321 active+clean; 486 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Oct 11 09:19:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1950226892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:19:28 compute-0 nova_compute[260935]: 2025-10-11 09:19:28.271 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:28 compute-0 nova_compute[260935]: 2025-10-11 09:19:28.361 2 DEBUG nova.storage.rbd_utils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:19:28 compute-0 nova_compute[260935]: 2025-10-11 09:19:28.469 2 DEBUG nova.objects.instance [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid 6c9132ae-fb4f-4a77-8e3f-14f516eed49c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:19:28 compute-0 nova_compute[260935]: 2025-10-11 09:19:28.485 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:19:28 compute-0 nova_compute[260935]: 2025-10-11 09:19:28.485 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Ensure instance console log exists: /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:19:28 compute-0 nova_compute[260935]: 2025-10-11 09:19:28.485 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:28 compute-0 nova_compute[260935]: 2025-10-11 09:19:28.486 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:28 compute-0 nova_compute[260935]: 2025-10-11 09:19:28.486 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2329: 321 pgs: 321 active+clean; 486 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Oct 11 09:19:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:19:29 compute-0 ceph-mon[74313]: pgmap v2329: 321 pgs: 321 active+clean; 486 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Oct 11 09:19:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:29.425 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:19:29 compute-0 nova_compute[260935]: 2025-10-11 09:19:29.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:29.427 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:19:29 compute-0 nova_compute[260935]: 2025-10-11 09:19:29.533 2 DEBUG nova.network.neutron [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Successfully created port: 50e8a951-4439-4d05-b0da-9b126fae5c30 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:19:29 compute-0 nova_compute[260935]: 2025-10-11 09:19:29.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:19:29 compute-0 nova_compute[260935]: 2025-10-11 09:19:29.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:19:29 compute-0 nova_compute[260935]: 2025-10-11 09:19:29.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.148 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.148 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.169 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.271 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.272 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.280 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.281 2 INFO nova.compute.claims [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.292 2 DEBUG nova.network.neutron [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Successfully updated port: 50e8a951-4439-4d05-b0da-9b126fae5c30 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.319 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.319 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.320 2 DEBUG nova.network.neutron [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.387 2 DEBUG nova.scheduler.client.report [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.446 2 DEBUG nova.compute.manager [req-c846a5ee-f86b-41b2-b6f7-3b8996b8489d req-eebfb772-18d8-41ba-86d1-995a3ad1a3e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Received event network-changed-50e8a951-4439-4d05-b0da-9b126fae5c30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.447 2 DEBUG nova.compute.manager [req-c846a5ee-f86b-41b2-b6f7-3b8996b8489d req-eebfb772-18d8-41ba-86d1-995a3ad1a3e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Refreshing instance network info cache due to event network-changed-50e8a951-4439-4d05-b0da-9b126fae5c30. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.447 2 DEBUG oslo_concurrency.lockutils [req-c846a5ee-f86b-41b2-b6f7-3b8996b8489d req-eebfb772-18d8-41ba-86d1-995a3ad1a3e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.449 2 DEBUG nova.scheduler.client.report [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.450 2 DEBUG nova.compute.provider_tree [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.477 2 DEBUG nova.scheduler.client.report [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.497 2 DEBUG nova.scheduler.client.report [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.546 2 DEBUG nova.network.neutron [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.665 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2330: 321 pgs: 321 active+clean; 486 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 24 KiB/s wr, 2 op/s
Oct 11 09:19:30 compute-0 nova_compute[260935]: 2025-10-11 09:19:30.723 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:19:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:19:31 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1516431143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.166 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.171 2 DEBUG nova.compute.provider_tree [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.186 2 DEBUG nova.scheduler.client.report [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.204 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.205 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.245 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.245 2 DEBUG nova.network.neutron [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.266 2 INFO nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.295 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.393 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.395 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.395 2 INFO nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Creating image(s)
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.424 2 DEBUG nova.storage.rbd_utils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image a798e2c3-8294-482d-a073-883121427765_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.451 2 DEBUG nova.storage.rbd_utils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image a798e2c3-8294-482d-a073-883121427765_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.480 2 DEBUG nova.storage.rbd_utils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image a798e2c3-8294-482d-a073-883121427765_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.484 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.582 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.586 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.586 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.586 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.612 2 DEBUG nova.storage.rbd_utils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image a798e2c3-8294-482d-a073-883121427765_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.616 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a798e2c3-8294-482d-a073-883121427765_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:31 compute-0 ceph-mon[74313]: pgmap v2330: 321 pgs: 321 active+clean; 486 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 24 KiB/s wr, 2 op/s
Oct 11 09:19:31 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1516431143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:19:31 compute-0 nova_compute[260935]: 2025-10-11 09:19:31.957 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a798e2c3-8294-482d-a073-883121427765_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:32 compute-0 nova_compute[260935]: 2025-10-11 09:19:32.040 2 DEBUG nova.storage.rbd_utils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image a798e2c3-8294-482d-a073-883121427765_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:19:32 compute-0 nova_compute[260935]: 2025-10-11 09:19:32.156 2 DEBUG nova.policy [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:19:32 compute-0 nova_compute[260935]: 2025-10-11 09:19:32.167 2 DEBUG nova.objects.instance [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid a798e2c3-8294-482d-a073-883121427765 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:19:32 compute-0 nova_compute[260935]: 2025-10-11 09:19:32.204 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:19:32 compute-0 nova_compute[260935]: 2025-10-11 09:19:32.205 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Ensure instance console log exists: /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:19:32 compute-0 nova_compute[260935]: 2025-10-11 09:19:32.205 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:32 compute-0 nova_compute[260935]: 2025-10-11 09:19:32.206 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:32 compute-0 nova_compute[260935]: 2025-10-11 09:19:32.207 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:32 compute-0 nova_compute[260935]: 2025-10-11 09:19:32.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:19:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2331: 321 pgs: 321 active+clean; 554 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.8 MiB/s wr, 42 op/s
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.127 2 DEBUG nova.network.neutron [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Updating instance_info_cache with network_info: [{"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.165 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.166 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Instance network_info: |[{"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.166 2 DEBUG oslo_concurrency.lockutils [req-c846a5ee-f86b-41b2-b6f7-3b8996b8489d req-eebfb772-18d8-41ba-86d1-995a3ad1a3e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.167 2 DEBUG nova.network.neutron [req-c846a5ee-f86b-41b2-b6f7-3b8996b8489d req-eebfb772-18d8-41ba-86d1-995a3ad1a3e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Refreshing network info cache for port 50e8a951-4439-4d05-b0da-9b126fae5c30 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.170 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Start _get_guest_xml network_info=[{"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.176 2 WARNING nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.181 2 DEBUG nova.virt.libvirt.host [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.182 2 DEBUG nova.virt.libvirt.host [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.190 2 DEBUG nova.virt.libvirt.host [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.191 2 DEBUG nova.virt.libvirt.host [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.191 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.192 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.192 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.193 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.193 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.193 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.194 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.194 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.194 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.195 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.195 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.195 2 DEBUG nova.virt.hardware [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.199 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:19:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:19:33 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1876995508' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.747 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:33 compute-0 ceph-mon[74313]: pgmap v2331: 321 pgs: 321 active+clean; 554 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.8 MiB/s wr, 42 op/s
Oct 11 09:19:33 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1876995508' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.792 2 DEBUG nova.storage.rbd_utils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.800 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:33 compute-0 nova_compute[260935]: 2025-10-11 09:19:33.854 2 DEBUG nova.network.neutron [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Successfully created port: d5347066-e322-4664-8798-146b9745fa17 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:19:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:19:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:19:34 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4216683930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.284 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.287 2 DEBUG nova.virt.libvirt.vif [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:19:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-487160952',display_name='tempest-TestNetworkBasicOps-server-487160952',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-487160952',id=115,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIt4DfbM2d+jaXAhSihykdBZ+tOpog8UlIDD8J/El5+5N6K+dCWK3MxK+m6m5Y83GMP6LzeAgXIDOnVujmKdajdhJSoGBcewPota2xCPS2Aiqozz2Osh9vQuMUfwGcZ9Cw==',key_name='tempest-TestNetworkBasicOps-1573176618',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-wkj2eggj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:19:27Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=6c9132ae-fb4f-4a77-8e3f-14f516eed49c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.287 2 DEBUG nova.network.os_vif_util [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.289 2 DEBUG nova.network.os_vif_util [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:8b:f3,bridge_name='br-int',has_traffic_filtering=True,id=50e8a951-4439-4d05-b0da-9b126fae5c30,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8a951-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.291 2 DEBUG nova.objects.instance [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c9132ae-fb4f-4a77-8e3f-14f516eed49c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.316 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:19:34 compute-0 nova_compute[260935]:   <uuid>6c9132ae-fb4f-4a77-8e3f-14f516eed49c</uuid>
Oct 11 09:19:34 compute-0 nova_compute[260935]:   <name>instance-00000073</name>
Oct 11 09:19:34 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:19:34 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:19:34 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkBasicOps-server-487160952</nova:name>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:19:33</nova:creationTime>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:19:34 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:19:34 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:19:34 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:19:34 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:19:34 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:19:34 compute-0 nova_compute[260935]:         <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:19:34 compute-0 nova_compute[260935]:         <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:19:34 compute-0 nova_compute[260935]:         <nova:port uuid="50e8a951-4439-4d05-b0da-9b126fae5c30">
Oct 11 09:19:34 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:19:34 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:19:34 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <system>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <entry name="serial">6c9132ae-fb4f-4a77-8e3f-14f516eed49c</entry>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <entry name="uuid">6c9132ae-fb4f-4a77-8e3f-14f516eed49c</entry>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     </system>
Oct 11 09:19:34 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:19:34 compute-0 nova_compute[260935]:   <os>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:   </os>
Oct 11 09:19:34 compute-0 nova_compute[260935]:   <features>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:   </features>
Oct 11 09:19:34 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:19:34 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:19:34 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk">
Oct 11 09:19:34 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       </source>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:19:34 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk.config">
Oct 11 09:19:34 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       </source>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:19:34 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:23:8b:f3"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <target dev="tap50e8a951-44"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c/console.log" append="off"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <video>
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     </video>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:19:34 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:19:34 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:19:34 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:19:34 compute-0 nova_compute[260935]: </domain>
Oct 11 09:19:34 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.317 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Preparing to wait for external event network-vif-plugged-50e8a951-4439-4d05-b0da-9b126fae5c30 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.317 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.318 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.318 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.319 2 DEBUG nova.virt.libvirt.vif [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:19:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-487160952',display_name='tempest-TestNetworkBasicOps-server-487160952',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-487160952',id=115,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIt4DfbM2d+jaXAhSihykdBZ+tOpog8UlIDD8J/El5+5N6K+dCWK3MxK+m6m5Y83GMP6LzeAgXIDOnVujmKdajdhJSoGBcewPota2xCPS2Aiqozz2Osh9vQuMUfwGcZ9Cw==',key_name='tempest-TestNetworkBasicOps-1573176618',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-wkj2eggj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:19:27Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=6c9132ae-fb4f-4a77-8e3f-14f516eed49c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.320 2 DEBUG nova.network.os_vif_util [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.321 2 DEBUG nova.network.os_vif_util [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:8b:f3,bridge_name='br-int',has_traffic_filtering=True,id=50e8a951-4439-4d05-b0da-9b126fae5c30,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8a951-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.322 2 DEBUG os_vif [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:8b:f3,bridge_name='br-int',has_traffic_filtering=True,id=50e8a951-4439-4d05-b0da-9b126fae5c30,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8a951-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.323 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.324 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.328 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50e8a951-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.329 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap50e8a951-44, col_values=(('external_ids', {'iface-id': '50e8a951-4439-4d05-b0da-9b126fae5c30', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:8b:f3', 'vm-uuid': '6c9132ae-fb4f-4a77-8e3f-14f516eed49c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:34 compute-0 NetworkManager[44960]: <info>  [1760174374.3325] manager: (tap50e8a951-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/470)
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.340 2 INFO os_vif [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:8b:f3,bridge_name='br-int',has_traffic_filtering=True,id=50e8a951-4439-4d05-b0da-9b126fae5c30,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8a951-44')
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.404 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.404 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.404 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:23:8b:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.405 2 INFO nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Using config drive
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.434 2 DEBUG nova.storage.rbd_utils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:19:34 compute-0 podman[385244]: 2025-10-11 09:19:34.484799798 +0000 UTC m=+0.088028615 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.554 2 DEBUG nova.network.neutron [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Successfully created port: d8f510f6-a87e-4c76-be24-a143f69f564d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.698 2 DEBUG nova.network.neutron [req-c846a5ee-f86b-41b2-b6f7-3b8996b8489d req-eebfb772-18d8-41ba-86d1-995a3ad1a3e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Updated VIF entry in instance network info cache for port 50e8a951-4439-4d05-b0da-9b126fae5c30. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.698 2 DEBUG nova.network.neutron [req-c846a5ee-f86b-41b2-b6f7-3b8996b8489d req-eebfb772-18d8-41ba-86d1-995a3ad1a3e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Updating instance_info_cache with network_info: [{"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:19:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2332: 321 pgs: 321 active+clean; 579 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.6 MiB/s wr, 55 op/s
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.714 2 DEBUG oslo_concurrency.lockutils [req-c846a5ee-f86b-41b2-b6f7-3b8996b8489d req-eebfb772-18d8-41ba-86d1-995a3ad1a3e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.721 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.747 2 INFO nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Creating config drive at /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c/disk.config
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.754 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzwga0xxq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:34 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4216683930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.918 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzwga0xxq" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.963 2 DEBUG nova.storage.rbd_utils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:19:34 compute-0 nova_compute[260935]: 2025-10-11 09:19:34.968 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c/disk.config 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:35 compute-0 nova_compute[260935]: 2025-10-11 09:19:35.176 2 DEBUG oslo_concurrency.processutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c/disk.config 6c9132ae-fb4f-4a77-8e3f-14f516eed49c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:35 compute-0 nova_compute[260935]: 2025-10-11 09:19:35.177 2 INFO nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Deleting local config drive /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c/disk.config because it was imported into RBD.
Oct 11 09:19:35 compute-0 kernel: tap50e8a951-44: entered promiscuous mode
Oct 11 09:19:35 compute-0 nova_compute[260935]: 2025-10-11 09:19:35.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:35 compute-0 NetworkManager[44960]: <info>  [1760174375.3002] manager: (tap50e8a951-44): new Tun device (/org/freedesktop/NetworkManager/Devices/471)
Oct 11 09:19:35 compute-0 ovn_controller[152945]: 2025-10-11T09:19:35Z|01149|binding|INFO|Claiming lport 50e8a951-4439-4d05-b0da-9b126fae5c30 for this chassis.
Oct 11 09:19:35 compute-0 ovn_controller[152945]: 2025-10-11T09:19:35Z|01150|binding|INFO|50e8a951-4439-4d05-b0da-9b126fae5c30: Claiming fa:16:3e:23:8b:f3 10.100.0.10
Oct 11 09:19:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.312 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:8b:f3 10.100.0.10'], port_security=['fa:16:3e:23:8b:f3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6c9132ae-fb4f-4a77-8e3f-14f516eed49c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ef8b4bf7-fe14-4629-a7f4-eb04c76ca0d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e842406f-d65b-48f7-9a65-50c6608add8c, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=50e8a951-4439-4d05-b0da-9b126fae5c30) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:19:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.316 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 50e8a951-4439-4d05-b0da-9b126fae5c30 in datapath fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 bound to our chassis
Oct 11 09:19:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.321 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448
Oct 11 09:19:35 compute-0 ovn_controller[152945]: 2025-10-11T09:19:35Z|01151|binding|INFO|Setting lport 50e8a951-4439-4d05-b0da-9b126fae5c30 ovn-installed in OVS
Oct 11 09:19:35 compute-0 ovn_controller[152945]: 2025-10-11T09:19:35Z|01152|binding|INFO|Setting lport 50e8a951-4439-4d05-b0da-9b126fae5c30 up in Southbound
Oct 11 09:19:35 compute-0 systemd-machined[215705]: New machine qemu-138-instance-00000073.
Oct 11 09:19:35 compute-0 nova_compute[260935]: 2025-10-11 09:19:35.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:35 compute-0 nova_compute[260935]: 2025-10-11 09:19:35.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:35 compute-0 systemd[1]: Started Virtual Machine qemu-138-instance-00000073.
Oct 11 09:19:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.354 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5173e8c9-99c8-4f06-ac92-39197daf0f7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:35 compute-0 systemd-udevd[385335]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:19:35 compute-0 NetworkManager[44960]: <info>  [1760174375.3770] device (tap50e8a951-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:19:35 compute-0 NetworkManager[44960]: <info>  [1760174375.3782] device (tap50e8a951-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:19:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.403 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[43bf7fa7-10b3-4066-924d-f1dc12354677]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.412 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f3bca6de-96fb-44e9-80f4-732abf734ed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.460 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[536aace9-bb97-4ed4-bf11-8385cdfa9419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.480 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[efe71f5d-6252-4187-8a44-63cb494955ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbf3e0c3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:5d:03'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 322], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618285, 'reachable_time': 33588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385347, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.509 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e94c45-b4de-4866-889f-baf3b9940503]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbf3e0c3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618299, 'tstamp': 618299}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385349, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbf3e0c3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618303, 'tstamp': 618303}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385349, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.511 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbf3e0c3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.515 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbf3e0c3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.515 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:19:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.516 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbf3e0c3-40, col_values=(('external_ids', {'iface-id': 'fb1da81b-c31d-4f26-a595-cb8eaad2e189'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:35 compute-0 nova_compute[260935]: 2025-10-11 09:19:35.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:35.516 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:19:35 compute-0 nova_compute[260935]: 2025-10-11 09:19:35.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:19:35 compute-0 nova_compute[260935]: 2025-10-11 09:19:35.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:19:35 compute-0 nova_compute[260935]: 2025-10-11 09:19:35.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:35 compute-0 nova_compute[260935]: 2025-10-11 09:19:35.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:35 compute-0 nova_compute[260935]: 2025-10-11 09:19:35.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:35 compute-0 nova_compute[260935]: 2025-10-11 09:19:35.738 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:19:35 compute-0 nova_compute[260935]: 2025-10-11 09:19:35.738 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:35 compute-0 ceph-mon[74313]: pgmap v2332: 321 pgs: 321 active+clean; 579 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.6 MiB/s wr, 55 op/s
Oct 11 09:19:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:19:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3815423025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.213 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.341 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.341 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.342 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.343 2 DEBUG nova.network.neutron [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Successfully updated port: d5347066-e322-4664-8798-146b9745fa17 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.347 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.347 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.350 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.350 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.354 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.354 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.361 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.361 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.365 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.365 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:19:36 compute-0 sudo[385415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:19:36 compute-0 sudo[385415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.462 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174376.4618876, 6c9132ae-fb4f-4a77-8e3f-14f516eed49c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.463 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] VM Started (Lifecycle Event)
Oct 11 09:19:36 compute-0 sudo[385415]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.485 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.491 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174376.4620554, 6c9132ae-fb4f-4a77-8e3f-14f516eed49c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.492 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] VM Paused (Lifecycle Event)
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.517 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.521 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:19:36 compute-0 sudo[385440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:19:36 compute-0 sudo[385440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:36 compute-0 sudo[385440]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.553 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:36 compute-0 sudo[385465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:19:36 compute-0 sudo[385465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:36 compute-0 sudo[385465]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.681 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.682 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2506MB free_disk=59.69802474975586GB free_vcpus=2 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.682 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.682 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2333: 321 pgs: 321 active+clean; 579 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.6 MiB/s wr, 54 op/s
Oct 11 09:19:36 compute-0 sudo[385490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:19:36 compute-0 sudo[385490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.789 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.789 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.790 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.790 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 69860e17-caac-461a-a4a5-34ca72c0ee09 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.790 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 564a3027-0f98-40fd-a495-1c13a103ea39 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.790 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 6c9132ae-fb4f-4a77-8e3f-14f516eed49c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.790 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance a798e2c3-8294-482d-a073-883121427765 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.791 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.791 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1408MB phys_disk=59GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:19:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3815423025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:19:36 compute-0 nova_compute[260935]: 2025-10-11 09:19:36.951 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.172 2 DEBUG nova.compute.manager [req-05064526-5db6-43ab-b70e-7da3c4ed1673 req-1fa9ff84-957b-4565-9d7d-56fbeeecd371 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Received event network-vif-plugged-50e8a951-4439-4d05-b0da-9b126fae5c30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.173 2 DEBUG oslo_concurrency.lockutils [req-05064526-5db6-43ab-b70e-7da3c4ed1673 req-1fa9ff84-957b-4565-9d7d-56fbeeecd371 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.173 2 DEBUG oslo_concurrency.lockutils [req-05064526-5db6-43ab-b70e-7da3c4ed1673 req-1fa9ff84-957b-4565-9d7d-56fbeeecd371 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.174 2 DEBUG oslo_concurrency.lockutils [req-05064526-5db6-43ab-b70e-7da3c4ed1673 req-1fa9ff84-957b-4565-9d7d-56fbeeecd371 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.174 2 DEBUG nova.compute.manager [req-05064526-5db6-43ab-b70e-7da3c4ed1673 req-1fa9ff84-957b-4565-9d7d-56fbeeecd371 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Processing event network-vif-plugged-50e8a951-4439-4d05-b0da-9b126fae5c30 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.175 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.187 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174377.1866078, 6c9132ae-fb4f-4a77-8e3f-14f516eed49c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.188 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] VM Resumed (Lifecycle Event)
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.192 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.205 2 INFO nova.virt.libvirt.driver [-] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Instance spawned successfully.
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.205 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.210 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.214 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.229 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.229 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.230 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.230 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.231 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.232 2 DEBUG nova.virt.libvirt.driver [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.238 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.282 2 INFO nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Took 9.59 seconds to spawn the instance on the hypervisor.
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.282 2 DEBUG nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:19:37 compute-0 sudo[385490]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.375 2 INFO nova.compute.manager [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Took 10.83 seconds to build instance.
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.399 2 DEBUG oslo_concurrency.lockutils [None req-2b59932c-1944-4c88-af55-c0132848285a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:19:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/468721934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:19:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:37.429 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.447 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.458 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:19:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:19:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:19:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:19:37 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:19:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:19:37 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.473 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:19:37 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 67ed6427-e597-4e0d-b8ee-803929919f33 does not exist
Oct 11 09:19:37 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a2dcf191-9512-4018-8c63-30df62794c17 does not exist
Oct 11 09:19:37 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 2146a691-7c05-417b-86cc-1742e8991aa3 does not exist
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.477 2 DEBUG nova.network.neutron [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Successfully updated port: d8f510f6-a87e-4c76-be24-a143f69f564d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:19:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:19:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:19:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:19:37 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:19:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:19:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.495 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.495 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.495 2 DEBUG nova.network.neutron [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.502 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.502 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:37 compute-0 sudo[385568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:19:37 compute-0 sudo[385568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:37 compute-0 sudo[385568]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:37 compute-0 sudo[385593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:19:37 compute-0 sudo[385593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:37 compute-0 sudo[385593]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:37 compute-0 sudo[385618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:19:37 compute-0 sudo[385618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:37 compute-0 sudo[385618]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:37 compute-0 nova_compute[260935]: 2025-10-11 09:19:37.766 2 DEBUG nova.network.neutron [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:19:37 compute-0 sudo[385643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:19:37 compute-0 sudo[385643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:37 compute-0 ceph-mon[74313]: pgmap v2333: 321 pgs: 321 active+clean; 579 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.6 MiB/s wr, 54 op/s
Oct 11 09:19:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/468721934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:19:37 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:19:37 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:19:37 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:19:37 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:19:37 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:19:37 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:19:38 compute-0 podman[385709]: 2025-10-11 09:19:38.285514397 +0000 UTC m=+0.052487912 container create b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_engelbart, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 09:19:38 compute-0 systemd[1]: Started libpod-conmon-b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d.scope.
Oct 11 09:19:38 compute-0 podman[385709]: 2025-10-11 09:19:38.25941919 +0000 UTC m=+0.026392745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:19:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:19:38 compute-0 podman[385709]: 2025-10-11 09:19:38.405770661 +0000 UTC m=+0.172744176 container init b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:19:38 compute-0 podman[385709]: 2025-10-11 09:19:38.413989503 +0000 UTC m=+0.180963038 container start b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_engelbart, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 09:19:38 compute-0 podman[385709]: 2025-10-11 09:19:38.417694267 +0000 UTC m=+0.184667802 container attach b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 09:19:38 compute-0 eloquent_engelbart[385725]: 167 167
Oct 11 09:19:38 compute-0 systemd[1]: libpod-b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d.scope: Deactivated successfully.
Oct 11 09:19:38 compute-0 podman[385709]: 2025-10-11 09:19:38.427417641 +0000 UTC m=+0.194391166 container died b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_engelbart, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:19:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b646267ace4f903fc6b558ed8ef584662156ac1b53d9838f230f291dff376f2-merged.mount: Deactivated successfully.
Oct 11 09:19:38 compute-0 podman[385709]: 2025-10-11 09:19:38.472418201 +0000 UTC m=+0.239391716 container remove b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 09:19:38 compute-0 systemd[1]: libpod-conmon-b5c278b731de731a1c5b9069356faee6852facdcde096f44d3614386906b912d.scope: Deactivated successfully.
Oct 11 09:19:38 compute-0 podman[385748]: 2025-10-11 09:19:38.697223306 +0000 UTC m=+0.054737416 container create 4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:19:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2334: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 65 op/s
Oct 11 09:19:38 compute-0 systemd[1]: Started libpod-conmon-4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175.scope.
Oct 11 09:19:38 compute-0 podman[385748]: 2025-10-11 09:19:38.675165523 +0000 UTC m=+0.032679623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:19:38 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6bfed197ab9a644d5d673e5c3a5c602fee2d96387087ebdac1be488284176e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6bfed197ab9a644d5d673e5c3a5c602fee2d96387087ebdac1be488284176e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6bfed197ab9a644d5d673e5c3a5c602fee2d96387087ebdac1be488284176e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6bfed197ab9a644d5d673e5c3a5c602fee2d96387087ebdac1be488284176e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:19:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6bfed197ab9a644d5d673e5c3a5c602fee2d96387087ebdac1be488284176e5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:19:38 compute-0 podman[385748]: 2025-10-11 09:19:38.835187749 +0000 UTC m=+0.192701909 container init 4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:19:38 compute-0 podman[385748]: 2025-10-11 09:19:38.844255025 +0000 UTC m=+0.201769125 container start 4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:19:38 compute-0 podman[385748]: 2025-10-11 09:19:38.847798105 +0000 UTC m=+0.205312275 container attach 4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 09:19:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:19:39 compute-0 nova_compute[260935]: 2025-10-11 09:19:39.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:39 compute-0 nova_compute[260935]: 2025-10-11 09:19:39.478 2 DEBUG nova.compute.manager [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-changed-d5347066-e322-4664-8798-146b9745fa17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:39 compute-0 nova_compute[260935]: 2025-10-11 09:19:39.478 2 DEBUG nova.compute.manager [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Refreshing instance network info cache due to event network-changed-d5347066-e322-4664-8798-146b9745fa17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:19:39 compute-0 nova_compute[260935]: 2025-10-11 09:19:39.479 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:19:39 compute-0 ceph-mon[74313]: pgmap v2334: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 65 op/s
Oct 11 09:19:40 compute-0 festive_mayer[385764]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:19:40 compute-0 festive_mayer[385764]: --> relative data size: 1.0
Oct 11 09:19:40 compute-0 festive_mayer[385764]: --> All data devices are unavailable
Oct 11 09:19:40 compute-0 systemd[1]: libpod-4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175.scope: Deactivated successfully.
Oct 11 09:19:40 compute-0 podman[385748]: 2025-10-11 09:19:40.137145811 +0000 UTC m=+1.494659931 container died 4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:19:40 compute-0 systemd[1]: libpod-4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175.scope: Consumed 1.214s CPU time.
Oct 11 09:19:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6bfed197ab9a644d5d673e5c3a5c602fee2d96387087ebdac1be488284176e5-merged.mount: Deactivated successfully.
Oct 11 09:19:40 compute-0 podman[385748]: 2025-10-11 09:19:40.218011773 +0000 UTC m=+1.575525883 container remove 4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 11 09:19:40 compute-0 systemd[1]: libpod-conmon-4c9ede4c09235424a0dfb2aabb46ce1194ef9238f41fe686c4c2684c6c9ef175.scope: Deactivated successfully.
Oct 11 09:19:40 compute-0 sudo[385643]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.305 2 DEBUG nova.network.neutron [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updating instance_info_cache with network_info: [{"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:19:40 compute-0 podman[385794]: 2025-10-11 09:19:40.307869078 +0000 UTC m=+0.123808755 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:19:40 compute-0 sudo[385822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:19:40 compute-0 sudo[385822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:40 compute-0 sudo[385822]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:40 compute-0 sudo[385850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:19:40 compute-0 sudo[385850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:40 compute-0 sudo[385850]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.426 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.427 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Instance network_info: |[{"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.428 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.428 2 DEBUG nova.network.neutron [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Refreshing network info cache for port d5347066-e322-4664-8798-146b9745fa17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.433 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Start _get_guest_xml network_info=[{"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.437 2 WARNING nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.443 2 DEBUG nova.virt.libvirt.host [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.444 2 DEBUG nova.virt.libvirt.host [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.447 2 DEBUG nova.virt.libvirt.host [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.448 2 DEBUG nova.virt.libvirt.host [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.448 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.449 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.449 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.450 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.450 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.450 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.450 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.451 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.451 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.452 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.452 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.452 2 DEBUG nova.virt.hardware [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.456 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:40 compute-0 sudo[385876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:19:40 compute-0 sudo[385876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:40 compute-0 sudo[385876]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:40 compute-0 sudo[385902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:19:40 compute-0 sudo[385902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2335: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 11 09:19:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:19:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4290417126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.950 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:40 compute-0 podman[385984]: 2025-10-11 09:19:40.979957935 +0000 UTC m=+0.054831158 container create 6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_curran, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:19:40 compute-0 nova_compute[260935]: 2025-10-11 09:19:40.983 2 DEBUG nova.storage.rbd_utils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image a798e2c3-8294-482d-a073-883121427765_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.004 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:41 compute-0 systemd[1]: Started libpod-conmon-6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a.scope.
Oct 11 09:19:41 compute-0 podman[385984]: 2025-10-11 09:19:40.957189333 +0000 UTC m=+0.032062546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:19:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:19:41 compute-0 podman[385984]: 2025-10-11 09:19:41.096109193 +0000 UTC m=+0.170982436 container init 6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 09:19:41 compute-0 podman[385984]: 2025-10-11 09:19:41.107862935 +0000 UTC m=+0.182736158 container start 6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:19:41 compute-0 podman[385984]: 2025-10-11 09:19:41.11192625 +0000 UTC m=+0.186799493 container attach 6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_curran, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 09:19:41 compute-0 nervous_curran[386020]: 167 167
Oct 11 09:19:41 compute-0 systemd[1]: libpod-6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a.scope: Deactivated successfully.
Oct 11 09:19:41 compute-0 podman[385984]: 2025-10-11 09:19:41.122649302 +0000 UTC m=+0.197522535 container died 6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_curran, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:19:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff11e8af25e835bb1c34924bd5eacdbc910198432b0d43406b6cb76116b6b401-merged.mount: Deactivated successfully.
Oct 11 09:19:41 compute-0 podman[385984]: 2025-10-11 09:19:41.186371761 +0000 UTC m=+0.261244984 container remove 6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_curran, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 09:19:41 compute-0 systemd[1]: libpod-conmon-6f934fde4061d71035e716172b8cf96dcbed76bef49592bc2c2034dbfb680d3a.scope: Deactivated successfully.
Oct 11 09:19:41 compute-0 podman[386063]: 2025-10-11 09:19:41.433904066 +0000 UTC m=+0.064233333 container create ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 09:19:41 compute-0 systemd[1]: Started libpod-conmon-ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e.scope.
Oct 11 09:19:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:19:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3460648708' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.502 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:41 compute-0 podman[386063]: 2025-10-11 09:19:41.410965959 +0000 UTC m=+0.041295226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.504 2 DEBUG nova.virt.libvirt.vif [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1575481376',display_name='tempest-TestGettingAddress-server-1575481376',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1575481376',id=116,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-cbv7fcg7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:19:31Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=a798e2c3-8294-482d-a073-883121427765,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.505 2 DEBUG nova.network.os_vif_util [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.506 2 DEBUG nova.network.os_vif_util [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:9b:3f,bridge_name='br-int',has_traffic_filtering=True,id=d5347066-e322-4664-8798-146b9745fa17,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5347066-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.507 2 DEBUG nova.virt.libvirt.vif [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1575481376',display_name='tempest-TestGettingAddress-server-1575481376',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1575481376',id=116,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-cbv7fcg7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:19:31Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=a798e2c3-8294-482d-a073-883121427765,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.508 2 DEBUG nova.network.os_vif_util [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.509 2 DEBUG nova.network.os_vif_util [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:49:59,bridge_name='br-int',has_traffic_filtering=True,id=d8f510f6-a87e-4c76-be24-a143f69f564d,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f510f6-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:19:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.513 2 DEBUG nova.objects.instance [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid a798e2c3-8294-482d-a073-883121427765 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e2755b6c3574361d20c07bde3295a76ce0fba2242ef9b614af006b195f89d42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e2755b6c3574361d20c07bde3295a76ce0fba2242ef9b614af006b195f89d42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e2755b6c3574361d20c07bde3295a76ce0fba2242ef9b614af006b195f89d42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e2755b6c3574361d20c07bde3295a76ce0fba2242ef9b614af006b195f89d42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.530 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:19:41 compute-0 nova_compute[260935]:   <uuid>a798e2c3-8294-482d-a073-883121427765</uuid>
Oct 11 09:19:41 compute-0 nova_compute[260935]:   <name>instance-00000074</name>
Oct 11 09:19:41 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:19:41 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:19:41 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <nova:name>tempest-TestGettingAddress-server-1575481376</nova:name>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:19:40</nova:creationTime>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:19:41 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:19:41 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:19:41 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:19:41 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:19:41 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:19:41 compute-0 nova_compute[260935]:         <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 09:19:41 compute-0 nova_compute[260935]:         <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:19:41 compute-0 nova_compute[260935]:         <nova:port uuid="d5347066-e322-4664-8798-146b9745fa17">
Oct 11 09:19:41 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:19:41 compute-0 nova_compute[260935]:         <nova:port uuid="d8f510f6-a87e-4c76-be24-a143f69f564d">
Oct 11 09:19:41 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fef2:4959" ipVersion="6"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:19:41 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:19:41 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <system>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <entry name="serial">a798e2c3-8294-482d-a073-883121427765</entry>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <entry name="uuid">a798e2c3-8294-482d-a073-883121427765</entry>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     </system>
Oct 11 09:19:41 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:19:41 compute-0 nova_compute[260935]:   <os>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:   </os>
Oct 11 09:19:41 compute-0 nova_compute[260935]:   <features>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:   </features>
Oct 11 09:19:41 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:19:41 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:19:41 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/a798e2c3-8294-482d-a073-883121427765_disk">
Oct 11 09:19:41 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       </source>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:19:41 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/a798e2c3-8294-482d-a073-883121427765_disk.config">
Oct 11 09:19:41 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       </source>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:19:41 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:eb:9b:3f"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <target dev="tapd5347066-e3"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:f2:49:59"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <target dev="tapd8f510f6-a8"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765/console.log" append="off"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <video>
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     </video>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:19:41 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:19:41 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:19:41 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:19:41 compute-0 nova_compute[260935]: </domain>
Oct 11 09:19:41 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.531 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Preparing to wait for external event network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.531 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.531 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.532 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.532 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Preparing to wait for external event network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.532 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.532 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.533 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.533 2 DEBUG nova.virt.libvirt.vif [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1575481376',display_name='tempest-TestGettingAddress-server-1575481376',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1575481376',id=116,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-cbv7fcg7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:19:31Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=a798e2c3-8294-482d-a073-883121427765,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.534 2 DEBUG nova.network.os_vif_util [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.534 2 DEBUG nova.network.os_vif_util [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:9b:3f,bridge_name='br-int',has_traffic_filtering=True,id=d5347066-e322-4664-8798-146b9745fa17,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5347066-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:19:41 compute-0 podman[386063]: 2025-10-11 09:19:41.535587846 +0000 UTC m=+0.165917173 container init ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.535 2 DEBUG os_vif [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:9b:3f,bridge_name='br-int',has_traffic_filtering=True,id=d5347066-e322-4664-8798-146b9745fa17,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5347066-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.537 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.538 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.543 2 DEBUG nova.compute.manager [req-f4424040-ac78-4b60-9f68-8f151d38e775 req-45a121ba-a26b-4e1b-9b65-5318da266ab4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Received event network-changed-50e8a951-4439-4d05-b0da-9b126fae5c30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.543 2 DEBUG nova.compute.manager [req-f4424040-ac78-4b60-9f68-8f151d38e775 req-45a121ba-a26b-4e1b-9b65-5318da266ab4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Refreshing instance network info cache due to event network-changed-50e8a951-4439-4d05-b0da-9b126fae5c30. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.543 2 DEBUG oslo_concurrency.lockutils [req-f4424040-ac78-4b60-9f68-8f151d38e775 req-45a121ba-a26b-4e1b-9b65-5318da266ab4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.543 2 DEBUG oslo_concurrency.lockutils [req-f4424040-ac78-4b60-9f68-8f151d38e775 req-45a121ba-a26b-4e1b-9b65-5318da266ab4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.544 2 DEBUG nova.network.neutron [req-f4424040-ac78-4b60-9f68-8f151d38e775 req-45a121ba-a26b-4e1b-9b65-5318da266ab4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Refreshing network info cache for port 50e8a951-4439-4d05-b0da-9b126fae5c30 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.546 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5347066-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.547 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5347066-e3, col_values=(('external_ids', {'iface-id': 'd5347066-e322-4664-8798-146b9745fa17', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:9b:3f', 'vm-uuid': 'a798e2c3-8294-482d-a073-883121427765'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:41 compute-0 NetworkManager[44960]: <info>  [1760174381.5510] manager: (tapd5347066-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:19:41 compute-0 podman[386063]: 2025-10-11 09:19:41.552613986 +0000 UTC m=+0.182943263 container start ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:19:41 compute-0 podman[386063]: 2025-10-11 09:19:41.556431774 +0000 UTC m=+0.186761111 container attach ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.562 2 INFO os_vif [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:9b:3f,bridge_name='br-int',has_traffic_filtering=True,id=d5347066-e322-4664-8798-146b9745fa17,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5347066-e3')
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.563 2 DEBUG nova.virt.libvirt.vif [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1575481376',display_name='tempest-TestGettingAddress-server-1575481376',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1575481376',id=116,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-cbv7fcg7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:19:31Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=a798e2c3-8294-482d-a073-883121427765,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.563 2 DEBUG nova.network.os_vif_util [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.564 2 DEBUG nova.network.os_vif_util [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:49:59,bridge_name='br-int',has_traffic_filtering=True,id=d8f510f6-a87e-4c76-be24-a143f69f564d,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f510f6-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.564 2 DEBUG os_vif [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:49:59,bridge_name='br-int',has_traffic_filtering=True,id=d8f510f6-a87e-4c76-be24-a143f69f564d,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f510f6-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.566 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.566 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.569 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8f510f6-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.569 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd8f510f6-a8, col_values=(('external_ids', {'iface-id': 'd8f510f6-a87e-4c76-be24-a143f69f564d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:49:59', 'vm-uuid': 'a798e2c3-8294-482d-a073-883121427765'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:41 compute-0 NetworkManager[44960]: <info>  [1760174381.5726] manager: (tapd8f510f6-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/473)
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.583 2 INFO os_vif [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:49:59,bridge_name='br-int',has_traffic_filtering=True,id=d8f510f6-a87e-4c76-be24-a143f69f564d,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f510f6-a8')
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.635 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.635 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.635 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:eb:9b:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.636 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:f2:49:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.636 2 INFO nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Using config drive
Oct 11 09:19:41 compute-0 nova_compute[260935]: 2025-10-11 09:19:41.662 2 DEBUG nova.storage.rbd_utils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image a798e2c3-8294-482d-a073-883121427765_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:19:41 compute-0 ceph-mon[74313]: pgmap v2335: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 11 09:19:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4290417126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:19:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3460648708' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.095 2 INFO nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Creating config drive at /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765/disk.config
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.107 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdr8pqgv2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.155 2 DEBUG nova.network.neutron [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updated VIF entry in instance network info cache for port d5347066-e322-4664-8798-146b9745fa17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.156 2 DEBUG nova.network.neutron [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updating instance_info_cache with network_info: [{"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.185 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.186 2 DEBUG nova.compute.manager [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Received event network-vif-plugged-50e8a951-4439-4d05-b0da-9b126fae5c30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.186 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.187 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.187 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.187 2 DEBUG nova.compute.manager [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] No waiting events found dispatching network-vif-plugged-50e8a951-4439-4d05-b0da-9b126fae5c30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.188 2 WARNING nova.compute.manager [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Received unexpected event network-vif-plugged-50e8a951-4439-4d05-b0da-9b126fae5c30 for instance with vm_state active and task_state None.
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.188 2 DEBUG nova.compute.manager [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-changed-d8f510f6-a87e-4c76-be24-a143f69f564d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.188 2 DEBUG nova.compute.manager [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Refreshing instance network info cache due to event network-changed-d8f510f6-a87e-4c76-be24-a143f69f564d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.188 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.189 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.189 2 DEBUG nova.network.neutron [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Refreshing network info cache for port d8f510f6-a87e-4c76-be24-a143f69f564d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.266 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdr8pqgv2" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.299 2 DEBUG nova.storage.rbd_utils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image a798e2c3-8294-482d-a073-883121427765_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.311 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765/disk.config a798e2c3-8294-482d-a073-883121427765_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:19:42 compute-0 bold_vaughan[386080]: {
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:     "0": [
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:         {
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "devices": [
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "/dev/loop3"
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             ],
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "lv_name": "ceph_lv0",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "lv_size": "21470642176",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "name": "ceph_lv0",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "tags": {
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.cluster_name": "ceph",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.crush_device_class": "",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.encrypted": "0",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.osd_id": "0",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.type": "block",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.vdo": "0"
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             },
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "type": "block",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "vg_name": "ceph_vg0"
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:         }
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:     ],
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:     "1": [
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:         {
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "devices": [
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "/dev/loop4"
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             ],
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "lv_name": "ceph_lv1",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "lv_size": "21470642176",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "name": "ceph_lv1",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "tags": {
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.cluster_name": "ceph",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.crush_device_class": "",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.encrypted": "0",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.osd_id": "1",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.type": "block",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.vdo": "0"
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             },
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "type": "block",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "vg_name": "ceph_vg1"
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:         }
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:     ],
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:     "2": [
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:         {
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "devices": [
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "/dev/loop5"
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             ],
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "lv_name": "ceph_lv2",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "lv_size": "21470642176",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "name": "ceph_lv2",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "tags": {
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.cluster_name": "ceph",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.crush_device_class": "",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.encrypted": "0",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.osd_id": "2",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.type": "block",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:                 "ceph.vdo": "0"
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             },
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "type": "block",
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:             "vg_name": "ceph_vg2"
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:         }
Oct 11 09:19:42 compute-0 bold_vaughan[386080]:     ]
Oct 11 09:19:42 compute-0 bold_vaughan[386080]: }
Oct 11 09:19:42 compute-0 systemd[1]: libpod-ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e.scope: Deactivated successfully.
Oct 11 09:19:42 compute-0 podman[386063]: 2025-10-11 09:19:42.382886117 +0000 UTC m=+1.013215424 container died ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 09:19:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e2755b6c3574361d20c07bde3295a76ce0fba2242ef9b614af006b195f89d42-merged.mount: Deactivated successfully.
Oct 11 09:19:42 compute-0 podman[386063]: 2025-10-11 09:19:42.452676307 +0000 UTC m=+1.083005544 container remove ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 09:19:42 compute-0 systemd[1]: libpod-conmon-ab23d35585b669f2b5af7353c76a6f28971258cdacbd6d8e5f83f391039e7b4e.scope: Deactivated successfully.
Oct 11 09:19:42 compute-0 sudo[385902]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.562 2 DEBUG oslo_concurrency.processutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765/disk.config a798e2c3-8294-482d-a073-883121427765_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:19:42 compute-0 sudo[386163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.564 2 INFO nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Deleting local config drive /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765/disk.config because it was imported into RBD.
Oct 11 09:19:42 compute-0 sudo[386163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:42 compute-0 sudo[386163]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:42 compute-0 kernel: tapd5347066-e3: entered promiscuous mode
Oct 11 09:19:42 compute-0 NetworkManager[44960]: <info>  [1760174382.6301] manager: (tapd5347066-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/474)
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:42 compute-0 ovn_controller[152945]: 2025-10-11T09:19:42Z|01153|binding|INFO|Claiming lport d5347066-e322-4664-8798-146b9745fa17 for this chassis.
Oct 11 09:19:42 compute-0 ovn_controller[152945]: 2025-10-11T09:19:42Z|01154|binding|INFO|d5347066-e322-4664-8798-146b9745fa17: Claiming fa:16:3e:eb:9b:3f 10.100.0.10
Oct 11 09:19:42 compute-0 sudo[386191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.646 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:9b:3f 10.100.0.10'], port_security=['fa:16:3e:eb:9b:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a798e2c3-8294-482d-a073-883121427765', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a975f5dd-d207-4e6b-9401-45f24b820c2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0785b003-8951-44fc-bd21-f1c5c3076102, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d5347066-e322-4664-8798-146b9745fa17) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.647 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d5347066-e322-4664-8798-146b9745fa17 in datapath 6346ea52-07fc-49ad-8f2d-fcfed9769241 bound to our chassis
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.650 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6346ea52-07fc-49ad-8f2d-fcfed9769241
Oct 11 09:19:42 compute-0 sudo[386191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:42 compute-0 NetworkManager[44960]: <info>  [1760174382.6549] manager: (tapd8f510f6-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/475)
Oct 11 09:19:42 compute-0 sudo[386191]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:42 compute-0 kernel: tapd8f510f6-a8: entered promiscuous mode
Oct 11 09:19:42 compute-0 ovn_controller[152945]: 2025-10-11T09:19:42Z|01155|binding|INFO|Setting lport d5347066-e322-4664-8798-146b9745fa17 ovn-installed in OVS
Oct 11 09:19:42 compute-0 ovn_controller[152945]: 2025-10-11T09:19:42Z|01156|binding|INFO|Setting lport d5347066-e322-4664-8798-146b9745fa17 up in Southbound
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:42 compute-0 ovn_controller[152945]: 2025-10-11T09:19:42Z|01157|if_status|INFO|Dropped 5 log messages in last 1015 seconds (most recently, 1015 seconds ago) due to excessive rate
Oct 11 09:19:42 compute-0 ovn_controller[152945]: 2025-10-11T09:19:42Z|01158|if_status|INFO|Not updating pb chassis for d8f510f6-a87e-4c76-be24-a143f69f564d now as sb is readonly
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.676 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad9d5d2-a082-4e81-ac61-a98edf9b0f56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:42 compute-0 systemd-udevd[386235]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:19:42 compute-0 systemd-udevd[386237]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:19:42 compute-0 ovn_controller[152945]: 2025-10-11T09:19:42Z|01159|binding|INFO|Claiming lport d8f510f6-a87e-4c76-be24-a143f69f564d for this chassis.
Oct 11 09:19:42 compute-0 ovn_controller[152945]: 2025-10-11T09:19:42Z|01160|binding|INFO|d8f510f6-a87e-4c76-be24-a143f69f564d: Claiming fa:16:3e:f2:49:59 2001:db8::f816:3eff:fef2:4959
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:42 compute-0 ovn_controller[152945]: 2025-10-11T09:19:42Z|01161|binding|INFO|Setting lport d8f510f6-a87e-4c76-be24-a143f69f564d ovn-installed in OVS
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:42 compute-0 ovn_controller[152945]: 2025-10-11T09:19:42Z|01162|binding|INFO|Setting lport d8f510f6-a87e-4c76-be24-a143f69f564d up in Southbound
Oct 11 09:19:42 compute-0 NetworkManager[44960]: <info>  [1760174382.7042] device (tapd8f510f6-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:19:42 compute-0 NetworkManager[44960]: <info>  [1760174382.7060] device (tapd8f510f6-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:19:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2336: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 111 op/s
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.708 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:49:59 2001:db8::f816:3eff:fef2:4959'], port_security=['fa:16:3e:f2:49:59 2001:db8::f816:3eff:fef2:4959'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef2:4959/64', 'neutron:device_id': 'a798e2c3-8294-482d-a073-883121427765', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a975f5dd-d207-4e6b-9401-45f24b820c2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7413cee9-eaf1-4090-8a8e-9bf9dac36a0a, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d8f510f6-a87e-4c76-be24-a143f69f564d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:19:42 compute-0 NetworkManager[44960]: <info>  [1760174382.7167] device (tapd5347066-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:19:42 compute-0 NetworkManager[44960]: <info>  [1760174382.7223] device (tapd5347066-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.722 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c0210ddd-c4f4-4022-abe7-f430a5bbf2b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.725 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a91dd1e5-725a-4c36-b84b-9874326f0b62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:42 compute-0 systemd-machined[215705]: New machine qemu-139-instance-00000074.
Oct 11 09:19:42 compute-0 systemd[1]: Started Virtual Machine qemu-139-instance-00000074.
Oct 11 09:19:42 compute-0 sudo[386230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:19:42 compute-0 sudo[386230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:42 compute-0 sudo[386230]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.757 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d5948a-93bb-4355-bacb-5a79c8a5ac87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.796 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[df14af2d-0a02-4681-a164-4a92ecef8f4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6346ea52-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:67:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618844, 'reachable_time': 24736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386283, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:42 compute-0 sudo[386266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:19:42 compute-0 sudo[386266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.815 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[906efa65-8a56-4c3d-8d0b-f7f7a99268f4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6346ea52-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618858, 'tstamp': 618858}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386295, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6346ea52-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618862, 'tstamp': 618862}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386295, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.817 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6346ea52-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.827 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6346ea52-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.827 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.828 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6346ea52-00, col_values=(('external_ids', {'iface-id': 'f4a0a09a-d174-44d3-b63d-753fefd94646'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.828 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.830 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d8f510f6-a87e-4c76-be24-a143f69f564d in datapath ce353a4c-7280-46bb-ac7d-157bb5dc08cb unbound from our chassis
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.832 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce353a4c-7280-46bb-ac7d-157bb5dc08cb
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.846 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[097188b7-28f7-4cb3-affa-d45a684981d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.883 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c3e25e-0425-412c-b8e0-6e86a703c72d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.886 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[972370a3-3b06-455e-9807-f03caa4b5d63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.924 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[26ca91ef-6914-4732-933a-9ae99c87d000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.943 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5198391d-a711-4397-84e8-9906db650d8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce353a4c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:23:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 4, 'rx_bytes': 1872, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 4, 'rx_bytes': 1872, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618949, 'reachable_time': 20488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386305, 'error': None, 'target': 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.962 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[041d9dc2-80db-4b1c-bd5f-048a6f789bbe]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce353a4c-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618964, 'tstamp': 618964}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386307, 'error': None, 'target': 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.964 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce353a4c-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:42 compute-0 nova_compute[260935]: 2025-10-11 09:19:42.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.971 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce353a4c-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.971 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.972 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce353a4c-70, col_values=(('external_ids', {'iface-id': 'bf7f7fa8-f1d9-4202-bec4-06d2178d548d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:19:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:19:42.972 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:19:43 compute-0 podman[386344]: 2025-10-11 09:19:43.258297982 +0000 UTC m=+0.101587578 container create 08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 09:19:43 compute-0 podman[386344]: 2025-10-11 09:19:43.186420814 +0000 UTC m=+0.029710440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:19:43 compute-0 systemd[1]: Started libpod-conmon-08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5.scope.
Oct 11 09:19:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:19:43 compute-0 podman[386344]: 2025-10-11 09:19:43.369654574 +0000 UTC m=+0.212944240 container init 08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 09:19:43 compute-0 podman[386344]: 2025-10-11 09:19:43.381950071 +0000 UTC m=+0.225239677 container start 08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 09:19:43 compute-0 podman[386344]: 2025-10-11 09:19:43.385386388 +0000 UTC m=+0.228676014 container attach 08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:19:43 compute-0 charming_darwin[386361]: 167 167
Oct 11 09:19:43 compute-0 systemd[1]: libpod-08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5.scope: Deactivated successfully.
Oct 11 09:19:43 compute-0 podman[386344]: 2025-10-11 09:19:43.393807936 +0000 UTC m=+0.237097572 container died 08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 11 09:19:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f75522e4ec8d369c42ba6beeb0461574a71fb3f7a9639e02cef1c05d0ba7353-merged.mount: Deactivated successfully.
Oct 11 09:19:43 compute-0 podman[386344]: 2025-10-11 09:19:43.441239794 +0000 UTC m=+0.284529400 container remove 08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:19:43 compute-0 systemd[1]: libpod-conmon-08265a73667c04df573d97c63574cbff3fef6817a34aa49ba397fe4449f508c5.scope: Deactivated successfully.
Oct 11 09:19:43 compute-0 nova_compute[260935]: 2025-10-11 09:19:43.564 2 DEBUG nova.network.neutron [req-f4424040-ac78-4b60-9f68-8f151d38e775 req-45a121ba-a26b-4e1b-9b65-5318da266ab4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Updated VIF entry in instance network info cache for port 50e8a951-4439-4d05-b0da-9b126fae5c30. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:19:43 compute-0 nova_compute[260935]: 2025-10-11 09:19:43.565 2 DEBUG nova.network.neutron [req-f4424040-ac78-4b60-9f68-8f151d38e775 req-45a121ba-a26b-4e1b-9b65-5318da266ab4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Updating instance_info_cache with network_info: [{"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:19:43 compute-0 nova_compute[260935]: 2025-10-11 09:19:43.628 2 DEBUG oslo_concurrency.lockutils [req-f4424040-ac78-4b60-9f68-8f151d38e775 req-45a121ba-a26b-4e1b-9b65-5318da266ab4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-6c9132ae-fb4f-4a77-8e3f-14f516eed49c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:19:43 compute-0 nova_compute[260935]: 2025-10-11 09:19:43.645 2 DEBUG nova.compute.manager [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:43 compute-0 nova_compute[260935]: 2025-10-11 09:19:43.646 2 DEBUG oslo_concurrency.lockutils [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:43 compute-0 nova_compute[260935]: 2025-10-11 09:19:43.647 2 DEBUG oslo_concurrency.lockutils [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:43 compute-0 nova_compute[260935]: 2025-10-11 09:19:43.647 2 DEBUG oslo_concurrency.lockutils [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:43 compute-0 nova_compute[260935]: 2025-10-11 09:19:43.648 2 DEBUG nova.compute.manager [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Processing event network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:19:43 compute-0 nova_compute[260935]: 2025-10-11 09:19:43.648 2 DEBUG nova.compute.manager [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:43 compute-0 nova_compute[260935]: 2025-10-11 09:19:43.649 2 DEBUG oslo_concurrency.lockutils [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:43 compute-0 nova_compute[260935]: 2025-10-11 09:19:43.649 2 DEBUG oslo_concurrency.lockutils [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:43 compute-0 nova_compute[260935]: 2025-10-11 09:19:43.649 2 DEBUG oslo_concurrency.lockutils [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:43 compute-0 nova_compute[260935]: 2025-10-11 09:19:43.650 2 DEBUG nova.compute.manager [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] No event matching network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d in dict_keys([('network-vif-plugged', 'd5347066-e322-4664-8798-146b9745fa17')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 11 09:19:43 compute-0 nova_compute[260935]: 2025-10-11 09:19:43.650 2 WARNING nova.compute.manager [req-1c9a06e1-ec84-4ee6-ae42-3eaf47ee3150 req-c800f83b-82b4-48ce-8fef-75069697116b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received unexpected event network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d for instance with vm_state building and task_state spawning.
Oct 11 09:19:43 compute-0 nova_compute[260935]: 2025-10-11 09:19:43.679 2 DEBUG nova.network.neutron [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updated VIF entry in instance network info cache for port d8f510f6-a87e-4c76-be24-a143f69f564d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:19:43 compute-0 nova_compute[260935]: 2025-10-11 09:19:43.680 2 DEBUG nova.network.neutron [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updating instance_info_cache with network_info: [{"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:19:43 compute-0 nova_compute[260935]: 2025-10-11 09:19:43.709 2 DEBUG oslo_concurrency.lockutils [req-d53f6209-42ac-461f-92b4-8dc609ebc4f0 req-22b0ba02-f507-4271-a49b-3b849bf11fe3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:19:43 compute-0 podman[386385]: 2025-10-11 09:19:43.714608749 +0000 UTC m=+0.062103834 container create 4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 09:19:43 compute-0 systemd[1]: Started libpod-conmon-4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4.scope.
Oct 11 09:19:43 compute-0 podman[386385]: 2025-10-11 09:19:43.69162599 +0000 UTC m=+0.039121045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:19:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e64c21a32f7212729275dc071307bb1556d603e32905c48310e3d8c4aa9664d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e64c21a32f7212729275dc071307bb1556d603e32905c48310e3d8c4aa9664d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e64c21a32f7212729275dc071307bb1556d603e32905c48310e3d8c4aa9664d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e64c21a32f7212729275dc071307bb1556d603e32905c48310e3d8c4aa9664d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:19:43 compute-0 ceph-mon[74313]: pgmap v2336: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 111 op/s
Oct 11 09:19:43 compute-0 podman[386385]: 2025-10-11 09:19:43.890649077 +0000 UTC m=+0.238144122 container init 4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:19:43 compute-0 podman[386385]: 2025-10-11 09:19:43.902616695 +0000 UTC m=+0.250111750 container start 4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 09:19:43 compute-0 podman[386385]: 2025-10-11 09:19:43.919927543 +0000 UTC m=+0.267422588 container attach 4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 09:19:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:19:44 compute-0 nova_compute[260935]: 2025-10-11 09:19:44.258 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174384.2572055, a798e2c3-8294-482d-a073-883121427765 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:19:44 compute-0 nova_compute[260935]: 2025-10-11 09:19:44.259 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] VM Started (Lifecycle Event)
Oct 11 09:19:44 compute-0 nova_compute[260935]: 2025-10-11 09:19:44.290 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:19:44 compute-0 nova_compute[260935]: 2025-10-11 09:19:44.295 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174384.2574992, a798e2c3-8294-482d-a073-883121427765 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:19:44 compute-0 nova_compute[260935]: 2025-10-11 09:19:44.295 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] VM Paused (Lifecycle Event)
Oct 11 09:19:44 compute-0 nova_compute[260935]: 2025-10-11 09:19:44.318 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:19:44 compute-0 nova_compute[260935]: 2025-10-11 09:19:44.320 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:19:44 compute-0 nova_compute[260935]: 2025-10-11 09:19:44.352 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:19:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2337: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 834 KiB/s wr, 90 op/s
Oct 11 09:19:45 compute-0 stoic_villani[386444]: {
Oct 11 09:19:45 compute-0 stoic_villani[386444]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:19:45 compute-0 stoic_villani[386444]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:19:45 compute-0 stoic_villani[386444]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:19:45 compute-0 stoic_villani[386444]:         "osd_id": 2,
Oct 11 09:19:45 compute-0 stoic_villani[386444]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:19:45 compute-0 stoic_villani[386444]:         "type": "bluestore"
Oct 11 09:19:45 compute-0 stoic_villani[386444]:     },
Oct 11 09:19:45 compute-0 stoic_villani[386444]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:19:45 compute-0 stoic_villani[386444]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:19:45 compute-0 stoic_villani[386444]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:19:45 compute-0 stoic_villani[386444]:         "osd_id": 0,
Oct 11 09:19:45 compute-0 stoic_villani[386444]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:19:45 compute-0 stoic_villani[386444]:         "type": "bluestore"
Oct 11 09:19:45 compute-0 stoic_villani[386444]:     },
Oct 11 09:19:45 compute-0 stoic_villani[386444]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:19:45 compute-0 stoic_villani[386444]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:19:45 compute-0 stoic_villani[386444]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:19:45 compute-0 stoic_villani[386444]:         "osd_id": 1,
Oct 11 09:19:45 compute-0 stoic_villani[386444]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:19:45 compute-0 stoic_villani[386444]:         "type": "bluestore"
Oct 11 09:19:45 compute-0 stoic_villani[386444]:     }
Oct 11 09:19:45 compute-0 stoic_villani[386444]: }
Oct 11 09:19:45 compute-0 systemd[1]: libpod-4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4.scope: Deactivated successfully.
Oct 11 09:19:45 compute-0 systemd[1]: libpod-4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4.scope: Consumed 1.162s CPU time.
Oct 11 09:19:45 compute-0 conmon[386444]: conmon 4ea051f3c8eb99f1a828 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4.scope/container/memory.events
Oct 11 09:19:45 compute-0 podman[386385]: 2025-10-11 09:19:45.110243195 +0000 UTC m=+1.457738280 container died 4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:19:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e64c21a32f7212729275dc071307bb1556d603e32905c48310e3d8c4aa9664d-merged.mount: Deactivated successfully.
Oct 11 09:19:45 compute-0 podman[386385]: 2025-10-11 09:19:45.267332388 +0000 UTC m=+1.614827473 container remove 4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:19:45 compute-0 podman[386478]: 2025-10-11 09:19:45.272193635 +0000 UTC m=+0.130312318 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 09:19:45 compute-0 systemd[1]: libpod-conmon-4ea051f3c8eb99f1a8282ae813e05c935501de9ffbee9cd7294999d2a1d71ec4.scope: Deactivated successfully.
Oct 11 09:19:45 compute-0 podman[386485]: 2025-10-11 09:19:45.28901152 +0000 UTC m=+0.147230586 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 11 09:19:45 compute-0 sudo[386266]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:19:45 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:19:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:19:45 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:19:45 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 781126cd-9523-477b-9839-3adf48c1ff7e does not exist
Oct 11 09:19:45 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev dfa3fdba-ffe3-4f18-8607-3271ea0d8e1b does not exist
Oct 11 09:19:45 compute-0 sudo[386531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:19:45 compute-0 sudo[386531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:45 compute-0 sudo[386531]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:45 compute-0 sudo[386556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:19:45 compute-0 sudo[386556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:19:45 compute-0 sudo[386556]: pam_unix(sudo:session): session closed for user root
Oct 11 09:19:45 compute-0 ceph-mon[74313]: pgmap v2337: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 834 KiB/s wr, 90 op/s
Oct 11 09:19:45 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:19:45 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:19:46 compute-0 nova_compute[260935]: 2025-10-11 09:19:46.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:46 compute-0 nova_compute[260935]: 2025-10-11 09:19:46.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2338: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Oct 11 09:19:47 compute-0 ceph-mon[74313]: pgmap v2338: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Oct 11 09:19:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2339: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 84 op/s
Oct 11 09:19:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:19:49 compute-0 ceph-mon[74313]: pgmap v2339: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 84 op/s
Oct 11 09:19:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2340: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 11 09:19:51 compute-0 nova_compute[260935]: 2025-10-11 09:19:51.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:51 compute-0 nova_compute[260935]: 2025-10-11 09:19:51.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:52 compute-0 nova_compute[260935]: 2025-10-11 09:19:52.308 2 DEBUG nova.compute.manager [req-1400e464-fd38-4ac2-a50e-b6d0b9c1ae87 req-0d673cbb-fbea-4511-8577-22e50e91be08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:52 compute-0 nova_compute[260935]: 2025-10-11 09:19:52.309 2 DEBUG oslo_concurrency.lockutils [req-1400e464-fd38-4ac2-a50e-b6d0b9c1ae87 req-0d673cbb-fbea-4511-8577-22e50e91be08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:52 compute-0 nova_compute[260935]: 2025-10-11 09:19:52.309 2 DEBUG oslo_concurrency.lockutils [req-1400e464-fd38-4ac2-a50e-b6d0b9c1ae87 req-0d673cbb-fbea-4511-8577-22e50e91be08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:52 compute-0 nova_compute[260935]: 2025-10-11 09:19:52.309 2 DEBUG oslo_concurrency.lockutils [req-1400e464-fd38-4ac2-a50e-b6d0b9c1ae87 req-0d673cbb-fbea-4511-8577-22e50e91be08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:52 compute-0 nova_compute[260935]: 2025-10-11 09:19:52.310 2 DEBUG nova.compute.manager [req-1400e464-fd38-4ac2-a50e-b6d0b9c1ae87 req-0d673cbb-fbea-4511-8577-22e50e91be08 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Processing event network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:19:52 compute-0 nova_compute[260935]: 2025-10-11 09:19:52.311 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Instance event wait completed in 8 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:19:52 compute-0 nova_compute[260935]: 2025-10-11 09:19:52.316 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174392.3160458, a798e2c3-8294-482d-a073-883121427765 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:19:52 compute-0 nova_compute[260935]: 2025-10-11 09:19:52.317 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] VM Resumed (Lifecycle Event)
Oct 11 09:19:52 compute-0 nova_compute[260935]: 2025-10-11 09:19:52.319 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:19:52 compute-0 nova_compute[260935]: 2025-10-11 09:19:52.323 2 INFO nova.virt.libvirt.driver [-] [instance: a798e2c3-8294-482d-a073-883121427765] Instance spawned successfully.
Oct 11 09:19:52 compute-0 nova_compute[260935]: 2025-10-11 09:19:52.324 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:19:52 compute-0 ceph-mon[74313]: pgmap v2340: 321 pgs: 321 active+clean; 579 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 11 09:19:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2341: 321 pgs: 321 active+clean; 585 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 744 KiB/s wr, 85 op/s
Oct 11 09:19:53 compute-0 ceph-mon[74313]: pgmap v2341: 321 pgs: 321 active+clean; 585 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 744 KiB/s wr, 85 op/s
Oct 11 09:19:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:19:54 compute-0 unix_chkpwd[386585]: password check failed for user (root)
Oct 11 09:19:54 compute-0 sshd-session[386583]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:19:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2342: 321 pgs: 321 active+clean; 588 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 589 KiB/s rd, 1.1 MiB/s wr, 44 op/s
Oct 11 09:19:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:19:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:19:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:19:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:19:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:19:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:19:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:19:54
Oct 11 09:19:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:19:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:19:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'backups', 'vms', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.log']
Oct 11 09:19:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:19:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:19:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:19:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:19:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:19:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:19:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:19:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:19:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:19:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:19:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:19:55 compute-0 nova_compute[260935]: 2025-10-11 09:19:55.550 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:19:55 compute-0 nova_compute[260935]: 2025-10-11 09:19:55.564 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:19:55 compute-0 nova_compute[260935]: 2025-10-11 09:19:55.574 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:55 compute-0 nova_compute[260935]: 2025-10-11 09:19:55.576 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:55 compute-0 nova_compute[260935]: 2025-10-11 09:19:55.577 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:55 compute-0 nova_compute[260935]: 2025-10-11 09:19:55.578 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:55 compute-0 nova_compute[260935]: 2025-10-11 09:19:55.579 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:55 compute-0 nova_compute[260935]: 2025-10-11 09:19:55.580 2 DEBUG nova.virt.libvirt.driver [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:19:56 compute-0 nova_compute[260935]: 2025-10-11 09:19:56.296 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:19:56 compute-0 ceph-mon[74313]: pgmap v2342: 321 pgs: 321 active+clean; 588 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 589 KiB/s rd, 1.1 MiB/s wr, 44 op/s
Oct 11 09:19:56 compute-0 nova_compute[260935]: 2025-10-11 09:19:56.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:19:56 compute-0 nova_compute[260935]: 2025-10-11 09:19:56.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:19:56 compute-0 nova_compute[260935]: 2025-10-11 09:19:56.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:19:56 compute-0 nova_compute[260935]: 2025-10-11 09:19:56.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:19:56 compute-0 nova_compute[260935]: 2025-10-11 09:19:56.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:19:56 compute-0 nova_compute[260935]: 2025-10-11 09:19:56.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:19:56 compute-0 sshd-session[386583]: Failed password for root from 165.232.82.252 port 39976 ssh2
Oct 11 09:19:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2343: 321 pgs: 321 active+clean; 588 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.1 MiB/s wr, 25 op/s
Oct 11 09:19:56 compute-0 nova_compute[260935]: 2025-10-11 09:19:56.789 2 INFO nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Took 25.40 seconds to spawn the instance on the hypervisor.
Oct 11 09:19:56 compute-0 nova_compute[260935]: 2025-10-11 09:19:56.789 2 DEBUG nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:19:57 compute-0 nova_compute[260935]: 2025-10-11 09:19:57.156 2 INFO nova.compute.manager [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Took 26.91 seconds to build instance.
Oct 11 09:19:57 compute-0 nova_compute[260935]: 2025-10-11 09:19:57.828 2 DEBUG nova.compute.manager [req-2d6cd393-1950-4dea-bf63-12c2e39dbc89 req-595aa63a-c97b-479e-abd0-cbd959fb2bff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:19:57 compute-0 nova_compute[260935]: 2025-10-11 09:19:57.828 2 DEBUG oslo_concurrency.lockutils [req-2d6cd393-1950-4dea-bf63-12c2e39dbc89 req-595aa63a-c97b-479e-abd0-cbd959fb2bff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:19:57 compute-0 nova_compute[260935]: 2025-10-11 09:19:57.829 2 DEBUG oslo_concurrency.lockutils [req-2d6cd393-1950-4dea-bf63-12c2e39dbc89 req-595aa63a-c97b-479e-abd0-cbd959fb2bff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:19:57 compute-0 nova_compute[260935]: 2025-10-11 09:19:57.829 2 DEBUG oslo_concurrency.lockutils [req-2d6cd393-1950-4dea-bf63-12c2e39dbc89 req-595aa63a-c97b-479e-abd0-cbd959fb2bff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:57 compute-0 nova_compute[260935]: 2025-10-11 09:19:57.830 2 DEBUG nova.compute.manager [req-2d6cd393-1950-4dea-bf63-12c2e39dbc89 req-595aa63a-c97b-479e-abd0-cbd959fb2bff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] No waiting events found dispatching network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:19:57 compute-0 nova_compute[260935]: 2025-10-11 09:19:57.830 2 WARNING nova.compute.manager [req-2d6cd393-1950-4dea-bf63-12c2e39dbc89 req-595aa63a-c97b-479e-abd0-cbd959fb2bff e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received unexpected event network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 for instance with vm_state active and task_state None.
Oct 11 09:19:57 compute-0 ceph-mon[74313]: pgmap v2343: 321 pgs: 321 active+clean; 588 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.1 MiB/s wr, 25 op/s
Oct 11 09:19:57 compute-0 nova_compute[260935]: 2025-10-11 09:19:57.996 2 DEBUG oslo_concurrency.lockutils [None req-1eb0c06a-ad3d-4fea-bc67-8d264d758d2b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 27.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:19:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2344: 321 pgs: 321 active+clean; 600 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 96 op/s
Oct 11 09:19:58 compute-0 sshd-session[386583]: Connection closed by authenticating user root 165.232.82.252 port 39976 [preauth]
Oct 11 09:19:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:19:59 compute-0 ceph-mon[74313]: pgmap v2344: 321 pgs: 321 active+clean; 600 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 96 op/s
Oct 11 09:19:59 compute-0 ovn_controller[152945]: 2025-10-11T09:19:59Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:8b:f3 10.100.0.10
Oct 11 09:19:59 compute-0 ovn_controller[152945]: 2025-10-11T09:19:59Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:8b:f3 10.100.0.10
Oct 11 09:20:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2345: 321 pgs: 321 active+clean; 600 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 87 op/s
Oct 11 09:20:01 compute-0 nova_compute[260935]: 2025-10-11 09:20:01.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:01 compute-0 ceph-mon[74313]: pgmap v2345: 321 pgs: 321 active+clean; 600 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 87 op/s
Oct 11 09:20:01 compute-0 nova_compute[260935]: 2025-10-11 09:20:01.886 2 DEBUG nova.compute.manager [req-e89af524-c82b-4402-a3a8-d7e54a8befc1 req-ea2e4f81-a945-4160-9e04-1b8d6737c6fe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-changed-d5347066-e322-4664-8798-146b9745fa17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:20:01 compute-0 nova_compute[260935]: 2025-10-11 09:20:01.886 2 DEBUG nova.compute.manager [req-e89af524-c82b-4402-a3a8-d7e54a8befc1 req-ea2e4f81-a945-4160-9e04-1b8d6737c6fe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Refreshing instance network info cache due to event network-changed-d5347066-e322-4664-8798-146b9745fa17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:20:01 compute-0 nova_compute[260935]: 2025-10-11 09:20:01.887 2 DEBUG oslo_concurrency.lockutils [req-e89af524-c82b-4402-a3a8-d7e54a8befc1 req-ea2e4f81-a945-4160-9e04-1b8d6737c6fe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:20:01 compute-0 nova_compute[260935]: 2025-10-11 09:20:01.887 2 DEBUG oslo_concurrency.lockutils [req-e89af524-c82b-4402-a3a8-d7e54a8befc1 req-ea2e4f81-a945-4160-9e04-1b8d6737c6fe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:20:01 compute-0 nova_compute[260935]: 2025-10-11 09:20:01.888 2 DEBUG nova.network.neutron [req-e89af524-c82b-4402-a3a8-d7e54a8befc1 req-ea2e4f81-a945-4160-9e04-1b8d6737c6fe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Refreshing network info cache for port d5347066-e322-4664-8798-146b9745fa17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:20:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2346: 321 pgs: 321 active+clean; 610 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct 11 09:20:03 compute-0 nova_compute[260935]: 2025-10-11 09:20:03.465 2 DEBUG nova.network.neutron [req-e89af524-c82b-4402-a3a8-d7e54a8befc1 req-ea2e4f81-a945-4160-9e04-1b8d6737c6fe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updated VIF entry in instance network info cache for port d5347066-e322-4664-8798-146b9745fa17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:20:03 compute-0 nova_compute[260935]: 2025-10-11 09:20:03.465 2 DEBUG nova.network.neutron [req-e89af524-c82b-4402-a3a8-d7e54a8befc1 req-ea2e4f81-a945-4160-9e04-1b8d6737c6fe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updating instance_info_cache with network_info: [{"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:20:03 compute-0 nova_compute[260935]: 2025-10-11 09:20:03.693 2 DEBUG oslo_concurrency.lockutils [req-e89af524-c82b-4402-a3a8-d7e54a8befc1 req-ea2e4f81-a945-4160-9e04-1b8d6737c6fe e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:20:03 compute-0 ceph-mon[74313]: pgmap v2346: 321 pgs: 321 active+clean; 610 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct 11 09:20:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:20:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2347: 321 pgs: 321 active+clean; 612 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 114 op/s
Oct 11 09:20:04 compute-0 podman[386586]: 2025-10-11 09:20:04.834692281 +0000 UTC m=+0.113986377 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005251956558898222 of space, bias 1.0, pg target 1.5755869676694667 quantized to 32 (current 32)
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:20:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:20:05 compute-0 ceph-mon[74313]: pgmap v2347: 321 pgs: 321 active+clean; 612 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 114 op/s
Oct 11 09:20:06 compute-0 nova_compute[260935]: 2025-10-11 09:20:06.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:20:06 compute-0 nova_compute[260935]: 2025-10-11 09:20:06.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:20:06 compute-0 nova_compute[260935]: 2025-10-11 09:20:06.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:20:06 compute-0 nova_compute[260935]: 2025-10-11 09:20:06.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:20:06 compute-0 nova_compute[260935]: 2025-10-11 09:20:06.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:06 compute-0 nova_compute[260935]: 2025-10-11 09:20:06.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:20:06 compute-0 nova_compute[260935]: 2025-10-11 09:20:06.649 2 INFO nova.compute.manager [None req-4a62a28c-7f0f-4593-9113-b4d04a8748ec dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Get console output
Oct 11 09:20:06 compute-0 nova_compute[260935]: 2025-10-11 09:20:06.655 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:20:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2348: 321 pgs: 321 active+clean; 612 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 109 op/s
Oct 11 09:20:06 compute-0 nova_compute[260935]: 2025-10-11 09:20:06.936 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:06 compute-0 nova_compute[260935]: 2025-10-11 09:20:06.937 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:06 compute-0 nova_compute[260935]: 2025-10-11 09:20:06.937 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:06 compute-0 nova_compute[260935]: 2025-10-11 09:20:06.938 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:06 compute-0 nova_compute[260935]: 2025-10-11 09:20:06.938 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:06 compute-0 nova_compute[260935]: 2025-10-11 09:20:06.939 2 INFO nova.compute.manager [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Terminating instance
Oct 11 09:20:06 compute-0 nova_compute[260935]: 2025-10-11 09:20:06.940 2 DEBUG nova.compute.manager [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:20:06 compute-0 kernel: tap50e8a951-44 (unregistering): left promiscuous mode
Oct 11 09:20:06 compute-0 NetworkManager[44960]: <info>  [1760174406.9910] device (tap50e8a951-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:20:07 compute-0 ovn_controller[152945]: 2025-10-11T09:20:07Z|01163|binding|INFO|Releasing lport 50e8a951-4439-4d05-b0da-9b126fae5c30 from this chassis (sb_readonly=0)
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:07 compute-0 ovn_controller[152945]: 2025-10-11T09:20:07Z|01164|binding|INFO|Setting lport 50e8a951-4439-4d05-b0da-9b126fae5c30 down in Southbound
Oct 11 09:20:07 compute-0 ovn_controller[152945]: 2025-10-11T09:20:07Z|01165|binding|INFO|Removing iface tap50e8a951-44 ovn-installed in OVS
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.027 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:8b:f3 10.100.0.10'], port_security=['fa:16:3e:23:8b:f3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6c9132ae-fb4f-4a77-8e3f-14f516eed49c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ef8b4bf7-fe14-4629-a7f4-eb04c76ca0d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e842406f-d65b-48f7-9a65-50c6608add8c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=50e8a951-4439-4d05-b0da-9b126fae5c30) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:20:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.030 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 50e8a951-4439-4d05-b0da-9b126fae5c30 in datapath fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 unbound from our chassis
Oct 11 09:20:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.039 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.061 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[030e52bc-3592-47ae-957a-adc8fdd0470b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:07 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d00000073.scope: Deactivated successfully.
Oct 11 09:20:07 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d00000073.scope: Consumed 14.699s CPU time.
Oct 11 09:20:07 compute-0 systemd-machined[215705]: Machine qemu-138-instance-00000073 terminated.
Oct 11 09:20:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.112 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1b7c6f47-2296-4348-9034-808834ec747f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.116 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c3bed0ad-95b7-4f91-b383-322041860f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.165 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9024f222-5b7b-422d-bc1b-8482962492bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.186 2 INFO nova.virt.libvirt.driver [-] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Instance destroyed successfully.
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.186 2 DEBUG nova.objects.instance [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid 6c9132ae-fb4f-4a77-8e3f-14f516eed49c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:20:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.195 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e68ede-60ec-4f85-93b8-e6ef2af6b35f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbf3e0c3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:5d:03'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 322], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618285, 'reachable_time': 33588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386622, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.202 2 DEBUG nova.virt.libvirt.vif [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:19:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-487160952',display_name='tempest-TestNetworkBasicOps-server-487160952',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-487160952',id=115,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIt4DfbM2d+jaXAhSihykdBZ+tOpog8UlIDD8J/El5+5N6K+dCWK3MxK+m6m5Y83GMP6LzeAgXIDOnVujmKdajdhJSoGBcewPota2xCPS2Aiqozz2Osh9vQuMUfwGcZ9Cw==',key_name='tempest-TestNetworkBasicOps-1573176618',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:19:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-wkj2eggj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:19:37Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=6c9132ae-fb4f-4a77-8e3f-14f516eed49c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.202 2 DEBUG nova.network.os_vif_util [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "50e8a951-4439-4d05-b0da-9b126fae5c30", "address": "fa:16:3e:23:8b:f3", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e8a951-44", "ovs_interfaceid": "50e8a951-4439-4d05-b0da-9b126fae5c30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.203 2 DEBUG nova.network.os_vif_util [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:23:8b:f3,bridge_name='br-int',has_traffic_filtering=True,id=50e8a951-4439-4d05-b0da-9b126fae5c30,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8a951-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.204 2 DEBUG os_vif [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:8b:f3,bridge_name='br-int',has_traffic_filtering=True,id=50e8a951-4439-4d05-b0da-9b126fae5c30,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8a951-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.206 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50e8a951-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.212 2 INFO os_vif [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:8b:f3,bridge_name='br-int',has_traffic_filtering=True,id=50e8a951-4439-4d05-b0da-9b126fae5c30,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e8a951-44')
Oct 11 09:20:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.226 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7c98f707-c89c-42f8-9854-43a457998173]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbf3e0c3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618299, 'tstamp': 618299}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386629, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbf3e0c3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618303, 'tstamp': 618303}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386629, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.228 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbf3e0c3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.236 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbf3e0c3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.236 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:20:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.236 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbf3e0c3-40, col_values=(('external_ids', {'iface-id': 'fb1da81b-c31d-4f26-a595-cb8eaad2e189'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:07.236 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:20:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 49K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1382 writes, 6243 keys, 1382 commit groups, 1.0 writes per commit group, ingest: 8.86 MB, 0.01 MB/s
                                           Interval WAL: 1382 writes, 1382 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     68.3      0.85              0.23        33    0.026       0      0       0.0       0.0
                                             L6      1/0    7.87 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.4    166.5    139.1      1.83              1.02        32    0.057    184K    17K       0.0       0.0
                                            Sum      1/0    7.87 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.4    113.8    116.7      2.68              1.26        65    0.041    184K    17K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   7.2    168.5    170.7      0.30              0.18        10    0.030     35K   2574       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    166.5    139.1      1.83              1.02        32    0.057    184K    17K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     68.6      0.85              0.23        32    0.026       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.057, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.31 GB write, 0.07 MB/s write, 0.30 GB read, 0.07 MB/s read, 2.7 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 304.00 MB usage: 33.66 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000315 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2226,32.30 MB,10.6237%) FilterBlock(66,523.80 KB,0.168263%) IndexBlock(66,876.88 KB,0.281685%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.650 2 INFO nova.virt.libvirt.driver [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Deleting instance files /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c_del
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.651 2 INFO nova.virt.libvirt.driver [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Deletion of /var/lib/nova/instances/6c9132ae-fb4f-4a77-8e3f-14f516eed49c_del complete
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.733 2 INFO nova.compute.manager [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.735 2 DEBUG oslo.service.loopingcall [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.735 2 DEBUG nova.compute.manager [-] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:20:07 compute-0 nova_compute[260935]: 2025-10-11 09:20:07.736 2 DEBUG nova.network.neutron [-] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:20:07 compute-0 ceph-mon[74313]: pgmap v2348: 321 pgs: 321 active+clean; 612 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 109 op/s
Oct 11 09:20:08 compute-0 nova_compute[260935]: 2025-10-11 09:20:08.546 2 DEBUG nova.network.neutron [-] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:20:08 compute-0 nova_compute[260935]: 2025-10-11 09:20:08.585 2 INFO nova.compute.manager [-] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Took 0.85 seconds to deallocate network for instance.
Oct 11 09:20:08 compute-0 nova_compute[260935]: 2025-10-11 09:20:08.637 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:08 compute-0 nova_compute[260935]: 2025-10-11 09:20:08.638 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:08 compute-0 nova_compute[260935]: 2025-10-11 09:20:08.656 2 DEBUG nova.compute.manager [req-2384ee2c-e4ff-4b43-8567-4b11eab93be9 req-d6a0f728-10ad-4f65-9bfc-84e0751cf60b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Received event network-vif-deleted-50e8a951-4439-4d05-b0da-9b126fae5c30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:20:08 compute-0 ovn_controller[152945]: 2025-10-11T09:20:08Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:eb:9b:3f 10.100.0.10
Oct 11 09:20:08 compute-0 ovn_controller[152945]: 2025-10-11T09:20:08Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:9b:3f 10.100.0.10
Oct 11 09:20:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2349: 321 pgs: 321 active+clean; 587 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.1 MiB/s wr, 171 op/s
Oct 11 09:20:08 compute-0 nova_compute[260935]: 2025-10-11 09:20:08.852 2 DEBUG oslo_concurrency.processutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:20:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:20:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:20:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979511492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:20:09 compute-0 nova_compute[260935]: 2025-10-11 09:20:09.370 2 DEBUG oslo_concurrency.processutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:20:09 compute-0 nova_compute[260935]: 2025-10-11 09:20:09.375 2 DEBUG nova.compute.provider_tree [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:20:09 compute-0 nova_compute[260935]: 2025-10-11 09:20:09.399 2 DEBUG nova.scheduler.client.report [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:20:09 compute-0 nova_compute[260935]: 2025-10-11 09:20:09.421 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:09 compute-0 nova_compute[260935]: 2025-10-11 09:20:09.443 2 INFO nova.scheduler.client.report [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance 6c9132ae-fb4f-4a77-8e3f-14f516eed49c
Oct 11 09:20:09 compute-0 nova_compute[260935]: 2025-10-11 09:20:09.504 2 DEBUG oslo_concurrency.lockutils [None req-55de08ed-24e8-4d1d-bb31-46ddba2296d3 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "6c9132ae-fb4f-4a77-8e3f-14f516eed49c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:09 compute-0 ceph-mon[74313]: pgmap v2349: 321 pgs: 321 active+clean; 587 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.1 MiB/s wr, 171 op/s
Oct 11 09:20:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3979511492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:20:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2350: 321 pgs: 321 active+clean; 587 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 548 KiB/s rd, 2.2 MiB/s wr, 100 op/s
Oct 11 09:20:10 compute-0 podman[386671]: 2025-10-11 09:20:10.812446587 +0000 UTC m=+0.102561206 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid)
Oct 11 09:20:11 compute-0 nova_compute[260935]: 2025-10-11 09:20:11.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:11 compute-0 ceph-mon[74313]: pgmap v2350: 321 pgs: 321 active+clean; 587 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 548 KiB/s rd, 2.2 MiB/s wr, 100 op/s
Oct 11 09:20:12 compute-0 nova_compute[260935]: 2025-10-11 09:20:12.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2351: 321 pgs: 321 active+clean; 563 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 606 KiB/s rd, 2.3 MiB/s wr, 127 op/s
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.233 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "69860e17-caac-461a-a4a5-34ca72c0ee09" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.234 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.234 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.234 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.235 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.235 2 INFO nova.compute.manager [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Terminating instance
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.236 2 DEBUG nova.compute.manager [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:20:13 compute-0 kernel: tapa1864eda-bf (unregistering): left promiscuous mode
Oct 11 09:20:13 compute-0 NetworkManager[44960]: <info>  [1760174413.2986] device (tapa1864eda-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:20:13 compute-0 ovn_controller[152945]: 2025-10-11T09:20:13Z|01166|binding|INFO|Releasing lport a1864eda-bf8d-42a1-b315-967521604391 from this chassis (sb_readonly=0)
Oct 11 09:20:13 compute-0 ovn_controller[152945]: 2025-10-11T09:20:13Z|01167|binding|INFO|Setting lport a1864eda-bf8d-42a1-b315-967521604391 down in Southbound
Oct 11 09:20:13 compute-0 ovn_controller[152945]: 2025-10-11T09:20:13Z|01168|binding|INFO|Removing iface tapa1864eda-bf ovn-installed in OVS
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.319 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:a2:5e 10.100.0.3'], port_security=['fa:16:3e:8e:a2:5e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '69860e17-caac-461a-a4a5-34ca72c0ee09', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04903712-3ca4-4ffc-b1ee-8e3bb7ff59e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e842406f-d65b-48f7-9a65-50c6608add8c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a1864eda-bf8d-42a1-b315-967521604391) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:20:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.321 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a1864eda-bf8d-42a1-b315-967521604391 in datapath fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 unbound from our chassis
Oct 11 09:20:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.325 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:20:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.326 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8c702162-68c6-4576-9327-49875287a7ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.327 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 namespace which is not needed anymore
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:13 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d00000071.scope: Deactivated successfully.
Oct 11 09:20:13 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d00000071.scope: Consumed 15.392s CPU time.
Oct 11 09:20:13 compute-0 systemd-machined[215705]: Machine qemu-136-instance-00000071 terminated.
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.482 2 INFO nova.virt.libvirt.driver [-] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Instance destroyed successfully.
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.482 2 DEBUG nova.objects.instance [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid 69860e17-caac-461a-a4a5-34ca72c0ee09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.496 2 DEBUG nova.virt.libvirt.vif [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:18:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-372874397',display_name='tempest-TestNetworkBasicOps-server-372874397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-372874397',id=113,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIx2RsPvJwPys7fB6mymA8gM4JxRMjGXoxlum0FnOgNOoalsj2xjCE+J+1HgjSPsznufYemTb9pcBC69nUdop6linXie2N/WBdlfI1xGC4f2xXUMk1ZGsW/ToHE3DY3muA==',key_name='tempest-TestNetworkBasicOps-55635867',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:18:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-vrz2alhg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:18:59Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=69860e17-caac-461a-a4a5-34ca72c0ee09,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.497 2 DEBUG nova.network.os_vif_util [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "a1864eda-bf8d-42a1-b315-967521604391", "address": "fa:16:3e:8e:a2:5e", "network": {"id": "fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448", "bridge": "br-int", "label": "tempest-network-smoke--1639787343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1864eda-bf", "ovs_interfaceid": "a1864eda-bf8d-42a1-b315-967521604391", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.499 2 DEBUG nova.network.os_vif_util [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:a2:5e,bridge_name='br-int',has_traffic_filtering=True,id=a1864eda-bf8d-42a1-b315-967521604391,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1864eda-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.499 2 DEBUG os_vif [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:a2:5e,bridge_name='br-int',has_traffic_filtering=True,id=a1864eda-bf8d-42a1-b315-967521604391,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1864eda-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1864eda-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.514 2 INFO os_vif [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:a2:5e,bridge_name='br-int',has_traffic_filtering=True,id=a1864eda-bf8d-42a1-b315-967521604391,network=Network(fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1864eda-bf')
Oct 11 09:20:13 compute-0 neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448[384342]: [NOTICE]   (384346) : haproxy version is 2.8.14-c23fe91
Oct 11 09:20:13 compute-0 neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448[384342]: [NOTICE]   (384346) : path to executable is /usr/sbin/haproxy
Oct 11 09:20:13 compute-0 neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448[384342]: [WARNING]  (384346) : Exiting Master process...
Oct 11 09:20:13 compute-0 neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448[384342]: [WARNING]  (384346) : Exiting Master process...
Oct 11 09:20:13 compute-0 neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448[384342]: [ALERT]    (384346) : Current worker (384348) exited with code 143 (Terminated)
Oct 11 09:20:13 compute-0 neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448[384342]: [WARNING]  (384346) : All workers exited. Exiting... (0)
Oct 11 09:20:13 compute-0 systemd[1]: libpod-01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1.scope: Deactivated successfully.
Oct 11 09:20:13 compute-0 podman[386715]: 2025-10-11 09:20:13.562041231 +0000 UTC m=+0.080319968 container died 01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 09:20:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1-userdata-shm.mount: Deactivated successfully.
Oct 11 09:20:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c06fe5bbac5efd6711a8c9e5d7b4f200ffa3fc048d59e854178cb58c74576a86-merged.mount: Deactivated successfully.
Oct 11 09:20:13 compute-0 podman[386715]: 2025-10-11 09:20:13.619470632 +0000 UTC m=+0.137749369 container cleanup 01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 09:20:13 compute-0 systemd[1]: libpod-conmon-01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1.scope: Deactivated successfully.
Oct 11 09:20:13 compute-0 podman[386774]: 2025-10-11 09:20:13.704268085 +0000 UTC m=+0.056399823 container remove 01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:20:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.716 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[264e2437-4d58-401a-baac-120610069b96]: (4, ('Sat Oct 11 09:20:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 (01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1)\n01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1\nSat Oct 11 09:20:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 (01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1)\n01e7bdbe8c35ae7e00b5f53817e42d58224fba8df1abe5879aaded3c00981af1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.719 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[152ce6c1-9fcf-45bf-a07d-4fe21f509ffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.720 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbf3e0c3-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:13 compute-0 kernel: tapfbf3e0c3-40: left promiscuous mode
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.754 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[539006d3-6abd-419c-badd-aaaf312e2868]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.786 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[33cd58c5-f920-4242-88b7-76f8ad59114a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.787 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e6456c87-ae79-4ea7-80db-216bd30304a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.811 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ab1b99-5e8a-47ff-a7b7-835536de7333]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618277, 'reachable_time': 20900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386792, 'error': None, 'target': 'ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:13 compute-0 systemd[1]: run-netns-ovnmeta\x2dfbf3e0c3\x2d4aa9\x2d41d1\x2d8ed0\x2dcbadb331b448.mount: Deactivated successfully.
Oct 11 09:20:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.817 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fbf3e0c3-4aa9-41d1-8ed0-cbadb331b448 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:20:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:13.817 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[c58c198d-59b5-47b4-a582-d7086de052d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:13 compute-0 ceph-mon[74313]: pgmap v2351: 321 pgs: 321 active+clean; 563 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 606 KiB/s rd, 2.3 MiB/s wr, 127 op/s
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.967 2 INFO nova.virt.libvirt.driver [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Deleting instance files /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09_del
Oct 11 09:20:13 compute-0 nova_compute[260935]: 2025-10-11 09:20:13.968 2 INFO nova.virt.libvirt.driver [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Deletion of /var/lib/nova/instances/69860e17-caac-461a-a4a5-34ca72c0ee09_del complete
Oct 11 09:20:14 compute-0 nova_compute[260935]: 2025-10-11 09:20:14.026 2 INFO nova.compute.manager [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 11 09:20:14 compute-0 nova_compute[260935]: 2025-10-11 09:20:14.027 2 DEBUG oslo.service.loopingcall [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:20:14 compute-0 nova_compute[260935]: 2025-10-11 09:20:14.028 2 DEBUG nova.compute.manager [-] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:20:14 compute-0 nova_compute[260935]: 2025-10-11 09:20:14.028 2 DEBUG nova.network.neutron [-] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:20:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:20:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2352: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 2.2 MiB/s wr, 103 op/s
Oct 11 09:20:14 compute-0 nova_compute[260935]: 2025-10-11 09:20:14.939 2 DEBUG nova.network.neutron [-] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:20:14 compute-0 nova_compute[260935]: 2025-10-11 09:20:14.959 2 INFO nova.compute.manager [-] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Took 0.93 seconds to deallocate network for instance.
Oct 11 09:20:15 compute-0 nova_compute[260935]: 2025-10-11 09:20:15.004 2 DEBUG nova.compute.manager [req-eae06087-d8b0-44dd-b2d1-0b6efff2609e req-304fe773-1142-4989-a200-d593de8e65a7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Received event network-vif-deleted-a1864eda-bf8d-42a1-b315-967521604391 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:20:15 compute-0 nova_compute[260935]: 2025-10-11 09:20:15.026 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:15 compute-0 nova_compute[260935]: 2025-10-11 09:20:15.027 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:15 compute-0 nova_compute[260935]: 2025-10-11 09:20:15.216 2 DEBUG oslo_concurrency.processutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:20:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:15.217 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:15.218 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:15.219 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:20:15 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1108013294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:20:15 compute-0 nova_compute[260935]: 2025-10-11 09:20:15.668 2 DEBUG oslo_concurrency.processutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:20:15 compute-0 nova_compute[260935]: 2025-10-11 09:20:15.674 2 DEBUG nova.compute.provider_tree [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:20:15 compute-0 nova_compute[260935]: 2025-10-11 09:20:15.699 2 DEBUG nova.scheduler.client.report [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:20:15 compute-0 nova_compute[260935]: 2025-10-11 09:20:15.726 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:15 compute-0 nova_compute[260935]: 2025-10-11 09:20:15.755 2 INFO nova.scheduler.client.report [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance 69860e17-caac-461a-a4a5-34ca72c0ee09
Oct 11 09:20:15 compute-0 podman[386815]: 2025-10-11 09:20:15.770916127 +0000 UTC m=+0.067266460 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 09:20:15 compute-0 podman[386816]: 2025-10-11 09:20:15.804790443 +0000 UTC m=+0.100021704 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:20:15 compute-0 nova_compute[260935]: 2025-10-11 09:20:15.821 2 DEBUG oslo_concurrency.lockutils [None req-13d36e4c-34d2-4b58-ba7d-39f792574e43 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "69860e17-caac-461a-a4a5-34ca72c0ee09" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:15 compute-0 ceph-mon[74313]: pgmap v2352: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 2.2 MiB/s wr, 103 op/s
Oct 11 09:20:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1108013294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:20:16 compute-0 nova_compute[260935]: 2025-10-11 09:20:16.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2353: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct 11 09:20:17 compute-0 ceph-mon[74313]: pgmap v2353: 321 pgs: 321 active+clean; 566 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct 11 09:20:18 compute-0 nova_compute[260935]: 2025-10-11 09:20:18.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2354: 321 pgs: 321 active+clean; 486 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.2 MiB/s wr, 124 op/s
Oct 11 09:20:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:20:19 compute-0 nova_compute[260935]: 2025-10-11 09:20:19.503 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:20:19 compute-0 ceph-mon[74313]: pgmap v2354: 321 pgs: 321 active+clean; 486 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.2 MiB/s wr, 124 op/s
Oct 11 09:20:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2355: 321 pgs: 321 active+clean; 486 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 114 KiB/s wr, 61 op/s
Oct 11 09:20:20 compute-0 nova_compute[260935]: 2025-10-11 09:20:20.793 2 DEBUG nova.compute.manager [req-011b05e4-f65a-4d5c-9c17-59569661654a req-c41a3e21-0b5a-4ae4-942f-f6fc3a0f1260 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-changed-d5347066-e322-4664-8798-146b9745fa17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:20:20 compute-0 nova_compute[260935]: 2025-10-11 09:20:20.794 2 DEBUG nova.compute.manager [req-011b05e4-f65a-4d5c-9c17-59569661654a req-c41a3e21-0b5a-4ae4-942f-f6fc3a0f1260 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Refreshing instance network info cache due to event network-changed-d5347066-e322-4664-8798-146b9745fa17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:20:20 compute-0 nova_compute[260935]: 2025-10-11 09:20:20.794 2 DEBUG oslo_concurrency.lockutils [req-011b05e4-f65a-4d5c-9c17-59569661654a req-c41a3e21-0b5a-4ae4-942f-f6fc3a0f1260 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:20:20 compute-0 nova_compute[260935]: 2025-10-11 09:20:20.795 2 DEBUG oslo_concurrency.lockutils [req-011b05e4-f65a-4d5c-9c17-59569661654a req-c41a3e21-0b5a-4ae4-942f-f6fc3a0f1260 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:20:20 compute-0 nova_compute[260935]: 2025-10-11 09:20:20.795 2 DEBUG nova.network.neutron [req-011b05e4-f65a-4d5c-9c17-59569661654a req-c41a3e21-0b5a-4ae4-942f-f6fc3a0f1260 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Refreshing network info cache for port d5347066-e322-4664-8798-146b9745fa17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:20:20 compute-0 nova_compute[260935]: 2025-10-11 09:20:20.843 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:20 compute-0 nova_compute[260935]: 2025-10-11 09:20:20.843 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:20 compute-0 nova_compute[260935]: 2025-10-11 09:20:20.844 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:20 compute-0 nova_compute[260935]: 2025-10-11 09:20:20.844 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:20 compute-0 nova_compute[260935]: 2025-10-11 09:20:20.844 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:20 compute-0 nova_compute[260935]: 2025-10-11 09:20:20.845 2 INFO nova.compute.manager [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Terminating instance
Oct 11 09:20:20 compute-0 nova_compute[260935]: 2025-10-11 09:20:20.846 2 DEBUG nova.compute.manager [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:20:20 compute-0 kernel: tapd5347066-e3 (unregistering): left promiscuous mode
Oct 11 09:20:20 compute-0 NetworkManager[44960]: <info>  [1760174420.9329] device (tapd5347066-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:20:20 compute-0 nova_compute[260935]: 2025-10-11 09:20:20.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:20 compute-0 ovn_controller[152945]: 2025-10-11T09:20:20Z|01169|binding|INFO|Releasing lport d5347066-e322-4664-8798-146b9745fa17 from this chassis (sb_readonly=0)
Oct 11 09:20:20 compute-0 ovn_controller[152945]: 2025-10-11T09:20:20Z|01170|binding|INFO|Setting lport d5347066-e322-4664-8798-146b9745fa17 down in Southbound
Oct 11 09:20:20 compute-0 ovn_controller[152945]: 2025-10-11T09:20:20Z|01171|binding|INFO|Removing iface tapd5347066-e3 ovn-installed in OVS
Oct 11 09:20:20 compute-0 nova_compute[260935]: 2025-10-11 09:20:20.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:20 compute-0 ovn_controller[152945]: 2025-10-11T09:20:20Z|01172|binding|INFO|Releasing lport bf7f7fa8-f1d9-4202-bec4-06d2178d548d from this chassis (sb_readonly=0)
Oct 11 09:20:20 compute-0 ovn_controller[152945]: 2025-10-11T09:20:20Z|01173|binding|INFO|Releasing lport f4a0a09a-d174-44d3-b63d-753fefd94646 from this chassis (sb_readonly=0)
Oct 11 09:20:20 compute-0 ovn_controller[152945]: 2025-10-11T09:20:20Z|01174|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:20:20 compute-0 ovn_controller[152945]: 2025-10-11T09:20:20Z|01175|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:20:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:20.964 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:9b:3f 10.100.0.10'], port_security=['fa:16:3e:eb:9b:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a798e2c3-8294-482d-a073-883121427765', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a975f5dd-d207-4e6b-9401-45f24b820c2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0785b003-8951-44fc-bd21-f1c5c3076102, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d5347066-e322-4664-8798-146b9745fa17) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:20:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:20.966 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d5347066-e322-4664-8798-146b9745fa17 in datapath 6346ea52-07fc-49ad-8f2d-fcfed9769241 unbound from our chassis
Oct 11 09:20:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:20.969 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6346ea52-07fc-49ad-8f2d-fcfed9769241
Oct 11 09:20:20 compute-0 kernel: tapd8f510f6-a8 (unregistering): left promiscuous mode
Oct 11 09:20:20 compute-0 NetworkManager[44960]: <info>  [1760174420.9849] device (tapd8f510f6-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.002 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b222e8ad-1c66-489d-8c13-46ece9bac66c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:21 compute-0 ovn_controller[152945]: 2025-10-11T09:20:21Z|01176|binding|INFO|Releasing lport d8f510f6-a87e-4c76-be24-a143f69f564d from this chassis (sb_readonly=0)
Oct 11 09:20:21 compute-0 ovn_controller[152945]: 2025-10-11T09:20:21Z|01177|binding|INFO|Setting lport d8f510f6-a87e-4c76-be24-a143f69f564d down in Southbound
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:21 compute-0 ovn_controller[152945]: 2025-10-11T09:20:21Z|01178|binding|INFO|Removing iface tapd8f510f6-a8 ovn-installed in OVS
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.061 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f845f9d8-f19e-4dd1-82e1-305af7c1678e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.067 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:49:59 2001:db8::f816:3eff:fef2:4959'], port_security=['fa:16:3e:f2:49:59 2001:db8::f816:3eff:fef2:4959'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef2:4959/64', 'neutron:device_id': 'a798e2c3-8294-482d-a073-883121427765', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a975f5dd-d207-4e6b-9401-45f24b820c2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7413cee9-eaf1-4090-8a8e-9bf9dac36a0a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d8f510f6-a87e-4c76-be24-a143f69f564d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.068 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[86e10ae5-9102-4822-b890-c22c6be2ca7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:21 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d00000074.scope: Deactivated successfully.
Oct 11 09:20:21 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d00000074.scope: Consumed 17.789s CPU time.
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:21 compute-0 systemd-machined[215705]: Machine qemu-139-instance-00000074 terminated.
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.112 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[53dd05a6-b512-47f5-8839-ff29aadf57ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.143 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b268d4a0-c597-43a3-a87e-c932e7f816b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6346ea52-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:67:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618844, 'reachable_time': 24736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386874, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.175 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7e51667a-4d4d-4ee5-bfb5-717765d07bbb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6346ea52-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618858, 'tstamp': 618858}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386875, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6346ea52-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618862, 'tstamp': 618862}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386875, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.177 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6346ea52-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.190 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6346ea52-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.190 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.191 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6346ea52-00, col_values=(('external_ids', {'iface-id': 'f4a0a09a-d174-44d3-b63d-753fefd94646'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.191 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.192 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d8f510f6-a87e-4c76-be24-a143f69f564d in datapath ce353a4c-7280-46bb-ac7d-157bb5dc08cb unbound from our chassis
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.194 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce353a4c-7280-46bb-ac7d-157bb5dc08cb
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.210 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[427582f8-23e5-43a1-9d36-281049d1455e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.243 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3729a8a5-7136-45ed-98a7-36f53b608bde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.247 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa647fb-2ead-4c0b-870d-3f39d71f8256]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.277 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f3d59f5c-6d19-4bfc-b3c3-2ed3f031fb0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:21 compute-0 NetworkManager[44960]: <info>  [1760174421.2844] manager: (tapd8f510f6-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/476)
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.298 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ebd787-1c6f-46a2-9bc4-94e1ecec7713]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce353a4c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:23:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 5, 'rx_bytes': 2912, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 5, 'rx_bytes': 2912, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618949, 'reachable_time': 20488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386889, 'error': None, 'target': 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.314 2 INFO nova.virt.libvirt.driver [-] [instance: a798e2c3-8294-482d-a073-883121427765] Instance destroyed successfully.
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.315 2 DEBUG nova.objects.instance [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid a798e2c3-8294-482d-a073-883121427765 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.319 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[38d855b8-e72c-4504-b270-2f35c2885aac]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce353a4c-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618964, 'tstamp': 618964}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386902, 'error': None, 'target': 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.321 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce353a4c-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.331 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce353a4c-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.331 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.332 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce353a4c-70, col_values=(('external_ids', {'iface-id': 'bf7f7fa8-f1d9-4202-bec4-06d2178d548d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:21.332 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.340 2 DEBUG nova.virt.libvirt.vif [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1575481376',display_name='tempest-TestGettingAddress-server-1575481376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1575481376',id=116,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:19:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-cbv7fcg7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:19:56Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=a798e2c3-8294-482d-a073-883121427765,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.341 2 DEBUG nova.network.os_vif_util [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.342 2 DEBUG nova.network.os_vif_util [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:9b:3f,bridge_name='br-int',has_traffic_filtering=True,id=d5347066-e322-4664-8798-146b9745fa17,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5347066-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.343 2 DEBUG os_vif [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:9b:3f,bridge_name='br-int',has_traffic_filtering=True,id=d5347066-e322-4664-8798-146b9745fa17,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5347066-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5347066-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.355 2 INFO os_vif [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:9b:3f,bridge_name='br-int',has_traffic_filtering=True,id=d5347066-e322-4664-8798-146b9745fa17,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5347066-e3')
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.356 2 DEBUG nova.virt.libvirt.vif [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:19:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1575481376',display_name='tempest-TestGettingAddress-server-1575481376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1575481376',id=116,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:19:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-cbv7fcg7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:19:56Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=a798e2c3-8294-482d-a073-883121427765,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.357 2 DEBUG nova.network.os_vif_util [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.358 2 DEBUG nova.network.os_vif_util [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:49:59,bridge_name='br-int',has_traffic_filtering=True,id=d8f510f6-a87e-4c76-be24-a143f69f564d,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f510f6-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.358 2 DEBUG os_vif [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:49:59,bridge_name='br-int',has_traffic_filtering=True,id=d8f510f6-a87e-4c76-be24-a143f69f564d,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f510f6-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.360 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8f510f6-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.366 2 INFO os_vif [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:49:59,bridge_name='br-int',has_traffic_filtering=True,id=d8f510f6-a87e-4c76-be24-a143f69f564d,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8f510f6-a8')
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.599 2 DEBUG nova.compute.manager [req-b64c5dad-e3b8-4c8c-95a0-0ca7467e7ee9 req-a2ce4c7f-59e0-43c0-bd88-20995ff9a95e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-unplugged-d5347066-e322-4664-8798-146b9745fa17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.600 2 DEBUG oslo_concurrency.lockutils [req-b64c5dad-e3b8-4c8c-95a0-0ca7467e7ee9 req-a2ce4c7f-59e0-43c0-bd88-20995ff9a95e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.600 2 DEBUG oslo_concurrency.lockutils [req-b64c5dad-e3b8-4c8c-95a0-0ca7467e7ee9 req-a2ce4c7f-59e0-43c0-bd88-20995ff9a95e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.601 2 DEBUG oslo_concurrency.lockutils [req-b64c5dad-e3b8-4c8c-95a0-0ca7467e7ee9 req-a2ce4c7f-59e0-43c0-bd88-20995ff9a95e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.601 2 DEBUG nova.compute.manager [req-b64c5dad-e3b8-4c8c-95a0-0ca7467e7ee9 req-a2ce4c7f-59e0-43c0-bd88-20995ff9a95e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] No waiting events found dispatching network-vif-unplugged-d5347066-e322-4664-8798-146b9745fa17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.602 2 DEBUG nova.compute.manager [req-b64c5dad-e3b8-4c8c-95a0-0ca7467e7ee9 req-a2ce4c7f-59e0-43c0-bd88-20995ff9a95e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-unplugged-d5347066-e322-4664-8798-146b9745fa17 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.842 2 INFO nova.virt.libvirt.driver [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Deleting instance files /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765_del
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.843 2 INFO nova.virt.libvirt.driver [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Deletion of /var/lib/nova/instances/a798e2c3-8294-482d-a073-883121427765_del complete
Oct 11 09:20:21 compute-0 ceph-mon[74313]: pgmap v2355: 321 pgs: 321 active+clean; 486 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 114 KiB/s wr, 61 op/s
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.915 2 INFO nova.compute.manager [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Took 1.07 seconds to destroy the instance on the hypervisor.
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.915 2 DEBUG oslo.service.loopingcall [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.916 2 DEBUG nova.compute.manager [-] [instance: a798e2c3-8294-482d-a073-883121427765] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:20:21 compute-0 nova_compute[260935]: 2025-10-11 09:20:21.916 2 DEBUG nova.network.neutron [-] [instance: a798e2c3-8294-482d-a073-883121427765] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.178 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174407.1768434, 6c9132ae-fb4f-4a77-8e3f-14f516eed49c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.179 2 INFO nova.compute.manager [-] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] VM Stopped (Lifecycle Event)
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.272 2 DEBUG nova.compute.manager [None req-a655d275-fde7-45e2-a8fa-87f663bc64d6 - - - - - -] [instance: 6c9132ae-fb4f-4a77-8e3f-14f516eed49c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.461 2 DEBUG nova.network.neutron [req-011b05e4-f65a-4d5c-9c17-59569661654a req-c41a3e21-0b5a-4ae4-942f-f6fc3a0f1260 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updated VIF entry in instance network info cache for port d5347066-e322-4664-8798-146b9745fa17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.461 2 DEBUG nova.network.neutron [req-011b05e4-f65a-4d5c-9c17-59569661654a req-c41a3e21-0b5a-4ae4-942f-f6fc3a0f1260 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Updating instance_info_cache with network_info: [{"id": "d5347066-e322-4664-8798-146b9745fa17", "address": "fa:16:3e:eb:9b:3f", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5347066-e3", "ovs_interfaceid": "d5347066-e322-4664-8798-146b9745fa17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d8f510f6-a87e-4c76-be24-a143f69f564d", "address": "fa:16:3e:f2:49:59", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef2:4959", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8f510f6-a8", "ovs_interfaceid": "d8f510f6-a87e-4c76-be24-a143f69f564d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.566 2 DEBUG oslo_concurrency.lockutils [req-011b05e4-f65a-4d5c-9c17-59569661654a req-c41a3e21-0b5a-4ae4-942f-f6fc3a0f1260 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a798e2c3-8294-482d-a073-883121427765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:20:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2356: 321 pgs: 321 active+clean; 433 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 115 KiB/s wr, 89 op/s
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.887 2 DEBUG nova.compute.manager [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-unplugged-d8f510f6-a87e-4c76-be24-a143f69f564d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.888 2 DEBUG oslo_concurrency.lockutils [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.888 2 DEBUG oslo_concurrency.lockutils [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.889 2 DEBUG oslo_concurrency.lockutils [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.889 2 DEBUG nova.compute.manager [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] No waiting events found dispatching network-vif-unplugged-d8f510f6-a87e-4c76-be24-a143f69f564d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.890 2 DEBUG nova.compute.manager [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-unplugged-d8f510f6-a87e-4c76-be24-a143f69f564d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.890 2 DEBUG nova.compute.manager [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.891 2 DEBUG oslo_concurrency.lockutils [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.891 2 DEBUG oslo_concurrency.lockutils [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.892 2 DEBUG oslo_concurrency.lockutils [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.892 2 DEBUG nova.compute.manager [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] No waiting events found dispatching network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:20:22 compute-0 nova_compute[260935]: 2025-10-11 09:20:22.893 2 WARNING nova.compute.manager [req-09c0b2fb-8501-459e-867e-27ac87e8de7b req-0233dd50-f350-41d4-98f3-01f9f4e8b3f0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received unexpected event network-vif-plugged-d8f510f6-a87e-4c76-be24-a143f69f564d for instance with vm_state active and task_state deleting.
Oct 11 09:20:23 compute-0 nova_compute[260935]: 2025-10-11 09:20:23.656 2 DEBUG nova.network.neutron [-] [instance: a798e2c3-8294-482d-a073-883121427765] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:20:23 compute-0 nova_compute[260935]: 2025-10-11 09:20:23.683 2 INFO nova.compute.manager [-] [instance: a798e2c3-8294-482d-a073-883121427765] Took 1.77 seconds to deallocate network for instance.
Oct 11 09:20:23 compute-0 nova_compute[260935]: 2025-10-11 09:20:23.725 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:23 compute-0 nova_compute[260935]: 2025-10-11 09:20:23.725 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:23 compute-0 nova_compute[260935]: 2025-10-11 09:20:23.745 2 DEBUG nova.compute.manager [req-ca0fbed9-fead-40fa-8703-387da90e1a1a req-a5a6941c-8d6b-45db-8e79-0c7425bf3b6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:20:23 compute-0 nova_compute[260935]: 2025-10-11 09:20:23.745 2 DEBUG oslo_concurrency.lockutils [req-ca0fbed9-fead-40fa-8703-387da90e1a1a req-a5a6941c-8d6b-45db-8e79-0c7425bf3b6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a798e2c3-8294-482d-a073-883121427765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:23 compute-0 nova_compute[260935]: 2025-10-11 09:20:23.746 2 DEBUG oslo_concurrency.lockutils [req-ca0fbed9-fead-40fa-8703-387da90e1a1a req-a5a6941c-8d6b-45db-8e79-0c7425bf3b6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:23 compute-0 nova_compute[260935]: 2025-10-11 09:20:23.746 2 DEBUG oslo_concurrency.lockutils [req-ca0fbed9-fead-40fa-8703-387da90e1a1a req-a5a6941c-8d6b-45db-8e79-0c7425bf3b6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a798e2c3-8294-482d-a073-883121427765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:23 compute-0 nova_compute[260935]: 2025-10-11 09:20:23.747 2 DEBUG nova.compute.manager [req-ca0fbed9-fead-40fa-8703-387da90e1a1a req-a5a6941c-8d6b-45db-8e79-0c7425bf3b6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] No waiting events found dispatching network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:20:23 compute-0 nova_compute[260935]: 2025-10-11 09:20:23.747 2 WARNING nova.compute.manager [req-ca0fbed9-fead-40fa-8703-387da90e1a1a req-a5a6941c-8d6b-45db-8e79-0c7425bf3b6d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received unexpected event network-vif-plugged-d5347066-e322-4664-8798-146b9745fa17 for instance with vm_state deleted and task_state None.
Oct 11 09:20:23 compute-0 ceph-mon[74313]: pgmap v2356: 321 pgs: 321 active+clean; 433 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 115 KiB/s wr, 89 op/s
Oct 11 09:20:23 compute-0 nova_compute[260935]: 2025-10-11 09:20:23.892 2 DEBUG oslo_concurrency.processutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:20:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:20:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:20:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3744330981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:20:24 compute-0 nova_compute[260935]: 2025-10-11 09:20:24.379 2 DEBUG oslo_concurrency.processutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:20:24 compute-0 nova_compute[260935]: 2025-10-11 09:20:24.388 2 DEBUG nova.compute.provider_tree [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:20:24 compute-0 nova_compute[260935]: 2025-10-11 09:20:24.415 2 DEBUG nova.scheduler.client.report [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:20:24 compute-0 nova_compute[260935]: 2025-10-11 09:20:24.456 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:24 compute-0 nova_compute[260935]: 2025-10-11 09:20:24.502 2 INFO nova.scheduler.client.report [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance a798e2c3-8294-482d-a073-883121427765
Oct 11 09:20:24 compute-0 nova_compute[260935]: 2025-10-11 09:20:24.589 2 DEBUG oslo_concurrency.lockutils [None req-d3300635-6abd-4679-baca-b5355c6a95df 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "a798e2c3-8294-482d-a073-883121427765" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2357: 321 pgs: 321 active+clean; 407 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 92 KiB/s wr, 64 op/s
Oct 11 09:20:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:20:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:20:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:20:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:20:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:20:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:20:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3744330981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:20:24 compute-0 sshd-session[386926]: Invalid user debian1 from 155.4.244.179 port 33436
Oct 11 09:20:24 compute-0 sshd-session[386926]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:20:24 compute-0 sshd-session[386926]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:20:24 compute-0 nova_compute[260935]: 2025-10-11 09:20:24.981 2 DEBUG nova.compute.manager [req-212cd942-967a-44d1-a302-b4cd770d7703 req-0f521bf4-5749-48da-9cbb-93690379eb0a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-deleted-d5347066-e322-4664-8798-146b9745fa17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:20:24 compute-0 nova_compute[260935]: 2025-10-11 09:20:24.981 2 DEBUG nova.compute.manager [req-212cd942-967a-44d1-a302-b4cd770d7703 req-0f521bf4-5749-48da-9cbb-93690379eb0a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a798e2c3-8294-482d-a073-883121427765] Received event network-vif-deleted-d8f510f6-a87e-4c76-be24-a143f69f564d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:20:25 compute-0 nova_compute[260935]: 2025-10-11 09:20:25.637 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:25 compute-0 nova_compute[260935]: 2025-10-11 09:20:25.638 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:25 compute-0 nova_compute[260935]: 2025-10-11 09:20:25.638 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:25 compute-0 nova_compute[260935]: 2025-10-11 09:20:25.639 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:25 compute-0 nova_compute[260935]: 2025-10-11 09:20:25.639 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:25 compute-0 nova_compute[260935]: 2025-10-11 09:20:25.641 2 INFO nova.compute.manager [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Terminating instance
Oct 11 09:20:25 compute-0 nova_compute[260935]: 2025-10-11 09:20:25.642 2 DEBUG nova.compute.manager [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:20:25 compute-0 nova_compute[260935]: 2025-10-11 09:20:25.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:25 compute-0 kernel: tapdea05614-b0 (unregistering): left promiscuous mode
Oct 11 09:20:25 compute-0 NetworkManager[44960]: <info>  [1760174425.7430] device (tapdea05614-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:20:25 compute-0 ovn_controller[152945]: 2025-10-11T09:20:25Z|01179|binding|INFO|Releasing lport dea05614-b04a-4078-a98e-428065514f37 from this chassis (sb_readonly=0)
Oct 11 09:20:25 compute-0 nova_compute[260935]: 2025-10-11 09:20:25.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:25 compute-0 ovn_controller[152945]: 2025-10-11T09:20:25Z|01180|binding|INFO|Setting lport dea05614-b04a-4078-a98e-428065514f37 down in Southbound
Oct 11 09:20:25 compute-0 ovn_controller[152945]: 2025-10-11T09:20:25Z|01181|binding|INFO|Removing iface tapdea05614-b0 ovn-installed in OVS
Oct 11 09:20:25 compute-0 nova_compute[260935]: 2025-10-11 09:20:25.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:25.771 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:3b:17 10.100.0.3'], port_security=['fa:16:3e:00:3b:17 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '564a3027-0f98-40fd-a495-1c13a103ea39', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a975f5dd-d207-4e6b-9401-45f24b820c2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0785b003-8951-44fc-bd21-f1c5c3076102, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=dea05614-b04a-4078-a98e-428065514f37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:20:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:25.774 162815 INFO neutron.agent.ovn.metadata.agent [-] Port dea05614-b04a-4078-a98e-428065514f37 in datapath 6346ea52-07fc-49ad-8f2d-fcfed9769241 unbound from our chassis
Oct 11 09:20:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:25.777 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6346ea52-07fc-49ad-8f2d-fcfed9769241, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:20:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:25.780 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9d163b-d82b-45a5-aa2c-c91147fdfa1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:25.781 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241 namespace which is not needed anymore
Oct 11 09:20:25 compute-0 kernel: tapc8bc0542-b1 (unregistering): left promiscuous mode
Oct 11 09:20:25 compute-0 NetworkManager[44960]: <info>  [1760174425.7889] device (tapc8bc0542-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:20:25 compute-0 nova_compute[260935]: 2025-10-11 09:20:25.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:25 compute-0 ovn_controller[152945]: 2025-10-11T09:20:25Z|01182|binding|INFO|Releasing lport c8bc0542-b12a-4eb6-898d-5fd663184c82 from this chassis (sb_readonly=0)
Oct 11 09:20:25 compute-0 ovn_controller[152945]: 2025-10-11T09:20:25Z|01183|binding|INFO|Setting lport c8bc0542-b12a-4eb6-898d-5fd663184c82 down in Southbound
Oct 11 09:20:25 compute-0 ovn_controller[152945]: 2025-10-11T09:20:25Z|01184|binding|INFO|Removing iface tapc8bc0542-b1 ovn-installed in OVS
Oct 11 09:20:25 compute-0 nova_compute[260935]: 2025-10-11 09:20:25.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:25.818 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:b9:1a 2001:db8::f816:3eff:fe78:b91a'], port_security=['fa:16:3e:78:b9:1a 2001:db8::f816:3eff:fe78:b91a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe78:b91a/64', 'neutron:device_id': '564a3027-0f98-40fd-a495-1c13a103ea39', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a975f5dd-d207-4e6b-9401-45f24b820c2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7413cee9-eaf1-4090-8a8e-9bf9dac36a0a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c8bc0542-b12a-4eb6-898d-5fd663184c82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:20:25 compute-0 nova_compute[260935]: 2025-10-11 09:20:25.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:25 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d00000072.scope: Deactivated successfully.
Oct 11 09:20:25 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d00000072.scope: Consumed 16.944s CPU time.
Oct 11 09:20:25 compute-0 systemd-machined[215705]: Machine qemu-137-instance-00000072 terminated.
Oct 11 09:20:25 compute-0 ceph-mon[74313]: pgmap v2357: 321 pgs: 321 active+clean; 407 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 92 KiB/s wr, 64 op/s
Oct 11 09:20:25 compute-0 neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241[384643]: [NOTICE]   (384647) : haproxy version is 2.8.14-c23fe91
Oct 11 09:20:25 compute-0 neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241[384643]: [NOTICE]   (384647) : path to executable is /usr/sbin/haproxy
Oct 11 09:20:25 compute-0 neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241[384643]: [WARNING]  (384647) : Exiting Master process...
Oct 11 09:20:25 compute-0 neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241[384643]: [ALERT]    (384647) : Current worker (384649) exited with code 143 (Terminated)
Oct 11 09:20:25 compute-0 neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241[384643]: [WARNING]  (384647) : All workers exited. Exiting... (0)
Oct 11 09:20:25 compute-0 systemd[1]: libpod-35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2.scope: Deactivated successfully.
Oct 11 09:20:25 compute-0 podman[386978]: 2025-10-11 09:20:25.944718579 +0000 UTC m=+0.044272571 container died 35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 09:20:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-adfdfffff4362d598c2ef7afc8f136cd17a5b46b0bac4f37ab2cb909f4afc6f2-merged.mount: Deactivated successfully.
Oct 11 09:20:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2-userdata-shm.mount: Deactivated successfully.
Oct 11 09:20:25 compute-0 podman[386978]: 2025-10-11 09:20:25.983536854 +0000 UTC m=+0.083090846 container cleanup 35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:20:25 compute-0 systemd[1]: libpod-conmon-35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2.scope: Deactivated successfully.
Oct 11 09:20:26 compute-0 podman[387009]: 2025-10-11 09:20:26.037446666 +0000 UTC m=+0.036196993 container remove 35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.043 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[09178a5b-cae3-480f-be65-3564c85489d5]: (4, ('Sat Oct 11 09:20:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241 (35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2)\n35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2\nSat Oct 11 09:20:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241 (35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2)\n35a4ceb1b8073f14fe148299f689cfe5109498accc2ce93b22f7a21ca56c96a2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.044 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[12f98348-0fa3-4b1e-98da-8a7c2a0a1407]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.045 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6346ea52-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:26 compute-0 kernel: tap6346ea52-00: left promiscuous mode
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:26 compute-0 NetworkManager[44960]: <info>  [1760174426.0714] manager: (tapdea05614-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/477)
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.076 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[241f2d6f-9a14-41c7-be75-8c3f7c5a7ef0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.108 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[21e1ade7-6ce0-4b8b-9cec-ce12e3fc04c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.109 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bda1745d-a4ad-422c-8f41-7b255b780d1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.114 2 INFO nova.virt.libvirt.driver [-] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Instance destroyed successfully.
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.115 2 DEBUG nova.objects.instance [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 564a3027-0f98-40fd-a495-1c13a103ea39 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.124 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6579c86a-eb5b-42eb-b111-5d3050e47eb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618836, 'reachable_time': 33711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387044, 'error': None, 'target': 'ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d6346ea52\x2d07fc\x2d49ad\x2d8f2d\x2dfcfed9769241.mount: Deactivated successfully.
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.128 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6346ea52-07fc-49ad-8f2d-fcfed9769241 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.128 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[a58e8790-a299-428b-8438-4b79bcbf97d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.129 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c8bc0542-b12a-4eb6-898d-5fd663184c82 in datapath ce353a4c-7280-46bb-ac7d-157bb5dc08cb unbound from our chassis
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.130 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce353a4c-7280-46bb-ac7d-157bb5dc08cb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.131 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[863d3e94-1fb7-46db-9433-d9bbe9fce57b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.131 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb namespace which is not needed anymore
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.152 2 DEBUG nova.virt.libvirt.vif [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:18:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1664284742',display_name='tempest-TestGettingAddress-server-1664284742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1664284742',id=114,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:19:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-2sdrmv3t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:19:05Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=564a3027-0f98-40fd-a495-1c13a103ea39,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.153 2 DEBUG nova.network.os_vif_util [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.154 2 DEBUG nova.network.os_vif_util [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:3b:17,bridge_name='br-int',has_traffic_filtering=True,id=dea05614-b04a-4078-a98e-428065514f37,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea05614-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.154 2 DEBUG os_vif [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:3b:17,bridge_name='br-int',has_traffic_filtering=True,id=dea05614-b04a-4078-a98e-428065514f37,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea05614-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.155 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdea05614-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.163 2 INFO os_vif [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:3b:17,bridge_name='br-int',has_traffic_filtering=True,id=dea05614-b04a-4078-a98e-428065514f37,network=Network(6346ea52-07fc-49ad-8f2d-fcfed9769241),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea05614-b0')
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.164 2 DEBUG nova.virt.libvirt.vif [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:18:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1664284742',display_name='tempest-TestGettingAddress-server-1664284742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1664284742',id=114,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJTFYLtktGdqXo2IJ/ShBWfWgvaV4+fWmolxALneU1gDJL8dLDtvTdIhs+PbG9YcPYaBAkq6yl21bo4TTyj4znAX7oza+01Fop0j7H2jGDTRbWSF88RRRnjrHtRpLFrBeg==',key_name='tempest-TestGettingAddress-2133650400',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:19:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-2sdrmv3t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:19:05Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=564a3027-0f98-40fd-a495-1c13a103ea39,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.164 2 DEBUG nova.network.os_vif_util [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.165 2 DEBUG nova.network.os_vif_util [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:b9:1a,bridge_name='br-int',has_traffic_filtering=True,id=c8bc0542-b12a-4eb6-898d-5fd663184c82,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8bc0542-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.165 2 DEBUG os_vif [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:b9:1a,bridge_name='br-int',has_traffic_filtering=True,id=c8bc0542-b12a-4eb6-898d-5fd663184c82,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8bc0542-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.169 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8bc0542-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.177 2 INFO os_vif [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:b9:1a,bridge_name='br-int',has_traffic_filtering=True,id=c8bc0542-b12a-4eb6-898d-5fd663184c82,network=Network(ce353a4c-7280-46bb-ac7d-157bb5dc08cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8bc0542-b1')
Oct 11 09:20:26 compute-0 neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb[384716]: [NOTICE]   (384720) : haproxy version is 2.8.14-c23fe91
Oct 11 09:20:26 compute-0 neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb[384716]: [NOTICE]   (384720) : path to executable is /usr/sbin/haproxy
Oct 11 09:20:26 compute-0 neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb[384716]: [WARNING]  (384720) : Exiting Master process...
Oct 11 09:20:26 compute-0 neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb[384716]: [ALERT]    (384720) : Current worker (384722) exited with code 143 (Terminated)
Oct 11 09:20:26 compute-0 neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb[384716]: [WARNING]  (384720) : All workers exited. Exiting... (0)
Oct 11 09:20:26 compute-0 systemd[1]: libpod-4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4.scope: Deactivated successfully.
Oct 11 09:20:26 compute-0 podman[387081]: 2025-10-11 09:20:26.283103007 +0000 UTC m=+0.052799510 container died 4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 09:20:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4-userdata-shm.mount: Deactivated successfully.
Oct 11 09:20:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8c33ef9ddc9558d0573d134c73c574d03ca7402b4d4ad1531c574c341a884ee-merged.mount: Deactivated successfully.
Oct 11 09:20:26 compute-0 podman[387081]: 2025-10-11 09:20:26.332540613 +0000 UTC m=+0.102237106 container cleanup 4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.339 2 DEBUG nova.compute.manager [req-e5494a93-4bff-4c5b-a2fc-d666c6c6c124 req-1e8eb111-8a45-4529-92c7-97ce01cc4bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-changed-dea05614-b04a-4078-a98e-428065514f37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.339 2 DEBUG nova.compute.manager [req-e5494a93-4bff-4c5b-a2fc-d666c6c6c124 req-1e8eb111-8a45-4529-92c7-97ce01cc4bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Refreshing instance network info cache due to event network-changed-dea05614-b04a-4078-a98e-428065514f37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.340 2 DEBUG oslo_concurrency.lockutils [req-e5494a93-4bff-4c5b-a2fc-d666c6c6c124 req-1e8eb111-8a45-4529-92c7-97ce01cc4bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.340 2 DEBUG oslo_concurrency.lockutils [req-e5494a93-4bff-4c5b-a2fc-d666c6c6c124 req-1e8eb111-8a45-4529-92c7-97ce01cc4bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.340 2 DEBUG nova.network.neutron [req-e5494a93-4bff-4c5b-a2fc-d666c6c6c124 req-1e8eb111-8a45-4529-92c7-97ce01cc4bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Refreshing network info cache for port dea05614-b04a-4078-a98e-428065514f37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:20:26 compute-0 systemd[1]: libpod-conmon-4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4.scope: Deactivated successfully.
Oct 11 09:20:26 compute-0 podman[387111]: 2025-10-11 09:20:26.398242837 +0000 UTC m=+0.041872243 container remove 4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.408 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7216be97-223e-4860-9a1a-ed4ac0fadc5f]: (4, ('Sat Oct 11 09:20:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb (4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4)\n4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4\nSat Oct 11 09:20:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb (4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4)\n4f3f77c60f2bcd18ca6d28297f83f5e108db244fbb44511df252c2033ac620b4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.410 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0d04a085-7210-43a9-9ae8-3b80401a3370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.410 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce353a4c-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:26 compute-0 kernel: tapce353a4c-70: left promiscuous mode
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.445 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a7bd3284-ef5d-4065-a638-b7ec4820d3e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.476 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0432fe86-c177-46bf-af90-242208c4a69a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.477 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cb2a0a32-d6ae-4ae8-ab50-9f59bb743587]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.506 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[04987f3d-918d-45ec-95fb-93fdd24ec345]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618938, 'reachable_time': 31550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387128, 'error': None, 'target': 'ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.508 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce353a4c-7280-46bb-ac7d-157bb5dc08cb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:20:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:26.508 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ae5bb6-f5d6-4530-9af3-a6c16f5f1b81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.575 2 INFO nova.virt.libvirt.driver [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Deleting instance files /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39_del
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.576 2 INFO nova.virt.libvirt.driver [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Deletion of /var/lib/nova/instances/564a3027-0f98-40fd-a495-1c13a103ea39_del complete
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.663 2 INFO nova.compute.manager [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Took 1.02 seconds to destroy the instance on the hypervisor.
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.664 2 DEBUG oslo.service.loopingcall [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.665 2 DEBUG nova.compute.manager [-] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.665 2 DEBUG nova.network.neutron [-] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:20:26 compute-0 nova_compute[260935]: 2025-10-11 09:20:26.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:20:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2825321102' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:20:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:20:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2825321102' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:20:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2358: 321 pgs: 321 active+clean; 407 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 23 KiB/s wr, 59 op/s
Oct 11 09:20:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2825321102' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:20:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2825321102' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:20:26 compute-0 systemd[1]: run-netns-ovnmeta\x2dce353a4c\x2d7280\x2d46bb\x2dac7d\x2d157bb5dc08cb.mount: Deactivated successfully.
Oct 11 09:20:27 compute-0 sshd-session[386926]: Failed password for invalid user debian1 from 155.4.244.179 port 33436 ssh2
Oct 11 09:20:27 compute-0 nova_compute[260935]: 2025-10-11 09:20:27.390 2 DEBUG nova.compute.manager [req-e5299af2-6ff5-4d1f-81ab-bf5ad8dbe1f7 req-d3c4c889-306e-46f5-8d9a-646245d2d510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-vif-deleted-c8bc0542-b12a-4eb6-898d-5fd663184c82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:20:27 compute-0 nova_compute[260935]: 2025-10-11 09:20:27.391 2 INFO nova.compute.manager [req-e5299af2-6ff5-4d1f-81ab-bf5ad8dbe1f7 req-d3c4c889-306e-46f5-8d9a-646245d2d510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Neutron deleted interface c8bc0542-b12a-4eb6-898d-5fd663184c82; detaching it from the instance and deleting it from the info cache
Oct 11 09:20:27 compute-0 nova_compute[260935]: 2025-10-11 09:20:27.392 2 DEBUG nova.network.neutron [req-e5299af2-6ff5-4d1f-81ab-bf5ad8dbe1f7 req-d3c4c889-306e-46f5-8d9a-646245d2d510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updating instance_info_cache with network_info: [{"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:20:27 compute-0 nova_compute[260935]: 2025-10-11 09:20:27.415 2 DEBUG nova.compute.manager [req-e5299af2-6ff5-4d1f-81ab-bf5ad8dbe1f7 req-d3c4c889-306e-46f5-8d9a-646245d2d510 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Detach interface failed, port_id=c8bc0542-b12a-4eb6-898d-5fd663184c82, reason: Instance 564a3027-0f98-40fd-a495-1c13a103ea39 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 09:20:27 compute-0 nova_compute[260935]: 2025-10-11 09:20:27.673 2 DEBUG nova.network.neutron [req-e5494a93-4bff-4c5b-a2fc-d666c6c6c124 req-1e8eb111-8a45-4529-92c7-97ce01cc4bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updated VIF entry in instance network info cache for port dea05614-b04a-4078-a98e-428065514f37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:20:27 compute-0 nova_compute[260935]: 2025-10-11 09:20:27.673 2 DEBUG nova.network.neutron [req-e5494a93-4bff-4c5b-a2fc-d666c6c6c124 req-1e8eb111-8a45-4529-92c7-97ce01cc4bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updating instance_info_cache with network_info: [{"id": "dea05614-b04a-4078-a98e-428065514f37", "address": "fa:16:3e:00:3b:17", "network": {"id": "6346ea52-07fc-49ad-8f2d-fcfed9769241", "bridge": "br-int", "label": "tempest-network-smoke--1742332739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea05614-b0", "ovs_interfaceid": "dea05614-b04a-4078-a98e-428065514f37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "address": "fa:16:3e:78:b9:1a", "network": {"id": "ce353a4c-7280-46bb-ac7d-157bb5dc08cb", "bridge": "br-int", "label": "tempest-network-smoke--1463782365", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:b91a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8bc0542-b1", "ovs_interfaceid": "c8bc0542-b12a-4eb6-898d-5fd663184c82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:20:27 compute-0 nova_compute[260935]: 2025-10-11 09:20:27.702 2 DEBUG oslo_concurrency.lockutils [req-e5494a93-4bff-4c5b-a2fc-d666c6c6c124 req-1e8eb111-8a45-4529-92c7-97ce01cc4bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-564a3027-0f98-40fd-a495-1c13a103ea39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:20:27 compute-0 nova_compute[260935]: 2025-10-11 09:20:27.888 2 DEBUG nova.network.neutron [-] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:20:27 compute-0 nova_compute[260935]: 2025-10-11 09:20:27.915 2 INFO nova.compute.manager [-] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Took 1.25 seconds to deallocate network for instance.
Oct 11 09:20:27 compute-0 ceph-mon[74313]: pgmap v2358: 321 pgs: 321 active+clean; 407 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 23 KiB/s wr, 59 op/s
Oct 11 09:20:27 compute-0 nova_compute[260935]: 2025-10-11 09:20:27.984 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:27 compute-0 nova_compute[260935]: 2025-10-11 09:20:27.985 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.086 2 DEBUG oslo_concurrency.processutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.449 2 DEBUG nova.compute.manager [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-vif-unplugged-dea05614-b04a-4078-a98e-428065514f37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.450 2 DEBUG oslo_concurrency.lockutils [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.450 2 DEBUG oslo_concurrency.lockutils [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.451 2 DEBUG oslo_concurrency.lockutils [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.451 2 DEBUG nova.compute.manager [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] No waiting events found dispatching network-vif-unplugged-dea05614-b04a-4078-a98e-428065514f37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.451 2 WARNING nova.compute.manager [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received unexpected event network-vif-unplugged-dea05614-b04a-4078-a98e-428065514f37 for instance with vm_state deleted and task_state None.
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.452 2 DEBUG nova.compute.manager [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.452 2 DEBUG oslo_concurrency.lockutils [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.452 2 DEBUG oslo_concurrency.lockutils [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.453 2 DEBUG oslo_concurrency.lockutils [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.453 2 DEBUG nova.compute.manager [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] No waiting events found dispatching network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.454 2 WARNING nova.compute.manager [req-b29cc371-a48e-493d-8ca7-db4da13e5bc9 req-9947a489-6649-4869-bef4-f0331b95f83c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received unexpected event network-vif-plugged-dea05614-b04a-4078-a98e-428065514f37 for instance with vm_state deleted and task_state None.
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.479 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174413.4776118, 69860e17-caac-461a-a4a5-34ca72c0ee09 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.479 2 INFO nova.compute.manager [-] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] VM Stopped (Lifecycle Event)
Oct 11 09:20:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:20:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/431750741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.533 2 DEBUG nova.compute.manager [None req-20a5a97f-cf48-4e8b-b6e5-6097bb9d0561 - - - - - -] [instance: 69860e17-caac-461a-a4a5-34ca72c0ee09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.535 2 DEBUG oslo_concurrency.processutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.547 2 DEBUG nova.compute.provider_tree [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:20:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2359: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 29 KiB/s wr, 87 op/s
Oct 11 09:20:28 compute-0 nova_compute[260935]: 2025-10-11 09:20:28.774 2 DEBUG nova.scheduler.client.report [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:20:28 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/431750741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:28.937758) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174428937899, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1362, "num_deletes": 251, "total_data_size": 2055089, "memory_usage": 2088912, "flush_reason": "Manual Compaction"}
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174428951584, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 2023529, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48332, "largest_seqno": 49693, "table_properties": {"data_size": 2017140, "index_size": 3592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13616, "raw_average_key_size": 19, "raw_value_size": 2004290, "raw_average_value_size": 2938, "num_data_blocks": 161, "num_entries": 682, "num_filter_entries": 682, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760174294, "oldest_key_time": 1760174294, "file_creation_time": 1760174428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 13874 microseconds, and 8536 cpu microseconds.
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:28.951645) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 2023529 bytes OK
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:28.951670) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:28.955039) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:28.955069) EVENT_LOG_v1 {"time_micros": 1760174428955060, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:28.955094) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 2049000, prev total WAL file size 2049000, number of live WAL files 2.
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:28.956251) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1976KB)], [113(8063KB)]
Oct 11 09:20:28 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174428956333, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10280056, "oldest_snapshot_seqno": -1}
Oct 11 09:20:29 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6875 keys, 8525301 bytes, temperature: kUnknown
Oct 11 09:20:29 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174429017044, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8525301, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8480929, "index_size": 26084, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 179458, "raw_average_key_size": 26, "raw_value_size": 8359291, "raw_average_value_size": 1215, "num_data_blocks": 1014, "num_entries": 6875, "num_filter_entries": 6875, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760174428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:20:29 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:20:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:29.020130) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8525301 bytes
Oct 11 09:20:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:29.022408) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.1 rd, 140.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.9 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(9.3) write-amplify(4.2) OK, records in: 7389, records dropped: 514 output_compression: NoCompression
Oct 11 09:20:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:29.022440) EVENT_LOG_v1 {"time_micros": 1760174429022425, "job": 68, "event": "compaction_finished", "compaction_time_micros": 60801, "compaction_time_cpu_micros": 39573, "output_level": 6, "num_output_files": 1, "total_output_size": 8525301, "num_input_records": 7389, "num_output_records": 6875, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:20:29 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:20:29 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174429023235, "job": 68, "event": "table_file_deletion", "file_number": 115}
Oct 11 09:20:29 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:20:29 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174429026333, "job": 68, "event": "table_file_deletion", "file_number": 113}
Oct 11 09:20:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:28.956091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:20:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:29.026378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:20:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:29.026387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:20:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:29.026390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:20:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:29.026394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:20:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:20:29.026396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:20:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:20:29 compute-0 sshd-session[386926]: Received disconnect from 155.4.244.179 port 33436:11: Bye Bye [preauth]
Oct 11 09:20:29 compute-0 sshd-session[386926]: Disconnected from invalid user debian1 155.4.244.179 port 33436 [preauth]
Oct 11 09:20:29 compute-0 nova_compute[260935]: 2025-10-11 09:20:29.285 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:29 compute-0 nova_compute[260935]: 2025-10-11 09:20:29.340 2 INFO nova.scheduler.client.report [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 564a3027-0f98-40fd-a495-1c13a103ea39
Oct 11 09:20:29 compute-0 nova_compute[260935]: 2025-10-11 09:20:29.450 2 DEBUG oslo_concurrency.lockutils [None req-e62ae1d8-0e00-4340-a2dd-168d2362cf9c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "564a3027-0f98-40fd-a495-1c13a103ea39" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:29 compute-0 nova_compute[260935]: 2025-10-11 09:20:29.500 2 DEBUG nova.compute.manager [req-ccafee76-3b93-4d3a-a9c3-9b05824256ad req-3636d77e-858a-4992-a6a7-c26a409da628 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Received event network-vif-deleted-dea05614-b04a-4078-a98e-428065514f37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:20:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:29.672 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:20:29 compute-0 nova_compute[260935]: 2025-10-11 09:20:29.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:29.675 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:20:29 compute-0 nova_compute[260935]: 2025-10-11 09:20:29.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:20:29 compute-0 ceph-mon[74313]: pgmap v2359: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 29 KiB/s wr, 87 op/s
Oct 11 09:20:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:30.678 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:30 compute-0 nova_compute[260935]: 2025-10-11 09:20:30.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:20:30 compute-0 nova_compute[260935]: 2025-10-11 09:20:30.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:20:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2360: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Oct 11 09:20:31 compute-0 nova_compute[260935]: 2025-10-11 09:20:31.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:31 compute-0 nova_compute[260935]: 2025-10-11 09:20:31.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:31 compute-0 ceph-mon[74313]: pgmap v2360: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Oct 11 09:20:32 compute-0 nova_compute[260935]: 2025-10-11 09:20:32.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:32 compute-0 nova_compute[260935]: 2025-10-11 09:20:32.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:20:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2361: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Oct 11 09:20:33 compute-0 unix_chkpwd[387153]: password check failed for user (root)
Oct 11 09:20:33 compute-0 sshd-session[387151]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:20:33 compute-0 ovn_controller[152945]: 2025-10-11T09:20:33Z|01185|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:20:33 compute-0 ovn_controller[152945]: 2025-10-11T09:20:33Z|01186|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:20:33 compute-0 nova_compute[260935]: 2025-10-11 09:20:33.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:33 compute-0 ovn_controller[152945]: 2025-10-11T09:20:33Z|01187|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:20:33 compute-0 ovn_controller[152945]: 2025-10-11T09:20:33Z|01188|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:20:33 compute-0 nova_compute[260935]: 2025-10-11 09:20:33.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:33 compute-0 ceph-mon[74313]: pgmap v2361: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Oct 11 09:20:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:20:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2362: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 5.5 KiB/s wr, 30 op/s
Oct 11 09:20:35 compute-0 sshd-session[387151]: Failed password for root from 165.232.82.252 port 53418 ssh2
Oct 11 09:20:35 compute-0 sshd-session[387151]: Connection closed by authenticating user root 165.232.82.252 port 53418 [preauth]
Oct 11 09:20:35 compute-0 nova_compute[260935]: 2025-10-11 09:20:35.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:20:35 compute-0 nova_compute[260935]: 2025-10-11 09:20:35.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:20:35 compute-0 nova_compute[260935]: 2025-10-11 09:20:35.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:20:35 compute-0 podman[387155]: 2025-10-11 09:20:35.785743818 +0000 UTC m=+0.081783909 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 11 09:20:35 compute-0 ceph-mon[74313]: pgmap v2362: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 5.5 KiB/s wr, 30 op/s
Oct 11 09:20:36 compute-0 nova_compute[260935]: 2025-10-11 09:20:36.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:36 compute-0 nova_compute[260935]: 2025-10-11 09:20:36.306 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174421.304581, a798e2c3-8294-482d-a073-883121427765 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:20:36 compute-0 nova_compute[260935]: 2025-10-11 09:20:36.306 2 INFO nova.compute.manager [-] [instance: a798e2c3-8294-482d-a073-883121427765] VM Stopped (Lifecycle Event)
Oct 11 09:20:36 compute-0 nova_compute[260935]: 2025-10-11 09:20:36.348 2 DEBUG nova.compute.manager [None req-80a8a35b-6f75-4be8-83b5-4f78a0e28fa0 - - - - - -] [instance: a798e2c3-8294-482d-a073-883121427765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:20:36 compute-0 nova_compute[260935]: 2025-10-11 09:20:36.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:36 compute-0 nova_compute[260935]: 2025-10-11 09:20:36.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:20:36 compute-0 nova_compute[260935]: 2025-10-11 09:20:36.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:20:36 compute-0 nova_compute[260935]: 2025-10-11 09:20:36.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 09:20:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2363: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 27 op/s
Oct 11 09:20:37 compute-0 nova_compute[260935]: 2025-10-11 09:20:37.077 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:20:37 compute-0 nova_compute[260935]: 2025-10-11 09:20:37.078 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:20:37 compute-0 nova_compute[260935]: 2025-10-11 09:20:37.078 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:20:37 compute-0 nova_compute[260935]: 2025-10-11 09:20:37.079 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:20:37 compute-0 ceph-mon[74313]: pgmap v2363: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 27 op/s
Oct 11 09:20:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2364: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 27 op/s
Oct 11 09:20:38 compute-0 nova_compute[260935]: 2025-10-11 09:20:38.788 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:20:38 compute-0 nova_compute[260935]: 2025-10-11 09:20:38.812 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:20:38 compute-0 nova_compute[260935]: 2025-10-11 09:20:38.813 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:20:38 compute-0 nova_compute[260935]: 2025-10-11 09:20:38.814 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:20:38 compute-0 nova_compute[260935]: 2025-10-11 09:20:38.840 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:38 compute-0 nova_compute[260935]: 2025-10-11 09:20:38.840 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:38 compute-0 nova_compute[260935]: 2025-10-11 09:20:38.841 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:38 compute-0 nova_compute[260935]: 2025-10-11 09:20:38.841 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:20:38 compute-0 nova_compute[260935]: 2025-10-11 09:20:38.841 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:20:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:20:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:20:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/596815401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.284 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.409 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.410 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.411 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.416 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.417 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.423 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.423 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.515 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.516 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.533 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.604 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.604 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.612 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.612 2 INFO nova.compute.claims [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.700 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.701 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2951MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.701 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:39 compute-0 nova_compute[260935]: 2025-10-11 09:20:39.901 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:20:39 compute-0 ceph-mon[74313]: pgmap v2364: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 27 op/s
Oct 11 09:20:39 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/596815401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:20:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:20:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2499093858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.388 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.398 2 DEBUG nova.compute.provider_tree [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.416 2 DEBUG nova.scheduler.client.report [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.445 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.446 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.451 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.519 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.519 2 DEBUG nova.network.neutron [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.544 2 INFO nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.551 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.552 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.552 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.552 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.552 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.553 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.564 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.656 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.658 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.658 2 INFO nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Creating image(s)
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.692 2 DEBUG nova.storage.rbd_utils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.726 2 DEBUG nova.storage.rbd_utils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:20:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2365: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.754 2 DEBUG nova.storage.rbd_utils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.758 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.799 2 DEBUG nova.policy [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.828 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.864 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.865 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.866 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.866 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.890 2 DEBUG nova.storage.rbd_utils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:20:40 compute-0 nova_compute[260935]: 2025-10-11 09:20:40.893 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:20:40 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2499093858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.113 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174426.111461, 564a3027-0f98-40fd-a495-1c13a103ea39 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.114 2 INFO nova.compute.manager [-] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] VM Stopped (Lifecycle Event)
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.145 2 DEBUG nova.compute.manager [None req-d45f3af2-02cb-461d-a8c3-b95ef63b8bda - - - - - -] [instance: 564a3027-0f98-40fd-a495-1c13a103ea39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.222 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:20:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:20:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3269150857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.282 2 DEBUG nova.storage.rbd_utils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.310 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.315 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.333 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.368 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.368 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.375 2 DEBUG nova.objects.instance [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.389 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.390 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Ensure instance console log exists: /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.390 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.391 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.391 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:41 compute-0 nova_compute[260935]: 2025-10-11 09:20:41.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:41 compute-0 podman[387407]: 2025-10-11 09:20:41.78507813 +0000 UTC m=+0.080585565 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 09:20:41 compute-0 ceph-mon[74313]: pgmap v2365: 321 pgs: 321 active+clean; 328 MiB data, 912 MiB used, 59 GiB / 60 GiB avail
Oct 11 09:20:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3269150857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:20:42 compute-0 nova_compute[260935]: 2025-10-11 09:20:42.299 2 DEBUG nova.network.neutron [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Successfully created port: 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:20:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2366: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:20:43 compute-0 ceph-mon[74313]: pgmap v2366: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:20:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:20:44 compute-0 nova_compute[260935]: 2025-10-11 09:20:44.490 2 DEBUG nova.network.neutron [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Successfully updated port: 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:20:44 compute-0 nova_compute[260935]: 2025-10-11 09:20:44.506 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:20:44 compute-0 nova_compute[260935]: 2025-10-11 09:20:44.507 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:20:44 compute-0 nova_compute[260935]: 2025-10-11 09:20:44.507 2 DEBUG nova.network.neutron [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:20:44 compute-0 nova_compute[260935]: 2025-10-11 09:20:44.699 2 DEBUG nova.compute.manager [req-479b6167-2282-44c4-af07-5c2a352e1f31 req-4fc599a1-4ae1-44b0-9f35-47fe39e5ab62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-changed-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:20:44 compute-0 nova_compute[260935]: 2025-10-11 09:20:44.700 2 DEBUG nova.compute.manager [req-479b6167-2282-44c4-af07-5c2a352e1f31 req-4fc599a1-4ae1-44b0-9f35-47fe39e5ab62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing instance network info cache due to event network-changed-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:20:44 compute-0 nova_compute[260935]: 2025-10-11 09:20:44.700 2 DEBUG oslo_concurrency.lockutils [req-479b6167-2282-44c4-af07-5c2a352e1f31 req-4fc599a1-4ae1-44b0-9f35-47fe39e5ab62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:20:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2367: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:20:44 compute-0 nova_compute[260935]: 2025-10-11 09:20:44.910 2 DEBUG nova.network.neutron [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:20:45 compute-0 sudo[387427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:20:45 compute-0 sudo[387427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:45 compute-0 sudo[387427]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:45 compute-0 sudo[387452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:20:45 compute-0 sudo[387452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:45 compute-0 sudo[387452]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:45 compute-0 ceph-mon[74313]: pgmap v2367: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:20:45 compute-0 sudo[387477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:20:45 compute-0 sudo[387477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:45 compute-0 sudo[387477]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:45 compute-0 sudo[387514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:20:45 compute-0 sudo[387514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:45 compute-0 podman[387501]: 2025-10-11 09:20:45.927579564 +0000 UTC m=+0.070642374 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct 11 09:20:45 compute-0 podman[387502]: 2025-10-11 09:20:45.969874098 +0000 UTC m=+0.109756489 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 09:20:46 compute-0 nova_compute[260935]: 2025-10-11 09:20:46.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:46 compute-0 sudo[387514]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:20:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:20:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:20:46 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:20:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:20:46 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:20:46 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 2138afd5-e370-436a-a9d5-d0530a47daa6 does not exist
Oct 11 09:20:46 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ca56203b-81c3-4232-8b9b-30f399d35282 does not exist
Oct 11 09:20:46 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 0a1b985c-494b-457d-9592-646f0045b60b does not exist
Oct 11 09:20:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:20:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:20:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:20:46 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:20:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:20:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:20:46 compute-0 sudo[387606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:20:46 compute-0 sudo[387606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:46 compute-0 sudo[387606]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:46 compute-0 sudo[387631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:20:46 compute-0 sudo[387631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:46 compute-0 sudo[387631]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:46 compute-0 nova_compute[260935]: 2025-10-11 09:20:46.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2368: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:20:46 compute-0 sudo[387656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:20:46 compute-0 sudo[387656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:46 compute-0 sudo[387656]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:20:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:20:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:20:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:20:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:20:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:20:46 compute-0 sudo[387681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:20:46 compute-0 sudo[387681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:47 compute-0 podman[387748]: 2025-10-11 09:20:47.318848877 +0000 UTC m=+0.066603110 container create b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:20:47 compute-0 systemd[1]: Started libpod-conmon-b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e.scope.
Oct 11 09:20:47 compute-0 podman[387748]: 2025-10-11 09:20:47.292300278 +0000 UTC m=+0.040054561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:20:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:20:47 compute-0 podman[387748]: 2025-10-11 09:20:47.41639257 +0000 UTC m=+0.164146793 container init b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 09:20:47 compute-0 podman[387748]: 2025-10-11 09:20:47.426898066 +0000 UTC m=+0.174652259 container start b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:20:47 compute-0 podman[387748]: 2025-10-11 09:20:47.431023133 +0000 UTC m=+0.178777406 container attach b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:20:47 compute-0 fervent_vaughan[387765]: 167 167
Oct 11 09:20:47 compute-0 systemd[1]: libpod-b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e.scope: Deactivated successfully.
Oct 11 09:20:47 compute-0 conmon[387765]: conmon b25160fe3a32ae798afb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e.scope/container/memory.events
Oct 11 09:20:47 compute-0 podman[387748]: 2025-10-11 09:20:47.44829272 +0000 UTC m=+0.196047003 container died b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 09:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-76c56d0ec68e3f5ecf415bf9a92c9198305d5baefd2cfc559431c8dc118f6dc6-merged.mount: Deactivated successfully.
Oct 11 09:20:47 compute-0 podman[387748]: 2025-10-11 09:20:47.503007074 +0000 UTC m=+0.250761297 container remove b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 09:20:47 compute-0 systemd[1]: libpod-conmon-b25160fe3a32ae798afb3621636f6c35381edba3da183a0c33e229c23cb4108e.scope: Deactivated successfully.
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.587 2 DEBUG nova.network.neutron [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.620 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.620 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Instance network_info: |[{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.621 2 DEBUG oslo_concurrency.lockutils [req-479b6167-2282-44c4-af07-5c2a352e1f31 req-4fc599a1-4ae1-44b0-9f35-47fe39e5ab62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.621 2 DEBUG nova.network.neutron [req-479b6167-2282-44c4-af07-5c2a352e1f31 req-4fc599a1-4ae1-44b0-9f35-47fe39e5ab62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing network info cache for port 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.625 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Start _get_guest_xml network_info=[{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.629 2 WARNING nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.635 2 DEBUG nova.virt.libvirt.host [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.636 2 DEBUG nova.virt.libvirt.host [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.640 2 DEBUG nova.virt.libvirt.host [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.641 2 DEBUG nova.virt.libvirt.host [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.641 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.642 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.642 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.643 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.643 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.644 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.644 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.645 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.645 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.645 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.646 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.646 2 DEBUG nova.virt.hardware [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:20:47 compute-0 nova_compute[260935]: 2025-10-11 09:20:47.650 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:20:47 compute-0 podman[387788]: 2025-10-11 09:20:47.736578145 +0000 UTC m=+0.063159253 container create dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 11 09:20:47 compute-0 systemd[1]: Started libpod-conmon-dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17.scope.
Oct 11 09:20:47 compute-0 ceph-mon[74313]: pgmap v2368: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:20:47 compute-0 podman[387788]: 2025-10-11 09:20:47.71690051 +0000 UTC m=+0.043481668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:20:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b218df1b0c249dbd0cf9056236c12ca02effddc8b206c7e18241a0b2e27a1681/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b218df1b0c249dbd0cf9056236c12ca02effddc8b206c7e18241a0b2e27a1681/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b218df1b0c249dbd0cf9056236c12ca02effddc8b206c7e18241a0b2e27a1681/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b218df1b0c249dbd0cf9056236c12ca02effddc8b206c7e18241a0b2e27a1681/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b218df1b0c249dbd0cf9056236c12ca02effddc8b206c7e18241a0b2e27a1681/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:20:47 compute-0 podman[387788]: 2025-10-11 09:20:47.844746477 +0000 UTC m=+0.171327665 container init dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:20:47 compute-0 podman[387788]: 2025-10-11 09:20:47.858743232 +0000 UTC m=+0.185324380 container start dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:20:47 compute-0 podman[387788]: 2025-10-11 09:20:47.862939081 +0000 UTC m=+0.189520229 container attach dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 09:20:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:20:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/443230958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.109 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.143 2 DEBUG nova.storage.rbd_utils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.148 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:20:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:20:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3789507855' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.615 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.619 2 DEBUG nova.virt.libvirt.vif [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:20:40Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.620 2 DEBUG nova.network.os_vif_util [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.622 2 DEBUG nova.network.os_vif_util [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:a7:f5,bridge_name='br-int',has_traffic_filtering=True,id=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5,network=Network(6fb90d02-96cd-4920-92ac-462cc457cb11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aa7ac72-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.624 2 DEBUG nova.objects.instance [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.706 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:20:48 compute-0 nova_compute[260935]:   <uuid>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</uuid>
Oct 11 09:20:48 compute-0 nova_compute[260935]:   <name>instance-00000075</name>
Oct 11 09:20:48 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:20:48 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:20:48 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkBasicOps-server-1960711580</nova:name>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:20:47</nova:creationTime>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:20:48 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:20:48 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:20:48 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:20:48 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:20:48 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:20:48 compute-0 nova_compute[260935]:         <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:20:48 compute-0 nova_compute[260935]:         <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:20:48 compute-0 nova_compute[260935]:         <nova:port uuid="6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5">
Oct 11 09:20:48 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:20:48 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:20:48 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <system>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <entry name="serial">7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <entry name="uuid">7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     </system>
Oct 11 09:20:48 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:20:48 compute-0 nova_compute[260935]:   <os>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:   </os>
Oct 11 09:20:48 compute-0 nova_compute[260935]:   <features>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:   </features>
Oct 11 09:20:48 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:20:48 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:20:48 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk">
Oct 11 09:20:48 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       </source>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:20:48 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config">
Oct 11 09:20:48 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       </source>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:20:48 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:38:a7:f5"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <target dev="tap6aa7ac72-3e"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log" append="off"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <video>
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     </video>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:20:48 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:20:48 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:20:48 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:20:48 compute-0 nova_compute[260935]: </domain>
Oct 11 09:20:48 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.711 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Preparing to wait for external event network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.712 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.713 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.713 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.714 2 DEBUG nova.virt.libvirt.vif [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:20:40Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.715 2 DEBUG nova.network.os_vif_util [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.716 2 DEBUG nova.network.os_vif_util [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:a7:f5,bridge_name='br-int',has_traffic_filtering=True,id=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5,network=Network(6fb90d02-96cd-4920-92ac-462cc457cb11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aa7ac72-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.717 2 DEBUG os_vif [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:a7:f5,bridge_name='br-int',has_traffic_filtering=True,id=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5,network=Network(6fb90d02-96cd-4920-92ac-462cc457cb11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aa7ac72-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.719 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.720 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.725 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6aa7ac72-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.726 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6aa7ac72-3e, col_values=(('external_ids', {'iface-id': '6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:a7:f5', 'vm-uuid': '7f0d9214-39a5-458d-82db-dcbc7d61b8b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:48 compute-0 NetworkManager[44960]: <info>  [1760174448.7304] manager: (tap6aa7ac72-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/478)
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2369: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.743 2 INFO os_vif [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:a7:f5,bridge_name='br-int',has_traffic_filtering=True,id=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5,network=Network(6fb90d02-96cd-4920-92ac-462cc457cb11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aa7ac72-3e')
Oct 11 09:20:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/443230958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:20:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3789507855' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.876 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.878 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.879 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:38:a7:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.880 2 INFO nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Using config drive
Oct 11 09:20:48 compute-0 nova_compute[260935]: 2025-10-11 09:20:48.914 2 DEBUG nova.storage.rbd_utils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:20:49 compute-0 romantic_murdock[387816]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:20:49 compute-0 romantic_murdock[387816]: --> relative data size: 1.0
Oct 11 09:20:49 compute-0 romantic_murdock[387816]: --> All data devices are unavailable
Oct 11 09:20:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:20:49 compute-0 systemd[1]: libpod-dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17.scope: Deactivated successfully.
Oct 11 09:20:49 compute-0 systemd[1]: libpod-dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17.scope: Consumed 1.055s CPU time.
Oct 11 09:20:49 compute-0 podman[387788]: 2025-10-11 09:20:49.057515903 +0000 UTC m=+1.384097041 container died dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:20:49 compute-0 nova_compute[260935]: 2025-10-11 09:20:49.076 2 DEBUG nova.network.neutron [req-479b6167-2282-44c4-af07-5c2a352e1f31 req-4fc599a1-4ae1-44b0-9f35-47fe39e5ab62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updated VIF entry in instance network info cache for port 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:20:49 compute-0 nova_compute[260935]: 2025-10-11 09:20:49.077 2 DEBUG nova.network.neutron [req-479b6167-2282-44c4-af07-5c2a352e1f31 req-4fc599a1-4ae1-44b0-9f35-47fe39e5ab62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:20:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b218df1b0c249dbd0cf9056236c12ca02effddc8b206c7e18241a0b2e27a1681-merged.mount: Deactivated successfully.
Oct 11 09:20:49 compute-0 podman[387788]: 2025-10-11 09:20:49.11729719 +0000 UTC m=+1.443878288 container remove dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 09:20:49 compute-0 systemd[1]: libpod-conmon-dcc20560dddba7e136f8269db0394784eb2aa945da86a9a30c88b63f44281e17.scope: Deactivated successfully.
Oct 11 09:20:49 compute-0 sudo[387681]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:49 compute-0 nova_compute[260935]: 2025-10-11 09:20:49.206 2 DEBUG oslo_concurrency.lockutils [req-479b6167-2282-44c4-af07-5c2a352e1f31 req-4fc599a1-4ae1-44b0-9f35-47fe39e5ab62 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:20:49 compute-0 sudo[387930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:20:49 compute-0 sudo[387930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:49 compute-0 sudo[387930]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:49 compute-0 sudo[387955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:20:49 compute-0 sudo[387955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:49 compute-0 sudo[387955]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:49 compute-0 sudo[387980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:20:49 compute-0 sudo[387980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:49 compute-0 sudo[387980]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:49 compute-0 sudo[388005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:20:49 compute-0 sudo[388005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:49 compute-0 sshd-session[387851]: Invalid user sdtdserver from 152.32.213.170 port 42436
Oct 11 09:20:49 compute-0 sshd-session[387851]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:20:49 compute-0 sshd-session[387851]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:20:49 compute-0 ceph-mon[74313]: pgmap v2369: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:20:49 compute-0 podman[388069]: 2025-10-11 09:20:49.991624194 +0000 UTC m=+0.065211341 container create 12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 09:20:50 compute-0 systemd[1]: Started libpod-conmon-12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8.scope.
Oct 11 09:20:50 compute-0 podman[388069]: 2025-10-11 09:20:49.967216745 +0000 UTC m=+0.040803902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:20:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:20:50 compute-0 podman[388069]: 2025-10-11 09:20:50.10345413 +0000 UTC m=+0.177041327 container init 12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cerf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:20:50 compute-0 podman[388069]: 2025-10-11 09:20:50.111100086 +0000 UTC m=+0.184687243 container start 12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 09:20:50 compute-0 podman[388069]: 2025-10-11 09:20:50.115428948 +0000 UTC m=+0.189016125 container attach 12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cerf, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:20:50 compute-0 epic_cerf[388086]: 167 167
Oct 11 09:20:50 compute-0 podman[388069]: 2025-10-11 09:20:50.117497897 +0000 UTC m=+0.191085044 container died 12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cerf, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:20:50 compute-0 systemd[1]: libpod-12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8.scope: Deactivated successfully.
Oct 11 09:20:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c9702f3978339b81d868c9a4fc1e35376cb3d28c2da032e84aa047c78052a9d-merged.mount: Deactivated successfully.
Oct 11 09:20:50 compute-0 podman[388069]: 2025-10-11 09:20:50.165378758 +0000 UTC m=+0.238965905 container remove 12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 09:20:50 compute-0 systemd[1]: libpod-conmon-12a4d16e900b57869d729d6082faaca6f644f93bfb00a70b5a288dba4166d8a8.scope: Deactivated successfully.
Oct 11 09:20:50 compute-0 podman[388111]: 2025-10-11 09:20:50.421481035 +0000 UTC m=+0.064763118 container create 6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:20:50 compute-0 systemd[1]: Started libpod-conmon-6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921.scope.
Oct 11 09:20:50 compute-0 podman[388111]: 2025-10-11 09:20:50.392129557 +0000 UTC m=+0.035411680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:20:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:20:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bece647e87b72f881f22423a494a7f1f7cde6ce571bd74e6e2c51600770893b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:20:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bece647e87b72f881f22423a494a7f1f7cde6ce571bd74e6e2c51600770893b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:20:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bece647e87b72f881f22423a494a7f1f7cde6ce571bd74e6e2c51600770893b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:20:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bece647e87b72f881f22423a494a7f1f7cde6ce571bd74e6e2c51600770893b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:20:50 compute-0 podman[388111]: 2025-10-11 09:20:50.532032695 +0000 UTC m=+0.175314818 container init 6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 09:20:50 compute-0 podman[388111]: 2025-10-11 09:20:50.546483833 +0000 UTC m=+0.189765876 container start 6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:20:50 compute-0 podman[388111]: 2025-10-11 09:20:50.550597969 +0000 UTC m=+0.193880062 container attach 6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:20:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2370: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:20:51 compute-0 nova_compute[260935]: 2025-10-11 09:20:51.106 2 INFO nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Creating config drive at /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/disk.config
Oct 11 09:20:51 compute-0 nova_compute[260935]: 2025-10-11 09:20:51.112 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq_eq0i0r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:20:51 compute-0 nova_compute[260935]: 2025-10-11 09:20:51.262 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq_eq0i0r" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:20:51 compute-0 nova_compute[260935]: 2025-10-11 09:20:51.302 2 DEBUG nova.storage.rbd_utils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:20:51 compute-0 nova_compute[260935]: 2025-10-11 09:20:51.307 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/disk.config 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]: {
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:     "0": [
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:         {
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "devices": [
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "/dev/loop3"
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             ],
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "lv_name": "ceph_lv0",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "lv_size": "21470642176",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "name": "ceph_lv0",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "tags": {
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.cluster_name": "ceph",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.crush_device_class": "",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.encrypted": "0",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.osd_id": "0",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.type": "block",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.vdo": "0"
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             },
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "type": "block",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "vg_name": "ceph_vg0"
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:         }
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:     ],
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:     "1": [
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:         {
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "devices": [
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "/dev/loop4"
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             ],
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "lv_name": "ceph_lv1",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "lv_size": "21470642176",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "name": "ceph_lv1",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "tags": {
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.cluster_name": "ceph",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.crush_device_class": "",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.encrypted": "0",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.osd_id": "1",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.type": "block",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.vdo": "0"
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             },
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "type": "block",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "vg_name": "ceph_vg1"
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:         }
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:     ],
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:     "2": [
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:         {
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "devices": [
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "/dev/loop5"
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             ],
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "lv_name": "ceph_lv2",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "lv_size": "21470642176",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "name": "ceph_lv2",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "tags": {
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.cluster_name": "ceph",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.crush_device_class": "",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.encrypted": "0",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.osd_id": "2",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.type": "block",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:                 "ceph.vdo": "0"
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             },
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "type": "block",
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:             "vg_name": "ceph_vg2"
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:         }
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]:     ]
Oct 11 09:20:51 compute-0 ecstatic_lewin[388128]: }
Oct 11 09:20:51 compute-0 sshd-session[387851]: Failed password for invalid user sdtdserver from 152.32.213.170 port 42436 ssh2
Oct 11 09:20:51 compute-0 systemd[1]: libpod-6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921.scope: Deactivated successfully.
Oct 11 09:20:51 compute-0 podman[388111]: 2025-10-11 09:20:51.374573141 +0000 UTC m=+1.017855204 container died 6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 09:20:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-bece647e87b72f881f22423a494a7f1f7cde6ce571bd74e6e2c51600770893b3-merged.mount: Deactivated successfully.
Oct 11 09:20:51 compute-0 podman[388111]: 2025-10-11 09:20:51.461540296 +0000 UTC m=+1.104822349 container remove 6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:20:51 compute-0 systemd[1]: libpod-conmon-6735826573eda39b0d6ebe761b0acd00a1084ea990e979c09445ec782bcdc921.scope: Deactivated successfully.
Oct 11 09:20:51 compute-0 sudo[388005]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:51 compute-0 nova_compute[260935]: 2025-10-11 09:20:51.564 2 DEBUG oslo_concurrency.processutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/disk.config 7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:20:51 compute-0 nova_compute[260935]: 2025-10-11 09:20:51.566 2 INFO nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Deleting local config drive /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/disk.config because it was imported into RBD.
Oct 11 09:20:51 compute-0 sudo[388189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:20:51 compute-0 sudo[388189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:51 compute-0 sudo[388189]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:51 compute-0 kernel: tap6aa7ac72-3e: entered promiscuous mode
Oct 11 09:20:51 compute-0 NetworkManager[44960]: <info>  [1760174451.6495] manager: (tap6aa7ac72-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/479)
Oct 11 09:20:51 compute-0 ovn_controller[152945]: 2025-10-11T09:20:51Z|01189|binding|INFO|Claiming lport 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 for this chassis.
Oct 11 09:20:51 compute-0 ovn_controller[152945]: 2025-10-11T09:20:51Z|01190|binding|INFO|6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5: Claiming fa:16:3e:38:a7:f5 10.100.0.5
Oct 11 09:20:51 compute-0 nova_compute[260935]: 2025-10-11 09:20:51.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:51 compute-0 systemd-udevd[388239]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:20:51 compute-0 systemd-machined[215705]: New machine qemu-140-instance-00000075.
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.719 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:a7:f5 10.100.0.5'], port_security=['fa:16:3e:38:a7:f5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7f0d9214-39a5-458d-82db-dcbc7d61b8b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fb90d02-96cd-4920-92ac-462cc457cb11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5dc25595-7367-4d0c-a935-a850b2363806', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf7e0c92-efbd-4ce9-a9f4-9c0c91150cdc, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.721 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 in datapath 6fb90d02-96cd-4920-92ac-462cc457cb11 bound to our chassis
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.723 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fb90d02-96cd-4920-92ac-462cc457cb11
Oct 11 09:20:51 compute-0 NetworkManager[44960]: <info>  [1760174451.7276] device (tap6aa7ac72-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:20:51 compute-0 NetworkManager[44960]: <info>  [1760174451.7287] device (tap6aa7ac72-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:20:51 compute-0 systemd[1]: Started Virtual Machine qemu-140-instance-00000075.
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.738 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0153c93f-9631-4a75-9452-a24ad0e1f6d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.739 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6fb90d02-91 in ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.742 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6fb90d02-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.742 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[43be4a03-9131-4ce2-a736-8beb070635f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.743 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ed222795-9fe2-4511-8ea4-66e008d6f340]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.757 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a50f53-d910-4f6f-a767-2a47336cd893]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:51 compute-0 sudo[388222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:20:51 compute-0 sudo[388222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:51 compute-0 sudo[388222]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:51 compute-0 nova_compute[260935]: 2025-10-11 09:20:51.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:51 compute-0 ovn_controller[152945]: 2025-10-11T09:20:51Z|01191|binding|INFO|Setting lport 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 ovn-installed in OVS
Oct 11 09:20:51 compute-0 ovn_controller[152945]: 2025-10-11T09:20:51Z|01192|binding|INFO|Setting lport 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 up in Southbound
Oct 11 09:20:51 compute-0 nova_compute[260935]: 2025-10-11 09:20:51.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.789 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a48d1a33-8624-452a-bb6a-cadebfb9d525]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.823 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d034e89b-244e-431e-a832-b7e8e1630351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.830 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ffea0a4c-223c-44ed-9344-3569feeb6f7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:51 compute-0 NetworkManager[44960]: <info>  [1760174451.8320] manager: (tap6fb90d02-90): new Veth device (/org/freedesktop/NetworkManager/Devices/480)
Oct 11 09:20:51 compute-0 ceph-mon[74313]: pgmap v2370: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:20:51 compute-0 sudo[388259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:20:51 compute-0 sudo[388259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:51 compute-0 sudo[388259]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.880 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb908a6-d4de-4b56-84c3-d0152515a833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.883 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[77878250-341c-4f42-a78f-6708cf5f8d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:51 compute-0 NetworkManager[44960]: <info>  [1760174451.9095] device (tap6fb90d02-90): carrier: link connected
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.916 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6d2f5b-bb03-41bc-8edd-7d9453ddfd44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:51 compute-0 sudo[388308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:20:51 compute-0 sudo[388308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.938 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[173504d1-dcad-4883-bfad-a212a0d6346f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fb90d02-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:3c:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 337], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 629661, 'reachable_time': 21889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388334, 'error': None, 'target': 'ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.960 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f63eca1-afc2-4054-a8ef-da4cf0299c8e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:3cda'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 629661, 'tstamp': 629661}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388336, 'error': None, 'target': 'ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:51.981 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7554b0-9669-43db-9ab4-674dadcb2ce7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fb90d02-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:3c:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 337], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 629661, 'reachable_time': 21889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 388337, 'error': None, 'target': 'ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.029 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0152dc10-860b-483a-a7ba-b6d0d95d66eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:52 compute-0 sshd-session[387851]: Received disconnect from 152.32.213.170 port 42436:11: Bye Bye [preauth]
Oct 11 09:20:52 compute-0 sshd-session[387851]: Disconnected from invalid user sdtdserver 152.32.213.170 port 42436 [preauth]
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.106 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[847f15a3-c5ad-423f-81e6-82a78acfe16f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.110 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fb90d02-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.110 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.111 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fb90d02-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:52 compute-0 kernel: tap6fb90d02-90: entered promiscuous mode
Oct 11 09:20:52 compute-0 nova_compute[260935]: 2025-10-11 09:20:52.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:52 compute-0 NetworkManager[44960]: <info>  [1760174452.1150] manager: (tap6fb90d02-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.118 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fb90d02-90, col_values=(('external_ids', {'iface-id': '89155f05-1b39-4918-893c-15a2cd2a9493'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:20:52 compute-0 ovn_controller[152945]: 2025-10-11T09:20:52Z|01193|binding|INFO|Releasing lport 89155f05-1b39-4918-893c-15a2cd2a9493 from this chassis (sb_readonly=0)
Oct 11 09:20:52 compute-0 nova_compute[260935]: 2025-10-11 09:20:52.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.140 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6fb90d02-96cd-4920-92ac-462cc457cb11.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6fb90d02-96cd-4920-92ac-462cc457cb11.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.142 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc59cfea-bb9f-496c-bdbd-0350891d1d03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.143 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-6fb90d02-96cd-4920-92ac-462cc457cb11
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/6fb90d02-96cd-4920-92ac-462cc457cb11.pid.haproxy
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 6fb90d02-96cd-4920-92ac-462cc457cb11
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:20:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:20:52.144 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11', 'env', 'PROCESS_TAG=haproxy-6fb90d02-96cd-4920-92ac-462cc457cb11', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6fb90d02-96cd-4920-92ac-462cc457cb11.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:20:52 compute-0 podman[388387]: 2025-10-11 09:20:52.384684008 +0000 UTC m=+0.061702333 container create 8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:20:52 compute-0 systemd[1]: Started libpod-conmon-8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d.scope.
Oct 11 09:20:52 compute-0 podman[388387]: 2025-10-11 09:20:52.350924345 +0000 UTC m=+0.027942710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:20:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:20:52 compute-0 podman[388387]: 2025-10-11 09:20:52.49992701 +0000 UTC m=+0.176945325 container init 8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 09:20:52 compute-0 podman[388387]: 2025-10-11 09:20:52.508350748 +0000 UTC m=+0.185369073 container start 8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 09:20:52 compute-0 podman[388387]: 2025-10-11 09:20:52.512949867 +0000 UTC m=+0.189968152 container attach 8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 09:20:52 compute-0 systemd[1]: libpod-8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d.scope: Deactivated successfully.
Oct 11 09:20:52 compute-0 quizzical_davinci[388443]: 167 167
Oct 11 09:20:52 compute-0 conmon[388443]: conmon 8eb7757e41f9fa1d2035 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d.scope/container/memory.events
Oct 11 09:20:52 compute-0 podman[388387]: 2025-10-11 09:20:52.520435999 +0000 UTC m=+0.197454314 container died 8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:20:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-5999b051e5cd6b406f42e90788c0770e102dbd73564df67bf5e55d2fb68082dc-merged.mount: Deactivated successfully.
Oct 11 09:20:52 compute-0 podman[388387]: 2025-10-11 09:20:52.585053342 +0000 UTC m=+0.262071637 container remove 8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 09:20:52 compute-0 systemd[1]: libpod-conmon-8eb7757e41f9fa1d20357232312fd9cef4c132206ffca305b98af0c69cc8313d.scope: Deactivated successfully.
Oct 11 09:20:52 compute-0 podman[388477]: 2025-10-11 09:20:52.646505117 +0000 UTC m=+0.074167555 container create ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 09:20:52 compute-0 systemd[1]: Started libpod-conmon-ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96.scope.
Oct 11 09:20:52 compute-0 podman[388477]: 2025-10-11 09:20:52.615608515 +0000 UTC m=+0.043270983 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:20:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:20:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2371: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2806711dd2ed8736dcb901ac2aef71c255be2695bfa4fe140c6bb3417babc2b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:20:52 compute-0 podman[388477]: 2025-10-11 09:20:52.767052578 +0000 UTC m=+0.194715036 container init ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:20:52 compute-0 podman[388477]: 2025-10-11 09:20:52.772906014 +0000 UTC m=+0.200568442 container start ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:20:52 compute-0 neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11[388501]: [NOTICE]   (388521) : New worker (388525) forked
Oct 11 09:20:52 compute-0 neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11[388501]: [NOTICE]   (388521) : Loading success.
Oct 11 09:20:52 compute-0 podman[388509]: 2025-10-11 09:20:52.807527731 +0000 UTC m=+0.048219832 container create 610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:20:52 compute-0 systemd[1]: Started libpod-conmon-610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2.scope.
Oct 11 09:20:52 compute-0 podman[388509]: 2025-10-11 09:20:52.783360579 +0000 UTC m=+0.024052710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:20:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e915e3ef93d593d2d3047941005b9ae2700199e5c0ff2acd45429761a0fcc47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e915e3ef93d593d2d3047941005b9ae2700199e5c0ff2acd45429761a0fcc47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e915e3ef93d593d2d3047941005b9ae2700199e5c0ff2acd45429761a0fcc47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e915e3ef93d593d2d3047941005b9ae2700199e5c0ff2acd45429761a0fcc47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:20:52 compute-0 podman[388509]: 2025-10-11 09:20:52.909930451 +0000 UTC m=+0.150622572 container init 610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 09:20:52 compute-0 podman[388509]: 2025-10-11 09:20:52.924702728 +0000 UTC m=+0.165394829 container start 610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:20:52 compute-0 podman[388509]: 2025-10-11 09:20:52.927899988 +0000 UTC m=+0.168592089 container attach 610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 09:20:52 compute-0 nova_compute[260935]: 2025-10-11 09:20:52.939 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174452.9391422, 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:20:52 compute-0 nova_compute[260935]: 2025-10-11 09:20:52.940 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] VM Started (Lifecycle Event)
Oct 11 09:20:52 compute-0 nova_compute[260935]: 2025-10-11 09:20:52.968 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:20:52 compute-0 nova_compute[260935]: 2025-10-11 09:20:52.972 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174452.939985, 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:20:52 compute-0 nova_compute[260935]: 2025-10-11 09:20:52.972 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] VM Paused (Lifecycle Event)
Oct 11 09:20:52 compute-0 nova_compute[260935]: 2025-10-11 09:20:52.996 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:20:53 compute-0 nova_compute[260935]: 2025-10-11 09:20:52.999 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:20:53 compute-0 nova_compute[260935]: 2025-10-11 09:20:53.031 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:20:53 compute-0 nova_compute[260935]: 2025-10-11 09:20:53.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:53 compute-0 ceph-mon[74313]: pgmap v2371: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:20:53 compute-0 crazy_leakey[388536]: {
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:         "osd_id": 2,
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:         "type": "bluestore"
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:     },
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:         "osd_id": 0,
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:         "type": "bluestore"
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:     },
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:         "osd_id": 1,
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:         "type": "bluestore"
Oct 11 09:20:53 compute-0 crazy_leakey[388536]:     }
Oct 11 09:20:53 compute-0 crazy_leakey[388536]: }
Oct 11 09:20:53 compute-0 systemd[1]: libpod-610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2.scope: Deactivated successfully.
Oct 11 09:20:53 compute-0 systemd[1]: libpod-610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2.scope: Consumed 1.022s CPU time.
Oct 11 09:20:53 compute-0 podman[388509]: 2025-10-11 09:20:53.958322977 +0000 UTC m=+1.199015118 container died 610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:20:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e915e3ef93d593d2d3047941005b9ae2700199e5c0ff2acd45429761a0fcc47-merged.mount: Deactivated successfully.
Oct 11 09:20:54 compute-0 podman[388509]: 2025-10-11 09:20:54.024349031 +0000 UTC m=+1.265041132 container remove 610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 09:20:54 compute-0 systemd[1]: libpod-conmon-610ae3cb47813447ae260052b68721147b24b71e11f26264006effaa48fb1fb2.scope: Deactivated successfully.
Oct 11 09:20:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:20:54 compute-0 sudo[388308]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:20:54 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:20:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:20:54 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:20:54 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 5d5cb1a2-18f1-4f6c-a01a-756e0b9fd824 does not exist
Oct 11 09:20:54 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev cd0f531e-15e6-479a-a427-3bbf5143d316 does not exist
Oct 11 09:20:54 compute-0 sudo[388581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:20:54 compute-0 sudo[388581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:54 compute-0 sudo[388581]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:54 compute-0 sudo[388606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:20:54 compute-0 sudo[388606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:20:54 compute-0 sudo[388606]: pam_unix(sudo:session): session closed for user root
Oct 11 09:20:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2372: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 09:20:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:20:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:20:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:20:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:20:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:20:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:20:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:20:54
Oct 11 09:20:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:20:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:20:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'vms', 'volumes', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups']
Oct 11 09:20:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:20:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:20:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:20:55 compute-0 ceph-mon[74313]: pgmap v2372: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 09:20:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:20:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:20:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:20:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:20:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:20:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:20:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:20:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:20:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:20:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:20:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2373: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 09:20:56 compute-0 nova_compute[260935]: 2025-10-11 09:20:56.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:57 compute-0 ceph-mon[74313]: pgmap v2373: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.067 2 DEBUG nova.compute.manager [req-2efd19d7-3f57-40c3-86ea-0b9f098d39d2 req-a2119d61-00be-4864-90ae-d5ad0484b44f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.068 2 DEBUG oslo_concurrency.lockutils [req-2efd19d7-3f57-40c3-86ea-0b9f098d39d2 req-a2119d61-00be-4864-90ae-d5ad0484b44f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.068 2 DEBUG oslo_concurrency.lockutils [req-2efd19d7-3f57-40c3-86ea-0b9f098d39d2 req-a2119d61-00be-4864-90ae-d5ad0484b44f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.068 2 DEBUG oslo_concurrency.lockutils [req-2efd19d7-3f57-40c3-86ea-0b9f098d39d2 req-a2119d61-00be-4864-90ae-d5ad0484b44f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.068 2 DEBUG nova.compute.manager [req-2efd19d7-3f57-40c3-86ea-0b9f098d39d2 req-a2119d61-00be-4864-90ae-d5ad0484b44f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Processing event network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.070 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.075 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174458.0746095, 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.075 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] VM Resumed (Lifecycle Event)
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.079 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.084 2 INFO nova.virt.libvirt.driver [-] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Instance spawned successfully.
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.085 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.159 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.165 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.170 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.170 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.171 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.172 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.172 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.173 2 DEBUG nova.virt.libvirt.driver [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.386 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.533 2 INFO nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Took 17.88 seconds to spawn the instance on the hypervisor.
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.534 2 DEBUG nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:20:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2374: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.771 2 INFO nova.compute.manager [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Took 19.19 seconds to build instance.
Oct 11 09:20:58 compute-0 nova_compute[260935]: 2025-10-11 09:20:58.821 2 DEBUG oslo_concurrency.lockutils [None req-eea5f556-8701-4a10-9b0a-fe73e94c13ac dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:20:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:20:59 compute-0 ceph-mon[74313]: pgmap v2374: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 09:21:00 compute-0 nova_compute[260935]: 2025-10-11 09:21:00.375 2 DEBUG nova.compute.manager [req-a30e8990-bb41-4553-a2b8-13064166b5cd req-4fad60f2-e4b7-448a-85b4-7ba21197f58f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:21:00 compute-0 nova_compute[260935]: 2025-10-11 09:21:00.375 2 DEBUG oslo_concurrency.lockutils [req-a30e8990-bb41-4553-a2b8-13064166b5cd req-4fad60f2-e4b7-448a-85b4-7ba21197f58f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:00 compute-0 nova_compute[260935]: 2025-10-11 09:21:00.375 2 DEBUG oslo_concurrency.lockutils [req-a30e8990-bb41-4553-a2b8-13064166b5cd req-4fad60f2-e4b7-448a-85b4-7ba21197f58f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:00 compute-0 nova_compute[260935]: 2025-10-11 09:21:00.376 2 DEBUG oslo_concurrency.lockutils [req-a30e8990-bb41-4553-a2b8-13064166b5cd req-4fad60f2-e4b7-448a-85b4-7ba21197f58f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:00 compute-0 nova_compute[260935]: 2025-10-11 09:21:00.376 2 DEBUG nova.compute.manager [req-a30e8990-bb41-4553-a2b8-13064166b5cd req-4fad60f2-e4b7-448a-85b4-7ba21197f58f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] No waiting events found dispatching network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:21:00 compute-0 nova_compute[260935]: 2025-10-11 09:21:00.376 2 WARNING nova.compute.manager [req-a30e8990-bb41-4553-a2b8-13064166b5cd req-4fad60f2-e4b7-448a-85b4-7ba21197f58f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received unexpected event network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 for instance with vm_state active and task_state None.
Oct 11 09:21:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:00.434 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:ac:3b 2001:db8:0:1:f816:3eff:feab:ac3b 2001:db8::f816:3eff:feab:ac3b'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feab:ac3b/64 2001:db8::f816:3eff:feab:ac3b/64', 'neutron:device_id': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bb88753-f635-4d32-a71c-7301c0eb5e38, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=af33797e-d57d-45c8-92d7-86ea03fdf1ef) old=Port_Binding(mac=['fa:16:3e:ab:ac:3b 2001:db8::f816:3eff:feab:ac3b'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feab:ac3b/64', 'neutron:device_id': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:21:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:00.436 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port af33797e-d57d-45c8-92d7-86ea03fdf1ef in datapath e87b272f-66b8-494e-ab80-c2ee66df15a2 updated
Oct 11 09:21:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:00.438 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e87b272f-66b8-494e-ab80-c2ee66df15a2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:21:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:00.439 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4e60a330-520a-4f93-b79f-a30b2543c777]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2375: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 09:21:01 compute-0 nova_compute[260935]: 2025-10-11 09:21:01.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:01 compute-0 ceph-mon[74313]: pgmap v2375: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 09:21:02 compute-0 nova_compute[260935]: 2025-10-11 09:21:02.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:02 compute-0 NetworkManager[44960]: <info>  [1760174462.4998] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/482)
Oct 11 09:21:02 compute-0 NetworkManager[44960]: <info>  [1760174462.5017] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/483)
Oct 11 09:21:02 compute-0 nova_compute[260935]: 2025-10-11 09:21:02.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:02 compute-0 ovn_controller[152945]: 2025-10-11T09:21:02Z|01194|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:21:02 compute-0 ovn_controller[152945]: 2025-10-11T09:21:02Z|01195|binding|INFO|Releasing lport 89155f05-1b39-4918-893c-15a2cd2a9493 from this chassis (sb_readonly=0)
Oct 11 09:21:02 compute-0 ovn_controller[152945]: 2025-10-11T09:21:02Z|01196|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:21:02 compute-0 nova_compute[260935]: 2025-10-11 09:21:02.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2376: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:21:03 compute-0 nova_compute[260935]: 2025-10-11 09:21:03.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:03 compute-0 ceph-mon[74313]: pgmap v2376: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:21:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:21:04 compute-0 nova_compute[260935]: 2025-10-11 09:21:04.230 2 DEBUG nova.compute.manager [req-e52cc407-1a7f-4d04-a677-23c7ede9571a req-b0d86aab-c3cb-434e-b10f-aa25c47e5e54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-changed-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:21:04 compute-0 nova_compute[260935]: 2025-10-11 09:21:04.231 2 DEBUG nova.compute.manager [req-e52cc407-1a7f-4d04-a677-23c7ede9571a req-b0d86aab-c3cb-434e-b10f-aa25c47e5e54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing instance network info cache due to event network-changed-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:21:04 compute-0 nova_compute[260935]: 2025-10-11 09:21:04.231 2 DEBUG oslo_concurrency.lockutils [req-e52cc407-1a7f-4d04-a677-23c7ede9571a req-b0d86aab-c3cb-434e-b10f-aa25c47e5e54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:21:04 compute-0 nova_compute[260935]: 2025-10-11 09:21:04.232 2 DEBUG oslo_concurrency.lockutils [req-e52cc407-1a7f-4d04-a677-23c7ede9571a req-b0d86aab-c3cb-434e-b10f-aa25c47e5e54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:21:04 compute-0 nova_compute[260935]: 2025-10-11 09:21:04.232 2 DEBUG nova.network.neutron [req-e52cc407-1a7f-4d04-a677-23c7ede9571a req-b0d86aab-c3cb-434e-b10f-aa25c47e5e54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing network info cache for port 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:21:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2377: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:21:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:21:05 compute-0 ceph-mon[74313]: pgmap v2377: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:21:06 compute-0 nova_compute[260935]: 2025-10-11 09:21:06.415 2 DEBUG nova.network.neutron [req-e52cc407-1a7f-4d04-a677-23c7ede9571a req-b0d86aab-c3cb-434e-b10f-aa25c47e5e54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updated VIF entry in instance network info cache for port 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:21:06 compute-0 nova_compute[260935]: 2025-10-11 09:21:06.417 2 DEBUG nova.network.neutron [req-e52cc407-1a7f-4d04-a677-23c7ede9571a req-b0d86aab-c3cb-434e-b10f-aa25c47e5e54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:21:06 compute-0 nova_compute[260935]: 2025-10-11 09:21:06.470 2 DEBUG oslo_concurrency.lockutils [req-e52cc407-1a7f-4d04-a677-23c7ede9571a req-b0d86aab-c3cb-434e-b10f-aa25c47e5e54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:21:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2378: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:21:06 compute-0 podman[388632]: 2025-10-11 09:21:06.781491145 +0000 UTC m=+0.079766542 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 09:21:06 compute-0 nova_compute[260935]: 2025-10-11 09:21:06.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:07 compute-0 nova_compute[260935]: 2025-10-11 09:21:07.658 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:07 compute-0 nova_compute[260935]: 2025-10-11 09:21:07.659 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:07 compute-0 nova_compute[260935]: 2025-10-11 09:21:07.675 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:21:07 compute-0 nova_compute[260935]: 2025-10-11 09:21:07.748 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:07 compute-0 nova_compute[260935]: 2025-10-11 09:21:07.749 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:07 compute-0 nova_compute[260935]: 2025-10-11 09:21:07.758 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:21:07 compute-0 nova_compute[260935]: 2025-10-11 09:21:07.758 2 INFO nova.compute.claims [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:21:07 compute-0 ceph-mon[74313]: pgmap v2378: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:21:07 compute-0 nova_compute[260935]: 2025-10-11 09:21:07.926 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:21:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1115680410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.383 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.393 2 DEBUG nova.compute.provider_tree [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.423 2 DEBUG nova.scheduler.client.report [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.455 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.457 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.522 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.523 2 DEBUG nova.network.neutron [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.553 2 INFO nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.593 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2379: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.758 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.760 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.760 2 INFO nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Creating image(s)
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.786 2 DEBUG nova.storage.rbd_utils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image fcb45648-eb7b-4975-9f50-08675a787d9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.817 2 DEBUG nova.storage.rbd_utils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image fcb45648-eb7b-4975-9f50-08675a787d9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.850 2 DEBUG nova.storage.rbd_utils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image fcb45648-eb7b-4975-9f50-08675a787d9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:08 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1115680410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.857 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.898 2 DEBUG nova.policy [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.936 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.937 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.938 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.938 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.964 2 DEBUG nova.storage.rbd_utils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image fcb45648-eb7b-4975-9f50-08675a787d9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:08 compute-0 nova_compute[260935]: 2025-10-11 09:21:08.968 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 fcb45648-eb7b-4975-9f50-08675a787d9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:21:09 compute-0 nova_compute[260935]: 2025-10-11 09:21:09.243 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 fcb45648-eb7b-4975-9f50-08675a787d9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:09 compute-0 nova_compute[260935]: 2025-10-11 09:21:09.331 2 DEBUG nova.storage.rbd_utils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image fcb45648-eb7b-4975-9f50-08675a787d9c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:21:09 compute-0 nova_compute[260935]: 2025-10-11 09:21:09.492 2 DEBUG nova.objects.instance [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid fcb45648-eb7b-4975-9f50-08675a787d9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:21:09 compute-0 nova_compute[260935]: 2025-10-11 09:21:09.513 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:21:09 compute-0 nova_compute[260935]: 2025-10-11 09:21:09.514 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Ensure instance console log exists: /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:21:09 compute-0 nova_compute[260935]: 2025-10-11 09:21:09.515 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:09 compute-0 nova_compute[260935]: 2025-10-11 09:21:09.515 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:09 compute-0 nova_compute[260935]: 2025-10-11 09:21:09.516 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:09 compute-0 ceph-mon[74313]: pgmap v2379: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:21:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2380: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:21:10 compute-0 ovn_controller[152945]: 2025-10-11T09:21:10Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:38:a7:f5 10.100.0.5
Oct 11 09:21:10 compute-0 ovn_controller[152945]: 2025-10-11T09:21:10Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:a7:f5 10.100.0.5
Oct 11 09:21:11 compute-0 nova_compute[260935]: 2025-10-11 09:21:11.496 2 DEBUG nova.network.neutron [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Successfully created port: 9669f110-042a-40c6-b7a4-8d78d421ed23 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:21:11 compute-0 nova_compute[260935]: 2025-10-11 09:21:11.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:11 compute-0 ceph-mon[74313]: pgmap v2380: 321 pgs: 321 active+clean; 374 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:21:12 compute-0 nova_compute[260935]: 2025-10-11 09:21:12.015 2 DEBUG nova.network.neutron [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Successfully created port: 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:21:12 compute-0 unix_chkpwd[388844]: password check failed for user (root)
Oct 11 09:21:12 compute-0 sshd-session[388842]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:21:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2381: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 152 op/s
Oct 11 09:21:12 compute-0 podman[388845]: 2025-10-11 09:21:12.816374461 +0000 UTC m=+0.100765263 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 11 09:21:13 compute-0 nova_compute[260935]: 2025-10-11 09:21:13.503 2 DEBUG nova.network.neutron [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Successfully updated port: 9669f110-042a-40c6-b7a4-8d78d421ed23 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:21:13 compute-0 nova_compute[260935]: 2025-10-11 09:21:13.656 2 DEBUG nova.compute.manager [req-6da5ec96-4847-4d03-b995-7dedd7b304bd req-187b0095-798e-493b-9db5-ca2f3493f407 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-changed-9669f110-042a-40c6-b7a4-8d78d421ed23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:21:13 compute-0 nova_compute[260935]: 2025-10-11 09:21:13.657 2 DEBUG nova.compute.manager [req-6da5ec96-4847-4d03-b995-7dedd7b304bd req-187b0095-798e-493b-9db5-ca2f3493f407 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Refreshing instance network info cache due to event network-changed-9669f110-042a-40c6-b7a4-8d78d421ed23. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:21:13 compute-0 nova_compute[260935]: 2025-10-11 09:21:13.658 2 DEBUG oslo_concurrency.lockutils [req-6da5ec96-4847-4d03-b995-7dedd7b304bd req-187b0095-798e-493b-9db5-ca2f3493f407 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:21:13 compute-0 nova_compute[260935]: 2025-10-11 09:21:13.659 2 DEBUG oslo_concurrency.lockutils [req-6da5ec96-4847-4d03-b995-7dedd7b304bd req-187b0095-798e-493b-9db5-ca2f3493f407 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:21:13 compute-0 nova_compute[260935]: 2025-10-11 09:21:13.659 2 DEBUG nova.network.neutron [req-6da5ec96-4847-4d03-b995-7dedd7b304bd req-187b0095-798e-493b-9db5-ca2f3493f407 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Refreshing network info cache for port 9669f110-042a-40c6-b7a4-8d78d421ed23 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:21:13 compute-0 nova_compute[260935]: 2025-10-11 09:21:13.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:13 compute-0 ceph-mon[74313]: pgmap v2381: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 152 op/s
Oct 11 09:21:13 compute-0 nova_compute[260935]: 2025-10-11 09:21:13.922 2 DEBUG nova.network.neutron [req-6da5ec96-4847-4d03-b995-7dedd7b304bd req-187b0095-798e-493b-9db5-ca2f3493f407 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:21:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:21:14 compute-0 sshd-session[388842]: Failed password for root from 165.232.82.252 port 39568 ssh2
Oct 11 09:21:14 compute-0 nova_compute[260935]: 2025-10-11 09:21:14.288 2 DEBUG nova.network.neutron [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Successfully updated port: 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:21:14 compute-0 nova_compute[260935]: 2025-10-11 09:21:14.319 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:21:14 compute-0 nova_compute[260935]: 2025-10-11 09:21:14.379 2 DEBUG nova.network.neutron [req-6da5ec96-4847-4d03-b995-7dedd7b304bd req-187b0095-798e-493b-9db5-ca2f3493f407 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:21:14 compute-0 nova_compute[260935]: 2025-10-11 09:21:14.410 2 DEBUG oslo_concurrency.lockutils [req-6da5ec96-4847-4d03-b995-7dedd7b304bd req-187b0095-798e-493b-9db5-ca2f3493f407 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:21:14 compute-0 nova_compute[260935]: 2025-10-11 09:21:14.411 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:21:14 compute-0 nova_compute[260935]: 2025-10-11 09:21:14.411 2 DEBUG nova.network.neutron [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:21:14 compute-0 nova_compute[260935]: 2025-10-11 09:21:14.579 2 DEBUG nova.network.neutron [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:21:14 compute-0 sshd-session[388842]: Connection closed by authenticating user root 165.232.82.252 port 39568 [preauth]
Oct 11 09:21:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2382: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Oct 11 09:21:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:15.219 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:15.219 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:15.221 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:15 compute-0 nova_compute[260935]: 2025-10-11 09:21:15.876 2 DEBUG nova.compute.manager [req-76aa6285-d0af-4ab5-98aa-6701905a7a00 req-e7f0b20c-3ee6-4818-bd10-905079373d50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-changed-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:21:15 compute-0 nova_compute[260935]: 2025-10-11 09:21:15.876 2 DEBUG nova.compute.manager [req-76aa6285-d0af-4ab5-98aa-6701905a7a00 req-e7f0b20c-3ee6-4818-bd10-905079373d50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Refreshing instance network info cache due to event network-changed-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:21:15 compute-0 nova_compute[260935]: 2025-10-11 09:21:15.877 2 DEBUG oslo_concurrency.lockutils [req-76aa6285-d0af-4ab5-98aa-6701905a7a00 req-e7f0b20c-3ee6-4818-bd10-905079373d50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:21:15 compute-0 ceph-mon[74313]: pgmap v2382: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Oct 11 09:21:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2383: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Oct 11 09:21:16 compute-0 nova_compute[260935]: 2025-10-11 09:21:16.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:16 compute-0 podman[388866]: 2025-10-11 09:21:16.858534831 +0000 UTC m=+0.148979615 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:21:16 compute-0 podman[388865]: 2025-10-11 09:21:16.858766717 +0000 UTC m=+0.152658099 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 11 09:21:16 compute-0 nova_compute[260935]: 2025-10-11 09:21:16.993 2 INFO nova.compute.manager [None req-753a3a21-0a9e-4dbf-a11c-2497b19a58a0 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Get console output
Oct 11 09:21:17 compute-0 nova_compute[260935]: 2025-10-11 09:21:17.001 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:21:17 compute-0 ceph-mon[74313]: pgmap v2383: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.227 2 DEBUG nova.network.neutron [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updating instance_info_cache with network_info: [{"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.255 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.256 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Instance network_info: |[{"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.256 2 DEBUG oslo_concurrency.lockutils [req-76aa6285-d0af-4ab5-98aa-6701905a7a00 req-e7f0b20c-3ee6-4818-bd10-905079373d50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.256 2 DEBUG nova.network.neutron [req-76aa6285-d0af-4ab5-98aa-6701905a7a00 req-e7f0b20c-3ee6-4818-bd10-905079373d50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Refreshing network info cache for port 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.261 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Start _get_guest_xml network_info=[{"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.266 2 WARNING nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.273 2 DEBUG nova.virt.libvirt.host [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.273 2 DEBUG nova.virt.libvirt.host [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.277 2 DEBUG nova.virt.libvirt.host [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.278 2 DEBUG nova.virt.libvirt.host [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.278 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.278 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.279 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.279 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.279 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.280 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.280 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.280 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.281 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.281 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.281 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.281 2 DEBUG nova.virt.hardware [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.285 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2384: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Oct 11 09:21:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:21:18 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2856541549' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.804 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.830 2 DEBUG nova.storage.rbd_utils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image fcb45648-eb7b-4975-9f50-08675a787d9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:18 compute-0 nova_compute[260935]: 2025-10-11 09:21:18.834 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:18 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2856541549' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:21:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:21:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:21:19 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/78862230' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.347 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.349 2 DEBUG nova.virt.libvirt.vif [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-407795424',display_name='tempest-TestGettingAddress-server-407795424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-407795424',id=118,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-dfjcqrr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:08Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=fcb45648-eb7b-4975-9f50-08675a787d9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.350 2 DEBUG nova.network.os_vif_util [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.351 2 DEBUG nova.network.os_vif_util [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:54:35,bridge_name='br-int',has_traffic_filtering=True,id=9669f110-042a-40c6-b7a4-8d78d421ed23,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9669f110-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.352 2 DEBUG nova.virt.libvirt.vif [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-407795424',display_name='tempest-TestGettingAddress-server-407795424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-407795424',id=118,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-dfjcqrr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:08Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=fcb45648-eb7b-4975-9f50-08675a787d9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.352 2 DEBUG nova.network.os_vif_util [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.353 2 DEBUG nova.network.os_vif_util [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:83:c1,bridge_name='br-int',has_traffic_filtering=True,id=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a14fd50-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.354 2 DEBUG nova.objects.instance [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid fcb45648-eb7b-4975-9f50-08675a787d9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.382 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:21:19 compute-0 nova_compute[260935]:   <uuid>fcb45648-eb7b-4975-9f50-08675a787d9c</uuid>
Oct 11 09:21:19 compute-0 nova_compute[260935]:   <name>instance-00000076</name>
Oct 11 09:21:19 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:21:19 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:21:19 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <nova:name>tempest-TestGettingAddress-server-407795424</nova:name>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:21:18</nova:creationTime>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:21:19 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:21:19 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:21:19 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:21:19 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:21:19 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:21:19 compute-0 nova_compute[260935]:         <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 09:21:19 compute-0 nova_compute[260935]:         <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:21:19 compute-0 nova_compute[260935]:         <nova:port uuid="9669f110-042a-40c6-b7a4-8d78d421ed23">
Oct 11 09:21:19 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:21:19 compute-0 nova_compute[260935]:         <nova:port uuid="5a14fd50-c9b4-4c8c-b576-9f2a05d734f9">
Oct 11 09:21:19 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feed:83c1" ipVersion="6"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feed:83c1" ipVersion="6"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:21:19 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:21:19 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <system>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <entry name="serial">fcb45648-eb7b-4975-9f50-08675a787d9c</entry>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <entry name="uuid">fcb45648-eb7b-4975-9f50-08675a787d9c</entry>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     </system>
Oct 11 09:21:19 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:21:19 compute-0 nova_compute[260935]:   <os>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:   </os>
Oct 11 09:21:19 compute-0 nova_compute[260935]:   <features>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:   </features>
Oct 11 09:21:19 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:21:19 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:21:19 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/fcb45648-eb7b-4975-9f50-08675a787d9c_disk">
Oct 11 09:21:19 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       </source>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:21:19 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/fcb45648-eb7b-4975-9f50-08675a787d9c_disk.config">
Oct 11 09:21:19 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       </source>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:21:19 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:eb:54:35"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <target dev="tap9669f110-04"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:ed:83:c1"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <target dev="tap5a14fd50-c9"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c/console.log" append="off"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <video>
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     </video>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:21:19 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:21:19 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:21:19 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:21:19 compute-0 nova_compute[260935]: </domain>
Oct 11 09:21:19 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.384 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Preparing to wait for external event network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.384 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.385 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.385 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.385 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Preparing to wait for external event network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.385 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.385 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.386 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.386 2 DEBUG nova.virt.libvirt.vif [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-407795424',display_name='tempest-TestGettingAddress-server-407795424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-407795424',id=118,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-dfjcqrr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:08Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=fcb45648-eb7b-4975-9f50-08675a787d9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.387 2 DEBUG nova.network.os_vif_util [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.387 2 DEBUG nova.network.os_vif_util [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:54:35,bridge_name='br-int',has_traffic_filtering=True,id=9669f110-042a-40c6-b7a4-8d78d421ed23,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9669f110-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.387 2 DEBUG os_vif [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:54:35,bridge_name='br-int',has_traffic_filtering=True,id=9669f110-042a-40c6-b7a4-8d78d421ed23,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9669f110-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.389 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.389 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.392 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9669f110-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.392 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9669f110-04, col_values=(('external_ids', {'iface-id': '9669f110-042a-40c6-b7a4-8d78d421ed23', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:54:35', 'vm-uuid': 'fcb45648-eb7b-4975-9f50-08675a787d9c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:19 compute-0 NetworkManager[44960]: <info>  [1760174479.3953] manager: (tap9669f110-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/484)
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.404 2 INFO os_vif [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:54:35,bridge_name='br-int',has_traffic_filtering=True,id=9669f110-042a-40c6-b7a4-8d78d421ed23,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9669f110-04')
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.405 2 DEBUG nova.virt.libvirt.vif [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-407795424',display_name='tempest-TestGettingAddress-server-407795424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-407795424',id=118,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-dfjcqrr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:08Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=fcb45648-eb7b-4975-9f50-08675a787d9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.405 2 DEBUG nova.network.os_vif_util [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.405 2 DEBUG nova.network.os_vif_util [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:83:c1,bridge_name='br-int',has_traffic_filtering=True,id=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a14fd50-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.406 2 DEBUG os_vif [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:83:c1,bridge_name='br-int',has_traffic_filtering=True,id=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a14fd50-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.406 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.407 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.408 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a14fd50-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.408 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a14fd50-c9, col_values=(('external_ids', {'iface-id': '5a14fd50-c9b4-4c8c-b576-9f2a05d734f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:83:c1', 'vm-uuid': 'fcb45648-eb7b-4975-9f50-08675a787d9c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:19 compute-0 NetworkManager[44960]: <info>  [1760174479.4108] manager: (tap5a14fd50-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/485)
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.418 2 INFO os_vif [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:83:c1,bridge_name='br-int',has_traffic_filtering=True,id=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a14fd50-c9')
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.490 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.490 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.490 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:eb:54:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.491 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:ed:83:c1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.491 2 INFO nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Using config drive
Oct 11 09:21:19 compute-0 nova_compute[260935]: 2025-10-11 09:21:19.514 2 DEBUG nova.storage.rbd_utils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image fcb45648-eb7b-4975-9f50-08675a787d9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:19 compute-0 ceph-mon[74313]: pgmap v2384: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Oct 11 09:21:19 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/78862230' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:21:20 compute-0 nova_compute[260935]: 2025-10-11 09:21:20.097 2 INFO nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Creating config drive at /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c/disk.config
Oct 11 09:21:20 compute-0 nova_compute[260935]: 2025-10-11 09:21:20.106 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbox69brg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:20 compute-0 nova_compute[260935]: 2025-10-11 09:21:20.276 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbox69brg" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:20 compute-0 nova_compute[260935]: 2025-10-11 09:21:20.317 2 DEBUG nova.storage.rbd_utils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image fcb45648-eb7b-4975-9f50-08675a787d9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:20 compute-0 nova_compute[260935]: 2025-10-11 09:21:20.322 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c/disk.config fcb45648-eb7b-4975-9f50-08675a787d9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:20 compute-0 nova_compute[260935]: 2025-10-11 09:21:20.525 2 DEBUG oslo_concurrency.processutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c/disk.config fcb45648-eb7b-4975-9f50-08675a787d9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:20 compute-0 nova_compute[260935]: 2025-10-11 09:21:20.527 2 INFO nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Deleting local config drive /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c/disk.config because it was imported into RBD.
Oct 11 09:21:20 compute-0 NetworkManager[44960]: <info>  [1760174480.6006] manager: (tap9669f110-04): new Tun device (/org/freedesktop/NetworkManager/Devices/486)
Oct 11 09:21:20 compute-0 kernel: tap9669f110-04: entered promiscuous mode
Oct 11 09:21:20 compute-0 nova_compute[260935]: 2025-10-11 09:21:20.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:20 compute-0 ovn_controller[152945]: 2025-10-11T09:21:20Z|01197|binding|INFO|Claiming lport 9669f110-042a-40c6-b7a4-8d78d421ed23 for this chassis.
Oct 11 09:21:20 compute-0 ovn_controller[152945]: 2025-10-11T09:21:20Z|01198|binding|INFO|9669f110-042a-40c6-b7a4-8d78d421ed23: Claiming fa:16:3e:eb:54:35 10.100.0.11
Oct 11 09:21:20 compute-0 NetworkManager[44960]: <info>  [1760174480.6237] manager: (tap5a14fd50-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/487)
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.630 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:54:35 10.100.0.11'], port_security=['fa:16:3e:eb:54:35 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fcb45648-eb7b-4975-9f50-08675a787d9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c29d60dd-64a4-4eb0-a5b7-799e6b286f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6a655d-e3d9-4e53-b2d8-3195591acfb2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=9669f110-042a-40c6-b7a4-8d78d421ed23) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.631 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 9669f110-042a-40c6-b7a4-8d78d421ed23 in datapath 02690ac5-d004-4c7d-b780-e5fed29e0aa7 bound to our chassis
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.633 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02690ac5-d004-4c7d-b780-e5fed29e0aa7
Oct 11 09:21:20 compute-0 kernel: tap5a14fd50-c9: entered promiscuous mode
Oct 11 09:21:20 compute-0 nova_compute[260935]: 2025-10-11 09:21:20.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:20 compute-0 ovn_controller[152945]: 2025-10-11T09:21:20Z|01199|binding|INFO|Setting lport 9669f110-042a-40c6-b7a4-8d78d421ed23 ovn-installed in OVS
Oct 11 09:21:20 compute-0 ovn_controller[152945]: 2025-10-11T09:21:20Z|01200|binding|INFO|Setting lport 9669f110-042a-40c6-b7a4-8d78d421ed23 up in Southbound
Oct 11 09:21:20 compute-0 ovn_controller[152945]: 2025-10-11T09:21:20Z|01201|if_status|INFO|Not updating pb chassis for 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 now as sb is readonly
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.653 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c9aad8-dea9-45b7-846a-d132072c2ec1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.654 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap02690ac5-d1 in ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.656 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap02690ac5-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.657 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2460c76d-1d2f-447d-be02-36109a760b59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.657 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[66aae679-3a9d-41d1-9c80-6c19eb72b5a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:20 compute-0 nova_compute[260935]: 2025-10-11 09:21:20.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:20 compute-0 nova_compute[260935]: 2025-10-11 09:21:20.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:20 compute-0 systemd-udevd[389056]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:21:20 compute-0 systemd-udevd[389054]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.684 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[dc21e49c-c67c-49fe-ab1c-d8d72cca49b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:20 compute-0 systemd-machined[215705]: New machine qemu-141-instance-00000076.
Oct 11 09:21:20 compute-0 NetworkManager[44960]: <info>  [1760174480.7032] device (tap5a14fd50-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:21:20 compute-0 NetworkManager[44960]: <info>  [1760174480.7050] device (tap5a14fd50-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:21:20 compute-0 NetworkManager[44960]: <info>  [1760174480.7059] device (tap9669f110-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:21:20 compute-0 systemd[1]: Started Virtual Machine qemu-141-instance-00000076.
Oct 11 09:21:20 compute-0 NetworkManager[44960]: <info>  [1760174480.7072] device (tap9669f110-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.712 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa55fc1-ec4b-4d46-a00b-dda6539951f0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:20 compute-0 ovn_controller[152945]: 2025-10-11T09:21:20Z|01202|binding|INFO|Claiming lport 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 for this chassis.
Oct 11 09:21:20 compute-0 ovn_controller[152945]: 2025-10-11T09:21:20Z|01203|binding|INFO|5a14fd50-c9b4-4c8c-b576-9f2a05d734f9: Claiming fa:16:3e:ed:83:c1 2001:db8:0:1:f816:3eff:feed:83c1 2001:db8::f816:3eff:feed:83c1
Oct 11 09:21:20 compute-0 ovn_controller[152945]: 2025-10-11T09:21:20Z|01204|binding|INFO|Setting lport 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 ovn-installed in OVS
Oct 11 09:21:20 compute-0 nova_compute[260935]: 2025-10-11 09:21:20.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2385: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.757 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1ecc05a5-12f6-4013-8cde-abc42784d0b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:20 compute-0 systemd-udevd[389060]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:21:20 compute-0 NetworkManager[44960]: <info>  [1760174480.7660] manager: (tap02690ac5-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/488)
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.764 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[43bb2f55-f24e-498b-9ab7-a334ac044c30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.809 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2e5b3a0a-b59f-4547-8a77-2dcc400463ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.813 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e8be985b-93c9-492c-af37-5623af03f8de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:20 compute-0 NetworkManager[44960]: <info>  [1760174480.8379] device (tap02690ac5-d0): carrier: link connected
Oct 11 09:21:20 compute-0 ovn_controller[152945]: 2025-10-11T09:21:20Z|01205|binding|INFO|Setting lport 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 up in Southbound
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.843 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[78ec4edd-241a-42a1-bfcb-2d7be9ad7027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.848 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:83:c1 2001:db8:0:1:f816:3eff:feed:83c1 2001:db8::f816:3eff:feed:83c1'], port_security=['fa:16:3e:ed:83:c1 2001:db8:0:1:f816:3eff:feed:83c1 2001:db8::f816:3eff:feed:83c1'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feed:83c1/64 2001:db8::f816:3eff:feed:83c1/64', 'neutron:device_id': 'fcb45648-eb7b-4975-9f50-08675a787d9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c29d60dd-64a4-4eb0-a5b7-799e6b286f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bb88753-f635-4d32-a71c-7301c0eb5e38, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.862 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d68751-33ab-4f07-a887-7689745acf1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02690ac5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:97:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632554, 'reachable_time': 37223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389087, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.881 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5febfd7c-ff01-4161-88b5-0d29e75afdad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:970e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632554, 'tstamp': 632554}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389088, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.902 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cface96f-5b7b-483e-aa5b-81bdec2e5e6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02690ac5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:97:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632554, 'reachable_time': 37223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389089, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:20.941 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9b052046-cfca-4ec8-937f-fb879c839034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.021 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d16885fd-db63-489a-9287-6a065de1565b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.023 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02690ac5-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.023 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.024 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02690ac5-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:21 compute-0 nova_compute[260935]: 2025-10-11 09:21:21.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:21 compute-0 NetworkManager[44960]: <info>  [1760174481.0271] manager: (tap02690ac5-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/489)
Oct 11 09:21:21 compute-0 kernel: tap02690ac5-d0: entered promiscuous mode
Oct 11 09:21:21 compute-0 nova_compute[260935]: 2025-10-11 09:21:21.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.032 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02690ac5-d0, col_values=(('external_ids', {'iface-id': 'bc2597de-285d-49b2-9e10-c93c87e43828'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:21 compute-0 nova_compute[260935]: 2025-10-11 09:21:21.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:21 compute-0 ovn_controller[152945]: 2025-10-11T09:21:21Z|01206|binding|INFO|Releasing lport bc2597de-285d-49b2-9e10-c93c87e43828 from this chassis (sb_readonly=0)
Oct 11 09:21:21 compute-0 nova_compute[260935]: 2025-10-11 09:21:21.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.071 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/02690ac5-d004-4c7d-b780-e5fed29e0aa7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/02690ac5-d004-4c7d-b780-e5fed29e0aa7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.072 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb15664-0802-48ce-8c92-1015abb1d5e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.074 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-02690ac5-d004-4c7d-b780-e5fed29e0aa7
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/02690ac5-d004-4c7d-b780-e5fed29e0aa7.pid.haproxy
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 02690ac5-d004-4c7d-b780-e5fed29e0aa7
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.077 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'env', 'PROCESS_TAG=haproxy-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/02690ac5-d004-4c7d-b780-e5fed29e0aa7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:21:21 compute-0 nova_compute[260935]: 2025-10-11 09:21:21.109 2 DEBUG nova.network.neutron [req-76aa6285-d0af-4ab5-98aa-6701905a7a00 req-e7f0b20c-3ee6-4818-bd10-905079373d50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updated VIF entry in instance network info cache for port 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:21:21 compute-0 nova_compute[260935]: 2025-10-11 09:21:21.110 2 DEBUG nova.network.neutron [req-76aa6285-d0af-4ab5-98aa-6701905a7a00 req-e7f0b20c-3ee6-4818-bd10-905079373d50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updating instance_info_cache with network_info: [{"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": 
{"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:21:21 compute-0 nova_compute[260935]: 2025-10-11 09:21:21.142 2 DEBUG oslo_concurrency.lockutils [req-76aa6285-d0af-4ab5-98aa-6701905a7a00 req-e7f0b20c-3ee6-4818-bd10-905079373d50 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:21:21 compute-0 nova_compute[260935]: 2025-10-11 09:21:21.309 2 DEBUG nova.compute.manager [req-494aa31d-d1fd-48b5-a3c7-ad0e040ed971 req-4d65361b-caf1-4123-a286-95b761aa4b1b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:21:21 compute-0 nova_compute[260935]: 2025-10-11 09:21:21.310 2 DEBUG oslo_concurrency.lockutils [req-494aa31d-d1fd-48b5-a3c7-ad0e040ed971 req-4d65361b-caf1-4123-a286-95b761aa4b1b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:21 compute-0 nova_compute[260935]: 2025-10-11 09:21:21.310 2 DEBUG oslo_concurrency.lockutils [req-494aa31d-d1fd-48b5-a3c7-ad0e040ed971 req-4d65361b-caf1-4123-a286-95b761aa4b1b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:21 compute-0 nova_compute[260935]: 2025-10-11 09:21:21.311 2 DEBUG oslo_concurrency.lockutils [req-494aa31d-d1fd-48b5-a3c7-ad0e040ed971 req-4d65361b-caf1-4123-a286-95b761aa4b1b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:21 compute-0 nova_compute[260935]: 2025-10-11 09:21:21.311 2 DEBUG nova.compute.manager [req-494aa31d-d1fd-48b5-a3c7-ad0e040ed971 req-4d65361b-caf1-4123-a286-95b761aa4b1b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Processing event network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:21:21 compute-0 podman[389121]: 2025-10-11 09:21:21.527726698 +0000 UTC m=+0.064185582 container create a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 09:21:21 compute-0 systemd[1]: Started libpod-conmon-a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1.scope.
Oct 11 09:21:21 compute-0 podman[389121]: 2025-10-11 09:21:21.500730777 +0000 UTC m=+0.037189711 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:21:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:21:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a43bab447974501414b98be50feaa29e43ab1d6b08cf88962b6450fef6fae6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:21:21 compute-0 podman[389121]: 2025-10-11 09:21:21.633253737 +0000 UTC m=+0.169712631 container init a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 11 09:21:21 compute-0 podman[389121]: 2025-10-11 09:21:21.643703661 +0000 UTC m=+0.180162575 container start a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:21:21 compute-0 neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7[389155]: [NOTICE]   (389183) : New worker (389186) forked
Oct 11 09:21:21 compute-0 neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7[389155]: [NOTICE]   (389183) : Loading success.
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.722 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 in datapath e87b272f-66b8-494e-ab80-c2ee66df15a2 unbound from our chassis
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.728 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e87b272f-66b8-494e-ab80-c2ee66df15a2
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.746 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[85e19c22-3308-48c1-9f05-8da6eded2ed5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.748 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape87b272f-61 in ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.751 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape87b272f-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.751 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[55132554-8176-4410-ab58-22e710b79eff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.753 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc30cb8-f19c-4db3-a866-4f7448a3983e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.779 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[fc6b1955-b316-4969-8e1c-25dfe432815e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.814 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fd27ba31-9bf4-45e4-bef6-366a87bf0873]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:21 compute-0 nova_compute[260935]: 2025-10-11 09:21:21.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.866 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[99590b47-903f-4fa9-94bb-48f2013dd825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.871 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c1391291-c50a-42e7-b6f2-488217d1364b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:21 compute-0 systemd-udevd[389073]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:21:21 compute-0 NetworkManager[44960]: <info>  [1760174481.8735] manager: (tape87b272f-60): new Veth device (/org/freedesktop/NetworkManager/Devices/490)
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.913 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6c10e11a-1059-4b4d-a13e-03ff861968c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:21 compute-0 nova_compute[260935]: 2025-10-11 09:21:21.918 2 DEBUG oslo_concurrency.lockutils [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "interface-7f0d9214-39a5-458d-82db-dcbc7d61b8b5-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:21 compute-0 nova_compute[260935]: 2025-10-11 09:21:21.918 2 DEBUG oslo_concurrency.lockutils [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "interface-7f0d9214-39a5-458d-82db-dcbc7d61b8b5-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.918 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1d730707-b634-40a0-ba73-a2657a09f85a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:21 compute-0 nova_compute[260935]: 2025-10-11 09:21:21.918 2 DEBUG nova.objects.instance [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'flavor' on Instance uuid 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:21:21 compute-0 ceph-mon[74313]: pgmap v2385: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Oct 11 09:21:21 compute-0 NetworkManager[44960]: <info>  [1760174481.9601] device (tape87b272f-60): carrier: link connected
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.966 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[022874f2-f9a3-4ee8-bf31-f09945cb9aa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:21.987 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d37b4323-55fe-478d-aff3-957261c3677f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape87b272f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:ac:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632666, 'reachable_time': 19970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389205, 'error': None, 'target': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.005 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b7b3ae-92f9-4157-b89d-14ea6c9867a5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feab:ac3b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632666, 'tstamp': 632666}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389206, 'error': None, 'target': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.024 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d55a6f-9ae5-429a-b39b-8aad393a53a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape87b272f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:ac:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632666, 'reachable_time': 19970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389207, 'error': None, 'target': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.067 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6c694064-3308-45df-9fe4-d5773e6d78c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.108 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[78c107a3-7aa7-40ea-829a-727d2347f549]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.109 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape87b272f-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.110 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.110 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape87b272f-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:22 compute-0 nova_compute[260935]: 2025-10-11 09:21:22.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:22 compute-0 NetworkManager[44960]: <info>  [1760174482.1129] manager: (tape87b272f-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/491)
Oct 11 09:21:22 compute-0 kernel: tape87b272f-60: entered promiscuous mode
Oct 11 09:21:22 compute-0 nova_compute[260935]: 2025-10-11 09:21:22.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.116 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape87b272f-60, col_values=(('external_ids', {'iface-id': 'af33797e-d57d-45c8-92d7-86ea03fdf1ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:22 compute-0 nova_compute[260935]: 2025-10-11 09:21:22.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:22 compute-0 ovn_controller[152945]: 2025-10-11T09:21:22Z|01207|binding|INFO|Releasing lport af33797e-d57d-45c8-92d7-86ea03fdf1ef from this chassis (sb_readonly=0)
Oct 11 09:21:22 compute-0 nova_compute[260935]: 2025-10-11 09:21:22.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.135 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e87b272f-66b8-494e-ab80-c2ee66df15a2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e87b272f-66b8-494e-ab80-c2ee66df15a2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.136 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5d3e2e-0693-421d-82b6-3712aa0a1da3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.137 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-e87b272f-66b8-494e-ab80-c2ee66df15a2
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/e87b272f-66b8-494e-ab80-c2ee66df15a2.pid.haproxy
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID e87b272f-66b8-494e-ab80-c2ee66df15a2
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:21:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:22.138 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'env', 'PROCESS_TAG=haproxy-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e87b272f-66b8-494e-ab80-c2ee66df15a2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:21:22 compute-0 nova_compute[260935]: 2025-10-11 09:21:22.257 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:21:22 compute-0 nova_compute[260935]: 2025-10-11 09:21:22.272 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174482.2715697, fcb45648-eb7b-4975-9f50-08675a787d9c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:21:22 compute-0 nova_compute[260935]: 2025-10-11 09:21:22.272 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] VM Started (Lifecycle Event)
Oct 11 09:21:22 compute-0 nova_compute[260935]: 2025-10-11 09:21:22.305 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:21:22 compute-0 nova_compute[260935]: 2025-10-11 09:21:22.310 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174482.271799, fcb45648-eb7b-4975-9f50-08675a787d9c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:21:22 compute-0 nova_compute[260935]: 2025-10-11 09:21:22.311 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] VM Paused (Lifecycle Event)
Oct 11 09:21:22 compute-0 nova_compute[260935]: 2025-10-11 09:21:22.344 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:21:22 compute-0 nova_compute[260935]: 2025-10-11 09:21:22.355 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:21:22 compute-0 nova_compute[260935]: 2025-10-11 09:21:22.394 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:21:22 compute-0 nova_compute[260935]: 2025-10-11 09:21:22.435 2 DEBUG nova.objects.instance [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_requests' on Instance uuid 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:21:22 compute-0 nova_compute[260935]: 2025-10-11 09:21:22.469 2 DEBUG nova.network.neutron [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:21:22 compute-0 podman[389238]: 2025-10-11 09:21:22.601559693 +0000 UTC m=+0.084156836 container create b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 11 09:21:22 compute-0 systemd[1]: Started libpod-conmon-b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026.scope.
Oct 11 09:21:22 compute-0 podman[389238]: 2025-10-11 09:21:22.559777454 +0000 UTC m=+0.042374677 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:21:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17045466eb5e9b60860258040d0d3e0755d8193eccf00a44cefc9a37c24b920/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:21:22 compute-0 podman[389238]: 2025-10-11 09:21:22.724670337 +0000 UTC m=+0.207267530 container init b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 09:21:22 compute-0 podman[389238]: 2025-10-11 09:21:22.736058179 +0000 UTC m=+0.218655322 container start b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 09:21:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2386: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Oct 11 09:21:22 compute-0 neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2[389253]: [NOTICE]   (389257) : New worker (389259) forked
Oct 11 09:21:22 compute-0 neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2[389253]: [NOTICE]   (389257) : Loading success.
Oct 11 09:21:23 compute-0 nova_compute[260935]: 2025-10-11 09:21:23.422 2 DEBUG nova.policy [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:21:23 compute-0 ceph-mon[74313]: pgmap v2386: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.027 2 DEBUG nova.compute.manager [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.028 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.029 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.030 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.030 2 DEBUG nova.compute.manager [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] No event matching network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 in dict_keys([('network-vif-plugged', '5a14fd50-c9b4-4c8c-b576-9f2a05d734f9')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.031 2 WARNING nova.compute.manager [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received unexpected event network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 for instance with vm_state building and task_state spawning.
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.032 2 DEBUG nova.compute.manager [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.032 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.033 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.034 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.035 2 DEBUG nova.compute.manager [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Processing event network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.035 2 DEBUG nova.compute.manager [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.036 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.036 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.037 2 DEBUG oslo_concurrency.lockutils [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.037 2 DEBUG nova.compute.manager [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] No waiting events found dispatching network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.038 2 WARNING nova.compute.manager [req-a55ac624-7089-4dc2-9f3a-0e64e467425d req-4db9f8cd-fe2b-4560-b05d-0721889d2323 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received unexpected event network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 for instance with vm_state building and task_state spawning.
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.039 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.044 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174484.044082, fcb45648-eb7b-4975-9f50-08675a787d9c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:21:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.044 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] VM Resumed (Lifecycle Event)
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.048 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.053 2 INFO nova.virt.libvirt.driver [-] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Instance spawned successfully.
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.053 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.083 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.088 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.104 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.105 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.105 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.106 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.107 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.107 2 DEBUG nova.virt.libvirt.driver [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.120 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.207 2 INFO nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Took 15.45 seconds to spawn the instance on the hypervisor.
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.208 2 DEBUG nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.286 2 INFO nova.compute.manager [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Took 16.56 seconds to build instance.
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.315 2 DEBUG oslo_concurrency.lockutils [None req-22fc24f5-c7ae-4ccc-b583-7df789c64b4f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.362 2 DEBUG nova.network.neutron [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Successfully created port: 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:21:24 compute-0 nova_compute[260935]: 2025-10-11 09:21:24.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2387: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 25 KiB/s wr, 10 op/s
Oct 11 09:21:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:21:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:21:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:21:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:21:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:21:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:21:25 compute-0 nova_compute[260935]: 2025-10-11 09:21:25.534 2 DEBUG nova.network.neutron [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Successfully updated port: 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:21:25 compute-0 nova_compute[260935]: 2025-10-11 09:21:25.552 2 DEBUG oslo_concurrency.lockutils [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:21:25 compute-0 nova_compute[260935]: 2025-10-11 09:21:25.552 2 DEBUG oslo_concurrency.lockutils [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:21:25 compute-0 nova_compute[260935]: 2025-10-11 09:21:25.553 2 DEBUG nova.network.neutron [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:21:25 compute-0 nova_compute[260935]: 2025-10-11 09:21:25.820 2 DEBUG nova.compute.manager [req-af20fffb-fdf6-4aae-b00c-221527506290 req-6e2cf7bf-0ebf-4f98-a4a2-4fd8ff4bd0c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-changed-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:21:25 compute-0 nova_compute[260935]: 2025-10-11 09:21:25.820 2 DEBUG nova.compute.manager [req-af20fffb-fdf6-4aae-b00c-221527506290 req-6e2cf7bf-0ebf-4f98-a4a2-4fd8ff4bd0c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing instance network info cache due to event network-changed-300c071c-a312-4a9b-bd7a-1b16b9a35ae6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:21:25 compute-0 nova_compute[260935]: 2025-10-11 09:21:25.821 2 DEBUG oslo_concurrency.lockutils [req-af20fffb-fdf6-4aae-b00c-221527506290 req-6e2cf7bf-0ebf-4f98-a4a2-4fd8ff4bd0c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:21:25 compute-0 nova_compute[260935]: 2025-10-11 09:21:25.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:25 compute-0 ceph-mon[74313]: pgmap v2387: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 25 KiB/s wr, 10 op/s
Oct 11 09:21:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:21:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2073524993' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:21:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:21:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2073524993' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:21:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2388: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 25 KiB/s wr, 10 op/s
Oct 11 09:21:26 compute-0 nova_compute[260935]: 2025-10-11 09:21:26.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2073524993' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:21:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2073524993' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:21:27 compute-0 ceph-mon[74313]: pgmap v2388: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 25 KiB/s wr, 10 op/s
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.748 2 DEBUG nova.network.neutron [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:21:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2389: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.775 2 DEBUG oslo_concurrency.lockutils [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.776 2 DEBUG oslo_concurrency.lockutils [req-af20fffb-fdf6-4aae-b00c-221527506290 req-6e2cf7bf-0ebf-4f98-a4a2-4fd8ff4bd0c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.777 2 DEBUG nova.network.neutron [req-af20fffb-fdf6-4aae-b00c-221527506290 req-6e2cf7bf-0ebf-4f98-a4a2-4fd8ff4bd0c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing network info cache for port 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.780 2 DEBUG nova.virt.libvirt.vif [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:20:58Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.780 2 DEBUG nova.network.os_vif_util [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.781 2 DEBUG nova.network.os_vif_util [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.782 2 DEBUG os_vif [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.783 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.783 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.786 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap300c071c-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.786 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap300c071c-a3, col_values=(('external_ids', {'iface-id': '300c071c-a312-4a9b-bd7a-1b16b9a35ae6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:68:88', 'vm-uuid': '7f0d9214-39a5-458d-82db-dcbc7d61b8b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:28 compute-0 NetworkManager[44960]: <info>  [1760174488.7893] manager: (tap300c071c-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/492)
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.796 2 INFO os_vif [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3')
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.797 2 DEBUG nova.virt.libvirt.vif [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:20:58Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.797 2 DEBUG nova.network.os_vif_util [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.798 2 DEBUG nova.network.os_vif_util [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.801 2 DEBUG nova.virt.libvirt.guest [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] attach device xml: <interface type="ethernet">
Oct 11 09:21:28 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:f8:68:88"/>
Oct 11 09:21:28 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 09:21:28 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:21:28 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 09:21:28 compute-0 nova_compute[260935]:   <target dev="tap300c071c-a3"/>
Oct 11 09:21:28 compute-0 nova_compute[260935]: </interface>
Oct 11 09:21:28 compute-0 nova_compute[260935]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 11 09:21:28 compute-0 kernel: tap300c071c-a3: entered promiscuous mode
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:28 compute-0 ovn_controller[152945]: 2025-10-11T09:21:28Z|01208|binding|INFO|Claiming lport 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 for this chassis.
Oct 11 09:21:28 compute-0 ovn_controller[152945]: 2025-10-11T09:21:28Z|01209|binding|INFO|300c071c-a312-4a9b-bd7a-1b16b9a35ae6: Claiming fa:16:3e:f8:68:88 10.100.0.30
Oct 11 09:21:28 compute-0 NetworkManager[44960]: <info>  [1760174488.8191] manager: (tap300c071c-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/493)
Oct 11 09:21:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.853 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:68:88 10.100.0.30'], port_security=['fa:16:3e:f8:68:88 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '7f0d9214-39a5-458d-82db-dcbc7d61b8b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-428d818e-c08a-4eef-be62-24fe484fed05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=020223cc-9ae9-496f-8a67-2eb055f1a089, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=300c071c-a312-4a9b-bd7a-1b16b9a35ae6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:21:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.859 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 in datapath 428d818e-c08a-4eef-be62-24fe484fed05 bound to our chassis
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.864 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 428d818e-c08a-4eef-be62-24fe484fed05
Oct 11 09:21:28 compute-0 ovn_controller[152945]: 2025-10-11T09:21:28Z|01210|binding|INFO|Setting lport 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 ovn-installed in OVS
Oct 11 09:21:28 compute-0 ovn_controller[152945]: 2025-10-11T09:21:28Z|01211|binding|INFO|Setting lport 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 up in Southbound
Oct 11 09:21:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.878 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a408a41b-cc5f-4244-b787-87a3d8d56421]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.878 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap428d818e-c1 in ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:21:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.880 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap428d818e-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:21:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.880 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[28430f20-2ed3-4433-ab25-6f6fc5a8390b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.880 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[83542451-dc4c-467c-b213-10986833ada5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:28 compute-0 systemd-udevd[389275]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:21:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.892 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[d044123a-5bfe-4758-9f5b-cf44fe456977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:28 compute-0 NetworkManager[44960]: <info>  [1760174488.9029] device (tap300c071c-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:21:28 compute-0 NetworkManager[44960]: <info>  [1760174488.9040] device (tap300c071c-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.910 2 DEBUG nova.virt.libvirt.driver [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.911 2 DEBUG nova.virt.libvirt.driver [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.911 2 DEBUG nova.virt.libvirt.driver [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:38:a7:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.912 2 DEBUG nova.virt.libvirt.driver [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:f8:68:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:21:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.914 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4b49b838-7830-4cf0-bb48-61fce52dc680]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.940 2 DEBUG nova.virt.libvirt.guest [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:21:28 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:21:28 compute-0 nova_compute[260935]:   <nova:name>tempest-TestNetworkBasicOps-server-1960711580</nova:name>
Oct 11 09:21:28 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 09:21:28</nova:creationTime>
Oct 11 09:21:28 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 09:21:28 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 09:21:28 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 09:21:28 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 09:21:28 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:21:28 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 09:21:28 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 09:21:28 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 09:21:28 compute-0 nova_compute[260935]:     <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:21:28 compute-0 nova_compute[260935]:     <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:21:28 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 09:21:28 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:21:28 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 09:21:28 compute-0 nova_compute[260935]:     <nova:port uuid="6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5">
Oct 11 09:21:28 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 09:21:28 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:21:28 compute-0 nova_compute[260935]:     <nova:port uuid="300c071c-a312-4a9b-bd7a-1b16b9a35ae6">
Oct 11 09:21:28 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Oct 11 09:21:28 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:21:28 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 09:21:28 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 09:21:28 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 09:21:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.942 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf0685a-a485-4ae8-8bb1-ca82f77b2d12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:28 compute-0 NetworkManager[44960]: <info>  [1760174488.9475] manager: (tap428d818e-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/494)
Oct 11 09:21:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.946 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[627b4fc7-118c-43e0-8396-835950a0de3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.965 2 DEBUG nova.compute.manager [req-088b6119-a339-4890-b2c6-05078454be23 req-3aa84d6a-5919-45c1-9e79-6f902047082c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-changed-9669f110-042a-40c6-b7a4-8d78d421ed23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.965 2 DEBUG nova.compute.manager [req-088b6119-a339-4890-b2c6-05078454be23 req-3aa84d6a-5919-45c1-9e79-6f902047082c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Refreshing instance network info cache due to event network-changed-9669f110-042a-40c6-b7a4-8d78d421ed23. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.966 2 DEBUG oslo_concurrency.lockutils [req-088b6119-a339-4890-b2c6-05078454be23 req-3aa84d6a-5919-45c1-9e79-6f902047082c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.966 2 DEBUG oslo_concurrency.lockutils [req-088b6119-a339-4890-b2c6-05078454be23 req-3aa84d6a-5919-45c1-9e79-6f902047082c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.966 2 DEBUG nova.network.neutron [req-088b6119-a339-4890-b2c6-05078454be23 req-3aa84d6a-5919-45c1-9e79-6f902047082c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Refreshing network info cache for port 9669f110-042a-40c6-b7a4-8d78d421ed23 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:21:28 compute-0 nova_compute[260935]: 2025-10-11 09:21:28.994 2 DEBUG oslo_concurrency.lockutils [None req-17a2808a-fce2-40a9-a85c-94ba9c8cfbba dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "interface-7f0d9214-39a5-458d-82db-dcbc7d61b8b5-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:28.997 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[92f7d87d-bd00-4c8a-84c3-9595dea1ec5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.004 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e2f42a-40e5-414d-a66f-3eedd6ea9028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:29 compute-0 NetworkManager[44960]: <info>  [1760174489.0338] device (tap428d818e-c0): carrier: link connected
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.042 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[dfcb914e-6142-402c-9879-fb4122acbd13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.068 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1e8a957e-728c-491d-be3a-b8ed4ac05d18]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap428d818e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:fc:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633373, 'reachable_time': 23194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389301, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.087 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[566ea4fa-aa03-42f1-b82a-b4b9b0806c1c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:fce7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633373, 'tstamp': 633373}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389302, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.114 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[79cbf57c-6c0d-46a9-85c2-5dd29f4ea5a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap428d818e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:fc:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633373, 'reachable_time': 23194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389303, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.149 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a63d72-2cd4-49b1-9aff-e039b1fd9c0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.214 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bc4e63f4-232e-4846-b937-b538e38738f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.216 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap428d818e-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.216 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.217 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap428d818e-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:29 compute-0 NetworkManager[44960]: <info>  [1760174489.2203] manager: (tap428d818e-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/495)
Oct 11 09:21:29 compute-0 kernel: tap428d818e-c0: entered promiscuous mode
Oct 11 09:21:29 compute-0 nova_compute[260935]: 2025-10-11 09:21:29.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.224 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap428d818e-c0, col_values=(('external_ids', {'iface-id': 'f0c2b309-d39a-4d01-be06-bdc63deb27f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:29 compute-0 nova_compute[260935]: 2025-10-11 09:21:29.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:29 compute-0 ovn_controller[152945]: 2025-10-11T09:21:29Z|01212|binding|INFO|Releasing lport f0c2b309-d39a-4d01-be06-bdc63deb27f9 from this chassis (sb_readonly=0)
Oct 11 09:21:29 compute-0 nova_compute[260935]: 2025-10-11 09:21:29.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:29 compute-0 nova_compute[260935]: 2025-10-11 09:21:29.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.248 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/428d818e-c08a-4eef-be62-24fe484fed05.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/428d818e-c08a-4eef-be62-24fe484fed05.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.254 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8c88bdbd-c838-4ee4-a732-4b2418b01af1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.255 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-428d818e-c08a-4eef-be62-24fe484fed05
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/428d818e-c08a-4eef-be62-24fe484fed05.pid.haproxy
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 428d818e-c08a-4eef-be62-24fe484fed05
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:21:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:29.256 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'env', 'PROCESS_TAG=haproxy-428d818e-c08a-4eef-be62-24fe484fed05', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/428d818e-c08a-4eef-be62-24fe484fed05.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:21:29 compute-0 podman[389335]: 2025-10-11 09:21:29.673354473 +0000 UTC m=+0.078108635 container create 3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 09:21:29 compute-0 podman[389335]: 2025-10-11 09:21:29.631260015 +0000 UTC m=+0.036014227 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:21:29 compute-0 systemd[1]: Started libpod-conmon-3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f.scope.
Oct 11 09:21:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e5d17837c87f5f50f3b76a7653ef33e05fc19c9a00cc47d414f88016a25e132/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:21:29 compute-0 podman[389335]: 2025-10-11 09:21:29.807875069 +0000 UTC m=+0.212629291 container init 3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 09:21:29 compute-0 podman[389335]: 2025-10-11 09:21:29.816920035 +0000 UTC m=+0.221674207 container start 3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:21:29 compute-0 neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05[389350]: [NOTICE]   (389354) : New worker (389356) forked
Oct 11 09:21:29 compute-0 neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05[389350]: [NOTICE]   (389354) : Loading success.
Oct 11 09:21:29 compute-0 ceph-mon[74313]: pgmap v2389: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Oct 11 09:21:30 compute-0 nova_compute[260935]: 2025-10-11 09:21:30.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:21:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2390: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 11 09:21:30 compute-0 nova_compute[260935]: 2025-10-11 09:21:30.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:31 compute-0 nova_compute[260935]: 2025-10-11 09:21:31.140 2 DEBUG nova.network.neutron [req-088b6119-a339-4890-b2c6-05078454be23 req-3aa84d6a-5919-45c1-9e79-6f902047082c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updated VIF entry in instance network info cache for port 9669f110-042a-40c6-b7a4-8d78d421ed23. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:21:31 compute-0 nova_compute[260935]: 2025-10-11 09:21:31.141 2 DEBUG nova.network.neutron [req-088b6119-a339-4890-b2c6-05078454be23 req-3aa84d6a-5919-45c1-9e79-6f902047082c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updating instance_info_cache with network_info: [{"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:21:31 compute-0 nova_compute[260935]: 2025-10-11 09:21:31.197 2 DEBUG nova.network.neutron [req-af20fffb-fdf6-4aae-b00c-221527506290 req-6e2cf7bf-0ebf-4f98-a4a2-4fd8ff4bd0c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updated VIF entry in instance network info cache for port 300c071c-a312-4a9b-bd7a-1b16b9a35ae6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:21:31 compute-0 nova_compute[260935]: 2025-10-11 09:21:31.197 2 DEBUG nova.network.neutron [req-af20fffb-fdf6-4aae-b00c-221527506290 req-6e2cf7bf-0ebf-4f98-a4a2-4fd8ff4bd0c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:21:31 compute-0 nova_compute[260935]: 2025-10-11 09:21:31.204 2 DEBUG oslo_concurrency.lockutils [req-088b6119-a339-4890-b2c6-05078454be23 req-3aa84d6a-5919-45c1-9e79-6f902047082c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:21:31 compute-0 nova_compute[260935]: 2025-10-11 09:21:31.209 2 DEBUG oslo_concurrency.lockutils [req-af20fffb-fdf6-4aae-b00c-221527506290 req-6e2cf7bf-0ebf-4f98-a4a2-4fd8ff4bd0c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:21:31 compute-0 nova_compute[260935]: 2025-10-11 09:21:31.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:31.211 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:21:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:31.214 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:21:31 compute-0 ovn_controller[152945]: 2025-10-11T09:21:31Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:68:88 10.100.0.30
Oct 11 09:21:31 compute-0 ovn_controller[152945]: 2025-10-11T09:21:31Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:68:88 10.100.0.30
Oct 11 09:21:31 compute-0 nova_compute[260935]: 2025-10-11 09:21:31.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:21:31 compute-0 nova_compute[260935]: 2025-10-11 09:21:31.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:21:31 compute-0 nova_compute[260935]: 2025-10-11 09:21:31.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:31 compute-0 ceph-mon[74313]: pgmap v2390: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 11 09:21:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2391: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 11 09:21:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:33.216 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:33 compute-0 nova_compute[260935]: 2025-10-11 09:21:33.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:21:33 compute-0 nova_compute[260935]: 2025-10-11 09:21:33.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:33 compute-0 ceph-mon[74313]: pgmap v2391: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct 11 09:21:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:21:34 compute-0 nova_compute[260935]: 2025-10-11 09:21:34.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:34 compute-0 nova_compute[260935]: 2025-10-11 09:21:34.722 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:21:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2392: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 64 op/s
Oct 11 09:21:35 compute-0 nova_compute[260935]: 2025-10-11 09:21:35.354 2 DEBUG nova.compute.manager [req-4abbb14b-b969-4ede-af3d-775d8a0c8872 req-b0204d3c-6b6b-452c-8d23-2b08769c6b3d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:21:35 compute-0 nova_compute[260935]: 2025-10-11 09:21:35.354 2 DEBUG oslo_concurrency.lockutils [req-4abbb14b-b969-4ede-af3d-775d8a0c8872 req-b0204d3c-6b6b-452c-8d23-2b08769c6b3d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:35 compute-0 nova_compute[260935]: 2025-10-11 09:21:35.355 2 DEBUG oslo_concurrency.lockutils [req-4abbb14b-b969-4ede-af3d-775d8a0c8872 req-b0204d3c-6b6b-452c-8d23-2b08769c6b3d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:35 compute-0 nova_compute[260935]: 2025-10-11 09:21:35.355 2 DEBUG oslo_concurrency.lockutils [req-4abbb14b-b969-4ede-af3d-775d8a0c8872 req-b0204d3c-6b6b-452c-8d23-2b08769c6b3d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:35 compute-0 nova_compute[260935]: 2025-10-11 09:21:35.356 2 DEBUG nova.compute.manager [req-4abbb14b-b969-4ede-af3d-775d8a0c8872 req-b0204d3c-6b6b-452c-8d23-2b08769c6b3d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] No waiting events found dispatching network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:21:35 compute-0 nova_compute[260935]: 2025-10-11 09:21:35.357 2 WARNING nova.compute.manager [req-4abbb14b-b969-4ede-af3d-775d8a0c8872 req-b0204d3c-6b6b-452c-8d23-2b08769c6b3d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received unexpected event network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 for instance with vm_state active and task_state None.
Oct 11 09:21:36 compute-0 ceph-mon[74313]: pgmap v2392: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 64 op/s
Oct 11 09:21:36 compute-0 nova_compute[260935]: 2025-10-11 09:21:36.272 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "c677661f-bc62-4954-9130-09b285e6abe4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:36 compute-0 nova_compute[260935]: 2025-10-11 09:21:36.273 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:36 compute-0 nova_compute[260935]: 2025-10-11 09:21:36.293 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:21:36 compute-0 nova_compute[260935]: 2025-10-11 09:21:36.401 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:36 compute-0 nova_compute[260935]: 2025-10-11 09:21:36.402 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:36 compute-0 nova_compute[260935]: 2025-10-11 09:21:36.412 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:21:36 compute-0 nova_compute[260935]: 2025-10-11 09:21:36.412 2 INFO nova.compute.claims [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:21:36 compute-0 nova_compute[260935]: 2025-10-11 09:21:36.658 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:36 compute-0 nova_compute[260935]: 2025-10-11 09:21:36.708 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:21:36 compute-0 nova_compute[260935]: 2025-10-11 09:21:36.709 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:21:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2393: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 64 op/s
Oct 11 09:21:36 compute-0 nova_compute[260935]: 2025-10-11 09:21:36.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:36 compute-0 nova_compute[260935]: 2025-10-11 09:21:36.958 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:21:36 compute-0 nova_compute[260935]: 2025-10-11 09:21:36.959 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:21:36 compute-0 nova_compute[260935]: 2025-10-11 09:21:36.959 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:21:37 compute-0 ovn_controller[152945]: 2025-10-11T09:21:37Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:eb:54:35 10.100.0.11
Oct 11 09:21:37 compute-0 ovn_controller[152945]: 2025-10-11T09:21:37Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:54:35 10.100.0.11
Oct 11 09:21:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:21:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3072223939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.211 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.219 2 DEBUG nova.compute.provider_tree [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.243 2 DEBUG nova.scheduler.client.report [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.285 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.286 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.352 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.352 2 DEBUG nova.network.neutron [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.368 2 INFO nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.386 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.494 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.496 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.496 2 INFO nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Creating image(s)
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.530 2 DEBUG nova.storage.rbd_utils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image c677661f-bc62-4954-9130-09b285e6abe4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.566 2 DEBUG nova.storage.rbd_utils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image c677661f-bc62-4954-9130-09b285e6abe4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.603 2 DEBUG nova.storage.rbd_utils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image c677661f-bc62-4954-9130-09b285e6abe4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.609 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.680 2 DEBUG nova.compute.manager [req-643d4e85-792d-4840-9465-da7857923f12 req-bc6c0716-69f0-4b59-88ab-97392747320b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.681 2 DEBUG oslo_concurrency.lockutils [req-643d4e85-792d-4840-9465-da7857923f12 req-bc6c0716-69f0-4b59-88ab-97392747320b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.682 2 DEBUG oslo_concurrency.lockutils [req-643d4e85-792d-4840-9465-da7857923f12 req-bc6c0716-69f0-4b59-88ab-97392747320b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.683 2 DEBUG oslo_concurrency.lockutils [req-643d4e85-792d-4840-9465-da7857923f12 req-bc6c0716-69f0-4b59-88ab-97392747320b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.684 2 DEBUG nova.compute.manager [req-643d4e85-792d-4840-9465-da7857923f12 req-bc6c0716-69f0-4b59-88ab-97392747320b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] No waiting events found dispatching network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.685 2 WARNING nova.compute.manager [req-643d4e85-792d-4840-9465-da7857923f12 req-bc6c0716-69f0-4b59-88ab-97392747320b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received unexpected event network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 for instance with vm_state active and task_state None.
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.718 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.720 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.721 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.722 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.759 2 DEBUG nova.storage.rbd_utils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image c677661f-bc62-4954-9130-09b285e6abe4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.768 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 c677661f-bc62-4954-9130-09b285e6abe4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:37 compute-0 podman[389443]: 2025-10-11 09:21:37.806685642 +0000 UTC m=+0.104156231 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 09:21:37 compute-0 nova_compute[260935]: 2025-10-11 09:21:37.878 2 DEBUG nova.policy [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:21:38 compute-0 ceph-mon[74313]: pgmap v2393: 321 pgs: 321 active+clean; 453 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 64 op/s
Oct 11 09:21:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3072223939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:21:38 compute-0 nova_compute[260935]: 2025-10-11 09:21:38.155 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 c677661f-bc62-4954-9130-09b285e6abe4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:38 compute-0 nova_compute[260935]: 2025-10-11 09:21:38.209 2 DEBUG nova.storage.rbd_utils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image c677661f-bc62-4954-9130-09b285e6abe4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:21:38 compute-0 nova_compute[260935]: 2025-10-11 09:21:38.292 2 DEBUG nova.objects.instance [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid c677661f-bc62-4954-9130-09b285e6abe4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:21:38 compute-0 nova_compute[260935]: 2025-10-11 09:21:38.307 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:21:38 compute-0 nova_compute[260935]: 2025-10-11 09:21:38.308 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Ensure instance console log exists: /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:21:38 compute-0 nova_compute[260935]: 2025-10-11 09:21:38.308 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:38 compute-0 nova_compute[260935]: 2025-10-11 09:21:38.309 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:38 compute-0 nova_compute[260935]: 2025-10-11 09:21:38.309 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2394: 321 pgs: 321 active+clean; 486 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 11 09:21:38 compute-0 nova_compute[260935]: 2025-10-11 09:21:38.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:39 compute-0 ceph-mon[74313]: pgmap v2394: 321 pgs: 321 active+clean; 486 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 11 09:21:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:21:40 compute-0 nova_compute[260935]: 2025-10-11 09:21:40.709 2 DEBUG nova.network.neutron [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Successfully created port: f09cab39-5082-4d84-91fe-0c7b1cc2fb8a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:21:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2395: 321 pgs: 321 active+clean; 486 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.007 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.023 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.024 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.024 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.025 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.025 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.025 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.047 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.047 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.048 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.048 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.049 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:21:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1929259347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.552 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.661 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.661 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.661 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.665 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.666 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.669 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.669 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.672 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.672 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.676 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.676 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:21:41 compute-0 ceph-mon[74313]: pgmap v2395: 321 pgs: 321 active+clean; 486 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:21:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1929259347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.951 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.952 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2502MB free_disk=59.73967361450195GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.952 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:41 compute-0 nova_compute[260935]: 2025-10-11 09:21:41.952 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:42 compute-0 nova_compute[260935]: 2025-10-11 09:21:42.046 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:21:42 compute-0 nova_compute[260935]: 2025-10-11 09:21:42.046 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:21:42 compute-0 nova_compute[260935]: 2025-10-11 09:21:42.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:21:42 compute-0 nova_compute[260935]: 2025-10-11 09:21:42.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:21:42 compute-0 nova_compute[260935]: 2025-10-11 09:21:42.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance fcb45648-eb7b-4975-9f50-08675a787d9c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:21:42 compute-0 nova_compute[260935]: 2025-10-11 09:21:42.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c677661f-bc62-4954-9130-09b285e6abe4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:21:42 compute-0 nova_compute[260935]: 2025-10-11 09:21:42.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:21:42 compute-0 nova_compute[260935]: 2025-10-11 09:21:42.048 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:21:42 compute-0 nova_compute[260935]: 2025-10-11 09:21:42.158 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:21:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4016864633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:21:42 compute-0 nova_compute[260935]: 2025-10-11 09:21:42.593 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:42 compute-0 nova_compute[260935]: 2025-10-11 09:21:42.599 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:21:42 compute-0 nova_compute[260935]: 2025-10-11 09:21:42.621 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:21:42 compute-0 nova_compute[260935]: 2025-10-11 09:21:42.661 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:21:42 compute-0 nova_compute[260935]: 2025-10-11 09:21:42.661 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2396: 321 pgs: 321 active+clean; 533 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 11 09:21:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4016864633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:21:43 compute-0 nova_compute[260935]: 2025-10-11 09:21:43.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:43 compute-0 nova_compute[260935]: 2025-10-11 09:21:43.513 2 DEBUG nova.network.neutron [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Successfully updated port: f09cab39-5082-4d84-91fe-0c7b1cc2fb8a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:21:43 compute-0 nova_compute[260935]: 2025-10-11 09:21:43.535 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-c677661f-bc62-4954-9130-09b285e6abe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:21:43 compute-0 nova_compute[260935]: 2025-10-11 09:21:43.536 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-c677661f-bc62-4954-9130-09b285e6abe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:21:43 compute-0 nova_compute[260935]: 2025-10-11 09:21:43.536 2 DEBUG nova.network.neutron [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:21:43 compute-0 nova_compute[260935]: 2025-10-11 09:21:43.696 2 DEBUG nova.compute.manager [req-2ba3882c-668d-4d8b-9ccb-bc844d0c65f1 req-e7e686db-7651-456e-8f90-1e49f6871dc3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received event network-changed-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:21:43 compute-0 nova_compute[260935]: 2025-10-11 09:21:43.696 2 DEBUG nova.compute.manager [req-2ba3882c-668d-4d8b-9ccb-bc844d0c65f1 req-e7e686db-7651-456e-8f90-1e49f6871dc3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Refreshing instance network info cache due to event network-changed-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:21:43 compute-0 nova_compute[260935]: 2025-10-11 09:21:43.696 2 DEBUG oslo_concurrency.lockutils [req-2ba3882c-668d-4d8b-9ccb-bc844d0c65f1 req-e7e686db-7651-456e-8f90-1e49f6871dc3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-c677661f-bc62-4954-9130-09b285e6abe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:21:43 compute-0 podman[389618]: 2025-10-11 09:21:43.774001653 +0000 UTC m=+0.072004093 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 09:21:43 compute-0 nova_compute[260935]: 2025-10-11 09:21:43.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:43 compute-0 nova_compute[260935]: 2025-10-11 09:21:43.818 2 DEBUG nova.network.neutron [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:21:43 compute-0 ceph-mon[74313]: pgmap v2396: 321 pgs: 321 active+clean; 533 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 11 09:21:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.636 2 DEBUG nova.network.neutron [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Updating instance_info_cache with network_info: [{"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.666 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-c677661f-bc62-4954-9130-09b285e6abe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.667 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Instance network_info: |[{"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.668 2 DEBUG oslo_concurrency.lockutils [req-2ba3882c-668d-4d8b-9ccb-bc844d0c65f1 req-e7e686db-7651-456e-8f90-1e49f6871dc3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-c677661f-bc62-4954-9130-09b285e6abe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.668 2 DEBUG nova.network.neutron [req-2ba3882c-668d-4d8b-9ccb-bc844d0c65f1 req-e7e686db-7651-456e-8f90-1e49f6871dc3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Refreshing network info cache for port f09cab39-5082-4d84-91fe-0c7b1cc2fb8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.673 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Start _get_guest_xml network_info=[{"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.680 2 WARNING nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.686 2 DEBUG nova.virt.libvirt.host [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.687 2 DEBUG nova.virt.libvirt.host [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.696 2 DEBUG nova.virt.libvirt.host [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.697 2 DEBUG nova.virt.libvirt.host [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.698 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.698 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.699 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.700 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.700 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.701 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.701 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.702 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.702 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.703 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.703 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.704 2 DEBUG nova.virt.hardware [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:21:44 compute-0 nova_compute[260935]: 2025-10-11 09:21:44.709 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2397: 321 pgs: 321 active+clean; 533 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 11 09:21:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:21:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4236163751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.254 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.283 2 DEBUG nova.storage.rbd_utils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image c677661f-bc62-4954-9130-09b285e6abe4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.288 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:21:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3920895477' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.703 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.707 2 DEBUG nova.virt.libvirt.vif [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1301201135',display_name='tempest-TestNetworkBasicOps-server-1301201135',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1301201135',id=119,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAFMu8G7LIufdrCxIhBQQCul5ZF6SEzch6wK6wL4IRYDngjbMt9CFdOpjTGBGp3BV5OfwWRm0v8OrQ6qzwsg+DrsRLw4XZQAC1nE0Ke58JBZ4nmeeruu5Ghd9xdLrk6Odw==',key_name='tempest-TestNetworkBasicOps-1626339166',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-1lilo2wr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:37Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=c677661f-bc62-4954-9130-09b285e6abe4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.707 2 DEBUG nova.network.os_vif_util [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.709 2 DEBUG nova.network.os_vif_util [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:57:ba,bridge_name='br-int',has_traffic_filtering=True,id=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf09cab39-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.711 2 DEBUG nova.objects.instance [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid c677661f-bc62-4954-9130-09b285e6abe4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.731 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:21:45 compute-0 nova_compute[260935]:   <uuid>c677661f-bc62-4954-9130-09b285e6abe4</uuid>
Oct 11 09:21:45 compute-0 nova_compute[260935]:   <name>instance-00000077</name>
Oct 11 09:21:45 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:21:45 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:21:45 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkBasicOps-server-1301201135</nova:name>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:21:44</nova:creationTime>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:21:45 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:21:45 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:21:45 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:21:45 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:21:45 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:21:45 compute-0 nova_compute[260935]:         <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:21:45 compute-0 nova_compute[260935]:         <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:21:45 compute-0 nova_compute[260935]:         <nova:port uuid="f09cab39-5082-4d84-91fe-0c7b1cc2fb8a">
Oct 11 09:21:45 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:21:45 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:21:45 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <system>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <entry name="serial">c677661f-bc62-4954-9130-09b285e6abe4</entry>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <entry name="uuid">c677661f-bc62-4954-9130-09b285e6abe4</entry>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     </system>
Oct 11 09:21:45 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:21:45 compute-0 nova_compute[260935]:   <os>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:   </os>
Oct 11 09:21:45 compute-0 nova_compute[260935]:   <features>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:   </features>
Oct 11 09:21:45 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:21:45 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:21:45 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/c677661f-bc62-4954-9130-09b285e6abe4_disk">
Oct 11 09:21:45 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       </source>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:21:45 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/c677661f-bc62-4954-9130-09b285e6abe4_disk.config">
Oct 11 09:21:45 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       </source>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:21:45 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:29:57:ba"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <target dev="tapf09cab39-50"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4/console.log" append="off"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <video>
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     </video>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:21:45 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:21:45 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:21:45 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:21:45 compute-0 nova_compute[260935]: </domain>
Oct 11 09:21:45 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.733 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Preparing to wait for external event network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.733 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "c677661f-bc62-4954-9130-09b285e6abe4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.734 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.734 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.736 2 DEBUG nova.virt.libvirt.vif [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1301201135',display_name='tempest-TestNetworkBasicOps-server-1301201135',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1301201135',id=119,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAFMu8G7LIufdrCxIhBQQCul5ZF6SEzch6wK6wL4IRYDngjbMt9CFdOpjTGBGp3BV5OfwWRm0v8OrQ6qzwsg+DrsRLw4XZQAC1nE0Ke58JBZ4nmeeruu5Ghd9xdLrk6Odw==',key_name='tempest-TestNetworkBasicOps-1626339166',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-1lilo2wr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:37Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=c677661f-bc62-4954-9130-09b285e6abe4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.736 2 DEBUG nova.network.os_vif_util [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.737 2 DEBUG nova.network.os_vif_util [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:57:ba,bridge_name='br-int',has_traffic_filtering=True,id=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf09cab39-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.738 2 DEBUG os_vif [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:57:ba,bridge_name='br-int',has_traffic_filtering=True,id=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf09cab39-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.741 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.747 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf09cab39-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.748 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf09cab39-50, col_values=(('external_ids', {'iface-id': 'f09cab39-5082-4d84-91fe-0c7b1cc2fb8a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:29:57:ba', 'vm-uuid': 'c677661f-bc62-4954-9130-09b285e6abe4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:45 compute-0 NetworkManager[44960]: <info>  [1760174505.7971] manager: (tapf09cab39-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/496)
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.812 2 INFO os_vif [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:57:ba,bridge_name='br-int',has_traffic_filtering=True,id=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf09cab39-50')
Oct 11 09:21:45 compute-0 ceph-mon[74313]: pgmap v2397: 321 pgs: 321 active+clean; 533 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 11 09:21:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4236163751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:21:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3920895477' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.871 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.871 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.872 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:29:57:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.873 2 INFO nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Using config drive
Oct 11 09:21:45 compute-0 nova_compute[260935]: 2025-10-11 09:21:45.906 2 DEBUG nova.storage.rbd_utils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image c677661f-bc62-4954-9130-09b285e6abe4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2398: 321 pgs: 321 active+clean; 533 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 11 09:21:46 compute-0 nova_compute[260935]: 2025-10-11 09:21:46.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.094 2 INFO nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Creating config drive at /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4/disk.config
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.100 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpby9k0os1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.264 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpby9k0os1" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.291 2 DEBUG nova.storage.rbd_utils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image c677661f-bc62-4954-9130-09b285e6abe4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.295 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4/disk.config c677661f-bc62-4954-9130-09b285e6abe4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:47 compute-0 ovn_controller[152945]: 2025-10-11T09:21:47Z|01213|binding|INFO|Releasing lport bc2597de-285d-49b2-9e10-c93c87e43828 from this chassis (sb_readonly=0)
Oct 11 09:21:47 compute-0 ovn_controller[152945]: 2025-10-11T09:21:47Z|01214|binding|INFO|Releasing lport af33797e-d57d-45c8-92d7-86ea03fdf1ef from this chassis (sb_readonly=0)
Oct 11 09:21:47 compute-0 ovn_controller[152945]: 2025-10-11T09:21:47Z|01215|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:21:47 compute-0 ovn_controller[152945]: 2025-10-11T09:21:47Z|01216|binding|INFO|Releasing lport f0c2b309-d39a-4d01-be06-bdc63deb27f9 from this chassis (sb_readonly=0)
Oct 11 09:21:47 compute-0 ovn_controller[152945]: 2025-10-11T09:21:47Z|01217|binding|INFO|Releasing lport 89155f05-1b39-4918-893c-15a2cd2a9493 from this chassis (sb_readonly=0)
Oct 11 09:21:47 compute-0 ovn_controller[152945]: 2025-10-11T09:21:47Z|01218|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.474 2 DEBUG nova.network.neutron [req-2ba3882c-668d-4d8b-9ccb-bc844d0c65f1 req-e7e686db-7651-456e-8f90-1e49f6871dc3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Updated VIF entry in instance network info cache for port f09cab39-5082-4d84-91fe-0c7b1cc2fb8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.477 2 DEBUG nova.network.neutron [req-2ba3882c-668d-4d8b-9ccb-bc844d0c65f1 req-e7e686db-7651-456e-8f90-1e49f6871dc3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Updating instance_info_cache with network_info: [{"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.515 2 DEBUG oslo_concurrency.lockutils [req-2ba3882c-668d-4d8b-9ccb-bc844d0c65f1 req-e7e686db-7651-456e-8f90-1e49f6871dc3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-c677661f-bc62-4954-9130-09b285e6abe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.517 2 DEBUG oslo_concurrency.processutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4/disk.config c677661f-bc62-4954-9130-09b285e6abe4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.519 2 INFO nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Deleting local config drive /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4/disk.config because it was imported into RBD.
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:47 compute-0 kernel: tapf09cab39-50: entered promiscuous mode
Oct 11 09:21:47 compute-0 NetworkManager[44960]: <info>  [1760174507.6063] manager: (tapf09cab39-50): new Tun device (/org/freedesktop/NetworkManager/Devices/497)
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:47 compute-0 ovn_controller[152945]: 2025-10-11T09:21:47Z|01219|binding|INFO|Claiming lport f09cab39-5082-4d84-91fe-0c7b1cc2fb8a for this chassis.
Oct 11 09:21:47 compute-0 ovn_controller[152945]: 2025-10-11T09:21:47Z|01220|binding|INFO|f09cab39-5082-4d84-91fe-0c7b1cc2fb8a: Claiming fa:16:3e:29:57:ba 10.100.0.19
Oct 11 09:21:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.617 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:57:ba 10.100.0.19'], port_security=['fa:16:3e:29:57:ba 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'c677661f-bc62-4954-9130-09b285e6abe4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-428d818e-c08a-4eef-be62-24fe484fed05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8bcfc965-4dfd-4911-b600-4d86bbeab7bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=020223cc-9ae9-496f-8a67-2eb055f1a089, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:21:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.620 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f09cab39-5082-4d84-91fe-0c7b1cc2fb8a in datapath 428d818e-c08a-4eef-be62-24fe484fed05 bound to our chassis
Oct 11 09:21:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.624 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 428d818e-c08a-4eef-be62-24fe484fed05
Oct 11 09:21:47 compute-0 ovn_controller[152945]: 2025-10-11T09:21:47Z|01221|binding|INFO|Setting lport f09cab39-5082-4d84-91fe-0c7b1cc2fb8a ovn-installed in OVS
Oct 11 09:21:47 compute-0 ovn_controller[152945]: 2025-10-11T09:21:47Z|01222|binding|INFO|Setting lport f09cab39-5082-4d84-91fe-0c7b1cc2fb8a up in Southbound
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.656 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a80eec20-b682-49b4-a0cf-283a9fafcb93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:47 compute-0 systemd-machined[215705]: New machine qemu-142-instance-00000077.
Oct 11 09:21:47 compute-0 systemd[1]: Started Virtual Machine qemu-142-instance-00000077.
Oct 11 09:21:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.700 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[10efe28c-dcbb-4087-88af-0ccaaf1385b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.705 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb2b385-3d14-4939-acd8-bd8d6392cbda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:47 compute-0 systemd-udevd[389808]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:21:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.734 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4941e860-e826-4bc8-aafb-ee26af51a05e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:47 compute-0 NetworkManager[44960]: <info>  [1760174507.7451] device (tapf09cab39-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:21:47 compute-0 NetworkManager[44960]: <info>  [1760174507.7462] device (tapf09cab39-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:21:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.757 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d579d8fc-4fcf-4d31-ab4c-76402dd3d0a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap428d818e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:fc:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633373, 'reachable_time': 23194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389813, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:47 compute-0 podman[389770]: 2025-10-11 09:21:47.763628961 +0000 UTC m=+0.109492181 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:21:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.779 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[00dda8fb-d749-4009-a113-b0aed9aba25d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap428d818e-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633388, 'tstamp': 633388}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389823, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap428d818e-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633391, 'tstamp': 633391}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389823, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:21:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.782 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap428d818e-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:47 compute-0 podman[389772]: 2025-10-11 09:21:47.787965548 +0000 UTC m=+0.138939842 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 09:21:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.788 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap428d818e-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.788 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.788 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap428d818e-c0, col_values=(('external_ids', {'iface-id': 'f0c2b309-d39a-4d01-be06-bdc63deb27f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:21:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:21:47.789 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.902 2 DEBUG nova.compute.manager [req-ce966411-99e5-4a25-873b-8b36ad2672ae req-7c26ab00-72d0-438b-b22c-c29caf75ae25 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received event network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.903 2 DEBUG oslo_concurrency.lockutils [req-ce966411-99e5-4a25-873b-8b36ad2672ae req-7c26ab00-72d0-438b-b22c-c29caf75ae25 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c677661f-bc62-4954-9130-09b285e6abe4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.903 2 DEBUG oslo_concurrency.lockutils [req-ce966411-99e5-4a25-873b-8b36ad2672ae req-7c26ab00-72d0-438b-b22c-c29caf75ae25 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.903 2 DEBUG oslo_concurrency.lockutils [req-ce966411-99e5-4a25-873b-8b36ad2672ae req-7c26ab00-72d0-438b-b22c-c29caf75ae25 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:47 compute-0 nova_compute[260935]: 2025-10-11 09:21:47.903 2 DEBUG nova.compute.manager [req-ce966411-99e5-4a25-873b-8b36ad2672ae req-7c26ab00-72d0-438b-b22c-c29caf75ae25 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Processing event network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:21:47 compute-0 ceph-mon[74313]: pgmap v2398: 321 pgs: 321 active+clean; 533 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 11 09:21:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2399: 321 pgs: 321 active+clean; 533 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.012 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.013 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174509.0115993, c677661f-bc62-4954-9130-09b285e6abe4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.013 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] VM Started (Lifecycle Event)
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.020 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.027 2 INFO nova.virt.libvirt.driver [-] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Instance spawned successfully.
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.027 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.035 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.046 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:21:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.056 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.057 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.057 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.058 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.058 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.059 2 DEBUG nova.virt.libvirt.driver [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.066 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.066 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174509.0163283, c677661f-bc62-4954-9130-09b285e6abe4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.066 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] VM Paused (Lifecycle Event)
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.102 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.105 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174509.0186293, c677661f-bc62-4954-9130-09b285e6abe4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.105 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] VM Resumed (Lifecycle Event)
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.123 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.125 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.139 2 INFO nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Took 11.64 seconds to spawn the instance on the hypervisor.
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.139 2 DEBUG nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.176 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.225 2 INFO nova.compute.manager [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Took 12.86 seconds to build instance.
Oct 11 09:21:49 compute-0 nova_compute[260935]: 2025-10-11 09:21:49.245 2 DEBUG oslo_concurrency.lockutils [None req-1eed4baf-552c-46ce-8650-0b15a791ccfd dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:49 compute-0 ceph-mon[74313]: pgmap v2399: 321 pgs: 321 active+clean; 533 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 11 09:21:50 compute-0 unix_chkpwd[389875]: password check failed for user (root)
Oct 11 09:21:50 compute-0 sshd-session[389873]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252  user=root
Oct 11 09:21:50 compute-0 nova_compute[260935]: 2025-10-11 09:21:50.204 2 DEBUG nova.compute.manager [req-2a505d11-9e2d-43aa-89ac-e0b2eb71daca req-674d449c-0a93-4647-ac93-8627b023cba0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received event network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:21:50 compute-0 nova_compute[260935]: 2025-10-11 09:21:50.205 2 DEBUG oslo_concurrency.lockutils [req-2a505d11-9e2d-43aa-89ac-e0b2eb71daca req-674d449c-0a93-4647-ac93-8627b023cba0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c677661f-bc62-4954-9130-09b285e6abe4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:50 compute-0 nova_compute[260935]: 2025-10-11 09:21:50.205 2 DEBUG oslo_concurrency.lockutils [req-2a505d11-9e2d-43aa-89ac-e0b2eb71daca req-674d449c-0a93-4647-ac93-8627b023cba0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:50 compute-0 nova_compute[260935]: 2025-10-11 09:21:50.206 2 DEBUG oslo_concurrency.lockutils [req-2a505d11-9e2d-43aa-89ac-e0b2eb71daca req-674d449c-0a93-4647-ac93-8627b023cba0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:50 compute-0 nova_compute[260935]: 2025-10-11 09:21:50.206 2 DEBUG nova.compute.manager [req-2a505d11-9e2d-43aa-89ac-e0b2eb71daca req-674d449c-0a93-4647-ac93-8627b023cba0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] No waiting events found dispatching network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:21:50 compute-0 nova_compute[260935]: 2025-10-11 09:21:50.207 2 WARNING nova.compute.manager [req-2a505d11-9e2d-43aa-89ac-e0b2eb71daca req-674d449c-0a93-4647-ac93-8627b023cba0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received unexpected event network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a for instance with vm_state active and task_state None.
Oct 11 09:21:50 compute-0 nova_compute[260935]: 2025-10-11 09:21:50.616 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:50 compute-0 nova_compute[260935]: 2025-10-11 09:21:50.617 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:50 compute-0 nova_compute[260935]: 2025-10-11 09:21:50.633 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:21:50 compute-0 nova_compute[260935]: 2025-10-11 09:21:50.736 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:50 compute-0 nova_compute[260935]: 2025-10-11 09:21:50.736 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:50 compute-0 nova_compute[260935]: 2025-10-11 09:21:50.743 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:21:50 compute-0 nova_compute[260935]: 2025-10-11 09:21:50.744 2 INFO nova.compute.claims [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:21:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2400: 321 pgs: 321 active+clean; 533 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 11 09:21:50 compute-0 nova_compute[260935]: 2025-10-11 09:21:50.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:50 compute-0 nova_compute[260935]: 2025-10-11 09:21:50.971 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:21:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2583833662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.464 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.471 2 DEBUG nova.compute.provider_tree [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.489 2 DEBUG nova.scheduler.client.report [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.509 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.509 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.553 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.553 2 DEBUG nova.network.neutron [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.573 2 INFO nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.591 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.745 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.746 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.747 2 INFO nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Creating image(s)
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.772 2 DEBUG nova.storage.rbd_utils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image d703a39a-f502-4bc4-895a-bb87752c83df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.799 2 DEBUG nova.storage.rbd_utils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image d703a39a-f502-4bc4-895a-bb87752c83df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.832 2 DEBUG nova.storage.rbd_utils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image d703a39a-f502-4bc4-895a-bb87752c83df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.836 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.943 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.945 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.945 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.946 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:51 compute-0 ceph-mon[74313]: pgmap v2400: 321 pgs: 321 active+clean; 533 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 11 09:21:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2583833662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.980 2 DEBUG nova.storage.rbd_utils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image d703a39a-f502-4bc4-895a-bb87752c83df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:21:51 compute-0 nova_compute[260935]: 2025-10-11 09:21:51.986 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d703a39a-f502-4bc4-895a-bb87752c83df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:52 compute-0 nova_compute[260935]: 2025-10-11 09:21:52.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:52 compute-0 nova_compute[260935]: 2025-10-11 09:21:52.146 2 DEBUG nova.policy [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:21:52 compute-0 nova_compute[260935]: 2025-10-11 09:21:52.409 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d703a39a-f502-4bc4-895a-bb87752c83df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:21:52 compute-0 nova_compute[260935]: 2025-10-11 09:21:52.488 2 DEBUG nova.storage.rbd_utils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image d703a39a-f502-4bc4-895a-bb87752c83df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:21:52 compute-0 nova_compute[260935]: 2025-10-11 09:21:52.590 2 DEBUG nova.objects.instance [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid d703a39a-f502-4bc4-895a-bb87752c83df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:21:52 compute-0 nova_compute[260935]: 2025-10-11 09:21:52.611 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:21:52 compute-0 nova_compute[260935]: 2025-10-11 09:21:52.611 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Ensure instance console log exists: /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:21:52 compute-0 nova_compute[260935]: 2025-10-11 09:21:52.612 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:21:52 compute-0 nova_compute[260935]: 2025-10-11 09:21:52.612 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:21:52 compute-0 nova_compute[260935]: 2025-10-11 09:21:52.613 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:21:52 compute-0 sshd-session[389873]: Failed password for root from 165.232.82.252 port 35216 ssh2
Oct 11 09:21:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2401: 321 pgs: 321 active+clean; 564 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 117 op/s
Oct 11 09:21:53 compute-0 ovn_controller[152945]: 2025-10-11T09:21:53Z|01223|binding|INFO|Releasing lport bc2597de-285d-49b2-9e10-c93c87e43828 from this chassis (sb_readonly=0)
Oct 11 09:21:53 compute-0 ovn_controller[152945]: 2025-10-11T09:21:53Z|01224|binding|INFO|Releasing lport af33797e-d57d-45c8-92d7-86ea03fdf1ef from this chassis (sb_readonly=0)
Oct 11 09:21:53 compute-0 ovn_controller[152945]: 2025-10-11T09:21:53Z|01225|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:21:53 compute-0 ovn_controller[152945]: 2025-10-11T09:21:53Z|01226|binding|INFO|Releasing lport f0c2b309-d39a-4d01-be06-bdc63deb27f9 from this chassis (sb_readonly=0)
Oct 11 09:21:53 compute-0 ovn_controller[152945]: 2025-10-11T09:21:53Z|01227|binding|INFO|Releasing lport 89155f05-1b39-4918-893c-15a2cd2a9493 from this chassis (sb_readonly=0)
Oct 11 09:21:53 compute-0 ovn_controller[152945]: 2025-10-11T09:21:53Z|01228|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:21:53 compute-0 nova_compute[260935]: 2025-10-11 09:21:53.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:53 compute-0 nova_compute[260935]: 2025-10-11 09:21:53.870 2 DEBUG nova.network.neutron [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Successfully created port: 30ec8ddb-e058-428a-a08a-817e1e452938 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:21:53 compute-0 ceph-mon[74313]: pgmap v2401: 321 pgs: 321 active+clean; 564 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 117 op/s
Oct 11 09:21:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:21:54 compute-0 sudo[390065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:21:54 compute-0 sudo[390065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:21:54 compute-0 sudo[390065]: pam_unix(sudo:session): session closed for user root
Oct 11 09:21:54 compute-0 sshd-session[389873]: Connection closed by authenticating user root 165.232.82.252 port 35216 [preauth]
Oct 11 09:21:54 compute-0 nova_compute[260935]: 2025-10-11 09:21:54.457 2 DEBUG nova.network.neutron [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Successfully created port: a35d553f-941e-4ff2-bc8a-39419cc32ff2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:21:54 compute-0 sudo[390090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:21:54 compute-0 sudo[390090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:21:54 compute-0 sudo[390090]: pam_unix(sudo:session): session closed for user root
Oct 11 09:21:54 compute-0 sudo[390115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:21:54 compute-0 sudo[390115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:21:54 compute-0 sudo[390115]: pam_unix(sudo:session): session closed for user root
Oct 11 09:21:54 compute-0 sudo[390140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:21:54 compute-0 sudo[390140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:21:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2402: 321 pgs: 321 active+clean; 564 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 89 op/s
Oct 11 09:21:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:21:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:21:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:21:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:21:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:21:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:21:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:21:54
Oct 11 09:21:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:21:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:21:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', '.rgw.root', 'default.rgw.control', 'backups', 'images']
Oct 11 09:21:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:21:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:21:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 36K writes, 143K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 36K writes, 13K syncs, 2.76 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3998 writes, 16K keys, 3998 commit groups, 1.0 writes per commit group, ingest: 19.74 MB, 0.03 MB/s
                                           Interval WAL: 3998 writes, 1549 syncs, 2.58 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 09:21:55 compute-0 sudo[390140]: pam_unix(sudo:session): session closed for user root
Oct 11 09:21:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:21:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:21:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:21:55 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:21:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:21:55 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:21:55 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 2e514127-806d-4884-bb6a-9887f6f7a5f4 does not exist
Oct 11 09:21:55 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev f81826bb-97db-4417-a9d7-d5283f775d29 does not exist
Oct 11 09:21:55 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev f349f8f0-b7a5-4e6b-a3e2-37b2a195b0a5 does not exist
Oct 11 09:21:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:21:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:21:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:21:55 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:21:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:21:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:21:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:21:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:21:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:21:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:21:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:21:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:21:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:21:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:21:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:21:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:21:55 compute-0 nova_compute[260935]: 2025-10-11 09:21:55.440 2 DEBUG nova.network.neutron [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Successfully updated port: 30ec8ddb-e058-428a-a08a-817e1e452938 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:21:55 compute-0 sudo[390196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:21:55 compute-0 sudo[390196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:21:55 compute-0 sudo[390196]: pam_unix(sudo:session): session closed for user root
Oct 11 09:21:55 compute-0 sudo[390221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:21:55 compute-0 sudo[390221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:21:55 compute-0 sudo[390221]: pam_unix(sudo:session): session closed for user root
Oct 11 09:21:55 compute-0 nova_compute[260935]: 2025-10-11 09:21:55.624 2 DEBUG nova.compute.manager [req-c31149c9-9c56-414e-9b7e-fd19a9737faf req-548946cf-ad76-4b3b-ab58-dc37ef5162b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-changed-30ec8ddb-e058-428a-a08a-817e1e452938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:21:55 compute-0 nova_compute[260935]: 2025-10-11 09:21:55.625 2 DEBUG nova.compute.manager [req-c31149c9-9c56-414e-9b7e-fd19a9737faf req-548946cf-ad76-4b3b-ab58-dc37ef5162b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Refreshing instance network info cache due to event network-changed-30ec8ddb-e058-428a-a08a-817e1e452938. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:21:55 compute-0 nova_compute[260935]: 2025-10-11 09:21:55.625 2 DEBUG oslo_concurrency.lockutils [req-c31149c9-9c56-414e-9b7e-fd19a9737faf req-548946cf-ad76-4b3b-ab58-dc37ef5162b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:21:55 compute-0 nova_compute[260935]: 2025-10-11 09:21:55.625 2 DEBUG oslo_concurrency.lockutils [req-c31149c9-9c56-414e-9b7e-fd19a9737faf req-548946cf-ad76-4b3b-ab58-dc37ef5162b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:21:55 compute-0 nova_compute[260935]: 2025-10-11 09:21:55.626 2 DEBUG nova.network.neutron [req-c31149c9-9c56-414e-9b7e-fd19a9737faf req-548946cf-ad76-4b3b-ab58-dc37ef5162b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Refreshing network info cache for port 30ec8ddb-e058-428a-a08a-817e1e452938 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:21:55 compute-0 sudo[390246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:21:55 compute-0 sudo[390246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:21:55 compute-0 sudo[390246]: pam_unix(sudo:session): session closed for user root
Oct 11 09:21:55 compute-0 sudo[390271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:21:55 compute-0 sudo[390271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:21:55 compute-0 nova_compute[260935]: 2025-10-11 09:21:55.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:55 compute-0 ceph-mon[74313]: pgmap v2402: 321 pgs: 321 active+clean; 564 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 89 op/s
Oct 11 09:21:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:21:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:21:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:21:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:21:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:21:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:21:56 compute-0 nova_compute[260935]: 2025-10-11 09:21:56.048 2 DEBUG nova.network.neutron [req-c31149c9-9c56-414e-9b7e-fd19a9737faf req-548946cf-ad76-4b3b-ab58-dc37ef5162b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:21:56 compute-0 podman[390338]: 2025-10-11 09:21:56.357736011 +0000 UTC m=+0.082514679 container create 09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dirac, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 09:21:56 compute-0 podman[390338]: 2025-10-11 09:21:56.323060153 +0000 UTC m=+0.047838911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:21:56 compute-0 systemd[1]: Started libpod-conmon-09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd.scope.
Oct 11 09:21:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:21:56 compute-0 podman[390338]: 2025-10-11 09:21:56.502667321 +0000 UTC m=+0.227446059 container init 09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 09:21:56 compute-0 podman[390338]: 2025-10-11 09:21:56.514802074 +0000 UTC m=+0.239580762 container start 09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:21:56 compute-0 podman[390338]: 2025-10-11 09:21:56.519233449 +0000 UTC m=+0.244012137 container attach 09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dirac, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 09:21:56 compute-0 epic_dirac[390354]: 167 167
Oct 11 09:21:56 compute-0 systemd[1]: libpod-09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd.scope: Deactivated successfully.
Oct 11 09:21:56 compute-0 podman[390338]: 2025-10-11 09:21:56.524789356 +0000 UTC m=+0.249568054 container died 09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:21:56 compute-0 nova_compute[260935]: 2025-10-11 09:21:56.533 2 DEBUG nova.network.neutron [req-c31149c9-9c56-414e-9b7e-fd19a9737faf req-548946cf-ad76-4b3b-ab58-dc37ef5162b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:21:56 compute-0 nova_compute[260935]: 2025-10-11 09:21:56.556 2 DEBUG oslo_concurrency.lockutils [req-c31149c9-9c56-414e-9b7e-fd19a9737faf req-548946cf-ad76-4b3b-ab58-dc37ef5162b9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:21:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-019f0df99434ba76c1207907b8c2bcc7b3cfcac6e5fc6e30e3b14236928ff4b2-merged.mount: Deactivated successfully.
Oct 11 09:21:56 compute-0 nova_compute[260935]: 2025-10-11 09:21:56.569 2 DEBUG nova.network.neutron [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Successfully updated port: a35d553f-941e-4ff2-bc8a-39419cc32ff2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:21:56 compute-0 podman[390338]: 2025-10-11 09:21:56.587565517 +0000 UTC m=+0.312344175 container remove 09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dirac, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 09:21:56 compute-0 nova_compute[260935]: 2025-10-11 09:21:56.587 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:21:56 compute-0 nova_compute[260935]: 2025-10-11 09:21:56.588 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:21:56 compute-0 nova_compute[260935]: 2025-10-11 09:21:56.588 2 DEBUG nova.network.neutron [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:21:56 compute-0 systemd[1]: libpod-conmon-09100efc325169dd03cc5e2e050595c6d4cf312e9af1093871766a6eaead26cd.scope: Deactivated successfully.
Oct 11 09:21:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2403: 321 pgs: 321 active+clean; 564 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 89 op/s
Oct 11 09:21:56 compute-0 podman[390377]: 2025-10-11 09:21:56.913903457 +0000 UTC m=+0.100234620 container create ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:21:56 compute-0 podman[390377]: 2025-10-11 09:21:56.870117261 +0000 UTC m=+0.056448494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:21:56 compute-0 nova_compute[260935]: 2025-10-11 09:21:56.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:21:57 compute-0 systemd[1]: Started libpod-conmon-ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd.scope.
Oct 11 09:21:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014f6d00710210cda0d1ecfaa8b7249ec4c0970af74a653c8fcb87647f9f087c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014f6d00710210cda0d1ecfaa8b7249ec4c0970af74a653c8fcb87647f9f087c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014f6d00710210cda0d1ecfaa8b7249ec4c0970af74a653c8fcb87647f9f087c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014f6d00710210cda0d1ecfaa8b7249ec4c0970af74a653c8fcb87647f9f087c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014f6d00710210cda0d1ecfaa8b7249ec4c0970af74a653c8fcb87647f9f087c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:21:57 compute-0 nova_compute[260935]: 2025-10-11 09:21:57.063 2 DEBUG nova.network.neutron [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:21:57 compute-0 podman[390377]: 2025-10-11 09:21:57.086481097 +0000 UTC m=+0.272812260 container init ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:21:57 compute-0 podman[390377]: 2025-10-11 09:21:57.094503674 +0000 UTC m=+0.280834837 container start ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 09:21:57 compute-0 podman[390377]: 2025-10-11 09:21:57.097677923 +0000 UTC m=+0.284009086 container attach ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:21:57 compute-0 nova_compute[260935]: 2025-10-11 09:21:57.773 2 DEBUG nova.compute.manager [req-289eff2a-4d7c-4008-b49b-96bfc9312e11 req-dbc4a8c5-e9b3-454a-8877-66b84e53996c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-changed-a35d553f-941e-4ff2-bc8a-39419cc32ff2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:21:57 compute-0 nova_compute[260935]: 2025-10-11 09:21:57.774 2 DEBUG nova.compute.manager [req-289eff2a-4d7c-4008-b49b-96bfc9312e11 req-dbc4a8c5-e9b3-454a-8877-66b84e53996c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Refreshing instance network info cache due to event network-changed-a35d553f-941e-4ff2-bc8a-39419cc32ff2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:21:57 compute-0 nova_compute[260935]: 2025-10-11 09:21:57.774 2 DEBUG oslo_concurrency.lockutils [req-289eff2a-4d7c-4008-b49b-96bfc9312e11 req-dbc4a8c5-e9b3-454a-8877-66b84e53996c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:21:57 compute-0 ceph-mon[74313]: pgmap v2403: 321 pgs: 321 active+clean; 564 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 89 op/s
Oct 11 09:21:58 compute-0 naughty_joliot[390393]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:21:58 compute-0 naughty_joliot[390393]: --> relative data size: 1.0
Oct 11 09:21:58 compute-0 naughty_joliot[390393]: --> All data devices are unavailable
Oct 11 09:21:58 compute-0 systemd[1]: libpod-ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd.scope: Deactivated successfully.
Oct 11 09:21:58 compute-0 podman[390377]: 2025-10-11 09:21:58.202251125 +0000 UTC m=+1.388582278 container died ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:21:58 compute-0 systemd[1]: libpod-ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd.scope: Consumed 1.017s CPU time.
Oct 11 09:21:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-014f6d00710210cda0d1ecfaa8b7249ec4c0970af74a653c8fcb87647f9f087c-merged.mount: Deactivated successfully.
Oct 11 09:21:58 compute-0 podman[390377]: 2025-10-11 09:21:58.264479501 +0000 UTC m=+1.450810674 container remove ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 09:21:58 compute-0 systemd[1]: libpod-conmon-ae8d70956043874001b4322608e2b9d85b13b8cef4d4417b8bfcb4b8c9b10afd.scope: Deactivated successfully.
Oct 11 09:21:58 compute-0 sudo[390271]: pam_unix(sudo:session): session closed for user root
Oct 11 09:21:58 compute-0 sudo[390436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:21:58 compute-0 sudo[390436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:21:58 compute-0 sudo[390436]: pam_unix(sudo:session): session closed for user root
Oct 11 09:21:58 compute-0 sudo[390461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:21:58 compute-0 sudo[390461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:21:58 compute-0 sudo[390461]: pam_unix(sudo:session): session closed for user root
Oct 11 09:21:58 compute-0 sudo[390486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:21:58 compute-0 sudo[390486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:21:58 compute-0 sudo[390486]: pam_unix(sudo:session): session closed for user root
Oct 11 09:21:58 compute-0 sudo[390511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:21:58 compute-0 sudo[390511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:21:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2404: 321 pgs: 321 active+clean; 579 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 11 09:21:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:21:59 compute-0 podman[390576]: 2025-10-11 09:21:59.146295617 +0000 UTC m=+0.088397746 container create 479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:21:59 compute-0 podman[390576]: 2025-10-11 09:21:59.104352623 +0000 UTC m=+0.046454802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:21:59 compute-0 systemd[1]: Started libpod-conmon-479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6.scope.
Oct 11 09:21:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:21:59 compute-0 podman[390576]: 2025-10-11 09:21:59.246411092 +0000 UTC m=+0.188513231 container init 479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:21:59 compute-0 podman[390576]: 2025-10-11 09:21:59.261141678 +0000 UTC m=+0.203243807 container start 479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:21:59 compute-0 podman[390576]: 2025-10-11 09:21:59.26973302 +0000 UTC m=+0.211835189 container attach 479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:21:59 compute-0 hardcore_bardeen[390593]: 167 167
Oct 11 09:21:59 compute-0 podman[390576]: 2025-10-11 09:21:59.274262598 +0000 UTC m=+0.216364697 container died 479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 09:21:59 compute-0 systemd[1]: libpod-479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6.scope: Deactivated successfully.
Oct 11 09:21:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b02528375492a19819aa9e39342232b550acf7191a1b798ea6dd0f9fe4c2a39-merged.mount: Deactivated successfully.
Oct 11 09:21:59 compute-0 podman[390576]: 2025-10-11 09:21:59.338602343 +0000 UTC m=+0.280704452 container remove 479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:21:59 compute-0 systemd[1]: libpod-conmon-479cf3a6e6abccd60f429a272c53bb20acaa65dd5db137c516c7f4dda71760d6.scope: Deactivated successfully.
Oct 11 09:21:59 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.433 2 DEBUG nova.network.neutron [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updating instance_info_cache with network_info: [{"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.461 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.462 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Instance network_info: |[{"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.463 2 DEBUG oslo_concurrency.lockutils [req-289eff2a-4d7c-4008-b49b-96bfc9312e11 req-dbc4a8c5-e9b3-454a-8877-66b84e53996c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.463 2 DEBUG nova.network.neutron [req-289eff2a-4d7c-4008-b49b-96bfc9312e11 req-dbc4a8c5-e9b3-454a-8877-66b84e53996c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Refreshing network info cache for port a35d553f-941e-4ff2-bc8a-39419cc32ff2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.470 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Start _get_guest_xml network_info=[{"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.477 2 WARNING nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.484 2 DEBUG nova.virt.libvirt.host [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.485 2 DEBUG nova.virt.libvirt.host [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.489 2 DEBUG nova.virt.libvirt.host [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.489 2 DEBUG nova.virt.libvirt.host [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.489 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.490 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.490 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.490 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.490 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.491 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.491 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.491 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.491 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.491 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.491 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.491 2 DEBUG nova.virt.hardware [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:21:59 compute-0 nova_compute[260935]: 2025-10-11 09:21:59.494 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:21:59 compute-0 podman[390619]: 2025-10-11 09:21:59.604158637 +0000 UTC m=+0.044042084 container create 9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_newton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 09:21:59 compute-0 systemd[1]: Started libpod-conmon-9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734.scope.
Oct 11 09:21:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:21:59 compute-0 podman[390619]: 2025-10-11 09:21:59.587025754 +0000 UTC m=+0.026909221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:21:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb4efb4ede56d4028982a986af9a410045447d4760c1983441c927331e9fc46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:21:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb4efb4ede56d4028982a986af9a410045447d4760c1983441c927331e9fc46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:21:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb4efb4ede56d4028982a986af9a410045447d4760c1983441c927331e9fc46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:21:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb4efb4ede56d4028982a986af9a410045447d4760c1983441c927331e9fc46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:21:59 compute-0 podman[390619]: 2025-10-11 09:21:59.736092211 +0000 UTC m=+0.175975678 container init 9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 09:21:59 compute-0 podman[390619]: 2025-10-11 09:21:59.745799735 +0000 UTC m=+0.185683192 container start 9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:21:59 compute-0 podman[390619]: 2025-10-11 09:21:59.749279793 +0000 UTC m=+0.189163390 container attach 9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_newton, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:21:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:21:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2908042383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:22:00 compute-0 ceph-mon[74313]: pgmap v2404: 321 pgs: 321 active+clean; 579 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 11 09:22:00 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2908042383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.007 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.042 2 DEBUG nova.storage.rbd_utils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image d703a39a-f502-4bc4-895a-bb87752c83df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.048 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:22:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:22:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 38K writes, 143K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.73 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4198 writes, 17K keys, 4198 commit groups, 1.0 writes per commit group, ingest: 20.50 MB, 0.03 MB/s
                                           Interval WAL: 4198 writes, 1588 syncs, 2.64 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 09:22:00 compute-0 determined_newton[390653]: {
Oct 11 09:22:00 compute-0 determined_newton[390653]:     "0": [
Oct 11 09:22:00 compute-0 determined_newton[390653]:         {
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "devices": [
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "/dev/loop3"
Oct 11 09:22:00 compute-0 determined_newton[390653]:             ],
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "lv_name": "ceph_lv0",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "lv_size": "21470642176",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "name": "ceph_lv0",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "tags": {
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.cluster_name": "ceph",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.crush_device_class": "",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.encrypted": "0",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.osd_id": "0",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.type": "block",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.vdo": "0"
Oct 11 09:22:00 compute-0 determined_newton[390653]:             },
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "type": "block",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "vg_name": "ceph_vg0"
Oct 11 09:22:00 compute-0 determined_newton[390653]:         }
Oct 11 09:22:00 compute-0 determined_newton[390653]:     ],
Oct 11 09:22:00 compute-0 determined_newton[390653]:     "1": [
Oct 11 09:22:00 compute-0 determined_newton[390653]:         {
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "devices": [
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "/dev/loop4"
Oct 11 09:22:00 compute-0 determined_newton[390653]:             ],
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "lv_name": "ceph_lv1",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "lv_size": "21470642176",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "name": "ceph_lv1",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "tags": {
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.cluster_name": "ceph",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.crush_device_class": "",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.encrypted": "0",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.osd_id": "1",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.type": "block",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.vdo": "0"
Oct 11 09:22:00 compute-0 determined_newton[390653]:             },
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "type": "block",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "vg_name": "ceph_vg1"
Oct 11 09:22:00 compute-0 determined_newton[390653]:         }
Oct 11 09:22:00 compute-0 determined_newton[390653]:     ],
Oct 11 09:22:00 compute-0 determined_newton[390653]:     "2": [
Oct 11 09:22:00 compute-0 determined_newton[390653]:         {
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "devices": [
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "/dev/loop5"
Oct 11 09:22:00 compute-0 determined_newton[390653]:             ],
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "lv_name": "ceph_lv2",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "lv_size": "21470642176",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "name": "ceph_lv2",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "tags": {
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.cluster_name": "ceph",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.crush_device_class": "",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.encrypted": "0",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.osd_id": "2",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.type": "block",
Oct 11 09:22:00 compute-0 determined_newton[390653]:                 "ceph.vdo": "0"
Oct 11 09:22:00 compute-0 determined_newton[390653]:             },
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "type": "block",
Oct 11 09:22:00 compute-0 determined_newton[390653]:             "vg_name": "ceph_vg2"
Oct 11 09:22:00 compute-0 determined_newton[390653]:         }
Oct 11 09:22:00 compute-0 determined_newton[390653]:     ]
Oct 11 09:22:00 compute-0 determined_newton[390653]: }
Oct 11 09:22:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:22:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/586867095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.517 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.519 2 DEBUG nova.virt.libvirt.vif [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-125667472',display_name='tempest-TestGettingAddress-server-125667472',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-125667472',id=120,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-knbmzljz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=d703a39a-f502-4bc4-895a-bb87752c83df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.520 2 DEBUG nova.network.os_vif_util [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.520 2 DEBUG nova.network.os_vif_util [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:eb:db,bridge_name='br-int',has_traffic_filtering=True,id=30ec8ddb-e058-428a-a08a-817e1e452938,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ec8ddb-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.521 2 DEBUG nova.virt.libvirt.vif [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-125667472',display_name='tempest-TestGettingAddress-server-125667472',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-125667472',id=120,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-knbmzljz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=d703a39a-f502-4bc4-895a-bb87752c83df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.521 2 DEBUG nova.network.os_vif_util [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.522 2 DEBUG nova.network.os_vif_util [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:2c:ec,bridge_name='br-int',has_traffic_filtering=True,id=a35d553f-941e-4ff2-bc8a-39419cc32ff2,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa35d553f-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.523 2 DEBUG nova.objects.instance [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid d703a39a-f502-4bc4-895a-bb87752c83df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:22:00 compute-0 systemd[1]: libpod-9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734.scope: Deactivated successfully.
Oct 11 09:22:00 compute-0 podman[390619]: 2025-10-11 09:22:00.535440299 +0000 UTC m=+0.975323786 container died 9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.545 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:22:00 compute-0 nova_compute[260935]:   <uuid>d703a39a-f502-4bc4-895a-bb87752c83df</uuid>
Oct 11 09:22:00 compute-0 nova_compute[260935]:   <name>instance-00000078</name>
Oct 11 09:22:00 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:22:00 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:22:00 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <nova:name>tempest-TestGettingAddress-server-125667472</nova:name>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:21:59</nova:creationTime>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:22:00 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:22:00 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:22:00 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:22:00 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:22:00 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:22:00 compute-0 nova_compute[260935]:         <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 09:22:00 compute-0 nova_compute[260935]:         <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:22:00 compute-0 nova_compute[260935]:         <nova:port uuid="30ec8ddb-e058-428a-a08a-817e1e452938">
Oct 11 09:22:00 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:22:00 compute-0 nova_compute[260935]:         <nova:port uuid="a35d553f-941e-4ff2-bc8a-39419cc32ff2">
Oct 11 09:22:00 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe62:2cec" ipVersion="6"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe62:2cec" ipVersion="6"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:22:00 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:22:00 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <system>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <entry name="serial">d703a39a-f502-4bc4-895a-bb87752c83df</entry>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <entry name="uuid">d703a39a-f502-4bc4-895a-bb87752c83df</entry>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     </system>
Oct 11 09:22:00 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:22:00 compute-0 nova_compute[260935]:   <os>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:   </os>
Oct 11 09:22:00 compute-0 nova_compute[260935]:   <features>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:   </features>
Oct 11 09:22:00 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:22:00 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:22:00 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/d703a39a-f502-4bc4-895a-bb87752c83df_disk">
Oct 11 09:22:00 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       </source>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:22:00 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/d703a39a-f502-4bc4-895a-bb87752c83df_disk.config">
Oct 11 09:22:00 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       </source>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:22:00 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:97:eb:db"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <target dev="tap30ec8ddb-e0"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:62:2c:ec"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <target dev="tapa35d553f-94"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df/console.log" append="off"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <video>
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     </video>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:22:00 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:22:00 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:22:00 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:22:00 compute-0 nova_compute[260935]: </domain>
Oct 11 09:22:00 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.546 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Preparing to wait for external event network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.546 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.546 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.546 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.547 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Preparing to wait for external event network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.547 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.547 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.547 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.548 2 DEBUG nova.virt.libvirt.vif [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-125667472',display_name='tempest-TestGettingAddress-server-125667472',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-125667472',id=120,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-knbmzljz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=d703a39a-f502-4bc4-895a-bb87752c83df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.548 2 DEBUG nova.network.os_vif_util [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.549 2 DEBUG nova.network.os_vif_util [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:eb:db,bridge_name='br-int',has_traffic_filtering=True,id=30ec8ddb-e058-428a-a08a-817e1e452938,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ec8ddb-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.549 2 DEBUG os_vif [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:eb:db,bridge_name='br-int',has_traffic_filtering=True,id=30ec8ddb-e058-428a-a08a-817e1e452938,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ec8ddb-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.550 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.550 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.555 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30ec8ddb-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.556 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap30ec8ddb-e0, col_values=(('external_ids', {'iface-id': '30ec8ddb-e058-428a-a08a-817e1e452938', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:eb:db', 'vm-uuid': 'd703a39a-f502-4bc4-895a-bb87752c83df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:00 compute-0 NetworkManager[44960]: <info>  [1760174520.5593] manager: (tap30ec8ddb-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.570 2 INFO os_vif [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:eb:db,bridge_name='br-int',has_traffic_filtering=True,id=30ec8ddb-e058-428a-a08a-817e1e452938,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ec8ddb-e0')
Oct 11 09:22:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cb4efb4ede56d4028982a986af9a410045447d4760c1983441c927331e9fc46-merged.mount: Deactivated successfully.
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.571 2 DEBUG nova.virt.libvirt.vif [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:21:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-125667472',display_name='tempest-TestGettingAddress-server-125667472',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-125667472',id=120,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-knbmzljz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:21:51Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=d703a39a-f502-4bc4-895a-bb87752c83df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.571 2 DEBUG nova.network.os_vif_util [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.572 2 DEBUG nova.network.os_vif_util [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:2c:ec,bridge_name='br-int',has_traffic_filtering=True,id=a35d553f-941e-4ff2-bc8a-39419cc32ff2,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa35d553f-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.572 2 DEBUG os_vif [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2c:ec,bridge_name='br-int',has_traffic_filtering=True,id=a35d553f-941e-4ff2-bc8a-39419cc32ff2,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa35d553f-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.573 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.573 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa35d553f-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa35d553f-94, col_values=(('external_ids', {'iface-id': 'a35d553f-941e-4ff2-bc8a-39419cc32ff2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:2c:ec', 'vm-uuid': 'd703a39a-f502-4bc4-895a-bb87752c83df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:00 compute-0 NetworkManager[44960]: <info>  [1760174520.5798] manager: (tapa35d553f-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/499)
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.589 2 INFO os_vif [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2c:ec,bridge_name='br-int',has_traffic_filtering=True,id=a35d553f-941e-4ff2-bc8a-39419cc32ff2,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa35d553f-94')
Oct 11 09:22:00 compute-0 podman[390619]: 2025-10-11 09:22:00.606617248 +0000 UTC m=+1.046500695 container remove 9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:22:00 compute-0 systemd[1]: libpod-conmon-9a9508abfd4865edac354c1312cb8e776c5bb0c68621265cccb05f6578ecc734.scope: Deactivated successfully.
Oct 11 09:22:00 compute-0 sudo[390511]: pam_unix(sudo:session): session closed for user root
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.658 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.658 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.658 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:97:eb:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.658 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:62:2c:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.659 2 INFO nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Using config drive
Oct 11 09:22:00 compute-0 nova_compute[260935]: 2025-10-11 09:22:00.686 2 DEBUG nova.storage.rbd_utils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image d703a39a-f502-4bc4-895a-bb87752c83df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:22:00 compute-0 sudo[390722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:22:00 compute-0 sudo[390722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:22:00 compute-0 sudo[390722]: pam_unix(sudo:session): session closed for user root
Oct 11 09:22:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2405: 321 pgs: 321 active+clean; 579 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:22:00 compute-0 sudo[390765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:22:00 compute-0 sudo[390765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:22:00 compute-0 sudo[390765]: pam_unix(sudo:session): session closed for user root
Oct 11 09:22:00 compute-0 sudo[390790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:22:00 compute-0 sudo[390790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:22:00 compute-0 sudo[390790]: pam_unix(sudo:session): session closed for user root
Oct 11 09:22:00 compute-0 ovn_controller[152945]: 2025-10-11T09:22:00Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:29:57:ba 10.100.0.19
Oct 11 09:22:00 compute-0 ovn_controller[152945]: 2025-10-11T09:22:00Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:29:57:ba 10.100.0.19
Oct 11 09:22:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/586867095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:22:01 compute-0 sudo[390815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:22:01 compute-0 sudo[390815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:22:01 compute-0 nova_compute[260935]: 2025-10-11 09:22:01.250 2 INFO nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Creating config drive at /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df/disk.config
Oct 11 09:22:01 compute-0 nova_compute[260935]: 2025-10-11 09:22:01.259 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyw7s9mne execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:22:01 compute-0 nova_compute[260935]: 2025-10-11 09:22:01.415 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyw7s9mne" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:22:01 compute-0 nova_compute[260935]: 2025-10-11 09:22:01.437 2 DEBUG nova.storage.rbd_utils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image d703a39a-f502-4bc4-895a-bb87752c83df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:22:01 compute-0 nova_compute[260935]: 2025-10-11 09:22:01.440 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df/disk.config d703a39a-f502-4bc4-895a-bb87752c83df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:22:01 compute-0 podman[390901]: 2025-10-11 09:22:01.525006755 +0000 UTC m=+0.049373444 container create cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_blackburn, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 09:22:01 compute-0 nova_compute[260935]: 2025-10-11 09:22:01.543 2 DEBUG nova.network.neutron [req-289eff2a-4d7c-4008-b49b-96bfc9312e11 req-dbc4a8c5-e9b3-454a-8877-66b84e53996c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updated VIF entry in instance network info cache for port a35d553f-941e-4ff2-bc8a-39419cc32ff2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:22:01 compute-0 nova_compute[260935]: 2025-10-11 09:22:01.544 2 DEBUG nova.network.neutron [req-289eff2a-4d7c-4008-b49b-96bfc9312e11 req-dbc4a8c5-e9b3-454a-8877-66b84e53996c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updating instance_info_cache with network_info: [{"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:22:01 compute-0 nova_compute[260935]: 2025-10-11 09:22:01.566 2 DEBUG oslo_concurrency.lockutils [req-289eff2a-4d7c-4008-b49b-96bfc9312e11 req-dbc4a8c5-e9b3-454a-8877-66b84e53996c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:22:01 compute-0 systemd[1]: Started libpod-conmon-cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04.scope.
Oct 11 09:22:01 compute-0 nova_compute[260935]: 2025-10-11 09:22:01.590 2 DEBUG oslo_concurrency.processutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df/disk.config d703a39a-f502-4bc4-895a-bb87752c83df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:22:01 compute-0 nova_compute[260935]: 2025-10-11 09:22:01.590 2 INFO nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Deleting local config drive /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df/disk.config because it was imported into RBD.
Oct 11 09:22:01 compute-0 podman[390901]: 2025-10-11 09:22:01.503104477 +0000 UTC m=+0.027471186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:22:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:22:01 compute-0 podman[390901]: 2025-10-11 09:22:01.628321911 +0000 UTC m=+0.152688600 container init cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 09:22:01 compute-0 podman[390901]: 2025-10-11 09:22:01.64353605 +0000 UTC m=+0.167902779 container start cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_blackburn, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:22:01 compute-0 podman[390901]: 2025-10-11 09:22:01.647903354 +0000 UTC m=+0.172270073 container attach cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_blackburn, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 09:22:01 compute-0 flamboyant_blackburn[390935]: 167 167
Oct 11 09:22:01 compute-0 systemd[1]: libpod-cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04.scope: Deactivated successfully.
Oct 11 09:22:01 compute-0 podman[390901]: 2025-10-11 09:22:01.655897609 +0000 UTC m=+0.180264328 container died cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 09:22:01 compute-0 NetworkManager[44960]: <info>  [1760174521.6818] manager: (tap30ec8ddb-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/500)
Oct 11 09:22:01 compute-0 kernel: tap30ec8ddb-e0: entered promiscuous mode
Oct 11 09:22:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-de24776d3d9548be8cd72115752df7cda374cf25f6b95dafcd7d0846a13fa76d-merged.mount: Deactivated successfully.
Oct 11 09:22:01 compute-0 NetworkManager[44960]: <info>  [1760174521.7034] manager: (tapa35d553f-94): new Tun device (/org/freedesktop/NetworkManager/Devices/501)
Oct 11 09:22:01 compute-0 ovn_controller[152945]: 2025-10-11T09:22:01Z|01229|binding|INFO|Claiming lport 30ec8ddb-e058-428a-a08a-817e1e452938 for this chassis.
Oct 11 09:22:01 compute-0 ovn_controller[152945]: 2025-10-11T09:22:01Z|01230|binding|INFO|30ec8ddb-e058-428a-a08a-817e1e452938: Claiming fa:16:3e:97:eb:db 10.100.0.6
Oct 11 09:22:01 compute-0 nova_compute[260935]: 2025-10-11 09:22:01.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:01 compute-0 podman[390901]: 2025-10-11 09:22:01.714453642 +0000 UTC m=+0.238820341 container remove cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.728 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:eb:db 10.100.0.6'], port_security=['fa:16:3e:97:eb:db 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd703a39a-f502-4bc4-895a-bb87752c83df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c29d60dd-64a4-4eb0-a5b7-799e6b286f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6a655d-e3d9-4e53-b2d8-3195591acfb2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=30ec8ddb-e058-428a-a08a-817e1e452938) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:22:01 compute-0 systemd[1]: libpod-conmon-cfd7f8b4a57e2da5cd957ab6f9f70a3f7079e295fbcf4a985b7e4e5c77bfee04.scope: Deactivated successfully.
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.735 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 30ec8ddb-e058-428a-a08a-817e1e452938 in datapath 02690ac5-d004-4c7d-b780-e5fed29e0aa7 bound to our chassis
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.737 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02690ac5-d004-4c7d-b780-e5fed29e0aa7
Oct 11 09:22:01 compute-0 systemd-udevd[390966]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:22:01 compute-0 systemd-udevd[390968]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:22:01 compute-0 kernel: tapa35d553f-94: entered promiscuous mode
Oct 11 09:22:01 compute-0 ovn_controller[152945]: 2025-10-11T09:22:01Z|01231|binding|INFO|Setting lport 30ec8ddb-e058-428a-a08a-817e1e452938 ovn-installed in OVS
Oct 11 09:22:01 compute-0 ovn_controller[152945]: 2025-10-11T09:22:01Z|01232|binding|INFO|Setting lport 30ec8ddb-e058-428a-a08a-817e1e452938 up in Southbound
Oct 11 09:22:01 compute-0 nova_compute[260935]: 2025-10-11 09:22:01.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:01 compute-0 ovn_controller[152945]: 2025-10-11T09:22:01Z|01233|if_status|INFO|Dropped 3 log messages in last 41 seconds (most recently, 41 seconds ago) due to excessive rate
Oct 11 09:22:01 compute-0 ovn_controller[152945]: 2025-10-11T09:22:01Z|01234|if_status|INFO|Not updating pb chassis for a35d553f-941e-4ff2-bc8a-39419cc32ff2 now as sb is readonly
Oct 11 09:22:01 compute-0 systemd-machined[215705]: New machine qemu-143-instance-00000078.
Oct 11 09:22:01 compute-0 ovn_controller[152945]: 2025-10-11T09:22:01Z|01235|binding|INFO|Claiming lport a35d553f-941e-4ff2-bc8a-39419cc32ff2 for this chassis.
Oct 11 09:22:01 compute-0 ovn_controller[152945]: 2025-10-11T09:22:01Z|01236|binding|INFO|a35d553f-941e-4ff2-bc8a-39419cc32ff2: Claiming fa:16:3e:62:2c:ec 2001:db8:0:1:f816:3eff:fe62:2cec 2001:db8::f816:3eff:fe62:2cec
Oct 11 09:22:01 compute-0 NetworkManager[44960]: <info>  [1760174521.7618] device (tapa35d553f-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:22:01 compute-0 NetworkManager[44960]: <info>  [1760174521.7634] device (tapa35d553f-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:22:01 compute-0 NetworkManager[44960]: <info>  [1760174521.7646] device (tap30ec8ddb-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:22:01 compute-0 NetworkManager[44960]: <info>  [1760174521.7665] device (tap30ec8ddb-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.766 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:2c:ec 2001:db8:0:1:f816:3eff:fe62:2cec 2001:db8::f816:3eff:fe62:2cec'], port_security=['fa:16:3e:62:2c:ec 2001:db8:0:1:f816:3eff:fe62:2cec 2001:db8::f816:3eff:fe62:2cec'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe62:2cec/64 2001:db8::f816:3eff:fe62:2cec/64', 'neutron:device_id': 'd703a39a-f502-4bc4-895a-bb87752c83df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c29d60dd-64a4-4eb0-a5b7-799e6b286f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bb88753-f635-4d32-a71c-7301c0eb5e38, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a35d553f-941e-4ff2-bc8a-39419cc32ff2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:22:01 compute-0 systemd[1]: Started Virtual Machine qemu-143-instance-00000078.
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.773 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[110c1904-4545-439f-92b9-fed8a8e56ac9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:01 compute-0 ovn_controller[152945]: 2025-10-11T09:22:01Z|01237|binding|INFO|Setting lport a35d553f-941e-4ff2-bc8a-39419cc32ff2 ovn-installed in OVS
Oct 11 09:22:01 compute-0 ovn_controller[152945]: 2025-10-11T09:22:01Z|01238|binding|INFO|Setting lport a35d553f-941e-4ff2-bc8a-39419cc32ff2 up in Southbound
Oct 11 09:22:01 compute-0 nova_compute[260935]: 2025-10-11 09:22:01.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.802 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ff24a203-60ae-44a4-9365-9a0d77f7372d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.809 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[da42ca27-93d7-46d6-ae04-033fc71b7dab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.849 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[26857fe2-1279-4723-9c3c-08cb438f6f0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.878 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f8478b1-0f0d-4d1c-bd52-592a8e04001d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02690ac5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:97:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632554, 'reachable_time': 37223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390982, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.903 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[969cc1f3-1263-43ef-9195-62ab18ca5504]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap02690ac5-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632567, 'tstamp': 632567}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390984, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap02690ac5-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632571, 'tstamp': 632571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390984, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.911 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02690ac5-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:01 compute-0 nova_compute[260935]: 2025-10-11 09:22:01.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:01 compute-0 nova_compute[260935]: 2025-10-11 09:22:01.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.915 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02690ac5-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.915 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.915 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02690ac5-d0, col_values=(('external_ids', {'iface-id': 'bc2597de-285d-49b2-9e10-c93c87e43828'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.915 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.917 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a35d553f-941e-4ff2-bc8a-39419cc32ff2 in datapath e87b272f-66b8-494e-ab80-c2ee66df15a2 unbound from our chassis
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.918 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e87b272f-66b8-494e-ab80-c2ee66df15a2
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.945 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9568afbe-39bd-47cb-8a2f-316625883721]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:01 compute-0 nova_compute[260935]: 2025-10-11 09:22:01.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.989 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9529afb7-cb33-4863-8f97-003bbdabc3f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:01.994 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[84929a72-8425-4bb0-b076-acb5fe7b9ecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:02 compute-0 ceph-mon[74313]: pgmap v2405: 321 pgs: 321 active+clean; 579 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:22:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:02.033 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[50092168-032d-4070-9d33-9430cbe4ef64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:02 compute-0 podman[390991]: 2025-10-11 09:22:02.052689457 +0000 UTC m=+0.084019042 container create 7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kapitsa, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 09:22:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:02.067 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9961a8bc-0b7f-442c-a86d-b89f944ad335]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape87b272f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:ac:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 2146, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 2146, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632666, 'reachable_time': 19970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391008, 'error': None, 'target': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:02.097 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2845e5-8102-4bfb-9bf2-173c6309e326]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape87b272f-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632680, 'tstamp': 632680}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391011, 'error': None, 'target': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:02.099 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape87b272f-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:02 compute-0 podman[390991]: 2025-10-11 09:22:02.024486441 +0000 UTC m=+0.055816126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:22:02 compute-0 nova_compute[260935]: 2025-10-11 09:22:02.137 2 DEBUG nova.compute.manager [req-1e48f9e3-bcf2-4e7b-8859-ac5a8e922aca req-ca271afc-ed5e-477b-9137-303eafbc9e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:02 compute-0 nova_compute[260935]: 2025-10-11 09:22:02.138 2 DEBUG oslo_concurrency.lockutils [req-1e48f9e3-bcf2-4e7b-8859-ac5a8e922aca req-ca271afc-ed5e-477b-9137-303eafbc9e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:02 compute-0 nova_compute[260935]: 2025-10-11 09:22:02.139 2 DEBUG oslo_concurrency.lockutils [req-1e48f9e3-bcf2-4e7b-8859-ac5a8e922aca req-ca271afc-ed5e-477b-9137-303eafbc9e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:02 compute-0 nova_compute[260935]: 2025-10-11 09:22:02.140 2 DEBUG oslo_concurrency.lockutils [req-1e48f9e3-bcf2-4e7b-8859-ac5a8e922aca req-ca271afc-ed5e-477b-9137-303eafbc9e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:02 compute-0 nova_compute[260935]: 2025-10-11 09:22:02.141 2 DEBUG nova.compute.manager [req-1e48f9e3-bcf2-4e7b-8859-ac5a8e922aca req-ca271afc-ed5e-477b-9137-303eafbc9e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Processing event network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:22:02 compute-0 systemd[1]: Started libpod-conmon-7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a.scope.
Oct 11 09:22:02 compute-0 nova_compute[260935]: 2025-10-11 09:22:02.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:02.148 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape87b272f-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:02.148 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:22:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:02.148 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape87b272f-60, col_values=(('external_ids', {'iface-id': 'af33797e-d57d-45c8-92d7-86ea03fdf1ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:02 compute-0 nova_compute[260935]: 2025-10-11 09:22:02.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:02.149 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:22:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:22:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb155d8c716de36b1c9c04c308a7797c575c774d388fc5a32add1459f9652b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:22:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb155d8c716de36b1c9c04c308a7797c575c774d388fc5a32add1459f9652b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:22:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb155d8c716de36b1c9c04c308a7797c575c774d388fc5a32add1459f9652b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:22:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb155d8c716de36b1c9c04c308a7797c575c774d388fc5a32add1459f9652b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:22:02 compute-0 podman[390991]: 2025-10-11 09:22:02.216468789 +0000 UTC m=+0.247798394 container init 7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kapitsa, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:22:02 compute-0 podman[390991]: 2025-10-11 09:22:02.234711694 +0000 UTC m=+0.266041289 container start 7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:22:02 compute-0 podman[390991]: 2025-10-11 09:22:02.239482259 +0000 UTC m=+0.270811884 container attach 7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:22:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2406: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 173 op/s
Oct 11 09:22:02 compute-0 nova_compute[260935]: 2025-10-11 09:22:02.921 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174522.9214606, d703a39a-f502-4bc4-895a-bb87752c83df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:22:02 compute-0 nova_compute[260935]: 2025-10-11 09:22:02.922 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] VM Started (Lifecycle Event)
Oct 11 09:22:02 compute-0 nova_compute[260935]: 2025-10-11 09:22:02.950 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:22:02 compute-0 nova_compute[260935]: 2025-10-11 09:22:02.960 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174522.9253392, d703a39a-f502-4bc4-895a-bb87752c83df => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:22:02 compute-0 nova_compute[260935]: 2025-10-11 09:22:02.960 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] VM Paused (Lifecycle Event)
Oct 11 09:22:02 compute-0 nova_compute[260935]: 2025-10-11 09:22:02.978 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:22:02 compute-0 nova_compute[260935]: 2025-10-11 09:22:02.982 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:22:03 compute-0 nova_compute[260935]: 2025-10-11 09:22:03.006 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:22:03 compute-0 ceph-mon[74313]: pgmap v2406: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 173 op/s
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]: {
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:         "osd_id": 2,
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:         "type": "bluestore"
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:     },
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:         "osd_id": 0,
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:         "type": "bluestore"
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:     },
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:         "osd_id": 1,
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:         "type": "bluestore"
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]:     }
Oct 11 09:22:03 compute-0 gracious_kapitsa[391014]: }
Oct 11 09:22:03 compute-0 systemd[1]: libpod-7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a.scope: Deactivated successfully.
Oct 11 09:22:03 compute-0 podman[390991]: 2025-10-11 09:22:03.398106925 +0000 UTC m=+1.429436560 container died 7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:22:03 compute-0 systemd[1]: libpod-7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a.scope: Consumed 1.145s CPU time.
Oct 11 09:22:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-feb155d8c716de36b1c9c04c308a7797c575c774d388fc5a32add1459f9652b0-merged.mount: Deactivated successfully.
Oct 11 09:22:03 compute-0 podman[390991]: 2025-10-11 09:22:03.47056023 +0000 UTC m=+1.501889825 container remove 7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct 11 09:22:03 compute-0 systemd[1]: libpod-conmon-7e2275b3d3bc322ce807aec87c3c4c30f76122b8622f580055a49d5a91d0200a.scope: Deactivated successfully.
Oct 11 09:22:03 compute-0 sudo[390815]: pam_unix(sudo:session): session closed for user root
Oct 11 09:22:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:22:03 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:22:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:22:03 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:22:03 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev cd3acdd0-36c7-4010-a426-4b22c794fcbb does not exist
Oct 11 09:22:03 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 1df60877-1629-45e6-9b6d-3b626773b40a does not exist
Oct 11 09:22:03 compute-0 sudo[391104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:22:03 compute-0 sudo[391104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:22:03 compute-0 sudo[391104]: pam_unix(sudo:session): session closed for user root
Oct 11 09:22:03 compute-0 sudo[391129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:22:03 compute-0 sudo[391129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:22:03 compute-0 sudo[391129]: pam_unix(sudo:session): session closed for user root
Oct 11 09:22:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.227 2 DEBUG nova.compute.manager [req-7448c91f-5996-4335-884d-0c7d3a39483f req-c9542f8f-136f-4aad-96dd-78bb478a50cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.229 2 DEBUG oslo_concurrency.lockutils [req-7448c91f-5996-4335-884d-0c7d3a39483f req-c9542f8f-136f-4aad-96dd-78bb478a50cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.229 2 DEBUG oslo_concurrency.lockutils [req-7448c91f-5996-4335-884d-0c7d3a39483f req-c9542f8f-136f-4aad-96dd-78bb478a50cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.230 2 DEBUG oslo_concurrency.lockutils [req-7448c91f-5996-4335-884d-0c7d3a39483f req-c9542f8f-136f-4aad-96dd-78bb478a50cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.230 2 DEBUG nova.compute.manager [req-7448c91f-5996-4335-884d-0c7d3a39483f req-c9542f8f-136f-4aad-96dd-78bb478a50cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] No event matching network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 in dict_keys([('network-vif-plugged', '30ec8ddb-e058-428a-a08a-817e1e452938')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.232 2 WARNING nova.compute.manager [req-7448c91f-5996-4335-884d-0c7d3a39483f req-c9542f8f-136f-4aad-96dd-78bb478a50cd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received unexpected event network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 for instance with vm_state building and task_state spawning.
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:22:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.588 2 DEBUG nova.compute.manager [req-2b195128-48cd-4c84-a45a-9f5aa81155b6 req-a8c21f2e-3df0-43bc-b04b-dba39f5f574e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.588 2 DEBUG oslo_concurrency.lockutils [req-2b195128-48cd-4c84-a45a-9f5aa81155b6 req-a8c21f2e-3df0-43bc-b04b-dba39f5f574e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.588 2 DEBUG oslo_concurrency.lockutils [req-2b195128-48cd-4c84-a45a-9f5aa81155b6 req-a8c21f2e-3df0-43bc-b04b-dba39f5f574e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.589 2 DEBUG oslo_concurrency.lockutils [req-2b195128-48cd-4c84-a45a-9f5aa81155b6 req-a8c21f2e-3df0-43bc-b04b-dba39f5f574e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.589 2 DEBUG nova.compute.manager [req-2b195128-48cd-4c84-a45a-9f5aa81155b6 req-a8c21f2e-3df0-43bc-b04b-dba39f5f574e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Processing event network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.589 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.599 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174524.5987594, d703a39a-f502-4bc4-895a-bb87752c83df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.599 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] VM Resumed (Lifecycle Event)
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.601 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.604 2 INFO nova.virt.libvirt.driver [-] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Instance spawned successfully.
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.605 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.634 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.640 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.641 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.641 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.642 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.642 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.643 2 DEBUG nova.virt.libvirt.driver [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.647 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.700 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.732 2 INFO nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Took 12.99 seconds to spawn the instance on the hypervisor.
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.733 2 DEBUG nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:22:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2407: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.5 MiB/s wr, 85 op/s
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.824 2 INFO nova.compute.manager [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Took 14.13 seconds to build instance.
Oct 11 09:22:04 compute-0 nova_compute[260935]: 2025-10-11 09:22:04.847 2 DEBUG oslo_concurrency.lockutils [None req-34b9f7bf-00fa-46ca-b7f5-72da6044d656 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005253101191638992 of space, bias 1.0, pg target 1.5759303574916974 quantized to 32 (current 32)
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:22:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:22:05 compute-0 ceph-mon[74313]: pgmap v2407: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.5 MiB/s wr, 85 op/s
Oct 11 09:22:05 compute-0 nova_compute[260935]: 2025-10-11 09:22:05.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:06 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 11 09:22:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:22:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.3 total, 600.0 interval
                                           Cumulative writes: 29K writes, 117K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.03 MB/s
                                           Cumulative WAL: 29K writes, 10K syncs, 2.81 writes per sync, written: 0.11 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2981 writes, 12K keys, 2981 commit groups, 1.0 writes per commit group, ingest: 16.28 MB, 0.03 MB/s
                                           Interval WAL: 2981 writes, 1191 syncs, 2.50 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 09:22:06 compute-0 nova_compute[260935]: 2025-10-11 09:22:06.723 2 DEBUG nova.compute.manager [req-24af4119-5c29-4d4d-b5fe-e2aa950fbd8b req-b3df8e9c-997e-491d-acac-a27b9198672f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:06 compute-0 nova_compute[260935]: 2025-10-11 09:22:06.724 2 DEBUG oslo_concurrency.lockutils [req-24af4119-5c29-4d4d-b5fe-e2aa950fbd8b req-b3df8e9c-997e-491d-acac-a27b9198672f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:06 compute-0 nova_compute[260935]: 2025-10-11 09:22:06.724 2 DEBUG oslo_concurrency.lockutils [req-24af4119-5c29-4d4d-b5fe-e2aa950fbd8b req-b3df8e9c-997e-491d-acac-a27b9198672f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:06 compute-0 nova_compute[260935]: 2025-10-11 09:22:06.724 2 DEBUG oslo_concurrency.lockutils [req-24af4119-5c29-4d4d-b5fe-e2aa950fbd8b req-b3df8e9c-997e-491d-acac-a27b9198672f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:06 compute-0 nova_compute[260935]: 2025-10-11 09:22:06.725 2 DEBUG nova.compute.manager [req-24af4119-5c29-4d4d-b5fe-e2aa950fbd8b req-b3df8e9c-997e-491d-acac-a27b9198672f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] No waiting events found dispatching network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:22:06 compute-0 nova_compute[260935]: 2025-10-11 09:22:06.725 2 WARNING nova.compute.manager [req-24af4119-5c29-4d4d-b5fe-e2aa950fbd8b req-b3df8e9c-997e-491d-acac-a27b9198672f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received unexpected event network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 for instance with vm_state active and task_state None.
Oct 11 09:22:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2408: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.5 MiB/s wr, 85 op/s
Oct 11 09:22:07 compute-0 nova_compute[260935]: 2025-10-11 09:22:07.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:07 compute-0 ceph-mon[74313]: pgmap v2408: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.5 MiB/s wr, 85 op/s
Oct 11 09:22:07 compute-0 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 09:22:08 compute-0 nova_compute[260935]: 2025-10-11 09:22:08.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2409: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 150 op/s
Oct 11 09:22:08 compute-0 podman[391154]: 2025-10-11 09:22:08.813713158 +0000 UTC m=+0.102553896 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:22:08 compute-0 nova_compute[260935]: 2025-10-11 09:22:08.847 2 DEBUG nova.compute.manager [req-d7bf0e0e-bb5a-43e8-9f2a-d0cfac71d02c req-ae8b6e8b-881f-4e68-9a9e-246f159ceb6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-changed-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:08 compute-0 nova_compute[260935]: 2025-10-11 09:22:08.847 2 DEBUG nova.compute.manager [req-d7bf0e0e-bb5a-43e8-9f2a-d0cfac71d02c req-ae8b6e8b-881f-4e68-9a9e-246f159ceb6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing instance network info cache due to event network-changed-300c071c-a312-4a9b-bd7a-1b16b9a35ae6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:22:08 compute-0 nova_compute[260935]: 2025-10-11 09:22:08.848 2 DEBUG oslo_concurrency.lockutils [req-d7bf0e0e-bb5a-43e8-9f2a-d0cfac71d02c req-ae8b6e8b-881f-4e68-9a9e-246f159ceb6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:22:08 compute-0 nova_compute[260935]: 2025-10-11 09:22:08.851 2 DEBUG oslo_concurrency.lockutils [req-d7bf0e0e-bb5a-43e8-9f2a-d0cfac71d02c req-ae8b6e8b-881f-4e68-9a9e-246f159ceb6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:22:08 compute-0 nova_compute[260935]: 2025-10-11 09:22:08.851 2 DEBUG nova.network.neutron [req-d7bf0e0e-bb5a-43e8-9f2a-d0cfac71d02c req-ae8b6e8b-881f-4e68-9a9e-246f159ceb6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing network info cache for port 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:22:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:22:09 compute-0 ceph-mon[74313]: pgmap v2409: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 150 op/s
Oct 11 09:22:10 compute-0 nova_compute[260935]: 2025-10-11 09:22:10.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2410: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Oct 11 09:22:10 compute-0 nova_compute[260935]: 2025-10-11 09:22:10.925 2 DEBUG nova.compute.manager [req-57f76c4f-9e18-4f10-825e-711e1f444756 req-0c87906e-ae7e-48c2-b2b4-33c597b3526e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-changed-30ec8ddb-e058-428a-a08a-817e1e452938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:10 compute-0 nova_compute[260935]: 2025-10-11 09:22:10.926 2 DEBUG nova.compute.manager [req-57f76c4f-9e18-4f10-825e-711e1f444756 req-0c87906e-ae7e-48c2-b2b4-33c597b3526e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Refreshing instance network info cache due to event network-changed-30ec8ddb-e058-428a-a08a-817e1e452938. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:22:10 compute-0 nova_compute[260935]: 2025-10-11 09:22:10.927 2 DEBUG oslo_concurrency.lockutils [req-57f76c4f-9e18-4f10-825e-711e1f444756 req-0c87906e-ae7e-48c2-b2b4-33c597b3526e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:22:10 compute-0 nova_compute[260935]: 2025-10-11 09:22:10.927 2 DEBUG oslo_concurrency.lockutils [req-57f76c4f-9e18-4f10-825e-711e1f444756 req-0c87906e-ae7e-48c2-b2b4-33c597b3526e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:22:10 compute-0 nova_compute[260935]: 2025-10-11 09:22:10.927 2 DEBUG nova.network.neutron [req-57f76c4f-9e18-4f10-825e-711e1f444756 req-0c87906e-ae7e-48c2-b2b4-33c597b3526e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Refreshing network info cache for port 30ec8ddb-e058-428a-a08a-817e1e452938 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:22:11 compute-0 nova_compute[260935]: 2025-10-11 09:22:11.187 2 DEBUG nova.network.neutron [req-d7bf0e0e-bb5a-43e8-9f2a-d0cfac71d02c req-ae8b6e8b-881f-4e68-9a9e-246f159ceb6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updated VIF entry in instance network info cache for port 300c071c-a312-4a9b-bd7a-1b16b9a35ae6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:22:11 compute-0 nova_compute[260935]: 2025-10-11 09:22:11.188 2 DEBUG nova.network.neutron [req-d7bf0e0e-bb5a-43e8-9f2a-d0cfac71d02c req-ae8b6e8b-881f-4e68-9a9e-246f159ceb6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:22:11 compute-0 ceph-mon[74313]: pgmap v2410: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Oct 11 09:22:11 compute-0 nova_compute[260935]: 2025-10-11 09:22:11.928 2 DEBUG oslo_concurrency.lockutils [req-d7bf0e0e-bb5a-43e8-9f2a-d0cfac71d02c req-ae8b6e8b-881f-4e68-9a9e-246f159ceb6c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:22:12 compute-0 nova_compute[260935]: 2025-10-11 09:22:12.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:12 compute-0 nova_compute[260935]: 2025-10-11 09:22:12.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2411: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct 11 09:22:13 compute-0 ceph-mon[74313]: pgmap v2411: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct 11 09:22:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:22:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2412: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 66 op/s
Oct 11 09:22:14 compute-0 podman[391173]: 2025-10-11 09:22:14.834352894 +0000 UTC m=+0.123743553 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 09:22:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:15.219 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:15.220 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:15.221 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:15 compute-0 nova_compute[260935]: 2025-10-11 09:22:15.408 2 DEBUG nova.network.neutron [req-57f76c4f-9e18-4f10-825e-711e1f444756 req-0c87906e-ae7e-48c2-b2b4-33c597b3526e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updated VIF entry in instance network info cache for port 30ec8ddb-e058-428a-a08a-817e1e452938. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:22:15 compute-0 nova_compute[260935]: 2025-10-11 09:22:15.410 2 DEBUG nova.network.neutron [req-57f76c4f-9e18-4f10-825e-711e1f444756 req-0c87906e-ae7e-48c2-b2b4-33c597b3526e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updating instance_info_cache with network_info: [{"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:22:15 compute-0 nova_compute[260935]: 2025-10-11 09:22:15.480 2 DEBUG oslo_concurrency.lockutils [req-57f76c4f-9e18-4f10-825e-711e1f444756 req-0c87906e-ae7e-48c2-b2b4-33c597b3526e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:22:15 compute-0 nova_compute[260935]: 2025-10-11 09:22:15.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:15 compute-0 ceph-mon[74313]: pgmap v2412: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 66 op/s
Oct 11 09:22:16 compute-0 ovn_controller[152945]: 2025-10-11T09:22:16Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:eb:db 10.100.0.6
Oct 11 09:22:16 compute-0 ovn_controller[152945]: 2025-10-11T09:22:16Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:eb:db 10.100.0.6
Oct 11 09:22:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2413: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 66 op/s
Oct 11 09:22:17 compute-0 nova_compute[260935]: 2025-10-11 09:22:17.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:17 compute-0 sshd-session[391192]: Invalid user fabian from 152.32.213.170 port 37482
Oct 11 09:22:17 compute-0 sshd-session[391192]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:22:17 compute-0 sshd-session[391192]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:22:17 compute-0 ceph-mon[74313]: pgmap v2413: 321 pgs: 321 active+clean; 612 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 66 op/s
Oct 11 09:22:18 compute-0 sshd-session[391192]: Failed password for invalid user fabian from 152.32.213.170 port 37482 ssh2
Oct 11 09:22:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2414: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 131 op/s
Oct 11 09:22:18 compute-0 podman[391194]: 2025-10-11 09:22:18.845744408 +0000 UTC m=+0.133339204 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:22:18 compute-0 podman[391195]: 2025-10-11 09:22:18.867720658 +0000 UTC m=+0.154566703 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct 11 09:22:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:22:19 compute-0 sshd-session[391192]: Received disconnect from 152.32.213.170 port 37482:11: Bye Bye [preauth]
Oct 11 09:22:19 compute-0 sshd-session[391192]: Disconnected from invalid user fabian 152.32.213.170 port 37482 [preauth]
Oct 11 09:22:20 compute-0 ceph-mon[74313]: pgmap v2414: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 131 op/s
Oct 11 09:22:20 compute-0 nova_compute[260935]: 2025-10-11 09:22:20.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2415: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 09:22:21 compute-0 ceph-mon[74313]: pgmap v2415: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 09:22:21 compute-0 nova_compute[260935]: 2025-10-11 09:22:21.340 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:22:22 compute-0 nova_compute[260935]: 2025-10-11 09:22:22.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2416: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 09:22:23 compute-0 ceph-mon[74313]: pgmap v2416: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 09:22:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:22:24 compute-0 sshd-session[391238]: Invalid user gera from 155.4.244.179 port 7055
Oct 11 09:22:24 compute-0 sshd-session[391238]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:22:24 compute-0 sshd-session[391238]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:22:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2417: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 09:22:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:22:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:22:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:22:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:22:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:22:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:22:25 compute-0 nova_compute[260935]: 2025-10-11 09:22:25.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:25 compute-0 sshd-session[391238]: Failed password for invalid user gera from 155.4.244.179 port 7055 ssh2
Oct 11 09:22:26 compute-0 ceph-mon[74313]: pgmap v2417: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 09:22:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:22:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/300202442' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:22:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:22:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/300202442' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:22:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2418: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 09:22:27 compute-0 nova_compute[260935]: 2025-10-11 09:22:27.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/300202442' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:22:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/300202442' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:22:27 compute-0 ceph-mon[74313]: pgmap v2418: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 09:22:27 compute-0 sshd-session[391238]: Received disconnect from 155.4.244.179 port 7055:11: Bye Bye [preauth]
Oct 11 09:22:27 compute-0 sshd-session[391238]: Disconnected from invalid user gera 155.4.244.179 port 7055 [preauth]
Oct 11 09:22:28 compute-0 sshd-session[391240]: Invalid user ubuntu from 43.251.161.75 port 59936
Oct 11 09:22:28 compute-0 sshd-session[391240]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:22:28 compute-0 sshd-session[391240]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.251.161.75
Oct 11 09:22:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2419: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Oct 11 09:22:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:22:29 compute-0 sshd-session[391242]: Invalid user admin from 165.232.82.252 port 35770
Oct 11 09:22:29 compute-0 sshd-session[391242]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:22:29 compute-0 sshd-session[391242]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:22:29 compute-0 sshd-session[391240]: Failed password for invalid user ubuntu from 43.251.161.75 port 59936 ssh2
Oct 11 09:22:30 compute-0 ceph-mon[74313]: pgmap v2419: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Oct 11 09:22:30 compute-0 sshd-session[391240]: Received disconnect from 43.251.161.75 port 59936:11:  [preauth]
Oct 11 09:22:30 compute-0 sshd-session[391240]: Disconnected from invalid user ubuntu 43.251.161.75 port 59936 [preauth]
Oct 11 09:22:30 compute-0 nova_compute[260935]: 2025-10-11 09:22:30.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:30 compute-0 sshd-session[391242]: Failed password for invalid user admin from 165.232.82.252 port 35770 ssh2
Oct 11 09:22:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2420: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 18 KiB/s wr, 1 op/s
Oct 11 09:22:30 compute-0 sshd-session[391242]: Connection closed by invalid user admin 165.232.82.252 port 35770 [preauth]
Oct 11 09:22:31 compute-0 ceph-mon[74313]: pgmap v2420: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 18 KiB/s wr, 1 op/s
Oct 11 09:22:31 compute-0 nova_compute[260935]: 2025-10-11 09:22:31.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:22:31 compute-0 nova_compute[260935]: 2025-10-11 09:22:31.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:22:32 compute-0 nova_compute[260935]: 2025-10-11 09:22:32.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:32 compute-0 nova_compute[260935]: 2025-10-11 09:22:32.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:22:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2421: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 22 KiB/s wr, 2 op/s
Oct 11 09:22:33 compute-0 ceph-mon[74313]: pgmap v2421: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 22 KiB/s wr, 2 op/s
Oct 11 09:22:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:22:34 compute-0 nova_compute[260935]: 2025-10-11 09:22:34.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:22:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2422: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 19 KiB/s wr, 2 op/s
Oct 11 09:22:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.279 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.282 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.711 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.712 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.713 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.713 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.714 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.715 2 INFO nova.compute.manager [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Terminating instance
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.717 2 DEBUG nova.compute.manager [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.747 2 DEBUG nova.compute.manager [req-4ef6af49-ab96-4aeb-b173-f1db2bef0ae1 req-5616ddc8-c1ab-403f-978d-928065fd1cb1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-changed-30ec8ddb-e058-428a-a08a-817e1e452938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.748 2 DEBUG nova.compute.manager [req-4ef6af49-ab96-4aeb-b173-f1db2bef0ae1 req-5616ddc8-c1ab-403f-978d-928065fd1cb1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Refreshing instance network info cache due to event network-changed-30ec8ddb-e058-428a-a08a-817e1e452938. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.748 2 DEBUG oslo_concurrency.lockutils [req-4ef6af49-ab96-4aeb-b173-f1db2bef0ae1 req-5616ddc8-c1ab-403f-978d-928065fd1cb1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.749 2 DEBUG oslo_concurrency.lockutils [req-4ef6af49-ab96-4aeb-b173-f1db2bef0ae1 req-5616ddc8-c1ab-403f-978d-928065fd1cb1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.750 2 DEBUG nova.network.neutron [req-4ef6af49-ab96-4aeb-b173-f1db2bef0ae1 req-5616ddc8-c1ab-403f-978d-928065fd1cb1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Refreshing network info cache for port 30ec8ddb-e058-428a-a08a-817e1e452938 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:22:35 compute-0 kernel: tap30ec8ddb-e0 (unregistering): left promiscuous mode
Oct 11 09:22:35 compute-0 NetworkManager[44960]: <info>  [1760174555.8131] device (tap30ec8ddb-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:22:35 compute-0 ovn_controller[152945]: 2025-10-11T09:22:35Z|01239|binding|INFO|Releasing lport 30ec8ddb-e058-428a-a08a-817e1e452938 from this chassis (sb_readonly=0)
Oct 11 09:22:35 compute-0 ovn_controller[152945]: 2025-10-11T09:22:35Z|01240|binding|INFO|Setting lport 30ec8ddb-e058-428a-a08a-817e1e452938 down in Southbound
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:35 compute-0 ovn_controller[152945]: 2025-10-11T09:22:35Z|01241|binding|INFO|Removing iface tap30ec8ddb-e0 ovn-installed in OVS
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:35 compute-0 kernel: tapa35d553f-94 (unregistering): left promiscuous mode
Oct 11 09:22:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.863 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:eb:db 10.100.0.6'], port_security=['fa:16:3e:97:eb:db 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd703a39a-f502-4bc4-895a-bb87752c83df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c29d60dd-64a4-4eb0-a5b7-799e6b286f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6a655d-e3d9-4e53-b2d8-3195591acfb2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=30ec8ddb-e058-428a-a08a-817e1e452938) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:22:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.865 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 30ec8ddb-e058-428a-a08a-817e1e452938 in datapath 02690ac5-d004-4c7d-b780-e5fed29e0aa7 unbound from our chassis
Oct 11 09:22:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.871 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02690ac5-d004-4c7d-b780-e5fed29e0aa7
Oct 11 09:22:35 compute-0 NetworkManager[44960]: <info>  [1760174555.8742] device (tapa35d553f-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:35 compute-0 ovn_controller[152945]: 2025-10-11T09:22:35Z|01242|binding|INFO|Releasing lport a35d553f-941e-4ff2-bc8a-39419cc32ff2 from this chassis (sb_readonly=0)
Oct 11 09:22:35 compute-0 ovn_controller[152945]: 2025-10-11T09:22:35Z|01243|binding|INFO|Setting lport a35d553f-941e-4ff2-bc8a-39419cc32ff2 down in Southbound
Oct 11 09:22:35 compute-0 ovn_controller[152945]: 2025-10-11T09:22:35Z|01244|binding|INFO|Removing iface tapa35d553f-94 ovn-installed in OVS
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.910 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:2c:ec 2001:db8:0:1:f816:3eff:fe62:2cec 2001:db8::f816:3eff:fe62:2cec'], port_security=['fa:16:3e:62:2c:ec 2001:db8:0:1:f816:3eff:fe62:2cec 2001:db8::f816:3eff:fe62:2cec'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe62:2cec/64 2001:db8::f816:3eff:fe62:2cec/64', 'neutron:device_id': 'd703a39a-f502-4bc4-895a-bb87752c83df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c29d60dd-64a4-4eb0-a5b7-799e6b286f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bb88753-f635-4d32-a71c-7301c0eb5e38, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a35d553f-941e-4ff2-bc8a-39419cc32ff2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:22:35 compute-0 nova_compute[260935]: 2025-10-11 09:22:35.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.910 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[44565fb9-8eee-4705-90ad-2c1d325f49f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:35 compute-0 ceph-mon[74313]: pgmap v2422: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 19 KiB/s wr, 2 op/s
Oct 11 09:22:35 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000078.scope: Deactivated successfully.
Oct 11 09:22:35 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000078.scope: Consumed 14.261s CPU time.
Oct 11 09:22:35 compute-0 systemd-machined[215705]: Machine qemu-143-instance-00000078 terminated.
Oct 11 09:22:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.954 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e3be260a-6224-4460-92d6-18bc0cc4914c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.957 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ab945d87-fde4-4e1e-a55a-95443f22cc43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:35.996 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7a7039f9-2f66-4fc1-828c-22cff8725e1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.016 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[821fdc58-b91a-4ee5-949f-38506678b97d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02690ac5-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:97:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632554, 'reachable_time': 37223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391260, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.034 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed5ca9d-2487-4f7c-8ac3-b109d2feaeaf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap02690ac5-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632567, 'tstamp': 632567}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391261, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap02690ac5-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632571, 'tstamp': 632571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391261, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.036 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02690ac5-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.047 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02690ac5-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.048 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.048 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02690ac5-d0, col_values=(('external_ids', {'iface-id': 'bc2597de-285d-49b2-9e10-c93c87e43828'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.049 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.051 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a35d553f-941e-4ff2-bc8a-39419cc32ff2 in datapath e87b272f-66b8-494e-ab80-c2ee66df15a2 unbound from our chassis
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.055 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e87b272f-66b8-494e-ab80-c2ee66df15a2
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.074 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5dfce559-dd54-4a34-9f41-acf3ba3e80f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.117 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d80629-00a9-4b8e-aa17-7b6f282b6473]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.122 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8e31e20e-2cd4-433e-93ae-feb7258ec122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:36 compute-0 NetworkManager[44960]: <info>  [1760174556.1648] manager: (tapa35d553f-94): new Tun device (/org/freedesktop/NetworkManager/Devices/502)
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.185 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[602a087e-7bb8-40d1-b15e-bebb34e64cf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.183 2 INFO nova.virt.libvirt.driver [-] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Instance destroyed successfully.
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.184 2 DEBUG nova.objects.instance [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid d703a39a-f502-4bc4-895a-bb87752c83df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.217 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[378fb50b-9e69-438e-ac62-ef5471243aad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape87b272f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:ac:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 6, 'rx_bytes': 3460, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 6, 'rx_bytes': 3460, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632666, 'reachable_time': 19970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391288, 'error': None, 'target': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.236 2 DEBUG nova.virt.libvirt.vif [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:21:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-125667472',display_name='tempest-TestGettingAddress-server-125667472',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-125667472',id=120,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:22:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-knbmzljz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:22:04Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=d703a39a-f502-4bc4-895a-bb87752c83df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.237 2 DEBUG nova.network.os_vif_util [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.239 2 DEBUG nova.network.os_vif_util [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:eb:db,bridge_name='br-int',has_traffic_filtering=True,id=30ec8ddb-e058-428a-a08a-817e1e452938,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ec8ddb-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.240 2 DEBUG os_vif [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:eb:db,bridge_name='br-int',has_traffic_filtering=True,id=30ec8ddb-e058-428a-a08a-817e1e452938,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ec8ddb-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.242 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[388610dd-2b1f-4280-a0c8-2ab8baa30228]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape87b272f-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632680, 'tstamp': 632680}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391289, 'error': None, 'target': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.243 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30ec8ddb-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.244 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape87b272f-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.254 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape87b272f-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.254 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.255 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape87b272f-60, col_values=(('external_ids', {'iface-id': 'af33797e-d57d-45c8-92d7-86ea03fdf1ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:36.255 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.256 2 INFO os_vif [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:eb:db,bridge_name='br-int',has_traffic_filtering=True,id=30ec8ddb-e058-428a-a08a-817e1e452938,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30ec8ddb-e0')
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.256 2 DEBUG nova.virt.libvirt.vif [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:21:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-125667472',display_name='tempest-TestGettingAddress-server-125667472',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-125667472',id=120,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:22:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-knbmzljz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:22:04Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=d703a39a-f502-4bc4-895a-bb87752c83df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.257 2 DEBUG nova.network.os_vif_util [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.258 2 DEBUG nova.network.os_vif_util [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:2c:ec,bridge_name='br-int',has_traffic_filtering=True,id=a35d553f-941e-4ff2-bc8a-39419cc32ff2,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa35d553f-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.258 2 DEBUG os_vif [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2c:ec,bridge_name='br-int',has_traffic_filtering=True,id=a35d553f-941e-4ff2-bc8a-39419cc32ff2,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa35d553f-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.260 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa35d553f-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.265 2 INFO os_vif [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2c:ec,bridge_name='br-int',has_traffic_filtering=True,id=a35d553f-941e-4ff2-bc8a-39419cc32ff2,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa35d553f-94')
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.296 2 DEBUG nova.compute.manager [req-368ab2fe-a1e3-40cc-990e-1c159a60a479 req-09997ae3-5ebf-463a-9864-bea1b7c02c69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-unplugged-30ec8ddb-e058-428a-a08a-817e1e452938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.297 2 DEBUG oslo_concurrency.lockutils [req-368ab2fe-a1e3-40cc-990e-1c159a60a479 req-09997ae3-5ebf-463a-9864-bea1b7c02c69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.298 2 DEBUG oslo_concurrency.lockutils [req-368ab2fe-a1e3-40cc-990e-1c159a60a479 req-09997ae3-5ebf-463a-9864-bea1b7c02c69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.298 2 DEBUG oslo_concurrency.lockutils [req-368ab2fe-a1e3-40cc-990e-1c159a60a479 req-09997ae3-5ebf-463a-9864-bea1b7c02c69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.299 2 DEBUG nova.compute.manager [req-368ab2fe-a1e3-40cc-990e-1c159a60a479 req-09997ae3-5ebf-463a-9864-bea1b7c02c69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] No waiting events found dispatching network-vif-unplugged-30ec8ddb-e058-428a-a08a-817e1e452938 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.299 2 DEBUG nova.compute.manager [req-368ab2fe-a1e3-40cc-990e-1c159a60a479 req-09997ae3-5ebf-463a-9864-bea1b7c02c69 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-unplugged-30ec8ddb-e058-428a-a08a-817e1e452938 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.441 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.442 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.536 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.731 2 INFO nova.virt.libvirt.driver [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Deleting instance files /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df_del
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.733 2 INFO nova.virt.libvirt.driver [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Deletion of /var/lib/nova/instances/d703a39a-f502-4bc4-895a-bb87752c83df_del complete
Oct 11 09:22:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2423: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 19 KiB/s wr, 2 op/s
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.794 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.795 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.807 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.807 2 INFO nova.compute.claims [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.996 2 INFO nova.compute.manager [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Took 1.28 seconds to destroy the instance on the hypervisor.
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.997 2 DEBUG oslo.service.loopingcall [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.998 2 DEBUG nova.compute.manager [-] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:22:36 compute-0 nova_compute[260935]: 2025-10-11 09:22:36.998 2 DEBUG nova.network.neutron [-] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.215 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.465 2 DEBUG nova.network.neutron [req-4ef6af49-ab96-4aeb-b173-f1db2bef0ae1 req-5616ddc8-c1ab-403f-978d-928065fd1cb1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updated VIF entry in instance network info cache for port 30ec8ddb-e058-428a-a08a-817e1e452938. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.467 2 DEBUG nova.network.neutron [req-4ef6af49-ab96-4aeb-b173-f1db2bef0ae1 req-5616ddc8-c1ab-403f-978d-928065fd1cb1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updating instance_info_cache with network_info: [{"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "address": "fa:16:3e:62:2c:ec", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": 
{"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe62:2cec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa35d553f-94", "ovs_interfaceid": "a35d553f-941e-4ff2-bc8a-39419cc32ff2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.625 2 DEBUG oslo_concurrency.lockutils [req-4ef6af49-ab96-4aeb-b173-f1db2bef0ae1 req-5616ddc8-c1ab-403f-978d-928065fd1cb1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d703a39a-f502-4bc4-895a-bb87752c83df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:22:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:22:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2858889742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.722 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.729 2 DEBUG nova.compute.provider_tree [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.771 2 DEBUG nova.scheduler.client.report [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.866 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.867 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.893 2 DEBUG nova.compute.manager [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-unplugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.893 2 DEBUG oslo_concurrency.lockutils [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.894 2 DEBUG oslo_concurrency.lockutils [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.894 2 DEBUG oslo_concurrency.lockutils [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.895 2 DEBUG nova.compute.manager [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] No waiting events found dispatching network-vif-unplugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.895 2 DEBUG nova.compute.manager [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-unplugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.896 2 DEBUG nova.compute.manager [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.896 2 DEBUG oslo_concurrency.lockutils [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.897 2 DEBUG oslo_concurrency.lockutils [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.897 2 DEBUG oslo_concurrency.lockutils [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.898 2 DEBUG nova.compute.manager [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] No waiting events found dispatching network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.898 2 WARNING nova.compute.manager [req-da87ed27-e0c6-4cbe-bc37-d8a03fc5b55c req-e6ba7759-77e9-480d-9507-720da3833e1c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received unexpected event network-vif-plugged-a35d553f-941e-4ff2-bc8a-39419cc32ff2 for instance with vm_state active and task_state deleting.
Oct 11 09:22:37 compute-0 ceph-mon[74313]: pgmap v2423: 321 pgs: 321 active+clean; 645 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 19 KiB/s wr, 2 op/s
Oct 11 09:22:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2858889742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.989 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:22:37 compute-0 nova_compute[260935]: 2025-10-11 09:22:37.990 2 DEBUG nova.network.neutron [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.044 2 INFO nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.139 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.165 2 DEBUG nova.policy [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '67e20c1f7ae24f2f8b9e25e0d8ce61ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a13210f275984f3eadf85eba0c749d99', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.447 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.449 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.450 2 INFO nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Creating image(s)
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.485 2 DEBUG nova.storage.rbd_utils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.530 2 DEBUG nova.storage.rbd_utils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.563 2 DEBUG nova.storage.rbd_utils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.567 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.631 2 DEBUG nova.compute.manager [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.632 2 DEBUG oslo_concurrency.lockutils [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.632 2 DEBUG oslo_concurrency.lockutils [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.632 2 DEBUG oslo_concurrency.lockutils [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.633 2 DEBUG nova.compute.manager [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] No waiting events found dispatching network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.633 2 WARNING nova.compute.manager [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received unexpected event network-vif-plugged-30ec8ddb-e058-428a-a08a-817e1e452938 for instance with vm_state active and task_state deleting.
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.634 2 DEBUG nova.compute.manager [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-deleted-a35d553f-941e-4ff2-bc8a-39419cc32ff2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.634 2 INFO nova.compute.manager [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Neutron deleted interface a35d553f-941e-4ff2-bc8a-39419cc32ff2; detaching it from the instance and deleting it from the info cache
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.634 2 DEBUG nova.network.neutron [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updating instance_info_cache with network_info: [{"id": "30ec8ddb-e058-428a-a08a-817e1e452938", "address": "fa:16:3e:97:eb:db", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30ec8ddb-e0", "ovs_interfaceid": "30ec8ddb-e058-428a-a08a-817e1e452938", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.695 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.697 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.698 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.699 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.734 2 DEBUG nova.storage.rbd_utils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.739 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ef21f945-0076-48fa-8d22-c5376e26d278_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.788 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:22:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2424: 321 pgs: 321 active+clean; 566 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 27 KiB/s wr, 32 op/s
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.789 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.791 2 DEBUG nova.network.neutron [-] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.795 2 DEBUG nova.compute.manager [req-74e00882-09ba-4c7f-bf84-123a96f6c2d9 req-61db89a6-94b4-4aeb-8faa-37b1067a49b8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Detach interface failed, port_id=a35d553f-941e-4ff2-bc8a-39419cc32ff2, reason: Instance d703a39a-f502-4bc4-895a-bb87752c83df could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 09:22:38 compute-0 nova_compute[260935]: 2025-10-11 09:22:38.913 2 INFO nova.compute.manager [-] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Took 1.91 seconds to deallocate network for instance.
Oct 11 09:22:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.060 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.061 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.110 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.110 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.111 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.115 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ef21f945-0076-48fa-8d22-c5376e26d278_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.213 2 DEBUG nova.storage.rbd_utils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] resizing rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.319 2 DEBUG nova.network.neutron [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Successfully created port: 0f516e4b-c284-4151-944c-8a7d98f695b5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.327 2 DEBUG nova.objects.instance [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'migration_context' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.381 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.381 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Ensure instance console log exists: /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.381 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.382 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.382 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.395 2 DEBUG oslo_concurrency.processutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:22:39 compute-0 podman[391519]: 2025-10-11 09:22:39.806552147 +0000 UTC m=+0.094859888 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 11 09:22:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:22:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/187125492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.879 2 DEBUG oslo_concurrency.processutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.889 2 DEBUG nova.compute.provider_tree [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.911 2 DEBUG nova.scheduler.client.report [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.939 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:39 compute-0 nova_compute[260935]: 2025-10-11 09:22:39.970 2 INFO nova.scheduler.client.report [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance d703a39a-f502-4bc4-895a-bb87752c83df
Oct 11 09:22:39 compute-0 ceph-mon[74313]: pgmap v2424: 321 pgs: 321 active+clean; 566 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 27 KiB/s wr, 32 op/s
Oct 11 09:22:39 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/187125492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.068 2 DEBUG oslo_concurrency.lockutils [None req-c2c406e0-4543-46b5-8b38-f49bca64c128 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "d703a39a-f502-4bc4-895a-bb87752c83df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.163 2 DEBUG nova.network.neutron [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Successfully updated port: 0f516e4b-c284-4151-944c-8a7d98f695b5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.181 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.182 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquired lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.182 2 DEBUG nova.network.neutron [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.282 2 DEBUG nova.compute.manager [req-1eae47d7-ed4b-4ec4-9822-19dc4c720a81 req-6ae3b80a-de73-4cc1-b7cf-5b622ee7f39d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.283 2 DEBUG nova.compute.manager [req-1eae47d7-ed4b-4ec4-9822-19dc4c720a81 req-6ae3b80a-de73-4cc1-b7cf-5b622ee7f39d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing instance network info cache due to event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.283 2 DEBUG oslo_concurrency.lockutils [req-1eae47d7-ed4b-4ec4-9822-19dc4c720a81 req-6ae3b80a-de73-4cc1-b7cf-5b622ee7f39d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.375 2 DEBUG nova.network.neutron [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.583 2 DEBUG nova.compute.manager [req-5ccec58f-ae6a-4e04-b2db-ddc7d8a55cac req-1d109231-e42a-4520-ab03-a6a11dda4a73 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Received event network-vif-deleted-30ec8ddb-e058-428a-a08a-817e1e452938 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.688 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.705 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.705 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.725 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.725 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.726 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.726 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:22:40 compute-0 nova_compute[260935]: 2025-10-11 09:22:40.726 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:22:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2425: 321 pgs: 321 active+clean; 566 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 13 KiB/s wr, 31 op/s
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.039 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.040 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.040 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.040 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.041 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.042 2 INFO nova.compute.manager [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Terminating instance
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.043 2 DEBUG nova.compute.manager [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:22:41 compute-0 kernel: tap9669f110-04 (unregistering): left promiscuous mode
Oct 11 09:22:41 compute-0 NetworkManager[44960]: <info>  [1760174561.1120] device (tap9669f110-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:22:41 compute-0 ovn_controller[152945]: 2025-10-11T09:22:41Z|01245|binding|INFO|Releasing lport 9669f110-042a-40c6-b7a4-8d78d421ed23 from this chassis (sb_readonly=0)
Oct 11 09:22:41 compute-0 ovn_controller[152945]: 2025-10-11T09:22:41Z|01246|binding|INFO|Setting lport 9669f110-042a-40c6-b7a4-8d78d421ed23 down in Southbound
Oct 11 09:22:41 compute-0 ovn_controller[152945]: 2025-10-11T09:22:41Z|01247|binding|INFO|Removing iface tap9669f110-04 ovn-installed in OVS
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:41 compute-0 kernel: tap5a14fd50-c9 (unregistering): left promiscuous mode
Oct 11 09:22:41 compute-0 NetworkManager[44960]: <info>  [1760174561.2127] device (tap5a14fd50-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:22:41 compute-0 ovn_controller[152945]: 2025-10-11T09:22:41Z|01248|binding|INFO|Releasing lport 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 from this chassis (sb_readonly=1)
Oct 11 09:22:41 compute-0 ovn_controller[152945]: 2025-10-11T09:22:41Z|01249|binding|INFO|Removing iface tap5a14fd50-c9 ovn-installed in OVS
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:41 compute-0 ovn_controller[152945]: 2025-10-11T09:22:41Z|01250|if_status|INFO|Dropped 1 log messages in last 901 seconds (most recently, 901 seconds ago) due to excessive rate
Oct 11 09:22:41 compute-0 ovn_controller[152945]: 2025-10-11T09:22:41Z|01251|if_status|INFO|Not setting lport 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 down as sb is readonly
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:22:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2850151861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:22:41 compute-0 ovn_controller[152945]: 2025-10-11T09:22:41Z|01252|binding|INFO|Setting lport 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 down in Southbound
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.233 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:54:35 10.100.0.11'], port_security=['fa:16:3e:eb:54:35 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fcb45648-eb7b-4975-9f50-08675a787d9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c29d60dd-64a4-4eb0-a5b7-799e6b286f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6a655d-e3d9-4e53-b2d8-3195591acfb2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=9669f110-042a-40c6-b7a4-8d78d421ed23) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.236 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 9669f110-042a-40c6-b7a4-8d78d421ed23 in datapath 02690ac5-d004-4c7d-b780-e5fed29e0aa7 unbound from our chassis
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.239 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02690ac5-d004-4c7d-b780-e5fed29e0aa7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.241 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:83:c1 2001:db8:0:1:f816:3eff:feed:83c1 2001:db8::f816:3eff:feed:83c1'], port_security=['fa:16:3e:ed:83:c1 2001:db8:0:1:f816:3eff:feed:83c1 2001:db8::f816:3eff:feed:83c1'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feed:83c1/64 2001:db8::f816:3eff:feed:83c1/64', 'neutron:device_id': 'fcb45648-eb7b-4975-9f50-08675a787d9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c29d60dd-64a4-4eb0-a5b7-799e6b286f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bb88753-f635-4d32-a71c-7301c0eb5e38, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.241 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eb8d25bc-1011-4302-b851-d26cf887c89f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.244 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7 namespace which is not needed anymore
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.257 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:41 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct 11 09:22:41 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000076.scope: Consumed 16.770s CPU time.
Oct 11 09:22:41 compute-0 systemd-machined[215705]: Machine qemu-141-instance-00000076 terminated.
Oct 11 09:22:41 compute-0 neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7[389155]: [NOTICE]   (389183) : haproxy version is 2.8.14-c23fe91
Oct 11 09:22:41 compute-0 neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7[389155]: [NOTICE]   (389183) : path to executable is /usr/sbin/haproxy
Oct 11 09:22:41 compute-0 neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7[389155]: [WARNING]  (389183) : Exiting Master process...
Oct 11 09:22:41 compute-0 systemd[1]: libpod-a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1.scope: Deactivated successfully.
Oct 11 09:22:41 compute-0 neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7[389155]: [ALERT]    (389183) : Current worker (389186) exited with code 143 (Terminated)
Oct 11 09:22:41 compute-0 neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7[389155]: [WARNING]  (389183) : All workers exited. Exiting... (0)
Oct 11 09:22:41 compute-0 conmon[389155]: conmon a30fa67bcb3a4967e8ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1.scope/container/memory.events
Oct 11 09:22:41 compute-0 podman[391591]: 2025-10-11 09:22:41.409941966 +0000 UTC m=+0.058561183 container died a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:22:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1-userdata-shm.mount: Deactivated successfully.
Oct 11 09:22:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-45a43bab447974501414b98be50feaa29e43ab1d6b08cf88962b6450fef6fae6-merged.mount: Deactivated successfully.
Oct 11 09:22:41 compute-0 podman[391591]: 2025-10-11 09:22:41.455599495 +0000 UTC m=+0.104218652 container cleanup a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 09:22:41 compute-0 systemd[1]: libpod-conmon-a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1.scope: Deactivated successfully.
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.503 2 INFO nova.virt.libvirt.driver [-] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Instance destroyed successfully.
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.503 2 DEBUG nova.objects.instance [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid fcb45648-eb7b-4975-9f50-08675a787d9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.506 2 DEBUG nova.network.neutron [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.522 2 DEBUG nova.virt.libvirt.vif [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:21:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-407795424',display_name='tempest-TestGettingAddress-server-407795424',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-407795424',id=118,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:21:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-dfjcqrr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:21:24Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=fcb45648-eb7b-4975-9f50-08675a787d9c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.523 2 DEBUG nova.network.os_vif_util [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.524 2 DEBUG nova.network.os_vif_util [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:54:35,bridge_name='br-int',has_traffic_filtering=True,id=9669f110-042a-40c6-b7a4-8d78d421ed23,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9669f110-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.524 2 DEBUG os_vif [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:54:35,bridge_name='br-int',has_traffic_filtering=True,id=9669f110-042a-40c6-b7a4-8d78d421ed23,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9669f110-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.526 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9669f110-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.540 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Releasing lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.540 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance network_info: |[{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.540 2 DEBUG oslo_concurrency.lockutils [req-1eae47d7-ed4b-4ec4-9822-19dc4c720a81 req-6ae3b80a-de73-4cc1-b7cf-5b622ee7f39d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.541 2 DEBUG nova.network.neutron [req-1eae47d7-ed4b-4ec4-9822-19dc4c720a81 req-6ae3b80a-de73-4cc1-b7cf-5b622ee7f39d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.543 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Start _get_guest_xml network_info=[{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.544 2 INFO os_vif [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:54:35,bridge_name='br-int',has_traffic_filtering=True,id=9669f110-042a-40c6-b7a4-8d78d421ed23,network=Network(02690ac5-d004-4c7d-b780-e5fed29e0aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9669f110-04')
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.544 2 DEBUG nova.virt.libvirt.vif [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:21:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-407795424',display_name='tempest-TestGettingAddress-server-407795424',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-407795424',id=118,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH3M/kXUSgkBS2kz04q1S/pRsyiwkY2YkIqMb3cQVJF1HU8FNi43mGn58wjJEXbpqq0zRDrJXjUF0vzQa/0kOsDkOMrrOR4SllnyixE3KdibmJHkyv4zy/eyFYJg8UzYow==',key_name='tempest-TestGettingAddress-1350685544',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:21:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-dfjcqrr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:21:24Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=fcb45648-eb7b-4975-9f50-08675a787d9c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.545 2 DEBUG nova.network.os_vif_util [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.545 2 DEBUG nova.network.os_vif_util [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:83:c1,bridge_name='br-int',has_traffic_filtering=True,id=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a14fd50-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.546 2 DEBUG os_vif [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:83:c1,bridge_name='br-int',has_traffic_filtering=True,id=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a14fd50-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.547 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a14fd50-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:41 compute-0 podman[391624]: 2025-10-11 09:22:41.552493289 +0000 UTC m=+0.064504951 container remove a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.559 2 INFO os_vif [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:83:c1,bridge_name='br-int',has_traffic_filtering=True,id=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9,network=Network(e87b272f-66b8-494e-ab80-c2ee66df15a2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a14fd50-c9')
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.563 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e36d7dad-7a0a-433d-8bf2-63d5e606eb8c]: (4, ('Sat Oct 11 09:22:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7 (a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1)\na30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1\nSat Oct 11 09:22:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7 (a30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1)\na30fa67bcb3a4967e8ed944a89e7f834e57e556362850d2eae55ff591936d2f1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.566 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e2a0e6-3443-4112-bea1-46a6cf561970]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.567 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02690ac5-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:41 compute-0 kernel: tap02690ac5-d0: left promiscuous mode
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.575 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a90b3171-e341-483a-b232-bece09fd1d79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.584 2 WARNING nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.599 2 DEBUG nova.virt.libvirt.host [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.600 2 DEBUG nova.virt.libvirt.host [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.603 2 DEBUG nova.virt.libvirt.host [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.604 2 DEBUG nova.virt.libvirt.host [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.604 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.604 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.605 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.605 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.605 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.606 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.606 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.606 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.607 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.607 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.607 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.608 2 DEBUG nova.virt.hardware [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.612 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.612 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1aeb8f00-5b19-45f4-afbe-43473ab24eba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.614 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[81eada51-7a4b-43e2-98a2-de506b56f798]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.641 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a5c4799-1a59-44db-87f6-24c0791b12a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632545, 'reachable_time': 18072, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391670, 'error': None, 'target': 'ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.647 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-02690ac5-d004-4c7d-b780-e5fed29e0aa7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.647 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3a0dd5-6271-43ad-ad69-583864d49539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.649 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 in datapath e87b272f-66b8-494e-ab80-c2ee66df15a2 unbound from our chassis
Oct 11 09:22:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d02690ac5\x2dd004\x2d4c7d\x2db780\x2de5fed29e0aa7.mount: Deactivated successfully.
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.654 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e87b272f-66b8-494e-ab80-c2ee66df15a2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.655 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[960da418-9d7b-413b-b9f3-b605df035495]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:41.656 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2 namespace which is not needed anymore
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.716 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.716 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.724 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.725 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.725 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.731 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.731 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.739 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.739 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.747 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.747 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.752 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:22:41 compute-0 nova_compute[260935]: 2025-10-11 09:22:41.753 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:22:41 compute-0 neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2[389253]: [NOTICE]   (389257) : haproxy version is 2.8.14-c23fe91
Oct 11 09:22:41 compute-0 neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2[389253]: [NOTICE]   (389257) : path to executable is /usr/sbin/haproxy
Oct 11 09:22:41 compute-0 neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2[389253]: [WARNING]  (389257) : Exiting Master process...
Oct 11 09:22:41 compute-0 neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2[389253]: [WARNING]  (389257) : Exiting Master process...
Oct 11 09:22:41 compute-0 neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2[389253]: [ALERT]    (389257) : Current worker (389259) exited with code 143 (Terminated)
Oct 11 09:22:41 compute-0 neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2[389253]: [WARNING]  (389257) : All workers exited. Exiting... (0)
Oct 11 09:22:41 compute-0 systemd[1]: libpod-b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026.scope: Deactivated successfully.
Oct 11 09:22:41 compute-0 podman[391691]: 2025-10-11 09:22:41.86009296 +0000 UTC m=+0.065899601 container died b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:22:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026-userdata-shm.mount: Deactivated successfully.
Oct 11 09:22:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c17045466eb5e9b60860258040d0d3e0755d8193eccf00a44cefc9a37c24b920-merged.mount: Deactivated successfully.
Oct 11 09:22:41 compute-0 podman[391691]: 2025-10-11 09:22:41.909012901 +0000 UTC m=+0.114819552 container cleanup b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 09:22:41 compute-0 systemd[1]: libpod-conmon-b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026.scope: Deactivated successfully.
Oct 11 09:22:41 compute-0 ceph-mon[74313]: pgmap v2425: 321 pgs: 321 active+clean; 566 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 13 KiB/s wr, 31 op/s
Oct 11 09:22:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2850151861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:22:42 compute-0 podman[391739]: 2025-10-11 09:22:42.015923448 +0000 UTC m=+0.065133459 container remove b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 09:22:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.024 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6366c50f-61c1-4185-bf01-3236959a8958]: (4, ('Sat Oct 11 09:22:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2 (b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026)\nb8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026\nSat Oct 11 09:22:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2 (b8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026)\nb8d98df1aece0f443c2b04c5466e7e8fb0cdc8405dd6c0b75e45450252da2026\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.027 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6eab01-7757-46b5-bc9f-068e0fb48898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.028 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape87b272f-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:42 compute-0 kernel: tape87b272f-60: left promiscuous mode
Oct 11 09:22:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.036 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[98767196-3202-4c97-b217-3810d30a4426]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.066 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ace3161f-e3f0-4357-860d-07b4f168a12d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.067 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[20f8af05-b1f6-4a6c-bc5a-8664ec47364c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.098 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dd14abb5-9258-4a48-99d4-12710c1b0f5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632655, 'reachable_time': 31666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391754, 'error': None, 'target': 'ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.100 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e87b272f-66b8-494e-ab80-c2ee66df15a2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:22:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:42.100 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ae52e7ea-a361-4e70-9405-3db4d0b82c1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.125 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.126 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2446MB free_disk=59.69379425048828GB free_vcpus=2 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.126 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.126 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.166 2 INFO nova.virt.libvirt.driver [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Deleting instance files /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c_del
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.167 2 INFO nova.virt.libvirt.driver [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Deletion of /var/lib/nova/instances/fcb45648-eb7b-4975-9f50-08675a787d9c_del complete
Oct 11 09:22:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:22:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3544785565' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.210 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.239 2 DEBUG nova.storage.rbd_utils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.245 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.294 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.295 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.295 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.295 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.295 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance fcb45648-eb7b-4975-9f50-08675a787d9c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.296 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c677661f-bc62-4954-9130-09b285e6abe4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.296 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance ef21f945-0076-48fa-8d22-c5376e26d278 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.296 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.296 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1408MB phys_disk=59GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.304 2 INFO nova.compute.manager [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Took 1.26 seconds to destroy the instance on the hypervisor.
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.305 2 DEBUG oslo.service.loopingcall [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.305 2 DEBUG nova.compute.manager [-] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.305 2 DEBUG nova.network.neutron [-] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:22:42 compute-0 systemd[1]: run-netns-ovnmeta\x2de87b272f\x2d66b8\x2d494e\x2dab80\x2dc2ee66df15a2.mount: Deactivated successfully.
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.462 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:22:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:22:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2661389353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.662 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.663 2 DEBUG nova.virt.libvirt.vif [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1752003988',display_name='tempest-TestShelveInstance-server-1752003988',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1752003988',id=121,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBc59pqsdiIpcBTYIi36cHQ9pbnLBXoyhUafGO6McKtc7V938v9xkK/F0LUwOp/S6AlI8sKdzLvrGPTJbVDXHRRnlCu0B+lzAS2vfK523X9mqHyrpvhTkQghQuITH7BALg==',key_name='tempest-TestShelveInstance-1436929269',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a13210f275984f3eadf85eba0c749d99',ramdisk_id='',reservation_id='r-9i2gklxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-243029510',owner_user_name='tempest-TestShelveInstance-243029510-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:22:38Z,user_data=None,user_id='67e20c1f7ae24f2f8b9e25e0d8ce61ca',uuid=ef21f945-0076-48fa-8d22-c5376e26d278,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.664 2 DEBUG nova.network.os_vif_util [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converting VIF {"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.665 2 DEBUG nova.network.os_vif_util [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.666 2 DEBUG nova.objects.instance [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'pci_devices' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.688 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:22:42 compute-0 nova_compute[260935]:   <uuid>ef21f945-0076-48fa-8d22-c5376e26d278</uuid>
Oct 11 09:22:42 compute-0 nova_compute[260935]:   <name>instance-00000079</name>
Oct 11 09:22:42 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:22:42 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:22:42 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <nova:name>tempest-TestShelveInstance-server-1752003988</nova:name>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:22:41</nova:creationTime>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:22:42 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:22:42 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:22:42 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:22:42 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:22:42 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:22:42 compute-0 nova_compute[260935]:         <nova:user uuid="67e20c1f7ae24f2f8b9e25e0d8ce61ca">tempest-TestShelveInstance-243029510-project-member</nova:user>
Oct 11 09:22:42 compute-0 nova_compute[260935]:         <nova:project uuid="a13210f275984f3eadf85eba0c749d99">tempest-TestShelveInstance-243029510</nova:project>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:22:42 compute-0 nova_compute[260935]:         <nova:port uuid="0f516e4b-c284-4151-944c-8a7d98f695b5">
Oct 11 09:22:42 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:22:42 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:22:42 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <system>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <entry name="serial">ef21f945-0076-48fa-8d22-c5376e26d278</entry>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <entry name="uuid">ef21f945-0076-48fa-8d22-c5376e26d278</entry>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     </system>
Oct 11 09:22:42 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:22:42 compute-0 nova_compute[260935]:   <os>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:   </os>
Oct 11 09:22:42 compute-0 nova_compute[260935]:   <features>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:   </features>
Oct 11 09:22:42 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:22:42 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:22:42 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ef21f945-0076-48fa-8d22-c5376e26d278_disk">
Oct 11 09:22:42 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       </source>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:22:42 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ef21f945-0076-48fa-8d22-c5376e26d278_disk.config">
Oct 11 09:22:42 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       </source>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:22:42 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:4b:08:14"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <target dev="tap0f516e4b-c2"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/console.log" append="off"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <video>
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     </video>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:22:42 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:22:42 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:22:42 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:22:42 compute-0 nova_compute[260935]: </domain>
Oct 11 09:22:42 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.689 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Preparing to wait for external event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.689 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.690 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.690 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.691 2 DEBUG nova.virt.libvirt.vif [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1752003988',display_name='tempest-TestShelveInstance-server-1752003988',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1752003988',id=121,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBc59pqsdiIpcBTYIi36cHQ9pbnLBXoyhUafGO6McKtc7V938v9xkK/F0LUwOp/S6AlI8sKdzLvrGPTJbVDXHRRnlCu0B+lzAS2vfK523X9mqHyrpvhTkQghQuITH7BALg==',key_name='tempest-TestShelveInstance-1436929269',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a13210f275984f3eadf85eba0c749d99',ramdisk_id='',reservation_id='r-9i2gklxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-243029510',owner_user_name='tempest-TestShelveInstance-243029510-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:22:38Z,user_data=None,user_id='67e20c1f7ae24f2f8b9e25e0d8ce61ca',uuid=ef21f945-0076-48fa-8d22-c5376e26d278,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.691 2 DEBUG nova.network.os_vif_util [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converting VIF {"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.692 2 DEBUG nova.network.os_vif_util [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.692 2 DEBUG os_vif [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.693 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.693 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.696 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f516e4b-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.696 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f516e4b-c2, col_values=(('external_ids', {'iface-id': '0f516e4b-c284-4151-944c-8a7d98f695b5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:08:14', 'vm-uuid': 'ef21f945-0076-48fa-8d22-c5376e26d278'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:42 compute-0 NetworkManager[44960]: <info>  [1760174562.6995] manager: (tap0f516e4b-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.711 2 INFO os_vif [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2')
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.715 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-changed-9669f110-042a-40c6-b7a4-8d78d421ed23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.715 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Refreshing instance network info cache due to event network-changed-9669f110-042a-40c6-b7a4-8d78d421ed23. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.716 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.716 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.716 2 DEBUG nova.network.neutron [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Refreshing network info cache for port 9669f110-042a-40c6-b7a4-8d78d421ed23 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.784 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.785 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.785 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] No VIF found with MAC fa:16:3e:4b:08:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.786 2 INFO nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Using config drive
Oct 11 09:22:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2426: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.820 2 DEBUG nova.storage.rbd_utils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:22:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:22:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1558495703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.896 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.904 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.926 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.957 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:22:42 compute-0 nova_compute[260935]: 2025-10-11 09:22:42.957 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3544785565' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:22:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2661389353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:22:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1558495703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.404 2 INFO nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Creating config drive at /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.414 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwknq21d5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.575 2 DEBUG nova.network.neutron [req-1eae47d7-ed4b-4ec4-9822-19dc4c720a81 req-6ae3b80a-de73-4cc1-b7cf-5b622ee7f39d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updated VIF entry in instance network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.576 2 DEBUG nova.network.neutron [req-1eae47d7-ed4b-4ec4-9822-19dc4c720a81 req-6ae3b80a-de73-4cc1-b7cf-5b622ee7f39d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.594 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwknq21d5" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.644 2 DEBUG nova.storage.rbd_utils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.650 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config ef21f945-0076-48fa-8d22-c5376e26d278_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.700 2 DEBUG oslo_concurrency.lockutils [req-1eae47d7-ed4b-4ec4-9822-19dc4c720a81 req-6ae3b80a-de73-4cc1-b7cf-5b622ee7f39d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.737 2 DEBUG nova.network.neutron [-] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.755 2 INFO nova.compute.manager [-] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Took 1.45 seconds to deallocate network for instance.
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.803 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.804 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.814 2 DEBUG oslo_concurrency.processutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config ef21f945-0076-48fa-8d22-c5376e26d278_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.815 2 INFO nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Deleting local config drive /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config because it was imported into RBD.
Oct 11 09:22:43 compute-0 kernel: tap0f516e4b-c2: entered promiscuous mode
Oct 11 09:22:43 compute-0 ovn_controller[152945]: 2025-10-11T09:22:43Z|01253|binding|INFO|Claiming lport 0f516e4b-c284-4151-944c-8a7d98f695b5 for this chassis.
Oct 11 09:22:43 compute-0 ovn_controller[152945]: 2025-10-11T09:22:43Z|01254|binding|INFO|0f516e4b-c284-4151-944c-8a7d98f695b5: Claiming fa:16:3e:4b:08:14 10.100.0.10
Oct 11 09:22:43 compute-0 NetworkManager[44960]: <info>  [1760174563.8740] manager: (tap0f516e4b-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/504)
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:43 compute-0 systemd-udevd[391562]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:22:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.892 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:08:14 10.100.0.10'], port_security=['fa:16:3e:4b:08:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ef21f945-0076-48fa-8d22-c5376e26d278', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0007a0de-db42-4add-9b55-6d92ceffa860', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a13210f275984f3eadf85eba0c749d99', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd52e2b9-19bf-4137-a511-25dd1d4c9f0e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2271296-3ceb-4987-affd-a0b4da64fffe, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0f516e4b-c284-4151-944c-8a7d98f695b5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:22:43 compute-0 NetworkManager[44960]: <info>  [1760174563.8965] device (tap0f516e4b-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:22:43 compute-0 NetworkManager[44960]: <info>  [1760174563.8971] device (tap0f516e4b-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:22:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.895 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0f516e4b-c284-4151-944c-8a7d98f695b5 in datapath 0007a0de-db42-4add-9b55-6d92ceffa860 bound to our chassis
Oct 11 09:22:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.900 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0007a0de-db42-4add-9b55-6d92ceffa860
Oct 11 09:22:43 compute-0 ovn_controller[152945]: 2025-10-11T09:22:43Z|01255|binding|INFO|Setting lport 0f516e4b-c284-4151-944c-8a7d98f695b5 ovn-installed in OVS
Oct 11 09:22:43 compute-0 ovn_controller[152945]: 2025-10-11T09:22:43Z|01256|binding|INFO|Setting lport 0f516e4b-c284-4151-944c-8a7d98f695b5 up in Southbound
Oct 11 09:22:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.920 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[82b887c8-3f0e-450b-923d-b0dc9735afa0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.922 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0007a0de-d1 in ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.924 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0007a0de-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:22:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.925 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[63c0b6e5-664a-436b-865d-bea59c46dd6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.928 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[75515830-fd0a-4e05-b265-7c7e589e86bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:43 compute-0 systemd-machined[215705]: New machine qemu-144-instance-00000079.
Oct 11 09:22:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.947 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1123e63f-e0bb-41af-ac74-3ee6651d3196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:43 compute-0 systemd[1]: Started Virtual Machine qemu-144-instance-00000079.
Oct 11 09:22:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:43.982 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[52e6c2a6-1e72-4704-b005-9810ef2a18d9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:43 compute-0 nova_compute[260935]: 2025-10-11 09:22:43.996 2 DEBUG oslo_concurrency.processutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:22:44 compute-0 ceph-mon[74313]: pgmap v2426: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.033 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[10f7ace6-0b2e-44ed-92e8-429f5997668f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:44 compute-0 NetworkManager[44960]: <info>  [1760174564.0473] manager: (tap0007a0de-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/505)
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.046 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e604f0-6e62-403f-9ec6-771e3a63ccc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.085 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[503888d6-c257-4bd4-a143-4d91afb36f04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.089 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ea09c9-f00f-44ba-b8a8-9a35641ad272]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:44 compute-0 NetworkManager[44960]: <info>  [1760174564.1214] device (tap0007a0de-d0): carrier: link connected
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.131 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7f6cbdc7-ba13-449d-9d2e-088905dd0e74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.157 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[56e76348-9088-49bb-b809-58c6f7baaec1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0007a0de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:ca:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640882, 'reachable_time': 43124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391924, 'error': None, 'target': 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.184 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f831581-c7d9-4d0e-9722-030b44e9bbc2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:ca09'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640882, 'tstamp': 640882}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391935, 'error': None, 'target': 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.206 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a146e5d8-9867-42ef-a605-8a797796669c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0007a0de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:ca:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640882, 'reachable_time': 43124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391945, 'error': None, 'target': 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.260 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[66bc9abd-c121-4ca0-b5f2-fd7fa8b5bb07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.365 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[10b260a2-2a72-4e24-a32e-6e3bb6ef1870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.367 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0007a0de-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.367 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.368 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0007a0de-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:44 compute-0 kernel: tap0007a0de-d0: entered promiscuous mode
Oct 11 09:22:44 compute-0 NetworkManager[44960]: <info>  [1760174564.3719] manager: (tap0007a0de-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/506)
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.379 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0007a0de-d0, col_values=(('external_ids', {'iface-id': 'e89e8ea5-8038-433d-8c45-d2ec20f4f896'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:44 compute-0 ovn_controller[152945]: 2025-10-11T09:22:44Z|01257|binding|INFO|Releasing lport e89e8ea5-8038-433d-8c45-d2ec20f4f896 from this chassis (sb_readonly=0)
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.401 2 DEBUG nova.network.neutron [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updated VIF entry in instance network info cache for port 9669f110-042a-40c6-b7a4-8d78d421ed23. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.402 2 DEBUG nova.network.neutron [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Updating instance_info_cache with network_info: [{"id": "9669f110-042a-40c6-b7a4-8d78d421ed23", "address": "fa:16:3e:eb:54:35", "network": {"id": "02690ac5-d004-4c7d-b780-e5fed29e0aa7", "bridge": "br-int", "label": "tempest-network-smoke--1847613909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9669f110-04", "ovs_interfaceid": "9669f110-042a-40c6-b7a4-8d78d421ed23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "address": "fa:16:3e:ed:83:c1", "network": {"id": "e87b272f-66b8-494e-ab80-c2ee66df15a2", "bridge": "br-int", "label": "tempest-network-smoke--2027628824", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feed:83c1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a14fd50-c9", "ovs_interfaceid": "5a14fd50-c9b4-4c8c-b576-9f2a05d734f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.421 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0007a0de-db42-4add-9b55-6d92ceffa860.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0007a0de-db42-4add-9b55-6d92ceffa860.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.422 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[352331b4-2a7f-4926-8d1b-e7be1fde5442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.424 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-0007a0de-db42-4add-9b55-6d92ceffa860
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/0007a0de-db42-4add-9b55-6d92ceffa860.pid.haproxy
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 0007a0de-db42-4add-9b55-6d92ceffa860
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:22:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:44.425 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'env', 'PROCESS_TAG=haproxy-0007a0de-db42-4add-9b55-6d92ceffa860', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0007a0de-db42-4add-9b55-6d92ceffa860.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.431 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-fcb45648-eb7b-4975-9f50-08675a787d9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.431 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-unplugged-9669f110-042a-40c6-b7a4-8d78d421ed23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.432 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.432 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.432 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.432 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] No waiting events found dispatching network-vif-unplugged-9669f110-042a-40c6-b7a4-8d78d421ed23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.433 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-unplugged-9669f110-042a-40c6-b7a4-8d78d421ed23 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.433 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.433 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.433 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.434 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.434 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] No waiting events found dispatching network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.434 2 WARNING nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received unexpected event network-vif-plugged-9669f110-042a-40c6-b7a4-8d78d421ed23 for instance with vm_state active and task_state deleting.
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.434 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-unplugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.435 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.435 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.435 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.435 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] No waiting events found dispatching network-vif-unplugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.435 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-unplugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.436 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.436 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.436 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.436 2 DEBUG oslo_concurrency.lockutils [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.437 2 DEBUG nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] No waiting events found dispatching network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.437 2 WARNING nova.compute.manager [req-eb62cceb-bb2f-4df7-9d20-8749530b12bd req-81a69ed1-48a4-4824-8e94-48a882a02a2e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received unexpected event network-vif-plugged-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 for instance with vm_state active and task_state deleting.
Oct 11 09:22:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:22:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/298502596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.525 2 DEBUG oslo_concurrency.processutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.534 2 DEBUG nova.compute.provider_tree [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.553 2 DEBUG nova.scheduler.client.report [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.583 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.620 2 INFO nova.scheduler.client.report [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance fcb45648-eb7b-4975-9f50-08675a787d9c
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.735 2 DEBUG oslo_concurrency.lockutils [None req-f7015ac2-3d47-44e7-a392-83aaacccf689 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "fcb45648-eb7b-4975-9f50-08675a787d9c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2427: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.850 2 DEBUG nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-deleted-5a14fd50-c9b4-4c8c-b576-9f2a05d734f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.851 2 INFO nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Neutron deleted interface 5a14fd50-c9b4-4c8c-b576-9f2a05d734f9; detaching it from the instance and deleting it from the info cache
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.852 2 DEBUG nova.network.neutron [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.855 2 DEBUG nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Detach interface failed, port_id=5a14fd50-c9b4-4c8c-b576-9f2a05d734f9, reason: Instance fcb45648-eb7b-4975-9f50-08675a787d9c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.856 2 DEBUG nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Received event network-vif-deleted-9669f110-042a-40c6-b7a4-8d78d421ed23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.856 2 INFO nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Neutron deleted interface 9669f110-042a-40c6-b7a4-8d78d421ed23; detaching it from the instance and deleting it from the info cache
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.856 2 DEBUG nova.network.neutron [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.858 2 DEBUG nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Detach interface failed, port_id=9669f110-042a-40c6-b7a4-8d78d421ed23, reason: Instance fcb45648-eb7b-4975-9f50-08675a787d9c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.859 2 DEBUG nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.859 2 DEBUG oslo_concurrency.lockutils [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.859 2 DEBUG oslo_concurrency.lockutils [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.860 2 DEBUG oslo_concurrency.lockutils [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.860 2 DEBUG nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Processing event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.860 2 DEBUG nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.860 2 DEBUG oslo_concurrency.lockutils [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.861 2 DEBUG oslo_concurrency.lockutils [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.861 2 DEBUG oslo_concurrency.lockutils [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.861 2 DEBUG nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] No waiting events found dispatching network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:22:44 compute-0 nova_compute[260935]: 2025-10-11 09:22:44.861 2 WARNING nova.compute.manager [req-cb444796-a88d-45e5-8002-7aff3a6d3e79 req-e8ccd70c-2c53-4d58-8a5e-6523ac028591 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received unexpected event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 for instance with vm_state building and task_state spawning.
Oct 11 09:22:44 compute-0 podman[392022]: 2025-10-11 09:22:44.942608701 +0000 UTC m=+0.058666937 container create 08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 09:22:45 compute-0 systemd[1]: Started libpod-conmon-08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f.scope.
Oct 11 09:22:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/298502596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:22:45 compute-0 podman[392022]: 2025-10-11 09:22:44.91177324 +0000 UTC m=+0.027831456 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.013 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174565.012413, ef21f945-0076-48fa-8d22-c5376e26d278 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.014 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] VM Started (Lifecycle Event)
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.017 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.022 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.028 2 INFO nova.virt.libvirt.driver [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance spawned successfully.
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.029 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:22:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.039 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:22:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f47c3c136e5a0070dae877380d4b02cb73021141c299a51e87e245e6899f06b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.050 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.057 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.058 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.058 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.059 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.060 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.060 2 DEBUG nova.virt.libvirt.driver [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:22:45 compute-0 podman[392022]: 2025-10-11 09:22:45.064047588 +0000 UTC m=+0.180105834 container init 08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.071 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:22:45 compute-0 podman[392022]: 2025-10-11 09:22:45.072296871 +0000 UTC m=+0.188355087 container start 08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.072 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174565.0125074, ef21f945-0076-48fa-8d22-c5376e26d278 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.072 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] VM Paused (Lifecycle Event)
Oct 11 09:22:45 compute-0 podman[392036]: 2025-10-11 09:22:45.079793902 +0000 UTC m=+0.094293842 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid)
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.093 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.096 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174565.0221415, ef21f945-0076-48fa-8d22-c5376e26d278 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.096 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] VM Resumed (Lifecycle Event)
Oct 11 09:22:45 compute-0 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[392049]: [NOTICE]   (392060) : New worker (392064) forked
Oct 11 09:22:45 compute-0 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[392049]: [NOTICE]   (392060) : Loading success.
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.130 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.134 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.151 2 INFO nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Took 6.70 seconds to spawn the instance on the hypervisor.
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.151 2 DEBUG nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.185 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.225 2 INFO nova.compute.manager [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Took 8.56 seconds to build instance.
Oct 11 09:22:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:45.284 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:45 compute-0 nova_compute[260935]: 2025-10-11 09:22:45.292 2 DEBUG oslo_concurrency.lockutils [None req-4fc48ab2-5389-4c80-9e31-49ff047eb67b 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:46 compute-0 ceph-mon[74313]: pgmap v2427: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 11 09:22:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2428: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 11 09:22:47 compute-0 ceph-mon[74313]: pgmap v2428: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct 11 09:22:47 compute-0 nova_compute[260935]: 2025-10-11 09:22:47.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:47 compute-0 nova_compute[260935]: 2025-10-11 09:22:47.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2429: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Oct 11 09:22:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:22:49 compute-0 podman[392073]: 2025-10-11 09:22:49.802637223 +0000 UTC m=+0.103609195 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:22:49 compute-0 ceph-mon[74313]: pgmap v2429: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Oct 11 09:22:49 compute-0 podman[392074]: 2025-10-11 09:22:49.925466509 +0000 UTC m=+0.212292572 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 09:22:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2430: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 11 09:22:51 compute-0 nova_compute[260935]: 2025-10-11 09:22:51.180 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174556.1787422, d703a39a-f502-4bc4-895a-bb87752c83df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:22:51 compute-0 nova_compute[260935]: 2025-10-11 09:22:51.180 2 INFO nova.compute.manager [-] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] VM Stopped (Lifecycle Event)
Oct 11 09:22:51 compute-0 nova_compute[260935]: 2025-10-11 09:22:51.210 2 DEBUG nova.compute.manager [None req-2c589f31-d519-4157-9f3c-2ba626c48f24 - - - - - -] [instance: d703a39a-f502-4bc4-895a-bb87752c83df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:22:51 compute-0 nova_compute[260935]: 2025-10-11 09:22:51.285 2 DEBUG nova.compute.manager [req-68b84799-cea6-45d6-bc46-eeb0bc8e9d70 req-26bc73a3-7d74-426b-9906-a35f9de993d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:51 compute-0 nova_compute[260935]: 2025-10-11 09:22:51.286 2 DEBUG nova.compute.manager [req-68b84799-cea6-45d6-bc46-eeb0bc8e9d70 req-26bc73a3-7d74-426b-9906-a35f9de993d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing instance network info cache due to event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:22:51 compute-0 nova_compute[260935]: 2025-10-11 09:22:51.286 2 DEBUG oslo_concurrency.lockutils [req-68b84799-cea6-45d6-bc46-eeb0bc8e9d70 req-26bc73a3-7d74-426b-9906-a35f9de993d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:22:51 compute-0 nova_compute[260935]: 2025-10-11 09:22:51.286 2 DEBUG oslo_concurrency.lockutils [req-68b84799-cea6-45d6-bc46-eeb0bc8e9d70 req-26bc73a3-7d74-426b-9906-a35f9de993d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:22:51 compute-0 nova_compute[260935]: 2025-10-11 09:22:51.287 2 DEBUG nova.network.neutron [req-68b84799-cea6-45d6-bc46-eeb0bc8e9d70 req-26bc73a3-7d74-426b-9906-a35f9de993d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:22:51 compute-0 ceph-mon[74313]: pgmap v2430: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 11 09:22:52 compute-0 nova_compute[260935]: 2025-10-11 09:22:52.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:52 compute-0 ovn_controller[152945]: 2025-10-11T09:22:52Z|01258|binding|INFO|Releasing lport e89e8ea5-8038-433d-8c45-d2ec20f4f896 from this chassis (sb_readonly=0)
Oct 11 09:22:52 compute-0 ovn_controller[152945]: 2025-10-11T09:22:52Z|01259|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:22:52 compute-0 ovn_controller[152945]: 2025-10-11T09:22:52Z|01260|binding|INFO|Releasing lport f0c2b309-d39a-4d01-be06-bdc63deb27f9 from this chassis (sb_readonly=0)
Oct 11 09:22:52 compute-0 ovn_controller[152945]: 2025-10-11T09:22:52Z|01261|binding|INFO|Releasing lport 89155f05-1b39-4918-893c-15a2cd2a9493 from this chassis (sb_readonly=0)
Oct 11 09:22:52 compute-0 ovn_controller[152945]: 2025-10-11T09:22:52Z|01262|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:22:52 compute-0 nova_compute[260935]: 2025-10-11 09:22:52.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:52 compute-0 nova_compute[260935]: 2025-10-11 09:22:52.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:52 compute-0 nova_compute[260935]: 2025-10-11 09:22:52.778 2 DEBUG nova.network.neutron [req-68b84799-cea6-45d6-bc46-eeb0bc8e9d70 req-26bc73a3-7d74-426b-9906-a35f9de993d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updated VIF entry in instance network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:22:52 compute-0 nova_compute[260935]: 2025-10-11 09:22:52.778 2 DEBUG nova.network.neutron [req-68b84799-cea6-45d6-bc46-eeb0bc8e9d70 req-26bc73a3-7d74-426b-9906-a35f9de993d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:22:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2431: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 11 09:22:52 compute-0 nova_compute[260935]: 2025-10-11 09:22:52.802 2 DEBUG oslo_concurrency.lockutils [req-68b84799-cea6-45d6-bc46-eeb0bc8e9d70 req-26bc73a3-7d74-426b-9906-a35f9de993d0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:22:53 compute-0 nova_compute[260935]: 2025-10-11 09:22:53.860 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "c677661f-bc62-4954-9130-09b285e6abe4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:53 compute-0 nova_compute[260935]: 2025-10-11 09:22:53.861 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:53 compute-0 nova_compute[260935]: 2025-10-11 09:22:53.861 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "c677661f-bc62-4954-9130-09b285e6abe4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:53 compute-0 nova_compute[260935]: 2025-10-11 09:22:53.862 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:53 compute-0 nova_compute[260935]: 2025-10-11 09:22:53.862 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:53 compute-0 nova_compute[260935]: 2025-10-11 09:22:53.864 2 INFO nova.compute.manager [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Terminating instance
Oct 11 09:22:53 compute-0 nova_compute[260935]: 2025-10-11 09:22:53.865 2 DEBUG nova.compute.manager [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:22:53 compute-0 ceph-mon[74313]: pgmap v2431: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 11 09:22:53 compute-0 kernel: tapf09cab39-50 (unregistering): left promiscuous mode
Oct 11 09:22:53 compute-0 NetworkManager[44960]: <info>  [1760174573.9415] device (tapf09cab39-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:22:53 compute-0 ovn_controller[152945]: 2025-10-11T09:22:53Z|01263|binding|INFO|Releasing lport f09cab39-5082-4d84-91fe-0c7b1cc2fb8a from this chassis (sb_readonly=0)
Oct 11 09:22:53 compute-0 ovn_controller[152945]: 2025-10-11T09:22:53Z|01264|binding|INFO|Setting lport f09cab39-5082-4d84-91fe-0c7b1cc2fb8a down in Southbound
Oct 11 09:22:53 compute-0 nova_compute[260935]: 2025-10-11 09:22:53.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:53 compute-0 ovn_controller[152945]: 2025-10-11T09:22:53Z|01265|binding|INFO|Removing iface tapf09cab39-50 ovn-installed in OVS
Oct 11 09:22:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:53.976 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:57:ba 10.100.0.19'], port_security=['fa:16:3e:29:57:ba 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'c677661f-bc62-4954-9130-09b285e6abe4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-428d818e-c08a-4eef-be62-24fe484fed05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8bcfc965-4dfd-4911-b600-4d86bbeab7bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=020223cc-9ae9-496f-8a67-2eb055f1a089, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:22:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:53.977 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f09cab39-5082-4d84-91fe-0c7b1cc2fb8a in datapath 428d818e-c08a-4eef-be62-24fe484fed05 unbound from our chassis
Oct 11 09:22:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:53.978 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 428d818e-c08a-4eef-be62-24fe484fed05
Oct 11 09:22:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:53.994 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8082b36c-3b0b-4d82-a88a-27378c196b13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:53.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:54 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000077.scope: Deactivated successfully.
Oct 11 09:22:54 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000077.scope: Consumed 15.303s CPU time.
Oct 11 09:22:54 compute-0 systemd-machined[215705]: Machine qemu-142-instance-00000077 terminated.
Oct 11 09:22:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.027 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[841702fa-e64d-4594-a793-0a131ff2356b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.032 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8c08081a-9811-454a-bae4-98ddae44885e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.061 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a45cdab2-9bc2-43ef-87e5-1e4796456b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:22:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.085 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[633230b9-8838-4155-bc30-390419363578]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap428d818e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:fc:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 7, 'rx_bytes': 1042, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 7, 'rx_bytes': 1042, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633373, 'reachable_time': 23194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392131, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.107 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d4775a-f148-4134-9026-33f86d9d07fe]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap428d818e-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633388, 'tstamp': 633388}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392135, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap428d818e-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633391, 'tstamp': 633391}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392135, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.109 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap428d818e-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.115 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap428d818e-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.115 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:22:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.115 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap428d818e-c0, col_values=(('external_ids', {'iface-id': 'f0c2b309-d39a-4d01-be06-bdc63deb27f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:54.116 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.116 2 INFO nova.virt.libvirt.driver [-] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Instance destroyed successfully.
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.116 2 DEBUG nova.objects.instance [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid c677661f-bc62-4954-9130-09b285e6abe4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.131 2 DEBUG nova.virt.libvirt.vif [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:21:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1301201135',display_name='tempest-TestNetworkBasicOps-server-1301201135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1301201135',id=119,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAFMu8G7LIufdrCxIhBQQCul5ZF6SEzch6wK6wL4IRYDngjbMt9CFdOpjTGBGp3BV5OfwWRm0v8OrQ6qzwsg+DrsRLw4XZQAC1nE0Ke58JBZ4nmeeruu5Ghd9xdLrk6Odw==',key_name='tempest-TestNetworkBasicOps-1626339166',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:21:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-1lilo2wr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:21:49Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=c677661f-bc62-4954-9130-09b285e6abe4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.132 2 DEBUG nova.network.os_vif_util [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "address": "fa:16:3e:29:57:ba", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf09cab39-50", "ovs_interfaceid": "f09cab39-5082-4d84-91fe-0c7b1cc2fb8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.133 2 DEBUG nova.network.os_vif_util [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:57:ba,bridge_name='br-int',has_traffic_filtering=True,id=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf09cab39-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.134 2 DEBUG os_vif [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:57:ba,bridge_name='br-int',has_traffic_filtering=True,id=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf09cab39-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf09cab39-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.144 2 INFO os_vif [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:57:ba,bridge_name='br-int',has_traffic_filtering=True,id=f09cab39-5082-4d84-91fe-0c7b1cc2fb8a,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf09cab39-50')
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.234 2 DEBUG nova.compute.manager [req-ebdc15d1-1c5e-420c-bf72-a453c9aec90f req-1239e11d-bc60-4243-a4d0-d6d007b564d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received event network-vif-unplugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.234 2 DEBUG oslo_concurrency.lockutils [req-ebdc15d1-1c5e-420c-bf72-a453c9aec90f req-1239e11d-bc60-4243-a4d0-d6d007b564d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c677661f-bc62-4954-9130-09b285e6abe4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.234 2 DEBUG oslo_concurrency.lockutils [req-ebdc15d1-1c5e-420c-bf72-a453c9aec90f req-1239e11d-bc60-4243-a4d0-d6d007b564d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.235 2 DEBUG oslo_concurrency.lockutils [req-ebdc15d1-1c5e-420c-bf72-a453c9aec90f req-1239e11d-bc60-4243-a4d0-d6d007b564d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.235 2 DEBUG nova.compute.manager [req-ebdc15d1-1c5e-420c-bf72-a453c9aec90f req-1239e11d-bc60-4243-a4d0-d6d007b564d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] No waiting events found dispatching network-vif-unplugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.235 2 DEBUG nova.compute.manager [req-ebdc15d1-1c5e-420c-bf72-a453c9aec90f req-1239e11d-bc60-4243-a4d0-d6d007b564d4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received event network-vif-unplugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.599 2 INFO nova.virt.libvirt.driver [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Deleting instance files /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4_del
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.600 2 INFO nova.virt.libvirt.driver [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Deletion of /var/lib/nova/instances/c677661f-bc62-4954-9130-09b285e6abe4_del complete
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.689 2 INFO nova.compute.manager [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Took 0.82 seconds to destroy the instance on the hypervisor.
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.690 2 DEBUG oslo.service.loopingcall [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.690 2 DEBUG nova.compute.manager [-] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:22:54 compute-0 nova_compute[260935]: 2025-10-11 09:22:54.691 2 DEBUG nova.network.neutron [-] [instance: c677661f-bc62-4954-9130-09b285e6abe4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:22:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2432: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 75 op/s
Oct 11 09:22:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:22:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:22:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:22:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:22:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:22:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:22:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:22:54
Oct 11 09:22:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:22:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:22:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'volumes', 'default.rgw.control']
Oct 11 09:22:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:22:55 compute-0 nova_compute[260935]: 2025-10-11 09:22:55.339 2 DEBUG nova.network.neutron [-] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:22:55 compute-0 nova_compute[260935]: 2025-10-11 09:22:55.356 2 INFO nova.compute.manager [-] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Took 0.66 seconds to deallocate network for instance.
Oct 11 09:22:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:22:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:22:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:22:55 compute-0 nova_compute[260935]: 2025-10-11 09:22:55.403 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:55 compute-0 nova_compute[260935]: 2025-10-11 09:22:55.404 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:22:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:22:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:22:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:22:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:22:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:22:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:22:55 compute-0 nova_compute[260935]: 2025-10-11 09:22:55.417 2 DEBUG nova.compute.manager [req-a508f92c-837e-43bd-b5db-9614c2938312 req-59825ce6-c679-4cd8-a85e-d0b6caea822f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received event network-vif-deleted-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:55 compute-0 nova_compute[260935]: 2025-10-11 09:22:55.557 2 DEBUG oslo_concurrency.processutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:22:55 compute-0 ceph-mon[74313]: pgmap v2432: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 75 op/s
Oct 11 09:22:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:22:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2362224432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:22:56 compute-0 nova_compute[260935]: 2025-10-11 09:22:56.111 2 DEBUG oslo_concurrency.processutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:22:56 compute-0 nova_compute[260935]: 2025-10-11 09:22:56.119 2 DEBUG nova.compute.provider_tree [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:22:56 compute-0 nova_compute[260935]: 2025-10-11 09:22:56.138 2 DEBUG nova.scheduler.client.report [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:22:56 compute-0 nova_compute[260935]: 2025-10-11 09:22:56.160 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:56 compute-0 nova_compute[260935]: 2025-10-11 09:22:56.192 2 INFO nova.scheduler.client.report [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance c677661f-bc62-4954-9130-09b285e6abe4
Oct 11 09:22:56 compute-0 nova_compute[260935]: 2025-10-11 09:22:56.259 2 DEBUG oslo_concurrency.lockutils [None req-77443734-065e-4e2d-80ca-8d8e3f47c155 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:56 compute-0 nova_compute[260935]: 2025-10-11 09:22:56.348 2 DEBUG nova.compute.manager [req-12e59c4a-d169-4c52-9d98-006404c0f404 req-ded371c5-f52c-4c0b-b97e-74d12bcdbebd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received event network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:56 compute-0 nova_compute[260935]: 2025-10-11 09:22:56.349 2 DEBUG oslo_concurrency.lockutils [req-12e59c4a-d169-4c52-9d98-006404c0f404 req-ded371c5-f52c-4c0b-b97e-74d12bcdbebd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "c677661f-bc62-4954-9130-09b285e6abe4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:56 compute-0 nova_compute[260935]: 2025-10-11 09:22:56.349 2 DEBUG oslo_concurrency.lockutils [req-12e59c4a-d169-4c52-9d98-006404c0f404 req-ded371c5-f52c-4c0b-b97e-74d12bcdbebd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:56 compute-0 nova_compute[260935]: 2025-10-11 09:22:56.349 2 DEBUG oslo_concurrency.lockutils [req-12e59c4a-d169-4c52-9d98-006404c0f404 req-ded371c5-f52c-4c0b-b97e-74d12bcdbebd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "c677661f-bc62-4954-9130-09b285e6abe4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:56 compute-0 nova_compute[260935]: 2025-10-11 09:22:56.349 2 DEBUG nova.compute.manager [req-12e59c4a-d169-4c52-9d98-006404c0f404 req-ded371c5-f52c-4c0b-b97e-74d12bcdbebd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] No waiting events found dispatching network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:22:56 compute-0 nova_compute[260935]: 2025-10-11 09:22:56.350 2 WARNING nova.compute.manager [req-12e59c4a-d169-4c52-9d98-006404c0f404 req-ded371c5-f52c-4c0b-b97e-74d12bcdbebd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Received unexpected event network-vif-plugged-f09cab39-5082-4d84-91fe-0c7b1cc2fb8a for instance with vm_state deleted and task_state None.
Oct 11 09:22:56 compute-0 nova_compute[260935]: 2025-10-11 09:22:56.499 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174561.4984128, fcb45648-eb7b-4975-9f50-08675a787d9c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:22:56 compute-0 nova_compute[260935]: 2025-10-11 09:22:56.500 2 INFO nova.compute.manager [-] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] VM Stopped (Lifecycle Event)
Oct 11 09:22:56 compute-0 nova_compute[260935]: 2025-10-11 09:22:56.521 2 DEBUG nova.compute.manager [None req-3ddd017b-90e9-4e7d-b87a-15c2a87c6696 - - - - - -] [instance: fcb45648-eb7b-4975-9f50-08675a787d9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:22:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2433: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 75 op/s
Oct 11 09:22:57 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2362224432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:57 compute-0 ovn_controller[152945]: 2025-10-11T09:22:57Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:08:14 10.100.0.10
Oct 11 09:22:57 compute-0 ovn_controller[152945]: 2025-10-11T09:22:57Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:08:14 10.100.0.10
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.591 2 DEBUG oslo_concurrency.lockutils [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "interface-7f0d9214-39a5-458d-82db-dcbc7d61b8b5-300c071c-a312-4a9b-bd7a-1b16b9a35ae6" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.591 2 DEBUG oslo_concurrency.lockutils [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "interface-7f0d9214-39a5-458d-82db-dcbc7d61b8b5-300c071c-a312-4a9b-bd7a-1b16b9a35ae6" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.609 2 DEBUG nova.objects.instance [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'flavor' on Instance uuid 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.627 2 DEBUG nova.virt.libvirt.vif [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:20:58Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.628 2 DEBUG nova.network.os_vif_util [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.629 2 DEBUG nova.network.os_vif_util [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.634 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.638 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.643 2 DEBUG nova.virt.libvirt.driver [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Attempting to detach device tap300c071c-a3 from instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.643 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] detach device xml: <interface type="ethernet">
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:f8:68:88"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <target dev="tap300c071c-a3"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]: </interface>
Oct 11 09:22:57 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.736 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.743 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface>not found in domain: <domain type='kvm' id='140'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <name>instance-00000075</name>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <uuid>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</uuid>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:name>tempest-TestNetworkBasicOps-server-1960711580</nova:name>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 09:21:28</nova:creationTime>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:port uuid="6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5">
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:port uuid="300c071c-a312-4a9b-bd7a-1b16b9a35ae6">
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 09:22:57 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <resource>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <partition>/machine</partition>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </resource>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <system>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <entry name='serial'>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <entry name='uuid'>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </system>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <os>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </os>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <features>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </features>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <cpu mode='custom' match='exact' check='full'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <vendor>AMD</vendor>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='x2apic'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc-deadline'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='hypervisor'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc_adjust'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='spec-ctrl'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='stibp'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='arch-capabilities'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='ssbd'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='cmp_legacy'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='overflow-recov'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='succor'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='ibrs'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='amd-ssbd'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='virt-ssbd'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='lbrv'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='tsc-scale'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='vmcb-clean'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='flushbyasid'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='pause-filter'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='pfthreshold'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='rdctl-no'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='mds-no'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='pschange-mc-no'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='gds-no'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='rfds-no'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='xsaves'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='svm'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='topoext'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='npt'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='nrip-save'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk' index='2'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       </source>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='virtio-disk0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config' index='1'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       </source>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='sata0-0-0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pcie.0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.1'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.2'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.3'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.4'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.5'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.6'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.7'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.8'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.9'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.10'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.11'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.12'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.13'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.14'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.15'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.16'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.17'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.18'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.19'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.20'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.21'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.22'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.23'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.24'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.25'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.26'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='usb'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='ide'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:38:a7:f5'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target dev='tap6aa7ac72-3e'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='net0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:f8:68:88'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target dev='tap300c071c-a3'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='net1'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <source path='/dev/pts/2'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log' append='off'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       </target>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <console type='pty' tty='/dev/pts/2'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <source path='/dev/pts/2'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log' append='off'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </console>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='input0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </input>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='input1'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </input>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='input2'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </input>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </graphics>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <video>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='video0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </video>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='watchdog0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </watchdog>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='balloon0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='rng0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <label>system_u:system_r:svirt_t:s0:c329,c594</label>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c329,c594</imagelabel>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <label>+107:+107</label>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <imagelabel>+107:+107</imagelabel>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 09:22:57 compute-0 nova_compute[260935]: </domain>
Oct 11 09:22:57 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.746 2 INFO nova.virt.libvirt.driver [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully detached device tap300c071c-a3 from instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 from the persistent domain config.
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.746 2 DEBUG nova.virt.libvirt.driver [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] (1/8): Attempting to detach device tap300c071c-a3 with device alias net1 from instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.747 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] detach device xml: <interface type="ethernet">
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:f8:68:88"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <target dev="tap300c071c-a3"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]: </interface>
Oct 11 09:22:57 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 09:22:57 compute-0 kernel: tap300c071c-a3 (unregistering): left promiscuous mode
Oct 11 09:22:57 compute-0 NetworkManager[44960]: <info>  [1760174577.8576] device (tap300c071c-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.870 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Received event <DeviceRemovedEvent: 1760174577.8703454, 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 09:22:57 compute-0 ovn_controller[152945]: 2025-10-11T09:22:57Z|01266|binding|INFO|Releasing lport 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 from this chassis (sb_readonly=0)
Oct 11 09:22:57 compute-0 ovn_controller[152945]: 2025-10-11T09:22:57Z|01267|binding|INFO|Setting lport 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 down in Southbound
Oct 11 09:22:57 compute-0 ovn_controller[152945]: 2025-10-11T09:22:57Z|01268|binding|INFO|Removing iface tap300c071c-a3 ovn-installed in OVS
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.873 2 DEBUG nova.virt.libvirt.driver [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Start waiting for the detach event from libvirt for device tap300c071c-a3 with device alias net1 for instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.874 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:57.881 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:68:88 10.100.0.30', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '7f0d9214-39a5-458d-82db-dcbc7d61b8b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-428d818e-c08a-4eef-be62-24fe484fed05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=020223cc-9ae9-496f-8a67-2eb055f1a089, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=300c071c-a312-4a9b-bd7a-1b16b9a35ae6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:22:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:57.883 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 in datapath 428d818e-c08a-4eef-be62-24fe484fed05 unbound from our chassis
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.886 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface>not found in domain: <domain type='kvm' id='140'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <name>instance-00000075</name>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <uuid>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</uuid>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:name>tempest-TestNetworkBasicOps-server-1960711580</nova:name>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 09:21:28</nova:creationTime>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:port uuid="6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5">
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:port uuid="300c071c-a312-4a9b-bd7a-1b16b9a35ae6">
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 09:22:57 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <resource>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <partition>/machine</partition>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </resource>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <system>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <entry name='serial'>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <entry name='uuid'>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </system>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <os>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </os>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <features>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </features>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <cpu mode='custom' match='exact' check='full'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <vendor>AMD</vendor>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='x2apic'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc-deadline'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='hypervisor'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc_adjust'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='spec-ctrl'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='stibp'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='arch-capabilities'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='ssbd'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='cmp_legacy'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='overflow-recov'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='succor'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='ibrs'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='amd-ssbd'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='virt-ssbd'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='lbrv'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='tsc-scale'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='vmcb-clean'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='flushbyasid'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='pause-filter'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='pfthreshold'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='rdctl-no'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='mds-no'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='pschange-mc-no'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='gds-no'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='rfds-no'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='xsaves'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='svm'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='require' name='topoext'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='npt'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <feature policy='disable' name='nrip-save'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk' index='2'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       </source>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='virtio-disk0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config' index='1'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       </source>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='sata0-0-0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pcie.0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.1'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.2'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.3'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.4'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.5'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.6'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.7'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.8'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.9'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.10'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.11'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.12'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.13'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.14'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.15'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.16'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.17'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.18'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.19'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.20'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.21'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.22'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.23'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.24'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.25'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='pci.26'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='usb'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='ide'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:38:a7:f5'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target dev='tap6aa7ac72-3e'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='net0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <source path='/dev/pts/2'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log' append='off'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       </target>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <console type='pty' tty='/dev/pts/2'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <source path='/dev/pts/2'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log' append='off'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </console>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='input0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </input>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='input1'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </input>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='input2'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </input>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </graphics>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <video>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='video0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </video>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='watchdog0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </watchdog>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='balloon0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <alias name='rng0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <label>system_u:system_r:svirt_t:s0:c329,c594</label>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c329,c594</imagelabel>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <label>+107:+107</label>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <imagelabel>+107:+107</imagelabel>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 09:22:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:57.887 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 428d818e-c08a-4eef-be62-24fe484fed05, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:22:57 compute-0 nova_compute[260935]: </domain>
Oct 11 09:22:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:57.888 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6617feeb-ad2d-48b3-8829-e7b831fe58f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:57.889 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05 namespace which is not needed anymore
Oct 11 09:22:57 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.886 2 INFO nova.virt.libvirt.driver [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully detached device tap300c071c-a3 from instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 from the live domain config.
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.887 2 DEBUG nova.virt.libvirt.vif [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:20:58Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.888 2 DEBUG nova.network.os_vif_util [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.889 2 DEBUG nova.network.os_vif_util [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.889 2 DEBUG os_vif [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.892 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap300c071c-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.904 2 INFO os_vif [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3')
Oct 11 09:22:57 compute-0 nova_compute[260935]: 2025-10-11 09:22:57.905 2 DEBUG nova.virt.libvirt.guest [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:name>tempest-TestNetworkBasicOps-server-1960711580</nova:name>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 09:22:57</nova:creationTime>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     <nova:port uuid="6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5">
Oct 11 09:22:57 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 09:22:57 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:22:57 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 09:22:57 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 09:22:57 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 09:22:58 compute-0 neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05[389350]: [NOTICE]   (389354) : haproxy version is 2.8.14-c23fe91
Oct 11 09:22:58 compute-0 neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05[389350]: [NOTICE]   (389354) : path to executable is /usr/sbin/haproxy
Oct 11 09:22:58 compute-0 neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05[389350]: [WARNING]  (389354) : Exiting Master process...
Oct 11 09:22:58 compute-0 neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05[389350]: [WARNING]  (389354) : Exiting Master process...
Oct 11 09:22:58 compute-0 neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05[389350]: [ALERT]    (389354) : Current worker (389356) exited with code 143 (Terminated)
Oct 11 09:22:58 compute-0 neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05[389350]: [WARNING]  (389354) : All workers exited. Exiting... (0)
Oct 11 09:22:58 compute-0 systemd[1]: libpod-3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f.scope: Deactivated successfully.
Oct 11 09:22:58 compute-0 podman[392208]: 2025-10-11 09:22:58.041576801 +0000 UTC m=+0.053897982 container died 3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:22:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e5d17837c87f5f50f3b76a7653ef33e05fc19c9a00cc47d414f88016a25e132-merged.mount: Deactivated successfully.
Oct 11 09:22:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f-userdata-shm.mount: Deactivated successfully.
Oct 11 09:22:58 compute-0 podman[392208]: 2025-10-11 09:22:58.112087881 +0000 UTC m=+0.124409082 container cleanup 3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 09:22:58 compute-0 systemd[1]: libpod-conmon-3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f.scope: Deactivated successfully.
Oct 11 09:22:58 compute-0 ceph-mon[74313]: pgmap v2433: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 75 op/s
Oct 11 09:22:58 compute-0 podman[392237]: 2025-10-11 09:22:58.240502065 +0000 UTC m=+0.066865470 container remove 3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 09:22:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.248 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[002f844d-f101-45d7-8198-0b28142d38c1]: (4, ('Sat Oct 11 09:22:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05 (3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f)\n3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f\nSat Oct 11 09:22:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05 (3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f)\n3d44ae400f68c69c83298cdd1bf71370cdc99ef9434f35a592b35316b09b177f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.250 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ef3d00b9-b425-412d-9fa6-e927ccd78cf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.251 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap428d818e-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:58 compute-0 kernel: tap428d818e-c0: left promiscuous mode
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.276 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[42430a25-27fe-4853-afb8-b0d471e62259]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.303 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4968a1f4-e46a-4c38-b96f-00d8329b89b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.304 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[18e519f7-ce44-4d7e-adb0-4fdd6dcd922f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.339 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c9110da9-b25f-4b9c-bb44-7fb7bc0d313a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633363, 'reachable_time': 29039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392252, 'error': None, 'target': 'ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.344 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-428d818e-c08a-4eef-be62-24fe484fed05 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:22:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d428d818e\x2dc08a\x2d4eef\x2dbe62\x2d24fe484fed05.mount: Deactivated successfully.
Oct 11 09:22:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:22:58.344 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[892f9552-e3a3-4e71-83d4-a9025c916b18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.403 2 DEBUG oslo_concurrency.lockutils [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.403 2 DEBUG oslo_concurrency.lockutils [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.404 2 DEBUG nova.network.neutron [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.464 2 DEBUG nova.compute.manager [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-deleted-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.464 2 INFO nova.compute.manager [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Neutron deleted interface 300c071c-a312-4a9b-bd7a-1b16b9a35ae6; detaching it from the instance and deleting it from the info cache
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.465 2 DEBUG nova.network.neutron [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.469 2 DEBUG nova.compute.manager [req-b371964c-f5f8-45f8-b250-0f9e13a4c2ec req-27592c84-3b89-4a36-8872-dbf2513ad211 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-unplugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.470 2 DEBUG oslo_concurrency.lockutils [req-b371964c-f5f8-45f8-b250-0f9e13a4c2ec req-27592c84-3b89-4a36-8872-dbf2513ad211 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.470 2 DEBUG oslo_concurrency.lockutils [req-b371964c-f5f8-45f8-b250-0f9e13a4c2ec req-27592c84-3b89-4a36-8872-dbf2513ad211 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.470 2 DEBUG oslo_concurrency.lockutils [req-b371964c-f5f8-45f8-b250-0f9e13a4c2ec req-27592c84-3b89-4a36-8872-dbf2513ad211 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.471 2 DEBUG nova.compute.manager [req-b371964c-f5f8-45f8-b250-0f9e13a4c2ec req-27592c84-3b89-4a36-8872-dbf2513ad211 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] No waiting events found dispatching network-vif-unplugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.471 2 WARNING nova.compute.manager [req-b371964c-f5f8-45f8-b250-0f9e13a4c2ec req-27592c84-3b89-4a36-8872-dbf2513ad211 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received unexpected event network-vif-unplugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 for instance with vm_state active and task_state None.
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.629 2 DEBUG nova.objects.instance [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'system_metadata' on Instance uuid 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.663 2 DEBUG nova.objects.instance [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'flavor' on Instance uuid 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.696 2 DEBUG nova.virt.libvirt.vif [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:20:58Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.696 2 DEBUG nova.network.os_vif_util [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.697 2 DEBUG nova.network.os_vif_util [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.701 2 DEBUG nova.virt.libvirt.guest [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.704 2 DEBUG nova.virt.libvirt.guest [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface>not found in domain: <domain type='kvm' id='140'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <name>instance-00000075</name>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <uuid>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</uuid>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:name>tempest-TestNetworkBasicOps-server-1960711580</nova:name>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 09:22:57</nova:creationTime>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:port uuid="6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5">
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 09:22:58 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <resource>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <partition>/machine</partition>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </resource>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <system>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <entry name='serial'>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <entry name='uuid'>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </system>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <os>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </os>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <features>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </features>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <cpu mode='custom' match='exact' check='full'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <vendor>AMD</vendor>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='x2apic'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc-deadline'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='hypervisor'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc_adjust'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='spec-ctrl'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='stibp'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='arch-capabilities'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='ssbd'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='cmp_legacy'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='overflow-recov'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='succor'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='ibrs'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='amd-ssbd'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='virt-ssbd'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='lbrv'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='tsc-scale'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='vmcb-clean'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='flushbyasid'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='pause-filter'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='pfthreshold'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='rdctl-no'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='mds-no'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='pschange-mc-no'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='gds-no'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='rfds-no'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='xsaves'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='svm'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='topoext'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='npt'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='nrip-save'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk' index='2'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       </source>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='virtio-disk0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config' index='1'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       </source>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='sata0-0-0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pcie.0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.1'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.2'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.3'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.4'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.5'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.6'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.7'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.8'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.9'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.10'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.11'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.12'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.13'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.14'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.15'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.16'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.17'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.18'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.19'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.20'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.21'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.22'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.23'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.24'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.25'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.26'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='usb'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='ide'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:38:a7:f5'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target dev='tap6aa7ac72-3e'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='net0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <source path='/dev/pts/2'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log' append='off'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       </target>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <console type='pty' tty='/dev/pts/2'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <source path='/dev/pts/2'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log' append='off'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </console>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='input0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </input>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='input1'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </input>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='input2'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </input>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </graphics>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <video>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='video0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </video>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='watchdog0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </watchdog>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='balloon0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='rng0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <label>system_u:system_r:svirt_t:s0:c329,c594</label>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c329,c594</imagelabel>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <label>+107:+107</label>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <imagelabel>+107:+107</imagelabel>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 09:22:58 compute-0 nova_compute[260935]: </domain>
Oct 11 09:22:58 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.705 2 DEBUG nova.virt.libvirt.guest [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.712 2 DEBUG nova.virt.libvirt.guest [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f8:68:88"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap300c071c-a3"/></interface>not found in domain: <domain type='kvm' id='140'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <name>instance-00000075</name>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <uuid>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</uuid>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:name>tempest-TestNetworkBasicOps-server-1960711580</nova:name>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 09:22:57</nova:creationTime>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:port uuid="6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5">
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 09:22:58 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <memory unit='KiB'>131072</memory>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <vcpu placement='static'>1</vcpu>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <resource>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <partition>/machine</partition>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </resource>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <sysinfo type='smbios'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <system>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <entry name='manufacturer'>RDO</entry>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <entry name='product'>OpenStack Compute</entry>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <entry name='serial'>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <entry name='uuid'>7f0d9214-39a5-458d-82db-dcbc7d61b8b5</entry>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <entry name='family'>Virtual Machine</entry>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </system>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <os>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <boot dev='hd'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <smbios mode='sysinfo'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </os>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <features>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <vmcoreinfo state='on'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </features>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <cpu mode='custom' match='exact' check='full'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <vendor>AMD</vendor>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='x2apic'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc-deadline'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='hypervisor'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='tsc_adjust'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='spec-ctrl'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='stibp'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='arch-capabilities'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='ssbd'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='cmp_legacy'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='overflow-recov'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='succor'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='ibrs'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='amd-ssbd'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='virt-ssbd'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='lbrv'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='tsc-scale'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='vmcb-clean'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='flushbyasid'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='pause-filter'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='pfthreshold'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='rdctl-no'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='mds-no'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='pschange-mc-no'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='gds-no'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='rfds-no'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='xsaves'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='svm'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='require' name='topoext'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='npt'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <feature policy='disable' name='nrip-save'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <clock offset='utc'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <timer name='pit' tickpolicy='delay'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <timer name='hpet' present='no'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <on_poweroff>destroy</on_poweroff>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <on_reboot>restart</on_reboot>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <on_crash>destroy</on_crash>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <disk type='network' device='disk'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk' index='2'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       </source>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target dev='vda' bus='virtio'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='virtio-disk0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <disk type='network' device='cdrom'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <driver name='qemu' type='raw' cache='none'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <auth username='openstack'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:         <secret type='ceph' uuid='33219f8b-dc38-5a8f-a577-8ccc4b37190a'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <source protocol='rbd' name='vms/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_disk.config' index='1'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:         <host name='192.168.122.100' port='6789'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       </source>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target dev='sda' bus='sata'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <readonly/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='sata0-0-0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='0' model='pcie-root'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pcie.0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='1' port='0x10'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.1'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='2' port='0x11'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.2'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='3' port='0x12'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.3'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='4' port='0x13'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.4'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='5' port='0x14'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.5'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='6' port='0x15'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.6'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='7' port='0x16'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.7'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='8' port='0x17'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.8'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='9' port='0x18'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.9'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='10' port='0x19'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.10'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='11' port='0x1a'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.11'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='12' port='0x1b'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.12'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='13' port='0x1c'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.13'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='14' port='0x1d'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.14'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='15' port='0x1e'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.15'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='16' port='0x1f'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.16'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='17' port='0x20'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.17'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='18' port='0x21'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.18'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='19' port='0x22'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.19'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='20' port='0x23'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.20'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='21' port='0x24'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.21'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='22' port='0x25'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.22'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='23' port='0x26'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.23'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='24' port='0x27'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.24'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-root-port'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target chassis='25' port='0x28'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.25'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model name='pcie-pci-bridge'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='pci.26'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='usb'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <controller type='sata' index='0'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='ide'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </controller>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <interface type='ethernet'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <mac address='fa:16:3e:38:a7:f5'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target dev='tap6aa7ac72-3e'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model type='virtio'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <driver name='vhost' rx_queue_size='512'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <mtu size='1442'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='net0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <serial type='pty'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <source path='/dev/pts/2'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log' append='off'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target type='isa-serial' port='0'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:         <model name='isa-serial'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       </target>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <console type='pty' tty='/dev/pts/2'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <source path='/dev/pts/2'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <log file='/var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5/console.log' append='off'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <target type='serial' port='0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='serial0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </console>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <input type='tablet' bus='usb'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='input0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='usb' bus='0' port='1'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </input>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <input type='mouse' bus='ps2'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='input1'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </input>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <input type='keyboard' bus='ps2'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='input2'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </input>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <listen type='address' address='::0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </graphics>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <audio id='1' type='none'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <video>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <model type='virtio' heads='1' primary='yes'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='video0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </video>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <watchdog model='itco' action='reset'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='watchdog0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </watchdog>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <memballoon model='virtio'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <stats period='10'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='balloon0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <rng model='virtio'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <backend model='random'>/dev/urandom</backend>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <alias name='rng0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <label>system_u:system_r:svirt_t:s0:c329,c594</label>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c329,c594</imagelabel>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <label>+107:+107</label>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <imagelabel>+107:+107</imagelabel>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </seclabel>
Oct 11 09:22:58 compute-0 nova_compute[260935]: </domain>
Oct 11 09:22:58 compute-0 nova_compute[260935]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.712 2 WARNING nova.virt.libvirt.driver [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Detaching interface fa:16:3e:f8:68:88 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap300c071c-a3' not found.
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.712 2 DEBUG nova.virt.libvirt.vif [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:20:58Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.713 2 DEBUG nova.network.os_vif_util [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "address": "fa:16:3e:f8:68:88", "network": {"id": "428d818e-c08a-4eef-be62-24fe484fed05", "bridge": "br-int", "label": "tempest-network-smoke--2064721736", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap300c071c-a3", "ovs_interfaceid": "300c071c-a312-4a9b-bd7a-1b16b9a35ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.713 2 DEBUG nova.network.os_vif_util [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.713 2 DEBUG os_vif [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.715 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap300c071c-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.716 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.718 2 INFO os_vif [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:68:88,bridge_name='br-int',has_traffic_filtering=True,id=300c071c-a312-4a9b-bd7a-1b16b9a35ae6,network=Network(428d818e-c08a-4eef-be62-24fe484fed05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap300c071c-a3')
Oct 11 09:22:58 compute-0 nova_compute[260935]: 2025-10-11 09:22:58.718 2 DEBUG nova.virt.libvirt.guest [req-d761917b-9a4c-4ef8-8611-f8ade2a85eea req-fd76771f-3126-4a7f-9dba-e7975cd449b1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:name>tempest-TestNetworkBasicOps-server-1960711580</nova:name>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 09:22:58</nova:creationTime>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   <nova:ports>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     <nova:port uuid="6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5">
Oct 11 09:22:58 compute-0 nova_compute[260935]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 09:22:58 compute-0 nova_compute[260935]:     </nova:port>
Oct 11 09:22:58 compute-0 nova_compute[260935]:   </nova:ports>
Oct 11 09:22:58 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 09:22:58 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 09:22:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2434: 321 pgs: 321 active+clean; 479 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Oct 11 09:22:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:22:59 compute-0 ceph-mon[74313]: pgmap v2434: 321 pgs: 321 active+clean; 479 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Oct 11 09:23:00 compute-0 ovn_controller[152945]: 2025-10-11T09:23:00Z|01269|binding|INFO|Releasing lport e89e8ea5-8038-433d-8c45-d2ec20f4f896 from this chassis (sb_readonly=0)
Oct 11 09:23:00 compute-0 ovn_controller[152945]: 2025-10-11T09:23:00Z|01270|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:23:00 compute-0 ovn_controller[152945]: 2025-10-11T09:23:00Z|01271|binding|INFO|Releasing lport 89155f05-1b39-4918-893c-15a2cd2a9493 from this chassis (sb_readonly=0)
Oct 11 09:23:00 compute-0 ovn_controller[152945]: 2025-10-11T09:23:00Z|01272|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:23:00 compute-0 nova_compute[260935]: 2025-10-11 09:23:00.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:00 compute-0 nova_compute[260935]: 2025-10-11 09:23:00.602 2 DEBUG nova.compute.manager [req-d9d24be2-e2b2-45a7-90fa-8a0a9a976df8 req-d0ca769d-c3a7-4043-91fa-2a02cc5e2cbc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:00 compute-0 nova_compute[260935]: 2025-10-11 09:23:00.603 2 DEBUG oslo_concurrency.lockutils [req-d9d24be2-e2b2-45a7-90fa-8a0a9a976df8 req-d0ca769d-c3a7-4043-91fa-2a02cc5e2cbc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:00 compute-0 nova_compute[260935]: 2025-10-11 09:23:00.604 2 DEBUG oslo_concurrency.lockutils [req-d9d24be2-e2b2-45a7-90fa-8a0a9a976df8 req-d0ca769d-c3a7-4043-91fa-2a02cc5e2cbc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:00 compute-0 nova_compute[260935]: 2025-10-11 09:23:00.605 2 DEBUG oslo_concurrency.lockutils [req-d9d24be2-e2b2-45a7-90fa-8a0a9a976df8 req-d0ca769d-c3a7-4043-91fa-2a02cc5e2cbc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:00 compute-0 nova_compute[260935]: 2025-10-11 09:23:00.606 2 DEBUG nova.compute.manager [req-d9d24be2-e2b2-45a7-90fa-8a0a9a976df8 req-d0ca769d-c3a7-4043-91fa-2a02cc5e2cbc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] No waiting events found dispatching network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:23:00 compute-0 nova_compute[260935]: 2025-10-11 09:23:00.606 2 WARNING nova.compute.manager [req-d9d24be2-e2b2-45a7-90fa-8a0a9a976df8 req-d0ca769d-c3a7-4043-91fa-2a02cc5e2cbc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received unexpected event network-vif-plugged-300c071c-a312-4a9b-bd7a-1b16b9a35ae6 for instance with vm_state active and task_state None.
Oct 11 09:23:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2435: 321 pgs: 321 active+clean; 479 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.0 MiB/s wr, 78 op/s
Oct 11 09:23:01 compute-0 nova_compute[260935]: 2025-10-11 09:23:01.593 2 INFO nova.network.neutron [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Port 300c071c-a312-4a9b-bd7a-1b16b9a35ae6 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 11 09:23:01 compute-0 nova_compute[260935]: 2025-10-11 09:23:01.594 2 DEBUG nova.network.neutron [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:23:01 compute-0 nova_compute[260935]: 2025-10-11 09:23:01.692 2 DEBUG oslo_concurrency.lockutils [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:23:01 compute-0 nova_compute[260935]: 2025-10-11 09:23:01.746 2 DEBUG oslo_concurrency.lockutils [None req-2532b6af-ef32-4bc7-afc5-a76cc640509f dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "interface-7f0d9214-39a5-458d-82db-dcbc7d61b8b5-300c071c-a312-4a9b-bd7a-1b16b9a35ae6" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:01 compute-0 nova_compute[260935]: 2025-10-11 09:23:01.813 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:01 compute-0 nova_compute[260935]: 2025-10-11 09:23:01.814 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:01 compute-0 nova_compute[260935]: 2025-10-11 09:23:01.815 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:01 compute-0 nova_compute[260935]: 2025-10-11 09:23:01.815 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:01 compute-0 nova_compute[260935]: 2025-10-11 09:23:01.816 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:01 compute-0 nova_compute[260935]: 2025-10-11 09:23:01.817 2 INFO nova.compute.manager [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Terminating instance
Oct 11 09:23:01 compute-0 nova_compute[260935]: 2025-10-11 09:23:01.819 2 DEBUG nova.compute.manager [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:23:01 compute-0 ceph-mon[74313]: pgmap v2435: 321 pgs: 321 active+clean; 479 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.0 MiB/s wr, 78 op/s
Oct 11 09:23:01 compute-0 kernel: tap6aa7ac72-3e (unregistering): left promiscuous mode
Oct 11 09:23:01 compute-0 NetworkManager[44960]: <info>  [1760174581.9206] device (tap6aa7ac72-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:23:01 compute-0 ovn_controller[152945]: 2025-10-11T09:23:01Z|01273|binding|INFO|Releasing lport 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 from this chassis (sb_readonly=0)
Oct 11 09:23:01 compute-0 ovn_controller[152945]: 2025-10-11T09:23:01Z|01274|binding|INFO|Setting lport 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 down in Southbound
Oct 11 09:23:01 compute-0 ovn_controller[152945]: 2025-10-11T09:23:01Z|01275|binding|INFO|Removing iface tap6aa7ac72-3e ovn-installed in OVS
Oct 11 09:23:01 compute-0 nova_compute[260935]: 2025-10-11 09:23:01.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:01 compute-0 nova_compute[260935]: 2025-10-11 09:23:01.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:01.956 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:a7:f5 10.100.0.5'], port_security=['fa:16:3e:38:a7:f5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7f0d9214-39a5-458d-82db-dcbc7d61b8b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fb90d02-96cd-4920-92ac-462cc457cb11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5dc25595-7367-4d0c-a935-a850b2363806', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf7e0c92-efbd-4ce9-a9f4-9c0c91150cdc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:23:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:01.958 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 in datapath 6fb90d02-96cd-4920-92ac-462cc457cb11 unbound from our chassis
Oct 11 09:23:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:01.961 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6fb90d02-96cd-4920-92ac-462cc457cb11, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:23:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:01.963 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[927cfde9-cfa0-4bc5-bee3-8dc92f603b3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:01.964 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11 namespace which is not needed anymore
Oct 11 09:23:01 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000075.scope: Deactivated successfully.
Oct 11 09:23:01 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000075.scope: Consumed 19.513s CPU time.
Oct 11 09:23:01 compute-0 nova_compute[260935]: 2025-10-11 09:23:01.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:01 compute-0 systemd-machined[215705]: Machine qemu-140-instance-00000075 terminated.
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.056 2 INFO nova.virt.libvirt.driver [-] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Instance destroyed successfully.
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.057 2 DEBUG nova.objects.instance [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.075 2 DEBUG nova.virt.libvirt.vif [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:20:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1960711580',display_name='tempest-TestNetworkBasicOps-server-1960711580',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1960711580',id=117,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5Wlq36JtMeAih7avX8wXRrPqfFsSPD2izoTRA0/VeN08ZY174fYPsstqVdaqAprTgQ0B4WJKyd87FK5YPL+XzWekXrwbc+R4XTrvOSv6dJKGu7vh0OlJADJW05rfop0g==',key_name='tempest-TestNetworkBasicOps-1691611342',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-9mefs4w8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:20:58Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=7f0d9214-39a5-458d-82db-dcbc7d61b8b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.075 2 DEBUG nova.network.os_vif_util [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.076 2 DEBUG nova.network.os_vif_util [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:38:a7:f5,bridge_name='br-int',has_traffic_filtering=True,id=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5,network=Network(6fb90d02-96cd-4920-92ac-462cc457cb11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aa7ac72-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.077 2 DEBUG os_vif [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:a7:f5,bridge_name='br-int',has_traffic_filtering=True,id=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5,network=Network(6fb90d02-96cd-4920-92ac-462cc457cb11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aa7ac72-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.079 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6aa7ac72-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.086 2 INFO os_vif [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:a7:f5,bridge_name='br-int',has_traffic_filtering=True,id=6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5,network=Network(6fb90d02-96cd-4920-92ac-462cc457cb11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aa7ac72-3e')
Oct 11 09:23:02 compute-0 neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11[388501]: [NOTICE]   (388521) : haproxy version is 2.8.14-c23fe91
Oct 11 09:23:02 compute-0 neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11[388501]: [NOTICE]   (388521) : path to executable is /usr/sbin/haproxy
Oct 11 09:23:02 compute-0 neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11[388501]: [WARNING]  (388521) : Exiting Master process...
Oct 11 09:23:02 compute-0 neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11[388501]: [WARNING]  (388521) : Exiting Master process...
Oct 11 09:23:02 compute-0 neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11[388501]: [ALERT]    (388521) : Current worker (388525) exited with code 143 (Terminated)
Oct 11 09:23:02 compute-0 neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11[388501]: [WARNING]  (388521) : All workers exited. Exiting... (0)
Oct 11 09:23:02 compute-0 systemd[1]: libpod-ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96.scope: Deactivated successfully.
Oct 11 09:23:02 compute-0 podman[392300]: 2025-10-11 09:23:02.201055344 +0000 UTC m=+0.058734089 container died ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.222 2 DEBUG nova.compute.manager [req-18ccc87d-3505-4a04-a267-f8fa1c1e9df1 req-d6260500-6674-450c-b1ad-cd017fc13d6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-unplugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.222 2 DEBUG oslo_concurrency.lockutils [req-18ccc87d-3505-4a04-a267-f8fa1c1e9df1 req-d6260500-6674-450c-b1ad-cd017fc13d6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.223 2 DEBUG oslo_concurrency.lockutils [req-18ccc87d-3505-4a04-a267-f8fa1c1e9df1 req-d6260500-6674-450c-b1ad-cd017fc13d6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.224 2 DEBUG oslo_concurrency.lockutils [req-18ccc87d-3505-4a04-a267-f8fa1c1e9df1 req-d6260500-6674-450c-b1ad-cd017fc13d6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.224 2 DEBUG nova.compute.manager [req-18ccc87d-3505-4a04-a267-f8fa1c1e9df1 req-d6260500-6674-450c-b1ad-cd017fc13d6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] No waiting events found dispatching network-vif-unplugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.224 2 DEBUG nova.compute.manager [req-18ccc87d-3505-4a04-a267-f8fa1c1e9df1 req-d6260500-6674-450c-b1ad-cd017fc13d6a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-unplugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:23:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96-userdata-shm.mount: Deactivated successfully.
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2806711dd2ed8736dcb901ac2aef71c255be2695bfa4fe140c6bb3417babc2b-merged.mount: Deactivated successfully.
Oct 11 09:23:02 compute-0 podman[392300]: 2025-10-11 09:23:02.348776173 +0000 UTC m=+0.206454898 container cleanup ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:23:02 compute-0 systemd[1]: libpod-conmon-ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96.scope: Deactivated successfully.
Oct 11 09:23:02 compute-0 podman[392334]: 2025-10-11 09:23:02.481653982 +0000 UTC m=+0.095473425 container remove ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 09:23:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.492 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e350c72e-11b8-4eff-a6c9-071a33b1d376]: (4, ('Sat Oct 11 09:23:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11 (ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96)\nba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96\nSat Oct 11 09:23:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11 (ba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96)\nba90e074b51be5207e984f89886b0437dbdf15c3e7ac9ee48d3902e875c55c96\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.494 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9de6d340-d1f0-408d-9c79-4e1c99d4c8ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.496 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fb90d02-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:02 compute-0 kernel: tap6fb90d02-90: left promiscuous mode
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.533 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[844500d5-00c1-445e-9ede-9a138344b3e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.564 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e90cbbb8-09cb-42f8-9482-2c04c1676201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.565 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb46361-f0d0-477e-acd9-c9dda64c92b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.584 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e760ce03-86c8-4106-9c8e-5d29fd62ae55]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 629651, 'reachable_time': 25072, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392349, 'error': None, 'target': 'ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.586 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6fb90d02-96cd-4920-92ac-462cc457cb11 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:23:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:02.586 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[54c79994-9b63-4bea-aec0-c790374633b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d6fb90d02\x2d96cd\x2d4920\x2d92ac\x2d462cc457cb11.mount: Deactivated successfully.
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.799 2 DEBUG nova.compute.manager [req-7a3dbc67-4fdf-4e7a-bcc1-f05077cf79be req-38f85783-19d8-40b7-adbb-260af21e9782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-changed-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2436: 321 pgs: 321 active+clean; 435 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.799 2 DEBUG nova.compute.manager [req-7a3dbc67-4fdf-4e7a-bcc1-f05077cf79be req-38f85783-19d8-40b7-adbb-260af21e9782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing instance network info cache due to event network-changed-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.800 2 DEBUG oslo_concurrency.lockutils [req-7a3dbc67-4fdf-4e7a-bcc1-f05077cf79be req-38f85783-19d8-40b7-adbb-260af21e9782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.801 2 DEBUG oslo_concurrency.lockutils [req-7a3dbc67-4fdf-4e7a-bcc1-f05077cf79be req-38f85783-19d8-40b7-adbb-260af21e9782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:23:02 compute-0 nova_compute[260935]: 2025-10-11 09:23:02.801 2 DEBUG nova.network.neutron [req-7a3dbc67-4fdf-4e7a-bcc1-f05077cf79be req-38f85783-19d8-40b7-adbb-260af21e9782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Refreshing network info cache for port 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:23:03 compute-0 nova_compute[260935]: 2025-10-11 09:23:03.698 2 INFO nova.virt.libvirt.driver [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Deleting instance files /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_del
Oct 11 09:23:03 compute-0 nova_compute[260935]: 2025-10-11 09:23:03.700 2 INFO nova.virt.libvirt.driver [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Deletion of /var/lib/nova/instances/7f0d9214-39a5-458d-82db-dcbc7d61b8b5_del complete
Oct 11 09:23:03 compute-0 nova_compute[260935]: 2025-10-11 09:23:03.768 2 INFO nova.compute.manager [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Took 1.95 seconds to destroy the instance on the hypervisor.
Oct 11 09:23:03 compute-0 nova_compute[260935]: 2025-10-11 09:23:03.769 2 DEBUG oslo.service.loopingcall [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:23:03 compute-0 nova_compute[260935]: 2025-10-11 09:23:03.769 2 DEBUG nova.compute.manager [-] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:23:03 compute-0 nova_compute[260935]: 2025-10-11 09:23:03.770 2 DEBUG nova.network.neutron [-] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:23:03 compute-0 sudo[392351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:23:03 compute-0 sudo[392351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:03 compute-0 sudo[392351]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:03 compute-0 sudo[392376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:23:03 compute-0 sudo[392376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:03 compute-0 sudo[392376]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:03 compute-0 ceph-mon[74313]: pgmap v2436: 321 pgs: 321 active+clean; 435 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 11 09:23:03 compute-0 sudo[392401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:23:03 compute-0 sudo[392401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:03 compute-0 sudo[392401]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:04 compute-0 sudo[392426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:23:04 compute-0 sudo[392426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:23:04 compute-0 nova_compute[260935]: 2025-10-11 09:23:04.336 2 DEBUG nova.compute.manager [req-4a7abc6c-d9f6-470d-a701-40b10add4bec req-0c92b593-5f94-4aea-8b30-b80622bfa869 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:04 compute-0 nova_compute[260935]: 2025-10-11 09:23:04.339 2 DEBUG oslo_concurrency.lockutils [req-4a7abc6c-d9f6-470d-a701-40b10add4bec req-0c92b593-5f94-4aea-8b30-b80622bfa869 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:04 compute-0 nova_compute[260935]: 2025-10-11 09:23:04.339 2 DEBUG oslo_concurrency.lockutils [req-4a7abc6c-d9f6-470d-a701-40b10add4bec req-0c92b593-5f94-4aea-8b30-b80622bfa869 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:04 compute-0 nova_compute[260935]: 2025-10-11 09:23:04.340 2 DEBUG oslo_concurrency.lockutils [req-4a7abc6c-d9f6-470d-a701-40b10add4bec req-0c92b593-5f94-4aea-8b30-b80622bfa869 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:04 compute-0 nova_compute[260935]: 2025-10-11 09:23:04.340 2 DEBUG nova.compute.manager [req-4a7abc6c-d9f6-470d-a701-40b10add4bec req-0c92b593-5f94-4aea-8b30-b80622bfa869 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] No waiting events found dispatching network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:23:04 compute-0 nova_compute[260935]: 2025-10-11 09:23:04.341 2 WARNING nova.compute.manager [req-4a7abc6c-d9f6-470d-a701-40b10add4bec req-0c92b593-5f94-4aea-8b30-b80622bfa869 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received unexpected event network-vif-plugged-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 for instance with vm_state active and task_state deleting.
Oct 11 09:23:04 compute-0 nova_compute[260935]: 2025-10-11 09:23:04.558 2 DEBUG nova.network.neutron [req-7a3dbc67-4fdf-4e7a-bcc1-f05077cf79be req-38f85783-19d8-40b7-adbb-260af21e9782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updated VIF entry in instance network info cache for port 6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:23:04 compute-0 nova_compute[260935]: 2025-10-11 09:23:04.559 2 DEBUG nova.network.neutron [req-7a3dbc67-4fdf-4e7a-bcc1-f05077cf79be req-38f85783-19d8-40b7-adbb-260af21e9782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [{"id": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "address": "fa:16:3e:38:a7:f5", "network": {"id": "6fb90d02-96cd-4920-92ac-462cc457cb11", "bridge": "br-int", "label": "tempest-network-smoke--1169953700", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aa7ac72-3e", "ovs_interfaceid": "6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:23:04 compute-0 sudo[392426]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:04 compute-0 nova_compute[260935]: 2025-10-11 09:23:04.732 2 DEBUG oslo_concurrency.lockutils [req-7a3dbc67-4fdf-4e7a-bcc1-f05077cf79be req-38f85783-19d8-40b7-adbb-260af21e9782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7f0d9214-39a5-458d-82db-dcbc7d61b8b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:23:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:23:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:23:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:23:04 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:23:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:23:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2437: 321 pgs: 321 active+clean; 435 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 11 09:23:04 compute-0 nova_compute[260935]: 2025-10-11 09:23:04.819 2 DEBUG nova.network.neutron [-] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:23:04 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:23:04 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 8094a4bb-e8c4-4343-bc23-168d38f91675 does not exist
Oct 11 09:23:04 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 32417e03-2432-4fe7-bb9a-c0e00b98d124 does not exist
Oct 11 09:23:04 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 70c01afb-a987-45a4-8e14-25795668cd20 does not exist
Oct 11 09:23:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:23:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:23:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:23:04 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:23:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:23:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:23:04 compute-0 sudo[392480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:23:04 compute-0 nova_compute[260935]: 2025-10-11 09:23:04.919 2 INFO nova.compute.manager [-] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Took 1.15 seconds to deallocate network for instance.
Oct 11 09:23:04 compute-0 sudo[392480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:04 compute-0 sudo[392480]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:23:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:23:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:23:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:23:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:23:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:23:05 compute-0 sudo[392505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:23:05 compute-0 sudo[392505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:05 compute-0 sudo[392505]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:05 compute-0 nova_compute[260935]: 2025-10-11 09:23:05.083 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:05 compute-0 nova_compute[260935]: 2025-10-11 09:23:05.083 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:05 compute-0 sudo[392530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:23:05 compute-0 sudo[392530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:05 compute-0 sudo[392530]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:05 compute-0 sudo[392555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:23:05 compute-0 sudo[392555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0036581190580842614 of space, bias 1.0, pg target 1.0974357174252785 quantized to 32 (current 32)
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:23:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:23:05 compute-0 podman[392620]: 2025-10-11 09:23:05.59242548 +0000 UTC m=+0.044878258 container create b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 09:23:05 compute-0 systemd[1]: Started libpod-conmon-b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921.scope.
Oct 11 09:23:05 compute-0 podman[392620]: 2025-10-11 09:23:05.57469147 +0000 UTC m=+0.027144278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:23:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:23:05 compute-0 podman[392620]: 2025-10-11 09:23:05.700295164 +0000 UTC m=+0.152748002 container init b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 09:23:05 compute-0 podman[392620]: 2025-10-11 09:23:05.71253811 +0000 UTC m=+0.164990898 container start b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hertz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:23:05 compute-0 podman[392620]: 2025-10-11 09:23:05.71643237 +0000 UTC m=+0.168885208 container attach b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:23:05 compute-0 wizardly_hertz[392636]: 167 167
Oct 11 09:23:05 compute-0 systemd[1]: libpod-b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921.scope: Deactivated successfully.
Oct 11 09:23:05 compute-0 conmon[392636]: conmon b1b08078307f02b063f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921.scope/container/memory.events
Oct 11 09:23:05 compute-0 podman[392620]: 2025-10-11 09:23:05.721628716 +0000 UTC m=+0.174081504 container died b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 09:23:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-25ef04cb751192ca782db2fca021509eac38d668511699e7c913c96d00dc992f-merged.mount: Deactivated successfully.
Oct 11 09:23:05 compute-0 podman[392620]: 2025-10-11 09:23:05.764501846 +0000 UTC m=+0.216954634 container remove b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:23:05 compute-0 systemd[1]: libpod-conmon-b1b08078307f02b063f089e3564d68829344ce1ac3b416102ca491a156af2921.scope: Deactivated successfully.
Oct 11 09:23:05 compute-0 podman[392662]: 2025-10-11 09:23:05.992896902 +0000 UTC m=+0.043846259 container create 1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:23:06 compute-0 ceph-mon[74313]: pgmap v2437: 321 pgs: 321 active+clean; 435 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 11 09:23:06 compute-0 systemd[1]: Started libpod-conmon-1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab.scope.
Oct 11 09:23:06 compute-0 podman[392662]: 2025-10-11 09:23:05.975495801 +0000 UTC m=+0.026445178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:23:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839f2be87f634ab0890b8a4a2b0c38b2b8ced936778ee2b48dc7c930645c7343/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839f2be87f634ab0890b8a4a2b0c38b2b8ced936778ee2b48dc7c930645c7343/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839f2be87f634ab0890b8a4a2b0c38b2b8ced936778ee2b48dc7c930645c7343/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839f2be87f634ab0890b8a4a2b0c38b2b8ced936778ee2b48dc7c930645c7343/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839f2be87f634ab0890b8a4a2b0c38b2b8ced936778ee2b48dc7c930645c7343/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:23:06 compute-0 podman[392662]: 2025-10-11 09:23:06.106672262 +0000 UTC m=+0.157621669 container init 1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 09:23:06 compute-0 podman[392662]: 2025-10-11 09:23:06.122461348 +0000 UTC m=+0.173410705 container start 1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 09:23:06 compute-0 podman[392662]: 2025-10-11 09:23:06.12572413 +0000 UTC m=+0.176673497 container attach 1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:23:06 compute-0 nova_compute[260935]: 2025-10-11 09:23:06.131 2 DEBUG oslo_concurrency.processutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:06 compute-0 nova_compute[260935]: 2025-10-11 09:23:06.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:06 compute-0 sshd-session[392655]: Invalid user admin from 165.232.82.252 port 40584
Oct 11 09:23:06 compute-0 sshd-session[392655]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:23:06 compute-0 sshd-session[392655]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:23:06 compute-0 nova_compute[260935]: 2025-10-11 09:23:06.481 2 DEBUG nova.compute.manager [req-34c10473-3da8-4f74-a90a-b713284e2442 req-d140928c-3cbf-455c-9701-2bdd6c79e625 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Received event network-vif-deleted-6aa7ac72-3e8c-4881-ba5d-9e2fca0200c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:23:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2133143860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:23:06 compute-0 nova_compute[260935]: 2025-10-11 09:23:06.651 2 DEBUG oslo_concurrency.processutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:06 compute-0 nova_compute[260935]: 2025-10-11 09:23:06.660 2 DEBUG nova.compute.provider_tree [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:23:06 compute-0 nova_compute[260935]: 2025-10-11 09:23:06.693 2 DEBUG nova.scheduler.client.report [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:23:06 compute-0 nova_compute[260935]: 2025-10-11 09:23:06.732 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:06 compute-0 nova_compute[260935]: 2025-10-11 09:23:06.782 2 INFO nova.scheduler.client.report [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance 7f0d9214-39a5-458d-82db-dcbc7d61b8b5
Oct 11 09:23:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2438: 321 pgs: 321 active+clean; 435 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 11 09:23:06 compute-0 nova_compute[260935]: 2025-10-11 09:23:06.932 2 DEBUG oslo_concurrency.lockutils [None req-4687798b-5f06-4a27-a232-9e475c105426 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "7f0d9214-39a5-458d-82db-dcbc7d61b8b5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:07 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2133143860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:23:07 compute-0 nova_compute[260935]: 2025-10-11 09:23:07.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:07 compute-0 jovial_hodgkin[392678]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:23:07 compute-0 jovial_hodgkin[392678]: --> relative data size: 1.0
Oct 11 09:23:07 compute-0 jovial_hodgkin[392678]: --> All data devices are unavailable
Oct 11 09:23:07 compute-0 nova_compute[260935]: 2025-10-11 09:23:07.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:07 compute-0 systemd[1]: libpod-1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab.scope: Deactivated successfully.
Oct 11 09:23:07 compute-0 systemd[1]: libpod-1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab.scope: Consumed 1.123s CPU time.
Oct 11 09:23:07 compute-0 podman[392662]: 2025-10-11 09:23:07.322459002 +0000 UTC m=+1.373408379 container died 1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 09:23:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-839f2be87f634ab0890b8a4a2b0c38b2b8ced936778ee2b48dc7c930645c7343-merged.mount: Deactivated successfully.
Oct 11 09:23:07 compute-0 podman[392662]: 2025-10-11 09:23:07.712350215 +0000 UTC m=+1.763299602 container remove 1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 09:23:07 compute-0 systemd[1]: libpod-conmon-1ed3ff9ea95b1b4b2495b1c726289cf21b649473103867ed21abbca2c3954eab.scope: Deactivated successfully.
Oct 11 09:23:07 compute-0 sudo[392555]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:07 compute-0 sudo[392743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:23:07 compute-0 sudo[392743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:07 compute-0 sudo[392743]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:07 compute-0 nova_compute[260935]: 2025-10-11 09:23:07.948 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:07 compute-0 nova_compute[260935]: 2025-10-11 09:23:07.948 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:07 compute-0 nova_compute[260935]: 2025-10-11 09:23:07.948 2 INFO nova.compute.manager [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Shelving
Oct 11 09:23:07 compute-0 sudo[392768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:23:07 compute-0 sudo[392768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:07 compute-0 sudo[392768]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:07 compute-0 nova_compute[260935]: 2025-10-11 09:23:07.983 2 DEBUG nova.virt.libvirt.driver [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 11 09:23:08 compute-0 sudo[392793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:23:08 compute-0 sudo[392793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:08 compute-0 sudo[392793]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:08 compute-0 ceph-mon[74313]: pgmap v2438: 321 pgs: 321 active+clean; 435 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 11 09:23:08 compute-0 sudo[392818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:23:08 compute-0 sudo[392818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:08 compute-0 sshd-session[392655]: Failed password for invalid user admin from 165.232.82.252 port 40584 ssh2
Oct 11 09:23:08 compute-0 podman[392884]: 2025-10-11 09:23:08.625001301 +0000 UTC m=+0.048458099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:23:08 compute-0 podman[392884]: 2025-10-11 09:23:08.731290251 +0000 UTC m=+0.154746999 container create ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:23:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2439: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 414 KiB/s rd, 2.2 MiB/s wr, 124 op/s
Oct 11 09:23:08 compute-0 systemd[1]: Started libpod-conmon-ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e.scope.
Oct 11 09:23:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:23:08 compute-0 podman[392884]: 2025-10-11 09:23:08.962142045 +0000 UTC m=+0.385598853 container init ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:23:08 compute-0 podman[392884]: 2025-10-11 09:23:08.97435612 +0000 UTC m=+0.397812868 container start ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:23:08 compute-0 optimistic_goldberg[392900]: 167 167
Oct 11 09:23:08 compute-0 systemd[1]: libpod-ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e.scope: Deactivated successfully.
Oct 11 09:23:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:23:09 compute-0 nova_compute[260935]: 2025-10-11 09:23:09.110 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174574.1085095, c677661f-bc62-4954-9130-09b285e6abe4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:23:09 compute-0 nova_compute[260935]: 2025-10-11 09:23:09.111 2 INFO nova.compute.manager [-] [instance: c677661f-bc62-4954-9130-09b285e6abe4] VM Stopped (Lifecycle Event)
Oct 11 09:23:09 compute-0 podman[392884]: 2025-10-11 09:23:09.129364364 +0000 UTC m=+0.552821112 container attach ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 09:23:09 compute-0 podman[392884]: 2025-10-11 09:23:09.130227659 +0000 UTC m=+0.553684407 container died ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:23:09 compute-0 nova_compute[260935]: 2025-10-11 09:23:09.161 2 DEBUG nova.compute.manager [None req-f2693f11-7788-4e2b-a260-c8ed5e837a54 - - - - - -] [instance: c677661f-bc62-4954-9130-09b285e6abe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:23:09 compute-0 ceph-mon[74313]: pgmap v2439: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 414 KiB/s rd, 2.2 MiB/s wr, 124 op/s
Oct 11 09:23:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8c06a9ec4d65ac1ce622ffacb6483c974e9f78fd3eb9dd62df0dbf0263d5607-merged.mount: Deactivated successfully.
Oct 11 09:23:09 compute-0 podman[392884]: 2025-10-11 09:23:09.467421155 +0000 UTC m=+0.890877913 container remove ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:23:09 compute-0 nova_compute[260935]: 2025-10-11 09:23:09.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:09 compute-0 systemd[1]: libpod-conmon-ca08c74cb92cf7e1810dbe0459a61801c55f54abea4713a4d9d9de75d6ac309e.scope: Deactivated successfully.
Oct 11 09:23:09 compute-0 sshd-session[392655]: Connection closed by invalid user admin 165.232.82.252 port 40584 [preauth]
Oct 11 09:23:09 compute-0 podman[392924]: 2025-10-11 09:23:09.74861163 +0000 UTC m=+0.071210291 container create e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 09:23:09 compute-0 podman[392924]: 2025-10-11 09:23:09.703036254 +0000 UTC m=+0.025634995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:23:09 compute-0 systemd[1]: Started libpod-conmon-e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04.scope.
Oct 11 09:23:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:23:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0226c3908f538d448ec2055eafc18e02102b756bdc5e8f9a159bff3b95d37cd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:23:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0226c3908f538d448ec2055eafc18e02102b756bdc5e8f9a159bff3b95d37cd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:23:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0226c3908f538d448ec2055eafc18e02102b756bdc5e8f9a159bff3b95d37cd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:23:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0226c3908f538d448ec2055eafc18e02102b756bdc5e8f9a159bff3b95d37cd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:23:10 compute-0 podman[392924]: 2025-10-11 09:23:10.025291228 +0000 UTC m=+0.347889919 container init e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 09:23:10 compute-0 podman[392924]: 2025-10-11 09:23:10.040849427 +0000 UTC m=+0.363448108 container start e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 09:23:10 compute-0 podman[392924]: 2025-10-11 09:23:10.093175534 +0000 UTC m=+0.415774225 container attach e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tesla, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 09:23:10 compute-0 podman[392942]: 2025-10-11 09:23:10.148456734 +0000 UTC m=+0.191145386 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 11 09:23:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2440: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 125 KiB/s wr, 46 op/s
Oct 11 09:23:10 compute-0 trusting_tesla[392941]: {
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:     "0": [
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:         {
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "devices": [
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "/dev/loop3"
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             ],
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "lv_name": "ceph_lv0",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "lv_size": "21470642176",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "name": "ceph_lv0",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "tags": {
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.cluster_name": "ceph",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.crush_device_class": "",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.encrypted": "0",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.osd_id": "0",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.type": "block",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.vdo": "0"
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             },
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "type": "block",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "vg_name": "ceph_vg0"
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:         }
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:     ],
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:     "1": [
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:         {
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "devices": [
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "/dev/loop4"
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             ],
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "lv_name": "ceph_lv1",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "lv_size": "21470642176",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "name": "ceph_lv1",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "tags": {
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.cluster_name": "ceph",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.crush_device_class": "",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.encrypted": "0",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.osd_id": "1",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.type": "block",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.vdo": "0"
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             },
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "type": "block",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "vg_name": "ceph_vg1"
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:         }
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:     ],
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:     "2": [
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:         {
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "devices": [
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "/dev/loop5"
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             ],
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "lv_name": "ceph_lv2",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "lv_size": "21470642176",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "name": "ceph_lv2",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "tags": {
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.cluster_name": "ceph",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.crush_device_class": "",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.encrypted": "0",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.osd_id": "2",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.type": "block",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:                 "ceph.vdo": "0"
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             },
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "type": "block",
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:             "vg_name": "ceph_vg2"
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:         }
Oct 11 09:23:10 compute-0 trusting_tesla[392941]:     ]
Oct 11 09:23:10 compute-0 trusting_tesla[392941]: }
Oct 11 09:23:10 compute-0 systemd[1]: libpod-e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04.scope: Deactivated successfully.
Oct 11 09:23:10 compute-0 podman[392924]: 2025-10-11 09:23:10.846976567 +0000 UTC m=+1.169575258 container died e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tesla, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:23:11 compute-0 ceph-mon[74313]: pgmap v2440: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 125 KiB/s wr, 46 op/s
Oct 11 09:23:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-0226c3908f538d448ec2055eafc18e02102b756bdc5e8f9a159bff3b95d37cd2-merged.mount: Deactivated successfully.
Oct 11 09:23:11 compute-0 kernel: tap0f516e4b-c2 (unregistering): left promiscuous mode
Oct 11 09:23:11 compute-0 NetworkManager[44960]: <info>  [1760174591.4311] device (tap0f516e4b-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:23:11 compute-0 podman[392924]: 2025-10-11 09:23:11.434151346 +0000 UTC m=+1.756750017 container remove e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_tesla, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:23:11 compute-0 systemd[1]: libpod-conmon-e22244cd44f6fe2f2eb45e185154518469896254c043ec317e3bb655cd6aaf04.scope: Deactivated successfully.
Oct 11 09:23:11 compute-0 nova_compute[260935]: 2025-10-11 09:23:11.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:11 compute-0 ovn_controller[152945]: 2025-10-11T09:23:11Z|01276|binding|INFO|Releasing lport 0f516e4b-c284-4151-944c-8a7d98f695b5 from this chassis (sb_readonly=0)
Oct 11 09:23:11 compute-0 ovn_controller[152945]: 2025-10-11T09:23:11Z|01277|binding|INFO|Setting lport 0f516e4b-c284-4151-944c-8a7d98f695b5 down in Southbound
Oct 11 09:23:11 compute-0 ovn_controller[152945]: 2025-10-11T09:23:11Z|01278|binding|INFO|Removing iface tap0f516e4b-c2 ovn-installed in OVS
Oct 11 09:23:11 compute-0 nova_compute[260935]: 2025-10-11 09:23:11.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:11 compute-0 sudo[392818]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:11 compute-0 nova_compute[260935]: 2025-10-11 09:23:11.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:11.491 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:08:14 10.100.0.10'], port_security=['fa:16:3e:4b:08:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ef21f945-0076-48fa-8d22-c5376e26d278', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0007a0de-db42-4add-9b55-6d92ceffa860', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a13210f275984f3eadf85eba0c749d99', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd52e2b9-19bf-4137-a511-25dd1d4c9f0e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2271296-3ceb-4987-affd-a0b4da64fffe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0f516e4b-c284-4151-944c-8a7d98f695b5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:23:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:11.493 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0f516e4b-c284-4151-944c-8a7d98f695b5 in datapath 0007a0de-db42-4add-9b55-6d92ceffa860 unbound from our chassis
Oct 11 09:23:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:11.497 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0007a0de-db42-4add-9b55-6d92ceffa860, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:23:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:11.499 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c1644e13-6339-48e3-82df-9f9884596996]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:11.500 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 namespace which is not needed anymore
Oct 11 09:23:11 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000079.scope: Deactivated successfully.
Oct 11 09:23:11 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000079.scope: Consumed 13.635s CPU time.
Oct 11 09:23:11 compute-0 systemd-machined[215705]: Machine qemu-144-instance-00000079 terminated.
Oct 11 09:23:11 compute-0 sudo[392985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:23:11 compute-0 sudo[392985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:11 compute-0 sudo[392985]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:11 compute-0 sudo[393031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:23:11 compute-0 sudo[393031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:11 compute-0 sudo[393031]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:11 compute-0 sudo[393072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:23:11 compute-0 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[392049]: [NOTICE]   (392060) : haproxy version is 2.8.14-c23fe91
Oct 11 09:23:11 compute-0 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[392049]: [NOTICE]   (392060) : path to executable is /usr/sbin/haproxy
Oct 11 09:23:11 compute-0 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[392049]: [WARNING]  (392060) : Exiting Master process...
Oct 11 09:23:11 compute-0 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[392049]: [WARNING]  (392060) : Exiting Master process...
Oct 11 09:23:11 compute-0 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[392049]: [ALERT]    (392060) : Current worker (392064) exited with code 143 (Terminated)
Oct 11 09:23:11 compute-0 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[392049]: [WARNING]  (392060) : All workers exited. Exiting... (0)
Oct 11 09:23:11 compute-0 sudo[393072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:11 compute-0 sudo[393072]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:11 compute-0 systemd[1]: libpod-08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f.scope: Deactivated successfully.
Oct 11 09:23:11 compute-0 podman[393037]: 2025-10-11 09:23:11.774758319 +0000 UTC m=+0.140020053 container died 08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 09:23:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f-userdata-shm.mount: Deactivated successfully.
Oct 11 09:23:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f47c3c136e5a0070dae877380d4b02cb73021141c299a51e87e245e6899f06b-merged.mount: Deactivated successfully.
Oct 11 09:23:11 compute-0 sudo[393113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:23:11 compute-0 sudo[393113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:11 compute-0 podman[393037]: 2025-10-11 09:23:11.876782148 +0000 UTC m=+0.242043842 container cleanup 08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:23:11 compute-0 systemd[1]: libpod-conmon-08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f.scope: Deactivated successfully.
Oct 11 09:23:12 compute-0 nova_compute[260935]: 2025-10-11 09:23:12.015 2 INFO nova.virt.libvirt.driver [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance shutdown successfully after 4 seconds.
Oct 11 09:23:12 compute-0 nova_compute[260935]: 2025-10-11 09:23:12.026 2 INFO nova.virt.libvirt.driver [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance destroyed successfully.
Oct 11 09:23:12 compute-0 nova_compute[260935]: 2025-10-11 09:23:12.027 2 DEBUG nova.objects.instance [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'numa_topology' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:23:12 compute-0 nova_compute[260935]: 2025-10-11 09:23:12.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:12 compute-0 podman[393147]: 2025-10-11 09:23:12.155441402 +0000 UTC m=+0.242458184 container remove 08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:23:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.164 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c3b07c-b514-4c66-b58f-ec5d6c8d9651]: (4, ('Sat Oct 11 09:23:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 (08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f)\n08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f\nSat Oct 11 09:23:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 (08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f)\n08681398b84cfa03e0e290ee16c1ce0aff8a4babeb74f855ff91733dfa902f9f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.166 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6b72ecfd-38c0-4f85-b5b1-a0bd2e16006b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.167 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0007a0de-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:12 compute-0 nova_compute[260935]: 2025-10-11 09:23:12.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:12 compute-0 kernel: tap0007a0de-d0: left promiscuous mode
Oct 11 09:23:12 compute-0 nova_compute[260935]: 2025-10-11 09:23:12.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.204 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a2751334-c7c8-420d-865a-c0c4d3c808d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.237 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f4b2f6ad-1540-4f8e-a569-5b907cc19457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.239 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[30fa5a89-1639-4c9b-8ef1-742352cda691]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.263 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c4680da4-f00c-4135-b7a6-8bee5de5e433]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640873, 'reachable_time': 28634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393192, 'error': None, 'target': 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d0007a0de\x2ddb42\x2d4add\x2d9b55\x2d6d92ceffa860.mount: Deactivated successfully.
Oct 11 09:23:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.268 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:23:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:12.268 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[56cf7863-6362-4c02-a4da-2372abc8b69f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:12 compute-0 nova_compute[260935]: 2025-10-11 09:23:12.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:12 compute-0 nova_compute[260935]: 2025-10-11 09:23:12.420 2 INFO nova.virt.libvirt.driver [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Beginning cold snapshot process
Oct 11 09:23:12 compute-0 podman[393208]: 2025-10-11 09:23:12.425049161 +0000 UTC m=+0.046755111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:23:12 compute-0 podman[393208]: 2025-10-11 09:23:12.52495533 +0000 UTC m=+0.146661230 container create d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 09:23:12 compute-0 systemd[1]: Started libpod-conmon-d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565.scope.
Oct 11 09:23:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:23:12 compute-0 podman[393208]: 2025-10-11 09:23:12.706114702 +0000 UTC m=+0.327820592 container init d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:23:12 compute-0 podman[393208]: 2025-10-11 09:23:12.719394867 +0000 UTC m=+0.341100737 container start d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:23:12 compute-0 nova_compute[260935]: 2025-10-11 09:23:12.722 2 DEBUG nova.virt.libvirt.imagebackend [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 09:23:12 compute-0 cool_liskov[393225]: 167 167
Oct 11 09:23:12 compute-0 systemd[1]: libpod-d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565.scope: Deactivated successfully.
Oct 11 09:23:12 compute-0 podman[393208]: 2025-10-11 09:23:12.737739225 +0000 UTC m=+0.359445085 container attach d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:23:12 compute-0 podman[393208]: 2025-10-11 09:23:12.738192467 +0000 UTC m=+0.359898337 container died d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:23:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2441: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 134 KiB/s wr, 47 op/s
Oct 11 09:23:12 compute-0 nova_compute[260935]: 2025-10-11 09:23:12.910 2 DEBUG nova.storage.rbd_utils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] creating snapshot(72e2a98ef8e74bc4b5a1fdfe221cc8fb) on rbd image(ef21f945-0076-48fa-8d22-c5376e26d278_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 09:23:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-539b1bce0e1bd782e6a327b9702766efd164028e619bb38fda8ef4cc04283645-merged.mount: Deactivated successfully.
Oct 11 09:23:13 compute-0 podman[393208]: 2025-10-11 09:23:13.092119075 +0000 UTC m=+0.713824986 container remove d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:23:13 compute-0 systemd[1]: libpod-conmon-d854f82dc79dbf699c152eca924d7e3d148be03ff823b81c194fd632fc63a565.scope: Deactivated successfully.
Oct 11 09:23:13 compute-0 podman[393302]: 2025-10-11 09:23:13.331601014 +0000 UTC m=+0.032144748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:23:13 compute-0 podman[393302]: 2025-10-11 09:23:13.432690907 +0000 UTC m=+0.133234621 container create 6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ritchie, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 09:23:13 compute-0 systemd[1]: Started libpod-conmon-6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca.scope.
Oct 11 09:23:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:23:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d994243090408733e9719bb7e50883484b107b9706393fff50ca657ec55c6895/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:23:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d994243090408733e9719bb7e50883484b107b9706393fff50ca657ec55c6895/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:23:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d994243090408733e9719bb7e50883484b107b9706393fff50ca657ec55c6895/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:23:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d994243090408733e9719bb7e50883484b107b9706393fff50ca657ec55c6895/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:23:13 compute-0 podman[393302]: 2025-10-11 09:23:13.644451823 +0000 UTC m=+0.344995547 container init 6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 09:23:13 compute-0 podman[393302]: 2025-10-11 09:23:13.660403073 +0000 UTC m=+0.360946767 container start 6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:23:13 compute-0 podman[393302]: 2025-10-11 09:23:13.668665106 +0000 UTC m=+0.369208790 container attach 6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 09:23:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Oct 11 09:23:13 compute-0 ceph-mon[74313]: pgmap v2441: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 134 KiB/s wr, 47 op/s
Oct 11 09:23:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Oct 11 09:23:14 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Oct 11 09:23:14 compute-0 zen_ritchie[393319]: {
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:         "osd_id": 2,
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:         "type": "bluestore"
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:     },
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:         "osd_id": 0,
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:         "type": "bluestore"
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:     },
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:         "osd_id": 1,
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:         "type": "bluestore"
Oct 11 09:23:14 compute-0 zen_ritchie[393319]:     }
Oct 11 09:23:14 compute-0 zen_ritchie[393319]: }
Oct 11 09:23:14 compute-0 systemd[1]: libpod-6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca.scope: Deactivated successfully.
Oct 11 09:23:14 compute-0 systemd[1]: libpod-6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca.scope: Consumed 1.123s CPU time.
Oct 11 09:23:14 compute-0 podman[393302]: 2025-10-11 09:23:14.781298585 +0000 UTC m=+1.481842279 container died 6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ritchie, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 09:23:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2443: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 47 KiB/s wr, 20 op/s
Oct 11 09:23:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-d994243090408733e9719bb7e50883484b107b9706393fff50ca657ec55c6895-merged.mount: Deactivated successfully.
Oct 11 09:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:15.220 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:15.222 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:15.222 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:15 compute-0 ceph-mon[74313]: osdmap e283: 3 total, 3 up, 3 in
Oct 11 09:23:15 compute-0 ceph-mon[74313]: pgmap v2443: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 47 KiB/s wr, 20 op/s
Oct 11 09:23:15 compute-0 nova_compute[260935]: 2025-10-11 09:23:15.385 2 DEBUG nova.storage.rbd_utils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] cloning vms/ef21f945-0076-48fa-8d22-c5376e26d278_disk@72e2a98ef8e74bc4b5a1fdfe221cc8fb to images/a24df9e3-57ef-4be8-af9c-14e8b8d2e436 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 09:23:15 compute-0 podman[393302]: 2025-10-11 09:23:15.536846747 +0000 UTC m=+2.237390441 container remove 6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ritchie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:23:15 compute-0 systemd[1]: libpod-conmon-6072704a3c03a584729f2efece58af20dee5a82ea93bef251fdf8dfbdcaec5ca.scope: Deactivated successfully.
Oct 11 09:23:15 compute-0 sudo[393113]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:15 compute-0 podman[393364]: 2025-10-11 09:23:15.60145121 +0000 UTC m=+0.422387481 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid)
Oct 11 09:23:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:23:15 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:23:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:23:15 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:23:15 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 43b08548-e02c-4a3c-b85c-af18422ddb35 does not exist
Oct 11 09:23:15 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 012a3b34-c7d9-4f1c-9935-3324c3f0d104 does not exist
Oct 11 09:23:15 compute-0 nova_compute[260935]: 2025-10-11 09:23:15.727 2 DEBUG nova.storage.rbd_utils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] flattening images/a24df9e3-57ef-4be8-af9c-14e8b8d2e436 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 09:23:15 compute-0 ovn_controller[152945]: 2025-10-11T09:23:15Z|01279|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:23:15 compute-0 ovn_controller[152945]: 2025-10-11T09:23:15Z|01280|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:23:15 compute-0 sudo[393416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:23:15 compute-0 sudo[393416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:15 compute-0 sudo[393416]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:15 compute-0 nova_compute[260935]: 2025-10-11 09:23:15.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:15 compute-0 sudo[393462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:23:15 compute-0 sudo[393462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:23:15 compute-0 sudo[393462]: pam_unix(sudo:session): session closed for user root
Oct 11 09:23:16 compute-0 ovn_controller[152945]: 2025-10-11T09:23:16Z|01281|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:23:16 compute-0 ovn_controller[152945]: 2025-10-11T09:23:16Z|01282|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:23:16 compute-0 nova_compute[260935]: 2025-10-11 09:23:16.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:16 compute-0 nova_compute[260935]: 2025-10-11 09:23:16.288 2 DEBUG nova.storage.rbd_utils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] removing snapshot(72e2a98ef8e74bc4b5a1fdfe221cc8fb) on rbd image(ef21f945-0076-48fa-8d22-c5376e26d278_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 09:23:16 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:23:16 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:23:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Oct 11 09:23:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Oct 11 09:23:16 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Oct 11 09:23:16 compute-0 nova_compute[260935]: 2025-10-11 09:23:16.703 2 DEBUG nova.storage.rbd_utils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] creating snapshot(snap) on rbd image(a24df9e3-57ef-4be8-af9c-14e8b8d2e436) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 09:23:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2445: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 KiB/s rd, 13 KiB/s wr, 2 op/s
Oct 11 09:23:17 compute-0 nova_compute[260935]: 2025-10-11 09:23:17.055 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174582.0539756, 7f0d9214-39a5-458d-82db-dcbc7d61b8b5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:23:17 compute-0 nova_compute[260935]: 2025-10-11 09:23:17.055 2 INFO nova.compute.manager [-] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] VM Stopped (Lifecycle Event)
Oct 11 09:23:17 compute-0 nova_compute[260935]: 2025-10-11 09:23:17.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:17 compute-0 nova_compute[260935]: 2025-10-11 09:23:17.214 2 DEBUG nova.compute.manager [None req-33bc5a34-5573-4828-a3ce-3e6852adde9d - - - - - -] [instance: 7f0d9214-39a5-458d-82db-dcbc7d61b8b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:23:17 compute-0 nova_compute[260935]: 2025-10-11 09:23:17.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Oct 11 09:23:17 compute-0 ceph-mon[74313]: osdmap e284: 3 total, 3 up, 3 in
Oct 11 09:23:17 compute-0 ceph-mon[74313]: pgmap v2445: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 KiB/s rd, 13 KiB/s wr, 2 op/s
Oct 11 09:23:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Oct 11 09:23:17 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Oct 11 09:23:18 compute-0 ceph-mon[74313]: osdmap e285: 3 total, 3 up, 3 in
Oct 11 09:23:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2447: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 145 op/s
Oct 11 09:23:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:23:19 compute-0 nova_compute[260935]: 2025-10-11 09:23:19.286 2 DEBUG nova.compute.manager [req-6b000ce9-8fca-41eb-8d25-95d0f23745e4 req-354de8e5-f7e1-42b1-8c53-b75791371861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-unplugged-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:19 compute-0 nova_compute[260935]: 2025-10-11 09:23:19.286 2 DEBUG oslo_concurrency.lockutils [req-6b000ce9-8fca-41eb-8d25-95d0f23745e4 req-354de8e5-f7e1-42b1-8c53-b75791371861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:19 compute-0 nova_compute[260935]: 2025-10-11 09:23:19.287 2 DEBUG oslo_concurrency.lockutils [req-6b000ce9-8fca-41eb-8d25-95d0f23745e4 req-354de8e5-f7e1-42b1-8c53-b75791371861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:19 compute-0 nova_compute[260935]: 2025-10-11 09:23:19.287 2 DEBUG oslo_concurrency.lockutils [req-6b000ce9-8fca-41eb-8d25-95d0f23745e4 req-354de8e5-f7e1-42b1-8c53-b75791371861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:19 compute-0 nova_compute[260935]: 2025-10-11 09:23:19.288 2 DEBUG nova.compute.manager [req-6b000ce9-8fca-41eb-8d25-95d0f23745e4 req-354de8e5-f7e1-42b1-8c53-b75791371861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] No waiting events found dispatching network-vif-unplugged-0f516e4b-c284-4151-944c-8a7d98f695b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:23:19 compute-0 nova_compute[260935]: 2025-10-11 09:23:19.288 2 WARNING nova.compute.manager [req-6b000ce9-8fca-41eb-8d25-95d0f23745e4 req-354de8e5-f7e1-42b1-8c53-b75791371861 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received unexpected event network-vif-unplugged-0f516e4b-c284-4151-944c-8a7d98f695b5 for instance with vm_state active and task_state shelving_image_uploading.
Oct 11 09:23:19 compute-0 nova_compute[260935]: 2025-10-11 09:23:19.621 2 INFO nova.virt.libvirt.driver [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Snapshot image upload complete
Oct 11 09:23:19 compute-0 nova_compute[260935]: 2025-10-11 09:23:19.621 2 DEBUG nova.compute.manager [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:23:19 compute-0 ceph-mon[74313]: pgmap v2447: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 145 op/s
Oct 11 09:23:19 compute-0 nova_compute[260935]: 2025-10-11 09:23:19.838 2 INFO nova.compute.manager [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Shelve offloading
Oct 11 09:23:19 compute-0 nova_compute[260935]: 2025-10-11 09:23:19.850 2 INFO nova.virt.libvirt.driver [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance destroyed successfully.
Oct 11 09:23:19 compute-0 nova_compute[260935]: 2025-10-11 09:23:19.850 2 DEBUG nova.compute.manager [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:23:19 compute-0 nova_compute[260935]: 2025-10-11 09:23:19.853 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:23:19 compute-0 nova_compute[260935]: 2025-10-11 09:23:19.854 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquired lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:23:19 compute-0 nova_compute[260935]: 2025-10-11 09:23:19.854 2 DEBUG nova.network.neutron [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:23:20 compute-0 podman[393525]: 2025-10-11 09:23:20.785439355 +0000 UTC m=+0.078788984 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 09:23:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2448: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 7.1 MiB/s wr, 134 op/s
Oct 11 09:23:20 compute-0 podman[393526]: 2025-10-11 09:23:20.836355052 +0000 UTC m=+0.130227806 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:23:21 compute-0 nova_compute[260935]: 2025-10-11 09:23:21.545 2 DEBUG nova.compute.manager [req-9800c47f-a5b0-4ac8-a3c4-51519968983c req-489711d1-1a57-4606-9c94-2d14340bb0c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:21 compute-0 nova_compute[260935]: 2025-10-11 09:23:21.546 2 DEBUG oslo_concurrency.lockutils [req-9800c47f-a5b0-4ac8-a3c4-51519968983c req-489711d1-1a57-4606-9c94-2d14340bb0c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:21 compute-0 nova_compute[260935]: 2025-10-11 09:23:21.546 2 DEBUG oslo_concurrency.lockutils [req-9800c47f-a5b0-4ac8-a3c4-51519968983c req-489711d1-1a57-4606-9c94-2d14340bb0c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:21 compute-0 nova_compute[260935]: 2025-10-11 09:23:21.547 2 DEBUG oslo_concurrency.lockutils [req-9800c47f-a5b0-4ac8-a3c4-51519968983c req-489711d1-1a57-4606-9c94-2d14340bb0c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:21 compute-0 nova_compute[260935]: 2025-10-11 09:23:21.547 2 DEBUG nova.compute.manager [req-9800c47f-a5b0-4ac8-a3c4-51519968983c req-489711d1-1a57-4606-9c94-2d14340bb0c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] No waiting events found dispatching network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:23:21 compute-0 nova_compute[260935]: 2025-10-11 09:23:21.548 2 WARNING nova.compute.manager [req-9800c47f-a5b0-4ac8-a3c4-51519968983c req-489711d1-1a57-4606-9c94-2d14340bb0c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received unexpected event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 for instance with vm_state shelved and task_state shelving_offloading.
Oct 11 09:23:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:21.551 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:23:00 2001:db8:0:1:f816:3eff:febe:2300 2001:db8::f816:3eff:febe:2300'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:febe:2300/64 2001:db8::f816:3eff:febe:2300/64', 'neutron:device_id': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75b51dc9-c211-445f-9e3e-d219efddef19, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c1fabca0-ae77-4e48-b93b-3023955db235) old=Port_Binding(mac=['fa:16:3e:be:23:00 2001:db8::f816:3eff:febe:2300'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:febe:2300/64', 'neutron:device_id': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:23:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:21.553 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c1fabca0-ae77-4e48-b93b-3023955db235 in datapath f0bc2c62-89ab-4ce1-9157-2273788b9018 updated
Oct 11 09:23:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:21.556 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f0bc2c62-89ab-4ce1-9157-2273788b9018, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:23:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:21.557 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d47ac421-e2c9-46b0-94ae-7a7cc09e003f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:21 compute-0 ceph-mon[74313]: pgmap v2448: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 7.1 MiB/s wr, 134 op/s
Oct 11 09:23:22 compute-0 nova_compute[260935]: 2025-10-11 09:23:22.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:22 compute-0 nova_compute[260935]: 2025-10-11 09:23:22.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2449: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 128 op/s
Oct 11 09:23:23 compute-0 nova_compute[260935]: 2025-10-11 09:23:23.234 2 DEBUG nova.network.neutron [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:23:23 compute-0 nova_compute[260935]: 2025-10-11 09:23:23.549 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Releasing lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:23:23 compute-0 ceph-mon[74313]: pgmap v2449: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 128 op/s
Oct 11 09:23:23 compute-0 nova_compute[260935]: 2025-10-11 09:23:23.956 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:23:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:23:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Oct 11 09:23:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Oct 11 09:23:24 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Oct 11 09:23:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2451: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 128 op/s
Oct 11 09:23:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:23:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:23:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:23:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:23:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:23:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:23:25 compute-0 ceph-mon[74313]: osdmap e286: 3 total, 3 up, 3 in
Oct 11 09:23:25 compute-0 ceph-mon[74313]: pgmap v2451: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 128 op/s
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.250 2 INFO nova.virt.libvirt.driver [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance destroyed successfully.
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.251 2 DEBUG nova.objects.instance [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'resources' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.367 2 DEBUG nova.virt.libvirt.vif [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1752003988',display_name='tempest-TestShelveInstance-server-1752003988',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1752003988',id=121,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBc59pqsdiIpcBTYIi36cHQ9pbnLBXoyhUafGO6McKtc7V938v9xkK/F0LUwOp/S6AlI8sKdzLvrGPTJbVDXHRRnlCu0B+lzAS2vfK523X9mqHyrpvhTkQghQuITH7BALg==',key_name='tempest-TestShelveInstance-1436929269',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:22:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a13210f275984f3eadf85eba0c749d99',ramdisk_id='',reservation_id='r-9i2gklxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-243029510',owner_user_name='tempest-TestShelveInstance-243029510-project-member',shelved_at='2025-10-11T09:23:19.621852',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='a24df9e3-57ef-4be8-af9c-14e8b8d2e436'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:23:12Z,user_data=None,user_id='67e20c1f7ae24f2f8b9e25e0d8ce61ca',uuid=ef21f945-0076-48fa-8d22-c5376e26d278,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.368 2 DEBUG nova.network.os_vif_util [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converting VIF {"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.369 2 DEBUG nova.network.os_vif_util [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.370 2 DEBUG os_vif [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.374 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f516e4b-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.380 2 DEBUG nova.compute.manager [req-8f7da495-29b3-4372-a7a8-3f80171eb406 req-27e2bb8d-43ab-471e-96ce-911b5f471617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.380 2 DEBUG nova.compute.manager [req-8f7da495-29b3-4372-a7a8-3f80171eb406 req-27e2bb8d-43ab-471e-96ce-911b5f471617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing instance network info cache due to event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.381 2 DEBUG oslo_concurrency.lockutils [req-8f7da495-29b3-4372-a7a8-3f80171eb406 req-27e2bb8d-43ab-471e-96ce-911b5f471617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.381 2 DEBUG oslo_concurrency.lockutils [req-8f7da495-29b3-4372-a7a8-3f80171eb406 req-27e2bb8d-43ab-471e-96ce-911b5f471617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.382 2 DEBUG nova.network.neutron [req-8f7da495-29b3-4372-a7a8-3f80171eb406 req-27e2bb8d-43ab-471e-96ce-911b5f471617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.389 2 INFO os_vif [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2')
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.693 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174591.69287, ef21f945-0076-48fa-8d22-c5376e26d278 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.694 2 INFO nova.compute.manager [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] VM Stopped (Lifecycle Event)
Oct 11 09:23:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:23:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3638457583' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:23:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:23:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3638457583' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:23:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3638457583' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:23:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3638457583' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:23:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2452: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 1.4 MiB/s wr, 53 op/s
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.906 2 DEBUG nova.compute.manager [None req-29dc8321-0927-4e5f-96aa-47cdcece444b - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.914 2 DEBUG nova.compute.manager [None req-29dc8321-0927-4e5f-96aa-47cdcece444b - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.982 2 INFO nova.compute.manager [None req-29dc8321-0927-4e5f-96aa-47cdcece444b - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.990 2 INFO nova.virt.libvirt.driver [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Deleting instance files /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278_del
Oct 11 09:23:26 compute-0 nova_compute[260935]: 2025-10-11 09:23:26.991 2 INFO nova.virt.libvirt.driver [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Deletion of /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278_del complete
Oct 11 09:23:27 compute-0 nova_compute[260935]: 2025-10-11 09:23:27.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:27 compute-0 nova_compute[260935]: 2025-10-11 09:23:27.319 2 INFO nova.scheduler.client.report [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Deleted allocations for instance ef21f945-0076-48fa-8d22-c5376e26d278
Oct 11 09:23:27 compute-0 nova_compute[260935]: 2025-10-11 09:23:27.531 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:27 compute-0 nova_compute[260935]: 2025-10-11 09:23:27.532 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:27 compute-0 nova_compute[260935]: 2025-10-11 09:23:27.648 2 DEBUG oslo_concurrency.processutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:27 compute-0 ceph-mon[74313]: pgmap v2452: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 1.4 MiB/s wr, 53 op/s
Oct 11 09:23:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:23:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241683377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:23:28 compute-0 nova_compute[260935]: 2025-10-11 09:23:28.130 2 DEBUG oslo_concurrency.processutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:28 compute-0 nova_compute[260935]: 2025-10-11 09:23:28.139 2 DEBUG nova.compute.provider_tree [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:23:28 compute-0 nova_compute[260935]: 2025-10-11 09:23:28.204 2 DEBUG nova.scheduler.client.report [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:23:28 compute-0 nova_compute[260935]: 2025-10-11 09:23:28.339 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:28 compute-0 nova_compute[260935]: 2025-10-11 09:23:28.480 2 DEBUG oslo_concurrency.lockutils [None req-b25b14ac-461b-4896-b3df-a299c81599b8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 20.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:28 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2241683377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:23:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2453: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 KiB/s wr, 48 op/s
Oct 11 09:23:28 compute-0 nova_compute[260935]: 2025-10-11 09:23:28.830 2 DEBUG nova.network.neutron [req-8f7da495-29b3-4372-a7a8-3f80171eb406 req-27e2bb8d-43ab-471e-96ce-911b5f471617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updated VIF entry in instance network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:23:28 compute-0 nova_compute[260935]: 2025-10-11 09:23:28.832 2 DEBUG nova.network.neutron [req-8f7da495-29b3-4372-a7a8-3f80171eb406 req-27e2bb8d-43ab-471e-96ce-911b5f471617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": null, "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:23:28 compute-0 nova_compute[260935]: 2025-10-11 09:23:28.880 2 DEBUG oslo_concurrency.lockutils [req-8f7da495-29b3-4372-a7a8-3f80171eb406 req-27e2bb8d-43ab-471e-96ce-911b5f471617 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:23:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:23:29 compute-0 ceph-mon[74313]: pgmap v2453: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 KiB/s wr, 48 op/s
Oct 11 09:23:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2454: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 KiB/s wr, 48 op/s
Oct 11 09:23:31 compute-0 nova_compute[260935]: 2025-10-11 09:23:31.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:31 compute-0 ceph-mon[74313]: pgmap v2454: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 KiB/s wr, 48 op/s
Oct 11 09:23:32 compute-0 nova_compute[260935]: 2025-10-11 09:23:32.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:32 compute-0 nova_compute[260935]: 2025-10-11 09:23:32.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:23:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2455: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1.4 KiB/s wr, 104 op/s
Oct 11 09:23:33 compute-0 nova_compute[260935]: 2025-10-11 09:23:33.346 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:33 compute-0 nova_compute[260935]: 2025-10-11 09:23:33.347 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:33 compute-0 nova_compute[260935]: 2025-10-11 09:23:33.347 2 INFO nova.compute.manager [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Unshelving
Oct 11 09:23:33 compute-0 nova_compute[260935]: 2025-10-11 09:23:33.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:23:33 compute-0 nova_compute[260935]: 2025-10-11 09:23:33.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:23:33 compute-0 nova_compute[260935]: 2025-10-11 09:23:33.879 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:33 compute-0 nova_compute[260935]: 2025-10-11 09:23:33.880 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:33 compute-0 ceph-mon[74313]: pgmap v2455: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1.4 KiB/s wr, 104 op/s
Oct 11 09:23:33 compute-0 nova_compute[260935]: 2025-10-11 09:23:33.885 2 DEBUG nova.objects.instance [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'pci_requests' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:23:33 compute-0 nova_compute[260935]: 2025-10-11 09:23:33.947 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:33 compute-0 nova_compute[260935]: 2025-10-11 09:23:33.947 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:34 compute-0 nova_compute[260935]: 2025-10-11 09:23:34.017 2 DEBUG nova.objects.instance [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'numa_topology' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:23:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:23:34 compute-0 nova_compute[260935]: 2025-10-11 09:23:34.200 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:23:34 compute-0 nova_compute[260935]: 2025-10-11 09:23:34.216 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:23:34 compute-0 nova_compute[260935]: 2025-10-11 09:23:34.216 2 INFO nova.compute.claims [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:23:34 compute-0 nova_compute[260935]: 2025-10-11 09:23:34.649 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:34 compute-0 nova_compute[260935]: 2025-10-11 09:23:34.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:23:34 compute-0 nova_compute[260935]: 2025-10-11 09:23:34.706 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 09:23:34 compute-0 nova_compute[260935]: 2025-10-11 09:23:34.792 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2456: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.3 KiB/s wr, 97 op/s
Oct 11 09:23:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:23:35 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3031109721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:23:35 compute-0 nova_compute[260935]: 2025-10-11 09:23:35.294 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:35 compute-0 nova_compute[260935]: 2025-10-11 09:23:35.302 2 DEBUG nova.compute.provider_tree [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:23:35 compute-0 nova_compute[260935]: 2025-10-11 09:23:35.326 2 DEBUG nova.scheduler.client.report [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:23:35 compute-0 nova_compute[260935]: 2025-10-11 09:23:35.389 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:35 compute-0 nova_compute[260935]: 2025-10-11 09:23:35.398 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:35 compute-0 nova_compute[260935]: 2025-10-11 09:23:35.410 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:23:35 compute-0 nova_compute[260935]: 2025-10-11 09:23:35.411 2 INFO nova.compute.claims [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:23:35 compute-0 nova_compute[260935]: 2025-10-11 09:23:35.620 2 INFO nova.network.neutron [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating port 0f516e4b-c284-4151-944c-8a7d98f695b5 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 11 09:23:35 compute-0 nova_compute[260935]: 2025-10-11 09:23:35.725 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:36 compute-0 ceph-mon[74313]: pgmap v2456: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.3 KiB/s wr, 97 op/s
Oct 11 09:23:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3031109721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:23:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:23:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1403048579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.186 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.198 2 DEBUG nova.compute.provider_tree [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.218 2 DEBUG nova.scheduler.client.report [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.253 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.255 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.265 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.267 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquired lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.267 2 DEBUG nova.network.neutron [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.313 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.314 2 DEBUG nova.network.neutron [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.336 2 INFO nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.343 2 DEBUG nova.compute.manager [req-c499fbc0-d96b-4fe6-8281-96b30038efca req-e987cbf7-a099-4f5b-a35c-df66cbc4c4ab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.344 2 DEBUG nova.compute.manager [req-c499fbc0-d96b-4fe6-8281-96b30038efca req-e987cbf7-a099-4f5b-a35c-df66cbc4c4ab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing instance network info cache due to event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.344 2 DEBUG oslo_concurrency.lockutils [req-c499fbc0-d96b-4fe6-8281-96b30038efca req-e987cbf7-a099-4f5b-a35c-df66cbc4c4ab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.359 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.469 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.471 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.472 2 INFO nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Creating image(s)
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.505 2 DEBUG nova.storage.rbd_utils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 85cf93a0-2068-4567-a399-b8d52e672913_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.534 2 DEBUG nova.storage.rbd_utils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 85cf93a0-2068-4567-a399-b8d52e672913_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.560 2 DEBUG nova.storage.rbd_utils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 85cf93a0-2068-4567-a399-b8d52e672913_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.564 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.608 2 DEBUG nova.policy [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.639 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.640 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.641 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.642 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.673 2 DEBUG nova.storage.rbd_utils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 85cf93a0-2068-4567-a399-b8d52e672913_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.677 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 85cf93a0-2068-4567-a399-b8d52e672913_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2457: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.2 KiB/s wr, 86 op/s
Oct 11 09:23:36 compute-0 nova_compute[260935]: 2025-10-11 09:23:36.844 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:23:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1403048579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:23:37 compute-0 nova_compute[260935]: 2025-10-11 09:23:37.088 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 85cf93a0-2068-4567-a399-b8d52e672913_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:37 compute-0 nova_compute[260935]: 2025-10-11 09:23:37.158 2 DEBUG nova.storage.rbd_utils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 85cf93a0-2068-4567-a399-b8d52e672913_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:23:37 compute-0 nova_compute[260935]: 2025-10-11 09:23:37.242 2 DEBUG nova.objects.instance [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 85cf93a0-2068-4567-a399-b8d52e672913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:23:37 compute-0 nova_compute[260935]: 2025-10-11 09:23:37.348 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:23:37 compute-0 nova_compute[260935]: 2025-10-11 09:23:37.348 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Ensure instance console log exists: /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:23:37 compute-0 nova_compute[260935]: 2025-10-11 09:23:37.348 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:37 compute-0 nova_compute[260935]: 2025-10-11 09:23:37.349 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:37 compute-0 nova_compute[260935]: 2025-10-11 09:23:37.349 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:37 compute-0 nova_compute[260935]: 2025-10-11 09:23:37.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:38 compute-0 ceph-mon[74313]: pgmap v2457: 321 pgs: 321 active+clean; 407 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.2 KiB/s wr, 86 op/s
Oct 11 09:23:38 compute-0 nova_compute[260935]: 2025-10-11 09:23:38.550 2 DEBUG nova.network.neutron [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Successfully created port: 85875c6f-3380-42bc-9e4b-0b66df391e0d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:23:38 compute-0 nova_compute[260935]: 2025-10-11 09:23:38.697 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:23:38 compute-0 nova_compute[260935]: 2025-10-11 09:23:38.801 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:23:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2458: 321 pgs: 321 active+clean; 453 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 11 09:23:38 compute-0 nova_compute[260935]: 2025-10-11 09:23:38.827 2 DEBUG nova.network.neutron [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:23:38 compute-0 nova_compute[260935]: 2025-10-11 09:23:38.876 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:38 compute-0 nova_compute[260935]: 2025-10-11 09:23:38.877 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:38 compute-0 nova_compute[260935]: 2025-10-11 09:23:38.877 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:38 compute-0 nova_compute[260935]: 2025-10-11 09:23:38.878 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:23:38 compute-0 nova_compute[260935]: 2025-10-11 09:23:38.878 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:38 compute-0 nova_compute[260935]: 2025-10-11 09:23:38.974 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Releasing lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:23:38 compute-0 nova_compute[260935]: 2025-10-11 09:23:38.977 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:23:38 compute-0 nova_compute[260935]: 2025-10-11 09:23:38.978 2 INFO nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Creating image(s)
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.011 2 DEBUG nova.storage.rbd_utils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.017 2 DEBUG nova.objects.instance [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.020 2 DEBUG oslo_concurrency.lockutils [req-c499fbc0-d96b-4fe6-8281-96b30038efca req-e987cbf7-a099-4f5b-a35c-df66cbc4c4ab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.021 2 DEBUG nova.network.neutron [req-c499fbc0-d96b-4fe6-8281-96b30038efca req-e987cbf7-a099-4f5b-a35c-df66cbc4c4ab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:23:39 compute-0 ceph-mon[74313]: pgmap v2458: 321 pgs: 321 active+clean; 453 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.094 2 DEBUG nova.storage.rbd_utils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.120 2 DEBUG nova.storage.rbd_utils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.124 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "d5adfb1452ade04ee58ac0834f4c40e7493c553e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.126 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "d5adfb1452ade04ee58ac0834f4c40e7493c553e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.307 2 DEBUG nova.network.neutron [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Successfully created port: f45a21da-34ce-448b-92cc-2639a191c755 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:23:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:23:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/342507186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.357 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.417 2 DEBUG nova.virt.libvirt.imagebackend [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Image locations are: [{'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/a24df9e3-57ef-4be8-af9c-14e8b8d2e436/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/a24df9e3-57ef-4be8-af9c-14e8b8d2e436/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.495 2 DEBUG nova.virt.libvirt.imagebackend [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Selected location: {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/a24df9e3-57ef-4be8-af9c-14e8b8d2e436/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.496 2 DEBUG nova.storage.rbd_utils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] cloning images/a24df9e3-57ef-4be8-af9c-14e8b8d2e436@snap to None/ef21f945-0076-48fa-8d22-c5376e26d278_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.651 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "d5adfb1452ade04ee58ac0834f4c40e7493c553e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.840 2 DEBUG nova.objects.instance [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'migration_context' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.846 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.847 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.847 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.852 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.852 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.856 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.856 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.946 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.946 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:39 compute-0 nova_compute[260935]: 2025-10-11 09:23:39.954 2 DEBUG nova.storage.rbd_utils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] flattening vms/ef21f945-0076-48fa-8d22-c5376e26d278_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 09:23:40 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/342507186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.078 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.249 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.249 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.255 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.257 2 INFO nova.compute.claims [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.311 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.312 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2928MB free_disk=59.80991744995117GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.312 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.366 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Image rbd:vms/ef21f945-0076-48fa-8d22-c5376e26d278_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.367 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.367 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Ensure instance console log exists: /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.368 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.368 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.368 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.371 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Start _get_guest_xml network_info=[{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T09:23:07Z,direct_url=<?>,disk_format='raw',id=a24df9e3-57ef-4be8-af9c-14e8b8d2e436,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1752003988-shelved',owner='a13210f275984f3eadf85eba0c749d99',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T09:23:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.374 2 WARNING nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.379 2 DEBUG nova.virt.libvirt.host [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.379 2 DEBUG nova.virt.libvirt.host [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.384 2 DEBUG nova.virt.libvirt.host [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.384 2 DEBUG nova.virt.libvirt.host [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.385 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.385 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T09:23:07Z,direct_url=<?>,disk_format='raw',id=a24df9e3-57ef-4be8-af9c-14e8b8d2e436,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1752003988-shelved',owner='a13210f275984f3eadf85eba0c749d99',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T09:23:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.385 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.386 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.386 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.386 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.386 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.387 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.387 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.387 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.387 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.387 2 DEBUG nova.virt.hardware [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.388 2 DEBUG nova.objects.instance [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.406 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.522 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:40 compute-0 podman[394080]: 2025-10-11 09:23:40.763933367 +0000 UTC m=+0.073301180 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:23:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2459: 321 pgs: 321 active+clean; 453 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Oct 11 09:23:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:23:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589241222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.881 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.912 2 DEBUG nova.storage.rbd_utils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:40 compute-0 nova_compute[260935]: 2025-10-11 09:23:40.916 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:23:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958725479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.061 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.073 2 DEBUG nova.compute.provider_tree [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:23:41 compute-0 ceph-mon[74313]: pgmap v2459: 321 pgs: 321 active+clean; 453 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Oct 11 09:23:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/589241222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:23:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2958725479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.107 2 DEBUG nova.scheduler.client.report [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.222 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.223 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.228 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.376 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.377 2 DEBUG nova.network.neutron [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:23:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:23:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481547396' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.403 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.405 2 DEBUG nova.virt.libvirt.vif [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1752003988',display_name='tempest-TestShelveInstance-server-1752003988',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1752003988',id=121,image_ref='a24df9e3-57ef-4be8-af9c-14e8b8d2e436',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1436929269',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:22:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='a13210f275984f3eadf85eba0c749d99',ramdisk_id='',reservation_id='r-9i2gklxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-243029510',owner_user_name='tempest-TestShelveInstance-243029510-project-member',shelved_at='2025-10-11T09:23:19.621852',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='a24df9e3-57ef-4be8-af9c-14e8b8d2e436'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:23:33Z,user_data=None,user_id='67e20c1f7ae24f2f8b9e25e0d8ce61ca',uuid=ef21f945-0076-48fa-8d22-c5376e26d278,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.405 2 DEBUG nova.network.os_vif_util [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converting VIF {"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.407 2 DEBUG nova.network.os_vif_util [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.409 2 DEBUG nova.objects.instance [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'pci_devices' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.436 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:23:41 compute-0 nova_compute[260935]:   <uuid>ef21f945-0076-48fa-8d22-c5376e26d278</uuid>
Oct 11 09:23:41 compute-0 nova_compute[260935]:   <name>instance-00000079</name>
Oct 11 09:23:41 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:23:41 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:23:41 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <nova:name>tempest-TestShelveInstance-server-1752003988</nova:name>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:23:40</nova:creationTime>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:23:41 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:23:41 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:23:41 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:23:41 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:23:41 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:23:41 compute-0 nova_compute[260935]:         <nova:user uuid="67e20c1f7ae24f2f8b9e25e0d8ce61ca">tempest-TestShelveInstance-243029510-project-member</nova:user>
Oct 11 09:23:41 compute-0 nova_compute[260935]:         <nova:project uuid="a13210f275984f3eadf85eba0c749d99">tempest-TestShelveInstance-243029510</nova:project>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="a24df9e3-57ef-4be8-af9c-14e8b8d2e436"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:23:41 compute-0 nova_compute[260935]:         <nova:port uuid="0f516e4b-c284-4151-944c-8a7d98f695b5">
Oct 11 09:23:41 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:23:41 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:23:41 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <system>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <entry name="serial">ef21f945-0076-48fa-8d22-c5376e26d278</entry>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <entry name="uuid">ef21f945-0076-48fa-8d22-c5376e26d278</entry>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     </system>
Oct 11 09:23:41 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:23:41 compute-0 nova_compute[260935]:   <os>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:   </os>
Oct 11 09:23:41 compute-0 nova_compute[260935]:   <features>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:   </features>
Oct 11 09:23:41 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:23:41 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:23:41 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ef21f945-0076-48fa-8d22-c5376e26d278_disk">
Oct 11 09:23:41 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       </source>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:23:41 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ef21f945-0076-48fa-8d22-c5376e26d278_disk.config">
Oct 11 09:23:41 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       </source>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:23:41 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:4b:08:14"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <target dev="tap0f516e4b-c2"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/console.log" append="off"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <video>
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     </video>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <input type="keyboard" bus="usb"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:23:41 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:23:41 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:23:41 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:23:41 compute-0 nova_compute[260935]: </domain>
Oct 11 09:23:41 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.436 2 DEBUG nova.compute.manager [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Preparing to wait for external event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.437 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.437 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.437 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.439 2 DEBUG nova.virt.libvirt.vif [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1752003988',display_name='tempest-TestShelveInstance-server-1752003988',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1752003988',id=121,image_ref='a24df9e3-57ef-4be8-af9c-14e8b8d2e436',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1436929269',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:22:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='a13210f275984f3eadf85eba0c749d99',ramdisk_id='',reservation_id='r-9i2gklxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-243029510',owner_user_name='tempest-TestShelveInstance-243029510-project-member',shelved_at='2025-10-11T09:23:19.621852',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='a24df9e3-57ef-4be8-af9c-14e8b8d2e436'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:23:33Z,user_data=None,user_id='67e20c1f7ae24f2f8b9e25e0d8ce61ca',uuid=ef21f945-0076-48fa-8d22-c5376e26d278,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.439 2 DEBUG nova.network.os_vif_util [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converting VIF {"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.440 2 DEBUG nova.network.os_vif_util [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.440 2 DEBUG os_vif [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.444 2 INFO nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.451 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f516e4b-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f516e4b-c2, col_values=(('external_ids', {'iface-id': '0f516e4b-c284-4151-944c-8a7d98f695b5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:08:14', 'vm-uuid': 'ef21f945-0076-48fa-8d22-c5376e26d278'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:41 compute-0 NetworkManager[44960]: <info>  [1760174621.4564] manager: (tap0f516e4b-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/507)
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.466 2 INFO os_vif [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2')
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.468 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.469 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.469 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.469 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 85cf93a0-2068-4567-a399-b8d52e672913 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.469 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance ef21f945-0076-48fa-8d22-c5376e26d278 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.469 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 527bff5a-2d35-406a-8702-f80298a22342 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.470 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.470 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.472 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.563 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.563 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.563 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] No VIF found with MAC fa:16:3e:4b:08:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.564 2 INFO nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Using config drive
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.588 2 DEBUG nova.storage.rbd_utils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.599 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.600 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.600 2 INFO nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Creating image(s)
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.625 2 DEBUG nova.storage.rbd_utils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 527bff5a-2d35-406a-8702-f80298a22342_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.653 2 DEBUG nova.storage.rbd_utils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 527bff5a-2d35-406a-8702-f80298a22342_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.685 2 DEBUG nova.storage.rbd_utils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 527bff5a-2d35-406a-8702-f80298a22342_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.689 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.735 2 DEBUG nova.objects.instance [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'ec2_ids' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.775 2 DEBUG nova.objects.instance [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'keypairs' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.782 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.783 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.784 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.785 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.815 2 DEBUG nova.storage.rbd_utils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 527bff5a-2d35-406a-8702-f80298a22342_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.819 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 527bff5a-2d35-406a-8702-f80298a22342_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:41 compute-0 nova_compute[260935]: 2025-10-11 09:23:41.905 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1481547396' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.160 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 527bff5a-2d35-406a-8702-f80298a22342_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.240 2 DEBUG nova.storage.rbd_utils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image 527bff5a-2d35-406a-8702-f80298a22342_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.356 2 DEBUG nova.objects.instance [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid 527bff5a-2d35-406a-8702-f80298a22342 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.371 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.372 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Ensure instance console log exists: /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.372 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.373 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.373 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.399 2 INFO nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Creating config drive at /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.405 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpntmi9zyc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:23:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2943104708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.464 2 DEBUG nova.policy [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.475 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.484 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.502 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.527 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.528 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.576 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpntmi9zyc" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.616 2 DEBUG nova.storage.rbd_utils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] rbd image ef21f945-0076-48fa-8d22-c5376e26d278_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.621 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config ef21f945-0076-48fa-8d22-c5376e26d278_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2460: 321 pgs: 321 active+clean; 579 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.4 MiB/s wr, 189 op/s
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.918 2 DEBUG oslo_concurrency.processutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config ef21f945-0076-48fa-8d22-c5376e26d278_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.920 2 INFO nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Deleting local config drive /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278/disk.config because it was imported into RBD.
Oct 11 09:23:42 compute-0 kernel: tap0f516e4b-c2: entered promiscuous mode
Oct 11 09:23:42 compute-0 NetworkManager[44960]: <info>  [1760174622.9932] manager: (tap0f516e4b-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/508)
Oct 11 09:23:42 compute-0 nova_compute[260935]: 2025-10-11 09:23:42.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:42 compute-0 ovn_controller[152945]: 2025-10-11T09:23:42Z|01283|binding|INFO|Claiming lport 0f516e4b-c284-4151-944c-8a7d98f695b5 for this chassis.
Oct 11 09:23:42 compute-0 ovn_controller[152945]: 2025-10-11T09:23:42Z|01284|binding|INFO|0f516e4b-c284-4151-944c-8a7d98f695b5: Claiming fa:16:3e:4b:08:14 10.100.0.10
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:43 compute-0 NetworkManager[44960]: <info>  [1760174623.0338] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/509)
Oct 11 09:23:43 compute-0 NetworkManager[44960]: <info>  [1760174623.0352] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/510)
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.037 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:08:14 10.100.0.10'], port_security=['fa:16:3e:4b:08:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ef21f945-0076-48fa-8d22-c5376e26d278', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0007a0de-db42-4add-9b55-6d92ceffa860', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a13210f275984f3eadf85eba0c749d99', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'fd52e2b9-19bf-4137-a511-25dd1d4c9f0e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2271296-3ceb-4987-affd-a0b4da64fffe, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0f516e4b-c284-4151-944c-8a7d98f695b5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.040 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0f516e4b-c284-4151-944c-8a7d98f695b5 in datapath 0007a0de-db42-4add-9b55-6d92ceffa860 bound to our chassis
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.043 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0007a0de-db42-4add-9b55-6d92ceffa860
Oct 11 09:23:43 compute-0 systemd-machined[215705]: New machine qemu-145-instance-00000079.
Oct 11 09:23:43 compute-0 systemd-udevd[394424]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.063 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[df18b95c-cd62-4855-a4cd-c894e546cc52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.065 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0007a0de-d1 in ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:23:43 compute-0 systemd[1]: Started Virtual Machine qemu-145-instance-00000079.
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.069 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0007a0de-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.069 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[71d26d76-982c-488e-84af-a751aae81fe6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.073 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5f6e1d79-1c15-41ae-9bed-8bb6c11b1c63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:43 compute-0 NetworkManager[44960]: <info>  [1760174623.0872] device (tap0f516e4b-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:23:43 compute-0 NetworkManager[44960]: <info>  [1760174623.0889] device (tap0f516e4b-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.098 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[47a9085b-83e0-4a96-9dc1-41b6a354739e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:43 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2943104708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:23:43 compute-0 ceph-mon[74313]: pgmap v2460: 321 pgs: 321 active+clean; 579 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.4 MiB/s wr, 189 op/s
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.130 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ddce8097-2992-4dc3-876b-8f0b4ed70330]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.168 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[53a750be-6451-4ffe-89d7-4827891c85b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:43 compute-0 systemd-udevd[394427]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.175 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[574e8cb9-165a-4f80-9721-477bfc3d9690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:43 compute-0 NetworkManager[44960]: <info>  [1760174623.1769] manager: (tap0007a0de-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/511)
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.213 2 DEBUG nova.network.neutron [req-c499fbc0-d96b-4fe6-8281-96b30038efca req-e987cbf7-a099-4f5b-a35c-df66cbc4c4ab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updated VIF entry in instance network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.214 2 DEBUG nova.network.neutron [req-c499fbc0-d96b-4fe6-8281-96b30038efca req-e987cbf7-a099-4f5b-a35c-df66cbc4c4ab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.255 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c722428c-e81e-448c-b215-152d2491c5b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.257 2 DEBUG oslo_concurrency.lockutils [req-c499fbc0-d96b-4fe6-8281-96b30038efca req-e987cbf7-a099-4f5b-a35c-df66cbc4c4ab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.258 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2e532d77-69fa-45de-b008-f5a9b8aa7682]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.268 2 DEBUG nova.network.neutron [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Successfully updated port: 85875c6f-3380-42bc-9e4b-0b66df391e0d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:23:43 compute-0 NetworkManager[44960]: <info>  [1760174623.2880] device (tap0007a0de-d0): carrier: link connected
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.296 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[365c7f9a-b59d-4825-b5b2-0b639582cf8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:43 compute-0 unix_chkpwd[394460]: password check failed for user (root)
Oct 11 09:23:43 compute-0 ovn_controller[152945]: 2025-10-11T09:23:43Z|01285|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:23:43 compute-0 ovn_controller[152945]: 2025-10-11T09:23:43Z|01286|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:23:43 compute-0 sshd-session[394277]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170  user=root
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.327 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8562ff12-aa46-4ca7-92cf-785281edd293]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0007a0de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:ca:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646799, 'reachable_time': 34457, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394459, 'error': None, 'target': 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:43 compute-0 ovn_controller[152945]: 2025-10-11T09:23:43Z|01287|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:23:43 compute-0 ovn_controller[152945]: 2025-10-11T09:23:43Z|01288|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.367 2 DEBUG nova.compute.manager [req-f183313e-d3ce-4aa5-bd64-6c19c02ff857 req-e467080a-ca0b-476a-9f7f-c0ed14ef7366 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-changed-85875c6f-3380-42bc-9e4b-0b66df391e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.368 2 DEBUG nova.compute.manager [req-f183313e-d3ce-4aa5-bd64-6c19c02ff857 req-e467080a-ca0b-476a-9f7f-c0ed14ef7366 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Refreshing instance network info cache due to event network-changed-85875c6f-3380-42bc-9e4b-0b66df391e0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.368 2 DEBUG oslo_concurrency.lockutils [req-f183313e-d3ce-4aa5-bd64-6c19c02ff857 req-e467080a-ca0b-476a-9f7f-c0ed14ef7366 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.368 2 DEBUG oslo_concurrency.lockutils [req-f183313e-d3ce-4aa5-bd64-6c19c02ff857 req-e467080a-ca0b-476a-9f7f-c0ed14ef7366 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.369 2 DEBUG nova.network.neutron [req-f183313e-d3ce-4aa5-bd64-6c19c02ff857 req-e467080a-ca0b-476a-9f7f-c0ed14ef7366 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Refreshing network info cache for port 85875c6f-3380-42bc-9e4b-0b66df391e0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.368 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[161da8fa-9262-415c-a255-dff27bbf164a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:ca09'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646799, 'tstamp': 646799}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394461, 'error': None, 'target': 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:43 compute-0 ovn_controller[152945]: 2025-10-11T09:23:43Z|01289|binding|INFO|Setting lport 0f516e4b-c284-4151-944c-8a7d98f695b5 up in Southbound
Oct 11 09:23:43 compute-0 ovn_controller[152945]: 2025-10-11T09:23:43Z|01290|binding|INFO|Setting lport 0f516e4b-c284-4151-944c-8a7d98f695b5 ovn-installed in OVS
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.388 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[daffd302-2b3f-43fb-a131-d092d7f8cbc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0007a0de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:ca:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646799, 'reachable_time': 34457, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394462, 'error': None, 'target': 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.429 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[79a5401f-3a34-41f5-b675-474ec46b2787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.429 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.430 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.430 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.470 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.470 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.502 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d164bbb1-3b05-49f6-87bc-9c62133025df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.503 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0007a0de-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.503 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.503 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0007a0de-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:43 compute-0 kernel: tap0007a0de-d0: entered promiscuous mode
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:43 compute-0 NetworkManager[44960]: <info>  [1760174623.5065] manager: (tap0007a0de-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.509 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0007a0de-d0, col_values=(('external_ids', {'iface-id': 'e89e8ea5-8038-433d-8c45-d2ec20f4f896'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:43 compute-0 ovn_controller[152945]: 2025-10-11T09:23:43Z|01291|binding|INFO|Releasing lport e89e8ea5-8038-433d-8c45-d2ec20f4f896 from this chassis (sb_readonly=0)
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.542 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0007a0de-db42-4add-9b55-6d92ceffa860.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0007a0de-db42-4add-9b55-6d92ceffa860.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.543 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[92bc4d88-0892-4e10-aad8-2dae7bf736c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.544 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-0007a0de-db42-4add-9b55-6d92ceffa860
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/0007a0de-db42-4add-9b55-6d92ceffa860.pid.haproxy
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 0007a0de-db42-4add-9b55-6d92ceffa860
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:23:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:43.544 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'env', 'PROCESS_TAG=haproxy-0007a0de-db42-4add-9b55-6d92ceffa860', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0007a0de-db42-4add-9b55-6d92ceffa860.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:23:43 compute-0 sshd-session[394455]: Invalid user admin from 165.232.82.252 port 56302
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.757 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.758 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.759 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.759 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:23:43 compute-0 nova_compute[260935]: 2025-10-11 09:23:43.766 2 DEBUG nova.network.neutron [req-f183313e-d3ce-4aa5-bd64-6c19c02ff857 req-e467080a-ca0b-476a-9f7f-c0ed14ef7366 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:23:43 compute-0 sshd-session[394455]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:23:43 compute-0 sshd-session[394455]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:23:44 compute-0 podman[394536]: 2025-10-11 09:23:44.025561462 +0000 UTC m=+0.087905641 container create 22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:23:44 compute-0 podman[394536]: 2025-10-11 09:23:43.97977811 +0000 UTC m=+0.042122329 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:23:44 compute-0 systemd[1]: Started libpod-conmon-22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2.scope.
Oct 11 09:23:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:23:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd13644a85799175e493bbd75cfd83641987306523ad37c33480641586b4542/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:23:44 compute-0 podman[394536]: 2025-10-11 09:23:44.13746098 +0000 UTC m=+0.199805199 container init 22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 09:23:44 compute-0 podman[394536]: 2025-10-11 09:23:44.145635751 +0000 UTC m=+0.207979920 container start 22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 09:23:44 compute-0 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[394551]: [NOTICE]   (394555) : New worker (394557) forked
Oct 11 09:23:44 compute-0 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[394551]: [NOTICE]   (394555) : Loading success.
Oct 11 09:23:44 compute-0 nova_compute[260935]: 2025-10-11 09:23:44.232 2 DEBUG nova.network.neutron [req-f183313e-d3ce-4aa5-bd64-6c19c02ff857 req-e467080a-ca0b-476a-9f7f-c0ed14ef7366 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:23:44 compute-0 nova_compute[260935]: 2025-10-11 09:23:44.248 2 DEBUG oslo_concurrency.lockutils [req-f183313e-d3ce-4aa5-bd64-6c19c02ff857 req-e467080a-ca0b-476a-9f7f-c0ed14ef7366 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:23:44 compute-0 nova_compute[260935]: 2025-10-11 09:23:44.340 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174624.3390472, ef21f945-0076-48fa-8d22-c5376e26d278 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:23:44 compute-0 nova_compute[260935]: 2025-10-11 09:23:44.340 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] VM Started (Lifecycle Event)
Oct 11 09:23:44 compute-0 nova_compute[260935]: 2025-10-11 09:23:44.366 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:23:44 compute-0 nova_compute[260935]: 2025-10-11 09:23:44.371 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174624.3397129, ef21f945-0076-48fa-8d22-c5376e26d278 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:23:44 compute-0 nova_compute[260935]: 2025-10-11 09:23:44.371 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] VM Paused (Lifecycle Event)
Oct 11 09:23:44 compute-0 nova_compute[260935]: 2025-10-11 09:23:44.387 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:23:44 compute-0 nova_compute[260935]: 2025-10-11 09:23:44.391 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:23:44 compute-0 nova_compute[260935]: 2025-10-11 09:23:44.413 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:23:44 compute-0 nova_compute[260935]: 2025-10-11 09:23:44.470 2 DEBUG nova.network.neutron [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Successfully updated port: ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:23:44 compute-0 nova_compute[260935]: 2025-10-11 09:23:44.512 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:23:44 compute-0 nova_compute[260935]: 2025-10-11 09:23:44.513 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:23:44 compute-0 nova_compute[260935]: 2025-10-11 09:23:44.513 2 DEBUG nova.network.neutron [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:23:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2461: 321 pgs: 321 active+clean; 579 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.4 MiB/s wr, 130 op/s
Oct 11 09:23:44 compute-0 nova_compute[260935]: 2025-10-11 09:23:44.849 2 DEBUG nova.network.neutron [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:23:45 compute-0 sshd-session[394277]: Failed password for root from 152.32.213.170 port 35694 ssh2
Oct 11 09:23:45 compute-0 nova_compute[260935]: 2025-10-11 09:23:45.250 2 DEBUG nova.network.neutron [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Successfully updated port: f45a21da-34ce-448b-92cc-2639a191c755 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:23:45 compute-0 nova_compute[260935]: 2025-10-11 09:23:45.270 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:23:45 compute-0 nova_compute[260935]: 2025-10-11 09:23:45.270 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:23:45 compute-0 nova_compute[260935]: 2025-10-11 09:23:45.270 2 DEBUG nova.network.neutron [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:23:45 compute-0 nova_compute[260935]: 2025-10-11 09:23:45.518 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:23:45 compute-0 sshd-session[394455]: Failed password for invalid user admin from 165.232.82.252 port 56302 ssh2
Oct 11 09:23:45 compute-0 nova_compute[260935]: 2025-10-11 09:23:45.535 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:23:45 compute-0 nova_compute[260935]: 2025-10-11 09:23:45.535 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:23:45 compute-0 nova_compute[260935]: 2025-10-11 09:23:45.535 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:23:45 compute-0 nova_compute[260935]: 2025-10-11 09:23:45.536 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:23:45 compute-0 nova_compute[260935]: 2025-10-11 09:23:45.536 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:23:45 compute-0 sshd-session[394277]: Received disconnect from 152.32.213.170 port 35694:11: Bye Bye [preauth]
Oct 11 09:23:45 compute-0 sshd-session[394277]: Disconnected from authenticating user root 152.32.213.170 port 35694 [preauth]
Oct 11 09:23:45 compute-0 podman[394566]: 2025-10-11 09:23:45.822593396 +0000 UTC m=+0.116027555 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 09:23:45 compute-0 ceph-mon[74313]: pgmap v2461: 321 pgs: 321 active+clean; 579 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.4 MiB/s wr, 130 op/s
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.108 2 DEBUG nova.network.neutron [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.234 2 DEBUG nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.234 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.234 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.235 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.235 2 DEBUG nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Processing event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.235 2 DEBUG nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.235 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.236 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.236 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.236 2 DEBUG nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] No waiting events found dispatching network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.236 2 WARNING nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received unexpected event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 for instance with vm_state shelved_offloaded and task_state spawning.
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.236 2 DEBUG nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received event network-changed-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.237 2 DEBUG nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Refreshing instance network info cache due to event network-changed-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.237 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.237 2 DEBUG nova.compute.manager [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.244 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174626.2443242, ef21f945-0076-48fa-8d22-c5376e26d278 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.244 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] VM Resumed (Lifecycle Event)
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.246 2 DEBUG nova.virt.libvirt.driver [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.249 2 INFO nova.virt.libvirt.driver [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance spawned successfully.
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.431 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.442 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:46 compute-0 nova_compute[260935]: 2025-10-11 09:23:46.492 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:23:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2462: 321 pgs: 321 active+clean; 579 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.4 MiB/s wr, 130 op/s
Oct 11 09:23:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Oct 11 09:23:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Oct 11 09:23:46 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Oct 11 09:23:46 compute-0 sshd-session[394455]: Connection closed by invalid user admin 165.232.82.252 port 56302 [preauth]
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.151 2 DEBUG nova.network.neutron [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Updating instance_info_cache with network_info: [{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.181 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.182 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Instance network_info: |[{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.182 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.182 2 DEBUG nova.network.neutron [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Refreshing network info cache for port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.184 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Start _get_guest_xml network_info=[{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.188 2 WARNING nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.193 2 DEBUG nova.virt.libvirt.host [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.194 2 DEBUG nova.virt.libvirt.host [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.198 2 DEBUG nova.virt.libvirt.host [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.199 2 DEBUG nova.virt.libvirt.host [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.199 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.200 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.201 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.201 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.201 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.202 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.202 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.202 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.203 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.203 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.204 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.204 2 DEBUG nova.virt.hardware [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.209 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.538 2 DEBUG nova.compute.manager [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.630 2 DEBUG oslo_concurrency.lockutils [None req-3068d5e0-3e17-4027-a4de-f9fde66921fa 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 14.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:23:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2391666843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.778 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.799 2 DEBUG nova.storage.rbd_utils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 527bff5a-2d35-406a-8702-f80298a22342_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:47 compute-0 nova_compute[260935]: 2025-10-11 09:23:47.804 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:47 compute-0 ceph-mon[74313]: pgmap v2462: 321 pgs: 321 active+clean; 579 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.4 MiB/s wr, 130 op/s
Oct 11 09:23:47 compute-0 ceph-mon[74313]: osdmap e287: 3 total, 3 up, 3 in
Oct 11 09:23:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2391666843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:23:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:23:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914209808' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.256 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.259 2 DEBUG nova.virt.libvirt.vif [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:23:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659866276',display_name='tempest-TestNetworkBasicOps-server-1659866276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659866276',id=123,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD5xFqm8IPc+0j24PucHdWviLqVsYMHOSZIhiNWh/27UIL+IzWKFcQXG0H5lF53N0KOCicqByqWqQ/NMVHseguyx/gUQwyqXnA+qdRlYqxWLbxA13mTZNbIWDUUTRGaKUQ==',key_name='tempest-TestNetworkBasicOps-1951063949',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-slz4kjjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:23:41Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=527bff5a-2d35-406a-8702-f80298a22342,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.260 2 DEBUG nova.network.os_vif_util [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.262 2 DEBUG nova.network.os_vif_util [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.264 2 DEBUG nova.objects.instance [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 527bff5a-2d35-406a-8702-f80298a22342 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.289 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:23:48 compute-0 nova_compute[260935]:   <uuid>527bff5a-2d35-406a-8702-f80298a22342</uuid>
Oct 11 09:23:48 compute-0 nova_compute[260935]:   <name>instance-0000007b</name>
Oct 11 09:23:48 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:23:48 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:23:48 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkBasicOps-server-1659866276</nova:name>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:23:47</nova:creationTime>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:23:48 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:23:48 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:23:48 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:23:48 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:23:48 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:23:48 compute-0 nova_compute[260935]:         <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:23:48 compute-0 nova_compute[260935]:         <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:23:48 compute-0 nova_compute[260935]:         <nova:port uuid="ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca">
Oct 11 09:23:48 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:23:48 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:23:48 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <system>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <entry name="serial">527bff5a-2d35-406a-8702-f80298a22342</entry>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <entry name="uuid">527bff5a-2d35-406a-8702-f80298a22342</entry>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     </system>
Oct 11 09:23:48 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:23:48 compute-0 nova_compute[260935]:   <os>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:   </os>
Oct 11 09:23:48 compute-0 nova_compute[260935]:   <features>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:   </features>
Oct 11 09:23:48 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:23:48 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:23:48 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/527bff5a-2d35-406a-8702-f80298a22342_disk">
Oct 11 09:23:48 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       </source>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:23:48 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/527bff5a-2d35-406a-8702-f80298a22342_disk.config">
Oct 11 09:23:48 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       </source>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:23:48 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:d6:ce:63"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <target dev="tapba6bfb6c-8b"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342/console.log" append="off"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <video>
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     </video>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:23:48 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:23:48 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:23:48 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:23:48 compute-0 nova_compute[260935]: </domain>
Oct 11 09:23:48 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.290 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Preparing to wait for external event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.291 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.291 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.292 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.293 2 DEBUG nova.virt.libvirt.vif [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:23:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659866276',display_name='tempest-TestNetworkBasicOps-server-1659866276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659866276',id=123,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD5xFqm8IPc+0j24PucHdWviLqVsYMHOSZIhiNWh/27UIL+IzWKFcQXG0H5lF53N0KOCicqByqWqQ/NMVHseguyx/gUQwyqXnA+qdRlYqxWLbxA13mTZNbIWDUUTRGaKUQ==',key_name='tempest-TestNetworkBasicOps-1951063949',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-slz4kjjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:23:41Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=527bff5a-2d35-406a-8702-f80298a22342,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.294 2 DEBUG nova.network.os_vif_util [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.295 2 DEBUG nova.network.os_vif_util [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.295 2 DEBUG os_vif [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.298 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.299 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.307 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba6bfb6c-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.308 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapba6bfb6c-8b, col_values=(('external_ids', {'iface-id': 'ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:ce:63', 'vm-uuid': '527bff5a-2d35-406a-8702-f80298a22342'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:48 compute-0 NetworkManager[44960]: <info>  [1760174628.3114] manager: (tapba6bfb6c-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/513)
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.323 2 INFO os_vif [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b')
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.402 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.403 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.404 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:d6:ce:63, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.405 2 INFO nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Using config drive
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.444 2 DEBUG nova.storage.rbd_utils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 527bff5a-2d35-406a-8702-f80298a22342_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.646 2 DEBUG nova.network.neutron [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Updated VIF entry in instance network info cache for port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.647 2 DEBUG nova.network.neutron [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Updating instance_info_cache with network_info: [{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.666 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.667 2 DEBUG nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-changed-f45a21da-34ce-448b-92cc-2639a191c755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.667 2 DEBUG nova.compute.manager [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Refreshing instance network info cache due to event network-changed-f45a21da-34ce-448b-92cc-2639a191c755. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:23:48 compute-0 nova_compute[260935]: 2025-10-11 09:23:48.667 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:23:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2464: 321 pgs: 321 active+clean; 509 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.8 MiB/s wr, 218 op/s
Oct 11 09:23:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2914209808' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:23:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:23:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Oct 11 09:23:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Oct 11 09:23:49 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.172 2 INFO nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Creating config drive at /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342/disk.config
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.176 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp82fbrooc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.320 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp82fbrooc" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.359 2 DEBUG nova.storage.rbd_utils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 527bff5a-2d35-406a-8702-f80298a22342_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.364 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342/disk.config 527bff5a-2d35-406a-8702-f80298a22342_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.551 2 DEBUG oslo_concurrency.processutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342/disk.config 527bff5a-2d35-406a-8702-f80298a22342_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.553 2 INFO nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Deleting local config drive /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342/disk.config because it was imported into RBD.
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.555 2 DEBUG nova.network.neutron [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updating instance_info_cache with network_info: [{"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.580 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.580 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Instance network_info: |[{"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.581 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.581 2 DEBUG nova.network.neutron [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Refreshing network info cache for port f45a21da-34ce-448b-92cc-2639a191c755 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.588 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Start _get_guest_xml network_info=[{"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.603 2 WARNING nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.614 2 DEBUG nova.virt.libvirt.host [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.616 2 DEBUG nova.virt.libvirt.host [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.637 2 DEBUG nova.virt.libvirt.host [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.637 2 DEBUG nova.virt.libvirt.host [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.638 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.638 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.639 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.639 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.639 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.639 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.640 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.640 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.640 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.640 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.640 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.641 2 DEBUG nova.virt.hardware [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.645 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:49 compute-0 kernel: tapba6bfb6c-8b: entered promiscuous mode
Oct 11 09:23:49 compute-0 NetworkManager[44960]: <info>  [1760174629.6571] manager: (tapba6bfb6c-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/514)
Oct 11 09:23:49 compute-0 systemd-udevd[394721]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:23:49 compute-0 ovn_controller[152945]: 2025-10-11T09:23:49Z|01292|binding|INFO|Claiming lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for this chassis.
Oct 11 09:23:49 compute-0 ovn_controller[152945]: 2025-10-11T09:23:49Z|01293|binding|INFO|ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca: Claiming fa:16:3e:d6:ce:63 10.100.0.5
Oct 11 09:23:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.694 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:63 10.100.0.5'], port_security=['fa:16:3e:d6:ce:63 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '527bff5a-2d35-406a-8702-f80298a22342', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a8a4a9-2613-497d-90d8-59ebfcd1b053, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:23:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.697 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca in datapath a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 bound to our chassis
Oct 11 09:23:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.700 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7aa5898-dcd8-41c5-81e2-5c41061a1fb2
Oct 11 09:23:49 compute-0 NetworkManager[44960]: <info>  [1760174629.7030] device (tapba6bfb6c-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:23:49 compute-0 NetworkManager[44960]: <info>  [1760174629.7050] device (tapba6bfb6c-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:23:49 compute-0 ovn_controller[152945]: 2025-10-11T09:23:49Z|01294|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca ovn-installed in OVS
Oct 11 09:23:49 compute-0 ovn_controller[152945]: 2025-10-11T09:23:49Z|01295|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca up in Southbound
Oct 11 09:23:49 compute-0 nova_compute[260935]: 2025-10-11 09:23:49.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.720 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[67410d3e-0135-4245-a6ac-b2b3d515bc51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.720 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa7aa5898-d1 in ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:23:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.722 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa7aa5898-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:23:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.722 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[296c97f8-3a20-4a37-a8a4-32a5de2080c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.724 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3268f57c-a114-4621-859b-860109071335]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:49 compute-0 systemd-machined[215705]: New machine qemu-146-instance-0000007b.
Oct 11 09:23:49 compute-0 systemd[1]: Started Virtual Machine qemu-146-instance-0000007b.
Oct 11 09:23:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.743 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[58d7bcc1-c863-4e02-b797-abcd9d14f53f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.768 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[23ff34f6-d990-45ec-8950-902d4ba45d63]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.815 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[23cb0eb8-4cb5-4f60-bc46-368d277b8eab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.826 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bd41df57-8ae2-4d3b-9e53-e32b646ba9d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:49 compute-0 NetworkManager[44960]: <info>  [1760174629.8274] manager: (tapa7aa5898-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/515)
Oct 11 09:23:49 compute-0 systemd-udevd[394723]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:23:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.872 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b846da9f-d02e-433f-9ba9-df9be7140952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.879 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4c6c5e19-fcbf-4725-835b-e17058e4ad6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:49 compute-0 NetworkManager[44960]: <info>  [1760174629.9120] device (tapa7aa5898-d0): carrier: link connected
Oct 11 09:23:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.924 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d74f02-e007-46d4-84e8-bae8f72c5bd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.949 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f80c8aa-eecb-45c6-8adb-6e1ae8f0a397]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7aa5898-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:0e:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 359], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647461, 'reachable_time': 32882, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394776, 'error': None, 'target': 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:49.975 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[16b39acb-a2f7-44a7-a57c-fd63bfad00e7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:e28'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647461, 'tstamp': 647461}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394777, 'error': None, 'target': 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:49 compute-0 ceph-mon[74313]: pgmap v2464: 321 pgs: 321 active+clean; 509 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.8 MiB/s wr, 218 op/s
Oct 11 09:23:49 compute-0 ceph-mon[74313]: osdmap e288: 3 total, 3 up, 3 in
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.010 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f1dd64c3-025d-40a2-91e1-bc5979828029]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7aa5898-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:0e:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 359], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647461, 'reachable_time': 32882, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394778, 'error': None, 'target': 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.060 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d056461f-8a81-4997-bf33-c50973f21176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:23:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/869300691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.183 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9549a83c-7292-4fec-8734-1ee0f2b2845c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.184 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7aa5898-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.185 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.185 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7aa5898-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:50 compute-0 NetworkManager[44960]: <info>  [1760174630.1876] manager: (tapa7aa5898-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/516)
Oct 11 09:23:50 compute-0 kernel: tapa7aa5898-d0: entered promiscuous mode
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.191 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7aa5898-d0, col_values=(('external_ids', {'iface-id': 'aaa7cd6b-360e-44dd-8a57-64ba5457b964'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:50 compute-0 ovn_controller[152945]: 2025-10-11T09:23:50Z|01296|binding|INFO|Releasing lport aaa7cd6b-360e-44dd-8a57-64ba5457b964 from this chassis (sb_readonly=0)
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.199 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a7aa5898-dcd8-41c5-81e2-5c41061a1fb2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a7aa5898-dcd8-41c5-81e2-5c41061a1fb2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.200 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[675b87b6-1284-48f6-ac2f-34066a9a7b93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.202 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/a7aa5898-dcd8-41c5-81e2-5c41061a1fb2.pid.haproxy
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID a7aa5898-dcd8-41c5-81e2-5c41061a1fb2
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.202 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'env', 'PROCESS_TAG=haproxy-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a7aa5898-dcd8-41c5-81e2-5c41061a1fb2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.214 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.248 2 DEBUG nova.storage.rbd_utils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 85cf93a0-2068-4567-a399-b8d52e672913_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.254 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.374 2 DEBUG nova.compute.manager [req-4a076041-faa2-4e35-b528-cdb13e62132f req-e0684e74-3b4e-4b42-a04a-5df38ec2e78a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.375 2 DEBUG oslo_concurrency.lockutils [req-4a076041-faa2-4e35-b528-cdb13e62132f req-e0684e74-3b4e-4b42-a04a-5df38ec2e78a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.376 2 DEBUG oslo_concurrency.lockutils [req-4a076041-faa2-4e35-b528-cdb13e62132f req-e0684e74-3b4e-4b42-a04a-5df38ec2e78a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.376 2 DEBUG oslo_concurrency.lockutils [req-4a076041-faa2-4e35-b528-cdb13e62132f req-e0684e74-3b4e-4b42-a04a-5df38ec2e78a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.377 2 DEBUG nova.compute.manager [req-4a076041-faa2-4e35-b528-cdb13e62132f req-e0684e74-3b4e-4b42-a04a-5df38ec2e78a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Processing event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.485 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:23:50 compute-0 podman[394891]: 2025-10-11 09:23:50.665777134 +0000 UTC m=+0.073721802 container create a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:23:50 compute-0 podman[394891]: 2025-10-11 09:23:50.633360429 +0000 UTC m=+0.041305187 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:23:50 compute-0 systemd[1]: Started libpod-conmon-a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1.scope.
Oct 11 09:23:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19190283d7b8624e3bb625f0ca81497210fb38475de1d71735aa96791be754ca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:23:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:23:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069791840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:23:50 compute-0 podman[394891]: 2025-10-11 09:23:50.770720355 +0000 UTC m=+0.178665023 container init a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 09:23:50 compute-0 podman[394891]: 2025-10-11 09:23:50.781062997 +0000 UTC m=+0.189007665 container start a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.792 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.794 2 DEBUG nova.virt.libvirt.vif [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:23:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-145104929',display_name='tempest-TestGettingAddress-server-145104929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-145104929',id=122,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-tevfktjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:23:36Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=85cf93a0-2068-4567-a399-b8d52e672913,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.794 2 DEBUG nova.network.os_vif_util [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.795 2 DEBUG nova.network.os_vif_util [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:f1:c1,bridge_name='br-int',has_traffic_filtering=True,id=85875c6f-3380-42bc-9e4b-0b66df391e0d,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85875c6f-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.795 2 DEBUG nova.virt.libvirt.vif [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:23:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-145104929',display_name='tempest-TestGettingAddress-server-145104929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-145104929',id=122,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-tevfktjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:23:36Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=85cf93a0-2068-4567-a399-b8d52e672913,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.796 2 DEBUG nova.network.os_vif_util [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.796 2 DEBUG nova.network.os_vif_util [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a9:1e,bridge_name='br-int',has_traffic_filtering=True,id=f45a21da-34ce-448b-92cc-2639a191c755,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45a21da-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.797 2 DEBUG nova.objects.instance [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 85cf93a0-2068-4567-a399-b8d52e672913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:23:50 compute-0 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[394906]: [NOTICE]   (394912) : New worker (394914) forked
Oct 11 09:23:50 compute-0 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[394906]: [NOTICE]   (394912) : Loading success.
Oct 11 09:23:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2466: 321 pgs: 321 active+clean; 509 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 20 KiB/s wr, 115 op/s
Oct 11 09:23:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:50.856 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.915 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.917 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174630.9149716, 527bff5a-2d35-406a-8702-f80298a22342 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.917 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] VM Started (Lifecycle Event)
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.921 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.929 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:23:50 compute-0 nova_compute[260935]:   <uuid>85cf93a0-2068-4567-a399-b8d52e672913</uuid>
Oct 11 09:23:50 compute-0 nova_compute[260935]:   <name>instance-0000007a</name>
Oct 11 09:23:50 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:23:50 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:23:50 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <nova:name>tempest-TestGettingAddress-server-145104929</nova:name>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:23:49</nova:creationTime>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:23:50 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:23:50 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:23:50 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:23:50 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:23:50 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:23:50 compute-0 nova_compute[260935]:         <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 09:23:50 compute-0 nova_compute[260935]:         <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:23:50 compute-0 nova_compute[260935]:         <nova:port uuid="85875c6f-3380-42bc-9e4b-0b66df391e0d">
Oct 11 09:23:50 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:23:50 compute-0 nova_compute[260935]:         <nova:port uuid="f45a21da-34ce-448b-92cc-2639a191c755">
Oct 11 09:23:50 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fef6:a91e" ipVersion="6"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fef6:a91e" ipVersion="6"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:23:50 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:23:50 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <system>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <entry name="serial">85cf93a0-2068-4567-a399-b8d52e672913</entry>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <entry name="uuid">85cf93a0-2068-4567-a399-b8d52e672913</entry>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     </system>
Oct 11 09:23:50 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:23:50 compute-0 nova_compute[260935]:   <os>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:   </os>
Oct 11 09:23:50 compute-0 nova_compute[260935]:   <features>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:   </features>
Oct 11 09:23:50 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:23:50 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:23:50 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/85cf93a0-2068-4567-a399-b8d52e672913_disk">
Oct 11 09:23:50 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       </source>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:23:50 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/85cf93a0-2068-4567-a399-b8d52e672913_disk.config">
Oct 11 09:23:50 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       </source>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:23:50 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:81:f1:c1"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <target dev="tap85875c6f-33"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:f6:a9:1e"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <target dev="tapf45a21da-34"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913/console.log" append="off"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <video>
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     </video>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:23:50 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:23:50 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:23:50 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:23:50 compute-0 nova_compute[260935]: </domain>
Oct 11 09:23:50 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.930 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Preparing to wait for external event network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.930 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.931 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.931 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.931 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Preparing to wait for external event network-vif-plugged-f45a21da-34ce-448b-92cc-2639a191c755 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.932 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.932 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.932 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.934 2 DEBUG nova.virt.libvirt.vif [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:23:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-145104929',display_name='tempest-TestGettingAddress-server-145104929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-145104929',id=122,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-tevfktjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:23:36Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=85cf93a0-2068-4567-a399-b8d52e672913,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.934 2 DEBUG nova.network.os_vif_util [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.936 2 DEBUG nova.network.os_vif_util [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:f1:c1,bridge_name='br-int',has_traffic_filtering=True,id=85875c6f-3380-42bc-9e4b-0b66df391e0d,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85875c6f-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.937 2 DEBUG os_vif [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:f1:c1,bridge_name='br-int',has_traffic_filtering=True,id=85875c6f-3380-42bc-9e4b-0b66df391e0d,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85875c6f-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.944 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.945 2 INFO nova.virt.libvirt.driver [-] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Instance spawned successfully.
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.946 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.949 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85875c6f-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.951 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap85875c6f-33, col_values=(('external_ids', {'iface-id': '85875c6f-3380-42bc-9e4b-0b66df391e0d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:f1:c1', 'vm-uuid': '85cf93a0-2068-4567-a399-b8d52e672913'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.954 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.980 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.980 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174630.915211, 527bff5a-2d35-406a-8702-f80298a22342 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.981 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] VM Paused (Lifecycle Event)
Oct 11 09:23:50 compute-0 NetworkManager[44960]: <info>  [1760174630.9867] manager: (tap85875c6f-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/517)
Oct 11 09:23:50 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/869300691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:23:50 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2069791840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.995 2 INFO os_vif [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:f1:c1,bridge_name='br-int',has_traffic_filtering=True,id=85875c6f-3380-42bc-9e4b-0b66df391e0d,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85875c6f-33')
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.996 2 DEBUG nova.virt.libvirt.vif [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:23:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-145104929',display_name='tempest-TestGettingAddress-server-145104929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-145104929',id=122,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-tevfktjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:23:36Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=85cf93a0-2068-4567-a399-b8d52e672913,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.997 2 DEBUG nova.network.os_vif_util [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:23:50 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.999 2 DEBUG nova.network.os_vif_util [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a9:1e,bridge_name='br-int',has_traffic_filtering=True,id=f45a21da-34ce-448b-92cc-2639a191c755,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45a21da-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:50.999 2 DEBUG os_vif [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a9:1e,bridge_name='br-int',has_traffic_filtering=True,id=f45a21da-34ce-448b-92cc-2639a191c755,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45a21da-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.000 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.001 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.003 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.003 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.004 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.005 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.006 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.006 2 DEBUG nova.virt.libvirt.driver [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.014 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.016 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf45a21da-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.017 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf45a21da-34, col_values=(('external_ids', {'iface-id': 'f45a21da-34ce-448b-92cc-2639a191c755', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:a9:1e', 'vm-uuid': '85cf93a0-2068-4567-a399-b8d52e672913'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:51 compute-0 NetworkManager[44960]: <info>  [1760174631.0194] manager: (tapf45a21da-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/518)
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.025 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174630.920565, 527bff5a-2d35-406a-8702-f80298a22342 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.025 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] VM Resumed (Lifecycle Event)
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.028 2 INFO os_vif [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a9:1e,bridge_name='br-int',has_traffic_filtering=True,id=f45a21da-34ce-448b-92cc-2639a191c755,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45a21da-34')
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.047 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.054 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.057 2 INFO nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Took 9.46 seconds to spawn the instance on the hypervisor.
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.057 2 DEBUG nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.070 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.108 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.108 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.109 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:81:f1:c1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.109 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:f6:a9:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.110 2 INFO nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Using config drive
Oct 11 09:23:51 compute-0 podman[394927]: 2025-10-11 09:23:51.121735661 +0000 UTC m=+0.061702992 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:23:51 compute-0 podman[394928]: 2025-10-11 09:23:51.150264856 +0000 UTC m=+0.088487868 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.151 2 DEBUG nova.storage.rbd_utils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 85cf93a0-2068-4567-a399-b8d52e672913_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.174 2 INFO nova.compute.manager [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Took 11.00 seconds to build instance.
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.196 2 DEBUG oslo_concurrency.lockutils [None req-091e0e0c-811a-465e-b89f-9b7e7c4c5823 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.481 2 INFO nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Creating config drive at /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913/disk.config
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.486 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46u_0hqg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.663 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46u_0hqg" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.713 2 DEBUG nova.storage.rbd_utils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 85cf93a0-2068-4567-a399-b8d52e672913_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.717 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913/disk.config 85cf93a0-2068-4567-a399-b8d52e672913_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.921 2 DEBUG oslo_concurrency.processutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913/disk.config 85cf93a0-2068-4567-a399-b8d52e672913_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.923 2 INFO nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Deleting local config drive /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913/disk.config because it was imported into RBD.
Oct 11 09:23:51 compute-0 kernel: tap85875c6f-33: entered promiscuous mode
Oct 11 09:23:51 compute-0 NetworkManager[44960]: <info>  [1760174631.9825] manager: (tap85875c6f-33): new Tun device (/org/freedesktop/NetworkManager/Devices/519)
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.982 2 DEBUG nova.network.neutron [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updated VIF entry in instance network info cache for port f45a21da-34ce-448b-92cc-2639a191c755. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:23:51 compute-0 systemd-udevd[394754]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.984 2 DEBUG nova.network.neutron [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updating instance_info_cache with network_info: [{"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:23:51 compute-0 ovn_controller[152945]: 2025-10-11T09:23:51Z|01297|binding|INFO|Claiming lport 85875c6f-3380-42bc-9e4b-0b66df391e0d for this chassis.
Oct 11 09:23:51 compute-0 ovn_controller[152945]: 2025-10-11T09:23:51Z|01298|binding|INFO|85875c6f-3380-42bc-9e4b-0b66df391e0d: Claiming fa:16:3e:81:f1:c1 10.100.0.14
Oct 11 09:23:51 compute-0 nova_compute[260935]: 2025-10-11 09:23:51.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:51 compute-0 ceph-mon[74313]: pgmap v2466: 321 pgs: 321 active+clean; 509 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 20 KiB/s wr, 115 op/s
Oct 11 09:23:52 compute-0 NetworkManager[44960]: <info>  [1760174632.0028] device (tap85875c6f-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:23:52 compute-0 NetworkManager[44960]: <info>  [1760174632.0035] device (tap85875c6f-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.006 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:f1:c1 10.100.0.14'], port_security=['fa:16:3e:81:f1:c1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '85cf93a0-2068-4567-a399-b8d52e672913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0ad377f6-44f5-45fc-95cb-2f677a42f5a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=039eefbf-a236-4641-9b4d-7b7f1014a5e2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=85875c6f-3380-42bc-9e4b-0b66df391e0d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.007 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 85875c6f-3380-42bc-9e4b-0b66df391e0d in datapath cdd3c547-e0c3-4649-8427-08ce8e1c52d4 bound to our chassis
Oct 11 09:23:52 compute-0 NetworkManager[44960]: <info>  [1760174632.0113] manager: (tapf45a21da-34): new Tun device (/org/freedesktop/NetworkManager/Devices/520)
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.010 2 DEBUG oslo_concurrency.lockutils [req-7df65449-e412-407c-99df-175c6deaca94 req-ba1b4746-6122-4bea-856b-246577763942 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.014 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cdd3c547-e0c3-4649-8427-08ce8e1c52d4
Oct 11 09:23:52 compute-0 kernel: tapf45a21da-34: entered promiscuous mode
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:52 compute-0 ovn_controller[152945]: 2025-10-11T09:23:52Z|01299|binding|INFO|Claiming lport f45a21da-34ce-448b-92cc-2639a191c755 for this chassis.
Oct 11 09:23:52 compute-0 ovn_controller[152945]: 2025-10-11T09:23:52Z|01300|binding|INFO|f45a21da-34ce-448b-92cc-2639a191c755: Claiming fa:16:3e:f6:a9:1e 2001:db8:0:1:f816:3eff:fef6:a91e 2001:db8::f816:3eff:fef6:a91e
Oct 11 09:23:52 compute-0 ovn_controller[152945]: 2025-10-11T09:23:52Z|01301|binding|INFO|Setting lport 85875c6f-3380-42bc-9e4b-0b66df391e0d ovn-installed in OVS
Oct 11 09:23:52 compute-0 ovn_controller[152945]: 2025-10-11T09:23:52Z|01302|binding|INFO|Setting lport 85875c6f-3380-42bc-9e4b-0b66df391e0d up in Southbound
Oct 11 09:23:52 compute-0 NetworkManager[44960]: <info>  [1760174632.0289] device (tapf45a21da-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:52 compute-0 NetworkManager[44960]: <info>  [1760174632.0296] device (tapf45a21da-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.031 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[303e7948-784d-4fab-acb1-81c868435521]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.033 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:a9:1e 2001:db8:0:1:f816:3eff:fef6:a91e 2001:db8::f816:3eff:fef6:a91e'], port_security=['fa:16:3e:f6:a9:1e 2001:db8:0:1:f816:3eff:fef6:a91e 2001:db8::f816:3eff:fef6:a91e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fef6:a91e/64 2001:db8::f816:3eff:fef6:a91e/64', 'neutron:device_id': '85cf93a0-2068-4567-a399-b8d52e672913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0ad377f6-44f5-45fc-95cb-2f677a42f5a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75b51dc9-c211-445f-9e3e-d219efddef19, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f45a21da-34ce-448b-92cc-2639a191c755) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.034 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcdd3c547-e1 in ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.037 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcdd3c547-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.037 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[af29a287-b64f-41ed-8f96-9bd2eeec7691]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.038 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[153d807a-8f9f-479d-a9cd-bd550d9dc56e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:52 compute-0 ovn_controller[152945]: 2025-10-11T09:23:52Z|01303|binding|INFO|Setting lport f45a21da-34ce-448b-92cc-2639a191c755 ovn-installed in OVS
Oct 11 09:23:52 compute-0 ovn_controller[152945]: 2025-10-11T09:23:52Z|01304|binding|INFO|Setting lport f45a21da-34ce-448b-92cc-2639a191c755 up in Southbound
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.063 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[3113b5c0-c633-4e96-9b3d-624f47c1885b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:52 compute-0 systemd-machined[215705]: New machine qemu-147-instance-0000007a.
Oct 11 09:23:52 compute-0 systemd[1]: Started Virtual Machine qemu-147-instance-0000007a.
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.092 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5f715a3a-64ec-4c9b-b108-e88d4ac46891]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.129 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[87642fe1-6bc8-480b-8df5-ba3145454d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:52 compute-0 NetworkManager[44960]: <info>  [1760174632.1350] manager: (tapcdd3c547-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/521)
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.136 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[71ac57df-9c10-4888-888a-dd88fc550918]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.167 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cd887db4-3c65-4950-8566-ab494f20bcfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.172 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5c27a7-def7-46b4-9491-7da4e3f99732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:52 compute-0 NetworkManager[44960]: <info>  [1760174632.1970] device (tapcdd3c547-e0): carrier: link connected
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.202 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3818228c-5c8c-4d8b-aa0c-d2e08fee9906]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.219 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[868a83f2-d862-4712-8ecf-de484aa06c89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdd3c547-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:b8:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 362], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647689, 'reachable_time': 38844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395061, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.240 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6f497c93-3624-4a3a-81ae-12165bf7175d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:b80d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647689, 'tstamp': 647689}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395062, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.264 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9148730e-c565-4e49-957c-6bb2311eaeb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdd3c547-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:b8:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 362], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647689, 'reachable_time': 38844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 395063, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.308 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[714bcac1-62be-48ee-ab02-531cd919ae11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.354 2 DEBUG nova.compute.manager [req-bb3e8536-f41c-4714-91b4-863e14944f64 req-62f694e6-1c24-4bc5-9b8f-0ff74adaf9a2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.355 2 DEBUG oslo_concurrency.lockutils [req-bb3e8536-f41c-4714-91b4-863e14944f64 req-62f694e6-1c24-4bc5-9b8f-0ff74adaf9a2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.355 2 DEBUG oslo_concurrency.lockutils [req-bb3e8536-f41c-4714-91b4-863e14944f64 req-62f694e6-1c24-4bc5-9b8f-0ff74adaf9a2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.356 2 DEBUG oslo_concurrency.lockutils [req-bb3e8536-f41c-4714-91b4-863e14944f64 req-62f694e6-1c24-4bc5-9b8f-0ff74adaf9a2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.356 2 DEBUG nova.compute.manager [req-bb3e8536-f41c-4714-91b4-863e14944f64 req-62f694e6-1c24-4bc5-9b8f-0ff74adaf9a2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Processing event network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.388 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c964ba5d-293a-4f57-8251-e0f1730d5b4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.390 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdd3c547-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.390 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.390 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcdd3c547-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:52 compute-0 NetworkManager[44960]: <info>  [1760174632.3933] manager: (tapcdd3c547-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/522)
Oct 11 09:23:52 compute-0 kernel: tapcdd3c547-e0: entered promiscuous mode
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.396 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcdd3c547-e0, col_values=(('external_ids', {'iface-id': 'cd117be9-e2e2-4d92-9c01-a0f37b7175b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:52 compute-0 ovn_controller[152945]: 2025-10-11T09:23:52Z|01305|binding|INFO|Releasing lport cd117be9-e2e2-4d92-9c01-a0f37b7175b9 from this chassis (sb_readonly=0)
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.426 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cdd3c547-e0c3-4649-8427-08ce8e1c52d4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cdd3c547-e0c3-4649-8427-08ce8e1c52d4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.427 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[26f2e327-1883-456c-87f1-6e697e586668]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.428 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-cdd3c547-e0c3-4649-8427-08ce8e1c52d4
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/cdd3c547-e0c3-4649-8427-08ce8e1c52d4.pid.haproxy
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID cdd3c547-e0c3-4649-8427-08ce8e1c52d4
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.429 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'env', 'PROCESS_TAG=haproxy-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cdd3c547-e0c3-4649-8427-08ce8e1c52d4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.464 2 DEBUG nova.compute.manager [req-01ad71ab-e518-4614-9531-d4ccb8cc8336 req-4825e3e6-2f74-4726-8235-11eb5c698d37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.464 2 DEBUG oslo_concurrency.lockutils [req-01ad71ab-e518-4614-9531-d4ccb8cc8336 req-4825e3e6-2f74-4726-8235-11eb5c698d37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.465 2 DEBUG oslo_concurrency.lockutils [req-01ad71ab-e518-4614-9531-d4ccb8cc8336 req-4825e3e6-2f74-4726-8235-11eb5c698d37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.465 2 DEBUG oslo_concurrency.lockutils [req-01ad71ab-e518-4614-9531-d4ccb8cc8336 req-4825e3e6-2f74-4726-8235-11eb5c698d37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.465 2 DEBUG nova.compute.manager [req-01ad71ab-e518-4614-9531-d4ccb8cc8336 req-4825e3e6-2f74-4726-8235-11eb5c698d37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] No waiting events found dispatching network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:23:52 compute-0 nova_compute[260935]: 2025-10-11 09:23:52.466 2 WARNING nova.compute.manager [req-01ad71ab-e518-4614-9531-d4ccb8cc8336 req-4825e3e6-2f74-4726-8235-11eb5c698d37 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received unexpected event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for instance with vm_state active and task_state None.
Oct 11 09:23:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2467: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 58 KiB/s wr, 252 op/s
Oct 11 09:23:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:52.858 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:52 compute-0 podman[395095]: 2025-10-11 09:23:52.857431854 +0000 UTC m=+0.077642862 container create ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:23:52 compute-0 podman[395095]: 2025-10-11 09:23:52.818066543 +0000 UTC m=+0.038277601 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:23:52 compute-0 systemd[1]: Started libpod-conmon-ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c.scope.
Oct 11 09:23:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d9ee5906b3e37b7221b27948dcaeafcd376d35672ef3e4c15b762b77c22192/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:23:52 compute-0 podman[395095]: 2025-10-11 09:23:52.967254704 +0000 UTC m=+0.187465762 container init ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:23:52 compute-0 podman[395095]: 2025-10-11 09:23:52.974334093 +0000 UTC m=+0.194545061 container start ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 09:23:52 compute-0 neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4[395110]: [NOTICE]   (395121) : New worker (395128) forked
Oct 11 09:23:52 compute-0 neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4[395110]: [NOTICE]   (395121) : Loading success.
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.051 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f45a21da-34ce-448b-92cc-2639a191c755 in datapath f0bc2c62-89ab-4ce1-9157-2273788b9018 unbound from our chassis
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.055 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0bc2c62-89ab-4ce1-9157-2273788b9018
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.080 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[54191dc7-09f9-45ff-9a8e-582cc9b39297]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.085 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf0bc2c62-81 in ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.087 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf0bc2c62-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.087 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a42a8d5d-a1d3-4f71-b951-3951cca9da24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.089 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9311f2f0-9b65-4dcf-8b02-406d53c2132d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.103 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[22ed617c-41e9-4c22-9099-2f341718b431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.129 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f980627a-79f5-40d4-a6c1-0dac0d28f421]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.165 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6bfbf6bd-5f71-4303-8a41-5478f109c882]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:53 compute-0 NetworkManager[44960]: <info>  [1760174633.1732] manager: (tapf0bc2c62-80): new Veth device (/org/freedesktop/NetworkManager/Devices/523)
Oct 11 09:23:53 compute-0 systemd-udevd[395168]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.174 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[91dbf7a1-543d-48c8-a0be-31033fea6cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.225 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[49dc4a38-75e7-4554-bd24-d146a11212ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.231 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff56465-4421-40d0-8826-f17e3b2698fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:53 compute-0 NetworkManager[44960]: <info>  [1760174633.2635] device (tapf0bc2c62-80): carrier: link connected
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.268 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[448cdd13-feaf-4df2-8038-3c320af70ff5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.304 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a438ad-d75f-423a-a539-85c3ba8d498f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0bc2c62-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:23:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647796, 'reachable_time': 34498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395194, 'error': None, 'target': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.327 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d2906475-9771-4ec4-b8c5-1f7570bc6314]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febe:2300'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647796, 'tstamp': 647796}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395195, 'error': None, 'target': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.350 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[974877a0-6a32-401a-a576-b70576f07966]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0bc2c62-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:23:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647796, 'reachable_time': 34498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 395196, 'error': None, 'target': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.393 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7b33e3-bc6a-47d4-8f2c-60c28de49e01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.430 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2c343d36-5cae-4c70-852d-20af71ecf68c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.431 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0bc2c62-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.431 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.432 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0bc2c62-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:53 compute-0 nova_compute[260935]: 2025-10-11 09:23:53.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:53 compute-0 NetworkManager[44960]: <info>  [1760174633.4769] manager: (tapf0bc2c62-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/524)
Oct 11 09:23:53 compute-0 kernel: tapf0bc2c62-80: entered promiscuous mode
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.479 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0bc2c62-80, col_values=(('external_ids', {'iface-id': 'c1fabca0-ae77-4e48-b93b-3023955db235'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:53 compute-0 nova_compute[260935]: 2025-10-11 09:23:53.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:53 compute-0 ovn_controller[152945]: 2025-10-11T09:23:53Z|01306|binding|INFO|Releasing lport c1fabca0-ae77-4e48-b93b-3023955db235 from this chassis (sb_readonly=0)
Oct 11 09:23:53 compute-0 nova_compute[260935]: 2025-10-11 09:23:53.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.505 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f0bc2c62-89ab-4ce1-9157-2273788b9018.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f0bc2c62-89ab-4ce1-9157-2273788b9018.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.506 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f59b4617-c20d-4791-b79e-c0bc3d95c8b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.507 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-f0bc2c62-89ab-4ce1-9157-2273788b9018
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/f0bc2c62-89ab-4ce1-9157-2273788b9018.pid.haproxy
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID f0bc2c62-89ab-4ce1-9157-2273788b9018
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:23:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:53.507 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'env', 'PROCESS_TAG=haproxy-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f0bc2c62-89ab-4ce1-9157-2273788b9018.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:23:53 compute-0 nova_compute[260935]: 2025-10-11 09:23:53.797 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174633.7969532, 85cf93a0-2068-4567-a399-b8d52e672913 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:23:53 compute-0 nova_compute[260935]: 2025-10-11 09:23:53.798 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] VM Started (Lifecycle Event)
Oct 11 09:23:53 compute-0 nova_compute[260935]: 2025-10-11 09:23:53.815 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:23:53 compute-0 nova_compute[260935]: 2025-10-11 09:23:53.817 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174633.7980013, 85cf93a0-2068-4567-a399-b8d52e672913 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:23:53 compute-0 nova_compute[260935]: 2025-10-11 09:23:53.818 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] VM Paused (Lifecycle Event)
Oct 11 09:23:53 compute-0 nova_compute[260935]: 2025-10-11 09:23:53.832 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:23:53 compute-0 nova_compute[260935]: 2025-10-11 09:23:53.834 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:23:53 compute-0 nova_compute[260935]: 2025-10-11 09:23:53.855 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:23:53 compute-0 podman[395227]: 2025-10-11 09:23:53.868722033 +0000 UTC m=+0.043320333 container create 049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:23:53 compute-0 systemd[1]: Started libpod-conmon-049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4.scope.
Oct 11 09:23:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a18029a52b2d99369687829c9fae311726c99a73db60edcfbb05990f0b5022/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:23:53 compute-0 podman[395227]: 2025-10-11 09:23:53.848838012 +0000 UTC m=+0.023436322 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:23:53 compute-0 podman[395227]: 2025-10-11 09:23:53.952725024 +0000 UTC m=+0.127323334 container init 049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:23:53 compute-0 podman[395227]: 2025-10-11 09:23:53.959307139 +0000 UTC m=+0.133905429 container start 049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 09:23:53 compute-0 neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018[395243]: [NOTICE]   (395247) : New worker (395249) forked
Oct 11 09:23:53 compute-0 neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018[395243]: [NOTICE]   (395247) : Loading success.
Oct 11 09:23:54 compute-0 ceph-mon[74313]: pgmap v2467: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 58 KiB/s wr, 252 op/s
Oct 11 09:23:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.530 2 DEBUG nova.compute.manager [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.531 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.531 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.532 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.532 2 DEBUG nova.compute.manager [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] No event matching network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d in dict_keys([('network-vif-plugged', 'f45a21da-34ce-448b-92cc-2639a191c755')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.532 2 WARNING nova.compute.manager [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received unexpected event network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d for instance with vm_state building and task_state spawning.
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.533 2 DEBUG nova.compute.manager [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-plugged-f45a21da-34ce-448b-92cc-2639a191c755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.533 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.533 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.534 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.534 2 DEBUG nova.compute.manager [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Processing event network-vif-plugged-f45a21da-34ce-448b-92cc-2639a191c755 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.534 2 DEBUG nova.compute.manager [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-plugged-f45a21da-34ce-448b-92cc-2639a191c755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.535 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.535 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.536 2 DEBUG oslo_concurrency.lockutils [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.536 2 DEBUG nova.compute.manager [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] No waiting events found dispatching network-vif-plugged-f45a21da-34ce-448b-92cc-2639a191c755 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.536 2 WARNING nova.compute.manager [req-ad766bb9-9402-4bef-868c-cc62cb6ead1c req-12632814-3f32-409c-b0e9-420eeec99f61 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received unexpected event network-vif-plugged-f45a21da-34ce-448b-92cc-2639a191c755 for instance with vm_state building and task_state spawning.
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.537 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.547 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174634.546722, 85cf93a0-2068-4567-a399-b8d52e672913 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.548 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] VM Resumed (Lifecycle Event)
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.549 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.572 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.573 2 INFO nova.virt.libvirt.driver [-] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Instance spawned successfully.
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.574 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.577 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.597 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.597 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.598 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.598 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.599 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.599 2 DEBUG nova.virt.libvirt.driver [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.603 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.655 2 INFO nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Took 18.19 seconds to spawn the instance on the hypervisor.
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.656 2 DEBUG nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.765 2 INFO nova.compute.manager [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Took 20.14 seconds to build instance.
Oct 11 09:23:54 compute-0 nova_compute[260935]: 2025-10-11 09:23:54.789 2 DEBUG oslo_concurrency.lockutils [None req-359157f5-4620-47af-88e8-b12b212b7fe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:23:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:23:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:23:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:23:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2468: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 58 KiB/s wr, 252 op/s
Oct 11 09:23:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:23:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:23:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:23:54
Oct 11 09:23:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:23:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:23:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['backups', 'vms', '.mgr', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.log']
Oct 11 09:23:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:23:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:23:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:23:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:23:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:23:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:23:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:23:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:23:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:23:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:23:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:23:56 compute-0 ceph-mon[74313]: pgmap v2468: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 58 KiB/s wr, 252 op/s
Oct 11 09:23:56 compute-0 nova_compute[260935]: 2025-10-11 09:23:56.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:56 compute-0 nova_compute[260935]: 2025-10-11 09:23:56.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:23:56 compute-0 nova_compute[260935]: 2025-10-11 09:23:56.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 09:23:56 compute-0 nova_compute[260935]: 2025-10-11 09:23:56.727 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 09:23:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2469: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 47 KiB/s wr, 203 op/s
Oct 11 09:23:57 compute-0 nova_compute[260935]: 2025-10-11 09:23:57.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:58 compute-0 ceph-mon[74313]: pgmap v2469: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 47 KiB/s wr, 203 op/s
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.085 2 DEBUG nova.compute.manager [req-1d5a3aad-541d-42b0-ad7c-7d76e3311bb3 req-80853ffe-3399-4e2e-9a62-9c15cf3bb31b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received event network-changed-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.086 2 DEBUG nova.compute.manager [req-1d5a3aad-541d-42b0-ad7c-7d76e3311bb3 req-80853ffe-3399-4e2e-9a62-9c15cf3bb31b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Refreshing instance network info cache due to event network-changed-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.087 2 DEBUG oslo_concurrency.lockutils [req-1d5a3aad-541d-42b0-ad7c-7d76e3311bb3 req-80853ffe-3399-4e2e-9a62-9c15cf3bb31b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.087 2 DEBUG oslo_concurrency.lockutils [req-1d5a3aad-541d-42b0-ad7c-7d76e3311bb3 req-80853ffe-3399-4e2e-9a62-9c15cf3bb31b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.087 2 DEBUG nova.network.neutron [req-1d5a3aad-541d-42b0-ad7c-7d76e3311bb3 req-80853ffe-3399-4e2e-9a62-9c15cf3bb31b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Refreshing network info cache for port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.264 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.265 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.266 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.267 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.267 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.269 2 INFO nova.compute.manager [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Terminating instance
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.271 2 DEBUG nova.compute.manager [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:23:58 compute-0 kernel: tapba6bfb6c-8b (unregistering): left promiscuous mode
Oct 11 09:23:58 compute-0 NetworkManager[44960]: <info>  [1760174638.3301] device (tapba6bfb6c-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:23:58 compute-0 ovn_controller[152945]: 2025-10-11T09:23:58Z|01307|binding|INFO|Releasing lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca from this chassis (sb_readonly=0)
Oct 11 09:23:58 compute-0 ovn_controller[152945]: 2025-10-11T09:23:58Z|01308|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca down in Southbound
Oct 11 09:23:58 compute-0 ovn_controller[152945]: 2025-10-11T09:23:58Z|01309|binding|INFO|Removing iface tapba6bfb6c-8b ovn-installed in OVS
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.348 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:63 10.100.0.5'], port_security=['fa:16:3e:d6:ce:63 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '527bff5a-2d35-406a-8702-f80298a22342', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a8a4a9-2613-497d-90d8-59ebfcd1b053, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.350 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca in datapath a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 unbound from our chassis
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.352 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.353 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2eac6d5c-8cb8-4f40-b97f-1d93da1ee74b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.354 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 namespace which is not needed anymore
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:58 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct 11 09:23:58 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d0000007b.scope: Consumed 8.435s CPU time.
Oct 11 09:23:58 compute-0 systemd-machined[215705]: Machine qemu-146-instance-0000007b terminated.
Oct 11 09:23:58 compute-0 kernel: tapba6bfb6c-8b: entered promiscuous mode
Oct 11 09:23:58 compute-0 NetworkManager[44960]: <info>  [1760174638.4919] manager: (tapba6bfb6c-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/525)
Oct 11 09:23:58 compute-0 kernel: tapba6bfb6c-8b (unregistering): left promiscuous mode
Oct 11 09:23:58 compute-0 ovn_controller[152945]: 2025-10-11T09:23:58Z|01310|binding|INFO|Claiming lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for this chassis.
Oct 11 09:23:58 compute-0 ovn_controller[152945]: 2025-10-11T09:23:58Z|01311|binding|INFO|ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca: Claiming fa:16:3e:d6:ce:63 10.100.0.5
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.516 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:63 10.100.0.5'], port_security=['fa:16:3e:d6:ce:63 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '527bff5a-2d35-406a-8702-f80298a22342', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a8a4a9-2613-497d-90d8-59ebfcd1b053, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:23:58 compute-0 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[394906]: [NOTICE]   (394912) : haproxy version is 2.8.14-c23fe91
Oct 11 09:23:58 compute-0 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[394906]: [NOTICE]   (394912) : path to executable is /usr/sbin/haproxy
Oct 11 09:23:58 compute-0 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[394906]: [WARNING]  (394912) : Exiting Master process...
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.537 2 INFO nova.virt.libvirt.driver [-] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Instance destroyed successfully.
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.537 2 DEBUG nova.objects.instance [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid 527bff5a-2d35-406a-8702-f80298a22342 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:23:58 compute-0 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[394906]: [WARNING]  (394912) : Exiting Master process...
Oct 11 09:23:58 compute-0 ovn_controller[152945]: 2025-10-11T09:23:58Z|01312|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca ovn-installed in OVS
Oct 11 09:23:58 compute-0 ovn_controller[152945]: 2025-10-11T09:23:58Z|01313|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca up in Southbound
Oct 11 09:23:58 compute-0 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[394906]: [ALERT]    (394912) : Current worker (394914) exited with code 143 (Terminated)
Oct 11 09:23:58 compute-0 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[394906]: [WARNING]  (394912) : All workers exited. Exiting... (0)
Oct 11 09:23:58 compute-0 ovn_controller[152945]: 2025-10-11T09:23:58Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:08:14 10.100.0.10
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:58 compute-0 systemd[1]: libpod-a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1.scope: Deactivated successfully.
Oct 11 09:23:58 compute-0 conmon[394906]: conmon a287c3e2c1a13e36c7b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1.scope/container/memory.events
Oct 11 09:23:58 compute-0 ovn_controller[152945]: 2025-10-11T09:23:58Z|01314|binding|INFO|Releasing lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca from this chassis (sb_readonly=1)
Oct 11 09:23:58 compute-0 ovn_controller[152945]: 2025-10-11T09:23:58Z|01315|binding|INFO|Removing iface tapba6bfb6c-8b ovn-installed in OVS
Oct 11 09:23:58 compute-0 ovn_controller[152945]: 2025-10-11T09:23:58Z|01316|if_status|INFO|Dropped 1 log messages in last 78 seconds (most recently, 78 seconds ago) due to excessive rate
Oct 11 09:23:58 compute-0 ovn_controller[152945]: 2025-10-11T09:23:58Z|01317|if_status|INFO|Not setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca down as sb is readonly
Oct 11 09:23:58 compute-0 ovn_controller[152945]: 2025-10-11T09:23:58Z|01318|binding|INFO|Releasing lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca from this chassis (sb_readonly=0)
Oct 11 09:23:58 compute-0 ovn_controller[152945]: 2025-10-11T09:23:58Z|01319|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca down in Southbound
Oct 11 09:23:58 compute-0 podman[395283]: 2025-10-11 09:23:58.553197122 +0000 UTC m=+0.090391612 container died a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.560 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:63 10.100.0.5'], port_security=['fa:16:3e:d6:ce:63 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '527bff5a-2d35-406a-8702-f80298a22342', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a8a4a9-2613-497d-90d8-59ebfcd1b053, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.564 2 DEBUG nova.virt.libvirt.vif [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:23:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659866276',display_name='tempest-TestNetworkBasicOps-server-1659866276',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659866276',id=123,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD5xFqm8IPc+0j24PucHdWviLqVsYMHOSZIhiNWh/27UIL+IzWKFcQXG0H5lF53N0KOCicqByqWqQ/NMVHseguyx/gUQwyqXnA+qdRlYqxWLbxA13mTZNbIWDUUTRGaKUQ==',key_name='tempest-TestNetworkBasicOps-1951063949',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:23:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-slz4kjjw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:23:51Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=527bff5a-2d35-406a-8702-f80298a22342,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.565 2 DEBUG nova.network.os_vif_util [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.566 2 DEBUG nova.network.os_vif_util [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.566 2 DEBUG os_vif [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.569 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba6bfb6c-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.579 2 INFO os_vif [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b')
Oct 11 09:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1-userdata-shm.mount: Deactivated successfully.
Oct 11 09:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-19190283d7b8624e3bb625f0ca81497210fb38475de1d71735aa96791be754ca-merged.mount: Deactivated successfully.
Oct 11 09:23:58 compute-0 podman[395283]: 2025-10-11 09:23:58.627769027 +0000 UTC m=+0.164963517 container cleanup a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:23:58 compute-0 systemd[1]: libpod-conmon-a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1.scope: Deactivated successfully.
Oct 11 09:23:58 compute-0 podman[395339]: 2025-10-11 09:23:58.701259281 +0000 UTC m=+0.050192818 container remove a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.711 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c0136503-8e4b-4079-8b9a-39b1d215544f]: (4, ('Sat Oct 11 09:23:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 (a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1)\na287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1\nSat Oct 11 09:23:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 (a287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1)\na287c3e2c1a13e36c7b223b0999b063f29ee7f65b1e8f71cff7b574f9d71dbf1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.714 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c266f070-8ed7-46b2-b08c-a2e539df5414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.715 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7aa5898-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:23:58 compute-0 kernel: tapa7aa5898-d0: left promiscuous mode
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.727 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8cdd4a9f-531d-4835-838f-4116219d4bfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:58 compute-0 nova_compute[260935]: 2025-10-11 09:23:58.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.761 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[38edcec9-4573-4b9f-99ad-8782483f9745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.764 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2d16754d-3976-4186-964c-dcc2a8f8bc5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.790 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[264b4710-6cd9-49ec-81a7-81a68ec0ec89]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647450, 'reachable_time': 36469, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395354, 'error': None, 'target': 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:58 compute-0 systemd[1]: run-netns-ovnmeta\x2da7aa5898\x2ddcd8\x2d41c5\x2d81e2\x2d5c41061a1fb2.mount: Deactivated successfully.
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.795 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.795 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[911d2a12-5e1f-4004-b832-72a913101edf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.796 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca in datapath a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 unbound from our chassis
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.800 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.800 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[74b365f7-0a65-4f5a-8f94-35df51782449]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.801 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca in datapath a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 unbound from our chassis
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.805 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:23:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:23:58.805 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc0df4cf-13e3-4399-9499-190590003fb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:23:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2470: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 45 KiB/s wr, 232 op/s
Oct 11 09:23:59 compute-0 nova_compute[260935]: 2025-10-11 09:23:59.020 2 INFO nova.virt.libvirt.driver [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Deleting instance files /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342_del
Oct 11 09:23:59 compute-0 nova_compute[260935]: 2025-10-11 09:23:59.021 2 INFO nova.virt.libvirt.driver [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Deletion of /var/lib/nova/instances/527bff5a-2d35-406a-8702-f80298a22342_del complete
Oct 11 09:23:59 compute-0 nova_compute[260935]: 2025-10-11 09:23:59.083 2 INFO nova.compute.manager [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 11 09:23:59 compute-0 nova_compute[260935]: 2025-10-11 09:23:59.084 2 DEBUG oslo.service.loopingcall [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:23:59 compute-0 nova_compute[260935]: 2025-10-11 09:23:59.084 2 DEBUG nova.compute.manager [-] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:23:59 compute-0 nova_compute[260935]: 2025-10-11 09:23:59.084 2 DEBUG nova.network.neutron [-] [instance: 527bff5a-2d35-406a-8702-f80298a22342] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:23:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:24:00 compute-0 ceph-mon[74313]: pgmap v2470: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 45 KiB/s wr, 232 op/s
Oct 11 09:24:00 compute-0 nova_compute[260935]: 2025-10-11 09:24:00.333 2 DEBUG nova.compute.manager [req-257906a9-d490-4c5f-9297-26a91e020d87 req-c847e475-c92d-46c7-a009-7d90b37b1f8e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received event network-vif-unplugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:00 compute-0 nova_compute[260935]: 2025-10-11 09:24:00.334 2 DEBUG oslo_concurrency.lockutils [req-257906a9-d490-4c5f-9297-26a91e020d87 req-c847e475-c92d-46c7-a009-7d90b37b1f8e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:00 compute-0 nova_compute[260935]: 2025-10-11 09:24:00.334 2 DEBUG oslo_concurrency.lockutils [req-257906a9-d490-4c5f-9297-26a91e020d87 req-c847e475-c92d-46c7-a009-7d90b37b1f8e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:00 compute-0 nova_compute[260935]: 2025-10-11 09:24:00.334 2 DEBUG oslo_concurrency.lockutils [req-257906a9-d490-4c5f-9297-26a91e020d87 req-c847e475-c92d-46c7-a009-7d90b37b1f8e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:00 compute-0 nova_compute[260935]: 2025-10-11 09:24:00.334 2 DEBUG nova.compute.manager [req-257906a9-d490-4c5f-9297-26a91e020d87 req-c847e475-c92d-46c7-a009-7d90b37b1f8e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] No waiting events found dispatching network-vif-unplugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:24:00 compute-0 nova_compute[260935]: 2025-10-11 09:24:00.334 2 DEBUG nova.compute.manager [req-257906a9-d490-4c5f-9297-26a91e020d87 req-c847e475-c92d-46c7-a009-7d90b37b1f8e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received event network-vif-unplugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:24:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2471: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 39 KiB/s wr, 198 op/s
Oct 11 09:24:01 compute-0 nova_compute[260935]: 2025-10-11 09:24:01.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:24:02 compute-0 ceph-mon[74313]: pgmap v2471: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 39 KiB/s wr, 198 op/s
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.060443) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174642060479, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2089, "num_deletes": 253, "total_data_size": 3286330, "memory_usage": 3336688, "flush_reason": "Manual Compaction"}
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174642074390, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3229379, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49694, "largest_seqno": 51782, "table_properties": {"data_size": 3219977, "index_size": 5896, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19631, "raw_average_key_size": 20, "raw_value_size": 3201015, "raw_average_value_size": 3324, "num_data_blocks": 260, "num_entries": 963, "num_filter_entries": 963, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760174429, "oldest_key_time": 1760174429, "file_creation_time": 1760174642, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 13991 microseconds, and 7684 cpu microseconds.
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.074431) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3229379 bytes OK
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.074452) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.076785) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.076800) EVENT_LOG_v1 {"time_micros": 1760174642076794, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.076848) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3277550, prev total WAL file size 3277550, number of live WAL files 2.
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.077682) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3153KB)], [116(8325KB)]
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174642077708, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 11754680, "oldest_snapshot_seqno": -1}
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7316 keys, 10030321 bytes, temperature: kUnknown
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174642154290, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10030321, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9981523, "index_size": 29422, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 189366, "raw_average_key_size": 25, "raw_value_size": 9850741, "raw_average_value_size": 1346, "num_data_blocks": 1152, "num_entries": 7316, "num_filter_entries": 7316, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760174642, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.154574) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10030321 bytes
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.157735) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.3 rd, 130.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.1 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7838, records dropped: 522 output_compression: NoCompression
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.157753) EVENT_LOG_v1 {"time_micros": 1760174642157744, "job": 70, "event": "compaction_finished", "compaction_time_micros": 76698, "compaction_time_cpu_micros": 28042, "output_level": 6, "num_output_files": 1, "total_output_size": 10030321, "num_input_records": 7838, "num_output_records": 7316, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174642158570, "job": 70, "event": "table_file_deletion", "file_number": 118}
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174642161411, "job": 70, "event": "table_file_deletion", "file_number": 116}
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.077625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.161502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.161508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.161510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.161512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:24:02 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:02.161514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:24:02 compute-0 nova_compute[260935]: 2025-10-11 09:24:02.493 2 DEBUG nova.compute.manager [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:02 compute-0 nova_compute[260935]: 2025-10-11 09:24:02.493 2 DEBUG oslo_concurrency.lockutils [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:02 compute-0 nova_compute[260935]: 2025-10-11 09:24:02.493 2 DEBUG oslo_concurrency.lockutils [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:02 compute-0 nova_compute[260935]: 2025-10-11 09:24:02.493 2 DEBUG oslo_concurrency.lockutils [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:02 compute-0 nova_compute[260935]: 2025-10-11 09:24:02.493 2 DEBUG nova.compute.manager [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] No waiting events found dispatching network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:24:02 compute-0 nova_compute[260935]: 2025-10-11 09:24:02.494 2 WARNING nova.compute.manager [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received unexpected event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for instance with vm_state active and task_state deleting.
Oct 11 09:24:02 compute-0 nova_compute[260935]: 2025-10-11 09:24:02.494 2 DEBUG nova.compute.manager [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:02 compute-0 nova_compute[260935]: 2025-10-11 09:24:02.494 2 DEBUG oslo_concurrency.lockutils [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "527bff5a-2d35-406a-8702-f80298a22342-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:02 compute-0 nova_compute[260935]: 2025-10-11 09:24:02.494 2 DEBUG oslo_concurrency.lockutils [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:02 compute-0 nova_compute[260935]: 2025-10-11 09:24:02.494 2 DEBUG oslo_concurrency.lockutils [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:02 compute-0 nova_compute[260935]: 2025-10-11 09:24:02.494 2 DEBUG nova.compute.manager [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] No waiting events found dispatching network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:24:02 compute-0 nova_compute[260935]: 2025-10-11 09:24:02.495 2 WARNING nova.compute.manager [req-1d7e722d-7dd9-4174-8ddd-140d91e71620 req-8d96fe77-f3e2-479c-adc8-1f6a50badea3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Received unexpected event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for instance with vm_state active and task_state deleting.
Oct 11 09:24:02 compute-0 nova_compute[260935]: 2025-10-11 09:24:02.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 39 KiB/s wr, 241 op/s
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.061 2 DEBUG nova.network.neutron [-] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.075 2 INFO nova.compute.manager [-] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Took 3.99 seconds to deallocate network for instance.
Oct 11 09:24:03 compute-0 ceph-mon[74313]: pgmap v2472: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 39 KiB/s wr, 241 op/s
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.156 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.157 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.298 2 DEBUG nova.network.neutron [req-1d5a3aad-541d-42b0-ad7c-7d76e3311bb3 req-80853ffe-3399-4e2e-9a62-9c15cf3bb31b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Updated VIF entry in instance network info cache for port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.298 2 DEBUG nova.network.neutron [req-1d5a3aad-541d-42b0-ad7c-7d76e3311bb3 req-80853ffe-3399-4e2e-9a62-9c15cf3bb31b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Updating instance_info_cache with network_info: [{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.315 2 DEBUG oslo_concurrency.lockutils [req-1d5a3aad-541d-42b0-ad7c-7d76e3311bb3 req-80853ffe-3399-4e2e-9a62-9c15cf3bb31b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-527bff5a-2d35-406a-8702-f80298a22342" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.317 2 DEBUG oslo_concurrency.processutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.466 2 DEBUG nova.compute.manager [req-2d8b5391-b302-481f-91f7-ca45950a270b req-0e889bd1-5c82-4797-aa18-8046eb2dd336 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-changed-85875c6f-3380-42bc-9e4b-0b66df391e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.467 2 DEBUG nova.compute.manager [req-2d8b5391-b302-481f-91f7-ca45950a270b req-0e889bd1-5c82-4797-aa18-8046eb2dd336 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Refreshing instance network info cache due to event network-changed-85875c6f-3380-42bc-9e4b-0b66df391e0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.467 2 DEBUG oslo_concurrency.lockutils [req-2d8b5391-b302-481f-91f7-ca45950a270b req-0e889bd1-5c82-4797-aa18-8046eb2dd336 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.468 2 DEBUG oslo_concurrency.lockutils [req-2d8b5391-b302-481f-91f7-ca45950a270b req-0e889bd1-5c82-4797-aa18-8046eb2dd336 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.468 2 DEBUG nova.network.neutron [req-2d8b5391-b302-481f-91f7-ca45950a270b req-0e889bd1-5c82-4797-aa18-8046eb2dd336 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Refreshing network info cache for port 85875c6f-3380-42bc-9e4b-0b66df391e0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:24:03 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/36327719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.774 2 DEBUG oslo_concurrency.processutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.783 2 DEBUG nova.compute.provider_tree [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.829 2 DEBUG nova.scheduler.client.report [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.865 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:03 compute-0 nova_compute[260935]: 2025-10-11 09:24:03.901 2 INFO nova.scheduler.client.report [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance 527bff5a-2d35-406a-8702-f80298a22342
Oct 11 09:24:04 compute-0 nova_compute[260935]: 2025-10-11 09:24:04.001 2 DEBUG oslo_concurrency.lockutils [None req-bf3d38fa-9d60-4481-84a4-4eda7f81908e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "527bff5a-2d35-406a-8702-f80298a22342" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:24:04 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/36327719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:24:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 14 KiB/s wr, 149 op/s
Oct 11 09:24:05 compute-0 nova_compute[260935]: 2025-10-11 09:24:05.139 2 DEBUG nova.network.neutron [req-2d8b5391-b302-481f-91f7-ca45950a270b req-0e889bd1-5c82-4797-aa18-8046eb2dd336 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updated VIF entry in instance network info cache for port 85875c6f-3380-42bc-9e4b-0b66df391e0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:24:05 compute-0 nova_compute[260935]: 2025-10-11 09:24:05.141 2 DEBUG nova.network.neutron [req-2d8b5391-b302-481f-91f7-ca45950a270b req-0e889bd1-5c82-4797-aa18-8046eb2dd336 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updating instance_info_cache with network_info: [{"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:24:05 compute-0 nova_compute[260935]: 2025-10-11 09:24:05.172 2 DEBUG oslo_concurrency.lockutils [req-2d8b5391-b302-481f-91f7-ca45950a270b req-0e889bd1-5c82-4797-aa18-8046eb2dd336 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0037367807636582667 of space, bias 1.0, pg target 1.12103422909748 quantized to 32 (current 32)
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:24:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:24:05 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 11 09:24:05 compute-0 ceph-mon[74313]: pgmap v2473: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 14 KiB/s wr, 149 op/s
Oct 11 09:24:05 compute-0 nova_compute[260935]: 2025-10-11 09:24:05.620 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:05 compute-0 nova_compute[260935]: 2025-10-11 09:24:05.620 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:05 compute-0 nova_compute[260935]: 2025-10-11 09:24:05.621 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:05 compute-0 nova_compute[260935]: 2025-10-11 09:24:05.621 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:05 compute-0 nova_compute[260935]: 2025-10-11 09:24:05.622 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:05 compute-0 nova_compute[260935]: 2025-10-11 09:24:05.624 2 INFO nova.compute.manager [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Terminating instance
Oct 11 09:24:05 compute-0 nova_compute[260935]: 2025-10-11 09:24:05.626 2 DEBUG nova.compute.manager [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:24:05 compute-0 nova_compute[260935]: 2025-10-11 09:24:05.631 2 DEBUG nova.compute.manager [req-100a8278-0c1c-4b94-a562-5d13925ed047 req-8938b891-0c37-4681-a3c4-056e1b4a220a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:05 compute-0 nova_compute[260935]: 2025-10-11 09:24:05.632 2 DEBUG nova.compute.manager [req-100a8278-0c1c-4b94-a562-5d13925ed047 req-8938b891-0c37-4681-a3c4-056e1b4a220a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing instance network info cache due to event network-changed-0f516e4b-c284-4151-944c-8a7d98f695b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:24:05 compute-0 nova_compute[260935]: 2025-10-11 09:24:05.633 2 DEBUG oslo_concurrency.lockutils [req-100a8278-0c1c-4b94-a562-5d13925ed047 req-8938b891-0c37-4681-a3c4-056e1b4a220a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:24:05 compute-0 nova_compute[260935]: 2025-10-11 09:24:05.633 2 DEBUG oslo_concurrency.lockutils [req-100a8278-0c1c-4b94-a562-5d13925ed047 req-8938b891-0c37-4681-a3c4-056e1b4a220a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:24:05 compute-0 nova_compute[260935]: 2025-10-11 09:24:05.633 2 DEBUG nova.network.neutron [req-100a8278-0c1c-4b94-a562-5d13925ed047 req-8938b891-0c37-4681-a3c4-056e1b4a220a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Refreshing network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:24:06 compute-0 kernel: tap0f516e4b-c2 (unregistering): left promiscuous mode
Oct 11 09:24:06 compute-0 NetworkManager[44960]: <info>  [1760174646.6924] device (tap0f516e4b-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:24:06 compute-0 ovn_controller[152945]: 2025-10-11T09:24:06Z|01320|binding|INFO|Releasing lport 0f516e4b-c284-4151-944c-8a7d98f695b5 from this chassis (sb_readonly=0)
Oct 11 09:24:06 compute-0 ovn_controller[152945]: 2025-10-11T09:24:06Z|01321|binding|INFO|Setting lport 0f516e4b-c284-4151-944c-8a7d98f695b5 down in Southbound
Oct 11 09:24:06 compute-0 ovn_controller[152945]: 2025-10-11T09:24:06Z|01322|binding|INFO|Removing iface tap0f516e4b-c2 ovn-installed in OVS
Oct 11 09:24:06 compute-0 nova_compute[260935]: 2025-10-11 09:24:06.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:06.725 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:08:14 10.100.0.10'], port_security=['fa:16:3e:4b:08:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ef21f945-0076-48fa-8d22-c5376e26d278', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0007a0de-db42-4add-9b55-6d92ceffa860', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a13210f275984f3eadf85eba0c749d99', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'fd52e2b9-19bf-4137-a511-25dd1d4c9f0e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2271296-3ceb-4987-affd-a0b4da64fffe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0f516e4b-c284-4151-944c-8a7d98f695b5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:24:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:06.726 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0f516e4b-c284-4151-944c-8a7d98f695b5 in datapath 0007a0de-db42-4add-9b55-6d92ceffa860 unbound from our chassis
Oct 11 09:24:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:06.730 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0007a0de-db42-4add-9b55-6d92ceffa860, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:24:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:06.733 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c246af82-a525-4be5-bf80-026a4858fccd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:06.733 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 namespace which is not needed anymore
Oct 11 09:24:06 compute-0 nova_compute[260935]: 2025-10-11 09:24:06.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:06 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000079.scope: Deactivated successfully.
Oct 11 09:24:06 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000079.scope: Consumed 13.489s CPU time.
Oct 11 09:24:06 compute-0 systemd-machined[215705]: Machine qemu-145-instance-00000079 terminated.
Oct 11 09:24:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 14 KiB/s wr, 149 op/s
Oct 11 09:24:06 compute-0 nova_compute[260935]: 2025-10-11 09:24:06.879 2 INFO nova.virt.libvirt.driver [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Instance destroyed successfully.
Oct 11 09:24:06 compute-0 nova_compute[260935]: 2025-10-11 09:24:06.880 2 DEBUG nova.objects.instance [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lazy-loading 'resources' on Instance uuid ef21f945-0076-48fa-8d22-c5376e26d278 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:24:06 compute-0 nova_compute[260935]: 2025-10-11 09:24:06.913 2 DEBUG nova.virt.libvirt.vif [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-11T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1752003988',display_name='tempest-TestShelveInstance-server-1752003988',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1752003988',id=121,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBc59pqsdiIpcBTYIi36cHQ9pbnLBXoyhUafGO6McKtc7V938v9xkK/F0LUwOp/S6AlI8sKdzLvrGPTJbVDXHRRnlCu0B+lzAS2vfK523X9mqHyrpvhTkQghQuITH7BALg==',key_name='tempest-TestShelveInstance-1436929269',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:23:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a13210f275984f3eadf85eba0c749d99',ramdisk_id='',reservation_id='r-9i2gklxc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-243029510',owner_user_name='tempest-TestShelveInstance-243029510-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:23:47Z,user_data=None,user_id='67e20c1f7ae24f2f8b9e25e0d8ce61ca',uuid=ef21f945-0076-48fa-8d22-c5376e26d278,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:24:06 compute-0 nova_compute[260935]: 2025-10-11 09:24:06.914 2 DEBUG nova.network.os_vif_util [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converting VIF {"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:24:06 compute-0 nova_compute[260935]: 2025-10-11 09:24:06.915 2 DEBUG nova.network.os_vif_util [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:24:06 compute-0 nova_compute[260935]: 2025-10-11 09:24:06.915 2 DEBUG os_vif [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:24:06 compute-0 nova_compute[260935]: 2025-10-11 09:24:06.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:06 compute-0 nova_compute[260935]: 2025-10-11 09:24:06.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f516e4b-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:06 compute-0 nova_compute[260935]: 2025-10-11 09:24:06.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:06 compute-0 nova_compute[260935]: 2025-10-11 09:24:06.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:06 compute-0 nova_compute[260935]: 2025-10-11 09:24:06.925 2 INFO os_vif [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:08:14,bridge_name='br-int',has_traffic_filtering=True,id=0f516e4b-c284-4151-944c-8a7d98f695b5,network=Network(0007a0de-db42-4add-9b55-6d92ceffa860),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f516e4b-c2')
Oct 11 09:24:07 compute-0 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[394551]: [NOTICE]   (394555) : haproxy version is 2.8.14-c23fe91
Oct 11 09:24:07 compute-0 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[394551]: [NOTICE]   (394555) : path to executable is /usr/sbin/haproxy
Oct 11 09:24:07 compute-0 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[394551]: [WARNING]  (394555) : Exiting Master process...
Oct 11 09:24:07 compute-0 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[394551]: [WARNING]  (394555) : Exiting Master process...
Oct 11 09:24:07 compute-0 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[394551]: [ALERT]    (394555) : Current worker (394557) exited with code 143 (Terminated)
Oct 11 09:24:07 compute-0 neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860[394551]: [WARNING]  (394555) : All workers exited. Exiting... (0)
Oct 11 09:24:07 compute-0 systemd[1]: libpod-22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2.scope: Deactivated successfully.
Oct 11 09:24:07 compute-0 conmon[394551]: conmon 22cf63d83641e03cba16 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2.scope/container/memory.events
Oct 11 09:24:07 compute-0 podman[395403]: 2025-10-11 09:24:07.135568563 +0000 UTC m=+0.293494174 container died 22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 09:24:07 compute-0 ceph-mon[74313]: pgmap v2474: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 14 KiB/s wr, 149 op/s
Oct 11 09:24:07 compute-0 nova_compute[260935]: 2025-10-11 09:24:07.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2-userdata-shm.mount: Deactivated successfully.
Oct 11 09:24:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fd13644a85799175e493bbd75cfd83641987306523ad37c33480641586b4542-merged.mount: Deactivated successfully.
Oct 11 09:24:07 compute-0 nova_compute[260935]: 2025-10-11 09:24:07.740 2 DEBUG nova.compute.manager [req-933a6697-0892-49dc-9960-a1bd9ceb4304 req-85a81438-6701-4512-a33a-8b4e984409a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-unplugged-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:07 compute-0 nova_compute[260935]: 2025-10-11 09:24:07.740 2 DEBUG oslo_concurrency.lockutils [req-933a6697-0892-49dc-9960-a1bd9ceb4304 req-85a81438-6701-4512-a33a-8b4e984409a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:07 compute-0 nova_compute[260935]: 2025-10-11 09:24:07.741 2 DEBUG oslo_concurrency.lockutils [req-933a6697-0892-49dc-9960-a1bd9ceb4304 req-85a81438-6701-4512-a33a-8b4e984409a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:07 compute-0 nova_compute[260935]: 2025-10-11 09:24:07.741 2 DEBUG oslo_concurrency.lockutils [req-933a6697-0892-49dc-9960-a1bd9ceb4304 req-85a81438-6701-4512-a33a-8b4e984409a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:07 compute-0 nova_compute[260935]: 2025-10-11 09:24:07.741 2 DEBUG nova.compute.manager [req-933a6697-0892-49dc-9960-a1bd9ceb4304 req-85a81438-6701-4512-a33a-8b4e984409a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] No waiting events found dispatching network-vif-unplugged-0f516e4b-c284-4151-944c-8a7d98f695b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:24:07 compute-0 nova_compute[260935]: 2025-10-11 09:24:07.742 2 DEBUG nova.compute.manager [req-933a6697-0892-49dc-9960-a1bd9ceb4304 req-85a81438-6701-4512-a33a-8b4e984409a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-unplugged-0f516e4b-c284-4151-944c-8a7d98f695b5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:24:07 compute-0 nova_compute[260935]: 2025-10-11 09:24:07.785 2 DEBUG nova.network.neutron [req-100a8278-0c1c-4b94-a562-5d13925ed047 req-8938b891-0c37-4681-a3c4-056e1b4a220a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updated VIF entry in instance network info cache for port 0f516e4b-c284-4151-944c-8a7d98f695b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:24:07 compute-0 nova_compute[260935]: 2025-10-11 09:24:07.785 2 DEBUG nova.network.neutron [req-100a8278-0c1c-4b94-a562-5d13925ed047 req-8938b891-0c37-4681-a3c4-056e1b4a220a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [{"id": "0f516e4b-c284-4151-944c-8a7d98f695b5", "address": "fa:16:3e:4b:08:14", "network": {"id": "0007a0de-db42-4add-9b55-6d92ceffa860", "bridge": "br-int", "label": "tempest-TestShelveInstance-683353345-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a13210f275984f3eadf85eba0c749d99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f516e4b-c2", "ovs_interfaceid": "0f516e4b-c284-4151-944c-8a7d98f695b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:24:07 compute-0 nova_compute[260935]: 2025-10-11 09:24:07.807 2 DEBUG oslo_concurrency.lockutils [req-100a8278-0c1c-4b94-a562-5d13925ed047 req-8938b891-0c37-4681-a3c4-056e1b4a220a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ef21f945-0076-48fa-8d22-c5376e26d278" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:24:07 compute-0 podman[395403]: 2025-10-11 09:24:07.974651911 +0000 UTC m=+1.132577522 container cleanup 22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 11 09:24:07 compute-0 systemd[1]: libpod-conmon-22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2.scope: Deactivated successfully.
Oct 11 09:24:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 476 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.0 MiB/s wr, 182 op/s
Oct 11 09:24:08 compute-0 podman[395463]: 2025-10-11 09:24:08.968259309 +0000 UTC m=+0.955967217 container remove 22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:24:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:08.978 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8cfb0247-47df-4981-b96f-add7c1d88a5a]: (4, ('Sat Oct 11 09:24:06 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 (22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2)\n22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2\nSat Oct 11 09:24:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 (22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2)\n22cf63d83641e03cba16870a5f372c09f901e5786a22ad9e01ecb7c96b7ff5b2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:08.980 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[30b8f885-5e88-4f2c-9a0a-8980da76bdcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:08.982 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0007a0de-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:08 compute-0 nova_compute[260935]: 2025-10-11 09:24:08.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:08 compute-0 kernel: tap0007a0de-d0: left promiscuous mode
Oct 11 09:24:09 compute-0 nova_compute[260935]: 2025-10-11 09:24:09.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:09.010 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7979659f-61e2-4844-8eaf-a1f76b8b57e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:09.038 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fe42606d-7846-473d-a58e-22a521333620]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:09.039 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4355a677-85f9-4afb-b2d9-bd869753c058]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:09.062 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[16797d34-a1f2-4eb4-b255-970dc0c2cd57]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646786, 'reachable_time': 20036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395479, 'error': None, 'target': 'ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d0007a0de\x2ddb42\x2d4add\x2d9b55\x2d6d92ceffa860.mount: Deactivated successfully.
Oct 11 09:24:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:09.066 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0007a0de-db42-4add-9b55-6d92ceffa860 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:24:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:09.066 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2d79e4-32ca-411d-9fb4-9dc462d9788c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:24:09 compute-0 nova_compute[260935]: 2025-10-11 09:24:09.582 2 INFO nova.virt.libvirt.driver [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Deleting instance files /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278_del
Oct 11 09:24:09 compute-0 nova_compute[260935]: 2025-10-11 09:24:09.582 2 INFO nova.virt.libvirt.driver [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Deletion of /var/lib/nova/instances/ef21f945-0076-48fa-8d22-c5376e26d278_del complete
Oct 11 09:24:09 compute-0 nova_compute[260935]: 2025-10-11 09:24:09.663 2 INFO nova.compute.manager [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Took 4.04 seconds to destroy the instance on the hypervisor.
Oct 11 09:24:09 compute-0 nova_compute[260935]: 2025-10-11 09:24:09.664 2 DEBUG oslo.service.loopingcall [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:24:09 compute-0 nova_compute[260935]: 2025-10-11 09:24:09.665 2 DEBUG nova.compute.manager [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:24:09 compute-0 nova_compute[260935]: 2025-10-11 09:24:09.665 2 DEBUG nova.network.neutron [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:24:09 compute-0 nova_compute[260935]: 2025-10-11 09:24:09.842 2 DEBUG nova.compute.manager [req-e26b1321-d02c-4178-9efb-cf6fbdcda488 req-0feb0b40-a196-40e9-a4c5-bbc84622ad14 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:09 compute-0 nova_compute[260935]: 2025-10-11 09:24:09.843 2 DEBUG oslo_concurrency.lockutils [req-e26b1321-d02c-4178-9efb-cf6fbdcda488 req-0feb0b40-a196-40e9-a4c5-bbc84622ad14 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:09 compute-0 nova_compute[260935]: 2025-10-11 09:24:09.843 2 DEBUG oslo_concurrency.lockutils [req-e26b1321-d02c-4178-9efb-cf6fbdcda488 req-0feb0b40-a196-40e9-a4c5-bbc84622ad14 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:09 compute-0 nova_compute[260935]: 2025-10-11 09:24:09.844 2 DEBUG oslo_concurrency.lockutils [req-e26b1321-d02c-4178-9efb-cf6fbdcda488 req-0feb0b40-a196-40e9-a4c5-bbc84622ad14 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:09 compute-0 nova_compute[260935]: 2025-10-11 09:24:09.844 2 DEBUG nova.compute.manager [req-e26b1321-d02c-4178-9efb-cf6fbdcda488 req-0feb0b40-a196-40e9-a4c5-bbc84622ad14 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] No waiting events found dispatching network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:24:09 compute-0 nova_compute[260935]: 2025-10-11 09:24:09.844 2 WARNING nova.compute.manager [req-e26b1321-d02c-4178-9efb-cf6fbdcda488 req-0feb0b40-a196-40e9-a4c5-bbc84622ad14 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received unexpected event network-vif-plugged-0f516e4b-c284-4151-944c-8a7d98f695b5 for instance with vm_state active and task_state deleting.
Oct 11 09:24:09 compute-0 ceph-mon[74313]: pgmap v2475: 321 pgs: 321 active+clean; 476 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.0 MiB/s wr, 182 op/s
Oct 11 09:24:10 compute-0 nova_compute[260935]: 2025-10-11 09:24:10.290 2 DEBUG nova.network.neutron [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:24:10 compute-0 nova_compute[260935]: 2025-10-11 09:24:10.317 2 INFO nova.compute.manager [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Took 0.65 seconds to deallocate network for instance.
Oct 11 09:24:10 compute-0 nova_compute[260935]: 2025-10-11 09:24:10.393 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:10 compute-0 nova_compute[260935]: 2025-10-11 09:24:10.394 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:10 compute-0 nova_compute[260935]: 2025-10-11 09:24:10.539 2 DEBUG oslo_concurrency.processutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 476 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 560 KiB/s rd, 1.9 MiB/s wr, 79 op/s
Oct 11 09:24:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:24:11 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3079503864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:24:11 compute-0 nova_compute[260935]: 2025-10-11 09:24:11.125 2 DEBUG oslo_concurrency.processutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:11 compute-0 nova_compute[260935]: 2025-10-11 09:24:11.131 2 DEBUG nova.compute.provider_tree [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:24:11 compute-0 nova_compute[260935]: 2025-10-11 09:24:11.147 2 DEBUG nova.scheduler.client.report [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:24:11 compute-0 nova_compute[260935]: 2025-10-11 09:24:11.170 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:11 compute-0 nova_compute[260935]: 2025-10-11 09:24:11.214 2 INFO nova.scheduler.client.report [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Deleted allocations for instance ef21f945-0076-48fa-8d22-c5376e26d278
Oct 11 09:24:11 compute-0 ovn_controller[152945]: 2025-10-11T09:24:11Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:81:f1:c1 10.100.0.14
Oct 11 09:24:11 compute-0 ovn_controller[152945]: 2025-10-11T09:24:11Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:81:f1:c1 10.100.0.14
Oct 11 09:24:11 compute-0 nova_compute[260935]: 2025-10-11 09:24:11.299 2 DEBUG oslo_concurrency.lockutils [None req-40d26b14-7bb1-44f1-9e06-f6f7cebb71d8 67e20c1f7ae24f2f8b9e25e0d8ce61ca a13210f275984f3eadf85eba0c749d99 - - default default] Lock "ef21f945-0076-48fa-8d22-c5376e26d278" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:11 compute-0 nova_compute[260935]: 2025-10-11 09:24:11.426 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:11 compute-0 nova_compute[260935]: 2025-10-11 09:24:11.426 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:11 compute-0 nova_compute[260935]: 2025-10-11 09:24:11.443 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:24:11 compute-0 nova_compute[260935]: 2025-10-11 09:24:11.532 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:11 compute-0 nova_compute[260935]: 2025-10-11 09:24:11.533 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:11 compute-0 nova_compute[260935]: 2025-10-11 09:24:11.543 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:24:11 compute-0 nova_compute[260935]: 2025-10-11 09:24:11.543 2 INFO nova.compute.claims [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:24:11 compute-0 podman[395503]: 2025-10-11 09:24:11.781662355 +0000 UTC m=+0.080294806 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 11 09:24:11 compute-0 nova_compute[260935]: 2025-10-11 09:24:11.838 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:11 compute-0 nova_compute[260935]: 2025-10-11 09:24:11.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:11 compute-0 nova_compute[260935]: 2025-10-11 09:24:11.956 2 DEBUG nova.compute.manager [req-a6087bb7-aaf1-4fbf-bede-1c483ed4ef53 req-a2e91bb9-d7ca-425f-b398-b87370618de7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Received event network-vif-deleted-0f516e4b-c284-4151-944c-8a7d98f695b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:11 compute-0 ceph-mon[74313]: pgmap v2476: 321 pgs: 321 active+clean; 476 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 560 KiB/s rd, 1.9 MiB/s wr, 79 op/s
Oct 11 09:24:11 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3079503864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:24:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:24:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3903550588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.300 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.306 2 DEBUG nova.compute.provider_tree [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.327 2 DEBUG nova.scheduler.client.report [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.361 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.362 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.435 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.436 2 DEBUG nova.network.neutron [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.456 2 INFO nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.474 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.559 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.561 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.561 2 INFO nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Creating image(s)
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.593 2 DEBUG nova.storage.rbd_utils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.626 2 DEBUG nova.storage.rbd_utils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.660 2 DEBUG nova.storage.rbd_utils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.665 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.748 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.749 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.750 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.751 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.787 2 DEBUG nova.storage.rbd_utils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:24:12 compute-0 nova_compute[260935]: 2025-10-11 09:24:12.792 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 915 KiB/s rd, 2.1 MiB/s wr, 143 op/s
Oct 11 09:24:12 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3903550588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:24:13 compute-0 nova_compute[260935]: 2025-10-11 09:24:13.218 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:13 compute-0 nova_compute[260935]: 2025-10-11 09:24:13.309 2 DEBUG nova.storage.rbd_utils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:24:13 compute-0 nova_compute[260935]: 2025-10-11 09:24:13.404 2 DEBUG nova.policy [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:24:13 compute-0 nova_compute[260935]: 2025-10-11 09:24:13.456 2 DEBUG nova.objects.instance [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:24:13 compute-0 nova_compute[260935]: 2025-10-11 09:24:13.476 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:24:13 compute-0 nova_compute[260935]: 2025-10-11 09:24:13.477 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Ensure instance console log exists: /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:24:13 compute-0 nova_compute[260935]: 2025-10-11 09:24:13.478 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:13 compute-0 nova_compute[260935]: 2025-10-11 09:24:13.478 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:13 compute-0 nova_compute[260935]: 2025-10-11 09:24:13.479 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:13 compute-0 nova_compute[260935]: 2025-10-11 09:24:13.528 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174638.5274398, 527bff5a-2d35-406a-8702-f80298a22342 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:24:13 compute-0 nova_compute[260935]: 2025-10-11 09:24:13.529 2 INFO nova.compute.manager [-] [instance: 527bff5a-2d35-406a-8702-f80298a22342] VM Stopped (Lifecycle Event)
Oct 11 09:24:13 compute-0 nova_compute[260935]: 2025-10-11 09:24:13.557 2 DEBUG nova.compute.manager [None req-b78a5af4-7d30-4843-a9b8-20288124f355 - - - - - -] [instance: 527bff5a-2d35-406a-8702-f80298a22342] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:24:13 compute-0 ceph-mon[74313]: pgmap v2477: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 915 KiB/s rd, 2.1 MiB/s wr, 143 op/s
Oct 11 09:24:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:24:14 compute-0 nova_compute[260935]: 2025-10-11 09:24:14.652 2 DEBUG nova.network.neutron [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Successfully updated port: ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:24:14 compute-0 nova_compute[260935]: 2025-10-11 09:24:14.684 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:24:14 compute-0 nova_compute[260935]: 2025-10-11 09:24:14.684 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:24:14 compute-0 nova_compute[260935]: 2025-10-11 09:24:14.685 2 DEBUG nova.network.neutron [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:24:14 compute-0 nova_compute[260935]: 2025-10-11 09:24:14.793 2 DEBUG nova.compute.manager [req-3004c618-5dce-4d24-a297-c9044294418c req-b118437c-81a0-420c-9200-77efe9ceb834 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Received event network-changed-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:14 compute-0 nova_compute[260935]: 2025-10-11 09:24:14.794 2 DEBUG nova.compute.manager [req-3004c618-5dce-4d24-a297-c9044294418c req-b118437c-81a0-420c-9200-77efe9ceb834 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Refreshing instance network info cache due to event network-changed-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:24:14 compute-0 nova_compute[260935]: 2025-10-11 09:24:14.794 2 DEBUG oslo_concurrency.lockutils [req-3004c618-5dce-4d24-a297-c9044294418c req-b118437c-81a0-420c-9200-77efe9ceb834 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:24:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 601 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Oct 11 09:24:14 compute-0 nova_compute[260935]: 2025-10-11 09:24:14.912 2 DEBUG nova.network.neutron [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:24:15 compute-0 ceph-mon[74313]: pgmap v2478: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 601 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Oct 11 09:24:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:15.220 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:15.221 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:15.222 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:16 compute-0 sudo[395710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:24:16 compute-0 sudo[395710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:16 compute-0 sudo[395710]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:16 compute-0 sudo[395741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:24:16 compute-0 sudo[395741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:16 compute-0 podman[395734]: 2025-10-11 09:24:16.143043026 +0000 UTC m=+0.091605956 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 09:24:16 compute-0 sudo[395741]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:16 compute-0 sudo[395779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:24:16 compute-0 sudo[395779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:16 compute-0 sudo[395779]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:16 compute-0 sudo[395804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 09:24:16 compute-0 sudo[395804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:16 compute-0 sudo[395804]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:24:16 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:24:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:24:16 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:24:16 compute-0 sudo[395850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:24:16 compute-0 sudo[395850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:16 compute-0 sudo[395850]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:16 compute-0 sudo[395875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:24:16 compute-0 sudo[395875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:16 compute-0 sudo[395875]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 601 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Oct 11 09:24:16 compute-0 sudo[395900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:24:16 compute-0 sudo[395900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:16 compute-0 sudo[395900]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:16 compute-0 nova_compute[260935]: 2025-10-11 09:24:16.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:16 compute-0 sudo[395925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:24:16 compute-0 sudo[395925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:17 compute-0 nova_compute[260935]: 2025-10-11 09:24:17.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:24:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:24:17 compute-0 ceph-mon[74313]: pgmap v2479: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 601 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Oct 11 09:24:17 compute-0 sudo[395925]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:24:17 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:24:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:24:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:24:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:24:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:24:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev da967a8e-ce8a-4697-afc1-0f4d16586d3a does not exist
Oct 11 09:24:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev afad6b6f-3d20-420d-8de5-2b9c8b2a2c3f does not exist
Oct 11 09:24:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 54934954-23e6-46f4-a382-429e69934502 does not exist
Oct 11 09:24:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:24:17 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:24:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:24:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:24:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:24:17 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:24:17 compute-0 sudo[395981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:24:17 compute-0 sudo[395981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:17 compute-0 sudo[395981]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:17 compute-0 sudo[396006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:24:17 compute-0 sudo[396006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:17 compute-0 sudo[396006]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:17 compute-0 sudo[396032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:24:17 compute-0 sudo[396032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:17 compute-0 sudo[396032]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:18 compute-0 sudo[396057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:24:18 compute-0 sudo[396057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.147 2 DEBUG nova.network.neutron [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Updating instance_info_cache with network_info: [{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.166 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.166 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Instance network_info: |[{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.167 2 DEBUG oslo_concurrency.lockutils [req-3004c618-5dce-4d24-a297-c9044294418c req-b118437c-81a0-420c-9200-77efe9ceb834 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.167 2 DEBUG nova.network.neutron [req-3004c618-5dce-4d24-a297-c9044294418c req-b118437c-81a0-420c-9200-77efe9ceb834 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Refreshing network info cache for port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.169 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Start _get_guest_xml network_info=[{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.174 2 WARNING nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.179 2 DEBUG nova.virt.libvirt.host [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.179 2 DEBUG nova.virt.libvirt.host [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.182 2 DEBUG nova.virt.libvirt.host [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.183 2 DEBUG nova.virt.libvirt.host [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.183 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.183 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.184 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.184 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.184 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.184 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.185 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.185 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.185 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.185 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.186 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.186 2 DEBUG nova.virt.hardware [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.188 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:18 compute-0 podman[396142]: 2025-10-11 09:24:18.466138556 +0000 UTC m=+0.049628181 container create 4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:24:18 compute-0 systemd[1]: Started libpod-conmon-4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3.scope.
Oct 11 09:24:18 compute-0 podman[396142]: 2025-10-11 09:24:18.442635313 +0000 UTC m=+0.026124928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:24:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:24:18 compute-0 podman[396142]: 2025-10-11 09:24:18.583135268 +0000 UTC m=+0.166624953 container init 4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 09:24:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:24:18 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/614254496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:24:18 compute-0 podman[396142]: 2025-10-11 09:24:18.595316702 +0000 UTC m=+0.178806327 container start 4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 09:24:18 compute-0 podman[396142]: 2025-10-11 09:24:18.600110247 +0000 UTC m=+0.183599922 container attach 4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_germain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:24:18 compute-0 focused_germain[396158]: 167 167
Oct 11 09:24:18 compute-0 systemd[1]: libpod-4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3.scope: Deactivated successfully.
Oct 11 09:24:18 compute-0 podman[396142]: 2025-10-11 09:24:18.605279583 +0000 UTC m=+0.188769268 container died 4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_germain, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.618 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:24:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:24:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:24:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:24:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:24:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:24:18 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/614254496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:24:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-92876428261ee7dbc1db31d19d0c93e3aa1081e77c2c18897b0c2dccc8b49b6c-merged.mount: Deactivated successfully.
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.651 2 DEBUG nova.storage.rbd_utils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:24:18 compute-0 nova_compute[260935]: 2025-10-11 09:24:18.658 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:18 compute-0 podman[396142]: 2025-10-11 09:24:18.658437123 +0000 UTC m=+0.241926748 container remove 4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:24:18 compute-0 systemd[1]: libpod-conmon-4e755bc5f00bc16340bb70f3053627fd90b39132d2b4bf5b289cf737b42179f3.scope: Deactivated successfully.
Oct 11 09:24:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 618 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Oct 11 09:24:18 compute-0 podman[396221]: 2025-10-11 09:24:18.906986116 +0000 UTC m=+0.048151809 container create 6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williams, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:24:18 compute-0 systemd[1]: Started libpod-conmon-6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94.scope.
Oct 11 09:24:18 compute-0 podman[396221]: 2025-10-11 09:24:18.881794866 +0000 UTC m=+0.022960529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:24:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2082cf4e9099de7b30ac0e4b71b7b4b6141249335e35f635796e904ae90c4372/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2082cf4e9099de7b30ac0e4b71b7b4b6141249335e35f635796e904ae90c4372/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2082cf4e9099de7b30ac0e4b71b7b4b6141249335e35f635796e904ae90c4372/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2082cf4e9099de7b30ac0e4b71b7b4b6141249335e35f635796e904ae90c4372/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2082cf4e9099de7b30ac0e4b71b7b4b6141249335e35f635796e904ae90c4372/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:24:19 compute-0 podman[396221]: 2025-10-11 09:24:19.026768057 +0000 UTC m=+0.167933780 container init 6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:24:19 compute-0 podman[396221]: 2025-10-11 09:24:19.039470325 +0000 UTC m=+0.180636008 container start 6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williams, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:24:19 compute-0 podman[396221]: 2025-10-11 09:24:19.044870618 +0000 UTC m=+0.186036371 container attach 6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williams, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 09:24:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:24:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:24:19 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4226472973' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.181 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.185 2 DEBUG nova.virt.libvirt.vif [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1892767569',display_name='tempest-TestNetworkBasicOps-server-1892767569',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1892767569',id=124,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBrnL5y5rmm40zHS3GRiZibnIxynw7Rihcq0RaK7G/33Ym/s8OYTLPyLC9bisJAK/wCL7YwT5B+iTgnNd5S0eO9bWMML8tCfmSgkHWu6rGCUSjC8WYx7kb6zW0E6al3ojg==',key_name='tempest-TestNetworkBasicOps-485184698',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-i2l25idp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:24:12Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.186 2 DEBUG nova.network.os_vif_util [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.187 2 DEBUG nova.network.os_vif_util [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.189 2 DEBUG nova.objects.instance [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.429 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:24:19 compute-0 nova_compute[260935]:   <uuid>32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1</uuid>
Oct 11 09:24:19 compute-0 nova_compute[260935]:   <name>instance-0000007c</name>
Oct 11 09:24:19 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:24:19 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:24:19 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkBasicOps-server-1892767569</nova:name>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:24:18</nova:creationTime>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:24:19 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:24:19 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:24:19 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:24:19 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:24:19 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:24:19 compute-0 nova_compute[260935]:         <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:24:19 compute-0 nova_compute[260935]:         <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:24:19 compute-0 nova_compute[260935]:         <nova:port uuid="ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca">
Oct 11 09:24:19 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:24:19 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:24:19 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <system>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <entry name="serial">32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1</entry>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <entry name="uuid">32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1</entry>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     </system>
Oct 11 09:24:19 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:24:19 compute-0 nova_compute[260935]:   <os>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:   </os>
Oct 11 09:24:19 compute-0 nova_compute[260935]:   <features>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:   </features>
Oct 11 09:24:19 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:24:19 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:24:19 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk">
Oct 11 09:24:19 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       </source>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:24:19 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk.config">
Oct 11 09:24:19 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       </source>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:24:19 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:d6:ce:63"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <target dev="tapba6bfb6c-8b"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1/console.log" append="off"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <video>
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     </video>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:24:19 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:24:19 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:24:19 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:24:19 compute-0 nova_compute[260935]: </domain>
Oct 11 09:24:19 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.432 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Preparing to wait for external event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.432 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.433 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.433 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.434 2 DEBUG nova.virt.libvirt.vif [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1892767569',display_name='tempest-TestNetworkBasicOps-server-1892767569',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1892767569',id=124,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBrnL5y5rmm40zHS3GRiZibnIxynw7Rihcq0RaK7G/33Ym/s8OYTLPyLC9bisJAK/wCL7YwT5B+iTgnNd5S0eO9bWMML8tCfmSgkHWu6rGCUSjC8WYx7kb6zW0E6al3ojg==',key_name='tempest-TestNetworkBasicOps-485184698',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-i2l25idp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:24:12Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.435 2 DEBUG nova.network.os_vif_util [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.436 2 DEBUG nova.network.os_vif_util [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.437 2 DEBUG os_vif [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba6bfb6c-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.444 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapba6bfb6c-8b, col_values=(('external_ids', {'iface-id': 'ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:ce:63', 'vm-uuid': '32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:19 compute-0 NetworkManager[44960]: <info>  [1760174659.4486] manager: (tapba6bfb6c-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/526)
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.455 2 INFO os_vif [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b')
Oct 11 09:24:19 compute-0 ovn_controller[152945]: 2025-10-11T09:24:19Z|01323|binding|INFO|Releasing lport c1fabca0-ae77-4e48-b93b-3023955db235 from this chassis (sb_readonly=0)
Oct 11 09:24:19 compute-0 ovn_controller[152945]: 2025-10-11T09:24:19Z|01324|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:24:19 compute-0 ovn_controller[152945]: 2025-10-11T09:24:19Z|01325|binding|INFO|Releasing lport cd117be9-e2e2-4d92-9c01-a0f37b7175b9 from this chassis (sb_readonly=0)
Oct 11 09:24:19 compute-0 ovn_controller[152945]: 2025-10-11T09:24:19Z|01326|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.605 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.605 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.606 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:d6:ce:63, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.606 2 INFO nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Using config drive
Oct 11 09:24:19 compute-0 nova_compute[260935]: 2025-10-11 09:24:19.630 2 DEBUG nova.storage.rbd_utils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:24:19 compute-0 ceph-mon[74313]: pgmap v2480: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 618 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Oct 11 09:24:19 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4226472973' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:24:20 compute-0 sshd-session[396030]: Invalid user rustserver from 155.4.244.179 port 51746
Oct 11 09:24:20 compute-0 sshd-session[396030]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:24:20 compute-0 sshd-session[396030]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:24:20 compute-0 practical_williams[396238]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:24:20 compute-0 practical_williams[396238]: --> relative data size: 1.0
Oct 11 09:24:20 compute-0 practical_williams[396238]: --> All data devices are unavailable
Oct 11 09:24:20 compute-0 systemd[1]: libpod-6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94.scope: Deactivated successfully.
Oct 11 09:24:20 compute-0 systemd[1]: libpod-6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94.scope: Consumed 1.042s CPU time.
Oct 11 09:24:20 compute-0 podman[396290]: 2025-10-11 09:24:20.205159932 +0000 UTC m=+0.030545783 container died 6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williams, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:24:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2082cf4e9099de7b30ac0e4b71b7b4b6141249335e35f635796e904ae90c4372-merged.mount: Deactivated successfully.
Oct 11 09:24:20 compute-0 podman[396290]: 2025-10-11 09:24:20.282799673 +0000 UTC m=+0.108185554 container remove 6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_williams, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 09:24:20 compute-0 systemd[1]: libpod-conmon-6cd72fcb8fbfc5b414651ba1c906adde7a223d8be17a17e043c941e0f8a4ad94.scope: Deactivated successfully.
Oct 11 09:24:20 compute-0 sudo[396057]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:20 compute-0 sudo[396305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:24:20 compute-0 sudo[396305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:20 compute-0 sudo[396305]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:20 compute-0 sudo[396330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:24:20 compute-0 sudo[396330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:20 compute-0 sudo[396330]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:20 compute-0 sudo[396355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:24:20 compute-0 sudo[396355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:20 compute-0 sudo[396355]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:20 compute-0 sudo[396380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:24:20 compute-0 sudo[396380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.0 MiB/s wr, 91 op/s
Oct 11 09:24:21 compute-0 nova_compute[260935]: 2025-10-11 09:24:21.209 2 INFO nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Creating config drive at /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1/disk.config
Oct 11 09:24:21 compute-0 nova_compute[260935]: 2025-10-11 09:24:21.220 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt4zfj3_c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:21 compute-0 podman[396445]: 2025-10-11 09:24:21.229532271 +0000 UTC m=+0.076933422 container create 10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:24:21 compute-0 systemd[1]: Started libpod-conmon-10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43.scope.
Oct 11 09:24:21 compute-0 podman[396445]: 2025-10-11 09:24:21.198501685 +0000 UTC m=+0.045902886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:24:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:24:21 compute-0 podman[396445]: 2025-10-11 09:24:21.349782244 +0000 UTC m=+0.197183395 container init 10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:24:21 compute-0 podman[396445]: 2025-10-11 09:24:21.365348344 +0000 UTC m=+0.212749465 container start 10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 09:24:21 compute-0 podman[396445]: 2025-10-11 09:24:21.369949814 +0000 UTC m=+0.217350985 container attach 10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 11 09:24:21 compute-0 dreamy_ishizaka[396470]: 167 167
Oct 11 09:24:21 compute-0 systemd[1]: libpod-10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43.scope: Deactivated successfully.
Oct 11 09:24:21 compute-0 podman[396445]: 2025-10-11 09:24:21.37549763 +0000 UTC m=+0.222898801 container died 10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 09:24:21 compute-0 nova_compute[260935]: 2025-10-11 09:24:21.384 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt4zfj3_c" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ab8e3e7a896951404ae28ea76dcc06f5a68255963124d242ac3ce51f22714ba-merged.mount: Deactivated successfully.
Oct 11 09:24:21 compute-0 podman[396460]: 2025-10-11 09:24:21.413928415 +0000 UTC m=+0.125614776 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:24:21 compute-0 nova_compute[260935]: 2025-10-11 09:24:21.427 2 DEBUG nova.storage.rbd_utils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:24:21 compute-0 nova_compute[260935]: 2025-10-11 09:24:21.434 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1/disk.config 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:21 compute-0 podman[396445]: 2025-10-11 09:24:21.439012363 +0000 UTC m=+0.286413484 container remove 10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 09:24:21 compute-0 podman[396463]: 2025-10-11 09:24:21.440265188 +0000 UTC m=+0.145155378 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible)
Oct 11 09:24:21 compute-0 systemd[1]: libpod-conmon-10ceebc6ed3f8771af4cc7ecf3e9924fd1aa8aff3921aa68d80ca4d051834c43.scope: Deactivated successfully.
Oct 11 09:24:21 compute-0 nova_compute[260935]: 2025-10-11 09:24:21.669 2 DEBUG oslo_concurrency.processutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1/disk.config 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:21 compute-0 nova_compute[260935]: 2025-10-11 09:24:21.670 2 INFO nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Deleting local config drive /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1/disk.config because it was imported into RBD.
Oct 11 09:24:21 compute-0 podman[396570]: 2025-10-11 09:24:21.711330488 +0000 UTC m=+0.078886758 container create c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:24:21 compute-0 nova_compute[260935]: 2025-10-11 09:24:21.717 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:24:21 compute-0 NetworkManager[44960]: <info>  [1760174661.7662] manager: (tapba6bfb6c-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/527)
Oct 11 09:24:21 compute-0 systemd[1]: Started libpod-conmon-c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6.scope.
Oct 11 09:24:21 compute-0 podman[396570]: 2025-10-11 09:24:21.678714337 +0000 UTC m=+0.046270697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:24:21 compute-0 kernel: tapba6bfb6c-8b: entered promiscuous mode
Oct 11 09:24:21 compute-0 nova_compute[260935]: 2025-10-11 09:24:21.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:21 compute-0 ovn_controller[152945]: 2025-10-11T09:24:21Z|01327|binding|INFO|Claiming lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for this chassis.
Oct 11 09:24:21 compute-0 ovn_controller[152945]: 2025-10-11T09:24:21Z|01328|binding|INFO|ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca: Claiming fa:16:3e:d6:ce:63 10.100.0.5
Oct 11 09:24:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.792 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:63 10.100.0.5'], port_security=['fa:16:3e:d6:ce:63 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '8', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a8a4a9-2613-497d-90d8-59ebfcd1b053, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:24:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.794 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca in datapath a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 bound to our chassis
Oct 11 09:24:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.798 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7aa5898-dcd8-41c5-81e2-5c41061a1fb2
Oct 11 09:24:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.817 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d003b91e-ff62-4bfe-b60f-caf2f181b8eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.819 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa7aa5898-d1 in ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:24:21 compute-0 ovn_controller[152945]: 2025-10-11T09:24:21Z|01329|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca ovn-installed in OVS
Oct 11 09:24:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.822 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa7aa5898-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:24:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.822 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f74ee1b8-97dd-4c36-aeec-fe22e8682a24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:21 compute-0 ovn_controller[152945]: 2025-10-11T09:24:21Z|01330|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca up in Southbound
Oct 11 09:24:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:24:21 compute-0 nova_compute[260935]: 2025-10-11 09:24:21.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.825 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8e957c6d-861d-4d85-aa61-6db12204bd2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ba737ecf8dfdac7b1c3c01298648218bcc531da869fe350e8964fa4b79995e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ba737ecf8dfdac7b1c3c01298648218bcc531da869fe350e8964fa4b79995e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ba737ecf8dfdac7b1c3c01298648218bcc531da869fe350e8964fa4b79995e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ba737ecf8dfdac7b1c3c01298648218bcc531da869fe350e8964fa4b79995e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:24:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.847 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ac0d06c9-d7bb-40f7-9bd6-368d50944b20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:21 compute-0 systemd-machined[215705]: New machine qemu-148-instance-0000007c.
Oct 11 09:24:21 compute-0 systemd[1]: Started Virtual Machine qemu-148-instance-0000007c.
Oct 11 09:24:21 compute-0 systemd-udevd[396606]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:24:21 compute-0 podman[396570]: 2025-10-11 09:24:21.871338293 +0000 UTC m=+0.238894653 container init c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:24:21 compute-0 nova_compute[260935]: 2025-10-11 09:24:21.878 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174646.8773644, ef21f945-0076-48fa-8d22-c5376e26d278 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:24:21 compute-0 nova_compute[260935]: 2025-10-11 09:24:21.878 2 INFO nova.compute.manager [-] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] VM Stopped (Lifecycle Event)
Oct 11 09:24:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.880 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[169165e9-86bb-44ae-af3c-c263e17f31ce]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:21 compute-0 NetworkManager[44960]: <info>  [1760174661.8826] device (tapba6bfb6c-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:24:21 compute-0 NetworkManager[44960]: <info>  [1760174661.8836] device (tapba6bfb6c-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:24:21 compute-0 podman[396570]: 2025-10-11 09:24:21.888856038 +0000 UTC m=+0.256412338 container start c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galileo, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct 11 09:24:21 compute-0 podman[396570]: 2025-10-11 09:24:21.894213529 +0000 UTC m=+0.261769809 container attach c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galileo, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 09:24:21 compute-0 ceph-mon[74313]: pgmap v2481: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.0 MiB/s wr, 91 op/s
Oct 11 09:24:21 compute-0 nova_compute[260935]: 2025-10-11 09:24:21.910 2 DEBUG nova.compute.manager [None req-7e25574c-e786-455f-b544-0a211679ca83 - - - - - -] [instance: ef21f945-0076-48fa-8d22-c5376e26d278] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:24:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.926 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0f405d82-4c22-4ebb-95d7-1ac8a3ccc0d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:21 compute-0 NetworkManager[44960]: <info>  [1760174661.9339] manager: (tapa7aa5898-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/528)
Oct 11 09:24:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.933 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[01ffe2a8-bd34-402d-8cbe-15da5d862e6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.980 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e8332885-20e0-4682-aca0-1f49e3e4a6ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:21.984 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6b9ccba6-5a15-4252-b2a7-04b661b70625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:22 compute-0 NetworkManager[44960]: <info>  [1760174662.0104] device (tapa7aa5898-d0): carrier: link connected
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.015 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2e443cca-d9ee-481f-86e2-2331b6f4f60e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.030 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f3e609-60c4-452f-b65b-377b7da8ccb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7aa5898-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:0e:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 367], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650671, 'reachable_time': 39532, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396639, 'error': None, 'target': 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.044 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[42b08470-d735-4c38-8ef6-392d81aeed99]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:e28'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650671, 'tstamp': 650671}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396640, 'error': None, 'target': 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.057 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8f1ba8aa-033b-4698-928b-c21aafb0a832]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7aa5898-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:0e:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 367], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650671, 'reachable_time': 39532, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 396641, 'error': None, 'target': 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.092 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[24516d76-0b3e-4af6-ad64-47a1de115a5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:22 compute-0 sshd-session[396030]: Failed password for invalid user rustserver from 155.4.244.179 port 51746 ssh2
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.177 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a23edcf8-fdef-41be-8173-fe1162f1ef62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.178 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7aa5898-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.178 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.179 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7aa5898-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:22 compute-0 NetworkManager[44960]: <info>  [1760174662.2395] manager: (tapa7aa5898-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/529)
Oct 11 09:24:22 compute-0 kernel: tapa7aa5898-d0: entered promiscuous mode
Oct 11 09:24:22 compute-0 nova_compute[260935]: 2025-10-11 09:24:22.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.241 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7aa5898-d0, col_values=(('external_ids', {'iface-id': 'aaa7cd6b-360e-44dd-8a57-64ba5457b964'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:22 compute-0 ovn_controller[152945]: 2025-10-11T09:24:22Z|01331|binding|INFO|Releasing lport aaa7cd6b-360e-44dd-8a57-64ba5457b964 from this chassis (sb_readonly=0)
Oct 11 09:24:22 compute-0 sshd-session[396602]: Invalid user admin from 165.232.82.252 port 38678
Oct 11 09:24:22 compute-0 nova_compute[260935]: 2025-10-11 09:24:22.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.258 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a7aa5898-dcd8-41c5-81e2-5c41061a1fb2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a7aa5898-dcd8-41c5-81e2-5c41061a1fb2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.258 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c8098961-5fb5-4c0b-8644-1a5b5e3ba131]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.259 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/a7aa5898-dcd8-41c5-81e2-5c41061a1fb2.pid.haproxy
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID a7aa5898-dcd8-41c5-81e2-5c41061a1fb2
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:24:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:22.260 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'env', 'PROCESS_TAG=haproxy-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a7aa5898-dcd8-41c5-81e2-5c41061a1fb2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:24:22 compute-0 sshd-session[396602]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:24:22 compute-0 sshd-session[396602]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:24:22 compute-0 nova_compute[260935]: 2025-10-11 09:24:22.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:22 compute-0 blissful_galileo[396595]: {
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:     "0": [
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:         {
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "devices": [
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "/dev/loop3"
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             ],
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "lv_name": "ceph_lv0",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "lv_size": "21470642176",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "name": "ceph_lv0",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "tags": {
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.cluster_name": "ceph",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.crush_device_class": "",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.encrypted": "0",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.osd_id": "0",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.type": "block",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.vdo": "0"
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             },
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "type": "block",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "vg_name": "ceph_vg0"
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:         }
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:     ],
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:     "1": [
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:         {
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "devices": [
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "/dev/loop4"
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             ],
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "lv_name": "ceph_lv1",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "lv_size": "21470642176",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "name": "ceph_lv1",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "tags": {
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.cluster_name": "ceph",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.crush_device_class": "",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.encrypted": "0",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.osd_id": "1",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.type": "block",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.vdo": "0"
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             },
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "type": "block",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "vg_name": "ceph_vg1"
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:         }
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:     ],
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:     "2": [
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:         {
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "devices": [
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "/dev/loop5"
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             ],
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "lv_name": "ceph_lv2",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "lv_size": "21470642176",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "name": "ceph_lv2",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "tags": {
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.cluster_name": "ceph",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.crush_device_class": "",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.encrypted": "0",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.osd_id": "2",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.type": "block",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:                 "ceph.vdo": "0"
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             },
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "type": "block",
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:             "vg_name": "ceph_vg2"
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:         }
Oct 11 09:24:22 compute-0 blissful_galileo[396595]:     ]
Oct 11 09:24:22 compute-0 blissful_galileo[396595]: }
Oct 11 09:24:22 compute-0 systemd[1]: libpod-c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6.scope: Deactivated successfully.
Oct 11 09:24:22 compute-0 podman[396570]: 2025-10-11 09:24:22.666676037 +0000 UTC m=+1.034232307 container died c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galileo, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 09:24:22 compute-0 podman[396714]: 2025-10-11 09:24:22.692689391 +0000 UTC m=+0.060238520 container create 154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 09:24:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4ba737ecf8dfdac7b1c3c01298648218bcc531da869fe350e8964fa4b79995e-merged.mount: Deactivated successfully.
Oct 11 09:24:22 compute-0 podman[396570]: 2025-10-11 09:24:22.727807433 +0000 UTC m=+1.095363693 container remove c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:24:22 compute-0 podman[396714]: 2025-10-11 09:24:22.657531909 +0000 UTC m=+0.025081028 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:24:22 compute-0 systemd[1]: Started libpod-conmon-154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988.scope.
Oct 11 09:24:22 compute-0 systemd[1]: libpod-conmon-c7e5baf1e46da732c9b315e649b653b9bf14673b7f224704bee2a858fc2df5f6.scope: Deactivated successfully.
Oct 11 09:24:22 compute-0 sudo[396380]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:24:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f43b85947b21cd3c6ad161e442ba8d1aa969883f679ceb40877a58563c40fdb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:24:22 compute-0 podman[396714]: 2025-10-11 09:24:22.810107705 +0000 UTC m=+0.177656824 container init 154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 09:24:22 compute-0 podman[396714]: 2025-10-11 09:24:22.815342893 +0000 UTC m=+0.182891982 container start 154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:24:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 2.0 MiB/s wr, 96 op/s
Oct 11 09:24:22 compute-0 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[396742]: [NOTICE]   (396766) : New worker (396771) forked
Oct 11 09:24:22 compute-0 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[396742]: [NOTICE]   (396766) : Loading success.
Oct 11 09:24:22 compute-0 sudo[396745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:24:22 compute-0 sudo[396745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:22 compute-0 sudo[396745]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:22 compute-0 sudo[396782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:24:22 compute-0 sudo[396782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:22 compute-0 sudo[396782]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:23 compute-0 sudo[396807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:24:23 compute-0 sudo[396807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:23 compute-0 sudo[396807]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:23 compute-0 sudo[396832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:24:23 compute-0 sudo[396832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:23 compute-0 nova_compute[260935]: 2025-10-11 09:24:23.173 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174663.1730971, 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:24:23 compute-0 nova_compute[260935]: 2025-10-11 09:24:23.174 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] VM Started (Lifecycle Event)
Oct 11 09:24:23 compute-0 nova_compute[260935]: 2025-10-11 09:24:23.179 2 DEBUG nova.network.neutron [req-3004c618-5dce-4d24-a297-c9044294418c req-b118437c-81a0-420c-9200-77efe9ceb834 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Updated VIF entry in instance network info cache for port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:24:23 compute-0 nova_compute[260935]: 2025-10-11 09:24:23.179 2 DEBUG nova.network.neutron [req-3004c618-5dce-4d24-a297-c9044294418c req-b118437c-81a0-420c-9200-77efe9ceb834 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Updating instance_info_cache with network_info: [{"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:24:23 compute-0 nova_compute[260935]: 2025-10-11 09:24:23.216 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:24:23 compute-0 nova_compute[260935]: 2025-10-11 09:24:23.219 2 DEBUG oslo_concurrency.lockutils [req-3004c618-5dce-4d24-a297-c9044294418c req-b118437c-81a0-420c-9200-77efe9ceb834 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:24:23 compute-0 nova_compute[260935]: 2025-10-11 09:24:23.223 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174663.1732094, 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:24:23 compute-0 nova_compute[260935]: 2025-10-11 09:24:23.223 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] VM Paused (Lifecycle Event)
Oct 11 09:24:23 compute-0 nova_compute[260935]: 2025-10-11 09:24:23.256 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:24:23 compute-0 nova_compute[260935]: 2025-10-11 09:24:23.260 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:24:23 compute-0 nova_compute[260935]: 2025-10-11 09:24:23.296 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:24:23 compute-0 podman[396898]: 2025-10-11 09:24:23.489676753 +0000 UTC m=+0.052198354 container create e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:24:23 compute-0 systemd[1]: Started libpod-conmon-e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428.scope.
Oct 11 09:24:23 compute-0 podman[396898]: 2025-10-11 09:24:23.466846239 +0000 UTC m=+0.029367890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:24:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:24:23 compute-0 podman[396898]: 2025-10-11 09:24:23.58312379 +0000 UTC m=+0.145645441 container init e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:24:23 compute-0 podman[396898]: 2025-10-11 09:24:23.591922839 +0000 UTC m=+0.154444480 container start e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_greider, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:24:23 compute-0 podman[396898]: 2025-10-11 09:24:23.595592502 +0000 UTC m=+0.158114153 container attach e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_greider, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:24:23 compute-0 elastic_greider[396914]: 167 167
Oct 11 09:24:23 compute-0 systemd[1]: libpod-e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428.scope: Deactivated successfully.
Oct 11 09:24:23 compute-0 conmon[396914]: conmon e171397ed2570fbb10bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428.scope/container/memory.events
Oct 11 09:24:23 compute-0 podman[396898]: 2025-10-11 09:24:23.604269587 +0000 UTC m=+0.166791228 container died e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_greider, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 09:24:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-12e8d5acd52f35f6becd0663eebead45fe188fa49a86c9ef34cfcb39a31963e1-merged.mount: Deactivated successfully.
Oct 11 09:24:23 compute-0 podman[396898]: 2025-10-11 09:24:23.65290407 +0000 UTC m=+0.215425711 container remove e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:24:23 compute-0 systemd[1]: libpod-conmon-e171397ed2570fbb10bf5598b5e49d38cab2285dd914f0945a241336e9476428.scope: Deactivated successfully.
Oct 11 09:24:23 compute-0 sshd-session[396030]: Received disconnect from 155.4.244.179 port 51746:11: Bye Bye [preauth]
Oct 11 09:24:23 compute-0 ceph-mon[74313]: pgmap v2482: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 2.0 MiB/s wr, 96 op/s
Oct 11 09:24:23 compute-0 sshd-session[396030]: Disconnected from invalid user rustserver 155.4.244.179 port 51746 [preauth]
Oct 11 09:24:23 compute-0 podman[396936]: 2025-10-11 09:24:23.954033128 +0000 UTC m=+0.082544221 container create 026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:24:24 compute-0 systemd[1]: Started libpod-conmon-026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a.scope.
Oct 11 09:24:24 compute-0 podman[396936]: 2025-10-11 09:24:23.922385985 +0000 UTC m=+0.050897148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:24:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:24:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4892047388cfef2f33695d90cdc60210c4cb7ac91bd8e22eed18abf2d260ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:24:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4892047388cfef2f33695d90cdc60210c4cb7ac91bd8e22eed18abf2d260ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:24:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4892047388cfef2f33695d90cdc60210c4cb7ac91bd8e22eed18abf2d260ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:24:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4892047388cfef2f33695d90cdc60210c4cb7ac91bd8e22eed18abf2d260ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:24:24 compute-0 podman[396936]: 2025-10-11 09:24:24.066407209 +0000 UTC m=+0.194918352 container init 026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:24:24 compute-0 podman[396936]: 2025-10-11 09:24:24.081348781 +0000 UTC m=+0.209859864 container start 026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:24:24 compute-0 podman[396936]: 2025-10-11 09:24:24.085799916 +0000 UTC m=+0.214311019 container attach 026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 09:24:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.204 2 DEBUG nova.compute.manager [req-89ce029d-f7aa-45cb-9c01-d1cc1cc5723a req-f65478f9-5d86-4a56-a80e-cdb574b134f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Received event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.206 2 DEBUG oslo_concurrency.lockutils [req-89ce029d-f7aa-45cb-9c01-d1cc1cc5723a req-f65478f9-5d86-4a56-a80e-cdb574b134f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.206 2 DEBUG oslo_concurrency.lockutils [req-89ce029d-f7aa-45cb-9c01-d1cc1cc5723a req-f65478f9-5d86-4a56-a80e-cdb574b134f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.207 2 DEBUG oslo_concurrency.lockutils [req-89ce029d-f7aa-45cb-9c01-d1cc1cc5723a req-f65478f9-5d86-4a56-a80e-cdb574b134f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.207 2 DEBUG nova.compute.manager [req-89ce029d-f7aa-45cb-9c01-d1cc1cc5723a req-f65478f9-5d86-4a56-a80e-cdb574b134f1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Processing event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.208 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.214 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174664.2137172, 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.214 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] VM Resumed (Lifecycle Event)
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.217 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.226 2 INFO nova.virt.libvirt.driver [-] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Instance spawned successfully.
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.226 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.249 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.261 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.266 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.267 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.267 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.268 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.269 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.270 2 DEBUG nova.virt.libvirt.driver [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.308 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.351 2 INFO nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Took 11.79 seconds to spawn the instance on the hypervisor.
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.351 2 DEBUG nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.436 2 INFO nova.compute.manager [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Took 12.92 seconds to build instance.
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:24 compute-0 nova_compute[260935]: 2025-10-11 09:24:24.455 2 DEBUG oslo_concurrency.lockutils [None req-c35abbe6-3688-4543-a2e2-fe03ac2f218a dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:24 compute-0 sshd-session[396602]: Failed password for invalid user admin from 165.232.82.252 port 38678 ssh2
Oct 11 09:24:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:24:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:24:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:24:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:24:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 09:24:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:24:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:24:25 compute-0 interesting_jackson[396952]: {
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:         "osd_id": 2,
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:         "type": "bluestore"
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:     },
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:         "osd_id": 0,
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:         "type": "bluestore"
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:     },
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:         "osd_id": 1,
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:         "type": "bluestore"
Oct 11 09:24:25 compute-0 interesting_jackson[396952]:     }
Oct 11 09:24:25 compute-0 interesting_jackson[396952]: }
Oct 11 09:24:25 compute-0 systemd[1]: libpod-026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a.scope: Deactivated successfully.
Oct 11 09:24:25 compute-0 podman[396936]: 2025-10-11 09:24:25.236967903 +0000 UTC m=+1.365478966 container died 026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 09:24:25 compute-0 systemd[1]: libpod-026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a.scope: Consumed 1.115s CPU time.
Oct 11 09:24:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c4892047388cfef2f33695d90cdc60210c4cb7ac91bd8e22eed18abf2d260ca-merged.mount: Deactivated successfully.
Oct 11 09:24:25 compute-0 podman[396936]: 2025-10-11 09:24:25.30559767 +0000 UTC m=+1.434108733 container remove 026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 09:24:25 compute-0 systemd[1]: libpod-conmon-026561d048a66d10ed3e0b0eaa507a98e6815d8c655f7fa82c4f54dd6f558e2a.scope: Deactivated successfully.
Oct 11 09:24:25 compute-0 sudo[396832]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:24:25 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:24:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:24:25 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:24:25 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 562dd2ef-f7e7-4f6e-a236-d7a36e0be0da does not exist
Oct 11 09:24:25 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev d4cfe1bd-34d3-441f-ba8c-f9cb189e7086 does not exist
Oct 11 09:24:25 compute-0 sudo[396998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:24:25 compute-0 sudo[396998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:25 compute-0 sudo[396998]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:25 compute-0 sudo[397023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:24:25 compute-0 sudo[397023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:24:25 compute-0 sudo[397023]: pam_unix(sudo:session): session closed for user root
Oct 11 09:24:25 compute-0 sshd-session[396602]: Connection closed by invalid user admin 165.232.82.252 port 38678 [preauth]
Oct 11 09:24:25 compute-0 ceph-mon[74313]: pgmap v2483: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 09:24:25 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:24:25 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:24:26 compute-0 nova_compute[260935]: 2025-10-11 09:24:26.370 2 DEBUG nova.compute.manager [req-74f3b3c9-8de2-4e64-9501-7319b727129b req-45390fcb-bd40-4f3c-8886-f5e167673c5d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Received event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:26 compute-0 nova_compute[260935]: 2025-10-11 09:24:26.372 2 DEBUG oslo_concurrency.lockutils [req-74f3b3c9-8de2-4e64-9501-7319b727129b req-45390fcb-bd40-4f3c-8886-f5e167673c5d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:26 compute-0 nova_compute[260935]: 2025-10-11 09:24:26.372 2 DEBUG oslo_concurrency.lockutils [req-74f3b3c9-8de2-4e64-9501-7319b727129b req-45390fcb-bd40-4f3c-8886-f5e167673c5d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:26 compute-0 nova_compute[260935]: 2025-10-11 09:24:26.373 2 DEBUG oslo_concurrency.lockutils [req-74f3b3c9-8de2-4e64-9501-7319b727129b req-45390fcb-bd40-4f3c-8886-f5e167673c5d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:26 compute-0 nova_compute[260935]: 2025-10-11 09:24:26.373 2 DEBUG nova.compute.manager [req-74f3b3c9-8de2-4e64-9501-7319b727129b req-45390fcb-bd40-4f3c-8886-f5e167673c5d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] No waiting events found dispatching network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:24:26 compute-0 nova_compute[260935]: 2025-10-11 09:24:26.374 2 WARNING nova.compute.manager [req-74f3b3c9-8de2-4e64-9501-7319b727129b req-45390fcb-bd40-4f3c-8886-f5e167673c5d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Received unexpected event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for instance with vm_state active and task_state None.
Oct 11 09:24:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:24:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/597248780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:24:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:24:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/597248780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:24:26 compute-0 nova_compute[260935]: 2025-10-11 09:24:26.732 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:26 compute-0 nova_compute[260935]: 2025-10-11 09:24:26.733 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:26 compute-0 nova_compute[260935]: 2025-10-11 09:24:26.748 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:24:26 compute-0 nova_compute[260935]: 2025-10-11 09:24:26.834 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:26 compute-0 nova_compute[260935]: 2025-10-11 09:24:26.835 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 09:24:26 compute-0 nova_compute[260935]: 2025-10-11 09:24:26.850 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:24:26 compute-0 nova_compute[260935]: 2025-10-11 09:24:26.851 2 INFO nova.compute.claims [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:24:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/597248780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:24:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/597248780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.135 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:24:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1427365149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.615 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.621 2 DEBUG nova.compute.provider_tree [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.682 2 DEBUG nova.scheduler.client.report [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.706 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.707 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.758 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.759 2 DEBUG nova.network.neutron [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.768 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.768 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.769 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.769 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.769 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.771 2 INFO nova.compute.manager [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Terminating instance
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.773 2 DEBUG nova.compute.manager [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.802 2 INFO nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:24:27 compute-0 kernel: tapba6bfb6c-8b (unregistering): left promiscuous mode
Oct 11 09:24:27 compute-0 NetworkManager[44960]: <info>  [1760174667.8310] device (tapba6bfb6c-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.839 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:27 compute-0 ovn_controller[152945]: 2025-10-11T09:24:27Z|01332|binding|INFO|Releasing lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca from this chassis (sb_readonly=0)
Oct 11 09:24:27 compute-0 ovn_controller[152945]: 2025-10-11T09:24:27Z|01333|binding|INFO|Setting lport ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca down in Southbound
Oct 11 09:24:27 compute-0 ovn_controller[152945]: 2025-10-11T09:24:27Z|01334|binding|INFO|Removing iface tapba6bfb6c-8b ovn-installed in OVS
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:27.853 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:63 10.100.0.5'], port_security=['fa:16:3e:d6:ce:63 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-111056357', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '10', 'neutron:security_group_ids': '019fac1b-1c13-4d21-809e-39e39fdd9255', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.192', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60a8a4a9-2613-497d-90d8-59ebfcd1b053, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:24:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:27.855 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca in datapath a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 unbound from our chassis
Oct 11 09:24:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:27.856 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:24:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:27.858 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e77930-ac5d-4611-8676-d7de7b587ba4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:27.859 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 namespace which is not needed anymore
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:27 compute-0 ovn_controller[152945]: 2025-10-11T09:24:27Z|01335|binding|INFO|Releasing lport c1fabca0-ae77-4e48-b93b-3023955db235 from this chassis (sb_readonly=0)
Oct 11 09:24:27 compute-0 ovn_controller[152945]: 2025-10-11T09:24:27Z|01336|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:24:27 compute-0 ovn_controller[152945]: 2025-10-11T09:24:27Z|01337|binding|INFO|Releasing lport cd117be9-e2e2-4d92-9c01-a0f37b7175b9 from this chassis (sb_readonly=0)
Oct 11 09:24:27 compute-0 ovn_controller[152945]: 2025-10-11T09:24:27Z|01338|binding|INFO|Releasing lport aaa7cd6b-360e-44dd-8a57-64ba5457b964 from this chassis (sb_readonly=0)
Oct 11 09:24:27 compute-0 ovn_controller[152945]: 2025-10-11T09:24:27Z|01339|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:24:27 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Oct 11 09:24:27 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d0000007c.scope: Consumed 4.748s CPU time.
Oct 11 09:24:27 compute-0 systemd-machined[215705]: Machine qemu-148-instance-0000007c terminated.
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:27 compute-0 ceph-mon[74313]: pgmap v2484: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 09:24:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1427365149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.980 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.981 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:24:27 compute-0 nova_compute[260935]: 2025-10-11 09:24:27.981 2 INFO nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Creating image(s)
Oct 11 09:24:28 compute-0 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[396742]: [NOTICE]   (396766) : haproxy version is 2.8.14-c23fe91
Oct 11 09:24:28 compute-0 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[396742]: [NOTICE]   (396766) : path to executable is /usr/sbin/haproxy
Oct 11 09:24:28 compute-0 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[396742]: [WARNING]  (396766) : Exiting Master process...
Oct 11 09:24:28 compute-0 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[396742]: [ALERT]    (396766) : Current worker (396771) exited with code 143 (Terminated)
Oct 11 09:24:28 compute-0 neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2[396742]: [WARNING]  (396766) : All workers exited. Exiting... (0)
Oct 11 09:24:28 compute-0 systemd[1]: libpod-154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988.scope: Deactivated successfully.
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.023 2 DEBUG nova.storage.rbd_utils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ece1112a-294b-461f-8b8f-c6a0bb212647_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:24:28 compute-0 podman[397094]: 2025-10-11 09:24:28.028280646 +0000 UTC m=+0.065599332 container died 154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 09:24:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988-userdata-shm.mount: Deactivated successfully.
Oct 11 09:24:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f43b85947b21cd3c6ad161e442ba8d1aa969883f679ceb40877a58563c40fdb-merged.mount: Deactivated successfully.
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.071 2 DEBUG nova.storage.rbd_utils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ece1112a-294b-461f-8b8f-c6a0bb212647_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:24:28 compute-0 podman[397094]: 2025-10-11 09:24:28.081728284 +0000 UTC m=+0.119046960 container cleanup 154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:24:28 compute-0 systemd[1]: libpod-conmon-154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988.scope: Deactivated successfully.
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.103 2 DEBUG nova.storage.rbd_utils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ece1112a-294b-461f-8b8f-c6a0bb212647_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.108 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.159 2 DEBUG nova.policy [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.168 2 INFO nova.virt.libvirt.driver [-] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Instance destroyed successfully.
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.169 2 DEBUG nova.objects.instance [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.187 2 DEBUG nova.virt.libvirt.vif [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1892767569',display_name='tempest-TestNetworkBasicOps-server-1892767569',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1892767569',id=124,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBrnL5y5rmm40zHS3GRiZibnIxynw7Rihcq0RaK7G/33Ym/s8OYTLPyLC9bisJAK/wCL7YwT5B+iTgnNd5S0eO9bWMML8tCfmSgkHWu6rGCUSjC8WYx7kb6zW0E6al3ojg==',key_name='tempest-TestNetworkBasicOps-485184698',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:24:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-i2l25idp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:24:24Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.187 2 DEBUG nova.network.os_vif_util [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "address": "fa:16:3e:d6:ce:63", "network": {"id": "a7aa5898-dcd8-41c5-81e2-5c41061a1fb2", "bridge": "br-int", "label": "tempest-network-smoke--1511589092", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba6bfb6c-8b", "ovs_interfaceid": "ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.188 2 DEBUG nova.network.os_vif_util [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.189 2 DEBUG os_vif [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.192 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba6bfb6c-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.198 2 INFO os_vif [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:63,bridge_name='br-int',has_traffic_filtering=True,id=ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca,network=Network(a7aa5898-dcd8-41c5-81e2-5c41061a1fb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba6bfb6c-8b')
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.218 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.218 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.219 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.219 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:28 compute-0 podman[397182]: 2025-10-11 09:24:28.226087308 +0000 UTC m=+0.118476514 container remove 154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:24:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.236 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[75ddc3ad-7b98-4420-ad70-452b5389d9e1]: (4, ('Sat Oct 11 09:24:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 (154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988)\n154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988\nSat Oct 11 09:24:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 (154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988)\n154018af38dbd3f9211de43375a45fc046310d8d6609a6abdb810f3a4e6db988\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.237 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2b19c7b1-cfb4-4652-967d-7bf01d1f12a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.238 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7aa5898-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:28 compute-0 kernel: tapa7aa5898-d0: left promiscuous mode
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.252 2 DEBUG nova.storage.rbd_utils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ece1112a-294b-461f-8b8f-c6a0bb212647_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:24:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.262 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cf317727-8ca5-48b4-9b6b-73ec62b756d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.264 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ece1112a-294b-461f-8b8f-c6a0bb212647_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.287 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f64f0d52-6575-4933-bc0a-2b495becbc2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.288 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f29cda37-652a-4657-af6e-7a371c996467]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.313 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b47ea44d-77ef-4cc0-9cca-4765ccdebc3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650661, 'reachable_time': 38453, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397237, 'error': None, 'target': 'ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:28 compute-0 systemd[1]: run-netns-ovnmeta\x2da7aa5898\x2ddcd8\x2d41c5\x2d81e2\x2d5c41061a1fb2.mount: Deactivated successfully.
Oct 11 09:24:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.317 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a7aa5898-dcd8-41c5-81e2-5c41061a1fb2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:24:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:28.317 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[2a459dea-f87f-4f32-8f22-c12f3ba96998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.662 2 DEBUG nova.compute.manager [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Received event network-vif-unplugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.663 2 DEBUG oslo_concurrency.lockutils [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.663 2 DEBUG oslo_concurrency.lockutils [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.664 2 DEBUG oslo_concurrency.lockutils [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.664 2 DEBUG nova.compute.manager [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] No waiting events found dispatching network-vif-unplugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.664 2 DEBUG nova.compute.manager [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Received event network-vif-unplugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.664 2 DEBUG nova.compute.manager [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Received event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.665 2 DEBUG oslo_concurrency.lockutils [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.665 2 DEBUG oslo_concurrency.lockutils [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.665 2 DEBUG oslo_concurrency.lockutils [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.665 2 DEBUG nova.compute.manager [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] No waiting events found dispatching network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.665 2 WARNING nova.compute.manager [req-61d63996-77ae-487a-a5fa-3a59071db54b req-f2063364-4e0a-4d71-9119-c5fb396e3793 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Received unexpected event network-vif-plugged-ba6bfb6c-8b45-4ce7-af7b-30a3c2e097ca for instance with vm_state active and task_state deleting.
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.721 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ece1112a-294b-461f-8b8f-c6a0bb212647_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.796 2 DEBUG nova.storage.rbd_utils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image ece1112a-294b-461f-8b8f-c6a0bb212647_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:24:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.897 2 INFO nova.virt.libvirt.driver [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Deleting instance files /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_del
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.898 2 INFO nova.virt.libvirt.driver [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Deletion of /var/lib/nova/instances/32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1_del complete
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.904 2 DEBUG nova.objects.instance [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid ece1112a-294b-461f-8b8f-c6a0bb212647 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.988 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.989 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Ensure instance console log exists: /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.989 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.990 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:28 compute-0 nova_compute[260935]: 2025-10-11 09:24:28.990 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:29 compute-0 nova_compute[260935]: 2025-10-11 09:24:29.016 2 INFO nova.compute.manager [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Took 1.24 seconds to destroy the instance on the hypervisor.
Oct 11 09:24:29 compute-0 nova_compute[260935]: 2025-10-11 09:24:29.017 2 DEBUG oslo.service.loopingcall [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:24:29 compute-0 nova_compute[260935]: 2025-10-11 09:24:29.017 2 DEBUG nova.compute.manager [-] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:24:29 compute-0 nova_compute[260935]: 2025-10-11 09:24:29.017 2 DEBUG nova.network.neutron [-] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:24:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:24:29 compute-0 ceph-mon[74313]: pgmap v2485: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:24:30 compute-0 nova_compute[260935]: 2025-10-11 09:24:30.344 2 DEBUG nova.network.neutron [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Successfully created port: c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:24:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 11 09:24:31 compute-0 nova_compute[260935]: 2025-10-11 09:24:31.595 2 DEBUG nova.network.neutron [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Successfully created port: 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:24:31 compute-0 nova_compute[260935]: 2025-10-11 09:24:31.703 2 DEBUG nova.network.neutron [-] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:24:31 compute-0 nova_compute[260935]: 2025-10-11 09:24:31.725 2 INFO nova.compute.manager [-] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Took 2.71 seconds to deallocate network for instance.
Oct 11 09:24:31 compute-0 nova_compute[260935]: 2025-10-11 09:24:31.776 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:31 compute-0 nova_compute[260935]: 2025-10-11 09:24:31.777 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:31 compute-0 nova_compute[260935]: 2025-10-11 09:24:31.806 2 DEBUG nova.scheduler.client.report [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 09:24:31 compute-0 nova_compute[260935]: 2025-10-11 09:24:31.838 2 DEBUG nova.scheduler.client.report [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 09:24:31 compute-0 nova_compute[260935]: 2025-10-11 09:24:31.839 2 DEBUG nova.compute.provider_tree [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 09:24:31 compute-0 nova_compute[260935]: 2025-10-11 09:24:31.862 2 DEBUG nova.scheduler.client.report [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 09:24:31 compute-0 nova_compute[260935]: 2025-10-11 09:24:31.883 2 DEBUG nova.scheduler.client.report [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 09:24:31 compute-0 ceph-mon[74313]: pgmap v2486: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 11 09:24:32 compute-0 nova_compute[260935]: 2025-10-11 09:24:32.000 2 DEBUG oslo_concurrency.processutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:32 compute-0 nova_compute[260935]: 2025-10-11 09:24:32.470 2 DEBUG nova.network.neutron [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Successfully updated port: c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:24:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:24:32 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1438114392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:24:32 compute-0 nova_compute[260935]: 2025-10-11 09:24:32.519 2 DEBUG oslo_concurrency.processutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:32 compute-0 nova_compute[260935]: 2025-10-11 09:24:32.563 2 DEBUG nova.compute.provider_tree [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:24:32 compute-0 nova_compute[260935]: 2025-10-11 09:24:32.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:32 compute-0 nova_compute[260935]: 2025-10-11 09:24:32.585 2 DEBUG nova.scheduler.client.report [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:24:32 compute-0 nova_compute[260935]: 2025-10-11 09:24:32.609 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:32 compute-0 nova_compute[260935]: 2025-10-11 09:24:32.646 2 INFO nova.scheduler.client.report [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1
Oct 11 09:24:32 compute-0 nova_compute[260935]: 2025-10-11 09:24:32.692 2 DEBUG nova.compute.manager [req-b9263583-9176-49a9-8af4-8b97784824af req-c1aad60f-a7f6-460e-9daf-7586253dc807 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-changed-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:32 compute-0 nova_compute[260935]: 2025-10-11 09:24:32.692 2 DEBUG nova.compute.manager [req-b9263583-9176-49a9-8af4-8b97784824af req-c1aad60f-a7f6-460e-9daf-7586253dc807 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Refreshing instance network info cache due to event network-changed-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:24:32 compute-0 nova_compute[260935]: 2025-10-11 09:24:32.692 2 DEBUG oslo_concurrency.lockutils [req-b9263583-9176-49a9-8af4-8b97784824af req-c1aad60f-a7f6-460e-9daf-7586253dc807 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:24:32 compute-0 nova_compute[260935]: 2025-10-11 09:24:32.692 2 DEBUG oslo_concurrency.lockutils [req-b9263583-9176-49a9-8af4-8b97784824af req-c1aad60f-a7f6-460e-9daf-7586253dc807 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:24:32 compute-0 nova_compute[260935]: 2025-10-11 09:24:32.693 2 DEBUG nova.network.neutron [req-b9263583-9176-49a9-8af4-8b97784824af req-c1aad60f-a7f6-460e-9daf-7586253dc807 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Refreshing network info cache for port c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:24:32 compute-0 nova_compute[260935]: 2025-10-11 09:24:32.713 2 DEBUG oslo_concurrency.lockutils [None req-4011166e-9c26-4d1c-9b3f-cdae3a928c0d dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 11 09:24:32 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1438114392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:24:33 compute-0 nova_compute[260935]: 2025-10-11 09:24:33.185 2 DEBUG nova.network.neutron [req-b9263583-9176-49a9-8af4-8b97784824af req-c1aad60f-a7f6-460e-9daf-7586253dc807 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:24:33 compute-0 nova_compute[260935]: 2025-10-11 09:24:33.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:33 compute-0 nova_compute[260935]: 2025-10-11 09:24:33.527 2 DEBUG nova.network.neutron [req-b9263583-9176-49a9-8af4-8b97784824af req-c1aad60f-a7f6-460e-9daf-7586253dc807 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:24:33 compute-0 nova_compute[260935]: 2025-10-11 09:24:33.546 2 DEBUG oslo_concurrency.lockutils [req-b9263583-9176-49a9-8af4-8b97784824af req-c1aad60f-a7f6-460e-9daf-7586253dc807 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:24:33 compute-0 nova_compute[260935]: 2025-10-11 09:24:33.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:24:33 compute-0 nova_compute[260935]: 2025-10-11 09:24:33.775 2 DEBUG nova.network.neutron [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Successfully updated port: 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:24:33 compute-0 nova_compute[260935]: 2025-10-11 09:24:33.841 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:24:33 compute-0 nova_compute[260935]: 2025-10-11 09:24:33.841 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:24:33 compute-0 nova_compute[260935]: 2025-10-11 09:24:33.841 2 DEBUG nova.network.neutron [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:24:33 compute-0 ceph-mon[74313]: pgmap v2487: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 11 09:24:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.128986) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174674129089, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 570, "num_deletes": 255, "total_data_size": 575467, "memory_usage": 587432, "flush_reason": "Manual Compaction"}
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174674137743, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 559091, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51783, "largest_seqno": 52352, "table_properties": {"data_size": 555965, "index_size": 1034, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7270, "raw_average_key_size": 18, "raw_value_size": 549672, "raw_average_value_size": 1416, "num_data_blocks": 46, "num_entries": 388, "num_filter_entries": 388, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760174643, "oldest_key_time": 1760174643, "file_creation_time": 1760174674, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 8902 microseconds, and 5163 cpu microseconds.
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.137895) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 559091 bytes OK
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.137930) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.140026) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.140053) EVENT_LOG_v1 {"time_micros": 1760174674140045, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.140080) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 572254, prev total WAL file size 572254, number of live WAL files 2.
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.140706) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303039' seq:72057594037927935, type:22 .. '6C6F676D0032323630' seq:0, type:0; will stop at (end)
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(545KB)], [119(9795KB)]
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174674140768, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 10589412, "oldest_snapshot_seqno": -1}
Oct 11 09:24:34 compute-0 nova_compute[260935]: 2025-10-11 09:24:34.167 2 DEBUG nova.network.neutron [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7182 keys, 10464671 bytes, temperature: kUnknown
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174674219338, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 10464671, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10415702, "index_size": 29913, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 187526, "raw_average_key_size": 26, "raw_value_size": 10286188, "raw_average_value_size": 1432, "num_data_blocks": 1171, "num_entries": 7182, "num_filter_entries": 7182, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760174674, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.219702) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 10464671 bytes
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.221426) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.6 rd, 133.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.6 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(37.7) write-amplify(18.7) OK, records in: 7704, records dropped: 522 output_compression: NoCompression
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.221454) EVENT_LOG_v1 {"time_micros": 1760174674221441, "job": 72, "event": "compaction_finished", "compaction_time_micros": 78696, "compaction_time_cpu_micros": 49567, "output_level": 6, "num_output_files": 1, "total_output_size": 10464671, "num_input_records": 7704, "num_output_records": 7182, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174674222050, "job": 72, "event": "table_file_deletion", "file_number": 121}
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174674224597, "job": 72, "event": "table_file_deletion", "file_number": 119}
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.140637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.224767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.224778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.224781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.224785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:24:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:24:34.224789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:24:34 compute-0 nova_compute[260935]: 2025-10-11 09:24:34.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:24:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2488: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Oct 11 09:24:34 compute-0 nova_compute[260935]: 2025-10-11 09:24:34.865 2 DEBUG nova.compute.manager [req-81ab7537-b13d-464b-be3a-9a35876895bc req-289088ac-177e-4ac5-83ed-82d7a9e486f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-changed-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:34 compute-0 nova_compute[260935]: 2025-10-11 09:24:34.865 2 DEBUG nova.compute.manager [req-81ab7537-b13d-464b-be3a-9a35876895bc req-289088ac-177e-4ac5-83ed-82d7a9e486f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Refreshing instance network info cache due to event network-changed-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:24:34 compute-0 nova_compute[260935]: 2025-10-11 09:24:34.866 2 DEBUG oslo_concurrency.lockutils [req-81ab7537-b13d-464b-be3a-9a35876895bc req-289088ac-177e-4ac5-83ed-82d7a9e486f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:24:35 compute-0 ceph-mon[74313]: pgmap v2488: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Oct 11 09:24:35 compute-0 nova_compute[260935]: 2025-10-11 09:24:35.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:24:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.430 2 DEBUG nova.network.neutron [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updating instance_info_cache with network_info: [{"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.454 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.454 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Instance network_info: |[{"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.455 2 DEBUG oslo_concurrency.lockutils [req-81ab7537-b13d-464b-be3a-9a35876895bc req-289088ac-177e-4ac5-83ed-82d7a9e486f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.455 2 DEBUG nova.network.neutron [req-81ab7537-b13d-464b-be3a-9a35876895bc req-289088ac-177e-4ac5-83ed-82d7a9e486f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Refreshing network info cache for port 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.460 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Start _get_guest_xml network_info=[{"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.465 2 WARNING nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.470 2 DEBUG nova.virt.libvirt.host [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.471 2 DEBUG nova.virt.libvirt.host [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.480 2 DEBUG nova.virt.libvirt.host [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.481 2 DEBUG nova.virt.libvirt.host [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.482 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.482 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.483 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.483 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.484 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.484 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.484 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.485 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.485 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.485 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.486 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.486 2 DEBUG nova.virt.hardware [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.490 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:24:37 compute-0 ceph-mon[74313]: pgmap v2489: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Oct 11 09:24:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:24:37 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2011420552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:24:37 compute-0 nova_compute[260935]: 2025-10-11 09:24:37.988 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.015 2 DEBUG nova.storage.rbd_utils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ece1112a-294b-461f-8b8f-c6a0bb212647_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.019 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:24:38 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3188556766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.476 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.478 2 DEBUG nova.virt.libvirt.vif [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1996538223',display_name='tempest-TestGettingAddress-server-1996538223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1996538223',id=125,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-oa0yn1o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:24:27Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ece1112a-294b-461f-8b8f-c6a0bb212647,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.479 2 DEBUG nova.network.os_vif_util [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.480 2 DEBUG nova.network.os_vif_util [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:1a:f4,bridge_name='br-int',has_traffic_filtering=True,id=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9d0cfb2-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.482 2 DEBUG nova.virt.libvirt.vif [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1996538223',display_name='tempest-TestGettingAddress-server-1996538223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1996538223',id=125,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-oa0yn1o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:24:27Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ece1112a-294b-461f-8b8f-c6a0bb212647,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.483 2 DEBUG nova.network.os_vif_util [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.484 2 DEBUG nova.network.os_vif_util [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:3e:95,bridge_name='br-int',has_traffic_filtering=True,id=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap336ce2f8-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.486 2 DEBUG nova.objects.instance [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid ece1112a-294b-461f-8b8f-c6a0bb212647 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.508 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:24:38 compute-0 nova_compute[260935]:   <uuid>ece1112a-294b-461f-8b8f-c6a0bb212647</uuid>
Oct 11 09:24:38 compute-0 nova_compute[260935]:   <name>instance-0000007d</name>
Oct 11 09:24:38 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:24:38 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:24:38 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <nova:name>tempest-TestGettingAddress-server-1996538223</nova:name>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:24:37</nova:creationTime>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:24:38 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:24:38 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:24:38 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:24:38 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:24:38 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:24:38 compute-0 nova_compute[260935]:         <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 09:24:38 compute-0 nova_compute[260935]:         <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:24:38 compute-0 nova_compute[260935]:         <nova:port uuid="c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9">
Oct 11 09:24:38 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:24:38 compute-0 nova_compute[260935]:         <nova:port uuid="336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0">
Oct 11 09:24:38 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe5b:3e95" ipVersion="6"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe5b:3e95" ipVersion="6"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:24:38 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:24:38 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <system>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <entry name="serial">ece1112a-294b-461f-8b8f-c6a0bb212647</entry>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <entry name="uuid">ece1112a-294b-461f-8b8f-c6a0bb212647</entry>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     </system>
Oct 11 09:24:38 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:24:38 compute-0 nova_compute[260935]:   <os>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:   </os>
Oct 11 09:24:38 compute-0 nova_compute[260935]:   <features>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:   </features>
Oct 11 09:24:38 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:24:38 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:24:38 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ece1112a-294b-461f-8b8f-c6a0bb212647_disk">
Oct 11 09:24:38 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       </source>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:24:38 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ece1112a-294b-461f-8b8f-c6a0bb212647_disk.config">
Oct 11 09:24:38 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       </source>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:24:38 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:c1:1a:f4"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <target dev="tapc9d0cfb2-3f"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:5b:3e:95"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <target dev="tap336ce2f8-5f"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647/console.log" append="off"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <video>
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     </video>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:24:38 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:24:38 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:24:38 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:24:38 compute-0 nova_compute[260935]: </domain>
Oct 11 09:24:38 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.508 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Preparing to wait for external event network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.509 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.510 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.510 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.511 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Preparing to wait for external event network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.511 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.512 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.512 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.514 2 DEBUG nova.virt.libvirt.vif [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1996538223',display_name='tempest-TestGettingAddress-server-1996538223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1996538223',id=125,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-oa0yn1o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:24:27Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ece1112a-294b-461f-8b8f-c6a0bb212647,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.515 2 DEBUG nova.network.os_vif_util [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.516 2 DEBUG nova.network.os_vif_util [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:1a:f4,bridge_name='br-int',has_traffic_filtering=True,id=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9d0cfb2-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.517 2 DEBUG os_vif [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:1a:f4,bridge_name='br-int',has_traffic_filtering=True,id=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9d0cfb2-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.519 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.525 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9d0cfb2-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.526 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc9d0cfb2-3f, col_values=(('external_ids', {'iface-id': 'c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:1a:f4', 'vm-uuid': 'ece1112a-294b-461f-8b8f-c6a0bb212647'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:38 compute-0 NetworkManager[44960]: <info>  [1760174678.5301] manager: (tapc9d0cfb2-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/530)
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.539 2 INFO os_vif [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:1a:f4,bridge_name='br-int',has_traffic_filtering=True,id=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9d0cfb2-3f')
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.541 2 DEBUG nova.virt.libvirt.vif [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1996538223',display_name='tempest-TestGettingAddress-server-1996538223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1996538223',id=125,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-oa0yn1o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:24:27Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ece1112a-294b-461f-8b8f-c6a0bb212647,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.541 2 DEBUG nova.network.os_vif_util [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.542 2 DEBUG nova.network.os_vif_util [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:3e:95,bridge_name='br-int',has_traffic_filtering=True,id=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap336ce2f8-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.543 2 DEBUG os_vif [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:3e:95,bridge_name='br-int',has_traffic_filtering=True,id=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap336ce2f8-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.544 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.545 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap336ce2f8-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.549 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap336ce2f8-5f, col_values=(('external_ids', {'iface-id': '336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:3e:95', 'vm-uuid': 'ece1112a-294b-461f-8b8f-c6a0bb212647'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:38 compute-0 NetworkManager[44960]: <info>  [1760174678.5528] manager: (tap336ce2f8-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/531)
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.566 2 INFO os_vif [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:3e:95,bridge_name='br-int',has_traffic_filtering=True,id=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap336ce2f8-5f')
Oct 11 09:24:38 compute-0 ovn_controller[152945]: 2025-10-11T09:24:38Z|01340|binding|INFO|Releasing lport c1fabca0-ae77-4e48-b93b-3023955db235 from this chassis (sb_readonly=0)
Oct 11 09:24:38 compute-0 ovn_controller[152945]: 2025-10-11T09:24:38Z|01341|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:24:38 compute-0 ovn_controller[152945]: 2025-10-11T09:24:38Z|01342|binding|INFO|Releasing lport cd117be9-e2e2-4d92-9c01-a0f37b7175b9 from this chassis (sb_readonly=0)
Oct 11 09:24:38 compute-0 ovn_controller[152945]: 2025-10-11T09:24:38Z|01343|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.665 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.665 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.666 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:c1:1a:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.666 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:5b:3e:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.667 2 INFO nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Using config drive
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.705 2 DEBUG nova.storage.rbd_utils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ece1112a-294b-461f-8b8f-c6a0bb212647_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Oct 11 09:24:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2011420552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:24:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3188556766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:24:38 compute-0 nova_compute[260935]: 2025-10-11 09:24:38.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.227 2 INFO nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Creating config drive at /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647/disk.config
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.235 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1oa0yu8c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.403 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1oa0yu8c" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.449 2 DEBUG nova.storage.rbd_utils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ece1112a-294b-461f-8b8f-c6a0bb212647_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.454 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647/disk.config ece1112a-294b-461f-8b8f-c6a0bb212647_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.678 2 DEBUG oslo_concurrency.processutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647/disk.config ece1112a-294b-461f-8b8f-c6a0bb212647_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.679 2 INFO nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Deleting local config drive /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647/disk.config because it was imported into RBD.
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.745 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.746 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.747 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.747 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.748 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:39 compute-0 kernel: tapc9d0cfb2-3f: entered promiscuous mode
Oct 11 09:24:39 compute-0 NetworkManager[44960]: <info>  [1760174679.7554] manager: (tapc9d0cfb2-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/532)
Oct 11 09:24:39 compute-0 ovn_controller[152945]: 2025-10-11T09:24:39Z|01344|binding|INFO|Claiming lport c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 for this chassis.
Oct 11 09:24:39 compute-0 ovn_controller[152945]: 2025-10-11T09:24:39Z|01345|binding|INFO|c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9: Claiming fa:16:3e:c1:1a:f4 10.100.0.11
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.812 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:1a:f4 10.100.0.11'], port_security=['fa:16:3e:c1:1a:f4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ece1112a-294b-461f-8b8f-c6a0bb212647', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0ad377f6-44f5-45fc-95cb-2f677a42f5a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=039eefbf-a236-4641-9b4d-7b7f1014a5e2, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.813 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 in datapath cdd3c547-e0c3-4649-8427-08ce8e1c52d4 bound to our chassis
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.816 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cdd3c547-e0c3-4649-8427-08ce8e1c52d4
Oct 11 09:24:39 compute-0 NetworkManager[44960]: <info>  [1760174679.8197] manager: (tap336ce2f8-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/533)
Oct 11 09:24:39 compute-0 kernel: tap336ce2f8-5f: entered promiscuous mode
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:39 compute-0 ovn_controller[152945]: 2025-10-11T09:24:39Z|01346|binding|INFO|Setting lport c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 ovn-installed in OVS
Oct 11 09:24:39 compute-0 ovn_controller[152945]: 2025-10-11T09:24:39Z|01347|binding|INFO|Setting lport c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 up in Southbound
Oct 11 09:24:39 compute-0 systemd-udevd[397496]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:24:39 compute-0 ovn_controller[152945]: 2025-10-11T09:24:39Z|01348|if_status|INFO|Dropped 1 log messages in last 158 seconds (most recently, 158 seconds ago) due to excessive rate
Oct 11 09:24:39 compute-0 ovn_controller[152945]: 2025-10-11T09:24:39Z|01349|if_status|INFO|Not updating pb chassis for 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 now as sb is readonly
Oct 11 09:24:39 compute-0 ovn_controller[152945]: 2025-10-11T09:24:39Z|01350|binding|INFO|Claiming lport 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 for this chassis.
Oct 11 09:24:39 compute-0 ovn_controller[152945]: 2025-10-11T09:24:39Z|01351|binding|INFO|336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0: Claiming fa:16:3e:5b:3e:95 2001:db8:0:1:f816:3eff:fe5b:3e95 2001:db8::f816:3eff:fe5b:3e95
Oct 11 09:24:39 compute-0 systemd-udevd[397497]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.849 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2c0c5b77-8127-44c7-aeb6-21babe1b13b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:39 compute-0 NetworkManager[44960]: <info>  [1760174679.8539] device (tap336ce2f8-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:24:39 compute-0 NetworkManager[44960]: <info>  [1760174679.8550] device (tap336ce2f8-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.855 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:3e:95 2001:db8:0:1:f816:3eff:fe5b:3e95 2001:db8::f816:3eff:fe5b:3e95'], port_security=['fa:16:3e:5b:3e:95 2001:db8:0:1:f816:3eff:fe5b:3e95 2001:db8::f816:3eff:fe5b:3e95'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe5b:3e95/64 2001:db8::f816:3eff:fe5b:3e95/64', 'neutron:device_id': 'ece1112a-294b-461f-8b8f-c6a0bb212647', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0ad377f6-44f5-45fc-95cb-2f677a42f5a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75b51dc9-c211-445f-9e3e-d219efddef19, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:24:39 compute-0 ovn_controller[152945]: 2025-10-11T09:24:39Z|01352|binding|INFO|Setting lport 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 ovn-installed in OVS
Oct 11 09:24:39 compute-0 ovn_controller[152945]: 2025-10-11T09:24:39Z|01353|binding|INFO|Setting lport 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 up in Southbound
Oct 11 09:24:39 compute-0 NetworkManager[44960]: <info>  [1760174679.8597] device (tapc9d0cfb2-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:24:39 compute-0 NetworkManager[44960]: <info>  [1760174679.8607] device (tapc9d0cfb2-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:39 compute-0 systemd-machined[215705]: New machine qemu-149-instance-0000007d.
Oct 11 09:24:39 compute-0 systemd[1]: Started Virtual Machine qemu-149-instance-0000007d.
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.887 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7baca2f7-866b-40b2-85b6-1fa5ef3d3d8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.891 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ceef3728-8c06-44cb-89d1-66702d4eac20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:39 compute-0 ceph-mon[74313]: pgmap v2490: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.923 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[38140c31-b093-4671-89fa-e121e2636f64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.952 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8401059d-fde1-4e1a-b303-f8d2a93c744f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdd3c547-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:b8:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 362], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647689, 'reachable_time': 38844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397521, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.965 2 DEBUG nova.network.neutron [req-81ab7537-b13d-464b-be3a-9a35876895bc req-289088ac-177e-4ac5-83ed-82d7a9e486f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updated VIF entry in instance network info cache for port 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.966 2 DEBUG nova.network.neutron [req-81ab7537-b13d-464b-be3a-9a35876895bc req-289088ac-177e-4ac5-83ed-82d7a9e486f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updating instance_info_cache with network_info: [{"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.970 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[edb4a456-240c-40af-9184-81952a4ef347]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcdd3c547-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647704, 'tstamp': 647704}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397532, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcdd3c547-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647708, 'tstamp': 647708}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397532, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.972 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdd3c547-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.975 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcdd3c547-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.975 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.976 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcdd3c547-e0, col_values=(('external_ids', {'iface-id': 'cd117be9-e2e2-4d92-9c01-a0f37b7175b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.976 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.977 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 in datapath f0bc2c62-89ab-4ce1-9157-2273788b9018 unbound from our chassis
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.978 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0bc2c62-89ab-4ce1-9157-2273788b9018
Oct 11 09:24:39 compute-0 nova_compute[260935]: 2025-10-11 09:24:39.991 2 DEBUG oslo_concurrency.lockutils [req-81ab7537-b13d-464b-be3a-9a35876895bc req-289088ac-177e-4ac5-83ed-82d7a9e486f4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:24:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:39.996 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[95bf92fc-56e8-4b8e-b619-4f7de22b01a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.036 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[218866de-4259-496b-915f-0a43c2f431ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.040 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6399581f-207f-4da5-a696-4d2648c5f5fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.078 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9a515d-8504-43a8-bfcb-f5edbb615ae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.094 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[69300fa4-08ff-40c1-9e0e-5b871d0e3429]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0bc2c62-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:23:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647796, 'reachable_time': 34498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397538, 'error': None, 'target': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.109 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[14e68b55-3924-4b04-b45f-5db7945c59da]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf0bc2c62-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647812, 'tstamp': 647812}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397539, 'error': None, 'target': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:24:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.111 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0bc2c62-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.114 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0bc2c62-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.114 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:24:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.114 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0bc2c62-80, col_values=(('external_ids', {'iface-id': 'c1fabca0-ae77-4e48-b93b-3023955db235'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:24:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:40.115 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.190 2 DEBUG nova.compute.manager [req-85fddae4-ff8f-486b-9e80-24233495e566 req-32437273-42f9-4e56-b0c7-ea07262a43c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.191 2 DEBUG oslo_concurrency.lockutils [req-85fddae4-ff8f-486b-9e80-24233495e566 req-32437273-42f9-4e56-b0c7-ea07262a43c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.191 2 DEBUG oslo_concurrency.lockutils [req-85fddae4-ff8f-486b-9e80-24233495e566 req-32437273-42f9-4e56-b0c7-ea07262a43c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.191 2 DEBUG oslo_concurrency.lockutils [req-85fddae4-ff8f-486b-9e80-24233495e566 req-32437273-42f9-4e56-b0c7-ea07262a43c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.192 2 DEBUG nova.compute.manager [req-85fddae4-ff8f-486b-9e80-24233495e566 req-32437273-42f9-4e56-b0c7-ea07262a43c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Processing event network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:24:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:24:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4294208709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.255 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.343 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.344 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.344 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.350 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.351 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.354 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.354 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.357 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.357 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.360 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.360 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.575 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.576 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2657MB free_disk=59.76435852050781GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.576 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.577 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.726 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.726 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.726 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.727 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 85cf93a0-2068-4567-a399-b8d52e672913 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.727 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance ece1112a-294b-461f-8b8f-c6a0bb212647 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.727 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.727 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.840 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174680.8400786, ece1112a-294b-461f-8b8f-c6a0bb212647 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.840 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] VM Started (Lifecycle Event)
Oct 11 09:24:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.858 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.897 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.902 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174680.8402112, ece1112a-294b-461f-8b8f-c6a0bb212647 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.902 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] VM Paused (Lifecycle Event)
Oct 11 09:24:40 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4294208709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.928 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.934 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:24:40 compute-0 nova_compute[260935]: 2025-10-11 09:24:40.958 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:24:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:24:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/137599789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:24:41 compute-0 nova_compute[260935]: 2025-10-11 09:24:41.286 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:24:41 compute-0 nova_compute[260935]: 2025-10-11 09:24:41.293 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:24:41 compute-0 nova_compute[260935]: 2025-10-11 09:24:41.321 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:24:41 compute-0 nova_compute[260935]: 2025-10-11 09:24:41.352 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:24:41 compute-0 nova_compute[260935]: 2025-10-11 09:24:41.352 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:41 compute-0 ceph-mon[74313]: pgmap v2491: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct 11 09:24:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/137599789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.288 2 DEBUG nova.compute.manager [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.289 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.289 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.290 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.291 2 DEBUG nova.compute.manager [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] No event matching network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 in dict_keys([('network-vif-plugged', '336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.291 2 WARNING nova.compute.manager [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received unexpected event network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 for instance with vm_state building and task_state spawning.
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.292 2 DEBUG nova.compute.manager [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.292 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.293 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.293 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.294 2 DEBUG nova.compute.manager [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Processing event network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.294 2 DEBUG nova.compute.manager [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.295 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.295 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.296 2 DEBUG oslo_concurrency.lockutils [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.296 2 DEBUG nova.compute.manager [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] No waiting events found dispatching network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.297 2 WARNING nova.compute.manager [req-a38e041f-243a-468c-9933-7773ac783c2f req-462dc945-ea86-4235-bbf1-d35e611d43c1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received unexpected event network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 for instance with vm_state building and task_state spawning.
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.298 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.303 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174682.3019073, ece1112a-294b-461f-8b8f-c6a0bb212647 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.305 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] VM Resumed (Lifecycle Event)
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.308 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.317 2 INFO nova.virt.libvirt.driver [-] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Instance spawned successfully.
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.318 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.335 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.342 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.346 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.346 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.347 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.347 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.348 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.348 2 DEBUG nova.virt.libvirt.driver [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.352 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.353 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.380 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.429 2 INFO nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Took 14.45 seconds to spawn the instance on the hypervisor.
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.429 2 DEBUG nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.522 2 INFO nova.compute.manager [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Took 15.73 seconds to build instance.
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.550 2 DEBUG oslo_concurrency.lockutils [None req-026f9251-1d03-4d44-ad11-414cadc65076 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:42 compute-0 nova_compute[260935]: 2025-10-11 09:24:42.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:42 compute-0 podman[397608]: 2025-10-11 09:24:42.811443895 +0000 UTC m=+0.101160916 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:24:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2492: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 11 09:24:43 compute-0 nova_compute[260935]: 2025-10-11 09:24:43.162 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174668.0107627, 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:24:43 compute-0 nova_compute[260935]: 2025-10-11 09:24:43.163 2 INFO nova.compute.manager [-] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] VM Stopped (Lifecycle Event)
Oct 11 09:24:43 compute-0 nova_compute[260935]: 2025-10-11 09:24:43.196 2 DEBUG nova.compute.manager [None req-63549b3d-04f8-44b3-8b26-5cb220d1ebb6 - - - - - -] [instance: 32ff26c3-5e8a-4593-a95b-0e77c8dcc5e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:24:43 compute-0 nova_compute[260935]: 2025-10-11 09:24:43.267 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:24:43 compute-0 nova_compute[260935]: 2025-10-11 09:24:43.267 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:24:43 compute-0 nova_compute[260935]: 2025-10-11 09:24:43.268 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:24:43 compute-0 nova_compute[260935]: 2025-10-11 09:24:43.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:43 compute-0 ceph-mon[74313]: pgmap v2492: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct 11 09:24:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:24:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Oct 11 09:24:46 compute-0 ceph-mon[74313]: pgmap v2493: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.248 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.266 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.266 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.267 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.267 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.294 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.295 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid b75d8ded-515b-48ff-a6b6-28df88878996 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.295 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 52be16b4-343a-4fd4-9041-39069a1fde2a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.296 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 85cf93a0-2068-4567-a399-b8d52e672913 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.296 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid ece1112a-294b-461f-8b8f-c6a0bb212647 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.296 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.296 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.297 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.297 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.297 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.298 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.298 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.298 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "85cf93a0-2068-4567-a399-b8d52e672913" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.298 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.299 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.350 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "85cf93a0-2068-4567-a399-b8d52e672913" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.353 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.353 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.354 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:46 compute-0 nova_compute[260935]: 2025-10-11 09:24:46.356 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:24:46 compute-0 podman[397628]: 2025-10-11 09:24:46.800079407 +0000 UTC m=+0.097836893 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 09:24:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2494: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Oct 11 09:24:47 compute-0 nova_compute[260935]: 2025-10-11 09:24:47.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:48 compute-0 ceph-mon[74313]: pgmap v2494: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Oct 11 09:24:48 compute-0 nova_compute[260935]: 2025-10-11 09:24:48.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 11 09:24:49 compute-0 ceph-mon[74313]: pgmap v2495: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 11 09:24:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:24:50 compute-0 nova_compute[260935]: 2025-10-11 09:24:50.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:50 compute-0 ovn_controller[152945]: 2025-10-11T09:24:50Z|01354|binding|INFO|Releasing lport c1fabca0-ae77-4e48-b93b-3023955db235 from this chassis (sb_readonly=0)
Oct 11 09:24:50 compute-0 ovn_controller[152945]: 2025-10-11T09:24:50Z|01355|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:24:50 compute-0 ovn_controller[152945]: 2025-10-11T09:24:50Z|01356|binding|INFO|Releasing lport cd117be9-e2e2-4d92-9c01-a0f37b7175b9 from this chassis (sb_readonly=0)
Oct 11 09:24:50 compute-0 ovn_controller[152945]: 2025-10-11T09:24:50Z|01357|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:24:50 compute-0 nova_compute[260935]: 2025-10-11 09:24:50.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:50 compute-0 nova_compute[260935]: 2025-10-11 09:24:50.826 2 DEBUG nova.compute.manager [req-a8559006-e3a2-4bca-9afc-6bc0ea4214c4 req-66df1d76-4bdd-46e6-8bad-2983d4b8bc5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-changed-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:24:50 compute-0 nova_compute[260935]: 2025-10-11 09:24:50.827 2 DEBUG nova.compute.manager [req-a8559006-e3a2-4bca-9afc-6bc0ea4214c4 req-66df1d76-4bdd-46e6-8bad-2983d4b8bc5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Refreshing instance network info cache due to event network-changed-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:24:50 compute-0 nova_compute[260935]: 2025-10-11 09:24:50.827 2 DEBUG oslo_concurrency.lockutils [req-a8559006-e3a2-4bca-9afc-6bc0ea4214c4 req-66df1d76-4bdd-46e6-8bad-2983d4b8bc5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:24:50 compute-0 nova_compute[260935]: 2025-10-11 09:24:50.827 2 DEBUG oslo_concurrency.lockutils [req-a8559006-e3a2-4bca-9afc-6bc0ea4214c4 req-66df1d76-4bdd-46e6-8bad-2983d4b8bc5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:24:50 compute-0 nova_compute[260935]: 2025-10-11 09:24:50.828 2 DEBUG nova.network.neutron [req-a8559006-e3a2-4bca-9afc-6bc0ea4214c4 req-66df1d76-4bdd-46e6-8bad-2983d4b8bc5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Refreshing network info cache for port c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:24:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2496: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 11 09:24:51 compute-0 podman[397651]: 2025-10-11 09:24:51.812332955 +0000 UTC m=+0.102736810 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 09:24:51 compute-0 podman[397652]: 2025-10-11 09:24:51.861289787 +0000 UTC m=+0.143242854 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:24:51 compute-0 ceph-mon[74313]: pgmap v2496: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 11 09:24:52 compute-0 nova_compute[260935]: 2025-10-11 09:24:52.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 456 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 190 KiB/s wr, 80 op/s
Oct 11 09:24:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:53.288 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:24:53 compute-0 nova_compute[260935]: 2025-10-11 09:24:53.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:24:53.290 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:24:53 compute-0 nova_compute[260935]: 2025-10-11 09:24:53.295 2 DEBUG nova.network.neutron [req-a8559006-e3a2-4bca-9afc-6bc0ea4214c4 req-66df1d76-4bdd-46e6-8bad-2983d4b8bc5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updated VIF entry in instance network info cache for port c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:24:53 compute-0 nova_compute[260935]: 2025-10-11 09:24:53.296 2 DEBUG nova.network.neutron [req-a8559006-e3a2-4bca-9afc-6bc0ea4214c4 req-66df1d76-4bdd-46e6-8bad-2983d4b8bc5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updating instance_info_cache with network_info: [{"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:24:53 compute-0 nova_compute[260935]: 2025-10-11 09:24:53.337 2 DEBUG oslo_concurrency.lockutils [req-a8559006-e3a2-4bca-9afc-6bc0ea4214c4 req-66df1d76-4bdd-46e6-8bad-2983d4b8bc5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:24:53 compute-0 nova_compute[260935]: 2025-10-11 09:24:53.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:53 compute-0 ceph-mon[74313]: pgmap v2497: 321 pgs: 321 active+clean; 456 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 190 KiB/s wr, 80 op/s
Oct 11 09:24:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:24:54 compute-0 ovn_controller[152945]: 2025-10-11T09:24:54Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c1:1a:f4 10.100.0.11
Oct 11 09:24:54 compute-0 ovn_controller[152945]: 2025-10-11T09:24:54Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c1:1a:f4 10.100.0.11
Oct 11 09:24:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:24:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:24:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:24:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:24:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:24:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:24:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 456 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 176 KiB/s wr, 69 op/s
Oct 11 09:24:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:24:54
Oct 11 09:24:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:24:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:24:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'volumes', 'vms', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log']
Oct 11 09:24:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:24:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:24:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:24:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:24:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:24:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:24:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:24:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:24:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:24:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:24:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:24:55 compute-0 ceph-mon[74313]: pgmap v2498: 321 pgs: 321 active+clean; 456 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 176 KiB/s wr, 69 op/s
Oct 11 09:24:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 456 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 176 KiB/s wr, 69 op/s
Oct 11 09:24:57 compute-0 nova_compute[260935]: 2025-10-11 09:24:57.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:57 compute-0 ceph-mon[74313]: pgmap v2499: 321 pgs: 321 active+clean; 456 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 176 KiB/s wr, 69 op/s
Oct 11 09:24:58 compute-0 nova_compute[260935]: 2025-10-11 09:24:58.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:24:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2500: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 11 09:24:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:24:59 compute-0 sshd-session[397698]: Invalid user admin from 165.232.82.252 port 45652
Oct 11 09:24:59 compute-0 sshd-session[397698]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:24:59 compute-0 sshd-session[397698]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:24:59 compute-0 ceph-mon[74313]: pgmap v2500: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 11 09:25:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:00.293 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:00 compute-0 nova_compute[260935]: 2025-10-11 09:25:00.604 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:00 compute-0 nova_compute[260935]: 2025-10-11 09:25:00.604 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:00 compute-0 nova_compute[260935]: 2025-10-11 09:25:00.633 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:25:00 compute-0 nova_compute[260935]: 2025-10-11 09:25:00.738 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:00 compute-0 nova_compute[260935]: 2025-10-11 09:25:00.739 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:00 compute-0 nova_compute[260935]: 2025-10-11 09:25:00.752 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:25:00 compute-0 nova_compute[260935]: 2025-10-11 09:25:00.753 2 INFO nova.compute.claims [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:25:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2501: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:25:01 compute-0 nova_compute[260935]: 2025-10-11 09:25:01.038 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:25:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:25:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2932817991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:25:01 compute-0 nova_compute[260935]: 2025-10-11 09:25:01.539 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:25:01 compute-0 nova_compute[260935]: 2025-10-11 09:25:01.549 2 DEBUG nova.compute.provider_tree [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:25:01 compute-0 sshd-session[397698]: Failed password for invalid user admin from 165.232.82.252 port 45652 ssh2
Oct 11 09:25:01 compute-0 nova_compute[260935]: 2025-10-11 09:25:01.628 2 DEBUG nova.scheduler.client.report [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:25:01 compute-0 nova_compute[260935]: 2025-10-11 09:25:01.668 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:01 compute-0 nova_compute[260935]: 2025-10-11 09:25:01.669 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:25:01 compute-0 nova_compute[260935]: 2025-10-11 09:25:01.748 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:25:01 compute-0 nova_compute[260935]: 2025-10-11 09:25:01.749 2 DEBUG nova.network.neutron [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:25:01 compute-0 nova_compute[260935]: 2025-10-11 09:25:01.776 2 INFO nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:25:01 compute-0 nova_compute[260935]: 2025-10-11 09:25:01.797 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:25:01 compute-0 nova_compute[260935]: 2025-10-11 09:25:01.907 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:25:01 compute-0 nova_compute[260935]: 2025-10-11 09:25:01.909 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:25:01 compute-0 nova_compute[260935]: 2025-10-11 09:25:01.910 2 INFO nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Creating image(s)
Oct 11 09:25:01 compute-0 nova_compute[260935]: 2025-10-11 09:25:01.943 2 DEBUG nova.storage.rbd_utils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:25:01 compute-0 ceph-mon[74313]: pgmap v2501: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:25:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2932817991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:25:01 compute-0 nova_compute[260935]: 2025-10-11 09:25:01.978 2 DEBUG nova.storage.rbd_utils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.014 2 DEBUG nova.storage.rbd_utils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.020 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.134 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.136 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.137 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.138 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.173 2 DEBUG nova.storage.rbd_utils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.179 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.271 2 DEBUG nova.policy [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.487 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.576 2 DEBUG nova.storage.rbd_utils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.732 2 DEBUG nova.objects.instance [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid e4670193-9ea3-45bc-9dbd-14d2e62f32f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.752 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.753 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Ensure instance console log exists: /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.753 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.754 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:02 compute-0 nova_compute[260935]: 2025-10-11 09:25:02.755 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2502: 321 pgs: 321 active+clean; 518 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 3.3 MiB/s wr, 80 op/s
Oct 11 09:25:02 compute-0 sshd-session[397698]: Connection closed by invalid user admin 165.232.82.252 port 45652 [preauth]
Oct 11 09:25:03 compute-0 unix_chkpwd[397888]: password check failed for user (root)
Oct 11 09:25:03 compute-0 sshd-session[397627]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=5.44.252.30  user=root
Oct 11 09:25:03 compute-0 nova_compute[260935]: 2025-10-11 09:25:03.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:03 compute-0 ceph-mon[74313]: pgmap v2502: 321 pgs: 321 active+clean; 518 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 3.3 MiB/s wr, 80 op/s
Oct 11 09:25:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:25:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 321 active+clean; 518 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 3.2 MiB/s wr, 74 op/s
Oct 11 09:25:04 compute-0 nova_compute[260935]: 2025-10-11 09:25:04.944 2 DEBUG nova.network.neutron [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Successfully created port: 119ff9b5-e3af-4688-97a4-92b5f6a240dc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:25:05 compute-0 sshd-session[397627]: Failed password for root from 5.44.252.30 port 50460 ssh2
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0043770120099954415 of space, bias 1.0, pg target 1.3131036029986325 quantized to 32 (current 32)
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:25:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:25:05 compute-0 ceph-mon[74313]: pgmap v2503: 321 pgs: 321 active+clean; 518 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 3.2 MiB/s wr, 74 op/s
Oct 11 09:25:06 compute-0 nova_compute[260935]: 2025-10-11 09:25:06.280 2 DEBUG nova.network.neutron [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Successfully updated port: 119ff9b5-e3af-4688-97a4-92b5f6a240dc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:25:06 compute-0 nova_compute[260935]: 2025-10-11 09:25:06.669 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:25:06 compute-0 nova_compute[260935]: 2025-10-11 09:25:06.669 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:25:06 compute-0 nova_compute[260935]: 2025-10-11 09:25:06.670 2 DEBUG nova.network.neutron [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:25:06 compute-0 nova_compute[260935]: 2025-10-11 09:25:06.833 2 DEBUG nova.compute.manager [req-f59ddb1c-6142-42d2-b11a-b1ad2ac739b8 req-e29d0e37-95d8-4ea5-8dcf-21feeaafc175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-changed-119ff9b5-e3af-4688-97a4-92b5f6a240dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:06 compute-0 nova_compute[260935]: 2025-10-11 09:25:06.834 2 DEBUG nova.compute.manager [req-f59ddb1c-6142-42d2-b11a-b1ad2ac739b8 req-e29d0e37-95d8-4ea5-8dcf-21feeaafc175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Refreshing instance network info cache due to event network-changed-119ff9b5-e3af-4688-97a4-92b5f6a240dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:25:06 compute-0 nova_compute[260935]: 2025-10-11 09:25:06.835 2 DEBUG oslo_concurrency.lockutils [req-f59ddb1c-6142-42d2-b11a-b1ad2ac739b8 req-e29d0e37-95d8-4ea5-8dcf-21feeaafc175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:25:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 321 active+clean; 518 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 3.2 MiB/s wr, 74 op/s
Oct 11 09:25:07 compute-0 nova_compute[260935]: 2025-10-11 09:25:07.268 2 DEBUG nova.network.neutron [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:25:07 compute-0 nova_compute[260935]: 2025-10-11 09:25:07.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:07 compute-0 ceph-mon[74313]: pgmap v2504: 321 pgs: 321 active+clean; 518 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 3.2 MiB/s wr, 74 op/s
Oct 11 09:25:08 compute-0 nova_compute[260935]: 2025-10-11 09:25:08.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:08 compute-0 nova_compute[260935]: 2025-10-11 09:25:08.712 2 DEBUG nova.network.neutron [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Updating instance_info_cache with network_info: [{"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:25:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 3.7 MiB/s wr, 86 op/s
Oct 11 09:25:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.440 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.440 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Instance network_info: |[{"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.440 2 DEBUG oslo_concurrency.lockutils [req-f59ddb1c-6142-42d2-b11a-b1ad2ac739b8 req-e29d0e37-95d8-4ea5-8dcf-21feeaafc175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.441 2 DEBUG nova.network.neutron [req-f59ddb1c-6142-42d2-b11a-b1ad2ac739b8 req-e29d0e37-95d8-4ea5-8dcf-21feeaafc175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Refreshing network info cache for port 119ff9b5-e3af-4688-97a4-92b5f6a240dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.444 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Start _get_guest_xml network_info=[{"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.450 2 WARNING nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.462 2 DEBUG nova.virt.libvirt.host [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.462 2 DEBUG nova.virt.libvirt.host [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.469 2 DEBUG nova.virt.libvirt.host [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.469 2 DEBUG nova.virt.libvirt.host [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.470 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.470 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.471 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.471 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.471 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.472 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.472 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.473 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.473 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.473 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.473 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.474 2 DEBUG nova.virt.hardware [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.477 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:25:09 compute-0 sshd-session[397889]: Invalid user activemq from 152.32.213.170 port 43802
Oct 11 09:25:09 compute-0 sshd-session[397889]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:25:09 compute-0 sshd-session[397889]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:25:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:25:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1106144960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:25:09 compute-0 nova_compute[260935]: 2025-10-11 09:25:09.975 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:25:09 compute-0 ceph-mon[74313]: pgmap v2505: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 3.7 MiB/s wr, 86 op/s
Oct 11 09:25:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1106144960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.020 2 DEBUG nova.storage.rbd_utils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.028 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:25:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:25:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3703661375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.686 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.690 2 DEBUG nova.virt.libvirt.vif [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:24:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1522106396',display_name='tempest-TestNetworkBasicOps-server-1522106396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1522106396',id=126,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPJuvLde49Grl0xrW9XqlZWm2iapzj/ptvpeM0vUXkL9WjkO1Vts1ZqPNK0xa0RFKzwonFoVCMufaWV362ApeqPtUvwTGOGY43qYDgAKGh07RNbr2NQMmIQQ5Njnnyha+Q==',key_name='tempest-TestNetworkBasicOps-2127179421',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-4ic01z0m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:25:01Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=e4670193-9ea3-45bc-9dbd-14d2e62f32f1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.691 2 DEBUG nova.network.os_vif_util [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.693 2 DEBUG nova.network.os_vif_util [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:f4:1e,bridge_name='br-int',has_traffic_filtering=True,id=119ff9b5-e3af-4688-97a4-92b5f6a240dc,network=Network(342daca7-3c5d-4bb3-bcdc-6abce7b4d414),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap119ff9b5-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.695 2 DEBUG nova.objects.instance [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid e4670193-9ea3-45bc-9dbd-14d2e62f32f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.747 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:25:10 compute-0 nova_compute[260935]:   <uuid>e4670193-9ea3-45bc-9dbd-14d2e62f32f1</uuid>
Oct 11 09:25:10 compute-0 nova_compute[260935]:   <name>instance-0000007e</name>
Oct 11 09:25:10 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:25:10 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:25:10 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkBasicOps-server-1522106396</nova:name>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:25:09</nova:creationTime>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:25:10 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:25:10 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:25:10 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:25:10 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:25:10 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:25:10 compute-0 nova_compute[260935]:         <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:25:10 compute-0 nova_compute[260935]:         <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:25:10 compute-0 nova_compute[260935]:         <nova:port uuid="119ff9b5-e3af-4688-97a4-92b5f6a240dc">
Oct 11 09:25:10 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:25:10 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:25:10 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <system>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <entry name="serial">e4670193-9ea3-45bc-9dbd-14d2e62f32f1</entry>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <entry name="uuid">e4670193-9ea3-45bc-9dbd-14d2e62f32f1</entry>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     </system>
Oct 11 09:25:10 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:25:10 compute-0 nova_compute[260935]:   <os>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:   </os>
Oct 11 09:25:10 compute-0 nova_compute[260935]:   <features>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:   </features>
Oct 11 09:25:10 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:25:10 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:25:10 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk">
Oct 11 09:25:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       </source>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:25:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk.config">
Oct 11 09:25:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       </source>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:25:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:54:f4:1e"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <target dev="tap119ff9b5-e3"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1/console.log" append="off"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <video>
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     </video>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:25:10 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:25:10 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:25:10 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:25:10 compute-0 nova_compute[260935]: </domain>
Oct 11 09:25:10 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.749 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Preparing to wait for external event network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.750 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.751 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.751 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.752 2 DEBUG nova.virt.libvirt.vif [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:24:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1522106396',display_name='tempest-TestNetworkBasicOps-server-1522106396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1522106396',id=126,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPJuvLde49Grl0xrW9XqlZWm2iapzj/ptvpeM0vUXkL9WjkO1Vts1ZqPNK0xa0RFKzwonFoVCMufaWV362ApeqPtUvwTGOGY43qYDgAKGh07RNbr2NQMmIQQ5Njnnyha+Q==',key_name='tempest-TestNetworkBasicOps-2127179421',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-4ic01z0m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:25:01Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=e4670193-9ea3-45bc-9dbd-14d2e62f32f1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.753 2 DEBUG nova.network.os_vif_util [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.754 2 DEBUG nova.network.os_vif_util [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:f4:1e,bridge_name='br-int',has_traffic_filtering=True,id=119ff9b5-e3af-4688-97a4-92b5f6a240dc,network=Network(342daca7-3c5d-4bb3-bcdc-6abce7b4d414),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap119ff9b5-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.755 2 DEBUG os_vif [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:f4:1e,bridge_name='br-int',has_traffic_filtering=True,id=119ff9b5-e3af-4688-97a4-92b5f6a240dc,network=Network(342daca7-3c5d-4bb3-bcdc-6abce7b4d414),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap119ff9b5-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.757 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.757 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.763 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap119ff9b5-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.764 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap119ff9b5-e3, col_values=(('external_ids', {'iface-id': '119ff9b5-e3af-4688-97a4-92b5f6a240dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:f4:1e', 'vm-uuid': 'e4670193-9ea3-45bc-9dbd-14d2e62f32f1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:10 compute-0 NetworkManager[44960]: <info>  [1760174710.7680] manager: (tap119ff9b5-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/534)
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.778 2 INFO os_vif [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:f4:1e,bridge_name='br-int',has_traffic_filtering=True,id=119ff9b5-e3af-4688-97a4-92b5f6a240dc,network=Network(342daca7-3c5d-4bb3-bcdc-6abce7b4d414),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap119ff9b5-e3')
Oct 11 09:25:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2506: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.941 2 DEBUG nova.compute.manager [req-42f83aae-c5de-45c8-acae-8d4aa041b73c req-8855b0e1-fa39-4ea0-82a9-2aec76bd5a94 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-changed-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.942 2 DEBUG nova.compute.manager [req-42f83aae-c5de-45c8-acae-8d4aa041b73c req-8855b0e1-fa39-4ea0-82a9-2aec76bd5a94 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Refreshing instance network info cache due to event network-changed-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.942 2 DEBUG oslo_concurrency.lockutils [req-42f83aae-c5de-45c8-acae-8d4aa041b73c req-8855b0e1-fa39-4ea0-82a9-2aec76bd5a94 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.943 2 DEBUG oslo_concurrency.lockutils [req-42f83aae-c5de-45c8-acae-8d4aa041b73c req-8855b0e1-fa39-4ea0-82a9-2aec76bd5a94 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:25:10 compute-0 nova_compute[260935]: 2025-10-11 09:25:10.943 2 DEBUG nova.network.neutron [req-42f83aae-c5de-45c8-acae-8d4aa041b73c req-8855b0e1-fa39-4ea0-82a9-2aec76bd5a94 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Refreshing network info cache for port c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:25:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3703661375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.092 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.092 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.092 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:54:f4:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.093 2 INFO nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Using config drive
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.128 2 DEBUG nova.storage.rbd_utils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.173 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.174 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.175 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.175 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.175 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.177 2 INFO nova.compute.manager [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Terminating instance
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.179 2 DEBUG nova.compute.manager [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:25:11 compute-0 kernel: tapc9d0cfb2-3f (unregistering): left promiscuous mode
Oct 11 09:25:11 compute-0 NetworkManager[44960]: <info>  [1760174711.2500] device (tapc9d0cfb2-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:25:11 compute-0 ovn_controller[152945]: 2025-10-11T09:25:11Z|01358|binding|INFO|Releasing lport c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 from this chassis (sb_readonly=0)
Oct 11 09:25:11 compute-0 ovn_controller[152945]: 2025-10-11T09:25:11Z|01359|binding|INFO|Setting lport c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 down in Southbound
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:11 compute-0 ovn_controller[152945]: 2025-10-11T09:25:11Z|01360|binding|INFO|Removing iface tapc9d0cfb2-3f ovn-installed in OVS
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.294 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:1a:f4 10.100.0.11'], port_security=['fa:16:3e:c1:1a:f4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ece1112a-294b-461f-8b8f-c6a0bb212647', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0ad377f6-44f5-45fc-95cb-2f677a42f5a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=039eefbf-a236-4641-9b4d-7b7f1014a5e2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.295 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 in datapath cdd3c547-e0c3-4649-8427-08ce8e1c52d4 unbound from our chassis
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.299 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cdd3c547-e0c3-4649-8427-08ce8e1c52d4
Oct 11 09:25:11 compute-0 kernel: tap336ce2f8-5f (unregistering): left promiscuous mode
Oct 11 09:25:11 compute-0 NetworkManager[44960]: <info>  [1760174711.3107] device (tap336ce2f8-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.330 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d5395388-6971-4554-8e88-11743e77a99a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:11 compute-0 ovn_controller[152945]: 2025-10-11T09:25:11Z|01361|binding|INFO|Releasing lport 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 from this chassis (sb_readonly=0)
Oct 11 09:25:11 compute-0 ovn_controller[152945]: 2025-10-11T09:25:11Z|01362|binding|INFO|Setting lport 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 down in Southbound
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:11 compute-0 ovn_controller[152945]: 2025-10-11T09:25:11Z|01363|binding|INFO|Removing iface tap336ce2f8-5f ovn-installed in OVS
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.401 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8155c85e-9887-404c-88f3-cb444697a2ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.406 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb89acc-4345-49d6-957e-0cc1958a353b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:11 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Oct 11 09:25:11 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d0000007d.scope: Consumed 13.986s CPU time.
Oct 11 09:25:11 compute-0 systemd-machined[215705]: Machine qemu-149-instance-0000007d terminated.
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.439 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:3e:95 2001:db8:0:1:f816:3eff:fe5b:3e95 2001:db8::f816:3eff:fe5b:3e95'], port_security=['fa:16:3e:5b:3e:95 2001:db8:0:1:f816:3eff:fe5b:3e95 2001:db8::f816:3eff:fe5b:3e95'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe5b:3e95/64 2001:db8::f816:3eff:fe5b:3e95/64', 'neutron:device_id': 'ece1112a-294b-461f-8b8f-c6a0bb212647', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0ad377f6-44f5-45fc-95cb-2f677a42f5a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75b51dc9-c211-445f-9e3e-d219efddef19, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.451 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8ffaec-7801-40eb-b649-7aab2eef317f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.477 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9a701fd4-3cab-43f6-b556-cbc6b465301d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdd3c547-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:b8:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 362], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647689, 'reachable_time': 38844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397991, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.501 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a4cd6e-54a8-434f-911a-fb61a8ce8008]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcdd3c547-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647704, 'tstamp': 647704}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397992, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcdd3c547-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647708, 'tstamp': 647708}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397992, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.503 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdd3c547-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.525 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcdd3c547-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.526 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.526 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcdd3c547-e0, col_values=(('external_ids', {'iface-id': 'cd117be9-e2e2-4d92-9c01-a0f37b7175b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.527 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.528 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 in datapath f0bc2c62-89ab-4ce1-9157-2273788b9018 unbound from our chassis
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.530 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0bc2c62-89ab-4ce1-9157-2273788b9018
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.558 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2d451e41-d34d-40cf-8196-2e4118a46815]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:11 compute-0 sshd-session[397889]: Failed password for invalid user activemq from 152.32.213.170 port 43802 ssh2
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.610 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e386137f-4881-4a20-b7a3-22a600a03293]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.616 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0b12e3c9-36fd-4167-828b-03ec101d64dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.644 2 INFO nova.virt.libvirt.driver [-] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Instance destroyed successfully.
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.645 2 DEBUG nova.objects.instance [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid ece1112a-294b-461f-8b8f-c6a0bb212647 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.667 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[204fa331-4a17-4514-b1b2-c6689949b21f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.684 2 INFO nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Creating config drive at /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1/disk.config
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.694 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3rg0a14t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.700 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8347092d-bf86-45a8-933e-b7ec693532ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0bc2c62-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:23:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647796, 'reachable_time': 34498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398023, 'error': None, 'target': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.727 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fc86a639-764c-4121-a124-6fe87ff12a0d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf0bc2c62-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647812, 'tstamp': 647812}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398024, 'error': None, 'target': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.729 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0bc2c62-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.743 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0bc2c62-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.744 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.745 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0bc2c62-80, col_values=(('external_ids', {'iface-id': 'c1fabca0-ae77-4e48-b93b-3023955db235'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:11.746 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.859 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3rg0a14t" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.889 2 DEBUG nova.storage.rbd_utils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.894 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1/disk.config e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.952 2 DEBUG nova.virt.libvirt.vif [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1996538223',display_name='tempest-TestGettingAddress-server-1996538223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1996538223',id=125,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:24:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-oa0yn1o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:24:42Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ece1112a-294b-461f-8b8f-c6a0bb212647,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.953 2 DEBUG nova.network.os_vif_util [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.955 2 DEBUG nova.network.os_vif_util [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:1a:f4,bridge_name='br-int',has_traffic_filtering=True,id=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9d0cfb2-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.955 2 DEBUG os_vif [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:1a:f4,bridge_name='br-int',has_traffic_filtering=True,id=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9d0cfb2-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.959 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9d0cfb2-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.976 2 INFO os_vif [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:1a:f4,bridge_name='br-int',has_traffic_filtering=True,id=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9d0cfb2-3f')
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.978 2 DEBUG nova.virt.libvirt.vif [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:24:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1996538223',display_name='tempest-TestGettingAddress-server-1996538223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1996538223',id=125,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:24:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-oa0yn1o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:24:42Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ece1112a-294b-461f-8b8f-c6a0bb212647,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.979 2 DEBUG nova.network.os_vif_util [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.981 2 DEBUG nova.network.os_vif_util [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:3e:95,bridge_name='br-int',has_traffic_filtering=True,id=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap336ce2f8-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.982 2 DEBUG os_vif [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:3e:95,bridge_name='br-int',has_traffic_filtering=True,id=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap336ce2f8-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.985 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap336ce2f8-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:11 compute-0 nova_compute[260935]: 2025-10-11 09:25:11.995 2 INFO os_vif [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:3e:95,bridge_name='br-int',has_traffic_filtering=True,id=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap336ce2f8-5f')
Oct 11 09:25:12 compute-0 ceph-mon[74313]: pgmap v2506: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.097 2 DEBUG oslo_concurrency.processutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1/disk.config e4670193-9ea3-45bc-9dbd-14d2e62f32f1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.098 2 INFO nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Deleting local config drive /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1/disk.config because it was imported into RBD.
Oct 11 09:25:12 compute-0 kernel: tap119ff9b5-e3: entered promiscuous mode
Oct 11 09:25:12 compute-0 NetworkManager[44960]: <info>  [1760174712.1930] manager: (tap119ff9b5-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/535)
Oct 11 09:25:12 compute-0 ovn_controller[152945]: 2025-10-11T09:25:12Z|01364|binding|INFO|Claiming lport 119ff9b5-e3af-4688-97a4-92b5f6a240dc for this chassis.
Oct 11 09:25:12 compute-0 ovn_controller[152945]: 2025-10-11T09:25:12Z|01365|binding|INFO|119ff9b5-e3af-4688-97a4-92b5f6a240dc: Claiming fa:16:3e:54:f4:1e 10.100.0.9
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:12 compute-0 systemd-udevd[397977]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:25:12 compute-0 NetworkManager[44960]: <info>  [1760174712.2254] device (tap119ff9b5-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:25:12 compute-0 NetworkManager[44960]: <info>  [1760174712.2271] device (tap119ff9b5-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:25:12 compute-0 ovn_controller[152945]: 2025-10-11T09:25:12Z|01366|binding|INFO|Setting lport 119ff9b5-e3af-4688-97a4-92b5f6a240dc ovn-installed in OVS
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:12 compute-0 systemd-machined[215705]: New machine qemu-150-instance-0000007e.
Oct 11 09:25:12 compute-0 ovn_controller[152945]: 2025-10-11T09:25:12Z|01367|binding|INFO|Setting lport 119ff9b5-e3af-4688-97a4-92b5f6a240dc up in Southbound
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.251 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:f4:1e 10.100.0.9'], port_security=['fa:16:3e:54:f4:1e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e4670193-9ea3-45bc-9dbd-14d2e62f32f1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-342daca7-3c5d-4bb3-bcdc-6abce7b4d414', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c1e4fc68-c314-4fe3-a3ef-a4986a78eba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b157abac-1a65-40f2-b3c7-9c7765cf8246, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=119ff9b5-e3af-4688-97a4-92b5f6a240dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.252 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 119ff9b5-e3af-4688-97a4-92b5f6a240dc in datapath 342daca7-3c5d-4bb3-bcdc-6abce7b4d414 bound to our chassis
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.255 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 342daca7-3c5d-4bb3-bcdc-6abce7b4d414
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.257 2 DEBUG nova.network.neutron [req-f59ddb1c-6142-42d2-b11a-b1ad2ac739b8 req-e29d0e37-95d8-4ea5-8dcf-21feeaafc175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Updated VIF entry in instance network info cache for port 119ff9b5-e3af-4688-97a4-92b5f6a240dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.258 2 DEBUG nova.network.neutron [req-f59ddb1c-6142-42d2-b11a-b1ad2ac739b8 req-e29d0e37-95d8-4ea5-8dcf-21feeaafc175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Updating instance_info_cache with network_info: [{"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:25:12 compute-0 systemd[1]: Started Virtual Machine qemu-150-instance-0000007e.
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.268 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba98029-9520-4486-bd1e-add87c2f4e5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.269 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap342daca7-31 in ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.271 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap342daca7-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.271 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[663c302c-31ae-4b3a-9072-8e988fa5698e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.272 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3719c1-e895-4d17-8fb1-707ca8fe42ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.289 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[41a26af8-e7b4-4778-964b-9cd7dfb10ec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.304 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d75560-0021-4b2d-9cf5-ddfcf06c5675]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.323 2 DEBUG oslo_concurrency.lockutils [req-f59ddb1c-6142-42d2-b11a-b1ad2ac739b8 req-e29d0e37-95d8-4ea5-8dcf-21feeaafc175 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.352 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f7762623-21f0-488d-929a-4071496b752c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:12 compute-0 NetworkManager[44960]: <info>  [1760174712.3608] manager: (tap342daca7-30): new Veth device (/org/freedesktop/NetworkManager/Devices/536)
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.360 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[15f013ec-6700-413e-b6ae-6cf678965b9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.412 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f1a4b9f1-4958-4777-a707-aca3e5c12bbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.417 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[87fe7c6f-eba3-478f-8f46-0d081dc02c9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:12 compute-0 NetworkManager[44960]: <info>  [1760174712.4566] device (tap342daca7-30): carrier: link connected
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.465 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab1e2b3-d341-4b47-8bff-05b3abe83eff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.502 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ad7dbd-44c2-4dee-94e9-b3a46fe93d21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap342daca7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:bf:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655715, 'reachable_time': 36185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398133, 'error': None, 'target': 'ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.526 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[412ae439-6e21-46e7-8acb-5af170d97efb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:bf8c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655715, 'tstamp': 655715}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398134, 'error': None, 'target': 'ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.541 2 INFO nova.virt.libvirt.driver [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Deleting instance files /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647_del
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.542 2 INFO nova.virt.libvirt.driver [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Deletion of /var/lib/nova/instances/ece1112a-294b-461f-8b8f-c6a0bb212647_del complete
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.552 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a51ea093-30eb-4bff-a439-68ed392b5e2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap342daca7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:bf:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655715, 'reachable_time': 36185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398135, 'error': None, 'target': 'ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.594 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7f5c7f-cf60-45ec-8836-40476ae9055a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.667 2 INFO nova.compute.manager [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Took 1.49 seconds to destroy the instance on the hypervisor.
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.667 2 DEBUG oslo.service.loopingcall [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.668 2 DEBUG nova.compute.manager [-] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.671 2 DEBUG nova.network.neutron [-] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.680 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cfef72fa-3f5b-4fd9-b396-02280c480ed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.683 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap342daca7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.684 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.684 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap342daca7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:12 compute-0 NetworkManager[44960]: <info>  [1760174712.6875] manager: (tap342daca7-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/537)
Oct 11 09:25:12 compute-0 kernel: tap342daca7-30: entered promiscuous mode
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.693 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap342daca7-30, col_values=(('external_ids', {'iface-id': 'ac8c2194-1d46-4375-b091-3b31589411f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:12 compute-0 ovn_controller[152945]: 2025-10-11T09:25:12Z|01368|binding|INFO|Releasing lport ac8c2194-1d46-4375-b091-3b31589411f5 from this chassis (sb_readonly=0)
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.726 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/342daca7-3c5d-4bb3-bcdc-6abce7b4d414.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/342daca7-3c5d-4bb3-bcdc-6abce7b4d414.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.727 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b216997d-8479-4d30-9e71-2a549ae274a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.728 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-342daca7-3c5d-4bb3-bcdc-6abce7b4d414
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/342daca7-3c5d-4bb3-bcdc-6abce7b4d414.pid.haproxy
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 342daca7-3c5d-4bb3-bcdc-6abce7b4d414
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:25:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:12.729 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414', 'env', 'PROCESS_TAG=haproxy-342daca7-3c5d-4bb3-bcdc-6abce7b4d414', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/342daca7-3c5d-4bb3-bcdc-6abce7b4d414.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.798 2 DEBUG nova.compute.manager [req-647b0d45-842f-446a-aae0-4709d5fdfd44 req-ca72c4c0-dd38-4b45-86eb-a16d0f94f67d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.799 2 DEBUG oslo_concurrency.lockutils [req-647b0d45-842f-446a-aae0-4709d5fdfd44 req-ca72c4c0-dd38-4b45-86eb-a16d0f94f67d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.800 2 DEBUG oslo_concurrency.lockutils [req-647b0d45-842f-446a-aae0-4709d5fdfd44 req-ca72c4c0-dd38-4b45-86eb-a16d0f94f67d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.800 2 DEBUG oslo_concurrency.lockutils [req-647b0d45-842f-446a-aae0-4709d5fdfd44 req-ca72c4c0-dd38-4b45-86eb-a16d0f94f67d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:12 compute-0 nova_compute[260935]: 2025-10-11 09:25:12.801 2 DEBUG nova.compute.manager [req-647b0d45-842f-446a-aae0-4709d5fdfd44 req-ca72c4c0-dd38-4b45-86eb-a16d0f94f67d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Processing event network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:25:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 11 09:25:12 compute-0 sshd-session[397889]: Received disconnect from 152.32.213.170 port 43802:11: Bye Bye [preauth]
Oct 11 09:25:13 compute-0 sshd-session[397889]: Disconnected from invalid user activemq 152.32.213.170 port 43802 [preauth]
Oct 11 09:25:13 compute-0 podman[398210]: 2025-10-11 09:25:13.213727325 +0000 UTC m=+0.067399203 container create 7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 09:25:13 compute-0 nova_compute[260935]: 2025-10-11 09:25:13.263 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174713.26251, e4670193-9ea3-45bc-9dbd-14d2e62f32f1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:25:13 compute-0 nova_compute[260935]: 2025-10-11 09:25:13.263 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] VM Started (Lifecycle Event)
Oct 11 09:25:13 compute-0 nova_compute[260935]: 2025-10-11 09:25:13.267 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:25:13 compute-0 nova_compute[260935]: 2025-10-11 09:25:13.276 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:25:13 compute-0 podman[398210]: 2025-10-11 09:25:13.183864082 +0000 UTC m=+0.037536030 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:25:13 compute-0 nova_compute[260935]: 2025-10-11 09:25:13.281 2 INFO nova.virt.libvirt.driver [-] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Instance spawned successfully.
Oct 11 09:25:13 compute-0 nova_compute[260935]: 2025-10-11 09:25:13.281 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:25:13 compute-0 systemd[1]: Started libpod-conmon-7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3.scope.
Oct 11 09:25:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72cac3b459deeeef18dd1fe465a7ead282b0060817d48bf95435e7d2a56eb042/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:25:13 compute-0 podman[398210]: 2025-10-11 09:25:13.358335616 +0000 UTC m=+0.212007574 container init 7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 09:25:13 compute-0 podman[398223]: 2025-10-11 09:25:13.358437288 +0000 UTC m=+0.091245736 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:25:13 compute-0 podman[398210]: 2025-10-11 09:25:13.369674906 +0000 UTC m=+0.223346804 container start 7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 11 09:25:13 compute-0 neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414[398234]: [NOTICE]   (398247) : New worker (398249) forked
Oct 11 09:25:13 compute-0 neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414[398234]: [NOTICE]   (398247) : Loading success.
Oct 11 09:25:14 compute-0 ceph-mon[74313]: pgmap v2507: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 11 09:25:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:25:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2508: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 628 KiB/s wr, 38 op/s
Oct 11 09:25:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:15.221 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:15.222 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:15.223 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:16 compute-0 ceph-mon[74313]: pgmap v2508: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 628 KiB/s wr, 38 op/s
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.377 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-unplugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.377 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.378 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.379 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.379 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] No waiting events found dispatching network-vif-unplugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.380 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-unplugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.381 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.381 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.382 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.382 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.383 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] No waiting events found dispatching network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.383 2 WARNING nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received unexpected event network-vif-plugged-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 for instance with vm_state active and task_state deleting.
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.384 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-unplugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.384 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.385 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.385 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.386 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] No waiting events found dispatching network-vif-unplugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.386 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-unplugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.387 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.387 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.388 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.388 2 DEBUG oslo_concurrency.lockutils [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.389 2 DEBUG nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] No waiting events found dispatching network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.390 2 WARNING nova.compute.manager [req-38f7775a-eb65-49fe-b98f-761f0d9e095a req-e9e19990-e851-4057-b445-587ce59a2f4e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received unexpected event network-vif-plugged-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 for instance with vm_state active and task_state deleting.
Oct 11 09:25:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2509: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 628 KiB/s wr, 38 op/s
Oct 11 09:25:16 compute-0 nova_compute[260935]: 2025-10-11 09:25:16.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:17 compute-0 nova_compute[260935]: 2025-10-11 09:25:17.016 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:25:17 compute-0 nova_compute[260935]: 2025-10-11 09:25:17.022 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:25:17 compute-0 nova_compute[260935]: 2025-10-11 09:25:17.023 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:25:17 compute-0 nova_compute[260935]: 2025-10-11 09:25:17.023 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:25:17 compute-0 nova_compute[260935]: 2025-10-11 09:25:17.024 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:25:17 compute-0 nova_compute[260935]: 2025-10-11 09:25:17.024 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:25:17 compute-0 nova_compute[260935]: 2025-10-11 09:25:17.025 2 DEBUG nova.virt.libvirt.driver [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:25:17 compute-0 nova_compute[260935]: 2025-10-11 09:25:17.030 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:25:17 compute-0 ceph-mon[74313]: pgmap v2509: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 628 KiB/s wr, 38 op/s
Oct 11 09:25:17 compute-0 nova_compute[260935]: 2025-10-11 09:25:17.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:17 compute-0 nova_compute[260935]: 2025-10-11 09:25:17.745 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:25:17 compute-0 nova_compute[260935]: 2025-10-11 09:25:17.745 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174713.2627838, e4670193-9ea3-45bc-9dbd-14d2e62f32f1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:25:17 compute-0 nova_compute[260935]: 2025-10-11 09:25:17.746 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] VM Paused (Lifecycle Event)
Oct 11 09:25:17 compute-0 podman[398258]: 2025-10-11 09:25:17.838989693 +0000 UTC m=+0.094545710 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3)
Oct 11 09:25:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 633 KiB/s wr, 116 op/s
Oct 11 09:25:18 compute-0 nova_compute[260935]: 2025-10-11 09:25:18.875 2 INFO nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Took 16.97 seconds to spawn the instance on the hypervisor.
Oct 11 09:25:18 compute-0 nova_compute[260935]: 2025-10-11 09:25:18.875 2 DEBUG nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:25:18 compute-0 nova_compute[260935]: 2025-10-11 09:25:18.933 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:25:18 compute-0 nova_compute[260935]: 2025-10-11 09:25:18.936 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174713.2746034, e4670193-9ea3-45bc-9dbd-14d2e62f32f1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:25:18 compute-0 nova_compute[260935]: 2025-10-11 09:25:18.937 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] VM Resumed (Lifecycle Event)
Oct 11 09:25:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:25:19 compute-0 nova_compute[260935]: 2025-10-11 09:25:19.581 2 DEBUG nova.compute.manager [req-bca77e6b-39d8-4f24-8f48-549ac482a1d4 req-1eb946f5-c7d5-42c8-84ad-5af8a94b82c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:19 compute-0 nova_compute[260935]: 2025-10-11 09:25:19.582 2 DEBUG oslo_concurrency.lockutils [req-bca77e6b-39d8-4f24-8f48-549ac482a1d4 req-1eb946f5-c7d5-42c8-84ad-5af8a94b82c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:19 compute-0 nova_compute[260935]: 2025-10-11 09:25:19.582 2 DEBUG oslo_concurrency.lockutils [req-bca77e6b-39d8-4f24-8f48-549ac482a1d4 req-1eb946f5-c7d5-42c8-84ad-5af8a94b82c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:19 compute-0 nova_compute[260935]: 2025-10-11 09:25:19.582 2 DEBUG oslo_concurrency.lockutils [req-bca77e6b-39d8-4f24-8f48-549ac482a1d4 req-1eb946f5-c7d5-42c8-84ad-5af8a94b82c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:19 compute-0 nova_compute[260935]: 2025-10-11 09:25:19.583 2 DEBUG nova.compute.manager [req-bca77e6b-39d8-4f24-8f48-549ac482a1d4 req-1eb946f5-c7d5-42c8-84ad-5af8a94b82c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] No waiting events found dispatching network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:25:19 compute-0 nova_compute[260935]: 2025-10-11 09:25:19.583 2 WARNING nova.compute.manager [req-bca77e6b-39d8-4f24-8f48-549ac482a1d4 req-1eb946f5-c7d5-42c8-84ad-5af8a94b82c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received unexpected event network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc for instance with vm_state building and task_state spawning.
Oct 11 09:25:19 compute-0 nova_compute[260935]: 2025-10-11 09:25:19.595 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:25:19 compute-0 nova_compute[260935]: 2025-10-11 09:25:19.600 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:25:19 compute-0 nova_compute[260935]: 2025-10-11 09:25:19.674 2 DEBUG nova.network.neutron [req-42f83aae-c5de-45c8-acae-8d4aa041b73c req-8855b0e1-fa39-4ea0-82a9-2aec76bd5a94 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updated VIF entry in instance network info cache for port c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:25:19 compute-0 nova_compute[260935]: 2025-10-11 09:25:19.675 2 DEBUG nova.network.neutron [req-42f83aae-c5de-45c8-acae-8d4aa041b73c req-8855b0e1-fa39-4ea0-82a9-2aec76bd5a94 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updating instance_info_cache with network_info: [{"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "address": "fa:16:3e:5b:3e:95", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:3e95", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap336ce2f8-5f", "ovs_interfaceid": "336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:25:19 compute-0 nova_compute[260935]: 2025-10-11 09:25:19.757 2 INFO nova.compute.manager [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Took 19.06 seconds to build instance.
Oct 11 09:25:19 compute-0 ceph-mon[74313]: pgmap v2510: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 633 KiB/s wr, 116 op/s
Oct 11 09:25:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 104 op/s
Oct 11 09:25:21 compute-0 nova_compute[260935]: 2025-10-11 09:25:21.196 2 DEBUG oslo_concurrency.lockutils [req-42f83aae-c5de-45c8-acae-8d4aa041b73c req-8855b0e1-fa39-4ea0-82a9-2aec76bd5a94 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ece1112a-294b-461f-8b8f-c6a0bb212647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:25:21 compute-0 nova_compute[260935]: 2025-10-11 09:25:21.308 2 DEBUG oslo_concurrency.lockutils [None req-92196fed-77a5-4494-a151-d2aec69d63ea dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:21 compute-0 nova_compute[260935]: 2025-10-11 09:25:21.735 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:25:21 compute-0 ceph-mon[74313]: pgmap v2511: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 104 op/s
Oct 11 09:25:21 compute-0 nova_compute[260935]: 2025-10-11 09:25:21.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:22 compute-0 sshd-session[397627]: Connection closed by authenticating user root 5.44.252.30 port 50460 [preauth]
Oct 11 09:25:22 compute-0 nova_compute[260935]: 2025-10-11 09:25:22.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:22 compute-0 podman[398281]: 2025-10-11 09:25:22.822886001 +0000 UTC m=+0.119537144 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 09:25:22 compute-0 podman[398282]: 2025-10-11 09:25:22.866102421 +0000 UTC m=+0.157742613 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 09:25:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 104 op/s
Oct 11 09:25:23 compute-0 ceph-mon[74313]: pgmap v2512: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 104 op/s
Oct 11 09:25:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:25:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:25:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:25:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:25:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:25:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:25:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:25:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2513: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.0 KiB/s wr, 77 op/s
Oct 11 09:25:24 compute-0 ovn_controller[152945]: 2025-10-11T09:25:24Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:f4:1e 10.100.0.9
Oct 11 09:25:24 compute-0 ovn_controller[152945]: 2025-10-11T09:25:24Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:f4:1e 10.100.0.9
Oct 11 09:25:25 compute-0 nova_compute[260935]: 2025-10-11 09:25:25.211 2 DEBUG nova.compute.manager [req-8fd9544a-06a9-4fa6-99b6-161592353f9f req-34ce2620-3af4-420f-9d8f-aae15dd8ea0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-deleted-336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:25 compute-0 nova_compute[260935]: 2025-10-11 09:25:25.212 2 INFO nova.compute.manager [req-8fd9544a-06a9-4fa6-99b6-161592353f9f req-34ce2620-3af4-420f-9d8f-aae15dd8ea0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Neutron deleted interface 336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0; detaching it from the instance and deleting it from the info cache
Oct 11 09:25:25 compute-0 nova_compute[260935]: 2025-10-11 09:25:25.212 2 DEBUG nova.network.neutron [req-8fd9544a-06a9-4fa6-99b6-161592353f9f req-34ce2620-3af4-420f-9d8f-aae15dd8ea0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updating instance_info_cache with network_info: [{"id": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "address": "fa:16:3e:c1:1a:f4", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9d0cfb2-3f", "ovs_interfaceid": "c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:25:25 compute-0 sudo[398326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:25:25 compute-0 sudo[398326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:25 compute-0 sudo[398326]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:25 compute-0 sudo[398351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:25:25 compute-0 sudo[398351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:25 compute-0 sudo[398351]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:25 compute-0 sudo[398376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:25:25 compute-0 sudo[398376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:25 compute-0 sudo[398376]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:25 compute-0 sudo[398401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:25:25 compute-0 sudo[398401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:25 compute-0 ceph-mon[74313]: pgmap v2513: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.0 KiB/s wr, 77 op/s
Oct 11 09:25:26 compute-0 nova_compute[260935]: 2025-10-11 09:25:26.641 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174711.639506, ece1112a-294b-461f-8b8f-c6a0bb212647 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:25:26 compute-0 nova_compute[260935]: 2025-10-11 09:25:26.642 2 INFO nova.compute.manager [-] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] VM Stopped (Lifecycle Event)
Oct 11 09:25:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:25:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1053940313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:25:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:25:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1053940313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:25:26 compute-0 sudo[398401]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 09:25:26 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 09:25:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:25:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:25:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:25:26 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:25:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:25:26 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:25:26 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9280dd1a-20cb-4f83-8eb7-12a6dfcdf04d does not exist
Oct 11 09:25:26 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 07acd9e6-9892-4bb9-95e8-e4e844d2fda6 does not exist
Oct 11 09:25:26 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 6a04d1a7-badb-43d2-9afc-66e4194a4d82 does not exist
Oct 11 09:25:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:25:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:25:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:25:26 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:25:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:25:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:25:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.0 KiB/s wr, 77 op/s
Oct 11 09:25:26 compute-0 sudo[398459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:25:26 compute-0 sudo[398459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:26 compute-0 sudo[398459]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1053940313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:25:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1053940313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:25:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 09:25:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:25:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:25:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:25:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:25:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:25:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:25:26 compute-0 sudo[398484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:25:26 compute-0 sudo[398484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:26 compute-0 nova_compute[260935]: 2025-10-11 09:25:26.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:26 compute-0 sudo[398484]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:27 compute-0 sudo[398509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:25:27 compute-0 sudo[398509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:27 compute-0 sudo[398509]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:27 compute-0 sudo[398534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:25:27 compute-0 sudo[398534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:27 compute-0 nova_compute[260935]: 2025-10-11 09:25:27.318 2 DEBUG nova.compute.manager [None req-de49268c-f16f-47ab-ad69-d5aeee2e74a0 - - - - - -] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:25:27 compute-0 nova_compute[260935]: 2025-10-11 09:25:27.353 2 DEBUG nova.compute.manager [req-8fd9544a-06a9-4fa6-99b6-161592353f9f req-34ce2620-3af4-420f-9d8f-aae15dd8ea0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Detach interface failed, port_id=336ce2f8-5f2c-4306-b2ea-d4b85ebcbff0, reason: Instance ece1112a-294b-461f-8b8f-c6a0bb212647 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 09:25:27 compute-0 podman[398599]: 2025-10-11 09:25:27.666464889 +0000 UTC m=+0.076973633 container create d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_williams, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 09:25:27 compute-0 podman[398599]: 2025-10-11 09:25:27.632506111 +0000 UTC m=+0.043014905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:25:27 compute-0 nova_compute[260935]: 2025-10-11 09:25:27.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:27 compute-0 systemd[1]: Started libpod-conmon-d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737.scope.
Oct 11 09:25:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:25:27 compute-0 podman[398599]: 2025-10-11 09:25:27.794795671 +0000 UTC m=+0.205304475 container init d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_williams, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 09:25:27 compute-0 podman[398599]: 2025-10-11 09:25:27.806318276 +0000 UTC m=+0.216827020 container start d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:25:27 compute-0 frosty_williams[398616]: 167 167
Oct 11 09:25:27 compute-0 systemd[1]: libpod-d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737.scope: Deactivated successfully.
Oct 11 09:25:27 compute-0 conmon[398616]: conmon d2f65b93e70c31c2a1d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737.scope/container/memory.events
Oct 11 09:25:27 compute-0 podman[398599]: 2025-10-11 09:25:27.816776231 +0000 UTC m=+0.227284975 container attach d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_williams, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 09:25:27 compute-0 podman[398599]: 2025-10-11 09:25:27.818034677 +0000 UTC m=+0.228543481 container died d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:25:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8df2d914a614085e1a2d2f84d81c34d29fc98a5eb02a2d2e0b3a51c5945d3c6-merged.mount: Deactivated successfully.
Oct 11 09:25:27 compute-0 podman[398599]: 2025-10-11 09:25:27.875975502 +0000 UTC m=+0.286484236 container remove d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_williams, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:25:27 compute-0 systemd[1]: libpod-conmon-d2f65b93e70c31c2a1d1b152c10579da83b1c71b4762318f428b8d29081ae737.scope: Deactivated successfully.
Oct 11 09:25:27 compute-0 ceph-mon[74313]: pgmap v2514: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.0 KiB/s wr, 77 op/s
Oct 11 09:25:28 compute-0 nova_compute[260935]: 2025-10-11 09:25:28.004 2 DEBUG nova.network.neutron [-] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:25:28 compute-0 nova_compute[260935]: 2025-10-11 09:25:28.093 2 INFO nova.compute.manager [-] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Took 15.42 seconds to deallocate network for instance.
Oct 11 09:25:28 compute-0 nova_compute[260935]: 2025-10-11 09:25:28.102 2 DEBUG nova.compute.manager [req-99adba50-68e6-4b74-b4bf-26f1da833184 req-f96d1192-3e64-48e5-bdd4-9409a1b6b9bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Received event network-vif-deleted-c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:28 compute-0 nova_compute[260935]: 2025-10-11 09:25:28.103 2 INFO nova.compute.manager [req-99adba50-68e6-4b74-b4bf-26f1da833184 req-f96d1192-3e64-48e5-bdd4-9409a1b6b9bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Neutron deleted interface c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9; detaching it from the instance and deleting it from the info cache
Oct 11 09:25:28 compute-0 nova_compute[260935]: 2025-10-11 09:25:28.103 2 DEBUG nova.network.neutron [req-99adba50-68e6-4b74-b4bf-26f1da833184 req-f96d1192-3e64-48e5-bdd4-9409a1b6b9bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:25:28 compute-0 podman[398641]: 2025-10-11 09:25:28.122296303 +0000 UTC m=+0.054547510 container create a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:25:28 compute-0 systemd[1]: Started libpod-conmon-a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389.scope.
Oct 11 09:25:28 compute-0 nova_compute[260935]: 2025-10-11 09:25:28.184 2 DEBUG nova.compute.manager [req-99adba50-68e6-4b74-b4bf-26f1da833184 req-f96d1192-3e64-48e5-bdd4-9409a1b6b9bc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ece1112a-294b-461f-8b8f-c6a0bb212647] Detach interface failed, port_id=c9d0cfb2-3f7b-4404-8f94-1d24beb7e6c9, reason: Instance ece1112a-294b-461f-8b8f-c6a0bb212647 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 09:25:28 compute-0 podman[398641]: 2025-10-11 09:25:28.102901196 +0000 UTC m=+0.035152423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:25:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef65815cbce6ac3058a74c292c393a5f6bb4e148df6f3c0630e66ce8a827e0d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef65815cbce6ac3058a74c292c393a5f6bb4e148df6f3c0630e66ce8a827e0d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef65815cbce6ac3058a74c292c393a5f6bb4e148df6f3c0630e66ce8a827e0d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef65815cbce6ac3058a74c292c393a5f6bb4e148df6f3c0630e66ce8a827e0d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef65815cbce6ac3058a74c292c393a5f6bb4e148df6f3c0630e66ce8a827e0d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:25:28 compute-0 nova_compute[260935]: 2025-10-11 09:25:28.222 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:28 compute-0 podman[398641]: 2025-10-11 09:25:28.223331374 +0000 UTC m=+0.155582621 container init a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 11 09:25:28 compute-0 nova_compute[260935]: 2025-10-11 09:25:28.223 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:28 compute-0 podman[398641]: 2025-10-11 09:25:28.238300467 +0000 UTC m=+0.170551674 container start a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:25:28 compute-0 podman[398641]: 2025-10-11 09:25:28.243684759 +0000 UTC m=+0.175936016 container attach a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elbakyan, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 09:25:28 compute-0 nova_compute[260935]: 2025-10-11 09:25:28.380 2 DEBUG oslo_concurrency.processutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:25:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:25:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1931950980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:25:28 compute-0 nova_compute[260935]: 2025-10-11 09:25:28.857 2 DEBUG oslo_concurrency.processutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:25:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Oct 11 09:25:28 compute-0 nova_compute[260935]: 2025-10-11 09:25:28.869 2 DEBUG nova.compute.provider_tree [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:25:28 compute-0 nova_compute[260935]: 2025-10-11 09:25:28.950 2 DEBUG nova.scheduler.client.report [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:25:28 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1931950980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:25:29 compute-0 nova_compute[260935]: 2025-10-11 09:25:29.017 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:29 compute-0 nova_compute[260935]: 2025-10-11 09:25:29.060 2 INFO nova.scheduler.client.report [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance ece1112a-294b-461f-8b8f-c6a0bb212647
Oct 11 09:25:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:25:29 compute-0 nova_compute[260935]: 2025-10-11 09:25:29.213 2 DEBUG oslo_concurrency.lockutils [None req-e9edc5ad-86de-4520-b1b3-5738abe426dc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ece1112a-294b-461f-8b8f-c6a0bb212647" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 18.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:29 compute-0 trusting_elbakyan[398657]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:25:29 compute-0 trusting_elbakyan[398657]: --> relative data size: 1.0
Oct 11 09:25:29 compute-0 trusting_elbakyan[398657]: --> All data devices are unavailable
Oct 11 09:25:29 compute-0 systemd[1]: libpod-a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389.scope: Deactivated successfully.
Oct 11 09:25:29 compute-0 podman[398641]: 2025-10-11 09:25:29.312199353 +0000 UTC m=+1.244450560 container died a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elbakyan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:25:29 compute-0 systemd[1]: libpod-a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389.scope: Consumed 1.015s CPU time.
Oct 11 09:25:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef65815cbce6ac3058a74c292c393a5f6bb4e148df6f3c0630e66ce8a827e0d2-merged.mount: Deactivated successfully.
Oct 11 09:25:29 compute-0 podman[398641]: 2025-10-11 09:25:29.402588134 +0000 UTC m=+1.334839351 container remove a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:25:29 compute-0 systemd[1]: libpod-conmon-a593ae937500db96e196f72cd4a031ae6e96f85e94f03ba5814ccb2ae313f389.scope: Deactivated successfully.
Oct 11 09:25:29 compute-0 sudo[398534]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:29 compute-0 sudo[398721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:25:29 compute-0 sudo[398721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:29 compute-0 sudo[398721]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:29 compute-0 sudo[398746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:25:29 compute-0 sudo[398746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:29 compute-0 sudo[398746]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:29 compute-0 sudo[398771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:25:29 compute-0 sudo[398771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:29 compute-0 sudo[398771]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:29 compute-0 sudo[398796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:25:29 compute-0 sudo[398796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:29 compute-0 ceph-mon[74313]: pgmap v2515: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Oct 11 09:25:30 compute-0 podman[398864]: 2025-10-11 09:25:30.278590226 +0000 UTC m=+0.067584749 container create 642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 09:25:30 compute-0 podman[398864]: 2025-10-11 09:25:30.24934181 +0000 UTC m=+0.038336383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:25:30 compute-0 systemd[1]: Started libpod-conmon-642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5.scope.
Oct 11 09:25:30 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:25:30 compute-0 podman[398864]: 2025-10-11 09:25:30.409392817 +0000 UTC m=+0.198387330 container init 642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:25:30 compute-0 podman[398864]: 2025-10-11 09:25:30.416184719 +0000 UTC m=+0.205179192 container start 642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 09:25:30 compute-0 podman[398864]: 2025-10-11 09:25:30.419391819 +0000 UTC m=+0.208386342 container attach 642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 09:25:30 compute-0 condescending_elgamal[398880]: 167 167
Oct 11 09:25:30 compute-0 systemd[1]: libpod-642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5.scope: Deactivated successfully.
Oct 11 09:25:30 compute-0 conmon[398880]: conmon 642f6ebc500a6e4a78c8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5.scope/container/memory.events
Oct 11 09:25:30 compute-0 podman[398864]: 2025-10-11 09:25:30.422320042 +0000 UTC m=+0.211314525 container died 642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:25:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe4c176979dbd9cda4390a1a204ab4f884754df3e73659725d607fc53ab627ce-merged.mount: Deactivated successfully.
Oct 11 09:25:30 compute-0 podman[398864]: 2025-10-11 09:25:30.463316369 +0000 UTC m=+0.252310882 container remove 642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:25:30 compute-0 systemd[1]: libpod-conmon-642f6ebc500a6e4a78c80465c2b51d6b3822a7be894b96a88c8508be237896c5.scope: Deactivated successfully.
Oct 11 09:25:30 compute-0 podman[398902]: 2025-10-11 09:25:30.700175272 +0000 UTC m=+0.045939557 container create 17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poincare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 09:25:30 compute-0 systemd[1]: Started libpod-conmon-17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d.scope.
Oct 11 09:25:30 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:25:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c01530c07a40aa3d847d215677c6556a6eee366d3d24f8429cf161363d15f19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:25:30 compute-0 podman[398902]: 2025-10-11 09:25:30.684210802 +0000 UTC m=+0.029975087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:25:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c01530c07a40aa3d847d215677c6556a6eee366d3d24f8429cf161363d15f19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:25:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c01530c07a40aa3d847d215677c6556a6eee366d3d24f8429cf161363d15f19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:25:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c01530c07a40aa3d847d215677c6556a6eee366d3d24f8429cf161363d15f19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:25:30 compute-0 podman[398902]: 2025-10-11 09:25:30.794429552 +0000 UTC m=+0.140193847 container init 17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 09:25:30 compute-0 podman[398902]: 2025-10-11 09:25:30.80676606 +0000 UTC m=+0.152530345 container start 17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 09:25:30 compute-0 podman[398902]: 2025-10-11 09:25:30.811216186 +0000 UTC m=+0.156980491 container attach 17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poincare, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 09:25:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.129 2 DEBUG nova.compute.manager [req-b162d961-a910-4491-8eb8-868da786e142 req-5a378dc8-b74d-4af9-b990-cd7d7a36554e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-changed-85875c6f-3380-42bc-9e4b-0b66df391e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.130 2 DEBUG nova.compute.manager [req-b162d961-a910-4491-8eb8-868da786e142 req-5a378dc8-b74d-4af9-b990-cd7d7a36554e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Refreshing instance network info cache due to event network-changed-85875c6f-3380-42bc-9e4b-0b66df391e0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.131 2 DEBUG oslo_concurrency.lockutils [req-b162d961-a910-4491-8eb8-868da786e142 req-5a378dc8-b74d-4af9-b990-cd7d7a36554e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.131 2 DEBUG oslo_concurrency.lockutils [req-b162d961-a910-4491-8eb8-868da786e142 req-5a378dc8-b74d-4af9-b990-cd7d7a36554e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.132 2 DEBUG nova.network.neutron [req-b162d961-a910-4491-8eb8-868da786e142 req-5a378dc8-b74d-4af9-b990-cd7d7a36554e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Refreshing network info cache for port 85875c6f-3380-42bc-9e4b-0b66df391e0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.345 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.346 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.346 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.346 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.347 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.348 2 INFO nova.compute.manager [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Terminating instance
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.349 2 DEBUG nova.compute.manager [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:25:31 compute-0 kernel: tap85875c6f-33 (unregistering): left promiscuous mode
Oct 11 09:25:31 compute-0 NetworkManager[44960]: <info>  [1760174731.4202] device (tap85875c6f-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:25:31 compute-0 kernel: tapf45a21da-34 (unregistering): left promiscuous mode
Oct 11 09:25:31 compute-0 ovn_controller[152945]: 2025-10-11T09:25:31Z|01369|binding|INFO|Releasing lport 85875c6f-3380-42bc-9e4b-0b66df391e0d from this chassis (sb_readonly=0)
Oct 11 09:25:31 compute-0 ovn_controller[152945]: 2025-10-11T09:25:31Z|01370|binding|INFO|Setting lport 85875c6f-3380-42bc-9e4b-0b66df391e0d down in Southbound
Oct 11 09:25:31 compute-0 ovn_controller[152945]: 2025-10-11T09:25:31Z|01371|binding|INFO|Removing iface tap85875c6f-33 ovn-installed in OVS
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:31 compute-0 NetworkManager[44960]: <info>  [1760174731.4862] device (tapf45a21da-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.490 2 DEBUG nova.compute.manager [req-70b14429-9b8a-40a6-bc3f-3d564b8ca202 req-14447cff-54b2-402e-b7d0-aebd62646e38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-changed-119ff9b5-e3af-4688-97a4-92b5f6a240dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.491 2 DEBUG nova.compute.manager [req-70b14429-9b8a-40a6-bc3f-3d564b8ca202 req-14447cff-54b2-402e-b7d0-aebd62646e38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Refreshing instance network info cache due to event network-changed-119ff9b5-e3af-4688-97a4-92b5f6a240dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.491 2 DEBUG oslo_concurrency.lockutils [req-70b14429-9b8a-40a6-bc3f-3d564b8ca202 req-14447cff-54b2-402e-b7d0-aebd62646e38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.492 2 DEBUG oslo_concurrency.lockutils [req-70b14429-9b8a-40a6-bc3f-3d564b8ca202 req-14447cff-54b2-402e-b7d0-aebd62646e38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.492 2 DEBUG nova.network.neutron [req-70b14429-9b8a-40a6-bc3f-3d564b8ca202 req-14447cff-54b2-402e-b7d0-aebd62646e38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Refreshing network info cache for port 119ff9b5-e3af-4688-97a4-92b5f6a240dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.503 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:f1:c1 10.100.0.14'], port_security=['fa:16:3e:81:f1:c1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '85cf93a0-2068-4567-a399-b8d52e672913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0ad377f6-44f5-45fc-95cb-2f677a42f5a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=039eefbf-a236-4641-9b4d-7b7f1014a5e2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=85875c6f-3380-42bc-9e4b-0b66df391e0d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.505 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 85875c6f-3380-42bc-9e4b-0b66df391e0d in datapath cdd3c547-e0c3-4649-8427-08ce8e1c52d4 unbound from our chassis
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.508 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cdd3c547-e0c3-4649-8427-08ce8e1c52d4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.515 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f176940-8fd4-4e4f-b8a4-b1572640c0ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.518 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4 namespace which is not needed anymore
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:31 compute-0 ovn_controller[152945]: 2025-10-11T09:25:31Z|01372|binding|INFO|Releasing lport f45a21da-34ce-448b-92cc-2639a191c755 from this chassis (sb_readonly=0)
Oct 11 09:25:31 compute-0 ovn_controller[152945]: 2025-10-11T09:25:31Z|01373|binding|INFO|Setting lport f45a21da-34ce-448b-92cc-2639a191c755 down in Southbound
Oct 11 09:25:31 compute-0 ovn_controller[152945]: 2025-10-11T09:25:31Z|01374|binding|INFO|Removing iface tapf45a21da-34 ovn-installed in OVS
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:31 compute-0 funny_poincare[398918]: {
Oct 11 09:25:31 compute-0 funny_poincare[398918]:     "0": [
Oct 11 09:25:31 compute-0 funny_poincare[398918]:         {
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "devices": [
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "/dev/loop3"
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             ],
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "lv_name": "ceph_lv0",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "lv_size": "21470642176",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "name": "ceph_lv0",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "tags": {
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.cluster_name": "ceph",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.crush_device_class": "",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.encrypted": "0",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.osd_id": "0",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.type": "block",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.vdo": "0"
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             },
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "type": "block",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "vg_name": "ceph_vg0"
Oct 11 09:25:31 compute-0 funny_poincare[398918]:         }
Oct 11 09:25:31 compute-0 funny_poincare[398918]:     ],
Oct 11 09:25:31 compute-0 funny_poincare[398918]:     "1": [
Oct 11 09:25:31 compute-0 funny_poincare[398918]:         {
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "devices": [
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "/dev/loop4"
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             ],
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "lv_name": "ceph_lv1",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "lv_size": "21470642176",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "name": "ceph_lv1",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "tags": {
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.cluster_name": "ceph",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.crush_device_class": "",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.encrypted": "0",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.osd_id": "1",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.type": "block",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.vdo": "0"
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             },
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "type": "block",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "vg_name": "ceph_vg1"
Oct 11 09:25:31 compute-0 funny_poincare[398918]:         }
Oct 11 09:25:31 compute-0 funny_poincare[398918]:     ],
Oct 11 09:25:31 compute-0 funny_poincare[398918]:     "2": [
Oct 11 09:25:31 compute-0 funny_poincare[398918]:         {
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "devices": [
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "/dev/loop5"
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             ],
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "lv_name": "ceph_lv2",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "lv_size": "21470642176",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "name": "ceph_lv2",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "tags": {
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.cluster_name": "ceph",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.crush_device_class": "",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.encrypted": "0",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.osd_id": "2",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.type": "block",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:                 "ceph.vdo": "0"
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             },
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "type": "block",
Oct 11 09:25:31 compute-0 funny_poincare[398918]:             "vg_name": "ceph_vg2"
Oct 11 09:25:31 compute-0 funny_poincare[398918]:         }
Oct 11 09:25:31 compute-0 funny_poincare[398918]:     ]
Oct 11 09:25:31 compute-0 funny_poincare[398918]: }
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:31 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Oct 11 09:25:31 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d0000007a.scope: Consumed 17.934s CPU time.
Oct 11 09:25:31 compute-0 systemd-machined[215705]: Machine qemu-147-instance-0000007a terminated.
Oct 11 09:25:31 compute-0 systemd[1]: libpod-17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d.scope: Deactivated successfully.
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.578 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:a9:1e 2001:db8:0:1:f816:3eff:fef6:a91e 2001:db8::f816:3eff:fef6:a91e'], port_security=['fa:16:3e:f6:a9:1e 2001:db8:0:1:f816:3eff:fef6:a91e 2001:db8::f816:3eff:fef6:a91e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fef6:a91e/64 2001:db8::f816:3eff:fef6:a91e/64', 'neutron:device_id': '85cf93a0-2068-4567-a399-b8d52e672913', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0ad377f6-44f5-45fc-95cb-2f677a42f5a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75b51dc9-c211-445f-9e3e-d219efddef19, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=f45a21da-34ce-448b-92cc-2639a191c755) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:25:31 compute-0 podman[398946]: 2025-10-11 09:25:31.620974848 +0000 UTC m=+0.033285800 container died 17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:25:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c01530c07a40aa3d847d215677c6556a6eee366d3d24f8429cf161363d15f19-merged.mount: Deactivated successfully.
Oct 11 09:25:31 compute-0 neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4[395110]: [NOTICE]   (395121) : haproxy version is 2.8.14-c23fe91
Oct 11 09:25:31 compute-0 neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4[395110]: [NOTICE]   (395121) : path to executable is /usr/sbin/haproxy
Oct 11 09:25:31 compute-0 neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4[395110]: [WARNING]  (395121) : Exiting Master process...
Oct 11 09:25:31 compute-0 neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4[395110]: [WARNING]  (395121) : Exiting Master process...
Oct 11 09:25:31 compute-0 neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4[395110]: [ALERT]    (395121) : Current worker (395128) exited with code 143 (Terminated)
Oct 11 09:25:31 compute-0 neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4[395110]: [WARNING]  (395121) : All workers exited. Exiting... (0)
Oct 11 09:25:31 compute-0 systemd[1]: libpod-ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c.scope: Deactivated successfully.
Oct 11 09:25:31 compute-0 podman[398965]: 2025-10-11 09:25:31.680488678 +0000 UTC m=+0.050799155 container died ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:25:31 compute-0 podman[398946]: 2025-10-11 09:25:31.706480821 +0000 UTC m=+0.118791713 container remove 17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_poincare, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:25:31 compute-0 systemd[1]: libpod-conmon-17a04c1fc8ffae27e7eb48719bff9c68dc5f62b6601373c938f539748457326d.scope: Deactivated successfully.
Oct 11 09:25:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c-userdata-shm.mount: Deactivated successfully.
Oct 11 09:25:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-66d9ee5906b3e37b7221b27948dcaeafcd376d35672ef3e4c15b762b77c22192-merged.mount: Deactivated successfully.
Oct 11 09:25:31 compute-0 podman[398965]: 2025-10-11 09:25:31.744858074 +0000 UTC m=+0.115168531 container cleanup ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:25:31 compute-0 systemd[1]: libpod-conmon-ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c.scope: Deactivated successfully.
Oct 11 09:25:31 compute-0 sudo[398796]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:31 compute-0 NetworkManager[44960]: <info>  [1760174731.7862] manager: (tapf45a21da-34): new Tun device (/org/freedesktop/NetworkManager/Devices/538)
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.803 2 INFO nova.virt.libvirt.driver [-] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Instance destroyed successfully.
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.804 2 DEBUG nova.objects.instance [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 85cf93a0-2068-4567-a399-b8d52e672913 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:25:31 compute-0 sudo[399004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.853 2 DEBUG nova.virt.libvirt.vif [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:23:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-145104929',display_name='tempest-TestGettingAddress-server-145104929',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-145104929',id=122,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:23:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-tevfktjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:23:54Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=85cf93a0-2068-4567-a399-b8d52e672913,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.856 2 DEBUG nova.network.os_vif_util [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:25:31 compute-0 sudo[399004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.858 2 DEBUG nova.network.os_vif_util [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:f1:c1,bridge_name='br-int',has_traffic_filtering=True,id=85875c6f-3380-42bc-9e4b-0b66df391e0d,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85875c6f-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:25:31 compute-0 podman[398992]: 2025-10-11 09:25:31.859659234 +0000 UTC m=+0.076336455 container remove ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 09:25:31 compute-0 sudo[399004]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.859 2 DEBUG os_vif [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:f1:c1,bridge_name='br-int',has_traffic_filtering=True,id=85875c6f-3380-42bc-9e4b-0b66df391e0d,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85875c6f-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.866 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85875c6f-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.872 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[91ce86f3-b1bb-45fe-a9f3-44b05bba6359]: (4, ('Sat Oct 11 09:25:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4 (ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c)\nab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c\nSat Oct 11 09:25:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4 (ab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c)\nab6ebb48cd0eab225fc177b91d9f4f5da5225331666aabf295bcb159bbbda39c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.877 2 INFO os_vif [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:f1:c1,bridge_name='br-int',has_traffic_filtering=True,id=85875c6f-3380-42bc-9e4b-0b66df391e0d,network=Network(cdd3c547-e0c3-4649-8427-08ce8e1c52d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85875c6f-33')
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.878 2 DEBUG nova.virt.libvirt.vif [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:23:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-145104929',display_name='tempest-TestGettingAddress-server-145104929',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-145104929',id=122,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOcQNhE7VrJ44Z/esb1CHEpx8lcRbf27s/aaZGWQqBCH7+6fr5AKVHiLVq1p8ssvkamN6U1RSiwjmJnfBxXYkiNbQhuZVIjGwYPhoT+eqQySo9UJb10NKMqWJoQDGuD9g==',key_name='tempest-TestGettingAddress-1559448996',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:23:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-tevfktjf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:23:54Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=85cf93a0-2068-4567-a399-b8d52e672913,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.878 2 DEBUG nova.network.os_vif_util [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.878 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f91037-12ca-4259-9263-372f72103a08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:31 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.878 2 DEBUG nova.network.os_vif_util [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a9:1e,bridge_name='br-int',has_traffic_filtering=True,id=f45a21da-34ce-448b-92cc-2639a191c755,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45a21da-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.879 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdd3c547-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:31 compute-0 kernel: tapcdd3c547-e0: left promiscuous mode
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.919 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e896effe-6e89-4bac-b301-7f466ad10006]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:31 compute-0 sudo[399051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:25:31 compute-0 sudo[399051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:31 compute-0 sudo[399051]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.950 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[49193f5b-89a7-48c8-9e16-0a20a70ab181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.951 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0cfa1f16-be8c-4b47-bd22-9b1678dd947a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.973 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5d98073c-1111-4f2c-a1ad-f0beb6f32156]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647682, 'reachable_time': 27246, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399085, 'error': None, 'target': 'ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:31 compute-0 systemd[1]: run-netns-ovnmeta\x2dcdd3c547\x2de0c3\x2d4649\x2d8427\x2d08ce8e1c52d4.mount: Deactivated successfully.
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.980 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cdd3c547-e0c3-4649-8427-08ce8e1c52d4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.980 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[75a56882-6cc3-43b4-b3c2-e41d6f42a517]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.981 162815 INFO neutron.agent.ovn.metadata.agent [-] Port f45a21da-34ce-448b-92cc-2639a191c755 in datapath f0bc2c62-89ab-4ce1-9157-2273788b9018 unbound from our chassis
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.983 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f0bc2c62-89ab-4ce1-9157-2273788b9018, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.983 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[280bd44b-4263-4781-afcc-0e4d3e63f6b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:31.984 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018 namespace which is not needed anymore
Oct 11 09:25:32 compute-0 sudo[399082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:25:32 compute-0 ceph-mon[74313]: pgmap v2516: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:31.879 2 DEBUG os_vif [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a9:1e,bridge_name='br-int',has_traffic_filtering=True,id=f45a21da-34ce-448b-92cc-2639a191c755,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45a21da-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.011 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf45a21da-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:32 compute-0 sudo[399082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:25:32 compute-0 sudo[399082]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.024 2 INFO os_vif [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:a9:1e,bridge_name='br-int',has_traffic_filtering=True,id=f45a21da-34ce-448b-92cc-2639a191c755,network=Network(f0bc2c62-89ab-4ce1-9157-2273788b9018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45a21da-34')
Oct 11 09:25:32 compute-0 sudo[399117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:25:32 compute-0 sudo[399117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:32 compute-0 neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018[395243]: [NOTICE]   (395247) : haproxy version is 2.8.14-c23fe91
Oct 11 09:25:32 compute-0 neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018[395243]: [NOTICE]   (395247) : path to executable is /usr/sbin/haproxy
Oct 11 09:25:32 compute-0 neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018[395243]: [WARNING]  (395247) : Exiting Master process...
Oct 11 09:25:32 compute-0 neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018[395243]: [WARNING]  (395247) : Exiting Master process...
Oct 11 09:25:32 compute-0 neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018[395243]: [ALERT]    (395247) : Current worker (395249) exited with code 143 (Terminated)
Oct 11 09:25:32 compute-0 neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018[395243]: [WARNING]  (395247) : All workers exited. Exiting... (0)
Oct 11 09:25:32 compute-0 systemd[1]: libpod-049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4.scope: Deactivated successfully.
Oct 11 09:25:32 compute-0 podman[399167]: 2025-10-11 09:25:32.141952161 +0000 UTC m=+0.053692657 container died 049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:25:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4-userdata-shm.mount: Deactivated successfully.
Oct 11 09:25:32 compute-0 podman[399167]: 2025-10-11 09:25:32.209188278 +0000 UTC m=+0.120928734 container cleanup 049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:25:32 compute-0 systemd[1]: libpod-conmon-049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4.scope: Deactivated successfully.
Oct 11 09:25:32 compute-0 podman[399201]: 2025-10-11 09:25:32.304090156 +0000 UTC m=+0.063282667 container remove 049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 09:25:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.311 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb9754f-df0c-4718-9af7-bfef9a9aaf9e]: (4, ('Sat Oct 11 09:25:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018 (049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4)\n049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4\nSat Oct 11 09:25:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018 (049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4)\n049b33c53e6737f8ceec85abe382027b44c2d2f7a48e3b43e0b4bf0d5ccae7e4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.312 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[43b5667c-3aa4-4ebf-a9d6-cb8fc24595af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.314 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0bc2c62-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:32 compute-0 kernel: tapf0bc2c62-80: left promiscuous mode
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.336 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cc67f4ad-4661-41f1-9a62-5f6148fcbf99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.355 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a4f9a2-925f-49ab-b80c-5def52696e2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.358 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa1d1f5-a841-4431-9989-033f3ffbf57d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.373 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[008c6154-b11a-4494-b89b-4da4f4fbb4cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647786, 'reachable_time': 30996, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399246, 'error': None, 'target': 'ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.375 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f0bc2c62-89ab-4ce1-9157-2273788b9018 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:25:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:32.375 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[0810bc7a-8074-4595-a9ca-4b0255881d35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.404 2 INFO nova.virt.libvirt.driver [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Deleting instance files /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913_del
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.405 2 INFO nova.virt.libvirt.driver [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Deletion of /var/lib/nova/instances/85cf93a0-2068-4567-a399-b8d52e672913_del complete
Oct 11 09:25:32 compute-0 podman[399255]: 2025-10-11 09:25:32.490048424 +0000 UTC m=+0.056418483 container create 88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 09:25:32 compute-0 systemd[1]: Started libpod-conmon-88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8.scope.
Oct 11 09:25:32 compute-0 podman[399255]: 2025-10-11 09:25:32.463471394 +0000 UTC m=+0.029841683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.571 2 INFO nova.compute.manager [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Took 1.22 seconds to destroy the instance on the hypervisor.
Oct 11 09:25:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.573 2 DEBUG oslo.service.loopingcall [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.574 2 DEBUG nova.compute.manager [-] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.574 2 DEBUG nova.network.neutron [-] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:25:32 compute-0 podman[399255]: 2025-10-11 09:25:32.592075023 +0000 UTC m=+0.158445152 container init 88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:25:32 compute-0 podman[399255]: 2025-10-11 09:25:32.605364668 +0000 UTC m=+0.171734737 container start 88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 09:25:32 compute-0 podman[399255]: 2025-10-11 09:25:32.609457064 +0000 UTC m=+0.175827193 container attach 88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:25:32 compute-0 kind_sutherland[399272]: 167 167
Oct 11 09:25:32 compute-0 systemd[1]: libpod-88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8.scope: Deactivated successfully.
Oct 11 09:25:32 compute-0 conmon[399272]: conmon 88782e1fcaf5f920e655 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8.scope/container/memory.events
Oct 11 09:25:32 compute-0 podman[399255]: 2025-10-11 09:25:32.615121164 +0000 UTC m=+0.181491223 container died 88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:25:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1a18029a52b2d99369687829c9fae311726c99a73db60edcfbb05990f0b5022-merged.mount: Deactivated successfully.
Oct 11 09:25:32 compute-0 systemd[1]: run-netns-ovnmeta\x2df0bc2c62\x2d89ab\x2d4ce1\x2d9157\x2d2273788b9018.mount: Deactivated successfully.
Oct 11 09:25:32 compute-0 podman[399255]: 2025-10-11 09:25:32.662758768 +0000 UTC m=+0.229128837 container remove 88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:25:32 compute-0 systemd[1]: libpod-conmon-88782e1fcaf5f920e655151f5adf3bb887e2ee0d4572e9bcfaa9db06bdeab5d8.scope: Deactivated successfully.
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.907 2 INFO nova.compute.manager [None req-532fdc66-7134-4826-b2f2-f9db9ee56e78 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Get console output
Oct 11 09:25:32 compute-0 nova_compute[260935]: 2025-10-11 09:25:32.918 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:25:32 compute-0 podman[399295]: 2025-10-11 09:25:32.958225897 +0000 UTC m=+0.074085502 container create 8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:25:33 compute-0 systemd[1]: Started libpod-conmon-8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da.scope.
Oct 11 09:25:33 compute-0 podman[399295]: 2025-10-11 09:25:32.930179765 +0000 UTC m=+0.046039450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:25:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a2ae4c33597cdb47ac09d39f42f645cf0ff74c4c2fb654321bf06d8f062660/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a2ae4c33597cdb47ac09d39f42f645cf0ff74c4c2fb654321bf06d8f062660/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a2ae4c33597cdb47ac09d39f42f645cf0ff74c4c2fb654321bf06d8f062660/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:25:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a2ae4c33597cdb47ac09d39f42f645cf0ff74c4c2fb654321bf06d8f062660/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:25:33 compute-0 podman[399295]: 2025-10-11 09:25:33.084711836 +0000 UTC m=+0.200571471 container init 8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:25:33 compute-0 podman[399295]: 2025-10-11 09:25:33.099469623 +0000 UTC m=+0.215329268 container start 8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 09:25:33 compute-0 podman[399295]: 2025-10-11 09:25:33.103933189 +0000 UTC m=+0.219792824 container attach 8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 09:25:33 compute-0 nova_compute[260935]: 2025-10-11 09:25:33.594 2 DEBUG nova.compute.manager [req-13ba95d1-477a-46b8-9d97-b86253c68297 req-fa97920b-0580-4806-882d-e954bcfb1875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-unplugged-85875c6f-3380-42bc-9e4b-0b66df391e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:33 compute-0 nova_compute[260935]: 2025-10-11 09:25:33.595 2 DEBUG oslo_concurrency.lockutils [req-13ba95d1-477a-46b8-9d97-b86253c68297 req-fa97920b-0580-4806-882d-e954bcfb1875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:33 compute-0 nova_compute[260935]: 2025-10-11 09:25:33.595 2 DEBUG oslo_concurrency.lockutils [req-13ba95d1-477a-46b8-9d97-b86253c68297 req-fa97920b-0580-4806-882d-e954bcfb1875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:33 compute-0 nova_compute[260935]: 2025-10-11 09:25:33.595 2 DEBUG oslo_concurrency.lockutils [req-13ba95d1-477a-46b8-9d97-b86253c68297 req-fa97920b-0580-4806-882d-e954bcfb1875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:33 compute-0 nova_compute[260935]: 2025-10-11 09:25:33.596 2 DEBUG nova.compute.manager [req-13ba95d1-477a-46b8-9d97-b86253c68297 req-fa97920b-0580-4806-882d-e954bcfb1875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] No waiting events found dispatching network-vif-unplugged-85875c6f-3380-42bc-9e4b-0b66df391e0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:25:33 compute-0 nova_compute[260935]: 2025-10-11 09:25:33.596 2 DEBUG nova.compute.manager [req-13ba95d1-477a-46b8-9d97-b86253c68297 req-fa97920b-0580-4806-882d-e954bcfb1875 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-unplugged-85875c6f-3380-42bc-9e4b-0b66df391e0d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:25:33 compute-0 nova_compute[260935]: 2025-10-11 09:25:33.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:25:34 compute-0 ceph-mon[74313]: pgmap v2517: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 11 09:25:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:25:34 compute-0 infallible_darwin[399311]: {
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:         "osd_id": 2,
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:         "type": "bluestore"
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:     },
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:         "osd_id": 0,
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:         "type": "bluestore"
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:     },
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:         "osd_id": 1,
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:         "type": "bluestore"
Oct 11 09:25:34 compute-0 infallible_darwin[399311]:     }
Oct 11 09:25:34 compute-0 infallible_darwin[399311]: }
Oct 11 09:25:34 compute-0 systemd[1]: libpod-8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da.scope: Deactivated successfully.
Oct 11 09:25:34 compute-0 systemd[1]: libpod-8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da.scope: Consumed 1.131s CPU time.
Oct 11 09:25:34 compute-0 podman[399295]: 2025-10-11 09:25:34.225338885 +0000 UTC m=+1.341198560 container died 8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 09:25:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8a2ae4c33597cdb47ac09d39f42f645cf0ff74c4c2fb654321bf06d8f062660-merged.mount: Deactivated successfully.
Oct 11 09:25:34 compute-0 podman[399295]: 2025-10-11 09:25:34.303522131 +0000 UTC m=+1.419381726 container remove 8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:25:34 compute-0 systemd[1]: libpod-conmon-8706fdc9b627f21130aa1b80d84c84630b13a8d16802c7e9858a66c55fbe88da.scope: Deactivated successfully.
Oct 11 09:25:34 compute-0 sudo[399117]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:25:34 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:25:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:25:34 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:25:34 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9dfca160-2797-4030-bd8a-842fa083a2a2 does not exist
Oct 11 09:25:34 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 13a2331d-7b21-4c57-ad8b-67b533840d4c does not exist
Oct 11 09:25:34 compute-0 sudo[399356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:25:34 compute-0 sudo[399356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:34 compute-0 sudo[399356]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:34 compute-0 sudo[399381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:25:34 compute-0 sudo[399381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:25:34 compute-0 sudo[399381]: pam_unix(sudo:session): session closed for user root
Oct 11 09:25:34 compute-0 ovn_controller[152945]: 2025-10-11T09:25:34Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:f4:1e 10.100.0.9
Oct 11 09:25:34 compute-0 nova_compute[260935]: 2025-10-11 09:25:34.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:25:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 11 09:25:34 compute-0 nova_compute[260935]: 2025-10-11 09:25:34.992 2 DEBUG nova.compute.manager [req-95bf35b9-9f84-4340-b0d8-731de9533d04 req-6360398c-1674-41a9-a0ef-ec742a2ae7f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-deleted-f45a21da-34ce-448b-92cc-2639a191c755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:34 compute-0 nova_compute[260935]: 2025-10-11 09:25:34.993 2 INFO nova.compute.manager [req-95bf35b9-9f84-4340-b0d8-731de9533d04 req-6360398c-1674-41a9-a0ef-ec742a2ae7f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Neutron deleted interface f45a21da-34ce-448b-92cc-2639a191c755; detaching it from the instance and deleting it from the info cache
Oct 11 09:25:34 compute-0 nova_compute[260935]: 2025-10-11 09:25:34.993 2 DEBUG nova.network.neutron [req-95bf35b9-9f84-4340-b0d8-731de9533d04 req-6360398c-1674-41a9-a0ef-ec742a2ae7f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updating instance_info_cache with network_info: [{"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.030 2 DEBUG nova.network.neutron [req-b162d961-a910-4491-8eb8-868da786e142 req-5a378dc8-b74d-4af9-b990-cd7d7a36554e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updated VIF entry in instance network info cache for port 85875c6f-3380-42bc-9e4b-0b66df391e0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.031 2 DEBUG nova.network.neutron [req-b162d961-a910-4491-8eb8-868da786e142 req-5a378dc8-b74d-4af9-b990-cd7d7a36554e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updating instance_info_cache with network_info: [{"id": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "address": "fa:16:3e:81:f1:c1", "network": {"id": "cdd3c547-e0c3-4649-8427-08ce8e1c52d4", "bridge": "br-int", "label": "tempest-network-smoke--1698384506", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85875c6f-33", "ovs_interfaceid": "85875c6f-3380-42bc-9e4b-0b66df391e0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f45a21da-34ce-448b-92cc-2639a191c755", "address": "fa:16:3e:f6:a9:1e", "network": {"id": "f0bc2c62-89ab-4ce1-9157-2273788b9018", "bridge": "br-int", "label": "tempest-network-smoke--892002892", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:a91e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45a21da-34", "ovs_interfaceid": "f45a21da-34ce-448b-92cc-2639a191c755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.101 2 DEBUG nova.compute.manager [req-95bf35b9-9f84-4340-b0d8-731de9533d04 req-6360398c-1674-41a9-a0ef-ec742a2ae7f5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Detach interface failed, port_id=f45a21da-34ce-448b-92cc-2639a191c755, reason: Instance 85cf93a0-2068-4567-a399-b8d52e672913 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.138 2 DEBUG oslo_concurrency.lockutils [req-b162d961-a910-4491-8eb8-868da786e142 req-5a378dc8-b74d-4af9-b990-cd7d7a36554e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-85cf93a0-2068-4567-a399-b8d52e672913" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:25:35 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:25:35 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:25:35 compute-0 ceph-mon[74313]: pgmap v2518: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.519 2 DEBUG nova.network.neutron [-] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.541 2 DEBUG nova.network.neutron [req-70b14429-9b8a-40a6-bc3f-3d564b8ca202 req-14447cff-54b2-402e-b7d0-aebd62646e38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Updated VIF entry in instance network info cache for port 119ff9b5-e3af-4688-97a4-92b5f6a240dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.542 2 DEBUG nova.network.neutron [req-70b14429-9b8a-40a6-bc3f-3d564b8ca202 req-14447cff-54b2-402e-b7d0-aebd62646e38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Updating instance_info_cache with network_info: [{"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.645 2 INFO nova.compute.manager [-] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Took 3.07 seconds to deallocate network for instance.
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.672 2 DEBUG oslo_concurrency.lockutils [req-70b14429-9b8a-40a6-bc3f-3d564b8ca202 req-14447cff-54b2-402e-b7d0-aebd62646e38 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.699 2 DEBUG nova.compute.manager [req-399145a9-4ff6-4feb-90e9-75b341585672 req-677a52d9-dbfd-4362-b2fc-986152c7e446 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.700 2 DEBUG oslo_concurrency.lockutils [req-399145a9-4ff6-4feb-90e9-75b341585672 req-677a52d9-dbfd-4362-b2fc-986152c7e446 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "85cf93a0-2068-4567-a399-b8d52e672913-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.700 2 DEBUG oslo_concurrency.lockutils [req-399145a9-4ff6-4feb-90e9-75b341585672 req-677a52d9-dbfd-4362-b2fc-986152c7e446 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.701 2 DEBUG oslo_concurrency.lockutils [req-399145a9-4ff6-4feb-90e9-75b341585672 req-677a52d9-dbfd-4362-b2fc-986152c7e446 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.701 2 DEBUG nova.compute.manager [req-399145a9-4ff6-4feb-90e9-75b341585672 req-677a52d9-dbfd-4362-b2fc-986152c7e446 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] No waiting events found dispatching network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.701 2 WARNING nova.compute.manager [req-399145a9-4ff6-4feb-90e9-75b341585672 req-677a52d9-dbfd-4362-b2fc-986152c7e446 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received unexpected event network-vif-plugged-85875c6f-3380-42bc-9e4b-0b66df391e0d for instance with vm_state active and task_state deleting.
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.727 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.728 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:35 compute-0 nova_compute[260935]: 2025-10-11 09:25:35.856 2 DEBUG oslo_concurrency.processutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:25:36 compute-0 ovn_controller[152945]: 2025-10-11T09:25:36Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:f4:1e 10.100.0.9
Oct 11 09:25:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:25:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2142244451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:25:36 compute-0 nova_compute[260935]: 2025-10-11 09:25:36.373 2 DEBUG oslo_concurrency.processutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:25:36 compute-0 nova_compute[260935]: 2025-10-11 09:25:36.379 2 DEBUG nova.compute.provider_tree [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:25:36 compute-0 nova_compute[260935]: 2025-10-11 09:25:36.419 2 DEBUG nova.scheduler.client.report [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:25:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2142244451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:25:36 compute-0 nova_compute[260935]: 2025-10-11 09:25:36.482 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:36 compute-0 nova_compute[260935]: 2025-10-11 09:25:36.522 2 INFO nova.scheduler.client.report [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 85cf93a0-2068-4567-a399-b8d52e672913
Oct 11 09:25:36 compute-0 nova_compute[260935]: 2025-10-11 09:25:36.669 2 DEBUG oslo_concurrency.lockutils [None req-ff3f7fec-a100-4441-a42d-f95642af354a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "85cf93a0-2068-4567-a399-b8d52e672913" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 11 09:25:37 compute-0 nova_compute[260935]: 2025-10-11 09:25:37.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:37 compute-0 nova_compute[260935]: 2025-10-11 09:25:37.116 2 DEBUG nova.compute.manager [req-99392b72-825b-4faa-890d-c717a9d3c7c4 req-2ec63e0d-015b-40bf-8055-4e25f9640bf2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Received event network-vif-deleted-85875c6f-3380-42bc-9e4b-0b66df391e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:37 compute-0 sshd-session[399428]: Invalid user admin from 165.232.82.252 port 54018
Oct 11 09:25:37 compute-0 sshd-session[399428]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:25:37 compute-0 sshd-session[399428]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:25:37 compute-0 ceph-mon[74313]: pgmap v2519: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct 11 09:25:37 compute-0 nova_compute[260935]: 2025-10-11 09:25:37.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:25:37 compute-0 nova_compute[260935]: 2025-10-11 09:25:37.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:38 compute-0 nova_compute[260935]: 2025-10-11 09:25:38.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:25:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.005 2 DEBUG nova.compute.manager [req-406994a8-a847-4ede-bb61-eb0e099e405e req-f3c660b0-154b-40c5-91fb-f7aef9229881 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-changed-119ff9b5-e3af-4688-97a4-92b5f6a240dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.005 2 DEBUG nova.compute.manager [req-406994a8-a847-4ede-bb61-eb0e099e405e req-f3c660b0-154b-40c5-91fb-f7aef9229881 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Refreshing instance network info cache due to event network-changed-119ff9b5-e3af-4688-97a4-92b5f6a240dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.007 2 DEBUG oslo_concurrency.lockutils [req-406994a8-a847-4ede-bb61-eb0e099e405e req-f3c660b0-154b-40c5-91fb-f7aef9229881 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.007 2 DEBUG oslo_concurrency.lockutils [req-406994a8-a847-4ede-bb61-eb0e099e405e req-f3c660b0-154b-40c5-91fb-f7aef9229881 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.008 2 DEBUG nova.network.neutron [req-406994a8-a847-4ede-bb61-eb0e099e405e req-f3c660b0-154b-40c5-91fb-f7aef9229881 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Refreshing network info cache for port 119ff9b5-e3af-4688-97a4-92b5f6a240dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.058 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.059 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.059 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.060 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.060 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.062 2 INFO nova.compute.manager [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Terminating instance
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.065 2 DEBUG nova.compute.manager [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:25:39 compute-0 kernel: tap119ff9b5-e3 (unregistering): left promiscuous mode
Oct 11 09:25:39 compute-0 NetworkManager[44960]: <info>  [1760174739.1286] device (tap119ff9b5-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:25:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:39 compute-0 ovn_controller[152945]: 2025-10-11T09:25:39Z|01375|binding|INFO|Releasing lport 119ff9b5-e3af-4688-97a4-92b5f6a240dc from this chassis (sb_readonly=0)
Oct 11 09:25:39 compute-0 ovn_controller[152945]: 2025-10-11T09:25:39Z|01376|binding|INFO|Setting lport 119ff9b5-e3af-4688-97a4-92b5f6a240dc down in Southbound
Oct 11 09:25:39 compute-0 ovn_controller[152945]: 2025-10-11T09:25:39Z|01377|binding|INFO|Removing iface tap119ff9b5-e3 ovn-installed in OVS
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.153 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:f4:1e 10.100.0.9'], port_security=['fa:16:3e:54:f4:1e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e4670193-9ea3-45bc-9dbd-14d2e62f32f1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-342daca7-3c5d-4bb3-bcdc-6abce7b4d414', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c1e4fc68-c314-4fe3-a3ef-a4986a78eba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b157abac-1a65-40f2-b3c7-9c7765cf8246, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=119ff9b5-e3af-4688-97a4-92b5f6a240dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:25:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.155 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 119ff9b5-e3af-4688-97a4-92b5f6a240dc in datapath 342daca7-3c5d-4bb3-bcdc-6abce7b4d414 unbound from our chassis
Oct 11 09:25:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.156 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 342daca7-3c5d-4bb3-bcdc-6abce7b4d414, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:25:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.158 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7ec4b0-2beb-4ad7-85a7-bde9fdb45957]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.158 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414 namespace which is not needed anymore
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:39 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Oct 11 09:25:39 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d0000007e.scope: Consumed 13.873s CPU time.
Oct 11 09:25:39 compute-0 systemd-machined[215705]: Machine qemu-150-instance-0000007e terminated.
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.311 2 INFO nova.virt.libvirt.driver [-] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Instance destroyed successfully.
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.312 2 DEBUG nova.objects.instance [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid e4670193-9ea3-45bc-9dbd-14d2e62f32f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.329 2 DEBUG nova.virt.libvirt.vif [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:24:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1522106396',display_name='tempest-TestNetworkBasicOps-server-1522106396',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1522106396',id=126,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPJuvLde49Grl0xrW9XqlZWm2iapzj/ptvpeM0vUXkL9WjkO1Vts1ZqPNK0xa0RFKzwonFoVCMufaWV362ApeqPtUvwTGOGY43qYDgAKGh07RNbr2NQMmIQQ5Njnnyha+Q==',key_name='tempest-TestNetworkBasicOps-2127179421',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:25:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-4ic01z0m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:25:19Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=e4670193-9ea3-45bc-9dbd-14d2e62f32f1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.329 2 DEBUG nova.network.os_vif_util [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.330 2 DEBUG nova.network.os_vif_util [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:f4:1e,bridge_name='br-int',has_traffic_filtering=True,id=119ff9b5-e3af-4688-97a4-92b5f6a240dc,network=Network(342daca7-3c5d-4bb3-bcdc-6abce7b4d414),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap119ff9b5-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.330 2 DEBUG os_vif [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:f4:1e,bridge_name='br-int',has_traffic_filtering=True,id=119ff9b5-e3af-4688-97a4-92b5f6a240dc,network=Network(342daca7-3c5d-4bb3-bcdc-6abce7b4d414),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap119ff9b5-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.332 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap119ff9b5-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.337 2 INFO os_vif [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:f4:1e,bridge_name='br-int',has_traffic_filtering=True,id=119ff9b5-e3af-4688-97a4-92b5f6a240dc,network=Network(342daca7-3c5d-4bb3-bcdc-6abce7b4d414),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap119ff9b5-e3')
Oct 11 09:25:39 compute-0 neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414[398234]: [NOTICE]   (398247) : haproxy version is 2.8.14-c23fe91
Oct 11 09:25:39 compute-0 neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414[398234]: [NOTICE]   (398247) : path to executable is /usr/sbin/haproxy
Oct 11 09:25:39 compute-0 neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414[398234]: [WARNING]  (398247) : Exiting Master process...
Oct 11 09:25:39 compute-0 neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414[398234]: [ALERT]    (398247) : Current worker (398249) exited with code 143 (Terminated)
Oct 11 09:25:39 compute-0 neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414[398234]: [WARNING]  (398247) : All workers exited. Exiting... (0)
Oct 11 09:25:39 compute-0 systemd[1]: libpod-7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3.scope: Deactivated successfully.
Oct 11 09:25:39 compute-0 podman[399453]: 2025-10-11 09:25:39.368473299 +0000 UTC m=+0.072835977 container died 7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:25:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3-userdata-shm.mount: Deactivated successfully.
Oct 11 09:25:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-72cac3b459deeeef18dd1fe465a7ead282b0060817d48bf95435e7d2a56eb042-merged.mount: Deactivated successfully.
Oct 11 09:25:39 compute-0 podman[399453]: 2025-10-11 09:25:39.421397902 +0000 UTC m=+0.125760580 container cleanup 7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:25:39 compute-0 systemd[1]: libpod-conmon-7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3.scope: Deactivated successfully.
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.440 2 DEBUG nova.compute.manager [req-678f704d-b48f-4583-8258-b974274a9feb req-09c0925c-672a-4d7b-96c9-67acb87f4046 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-vif-unplugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.441 2 DEBUG oslo_concurrency.lockutils [req-678f704d-b48f-4583-8258-b974274a9feb req-09c0925c-672a-4d7b-96c9-67acb87f4046 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.441 2 DEBUG oslo_concurrency.lockutils [req-678f704d-b48f-4583-8258-b974274a9feb req-09c0925c-672a-4d7b-96c9-67acb87f4046 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.441 2 DEBUG oslo_concurrency.lockutils [req-678f704d-b48f-4583-8258-b974274a9feb req-09c0925c-672a-4d7b-96c9-67acb87f4046 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.442 2 DEBUG nova.compute.manager [req-678f704d-b48f-4583-8258-b974274a9feb req-09c0925c-672a-4d7b-96c9-67acb87f4046 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] No waiting events found dispatching network-vif-unplugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.442 2 DEBUG nova.compute.manager [req-678f704d-b48f-4583-8258-b974274a9feb req-09c0925c-672a-4d7b-96c9-67acb87f4046 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-vif-unplugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:25:39 compute-0 podman[399512]: 2025-10-11 09:25:39.510208529 +0000 UTC m=+0.055735204 container remove 7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:25:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.525 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2d96b293-2fc7-4300-9937-b3849b5c0739]: (4, ('Sat Oct 11 09:25:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414 (7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3)\n7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3\nSat Oct 11 09:25:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414 (7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3)\n7942cf246211150ab9a379ef3012a741b0b0deb7b617be7c2673948e421375b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.529 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7b321d63-1c60-4e87-ac42-a93c7cdf4a3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.530 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap342daca7-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:39 compute-0 kernel: tap342daca7-30: left promiscuous mode
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.568 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ec137465-36f9-4838-977d-b00a71927eba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.595 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[425a82a7-1ebf-43e3-bb33-0ad4b9529b6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.597 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2df1893e-8f0c-4106-8e28-6a4cc3a35b5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.617 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a999327-3b6c-48a3-8808-6bb601a92d55]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655704, 'reachable_time': 44664, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399528, 'error': None, 'target': 'ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d342daca7\x2d3c5d\x2d4bb3\x2dbcdc\x2d6abce7b4d414.mount: Deactivated successfully.
Oct 11 09:25:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.623 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-342daca7-3c5d-4bb3-bcdc-6abce7b4d414 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:25:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:39.624 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[2e2f6c32-d775-417f-8857-0f5386be8f1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.808 2 INFO nova.virt.libvirt.driver [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Deleting instance files /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1_del
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.809 2 INFO nova.virt.libvirt.driver [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Deletion of /var/lib/nova/instances/e4670193-9ea3-45bc-9dbd-14d2e62f32f1_del complete
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.858 2 INFO nova.compute.manager [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.859 2 DEBUG oslo.service.loopingcall [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.860 2 DEBUG nova.compute.manager [-] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:25:39 compute-0 nova_compute[260935]: 2025-10-11 09:25:39.860 2 DEBUG nova.network.neutron [-] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:25:39 compute-0 ceph-mon[74313]: pgmap v2520: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 09:25:40 compute-0 sshd-session[399428]: Failed password for invalid user admin from 165.232.82.252 port 54018 ssh2
Oct 11 09:25:40 compute-0 sshd-session[399428]: Connection closed by invalid user admin 165.232.82.252 port 54018 [preauth]
Oct 11 09:25:40 compute-0 nova_compute[260935]: 2025-10-11 09:25:40.654 2 DEBUG nova.network.neutron [req-406994a8-a847-4ede-bb61-eb0e099e405e req-f3c660b0-154b-40c5-91fb-f7aef9229881 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Updated VIF entry in instance network info cache for port 119ff9b5-e3af-4688-97a4-92b5f6a240dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:25:40 compute-0 nova_compute[260935]: 2025-10-11 09:25:40.655 2 DEBUG nova.network.neutron [req-406994a8-a847-4ede-bb61-eb0e099e405e req-f3c660b0-154b-40c5-91fb-f7aef9229881 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Updating instance_info_cache with network_info: [{"id": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "address": "fa:16:3e:54:f4:1e", "network": {"id": "342daca7-3c5d-4bb3-bcdc-6abce7b4d414", "bridge": "br-int", "label": "tempest-network-smoke--2000863110", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap119ff9b5-e3", "ovs_interfaceid": "119ff9b5-e3af-4688-97a4-92b5f6a240dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:25:40 compute-0 nova_compute[260935]: 2025-10-11 09:25:40.685 2 DEBUG oslo_concurrency.lockutils [req-406994a8-a847-4ede-bb61-eb0e099e405e req-f3c660b0-154b-40c5-91fb-f7aef9229881 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e4670193-9ea3-45bc-9dbd-14d2e62f32f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:25:40 compute-0 nova_compute[260935]: 2025-10-11 09:25:40.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:25:40 compute-0 nova_compute[260935]: 2025-10-11 09:25:40.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:25:40 compute-0 nova_compute[260935]: 2025-10-11 09:25:40.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:25:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.266 2 DEBUG nova.network.neutron [-] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.297 2 INFO nova.compute.manager [-] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Took 1.44 seconds to deallocate network for instance.
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.334 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.335 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.448 2 DEBUG oslo_concurrency.processutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.537 2 DEBUG nova.compute.manager [req-ae28e358-5a97-4e4a-90d0-d4e84f22abed req-3682aaf0-ff64-47f5-a347-e842915de394 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.538 2 DEBUG oslo_concurrency.lockutils [req-ae28e358-5a97-4e4a-90d0-d4e84f22abed req-3682aaf0-ff64-47f5-a347-e842915de394 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.539 2 DEBUG oslo_concurrency.lockutils [req-ae28e358-5a97-4e4a-90d0-d4e84f22abed req-3682aaf0-ff64-47f5-a347-e842915de394 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.540 2 DEBUG oslo_concurrency.lockutils [req-ae28e358-5a97-4e4a-90d0-d4e84f22abed req-3682aaf0-ff64-47f5-a347-e842915de394 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.540 2 DEBUG nova.compute.manager [req-ae28e358-5a97-4e4a-90d0-d4e84f22abed req-3682aaf0-ff64-47f5-a347-e842915de394 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] No waiting events found dispatching network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.541 2 WARNING nova.compute.manager [req-ae28e358-5a97-4e4a-90d0-d4e84f22abed req-3682aaf0-ff64-47f5-a347-e842915de394 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received unexpected event network-vif-plugged-119ff9b5-e3af-4688-97a4-92b5f6a240dc for instance with vm_state deleted and task_state None.
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.541 2 DEBUG nova.compute.manager [req-ae28e358-5a97-4e4a-90d0-d4e84f22abed req-3682aaf0-ff64-47f5-a347-e842915de394 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Received event network-vif-deleted-119ff9b5-e3af-4688-97a4-92b5f6a240dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.726 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.745 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:25:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3657812144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.943 2 DEBUG oslo_concurrency.processutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.952 2 DEBUG nova.compute.provider_tree [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:25:41 compute-0 ceph-mon[74313]: pgmap v2521: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct 11 09:25:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3657812144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.972 2 DEBUG nova.scheduler.client.report [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:25:41 compute-0 nova_compute[260935]: 2025-10-11 09:25:41.997 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.002 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.003 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.003 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.004 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.097 2 INFO nova.scheduler.client.report [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance e4670193-9ea3-45bc-9dbd-14d2e62f32f1
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.165 2 DEBUG oslo_concurrency.lockutils [None req-8ae7735c-d341-4f8a-af49-6959c16c170b dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "e4670193-9ea3-45bc-9dbd-14d2e62f32f1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:25:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/510749789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.469 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.623 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.623 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.623 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.628 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.628 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.632 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.632 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 20 KiB/s wr, 57 op/s
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.898 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.899 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2860MB free_disk=59.7851448059082GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.900 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:25:42 compute-0 nova_compute[260935]: 2025-10-11 09:25:42.900 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:25:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/510749789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:25:43 compute-0 nova_compute[260935]: 2025-10-11 09:25:43.013 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:25:43 compute-0 nova_compute[260935]: 2025-10-11 09:25:43.014 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:25:43 compute-0 nova_compute[260935]: 2025-10-11 09:25:43.014 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:25:43 compute-0 nova_compute[260935]: 2025-10-11 09:25:43.015 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:25:43 compute-0 nova_compute[260935]: 2025-10-11 09:25:43.015 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:25:43 compute-0 nova_compute[260935]: 2025-10-11 09:25:43.122 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:25:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:25:43 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715715982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:25:43 compute-0 nova_compute[260935]: 2025-10-11 09:25:43.589 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:25:43 compute-0 nova_compute[260935]: 2025-10-11 09:25:43.598 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:25:43 compute-0 nova_compute[260935]: 2025-10-11 09:25:43.622 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:25:43 compute-0 nova_compute[260935]: 2025-10-11 09:25:43.650 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:25:43 compute-0 nova_compute[260935]: 2025-10-11 09:25:43.651 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:25:43 compute-0 podman[399596]: 2025-10-11 09:25:43.788963178 +0000 UTC m=+0.088573320 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:25:44 compute-0 ceph-mon[74313]: pgmap v2522: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 20 KiB/s wr, 57 op/s
Oct 11 09:25:44 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1715715982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:25:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:25:44 compute-0 nova_compute[260935]: 2025-10-11 09:25:44.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:44 compute-0 nova_compute[260935]: 2025-10-11 09:25:44.627 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:25:44 compute-0 nova_compute[260935]: 2025-10-11 09:25:44.628 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:25:44 compute-0 ovn_controller[152945]: 2025-10-11T09:25:44Z|01378|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:25:44 compute-0 ovn_controller[152945]: 2025-10-11T09:25:44Z|01379|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:25:44 compute-0 nova_compute[260935]: 2025-10-11 09:25:44.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:44 compute-0 ovn_controller[152945]: 2025-10-11T09:25:44Z|01380|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:25:44 compute-0 ovn_controller[152945]: 2025-10-11T09:25:44Z|01381|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:25:44 compute-0 nova_compute[260935]: 2025-10-11 09:25:44.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.5 KiB/s wr, 36 op/s
Oct 11 09:25:44 compute-0 nova_compute[260935]: 2025-10-11 09:25:44.935 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:25:44 compute-0 nova_compute[260935]: 2025-10-11 09:25:44.935 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:25:44 compute-0 nova_compute[260935]: 2025-10-11 09:25:44.935 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:25:46 compute-0 ceph-mon[74313]: pgmap v2523: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.5 KiB/s wr, 36 op/s
Oct 11 09:25:46 compute-0 nova_compute[260935]: 2025-10-11 09:25:46.802 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174731.7999434, 85cf93a0-2068-4567-a399-b8d52e672913 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:25:46 compute-0 nova_compute[260935]: 2025-10-11 09:25:46.802 2 INFO nova.compute.manager [-] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] VM Stopped (Lifecycle Event)
Oct 11 09:25:46 compute-0 nova_compute[260935]: 2025-10-11 09:25:46.821 2 DEBUG nova.compute.manager [None req-870de4f7-38e0-4e94-9920-cd1c49edd099 - - - - - -] [instance: 85cf93a0-2068-4567-a399-b8d52e672913] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:25:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.5 KiB/s wr, 36 op/s
Oct 11 09:25:46 compute-0 nova_compute[260935]: 2025-10-11 09:25:46.943 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:25:46 compute-0 nova_compute[260935]: 2025-10-11 09:25:46.963 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:25:46 compute-0 nova_compute[260935]: 2025-10-11 09:25:46.963 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:25:47 compute-0 nova_compute[260935]: 2025-10-11 09:25:47.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:48 compute-0 ceph-mon[74313]: pgmap v2524: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.5 KiB/s wr, 36 op/s
Oct 11 09:25:48 compute-0 podman[399616]: 2025-10-11 09:25:48.782251361 +0000 UTC m=+0.084898007 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:25:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.5 KiB/s wr, 36 op/s
Oct 11 09:25:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:25:49 compute-0 nova_compute[260935]: 2025-10-11 09:25:49.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:50 compute-0 ceph-mon[74313]: pgmap v2525: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.5 KiB/s wr, 36 op/s
Oct 11 09:25:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Oct 11 09:25:52 compute-0 ceph-mon[74313]: pgmap v2526: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Oct 11 09:25:52 compute-0 nova_compute[260935]: 2025-10-11 09:25:52.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2527: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Oct 11 09:25:53 compute-0 ceph-mon[74313]: pgmap v2527: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Oct 11 09:25:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:53.392 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:25:53 compute-0 nova_compute[260935]: 2025-10-11 09:25:53.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:53.394 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:25:53 compute-0 podman[399638]: 2025-10-11 09:25:53.776411169 +0000 UTC m=+0.078703612 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 11 09:25:53 compute-0 podman[399639]: 2025-10-11 09:25:53.858477935 +0000 UTC m=+0.153782411 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:25:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:25:54 compute-0 nova_compute[260935]: 2025-10-11 09:25:54.310 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174739.3090696, e4670193-9ea3-45bc-9dbd-14d2e62f32f1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:25:54 compute-0 nova_compute[260935]: 2025-10-11 09:25:54.310 2 INFO nova.compute.manager [-] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] VM Stopped (Lifecycle Event)
Oct 11 09:25:54 compute-0 nova_compute[260935]: 2025-10-11 09:25:54.339 2 DEBUG nova.compute.manager [None req-fa20a344-9e78-4177-b5fc-d244242a00f6 - - - - - -] [instance: e4670193-9ea3-45bc-9dbd-14d2e62f32f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:25:54 compute-0 nova_compute[260935]: 2025-10-11 09:25:54.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:25:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:25:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:25:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:25:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:25:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:25:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:25:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:25:54
Oct 11 09:25:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:25:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:25:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'backups']
Oct 11 09:25:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:25:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:25:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:25:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:25:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:25:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:25:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:25:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:25:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:25:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:25:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:25:56 compute-0 ceph-mon[74313]: pgmap v2528: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:25:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:25:56.396 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:25:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2529: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:25:57 compute-0 ceph-mon[74313]: pgmap v2529: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:25:57 compute-0 nova_compute[260935]: 2025-10-11 09:25:57.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:25:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2530: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:25:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:25:59 compute-0 nova_compute[260935]: 2025-10-11 09:25:59.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:00 compute-0 ceph-mon[74313]: pgmap v2530: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:26:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:26:01 compute-0 sshd-session[398280]: error: kex_exchange_identification: read: Connection reset by peer
Oct 11 09:26:01 compute-0 sshd-session[398280]: Connection reset by 5.44.252.30 port 57030
Oct 11 09:26:01 compute-0 ceph-mon[74313]: pgmap v2531: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:26:02 compute-0 nova_compute[260935]: 2025-10-11 09:26:02.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:26:04 compute-0 ceph-mon[74313]: pgmap v2532: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:26:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:26:04 compute-0 nova_compute[260935]: 2025-10-11 09:26:04.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:26:05 compute-0 ceph-mon[74313]: pgmap v2533: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002627377275021163 of space, bias 1.0, pg target 0.788213182506349 quantized to 32 (current 32)
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:26:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:26:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2534: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:26:07 compute-0 nova_compute[260935]: 2025-10-11 09:26:07.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:08 compute-0 ceph-mon[74313]: pgmap v2534: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:26:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2535: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:26:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:26:09 compute-0 ceph-mon[74313]: pgmap v2535: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:26:09 compute-0 nova_compute[260935]: 2025-10-11 09:26:09.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:10 compute-0 nova_compute[260935]: 2025-10-11 09:26:10.391 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:10 compute-0 nova_compute[260935]: 2025-10-11 09:26:10.391 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:10 compute-0 nova_compute[260935]: 2025-10-11 09:26:10.411 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:26:10 compute-0 nova_compute[260935]: 2025-10-11 09:26:10.495 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:10 compute-0 nova_compute[260935]: 2025-10-11 09:26:10.496 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:10 compute-0 nova_compute[260935]: 2025-10-11 09:26:10.507 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:26:10 compute-0 nova_compute[260935]: 2025-10-11 09:26:10.507 2 INFO nova.compute.claims [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:26:10 compute-0 nova_compute[260935]: 2025-10-11 09:26:10.663 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2536: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:26:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:26:11 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3336150054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.200 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.208 2 DEBUG nova.compute.provider_tree [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.232 2 DEBUG nova.scheduler.client.report [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.260 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.261 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.331 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.331 2 DEBUG nova.network.neutron [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.366 2 INFO nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.397 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.512 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.515 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.516 2 INFO nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Creating image(s)
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.553 2 DEBUG nova.storage.rbd_utils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 36d0acda-9f37-4308-aa46-973f11c57b0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.598 2 DEBUG nova.storage.rbd_utils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 36d0acda-9f37-4308-aa46-973f11c57b0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.629 2 DEBUG nova.storage.rbd_utils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 36d0acda-9f37-4308-aa46-973f11c57b0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.633 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.699 2 DEBUG nova.policy [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.747 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.748 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.748 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.749 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.774 2 DEBUG nova.storage.rbd_utils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 36d0acda-9f37-4308-aa46-973f11c57b0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:11 compute-0 nova_compute[260935]: 2025-10-11 09:26:11.779 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 36d0acda-9f37-4308-aa46-973f11c57b0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:12 compute-0 ceph-mon[74313]: pgmap v2536: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:26:12 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3336150054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:26:12 compute-0 nova_compute[260935]: 2025-10-11 09:26:12.170 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 36d0acda-9f37-4308-aa46-973f11c57b0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:12 compute-0 nova_compute[260935]: 2025-10-11 09:26:12.250 2 DEBUG nova.storage.rbd_utils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image 36d0acda-9f37-4308-aa46-973f11c57b0e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:26:12 compute-0 nova_compute[260935]: 2025-10-11 09:26:12.340 2 DEBUG nova.objects.instance [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid 36d0acda-9f37-4308-aa46-973f11c57b0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:26:12 compute-0 nova_compute[260935]: 2025-10-11 09:26:12.355 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:26:12 compute-0 nova_compute[260935]: 2025-10-11 09:26:12.355 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Ensure instance console log exists: /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:26:12 compute-0 nova_compute[260935]: 2025-10-11 09:26:12.355 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:12 compute-0 nova_compute[260935]: 2025-10-11 09:26:12.356 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:12 compute-0 nova_compute[260935]: 2025-10-11 09:26:12.356 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:12 compute-0 nova_compute[260935]: 2025-10-11 09:26:12.718 2 DEBUG nova.network.neutron [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Successfully created port: 36e1391f-eaf8-490f-8434-c3fb25eed0a4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:26:12 compute-0 nova_compute[260935]: 2025-10-11 09:26:12.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2537: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 11 09:26:13 compute-0 nova_compute[260935]: 2025-10-11 09:26:13.475 2 DEBUG nova.network.neutron [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Successfully updated port: 36e1391f-eaf8-490f-8434-c3fb25eed0a4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:26:13 compute-0 nova_compute[260935]: 2025-10-11 09:26:13.510 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:26:13 compute-0 nova_compute[260935]: 2025-10-11 09:26:13.511 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:26:13 compute-0 nova_compute[260935]: 2025-10-11 09:26:13.511 2 DEBUG nova.network.neutron [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:26:13 compute-0 nova_compute[260935]: 2025-10-11 09:26:13.601 2 DEBUG nova.compute.manager [req-998fd68b-96b2-493d-a235-5c91d0b3165e req-447898cc-5795-4443-9c06-235bd75ca719 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:26:13 compute-0 nova_compute[260935]: 2025-10-11 09:26:13.602 2 DEBUG nova.compute.manager [req-998fd68b-96b2-493d-a235-5c91d0b3165e req-447898cc-5795-4443-9c06-235bd75ca719 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing instance network info cache due to event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:26:13 compute-0 nova_compute[260935]: 2025-10-11 09:26:13.603 2 DEBUG oslo_concurrency.lockutils [req-998fd68b-96b2-493d-a235-5c91d0b3165e req-447898cc-5795-4443-9c06-235bd75ca719 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:26:13 compute-0 nova_compute[260935]: 2025-10-11 09:26:13.696 2 DEBUG nova.network.neutron [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:26:13 compute-0 nova_compute[260935]: 2025-10-11 09:26:13.851 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:13 compute-0 nova_compute[260935]: 2025-10-11 09:26:13.851 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:13 compute-0 nova_compute[260935]: 2025-10-11 09:26:13.957 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:26:14 compute-0 ceph-mon[74313]: pgmap v2537: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.159 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.160 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.170 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.170 2 INFO nova.compute.claims [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.542 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.656 2 DEBUG nova.network.neutron [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updating instance_info_cache with network_info: [{"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.751 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.752 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Instance network_info: |[{"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.752 2 DEBUG oslo_concurrency.lockutils [req-998fd68b-96b2-493d-a235-5c91d0b3165e req-447898cc-5795-4443-9c06-235bd75ca719 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.752 2 DEBUG nova.network.neutron [req-998fd68b-96b2-493d-a235-5c91d0b3165e req-447898cc-5795-4443-9c06-235bd75ca719 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.756 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Start _get_guest_xml network_info=[{"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.761 2 WARNING nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.766 2 DEBUG nova.virt.libvirt.host [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.767 2 DEBUG nova.virt.libvirt.host [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.770 2 DEBUG nova.virt.libvirt.host [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.770 2 DEBUG nova.virt.libvirt.host [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.771 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.771 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.771 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.772 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.772 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.772 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.772 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.772 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.773 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.773 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.773 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.773 2 DEBUG nova.virt.hardware [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:26:14 compute-0 podman[399874]: 2025-10-11 09:26:14.776567793 +0000 UTC m=+0.067981280 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 09:26:14 compute-0 nova_compute[260935]: 2025-10-11 09:26:14.778 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2538: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 11 09:26:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:26:15 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/604605797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.029 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.037 2 DEBUG nova.compute.provider_tree [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:26:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/604605797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.093 2 DEBUG nova.scheduler.client.report [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:26:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:15.222 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:15.223 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:15.224 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.313 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:26:15 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1828184593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.314 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.331 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.371 2 DEBUG nova.storage.rbd_utils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 36d0acda-9f37-4308-aa46-973f11c57b0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.377 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.477 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.478 2 DEBUG nova.network.neutron [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.586 2 INFO nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.645 2 DEBUG nova.policy [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.696 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.868 2 DEBUG nova.network.neutron [req-998fd68b-96b2-493d-a235-5c91d0b3165e req-447898cc-5795-4443-9c06-235bd75ca719 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updated VIF entry in instance network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.868 2 DEBUG nova.network.neutron [req-998fd68b-96b2-493d-a235-5c91d0b3165e req-447898cc-5795-4443-9c06-235bd75ca719 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updating instance_info_cache with network_info: [{"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:26:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:26:15 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/695730209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.897 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.899 2 DEBUG nova.virt.libvirt.vif [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1410731753',display_name='tempest-TestNetworkBasicOps-server-1410731753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1410731753',id=127,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNfFDuZ2VUr+6EowKBtrZDd7zud1Oa+cp6ZA/ixez4vTqy3B2Qz2dWoCxMYTkI6OHKrvBP/PVMobSzlTBVFJQHby9DrXveKkH7hKU36MweJuxInYFJMR8tgPbvonZEpvAg==',key_name='tempest-TestNetworkBasicOps-714435839',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-4f9xvxhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:11Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=36d0acda-9f37-4308-aa46-973f11c57b0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.899 2 DEBUG nova.network.os_vif_util [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.900 2 DEBUG nova.network.os_vif_util [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:66:34,bridge_name='br-int',has_traffic_filtering=True,id=36e1391f-eaf8-490f-8434-c3fb25eed0a4,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e1391f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.901 2 DEBUG nova.objects.instance [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid 36d0acda-9f37-4308-aa46-973f11c57b0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.976 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:26:15 compute-0 nova_compute[260935]:   <uuid>36d0acda-9f37-4308-aa46-973f11c57b0e</uuid>
Oct 11 09:26:15 compute-0 nova_compute[260935]:   <name>instance-0000007f</name>
Oct 11 09:26:15 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:26:15 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:26:15 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkBasicOps-server-1410731753</nova:name>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:26:14</nova:creationTime>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:26:15 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:26:15 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:26:15 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:26:15 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:26:15 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:26:15 compute-0 nova_compute[260935]:         <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:26:15 compute-0 nova_compute[260935]:         <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:26:15 compute-0 nova_compute[260935]:         <nova:port uuid="36e1391f-eaf8-490f-8434-c3fb25eed0a4">
Oct 11 09:26:15 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:26:15 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:26:15 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <system>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <entry name="serial">36d0acda-9f37-4308-aa46-973f11c57b0e</entry>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <entry name="uuid">36d0acda-9f37-4308-aa46-973f11c57b0e</entry>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     </system>
Oct 11 09:26:15 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:26:15 compute-0 nova_compute[260935]:   <os>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:   </os>
Oct 11 09:26:15 compute-0 nova_compute[260935]:   <features>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:   </features>
Oct 11 09:26:15 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:26:15 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:26:15 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/36d0acda-9f37-4308-aa46-973f11c57b0e_disk">
Oct 11 09:26:15 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       </source>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:26:15 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/36d0acda-9f37-4308-aa46-973f11c57b0e_disk.config">
Oct 11 09:26:15 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       </source>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:26:15 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:da:66:34"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <target dev="tap36e1391f-ea"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e/console.log" append="off"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <video>
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     </video>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:26:15 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:26:15 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:26:15 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:26:15 compute-0 nova_compute[260935]: </domain>
Oct 11 09:26:15 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.977 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Preparing to wait for external event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.977 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.978 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.978 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.979 2 DEBUG nova.virt.libvirt.vif [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1410731753',display_name='tempest-TestNetworkBasicOps-server-1410731753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1410731753',id=127,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNfFDuZ2VUr+6EowKBtrZDd7zud1Oa+cp6ZA/ixez4vTqy3B2Qz2dWoCxMYTkI6OHKrvBP/PVMobSzlTBVFJQHby9DrXveKkH7hKU36MweJuxInYFJMR8tgPbvonZEpvAg==',key_name='tempest-TestNetworkBasicOps-714435839',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-4f9xvxhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:11Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=36d0acda-9f37-4308-aa46-973f11c57b0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.979 2 DEBUG nova.network.os_vif_util [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.980 2 DEBUG nova.network.os_vif_util [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:66:34,bridge_name='br-int',has_traffic_filtering=True,id=36e1391f-eaf8-490f-8434-c3fb25eed0a4,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e1391f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.980 2 DEBUG os_vif [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:66:34,bridge_name='br-int',has_traffic_filtering=True,id=36e1391f-eaf8-490f-8434-c3fb25eed0a4,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e1391f-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.983 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.984 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.984 2 DEBUG oslo_concurrency.lockutils [req-998fd68b-96b2-493d-a235-5c91d0b3165e req-447898cc-5795-4443-9c06-235bd75ca719 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.989 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36e1391f-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:15 compute-0 nova_compute[260935]: 2025-10-11 09:26:15.989 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap36e1391f-ea, col_values=(('external_ids', {'iface-id': '36e1391f-eaf8-490f-8434-c3fb25eed0a4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:66:34', 'vm-uuid': '36d0acda-9f37-4308-aa46-973f11c57b0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.004 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.005 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.005 2 INFO nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Creating image(s)
Oct 11 09:26:16 compute-0 NetworkManager[44960]: <info>  [1760174776.0229] manager: (tap36e1391f-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/539)
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.038 2 DEBUG nova.storage.rbd_utils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:16 compute-0 ceph-mon[74313]: pgmap v2538: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 11 09:26:16 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1828184593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:26:16 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/695730209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.093 2 DEBUG nova.storage.rbd_utils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.130 2 DEBUG nova.storage.rbd_utils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.142 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.204 2 INFO os_vif [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:66:34,bridge_name='br-int',has_traffic_filtering=True,id=36e1391f-eaf8-490f-8434-c3fb25eed0a4,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e1391f-ea')
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.251 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.251 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.252 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.253 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.282 2 DEBUG nova.storage.rbd_utils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.287 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.600 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.601 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.602 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:da:66:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.604 2 INFO nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Using config drive
Oct 11 09:26:16 compute-0 sshd-session[400052]: Invalid user admin from 165.232.82.252 port 43562
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.648 2 DEBUG nova.storage.rbd_utils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 36d0acda-9f37-4308-aa46-973f11c57b0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:16 compute-0 sshd-session[400052]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:26:16 compute-0 sshd-session[400052]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:26:16 compute-0 nova_compute[260935]: 2025-10-11 09:26:16.848 2 DEBUG nova.network.neutron [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Successfully created port: 0f8506e7-c03f-48bd-938d-f3b4cea2675b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:26:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2539: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 11 09:26:17 compute-0 nova_compute[260935]: 2025-10-11 09:26:17.073 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.786s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:17 compute-0 ceph-mon[74313]: pgmap v2539: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 11 09:26:17 compute-0 nova_compute[260935]: 2025-10-11 09:26:17.170 2 DEBUG nova.storage.rbd_utils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:26:17 compute-0 nova_compute[260935]: 2025-10-11 09:26:17.287 2 DEBUG nova.objects.instance [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid b39b8161-8a46-46fe-8a2a-0fc6b4eab850 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:26:17 compute-0 nova_compute[260935]: 2025-10-11 09:26:17.419 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:26:17 compute-0 nova_compute[260935]: 2025-10-11 09:26:17.419 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Ensure instance console log exists: /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:26:17 compute-0 nova_compute[260935]: 2025-10-11 09:26:17.420 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:17 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 09:26:17 compute-0 nova_compute[260935]: 2025-10-11 09:26:17.420 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:17 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 09:26:17 compute-0 nova_compute[260935]: 2025-10-11 09:26:17.421 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:17 compute-0 nova_compute[260935]: 2025-10-11 09:26:17.598 2 INFO nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Creating config drive at /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e/disk.config
Oct 11 09:26:17 compute-0 nova_compute[260935]: 2025-10-11 09:26:17.603 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpddx6wl9p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:17 compute-0 nova_compute[260935]: 2025-10-11 09:26:17.657 2 DEBUG nova.network.neutron [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Successfully created port: a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:26:17 compute-0 nova_compute[260935]: 2025-10-11 09:26:17.770 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpddx6wl9p" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:17 compute-0 nova_compute[260935]: 2025-10-11 09:26:17.813 2 DEBUG nova.storage.rbd_utils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image 36d0acda-9f37-4308-aa46-973f11c57b0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:17 compute-0 nova_compute[260935]: 2025-10-11 09:26:17.818 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e/disk.config 36d0acda-9f37-4308-aa46-973f11c57b0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:17 compute-0 nova_compute[260935]: 2025-10-11 09:26:17.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.115 2 DEBUG oslo_concurrency.processutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e/disk.config 36d0acda-9f37-4308-aa46-973f11c57b0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.116 2 INFO nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Deleting local config drive /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e/disk.config because it was imported into RBD.
Oct 11 09:26:18 compute-0 kernel: tap36e1391f-ea: entered promiscuous mode
Oct 11 09:26:18 compute-0 NetworkManager[44960]: <info>  [1760174778.2079] manager: (tap36e1391f-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/540)
Oct 11 09:26:18 compute-0 ovn_controller[152945]: 2025-10-11T09:26:18Z|01382|binding|INFO|Claiming lport 36e1391f-eaf8-490f-8434-c3fb25eed0a4 for this chassis.
Oct 11 09:26:18 compute-0 ovn_controller[152945]: 2025-10-11T09:26:18Z|01383|binding|INFO|36e1391f-eaf8-490f-8434-c3fb25eed0a4: Claiming fa:16:3e:da:66:34 10.100.0.11
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.225 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:66:34 10.100.0.11'], port_security=['fa:16:3e:da:66:34 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '36d0acda-9f37-4308-aa46-973f11c57b0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a0735faf-4c5a-437a-8d73-2ecca218ad1a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=419ebcda-831c-4f7b-8ef8-fba16bc71b52, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=36e1391f-eaf8-490f-8434-c3fb25eed0a4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.228 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 36e1391f-eaf8-490f-8434-c3fb25eed0a4 in datapath b9f9ae84-9b18-48f7-bff2-94e8835de5c8 bound to our chassis
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.232 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b9f9ae84-9b18-48f7-bff2-94e8835de5c8
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.252 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd244b6-d1ee-488a-a00f-9db0baa639e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.254 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb9f9ae84-91 in ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.256 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb9f9ae84-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.256 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e3da0435-ec86-418a-b4b1-a16648d0a02d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:18 compute-0 systemd-udevd[400219]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.258 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac21390-b012-4953-8441-aa1dbe6dffee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:18 compute-0 systemd-machined[215705]: New machine qemu-151-instance-0000007f.
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.281 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[8ac529fa-e420-4fc0-952b-d18f3b19a99b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:18 compute-0 NetworkManager[44960]: <info>  [1760174778.2862] device (tap36e1391f-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:26:18 compute-0 NetworkManager[44960]: <info>  [1760174778.2874] device (tap36e1391f-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:26:18 compute-0 systemd[1]: Started Virtual Machine qemu-151-instance-0000007f.
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:18 compute-0 ovn_controller[152945]: 2025-10-11T09:26:18Z|01384|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:26:18 compute-0 ovn_controller[152945]: 2025-10-11T09:26:18Z|01385|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.314 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cf23d405-adac-41fd-8789-fd0e359c7499]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:18 compute-0 ovn_controller[152945]: 2025-10-11T09:26:18Z|01386|binding|INFO|Setting lport 36e1391f-eaf8-490f-8434-c3fb25eed0a4 ovn-installed in OVS
Oct 11 09:26:18 compute-0 ovn_controller[152945]: 2025-10-11T09:26:18Z|01387|binding|INFO|Setting lport 36e1391f-eaf8-490f-8434-c3fb25eed0a4 up in Southbound
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.359 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c57a4cea-7914-418d-a203-8c4073fefff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.367 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b4780d4d-ab6a-4ecf-b44b-b336473a5974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:18 compute-0 NetworkManager[44960]: <info>  [1760174778.3691] manager: (tapb9f9ae84-90): new Veth device (/org/freedesktop/NetworkManager/Devices/541)
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.423 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[711a920b-cc48-43ed-9a2b-8926beb39783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:18 compute-0 sshd-session[400052]: Failed password for invalid user admin from 165.232.82.252 port 43562 ssh2
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.428 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e78169a9-9abe-4242-a2f5-4e7e75b081d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:18 compute-0 NetworkManager[44960]: <info>  [1760174778.4650] device (tapb9f9ae84-90): carrier: link connected
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.474 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf6db54-028b-4356-99a6-36595b92a947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.505 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f2583178-27a8-4425-9db9-475f0bfe743e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9f9ae84-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:ad:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662316, 'reachable_time': 34988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400252, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.521 2 DEBUG nova.network.neutron [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Successfully updated port: 0f8506e7-c03f-48bd-938d-f3b4cea2675b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.532 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[30b15c68-8067-4849-93f1-46f31e638349]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:ad53'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662316, 'tstamp': 662316}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400253, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.562 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[21321d27-60c9-4c33-963c-5e4273d13c88]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9f9ae84-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:ad:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662316, 'reachable_time': 34988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 400254, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.618 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[11cdbdba-b3e3-48f3-b04e-cbdd917f89b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.675 2 DEBUG nova.compute.manager [req-1eb22a8e-2e35-4158-bfe9-06c9e91a00f9 req-9d98cb8d-bad8-42d7-854f-5b7f41321a54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-changed-0f8506e7-c03f-48bd-938d-f3b4cea2675b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.675 2 DEBUG nova.compute.manager [req-1eb22a8e-2e35-4158-bfe9-06c9e91a00f9 req-9d98cb8d-bad8-42d7-854f-5b7f41321a54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Refreshing instance network info cache due to event network-changed-0f8506e7-c03f-48bd-938d-f3b4cea2675b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.675 2 DEBUG oslo_concurrency.lockutils [req-1eb22a8e-2e35-4158-bfe9-06c9e91a00f9 req-9d98cb8d-bad8-42d7-854f-5b7f41321a54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.676 2 DEBUG oslo_concurrency.lockutils [req-1eb22a8e-2e35-4158-bfe9-06c9e91a00f9 req-9d98cb8d-bad8-42d7-854f-5b7f41321a54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.676 2 DEBUG nova.network.neutron [req-1eb22a8e-2e35-4158-bfe9-06c9e91a00f9 req-9d98cb8d-bad8-42d7-854f-5b7f41321a54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Refreshing network info cache for port 0f8506e7-c03f-48bd-938d-f3b4cea2675b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.713 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6c0aa0-ddfc-4e4d-8c75-b966b5748bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.714 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9f9ae84-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.715 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.715 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9f9ae84-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:18 compute-0 kernel: tapb9f9ae84-90: entered promiscuous mode
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:18 compute-0 NetworkManager[44960]: <info>  [1760174778.7200] manager: (tapb9f9ae84-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.725 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb9f9ae84-90, col_values=(('external_ids', {'iface-id': '829ba2ca-e21f-4927-8525-5f43e59d37f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:18 compute-0 ovn_controller[152945]: 2025-10-11T09:26:18Z|01388|binding|INFO|Releasing lport 829ba2ca-e21f-4927-8525-5f43e59d37f8 from this chassis (sb_readonly=0)
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.758 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b9f9ae84-9b18-48f7-bff2-94e8835de5c8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b9f9ae84-9b18-48f7-bff2-94e8835de5c8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.759 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5b489582-7c93-4dda-8f27-776d406e1c15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.761 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-b9f9ae84-9b18-48f7-bff2-94e8835de5c8
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/b9f9ae84-9b18-48f7-bff2-94e8835de5c8.pid.haproxy
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID b9f9ae84-9b18-48f7-bff2-94e8835de5c8
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:26:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:18.761 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'env', 'PROCESS_TAG=haproxy-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b9f9ae84-9b18-48f7-bff2-94e8835de5c8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.857 2 DEBUG nova.compute.manager [req-a2379df5-b6bf-4980-a8fa-d10318d4c461 req-b676a8fa-2688-4d6c-92bf-ae7396bb8a28 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.857 2 DEBUG oslo_concurrency.lockutils [req-a2379df5-b6bf-4980-a8fa-d10318d4c461 req-b676a8fa-2688-4d6c-92bf-ae7396bb8a28 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.858 2 DEBUG oslo_concurrency.lockutils [req-a2379df5-b6bf-4980-a8fa-d10318d4c461 req-b676a8fa-2688-4d6c-92bf-ae7396bb8a28 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.858 2 DEBUG oslo_concurrency.lockutils [req-a2379df5-b6bf-4980-a8fa-d10318d4c461 req-b676a8fa-2688-4d6c-92bf-ae7396bb8a28 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.858 2 DEBUG nova.compute.manager [req-a2379df5-b6bf-4980-a8fa-d10318d4c461 req-b676a8fa-2688-4d6c-92bf-ae7396bb8a28 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Processing event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:26:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 09:26:18 compute-0 nova_compute[260935]: 2025-10-11 09:26:18.904 2 DEBUG nova.network.neutron [req-1eb22a8e-2e35-4158-bfe9-06c9e91a00f9 req-9d98cb8d-bad8-42d7-854f-5b7f41321a54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:26:19 compute-0 unix_chkpwd[400306]: password check failed for user (root)
Oct 11 09:26:19 compute-0 sshd-session[400186]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179  user=root
Oct 11 09:26:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.247 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174779.2463243, 36d0acda-9f37-4308-aa46-973f11c57b0e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.249 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] VM Started (Lifecycle Event)
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.253 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:26:19 compute-0 podman[400329]: 2025-10-11 09:26:19.257863118 +0000 UTC m=+0.065276513 container create 98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.258 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.274 2 INFO nova.virt.libvirt.driver [-] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Instance spawned successfully.
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.275 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.289 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.295 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:26:19 compute-0 systemd[1]: Started libpod-conmon-98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5.scope.
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.313 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.314 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.316 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:19 compute-0 podman[400329]: 2025-10-11 09:26:19.220750841 +0000 UTC m=+0.028164226 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.317 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.318 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.319 2 DEBUG nova.virt.libvirt.driver [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.326 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.328 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174779.248165, 36d0acda-9f37-4308-aa46-973f11c57b0e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.328 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] VM Paused (Lifecycle Event)
Oct 11 09:26:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b93609cfe788969adf943db69f3753cdeb411820f1a1f8aaf68d3557d2f4c37/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.364 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.370 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174779.2563791, 36d0acda-9f37-4308-aa46-973f11c57b0e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.370 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] VM Resumed (Lifecycle Event)
Oct 11 09:26:19 compute-0 podman[400329]: 2025-10-11 09:26:19.378698658 +0000 UTC m=+0.186112073 container init 98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 09:26:19 compute-0 podman[400329]: 2025-10-11 09:26:19.386608042 +0000 UTC m=+0.194021437 container start 98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.388 2 INFO nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Took 7.87 seconds to spawn the instance on the hypervisor.
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.389 2 DEBUG nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.393 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:26:19 compute-0 podman[400342]: 2025-10-11 09:26:19.394377071 +0000 UTC m=+0.091306168 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.408 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:26:19 compute-0 neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8[400350]: [NOTICE]   (400368) : New worker (400370) forked
Oct 11 09:26:19 compute-0 neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8[400350]: [NOTICE]   (400368) : Loading success.
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.443 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.466 2 INFO nova.compute.manager [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Took 9.00 seconds to build instance.
Oct 11 09:26:19 compute-0 nova_compute[260935]: 2025-10-11 09:26:19.482 2 DEBUG oslo_concurrency.lockutils [None req-8964e318-74c5-4774-835b-ae3e1876fb32 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:19 compute-0 ceph-mon[74313]: pgmap v2540: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 09:26:19 compute-0 sshd-session[400052]: Connection closed by invalid user admin 165.232.82.252 port 43562 [preauth]
Oct 11 09:26:20 compute-0 nova_compute[260935]: 2025-10-11 09:26:20.261 2 DEBUG nova.network.neutron [req-1eb22a8e-2e35-4158-bfe9-06c9e91a00f9 req-9d98cb8d-bad8-42d7-854f-5b7f41321a54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:26:20 compute-0 nova_compute[260935]: 2025-10-11 09:26:20.335 2 DEBUG oslo_concurrency.lockutils [req-1eb22a8e-2e35-4158-bfe9-06c9e91a00f9 req-9d98cb8d-bad8-42d7-854f-5b7f41321a54 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:26:20 compute-0 nova_compute[260935]: 2025-10-11 09:26:20.504 2 DEBUG nova.network.neutron [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Successfully updated port: a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:26:20 compute-0 nova_compute[260935]: 2025-10-11 09:26:20.592 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:26:20 compute-0 nova_compute[260935]: 2025-10-11 09:26:20.593 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:26:20 compute-0 nova_compute[260935]: 2025-10-11 09:26:20.593 2 DEBUG nova.network.neutron [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:26:20 compute-0 nova_compute[260935]: 2025-10-11 09:26:20.756 2 DEBUG nova.network.neutron [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:26:20 compute-0 sshd-session[400186]: Failed password for root from 155.4.244.179 port 16098 ssh2
Oct 11 09:26:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 09:26:20 compute-0 nova_compute[260935]: 2025-10-11 09:26:20.914 2 DEBUG nova.compute.manager [req-581c61dc-dfa6-4212-806b-3989c6eee803 req-913ecc62-ea06-496d-adcf-4740ec3daacc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-changed-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:26:20 compute-0 nova_compute[260935]: 2025-10-11 09:26:20.914 2 DEBUG nova.compute.manager [req-581c61dc-dfa6-4212-806b-3989c6eee803 req-913ecc62-ea06-496d-adcf-4740ec3daacc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Refreshing instance network info cache due to event network-changed-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:26:20 compute-0 nova_compute[260935]: 2025-10-11 09:26:20.915 2 DEBUG oslo_concurrency.lockutils [req-581c61dc-dfa6-4212-806b-3989c6eee803 req-913ecc62-ea06-496d-adcf-4740ec3daacc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:26:21 compute-0 nova_compute[260935]: 2025-10-11 09:26:21.004 2 DEBUG nova.compute.manager [req-751c6d1a-9c99-4e98-88f3-d82a2cc0ed61 req-bbc07894-f84b-4024-87c4-2e66b1d34ed2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:26:21 compute-0 nova_compute[260935]: 2025-10-11 09:26:21.004 2 DEBUG oslo_concurrency.lockutils [req-751c6d1a-9c99-4e98-88f3-d82a2cc0ed61 req-bbc07894-f84b-4024-87c4-2e66b1d34ed2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:21 compute-0 nova_compute[260935]: 2025-10-11 09:26:21.005 2 DEBUG oslo_concurrency.lockutils [req-751c6d1a-9c99-4e98-88f3-d82a2cc0ed61 req-bbc07894-f84b-4024-87c4-2e66b1d34ed2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:21 compute-0 nova_compute[260935]: 2025-10-11 09:26:21.005 2 DEBUG oslo_concurrency.lockutils [req-751c6d1a-9c99-4e98-88f3-d82a2cc0ed61 req-bbc07894-f84b-4024-87c4-2e66b1d34ed2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:21 compute-0 nova_compute[260935]: 2025-10-11 09:26:21.006 2 DEBUG nova.compute.manager [req-751c6d1a-9c99-4e98-88f3-d82a2cc0ed61 req-bbc07894-f84b-4024-87c4-2e66b1d34ed2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] No waiting events found dispatching network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:26:21 compute-0 nova_compute[260935]: 2025-10-11 09:26:21.006 2 WARNING nova.compute.manager [req-751c6d1a-9c99-4e98-88f3-d82a2cc0ed61 req-bbc07894-f84b-4024-87c4-2e66b1d34ed2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received unexpected event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 for instance with vm_state active and task_state None.
Oct 11 09:26:21 compute-0 nova_compute[260935]: 2025-10-11 09:26:21.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:21 compute-0 sshd-session[400186]: Received disconnect from 155.4.244.179 port 16098:11: Bye Bye [preauth]
Oct 11 09:26:21 compute-0 sshd-session[400186]: Disconnected from authenticating user root 155.4.244.179 port 16098 [preauth]
Oct 11 09:26:21 compute-0 nova_compute[260935]: 2025-10-11 09:26:21.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:26:21 compute-0 ceph-mon[74313]: pgmap v2541: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct 11 09:26:22 compute-0 nova_compute[260935]: 2025-10-11 09:26:22.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2542: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Oct 11 09:26:24 compute-0 ceph-mon[74313]: pgmap v2542: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Oct 11 09:26:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:26:24 compute-0 podman[400379]: 2025-10-11 09:26:24.783565777 +0000 UTC m=+0.081143041 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 11 09:26:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:26:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:26:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:26:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:26:24 compute-0 podman[400380]: 2025-10-11 09:26:24.845748252 +0000 UTC m=+0.140581159 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:26:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:26:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:26:24 compute-0 nova_compute[260935]: 2025-10-11 09:26:24.891 2 DEBUG nova.network.neutron [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updating instance_info_cache with network_info: [{"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:26:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2543: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.245 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.246 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Instance network_info: |[{"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.246 2 DEBUG oslo_concurrency.lockutils [req-581c61dc-dfa6-4212-806b-3989c6eee803 req-913ecc62-ea06-496d-adcf-4740ec3daacc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.247 2 DEBUG nova.network.neutron [req-581c61dc-dfa6-4212-806b-3989c6eee803 req-913ecc62-ea06-496d-adcf-4740ec3daacc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Refreshing network info cache for port a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.251 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Start _get_guest_xml network_info=[{"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.255 2 WARNING nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.261 2 DEBUG nova.virt.libvirt.host [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.262 2 DEBUG nova.virt.libvirt.host [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.266 2 DEBUG nova.virt.libvirt.host [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.267 2 DEBUG nova.virt.libvirt.host [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.267 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.268 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.269 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.269 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.270 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.270 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.271 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.271 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.271 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.272 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.272 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.272 2 DEBUG nova.virt.hardware [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.276 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:26:25 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388407615' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.733 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.762 2 DEBUG nova.storage.rbd_utils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:25 compute-0 nova_compute[260935]: 2025-10-11 09:26:25.766 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:26 compute-0 ceph-mon[74313]: pgmap v2543: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:26:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1388407615' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:26:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4168525353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.208 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.210 2 DEBUG nova.virt.libvirt.vif [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892429863',display_name='tempest-TestGettingAddress-server-1892429863',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892429863',id=128,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-yz54v603',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:15Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=b39b8161-8a46-46fe-8a2a-0fc6b4eab850,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.211 2 DEBUG nova.network.os_vif_util [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.212 2 DEBUG nova.network.os_vif_util [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:67:d9,bridge_name='br-int',has_traffic_filtering=True,id=0f8506e7-c03f-48bd-938d-f3b4cea2675b,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f8506e7-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.214 2 DEBUG nova.virt.libvirt.vif [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892429863',display_name='tempest-TestGettingAddress-server-1892429863',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892429863',id=128,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-yz54v603',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:15Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=b39b8161-8a46-46fe-8a2a-0fc6b4eab850,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.214 2 DEBUG nova.network.os_vif_util [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.215 2 DEBUG nova.network.os_vif_util [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e9:36,bridge_name='br-int',has_traffic_filtering=True,id=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0aadf8d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.217 2 DEBUG nova.objects.instance [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid b39b8161-8a46-46fe-8a2a-0fc6b4eab850 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.259 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:26:26 compute-0 nova_compute[260935]:   <uuid>b39b8161-8a46-46fe-8a2a-0fc6b4eab850</uuid>
Oct 11 09:26:26 compute-0 nova_compute[260935]:   <name>instance-00000080</name>
Oct 11 09:26:26 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:26:26 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:26:26 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <nova:name>tempest-TestGettingAddress-server-1892429863</nova:name>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:26:25</nova:creationTime>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:26:26 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:26:26 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:26:26 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:26:26 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:26:26 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:26:26 compute-0 nova_compute[260935]:         <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 09:26:26 compute-0 nova_compute[260935]:         <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:26:26 compute-0 nova_compute[260935]:         <nova:port uuid="0f8506e7-c03f-48bd-938d-f3b4cea2675b">
Oct 11 09:26:26 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:26:26 compute-0 nova_compute[260935]:         <nova:port uuid="a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088">
Oct 11 09:26:26 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe4d:e936" ipVersion="6"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:26:26 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:26:26 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <system>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <entry name="serial">b39b8161-8a46-46fe-8a2a-0fc6b4eab850</entry>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <entry name="uuid">b39b8161-8a46-46fe-8a2a-0fc6b4eab850</entry>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     </system>
Oct 11 09:26:26 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:26:26 compute-0 nova_compute[260935]:   <os>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:   </os>
Oct 11 09:26:26 compute-0 nova_compute[260935]:   <features>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:   </features>
Oct 11 09:26:26 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:26:26 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:26:26 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk">
Oct 11 09:26:26 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       </source>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:26:26 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk.config">
Oct 11 09:26:26 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       </source>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:26:26 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:f4:67:d9"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <target dev="tap0f8506e7-c0"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:4d:e9:36"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <target dev="tapa0aadf8d-3a"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850/console.log" append="off"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <video>
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     </video>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:26:26 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:26:26 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:26:26 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:26:26 compute-0 nova_compute[260935]: </domain>
Oct 11 09:26:26 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.259 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Preparing to wait for external event network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.260 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.260 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.261 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.261 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Preparing to wait for external event network-vif-plugged-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.261 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.262 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.262 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.263 2 DEBUG nova.virt.libvirt.vif [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892429863',display_name='tempest-TestGettingAddress-server-1892429863',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892429863',id=128,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-yz54v603',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:15Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=b39b8161-8a46-46fe-8a2a-0fc6b4eab850,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.263 2 DEBUG nova.network.os_vif_util [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.264 2 DEBUG nova.network.os_vif_util [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:67:d9,bridge_name='br-int',has_traffic_filtering=True,id=0f8506e7-c03f-48bd-938d-f3b4cea2675b,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f8506e7-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.265 2 DEBUG os_vif [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:67:d9,bridge_name='br-int',has_traffic_filtering=True,id=0f8506e7-c03f-48bd-938d-f3b4cea2675b,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f8506e7-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.266 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.272 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f8506e7-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f8506e7-c0, col_values=(('external_ids', {'iface-id': '0f8506e7-c03f-48bd-938d-f3b4cea2675b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:67:d9', 'vm-uuid': 'b39b8161-8a46-46fe-8a2a-0fc6b4eab850'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:26 compute-0 NetworkManager[44960]: <info>  [1760174786.2757] manager: (tap0f8506e7-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/543)
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.287 2 INFO os_vif [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:67:d9,bridge_name='br-int',has_traffic_filtering=True,id=0f8506e7-c03f-48bd-938d-f3b4cea2675b,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f8506e7-c0')
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.288 2 DEBUG nova.virt.libvirt.vif [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892429863',display_name='tempest-TestGettingAddress-server-1892429863',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892429863',id=128,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-yz54v603',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:15Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=b39b8161-8a46-46fe-8a2a-0fc6b4eab850,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.289 2 DEBUG nova.network.os_vif_util [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.290 2 DEBUG nova.network.os_vif_util [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e9:36,bridge_name='br-int',has_traffic_filtering=True,id=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0aadf8d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.290 2 DEBUG os_vif [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e9:36,bridge_name='br-int',has_traffic_filtering=True,id=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0aadf8d-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.291 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.292 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.294 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0aadf8d-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.295 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0aadf8d-3a, col_values=(('external_ids', {'iface-id': 'a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:e9:36', 'vm-uuid': 'b39b8161-8a46-46fe-8a2a-0fc6b4eab850'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:26 compute-0 NetworkManager[44960]: <info>  [1760174786.2973] manager: (tapa0aadf8d-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/544)
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.310 2 INFO os_vif [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e9:36,bridge_name='br-int',has_traffic_filtering=True,id=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0aadf8d-3a')
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.655 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.655 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.656 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:f4:67:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.656 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:4d:e9:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.657 2 INFO nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Using config drive
Oct 11 09:26:26 compute-0 nova_compute[260935]: 2025-10-11 09:26:26.688 2 DEBUG nova.storage.rbd_utils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:26:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/309278539' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:26:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:26:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/309278539' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:26:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:26:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4168525353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:26:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/309278539' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:26:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/309278539' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:26:27 compute-0 nova_compute[260935]: 2025-10-11 09:26:27.246 2 INFO nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Creating config drive at /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850/disk.config
Oct 11 09:26:27 compute-0 nova_compute[260935]: 2025-10-11 09:26:27.250 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv087eoef execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:27 compute-0 NetworkManager[44960]: <info>  [1760174787.3228] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/545)
Oct 11 09:26:27 compute-0 NetworkManager[44960]: <info>  [1760174787.3241] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/546)
Oct 11 09:26:27 compute-0 nova_compute[260935]: 2025-10-11 09:26:27.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:27 compute-0 nova_compute[260935]: 2025-10-11 09:26:27.397 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv087eoef" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:27 compute-0 nova_compute[260935]: 2025-10-11 09:26:27.440 2 DEBUG nova.storage.rbd_utils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:27 compute-0 nova_compute[260935]: 2025-10-11 09:26:27.452 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850/disk.config b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:27 compute-0 ovn_controller[152945]: 2025-10-11T09:26:27Z|01389|binding|INFO|Releasing lport 829ba2ca-e21f-4927-8525-5f43e59d37f8 from this chassis (sb_readonly=0)
Oct 11 09:26:27 compute-0 ovn_controller[152945]: 2025-10-11T09:26:27Z|01390|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:26:27 compute-0 ovn_controller[152945]: 2025-10-11T09:26:27Z|01391|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:26:27 compute-0 nova_compute[260935]: 2025-10-11 09:26:27.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:27 compute-0 nova_compute[260935]: 2025-10-11 09:26:27.797 2 DEBUG nova.network.neutron [req-581c61dc-dfa6-4212-806b-3989c6eee803 req-913ecc62-ea06-496d-adcf-4740ec3daacc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updated VIF entry in instance network info cache for port a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:26:27 compute-0 nova_compute[260935]: 2025-10-11 09:26:27.798 2 DEBUG nova.network.neutron [req-581c61dc-dfa6-4212-806b-3989c6eee803 req-913ecc62-ea06-496d-adcf-4740ec3daacc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updating instance_info_cache with network_info: [{"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:26:27 compute-0 nova_compute[260935]: 2025-10-11 09:26:27.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:28 compute-0 nova_compute[260935]: 2025-10-11 09:26:28.017 2 DEBUG oslo_concurrency.lockutils [req-581c61dc-dfa6-4212-806b-3989c6eee803 req-913ecc62-ea06-496d-adcf-4740ec3daacc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:26:28 compute-0 nova_compute[260935]: 2025-10-11 09:26:28.233 2 DEBUG nova.compute.manager [req-dea1eaee-1a8f-46f3-aef4-56e6bd341a75 req-5b771df6-e6ff-460d-a831-b71261d3fd3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:26:28 compute-0 nova_compute[260935]: 2025-10-11 09:26:28.233 2 DEBUG nova.compute.manager [req-dea1eaee-1a8f-46f3-aef4-56e6bd341a75 req-5b771df6-e6ff-460d-a831-b71261d3fd3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing instance network info cache due to event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:26:28 compute-0 nova_compute[260935]: 2025-10-11 09:26:28.234 2 DEBUG oslo_concurrency.lockutils [req-dea1eaee-1a8f-46f3-aef4-56e6bd341a75 req-5b771df6-e6ff-460d-a831-b71261d3fd3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:26:28 compute-0 nova_compute[260935]: 2025-10-11 09:26:28.234 2 DEBUG oslo_concurrency.lockutils [req-dea1eaee-1a8f-46f3-aef4-56e6bd341a75 req-5b771df6-e6ff-460d-a831-b71261d3fd3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:26:28 compute-0 nova_compute[260935]: 2025-10-11 09:26:28.234 2 DEBUG nova.network.neutron [req-dea1eaee-1a8f-46f3-aef4-56e6bd341a75 req-5b771df6-e6ff-460d-a831-b71261d3fd3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:26:28 compute-0 ceph-mon[74313]: pgmap v2544: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:26:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:26:29 compute-0 nova_compute[260935]: 2025-10-11 09:26:29.104 2 DEBUG oslo_concurrency.processutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850/disk.config b39b8161-8a46-46fe-8a2a-0fc6b4eab850_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:29 compute-0 nova_compute[260935]: 2025-10-11 09:26:29.105 2 INFO nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Deleting local config drive /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850/disk.config because it was imported into RBD.
Oct 11 09:26:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:26:29 compute-0 kernel: tap0f8506e7-c0: entered promiscuous mode
Oct 11 09:26:29 compute-0 NetworkManager[44960]: <info>  [1760174789.1856] manager: (tap0f8506e7-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/547)
Oct 11 09:26:29 compute-0 ovn_controller[152945]: 2025-10-11T09:26:29Z|01392|binding|INFO|Claiming lport 0f8506e7-c03f-48bd-938d-f3b4cea2675b for this chassis.
Oct 11 09:26:29 compute-0 ovn_controller[152945]: 2025-10-11T09:26:29Z|01393|binding|INFO|0f8506e7-c03f-48bd-938d-f3b4cea2675b: Claiming fa:16:3e:f4:67:d9 10.100.0.4
Oct 11 09:26:29 compute-0 nova_compute[260935]: 2025-10-11 09:26:29.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:29 compute-0 NetworkManager[44960]: <info>  [1760174789.2114] manager: (tapa0aadf8d-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/548)
Oct 11 09:26:29 compute-0 kernel: tapa0aadf8d-3a: entered promiscuous mode
Oct 11 09:26:29 compute-0 ovn_controller[152945]: 2025-10-11T09:26:29Z|01394|binding|INFO|Setting lport 0f8506e7-c03f-48bd-938d-f3b4cea2675b ovn-installed in OVS
Oct 11 09:26:29 compute-0 ovn_controller[152945]: 2025-10-11T09:26:29Z|01395|if_status|INFO|Not updating pb chassis for a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 now as sb is readonly
Oct 11 09:26:29 compute-0 nova_compute[260935]: 2025-10-11 09:26:29.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.226 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:67:d9 10.100.0.4'], port_security=['fa:16:3e:f4:67:d9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b39b8161-8a46-46fe-8a2a-0fc6b4eab850', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-024b1f88-7312-4f05-a55e-4c82e878906e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'efa7de30-a0ba-4f09-b29e-87a969a72d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a6a9a27-420b-4816-9550-7c34ef42c810, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0f8506e7-c03f-48bd-938d-f3b4cea2675b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:26:29 compute-0 ovn_controller[152945]: 2025-10-11T09:26:29Z|01396|binding|INFO|Claiming lport a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 for this chassis.
Oct 11 09:26:29 compute-0 ovn_controller[152945]: 2025-10-11T09:26:29Z|01397|binding|INFO|a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088: Claiming fa:16:3e:4d:e9:36 2001:db8::f816:3eff:fe4d:e936
Oct 11 09:26:29 compute-0 ovn_controller[152945]: 2025-10-11T09:26:29Z|01398|binding|INFO|Setting lport 0f8506e7-c03f-48bd-938d-f3b4cea2675b up in Southbound
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.227 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0f8506e7-c03f-48bd-938d-f3b4cea2675b in datapath 024b1f88-7312-4f05-a55e-4c82e878906e bound to our chassis
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.229 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 024b1f88-7312-4f05-a55e-4c82e878906e
Oct 11 09:26:29 compute-0 systemd-udevd[400570]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:26:29 compute-0 systemd-udevd[400571]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:26:29 compute-0 ovn_controller[152945]: 2025-10-11T09:26:29Z|01399|binding|INFO|Setting lport a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 ovn-installed in OVS
Oct 11 09:26:29 compute-0 nova_compute[260935]: 2025-10-11 09:26:29.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.247 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f442b89e-16c9-4de9-99d5-e3f56a6987e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:29 compute-0 NetworkManager[44960]: <info>  [1760174789.2516] device (tap0f8506e7-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:26:29 compute-0 ovn_controller[152945]: 2025-10-11T09:26:29Z|01400|binding|INFO|Setting lport a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 up in Southbound
Oct 11 09:26:29 compute-0 NetworkManager[44960]: <info>  [1760174789.2525] device (tap0f8506e7-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.251 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:e9:36 2001:db8::f816:3eff:fe4d:e936'], port_security=['fa:16:3e:4d:e9:36 2001:db8::f816:3eff:fe4d:e936'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe4d:e936/64', 'neutron:device_id': 'b39b8161-8a46-46fe-8a2a-0fc6b4eab850', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-894decad-3bed-4c55-b643-5fbe5479bf3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'efa7de30-a0ba-4f09-b29e-87a969a72d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c88be83-b34c-4a0a-8d40-8a42bbccb566, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.252 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap024b1f88-71 in ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:26:29 compute-0 NetworkManager[44960]: <info>  [1760174789.2553] device (tapa0aadf8d-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.255 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap024b1f88-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.255 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef96c4f-0208-4f18-b8cc-a2b8ef634899]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:29 compute-0 NetworkManager[44960]: <info>  [1760174789.2561] device (tapa0aadf8d-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.256 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[92bad109-a86a-4bf4-ac44-13ec4505501d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:29 compute-0 systemd-machined[215705]: New machine qemu-152-instance-00000080.
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.275 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6919be81-57c9-4e13-b98e-6cc80a964b79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:29 compute-0 systemd[1]: Started Virtual Machine qemu-152-instance-00000080.
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.294 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c9d063-7b9b-4e67-99a3-cf07cb8af700]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.330 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d62ac21a-0ecc-4882-8f36-409bbc7871f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.339 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2557dc-9452-423d-86dc-827155e86531]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:29 compute-0 NetworkManager[44960]: <info>  [1760174789.3404] manager: (tap024b1f88-70): new Veth device (/org/freedesktop/NetworkManager/Devices/549)
Oct 11 09:26:29 compute-0 systemd-udevd[400577]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.382 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee992c0-d133-41d2-8627-890e59968eaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.389 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2b181a66-343b-4a1a-8d9a-3f3f99bcc9ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:29 compute-0 NetworkManager[44960]: <info>  [1760174789.4231] device (tap024b1f88-70): carrier: link connected
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.430 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0a309948-b1ef-49ec-bb63-f93b62c89153]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.462 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed84848-1ee8-4b4d-a5d7-6c5018878656]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap024b1f88-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:47:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663412, 'reachable_time': 33597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400607, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.479 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cacb4dc6-f60b-4654-8081-2bb70836b447]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:473b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663412, 'tstamp': 663412}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400608, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.506 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[373d8c5f-f8d7-4354-9f06-013545662e13]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap024b1f88-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:47:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663412, 'reachable_time': 33597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 400609, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.558 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f5b7d3-ea0e-415f-9f92-c8407af28a32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:29 compute-0 ceph-mon[74313]: pgmap v2545: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.673 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[703f6e5a-abd3-4572-bfa4-7ae85452d9a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.674 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap024b1f88-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.675 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.675 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap024b1f88-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:29 compute-0 nova_compute[260935]: 2025-10-11 09:26:29.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:29 compute-0 kernel: tap024b1f88-70: entered promiscuous mode
Oct 11 09:26:29 compute-0 NetworkManager[44960]: <info>  [1760174789.6792] manager: (tap024b1f88-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/550)
Oct 11 09:26:29 compute-0 nova_compute[260935]: 2025-10-11 09:26:29.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.683 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap024b1f88-70, col_values=(('external_ids', {'iface-id': '219b454c-6c32-4a38-b10c-afcdb1be1980'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:29 compute-0 ovn_controller[152945]: 2025-10-11T09:26:29Z|01401|binding|INFO|Releasing lport 219b454c-6c32-4a38-b10c-afcdb1be1980 from this chassis (sb_readonly=0)
Oct 11 09:26:29 compute-0 nova_compute[260935]: 2025-10-11 09:26:29.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:29 compute-0 nova_compute[260935]: 2025-10-11 09:26:29.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.717 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/024b1f88-7312-4f05-a55e-4c82e878906e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/024b1f88-7312-4f05-a55e-4c82e878906e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.720 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[28117820-ed7d-44f1-bc4e-456ac999cdb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.721 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-024b1f88-7312-4f05-a55e-4c82e878906e
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/024b1f88-7312-4f05-a55e-4c82e878906e.pid.haproxy
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 024b1f88-7312-4f05-a55e-4c82e878906e
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:26:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:29.722 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'env', 'PROCESS_TAG=haproxy-024b1f88-7312-4f05-a55e-4c82e878906e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/024b1f88-7312-4f05-a55e-4c82e878906e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:26:29 compute-0 nova_compute[260935]: 2025-10-11 09:26:29.771 2 DEBUG nova.compute.manager [req-fbfbafdb-d7d1-4c5b-9536-a98b65fced9c req-a71930b8-40c9-4060-9df9-c6f060b08d09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:26:29 compute-0 nova_compute[260935]: 2025-10-11 09:26:29.771 2 DEBUG oslo_concurrency.lockutils [req-fbfbafdb-d7d1-4c5b-9536-a98b65fced9c req-a71930b8-40c9-4060-9df9-c6f060b08d09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:29 compute-0 nova_compute[260935]: 2025-10-11 09:26:29.772 2 DEBUG oslo_concurrency.lockutils [req-fbfbafdb-d7d1-4c5b-9536-a98b65fced9c req-a71930b8-40c9-4060-9df9-c6f060b08d09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:29 compute-0 nova_compute[260935]: 2025-10-11 09:26:29.772 2 DEBUG oslo_concurrency.lockutils [req-fbfbafdb-d7d1-4c5b-9536-a98b65fced9c req-a71930b8-40c9-4060-9df9-c6f060b08d09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:29 compute-0 nova_compute[260935]: 2025-10-11 09:26:29.773 2 DEBUG nova.compute.manager [req-fbfbafdb-d7d1-4c5b-9536-a98b65fced9c req-a71930b8-40c9-4060-9df9-c6f060b08d09 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Processing event network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:26:30 compute-0 podman[400660]: 2025-10-11 09:26:30.205016922 +0000 UTC m=+0.054266312 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:26:30 compute-0 nova_compute[260935]: 2025-10-11 09:26:30.358 2 DEBUG nova.network.neutron [req-dea1eaee-1a8f-46f3-aef4-56e6bd341a75 req-5b771df6-e6ff-460d-a831-b71261d3fd3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updated VIF entry in instance network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:26:30 compute-0 nova_compute[260935]: 2025-10-11 09:26:30.359 2 DEBUG nova.network.neutron [req-dea1eaee-1a8f-46f3-aef4-56e6bd341a75 req-5b771df6-e6ff-460d-a831-b71261d3fd3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updating instance_info_cache with network_info: [{"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:26:30 compute-0 nova_compute[260935]: 2025-10-11 09:26:30.389 2 DEBUG nova.compute.manager [req-7e8b8b00-6161-43c8-9788-0b28dcfd6b2e req-04d87ddd-ab98-4813-b68f-4f846907e104 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-vif-plugged-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:26:30 compute-0 nova_compute[260935]: 2025-10-11 09:26:30.389 2 DEBUG oslo_concurrency.lockutils [req-7e8b8b00-6161-43c8-9788-0b28dcfd6b2e req-04d87ddd-ab98-4813-b68f-4f846907e104 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:30 compute-0 nova_compute[260935]: 2025-10-11 09:26:30.390 2 DEBUG oslo_concurrency.lockutils [req-7e8b8b00-6161-43c8-9788-0b28dcfd6b2e req-04d87ddd-ab98-4813-b68f-4f846907e104 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:30 compute-0 nova_compute[260935]: 2025-10-11 09:26:30.390 2 DEBUG oslo_concurrency.lockutils [req-7e8b8b00-6161-43c8-9788-0b28dcfd6b2e req-04d87ddd-ab98-4813-b68f-4f846907e104 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:30 compute-0 nova_compute[260935]: 2025-10-11 09:26:30.391 2 DEBUG nova.compute.manager [req-7e8b8b00-6161-43c8-9788-0b28dcfd6b2e req-04d87ddd-ab98-4813-b68f-4f846907e104 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Processing event network-vif-plugged-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:26:30 compute-0 nova_compute[260935]: 2025-10-11 09:26:30.511 2 DEBUG oslo_concurrency.lockutils [req-dea1eaee-1a8f-46f3-aef4-56e6bd341a75 req-5b771df6-e6ff-460d-a831-b71261d3fd3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:26:30 compute-0 podman[400660]: 2025-10-11 09:26:30.85589084 +0000 UTC m=+0.705140150 container create ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:26:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:26:30 compute-0 systemd[1]: Started libpod-conmon-ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5.scope.
Oct 11 09:26:30 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7364967b58c22da1966d5d1241e71eee425dfe401e15845ca723c5e08b53f77/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:26:31 compute-0 podman[400660]: 2025-10-11 09:26:31.119772037 +0000 UTC m=+0.969021447 container init ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 09:26:31 compute-0 podman[400660]: 2025-10-11 09:26:31.131142208 +0000 UTC m=+0.980391558 container start ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 09:26:31 compute-0 neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e[400697]: [NOTICE]   (400702) : New worker (400704) forked
Oct 11 09:26:31 compute-0 neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e[400697]: [NOTICE]   (400702) : Loading success.
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:31 compute-0 ceph-mon[74313]: pgmap v2546: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.459 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 in datapath 894decad-3bed-4c55-b643-5fbe5479bf3f unbound from our chassis
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.464 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 894decad-3bed-4c55-b643-5fbe5479bf3f
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.465 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.467 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174791.4650497, b39b8161-8a46-46fe-8a2a-0fc6b4eab850 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.470 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] VM Started (Lifecycle Event)
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.474 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.480 2 INFO nova.virt.libvirt.driver [-] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Instance spawned successfully.
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.481 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.486 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[71d5c0c1-b1c7-4fbe-a519-68f94482dbcc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.488 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap894decad-31 in ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.491 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap894decad-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.491 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f1fd2655-f498-4ffc-9d79-13082b063a29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.493 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[550ce776-fb43-4ba0-83c2-02ffd6cf6a01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.514 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6e212e56-e682-40c1-a729-ccf0c1a27095]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.516 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.521 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.548 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ce289530-7407-4607-9dd4-96344740f059]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.599 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.600 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.601 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.602 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.603 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.604 2 DEBUG nova.virt.libvirt.driver [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.603 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[29ce435b-75ac-45b0-b529-eb5efd127dc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.617 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c9296b0b-0031-4eda-9ec3-0ad35e267fc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:31 compute-0 NetworkManager[44960]: <info>  [1760174791.6190] manager: (tap894decad-30): new Veth device (/org/freedesktop/NetworkManager/Devices/551)
Oct 11 09:26:31 compute-0 systemd-udevd[400603]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.673 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[74d719e8-7a5b-44f3-abd4-b62149bf1906]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.677 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf33924-b961-45c4-b9ba-be9e0a13536e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:31 compute-0 NetworkManager[44960]: <info>  [1760174791.7211] device (tap894decad-30): carrier: link connected
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.735 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7f527dff-6640-4382-9c71-ff789291953a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.760 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.761 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174791.4651244, b39b8161-8a46-46fe-8a2a-0fc6b4eab850 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.762 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] VM Paused (Lifecycle Event)
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.761 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[109275fb-7840-4111-bfc8-fb62fd09c2fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap894decad-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:b0:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663642, 'reachable_time': 44137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400725, 'error': None, 'target': 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.781 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9b3464-de9b-43c2-bb87-9c5b04dd6aab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:b096'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663642, 'tstamp': 663642}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400726, 'error': None, 'target': 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.807 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[afadd8fd-8d5d-440c-856a-b80d39b4eab6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap894decad-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:b0:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663642, 'reachable_time': 44137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 400727, 'error': None, 'target': 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.848 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a744aaa6-d5b6-4a5c-829e-2e244d3ce681]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.891 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[552120d5-1522-45ce-b411-7b44b8e7a2e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.894 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap894decad-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.893 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.894 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.896 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap894decad-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.899 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174791.4705718, b39b8161-8a46-46fe-8a2a-0fc6b4eab850 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:26:31 compute-0 kernel: tap894decad-30: entered promiscuous mode
Oct 11 09:26:31 compute-0 NetworkManager[44960]: <info>  [1760174791.9006] manager: (tap894decad-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/552)
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.901 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] VM Resumed (Lifecycle Event)
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.907 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap894decad-30, col_values=(('external_ids', {'iface-id': '48cfaf23-d7a3-467e-89cb-3eb3687ea15e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:31 compute-0 ovn_controller[152945]: 2025-10-11T09:26:31Z|01402|binding|INFO|Releasing lport 48cfaf23-d7a3-467e-89cb-3eb3687ea15e from this chassis (sb_readonly=0)
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.914 2 DEBUG nova.compute.manager [req-e409e923-ed5e-4233-a3b3-6b60de5f6c98 req-4437c9f6-12cd-4313-8b4b-b3329d6199c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.914 2 DEBUG oslo_concurrency.lockutils [req-e409e923-ed5e-4233-a3b3-6b60de5f6c98 req-4437c9f6-12cd-4313-8b4b-b3329d6199c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.915 2 DEBUG oslo_concurrency.lockutils [req-e409e923-ed5e-4233-a3b3-6b60de5f6c98 req-4437c9f6-12cd-4313-8b4b-b3329d6199c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.916 2 DEBUG oslo_concurrency.lockutils [req-e409e923-ed5e-4233-a3b3-6b60de5f6c98 req-4437c9f6-12cd-4313-8b4b-b3329d6199c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.917 2 DEBUG nova.compute.manager [req-e409e923-ed5e-4233-a3b3-6b60de5f6c98 req-4437c9f6-12cd-4313-8b4b-b3329d6199c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] No waiting events found dispatching network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.918 2 WARNING nova.compute.manager [req-e409e923-ed5e-4233-a3b3-6b60de5f6c98 req-4437c9f6-12cd-4313-8b4b-b3329d6199c5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received unexpected event network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b for instance with vm_state building and task_state spawning.
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.928 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/894decad-3bed-4c55-b643-5fbe5479bf3f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/894decad-3bed-4c55-b643-5fbe5479bf3f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.929 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f25bfda7-5866-4c06-9a4f-d7517c6d760d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.930 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-894decad-3bed-4c55-b643-5fbe5479bf3f
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/894decad-3bed-4c55-b643-5fbe5479bf3f.pid.haproxy
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 894decad-3bed-4c55-b643-5fbe5479bf3f
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:26:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:31.931 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'env', 'PROCESS_TAG=haproxy-894decad-3bed-4c55-b643-5fbe5479bf3f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/894decad-3bed-4c55-b643-5fbe5479bf3f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.940 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.945 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.958 2 INFO nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Took 15.95 seconds to spawn the instance on the hypervisor.
Oct 11 09:26:31 compute-0 nova_compute[260935]: 2025-10-11 09:26:31.959 2 DEBUG nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:26:32 compute-0 nova_compute[260935]: 2025-10-11 09:26:32.012 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:26:32 compute-0 nova_compute[260935]: 2025-10-11 09:26:32.149 2 INFO nova.compute.manager [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Took 18.01 seconds to build instance.
Oct 11 09:26:32 compute-0 nova_compute[260935]: 2025-10-11 09:26:32.269 2 DEBUG oslo_concurrency.lockutils [None req-6c99d863-a15a-4b8a-895e-cd58d1191873 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:32 compute-0 podman[400757]: 2025-10-11 09:26:32.459329259 +0000 UTC m=+0.074892785 container create 7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 09:26:32 compute-0 podman[400757]: 2025-10-11 09:26:32.427664185 +0000 UTC m=+0.043227721 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:26:32 compute-0 systemd[1]: Started libpod-conmon-7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79.scope.
Oct 11 09:26:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc0d2ca647c901f7f4d6b1ca7ee3a7b693ee57f6d1e527af9b3cd693721335a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:26:32 compute-0 podman[400757]: 2025-10-11 09:26:32.562167141 +0000 UTC m=+0.177730657 container init 7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:26:32 compute-0 podman[400757]: 2025-10-11 09:26:32.56920005 +0000 UTC m=+0.184763546 container start 7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 09:26:32 compute-0 neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f[400773]: [NOTICE]   (400777) : New worker (400779) forked
Oct 11 09:26:32 compute-0 neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f[400773]: [NOTICE]   (400777) : Loading success.
Oct 11 09:26:32 compute-0 nova_compute[260935]: 2025-10-11 09:26:32.615 2 DEBUG nova.compute.manager [req-af43ad6c-1637-4e8a-97c9-f6392bb7925d req-1e43513a-27d7-422c-805b-5cc4654e0109 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-vif-plugged-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:26:32 compute-0 nova_compute[260935]: 2025-10-11 09:26:32.616 2 DEBUG oslo_concurrency.lockutils [req-af43ad6c-1637-4e8a-97c9-f6392bb7925d req-1e43513a-27d7-422c-805b-5cc4654e0109 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:32 compute-0 nova_compute[260935]: 2025-10-11 09:26:32.616 2 DEBUG oslo_concurrency.lockutils [req-af43ad6c-1637-4e8a-97c9-f6392bb7925d req-1e43513a-27d7-422c-805b-5cc4654e0109 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:32 compute-0 nova_compute[260935]: 2025-10-11 09:26:32.617 2 DEBUG oslo_concurrency.lockutils [req-af43ad6c-1637-4e8a-97c9-f6392bb7925d req-1e43513a-27d7-422c-805b-5cc4654e0109 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:32 compute-0 nova_compute[260935]: 2025-10-11 09:26:32.617 2 DEBUG nova.compute.manager [req-af43ad6c-1637-4e8a-97c9-f6392bb7925d req-1e43513a-27d7-422c-805b-5cc4654e0109 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] No waiting events found dispatching network-vif-plugged-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:26:32 compute-0 nova_compute[260935]: 2025-10-11 09:26:32.618 2 WARNING nova.compute.manager [req-af43ad6c-1637-4e8a-97c9-f6392bb7925d req-1e43513a-27d7-422c-805b-5cc4654e0109 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received unexpected event network-vif-plugged-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 for instance with vm_state active and task_state None.
Oct 11 09:26:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 178 op/s
Oct 11 09:26:32 compute-0 nova_compute[260935]: 2025-10-11 09:26:32.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:33 compute-0 ovn_controller[152945]: 2025-10-11T09:26:33Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:66:34 10.100.0.11
Oct 11 09:26:33 compute-0 ovn_controller[152945]: 2025-10-11T09:26:33Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:66:34 10.100.0.11
Oct 11 09:26:34 compute-0 ceph-mon[74313]: pgmap v2547: 321 pgs: 321 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 178 op/s
Oct 11 09:26:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.177978) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174794178274, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1203, "num_deletes": 250, "total_data_size": 1770298, "memory_usage": 1798424, "flush_reason": "Manual Compaction"}
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174794187867, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1043477, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52353, "largest_seqno": 53555, "table_properties": {"data_size": 1039058, "index_size": 1879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11721, "raw_average_key_size": 20, "raw_value_size": 1029524, "raw_average_value_size": 1822, "num_data_blocks": 85, "num_entries": 565, "num_filter_entries": 565, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760174675, "oldest_key_time": 1760174675, "file_creation_time": 1760174794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 9931 microseconds, and 4818 cpu microseconds.
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.187912) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1043477 bytes OK
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.187932) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.191172) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.191189) EVENT_LOG_v1 {"time_micros": 1760174794191183, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.191210) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1764819, prev total WAL file size 1764819, number of live WAL files 2.
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.192227) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303034' seq:72057594037927935, type:22 .. '6D6772737461740032323535' seq:0, type:0; will stop at (end)
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1019KB)], [122(10219KB)]
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174794192279, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 11508148, "oldest_snapshot_seqno": -1}
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7289 keys, 8916464 bytes, temperature: kUnknown
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174794265572, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 8916464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8870098, "index_size": 27054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 189934, "raw_average_key_size": 26, "raw_value_size": 8741941, "raw_average_value_size": 1199, "num_data_blocks": 1056, "num_entries": 7289, "num_filter_entries": 7289, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760174794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.265858) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 8916464 bytes
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.267256) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.8 rd, 121.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.0 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(19.6) write-amplify(8.5) OK, records in: 7747, records dropped: 458 output_compression: NoCompression
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.267276) EVENT_LOG_v1 {"time_micros": 1760174794267267, "job": 74, "event": "compaction_finished", "compaction_time_micros": 73379, "compaction_time_cpu_micros": 41178, "output_level": 6, "num_output_files": 1, "total_output_size": 8916464, "num_input_records": 7747, "num_output_records": 7289, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174794267804, "job": 74, "event": "table_file_deletion", "file_number": 124}
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174794270370, "job": 74, "event": "table_file_deletion", "file_number": 122}
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.192122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.270550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.270563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.270568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.270571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:26:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:26:34.270576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:26:34 compute-0 sudo[400788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:26:34 compute-0 sudo[400788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:34 compute-0 sudo[400788]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:34 compute-0 nova_compute[260935]: 2025-10-11 09:26:34.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:26:34 compute-0 sudo[400813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:26:34 compute-0 sudo[400813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:34 compute-0 sudo[400813]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:34 compute-0 sudo[400838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:26:34 compute-0 sudo[400838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:34 compute-0 sudo[400838]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:34 compute-0 sudo[400863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 09:26:34 compute-0 sudo[400863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2548: 321 pgs: 321 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 11 09:26:35 compute-0 ceph-mon[74313]: pgmap v2548: 321 pgs: 321 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 11 09:26:36 compute-0 podman[400959]: 2025-10-11 09:26:36.000828902 +0000 UTC m=+0.534422403 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:26:36 compute-0 nova_compute[260935]: 2025-10-11 09:26:36.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:36 compute-0 podman[400959]: 2025-10-11 09:26:36.334303003 +0000 UTC m=+0.867896474 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 09:26:36 compute-0 nova_compute[260935]: 2025-10-11 09:26:36.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:26:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2549: 321 pgs: 321 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 11 09:26:37 compute-0 ceph-mon[74313]: pgmap v2549: 321 pgs: 321 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 11 09:26:37 compute-0 nova_compute[260935]: 2025-10-11 09:26:37.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:26:37 compute-0 sudo[400863]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:26:37 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:26:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:26:37 compute-0 nova_compute[260935]: 2025-10-11 09:26:37.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:37 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:26:38 compute-0 sudo[401118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:26:38 compute-0 sudo[401118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:38 compute-0 sudo[401118]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:38 compute-0 sudo[401143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:26:38 compute-0 sudo[401143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:38 compute-0 sudo[401143]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:38 compute-0 sudo[401168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:26:38 compute-0 sudo[401168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:38 compute-0 sudo[401168]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:38 compute-0 sudo[401193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:26:38 compute-0 sudo[401193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:38 compute-0 sshd-session[401062]: Invalid user git-admin from 152.32.213.170 port 45192
Oct 11 09:26:38 compute-0 sshd-session[401062]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:26:38 compute-0 sshd-session[401062]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:26:38 compute-0 nova_compute[260935]: 2025-10-11 09:26:38.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:26:38 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:26:38 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:26:38 compute-0 nova_compute[260935]: 2025-10-11 09:26:38.892 2 DEBUG nova.compute.manager [req-1563758d-fb28-4fdd-a3c5-e4b0ed07eb66 req-1449793e-6114-4d7e-b39d-12d3cddeb743 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-changed-0f8506e7-c03f-48bd-938d-f3b4cea2675b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:26:38 compute-0 nova_compute[260935]: 2025-10-11 09:26:38.893 2 DEBUG nova.compute.manager [req-1563758d-fb28-4fdd-a3c5-e4b0ed07eb66 req-1449793e-6114-4d7e-b39d-12d3cddeb743 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Refreshing instance network info cache due to event network-changed-0f8506e7-c03f-48bd-938d-f3b4cea2675b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:26:38 compute-0 nova_compute[260935]: 2025-10-11 09:26:38.894 2 DEBUG oslo_concurrency.lockutils [req-1563758d-fb28-4fdd-a3c5-e4b0ed07eb66 req-1449793e-6114-4d7e-b39d-12d3cddeb743 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:26:38 compute-0 nova_compute[260935]: 2025-10-11 09:26:38.895 2 DEBUG oslo_concurrency.lockutils [req-1563758d-fb28-4fdd-a3c5-e4b0ed07eb66 req-1449793e-6114-4d7e-b39d-12d3cddeb743 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:26:38 compute-0 nova_compute[260935]: 2025-10-11 09:26:38.895 2 DEBUG nova.network.neutron [req-1563758d-fb28-4fdd-a3c5-e4b0ed07eb66 req-1449793e-6114-4d7e-b39d-12d3cddeb743 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Refreshing network info cache for port 0f8506e7-c03f-48bd-938d-f3b4cea2675b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:26:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Oct 11 09:26:38 compute-0 sudo[401193]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:39 compute-0 nova_compute[260935]: 2025-10-11 09:26:39.015 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:39 compute-0 nova_compute[260935]: 2025-10-11 09:26:39.017 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:26:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:26:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:26:39 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:26:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:26:39 compute-0 nova_compute[260935]: 2025-10-11 09:26:39.119 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:26:39 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:26:39 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 234dd9b1-913f-4e87-90d5-5e71cebc31e0 does not exist
Oct 11 09:26:39 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 97e808bd-4736-4c8f-abbc-9e1e1a5d031e does not exist
Oct 11 09:26:39 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 47f0b9c4-a202-4094-b1b1-3f7f5fe21e3a does not exist
Oct 11 09:26:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:26:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:26:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:26:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:26:39 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:26:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:26:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:26:39 compute-0 sudo[401249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:26:39 compute-0 sudo[401249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:39 compute-0 sudo[401249]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:39 compute-0 sudo[401274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:26:39 compute-0 sudo[401274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:39 compute-0 sudo[401274]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:39 compute-0 nova_compute[260935]: 2025-10-11 09:26:39.385 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:39 compute-0 nova_compute[260935]: 2025-10-11 09:26:39.386 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:39 compute-0 nova_compute[260935]: 2025-10-11 09:26:39.402 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:26:39 compute-0 nova_compute[260935]: 2025-10-11 09:26:39.403 2 INFO nova.compute.claims [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:26:39 compute-0 sudo[401299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:26:39 compute-0 sudo[401299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:39 compute-0 sudo[401299]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:39 compute-0 sudo[401324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:26:39 compute-0 sudo[401324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:39 compute-0 nova_compute[260935]: 2025-10-11 09:26:39.761 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:39 compute-0 ceph-mon[74313]: pgmap v2550: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Oct 11 09:26:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:26:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:26:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:26:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:26:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:26:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:26:40 compute-0 podman[401401]: 2025-10-11 09:26:39.95884503 +0000 UTC m=+0.036137481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:26:40 compute-0 podman[401401]: 2025-10-11 09:26:40.150037695 +0000 UTC m=+0.227330126 container create fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:26:40 compute-0 systemd[1]: Started libpod-conmon-fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243.scope.
Oct 11 09:26:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:26:40 compute-0 podman[401401]: 2025-10-11 09:26:40.473214116 +0000 UTC m=+0.550506577 container init fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 09:26:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:26:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2425161141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:26:40 compute-0 podman[401401]: 2025-10-11 09:26:40.487769996 +0000 UTC m=+0.565062457 container start fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:26:40 compute-0 thirsty_yonath[401426]: 167 167
Oct 11 09:26:40 compute-0 systemd[1]: libpod-fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243.scope: Deactivated successfully.
Oct 11 09:26:40 compute-0 conmon[401426]: conmon fc94fe6945d6264270ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243.scope/container/memory.events
Oct 11 09:26:40 compute-0 nova_compute[260935]: 2025-10-11 09:26:40.513 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.751s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:40 compute-0 nova_compute[260935]: 2025-10-11 09:26:40.526 2 DEBUG nova.compute.provider_tree [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:26:40 compute-0 nova_compute[260935]: 2025-10-11 09:26:40.548 2 DEBUG nova.scheduler.client.report [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:26:40 compute-0 nova_compute[260935]: 2025-10-11 09:26:40.587 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:40 compute-0 nova_compute[260935]: 2025-10-11 09:26:40.588 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:26:40 compute-0 podman[401401]: 2025-10-11 09:26:40.61333627 +0000 UTC m=+0.690628691 container attach fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 09:26:40 compute-0 podman[401401]: 2025-10-11 09:26:40.615508061 +0000 UTC m=+0.692800522 container died fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 09:26:40 compute-0 nova_compute[260935]: 2025-10-11 09:26:40.664 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:26:40 compute-0 nova_compute[260935]: 2025-10-11 09:26:40.666 2 DEBUG nova.network.neutron [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:26:40 compute-0 nova_compute[260935]: 2025-10-11 09:26:40.701 2 INFO nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:26:40 compute-0 nova_compute[260935]: 2025-10-11 09:26:40.732 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:26:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7625e06170edad45e849fee0f32493380d20861a3e2f663d66f2437e02ea3b4c-merged.mount: Deactivated successfully.
Oct 11 09:26:40 compute-0 nova_compute[260935]: 2025-10-11 09:26:40.851 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:26:40 compute-0 nova_compute[260935]: 2025-10-11 09:26:40.852 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:26:40 compute-0 nova_compute[260935]: 2025-10-11 09:26:40.853 2 INFO nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Creating image(s)
Oct 11 09:26:40 compute-0 nova_compute[260935]: 2025-10-11 09:26:40.879 2 DEBUG nova.storage.rbd_utils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:40 compute-0 sshd-session[401062]: Failed password for invalid user git-admin from 152.32.213.170 port 45192 ssh2
Oct 11 09:26:40 compute-0 nova_compute[260935]: 2025-10-11 09:26:40.927 2 DEBUG nova.storage.rbd_utils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Oct 11 09:26:40 compute-0 nova_compute[260935]: 2025-10-11 09:26:40.971 2 DEBUG nova.storage.rbd_utils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:40 compute-0 nova_compute[260935]: 2025-10-11 09:26:40.976 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:41 compute-0 nova_compute[260935]: 2025-10-11 09:26:41.050 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:41 compute-0 nova_compute[260935]: 2025-10-11 09:26:41.051 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:41 compute-0 nova_compute[260935]: 2025-10-11 09:26:41.052 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:41 compute-0 nova_compute[260935]: 2025-10-11 09:26:41.052 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:41 compute-0 podman[401401]: 2025-10-11 09:26:41.053961335 +0000 UTC m=+1.131253806 container remove fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:26:41 compute-0 nova_compute[260935]: 2025-10-11 09:26:41.093 2 DEBUG nova.storage.rbd_utils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:41 compute-0 nova_compute[260935]: 2025-10-11 09:26:41.103 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:41 compute-0 systemd[1]: libpod-conmon-fc94fe6945d6264270abdd98f38daf9e1d6de7ba9a6b9ed76f56682e97f63243.scope: Deactivated successfully.
Oct 11 09:26:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2425161141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:26:41 compute-0 ceph-mon[74313]: pgmap v2551: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Oct 11 09:26:41 compute-0 nova_compute[260935]: 2025-10-11 09:26:41.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:41 compute-0 podman[401543]: 2025-10-11 09:26:41.329875261 +0000 UTC m=+0.060891659 container create e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:26:41 compute-0 nova_compute[260935]: 2025-10-11 09:26:41.360 2 DEBUG nova.policy [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:26:41 compute-0 systemd[1]: Started libpod-conmon-e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d.scope.
Oct 11 09:26:41 compute-0 podman[401543]: 2025-10-11 09:26:41.297192359 +0000 UTC m=+0.028208767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:26:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6523e40f322e3f7661473808937bcc4950df177a5e2bed6868bcdbe32903173/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6523e40f322e3f7661473808937bcc4950df177a5e2bed6868bcdbe32903173/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6523e40f322e3f7661473808937bcc4950df177a5e2bed6868bcdbe32903173/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6523e40f322e3f7661473808937bcc4950df177a5e2bed6868bcdbe32903173/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6523e40f322e3f7661473808937bcc4950df177a5e2bed6868bcdbe32903173/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:26:41 compute-0 podman[401543]: 2025-10-11 09:26:41.531349737 +0000 UTC m=+0.262366165 container init e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 09:26:41 compute-0 podman[401543]: 2025-10-11 09:26:41.540062013 +0000 UTC m=+0.271078411 container start e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:26:41 compute-0 podman[401543]: 2025-10-11 09:26:41.599949073 +0000 UTC m=+0.330965471 container attach e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 09:26:41 compute-0 nova_compute[260935]: 2025-10-11 09:26:41.700 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:41 compute-0 nova_compute[260935]: 2025-10-11 09:26:41.725 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:26:41 compute-0 nova_compute[260935]: 2025-10-11 09:26:41.767 2 DEBUG nova.storage.rbd_utils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:26:41 compute-0 nova_compute[260935]: 2025-10-11 09:26:41.806 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:41 compute-0 nova_compute[260935]: 2025-10-11 09:26:41.806 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:41 compute-0 nova_compute[260935]: 2025-10-11 09:26:41.807 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:41 compute-0 nova_compute[260935]: 2025-10-11 09:26:41.807 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:26:41 compute-0 nova_compute[260935]: 2025-10-11 09:26:41.808 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.131 2 DEBUG nova.objects.instance [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid ebf4e4f9-b225-4ba9-927e-0619aeea8d89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.158 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.158 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Ensure instance console log exists: /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.159 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.159 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.159 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:26:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2712588189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.359 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:42 compute-0 sshd-session[401062]: Received disconnect from 152.32.213.170 port 45192:11: Bye Bye [preauth]
Oct 11 09:26:42 compute-0 sshd-session[401062]: Disconnected from invalid user git-admin 152.32.213.170 port 45192 [preauth]
Oct 11 09:26:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2712588189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.626 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.627 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.633 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.633 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.634 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.642 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.643 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.649 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.649 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.656 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.657 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:26:42 compute-0 boring_perlman[401563]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:26:42 compute-0 boring_perlman[401563]: --> relative data size: 1.0
Oct 11 09:26:42 compute-0 boring_perlman[401563]: --> All data devices are unavailable
Oct 11 09:26:42 compute-0 systemd[1]: libpod-e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d.scope: Deactivated successfully.
Oct 11 09:26:42 compute-0 systemd[1]: libpod-e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d.scope: Consumed 1.156s CPU time.
Oct 11 09:26:42 compute-0 podman[401543]: 2025-10-11 09:26:42.937135359 +0000 UTC m=+1.668151807 container died e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:26:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2552: 321 pgs: 321 active+clean; 508 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.0 MiB/s wr, 186 op/s
Oct 11 09:26:42 compute-0 nova_compute[260935]: 2025-10-11 09:26:42.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.045 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2360MB free_disk=59.76438522338867GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.047 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.048 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.147 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.147 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.148 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.148 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 36d0acda-9f37-4308-aa46-973f11c57b0e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.148 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b39b8161-8a46-46fe-8a2a-0fc6b4eab850 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.149 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance ebf4e4f9-b225-4ba9-927e-0619aeea8d89 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.149 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.149 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.279 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6523e40f322e3f7661473808937bcc4950df177a5e2bed6868bcdbe32903173-merged.mount: Deactivated successfully.
Oct 11 09:26:43 compute-0 ceph-mon[74313]: pgmap v2552: 321 pgs: 321 active+clean; 508 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.0 MiB/s wr, 186 op/s
Oct 11 09:26:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:26:43 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/247873486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:26:43 compute-0 podman[401543]: 2025-10-11 09:26:43.75041645 +0000 UTC m=+2.481432878 container remove e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.775 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.784 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:26:43 compute-0 sudo[401324]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.813 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:26:43 compute-0 systemd[1]: libpod-conmon-e6abf1327511a0a30be33139b8592de4c47bb212d95fccaa58d7d5207d894f5d.scope: Deactivated successfully.
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.843 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:26:43 compute-0 nova_compute[260935]: 2025-10-11 09:26:43.844 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:43 compute-0 sudo[401723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:26:43 compute-0 sudo[401723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:43 compute-0 sudo[401723]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:43 compute-0 sudo[401748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:26:43 compute-0 sudo[401748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:43 compute-0 sudo[401748]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:44 compute-0 sudo[401773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:26:44 compute-0 sudo[401773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:44 compute-0 sudo[401773]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:44 compute-0 sudo[401798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:26:44 compute-0 sudo[401798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:26:44 compute-0 nova_compute[260935]: 2025-10-11 09:26:44.365 2 DEBUG nova.network.neutron [req-1563758d-fb28-4fdd-a3c5-e4b0ed07eb66 req-1449793e-6114-4d7e-b39d-12d3cddeb743 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updated VIF entry in instance network info cache for port 0f8506e7-c03f-48bd-938d-f3b4cea2675b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:26:44 compute-0 nova_compute[260935]: 2025-10-11 09:26:44.366 2 DEBUG nova.network.neutron [req-1563758d-fb28-4fdd-a3c5-e4b0ed07eb66 req-1449793e-6114-4d7e-b39d-12d3cddeb743 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updating instance_info_cache with network_info: [{"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:26:44 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/247873486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:26:44 compute-0 podman[401864]: 2025-10-11 09:26:44.683859453 +0000 UTC m=+0.079267288 container create 32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 09:26:44 compute-0 nova_compute[260935]: 2025-10-11 09:26:44.711 2 DEBUG oslo_concurrency.lockutils [req-1563758d-fb28-4fdd-a3c5-e4b0ed07eb66 req-1449793e-6114-4d7e-b39d-12d3cddeb743 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:26:44 compute-0 nova_compute[260935]: 2025-10-11 09:26:44.713 2 DEBUG nova.network.neutron [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Successfully created port: 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:26:44 compute-0 podman[401864]: 2025-10-11 09:26:44.642495646 +0000 UTC m=+0.037903531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:26:44 compute-0 systemd[1]: Started libpod-conmon-32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952.scope.
Oct 11 09:26:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:26:44 compute-0 podman[401864]: 2025-10-11 09:26:44.813904293 +0000 UTC m=+0.209312218 container init 32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 09:26:44 compute-0 nova_compute[260935]: 2025-10-11 09:26:44.823 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:26:44 compute-0 nova_compute[260935]: 2025-10-11 09:26:44.825 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:26:44 compute-0 podman[401864]: 2025-10-11 09:26:44.8315296 +0000 UTC m=+0.226937435 container start 32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:26:44 compute-0 sad_noether[401880]: 167 167
Oct 11 09:26:44 compute-0 systemd[1]: libpod-32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952.scope: Deactivated successfully.
Oct 11 09:26:44 compute-0 conmon[401880]: conmon 32fb6cf391b33dd692c8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952.scope/container/memory.events
Oct 11 09:26:44 compute-0 podman[401864]: 2025-10-11 09:26:44.845408612 +0000 UTC m=+0.240816487 container attach 32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:26:44 compute-0 podman[401864]: 2025-10-11 09:26:44.846100032 +0000 UTC m=+0.241507907 container died 32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:26:44 compute-0 nova_compute[260935]: 2025-10-11 09:26:44.861 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 09:26:44 compute-0 nova_compute[260935]: 2025-10-11 09:26:44.862 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:26:44 compute-0 nova_compute[260935]: 2025-10-11 09:26:44.862 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:26:44 compute-0 nova_compute[260935]: 2025-10-11 09:26:44.862 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:26:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d492595a93584019fa6fbaa7aa575831263e006b1189e662500f3cc855d43477-merged.mount: Deactivated successfully.
Oct 11 09:26:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2553: 321 pgs: 321 active+clean; 508 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.9 MiB/s wr, 81 op/s
Oct 11 09:26:45 compute-0 podman[401864]: 2025-10-11 09:26:45.063462416 +0000 UTC m=+0.458870261 container remove 32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 09:26:45 compute-0 systemd[1]: libpod-conmon-32fb6cf391b33dd692c821d37210b9e76871fcbefb7b07075b7dba7c7e010952.scope: Deactivated successfully.
Oct 11 09:26:45 compute-0 podman[401883]: 2025-10-11 09:26:45.155295577 +0000 UTC m=+0.339851752 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 09:26:45 compute-0 podman[401925]: 2025-10-11 09:26:45.31660602 +0000 UTC m=+0.024132072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:26:45 compute-0 podman[401925]: 2025-10-11 09:26:45.428876748 +0000 UTC m=+0.136402800 container create cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:26:45 compute-0 systemd[1]: Started libpod-conmon-cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0.scope.
Oct 11 09:26:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc7ac2b70633db55b88430f0783934540668a9e4bdfbc872dcee86c6776a052/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc7ac2b70633db55b88430f0783934540668a9e4bdfbc872dcee86c6776a052/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc7ac2b70633db55b88430f0783934540668a9e4bdfbc872dcee86c6776a052/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc7ac2b70633db55b88430f0783934540668a9e4bdfbc872dcee86c6776a052/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:26:45 compute-0 ceph-mon[74313]: pgmap v2553: 321 pgs: 321 active+clean; 508 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.9 MiB/s wr, 81 op/s
Oct 11 09:26:45 compute-0 podman[401925]: 2025-10-11 09:26:45.995887128 +0000 UTC m=+0.703413260 container init cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 09:26:46 compute-0 podman[401925]: 2025-10-11 09:26:46.012314262 +0000 UTC m=+0.719840324 container start cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_saha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:26:46 compute-0 ovn_controller[152945]: 2025-10-11T09:26:46Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f4:67:d9 10.100.0.4
Oct 11 09:26:46 compute-0 ovn_controller[152945]: 2025-10-11T09:26:46Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:67:d9 10.100.0.4
Oct 11 09:26:46 compute-0 podman[401925]: 2025-10-11 09:26:46.213447328 +0000 UTC m=+0.920973450 container attach cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_saha, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:26:46 compute-0 nova_compute[260935]: 2025-10-11 09:26:46.249 2 DEBUG nova.network.neutron [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Successfully updated port: 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:26:46 compute-0 nova_compute[260935]: 2025-10-11 09:26:46.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:46 compute-0 nova_compute[260935]: 2025-10-11 09:26:46.398 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:26:46 compute-0 nova_compute[260935]: 2025-10-11 09:26:46.399 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:26:46 compute-0 nova_compute[260935]: 2025-10-11 09:26:46.399 2 DEBUG nova.network.neutron [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:26:46 compute-0 nova_compute[260935]: 2025-10-11 09:26:46.450 2 DEBUG nova.compute.manager [req-cee575f3-02ca-46d2-aee8-5ee353edb984 req-b1b23c97-ec86-4362-8b55-77c404311886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-changed-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:26:46 compute-0 nova_compute[260935]: 2025-10-11 09:26:46.450 2 DEBUG nova.compute.manager [req-cee575f3-02ca-46d2-aee8-5ee353edb984 req-b1b23c97-ec86-4362-8b55-77c404311886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Refreshing instance network info cache due to event network-changed-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:26:46 compute-0 nova_compute[260935]: 2025-10-11 09:26:46.451 2 DEBUG oslo_concurrency.lockutils [req-cee575f3-02ca-46d2-aee8-5ee353edb984 req-b1b23c97-ec86-4362-8b55-77c404311886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:26:46 compute-0 nova_compute[260935]: 2025-10-11 09:26:46.682 2 DEBUG nova.network.neutron [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:26:46 compute-0 optimistic_saha[401942]: {
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:     "0": [
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:         {
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "devices": [
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "/dev/loop3"
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             ],
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "lv_name": "ceph_lv0",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "lv_size": "21470642176",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "name": "ceph_lv0",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "tags": {
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.cluster_name": "ceph",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.crush_device_class": "",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.encrypted": "0",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.osd_id": "0",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.type": "block",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.vdo": "0"
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             },
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "type": "block",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "vg_name": "ceph_vg0"
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:         }
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:     ],
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:     "1": [
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:         {
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "devices": [
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "/dev/loop4"
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             ],
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "lv_name": "ceph_lv1",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "lv_size": "21470642176",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "name": "ceph_lv1",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "tags": {
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.cluster_name": "ceph",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.crush_device_class": "",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.encrypted": "0",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.osd_id": "1",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.type": "block",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.vdo": "0"
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             },
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "type": "block",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "vg_name": "ceph_vg1"
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:         }
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:     ],
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:     "2": [
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:         {
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "devices": [
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "/dev/loop5"
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             ],
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "lv_name": "ceph_lv2",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "lv_size": "21470642176",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "name": "ceph_lv2",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "tags": {
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.cluster_name": "ceph",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.crush_device_class": "",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.encrypted": "0",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.osd_id": "2",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.type": "block",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:                 "ceph.vdo": "0"
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             },
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "type": "block",
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:             "vg_name": "ceph_vg2"
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:         }
Oct 11 09:26:46 compute-0 optimistic_saha[401942]:     ]
Oct 11 09:26:46 compute-0 optimistic_saha[401942]: }
Oct 11 09:26:46 compute-0 systemd[1]: libpod-cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0.scope: Deactivated successfully.
Oct 11 09:26:46 compute-0 podman[401951]: 2025-10-11 09:26:46.891999218 +0000 UTC m=+0.036401759 container died cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:26:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 508 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.9 MiB/s wr, 81 op/s
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.907 2 DEBUG nova.network.neutron [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Updating instance_info_cache with network_info: [{"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.934 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.934 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Instance network_info: |[{"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.935 2 DEBUG oslo_concurrency.lockutils [req-cee575f3-02ca-46d2-aee8-5ee353edb984 req-b1b23c97-ec86-4362-8b55-77c404311886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.935 2 DEBUG nova.network.neutron [req-cee575f3-02ca-46d2-aee8-5ee353edb984 req-b1b23c97-ec86-4362-8b55-77c404311886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Refreshing network info cache for port 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.940 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Start _get_guest_xml network_info=[{"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.951 2 WARNING nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.970 2 DEBUG nova.virt.libvirt.host [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.971 2 DEBUG nova.virt.libvirt.host [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.975 2 DEBUG nova.virt.libvirt.host [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.976 2 DEBUG nova.virt.libvirt.host [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.976 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.977 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.978 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.978 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.978 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.979 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.979 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.979 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.980 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.980 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.980 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.981 2 DEBUG nova.virt.hardware [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:26:47 compute-0 nova_compute[260935]: 2025-10-11 09:26:47.986 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:48 compute-0 ceph-mon[74313]: pgmap v2554: 321 pgs: 321 active+clean; 508 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.9 MiB/s wr, 81 op/s
Oct 11 09:26:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcc7ac2b70633db55b88430f0783934540668a9e4bdfbc872dcee86c6776a052-merged.mount: Deactivated successfully.
Oct 11 09:26:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:26:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3346623215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:26:48 compute-0 nova_compute[260935]: 2025-10-11 09:26:48.574 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:48 compute-0 nova_compute[260935]: 2025-10-11 09:26:48.621 2 DEBUG nova.storage.rbd_utils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:48 compute-0 nova_compute[260935]: 2025-10-11 09:26:48.626 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:48 compute-0 podman[401951]: 2025-10-11 09:26:48.875263247 +0000 UTC m=+2.019665798 container remove cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_saha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:26:48 compute-0 systemd[1]: libpod-conmon-cf47a4979cc279ed36e66b223cbe8581bbc71b75196f29de32d496b486e99ce0.scope: Deactivated successfully.
Oct 11 09:26:48 compute-0 sudo[401798]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 321 active+clean; 525 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 114 op/s
Oct 11 09:26:49 compute-0 sudo[402026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:26:49 compute-0 sudo[402026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:49 compute-0 sudo[402026]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:49 compute-0 sudo[402051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:26:49 compute-0 sudo[402051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:49 compute-0 sudo[402051]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:49 compute-0 sudo[402077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:26:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:26:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:26:49 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012108614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:26:49 compute-0 sudo[402077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:49 compute-0 sudo[402077]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:49 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3346623215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:26:49 compute-0 ceph-mon[74313]: pgmap v2555: 321 pgs: 321 active+clean; 525 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 114 op/s
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.240 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.244 2 DEBUG nova.virt.libvirt.vif [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-642071395',display_name='tempest-TestNetworkBasicOps-server-642071395',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-642071395',id=129,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/a4lv10vOA8+Y0o9A9HssJRB57eWElESeZBiHZYTC4NgZskMwnl+IoMYFCrg3V9vZquWFF14c5AYcCtavq1tvAMKZzjjtkqimBk00BtDUv/kjU7cBauBpTA49aY1aFOA==',key_name='tempest-TestNetworkBasicOps-566812727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-z5r1nfln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:40Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=ebf4e4f9-b225-4ba9-927e-0619aeea8d89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.244 2 DEBUG nova.network.os_vif_util [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.246 2 DEBUG nova.network.os_vif_util [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:8a:b1,bridge_name='br-int',has_traffic_filtering=True,id=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733f68a9-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.248 2 DEBUG nova.objects.instance [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid ebf4e4f9-b225-4ba9-927e-0619aeea8d89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.265 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:26:49 compute-0 nova_compute[260935]:   <uuid>ebf4e4f9-b225-4ba9-927e-0619aeea8d89</uuid>
Oct 11 09:26:49 compute-0 nova_compute[260935]:   <name>instance-00000081</name>
Oct 11 09:26:49 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:26:49 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:26:49 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkBasicOps-server-642071395</nova:name>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:26:47</nova:creationTime>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:26:49 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:26:49 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:26:49 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:26:49 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:26:49 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:26:49 compute-0 nova_compute[260935]:         <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:26:49 compute-0 nova_compute[260935]:         <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:26:49 compute-0 nova_compute[260935]:         <nova:port uuid="733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6">
Oct 11 09:26:49 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:26:49 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:26:49 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <system>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <entry name="serial">ebf4e4f9-b225-4ba9-927e-0619aeea8d89</entry>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <entry name="uuid">ebf4e4f9-b225-4ba9-927e-0619aeea8d89</entry>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     </system>
Oct 11 09:26:49 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:26:49 compute-0 nova_compute[260935]:   <os>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:   </os>
Oct 11 09:26:49 compute-0 nova_compute[260935]:   <features>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:   </features>
Oct 11 09:26:49 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:26:49 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:26:49 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk">
Oct 11 09:26:49 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       </source>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:26:49 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk.config">
Oct 11 09:26:49 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       </source>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:26:49 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:aa:8a:b1"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <target dev="tap733f68a9-9a"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89/console.log" append="off"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <video>
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     </video>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:26:49 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:26:49 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:26:49 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:26:49 compute-0 nova_compute[260935]: </domain>
Oct 11 09:26:49 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.265 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Preparing to wait for external event network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.266 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.266 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.267 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.268 2 DEBUG nova.virt.libvirt.vif [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-642071395',display_name='tempest-TestNetworkBasicOps-server-642071395',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-642071395',id=129,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/a4lv10vOA8+Y0o9A9HssJRB57eWElESeZBiHZYTC4NgZskMwnl+IoMYFCrg3V9vZquWFF14c5AYcCtavq1tvAMKZzjjtkqimBk00BtDUv/kjU7cBauBpTA49aY1aFOA==',key_name='tempest-TestNetworkBasicOps-566812727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-z5r1nfln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:40Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=ebf4e4f9-b225-4ba9-927e-0619aeea8d89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.269 2 DEBUG nova.network.os_vif_util [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.270 2 DEBUG nova.network.os_vif_util [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:8a:b1,bridge_name='br-int',has_traffic_filtering=True,id=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733f68a9-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.270 2 DEBUG os_vif [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:8a:b1,bridge_name='br-int',has_traffic_filtering=True,id=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733f68a9-9a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.279 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap733f68a9-9a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:49 compute-0 sudo[402103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap733f68a9-9a, col_values=(('external_ids', {'iface-id': '733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:8a:b1', 'vm-uuid': 'ebf4e4f9-b225-4ba9-927e-0619aeea8d89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:49 compute-0 sudo[402103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:49 compute-0 NetworkManager[44960]: <info>  [1760174809.2842] manager: (tap733f68a9-9a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/553)
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.295 2 INFO os_vif [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:8a:b1,bridge_name='br-int',has_traffic_filtering=True,id=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733f68a9-9a')
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.354 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.355 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.356 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:aa:8a:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.357 2 INFO nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Using config drive
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.387 2 DEBUG nova.storage.rbd_utils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.461 2 DEBUG nova.network.neutron [req-cee575f3-02ca-46d2-aee8-5ee353edb984 req-b1b23c97-ec86-4362-8b55-77c404311886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Updated VIF entry in instance network info cache for port 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.462 2 DEBUG nova.network.neutron [req-cee575f3-02ca-46d2-aee8-5ee353edb984 req-b1b23c97-ec86-4362-8b55-77c404311886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Updating instance_info_cache with network_info: [{"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.479 2 DEBUG oslo_concurrency.lockutils [req-cee575f3-02ca-46d2-aee8-5ee353edb984 req-b1b23c97-ec86-4362-8b55-77c404311886 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:26:49 compute-0 podman[402185]: 2025-10-11 09:26:49.708289374 +0000 UTC m=+0.054670673 container create a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.738 2 INFO nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Creating config drive at /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89/disk.config
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.747 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyx8qw76b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:49 compute-0 systemd[1]: Started libpod-conmon-a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad.scope.
Oct 11 09:26:49 compute-0 podman[402185]: 2025-10-11 09:26:49.692107148 +0000 UTC m=+0.038488497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:26:49 compute-0 podman[402194]: 2025-10-11 09:26:49.802944486 +0000 UTC m=+0.101169426 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 09:26:49 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:26:49 compute-0 podman[402185]: 2025-10-11 09:26:49.82399334 +0000 UTC m=+0.170374639 container init a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wozniak, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:26:49 compute-0 podman[402185]: 2025-10-11 09:26:49.835723601 +0000 UTC m=+0.182104910 container start a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wozniak, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:26:49 compute-0 festive_wozniak[402217]: 167 167
Oct 11 09:26:49 compute-0 systemd[1]: libpod-a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad.scope: Deactivated successfully.
Oct 11 09:26:49 compute-0 podman[402185]: 2025-10-11 09:26:49.840751913 +0000 UTC m=+0.187133232 container attach a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wozniak, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 09:26:49 compute-0 conmon[402217]: conmon a94ca6eaf3280ead49da <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad.scope/container/memory.events
Oct 11 09:26:49 compute-0 podman[402185]: 2025-10-11 09:26:49.84350642 +0000 UTC m=+0.189887769 container died a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wozniak, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:26:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3533003b4a273565175e3c84cb51443beabed5ec1845e951001c8a5dcd9da4cd-merged.mount: Deactivated successfully.
Oct 11 09:26:49 compute-0 podman[402185]: 2025-10-11 09:26:49.88352753 +0000 UTC m=+0.229908829 container remove a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 09:26:49 compute-0 systemd[1]: libpod-conmon-a94ca6eaf3280ead49da3a0083bcfd416559e7eca14cb9c12014e2d98af7d4ad.scope: Deactivated successfully.
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.914 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyx8qw76b" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.942 2 DEBUG nova.storage.rbd_utils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:49 compute-0 nova_compute[260935]: 2025-10-11 09:26:49.946 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89/disk.config ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:50 compute-0 podman[402279]: 2025-10-11 09:26:50.103352794 +0000 UTC m=+0.053353007 container create 0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mahavira, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:26:50 compute-0 systemd[1]: Started libpod-conmon-0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76.scope.
Oct 11 09:26:50 compute-0 nova_compute[260935]: 2025-10-11 09:26:50.143 2 DEBUG oslo_concurrency.processutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89/disk.config ebf4e4f9-b225-4ba9-927e-0619aeea8d89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:50 compute-0 nova_compute[260935]: 2025-10-11 09:26:50.146 2 INFO nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Deleting local config drive /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89/disk.config because it was imported into RBD.
Oct 11 09:26:50 compute-0 podman[402279]: 2025-10-11 09:26:50.073745948 +0000 UTC m=+0.023746231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:26:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e22acf452f3b518f1fba6ce8c6d25d6adbfec8cb5b0ad1feaf5bd2fb494f5112/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e22acf452f3b518f1fba6ce8c6d25d6adbfec8cb5b0ad1feaf5bd2fb494f5112/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e22acf452f3b518f1fba6ce8c6d25d6adbfec8cb5b0ad1feaf5bd2fb494f5112/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:26:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e22acf452f3b518f1fba6ce8c6d25d6adbfec8cb5b0ad1feaf5bd2fb494f5112/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:26:50 compute-0 podman[402279]: 2025-10-11 09:26:50.200271179 +0000 UTC m=+0.150271392 container init 0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 09:26:50 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4012108614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:26:50 compute-0 podman[402279]: 2025-10-11 09:26:50.209309424 +0000 UTC m=+0.159309657 container start 0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 09:26:50 compute-0 NetworkManager[44960]: <info>  [1760174810.2119] manager: (tap733f68a9-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/554)
Oct 11 09:26:50 compute-0 kernel: tap733f68a9-9a: entered promiscuous mode
Oct 11 09:26:50 compute-0 ovn_controller[152945]: 2025-10-11T09:26:50Z|01403|binding|INFO|Claiming lport 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 for this chassis.
Oct 11 09:26:50 compute-0 ovn_controller[152945]: 2025-10-11T09:26:50Z|01404|binding|INFO|733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6: Claiming fa:16:3e:aa:8a:b1 10.100.0.8
Oct 11 09:26:50 compute-0 nova_compute[260935]: 2025-10-11 09:26:50.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.223 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:8a:b1 10.100.0.8'], port_security=['fa:16:3e:aa:8a:b1 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ebf4e4f9-b225-4ba9-927e-0619aeea8d89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ed8dd5c0-e5cc-4f41-beee-a727185c1ca5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=419ebcda-831c-4f7b-8ef8-fba16bc71b52, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:26:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.228 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 in datapath b9f9ae84-9b18-48f7-bff2-94e8835de5c8 bound to our chassis
Oct 11 09:26:50 compute-0 podman[402279]: 2025-10-11 09:26:50.228846325 +0000 UTC m=+0.178846538 container attach 0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mahavira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 11 09:26:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.231 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b9f9ae84-9b18-48f7-bff2-94e8835de5c8
Oct 11 09:26:50 compute-0 systemd-udevd[402314]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:26:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.252 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[17ad2c5a-b4eb-4caf-ade7-775d4c34202c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:50 compute-0 ovn_controller[152945]: 2025-10-11T09:26:50Z|01405|binding|INFO|Setting lport 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 ovn-installed in OVS
Oct 11 09:26:50 compute-0 ovn_controller[152945]: 2025-10-11T09:26:50Z|01406|binding|INFO|Setting lport 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 up in Southbound
Oct 11 09:26:50 compute-0 nova_compute[260935]: 2025-10-11 09:26:50.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:50 compute-0 NetworkManager[44960]: <info>  [1760174810.3045] device (tap733f68a9-9a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:26:50 compute-0 NetworkManager[44960]: <info>  [1760174810.3053] device (tap733f68a9-9a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:26:50 compute-0 systemd-machined[215705]: New machine qemu-153-instance-00000081.
Oct 11 09:26:50 compute-0 systemd[1]: Started Virtual Machine qemu-153-instance-00000081.
Oct 11 09:26:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.334 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4dd84647-dd47-4158-9d98-a82f776c4a7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.337 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8af352-9576-49ff-9d09-518a7b1ffa85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.375 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[60ac2038-d2f5-40b6-8f31-e17f7231c965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.396 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7182574f-e61a-43f5-a65a-ca71c2bf6536]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9f9ae84-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:ad:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662316, 'reachable_time': 34988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402328, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.423 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7100493a-7904-4504-b405-ef84d38e01c1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb9f9ae84-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662335, 'tstamp': 662335}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402330, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb9f9ae84-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662340, 'tstamp': 662340}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402330, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:26:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.425 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9f9ae84-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:50 compute-0 nova_compute[260935]: 2025-10-11 09:26:50.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:50 compute-0 nova_compute[260935]: 2025-10-11 09:26:50.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.429 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9f9ae84-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.430 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:26:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.431 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb9f9ae84-90, col_values=(('external_ids', {'iface-id': '829ba2ca-e21f-4927-8525-5f43e59d37f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:50 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:50.432 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:26:50 compute-0 nova_compute[260935]: 2025-10-11 09:26:50.499 2 DEBUG nova.compute.manager [req-bdedd1be-ed7b-4127-a888-4c1ead5373ec req-3eae7b5f-bee7-4c1c-a2fc-fd9b1814a04d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:26:50 compute-0 nova_compute[260935]: 2025-10-11 09:26:50.500 2 DEBUG oslo_concurrency.lockutils [req-bdedd1be-ed7b-4127-a888-4c1ead5373ec req-3eae7b5f-bee7-4c1c-a2fc-fd9b1814a04d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:50 compute-0 nova_compute[260935]: 2025-10-11 09:26:50.500 2 DEBUG oslo_concurrency.lockutils [req-bdedd1be-ed7b-4127-a888-4c1ead5373ec req-3eae7b5f-bee7-4c1c-a2fc-fd9b1814a04d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:50 compute-0 nova_compute[260935]: 2025-10-11 09:26:50.501 2 DEBUG oslo_concurrency.lockutils [req-bdedd1be-ed7b-4127-a888-4c1ead5373ec req-3eae7b5f-bee7-4c1c-a2fc-fd9b1814a04d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:50 compute-0 nova_compute[260935]: 2025-10-11 09:26:50.501 2 DEBUG nova.compute.manager [req-bdedd1be-ed7b-4127-a888-4c1ead5373ec req-3eae7b5f-bee7-4c1c-a2fc-fd9b1814a04d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Processing event network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:26:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 321 active+clean; 525 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 3.8 MiB/s wr, 74 op/s
Oct 11 09:26:51 compute-0 tender_mahavira[402299]: {
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:         "osd_id": 2,
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:         "type": "bluestore"
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:     },
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:         "osd_id": 0,
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:         "type": "bluestore"
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:     },
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:         "osd_id": 1,
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:         "type": "bluestore"
Oct 11 09:26:51 compute-0 tender_mahavira[402299]:     }
Oct 11 09:26:51 compute-0 tender_mahavira[402299]: }
Oct 11 09:26:51 compute-0 systemd[1]: libpod-0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76.scope: Deactivated successfully.
Oct 11 09:26:51 compute-0 conmon[402299]: conmon 0ed4ba66d0ff46c19a94 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76.scope/container/memory.events
Oct 11 09:26:51 compute-0 podman[402279]: 2025-10-11 09:26:51.186335366 +0000 UTC m=+1.136335559 container died 0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mahavira, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:26:51 compute-0 ceph-mon[74313]: pgmap v2556: 321 pgs: 321 active+clean; 525 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 3.8 MiB/s wr, 74 op/s
Oct 11 09:26:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e22acf452f3b518f1fba6ce8c6d25d6adbfec8cb5b0ad1feaf5bd2fb494f5112-merged.mount: Deactivated successfully.
Oct 11 09:26:51 compute-0 podman[402279]: 2025-10-11 09:26:51.260758997 +0000 UTC m=+1.210759200 container remove 0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mahavira, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:26:51 compute-0 systemd[1]: libpod-conmon-0ed4ba66d0ff46c19a947a281e01662bf4d54f3cb11e975460ab9391b0330c76.scope: Deactivated successfully.
Oct 11 09:26:51 compute-0 sudo[402103]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:26:51 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:26:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:26:51 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:26:51 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 0519fb0a-678f-4add-a355-d50dbc26c946 does not exist
Oct 11 09:26:51 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 43de6eab-7015-427e-b501-1d2efec697db does not exist
Oct 11 09:26:51 compute-0 sudo[402413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:26:51 compute-0 sudo[402413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:51 compute-0 sudo[402413]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:51 compute-0 sudo[402438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:26:51 compute-0 sudo[402438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:26:51 compute-0 sudo[402438]: pam_unix(sudo:session): session closed for user root
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.620 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174811.6197715, ebf4e4f9-b225-4ba9-927e-0619aeea8d89 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.621 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] VM Started (Lifecycle Event)
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.623 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.627 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.631 2 INFO nova.virt.libvirt.driver [-] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Instance spawned successfully.
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.632 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.646 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.654 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.663 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.663 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.664 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.664 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.665 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.666 2 DEBUG nova.virt.libvirt.driver [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.694 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.694 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174811.6201277, ebf4e4f9-b225-4ba9-927e-0619aeea8d89 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.695 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] VM Paused (Lifecycle Event)
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.719 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.722 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174811.626755, ebf4e4f9-b225-4ba9-927e-0619aeea8d89 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.722 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] VM Resumed (Lifecycle Event)
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.734 2 INFO nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Took 10.88 seconds to spawn the instance on the hypervisor.
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.735 2 DEBUG nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.749 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.754 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.776 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.805 2 INFO nova.compute.manager [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Took 12.46 seconds to build instance.
Oct 11 09:26:51 compute-0 nova_compute[260935]: 2025-10-11 09:26:51.821 2 DEBUG oslo_concurrency.lockutils [None req-eb6d7eb6-27dc-44f8-87f2-37ada8ac837e dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:26:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:26:52 compute-0 nova_compute[260935]: 2025-10-11 09:26:52.635 2 DEBUG nova.compute.manager [req-b3ce5aff-6f1e-48de-ab60-7b6302f9b026 req-73a1da29-68a1-4954-9e6a-ec2c8b7b28c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:26:52 compute-0 nova_compute[260935]: 2025-10-11 09:26:52.636 2 DEBUG oslo_concurrency.lockutils [req-b3ce5aff-6f1e-48de-ab60-7b6302f9b026 req-73a1da29-68a1-4954-9e6a-ec2c8b7b28c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:52 compute-0 nova_compute[260935]: 2025-10-11 09:26:52.637 2 DEBUG oslo_concurrency.lockutils [req-b3ce5aff-6f1e-48de-ab60-7b6302f9b026 req-73a1da29-68a1-4954-9e6a-ec2c8b7b28c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:52 compute-0 nova_compute[260935]: 2025-10-11 09:26:52.638 2 DEBUG oslo_concurrency.lockutils [req-b3ce5aff-6f1e-48de-ab60-7b6302f9b026 req-73a1da29-68a1-4954-9e6a-ec2c8b7b28c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:52 compute-0 nova_compute[260935]: 2025-10-11 09:26:52.638 2 DEBUG nova.compute.manager [req-b3ce5aff-6f1e-48de-ab60-7b6302f9b026 req-73a1da29-68a1-4954-9e6a-ec2c8b7b28c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] No waiting events found dispatching network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:26:52 compute-0 nova_compute[260935]: 2025-10-11 09:26:52.639 2 WARNING nova.compute.manager [req-b3ce5aff-6f1e-48de-ab60-7b6302f9b026 req-73a1da29-68a1-4954-9e6a-ec2c8b7b28c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received unexpected event network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 for instance with vm_state active and task_state None.
Oct 11 09:26:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 912 KiB/s rd, 3.9 MiB/s wr, 118 op/s
Oct 11 09:26:52 compute-0 nova_compute[260935]: 2025-10-11 09:26:52.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:53 compute-0 ceph-mon[74313]: pgmap v2557: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 912 KiB/s rd, 3.9 MiB/s wr, 118 op/s
Oct 11 09:26:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:53.673 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:26:53 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:53.674 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:26:53 compute-0 nova_compute[260935]: 2025-10-11 09:26:53.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:53 compute-0 sshd-session[402464]: Invalid user admin from 165.232.82.252 port 37222
Oct 11 09:26:54 compute-0 sshd-session[402464]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:26:54 compute-0 sshd-session[402464]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:26:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:26:54 compute-0 nova_compute[260935]: 2025-10-11 09:26:54.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:54 compute-0 nova_compute[260935]: 2025-10-11 09:26:54.681 2 DEBUG nova.compute.manager [req-7fd3cbdb-80e8-4b4e-b0d6-8a04ee458487 req-54a301de-97bd-4934-81d0-8818dc2f8a04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-changed-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:26:54 compute-0 nova_compute[260935]: 2025-10-11 09:26:54.681 2 DEBUG nova.compute.manager [req-7fd3cbdb-80e8-4b4e-b0d6-8a04ee458487 req-54a301de-97bd-4934-81d0-8818dc2f8a04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Refreshing instance network info cache due to event network-changed-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:26:54 compute-0 nova_compute[260935]: 2025-10-11 09:26:54.681 2 DEBUG oslo_concurrency.lockutils [req-7fd3cbdb-80e8-4b4e-b0d6-8a04ee458487 req-54a301de-97bd-4934-81d0-8818dc2f8a04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:26:54 compute-0 nova_compute[260935]: 2025-10-11 09:26:54.682 2 DEBUG oslo_concurrency.lockutils [req-7fd3cbdb-80e8-4b4e-b0d6-8a04ee458487 req-54a301de-97bd-4934-81d0-8818dc2f8a04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:26:54 compute-0 nova_compute[260935]: 2025-10-11 09:26:54.682 2 DEBUG nova.network.neutron [req-7fd3cbdb-80e8-4b4e-b0d6-8a04ee458487 req-54a301de-97bd-4934-81d0-8818dc2f8a04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Refreshing network info cache for port 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:26:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:26:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:26:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:26:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:26:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:26:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:26:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2558: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 1.1 MiB/s wr, 75 op/s
Oct 11 09:26:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:26:54
Oct 11 09:26:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:26:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:26:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['images', '.rgw.root', '.mgr', 'backups', 'default.rgw.log', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data']
Oct 11 09:26:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:26:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:26:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:26:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:26:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:26:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:26:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:26:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:26:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:26:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:26:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:26:55 compute-0 sshd-session[402464]: Failed password for invalid user admin from 165.232.82.252 port 37222 ssh2
Oct 11 09:26:55 compute-0 podman[402466]: 2025-10-11 09:26:55.784195351 +0000 UTC m=+0.084430214 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 09:26:55 compute-0 podman[402467]: 2025-10-11 09:26:55.817578663 +0000 UTC m=+0.110250792 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 09:26:56 compute-0 ceph-mon[74313]: pgmap v2558: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 1.1 MiB/s wr, 75 op/s
Oct 11 09:26:56 compute-0 nova_compute[260935]: 2025-10-11 09:26:56.083 2 DEBUG nova.network.neutron [req-7fd3cbdb-80e8-4b4e-b0d6-8a04ee458487 req-54a301de-97bd-4934-81d0-8818dc2f8a04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Updated VIF entry in instance network info cache for port 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:26:56 compute-0 nova_compute[260935]: 2025-10-11 09:26:56.084 2 DEBUG nova.network.neutron [req-7fd3cbdb-80e8-4b4e-b0d6-8a04ee458487 req-54a301de-97bd-4934-81d0-8818dc2f8a04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Updating instance_info_cache with network_info: [{"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:26:56 compute-0 nova_compute[260935]: 2025-10-11 09:26:56.130 2 DEBUG oslo_concurrency.lockutils [req-7fd3cbdb-80e8-4b4e-b0d6-8a04ee458487 req-54a301de-97bd-4934-81d0-8818dc2f8a04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:26:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 1.1 MiB/s wr, 75 op/s
Oct 11 09:26:57 compute-0 sshd-session[402464]: Connection closed by invalid user admin 165.232.82.252 port 37222 [preauth]
Oct 11 09:26:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:26:57.676 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:26:57 compute-0 nova_compute[260935]: 2025-10-11 09:26:57.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:58 compute-0 ceph-mon[74313]: pgmap v2559: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 1.1 MiB/s wr, 75 op/s
Oct 11 09:26:58 compute-0 nova_compute[260935]: 2025-10-11 09:26:58.696 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:58 compute-0 nova_compute[260935]: 2025-10-11 09:26:58.696 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:58 compute-0 nova_compute[260935]: 2025-10-11 09:26:58.727 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:26:58 compute-0 nova_compute[260935]: 2025-10-11 09:26:58.843 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:26:58 compute-0 nova_compute[260935]: 2025-10-11 09:26:58.844 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:26:58 compute-0 nova_compute[260935]: 2025-10-11 09:26:58.853 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:26:58 compute-0 nova_compute[260935]: 2025-10-11 09:26:58.853 2 INFO nova.compute.claims [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:26:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2560: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 120 op/s
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.109 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:26:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:26:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:26:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2143380819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.568 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.575 2 DEBUG nova.compute.provider_tree [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.597 2 DEBUG nova.scheduler.client.report [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.632 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.633 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.704 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.704 2 DEBUG nova.network.neutron [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.723 2 INFO nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.743 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.849 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.850 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.850 2 INFO nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Creating image(s)
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.875 2 DEBUG nova.storage.rbd_utils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ac163a38-d5cb-4d00-af1b-f3361849dd68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.899 2 DEBUG nova.storage.rbd_utils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ac163a38-d5cb-4d00-af1b-f3361849dd68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.927 2 DEBUG nova.storage.rbd_utils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ac163a38-d5cb-4d00-af1b-f3361849dd68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:26:59 compute-0 nova_compute[260935]: 2025-10-11 09:26:59.931 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:00 compute-0 nova_compute[260935]: 2025-10-11 09:27:00.009 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:00 compute-0 nova_compute[260935]: 2025-10-11 09:27:00.010 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:00 compute-0 nova_compute[260935]: 2025-10-11 09:27:00.010 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:00 compute-0 nova_compute[260935]: 2025-10-11 09:27:00.011 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:00 compute-0 ceph-mon[74313]: pgmap v2560: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 120 op/s
Oct 11 09:27:00 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2143380819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:27:00 compute-0 nova_compute[260935]: 2025-10-11 09:27:00.034 2 DEBUG nova.storage.rbd_utils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ac163a38-d5cb-4d00-af1b-f3361849dd68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:27:00 compute-0 nova_compute[260935]: 2025-10-11 09:27:00.038 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ac163a38-d5cb-4d00-af1b-f3361849dd68_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:00 compute-0 nova_compute[260935]: 2025-10-11 09:27:00.335 2 DEBUG nova.policy [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:27:00 compute-0 nova_compute[260935]: 2025-10-11 09:27:00.395 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 ac163a38-d5cb-4d00-af1b-f3361849dd68_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:00 compute-0 nova_compute[260935]: 2025-10-11 09:27:00.495 2 DEBUG nova.storage.rbd_utils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image ac163a38-d5cb-4d00-af1b-f3361849dd68_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:27:00 compute-0 nova_compute[260935]: 2025-10-11 09:27:00.602 2 DEBUG nova.objects.instance [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid ac163a38-d5cb-4d00-af1b-f3361849dd68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:27:00 compute-0 nova_compute[260935]: 2025-10-11 09:27:00.667 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:27:00 compute-0 nova_compute[260935]: 2025-10-11 09:27:00.668 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Ensure instance console log exists: /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:27:00 compute-0 nova_compute[260935]: 2025-10-11 09:27:00.669 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:00 compute-0 nova_compute[260935]: 2025-10-11 09:27:00.669 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:00 compute-0 nova_compute[260935]: 2025-10-11 09:27:00.670 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2561: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 107 KiB/s wr, 87 op/s
Oct 11 09:27:02 compute-0 ceph-mon[74313]: pgmap v2561: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 107 KiB/s wr, 87 op/s
Oct 11 09:27:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 585 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 126 op/s
Oct 11 09:27:02 compute-0 nova_compute[260935]: 2025-10-11 09:27:02.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:03 compute-0 ceph-mon[74313]: pgmap v2562: 321 pgs: 321 active+clean; 585 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 126 op/s
Oct 11 09:27:03 compute-0 nova_compute[260935]: 2025-10-11 09:27:03.386 2 DEBUG nova.network.neutron [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Successfully created port: d227ca0f-e837-4a46-be0f-e66b72db8028 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:27:03 compute-0 ovn_controller[152945]: 2025-10-11T09:27:03Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:8a:b1 10.100.0.8
Oct 11 09:27:03 compute-0 ovn_controller[152945]: 2025-10-11T09:27:03Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:8a:b1 10.100.0.8
Oct 11 09:27:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:27:04 compute-0 nova_compute[260935]: 2025-10-11 09:27:04.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:04 compute-0 nova_compute[260935]: 2025-10-11 09:27:04.818 2 DEBUG nova.network.neutron [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Successfully created port: 731dd7de-8de3-4b67-a34b-8fef195606b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:27:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 585 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.3 MiB/s wr, 82 op/s
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005020804335970568 of space, bias 1.0, pg target 1.5062413007911704 quantized to 32 (current 32)
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:27:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:27:05 compute-0 nova_compute[260935]: 2025-10-11 09:27:05.393 2 DEBUG nova.network.neutron [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Successfully updated port: d227ca0f-e837-4a46-be0f-e66b72db8028 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:27:05 compute-0 nova_compute[260935]: 2025-10-11 09:27:05.547 2 DEBUG nova.compute.manager [req-e24c0ff0-7afc-429a-87c9-8ed706c6f620 req-292f4b59-68cd-413b-b588-0cbc1319fe76 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-changed-d227ca0f-e837-4a46-be0f-e66b72db8028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:05 compute-0 nova_compute[260935]: 2025-10-11 09:27:05.548 2 DEBUG nova.compute.manager [req-e24c0ff0-7afc-429a-87c9-8ed706c6f620 req-292f4b59-68cd-413b-b588-0cbc1319fe76 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Refreshing instance network info cache due to event network-changed-d227ca0f-e837-4a46-be0f-e66b72db8028. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:27:05 compute-0 nova_compute[260935]: 2025-10-11 09:27:05.548 2 DEBUG oslo_concurrency.lockutils [req-e24c0ff0-7afc-429a-87c9-8ed706c6f620 req-292f4b59-68cd-413b-b588-0cbc1319fe76 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:27:05 compute-0 nova_compute[260935]: 2025-10-11 09:27:05.549 2 DEBUG oslo_concurrency.lockutils [req-e24c0ff0-7afc-429a-87c9-8ed706c6f620 req-292f4b59-68cd-413b-b588-0cbc1319fe76 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:27:05 compute-0 nova_compute[260935]: 2025-10-11 09:27:05.549 2 DEBUG nova.network.neutron [req-e24c0ff0-7afc-429a-87c9-8ed706c6f620 req-292f4b59-68cd-413b-b588-0cbc1319fe76 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Refreshing network info cache for port d227ca0f-e837-4a46-be0f-e66b72db8028 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:27:06 compute-0 ceph-mon[74313]: pgmap v2563: 321 pgs: 321 active+clean; 585 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.3 MiB/s wr, 82 op/s
Oct 11 09:27:06 compute-0 nova_compute[260935]: 2025-10-11 09:27:06.157 2 DEBUG nova.network.neutron [req-e24c0ff0-7afc-429a-87c9-8ed706c6f620 req-292f4b59-68cd-413b-b588-0cbc1319fe76 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:27:06 compute-0 nova_compute[260935]: 2025-10-11 09:27:06.559 2 DEBUG nova.network.neutron [req-e24c0ff0-7afc-429a-87c9-8ed706c6f620 req-292f4b59-68cd-413b-b588-0cbc1319fe76 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:27:06 compute-0 nova_compute[260935]: 2025-10-11 09:27:06.587 2 DEBUG oslo_concurrency.lockutils [req-e24c0ff0-7afc-429a-87c9-8ed706c6f620 req-292f4b59-68cd-413b-b588-0cbc1319fe76 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:27:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2564: 321 pgs: 321 active+clean; 585 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.3 MiB/s wr, 82 op/s
Oct 11 09:27:07 compute-0 ceph-mon[74313]: pgmap v2564: 321 pgs: 321 active+clean; 585 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.3 MiB/s wr, 82 op/s
Oct 11 09:27:07 compute-0 nova_compute[260935]: 2025-10-11 09:27:07.317 2 DEBUG nova.network.neutron [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Successfully updated port: 731dd7de-8de3-4b67-a34b-8fef195606b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:27:07 compute-0 nova_compute[260935]: 2025-10-11 09:27:07.361 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:27:07 compute-0 nova_compute[260935]: 2025-10-11 09:27:07.362 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:27:07 compute-0 nova_compute[260935]: 2025-10-11 09:27:07.363 2 DEBUG nova.network.neutron [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:27:07 compute-0 nova_compute[260935]: 2025-10-11 09:27:07.543 2 DEBUG nova.network.neutron [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:27:07 compute-0 nova_compute[260935]: 2025-10-11 09:27:07.667 2 DEBUG nova.compute.manager [req-4b417079-6517-4b3e-86f6-5c51c930ea43 req-18404a90-d79e-4ce2-bd3b-531720486135 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-changed-731dd7de-8de3-4b67-a34b-8fef195606b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:07 compute-0 nova_compute[260935]: 2025-10-11 09:27:07.669 2 DEBUG nova.compute.manager [req-4b417079-6517-4b3e-86f6-5c51c930ea43 req-18404a90-d79e-4ce2-bd3b-531720486135 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Refreshing instance network info cache due to event network-changed-731dd7de-8de3-4b67-a34b-8fef195606b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:27:07 compute-0 nova_compute[260935]: 2025-10-11 09:27:07.669 2 DEBUG oslo_concurrency.lockutils [req-4b417079-6517-4b3e-86f6-5c51c930ea43 req-18404a90-d79e-4ce2-bd3b-531720486135 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:27:07 compute-0 nova_compute[260935]: 2025-10-11 09:27:07.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 135 op/s
Oct 11 09:27:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.645 2 DEBUG nova.network.neutron [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updating instance_info_cache with network_info: [{"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.666 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.667 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Instance network_info: |[{"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.667 2 DEBUG oslo_concurrency.lockutils [req-4b417079-6517-4b3e-86f6-5c51c930ea43 req-18404a90-d79e-4ce2-bd3b-531720486135 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.667 2 DEBUG nova.network.neutron [req-4b417079-6517-4b3e-86f6-5c51c930ea43 req-18404a90-d79e-4ce2-bd3b-531720486135 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Refreshing network info cache for port 731dd7de-8de3-4b67-a34b-8fef195606b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.670 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Start _get_guest_xml network_info=[{"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.675 2 WARNING nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.681 2 DEBUG nova.virt.libvirt.host [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.681 2 DEBUG nova.virt.libvirt.host [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.686 2 DEBUG nova.virt.libvirt.host [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.686 2 DEBUG nova.virt.libvirt.host [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.686 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.687 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.687 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.687 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.687 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.687 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.688 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.688 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.688 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.688 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.688 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.688 2 DEBUG nova.virt.hardware [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:27:09 compute-0 nova_compute[260935]: 2025-10-11 09:27:09.691 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:10 compute-0 ceph-mon[74313]: pgmap v2565: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 135 op/s
Oct 11 09:27:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:27:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3388837810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.222 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.242 2 DEBUG nova.storage.rbd_utils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ac163a38-d5cb-4d00-af1b-f3361849dd68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.246 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.531 2 INFO nova.compute.manager [None req-91844f93-46e5-4b48-81de-2ce889f60cc5 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Get console output
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.538 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:27:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:27:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/481952330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.726 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.728 2 DEBUG nova.virt.libvirt.vif [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-8061287',display_name='tempest-TestGettingAddress-server-8061287',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-8061287',id=130,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-ptk677gz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:59Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ac163a38-d5cb-4d00-af1b-f3361849dd68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.729 2 DEBUG nova.network.os_vif_util [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.730 2 DEBUG nova.network.os_vif_util [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:a2:ce,bridge_name='br-int',has_traffic_filtering=True,id=d227ca0f-e837-4a46-be0f-e66b72db8028,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd227ca0f-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.732 2 DEBUG nova.virt.libvirt.vif [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-8061287',display_name='tempest-TestGettingAddress-server-8061287',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-8061287',id=130,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-ptk677gz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:59Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ac163a38-d5cb-4d00-af1b-f3361849dd68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.732 2 DEBUG nova.network.os_vif_util [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.735 2 DEBUG nova.network.os_vif_util [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:40:03,bridge_name='br-int',has_traffic_filtering=True,id=731dd7de-8de3-4b67-a34b-8fef195606b3,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap731dd7de-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.737 2 DEBUG nova.objects.instance [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid ac163a38-d5cb-4d00-af1b-f3361849dd68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.785 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:27:10 compute-0 nova_compute[260935]:   <uuid>ac163a38-d5cb-4d00-af1b-f3361849dd68</uuid>
Oct 11 09:27:10 compute-0 nova_compute[260935]:   <name>instance-00000082</name>
Oct 11 09:27:10 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:27:10 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:27:10 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <nova:name>tempest-TestGettingAddress-server-8061287</nova:name>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:27:09</nova:creationTime>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:27:10 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:27:10 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:27:10 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:27:10 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:27:10 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:27:10 compute-0 nova_compute[260935]:         <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 09:27:10 compute-0 nova_compute[260935]:         <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:27:10 compute-0 nova_compute[260935]:         <nova:port uuid="d227ca0f-e837-4a46-be0f-e66b72db8028">
Oct 11 09:27:10 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:27:10 compute-0 nova_compute[260935]:         <nova:port uuid="731dd7de-8de3-4b67-a34b-8fef195606b3">
Oct 11 09:27:10 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe3b:4003" ipVersion="6"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:27:10 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:27:10 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <system>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <entry name="serial">ac163a38-d5cb-4d00-af1b-f3361849dd68</entry>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <entry name="uuid">ac163a38-d5cb-4d00-af1b-f3361849dd68</entry>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     </system>
Oct 11 09:27:10 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:27:10 compute-0 nova_compute[260935]:   <os>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:   </os>
Oct 11 09:27:10 compute-0 nova_compute[260935]:   <features>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:   </features>
Oct 11 09:27:10 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:27:10 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:27:10 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ac163a38-d5cb-4d00-af1b-f3361849dd68_disk">
Oct 11 09:27:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       </source>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:27:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/ac163a38-d5cb-4d00-af1b-f3361849dd68_disk.config">
Oct 11 09:27:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       </source>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:27:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:01:a2:ce"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <target dev="tapd227ca0f-e8"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:3b:40:03"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <target dev="tap731dd7de-8d"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68/console.log" append="off"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <video>
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     </video>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:27:10 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:27:10 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:27:10 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:27:10 compute-0 nova_compute[260935]: </domain>
Oct 11 09:27:10 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.786 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Preparing to wait for external event network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.787 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.787 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.788 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.788 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Preparing to wait for external event network-vif-plugged-731dd7de-8de3-4b67-a34b-8fef195606b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.789 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.789 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.790 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.791 2 DEBUG nova.virt.libvirt.vif [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-8061287',display_name='tempest-TestGettingAddress-server-8061287',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-8061287',id=130,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-ptk677gz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:59Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ac163a38-d5cb-4d00-af1b-f3361849dd68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.791 2 DEBUG nova.network.os_vif_util [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.792 2 DEBUG nova.network.os_vif_util [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:a2:ce,bridge_name='br-int',has_traffic_filtering=True,id=d227ca0f-e837-4a46-be0f-e66b72db8028,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd227ca0f-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.793 2 DEBUG os_vif [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:a2:ce,bridge_name='br-int',has_traffic_filtering=True,id=d227ca0f-e837-4a46-be0f-e66b72db8028,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd227ca0f-e8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.795 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.796 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.801 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd227ca0f-e8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.802 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd227ca0f-e8, col_values=(('external_ids', {'iface-id': 'd227ca0f-e837-4a46-be0f-e66b72db8028', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:a2:ce', 'vm-uuid': 'ac163a38-d5cb-4d00-af1b-f3361849dd68'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:10 compute-0 NetworkManager[44960]: <info>  [1760174830.8055] manager: (tapd227ca0f-e8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/555)
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.818 2 INFO os_vif [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:a2:ce,bridge_name='br-int',has_traffic_filtering=True,id=d227ca0f-e837-4a46-be0f-e66b72db8028,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd227ca0f-e8')
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.820 2 DEBUG nova.virt.libvirt.vif [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-8061287',display_name='tempest-TestGettingAddress-server-8061287',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-8061287',id=130,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-ptk677gz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:26:59Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ac163a38-d5cb-4d00-af1b-f3361849dd68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.820 2 DEBUG nova.network.os_vif_util [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.821 2 DEBUG nova.network.os_vif_util [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:40:03,bridge_name='br-int',has_traffic_filtering=True,id=731dd7de-8de3-4b67-a34b-8fef195606b3,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap731dd7de-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.822 2 DEBUG os_vif [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:40:03,bridge_name='br-int',has_traffic_filtering=True,id=731dd7de-8de3-4b67-a34b-8fef195606b3,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap731dd7de-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.823 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.823 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.827 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap731dd7de-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.827 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap731dd7de-8d, col_values=(('external_ids', {'iface-id': '731dd7de-8de3-4b67-a34b-8fef195606b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:40:03', 'vm-uuid': 'ac163a38-d5cb-4d00-af1b-f3361849dd68'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:10 compute-0 NetworkManager[44960]: <info>  [1760174830.8303] manager: (tap731dd7de-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.843 2 INFO os_vif [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:40:03,bridge_name='br-int',has_traffic_filtering=True,id=731dd7de-8de3-4b67-a34b-8fef195606b3,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap731dd7de-8d')
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.918 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.918 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.919 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:01:a2:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.919 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:3b:40:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.920 2 INFO nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Using config drive
Oct 11 09:27:10 compute-0 nova_compute[260935]: 2025-10-11 09:27:10.955 2 DEBUG nova.storage.rbd_utils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ac163a38-d5cb-4d00-af1b-f3361849dd68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:27:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 11 09:27:11 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3388837810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:27:11 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/481952330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:27:11 compute-0 nova_compute[260935]: 2025-10-11 09:27:11.575 2 INFO nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Creating config drive at /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68/disk.config
Oct 11 09:27:11 compute-0 nova_compute[260935]: 2025-10-11 09:27:11.585 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjqcgkiid execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:11 compute-0 nova_compute[260935]: 2025-10-11 09:27:11.741 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjqcgkiid" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:11 compute-0 nova_compute[260935]: 2025-10-11 09:27:11.780 2 DEBUG nova.storage.rbd_utils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image ac163a38-d5cb-4d00-af1b-f3361849dd68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:27:11 compute-0 nova_compute[260935]: 2025-10-11 09:27:11.785 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68/disk.config ac163a38-d5cb-4d00-af1b-f3361849dd68_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:11 compute-0 nova_compute[260935]: 2025-10-11 09:27:11.835 2 DEBUG nova.network.neutron [req-4b417079-6517-4b3e-86f6-5c51c930ea43 req-18404a90-d79e-4ce2-bd3b-531720486135 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updated VIF entry in instance network info cache for port 731dd7de-8de3-4b67-a34b-8fef195606b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:27:11 compute-0 nova_compute[260935]: 2025-10-11 09:27:11.836 2 DEBUG nova.network.neutron [req-4b417079-6517-4b3e-86f6-5c51c930ea43 req-18404a90-d79e-4ce2-bd3b-531720486135 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updating instance_info_cache with network_info: [{"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:27:11 compute-0 nova_compute[260935]: 2025-10-11 09:27:11.846 2 DEBUG nova.compute.manager [req-b81c69c2-c6e5-4812-a275-03e82918d592 req-e43ce0ac-486e-491b-b145-53bcf7e3d97f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:11 compute-0 nova_compute[260935]: 2025-10-11 09:27:11.847 2 DEBUG nova.compute.manager [req-b81c69c2-c6e5-4812-a275-03e82918d592 req-e43ce0ac-486e-491b-b145-53bcf7e3d97f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing instance network info cache due to event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:27:11 compute-0 nova_compute[260935]: 2025-10-11 09:27:11.847 2 DEBUG oslo_concurrency.lockutils [req-b81c69c2-c6e5-4812-a275-03e82918d592 req-e43ce0ac-486e-491b-b145-53bcf7e3d97f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:27:11 compute-0 nova_compute[260935]: 2025-10-11 09:27:11.847 2 DEBUG oslo_concurrency.lockutils [req-b81c69c2-c6e5-4812-a275-03e82918d592 req-e43ce0ac-486e-491b-b145-53bcf7e3d97f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:27:11 compute-0 nova_compute[260935]: 2025-10-11 09:27:11.847 2 DEBUG nova.network.neutron [req-b81c69c2-c6e5-4812-a275-03e82918d592 req-e43ce0ac-486e-491b-b145-53bcf7e3d97f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:27:11 compute-0 nova_compute[260935]: 2025-10-11 09:27:11.875 2 DEBUG oslo_concurrency.lockutils [req-4b417079-6517-4b3e-86f6-5c51c930ea43 req-18404a90-d79e-4ce2-bd3b-531720486135 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:27:11 compute-0 nova_compute[260935]: 2025-10-11 09:27:11.991 2 DEBUG oslo_concurrency.processutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68/disk.config ac163a38-d5cb-4d00-af1b-f3361849dd68_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:11 compute-0 nova_compute[260935]: 2025-10-11 09:27:11.991 2 INFO nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Deleting local config drive /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68/disk.config because it was imported into RBD.
Oct 11 09:27:12 compute-0 ceph-mon[74313]: pgmap v2566: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 11 09:27:12 compute-0 kernel: tapd227ca0f-e8: entered promiscuous mode
Oct 11 09:27:12 compute-0 NetworkManager[44960]: <info>  [1760174832.0638] manager: (tapd227ca0f-e8): new Tun device (/org/freedesktop/NetworkManager/Devices/557)
Oct 11 09:27:12 compute-0 ovn_controller[152945]: 2025-10-11T09:27:12Z|01407|binding|INFO|Claiming lport d227ca0f-e837-4a46-be0f-e66b72db8028 for this chassis.
Oct 11 09:27:12 compute-0 ovn_controller[152945]: 2025-10-11T09:27:12Z|01408|binding|INFO|d227ca0f-e837-4a46-be0f-e66b72db8028: Claiming fa:16:3e:01:a2:ce 10.100.0.6
Oct 11 09:27:12 compute-0 nova_compute[260935]: 2025-10-11 09:27:12.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.080 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:a2:ce 10.100.0.6'], port_security=['fa:16:3e:01:a2:ce 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'ac163a38-d5cb-4d00-af1b-f3361849dd68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-024b1f88-7312-4f05-a55e-4c82e878906e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'efa7de30-a0ba-4f09-b29e-87a969a72d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a6a9a27-420b-4816-9550-7c34ef42c810, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d227ca0f-e837-4a46-be0f-e66b72db8028) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.082 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d227ca0f-e837-4a46-be0f-e66b72db8028 in datapath 024b1f88-7312-4f05-a55e-4c82e878906e bound to our chassis
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.084 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 024b1f88-7312-4f05-a55e-4c82e878906e
Oct 11 09:27:12 compute-0 NetworkManager[44960]: <info>  [1760174832.0993] manager: (tap731dd7de-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/558)
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.106 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c3dd67b0-09b2-4b33-b734-1d84e0abbb10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:12 compute-0 kernel: tap731dd7de-8d: entered promiscuous mode
Oct 11 09:27:12 compute-0 systemd-udevd[402839]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:27:12 compute-0 nova_compute[260935]: 2025-10-11 09:27:12.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:12 compute-0 systemd-udevd[402838]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:27:12 compute-0 ovn_controller[152945]: 2025-10-11T09:27:12Z|01409|binding|INFO|Setting lport d227ca0f-e837-4a46-be0f-e66b72db8028 ovn-installed in OVS
Oct 11 09:27:12 compute-0 ovn_controller[152945]: 2025-10-11T09:27:12Z|01410|binding|INFO|Setting lport d227ca0f-e837-4a46-be0f-e66b72db8028 up in Southbound
Oct 11 09:27:12 compute-0 ovn_controller[152945]: 2025-10-11T09:27:12Z|01411|if_status|INFO|Dropped 1 log messages in last 43 seconds (most recently, 43 seconds ago) due to excessive rate
Oct 11 09:27:12 compute-0 ovn_controller[152945]: 2025-10-11T09:27:12Z|01412|if_status|INFO|Not updating pb chassis for 731dd7de-8de3-4b67-a34b-8fef195606b3 now as sb is readonly
Oct 11 09:27:12 compute-0 ovn_controller[152945]: 2025-10-11T09:27:12Z|01413|binding|INFO|Claiming lport 731dd7de-8de3-4b67-a34b-8fef195606b3 for this chassis.
Oct 11 09:27:12 compute-0 ovn_controller[152945]: 2025-10-11T09:27:12Z|01414|binding|INFO|731dd7de-8de3-4b67-a34b-8fef195606b3: Claiming fa:16:3e:3b:40:03 2001:db8::f816:3eff:fe3b:4003
Oct 11 09:27:12 compute-0 NetworkManager[44960]: <info>  [1760174832.1315] device (tap731dd7de-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:27:12 compute-0 NetworkManager[44960]: <info>  [1760174832.1326] device (tapd227ca0f-e8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:27:12 compute-0 NetworkManager[44960]: <info>  [1760174832.1337] device (tap731dd7de-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:27:12 compute-0 NetworkManager[44960]: <info>  [1760174832.1343] device (tapd227ca0f-e8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.135 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:40:03 2001:db8::f816:3eff:fe3b:4003'], port_security=['fa:16:3e:3b:40:03 2001:db8::f816:3eff:fe3b:4003'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe3b:4003/64', 'neutron:device_id': 'ac163a38-d5cb-4d00-af1b-f3361849dd68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-894decad-3bed-4c55-b643-5fbe5479bf3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'efa7de30-a0ba-4f09-b29e-87a969a72d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c88be83-b34c-4a0a-8d40-8a42bbccb566, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=731dd7de-8de3-4b67-a34b-8fef195606b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:27:12 compute-0 ovn_controller[152945]: 2025-10-11T09:27:12Z|01415|binding|INFO|Setting lport 731dd7de-8de3-4b67-a34b-8fef195606b3 ovn-installed in OVS
Oct 11 09:27:12 compute-0 ovn_controller[152945]: 2025-10-11T09:27:12Z|01416|binding|INFO|Setting lport 731dd7de-8de3-4b67-a34b-8fef195606b3 up in Southbound
Oct 11 09:27:12 compute-0 nova_compute[260935]: 2025-10-11 09:27:12.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:12 compute-0 systemd-machined[215705]: New machine qemu-154-instance-00000082.
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.163 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[35a32cbf-799d-45a4-88bc-0cbec413bb8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:12 compute-0 systemd[1]: Started Virtual Machine qemu-154-instance-00000082.
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.169 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[838f30c7-07a5-4399-b02c-fa5827fc43c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.220 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[494245af-f2f9-49d9-9c51-6623036064f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.247 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[62c8adae-94d2-43a9-95c3-8b64afb6777f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap024b1f88-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:47:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663412, 'reachable_time': 33597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402854, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.280 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[efd5070b-7418-44b0-ac05-1cf73b7c1ccf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap024b1f88-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663430, 'tstamp': 663430}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402855, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap024b1f88-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663436, 'tstamp': 663436}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402855, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.284 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap024b1f88-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:12 compute-0 nova_compute[260935]: 2025-10-11 09:27:12.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:12 compute-0 nova_compute[260935]: 2025-10-11 09:27:12.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.288 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap024b1f88-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.289 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.290 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap024b1f88-70, col_values=(('external_ids', {'iface-id': '219b454c-6c32-4a38-b10c-afcdb1be1980'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.290 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.293 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 731dd7de-8de3-4b67-a34b-8fef195606b3 in datapath 894decad-3bed-4c55-b643-5fbe5479bf3f unbound from our chassis
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.296 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 894decad-3bed-4c55-b643-5fbe5479bf3f
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.321 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd82d46-40c7-46de-b7ab-aec26369bf36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.363 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b662dc-8091-4a25-8535-94af356b3215]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.371 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0e103fab-8777-44ce-8b8e-a89a6fa7acc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.415 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e1cae18d-e52f-4070-b525-ba76ced7de4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.442 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bb523604-bab9-4d7c-9487-190427531255]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap894decad-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:b0:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1802, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1802, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663642, 'reachable_time': 44137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 19, 'inoctets': 1536, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 19, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1536, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 19, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402862, 'error': None, 'target': 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.465 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2ac89d-c545-490b-8395-a3f45c688a1b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap894decad-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663658, 'tstamp': 663658}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402863, 'error': None, 'target': 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.467 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap894decad-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:12 compute-0 nova_compute[260935]: 2025-10-11 09:27:12.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:12 compute-0 nova_compute[260935]: 2025-10-11 09:27:12.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.471 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap894decad-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.471 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.471 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap894decad-30, col_values=(('external_ids', {'iface-id': '48cfaf23-d7a3-467e-89cb-3eb3687ea15e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:12.472 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:27:12 compute-0 nova_compute[260935]: 2025-10-11 09:27:12.542 2 DEBUG nova.compute.manager [req-da329335-0e74-4da8-a01c-8c38fe28e27e req-1be3059b-66ae-4bbd-812c-80daa179ac5e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:12 compute-0 nova_compute[260935]: 2025-10-11 09:27:12.543 2 DEBUG oslo_concurrency.lockutils [req-da329335-0e74-4da8-a01c-8c38fe28e27e req-1be3059b-66ae-4bbd-812c-80daa179ac5e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:12 compute-0 nova_compute[260935]: 2025-10-11 09:27:12.543 2 DEBUG oslo_concurrency.lockutils [req-da329335-0e74-4da8-a01c-8c38fe28e27e req-1be3059b-66ae-4bbd-812c-80daa179ac5e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:12 compute-0 nova_compute[260935]: 2025-10-11 09:27:12.543 2 DEBUG oslo_concurrency.lockutils [req-da329335-0e74-4da8-a01c-8c38fe28e27e req-1be3059b-66ae-4bbd-812c-80daa179ac5e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:12 compute-0 nova_compute[260935]: 2025-10-11 09:27:12.544 2 DEBUG nova.compute.manager [req-da329335-0e74-4da8-a01c-8c38fe28e27e req-1be3059b-66ae-4bbd-812c-80daa179ac5e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Processing event network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:27:12 compute-0 nova_compute[260935]: 2025-10-11 09:27:12.789 2 INFO nova.compute.manager [None req-2d19f315-b95a-4eaf-b007-57cfe1500179 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Get console output
Oct 11 09:27:12 compute-0 nova_compute[260935]: 2025-10-11 09:27:12.799 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:27:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 11 09:27:12 compute-0 nova_compute[260935]: 2025-10-11 09:27:12.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.428 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174833.4274435, ac163a38-d5cb-4d00-af1b-f3361849dd68 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.428 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] VM Started (Lifecycle Event)
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.456 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.463 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174833.4277086, ac163a38-d5cb-4d00-af1b-f3361849dd68 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.463 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] VM Paused (Lifecycle Event)
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.483 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.487 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.508 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.511 2 DEBUG nova.network.neutron [req-b81c69c2-c6e5-4812-a275-03e82918d592 req-e43ce0ac-486e-491b-b145-53bcf7e3d97f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updated VIF entry in instance network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.512 2 DEBUG nova.network.neutron [req-b81c69c2-c6e5-4812-a275-03e82918d592 req-e43ce0ac-486e-491b-b145-53bcf7e3d97f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updating instance_info_cache with network_info: [{"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.528 2 DEBUG oslo_concurrency.lockutils [req-b81c69c2-c6e5-4812-a275-03e82918d592 req-e43ce0ac-486e-491b-b145-53bcf7e3d97f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.859 2 DEBUG nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-unplugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.860 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.860 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.861 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.861 2 DEBUG nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] No waiting events found dispatching network-vif-unplugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.862 2 WARNING nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received unexpected event network-vif-unplugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 for instance with vm_state active and task_state None.
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.863 2 DEBUG nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.863 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.864 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.864 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.865 2 DEBUG nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] No waiting events found dispatching network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.865 2 WARNING nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received unexpected event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 for instance with vm_state active and task_state None.
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.865 2 DEBUG nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-plugged-731dd7de-8de3-4b67-a34b-8fef195606b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.866 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.866 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.867 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.867 2 DEBUG nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Processing event network-vif-plugged-731dd7de-8de3-4b67-a34b-8fef195606b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.868 2 DEBUG nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-plugged-731dd7de-8de3-4b67-a34b-8fef195606b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.868 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.869 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.869 2 DEBUG oslo_concurrency.lockutils [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.870 2 DEBUG nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] No waiting events found dispatching network-vif-plugged-731dd7de-8de3-4b67-a34b-8fef195606b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.870 2 WARNING nova.compute.manager [req-9e513b07-234e-4e77-bbec-2ec68301c79c req-f0b23a8d-3d3c-41e9-8fdf-036abc4384c8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received unexpected event network-vif-plugged-731dd7de-8de3-4b67-a34b-8fef195606b3 for instance with vm_state building and task_state spawning.
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.871 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.877 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174833.8764253, ac163a38-d5cb-4d00-af1b-f3361849dd68 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.878 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] VM Resumed (Lifecycle Event)
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.883 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.891 2 INFO nova.virt.libvirt.driver [-] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Instance spawned successfully.
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.892 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.902 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.915 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.932 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.933 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.934 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.935 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.936 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.937 2 DEBUG nova.virt.libvirt.driver [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:27:13 compute-0 nova_compute[260935]: 2025-10-11 09:27:13.944 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:27:14 compute-0 nova_compute[260935]: 2025-10-11 09:27:14.011 2 INFO nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Took 14.16 seconds to spawn the instance on the hypervisor.
Oct 11 09:27:14 compute-0 nova_compute[260935]: 2025-10-11 09:27:14.011 2 DEBUG nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:27:14 compute-0 ceph-mon[74313]: pgmap v2567: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 11 09:27:14 compute-0 nova_compute[260935]: 2025-10-11 09:27:14.124 2 INFO nova.compute.manager [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Took 15.32 seconds to build instance.
Oct 11 09:27:14 compute-0 nova_compute[260935]: 2025-10-11 09:27:14.144 2 DEBUG oslo_concurrency.lockutils [None req-12b0b038-0758-4736-a8a1-a9c0c9c193a3 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:27:14 compute-0 nova_compute[260935]: 2025-10-11 09:27:14.629 2 DEBUG nova.compute.manager [req-bd896db1-732b-46d7-8d3a-ae33702a3160 req-d9e542f2-59e0-4e6a-968e-024cac88507a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:14 compute-0 nova_compute[260935]: 2025-10-11 09:27:14.630 2 DEBUG oslo_concurrency.lockutils [req-bd896db1-732b-46d7-8d3a-ae33702a3160 req-d9e542f2-59e0-4e6a-968e-024cac88507a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:14 compute-0 nova_compute[260935]: 2025-10-11 09:27:14.630 2 DEBUG oslo_concurrency.lockutils [req-bd896db1-732b-46d7-8d3a-ae33702a3160 req-d9e542f2-59e0-4e6a-968e-024cac88507a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:14 compute-0 nova_compute[260935]: 2025-10-11 09:27:14.630 2 DEBUG oslo_concurrency.lockutils [req-bd896db1-732b-46d7-8d3a-ae33702a3160 req-d9e542f2-59e0-4e6a-968e-024cac88507a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:14 compute-0 nova_compute[260935]: 2025-10-11 09:27:14.630 2 DEBUG nova.compute.manager [req-bd896db1-732b-46d7-8d3a-ae33702a3160 req-d9e542f2-59e0-4e6a-968e-024cac88507a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] No waiting events found dispatching network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:27:14 compute-0 nova_compute[260935]: 2025-10-11 09:27:14.631 2 WARNING nova.compute.manager [req-bd896db1-732b-46d7-8d3a-ae33702a3160 req-d9e542f2-59e0-4e6a-968e-024cac88507a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received unexpected event network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 for instance with vm_state active and task_state None.
Oct 11 09:27:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2568: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 307 KiB/s rd, 1.6 MiB/s wr, 58 op/s
Oct 11 09:27:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:15.223 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:15.224 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:15.226 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:15 compute-0 nova_compute[260935]: 2025-10-11 09:27:15.243 2 INFO nova.compute.manager [None req-e763e95b-d22f-463f-9863-71d739ff3653 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Get console output
Oct 11 09:27:15 compute-0 nova_compute[260935]: 2025-10-11 09:27:15.254 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:27:15 compute-0 nova_compute[260935]: 2025-10-11 09:27:15.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:15 compute-0 podman[402907]: 2025-10-11 09:27:15.833713424 +0000 UTC m=+0.121839040 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 09:27:15 compute-0 nova_compute[260935]: 2025-10-11 09:27:15.981 2 DEBUG nova.compute.manager [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:15 compute-0 nova_compute[260935]: 2025-10-11 09:27:15.982 2 DEBUG nova.compute.manager [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing instance network info cache due to event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:27:15 compute-0 nova_compute[260935]: 2025-10-11 09:27:15.982 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:27:15 compute-0 nova_compute[260935]: 2025-10-11 09:27:15.982 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:27:15 compute-0 nova_compute[260935]: 2025-10-11 09:27:15.983 2 DEBUG nova.network.neutron [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:27:16 compute-0 ceph-mon[74313]: pgmap v2568: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 307 KiB/s rd, 1.6 MiB/s wr, 58 op/s
Oct 11 09:27:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2569: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 307 KiB/s rd, 1.6 MiB/s wr, 58 op/s
Oct 11 09:27:17 compute-0 ceph-mon[74313]: pgmap v2569: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 307 KiB/s rd, 1.6 MiB/s wr, 58 op/s
Oct 11 09:27:17 compute-0 nova_compute[260935]: 2025-10-11 09:27:17.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.108 2 DEBUG nova.compute.manager [req-fe0d9dab-7a58-430c-aa24-9a590b71cc61 req-fe107f88-45b8-4d24-8f9a-cfd5b6ac2469 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-changed-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.108 2 DEBUG nova.compute.manager [req-fe0d9dab-7a58-430c-aa24-9a590b71cc61 req-fe107f88-45b8-4d24-8f9a-cfd5b6ac2469 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Refreshing instance network info cache due to event network-changed-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.109 2 DEBUG oslo_concurrency.lockutils [req-fe0d9dab-7a58-430c-aa24-9a590b71cc61 req-fe107f88-45b8-4d24-8f9a-cfd5b6ac2469 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.109 2 DEBUG oslo_concurrency.lockutils [req-fe0d9dab-7a58-430c-aa24-9a590b71cc61 req-fe107f88-45b8-4d24-8f9a-cfd5b6ac2469 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.109 2 DEBUG nova.network.neutron [req-fe0d9dab-7a58-430c-aa24-9a590b71cc61 req-fe107f88-45b8-4d24-8f9a-cfd5b6ac2469 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Refreshing network info cache for port 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.183 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.184 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.184 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.184 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.185 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.187 2 INFO nova.compute.manager [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Terminating instance
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.189 2 DEBUG nova.compute.manager [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:27:18 compute-0 kernel: tap733f68a9-9a (unregistering): left promiscuous mode
Oct 11 09:27:18 compute-0 NetworkManager[44960]: <info>  [1760174838.2619] device (tap733f68a9-9a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:18 compute-0 ovn_controller[152945]: 2025-10-11T09:27:18Z|01417|binding|INFO|Releasing lport 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 from this chassis (sb_readonly=0)
Oct 11 09:27:18 compute-0 ovn_controller[152945]: 2025-10-11T09:27:18Z|01418|binding|INFO|Setting lport 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 down in Southbound
Oct 11 09:27:18 compute-0 ovn_controller[152945]: 2025-10-11T09:27:18Z|01419|binding|INFO|Removing iface tap733f68a9-9a ovn-installed in OVS
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.290 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:8a:b1 10.100.0.8'], port_security=['fa:16:3e:aa:8a:b1 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ebf4e4f9-b225-4ba9-927e-0619aeea8d89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ed8dd5c0-e5cc-4f41-beee-a727185c1ca5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=419ebcda-831c-4f7b-8ef8-fba16bc71b52, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:27:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.293 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 in datapath b9f9ae84-9b18-48f7-bff2-94e8835de5c8 unbound from our chassis
Oct 11 09:27:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.296 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b9f9ae84-9b18-48f7-bff2-94e8835de5c8
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.322 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[85e29eec-020c-4dcb-9c3c-91e0a0c26489]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:18 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000081.scope: Deactivated successfully.
Oct 11 09:27:18 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000081.scope: Consumed 13.829s CPU time.
Oct 11 09:27:18 compute-0 systemd-machined[215705]: Machine qemu-153-instance-00000081 terminated.
Oct 11 09:27:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.355 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[a18dc9ab-fa2b-4809-8b31-d73374cd2026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.360 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[369bf491-c91e-42ce-a64a-a538161f58ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.390 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e2a4cc-4b9c-484f-8711-ac945328e706]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.445 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7554ac2d-5453-4dcb-89b5-5253200c47f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9f9ae84-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:ad:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662316, 'reachable_time': 34988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402938, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.462 2 INFO nova.virt.libvirt.driver [-] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Instance destroyed successfully.
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.462 2 DEBUG nova.objects.instance [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid ebf4e4f9-b225-4ba9-927e-0619aeea8d89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:27:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.471 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd82c6f-f666-484d-911d-f9b7b332a6ca]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb9f9ae84-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662335, 'tstamp': 662335}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402949, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb9f9ae84-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662340, 'tstamp': 662340}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402949, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.473 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9f9ae84-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.477 2 DEBUG nova.virt.libvirt.vif [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:26:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-642071395',display_name='tempest-TestNetworkBasicOps-server-642071395',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-642071395',id=129,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/a4lv10vOA8+Y0o9A9HssJRB57eWElESeZBiHZYTC4NgZskMwnl+IoMYFCrg3V9vZquWFF14c5AYcCtavq1tvAMKZzjjtkqimBk00BtDUv/kjU7cBauBpTA49aY1aFOA==',key_name='tempest-TestNetworkBasicOps-566812727',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:26:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-z5r1nfln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:26:51Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=ebf4e4f9-b225-4ba9-927e-0619aeea8d89,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.477 2 DEBUG nova.network.os_vif_util [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.478 2 DEBUG nova.network.os_vif_util [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:8a:b1,bridge_name='br-int',has_traffic_filtering=True,id=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733f68a9-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.478 2 DEBUG os_vif [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:8a:b1,bridge_name='br-int',has_traffic_filtering=True,id=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733f68a9-9a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.480 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap733f68a9-9a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.480 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9f9ae84-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.480 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:27:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.481 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb9f9ae84-90, col_values=(('external_ids', {'iface-id': '829ba2ca-e21f-4927-8525-5f43e59d37f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:18.481 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.485 2 INFO os_vif [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:8a:b1,bridge_name='br-int',has_traffic_filtering=True,id=733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap733f68a9-9a')
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.524 2 DEBUG nova.network.neutron [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updated VIF entry in instance network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.525 2 DEBUG nova.network.neutron [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updating instance_info_cache with network_info: [{"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.535 2 DEBUG nova.compute.manager [req-d954d31f-9861-4fd2-9506-b861a89bfb54 req-7e1baa71-a5e9-4f98-bcd7-5fb99efcf2d3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-vif-unplugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.535 2 DEBUG oslo_concurrency.lockutils [req-d954d31f-9861-4fd2-9506-b861a89bfb54 req-7e1baa71-a5e9-4f98-bcd7-5fb99efcf2d3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.536 2 DEBUG oslo_concurrency.lockutils [req-d954d31f-9861-4fd2-9506-b861a89bfb54 req-7e1baa71-a5e9-4f98-bcd7-5fb99efcf2d3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.536 2 DEBUG oslo_concurrency.lockutils [req-d954d31f-9861-4fd2-9506-b861a89bfb54 req-7e1baa71-a5e9-4f98-bcd7-5fb99efcf2d3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.536 2 DEBUG nova.compute.manager [req-d954d31f-9861-4fd2-9506-b861a89bfb54 req-7e1baa71-a5e9-4f98-bcd7-5fb99efcf2d3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] No waiting events found dispatching network-vif-unplugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.536 2 DEBUG nova.compute.manager [req-d954d31f-9861-4fd2-9506-b861a89bfb54 req-7e1baa71-a5e9-4f98-bcd7-5fb99efcf2d3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-vif-unplugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.539 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.539 2 DEBUG nova.compute.manager [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.540 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.540 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.540 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.540 2 DEBUG nova.compute.manager [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] No waiting events found dispatching network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.541 2 WARNING nova.compute.manager [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received unexpected event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 for instance with vm_state active and task_state None.
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.541 2 DEBUG nova.compute.manager [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.541 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.541 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.542 2 DEBUG oslo_concurrency.lockutils [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.542 2 DEBUG nova.compute.manager [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] No waiting events found dispatching network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.542 2 WARNING nova.compute.manager [req-9c7c4aee-ba45-4af2-b2c2-5087ece79f9f req-55e610d7-6407-4761-b594-5bfcfee5cc57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received unexpected event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 for instance with vm_state active and task_state None.
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.904 2 INFO nova.virt.libvirt.driver [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Deleting instance files /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89_del
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.905 2 INFO nova.virt.libvirt.driver [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Deletion of /var/lib/nova/instances/ebf4e4f9-b225-4ba9-927e-0619aeea8d89_del complete
Oct 11 09:27:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 128 op/s
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.973 2 INFO nova.compute.manager [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.974 2 DEBUG oslo.service.loopingcall [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.974 2 DEBUG nova.compute.manager [-] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:27:18 compute-0 nova_compute[260935]: 2025-10-11 09:27:18.975 2 DEBUG nova.network.neutron [-] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:27:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:27:20 compute-0 ceph-mon[74313]: pgmap v2570: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 128 op/s
Oct 11 09:27:20 compute-0 nova_compute[260935]: 2025-10-11 09:27:20.223 2 DEBUG nova.compute.manager [req-29b2dc2f-4f5b-4d09-a606-2db89fe0e60d req-118eff73-0884-4d16-aeac-6192bc3e6d8d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-changed-d227ca0f-e837-4a46-be0f-e66b72db8028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:20 compute-0 nova_compute[260935]: 2025-10-11 09:27:20.224 2 DEBUG nova.compute.manager [req-29b2dc2f-4f5b-4d09-a606-2db89fe0e60d req-118eff73-0884-4d16-aeac-6192bc3e6d8d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Refreshing instance network info cache due to event network-changed-d227ca0f-e837-4a46-be0f-e66b72db8028. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:27:20 compute-0 nova_compute[260935]: 2025-10-11 09:27:20.224 2 DEBUG oslo_concurrency.lockutils [req-29b2dc2f-4f5b-4d09-a606-2db89fe0e60d req-118eff73-0884-4d16-aeac-6192bc3e6d8d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:27:20 compute-0 nova_compute[260935]: 2025-10-11 09:27:20.225 2 DEBUG oslo_concurrency.lockutils [req-29b2dc2f-4f5b-4d09-a606-2db89fe0e60d req-118eff73-0884-4d16-aeac-6192bc3e6d8d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:27:20 compute-0 nova_compute[260935]: 2025-10-11 09:27:20.225 2 DEBUG nova.network.neutron [req-29b2dc2f-4f5b-4d09-a606-2db89fe0e60d req-118eff73-0884-4d16-aeac-6192bc3e6d8d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Refreshing network info cache for port d227ca0f-e837-4a46-be0f-e66b72db8028 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:27:20 compute-0 nova_compute[260935]: 2025-10-11 09:27:20.323 2 DEBUG nova.network.neutron [req-fe0d9dab-7a58-430c-aa24-9a590b71cc61 req-fe107f88-45b8-4d24-8f9a-cfd5b6ac2469 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Updated VIF entry in instance network info cache for port 733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:27:20 compute-0 nova_compute[260935]: 2025-10-11 09:27:20.324 2 DEBUG nova.network.neutron [req-fe0d9dab-7a58-430c-aa24-9a590b71cc61 req-fe107f88-45b8-4d24-8f9a-cfd5b6ac2469 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Updating instance_info_cache with network_info: [{"id": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "address": "fa:16:3e:aa:8a:b1", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap733f68a9-9a", "ovs_interfaceid": "733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:27:20 compute-0 nova_compute[260935]: 2025-10-11 09:27:20.355 2 DEBUG oslo_concurrency.lockutils [req-fe0d9dab-7a58-430c-aa24-9a590b71cc61 req-fe107f88-45b8-4d24-8f9a-cfd5b6ac2469 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ebf4e4f9-b225-4ba9-927e-0619aeea8d89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:27:20 compute-0 nova_compute[260935]: 2025-10-11 09:27:20.719 2 DEBUG nova.compute.manager [req-911f5885-7244-4a35-8d96-9ee2277c4db5 req-0783c1d5-06a4-4f1e-8bec-8593fd943dab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:20 compute-0 nova_compute[260935]: 2025-10-11 09:27:20.720 2 DEBUG oslo_concurrency.lockutils [req-911f5885-7244-4a35-8d96-9ee2277c4db5 req-0783c1d5-06a4-4f1e-8bec-8593fd943dab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:20 compute-0 nova_compute[260935]: 2025-10-11 09:27:20.721 2 DEBUG oslo_concurrency.lockutils [req-911f5885-7244-4a35-8d96-9ee2277c4db5 req-0783c1d5-06a4-4f1e-8bec-8593fd943dab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:20 compute-0 nova_compute[260935]: 2025-10-11 09:27:20.721 2 DEBUG oslo_concurrency.lockutils [req-911f5885-7244-4a35-8d96-9ee2277c4db5 req-0783c1d5-06a4-4f1e-8bec-8593fd943dab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:20 compute-0 nova_compute[260935]: 2025-10-11 09:27:20.722 2 DEBUG nova.compute.manager [req-911f5885-7244-4a35-8d96-9ee2277c4db5 req-0783c1d5-06a4-4f1e-8bec-8593fd943dab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] No waiting events found dispatching network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:27:20 compute-0 nova_compute[260935]: 2025-10-11 09:27:20.722 2 WARNING nova.compute.manager [req-911f5885-7244-4a35-8d96-9ee2277c4db5 req-0783c1d5-06a4-4f1e-8bec-8593fd943dab e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received unexpected event network-vif-plugged-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 for instance with vm_state active and task_state deleting.
Oct 11 09:27:20 compute-0 podman[402971]: 2025-10-11 09:27:20.799855262 +0000 UTC m=+0.099819898 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:27:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 75 op/s
Oct 11 09:27:21 compute-0 nova_compute[260935]: 2025-10-11 09:27:21.454 2 DEBUG nova.network.neutron [-] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:27:21 compute-0 nova_compute[260935]: 2025-10-11 09:27:21.474 2 INFO nova.compute.manager [-] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Took 2.50 seconds to deallocate network for instance.
Oct 11 09:27:21 compute-0 nova_compute[260935]: 2025-10-11 09:27:21.519 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:21 compute-0 nova_compute[260935]: 2025-10-11 09:27:21.520 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:21 compute-0 nova_compute[260935]: 2025-10-11 09:27:21.764 2 DEBUG oslo_concurrency.processutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:22 compute-0 ceph-mon[74313]: pgmap v2571: 321 pgs: 321 active+clean; 612 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 75 op/s
Oct 11 09:27:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:27:22 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2125242929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:27:22 compute-0 nova_compute[260935]: 2025-10-11 09:27:22.266 2 DEBUG oslo_concurrency.processutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:22 compute-0 nova_compute[260935]: 2025-10-11 09:27:22.275 2 DEBUG nova.compute.provider_tree [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:27:22 compute-0 nova_compute[260935]: 2025-10-11 09:27:22.315 2 DEBUG nova.scheduler.client.report [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:27:22 compute-0 nova_compute[260935]: 2025-10-11 09:27:22.352 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:22 compute-0 nova_compute[260935]: 2025-10-11 09:27:22.402 2 INFO nova.scheduler.client.report [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance ebf4e4f9-b225-4ba9-927e-0619aeea8d89
Oct 11 09:27:22 compute-0 nova_compute[260935]: 2025-10-11 09:27:22.524 2 DEBUG oslo_concurrency.lockutils [None req-01cd910c-c559-48fa-bbd7-900b3e480470 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "ebf4e4f9-b225-4ba9-927e-0619aeea8d89" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:22 compute-0 nova_compute[260935]: 2025-10-11 09:27:22.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:27:22 compute-0 nova_compute[260935]: 2025-10-11 09:27:22.846 2 DEBUG nova.compute.manager [req-1353d67b-ee4f-4ed4-ac64-f3f355fdab48 req-1cf7cc06-b795-44aa-90c5-b8ad89dddf52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Received event network-vif-deleted-733f68a9-9ab3-4ed3-9fa3-d749ce2bfeb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 103 op/s
Oct 11 09:27:22 compute-0 nova_compute[260935]: 2025-10-11 09:27:22.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:23 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2125242929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:27:23 compute-0 nova_compute[260935]: 2025-10-11 09:27:23.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:23 compute-0 nova_compute[260935]: 2025-10-11 09:27:23.747 2 DEBUG nova.network.neutron [req-29b2dc2f-4f5b-4d09-a606-2db89fe0e60d req-118eff73-0884-4d16-aeac-6192bc3e6d8d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updated VIF entry in instance network info cache for port d227ca0f-e837-4a46-be0f-e66b72db8028. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:27:23 compute-0 nova_compute[260935]: 2025-10-11 09:27:23.748 2 DEBUG nova.network.neutron [req-29b2dc2f-4f5b-4d09-a606-2db89fe0e60d req-118eff73-0884-4d16-aeac-6192bc3e6d8d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updating instance_info_cache with network_info: [{"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:27:23 compute-0 nova_compute[260935]: 2025-10-11 09:27:23.773 2 DEBUG oslo_concurrency.lockutils [req-29b2dc2f-4f5b-4d09-a606-2db89fe0e60d req-118eff73-0884-4d16-aeac-6192bc3e6d8d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:27:24 compute-0 ceph-mon[74313]: pgmap v2572: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 103 op/s
Oct 11 09:27:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:27:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:27:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:27:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:27:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:27:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:27:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:27:24 compute-0 nova_compute[260935]: 2025-10-11 09:27:24.952 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:24 compute-0 nova_compute[260935]: 2025-10-11 09:27:24.953 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:24 compute-0 nova_compute[260935]: 2025-10-11 09:27:24.954 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:24 compute-0 nova_compute[260935]: 2025-10-11 09:27:24.954 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:24 compute-0 nova_compute[260935]: 2025-10-11 09:27:24.955 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:24 compute-0 nova_compute[260935]: 2025-10-11 09:27:24.957 2 INFO nova.compute.manager [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Terminating instance
Oct 11 09:27:24 compute-0 nova_compute[260935]: 2025-10-11 09:27:24.959 2 DEBUG nova.compute.manager [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:27:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2573: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.3 KiB/s wr, 97 op/s
Oct 11 09:27:24 compute-0 nova_compute[260935]: 2025-10-11 09:27:24.985 2 DEBUG nova.compute.manager [req-17af6400-6014-4a7e-9756-15bd3587c0fb req-7b95ef36-6321-4a23-a5d4-1b643e95242a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:24 compute-0 nova_compute[260935]: 2025-10-11 09:27:24.985 2 DEBUG nova.compute.manager [req-17af6400-6014-4a7e-9756-15bd3587c0fb req-7b95ef36-6321-4a23-a5d4-1b643e95242a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing instance network info cache due to event network-changed-36e1391f-eaf8-490f-8434-c3fb25eed0a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:27:24 compute-0 nova_compute[260935]: 2025-10-11 09:27:24.986 2 DEBUG oslo_concurrency.lockutils [req-17af6400-6014-4a7e-9756-15bd3587c0fb req-7b95ef36-6321-4a23-a5d4-1b643e95242a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:27:24 compute-0 nova_compute[260935]: 2025-10-11 09:27:24.986 2 DEBUG oslo_concurrency.lockutils [req-17af6400-6014-4a7e-9756-15bd3587c0fb req-7b95ef36-6321-4a23-a5d4-1b643e95242a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:27:24 compute-0 nova_compute[260935]: 2025-10-11 09:27:24.987 2 DEBUG nova.network.neutron [req-17af6400-6014-4a7e-9756-15bd3587c0fb req-7b95ef36-6321-4a23-a5d4-1b643e95242a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Refreshing network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:27:25 compute-0 kernel: tap36e1391f-ea (unregistering): left promiscuous mode
Oct 11 09:27:25 compute-0 NetworkManager[44960]: <info>  [1760174845.0305] device (tap36e1391f-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:25 compute-0 ovn_controller[152945]: 2025-10-11T09:27:25Z|01420|binding|INFO|Releasing lport 36e1391f-eaf8-490f-8434-c3fb25eed0a4 from this chassis (sb_readonly=0)
Oct 11 09:27:25 compute-0 ovn_controller[152945]: 2025-10-11T09:27:25Z|01421|binding|INFO|Setting lport 36e1391f-eaf8-490f-8434-c3fb25eed0a4 down in Southbound
Oct 11 09:27:25 compute-0 ovn_controller[152945]: 2025-10-11T09:27:25Z|01422|binding|INFO|Removing iface tap36e1391f-ea ovn-installed in OVS
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.100 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:66:34 10.100.0.11'], port_security=['fa:16:3e:da:66:34 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '36d0acda-9f37-4308-aa46-973f11c57b0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'a0735faf-4c5a-437a-8d73-2ecca218ad1a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=419ebcda-831c-4f7b-8ef8-fba16bc71b52, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=36e1391f-eaf8-490f-8434-c3fb25eed0a4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:27:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.102 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 36e1391f-eaf8-490f-8434-c3fb25eed0a4 in datapath b9f9ae84-9b18-48f7-bff2-94e8835de5c8 unbound from our chassis
Oct 11 09:27:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.104 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9f9ae84-9b18-48f7-bff2-94e8835de5c8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:27:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.105 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7291d1a7-dc10-4128-bb52-e5f6435b2b38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.106 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8 namespace which is not needed anymore
Oct 11 09:27:25 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Oct 11 09:27:25 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d0000007f.scope: Consumed 15.271s CPU time.
Oct 11 09:27:25 compute-0 systemd-machined[215705]: Machine qemu-151-instance-0000007f terminated.
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.214 2 INFO nova.virt.libvirt.driver [-] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Instance destroyed successfully.
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.214 2 DEBUG nova.objects.instance [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid 36d0acda-9f37-4308-aa46-973f11c57b0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.230 2 DEBUG nova.virt.libvirt.vif [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:26:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1410731753',display_name='tempest-TestNetworkBasicOps-server-1410731753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1410731753',id=127,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNfFDuZ2VUr+6EowKBtrZDd7zud1Oa+cp6ZA/ixez4vTqy3B2Qz2dWoCxMYTkI6OHKrvBP/PVMobSzlTBVFJQHby9DrXveKkH7hKU36MweJuxInYFJMR8tgPbvonZEpvAg==',key_name='tempest-TestNetworkBasicOps-714435839',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:26:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-4f9xvxhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:26:19Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=36d0acda-9f37-4308-aa46-973f11c57b0e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.230 2 DEBUG nova.network.os_vif_util [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.231 2 DEBUG nova.network.os_vif_util [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:66:34,bridge_name='br-int',has_traffic_filtering=True,id=36e1391f-eaf8-490f-8434-c3fb25eed0a4,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e1391f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.232 2 DEBUG os_vif [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:66:34,bridge_name='br-int',has_traffic_filtering=True,id=36e1391f-eaf8-490f-8434-c3fb25eed0a4,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e1391f-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.234 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36e1391f-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.243 2 INFO os_vif [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:66:34,bridge_name='br-int',has_traffic_filtering=True,id=36e1391f-eaf8-490f-8434-c3fb25eed0a4,network=Network(b9f9ae84-9b18-48f7-bff2-94e8835de5c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e1391f-ea')
Oct 11 09:27:25 compute-0 neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8[400350]: [NOTICE]   (400368) : haproxy version is 2.8.14-c23fe91
Oct 11 09:27:25 compute-0 neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8[400350]: [NOTICE]   (400368) : path to executable is /usr/sbin/haproxy
Oct 11 09:27:25 compute-0 neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8[400350]: [WARNING]  (400368) : Exiting Master process...
Oct 11 09:27:25 compute-0 neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8[400350]: [WARNING]  (400368) : Exiting Master process...
Oct 11 09:27:25 compute-0 neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8[400350]: [ALERT]    (400368) : Current worker (400370) exited with code 143 (Terminated)
Oct 11 09:27:25 compute-0 neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8[400350]: [WARNING]  (400368) : All workers exited. Exiting... (0)
Oct 11 09:27:25 compute-0 systemd[1]: libpod-98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5.scope: Deactivated successfully.
Oct 11 09:27:25 compute-0 podman[403046]: 2025-10-11 09:27:25.314910139 +0000 UTC m=+0.064065499 container died 98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 09:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5-userdata-shm.mount: Deactivated successfully.
Oct 11 09:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b93609cfe788969adf943db69f3753cdeb411820f1a1f8aaf68d3557d2f4c37-merged.mount: Deactivated successfully.
Oct 11 09:27:25 compute-0 podman[403046]: 2025-10-11 09:27:25.387275471 +0000 UTC m=+0.136430791 container cleanup 98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:27:25 compute-0 systemd[1]: libpod-conmon-98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5.scope: Deactivated successfully.
Oct 11 09:27:25 compute-0 podman[403093]: 2025-10-11 09:27:25.492076079 +0000 UTC m=+0.077132918 container remove 98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 09:27:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.498 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[05458a23-d688-42a6-afc1-8b075bfa751e]: (4, ('Sat Oct 11 09:27:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8 (98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5)\n98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5\nSat Oct 11 09:27:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8 (98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5)\n98a9e3dfba5d9f0db108a43068ad5b78e93374ee7968fa6567843c496e5784d5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.501 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8d86ca1d-b08b-4d65-a816-01afa4e6731d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.502 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9f9ae84-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:25 compute-0 kernel: tapb9f9ae84-90: left promiscuous mode
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.506 2 DEBUG nova.compute.manager [req-a4b19fb7-158a-4e8c-be9a-4ab819baeba1 req-c94d6239-ee44-438c-a9f9-18da70a71a41 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-unplugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.507 2 DEBUG oslo_concurrency.lockutils [req-a4b19fb7-158a-4e8c-be9a-4ab819baeba1 req-c94d6239-ee44-438c-a9f9-18da70a71a41 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.507 2 DEBUG oslo_concurrency.lockutils [req-a4b19fb7-158a-4e8c-be9a-4ab819baeba1 req-c94d6239-ee44-438c-a9f9-18da70a71a41 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.507 2 DEBUG oslo_concurrency.lockutils [req-a4b19fb7-158a-4e8c-be9a-4ab819baeba1 req-c94d6239-ee44-438c-a9f9-18da70a71a41 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.508 2 DEBUG nova.compute.manager [req-a4b19fb7-158a-4e8c-be9a-4ab819baeba1 req-c94d6239-ee44-438c-a9f9-18da70a71a41 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] No waiting events found dispatching network-vif-unplugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.508 2 DEBUG nova.compute.manager [req-a4b19fb7-158a-4e8c-be9a-4ab819baeba1 req-c94d6239-ee44-438c-a9f9-18da70a71a41 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-unplugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.510 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9fbb67-dc17-4fb1-8967-69df9d772a01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.534 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9911584f-e591-4f7a-bd90-1edca80cc636]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.535 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d9682d68-234e-4b86-b0f8-e17c341720e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.554 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[60f91175-16b4-4bb1-8fd1-c0f30d136e60]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662305, 'reachable_time': 23417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403108, 'error': None, 'target': 'ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:25 compute-0 systemd[1]: run-netns-ovnmeta\x2db9f9ae84\x2d9b18\x2d48f7\x2dbff2\x2d94e8835de5c8.mount: Deactivated successfully.
Oct 11 09:27:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.560 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b9f9ae84-9b18-48f7-bff2-94e8835de5c8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:27:25 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:25.560 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[111d9086-a92d-46af-a472-d3ee4f75503b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.877 2 INFO nova.virt.libvirt.driver [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Deleting instance files /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e_del
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.878 2 INFO nova.virt.libvirt.driver [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Deletion of /var/lib/nova/instances/36d0acda-9f37-4308-aa46-973f11c57b0e_del complete
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.939 2 INFO nova.compute.manager [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Took 0.98 seconds to destroy the instance on the hypervisor.
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.940 2 DEBUG oslo.service.loopingcall [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.940 2 DEBUG nova.compute.manager [-] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:27:25 compute-0 nova_compute[260935]: 2025-10-11 09:27:25.940 2 DEBUG nova.network.neutron [-] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:27:26 compute-0 ceph-mon[74313]: pgmap v2573: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.3 KiB/s wr, 97 op/s
Oct 11 09:27:26 compute-0 nova_compute[260935]: 2025-10-11 09:27:26.595 2 DEBUG nova.network.neutron [req-17af6400-6014-4a7e-9756-15bd3587c0fb req-7b95ef36-6321-4a23-a5d4-1b643e95242a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updated VIF entry in instance network info cache for port 36e1391f-eaf8-490f-8434-c3fb25eed0a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:27:26 compute-0 nova_compute[260935]: 2025-10-11 09:27:26.596 2 DEBUG nova.network.neutron [req-17af6400-6014-4a7e-9756-15bd3587c0fb req-7b95ef36-6321-4a23-a5d4-1b643e95242a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updating instance_info_cache with network_info: [{"id": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "address": "fa:16:3e:da:66:34", "network": {"id": "b9f9ae84-9b18-48f7-bff2-94e8835de5c8", "bridge": "br-int", "label": "tempest-network-smoke--306681163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e1391f-ea", "ovs_interfaceid": "36e1391f-eaf8-490f-8434-c3fb25eed0a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:27:26 compute-0 nova_compute[260935]: 2025-10-11 09:27:26.621 2 DEBUG oslo_concurrency.lockutils [req-17af6400-6014-4a7e-9756-15bd3587c0fb req-7b95ef36-6321-4a23-a5d4-1b643e95242a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-36d0acda-9f37-4308-aa46-973f11c57b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:27:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:27:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3958491461' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:27:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:27:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3958491461' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:27:26 compute-0 nova_compute[260935]: 2025-10-11 09:27:26.750 2 DEBUG nova.network.neutron [-] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:27:26 compute-0 nova_compute[260935]: 2025-10-11 09:27:26.772 2 INFO nova.compute.manager [-] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Took 0.83 seconds to deallocate network for instance.
Oct 11 09:27:26 compute-0 podman[403110]: 2025-10-11 09:27:26.813919692 +0000 UTC m=+0.112744183 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct 11 09:27:26 compute-0 podman[403111]: 2025-10-11 09:27:26.823507963 +0000 UTC m=+0.113549306 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 09:27:26 compute-0 nova_compute[260935]: 2025-10-11 09:27:26.835 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:26 compute-0 nova_compute[260935]: 2025-10-11 09:27:26.835 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:26 compute-0 ovn_controller[152945]: 2025-10-11T09:27:26Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:01:a2:ce 10.100.0.6
Oct 11 09:27:26 compute-0 ovn_controller[152945]: 2025-10-11T09:27:26Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:a2:ce 10.100.0.6
Oct 11 09:27:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.3 KiB/s wr, 97 op/s
Oct 11 09:27:27 compute-0 nova_compute[260935]: 2025-10-11 09:27:27.018 2 DEBUG oslo_concurrency.processutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3958491461' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:27:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3958491461' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:27:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:27:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3689211471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:27:27 compute-0 nova_compute[260935]: 2025-10-11 09:27:27.466 2 DEBUG oslo_concurrency.processutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:27 compute-0 nova_compute[260935]: 2025-10-11 09:27:27.473 2 DEBUG nova.compute.provider_tree [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:27:27 compute-0 nova_compute[260935]: 2025-10-11 09:27:27.594 2 DEBUG nova.compute.manager [req-7f248af4-2386-4b05-86ab-2e389a0c36c2 req-da9fa276-27c1-4156-b2c4-668607f81835 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-deleted-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:27 compute-0 nova_compute[260935]: 2025-10-11 09:27:27.714 2 DEBUG nova.scheduler.client.report [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:27:27 compute-0 nova_compute[260935]: 2025-10-11 09:27:27.747 2 DEBUG nova.compute.manager [req-a430e958-a075-42b3-a1ca-25d4120897a8 req-173bbb11-d8a7-4647-97b4-16554d131682 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:27 compute-0 nova_compute[260935]: 2025-10-11 09:27:27.748 2 DEBUG oslo_concurrency.lockutils [req-a430e958-a075-42b3-a1ca-25d4120897a8 req-173bbb11-d8a7-4647-97b4-16554d131682 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:27 compute-0 nova_compute[260935]: 2025-10-11 09:27:27.748 2 DEBUG oslo_concurrency.lockutils [req-a430e958-a075-42b3-a1ca-25d4120897a8 req-173bbb11-d8a7-4647-97b4-16554d131682 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:27 compute-0 nova_compute[260935]: 2025-10-11 09:27:27.749 2 DEBUG oslo_concurrency.lockutils [req-a430e958-a075-42b3-a1ca-25d4120897a8 req-173bbb11-d8a7-4647-97b4-16554d131682 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:27 compute-0 nova_compute[260935]: 2025-10-11 09:27:27.749 2 DEBUG nova.compute.manager [req-a430e958-a075-42b3-a1ca-25d4120897a8 req-173bbb11-d8a7-4647-97b4-16554d131682 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] No waiting events found dispatching network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:27:27 compute-0 nova_compute[260935]: 2025-10-11 09:27:27.750 2 WARNING nova.compute.manager [req-a430e958-a075-42b3-a1ca-25d4120897a8 req-173bbb11-d8a7-4647-97b4-16554d131682 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Received unexpected event network-vif-plugged-36e1391f-eaf8-490f-8434-c3fb25eed0a4 for instance with vm_state deleted and task_state None.
Oct 11 09:27:27 compute-0 nova_compute[260935]: 2025-10-11 09:27:27.871 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:27 compute-0 nova_compute[260935]: 2025-10-11 09:27:27.911 2 INFO nova.scheduler.client.report [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance 36d0acda-9f37-4308-aa46-973f11c57b0e
Oct 11 09:27:27 compute-0 nova_compute[260935]: 2025-10-11 09:27:27.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:28 compute-0 nova_compute[260935]: 2025-10-11 09:27:28.002 2 DEBUG oslo_concurrency.lockutils [None req-0e7aaaae-92f2-487c-b813-0e5e81acce09 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "36d0acda-9f37-4308-aa46-973f11c57b0e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:28 compute-0 ceph-mon[74313]: pgmap v2574: 321 pgs: 321 active+clean; 533 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.3 KiB/s wr, 97 op/s
Oct 11 09:27:28 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3689211471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:27:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Oct 11 09:27:29 compute-0 ceph-mon[74313]: pgmap v2575: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Oct 11 09:27:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:27:30 compute-0 nova_compute[260935]: 2025-10-11 09:27:30.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:30 compute-0 sshd-session[403178]: Invalid user admin from 165.232.82.252 port 41982
Oct 11 09:27:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Oct 11 09:27:31 compute-0 sshd-session[403178]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:27:31 compute-0 sshd-session[403178]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:27:32 compute-0 ceph-mon[74313]: pgmap v2576: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Oct 11 09:27:32 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2577: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct 11 09:27:32 compute-0 nova_compute[260935]: 2025-10-11 09:27:32.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:33 compute-0 nova_compute[260935]: 2025-10-11 09:27:33.461 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174838.46024, ebf4e4f9-b225-4ba9-927e-0619aeea8d89 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:27:33 compute-0 nova_compute[260935]: 2025-10-11 09:27:33.462 2 INFO nova.compute.manager [-] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] VM Stopped (Lifecycle Event)
Oct 11 09:27:33 compute-0 nova_compute[260935]: 2025-10-11 09:27:33.495 2 DEBUG nova.compute.manager [None req-4c334cad-f73d-4fce-a940-c6f277deb3c4 - - - - - -] [instance: ebf4e4f9-b225-4ba9-927e-0619aeea8d89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:27:33 compute-0 sshd-session[403178]: Failed password for invalid user admin from 165.232.82.252 port 41982 ssh2
Oct 11 09:27:33 compute-0 ovn_controller[152945]: 2025-10-11T09:27:33Z|01423|binding|INFO|Releasing lport 219b454c-6c32-4a38-b10c-afcdb1be1980 from this chassis (sb_readonly=0)
Oct 11 09:27:33 compute-0 ovn_controller[152945]: 2025-10-11T09:27:33Z|01424|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:27:33 compute-0 ovn_controller[152945]: 2025-10-11T09:27:33Z|01425|binding|INFO|Releasing lport 48cfaf23-d7a3-467e-89cb-3eb3687ea15e from this chassis (sb_readonly=0)
Oct 11 09:27:33 compute-0 ovn_controller[152945]: 2025-10-11T09:27:33Z|01426|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:27:33 compute-0 nova_compute[260935]: 2025-10-11 09:27:33.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:34 compute-0 ceph-mon[74313]: pgmap v2577: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct 11 09:27:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:27:34 compute-0 sshd-session[403178]: Connection closed by invalid user admin 165.232.82.252 port 41982 [preauth]
Oct 11 09:27:34 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.057034) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174855057126, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 790, "num_deletes": 251, "total_data_size": 996698, "memory_usage": 1012728, "flush_reason": "Manual Compaction"}
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174855071495, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 975848, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53556, "largest_seqno": 54345, "table_properties": {"data_size": 971815, "index_size": 1749, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9081, "raw_average_key_size": 19, "raw_value_size": 963756, "raw_average_value_size": 2072, "num_data_blocks": 78, "num_entries": 465, "num_filter_entries": 465, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760174795, "oldest_key_time": 1760174795, "file_creation_time": 1760174855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 14579 microseconds, and 6411 cpu microseconds.
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.071615) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 975848 bytes OK
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.071652) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.073778) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.073807) EVENT_LOG_v1 {"time_micros": 1760174855073796, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.073865) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 992717, prev total WAL file size 992717, number of live WAL files 2.
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.074845) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(952KB)], [125(8707KB)]
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174855074898, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 9892312, "oldest_snapshot_seqno": -1}
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7240 keys, 8205270 bytes, temperature: kUnknown
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174855150155, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8205270, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8159932, "index_size": 26153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18117, "raw_key_size": 189588, "raw_average_key_size": 26, "raw_value_size": 8033375, "raw_average_value_size": 1109, "num_data_blocks": 1011, "num_entries": 7240, "num_filter_entries": 7240, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760174855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.150457) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8205270 bytes
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.151934) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.3 rd, 108.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.5 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(18.5) write-amplify(8.4) OK, records in: 7754, records dropped: 514 output_compression: NoCompression
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.151956) EVENT_LOG_v1 {"time_micros": 1760174855151946, "job": 76, "event": "compaction_finished", "compaction_time_micros": 75356, "compaction_time_cpu_micros": 37507, "output_level": 6, "num_output_files": 1, "total_output_size": 8205270, "num_input_records": 7754, "num_output_records": 7240, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174855152457, "job": 76, "event": "table_file_deletion", "file_number": 127}
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760174855154394, "job": 76, "event": "table_file_deletion", "file_number": 125}
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.074707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.154470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.154476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.154478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.154480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:27:35 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:27:35.154482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:27:35 compute-0 nova_compute[260935]: 2025-10-11 09:27:35.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:36 compute-0 ceph-mon[74313]: pgmap v2578: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 09:27:36 compute-0 nova_compute[260935]: 2025-10-11 09:27:36.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:27:36 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.536 2 DEBUG nova.compute.manager [req-7f5c6790-5d00-4d1a-9a80-54b83dc5c042 req-44e19136-58db-423a-bd00-1581af801984 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-changed-d227ca0f-e837-4a46-be0f-e66b72db8028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.536 2 DEBUG nova.compute.manager [req-7f5c6790-5d00-4d1a-9a80-54b83dc5c042 req-44e19136-58db-423a-bd00-1581af801984 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Refreshing instance network info cache due to event network-changed-d227ca0f-e837-4a46-be0f-e66b72db8028. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.537 2 DEBUG oslo_concurrency.lockutils [req-7f5c6790-5d00-4d1a-9a80-54b83dc5c042 req-44e19136-58db-423a-bd00-1581af801984 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.537 2 DEBUG oslo_concurrency.lockutils [req-7f5c6790-5d00-4d1a-9a80-54b83dc5c042 req-44e19136-58db-423a-bd00-1581af801984 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.537 2 DEBUG nova.network.neutron [req-7f5c6790-5d00-4d1a-9a80-54b83dc5c042 req-44e19136-58db-423a-bd00-1581af801984 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Refreshing network info cache for port d227ca0f-e837-4a46-be0f-e66b72db8028 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.629 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.630 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.630 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.630 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.630 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.631 2 INFO nova.compute.manager [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Terminating instance
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.632 2 DEBUG nova.compute.manager [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:27:37 compute-0 kernel: tapd227ca0f-e8 (unregistering): left promiscuous mode
Oct 11 09:27:37 compute-0 NetworkManager[44960]: <info>  [1760174857.6984] device (tapd227ca0f-e8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:27:37 compute-0 ovn_controller[152945]: 2025-10-11T09:27:37Z|01427|binding|INFO|Releasing lport d227ca0f-e837-4a46-be0f-e66b72db8028 from this chassis (sb_readonly=0)
Oct 11 09:27:37 compute-0 ovn_controller[152945]: 2025-10-11T09:27:37Z|01428|binding|INFO|Setting lport d227ca0f-e837-4a46-be0f-e66b72db8028 down in Southbound
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:37 compute-0 ovn_controller[152945]: 2025-10-11T09:27:37Z|01429|binding|INFO|Removing iface tapd227ca0f-e8 ovn-installed in OVS
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.721 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:a2:ce 10.100.0.6'], port_security=['fa:16:3e:01:a2:ce 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'ac163a38-d5cb-4d00-af1b-f3361849dd68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-024b1f88-7312-4f05-a55e-4c82e878906e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'efa7de30-a0ba-4f09-b29e-87a969a72d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a6a9a27-420b-4816-9550-7c34ef42c810, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=d227ca0f-e837-4a46-be0f-e66b72db8028) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.722 162815 INFO neutron.agent.ovn.metadata.agent [-] Port d227ca0f-e837-4a46-be0f-e66b72db8028 in datapath 024b1f88-7312-4f05-a55e-4c82e878906e unbound from our chassis
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.723 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 024b1f88-7312-4f05-a55e-4c82e878906e
Oct 11 09:27:37 compute-0 kernel: tap731dd7de-8d (unregistering): left promiscuous mode
Oct 11 09:27:37 compute-0 NetworkManager[44960]: <info>  [1760174857.7301] device (tap731dd7de-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:37 compute-0 ovn_controller[152945]: 2025-10-11T09:27:37Z|01430|binding|INFO|Releasing lport 731dd7de-8de3-4b67-a34b-8fef195606b3 from this chassis (sb_readonly=0)
Oct 11 09:27:37 compute-0 ovn_controller[152945]: 2025-10-11T09:27:37Z|01431|binding|INFO|Setting lport 731dd7de-8de3-4b67-a34b-8fef195606b3 down in Southbound
Oct 11 09:27:37 compute-0 ovn_controller[152945]: 2025-10-11T09:27:37Z|01432|binding|INFO|Removing iface tap731dd7de-8d ovn-installed in OVS
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.751 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:40:03 2001:db8::f816:3eff:fe3b:4003'], port_security=['fa:16:3e:3b:40:03 2001:db8::f816:3eff:fe3b:4003'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe3b:4003/64', 'neutron:device_id': 'ac163a38-d5cb-4d00-af1b-f3361849dd68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-894decad-3bed-4c55-b643-5fbe5479bf3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'efa7de30-a0ba-4f09-b29e-87a969a72d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c88be83-b34c-4a0a-8d40-8a42bbccb566, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=731dd7de-8de3-4b67-a34b-8fef195606b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.750 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[96226645-1c4f-46c1-a0d7-ed7e5f771afa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.784 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3139bd-8c1a-4ef3-a518-2632115e5cc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.788 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[85f50159-a9f5-4337-91f6-969c12a7d990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:37 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d00000082.scope: Deactivated successfully.
Oct 11 09:27:37 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d00000082.scope: Consumed 14.247s CPU time.
Oct 11 09:27:37 compute-0 systemd-machined[215705]: Machine qemu-154-instance-00000082 terminated.
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.818 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2d34e4-5165-44c5-8eba-d8c57e24183a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.834 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ccb42ad-9891-4317-9d28-b1773327fda3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap024b1f88-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:47:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663412, 'reachable_time': 33597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403195, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.854 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[515a738f-c5f9-4e6b-8906-9b2af3f42d03]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap024b1f88-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663430, 'tstamp': 663430}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403196, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap024b1f88-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663436, 'tstamp': 663436}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403196, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.855 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap024b1f88-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:37 compute-0 NetworkManager[44960]: <info>  [1760174857.8565] manager: (tap731dd7de-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/559)
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.884 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap024b1f88-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.884 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.885 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap024b1f88-70, col_values=(('external_ids', {'iface-id': '219b454c-6c32-4a38-b10c-afcdb1be1980'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.885 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.887 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 731dd7de-8de3-4b67-a34b-8fef195606b3 in datapath 894decad-3bed-4c55-b643-5fbe5479bf3f unbound from our chassis
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.888 2 INFO nova.virt.libvirt.driver [-] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Instance destroyed successfully.
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.888 2 DEBUG nova.objects.instance [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid ac163a38-d5cb-4d00-af1b-f3361849dd68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.889 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 894decad-3bed-4c55-b643-5fbe5479bf3f
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.905 2 DEBUG nova.virt.libvirt.vif [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-8061287',display_name='tempest-TestGettingAddress-server-8061287',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-8061287',id=130,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:27:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-ptk677gz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:27:14Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ac163a38-d5cb-4d00-af1b-f3361849dd68,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.906 2 DEBUG nova.network.os_vif_util [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.907 2 DEBUG nova.network.os_vif_util [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:a2:ce,bridge_name='br-int',has_traffic_filtering=True,id=d227ca0f-e837-4a46-be0f-e66b72db8028,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd227ca0f-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.907 2 DEBUG os_vif [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:a2:ce,bridge_name='br-int',has_traffic_filtering=True,id=d227ca0f-e837-4a46-be0f-e66b72db8028,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd227ca0f-e8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd227ca0f-e8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.911 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[50f3eeed-b567-42fb-9478-ca25c50580b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.919 2 INFO os_vif [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:a2:ce,bridge_name='br-int',has_traffic_filtering=True,id=d227ca0f-e837-4a46-be0f-e66b72db8028,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd227ca0f-e8')
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.920 2 DEBUG nova.virt.libvirt.vif [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:26:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-8061287',display_name='tempest-TestGettingAddress-server-8061287',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-8061287',id=130,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:27:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-ptk677gz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:27:14Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=ac163a38-d5cb-4d00-af1b-f3361849dd68,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.920 2 DEBUG nova.network.os_vif_util [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.921 2 DEBUG nova.network.os_vif_util [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:40:03,bridge_name='br-int',has_traffic_filtering=True,id=731dd7de-8de3-4b67-a34b-8fef195606b3,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap731dd7de-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.921 2 DEBUG os_vif [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:40:03,bridge_name='br-int',has_traffic_filtering=True,id=731dd7de-8de3-4b67-a34b-8fef195606b3,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap731dd7de-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.923 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap731dd7de-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.928 2 INFO os_vif [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:40:03,bridge_name='br-int',has_traffic_filtering=True,id=731dd7de-8de3-4b67-a34b-8fef195606b3,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap731dd7de-8d')
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.955 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2e73e3e2-f270-4600-90ff-8b2be932c1a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.959 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b7a2ff-5486-4188-adda-830eb9701b1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.978 2 DEBUG nova.compute.manager [req-1dba537a-c321-4bb0-afd9-d04dd750ebe1 req-8670f31a-600b-428e-a4db-ba40a8531ff5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-unplugged-d227ca0f-e837-4a46-be0f-e66b72db8028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.979 2 DEBUG oslo_concurrency.lockutils [req-1dba537a-c321-4bb0-afd9-d04dd750ebe1 req-8670f31a-600b-428e-a4db-ba40a8531ff5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.979 2 DEBUG oslo_concurrency.lockutils [req-1dba537a-c321-4bb0-afd9-d04dd750ebe1 req-8670f31a-600b-428e-a4db-ba40a8531ff5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.980 2 DEBUG oslo_concurrency.lockutils [req-1dba537a-c321-4bb0-afd9-d04dd750ebe1 req-8670f31a-600b-428e-a4db-ba40a8531ff5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.981 2 DEBUG nova.compute.manager [req-1dba537a-c321-4bb0-afd9-d04dd750ebe1 req-8670f31a-600b-428e-a4db-ba40a8531ff5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] No waiting events found dispatching network-vif-unplugged-d227ca0f-e837-4a46-be0f-e66b72db8028 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.981 2 DEBUG nova.compute.manager [req-1dba537a-c321-4bb0-afd9-d04dd750ebe1 req-8670f31a-600b-428e-a4db-ba40a8531ff5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-unplugged-d227ca0f-e837-4a46-be0f-e66b72db8028 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:27:37 compute-0 nova_compute[260935]: 2025-10-11 09:27:37.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:37.999 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[35ec33e9-7cbd-4337-80fa-586941e7462f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:38.030 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d641779c-57ee-43fb-abe0-8ef72297602e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap894decad-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:b0:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 6, 'rx_bytes': 2772, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 6, 'rx_bytes': 2772, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663642, 'reachable_time': 44137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2352, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2352, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403243, 'error': None, 'target': 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:38.057 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc9b6dd-cabb-4c2e-8a75-93f0bc1f5228]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap894decad-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663658, 'tstamp': 663658}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403244, 'error': None, 'target': 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:38.066 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap894decad-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:38 compute-0 nova_compute[260935]: 2025-10-11 09:27:38.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:38 compute-0 nova_compute[260935]: 2025-10-11 09:27:38.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:38.070 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap894decad-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:38 compute-0 ceph-mon[74313]: pgmap v2579: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 09:27:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:38.071 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:27:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:38.073 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap894decad-30, col_values=(('external_ids', {'iface-id': '48cfaf23-d7a3-467e-89cb-3eb3687ea15e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:38 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:38.073 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:27:38 compute-0 nova_compute[260935]: 2025-10-11 09:27:38.412 2 INFO nova.virt.libvirt.driver [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Deleting instance files /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68_del
Oct 11 09:27:38 compute-0 nova_compute[260935]: 2025-10-11 09:27:38.413 2 INFO nova.virt.libvirt.driver [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Deletion of /var/lib/nova/instances/ac163a38-d5cb-4d00-af1b-f3361849dd68_del complete
Oct 11 09:27:38 compute-0 nova_compute[260935]: 2025-10-11 09:27:38.464 2 INFO nova.compute.manager [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 11 09:27:38 compute-0 nova_compute[260935]: 2025-10-11 09:27:38.464 2 DEBUG oslo.service.loopingcall [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:27:38 compute-0 nova_compute[260935]: 2025-10-11 09:27:38.464 2 DEBUG nova.compute.manager [-] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:27:38 compute-0 nova_compute[260935]: 2025-10-11 09:27:38.465 2 DEBUG nova.network.neutron [-] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:27:38 compute-0 nova_compute[260935]: 2025-10-11 09:27:38.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:27:38 compute-0 nova_compute[260935]: 2025-10-11 09:27:38.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:27:38 compute-0 nova_compute[260935]: 2025-10-11 09:27:38.901 2 DEBUG nova.network.neutron [req-7f5c6790-5d00-4d1a-9a80-54b83dc5c042 req-44e19136-58db-423a-bd00-1581af801984 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updated VIF entry in instance network info cache for port d227ca0f-e837-4a46-be0f-e66b72db8028. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:27:38 compute-0 nova_compute[260935]: 2025-10-11 09:27:38.901 2 DEBUG nova.network.neutron [req-7f5c6790-5d00-4d1a-9a80-54b83dc5c042 req-44e19136-58db-423a-bd00-1581af801984 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updating instance_info_cache with network_info: [{"id": "d227ca0f-e837-4a46-be0f-e66b72db8028", "address": "fa:16:3e:01:a2:ce", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd227ca0f-e8", "ovs_interfaceid": "d227ca0f-e837-4a46-be0f-e66b72db8028", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "731dd7de-8de3-4b67-a34b-8fef195606b3", "address": "fa:16:3e:3b:40:03", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3b:4003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap731dd7de-8d", "ovs_interfaceid": "731dd7de-8de3-4b67-a34b-8fef195606b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:27:38 compute-0 nova_compute[260935]: 2025-10-11 09:27:38.927 2 DEBUG oslo_concurrency.lockutils [req-7f5c6790-5d00-4d1a-9a80-54b83dc5c042 req-44e19136-58db-423a-bd00-1581af801984 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-ac163a38-d5cb-4d00-af1b-f3361849dd68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:27:38 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Oct 11 09:27:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:27:39 compute-0 nova_compute[260935]: 2025-10-11 09:27:39.455 2 DEBUG nova.network.neutron [-] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:27:39 compute-0 nova_compute[260935]: 2025-10-11 09:27:39.478 2 INFO nova.compute.manager [-] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Took 1.01 seconds to deallocate network for instance.
Oct 11 09:27:39 compute-0 nova_compute[260935]: 2025-10-11 09:27:39.530 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:39 compute-0 nova_compute[260935]: 2025-10-11 09:27:39.530 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:39 compute-0 nova_compute[260935]: 2025-10-11 09:27:39.685 2 DEBUG oslo_concurrency.processutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:40 compute-0 ceph-mon[74313]: pgmap v2580: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:27:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2239567093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.169 2 DEBUG oslo_concurrency.processutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.178 2 DEBUG nova.compute.provider_tree [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.189 2 DEBUG nova.compute.manager [req-23841cd6-a33e-4635-a96a-ab57909463ee req-f0807f4a-5bb2-4fb0-835a-fb6132d8a49c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.190 2 DEBUG oslo_concurrency.lockutils [req-23841cd6-a33e-4635-a96a-ab57909463ee req-f0807f4a-5bb2-4fb0-835a-fb6132d8a49c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.191 2 DEBUG oslo_concurrency.lockutils [req-23841cd6-a33e-4635-a96a-ab57909463ee req-f0807f4a-5bb2-4fb0-835a-fb6132d8a49c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.191 2 DEBUG oslo_concurrency.lockutils [req-23841cd6-a33e-4635-a96a-ab57909463ee req-f0807f4a-5bb2-4fb0-835a-fb6132d8a49c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.192 2 DEBUG nova.compute.manager [req-23841cd6-a33e-4635-a96a-ab57909463ee req-f0807f4a-5bb2-4fb0-835a-fb6132d8a49c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] No waiting events found dispatching network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.192 2 WARNING nova.compute.manager [req-23841cd6-a33e-4635-a96a-ab57909463ee req-f0807f4a-5bb2-4fb0-835a-fb6132d8a49c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received unexpected event network-vif-plugged-d227ca0f-e837-4a46-be0f-e66b72db8028 for instance with vm_state deleted and task_state None.
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.193 2 DEBUG nova.compute.manager [req-23841cd6-a33e-4635-a96a-ab57909463ee req-f0807f4a-5bb2-4fb0-835a-fb6132d8a49c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-deleted-731dd7de-8de3-4b67-a34b-8fef195606b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.193 2 DEBUG nova.compute.manager [req-23841cd6-a33e-4635-a96a-ab57909463ee req-f0807f4a-5bb2-4fb0-835a-fb6132d8a49c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Received event network-vif-deleted-d227ca0f-e837-4a46-be0f-e66b72db8028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.209 2 DEBUG nova.scheduler.client.report [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.218 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174845.2107964, 36d0acda-9f37-4308-aa46-973f11c57b0e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.218 2 INFO nova.compute.manager [-] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] VM Stopped (Lifecycle Event)
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.241 2 DEBUG nova.compute.manager [None req-3db0e2e4-6540-478d-a5a3-ee953752fd0b - - - - - -] [instance: 36d0acda-9f37-4308-aa46-973f11c57b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.243 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.269 2 INFO nova.scheduler.client.report [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance ac163a38-d5cb-4d00-af1b-f3361849dd68
Oct 11 09:27:40 compute-0 nova_compute[260935]: 2025-10-11 09:27:40.338 2 DEBUG oslo_concurrency.lockutils [None req-48bab362-c72b-4c5d-bee2-2fa934f94acc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "ac163a38-d5cb-4d00-af1b-f3361849dd68" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:40 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 18 KiB/s wr, 3 op/s
Oct 11 09:27:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2239567093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.291 2 DEBUG nova.compute.manager [req-3d927c7e-70b7-483e-b8ef-aa6280c8359b req-30759d61-0038-4222-a0ca-b3cede909496 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-changed-0f8506e7-c03f-48bd-938d-f3b4cea2675b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.292 2 DEBUG nova.compute.manager [req-3d927c7e-70b7-483e-b8ef-aa6280c8359b req-30759d61-0038-4222-a0ca-b3cede909496 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Refreshing instance network info cache due to event network-changed-0f8506e7-c03f-48bd-938d-f3b4cea2675b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.293 2 DEBUG oslo_concurrency.lockutils [req-3d927c7e-70b7-483e-b8ef-aa6280c8359b req-30759d61-0038-4222-a0ca-b3cede909496 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.294 2 DEBUG oslo_concurrency.lockutils [req-3d927c7e-70b7-483e-b8ef-aa6280c8359b req-30759d61-0038-4222-a0ca-b3cede909496 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.294 2 DEBUG nova.network.neutron [req-3d927c7e-70b7-483e-b8ef-aa6280c8359b req-30759d61-0038-4222-a0ca-b3cede909496 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Refreshing network info cache for port 0f8506e7-c03f-48bd-938d-f3b4cea2675b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.362 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.363 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.363 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.363 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.364 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.365 2 INFO nova.compute.manager [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Terminating instance
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.367 2 DEBUG nova.compute.manager [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:27:41 compute-0 kernel: tap0f8506e7-c0 (unregistering): left promiscuous mode
Oct 11 09:27:41 compute-0 NetworkManager[44960]: <info>  [1760174861.4397] device (tap0f8506e7-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:27:41 compute-0 ovn_controller[152945]: 2025-10-11T09:27:41Z|01433|binding|INFO|Releasing lport 0f8506e7-c03f-48bd-938d-f3b4cea2675b from this chassis (sb_readonly=0)
Oct 11 09:27:41 compute-0 ovn_controller[152945]: 2025-10-11T09:27:41Z|01434|binding|INFO|Setting lport 0f8506e7-c03f-48bd-938d-f3b4cea2675b down in Southbound
Oct 11 09:27:41 compute-0 ovn_controller[152945]: 2025-10-11T09:27:41Z|01435|binding|INFO|Removing iface tap0f8506e7-c0 ovn-installed in OVS
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.466 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:67:d9 10.100.0.4'], port_security=['fa:16:3e:f4:67:d9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b39b8161-8a46-46fe-8a2a-0fc6b4eab850', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-024b1f88-7312-4f05-a55e-4c82e878906e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'efa7de30-a0ba-4f09-b29e-87a969a72d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a6a9a27-420b-4816-9550-7c34ef42c810, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0f8506e7-c03f-48bd-938d-f3b4cea2675b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.467 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0f8506e7-c03f-48bd-938d-f3b4cea2675b in datapath 024b1f88-7312-4f05-a55e-4c82e878906e unbound from our chassis
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.469 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 024b1f88-7312-4f05-a55e-4c82e878906e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.471 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a18c3a07-5209-4e50-8e23-97d11c912dfb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.472 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e namespace which is not needed anymore
Oct 11 09:27:41 compute-0 kernel: tapa0aadf8d-3a (unregistering): left promiscuous mode
Oct 11 09:27:41 compute-0 NetworkManager[44960]: <info>  [1760174861.4930] device (tapa0aadf8d-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:41 compute-0 ovn_controller[152945]: 2025-10-11T09:27:41Z|01436|binding|INFO|Releasing lport a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 from this chassis (sb_readonly=0)
Oct 11 09:27:41 compute-0 ovn_controller[152945]: 2025-10-11T09:27:41Z|01437|binding|INFO|Setting lport a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 down in Southbound
Oct 11 09:27:41 compute-0 ovn_controller[152945]: 2025-10-11T09:27:41Z|01438|binding|INFO|Removing iface tapa0aadf8d-3a ovn-installed in OVS
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.511 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:e9:36 2001:db8::f816:3eff:fe4d:e936'], port_security=['fa:16:3e:4d:e9:36 2001:db8::f816:3eff:fe4d:e936'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe4d:e936/64', 'neutron:device_id': 'b39b8161-8a46-46fe-8a2a-0fc6b4eab850', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-894decad-3bed-4c55-b643-5fbe5479bf3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'efa7de30-a0ba-4f09-b29e-87a969a72d97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c88be83-b34c-4a0a-8d40-8a42bbccb566, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:41 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000080.scope: Deactivated successfully.
Oct 11 09:27:41 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000080.scope: Consumed 16.309s CPU time.
Oct 11 09:27:41 compute-0 systemd-machined[215705]: Machine qemu-152-instance-00000080 terminated.
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.640 2 INFO nova.virt.libvirt.driver [-] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Instance destroyed successfully.
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.641 2 DEBUG nova.objects.instance [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid b39b8161-8a46-46fe-8a2a-0fc6b4eab850 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.655 2 DEBUG nova.virt.libvirt.vif [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:26:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892429863',display_name='tempest-TestGettingAddress-server-1892429863',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892429863',id=128,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:26:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-yz54v603',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:26:32Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=b39b8161-8a46-46fe-8a2a-0fc6b4eab850,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.657 2 DEBUG nova.network.os_vif_util [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.659 2 DEBUG nova.network.os_vif_util [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:67:d9,bridge_name='br-int',has_traffic_filtering=True,id=0f8506e7-c03f-48bd-938d-f3b4cea2675b,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f8506e7-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.660 2 DEBUG os_vif [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:67:d9,bridge_name='br-int',has_traffic_filtering=True,id=0f8506e7-c03f-48bd-938d-f3b4cea2675b,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f8506e7-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.664 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f8506e7-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.675 2 INFO os_vif [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:67:d9,bridge_name='br-int',has_traffic_filtering=True,id=0f8506e7-c03f-48bd-938d-f3b4cea2675b,network=Network(024b1f88-7312-4f05-a55e-4c82e878906e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f8506e7-c0')
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.676 2 DEBUG nova.virt.libvirt.vif [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:26:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892429863',display_name='tempest-TestGettingAddress-server-1892429863',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892429863',id=128,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2umSzgx1Pm5tS05rzNTKNesiRVqOQsMrcb/c4H9RJnO9a7zP3A3lTuGkE8FijEzV3gKWwvO4cBsyzZeZuE85e7xUGmBvWEUdTEGeD9UeQgoUvFozUeyHRBe1bh7q6pA==',key_name='tempest-TestGettingAddress-1214513575',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:26:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-yz54v603',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:26:32Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=b39b8161-8a46-46fe-8a2a-0fc6b4eab850,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.676 2 DEBUG nova.network.os_vif_util [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.677 2 DEBUG nova.network.os_vif_util [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e9:36,bridge_name='br-int',has_traffic_filtering=True,id=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0aadf8d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.678 2 DEBUG os_vif [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e9:36,bridge_name='br-int',has_traffic_filtering=True,id=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0aadf8d-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.681 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0aadf8d-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.688 2 INFO os_vif [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e9:36,bridge_name='br-int',has_traffic_filtering=True,id=a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088,network=Network(894decad-3bed-4c55-b643-5fbe5479bf3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0aadf8d-3a')
Oct 11 09:27:41 compute-0 neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e[400697]: [NOTICE]   (400702) : haproxy version is 2.8.14-c23fe91
Oct 11 09:27:41 compute-0 neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e[400697]: [NOTICE]   (400702) : path to executable is /usr/sbin/haproxy
Oct 11 09:27:41 compute-0 neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e[400697]: [WARNING]  (400702) : Exiting Master process...
Oct 11 09:27:41 compute-0 neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e[400697]: [WARNING]  (400702) : Exiting Master process...
Oct 11 09:27:41 compute-0 neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e[400697]: [ALERT]    (400702) : Current worker (400704) exited with code 143 (Terminated)
Oct 11 09:27:41 compute-0 neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e[400697]: [WARNING]  (400702) : All workers exited. Exiting... (0)
Oct 11 09:27:41 compute-0 systemd[1]: libpod-ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5.scope: Deactivated successfully.
Oct 11 09:27:41 compute-0 podman[403304]: 2025-10-11 09:27:41.711060167 +0000 UTC m=+0.082507879 container died ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 09:27:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5-userdata-shm.mount: Deactivated successfully.
Oct 11 09:27:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7364967b58c22da1966d5d1241e71eee425dfe401e15845ca723c5e08b53f77-merged.mount: Deactivated successfully.
Oct 11 09:27:41 compute-0 podman[403304]: 2025-10-11 09:27:41.772224093 +0000 UTC m=+0.143671795 container cleanup ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 09:27:41 compute-0 systemd[1]: libpod-conmon-ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5.scope: Deactivated successfully.
Oct 11 09:27:41 compute-0 podman[403362]: 2025-10-11 09:27:41.879639505 +0000 UTC m=+0.073235168 container remove ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.891 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[561c59b0-2f0f-497e-bc56-25e761adbfa4]: (4, ('Sat Oct 11 09:27:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e (ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5)\nebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5\nSat Oct 11 09:27:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e (ebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5)\nebae48339624729f873bd9cdaf767f454b383aabcfedadf411bcdf7d4b67e8a5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.893 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9f16f4a7-1f8f-4cc8-b3a8-201e8ca4aba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.895 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap024b1f88-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:41 compute-0 kernel: tap024b1f88-70: left promiscuous mode
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:41 compute-0 nova_compute[260935]: 2025-10-11 09:27:41.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.921 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5478b5-3cf7-4f8d-9d6e-fa8b8fc5bf05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.949 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ed1743-fd74-4267-a1f8-749815a96ffc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.950 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9c52f2e5-d69a-46ce-b601-f984e638c64b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.976 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[10b35cf3-c5e5-4917-b126-958c60a2b425]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663402, 'reachable_time': 32325, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403379, 'error': None, 'target': 'ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d024b1f88\x2d7312\x2d4f05\x2da55e\x2d4c82e878906e.mount: Deactivated successfully.
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.983 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-024b1f88-7312-4f05-a55e-4c82e878906e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.984 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[2243a2c6-8a9b-45b6-86ca-fd4aefd4af5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.988 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 in datapath 894decad-3bed-4c55-b643-5fbe5479bf3f unbound from our chassis
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.989 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 894decad-3bed-4c55-b643-5fbe5479bf3f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.990 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[326512c9-b2fc-4733-be83-969a9e622159]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:41 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:41.991 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f namespace which is not needed anymore
Oct 11 09:27:42 compute-0 ceph-mon[74313]: pgmap v2581: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 18 KiB/s wr, 3 op/s
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.182 2 INFO nova.virt.libvirt.driver [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Deleting instance files /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850_del
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.183 2 INFO nova.virt.libvirt.driver [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Deletion of /var/lib/nova/instances/b39b8161-8a46-46fe-8a2a-0fc6b4eab850_del complete
Oct 11 09:27:42 compute-0 neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f[400773]: [NOTICE]   (400777) : haproxy version is 2.8.14-c23fe91
Oct 11 09:27:42 compute-0 neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f[400773]: [NOTICE]   (400777) : path to executable is /usr/sbin/haproxy
Oct 11 09:27:42 compute-0 neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f[400773]: [WARNING]  (400777) : Exiting Master process...
Oct 11 09:27:42 compute-0 neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f[400773]: [WARNING]  (400777) : Exiting Master process...
Oct 11 09:27:42 compute-0 neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f[400773]: [ALERT]    (400777) : Current worker (400779) exited with code 143 (Terminated)
Oct 11 09:27:42 compute-0 neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f[400773]: [WARNING]  (400777) : All workers exited. Exiting... (0)
Oct 11 09:27:42 compute-0 systemd[1]: libpod-7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79.scope: Deactivated successfully.
Oct 11 09:27:42 compute-0 conmon[400773]: conmon 7d8390a7292f4de4796a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79.scope/container/memory.events
Oct 11 09:27:42 compute-0 podman[403397]: 2025-10-11 09:27:42.236533996 +0000 UTC m=+0.075136771 container died 7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.260 2 INFO nova.compute.manager [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Took 0.89 seconds to destroy the instance on the hypervisor.
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.261 2 DEBUG oslo.service.loopingcall [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.262 2 DEBUG nova.compute.manager [-] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.262 2 DEBUG nova.network.neutron [-] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:27:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfc0d2ca647c901f7f4d6b1ca7ee3a7b693ee57f6d1e527af9b3cd693721335a-merged.mount: Deactivated successfully.
Oct 11 09:27:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79-userdata-shm.mount: Deactivated successfully.
Oct 11 09:27:42 compute-0 podman[403397]: 2025-10-11 09:27:42.279450048 +0000 UTC m=+0.118052813 container cleanup 7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 09:27:42 compute-0 systemd[1]: libpod-conmon-7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79.scope: Deactivated successfully.
Oct 11 09:27:42 compute-0 podman[403426]: 2025-10-11 09:27:42.375457807 +0000 UTC m=+0.058951815 container remove 7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 09:27:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.384 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[16c320e8-2a88-470f-8f75-8b7dc20b9e4e]: (4, ('Sat Oct 11 09:27:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f (7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79)\n7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79\nSat Oct 11 09:27:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f (7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79)\n7d8390a7292f4de4796afe322c1990187330820880950a473cdbf2710ac3ec79\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.386 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f92563-257d-4b4e-9914-77cd69bae63f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.387 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap894decad-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:42 compute-0 kernel: tap894decad-30: left promiscuous mode
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.465 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5f99b020-3827-4040-83f4-e704988bb204]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.493 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1af684bd-9f58-4c3a-afbb-4140c2467d27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.495 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[def582c5-5468-47f8-bd51-2865bfab4dda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.519 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4be1f2-e445-41d8-90a4-747b88cf0841]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663629, 'reachable_time': 38945, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403444, 'error': None, 'target': 'ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.522 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-894decad-3bed-4c55-b643-5fbe5479bf3f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:27:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:42.523 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[a0870410-00d0-4e6d-ae3f-463795c6c428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.654 2 DEBUG nova.network.neutron [req-3d927c7e-70b7-483e-b8ef-aa6280c8359b req-30759d61-0038-4222-a0ca-b3cede909496 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updated VIF entry in instance network info cache for port 0f8506e7-c03f-48bd-938d-f3b4cea2675b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.655 2 DEBUG nova.network.neutron [req-3d927c7e-70b7-483e-b8ef-aa6280c8359b req-30759d61-0038-4222-a0ca-b3cede909496 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updating instance_info_cache with network_info: [{"id": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "address": "fa:16:3e:f4:67:d9", "network": {"id": "024b1f88-7312-4f05-a55e-4c82e878906e", "bridge": "br-int", "label": "tempest-network-smoke--93934650", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f8506e7-c0", "ovs_interfaceid": "0f8506e7-c03f-48bd-938d-f3b4cea2675b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "address": "fa:16:3e:4d:e9:36", "network": {"id": "894decad-3bed-4c55-b643-5fbe5479bf3f", "bridge": "br-int", "label": "tempest-network-smoke--1044478516", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4d:e936", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0aadf8d-3a", "ovs_interfaceid": "a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.683 2 DEBUG oslo_concurrency.lockutils [req-3d927c7e-70b7-483e-b8ef-aa6280c8359b req-30759d61-0038-4222-a0ca-b3cede909496 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-b39b8161-8a46-46fe-8a2a-0fc6b4eab850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.697 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.721 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.722 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.722 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.745 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 11 09:27:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d894decad\x2d3bed\x2d4c55\x2db643\x2d5fbe5479bf3f.mount: Deactivated successfully.
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.878 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.879 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.879 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:27:42 compute-0 nova_compute[260935]: 2025-10-11 09:27:42.880 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:27:42 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 25 KiB/s wr, 59 op/s
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:43 compute-0 ceph-mon[74313]: pgmap v2582: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 25 KiB/s wr, 59 op/s
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.208 2 DEBUG nova.network.neutron [-] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.228 2 INFO nova.compute.manager [-] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Took 0.97 seconds to deallocate network for instance.
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.288 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.289 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.392 2 DEBUG nova.compute.manager [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-vif-unplugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.392 2 DEBUG oslo_concurrency.lockutils [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.393 2 DEBUG oslo_concurrency.lockutils [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.393 2 DEBUG oslo_concurrency.lockutils [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.394 2 DEBUG nova.compute.manager [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] No waiting events found dispatching network-vif-unplugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.394 2 WARNING nova.compute.manager [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received unexpected event network-vif-unplugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b for instance with vm_state deleted and task_state None.
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.395 2 DEBUG nova.compute.manager [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.395 2 DEBUG oslo_concurrency.lockutils [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.396 2 DEBUG oslo_concurrency.lockutils [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.396 2 DEBUG oslo_concurrency.lockutils [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.397 2 DEBUG nova.compute.manager [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] No waiting events found dispatching network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.397 2 WARNING nova.compute.manager [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received unexpected event network-vif-plugged-0f8506e7-c03f-48bd-938d-f3b4cea2675b for instance with vm_state deleted and task_state None.
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.398 2 DEBUG nova.compute.manager [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-vif-deleted-a0aadf8d-3a8c-48f8-84af-fb4e4c0b4088 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.398 2 DEBUG nova.compute.manager [req-84e33a8d-a459-4526-8a61-d481af9a6f17 req-cbc4bc40-8b85-4990-800a-4631670baf00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Received event network-vif-deleted-0f8506e7-c03f-48bd-938d-f3b4cea2675b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.416 2 DEBUG oslo_concurrency.processutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:27:43 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1208272319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.911 2 DEBUG oslo_concurrency.processutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.919 2 DEBUG nova.compute.provider_tree [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.945 2 DEBUG nova.scheduler.client.report [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.954 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:27:43 compute-0 nova_compute[260935]: 2025-10-11 09:27:43.995 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.000 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.001 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.002 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.003 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.003 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.004 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.042 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.044 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.045 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.045 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.046 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:44 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1208272319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.104 2 INFO nova.scheduler.client.report [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance b39b8161-8a46-46fe-8a2a-0fc6b4eab850
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.194 2 DEBUG oslo_concurrency.lockutils [None req-60ddf41e-b2bb-4c30-b57c-9b303b07393a 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "b39b8161-8a46-46fe-8a2a-0fc6b4eab850" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:27:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:27:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1294313095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.521 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.651 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.651 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.652 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.656 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.656 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.660 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.661 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.894 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.896 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2831MB free_disk=59.830665588378906GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.896 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.896 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:44 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 14 KiB/s wr, 58 op/s
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.988 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.989 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.989 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.990 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:27:44 compute-0 nova_compute[260935]: 2025-10-11 09:27:44.990 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:27:45 compute-0 nova_compute[260935]: 2025-10-11 09:27:45.069 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1294313095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:27:45 compute-0 ceph-mon[74313]: pgmap v2583: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 14 KiB/s wr, 58 op/s
Oct 11 09:27:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:27:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2419163320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:27:45 compute-0 nova_compute[260935]: 2025-10-11 09:27:45.547 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:45 compute-0 nova_compute[260935]: 2025-10-11 09:27:45.553 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:27:45 compute-0 nova_compute[260935]: 2025-10-11 09:27:45.572 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:27:45 compute-0 nova_compute[260935]: 2025-10-11 09:27:45.603 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:45 compute-0 nova_compute[260935]: 2025-10-11 09:27:45.604 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:45 compute-0 nova_compute[260935]: 2025-10-11 09:27:45.616 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:27:45 compute-0 nova_compute[260935]: 2025-10-11 09:27:45.617 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:45 compute-0 nova_compute[260935]: 2025-10-11 09:27:45.638 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:27:45 compute-0 nova_compute[260935]: 2025-10-11 09:27:45.732 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:45 compute-0 nova_compute[260935]: 2025-10-11 09:27:45.733 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:45 compute-0 nova_compute[260935]: 2025-10-11 09:27:45.741 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:27:45 compute-0 nova_compute[260935]: 2025-10-11 09:27:45.742 2 INFO nova.compute.claims [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:27:45 compute-0 nova_compute[260935]: 2025-10-11 09:27:45.890 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:46 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2419163320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:27:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:27:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2877685704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.401 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.411 2 DEBUG nova.compute.provider_tree [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.430 2 DEBUG nova.scheduler.client.report [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.455 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.457 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.513 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.514 2 DEBUG nova.network.neutron [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.537 2 INFO nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.559 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.670 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.672 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.672 2 INFO nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Creating image(s)
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.716 2 DEBUG nova.storage.rbd_utils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.765 2 DEBUG nova.storage.rbd_utils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.805 2 DEBUG nova.storage.rbd_utils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.810 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:46 compute-0 podman[403534]: 2025-10-11 09:27:46.822535855 +0000 UTC m=+0.108350418 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.892 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.893 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.894 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.894 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.918 2 DEBUG nova.storage.rbd_utils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:27:46 compute-0 nova_compute[260935]: 2025-10-11 09:27:46.921 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:46 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 14 KiB/s wr, 58 op/s
Oct 11 09:27:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2877685704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:27:47 compute-0 ceph-mon[74313]: pgmap v2584: 321 pgs: 321 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 14 KiB/s wr, 58 op/s
Oct 11 09:27:47 compute-0 nova_compute[260935]: 2025-10-11 09:27:47.232 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:47 compute-0 nova_compute[260935]: 2025-10-11 09:27:47.287 2 DEBUG nova.storage.rbd_utils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] resizing rbd image f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:27:47 compute-0 nova_compute[260935]: 2025-10-11 09:27:47.383 2 DEBUG nova.policy [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd336dcb24664df58613d4105ce1b004', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:27:47 compute-0 nova_compute[260935]: 2025-10-11 09:27:47.388 2 DEBUG nova.objects.instance [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'migration_context' on Instance uuid f4e61943-e59b-40a1-ab1e-07ba5f131bb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:27:47 compute-0 nova_compute[260935]: 2025-10-11 09:27:47.410 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:27:47 compute-0 nova_compute[260935]: 2025-10-11 09:27:47.410 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Ensure instance console log exists: /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:27:47 compute-0 nova_compute[260935]: 2025-10-11 09:27:47.410 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:47 compute-0 nova_compute[260935]: 2025-10-11 09:27:47.411 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:47 compute-0 nova_compute[260935]: 2025-10-11 09:27:47.411 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:48 compute-0 nova_compute[260935]: 2025-10-11 09:27:48.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:48 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 368 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.6 MiB/s wr, 84 op/s
Oct 11 09:27:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:27:49 compute-0 nova_compute[260935]: 2025-10-11 09:27:49.282 2 DEBUG nova.network.neutron [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Successfully created port: c550ea63-9098-4f0c-95b9-107f8a0a8a5c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:27:49 compute-0 ovn_controller[152945]: 2025-10-11T09:27:49Z|01439|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:27:49 compute-0 ovn_controller[152945]: 2025-10-11T09:27:49Z|01440|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:27:49 compute-0 nova_compute[260935]: 2025-10-11 09:27:49.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:49 compute-0 ovn_controller[152945]: 2025-10-11T09:27:49Z|01441|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:27:49 compute-0 ovn_controller[152945]: 2025-10-11T09:27:49Z|01442|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:27:49 compute-0 nova_compute[260935]: 2025-10-11 09:27:49.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:50 compute-0 ceph-mon[74313]: pgmap v2585: 321 pgs: 321 active+clean; 368 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.6 MiB/s wr, 84 op/s
Oct 11 09:27:50 compute-0 nova_compute[260935]: 2025-10-11 09:27:50.811 2 DEBUG nova.network.neutron [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Successfully updated port: c550ea63-9098-4f0c-95b9-107f8a0a8a5c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:27:50 compute-0 nova_compute[260935]: 2025-10-11 09:27:50.831 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:27:50 compute-0 nova_compute[260935]: 2025-10-11 09:27:50.831 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquired lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:27:50 compute-0 nova_compute[260935]: 2025-10-11 09:27:50.832 2 DEBUG nova.network.neutron [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:27:50 compute-0 nova_compute[260935]: 2025-10-11 09:27:50.917 2 DEBUG nova.compute.manager [req-ee596f4b-1ba9-41e3-9b3c-4ec0a02b253a req-0ee0257b-2910-4ee1-91b1-74062d6915dd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-changed-c550ea63-9098-4f0c-95b9-107f8a0a8a5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:50 compute-0 nova_compute[260935]: 2025-10-11 09:27:50.917 2 DEBUG nova.compute.manager [req-ee596f4b-1ba9-41e3-9b3c-4ec0a02b253a req-0ee0257b-2910-4ee1-91b1-74062d6915dd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Refreshing instance network info cache due to event network-changed-c550ea63-9098-4f0c-95b9-107f8a0a8a5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:27:50 compute-0 nova_compute[260935]: 2025-10-11 09:27:50.918 2 DEBUG oslo_concurrency.lockutils [req-ee596f4b-1ba9-41e3-9b3c-4ec0a02b253a req-0ee0257b-2910-4ee1-91b1-74062d6915dd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:27:50 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 368 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.6 MiB/s wr, 82 op/s
Oct 11 09:27:51 compute-0 nova_compute[260935]: 2025-10-11 09:27:51.034 2 DEBUG nova.network.neutron [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:27:51 compute-0 ceph-mon[74313]: pgmap v2586: 321 pgs: 321 active+clean; 368 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.6 MiB/s wr, 82 op/s
Oct 11 09:27:51 compute-0 sudo[403721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:27:51 compute-0 sudo[403721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:27:51 compute-0 sudo[403721]: pam_unix(sudo:session): session closed for user root
Oct 11 09:27:51 compute-0 sudo[403752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:27:51 compute-0 sudo[403752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:27:51 compute-0 sudo[403752]: pam_unix(sudo:session): session closed for user root
Oct 11 09:27:51 compute-0 podman[403745]: 2025-10-11 09:27:51.735659716 +0000 UTC m=+0.106561228 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:27:51 compute-0 sudo[403791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:27:51 compute-0 sudo[403791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:27:51 compute-0 sudo[403791]: pam_unix(sudo:session): session closed for user root
Oct 11 09:27:51 compute-0 nova_compute[260935]: 2025-10-11 09:27:51.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:51 compute-0 sudo[403816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:27:51 compute-0 sudo[403816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:27:52 compute-0 sudo[403816]: pam_unix(sudo:session): session closed for user root
Oct 11 09:27:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:27:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:27:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:27:52 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:27:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:27:52 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:27:52 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 3c9dfe89-70fd-4558-a95e-daf0c3ab2b9d does not exist
Oct 11 09:27:52 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 3ace1c22-5814-42e5-8219-fa497eaf5255 does not exist
Oct 11 09:27:52 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a5ba44a9-15f4-4ab6-b007-30c56b812b4d does not exist
Oct 11 09:27:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:27:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:27:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:27:52 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:27:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:27:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.738 2 DEBUG nova.network.neutron [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updating instance_info_cache with network_info: [{"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.759 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Releasing lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.759 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Instance network_info: |[{"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.760 2 DEBUG oslo_concurrency.lockutils [req-ee596f4b-1ba9-41e3-9b3c-4ec0a02b253a req-0ee0257b-2910-4ee1-91b1-74062d6915dd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.760 2 DEBUG nova.network.neutron [req-ee596f4b-1ba9-41e3-9b3c-4ec0a02b253a req-0ee0257b-2910-4ee1-91b1-74062d6915dd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Refreshing network info cache for port c550ea63-9098-4f0c-95b9-107f8a0a8a5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:27:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:27:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:27:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:27:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:27:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:27:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.809 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Start _get_guest_xml network_info=[{"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.817 2 WARNING nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.827 2 DEBUG nova.virt.libvirt.host [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.827 2 DEBUG nova.virt.libvirt.host [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.833 2 DEBUG nova.virt.libvirt.host [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.834 2 DEBUG nova.virt.libvirt.host [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.834 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.835 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.835 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.835 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.836 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.836 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.836 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.836 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.837 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.837 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.837 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.838 2 DEBUG nova.virt.hardware [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.841 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:52 compute-0 sudo[403872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:27:52 compute-0 sudo[403872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:27:52 compute-0 sudo[403872]: pam_unix(sudo:session): session closed for user root
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.887 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174857.8831065, ac163a38-d5cb-4d00-af1b-f3361849dd68 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.888 2 INFO nova.compute.manager [-] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] VM Stopped (Lifecycle Event)
Oct 11 09:27:52 compute-0 nova_compute[260935]: 2025-10-11 09:27:52.908 2 DEBUG nova.compute.manager [None req-e39d945a-293c-45ae-b806-3920025e28ff - - - - - -] [instance: ac163a38-d5cb-4d00-af1b-f3361849dd68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:27:52 compute-0 sudo[403898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:27:52 compute-0 sudo[403898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:27:52 compute-0 sudo[403898]: pam_unix(sudo:session): session closed for user root
Oct 11 09:27:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:53 compute-0 sudo[403924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:27:53 compute-0 sudo[403924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:27:53 compute-0 sudo[403924]: pam_unix(sudo:session): session closed for user root
Oct 11 09:27:53 compute-0 sudo[403967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:27:53 compute-0 sudo[403967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:27:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:27:53 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2070273049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.333 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.361 2 DEBUG nova.storage.rbd_utils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.365 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:53 compute-0 podman[404054]: 2025-10-11 09:27:53.57804082 +0000 UTC m=+0.068852874 container create 109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:27:53 compute-0 systemd[1]: Started libpod-conmon-109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a.scope.
Oct 11 09:27:53 compute-0 podman[404054]: 2025-10-11 09:27:53.548308691 +0000 UTC m=+0.039120725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:27:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:27:53 compute-0 podman[404054]: 2025-10-11 09:27:53.689307379 +0000 UTC m=+0.180119483 container init 109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 11 09:27:53 compute-0 podman[404054]: 2025-10-11 09:27:53.701325848 +0000 UTC m=+0.192137862 container start 109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 09:27:53 compute-0 podman[404054]: 2025-10-11 09:27:53.704924969 +0000 UTC m=+0.195737073 container attach 109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 09:27:53 compute-0 systemd[1]: libpod-109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a.scope: Deactivated successfully.
Oct 11 09:27:53 compute-0 determined_blackburn[404087]: 167 167
Oct 11 09:27:53 compute-0 conmon[404087]: conmon 109fb305e239d2d80bbb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a.scope/container/memory.events
Oct 11 09:27:53 compute-0 podman[404054]: 2025-10-11 09:27:53.712675568 +0000 UTC m=+0.203487622 container died 109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:27:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e088d4b3577a88df5f158b15bb27f32a3030334533fd2f3b5503b7a061677f8-merged.mount: Deactivated successfully.
Oct 11 09:27:53 compute-0 podman[404054]: 2025-10-11 09:27:53.766321432 +0000 UTC m=+0.257133456 container remove 109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:27:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:27:53 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1630305984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:27:53 compute-0 systemd[1]: libpod-conmon-109fb305e239d2d80bbbd331ee84ffed123cf6dc030eb80f095cea7ce87a0d6a.scope: Deactivated successfully.
Oct 11 09:27:53 compute-0 ceph-mon[74313]: pgmap v2587: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Oct 11 09:27:53 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2070273049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:27:53 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1630305984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.808 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.811 2 DEBUG nova.virt.libvirt.vif [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:27:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-432477065',display_name='tempest-TestNetworkBasicOps-server-432477065',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-432477065',id=131,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGjbEjzk/3FFF7Yo/G8/P29WblN99ZiVV249D7QjKIfGUPghnNZARefNU8o4oiRaxrPiX6DdwLvn7rFQvHVV6IMhrMlVIFfDDgq3XHS93q11oJL1iJIdpdsGtZABU/MPnw==',key_name='tempest-TestNetworkBasicOps-52027208',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-kx08je2o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:27:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=f4e61943-e59b-40a1-ab1e-07ba5f131bb3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.812 2 DEBUG nova.network.os_vif_util [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.813 2 DEBUG nova.network.os_vif_util [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:48:8e,bridge_name='br-int',has_traffic_filtering=True,id=c550ea63-9098-4f0c-95b9-107f8a0a8a5c,network=Network(066f5196-fbff-458b-ab48-5301dc324450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc550ea63-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.815 2 DEBUG nova.objects.instance [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'pci_devices' on Instance uuid f4e61943-e59b-40a1-ab1e-07ba5f131bb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.832 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:27:53 compute-0 nova_compute[260935]:   <uuid>f4e61943-e59b-40a1-ab1e-07ba5f131bb3</uuid>
Oct 11 09:27:53 compute-0 nova_compute[260935]:   <name>instance-00000083</name>
Oct 11 09:27:53 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:27:53 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:27:53 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <nova:name>tempest-TestNetworkBasicOps-server-432477065</nova:name>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:27:52</nova:creationTime>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:27:53 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:27:53 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:27:53 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:27:53 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:27:53 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:27:53 compute-0 nova_compute[260935]:         <nova:user uuid="dd336dcb24664df58613d4105ce1b004">tempest-TestNetworkBasicOps-1622727639-project-member</nova:user>
Oct 11 09:27:53 compute-0 nova_compute[260935]:         <nova:project uuid="bee9c6aad5fe46a2b0fb6caf4d995b72">tempest-TestNetworkBasicOps-1622727639</nova:project>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:27:53 compute-0 nova_compute[260935]:         <nova:port uuid="c550ea63-9098-4f0c-95b9-107f8a0a8a5c">
Oct 11 09:27:53 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:27:53 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:27:53 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <system>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <entry name="serial">f4e61943-e59b-40a1-ab1e-07ba5f131bb3</entry>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <entry name="uuid">f4e61943-e59b-40a1-ab1e-07ba5f131bb3</entry>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     </system>
Oct 11 09:27:53 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:27:53 compute-0 nova_compute[260935]:   <os>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:   </os>
Oct 11 09:27:53 compute-0 nova_compute[260935]:   <features>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:   </features>
Oct 11 09:27:53 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:27:53 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:27:53 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk">
Oct 11 09:27:53 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       </source>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:27:53 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk.config">
Oct 11 09:27:53 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       </source>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:27:53 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:09:48:8e"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <target dev="tapc550ea63-90"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3/console.log" append="off"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <video>
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     </video>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:27:53 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:27:53 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:27:53 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:27:53 compute-0 nova_compute[260935]: </domain>
Oct 11 09:27:53 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.840 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Preparing to wait for external event network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.841 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.841 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.842 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.843 2 DEBUG nova.virt.libvirt.vif [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:27:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-432477065',display_name='tempest-TestNetworkBasicOps-server-432477065',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-432477065',id=131,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGjbEjzk/3FFF7Yo/G8/P29WblN99ZiVV249D7QjKIfGUPghnNZARefNU8o4oiRaxrPiX6DdwLvn7rFQvHVV6IMhrMlVIFfDDgq3XHS93q11oJL1iJIdpdsGtZABU/MPnw==',key_name='tempest-TestNetworkBasicOps-52027208',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-kx08je2o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:27:46Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=f4e61943-e59b-40a1-ab1e-07ba5f131bb3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.843 2 DEBUG nova.network.os_vif_util [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.844 2 DEBUG nova.network.os_vif_util [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:48:8e,bridge_name='br-int',has_traffic_filtering=True,id=c550ea63-9098-4f0c-95b9-107f8a0a8a5c,network=Network(066f5196-fbff-458b-ab48-5301dc324450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc550ea63-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.845 2 DEBUG os_vif [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:48:8e,bridge_name='br-int',has_traffic_filtering=True,id=c550ea63-9098-4f0c-95b9-107f8a0a8a5c,network=Network(066f5196-fbff-458b-ab48-5301dc324450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc550ea63-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.847 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.848 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.852 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc550ea63-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.852 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc550ea63-90, col_values=(('external_ids', {'iface-id': 'c550ea63-9098-4f0c-95b9-107f8a0a8a5c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:48:8e', 'vm-uuid': 'f4e61943-e59b-40a1-ab1e-07ba5f131bb3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:53 compute-0 NetworkManager[44960]: <info>  [1760174873.8554] manager: (tapc550ea63-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/560)
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.864 2 INFO os_vif [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:48:8e,bridge_name='br-int',has_traffic_filtering=True,id=c550ea63-9098-4f0c-95b9-107f8a0a8a5c,network=Network(066f5196-fbff-458b-ab48-5301dc324450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc550ea63-90')
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.910 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.911 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.912 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] No VIF found with MAC fa:16:3e:09:48:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.913 2 INFO nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Using config drive
Oct 11 09:27:53 compute-0 nova_compute[260935]: 2025-10-11 09:27:53.934 2 DEBUG nova.storage.rbd_utils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:27:54 compute-0 podman[404132]: 2025-10-11 09:27:54.059553767 +0000 UTC m=+0.071175829 container create 73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:27:54 compute-0 systemd[1]: Started libpod-conmon-73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a.scope.
Oct 11 09:27:54 compute-0 podman[404132]: 2025-10-11 09:27:54.030465087 +0000 UTC m=+0.042087239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:27:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfe3ff0c32684eea7bdd9ee96bebad21fa107555c80ed4dd02f8a8808a72954/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfe3ff0c32684eea7bdd9ee96bebad21fa107555c80ed4dd02f8a8808a72954/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfe3ff0c32684eea7bdd9ee96bebad21fa107555c80ed4dd02f8a8808a72954/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfe3ff0c32684eea7bdd9ee96bebad21fa107555c80ed4dd02f8a8808a72954/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfe3ff0c32684eea7bdd9ee96bebad21fa107555c80ed4dd02f8a8808a72954/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:27:54 compute-0 podman[404132]: 2025-10-11 09:27:54.155512565 +0000 UTC m=+0.167134657 container init 73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 09:27:54 compute-0 podman[404132]: 2025-10-11 09:27:54.169711066 +0000 UTC m=+0.181333128 container start 73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 09:27:54 compute-0 podman[404132]: 2025-10-11 09:27:54.174938704 +0000 UTC m=+0.186560846 container attach 73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 09:27:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:27:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:54.402 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:27:54 compute-0 nova_compute[260935]: 2025-10-11 09:27:54.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:54.405 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:27:54 compute-0 nova_compute[260935]: 2025-10-11 09:27:54.445 2 INFO nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Creating config drive at /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3/disk.config
Oct 11 09:27:54 compute-0 nova_compute[260935]: 2025-10-11 09:27:54.449 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwkl9vmtt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:54 compute-0 nova_compute[260935]: 2025-10-11 09:27:54.625 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwkl9vmtt" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:54 compute-0 nova_compute[260935]: 2025-10-11 09:27:54.653 2 DEBUG nova.storage.rbd_utils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] rbd image f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:27:54 compute-0 nova_compute[260935]: 2025-10-11 09:27:54.657 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3/disk.config f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:27:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:27:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:27:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:27:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:27:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:27:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:27:54 compute-0 nova_compute[260935]: 2025-10-11 09:27:54.883 2 DEBUG oslo_concurrency.processutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3/disk.config f4e61943-e59b-40a1-ab1e-07ba5f131bb3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:27:54 compute-0 nova_compute[260935]: 2025-10-11 09:27:54.885 2 INFO nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Deleting local config drive /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3/disk.config because it was imported into RBD.
Oct 11 09:27:54 compute-0 NetworkManager[44960]: <info>  [1760174874.9711] manager: (tapc550ea63-90): new Tun device (/org/freedesktop/NetworkManager/Devices/561)
Oct 11 09:27:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:27:54
Oct 11 09:27:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:27:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:27:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'default.rgw.control', 'images', '.mgr']
Oct 11 09:27:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:27:54 compute-0 kernel: tapc550ea63-90: entered promiscuous mode
Oct 11 09:27:54 compute-0 ovn_controller[152945]: 2025-10-11T09:27:54Z|01443|binding|INFO|Claiming lport c550ea63-9098-4f0c-95b9-107f8a0a8a5c for this chassis.
Oct 11 09:27:54 compute-0 ovn_controller[152945]: 2025-10-11T09:27:54Z|01444|binding|INFO|c550ea63-9098-4f0c-95b9-107f8a0a8a5c: Claiming fa:16:3e:09:48:8e 10.100.0.10
Oct 11 09:27:54 compute-0 nova_compute[260935]: 2025-10-11 09:27:54.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:54.999 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:48:8e 10.100.0.10'], port_security=['fa:16:3e:09:48:8e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f4e61943-e59b-40a1-ab1e-07ba5f131bb3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-066f5196-fbff-458b-ab48-5301dc324450', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8fa9b7a7-a9c5-4782-b5d9-2526bb91182e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd668780-bea0-4b19-a159-529899429d34, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c550ea63-9098-4f0c-95b9-107f8a0a8a5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.001 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c550ea63-9098-4f0c-95b9-107f8a0a8a5c in datapath 066f5196-fbff-458b-ab48-5301dc324450 bound to our chassis
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.003 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 066f5196-fbff-458b-ab48-5301dc324450
Oct 11 09:27:55 compute-0 nova_compute[260935]: 2025-10-11 09:27:55.007 2 DEBUG nova.network.neutron [req-ee596f4b-1ba9-41e3-9b3c-4ec0a02b253a req-0ee0257b-2910-4ee1-91b1-74062d6915dd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updated VIF entry in instance network info cache for port c550ea63-9098-4f0c-95b9-107f8a0a8a5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:27:55 compute-0 nova_compute[260935]: 2025-10-11 09:27:55.008 2 DEBUG nova.network.neutron [req-ee596f4b-1ba9-41e3-9b3c-4ec0a02b253a req-0ee0257b-2910-4ee1-91b1-74062d6915dd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updating instance_info_cache with network_info: [{"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:27:55 compute-0 systemd-machined[215705]: New machine qemu-155-instance-00000083.
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.019 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee5878a-4f5f-4e6b-bfb7-474e35217c70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.020 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap066f5196-f1 in ovnmeta-066f5196-fbff-458b-ab48-5301dc324450 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.024 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap066f5196-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.024 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f32bb51-80e5-48e3-bc41-eb4924c81d90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.026 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[160348b7-2299-4851-bf85-a831a4f9f011]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:55 compute-0 nova_compute[260935]: 2025-10-11 09:27:55.033 2 DEBUG oslo_concurrency.lockutils [req-ee596f4b-1ba9-41e3-9b3c-4ec0a02b253a req-0ee0257b-2910-4ee1-91b1-74062d6915dd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.040 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[5d63f022-3d4c-4bb6-9bae-1d0ccffc0720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:55 compute-0 systemd[1]: Started Virtual Machine qemu-155-instance-00000083.
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.073 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[79dfee6f-1e21-49b8-aaf6-7b6d8d8da656]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:55 compute-0 systemd-udevd[404217]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:27:55 compute-0 nova_compute[260935]: 2025-10-11 09:27:55.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:55 compute-0 ovn_controller[152945]: 2025-10-11T09:27:55Z|01445|binding|INFO|Setting lport c550ea63-9098-4f0c-95b9-107f8a0a8a5c ovn-installed in OVS
Oct 11 09:27:55 compute-0 ovn_controller[152945]: 2025-10-11T09:27:55Z|01446|binding|INFO|Setting lport c550ea63-9098-4f0c-95b9-107f8a0a8a5c up in Southbound
Oct 11 09:27:55 compute-0 nova_compute[260935]: 2025-10-11 09:27:55.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:55 compute-0 NetworkManager[44960]: <info>  [1760174875.0954] device (tapc550ea63-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:27:55 compute-0 NetworkManager[44960]: <info>  [1760174875.0969] device (tapc550ea63-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.113 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ec94c8-48bc-460d-bad5-ce122726238a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.123 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[28210ecf-1dee-442e-a9a8-636ec2102d68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:55 compute-0 NetworkManager[44960]: <info>  [1760174875.1263] manager: (tap066f5196-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/562)
Oct 11 09:27:55 compute-0 systemd-udevd[404222]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.172 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[026c485e-70d5-41d5-8328-8ea53f707abe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.177 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b5cf61e3-ce8d-463e-aaba-3b1af36c2922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:55 compute-0 NetworkManager[44960]: <info>  [1760174875.2072] device (tap066f5196-f0): carrier: link connected
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.214 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[83bed4e9-5e2f-4b2b-b59d-00ec52ca0e3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.233 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0c604942-d286-41eb-97bf-caad4d03cd9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap066f5196-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:73:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671990, 'reachable_time': 27379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404255, 'error': None, 'target': 'ovnmeta-066f5196-fbff-458b-ab48-5301dc324450', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.256 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b41df90d-8d38-4373-8ac6-a8cbc2599008]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:73af'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671990, 'tstamp': 671990}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404256, 'error': None, 'target': 'ovnmeta-066f5196-fbff-458b-ab48-5301dc324450', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.287 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ad61ee-2200-45f3-8210-2562d5118aa9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap066f5196-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:73:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671990, 'reachable_time': 27379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 404260, 'error': None, 'target': 'ovnmeta-066f5196-fbff-458b-ab48-5301dc324450', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.322 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ec318b6c-f319-4497-a33c-36a7d53f99dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:55 compute-0 nova_compute[260935]: 2025-10-11 09:27:55.378 2 DEBUG nova.compute.manager [req-a35bba3b-23a0-416e-9042-0d9af3e19737 req-de85a242-f179-4ab7-91e0-9d4acbd97c21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:55 compute-0 nova_compute[260935]: 2025-10-11 09:27:55.379 2 DEBUG oslo_concurrency.lockutils [req-a35bba3b-23a0-416e-9042-0d9af3e19737 req-de85a242-f179-4ab7-91e0-9d4acbd97c21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:55 compute-0 nova_compute[260935]: 2025-10-11 09:27:55.379 2 DEBUG oslo_concurrency.lockutils [req-a35bba3b-23a0-416e-9042-0d9af3e19737 req-de85a242-f179-4ab7-91e0-9d4acbd97c21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:55 compute-0 nova_compute[260935]: 2025-10-11 09:27:55.380 2 DEBUG oslo_concurrency.lockutils [req-a35bba3b-23a0-416e-9042-0d9af3e19737 req-de85a242-f179-4ab7-91e0-9d4acbd97c21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:55 compute-0 nova_compute[260935]: 2025-10-11 09:27:55.380 2 DEBUG nova.compute.manager [req-a35bba3b-23a0-416e-9042-0d9af3e19737 req-de85a242-f179-4ab7-91e0-9d4acbd97c21 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Processing event network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.393 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[322753c1-7146-430b-9738-b8c38228c9ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.394 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap066f5196-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.394 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.395 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap066f5196-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:55 compute-0 kernel: tap066f5196-f0: entered promiscuous mode
Oct 11 09:27:55 compute-0 NetworkManager[44960]: <info>  [1760174875.3973] manager: (tap066f5196-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/563)
Oct 11 09:27:55 compute-0 nova_compute[260935]: 2025-10-11 09:27:55.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:55 compute-0 nova_compute[260935]: 2025-10-11 09:27:55.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.400 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap066f5196-f0, col_values=(('external_ids', {'iface-id': '7affb32e-f5e8-4572-9839-dbf40ccc596e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:27:55 compute-0 ovn_controller[152945]: 2025-10-11T09:27:55Z|01447|binding|INFO|Releasing lport 7affb32e-f5e8-4572-9839-dbf40ccc596e from this chassis (sb_readonly=0)
Oct 11 09:27:55 compute-0 nova_compute[260935]: 2025-10-11 09:27:55.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.416 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/066f5196-fbff-458b-ab48-5301dc324450.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/066f5196-fbff-458b-ab48-5301dc324450.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.417 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[923a403e-4f3d-4ca6-beea-e4ba4828f424]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.419 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-066f5196-fbff-458b-ab48-5301dc324450
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/066f5196-fbff-458b-ab48-5301dc324450.pid.haproxy
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 066f5196-fbff-458b-ab48-5301dc324450
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:27:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:27:55.419 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-066f5196-fbff-458b-ab48-5301dc324450', 'env', 'PROCESS_TAG=haproxy-066f5196-fbff-458b-ab48-5301dc324450', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/066f5196-fbff-458b-ab48-5301dc324450.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:27:55 compute-0 blissful_hopper[404148]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:27:55 compute-0 blissful_hopper[404148]: --> relative data size: 1.0
Oct 11 09:27:55 compute-0 blissful_hopper[404148]: --> All data devices are unavailable
Oct 11 09:27:55 compute-0 systemd[1]: libpod-73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a.scope: Deactivated successfully.
Oct 11 09:27:55 compute-0 systemd[1]: libpod-73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a.scope: Consumed 1.153s CPU time.
Oct 11 09:27:55 compute-0 podman[404132]: 2025-10-11 09:27:55.482444803 +0000 UTC m=+1.494066865 container died 73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 09:27:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:27:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:27:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:27:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:27:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:27:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:27:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:27:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:27:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:27:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:27:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcfe3ff0c32684eea7bdd9ee96bebad21fa107555c80ed4dd02f8a8808a72954-merged.mount: Deactivated successfully.
Oct 11 09:27:55 compute-0 podman[404132]: 2025-10-11 09:27:55.563521601 +0000 UTC m=+1.575143703 container remove 73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:27:55 compute-0 sudo[403967]: pam_unix(sudo:session): session closed for user root
Oct 11 09:27:55 compute-0 systemd[1]: libpod-conmon-73b7111a56c78bfb20dd2c23d343a6993e951fe38c63ee4b9b64408376596e5a.scope: Deactivated successfully.
Oct 11 09:27:55 compute-0 sudo[404330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:27:55 compute-0 sudo[404330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:27:55 compute-0 sudo[404330]: pam_unix(sudo:session): session closed for user root
Oct 11 09:27:55 compute-0 sudo[404363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:27:55 compute-0 sudo[404363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:27:55 compute-0 sudo[404363]: pam_unix(sudo:session): session closed for user root
Oct 11 09:27:55 compute-0 sudo[404401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:27:55 compute-0 sudo[404401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:27:55 compute-0 sudo[404401]: pam_unix(sudo:session): session closed for user root
Oct 11 09:27:55 compute-0 podman[404398]: 2025-10-11 09:27:55.906034497 +0000 UTC m=+0.087527481 container create 6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:27:55 compute-0 podman[404398]: 2025-10-11 09:27:55.867429967 +0000 UTC m=+0.048922991 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:27:55 compute-0 systemd[1]: Started libpod-conmon-6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32.scope.
Oct 11 09:27:55 compute-0 sudo[404438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:27:55 compute-0 sudo[404438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:27:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7567a7af7802e398bd5207beaa54376df4c7fcd14b370da4835b5b052e2eda6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:27:56 compute-0 podman[404398]: 2025-10-11 09:27:56.025253881 +0000 UTC m=+0.206746885 container init 6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.035 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174876.0343685, f4e61943-e59b-40a1-ab1e-07ba5f131bb3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.036 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] VM Started (Lifecycle Event)
Oct 11 09:27:56 compute-0 podman[404398]: 2025-10-11 09:27:56.036174189 +0000 UTC m=+0.217667153 container start 6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.037 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:27:56 compute-0 ceph-mon[74313]: pgmap v2588: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.045 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.051 2 INFO nova.virt.libvirt.driver [-] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Instance spawned successfully.
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.052 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.056 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.062 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.073 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.073 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.074 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.074 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.075 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.076 2 DEBUG nova.virt.libvirt.driver [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.081 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.081 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174876.0350084, f4e61943-e59b-40a1-ab1e-07ba5f131bb3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.081 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] VM Paused (Lifecycle Event)
Oct 11 09:27:56 compute-0 neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450[404464]: [NOTICE]   (404469) : New worker (404471) forked
Oct 11 09:27:56 compute-0 neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450[404464]: [NOTICE]   (404469) : Loading success.
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.110 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.115 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174876.0405686, f4e61943-e59b-40a1-ab1e-07ba5f131bb3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.115 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] VM Resumed (Lifecycle Event)
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.141 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.144 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.166 2 INFO nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Took 9.50 seconds to spawn the instance on the hypervisor.
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.168 2 DEBUG nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.169 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.230 2 INFO nova.compute.manager [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Took 10.52 seconds to build instance.
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.254 2 DEBUG oslo_concurrency.lockutils [None req-f79e75ad-94d7-4be2-99b6-d39e1a2e4247 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:56 compute-0 podman[404519]: 2025-10-11 09:27:56.436660412 +0000 UTC m=+0.067442925 container create fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:27:56 compute-0 systemd[1]: Started libpod-conmon-fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2.scope.
Oct 11 09:27:56 compute-0 podman[404519]: 2025-10-11 09:27:56.412667174 +0000 UTC m=+0.043449777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:27:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:27:56 compute-0 podman[404519]: 2025-10-11 09:27:56.566801634 +0000 UTC m=+0.197584167 container init fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 09:27:56 compute-0 podman[404519]: 2025-10-11 09:27:56.577519657 +0000 UTC m=+0.208302200 container start fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:27:56 compute-0 podman[404519]: 2025-10-11 09:27:56.581809578 +0000 UTC m=+0.212592101 container attach fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:27:56 compute-0 festive_lalande[404535]: 167 167
Oct 11 09:27:56 compute-0 systemd[1]: libpod-fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2.scope: Deactivated successfully.
Oct 11 09:27:56 compute-0 podman[404519]: 2025-10-11 09:27:56.588914458 +0000 UTC m=+0.219697001 container died fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 09:27:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a008830d2ad2f07edef94b0ed8740421f43c805f8fb81d28115368c1c74ad6a-merged.mount: Deactivated successfully.
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.628 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174861.6273727, b39b8161-8a46-46fe-8a2a-0fc6b4eab850 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.629 2 INFO nova.compute.manager [-] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] VM Stopped (Lifecycle Event)
Oct 11 09:27:56 compute-0 podman[404519]: 2025-10-11 09:27:56.642663445 +0000 UTC m=+0.273445998 container remove fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:27:56 compute-0 nova_compute[260935]: 2025-10-11 09:27:56.653 2 DEBUG nova.compute.manager [None req-464fad34-0a31-4c8f-9aac-ecb224ebe1b0 - - - - - -] [instance: b39b8161-8a46-46fe-8a2a-0fc6b4eab850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:27:56 compute-0 systemd[1]: libpod-conmon-fdb7a9facb4aff9509d7504b004236f6840ef3e74fffacfc2f074da92a9c23f2.scope: Deactivated successfully.
Oct 11 09:27:56 compute-0 podman[404559]: 2025-10-11 09:27:56.897181638 +0000 UTC m=+0.050531447 container create fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:27:56 compute-0 systemd[1]: Started libpod-conmon-fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945.scope.
Oct 11 09:27:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cde817fe4f1bdff856147757d987d91227908f6b34bd3f5f48c9475fc61593/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cde817fe4f1bdff856147757d987d91227908f6b34bd3f5f48c9475fc61593/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cde817fe4f1bdff856147757d987d91227908f6b34bd3f5f48c9475fc61593/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cde817fe4f1bdff856147757d987d91227908f6b34bd3f5f48c9475fc61593/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:27:56 compute-0 podman[404559]: 2025-10-11 09:27:56.879100718 +0000 UTC m=+0.032450557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:27:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:27:56 compute-0 podman[404559]: 2025-10-11 09:27:56.987011663 +0000 UTC m=+0.140361512 container init fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:27:56 compute-0 podman[404559]: 2025-10-11 09:27:56.995372989 +0000 UTC m=+0.148722808 container start fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 09:27:57 compute-0 podman[404559]: 2025-10-11 09:27:57.001551343 +0000 UTC m=+0.154901202 container attach fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 09:27:57 compute-0 podman[404573]: 2025-10-11 09:27:57.016009311 +0000 UTC m=+0.079292708 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:27:57 compute-0 podman[404576]: 2025-10-11 09:27:57.058322395 +0000 UTC m=+0.113557985 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct 11 09:27:57 compute-0 nova_compute[260935]: 2025-10-11 09:27:57.479 2 DEBUG nova.compute.manager [req-3e98ba0d-2034-4080-a602-571243e4ef6b req-2ece9a50-3ba7-4783-86a7-13cf613b9701 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:27:57 compute-0 nova_compute[260935]: 2025-10-11 09:27:57.479 2 DEBUG oslo_concurrency.lockutils [req-3e98ba0d-2034-4080-a602-571243e4ef6b req-2ece9a50-3ba7-4783-86a7-13cf613b9701 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:27:57 compute-0 nova_compute[260935]: 2025-10-11 09:27:57.479 2 DEBUG oslo_concurrency.lockutils [req-3e98ba0d-2034-4080-a602-571243e4ef6b req-2ece9a50-3ba7-4783-86a7-13cf613b9701 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:27:57 compute-0 nova_compute[260935]: 2025-10-11 09:27:57.480 2 DEBUG oslo_concurrency.lockutils [req-3e98ba0d-2034-4080-a602-571243e4ef6b req-2ece9a50-3ba7-4783-86a7-13cf613b9701 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:27:57 compute-0 nova_compute[260935]: 2025-10-11 09:27:57.480 2 DEBUG nova.compute.manager [req-3e98ba0d-2034-4080-a602-571243e4ef6b req-2ece9a50-3ba7-4783-86a7-13cf613b9701 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] No waiting events found dispatching network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:27:57 compute-0 nova_compute[260935]: 2025-10-11 09:27:57.480 2 WARNING nova.compute.manager [req-3e98ba0d-2034-4080-a602-571243e4ef6b req-2ece9a50-3ba7-4783-86a7-13cf613b9701 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received unexpected event network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c for instance with vm_state active and task_state None.
Oct 11 09:27:57 compute-0 agitated_wing[404577]: {
Oct 11 09:27:57 compute-0 agitated_wing[404577]:     "0": [
Oct 11 09:27:57 compute-0 agitated_wing[404577]:         {
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "devices": [
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "/dev/loop3"
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             ],
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "lv_name": "ceph_lv0",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "lv_size": "21470642176",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "name": "ceph_lv0",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "tags": {
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.cluster_name": "ceph",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.crush_device_class": "",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.encrypted": "0",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.osd_id": "0",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.type": "block",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.vdo": "0"
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             },
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "type": "block",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "vg_name": "ceph_vg0"
Oct 11 09:27:57 compute-0 agitated_wing[404577]:         }
Oct 11 09:27:57 compute-0 agitated_wing[404577]:     ],
Oct 11 09:27:57 compute-0 agitated_wing[404577]:     "1": [
Oct 11 09:27:57 compute-0 agitated_wing[404577]:         {
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "devices": [
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "/dev/loop4"
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             ],
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "lv_name": "ceph_lv1",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "lv_size": "21470642176",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "name": "ceph_lv1",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "tags": {
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.cluster_name": "ceph",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.crush_device_class": "",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.encrypted": "0",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.osd_id": "1",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.type": "block",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.vdo": "0"
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             },
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "type": "block",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "vg_name": "ceph_vg1"
Oct 11 09:27:57 compute-0 agitated_wing[404577]:         }
Oct 11 09:27:57 compute-0 agitated_wing[404577]:     ],
Oct 11 09:27:57 compute-0 agitated_wing[404577]:     "2": [
Oct 11 09:27:57 compute-0 agitated_wing[404577]:         {
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "devices": [
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "/dev/loop5"
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             ],
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "lv_name": "ceph_lv2",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "lv_size": "21470642176",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "name": "ceph_lv2",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "tags": {
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.cluster_name": "ceph",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.crush_device_class": "",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.encrypted": "0",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.osd_id": "2",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.type": "block",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:                 "ceph.vdo": "0"
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             },
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "type": "block",
Oct 11 09:27:57 compute-0 agitated_wing[404577]:             "vg_name": "ceph_vg2"
Oct 11 09:27:57 compute-0 agitated_wing[404577]:         }
Oct 11 09:27:57 compute-0 agitated_wing[404577]:     ]
Oct 11 09:27:57 compute-0 agitated_wing[404577]: }
Oct 11 09:27:57 compute-0 systemd[1]: libpod-fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945.scope: Deactivated successfully.
Oct 11 09:27:57 compute-0 podman[404628]: 2025-10-11 09:27:57.821726418 +0000 UTC m=+0.028350701 container died fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:27:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3cde817fe4f1bdff856147757d987d91227908f6b34bd3f5f48c9475fc61593-merged.mount: Deactivated successfully.
Oct 11 09:27:57 compute-0 podman[404628]: 2025-10-11 09:27:57.88096225 +0000 UTC m=+0.087586523 container remove fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:27:57 compute-0 systemd[1]: libpod-conmon-fe8111882924ed36225f22d9cfdbb957e100cee2566bc9600399086f9d579945.scope: Deactivated successfully.
Oct 11 09:27:57 compute-0 sudo[404438]: pam_unix(sudo:session): session closed for user root
Oct 11 09:27:58 compute-0 sudo[404644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:27:58 compute-0 sudo[404644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:27:58 compute-0 sudo[404644]: pam_unix(sudo:session): session closed for user root
Oct 11 09:27:58 compute-0 nova_compute[260935]: 2025-10-11 09:27:58.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:58 compute-0 ceph-mon[74313]: pgmap v2589: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:27:58 compute-0 sudo[404669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:27:58 compute-0 sudo[404669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:27:58 compute-0 sudo[404669]: pam_unix(sudo:session): session closed for user root
Oct 11 09:27:58 compute-0 sudo[404694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:27:58 compute-0 sudo[404694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:27:58 compute-0 sudo[404694]: pam_unix(sudo:session): session closed for user root
Oct 11 09:27:58 compute-0 sudo[404719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:27:58 compute-0 sudo[404719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:27:58 compute-0 podman[404786]: 2025-10-11 09:27:58.717317353 +0000 UTC m=+0.068444113 container create 9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 09:27:58 compute-0 systemd[1]: Started libpod-conmon-9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a.scope.
Oct 11 09:27:58 compute-0 podman[404786]: 2025-10-11 09:27:58.687286685 +0000 UTC m=+0.038413535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:27:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:27:58 compute-0 podman[404786]: 2025-10-11 09:27:58.813689663 +0000 UTC m=+0.164816443 container init 9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:27:58 compute-0 podman[404786]: 2025-10-11 09:27:58.820939787 +0000 UTC m=+0.172066537 container start 9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:27:58 compute-0 podman[404786]: 2025-10-11 09:27:58.824266001 +0000 UTC m=+0.175392751 container attach 9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_gagarin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:27:58 compute-0 funny_gagarin[404800]: 167 167
Oct 11 09:27:58 compute-0 systemd[1]: libpod-9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a.scope: Deactivated successfully.
Oct 11 09:27:58 compute-0 podman[404786]: 2025-10-11 09:27:58.830371013 +0000 UTC m=+0.181497773 container died 9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_gagarin, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 09:27:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3ea3694b280a8757f3208512b973c988aa5caf3f4419a015f2e2057e61a2d9c-merged.mount: Deactivated successfully.
Oct 11 09:27:58 compute-0 nova_compute[260935]: 2025-10-11 09:27:58.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:58 compute-0 podman[404786]: 2025-10-11 09:27:58.873742687 +0000 UTC m=+0.224869437 container remove 9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:27:58 compute-0 systemd[1]: libpod-conmon-9701ea7ca5b8548af42c56b52c842004653e35f9b912aa326426bb0c2c594d3a.scope: Deactivated successfully.
Oct 11 09:27:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Oct 11 09:27:59 compute-0 podman[404823]: 2025-10-11 09:27:59.140022742 +0000 UTC m=+0.071909270 container create 431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ardinghelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:27:59 compute-0 systemd[1]: Started libpod-conmon-431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675.scope.
Oct 11 09:27:59 compute-0 podman[404823]: 2025-10-11 09:27:59.108897864 +0000 UTC m=+0.040784402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:27:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:27:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0174f96e188e3f5aef1d4ce80bb1c785603016847defde42858b9e5b1485f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0174f96e188e3f5aef1d4ce80bb1c785603016847defde42858b9e5b1485f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0174f96e188e3f5aef1d4ce80bb1c785603016847defde42858b9e5b1485f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0174f96e188e3f5aef1d4ce80bb1c785603016847defde42858b9e5b1485f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:27:59 compute-0 podman[404823]: 2025-10-11 09:27:59.270537435 +0000 UTC m=+0.202423923 container init 431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ardinghelli, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:27:59 compute-0 podman[404823]: 2025-10-11 09:27:59.285728924 +0000 UTC m=+0.217615392 container start 431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 09:27:59 compute-0 podman[404823]: 2025-10-11 09:27:59.290472008 +0000 UTC m=+0.222358466 container attach 431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ardinghelli, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:27:59 compute-0 ovn_controller[152945]: 2025-10-11T09:27:59Z|01448|binding|INFO|Releasing lport 7affb32e-f5e8-4572-9839-dbf40ccc596e from this chassis (sb_readonly=0)
Oct 11 09:27:59 compute-0 NetworkManager[44960]: <info>  [1760174879.5684] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/564)
Oct 11 09:27:59 compute-0 nova_compute[260935]: 2025-10-11 09:27:59.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:59 compute-0 NetworkManager[44960]: <info>  [1760174879.5702] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/565)
Oct 11 09:27:59 compute-0 ovn_controller[152945]: 2025-10-11T09:27:59Z|01449|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:27:59 compute-0 ovn_controller[152945]: 2025-10-11T09:27:59Z|01450|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:27:59 compute-0 ovn_controller[152945]: 2025-10-11T09:27:59Z|01451|binding|INFO|Releasing lport 7affb32e-f5e8-4572-9839-dbf40ccc596e from this chassis (sb_readonly=0)
Oct 11 09:27:59 compute-0 ovn_controller[152945]: 2025-10-11T09:27:59Z|01452|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:27:59 compute-0 ovn_controller[152945]: 2025-10-11T09:27:59Z|01453|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:27:59 compute-0 nova_compute[260935]: 2025-10-11 09:27:59.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:27:59 compute-0 nova_compute[260935]: 2025-10-11 09:27:59.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:00 compute-0 ceph-mon[74313]: pgmap v2590: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]: {
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:         "osd_id": 2,
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:         "type": "bluestore"
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:     },
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:         "osd_id": 0,
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:         "type": "bluestore"
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:     },
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:         "osd_id": 1,
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:         "type": "bluestore"
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]:     }
Oct 11 09:28:00 compute-0 quirky_ardinghelli[404840]: }
Oct 11 09:28:00 compute-0 nova_compute[260935]: 2025-10-11 09:28:00.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:00 compute-0 systemd[1]: libpod-431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675.scope: Deactivated successfully.
Oct 11 09:28:00 compute-0 systemd[1]: libpod-431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675.scope: Consumed 1.129s CPU time.
Oct 11 09:28:00 compute-0 podman[404874]: 2025-10-11 09:28:00.469698887 +0000 UTC m=+0.040127764 container died 431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ardinghelli, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:28:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d0174f96e188e3f5aef1d4ce80bb1c785603016847defde42858b9e5b1485f5-merged.mount: Deactivated successfully.
Oct 11 09:28:00 compute-0 podman[404874]: 2025-10-11 09:28:00.521064066 +0000 UTC m=+0.091492933 container remove 431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ardinghelli, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 09:28:00 compute-0 systemd[1]: libpod-conmon-431af402c777a49d585736b5636f7a46ffeef4f946081fe92f8683a375f70675.scope: Deactivated successfully.
Oct 11 09:28:00 compute-0 sudo[404719]: pam_unix(sudo:session): session closed for user root
Oct 11 09:28:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:28:00 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:28:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:28:00 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:28:00 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ce8331f4-0274-4169-9608-d020345d4f20 does not exist
Oct 11 09:28:00 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 2bded35a-810d-4d42-92d6-249f47933cc9 does not exist
Oct 11 09:28:00 compute-0 nova_compute[260935]: 2025-10-11 09:28:00.604 2 DEBUG nova.compute.manager [req-3ad761a3-8806-4b9d-a516-40d698ae7679 req-0f41dfab-010f-44fd-8152-b235f41c7008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-changed-c550ea63-9098-4f0c-95b9-107f8a0a8a5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:28:00 compute-0 nova_compute[260935]: 2025-10-11 09:28:00.605 2 DEBUG nova.compute.manager [req-3ad761a3-8806-4b9d-a516-40d698ae7679 req-0f41dfab-010f-44fd-8152-b235f41c7008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Refreshing instance network info cache due to event network-changed-c550ea63-9098-4f0c-95b9-107f8a0a8a5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:28:00 compute-0 nova_compute[260935]: 2025-10-11 09:28:00.605 2 DEBUG oslo_concurrency.lockutils [req-3ad761a3-8806-4b9d-a516-40d698ae7679 req-0f41dfab-010f-44fd-8152-b235f41c7008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:28:00 compute-0 nova_compute[260935]: 2025-10-11 09:28:00.606 2 DEBUG oslo_concurrency.lockutils [req-3ad761a3-8806-4b9d-a516-40d698ae7679 req-0f41dfab-010f-44fd-8152-b235f41c7008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:28:00 compute-0 nova_compute[260935]: 2025-10-11 09:28:00.607 2 DEBUG nova.network.neutron [req-3ad761a3-8806-4b9d-a516-40d698ae7679 req-0f41dfab-010f-44fd-8152-b235f41c7008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Refreshing network info cache for port c550ea63-9098-4f0c-95b9-107f8a0a8a5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:28:00 compute-0 sudo[404889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:28:00 compute-0 sudo[404889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:28:00 compute-0 sudo[404889]: pam_unix(sudo:session): session closed for user root
Oct 11 09:28:00 compute-0 sudo[404914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:28:00 compute-0 sudo[404914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:28:00 compute-0 sudo[404914]: pam_unix(sudo:session): session closed for user root
Oct 11 09:28:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2591: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 186 KiB/s wr, 68 op/s
Oct 11 09:28:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:01.408 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:28:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:28:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:28:01 compute-0 ceph-mon[74313]: pgmap v2591: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 186 KiB/s wr, 68 op/s
Oct 11 09:28:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 186 KiB/s wr, 74 op/s
Oct 11 09:28:03 compute-0 nova_compute[260935]: 2025-10-11 09:28:03.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:03 compute-0 nova_compute[260935]: 2025-10-11 09:28:03.408 2 DEBUG nova.network.neutron [req-3ad761a3-8806-4b9d-a516-40d698ae7679 req-0f41dfab-010f-44fd-8152-b235f41c7008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updated VIF entry in instance network info cache for port c550ea63-9098-4f0c-95b9-107f8a0a8a5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:28:03 compute-0 nova_compute[260935]: 2025-10-11 09:28:03.411 2 DEBUG nova.network.neutron [req-3ad761a3-8806-4b9d-a516-40d698ae7679 req-0f41dfab-010f-44fd-8152-b235f41c7008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updating instance_info_cache with network_info: [{"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:28:03 compute-0 nova_compute[260935]: 2025-10-11 09:28:03.445 2 DEBUG oslo_concurrency.lockutils [req-3ad761a3-8806-4b9d-a516-40d698ae7679 req-0f41dfab-010f-44fd-8152-b235f41c7008 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:28:03 compute-0 nova_compute[260935]: 2025-10-11 09:28:03.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:04 compute-0 ceph-mon[74313]: pgmap v2592: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 186 KiB/s wr, 74 op/s
Oct 11 09:28:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:28:04 compute-0 nova_compute[260935]: 2025-10-11 09:28:04.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2593: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:28:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:28:05 compute-0 sshd-session[404939]: Invalid user navid from 152.32.213.170 port 51968
Oct 11 09:28:05 compute-0 sshd-session[404939]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:28:05 compute-0 sshd-session[404939]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=152.32.213.170
Oct 11 09:28:06 compute-0 ceph-mon[74313]: pgmap v2593: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:28:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:06.280 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:c9:82 10.100.0.2 2001:db8::f816:3eff:fe1f:c982'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe1f:c982/64', 'neutron:device_id': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3325c2b-a3cd-4cde-bb9c-e17cc326ff80, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1945eb8f-f9e5-437a-9fc9-017b522ae777) old=Port_Binding(mac=['fa:16:3e:1f:c9:82 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:28:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:06.283 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1945eb8f-f9e5-437a-9fc9-017b522ae777 in datapath 03e15108-5f8d-4fec-9ad8-133d9551c667 updated
Oct 11 09:28:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:06.286 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03e15108-5f8d-4fec-9ad8-133d9551c667, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:28:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:06.287 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b054156d-5dbf-46a2-afac-d2f2c01a6e5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:28:07 compute-0 sshd-session[404941]: Invalid user admin from 165.232.82.252 port 35120
Oct 11 09:28:07 compute-0 sshd-session[404941]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:28:07 compute-0 sshd-session[404941]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:28:07 compute-0 ovn_controller[152945]: 2025-10-11T09:28:07Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:48:8e 10.100.0.10
Oct 11 09:28:07 compute-0 ovn_controller[152945]: 2025-10-11T09:28:07Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:48:8e 10.100.0.10
Oct 11 09:28:08 compute-0 nova_compute[260935]: 2025-10-11 09:28:08.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:08 compute-0 ceph-mon[74313]: pgmap v2594: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:28:08 compute-0 sshd-session[404939]: Failed password for invalid user navid from 152.32.213.170 port 51968 ssh2
Oct 11 09:28:08 compute-0 nova_compute[260935]: 2025-10-11 09:28:08.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 395 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Oct 11 09:28:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:28:09 compute-0 sshd-session[404941]: Failed password for invalid user admin from 165.232.82.252 port 35120 ssh2
Oct 11 09:28:09 compute-0 sshd-session[404939]: Received disconnect from 152.32.213.170 port 51968:11: Bye Bye [preauth]
Oct 11 09:28:09 compute-0 sshd-session[404939]: Disconnected from invalid user navid 152.32.213.170 port 51968 [preauth]
Oct 11 09:28:10 compute-0 ceph-mon[74313]: pgmap v2595: 321 pgs: 321 active+clean; 395 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Oct 11 09:28:10 compute-0 sshd-session[404941]: Connection closed by invalid user admin 165.232.82.252 port 35120 [preauth]
Oct 11 09:28:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2596: 321 pgs: 321 active+clean; 395 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Oct 11 09:28:12 compute-0 ceph-mon[74313]: pgmap v2596: 321 pgs: 321 active+clean; 395 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Oct 11 09:28:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2597: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 510 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Oct 11 09:28:13 compute-0 nova_compute[260935]: 2025-10-11 09:28:13.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:13 compute-0 nova_compute[260935]: 2025-10-11 09:28:13.530 2 INFO nova.compute.manager [None req-76dee426-8c43-4ff6-8ce0-6f4fedb1a90c dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Get console output
Oct 11 09:28:13 compute-0 nova_compute[260935]: 2025-10-11 09:28:13.537 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:28:13 compute-0 nova_compute[260935]: 2025-10-11 09:28:13.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:14 compute-0 ceph-mon[74313]: pgmap v2597: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 510 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Oct 11 09:28:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:28:14 compute-0 ovn_controller[152945]: 2025-10-11T09:28:14Z|01454|binding|INFO|Releasing lport 7affb32e-f5e8-4572-9839-dbf40ccc596e from this chassis (sb_readonly=0)
Oct 11 09:28:14 compute-0 ovn_controller[152945]: 2025-10-11T09:28:14Z|01455|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:28:14 compute-0 ovn_controller[152945]: 2025-10-11T09:28:14Z|01456|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:28:14 compute-0 nova_compute[260935]: 2025-10-11 09:28:14.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:14.669 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:c9:82 10.100.0.2 2001:db8:0:1:f816:3eff:fe1f:c982 2001:db8::f816:3eff:fe1f:c982'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe1f:c982/64 2001:db8::f816:3eff:fe1f:c982/64', 'neutron:device_id': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3325c2b-a3cd-4cde-bb9c-e17cc326ff80, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1945eb8f-f9e5-437a-9fc9-017b522ae777) old=Port_Binding(mac=['fa:16:3e:1f:c9:82 10.100.0.2 2001:db8::f816:3eff:fe1f:c982'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe1f:c982/64', 'neutron:device_id': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:28:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:14.671 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1945eb8f-f9e5-437a-9fc9-017b522ae777 in datapath 03e15108-5f8d-4fec-9ad8-133d9551c667 updated
Oct 11 09:28:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:14.674 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03e15108-5f8d-4fec-9ad8-133d9551c667, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:28:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:14.676 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb251b8-9814-480c-a49a-bef2825f1f61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:14 compute-0 ovn_controller[152945]: 2025-10-11T09:28:14Z|01457|binding|INFO|Releasing lport 7affb32e-f5e8-4572-9839-dbf40ccc596e from this chassis (sb_readonly=0)
Oct 11 09:28:14 compute-0 ovn_controller[152945]: 2025-10-11T09:28:14Z|01458|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:28:14 compute-0 ovn_controller[152945]: 2025-10-11T09:28:14Z|01459|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:28:14 compute-0 nova_compute[260935]: 2025-10-11 09:28:14.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2598: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:28:15 compute-0 ceph-mon[74313]: pgmap v2598: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:28:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:15.224 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:28:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:15.224 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:28:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:15.225 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:28:15 compute-0 nova_compute[260935]: 2025-10-11 09:28:15.891 2 INFO nova.compute.manager [None req-b3e04bff-9c19-4742-aef8-9bbc5633f275 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Get console output
Oct 11 09:28:15 compute-0 nova_compute[260935]: 2025-10-11 09:28:15.898 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:28:16 compute-0 nova_compute[260935]: 2025-10-11 09:28:16.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:16 compute-0 NetworkManager[44960]: <info>  [1760174896.4561] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/566)
Oct 11 09:28:16 compute-0 NetworkManager[44960]: <info>  [1760174896.4575] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/567)
Oct 11 09:28:16 compute-0 ovn_controller[152945]: 2025-10-11T09:28:16Z|01460|binding|INFO|Releasing lport 7affb32e-f5e8-4572-9839-dbf40ccc596e from this chassis (sb_readonly=0)
Oct 11 09:28:16 compute-0 ovn_controller[152945]: 2025-10-11T09:28:16Z|01461|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:28:16 compute-0 ovn_controller[152945]: 2025-10-11T09:28:16Z|01462|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:28:16 compute-0 ovn_controller[152945]: 2025-10-11T09:28:16Z|01463|binding|INFO|Releasing lport 7affb32e-f5e8-4572-9839-dbf40ccc596e from this chassis (sb_readonly=0)
Oct 11 09:28:16 compute-0 ovn_controller[152945]: 2025-10-11T09:28:16Z|01464|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:28:16 compute-0 ovn_controller[152945]: 2025-10-11T09:28:16Z|01465|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:28:16 compute-0 nova_compute[260935]: 2025-10-11 09:28:16.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:16 compute-0 nova_compute[260935]: 2025-10-11 09:28:16.856 2 INFO nova.compute.manager [None req-110c04c2-6366-4ac4-aef6-6a3e53e8faa6 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Get console output
Oct 11 09:28:16 compute-0 nova_compute[260935]: 2025-10-11 09:28:16.863 29289 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 11 09:28:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2599: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:28:17 compute-0 podman[404945]: 2025-10-11 09:28:17.802411417 +0000 UTC m=+0.083842917 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:28:18 compute-0 ceph-mon[74313]: pgmap v2599: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.113 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.114 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.114 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.114 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.114 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.115 2 INFO nova.compute.manager [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Terminating instance
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.116 2 DEBUG nova.compute.manager [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:28:18 compute-0 kernel: tapc550ea63-90 (unregistering): left promiscuous mode
Oct 11 09:28:18 compute-0 NetworkManager[44960]: <info>  [1760174898.1837] device (tapc550ea63-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.191 2 DEBUG nova.compute.manager [req-5a8bbdf3-0fb2-466d-b83f-afc726083eb0 req-25e06eb8-d50c-415c-97a2-ccad79cc77a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-changed-c550ea63-9098-4f0c-95b9-107f8a0a8a5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.192 2 DEBUG nova.compute.manager [req-5a8bbdf3-0fb2-466d-b83f-afc726083eb0 req-25e06eb8-d50c-415c-97a2-ccad79cc77a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Refreshing instance network info cache due to event network-changed-c550ea63-9098-4f0c-95b9-107f8a0a8a5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.192 2 DEBUG oslo_concurrency.lockutils [req-5a8bbdf3-0fb2-466d-b83f-afc726083eb0 req-25e06eb8-d50c-415c-97a2-ccad79cc77a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.192 2 DEBUG oslo_concurrency.lockutils [req-5a8bbdf3-0fb2-466d-b83f-afc726083eb0 req-25e06eb8-d50c-415c-97a2-ccad79cc77a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.193 2 DEBUG nova.network.neutron [req-5a8bbdf3-0fb2-466d-b83f-afc726083eb0 req-25e06eb8-d50c-415c-97a2-ccad79cc77a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Refreshing network info cache for port c550ea63-9098-4f0c-95b9-107f8a0a8a5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:18 compute-0 ovn_controller[152945]: 2025-10-11T09:28:18Z|01466|binding|INFO|Releasing lport c550ea63-9098-4f0c-95b9-107f8a0a8a5c from this chassis (sb_readonly=0)
Oct 11 09:28:18 compute-0 ovn_controller[152945]: 2025-10-11T09:28:18Z|01467|binding|INFO|Setting lport c550ea63-9098-4f0c-95b9-107f8a0a8a5c down in Southbound
Oct 11 09:28:18 compute-0 ovn_controller[152945]: 2025-10-11T09:28:18Z|01468|binding|INFO|Removing iface tapc550ea63-90 ovn-installed in OVS
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.216 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:48:8e 10.100.0.10'], port_security=['fa:16:3e:09:48:8e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f4e61943-e59b-40a1-ab1e-07ba5f131bb3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-066f5196-fbff-458b-ab48-5301dc324450', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bee9c6aad5fe46a2b0fb6caf4d995b72', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8fa9b7a7-a9c5-4782-b5d9-2526bb91182e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd668780-bea0-4b19-a159-529899429d34, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c550ea63-9098-4f0c-95b9-107f8a0a8a5c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:28:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.217 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c550ea63-9098-4f0c-95b9-107f8a0a8a5c in datapath 066f5196-fbff-458b-ab48-5301dc324450 unbound from our chassis
Oct 11 09:28:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.219 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 066f5196-fbff-458b-ab48-5301dc324450, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:28:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.220 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d0633edc-03b5-4caa-a21f-ec8ca12d74af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.222 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-066f5196-fbff-458b-ab48-5301dc324450 namespace which is not needed anymore
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:18 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d00000083.scope: Deactivated successfully.
Oct 11 09:28:18 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d00000083.scope: Consumed 13.019s CPU time.
Oct 11 09:28:18 compute-0 systemd-machined[215705]: Machine qemu-155-instance-00000083 terminated.
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.361 2 INFO nova.virt.libvirt.driver [-] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Instance destroyed successfully.
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.362 2 DEBUG nova.objects.instance [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lazy-loading 'resources' on Instance uuid f4e61943-e59b-40a1-ab1e-07ba5f131bb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:28:18 compute-0 neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450[404464]: [NOTICE]   (404469) : haproxy version is 2.8.14-c23fe91
Oct 11 09:28:18 compute-0 neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450[404464]: [NOTICE]   (404469) : path to executable is /usr/sbin/haproxy
Oct 11 09:28:18 compute-0 neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450[404464]: [WARNING]  (404469) : Exiting Master process...
Oct 11 09:28:18 compute-0 neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450[404464]: [WARNING]  (404469) : Exiting Master process...
Oct 11 09:28:18 compute-0 neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450[404464]: [ALERT]    (404469) : Current worker (404471) exited with code 143 (Terminated)
Oct 11 09:28:18 compute-0 neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450[404464]: [WARNING]  (404469) : All workers exited. Exiting... (0)
Oct 11 09:28:18 compute-0 systemd[1]: libpod-6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32.scope: Deactivated successfully.
Oct 11 09:28:18 compute-0 podman[404990]: 2025-10-11 09:28:18.386937862 +0000 UTC m=+0.067498036 container died 6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.392 2 DEBUG nova.virt.libvirt.vif [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:27:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-432477065',display_name='tempest-TestNetworkBasicOps-server-432477065',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-432477065',id=131,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGjbEjzk/3FFF7Yo/G8/P29WblN99ZiVV249D7QjKIfGUPghnNZARefNU8o4oiRaxrPiX6DdwLvn7rFQvHVV6IMhrMlVIFfDDgq3XHS93q11oJL1iJIdpdsGtZABU/MPnw==',key_name='tempest-TestNetworkBasicOps-52027208',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:27:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bee9c6aad5fe46a2b0fb6caf4d995b72',ramdisk_id='',reservation_id='r-kx08je2o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1622727639',owner_user_name='tempest-TestNetworkBasicOps-1622727639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:27:56Z,user_data=None,user_id='dd336dcb24664df58613d4105ce1b004',uuid=f4e61943-e59b-40a1-ab1e-07ba5f131bb3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.393 2 DEBUG nova.network.os_vif_util [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converting VIF {"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.395 2 DEBUG nova.network.os_vif_util [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:48:8e,bridge_name='br-int',has_traffic_filtering=True,id=c550ea63-9098-4f0c-95b9-107f8a0a8a5c,network=Network(066f5196-fbff-458b-ab48-5301dc324450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc550ea63-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.396 2 DEBUG os_vif [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:48:8e,bridge_name='br-int',has_traffic_filtering=True,id=c550ea63-9098-4f0c-95b9-107f8a0a8a5c,network=Network(066f5196-fbff-458b-ab48-5301dc324450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc550ea63-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.400 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc550ea63-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.410 2 INFO os_vif [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:48:8e,bridge_name='br-int',has_traffic_filtering=True,id=c550ea63-9098-4f0c-95b9-107f8a0a8a5c,network=Network(066f5196-fbff-458b-ab48-5301dc324450),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc550ea63-90')
Oct 11 09:28:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32-userdata-shm.mount: Deactivated successfully.
Oct 11 09:28:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7567a7af7802e398bd5207beaa54376df4c7fcd14b370da4835b5b052e2eda6-merged.mount: Deactivated successfully.
Oct 11 09:28:18 compute-0 podman[404990]: 2025-10-11 09:28:18.435886343 +0000 UTC m=+0.116446497 container cleanup 6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 11 09:28:18 compute-0 systemd[1]: libpod-conmon-6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32.scope: Deactivated successfully.
Oct 11 09:28:18 compute-0 podman[405048]: 2025-10-11 09:28:18.508945465 +0000 UTC m=+0.044029983 container remove 6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 09:28:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.521 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe41dfc-f353-4a66-9786-ae972470c132]: (4, ('Sat Oct 11 09:28:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450 (6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32)\n6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32\nSat Oct 11 09:28:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-066f5196-fbff-458b-ab48-5301dc324450 (6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32)\n6c1bef00ea444213d88bfa6af5eec4702f85b880b766550f49146dbec2b30f32\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.522 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6e1df120-72b0-42e0-a112-10fc68ac44b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.523 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap066f5196-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:28:18 compute-0 kernel: tap066f5196-f0: left promiscuous mode
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.544 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[18a3c6b8-d2e3-4c51-baa7-d3fd65c26c86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.566 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7e79c030-350b-4479-b336-92bfb62174a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.567 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[03297a67-3ea3-4ce5-b5c3-0ad734227d64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.586 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[43da8779-8f97-4e63-a8b5-e1e5a7ad9e88]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671981, 'reachable_time': 17746, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405067, 'error': None, 'target': 'ovnmeta-066f5196-fbff-458b-ab48-5301dc324450', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.588 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-066f5196-fbff-458b-ab48-5301dc324450 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:28:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:18.588 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[6c012e65-1aa1-4287-aaf4-82f10ec744f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:18 compute-0 systemd[1]: run-netns-ovnmeta\x2d066f5196\x2dfbff\x2d458b\x2dab48\x2d5301dc324450.mount: Deactivated successfully.
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.830 2 INFO nova.virt.libvirt.driver [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Deleting instance files /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3_del
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.831 2 INFO nova.virt.libvirt.driver [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Deletion of /var/lib/nova/instances/f4e61943-e59b-40a1-ab1e-07ba5f131bb3_del complete
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.878 2 INFO nova.compute.manager [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Took 0.76 seconds to destroy the instance on the hypervisor.
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.879 2 DEBUG oslo.service.loopingcall [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.879 2 DEBUG nova.compute.manager [-] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:28:18 compute-0 nova_compute[260935]: 2025-10-11 09:28:18.880 2 DEBUG nova.network.neutron [-] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:28:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 09:28:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:28:19 compute-0 nova_compute[260935]: 2025-10-11 09:28:19.419 2 DEBUG nova.compute.manager [req-81c06862-b98a-4d89-950a-863a181bd14f req-a932ab58-b6a6-443d-b8c9-802e8b046950 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-vif-unplugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:28:19 compute-0 nova_compute[260935]: 2025-10-11 09:28:19.420 2 DEBUG oslo_concurrency.lockutils [req-81c06862-b98a-4d89-950a-863a181bd14f req-a932ab58-b6a6-443d-b8c9-802e8b046950 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:28:19 compute-0 nova_compute[260935]: 2025-10-11 09:28:19.420 2 DEBUG oslo_concurrency.lockutils [req-81c06862-b98a-4d89-950a-863a181bd14f req-a932ab58-b6a6-443d-b8c9-802e8b046950 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:28:19 compute-0 nova_compute[260935]: 2025-10-11 09:28:19.421 2 DEBUG oslo_concurrency.lockutils [req-81c06862-b98a-4d89-950a-863a181bd14f req-a932ab58-b6a6-443d-b8c9-802e8b046950 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:28:19 compute-0 nova_compute[260935]: 2025-10-11 09:28:19.421 2 DEBUG nova.compute.manager [req-81c06862-b98a-4d89-950a-863a181bd14f req-a932ab58-b6a6-443d-b8c9-802e8b046950 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] No waiting events found dispatching network-vif-unplugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:28:19 compute-0 nova_compute[260935]: 2025-10-11 09:28:19.421 2 DEBUG nova.compute.manager [req-81c06862-b98a-4d89-950a-863a181bd14f req-a932ab58-b6a6-443d-b8c9-802e8b046950 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-vif-unplugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:28:19 compute-0 nova_compute[260935]: 2025-10-11 09:28:19.733 2 DEBUG nova.network.neutron [-] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:28:19 compute-0 nova_compute[260935]: 2025-10-11 09:28:19.755 2 INFO nova.compute.manager [-] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Took 0.88 seconds to deallocate network for instance.
Oct 11 09:28:19 compute-0 nova_compute[260935]: 2025-10-11 09:28:19.804 2 DEBUG nova.network.neutron [req-5a8bbdf3-0fb2-466d-b83f-afc726083eb0 req-25e06eb8-d50c-415c-97a2-ccad79cc77a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updated VIF entry in instance network info cache for port c550ea63-9098-4f0c-95b9-107f8a0a8a5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:28:19 compute-0 nova_compute[260935]: 2025-10-11 09:28:19.804 2 DEBUG nova.network.neutron [req-5a8bbdf3-0fb2-466d-b83f-afc726083eb0 req-25e06eb8-d50c-415c-97a2-ccad79cc77a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updating instance_info_cache with network_info: [{"id": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "address": "fa:16:3e:09:48:8e", "network": {"id": "066f5196-fbff-458b-ab48-5301dc324450", "bridge": "br-int", "label": "tempest-network-smoke--783922482", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bee9c6aad5fe46a2b0fb6caf4d995b72", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc550ea63-90", "ovs_interfaceid": "c550ea63-9098-4f0c-95b9-107f8a0a8a5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:28:19 compute-0 nova_compute[260935]: 2025-10-11 09:28:19.811 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:28:19 compute-0 nova_compute[260935]: 2025-10-11 09:28:19.811 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:28:19 compute-0 nova_compute[260935]: 2025-10-11 09:28:19.827 2 DEBUG oslo_concurrency.lockutils [req-5a8bbdf3-0fb2-466d-b83f-afc726083eb0 req-25e06eb8-d50c-415c-97a2-ccad79cc77a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-f4e61943-e59b-40a1-ab1e-07ba5f131bb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:28:19 compute-0 nova_compute[260935]: 2025-10-11 09:28:19.925 2 DEBUG oslo_concurrency.processutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:28:20 compute-0 ceph-mon[74313]: pgmap v2600: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 09:28:20 compute-0 nova_compute[260935]: 2025-10-11 09:28:20.309 2 DEBUG nova.compute.manager [req-a22c9509-6769-434f-a5ee-44f158f1270f req-65c69765-d366-459d-b8c0-7d1c5a570fe9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-vif-deleted-c550ea63-9098-4f0c-95b9-107f8a0a8a5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:28:20 compute-0 nova_compute[260935]: 2025-10-11 09:28:20.310 2 INFO nova.compute.manager [req-a22c9509-6769-434f-a5ee-44f158f1270f req-65c69765-d366-459d-b8c0-7d1c5a570fe9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Neutron deleted interface c550ea63-9098-4f0c-95b9-107f8a0a8a5c; detaching it from the instance and deleting it from the info cache
Oct 11 09:28:20 compute-0 nova_compute[260935]: 2025-10-11 09:28:20.311 2 DEBUG nova.network.neutron [req-a22c9509-6769-434f-a5ee-44f158f1270f req-65c69765-d366-459d-b8c0-7d1c5a570fe9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:28:20 compute-0 nova_compute[260935]: 2025-10-11 09:28:20.344 2 DEBUG nova.compute.manager [req-a22c9509-6769-434f-a5ee-44f158f1270f req-65c69765-d366-459d-b8c0-7d1c5a570fe9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Detach interface failed, port_id=c550ea63-9098-4f0c-95b9-107f8a0a8a5c, reason: Instance f4e61943-e59b-40a1-ab1e-07ba5f131bb3 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 09:28:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:28:20 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2104140217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:28:20 compute-0 nova_compute[260935]: 2025-10-11 09:28:20.424 2 DEBUG oslo_concurrency.processutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:28:20 compute-0 nova_compute[260935]: 2025-10-11 09:28:20.432 2 DEBUG nova.compute.provider_tree [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:28:20 compute-0 nova_compute[260935]: 2025-10-11 09:28:20.455 2 DEBUG nova.scheduler.client.report [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:28:20 compute-0 nova_compute[260935]: 2025-10-11 09:28:20.485 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:28:20 compute-0 nova_compute[260935]: 2025-10-11 09:28:20.518 2 INFO nova.scheduler.client.report [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Deleted allocations for instance f4e61943-e59b-40a1-ab1e-07ba5f131bb3
Oct 11 09:28:20 compute-0 nova_compute[260935]: 2025-10-11 09:28:20.629 2 DEBUG oslo_concurrency.lockutils [None req-ae608449-82f5-4a9f-bfd3-602b3265c302 dd336dcb24664df58613d4105ce1b004 bee9c6aad5fe46a2b0fb6caf4d995b72 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.515s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:28:20 compute-0 nova_compute[260935]: 2025-10-11 09:28:20.794 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "16108538-ef61-4d43-8a42-c324c334138b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:28:20 compute-0 nova_compute[260935]: 2025-10-11 09:28:20.795 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:28:20 compute-0 nova_compute[260935]: 2025-10-11 09:28:20.828 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:28:20 compute-0 nova_compute[260935]: 2025-10-11 09:28:20.911 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:28:20 compute-0 nova_compute[260935]: 2025-10-11 09:28:20.911 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:28:20 compute-0 nova_compute[260935]: 2025-10-11 09:28:20.922 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:28:20 compute-0 nova_compute[260935]: 2025-10-11 09:28:20.923 2 INFO nova.compute.claims [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:28:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 107 KiB/s wr, 23 op/s
Oct 11 09:28:21 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2104140217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.106 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.510 2 DEBUG nova.compute.manager [req-b4913a3f-2c21-4a92-b257-142621781993 req-f1ae4df3-b210-49e4-ad9e-6b52e294b587 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received event network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.511 2 DEBUG oslo_concurrency.lockutils [req-b4913a3f-2c21-4a92-b257-142621781993 req-f1ae4df3-b210-49e4-ad9e-6b52e294b587 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.511 2 DEBUG oslo_concurrency.lockutils [req-b4913a3f-2c21-4a92-b257-142621781993 req-f1ae4df3-b210-49e4-ad9e-6b52e294b587 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.512 2 DEBUG oslo_concurrency.lockutils [req-b4913a3f-2c21-4a92-b257-142621781993 req-f1ae4df3-b210-49e4-ad9e-6b52e294b587 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "f4e61943-e59b-40a1-ab1e-07ba5f131bb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.512 2 DEBUG nova.compute.manager [req-b4913a3f-2c21-4a92-b257-142621781993 req-f1ae4df3-b210-49e4-ad9e-6b52e294b587 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] No waiting events found dispatching network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.512 2 WARNING nova.compute.manager [req-b4913a3f-2c21-4a92-b257-142621781993 req-f1ae4df3-b210-49e4-ad9e-6b52e294b587 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Received unexpected event network-vif-plugged-c550ea63-9098-4f0c-95b9-107f8a0a8a5c for instance with vm_state deleted and task_state None.
Oct 11 09:28:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:28:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4094774514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.578 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.583 2 DEBUG nova.compute.provider_tree [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.601 2 DEBUG nova.scheduler.client.report [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.633 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.634 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.712 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.712 2 DEBUG nova.network.neutron [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.742 2 INFO nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.767 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.867 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.870 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.871 2 INFO nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Creating image(s)
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.912 2 DEBUG nova.storage.rbd_utils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 16108538-ef61-4d43-8a42-c324c334138b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.949 2 DEBUG nova.storage.rbd_utils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 16108538-ef61-4d43-8a42-c324c334138b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.981 2 DEBUG nova.storage.rbd_utils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 16108538-ef61-4d43-8a42-c324c334138b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:28:21 compute-0 nova_compute[260935]: 2025-10-11 09:28:21.985 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:28:22 compute-0 nova_compute[260935]: 2025-10-11 09:28:22.040 2 DEBUG nova.policy [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:28:22 compute-0 ceph-mon[74313]: pgmap v2601: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 107 KiB/s wr, 23 op/s
Oct 11 09:28:22 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4094774514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:28:22 compute-0 nova_compute[260935]: 2025-10-11 09:28:22.093 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:28:22 compute-0 nova_compute[260935]: 2025-10-11 09:28:22.094 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:28:22 compute-0 nova_compute[260935]: 2025-10-11 09:28:22.095 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:28:22 compute-0 nova_compute[260935]: 2025-10-11 09:28:22.095 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:28:22 compute-0 nova_compute[260935]: 2025-10-11 09:28:22.123 2 DEBUG nova.storage.rbd_utils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 16108538-ef61-4d43-8a42-c324c334138b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:28:22 compute-0 nova_compute[260935]: 2025-10-11 09:28:22.127 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 16108538-ef61-4d43-8a42-c324c334138b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:28:22 compute-0 nova_compute[260935]: 2025-10-11 09:28:22.470 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 16108538-ef61-4d43-8a42-c324c334138b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:28:22 compute-0 nova_compute[260935]: 2025-10-11 09:28:22.546 2 DEBUG nova.storage.rbd_utils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 16108538-ef61-4d43-8a42-c324c334138b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:28:22 compute-0 nova_compute[260935]: 2025-10-11 09:28:22.652 2 DEBUG nova.objects.instance [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 16108538-ef61-4d43-8a42-c324c334138b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:28:22 compute-0 nova_compute[260935]: 2025-10-11 09:28:22.679 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:28:22 compute-0 nova_compute[260935]: 2025-10-11 09:28:22.680 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Ensure instance console log exists: /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:28:22 compute-0 nova_compute[260935]: 2025-10-11 09:28:22.681 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:28:22 compute-0 nova_compute[260935]: 2025-10-11 09:28:22.681 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:28:22 compute-0 nova_compute[260935]: 2025-10-11 09:28:22.681 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:28:22 compute-0 podman[405279]: 2025-10-11 09:28:22.812204165 +0000 UTC m=+0.105273102 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:28:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 336 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 181 KiB/s wr, 65 op/s
Oct 11 09:28:23 compute-0 nova_compute[260935]: 2025-10-11 09:28:23.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:23 compute-0 nova_compute[260935]: 2025-10-11 09:28:23.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:23 compute-0 nova_compute[260935]: 2025-10-11 09:28:23.512 2 DEBUG nova.network.neutron [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Successfully created port: e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:28:24 compute-0 ceph-mon[74313]: pgmap v2602: 321 pgs: 321 active+clean; 336 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 181 KiB/s wr, 65 op/s
Oct 11 09:28:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:28:24 compute-0 nova_compute[260935]: 2025-10-11 09:28:24.318 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:28:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:28:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:28:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:28:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:28:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:28:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:28:24 compute-0 nova_compute[260935]: 2025-10-11 09:28:24.850 2 DEBUG nova.network.neutron [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Successfully updated port: e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:28:24 compute-0 nova_compute[260935]: 2025-10-11 09:28:24.870 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:28:24 compute-0 nova_compute[260935]: 2025-10-11 09:28:24.871 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:28:24 compute-0 nova_compute[260935]: 2025-10-11 09:28:24.871 2 DEBUG nova.network.neutron [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:28:24 compute-0 nova_compute[260935]: 2025-10-11 09:28:24.944 2 DEBUG nova.compute.manager [req-3d5709a5-6a14-4fe9-ad95-cd453cd83a78 req-62c265a3-aa6e-49ea-af1b-f56d06fcf10c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-changed-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:28:24 compute-0 nova_compute[260935]: 2025-10-11 09:28:24.944 2 DEBUG nova.compute.manager [req-3d5709a5-6a14-4fe9-ad95-cd453cd83a78 req-62c265a3-aa6e-49ea-af1b-f56d06fcf10c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Refreshing instance network info cache due to event network-changed-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:28:24 compute-0 nova_compute[260935]: 2025-10-11 09:28:24.945 2 DEBUG oslo_concurrency.lockutils [req-3d5709a5-6a14-4fe9-ad95-cd453cd83a78 req-62c265a3-aa6e-49ea-af1b-f56d06fcf10c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:28:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2603: 321 pgs: 321 active+clean; 336 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 86 KiB/s wr, 42 op/s
Oct 11 09:28:25 compute-0 nova_compute[260935]: 2025-10-11 09:28:25.088 2 DEBUG nova.network.neutron [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:28:25 compute-0 ovn_controller[152945]: 2025-10-11T09:28:25Z|01469|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:28:25 compute-0 ovn_controller[152945]: 2025-10-11T09:28:25Z|01470|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:28:25 compute-0 nova_compute[260935]: 2025-10-11 09:28:25.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:25 compute-0 ovn_controller[152945]: 2025-10-11T09:28:25Z|01471|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:28:25 compute-0 ovn_controller[152945]: 2025-10-11T09:28:25Z|01472|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:28:25 compute-0 nova_compute[260935]: 2025-10-11 09:28:25.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:26 compute-0 ceph-mon[74313]: pgmap v2603: 321 pgs: 321 active+clean; 336 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 86 KiB/s wr, 42 op/s
Oct 11 09:28:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:28:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2096691239' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:28:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:28:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2096691239' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:28:26 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 336 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 86 KiB/s wr, 42 op/s
Oct 11 09:28:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2096691239' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:28:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2096691239' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:28:27 compute-0 ceph-mon[74313]: pgmap v2604: 321 pgs: 321 active+clean; 336 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 86 KiB/s wr, 42 op/s
Oct 11 09:28:27 compute-0 podman[405300]: 2025-10-11 09:28:27.791102812 +0000 UTC m=+0.082820459 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 11 09:28:27 compute-0 podman[405301]: 2025-10-11 09:28:27.838028926 +0000 UTC m=+0.118505835 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.359 2 DEBUG nova.network.neutron [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Updating instance_info_cache with network_info: [{"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.380 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.381 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Instance network_info: |[{"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.381 2 DEBUG oslo_concurrency.lockutils [req-3d5709a5-6a14-4fe9-ad95-cd453cd83a78 req-62c265a3-aa6e-49ea-af1b-f56d06fcf10c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.382 2 DEBUG nova.network.neutron [req-3d5709a5-6a14-4fe9-ad95-cd453cd83a78 req-62c265a3-aa6e-49ea-af1b-f56d06fcf10c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Refreshing network info cache for port e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.388 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Start _get_guest_xml network_info=[{"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.395 2 WARNING nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.400 2 DEBUG nova.virt.libvirt.host [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.402 2 DEBUG nova.virt.libvirt.host [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.415 2 DEBUG nova.virt.libvirt.host [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.416 2 DEBUG nova.virt.libvirt.host [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.417 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.417 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.418 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.419 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.419 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.421 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.421 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.422 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.422 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.422 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.423 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.424 2 DEBUG nova.virt.hardware [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.429 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:28:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:28:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/243282572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.924 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.961 2 DEBUG nova.storage.rbd_utils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 16108538-ef61-4d43-8a42-c324c334138b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:28:28 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/243282572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:28:28 compute-0 nova_compute[260935]: 2025-10-11 09:28:28.967 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:28:28 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct 11 09:28:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:28:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:28:29 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3223387606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.421 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.423 2 DEBUG nova.virt.libvirt.vif [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1335802812',display_name='tempest-TestGettingAddress-server-1335802812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1335802812',id=132,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6hRsTatVKtQ3nuA1r7FSvjUwlf4xF10ggzb3Lhl+N08jgpdPafA8cbnqq6OClpxtss8V3s8tFKTynT8iMbxi96RnSSo3i9TmqSxoUO3xLRHc6Xamp8CuhNQXpWeJ78Zg==',key_name='tempest-TestGettingAddress-951303195',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-7xgmxaip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:28:21Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=16108538-ef61-4d43-8a42-c324c334138b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.424 2 DEBUG nova.network.os_vif_util [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.425 2 DEBUG nova.network.os_vif_util [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:64:0e,bridge_name='br-int',has_traffic_filtering=True,id=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89af28e-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.426 2 DEBUG nova.objects.instance [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 16108538-ef61-4d43-8a42-c324c334138b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.441 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:28:29 compute-0 nova_compute[260935]:   <uuid>16108538-ef61-4d43-8a42-c324c334138b</uuid>
Oct 11 09:28:29 compute-0 nova_compute[260935]:   <name>instance-00000084</name>
Oct 11 09:28:29 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:28:29 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:28:29 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <nova:name>tempest-TestGettingAddress-server-1335802812</nova:name>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:28:28</nova:creationTime>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:28:29 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:28:29 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:28:29 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:28:29 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:28:29 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:28:29 compute-0 nova_compute[260935]:         <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 09:28:29 compute-0 nova_compute[260935]:         <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:28:29 compute-0 nova_compute[260935]:         <nova:port uuid="e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0">
Oct 11 09:28:29 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fee6:640e" ipVersion="6"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fee6:640e" ipVersion="6"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:28:29 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:28:29 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <system>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <entry name="serial">16108538-ef61-4d43-8a42-c324c334138b</entry>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <entry name="uuid">16108538-ef61-4d43-8a42-c324c334138b</entry>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     </system>
Oct 11 09:28:29 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:28:29 compute-0 nova_compute[260935]:   <os>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:   </os>
Oct 11 09:28:29 compute-0 nova_compute[260935]:   <features>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:   </features>
Oct 11 09:28:29 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:28:29 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:28:29 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/16108538-ef61-4d43-8a42-c324c334138b_disk">
Oct 11 09:28:29 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       </source>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:28:29 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/16108538-ef61-4d43-8a42-c324c334138b_disk.config">
Oct 11 09:28:29 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       </source>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:28:29 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:e6:64:0e"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <target dev="tape89af28e-d1"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b/console.log" append="off"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <video>
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     </video>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:28:29 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:28:29 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:28:29 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:28:29 compute-0 nova_compute[260935]: </domain>
Oct 11 09:28:29 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.442 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Preparing to wait for external event network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.442 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "16108538-ef61-4d43-8a42-c324c334138b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.442 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.443 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.443 2 DEBUG nova.virt.libvirt.vif [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1335802812',display_name='tempest-TestGettingAddress-server-1335802812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1335802812',id=132,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6hRsTatVKtQ3nuA1r7FSvjUwlf4xF10ggzb3Lhl+N08jgpdPafA8cbnqq6OClpxtss8V3s8tFKTynT8iMbxi96RnSSo3i9TmqSxoUO3xLRHc6Xamp8CuhNQXpWeJ78Zg==',key_name='tempest-TestGettingAddress-951303195',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-7xgmxaip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:28:21Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=16108538-ef61-4d43-8a42-c324c334138b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.444 2 DEBUG nova.network.os_vif_util [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.444 2 DEBUG nova.network.os_vif_util [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:64:0e,bridge_name='br-int',has_traffic_filtering=True,id=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89af28e-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.445 2 DEBUG os_vif [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:64:0e,bridge_name='br-int',has_traffic_filtering=True,id=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89af28e-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.449 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape89af28e-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.449 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape89af28e-d1, col_values=(('external_ids', {'iface-id': 'e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:64:0e', 'vm-uuid': '16108538-ef61-4d43-8a42-c324c334138b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:28:29 compute-0 NetworkManager[44960]: <info>  [1760174909.4516] manager: (tape89af28e-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/568)
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.457 2 INFO os_vif [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:64:0e,bridge_name='br-int',has_traffic_filtering=True,id=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89af28e-d1')
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.539 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.539 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.539 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:e6:64:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.540 2 INFO nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Using config drive
Oct 11 09:28:29 compute-0 nova_compute[260935]: 2025-10-11 09:28:29.562 2 DEBUG nova.storage.rbd_utils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 16108538-ef61-4d43-8a42-c324c334138b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:28:30 compute-0 ceph-mon[74313]: pgmap v2605: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct 11 09:28:30 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3223387606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:28:30 compute-0 sshd-session[405387]: Invalid user ty from 155.4.244.179 port 1638
Oct 11 09:28:30 compute-0 sshd-session[405387]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:28:30 compute-0 sshd-session[405387]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:28:30 compute-0 nova_compute[260935]: 2025-10-11 09:28:30.508 2 INFO nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Creating config drive at /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b/disk.config
Oct 11 09:28:30 compute-0 nova_compute[260935]: 2025-10-11 09:28:30.519 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2p6wt4bw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:28:30 compute-0 nova_compute[260935]: 2025-10-11 09:28:30.693 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2p6wt4bw" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:28:30 compute-0 nova_compute[260935]: 2025-10-11 09:28:30.725 2 DEBUG nova.storage.rbd_utils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 16108538-ef61-4d43-8a42-c324c334138b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:28:30 compute-0 nova_compute[260935]: 2025-10-11 09:28:30.729 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b/disk.config 16108538-ef61-4d43-8a42-c324c334138b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:28:30 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 11 09:28:31 compute-0 ceph-mon[74313]: pgmap v2606: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 11 09:28:31 compute-0 nova_compute[260935]: 2025-10-11 09:28:31.680 2 DEBUG nova.network.neutron [req-3d5709a5-6a14-4fe9-ad95-cd453cd83a78 req-62c265a3-aa6e-49ea-af1b-f56d06fcf10c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Updated VIF entry in instance network info cache for port e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:28:31 compute-0 nova_compute[260935]: 2025-10-11 09:28:31.682 2 DEBUG nova.network.neutron [req-3d5709a5-6a14-4fe9-ad95-cd453cd83a78 req-62c265a3-aa6e-49ea-af1b-f56d06fcf10c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Updating instance_info_cache with network_info: [{"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": 
{}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:28:31 compute-0 nova_compute[260935]: 2025-10-11 09:28:31.703 2 DEBUG oslo_concurrency.lockutils [req-3d5709a5-6a14-4fe9-ad95-cd453cd83a78 req-62c265a3-aa6e-49ea-af1b-f56d06fcf10c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:28:32 compute-0 nova_compute[260935]: 2025-10-11 09:28:32.094 2 DEBUG oslo_concurrency.processutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b/disk.config 16108538-ef61-4d43-8a42-c324c334138b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:28:32 compute-0 nova_compute[260935]: 2025-10-11 09:28:32.095 2 INFO nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Deleting local config drive /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b/disk.config because it was imported into RBD.
Oct 11 09:28:32 compute-0 kernel: tape89af28e-d1: entered promiscuous mode
Oct 11 09:28:32 compute-0 NetworkManager[44960]: <info>  [1760174912.1855] manager: (tape89af28e-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/569)
Oct 11 09:28:32 compute-0 ovn_controller[152945]: 2025-10-11T09:28:32Z|01473|binding|INFO|Claiming lport e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 for this chassis.
Oct 11 09:28:32 compute-0 ovn_controller[152945]: 2025-10-11T09:28:32Z|01474|binding|INFO|e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0: Claiming fa:16:3e:e6:64:0e 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:640e 2001:db8::f816:3eff:fee6:640e
Oct 11 09:28:32 compute-0 nova_compute[260935]: 2025-10-11 09:28:32.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.206 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:64:0e 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:640e 2001:db8::f816:3eff:fee6:640e'], port_security=['fa:16:3e:e6:64:0e 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:640e 2001:db8::f816:3eff:fee6:640e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8:0:1:f816:3eff:fee6:640e/64 2001:db8::f816:3eff:fee6:640e/64', 'neutron:device_id': '16108538-ef61-4d43-8a42-c324c334138b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ea903998-56a8-4844-9402-6fdca37c3b3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3325c2b-a3cd-4cde-bb9c-e17cc326ff80, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.209 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 in datapath 03e15108-5f8d-4fec-9ad8-133d9551c667 bound to our chassis
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.212 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03e15108-5f8d-4fec-9ad8-133d9551c667
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.233 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[38ab1d46-e600-4524-9f85-67e4db96b181]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.235 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap03e15108-51 in ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.238 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap03e15108-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.239 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e825d5a2-70a8-4a60-9c03-915c21e92508]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.240 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d8bb5180-01a5-4f3f-9918-d5f78973a8a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:32 compute-0 systemd-udevd[405486]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.256 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b8d9b8-dd02-4e5a-b976-e535d6e499f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:32 compute-0 systemd-machined[215705]: New machine qemu-156-instance-00000084.
Oct 11 09:28:32 compute-0 sshd-session[405387]: Failed password for invalid user ty from 155.4.244.179 port 1638 ssh2
Oct 11 09:28:32 compute-0 NetworkManager[44960]: <info>  [1760174912.2739] device (tape89af28e-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:28:32 compute-0 NetworkManager[44960]: <info>  [1760174912.2753] device (tape89af28e-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:28:32 compute-0 systemd[1]: Started Virtual Machine qemu-156-instance-00000084.
Oct 11 09:28:32 compute-0 nova_compute[260935]: 2025-10-11 09:28:32.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.294 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f37c16-2b8c-4f5f-89bf-4a66109af65d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:32 compute-0 ovn_controller[152945]: 2025-10-11T09:28:32Z|01475|binding|INFO|Setting lport e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 ovn-installed in OVS
Oct 11 09:28:32 compute-0 ovn_controller[152945]: 2025-10-11T09:28:32Z|01476|binding|INFO|Setting lport e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 up in Southbound
Oct 11 09:28:32 compute-0 nova_compute[260935]: 2025-10-11 09:28:32.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.330 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[31873169-825c-4a68-9593-a345196a2b2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.336 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[99d7c71a-5395-4a18-842e-4a90893649bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:32 compute-0 systemd-udevd[405489]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:28:32 compute-0 NetworkManager[44960]: <info>  [1760174912.3381] manager: (tap03e15108-50): new Veth device (/org/freedesktop/NetworkManager/Devices/570)
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.373 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf199ed-2493-431b-989f-b23c3f98f624]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.377 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[92b850eb-c342-45cd-9016-6732490679fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:32 compute-0 NetworkManager[44960]: <info>  [1760174912.4197] device (tap03e15108-50): carrier: link connected
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.428 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[73c5a7be-c3d3-4521-9f36-9dc187db57e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.455 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c56690c1-76b1-47d6-a329-666c518e5cf7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03e15108-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:c9:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675712, 'reachable_time': 41440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405517, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.478 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e7368e75-04d8-4959-88c3-2fa640cb6c51]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:c982'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675712, 'tstamp': 675712}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405518, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.507 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a17f7b5b-d6d4-4b4d-a422-0f0d8ecde5a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03e15108-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:c9:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675712, 'reachable_time': 41440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 405519, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.558 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec45104-bea7-4441-a081-10cc9c82aede]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:32 compute-0 nova_compute[260935]: 2025-10-11 09:28:32.592 2 DEBUG nova.compute.manager [req-1ca29e82-6cea-4620-a66e-548d9054c6db req-84466a12-00db-44cd-8c1d-dcda98b501fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:28:32 compute-0 nova_compute[260935]: 2025-10-11 09:28:32.592 2 DEBUG oslo_concurrency.lockutils [req-1ca29e82-6cea-4620-a66e-548d9054c6db req-84466a12-00db-44cd-8c1d-dcda98b501fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "16108538-ef61-4d43-8a42-c324c334138b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:28:32 compute-0 nova_compute[260935]: 2025-10-11 09:28:32.592 2 DEBUG oslo_concurrency.lockutils [req-1ca29e82-6cea-4620-a66e-548d9054c6db req-84466a12-00db-44cd-8c1d-dcda98b501fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:28:32 compute-0 nova_compute[260935]: 2025-10-11 09:28:32.592 2 DEBUG oslo_concurrency.lockutils [req-1ca29e82-6cea-4620-a66e-548d9054c6db req-84466a12-00db-44cd-8c1d-dcda98b501fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:28:32 compute-0 nova_compute[260935]: 2025-10-11 09:28:32.593 2 DEBUG nova.compute.manager [req-1ca29e82-6cea-4620-a66e-548d9054c6db req-84466a12-00db-44cd-8c1d-dcda98b501fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Processing event network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.659 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1f368477-d2f4-4a9b-a9f6-d251160cf08d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.661 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03e15108-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.661 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.662 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03e15108-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:28:32 compute-0 nova_compute[260935]: 2025-10-11 09:28:32.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:32 compute-0 NetworkManager[44960]: <info>  [1760174912.6656] manager: (tap03e15108-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/571)
Oct 11 09:28:32 compute-0 kernel: tap03e15108-50: entered promiscuous mode
Oct 11 09:28:32 compute-0 nova_compute[260935]: 2025-10-11 09:28:32.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.669 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03e15108-50, col_values=(('external_ids', {'iface-id': '1945eb8f-f9e5-437a-9fc9-017b522ae777'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:28:32 compute-0 nova_compute[260935]: 2025-10-11 09:28:32.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:32 compute-0 ovn_controller[152945]: 2025-10-11T09:28:32Z|01477|binding|INFO|Releasing lport 1945eb8f-f9e5-437a-9fc9-017b522ae777 from this chassis (sb_readonly=0)
Oct 11 09:28:32 compute-0 nova_compute[260935]: 2025-10-11 09:28:32.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:32 compute-0 nova_compute[260935]: 2025-10-11 09:28:32.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.705 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/03e15108-5f8d-4fec-9ad8-133d9551c667.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/03e15108-5f8d-4fec-9ad8-133d9551c667.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.706 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5366e3-ebcb-4a5e-98ce-033fbb2065c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.708 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-03e15108-5f8d-4fec-9ad8-133d9551c667
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/03e15108-5f8d-4fec-9ad8-133d9551c667.pid.haproxy
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 03e15108-5f8d-4fec-9ad8-133d9551c667
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:28:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:32.709 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'env', 'PROCESS_TAG=haproxy-03e15108-5f8d-4fec-9ad8-133d9551c667', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/03e15108-5f8d-4fec-9ad8-133d9551c667.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:28:32 compute-0 sshd-session[405387]: Received disconnect from 155.4.244.179 port 1638:11: Bye Bye [preauth]
Oct 11 09:28:32 compute-0 sshd-session[405387]: Disconnected from invalid user ty 155.4.244.179 port 1638 [preauth]
Oct 11 09:28:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Oct 11 09:28:33 compute-0 podman[405593]: 2025-10-11 09:28:33.157306259 +0000 UTC m=+0.070179291 container create d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:33 compute-0 podman[405593]: 2025-10-11 09:28:33.120185862 +0000 UTC m=+0.033058934 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:28:33 compute-0 systemd[1]: Started libpod-conmon-d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4.scope.
Oct 11 09:28:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed2907c6f4d557002a67a93ce41ed6b3e05074fb0e893ba2f5111564f8191c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:28:33 compute-0 podman[405593]: 2025-10-11 09:28:33.286251558 +0000 UTC m=+0.199124610 container init d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 09:28:33 compute-0 podman[405593]: 2025-10-11 09:28:33.293189234 +0000 UTC m=+0.206062246 container start d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:28:33 compute-0 neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667[405609]: [NOTICE]   (405613) : New worker (405615) forked
Oct 11 09:28:33 compute-0 neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667[405609]: [NOTICE]   (405613) : Loading success.
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.354 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174898.3532627, f4e61943-e59b-40a1-ab1e-07ba5f131bb3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.355 2 INFO nova.compute.manager [-] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] VM Stopped (Lifecycle Event)
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.376 2 DEBUG nova.compute.manager [None req-4590d3ed-687a-49fe-bf7b-2efd5db3dc43 - - - - - -] [instance: f4e61943-e59b-40a1-ab1e-07ba5f131bb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.575 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174913.5747213, 16108538-ef61-4d43-8a42-c324c334138b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.575 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] VM Started (Lifecycle Event)
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.578 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.581 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.585 2 INFO nova.virt.libvirt.driver [-] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Instance spawned successfully.
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.585 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.597 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.600 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.610 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.611 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.611 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.612 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.612 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.613 2 DEBUG nova.virt.libvirt.driver [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.620 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.620 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174913.5775435, 16108538-ef61-4d43-8a42-c324c334138b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.621 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] VM Paused (Lifecycle Event)
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.656 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.659 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174913.5807302, 16108538-ef61-4d43-8a42-c324c334138b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.659 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] VM Resumed (Lifecycle Event)
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.681 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.684 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.691 2 INFO nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Took 11.82 seconds to spawn the instance on the hypervisor.
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.692 2 DEBUG nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.702 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.764 2 INFO nova.compute.manager [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Took 12.88 seconds to build instance.
Oct 11 09:28:33 compute-0 nova_compute[260935]: 2025-10-11 09:28:33.787 2 DEBUG oslo_concurrency.lockutils [None req-179541eb-455c-4e90-85a0-75a2d9e6caba 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:28:34 compute-0 ceph-mon[74313]: pgmap v2607: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Oct 11 09:28:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:28:34 compute-0 nova_compute[260935]: 2025-10-11 09:28:34.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:34 compute-0 nova_compute[260935]: 2025-10-11 09:28:34.754 2 DEBUG nova.compute.manager [req-2f5b4059-ab58-4993-9465-5d88c24f6022 req-d3db7354-8173-410c-a065-9c7394c602e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:28:34 compute-0 nova_compute[260935]: 2025-10-11 09:28:34.755 2 DEBUG oslo_concurrency.lockutils [req-2f5b4059-ab58-4993-9465-5d88c24f6022 req-d3db7354-8173-410c-a065-9c7394c602e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "16108538-ef61-4d43-8a42-c324c334138b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:28:34 compute-0 nova_compute[260935]: 2025-10-11 09:28:34.755 2 DEBUG oslo_concurrency.lockutils [req-2f5b4059-ab58-4993-9465-5d88c24f6022 req-d3db7354-8173-410c-a065-9c7394c602e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:28:34 compute-0 nova_compute[260935]: 2025-10-11 09:28:34.756 2 DEBUG oslo_concurrency.lockutils [req-2f5b4059-ab58-4993-9465-5d88c24f6022 req-d3db7354-8173-410c-a065-9c7394c602e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:28:34 compute-0 nova_compute[260935]: 2025-10-11 09:28:34.756 2 DEBUG nova.compute.manager [req-2f5b4059-ab58-4993-9465-5d88c24f6022 req-d3db7354-8173-410c-a065-9c7394c602e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] No waiting events found dispatching network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:28:34 compute-0 nova_compute[260935]: 2025-10-11 09:28:34.757 2 WARNING nova.compute.manager [req-2f5b4059-ab58-4993-9465-5d88c24f6022 req-d3db7354-8173-410c-a065-9c7394c602e6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received unexpected event network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 for instance with vm_state active and task_state None.
Oct 11 09:28:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.7 MiB/s wr, 18 op/s
Oct 11 09:28:36 compute-0 ceph-mon[74313]: pgmap v2608: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.7 MiB/s wr, 18 op/s
Oct 11 09:28:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.7 MiB/s wr, 18 op/s
Oct 11 09:28:37 compute-0 nova_compute[260935]: 2025-10-11 09:28:37.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:28:38 compute-0 ceph-mon[74313]: pgmap v2609: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.7 MiB/s wr, 18 op/s
Oct 11 09:28:38 compute-0 nova_compute[260935]: 2025-10-11 09:28:38.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:38 compute-0 nova_compute[260935]: 2025-10-11 09:28:38.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:28:38 compute-0 nova_compute[260935]: 2025-10-11 09:28:38.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:28:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 86 op/s
Oct 11 09:28:39 compute-0 NetworkManager[44960]: <info>  [1760174919.1751] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/572)
Oct 11 09:28:39 compute-0 NetworkManager[44960]: <info>  [1760174919.1771] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/573)
Oct 11 09:28:39 compute-0 nova_compute[260935]: 2025-10-11 09:28:39.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:39 compute-0 ovn_controller[152945]: 2025-10-11T09:28:39Z|01478|binding|INFO|Releasing lport 1945eb8f-f9e5-437a-9fc9-017b522ae777 from this chassis (sb_readonly=0)
Oct 11 09:28:39 compute-0 ovn_controller[152945]: 2025-10-11T09:28:39Z|01479|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:28:39 compute-0 ovn_controller[152945]: 2025-10-11T09:28:39Z|01480|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:28:39 compute-0 nova_compute[260935]: 2025-10-11 09:28:39.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:39 compute-0 nova_compute[260935]: 2025-10-11 09:28:39.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:28:39 compute-0 nova_compute[260935]: 2025-10-11 09:28:39.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:39 compute-0 nova_compute[260935]: 2025-10-11 09:28:39.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:28:39 compute-0 nova_compute[260935]: 2025-10-11 09:28:39.896 2 DEBUG nova.compute.manager [req-89ec08e8-66a5-4514-ad41-e343852f4ee0 req-0abee7b7-7cbb-4c2b-a4c0-36cc7ae20cd2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-changed-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:28:39 compute-0 nova_compute[260935]: 2025-10-11 09:28:39.897 2 DEBUG nova.compute.manager [req-89ec08e8-66a5-4514-ad41-e343852f4ee0 req-0abee7b7-7cbb-4c2b-a4c0-36cc7ae20cd2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Refreshing instance network info cache due to event network-changed-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:28:39 compute-0 nova_compute[260935]: 2025-10-11 09:28:39.897 2 DEBUG oslo_concurrency.lockutils [req-89ec08e8-66a5-4514-ad41-e343852f4ee0 req-0abee7b7-7cbb-4c2b-a4c0-36cc7ae20cd2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:28:39 compute-0 nova_compute[260935]: 2025-10-11 09:28:39.898 2 DEBUG oslo_concurrency.lockutils [req-89ec08e8-66a5-4514-ad41-e343852f4ee0 req-0abee7b7-7cbb-4c2b-a4c0-36cc7ae20cd2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:28:39 compute-0 nova_compute[260935]: 2025-10-11 09:28:39.898 2 DEBUG nova.network.neutron [req-89ec08e8-66a5-4514-ad41-e343852f4ee0 req-0abee7b7-7cbb-4c2b-a4c0-36cc7ae20cd2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Refreshing network info cache for port e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:28:40 compute-0 ceph-mon[74313]: pgmap v2610: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 86 op/s
Oct 11 09:28:40 compute-0 nova_compute[260935]: 2025-10-11 09:28:40.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:28:42 compute-0 ceph-mon[74313]: pgmap v2611: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:28:42 compute-0 nova_compute[260935]: 2025-10-11 09:28:42.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:28:42 compute-0 nova_compute[260935]: 2025-10-11 09:28:42.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:28:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:28:43 compute-0 nova_compute[260935]: 2025-10-11 09:28:43.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:43 compute-0 sshd-session[405625]: Invalid user admin from 165.232.82.252 port 49914
Oct 11 09:28:43 compute-0 sshd-session[405625]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:28:43 compute-0 sshd-session[405625]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:28:43 compute-0 nova_compute[260935]: 2025-10-11 09:28:43.427 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:28:43 compute-0 nova_compute[260935]: 2025-10-11 09:28:43.428 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:28:43 compute-0 nova_compute[260935]: 2025-10-11 09:28:43.428 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:28:43 compute-0 nova_compute[260935]: 2025-10-11 09:28:43.520 2 DEBUG nova.network.neutron [req-89ec08e8-66a5-4514-ad41-e343852f4ee0 req-0abee7b7-7cbb-4c2b-a4c0-36cc7ae20cd2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Updated VIF entry in instance network info cache for port e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:28:43 compute-0 nova_compute[260935]: 2025-10-11 09:28:43.521 2 DEBUG nova.network.neutron [req-89ec08e8-66a5-4514-ad41-e343852f4ee0 req-0abee7b7-7cbb-4c2b-a4c0-36cc7ae20cd2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Updating instance_info_cache with network_info: [{"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:28:43 compute-0 nova_compute[260935]: 2025-10-11 09:28:43.547 2 DEBUG oslo_concurrency.lockutils [req-89ec08e8-66a5-4514-ad41-e343852f4ee0 req-0abee7b7-7cbb-4c2b-a4c0-36cc7ae20cd2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:28:44 compute-0 ceph-mon[74313]: pgmap v2612: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:28:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:28:44 compute-0 nova_compute[260935]: 2025-10-11 09:28:44.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 09:28:45 compute-0 ceph-mon[74313]: pgmap v2613: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 09:28:45 compute-0 sshd-session[405625]: Failed password for invalid user admin from 165.232.82.252 port 49914 ssh2
Oct 11 09:28:45 compute-0 ovn_controller[152945]: 2025-10-11T09:28:45Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:64:0e 10.100.0.7
Oct 11 09:28:45 compute-0 ovn_controller[152945]: 2025-10-11T09:28:45Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:64:0e 10.100.0.7
Oct 11 09:28:46 compute-0 sshd-session[405625]: Connection closed by invalid user admin 165.232.82.252 port 49914 [preauth]
Oct 11 09:28:46 compute-0 nova_compute[260935]: 2025-10-11 09:28:46.699 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:28:46 compute-0 nova_compute[260935]: 2025-10-11 09:28:46.729 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:28:46 compute-0 nova_compute[260935]: 2025-10-11 09:28:46.730 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:28:46 compute-0 nova_compute[260935]: 2025-10-11 09:28:46.731 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:28:46 compute-0 nova_compute[260935]: 2025-10-11 09:28:46.731 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:28:46 compute-0 nova_compute[260935]: 2025-10-11 09:28:46.732 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:28:46 compute-0 nova_compute[260935]: 2025-10-11 09:28:46.732 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:28:46 compute-0 nova_compute[260935]: 2025-10-11 09:28:46.754 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:28:46 compute-0 nova_compute[260935]: 2025-10-11 09:28:46.755 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:28:46 compute-0 nova_compute[260935]: 2025-10-11 09:28:46.755 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:28:46 compute-0 nova_compute[260935]: 2025-10-11 09:28:46.756 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:28:46 compute-0 nova_compute[260935]: 2025-10-11 09:28:46.756 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:28:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 09:28:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:28:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2059551540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.313 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.438 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.439 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.439 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.446 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.447 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.452 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.452 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.456 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.456 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.695 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.697 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2723MB free_disk=59.80977249145508GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.697 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.697 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.944 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.945 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.945 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.946 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 16108538-ef61-4d43-8a42-c324c334138b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.946 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:28:47 compute-0 nova_compute[260935]: 2025-10-11 09:28:47.948 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:28:48 compute-0 ceph-mon[74313]: pgmap v2614: 321 pgs: 321 active+clean; 374 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 09:28:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2059551540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:28:48 compute-0 nova_compute[260935]: 2025-10-11 09:28:48.200 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:28:48 compute-0 nova_compute[260935]: 2025-10-11 09:28:48.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:28:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3513290236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:28:48 compute-0 nova_compute[260935]: 2025-10-11 09:28:48.705 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:28:48 compute-0 nova_compute[260935]: 2025-10-11 09:28:48.714 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:28:48 compute-0 nova_compute[260935]: 2025-10-11 09:28:48.742 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:28:48 compute-0 nova_compute[260935]: 2025-10-11 09:28:48.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:48 compute-0 podman[405671]: 2025-10-11 09:28:48.796545199 +0000 UTC m=+0.096334500 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 11 09:28:48 compute-0 nova_compute[260935]: 2025-10-11 09:28:48.797 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:28:48 compute-0 nova_compute[260935]: 2025-10-11 09:28:48.798 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:28:48 compute-0 nova_compute[260935]: 2025-10-11 09:28:48.799 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:28:48 compute-0 nova_compute[260935]: 2025-10-11 09:28:48.799 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 09:28:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 11 09:28:49 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3513290236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:28:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:28:49 compute-0 nova_compute[260935]: 2025-10-11 09:28:49.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:50 compute-0 ceph-mon[74313]: pgmap v2615: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 11 09:28:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 09:28:52 compute-0 ceph-mon[74313]: pgmap v2616: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 09:28:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 09:28:53 compute-0 nova_compute[260935]: 2025-10-11 09:28:53.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:53 compute-0 nova_compute[260935]: 2025-10-11 09:28:53.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:53 compute-0 podman[405693]: 2025-10-11 09:28:53.765672339 +0000 UTC m=+0.076885461 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 11 09:28:54 compute-0 ceph-mon[74313]: pgmap v2617: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 09:28:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:28:54 compute-0 nova_compute[260935]: 2025-10-11 09:28:54.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:28:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:28:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:28:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:28:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:28:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:28:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:28:54
Oct 11 09:28:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:28:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:28:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'vms', 'images', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct 11 09:28:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:28:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 09:28:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:55.061 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:28:55 compute-0 nova_compute[260935]: 2025-10-11 09:28:55.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:28:55.063 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:28:55 compute-0 ceph-mon[74313]: pgmap v2618: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 09:28:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:28:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:28:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:28:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:28:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:28:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:28:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:28:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:28:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:28:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:28:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 09:28:57 compute-0 ceph-mon[74313]: pgmap v2619: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 09:28:58 compute-0 nova_compute[260935]: 2025-10-11 09:28:58.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:28:58 compute-0 podman[405713]: 2025-10-11 09:28:58.793607137 +0000 UTC m=+0.098691366 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 09:28:58 compute-0 podman[405714]: 2025-10-11 09:28:58.806493011 +0000 UTC m=+0.099067287 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 09:28:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 09:28:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:28:59 compute-0 nova_compute[260935]: 2025-10-11 09:28:59.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:00 compute-0 ceph-mon[74313]: pgmap v2620: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 09:29:00 compute-0 nova_compute[260935]: 2025-10-11 09:29:00.864 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "80209953-3c4c-4932-a9a3-8166c70e1029" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:00 compute-0 nova_compute[260935]: 2025-10-11 09:29:00.865 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:00 compute-0 sudo[405755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:29:00 compute-0 sudo[405755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:00 compute-0 nova_compute[260935]: 2025-10-11 09:29:00.887 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:29:00 compute-0 sudo[405755]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:00 compute-0 nova_compute[260935]: 2025-10-11 09:29:00.976 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:00 compute-0 nova_compute[260935]: 2025-10-11 09:29:00.976 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:00 compute-0 sudo[405780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:29:00 compute-0 sudo[405780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:00 compute-0 sudo[405780]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:00 compute-0 nova_compute[260935]: 2025-10-11 09:29:00.988 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:29:00 compute-0 nova_compute[260935]: 2025-10-11 09:29:00.988 2 INFO nova.compute.claims [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:29:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 11 09:29:01 compute-0 sudo[405805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:29:01 compute-0 sudo[405805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:01 compute-0 sudo[405805]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:01 compute-0 sudo[405830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:29:01 compute-0 sudo[405830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:01 compute-0 nova_compute[260935]: 2025-10-11 09:29:01.218 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:01 compute-0 ceph-mon[74313]: pgmap v2621: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 11 09:29:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:29:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1740083688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:29:01 compute-0 nova_compute[260935]: 2025-10-11 09:29:01.670 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:01 compute-0 nova_compute[260935]: 2025-10-11 09:29:01.678 2 DEBUG nova.compute.provider_tree [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:29:01 compute-0 sudo[405830]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:01 compute-0 nova_compute[260935]: 2025-10-11 09:29:01.721 2 DEBUG nova.scheduler.client.report [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:29:01 compute-0 nova_compute[260935]: 2025-10-11 09:29:01.747 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:01 compute-0 nova_compute[260935]: 2025-10-11 09:29:01.749 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:29:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:29:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:29:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:29:01 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:29:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:29:01 compute-0 nova_compute[260935]: 2025-10-11 09:29:01.852 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:29:01 compute-0 nova_compute[260935]: 2025-10-11 09:29:01.853 2 DEBUG nova.network.neutron [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:29:01 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:29:01 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 0688c8e1-e0dd-4900-86c7-1f3b69287f73 does not exist
Oct 11 09:29:01 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev fe7894b5-5f05-4809-bed6-58806bebd224 does not exist
Oct 11 09:29:01 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev f55d21b2-cf70-4439-9129-5d932d6c31a8 does not exist
Oct 11 09:29:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:29:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:29:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:29:01 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:29:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:29:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:29:01 compute-0 nova_compute[260935]: 2025-10-11 09:29:01.953 2 INFO nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:29:01 compute-0 sudo[405909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:29:01 compute-0 sudo[405909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:01 compute-0 sudo[405909]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:02 compute-0 nova_compute[260935]: 2025-10-11 09:29:02.078 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:29:02 compute-0 sudo[405934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:29:02 compute-0 sudo[405934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:02 compute-0 sudo[405934]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:02 compute-0 sudo[405959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:29:02 compute-0 sudo[405959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:02 compute-0 sudo[405959]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1740083688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:29:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:29:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:29:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:29:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:29:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:29:02 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:29:02 compute-0 sudo[405984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:29:02 compute-0 sudo[405984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:02 compute-0 nova_compute[260935]: 2025-10-11 09:29:02.477 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:29:02 compute-0 nova_compute[260935]: 2025-10-11 09:29:02.479 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:29:02 compute-0 nova_compute[260935]: 2025-10-11 09:29:02.480 2 INFO nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Creating image(s)
Oct 11 09:29:02 compute-0 nova_compute[260935]: 2025-10-11 09:29:02.530 2 DEBUG nova.storage.rbd_utils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 80209953-3c4c-4932-a9a3-8166c70e1029_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:29:02 compute-0 nova_compute[260935]: 2025-10-11 09:29:02.578 2 DEBUG nova.storage.rbd_utils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 80209953-3c4c-4932-a9a3-8166c70e1029_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:29:02 compute-0 nova_compute[260935]: 2025-10-11 09:29:02.622 2 DEBUG nova.storage.rbd_utils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 80209953-3c4c-4932-a9a3-8166c70e1029_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:29:02 compute-0 nova_compute[260935]: 2025-10-11 09:29:02.628 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:02 compute-0 nova_compute[260935]: 2025-10-11 09:29:02.689 2 DEBUG nova.policy [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:29:02 compute-0 nova_compute[260935]: 2025-10-11 09:29:02.757 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:02 compute-0 nova_compute[260935]: 2025-10-11 09:29:02.759 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:02 compute-0 nova_compute[260935]: 2025-10-11 09:29:02.760 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:02 compute-0 nova_compute[260935]: 2025-10-11 09:29:02.760 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:02 compute-0 nova_compute[260935]: 2025-10-11 09:29:02.789 2 DEBUG nova.storage.rbd_utils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 80209953-3c4c-4932-a9a3-8166c70e1029_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:29:02 compute-0 nova_compute[260935]: 2025-10-11 09:29:02.794 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 80209953-3c4c-4932-a9a3-8166c70e1029_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:02 compute-0 podman[406104]: 2025-10-11 09:29:02.741054874 +0000 UTC m=+0.029707969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:29:02 compute-0 podman[406104]: 2025-10-11 09:29:02.864517279 +0000 UTC m=+0.153170394 container create b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 09:29:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2622: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Oct 11 09:29:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:03.065 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:03 compute-0 systemd[1]: Started libpod-conmon-b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192.scope.
Oct 11 09:29:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:29:03 compute-0 podman[406104]: 2025-10-11 09:29:03.288568285 +0000 UTC m=+0.577221450 container init b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 09:29:03 compute-0 podman[406104]: 2025-10-11 09:29:03.301709276 +0000 UTC m=+0.590362391 container start b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:29:03 compute-0 peaceful_haibt[406157]: 167 167
Oct 11 09:29:03 compute-0 systemd[1]: libpod-b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192.scope: Deactivated successfully.
Oct 11 09:29:03 compute-0 ceph-mon[74313]: pgmap v2622: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Oct 11 09:29:03 compute-0 nova_compute[260935]: 2025-10-11 09:29:03.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:03 compute-0 podman[406104]: 2025-10-11 09:29:03.392136558 +0000 UTC m=+0.680789673 container attach b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:29:03 compute-0 podman[406104]: 2025-10-11 09:29:03.394516955 +0000 UTC m=+0.683170060 container died b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:29:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-151d91323702af5144199a0167d7c52ebcd33ce0f3f9f0f7755043a10ab32d2a-merged.mount: Deactivated successfully.
Oct 11 09:29:03 compute-0 nova_compute[260935]: 2025-10-11 09:29:03.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:29:04 compute-0 podman[406104]: 2025-10-11 09:29:04.155968994 +0000 UTC m=+1.444622099 container remove b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haibt, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 09:29:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:29:04 compute-0 systemd[1]: libpod-conmon-b242c98078d5f08fef08d072b9553419a15dd9c4068c332f7971b3687fd1a192.scope: Deactivated successfully.
Oct 11 09:29:04 compute-0 nova_compute[260935]: 2025-10-11 09:29:04.318 2 DEBUG nova.network.neutron [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Successfully created port: b6c25504-002e-4bf7-8b34-c9e43fcc55c2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:29:04 compute-0 nova_compute[260935]: 2025-10-11 09:29:04.398 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 80209953-3c4c-4932-a9a3-8166c70e1029_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:04 compute-0 nova_compute[260935]: 2025-10-11 09:29:04.527 2 DEBUG nova.storage.rbd_utils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 80209953-3c4c-4932-a9a3-8166c70e1029_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:29:04 compute-0 podman[406185]: 2025-10-11 09:29:04.53107575 +0000 UTC m=+0.116407066 container create 24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 09:29:04 compute-0 podman[406185]: 2025-10-11 09:29:04.461051084 +0000 UTC m=+0.046382460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:29:04 compute-0 nova_compute[260935]: 2025-10-11 09:29:04.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:04 compute-0 systemd[1]: Started libpod-conmon-24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2.scope.
Oct 11 09:29:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0d699d28f7b3ea07cd05ec6840fdfdc7dfc5cee55128e68607606945e321a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0d699d28f7b3ea07cd05ec6840fdfdc7dfc5cee55128e68607606945e321a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0d699d28f7b3ea07cd05ec6840fdfdc7dfc5cee55128e68607606945e321a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0d699d28f7b3ea07cd05ec6840fdfdc7dfc5cee55128e68607606945e321a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0d699d28f7b3ea07cd05ec6840fdfdc7dfc5cee55128e68607606945e321a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:29:04 compute-0 podman[406185]: 2025-10-11 09:29:04.76989859 +0000 UTC m=+0.355229956 container init 24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:29:04 compute-0 podman[406185]: 2025-10-11 09:29:04.787161627 +0000 UTC m=+0.372492943 container start 24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:29:04 compute-0 podman[406185]: 2025-10-11 09:29:04.828073012 +0000 UTC m=+0.413404378 container attach 24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:29:04 compute-0 nova_compute[260935]: 2025-10-11 09:29:04.973 2 DEBUG nova.objects.instance [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 80209953-3c4c-4932-a9a3-8166c70e1029 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:29:05 compute-0 nova_compute[260935]: 2025-10-11 09:29:05.066 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:29:05 compute-0 nova_compute[260935]: 2025-10-11 09:29:05.067 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Ensure instance console log exists: /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:29:05 compute-0 nova_compute[260935]: 2025-10-11 09:29:05.068 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:05 compute-0 nova_compute[260935]: 2025-10-11 09:29:05.068 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:05 compute-0 nova_compute[260935]: 2025-10-11 09:29:05.069 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:05 compute-0 ceph-mon[74313]: pgmap v2623: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033867139171062056 of space, bias 1.0, pg target 1.0160141751318617 quantized to 32 (current 32)
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:29:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:29:05 compute-0 nostalgic_bassi[406257]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:29:05 compute-0 nostalgic_bassi[406257]: --> relative data size: 1.0
Oct 11 09:29:05 compute-0 nostalgic_bassi[406257]: --> All data devices are unavailable
Oct 11 09:29:06 compute-0 systemd[1]: libpod-24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2.scope: Deactivated successfully.
Oct 11 09:29:06 compute-0 podman[406185]: 2025-10-11 09:29:06.043598724 +0000 UTC m=+1.628930010 container died 24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:29:06 compute-0 systemd[1]: libpod-24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2.scope: Consumed 1.213s CPU time.
Oct 11 09:29:06 compute-0 nova_compute[260935]: 2025-10-11 09:29:06.161 2 DEBUG nova.network.neutron [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Successfully updated port: b6c25504-002e-4bf7-8b34-c9e43fcc55c2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:29:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d0d699d28f7b3ea07cd05ec6840fdfdc7dfc5cee55128e68607606945e321a1-merged.mount: Deactivated successfully.
Oct 11 09:29:06 compute-0 nova_compute[260935]: 2025-10-11 09:29:06.322 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:29:06 compute-0 nova_compute[260935]: 2025-10-11 09:29:06.323 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:29:06 compute-0 nova_compute[260935]: 2025-10-11 09:29:06.323 2 DEBUG nova.network.neutron [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:29:06 compute-0 podman[406185]: 2025-10-11 09:29:06.324856321 +0000 UTC m=+1.910187627 container remove 24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:29:06 compute-0 nova_compute[260935]: 2025-10-11 09:29:06.331 2 DEBUG nova.compute.manager [req-3f9fa8b2-02bf-4350-b41a-e4c601f61a38 req-4e8a8726-ad2f-4b4e-aaa1-1923a7a35c6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-changed-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:29:06 compute-0 nova_compute[260935]: 2025-10-11 09:29:06.331 2 DEBUG nova.compute.manager [req-3f9fa8b2-02bf-4350-b41a-e4c601f61a38 req-4e8a8726-ad2f-4b4e-aaa1-1923a7a35c6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Refreshing instance network info cache due to event network-changed-b6c25504-002e-4bf7-8b34-c9e43fcc55c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:29:06 compute-0 nova_compute[260935]: 2025-10-11 09:29:06.332 2 DEBUG oslo_concurrency.lockutils [req-3f9fa8b2-02bf-4350-b41a-e4c601f61a38 req-4e8a8726-ad2f-4b4e-aaa1-1923a7a35c6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:29:06 compute-0 systemd[1]: libpod-conmon-24c2e165636373c1de28ba33e21950660d26c12e1780f152aa3c8ab140f35cd2.scope: Deactivated successfully.
Oct 11 09:29:06 compute-0 sudo[405984]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:06 compute-0 sudo[406318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:29:06 compute-0 sudo[406318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:06 compute-0 sudo[406318]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:06 compute-0 sudo[406343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:29:06 compute-0 sudo[406343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:06 compute-0 sudo[406343]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:06 compute-0 sudo[406368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:29:06 compute-0 sudo[406368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:06 compute-0 sudo[406368]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:06 compute-0 nova_compute[260935]: 2025-10-11 09:29:06.792 2 DEBUG nova.network.neutron [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:29:06 compute-0 sudo[406393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:29:06 compute-0 sudo[406393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:29:07 compute-0 podman[406457]: 2025-10-11 09:29:07.306457833 +0000 UTC m=+0.104271103 container create 8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:29:07 compute-0 podman[406457]: 2025-10-11 09:29:07.231189599 +0000 UTC m=+0.029002879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:29:07 compute-0 nova_compute[260935]: 2025-10-11 09:29:07.333 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:07 compute-0 nova_compute[260935]: 2025-10-11 09:29:07.334 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:07 compute-0 systemd[1]: Started libpod-conmon-8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68.scope.
Oct 11 09:29:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:29:07 compute-0 podman[406457]: 2025-10-11 09:29:07.479403834 +0000 UTC m=+0.277217124 container init 8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 09:29:07 compute-0 podman[406457]: 2025-10-11 09:29:07.487980286 +0000 UTC m=+0.285793566 container start 8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 09:29:07 compute-0 eloquent_bhabha[406473]: 167 167
Oct 11 09:29:07 compute-0 systemd[1]: libpod-8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68.scope: Deactivated successfully.
Oct 11 09:29:07 compute-0 conmon[406473]: conmon 8c9be1eff8b42edf9bed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68.scope/container/memory.events
Oct 11 09:29:07 compute-0 nova_compute[260935]: 2025-10-11 09:29:07.500 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:29:07 compute-0 podman[406457]: 2025-10-11 09:29:07.547315371 +0000 UTC m=+0.345128631 container attach 8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:29:07 compute-0 podman[406457]: 2025-10-11 09:29:07.548032321 +0000 UTC m=+0.345845621 container died 8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:29:07 compute-0 nova_compute[260935]: 2025-10-11 09:29:07.733 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:29:07 compute-0 nova_compute[260935]: 2025-10-11 09:29:07.734 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 09:29:07 compute-0 nova_compute[260935]: 2025-10-11 09:29:07.776 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:07 compute-0 nova_compute[260935]: 2025-10-11 09:29:07.777 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:07 compute-0 nova_compute[260935]: 2025-10-11 09:29:07.787 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:29:07 compute-0 nova_compute[260935]: 2025-10-11 09:29:07.788 2 INFO nova.compute.claims [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:29:07 compute-0 nova_compute[260935]: 2025-10-11 09:29:07.793 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 09:29:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b392575ae53c68caa7abdbd62e254410273fb1e3726433d8630b6d921932161-merged.mount: Deactivated successfully.
Oct 11 09:29:08 compute-0 podman[406478]: 2025-10-11 09:29:08.106125691 +0000 UTC m=+0.593170281 container remove 8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.112 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:08 compute-0 systemd[1]: libpod-conmon-8c9be1eff8b42edf9bed8049796db75abfc861aae7c2195157bdf0e681432e68.scope: Deactivated successfully.
Oct 11 09:29:08 compute-0 ceph-mon[74313]: pgmap v2624: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:08 compute-0 podman[406521]: 2025-10-11 09:29:08.357923107 +0000 UTC m=+0.026758436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:29:08 compute-0 podman[406521]: 2025-10-11 09:29:08.46930684 +0000 UTC m=+0.138142189 container create e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:29:08 compute-0 systemd[1]: Started libpod-conmon-e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f.scope.
Oct 11 09:29:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:29:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1533797081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:29:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446082782a84f1b64494d38278f63b78a58d983bda88e8fb665d1b4bc13c8e03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446082782a84f1b64494d38278f63b78a58d983bda88e8fb665d1b4bc13c8e03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446082782a84f1b64494d38278f63b78a58d983bda88e8fb665d1b4bc13c8e03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446082782a84f1b64494d38278f63b78a58d983bda88e8fb665d1b4bc13c8e03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.604 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.617 2 DEBUG nova.compute.provider_tree [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.637 2 DEBUG nova.scheduler.client.report [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.666 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.667 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:29:08 compute-0 podman[406521]: 2025-10-11 09:29:08.678632297 +0000 UTC m=+0.347467706 container init e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:29:08 compute-0 podman[406521]: 2025-10-11 09:29:08.686439728 +0000 UTC m=+0.355275087 container start e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.724 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.725 2 DEBUG nova.network.neutron [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.748 2 INFO nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:29:08 compute-0 podman[406521]: 2025-10-11 09:29:08.761189517 +0000 UTC m=+0.430024856 container attach e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.768 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.846 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.848 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.848 2 INFO nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Creating image(s)
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.873 2 DEBUG nova.storage.rbd_utils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] rbd image af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.900 2 DEBUG nova.storage.rbd_utils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] rbd image af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.927 2 DEBUG nova.storage.rbd_utils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] rbd image af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.932 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.983 2 DEBUG nova.policy [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '95ad8a94007e4ab88fcad372c4695cf5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9e69b3710dc94c919d8bd75d2a540c10', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:29:08 compute-0 nova_compute[260935]: 2025-10-11 09:29:08.986 2 DEBUG nova.network.neutron [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Updating instance_info_cache with network_info: [{"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.013 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.013 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Instance network_info: |[{"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.013 2 DEBUG oslo_concurrency.lockutils [req-3f9fa8b2-02bf-4350-b41a-e4c601f61a38 req-4e8a8726-ad2f-4b4e-aaa1-1923a7a35c6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.014 2 DEBUG nova.network.neutron [req-3f9fa8b2-02bf-4350-b41a-e4c601f61a38 req-4e8a8726-ad2f-4b4e-aaa1-1923a7a35c6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Refreshing network info cache for port b6c25504-002e-4bf7-8b34-c9e43fcc55c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:29:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.017 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Start _get_guest_xml network_info=[{"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.021 2 WARNING nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.026 2 DEBUG nova.virt.libvirt.host [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.027 2 DEBUG nova.virt.libvirt.host [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.030 2 DEBUG nova.virt.libvirt.host [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.030 2 DEBUG nova.virt.libvirt.host [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.030 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.031 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.031 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.031 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.031 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.031 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.031 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.032 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.032 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.032 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.032 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.032 2 DEBUG nova.virt.hardware [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.035 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.074 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.075 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.076 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.076 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.101 2 DEBUG nova.storage.rbd_utils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] rbd image af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.104 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1533797081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:29:09 compute-0 ceph-mon[74313]: pgmap v2625: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:29:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]: {
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:     "0": [
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:         {
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "devices": [
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "/dev/loop3"
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             ],
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "lv_name": "ceph_lv0",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "lv_size": "21470642176",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "name": "ceph_lv0",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "tags": {
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.cluster_name": "ceph",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.crush_device_class": "",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.encrypted": "0",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.osd_id": "0",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.type": "block",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.vdo": "0"
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             },
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "type": "block",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "vg_name": "ceph_vg0"
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:         }
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:     ],
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:     "1": [
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:         {
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "devices": [
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "/dev/loop4"
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             ],
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "lv_name": "ceph_lv1",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "lv_size": "21470642176",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "name": "ceph_lv1",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "tags": {
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.cluster_name": "ceph",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.crush_device_class": "",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.encrypted": "0",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.osd_id": "1",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.type": "block",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.vdo": "0"
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             },
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "type": "block",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "vg_name": "ceph_vg1"
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:         }
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:     ],
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:     "2": [
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:         {
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "devices": [
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "/dev/loop5"
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             ],
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "lv_name": "ceph_lv2",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "lv_size": "21470642176",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "name": "ceph_lv2",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "tags": {
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.cluster_name": "ceph",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.crush_device_class": "",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.encrypted": "0",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.osd_id": "2",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.type": "block",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:                 "ceph.vdo": "0"
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             },
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "type": "block",
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:             "vg_name": "ceph_vg2"
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:         }
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]:     ]
Oct 11 09:29:09 compute-0 festive_brahmagupta[406539]: }
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.442 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:09 compute-0 systemd[1]: libpod-e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f.scope: Deactivated successfully.
Oct 11 09:29:09 compute-0 podman[406521]: 2025-10-11 09:29:09.472990794 +0000 UTC m=+1.141826113 container died e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:29:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:29:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3842193835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:29:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-446082782a84f1b64494d38278f63b78a58d983bda88e8fb665d1b4bc13c8e03-merged.mount: Deactivated successfully.
Oct 11 09:29:09 compute-0 podman[406521]: 2025-10-11 09:29:09.528761518 +0000 UTC m=+1.197596837 container remove e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.537 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:09 compute-0 systemd[1]: libpod-conmon-e3a2d4dace19dfd664e42ad7c65e728ffd41f182933a8e1b4f4ef10b8f61d59f.scope: Deactivated successfully.
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.555 2 DEBUG nova.storage.rbd_utils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 80209953-3c4c-4932-a9a3-8166c70e1029_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.558 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:09 compute-0 sudo[406393]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.617 2 DEBUG nova.storage.rbd_utils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] resizing rbd image af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:29:09 compute-0 sudo[406734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:29:09 compute-0 sudo[406734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:09 compute-0 sudo[406734]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:09 compute-0 sudo[406777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:29:09 compute-0 sudo[406777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:09 compute-0 sudo[406777]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.725 2 DEBUG nova.objects.instance [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lazy-loading 'migration_context' on Instance uuid af8a1ab7-7512-4de4-8493-cfe85095fbc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.742 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.744 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Ensure instance console log exists: /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.744 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.745 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.745 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:09 compute-0 sudo[406839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:29:09 compute-0 sudo[406839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:09 compute-0 sudo[406839]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:09 compute-0 nova_compute[260935]: 2025-10-11 09:29:09.770 2 DEBUG nova.network.neutron [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Successfully created port: c6f854bc-831f-4bb1-ad9f-e1aa03343d25 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:29:09 compute-0 sudo[406864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:29:09 compute-0 sudo[406864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:29:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1958131527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.002 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.003 2 DEBUG nova.virt.libvirt.vif [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:28:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-373220295',display_name='tempest-TestGettingAddress-server-373220295',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-373220295',id=133,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6hRsTatVKtQ3nuA1r7FSvjUwlf4xF10ggzb3Lhl+N08jgpdPafA8cbnqq6OClpxtss8V3s8tFKTynT8iMbxi96RnSSo3i9TmqSxoUO3xLRHc6Xamp8CuhNQXpWeJ78Zg==',key_name='tempest-TestGettingAddress-951303195',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-0s96u5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:29:02Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=80209953-3c4c-4932-a9a3-8166c70e1029,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.003 2 DEBUG nova.network.os_vif_util [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.004 2 DEBUG nova.network.os_vif_util [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:b7:77,bridge_name='br-int',has_traffic_filtering=True,id=b6c25504-002e-4bf7-8b34-c9e43fcc55c2,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c25504-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.005 2 DEBUG nova.objects.instance [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 80209953-3c4c-4932-a9a3-8166c70e1029 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:29:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3842193835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:29:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1958131527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.165 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:29:10 compute-0 nova_compute[260935]:   <uuid>80209953-3c4c-4932-a9a3-8166c70e1029</uuid>
Oct 11 09:29:10 compute-0 nova_compute[260935]:   <name>instance-00000085</name>
Oct 11 09:29:10 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:29:10 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:29:10 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <nova:name>tempest-TestGettingAddress-server-373220295</nova:name>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:29:09</nova:creationTime>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:29:10 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:29:10 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:29:10 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:29:10 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:29:10 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:29:10 compute-0 nova_compute[260935]:         <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 09:29:10 compute-0 nova_compute[260935]:         <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:29:10 compute-0 nova_compute[260935]:         <nova:port uuid="b6c25504-002e-4bf7-8b34-c9e43fcc55c2">
Oct 11 09:29:10 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feba:b777" ipVersion="6"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feba:b777" ipVersion="6"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:29:10 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:29:10 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <system>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <entry name="serial">80209953-3c4c-4932-a9a3-8166c70e1029</entry>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <entry name="uuid">80209953-3c4c-4932-a9a3-8166c70e1029</entry>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     </system>
Oct 11 09:29:10 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:29:10 compute-0 nova_compute[260935]:   <os>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:   </os>
Oct 11 09:29:10 compute-0 nova_compute[260935]:   <features>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:   </features>
Oct 11 09:29:10 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:29:10 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:29:10 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/80209953-3c4c-4932-a9a3-8166c70e1029_disk">
Oct 11 09:29:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       </source>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:29:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/80209953-3c4c-4932-a9a3-8166c70e1029_disk.config">
Oct 11 09:29:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       </source>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:29:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:ba:b7:77"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <target dev="tapb6c25504-00"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029/console.log" append="off"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <video>
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     </video>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:29:10 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:29:10 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:29:10 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:29:10 compute-0 nova_compute[260935]: </domain>
Oct 11 09:29:10 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.167 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Preparing to wait for external event network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.168 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.168 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.169 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.170 2 DEBUG nova.virt.libvirt.vif [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:28:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-373220295',display_name='tempest-TestGettingAddress-server-373220295',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-373220295',id=133,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6hRsTatVKtQ3nuA1r7FSvjUwlf4xF10ggzb3Lhl+N08jgpdPafA8cbnqq6OClpxtss8V3s8tFKTynT8iMbxi96RnSSo3i9TmqSxoUO3xLRHc6Xamp8CuhNQXpWeJ78Zg==',key_name='tempest-TestGettingAddress-951303195',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-0s96u5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:29:02Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=80209953-3c4c-4932-a9a3-8166c70e1029,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.171 2 DEBUG nova.network.os_vif_util [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.173 2 DEBUG nova.network.os_vif_util [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:b7:77,bridge_name='br-int',has_traffic_filtering=True,id=b6c25504-002e-4bf7-8b34-c9e43fcc55c2,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c25504-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.174 2 DEBUG os_vif [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:b7:77,bridge_name='br-int',has_traffic_filtering=True,id=b6c25504-002e-4bf7-8b34-c9e43fcc55c2,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c25504-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.175 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.181 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6c25504-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.182 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb6c25504-00, col_values=(('external_ids', {'iface-id': 'b6c25504-002e-4bf7-8b34-c9e43fcc55c2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:b7:77', 'vm-uuid': '80209953-3c4c-4932-a9a3-8166c70e1029'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:10 compute-0 NetworkManager[44960]: <info>  [1760174950.1859] manager: (tapb6c25504-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/574)
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.198 2 INFO os_vif [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:b7:77,bridge_name='br-int',has_traffic_filtering=True,id=b6c25504-002e-4bf7-8b34-c9e43fcc55c2,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c25504-00')
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.250 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.251 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.251 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:ba:b7:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.252 2 INFO nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Using config drive
Oct 11 09:29:10 compute-0 podman[406930]: 2025-10-11 09:29:10.259113949 +0000 UTC m=+0.069502593 container create 4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.300 2 DEBUG nova.storage.rbd_utils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 80209953-3c4c-4932-a9a3-8166c70e1029_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:29:10 compute-0 systemd[1]: Started libpod-conmon-4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e.scope.
Oct 11 09:29:10 compute-0 podman[406930]: 2025-10-11 09:29:10.226671343 +0000 UTC m=+0.037060037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:29:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:29:10 compute-0 podman[406930]: 2025-10-11 09:29:10.374066023 +0000 UTC m=+0.184454717 container init 4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 09:29:10 compute-0 podman[406930]: 2025-10-11 09:29:10.387464041 +0000 UTC m=+0.197852675 container start 4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:29:10 compute-0 podman[406930]: 2025-10-11 09:29:10.39416535 +0000 UTC m=+0.204553984 container attach 4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 09:29:10 compute-0 laughing_pascal[406966]: 167 167
Oct 11 09:29:10 compute-0 systemd[1]: libpod-4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e.scope: Deactivated successfully.
Oct 11 09:29:10 compute-0 podman[406930]: 2025-10-11 09:29:10.397128074 +0000 UTC m=+0.207516768 container died 4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 09:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-dce4e2bc3ff7641886fe7fc748755d9900c4fbd2a425540535c01f599ee1b638-merged.mount: Deactivated successfully.
Oct 11 09:29:10 compute-0 podman[406930]: 2025-10-11 09:29:10.466014288 +0000 UTC m=+0.276402902 container remove 4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 09:29:10 compute-0 systemd[1]: libpod-conmon-4d4a75a835a59d53f4b5a60dcc7e720ce077a6603e9554e738f1e32fc247654e.scope: Deactivated successfully.
Oct 11 09:29:10 compute-0 podman[406990]: 2025-10-11 09:29:10.733912018 +0000 UTC m=+0.077429546 container create 4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_panini, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 09:29:10 compute-0 podman[406990]: 2025-10-11 09:29:10.702523592 +0000 UTC m=+0.046041180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:29:10 compute-0 systemd[1]: Started libpod-conmon-4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a.scope.
Oct 11 09:29:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1469e3fd795504d32d64dd3fe6639706e270344fa77cbf6cd4b4c3aa8551699c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1469e3fd795504d32d64dd3fe6639706e270344fa77cbf6cd4b4c3aa8551699c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1469e3fd795504d32d64dd3fe6639706e270344fa77cbf6cd4b4c3aa8551699c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1469e3fd795504d32d64dd3fe6639706e270344fa77cbf6cd4b4c3aa8551699c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:29:10 compute-0 podman[406990]: 2025-10-11 09:29:10.860592383 +0000 UTC m=+0.204109971 container init 4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_panini, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 09:29:10 compute-0 podman[406990]: 2025-10-11 09:29:10.875663938 +0000 UTC m=+0.219181446 container start 4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:29:10 compute-0 podman[406990]: 2025-10-11 09:29:10.879513407 +0000 UTC m=+0.223030945 container attach 4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_panini, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.937 2 INFO nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Creating config drive at /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029/disk.config
Oct 11 09:29:10 compute-0 nova_compute[260935]: 2025-10-11 09:29:10.945 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp04n0sd7x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.099 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp04n0sd7x" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.124 2 DEBUG nova.storage.rbd_utils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 80209953-3c4c-4932-a9a3-8166c70e1029_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.128 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029/disk.config 80209953-3c4c-4932-a9a3-8166c70e1029_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:11 compute-0 ceph-mon[74313]: pgmap v2626: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.330 2 DEBUG oslo_concurrency.processutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029/disk.config 80209953-3c4c-4932-a9a3-8166c70e1029_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.332 2 INFO nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Deleting local config drive /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029/disk.config because it was imported into RBD.
Oct 11 09:29:11 compute-0 kernel: tapb6c25504-00: entered promiscuous mode
Oct 11 09:29:11 compute-0 NetworkManager[44960]: <info>  [1760174951.3879] manager: (tapb6c25504-00): new Tun device (/org/freedesktop/NetworkManager/Devices/575)
Oct 11 09:29:11 compute-0 ovn_controller[152945]: 2025-10-11T09:29:11Z|01481|binding|INFO|Claiming lport b6c25504-002e-4bf7-8b34-c9e43fcc55c2 for this chassis.
Oct 11 09:29:11 compute-0 ovn_controller[152945]: 2025-10-11T09:29:11Z|01482|binding|INFO|b6c25504-002e-4bf7-8b34-c9e43fcc55c2: Claiming fa:16:3e:ba:b7:77 10.100.0.6 2001:db8:0:1:f816:3eff:feba:b777 2001:db8::f816:3eff:feba:b777
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.408 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:b7:77 10.100.0.6 2001:db8:0:1:f816:3eff:feba:b777 2001:db8::f816:3eff:feba:b777'], port_security=['fa:16:3e:ba:b7:77 10.100.0.6 2001:db8:0:1:f816:3eff:feba:b777 2001:db8::f816:3eff:feba:b777'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:feba:b777/64 2001:db8::f816:3eff:feba:b777/64', 'neutron:device_id': '80209953-3c4c-4932-a9a3-8166c70e1029', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ea903998-56a8-4844-9402-6fdca37c3b3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3325c2b-a3cd-4cde-bb9c-e17cc326ff80, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b6c25504-002e-4bf7-8b34-c9e43fcc55c2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:29:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.409 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b6c25504-002e-4bf7-8b34-c9e43fcc55c2 in datapath 03e15108-5f8d-4fec-9ad8-133d9551c667 bound to our chassis
Oct 11 09:29:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.412 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03e15108-5f8d-4fec-9ad8-133d9551c667
Oct 11 09:29:11 compute-0 ovn_controller[152945]: 2025-10-11T09:29:11Z|01483|binding|INFO|Setting lport b6c25504-002e-4bf7-8b34-c9e43fcc55c2 ovn-installed in OVS
Oct 11 09:29:11 compute-0 ovn_controller[152945]: 2025-10-11T09:29:11Z|01484|binding|INFO|Setting lport b6c25504-002e-4bf7-8b34-c9e43fcc55c2 up in Southbound
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.432 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[05f3b06d-c9aa-4773-8aab-5db25484927e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:11 compute-0 systemd-udevd[407066]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:29:11 compute-0 systemd-machined[215705]: New machine qemu-157-instance-00000085.
Oct 11 09:29:11 compute-0 systemd[1]: Started Virtual Machine qemu-157-instance-00000085.
Oct 11 09:29:11 compute-0 NetworkManager[44960]: <info>  [1760174951.4602] device (tapb6c25504-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:29:11 compute-0 NetworkManager[44960]: <info>  [1760174951.4611] device (tapb6c25504-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:29:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.468 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e72670-1977-45ae-b5a2-71b20197b327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.472 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4d110c-2cbe-4a6e-8ec3-d79eaf103b1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.506 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9655f1d2-8018-41c6-9dbf-ee8103748673]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.528 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f5270274-333a-4b40-8330-1c1fcd7e0536]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03e15108-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:c9:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 26, 'tx_packets': 5, 'rx_bytes': 2300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 26, 'tx_packets': 5, 'rx_bytes': 2300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675712, 'reachable_time': 41440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 24, 'inoctets': 1880, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 24, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1880, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 24, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407078, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.554 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7be5ba4c-3cd0-4e1f-be82-a09e08ed5e1d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03e15108-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675730, 'tstamp': 675730}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407080, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap03e15108-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675735, 'tstamp': 675735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407080, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.555 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03e15108-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.559 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03e15108-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.559 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:29:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.559 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03e15108-50, col_values=(('external_ids', {'iface-id': '1945eb8f-f9e5-437a-9fc9-017b522ae777'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:11.560 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.700 2 DEBUG nova.compute.manager [req-7d72c822-ee59-42ca-b18e-ff629807e912 req-a107fa71-ea33-4ded-aaad-497b1cd6f4cb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.701 2 DEBUG oslo_concurrency.lockutils [req-7d72c822-ee59-42ca-b18e-ff629807e912 req-a107fa71-ea33-4ded-aaad-497b1cd6f4cb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.701 2 DEBUG oslo_concurrency.lockutils [req-7d72c822-ee59-42ca-b18e-ff629807e912 req-a107fa71-ea33-4ded-aaad-497b1cd6f4cb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.702 2 DEBUG oslo_concurrency.lockutils [req-7d72c822-ee59-42ca-b18e-ff629807e912 req-a107fa71-ea33-4ded-aaad-497b1cd6f4cb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.702 2 DEBUG nova.compute.manager [req-7d72c822-ee59-42ca-b18e-ff629807e912 req-a107fa71-ea33-4ded-aaad-497b1cd6f4cb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Processing event network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.707 2 DEBUG nova.network.neutron [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Successfully updated port: c6f854bc-831f-4bb1-ad9f-e1aa03343d25 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.728 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.729 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquired lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.729 2 DEBUG nova.network.neutron [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.800 2 DEBUG nova.compute.manager [req-6b2267af-46e3-4880-904e-c307501415d5 req-af5809e7-ac01-40fe-a16e-1cdacb57f309 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received event network-changed-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.800 2 DEBUG nova.compute.manager [req-6b2267af-46e3-4880-904e-c307501415d5 req-af5809e7-ac01-40fe-a16e-1cdacb57f309 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Refreshing instance network info cache due to event network-changed-c6f854bc-831f-4bb1-ad9f-e1aa03343d25. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.801 2 DEBUG oslo_concurrency.lockutils [req-6b2267af-46e3-4880-904e-c307501415d5 req-af5809e7-ac01-40fe-a16e-1cdacb57f309 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.826 2 DEBUG nova.network.neutron [req-3f9fa8b2-02bf-4350-b41a-e4c601f61a38 req-4e8a8726-ad2f-4b4e-aaa1-1923a7a35c6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Updated VIF entry in instance network info cache for port b6c25504-002e-4bf7-8b34-c9e43fcc55c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.827 2 DEBUG nova.network.neutron [req-3f9fa8b2-02bf-4350-b41a-e4c601f61a38 req-4e8a8726-ad2f-4b4e-aaa1-1923a7a35c6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Updating instance_info_cache with network_info: [{"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": 
{}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.853 2 DEBUG oslo_concurrency.lockutils [req-3f9fa8b2-02bf-4350-b41a-e4c601f61a38 req-4e8a8726-ad2f-4b4e-aaa1-1923a7a35c6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:29:11 compute-0 stupefied_panini[407007]: {
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:         "osd_id": 2,
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:         "type": "bluestore"
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:     },
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:         "osd_id": 0,
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:         "type": "bluestore"
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:     },
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:         "osd_id": 1,
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:         "type": "bluestore"
Oct 11 09:29:11 compute-0 stupefied_panini[407007]:     }
Oct 11 09:29:11 compute-0 stupefied_panini[407007]: }
Oct 11 09:29:11 compute-0 systemd[1]: libpod-4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a.scope: Deactivated successfully.
Oct 11 09:29:11 compute-0 nova_compute[260935]: 2025-10-11 09:29:11.894 2 DEBUG nova.network.neutron [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:29:11 compute-0 podman[407146]: 2025-10-11 09:29:11.934060507 +0000 UTC m=+0.024103271 container died 4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_panini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 09:29:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1469e3fd795504d32d64dd3fe6639706e270344fa77cbf6cd4b4c3aa8551699c-merged.mount: Deactivated successfully.
Oct 11 09:29:11 compute-0 podman[407146]: 2025-10-11 09:29:11.991025645 +0000 UTC m=+0.081068389 container remove 4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_panini, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 09:29:11 compute-0 systemd[1]: libpod-conmon-4b33f2af51897c3144209c02edcc94fa6de2374d3b2044c408f45b92e9bf9b8a.scope: Deactivated successfully.
Oct 11 09:29:12 compute-0 sudo[406864]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:29:12 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:29:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:29:12 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:29:12 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 11a8cb53-aa10-4850-adc4-451f0b8c7010 does not exist
Oct 11 09:29:12 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 11334dce-3557-462c-b0bf-a702de73eba3 does not exist
Oct 11 09:29:12 compute-0 sudo[407165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:29:12 compute-0 sudo[407165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:12 compute-0 sudo[407165]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:12 compute-0 sudo[407190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:29:12 compute-0 sudo[407190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:29:12 compute-0 sudo[407190]: pam_unix(sudo:session): session closed for user root
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.359 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.360 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174952.3590016, 80209953-3c4c-4932-a9a3-8166c70e1029 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.361 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] VM Started (Lifecycle Event)
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.364 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.367 2 INFO nova.virt.libvirt.driver [-] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Instance spawned successfully.
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.367 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.384 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.390 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.394 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.395 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.395 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.396 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.396 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.397 2 DEBUG nova.virt.libvirt.driver [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.441 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.442 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174952.3602972, 80209953-3c4c-4932-a9a3-8166c70e1029 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.442 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] VM Paused (Lifecycle Event)
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.497 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.501 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174952.3634884, 80209953-3c4c-4932-a9a3-8166c70e1029 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.501 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] VM Resumed (Lifecycle Event)
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.512 2 INFO nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Took 10.03 seconds to spawn the instance on the hypervisor.
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.512 2 DEBUG nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.524 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.526 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.574 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.609 2 INFO nova.compute.manager [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Took 11.67 seconds to build instance.
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.630 2 DEBUG oslo_concurrency.lockutils [None req-7c75f59e-5cab-46fe-a2aa-d36fa0ad902c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.766 2 DEBUG nova.network.neutron [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Updating instance_info_cache with network_info: [{"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.803 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Releasing lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.804 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Instance network_info: |[{"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.805 2 DEBUG oslo_concurrency.lockutils [req-6b2267af-46e3-4880-904e-c307501415d5 req-af5809e7-ac01-40fe-a16e-1cdacb57f309 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.806 2 DEBUG nova.network.neutron [req-6b2267af-46e3-4880-904e-c307501415d5 req-af5809e7-ac01-40fe-a16e-1cdacb57f309 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Refreshing network info cache for port c6f854bc-831f-4bb1-ad9f-e1aa03343d25 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.811 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Start _get_guest_xml network_info=[{"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.820 2 WARNING nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.828 2 DEBUG nova.virt.libvirt.host [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.830 2 DEBUG nova.virt.libvirt.host [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.834 2 DEBUG nova.virt.libvirt.host [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.837 2 DEBUG nova.virt.libvirt.host [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.837 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.838 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.839 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.840 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.840 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.840 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.841 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.841 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.842 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.842 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.843 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.843 2 DEBUG nova.virt.hardware [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:29:12 compute-0 nova_compute[260935]: 2025-10-11 09:29:12.848 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2627: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 11 09:29:13 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:29:13 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:29:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:29:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1509367093' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.355 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.375 2 DEBUG nova.storage.rbd_utils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] rbd image af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.379 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.790 2 DEBUG nova.compute.manager [req-cfe95003-f550-4d8f-ba9b-bfef0326f536 req-51b20e6a-c0e9-48f3-8aec-8ddb2a840c7d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.791 2 DEBUG oslo_concurrency.lockutils [req-cfe95003-f550-4d8f-ba9b-bfef0326f536 req-51b20e6a-c0e9-48f3-8aec-8ddb2a840c7d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.791 2 DEBUG oslo_concurrency.lockutils [req-cfe95003-f550-4d8f-ba9b-bfef0326f536 req-51b20e6a-c0e9-48f3-8aec-8ddb2a840c7d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.792 2 DEBUG oslo_concurrency.lockutils [req-cfe95003-f550-4d8f-ba9b-bfef0326f536 req-51b20e6a-c0e9-48f3-8aec-8ddb2a840c7d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.792 2 DEBUG nova.compute.manager [req-cfe95003-f550-4d8f-ba9b-bfef0326f536 req-51b20e6a-c0e9-48f3-8aec-8ddb2a840c7d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] No waiting events found dispatching network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.792 2 WARNING nova.compute.manager [req-cfe95003-f550-4d8f-ba9b-bfef0326f536 req-51b20e6a-c0e9-48f3-8aec-8ddb2a840c7d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received unexpected event network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 for instance with vm_state active and task_state None.
Oct 11 09:29:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:29:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3137987385' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.813 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.815 2 DEBUG nova.virt.libvirt.vif [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:29:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1930148258',display_name='tempest-TestServerBasicOps-server-1930148258',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1930148258',id=134,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH102y8/0euXWtDeJlu4Y5Hi5/M8lShv6g1BNE2c8J5VmPE9Sc1i2z1jjNuXYbG9viqCuCV8FoDB59HS7lsdBA/LnJF8uoB7aDjGRRY+ZC3tUP+E3D+ZjN02qpM1ehWg2g==',key_name='tempest-TestServerBasicOps-820613880',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e69b3710dc94c919d8bd75d2a540c10',ramdisk_id='',reservation_id='r-dglj62pg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-2065645952',owner_user_name='tempest-TestServerBasicOps-2065645952-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:29:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='95ad8a94007e4ab88fcad372c4695cf5',uuid=af8a1ab7-7512-4de4-8493-cfe85095fbc5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.816 2 DEBUG nova.network.os_vif_util [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Converting VIF {"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.817 2 DEBUG nova.network.os_vif_util [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:d3:83,bridge_name='br-int',has_traffic_filtering=True,id=c6f854bc-831f-4bb1-ad9f-e1aa03343d25,network=Network(079fe911-cb70-4ab3-8112-a68496105754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f854bc-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.819 2 DEBUG nova.objects.instance [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lazy-loading 'pci_devices' on Instance uuid af8a1ab7-7512-4de4-8493-cfe85095fbc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.846 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:29:13 compute-0 nova_compute[260935]:   <uuid>af8a1ab7-7512-4de4-8493-cfe85095fbc5</uuid>
Oct 11 09:29:13 compute-0 nova_compute[260935]:   <name>instance-00000086</name>
Oct 11 09:29:13 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:29:13 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:29:13 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <nova:name>tempest-TestServerBasicOps-server-1930148258</nova:name>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:29:12</nova:creationTime>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:29:13 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:29:13 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:29:13 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:29:13 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:29:13 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:29:13 compute-0 nova_compute[260935]:         <nova:user uuid="95ad8a94007e4ab88fcad372c4695cf5">tempest-TestServerBasicOps-2065645952-project-member</nova:user>
Oct 11 09:29:13 compute-0 nova_compute[260935]:         <nova:project uuid="9e69b3710dc94c919d8bd75d2a540c10">tempest-TestServerBasicOps-2065645952</nova:project>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:29:13 compute-0 nova_compute[260935]:         <nova:port uuid="c6f854bc-831f-4bb1-ad9f-e1aa03343d25">
Oct 11 09:29:13 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:29:13 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:29:13 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <system>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <entry name="serial">af8a1ab7-7512-4de4-8493-cfe85095fbc5</entry>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <entry name="uuid">af8a1ab7-7512-4de4-8493-cfe85095fbc5</entry>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     </system>
Oct 11 09:29:13 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:29:13 compute-0 nova_compute[260935]:   <os>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:   </os>
Oct 11 09:29:13 compute-0 nova_compute[260935]:   <features>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:   </features>
Oct 11 09:29:13 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:29:13 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:29:13 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk">
Oct 11 09:29:13 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       </source>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:29:13 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk.config">
Oct 11 09:29:13 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       </source>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:29:13 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:6b:d3:83"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <target dev="tapc6f854bc-83"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5/console.log" append="off"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <video>
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     </video>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:29:13 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:29:13 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:29:13 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:29:13 compute-0 nova_compute[260935]: </domain>
Oct 11 09:29:13 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.847 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Preparing to wait for external event network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.847 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.848 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.848 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.849 2 DEBUG nova.virt.libvirt.vif [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:29:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1930148258',display_name='tempest-TestServerBasicOps-server-1930148258',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1930148258',id=134,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH102y8/0euXWtDeJlu4Y5Hi5/M8lShv6g1BNE2c8J5VmPE9Sc1i2z1jjNuXYbG9viqCuCV8FoDB59HS7lsdBA/LnJF8uoB7aDjGRRY+ZC3tUP+E3D+ZjN02qpM1ehWg2g==',key_name='tempest-TestServerBasicOps-820613880',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e69b3710dc94c919d8bd75d2a540c10',ramdisk_id='',reservation_id='r-dglj62pg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-2065645952',owner_user_name='tempest-TestServerBasicOps-2065645952-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:29:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='95ad8a94007e4ab88fcad372c4695cf5',uuid=af8a1ab7-7512-4de4-8493-cfe85095fbc5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.849 2 DEBUG nova.network.os_vif_util [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Converting VIF {"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.850 2 DEBUG nova.network.os_vif_util [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:d3:83,bridge_name='br-int',has_traffic_filtering=True,id=c6f854bc-831f-4bb1-ad9f-e1aa03343d25,network=Network(079fe911-cb70-4ab3-8112-a68496105754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f854bc-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.851 2 DEBUG os_vif [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:d3:83,bridge_name='br-int',has_traffic_filtering=True,id=c6f854bc-831f-4bb1-ad9f-e1aa03343d25,network=Network(079fe911-cb70-4ab3-8112-a68496105754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f854bc-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.852 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.853 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.857 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6f854bc-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.857 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc6f854bc-83, col_values=(('external_ids', {'iface-id': 'c6f854bc-831f-4bb1-ad9f-e1aa03343d25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:d3:83', 'vm-uuid': 'af8a1ab7-7512-4de4-8493-cfe85095fbc5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:13 compute-0 NetworkManager[44960]: <info>  [1760174953.8601] manager: (tapc6f854bc-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/576)
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.870 2 INFO os_vif [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:d3:83,bridge_name='br-int',has_traffic_filtering=True,id=c6f854bc-831f-4bb1-ad9f-e1aa03343d25,network=Network(079fe911-cb70-4ab3-8112-a68496105754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f854bc-83')
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.944 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.944 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.945 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] No VIF found with MAC fa:16:3e:6b:d3:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.945 2 INFO nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Using config drive
Oct 11 09:29:13 compute-0 nova_compute[260935]: 2025-10-11 09:29:13.967 2 DEBUG nova.storage.rbd_utils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] rbd image af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:29:14 compute-0 ceph-mon[74313]: pgmap v2627: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 11 09:29:14 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1509367093' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:29:14 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3137987385' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:29:14 compute-0 nova_compute[260935]: 2025-10-11 09:29:14.192 2 DEBUG nova.network.neutron [req-6b2267af-46e3-4880-904e-c307501415d5 req-af5809e7-ac01-40fe-a16e-1cdacb57f309 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Updated VIF entry in instance network info cache for port c6f854bc-831f-4bb1-ad9f-e1aa03343d25. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:29:14 compute-0 nova_compute[260935]: 2025-10-11 09:29:14.192 2 DEBUG nova.network.neutron [req-6b2267af-46e3-4880-904e-c307501415d5 req-af5809e7-ac01-40fe-a16e-1cdacb57f309 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Updating instance_info_cache with network_info: [{"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:29:14 compute-0 nova_compute[260935]: 2025-10-11 09:29:14.216 2 DEBUG oslo_concurrency.lockutils [req-6b2267af-46e3-4880-904e-c307501415d5 req-af5809e7-ac01-40fe-a16e-1cdacb57f309 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:29:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:29:14 compute-0 nova_compute[260935]: 2025-10-11 09:29:14.403 2 INFO nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Creating config drive at /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5/disk.config
Oct 11 09:29:14 compute-0 nova_compute[260935]: 2025-10-11 09:29:14.412 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ujcakhs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:14 compute-0 nova_compute[260935]: 2025-10-11 09:29:14.571 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ujcakhs" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:14 compute-0 nova_compute[260935]: 2025-10-11 09:29:14.617 2 DEBUG nova.storage.rbd_utils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] rbd image af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:29:14 compute-0 nova_compute[260935]: 2025-10-11 09:29:14.622 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5/disk.config af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:14 compute-0 nova_compute[260935]: 2025-10-11 09:29:14.776 2 DEBUG oslo_concurrency.processutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5/disk.config af8a1ab7-7512-4de4-8493-cfe85095fbc5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:14 compute-0 nova_compute[260935]: 2025-10-11 09:29:14.777 2 INFO nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Deleting local config drive /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5/disk.config because it was imported into RBD.
Oct 11 09:29:14 compute-0 kernel: tapc6f854bc-83: entered promiscuous mode
Oct 11 09:29:14 compute-0 NetworkManager[44960]: <info>  [1760174954.8281] manager: (tapc6f854bc-83): new Tun device (/org/freedesktop/NetworkManager/Devices/577)
Oct 11 09:29:14 compute-0 ovn_controller[152945]: 2025-10-11T09:29:14Z|01485|binding|INFO|Claiming lport c6f854bc-831f-4bb1-ad9f-e1aa03343d25 for this chassis.
Oct 11 09:29:14 compute-0 ovn_controller[152945]: 2025-10-11T09:29:14Z|01486|binding|INFO|c6f854bc-831f-4bb1-ad9f-e1aa03343d25: Claiming fa:16:3e:6b:d3:83 10.100.0.7
Oct 11 09:29:14 compute-0 nova_compute[260935]: 2025-10-11 09:29:14.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.837 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:d3:83 10.100.0.7'], port_security=['fa:16:3e:6b:d3:83 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'af8a1ab7-7512-4de4-8493-cfe85095fbc5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-079fe911-cb70-4ab3-8112-a68496105754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e69b3710dc94c919d8bd75d2a540c10', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0dd44ff9-fcd8-4fbd-bf9d-f19c93cd632d af5d01b3-d9f2-4ea0-a9bb-fc66bdc2afc6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb94e045-6353-4848-a698-5f94ed22b64e, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c6f854bc-831f-4bb1-ad9f-e1aa03343d25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:29:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.839 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c6f854bc-831f-4bb1-ad9f-e1aa03343d25 in datapath 079fe911-cb70-4ab3-8112-a68496105754 bound to our chassis
Oct 11 09:29:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.841 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 079fe911-cb70-4ab3-8112-a68496105754
Oct 11 09:29:14 compute-0 ovn_controller[152945]: 2025-10-11T09:29:14Z|01487|binding|INFO|Setting lport c6f854bc-831f-4bb1-ad9f-e1aa03343d25 ovn-installed in OVS
Oct 11 09:29:14 compute-0 ovn_controller[152945]: 2025-10-11T09:29:14Z|01488|binding|INFO|Setting lport c6f854bc-831f-4bb1-ad9f-e1aa03343d25 up in Southbound
Oct 11 09:29:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.856 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f8b1a7f-9747-445f-b09f-90122377830e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.857 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap079fe911-c1 in ovnmeta-079fe911-cb70-4ab3-8112-a68496105754 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:29:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.859 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap079fe911-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:29:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.859 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[02b08daf-8443-4065-8534-41ac1a953b9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.860 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[22a6b359-ed8a-491e-a221-85c054528b4b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:14 compute-0 nova_compute[260935]: 2025-10-11 09:29:14.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:14 compute-0 systemd-udevd[407350]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:29:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.872 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[9758400b-cd54-4d8e-b4bf-f4d3aceea036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:14 compute-0 NetworkManager[44960]: <info>  [1760174954.8814] device (tapc6f854bc-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:29:14 compute-0 NetworkManager[44960]: <info>  [1760174954.8822] device (tapc6f854bc-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:29:14 compute-0 systemd-machined[215705]: New machine qemu-158-instance-00000086.
Oct 11 09:29:14 compute-0 systemd[1]: Started Virtual Machine qemu-158-instance-00000086.
Oct 11 09:29:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.899 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[177b04b0-43d1-43d1-827b-d7926ee6c380]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.929 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5edfe7-3bf9-49c7-8f31-59617614f907]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:14 compute-0 NetworkManager[44960]: <info>  [1760174954.9414] manager: (tap079fe911-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/578)
Oct 11 09:29:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.943 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8d4e9e-ccf2-410b-b242-bd34b59d4b4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.982 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[347e4432-5481-4747-b956-08e6eb695cdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:14 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:14.984 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7e482f88-f1f9-4f26-8c55-71893800845c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 11 09:29:15 compute-0 NetworkManager[44960]: <info>  [1760174955.1121] device (tap079fe911-c0): carrier: link connected
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.120 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bc923c81-4ab8-49e9-9a57-9b8b23842fb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.148 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4f4704-39d2-4713-b8db-34c5e084ff7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap079fe911-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:bb:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679981, 'reachable_time': 32917, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407383, 'error': None, 'target': 'ovnmeta-079fe911-cb70-4ab3-8112-a68496105754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.174 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d24f8e4e-e8cc-4617-bbd7-18393f7397f1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe16:bbd6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 679981, 'tstamp': 679981}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407384, 'error': None, 'target': 'ovnmeta-079fe911-cb70-4ab3-8112-a68496105754', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.199 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a0ca8f-d272-4630-93bf-66a51ef3abf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap079fe911-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:bb:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679981, 'reachable_time': 32917, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 407385, 'error': None, 'target': 'ovnmeta-079fe911-cb70-4ab3-8112-a68496105754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:15 compute-0 nova_compute[260935]: 2025-10-11 09:29:15.219 2 DEBUG nova.compute.manager [req-99eda441-d7b5-470d-99ca-4d554a3f7823 req-d021fbe2-d39d-4ba6-9a0c-573e6e626c2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received event network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:29:15 compute-0 nova_compute[260935]: 2025-10-11 09:29:15.220 2 DEBUG oslo_concurrency.lockutils [req-99eda441-d7b5-470d-99ca-4d554a3f7823 req-d021fbe2-d39d-4ba6-9a0c-573e6e626c2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:15 compute-0 nova_compute[260935]: 2025-10-11 09:29:15.220 2 DEBUG oslo_concurrency.lockutils [req-99eda441-d7b5-470d-99ca-4d554a3f7823 req-d021fbe2-d39d-4ba6-9a0c-573e6e626c2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:15 compute-0 nova_compute[260935]: 2025-10-11 09:29:15.220 2 DEBUG oslo_concurrency.lockutils [req-99eda441-d7b5-470d-99ca-4d554a3f7823 req-d021fbe2-d39d-4ba6-9a0c-573e6e626c2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:15 compute-0 nova_compute[260935]: 2025-10-11 09:29:15.220 2 DEBUG nova.compute.manager [req-99eda441-d7b5-470d-99ca-4d554a3f7823 req-d021fbe2-d39d-4ba6-9a0c-573e6e626c2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Processing event network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.224 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.225 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.226 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.250 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9f056072-59cf-46f5-bcd4-202f04a78657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.326 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a31fe241-92bc-4205-bd01-f731212a54b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.328 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap079fe911-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.328 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.329 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap079fe911-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:15 compute-0 NetworkManager[44960]: <info>  [1760174955.3321] manager: (tap079fe911-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/579)
Oct 11 09:29:15 compute-0 kernel: tap079fe911-c0: entered promiscuous mode
Oct 11 09:29:15 compute-0 nova_compute[260935]: 2025-10-11 09:29:15.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:15 compute-0 nova_compute[260935]: 2025-10-11 09:29:15.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.336 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap079fe911-c0, col_values=(('external_ids', {'iface-id': '853c3a89-16e1-4df4-8c36-5b3e31f01524'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:15 compute-0 nova_compute[260935]: 2025-10-11 09:29:15.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:15 compute-0 nova_compute[260935]: 2025-10-11 09:29:15.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.342 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/079fe911-cb70-4ab3-8112-a68496105754.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/079fe911-cb70-4ab3-8112-a68496105754.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:29:15 compute-0 ovn_controller[152945]: 2025-10-11T09:29:15Z|01489|binding|INFO|Releasing lport 853c3a89-16e1-4df4-8c36-5b3e31f01524 from this chassis (sb_readonly=0)
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.347 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[42d4288e-e473-49f4-93f6-fb4792410d3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.349 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-079fe911-cb70-4ab3-8112-a68496105754
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/079fe911-cb70-4ab3-8112-a68496105754.pid.haproxy
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 079fe911-cb70-4ab3-8112-a68496105754
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:29:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:15.351 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-079fe911-cb70-4ab3-8112-a68496105754', 'env', 'PROCESS_TAG=haproxy-079fe911-cb70-4ab3-8112-a68496105754', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/079fe911-cb70-4ab3-8112-a68496105754.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:29:15 compute-0 nova_compute[260935]: 2025-10-11 09:29:15.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:15 compute-0 podman[407457]: 2025-10-11 09:29:15.81321465 +0000 UTC m=+0.073649450 container create 7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 11 09:29:15 compute-0 podman[407457]: 2025-10-11 09:29:15.772533402 +0000 UTC m=+0.032968252 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:29:15 compute-0 systemd[1]: Started libpod-conmon-7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc.scope.
Oct 11 09:29:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:29:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c582133b8efc079cf0c19ee163457092710bcac477f70ae60348ebd6ded8cd4e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:29:15 compute-0 podman[407457]: 2025-10-11 09:29:15.927076493 +0000 UTC m=+0.187511293 container init 7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 09:29:15 compute-0 podman[407457]: 2025-10-11 09:29:15.938269369 +0000 UTC m=+0.198704139 container start 7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 09:29:15 compute-0 neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754[407473]: [NOTICE]   (407477) : New worker (407479) forked
Oct 11 09:29:15 compute-0 neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754[407473]: [NOTICE]   (407477) : Loading success.
Oct 11 09:29:16 compute-0 ceph-mon[74313]: pgmap v2628: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.157 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174956.1568, af8a1ab7-7512-4de4-8493-cfe85095fbc5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.157 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] VM Started (Lifecycle Event)
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.160 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.163 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.166 2 INFO nova.virt.libvirt.driver [-] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Instance spawned successfully.
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.166 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.184 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.191 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.195 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.196 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.198 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.198 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.199 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.199 2 DEBUG nova.virt.libvirt.driver [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.226 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.226 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174956.1572866, af8a1ab7-7512-4de4-8493-cfe85095fbc5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.226 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] VM Paused (Lifecycle Event)
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.252 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.255 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760174956.1624923, af8a1ab7-7512-4de4-8493-cfe85095fbc5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.255 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] VM Resumed (Lifecycle Event)
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.262 2 INFO nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Took 7.41 seconds to spawn the instance on the hypervisor.
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.262 2 DEBUG nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.270 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.272 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.297 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.319 2 INFO nova.compute.manager [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Took 8.57 seconds to build instance.
Oct 11 09:29:16 compute-0 nova_compute[260935]: 2025-10-11 09:29:16.335 2 DEBUG oslo_concurrency.lockutils [None req-185274bf-6538-417c-a3ef-e481592e5e5e 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2629: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 11 09:29:17 compute-0 nova_compute[260935]: 2025-10-11 09:29:17.316 2 DEBUG nova.compute.manager [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received event network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:29:17 compute-0 nova_compute[260935]: 2025-10-11 09:29:17.316 2 DEBUG oslo_concurrency.lockutils [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:17 compute-0 nova_compute[260935]: 2025-10-11 09:29:17.317 2 DEBUG oslo_concurrency.lockutils [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:17 compute-0 nova_compute[260935]: 2025-10-11 09:29:17.317 2 DEBUG oslo_concurrency.lockutils [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:17 compute-0 nova_compute[260935]: 2025-10-11 09:29:17.317 2 DEBUG nova.compute.manager [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] No waiting events found dispatching network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:29:17 compute-0 nova_compute[260935]: 2025-10-11 09:29:17.317 2 WARNING nova.compute.manager [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received unexpected event network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 for instance with vm_state active and task_state None.
Oct 11 09:29:17 compute-0 nova_compute[260935]: 2025-10-11 09:29:17.317 2 DEBUG nova.compute.manager [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-changed-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:29:17 compute-0 nova_compute[260935]: 2025-10-11 09:29:17.318 2 DEBUG nova.compute.manager [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Refreshing instance network info cache due to event network-changed-b6c25504-002e-4bf7-8b34-c9e43fcc55c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:29:17 compute-0 nova_compute[260935]: 2025-10-11 09:29:17.318 2 DEBUG oslo_concurrency.lockutils [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:29:17 compute-0 nova_compute[260935]: 2025-10-11 09:29:17.318 2 DEBUG oslo_concurrency.lockutils [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:29:17 compute-0 nova_compute[260935]: 2025-10-11 09:29:17.318 2 DEBUG nova.network.neutron [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Refreshing network info cache for port b6c25504-002e-4bf7-8b34-c9e43fcc55c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:29:18 compute-0 ceph-mon[74313]: pgmap v2629: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Oct 11 09:29:18 compute-0 nova_compute[260935]: 2025-10-11 09:29:18.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:18 compute-0 sshd-session[407488]: Invalid user admin from 165.232.82.252 port 49392
Oct 11 09:29:18 compute-0 sshd-session[407488]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:29:18 compute-0 sshd-session[407488]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:29:18 compute-0 nova_compute[260935]: 2025-10-11 09:29:18.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2630: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 195 op/s
Oct 11 09:29:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:29:19 compute-0 nova_compute[260935]: 2025-10-11 09:29:19.422 2 DEBUG nova.compute.manager [req-65929319-c2ef-47d6-98fb-3457ae2fc994 req-04ec60b6-a950-4488-94f7-d73ad64a4528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received event network-changed-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:29:19 compute-0 nova_compute[260935]: 2025-10-11 09:29:19.423 2 DEBUG nova.compute.manager [req-65929319-c2ef-47d6-98fb-3457ae2fc994 req-04ec60b6-a950-4488-94f7-d73ad64a4528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Refreshing instance network info cache due to event network-changed-c6f854bc-831f-4bb1-ad9f-e1aa03343d25. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:29:19 compute-0 nova_compute[260935]: 2025-10-11 09:29:19.423 2 DEBUG oslo_concurrency.lockutils [req-65929319-c2ef-47d6-98fb-3457ae2fc994 req-04ec60b6-a950-4488-94f7-d73ad64a4528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:29:19 compute-0 nova_compute[260935]: 2025-10-11 09:29:19.423 2 DEBUG oslo_concurrency.lockutils [req-65929319-c2ef-47d6-98fb-3457ae2fc994 req-04ec60b6-a950-4488-94f7-d73ad64a4528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:29:19 compute-0 nova_compute[260935]: 2025-10-11 09:29:19.423 2 DEBUG nova.network.neutron [req-65929319-c2ef-47d6-98fb-3457ae2fc994 req-04ec60b6-a950-4488-94f7-d73ad64a4528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Refreshing network info cache for port c6f854bc-831f-4bb1-ad9f-e1aa03343d25 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:29:19 compute-0 podman[407490]: 2025-10-11 09:29:19.789897524 +0000 UTC m=+0.080140073 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:29:20 compute-0 ceph-mon[74313]: pgmap v2630: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 195 op/s
Oct 11 09:29:20 compute-0 nova_compute[260935]: 2025-10-11 09:29:20.462 2 DEBUG nova.network.neutron [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Updated VIF entry in instance network info cache for port b6c25504-002e-4bf7-8b34-c9e43fcc55c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:29:20 compute-0 nova_compute[260935]: 2025-10-11 09:29:20.463 2 DEBUG nova.network.neutron [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Updating instance_info_cache with network_info: [{"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": 
null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:29:20 compute-0 nova_compute[260935]: 2025-10-11 09:29:20.488 2 DEBUG oslo_concurrency.lockutils [req-b72c4aa9-1abc-4506-b1ef-f6457e77586f req-fd0ba4d0-75b5-48c4-9712-3330f3fb10e1 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:29:20 compute-0 sshd-session[407488]: Failed password for invalid user admin from 165.232.82.252 port 49392 ssh2
Oct 11 09:29:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2631: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 168 op/s
Oct 11 09:29:21 compute-0 ceph-mon[74313]: pgmap v2631: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 168 op/s
Oct 11 09:29:22 compute-0 sshd-session[407488]: Connection closed by invalid user admin 165.232.82.252 port 49392 [preauth]
Oct 11 09:29:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2632: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Oct 11 09:29:23 compute-0 ceph-mon[74313]: pgmap v2632: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Oct 11 09:29:23 compute-0 nova_compute[260935]: 2025-10-11 09:29:23.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:23 compute-0 nova_compute[260935]: 2025-10-11 09:29:23.762 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:29:23 compute-0 nova_compute[260935]: 2025-10-11 09:29:23.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:23 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 11 09:29:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:29:24 compute-0 nova_compute[260935]: 2025-10-11 09:29:24.668 2 DEBUG nova.network.neutron [req-65929319-c2ef-47d6-98fb-3457ae2fc994 req-04ec60b6-a950-4488-94f7-d73ad64a4528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Updated VIF entry in instance network info cache for port c6f854bc-831f-4bb1-ad9f-e1aa03343d25. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:29:24 compute-0 nova_compute[260935]: 2025-10-11 09:29:24.669 2 DEBUG nova.network.neutron [req-65929319-c2ef-47d6-98fb-3457ae2fc994 req-04ec60b6-a950-4488-94f7-d73ad64a4528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Updating instance_info_cache with network_info: [{"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:29:24 compute-0 nova_compute[260935]: 2025-10-11 09:29:24.685 2 DEBUG oslo_concurrency.lockutils [req-65929319-c2ef-47d6-98fb-3457ae2fc994 req-04ec60b6-a950-4488-94f7-d73ad64a4528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-af8a1ab7-7512-4de4-8493-cfe85095fbc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:29:24 compute-0 podman[407508]: 2025-10-11 09:29:24.756516585 +0000 UTC m=+0.064005677 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 11 09:29:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:29:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:29:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:29:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:29:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:29:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:29:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2633: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 138 op/s
Oct 11 09:29:25 compute-0 ovn_controller[152945]: 2025-10-11T09:29:25Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:b7:77 10.100.0.6
Oct 11 09:29:25 compute-0 ovn_controller[152945]: 2025-10-11T09:29:25Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:b7:77 10.100.0.6
Oct 11 09:29:26 compute-0 ceph-mon[74313]: pgmap v2633: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 138 op/s
Oct 11 09:29:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:29:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/105732265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:29:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:29:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/105732265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:29:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2634: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 138 op/s
Oct 11 09:29:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/105732265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:29:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/105732265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:29:27 compute-0 ceph-mon[74313]: pgmap v2634: 321 pgs: 321 active+clean; 500 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 138 op/s
Oct 11 09:29:28 compute-0 ovn_controller[152945]: 2025-10-11T09:29:28Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:d3:83 10.100.0.7
Oct 11 09:29:28 compute-0 ovn_controller[152945]: 2025-10-11T09:29:28Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:d3:83 10.100.0.7
Oct 11 09:29:28 compute-0 nova_compute[260935]: 2025-10-11 09:29:28.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:28 compute-0 nova_compute[260935]: 2025-10-11 09:29:28.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2635: 321 pgs: 321 active+clean; 554 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.2 MiB/s wr, 240 op/s
Oct 11 09:29:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:29:29 compute-0 podman[407528]: 2025-10-11 09:29:29.755583218 +0000 UTC m=+0.061613860 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:29:29 compute-0 podman[407529]: 2025-10-11 09:29:29.813373029 +0000 UTC m=+0.117202879 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Oct 11 09:29:30 compute-0 ceph-mon[74313]: pgmap v2635: 321 pgs: 321 active+clean; 554 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.2 MiB/s wr, 240 op/s
Oct 11 09:29:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2636: 321 pgs: 321 active+clean; 554 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 4.2 MiB/s wr, 108 op/s
Oct 11 09:29:32 compute-0 ceph-mon[74313]: pgmap v2636: 321 pgs: 321 active+clean; 554 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 4.2 MiB/s wr, 108 op/s
Oct 11 09:29:32 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 11 09:29:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2637: 321 pgs: 321 active+clean; 566 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 820 KiB/s rd, 4.3 MiB/s wr, 140 op/s
Oct 11 09:29:33 compute-0 ceph-mon[74313]: pgmap v2637: 321 pgs: 321 active+clean; 566 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 820 KiB/s rd, 4.3 MiB/s wr, 140 op/s
Oct 11 09:29:33 compute-0 nova_compute[260935]: 2025-10-11 09:29:33.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:33 compute-0 nova_compute[260935]: 2025-10-11 09:29:33.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:29:34 compute-0 nova_compute[260935]: 2025-10-11 09:29:34.894 2 DEBUG nova.compute.manager [req-022868cf-302d-472e-bf83-3e0a77ab0203 req-6200f9c3-c286-4b74-a8d1-aac1ea91e9fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-changed-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:29:34 compute-0 nova_compute[260935]: 2025-10-11 09:29:34.895 2 DEBUG nova.compute.manager [req-022868cf-302d-472e-bf83-3e0a77ab0203 req-6200f9c3-c286-4b74-a8d1-aac1ea91e9fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Refreshing instance network info cache due to event network-changed-b6c25504-002e-4bf7-8b34-c9e43fcc55c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:29:34 compute-0 nova_compute[260935]: 2025-10-11 09:29:34.896 2 DEBUG oslo_concurrency.lockutils [req-022868cf-302d-472e-bf83-3e0a77ab0203 req-6200f9c3-c286-4b74-a8d1-aac1ea91e9fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:29:34 compute-0 nova_compute[260935]: 2025-10-11 09:29:34.896 2 DEBUG oslo_concurrency.lockutils [req-022868cf-302d-472e-bf83-3e0a77ab0203 req-6200f9c3-c286-4b74-a8d1-aac1ea91e9fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:29:34 compute-0 nova_compute[260935]: 2025-10-11 09:29:34.897 2 DEBUG nova.network.neutron [req-022868cf-302d-472e-bf83-3e0a77ab0203 req-6200f9c3-c286-4b74-a8d1-aac1ea91e9fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Refreshing network info cache for port b6c25504-002e-4bf7-8b34-c9e43fcc55c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.001 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "80209953-3c4c-4932-a9a3-8166c70e1029" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.002 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.002 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.003 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.004 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.006 2 INFO nova.compute.manager [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Terminating instance
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.007 2 DEBUG nova.compute.manager [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:29:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2638: 321 pgs: 321 active+clean; 566 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 611 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Oct 11 09:29:35 compute-0 ceph-mon[74313]: pgmap v2638: 321 pgs: 321 active+clean; 566 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 611 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Oct 11 09:29:35 compute-0 kernel: tapb6c25504-00 (unregistering): left promiscuous mode
Oct 11 09:29:35 compute-0 NetworkManager[44960]: <info>  [1760174975.4086] device (tapb6c25504-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:29:35 compute-0 ovn_controller[152945]: 2025-10-11T09:29:35Z|01490|binding|INFO|Releasing lport b6c25504-002e-4bf7-8b34-c9e43fcc55c2 from this chassis (sb_readonly=0)
Oct 11 09:29:35 compute-0 ovn_controller[152945]: 2025-10-11T09:29:35Z|01491|binding|INFO|Setting lport b6c25504-002e-4bf7-8b34-c9e43fcc55c2 down in Southbound
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:35 compute-0 ovn_controller[152945]: 2025-10-11T09:29:35Z|01492|binding|INFO|Removing iface tapb6c25504-00 ovn-installed in OVS
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.441 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:b7:77 10.100.0.6 2001:db8:0:1:f816:3eff:feba:b777 2001:db8::f816:3eff:feba:b777'], port_security=['fa:16:3e:ba:b7:77 10.100.0.6 2001:db8:0:1:f816:3eff:feba:b777 2001:db8::f816:3eff:feba:b777'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:feba:b777/64 2001:db8::f816:3eff:feba:b777/64', 'neutron:device_id': '80209953-3c4c-4932-a9a3-8166c70e1029', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ea903998-56a8-4844-9402-6fdca37c3b3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3325c2b-a3cd-4cde-bb9c-e17cc326ff80, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=b6c25504-002e-4bf7-8b34-c9e43fcc55c2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:29:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.445 162815 INFO neutron.agent.ovn.metadata.agent [-] Port b6c25504-002e-4bf7-8b34-c9e43fcc55c2 in datapath 03e15108-5f8d-4fec-9ad8-133d9551c667 unbound from our chassis
Oct 11 09:29:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.450 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03e15108-5f8d-4fec-9ad8-133d9551c667
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.480 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[494b3feb-322c-45a8-828e-c516b1541f30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:35 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d00000085.scope: Deactivated successfully.
Oct 11 09:29:35 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d00000085.scope: Consumed 13.534s CPU time.
Oct 11 09:29:35 compute-0 systemd-machined[215705]: Machine qemu-157-instance-00000085 terminated.
Oct 11 09:29:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.541 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[96e1d028-37e2-4377-8d1f-cf3819ea7ff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.547 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[03aae237-803a-46c3-93a8-681ff1bc7ad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.599 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac21f5f-e7ef-4bcc-ab03-fc98cddd406f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.625 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d3ebf5-cef9-477a-a914-30c11d1f90d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03e15108-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:c9:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 44, 'tx_packets': 7, 'rx_bytes': 3768, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 44, 'tx_packets': 7, 'rx_bytes': 3768, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 397], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675712, 'reachable_time': 41440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 40, 'inoctets': 3040, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 40, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 3040, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 40, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407586, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.666 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0adabb10-0322-491b-829a-d9319c1eaa24]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03e15108-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675730, 'tstamp': 675730}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407588, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap03e15108-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675735, 'tstamp': 675735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407588, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.670 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03e15108-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.674 2 INFO nova.virt.libvirt.driver [-] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Instance destroyed successfully.
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.674 2 DEBUG nova.objects.instance [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 80209953-3c4c-4932-a9a3-8166c70e1029 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.689 2 DEBUG nova.virt.libvirt.vif [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:28:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-373220295',display_name='tempest-TestGettingAddress-server-373220295',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-373220295',id=133,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6hRsTatVKtQ3nuA1r7FSvjUwlf4xF10ggzb3Lhl+N08jgpdPafA8cbnqq6OClpxtss8V3s8tFKTynT8iMbxi96RnSSo3i9TmqSxoUO3xLRHc6Xamp8CuhNQXpWeJ78Zg==',key_name='tempest-TestGettingAddress-951303195',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:29:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-0s96u5c0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:29:12Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=80209953-3c4c-4932-a9a3-8166c70e1029,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.690 2 DEBUG nova.network.os_vif_util [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.692 2 DEBUG nova.network.os_vif_util [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:b7:77,bridge_name='br-int',has_traffic_filtering=True,id=b6c25504-002e-4bf7-8b34-c9e43fcc55c2,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c25504-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.692 2 DEBUG os_vif [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:b7:77,bridge_name='br-int',has_traffic_filtering=True,id=b6c25504-002e-4bf7-8b34-c9e43fcc55c2,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c25504-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.695 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6c25504-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.714 2 DEBUG nova.compute.manager [req-8fa91584-9a8d-4547-b96b-27efc2da70e6 req-826a6969-ca5d-4952-8788-54f246d5e683 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-vif-unplugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.714 2 DEBUG oslo_concurrency.lockutils [req-8fa91584-9a8d-4547-b96b-27efc2da70e6 req-826a6969-ca5d-4952-8788-54f246d5e683 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.715 2 DEBUG oslo_concurrency.lockutils [req-8fa91584-9a8d-4547-b96b-27efc2da70e6 req-826a6969-ca5d-4952-8788-54f246d5e683 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.715 2 DEBUG oslo_concurrency.lockutils [req-8fa91584-9a8d-4547-b96b-27efc2da70e6 req-826a6969-ca5d-4952-8788-54f246d5e683 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.715 2 DEBUG nova.compute.manager [req-8fa91584-9a8d-4547-b96b-27efc2da70e6 req-826a6969-ca5d-4952-8788-54f246d5e683 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] No waiting events found dispatching network-vif-unplugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.716 2 DEBUG nova.compute.manager [req-8fa91584-9a8d-4547-b96b-27efc2da70e6 req-826a6969-ca5d-4952-8788-54f246d5e683 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-vif-unplugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:29:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.725 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03e15108-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.725 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.728 2 INFO os_vif [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:b7:77,bridge_name='br-int',has_traffic_filtering=True,id=b6c25504-002e-4bf7-8b34-c9e43fcc55c2,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c25504-00')
Oct 11 09:29:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.728 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03e15108-50, col_values=(('external_ids', {'iface-id': '1945eb8f-f9e5-437a-9fc9-017b522ae777'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.730 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:29:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.826 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:29:35 compute-0 nova_compute[260935]: 2025-10-11 09:29:35.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:35.827 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:29:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2639: 321 pgs: 321 active+clean; 566 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 611 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Oct 11 09:29:37 compute-0 ceph-mon[74313]: pgmap v2639: 321 pgs: 321 active+clean; 566 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 611 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Oct 11 09:29:37 compute-0 nova_compute[260935]: 2025-10-11 09:29:37.831 2 DEBUG nova.compute.manager [req-c1fcb4e2-926d-4d5f-9952-7d271c6f99ad req-65fd6448-01ad-41c3-8c4c-b68250ff9e70 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:29:37 compute-0 nova_compute[260935]: 2025-10-11 09:29:37.831 2 DEBUG oslo_concurrency.lockutils [req-c1fcb4e2-926d-4d5f-9952-7d271c6f99ad req-65fd6448-01ad-41c3-8c4c-b68250ff9e70 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:37 compute-0 nova_compute[260935]: 2025-10-11 09:29:37.832 2 DEBUG oslo_concurrency.lockutils [req-c1fcb4e2-926d-4d5f-9952-7d271c6f99ad req-65fd6448-01ad-41c3-8c4c-b68250ff9e70 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:37 compute-0 nova_compute[260935]: 2025-10-11 09:29:37.832 2 DEBUG oslo_concurrency.lockutils [req-c1fcb4e2-926d-4d5f-9952-7d271c6f99ad req-65fd6448-01ad-41c3-8c4c-b68250ff9e70 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:37 compute-0 nova_compute[260935]: 2025-10-11 09:29:37.833 2 DEBUG nova.compute.manager [req-c1fcb4e2-926d-4d5f-9952-7d271c6f99ad req-65fd6448-01ad-41c3-8c4c-b68250ff9e70 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] No waiting events found dispatching network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:29:37 compute-0 nova_compute[260935]: 2025-10-11 09:29:37.834 2 WARNING nova.compute.manager [req-c1fcb4e2-926d-4d5f-9952-7d271c6f99ad req-65fd6448-01ad-41c3-8c4c-b68250ff9e70 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received unexpected event network-vif-plugged-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 for instance with vm_state active and task_state deleting.
Oct 11 09:29:38 compute-0 nova_compute[260935]: 2025-10-11 09:29:38.348 2 INFO nova.virt.libvirt.driver [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Deleting instance files /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029_del
Oct 11 09:29:38 compute-0 nova_compute[260935]: 2025-10-11 09:29:38.349 2 INFO nova.virt.libvirt.driver [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Deletion of /var/lib/nova/instances/80209953-3c4c-4932-a9a3-8166c70e1029_del complete
Oct 11 09:29:38 compute-0 nova_compute[260935]: 2025-10-11 09:29:38.359 2 DEBUG nova.network.neutron [req-022868cf-302d-472e-bf83-3e0a77ab0203 req-6200f9c3-c286-4b74-a8d1-aac1ea91e9fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Updated VIF entry in instance network info cache for port b6c25504-002e-4bf7-8b34-c9e43fcc55c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:29:38 compute-0 nova_compute[260935]: 2025-10-11 09:29:38.360 2 DEBUG nova.network.neutron [req-022868cf-302d-472e-bf83-3e0a77ab0203 req-6200f9c3-c286-4b74-a8d1-aac1ea91e9fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Updating instance_info_cache with network_info: [{"id": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "address": "fa:16:3e:ba:b7:77", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feba:b777", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c25504-00", "ovs_interfaceid": "b6c25504-002e-4bf7-8b34-c9e43fcc55c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:29:38 compute-0 nova_compute[260935]: 2025-10-11 09:29:38.399 2 DEBUG oslo_concurrency.lockutils [req-022868cf-302d-472e-bf83-3e0a77ab0203 req-6200f9c3-c286-4b74-a8d1-aac1ea91e9fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-80209953-3c4c-4932-a9a3-8166c70e1029" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:29:38 compute-0 nova_compute[260935]: 2025-10-11 09:29:38.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:38 compute-0 nova_compute[260935]: 2025-10-11 09:29:38.418 2 INFO nova.compute.manager [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Took 3.41 seconds to destroy the instance on the hypervisor.
Oct 11 09:29:38 compute-0 nova_compute[260935]: 2025-10-11 09:29:38.418 2 DEBUG oslo.service.loopingcall [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:29:38 compute-0 nova_compute[260935]: 2025-10-11 09:29:38.419 2 DEBUG nova.compute.manager [-] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:29:38 compute-0 nova_compute[260935]: 2025-10-11 09:29:38.419 2 DEBUG nova.network.neutron [-] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:29:38 compute-0 nova_compute[260935]: 2025-10-11 09:29:38.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:29:38 compute-0 nova_compute[260935]: 2025-10-11 09:29:38.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:29:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2640: 321 pgs: 321 active+clean; 566 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 624 KiB/s rd, 4.3 MiB/s wr, 143 op/s
Oct 11 09:29:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:29:39 compute-0 nova_compute[260935]: 2025-10-11 09:29:39.592 2 DEBUG nova.network.neutron [-] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:29:39 compute-0 nova_compute[260935]: 2025-10-11 09:29:39.615 2 INFO nova.compute.manager [-] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Took 1.20 seconds to deallocate network for instance.
Oct 11 09:29:39 compute-0 nova_compute[260935]: 2025-10-11 09:29:39.661 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:39 compute-0 nova_compute[260935]: 2025-10-11 09:29:39.662 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:39 compute-0 nova_compute[260935]: 2025-10-11 09:29:39.669 2 DEBUG nova.compute.manager [req-7e3ebcc4-2cfd-4525-9dae-dca19877266a req-30e8dde6-fa39-4173-a806-af24ed2c9a01 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Received event network-vif-deleted-b6c25504-002e-4bf7-8b34-c9e43fcc55c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:29:39 compute-0 nova_compute[260935]: 2025-10-11 09:29:39.697 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:29:39 compute-0 nova_compute[260935]: 2025-10-11 09:29:39.703 2 DEBUG nova.scheduler.client.report [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 09:29:39 compute-0 nova_compute[260935]: 2025-10-11 09:29:39.728 2 DEBUG nova.scheduler.client.report [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 09:29:39 compute-0 nova_compute[260935]: 2025-10-11 09:29:39.728 2 DEBUG nova.compute.provider_tree [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 09:29:39 compute-0 nova_compute[260935]: 2025-10-11 09:29:39.757 2 DEBUG nova.scheduler.client.report [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 09:29:39 compute-0 nova_compute[260935]: 2025-10-11 09:29:39.792 2 DEBUG nova.scheduler.client.report [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 09:29:39 compute-0 nova_compute[260935]: 2025-10-11 09:29:39.927 2 DEBUG oslo_concurrency.processutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:40 compute-0 ceph-mon[74313]: pgmap v2640: 321 pgs: 321 active+clean; 566 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 624 KiB/s rd, 4.3 MiB/s wr, 143 op/s
Oct 11 09:29:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:29:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1860391964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:29:40 compute-0 nova_compute[260935]: 2025-10-11 09:29:40.384 2 DEBUG oslo_concurrency.processutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:40 compute-0 nova_compute[260935]: 2025-10-11 09:29:40.394 2 DEBUG nova.compute.provider_tree [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:29:40 compute-0 nova_compute[260935]: 2025-10-11 09:29:40.417 2 DEBUG nova.scheduler.client.report [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:29:40 compute-0 nova_compute[260935]: 2025-10-11 09:29:40.441 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:40 compute-0 nova_compute[260935]: 2025-10-11 09:29:40.468 2 INFO nova.scheduler.client.report [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 80209953-3c4c-4932-a9a3-8166c70e1029
Oct 11 09:29:40 compute-0 nova_compute[260935]: 2025-10-11 09:29:40.541 2 DEBUG oslo_concurrency.lockutils [None req-5f81f016-bf29-4198-8daa-7fb128a587cc 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "80209953-3c4c-4932-a9a3-8166c70e1029" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:40 compute-0 nova_compute[260935]: 2025-10-11 09:29:40.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2641: 321 pgs: 321 active+clean; 566 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 129 KiB/s wr, 41 op/s
Oct 11 09:29:41 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1860391964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:29:41 compute-0 nova_compute[260935]: 2025-10-11 09:29:41.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:29:42 compute-0 ceph-mon[74313]: pgmap v2641: 321 pgs: 321 active+clean; 566 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 129 KiB/s wr, 41 op/s
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.610 2 DEBUG nova.compute.manager [req-e1ef3a65-e183-43aa-b560-6473502afbf6 req-a5f477c9-824e-4604-8927-c3b16166ffba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-changed-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.611 2 DEBUG nova.compute.manager [req-e1ef3a65-e183-43aa-b560-6473502afbf6 req-a5f477c9-824e-4604-8927-c3b16166ffba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Refreshing instance network info cache due to event network-changed-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.611 2 DEBUG oslo_concurrency.lockutils [req-e1ef3a65-e183-43aa-b560-6473502afbf6 req-a5f477c9-824e-4604-8927-c3b16166ffba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.612 2 DEBUG oslo_concurrency.lockutils [req-e1ef3a65-e183-43aa-b560-6473502afbf6 req-a5f477c9-824e-4604-8927-c3b16166ffba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.612 2 DEBUG nova.network.neutron [req-e1ef3a65-e183-43aa-b560-6473502afbf6 req-a5f477c9-824e-4604-8927-c3b16166ffba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Refreshing network info cache for port e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.716 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "16108538-ef61-4d43-8a42-c324c334138b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.716 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.716 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "16108538-ef61-4d43-8a42-c324c334138b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.717 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.717 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.718 2 INFO nova.compute.manager [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Terminating instance
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.719 2 DEBUG nova.compute.manager [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:29:42 compute-0 kernel: tape89af28e-d1 (unregistering): left promiscuous mode
Oct 11 09:29:42 compute-0 NetworkManager[44960]: <info>  [1760174982.7760] device (tape89af28e-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:29:42 compute-0 ovn_controller[152945]: 2025-10-11T09:29:42Z|01493|binding|INFO|Releasing lport e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 from this chassis (sb_readonly=0)
Oct 11 09:29:42 compute-0 ovn_controller[152945]: 2025-10-11T09:29:42Z|01494|binding|INFO|Setting lport e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 down in Southbound
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:42 compute-0 ovn_controller[152945]: 2025-10-11T09:29:42Z|01495|binding|INFO|Removing iface tape89af28e-d1 ovn-installed in OVS
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:42.799 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:64:0e 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:640e 2001:db8::f816:3eff:fee6:640e'], port_security=['fa:16:3e:e6:64:0e 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:640e 2001:db8::f816:3eff:fee6:640e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8:0:1:f816:3eff:fee6:640e/64 2001:db8::f816:3eff:fee6:640e/64', 'neutron:device_id': '16108538-ef61-4d43-8a42-c324c334138b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03e15108-5f8d-4fec-9ad8-133d9551c667', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ea903998-56a8-4844-9402-6fdca37c3b3c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3325c2b-a3cd-4cde-bb9c-e17cc326ff80, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:29:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:42.800 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 in datapath 03e15108-5f8d-4fec-9ad8-133d9551c667 unbound from our chassis
Oct 11 09:29:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:42.801 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03e15108-5f8d-4fec-9ad8-133d9551c667, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:29:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:42.805 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[17b75d14-12c4-457e-b1ca-55a4b020c526]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:42.805 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667 namespace which is not needed anymore
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:42 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d00000084.scope: Deactivated successfully.
Oct 11 09:29:42 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d00000084.scope: Consumed 16.319s CPU time.
Oct 11 09:29:42 compute-0 systemd-machined[215705]: Machine qemu-156-instance-00000084 terminated.
Oct 11 09:29:42 compute-0 neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667[405609]: [NOTICE]   (405613) : haproxy version is 2.8.14-c23fe91
Oct 11 09:29:42 compute-0 neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667[405609]: [NOTICE]   (405613) : path to executable is /usr/sbin/haproxy
Oct 11 09:29:42 compute-0 neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667[405609]: [WARNING]  (405613) : Exiting Master process...
Oct 11 09:29:42 compute-0 neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667[405609]: [ALERT]    (405613) : Current worker (405615) exited with code 143 (Terminated)
Oct 11 09:29:42 compute-0 neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667[405609]: [WARNING]  (405613) : All workers exited. Exiting... (0)
Oct 11 09:29:42 compute-0 systemd[1]: libpod-d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4.scope: Deactivated successfully.
Oct 11 09:29:42 compute-0 podman[407662]: 2025-10-11 09:29:42.963868321 +0000 UTC m=+0.060622402 container died d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.966 2 INFO nova.virt.libvirt.driver [-] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Instance destroyed successfully.
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.966 2 DEBUG nova.objects.instance [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 16108538-ef61-4d43-8a42-c324c334138b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.983 2 DEBUG nova.virt.libvirt.vif [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1335802812',display_name='tempest-TestGettingAddress-server-1335802812',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1335802812',id=132,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6hRsTatVKtQ3nuA1r7FSvjUwlf4xF10ggzb3Lhl+N08jgpdPafA8cbnqq6OClpxtss8V3s8tFKTynT8iMbxi96RnSSo3i9TmqSxoUO3xLRHc6Xamp8CuhNQXpWeJ78Zg==',key_name='tempest-TestGettingAddress-951303195',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:28:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-7xgmxaip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:28:33Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=16108538-ef61-4d43-8a42-c324c334138b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.984 2 DEBUG nova.network.os_vif_util [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.986 2 DEBUG nova.network.os_vif_util [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:64:0e,bridge_name='br-int',has_traffic_filtering=True,id=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89af28e-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.986 2 DEBUG os_vif [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:64:0e,bridge_name='br-int',has_traffic_filtering=True,id=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89af28e-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.989 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape89af28e-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:42 compute-0 nova_compute[260935]: 2025-10-11 09:29:42.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.001 2 INFO os_vif [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:64:0e,bridge_name='br-int',has_traffic_filtering=True,id=e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0,network=Network(03e15108-5f8d-4fec-9ad8-133d9551c667),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89af28e-d1')
Oct 11 09:29:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4-userdata-shm.mount: Deactivated successfully.
Oct 11 09:29:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ed2907c6f4d557002a67a93ce41ed6b3e05074fb0e893ba2f5111564f8191c6-merged.mount: Deactivated successfully.
Oct 11 09:29:43 compute-0 podman[407662]: 2025-10-11 09:29:43.018291467 +0000 UTC m=+0.115045558 container cleanup d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 09:29:43 compute-0 systemd[1]: libpod-conmon-d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4.scope: Deactivated successfully.
Oct 11 09:29:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2642: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 133 KiB/s wr, 60 op/s
Oct 11 09:29:43 compute-0 podman[407720]: 2025-10-11 09:29:43.104301084 +0000 UTC m=+0.053456890 container remove d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 11 09:29:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.111 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9a045684-ba3b-4097-b493-f7e6e92a5812]: (4, ('Sat Oct 11 09:29:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667 (d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4)\nd0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4\nSat Oct 11 09:29:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667 (d0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4)\nd0e7b6791d31aa4d05705ff6bca99563fb0a8251a2e20915b4284d5398aa84a4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.113 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b52db876-9929-42b6-8098-2f3401fb3c32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.114 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03e15108-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:43 compute-0 kernel: tap03e15108-50: left promiscuous mode
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.124 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e31bdbc9-c282-4c48-8e29-580b2374bf52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:43 compute-0 ceph-mon[74313]: pgmap v2642: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 133 KiB/s wr, 60 op/s
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.145 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[baed22a9-8569-4bc3-9c32-998c312cf999]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.146 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[03239e74-e38c-4535-81fc-dbd225eabd84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.174 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf0a76b-5b70-4139-ab79-9380e5790037]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675702, 'reachable_time': 25969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407738, 'error': None, 'target': 'ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.180 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-03e15108-5f8d-4fec-9ad8-133d9551c667 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:29:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:43.180 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4a6b4b-9478-40db-aa9f-d2b892d2f985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d03e15108\x2d5f8d\x2d4fec\x2d9ad8\x2d133d9551c667.mount: Deactivated successfully.
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.464 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.465 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.465 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.472 2 INFO nova.virt.libvirt.driver [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Deleting instance files /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b_del
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.473 2 INFO nova.virt.libvirt.driver [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Deletion of /var/lib/nova/instances/16108538-ef61-4d43-8a42-c324c334138b_del complete
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.521 2 INFO nova.compute.manager [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.522 2 DEBUG oslo.service.loopingcall [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.523 2 DEBUG nova.compute.manager [-] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.523 2 DEBUG nova.network.neutron [-] [instance: 16108538-ef61-4d43-8a42-c324c334138b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.722 2 DEBUG nova.compute.manager [req-a930dbb7-b60e-4f69-ac62-b005c8d5755c req-46a06c2b-0936-4edd-b030-3bfe0ab3cdc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-vif-unplugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.724 2 DEBUG oslo_concurrency.lockutils [req-a930dbb7-b60e-4f69-ac62-b005c8d5755c req-46a06c2b-0936-4edd-b030-3bfe0ab3cdc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "16108538-ef61-4d43-8a42-c324c334138b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.724 2 DEBUG oslo_concurrency.lockutils [req-a930dbb7-b60e-4f69-ac62-b005c8d5755c req-46a06c2b-0936-4edd-b030-3bfe0ab3cdc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.725 2 DEBUG oslo_concurrency.lockutils [req-a930dbb7-b60e-4f69-ac62-b005c8d5755c req-46a06c2b-0936-4edd-b030-3bfe0ab3cdc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.725 2 DEBUG nova.compute.manager [req-a930dbb7-b60e-4f69-ac62-b005c8d5755c req-46a06c2b-0936-4edd-b030-3bfe0ab3cdc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] No waiting events found dispatching network-vif-unplugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:29:43 compute-0 nova_compute[260935]: 2025-10-11 09:29:43.726 2 DEBUG nova.compute.manager [req-a930dbb7-b60e-4f69-ac62-b005c8d5755c req-46a06c2b-0936-4edd-b030-3bfe0ab3cdc0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-vif-unplugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:29:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:29:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2643: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 17 KiB/s wr, 29 op/s
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.515 2 DEBUG nova.network.neutron [-] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.540 2 INFO nova.compute.manager [-] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Took 2.02 seconds to deallocate network for instance.
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.623 2 DEBUG nova.compute.manager [req-09a1a7a4-9cb6-4804-aaca-771c0db9b481 req-803916f9-a2a7-4325-ba3b-4b7001fcfaf0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-vif-deleted-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.627 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.627 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.722 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.740 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.741 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.746 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.747 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.747 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.760 2 DEBUG oslo_concurrency.processutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.827 2 DEBUG nova.compute.manager [req-e8a69a60-981f-44bd-97c8-289d35c22262 req-a4144217-ccd0-46f2-8a0f-35d11cc71509 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received event network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.828 2 DEBUG oslo_concurrency.lockutils [req-e8a69a60-981f-44bd-97c8-289d35c22262 req-a4144217-ccd0-46f2-8a0f-35d11cc71509 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "16108538-ef61-4d43-8a42-c324c334138b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:45.830 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.829 2 DEBUG oslo_concurrency.lockutils [req-e8a69a60-981f-44bd-97c8-289d35c22262 req-a4144217-ccd0-46f2-8a0f-35d11cc71509 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.832 2 DEBUG oslo_concurrency.lockutils [req-e8a69a60-981f-44bd-97c8-289d35c22262 req-a4144217-ccd0-46f2-8a0f-35d11cc71509 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.832 2 DEBUG nova.compute.manager [req-e8a69a60-981f-44bd-97c8-289d35c22262 req-a4144217-ccd0-46f2-8a0f-35d11cc71509 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] No waiting events found dispatching network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:29:45 compute-0 nova_compute[260935]: 2025-10-11 09:29:45.833 2 WARNING nova.compute.manager [req-e8a69a60-981f-44bd-97c8-289d35c22262 req-a4144217-ccd0-46f2-8a0f-35d11cc71509 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Received unexpected event network-vif-plugged-e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0 for instance with vm_state deleted and task_state None.
Oct 11 09:29:46 compute-0 ceph-mon[74313]: pgmap v2643: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 17 KiB/s wr, 29 op/s
Oct 11 09:29:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:29:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1256189463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:29:46 compute-0 nova_compute[260935]: 2025-10-11 09:29:46.257 2 DEBUG oslo_concurrency.processutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:46 compute-0 nova_compute[260935]: 2025-10-11 09:29:46.264 2 DEBUG nova.compute.provider_tree [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:29:46 compute-0 nova_compute[260935]: 2025-10-11 09:29:46.289 2 DEBUG nova.scheduler.client.report [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:29:46 compute-0 nova_compute[260935]: 2025-10-11 09:29:46.319 2 DEBUG nova.network.neutron [req-e1ef3a65-e183-43aa-b560-6473502afbf6 req-a5f477c9-824e-4604-8927-c3b16166ffba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Updated VIF entry in instance network info cache for port e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:29:46 compute-0 nova_compute[260935]: 2025-10-11 09:29:46.320 2 DEBUG nova.network.neutron [req-e1ef3a65-e183-43aa-b560-6473502afbf6 req-a5f477c9-824e-4604-8927-c3b16166ffba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Updating instance_info_cache with network_info: [{"id": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "address": "fa:16:3e:e6:64:0e", "network": {"id": "03e15108-5f8d-4fec-9ad8-133d9551c667", "bridge": "br-int", "label": "tempest-network-smoke--866575139", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:640e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89af28e-d1", "ovs_interfaceid": "e89af28e-d1c8-408a-8e1f-a80d8e9d0aa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:29:46 compute-0 nova_compute[260935]: 2025-10-11 09:29:46.325 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:46 compute-0 nova_compute[260935]: 2025-10-11 09:29:46.339 2 DEBUG oslo_concurrency.lockutils [req-e1ef3a65-e183-43aa-b560-6473502afbf6 req-a5f477c9-824e-4604-8927-c3b16166ffba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-16108538-ef61-4d43-8a42-c324c334138b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:29:46 compute-0 nova_compute[260935]: 2025-10-11 09:29:46.347 2 INFO nova.scheduler.client.report [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 16108538-ef61-4d43-8a42-c324c334138b
Oct 11 09:29:46 compute-0 nova_compute[260935]: 2025-10-11 09:29:46.416 2 DEBUG oslo_concurrency.lockutils [None req-ceb0ac14-3eb8-4c1c-ab12-43ef8fb7945f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "16108538-ef61-4d43-8a42-c324c334138b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:46 compute-0 nova_compute[260935]: 2025-10-11 09:29:46.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:29:46 compute-0 nova_compute[260935]: 2025-10-11 09:29:46.730 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:29:46 compute-0 nova_compute[260935]: 2025-10-11 09:29:46.754 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:46 compute-0 nova_compute[260935]: 2025-10-11 09:29:46.754 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:46 compute-0 nova_compute[260935]: 2025-10-11 09:29:46.754 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:46 compute-0 nova_compute[260935]: 2025-10-11 09:29:46.755 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:29:46 compute-0 nova_compute[260935]: 2025-10-11 09:29:46.755 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2644: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 17 KiB/s wr, 29 op/s
Oct 11 09:29:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1256189463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:29:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:29:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/622301888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.266 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.390 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.391 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.394 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.394 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.399 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.399 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.611 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.612 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2597MB free_disk=59.739524841308594GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.613 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.613 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.731 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.731 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.732 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.732 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance af8a1ab7-7512-4de4-8493-cfe85095fbc5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.732 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.733 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.808 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:29:47 compute-0 nova_compute[260935]: 2025-10-11 09:29:47.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:48 compute-0 ceph-mon[74313]: pgmap v2644: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 17 KiB/s wr, 29 op/s
Oct 11 09:29:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/622301888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:29:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:29:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2299407443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:29:48 compute-0 nova_compute[260935]: 2025-10-11 09:29:48.352 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:29:48 compute-0 nova_compute[260935]: 2025-10-11 09:29:48.363 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:29:48 compute-0 nova_compute[260935]: 2025-10-11 09:29:48.394 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:29:48 compute-0 nova_compute[260935]: 2025-10-11 09:29:48.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:48 compute-0 nova_compute[260935]: 2025-10-11 09:29:48.424 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:29:48 compute-0 nova_compute[260935]: 2025-10-11 09:29:48.424 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2645: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 19 KiB/s wr, 57 op/s
Oct 11 09:29:49 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2299407443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:29:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:29:50 compute-0 ceph-mon[74313]: pgmap v2645: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 19 KiB/s wr, 57 op/s
Oct 11 09:29:50 compute-0 nova_compute[260935]: 2025-10-11 09:29:50.670 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174975.6681478, 80209953-3c4c-4932-a9a3-8166c70e1029 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:29:50 compute-0 nova_compute[260935]: 2025-10-11 09:29:50.670 2 INFO nova.compute.manager [-] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] VM Stopped (Lifecycle Event)
Oct 11 09:29:50 compute-0 nova_compute[260935]: 2025-10-11 09:29:50.697 2 DEBUG nova.compute.manager [None req-cf949d5a-071b-4c69-9967-4b12635529af - - - - - -] [instance: 80209953-3c4c-4932-a9a3-8166c70e1029] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:29:50 compute-0 podman[407807]: 2025-10-11 09:29:50.770187829 +0000 UTC m=+0.065751206 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:29:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2646: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 6.0 KiB/s wr, 46 op/s
Oct 11 09:29:51 compute-0 ceph-mon[74313]: pgmap v2646: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 6.0 KiB/s wr, 46 op/s
Oct 11 09:29:52 compute-0 nova_compute[260935]: 2025-10-11 09:29:52.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2647: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 6.0 KiB/s wr, 46 op/s
Oct 11 09:29:53 compute-0 nova_compute[260935]: 2025-10-11 09:29:53.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:53 compute-0 sshd-session[407828]: Invalid user admin from 165.232.82.252 port 34564
Oct 11 09:29:54 compute-0 sshd-session[407828]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:29:54 compute-0 sshd-session[407828]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:29:54 compute-0 ceph-mon[74313]: pgmap v2647: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 6.0 KiB/s wr, 46 op/s
Oct 11 09:29:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:29:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:54.659 162924 DEBUG eventlet.wsgi.server [-] (162924) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 11 09:29:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:54.661 162924 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Oct 11 09:29:54 compute-0 ovn_metadata_agent[162810]: Accept: */*
Oct 11 09:29:54 compute-0 ovn_metadata_agent[162810]: Connection: close
Oct 11 09:29:54 compute-0 ovn_metadata_agent[162810]: Content-Type: text/plain
Oct 11 09:29:54 compute-0 ovn_metadata_agent[162810]: Host: 169.254.169.254
Oct 11 09:29:54 compute-0 ovn_metadata_agent[162810]: User-Agent: curl/7.84.0
Oct 11 09:29:54 compute-0 ovn_metadata_agent[162810]: X-Forwarded-For: 10.100.0.7
Oct 11 09:29:54 compute-0 ovn_metadata_agent[162810]: X-Ovn-Network-Id: 079fe911-cb70-4ab3-8112-a68496105754 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 11 09:29:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:29:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:29:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:29:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:29:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:29:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:29:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:29:54
Oct 11 09:29:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:29:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:29:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'vms', 'backups']
Oct 11 09:29:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:29:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2648: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 11 09:29:55 compute-0 ovn_controller[152945]: 2025-10-11T09:29:55Z|01496|binding|INFO|Releasing lport 853c3a89-16e1-4df4-8c36-5b3e31f01524 from this chassis (sb_readonly=0)
Oct 11 09:29:55 compute-0 ovn_controller[152945]: 2025-10-11T09:29:55Z|01497|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:29:55 compute-0 ovn_controller[152945]: 2025-10-11T09:29:55Z|01498|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:29:55 compute-0 nova_compute[260935]: 2025-10-11 09:29:55.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:29:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:29:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:29:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:29:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:29:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:29:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:29:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:29:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:29:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:29:55 compute-0 podman[407830]: 2025-10-11 09:29:55.801696281 +0000 UTC m=+0.099178800 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001)
Oct 11 09:29:55 compute-0 haproxy-metadata-proxy-079fe911-cb70-4ab3-8112-a68496105754[407479]: 10.100.0.7:60544 [11/Oct/2025:09:29:54.657] listener listener/metadata 0/0/0/1203/1203 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Oct 11 09:29:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:55.859 162924 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 11 09:29:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:55.861 162924 INFO eventlet.wsgi.server [-] 10.100.0.7,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.1997545
Oct 11 09:29:56 compute-0 sshd-session[407828]: Failed password for invalid user admin from 165.232.82.252 port 34564 ssh2
Oct 11 09:29:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:56.022 162924 DEBUG eventlet.wsgi.server [-] (162924) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 11 09:29:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:56.023 162924 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Oct 11 09:29:56 compute-0 ovn_metadata_agent[162810]: Accept: */*
Oct 11 09:29:56 compute-0 ovn_metadata_agent[162810]: Connection: close
Oct 11 09:29:56 compute-0 ovn_metadata_agent[162810]: Content-Length: 100
Oct 11 09:29:56 compute-0 ovn_metadata_agent[162810]: Content-Type: application/x-www-form-urlencoded
Oct 11 09:29:56 compute-0 ovn_metadata_agent[162810]: Host: 169.254.169.254
Oct 11 09:29:56 compute-0 ovn_metadata_agent[162810]: User-Agent: curl/7.84.0
Oct 11 09:29:56 compute-0 ovn_metadata_agent[162810]: X-Forwarded-For: 10.100.0.7
Oct 11 09:29:56 compute-0 ovn_metadata_agent[162810]: X-Ovn-Network-Id: 079fe911-cb70-4ab3-8112-a68496105754
Oct 11 09:29:56 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:29:56 compute-0 ovn_metadata_agent[162810]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 11 09:29:56 compute-0 ceph-mon[74313]: pgmap v2648: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 11 09:29:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:56.313 162924 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 11 09:29:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:56.314 162924 INFO eventlet.wsgi.server [-] 10.100.0.7,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2909412
Oct 11 09:29:56 compute-0 haproxy-metadata-proxy-079fe911-cb70-4ab3-8112-a68496105754[407479]: 10.100.0.7:60552 [11/Oct/2025:09:29:56.021] listener listener/metadata 0/0/0/292/292 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Oct 11 09:29:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2649: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 11 09:29:57 compute-0 sshd-session[407828]: Connection closed by invalid user admin 165.232.82.252 port 34564 [preauth]
Oct 11 09:29:57 compute-0 nova_compute[260935]: 2025-10-11 09:29:57.961 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174982.9584463, 16108538-ef61-4d43-8a42-c324c334138b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:29:57 compute-0 nova_compute[260935]: 2025-10-11 09:29:57.962 2 INFO nova.compute.manager [-] [instance: 16108538-ef61-4d43-8a42-c324c334138b] VM Stopped (Lifecycle Event)
Oct 11 09:29:57 compute-0 nova_compute[260935]: 2025-10-11 09:29:57.990 2 DEBUG nova.compute.manager [None req-e2db50e4-d6f7-43eb-a707-0307841cf8fe - - - - - -] [instance: 16108538-ef61-4d43-8a42-c324c334138b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:29:57 compute-0 nova_compute[260935]: 2025-10-11 09:29:57.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:58 compute-0 ceph-mon[74313]: pgmap v2649: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Oct 11 09:29:58 compute-0 nova_compute[260935]: 2025-10-11 09:29:58.272 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:58 compute-0 nova_compute[260935]: 2025-10-11 09:29:58.273 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:58 compute-0 nova_compute[260935]: 2025-10-11 09:29:58.273 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:58 compute-0 nova_compute[260935]: 2025-10-11 09:29:58.274 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:58 compute-0 nova_compute[260935]: 2025-10-11 09:29:58.274 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:58 compute-0 nova_compute[260935]: 2025-10-11 09:29:58.277 2 INFO nova.compute.manager [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Terminating instance
Oct 11 09:29:58 compute-0 nova_compute[260935]: 2025-10-11 09:29:58.280 2 DEBUG nova.compute.manager [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:29:58 compute-0 nova_compute[260935]: 2025-10-11 09:29:58.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:58 compute-0 kernel: tapc6f854bc-83 (unregistering): left promiscuous mode
Oct 11 09:29:58 compute-0 NetworkManager[44960]: <info>  [1760174998.8526] device (tapc6f854bc-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:29:58 compute-0 ovn_controller[152945]: 2025-10-11T09:29:58Z|01499|binding|INFO|Releasing lport c6f854bc-831f-4bb1-ad9f-e1aa03343d25 from this chassis (sb_readonly=0)
Oct 11 09:29:58 compute-0 ovn_controller[152945]: 2025-10-11T09:29:58Z|01500|binding|INFO|Setting lport c6f854bc-831f-4bb1-ad9f-e1aa03343d25 down in Southbound
Oct 11 09:29:58 compute-0 ovn_controller[152945]: 2025-10-11T09:29:58Z|01501|binding|INFO|Removing iface tapc6f854bc-83 ovn-installed in OVS
Oct 11 09:29:58 compute-0 nova_compute[260935]: 2025-10-11 09:29:58.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:58.873 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:d3:83 10.100.0.7'], port_security=['fa:16:3e:6b:d3:83 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'af8a1ab7-7512-4de4-8493-cfe85095fbc5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-079fe911-cb70-4ab3-8112-a68496105754', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e69b3710dc94c919d8bd75d2a540c10', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0dd44ff9-fcd8-4fbd-bf9d-f19c93cd632d af5d01b3-d9f2-4ea0-a9bb-fc66bdc2afc6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb94e045-6353-4848-a698-5f94ed22b64e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c6f854bc-831f-4bb1-ad9f-e1aa03343d25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:29:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:58.874 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c6f854bc-831f-4bb1-ad9f-e1aa03343d25 in datapath 079fe911-cb70-4ab3-8112-a68496105754 unbound from our chassis
Oct 11 09:29:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:58.876 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 079fe911-cb70-4ab3-8112-a68496105754, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:29:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:58.878 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b37bbfd2-f216-4e72-b57a-1e75757c981d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:58.879 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-079fe911-cb70-4ab3-8112-a68496105754 namespace which is not needed anymore
Oct 11 09:29:58 compute-0 nova_compute[260935]: 2025-10-11 09:29:58.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:58 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d00000086.scope: Deactivated successfully.
Oct 11 09:29:58 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d00000086.scope: Consumed 14.747s CPU time.
Oct 11 09:29:58 compute-0 systemd-machined[215705]: Machine qemu-158-instance-00000086 terminated.
Oct 11 09:29:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2650: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 3.8 KiB/s wr, 29 op/s
Oct 11 09:29:59 compute-0 neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754[407473]: [NOTICE]   (407477) : haproxy version is 2.8.14-c23fe91
Oct 11 09:29:59 compute-0 neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754[407473]: [NOTICE]   (407477) : path to executable is /usr/sbin/haproxy
Oct 11 09:29:59 compute-0 neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754[407473]: [WARNING]  (407477) : Exiting Master process...
Oct 11 09:29:59 compute-0 neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754[407473]: [ALERT]    (407477) : Current worker (407479) exited with code 143 (Terminated)
Oct 11 09:29:59 compute-0 neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754[407473]: [WARNING]  (407477) : All workers exited. Exiting... (0)
Oct 11 09:29:59 compute-0 systemd[1]: libpod-7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc.scope: Deactivated successfully.
Oct 11 09:29:59 compute-0 podman[407873]: 2025-10-11 09:29:59.076546188 +0000 UTC m=+0.090146575 container died 7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.132 2 INFO nova.virt.libvirt.driver [-] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Instance destroyed successfully.
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.133 2 DEBUG nova.objects.instance [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lazy-loading 'resources' on Instance uuid af8a1ab7-7512-4de4-8493-cfe85095fbc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:29:59 compute-0 ceph-mon[74313]: pgmap v2650: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 3.8 KiB/s wr, 29 op/s
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.164 2 DEBUG nova.virt.libvirt.vif [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:29:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1930148258',display_name='tempest-TestServerBasicOps-server-1930148258',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1930148258',id=134,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH102y8/0euXWtDeJlu4Y5Hi5/M8lShv6g1BNE2c8J5VmPE9Sc1i2z1jjNuXYbG9viqCuCV8FoDB59HS7lsdBA/LnJF8uoB7aDjGRRY+ZC3tUP+E3D+ZjN02qpM1ehWg2g==',key_name='tempest-TestServerBasicOps-820613880',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:29:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9e69b3710dc94c919d8bd75d2a540c10',ramdisk_id='',reservation_id='r-dglj62pg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-2065645952',owner_user_name='tempest-TestServerBasicOps-2065645952-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:29:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='95ad8a94007e4ab88fcad372c4695cf5',uuid=af8a1ab7-7512-4de4-8493-cfe85095fbc5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": 
"fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.165 2 DEBUG nova.network.os_vif_util [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Converting VIF {"id": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "address": "fa:16:3e:6b:d3:83", "network": {"id": "079fe911-cb70-4ab3-8112-a68496105754", "bridge": "br-int", "label": "tempest-TestServerBasicOps-693767079-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e69b3710dc94c919d8bd75d2a540c10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f854bc-83", "ovs_interfaceid": "c6f854bc-831f-4bb1-ad9f-e1aa03343d25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.166 2 DEBUG nova.network.os_vif_util [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6b:d3:83,bridge_name='br-int',has_traffic_filtering=True,id=c6f854bc-831f-4bb1-ad9f-e1aa03343d25,network=Network(079fe911-cb70-4ab3-8112-a68496105754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f854bc-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.167 2 DEBUG os_vif [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:d3:83,bridge_name='br-int',has_traffic_filtering=True,id=c6f854bc-831f-4bb1-ad9f-e1aa03343d25,network=Network(079fe911-cb70-4ab3-8112-a68496105754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f854bc-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6f854bc-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.181 2 INFO os_vif [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6b:d3:83,bridge_name='br-int',has_traffic_filtering=True,id=c6f854bc-831f-4bb1-ad9f-e1aa03343d25,network=Network(079fe911-cb70-4ab3-8112-a68496105754),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f854bc-83')
Oct 11 09:29:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc-userdata-shm.mount: Deactivated successfully.
Oct 11 09:29:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c582133b8efc079cf0c19ee163457092710bcac477f70ae60348ebd6ded8cd4e-merged.mount: Deactivated successfully.
Oct 11 09:29:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:29:59 compute-0 podman[407873]: 2025-10-11 09:29:59.352422464 +0000 UTC m=+0.366022881 container cleanup 7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 11 09:29:59 compute-0 systemd[1]: libpod-conmon-7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc.scope: Deactivated successfully.
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.407 2 DEBUG nova.compute.manager [req-2911521d-9e29-4d3c-be7a-333fd013e481 req-df462282-2733-4b99-b2a3-e02f61d9036f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received event network-vif-unplugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.407 2 DEBUG oslo_concurrency.lockutils [req-2911521d-9e29-4d3c-be7a-333fd013e481 req-df462282-2733-4b99-b2a3-e02f61d9036f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.409 2 DEBUG oslo_concurrency.lockutils [req-2911521d-9e29-4d3c-be7a-333fd013e481 req-df462282-2733-4b99-b2a3-e02f61d9036f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.410 2 DEBUG oslo_concurrency.lockutils [req-2911521d-9e29-4d3c-be7a-333fd013e481 req-df462282-2733-4b99-b2a3-e02f61d9036f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.411 2 DEBUG nova.compute.manager [req-2911521d-9e29-4d3c-be7a-333fd013e481 req-df462282-2733-4b99-b2a3-e02f61d9036f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] No waiting events found dispatching network-vif-unplugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.411 2 DEBUG nova.compute.manager [req-2911521d-9e29-4d3c-be7a-333fd013e481 req-df462282-2733-4b99-b2a3-e02f61d9036f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received event network-vif-unplugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:29:59 compute-0 podman[407931]: 2025-10-11 09:29:59.724621408 +0000 UTC m=+0.340116989 container remove 7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:29:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.735 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b4040b48-6fb3-4f48-a2f0-734844a11df5]: (4, ('Sat Oct 11 09:29:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754 (7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc)\n7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc\nSat Oct 11 09:29:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-079fe911-cb70-4ab3-8112-a68496105754 (7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc)\n7c3025809c58e71159b26621f801a618a836a3bf281c33f5a5ecd917b5d02ebc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.738 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1857f95d-1bde-4c28-bbfb-f4ac825eb57c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.740 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap079fe911-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:29:59 compute-0 kernel: tap079fe911-c0: left promiscuous mode
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.777 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3650f5-1c58-4459-99d9-268c476b14d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:59 compute-0 nova_compute[260935]: 2025-10-11 09:29:59.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:29:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.805 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[964a435c-da45-45e8-a873-c8f3bb4a303f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.806 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d4b4934f-b6bb-4db7-80e4-e0a819713b7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.826 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[792ea839-065c-44da-ac36-575228e5fc86]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679962, 'reachable_time': 33203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407952, 'error': None, 'target': 'ovnmeta-079fe911-cb70-4ab3-8112-a68496105754', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d079fe911\x2dcb70\x2d4ab3\x2d8112\x2da68496105754.mount: Deactivated successfully.
Oct 11 09:29:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.830 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-079fe911-cb70-4ab3-8112-a68496105754 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:29:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:29:59.830 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[bae90ec0-2e15-4176-907f-abeef58f7a21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:29:59 compute-0 podman[407944]: 2025-10-11 09:29:59.922330698 +0000 UTC m=+0.112312001 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 11 09:30:00 compute-0 podman[407957]: 2025-10-11 09:30:00.006914105 +0000 UTC m=+0.124404282 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 09:30:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2651: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 1.7 KiB/s wr, 1 op/s
Oct 11 09:30:01 compute-0 ceph-mon[74313]: pgmap v2651: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 1.7 KiB/s wr, 1 op/s
Oct 11 09:30:01 compute-0 nova_compute[260935]: 2025-10-11 09:30:01.573 2 DEBUG nova.compute.manager [req-8feee4d7-cb6d-43b7-a103-8685f4ceb280 req-23860e84-12c6-4664-88a3-374ea590f401 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received event network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:30:01 compute-0 nova_compute[260935]: 2025-10-11 09:30:01.573 2 DEBUG oslo_concurrency.lockutils [req-8feee4d7-cb6d-43b7-a103-8685f4ceb280 req-23860e84-12c6-4664-88a3-374ea590f401 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:30:01 compute-0 nova_compute[260935]: 2025-10-11 09:30:01.574 2 DEBUG oslo_concurrency.lockutils [req-8feee4d7-cb6d-43b7-a103-8685f4ceb280 req-23860e84-12c6-4664-88a3-374ea590f401 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:30:01 compute-0 nova_compute[260935]: 2025-10-11 09:30:01.574 2 DEBUG oslo_concurrency.lockutils [req-8feee4d7-cb6d-43b7-a103-8685f4ceb280 req-23860e84-12c6-4664-88a3-374ea590f401 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:30:01 compute-0 nova_compute[260935]: 2025-10-11 09:30:01.574 2 DEBUG nova.compute.manager [req-8feee4d7-cb6d-43b7-a103-8685f4ceb280 req-23860e84-12c6-4664-88a3-374ea590f401 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] No waiting events found dispatching network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:30:01 compute-0 nova_compute[260935]: 2025-10-11 09:30:01.574 2 WARNING nova.compute.manager [req-8feee4d7-cb6d-43b7-a103-8685f4ceb280 req-23860e84-12c6-4664-88a3-374ea590f401 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received unexpected event network-vif-plugged-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 for instance with vm_state active and task_state deleting.
Oct 11 09:30:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2652: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 2.2 KiB/s wr, 18 op/s
Oct 11 09:30:03 compute-0 ceph-mon[74313]: pgmap v2652: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 2.2 KiB/s wr, 18 op/s
Oct 11 09:30:03 compute-0 nova_compute[260935]: 2025-10-11 09:30:03.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:04 compute-0 nova_compute[260935]: 2025-10-11 09:30:04.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:30:04 compute-0 nova_compute[260935]: 2025-10-11 09:30:04.845 2 INFO nova.virt.libvirt.driver [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Deleting instance files /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5_del
Oct 11 09:30:04 compute-0 nova_compute[260935]: 2025-10-11 09:30:04.846 2 INFO nova.virt.libvirt.driver [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Deletion of /var/lib/nova/instances/af8a1ab7-7512-4de4-8493-cfe85095fbc5_del complete
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2653: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.2 KiB/s wr, 17 op/s
Oct 11 09:30:05 compute-0 nova_compute[260935]: 2025-10-11 09:30:05.154 2 INFO nova.compute.manager [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Took 6.87 seconds to destroy the instance on the hypervisor.
Oct 11 09:30:05 compute-0 nova_compute[260935]: 2025-10-11 09:30:05.155 2 DEBUG oslo.service.loopingcall [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:30:05 compute-0 nova_compute[260935]: 2025-10-11 09:30:05.156 2 DEBUG nova.compute.manager [-] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:30:05 compute-0 nova_compute[260935]: 2025-10-11 09:30:05.156 2 DEBUG nova.network.neutron [-] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026301752661652667 of space, bias 1.0, pg target 0.78905257984958 quantized to 32 (current 32)
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:30:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:30:06 compute-0 ceph-mon[74313]: pgmap v2653: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.2 KiB/s wr, 17 op/s
Oct 11 09:30:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2654: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.2 KiB/s wr, 17 op/s
Oct 11 09:30:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:30:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 55K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1404 writes, 6317 keys, 1404 commit groups, 1.0 writes per commit group, ingest: 8.77 MB, 0.01 MB/s
                                           Interval WAL: 1404 writes, 1404 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     71.9      0.91              0.27        38    0.024       0      0       0.0       0.0
                                             L6      1/0    7.83 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.6    162.3    136.1      2.20              1.22        37    0.059    223K    20K       0.0       0.0
                                            Sum      1/0    7.83 MB   0.0      0.3     0.1      0.3       0.4      0.1       0.0   5.6    114.8    117.3      3.11              1.48        75    0.041    223K    20K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.9    120.9    120.8      0.43              0.23        10    0.043     38K   2530       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    162.3    136.1      2.20              1.22        37    0.059    223K    20K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     72.2      0.91              0.27        37    0.024       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.064, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.36 GB write, 0.08 MB/s write, 0.35 GB read, 0.07 MB/s read, 3.1 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 304.00 MB usage: 40.58 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000398 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2665,38.95 MB,12.8112%) FilterBlock(76,632.23 KB,0.203097%) IndexBlock(76,1.02 MB,0.334258%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 09:30:08 compute-0 ceph-mon[74313]: pgmap v2654: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.2 KiB/s wr, 17 op/s
Oct 11 09:30:08 compute-0 nova_compute[260935]: 2025-10-11 09:30:08.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:08 compute-0 nova_compute[260935]: 2025-10-11 09:30:08.769 2 DEBUG nova.network.neutron [-] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:30:08 compute-0 nova_compute[260935]: 2025-10-11 09:30:08.916 2 DEBUG nova.compute.manager [req-1604b474-7ad4-479e-9b02-54dcf915433e req-8505ceaf-fe27-4962-bdba-34ad083b8bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Received event network-vif-deleted-c6f854bc-831f-4bb1-ad9f-e1aa03343d25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:30:08 compute-0 nova_compute[260935]: 2025-10-11 09:30:08.916 2 INFO nova.compute.manager [req-1604b474-7ad4-479e-9b02-54dcf915433e req-8505ceaf-fe27-4962-bdba-34ad083b8bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Neutron deleted interface c6f854bc-831f-4bb1-ad9f-e1aa03343d25; detaching it from the instance and deleting it from the info cache
Oct 11 09:30:08 compute-0 nova_compute[260935]: 2025-10-11 09:30:08.917 2 DEBUG nova.network.neutron [req-1604b474-7ad4-479e-9b02-54dcf915433e req-8505ceaf-fe27-4962-bdba-34ad083b8bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:30:08 compute-0 nova_compute[260935]: 2025-10-11 09:30:08.919 2 INFO nova.compute.manager [-] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Took 3.76 seconds to deallocate network for instance.
Oct 11 09:30:08 compute-0 nova_compute[260935]: 2025-10-11 09:30:08.977 2 DEBUG nova.compute.manager [req-1604b474-7ad4-479e-9b02-54dcf915433e req-8505ceaf-fe27-4962-bdba-34ad083b8bfd e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Detach interface failed, port_id=c6f854bc-831f-4bb1-ad9f-e1aa03343d25, reason: Instance af8a1ab7-7512-4de4-8493-cfe85095fbc5 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 09:30:09 compute-0 nova_compute[260935]: 2025-10-11 09:30:09.006 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:30:09 compute-0 nova_compute[260935]: 2025-10-11 09:30:09.006 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:30:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2655: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Oct 11 09:30:09 compute-0 nova_compute[260935]: 2025-10-11 09:30:09.103 2 DEBUG oslo_concurrency.processutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:30:09 compute-0 nova_compute[260935]: 2025-10-11 09:30:09.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:09 compute-0 ceph-mon[74313]: pgmap v2655: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Oct 11 09:30:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:30:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:30:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1609355747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:30:09 compute-0 nova_compute[260935]: 2025-10-11 09:30:09.702 2 DEBUG oslo_concurrency.processutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:30:09 compute-0 nova_compute[260935]: 2025-10-11 09:30:09.711 2 DEBUG nova.compute.provider_tree [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:30:09 compute-0 nova_compute[260935]: 2025-10-11 09:30:09.755 2 DEBUG nova.scheduler.client.report [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:30:09 compute-0 nova_compute[260935]: 2025-10-11 09:30:09.817 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:30:09 compute-0 nova_compute[260935]: 2025-10-11 09:30:09.853 2 INFO nova.scheduler.client.report [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Deleted allocations for instance af8a1ab7-7512-4de4-8493-cfe85095fbc5
Oct 11 09:30:09 compute-0 nova_compute[260935]: 2025-10-11 09:30:09.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:09 compute-0 nova_compute[260935]: 2025-10-11 09:30:09.986 2 DEBUG oslo_concurrency.lockutils [None req-795b8d2a-821d-4871-9132-7fac6bfa75ba 95ad8a94007e4ab88fcad372c4695cf5 9e69b3710dc94c919d8bd75d2a540c10 - - default default] Lock "af8a1ab7-7512-4de4-8493-cfe85095fbc5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:30:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1609355747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:30:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2656: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 11 09:30:11 compute-0 ceph-mon[74313]: pgmap v2656: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 11 09:30:12 compute-0 sudo[408013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:30:12 compute-0 sudo[408013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:12 compute-0 sudo[408013]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:12 compute-0 sudo[408038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:30:12 compute-0 sudo[408038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:12 compute-0 sudo[408038]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:12 compute-0 sudo[408063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:30:12 compute-0 sudo[408063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:12 compute-0 sudo[408063]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:12 compute-0 sudo[408088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:30:12 compute-0 sudo[408088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2657: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 11 09:30:13 compute-0 sudo[408088]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:30:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:30:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:30:13 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:30:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:30:13 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:30:13 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 0f2bc4d2-f290-4393-81e1-63622da7154b does not exist
Oct 11 09:30:13 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev c8e626a2-150f-4606-a299-982dfbb50cc3 does not exist
Oct 11 09:30:13 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9820134d-fb49-4372-9f8f-7b145d3ecaf8 does not exist
Oct 11 09:30:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:30:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:30:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:30:13 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:30:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:30:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:30:13 compute-0 sudo[408144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:30:13 compute-0 sudo[408144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:13 compute-0 sudo[408144]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:13 compute-0 nova_compute[260935]: 2025-10-11 09:30:13.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:13 compute-0 sudo[408169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:30:13 compute-0 sudo[408169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:13 compute-0 sudo[408169]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:13 compute-0 sudo[408194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:30:13 compute-0 sudo[408194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:13 compute-0 sudo[408194]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:13 compute-0 sudo[408219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:30:13 compute-0 sudo[408219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:14 compute-0 ceph-mon[74313]: pgmap v2657: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 11 09:30:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:30:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:30:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:30:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:30:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:30:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:30:14 compute-0 podman[408282]: 2025-10-11 09:30:14.124178577 +0000 UTC m=+0.061287681 container create 28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gauss, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 09:30:14 compute-0 nova_compute[260935]: 2025-10-11 09:30:14.129 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760174999.1259048, af8a1ab7-7512-4de4-8493-cfe85095fbc5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:30:14 compute-0 nova_compute[260935]: 2025-10-11 09:30:14.130 2 INFO nova.compute.manager [-] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] VM Stopped (Lifecycle Event)
Oct 11 09:30:14 compute-0 systemd[1]: Started libpod-conmon-28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046.scope.
Oct 11 09:30:14 compute-0 nova_compute[260935]: 2025-10-11 09:30:14.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:14 compute-0 podman[408282]: 2025-10-11 09:30:14.103313308 +0000 UTC m=+0.040422392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:30:14 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:30:14 compute-0 podman[408282]: 2025-10-11 09:30:14.252016185 +0000 UTC m=+0.189125309 container init 28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gauss, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:30:14 compute-0 podman[408282]: 2025-10-11 09:30:14.265053533 +0000 UTC m=+0.202162597 container start 28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gauss, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 09:30:14 compute-0 podman[408282]: 2025-10-11 09:30:14.269008944 +0000 UTC m=+0.206118098 container attach 28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 09:30:14 compute-0 systemd[1]: libpod-28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046.scope: Deactivated successfully.
Oct 11 09:30:14 compute-0 fervent_gauss[408298]: 167 167
Oct 11 09:30:14 compute-0 conmon[408298]: conmon 28803cf2813d347abadb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046.scope/container/memory.events
Oct 11 09:30:14 compute-0 podman[408282]: 2025-10-11 09:30:14.276021842 +0000 UTC m=+0.213130946 container died 28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 09:30:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8b44d551d2ba1f06c6097fbd12ac347f7cd4c0e5686e3651f7762b29540a853-merged.mount: Deactivated successfully.
Oct 11 09:30:14 compute-0 podman[408282]: 2025-10-11 09:30:14.335528622 +0000 UTC m=+0.272637716 container remove 28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gauss, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 09:30:14 compute-0 systemd[1]: libpod-conmon-28803cf2813d347abadbca604d17f8f83db8205210869a9d0b4b8a115884a046.scope: Deactivated successfully.
Oct 11 09:30:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:30:14 compute-0 nova_compute[260935]: 2025-10-11 09:30:14.374 2 DEBUG nova.compute.manager [None req-2bdc7660-e614-49d5-af34-6a77dc85d54c - - - - - -] [instance: af8a1ab7-7512-4de4-8493-cfe85095fbc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:30:14 compute-0 podman[408322]: 2025-10-11 09:30:14.62533621 +0000 UTC m=+0.068275958 container create 72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:30:14 compute-0 systemd[1]: Started libpod-conmon-72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d.scope.
Oct 11 09:30:14 compute-0 podman[408322]: 2025-10-11 09:30:14.597938457 +0000 UTC m=+0.040878265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:30:14 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873005bc29385c5d2bc2851ee880aed2b5cec676b21fec85f659f625ad037b03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873005bc29385c5d2bc2851ee880aed2b5cec676b21fec85f659f625ad037b03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873005bc29385c5d2bc2851ee880aed2b5cec676b21fec85f659f625ad037b03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873005bc29385c5d2bc2851ee880aed2b5cec676b21fec85f659f625ad037b03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873005bc29385c5d2bc2851ee880aed2b5cec676b21fec85f659f625ad037b03/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:30:14 compute-0 podman[408322]: 2025-10-11 09:30:14.75433109 +0000 UTC m=+0.197270878 container init 72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 09:30:14 compute-0 podman[408322]: 2025-10-11 09:30:14.772558975 +0000 UTC m=+0.215498733 container start 72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:30:14 compute-0 podman[408322]: 2025-10-11 09:30:14.777221736 +0000 UTC m=+0.220161544 container attach 72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:30:14 compute-0 nova_compute[260935]: 2025-10-11 09:30:14.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2658: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 09:30:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:15.224 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:30:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:15.226 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:30:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:15.227 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:30:15 compute-0 stupefied_wiles[408338]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:30:15 compute-0 stupefied_wiles[408338]: --> relative data size: 1.0
Oct 11 09:30:15 compute-0 stupefied_wiles[408338]: --> All data devices are unavailable
Oct 11 09:30:15 compute-0 systemd[1]: libpod-72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d.scope: Deactivated successfully.
Oct 11 09:30:15 compute-0 systemd[1]: libpod-72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d.scope: Consumed 1.126s CPU time.
Oct 11 09:30:15 compute-0 conmon[408338]: conmon 72595bcb372bdab64e8e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d.scope/container/memory.events
Oct 11 09:30:15 compute-0 podman[408322]: 2025-10-11 09:30:15.96226786 +0000 UTC m=+1.405207578 container died 72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 09:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-873005bc29385c5d2bc2851ee880aed2b5cec676b21fec85f659f625ad037b03-merged.mount: Deactivated successfully.
Oct 11 09:30:16 compute-0 podman[408322]: 2025-10-11 09:30:16.011436107 +0000 UTC m=+1.454375865 container remove 72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:30:16 compute-0 systemd[1]: libpod-conmon-72595bcb372bdab64e8e061399e18ad5a2cebc0920285c3efdf52efa4429e54d.scope: Deactivated successfully.
Oct 11 09:30:16 compute-0 sudo[408219]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:16 compute-0 ceph-mon[74313]: pgmap v2658: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 09:30:16 compute-0 sudo[408378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:30:16 compute-0 sudo[408378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:16 compute-0 sudo[408378]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:16 compute-0 sudo[408403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:30:16 compute-0 sudo[408403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:16 compute-0 sudo[408403]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:16 compute-0 sudo[408428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:30:16 compute-0 sudo[408428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:16 compute-0 sudo[408428]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:16 compute-0 sudo[408453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:30:16 compute-0 sudo[408453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:16 compute-0 podman[408523]: 2025-10-11 09:30:16.868610446 +0000 UTC m=+0.037545990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:30:16 compute-0 podman[408523]: 2025-10-11 09:30:16.96265275 +0000 UTC m=+0.131588234 container create 97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:30:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2659: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 09:30:17 compute-0 systemd[1]: Started libpod-conmon-97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738.scope.
Oct 11 09:30:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:30:17 compute-0 podman[408523]: 2025-10-11 09:30:17.225231981 +0000 UTC m=+0.394167535 container init 97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 09:30:17 compute-0 podman[408523]: 2025-10-11 09:30:17.237550658 +0000 UTC m=+0.406486142 container start 97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:30:17 compute-0 optimistic_hypatia[408539]: 167 167
Oct 11 09:30:17 compute-0 systemd[1]: libpod-97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738.scope: Deactivated successfully.
Oct 11 09:30:17 compute-0 podman[408523]: 2025-10-11 09:30:17.289971448 +0000 UTC m=+0.458906932 container attach 97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:30:17 compute-0 podman[408523]: 2025-10-11 09:30:17.290607846 +0000 UTC m=+0.459543320 container died 97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:30:17 compute-0 ceph-mon[74313]: pgmap v2659: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 09:30:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0306c4a6d15d2425455907a9920247194b518a871b8374b7e828707b54b05f7d-merged.mount: Deactivated successfully.
Oct 11 09:30:17 compute-0 podman[408523]: 2025-10-11 09:30:17.648436784 +0000 UTC m=+0.817372258 container remove 97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:30:17 compute-0 systemd[1]: libpod-conmon-97ff7ad4928705f9d74cd84fa0f57016c01fa4ca8604d6dca915c5d3382e7738.scope: Deactivated successfully.
Oct 11 09:30:17 compute-0 podman[408565]: 2025-10-11 09:30:17.897478052 +0000 UTC m=+0.035805041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:30:18 compute-0 podman[408565]: 2025-10-11 09:30:18.063658232 +0000 UTC m=+0.201985181 container create 23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 09:30:18 compute-0 systemd[1]: Started libpod-conmon-23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d.scope.
Oct 11 09:30:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7888703b6ebaa21a6c8835339488b6ddcf2ac00235d5d47d2627b1429b45751b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7888703b6ebaa21a6c8835339488b6ddcf2ac00235d5d47d2627b1429b45751b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7888703b6ebaa21a6c8835339488b6ddcf2ac00235d5d47d2627b1429b45751b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7888703b6ebaa21a6c8835339488b6ddcf2ac00235d5d47d2627b1429b45751b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:30:18 compute-0 podman[408565]: 2025-10-11 09:30:18.274764869 +0000 UTC m=+0.413091888 container init 23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 09:30:18 compute-0 podman[408565]: 2025-10-11 09:30:18.281414137 +0000 UTC m=+0.419741046 container start 23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 09:30:18 compute-0 podman[408565]: 2025-10-11 09:30:18.332325204 +0000 UTC m=+0.470652143 container attach 23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 09:30:18 compute-0 nova_compute[260935]: 2025-10-11 09:30:18.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:19 compute-0 great_wiles[408581]: {
Oct 11 09:30:19 compute-0 great_wiles[408581]:     "0": [
Oct 11 09:30:19 compute-0 great_wiles[408581]:         {
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "devices": [
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "/dev/loop3"
Oct 11 09:30:19 compute-0 great_wiles[408581]:             ],
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "lv_name": "ceph_lv0",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "lv_size": "21470642176",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "name": "ceph_lv0",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "tags": {
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.cluster_name": "ceph",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.crush_device_class": "",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.encrypted": "0",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.osd_id": "0",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.type": "block",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.vdo": "0"
Oct 11 09:30:19 compute-0 great_wiles[408581]:             },
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "type": "block",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "vg_name": "ceph_vg0"
Oct 11 09:30:19 compute-0 great_wiles[408581]:         }
Oct 11 09:30:19 compute-0 great_wiles[408581]:     ],
Oct 11 09:30:19 compute-0 great_wiles[408581]:     "1": [
Oct 11 09:30:19 compute-0 great_wiles[408581]:         {
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "devices": [
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "/dev/loop4"
Oct 11 09:30:19 compute-0 great_wiles[408581]:             ],
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "lv_name": "ceph_lv1",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "lv_size": "21470642176",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "name": "ceph_lv1",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "tags": {
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.cluster_name": "ceph",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.crush_device_class": "",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.encrypted": "0",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.osd_id": "1",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.type": "block",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.vdo": "0"
Oct 11 09:30:19 compute-0 great_wiles[408581]:             },
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "type": "block",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "vg_name": "ceph_vg1"
Oct 11 09:30:19 compute-0 great_wiles[408581]:         }
Oct 11 09:30:19 compute-0 great_wiles[408581]:     ],
Oct 11 09:30:19 compute-0 great_wiles[408581]:     "2": [
Oct 11 09:30:19 compute-0 great_wiles[408581]:         {
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "devices": [
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "/dev/loop5"
Oct 11 09:30:19 compute-0 great_wiles[408581]:             ],
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "lv_name": "ceph_lv2",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "lv_size": "21470642176",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "name": "ceph_lv2",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "tags": {
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.cluster_name": "ceph",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.crush_device_class": "",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.encrypted": "0",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.osd_id": "2",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.type": "block",
Oct 11 09:30:19 compute-0 great_wiles[408581]:                 "ceph.vdo": "0"
Oct 11 09:30:19 compute-0 great_wiles[408581]:             },
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "type": "block",
Oct 11 09:30:19 compute-0 great_wiles[408581]:             "vg_name": "ceph_vg2"
Oct 11 09:30:19 compute-0 great_wiles[408581]:         }
Oct 11 09:30:19 compute-0 great_wiles[408581]:     ]
Oct 11 09:30:19 compute-0 great_wiles[408581]: }
Oct 11 09:30:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2660: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 09:30:19 compute-0 systemd[1]: libpod-23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d.scope: Deactivated successfully.
Oct 11 09:30:19 compute-0 podman[408565]: 2025-10-11 09:30:19.051462268 +0000 UTC m=+1.189789227 container died 23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:30:19 compute-0 nova_compute[260935]: 2025-10-11 09:30:19.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:30:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7888703b6ebaa21a6c8835339488b6ddcf2ac00235d5d47d2627b1429b45751b-merged.mount: Deactivated successfully.
Oct 11 09:30:19 compute-0 ceph-mon[74313]: pgmap v2660: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 09:30:19 compute-0 podman[408565]: 2025-10-11 09:30:19.974176248 +0000 UTC m=+2.112503197 container remove 23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 09:30:20 compute-0 sudo[408453]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:20 compute-0 systemd[1]: libpod-conmon-23e9ecf0d7d90560609d733e48c6488306df441ecee90abadec408a592a7193d.scope: Deactivated successfully.
Oct 11 09:30:20 compute-0 sudo[408602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:30:20 compute-0 sudo[408602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:20 compute-0 sudo[408602]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:20 compute-0 sudo[408627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:30:20 compute-0 sudo[408627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:20 compute-0 sudo[408627]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:20 compute-0 sudo[408652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:30:20 compute-0 sudo[408652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:20 compute-0 sudo[408652]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:20 compute-0 sudo[408677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:30:20 compute-0 sudo[408677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:20.727 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:5b:95 10.100.0.2 2001:db8::f816:3eff:fed0:5b95'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fed0:5b95/64', 'neutron:device_id': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09ee216c-15c4-4f81-ac68-20fd45768bc8, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3d1ec619-c133-47e3-8121-0a84b73ae16e) old=Port_Binding(mac=['fa:16:3e:d0:5b:95 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:30:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:20.729 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3d1ec619-c133-47e3-8121-0a84b73ae16e in datapath b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 updated
Oct 11 09:30:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:20.730 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:30:20 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:20.731 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7050fffc-13e4-45ef-8e26-8b01bfd7234f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:20 compute-0 podman[408744]: 2025-10-11 09:30:20.805893049 +0000 UTC m=+0.027914699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:30:20 compute-0 podman[408744]: 2025-10-11 09:30:20.921535213 +0000 UTC m=+0.143556813 container create 9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 09:30:21 compute-0 systemd[1]: Started libpod-conmon-9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0.scope.
Oct 11 09:30:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:30:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2661: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:21 compute-0 podman[408744]: 2025-10-11 09:30:21.102536691 +0000 UTC m=+0.324558341 container init 9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:30:21 compute-0 podman[408758]: 2025-10-11 09:30:21.11278116 +0000 UTC m=+0.130904005 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 09:30:21 compute-0 podman[408744]: 2025-10-11 09:30:21.120139027 +0000 UTC m=+0.342160607 container start 9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:30:21 compute-0 silly_banzai[408771]: 167 167
Oct 11 09:30:21 compute-0 systemd[1]: libpod-9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0.scope: Deactivated successfully.
Oct 11 09:30:21 compute-0 podman[408744]: 2025-10-11 09:30:21.138565417 +0000 UTC m=+0.360587067 container attach 9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 09:30:21 compute-0 podman[408744]: 2025-10-11 09:30:21.139872934 +0000 UTC m=+0.361894544 container died 9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 09:30:21 compute-0 ceph-mon[74313]: pgmap v2661: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb530d299d53db85513da58367e93e72dc76083c65551a7bbe6a15d03a66746f-merged.mount: Deactivated successfully.
Oct 11 09:30:21 compute-0 podman[408744]: 2025-10-11 09:30:21.408462634 +0000 UTC m=+0.630484204 container remove 9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banzai, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:30:21 compute-0 systemd[1]: libpod-conmon-9ab8e44127a404f22d1c91abc10ae3bd6f1395958e389b92c9e3acca0023b4b0.scope: Deactivated successfully.
Oct 11 09:30:21 compute-0 podman[408806]: 2025-10-11 09:30:21.687621583 +0000 UTC m=+0.046649728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:30:21 compute-0 podman[408806]: 2025-10-11 09:30:21.807601138 +0000 UTC m=+0.166629233 container create 0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:30:21 compute-0 systemd[1]: Started libpod-conmon-0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8.scope.
Oct 11 09:30:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88585cc36d87cbd28670095b47f2d5c86258b4e9e2d197963e181400015332a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88585cc36d87cbd28670095b47f2d5c86258b4e9e2d197963e181400015332a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88585cc36d87cbd28670095b47f2d5c86258b4e9e2d197963e181400015332a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88585cc36d87cbd28670095b47f2d5c86258b4e9e2d197963e181400015332a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:30:22 compute-0 podman[408806]: 2025-10-11 09:30:22.091452629 +0000 UTC m=+0.450480764 container init 0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 09:30:22 compute-0 podman[408806]: 2025-10-11 09:30:22.102936423 +0000 UTC m=+0.461964488 container start 0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 09:30:22 compute-0 podman[408806]: 2025-10-11 09:30:22.217093225 +0000 UTC m=+0.576121320 container attach 0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:30:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2662: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]: {
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:         "osd_id": 2,
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:         "type": "bluestore"
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:     },
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:         "osd_id": 0,
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:         "type": "bluestore"
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:     },
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:         "osd_id": 1,
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:         "type": "bluestore"
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]:     }
Oct 11 09:30:23 compute-0 adoring_mccarthy[408824]: }
Oct 11 09:30:23 compute-0 systemd[1]: libpod-0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8.scope: Deactivated successfully.
Oct 11 09:30:23 compute-0 podman[408806]: 2025-10-11 09:30:23.258739701 +0000 UTC m=+1.617767786 container died 0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:30:23 compute-0 systemd[1]: libpod-0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8.scope: Consumed 1.154s CPU time.
Oct 11 09:30:23 compute-0 ceph-mon[74313]: pgmap v2662: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-88585cc36d87cbd28670095b47f2d5c86258b4e9e2d197963e181400015332a0-merged.mount: Deactivated successfully.
Oct 11 09:30:23 compute-0 nova_compute[260935]: 2025-10-11 09:30:23.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:23 compute-0 podman[408806]: 2025-10-11 09:30:23.8157327 +0000 UTC m=+2.174760795 container remove 0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 09:30:23 compute-0 systemd[1]: libpod-conmon-0cf40633061737a03bb814acd5b2436af13edd4c7d9b499ff448ad2821bf70a8.scope: Deactivated successfully.
Oct 11 09:30:23 compute-0 sudo[408677]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:30:23 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:30:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:30:23 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:30:23 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev bd50cfd7-41c4-4188-bee8-1c32a6d012d4 does not exist
Oct 11 09:30:23 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 539918a0-1ed3-4312-85b6-0aa865390a13 does not exist
Oct 11 09:30:24 compute-0 sudo[408873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:30:24 compute-0 sudo[408873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:24 compute-0 sudo[408873]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:24 compute-0 sshd-session[408852]: Invalid user farm from 155.4.244.179 port 51568
Oct 11 09:30:24 compute-0 sshd-session[408852]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:30:24 compute-0 sshd-session[408852]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:30:24 compute-0 sudo[408898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:30:24 compute-0 sudo[408898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:30:24 compute-0 sudo[408898]: pam_unix(sudo:session): session closed for user root
Oct 11 09:30:24 compute-0 nova_compute[260935]: 2025-10-11 09:30:24.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:24 compute-0 ovn_controller[152945]: 2025-10-11T09:30:24Z|01502|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:30:24 compute-0 ovn_controller[152945]: 2025-10-11T09:30:24Z|01503|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:30:24 compute-0 nova_compute[260935]: 2025-10-11 09:30:24.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:30:24 compute-0 ovn_controller[152945]: 2025-10-11T09:30:24Z|01504|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:30:24 compute-0 ovn_controller[152945]: 2025-10-11T09:30:24Z|01505|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:30:24 compute-0 nova_compute[260935]: 2025-10-11 09:30:24.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:30:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:30:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:30:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:30:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:30:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:30:24 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:30:24 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:30:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2663: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:25 compute-0 nova_compute[260935]: 2025-10-11 09:30:25.398 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:30:25 compute-0 ceph-mon[74313]: pgmap v2663: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:26 compute-0 sshd-session[408852]: Failed password for invalid user farm from 155.4.244.179 port 51568 ssh2
Oct 11 09:30:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:30:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/638731363' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:30:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:30:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/638731363' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:30:26 compute-0 podman[408924]: 2025-10-11 09:30:26.824739516 +0000 UTC m=+0.113371211 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 09:30:26 compute-0 sshd-session[408852]: Received disconnect from 155.4.244.179 port 51568:11: Bye Bye [preauth]
Oct 11 09:30:26 compute-0 sshd-session[408852]: Disconnected from invalid user farm 155.4.244.179 port 51568 [preauth]
Oct 11 09:30:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/638731363' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:30:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/638731363' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:30:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2664: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:27 compute-0 ceph-mon[74313]: pgmap v2664: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:28 compute-0 nova_compute[260935]: 2025-10-11 09:30:28.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:28 compute-0 sshd-session[408945]: Invalid user admin from 165.232.82.252 port 54772
Oct 11 09:30:28 compute-0 sshd-session[408945]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:30:28 compute-0 sshd-session[408945]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:30:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:28.943 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:5b:95 10.100.0.2 2001:db8:0:1:f816:3eff:fed0:5b95 2001:db8::f816:3eff:fed0:5b95'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fed0:5b95/64 2001:db8::f816:3eff:fed0:5b95/64', 'neutron:device_id': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09ee216c-15c4-4f81-ac68-20fd45768bc8, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3d1ec619-c133-47e3-8121-0a84b73ae16e) old=Port_Binding(mac=['fa:16:3e:d0:5b:95 10.100.0.2 2001:db8::f816:3eff:fed0:5b95'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fed0:5b95/64', 'neutron:device_id': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:30:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:28.945 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3d1ec619-c133-47e3-8121-0a84b73ae16e in datapath b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 updated
Oct 11 09:30:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:28.948 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:30:28 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:28.948 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[16d5d547-5ba9-4f28-ae70-fa51557bc25b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2665: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:29 compute-0 nova_compute[260935]: 2025-10-11 09:30:29.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:30:30 compute-0 ceph-mon[74313]: pgmap v2665: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:30 compute-0 sshd-session[408945]: Failed password for invalid user admin from 165.232.82.252 port 54772 ssh2
Oct 11 09:30:30 compute-0 podman[408947]: 2025-10-11 09:30:30.813738118 +0000 UTC m=+0.099548731 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:30:30 compute-0 podman[408948]: 2025-10-11 09:30:30.902810991 +0000 UTC m=+0.184115417 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible)
Oct 11 09:30:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2666: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:32 compute-0 sshd-session[408945]: Connection closed by invalid user admin 165.232.82.252 port 54772 [preauth]
Oct 11 09:30:32 compute-0 ceph-mon[74313]: pgmap v2666: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2667: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:33 compute-0 nova_compute[260935]: 2025-10-11 09:30:33.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:34 compute-0 ceph-mon[74313]: pgmap v2667: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:34 compute-0 nova_compute[260935]: 2025-10-11 09:30:34.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:30:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2668: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:35 compute-0 ceph-mon[74313]: pgmap v2668: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:36 compute-0 nova_compute[260935]: 2025-10-11 09:30:36.853 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "811aca81-b712-4b96-a66c-8108b7791b3c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:30:36 compute-0 nova_compute[260935]: 2025-10-11 09:30:36.854 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:30:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2669: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:37 compute-0 nova_compute[260935]: 2025-10-11 09:30:37.079 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:30:37 compute-0 nova_compute[260935]: 2025-10-11 09:30:37.306 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:30:37 compute-0 nova_compute[260935]: 2025-10-11 09:30:37.307 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:30:37 compute-0 nova_compute[260935]: 2025-10-11 09:30:37.319 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:30:37 compute-0 nova_compute[260935]: 2025-10-11 09:30:37.319 2 INFO nova.compute.claims [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:30:37 compute-0 nova_compute[260935]: 2025-10-11 09:30:37.763 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:30:38 compute-0 ceph-mon[74313]: pgmap v2669: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:30:38 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3677823553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:30:38 compute-0 nova_compute[260935]: 2025-10-11 09:30:38.256 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:30:38 compute-0 nova_compute[260935]: 2025-10-11 09:30:38.264 2 DEBUG nova.compute.provider_tree [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:30:38 compute-0 nova_compute[260935]: 2025-10-11 09:30:38.327 2 DEBUG nova.scheduler.client.report [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:30:38 compute-0 nova_compute[260935]: 2025-10-11 09:30:38.386 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:30:38 compute-0 nova_compute[260935]: 2025-10-11 09:30:38.388 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:30:38 compute-0 nova_compute[260935]: 2025-10-11 09:30:38.529 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:30:38 compute-0 nova_compute[260935]: 2025-10-11 09:30:38.530 2 DEBUG nova.network.neutron [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:30:38 compute-0 nova_compute[260935]: 2025-10-11 09:30:38.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:38 compute-0 nova_compute[260935]: 2025-10-11 09:30:38.633 2 INFO nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:30:38 compute-0 nova_compute[260935]: 2025-10-11 09:30:38.748 2 DEBUG nova.policy [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:30:38 compute-0 nova_compute[260935]: 2025-10-11 09:30:38.770 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:30:38 compute-0 nova_compute[260935]: 2025-10-11 09:30:38.987 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:30:38 compute-0 nova_compute[260935]: 2025-10-11 09:30:38.989 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:30:38 compute-0 nova_compute[260935]: 2025-10-11 09:30:38.990 2 INFO nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Creating image(s)
Oct 11 09:30:39 compute-0 nova_compute[260935]: 2025-10-11 09:30:39.026 2 DEBUG nova.storage.rbd_utils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 811aca81-b712-4b96-a66c-8108b7791b3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:30:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2670: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:39 compute-0 nova_compute[260935]: 2025-10-11 09:30:39.064 2 DEBUG nova.storage.rbd_utils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 811aca81-b712-4b96-a66c-8108b7791b3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:30:39 compute-0 nova_compute[260935]: 2025-10-11 09:30:39.094 2 DEBUG nova.storage.rbd_utils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 811aca81-b712-4b96-a66c-8108b7791b3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:30:39 compute-0 nova_compute[260935]: 2025-10-11 09:30:39.098 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:30:39 compute-0 nova_compute[260935]: 2025-10-11 09:30:39.194 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:30:39 compute-0 nova_compute[260935]: 2025-10-11 09:30:39.196 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:30:39 compute-0 nova_compute[260935]: 2025-10-11 09:30:39.197 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:30:39 compute-0 nova_compute[260935]: 2025-10-11 09:30:39.198 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:30:39 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3677823553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:30:39 compute-0 ceph-mon[74313]: pgmap v2670: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:39 compute-0 nova_compute[260935]: 2025-10-11 09:30:39.390 2 DEBUG nova.storage.rbd_utils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 811aca81-b712-4b96-a66c-8108b7791b3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:30:39 compute-0 nova_compute[260935]: 2025-10-11 09:30:39.396 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 811aca81-b712-4b96-a66c-8108b7791b3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:30:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:30:39 compute-0 nova_compute[260935]: 2025-10-11 09:30:39.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:39 compute-0 nova_compute[260935]: 2025-10-11 09:30:39.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:30:39 compute-0 nova_compute[260935]: 2025-10-11 09:30:39.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:30:40 compute-0 nova_compute[260935]: 2025-10-11 09:30:40.002 2 DEBUG nova.network.neutron [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Successfully created port: 41c97ff4-4ad5-4d35-ac33-d083a904f55a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:30:40 compute-0 nova_compute[260935]: 2025-10-11 09:30:40.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:30:40 compute-0 nova_compute[260935]: 2025-10-11 09:30:40.706 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 811aca81-b712-4b96-a66c-8108b7791b3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:30:40 compute-0 nova_compute[260935]: 2025-10-11 09:30:40.795 2 DEBUG nova.storage.rbd_utils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 811aca81-b712-4b96-a66c-8108b7791b3c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:30:40 compute-0 nova_compute[260935]: 2025-10-11 09:30:40.923 2 DEBUG nova.objects.instance [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 811aca81-b712-4b96-a66c-8108b7791b3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:30:41 compute-0 nova_compute[260935]: 2025-10-11 09:30:41.049 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:30:41 compute-0 nova_compute[260935]: 2025-10-11 09:30:41.050 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Ensure instance console log exists: /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:30:41 compute-0 nova_compute[260935]: 2025-10-11 09:30:41.050 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:30:41 compute-0 nova_compute[260935]: 2025-10-11 09:30:41.051 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:30:41 compute-0 nova_compute[260935]: 2025-10-11 09:30:41.051 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:30:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2671: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:42 compute-0 ceph-mon[74313]: pgmap v2671: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:30:42 compute-0 nova_compute[260935]: 2025-10-11 09:30:42.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:30:42 compute-0 nova_compute[260935]: 2025-10-11 09:30:42.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:30:42 compute-0 nova_compute[260935]: 2025-10-11 09:30:42.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 09:30:42 compute-0 nova_compute[260935]: 2025-10-11 09:30:42.725 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 09:30:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2672: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:30:43 compute-0 nova_compute[260935]: 2025-10-11 09:30:43.488 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:30:43 compute-0 nova_compute[260935]: 2025-10-11 09:30:43.488 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:30:43 compute-0 nova_compute[260935]: 2025-10-11 09:30:43.489 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:30:43 compute-0 nova_compute[260935]: 2025-10-11 09:30:43.489 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:30:43 compute-0 nova_compute[260935]: 2025-10-11 09:30:43.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:44 compute-0 ceph-mon[74313]: pgmap v2672: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:30:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:30:44 compute-0 nova_compute[260935]: 2025-10-11 09:30:44.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:45 compute-0 nova_compute[260935]: 2025-10-11 09:30:45.049 2 DEBUG nova.network.neutron [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Successfully updated port: 41c97ff4-4ad5-4d35-ac33-d083a904f55a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:30:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2673: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:30:45 compute-0 nova_compute[260935]: 2025-10-11 09:30:45.062 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:30:45 compute-0 nova_compute[260935]: 2025-10-11 09:30:45.062 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:30:45 compute-0 nova_compute[260935]: 2025-10-11 09:30:45.063 2 DEBUG nova.network.neutron [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:30:45 compute-0 ceph-mon[74313]: pgmap v2673: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:30:45 compute-0 nova_compute[260935]: 2025-10-11 09:30:45.184 2 DEBUG nova.compute.manager [req-6c3679e1-8d6f-4a5c-ba63-f78ffb4be5f9 req-33f74171-2d4b-42c4-ac5d-efcb5c7b0eb9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-changed-41c97ff4-4ad5-4d35-ac33-d083a904f55a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:30:45 compute-0 nova_compute[260935]: 2025-10-11 09:30:45.185 2 DEBUG nova.compute.manager [req-6c3679e1-8d6f-4a5c-ba63-f78ffb4be5f9 req-33f74171-2d4b-42c4-ac5d-efcb5c7b0eb9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Refreshing instance network info cache due to event network-changed-41c97ff4-4ad5-4d35-ac33-d083a904f55a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:30:45 compute-0 nova_compute[260935]: 2025-10-11 09:30:45.185 2 DEBUG oslo_concurrency.lockutils [req-6c3679e1-8d6f-4a5c-ba63-f78ffb4be5f9 req-33f74171-2d4b-42c4-ac5d-efcb5c7b0eb9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:30:45 compute-0 nova_compute[260935]: 2025-10-11 09:30:45.257 2 DEBUG nova.network.neutron [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:30:46 compute-0 nova_compute[260935]: 2025-10-11 09:30:46.241 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:30:46 compute-0 nova_compute[260935]: 2025-10-11 09:30:46.403 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:30:46 compute-0 nova_compute[260935]: 2025-10-11 09:30:46.403 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:30:46 compute-0 nova_compute[260935]: 2025-10-11 09:30:46.404 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:30:46 compute-0 nova_compute[260935]: 2025-10-11 09:30:46.404 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:30:46 compute-0 nova_compute[260935]: 2025-10-11 09:30:46.404 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:30:46 compute-0 nova_compute[260935]: 2025-10-11 09:30:46.404 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:30:46 compute-0 nova_compute[260935]: 2025-10-11 09:30:46.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:30:46 compute-0 nova_compute[260935]: 2025-10-11 09:30:46.741 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:30:46 compute-0 nova_compute[260935]: 2025-10-11 09:30:46.742 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:30:46 compute-0 nova_compute[260935]: 2025-10-11 09:30:46.742 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:30:46 compute-0 nova_compute[260935]: 2025-10-11 09:30:46.743 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:30:46 compute-0 nova_compute[260935]: 2025-10-11 09:30:46.743 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:30:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2674: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:30:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:30:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1825597475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.270 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.410 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.411 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.412 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.418 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.418 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.424 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.424 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.687 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.689 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2837MB free_disk=59.80991744995117GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.689 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.689 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.762 2 DEBUG nova.network.neutron [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Updating instance_info_cache with network_info: [{"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.909 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.910 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Instance network_info: |[{"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.910 2 DEBUG oslo_concurrency.lockutils [req-6c3679e1-8d6f-4a5c-ba63-f78ffb4be5f9 req-33f74171-2d4b-42c4-ac5d-efcb5c7b0eb9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.911 2 DEBUG nova.network.neutron [req-6c3679e1-8d6f-4a5c-ba63-f78ffb4be5f9 req-33f74171-2d4b-42c4-ac5d-efcb5c7b0eb9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Refreshing network info cache for port 41c97ff4-4ad5-4d35-ac33-d083a904f55a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.914 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Start _get_guest_xml network_info=[{"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.920 2 WARNING nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.925 2 DEBUG nova.virt.libvirt.host [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.926 2 DEBUG nova.virt.libvirt.host [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.929 2 DEBUG nova.virt.libvirt.host [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.929 2 DEBUG nova.virt.libvirt.host [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.930 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.930 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.931 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.931 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.931 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.931 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.932 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.932 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.932 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.933 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.933 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.933 2 DEBUG nova.virt.hardware [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:30:47 compute-0 nova_compute[260935]: 2025-10-11 09:30:47.936 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:30:48 compute-0 ceph-mon[74313]: pgmap v2674: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:30:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1825597475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:30:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:30:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/272243793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:30:48 compute-0 nova_compute[260935]: 2025-10-11 09:30:48.448 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:30:48 compute-0 nova_compute[260935]: 2025-10-11 09:30:48.485 2 DEBUG nova.storage.rbd_utils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 811aca81-b712-4b96-a66c-8108b7791b3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:30:48 compute-0 nova_compute[260935]: 2025-10-11 09:30:48.490 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:30:48 compute-0 nova_compute[260935]: 2025-10-11 09:30:48.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:48 compute-0 nova_compute[260935]: 2025-10-11 09:30:48.636 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:30:48 compute-0 nova_compute[260935]: 2025-10-11 09:30:48.637 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:30:48 compute-0 nova_compute[260935]: 2025-10-11 09:30:48.637 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:30:48 compute-0 nova_compute[260935]: 2025-10-11 09:30:48.638 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 811aca81-b712-4b96-a66c-8108b7791b3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:30:48 compute-0 nova_compute[260935]: 2025-10-11 09:30:48.638 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:30:48 compute-0 nova_compute[260935]: 2025-10-11 09:30:48.639 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:30:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:30:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/575589019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:30:48 compute-0 nova_compute[260935]: 2025-10-11 09:30:48.954 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:30:48 compute-0 nova_compute[260935]: 2025-10-11 09:30:48.957 2 DEBUG nova.virt.libvirt.vif [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:30:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-739529148',display_name='tempest-TestGettingAddress-server-739529148',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-739529148',id=135,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHRstPG1ot2VOcbBUXtJ+eOOOPIpYxS1823a8lgCgglrjvAd3j//+qkav+rN6NO2rJV6NwAQSbhFZXjoVb6gdqi2VPWucmakmAAgqAmAhQOUxzhrLgrzGURRlRuM8US+xA==',key_name='tempest-TestGettingAddress-264191416',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-jlin4bys',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:30:38Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=811aca81-b712-4b96-a66c-8108b7791b3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:30:48 compute-0 nova_compute[260935]: 2025-10-11 09:30:48.958 2 DEBUG nova.network.os_vif_util [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:30:48 compute-0 nova_compute[260935]: 2025-10-11 09:30:48.960 2 DEBUG nova.network.os_vif_util [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:b0:e4,bridge_name='br-int',has_traffic_filtering=True,id=41c97ff4-4ad5-4d35-ac33-d083a904f55a,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c97ff4-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:30:48 compute-0 nova_compute[260935]: 2025-10-11 09:30:48.962 2 DEBUG nova.objects.instance [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 811aca81-b712-4b96-a66c-8108b7791b3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.019 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:30:49 compute-0 nova_compute[260935]:   <uuid>811aca81-b712-4b96-a66c-8108b7791b3c</uuid>
Oct 11 09:30:49 compute-0 nova_compute[260935]:   <name>instance-00000087</name>
Oct 11 09:30:49 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:30:49 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:30:49 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <nova:name>tempest-TestGettingAddress-server-739529148</nova:name>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:30:47</nova:creationTime>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:30:49 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:30:49 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:30:49 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:30:49 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:30:49 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:30:49 compute-0 nova_compute[260935]:         <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 09:30:49 compute-0 nova_compute[260935]:         <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:30:49 compute-0 nova_compute[260935]:         <nova:port uuid="41c97ff4-4ad5-4d35-ac33-d083a904f55a">
Oct 11 09:30:49 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fee6:b0e4" ipVersion="6"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fee6:b0e4" ipVersion="6"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:30:49 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:30:49 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <system>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <entry name="serial">811aca81-b712-4b96-a66c-8108b7791b3c</entry>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <entry name="uuid">811aca81-b712-4b96-a66c-8108b7791b3c</entry>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     </system>
Oct 11 09:30:49 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:30:49 compute-0 nova_compute[260935]:   <os>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:   </os>
Oct 11 09:30:49 compute-0 nova_compute[260935]:   <features>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:   </features>
Oct 11 09:30:49 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:30:49 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:30:49 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/811aca81-b712-4b96-a66c-8108b7791b3c_disk">
Oct 11 09:30:49 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       </source>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:30:49 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/811aca81-b712-4b96-a66c-8108b7791b3c_disk.config">
Oct 11 09:30:49 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       </source>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:30:49 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:e6:b0:e4"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <target dev="tap41c97ff4-4a"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c/console.log" append="off"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <video>
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     </video>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:30:49 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:30:49 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:30:49 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:30:49 compute-0 nova_compute[260935]: </domain>
Oct 11 09:30:49 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.021 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Preparing to wait for external event network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.021 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.022 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.022 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.023 2 DEBUG nova.virt.libvirt.vif [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:30:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-739529148',display_name='tempest-TestGettingAddress-server-739529148',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-739529148',id=135,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHRstPG1ot2VOcbBUXtJ+eOOOPIpYxS1823a8lgCgglrjvAd3j//+qkav+rN6NO2rJV6NwAQSbhFZXjoVb6gdqi2VPWucmakmAAgqAmAhQOUxzhrLgrzGURRlRuM8US+xA==',key_name='tempest-TestGettingAddress-264191416',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-jlin4bys',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:30:38Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=811aca81-b712-4b96-a66c-8108b7791b3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.023 2 DEBUG nova.network.os_vif_util [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.024 2 DEBUG nova.network.os_vif_util [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:b0:e4,bridge_name='br-int',has_traffic_filtering=True,id=41c97ff4-4ad5-4d35-ac33-d083a904f55a,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c97ff4-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.025 2 DEBUG os_vif [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:b0:e4,bridge_name='br-int',has_traffic_filtering=True,id=41c97ff4-4ad5-4d35-ac33-d083a904f55a,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c97ff4-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.026 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.027 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41c97ff4-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41c97ff4-4a, col_values=(('external_ids', {'iface-id': '41c97ff4-4ad5-4d35-ac33-d083a904f55a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:b0:e4', 'vm-uuid': '811aca81-b712-4b96-a66c-8108b7791b3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:49 compute-0 NetworkManager[44960]: <info>  [1760175049.0349] manager: (tap41c97ff4-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/580)
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.042 2 INFO os_vif [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:b0:e4,bridge_name='br-int',has_traffic_filtering=True,id=41c97ff4-4ad5-4d35-ac33-d083a904f55a,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c97ff4-4a')
Oct 11 09:30:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2675: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:30:49 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/272243793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:30:49 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/575589019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.309 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.310 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.310 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:e6:b0:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.311 2 INFO nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Using config drive
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.350 2 DEBUG nova.storage.rbd_utils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 811aca81-b712-4b96-a66c-8108b7791b3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:30:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:30:49 compute-0 nova_compute[260935]: 2025-10-11 09:30:49.968 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:30:50 compute-0 ceph-mon[74313]: pgmap v2675: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:30:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:30:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3882794878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:30:50 compute-0 nova_compute[260935]: 2025-10-11 09:30:50.450 2 INFO nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Creating config drive at /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c/disk.config
Oct 11 09:30:50 compute-0 nova_compute[260935]: 2025-10-11 09:30:50.459 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpourchqpe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:30:50 compute-0 nova_compute[260935]: 2025-10-11 09:30:50.503 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:30:50 compute-0 nova_compute[260935]: 2025-10-11 09:30:50.512 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:30:50 compute-0 nova_compute[260935]: 2025-10-11 09:30:50.610 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpourchqpe" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:30:50 compute-0 nova_compute[260935]: 2025-10-11 09:30:50.635 2 DEBUG nova.storage.rbd_utils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 811aca81-b712-4b96-a66c-8108b7791b3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:30:50 compute-0 nova_compute[260935]: 2025-10-11 09:30:50.638 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c/disk.config 811aca81-b712-4b96-a66c-8108b7791b3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:30:50 compute-0 nova_compute[260935]: 2025-10-11 09:30:50.689 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:30:50 compute-0 nova_compute[260935]: 2025-10-11 09:30:50.842 2 DEBUG oslo_concurrency.processutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c/disk.config 811aca81-b712-4b96-a66c-8108b7791b3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:30:50 compute-0 nova_compute[260935]: 2025-10-11 09:30:50.843 2 INFO nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Deleting local config drive /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c/disk.config because it was imported into RBD.
Oct 11 09:30:50 compute-0 nova_compute[260935]: 2025-10-11 09:30:50.855 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:30:50 compute-0 nova_compute[260935]: 2025-10-11 09:30:50.857 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:30:50 compute-0 kernel: tap41c97ff4-4a: entered promiscuous mode
Oct 11 09:30:50 compute-0 NetworkManager[44960]: <info>  [1760175050.9235] manager: (tap41c97ff4-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/581)
Oct 11 09:30:50 compute-0 ovn_controller[152945]: 2025-10-11T09:30:50Z|01506|binding|INFO|Claiming lport 41c97ff4-4ad5-4d35-ac33-d083a904f55a for this chassis.
Oct 11 09:30:50 compute-0 ovn_controller[152945]: 2025-10-11T09:30:50Z|01507|binding|INFO|41c97ff4-4ad5-4d35-ac33-d083a904f55a: Claiming fa:16:3e:e6:b0:e4 10.100.0.14 2001:db8:0:1:f816:3eff:fee6:b0e4 2001:db8::f816:3eff:fee6:b0e4
Oct 11 09:30:50 compute-0 nova_compute[260935]: 2025-10-11 09:30:50.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:50 compute-0 nova_compute[260935]: 2025-10-11 09:30:50.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:51 compute-0 systemd-machined[215705]: New machine qemu-159-instance-00000087.
Oct 11 09:30:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2676: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.073 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:b0:e4 10.100.0.14 2001:db8:0:1:f816:3eff:fee6:b0e4 2001:db8::f816:3eff:fee6:b0e4'], port_security=['fa:16:3e:e6:b0:e4 10.100.0.14 2001:db8:0:1:f816:3eff:fee6:b0e4 2001:db8::f816:3eff:fee6:b0e4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fee6:b0e4/64 2001:db8::f816:3eff:fee6:b0e4/64', 'neutron:device_id': '811aca81-b712-4b96-a66c-8108b7791b3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b3d6b51-0356-4b0d-af67-6a82e6bb09bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09ee216c-15c4-4f81-ac68-20fd45768bc8, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=41c97ff4-4ad5-4d35-ac33-d083a904f55a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.074 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 41c97ff4-4ad5-4d35-ac33-d083a904f55a in datapath b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 bound to our chassis
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.076 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8c83697-e12f-4ca4-8bd5-07c36e7b45a8
Oct 11 09:30:51 compute-0 systemd[1]: Started Virtual Machine qemu-159-instance-00000087.
Oct 11 09:30:51 compute-0 ovn_controller[152945]: 2025-10-11T09:30:51Z|01508|binding|INFO|Setting lport 41c97ff4-4ad5-4d35-ac33-d083a904f55a ovn-installed in OVS
Oct 11 09:30:51 compute-0 ovn_controller[152945]: 2025-10-11T09:30:51Z|01509|binding|INFO|Setting lport 41c97ff4-4ad5-4d35-ac33-d083a904f55a up in Southbound
Oct 11 09:30:51 compute-0 systemd-udevd[409364]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:30:51 compute-0 nova_compute[260935]: 2025-10-11 09:30:51.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.088 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[61ff9053-c79f-45ae-9c17-5135d7356830]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.089 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb8c83697-e1 in ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.091 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb8c83697-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.092 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0b39212b-bb57-4000-90d8-3bc59445d6b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.093 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c19adee0-68a6-4573-8c9c-6fd9d50a60ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:51 compute-0 NetworkManager[44960]: <info>  [1760175051.0979] device (tap41c97ff4-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:30:51 compute-0 NetworkManager[44960]: <info>  [1760175051.0989] device (tap41c97ff4-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.115 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8d9aa4-d8b5-4721-9804-89836468bb96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.148 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[51f324df-80fc-4e35-b571-749bc97e7762]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.186 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3f18cd74-91d9-482a-8552-a349bf3b4264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:51 compute-0 NetworkManager[44960]: <info>  [1760175051.1945] manager: (tapb8c83697-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/582)
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.193 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[372e0502-50cb-48a4-b463-c5d4574bc39b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.236 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[19ac61b0-3954-4fb6-acad-ea273c85221c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.239 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4619cb93-e200-4789-8b2f-91a2df00a7be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:51 compute-0 NetworkManager[44960]: <info>  [1760175051.2641] device (tapb8c83697-e0): carrier: link connected
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.269 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0f81a09f-2621-48a4-b192-65764b79b963]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.286 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f744505-0c47-4724-818f-dc37b56cf45a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8c83697-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:5b:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689596, 'reachable_time': 36408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409405, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:51 compute-0 podman[409387]: 2025-10-11 09:30:51.297408645 +0000 UTC m=+0.062356800 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.301 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e50918c8-c95f-4509-8224-738d1ee0acf6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:5b95'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689596, 'tstamp': 689596}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409414, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.317 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b8752176-e613-453f-999b-7fb980293911]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8c83697-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:5b:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689596, 'reachable_time': 36408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 409416, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.343 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c1dee893-8bc3-4788-9e97-9335f832e7fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3882794878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:30:51 compute-0 ceph-mon[74313]: pgmap v2676: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.415 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[df902823-ca22-4ac4-8585-e6416daac4d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.416 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8c83697-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.416 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.417 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8c83697-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:30:51 compute-0 NetworkManager[44960]: <info>  [1760175051.4195] manager: (tapb8c83697-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/583)
Oct 11 09:30:51 compute-0 nova_compute[260935]: 2025-10-11 09:30:51.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:51 compute-0 kernel: tapb8c83697-e0: entered promiscuous mode
Oct 11 09:30:51 compute-0 nova_compute[260935]: 2025-10-11 09:30:51.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.424 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8c83697-e0, col_values=(('external_ids', {'iface-id': '3d1ec619-c133-47e3-8121-0a84b73ae16e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:30:51 compute-0 ovn_controller[152945]: 2025-10-11T09:30:51Z|01510|binding|INFO|Releasing lport 3d1ec619-c133-47e3-8121-0a84b73ae16e from this chassis (sb_readonly=0)
Oct 11 09:30:51 compute-0 nova_compute[260935]: 2025-10-11 09:30:51.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.453 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b8c83697-e12f-4ca4-8bd5-07c36e7b45a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b8c83697-e12f-4ca4-8bd5-07c36e7b45a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.454 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fad1b6a3-aaef-4fa7-b1df-10d2da75d112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.454 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/b8c83697-e12f-4ca4-8bd5-07c36e7b45a8.pid.haproxy
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID b8c83697-e12f-4ca4-8bd5-07c36e7b45a8
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:30:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:51.455 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'env', 'PROCESS_TAG=haproxy-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b8c83697-e12f-4ca4-8bd5-07c36e7b45a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:30:51 compute-0 nova_compute[260935]: 2025-10-11 09:30:51.740 2 DEBUG nova.compute.manager [req-6c90ae79-fe76-4eea-ab8a-19843455a39b req-d5b30dfc-b9c1-472a-bed6-325e6683397c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:30:51 compute-0 nova_compute[260935]: 2025-10-11 09:30:51.741 2 DEBUG oslo_concurrency.lockutils [req-6c90ae79-fe76-4eea-ab8a-19843455a39b req-d5b30dfc-b9c1-472a-bed6-325e6683397c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:30:51 compute-0 nova_compute[260935]: 2025-10-11 09:30:51.741 2 DEBUG oslo_concurrency.lockutils [req-6c90ae79-fe76-4eea-ab8a-19843455a39b req-d5b30dfc-b9c1-472a-bed6-325e6683397c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:30:51 compute-0 nova_compute[260935]: 2025-10-11 09:30:51.741 2 DEBUG oslo_concurrency.lockutils [req-6c90ae79-fe76-4eea-ab8a-19843455a39b req-d5b30dfc-b9c1-472a-bed6-325e6683397c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:30:51 compute-0 nova_compute[260935]: 2025-10-11 09:30:51.742 2 DEBUG nova.compute.manager [req-6c90ae79-fe76-4eea-ab8a-19843455a39b req-d5b30dfc-b9c1-472a-bed6-325e6683397c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Processing event network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:30:51 compute-0 nova_compute[260935]: 2025-10-11 09:30:51.840 2 DEBUG nova.network.neutron [req-6c3679e1-8d6f-4a5c-ba63-f78ffb4be5f9 req-33f74171-2d4b-42c4-ac5d-efcb5c7b0eb9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Updated VIF entry in instance network info cache for port 41c97ff4-4ad5-4d35-ac33-d083a904f55a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:30:51 compute-0 nova_compute[260935]: 2025-10-11 09:30:51.841 2 DEBUG nova.network.neutron [req-6c3679e1-8d6f-4a5c-ba63-f78ffb4be5f9 req-33f74171-2d4b-42c4-ac5d-efcb5c7b0eb9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Updating instance_info_cache with network_info: [{"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:30:51 compute-0 nova_compute[260935]: 2025-10-11 09:30:51.857 2 DEBUG oslo_concurrency.lockutils [req-6c3679e1-8d6f-4a5c-ba63-f78ffb4be5f9 req-33f74171-2d4b-42c4-ac5d-efcb5c7b0eb9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:30:51 compute-0 podman[409490]: 2025-10-11 09:30:51.897877971 +0000 UTC m=+0.081826300 container create cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:30:51 compute-0 systemd[1]: Started libpod-conmon-cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9.scope.
Oct 11 09:30:51 compute-0 podman[409490]: 2025-10-11 09:30:51.859717214 +0000 UTC m=+0.043665623 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:30:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5db13615b0c34d4d58a71f6df0fd0ba5744113047ad9855ac3fe68117d7b25/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:30:52 compute-0 podman[409490]: 2025-10-11 09:30:52.004176951 +0000 UTC m=+0.188125310 container init cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 09:30:52 compute-0 podman[409490]: 2025-10-11 09:30:52.014991666 +0000 UTC m=+0.198939985 container start cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:30:52 compute-0 neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8[409505]: [NOTICE]   (409509) : New worker (409511) forked
Oct 11 09:30:52 compute-0 neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8[409505]: [NOTICE]   (409509) : Loading success.
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.116 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175052.115802, 811aca81-b712-4b96-a66c-8108b7791b3c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.117 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] VM Started (Lifecycle Event)
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.120 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.123 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.127 2 INFO nova.virt.libvirt.driver [-] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Instance spawned successfully.
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.127 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.142 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.148 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.152 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.153 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.153 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.154 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.154 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.155 2 DEBUG nova.virt.libvirt.driver [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.178 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.178 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175052.115922, 811aca81-b712-4b96-a66c-8108b7791b3c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.178 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] VM Paused (Lifecycle Event)
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.210 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.213 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175052.1228755, 811aca81-b712-4b96-a66c-8108b7791b3c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.214 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] VM Resumed (Lifecycle Event)
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.219 2 INFO nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Took 13.23 seconds to spawn the instance on the hypervisor.
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.219 2 DEBUG nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.231 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.234 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.254 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.273 2 INFO nova.compute.manager [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Took 15.01 seconds to build instance.
Oct 11 09:30:52 compute-0 nova_compute[260935]: 2025-10-11 09:30:52.290 2 DEBUG oslo_concurrency.lockutils [None req-5f7fda1c-9004-4734-81d5-03d57f859841 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:30:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2677: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:30:53 compute-0 nova_compute[260935]: 2025-10-11 09:30:53.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:53 compute-0 nova_compute[260935]: 2025-10-11 09:30:53.825 2 DEBUG nova.compute.manager [req-9171ec85-3190-4f48-9afc-cb131fbba77f req-cf813c66-b4e2-4681-a8d7-73f9be8d0b10 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:30:53 compute-0 nova_compute[260935]: 2025-10-11 09:30:53.826 2 DEBUG oslo_concurrency.lockutils [req-9171ec85-3190-4f48-9afc-cb131fbba77f req-cf813c66-b4e2-4681-a8d7-73f9be8d0b10 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:30:53 compute-0 nova_compute[260935]: 2025-10-11 09:30:53.826 2 DEBUG oslo_concurrency.lockutils [req-9171ec85-3190-4f48-9afc-cb131fbba77f req-cf813c66-b4e2-4681-a8d7-73f9be8d0b10 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:30:53 compute-0 nova_compute[260935]: 2025-10-11 09:30:53.827 2 DEBUG oslo_concurrency.lockutils [req-9171ec85-3190-4f48-9afc-cb131fbba77f req-cf813c66-b4e2-4681-a8d7-73f9be8d0b10 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:30:53 compute-0 nova_compute[260935]: 2025-10-11 09:30:53.827 2 DEBUG nova.compute.manager [req-9171ec85-3190-4f48-9afc-cb131fbba77f req-cf813c66-b4e2-4681-a8d7-73f9be8d0b10 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] No waiting events found dispatching network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:30:53 compute-0 nova_compute[260935]: 2025-10-11 09:30:53.828 2 WARNING nova.compute.manager [req-9171ec85-3190-4f48-9afc-cb131fbba77f req-cf813c66-b4e2-4681-a8d7-73f9be8d0b10 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received unexpected event network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a for instance with vm_state active and task_state None.
Oct 11 09:30:54 compute-0 nova_compute[260935]: 2025-10-11 09:30:54.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:54 compute-0 ceph-mon[74313]: pgmap v2677: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:30:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:30:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:30:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:30:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:30:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:30:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:30:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:30:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:30:54
Oct 11 09:30:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:30:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:30:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'images', 'backups', 'volumes', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.meta']
Oct 11 09:30:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:30:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2678: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 09:30:55 compute-0 nova_compute[260935]: 2025-10-11 09:30:55.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:55.516 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:30:55 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:55.519 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:30:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:30:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:30:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:30:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:30:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:30:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:30:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:30:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:30:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:30:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:30:56 compute-0 ceph-mon[74313]: pgmap v2678: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 09:30:56 compute-0 ovn_controller[152945]: 2025-10-11T09:30:56Z|01511|binding|INFO|Releasing lport 3d1ec619-c133-47e3-8121-0a84b73ae16e from this chassis (sb_readonly=0)
Oct 11 09:30:56 compute-0 ovn_controller[152945]: 2025-10-11T09:30:56Z|01512|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:30:56 compute-0 ovn_controller[152945]: 2025-10-11T09:30:56Z|01513|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:30:56 compute-0 NetworkManager[44960]: <info>  [1760175056.1689] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/584)
Oct 11 09:30:56 compute-0 NetworkManager[44960]: <info>  [1760175056.1695] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/585)
Oct 11 09:30:56 compute-0 nova_compute[260935]: 2025-10-11 09:30:56.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:56 compute-0 ovn_controller[152945]: 2025-10-11T09:30:56Z|01514|binding|INFO|Releasing lport 3d1ec619-c133-47e3-8121-0a84b73ae16e from this chassis (sb_readonly=0)
Oct 11 09:30:56 compute-0 ovn_controller[152945]: 2025-10-11T09:30:56Z|01515|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:30:56 compute-0 ovn_controller[152945]: 2025-10-11T09:30:56Z|01516|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:30:56 compute-0 nova_compute[260935]: 2025-10-11 09:30:56.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:56 compute-0 nova_compute[260935]: 2025-10-11 09:30:56.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:56 compute-0 nova_compute[260935]: 2025-10-11 09:30:56.569 2 DEBUG nova.compute.manager [req-647d31e5-cf65-4c7e-a7de-2f5d20659534 req-76a29f32-2a15-4059-9387-d6aa9b3d5fd7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-changed-41c97ff4-4ad5-4d35-ac33-d083a904f55a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:30:56 compute-0 nova_compute[260935]: 2025-10-11 09:30:56.570 2 DEBUG nova.compute.manager [req-647d31e5-cf65-4c7e-a7de-2f5d20659534 req-76a29f32-2a15-4059-9387-d6aa9b3d5fd7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Refreshing instance network info cache due to event network-changed-41c97ff4-4ad5-4d35-ac33-d083a904f55a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:30:56 compute-0 nova_compute[260935]: 2025-10-11 09:30:56.571 2 DEBUG oslo_concurrency.lockutils [req-647d31e5-cf65-4c7e-a7de-2f5d20659534 req-76a29f32-2a15-4059-9387-d6aa9b3d5fd7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:30:56 compute-0 nova_compute[260935]: 2025-10-11 09:30:56.571 2 DEBUG oslo_concurrency.lockutils [req-647d31e5-cf65-4c7e-a7de-2f5d20659534 req-76a29f32-2a15-4059-9387-d6aa9b3d5fd7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:30:56 compute-0 nova_compute[260935]: 2025-10-11 09:30:56.572 2 DEBUG nova.network.neutron [req-647d31e5-cf65-4c7e-a7de-2f5d20659534 req-76a29f32-2a15-4059-9387-d6aa9b3d5fd7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Refreshing network info cache for port 41c97ff4-4ad5-4d35-ac33-d083a904f55a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:30:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2679: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 09:30:57 compute-0 ceph-mon[74313]: pgmap v2679: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Oct 11 09:30:57 compute-0 podman[409521]: 2025-10-11 09:30:57.798952823 +0000 UTC m=+0.095942219 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid)
Oct 11 09:30:58 compute-0 nova_compute[260935]: 2025-10-11 09:30:58.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:58 compute-0 nova_compute[260935]: 2025-10-11 09:30:58.902 2 DEBUG nova.network.neutron [req-647d31e5-cf65-4c7e-a7de-2f5d20659534 req-76a29f32-2a15-4059-9387-d6aa9b3d5fd7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Updated VIF entry in instance network info cache for port 41c97ff4-4ad5-4d35-ac33-d083a904f55a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:30:58 compute-0 nova_compute[260935]: 2025-10-11 09:30:58.903 2 DEBUG nova.network.neutron [req-647d31e5-cf65-4c7e-a7de-2f5d20659534 req-76a29f32-2a15-4059-9387-d6aa9b3d5fd7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Updating instance_info_cache with network_info: [{"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:30:58 compute-0 nova_compute[260935]: 2025-10-11 09:30:58.927 2 DEBUG oslo_concurrency.lockutils [req-647d31e5-cf65-4c7e-a7de-2f5d20659534 req-76a29f32-2a15-4059-9387-d6aa9b3d5fd7 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:30:59 compute-0 nova_compute[260935]: 2025-10-11 09:30:59.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:30:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2680: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:30:59 compute-0 ceph-mon[74313]: pgmap v2680: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:30:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:30:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:30:59.521 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:31:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2681: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:31:01 compute-0 podman[409541]: 2025-10-11 09:31:01.793459421 +0000 UTC m=+0.091797652 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 09:31:01 compute-0 podman[409542]: 2025-10-11 09:31:01.845355065 +0000 UTC m=+0.144450607 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct 11 09:31:02 compute-0 ceph-mon[74313]: pgmap v2681: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:31:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2682: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 11 09:31:03 compute-0 sshd-session[409586]: Invalid user admin from 165.232.82.252 port 44722
Oct 11 09:31:03 compute-0 sshd-session[409586]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:31:03 compute-0 sshd-session[409586]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:31:03 compute-0 nova_compute[260935]: 2025-10-11 09:31:03.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:04 compute-0 nova_compute[260935]: 2025-10-11 09:31:04.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:04 compute-0 ceph-mon[74313]: pgmap v2682: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 11 09:31:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:31:04 compute-0 ovn_controller[152945]: 2025-10-11T09:31:04Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:b0:e4 10.100.0.14
Oct 11 09:31:04 compute-0 ovn_controller[152945]: 2025-10-11T09:31:04Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:b0:e4 10.100.0.14
Oct 11 09:31:05 compute-0 sshd-session[409586]: Failed password for invalid user admin from 165.232.82.252 port 44722 ssh2
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2683: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 11 09:31:05 compute-0 ceph-mon[74313]: pgmap v2683: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 11 09:31:05 compute-0 sshd-session[409586]: Connection closed by invalid user admin 165.232.82.252 port 44722 [preauth]
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:31:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:31:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2684: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 11 09:31:08 compute-0 ceph-mon[74313]: pgmap v2684: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 11 09:31:08 compute-0 nova_compute[260935]: 2025-10-11 09:31:08.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:09 compute-0 nova_compute[260935]: 2025-10-11 09:31:09.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2685: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Oct 11 09:31:09 compute-0 ceph-mon[74313]: pgmap v2685: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Oct 11 09:31:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:31:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2686: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 09:31:12 compute-0 ceph-mon[74313]: pgmap v2686: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 09:31:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2687: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 09:31:13 compute-0 ceph-mon[74313]: pgmap v2687: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 09:31:13 compute-0 nova_compute[260935]: 2025-10-11 09:31:13.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:14 compute-0 nova_compute[260935]: 2025-10-11 09:31:14.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:31:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2688: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 09:31:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:15.226 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:15.227 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:15.228 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:16 compute-0 nova_compute[260935]: 2025-10-11 09:31:16.066 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "30069bfa-2edc-4f4c-b685-233b65a11de1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:16 compute-0 nova_compute[260935]: 2025-10-11 09:31:16.067 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:16 compute-0 nova_compute[260935]: 2025-10-11 09:31:16.099 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:31:16 compute-0 ceph-mon[74313]: pgmap v2688: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 09:31:16 compute-0 nova_compute[260935]: 2025-10-11 09:31:16.209 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:16 compute-0 nova_compute[260935]: 2025-10-11 09:31:16.210 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:16 compute-0 nova_compute[260935]: 2025-10-11 09:31:16.221 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:31:16 compute-0 nova_compute[260935]: 2025-10-11 09:31:16.222 2 INFO nova.compute.claims [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:31:16 compute-0 nova_compute[260935]: 2025-10-11 09:31:16.458 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:31:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:31:16 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3394472828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:31:16 compute-0 nova_compute[260935]: 2025-10-11 09:31:16.899 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:31:16 compute-0 nova_compute[260935]: 2025-10-11 09:31:16.909 2 DEBUG nova.compute.provider_tree [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:31:16 compute-0 nova_compute[260935]: 2025-10-11 09:31:16.927 2 DEBUG nova.scheduler.client.report [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:31:16 compute-0 nova_compute[260935]: 2025-10-11 09:31:16.948 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:16 compute-0 nova_compute[260935]: 2025-10-11 09:31:16.949 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.003 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.004 2 DEBUG nova.network.neutron [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.037 2 INFO nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.062 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:31:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2689: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 09:31:17 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3394472828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:31:17 compute-0 ceph-mon[74313]: pgmap v2689: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.172 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.173 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.174 2 INFO nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Creating image(s)
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.208 2 DEBUG nova.storage.rbd_utils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 30069bfa-2edc-4f4c-b685-233b65a11de1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.235 2 DEBUG nova.storage.rbd_utils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 30069bfa-2edc-4f4c-b685-233b65a11de1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.274 2 DEBUG nova.storage.rbd_utils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 30069bfa-2edc-4f4c-b685-233b65a11de1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.278 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.389 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.390 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.391 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.391 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.425 2 DEBUG nova.storage.rbd_utils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 30069bfa-2edc-4f4c-b685-233b65a11de1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.429 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 30069bfa-2edc-4f4c-b685-233b65a11de1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.504 2 DEBUG nova.policy [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.854 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 30069bfa-2edc-4f4c-b685-233b65a11de1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:31:17 compute-0 nova_compute[260935]: 2025-10-11 09:31:17.947 2 DEBUG nova.storage.rbd_utils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 30069bfa-2edc-4f4c-b685-233b65a11de1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:31:18 compute-0 nova_compute[260935]: 2025-10-11 09:31:18.074 2 DEBUG nova.objects.instance [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 30069bfa-2edc-4f4c-b685-233b65a11de1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:31:18 compute-0 nova_compute[260935]: 2025-10-11 09:31:18.095 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:31:18 compute-0 nova_compute[260935]: 2025-10-11 09:31:18.096 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Ensure instance console log exists: /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:31:18 compute-0 nova_compute[260935]: 2025-10-11 09:31:18.097 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:18 compute-0 nova_compute[260935]: 2025-10-11 09:31:18.097 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:18 compute-0 nova_compute[260935]: 2025-10-11 09:31:18.098 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:18 compute-0 nova_compute[260935]: 2025-10-11 09:31:18.400 2 DEBUG nova.network.neutron [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Successfully created port: faf6491c-2fe9-4559-bb38-e0b4068de1b5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:31:18 compute-0 nova_compute[260935]: 2025-10-11 09:31:18.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:19 compute-0 nova_compute[260935]: 2025-10-11 09:31:19.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2690: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.136360) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175079136394, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2053, "num_deletes": 251, "total_data_size": 3346756, "memory_usage": 3404496, "flush_reason": "Manual Compaction"}
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175079154025, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3291039, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54346, "largest_seqno": 56398, "table_properties": {"data_size": 3281726, "index_size": 5870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18898, "raw_average_key_size": 20, "raw_value_size": 3263231, "raw_average_value_size": 3478, "num_data_blocks": 260, "num_entries": 938, "num_filter_entries": 938, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760174856, "oldest_key_time": 1760174856, "file_creation_time": 1760175079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 17704 microseconds, and 6091 cpu microseconds.
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.154064) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3291039 bytes OK
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.154083) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.156079) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.156091) EVENT_LOG_v1 {"time_micros": 1760175079156087, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.156107) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3338158, prev total WAL file size 3338158, number of live WAL files 2.
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.156952) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(3213KB)], [128(8012KB)]
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175079156993, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11496309, "oldest_snapshot_seqno": -1}
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7664 keys, 9833867 bytes, temperature: kUnknown
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175079228279, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 9833867, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9784118, "index_size": 29476, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 199059, "raw_average_key_size": 25, "raw_value_size": 9648641, "raw_average_value_size": 1258, "num_data_blocks": 1149, "num_entries": 7664, "num_filter_entries": 7664, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.228691) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 9833867 bytes
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.230332) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.0 rd, 137.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.8 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 8178, records dropped: 514 output_compression: NoCompression
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.230356) EVENT_LOG_v1 {"time_micros": 1760175079230342, "job": 78, "event": "compaction_finished", "compaction_time_micros": 71427, "compaction_time_cpu_micros": 29221, "output_level": 6, "num_output_files": 1, "total_output_size": 9833867, "num_input_records": 8178, "num_output_records": 7664, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175079231247, "job": 78, "event": "table_file_deletion", "file_number": 130}
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175079232870, "job": 78, "event": "table_file_deletion", "file_number": 128}
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.156878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.232973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.232977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.232979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.232981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:31:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:19.232983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:31:19 compute-0 nova_compute[260935]: 2025-10-11 09:31:19.351 2 DEBUG nova.network.neutron [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Successfully updated port: faf6491c-2fe9-4559-bb38-e0b4068de1b5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:31:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:31:19 compute-0 nova_compute[260935]: 2025-10-11 09:31:19.438 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:31:19 compute-0 nova_compute[260935]: 2025-10-11 09:31:19.438 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:31:19 compute-0 nova_compute[260935]: 2025-10-11 09:31:19.440 2 DEBUG nova.network.neutron [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:31:19 compute-0 nova_compute[260935]: 2025-10-11 09:31:19.495 2 DEBUG nova.compute.manager [req-0dda961c-5d5c-4b9d-8030-3e4b3fdff853 req-bbc7dca2-660d-4d19-ab70-10e338ce18a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-changed-faf6491c-2fe9-4559-bb38-e0b4068de1b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:31:19 compute-0 nova_compute[260935]: 2025-10-11 09:31:19.495 2 DEBUG nova.compute.manager [req-0dda961c-5d5c-4b9d-8030-3e4b3fdff853 req-bbc7dca2-660d-4d19-ab70-10e338ce18a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Refreshing instance network info cache due to event network-changed-faf6491c-2fe9-4559-bb38-e0b4068de1b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:31:19 compute-0 nova_compute[260935]: 2025-10-11 09:31:19.496 2 DEBUG oslo_concurrency.lockutils [req-0dda961c-5d5c-4b9d-8030-3e4b3fdff853 req-bbc7dca2-660d-4d19-ab70-10e338ce18a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:31:19 compute-0 nova_compute[260935]: 2025-10-11 09:31:19.642 2 DEBUG nova.network.neutron [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:31:20 compute-0 ceph-mon[74313]: pgmap v2690: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 09:31:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2691: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 11 09:31:21 compute-0 ceph-mon[74313]: pgmap v2691: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct 11 09:31:21 compute-0 podman[409776]: 2025-10-11 09:31:21.780990521 +0000 UTC m=+0.075301896 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.893 2 DEBUG nova.network.neutron [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Updating instance_info_cache with network_info: [{"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.914 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.915 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Instance network_info: |[{"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, 
"meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.915 2 DEBUG oslo_concurrency.lockutils [req-0dda961c-5d5c-4b9d-8030-3e4b3fdff853 req-bbc7dca2-660d-4d19-ab70-10e338ce18a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.916 2 DEBUG nova.network.neutron [req-0dda961c-5d5c-4b9d-8030-3e4b3fdff853 req-bbc7dca2-660d-4d19-ab70-10e338ce18a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Refreshing network info cache for port faf6491c-2fe9-4559-bb38-e0b4068de1b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.921 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Start _get_guest_xml network_info=[{"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.927 2 WARNING nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.936 2 DEBUG nova.virt.libvirt.host [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.937 2 DEBUG nova.virt.libvirt.host [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.940 2 DEBUG nova.virt.libvirt.host [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.941 2 DEBUG nova.virt.libvirt.host [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.941 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.941 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.941 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.942 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.942 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.942 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.942 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.942 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.943 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.943 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.943 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.943 2 DEBUG nova.virt.hardware [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:31:21 compute-0 nova_compute[260935]: 2025-10-11 09:31:21.946 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:31:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:31:22 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2192263090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:31:22 compute-0 nova_compute[260935]: 2025-10-11 09:31:22.452 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:31:22 compute-0 nova_compute[260935]: 2025-10-11 09:31:22.480 2 DEBUG nova.storage.rbd_utils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 30069bfa-2edc-4f4c-b685-233b65a11de1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:31:22 compute-0 nova_compute[260935]: 2025-10-11 09:31:22.484 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:31:22 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2192263090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:31:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:31:22 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3870550153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:31:22 compute-0 nova_compute[260935]: 2025-10-11 09:31:22.980 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:31:22 compute-0 nova_compute[260935]: 2025-10-11 09:31:22.983 2 DEBUG nova.virt.libvirt.vif [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:31:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1320880782',display_name='tempest-TestGettingAddress-server-1320880782',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1320880782',id=136,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHRstPG1ot2VOcbBUXtJ+eOOOPIpYxS1823a8lgCgglrjvAd3j//+qkav+rN6NO2rJV6NwAQSbhFZXjoVb6gdqi2VPWucmakmAAgqAmAhQOUxzhrLgrzGURRlRuM8US+xA==',key_name='tempest-TestGettingAddress-264191416',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-fmtjcp7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:31:17Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=30069bfa-2edc-4f4c-b685-233b65a11de1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:31:22 compute-0 nova_compute[260935]: 2025-10-11 09:31:22.984 2 DEBUG nova.network.os_vif_util [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:31:22 compute-0 nova_compute[260935]: 2025-10-11 09:31:22.986 2 DEBUG nova.network.os_vif_util [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:79:c8,bridge_name='br-int',has_traffic_filtering=True,id=faf6491c-2fe9-4559-bb38-e0b4068de1b5,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf6491c-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:31:22 compute-0 nova_compute[260935]: 2025-10-11 09:31:22.989 2 DEBUG nova.objects.instance [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 30069bfa-2edc-4f4c-b685-233b65a11de1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.015 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:31:23 compute-0 nova_compute[260935]:   <uuid>30069bfa-2edc-4f4c-b685-233b65a11de1</uuid>
Oct 11 09:31:23 compute-0 nova_compute[260935]:   <name>instance-00000088</name>
Oct 11 09:31:23 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:31:23 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:31:23 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <nova:name>tempest-TestGettingAddress-server-1320880782</nova:name>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:31:21</nova:creationTime>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:31:23 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:31:23 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:31:23 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:31:23 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:31:23 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:31:23 compute-0 nova_compute[260935]:         <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 09:31:23 compute-0 nova_compute[260935]:         <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:31:23 compute-0 nova_compute[260935]:         <nova:port uuid="faf6491c-2fe9-4559-bb38-e0b4068de1b5">
Oct 11 09:31:23 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe73:79c8" ipVersion="6"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe73:79c8" ipVersion="6"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:31:23 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:31:23 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <system>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <entry name="serial">30069bfa-2edc-4f4c-b685-233b65a11de1</entry>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <entry name="uuid">30069bfa-2edc-4f4c-b685-233b65a11de1</entry>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     </system>
Oct 11 09:31:23 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:31:23 compute-0 nova_compute[260935]:   <os>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:   </os>
Oct 11 09:31:23 compute-0 nova_compute[260935]:   <features>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:   </features>
Oct 11 09:31:23 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:31:23 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:31:23 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/30069bfa-2edc-4f4c-b685-233b65a11de1_disk">
Oct 11 09:31:23 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       </source>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:31:23 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/30069bfa-2edc-4f4c-b685-233b65a11de1_disk.config">
Oct 11 09:31:23 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       </source>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:31:23 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:73:79:c8"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <target dev="tapfaf6491c-2f"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1/console.log" append="off"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <video>
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     </video>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:31:23 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:31:23 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:31:23 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:31:23 compute-0 nova_compute[260935]: </domain>
Oct 11 09:31:23 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.017 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Preparing to wait for external event network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.018 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.018 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.019 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.020 2 DEBUG nova.virt.libvirt.vif [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:31:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1320880782',display_name='tempest-TestGettingAddress-server-1320880782',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1320880782',id=136,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHRstPG1ot2VOcbBUXtJ+eOOOPIpYxS1823a8lgCgglrjvAd3j//+qkav+rN6NO2rJV6NwAQSbhFZXjoVb6gdqi2VPWucmakmAAgqAmAhQOUxzhrLgrzGURRlRuM8US+xA==',key_name='tempest-TestGettingAddress-264191416',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-fmtjcp7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:31:17Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=30069bfa-2edc-4f4c-b685-233b65a11de1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.021 2 DEBUG nova.network.os_vif_util [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.022 2 DEBUG nova.network.os_vif_util [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:79:c8,bridge_name='br-int',has_traffic_filtering=True,id=faf6491c-2fe9-4559-bb38-e0b4068de1b5,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf6491c-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.023 2 DEBUG os_vif [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:79:c8,bridge_name='br-int',has_traffic_filtering=True,id=faf6491c-2fe9-4559-bb38-e0b4068de1b5,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf6491c-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.025 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.026 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.032 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfaf6491c-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.033 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfaf6491c-2f, col_values=(('external_ids', {'iface-id': 'faf6491c-2fe9-4559-bb38-e0b4068de1b5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:79:c8', 'vm-uuid': '30069bfa-2edc-4f4c-b685-233b65a11de1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:23 compute-0 NetworkManager[44960]: <info>  [1760175083.0380] manager: (tapfaf6491c-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/586)
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.045 2 INFO os_vif [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:79:c8,bridge_name='br-int',has_traffic_filtering=True,id=faf6491c-2fe9-4559-bb38-e0b4068de1b5,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf6491c-2f')
Oct 11 09:31:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2692: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.141 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.141 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.142 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:73:79:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.143 2 INFO nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Using config drive
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.180 2 DEBUG nova.storage.rbd_utils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 30069bfa-2edc-4f4c-b685-233b65a11de1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:31:23 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3870550153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:31:23 compute-0 ceph-mon[74313]: pgmap v2692: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.674 2 INFO nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Creating config drive at /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1/disk.config
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.685 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp04q_nnrl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.860 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp04q_nnrl" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.904 2 DEBUG nova.storage.rbd_utils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 30069bfa-2edc-4f4c-b685-233b65a11de1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:31:23 compute-0 nova_compute[260935]: 2025-10-11 09:31:23.910 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1/disk.config 30069bfa-2edc-4f4c-b685-233b65a11de1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:31:24 compute-0 sudo[409920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:31:24 compute-0 sudo[409920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:24 compute-0 sudo[409920]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:24 compute-0 nova_compute[260935]: 2025-10-11 09:31:24.366 2 DEBUG oslo_concurrency.processutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1/disk.config 30069bfa-2edc-4f4c-b685-233b65a11de1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:31:24 compute-0 nova_compute[260935]: 2025-10-11 09:31:24.369 2 INFO nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Deleting local config drive /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1/disk.config because it was imported into RBD.
Oct 11 09:31:24 compute-0 sudo[409945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:31:24 compute-0 sudo[409945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:24 compute-0 sudo[409945]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:31:24 compute-0 kernel: tapfaf6491c-2f: entered promiscuous mode
Oct 11 09:31:24 compute-0 NetworkManager[44960]: <info>  [1760175084.4517] manager: (tapfaf6491c-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/587)
Oct 11 09:31:24 compute-0 ovn_controller[152945]: 2025-10-11T09:31:24Z|01517|binding|INFO|Claiming lport faf6491c-2fe9-4559-bb38-e0b4068de1b5 for this chassis.
Oct 11 09:31:24 compute-0 ovn_controller[152945]: 2025-10-11T09:31:24Z|01518|binding|INFO|faf6491c-2fe9-4559-bb38-e0b4068de1b5: Claiming fa:16:3e:73:79:c8 10.100.0.6 2001:db8:0:1:f816:3eff:fe73:79c8 2001:db8::f816:3eff:fe73:79c8
Oct 11 09:31:24 compute-0 nova_compute[260935]: 2025-10-11 09:31:24.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:24 compute-0 nova_compute[260935]: 2025-10-11 09:31:24.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:24 compute-0 ovn_controller[152945]: 2025-10-11T09:31:24Z|01519|binding|INFO|Setting lport faf6491c-2fe9-4559-bb38-e0b4068de1b5 ovn-installed in OVS
Oct 11 09:31:24 compute-0 ovn_controller[152945]: 2025-10-11T09:31:24Z|01520|binding|INFO|Setting lport faf6491c-2fe9-4559-bb38-e0b4068de1b5 up in Southbound
Oct 11 09:31:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.475 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:79:c8 10.100.0.6 2001:db8:0:1:f816:3eff:fe73:79c8 2001:db8::f816:3eff:fe73:79c8'], port_security=['fa:16:3e:73:79:c8 10.100.0.6 2001:db8:0:1:f816:3eff:fe73:79c8 2001:db8::f816:3eff:fe73:79c8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:fe73:79c8/64 2001:db8::f816:3eff:fe73:79c8/64', 'neutron:device_id': '30069bfa-2edc-4f4c-b685-233b65a11de1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b3d6b51-0356-4b0d-af67-6a82e6bb09bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09ee216c-15c4-4f81-ac68-20fd45768bc8, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=faf6491c-2fe9-4559-bb38-e0b4068de1b5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:31:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.476 162815 INFO neutron.agent.ovn.metadata.agent [-] Port faf6491c-2fe9-4559-bb38-e0b4068de1b5 in datapath b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 bound to our chassis
Oct 11 09:31:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.478 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8c83697-e12f-4ca4-8bd5-07c36e7b45a8
Oct 11 09:31:24 compute-0 systemd-udevd[410004]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:31:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.503 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea55ba6-3bcc-4781-9c30-c60913998ed7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:24 compute-0 systemd-machined[215705]: New machine qemu-160-instance-00000088.
Oct 11 09:31:24 compute-0 NetworkManager[44960]: <info>  [1760175084.5126] device (tapfaf6491c-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:31:24 compute-0 NetworkManager[44960]: <info>  [1760175084.5144] device (tapfaf6491c-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:31:24 compute-0 systemd[1]: Started Virtual Machine qemu-160-instance-00000088.
Oct 11 09:31:24 compute-0 sudo[409978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:31:24 compute-0 sudo[409978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:24 compute-0 sudo[409978]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.546 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[49ff8a6d-1ede-4a39-9392-68a632150592]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.550 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c2dcc9-811a-4a61-89b8-04a6a195f3cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.582 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[861855b2-c69c-48d6-80f8-b61a35d0a4e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.609 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d270db61-3155-41be-af1e-d7f6f44b84dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8c83697-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:5b:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 25, 'tx_packets': 5, 'rx_bytes': 2230, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 25, 'tx_packets': 5, 'rx_bytes': 2230, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689596, 'reachable_time': 36408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410044, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:24 compute-0 sudo[410014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:31:24 compute-0 sudo[410014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.645 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a57096f4-11fb-439c-bd4e-9ef1eaf19f70]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb8c83697-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689607, 'tstamp': 689607}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410046, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb8c83697-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689611, 'tstamp': 689611}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410046, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.648 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8c83697-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:31:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.696 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8c83697-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:31:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.697 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:31:24 compute-0 nova_compute[260935]: 2025-10-11 09:31:24.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.697 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8c83697-e0, col_values=(('external_ids', {'iface-id': '3d1ec619-c133-47e3-8121-0a84b73ae16e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:31:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:24.698 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:31:24 compute-0 nova_compute[260935]: 2025-10-11 09:31:24.790 2 DEBUG nova.network.neutron [req-0dda961c-5d5c-4b9d-8030-3e4b3fdff853 req-bbc7dca2-660d-4d19-ab70-10e338ce18a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Updated VIF entry in instance network info cache for port faf6491c-2fe9-4559-bb38-e0b4068de1b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:31:24 compute-0 nova_compute[260935]: 2025-10-11 09:31:24.791 2 DEBUG nova.network.neutron [req-0dda961c-5d5c-4b9d-8030-3e4b3fdff853 req-bbc7dca2-660d-4d19-ab70-10e338ce18a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Updating instance_info_cache with network_info: [{"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": 
{}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:31:24 compute-0 nova_compute[260935]: 2025-10-11 09:31:24.809 2 DEBUG oslo_concurrency.lockutils [req-0dda961c-5d5c-4b9d-8030-3e4b3fdff853 req-bbc7dca2-660d-4d19-ab70-10e338ce18a9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:31:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:31:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:31:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:31:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:31:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:31:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:24.998 2 DEBUG nova.compute.manager [req-86c5373b-59bc-4efa-8f66-65a7c316960a req-7b94cf95-fee1-4922-9466-2c3f7cb88d2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:24.999 2 DEBUG oslo_concurrency.lockutils [req-86c5373b-59bc-4efa-8f66-65a7c316960a req-7b94cf95-fee1-4922-9466-2c3f7cb88d2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:24.999 2 DEBUG oslo_concurrency.lockutils [req-86c5373b-59bc-4efa-8f66-65a7c316960a req-7b94cf95-fee1-4922-9466-2c3f7cb88d2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.001 2 DEBUG oslo_concurrency.lockutils [req-86c5373b-59bc-4efa-8f66-65a7c316960a req-7b94cf95-fee1-4922-9466-2c3f7cb88d2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.001 2 DEBUG nova.compute.manager [req-86c5373b-59bc-4efa-8f66-65a7c316960a req-7b94cf95-fee1-4922-9466-2c3f7cb88d2a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Processing event network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:31:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2693: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:31:25 compute-0 ceph-mon[74313]: pgmap v2693: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:31:25 compute-0 sudo[410014]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:31:25 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:31:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:31:25 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:31:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:31:25 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:31:25 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9dfa91e0-16fd-452f-859d-0829f17462f2 does not exist
Oct 11 09:31:25 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 2698604d-60e9-42a6-a321-6c3eb622df1c does not exist
Oct 11 09:31:25 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev b7eb5832-8404-4f2f-8711-427b6b3573f8 does not exist
Oct 11 09:31:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:31:25 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:31:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:31:25 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:31:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:31:25 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:31:25 compute-0 sudo[410122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:31:25 compute-0 sudo[410122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:25 compute-0 sudo[410122]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:25 compute-0 sudo[410147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:31:25 compute-0 sudo[410147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:25 compute-0 sudo[410147]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:25 compute-0 sudo[410172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:31:25 compute-0 sudo[410172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:25 compute-0 sudo[410172]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.766 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.767 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175085.7671936, 30069bfa-2edc-4f4c-b685-233b65a11de1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.767 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] VM Started (Lifecycle Event)
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.771 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.775 2 INFO nova.virt.libvirt.driver [-] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Instance spawned successfully.
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.776 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:31:25 compute-0 sudo[410197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:31:25 compute-0 sudo[410197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.808 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.813 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.814 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.814 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.814 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.815 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.815 2 DEBUG nova.virt.libvirt.driver [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.819 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.856 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.857 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175085.769778, 30069bfa-2edc-4f4c-b685-233b65a11de1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.857 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] VM Paused (Lifecycle Event)
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.876 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.882 2 INFO nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Took 8.71 seconds to spawn the instance on the hypervisor.
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.882 2 DEBUG nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.883 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175085.770087, 30069bfa-2edc-4f4c-b685-233b65a11de1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.884 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] VM Resumed (Lifecycle Event)
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.922 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.925 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:31:25 compute-0 nova_compute[260935]: 2025-10-11 09:31:25.972 2 INFO nova.compute.manager [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Took 9.80 seconds to build instance.
Oct 11 09:31:26 compute-0 nova_compute[260935]: 2025-10-11 09:31:26.018 2 DEBUG oslo_concurrency.lockutils [None req-a58efe35-e911-4b34-8fc5-8bbf99892ea8 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:26 compute-0 podman[410262]: 2025-10-11 09:31:26.181151817 +0000 UTC m=+0.058549614 container create 3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:31:26 compute-0 systemd[1]: Started libpod-conmon-3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b.scope.
Oct 11 09:31:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:31:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:31:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:31:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:31:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:31:26 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:31:26 compute-0 podman[410262]: 2025-10-11 09:31:26.153071704 +0000 UTC m=+0.030469591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:31:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:31:26 compute-0 podman[410262]: 2025-10-11 09:31:26.323122933 +0000 UTC m=+0.200520800 container init 3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 09:31:26 compute-0 podman[410262]: 2025-10-11 09:31:26.336767858 +0000 UTC m=+0.214165695 container start 3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mestorf, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 09:31:26 compute-0 strange_mestorf[410279]: 167 167
Oct 11 09:31:26 compute-0 systemd[1]: libpod-3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b.scope: Deactivated successfully.
Oct 11 09:31:26 compute-0 podman[410262]: 2025-10-11 09:31:26.414242825 +0000 UTC m=+0.291640722 container attach 3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mestorf, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 11 09:31:26 compute-0 podman[410262]: 2025-10-11 09:31:26.415674795 +0000 UTC m=+0.293072612 container died 3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mestorf, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct 11 09:31:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd7d6e83847a0ca50a029083374e8000c6cd4caeb51463faa001b13d2b821a60-merged.mount: Deactivated successfully.
Oct 11 09:31:26 compute-0 podman[410262]: 2025-10-11 09:31:26.614428284 +0000 UTC m=+0.491826111 container remove 3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:31:26 compute-0 systemd[1]: libpod-conmon-3ee363ec5344f088a64c43c207b0dbc7924f956a22813b0ac8b1b051a725192b.scope: Deactivated successfully.
Oct 11 09:31:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:31:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2698628905' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:31:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:31:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2698628905' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:31:26 compute-0 podman[410305]: 2025-10-11 09:31:26.986657079 +0000 UTC m=+0.120107111 container create ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bouman, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:31:27 compute-0 podman[410305]: 2025-10-11 09:31:26.935650079 +0000 UTC m=+0.069100191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:31:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2694: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:31:27 compute-0 systemd[1]: Started libpod-conmon-ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56.scope.
Oct 11 09:31:27 compute-0 nova_compute[260935]: 2025-10-11 09:31:27.129 2 DEBUG nova.compute.manager [req-7665eca1-0a78-4290-b7fa-189cb594665f req-1681e5c7-2040-4e6b-9994-70726dc9fbe2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:31:27 compute-0 nova_compute[260935]: 2025-10-11 09:31:27.130 2 DEBUG oslo_concurrency.lockutils [req-7665eca1-0a78-4290-b7fa-189cb594665f req-1681e5c7-2040-4e6b-9994-70726dc9fbe2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:27 compute-0 nova_compute[260935]: 2025-10-11 09:31:27.130 2 DEBUG oslo_concurrency.lockutils [req-7665eca1-0a78-4290-b7fa-189cb594665f req-1681e5c7-2040-4e6b-9994-70726dc9fbe2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:27 compute-0 nova_compute[260935]: 2025-10-11 09:31:27.130 2 DEBUG oslo_concurrency.lockutils [req-7665eca1-0a78-4290-b7fa-189cb594665f req-1681e5c7-2040-4e6b-9994-70726dc9fbe2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:27 compute-0 nova_compute[260935]: 2025-10-11 09:31:27.130 2 DEBUG nova.compute.manager [req-7665eca1-0a78-4290-b7fa-189cb594665f req-1681e5c7-2040-4e6b-9994-70726dc9fbe2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] No waiting events found dispatching network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:31:27 compute-0 nova_compute[260935]: 2025-10-11 09:31:27.131 2 WARNING nova.compute.manager [req-7665eca1-0a78-4290-b7fa-189cb594665f req-1681e5c7-2040-4e6b-9994-70726dc9fbe2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received unexpected event network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 for instance with vm_state active and task_state None.
Oct 11 09:31:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1da03864e2d19ecf89844b30cd858c2ce1797f5661a26034caccf861a66f20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1da03864e2d19ecf89844b30cd858c2ce1797f5661a26034caccf861a66f20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1da03864e2d19ecf89844b30cd858c2ce1797f5661a26034caccf861a66f20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1da03864e2d19ecf89844b30cd858c2ce1797f5661a26034caccf861a66f20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1da03864e2d19ecf89844b30cd858c2ce1797f5661a26034caccf861a66f20/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:31:27 compute-0 podman[410305]: 2025-10-11 09:31:27.207183942 +0000 UTC m=+0.340634004 container init ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bouman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 09:31:27 compute-0 podman[410305]: 2025-10-11 09:31:27.216448334 +0000 UTC m=+0.349898366 container start ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 09:31:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2698628905' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:31:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2698628905' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:31:27 compute-0 ceph-mon[74313]: pgmap v2694: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:31:27 compute-0 podman[410305]: 2025-10-11 09:31:27.307251696 +0000 UTC m=+0.440701768 container attach ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bouman, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 09:31:28 compute-0 nova_compute[260935]: 2025-10-11 09:31:28.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:28 compute-0 gifted_bouman[410322]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:31:28 compute-0 gifted_bouman[410322]: --> relative data size: 1.0
Oct 11 09:31:28 compute-0 gifted_bouman[410322]: --> All data devices are unavailable
Oct 11 09:31:28 compute-0 systemd[1]: libpod-ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56.scope: Deactivated successfully.
Oct 11 09:31:28 compute-0 systemd[1]: libpod-ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56.scope: Consumed 1.088s CPU time.
Oct 11 09:31:28 compute-0 podman[410305]: 2025-10-11 09:31:28.353860452 +0000 UTC m=+1.487310534 container died ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bouman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 09:31:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d1da03864e2d19ecf89844b30cd858c2ce1797f5661a26034caccf861a66f20-merged.mount: Deactivated successfully.
Oct 11 09:31:28 compute-0 nova_compute[260935]: 2025-10-11 09:31:28.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:28 compute-0 nova_compute[260935]: 2025-10-11 09:31:28.859 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:31:28 compute-0 podman[410305]: 2025-10-11 09:31:28.872942 +0000 UTC m=+2.006392062 container remove ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bouman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:31:28 compute-0 sudo[410197]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:28 compute-0 podman[410351]: 2025-10-11 09:31:28.935940988 +0000 UTC m=+0.541260595 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:31:28 compute-0 systemd[1]: libpod-conmon-ab7b49654790e4517a9e204fc924686b3e3782192320efa96f3c1b766a485a56.scope: Deactivated successfully.
Oct 11 09:31:29 compute-0 sudo[410380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:31:29 compute-0 sudo[410380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:29 compute-0 sudo[410380]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2695: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:31:29 compute-0 sudo[410408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:31:29 compute-0 sudo[410408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:29 compute-0 sudo[410408]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:29 compute-0 sudo[410433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:31:29 compute-0 sudo[410433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:29 compute-0 sudo[410433]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:29 compute-0 ceph-mon[74313]: pgmap v2695: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:31:29 compute-0 sudo[410458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:31:29 compute-0 sudo[410458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:31:29 compute-0 podman[410524]: 2025-10-11 09:31:29.757120273 +0000 UTC m=+0.076588863 container create 8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 09:31:29 compute-0 podman[410524]: 2025-10-11 09:31:29.710882548 +0000 UTC m=+0.030351208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:31:29 compute-0 systemd[1]: Started libpod-conmon-8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6.scope.
Oct 11 09:31:29 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:31:30 compute-0 podman[410524]: 2025-10-11 09:31:30.026295239 +0000 UTC m=+0.345763869 container init 8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 09:31:30 compute-0 podman[410524]: 2025-10-11 09:31:30.036837047 +0000 UTC m=+0.356305627 container start 8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 09:31:30 compute-0 silly_euler[410540]: 167 167
Oct 11 09:31:30 compute-0 systemd[1]: libpod-8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6.scope: Deactivated successfully.
Oct 11 09:31:30 compute-0 podman[410524]: 2025-10-11 09:31:30.083257677 +0000 UTC m=+0.402726267 container attach 8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 09:31:30 compute-0 podman[410524]: 2025-10-11 09:31:30.083757501 +0000 UTC m=+0.403226101 container died 8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 09:31:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-916e653e41d4d725cb804cbf57b04c6a6722234b83058e3fc5ba1fa89aa0f031-merged.mount: Deactivated successfully.
Oct 11 09:31:30 compute-0 podman[410524]: 2025-10-11 09:31:30.464427664 +0000 UTC m=+0.783896254 container remove 8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 11 09:31:30 compute-0 systemd[1]: libpod-conmon-8364d5bea19cdeb22c7ead5e28c38804a9c6e9ac628f9c84af09fa31a3713fb6.scope: Deactivated successfully.
Oct 11 09:31:30 compute-0 podman[410567]: 2025-10-11 09:31:30.810685385 +0000 UTC m=+0.115163801 container create e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 09:31:30 compute-0 podman[410567]: 2025-10-11 09:31:30.743697195 +0000 UTC m=+0.048175661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:31:31 compute-0 systemd[1]: Started libpod-conmon-e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4.scope.
Oct 11 09:31:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7412e1827b2ea30a5c9eddf77a9aa02d1b7966336153a3ec2141f2f454d3ab4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7412e1827b2ea30a5c9eddf77a9aa02d1b7966336153a3ec2141f2f454d3ab4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7412e1827b2ea30a5c9eddf77a9aa02d1b7966336153a3ec2141f2f454d3ab4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7412e1827b2ea30a5c9eddf77a9aa02d1b7966336153a3ec2141f2f454d3ab4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:31:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2696: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:31:31 compute-0 podman[410567]: 2025-10-11 09:31:31.114682674 +0000 UTC m=+0.419161070 container init e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 09:31:31 compute-0 podman[410567]: 2025-10-11 09:31:31.129660597 +0000 UTC m=+0.434138983 container start e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:31:31 compute-0 podman[410567]: 2025-10-11 09:31:31.23675907 +0000 UTC m=+0.541237466 container attach e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 09:31:31 compute-0 ceph-mon[74313]: pgmap v2696: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:31:31 compute-0 gifted_cerf[410584]: {
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:     "0": [
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:         {
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "devices": [
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "/dev/loop3"
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             ],
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "lv_name": "ceph_lv0",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "lv_size": "21470642176",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "name": "ceph_lv0",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "tags": {
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.cluster_name": "ceph",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.crush_device_class": "",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.encrypted": "0",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.osd_id": "0",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.type": "block",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.vdo": "0"
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             },
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "type": "block",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "vg_name": "ceph_vg0"
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:         }
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:     ],
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:     "1": [
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:         {
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "devices": [
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "/dev/loop4"
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             ],
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "lv_name": "ceph_lv1",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "lv_size": "21470642176",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "name": "ceph_lv1",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "tags": {
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.cluster_name": "ceph",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.crush_device_class": "",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.encrypted": "0",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.osd_id": "1",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.type": "block",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.vdo": "0"
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             },
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "type": "block",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "vg_name": "ceph_vg1"
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:         }
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:     ],
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:     "2": [
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:         {
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "devices": [
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "/dev/loop5"
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             ],
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "lv_name": "ceph_lv2",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "lv_size": "21470642176",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "name": "ceph_lv2",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "tags": {
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.cluster_name": "ceph",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.crush_device_class": "",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.encrypted": "0",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.osd_id": "2",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.type": "block",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:                 "ceph.vdo": "0"
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             },
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "type": "block",
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:             "vg_name": "ceph_vg2"
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:         }
Oct 11 09:31:31 compute-0 gifted_cerf[410584]:     ]
Oct 11 09:31:31 compute-0 gifted_cerf[410584]: }
Oct 11 09:31:31 compute-0 systemd[1]: libpod-e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4.scope: Deactivated successfully.
Oct 11 09:31:31 compute-0 podman[410567]: 2025-10-11 09:31:31.910103412 +0000 UTC m=+1.214581798 container died e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 09:31:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7412e1827b2ea30a5c9eddf77a9aa02d1b7966336153a3ec2141f2f454d3ab4-merged.mount: Deactivated successfully.
Oct 11 09:31:32 compute-0 podman[410567]: 2025-10-11 09:31:32.605867946 +0000 UTC m=+1.910346362 container remove e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:31:32 compute-0 sudo[410458]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:32 compute-0 systemd[1]: libpod-conmon-e4c8b79b5928f90dbfd400b279a2e57b8ebcc3a02734271bd22911b6f57ccfb4.scope: Deactivated successfully.
Oct 11 09:31:32 compute-0 podman[410594]: 2025-10-11 09:31:32.749267383 +0000 UTC m=+0.809648299 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 09:31:32 compute-0 sudo[410625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:31:32 compute-0 sudo[410625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:32 compute-0 sudo[410625]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:32 compute-0 podman[410598]: 2025-10-11 09:31:32.803440062 +0000 UTC m=+0.862671326 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 09:31:32 compute-0 sudo[410672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:31:32 compute-0 sudo[410672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:32 compute-0 sudo[410672]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:32 compute-0 sudo[410700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:31:32 compute-0 sudo[410700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:32 compute-0 sudo[410700]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:33 compute-0 nova_compute[260935]: 2025-10-11 09:31:33.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:33 compute-0 sudo[410725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:31:33 compute-0 sudo[410725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2697: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:31:33 compute-0 ceph-mon[74313]: pgmap v2697: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:31:33 compute-0 podman[410790]: 2025-10-11 09:31:33.519742276 +0000 UTC m=+0.098767768 container create aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:31:33 compute-0 podman[410790]: 2025-10-11 09:31:33.442445195 +0000 UTC m=+0.021470667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:31:33 compute-0 nova_compute[260935]: 2025-10-11 09:31:33.629 2 DEBUG nova.compute.manager [req-56b822ba-0458-4270-8332-c3e0273d3fb9 req-8d68ab8f-13de-43a6-9ce4-1493590a799d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-changed-faf6491c-2fe9-4559-bb38-e0b4068de1b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:31:33 compute-0 nova_compute[260935]: 2025-10-11 09:31:33.630 2 DEBUG nova.compute.manager [req-56b822ba-0458-4270-8332-c3e0273d3fb9 req-8d68ab8f-13de-43a6-9ce4-1493590a799d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Refreshing instance network info cache due to event network-changed-faf6491c-2fe9-4559-bb38-e0b4068de1b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:31:33 compute-0 nova_compute[260935]: 2025-10-11 09:31:33.630 2 DEBUG oslo_concurrency.lockutils [req-56b822ba-0458-4270-8332-c3e0273d3fb9 req-8d68ab8f-13de-43a6-9ce4-1493590a799d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:31:33 compute-0 nova_compute[260935]: 2025-10-11 09:31:33.631 2 DEBUG oslo_concurrency.lockutils [req-56b822ba-0458-4270-8332-c3e0273d3fb9 req-8d68ab8f-13de-43a6-9ce4-1493590a799d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:31:33 compute-0 nova_compute[260935]: 2025-10-11 09:31:33.631 2 DEBUG nova.network.neutron [req-56b822ba-0458-4270-8332-c3e0273d3fb9 req-8d68ab8f-13de-43a6-9ce4-1493590a799d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Refreshing network info cache for port faf6491c-2fe9-4559-bb38-e0b4068de1b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:31:33 compute-0 nova_compute[260935]: 2025-10-11 09:31:33.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:33 compute-0 systemd[1]: Started libpod-conmon-aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4.scope.
Oct 11 09:31:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:31:33 compute-0 podman[410790]: 2025-10-11 09:31:33.921175955 +0000 UTC m=+0.500201477 container init aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct 11 09:31:33 compute-0 podman[410790]: 2025-10-11 09:31:33.935361286 +0000 UTC m=+0.514386778 container start aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 09:31:33 compute-0 hopeful_dhawan[410806]: 167 167
Oct 11 09:31:33 compute-0 systemd[1]: libpod-aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4.scope: Deactivated successfully.
Oct 11 09:31:34 compute-0 podman[410790]: 2025-10-11 09:31:34.004711493 +0000 UTC m=+0.583737025 container attach aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:31:34 compute-0 podman[410790]: 2025-10-11 09:31:34.006141793 +0000 UTC m=+0.585167275 container died aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 09:31:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c719f056635ced1df7e7a3aaae097783ade0368d5e14695227ffa12492c265b2-merged.mount: Deactivated successfully.
Oct 11 09:31:34 compute-0 podman[410790]: 2025-10-11 09:31:34.159228263 +0000 UTC m=+0.738253755 container remove aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 09:31:34 compute-0 systemd[1]: libpod-conmon-aa4689eb8d19124360b016972744306a1dfa02f7c7419cadb84ebc0936b297f4.scope: Deactivated successfully.
Oct 11 09:31:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:31:34 compute-0 podman[410832]: 2025-10-11 09:31:34.522459104 +0000 UTC m=+0.107958508 container create 3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:31:34 compute-0 podman[410832]: 2025-10-11 09:31:34.443568078 +0000 UTC m=+0.029067512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:31:34 compute-0 systemd[1]: Started libpod-conmon-3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980.scope.
Oct 11 09:31:34 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c341e1ce616213a7a9bc13e4ed3227e23e3e4e8e7894380decc3abdafb962e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c341e1ce616213a7a9bc13e4ed3227e23e3e4e8e7894380decc3abdafb962e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c341e1ce616213a7a9bc13e4ed3227e23e3e4e8e7894380decc3abdafb962e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c341e1ce616213a7a9bc13e4ed3227e23e3e4e8e7894380decc3abdafb962e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:31:34 compute-0 podman[410832]: 2025-10-11 09:31:34.897261431 +0000 UTC m=+0.482760885 container init 3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 09:31:34 compute-0 podman[410832]: 2025-10-11 09:31:34.905223326 +0000 UTC m=+0.490722730 container start 3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:31:35 compute-0 podman[410832]: 2025-10-11 09:31:35.013681987 +0000 UTC m=+0.599181401 container attach 3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:31:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2698: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:31:35 compute-0 ceph-mon[74313]: pgmap v2698: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:31:35 compute-0 zen_wilbur[410850]: {
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:         "osd_id": 2,
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:         "type": "bluestore"
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:     },
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:         "osd_id": 0,
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:         "type": "bluestore"
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:     },
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:         "osd_id": 1,
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:         "type": "bluestore"
Oct 11 09:31:35 compute-0 zen_wilbur[410850]:     }
Oct 11 09:31:35 compute-0 zen_wilbur[410850]: }
Oct 11 09:31:35 compute-0 systemd[1]: libpod-3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980.scope: Deactivated successfully.
Oct 11 09:31:35 compute-0 systemd[1]: libpod-3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980.scope: Consumed 1.077s CPU time.
Oct 11 09:31:36 compute-0 podman[410883]: 2025-10-11 09:31:36.052948135 +0000 UTC m=+0.044370853 container died 3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 09:31:36 compute-0 nova_compute[260935]: 2025-10-11 09:31:36.191 2 DEBUG nova.network.neutron [req-56b822ba-0458-4270-8332-c3e0273d3fb9 req-8d68ab8f-13de-43a6-9ce4-1493590a799d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Updated VIF entry in instance network info cache for port faf6491c-2fe9-4559-bb38-e0b4068de1b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:31:36 compute-0 nova_compute[260935]: 2025-10-11 09:31:36.192 2 DEBUG nova.network.neutron [req-56b822ba-0458-4270-8332-c3e0273d3fb9 req-8d68ab8f-13de-43a6-9ce4-1493590a799d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Updating instance_info_cache with network_info: [{"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:31:36 compute-0 nova_compute[260935]: 2025-10-11 09:31:36.235 2 DEBUG oslo_concurrency.lockutils [req-56b822ba-0458-4270-8332-c3e0273d3fb9 req-8d68ab8f-13de-43a6-9ce4-1493590a799d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:31:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9c341e1ce616213a7a9bc13e4ed3227e23e3e4e8e7894380decc3abdafb962e-merged.mount: Deactivated successfully.
Oct 11 09:31:37 compute-0 podman[410883]: 2025-10-11 09:31:37.000714272 +0000 UTC m=+0.992136990 container remove 3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 09:31:37 compute-0 systemd[1]: libpod-conmon-3e873a2a907803512e47c8a59dd4e8bf7a9d01f047937def10297d93c089d980.scope: Deactivated successfully.
Oct 11 09:31:37 compute-0 sudo[410725]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:31:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2699: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:31:37 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:31:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:31:37 compute-0 ceph-mon[74313]: pgmap v2699: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:31:37 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:31:37 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 3556e753-0b34-465a-86f8-e0a0d7ac1b36 does not exist
Oct 11 09:31:37 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev dd61f4b1-7c83-46b3-9bdd-015097ff0e7c does not exist
Oct 11 09:31:37 compute-0 sudo[410900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:31:37 compute-0 sudo[410900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:37 compute-0 sudo[410900]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:37 compute-0 sudo[410925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:31:37 compute-0 sudo[410925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:31:37 compute-0 sudo[410925]: pam_unix(sudo:session): session closed for user root
Oct 11 09:31:38 compute-0 nova_compute[260935]: 2025-10-11 09:31:38.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:38 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:31:38 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:31:38 compute-0 nova_compute[260935]: 2025-10-11 09:31:38.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2700: 321 pgs: 321 active+clean; 457 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 534 KiB/s wr, 89 op/s
Oct 11 09:31:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:31:39 compute-0 ceph-mon[74313]: pgmap v2700: 321 pgs: 321 active+clean; 457 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 534 KiB/s wr, 89 op/s
Oct 11 09:31:39 compute-0 sshd-session[410950]: Invalid user admin from 165.232.82.252 port 48470
Oct 11 09:31:39 compute-0 sshd-session[410950]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:31:39 compute-0 sshd-session[410950]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:31:40 compute-0 ovn_controller[152945]: 2025-10-11T09:31:40Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:73:79:c8 10.100.0.6
Oct 11 09:31:40 compute-0 ovn_controller[152945]: 2025-10-11T09:31:40Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:73:79:c8 10.100.0.6
Oct 11 09:31:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2701: 321 pgs: 321 active+clean; 457 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 522 KiB/s wr, 15 op/s
Oct 11 09:31:41 compute-0 ceph-mon[74313]: pgmap v2701: 321 pgs: 321 active+clean; 457 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 522 KiB/s wr, 15 op/s
Oct 11 09:31:41 compute-0 nova_compute[260935]: 2025-10-11 09:31:41.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:31:41 compute-0 nova_compute[260935]: 2025-10-11 09:31:41.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:31:42 compute-0 sshd-session[410950]: Failed password for invalid user admin from 165.232.82.252 port 48470 ssh2
Oct 11 09:31:42 compute-0 nova_compute[260935]: 2025-10-11 09:31:42.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:31:42 compute-0 nova_compute[260935]: 2025-10-11 09:31:42.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:31:43 compute-0 sshd-session[410950]: Connection closed by invalid user admin 165.232.82.252 port 48470 [preauth]
Oct 11 09:31:43 compute-0 nova_compute[260935]: 2025-10-11 09:31:43.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2702: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:31:43 compute-0 ceph-mon[74313]: pgmap v2702: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:31:43 compute-0 nova_compute[260935]: 2025-10-11 09:31:43.497 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:31:43 compute-0 nova_compute[260935]: 2025-10-11 09:31:43.497 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:31:43 compute-0 nova_compute[260935]: 2025-10-11 09:31:43.498 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:31:43 compute-0 nova_compute[260935]: 2025-10-11 09:31:43.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.445358) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175104445404, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 486, "num_deletes": 257, "total_data_size": 412121, "memory_usage": 421544, "flush_reason": "Manual Compaction"}
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175104451229, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 408479, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56399, "largest_seqno": 56884, "table_properties": {"data_size": 405733, "index_size": 781, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6619, "raw_average_key_size": 18, "raw_value_size": 400109, "raw_average_value_size": 1120, "num_data_blocks": 34, "num_entries": 357, "num_filter_entries": 357, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175080, "oldest_key_time": 1760175080, "file_creation_time": 1760175104, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 5921 microseconds, and 3073 cpu microseconds.
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.451281) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 408479 bytes OK
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.451305) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.453114) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.453128) EVENT_LOG_v1 {"time_micros": 1760175104453123, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.453156) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 409206, prev total WAL file size 409206, number of live WAL files 2.
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.453782) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323539' seq:72057594037927935, type:22 .. '6C6F676D0032353132' seq:0, type:0; will stop at (end)
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(398KB)], [131(9603KB)]
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175104453935, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 10242346, "oldest_snapshot_seqno": -1}
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7495 keys, 10127045 bytes, temperature: kUnknown
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175104560461, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 10127045, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10077477, "index_size": 29743, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18757, "raw_key_size": 196500, "raw_average_key_size": 26, "raw_value_size": 9943999, "raw_average_value_size": 1326, "num_data_blocks": 1158, "num_entries": 7495, "num_filter_entries": 7495, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175104, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.560788) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 10127045 bytes
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.562312) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 96.1 rd, 95.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(49.9) write-amplify(24.8) OK, records in: 8021, records dropped: 526 output_compression: NoCompression
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.562340) EVENT_LOG_v1 {"time_micros": 1760175104562327, "job": 80, "event": "compaction_finished", "compaction_time_micros": 106612, "compaction_time_cpu_micros": 50027, "output_level": 6, "num_output_files": 1, "total_output_size": 10127045, "num_input_records": 8021, "num_output_records": 7495, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175104562618, "job": 80, "event": "table_file_deletion", "file_number": 133}
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175104565633, "job": 80, "event": "table_file_deletion", "file_number": 131}
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.453627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.565686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.565692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.565695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.565698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:31:44 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:31:44.565701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:31:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2703: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:31:45 compute-0 ceph-mon[74313]: pgmap v2703: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:31:45 compute-0 nova_compute[260935]: 2025-10-11 09:31:45.497 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:31:45 compute-0 nova_compute[260935]: 2025-10-11 09:31:45.539 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:31:45 compute-0 nova_compute[260935]: 2025-10-11 09:31:45.540 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:31:45 compute-0 nova_compute[260935]: 2025-10-11 09:31:45.541 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:31:45 compute-0 nova_compute[260935]: 2025-10-11 09:31:45.541 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:31:46 compute-0 nova_compute[260935]: 2025-10-11 09:31:46.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:31:46 compute-0 nova_compute[260935]: 2025-10-11 09:31:46.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:31:46 compute-0 nova_compute[260935]: 2025-10-11 09:31:46.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:31:46 compute-0 nova_compute[260935]: 2025-10-11 09:31:46.735 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:46 compute-0 nova_compute[260935]: 2025-10-11 09:31:46.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:46 compute-0 nova_compute[260935]: 2025-10-11 09:31:46.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:46 compute-0 nova_compute[260935]: 2025-10-11 09:31:46.738 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:31:46 compute-0 nova_compute[260935]: 2025-10-11 09:31:46.739 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:31:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2704: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:31:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:31:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2877184816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.252 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.376 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.377 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.377 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.384 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.384 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.391 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.391 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.397 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.398 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.404 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.404 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.661 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.662 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2432MB free_disk=59.739715576171875GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.662 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.662 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.757 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.757 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.757 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.757 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 811aca81-b712-4b96-a66c-8108b7791b3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.757 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 30069bfa-2edc-4f4c-b685-233b65a11de1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.758 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.758 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:31:47 compute-0 nova_compute[260935]: 2025-10-11 09:31:47.877 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:31:48 compute-0 nova_compute[260935]: 2025-10-11 09:31:48.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:48 compute-0 ceph-mon[74313]: pgmap v2704: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:31:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2877184816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:31:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:31:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1368239932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:31:48 compute-0 nova_compute[260935]: 2025-10-11 09:31:48.341 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:31:48 compute-0 nova_compute[260935]: 2025-10-11 09:31:48.348 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:31:48 compute-0 nova_compute[260935]: 2025-10-11 09:31:48.368 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:31:48 compute-0 nova_compute[260935]: 2025-10-11 09:31:48.395 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:31:48 compute-0 nova_compute[260935]: 2025-10-11 09:31:48.395 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:48 compute-0 nova_compute[260935]: 2025-10-11 09:31:48.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2705: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:31:49 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1368239932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:31:49 compute-0 ceph-mon[74313]: pgmap v2705: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:31:49 compute-0 nova_compute[260935]: 2025-10-11 09:31:49.391 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:31:49 compute-0 nova_compute[260935]: 2025-10-11 09:31:49.421 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:31:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:31:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2706: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 1.6 MiB/s wr, 48 op/s
Oct 11 09:31:51 compute-0 ceph-mon[74313]: pgmap v2706: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 1.6 MiB/s wr, 48 op/s
Oct 11 09:31:51 compute-0 nova_compute[260935]: 2025-10-11 09:31:51.910 2 DEBUG nova.compute.manager [req-8c040527-bde1-44af-91c3-be40390a8452 req-226b263e-02cd-4966-81a5-de0258ff06b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-changed-faf6491c-2fe9-4559-bb38-e0b4068de1b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:31:51 compute-0 nova_compute[260935]: 2025-10-11 09:31:51.911 2 DEBUG nova.compute.manager [req-8c040527-bde1-44af-91c3-be40390a8452 req-226b263e-02cd-4966-81a5-de0258ff06b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Refreshing instance network info cache due to event network-changed-faf6491c-2fe9-4559-bb38-e0b4068de1b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:31:51 compute-0 nova_compute[260935]: 2025-10-11 09:31:51.911 2 DEBUG oslo_concurrency.lockutils [req-8c040527-bde1-44af-91c3-be40390a8452 req-226b263e-02cd-4966-81a5-de0258ff06b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:31:51 compute-0 nova_compute[260935]: 2025-10-11 09:31:51.911 2 DEBUG oslo_concurrency.lockutils [req-8c040527-bde1-44af-91c3-be40390a8452 req-226b263e-02cd-4966-81a5-de0258ff06b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:31:51 compute-0 nova_compute[260935]: 2025-10-11 09:31:51.911 2 DEBUG nova.network.neutron [req-8c040527-bde1-44af-91c3-be40390a8452 req-226b263e-02cd-4966-81a5-de0258ff06b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Refreshing network info cache for port faf6491c-2fe9-4559-bb38-e0b4068de1b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.011 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "30069bfa-2edc-4f4c-b685-233b65a11de1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.011 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.011 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.011 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.012 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.013 2 INFO nova.compute.manager [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Terminating instance
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.014 2 DEBUG nova.compute.manager [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:31:52 compute-0 kernel: tapfaf6491c-2f (unregistering): left promiscuous mode
Oct 11 09:31:52 compute-0 NetworkManager[44960]: <info>  [1760175112.0745] device (tapfaf6491c-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:31:52 compute-0 ovn_controller[152945]: 2025-10-11T09:31:52Z|01521|binding|INFO|Releasing lport faf6491c-2fe9-4559-bb38-e0b4068de1b5 from this chassis (sb_readonly=0)
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:52 compute-0 ovn_controller[152945]: 2025-10-11T09:31:52Z|01522|binding|INFO|Setting lport faf6491c-2fe9-4559-bb38-e0b4068de1b5 down in Southbound
Oct 11 09:31:52 compute-0 ovn_controller[152945]: 2025-10-11T09:31:52Z|01523|binding|INFO|Removing iface tapfaf6491c-2f ovn-installed in OVS
Oct 11 09:31:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.103 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:79:c8 10.100.0.6 2001:db8:0:1:f816:3eff:fe73:79c8 2001:db8::f816:3eff:fe73:79c8'], port_security=['fa:16:3e:73:79:c8 10.100.0.6 2001:db8:0:1:f816:3eff:fe73:79c8 2001:db8::f816:3eff:fe73:79c8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:fe73:79c8/64 2001:db8::f816:3eff:fe73:79c8/64', 'neutron:device_id': '30069bfa-2edc-4f4c-b685-233b65a11de1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b3d6b51-0356-4b0d-af67-6a82e6bb09bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09ee216c-15c4-4f81-ac68-20fd45768bc8, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=faf6491c-2fe9-4559-bb38-e0b4068de1b5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.108 162815 INFO neutron.agent.ovn.metadata.agent [-] Port faf6491c-2fe9-4559-bb38-e0b4068de1b5 in datapath b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 unbound from our chassis
Oct 11 09:31:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.112 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8c83697-e12f-4ca4-8bd5-07c36e7b45a8
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:52 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000088.scope: Deactivated successfully.
Oct 11 09:31:52 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000088.scope: Consumed 13.907s CPU time.
Oct 11 09:31:52 compute-0 systemd-machined[215705]: Machine qemu-160-instance-00000088 terminated.
Oct 11 09:31:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.140 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5b6b31-2835-42a3-8c18-5704e04c9d84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.173 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[848b0093-d480-4573-8dc0-1ad370d8dc53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.177 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6f312a59-ad6a-4999-b481-a133c88bdeb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:52 compute-0 podman[410998]: 2025-10-11 09:31:52.180205003 +0000 UTC m=+0.067753863 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 11 09:31:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.212 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6fa40e-8fae-43f2-9007-09d96c755ec6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.230 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eb16f595-f7f5-43f2-a894-6c2b1795cef0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8c83697-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:5b:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3628, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3628, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689596, 'reachable_time': 36408, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411024, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.253 2 INFO nova.virt.libvirt.driver [-] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Instance destroyed successfully.
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.254 2 DEBUG nova.objects.instance [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 30069bfa-2edc-4f4c-b685-233b65a11de1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:31:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.255 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ea78a73c-1cee-4141-9585-26537beefc44]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb8c83697-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689607, 'tstamp': 689607}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411028, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb8c83697-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689611, 'tstamp': 689611}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411028, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.257 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8c83697-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.264 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8c83697-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:31:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.265 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:31:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.265 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8c83697-e0, col_values=(('external_ids', {'iface-id': '3d1ec619-c133-47e3-8121-0a84b73ae16e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:31:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:52.266 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.270 2 DEBUG nova.virt.libvirt.vif [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:31:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1320880782',display_name='tempest-TestGettingAddress-server-1320880782',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1320880782',id=136,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHRstPG1ot2VOcbBUXtJ+eOOOPIpYxS1823a8lgCgglrjvAd3j//+qkav+rN6NO2rJV6NwAQSbhFZXjoVb6gdqi2VPWucmakmAAgqAmAhQOUxzhrLgrzGURRlRuM8US+xA==',key_name='tempest-TestGettingAddress-264191416',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:31:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-fmtjcp7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:31:25Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=30069bfa-2edc-4f4c-b685-233b65a11de1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.271 2 DEBUG nova.network.os_vif_util [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.271 2 DEBUG nova.network.os_vif_util [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:73:79:c8,bridge_name='br-int',has_traffic_filtering=True,id=faf6491c-2fe9-4559-bb38-e0b4068de1b5,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf6491c-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.272 2 DEBUG os_vif [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:79:c8,bridge_name='br-int',has_traffic_filtering=True,id=faf6491c-2fe9-4559-bb38-e0b4068de1b5,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf6491c-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaf6491c-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.279 2 INFO os_vif [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:79:c8,bridge_name='br-int',has_traffic_filtering=True,id=faf6491c-2fe9-4559-bb38-e0b4068de1b5,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf6491c-2f')
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.709 2 DEBUG nova.compute.manager [req-ba260f11-0c56-40ea-a357-10fb5faf0c89 req-04fc610f-252e-436e-bf06-eb234ee28535 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-vif-unplugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.710 2 DEBUG oslo_concurrency.lockutils [req-ba260f11-0c56-40ea-a357-10fb5faf0c89 req-04fc610f-252e-436e-bf06-eb234ee28535 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.710 2 DEBUG oslo_concurrency.lockutils [req-ba260f11-0c56-40ea-a357-10fb5faf0c89 req-04fc610f-252e-436e-bf06-eb234ee28535 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.711 2 DEBUG oslo_concurrency.lockutils [req-ba260f11-0c56-40ea-a357-10fb5faf0c89 req-04fc610f-252e-436e-bf06-eb234ee28535 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.711 2 DEBUG nova.compute.manager [req-ba260f11-0c56-40ea-a357-10fb5faf0c89 req-04fc610f-252e-436e-bf06-eb234ee28535 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] No waiting events found dispatching network-vif-unplugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.712 2 DEBUG nova.compute.manager [req-ba260f11-0c56-40ea-a357-10fb5faf0c89 req-04fc610f-252e-436e-bf06-eb234ee28535 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-vif-unplugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.749 2 INFO nova.virt.libvirt.driver [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Deleting instance files /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1_del
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.750 2 INFO nova.virt.libvirt.driver [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Deletion of /var/lib/nova/instances/30069bfa-2edc-4f4c-b685-233b65a11de1_del complete
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.815 2 INFO nova.compute.manager [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Took 0.80 seconds to destroy the instance on the hypervisor.
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.815 2 DEBUG oslo.service.loopingcall [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.816 2 DEBUG nova.compute.manager [-] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:31:52 compute-0 nova_compute[260935]: 2025-10-11 09:31:52.816 2 DEBUG nova.network.neutron [-] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:31:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2707: 321 pgs: 321 active+clean; 467 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 216 KiB/s rd, 1.6 MiB/s wr, 57 op/s
Oct 11 09:31:53 compute-0 ceph-mon[74313]: pgmap v2707: 321 pgs: 321 active+clean; 467 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 216 KiB/s rd, 1.6 MiB/s wr, 57 op/s
Oct 11 09:31:53 compute-0 nova_compute[260935]: 2025-10-11 09:31:53.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:53 compute-0 nova_compute[260935]: 2025-10-11 09:31:53.800 2 DEBUG nova.network.neutron [-] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:31:53 compute-0 nova_compute[260935]: 2025-10-11 09:31:53.835 2 INFO nova.compute.manager [-] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Took 1.02 seconds to deallocate network for instance.
Oct 11 09:31:53 compute-0 nova_compute[260935]: 2025-10-11 09:31:53.921 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:53 compute-0 nova_compute[260935]: 2025-10-11 09:31:53.922 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:54 compute-0 nova_compute[260935]: 2025-10-11 09:31:54.037 2 DEBUG nova.compute.manager [req-743cc7eb-7219-4ed2-a9cb-748eee1081c6 req-7f390076-d48a-4fe6-ab57-e979f76fa6fb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-vif-deleted-faf6491c-2fe9-4559-bb38-e0b4068de1b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:31:54 compute-0 nova_compute[260935]: 2025-10-11 09:31:54.106 2 DEBUG oslo_concurrency.processutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:31:54 compute-0 nova_compute[260935]: 2025-10-11 09:31:54.414 2 DEBUG nova.network.neutron [req-8c040527-bde1-44af-91c3-be40390a8452 req-226b263e-02cd-4966-81a5-de0258ff06b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Updated VIF entry in instance network info cache for port faf6491c-2fe9-4559-bb38-e0b4068de1b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:31:54 compute-0 nova_compute[260935]: 2025-10-11 09:31:54.416 2 DEBUG nova.network.neutron [req-8c040527-bde1-44af-91c3-be40390a8452 req-226b263e-02cd-4966-81a5-de0258ff06b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Updating instance_info_cache with network_info: [{"id": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "address": "fa:16:3e:73:79:c8", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe73:79c8", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf6491c-2f", "ovs_interfaceid": "faf6491c-2fe9-4559-bb38-e0b4068de1b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:31:54 compute-0 nova_compute[260935]: 2025-10-11 09:31:54.435 2 DEBUG oslo_concurrency.lockutils [req-8c040527-bde1-44af-91c3-be40390a8452 req-226b263e-02cd-4966-81a5-de0258ff06b5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-30069bfa-2edc-4f4c-b685-233b65a11de1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:31:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:31:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:31:54 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3384131280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:31:54 compute-0 nova_compute[260935]: 2025-10-11 09:31:54.593 2 DEBUG oslo_concurrency.processutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:31:54 compute-0 nova_compute[260935]: 2025-10-11 09:31:54.602 2 DEBUG nova.compute.provider_tree [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:31:54 compute-0 nova_compute[260935]: 2025-10-11 09:31:54.619 2 DEBUG nova.scheduler.client.report [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:31:54 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3384131280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:31:54 compute-0 nova_compute[260935]: 2025-10-11 09:31:54.646 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:54 compute-0 nova_compute[260935]: 2025-10-11 09:31:54.673 2 INFO nova.scheduler.client.report [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 30069bfa-2edc-4f4c-b685-233b65a11de1
Oct 11 09:31:54 compute-0 nova_compute[260935]: 2025-10-11 09:31:54.744 2 DEBUG oslo_concurrency.lockutils [None req-bbc8b334-050d-4cdc-81f4-62e98d796d1f 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:54 compute-0 nova_compute[260935]: 2025-10-11 09:31:54.802 2 DEBUG nova.compute.manager [req-db361101-51c2-4885-90c6-bb480c3ca96d req-32a4335f-940f-48af-bee6-76208c4d5e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received event network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:31:54 compute-0 nova_compute[260935]: 2025-10-11 09:31:54.803 2 DEBUG oslo_concurrency.lockutils [req-db361101-51c2-4885-90c6-bb480c3ca96d req-32a4335f-940f-48af-bee6-76208c4d5e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:54 compute-0 nova_compute[260935]: 2025-10-11 09:31:54.803 2 DEBUG oslo_concurrency.lockutils [req-db361101-51c2-4885-90c6-bb480c3ca96d req-32a4335f-940f-48af-bee6-76208c4d5e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:54 compute-0 nova_compute[260935]: 2025-10-11 09:31:54.804 2 DEBUG oslo_concurrency.lockutils [req-db361101-51c2-4885-90c6-bb480c3ca96d req-32a4335f-940f-48af-bee6-76208c4d5e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "30069bfa-2edc-4f4c-b685-233b65a11de1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:54 compute-0 nova_compute[260935]: 2025-10-11 09:31:54.804 2 DEBUG nova.compute.manager [req-db361101-51c2-4885-90c6-bb480c3ca96d req-32a4335f-940f-48af-bee6-76208c4d5e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] No waiting events found dispatching network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:31:54 compute-0 nova_compute[260935]: 2025-10-11 09:31:54.804 2 WARNING nova.compute.manager [req-db361101-51c2-4885-90c6-bb480c3ca96d req-32a4335f-940f-48af-bee6-76208c4d5e57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Received unexpected event network-vif-plugged-faf6491c-2fe9-4559-bb38-e0b4068de1b5 for instance with vm_state deleted and task_state None.
Oct 11 09:31:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:31:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:31:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:31:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:31:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:31:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:31:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:31:54
Oct 11 09:31:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:31:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:31:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.meta', '.mgr', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.control', '.rgw.root']
Oct 11 09:31:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:31:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:31:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 41K writes, 164K keys, 41K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 41K writes, 15K syncs, 2.75 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4946 writes, 21K keys, 4946 commit groups, 1.0 writes per commit group, ingest: 24.70 MB, 0.04 MB/s
                                           Interval WAL: 4946 writes, 1807 syncs, 2.74 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 09:31:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2708: 321 pgs: 321 active+clean; 467 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 20 KiB/s wr, 9 op/s
Oct 11 09:31:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:31:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:31:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:31:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:31:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:31:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:31:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:31:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:31:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:31:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:31:55 compute-0 ceph-mon[74313]: pgmap v2708: 321 pgs: 321 active+clean; 467 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 20 KiB/s wr, 9 op/s
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.193 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "811aca81-b712-4b96-a66c-8108b7791b3c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.194 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.194 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.195 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.195 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.197 2 INFO nova.compute.manager [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Terminating instance
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.199 2 DEBUG nova.compute.manager [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:31:56 compute-0 kernel: tap41c97ff4-4a (unregistering): left promiscuous mode
Oct 11 09:31:56 compute-0 NetworkManager[44960]: <info>  [1760175116.4736] device (tap41c97ff4-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:31:56 compute-0 ovn_controller[152945]: 2025-10-11T09:31:56Z|01524|binding|INFO|Releasing lport 41c97ff4-4ad5-4d35-ac33-d083a904f55a from this chassis (sb_readonly=0)
Oct 11 09:31:56 compute-0 ovn_controller[152945]: 2025-10-11T09:31:56Z|01525|binding|INFO|Setting lport 41c97ff4-4ad5-4d35-ac33-d083a904f55a down in Southbound
Oct 11 09:31:56 compute-0 ovn_controller[152945]: 2025-10-11T09:31:56Z|01526|binding|INFO|Removing iface tap41c97ff4-4a ovn-installed in OVS
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.540 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:b0:e4 10.100.0.14 2001:db8:0:1:f816:3eff:fee6:b0e4 2001:db8::f816:3eff:fee6:b0e4'], port_security=['fa:16:3e:e6:b0:e4 10.100.0.14 2001:db8:0:1:f816:3eff:fee6:b0e4 2001:db8::f816:3eff:fee6:b0e4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fee6:b0e4/64 2001:db8::f816:3eff:fee6:b0e4/64', 'neutron:device_id': '811aca81-b712-4b96-a66c-8108b7791b3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b3d6b51-0356-4b0d-af67-6a82e6bb09bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09ee216c-15c4-4f81-ac68-20fd45768bc8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=41c97ff4-4ad5-4d35-ac33-d083a904f55a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:31:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.543 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 41c97ff4-4ad5-4d35-ac33-d083a904f55a in datapath b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 unbound from our chassis
Oct 11 09:31:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.545 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:31:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.547 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b1bad010-04f0-4499-ae91-9642c42cd27c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.548 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 namespace which is not needed anymore
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:56 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d00000087.scope: Deactivated successfully.
Oct 11 09:31:56 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d00000087.scope: Consumed 15.990s CPU time.
Oct 11 09:31:56 compute-0 systemd-machined[215705]: Machine qemu-159-instance-00000087 terminated.
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.647 2 INFO nova.virt.libvirt.driver [-] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Instance destroyed successfully.
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.648 2 DEBUG nova.objects.instance [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 811aca81-b712-4b96-a66c-8108b7791b3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.676 2 DEBUG nova.virt.libvirt.vif [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:30:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-739529148',display_name='tempest-TestGettingAddress-server-739529148',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-739529148',id=135,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHRstPG1ot2VOcbBUXtJ+eOOOPIpYxS1823a8lgCgglrjvAd3j//+qkav+rN6NO2rJV6NwAQSbhFZXjoVb6gdqi2VPWucmakmAAgqAmAhQOUxzhrLgrzGURRlRuM8US+xA==',key_name='tempest-TestGettingAddress-264191416',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:30:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-jlin4bys',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:30:52Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=811aca81-b712-4b96-a66c-8108b7791b3c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.677 2 DEBUG nova.network.os_vif_util [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.679 2 DEBUG nova.network.os_vif_util [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:b0:e4,bridge_name='br-int',has_traffic_filtering=True,id=41c97ff4-4ad5-4d35-ac33-d083a904f55a,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c97ff4-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.679 2 DEBUG os_vif [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:b0:e4,bridge_name='br-int',has_traffic_filtering=True,id=41c97ff4-4ad5-4d35-ac33-d083a904f55a,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c97ff4-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.683 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41c97ff4-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.693 2 INFO os_vif [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:b0:e4,bridge_name='br-int',has_traffic_filtering=True,id=41c97ff4-4ad5-4d35-ac33-d083a904f55a,network=Network(b8c83697-e12f-4ca4-8bd5-07c36e7b45a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41c97ff4-4a')
Oct 11 09:31:56 compute-0 neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8[409505]: [NOTICE]   (409509) : haproxy version is 2.8.14-c23fe91
Oct 11 09:31:56 compute-0 neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8[409505]: [NOTICE]   (409509) : path to executable is /usr/sbin/haproxy
Oct 11 09:31:56 compute-0 neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8[409505]: [WARNING]  (409509) : Exiting Master process...
Oct 11 09:31:56 compute-0 neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8[409505]: [WARNING]  (409509) : Exiting Master process...
Oct 11 09:31:56 compute-0 neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8[409505]: [ALERT]    (409509) : Current worker (409511) exited with code 143 (Terminated)
Oct 11 09:31:56 compute-0 neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8[409505]: [WARNING]  (409509) : All workers exited. Exiting... (0)
Oct 11 09:31:56 compute-0 systemd[1]: libpod-cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9.scope: Deactivated successfully.
Oct 11 09:31:56 compute-0 podman[411113]: 2025-10-11 09:31:56.75550535 +0000 UTC m=+0.083266001 container died cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:31:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9-userdata-shm.mount: Deactivated successfully.
Oct 11 09:31:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac5db13615b0c34d4d58a71f6df0fd0ba5744113047ad9855ac3fe68117d7b25-merged.mount: Deactivated successfully.
Oct 11 09:31:56 compute-0 podman[411113]: 2025-10-11 09:31:56.827006737 +0000 UTC m=+0.154767388 container cleanup cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:31:56 compute-0 systemd[1]: libpod-conmon-cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9.scope: Deactivated successfully.
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.877 2 DEBUG nova.compute.manager [req-1c1d2c9b-d441-46ca-84f6-7fdecf28ae3c req-a5e24557-dec2-472d-92d8-ada7d1367806 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-vif-unplugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.878 2 DEBUG oslo_concurrency.lockutils [req-1c1d2c9b-d441-46ca-84f6-7fdecf28ae3c req-a5e24557-dec2-472d-92d8-ada7d1367806 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.878 2 DEBUG oslo_concurrency.lockutils [req-1c1d2c9b-d441-46ca-84f6-7fdecf28ae3c req-a5e24557-dec2-472d-92d8-ada7d1367806 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.879 2 DEBUG oslo_concurrency.lockutils [req-1c1d2c9b-d441-46ca-84f6-7fdecf28ae3c req-a5e24557-dec2-472d-92d8-ada7d1367806 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.879 2 DEBUG nova.compute.manager [req-1c1d2c9b-d441-46ca-84f6-7fdecf28ae3c req-a5e24557-dec2-472d-92d8-ada7d1367806 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] No waiting events found dispatching network-vif-unplugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.880 2 DEBUG nova.compute.manager [req-1c1d2c9b-d441-46ca-84f6-7fdecf28ae3c req-a5e24557-dec2-472d-92d8-ada7d1367806 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-vif-unplugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.910 2 DEBUG nova.compute.manager [req-11bce2fa-3853-45b0-b808-154a5e45ca83 req-ccd0c56f-5d47-48c1-8e05-2c26e414cd13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-changed-41c97ff4-4ad5-4d35-ac33-d083a904f55a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.911 2 DEBUG nova.compute.manager [req-11bce2fa-3853-45b0-b808-154a5e45ca83 req-ccd0c56f-5d47-48c1-8e05-2c26e414cd13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Refreshing instance network info cache due to event network-changed-41c97ff4-4ad5-4d35-ac33-d083a904f55a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.912 2 DEBUG oslo_concurrency.lockutils [req-11bce2fa-3853-45b0-b808-154a5e45ca83 req-ccd0c56f-5d47-48c1-8e05-2c26e414cd13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.912 2 DEBUG oslo_concurrency.lockutils [req-11bce2fa-3853-45b0-b808-154a5e45ca83 req-ccd0c56f-5d47-48c1-8e05-2c26e414cd13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.912 2 DEBUG nova.network.neutron [req-11bce2fa-3853-45b0-b808-154a5e45ca83 req-ccd0c56f-5d47-48c1-8e05-2c26e414cd13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Refreshing network info cache for port 41c97ff4-4ad5-4d35-ac33-d083a904f55a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:31:56 compute-0 podman[411163]: 2025-10-11 09:31:56.918277873 +0000 UTC m=+0.064002117 container remove cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:31:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.933 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ca0e26-9d46-4b7c-b907-df5de5294f03]: (4, ('Sat Oct 11 09:31:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 (cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9)\ncef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9\nSat Oct 11 09:31:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 (cef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9)\ncef6679f7f0fa451ed298ba0c50459d4b4021c71f328dd9432072e9dd2a9fab9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.936 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[82deefca-79ce-4d76-994d-66fd6ec3cb5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.937 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8c83697-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:56 compute-0 kernel: tapb8c83697-e0: left promiscuous mode
Oct 11 09:31:56 compute-0 nova_compute[260935]: 2025-10-11 09:31:56.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.971 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef33fa3-fcd0-4379-8dfe-873ef101108c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:56 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.998 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[26742deb-8534-4a4a-977e-11adb2683c80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:56.999 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4cdafb-b5e0-47f8-a4d1-4374e7cb52b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:57.020 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fa322211-8df8-4b1a-af3d-6befb4bb4aec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689588, 'reachable_time': 33954, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411179, 'error': None, 'target': 'ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:57.024 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b8c83697-e12f-4ca4-8bd5-07c36e7b45a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:31:57 compute-0 systemd[1]: run-netns-ovnmeta\x2db8c83697\x2de12f\x2d4ca4\x2d8bd5\x2d07c36e7b45a8.mount: Deactivated successfully.
Oct 11 09:31:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:57.024 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[9735fe78-a78f-4adb-8186-64f7cb6817e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:31:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:57.026 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:31:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:31:57.027 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:31:57 compute-0 nova_compute[260935]: 2025-10-11 09:31:57.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2709: 321 pgs: 321 active+clean; 467 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 20 KiB/s wr, 9 op/s
Oct 11 09:31:57 compute-0 ceph-mon[74313]: pgmap v2709: 321 pgs: 321 active+clean; 467 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 20 KiB/s wr, 9 op/s
Oct 11 09:31:57 compute-0 nova_compute[260935]: 2025-10-11 09:31:57.224 2 INFO nova.virt.libvirt.driver [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Deleting instance files /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c_del
Oct 11 09:31:57 compute-0 nova_compute[260935]: 2025-10-11 09:31:57.225 2 INFO nova.virt.libvirt.driver [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Deletion of /var/lib/nova/instances/811aca81-b712-4b96-a66c-8108b7791b3c_del complete
Oct 11 09:31:57 compute-0 nova_compute[260935]: 2025-10-11 09:31:57.325 2 INFO nova.compute.manager [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Took 1.13 seconds to destroy the instance on the hypervisor.
Oct 11 09:31:57 compute-0 nova_compute[260935]: 2025-10-11 09:31:57.326 2 DEBUG oslo.service.loopingcall [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:31:57 compute-0 nova_compute[260935]: 2025-10-11 09:31:57.327 2 DEBUG nova.compute.manager [-] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:31:57 compute-0 nova_compute[260935]: 2025-10-11 09:31:57.327 2 DEBUG nova.network.neutron [-] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.112 2 DEBUG nova.network.neutron [-] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.135 2 INFO nova.compute.manager [-] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Took 0.81 seconds to deallocate network for instance.
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.189 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.190 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.297 2 DEBUG oslo_concurrency.processutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:31:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:31:58 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3955888215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.787 2 DEBUG oslo_concurrency.processutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.794 2 DEBUG nova.compute.provider_tree [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.815 2 DEBUG nova.scheduler.client.report [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:31:58 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3955888215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.849 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.883 2 INFO nova.scheduler.client.report [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 811aca81-b712-4b96-a66c-8108b7791b3c
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.971 2 DEBUG nova.compute.manager [req-266c5b87-5618-4929-b9f1-bd74dd6459e0 req-53492b09-c773-4d40-9316-2abcc5af7acb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.971 2 DEBUG oslo_concurrency.lockutils [req-266c5b87-5618-4929-b9f1-bd74dd6459e0 req-53492b09-c773-4d40-9316-2abcc5af7acb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.972 2 DEBUG oslo_concurrency.lockutils [req-266c5b87-5618-4929-b9f1-bd74dd6459e0 req-53492b09-c773-4d40-9316-2abcc5af7acb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.972 2 DEBUG oslo_concurrency.lockutils [req-266c5b87-5618-4929-b9f1-bd74dd6459e0 req-53492b09-c773-4d40-9316-2abcc5af7acb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.972 2 DEBUG nova.compute.manager [req-266c5b87-5618-4929-b9f1-bd74dd6459e0 req-53492b09-c773-4d40-9316-2abcc5af7acb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] No waiting events found dispatching network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.972 2 WARNING nova.compute.manager [req-266c5b87-5618-4929-b9f1-bd74dd6459e0 req-53492b09-c773-4d40-9316-2abcc5af7acb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received unexpected event network-vif-plugged-41c97ff4-4ad5-4d35-ac33-d083a904f55a for instance with vm_state deleted and task_state None.
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.973 2 DEBUG nova.compute.manager [req-266c5b87-5618-4929-b9f1-bd74dd6459e0 req-53492b09-c773-4d40-9316-2abcc5af7acb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Received event network-vif-deleted-41c97ff4-4ad5-4d35-ac33-d083a904f55a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:31:58 compute-0 nova_compute[260935]: 2025-10-11 09:31:58.975 2 DEBUG oslo_concurrency.lockutils [None req-a4abe9b7-cc7a-4b53-a93e-408020c6fbe7 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "811aca81-b712-4b96-a66c-8108b7791b3c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:31:59 compute-0 nova_compute[260935]: 2025-10-11 09:31:59.067 2 DEBUG nova.network.neutron [req-11bce2fa-3853-45b0-b808-154a5e45ca83 req-ccd0c56f-5d47-48c1-8e05-2c26e414cd13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Updated VIF entry in instance network info cache for port 41c97ff4-4ad5-4d35-ac33-d083a904f55a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:31:59 compute-0 nova_compute[260935]: 2025-10-11 09:31:59.067 2 DEBUG nova.network.neutron [req-11bce2fa-3853-45b0-b808-154a5e45ca83 req-ccd0c56f-5d47-48c1-8e05-2c26e414cd13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Updating instance_info_cache with network_info: [{"id": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "address": "fa:16:3e:e6:b0:e4", "network": {"id": "b8c83697-e12f-4ca4-8bd5-07c36e7b45a8", "bridge": "br-int", "label": "tempest-network-smoke--75951287", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:b0e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41c97ff4-4a", "ovs_interfaceid": "41c97ff4-4ad5-4d35-ac33-d083a904f55a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": 
{}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:31:59 compute-0 nova_compute[260935]: 2025-10-11 09:31:59.088 2 DEBUG oslo_concurrency.lockutils [req-11bce2fa-3853-45b0-b808-154a5e45ca83 req-ccd0c56f-5d47-48c1-8e05-2c26e414cd13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-811aca81-b712-4b96-a66c-8108b7791b3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:31:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2710: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 25 KiB/s wr, 58 op/s
Oct 11 09:31:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:31:59 compute-0 podman[411202]: 2025-10-11 09:31:59.805366958 +0000 UTC m=+0.089217369 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:31:59 compute-0 ceph-mon[74313]: pgmap v2710: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 25 KiB/s wr, 58 op/s
Oct 11 09:32:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:32:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 43K writes, 162K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 43K writes, 15K syncs, 2.71 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4545 writes, 19K keys, 4545 commit groups, 1.0 writes per commit group, ingest: 21.41 MB, 0.04 MB/s
                                           Interval WAL: 4545 writes, 1749 syncs, 2.60 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 09:32:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2711: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 11 09:32:01 compute-0 ceph-mon[74313]: pgmap v2711: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 11 09:32:01 compute-0 nova_compute[260935]: 2025-10-11 09:32:01.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2712: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 11 09:32:03 compute-0 ceph-mon[74313]: pgmap v2712: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 11 09:32:03 compute-0 nova_compute[260935]: 2025-10-11 09:32:03.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:03 compute-0 podman[411223]: 2025-10-11 09:32:03.811400619 +0000 UTC m=+0.099997633 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct 11 09:32:03 compute-0 podman[411224]: 2025-10-11 09:32:03.850489312 +0000 UTC m=+0.135092523 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 11 09:32:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2713: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 4.7 KiB/s wr, 48 op/s
Oct 11 09:32:05 compute-0 ceph-mon[74313]: pgmap v2713: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 4.7 KiB/s wr, 48 op/s
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002627377275021163 of space, bias 1.0, pg target 0.788213182506349 quantized to 32 (current 32)
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:32:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:32:06 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:06.029 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:32:06 compute-0 ovn_controller[152945]: 2025-10-11T09:32:06Z|01527|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:32:06 compute-0 ovn_controller[152945]: 2025-10-11T09:32:06Z|01528|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:32:06 compute-0 nova_compute[260935]: 2025-10-11 09:32:06.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:32:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.3 total, 600.0 interval
                                           Cumulative writes: 33K writes, 131K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 33K writes, 11K syncs, 2.78 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3725 writes, 14K keys, 3725 commit groups, 1.0 writes per commit group, ingest: 17.79 MB, 0.03 MB/s
                                           Interval WAL: 3725 writes, 1461 syncs, 2.55 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 09:32:06 compute-0 ovn_controller[152945]: 2025-10-11T09:32:06Z|01529|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:32:06 compute-0 ovn_controller[152945]: 2025-10-11T09:32:06Z|01530|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:32:06 compute-0 nova_compute[260935]: 2025-10-11 09:32:06.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:06 compute-0 nova_compute[260935]: 2025-10-11 09:32:06.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2714: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 4.7 KiB/s wr, 48 op/s
Oct 11 09:32:07 compute-0 nova_compute[260935]: 2025-10-11 09:32:07.247 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175112.246012, 30069bfa-2edc-4f4c-b685-233b65a11de1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:32:07 compute-0 nova_compute[260935]: 2025-10-11 09:32:07.248 2 INFO nova.compute.manager [-] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] VM Stopped (Lifecycle Event)
Oct 11 09:32:07 compute-0 ceph-mon[74313]: pgmap v2714: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 4.7 KiB/s wr, 48 op/s
Oct 11 09:32:07 compute-0 nova_compute[260935]: 2025-10-11 09:32:07.303 2 DEBUG nova.compute.manager [None req-f3d1296f-b3f8-4421-914d-7e362d61e3ac - - - - - -] [instance: 30069bfa-2edc-4f4c-b685-233b65a11de1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:32:07 compute-0 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 09:32:08 compute-0 nova_compute[260935]: 2025-10-11 09:32:08.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2715: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 4.7 KiB/s wr, 48 op/s
Oct 11 09:32:09 compute-0 ceph-mon[74313]: pgmap v2715: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 4.7 KiB/s wr, 48 op/s
Oct 11 09:32:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:32:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2716: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:11 compute-0 ceph-mon[74313]: pgmap v2716: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:11 compute-0 nova_compute[260935]: 2025-10-11 09:32:11.643 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175116.642373, 811aca81-b712-4b96-a66c-8108b7791b3c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:32:11 compute-0 nova_compute[260935]: 2025-10-11 09:32:11.644 2 INFO nova.compute.manager [-] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] VM Stopped (Lifecycle Event)
Oct 11 09:32:11 compute-0 nova_compute[260935]: 2025-10-11 09:32:11.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:11 compute-0 nova_compute[260935]: 2025-10-11 09:32:11.928 2 DEBUG nova.compute.manager [None req-695f903c-32f1-4b6b-b32b-bd60e9f2cd2e - - - - - -] [instance: 811aca81-b712-4b96-a66c-8108b7791b3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:32:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2717: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:13 compute-0 ceph-mon[74313]: pgmap v2717: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:13 compute-0 nova_compute[260935]: 2025-10-11 09:32:13.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:32:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2718: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:15 compute-0 ceph-mon[74313]: pgmap v2718: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:15.226 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:32:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:15.227 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:32:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:15.228 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:32:16 compute-0 nova_compute[260935]: 2025-10-11 09:32:16.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:17 compute-0 sshd-session[411270]: Invalid user admin from 165.232.82.252 port 40600
Oct 11 09:32:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2719: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:17 compute-0 ceph-mon[74313]: pgmap v2719: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:17 compute-0 sshd-session[411270]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:32:17 compute-0 sshd-session[411270]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:32:18 compute-0 nova_compute[260935]: 2025-10-11 09:32:18.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2720: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:19 compute-0 sshd-session[411270]: Failed password for invalid user admin from 165.232.82.252 port 40600 ssh2
Oct 11 09:32:19 compute-0 ceph-mon[74313]: pgmap v2720: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:32:20 compute-0 sshd-session[411270]: Connection closed by invalid user admin 165.232.82.252 port 40600 [preauth]
Oct 11 09:32:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2721: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:21 compute-0 ceph-mon[74313]: pgmap v2721: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:21 compute-0 nova_compute[260935]: 2025-10-11 09:32:21.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:22 compute-0 podman[411274]: 2025-10-11 09:32:22.78701125 +0000 UTC m=+0.085616398 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Oct 11 09:32:23 compute-0 sshd-session[411272]: Invalid user janus from 155.4.244.179 port 59199
Oct 11 09:32:23 compute-0 sshd-session[411272]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:32:23 compute-0 sshd-session[411272]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:32:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2722: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:23 compute-0 ceph-mon[74313]: pgmap v2722: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:23 compute-0 nova_compute[260935]: 2025-10-11 09:32:23.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:32:24 compute-0 nova_compute[260935]: 2025-10-11 09:32:24.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:32:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:32:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:32:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:32:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:32:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:32:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:32:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2723: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:25 compute-0 ceph-mon[74313]: pgmap v2723: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:25 compute-0 sshd-session[411272]: Failed password for invalid user janus from 155.4.244.179 port 59199 ssh2
Oct 11 09:32:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:32:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/553364881' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:32:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:32:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/553364881' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:32:26 compute-0 nova_compute[260935]: 2025-10-11 09:32:26.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/553364881' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:32:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/553364881' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:32:26 compute-0 sshd-session[411272]: Received disconnect from 155.4.244.179 port 59199:11: Bye Bye [preauth]
Oct 11 09:32:26 compute-0 sshd-session[411272]: Disconnected from invalid user janus 155.4.244.179 port 59199 [preauth]
Oct 11 09:32:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2724: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:27 compute-0 ceph-mon[74313]: pgmap v2724: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:28 compute-0 nova_compute[260935]: 2025-10-11 09:32:28.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2725: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:29 compute-0 ceph-mon[74313]: pgmap v2725: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:32:30 compute-0 podman[411294]: 2025-10-11 09:32:30.786090195 +0000 UTC m=+0.086449231 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:32:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2726: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:31 compute-0 ceph-mon[74313]: pgmap v2726: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:31 compute-0 nova_compute[260935]: 2025-10-11 09:32:31.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2727: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:33 compute-0 ceph-mon[74313]: pgmap v2727: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:33 compute-0 nova_compute[260935]: 2025-10-11 09:32:33.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:32:34 compute-0 podman[411314]: 2025-10-11 09:32:34.800949665 +0000 UTC m=+0.096943757 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 09:32:34 compute-0 podman[411315]: 2025-10-11 09:32:34.845104791 +0000 UTC m=+0.134944729 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:32:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2728: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:35 compute-0 ceph-mon[74313]: pgmap v2728: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:36.016 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:4e:e8 10.100.0.2 2001:db8::f816:3eff:fe09:4ee8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe09:4ee8/64', 'neutron:device_id': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84203be9-d0b1-4633-bcd6-51523ee507a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d927a0f8-4868-42ce-9cc2-00b73d34efaa, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8bbc5aca-a0c9-493f-8847-60a74f11dbe6) old=Port_Binding(mac=['fa:16:3e:09:4e:e8 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84203be9-d0b1-4633-bcd6-51523ee507a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:32:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:36.019 162815 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8bbc5aca-a0c9-493f-8847-60a74f11dbe6 in datapath 84203be9-d0b1-4633-bcd6-51523ee507a5 updated
Oct 11 09:32:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:36.023 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84203be9-d0b1-4633-bcd6-51523ee507a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:32:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:36.024 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a49569cc-d2d4-4e49-8e53-b11f3ca049a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:32:36 compute-0 nova_compute[260935]: 2025-10-11 09:32:36.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2729: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:37 compute-0 ceph-mon[74313]: pgmap v2729: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:38 compute-0 sudo[411358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:32:38 compute-0 sudo[411358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:38 compute-0 sudo[411358]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:38 compute-0 sudo[411383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:32:38 compute-0 sudo[411383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:38 compute-0 sudo[411383]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:38 compute-0 sudo[411408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:32:38 compute-0 sudo[411408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:38 compute-0 sudo[411408]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:38 compute-0 sudo[411433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:32:38 compute-0 sudo[411433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:38 compute-0 nova_compute[260935]: 2025-10-11 09:32:38.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:38 compute-0 sudo[411433]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:32:38 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:32:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:32:38 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:32:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:32:38 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:32:38 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 3e99f6d0-bbc5-4923-a239-0e7fdca95289 does not exist
Oct 11 09:32:38 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 40ca2200-d19d-4ba5-846c-a7047bbf149e does not exist
Oct 11 09:32:38 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ab5fee12-e89c-4212-99a4-8c7a38cbb614 does not exist
Oct 11 09:32:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:32:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:32:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:32:39 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:32:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:32:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:32:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:32:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:32:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:32:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:32:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:32:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:32:39 compute-0 sudo[411490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:32:39 compute-0 sudo[411490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:39 compute-0 sudo[411490]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2730: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:39 compute-0 sudo[411515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:32:39 compute-0 sudo[411515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:39 compute-0 sudo[411515]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:39 compute-0 sudo[411540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:32:39 compute-0 sudo[411540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:39 compute-0 sudo[411540]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:39 compute-0 sudo[411565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:32:39 compute-0 sudo[411565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:32:39 compute-0 podman[411630]: 2025-10-11 09:32:39.874093102 +0000 UTC m=+0.071868089 container create 3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hermann, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:32:39 compute-0 systemd[1]: Started libpod-conmon-3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643.scope.
Oct 11 09:32:39 compute-0 podman[411630]: 2025-10-11 09:32:39.845454224 +0000 UTC m=+0.043229291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:32:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:32:39 compute-0 podman[411630]: 2025-10-11 09:32:39.979154957 +0000 UTC m=+0.176929974 container init 3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hermann, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:32:39 compute-0 podman[411630]: 2025-10-11 09:32:39.991292658 +0000 UTC m=+0.189067655 container start 3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:32:39 compute-0 gracious_hermann[411647]: 167 167
Oct 11 09:32:39 compute-0 systemd[1]: libpod-3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643.scope: Deactivated successfully.
Oct 11 09:32:39 compute-0 podman[411630]: 2025-10-11 09:32:39.998997546 +0000 UTC m=+0.196772613 container attach 3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 09:32:40 compute-0 podman[411630]: 2025-10-11 09:32:39.999902251 +0000 UTC m=+0.197677288 container died 3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 09:32:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ce4818c9c590565f0439765da3b0361670fc31ef51e2b2aceccd483acaab9f1-merged.mount: Deactivated successfully.
Oct 11 09:32:40 compute-0 ceph-mon[74313]: pgmap v2730: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:40 compute-0 podman[411630]: 2025-10-11 09:32:40.062134048 +0000 UTC m=+0.259909075 container remove 3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 09:32:40 compute-0 systemd[1]: libpod-conmon-3a7650f5d2d177ed3efde067f4dcfae4611b22cdfa7e91a37d989abd3473c643.scope: Deactivated successfully.
Oct 11 09:32:40 compute-0 podman[411670]: 2025-10-11 09:32:40.291386357 +0000 UTC m=+0.062903996 container create 70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:32:40 compute-0 systemd[1]: Started libpod-conmon-70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce.scope.
Oct 11 09:32:40 compute-0 podman[411670]: 2025-10-11 09:32:40.270523549 +0000 UTC m=+0.042041188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:32:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3727ed4c9af0d85f0958b9b4e47d1f720f3745995c7c21713ed889e72483e2d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3727ed4c9af0d85f0958b9b4e47d1f720f3745995c7c21713ed889e72483e2d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3727ed4c9af0d85f0958b9b4e47d1f720f3745995c7c21713ed889e72483e2d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3727ed4c9af0d85f0958b9b4e47d1f720f3745995c7c21713ed889e72483e2d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3727ed4c9af0d85f0958b9b4e47d1f720f3745995c7c21713ed889e72483e2d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:32:40 compute-0 podman[411670]: 2025-10-11 09:32:40.395195317 +0000 UTC m=+0.166713026 container init 70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:32:40 compute-0 podman[411670]: 2025-10-11 09:32:40.414208103 +0000 UTC m=+0.185725762 container start 70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:32:40 compute-0 podman[411670]: 2025-10-11 09:32:40.41904536 +0000 UTC m=+0.190563059 container attach 70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 09:32:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2731: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:41 compute-0 ceph-mon[74313]: pgmap v2731: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:41 compute-0 cranky_haslett[411687]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:32:41 compute-0 cranky_haslett[411687]: --> relative data size: 1.0
Oct 11 09:32:41 compute-0 cranky_haslett[411687]: --> All data devices are unavailable
Oct 11 09:32:41 compute-0 systemd[1]: libpod-70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce.scope: Deactivated successfully.
Oct 11 09:32:41 compute-0 podman[411670]: 2025-10-11 09:32:41.598783523 +0000 UTC m=+1.370301172 container died 70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:32:41 compute-0 systemd[1]: libpod-70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce.scope: Consumed 1.107s CPU time.
Oct 11 09:32:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-3727ed4c9af0d85f0958b9b4e47d1f720f3745995c7c21713ed889e72483e2d7-merged.mount: Deactivated successfully.
Oct 11 09:32:41 compute-0 podman[411670]: 2025-10-11 09:32:41.662481601 +0000 UTC m=+1.433999220 container remove 70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:32:41 compute-0 systemd[1]: libpod-conmon-70cda5baaa1ea847a8b8031fadcd3b369e8af945cdc94f2f9c8ded48449e56ce.scope: Deactivated successfully.
Oct 11 09:32:41 compute-0 sudo[411565]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:41 compute-0 nova_compute[260935]: 2025-10-11 09:32:41.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:41 compute-0 sudo[411729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:32:41 compute-0 sudo[411729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:41 compute-0 sudo[411729]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:41 compute-0 sudo[411754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:32:41 compute-0 sudo[411754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:41 compute-0 sudo[411754]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:41 compute-0 sudo[411779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:32:41 compute-0 sudo[411779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:41 compute-0 sudo[411779]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:42 compute-0 sudo[411804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:32:42 compute-0 sudo[411804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:42 compute-0 podman[411868]: 2025-10-11 09:32:42.566182094 +0000 UTC m=+0.078433184 container create 729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cohen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:32:42 compute-0 systemd[1]: Started libpod-conmon-729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7.scope.
Oct 11 09:32:42 compute-0 podman[411868]: 2025-10-11 09:32:42.533021868 +0000 UTC m=+0.045273018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:32:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:32:42 compute-0 podman[411868]: 2025-10-11 09:32:42.667206695 +0000 UTC m=+0.179457815 container init 729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cohen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 09:32:42 compute-0 podman[411868]: 2025-10-11 09:32:42.678313549 +0000 UTC m=+0.190564639 container start 729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cohen, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 09:32:42 compute-0 podman[411868]: 2025-10-11 09:32:42.68227391 +0000 UTC m=+0.194525050 container attach 729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cohen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:32:42 compute-0 bold_cohen[411885]: 167 167
Oct 11 09:32:42 compute-0 systemd[1]: libpod-729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7.scope: Deactivated successfully.
Oct 11 09:32:42 compute-0 podman[411868]: 2025-10-11 09:32:42.685960664 +0000 UTC m=+0.198211754 container died 729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cohen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:32:42 compute-0 nova_compute[260935]: 2025-10-11 09:32:42.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:32:42 compute-0 nova_compute[260935]: 2025-10-11 09:32:42.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:32:42 compute-0 nova_compute[260935]: 2025-10-11 09:32:42.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:32:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-825862ed11853bdc7e158fa82f7bcc7e815dd22bd6f54d291bce73d78462c676-merged.mount: Deactivated successfully.
Oct 11 09:32:42 compute-0 podman[411868]: 2025-10-11 09:32:42.743739965 +0000 UTC m=+0.255991055 container remove 729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_cohen, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:32:42 compute-0 systemd[1]: libpod-conmon-729964ae30f2a23c405455a02aae4c8e6296b74cc94406cab5720aa22de1a2b7.scope: Deactivated successfully.
Oct 11 09:32:43 compute-0 podman[411908]: 2025-10-11 09:32:43.018039986 +0000 UTC m=+0.077047975 container create 4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_zhukovsky, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 09:32:43 compute-0 systemd[1]: Started libpod-conmon-4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7.scope.
Oct 11 09:32:43 compute-0 podman[411908]: 2025-10-11 09:32:42.984946642 +0000 UTC m=+0.043954701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:32:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbffc907665822d08e5886c855375799e4647b6ee8425bf158d4d7a15e5022/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbffc907665822d08e5886c855375799e4647b6ee8425bf158d4d7a15e5022/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbffc907665822d08e5886c855375799e4647b6ee8425bf158d4d7a15e5022/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fbffc907665822d08e5886c855375799e4647b6ee8425bf158d4d7a15e5022/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:32:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2732: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:43 compute-0 podman[411908]: 2025-10-11 09:32:43.128757211 +0000 UTC m=+0.187765270 container init 4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_zhukovsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 09:32:43 compute-0 podman[411908]: 2025-10-11 09:32:43.143166397 +0000 UTC m=+0.202174396 container start 4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_zhukovsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:32:43 compute-0 podman[411908]: 2025-10-11 09:32:43.148191999 +0000 UTC m=+0.207200008 container attach 4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_zhukovsky, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:32:43 compute-0 ceph-mon[74313]: pgmap v2732: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:43 compute-0 nova_compute[260935]: 2025-10-11 09:32:43.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:32:43 compute-0 nova_compute[260935]: 2025-10-11 09:32:43.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:32:43 compute-0 nova_compute[260935]: 2025-10-11 09:32:43.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]: {
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:     "0": [
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:         {
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "devices": [
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "/dev/loop3"
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             ],
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "lv_name": "ceph_lv0",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "lv_size": "21470642176",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "name": "ceph_lv0",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "tags": {
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.cluster_name": "ceph",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.crush_device_class": "",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.encrypted": "0",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.osd_id": "0",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.type": "block",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.vdo": "0"
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             },
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "type": "block",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "vg_name": "ceph_vg0"
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:         }
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:     ],
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:     "1": [
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:         {
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "devices": [
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "/dev/loop4"
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             ],
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "lv_name": "ceph_lv1",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "lv_size": "21470642176",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "name": "ceph_lv1",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "tags": {
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.cluster_name": "ceph",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.crush_device_class": "",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.encrypted": "0",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.osd_id": "1",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.type": "block",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.vdo": "0"
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             },
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "type": "block",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "vg_name": "ceph_vg1"
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:         }
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:     ],
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:     "2": [
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:         {
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "devices": [
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "/dev/loop5"
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             ],
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "lv_name": "ceph_lv2",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "lv_size": "21470642176",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "name": "ceph_lv2",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "tags": {
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.cluster_name": "ceph",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.crush_device_class": "",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.encrypted": "0",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.osd_id": "2",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.type": "block",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:                 "ceph.vdo": "0"
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             },
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "type": "block",
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:             "vg_name": "ceph_vg2"
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:         }
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]:     ]
Oct 11 09:32:43 compute-0 amazing_zhukovsky[411925]: }
Oct 11 09:32:43 compute-0 systemd[1]: libpod-4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7.scope: Deactivated successfully.
Oct 11 09:32:43 compute-0 podman[411908]: 2025-10-11 09:32:43.949952624 +0000 UTC m=+1.008960633 container died 4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:32:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8fbffc907665822d08e5886c855375799e4647b6ee8425bf158d4d7a15e5022-merged.mount: Deactivated successfully.
Oct 11 09:32:44 compute-0 podman[411908]: 2025-10-11 09:32:44.042361402 +0000 UTC m=+1.101369411 container remove 4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_zhukovsky, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:32:44 compute-0 systemd[1]: libpod-conmon-4c8b4769c9cf251c28436273031592b71571902570dac6baa4c3abe21e0a7ba7.scope: Deactivated successfully.
Oct 11 09:32:44 compute-0 sudo[411804]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:44 compute-0 sudo[411948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:32:44 compute-0 sudo[411948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:44 compute-0 sudo[411948]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:44 compute-0 sudo[411973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:32:44 compute-0 sudo[411973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:44 compute-0 sudo[411973]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:44 compute-0 sudo[411998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:32:44 compute-0 sudo[411998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:44 compute-0 sudo[411998]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:44 compute-0 sudo[412023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:32:44 compute-0 sudo[412023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:32:44 compute-0 nova_compute[260935]: 2025-10-11 09:32:44.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:32:44 compute-0 nova_compute[260935]: 2025-10-11 09:32:44.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:32:44 compute-0 podman[412089]: 2025-10-11 09:32:44.814004659 +0000 UTC m=+0.063482933 container create 76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:32:44 compute-0 systemd[1]: Started libpod-conmon-76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f.scope.
Oct 11 09:32:44 compute-0 podman[412089]: 2025-10-11 09:32:44.787253034 +0000 UTC m=+0.036731378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:32:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:32:44 compute-0 podman[412089]: 2025-10-11 09:32:44.928740467 +0000 UTC m=+0.178218731 container init 76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:32:44 compute-0 podman[412089]: 2025-10-11 09:32:44.94125913 +0000 UTC m=+0.190737384 container start 76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 09:32:44 compute-0 podman[412089]: 2025-10-11 09:32:44.945534591 +0000 UTC m=+0.195012835 container attach 76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:32:44 compute-0 exciting_sanderson[412105]: 167 167
Oct 11 09:32:44 compute-0 podman[412089]: 2025-10-11 09:32:44.949740429 +0000 UTC m=+0.199218713 container died 76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 09:32:44 compute-0 systemd[1]: libpod-76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f.scope: Deactivated successfully.
Oct 11 09:32:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b181b86b7689186e5b8179dec5d02332f745eb2a8d13446cdc837690be123970-merged.mount: Deactivated successfully.
Oct 11 09:32:45 compute-0 podman[412089]: 2025-10-11 09:32:45.012928773 +0000 UTC m=+0.262407027 container remove 76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:32:45 compute-0 systemd[1]: libpod-conmon-76c1f90201a0c8247d079c1678842e4a58fa6634b7b65fc7f37f3b2e640c599f.scope: Deactivated successfully.
Oct 11 09:32:45 compute-0 nova_compute[260935]: 2025-10-11 09:32:45.057 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:32:45 compute-0 nova_compute[260935]: 2025-10-11 09:32:45.058 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:32:45 compute-0 nova_compute[260935]: 2025-10-11 09:32:45.058 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:32:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2733: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:45 compute-0 ceph-mon[74313]: pgmap v2733: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:45 compute-0 podman[412128]: 2025-10-11 09:32:45.301118846 +0000 UTC m=+0.065681035 container create 571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:32:45 compute-0 systemd[1]: Started libpod-conmon-571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16.scope.
Oct 11 09:32:45 compute-0 podman[412128]: 2025-10-11 09:32:45.276583803 +0000 UTC m=+0.041146002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:32:45 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40d764e0753fe7b4fa4422efdc72e2cb8cb4cf09755914ac6af7fce1f47ea73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40d764e0753fe7b4fa4422efdc72e2cb8cb4cf09755914ac6af7fce1f47ea73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40d764e0753fe7b4fa4422efdc72e2cb8cb4cf09755914ac6af7fce1f47ea73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40d764e0753fe7b4fa4422efdc72e2cb8cb4cf09755914ac6af7fce1f47ea73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:32:45 compute-0 podman[412128]: 2025-10-11 09:32:45.435234771 +0000 UTC m=+0.199796960 container init 571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 09:32:45 compute-0 podman[412128]: 2025-10-11 09:32:45.448294019 +0000 UTC m=+0.212856168 container start 571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 09:32:45 compute-0 podman[412128]: 2025-10-11 09:32:45.45079967 +0000 UTC m=+0.215361839 container attach 571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 11 09:32:46 compute-0 clever_herschel[412145]: {
Oct 11 09:32:46 compute-0 clever_herschel[412145]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:32:46 compute-0 clever_herschel[412145]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:32:46 compute-0 clever_herschel[412145]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:32:46 compute-0 clever_herschel[412145]:         "osd_id": 2,
Oct 11 09:32:46 compute-0 clever_herschel[412145]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:32:46 compute-0 clever_herschel[412145]:         "type": "bluestore"
Oct 11 09:32:46 compute-0 clever_herschel[412145]:     },
Oct 11 09:32:46 compute-0 clever_herschel[412145]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:32:46 compute-0 clever_herschel[412145]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:32:46 compute-0 clever_herschel[412145]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:32:46 compute-0 clever_herschel[412145]:         "osd_id": 0,
Oct 11 09:32:46 compute-0 clever_herschel[412145]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:32:46 compute-0 clever_herschel[412145]:         "type": "bluestore"
Oct 11 09:32:46 compute-0 clever_herschel[412145]:     },
Oct 11 09:32:46 compute-0 clever_herschel[412145]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:32:46 compute-0 clever_herschel[412145]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:32:46 compute-0 clever_herschel[412145]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:32:46 compute-0 clever_herschel[412145]:         "osd_id": 1,
Oct 11 09:32:46 compute-0 clever_herschel[412145]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:32:46 compute-0 clever_herschel[412145]:         "type": "bluestore"
Oct 11 09:32:46 compute-0 clever_herschel[412145]:     }
Oct 11 09:32:46 compute-0 clever_herschel[412145]: }
Oct 11 09:32:46 compute-0 systemd[1]: libpod-571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16.scope: Deactivated successfully.
Oct 11 09:32:46 compute-0 systemd[1]: libpod-571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16.scope: Consumed 1.104s CPU time.
Oct 11 09:32:46 compute-0 podman[412178]: 2025-10-11 09:32:46.586347256 +0000 UTC m=+0.027826916 container died 571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:32:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c40d764e0753fe7b4fa4422efdc72e2cb8cb4cf09755914ac6af7fce1f47ea73-merged.mount: Deactivated successfully.
Oct 11 09:32:46 compute-0 podman[412178]: 2025-10-11 09:32:46.645073673 +0000 UTC m=+0.086553293 container remove 571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:32:46 compute-0 systemd[1]: libpod-conmon-571dbf7d3aab56c3199bca03a21d2278f71249780a09738f43e5bb770c5d9c16.scope: Deactivated successfully.
Oct 11 09:32:46 compute-0 sudo[412023]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:46 compute-0 nova_compute[260935]: 2025-10-11 09:32:46.698 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:32:46 compute-0 nova_compute[260935]: 2025-10-11 09:32:46.699 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:32:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:32:46 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:32:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:32:46 compute-0 nova_compute[260935]: 2025-10-11 09:32:46.717 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:32:46 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:32:46 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 3dfdb9ff-8081-48cb-87e2-3a3651a9a43f does not exist
Oct 11 09:32:46 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 845fd075-adb9-4533-b33f-02e95c0a82ee does not exist
Oct 11 09:32:46 compute-0 nova_compute[260935]: 2025-10-11 09:32:46.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:46 compute-0 sudo[412194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:32:46 compute-0 sudo[412194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:46 compute-0 nova_compute[260935]: 2025-10-11 09:32:46.819 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:32:46 compute-0 nova_compute[260935]: 2025-10-11 09:32:46.820 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:32:46 compute-0 sudo[412194]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:46 compute-0 nova_compute[260935]: 2025-10-11 09:32:46.830 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:32:46 compute-0 nova_compute[260935]: 2025-10-11 09:32:46.831 2 INFO nova.compute.claims [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:32:46 compute-0 sudo[412219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:32:46 compute-0 sudo[412219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:32:46 compute-0 sudo[412219]: pam_unix(sudo:session): session closed for user root
Oct 11 09:32:46 compute-0 nova_compute[260935]: 2025-10-11 09:32:46.937 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:32:46 compute-0 nova_compute[260935]: 2025-10-11 09:32:46.953 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:32:46 compute-0 nova_compute[260935]: 2025-10-11 09:32:46.954 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:32:46 compute-0 nova_compute[260935]: 2025-10-11 09:32:46.954 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:32:46 compute-0 nova_compute[260935]: 2025-10-11 09:32:46.954 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.004 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:32:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2734: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:32:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002942164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.577 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.587 2 DEBUG nova.compute.provider_tree [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.608 2 DEBUG nova.scheduler.client.report [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.641 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.643 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.701 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.702 2 DEBUG nova.network.neutron [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.707 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:32:47 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:32:47 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:32:47 compute-0 ceph-mon[74313]: pgmap v2734: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:47 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4002942164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.729 2 INFO nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.739 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.740 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.741 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.741 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.799 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.913 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.915 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.916 2 INFO nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Creating image(s)
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.948 2 DEBUG nova.storage.rbd_utils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:32:47 compute-0 nova_compute[260935]: 2025-10-11 09:32:47.974 2 DEBUG nova.storage.rbd_utils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.001 2 DEBUG nova.storage.rbd_utils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.005 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.047 2 DEBUG nova.policy [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.093 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.094 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.094 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.094 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.118 2 DEBUG nova.storage.rbd_utils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.121 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:32:48 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:32:48 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4225269718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.273 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.384 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.385 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.391 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.392 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.396 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.396 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.443 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.512 2 DEBUG nova.storage.rbd_utils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:32:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4225269718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.823 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.824 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2793MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.824 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.825 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.908 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.908 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.909 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.909 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.909 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.909 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.918 2 DEBUG nova.objects.instance [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.935 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.936 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Ensure instance console log exists: /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.936 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.937 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.937 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:32:48 compute-0 nova_compute[260935]: 2025-10-11 09:32:48.997 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:32:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2735: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:49 compute-0 nova_compute[260935]: 2025-10-11 09:32:49.344 2 DEBUG nova.network.neutron [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Successfully created port: fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:32:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:32:49 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1230948506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:32:49 compute-0 nova_compute[260935]: 2025-10-11 09:32:49.446 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:32:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:32:49 compute-0 nova_compute[260935]: 2025-10-11 09:32:49.456 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:32:49 compute-0 nova_compute[260935]: 2025-10-11 09:32:49.481 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:32:49 compute-0 nova_compute[260935]: 2025-10-11 09:32:49.516 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:32:49 compute-0 nova_compute[260935]: 2025-10-11 09:32:49.517 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:32:49 compute-0 ceph-mon[74313]: pgmap v2735: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:49 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1230948506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:32:50 compute-0 nova_compute[260935]: 2025-10-11 09:32:50.093 2 DEBUG nova.network.neutron [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Successfully updated port: fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:32:50 compute-0 nova_compute[260935]: 2025-10-11 09:32:50.244 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:32:50 compute-0 nova_compute[260935]: 2025-10-11 09:32:50.244 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:32:50 compute-0 nova_compute[260935]: 2025-10-11 09:32:50.245 2 DEBUG nova.network.neutron [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:32:50 compute-0 nova_compute[260935]: 2025-10-11 09:32:50.321 2 DEBUG nova.compute.manager [req-66efc25e-7b51-487f-8c7a-8007245c9fc6 req-38eadef4-8eef-44c2-b8cf-283dd4e27713 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-changed-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:32:50 compute-0 nova_compute[260935]: 2025-10-11 09:32:50.322 2 DEBUG nova.compute.manager [req-66efc25e-7b51-487f-8c7a-8007245c9fc6 req-38eadef4-8eef-44c2-b8cf-283dd4e27713 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Refreshing instance network info cache due to event network-changed-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:32:50 compute-0 nova_compute[260935]: 2025-10-11 09:32:50.322 2 DEBUG oslo_concurrency.lockutils [req-66efc25e-7b51-487f-8c7a-8007245c9fc6 req-38eadef4-8eef-44c2-b8cf-283dd4e27713 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:32:50 compute-0 nova_compute[260935]: 2025-10-11 09:32:50.514 2 DEBUG nova.network.neutron [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:32:50 compute-0 nova_compute[260935]: 2025-10-11 09:32:50.518 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:32:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2736: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:51 compute-0 ceph-mon[74313]: pgmap v2736: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:32:51 compute-0 nova_compute[260935]: 2025-10-11 09:32:51.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2737: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:32:53 compute-0 ceph-mon[74313]: pgmap v2737: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:32:53 compute-0 podman[412478]: 2025-10-11 09:32:53.79646788 +0000 UTC m=+0.088992392 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 11 09:32:53 compute-0 nova_compute[260935]: 2025-10-11 09:32:53.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:54 compute-0 sshd-session[412499]: Invalid user admin from 165.232.82.252 port 51512
Oct 11 09:32:54 compute-0 sshd-session[412499]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:32:54 compute-0 sshd-session[412499]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:32:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.594 2 DEBUG nova.network.neutron [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Updating instance_info_cache with network_info: [{"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.616 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.616 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Instance network_info: |[{"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.617 2 DEBUG oslo_concurrency.lockutils [req-66efc25e-7b51-487f-8c7a-8007245c9fc6 req-38eadef4-8eef-44c2-b8cf-283dd4e27713 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.617 2 DEBUG nova.network.neutron [req-66efc25e-7b51-487f-8c7a-8007245c9fc6 req-38eadef4-8eef-44c2-b8cf-283dd4e27713 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Refreshing network info cache for port fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.623 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Start _get_guest_xml network_info=[{"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.630 2 WARNING nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.636 2 DEBUG nova.virt.libvirt.host [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.636 2 DEBUG nova.virt.libvirt.host [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.642 2 DEBUG nova.virt.libvirt.host [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.643 2 DEBUG nova.virt.libvirt.host [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.643 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.643 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.644 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.644 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.644 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.645 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.645 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.645 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.645 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.645 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.646 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.646 2 DEBUG nova.virt.hardware [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:32:54 compute-0 nova_compute[260935]: 2025-10-11 09:32:54.650 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:32:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:32:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:32:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:32:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:32:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:32:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:32:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:32:54
Oct 11 09:32:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:32:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:32:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'volumes', 'vms', 'images', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log']
Oct 11 09:32:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:32:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:32:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2764879442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:32:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2738: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.133 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:32:55 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2764879442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:32:55 compute-0 ceph-mon[74313]: pgmap v2738: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.169 2 DEBUG nova.storage.rbd_utils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.180 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:32:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:32:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:32:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:32:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:32:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:32:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:32:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:32:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:32:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:32:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:32:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:32:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/525487199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.656 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.660 2 DEBUG nova.virt.libvirt.vif [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:32:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1457402262',display_name='tempest-TestGettingAddress-server-1457402262',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1457402262',id=137,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVfw7pOtwEAslurCfMJI+LHxZQ7Z2OcTKKHxGE0pH5jm6Sci3HdEc6EkB55fGhUPW8iPpuzF74thdADhl6pL+8SkI+p3jvP78xNgpaS0telyWktJDIyQUmQdgZIxm8Eyg==',key_name='tempest-TestGettingAddress-2055882368',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-axv24jbu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:32:47Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=e7751d7b-3ef7-4b84-a579-89d4b72f6cf4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.661 2 DEBUG nova.network.os_vif_util [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.663 2 DEBUG nova.network.os_vif_util [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ef:fb,bridge_name='br-int',has_traffic_filtering=True,id=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca94e7c-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.665 2 DEBUG nova.objects.instance [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.687 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:32:55 compute-0 nova_compute[260935]:   <uuid>e7751d7b-3ef7-4b84-a579-89d4b72f6cf4</uuid>
Oct 11 09:32:55 compute-0 nova_compute[260935]:   <name>instance-00000089</name>
Oct 11 09:32:55 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:32:55 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:32:55 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <nova:name>tempest-TestGettingAddress-server-1457402262</nova:name>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:32:54</nova:creationTime>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:32:55 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:32:55 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:32:55 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:32:55 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:32:55 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:32:55 compute-0 nova_compute[260935]:         <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 09:32:55 compute-0 nova_compute[260935]:         <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:32:55 compute-0 nova_compute[260935]:         <nova:port uuid="fca94e7c-1f55-4f5d-a268-7a4f0b86bae0">
Oct 11 09:32:55 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe9b:effb" ipVersion="6"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:32:55 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:32:55 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <system>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <entry name="serial">e7751d7b-3ef7-4b84-a579-89d4b72f6cf4</entry>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <entry name="uuid">e7751d7b-3ef7-4b84-a579-89d4b72f6cf4</entry>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     </system>
Oct 11 09:32:55 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:32:55 compute-0 nova_compute[260935]:   <os>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:   </os>
Oct 11 09:32:55 compute-0 nova_compute[260935]:   <features>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:   </features>
Oct 11 09:32:55 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:32:55 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:32:55 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk">
Oct 11 09:32:55 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       </source>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:32:55 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk.config">
Oct 11 09:32:55 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       </source>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:32:55 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:9b:ef:fb"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <target dev="tapfca94e7c-1f"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4/console.log" append="off"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <video>
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     </video>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:32:55 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:32:55 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:32:55 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:32:55 compute-0 nova_compute[260935]: </domain>
Oct 11 09:32:55 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.689 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Preparing to wait for external event network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.689 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.690 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.690 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.691 2 DEBUG nova.virt.libvirt.vif [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:32:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1457402262',display_name='tempest-TestGettingAddress-server-1457402262',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1457402262',id=137,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVfw7pOtwEAslurCfMJI+LHxZQ7Z2OcTKKHxGE0pH5jm6Sci3HdEc6EkB55fGhUPW8iPpuzF74thdADhl6pL+8SkI+p3jvP78xNgpaS0telyWktJDIyQUmQdgZIxm8Eyg==',key_name='tempest-TestGettingAddress-2055882368',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-axv24jbu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:32:47Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=e7751d7b-3ef7-4b84-a579-89d4b72f6cf4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.692 2 DEBUG nova.network.os_vif_util [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.693 2 DEBUG nova.network.os_vif_util [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ef:fb,bridge_name='br-int',has_traffic_filtering=True,id=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca94e7c-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.693 2 DEBUG os_vif [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ef:fb,bridge_name='br-int',has_traffic_filtering=True,id=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca94e7c-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.695 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.696 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfca94e7c-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfca94e7c-1f, col_values=(('external_ids', {'iface-id': 'fca94e7c-1f55-4f5d-a268-7a4f0b86bae0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:ef:fb', 'vm-uuid': 'e7751d7b-3ef7-4b84-a579-89d4b72f6cf4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:55 compute-0 NetworkManager[44960]: <info>  [1760175175.7069] manager: (tapfca94e7c-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/588)
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.718 2 INFO os_vif [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ef:fb,bridge_name='br-int',has_traffic_filtering=True,id=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca94e7c-1f')
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.830 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.830 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.831 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:9b:ef:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.832 2 INFO nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Using config drive
Oct 11 09:32:55 compute-0 nova_compute[260935]: 2025-10-11 09:32:55.874 2 DEBUG nova.storage.rbd_utils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:32:56 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/525487199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:32:56 compute-0 sshd-session[412499]: Failed password for invalid user admin from 165.232.82.252 port 51512 ssh2
Oct 11 09:32:56 compute-0 nova_compute[260935]: 2025-10-11 09:32:56.483 2 INFO nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Creating config drive at /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4/disk.config
Oct 11 09:32:56 compute-0 nova_compute[260935]: 2025-10-11 09:32:56.492 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2xayg7dd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:32:56 compute-0 nova_compute[260935]: 2025-10-11 09:32:56.663 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2xayg7dd" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:32:56 compute-0 nova_compute[260935]: 2025-10-11 09:32:56.703 2 DEBUG nova.storage.rbd_utils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:32:56 compute-0 nova_compute[260935]: 2025-10-11 09:32:56.709 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4/disk.config e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:32:56 compute-0 nova_compute[260935]: 2025-10-11 09:32:56.767 2 DEBUG nova.network.neutron [req-66efc25e-7b51-487f-8c7a-8007245c9fc6 req-38eadef4-8eef-44c2-b8cf-283dd4e27713 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Updated VIF entry in instance network info cache for port fca94e7c-1f55-4f5d-a268-7a4f0b86bae0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:32:56 compute-0 nova_compute[260935]: 2025-10-11 09:32:56.768 2 DEBUG nova.network.neutron [req-66efc25e-7b51-487f-8c7a-8007245c9fc6 req-38eadef4-8eef-44c2-b8cf-283dd4e27713 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Updating instance_info_cache with network_info: [{"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:32:56 compute-0 nova_compute[260935]: 2025-10-11 09:32:56.787 2 DEBUG oslo_concurrency.lockutils [req-66efc25e-7b51-487f-8c7a-8007245c9fc6 req-38eadef4-8eef-44c2-b8cf-283dd4e27713 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:32:56 compute-0 nova_compute[260935]: 2025-10-11 09:32:56.942 2 DEBUG oslo_concurrency.processutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4/disk.config e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:32:56 compute-0 nova_compute[260935]: 2025-10-11 09:32:56.943 2 INFO nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Deleting local config drive /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4/disk.config because it was imported into RBD.
Oct 11 09:32:57 compute-0 kernel: tapfca94e7c-1f: entered promiscuous mode
Oct 11 09:32:57 compute-0 NetworkManager[44960]: <info>  [1760175177.0300] manager: (tapfca94e7c-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/589)
Oct 11 09:32:57 compute-0 ovn_controller[152945]: 2025-10-11T09:32:57Z|01531|binding|INFO|Claiming lport fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 for this chassis.
Oct 11 09:32:57 compute-0 ovn_controller[152945]: 2025-10-11T09:32:57Z|01532|binding|INFO|fca94e7c-1f55-4f5d-a268-7a4f0b86bae0: Claiming fa:16:3e:9b:ef:fb 10.100.0.12 2001:db8::f816:3eff:fe9b:effb
Oct 11 09:32:57 compute-0 nova_compute[260935]: 2025-10-11 09:32:57.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:57 compute-0 nova_compute[260935]: 2025-10-11 09:32:57.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:57 compute-0 systemd-udevd[412636]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:32:57 compute-0 systemd-machined[215705]: New machine qemu-161-instance-00000089.
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.091 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:ef:fb 10.100.0.12 2001:db8::f816:3eff:fe9b:effb'], port_security=['fa:16:3e:9b:ef:fb 10.100.0.12 2001:db8::f816:3eff:fe9b:effb'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe9b:effb/64', 'neutron:device_id': 'e7751d7b-3ef7-4b84-a579-89d4b72f6cf4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84203be9-d0b1-4633-bcd6-51523ee507a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '49b6c18b-a992-472d-bab2-db1a8305eb48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d927a0f8-4868-42ce-9cc2-00b73d34efaa, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.093 162815 INFO neutron.agent.ovn.metadata.agent [-] Port fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 in datapath 84203be9-d0b1-4633-bcd6-51523ee507a5 bound to our chassis
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.095 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84203be9-d0b1-4633-bcd6-51523ee507a5
Oct 11 09:32:57 compute-0 NetworkManager[44960]: <info>  [1760175177.1131] device (tapfca94e7c-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:32:57 compute-0 NetworkManager[44960]: <info>  [1760175177.1147] device (tapfca94e7c-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:32:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2739: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.112 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ef939037-8c37-49ad-ba7a-316f112d07de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.116 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84203be9-d1 in ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.119 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84203be9-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.119 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eab8c58b-e053-4f93-b65c-25309210068d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:32:57 compute-0 systemd[1]: Started Virtual Machine qemu-161-instance-00000089.
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.121 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd50968-c87c-4b4b-857f-6a45e2ae74c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:32:57 compute-0 ovn_controller[152945]: 2025-10-11T09:32:57Z|01533|binding|INFO|Setting lport fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 ovn-installed in OVS
Oct 11 09:32:57 compute-0 ovn_controller[152945]: 2025-10-11T09:32:57Z|01534|binding|INFO|Setting lport fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 up in Southbound
Oct 11 09:32:57 compute-0 nova_compute[260935]: 2025-10-11 09:32:57.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.137 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[bb736d9d-b5b1-49c7-8c6e-07878547cfef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.168 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8a034a43-dbae-426f-880c-61ff026f289b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:32:57 compute-0 ceph-mon[74313]: pgmap v2739: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.216 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[83858f03-84ea-4f97-8ce2-e8b5a867bd72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:32:57 compute-0 systemd-udevd[412639]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:32:57 compute-0 NetworkManager[44960]: <info>  [1760175177.2274] manager: (tap84203be9-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/590)
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.226 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[57d22cd9-40c2-47ff-afc4-d019c9fb034f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.267 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[7022cc63-8e81-4d85-99ef-3d764c8dba82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.271 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[18c5a624-e93a-407d-bac1-0652a64f9212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:32:57 compute-0 NetworkManager[44960]: <info>  [1760175177.3058] device (tap84203be9-d0): carrier: link connected
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.316 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1ebc3b83-fb38-4a96-9f15-eeac288ab27f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.344 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[afebfc6c-85d2-4db1-b142-dfe95682415d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84203be9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:4e:e8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702200, 'reachable_time': 38542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412669, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.367 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[62a3aaa0-9eff-41c6-b43b-79a870a15948]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:4ee8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702200, 'tstamp': 702200}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412670, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.397 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a5bcac3f-70d8-414e-a28f-857a59f094bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84203be9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:4e:e8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702200, 'reachable_time': 38542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 412671, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:32:57 compute-0 sshd-session[412499]: Connection closed by invalid user admin 165.232.82.252 port 51512 [preauth]
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.438 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1cb462-e326-4fa1-9d66-9d8cf189b8b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.535 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a2ddf3-a6ed-4b7a-8dda-7f765cb22a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.537 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84203be9-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.537 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.538 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84203be9-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:32:57 compute-0 NetworkManager[44960]: <info>  [1760175177.5419] manager: (tap84203be9-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/591)
Oct 11 09:32:57 compute-0 kernel: tap84203be9-d0: entered promiscuous mode
Oct 11 09:32:57 compute-0 nova_compute[260935]: 2025-10-11 09:32:57.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:57 compute-0 nova_compute[260935]: 2025-10-11 09:32:57.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.545 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84203be9-d0, col_values=(('external_ids', {'iface-id': '8bbc5aca-a0c9-493f-8847-60a74f11dbe6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:32:57 compute-0 nova_compute[260935]: 2025-10-11 09:32:57.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:57 compute-0 ovn_controller[152945]: 2025-10-11T09:32:57Z|01535|binding|INFO|Releasing lport 8bbc5aca-a0c9-493f-8847-60a74f11dbe6 from this chassis (sb_readonly=0)
Oct 11 09:32:57 compute-0 nova_compute[260935]: 2025-10-11 09:32:57.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.583 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84203be9-d0b1-4633-bcd6-51523ee507a5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84203be9-d0b1-4633-bcd6-51523ee507a5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.585 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[05f1a0d1-daf7-416d-a1c3-f2b4d9b2e03d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.586 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-84203be9-d0b1-4633-bcd6-51523ee507a5
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/84203be9-d0b1-4633-bcd6-51523ee507a5.pid.haproxy
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 84203be9-d0b1-4633-bcd6-51523ee507a5
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:32:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:32:57.590 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'env', 'PROCESS_TAG=haproxy-84203be9-d0b1-4633-bcd6-51523ee507a5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84203be9-d0b1-4633-bcd6-51523ee507a5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:32:57 compute-0 nova_compute[260935]: 2025-10-11 09:32:57.807 2 DEBUG nova.compute.manager [req-e1f95efd-d04b-4067-90b7-0a548160d21e req-9f5a89c9-ffcb-4d5d-8abc-280b27d208a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:32:57 compute-0 nova_compute[260935]: 2025-10-11 09:32:57.808 2 DEBUG oslo_concurrency.lockutils [req-e1f95efd-d04b-4067-90b7-0a548160d21e req-9f5a89c9-ffcb-4d5d-8abc-280b27d208a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:32:57 compute-0 nova_compute[260935]: 2025-10-11 09:32:57.808 2 DEBUG oslo_concurrency.lockutils [req-e1f95efd-d04b-4067-90b7-0a548160d21e req-9f5a89c9-ffcb-4d5d-8abc-280b27d208a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:32:57 compute-0 nova_compute[260935]: 2025-10-11 09:32:57.809 2 DEBUG oslo_concurrency.lockutils [req-e1f95efd-d04b-4067-90b7-0a548160d21e req-9f5a89c9-ffcb-4d5d-8abc-280b27d208a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:32:57 compute-0 nova_compute[260935]: 2025-10-11 09:32:57.811 2 DEBUG nova.compute.manager [req-e1f95efd-d04b-4067-90b7-0a548160d21e req-9f5a89c9-ffcb-4d5d-8abc-280b27d208a0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Processing event network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:32:58 compute-0 podman[412745]: 2025-10-11 09:32:58.033960985 +0000 UTC m=+0.051473624 container create 328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 09:32:58 compute-0 systemd[1]: Started libpod-conmon-328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793.scope.
Oct 11 09:32:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:32:58 compute-0 podman[412745]: 2025-10-11 09:32:58.009421042 +0000 UTC m=+0.026933731 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b192159424a33d91dd7687b52effbbc7201b23f7d99b918f0ad704b8e1f4e2c9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:32:58 compute-0 podman[412745]: 2025-10-11 09:32:58.122061461 +0000 UTC m=+0.139574150 container init 328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 11 09:32:58 compute-0 podman[412745]: 2025-10-11 09:32:58.130839609 +0000 UTC m=+0.148352258 container start 328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:32:58 compute-0 neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5[412760]: [NOTICE]   (412764) : New worker (412766) forked
Oct 11 09:32:58 compute-0 neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5[412760]: [NOTICE]   (412764) : Loading success.
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.369 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175178.368896, e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.369 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] VM Started (Lifecycle Event)
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.371 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.375 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.379 2 INFO nova.virt.libvirt.driver [-] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Instance spawned successfully.
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.380 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.400 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.407 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.408 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.408 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.408 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.409 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.409 2 DEBUG nova.virt.libvirt.driver [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.412 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.461 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.462 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175178.3720112, e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.462 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] VM Paused (Lifecycle Event)
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.491 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.496 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175178.3741326, e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.496 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] VM Resumed (Lifecycle Event)
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.508 2 INFO nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Took 10.59 seconds to spawn the instance on the hypervisor.
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.508 2 DEBUG nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.517 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.520 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.546 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.577 2 INFO nova.compute.manager [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Took 11.80 seconds to build instance.
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.596 2 DEBUG oslo_concurrency.lockutils [None req-7e317ff1-0e32-4bad-816a-213589e6956c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:32:58 compute-0 nova_compute[260935]: 2025-10-11 09:32:58.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:32:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2740: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 09:32:59 compute-0 ceph-mon[74313]: pgmap v2740: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 09:32:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:32:59 compute-0 nova_compute[260935]: 2025-10-11 09:32:59.926 2 DEBUG nova.compute.manager [req-f0eee2e9-c7a8-43de-87fe-d5dc050ce3fc req-c5d34e32-0c7c-46a2-bfef-8a7c74c8b77e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:32:59 compute-0 nova_compute[260935]: 2025-10-11 09:32:59.927 2 DEBUG oslo_concurrency.lockutils [req-f0eee2e9-c7a8-43de-87fe-d5dc050ce3fc req-c5d34e32-0c7c-46a2-bfef-8a7c74c8b77e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:32:59 compute-0 nova_compute[260935]: 2025-10-11 09:32:59.928 2 DEBUG oslo_concurrency.lockutils [req-f0eee2e9-c7a8-43de-87fe-d5dc050ce3fc req-c5d34e32-0c7c-46a2-bfef-8a7c74c8b77e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:32:59 compute-0 nova_compute[260935]: 2025-10-11 09:32:59.929 2 DEBUG oslo_concurrency.lockutils [req-f0eee2e9-c7a8-43de-87fe-d5dc050ce3fc req-c5d34e32-0c7c-46a2-bfef-8a7c74c8b77e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:32:59 compute-0 nova_compute[260935]: 2025-10-11 09:32:59.929 2 DEBUG nova.compute.manager [req-f0eee2e9-c7a8-43de-87fe-d5dc050ce3fc req-c5d34e32-0c7c-46a2-bfef-8a7c74c8b77e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] No waiting events found dispatching network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:32:59 compute-0 nova_compute[260935]: 2025-10-11 09:32:59.930 2 WARNING nova.compute.manager [req-f0eee2e9-c7a8-43de-87fe-d5dc050ce3fc req-c5d34e32-0c7c-46a2-bfef-8a7c74c8b77e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received unexpected event network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 for instance with vm_state active and task_state None.
Oct 11 09:33:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:00.524 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:33:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:00.525 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:33:00 compute-0 nova_compute[260935]: 2025-10-11 09:33:00.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:00 compute-0 nova_compute[260935]: 2025-10-11 09:33:00.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:01 compute-0 nova_compute[260935]: 2025-10-11 09:33:01.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:01 compute-0 NetworkManager[44960]: <info>  [1760175181.0871] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/592)
Oct 11 09:33:01 compute-0 NetworkManager[44960]: <info>  [1760175181.0882] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/593)
Oct 11 09:33:01 compute-0 ovn_controller[152945]: 2025-10-11T09:33:01Z|01536|binding|INFO|Releasing lport 8bbc5aca-a0c9-493f-8847-60a74f11dbe6 from this chassis (sb_readonly=0)
Oct 11 09:33:01 compute-0 ovn_controller[152945]: 2025-10-11T09:33:01Z|01537|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:33:01 compute-0 ovn_controller[152945]: 2025-10-11T09:33:01Z|01538|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:33:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2741: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 09:33:01 compute-0 nova_compute[260935]: 2025-10-11 09:33:01.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:01 compute-0 ovn_controller[152945]: 2025-10-11T09:33:01Z|01539|binding|INFO|Releasing lport 8bbc5aca-a0c9-493f-8847-60a74f11dbe6 from this chassis (sb_readonly=0)
Oct 11 09:33:01 compute-0 ovn_controller[152945]: 2025-10-11T09:33:01Z|01540|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:33:01 compute-0 ovn_controller[152945]: 2025-10-11T09:33:01Z|01541|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:33:01 compute-0 nova_compute[260935]: 2025-10-11 09:33:01.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:01 compute-0 ceph-mon[74313]: pgmap v2741: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 09:33:01 compute-0 podman[412776]: 2025-10-11 09:33:01.349284016 +0000 UTC m=+0.096108573 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 09:33:01 compute-0 nova_compute[260935]: 2025-10-11 09:33:01.431 2 DEBUG nova.compute.manager [req-60bb825c-6197-4468-a57b-7d4cc4e529ac req-ed5ccceb-1572-4ae3-8603-3c327c2c5b13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-changed-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:33:01 compute-0 nova_compute[260935]: 2025-10-11 09:33:01.431 2 DEBUG nova.compute.manager [req-60bb825c-6197-4468-a57b-7d4cc4e529ac req-ed5ccceb-1572-4ae3-8603-3c327c2c5b13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Refreshing instance network info cache due to event network-changed-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:33:01 compute-0 nova_compute[260935]: 2025-10-11 09:33:01.431 2 DEBUG oslo_concurrency.lockutils [req-60bb825c-6197-4468-a57b-7d4cc4e529ac req-ed5ccceb-1572-4ae3-8603-3c327c2c5b13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:33:01 compute-0 nova_compute[260935]: 2025-10-11 09:33:01.432 2 DEBUG oslo_concurrency.lockutils [req-60bb825c-6197-4468-a57b-7d4cc4e529ac req-ed5ccceb-1572-4ae3-8603-3c327c2c5b13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:33:01 compute-0 nova_compute[260935]: 2025-10-11 09:33:01.432 2 DEBUG nova.network.neutron [req-60bb825c-6197-4468-a57b-7d4cc4e529ac req-ed5ccceb-1572-4ae3-8603-3c327c2c5b13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Refreshing network info cache for port fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:33:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:01.528 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:33:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2742: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:33:03 compute-0 ceph-mon[74313]: pgmap v2742: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:33:03 compute-0 nova_compute[260935]: 2025-10-11 09:33:03.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:33:04 compute-0 nova_compute[260935]: 2025-10-11 09:33:04.572 2 DEBUG nova.network.neutron [req-60bb825c-6197-4468-a57b-7d4cc4e529ac req-ed5ccceb-1572-4ae3-8603-3c327c2c5b13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Updated VIF entry in instance network info cache for port fca94e7c-1f55-4f5d-a268-7a4f0b86bae0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:33:04 compute-0 nova_compute[260935]: 2025-10-11 09:33:04.572 2 DEBUG nova.network.neutron [req-60bb825c-6197-4468-a57b-7d4cc4e529ac req-ed5ccceb-1572-4ae3-8603-3c327c2c5b13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Updating instance_info_cache with network_info: [{"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:33:04 compute-0 nova_compute[260935]: 2025-10-11 09:33:04.700 2 DEBUG oslo_concurrency.lockutils [req-60bb825c-6197-4468-a57b-7d4cc4e529ac req-ed5ccceb-1572-4ae3-8603-3c327c2c5b13 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2743: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:33:05 compute-0 ceph-mon[74313]: pgmap v2743: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:33:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:33:05 compute-0 nova_compute[260935]: 2025-10-11 09:33:05.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:05 compute-0 podman[412796]: 2025-10-11 09:33:05.805299317 +0000 UTC m=+0.109793249 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 09:33:05 compute-0 podman[412797]: 2025-10-11 09:33:05.807403916 +0000 UTC m=+0.107669639 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 11 09:33:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2744: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:33:07 compute-0 ceph-mon[74313]: pgmap v2744: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:33:08 compute-0 nova_compute[260935]: 2025-10-11 09:33:08.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2745: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:33:09 compute-0 ceph-mon[74313]: pgmap v2745: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:33:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:33:10 compute-0 nova_compute[260935]: 2025-10-11 09:33:10.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:10 compute-0 ovn_controller[152945]: 2025-10-11T09:33:10Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9b:ef:fb 10.100.0.12
Oct 11 09:33:10 compute-0 ovn_controller[152945]: 2025-10-11T09:33:10Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:ef:fb 10.100.0.12
Oct 11 09:33:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2746: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 09:33:11 compute-0 ceph-mon[74313]: pgmap v2746: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 09:33:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2747: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 11 09:33:13 compute-0 ceph-mon[74313]: pgmap v2747: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 11 09:33:13 compute-0 nova_compute[260935]: 2025-10-11 09:33:13.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:33:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2748: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:33:15 compute-0 ceph-mon[74313]: pgmap v2748: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:33:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:15.228 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:33:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:15.229 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:33:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:15.230 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:33:15 compute-0 nova_compute[260935]: 2025-10-11 09:33:15.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2749: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:33:17 compute-0 ceph-mon[74313]: pgmap v2749: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:33:18 compute-0 nova_compute[260935]: 2025-10-11 09:33:18.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2750: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:33:19 compute-0 ceph-mon[74313]: pgmap v2750: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:33:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:33:20 compute-0 nova_compute[260935]: 2025-10-11 09:33:20.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2751: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:33:21 compute-0 ceph-mon[74313]: pgmap v2751: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:33:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2752: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:33:23 compute-0 ceph-mon[74313]: pgmap v2752: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:33:23 compute-0 nova_compute[260935]: 2025-10-11 09:33:23.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:33:24 compute-0 nova_compute[260935]: 2025-10-11 09:33:24.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:33:24 compute-0 podman[412840]: 2025-10-11 09:33:24.793197228 +0000 UTC m=+0.084244889 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:33:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:33:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:33:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:33:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:33:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:33:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:33:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2753: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Oct 11 09:33:25 compute-0 ceph-mon[74313]: pgmap v2753: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Oct 11 09:33:25 compute-0 nova_compute[260935]: 2025-10-11 09:33:25.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:33:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3359709646' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:33:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:33:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3359709646' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:33:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3359709646' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:33:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3359709646' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:33:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2754: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Oct 11 09:33:27 compute-0 ceph-mon[74313]: pgmap v2754: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Oct 11 09:33:28 compute-0 nova_compute[260935]: 2025-10-11 09:33:28.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2755: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Oct 11 09:33:29 compute-0 ceph-mon[74313]: pgmap v2755: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Oct 11 09:33:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:33:30 compute-0 sshd-session[412859]: Invalid user mysql from 165.232.82.252 port 57750
Oct 11 09:33:30 compute-0 sshd-session[412859]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:33:30 compute-0 sshd-session[412859]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:33:30 compute-0 nova_compute[260935]: 2025-10-11 09:33:30.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:31 compute-0 ovn_controller[152945]: 2025-10-11T09:33:31Z|01542|memory_trim|INFO|Detected inactivity (last active 30026 ms ago): trimming memory
Oct 11 09:33:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2756: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s wr, 0 op/s
Oct 11 09:33:31 compute-0 ceph-mon[74313]: pgmap v2756: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s wr, 0 op/s
Oct 11 09:33:31 compute-0 podman[412862]: 2025-10-11 09:33:31.808408693 +0000 UTC m=+0.092184833 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible)
Oct 11 09:33:32 compute-0 sshd-session[412859]: Failed password for invalid user mysql from 165.232.82.252 port 57750 ssh2
Oct 11 09:33:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2757: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.3 KiB/s wr, 59 op/s
Oct 11 09:33:33 compute-0 ceph-mon[74313]: pgmap v2757: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.3 KiB/s wr, 59 op/s
Oct 11 09:33:33 compute-0 sshd-session[412859]: Connection closed by invalid user mysql 165.232.82.252 port 57750 [preauth]
Oct 11 09:33:33 compute-0 nova_compute[260935]: 2025-10-11 09:33:33.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:33:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2758: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 341 B/s wr, 59 op/s
Oct 11 09:33:35 compute-0 ceph-mon[74313]: pgmap v2758: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 341 B/s wr, 59 op/s
Oct 11 09:33:35 compute-0 nova_compute[260935]: 2025-10-11 09:33:35.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:36 compute-0 podman[412882]: 2025-10-11 09:33:36.788718278 +0000 UTC m=+0.089274171 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 09:33:36 compute-0 podman[412883]: 2025-10-11 09:33:36.853799014 +0000 UTC m=+0.151794474 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Oct 11 09:33:37 compute-0 nova_compute[260935]: 2025-10-11 09:33:37.013 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "5caa1430-1bd3-4b60-83f0-908b98b891cf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:33:37 compute-0 nova_compute[260935]: 2025-10-11 09:33:37.013 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:33:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2759: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 341 B/s wr, 59 op/s
Oct 11 09:33:37 compute-0 nova_compute[260935]: 2025-10-11 09:33:37.163 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:33:37 compute-0 ceph-mon[74313]: pgmap v2759: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 341 B/s wr, 59 op/s
Oct 11 09:33:37 compute-0 nova_compute[260935]: 2025-10-11 09:33:37.501 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:33:37 compute-0 nova_compute[260935]: 2025-10-11 09:33:37.502 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:33:37 compute-0 nova_compute[260935]: 2025-10-11 09:33:37.510 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:33:37 compute-0 nova_compute[260935]: 2025-10-11 09:33:37.511 2 INFO nova.compute.claims [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:33:37 compute-0 nova_compute[260935]: 2025-10-11 09:33:37.852 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:33:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:33:38 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286754567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:33:38 compute-0 nova_compute[260935]: 2025-10-11 09:33:38.376 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:33:38 compute-0 nova_compute[260935]: 2025-10-11 09:33:38.384 2 DEBUG nova.compute.provider_tree [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:33:38 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4286754567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:33:38 compute-0 nova_compute[260935]: 2025-10-11 09:33:38.441 2 DEBUG nova.scheduler.client.report [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:33:38 compute-0 nova_compute[260935]: 2025-10-11 09:33:38.557 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:33:38 compute-0 nova_compute[260935]: 2025-10-11 09:33:38.558 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:33:38 compute-0 nova_compute[260935]: 2025-10-11 09:33:38.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:39 compute-0 nova_compute[260935]: 2025-10-11 09:33:39.013 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:33:39 compute-0 nova_compute[260935]: 2025-10-11 09:33:39.014 2 DEBUG nova.network.neutron [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:33:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2760: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 341 B/s wr, 59 op/s
Oct 11 09:33:39 compute-0 nova_compute[260935]: 2025-10-11 09:33:39.184 2 INFO nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:33:39 compute-0 nova_compute[260935]: 2025-10-11 09:33:39.265 2 DEBUG nova.policy [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e1fd111a1ff43179343661e01457085', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'db6885dd005947ad850fed13cefdf2fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:33:39 compute-0 ceph-mon[74313]: pgmap v2760: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 341 B/s wr, 59 op/s
Oct 11 09:33:39 compute-0 nova_compute[260935]: 2025-10-11 09:33:39.445 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:33:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:33:39 compute-0 nova_compute[260935]: 2025-10-11 09:33:39.872 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:33:39 compute-0 nova_compute[260935]: 2025-10-11 09:33:39.873 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:33:39 compute-0 nova_compute[260935]: 2025-10-11 09:33:39.873 2 INFO nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Creating image(s)
Oct 11 09:33:39 compute-0 nova_compute[260935]: 2025-10-11 09:33:39.896 2 DEBUG nova.storage.rbd_utils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:33:39 compute-0 nova_compute[260935]: 2025-10-11 09:33:39.917 2 DEBUG nova.storage.rbd_utils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:33:39 compute-0 nova_compute[260935]: 2025-10-11 09:33:39.940 2 DEBUG nova.storage.rbd_utils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:33:39 compute-0 nova_compute[260935]: 2025-10-11 09:33:39.944 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:33:40 compute-0 nova_compute[260935]: 2025-10-11 09:33:40.046 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:33:40 compute-0 nova_compute[260935]: 2025-10-11 09:33:40.048 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:33:40 compute-0 nova_compute[260935]: 2025-10-11 09:33:40.048 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:33:40 compute-0 nova_compute[260935]: 2025-10-11 09:33:40.049 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:33:40 compute-0 nova_compute[260935]: 2025-10-11 09:33:40.083 2 DEBUG nova.storage.rbd_utils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:33:40 compute-0 nova_compute[260935]: 2025-10-11 09:33:40.087 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:33:40 compute-0 nova_compute[260935]: 2025-10-11 09:33:40.429 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:33:40 compute-0 nova_compute[260935]: 2025-10-11 09:33:40.499 2 DEBUG nova.storage.rbd_utils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] resizing rbd image 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:33:40 compute-0 nova_compute[260935]: 2025-10-11 09:33:40.599 2 DEBUG nova.objects.instance [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'migration_context' on Instance uuid 5caa1430-1bd3-4b60-83f0-908b98b891cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:33:40 compute-0 nova_compute[260935]: 2025-10-11 09:33:40.626 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:33:40 compute-0 nova_compute[260935]: 2025-10-11 09:33:40.627 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Ensure instance console log exists: /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:33:40 compute-0 nova_compute[260935]: 2025-10-11 09:33:40.628 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:33:40 compute-0 nova_compute[260935]: 2025-10-11 09:33:40.629 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:33:40 compute-0 nova_compute[260935]: 2025-10-11 09:33:40.629 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:33:40 compute-0 nova_compute[260935]: 2025-10-11 09:33:40.764 2 DEBUG nova.network.neutron [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Successfully created port: ac79972f-498e-46be-b858-6c67d0f29f93 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:33:40 compute-0 nova_compute[260935]: 2025-10-11 09:33:40.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2761: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 09:33:41 compute-0 ceph-mon[74313]: pgmap v2761: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 09:33:41 compute-0 nova_compute[260935]: 2025-10-11 09:33:41.961 2 DEBUG nova.network.neutron [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Successfully updated port: ac79972f-498e-46be-b858-6c67d0f29f93 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:33:42 compute-0 nova_compute[260935]: 2025-10-11 09:33:42.007 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:33:42 compute-0 nova_compute[260935]: 2025-10-11 09:33:42.007 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquired lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:33:42 compute-0 nova_compute[260935]: 2025-10-11 09:33:42.007 2 DEBUG nova.network.neutron [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:33:42 compute-0 nova_compute[260935]: 2025-10-11 09:33:42.152 2 DEBUG nova.compute.manager [req-53f66181-7ed6-455a-a890-3bf6f09af614 req-885c04dd-f16d-4d30-b640-db9775973f57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-changed-ac79972f-498e-46be-b858-6c67d0f29f93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:33:42 compute-0 nova_compute[260935]: 2025-10-11 09:33:42.152 2 DEBUG nova.compute.manager [req-53f66181-7ed6-455a-a890-3bf6f09af614 req-885c04dd-f16d-4d30-b640-db9775973f57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Refreshing instance network info cache due to event network-changed-ac79972f-498e-46be-b858-6c67d0f29f93. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:33:42 compute-0 nova_compute[260935]: 2025-10-11 09:33:42.152 2 DEBUG oslo_concurrency.lockutils [req-53f66181-7ed6-455a-a890-3bf6f09af614 req-885c04dd-f16d-4d30-b640-db9775973f57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:33:42 compute-0 nova_compute[260935]: 2025-10-11 09:33:42.535 2 DEBUG nova.network.neutron [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:33:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2762: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 11 09:33:43 compute-0 ceph-mon[74313]: pgmap v2762: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.715 2 DEBUG nova.network.neutron [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Updating instance_info_cache with network_info: [{"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.920 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Releasing lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.920 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Instance network_info: |[{"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.921 2 DEBUG oslo_concurrency.lockutils [req-53f66181-7ed6-455a-a890-3bf6f09af614 req-885c04dd-f16d-4d30-b640-db9775973f57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.921 2 DEBUG nova.network.neutron [req-53f66181-7ed6-455a-a890-3bf6f09af614 req-885c04dd-f16d-4d30-b640-db9775973f57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Refreshing network info cache for port ac79972f-498e-46be-b858-6c67d0f29f93 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.926 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Start _get_guest_xml network_info=[{"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.932 2 WARNING nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.941 2 DEBUG nova.virt.libvirt.host [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.942 2 DEBUG nova.virt.libvirt.host [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.949 2 DEBUG nova.virt.libvirt.host [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.950 2 DEBUG nova.virt.libvirt.host [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.951 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.951 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.952 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.952 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.953 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.953 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.953 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.954 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.954 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.955 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.955 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.956 2 DEBUG nova.virt.hardware [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:33:43 compute-0 nova_compute[260935]: 2025-10-11 09:33:43.961 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:33:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:33:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1667727775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.402 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.427 2 DEBUG nova.storage.rbd_utils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.432 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:33:44 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1667727775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:33:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:33:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:33:44 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/144209284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.861 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.863 2 DEBUG nova.virt.libvirt.vif [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:33:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1406255321',display_name='tempest-TestGettingAddress-server-1406255321',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1406255321',id=138,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVfw7pOtwEAslurCfMJI+LHxZQ7Z2OcTKKHxGE0pH5jm6Sci3HdEc6EkB55fGhUPW8iPpuzF74thdADhl6pL+8SkI+p3jvP78xNgpaS0telyWktJDIyQUmQdgZIxm8Eyg==',key_name='tempest-TestGettingAddress-2055882368',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-p3ajjp08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:33:39Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=5caa1430-1bd3-4b60-83f0-908b98b891cf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.863 2 DEBUG nova.network.os_vif_util [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.864 2 DEBUG nova.network.os_vif_util [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:1b:5f,bridge_name='br-int',has_traffic_filtering=True,id=ac79972f-498e-46be-b858-6c67d0f29f93,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac79972f-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.866 2 DEBUG nova.objects.instance [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 5caa1430-1bd3-4b60-83f0-908b98b891cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.901 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:33:44 compute-0 nova_compute[260935]:   <uuid>5caa1430-1bd3-4b60-83f0-908b98b891cf</uuid>
Oct 11 09:33:44 compute-0 nova_compute[260935]:   <name>instance-0000008a</name>
Oct 11 09:33:44 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:33:44 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:33:44 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <nova:name>tempest-TestGettingAddress-server-1406255321</nova:name>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:33:43</nova:creationTime>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:33:44 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:33:44 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:33:44 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:33:44 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:33:44 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:33:44 compute-0 nova_compute[260935]:         <nova:user uuid="0e1fd111a1ff43179343661e01457085">tempest-TestGettingAddress-1238692117-project-member</nova:user>
Oct 11 09:33:44 compute-0 nova_compute[260935]:         <nova:project uuid="db6885dd005947ad850fed13cefdf2fc">tempest-TestGettingAddress-1238692117</nova:project>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:33:44 compute-0 nova_compute[260935]:         <nova:port uuid="ac79972f-498e-46be-b858-6c67d0f29f93">
Oct 11 09:33:44 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe0d:1b5f" ipVersion="6"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:33:44 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:33:44 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <system>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <entry name="serial">5caa1430-1bd3-4b60-83f0-908b98b891cf</entry>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <entry name="uuid">5caa1430-1bd3-4b60-83f0-908b98b891cf</entry>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     </system>
Oct 11 09:33:44 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:33:44 compute-0 nova_compute[260935]:   <os>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:   </os>
Oct 11 09:33:44 compute-0 nova_compute[260935]:   <features>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:   </features>
Oct 11 09:33:44 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:33:44 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:33:44 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/5caa1430-1bd3-4b60-83f0-908b98b891cf_disk">
Oct 11 09:33:44 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       </source>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:33:44 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/5caa1430-1bd3-4b60-83f0-908b98b891cf_disk.config">
Oct 11 09:33:44 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       </source>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:33:44 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:0d:1b:5f"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <target dev="tapac79972f-49"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf/console.log" append="off"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <video>
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     </video>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:33:44 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:33:44 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:33:44 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:33:44 compute-0 nova_compute[260935]: </domain>
Oct 11 09:33:44 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.902 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Preparing to wait for external event network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.902 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.903 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.903 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.904 2 DEBUG nova.virt.libvirt.vif [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:33:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1406255321',display_name='tempest-TestGettingAddress-server-1406255321',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1406255321',id=138,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVfw7pOtwEAslurCfMJI+LHxZQ7Z2OcTKKHxGE0pH5jm6Sci3HdEc6EkB55fGhUPW8iPpuzF74thdADhl6pL+8SkI+p3jvP78xNgpaS0telyWktJDIyQUmQdgZIxm8Eyg==',key_name='tempest-TestGettingAddress-2055882368',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-p3ajjp08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:33:39Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=5caa1430-1bd3-4b60-83f0-908b98b891cf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.905 2 DEBUG nova.network.os_vif_util [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.906 2 DEBUG nova.network.os_vif_util [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:1b:5f,bridge_name='br-int',has_traffic_filtering=True,id=ac79972f-498e-46be-b858-6c67d0f29f93,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac79972f-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.906 2 DEBUG os_vif [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:1b:5f,bridge_name='br-int',has_traffic_filtering=True,id=ac79972f-498e-46be-b858-6c67d0f29f93,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac79972f-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.908 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.915 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac79972f-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac79972f-49, col_values=(('external_ids', {'iface-id': 'ac79972f-498e-46be-b858-6c67d0f29f93', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:1b:5f', 'vm-uuid': '5caa1430-1bd3-4b60-83f0-908b98b891cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:44 compute-0 NetworkManager[44960]: <info>  [1760175224.9585] manager: (tapac79972f-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/594)
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:44 compute-0 nova_compute[260935]: 2025-10-11 09:33:44.969 2 INFO os_vif [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:1b:5f,bridge_name='br-int',has_traffic_filtering=True,id=ac79972f-498e-46be-b858-6c67d0f29f93,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac79972f-49')
Oct 11 09:33:45 compute-0 nova_compute[260935]: 2025-10-11 09:33:45.134 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:33:45 compute-0 nova_compute[260935]: 2025-10-11 09:33:45.135 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:33:45 compute-0 nova_compute[260935]: 2025-10-11 09:33:45.136 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] No VIF found with MAC fa:16:3e:0d:1b:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:33:45 compute-0 nova_compute[260935]: 2025-10-11 09:33:45.137 2 INFO nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Using config drive
Oct 11 09:33:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2763: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:33:45 compute-0 nova_compute[260935]: 2025-10-11 09:33:45.170 2 DEBUG nova.storage.rbd_utils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:33:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/144209284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:33:45 compute-0 ceph-mon[74313]: pgmap v2763: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:33:45 compute-0 nova_compute[260935]: 2025-10-11 09:33:45.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:33:45 compute-0 nova_compute[260935]: 2025-10-11 09:33:45.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:33:45 compute-0 nova_compute[260935]: 2025-10-11 09:33:45.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:33:45 compute-0 nova_compute[260935]: 2025-10-11 09:33:45.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 09:33:45 compute-0 nova_compute[260935]: 2025-10-11 09:33:45.750 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.236 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.236 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.236 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.236 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.277 2 INFO nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Creating config drive at /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf/disk.config
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.283 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8um0t0xa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.316 2 DEBUG nova.network.neutron [req-53f66181-7ed6-455a-a890-3bf6f09af614 req-885c04dd-f16d-4d30-b640-db9775973f57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Updated VIF entry in instance network info cache for port ac79972f-498e-46be-b858-6c67d0f29f93. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.317 2 DEBUG nova.network.neutron [req-53f66181-7ed6-455a-a890-3bf6f09af614 req-885c04dd-f16d-4d30-b640-db9775973f57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Updating instance_info_cache with network_info: [{"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.395 2 DEBUG oslo_concurrency.lockutils [req-53f66181-7ed6-455a-a890-3bf6f09af614 req-885c04dd-f16d-4d30-b640-db9775973f57 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.422 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8um0t0xa" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.450 2 DEBUG nova.storage.rbd_utils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] rbd image 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.454 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf/disk.config 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.646 2 DEBUG oslo_concurrency.processutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf/disk.config 5caa1430-1bd3-4b60-83f0-908b98b891cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.648 2 INFO nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Deleting local config drive /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf/disk.config because it was imported into RBD.
Oct 11 09:33:46 compute-0 kernel: tapac79972f-49: entered promiscuous mode
Oct 11 09:33:46 compute-0 NetworkManager[44960]: <info>  [1760175226.7079] manager: (tapac79972f-49): new Tun device (/org/freedesktop/NetworkManager/Devices/595)
Oct 11 09:33:46 compute-0 ovn_controller[152945]: 2025-10-11T09:33:46Z|01543|binding|INFO|Claiming lport ac79972f-498e-46be-b858-6c67d0f29f93 for this chassis.
Oct 11 09:33:46 compute-0 ovn_controller[152945]: 2025-10-11T09:33:46Z|01544|binding|INFO|ac79972f-498e-46be-b858-6c67d0f29f93: Claiming fa:16:3e:0d:1b:5f 10.100.0.10 2001:db8::f816:3eff:fe0d:1b5f
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:46 compute-0 ovn_controller[152945]: 2025-10-11T09:33:46Z|01545|binding|INFO|Setting lport ac79972f-498e-46be-b858-6c67d0f29f93 ovn-installed in OVS
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:46 compute-0 ovn_controller[152945]: 2025-10-11T09:33:46Z|01546|binding|INFO|Setting lport ac79972f-498e-46be-b858-6c67d0f29f93 up in Southbound
Oct 11 09:33:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.747 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:1b:5f 10.100.0.10 2001:db8::f816:3eff:fe0d:1b5f'], port_security=['fa:16:3e:0d:1b:5f 10.100.0.10 2001:db8::f816:3eff:fe0d:1b5f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8::f816:3eff:fe0d:1b5f/64', 'neutron:device_id': '5caa1430-1bd3-4b60-83f0-908b98b891cf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84203be9-d0b1-4633-bcd6-51523ee507a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '49b6c18b-a992-472d-bab2-db1a8305eb48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d927a0f8-4868-42ce-9cc2-00b73d34efaa, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ac79972f-498e-46be-b858-6c67d0f29f93) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:33:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.748 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ac79972f-498e-46be-b858-6c67d0f29f93 in datapath 84203be9-d0b1-4633-bcd6-51523ee507a5 bound to our chassis
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.751 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84203be9-d0b1-4633-bcd6-51523ee507a5
Oct 11 09:33:46 compute-0 systemd-machined[215705]: New machine qemu-162-instance-0000008a.
Oct 11 09:33:46 compute-0 systemd-udevd[413255]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:33:46 compute-0 systemd[1]: Started Virtual Machine qemu-162-instance-0000008a.
Oct 11 09:33:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.772 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fa707165-e8ef-4627-8a2e-3390a5541169]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:33:46 compute-0 NetworkManager[44960]: <info>  [1760175226.7927] device (tapac79972f-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:33:46 compute-0 NetworkManager[44960]: <info>  [1760175226.7940] device (tapac79972f-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:33:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.817 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc1823d-08df-4631-91c8-a641ee94da4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:33:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.822 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b2dd88bc-65a8-480a-b2f1-dad1ea32c15a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:33:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.872 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ca17e8-5191-45e7-878e-1aa9d14e7e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:33:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.896 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bbba0ac3-5719-4e74-a1ee-0502b82c1fbf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84203be9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:4e:e8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1886, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1886, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702200, 'reachable_time': 31343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 19, 'inoctets': 1536, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 19, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1536, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 19, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413269, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:33:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.915 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7407fb28-231d-43ce-848c-7ef16aa5bba1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap84203be9-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702218, 'tstamp': 702218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413270, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap84203be9-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702223, 'tstamp': 702223}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413270, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:33:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.918 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84203be9-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:46 compute-0 nova_compute[260935]: 2025-10-11 09:33:46.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.923 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84203be9-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:33:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.924 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:33:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.925 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84203be9-d0, col_values=(('external_ids', {'iface-id': '8bbc5aca-a0c9-493f-8847-60a74f11dbe6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:33:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:46.925 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:33:47 compute-0 sudo[413271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:33:47 compute-0 sudo[413271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:47 compute-0 sudo[413271]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:47 compute-0 sudo[413314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:33:47 compute-0 sudo[413314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:47 compute-0 sudo[413314]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2764: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:33:47 compute-0 sudo[413362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:33:47 compute-0 sudo[413362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:47 compute-0 sudo[413362]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:47 compute-0 ceph-mon[74313]: pgmap v2764: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:33:47 compute-0 sudo[413388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:33:47 compute-0 sudo[413388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:47 compute-0 nova_compute[260935]: 2025-10-11 09:33:47.606 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175227.6059654, 5caa1430-1bd3-4b60-83f0-908b98b891cf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:33:47 compute-0 nova_compute[260935]: 2025-10-11 09:33:47.607 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] VM Started (Lifecycle Event)
Oct 11 09:33:47 compute-0 nova_compute[260935]: 2025-10-11 09:33:47.700 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:33:47 compute-0 nova_compute[260935]: 2025-10-11 09:33:47.706 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175227.6061015, 5caa1430-1bd3-4b60-83f0-908b98b891cf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:33:47 compute-0 nova_compute[260935]: 2025-10-11 09:33:47.706 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] VM Paused (Lifecycle Event)
Oct 11 09:33:47 compute-0 sudo[413388]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:33:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:33:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:33:47 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:33:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:33:47 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:33:47 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev e3b25e92-f8bf-44bd-8957-c5e84ab1974c does not exist
Oct 11 09:33:47 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 2f880e37-d251-469d-9a27-c5bd863401a1 does not exist
Oct 11 09:33:47 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a655fc70-bd5d-4a3b-946f-8e331a10079f does not exist
Oct 11 09:33:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:33:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:33:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:33:47 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:33:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:33:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:33:47 compute-0 nova_compute[260935]: 2025-10-11 09:33:47.944 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:33:47 compute-0 nova_compute[260935]: 2025-10-11 09:33:47.949 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:33:47 compute-0 sudo[413442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:33:47 compute-0 sudo[413442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:47 compute-0 sudo[413442]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:48 compute-0 sudo[413467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:33:48 compute-0 sudo[413467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.052 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:33:48 compute-0 sudo[413467]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.116 2 DEBUG nova.compute.manager [req-aa7188f3-39b3-4da2-9216-0937035ee33d req-aa07fa17-4843-4acc-8067-415a89fbb8e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.116 2 DEBUG oslo_concurrency.lockutils [req-aa7188f3-39b3-4da2-9216-0937035ee33d req-aa07fa17-4843-4acc-8067-415a89fbb8e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.116 2 DEBUG oslo_concurrency.lockutils [req-aa7188f3-39b3-4da2-9216-0937035ee33d req-aa07fa17-4843-4acc-8067-415a89fbb8e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.117 2 DEBUG oslo_concurrency.lockutils [req-aa7188f3-39b3-4da2-9216-0937035ee33d req-aa07fa17-4843-4acc-8067-415a89fbb8e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.117 2 DEBUG nova.compute.manager [req-aa7188f3-39b3-4da2-9216-0937035ee33d req-aa07fa17-4843-4acc-8067-415a89fbb8e8 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Processing event network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.118 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.122 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175228.121264, 5caa1430-1bd3-4b60-83f0-908b98b891cf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.122 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] VM Resumed (Lifecycle Event)
Oct 11 09:33:48 compute-0 sudo[413492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:33:48 compute-0 sudo[413492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.129 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:33:48 compute-0 sudo[413492]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.132 2 INFO nova.virt.libvirt.driver [-] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Instance spawned successfully.
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.133 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.184 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.190 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.194 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.194 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.195 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.195 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.196 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.196 2 DEBUG nova.virt.libvirt.driver [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:33:48 compute-0 sudo[413517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:33:48 compute-0 sudo[413517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:48 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:33:48 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:33:48 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:33:48 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:33:48 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:33:48 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.325 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.410 2 INFO nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Took 8.54 seconds to spawn the instance on the hypervisor.
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.411 2 DEBUG nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:48.462 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:33:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:48.463 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.547 2 INFO nova.compute.manager [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Took 11.08 seconds to build instance.
Oct 11 09:33:48 compute-0 podman[413581]: 2025-10-11 09:33:48.651007456 +0000 UTC m=+0.057455572 container create f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.671 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.700 2 DEBUG oslo_concurrency.lockutils [None req-f89a2a12-bb97-4ca0-964e-f5a20d350ce4 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:33:48 compute-0 systemd[1]: Started libpod-conmon-f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975.scope.
Oct 11 09:33:48 compute-0 podman[413581]: 2025-10-11 09:33:48.629582841 +0000 UTC m=+0.036030957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.739 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.739 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.740 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.740 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.740 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:33:48 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:33:48 compute-0 podman[413581]: 2025-10-11 09:33:48.763521591 +0000 UTC m=+0.169969757 container init f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 09:33:48 compute-0 podman[413581]: 2025-10-11 09:33:48.771788335 +0000 UTC m=+0.178236411 container start f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:33:48 compute-0 peaceful_diffie[413597]: 167 167
Oct 11 09:33:48 compute-0 systemd[1]: libpod-f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975.scope: Deactivated successfully.
Oct 11 09:33:48 compute-0 podman[413581]: 2025-10-11 09:33:48.778836664 +0000 UTC m=+0.185284830 container attach f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 09:33:48 compute-0 podman[413581]: 2025-10-11 09:33:48.779194124 +0000 UTC m=+0.185642240 container died f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 09:33:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fd0c637f32fd7ee53cd8e61b2db34d80c6a3fb061ccafa51067e560b839926b-merged.mount: Deactivated successfully.
Oct 11 09:33:48 compute-0 podman[413581]: 2025-10-11 09:33:48.823460313 +0000 UTC m=+0.229908389 container remove f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 09:33:48 compute-0 systemd[1]: libpod-conmon-f92e0b4019d92c4055d76d34f6f7341ebcc6849401ed5e87d4a63de9faf9d975.scope: Deactivated successfully.
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.863 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.864 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.864 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.864 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.864 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:33:48 compute-0 nova_compute[260935]: 2025-10-11 09:33:48.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:49 compute-0 podman[413620]: 2025-10-11 09:33:49.062749706 +0000 UTC m=+0.057391811 container create da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swartz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 09:33:49 compute-0 systemd[1]: Started libpod-conmon-da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853.scope.
Oct 11 09:33:49 compute-0 podman[413620]: 2025-10-11 09:33:49.03102236 +0000 UTC m=+0.025664495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:33:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2765: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:33:49 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de676b6d35ff340631f7b69520222d54f6111d12f54565119871358ab6ba2c7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de676b6d35ff340631f7b69520222d54f6111d12f54565119871358ab6ba2c7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de676b6d35ff340631f7b69520222d54f6111d12f54565119871358ab6ba2c7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de676b6d35ff340631f7b69520222d54f6111d12f54565119871358ab6ba2c7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de676b6d35ff340631f7b69520222d54f6111d12f54565119871358ab6ba2c7d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:33:49 compute-0 podman[413620]: 2025-10-11 09:33:49.180752316 +0000 UTC m=+0.175394481 container init da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:33:49 compute-0 podman[413620]: 2025-10-11 09:33:49.188931717 +0000 UTC m=+0.183573832 container start da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 09:33:49 compute-0 podman[413620]: 2025-10-11 09:33:49.192744754 +0000 UTC m=+0.187386919 container attach da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:33:49 compute-0 ceph-mon[74313]: pgmap v2765: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:33:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:33:49 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2635443784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:33:49 compute-0 nova_compute[260935]: 2025-10-11 09:33:49.347 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:33:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:33:49 compute-0 nova_compute[260935]: 2025-10-11 09:33:49.622 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:33:49 compute-0 nova_compute[260935]: 2025-10-11 09:33:49.622 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:33:49 compute-0 nova_compute[260935]: 2025-10-11 09:33:49.626 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:33:49 compute-0 nova_compute[260935]: 2025-10-11 09:33:49.626 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:33:49 compute-0 nova_compute[260935]: 2025-10-11 09:33:49.626 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:33:49 compute-0 nova_compute[260935]: 2025-10-11 09:33:49.629 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:33:49 compute-0 nova_compute[260935]: 2025-10-11 09:33:49.629 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:33:49 compute-0 nova_compute[260935]: 2025-10-11 09:33:49.632 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:33:49 compute-0 nova_compute[260935]: 2025-10-11 09:33:49.632 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:33:49 compute-0 nova_compute[260935]: 2025-10-11 09:33:49.639 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:33:49 compute-0 nova_compute[260935]: 2025-10-11 09:33:49.640 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:33:49 compute-0 nova_compute[260935]: 2025-10-11 09:33:49.819 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:33:49 compute-0 nova_compute[260935]: 2025-10-11 09:33:49.820 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2551MB free_disk=59.76419448852539GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:33:49 compute-0 nova_compute[260935]: 2025-10-11 09:33:49.821 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:33:49 compute-0 nova_compute[260935]: 2025-10-11 09:33:49.821 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.046 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.046 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 5caa1430-1bd3-4b60-83f0-908b98b891cf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.047 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.173 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:33:50 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2635443784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.263 2 DEBUG nova.compute.manager [req-33ca5f19-6d80-4e10-96fa-ae1339e17696 req-b0050e08-5e4f-40a9-9e9b-ef3de877920f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.264 2 DEBUG oslo_concurrency.lockutils [req-33ca5f19-6d80-4e10-96fa-ae1339e17696 req-b0050e08-5e4f-40a9-9e9b-ef3de877920f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.264 2 DEBUG oslo_concurrency.lockutils [req-33ca5f19-6d80-4e10-96fa-ae1339e17696 req-b0050e08-5e4f-40a9-9e9b-ef3de877920f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.264 2 DEBUG oslo_concurrency.lockutils [req-33ca5f19-6d80-4e10-96fa-ae1339e17696 req-b0050e08-5e4f-40a9-9e9b-ef3de877920f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.264 2 DEBUG nova.compute.manager [req-33ca5f19-6d80-4e10-96fa-ae1339e17696 req-b0050e08-5e4f-40a9-9e9b-ef3de877920f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] No waiting events found dispatching network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.264 2 WARNING nova.compute.manager [req-33ca5f19-6d80-4e10-96fa-ae1339e17696 req-b0050e08-5e4f-40a9-9e9b-ef3de877920f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received unexpected event network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 for instance with vm_state active and task_state None.
Oct 11 09:33:50 compute-0 jolly_swartz[413655]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:33:50 compute-0 jolly_swartz[413655]: --> relative data size: 1.0
Oct 11 09:33:50 compute-0 jolly_swartz[413655]: --> All data devices are unavailable
Oct 11 09:33:50 compute-0 systemd[1]: libpod-da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853.scope: Deactivated successfully.
Oct 11 09:33:50 compute-0 podman[413705]: 2025-10-11 09:33:50.356701492 +0000 UTC m=+0.025250793 container died da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:33:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-de676b6d35ff340631f7b69520222d54f6111d12f54565119871358ab6ba2c7d-merged.mount: Deactivated successfully.
Oct 11 09:33:50 compute-0 podman[413705]: 2025-10-11 09:33:50.405232762 +0000 UTC m=+0.073782013 container remove da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 09:33:50 compute-0 systemd[1]: libpod-conmon-da14e3a8031872d4df5399a3e2d379bd3d33314fb32fbc710fb2de4f69cd9853.scope: Deactivated successfully.
Oct 11 09:33:50 compute-0 sudo[413517]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:50 compute-0 sudo[413722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:33:50 compute-0 sudo[413722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:50 compute-0 sudo[413722]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:50 compute-0 sudo[413747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:33:50 compute-0 sudo[413747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:50 compute-0 sudo[413747]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:33:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1647981956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.631 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.644 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.683 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:33:50 compute-0 sudo[413772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:33:50 compute-0 sudo[413772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:50 compute-0 sudo[413772]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.749 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:33:50 compute-0 nova_compute[260935]: 2025-10-11 09:33:50.750 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:33:50 compute-0 sudo[413799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:33:50 compute-0 sudo[413799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2766: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:33:51 compute-0 podman[413865]: 2025-10-11 09:33:51.257586946 +0000 UTC m=+0.058635266 container create 486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 09:33:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1647981956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:33:51 compute-0 ceph-mon[74313]: pgmap v2766: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:33:51 compute-0 systemd[1]: Started libpod-conmon-486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5.scope.
Oct 11 09:33:51 compute-0 podman[413865]: 2025-10-11 09:33:51.236017157 +0000 UTC m=+0.037065487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:33:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:33:51 compute-0 podman[413865]: 2025-10-11 09:33:51.354020688 +0000 UTC m=+0.155069088 container init 486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 09:33:51 compute-0 podman[413865]: 2025-10-11 09:33:51.368579468 +0000 UTC m=+0.169627788 container start 486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:33:51 compute-0 podman[413865]: 2025-10-11 09:33:51.372210351 +0000 UTC m=+0.173258671 container attach 486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 09:33:51 compute-0 zealous_heisenberg[413881]: 167 167
Oct 11 09:33:51 compute-0 systemd[1]: libpod-486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5.scope: Deactivated successfully.
Oct 11 09:33:51 compute-0 podman[413865]: 2025-10-11 09:33:51.3767874 +0000 UTC m=+0.177835740 container died 486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:33:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f07093acff3f955691c73868d7ee0bbd38dbaf2f9fe9d9b5936fa6ec53e9f9d-merged.mount: Deactivated successfully.
Oct 11 09:33:51 compute-0 podman[413865]: 2025-10-11 09:33:51.432237715 +0000 UTC m=+0.233286065 container remove 486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:33:51 compute-0 systemd[1]: libpod-conmon-486101b5a3a4ba014a5b4886582fede2cf0259762b91ac5ca687b7bf046ba3a5.scope: Deactivated successfully.
Oct 11 09:33:51 compute-0 podman[413905]: 2025-10-11 09:33:51.658713155 +0000 UTC m=+0.039940648 container create a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wright, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:33:51 compute-0 systemd[1]: Started libpod-conmon-a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d.scope.
Oct 11 09:33:51 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20ce29c18b06fb7061e48a448d97b2190b0b0e012e77819264ab14061b5a587/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20ce29c18b06fb7061e48a448d97b2190b0b0e012e77819264ab14061b5a587/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20ce29c18b06fb7061e48a448d97b2190b0b0e012e77819264ab14061b5a587/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c20ce29c18b06fb7061e48a448d97b2190b0b0e012e77819264ab14061b5a587/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:33:51 compute-0 podman[413905]: 2025-10-11 09:33:51.6429231 +0000 UTC m=+0.024150613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:33:51 compute-0 podman[413905]: 2025-10-11 09:33:51.749032414 +0000 UTC m=+0.130259927 container init a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:33:51 compute-0 podman[413905]: 2025-10-11 09:33:51.755755634 +0000 UTC m=+0.136983157 container start a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 09:33:51 compute-0 podman[413905]: 2025-10-11 09:33:51.760113547 +0000 UTC m=+0.141341060 container attach a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:33:52 compute-0 angry_wright[413922]: {
Oct 11 09:33:52 compute-0 angry_wright[413922]:     "0": [
Oct 11 09:33:52 compute-0 angry_wright[413922]:         {
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "devices": [
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "/dev/loop3"
Oct 11 09:33:52 compute-0 angry_wright[413922]:             ],
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "lv_name": "ceph_lv0",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "lv_size": "21470642176",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "name": "ceph_lv0",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "tags": {
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.cluster_name": "ceph",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.crush_device_class": "",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.encrypted": "0",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.osd_id": "0",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.type": "block",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.vdo": "0"
Oct 11 09:33:52 compute-0 angry_wright[413922]:             },
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "type": "block",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "vg_name": "ceph_vg0"
Oct 11 09:33:52 compute-0 angry_wright[413922]:         }
Oct 11 09:33:52 compute-0 angry_wright[413922]:     ],
Oct 11 09:33:52 compute-0 angry_wright[413922]:     "1": [
Oct 11 09:33:52 compute-0 angry_wright[413922]:         {
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "devices": [
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "/dev/loop4"
Oct 11 09:33:52 compute-0 angry_wright[413922]:             ],
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "lv_name": "ceph_lv1",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "lv_size": "21470642176",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "name": "ceph_lv1",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "tags": {
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.cluster_name": "ceph",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.crush_device_class": "",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.encrypted": "0",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.osd_id": "1",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.type": "block",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.vdo": "0"
Oct 11 09:33:52 compute-0 angry_wright[413922]:             },
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "type": "block",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "vg_name": "ceph_vg1"
Oct 11 09:33:52 compute-0 angry_wright[413922]:         }
Oct 11 09:33:52 compute-0 angry_wright[413922]:     ],
Oct 11 09:33:52 compute-0 angry_wright[413922]:     "2": [
Oct 11 09:33:52 compute-0 angry_wright[413922]:         {
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "devices": [
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "/dev/loop5"
Oct 11 09:33:52 compute-0 angry_wright[413922]:             ],
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "lv_name": "ceph_lv2",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "lv_size": "21470642176",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "name": "ceph_lv2",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "tags": {
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.cluster_name": "ceph",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.crush_device_class": "",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.encrypted": "0",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.osd_id": "2",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.type": "block",
Oct 11 09:33:52 compute-0 angry_wright[413922]:                 "ceph.vdo": "0"
Oct 11 09:33:52 compute-0 angry_wright[413922]:             },
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "type": "block",
Oct 11 09:33:52 compute-0 angry_wright[413922]:             "vg_name": "ceph_vg2"
Oct 11 09:33:52 compute-0 angry_wright[413922]:         }
Oct 11 09:33:52 compute-0 angry_wright[413922]:     ]
Oct 11 09:33:52 compute-0 angry_wright[413922]: }
Oct 11 09:33:52 compute-0 systemd[1]: libpod-a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d.scope: Deactivated successfully.
Oct 11 09:33:52 compute-0 podman[413905]: 2025-10-11 09:33:52.674399339 +0000 UTC m=+1.055626872 container died a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 09:33:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c20ce29c18b06fb7061e48a448d97b2190b0b0e012e77819264ab14061b5a587-merged.mount: Deactivated successfully.
Oct 11 09:33:52 compute-0 nova_compute[260935]: 2025-10-11 09:33:52.718 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:33:52 compute-0 podman[413905]: 2025-10-11 09:33:52.760237341 +0000 UTC m=+1.141464854 container remove a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wright, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:33:52 compute-0 nova_compute[260935]: 2025-10-11 09:33:52.790 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:33:52 compute-0 systemd[1]: libpod-conmon-a3482454fe7b4b12909c67b35926fce695220619cb673b388dffcc0da67fe91d.scope: Deactivated successfully.
Oct 11 09:33:52 compute-0 sudo[413799]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:52 compute-0 sudo[413944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:33:52 compute-0 sudo[413944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:52 compute-0 sudo[413944]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:52 compute-0 sudo[413969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:33:52 compute-0 sudo[413969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:52 compute-0 sudo[413969]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:53 compute-0 sudo[413994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:33:53 compute-0 sudo[413994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:53 compute-0 sudo[413994]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2767: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:33:53 compute-0 sudo[414019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:33:53 compute-0 sudo[414019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:53 compute-0 ceph-mon[74313]: pgmap v2767: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:33:53 compute-0 podman[414085]: 2025-10-11 09:33:53.657661877 +0000 UTC m=+0.061564838 container create 501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 09:33:53 compute-0 systemd[1]: Started libpod-conmon-501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4.scope.
Oct 11 09:33:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:33:53 compute-0 podman[414085]: 2025-10-11 09:33:53.633769923 +0000 UTC m=+0.037672914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:33:53 compute-0 podman[414085]: 2025-10-11 09:33:53.736236875 +0000 UTC m=+0.140139836 container init 501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 09:33:53 compute-0 podman[414085]: 2025-10-11 09:33:53.745716022 +0000 UTC m=+0.149618983 container start 501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:33:53 compute-0 podman[414085]: 2025-10-11 09:33:53.748779029 +0000 UTC m=+0.152682020 container attach 501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 09:33:53 compute-0 zealous_brattain[414101]: 167 167
Oct 11 09:33:53 compute-0 systemd[1]: libpod-501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4.scope: Deactivated successfully.
Oct 11 09:33:53 compute-0 podman[414085]: 2025-10-11 09:33:53.749861899 +0000 UTC m=+0.153764860 container died 501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:33:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f4551ee4790b923b65473b028a456d71a363ba5b947a39dcbfc3dfdc6a9e3f9-merged.mount: Deactivated successfully.
Oct 11 09:33:53 compute-0 podman[414085]: 2025-10-11 09:33:53.803989057 +0000 UTC m=+0.207892018 container remove 501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 09:33:53 compute-0 systemd[1]: libpod-conmon-501432be35042fb0da6252f4c9d2ef028de47d477d6c3355f4acbe29d85f49d4.scope: Deactivated successfully.
Oct 11 09:33:53 compute-0 nova_compute[260935]: 2025-10-11 09:33:53.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:54 compute-0 podman[414125]: 2025-10-11 09:33:54.114425398 +0000 UTC m=+0.074418731 container create 2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:33:54 compute-0 systemd[1]: Started libpod-conmon-2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288.scope.
Oct 11 09:33:54 compute-0 podman[414125]: 2025-10-11 09:33:54.085223464 +0000 UTC m=+0.045216857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:33:54 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:33:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05fd4220f2afdbb80827e1580cf4b560e01d56f3b73eee77e7e3707eb05a69a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:33:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05fd4220f2afdbb80827e1580cf4b560e01d56f3b73eee77e7e3707eb05a69a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:33:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05fd4220f2afdbb80827e1580cf4b560e01d56f3b73eee77e7e3707eb05a69a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:33:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05fd4220f2afdbb80827e1580cf4b560e01d56f3b73eee77e7e3707eb05a69a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:33:54 compute-0 podman[414125]: 2025-10-11 09:33:54.22398982 +0000 UTC m=+0.183983213 container init 2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 09:33:54 compute-0 podman[414125]: 2025-10-11 09:33:54.233298593 +0000 UTC m=+0.193291906 container start 2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 09:33:54 compute-0 podman[414125]: 2025-10-11 09:33:54.23711367 +0000 UTC m=+0.197107083 container attach 2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 09:33:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:33:54 compute-0 nova_compute[260935]: 2025-10-11 09:33:54.804 2 DEBUG nova.compute.manager [req-256946c9-eeb7-4850-a713-aca88ea58d66 req-aacf12db-dd68-4e04-9986-77d8b564786b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-changed-ac79972f-498e-46be-b858-6c67d0f29f93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:33:54 compute-0 nova_compute[260935]: 2025-10-11 09:33:54.804 2 DEBUG nova.compute.manager [req-256946c9-eeb7-4850-a713-aca88ea58d66 req-aacf12db-dd68-4e04-9986-77d8b564786b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Refreshing instance network info cache due to event network-changed-ac79972f-498e-46be-b858-6c67d0f29f93. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:33:54 compute-0 nova_compute[260935]: 2025-10-11 09:33:54.805 2 DEBUG oslo_concurrency.lockutils [req-256946c9-eeb7-4850-a713-aca88ea58d66 req-aacf12db-dd68-4e04-9986-77d8b564786b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:33:54 compute-0 nova_compute[260935]: 2025-10-11 09:33:54.805 2 DEBUG oslo_concurrency.lockutils [req-256946c9-eeb7-4850-a713-aca88ea58d66 req-aacf12db-dd68-4e04-9986-77d8b564786b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:33:54 compute-0 nova_compute[260935]: 2025-10-11 09:33:54.805 2 DEBUG nova.network.neutron [req-256946c9-eeb7-4850-a713-aca88ea58d66 req-aacf12db-dd68-4e04-9986-77d8b564786b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Refreshing network info cache for port ac79972f-498e-46be-b858-6c67d0f29f93 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:33:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:33:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:33:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:33:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:33:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:33:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:33:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:33:54
Oct 11 09:33:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:33:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:33:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', '.rgw.root', 'images', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'vms']
Oct 11 09:33:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:33:55 compute-0 nova_compute[260935]: 2025-10-11 09:33:55.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]: {
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:         "osd_id": 2,
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:         "type": "bluestore"
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:     },
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:         "osd_id": 0,
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:         "type": "bluestore"
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:     },
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:         "osd_id": 1,
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:         "type": "bluestore"
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]:     }
Oct 11 09:33:55 compute-0 dazzling_yalow[414142]: }
Oct 11 09:33:55 compute-0 systemd[1]: libpod-2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288.scope: Deactivated successfully.
Oct 11 09:33:55 compute-0 podman[414125]: 2025-10-11 09:33:55.138605021 +0000 UTC m=+1.098598324 container died 2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:33:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2768: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:33:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b05fd4220f2afdbb80827e1580cf4b560e01d56f3b73eee77e7e3707eb05a69a-merged.mount: Deactivated successfully.
Oct 11 09:33:55 compute-0 podman[414125]: 2025-10-11 09:33:55.19139789 +0000 UTC m=+1.151391183 container remove 2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:33:55 compute-0 systemd[1]: libpod-conmon-2a4784b147769e6b7a7809feba20db0cd977eb39b82e8dedb2a79fd500c1d288.scope: Deactivated successfully.
Oct 11 09:33:55 compute-0 sudo[414019]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:55 compute-0 ceph-mon[74313]: pgmap v2768: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:33:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:33:55 compute-0 podman[414176]: 2025-10-11 09:33:55.24206509 +0000 UTC m=+0.075886523 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:33:55 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:33:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:33:55 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:33:55 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev fc3e59f8-3622-44bd-aa78-11a788fe0191 does not exist
Oct 11 09:33:55 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 23657c31-57e6-4ee6-9ece-c6d93a17b85f does not exist
Oct 11 09:33:55 compute-0 sudo[414204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:33:55 compute-0 sudo[414204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:55 compute-0 sudo[414204]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:55 compute-0 sudo[414230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:33:55 compute-0 sudo[414230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:33:55 compute-0 sudo[414230]: pam_unix(sudo:session): session closed for user root
Oct 11 09:33:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:33:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:33:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:33:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:33:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:33:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:33:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:33:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:33:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:33:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:33:56 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:33:56 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:33:56 compute-0 nova_compute[260935]: 2025-10-11 09:33:56.605 2 DEBUG nova.network.neutron [req-256946c9-eeb7-4850-a713-aca88ea58d66 req-aacf12db-dd68-4e04-9986-77d8b564786b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Updated VIF entry in instance network info cache for port ac79972f-498e-46be-b858-6c67d0f29f93. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:33:56 compute-0 nova_compute[260935]: 2025-10-11 09:33:56.606 2 DEBUG nova.network.neutron [req-256946c9-eeb7-4850-a713-aca88ea58d66 req-aacf12db-dd68-4e04-9986-77d8b564786b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Updating instance_info_cache with network_info: [{"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:33:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2769: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:33:57 compute-0 ceph-mon[74313]: pgmap v2769: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:33:57 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:33:57.465 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:33:57 compute-0 nova_compute[260935]: 2025-10-11 09:33:57.901 2 DEBUG oslo_concurrency.lockutils [req-256946c9-eeb7-4850-a713-aca88ea58d66 req-aacf12db-dd68-4e04-9986-77d8b564786b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:33:58 compute-0 nova_compute[260935]: 2025-10-11 09:33:58.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:33:58 compute-0 nova_compute[260935]: 2025-10-11 09:33:58.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 09:33:58 compute-0 nova_compute[260935]: 2025-10-11 09:33:58.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:33:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2770: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Oct 11 09:33:59 compute-0 ceph-mon[74313]: pgmap v2770: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Oct 11 09:33:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:34:00 compute-0 nova_compute[260935]: 2025-10-11 09:34:00.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:00 compute-0 ovn_controller[152945]: 2025-10-11T09:34:00Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0d:1b:5f 10.100.0.10
Oct 11 09:34:00 compute-0 ovn_controller[152945]: 2025-10-11T09:34:00Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0d:1b:5f 10.100.0.10
Oct 11 09:34:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2771: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.4 KiB/s wr, 65 op/s
Oct 11 09:34:01 compute-0 ceph-mon[74313]: pgmap v2771: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.4 KiB/s wr, 65 op/s
Oct 11 09:34:02 compute-0 nova_compute[260935]: 2025-10-11 09:34:02.178 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:34:02 compute-0 podman[414255]: 2025-10-11 09:34:02.791676474 +0000 UTC m=+0.084758723 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:34:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2772: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 11 09:34:03 compute-0 ceph-mon[74313]: pgmap v2772: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 11 09:34:03 compute-0 nova_compute[260935]: 2025-10-11 09:34:03.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:34:05 compute-0 nova_compute[260935]: 2025-10-11 09:34:05.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2773: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:34:05 compute-0 ceph-mon[74313]: pgmap v2773: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004143761293709709 of space, bias 1.0, pg target 1.2431283881129125 quantized to 32 (current 32)
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:34:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:34:05 compute-0 sshd-session[412952]: error: kex_exchange_identification: read: Connection reset by peer
Oct 11 09:34:05 compute-0 sshd-session[412952]: Connection reset by 45.140.17.97 port 52851
Oct 11 09:34:06 compute-0 sshd-session[414275]: Invalid user mysql from 165.232.82.252 port 57764
Oct 11 09:34:06 compute-0 podman[414277]: 2025-10-11 09:34:06.975257358 +0000 UTC m=+0.090501885 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible)
Oct 11 09:34:07 compute-0 podman[414278]: 2025-10-11 09:34:07.058789596 +0000 UTC m=+0.161365675 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true)
Oct 11 09:34:07 compute-0 sshd-session[414275]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:34:07 compute-0 sshd-session[414275]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:34:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2774: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:34:07 compute-0 ceph-mon[74313]: pgmap v2774: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:34:08 compute-0 sshd-session[414275]: Failed password for invalid user mysql from 165.232.82.252 port 57764 ssh2
Oct 11 09:34:08 compute-0 nova_compute[260935]: 2025-10-11 09:34:08.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2775: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 09:34:09 compute-0 ceph-mon[74313]: pgmap v2775: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 11 09:34:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:34:09 compute-0 nova_compute[260935]: 2025-10-11 09:34:09.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:34:10 compute-0 nova_compute[260935]: 2025-10-11 09:34:10.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:10 compute-0 sshd-session[414275]: Connection closed by invalid user mysql 165.232.82.252 port 57764 [preauth]
Oct 11 09:34:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2776: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:34:11 compute-0 ceph-mon[74313]: pgmap v2776: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:34:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2777: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:34:13 compute-0 ceph-mon[74313]: pgmap v2777: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:34:13 compute-0 nova_compute[260935]: 2025-10-11 09:34:13.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:34:14 compute-0 nova_compute[260935]: 2025-10-11 09:34:14.729 2 DEBUG nova.compute.manager [req-40aec95a-a27c-4450-8db4-b703678a5875 req-122188ee-6929-4698-9c76-2bebaab1b122 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-changed-ac79972f-498e-46be-b858-6c67d0f29f93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:34:14 compute-0 nova_compute[260935]: 2025-10-11 09:34:14.730 2 DEBUG nova.compute.manager [req-40aec95a-a27c-4450-8db4-b703678a5875 req-122188ee-6929-4698-9c76-2bebaab1b122 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Refreshing instance network info cache due to event network-changed-ac79972f-498e-46be-b858-6c67d0f29f93. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:34:14 compute-0 nova_compute[260935]: 2025-10-11 09:34:14.730 2 DEBUG oslo_concurrency.lockutils [req-40aec95a-a27c-4450-8db4-b703678a5875 req-122188ee-6929-4698-9c76-2bebaab1b122 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:34:14 compute-0 nova_compute[260935]: 2025-10-11 09:34:14.731 2 DEBUG oslo_concurrency.lockutils [req-40aec95a-a27c-4450-8db4-b703678a5875 req-122188ee-6929-4698-9c76-2bebaab1b122 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:34:14 compute-0 nova_compute[260935]: 2025-10-11 09:34:14.731 2 DEBUG nova.network.neutron [req-40aec95a-a27c-4450-8db4-b703678a5875 req-122188ee-6929-4698-9c76-2bebaab1b122 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Refreshing network info cache for port ac79972f-498e-46be-b858-6c67d0f29f93 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:34:14 compute-0 nova_compute[260935]: 2025-10-11 09:34:14.879 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "5caa1430-1bd3-4b60-83f0-908b98b891cf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:14 compute-0 nova_compute[260935]: 2025-10-11 09:34:14.880 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:14 compute-0 nova_compute[260935]: 2025-10-11 09:34:14.880 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:14 compute-0 nova_compute[260935]: 2025-10-11 09:34:14.880 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:14 compute-0 nova_compute[260935]: 2025-10-11 09:34:14.881 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:34:14 compute-0 nova_compute[260935]: 2025-10-11 09:34:14.883 2 INFO nova.compute.manager [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Terminating instance
Oct 11 09:34:14 compute-0 nova_compute[260935]: 2025-10-11 09:34:14.887 2 DEBUG nova.compute.manager [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:34:14 compute-0 kernel: tapac79972f-49 (unregistering): left promiscuous mode
Oct 11 09:34:14 compute-0 NetworkManager[44960]: <info>  [1760175254.9530] device (tapac79972f-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:34:14 compute-0 ovn_controller[152945]: 2025-10-11T09:34:14Z|01547|binding|INFO|Releasing lport ac79972f-498e-46be-b858-6c67d0f29f93 from this chassis (sb_readonly=0)
Oct 11 09:34:14 compute-0 nova_compute[260935]: 2025-10-11 09:34:14.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:14 compute-0 ovn_controller[152945]: 2025-10-11T09:34:14Z|01548|binding|INFO|Setting lport ac79972f-498e-46be-b858-6c67d0f29f93 down in Southbound
Oct 11 09:34:14 compute-0 ovn_controller[152945]: 2025-10-11T09:34:14Z|01549|binding|INFO|Removing iface tapac79972f-49 ovn-installed in OVS
Oct 11 09:34:14 compute-0 nova_compute[260935]: 2025-10-11 09:34:14.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:14 compute-0 nova_compute[260935]: 2025-10-11 09:34:14.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.007 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:1b:5f 10.100.0.10 2001:db8::f816:3eff:fe0d:1b5f'], port_security=['fa:16:3e:0d:1b:5f 10.100.0.10 2001:db8::f816:3eff:fe0d:1b5f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8::f816:3eff:fe0d:1b5f/64', 'neutron:device_id': '5caa1430-1bd3-4b60-83f0-908b98b891cf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84203be9-d0b1-4633-bcd6-51523ee507a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '49b6c18b-a992-472d-bab2-db1a8305eb48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d927a0f8-4868-42ce-9cc2-00b73d34efaa, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ac79972f-498e-46be-b858-6c67d0f29f93) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.008 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ac79972f-498e-46be-b858-6c67d0f29f93 in datapath 84203be9-d0b1-4633-bcd6-51523ee507a5 unbound from our chassis
Oct 11 09:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.010 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84203be9-d0b1-4633-bcd6-51523ee507a5
Oct 11 09:34:15 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Oct 11 09:34:15 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d0000008a.scope: Consumed 13.405s CPU time.
Oct 11 09:34:15 compute-0 systemd-machined[215705]: Machine qemu-162-instance-0000008a terminated.
Oct 11 09:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.030 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5733b19f-7c09-4722-9100-a2763df045e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.064 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[70d061a2-967a-48ca-bb52-e7fa2205bd3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.069 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[582f876e-b66e-48e3-8c1e-8da2ed3b29d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.097 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[2655f18b-ecb8-43f7-bee2-9273318eec58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.121 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[070111d1-444b-4f78-bb3e-b2e83d1c80dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84203be9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:4e:e8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2940, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2940, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702200, 'reachable_time': 31343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2352, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2352, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414334, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.134 2 INFO nova.virt.libvirt.driver [-] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Instance destroyed successfully.
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.134 2 DEBUG nova.objects.instance [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid 5caa1430-1bd3-4b60-83f0-908b98b891cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.141 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1dfd8b0c-8c98-45df-b5dd-4b6223add90e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap84203be9-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702218, 'tstamp': 702218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414342, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap84203be9-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702223, 'tstamp': 702223}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414342, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.143 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84203be9-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.151 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84203be9-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:34:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2778: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 11 09:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.152 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.153 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84203be9-d0, col_values=(('external_ids', {'iface-id': '8bbc5aca-a0c9-493f-8847-60a74f11dbe6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.153 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.204 2 DEBUG nova.virt.libvirt.vif [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:33:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1406255321',display_name='tempest-TestGettingAddress-server-1406255321',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1406255321',id=138,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVfw7pOtwEAslurCfMJI+LHxZQ7Z2OcTKKHxGE0pH5jm6Sci3HdEc6EkB55fGhUPW8iPpuzF74thdADhl6pL+8SkI+p3jvP78xNgpaS0telyWktJDIyQUmQdgZIxm8Eyg==',key_name='tempest-TestGettingAddress-2055882368',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:33:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-p3ajjp08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:33:48Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=5caa1430-1bd3-4b60-83f0-908b98b891cf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.204 2 DEBUG nova.network.os_vif_util [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.205 2 DEBUG nova.network.os_vif_util [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0d:1b:5f,bridge_name='br-int',has_traffic_filtering=True,id=ac79972f-498e-46be-b858-6c67d0f29f93,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac79972f-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.205 2 DEBUG os_vif [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:1b:5f,bridge_name='br-int',has_traffic_filtering=True,id=ac79972f-498e-46be-b858-6c67d0f29f93,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac79972f-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.207 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac79972f-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.214 2 INFO os_vif [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:1b:5f,bridge_name='br-int',has_traffic_filtering=True,id=ac79972f-498e-46be-b858-6c67d0f29f93,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac79972f-49')
Oct 11 09:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.229 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.230 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:15.231 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:34:15 compute-0 ceph-mon[74313]: pgmap v2778: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.337 2 DEBUG nova.compute.manager [req-8246ea68-8883-4a4d-a26a-1396194d7304 req-f453b8bf-edf9-462c-859f-dc69a19fc694 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-vif-unplugged-ac79972f-498e-46be-b858-6c67d0f29f93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.338 2 DEBUG oslo_concurrency.lockutils [req-8246ea68-8883-4a4d-a26a-1396194d7304 req-f453b8bf-edf9-462c-859f-dc69a19fc694 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.338 2 DEBUG oslo_concurrency.lockutils [req-8246ea68-8883-4a4d-a26a-1396194d7304 req-f453b8bf-edf9-462c-859f-dc69a19fc694 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.339 2 DEBUG oslo_concurrency.lockutils [req-8246ea68-8883-4a4d-a26a-1396194d7304 req-f453b8bf-edf9-462c-859f-dc69a19fc694 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.339 2 DEBUG nova.compute.manager [req-8246ea68-8883-4a4d-a26a-1396194d7304 req-f453b8bf-edf9-462c-859f-dc69a19fc694 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] No waiting events found dispatching network-vif-unplugged-ac79972f-498e-46be-b858-6c67d0f29f93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.340 2 DEBUG nova.compute.manager [req-8246ea68-8883-4a4d-a26a-1396194d7304 req-f453b8bf-edf9-462c-859f-dc69a19fc694 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-vif-unplugged-ac79972f-498e-46be-b858-6c67d0f29f93 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.638 2 INFO nova.virt.libvirt.driver [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Deleting instance files /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf_del
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.639 2 INFO nova.virt.libvirt.driver [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Deletion of /var/lib/nova/instances/5caa1430-1bd3-4b60-83f0-908b98b891cf_del complete
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.786 2 INFO nova.compute.manager [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Took 0.90 seconds to destroy the instance on the hypervisor.
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.786 2 DEBUG oslo.service.loopingcall [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.787 2 DEBUG nova.compute.manager [-] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:34:15 compute-0 nova_compute[260935]: 2025-10-11 09:34:15.787 2 DEBUG nova.network.neutron [-] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:34:16 compute-0 nova_compute[260935]: 2025-10-11 09:34:16.716 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:34:16 compute-0 nova_compute[260935]: 2025-10-11 09:34:16.716 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 09:34:16 compute-0 nova_compute[260935]: 2025-10-11 09:34:16.753 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 09:34:16 compute-0 nova_compute[260935]: 2025-10-11 09:34:16.963 2 DEBUG nova.network.neutron [req-40aec95a-a27c-4450-8db4-b703678a5875 req-122188ee-6929-4698-9c76-2bebaab1b122 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Updated VIF entry in instance network info cache for port ac79972f-498e-46be-b858-6c67d0f29f93. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:34:16 compute-0 nova_compute[260935]: 2025-10-11 09:34:16.964 2 DEBUG nova.network.neutron [req-40aec95a-a27c-4450-8db4-b703678a5875 req-122188ee-6929-4698-9c76-2bebaab1b122 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Updating instance_info_cache with network_info: [{"id": "ac79972f-498e-46be-b858-6c67d0f29f93", "address": "fa:16:3e:0d:1b:5f", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0d:1b5f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac79972f-49", "ovs_interfaceid": "ac79972f-498e-46be-b858-6c67d0f29f93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:34:17 compute-0 nova_compute[260935]: 2025-10-11 09:34:17.134 2 DEBUG oslo_concurrency.lockutils [req-40aec95a-a27c-4450-8db4-b703678a5875 req-122188ee-6929-4698-9c76-2bebaab1b122 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-5caa1430-1bd3-4b60-83f0-908b98b891cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:34:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2779: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 11 09:34:17 compute-0 ceph-mon[74313]: pgmap v2779: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct 11 09:34:17 compute-0 nova_compute[260935]: 2025-10-11 09:34:17.435 2 DEBUG nova.compute.manager [req-4aa54c14-c116-46c4-a92d-a93a1d8f3938 req-bcbe8eb0-4199-40db-b9c5-82ba90a7c747 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:34:17 compute-0 nova_compute[260935]: 2025-10-11 09:34:17.435 2 DEBUG oslo_concurrency.lockutils [req-4aa54c14-c116-46c4-a92d-a93a1d8f3938 req-bcbe8eb0-4199-40db-b9c5-82ba90a7c747 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:17 compute-0 nova_compute[260935]: 2025-10-11 09:34:17.436 2 DEBUG oslo_concurrency.lockutils [req-4aa54c14-c116-46c4-a92d-a93a1d8f3938 req-bcbe8eb0-4199-40db-b9c5-82ba90a7c747 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:17 compute-0 nova_compute[260935]: 2025-10-11 09:34:17.436 2 DEBUG oslo_concurrency.lockutils [req-4aa54c14-c116-46c4-a92d-a93a1d8f3938 req-bcbe8eb0-4199-40db-b9c5-82ba90a7c747 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:34:17 compute-0 nova_compute[260935]: 2025-10-11 09:34:17.436 2 DEBUG nova.compute.manager [req-4aa54c14-c116-46c4-a92d-a93a1d8f3938 req-bcbe8eb0-4199-40db-b9c5-82ba90a7c747 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] No waiting events found dispatching network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:34:17 compute-0 nova_compute[260935]: 2025-10-11 09:34:17.437 2 WARNING nova.compute.manager [req-4aa54c14-c116-46c4-a92d-a93a1d8f3938 req-bcbe8eb0-4199-40db-b9c5-82ba90a7c747 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received unexpected event network-vif-plugged-ac79972f-498e-46be-b858-6c67d0f29f93 for instance with vm_state active and task_state deleting.
Oct 11 09:34:17 compute-0 nova_compute[260935]: 2025-10-11 09:34:17.526 2 DEBUG nova.network.neutron [-] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:34:17 compute-0 nova_compute[260935]: 2025-10-11 09:34:17.546 2 INFO nova.compute.manager [-] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Took 1.76 seconds to deallocate network for instance.
Oct 11 09:34:17 compute-0 nova_compute[260935]: 2025-10-11 09:34:17.590 2 DEBUG nova.compute.manager [req-ada5ff7c-f96e-4a16-ab8e-48e27b12c810 req-c8a10e74-8c1f-4069-9431-9f8e1d60158f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Received event network-vif-deleted-ac79972f-498e-46be-b858-6c67d0f29f93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:34:17 compute-0 nova_compute[260935]: 2025-10-11 09:34:17.618 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:17 compute-0 nova_compute[260935]: 2025-10-11 09:34:17.618 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:17 compute-0 nova_compute[260935]: 2025-10-11 09:34:17.741 2 DEBUG oslo_concurrency.processutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:34:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:34:18 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1794202127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:34:18 compute-0 nova_compute[260935]: 2025-10-11 09:34:18.255 2 DEBUG oslo_concurrency.processutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:34:18 compute-0 nova_compute[260935]: 2025-10-11 09:34:18.262 2 DEBUG nova.compute.provider_tree [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:34:18 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1794202127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:34:18 compute-0 nova_compute[260935]: 2025-10-11 09:34:18.332 2 DEBUG nova.scheduler.client.report [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:34:18 compute-0 nova_compute[260935]: 2025-10-11 09:34:18.476 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:34:18 compute-0 nova_compute[260935]: 2025-10-11 09:34:18.646 2 INFO nova.scheduler.client.report [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance 5caa1430-1bd3-4b60-83f0-908b98b891cf
Oct 11 09:34:18 compute-0 nova_compute[260935]: 2025-10-11 09:34:18.727 2 DEBUG oslo_concurrency.lockutils [None req-38e74ca2-0169-4507-aea0-18c915f71c4b 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "5caa1430-1bd3-4b60-83f0-908b98b891cf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:34:18 compute-0 nova_compute[260935]: 2025-10-11 09:34:18.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2780: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 16 KiB/s wr, 29 op/s
Oct 11 09:34:19 compute-0 ceph-mon[74313]: pgmap v2780: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 16 KiB/s wr, 29 op/s
Oct 11 09:34:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:34:20 compute-0 nova_compute[260935]: 2025-10-11 09:34:20.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2781: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 3.9 KiB/s wr, 29 op/s
Oct 11 09:34:21 compute-0 ceph-mon[74313]: pgmap v2781: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 3.9 KiB/s wr, 29 op/s
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.344 2 DEBUG nova.compute.manager [req-10bbbeaa-bc36-4ae1-aba3-6ab7ca4b0b1d req-78a08336-4971-4b07-9f24-de8659f7f43f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-changed-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.344 2 DEBUG nova.compute.manager [req-10bbbeaa-bc36-4ae1-aba3-6ab7ca4b0b1d req-78a08336-4971-4b07-9f24-de8659f7f43f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Refreshing instance network info cache due to event network-changed-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.344 2 DEBUG oslo_concurrency.lockutils [req-10bbbeaa-bc36-4ae1-aba3-6ab7ca4b0b1d req-78a08336-4971-4b07-9f24-de8659f7f43f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.344 2 DEBUG oslo_concurrency.lockutils [req-10bbbeaa-bc36-4ae1-aba3-6ab7ca4b0b1d req-78a08336-4971-4b07-9f24-de8659f7f43f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.344 2 DEBUG nova.network.neutron [req-10bbbeaa-bc36-4ae1-aba3-6ab7ca4b0b1d req-78a08336-4971-4b07-9f24-de8659f7f43f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Refreshing network info cache for port fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.439 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.439 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.439 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.440 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.440 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.441 2 INFO nova.compute.manager [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Terminating instance
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.442 2 DEBUG nova.compute.manager [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:34:21 compute-0 kernel: tapfca94e7c-1f (unregistering): left promiscuous mode
Oct 11 09:34:21 compute-0 NetworkManager[44960]: <info>  [1760175261.5099] device (tapfca94e7c-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:34:21 compute-0 ovn_controller[152945]: 2025-10-11T09:34:21Z|01550|binding|INFO|Releasing lport fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 from this chassis (sb_readonly=0)
Oct 11 09:34:21 compute-0 ovn_controller[152945]: 2025-10-11T09:34:21Z|01551|binding|INFO|Setting lport fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 down in Southbound
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:21 compute-0 ovn_controller[152945]: 2025-10-11T09:34:21Z|01552|binding|INFO|Removing iface tapfca94e7c-1f ovn-installed in OVS
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.537 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:ef:fb 10.100.0.12 2001:db8::f816:3eff:fe9b:effb'], port_security=['fa:16:3e:9b:ef:fb 10.100.0.12 2001:db8::f816:3eff:fe9b:effb'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe9b:effb/64', 'neutron:device_id': 'e7751d7b-3ef7-4b84-a579-89d4b72f6cf4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84203be9-d0b1-4633-bcd6-51523ee507a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db6885dd005947ad850fed13cefdf2fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '49b6c18b-a992-472d-bab2-db1a8305eb48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d927a0f8-4868-42ce-9cc2-00b73d34efaa, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:34:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.540 162815 INFO neutron.agent.ovn.metadata.agent [-] Port fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 in datapath 84203be9-d0b1-4633-bcd6-51523ee507a5 unbound from our chassis
Oct 11 09:34:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.544 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84203be9-d0b1-4633-bcd6-51523ee507a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:34:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.546 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[8bfb7e36-4354-4b3a-a06e-64ef66ecb107]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.547 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5 namespace which is not needed anymore
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:21 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000089.scope: Deactivated successfully.
Oct 11 09:34:21 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000089.scope: Consumed 16.625s CPU time.
Oct 11 09:34:21 compute-0 systemd-machined[215705]: Machine qemu-161-instance-00000089 terminated.
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.697 2 INFO nova.virt.libvirt.driver [-] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Instance destroyed successfully.
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.698 2 DEBUG nova.objects.instance [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lazy-loading 'resources' on Instance uuid e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.715 2 DEBUG nova.virt.libvirt.vif [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:32:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1457402262',display_name='tempest-TestGettingAddress-server-1457402262',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1457402262',id=137,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVfw7pOtwEAslurCfMJI+LHxZQ7Z2OcTKKHxGE0pH5jm6Sci3HdEc6EkB55fGhUPW8iPpuzF74thdADhl6pL+8SkI+p3jvP78xNgpaS0telyWktJDIyQUmQdgZIxm8Eyg==',key_name='tempest-TestGettingAddress-2055882368',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:32:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='db6885dd005947ad850fed13cefdf2fc',ramdisk_id='',reservation_id='r-axv24jbu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-1238692117',owner_user_name='tempest-TestGettingAddress-1238692117-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:32:58Z,user_data=None,user_id='0e1fd111a1ff43179343661e01457085',uuid=e7751d7b-3ef7-4b84-a579-89d4b72f6cf4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.715 2 DEBUG nova.network.os_vif_util [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converting VIF {"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.717 2 DEBUG nova.network.os_vif_util [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:ef:fb,bridge_name='br-int',has_traffic_filtering=True,id=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca94e7c-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.717 2 DEBUG os_vif [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:ef:fb,bridge_name='br-int',has_traffic_filtering=True,id=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca94e7c-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.723 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfca94e7c-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.736 2 INFO os_vif [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:ef:fb,bridge_name='br-int',has_traffic_filtering=True,id=fca94e7c-1f55-4f5d-a268-7a4f0b86bae0,network=Network(84203be9-d0b1-4633-bcd6-51523ee507a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfca94e7c-1f')
Oct 11 09:34:21 compute-0 neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5[412760]: [NOTICE]   (412764) : haproxy version is 2.8.14-c23fe91
Oct 11 09:34:21 compute-0 neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5[412760]: [NOTICE]   (412764) : path to executable is /usr/sbin/haproxy
Oct 11 09:34:21 compute-0 neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5[412760]: [WARNING]  (412764) : Exiting Master process...
Oct 11 09:34:21 compute-0 neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5[412760]: [ALERT]    (412764) : Current worker (412766) exited with code 143 (Terminated)
Oct 11 09:34:21 compute-0 neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5[412760]: [WARNING]  (412764) : All workers exited. Exiting... (0)
Oct 11 09:34:21 compute-0 systemd[1]: libpod-328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793.scope: Deactivated successfully.
Oct 11 09:34:21 compute-0 podman[414419]: 2025-10-11 09:34:21.787123995 +0000 UTC m=+0.077515508 container died 328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:34:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793-userdata-shm.mount: Deactivated successfully.
Oct 11 09:34:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b192159424a33d91dd7687b52effbbc7201b23f7d99b918f0ad704b8e1f4e2c9-merged.mount: Deactivated successfully.
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.839 2 DEBUG nova.compute.manager [req-ea9a42a1-4457-460b-83ce-e07ed88efa01 req-27d9ce7b-bd12-4a20-9a58-0a782e931a5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-vif-unplugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.840 2 DEBUG oslo_concurrency.lockutils [req-ea9a42a1-4457-460b-83ce-e07ed88efa01 req-27d9ce7b-bd12-4a20-9a58-0a782e931a5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.840 2 DEBUG oslo_concurrency.lockutils [req-ea9a42a1-4457-460b-83ce-e07ed88efa01 req-27d9ce7b-bd12-4a20-9a58-0a782e931a5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.840 2 DEBUG oslo_concurrency.lockutils [req-ea9a42a1-4457-460b-83ce-e07ed88efa01 req-27d9ce7b-bd12-4a20-9a58-0a782e931a5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.841 2 DEBUG nova.compute.manager [req-ea9a42a1-4457-460b-83ce-e07ed88efa01 req-27d9ce7b-bd12-4a20-9a58-0a782e931a5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] No waiting events found dispatching network-vif-unplugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.841 2 DEBUG nova.compute.manager [req-ea9a42a1-4457-460b-83ce-e07ed88efa01 req-27d9ce7b-bd12-4a20-9a58-0a782e931a5b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-vif-unplugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:34:21 compute-0 podman[414419]: 2025-10-11 09:34:21.848354813 +0000 UTC m=+0.138746366 container cleanup 328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:34:21 compute-0 systemd[1]: libpod-conmon-328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793.scope: Deactivated successfully.
Oct 11 09:34:21 compute-0 podman[414472]: 2025-10-11 09:34:21.951741341 +0000 UTC m=+0.072717913 container remove 328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:34:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.958 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f52183e-69ea-4e2f-93fa-9bf010caa6f0]: (4, ('Sat Oct 11 09:34:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5 (328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793)\n328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793\nSat Oct 11 09:34:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5 (328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793)\n328fa346917de62bfec3429b7ba4ef5fc57b6ac0cdc64d48811fdf8800ac0793\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.960 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[201ed882-171c-4cac-af88-634b4849beb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.961 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84203be9-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:34:21 compute-0 kernel: tap84203be9-d0: left promiscuous mode
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:21 compute-0 nova_compute[260935]: 2025-10-11 09:34:21.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:21 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:21.984 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6a9dba-f783-450a-bb43-1c5277eb0a8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:22.007 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0bab8b-8111-40a2-8753-3f8aef8ff011]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:22.007 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cff2442d-3200-4e56-a488-3b3c2c042021]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:22.032 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfc2a0b-4479-456e-987a-68c336de24bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702190, 'reachable_time': 27474, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414488, 'error': None, 'target': 'ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:22.034 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84203be9-d0b1-4633-bcd6-51523ee507a5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:34:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:22.034 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c13fef-96fd-4b29-b6ba-96f3b368f829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d84203be9\x2dd0b1\x2d4633\x2dbcd6\x2d51523ee507a5.mount: Deactivated successfully.
Oct 11 09:34:22 compute-0 nova_compute[260935]: 2025-10-11 09:34:22.138 2 INFO nova.virt.libvirt.driver [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Deleting instance files /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_del
Oct 11 09:34:22 compute-0 nova_compute[260935]: 2025-10-11 09:34:22.139 2 INFO nova.virt.libvirt.driver [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Deletion of /var/lib/nova/instances/e7751d7b-3ef7-4b84-a579-89d4b72f6cf4_del complete
Oct 11 09:34:22 compute-0 nova_compute[260935]: 2025-10-11 09:34:22.189 2 INFO nova.compute.manager [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 11 09:34:22 compute-0 nova_compute[260935]: 2025-10-11 09:34:22.190 2 DEBUG oslo.service.loopingcall [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:34:22 compute-0 nova_compute[260935]: 2025-10-11 09:34:22.190 2 DEBUG nova.compute.manager [-] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:34:22 compute-0 nova_compute[260935]: 2025-10-11 09:34:22.191 2 DEBUG nova.network.neutron [-] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:34:22 compute-0 nova_compute[260935]: 2025-10-11 09:34:22.766 2 DEBUG nova.network.neutron [req-10bbbeaa-bc36-4ae1-aba3-6ab7ca4b0b1d req-78a08336-4971-4b07-9f24-de8659f7f43f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Updated VIF entry in instance network info cache for port fca94e7c-1f55-4f5d-a268-7a4f0b86bae0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:34:22 compute-0 nova_compute[260935]: 2025-10-11 09:34:22.767 2 DEBUG nova.network.neutron [req-10bbbeaa-bc36-4ae1-aba3-6ab7ca4b0b1d req-78a08336-4971-4b07-9f24-de8659f7f43f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Updating instance_info_cache with network_info: [{"id": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "address": "fa:16:3e:9b:ef:fb", "network": {"id": "84203be9-d0b1-4633-bcd6-51523ee507a5", "bridge": "br-int", "label": "tempest-network-smoke--13547720", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9b:effb", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "db6885dd005947ad850fed13cefdf2fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfca94e7c-1f", "ovs_interfaceid": "fca94e7c-1f55-4f5d-a268-7a4f0b86bae0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:34:22 compute-0 nova_compute[260935]: 2025-10-11 09:34:22.788 2 DEBUG oslo_concurrency.lockutils [req-10bbbeaa-bc36-4ae1-aba3-6ab7ca4b0b1d req-78a08336-4971-4b07-9f24-de8659f7f43f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:34:22 compute-0 nova_compute[260935]: 2025-10-11 09:34:22.834 2 DEBUG nova.network.neutron [-] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:34:22 compute-0 nova_compute[260935]: 2025-10-11 09:34:22.849 2 INFO nova.compute.manager [-] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Took 0.66 seconds to deallocate network for instance.
Oct 11 09:34:22 compute-0 nova_compute[260935]: 2025-10-11 09:34:22.888 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:22 compute-0 nova_compute[260935]: 2025-10-11 09:34:22.889 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:22 compute-0 nova_compute[260935]: 2025-10-11 09:34:22.999 2 DEBUG oslo_concurrency.processutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:34:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2782: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.1 KiB/s wr, 56 op/s
Oct 11 09:34:23 compute-0 ceph-mon[74313]: pgmap v2782: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.1 KiB/s wr, 56 op/s
Oct 11 09:34:23 compute-0 nova_compute[260935]: 2025-10-11 09:34:23.423 2 DEBUG nova.compute.manager [req-f0b8aef2-8f21-4b60-9fec-1a68a55c1afb req-1a8d7cec-ad79-43d6-bf27-5e20c940b515 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-vif-deleted-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:34:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:34:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/395003438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:34:23 compute-0 nova_compute[260935]: 2025-10-11 09:34:23.462 2 DEBUG oslo_concurrency.processutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:34:23 compute-0 nova_compute[260935]: 2025-10-11 09:34:23.467 2 DEBUG nova.compute.provider_tree [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:34:23 compute-0 nova_compute[260935]: 2025-10-11 09:34:23.483 2 DEBUG nova.scheduler.client.report [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:34:23 compute-0 nova_compute[260935]: 2025-10-11 09:34:23.502 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:34:23 compute-0 nova_compute[260935]: 2025-10-11 09:34:23.524 2 INFO nova.scheduler.client.report [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Deleted allocations for instance e7751d7b-3ef7-4b84-a579-89d4b72f6cf4
Oct 11 09:34:23 compute-0 nova_compute[260935]: 2025-10-11 09:34:23.578 2 DEBUG oslo_concurrency.lockutils [None req-0ad757af-2e2c-4152-b14c-d19212ea303c 0e1fd111a1ff43179343661e01457085 db6885dd005947ad850fed13cefdf2fc - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:34:23 compute-0 nova_compute[260935]: 2025-10-11 09:34:23.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:23 compute-0 nova_compute[260935]: 2025-10-11 09:34:23.966 2 DEBUG nova.compute.manager [req-99247a1b-92d6-4dff-9602-a915a27d0072 req-b7582fd8-36aa-432a-94a0-a0b6f07a75ee e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received event network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:34:23 compute-0 nova_compute[260935]: 2025-10-11 09:34:23.967 2 DEBUG oslo_concurrency.lockutils [req-99247a1b-92d6-4dff-9602-a915a27d0072 req-b7582fd8-36aa-432a-94a0-a0b6f07a75ee e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:23 compute-0 nova_compute[260935]: 2025-10-11 09:34:23.967 2 DEBUG oslo_concurrency.lockutils [req-99247a1b-92d6-4dff-9602-a915a27d0072 req-b7582fd8-36aa-432a-94a0-a0b6f07a75ee e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:23 compute-0 nova_compute[260935]: 2025-10-11 09:34:23.968 2 DEBUG oslo_concurrency.lockutils [req-99247a1b-92d6-4dff-9602-a915a27d0072 req-b7582fd8-36aa-432a-94a0-a0b6f07a75ee e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "e7751d7b-3ef7-4b84-a579-89d4b72f6cf4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:34:23 compute-0 nova_compute[260935]: 2025-10-11 09:34:23.968 2 DEBUG nova.compute.manager [req-99247a1b-92d6-4dff-9602-a915a27d0072 req-b7582fd8-36aa-432a-94a0-a0b6f07a75ee e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] No waiting events found dispatching network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:34:23 compute-0 nova_compute[260935]: 2025-10-11 09:34:23.969 2 WARNING nova.compute.manager [req-99247a1b-92d6-4dff-9602-a915a27d0072 req-b7582fd8-36aa-432a-94a0-a0b6f07a75ee e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Received unexpected event network-vif-plugged-fca94e7c-1f55-4f5d-a268-7a4f0b86bae0 for instance with vm_state deleted and task_state None.
Oct 11 09:34:24 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/395003438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:34:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:34:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:34:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:34:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:34:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:34:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:34:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:34:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2783: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Oct 11 09:34:25 compute-0 ceph-mon[74313]: pgmap v2783: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Oct 11 09:34:25 compute-0 nova_compute[260935]: 2025-10-11 09:34:25.739 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:34:25 compute-0 podman[414513]: 2025-10-11 09:34:25.789551795 +0000 UTC m=+0.073272199 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 09:34:25 compute-0 sshd-session[414511]: Invalid user localhost from 155.4.244.179 port 25927
Oct 11 09:34:25 compute-0 sshd-session[414511]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:34:25 compute-0 sshd-session[414511]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:34:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:34:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3479799431' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:34:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:34:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3479799431' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:34:26 compute-0 nova_compute[260935]: 2025-10-11 09:34:26.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3479799431' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:34:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3479799431' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:34:27 compute-0 ovn_controller[152945]: 2025-10-11T09:34:27Z|01553|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:34:27 compute-0 ovn_controller[152945]: 2025-10-11T09:34:27Z|01554|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:34:27 compute-0 nova_compute[260935]: 2025-10-11 09:34:27.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2784: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Oct 11 09:34:27 compute-0 ovn_controller[152945]: 2025-10-11T09:34:27Z|01555|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:34:27 compute-0 ovn_controller[152945]: 2025-10-11T09:34:27Z|01556|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:34:27 compute-0 nova_compute[260935]: 2025-10-11 09:34:27.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:27 compute-0 ceph-mon[74313]: pgmap v2784: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Oct 11 09:34:28 compute-0 sshd-session[414511]: Failed password for invalid user localhost from 155.4.244.179 port 25927 ssh2
Oct 11 09:34:28 compute-0 nova_compute[260935]: 2025-10-11 09:34:28.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2785: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Oct 11 09:34:29 compute-0 ceph-mon[74313]: pgmap v2785: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Oct 11 09:34:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:34:29 compute-0 sshd-session[414511]: Received disconnect from 155.4.244.179 port 25927:11: Bye Bye [preauth]
Oct 11 09:34:29 compute-0 sshd-session[414511]: Disconnected from invalid user localhost 155.4.244.179 port 25927 [preauth]
Oct 11 09:34:30 compute-0 nova_compute[260935]: 2025-10-11 09:34:30.131 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175255.1291778, 5caa1430-1bd3-4b60-83f0-908b98b891cf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:34:30 compute-0 nova_compute[260935]: 2025-10-11 09:34:30.132 2 INFO nova.compute.manager [-] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] VM Stopped (Lifecycle Event)
Oct 11 09:34:30 compute-0 nova_compute[260935]: 2025-10-11 09:34:30.152 2 DEBUG nova.compute.manager [None req-df7224fd-343e-40a1-9727-812397c2ff45 - - - - - -] [instance: 5caa1430-1bd3-4b60-83f0-908b98b891cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:34:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2786: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:34:31 compute-0 ceph-mon[74313]: pgmap v2786: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:34:31 compute-0 nova_compute[260935]: 2025-10-11 09:34:31.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2787: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:34:33 compute-0 ceph-mon[74313]: pgmap v2787: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:34:33 compute-0 podman[414534]: 2025-10-11 09:34:33.793067948 +0000 UTC m=+0.080002338 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:34:33 compute-0 nova_compute[260935]: 2025-10-11 09:34:33.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:34:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2788: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:35 compute-0 ceph-mon[74313]: pgmap v2788: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:36 compute-0 nova_compute[260935]: 2025-10-11 09:34:36.691 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175261.6879365, e7751d7b-3ef7-4b84-a579-89d4b72f6cf4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:34:36 compute-0 nova_compute[260935]: 2025-10-11 09:34:36.692 2 INFO nova.compute.manager [-] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] VM Stopped (Lifecycle Event)
Oct 11 09:34:36 compute-0 nova_compute[260935]: 2025-10-11 09:34:36.724 2 DEBUG nova.compute.manager [None req-a6ef93fe-ac82-46a1-a32a-4f7bdf3011d5 - - - - - -] [instance: e7751d7b-3ef7-4b84-a579-89d4b72f6cf4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:34:36 compute-0 nova_compute[260935]: 2025-10-11 09:34:36.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2789: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:37 compute-0 ceph-mon[74313]: pgmap v2789: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:37 compute-0 podman[414554]: 2025-10-11 09:34:37.790683244 +0000 UTC m=+0.102889724 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:34:37 compute-0 podman[414555]: 2025-10-11 09:34:37.812113849 +0000 UTC m=+0.118749962 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller)
Oct 11 09:34:38 compute-0 nova_compute[260935]: 2025-10-11 09:34:38.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2790: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:39 compute-0 ceph-mon[74313]: pgmap v2790: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:34:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2791: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:41 compute-0 ceph-mon[74313]: pgmap v2791: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:41 compute-0 nova_compute[260935]: 2025-10-11 09:34:41.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:42 compute-0 sshd-session[414600]: Invalid user mysql from 165.232.82.252 port 56208
Oct 11 09:34:42 compute-0 sshd-session[414600]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:34:42 compute-0 sshd-session[414600]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:34:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2792: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:43 compute-0 ceph-mon[74313]: pgmap v2792: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:43 compute-0 nova_compute[260935]: 2025-10-11 09:34:43.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:44 compute-0 sshd-session[414600]: Failed password for invalid user mysql from 165.232.82.252 port 56208 ssh2
Oct 11 09:34:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:34:44 compute-0 sshd-session[414600]: Connection closed by invalid user mysql 165.232.82.252 port 56208 [preauth]
Oct 11 09:34:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2793: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:45 compute-0 ceph-mon[74313]: pgmap v2793: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:45 compute-0 nova_compute[260935]: 2025-10-11 09:34:45.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:34:45 compute-0 nova_compute[260935]: 2025-10-11 09:34:45.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:34:46 compute-0 nova_compute[260935]: 2025-10-11 09:34:46.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:34:46 compute-0 nova_compute[260935]: 2025-10-11 09:34:46.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:34:46 compute-0 nova_compute[260935]: 2025-10-11 09:34:46.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2794: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:47 compute-0 ceph-mon[74313]: pgmap v2794: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:47 compute-0 nova_compute[260935]: 2025-10-11 09:34:47.559 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:34:47 compute-0 nova_compute[260935]: 2025-10-11 09:34:47.559 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:34:47 compute-0 nova_compute[260935]: 2025-10-11 09:34:47.560 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:34:48 compute-0 nova_compute[260935]: 2025-10-11 09:34:48.722 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:34:48 compute-0 nova_compute[260935]: 2025-10-11 09:34:48.742 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:34:48 compute-0 nova_compute[260935]: 2025-10-11 09:34:48.743 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:34:48 compute-0 nova_compute[260935]: 2025-10-11 09:34:48.743 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:34:48 compute-0 nova_compute[260935]: 2025-10-11 09:34:48.743 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:34:48 compute-0 nova_compute[260935]: 2025-10-11 09:34:48.744 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:34:48 compute-0 nova_compute[260935]: 2025-10-11 09:34:48.744 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:34:48 compute-0 nova_compute[260935]: 2025-10-11 09:34:48.779 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:48 compute-0 nova_compute[260935]: 2025-10-11 09:34:48.780 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:48 compute-0 nova_compute[260935]: 2025-10-11 09:34:48.780 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:34:48 compute-0 nova_compute[260935]: 2025-10-11 09:34:48.780 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:34:48 compute-0 nova_compute[260935]: 2025-10-11 09:34:48.781 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:34:48 compute-0 nova_compute[260935]: 2025-10-11 09:34:48.824 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:48 compute-0 nova_compute[260935]: 2025-10-11 09:34:48.825 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:48 compute-0 nova_compute[260935]: 2025-10-11 09:34:48.937 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:34:48 compute-0 nova_compute[260935]: 2025-10-11 09:34:48.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.062 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.062 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.070 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.071 2 INFO nova.compute.claims [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:34:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2795: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:34:49 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3575484801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.210 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:34:49 compute-0 ceph-mon[74313]: pgmap v2795: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:49 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3575484801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.289 2 DEBUG nova.scheduler.client.report [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.306 2 DEBUG nova.scheduler.client.report [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.307 2 DEBUG nova.compute.provider_tree [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.323 2 DEBUG nova.scheduler.client.report [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.337 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.338 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.338 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.343 2 DEBUG nova.scheduler.client.report [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.347 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.347 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.352 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.352 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.429 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:34:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.701 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.702 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2898MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.703 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:34:49 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/719642367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.937 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:34:49 compute-0 nova_compute[260935]: 2025-10-11 09:34:49.944 2 DEBUG nova.compute.provider_tree [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.025 2 DEBUG nova.scheduler.client.report [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.092 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.093 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.099 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.396s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.159 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.160 2 DEBUG nova.network.neutron [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.235 2 INFO nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:34:50 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/719642367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.273 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.273 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.274 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.274 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance a77fa566-1ab4-484c-b6ef-53471c9f91f8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.274 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.275 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.304 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.374 2 DEBUG nova.policy [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '489c4d0457354f4684f8b9e53261224f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.406 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.562 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.566 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.566 2 INFO nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Creating image(s)
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.624 2 DEBUG nova.storage.rbd_utils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.664 2 DEBUG nova.storage.rbd_utils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.698 2 DEBUG nova.storage.rbd_utils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.703 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.801 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.802 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.803 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.803 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.830 2 DEBUG nova.storage.rbd_utils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.836 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:34:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:34:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2718831139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.919 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.927 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:34:50 compute-0 nova_compute[260935]: 2025-10-11 09:34:50.977 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:34:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2796: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:51 compute-0 nova_compute[260935]: 2025-10-11 09:34:51.212 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:34:51 compute-0 nova_compute[260935]: 2025-10-11 09:34:51.213 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:34:51 compute-0 nova_compute[260935]: 2025-10-11 09:34:51.215 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:34:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2718831139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:34:51 compute-0 ceph-mon[74313]: pgmap v2796: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:34:51 compute-0 nova_compute[260935]: 2025-10-11 09:34:51.265 2 DEBUG nova.network.neutron [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Successfully created port: 75ae5f39-4616-4581-8e22-411bee7c1747 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:34:51 compute-0 nova_compute[260935]: 2025-10-11 09:34:51.324 2 DEBUG nova.storage.rbd_utils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] resizing rbd image a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:34:51 compute-0 nova_compute[260935]: 2025-10-11 09:34:51.454 2 DEBUG nova.objects.instance [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'migration_context' on Instance uuid a77fa566-1ab4-484c-b6ef-53471c9f91f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:34:51 compute-0 nova_compute[260935]: 2025-10-11 09:34:51.530 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:34:51 compute-0 nova_compute[260935]: 2025-10-11 09:34:51.531 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Ensure instance console log exists: /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:34:51 compute-0 nova_compute[260935]: 2025-10-11 09:34:51.532 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:51 compute-0 nova_compute[260935]: 2025-10-11 09:34:51.535 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:51 compute-0 nova_compute[260935]: 2025-10-11 09:34:51.536 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:34:51 compute-0 nova_compute[260935]: 2025-10-11 09:34:51.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:52 compute-0 nova_compute[260935]: 2025-10-11 09:34:52.210 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:34:52 compute-0 nova_compute[260935]: 2025-10-11 09:34:52.211 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:34:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2797: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:34:53 compute-0 ceph-mon[74313]: pgmap v2797: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:34:53 compute-0 nova_compute[260935]: 2025-10-11 09:34:53.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.480302) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175294480389, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1768, "num_deletes": 250, "total_data_size": 2803516, "memory_usage": 2845480, "flush_reason": "Manual Compaction"}
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175294562274, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 1594857, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56885, "largest_seqno": 58652, "table_properties": {"data_size": 1589131, "index_size": 2800, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15224, "raw_average_key_size": 20, "raw_value_size": 1576344, "raw_average_value_size": 2141, "num_data_blocks": 128, "num_entries": 736, "num_filter_entries": 736, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175105, "oldest_key_time": 1760175105, "file_creation_time": 1760175294, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 82047 microseconds, and 9383 cpu microseconds.
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.562354) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 1594857 bytes OK
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.562388) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.564056) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.564087) EVENT_LOG_v1 {"time_micros": 1760175294564077, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.564116) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 2796002, prev total WAL file size 2796002, number of live WAL files 2.
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.565580) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323534' seq:72057594037927935, type:22 .. '6D6772737461740032353035' seq:0, type:0; will stop at (end)
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(1557KB)], [134(9889KB)]
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175294565621, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 11721902, "oldest_snapshot_seqno": -1}
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7808 keys, 9590003 bytes, temperature: kUnknown
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175294765507, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 9590003, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9540705, "index_size": 28705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19525, "raw_key_size": 203111, "raw_average_key_size": 26, "raw_value_size": 9404124, "raw_average_value_size": 1204, "num_data_blocks": 1122, "num_entries": 7808, "num_filter_entries": 7808, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175294, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.767152) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 9590003 bytes
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.794901) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 58.6 rd, 47.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.7 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(13.4) write-amplify(6.0) OK, records in: 8231, records dropped: 423 output_compression: NoCompression
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.795124) EVENT_LOG_v1 {"time_micros": 1760175294795107, "job": 82, "event": "compaction_finished", "compaction_time_micros": 200007, "compaction_time_cpu_micros": 33782, "output_level": 6, "num_output_files": 1, "total_output_size": 9590003, "num_input_records": 8231, "num_output_records": 7808, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175294795792, "job": 82, "event": "table_file_deletion", "file_number": 136}
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175294799000, "job": 82, "event": "table_file_deletion", "file_number": 134}
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.565527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.799155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.799166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.799170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.799176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:34:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:54.799182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:34:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:34:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:34:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:34:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:34:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:34:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:34:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:34:54
Oct 11 09:34:54 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:34:54 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:34:54 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'vms', '.rgw.root', 'default.rgw.control', 'volumes', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr']
Oct 11 09:34:54 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:34:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2798: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:34:55 compute-0 sudo[414837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:34:55 compute-0 ceph-mon[74313]: pgmap v2798: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:34:55 compute-0 sudo[414837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:34:55 compute-0 sudo[414837]: pam_unix(sudo:session): session closed for user root
Oct 11 09:34:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:34:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:34:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:34:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:34:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:34:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:34:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:34:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:34:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:34:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:34:55 compute-0 sudo[414862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:34:55 compute-0 sudo[414862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:34:55 compute-0 sudo[414862]: pam_unix(sudo:session): session closed for user root
Oct 11 09:34:55 compute-0 sudo[414887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:34:55 compute-0 sudo[414887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:34:55 compute-0 sudo[414887]: pam_unix(sudo:session): session closed for user root
Oct 11 09:34:55 compute-0 nova_compute[260935]: 2025-10-11 09:34:55.726 2 DEBUG nova.network.neutron [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Successfully updated port: 75ae5f39-4616-4581-8e22-411bee7c1747 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:34:55 compute-0 sudo[414912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 09:34:55 compute-0 sudo[414912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:34:55 compute-0 nova_compute[260935]: 2025-10-11 09:34:55.915 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:34:55 compute-0 nova_compute[260935]: 2025-10-11 09:34:55.915 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquired lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:34:55 compute-0 nova_compute[260935]: 2025-10-11 09:34:55.916 2 DEBUG nova.network.neutron [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:34:55 compute-0 nova_compute[260935]: 2025-10-11 09:34:55.920 2 DEBUG nova.compute.manager [req-194e8d22-3c7a-4e35-aadb-81a85961cc62 req-9eb466f2-c47b-48ea-8d0a-b56a106bcd99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received event network-changed-75ae5f39-4616-4581-8e22-411bee7c1747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:34:55 compute-0 nova_compute[260935]: 2025-10-11 09:34:55.921 2 DEBUG nova.compute.manager [req-194e8d22-3c7a-4e35-aadb-81a85961cc62 req-9eb466f2-c47b-48ea-8d0a-b56a106bcd99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Refreshing instance network info cache due to event network-changed-75ae5f39-4616-4581-8e22-411bee7c1747. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:34:55 compute-0 nova_compute[260935]: 2025-10-11 09:34:55.921 2 DEBUG oslo_concurrency.lockutils [req-194e8d22-3c7a-4e35-aadb-81a85961cc62 req-9eb466f2-c47b-48ea-8d0a-b56a106bcd99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:34:56 compute-0 nova_compute[260935]: 2025-10-11 09:34:56.066 2 DEBUG nova.network.neutron [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:34:56 compute-0 sudo[414912]: pam_unix(sudo:session): session closed for user root
Oct 11 09:34:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:34:56 compute-0 podman[414952]: 2025-10-11 09:34:56.121385175 +0000 UTC m=+0.082273923 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 11 09:34:56 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:34:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:34:56 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:34:56 compute-0 sudo[414977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:34:56 compute-0 sudo[414977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:34:56 compute-0 sudo[414977]: pam_unix(sudo:session): session closed for user root
Oct 11 09:34:56 compute-0 sudo[415002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:34:56 compute-0 sudo[415002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:34:56 compute-0 sudo[415002]: pam_unix(sudo:session): session closed for user root
Oct 11 09:34:56 compute-0 sudo[415027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:34:56 compute-0 sudo[415027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:34:56 compute-0 sudo[415027]: pam_unix(sudo:session): session closed for user root
Oct 11 09:34:56 compute-0 sudo[415052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:34:56 compute-0 sudo[415052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:34:56 compute-0 nova_compute[260935]: 2025-10-11 09:34:56.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:57 compute-0 sudo[415052]: pam_unix(sudo:session): session closed for user root
Oct 11 09:34:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:34:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:34:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:34:57 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:34:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:34:57 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:34:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:34:57 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:34:57 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 28ec1567-f3a2-480f-be52-d55beb6f92e1 does not exist
Oct 11 09:34:57 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 96d67f8d-b9fa-4636-ba1c-2c9f0d443ff2 does not exist
Oct 11 09:34:57 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev da124377-ee81-465b-a69b-15090d2845ba does not exist
Oct 11 09:34:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:34:57 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.159124) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175297159161, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 299, "num_deletes": 251, "total_data_size": 89249, "memory_usage": 95072, "flush_reason": "Manual Compaction"}
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175297161667, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 88708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58653, "largest_seqno": 58951, "table_properties": {"data_size": 86732, "index_size": 203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5190, "raw_average_key_size": 18, "raw_value_size": 82748, "raw_average_value_size": 297, "num_data_blocks": 8, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175295, "oldest_key_time": 1760175295, "file_creation_time": 1760175297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 2579 microseconds, and 788 cpu microseconds.
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.161701) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 88708 bytes OK
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.161718) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.163017) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.163034) EVENT_LOG_v1 {"time_micros": 1760175297163028, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.163046) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 87073, prev total WAL file size 87073, number of live WAL files 2.
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.163369) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(86KB)], [137(9365KB)]
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175297163401, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 9678711, "oldest_snapshot_seqno": -1}
Oct 11 09:34:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:34:57 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:34:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:34:57 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:34:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2799: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.188 2 DEBUG nova.network.neutron [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updating instance_info_cache with network_info: [{"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7573 keys, 7985086 bytes, temperature: kUnknown
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175297213979, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 7985086, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7938870, "index_size": 26199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18949, "raw_key_size": 198898, "raw_average_key_size": 26, "raw_value_size": 7807777, "raw_average_value_size": 1031, "num_data_blocks": 1008, "num_entries": 7573, "num_filter_entries": 7573, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.214361) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 7985086 bytes
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.216086) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.9 rd, 157.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.1 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(199.1) write-amplify(90.0) OK, records in: 8086, records dropped: 513 output_compression: NoCompression
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.216119) EVENT_LOG_v1 {"time_micros": 1760175297216101, "job": 84, "event": "compaction_finished", "compaction_time_micros": 50694, "compaction_time_cpu_micros": 25311, "output_level": 6, "num_output_files": 1, "total_output_size": 7985086, "num_input_records": 8086, "num_output_records": 7573, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175297216335, "job": 84, "event": "table_file_deletion", "file_number": 139}
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175297218662, "job": 84, "event": "table_file_deletion", "file_number": 137}
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.163298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.218766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.218777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.218781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.218785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:34:57 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:34:57.218788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:34:57 compute-0 sudo[415109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:34:57 compute-0 sudo[415109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:34:57 compute-0 sudo[415109]: pam_unix(sudo:session): session closed for user root
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.349 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Releasing lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.350 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Instance network_info: |[{"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.351 2 DEBUG oslo_concurrency.lockutils [req-194e8d22-3c7a-4e35-aadb-81a85961cc62 req-9eb466f2-c47b-48ea-8d0a-b56a106bcd99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.352 2 DEBUG nova.network.neutron [req-194e8d22-3c7a-4e35-aadb-81a85961cc62 req-9eb466f2-c47b-48ea-8d0a-b56a106bcd99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Refreshing network info cache for port 75ae5f39-4616-4581-8e22-411bee7c1747 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:34:57 compute-0 sudo[415134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:34:57 compute-0 sudo[415134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:34:57 compute-0 sudo[415134]: pam_unix(sudo:session): session closed for user root
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.363 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Start _get_guest_xml network_info=[{"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.370 2 WARNING nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.377 2 DEBUG nova.virt.libvirt.host [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.379 2 DEBUG nova.virt.libvirt.host [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.384 2 DEBUG nova.virt.libvirt.host [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.384 2 DEBUG nova.virt.libvirt.host [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.385 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.386 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.386 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.387 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.387 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.388 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.388 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.388 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.389 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.389 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.390 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.390 2 DEBUG nova.virt.hardware [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.395 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:34:57 compute-0 sudo[415159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:34:57 compute-0 sudo[415159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:34:57 compute-0 sudo[415159]: pam_unix(sudo:session): session closed for user root
Oct 11 09:34:57 compute-0 sudo[415185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:34:57 compute-0 sudo[415185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:34:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:34:57 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3877447729' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.902 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.942 2 DEBUG nova.storage.rbd_utils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:34:57 compute-0 nova_compute[260935]: 2025-10-11 09:34:57.953 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:34:58 compute-0 podman[415290]: 2025-10-11 09:34:58.051536996 +0000 UTC m=+0.062014312 container create 587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:34:58 compute-0 systemd[1]: Started libpod-conmon-587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33.scope.
Oct 11 09:34:58 compute-0 podman[415290]: 2025-10-11 09:34:58.021133818 +0000 UTC m=+0.031611214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:34:58 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:34:58 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:34:58 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:34:58 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:34:58 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:34:58 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:34:58 compute-0 ceph-mon[74313]: pgmap v2799: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:34:58 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3877447729' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:34:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:34:58 compute-0 podman[415290]: 2025-10-11 09:34:58.169051562 +0000 UTC m=+0.179528968 container init 587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:34:58 compute-0 podman[415290]: 2025-10-11 09:34:58.185736743 +0000 UTC m=+0.196214049 container start 587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 09:34:58 compute-0 podman[415290]: 2025-10-11 09:34:58.188686256 +0000 UTC m=+0.199163642 container attach 587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 09:34:58 compute-0 systemd[1]: libpod-587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33.scope: Deactivated successfully.
Oct 11 09:34:58 compute-0 funny_cray[415315]: 167 167
Oct 11 09:34:58 compute-0 conmon[415315]: conmon 587f4fd5b29045fe9dae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33.scope/container/memory.events
Oct 11 09:34:58 compute-0 podman[415329]: 2025-10-11 09:34:58.266489472 +0000 UTC m=+0.044626521 container died 587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:34:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d609ff9df10e8670b6a0eed585fc4f51579f664abed24a56f176789fc25a0918-merged.mount: Deactivated successfully.
Oct 11 09:34:58 compute-0 podman[415329]: 2025-10-11 09:34:58.306531542 +0000 UTC m=+0.084668541 container remove 587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 09:34:58 compute-0 systemd[1]: libpod-conmon-587f4fd5b29045fe9daeeab296b2e948b736d6636bf8c66ce1ed2ec270e0ff33.scope: Deactivated successfully.
Oct 11 09:34:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:34:58 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/232724278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.426 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.431 2 DEBUG nova.virt.libvirt.vif [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:34:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1880442521',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1880442521',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=139,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCdfdb9R+zsDw6jenCtVzR++AIgDN5ehA4oPbzl8x/m8tfimOhF7LhQ9X4A3F+nSw3/eKnrfnA/yKanSmbwPXrSaCs1ORHouK4S19eKbXKNTi17aP7Q4amYlBjFlUm9aIA==',key_name='tempest-TestSecurityGroupsBasicOps-1403644569',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-8gi3qbv7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:34:50Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=a77fa566-1ab4-484c-b6ef-53471c9f91f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.432 2 DEBUG nova.network.os_vif_util [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.433 2 DEBUG nova.network.os_vif_util [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:c5:0a,bridge_name='br-int',has_traffic_filtering=True,id=75ae5f39-4616-4581-8e22-411bee7c1747,network=Network(f0d9de45-f93f-45ef-aa2e-7cd54a90600b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ae5f39-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.435 2 DEBUG nova.objects.instance [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid a77fa566-1ab4-484c-b6ef-53471c9f91f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.513 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:34:58 compute-0 nova_compute[260935]:   <uuid>a77fa566-1ab4-484c-b6ef-53471c9f91f8</uuid>
Oct 11 09:34:58 compute-0 nova_compute[260935]:   <name>instance-0000008b</name>
Oct 11 09:34:58 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:34:58 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:34:58 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1880442521</nova:name>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:34:57</nova:creationTime>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:34:58 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:34:58 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:34:58 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:34:58 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:34:58 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:34:58 compute-0 nova_compute[260935]:         <nova:user uuid="489c4d0457354f4684f8b9e53261224f">tempest-TestSecurityGroupsBasicOps-607770139-project-member</nova:user>
Oct 11 09:34:58 compute-0 nova_compute[260935]:         <nova:project uuid="81e7096f23df4e7d8782cf98d09d54e9">tempest-TestSecurityGroupsBasicOps-607770139</nova:project>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:34:58 compute-0 nova_compute[260935]:         <nova:port uuid="75ae5f39-4616-4581-8e22-411bee7c1747">
Oct 11 09:34:58 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:34:58 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:34:58 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <system>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <entry name="serial">a77fa566-1ab4-484c-b6ef-53471c9f91f8</entry>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <entry name="uuid">a77fa566-1ab4-484c-b6ef-53471c9f91f8</entry>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     </system>
Oct 11 09:34:58 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:34:58 compute-0 nova_compute[260935]:   <os>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:   </os>
Oct 11 09:34:58 compute-0 nova_compute[260935]:   <features>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:   </features>
Oct 11 09:34:58 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:34:58 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:34:58 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk">
Oct 11 09:34:58 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       </source>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:34:58 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk.config">
Oct 11 09:34:58 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       </source>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:34:58 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:db:c5:0a"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <target dev="tap75ae5f39-46"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8/console.log" append="off"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <video>
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     </video>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:34:58 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:34:58 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:34:58 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:34:58 compute-0 nova_compute[260935]: </domain>
Oct 11 09:34:58 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.513 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Preparing to wait for external event network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.513 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.514 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.514 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.514 2 DEBUG nova.virt.libvirt.vif [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:34:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1880442521',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1880442521',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=139,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCdfdb9R+zsDw6jenCtVzR++AIgDN5ehA4oPbzl8x/m8tfimOhF7LhQ9X4A3F+nSw3/eKnrfnA/yKanSmbwPXrSaCs1ORHouK4S19eKbXKNTi17aP7Q4amYlBjFlUm9aIA==',key_name='tempest-TestSecurityGroupsBasicOps-1403644569',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-8gi3qbv7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:34:50Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=a77fa566-1ab4-484c-b6ef-53471c9f91f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.515 2 DEBUG nova.network.os_vif_util [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.515 2 DEBUG nova.network.os_vif_util [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:c5:0a,bridge_name='br-int',has_traffic_filtering=True,id=75ae5f39-4616-4581-8e22-411bee7c1747,network=Network(f0d9de45-f93f-45ef-aa2e-7cd54a90600b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ae5f39-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.515 2 DEBUG os_vif [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:c5:0a,bridge_name='br-int',has_traffic_filtering=True,id=75ae5f39-4616-4581-8e22-411bee7c1747,network=Network(f0d9de45-f93f-45ef-aa2e-7cd54a90600b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ae5f39-46') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.516 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.521 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75ae5f39-46, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.521 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap75ae5f39-46, col_values=(('external_ids', {'iface-id': '75ae5f39-4616-4581-8e22-411bee7c1747', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:c5:0a', 'vm-uuid': 'a77fa566-1ab4-484c-b6ef-53471c9f91f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:58 compute-0 NetworkManager[44960]: <info>  [1760175298.5244] manager: (tap75ae5f39-46): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/596)
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.535 2 INFO os_vif [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:c5:0a,bridge_name='br-int',has_traffic_filtering=True,id=75ae5f39-4616-4581-8e22-411bee7c1747,network=Network(f0d9de45-f93f-45ef-aa2e-7cd54a90600b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ae5f39-46')
Oct 11 09:34:58 compute-0 podman[415352]: 2025-10-11 09:34:58.560929241 +0000 UTC m=+0.057524064 container create 519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lewin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 09:34:58 compute-0 systemd[1]: Started libpod-conmon-519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de.scope.
Oct 11 09:34:58 compute-0 podman[415352]: 2025-10-11 09:34:58.53927574 +0000 UTC m=+0.035870603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:34:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2fc7d2d37ee99efdb27f5a5ea6ac3aaf893b590c1f4359cfd6171a165cd843/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2fc7d2d37ee99efdb27f5a5ea6ac3aaf893b590c1f4359cfd6171a165cd843/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2fc7d2d37ee99efdb27f5a5ea6ac3aaf893b590c1f4359cfd6171a165cd843/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2fc7d2d37ee99efdb27f5a5ea6ac3aaf893b590c1f4359cfd6171a165cd843/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2fc7d2d37ee99efdb27f5a5ea6ac3aaf893b590c1f4359cfd6171a165cd843/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:34:58 compute-0 podman[415352]: 2025-10-11 09:34:58.664422682 +0000 UTC m=+0.161017535 container init 519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lewin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.668 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.668 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.668 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No VIF found with MAC fa:16:3e:db:c5:0a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.669 2 INFO nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Using config drive
Oct 11 09:34:58 compute-0 podman[415352]: 2025-10-11 09:34:58.679565539 +0000 UTC m=+0.176160402 container start 519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 09:34:58 compute-0 podman[415352]: 2025-10-11 09:34:58.684118038 +0000 UTC m=+0.180712901 container attach 519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lewin, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.706 2 DEBUG nova.storage.rbd_utils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.769 2 DEBUG nova.network.neutron [req-194e8d22-3c7a-4e35-aadb-81a85961cc62 req-9eb466f2-c47b-48ea-8d0a-b56a106bcd99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updated VIF entry in instance network info cache for port 75ae5f39-4616-4581-8e22-411bee7c1747. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.770 2 DEBUG nova.network.neutron [req-194e8d22-3c7a-4e35-aadb-81a85961cc62 req-9eb466f2-c47b-48ea-8d0a-b56a106bcd99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updating instance_info_cache with network_info: [{"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.792 2 DEBUG oslo_concurrency.lockutils [req-194e8d22-3c7a-4e35-aadb-81a85961cc62 req-9eb466f2-c47b-48ea-8d0a-b56a106bcd99 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:34:58 compute-0 nova_compute[260935]: 2025-10-11 09:34:58.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:59 compute-0 nova_compute[260935]: 2025-10-11 09:34:59.103 2 INFO nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Creating config drive at /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8/disk.config
Oct 11 09:34:59 compute-0 nova_compute[260935]: 2025-10-11 09:34:59.114 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpka2xt1qv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:34:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/232724278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:34:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2800: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:34:59 compute-0 nova_compute[260935]: 2025-10-11 09:34:59.284 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpka2xt1qv" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:34:59 compute-0 nova_compute[260935]: 2025-10-11 09:34:59.315 2 DEBUG nova.storage.rbd_utils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:34:59 compute-0 nova_compute[260935]: 2025-10-11 09:34:59.320 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8/disk.config a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:34:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:34:59 compute-0 nova_compute[260935]: 2025-10-11 09:34:59.507 2 DEBUG oslo_concurrency.processutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8/disk.config a77fa566-1ab4-484c-b6ef-53471c9f91f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:34:59 compute-0 nova_compute[260935]: 2025-10-11 09:34:59.509 2 INFO nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Deleting local config drive /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8/disk.config because it was imported into RBD.
Oct 11 09:34:59 compute-0 kernel: tap75ae5f39-46: entered promiscuous mode
Oct 11 09:34:59 compute-0 NetworkManager[44960]: <info>  [1760175299.5749] manager: (tap75ae5f39-46): new Tun device (/org/freedesktop/NetworkManager/Devices/597)
Oct 11 09:34:59 compute-0 nova_compute[260935]: 2025-10-11 09:34:59.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:59 compute-0 ovn_controller[152945]: 2025-10-11T09:34:59Z|01557|binding|INFO|Claiming lport 75ae5f39-4616-4581-8e22-411bee7c1747 for this chassis.
Oct 11 09:34:59 compute-0 ovn_controller[152945]: 2025-10-11T09:34:59Z|01558|binding|INFO|75ae5f39-4616-4581-8e22-411bee7c1747: Claiming fa:16:3e:db:c5:0a 10.100.0.14
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.593 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:c5:0a 10.100.0.14'], port_security=['fa:16:3e:db:c5:0a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a77fa566-1ab4-484c-b6ef-53471c9f91f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0d9de45-f93f-45ef-aa2e-7cd54a90600b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7c64e7e9-762c-40c5-8997-daeebe19175c c4774add-49ce-4194-9683-2bce3da69dec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2e43ada-251a-46ad-bf0a-b57c9e02b647, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=75ae5f39-4616-4581-8e22-411bee7c1747) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.594 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 75ae5f39-4616-4581-8e22-411bee7c1747 in datapath f0d9de45-f93f-45ef-aa2e-7cd54a90600b bound to our chassis
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.596 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0d9de45-f93f-45ef-aa2e-7cd54a90600b
Oct 11 09:34:59 compute-0 systemd-udevd[415465]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.613 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bc5a907f-5007-4918-adea-dd2c71f12e50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.614 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf0d9de45-f1 in ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:34:59 compute-0 systemd-machined[215705]: New machine qemu-163-instance-0000008b.
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.616 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf0d9de45-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.616 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[da27d94f-73ba-4572-8dcf-bc05d6c3298f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.617 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5c5ae0b6-7d9e-4639-9f99-8f23a28eecca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:59 compute-0 NetworkManager[44960]: <info>  [1760175299.6222] device (tap75ae5f39-46): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:34:59 compute-0 NetworkManager[44960]: <info>  [1760175299.6251] device (tap75ae5f39-46): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.632 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e26ff3d3-d449-4a95-9283-17cb30863f28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.656 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c83aecc5-5c19-4de0-b0e5-c0aecef7af3c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:59 compute-0 systemd[1]: Started Virtual Machine qemu-163-instance-0000008b.
Oct 11 09:34:59 compute-0 nova_compute[260935]: 2025-10-11 09:34:59.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:59 compute-0 ovn_controller[152945]: 2025-10-11T09:34:59Z|01559|binding|INFO|Setting lport 75ae5f39-4616-4581-8e22-411bee7c1747 ovn-installed in OVS
Oct 11 09:34:59 compute-0 ovn_controller[152945]: 2025-10-11T09:34:59Z|01560|binding|INFO|Setting lport 75ae5f39-4616-4581-8e22-411bee7c1747 up in Southbound
Oct 11 09:34:59 compute-0 nova_compute[260935]: 2025-10-11 09:34:59.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.693 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5dcba2b1-615c-4ac4-b3c7-a5f0f298b4ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.698 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7240c16b-1cde-4864-8577-075abba58bf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:59 compute-0 systemd-udevd[415469]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:34:59 compute-0 NetworkManager[44960]: <info>  [1760175299.7006] manager: (tapf0d9de45-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/598)
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.742 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e26833-d233-4806-bdb4-283d76841348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.752 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ef021c-a44d-486e-93ec-9b3703c866b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:59 compute-0 intelligent_lewin[415372]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:34:59 compute-0 intelligent_lewin[415372]: --> relative data size: 1.0
Oct 11 09:34:59 compute-0 intelligent_lewin[415372]: --> All data devices are unavailable
Oct 11 09:34:59 compute-0 NetworkManager[44960]: <info>  [1760175299.7824] device (tapf0d9de45-f0): carrier: link connected
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.791 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b985b2-6699-47a4-bbd8-5260626353cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:59 compute-0 systemd[1]: libpod-519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de.scope: Deactivated successfully.
Oct 11 09:34:59 compute-0 systemd[1]: libpod-519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de.scope: Consumed 1.055s CPU time.
Oct 11 09:34:59 compute-0 podman[415352]: 2025-10-11 09:34:59.798600338 +0000 UTC m=+1.295195181 container died 519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.821 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3cf028-5789-46dd-b591-16491b89d8e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0d9de45-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:9a:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 415], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714448, 'reachable_time': 34091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 415505, 'error': None, 'target': 'ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b2fc7d2d37ee99efdb27f5a5ea6ac3aaf893b590c1f4359cfd6171a165cd843-merged.mount: Deactivated successfully.
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.844 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a26042-a0ae-45ea-b8e9-8a053daa983d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:9a8e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714448, 'tstamp': 714448}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415513, 'error': None, 'target': 'ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.869 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cd13b368-eefb-45b6-bc27-3d595dd4ce7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0d9de45-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:9a:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 415], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714448, 'reachable_time': 34091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 415519, 'error': None, 'target': 'ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:59 compute-0 podman[415352]: 2025-10-11 09:34:59.885710117 +0000 UTC m=+1.382304970 container remove 519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lewin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 09:34:59 compute-0 systemd[1]: libpod-conmon-519f5c0eab9f7e736515cec7fd7199c7c4b6c3c5dcd7eec393f0b55dbdd0c4de.scope: Deactivated successfully.
Oct 11 09:34:59 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:34:59.926 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b13623bf-1dc4-4752-8ed0-67046e138f11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:34:59 compute-0 sudo[415185]: pam_unix(sudo:session): session closed for user root
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.012 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b404011b-5373-4773-a432-25c5f3fae272]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.015 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0d9de45-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.016 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:35:00 compute-0 sudo[415556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.018 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0d9de45-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:35:00 compute-0 sudo[415556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:35:00 compute-0 kernel: tapf0d9de45-f0: entered promiscuous mode
Oct 11 09:35:00 compute-0 NetworkManager[44960]: <info>  [1760175300.0213] manager: (tapf0d9de45-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/599)
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.024 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0d9de45-f0, col_values=(('external_ids', {'iface-id': '675f5b9f-9cb7-4132-af37-955cc69eba82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:35:00 compute-0 sudo[415556]: pam_unix(sudo:session): session closed for user root
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:00 compute-0 ovn_controller[152945]: 2025-10-11T09:35:00Z|01561|binding|INFO|Releasing lport 675f5b9f-9cb7-4132-af37-955cc69eba82 from this chassis (sb_readonly=0)
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.045 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f0d9de45-f93f-45ef-aa2e-7cd54a90600b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f0d9de45-f93f-45ef-aa2e-7cd54a90600b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.046 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[57516713-5a1b-4237-b214-5ef84a91c144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.047 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-f0d9de45-f93f-45ef-aa2e-7cd54a90600b
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/f0d9de45-f93f-45ef-aa2e-7cd54a90600b.pid.haproxy
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID f0d9de45-f93f-45ef-aa2e-7cd54a90600b
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:35:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:00.049 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b', 'env', 'PROCESS_TAG=haproxy-f0d9de45-f93f-45ef-aa2e-7cd54a90600b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f0d9de45-f93f-45ef-aa2e-7cd54a90600b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:35:00 compute-0 sudo[415592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:35:00 compute-0 sudo[415592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:35:00 compute-0 sudo[415592]: pam_unix(sudo:session): session closed for user root
Oct 11 09:35:00 compute-0 sudo[415621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:35:00 compute-0 sudo[415621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:35:00 compute-0 sudo[415621]: pam_unix(sudo:session): session closed for user root
Oct 11 09:35:00 compute-0 sudo[415646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:35:00 compute-0 sudo[415646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:35:00 compute-0 ceph-mon[74313]: pgmap v2800: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.491 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175300.4897466, a77fa566-1ab4-484c-b6ef-53471c9f91f8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.491 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] VM Started (Lifecycle Event)
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.536 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:35:00 compute-0 podman[415702]: 2025-10-11 09:35:00.445480354 +0000 UTC m=+0.040058032 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.542 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175300.4905643, a77fa566-1ab4-484c-b6ef-53471c9f91f8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.543 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] VM Paused (Lifecycle Event)
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.584 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.589 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.644 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:35:00 compute-0 podman[415702]: 2025-10-11 09:35:00.648667988 +0000 UTC m=+0.243245676 container create c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 09:35:00 compute-0 systemd[1]: Started libpod-conmon-c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37.scope.
Oct 11 09:35:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9779a9262745b18ce2d1c4f4694138a6bb395f088c211d26fdb2e153f47b00b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:35:00 compute-0 podman[415702]: 2025-10-11 09:35:00.760103183 +0000 UTC m=+0.354680861 container init c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 09:35:00 compute-0 podman[415702]: 2025-10-11 09:35:00.773613604 +0000 UTC m=+0.368191262 container start c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.802 2 DEBUG nova.compute.manager [req-4a69dd77-34c4-4f62-9db0-843c16f4014e req-1af745fe-ce00-4d2d-8247-dffe5fecab79 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received event network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.803 2 DEBUG oslo_concurrency.lockutils [req-4a69dd77-34c4-4f62-9db0-843c16f4014e req-1af745fe-ce00-4d2d-8247-dffe5fecab79 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.803 2 DEBUG oslo_concurrency.lockutils [req-4a69dd77-34c4-4f62-9db0-843c16f4014e req-1af745fe-ce00-4d2d-8247-dffe5fecab79 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.803 2 DEBUG oslo_concurrency.lockutils [req-4a69dd77-34c4-4f62-9db0-843c16f4014e req-1af745fe-ce00-4d2d-8247-dffe5fecab79 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.804 2 DEBUG nova.compute.manager [req-4a69dd77-34c4-4f62-9db0-843c16f4014e req-1af745fe-ce00-4d2d-8247-dffe5fecab79 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Processing event network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:35:00 compute-0 neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b[415736]: [NOTICE]   (415750) : New worker (415752) forked
Oct 11 09:35:00 compute-0 neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b[415736]: [NOTICE]   (415750) : Loading success.
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.805 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.811 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.812 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175300.812208, a77fa566-1ab4-484c-b6ef-53471c9f91f8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.812 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] VM Resumed (Lifecycle Event)
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.817 2 INFO nova.virt.libvirt.driver [-] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Instance spawned successfully.
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.817 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:35:00 compute-0 podman[415760]: 2025-10-11 09:35:00.872250548 +0000 UTC m=+0.046696209 container create eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.892 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.901 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:35:00 compute-0 systemd[1]: Started libpod-conmon-eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e.scope.
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.936 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.937 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.937 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.938 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.939 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.940 2 DEBUG nova.virt.libvirt.driver [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:35:00 compute-0 podman[415760]: 2025-10-11 09:35:00.851841102 +0000 UTC m=+0.026286773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:35:00 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:35:00 compute-0 nova_compute[260935]: 2025-10-11 09:35:00.968 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:35:00 compute-0 podman[415760]: 2025-10-11 09:35:00.982763346 +0000 UTC m=+0.157209017 container init eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 09:35:00 compute-0 podman[415760]: 2025-10-11 09:35:00.991900144 +0000 UTC m=+0.166345805 container start eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:35:00 compute-0 podman[415760]: 2025-10-11 09:35:00.99566606 +0000 UTC m=+0.170111741 container attach eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:35:00 compute-0 strange_bose[415774]: 167 167
Oct 11 09:35:00 compute-0 systemd[1]: libpod-eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e.scope: Deactivated successfully.
Oct 11 09:35:01 compute-0 podman[415760]: 2025-10-11 09:35:01.000028844 +0000 UTC m=+0.174474515 container died eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:35:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6482c6a029adf5b69ec7e53358f2731c232ca04d8091d470ffcc292db3d134b8-merged.mount: Deactivated successfully.
Oct 11 09:35:01 compute-0 podman[415760]: 2025-10-11 09:35:01.054444079 +0000 UTC m=+0.228889760 container remove eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:35:01 compute-0 nova_compute[260935]: 2025-10-11 09:35:01.059 2 INFO nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Took 10.50 seconds to spawn the instance on the hypervisor.
Oct 11 09:35:01 compute-0 nova_compute[260935]: 2025-10-11 09:35:01.061 2 DEBUG nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:35:01 compute-0 systemd[1]: libpod-conmon-eaa884e06349883c544d7e89173d34e06b9868409fea14fbd96af882da7ac99e.scope: Deactivated successfully.
Oct 11 09:35:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2801: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:35:01 compute-0 nova_compute[260935]: 2025-10-11 09:35:01.300 2 INFO nova.compute.manager [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Took 12.27 seconds to build instance.
Oct 11 09:35:01 compute-0 podman[415802]: 2025-10-11 09:35:01.303155868 +0000 UTC m=+0.060284442 container create 0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 09:35:01 compute-0 podman[415802]: 2025-10-11 09:35:01.272207045 +0000 UTC m=+0.029335709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:35:01 compute-0 systemd[1]: Started libpod-conmon-0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60.scope.
Oct 11 09:35:01 compute-0 nova_compute[260935]: 2025-10-11 09:35:01.388 2 DEBUG oslo_concurrency.lockutils [None req-ca6f4263-76a1-4d3c-87e5-55f3dd57291a 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:35:01 compute-0 ceph-mon[74313]: pgmap v2801: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:35:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00390b82c6358f05b1db82df8f39e2e647f140cb8514638b162a39ebf758746c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00390b82c6358f05b1db82df8f39e2e647f140cb8514638b162a39ebf758746c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00390b82c6358f05b1db82df8f39e2e647f140cb8514638b162a39ebf758746c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00390b82c6358f05b1db82df8f39e2e647f140cb8514638b162a39ebf758746c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:35:01 compute-0 podman[415802]: 2025-10-11 09:35:01.441595485 +0000 UTC m=+0.198724149 container init 0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:35:01 compute-0 podman[415802]: 2025-10-11 09:35:01.455975821 +0000 UTC m=+0.213104405 container start 0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 09:35:01 compute-0 podman[415802]: 2025-10-11 09:35:01.459794519 +0000 UTC m=+0.216923183 container attach 0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]: {
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:     "0": [
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:         {
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "devices": [
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "/dev/loop3"
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             ],
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "lv_name": "ceph_lv0",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "lv_size": "21470642176",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "name": "ceph_lv0",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "tags": {
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.cluster_name": "ceph",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.crush_device_class": "",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.encrypted": "0",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.osd_id": "0",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.type": "block",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.vdo": "0"
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             },
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "type": "block",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "vg_name": "ceph_vg0"
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:         }
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:     ],
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:     "1": [
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:         {
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "devices": [
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "/dev/loop4"
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             ],
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "lv_name": "ceph_lv1",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "lv_size": "21470642176",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "name": "ceph_lv1",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "tags": {
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.cluster_name": "ceph",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.crush_device_class": "",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.encrypted": "0",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.osd_id": "1",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.type": "block",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.vdo": "0"
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             },
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "type": "block",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "vg_name": "ceph_vg1"
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:         }
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:     ],
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:     "2": [
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:         {
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "devices": [
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "/dev/loop5"
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             ],
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "lv_name": "ceph_lv2",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "lv_size": "21470642176",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "name": "ceph_lv2",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "tags": {
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.cluster_name": "ceph",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.crush_device_class": "",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.encrypted": "0",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.osd_id": "2",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.type": "block",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:                 "ceph.vdo": "0"
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             },
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "type": "block",
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:             "vg_name": "ceph_vg2"
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:         }
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]:     ]
Oct 11 09:35:02 compute-0 thirsty_shirley[415819]: }
Oct 11 09:35:02 compute-0 systemd[1]: libpod-0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60.scope: Deactivated successfully.
Oct 11 09:35:02 compute-0 podman[415802]: 2025-10-11 09:35:02.369848391 +0000 UTC m=+1.126976975 container died 0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 09:35:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-00390b82c6358f05b1db82df8f39e2e647f140cb8514638b162a39ebf758746c-merged.mount: Deactivated successfully.
Oct 11 09:35:02 compute-0 podman[415802]: 2025-10-11 09:35:02.426419728 +0000 UTC m=+1.183548312 container remove 0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:35:02 compute-0 systemd[1]: libpod-conmon-0fe18510815923d1ed531df3faf2cd413a545e97b95f33929f863b1316b60e60.scope: Deactivated successfully.
Oct 11 09:35:02 compute-0 sudo[415646]: pam_unix(sudo:session): session closed for user root
Oct 11 09:35:02 compute-0 sudo[415842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:35:02 compute-0 sudo[415842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:35:02 compute-0 sudo[415842]: pam_unix(sudo:session): session closed for user root
Oct 11 09:35:02 compute-0 sudo[415867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:35:02 compute-0 sudo[415867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:35:02 compute-0 sudo[415867]: pam_unix(sudo:session): session closed for user root
Oct 11 09:35:02 compute-0 sudo[415892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:35:02 compute-0 sudo[415892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:35:02 compute-0 sudo[415892]: pam_unix(sudo:session): session closed for user root
Oct 11 09:35:02 compute-0 sudo[415917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:35:02 compute-0 sudo[415917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:35:02 compute-0 nova_compute[260935]: 2025-10-11 09:35:02.875 2 DEBUG nova.compute.manager [req-46c20a59-164a-4a5c-86b1-59d9331466dc req-c23e8d91-040e-494a-97be-0503c79b62b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received event network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:35:02 compute-0 nova_compute[260935]: 2025-10-11 09:35:02.876 2 DEBUG oslo_concurrency.lockutils [req-46c20a59-164a-4a5c-86b1-59d9331466dc req-c23e8d91-040e-494a-97be-0503c79b62b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:35:02 compute-0 nova_compute[260935]: 2025-10-11 09:35:02.876 2 DEBUG oslo_concurrency.lockutils [req-46c20a59-164a-4a5c-86b1-59d9331466dc req-c23e8d91-040e-494a-97be-0503c79b62b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:35:02 compute-0 nova_compute[260935]: 2025-10-11 09:35:02.877 2 DEBUG oslo_concurrency.lockutils [req-46c20a59-164a-4a5c-86b1-59d9331466dc req-c23e8d91-040e-494a-97be-0503c79b62b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:35:02 compute-0 nova_compute[260935]: 2025-10-11 09:35:02.877 2 DEBUG nova.compute.manager [req-46c20a59-164a-4a5c-86b1-59d9331466dc req-c23e8d91-040e-494a-97be-0503c79b62b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] No waiting events found dispatching network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:35:02 compute-0 nova_compute[260935]: 2025-10-11 09:35:02.877 2 WARNING nova.compute.manager [req-46c20a59-164a-4a5c-86b1-59d9331466dc req-c23e8d91-040e-494a-97be-0503c79b62b6 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received unexpected event network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 for instance with vm_state active and task_state None.
Oct 11 09:35:03 compute-0 podman[415982]: 2025-10-11 09:35:03.161204763 +0000 UTC m=+0.052638806 container create 44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 09:35:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2802: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:35:03 compute-0 systemd[1]: Started libpod-conmon-44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794.scope.
Oct 11 09:35:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:35:03 compute-0 podman[415982]: 2025-10-11 09:35:03.137249488 +0000 UTC m=+0.028683611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:35:03 compute-0 ceph-mon[74313]: pgmap v2802: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:35:03 compute-0 podman[415982]: 2025-10-11 09:35:03.249115014 +0000 UTC m=+0.140549097 container init 44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:35:03 compute-0 podman[415982]: 2025-10-11 09:35:03.260860105 +0000 UTC m=+0.152294138 container start 44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 09:35:03 compute-0 podman[415982]: 2025-10-11 09:35:03.264050695 +0000 UTC m=+0.155484778 container attach 44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:35:03 compute-0 sweet_pasteur[415998]: 167 167
Oct 11 09:35:03 compute-0 systemd[1]: libpod-44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794.scope: Deactivated successfully.
Oct 11 09:35:03 compute-0 podman[415982]: 2025-10-11 09:35:03.270526748 +0000 UTC m=+0.161960811 container died 44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:35:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3747d1b85103fb8f66ad5d8a3e00746311387dfea762220c3b2c678bf3a4c3db-merged.mount: Deactivated successfully.
Oct 11 09:35:03 compute-0 podman[415982]: 2025-10-11 09:35:03.317860944 +0000 UTC m=+0.209295007 container remove 44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:35:03 compute-0 systemd[1]: libpod-conmon-44d2cf2e2a7ed3798b696cf8c58a555f3afa9fd025d32ab5719110df599b8794.scope: Deactivated successfully.
Oct 11 09:35:03 compute-0 nova_compute[260935]: 2025-10-11 09:35:03.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:03 compute-0 podman[416023]: 2025-10-11 09:35:03.580580478 +0000 UTC m=+0.068834673 container create 3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:35:03 compute-0 systemd[1]: Started libpod-conmon-3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7.scope.
Oct 11 09:35:03 compute-0 podman[416023]: 2025-10-11 09:35:03.554222384 +0000 UTC m=+0.042476629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:35:03 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa09d7b1a60817f59a6eda79d2302104d33916544975fdc5069bc26fb5c6142/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa09d7b1a60817f59a6eda79d2302104d33916544975fdc5069bc26fb5c6142/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa09d7b1a60817f59a6eda79d2302104d33916544975fdc5069bc26fb5c6142/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa09d7b1a60817f59a6eda79d2302104d33916544975fdc5069bc26fb5c6142/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:35:03 compute-0 podman[416023]: 2025-10-11 09:35:03.707117519 +0000 UTC m=+0.195371704 container init 3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:35:03 compute-0 podman[416023]: 2025-10-11 09:35:03.719742406 +0000 UTC m=+0.207996561 container start 3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 11 09:35:03 compute-0 podman[416023]: 2025-10-11 09:35:03.723924474 +0000 UTC m=+0.212178699 container attach 3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:35:03 compute-0 nova_compute[260935]: 2025-10-11 09:35:03.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.661 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]: {
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:         "osd_id": 2,
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:         "type": "bluestore"
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:     },
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:         "osd_id": 0,
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:         "type": "bluestore"
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:     },
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:         "osd_id": 1,
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:         "type": "bluestore"
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]:     }
Oct 11 09:35:04 compute-0 hardcore_mendel[416039]: }
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.735 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.736 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid b75d8ded-515b-48ff-a6b6-28df88878996 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.737 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 52be16b4-343a-4fd4-9041-39069a1fde2a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.737 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid a77fa566-1ab4-484c-b6ef-53471c9f91f8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.739 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.739 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.740 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.741 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.742 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.742 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.743 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:35:04 compute-0 systemd[1]: libpod-3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7.scope: Deactivated successfully.
Oct 11 09:35:04 compute-0 systemd[1]: libpod-3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7.scope: Consumed 1.043s CPU time.
Oct 11 09:35:04 compute-0 conmon[416039]: conmon 3529affaadc4c16e0d9a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7.scope/container/memory.events
Oct 11 09:35:04 compute-0 podman[416023]: 2025-10-11 09:35:04.774966135 +0000 UTC m=+1.263220320 container died 3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:04 compute-0 NetworkManager[44960]: <info>  [1760175304.7769] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/600)
Oct 11 09:35:04 compute-0 ovn_controller[152945]: 2025-10-11T09:35:04Z|01562|binding|INFO|Releasing lport 675f5b9f-9cb7-4132-af37-955cc69eba82 from this chassis (sb_readonly=0)
Oct 11 09:35:04 compute-0 ovn_controller[152945]: 2025-10-11T09:35:04Z|01563|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:35:04 compute-0 ovn_controller[152945]: 2025-10-11T09:35:04Z|01564|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:35:04 compute-0 NetworkManager[44960]: <info>  [1760175304.7787] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/601)
Oct 11 09:35:04 compute-0 podman[416069]: 2025-10-11 09:35:04.788424505 +0000 UTC m=+0.087761278 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.809 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-daa09d7b1a60817f59a6eda79d2302104d33916544975fdc5069bc26fb5c6142-merged.mount: Deactivated successfully.
Oct 11 09:35:04 compute-0 ovn_controller[152945]: 2025-10-11T09:35:04Z|01565|binding|INFO|Releasing lport 675f5b9f-9cb7-4132-af37-955cc69eba82 from this chassis (sb_readonly=0)
Oct 11 09:35:04 compute-0 ovn_controller[152945]: 2025-10-11T09:35:04Z|01566|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:35:04 compute-0 ovn_controller[152945]: 2025-10-11T09:35:04Z|01567|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.843 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.844 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:35:04 compute-0 nova_compute[260935]: 2025-10-11 09:35:04.844 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:35:04 compute-0 podman[416023]: 2025-10-11 09:35:04.857153794 +0000 UTC m=+1.345407959 container remove 3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mendel, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:35:04 compute-0 systemd[1]: libpod-conmon-3529affaadc4c16e0d9a832ebc2b8258ab58a2214aeedf1987ccba63d75997e7.scope: Deactivated successfully.
Oct 11 09:35:04 compute-0 sudo[415917]: pam_unix(sudo:session): session closed for user root
Oct 11 09:35:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:35:04 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:35:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:35:04 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:35:04 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 91940d04-9b03-48b7-aca5-5d115dc358bb does not exist
Oct 11 09:35:04 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a4385df5-6be6-49af-a551-faef9c3549b9 does not exist
Oct 11 09:35:05 compute-0 sudo[416106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:35:05 compute-0 sudo[416106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:35:05 compute-0 sudo[416106]: pam_unix(sudo:session): session closed for user root
Oct 11 09:35:05 compute-0 sudo[416131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:35:05 compute-0 sudo[416131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:35:05 compute-0 sudo[416131]: pam_unix(sudo:session): session closed for user root
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2803: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:35:05 compute-0 nova_compute[260935]: 2025-10-11 09:35:05.219 2 DEBUG nova.compute.manager [req-f07464da-f467-4ef7-8a77-728a45062dc1 req-409aa333-9aa1-42df-85ae-f5a98ef4d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received event network-changed-75ae5f39-4616-4581-8e22-411bee7c1747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:35:05 compute-0 nova_compute[260935]: 2025-10-11 09:35:05.221 2 DEBUG nova.compute.manager [req-f07464da-f467-4ef7-8a77-728a45062dc1 req-409aa333-9aa1-42df-85ae-f5a98ef4d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Refreshing instance network info cache due to event network-changed-75ae5f39-4616-4581-8e22-411bee7c1747. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:35:05 compute-0 nova_compute[260935]: 2025-10-11 09:35:05.221 2 DEBUG oslo_concurrency.lockutils [req-f07464da-f467-4ef7-8a77-728a45062dc1 req-409aa333-9aa1-42df-85ae-f5a98ef4d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:35:05 compute-0 nova_compute[260935]: 2025-10-11 09:35:05.222 2 DEBUG oslo_concurrency.lockutils [req-f07464da-f467-4ef7-8a77-728a45062dc1 req-409aa333-9aa1-42df-85ae-f5a98ef4d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:35:05 compute-0 nova_compute[260935]: 2025-10-11 09:35:05.222 2 DEBUG nova.network.neutron [req-f07464da-f467-4ef7-8a77-728a45062dc1 req-409aa333-9aa1-42df-85ae-f5a98ef4d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Refreshing network info cache for port 75ae5f39-4616-4581-8e22-411bee7c1747 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:35:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:35:05 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:35:05 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:35:05 compute-0 ceph-mon[74313]: pgmap v2803: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:35:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2804: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:35:07 compute-0 ceph-mon[74313]: pgmap v2804: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:35:08 compute-0 nova_compute[260935]: 2025-10-11 09:35:08.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:08 compute-0 nova_compute[260935]: 2025-10-11 09:35:08.613 2 DEBUG nova.network.neutron [req-f07464da-f467-4ef7-8a77-728a45062dc1 req-409aa333-9aa1-42df-85ae-f5a98ef4d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updated VIF entry in instance network info cache for port 75ae5f39-4616-4581-8e22-411bee7c1747. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:35:08 compute-0 nova_compute[260935]: 2025-10-11 09:35:08.613 2 DEBUG nova.network.neutron [req-f07464da-f467-4ef7-8a77-728a45062dc1 req-409aa333-9aa1-42df-85ae-f5a98ef4d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updating instance_info_cache with network_info: [{"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:35:08 compute-0 nova_compute[260935]: 2025-10-11 09:35:08.638 2 DEBUG oslo_concurrency.lockutils [req-f07464da-f467-4ef7-8a77-728a45062dc1 req-409aa333-9aa1-42df-85ae-f5a98ef4d69e e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:35:08 compute-0 podman[416156]: 2025-10-11 09:35:08.782138938 +0000 UTC m=+0.075457943 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 11 09:35:08 compute-0 podman[416157]: 2025-10-11 09:35:08.818139121 +0000 UTC m=+0.111968631 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 09:35:08 compute-0 nova_compute[260935]: 2025-10-11 09:35:08.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2805: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:35:09 compute-0 ceph-mon[74313]: pgmap v2805: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:35:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:35:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2806: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:35:11 compute-0 ceph-mon[74313]: pgmap v2806: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:35:12 compute-0 ovn_controller[152945]: 2025-10-11T09:35:12Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:db:c5:0a 10.100.0.14
Oct 11 09:35:12 compute-0 ovn_controller[152945]: 2025-10-11T09:35:12Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:db:c5:0a 10.100.0.14
Oct 11 09:35:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2807: 321 pgs: 321 active+clean; 399 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Oct 11 09:35:13 compute-0 ceph-mon[74313]: pgmap v2807: 321 pgs: 321 active+clean; 399 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Oct 11 09:35:13 compute-0 nova_compute[260935]: 2025-10-11 09:35:13.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:13 compute-0 nova_compute[260935]: 2025-10-11 09:35:13.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:35:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2808: 321 pgs: 321 active+clean; 399 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.0 MiB/s wr, 49 op/s
Oct 11 09:35:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:15.231 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:35:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:15.232 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:35:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:15.233 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:35:15 compute-0 ceph-mon[74313]: pgmap v2808: 321 pgs: 321 active+clean; 399 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.0 MiB/s wr, 49 op/s
Oct 11 09:35:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2809: 321 pgs: 321 active+clean; 399 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.0 MiB/s wr, 49 op/s
Oct 11 09:35:17 compute-0 ceph-mon[74313]: pgmap v2809: 321 pgs: 321 active+clean; 399 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.0 MiB/s wr, 49 op/s
Oct 11 09:35:18 compute-0 nova_compute[260935]: 2025-10-11 09:35:18.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:18 compute-0 nova_compute[260935]: 2025-10-11 09:35:18.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2810: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:35:19 compute-0 ceph-mon[74313]: pgmap v2810: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:35:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:35:20 compute-0 sshd-session[416200]: Invalid user mysql from 165.232.82.252 port 43908
Oct 11 09:35:20 compute-0 sshd-session[416200]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:35:20 compute-0 sshd-session[416200]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:35:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2811: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:35:21 compute-0 ceph-mon[74313]: pgmap v2811: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:35:22 compute-0 sshd-session[416200]: Failed password for invalid user mysql from 165.232.82.252 port 43908 ssh2
Oct 11 09:35:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2812: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:35:23 compute-0 ceph-mon[74313]: pgmap v2812: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:35:23 compute-0 sshd-session[416200]: Connection closed by invalid user mysql 165.232.82.252 port 43908 [preauth]
Oct 11 09:35:23 compute-0 nova_compute[260935]: 2025-10-11 09:35:23.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:23 compute-0 nova_compute[260935]: 2025-10-11 09:35:23.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:35:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:24.529 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:35:24 compute-0 nova_compute[260935]: 2025-10-11 09:35:24.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:24.531 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:35:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:24.533 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:35:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:35:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:35:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:35:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:35:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:35:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:35:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2813: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 103 KiB/s wr, 14 op/s
Oct 11 09:35:25 compute-0 ceph-mon[74313]: pgmap v2813: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 103 KiB/s wr, 14 op/s
Oct 11 09:35:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:35:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/50139011' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:35:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:35:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/50139011' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:35:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/50139011' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:35:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/50139011' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:35:26 compute-0 podman[416202]: 2025-10-11 09:35:26.769332653 +0000 UTC m=+0.067323843 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent)
Oct 11 09:35:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2814: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 103 KiB/s wr, 14 op/s
Oct 11 09:35:27 compute-0 nova_compute[260935]: 2025-10-11 09:35:27.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:27 compute-0 ceph-mon[74313]: pgmap v2814: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 103 KiB/s wr, 14 op/s
Oct 11 09:35:27 compute-0 nova_compute[260935]: 2025-10-11 09:35:27.785 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:35:28 compute-0 nova_compute[260935]: 2025-10-11 09:35:28.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:28 compute-0 nova_compute[260935]: 2025-10-11 09:35:28.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2815: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 103 KiB/s wr, 14 op/s
Oct 11 09:35:29 compute-0 ceph-mon[74313]: pgmap v2815: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 103 KiB/s wr, 14 op/s
Oct 11 09:35:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:35:31 compute-0 nova_compute[260935]: 2025-10-11 09:35:31.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2816: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 11 09:35:31 compute-0 ceph-mon[74313]: pgmap v2816: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct 11 09:35:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2817: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Oct 11 09:35:33 compute-0 ceph-mon[74313]: pgmap v2817: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Oct 11 09:35:33 compute-0 nova_compute[260935]: 2025-10-11 09:35:33.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:33 compute-0 nova_compute[260935]: 2025-10-11 09:35:33.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:35:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2818: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:35:35 compute-0 ceph-mon[74313]: pgmap v2818: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:35:35 compute-0 podman[416222]: 2025-10-11 09:35:35.796385883 +0000 UTC m=+0.093421734 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:35:36 compute-0 nova_compute[260935]: 2025-10-11 09:35:36.889 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "8d35e49e-efca-4621-a6f7-de650e5272fd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:35:36 compute-0 nova_compute[260935]: 2025-10-11 09:35:36.890 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:35:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2819: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:35:37 compute-0 nova_compute[260935]: 2025-10-11 09:35:37.244 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:35:37 compute-0 ceph-mon[74313]: pgmap v2819: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:35:37 compute-0 nova_compute[260935]: 2025-10-11 09:35:37.618 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:35:37 compute-0 nova_compute[260935]: 2025-10-11 09:35:37.618 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:35:37 compute-0 nova_compute[260935]: 2025-10-11 09:35:37.628 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:35:37 compute-0 nova_compute[260935]: 2025-10-11 09:35:37.628 2 INFO nova.compute.claims [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:35:38 compute-0 nova_compute[260935]: 2025-10-11 09:35:38.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:38 compute-0 nova_compute[260935]: 2025-10-11 09:35:38.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2820: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:35:39 compute-0 ceph-mon[74313]: pgmap v2820: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:35:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:35:39 compute-0 podman[416243]: 2025-10-11 09:35:39.815459307 +0000 UTC m=+0.105807436 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 11 09:35:39 compute-0 podman[416244]: 2025-10-11 09:35:39.89621396 +0000 UTC m=+0.183229474 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:35:40 compute-0 nova_compute[260935]: 2025-10-11 09:35:40.810 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:35:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2821: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:35:41 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:35:41 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3238120510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:35:41 compute-0 ceph-mon[74313]: pgmap v2821: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:35:41 compute-0 nova_compute[260935]: 2025-10-11 09:35:41.273 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:35:41 compute-0 nova_compute[260935]: 2025-10-11 09:35:41.281 2 DEBUG nova.compute.provider_tree [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:35:41 compute-0 nova_compute[260935]: 2025-10-11 09:35:41.305 2 DEBUG nova.scheduler.client.report [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:35:42 compute-0 nova_compute[260935]: 2025-10-11 09:35:42.206 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 4.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:35:42 compute-0 nova_compute[260935]: 2025-10-11 09:35:42.207 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:35:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3238120510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:35:42 compute-0 nova_compute[260935]: 2025-10-11 09:35:42.487 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:35:42 compute-0 nova_compute[260935]: 2025-10-11 09:35:42.487 2 DEBUG nova.network.neutron [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:35:42 compute-0 nova_compute[260935]: 2025-10-11 09:35:42.685 2 INFO nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:35:43 compute-0 nova_compute[260935]: 2025-10-11 09:35:43.138 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:35:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2822: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:35:43 compute-0 ceph-mon[74313]: pgmap v2822: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:35:43 compute-0 nova_compute[260935]: 2025-10-11 09:35:43.581 2 DEBUG nova.policy [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14455fee241a4032a1491acddf675e59', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ae66871ba5094dd49475fed96fb24be2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:35:43 compute-0 nova_compute[260935]: 2025-10-11 09:35:43.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:43 compute-0 nova_compute[260935]: 2025-10-11 09:35:43.644 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:35:43 compute-0 nova_compute[260935]: 2025-10-11 09:35:43.646 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:35:43 compute-0 nova_compute[260935]: 2025-10-11 09:35:43.647 2 INFO nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Creating image(s)
Oct 11 09:35:43 compute-0 nova_compute[260935]: 2025-10-11 09:35:43.687 2 DEBUG nova.storage.rbd_utils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] rbd image 8d35e49e-efca-4621-a6f7-de650e5272fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:35:43 compute-0 nova_compute[260935]: 2025-10-11 09:35:43.725 2 DEBUG nova.storage.rbd_utils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] rbd image 8d35e49e-efca-4621-a6f7-de650e5272fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:35:43 compute-0 nova_compute[260935]: 2025-10-11 09:35:43.765 2 DEBUG nova.storage.rbd_utils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] rbd image 8d35e49e-efca-4621-a6f7-de650e5272fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:35:43 compute-0 nova_compute[260935]: 2025-10-11 09:35:43.770 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:35:43 compute-0 nova_compute[260935]: 2025-10-11 09:35:43.882 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:35:43 compute-0 nova_compute[260935]: 2025-10-11 09:35:43.884 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:35:43 compute-0 nova_compute[260935]: 2025-10-11 09:35:43.885 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:35:43 compute-0 nova_compute[260935]: 2025-10-11 09:35:43.885 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:35:43 compute-0 nova_compute[260935]: 2025-10-11 09:35:43.922 2 DEBUG nova.storage.rbd_utils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] rbd image 8d35e49e-efca-4621-a6f7-de650e5272fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:35:43 compute-0 nova_compute[260935]: 2025-10-11 09:35:43.928 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8d35e49e-efca-4621-a6f7-de650e5272fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:35:43 compute-0 nova_compute[260935]: 2025-10-11 09:35:43.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:44 compute-0 nova_compute[260935]: 2025-10-11 09:35:44.238 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 8d35e49e-efca-4621-a6f7-de650e5272fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:35:44 compute-0 nova_compute[260935]: 2025-10-11 09:35:44.337 2 DEBUG nova.storage.rbd_utils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] resizing rbd image 8d35e49e-efca-4621-a6f7-de650e5272fd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:35:44 compute-0 nova_compute[260935]: 2025-10-11 09:35:44.461 2 DEBUG nova.objects.instance [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lazy-loading 'migration_context' on Instance uuid 8d35e49e-efca-4621-a6f7-de650e5272fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:35:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:35:44 compute-0 nova_compute[260935]: 2025-10-11 09:35:44.636 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:35:44 compute-0 nova_compute[260935]: 2025-10-11 09:35:44.637 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Ensure instance console log exists: /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:35:44 compute-0 nova_compute[260935]: 2025-10-11 09:35:44.638 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:35:44 compute-0 nova_compute[260935]: 2025-10-11 09:35:44.638 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:35:44 compute-0 nova_compute[260935]: 2025-10-11 09:35:44.639 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:35:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2823: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:35:45 compute-0 ceph-mon[74313]: pgmap v2823: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:35:45 compute-0 nova_compute[260935]: 2025-10-11 09:35:45.454 2 DEBUG nova.network.neutron [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Successfully created port: 28067a9e-43c9-4f82-aeb3-002926889b4b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:35:46 compute-0 nova_compute[260935]: 2025-10-11 09:35:46.555 2 DEBUG nova.network.neutron [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Successfully updated port: 28067a9e-43c9-4f82-aeb3-002926889b4b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:35:46 compute-0 nova_compute[260935]: 2025-10-11 09:35:46.677 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:35:46 compute-0 nova_compute[260935]: 2025-10-11 09:35:46.677 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquired lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:35:46 compute-0 nova_compute[260935]: 2025-10-11 09:35:46.678 2 DEBUG nova.network.neutron [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:35:46 compute-0 nova_compute[260935]: 2025-10-11 09:35:46.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:35:46 compute-0 nova_compute[260935]: 2025-10-11 09:35:46.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:35:46 compute-0 nova_compute[260935]: 2025-10-11 09:35:46.789 2 DEBUG nova.compute.manager [req-66e1ed7f-e399-4bcd-b696-f92e0395f4e2 req-a3349235-6f62-4a71-b5ca-64933f2939fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-changed-28067a9e-43c9-4f82-aeb3-002926889b4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:35:46 compute-0 nova_compute[260935]: 2025-10-11 09:35:46.790 2 DEBUG nova.compute.manager [req-66e1ed7f-e399-4bcd-b696-f92e0395f4e2 req-a3349235-6f62-4a71-b5ca-64933f2939fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Refreshing instance network info cache due to event network-changed-28067a9e-43c9-4f82-aeb3-002926889b4b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:35:46 compute-0 nova_compute[260935]: 2025-10-11 09:35:46.790 2 DEBUG oslo_concurrency.lockutils [req-66e1ed7f-e399-4bcd-b696-f92e0395f4e2 req-a3349235-6f62-4a71-b5ca-64933f2939fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:35:46 compute-0 nova_compute[260935]: 2025-10-11 09:35:46.959 2 DEBUG nova.network.neutron [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:35:46 compute-0 nova_compute[260935]: 2025-10-11 09:35:46.976 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:35:46 compute-0 nova_compute[260935]: 2025-10-11 09:35:46.976 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:35:46 compute-0 nova_compute[260935]: 2025-10-11 09:35:46.976 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:35:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2824: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:35:47 compute-0 ceph-mon[74313]: pgmap v2824: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.560 2 DEBUG nova.network.neutron [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Updating instance_info_cache with network_info: [{"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.754 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Releasing lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.754 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Instance network_info: |[{"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.755 2 DEBUG oslo_concurrency.lockutils [req-66e1ed7f-e399-4bcd-b696-f92e0395f4e2 req-a3349235-6f62-4a71-b5ca-64933f2939fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.755 2 DEBUG nova.network.neutron [req-66e1ed7f-e399-4bcd-b696-f92e0395f4e2 req-a3349235-6f62-4a71-b5ca-64933f2939fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Refreshing network info cache for port 28067a9e-43c9-4f82-aeb3-002926889b4b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.758 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Start _get_guest_xml network_info=[{"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.762 2 WARNING nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.774 2 DEBUG nova.virt.libvirt.host [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.775 2 DEBUG nova.virt.libvirt.host [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.778 2 DEBUG nova.virt.libvirt.host [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.779 2 DEBUG nova.virt.libvirt.host [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.779 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.780 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.780 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.780 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.781 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.781 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.781 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.781 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.781 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.782 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.782 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.782 2 DEBUG nova.virt.hardware [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.785 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:35:48 compute-0 nova_compute[260935]: 2025-10-11 09:35:48.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.034 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.103 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.103 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.104 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.104 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.105 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.105 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.106 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:35:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2825: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:35:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:35:49 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4021734379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.217 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.247 2 DEBUG nova.storage.rbd_utils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] rbd image 8d35e49e-efca-4621-a6f7-de650e5272fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:35:49 compute-0 ceph-mon[74313]: pgmap v2825: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:35:49 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4021734379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.262 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:35:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:35:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:35:49 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/239988064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.705 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.707 2 DEBUG nova.virt.libvirt.vif [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:35:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-88042792-access_point-478778867',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-88042792-access_point-478778867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-88042792-acce',id=140,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFTq8OGvKaz+LMA/PcV/K58nrpAVi7kg5IPqRwY1XCZgpgNS8omJcJt4bFk7y+YWXHh9knbjh0T2FJN0u/vwytRbPYrGFElF+LT6MJEgQPvTD/i1PmkAieU+uwu1S/cEGA==',key_name='tempest-TestSecurityGroupsBasicOps-1768444573',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae66871ba5094dd49475fed96fb24be2',ramdisk_id='',reservation_id='r-zleps1oi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-88042792',owner_user_name='tempest-TestSecurityGroupsBasicOps-88042792-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:35:43Z,user_data=None,user_id='14455fee241a4032a1491acddf675e59',uuid=8d35e49e-efca-4621-a6f7-de650e5272fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.708 2 DEBUG nova.network.os_vif_util [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Converting VIF {"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.709 2 DEBUG nova.network.os_vif_util [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:96:3a,bridge_name='br-int',has_traffic_filtering=True,id=28067a9e-43c9-4f82-aeb3-002926889b4b,network=Network(617f2647-1180-40d5-ae2e-3b28298acf26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28067a9e-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.711 2 DEBUG nova.objects.instance [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d35e49e-efca-4621-a6f7-de650e5272fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.800 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:35:49 compute-0 nova_compute[260935]:   <uuid>8d35e49e-efca-4621-a6f7-de650e5272fd</uuid>
Oct 11 09:35:49 compute-0 nova_compute[260935]:   <name>instance-0000008c</name>
Oct 11 09:35:49 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:35:49 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:35:49 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-88042792-access_point-478778867</nova:name>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:35:48</nova:creationTime>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:35:49 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:35:49 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:35:49 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:35:49 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:35:49 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:35:49 compute-0 nova_compute[260935]:         <nova:user uuid="14455fee241a4032a1491acddf675e59">tempest-TestSecurityGroupsBasicOps-88042792-project-member</nova:user>
Oct 11 09:35:49 compute-0 nova_compute[260935]:         <nova:project uuid="ae66871ba5094dd49475fed96fb24be2">tempest-TestSecurityGroupsBasicOps-88042792</nova:project>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:35:49 compute-0 nova_compute[260935]:         <nova:port uuid="28067a9e-43c9-4f82-aeb3-002926889b4b">
Oct 11 09:35:49 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:35:49 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:35:49 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <system>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <entry name="serial">8d35e49e-efca-4621-a6f7-de650e5272fd</entry>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <entry name="uuid">8d35e49e-efca-4621-a6f7-de650e5272fd</entry>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     </system>
Oct 11 09:35:49 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:35:49 compute-0 nova_compute[260935]:   <os>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:   </os>
Oct 11 09:35:49 compute-0 nova_compute[260935]:   <features>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:   </features>
Oct 11 09:35:49 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:35:49 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:35:49 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/8d35e49e-efca-4621-a6f7-de650e5272fd_disk">
Oct 11 09:35:49 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       </source>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:35:49 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/8d35e49e-efca-4621-a6f7-de650e5272fd_disk.config">
Oct 11 09:35:49 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       </source>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:35:49 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:83:96:3a"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <target dev="tap28067a9e-43"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd/console.log" append="off"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <video>
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     </video>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:35:49 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:35:49 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:35:49 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:35:49 compute-0 nova_compute[260935]: </domain>
Oct 11 09:35:49 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.801 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Preparing to wait for external event network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.802 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.802 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.802 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.803 2 DEBUG nova.virt.libvirt.vif [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:35:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-88042792-access_point-478778867',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-88042792-access_point-478778867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-88042792-acce',id=140,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFTq8OGvKaz+LMA/PcV/K58nrpAVi7kg5IPqRwY1XCZgpgNS8omJcJt4bFk7y+YWXHh9knbjh0T2FJN0u/vwytRbPYrGFElF+LT6MJEgQPvTD/i1PmkAieU+uwu1S/cEGA==',key_name='tempest-TestSecurityGroupsBasicOps-1768444573',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae66871ba5094dd49475fed96fb24be2',ramdisk_id='',reservation_id='r-zleps1oi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-88042792',owner_user_name='tempest-TestSecurityGroupsBasicOps-88042792-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:35:43Z,user_data=None,user_id='14455fee241a4032a1491acddf675e59',uuid=8d35e49e-efca-4621-a6f7-de650e5272fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.803 2 DEBUG nova.network.os_vif_util [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Converting VIF {"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.804 2 DEBUG nova.network.os_vif_util [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:96:3a,bridge_name='br-int',has_traffic_filtering=True,id=28067a9e-43c9-4f82-aeb3-002926889b4b,network=Network(617f2647-1180-40d5-ae2e-3b28298acf26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28067a9e-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.804 2 DEBUG os_vif [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:96:3a,bridge_name='br-int',has_traffic_filtering=True,id=28067a9e-43c9-4f82-aeb3-002926889b4b,network=Network(617f2647-1180-40d5-ae2e-3b28298acf26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28067a9e-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.805 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.805 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28067a9e-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap28067a9e-43, col_values=(('external_ids', {'iface-id': '28067a9e-43c9-4f82-aeb3-002926889b4b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:96:3a', 'vm-uuid': '8d35e49e-efca-4621-a6f7-de650e5272fd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:49 compute-0 NetworkManager[44960]: <info>  [1760175349.8626] manager: (tap28067a9e-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/602)
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.868 2 INFO os_vif [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:96:3a,bridge_name='br-int',has_traffic_filtering=True,id=28067a9e-43c9-4f82-aeb3-002926889b4b,network=Network(617f2647-1180-40d5-ae2e-3b28298acf26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28067a9e-43')
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.907 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.908 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.909 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.909 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.910 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.948 2 DEBUG nova.network.neutron [req-66e1ed7f-e399-4bcd-b696-f92e0395f4e2 req-a3349235-6f62-4a71-b5ca-64933f2939fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Updated VIF entry in instance network info cache for port 28067a9e-43c9-4f82-aeb3-002926889b4b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.949 2 DEBUG nova.network.neutron [req-66e1ed7f-e399-4bcd-b696-f92e0395f4e2 req-a3349235-6f62-4a71-b5ca-64933f2939fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Updating instance_info_cache with network_info: [{"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.967 2 DEBUG oslo_concurrency.lockutils [req-66e1ed7f-e399-4bcd-b696-f92e0395f4e2 req-a3349235-6f62-4a71-b5ca-64933f2939fa e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.985 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.986 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.986 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] No VIF found with MAC fa:16:3e:83:96:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:35:49 compute-0 nova_compute[260935]: 2025-10-11 09:35:49.986 2 INFO nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Using config drive
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.008 2 DEBUG nova.storage.rbd_utils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] rbd image 8d35e49e-efca-4621-a6f7-de650e5272fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:35:50 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/239988064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:35:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:35:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/388559360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.343 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.451 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.452 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.458 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.459 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.459 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.465 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.465 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.470 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.470 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.477 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.477 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.660 2 INFO nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Creating config drive at /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd/disk.config
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.670 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmqz6i04_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.782 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.784 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2621MB free_disk=59.764366149902344GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.784 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.785 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.818 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmqz6i04_" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.843 2 DEBUG nova.storage.rbd_utils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] rbd image 8d35e49e-efca-4621-a6f7-de650e5272fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:35:50 compute-0 nova_compute[260935]: 2025-10-11 09:35:50.848 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd/disk.config 8d35e49e-efca-4621-a6f7-de650e5272fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.003 2 DEBUG oslo_concurrency.processutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd/disk.config 8d35e49e-efca-4621-a6f7-de650e5272fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.004 2 INFO nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Deleting local config drive /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd/disk.config because it was imported into RBD.
Oct 11 09:35:51 compute-0 NetworkManager[44960]: <info>  [1760175351.0474] manager: (tap28067a9e-43): new Tun device (/org/freedesktop/NetworkManager/Devices/603)
Oct 11 09:35:51 compute-0 kernel: tap28067a9e-43: entered promiscuous mode
Oct 11 09:35:51 compute-0 ovn_controller[152945]: 2025-10-11T09:35:51Z|01568|binding|INFO|Claiming lport 28067a9e-43c9-4f82-aeb3-002926889b4b for this chassis.
Oct 11 09:35:51 compute-0 ovn_controller[152945]: 2025-10-11T09:35:51Z|01569|binding|INFO|28067a9e-43c9-4f82-aeb3-002926889b4b: Claiming fa:16:3e:83:96:3a 10.100.0.8
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:51 compute-0 ovn_controller[152945]: 2025-10-11T09:35:51Z|01570|binding|INFO|Setting lport 28067a9e-43c9-4f82-aeb3-002926889b4b ovn-installed in OVS
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:51 compute-0 systemd-udevd[416635]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:35:51 compute-0 systemd-machined[215705]: New machine qemu-164-instance-0000008c.
Oct 11 09:35:51 compute-0 systemd[1]: Started Virtual Machine qemu-164-instance-0000008c.
Oct 11 09:35:51 compute-0 NetworkManager[44960]: <info>  [1760175351.1010] device (tap28067a9e-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:35:51 compute-0 NetworkManager[44960]: <info>  [1760175351.1016] device (tap28067a9e-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:35:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2826: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:35:51 compute-0 ovn_controller[152945]: 2025-10-11T09:35:51Z|01571|binding|INFO|Setting lport 28067a9e-43c9-4f82-aeb3-002926889b4b up in Southbound
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.238 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:96:3a 10.100.0.8'], port_security=['fa:16:3e:83:96:3a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8d35e49e-efca-4621-a6f7-de650e5272fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-617f2647-1180-40d5-ae2e-3b28298acf26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae66871ba5094dd49475fed96fb24be2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31517e64-56cf-4001-97b4-ec41047395ae 9f4770d8-f722-49dc-9929-f78ea8b6a7c0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=321d388f-0e1e-4939-a64b-86d4cae0051f, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=28067a9e-43c9-4f82-aeb3-002926889b4b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.240 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 28067a9e-43c9-4f82-aeb3-002926889b4b in datapath 617f2647-1180-40d5-ae2e-3b28298acf26 bound to our chassis
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.244 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 617f2647-1180-40d5-ae2e-3b28298acf26
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.263 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a74f1e8d-6560-45e2-a5c2-544132147795]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.264 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap617f2647-11 in ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:35:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/388559360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:35:51 compute-0 ceph-mon[74313]: pgmap v2826: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.267 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap617f2647-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.267 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[25cb3ef9-e1d3-480b-98f2-9306e2a2855a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.268 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5bcaa0e7-f01a-4b4e-bc97-6b0c4dd30cd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.295 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[734fe6ee-6829-4020-a005-80e9b3dc6e77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.326 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1631f050-d8bc-47d2-b02a-2b03cee3373e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.355 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[0a29b415-e586-46c3-8f3b-8d78d9520639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:51 compute-0 NetworkManager[44960]: <info>  [1760175351.3677] manager: (tap617f2647-10): new Veth device (/org/freedesktop/NetworkManager/Devices/604)
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.367 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[541e311e-67c2-4f88-9a7d-0b2621ce1bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:51 compute-0 systemd-udevd[416637]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.400 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.400 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.401 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.401 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance a77fa566-1ab4-484c-b6ef-53471c9f91f8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.401 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 8d35e49e-efca-4621-a6f7-de650e5272fd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.401 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.401 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.416 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[58f30fc8-0b5a-47f3-8b3e-7fbdd5d97d93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.419 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[66cf776c-e692-4cb7-bbf7-f6eeac5d797e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:51 compute-0 NetworkManager[44960]: <info>  [1760175351.4585] device (tap617f2647-10): carrier: link connected
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.466 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b978a2-3d29-4036-9102-d3ec76bb7d0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.492 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7e1adf72-2aa9-4aaf-9cd6-a283cddd523f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap617f2647-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:a9:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719616, 'reachable_time': 18424, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416677, 'error': None, 'target': 'ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.512 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[127cc807-e160-40ba-abc1-5a7a57522a85]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:a910'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719616, 'tstamp': 719616}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416689, 'error': None, 'target': 'ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.533 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed3c329-6e03-4f61-a515-66ac52bfa819]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap617f2647-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:a9:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719616, 'reachable_time': 18424, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 416697, 'error': None, 'target': 'ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.588 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[739e02cc-fc62-4397-9cc8-7a3661453167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.600 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.677 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[98373f13-3a4e-47f5-84ba-e17949db52a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.679 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap617f2647-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.679 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.680 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap617f2647-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:35:51 compute-0 NetworkManager[44960]: <info>  [1760175351.6832] manager: (tap617f2647-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/605)
Oct 11 09:35:51 compute-0 kernel: tap617f2647-10: entered promiscuous mode
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.686 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap617f2647-10, col_values=(('external_ids', {'iface-id': 'a74f93a1-5d2a-45dc-853f-254c71fe1565'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:35:51 compute-0 ovn_controller[152945]: 2025-10-11T09:35:51Z|01572|binding|INFO|Releasing lport a74f93a1-5d2a-45dc-853f-254c71fe1565 from this chassis (sb_readonly=0)
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.706 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/617f2647-1180-40d5-ae2e-3b28298acf26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/617f2647-1180-40d5-ae2e-3b28298acf26.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.707 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[02d10c94-81fb-4d86-9041-b54e6766ef12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.709 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-617f2647-1180-40d5-ae2e-3b28298acf26
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/617f2647-1180-40d5-ae2e-3b28298acf26.pid.haproxy
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 617f2647-1180-40d5-ae2e-3b28298acf26
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:35:51 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:35:51.713 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26', 'env', 'PROCESS_TAG=haproxy-617f2647-1180-40d5-ae2e-3b28298acf26', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/617f2647-1180-40d5-ae2e-3b28298acf26.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.751 2 DEBUG nova.compute.manager [req-709ab804-cc7e-41c1-8fb2-90f98ef8c046 req-52e75187-b8bc-4b63-a759-f2b067ec804b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.752 2 DEBUG oslo_concurrency.lockutils [req-709ab804-cc7e-41c1-8fb2-90f98ef8c046 req-52e75187-b8bc-4b63-a759-f2b067ec804b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.752 2 DEBUG oslo_concurrency.lockutils [req-709ab804-cc7e-41c1-8fb2-90f98ef8c046 req-52e75187-b8bc-4b63-a759-f2b067ec804b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.753 2 DEBUG oslo_concurrency.lockutils [req-709ab804-cc7e-41c1-8fb2-90f98ef8c046 req-52e75187-b8bc-4b63-a759-f2b067ec804b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:35:51 compute-0 nova_compute[260935]: 2025-10-11 09:35:51.753 2 DEBUG nova.compute.manager [req-709ab804-cc7e-41c1-8fb2-90f98ef8c046 req-52e75187-b8bc-4b63-a759-f2b067ec804b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Processing event network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:35:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:35:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1108910696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.055 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.060 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.076 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.125 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.125 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:35:52 compute-0 podman[416767]: 2025-10-11 09:35:52.129505292 +0000 UTC m=+0.054982843 container create d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.146 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.147 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175352.1472325, 8d35e49e-efca-4621-a6f7-de650e5272fd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.147 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] VM Started (Lifecycle Event)
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.151 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.155 2 INFO nova.virt.libvirt.driver [-] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Instance spawned successfully.
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.156 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:35:52 compute-0 podman[416767]: 2025-10-11 09:35:52.099285413 +0000 UTC m=+0.024763045 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:35:52 compute-0 systemd[1]: Started libpod-conmon-d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210.scope.
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.193 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.198 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:35:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce8ebf8528fbbe46433a517ccbe8faadfb46f7e1ba5f57651637bad728b7361/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:35:52 compute-0 podman[416767]: 2025-10-11 09:35:52.247391199 +0000 UTC m=+0.172868810 container init d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 09:35:52 compute-0 podman[416767]: 2025-10-11 09:35:52.25797601 +0000 UTC m=+0.183453581 container start d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 09:35:52 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1108910696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.294 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.295 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.296 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.296 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.297 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.297 2 DEBUG nova.virt.libvirt.driver [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:35:52 compute-0 neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26[416782]: [NOTICE]   (416786) : New worker (416788) forked
Oct 11 09:35:52 compute-0 neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26[416782]: [NOTICE]   (416786) : Loading success.
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.310 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.311 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175352.1494644, 8d35e49e-efca-4621-a6f7-de650e5272fd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.311 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] VM Paused (Lifecycle Event)
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.343 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.347 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175352.1510408, 8d35e49e-efca-4621-a6f7-de650e5272fd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.348 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] VM Resumed (Lifecycle Event)
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.378 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.381 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.389 2 INFO nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Took 8.74 seconds to spawn the instance on the hypervisor.
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.389 2 DEBUG nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.426 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.465 2 INFO nova.compute.manager [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Took 14.87 seconds to build instance.
Oct 11 09:35:52 compute-0 nova_compute[260935]: 2025-10-11 09:35:52.484 2 DEBUG oslo_concurrency.lockutils [None req-2e95fecc-72be-49bc-b3cc-7d2dec882a8e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:35:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2827: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:35:53 compute-0 ceph-mon[74313]: pgmap v2827: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:35:53 compute-0 nova_compute[260935]: 2025-10-11 09:35:53.851 2 DEBUG nova.compute.manager [req-d291c922-ed13-4ddb-b46c-96cd4bf74090 req-086f2737-481e-4d37-894f-46f76701ca84 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:35:53 compute-0 nova_compute[260935]: 2025-10-11 09:35:53.852 2 DEBUG oslo_concurrency.lockutils [req-d291c922-ed13-4ddb-b46c-96cd4bf74090 req-086f2737-481e-4d37-894f-46f76701ca84 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:35:53 compute-0 nova_compute[260935]: 2025-10-11 09:35:53.852 2 DEBUG oslo_concurrency.lockutils [req-d291c922-ed13-4ddb-b46c-96cd4bf74090 req-086f2737-481e-4d37-894f-46f76701ca84 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:35:53 compute-0 nova_compute[260935]: 2025-10-11 09:35:53.853 2 DEBUG oslo_concurrency.lockutils [req-d291c922-ed13-4ddb-b46c-96cd4bf74090 req-086f2737-481e-4d37-894f-46f76701ca84 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:35:53 compute-0 nova_compute[260935]: 2025-10-11 09:35:53.853 2 DEBUG nova.compute.manager [req-d291c922-ed13-4ddb-b46c-96cd4bf74090 req-086f2737-481e-4d37-894f-46f76701ca84 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] No waiting events found dispatching network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:35:53 compute-0 nova_compute[260935]: 2025-10-11 09:35:53.853 2 WARNING nova.compute.manager [req-d291c922-ed13-4ddb-b46c-96cd4bf74090 req-086f2737-481e-4d37-894f-46f76701ca84 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received unexpected event network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b for instance with vm_state active and task_state None.
Oct 11 09:35:54 compute-0 nova_compute[260935]: 2025-10-11 09:35:54.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:35:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:35:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:35:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:35:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:35:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:35:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:35:54 compute-0 nova_compute[260935]: 2025-10-11 09:35:54.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:35:54
Oct 11 09:35:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:35:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:35:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'vms', '.rgw.root', 'volumes']
Oct 11 09:35:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:35:55 compute-0 nova_compute[260935]: 2025-10-11 09:35:55.122 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:35:55 compute-0 nova_compute[260935]: 2025-10-11 09:35:55.145 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:35:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2828: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:35:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:35:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:35:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:35:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:35:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:35:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:35:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:35:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:35:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:35:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:35:55 compute-0 sshd-session[416797]: Invalid user mysql from 165.232.82.252 port 54820
Oct 11 09:35:55 compute-0 nova_compute[260935]: 2025-10-11 09:35:55.931 2 DEBUG nova.compute.manager [req-f6cd7d3a-f757-41a3-abbc-920eaa30f196 req-478b1c3a-cd7a-4a43-96db-7848b59a64df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-changed-28067a9e-43c9-4f82-aeb3-002926889b4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:35:55 compute-0 nova_compute[260935]: 2025-10-11 09:35:55.933 2 DEBUG nova.compute.manager [req-f6cd7d3a-f757-41a3-abbc-920eaa30f196 req-478b1c3a-cd7a-4a43-96db-7848b59a64df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Refreshing instance network info cache due to event network-changed-28067a9e-43c9-4f82-aeb3-002926889b4b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:35:55 compute-0 nova_compute[260935]: 2025-10-11 09:35:55.934 2 DEBUG oslo_concurrency.lockutils [req-f6cd7d3a-f757-41a3-abbc-920eaa30f196 req-478b1c3a-cd7a-4a43-96db-7848b59a64df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:35:55 compute-0 nova_compute[260935]: 2025-10-11 09:35:55.935 2 DEBUG oslo_concurrency.lockutils [req-f6cd7d3a-f757-41a3-abbc-920eaa30f196 req-478b1c3a-cd7a-4a43-96db-7848b59a64df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:35:55 compute-0 nova_compute[260935]: 2025-10-11 09:35:55.935 2 DEBUG nova.network.neutron [req-f6cd7d3a-f757-41a3-abbc-920eaa30f196 req-478b1c3a-cd7a-4a43-96db-7848b59a64df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Refreshing network info cache for port 28067a9e-43c9-4f82-aeb3-002926889b4b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:35:56 compute-0 sshd-session[416797]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:35:56 compute-0 sshd-session[416797]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:35:56 compute-0 ceph-mon[74313]: pgmap v2828: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:35:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2829: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:35:57 compute-0 podman[416799]: 2025-10-11 09:35:57.806386044 +0000 UTC m=+0.089891654 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:35:58 compute-0 nova_compute[260935]: 2025-10-11 09:35:58.007 2 DEBUG nova.network.neutron [req-f6cd7d3a-f757-41a3-abbc-920eaa30f196 req-478b1c3a-cd7a-4a43-96db-7848b59a64df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Updated VIF entry in instance network info cache for port 28067a9e-43c9-4f82-aeb3-002926889b4b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:35:58 compute-0 nova_compute[260935]: 2025-10-11 09:35:58.008 2 DEBUG nova.network.neutron [req-f6cd7d3a-f757-41a3-abbc-920eaa30f196 req-478b1c3a-cd7a-4a43-96db-7848b59a64df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Updating instance_info_cache with network_info: [{"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:35:58 compute-0 nova_compute[260935]: 2025-10-11 09:35:58.038 2 DEBUG oslo_concurrency.lockutils [req-f6cd7d3a-f757-41a3-abbc-920eaa30f196 req-478b1c3a-cd7a-4a43-96db-7848b59a64df e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:35:58 compute-0 sshd-session[416797]: Failed password for invalid user mysql from 165.232.82.252 port 54820 ssh2
Oct 11 09:35:58 compute-0 ceph-mon[74313]: pgmap v2829: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:35:59 compute-0 nova_compute[260935]: 2025-10-11 09:35:59.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:35:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2830: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:35:59 compute-0 sshd-session[416797]: Connection closed by invalid user mysql 165.232.82.252 port 54820 [preauth]
Oct 11 09:35:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:35:59 compute-0 nova_compute[260935]: 2025-10-11 09:35:59.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:00 compute-0 ceph-mon[74313]: pgmap v2830: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:36:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2831: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:36:02 compute-0 ceph-mon[74313]: pgmap v2831: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:36:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2832: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 11 09:36:04 compute-0 nova_compute[260935]: 2025-10-11 09:36:04.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:04 compute-0 ceph-mon[74313]: pgmap v2832: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 11 09:36:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:36:04 compute-0 nova_compute[260935]: 2025-10-11 09:36:04.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2833: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 11 09:36:05 compute-0 sudo[416820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:36:05 compute-0 sudo[416820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:05 compute-0 sudo[416820]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:05 compute-0 ovn_controller[152945]: 2025-10-11T09:36:05Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:83:96:3a 10.100.0.8
Oct 11 09:36:05 compute-0 ovn_controller[152945]: 2025-10-11T09:36:05Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:83:96:3a 10.100.0.8
Oct 11 09:36:05 compute-0 sudo[416845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:36:05 compute-0 sudo[416845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:05 compute-0 sudo[416845]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:05 compute-0 sudo[416870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:36:05 compute-0 sudo[416870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:05 compute-0 sudo[416870]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:05 compute-0 sudo[416895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:36:05 compute-0 sudo[416895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003735063814547112 of space, bias 1.0, pg target 1.1205191443641336 quantized to 32 (current 32)
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:36:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:36:05 compute-0 sudo[416895]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 09:36:06 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 09:36:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:36:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:36:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:36:06 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:36:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:36:06 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:36:06 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9f388bcc-84b2-4365-8f76-9327c5a09d07 does not exist
Oct 11 09:36:06 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev b13cf9af-73e5-420a-9426-4547c6f096b5 does not exist
Oct 11 09:36:06 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 8b8f358d-986b-4e67-b45b-8d6cb294dc68 does not exist
Oct 11 09:36:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:36:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:36:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:36:06 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:36:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:36:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:36:06 compute-0 sudo[416950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:36:06 compute-0 sudo[416950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:06 compute-0 sudo[416950]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:06 compute-0 sudo[416976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:36:06 compute-0 sudo[416976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:06 compute-0 sudo[416976]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:06 compute-0 podman[416974]: 2025-10-11 09:36:06.279989508 +0000 UTC m=+0.106660831 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid)
Oct 11 09:36:06 compute-0 sudo[417016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:36:06 compute-0 sudo[417016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:06 compute-0 sudo[417016]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:06 compute-0 ceph-mon[74313]: pgmap v2833: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 11 09:36:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 09:36:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:36:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:36:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:36:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:36:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:36:06 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:36:06 compute-0 sudo[417044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:36:06 compute-0 sudo[417044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:06 compute-0 podman[417107]: 2025-10-11 09:36:06.825849049 +0000 UTC m=+0.071448310 container create 15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:36:06 compute-0 systemd[1]: Started libpod-conmon-15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a.scope.
Oct 11 09:36:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:36:06 compute-0 podman[417107]: 2025-10-11 09:36:06.799259234 +0000 UTC m=+0.044858525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:36:06 compute-0 podman[417107]: 2025-10-11 09:36:06.90722984 +0000 UTC m=+0.152829111 container init 15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 09:36:06 compute-0 podman[417107]: 2025-10-11 09:36:06.915609618 +0000 UTC m=+0.161208919 container start 15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 09:36:06 compute-0 podman[417107]: 2025-10-11 09:36:06.918968213 +0000 UTC m=+0.164567484 container attach 15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 11 09:36:06 compute-0 systemd[1]: libpod-15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a.scope: Deactivated successfully.
Oct 11 09:36:06 compute-0 boring_goldberg[417124]: 167 167
Oct 11 09:36:06 compute-0 conmon[417124]: conmon 15279f96ee3bc1fda5eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a.scope/container/memory.events
Oct 11 09:36:06 compute-0 podman[417107]: 2025-10-11 09:36:06.924870741 +0000 UTC m=+0.170469992 container died 15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 09:36:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6b5e9efcbdfe9b4b51782cc8b44dd61ec407d90b4a464ff4dcfb8d175982185-merged.mount: Deactivated successfully.
Oct 11 09:36:06 compute-0 podman[417107]: 2025-10-11 09:36:06.973503612 +0000 UTC m=+0.219102863 container remove 15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:36:06 compute-0 systemd[1]: libpod-conmon-15279f96ee3bc1fda5eb71add03766de8bf491ce9afe73826e463c2c9c00a21a.scope: Deactivated successfully.
Oct 11 09:36:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2834: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 11 09:36:07 compute-0 podman[417147]: 2025-10-11 09:36:07.256101047 +0000 UTC m=+0.066784247 container create c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 11 09:36:07 compute-0 systemd[1]: Started libpod-conmon-c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc.scope.
Oct 11 09:36:07 compute-0 podman[417147]: 2025-10-11 09:36:07.22624396 +0000 UTC m=+0.036927210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:36:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:36:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a683c50790f06de6da5c196121348762d4349b1c70bccac39e307466a660e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:36:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a683c50790f06de6da5c196121348762d4349b1c70bccac39e307466a660e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:36:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a683c50790f06de6da5c196121348762d4349b1c70bccac39e307466a660e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:36:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a683c50790f06de6da5c196121348762d4349b1c70bccac39e307466a660e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:36:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a683c50790f06de6da5c196121348762d4349b1c70bccac39e307466a660e6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:36:07 compute-0 podman[417147]: 2025-10-11 09:36:07.363519918 +0000 UTC m=+0.174203098 container init c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 09:36:07 compute-0 podman[417147]: 2025-10-11 09:36:07.383154995 +0000 UTC m=+0.193838195 container start c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 09:36:07 compute-0 podman[417147]: 2025-10-11 09:36:07.387784727 +0000 UTC m=+0.198467897 container attach c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 09:36:08 compute-0 ceph-mon[74313]: pgmap v2834: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Oct 11 09:36:08 compute-0 sharp_lederberg[417164]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:36:08 compute-0 sharp_lederberg[417164]: --> relative data size: 1.0
Oct 11 09:36:08 compute-0 sharp_lederberg[417164]: --> All data devices are unavailable
Oct 11 09:36:08 compute-0 systemd[1]: libpod-c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc.scope: Deactivated successfully.
Oct 11 09:36:08 compute-0 podman[417147]: 2025-10-11 09:36:08.468832906 +0000 UTC m=+1.279516076 container died c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:36:08 compute-0 systemd[1]: libpod-c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc.scope: Consumed 1.024s CPU time.
Oct 11 09:36:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1a683c50790f06de6da5c196121348762d4349b1c70bccac39e307466a660e6-merged.mount: Deactivated successfully.
Oct 11 09:36:08 compute-0 podman[417147]: 2025-10-11 09:36:08.535971052 +0000 UTC m=+1.346654222 container remove c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:36:08 compute-0 systemd[1]: libpod-conmon-c896f6bb10b0a121d1a569bf1e1dec9cb178b51df3ab96c76064ab616d1da6dc.scope: Deactivated successfully.
Oct 11 09:36:08 compute-0 sudo[417044]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:08 compute-0 sudo[417207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:36:08 compute-0 sudo[417207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:08 compute-0 sudo[417207]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:08 compute-0 sudo[417232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:36:08 compute-0 sudo[417232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:08 compute-0 sudo[417232]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:08 compute-0 sudo[417257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:36:08 compute-0 sudo[417257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:08 compute-0 sudo[417257]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:08 compute-0 sudo[417282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:36:08 compute-0 sudo[417282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:09 compute-0 nova_compute[260935]: 2025-10-11 09:36:09.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2835: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Oct 11 09:36:09 compute-0 podman[417345]: 2025-10-11 09:36:09.376622015 +0000 UTC m=+0.069315559 container create 9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:36:09 compute-0 systemd[1]: Started libpod-conmon-9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70.scope.
Oct 11 09:36:09 compute-0 podman[417345]: 2025-10-11 09:36:09.347109657 +0000 UTC m=+0.039803281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:36:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:36:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:36:09 compute-0 podman[417345]: 2025-10-11 09:36:09.491192839 +0000 UTC m=+0.183886413 container init 9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kepler, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:36:09 compute-0 podman[417345]: 2025-10-11 09:36:09.499763222 +0000 UTC m=+0.192456756 container start 9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kepler, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:36:09 compute-0 podman[417345]: 2025-10-11 09:36:09.503660783 +0000 UTC m=+0.196354337 container attach 9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:36:09 compute-0 elated_kepler[417361]: 167 167
Oct 11 09:36:09 compute-0 systemd[1]: libpod-9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70.scope: Deactivated successfully.
Oct 11 09:36:09 compute-0 podman[417345]: 2025-10-11 09:36:09.50638083 +0000 UTC m=+0.199074364 container died 9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kepler, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:36:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-89c1c6d40041cc5a6c51c8f28969c43121fa6083ae11a29036438a036959b72f-merged.mount: Deactivated successfully.
Oct 11 09:36:09 compute-0 podman[417345]: 2025-10-11 09:36:09.539754888 +0000 UTC m=+0.232448422 container remove 9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kepler, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 09:36:09 compute-0 systemd[1]: libpod-conmon-9bfb9ddc725b02b0347fb74fe49311600998b5a511d617f24a5073a887943b70.scope: Deactivated successfully.
Oct 11 09:36:09 compute-0 podman[417384]: 2025-10-11 09:36:09.810017823 +0000 UTC m=+0.082771552 container create 13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_knuth, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:36:09 compute-0 systemd[1]: Started libpod-conmon-13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa.scope.
Oct 11 09:36:09 compute-0 nova_compute[260935]: 2025-10-11 09:36:09.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:09 compute-0 podman[417384]: 2025-10-11 09:36:09.780701171 +0000 UTC m=+0.053454920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:36:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c8c741749fe33e5f282ed2556be34d151cc486b440addabd861b2191ecce01d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c8c741749fe33e5f282ed2556be34d151cc486b440addabd861b2191ecce01d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c8c741749fe33e5f282ed2556be34d151cc486b440addabd861b2191ecce01d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c8c741749fe33e5f282ed2556be34d151cc486b440addabd861b2191ecce01d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:36:09 compute-0 podman[417384]: 2025-10-11 09:36:09.925076511 +0000 UTC m=+0.197830270 container init 13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_knuth, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 09:36:09 compute-0 podman[417384]: 2025-10-11 09:36:09.938580584 +0000 UTC m=+0.211334293 container start 13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:36:09 compute-0 podman[417384]: 2025-10-11 09:36:09.942546987 +0000 UTC m=+0.215300696 container attach 13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_knuth, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:36:09 compute-0 podman[417401]: 2025-10-11 09:36:09.972349673 +0000 UTC m=+0.086085406 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 09:36:10 compute-0 podman[417425]: 2025-10-11 09:36:10.133291063 +0000 UTC m=+0.133249635 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 09:36:10 compute-0 ceph-mon[74313]: pgmap v2835: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Oct 11 09:36:10 compute-0 focused_knuth[417400]: {
Oct 11 09:36:10 compute-0 focused_knuth[417400]:     "0": [
Oct 11 09:36:10 compute-0 focused_knuth[417400]:         {
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "devices": [
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "/dev/loop3"
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             ],
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "lv_name": "ceph_lv0",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "lv_size": "21470642176",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "name": "ceph_lv0",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "tags": {
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.cluster_name": "ceph",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.crush_device_class": "",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.encrypted": "0",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.osd_id": "0",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.type": "block",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.vdo": "0"
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             },
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "type": "block",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "vg_name": "ceph_vg0"
Oct 11 09:36:10 compute-0 focused_knuth[417400]:         }
Oct 11 09:36:10 compute-0 focused_knuth[417400]:     ],
Oct 11 09:36:10 compute-0 focused_knuth[417400]:     "1": [
Oct 11 09:36:10 compute-0 focused_knuth[417400]:         {
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "devices": [
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "/dev/loop4"
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             ],
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "lv_name": "ceph_lv1",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "lv_size": "21470642176",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "name": "ceph_lv1",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "tags": {
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.cluster_name": "ceph",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.crush_device_class": "",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.encrypted": "0",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.osd_id": "1",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.type": "block",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.vdo": "0"
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             },
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "type": "block",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "vg_name": "ceph_vg1"
Oct 11 09:36:10 compute-0 focused_knuth[417400]:         }
Oct 11 09:36:10 compute-0 focused_knuth[417400]:     ],
Oct 11 09:36:10 compute-0 focused_knuth[417400]:     "2": [
Oct 11 09:36:10 compute-0 focused_knuth[417400]:         {
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "devices": [
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "/dev/loop5"
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             ],
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "lv_name": "ceph_lv2",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "lv_size": "21470642176",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "name": "ceph_lv2",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "tags": {
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.cluster_name": "ceph",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.crush_device_class": "",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.encrypted": "0",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.osd_id": "2",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.type": "block",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:                 "ceph.vdo": "0"
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             },
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "type": "block",
Oct 11 09:36:10 compute-0 focused_knuth[417400]:             "vg_name": "ceph_vg2"
Oct 11 09:36:10 compute-0 focused_knuth[417400]:         }
Oct 11 09:36:10 compute-0 focused_knuth[417400]:     ]
Oct 11 09:36:10 compute-0 focused_knuth[417400]: }
Oct 11 09:36:10 compute-0 systemd[1]: libpod-13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa.scope: Deactivated successfully.
Oct 11 09:36:10 compute-0 podman[417384]: 2025-10-11 09:36:10.760511805 +0000 UTC m=+1.033265494 container died 13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_knuth, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:36:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c8c741749fe33e5f282ed2556be34d151cc486b440addabd861b2191ecce01d-merged.mount: Deactivated successfully.
Oct 11 09:36:10 compute-0 podman[417384]: 2025-10-11 09:36:10.816263009 +0000 UTC m=+1.089016688 container remove 13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_knuth, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:36:10 compute-0 systemd[1]: libpod-conmon-13134e1eeaf069feb802e37a56347b940f7f4914e3a8859b3263c7b080e14cfa.scope: Deactivated successfully.
Oct 11 09:36:10 compute-0 sudo[417282]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:10 compute-0 sudo[417466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:36:10 compute-0 sudo[417466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:10 compute-0 sudo[417466]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:10 compute-0 sudo[417491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:36:10 compute-0 sudo[417491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:10 compute-0 sudo[417491]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:11 compute-0 sudo[417516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:36:11 compute-0 sudo[417516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:11 compute-0 sudo[417516]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:11 compute-0 sudo[417541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:36:11 compute-0 sudo[417541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2836: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:36:11 compute-0 podman[417605]: 2025-10-11 09:36:11.502865316 +0000 UTC m=+0.044226687 container create 60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:36:11 compute-0 systemd[1]: Started libpod-conmon-60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c.scope.
Oct 11 09:36:11 compute-0 podman[417605]: 2025-10-11 09:36:11.482969611 +0000 UTC m=+0.024331012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:36:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:36:11 compute-0 podman[417605]: 2025-10-11 09:36:11.60618869 +0000 UTC m=+0.147550091 container init 60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 09:36:11 compute-0 podman[417605]: 2025-10-11 09:36:11.613843088 +0000 UTC m=+0.155204469 container start 60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 09:36:11 compute-0 podman[417605]: 2025-10-11 09:36:11.617596204 +0000 UTC m=+0.158957605 container attach 60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct 11 09:36:11 compute-0 frosty_chebyshev[417621]: 167 167
Oct 11 09:36:11 compute-0 podman[417605]: 2025-10-11 09:36:11.620382763 +0000 UTC m=+0.161744154 container died 60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:36:11 compute-0 systemd[1]: libpod-60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c.scope: Deactivated successfully.
Oct 11 09:36:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-474a4cafc2c047478e227aa156a5f01b01aff76536401ec0c4ac0ea5801050bb-merged.mount: Deactivated successfully.
Oct 11 09:36:11 compute-0 podman[417605]: 2025-10-11 09:36:11.673701857 +0000 UTC m=+0.215063278 container remove 60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 09:36:11 compute-0 systemd[1]: libpod-conmon-60ab389b66da97309e3ceb2d2f485c6801cddb44f6b40c06bc4d5b637b7e500c.scope: Deactivated successfully.
Oct 11 09:36:11 compute-0 podman[417644]: 2025-10-11 09:36:11.958360311 +0000 UTC m=+0.074284330 container create f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:36:12 compute-0 systemd[1]: Started libpod-conmon-f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb.scope.
Oct 11 09:36:12 compute-0 podman[417644]: 2025-10-11 09:36:11.926665631 +0000 UTC m=+0.042589700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:36:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe9a08d3b88b7f6df5ef9c52eae83b7f268bd287ceb123e5b780f59f9fcf3bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe9a08d3b88b7f6df5ef9c52eae83b7f268bd287ceb123e5b780f59f9fcf3bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe9a08d3b88b7f6df5ef9c52eae83b7f268bd287ceb123e5b780f59f9fcf3bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe9a08d3b88b7f6df5ef9c52eae83b7f268bd287ceb123e5b780f59f9fcf3bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:36:12 compute-0 podman[417644]: 2025-10-11 09:36:12.073736398 +0000 UTC m=+0.189660467 container init f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:36:12 compute-0 podman[417644]: 2025-10-11 09:36:12.08615015 +0000 UTC m=+0.202074179 container start f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:36:12 compute-0 podman[417644]: 2025-10-11 09:36:12.090645388 +0000 UTC m=+0.206569497 container attach f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:36:12 compute-0 ceph-mon[74313]: pgmap v2836: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]: {
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:         "osd_id": 2,
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:         "type": "bluestore"
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:     },
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:         "osd_id": 0,
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:         "type": "bluestore"
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:     },
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:         "osd_id": 1,
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:         "type": "bluestore"
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]:     }
Oct 11 09:36:13 compute-0 vigilant_poitras[417660]: }
Oct 11 09:36:13 compute-0 systemd[1]: libpod-f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb.scope: Deactivated successfully.
Oct 11 09:36:13 compute-0 systemd[1]: libpod-f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb.scope: Consumed 1.078s CPU time.
Oct 11 09:36:13 compute-0 podman[417644]: 2025-10-11 09:36:13.167327874 +0000 UTC m=+1.283251903 container died f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:36:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2837: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 276 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:36:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fe9a08d3b88b7f6df5ef9c52eae83b7f268bd287ceb123e5b780f59f9fcf3bc-merged.mount: Deactivated successfully.
Oct 11 09:36:13 compute-0 podman[417644]: 2025-10-11 09:36:13.255100186 +0000 UTC m=+1.371024215 container remove f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poitras, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 11 09:36:13 compute-0 systemd[1]: libpod-conmon-f6bd31b1f21eef7d175ee5d15d04fb5a294bab1d6c0c1583882d7294cc9e35eb.scope: Deactivated successfully.
Oct 11 09:36:13 compute-0 sudo[417541]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:36:13 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:36:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:36:13 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:36:13 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev e2862908-e4fd-46f5-8690-5e7297c8ca6f does not exist
Oct 11 09:36:13 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 116d4332-cd19-4d44-8213-ab9c46705087 does not exist
Oct 11 09:36:13 compute-0 sudo[417707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:36:13 compute-0 sudo[417707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:13 compute-0 sudo[417707]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:13 compute-0 sudo[417732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:36:13 compute-0 sudo[417732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:36:13 compute-0 sudo[417732]: pam_unix(sudo:session): session closed for user root
Oct 11 09:36:14 compute-0 nova_compute[260935]: 2025-10-11 09:36:14.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:14 compute-0 ceph-mon[74313]: pgmap v2837: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 276 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:36:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:36:14 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:36:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:36:14 compute-0 nova_compute[260935]: 2025-10-11 09:36:14.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2838: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:36:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:15.232 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:15.233 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:15.235 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:16 compute-0 ceph-mon[74313]: pgmap v2838: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:36:16 compute-0 nova_compute[260935]: 2025-10-11 09:36:16.672 2 DEBUG nova.compute.manager [req-84c1aa69-24c8-4e7e-a827-743369a5a798 req-b026976f-5fe9-490f-8b9c-32b8ee4c5f0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-changed-28067a9e-43c9-4f82-aeb3-002926889b4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:36:16 compute-0 nova_compute[260935]: 2025-10-11 09:36:16.672 2 DEBUG nova.compute.manager [req-84c1aa69-24c8-4e7e-a827-743369a5a798 req-b026976f-5fe9-490f-8b9c-32b8ee4c5f0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Refreshing instance network info cache due to event network-changed-28067a9e-43c9-4f82-aeb3-002926889b4b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:36:16 compute-0 nova_compute[260935]: 2025-10-11 09:36:16.673 2 DEBUG oslo_concurrency.lockutils [req-84c1aa69-24c8-4e7e-a827-743369a5a798 req-b026976f-5fe9-490f-8b9c-32b8ee4c5f0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:36:16 compute-0 nova_compute[260935]: 2025-10-11 09:36:16.673 2 DEBUG oslo_concurrency.lockutils [req-84c1aa69-24c8-4e7e-a827-743369a5a798 req-b026976f-5fe9-490f-8b9c-32b8ee4c5f0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:36:16 compute-0 nova_compute[260935]: 2025-10-11 09:36:16.673 2 DEBUG nova.network.neutron [req-84c1aa69-24c8-4e7e-a827-743369a5a798 req-b026976f-5fe9-490f-8b9c-32b8ee4c5f0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Refreshing network info cache for port 28067a9e-43c9-4f82-aeb3-002926889b4b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:36:16 compute-0 nova_compute[260935]: 2025-10-11 09:36:16.771 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "8d35e49e-efca-4621-a6f7-de650e5272fd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:16 compute-0 nova_compute[260935]: 2025-10-11 09:36:16.772 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:16 compute-0 nova_compute[260935]: 2025-10-11 09:36:16.772 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:16 compute-0 nova_compute[260935]: 2025-10-11 09:36:16.773 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:16 compute-0 nova_compute[260935]: 2025-10-11 09:36:16.774 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:16 compute-0 nova_compute[260935]: 2025-10-11 09:36:16.776 2 INFO nova.compute.manager [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Terminating instance
Oct 11 09:36:16 compute-0 nova_compute[260935]: 2025-10-11 09:36:16.778 2 DEBUG nova.compute.manager [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:36:16 compute-0 kernel: tap28067a9e-43 (unregistering): left promiscuous mode
Oct 11 09:36:16 compute-0 NetworkManager[44960]: <info>  [1760175376.8451] device (tap28067a9e-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:36:16 compute-0 nova_compute[260935]: 2025-10-11 09:36:16.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:16 compute-0 ovn_controller[152945]: 2025-10-11T09:36:16Z|01573|binding|INFO|Releasing lport 28067a9e-43c9-4f82-aeb3-002926889b4b from this chassis (sb_readonly=0)
Oct 11 09:36:16 compute-0 ovn_controller[152945]: 2025-10-11T09:36:16Z|01574|binding|INFO|Setting lport 28067a9e-43c9-4f82-aeb3-002926889b4b down in Southbound
Oct 11 09:36:16 compute-0 ovn_controller[152945]: 2025-10-11T09:36:16Z|01575|binding|INFO|Removing iface tap28067a9e-43 ovn-installed in OVS
Oct 11 09:36:16 compute-0 nova_compute[260935]: 2025-10-11 09:36:16.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:16.867 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:96:3a 10.100.0.8'], port_security=['fa:16:3e:83:96:3a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8d35e49e-efca-4621-a6f7-de650e5272fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-617f2647-1180-40d5-ae2e-3b28298acf26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae66871ba5094dd49475fed96fb24be2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31517e64-56cf-4001-97b4-ec41047395ae 9f4770d8-f722-49dc-9929-f78ea8b6a7c0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=321d388f-0e1e-4939-a64b-86d4cae0051f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=28067a9e-43c9-4f82-aeb3-002926889b4b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:36:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:16.870 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 28067a9e-43c9-4f82-aeb3-002926889b4b in datapath 617f2647-1180-40d5-ae2e-3b28298acf26 unbound from our chassis
Oct 11 09:36:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:16.873 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 617f2647-1180-40d5-ae2e-3b28298acf26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:36:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:16.875 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[aef743b2-c14d-481d-9244-5c263c660f25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:16 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:16.876 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26 namespace which is not needed anymore
Oct 11 09:36:16 compute-0 nova_compute[260935]: 2025-10-11 09:36:16.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:16 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Oct 11 09:36:16 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d0000008c.scope: Consumed 13.539s CPU time.
Oct 11 09:36:16 compute-0 systemd-machined[215705]: Machine qemu-164-instance-0000008c terminated.
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.025 2 INFO nova.virt.libvirt.driver [-] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Instance destroyed successfully.
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.025 2 DEBUG nova.objects.instance [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lazy-loading 'resources' on Instance uuid 8d35e49e-efca-4621-a6f7-de650e5272fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.043 2 DEBUG nova.virt.libvirt.vif [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:35:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-88042792-access_point-478778867',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-88042792-access_point-478778867',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-88042792-acce',id=140,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFTq8OGvKaz+LMA/PcV/K58nrpAVi7kg5IPqRwY1XCZgpgNS8omJcJt4bFk7y+YWXHh9knbjh0T2FJN0u/vwytRbPYrGFElF+LT6MJEgQPvTD/i1PmkAieU+uwu1S/cEGA==',key_name='tempest-TestSecurityGroupsBasicOps-1768444573',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:35:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ae66871ba5094dd49475fed96fb24be2',ramdisk_id='',reservation_id='r-zleps1oi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-88042792',owner_user_name='tempest-TestSecurityGroupsBasicOps-88042792-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:35:52Z,user_data=None,user_id='14455fee241a4032a1491acddf675e59',uuid=8d35e49e-efca-4621-a6f7-de650e5272fd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.045 2 DEBUG nova.network.os_vif_util [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Converting VIF {"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.046 2 DEBUG nova.network.os_vif_util [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:83:96:3a,bridge_name='br-int',has_traffic_filtering=True,id=28067a9e-43c9-4f82-aeb3-002926889b4b,network=Network(617f2647-1180-40d5-ae2e-3b28298acf26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28067a9e-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.047 2 DEBUG os_vif [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:96:3a,bridge_name='br-int',has_traffic_filtering=True,id=28067a9e-43c9-4f82-aeb3-002926889b4b,network=Network(617f2647-1180-40d5-ae2e-3b28298acf26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28067a9e-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.051 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28067a9e-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.057 2 INFO os_vif [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:96:3a,bridge_name='br-int',has_traffic_filtering=True,id=28067a9e-43c9-4f82-aeb3-002926889b4b,network=Network(617f2647-1180-40d5-ae2e-3b28298acf26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28067a9e-43')
Oct 11 09:36:17 compute-0 neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26[416782]: [NOTICE]   (416786) : haproxy version is 2.8.14-c23fe91
Oct 11 09:36:17 compute-0 neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26[416782]: [NOTICE]   (416786) : path to executable is /usr/sbin/haproxy
Oct 11 09:36:17 compute-0 neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26[416782]: [WARNING]  (416786) : Exiting Master process...
Oct 11 09:36:17 compute-0 neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26[416782]: [WARNING]  (416786) : Exiting Master process...
Oct 11 09:36:17 compute-0 neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26[416782]: [ALERT]    (416786) : Current worker (416788) exited with code 143 (Terminated)
Oct 11 09:36:17 compute-0 neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26[416782]: [WARNING]  (416786) : All workers exited. Exiting... (0)
Oct 11 09:36:17 compute-0 systemd[1]: libpod-d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210.scope: Deactivated successfully.
Oct 11 09:36:17 compute-0 podman[417783]: 2025-10-11 09:36:17.089242139 +0000 UTC m=+0.070179974 container died d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.095 2 DEBUG nova.compute.manager [req-beda6639-1f52-470b-9ef2-bc053da0526e req-918ccbcb-47d2-4941-a8fb-3bd8676f977d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-vif-unplugged-28067a9e-43c9-4f82-aeb3-002926889b4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.095 2 DEBUG oslo_concurrency.lockutils [req-beda6639-1f52-470b-9ef2-bc053da0526e req-918ccbcb-47d2-4941-a8fb-3bd8676f977d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.095 2 DEBUG oslo_concurrency.lockutils [req-beda6639-1f52-470b-9ef2-bc053da0526e req-918ccbcb-47d2-4941-a8fb-3bd8676f977d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.096 2 DEBUG oslo_concurrency.lockutils [req-beda6639-1f52-470b-9ef2-bc053da0526e req-918ccbcb-47d2-4941-a8fb-3bd8676f977d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.096 2 DEBUG nova.compute.manager [req-beda6639-1f52-470b-9ef2-bc053da0526e req-918ccbcb-47d2-4941-a8fb-3bd8676f977d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] No waiting events found dispatching network-vif-unplugged-28067a9e-43c9-4f82-aeb3-002926889b4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.096 2 DEBUG nova.compute.manager [req-beda6639-1f52-470b-9ef2-bc053da0526e req-918ccbcb-47d2-4941-a8fb-3bd8676f977d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-vif-unplugged-28067a9e-43c9-4f82-aeb3-002926889b4b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:36:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210-userdata-shm.mount: Deactivated successfully.
Oct 11 09:36:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-bce8ebf8528fbbe46433a517ccbe8faadfb46f7e1ba5f57651637bad728b7361-merged.mount: Deactivated successfully.
Oct 11 09:36:17 compute-0 podman[417783]: 2025-10-11 09:36:17.131039846 +0000 UTC m=+0.111977721 container cleanup d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 09:36:17 compute-0 systemd[1]: libpod-conmon-d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210.scope: Deactivated successfully.
Oct 11 09:36:17 compute-0 podman[417841]: 2025-10-11 09:36:17.208052943 +0000 UTC m=+0.052663657 container remove d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:36:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2839: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:36:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.217 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[488577c7-5743-48b1-8f39-7a90f10dab59]: (4, ('Sat Oct 11 09:36:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26 (d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210)\nd864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210\nSat Oct 11 09:36:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26 (d864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210)\nd864fe38d9be2dae3da3c070959dd6bc2d2ef716d596898d6e6ac4974a4b2210\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.220 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0fbdc02f-e503-4b2f-9025-5dafa5666a7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.222 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap617f2647-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:17 compute-0 kernel: tap617f2647-10: left promiscuous mode
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.247 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[fa445a50-d570-4022-bdf7-49c9dbfbb0bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.274 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec15aa4-6e08-4a6b-8431-27b9fe24b91b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.276 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed4afec-1225-4709-88dd-5f00a211e61b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.296 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e206c842-85d2-4594-8862-ccb74f83447a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719605, 'reachable_time': 22709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417857, 'error': None, 'target': 'ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.299 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-617f2647-1180-40d5-ae2e-3b28298acf26 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:36:17 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:17.299 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc8e7d4-8a25-45f5-9eb5-085c5a9c9c1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d617f2647\x2d1180\x2d40d5\x2dae2e\x2d3b28298acf26.mount: Deactivated successfully.
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.429 2 INFO nova.virt.libvirt.driver [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Deleting instance files /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd_del
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.431 2 INFO nova.virt.libvirt.driver [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Deletion of /var/lib/nova/instances/8d35e49e-efca-4621-a6f7-de650e5272fd_del complete
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.490 2 INFO nova.compute.manager [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Took 0.71 seconds to destroy the instance on the hypervisor.
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.491 2 DEBUG oslo.service.loopingcall [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.492 2 DEBUG nova.compute.manager [-] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:36:17 compute-0 nova_compute[260935]: 2025-10-11 09:36:17.492 2 DEBUG nova.network.neutron [-] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:36:18 compute-0 nova_compute[260935]: 2025-10-11 09:36:18.039 2 DEBUG nova.network.neutron [req-84c1aa69-24c8-4e7e-a827-743369a5a798 req-b026976f-5fe9-490f-8b9c-32b8ee4c5f0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Updated VIF entry in instance network info cache for port 28067a9e-43c9-4f82-aeb3-002926889b4b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:36:18 compute-0 nova_compute[260935]: 2025-10-11 09:36:18.040 2 DEBUG nova.network.neutron [req-84c1aa69-24c8-4e7e-a827-743369a5a798 req-b026976f-5fe9-490f-8b9c-32b8ee4c5f0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Updating instance_info_cache with network_info: [{"id": "28067a9e-43c9-4f82-aeb3-002926889b4b", "address": "fa:16:3e:83:96:3a", "network": {"id": "617f2647-1180-40d5-ae2e-3b28298acf26", "bridge": "br-int", "label": "tempest-network-smoke--1660520824", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae66871ba5094dd49475fed96fb24be2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28067a9e-43", "ovs_interfaceid": "28067a9e-43c9-4f82-aeb3-002926889b4b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:36:18 compute-0 nova_compute[260935]: 2025-10-11 09:36:18.050 2 DEBUG nova.network.neutron [-] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:36:18 compute-0 nova_compute[260935]: 2025-10-11 09:36:18.057 2 DEBUG oslo_concurrency.lockutils [req-84c1aa69-24c8-4e7e-a827-743369a5a798 req-b026976f-5fe9-490f-8b9c-32b8ee4c5f0f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-8d35e49e-efca-4621-a6f7-de650e5272fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:36:18 compute-0 nova_compute[260935]: 2025-10-11 09:36:18.066 2 INFO nova.compute.manager [-] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Took 0.57 seconds to deallocate network for instance.
Oct 11 09:36:18 compute-0 nova_compute[260935]: 2025-10-11 09:36:18.106 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:18 compute-0 nova_compute[260935]: 2025-10-11 09:36:18.107 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:18 compute-0 nova_compute[260935]: 2025-10-11 09:36:18.255 2 DEBUG oslo_concurrency.processutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:36:18 compute-0 ceph-mon[74313]: pgmap v2839: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:36:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:36:18 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2623494835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:36:18 compute-0 nova_compute[260935]: 2025-10-11 09:36:18.715 2 DEBUG oslo_concurrency.processutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:36:18 compute-0 nova_compute[260935]: 2025-10-11 09:36:18.723 2 DEBUG nova.compute.provider_tree [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:36:18 compute-0 nova_compute[260935]: 2025-10-11 09:36:18.755 2 DEBUG nova.scheduler.client.report [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:36:18 compute-0 nova_compute[260935]: 2025-10-11 09:36:18.790 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:18 compute-0 nova_compute[260935]: 2025-10-11 09:36:18.822 2 DEBUG nova.compute.manager [req-51c13e98-4e08-45e8-9ced-ec5d0d6ae073 req-da2df8f6-385e-4f51-bb00-3cbfc6eb6bba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-vif-deleted-28067a9e-43c9-4f82-aeb3-002926889b4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:36:18 compute-0 nova_compute[260935]: 2025-10-11 09:36:18.825 2 INFO nova.scheduler.client.report [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Deleted allocations for instance 8d35e49e-efca-4621-a6f7-de650e5272fd
Oct 11 09:36:18 compute-0 nova_compute[260935]: 2025-10-11 09:36:18.910 2 DEBUG oslo_concurrency.lockutils [None req-eadff6a0-d525-4cae-9d02-3ce08e18351e 14455fee241a4032a1491acddf675e59 ae66871ba5094dd49475fed96fb24be2 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:19 compute-0 nova_compute[260935]: 2025-10-11 09:36:19.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2840: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.2 MiB/s wr, 91 op/s
Oct 11 09:36:19 compute-0 nova_compute[260935]: 2025-10-11 09:36:19.214 2 DEBUG nova.compute.manager [req-5f201fe6-316b-4d6b-9184-e2bd9e158f45 req-d6802a18-4bb8-460a-ae02-7db4b3fe871d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received event network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:36:19 compute-0 nova_compute[260935]: 2025-10-11 09:36:19.215 2 DEBUG oslo_concurrency.lockutils [req-5f201fe6-316b-4d6b-9184-e2bd9e158f45 req-d6802a18-4bb8-460a-ae02-7db4b3fe871d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:19 compute-0 nova_compute[260935]: 2025-10-11 09:36:19.216 2 DEBUG oslo_concurrency.lockutils [req-5f201fe6-316b-4d6b-9184-e2bd9e158f45 req-d6802a18-4bb8-460a-ae02-7db4b3fe871d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:19 compute-0 nova_compute[260935]: 2025-10-11 09:36:19.216 2 DEBUG oslo_concurrency.lockutils [req-5f201fe6-316b-4d6b-9184-e2bd9e158f45 req-d6802a18-4bb8-460a-ae02-7db4b3fe871d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "8d35e49e-efca-4621-a6f7-de650e5272fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:19 compute-0 nova_compute[260935]: 2025-10-11 09:36:19.217 2 DEBUG nova.compute.manager [req-5f201fe6-316b-4d6b-9184-e2bd9e158f45 req-d6802a18-4bb8-460a-ae02-7db4b3fe871d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] No waiting events found dispatching network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:36:19 compute-0 nova_compute[260935]: 2025-10-11 09:36:19.217 2 WARNING nova.compute.manager [req-5f201fe6-316b-4d6b-9184-e2bd9e158f45 req-d6802a18-4bb8-460a-ae02-7db4b3fe871d e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Received unexpected event network-vif-plugged-28067a9e-43c9-4f82-aeb3-002926889b4b for instance with vm_state deleted and task_state None.
Oct 11 09:36:19 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2623494835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:36:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:36:20 compute-0 ceph-mon[74313]: pgmap v2840: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.2 MiB/s wr, 91 op/s
Oct 11 09:36:21 compute-0 sshd-session[417880]: Invalid user github from 155.4.244.179 port 18404
Oct 11 09:36:21 compute-0 sshd-session[417880]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:36:21 compute-0 sshd-session[417880]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:36:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2841: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 27 KiB/s wr, 30 op/s
Oct 11 09:36:22 compute-0 nova_compute[260935]: 2025-10-11 09:36:22.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:22 compute-0 ceph-mon[74313]: pgmap v2841: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 27 KiB/s wr, 30 op/s
Oct 11 09:36:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2842: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 27 KiB/s wr, 30 op/s
Oct 11 09:36:23 compute-0 sshd-session[417880]: Failed password for invalid user github from 155.4.244.179 port 18404 ssh2
Oct 11 09:36:24 compute-0 nova_compute[260935]: 2025-10-11 09:36:24.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:24 compute-0 sshd-session[417880]: Received disconnect from 155.4.244.179 port 18404:11: Bye Bye [preauth]
Oct 11 09:36:24 compute-0 sshd-session[417880]: Disconnected from invalid user github 155.4.244.179 port 18404 [preauth]
Oct 11 09:36:24 compute-0 ceph-mon[74313]: pgmap v2842: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 27 KiB/s wr, 30 op/s
Oct 11 09:36:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:36:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:24.575 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:36:24 compute-0 nova_compute[260935]: 2025-10-11 09:36:24.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:24.577 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:36:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:36:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:36:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:36:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:36:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:36:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:36:25 compute-0 ovn_controller[152945]: 2025-10-11T09:36:25Z|01576|binding|INFO|Releasing lport 675f5b9f-9cb7-4132-af37-955cc69eba82 from this chassis (sb_readonly=0)
Oct 11 09:36:25 compute-0 ovn_controller[152945]: 2025-10-11T09:36:25Z|01577|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:36:25 compute-0 ovn_controller[152945]: 2025-10-11T09:36:25Z|01578|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:36:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2843: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 29 op/s
Oct 11 09:36:25 compute-0 nova_compute[260935]: 2025-10-11 09:36:25.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:25 compute-0 ceph-mon[74313]: pgmap v2843: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 29 op/s
Oct 11 09:36:26 compute-0 nova_compute[260935]: 2025-10-11 09:36:26.672 2 DEBUG nova.compute.manager [req-fcffd95b-a1ff-472d-ad28-bc9ab8e4deb4 req-6f2aeb84-df20-4543-9c8a-ab027e240def e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received event network-changed-75ae5f39-4616-4581-8e22-411bee7c1747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:36:26 compute-0 nova_compute[260935]: 2025-10-11 09:36:26.672 2 DEBUG nova.compute.manager [req-fcffd95b-a1ff-472d-ad28-bc9ab8e4deb4 req-6f2aeb84-df20-4543-9c8a-ab027e240def e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Refreshing instance network info cache due to event network-changed-75ae5f39-4616-4581-8e22-411bee7c1747. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:36:26 compute-0 nova_compute[260935]: 2025-10-11 09:36:26.673 2 DEBUG oslo_concurrency.lockutils [req-fcffd95b-a1ff-472d-ad28-bc9ab8e4deb4 req-6f2aeb84-df20-4543-9c8a-ab027e240def e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:36:26 compute-0 nova_compute[260935]: 2025-10-11 09:36:26.673 2 DEBUG oslo_concurrency.lockutils [req-fcffd95b-a1ff-472d-ad28-bc9ab8e4deb4 req-6f2aeb84-df20-4543-9c8a-ab027e240def e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:36:26 compute-0 nova_compute[260935]: 2025-10-11 09:36:26.674 2 DEBUG nova.network.neutron [req-fcffd95b-a1ff-472d-ad28-bc9ab8e4deb4 req-6f2aeb84-df20-4543-9c8a-ab027e240def e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Refreshing network info cache for port 75ae5f39-4616-4581-8e22-411bee7c1747 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:36:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:36:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3259768337' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:36:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:36:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3259768337' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:36:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3259768337' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:36:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3259768337' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:36:26 compute-0 nova_compute[260935]: 2025-10-11 09:36:26.738 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:26 compute-0 nova_compute[260935]: 2025-10-11 09:36:26.738 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:26 compute-0 nova_compute[260935]: 2025-10-11 09:36:26.739 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:26 compute-0 nova_compute[260935]: 2025-10-11 09:36:26.740 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:26 compute-0 nova_compute[260935]: 2025-10-11 09:36:26.740 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:26 compute-0 nova_compute[260935]: 2025-10-11 09:36:26.742 2 INFO nova.compute.manager [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Terminating instance
Oct 11 09:36:26 compute-0 nova_compute[260935]: 2025-10-11 09:36:26.744 2 DEBUG nova.compute.manager [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:36:26 compute-0 kernel: tap75ae5f39-46 (unregistering): left promiscuous mode
Oct 11 09:36:26 compute-0 NetworkManager[44960]: <info>  [1760175386.8175] device (tap75ae5f39-46): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:36:26 compute-0 ovn_controller[152945]: 2025-10-11T09:36:26Z|01579|binding|INFO|Releasing lport 75ae5f39-4616-4581-8e22-411bee7c1747 from this chassis (sb_readonly=0)
Oct 11 09:36:26 compute-0 ovn_controller[152945]: 2025-10-11T09:36:26Z|01580|binding|INFO|Setting lport 75ae5f39-4616-4581-8e22-411bee7c1747 down in Southbound
Oct 11 09:36:26 compute-0 nova_compute[260935]: 2025-10-11 09:36:26.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:26 compute-0 ovn_controller[152945]: 2025-10-11T09:36:26Z|01581|binding|INFO|Removing iface tap75ae5f39-46 ovn-installed in OVS
Oct 11 09:36:26 compute-0 nova_compute[260935]: 2025-10-11 09:36:26.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:26.845 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:c5:0a 10.100.0.14'], port_security=['fa:16:3e:db:c5:0a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a77fa566-1ab4-484c-b6ef-53471c9f91f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0d9de45-f93f-45ef-aa2e-7cd54a90600b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7c64e7e9-762c-40c5-8997-daeebe19175c c4774add-49ce-4194-9683-2bce3da69dec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2e43ada-251a-46ad-bf0a-b57c9e02b647, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=75ae5f39-4616-4581-8e22-411bee7c1747) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:36:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:26.848 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 75ae5f39-4616-4581-8e22-411bee7c1747 in datapath f0d9de45-f93f-45ef-aa2e-7cd54a90600b unbound from our chassis
Oct 11 09:36:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:26.851 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f0d9de45-f93f-45ef-aa2e-7cd54a90600b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:36:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:26.853 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9785fcc9-aa35-44b4-accf-9a5accab25bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:26.854 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b namespace which is not needed anymore
Oct 11 09:36:26 compute-0 nova_compute[260935]: 2025-10-11 09:36:26.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:26 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Oct 11 09:36:26 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d0000008b.scope: Consumed 16.038s CPU time.
Oct 11 09:36:26 compute-0 systemd-machined[215705]: Machine qemu-163-instance-0000008b terminated.
Oct 11 09:36:26 compute-0 nova_compute[260935]: 2025-10-11 09:36:26.987 2 INFO nova.virt.libvirt.driver [-] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Instance destroyed successfully.
Oct 11 09:36:26 compute-0 nova_compute[260935]: 2025-10-11 09:36:26.988 2 DEBUG nova.objects.instance [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'resources' on Instance uuid a77fa566-1ab4-484c-b6ef-53471c9f91f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.004 2 DEBUG nova.virt.libvirt.vif [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:34:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1880442521',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1880442521',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=139,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCdfdb9R+zsDw6jenCtVzR++AIgDN5ehA4oPbzl8x/m8tfimOhF7LhQ9X4A3F+nSw3/eKnrfnA/yKanSmbwPXrSaCs1ORHouK4S19eKbXKNTi17aP7Q4amYlBjFlUm9aIA==',key_name='tempest-TestSecurityGroupsBasicOps-1403644569',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:35:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-8gi3qbv7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:35:01Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=a77fa566-1ab4-484c-b6ef-53471c9f91f8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.004 2 DEBUG nova.network.os_vif_util [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.005 2 DEBUG nova.network.os_vif_util [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:db:c5:0a,bridge_name='br-int',has_traffic_filtering=True,id=75ae5f39-4616-4581-8e22-411bee7c1747,network=Network(f0d9de45-f93f-45ef-aa2e-7cd54a90600b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ae5f39-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.006 2 DEBUG os_vif [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:c5:0a,bridge_name='br-int',has_traffic_filtering=True,id=75ae5f39-4616-4581-8e22-411bee7c1747,network=Network(f0d9de45-f93f-45ef-aa2e-7cd54a90600b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ae5f39-46') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.007 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75ae5f39-46, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.012 2 INFO os_vif [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:c5:0a,bridge_name='br-int',has_traffic_filtering=True,id=75ae5f39-4616-4581-8e22-411bee7c1747,network=Network(f0d9de45-f93f-45ef-aa2e-7cd54a90600b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ae5f39-46')
Oct 11 09:36:27 compute-0 neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b[415736]: [NOTICE]   (415750) : haproxy version is 2.8.14-c23fe91
Oct 11 09:36:27 compute-0 neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b[415736]: [NOTICE]   (415750) : path to executable is /usr/sbin/haproxy
Oct 11 09:36:27 compute-0 neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b[415736]: [WARNING]  (415750) : Exiting Master process...
Oct 11 09:36:27 compute-0 neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b[415736]: [ALERT]    (415750) : Current worker (415752) exited with code 143 (Terminated)
Oct 11 09:36:27 compute-0 neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b[415736]: [WARNING]  (415750) : All workers exited. Exiting... (0)
Oct 11 09:36:27 compute-0 systemd[1]: libpod-c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37.scope: Deactivated successfully.
Oct 11 09:36:27 compute-0 podman[417908]: 2025-10-11 09:36:27.03304115 +0000 UTC m=+0.051719739 container died c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 09:36:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37-userdata-shm.mount: Deactivated successfully.
Oct 11 09:36:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9779a9262745b18ce2d1c4f4694138a6bb395f088c211d26fdb2e153f47b00b-merged.mount: Deactivated successfully.
Oct 11 09:36:27 compute-0 podman[417908]: 2025-10-11 09:36:27.07739337 +0000 UTC m=+0.096071979 container cleanup c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:36:27 compute-0 systemd[1]: libpod-conmon-c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37.scope: Deactivated successfully.
Oct 11 09:36:27 compute-0 podman[417962]: 2025-10-11 09:36:27.160003196 +0000 UTC m=+0.057036111 container remove c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:36:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.168 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1288e3b3-36aa-48a6-bb80-86d432ec09fc]: (4, ('Sat Oct 11 09:36:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b (c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37)\nc8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37\nSat Oct 11 09:36:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b (c8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37)\nc8a612255054c19d0be4ed190e3fbfb857d6f4a0946c2f20e00a1d51da26ca37\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.171 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[800d42b1-6dc5-43c1-82bb-e881046872d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.174 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0d9de45-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:27 compute-0 kernel: tapf0d9de45-f0: left promiscuous mode
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.185 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[992eb05c-cb90-4112-88ef-f47af4115b42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2844: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 29 op/s
Oct 11 09:36:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.214 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0ecfd9dc-6383-4041-abc6-e8bbb14990ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.216 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5ffca3-ad04-4e17-9c94-e5a94acd969c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.234 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[93a82c1c-0978-40b3-a011-143d39503cde]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714438, 'reachable_time': 41792, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417977, 'error': None, 'target': 'ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:27 compute-0 systemd[1]: run-netns-ovnmeta\x2df0d9de45\x2df93f\x2d45ef\x2daa2e\x2d7cd54a90600b.mount: Deactivated successfully.
Oct 11 09:36:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.236 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f0d9de45-f93f-45ef-aa2e-7cd54a90600b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:36:27 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:27.236 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[ece6194a-37c5-472d-8d8d-55feecdf8066]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.436 2 INFO nova.virt.libvirt.driver [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Deleting instance files /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8_del
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.438 2 INFO nova.virt.libvirt.driver [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Deletion of /var/lib/nova/instances/a77fa566-1ab4-484c-b6ef-53471c9f91f8_del complete
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.496 2 INFO nova.compute.manager [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.497 2 DEBUG oslo.service.loopingcall [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.497 2 DEBUG nova.compute.manager [-] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.498 2 DEBUG nova.network.neutron [-] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:36:27 compute-0 nova_compute[260935]: 2025-10-11 09:36:27.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:36:27 compute-0 ceph-mon[74313]: pgmap v2844: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 29 op/s
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.113 2 DEBUG nova.network.neutron [-] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.129 2 INFO nova.compute.manager [-] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Took 0.63 seconds to deallocate network for instance.
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.174 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.175 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.188 2 DEBUG nova.network.neutron [req-fcffd95b-a1ff-472d-ad28-bc9ab8e4deb4 req-6f2aeb84-df20-4543-9c8a-ab027e240def e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updated VIF entry in instance network info cache for port 75ae5f39-4616-4581-8e22-411bee7c1747. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.189 2 DEBUG nova.network.neutron [req-fcffd95b-a1ff-472d-ad28-bc9ab8e4deb4 req-6f2aeb84-df20-4543-9c8a-ab027e240def e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updating instance_info_cache with network_info: [{"id": "75ae5f39-4616-4581-8e22-411bee7c1747", "address": "fa:16:3e:db:c5:0a", "network": {"id": "f0d9de45-f93f-45ef-aa2e-7cd54a90600b", "bridge": "br-int", "label": "tempest-network-smoke--1509318194", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ae5f39-46", "ovs_interfaceid": "75ae5f39-4616-4581-8e22-411bee7c1747", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.205 2 DEBUG oslo_concurrency.lockutils [req-fcffd95b-a1ff-472d-ad28-bc9ab8e4deb4 req-6f2aeb84-df20-4543-9c8a-ab027e240def e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-a77fa566-1ab4-484c-b6ef-53471c9f91f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.265 2 DEBUG oslo_concurrency.processutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:36:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:36:28 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2525950482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:36:28 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2525950482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:36:28 compute-0 podman[417999]: 2025-10-11 09:36:28.766669582 +0000 UTC m=+0.076949746 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.771 2 DEBUG oslo_concurrency.processutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.779 2 DEBUG nova.compute.provider_tree [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.809 2 DEBUG nova.scheduler.client.report [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.847 2 DEBUG nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received event network-vif-unplugged-75ae5f39-4616-4581-8e22-411bee7c1747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.847 2 DEBUG oslo_concurrency.lockutils [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.848 2 DEBUG oslo_concurrency.lockutils [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.848 2 DEBUG oslo_concurrency.lockutils [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.849 2 DEBUG nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] No waiting events found dispatching network-vif-unplugged-75ae5f39-4616-4581-8e22-411bee7c1747 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.849 2 WARNING nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received unexpected event network-vif-unplugged-75ae5f39-4616-4581-8e22-411bee7c1747 for instance with vm_state deleted and task_state None.
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.849 2 DEBUG nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received event network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.850 2 DEBUG oslo_concurrency.lockutils [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.850 2 DEBUG oslo_concurrency.lockutils [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.850 2 DEBUG oslo_concurrency.lockutils [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.850 2 DEBUG nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] No waiting events found dispatching network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.851 2 WARNING nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received unexpected event network-vif-plugged-75ae5f39-4616-4581-8e22-411bee7c1747 for instance with vm_state deleted and task_state None.
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.851 2 DEBUG nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Received event network-vif-deleted-75ae5f39-4616-4581-8e22-411bee7c1747 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.851 2 INFO nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Neutron deleted interface 75ae5f39-4616-4581-8e22-411bee7c1747; detaching it from the instance and deleting it from the info cache
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.852 2 DEBUG nova.network.neutron [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.891 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.945 2 INFO nova.scheduler.client.report [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Deleted allocations for instance a77fa566-1ab4-484c-b6ef-53471c9f91f8
Oct 11 09:36:28 compute-0 nova_compute[260935]: 2025-10-11 09:36:28.951 2 DEBUG nova.compute.manager [req-b0bd5619-95bc-4045-81a8-4c69b854573e req-9cff2958-134b-4e72-8af3-38ed98d3bc04 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Detach interface failed, port_id=75ae5f39-4616-4581-8e22-411bee7c1747, reason: Instance a77fa566-1ab4-484c-b6ef-53471c9f91f8 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 11 09:36:29 compute-0 nova_compute[260935]: 2025-10-11 09:36:29.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:29 compute-0 nova_compute[260935]: 2025-10-11 09:36:29.118 2 DEBUG oslo_concurrency.lockutils [None req-0ec99d0e-7f11-4333-9136-b40d06e10165 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "a77fa566-1ab4-484c-b6ef-53471c9f91f8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2845: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 57 op/s
Oct 11 09:36:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:36:29 compute-0 ceph-mon[74313]: pgmap v2845: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 57 op/s
Oct 11 09:36:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2846: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:36:31 compute-0 ceph-mon[74313]: pgmap v2846: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:36:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:31.580 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:36:32 compute-0 nova_compute[260935]: 2025-10-11 09:36:32.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:32 compute-0 nova_compute[260935]: 2025-10-11 09:36:32.022 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175377.021556, 8d35e49e-efca-4621-a6f7-de650e5272fd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:36:32 compute-0 nova_compute[260935]: 2025-10-11 09:36:32.022 2 INFO nova.compute.manager [-] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] VM Stopped (Lifecycle Event)
Oct 11 09:36:32 compute-0 nova_compute[260935]: 2025-10-11 09:36:32.046 2 DEBUG nova.compute.manager [None req-2698aad6-082a-4cd8-8540-a52cf04d5d67 - - - - - -] [instance: 8d35e49e-efca-4621-a6f7-de650e5272fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:36:32 compute-0 sshd-session[418020]: Invalid user mysql from 165.232.82.252 port 45320
Oct 11 09:36:33 compute-0 sshd-session[418020]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:36:33 compute-0 sshd-session[418020]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:36:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2847: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:36:33 compute-0 ceph-mon[74313]: pgmap v2847: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:36:33 compute-0 ovn_controller[152945]: 2025-10-11T09:36:33Z|01582|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:36:33 compute-0 ovn_controller[152945]: 2025-10-11T09:36:33Z|01583|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:36:33 compute-0 nova_compute[260935]: 2025-10-11 09:36:33.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:33 compute-0 ovn_controller[152945]: 2025-10-11T09:36:33Z|01584|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:36:33 compute-0 ovn_controller[152945]: 2025-10-11T09:36:33Z|01585|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:36:33 compute-0 nova_compute[260935]: 2025-10-11 09:36:33.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:34 compute-0 nova_compute[260935]: 2025-10-11 09:36:34.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:36:35 compute-0 sshd-session[418020]: Failed password for invalid user mysql from 165.232.82.252 port 45320 ssh2
Oct 11 09:36:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2848: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:36:35 compute-0 ceph-mon[74313]: pgmap v2848: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:36:36 compute-0 sshd-session[418020]: Connection closed by invalid user mysql 165.232.82.252 port 45320 [preauth]
Oct 11 09:36:36 compute-0 podman[418023]: 2025-10-11 09:36:36.802197357 +0000 UTC m=+0.098509069 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:36:37 compute-0 nova_compute[260935]: 2025-10-11 09:36:37.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2849: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:36:37 compute-0 ceph-mon[74313]: pgmap v2849: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:36:39 compute-0 nova_compute[260935]: 2025-10-11 09:36:39.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2850: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:36:39 compute-0 ceph-mon[74313]: pgmap v2850: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:36:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:36:40 compute-0 podman[418044]: 2025-10-11 09:36:40.803261971 +0000 UTC m=+0.094733372 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:36:40 compute-0 podman[418045]: 2025-10-11 09:36:40.837509863 +0000 UTC m=+0.128337595 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:36:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2851: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 11 09:36:41 compute-0 ceph-mon[74313]: pgmap v2851: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 11 09:36:41 compute-0 nova_compute[260935]: 2025-10-11 09:36:41.987 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175386.9852498, a77fa566-1ab4-484c-b6ef-53471c9f91f8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:36:41 compute-0 nova_compute[260935]: 2025-10-11 09:36:41.987 2 INFO nova.compute.manager [-] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] VM Stopped (Lifecycle Event)
Oct 11 09:36:42 compute-0 nova_compute[260935]: 2025-10-11 09:36:42.011 2 DEBUG nova.compute.manager [None req-5d03d261-a293-4717-bcc7-45d0e9e8f085 - - - - - -] [instance: a77fa566-1ab4-484c-b6ef-53471c9f91f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:36:42 compute-0 nova_compute[260935]: 2025-10-11 09:36:42.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2852: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 11 09:36:43 compute-0 ceph-mon[74313]: pgmap v2852: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 11 09:36:44 compute-0 nova_compute[260935]: 2025-10-11 09:36:44.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:36:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2853: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:36:45 compute-0 ceph-mon[74313]: pgmap v2853: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:36:45 compute-0 nova_compute[260935]: 2025-10-11 09:36:45.986 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:45 compute-0 nova_compute[260935]: 2025-10-11 09:36:45.988 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.004 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.096 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.097 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.109 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.110 2 INFO nova.compute.claims [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.300 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.740 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 09:36:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:36:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3001938525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.764 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.773 2 DEBUG nova.compute.provider_tree [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.792 2 DEBUG nova.scheduler.client.report [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:36:46 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3001938525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.818 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.819 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.859 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.860 2 DEBUG nova.network.neutron [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.876 2 INFO nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.895 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.982 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.984 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:36:46 compute-0 nova_compute[260935]: 2025-10-11 09:36:46.985 2 INFO nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Creating image(s)
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.014 2 DEBUG nova.storage.rbd_utils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.045 2 DEBUG nova.storage.rbd_utils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.072 2 DEBUG nova.storage.rbd_utils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.077 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.164 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.166 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.166 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.167 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.194 2 DEBUG nova.storage.rbd_utils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.199 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:36:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2854: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.499 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.584 2 DEBUG nova.storage.rbd_utils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] resizing rbd image 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.711 2 DEBUG nova.objects.instance [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'migration_context' on Instance uuid 1f73fb12-6b6e-4491-81a4-51fb66ffb310 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.770 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.771 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Ensure instance console log exists: /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.772 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.772 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.773 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:47 compute-0 nova_compute[260935]: 2025-10-11 09:36:47.778 2 DEBUG nova.policy [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '489c4d0457354f4684f8b9e53261224f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:36:47 compute-0 ceph-mon[74313]: pgmap v2854: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:36:48 compute-0 nova_compute[260935]: 2025-10-11 09:36:48.610 2 DEBUG nova.network.neutron [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Successfully created port: 25d7271a-bce8-4388-991e-e7069da0eff1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:36:49 compute-0 nova_compute[260935]: 2025-10-11 09:36:49.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2855: 321 pgs: 321 active+clean; 360 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Oct 11 09:36:49 compute-0 ceph-mon[74313]: pgmap v2855: 321 pgs: 321 active+clean; 360 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Oct 11 09:36:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:36:49 compute-0 nova_compute[260935]: 2025-10-11 09:36:49.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:36:49 compute-0 nova_compute[260935]: 2025-10-11 09:36:49.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:36:49 compute-0 nova_compute[260935]: 2025-10-11 09:36:49.710 2 DEBUG nova.network.neutron [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Successfully updated port: 25d7271a-bce8-4388-991e-e7069da0eff1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:36:49 compute-0 nova_compute[260935]: 2025-10-11 09:36:49.725 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:36:49 compute-0 nova_compute[260935]: 2025-10-11 09:36:49.725 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquired lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:36:49 compute-0 nova_compute[260935]: 2025-10-11 09:36:49.725 2 DEBUG nova.network.neutron [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:36:49 compute-0 nova_compute[260935]: 2025-10-11 09:36:49.834 2 DEBUG nova.compute.manager [req-5f104c5a-a72a-4467-97f7-a3a3298f3e62 req-24e00788-677e-4108-b8dd-1aae34efa582 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-changed-25d7271a-bce8-4388-991e-e7069da0eff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:36:49 compute-0 nova_compute[260935]: 2025-10-11 09:36:49.834 2 DEBUG nova.compute.manager [req-5f104c5a-a72a-4467-97f7-a3a3298f3e62 req-24e00788-677e-4108-b8dd-1aae34efa582 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Refreshing instance network info cache due to event network-changed-25d7271a-bce8-4388-991e-e7069da0eff1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:36:49 compute-0 nova_compute[260935]: 2025-10-11 09:36:49.835 2 DEBUG oslo_concurrency.lockutils [req-5f104c5a-a72a-4467-97f7-a3a3298f3e62 req-24e00788-677e-4108-b8dd-1aae34efa582 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:36:50 compute-0 nova_compute[260935]: 2025-10-11 09:36:50.565 2 DEBUG nova.network.neutron [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:36:50 compute-0 nova_compute[260935]: 2025-10-11 09:36:50.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:36:50 compute-0 nova_compute[260935]: 2025-10-11 09:36:50.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:36:50 compute-0 nova_compute[260935]: 2025-10-11 09:36:50.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:50 compute-0 nova_compute[260935]: 2025-10-11 09:36:50.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:50 compute-0 nova_compute[260935]: 2025-10-11 09:36:50.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:50 compute-0 nova_compute[260935]: 2025-10-11 09:36:50.738 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:36:50 compute-0 nova_compute[260935]: 2025-10-11 09:36:50.738 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:36:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:36:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/472120719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:36:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2856: 321 pgs: 321 active+clean; 360 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.228 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:36:51 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/472120719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:36:51 compute-0 ceph-mon[74313]: pgmap v2856: 321 pgs: 321 active+clean; 360 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.387 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.387 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.394 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.395 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.401 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.402 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.698 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.699 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2845MB free_disk=59.81090545654297GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.700 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.700 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.788 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.788 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.789 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.789 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 1f73fb12-6b6e-4491-81a4-51fb66ffb310 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.789 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.789 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:36:51 compute-0 nova_compute[260935]: 2025-10-11 09:36:51.903 2 DEBUG nova.network.neutron [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Updating instance_info_cache with network_info: [{"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.048 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Releasing lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.048 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Instance network_info: |[{"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.050 2 DEBUG oslo_concurrency.lockutils [req-5f104c5a-a72a-4467-97f7-a3a3298f3e62 req-24e00788-677e-4108-b8dd-1aae34efa582 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.050 2 DEBUG nova.network.neutron [req-5f104c5a-a72a-4467-97f7-a3a3298f3e62 req-24e00788-677e-4108-b8dd-1aae34efa582 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Refreshing network info cache for port 25d7271a-bce8-4388-991e-e7069da0eff1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.056 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Start _get_guest_xml network_info=[{"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.058 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.119 2 WARNING nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.136 2 DEBUG nova.virt.libvirt.host [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.137 2 DEBUG nova.virt.libvirt.host [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.141 2 DEBUG nova.virt.libvirt.host [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.142 2 DEBUG nova.virt.libvirt.host [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.143 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.144 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.145 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.145 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.146 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.147 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.147 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.148 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.148 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.149 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.149 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.150 2 DEBUG nova.virt.hardware [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.158 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:36:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:36:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2658467122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.540 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.545 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.564 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:36:52 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2658467122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.591 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.591 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:36:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2025239124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.622 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.650 2 DEBUG nova.storage.rbd_utils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:36:52 compute-0 nova_compute[260935]: 2025-10-11 09:36:52.655 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:36:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:36:53 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1080274345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.098 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.100 2 DEBUG nova.virt.libvirt.vif [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:36:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-2088666113',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-2088666113',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=141,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUdKb5dOuuDVVWeqX4zKoPcjWDfKUs1onbpng/L6Jwhcf0BjMOuWghlJ+p86B+gJd3uhEaIWe8cO4nTPLwQAMa1oAzsF+deBAfCAYSzplFFdsUIepQYO467EdbI9Uqbsw==',key_name='tempest-TestSecurityGroupsBasicOps-402699739',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-1r2upbl3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:36:46Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=1f73fb12-6b6e-4491-81a4-51fb66ffb310,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.100 2 DEBUG nova.network.os_vif_util [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.101 2 DEBUG nova.network.os_vif_util [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:d6:78,bridge_name='br-int',has_traffic_filtering=True,id=25d7271a-bce8-4388-991e-e7069da0eff1,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25d7271a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.102 2 DEBUG nova.objects.instance [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f73fb12-6b6e-4491-81a4-51fb66ffb310 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.119 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:36:53 compute-0 nova_compute[260935]:   <uuid>1f73fb12-6b6e-4491-81a4-51fb66ffb310</uuid>
Oct 11 09:36:53 compute-0 nova_compute[260935]:   <name>instance-0000008d</name>
Oct 11 09:36:53 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:36:53 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:36:53 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-2088666113</nova:name>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:36:52</nova:creationTime>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:36:53 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:36:53 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:36:53 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:36:53 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:36:53 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:36:53 compute-0 nova_compute[260935]:         <nova:user uuid="489c4d0457354f4684f8b9e53261224f">tempest-TestSecurityGroupsBasicOps-607770139-project-member</nova:user>
Oct 11 09:36:53 compute-0 nova_compute[260935]:         <nova:project uuid="81e7096f23df4e7d8782cf98d09d54e9">tempest-TestSecurityGroupsBasicOps-607770139</nova:project>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:36:53 compute-0 nova_compute[260935]:         <nova:port uuid="25d7271a-bce8-4388-991e-e7069da0eff1">
Oct 11 09:36:53 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:36:53 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:36:53 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <system>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <entry name="serial">1f73fb12-6b6e-4491-81a4-51fb66ffb310</entry>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <entry name="uuid">1f73fb12-6b6e-4491-81a4-51fb66ffb310</entry>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     </system>
Oct 11 09:36:53 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:36:53 compute-0 nova_compute[260935]:   <os>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:   </os>
Oct 11 09:36:53 compute-0 nova_compute[260935]:   <features>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:   </features>
Oct 11 09:36:53 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:36:53 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:36:53 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk">
Oct 11 09:36:53 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       </source>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:36:53 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk.config">
Oct 11 09:36:53 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       </source>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:36:53 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:a1:d6:78"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <target dev="tap25d7271a-bc"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310/console.log" append="off"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <video>
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     </video>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:36:53 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:36:53 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:36:53 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:36:53 compute-0 nova_compute[260935]: </domain>
Oct 11 09:36:53 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.120 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Preparing to wait for external event network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.120 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.120 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.121 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.121 2 DEBUG nova.virt.libvirt.vif [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:36:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-2088666113',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-2088666113',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=141,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUdKb5dOuuDVVWeqX4zKoPcjWDfKUs1onbpng/L6Jwhcf0BjMOuWghlJ+p86B+gJd3uhEaIWe8cO4nTPLwQAMa1oAzsF+deBAfCAYSzplFFdsUIepQYO467EdbI9Uqbsw==',key_name='tempest-TestSecurityGroupsBasicOps-402699739',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-1r2upbl3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:36:46Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=1f73fb12-6b6e-4491-81a4-51fb66ffb310,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.121 2 DEBUG nova.network.os_vif_util [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.122 2 DEBUG nova.network.os_vif_util [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:d6:78,bridge_name='br-int',has_traffic_filtering=True,id=25d7271a-bce8-4388-991e-e7069da0eff1,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25d7271a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.122 2 DEBUG os_vif [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:d6:78,bridge_name='br-int',has_traffic_filtering=True,id=25d7271a-bce8-4388-991e-e7069da0eff1,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25d7271a-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.123 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.124 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.126 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25d7271a-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.126 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25d7271a-bc, col_values=(('external_ids', {'iface-id': '25d7271a-bce8-4388-991e-e7069da0eff1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a1:d6:78', 'vm-uuid': '1f73fb12-6b6e-4491-81a4-51fb66ffb310'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:36:53 compute-0 NetworkManager[44960]: <info>  [1760175413.1309] manager: (tap25d7271a-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/606)
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.142 2 INFO os_vif [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:d6:78,bridge_name='br-int',has_traffic_filtering=True,id=25d7271a-bce8-4388-991e-e7069da0eff1,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25d7271a-bc')
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.211 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.211 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.212 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No VIF found with MAC fa:16:3e:a1:d6:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.212 2 INFO nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Using config drive
Oct 11 09:36:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2857: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.253 2 DEBUG nova.storage.rbd_utils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:36:53 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2025239124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:36:53 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1080274345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:36:53 compute-0 ceph-mon[74313]: pgmap v2857: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.586 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.587 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.763 2 INFO nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Creating config drive at /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310/disk.config
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.769 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmparptuho6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.918 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmparptuho6" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.951 2 DEBUG nova.storage.rbd_utils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.955 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310/disk.config 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.996 2 DEBUG nova.network.neutron [req-5f104c5a-a72a-4467-97f7-a3a3298f3e62 req-24e00788-677e-4108-b8dd-1aae34efa582 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Updated VIF entry in instance network info cache for port 25d7271a-bce8-4388-991e-e7069da0eff1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:36:53 compute-0 nova_compute[260935]: 2025-10-11 09:36:53.997 2 DEBUG nova.network.neutron [req-5f104c5a-a72a-4467-97f7-a3a3298f3e62 req-24e00788-677e-4108-b8dd-1aae34efa582 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Updating instance_info_cache with network_info: [{"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:36:54 compute-0 nova_compute[260935]: 2025-10-11 09:36:54.021 2 DEBUG oslo_concurrency.lockutils [req-5f104c5a-a72a-4467-97f7-a3a3298f3e62 req-24e00788-677e-4108-b8dd-1aae34efa582 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:36:54 compute-0 nova_compute[260935]: 2025-10-11 09:36:54.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:54 compute-0 nova_compute[260935]: 2025-10-11 09:36:54.141 2 DEBUG oslo_concurrency.processutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310/disk.config 1f73fb12-6b6e-4491-81a4-51fb66ffb310_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:36:54 compute-0 nova_compute[260935]: 2025-10-11 09:36:54.141 2 INFO nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Deleting local config drive /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310/disk.config because it was imported into RBD.
Oct 11 09:36:54 compute-0 kernel: tap25d7271a-bc: entered promiscuous mode
Oct 11 09:36:54 compute-0 nova_compute[260935]: 2025-10-11 09:36:54.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:54 compute-0 ovn_controller[152945]: 2025-10-11T09:36:54Z|01586|binding|INFO|Claiming lport 25d7271a-bce8-4388-991e-e7069da0eff1 for this chassis.
Oct 11 09:36:54 compute-0 ovn_controller[152945]: 2025-10-11T09:36:54Z|01587|binding|INFO|25d7271a-bce8-4388-991e-e7069da0eff1: Claiming fa:16:3e:a1:d6:78 10.100.0.13
Oct 11 09:36:54 compute-0 NetworkManager[44960]: <info>  [1760175414.2262] manager: (tap25d7271a-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/607)
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.244 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:d6:78 10.100.0.13'], port_security=['fa:16:3e:a1:d6:78 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1f73fb12-6b6e-4491-81a4-51fb66ffb310', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0fdfee67-b394-4b27-bc8b-a70f0c2dfabe aba3d239-1d8c-4eb4-ab69-9421b1db2407', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9ec93d7-4219-431f-a875-bf3609f077df, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=25d7271a-bce8-4388-991e-e7069da0eff1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.247 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 25d7271a-bce8-4388-991e-e7069da0eff1 in datapath 03aa96fb-1646-4924-8eea-e7c8a5518a31 bound to our chassis
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.251 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03aa96fb-1646-4924-8eea-e7c8a5518a31
Oct 11 09:36:54 compute-0 systemd-machined[215705]: New machine qemu-165-instance-0000008d.
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.269 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c98d54ad-1b3e-4652-9472-b977550df518]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.270 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap03aa96fb-11 in ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.273 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap03aa96fb-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.274 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc2a6246-3c19-4afd-8a79-16ea4c7e4abb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.275 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[68936c12-a02e-4421-baf6-57cc75ecf630]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:54 compute-0 systemd[1]: Started Virtual Machine qemu-165-instance-0000008d.
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.291 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[51e84e09-fbf3-473d-8559-1555782f27c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:54 compute-0 systemd-udevd[418461]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.335 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4f73bad5-b603-443b-ba1f-3a23ae185912]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:54 compute-0 NetworkManager[44960]: <info>  [1760175414.3459] device (tap25d7271a-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:36:54 compute-0 NetworkManager[44960]: <info>  [1760175414.3486] device (tap25d7271a-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:36:54 compute-0 nova_compute[260935]: 2025-10-11 09:36:54.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:54 compute-0 ovn_controller[152945]: 2025-10-11T09:36:54Z|01588|binding|INFO|Setting lport 25d7271a-bce8-4388-991e-e7069da0eff1 ovn-installed in OVS
Oct 11 09:36:54 compute-0 ovn_controller[152945]: 2025-10-11T09:36:54Z|01589|binding|INFO|Setting lport 25d7271a-bce8-4388-991e-e7069da0eff1 up in Southbound
Oct 11 09:36:54 compute-0 nova_compute[260935]: 2025-10-11 09:36:54.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.386 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3ebd61a3-856a-44a7-99e2-14bfae627f46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.397 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f2d522-f739-435f-bb96-cbd4bb978f57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:54 compute-0 NetworkManager[44960]: <info>  [1760175414.3992] manager: (tap03aa96fb-10): new Veth device (/org/freedesktop/NetworkManager/Devices/608)
Oct 11 09:36:54 compute-0 systemd-udevd[418465]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.445 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3245e512-b4a5-410d-ae38-b0ffda12dc6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.450 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f365ff-d8c0-4f87-b9da-3ce8dbd522c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:54 compute-0 NetworkManager[44960]: <info>  [1760175414.4828] device (tap03aa96fb-10): carrier: link connected
Oct 11 09:36:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.497 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[c98c33cc-3d36-4e3d-bb8c-c44b5361bbb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.519 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b160923c-35f1-403e-8554-b2257baa070a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03aa96fb-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:60:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 421], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725918, 'reachable_time': 20337, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 418493, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.538 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca984f6-2ad3-418e-b939-d9e175759f70]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:6032'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725918, 'tstamp': 725918}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 418494, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.560 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f3b399-9450-43b3-819c-f42ba4535cb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03aa96fb-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:60:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 421], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725918, 'reachable_time': 20337, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 418495, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:54 compute-0 nova_compute[260935]: 2025-10-11 09:36:54.564 2 DEBUG nova.compute.manager [req-847d668b-09ce-47ce-90db-24f5c3bdea56 req-8563a742-80dc-4888-84cc-2ac15196fedc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:36:54 compute-0 nova_compute[260935]: 2025-10-11 09:36:54.565 2 DEBUG oslo_concurrency.lockutils [req-847d668b-09ce-47ce-90db-24f5c3bdea56 req-8563a742-80dc-4888-84cc-2ac15196fedc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:54 compute-0 nova_compute[260935]: 2025-10-11 09:36:54.565 2 DEBUG oslo_concurrency.lockutils [req-847d668b-09ce-47ce-90db-24f5c3bdea56 req-8563a742-80dc-4888-84cc-2ac15196fedc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:54 compute-0 nova_compute[260935]: 2025-10-11 09:36:54.565 2 DEBUG oslo_concurrency.lockutils [req-847d668b-09ce-47ce-90db-24f5c3bdea56 req-8563a742-80dc-4888-84cc-2ac15196fedc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:54 compute-0 nova_compute[260935]: 2025-10-11 09:36:54.566 2 DEBUG nova.compute.manager [req-847d668b-09ce-47ce-90db-24f5c3bdea56 req-8563a742-80dc-4888-84cc-2ac15196fedc e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Processing event network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.601 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ea4167-4a46-42f8-9a2f-2844ddbd1b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.686 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[67cd4389-2c1f-44cd-965b-7a31ab5eaa5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.688 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03aa96fb-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.689 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.690 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03aa96fb-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:36:54 compute-0 NetworkManager[44960]: <info>  [1760175414.6935] manager: (tap03aa96fb-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Oct 11 09:36:54 compute-0 kernel: tap03aa96fb-10: entered promiscuous mode
Oct 11 09:36:54 compute-0 nova_compute[260935]: 2025-10-11 09:36:54.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.698 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03aa96fb-10, col_values=(('external_ids', {'iface-id': '33c7af53-54c4-4168-8c26-88465029f36a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:36:54 compute-0 ovn_controller[152945]: 2025-10-11T09:36:54Z|01590|binding|INFO|Releasing lport 33c7af53-54c4-4168-8c26-88465029f36a from this chassis (sb_readonly=0)
Oct 11 09:36:54 compute-0 nova_compute[260935]: 2025-10-11 09:36:54.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.702 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/03aa96fb-1646-4924-8eea-e7c8a5518a31.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/03aa96fb-1646-4924-8eea-e7c8a5518a31.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.704 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2ecb418e-2966-4611-9432-3c38706a4825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.705 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-03aa96fb-1646-4924-8eea-e7c8a5518a31
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/03aa96fb-1646-4924-8eea-e7c8a5518a31.pid.haproxy
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 03aa96fb-1646-4924-8eea-e7c8a5518a31
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:36:54 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:36:54.707 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'env', 'PROCESS_TAG=haproxy-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/03aa96fb-1646-4924-8eea-e7c8a5518a31.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:36:54 compute-0 nova_compute[260935]: 2025-10-11 09:36:54.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:36:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:36:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:36:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:36:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:36:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:36:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:36:55
Oct 11 09:36:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:36:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:36:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'vms', 'volumes']
Oct 11 09:36:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.207 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175415.207217, 1f73fb12-6b6e-4491-81a4-51fb66ffb310 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.208 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] VM Started (Lifecycle Event)
Oct 11 09:36:55 compute-0 podman[418567]: 2025-10-11 09:36:55.211349798 +0000 UTC m=+0.081826026 container create f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.211 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.221 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.225 2 INFO nova.virt.libvirt.driver [-] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Instance spawned successfully.
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.225 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:36:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2858: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.231 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.235 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.248 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.249 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.250 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.251 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.251 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.252 2 DEBUG nova.virt.libvirt.driver [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:36:55 compute-0 systemd[1]: Started libpod-conmon-f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5.scope.
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.260 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.260 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175415.2114127, 1f73fb12-6b6e-4491-81a4-51fb66ffb310 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.260 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] VM Paused (Lifecycle Event)
Oct 11 09:36:55 compute-0 podman[418567]: 2025-10-11 09:36:55.174246657 +0000 UTC m=+0.044722935 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:36:55 compute-0 ceph-mon[74313]: pgmap v2858: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.291 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:36:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1249ac7e806de3a44795ea0929a99531493013ad6047c5dc235ff82a29920ead/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.301 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175415.2223327, 1f73fb12-6b6e-4491-81a4-51fb66ffb310 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.301 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] VM Resumed (Lifecycle Event)
Oct 11 09:36:55 compute-0 podman[418567]: 2025-10-11 09:36:55.323316108 +0000 UTC m=+0.193792336 container init f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.329 2 INFO nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Took 8.35 seconds to spawn the instance on the hypervisor.
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.330 2 DEBUG nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:36:55 compute-0 podman[418567]: 2025-10-11 09:36:55.332714792 +0000 UTC m=+0.203190990 container start f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.332 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.345 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:36:55 compute-0 neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31[418582]: [NOTICE]   (418586) : New worker (418588) forked
Oct 11 09:36:55 compute-0 neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31[418582]: [NOTICE]   (418586) : Loading success.
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.373 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.399 2 INFO nova.compute.manager [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Took 9.35 seconds to build instance.
Oct 11 09:36:55 compute-0 nova_compute[260935]: 2025-10-11 09:36:55.414 2 DEBUG oslo_concurrency.lockutils [None req-3f206d69-2a3d-4596-ab30-71315f552b39 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:36:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:36:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:36:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:36:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:36:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:36:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:36:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:36:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:36:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:36:56 compute-0 nova_compute[260935]: 2025-10-11 09:36:56.761 2 DEBUG nova.compute.manager [req-0bb0a453-4935-4461-877a-b5d24fe1faa7 req-9c9835f0-b3eb-4b63-b977-d9ac4b287479 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:36:56 compute-0 nova_compute[260935]: 2025-10-11 09:36:56.761 2 DEBUG oslo_concurrency.lockutils [req-0bb0a453-4935-4461-877a-b5d24fe1faa7 req-9c9835f0-b3eb-4b63-b977-d9ac4b287479 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:36:56 compute-0 nova_compute[260935]: 2025-10-11 09:36:56.762 2 DEBUG oslo_concurrency.lockutils [req-0bb0a453-4935-4461-877a-b5d24fe1faa7 req-9c9835f0-b3eb-4b63-b977-d9ac4b287479 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:36:56 compute-0 nova_compute[260935]: 2025-10-11 09:36:56.762 2 DEBUG oslo_concurrency.lockutils [req-0bb0a453-4935-4461-877a-b5d24fe1faa7 req-9c9835f0-b3eb-4b63-b977-d9ac4b287479 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:36:56 compute-0 nova_compute[260935]: 2025-10-11 09:36:56.762 2 DEBUG nova.compute.manager [req-0bb0a453-4935-4461-877a-b5d24fe1faa7 req-9c9835f0-b3eb-4b63-b977-d9ac4b287479 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] No waiting events found dispatching network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:36:56 compute-0 nova_compute[260935]: 2025-10-11 09:36:56.762 2 WARNING nova.compute.manager [req-0bb0a453-4935-4461-877a-b5d24fe1faa7 req-9c9835f0-b3eb-4b63-b977-d9ac4b287479 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received unexpected event network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 for instance with vm_state active and task_state None.
Oct 11 09:36:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2859: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:36:57 compute-0 ceph-mon[74313]: pgmap v2859: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:36:58 compute-0 nova_compute[260935]: 2025-10-11 09:36:58.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:59 compute-0 nova_compute[260935]: 2025-10-11 09:36:59.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:36:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2860: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:36:59 compute-0 ceph-mon[74313]: pgmap v2860: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:36:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:36:59 compute-0 podman[418597]: 2025-10-11 09:36:59.767763104 +0000 UTC m=+0.073587375 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:37:00 compute-0 ovn_controller[152945]: 2025-10-11T09:37:00Z|01591|binding|INFO|Releasing lport 33c7af53-54c4-4168-8c26-88465029f36a from this chassis (sb_readonly=0)
Oct 11 09:37:00 compute-0 ovn_controller[152945]: 2025-10-11T09:37:00Z|01592|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:37:00 compute-0 ovn_controller[152945]: 2025-10-11T09:37:00Z|01593|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:37:00 compute-0 NetworkManager[44960]: <info>  [1760175420.2281] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/610)
Oct 11 09:37:00 compute-0 NetworkManager[44960]: <info>  [1760175420.2299] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/611)
Oct 11 09:37:00 compute-0 nova_compute[260935]: 2025-10-11 09:37:00.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:00 compute-0 ovn_controller[152945]: 2025-10-11T09:37:00Z|01594|binding|INFO|Releasing lport 33c7af53-54c4-4168-8c26-88465029f36a from this chassis (sb_readonly=0)
Oct 11 09:37:00 compute-0 nova_compute[260935]: 2025-10-11 09:37:00.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:00 compute-0 ovn_controller[152945]: 2025-10-11T09:37:00Z|01595|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:37:00 compute-0 ovn_controller[152945]: 2025-10-11T09:37:00Z|01596|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:37:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2861: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 571 KiB/s wr, 82 op/s
Oct 11 09:37:01 compute-0 ceph-mon[74313]: pgmap v2861: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 571 KiB/s wr, 82 op/s
Oct 11 09:37:01 compute-0 nova_compute[260935]: 2025-10-11 09:37:01.374 2 DEBUG nova.compute.manager [req-b6eed398-d152-4150-879c-1bfec323b707 req-b166ae1e-e675-43cd-a6ab-5cfea68f5380 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-changed-25d7271a-bce8-4388-991e-e7069da0eff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:37:01 compute-0 nova_compute[260935]: 2025-10-11 09:37:01.374 2 DEBUG nova.compute.manager [req-b6eed398-d152-4150-879c-1bfec323b707 req-b166ae1e-e675-43cd-a6ab-5cfea68f5380 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Refreshing instance network info cache due to event network-changed-25d7271a-bce8-4388-991e-e7069da0eff1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:37:01 compute-0 nova_compute[260935]: 2025-10-11 09:37:01.374 2 DEBUG oslo_concurrency.lockutils [req-b6eed398-d152-4150-879c-1bfec323b707 req-b166ae1e-e675-43cd-a6ab-5cfea68f5380 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:37:01 compute-0 nova_compute[260935]: 2025-10-11 09:37:01.375 2 DEBUG oslo_concurrency.lockutils [req-b6eed398-d152-4150-879c-1bfec323b707 req-b166ae1e-e675-43cd-a6ab-5cfea68f5380 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:37:01 compute-0 nova_compute[260935]: 2025-10-11 09:37:01.375 2 DEBUG nova.network.neutron [req-b6eed398-d152-4150-879c-1bfec323b707 req-b166ae1e-e675-43cd-a6ab-5cfea68f5380 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Refreshing network info cache for port 25d7271a-bce8-4388-991e-e7069da0eff1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:37:03 compute-0 nova_compute[260935]: 2025-10-11 09:37:03.088 2 DEBUG nova.network.neutron [req-b6eed398-d152-4150-879c-1bfec323b707 req-b166ae1e-e675-43cd-a6ab-5cfea68f5380 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Updated VIF entry in instance network info cache for port 25d7271a-bce8-4388-991e-e7069da0eff1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:37:03 compute-0 nova_compute[260935]: 2025-10-11 09:37:03.088 2 DEBUG nova.network.neutron [req-b6eed398-d152-4150-879c-1bfec323b707 req-b166ae1e-e675-43cd-a6ab-5cfea68f5380 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Updating instance_info_cache with network_info: [{"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:37:03 compute-0 nova_compute[260935]: 2025-10-11 09:37:03.110 2 DEBUG oslo_concurrency.lockutils [req-b6eed398-d152-4150-879c-1bfec323b707 req-b166ae1e-e675-43cd-a6ab-5cfea68f5380 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:37:03 compute-0 nova_compute[260935]: 2025-10-11 09:37:03.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2862: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 571 KiB/s wr, 82 op/s
Oct 11 09:37:03 compute-0 ceph-mon[74313]: pgmap v2862: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 571 KiB/s wr, 82 op/s
Oct 11 09:37:04 compute-0 nova_compute[260935]: 2025-10-11 09:37:04.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2863: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:37:05 compute-0 ceph-mon[74313]: pgmap v2863: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:37:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:37:05 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 11 09:37:06 compute-0 ovn_controller[152945]: 2025-10-11T09:37:06Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a1:d6:78 10.100.0.13
Oct 11 09:37:06 compute-0 ovn_controller[152945]: 2025-10-11T09:37:06Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a1:d6:78 10.100.0.13
Oct 11 09:37:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2864: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:37:07 compute-0 ceph-mon[74313]: pgmap v2864: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:37:07 compute-0 podman[418617]: 2025-10-11 09:37:07.755571853 +0000 UTC m=+0.056465255 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 11 09:37:08 compute-0 nova_compute[260935]: 2025-10-11 09:37:08.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:09 compute-0 nova_compute[260935]: 2025-10-11 09:37:09.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2865: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Oct 11 09:37:09 compute-0 ceph-mon[74313]: pgmap v2865: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Oct 11 09:37:09 compute-0 sshd-session[418638]: Invalid user mysql from 165.232.82.252 port 33138
Oct 11 09:37:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:37:09 compute-0 sshd-session[418638]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:37:09 compute-0 sshd-session[418638]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:37:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2866: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:37:11 compute-0 ceph-mon[74313]: pgmap v2866: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:37:11 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 09:37:11 compute-0 podman[418641]: 2025-10-11 09:37:11.772243212 +0000 UTC m=+0.066842956 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:37:11 compute-0 podman[418642]: 2025-10-11 09:37:11.828379396 +0000 UTC m=+0.109712758 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:37:11 compute-0 sshd-session[418638]: Failed password for invalid user mysql from 165.232.82.252 port 33138 ssh2
Oct 11 09:37:12 compute-0 sshd-session[418638]: Connection closed by invalid user mysql 165.232.82.252 port 33138 [preauth]
Oct 11 09:37:13 compute-0 nova_compute[260935]: 2025-10-11 09:37:13.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2867: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:37:13 compute-0 ceph-mon[74313]: pgmap v2867: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:37:13 compute-0 sudo[418688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:37:13 compute-0 sudo[418688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:13 compute-0 sudo[418688]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:13 compute-0 sudo[418713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:37:13 compute-0 sudo[418713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:13 compute-0 sudo[418713]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:13 compute-0 sudo[418738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:37:13 compute-0 sudo[418738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:13 compute-0 sudo[418738]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:13 compute-0 sudo[418763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 09:37:13 compute-0 sudo[418763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:14 compute-0 nova_compute[260935]: 2025-10-11 09:37:14.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:37:14 compute-0 podman[418860]: 2025-10-11 09:37:14.88768024 +0000 UTC m=+0.407159781 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 09:37:15 compute-0 podman[418880]: 2025-10-11 09:37:15.084152971 +0000 UTC m=+0.069305485 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:37:15 compute-0 podman[418860]: 2025-10-11 09:37:15.163226729 +0000 UTC m=+0.682706250 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 09:37:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:15.233 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:15.234 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:15.235 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2868: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:37:15 compute-0 ceph-mon[74313]: pgmap v2868: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:37:16 compute-0 sudo[418763]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:37:16 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:37:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:37:16 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:37:16 compute-0 sudo[419020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:37:16 compute-0 sudo[419020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:16 compute-0 sudo[419020]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:16 compute-0 sudo[419045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:37:16 compute-0 sudo[419045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:16 compute-0 sudo[419045]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:16 compute-0 sudo[419070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:37:16 compute-0 sudo[419070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:16 compute-0 sudo[419070]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:16 compute-0 sudo[419095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:37:16 compute-0 sudo[419095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:17 compute-0 sudo[419095]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:37:17 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:37:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:37:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:37:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:37:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2869: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:37:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:37:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:37:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:37:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ee4fd2c5-39b4-47a2-966f-16699a37a662 does not exist
Oct 11 09:37:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev db883301-b3a9-44d7-bc92-92e19c4bb7fd does not exist
Oct 11 09:37:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev fcc630c8-6f9a-4baa-b3ad-149a3f957c02 does not exist
Oct 11 09:37:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:37:17 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:37:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:37:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:37:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:37:17 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:37:17 compute-0 sudo[419151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:37:17 compute-0 sudo[419151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:17 compute-0 sudo[419151]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:17 compute-0 sudo[419176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:37:17 compute-0 sudo[419176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:17 compute-0 sudo[419176]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:17 compute-0 sudo[419201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:37:17 compute-0 sudo[419201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:17 compute-0 sudo[419201]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:17 compute-0 sudo[419226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:37:17 compute-0 sudo[419226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:18 compute-0 nova_compute[260935]: 2025-10-11 09:37:18.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:18 compute-0 podman[419293]: 2025-10-11 09:37:18.171291506 +0000 UTC m=+0.026749631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:37:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:37:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:37:18 compute-0 ceph-mon[74313]: pgmap v2869: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:37:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:37:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:37:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:37:18 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:37:18 compute-0 podman[419293]: 2025-10-11 09:37:18.37108195 +0000 UTC m=+0.226539985 container create e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:37:18 compute-0 systemd[1]: Started libpod-conmon-e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9.scope.
Oct 11 09:37:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:37:18 compute-0 podman[419293]: 2025-10-11 09:37:18.804196248 +0000 UTC m=+0.659654363 container init e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:37:18 compute-0 podman[419293]: 2025-10-11 09:37:18.815580707 +0000 UTC m=+0.671038762 container start e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:37:18 compute-0 interesting_nobel[419309]: 167 167
Oct 11 09:37:18 compute-0 systemd[1]: libpod-e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9.scope: Deactivated successfully.
Oct 11 09:37:19 compute-0 podman[419293]: 2025-10-11 09:37:19.009027223 +0000 UTC m=+0.864485258 container attach e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:37:19 compute-0 podman[419293]: 2025-10-11 09:37:19.009415024 +0000 UTC m=+0.864873059 container died e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 09:37:19 compute-0 nova_compute[260935]: 2025-10-11 09:37:19.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2870: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:37:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3a9ce8e260c855a2fb27e2e00f9f05895a6ebbaf560fae4bdf49038cd7d1362-merged.mount: Deactivated successfully.
Oct 11 09:37:19 compute-0 ceph-mon[74313]: pgmap v2870: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:37:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:37:19 compute-0 podman[419293]: 2025-10-11 09:37:19.91811214 +0000 UTC m=+1.773570205 container remove e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 09:37:20 compute-0 systemd[1]: libpod-conmon-e74a230b8739b707ccf56b2d8c5eb5fdaa1683baa27b68a8739072fa5beaa2b9.scope: Deactivated successfully.
Oct 11 09:37:20 compute-0 podman[419333]: 2025-10-11 09:37:20.193242037 +0000 UTC m=+0.118079783 container create cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 09:37:20 compute-0 podman[419333]: 2025-10-11 09:37:20.105828945 +0000 UTC m=+0.030666681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:37:20 compute-0 nova_compute[260935]: 2025-10-11 09:37:20.231 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "27b27eae-7476-459a-b3e1-62199c81d4e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:20 compute-0 nova_compute[260935]: 2025-10-11 09:37:20.233 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:20 compute-0 nova_compute[260935]: 2025-10-11 09:37:20.259 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:37:20 compute-0 nova_compute[260935]: 2025-10-11 09:37:20.342 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:20 compute-0 nova_compute[260935]: 2025-10-11 09:37:20.343 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:20 compute-0 nova_compute[260935]: 2025-10-11 09:37:20.360 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:37:20 compute-0 nova_compute[260935]: 2025-10-11 09:37:20.360 2 INFO nova.compute.claims [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:37:20 compute-0 systemd[1]: Started libpod-conmon-cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e.scope.
Oct 11 09:37:20 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab9e1fd1a2142c5e1d2b3a2eae6374c0e118879c058481fe07c55b2dc87e5e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab9e1fd1a2142c5e1d2b3a2eae6374c0e118879c058481fe07c55b2dc87e5e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab9e1fd1a2142c5e1d2b3a2eae6374c0e118879c058481fe07c55b2dc87e5e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab9e1fd1a2142c5e1d2b3a2eae6374c0e118879c058481fe07c55b2dc87e5e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab9e1fd1a2142c5e1d2b3a2eae6374c0e118879c058481fe07c55b2dc87e5e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:37:20 compute-0 podman[419333]: 2025-10-11 09:37:20.462087287 +0000 UTC m=+0.386925053 container init cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:37:20 compute-0 podman[419333]: 2025-10-11 09:37:20.471584684 +0000 UTC m=+0.396422440 container start cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:37:20 compute-0 nova_compute[260935]: 2025-10-11 09:37:20.538 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:37:20 compute-0 podman[419333]: 2025-10-11 09:37:20.653627179 +0000 UTC m=+0.578464915 container attach cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 09:37:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:37:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2225361393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.052 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.062 2 DEBUG nova.compute.provider_tree [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.083 2 DEBUG nova.scheduler.client.report [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:37:21 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2225361393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.126 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.127 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.175 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.177 2 DEBUG nova.network.neutron [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.195 2 INFO nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.212 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:37:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2871: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 0 op/s
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.295 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.299 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.300 2 INFO nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Creating image(s)
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.335 2 DEBUG nova.storage.rbd_utils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 27b27eae-7476-459a-b3e1-62199c81d4e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.377 2 DEBUG nova.storage.rbd_utils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 27b27eae-7476-459a-b3e1-62199c81d4e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.414 2 DEBUG nova.storage.rbd_utils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 27b27eae-7476-459a-b3e1-62199c81d4e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.422 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:37:21 compute-0 epic_bell[419350]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:37:21 compute-0 epic_bell[419350]: --> relative data size: 1.0
Oct 11 09:37:21 compute-0 epic_bell[419350]: --> All data devices are unavailable
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.506 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.507 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.508 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.508 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:21 compute-0 systemd[1]: libpod-cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e.scope: Deactivated successfully.
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.544 2 DEBUG nova.storage.rbd_utils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 27b27eae-7476-459a-b3e1-62199c81d4e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.551 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 27b27eae-7476-459a-b3e1-62199c81d4e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:37:21 compute-0 podman[419458]: 2025-10-11 09:37:21.571088282 +0000 UTC m=+0.036655349 container died cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 09:37:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-aab9e1fd1a2142c5e1d2b3a2eae6374c0e118879c058481fe07c55b2dc87e5e3-merged.mount: Deactivated successfully.
Oct 11 09:37:21 compute-0 nova_compute[260935]: 2025-10-11 09:37:21.687 2 DEBUG nova.policy [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '489c4d0457354f4684f8b9e53261224f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:37:21 compute-0 podman[419458]: 2025-10-11 09:37:21.743629262 +0000 UTC m=+0.209196249 container remove cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 09:37:21 compute-0 systemd[1]: libpod-conmon-cf9155b5ff473e93bf61de270b47f8edd4fa36b3be82bdc90d3fe5d4caf0812e.scope: Deactivated successfully.
Oct 11 09:37:21 compute-0 sudo[419226]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:21 compute-0 sudo[419511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:37:21 compute-0 sudo[419511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:21 compute-0 sudo[419511]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:21 compute-0 sudo[419536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:37:21 compute-0 sudo[419536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:21 compute-0 sudo[419536]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:22 compute-0 sudo[419561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:37:22 compute-0 sudo[419561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:22 compute-0 sudo[419561]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:22 compute-0 sudo[419586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:37:22 compute-0 sudo[419586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:22 compute-0 ceph-mon[74313]: pgmap v2871: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 0 op/s
Oct 11 09:37:22 compute-0 nova_compute[260935]: 2025-10-11 09:37:22.307 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 27b27eae-7476-459a-b3e1-62199c81d4e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.757s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:37:22 compute-0 nova_compute[260935]: 2025-10-11 09:37:22.382 2 DEBUG nova.storage.rbd_utils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] resizing rbd image 27b27eae-7476-459a-b3e1-62199c81d4e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:37:22 compute-0 podman[419707]: 2025-10-11 09:37:22.499346708 +0000 UTC m=+0.069447709 container create 59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:37:22 compute-0 podman[419707]: 2025-10-11 09:37:22.458253445 +0000 UTC m=+0.028354486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:37:22 compute-0 systemd[1]: Started libpod-conmon-59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11.scope.
Oct 11 09:37:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:37:22 compute-0 podman[419707]: 2025-10-11 09:37:22.676779814 +0000 UTC m=+0.246880865 container init 59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:37:22 compute-0 nova_compute[260935]: 2025-10-11 09:37:22.689 2 DEBUG nova.objects.instance [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'migration_context' on Instance uuid 27b27eae-7476-459a-b3e1-62199c81d4e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:37:22 compute-0 podman[419707]: 2025-10-11 09:37:22.692833325 +0000 UTC m=+0.262934366 container start 59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:37:22 compute-0 kind_dijkstra[419723]: 167 167
Oct 11 09:37:22 compute-0 systemd[1]: libpod-59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11.scope: Deactivated successfully.
Oct 11 09:37:22 compute-0 podman[419707]: 2025-10-11 09:37:22.706622191 +0000 UTC m=+0.276723252 container attach 59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:37:22 compute-0 podman[419707]: 2025-10-11 09:37:22.708130574 +0000 UTC m=+0.278231605 container died 59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:37:22 compute-0 nova_compute[260935]: 2025-10-11 09:37:22.724 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:37:22 compute-0 nova_compute[260935]: 2025-10-11 09:37:22.725 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Ensure instance console log exists: /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:37:22 compute-0 nova_compute[260935]: 2025-10-11 09:37:22.726 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:22 compute-0 nova_compute[260935]: 2025-10-11 09:37:22.727 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:22 compute-0 nova_compute[260935]: 2025-10-11 09:37:22.727 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e40c1cbe71f541f8e24b35359a6c320d245fedf6fbc580c87e9a7718ef3f963-merged.mount: Deactivated successfully.
Oct 11 09:37:22 compute-0 nova_compute[260935]: 2025-10-11 09:37:22.782 2 DEBUG nova.network.neutron [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Successfully created port: 0110d16a-ba47-4f17-8196-6a5b11e482b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:37:22 compute-0 podman[419707]: 2025-10-11 09:37:22.896396783 +0000 UTC m=+0.466497794 container remove 59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 09:37:22 compute-0 systemd[1]: libpod-conmon-59797a5ef2ca90f121707b901900832f203c778fb603b4ec5cc9088193ea6c11.scope: Deactivated successfully.
Oct 11 09:37:23 compute-0 podman[419767]: 2025-10-11 09:37:23.093650166 +0000 UTC m=+0.021718101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:37:23 compute-0 nova_compute[260935]: 2025-10-11 09:37:23.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:23 compute-0 podman[419767]: 2025-10-11 09:37:23.233345124 +0000 UTC m=+0.161413039 container create b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:37:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2872: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 11 09:37:23 compute-0 systemd[1]: Started libpod-conmon-b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060.scope.
Oct 11 09:37:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac20e729bd30e2066555d9f1a06f8c8d9199a33569ca164a18cfafd3b9f2b39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac20e729bd30e2066555d9f1a06f8c8d9199a33569ca164a18cfafd3b9f2b39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac20e729bd30e2066555d9f1a06f8c8d9199a33569ca164a18cfafd3b9f2b39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac20e729bd30e2066555d9f1a06f8c8d9199a33569ca164a18cfafd3b9f2b39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:37:23 compute-0 ceph-mon[74313]: pgmap v2872: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 11 09:37:23 compute-0 podman[419767]: 2025-10-11 09:37:23.490883157 +0000 UTC m=+0.418951152 container init b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 09:37:23 compute-0 podman[419767]: 2025-10-11 09:37:23.50919473 +0000 UTC m=+0.437262665 container start b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goodall, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:37:23 compute-0 podman[419767]: 2025-10-11 09:37:23.551551238 +0000 UTC m=+0.479619243 container attach b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goodall, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 09:37:23 compute-0 nova_compute[260935]: 2025-10-11 09:37:23.877 2 DEBUG nova.network.neutron [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Successfully updated port: 0110d16a-ba47-4f17-8196-6a5b11e482b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:37:23 compute-0 nova_compute[260935]: 2025-10-11 09:37:23.894 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "refresh_cache-27b27eae-7476-459a-b3e1-62199c81d4e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:37:23 compute-0 nova_compute[260935]: 2025-10-11 09:37:23.894 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquired lock "refresh_cache-27b27eae-7476-459a-b3e1-62199c81d4e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:37:23 compute-0 nova_compute[260935]: 2025-10-11 09:37:23.894 2 DEBUG nova.network.neutron [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:37:23 compute-0 nova_compute[260935]: 2025-10-11 09:37:23.974 2 DEBUG nova.compute.manager [req-20904805-f1e5-4a16-ad04-6075d7cd35e2 req-46dc4c43-2eae-4ebc-91c9-3b83d366f528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received event network-changed-0110d16a-ba47-4f17-8196-6a5b11e482b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:37:23 compute-0 nova_compute[260935]: 2025-10-11 09:37:23.974 2 DEBUG nova.compute.manager [req-20904805-f1e5-4a16-ad04-6075d7cd35e2 req-46dc4c43-2eae-4ebc-91c9-3b83d366f528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Refreshing instance network info cache due to event network-changed-0110d16a-ba47-4f17-8196-6a5b11e482b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:37:23 compute-0 nova_compute[260935]: 2025-10-11 09:37:23.975 2 DEBUG oslo_concurrency.lockutils [req-20904805-f1e5-4a16-ad04-6075d7cd35e2 req-46dc4c43-2eae-4ebc-91c9-3b83d366f528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-27b27eae-7476-459a-b3e1-62199c81d4e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.034 2 DEBUG nova.network.neutron [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:24 compute-0 admiring_goodall[419783]: {
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:     "0": [
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:         {
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "devices": [
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "/dev/loop3"
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             ],
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "lv_name": "ceph_lv0",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "lv_size": "21470642176",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "name": "ceph_lv0",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "tags": {
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.cluster_name": "ceph",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.crush_device_class": "",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.encrypted": "0",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.osd_id": "0",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.type": "block",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.vdo": "0"
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             },
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "type": "block",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "vg_name": "ceph_vg0"
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:         }
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:     ],
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:     "1": [
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:         {
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "devices": [
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "/dev/loop4"
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             ],
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "lv_name": "ceph_lv1",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "lv_size": "21470642176",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "name": "ceph_lv1",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "tags": {
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.cluster_name": "ceph",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.crush_device_class": "",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.encrypted": "0",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.osd_id": "1",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.type": "block",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.vdo": "0"
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             },
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "type": "block",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "vg_name": "ceph_vg1"
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:         }
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:     ],
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:     "2": [
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:         {
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "devices": [
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "/dev/loop5"
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             ],
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "lv_name": "ceph_lv2",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "lv_size": "21470642176",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "name": "ceph_lv2",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "tags": {
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.cluster_name": "ceph",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.crush_device_class": "",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.encrypted": "0",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.osd_id": "2",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.type": "block",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:                 "ceph.vdo": "0"
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             },
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "type": "block",
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:             "vg_name": "ceph_vg2"
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:         }
Oct 11 09:37:24 compute-0 admiring_goodall[419783]:     ]
Oct 11 09:37:24 compute-0 admiring_goodall[419783]: }
Oct 11 09:37:24 compute-0 systemd[1]: libpod-b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060.scope: Deactivated successfully.
Oct 11 09:37:24 compute-0 podman[419767]: 2025-10-11 09:37:24.288310073 +0000 UTC m=+1.216377998 container died b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goodall, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 09:37:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ac20e729bd30e2066555d9f1a06f8c8d9199a33569ca164a18cfafd3b9f2b39-merged.mount: Deactivated successfully.
Oct 11 09:37:24 compute-0 podman[419767]: 2025-10-11 09:37:24.354956862 +0000 UTC m=+1.283024817 container remove b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:37:24 compute-0 systemd[1]: libpod-conmon-b164a2a9eddab52a2c459e3bf7dbf585b8cf083436ce2b4ecd401b82ad138060.scope: Deactivated successfully.
Oct 11 09:37:24 compute-0 sudo[419586]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:24 compute-0 sudo[419804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:37:24 compute-0 sudo[419804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:24 compute-0 sudo[419804]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:24 compute-0 sudo[419829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:37:24 compute-0 sudo[419829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:24 compute-0 sudo[419829]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:24 compute-0 sudo[419854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:37:24 compute-0 sudo[419854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:24 compute-0 sudo[419854]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:37:24 compute-0 sudo[419879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:37:24 compute-0 sudo[419879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:37:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:37:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:37:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:37:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:37:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.861 2 DEBUG nova.network.neutron [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Updating instance_info_cache with network_info: [{"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.902 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Releasing lock "refresh_cache-27b27eae-7476-459a-b3e1-62199c81d4e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.903 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Instance network_info: |[{"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.903 2 DEBUG oslo_concurrency.lockutils [req-20904805-f1e5-4a16-ad04-6075d7cd35e2 req-46dc4c43-2eae-4ebc-91c9-3b83d366f528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-27b27eae-7476-459a-b3e1-62199c81d4e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.903 2 DEBUG nova.network.neutron [req-20904805-f1e5-4a16-ad04-6075d7cd35e2 req-46dc4c43-2eae-4ebc-91c9-3b83d366f528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Refreshing network info cache for port 0110d16a-ba47-4f17-8196-6a5b11e482b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.907 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Start _get_guest_xml network_info=[{"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.914 2 WARNING nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.928 2 DEBUG nova.virt.libvirt.host [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.929 2 DEBUG nova.virt.libvirt.host [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.933 2 DEBUG nova.virt.libvirt.host [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.934 2 DEBUG nova.virt.libvirt.host [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.935 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.935 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.936 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.937 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.937 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.937 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.938 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.938 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.939 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.939 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.939 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.940 2 DEBUG nova.virt.hardware [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:37:24 compute-0 nova_compute[260935]: 2025-10-11 09:37:24.949 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:37:25 compute-0 podman[419947]: 2025-10-11 09:37:25.03658502 +0000 UTC m=+0.035080795 container create 2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_villani, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:37:25 compute-0 systemd[1]: Started libpod-conmon-2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c.scope.
Oct 11 09:37:25 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:37:25 compute-0 podman[419947]: 2025-10-11 09:37:25.02266949 +0000 UTC m=+0.021165285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:37:25 compute-0 podman[419947]: 2025-10-11 09:37:25.118341623 +0000 UTC m=+0.116837418 container init 2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_villani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 11 09:37:25 compute-0 podman[419947]: 2025-10-11 09:37:25.125592827 +0000 UTC m=+0.124088602 container start 2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 09:37:25 compute-0 podman[419947]: 2025-10-11 09:37:25.128672713 +0000 UTC m=+0.127168488 container attach 2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_villani, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:37:25 compute-0 ecstatic_villani[419963]: 167 167
Oct 11 09:37:25 compute-0 systemd[1]: libpod-2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c.scope: Deactivated successfully.
Oct 11 09:37:25 compute-0 conmon[419963]: conmon 2a1da028167916aa5b66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c.scope/container/memory.events
Oct 11 09:37:25 compute-0 podman[419947]: 2025-10-11 09:37:25.131921034 +0000 UTC m=+0.130416809 container died 2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_villani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:37:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e9ea8233646edfb89339b23cdbe22a247620476d8590539113002f0e3faf47f-merged.mount: Deactivated successfully.
Oct 11 09:37:25 compute-0 podman[419947]: 2025-10-11 09:37:25.165860616 +0000 UTC m=+0.164356391 container remove 2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_villani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:37:25 compute-0 systemd[1]: libpod-conmon-2a1da028167916aa5b667e04dde1bcf883c5af6d312ed01d866c798a6bc5875c.scope: Deactivated successfully.
Oct 11 09:37:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2873: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 23 op/s
Oct 11 09:37:25 compute-0 ceph-mon[74313]: pgmap v2873: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 23 op/s
Oct 11 09:37:25 compute-0 podman[420005]: 2025-10-11 09:37:25.374378094 +0000 UTC m=+0.053893912 container create 6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galileo, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 09:37:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:37:25 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1599478162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.419 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:37:25 compute-0 systemd[1]: Started libpod-conmon-6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c.scope.
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.448 2 DEBUG nova.storage.rbd_utils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 27b27eae-7476-459a-b3e1-62199c81d4e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:37:25 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1cdcdb1183aa2e86fd801d112bc065a7027e19904460d8bd33886acedb1295d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:37:25 compute-0 podman[420005]: 2025-10-11 09:37:25.358570861 +0000 UTC m=+0.038086689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1cdcdb1183aa2e86fd801d112bc065a7027e19904460d8bd33886acedb1295d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1cdcdb1183aa2e86fd801d112bc065a7027e19904460d8bd33886acedb1295d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1cdcdb1183aa2e86fd801d112bc065a7027e19904460d8bd33886acedb1295d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:37:25 compute-0 podman[420005]: 2025-10-11 09:37:25.463856234 +0000 UTC m=+0.143372062 container init 6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.467 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:37:25 compute-0 podman[420005]: 2025-10-11 09:37:25.472310041 +0000 UTC m=+0.151825839 container start 6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galileo, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 11 09:37:25 compute-0 podman[420005]: 2025-10-11 09:37:25.477139087 +0000 UTC m=+0.156654915 container attach 6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galileo, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:37:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:37:25 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1024536959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.867 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.871 2 DEBUG nova.virt.libvirt.vif [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:37:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-0-202990093',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-0-202990093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=142,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUdKb5dOuuDVVWeqX4zKoPcjWDfKUs1onbpng/L6Jwhcf0BjMOuWghlJ+p86B+gJd3uhEaIWe8cO4nTPLwQAMa1oAzsF+deBAfCAYSzplFFdsUIepQYO467EdbI9Uqbsw==',key_name='tempest-TestSecurityGroupsBasicOps-402699739',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-0rby20lg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:37:21Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=27b27eae-7476-459a-b3e1-62199c81d4e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.871 2 DEBUG nova.network.os_vif_util [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.873 2 DEBUG nova.network.os_vif_util [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:9d:c0,bridge_name='br-int',has_traffic_filtering=True,id=0110d16a-ba47-4f17-8196-6a5b11e482b3,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0110d16a-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.875 2 DEBUG nova.objects.instance [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 27b27eae-7476-459a-b3e1-62199c81d4e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.894 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:37:25 compute-0 nova_compute[260935]:   <uuid>27b27eae-7476-459a-b3e1-62199c81d4e1</uuid>
Oct 11 09:37:25 compute-0 nova_compute[260935]:   <name>instance-0000008e</name>
Oct 11 09:37:25 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:37:25 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:37:25 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-0-202990093</nova:name>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:37:24</nova:creationTime>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:37:25 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:37:25 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:37:25 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:37:25 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:37:25 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:37:25 compute-0 nova_compute[260935]:         <nova:user uuid="489c4d0457354f4684f8b9e53261224f">tempest-TestSecurityGroupsBasicOps-607770139-project-member</nova:user>
Oct 11 09:37:25 compute-0 nova_compute[260935]:         <nova:project uuid="81e7096f23df4e7d8782cf98d09d54e9">tempest-TestSecurityGroupsBasicOps-607770139</nova:project>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:37:25 compute-0 nova_compute[260935]:         <nova:port uuid="0110d16a-ba47-4f17-8196-6a5b11e482b3">
Oct 11 09:37:25 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:37:25 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:37:25 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <system>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <entry name="serial">27b27eae-7476-459a-b3e1-62199c81d4e1</entry>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <entry name="uuid">27b27eae-7476-459a-b3e1-62199c81d4e1</entry>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     </system>
Oct 11 09:37:25 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:37:25 compute-0 nova_compute[260935]:   <os>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:   </os>
Oct 11 09:37:25 compute-0 nova_compute[260935]:   <features>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:   </features>
Oct 11 09:37:25 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:37:25 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:37:25 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/27b27eae-7476-459a-b3e1-62199c81d4e1_disk">
Oct 11 09:37:25 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       </source>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:37:25 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/27b27eae-7476-459a-b3e1-62199c81d4e1_disk.config">
Oct 11 09:37:25 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       </source>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:37:25 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:32:9d:c0"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <target dev="tap0110d16a-ba"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1/console.log" append="off"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <video>
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     </video>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:37:25 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:37:25 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:37:25 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:37:25 compute-0 nova_compute[260935]: </domain>
Oct 11 09:37:25 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.895 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Preparing to wait for external event network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.895 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.896 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.896 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.897 2 DEBUG nova.virt.libvirt.vif [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:37:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-0-202990093',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-0-202990093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=142,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUdKb5dOuuDVVWeqX4zKoPcjWDfKUs1onbpng/L6Jwhcf0BjMOuWghlJ+p86B+gJd3uhEaIWe8cO4nTPLwQAMa1oAzsF+deBAfCAYSzplFFdsUIepQYO467EdbI9Uqbsw==',key_name='tempest-TestSecurityGroupsBasicOps-402699739',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-0rby20lg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:37:21Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=27b27eae-7476-459a-b3e1-62199c81d4e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.898 2 DEBUG nova.network.os_vif_util [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.899 2 DEBUG nova.network.os_vif_util [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:9d:c0,bridge_name='br-int',has_traffic_filtering=True,id=0110d16a-ba47-4f17-8196-6a5b11e482b3,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0110d16a-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.900 2 DEBUG os_vif [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:9d:c0,bridge_name='br-int',has_traffic_filtering=True,id=0110d16a-ba47-4f17-8196-6a5b11e482b3,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0110d16a-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.903 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.908 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0110d16a-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0110d16a-ba, col_values=(('external_ids', {'iface-id': '0110d16a-ba47-4f17-8196-6a5b11e482b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:9d:c0', 'vm-uuid': '27b27eae-7476-459a-b3e1-62199c81d4e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:25 compute-0 NetworkManager[44960]: <info>  [1760175445.9135] manager: (tap0110d16a-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/612)
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.922 2 INFO os_vif [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:9d:c0,bridge_name='br-int',has_traffic_filtering=True,id=0110d16a-ba47-4f17-8196-6a5b11e482b3,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0110d16a-ba')
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.995 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.996 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.996 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No VIF found with MAC fa:16:3e:32:9d:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:37:25 compute-0 nova_compute[260935]: 2025-10-11 09:37:25.997 2 INFO nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Using config drive
Oct 11 09:37:26 compute-0 nova_compute[260935]: 2025-10-11 09:37:26.025 2 DEBUG nova.storage.rbd_utils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 27b27eae-7476-459a-b3e1-62199c81d4e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:37:26 compute-0 nova_compute[260935]: 2025-10-11 09:37:26.118 2 DEBUG nova.network.neutron [req-20904805-f1e5-4a16-ad04-6075d7cd35e2 req-46dc4c43-2eae-4ebc-91c9-3b83d366f528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Updated VIF entry in instance network info cache for port 0110d16a-ba47-4f17-8196-6a5b11e482b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:37:26 compute-0 nova_compute[260935]: 2025-10-11 09:37:26.119 2 DEBUG nova.network.neutron [req-20904805-f1e5-4a16-ad04-6075d7cd35e2 req-46dc4c43-2eae-4ebc-91c9-3b83d366f528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Updating instance_info_cache with network_info: [{"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:37:26 compute-0 nova_compute[260935]: 2025-10-11 09:37:26.133 2 DEBUG oslo_concurrency.lockutils [req-20904805-f1e5-4a16-ad04-6075d7cd35e2 req-46dc4c43-2eae-4ebc-91c9-3b83d366f528 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-27b27eae-7476-459a-b3e1-62199c81d4e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:37:26 compute-0 nova_compute[260935]: 2025-10-11 09:37:26.284 2 INFO nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Creating config drive at /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1/disk.config
Oct 11 09:37:26 compute-0 nova_compute[260935]: 2025-10-11 09:37:26.295 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeuo5ffxd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:37:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1599478162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:37:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1024536959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:37:26 compute-0 nova_compute[260935]: 2025-10-11 09:37:26.444 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeuo5ffxd" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:37:26 compute-0 magical_galileo[420024]: {
Oct 11 09:37:26 compute-0 magical_galileo[420024]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:37:26 compute-0 magical_galileo[420024]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:37:26 compute-0 magical_galileo[420024]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:37:26 compute-0 magical_galileo[420024]:         "osd_id": 2,
Oct 11 09:37:26 compute-0 magical_galileo[420024]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:37:26 compute-0 magical_galileo[420024]:         "type": "bluestore"
Oct 11 09:37:26 compute-0 magical_galileo[420024]:     },
Oct 11 09:37:26 compute-0 magical_galileo[420024]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:37:26 compute-0 magical_galileo[420024]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:37:26 compute-0 magical_galileo[420024]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:37:26 compute-0 magical_galileo[420024]:         "osd_id": 0,
Oct 11 09:37:26 compute-0 magical_galileo[420024]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:37:26 compute-0 magical_galileo[420024]:         "type": "bluestore"
Oct 11 09:37:26 compute-0 magical_galileo[420024]:     },
Oct 11 09:37:26 compute-0 magical_galileo[420024]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:37:26 compute-0 magical_galileo[420024]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:37:26 compute-0 magical_galileo[420024]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:37:26 compute-0 magical_galileo[420024]:         "osd_id": 1,
Oct 11 09:37:26 compute-0 magical_galileo[420024]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:37:26 compute-0 magical_galileo[420024]:         "type": "bluestore"
Oct 11 09:37:26 compute-0 magical_galileo[420024]:     }
Oct 11 09:37:26 compute-0 magical_galileo[420024]: }
Oct 11 09:37:26 compute-0 systemd[1]: libpod-6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c.scope: Deactivated successfully.
Oct 11 09:37:26 compute-0 systemd[1]: libpod-6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c.scope: Consumed 1.014s CPU time.
Oct 11 09:37:26 compute-0 conmon[420024]: conmon 6bf6e523c579205dead9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c.scope/container/memory.events
Oct 11 09:37:26 compute-0 podman[420005]: 2025-10-11 09:37:26.491509356 +0000 UTC m=+1.171025184 container died 6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:37:26 compute-0 nova_compute[260935]: 2025-10-11 09:37:26.494 2 DEBUG nova.storage.rbd_utils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 27b27eae-7476-459a-b3e1-62199c81d4e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:37:26 compute-0 nova_compute[260935]: 2025-10-11 09:37:26.498 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1/disk.config 27b27eae-7476-459a-b3e1-62199c81d4e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:37:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1cdcdb1183aa2e86fd801d112bc065a7027e19904460d8bd33886acedb1295d-merged.mount: Deactivated successfully.
Oct 11 09:37:26 compute-0 podman[420005]: 2025-10-11 09:37:26.542767074 +0000 UTC m=+1.222282872 container remove 6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 09:37:26 compute-0 systemd[1]: libpod-conmon-6bf6e523c579205dead90350d459d81d654dcc5bcb18e91d7c75b1776473663c.scope: Deactivated successfully.
Oct 11 09:37:26 compute-0 sudo[419879]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:37:26 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:37:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:37:26 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:37:26 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 244937b3-5c8a-43ad-8c84-b6770ade7700 does not exist
Oct 11 09:37:26 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev fc28c099-bb94-45ce-97db-ceab283f5687 does not exist
Oct 11 09:37:26 compute-0 sudo[420169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:37:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:37:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/71404979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:37:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:37:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/71404979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:37:26 compute-0 sudo[420169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:26 compute-0 sudo[420169]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:26 compute-0 nova_compute[260935]: 2025-10-11 09:37:26.686 2 DEBUG oslo_concurrency.processutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1/disk.config 27b27eae-7476-459a-b3e1-62199c81d4e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:37:26 compute-0 nova_compute[260935]: 2025-10-11 09:37:26.687 2 INFO nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Deleting local config drive /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1/disk.config because it was imported into RBD.
Oct 11 09:37:26 compute-0 sudo[420197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:37:26 compute-0 sudo[420197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:37:26 compute-0 sudo[420197]: pam_unix(sudo:session): session closed for user root
Oct 11 09:37:26 compute-0 kernel: tap0110d16a-ba: entered promiscuous mode
Oct 11 09:37:26 compute-0 NetworkManager[44960]: <info>  [1760175446.7476] manager: (tap0110d16a-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/613)
Oct 11 09:37:26 compute-0 ovn_controller[152945]: 2025-10-11T09:37:26Z|01597|binding|INFO|Claiming lport 0110d16a-ba47-4f17-8196-6a5b11e482b3 for this chassis.
Oct 11 09:37:26 compute-0 ovn_controller[152945]: 2025-10-11T09:37:26Z|01598|binding|INFO|0110d16a-ba47-4f17-8196-6a5b11e482b3: Claiming fa:16:3e:32:9d:c0 10.100.0.11
Oct 11 09:37:26 compute-0 nova_compute[260935]: 2025-10-11 09:37:26.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.762 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:9d:c0 10.100.0.11'], port_security=['fa:16:3e:32:9d:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27b27eae-7476-459a-b3e1-62199c81d4e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aba3d239-1d8c-4eb4-ab69-9421b1db2407', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9ec93d7-4219-431f-a875-bf3609f077df, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0110d16a-ba47-4f17-8196-6a5b11e482b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:37:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.764 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0110d16a-ba47-4f17-8196-6a5b11e482b3 in datapath 03aa96fb-1646-4924-8eea-e7c8a5518a31 bound to our chassis
Oct 11 09:37:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.766 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03aa96fb-1646-4924-8eea-e7c8a5518a31
Oct 11 09:37:26 compute-0 ovn_controller[152945]: 2025-10-11T09:37:26Z|01599|binding|INFO|Setting lport 0110d16a-ba47-4f17-8196-6a5b11e482b3 ovn-installed in OVS
Oct 11 09:37:26 compute-0 ovn_controller[152945]: 2025-10-11T09:37:26Z|01600|binding|INFO|Setting lport 0110d16a-ba47-4f17-8196-6a5b11e482b3 up in Southbound
Oct 11 09:37:26 compute-0 nova_compute[260935]: 2025-10-11 09:37:26.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.786 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c7689b-4687-4914-8813-92986e269c69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:26 compute-0 systemd-machined[215705]: New machine qemu-166-instance-0000008e.
Oct 11 09:37:26 compute-0 systemd[1]: Started Virtual Machine qemu-166-instance-0000008e.
Oct 11 09:37:26 compute-0 systemd-udevd[420237]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:37:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.834 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[19973a50-bc68-4836-b1ff-cb33476062a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.838 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba11c0b-1b77-4fe7-a787-fa01e3f4d916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:26 compute-0 NetworkManager[44960]: <info>  [1760175446.8485] device (tap0110d16a-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:37:26 compute-0 NetworkManager[44960]: <info>  [1760175446.8496] device (tap0110d16a-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:37:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.871 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[d9368337-8a66-4143-acaa-878d2d61a0bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.938 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[deaf0e55-047b-4d9b-8cdf-4e1c832f080b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03aa96fb-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:60:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 421], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725918, 'reachable_time': 20337, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420247, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.962 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a4938878-a2ac-43d5-86a3-bdc8707e905b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03aa96fb-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725934, 'tstamp': 725934}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420249, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap03aa96fb-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725937, 'tstamp': 725937}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420249, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.964 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03aa96fb-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:37:26 compute-0 nova_compute[260935]: 2025-10-11 09:37:26.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:26 compute-0 nova_compute[260935]: 2025-10-11 09:37:26.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.968 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03aa96fb-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:37:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.968 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:37:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.968 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03aa96fb-10, col_values=(('external_ids', {'iface-id': '33c7af53-54c4-4168-8c26-88465029f36a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:37:26 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:26.969 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:37:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2874: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 23 op/s
Oct 11 09:37:27 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:37:27 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:37:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/71404979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:37:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/71404979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:37:27 compute-0 ceph-mon[74313]: pgmap v2874: 321 pgs: 321 active+clean; 453 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 23 op/s
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.897 2 DEBUG nova.compute.manager [req-67577cb7-ed9d-46b7-be65-13eed7429eb0 req-0ddd7289-5332-4e11-a871-63058f82010b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received event network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.897 2 DEBUG oslo_concurrency.lockutils [req-67577cb7-ed9d-46b7-be65-13eed7429eb0 req-0ddd7289-5332-4e11-a871-63058f82010b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.898 2 DEBUG oslo_concurrency.lockutils [req-67577cb7-ed9d-46b7-be65-13eed7429eb0 req-0ddd7289-5332-4e11-a871-63058f82010b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.898 2 DEBUG oslo_concurrency.lockutils [req-67577cb7-ed9d-46b7-be65-13eed7429eb0 req-0ddd7289-5332-4e11-a871-63058f82010b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.898 2 DEBUG nova.compute.manager [req-67577cb7-ed9d-46b7-be65-13eed7429eb0 req-0ddd7289-5332-4e11-a871-63058f82010b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Processing event network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.904 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.905 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175447.9050891, 27b27eae-7476-459a-b3e1-62199c81d4e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.906 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] VM Started (Lifecycle Event)
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.910 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.914 2 INFO nova.virt.libvirt.driver [-] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Instance spawned successfully.
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.914 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.932 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.938 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.943 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.944 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.944 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.945 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.945 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.946 2 DEBUG nova.virt.libvirt.driver [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.968 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.969 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175447.9055552, 27b27eae-7476-459a-b3e1-62199c81d4e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.969 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] VM Paused (Lifecycle Event)
Oct 11 09:37:27 compute-0 nova_compute[260935]: 2025-10-11 09:37:27.996 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:37:28 compute-0 nova_compute[260935]: 2025-10-11 09:37:28.000 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175447.9090703, 27b27eae-7476-459a-b3e1-62199c81d4e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:37:28 compute-0 nova_compute[260935]: 2025-10-11 09:37:28.000 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] VM Resumed (Lifecycle Event)
Oct 11 09:37:28 compute-0 nova_compute[260935]: 2025-10-11 09:37:28.005 2 INFO nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Took 6.71 seconds to spawn the instance on the hypervisor.
Oct 11 09:37:28 compute-0 nova_compute[260935]: 2025-10-11 09:37:28.005 2 DEBUG nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:37:28 compute-0 nova_compute[260935]: 2025-10-11 09:37:28.019 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:37:28 compute-0 nova_compute[260935]: 2025-10-11 09:37:28.023 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:37:28 compute-0 nova_compute[260935]: 2025-10-11 09:37:28.058 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:37:28 compute-0 nova_compute[260935]: 2025-10-11 09:37:28.076 2 INFO nova.compute.manager [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Took 7.77 seconds to build instance.
Oct 11 09:37:28 compute-0 nova_compute[260935]: 2025-10-11 09:37:28.093 2 DEBUG oslo_concurrency.lockutils [None req-cef89f47-e8a1-4ded-b3a1-844fdd04a2c8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:29 compute-0 nova_compute[260935]: 2025-10-11 09:37:29.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2875: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:37:29 compute-0 ceph-mon[74313]: pgmap v2875: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:37:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:37:30 compute-0 nova_compute[260935]: 2025-10-11 09:37:30.016 2 DEBUG nova.compute.manager [req-c8f68c1b-7556-48e2-bce4-5be42a27e4dc req-54f1e2c7-cef3-4db9-80c4-6d889d0a5128 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received event network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:37:30 compute-0 nova_compute[260935]: 2025-10-11 09:37:30.016 2 DEBUG oslo_concurrency.lockutils [req-c8f68c1b-7556-48e2-bce4-5be42a27e4dc req-54f1e2c7-cef3-4db9-80c4-6d889d0a5128 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:30 compute-0 nova_compute[260935]: 2025-10-11 09:37:30.017 2 DEBUG oslo_concurrency.lockutils [req-c8f68c1b-7556-48e2-bce4-5be42a27e4dc req-54f1e2c7-cef3-4db9-80c4-6d889d0a5128 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:30 compute-0 nova_compute[260935]: 2025-10-11 09:37:30.018 2 DEBUG oslo_concurrency.lockutils [req-c8f68c1b-7556-48e2-bce4-5be42a27e4dc req-54f1e2c7-cef3-4db9-80c4-6d889d0a5128 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:30 compute-0 nova_compute[260935]: 2025-10-11 09:37:30.018 2 DEBUG nova.compute.manager [req-c8f68c1b-7556-48e2-bce4-5be42a27e4dc req-54f1e2c7-cef3-4db9-80c4-6d889d0a5128 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] No waiting events found dispatching network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:37:30 compute-0 nova_compute[260935]: 2025-10-11 09:37:30.018 2 WARNING nova.compute.manager [req-c8f68c1b-7556-48e2-bce4-5be42a27e4dc req-54f1e2c7-cef3-4db9-80c4-6d889d0a5128 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received unexpected event network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 for instance with vm_state active and task_state None.
Oct 11 09:37:30 compute-0 podman[420292]: 2025-10-11 09:37:30.808381984 +0000 UTC m=+0.103580687 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 09:37:30 compute-0 nova_compute[260935]: 2025-10-11 09:37:30.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2876: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:37:31 compute-0 ceph-mon[74313]: pgmap v2876: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 11 09:37:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:31.844 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:37:31 compute-0 nova_compute[260935]: 2025-10-11 09:37:31.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:31 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:31.846 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:37:32 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:32.848 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:37:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2877: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:37:33 compute-0 ceph-mon[74313]: pgmap v2877: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:37:34 compute-0 nova_compute[260935]: 2025-10-11 09:37:34.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:37:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2878: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 77 op/s
Oct 11 09:37:35 compute-0 ceph-mon[74313]: pgmap v2878: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 77 op/s
Oct 11 09:37:35 compute-0 nova_compute[260935]: 2025-10-11 09:37:35.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2879: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 77 op/s
Oct 11 09:37:37 compute-0 ceph-mon[74313]: pgmap v2879: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 77 op/s
Oct 11 09:37:38 compute-0 podman[420311]: 2025-10-11 09:37:38.786709765 +0000 UTC m=+0.087572577 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 09:37:39 compute-0 nova_compute[260935]: 2025-10-11 09:37:39.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2880: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 78 op/s
Oct 11 09:37:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:37:40 compute-0 ovn_controller[152945]: 2025-10-11T09:37:40Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:32:9d:c0 10.100.0.11
Oct 11 09:37:40 compute-0 ovn_controller[152945]: 2025-10-11T09:37:40Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:9d:c0 10.100.0.11
Oct 11 09:37:40 compute-0 ceph-mon[74313]: pgmap v2880: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 78 op/s
Oct 11 09:37:40 compute-0 nova_compute[260935]: 2025-10-11 09:37:40.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2881: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Oct 11 09:37:42 compute-0 ceph-mon[74313]: pgmap v2881: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Oct 11 09:37:42 compute-0 podman[420331]: 2025-10-11 09:37:42.814506064 +0000 UTC m=+0.104567464 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 11 09:37:42 compute-0 podman[420332]: 2025-10-11 09:37:42.855936366 +0000 UTC m=+0.140012008 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 11 09:37:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2882: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 11 09:37:44 compute-0 nova_compute[260935]: 2025-10-11 09:37:44.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:44 compute-0 ceph-mon[74313]: pgmap v2882: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 11 09:37:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.128 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "27b27eae-7476-459a-b3e1-62199c81d4e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.129 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.130 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.130 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.131 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.133 2 INFO nova.compute.manager [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Terminating instance
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.135 2 DEBUG nova.compute.manager [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:37:45 compute-0 kernel: tap0110d16a-ba (unregistering): left promiscuous mode
Oct 11 09:37:45 compute-0 NetworkManager[44960]: <info>  [1760175465.2009] device (tap0110d16a-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:37:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2883: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:45 compute-0 ovn_controller[152945]: 2025-10-11T09:37:45Z|01601|binding|INFO|Releasing lport 0110d16a-ba47-4f17-8196-6a5b11e482b3 from this chassis (sb_readonly=0)
Oct 11 09:37:45 compute-0 ovn_controller[152945]: 2025-10-11T09:37:45Z|01602|binding|INFO|Setting lport 0110d16a-ba47-4f17-8196-6a5b11e482b3 down in Southbound
Oct 11 09:37:45 compute-0 ovn_controller[152945]: 2025-10-11T09:37:45Z|01603|binding|INFO|Removing iface tap0110d16a-ba ovn-installed in OVS
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.268 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:9d:c0 10.100.0.11'], port_security=['fa:16:3e:32:9d:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '27b27eae-7476-459a-b3e1-62199c81d4e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aba3d239-1d8c-4eb4-ab69-9421b1db2407', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9ec93d7-4219-431f-a875-bf3609f077df, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0110d16a-ba47-4f17-8196-6a5b11e482b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:37:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.269 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0110d16a-ba47-4f17-8196-6a5b11e482b3 in datapath 03aa96fb-1646-4924-8eea-e7c8a5518a31 unbound from our chassis
Oct 11 09:37:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.272 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03aa96fb-1646-4924-8eea-e7c8a5518a31
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.293 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d441df3b-d975-4d64-83fd-deeb8214b792]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:45 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Oct 11 09:37:45 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d0000008e.scope: Consumed 12.959s CPU time.
Oct 11 09:37:45 compute-0 systemd-machined[215705]: Machine qemu-166-instance-0000008e terminated.
Oct 11 09:37:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.337 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3a393c0e-48ae-4a02-b789-35ac028646d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.341 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4366f2fc-6405-4253-b64b-ed70c1aa6dbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.386 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4409d7-fe7c-4743-9750-6ce27fabd3a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.395 2 INFO nova.virt.libvirt.driver [-] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Instance destroyed successfully.
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.396 2 DEBUG nova.objects.instance [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'resources' on Instance uuid 27b27eae-7476-459a-b3e1-62199c81d4e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.411 2 DEBUG nova.virt.libvirt.vif [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:37:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-0-202990093',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-0-202990093',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=142,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUdKb5dOuuDVVWeqX4zKoPcjWDfKUs1onbpng/L6Jwhcf0BjMOuWghlJ+p86B+gJd3uhEaIWe8cO4nTPLwQAMa1oAzsF+deBAfCAYSzplFFdsUIepQYO467EdbI9Uqbsw==',key_name='tempest-TestSecurityGroupsBasicOps-402699739',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:37:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-0rby20lg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:37:28Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=27b27eae-7476-459a-b3e1-62199c81d4e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.412 2 DEBUG nova.network.os_vif_util [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "address": "fa:16:3e:32:9d:c0", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0110d16a-ba", "ovs_interfaceid": "0110d16a-ba47-4f17-8196-6a5b11e482b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.413 2 DEBUG nova.network.os_vif_util [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:9d:c0,bridge_name='br-int',has_traffic_filtering=True,id=0110d16a-ba47-4f17-8196-6a5b11e482b3,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0110d16a-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.413 2 DEBUG os_vif [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:9d:c0,bridge_name='br-int',has_traffic_filtering=True,id=0110d16a-ba47-4f17-8196-6a5b11e482b3,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0110d16a-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.416 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0110d16a-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:37:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.418 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a066f576-bcbc-4d3a-af64-98f34c28d5ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03aa96fb-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:60:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 421], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725918, 'reachable_time': 20337, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420395, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.430 2 INFO os_vif [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:9d:c0,bridge_name='br-int',has_traffic_filtering=True,id=0110d16a-ba47-4f17-8196-6a5b11e482b3,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0110d16a-ba')
Oct 11 09:37:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.443 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a84768-34fe-4a88-9b1b-4417281d9d80]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap03aa96fb-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725934, 'tstamp': 725934}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420397, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap03aa96fb-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725937, 'tstamp': 725937}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420397, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.445 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03aa96fb-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:37:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.449 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03aa96fb-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:37:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.450 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:37:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.451 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03aa96fb-10, col_values=(('external_ids', {'iface-id': '33c7af53-54c4-4168-8c26-88465029f36a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:37:45 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:45.452 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.849 2 DEBUG nova.compute.manager [req-7fdc563f-2a1d-4ed6-8465-159fd623f8f7 req-78f11f5a-daff-4ba6-9fe3-7934b5b2e85f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received event network-vif-unplugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.850 2 DEBUG oslo_concurrency.lockutils [req-7fdc563f-2a1d-4ed6-8465-159fd623f8f7 req-78f11f5a-daff-4ba6-9fe3-7934b5b2e85f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.850 2 DEBUG oslo_concurrency.lockutils [req-7fdc563f-2a1d-4ed6-8465-159fd623f8f7 req-78f11f5a-daff-4ba6-9fe3-7934b5b2e85f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.850 2 DEBUG oslo_concurrency.lockutils [req-7fdc563f-2a1d-4ed6-8465-159fd623f8f7 req-78f11f5a-daff-4ba6-9fe3-7934b5b2e85f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.850 2 DEBUG nova.compute.manager [req-7fdc563f-2a1d-4ed6-8465-159fd623f8f7 req-78f11f5a-daff-4ba6-9fe3-7934b5b2e85f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] No waiting events found dispatching network-vif-unplugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.850 2 DEBUG nova.compute.manager [req-7fdc563f-2a1d-4ed6-8465-159fd623f8f7 req-78f11f5a-daff-4ba6-9fe3-7934b5b2e85f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received event network-vif-unplugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.865 2 INFO nova.virt.libvirt.driver [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Deleting instance files /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1_del
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.866 2 INFO nova.virt.libvirt.driver [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Deletion of /var/lib/nova/instances/27b27eae-7476-459a-b3e1-62199c81d4e1_del complete
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.918 2 INFO nova.compute.manager [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.918 2 DEBUG oslo.service.loopingcall [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.919 2 DEBUG nova.compute.manager [-] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:37:45 compute-0 nova_compute[260935]: 2025-10-11 09:37:45.919 2 DEBUG nova.network.neutron [-] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:37:46 compute-0 ceph-mon[74313]: pgmap v2883: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:37:46 compute-0 nova_compute[260935]: 2025-10-11 09:37:46.691 2 DEBUG nova.network.neutron [-] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:37:46 compute-0 nova_compute[260935]: 2025-10-11 09:37:46.712 2 INFO nova.compute.manager [-] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Took 0.79 seconds to deallocate network for instance.
Oct 11 09:37:46 compute-0 sshd-session[420417]: Invalid user mysql from 165.232.82.252 port 46154
Oct 11 09:37:46 compute-0 nova_compute[260935]: 2025-10-11 09:37:46.773 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:46 compute-0 nova_compute[260935]: 2025-10-11 09:37:46.773 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:46 compute-0 sshd-session[420417]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:37:46 compute-0 sshd-session[420417]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:37:46 compute-0 nova_compute[260935]: 2025-10-11 09:37:46.900 2 DEBUG oslo_concurrency.processutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:37:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2884: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:37:47 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:37:47 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1255123041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:37:47 compute-0 nova_compute[260935]: 2025-10-11 09:37:47.366 2 DEBUG oslo_concurrency.processutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:37:47 compute-0 nova_compute[260935]: 2025-10-11 09:37:47.373 2 DEBUG nova.compute.provider_tree [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:37:47 compute-0 nova_compute[260935]: 2025-10-11 09:37:47.388 2 DEBUG nova.scheduler.client.report [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:37:47 compute-0 nova_compute[260935]: 2025-10-11 09:37:47.406 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:47 compute-0 nova_compute[260935]: 2025-10-11 09:37:47.458 2 INFO nova.scheduler.client.report [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Deleted allocations for instance 27b27eae-7476-459a-b3e1-62199c81d4e1
Oct 11 09:37:47 compute-0 nova_compute[260935]: 2025-10-11 09:37:47.530 2 DEBUG oslo_concurrency.lockutils [None req-e6a19925-ee3b-42b4-bb70-827444d44bbb 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:47 compute-0 nova_compute[260935]: 2025-10-11 09:37:47.931 2 DEBUG nova.compute.manager [req-f81e6af3-07d9-40d9-9dd8-45f3031ed632 req-e047558c-e11b-4df4-a5b3-c6fddf8e1784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received event network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:37:47 compute-0 nova_compute[260935]: 2025-10-11 09:37:47.932 2 DEBUG oslo_concurrency.lockutils [req-f81e6af3-07d9-40d9-9dd8-45f3031ed632 req-e047558c-e11b-4df4-a5b3-c6fddf8e1784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:47 compute-0 nova_compute[260935]: 2025-10-11 09:37:47.932 2 DEBUG oslo_concurrency.lockutils [req-f81e6af3-07d9-40d9-9dd8-45f3031ed632 req-e047558c-e11b-4df4-a5b3-c6fddf8e1784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:47 compute-0 nova_compute[260935]: 2025-10-11 09:37:47.932 2 DEBUG oslo_concurrency.lockutils [req-f81e6af3-07d9-40d9-9dd8-45f3031ed632 req-e047558c-e11b-4df4-a5b3-c6fddf8e1784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "27b27eae-7476-459a-b3e1-62199c81d4e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:47 compute-0 nova_compute[260935]: 2025-10-11 09:37:47.933 2 DEBUG nova.compute.manager [req-f81e6af3-07d9-40d9-9dd8-45f3031ed632 req-e047558c-e11b-4df4-a5b3-c6fddf8e1784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] No waiting events found dispatching network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:37:47 compute-0 nova_compute[260935]: 2025-10-11 09:37:47.933 2 WARNING nova.compute.manager [req-f81e6af3-07d9-40d9-9dd8-45f3031ed632 req-e047558c-e11b-4df4-a5b3-c6fddf8e1784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received unexpected event network-vif-plugged-0110d16a-ba47-4f17-8196-6a5b11e482b3 for instance with vm_state deleted and task_state None.
Oct 11 09:37:47 compute-0 nova_compute[260935]: 2025-10-11 09:37:47.933 2 DEBUG nova.compute.manager [req-f81e6af3-07d9-40d9-9dd8-45f3031ed632 req-e047558c-e11b-4df4-a5b3-c6fddf8e1784 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Received event network-vif-deleted-0110d16a-ba47-4f17-8196-6a5b11e482b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:37:48 compute-0 ceph-mon[74313]: pgmap v2884: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:37:48 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1255123041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:37:48 compute-0 sshd-session[420417]: Failed password for invalid user mysql from 165.232.82.252 port 46154 ssh2
Oct 11 09:37:48 compute-0 nova_compute[260935]: 2025-10-11 09:37:48.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:37:48 compute-0 nova_compute[260935]: 2025-10-11 09:37:48.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:37:48 compute-0 nova_compute[260935]: 2025-10-11 09:37:48.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 09:37:48 compute-0 nova_compute[260935]: 2025-10-11 09:37:48.906 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:37:48 compute-0 nova_compute[260935]: 2025-10-11 09:37:48.907 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:37:48 compute-0 nova_compute[260935]: 2025-10-11 09:37:48.907 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:37:48 compute-0 nova_compute[260935]: 2025-10-11 09:37:48.907 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:37:48 compute-0 nova_compute[260935]: 2025-10-11 09:37:48.980 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:48 compute-0 nova_compute[260935]: 2025-10-11 09:37:48.980 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:48 compute-0 nova_compute[260935]: 2025-10-11 09:37:48.981 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:48 compute-0 nova_compute[260935]: 2025-10-11 09:37:48.981 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:48 compute-0 nova_compute[260935]: 2025-10-11 09:37:48.982 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:48 compute-0 nova_compute[260935]: 2025-10-11 09:37:48.984 2 INFO nova.compute.manager [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Terminating instance
Oct 11 09:37:48 compute-0 nova_compute[260935]: 2025-10-11 09:37:48.985 2 DEBUG nova.compute.manager [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:37:49 compute-0 kernel: tap25d7271a-bc (unregistering): left promiscuous mode
Oct 11 09:37:49 compute-0 NetworkManager[44960]: <info>  [1760175469.0381] device (tap25d7271a-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:49 compute-0 ovn_controller[152945]: 2025-10-11T09:37:49Z|01604|binding|INFO|Releasing lport 25d7271a-bce8-4388-991e-e7069da0eff1 from this chassis (sb_readonly=0)
Oct 11 09:37:49 compute-0 ovn_controller[152945]: 2025-10-11T09:37:49Z|01605|binding|INFO|Setting lport 25d7271a-bce8-4388-991e-e7069da0eff1 down in Southbound
Oct 11 09:37:49 compute-0 ovn_controller[152945]: 2025-10-11T09:37:49Z|01606|binding|INFO|Removing iface tap25d7271a-bc ovn-installed in OVS
Oct 11 09:37:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.056 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:d6:78 10.100.0.13'], port_security=['fa:16:3e:a1:d6:78 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '1f73fb12-6b6e-4491-81a4-51fb66ffb310', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0fdfee67-b394-4b27-bc8b-a70f0c2dfabe aba3d239-1d8c-4eb4-ab69-9421b1db2407', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9ec93d7-4219-431f-a875-bf3609f077df, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=25d7271a-bce8-4388-991e-e7069da0eff1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:37:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.058 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 25d7271a-bce8-4388-991e-e7069da0eff1 in datapath 03aa96fb-1646-4924-8eea-e7c8a5518a31 unbound from our chassis
Oct 11 09:37:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.059 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03aa96fb-1646-4924-8eea-e7c8a5518a31, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:37:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.060 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cff60040-b244-489e-a993-6915bb33c3a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.060 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31 namespace which is not needed anymore
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:49 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Oct 11 09:37:49 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d0000008d.scope: Consumed 14.846s CPU time.
Oct 11 09:37:49 compute-0 systemd-machined[215705]: Machine qemu-165-instance-0000008d terminated.
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.219 2 INFO nova.virt.libvirt.driver [-] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Instance destroyed successfully.
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.220 2 DEBUG nova.objects.instance [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'resources' on Instance uuid 1f73fb12-6b6e-4491-81a4-51fb66ffb310 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:49 compute-0 neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31[418582]: [NOTICE]   (418586) : haproxy version is 2.8.14-c23fe91
Oct 11 09:37:49 compute-0 neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31[418582]: [NOTICE]   (418586) : path to executable is /usr/sbin/haproxy
Oct 11 09:37:49 compute-0 neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31[418582]: [ALERT]    (418586) : Current worker (418588) exited with code 143 (Terminated)
Oct 11 09:37:49 compute-0 neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31[418582]: [WARNING]  (418586) : All workers exited. Exiting... (0)
Oct 11 09:37:49 compute-0 systemd[1]: libpod-f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5.scope: Deactivated successfully.
Oct 11 09:37:49 compute-0 podman[420464]: 2025-10-11 09:37:49.256172375 +0000 UTC m=+0.065973261 container died f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 09:37:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2885: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 09:37:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1249ac7e806de3a44795ea0929a99531493013ad6047c5dc235ff82a29920ead-merged.mount: Deactivated successfully.
Oct 11 09:37:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5-userdata-shm.mount: Deactivated successfully.
Oct 11 09:37:49 compute-0 podman[420464]: 2025-10-11 09:37:49.298326217 +0000 UTC m=+0.108127123 container cleanup f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:37:49 compute-0 systemd[1]: libpod-conmon-f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5.scope: Deactivated successfully.
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.344 2 DEBUG nova.virt.libvirt.vif [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:36:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-2088666113',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-2088666113',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=141,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDUdKb5dOuuDVVWeqX4zKoPcjWDfKUs1onbpng/L6Jwhcf0BjMOuWghlJ+p86B+gJd3uhEaIWe8cO4nTPLwQAMa1oAzsF+deBAfCAYSzplFFdsUIepQYO467EdbI9Uqbsw==',key_name='tempest-TestSecurityGroupsBasicOps-402699739',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:36:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-1r2upbl3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:36:55Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=1f73fb12-6b6e-4491-81a4-51fb66ffb310,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.345 2 DEBUG nova.network.os_vif_util [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.346 2 DEBUG nova.network.os_vif_util [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a1:d6:78,bridge_name='br-int',has_traffic_filtering=True,id=25d7271a-bce8-4388-991e-e7069da0eff1,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25d7271a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.346 2 DEBUG os_vif [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a1:d6:78,bridge_name='br-int',has_traffic_filtering=True,id=25d7271a-bce8-4388-991e-e7069da0eff1,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25d7271a-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.349 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25d7271a-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.355 2 INFO os_vif [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a1:d6:78,bridge_name='br-int',has_traffic_filtering=True,id=25d7271a-bce8-4388-991e-e7069da0eff1,network=Network(03aa96fb-1646-4924-8eea-e7c8a5518a31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25d7271a-bc')
Oct 11 09:37:49 compute-0 podman[420505]: 2025-10-11 09:37:49.360348747 +0000 UTC m=+0.040388894 container remove f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 09:37:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.367 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e655cc66-9971-4446-91d0-8626d86da726]: (4, ('Sat Oct 11 09:37:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31 (f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5)\nf6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5\nSat Oct 11 09:37:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31 (f6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5)\nf6a9261aec5be20db1e997a39a8f40719235d71526af0a641fc6e13ee745e3e5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.369 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[10ead976-4267-4782-bfb1-aa1c623b3191]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.370 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03aa96fb-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:49 compute-0 kernel: tap03aa96fb-10: left promiscuous mode
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.383 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a62774b6-397d-4ebd-810e-c22033d8b33d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.403 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[05344d1c-78a6-445c-b84e-db39e89eacc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.405 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[12c4bb35-a025-45bb-81e7-f4f530206e8f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.423 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dfad55cc-59fd-4d2d-a082-57916836d127]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725907, 'reachable_time': 40693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420538, 'error': None, 'target': 'ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.426 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-03aa96fb-1646-4924-8eea-e7c8a5518a31 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:37:49 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:37:49.426 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[951a96d1-c632-4676-9122-e09f2dc064d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:37:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d03aa96fb\x2d1646\x2d4924\x2d8eea\x2de7c8a5518a31.mount: Deactivated successfully.
Oct 11 09:37:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.792 2 INFO nova.virt.libvirt.driver [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Deleting instance files /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310_del
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.793 2 INFO nova.virt.libvirt.driver [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Deletion of /var/lib/nova/instances/1f73fb12-6b6e-4491-81a4-51fb66ffb310_del complete
Oct 11 09:37:49 compute-0 sshd-session[420417]: Connection closed by invalid user mysql 165.232.82.252 port 46154 [preauth]
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.887 2 INFO nova.compute.manager [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Took 0.90 seconds to destroy the instance on the hypervisor.
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.888 2 DEBUG oslo.service.loopingcall [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.889 2 DEBUG nova.compute.manager [-] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:37:49 compute-0 nova_compute[260935]: 2025-10-11 09:37:49.889 2 DEBUG nova.network.neutron [-] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:37:50 compute-0 nova_compute[260935]: 2025-10-11 09:37:50.111 2 DEBUG nova.compute.manager [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-changed-25d7271a-bce8-4388-991e-e7069da0eff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:37:50 compute-0 nova_compute[260935]: 2025-10-11 09:37:50.111 2 DEBUG nova.compute.manager [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Refreshing instance network info cache due to event network-changed-25d7271a-bce8-4388-991e-e7069da0eff1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:37:50 compute-0 nova_compute[260935]: 2025-10-11 09:37:50.112 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:37:50 compute-0 nova_compute[260935]: 2025-10-11 09:37:50.113 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:37:50 compute-0 nova_compute[260935]: 2025-10-11 09:37:50.113 2 DEBUG nova.network.neutron [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Refreshing network info cache for port 25d7271a-bce8-4388-991e-e7069da0eff1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:37:50 compute-0 ceph-mon[74313]: pgmap v2885: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 09:37:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2886: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 09:37:51 compute-0 nova_compute[260935]: 2025-10-11 09:37:51.734 2 DEBUG nova.network.neutron [-] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:37:51 compute-0 nova_compute[260935]: 2025-10-11 09:37:51.757 2 INFO nova.compute.manager [-] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Took 1.87 seconds to deallocate network for instance.
Oct 11 09:37:51 compute-0 nova_compute[260935]: 2025-10-11 09:37:51.816 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:51 compute-0 nova_compute[260935]: 2025-10-11 09:37:51.816 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:51 compute-0 nova_compute[260935]: 2025-10-11 09:37:51.819 2 DEBUG nova.compute.manager [req-c56ce2a9-c4eb-4c3b-8fbd-f469ea142eae req-0d20d5ab-080f-4caf-b065-3bc81ac31782 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-vif-deleted-25d7271a-bce8-4388-991e-e7069da0eff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:37:51 compute-0 nova_compute[260935]: 2025-10-11 09:37:51.924 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:37:51 compute-0 nova_compute[260935]: 2025-10-11 09:37:51.943 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:37:51 compute-0 nova_compute[260935]: 2025-10-11 09:37:51.944 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:37:51 compute-0 nova_compute[260935]: 2025-10-11 09:37:51.945 2 DEBUG oslo_concurrency.processutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:37:51 compute-0 nova_compute[260935]: 2025-10-11 09:37:51.997 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:37:51 compute-0 nova_compute[260935]: 2025-10-11 09:37:51.999 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.000 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.000 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.000 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.000 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.025 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.225 2 DEBUG nova.network.neutron [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Updated VIF entry in instance network info cache for port 25d7271a-bce8-4388-991e-e7069da0eff1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.226 2 DEBUG nova.network.neutron [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Updating instance_info_cache with network_info: [{"id": "25d7271a-bce8-4388-991e-e7069da0eff1", "address": "fa:16:3e:a1:d6:78", "network": {"id": "03aa96fb-1646-4924-8eea-e7c8a5518a31", "bridge": "br-int", "label": "tempest-network-smoke--2144007454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d7271a-bc", "ovs_interfaceid": "25d7271a-bce8-4388-991e-e7069da0eff1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.252 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1f73fb12-6b6e-4491-81a4-51fb66ffb310" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.253 2 DEBUG nova.compute.manager [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-vif-unplugged-25d7271a-bce8-4388-991e-e7069da0eff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.253 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.253 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.254 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.254 2 DEBUG nova.compute.manager [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] No waiting events found dispatching network-vif-unplugged-25d7271a-bce8-4388-991e-e7069da0eff1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.254 2 DEBUG nova.compute.manager [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-vif-unplugged-25d7271a-bce8-4388-991e-e7069da0eff1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.254 2 DEBUG nova.compute.manager [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received event network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.254 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.255 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.255 2 DEBUG oslo_concurrency.lockutils [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.255 2 DEBUG nova.compute.manager [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] No waiting events found dispatching network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.255 2 WARNING nova.compute.manager [req-93293fd9-9a46-4427-a4b3-e496864b715d req-ff0e0363-d505-4bb8-9984-9c4a0b7450af e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Received unexpected event network-vif-plugged-25d7271a-bce8-4388-991e-e7069da0eff1 for instance with vm_state active and task_state deleting.
Oct 11 09:37:52 compute-0 ceph-mon[74313]: pgmap v2886: 321 pgs: 321 active+clean; 407 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 09:37:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:37:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2726880311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.402 2 DEBUG oslo_concurrency.processutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.409 2 DEBUG nova.compute.provider_tree [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.427 2 DEBUG nova.scheduler.client.report [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.459 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.462 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.462 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.463 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.463 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.537 2 INFO nova.scheduler.client.report [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Deleted allocations for instance 1f73fb12-6b6e-4491-81a4-51fb66ffb310
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.598 2 DEBUG oslo_concurrency.lockutils [None req-eab9539a-10df-46da-b724-617da73ee67c 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "1f73fb12-6b6e-4491-81a4-51fb66ffb310" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:52 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:37:52 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/108814941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.889 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.972 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.973 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.973 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.977 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.977 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.981 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:37:52 compute-0 nova_compute[260935]: 2025-10-11 09:37:52.981 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:37:53 compute-0 nova_compute[260935]: 2025-10-11 09:37:53.258 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:37:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2887: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Oct 11 09:37:53 compute-0 nova_compute[260935]: 2025-10-11 09:37:53.260 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2822MB free_disk=59.78506851196289GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:37:53 compute-0 nova_compute[260935]: 2025-10-11 09:37:53.261 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:37:53 compute-0 nova_compute[260935]: 2025-10-11 09:37:53.261 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:37:53 compute-0 nova_compute[260935]: 2025-10-11 09:37:53.329 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:37:53 compute-0 nova_compute[260935]: 2025-10-11 09:37:53.330 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:37:53 compute-0 nova_compute[260935]: 2025-10-11 09:37:53.330 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:37:53 compute-0 nova_compute[260935]: 2025-10-11 09:37:53.330 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:37:53 compute-0 nova_compute[260935]: 2025-10-11 09:37:53.332 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:37:53 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2726880311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:37:53 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/108814941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:37:53 compute-0 nova_compute[260935]: 2025-10-11 09:37:53.402 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:37:53 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:37:53 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/874628231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:37:53 compute-0 nova_compute[260935]: 2025-10-11 09:37:53.840 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:37:53 compute-0 nova_compute[260935]: 2025-10-11 09:37:53.844 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:37:53 compute-0 nova_compute[260935]: 2025-10-11 09:37:53.859 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:37:53 compute-0 nova_compute[260935]: 2025-10-11 09:37:53.875 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:37:53 compute-0 nova_compute[260935]: 2025-10-11 09:37:53.876 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:37:54 compute-0 nova_compute[260935]: 2025-10-11 09:37:54.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:54 compute-0 nova_compute[260935]: 2025-10-11 09:37:54.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:54 compute-0 ceph-mon[74313]: pgmap v2887: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Oct 11 09:37:54 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/874628231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:37:54 compute-0 nova_compute[260935]: 2025-10-11 09:37:54.579 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:37:54 compute-0 nova_compute[260935]: 2025-10-11 09:37:54.580 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:37:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:37:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:37:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:37:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:37:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:37:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:37:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:37:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:37:55
Oct 11 09:37:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:37:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:37:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta']
Oct 11 09:37:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:37:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2888: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 11 09:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:37:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:37:55 compute-0 ovn_controller[152945]: 2025-10-11T09:37:55Z|01607|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:37:55 compute-0 ovn_controller[152945]: 2025-10-11T09:37:55Z|01608|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:37:55 compute-0 nova_compute[260935]: 2025-10-11 09:37:55.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:55 compute-0 ovn_controller[152945]: 2025-10-11T09:37:55Z|01609|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:37:55 compute-0 ovn_controller[152945]: 2025-10-11T09:37:55Z|01610|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:37:55 compute-0 nova_compute[260935]: 2025-10-11 09:37:55.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:56 compute-0 ceph-mon[74313]: pgmap v2888: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 11 09:37:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2889: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 11 09:37:58 compute-0 ceph-mon[74313]: pgmap v2889: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 11 09:37:58 compute-0 nova_compute[260935]: 2025-10-11 09:37:58.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:37:59 compute-0 nova_compute[260935]: 2025-10-11 09:37:59.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2890: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 11 09:37:59 compute-0 nova_compute[260935]: 2025-10-11 09:37:59.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:37:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:38:00 compute-0 nova_compute[260935]: 2025-10-11 09:38:00.393 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175465.3918967, 27b27eae-7476-459a-b3e1-62199c81d4e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:38:00 compute-0 nova_compute[260935]: 2025-10-11 09:38:00.394 2 INFO nova.compute.manager [-] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] VM Stopped (Lifecycle Event)
Oct 11 09:38:00 compute-0 nova_compute[260935]: 2025-10-11 09:38:00.413 2 DEBUG nova.compute.manager [None req-9c74a2de-46d2-49fb-8c59-0f471a1c290d - - - - - -] [instance: 27b27eae-7476-459a-b3e1-62199c81d4e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:38:00 compute-0 ceph-mon[74313]: pgmap v2890: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 11 09:38:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2891: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:38:01 compute-0 ceph-mon[74313]: pgmap v2891: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:38:01 compute-0 podman[420609]: 2025-10-11 09:38:01.798499717 +0000 UTC m=+0.093037821 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 09:38:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2892: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:38:03 compute-0 ceph-mon[74313]: pgmap v2892: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:38:04 compute-0 nova_compute[260935]: 2025-10-11 09:38:04.217 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175469.2156253, 1f73fb12-6b6e-4491-81a4-51fb66ffb310 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:38:04 compute-0 nova_compute[260935]: 2025-10-11 09:38:04.218 2 INFO nova.compute.manager [-] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] VM Stopped (Lifecycle Event)
Oct 11 09:38:04 compute-0 nova_compute[260935]: 2025-10-11 09:38:04.239 2 DEBUG nova.compute.manager [None req-1bc4dd52-2b35-4183-8885-dbc56bbe2449 - - - - - -] [instance: 1f73fb12-6b6e-4491-81a4-51fb66ffb310] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:38:04 compute-0 nova_compute[260935]: 2025-10-11 09:38:04.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:04 compute-0 nova_compute[260935]: 2025-10-11 09:38:04.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2893: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:38:05 compute-0 ceph-mon[74313]: pgmap v2893: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002627377275021163 of space, bias 1.0, pg target 0.788213182506349 quantized to 32 (current 32)
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:38:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:38:05 compute-0 nova_compute[260935]: 2025-10-11 09:38:05.593 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:38:05 compute-0 nova_compute[260935]: 2025-10-11 09:38:05.593 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:38:05 compute-0 nova_compute[260935]: 2025-10-11 09:38:05.612 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:38:05 compute-0 nova_compute[260935]: 2025-10-11 09:38:05.704 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:38:05 compute-0 nova_compute[260935]: 2025-10-11 09:38:05.704 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:38:05 compute-0 nova_compute[260935]: 2025-10-11 09:38:05.713 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:38:05 compute-0 nova_compute[260935]: 2025-10-11 09:38:05.714 2 INFO nova.compute.claims [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:38:05 compute-0 nova_compute[260935]: 2025-10-11 09:38:05.877 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:38:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:38:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/731794348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.392 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.402 2 DEBUG nova.compute.provider_tree [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.422 2 DEBUG nova.scheduler.client.report [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:38:06 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/731794348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.451 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.452 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.526 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.527 2 DEBUG nova.network.neutron [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.568 2 INFO nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.600 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.708 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.710 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.711 2 INFO nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Creating image(s)
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.744 2 DEBUG nova.storage.rbd_utils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.783 2 DEBUG nova.storage.rbd_utils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.825 2 DEBUG nova.storage.rbd_utils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.833 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.884 2 DEBUG nova.policy [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '489c4d0457354f4684f8b9e53261224f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.932 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.933 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.933 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.934 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.967 2 DEBUG nova.storage.rbd_utils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:38:06 compute-0 nova_compute[260935]: 2025-10-11 09:38:06.973 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:38:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2894: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:38:07 compute-0 nova_compute[260935]: 2025-10-11 09:38:07.361 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:38:07 compute-0 ceph-mon[74313]: pgmap v2894: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:38:07 compute-0 nova_compute[260935]: 2025-10-11 09:38:07.441 2 DEBUG nova.storage.rbd_utils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] resizing rbd image d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:38:07 compute-0 nova_compute[260935]: 2025-10-11 09:38:07.539 2 DEBUG nova.objects.instance [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'migration_context' on Instance uuid d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:38:07 compute-0 nova_compute[260935]: 2025-10-11 09:38:07.556 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:38:07 compute-0 nova_compute[260935]: 2025-10-11 09:38:07.557 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Ensure instance console log exists: /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:38:07 compute-0 nova_compute[260935]: 2025-10-11 09:38:07.557 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:38:07 compute-0 nova_compute[260935]: 2025-10-11 09:38:07.558 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:38:07 compute-0 nova_compute[260935]: 2025-10-11 09:38:07.558 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:38:07 compute-0 nova_compute[260935]: 2025-10-11 09:38:07.775 2 DEBUG nova.network.neutron [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Successfully created port: 927e4c78-95c0-407e-ab92-2c7457c30f72 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:38:08 compute-0 nova_compute[260935]: 2025-10-11 09:38:08.692 2 DEBUG nova.network.neutron [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Successfully updated port: 927e4c78-95c0-407e-ab92-2c7457c30f72 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:38:08 compute-0 nova_compute[260935]: 2025-10-11 09:38:08.705 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:38:08 compute-0 nova_compute[260935]: 2025-10-11 09:38:08.705 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquired lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:38:08 compute-0 nova_compute[260935]: 2025-10-11 09:38:08.705 2 DEBUG nova.network.neutron [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:38:08 compute-0 nova_compute[260935]: 2025-10-11 09:38:08.807 2 DEBUG nova.compute.manager [req-5ae8b420-bcc1-4e6e-a6bb-0099c9fcc198 req-77e91244-6006-47c4-9838-e4ef14c61537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-changed-927e4c78-95c0-407e-ab92-2c7457c30f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:38:08 compute-0 nova_compute[260935]: 2025-10-11 09:38:08.807 2 DEBUG nova.compute.manager [req-5ae8b420-bcc1-4e6e-a6bb-0099c9fcc198 req-77e91244-6006-47c4-9838-e4ef14c61537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Refreshing instance network info cache due to event network-changed-927e4c78-95c0-407e-ab92-2c7457c30f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:38:08 compute-0 nova_compute[260935]: 2025-10-11 09:38:08.808 2 DEBUG oslo_concurrency.lockutils [req-5ae8b420-bcc1-4e6e-a6bb-0099c9fcc198 req-77e91244-6006-47c4-9838-e4ef14c61537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:38:08 compute-0 nova_compute[260935]: 2025-10-11 09:38:08.865 2 DEBUG nova.network.neutron [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:38:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2895: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:09 compute-0 ceph-mon[74313]: pgmap v2895: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.719 2 DEBUG nova.network.neutron [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Updating instance_info_cache with network_info: [{"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.741 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Releasing lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.741 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Instance network_info: |[{"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.742 2 DEBUG oslo_concurrency.lockutils [req-5ae8b420-bcc1-4e6e-a6bb-0099c9fcc198 req-77e91244-6006-47c4-9838-e4ef14c61537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.742 2 DEBUG nova.network.neutron [req-5ae8b420-bcc1-4e6e-a6bb-0099c9fcc198 req-77e91244-6006-47c4-9838-e4ef14c61537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Refreshing network info cache for port 927e4c78-95c0-407e-ab92-2c7457c30f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.751 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Start _get_guest_xml network_info=[{"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.758 2 WARNING nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.769 2 DEBUG nova.virt.libvirt.host [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.770 2 DEBUG nova.virt.libvirt.host [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.775 2 DEBUG nova.virt.libvirt.host [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.776 2 DEBUG nova.virt.libvirt.host [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.776 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.777 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.778 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.778 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.779 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.779 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.780 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.780 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.781 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.781 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.782 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.783 2 DEBUG nova.virt.hardware [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:38:09 compute-0 nova_compute[260935]: 2025-10-11 09:38:09.788 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:38:09 compute-0 podman[420817]: 2025-10-11 09:38:09.809419482 +0000 UTC m=+0.093840383 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 09:38:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:38:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1074118747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.315 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:38:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1074118747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.356 2 DEBUG nova.storage.rbd_utils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.362 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.804 2 DEBUG nova.network.neutron [req-5ae8b420-bcc1-4e6e-a6bb-0099c9fcc198 req-77e91244-6006-47c4-9838-e4ef14c61537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Updated VIF entry in instance network info cache for port 927e4c78-95c0-407e-ab92-2c7457c30f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.805 2 DEBUG nova.network.neutron [req-5ae8b420-bcc1-4e6e-a6bb-0099c9fcc198 req-77e91244-6006-47c4-9838-e4ef14c61537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Updating instance_info_cache with network_info: [{"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.822 2 DEBUG oslo_concurrency.lockutils [req-5ae8b420-bcc1-4e6e-a6bb-0099c9fcc198 req-77e91244-6006-47c4-9838-e4ef14c61537 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:38:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:38:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1368580161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.859 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.862 2 DEBUG nova.virt.libvirt.vif [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:38:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-901075224',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-901075224',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=143,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDeAiXe5BxSlKhTWUA0h8drhicmm4lHT+GCh5IJOhzOrvcV2CoWAUh+e7G/x4nbCWm9Jvt6uJONBp2OY5zPdoAKrYlSLLfE4Vnt6Ll8at8odUFHqxgC501PsHRAdCCNJdw==',key_name='tempest-TestSecurityGroupsBasicOps-904423859',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-9nlmw0id',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:38:06Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.862 2 DEBUG nova.network.os_vif_util [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.864 2 DEBUG nova.network.os_vif_util [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:f3:13,bridge_name='br-int',has_traffic_filtering=True,id=927e4c78-95c0-407e-ab92-2c7457c30f72,network=Network(d0b488df-4f94-4347-9251-9c74a85cff63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap927e4c78-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.866 2 DEBUG nova.objects.instance [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.882 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:38:10 compute-0 nova_compute[260935]:   <uuid>d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b</uuid>
Oct 11 09:38:10 compute-0 nova_compute[260935]:   <name>instance-0000008f</name>
Oct 11 09:38:10 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:38:10 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:38:10 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-901075224</nova:name>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:38:09</nova:creationTime>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:38:10 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:38:10 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:38:10 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:38:10 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:38:10 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:38:10 compute-0 nova_compute[260935]:         <nova:user uuid="489c4d0457354f4684f8b9e53261224f">tempest-TestSecurityGroupsBasicOps-607770139-project-member</nova:user>
Oct 11 09:38:10 compute-0 nova_compute[260935]:         <nova:project uuid="81e7096f23df4e7d8782cf98d09d54e9">tempest-TestSecurityGroupsBasicOps-607770139</nova:project>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:38:10 compute-0 nova_compute[260935]:         <nova:port uuid="927e4c78-95c0-407e-ab92-2c7457c30f72">
Oct 11 09:38:10 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:38:10 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:38:10 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <system>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <entry name="serial">d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b</entry>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <entry name="uuid">d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b</entry>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     </system>
Oct 11 09:38:10 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:38:10 compute-0 nova_compute[260935]:   <os>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:   </os>
Oct 11 09:38:10 compute-0 nova_compute[260935]:   <features>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:   </features>
Oct 11 09:38:10 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:38:10 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:38:10 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk">
Oct 11 09:38:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       </source>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:38:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk.config">
Oct 11 09:38:10 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       </source>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:38:10 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:ff:f3:13"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <target dev="tap927e4c78-95"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b/console.log" append="off"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <video>
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     </video>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:38:10 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:38:10 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:38:10 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:38:10 compute-0 nova_compute[260935]: </domain>
Oct 11 09:38:10 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.884 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Preparing to wait for external event network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.885 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.885 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.886 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.887 2 DEBUG nova.virt.libvirt.vif [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:38:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-901075224',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-901075224',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=143,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDeAiXe5BxSlKhTWUA0h8drhicmm4lHT+GCh5IJOhzOrvcV2CoWAUh+e7G/x4nbCWm9Jvt6uJONBp2OY5zPdoAKrYlSLLfE4Vnt6Ll8at8odUFHqxgC501PsHRAdCCNJdw==',key_name='tempest-TestSecurityGroupsBasicOps-904423859',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-9nlmw0id',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:38:06Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.888 2 DEBUG nova.network.os_vif_util [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.889 2 DEBUG nova.network.os_vif_util [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:f3:13,bridge_name='br-int',has_traffic_filtering=True,id=927e4c78-95c0-407e-ab92-2c7457c30f72,network=Network(d0b488df-4f94-4347-9251-9c74a85cff63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap927e4c78-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.890 2 DEBUG os_vif [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:f3:13,bridge_name='br-int',has_traffic_filtering=True,id=927e4c78-95c0-407e-ab92-2c7457c30f72,network=Network(d0b488df-4f94-4347-9251-9c74a85cff63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap927e4c78-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.892 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.893 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.898 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap927e4c78-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.899 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap927e4c78-95, col_values=(('external_ids', {'iface-id': '927e4c78-95c0-407e-ab92-2c7457c30f72', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:f3:13', 'vm-uuid': 'd61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:38:10 compute-0 NetworkManager[44960]: <info>  [1760175490.9033] manager: (tap927e4c78-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/614)
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.914 2 INFO os_vif [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:f3:13,bridge_name='br-int',has_traffic_filtering=True,id=927e4c78-95c0-407e-ab92-2c7457c30f72,network=Network(d0b488df-4f94-4347-9251-9c74a85cff63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap927e4c78-95')
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.976 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.976 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.977 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No VIF found with MAC fa:16:3e:ff:f3:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:38:10 compute-0 nova_compute[260935]: 2025-10-11 09:38:10.977 2 INFO nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Using config drive
Oct 11 09:38:11 compute-0 nova_compute[260935]: 2025-10-11 09:38:11.004 2 DEBUG nova.storage.rbd_utils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:38:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2896: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 11 09:38:11 compute-0 nova_compute[260935]: 2025-10-11 09:38:11.283 2 INFO nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Creating config drive at /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b/disk.config
Oct 11 09:38:11 compute-0 nova_compute[260935]: 2025-10-11 09:38:11.293 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygd86x75 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:38:11 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1368580161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:38:11 compute-0 ceph-mon[74313]: pgmap v2896: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct 11 09:38:11 compute-0 nova_compute[260935]: 2025-10-11 09:38:11.451 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygd86x75" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:38:11 compute-0 nova_compute[260935]: 2025-10-11 09:38:11.492 2 DEBUG nova.storage.rbd_utils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:38:11 compute-0 nova_compute[260935]: 2025-10-11 09:38:11.497 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b/disk.config d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:38:11 compute-0 nova_compute[260935]: 2025-10-11 09:38:11.685 2 DEBUG oslo_concurrency.processutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b/disk.config d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:38:11 compute-0 nova_compute[260935]: 2025-10-11 09:38:11.687 2 INFO nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Deleting local config drive /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b/disk.config because it was imported into RBD.
Oct 11 09:38:11 compute-0 kernel: tap927e4c78-95: entered promiscuous mode
Oct 11 09:38:11 compute-0 NetworkManager[44960]: <info>  [1760175491.7597] manager: (tap927e4c78-95): new Tun device (/org/freedesktop/NetworkManager/Devices/615)
Oct 11 09:38:11 compute-0 systemd-udevd[420968]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:38:11 compute-0 ovn_controller[152945]: 2025-10-11T09:38:11Z|01611|binding|INFO|Claiming lport 927e4c78-95c0-407e-ab92-2c7457c30f72 for this chassis.
Oct 11 09:38:11 compute-0 ovn_controller[152945]: 2025-10-11T09:38:11Z|01612|binding|INFO|927e4c78-95c0-407e-ab92-2c7457c30f72: Claiming fa:16:3e:ff:f3:13 10.100.0.5
Oct 11 09:38:11 compute-0 nova_compute[260935]: 2025-10-11 09:38:11.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:11 compute-0 NetworkManager[44960]: <info>  [1760175491.8252] device (tap927e4c78-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:38:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.825 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:f3:13 10.100.0.5'], port_security=['fa:16:3e:ff:f3:13 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b488df-4f94-4347-9251-9c74a85cff63', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '007bd862-2a46-4a84-aea1-941ef61ceb34 a1c26107-dbf4-42a0-ae54-f39f52094af5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f689b1-ae42-4f44-bdc0-840884eacd12, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=927e4c78-95c0-407e-ab92-2c7457c30f72) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:38:11 compute-0 NetworkManager[44960]: <info>  [1760175491.8266] device (tap927e4c78-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:38:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.826 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 927e4c78-95c0-407e-ab92-2c7457c30f72 in datapath d0b488df-4f94-4347-9251-9c74a85cff63 bound to our chassis
Oct 11 09:38:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.828 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0b488df-4f94-4347-9251-9c74a85cff63
Oct 11 09:38:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.851 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ac03e7b8-6766-4a7e-b6ae-55dfe8e416e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.854 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd0b488df-41 in ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:38:11 compute-0 systemd-machined[215705]: New machine qemu-167-instance-0000008f.
Oct 11 09:38:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.856 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd0b488df-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:38:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.856 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[410f5ae8-b78b-4e09-ab8c-3f390f332bf0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.857 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a883861c-0875-439d-8aec-a46b76105a5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.870 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0907bf-5726-422b-afda-20b12ecb484f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:11 compute-0 systemd[1]: Started Virtual Machine qemu-167-instance-0000008f.
Oct 11 09:38:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.900 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0c152aef-4967-4362-8665-8fc5b0a124c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:11 compute-0 nova_compute[260935]: 2025-10-11 09:38:11.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:11 compute-0 ovn_controller[152945]: 2025-10-11T09:38:11Z|01613|binding|INFO|Setting lport 927e4c78-95c0-407e-ab92-2c7457c30f72 ovn-installed in OVS
Oct 11 09:38:11 compute-0 ovn_controller[152945]: 2025-10-11T09:38:11Z|01614|binding|INFO|Setting lport 927e4c78-95c0-407e-ab92-2c7457c30f72 up in Southbound
Oct 11 09:38:11 compute-0 nova_compute[260935]: 2025-10-11 09:38:11.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.950 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4eb8c2-486b-40b5-b219-22388ca1c33c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:11 compute-0 NetworkManager[44960]: <info>  [1760175491.9577] manager: (tapd0b488df-40): new Veth device (/org/freedesktop/NetworkManager/Devices/616)
Oct 11 09:38:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:11.955 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2d3c52-60a0-4470-a60d-913cf1618a36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:11 compute-0 systemd-udevd[420971]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.000 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[761ad54d-c54c-4452-862e-8aed15e84203]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.004 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f409afb7-57a1-42cd-9a48-3818830ed4d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:12 compute-0 NetworkManager[44960]: <info>  [1760175492.0301] device (tapd0b488df-40): carrier: link connected
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.036 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[870d60e7-17fa-42cd-934c-a3d861b2927a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.059 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6779d3b4-2f2b-4240-9d60-758fb2efef31]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0b488df-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:41:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 426], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733673, 'reachable_time': 36704, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 421004, 'error': None, 'target': 'ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.085 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e761712b-6d3b-4738-a773-8e7c089c1ad6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:41da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733673, 'tstamp': 733673}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421005, 'error': None, 'target': 'ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.113 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b8cf7e2d-289f-4b78-b887-1c259280faa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0b488df-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:41:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 426], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733673, 'reachable_time': 36704, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 421006, 'error': None, 'target': 'ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.156 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f0344909-25ec-4c83-919d-907d83cb3e7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.225 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bf3a0610-739b-4ad5-b771-94489f2c0633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.227 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0b488df-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.227 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.228 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0b488df-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:12 compute-0 NetworkManager[44960]: <info>  [1760175492.2310] manager: (tapd0b488df-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/617)
Oct 11 09:38:12 compute-0 kernel: tapd0b488df-40: entered promiscuous mode
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.238 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0b488df-40, col_values=(('external_ids', {'iface-id': '42365f39-7cdb-44f9-ba26-d9bbfa355140'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:12 compute-0 ovn_controller[152945]: 2025-10-11T09:38:12Z|01615|binding|INFO|Releasing lport 42365f39-7cdb-44f9-ba26-d9bbfa355140 from this chassis (sb_readonly=0)
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.271 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d0b488df-4f94-4347-9251-9c74a85cff63.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d0b488df-4f94-4347-9251-9c74a85cff63.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.272 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7f4542-f31f-4dd7-a18c-77b38791129a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.273 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-d0b488df-4f94-4347-9251-9c74a85cff63
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/d0b488df-4f94-4347-9251-9c74a85cff63.pid.haproxy
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID d0b488df-4f94-4347-9251-9c74a85cff63
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:38:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:12.274 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63', 'env', 'PROCESS_TAG=haproxy-d0b488df-4f94-4347-9251-9c74a85cff63', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d0b488df-4f94-4347-9251-9c74a85cff63.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:38:12 compute-0 podman[421080]: 2025-10-11 09:38:12.682986918 +0000 UTC m=+0.057334999 container create e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:38:12 compute-0 systemd[1]: Started libpod-conmon-e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6.scope.
Oct 11 09:38:12 compute-0 podman[421080]: 2025-10-11 09:38:12.651843325 +0000 UTC m=+0.026191426 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:38:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:38:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4cf93020fc98090af1b4c5a6be3fdf7b017ecf3820002b4c2da663e5a79eaa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:38:12 compute-0 podman[421080]: 2025-10-11 09:38:12.786692347 +0000 UTC m=+0.161040468 container init e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.794 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175492.7935276, d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.794 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] VM Started (Lifecycle Event)
Oct 11 09:38:12 compute-0 podman[421080]: 2025-10-11 09:38:12.799064174 +0000 UTC m=+0.173412295 container start e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.821 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.824 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175492.7948625, d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.825 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] VM Paused (Lifecycle Event)
Oct 11 09:38:12 compute-0 neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63[421095]: [NOTICE]   (421099) : New worker (421101) forked
Oct 11 09:38:12 compute-0 neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63[421095]: [NOTICE]   (421099) : Loading success.
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.846 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.850 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.865 2 DEBUG nova.compute.manager [req-a7c1f1c5-fbd5-40bc-8b59-d3f29925c231 req-f825efc9-bab9-4cb2-bf06-8642d83f3652 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.866 2 DEBUG oslo_concurrency.lockutils [req-a7c1f1c5-fbd5-40bc-8b59-d3f29925c231 req-f825efc9-bab9-4cb2-bf06-8642d83f3652 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.866 2 DEBUG oslo_concurrency.lockutils [req-a7c1f1c5-fbd5-40bc-8b59-d3f29925c231 req-f825efc9-bab9-4cb2-bf06-8642d83f3652 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.867 2 DEBUG oslo_concurrency.lockutils [req-a7c1f1c5-fbd5-40bc-8b59-d3f29925c231 req-f825efc9-bab9-4cb2-bf06-8642d83f3652 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.867 2 DEBUG nova.compute.manager [req-a7c1f1c5-fbd5-40bc-8b59-d3f29925c231 req-f825efc9-bab9-4cb2-bf06-8642d83f3652 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Processing event network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.868 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.869 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.871 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175492.8712468, d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.871 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] VM Resumed (Lifecycle Event)
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.873 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.880 2 INFO nova.virt.libvirt.driver [-] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Instance spawned successfully.
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.880 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.887 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.890 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.900 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.901 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.901 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.902 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.902 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.903 2 DEBUG nova.virt.libvirt.driver [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.907 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.955 2 INFO nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Took 6.25 seconds to spawn the instance on the hypervisor.
Oct 11 09:38:12 compute-0 nova_compute[260935]: 2025-10-11 09:38:12.956 2 DEBUG nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:38:13 compute-0 nova_compute[260935]: 2025-10-11 09:38:13.011 2 INFO nova.compute.manager [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Took 7.34 seconds to build instance.
Oct 11 09:38:13 compute-0 nova_compute[260935]: 2025-10-11 09:38:13.026 2 DEBUG oslo_concurrency.lockutils [None req-455544e6-47d2-4409-98a3-ece9099a8dc9 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:38:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2897: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:38:13 compute-0 ceph-mon[74313]: pgmap v2897: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:38:13 compute-0 podman[421110]: 2025-10-11 09:38:13.811134439 +0000 UTC m=+0.102252719 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Oct 11 09:38:13 compute-0 podman[421111]: 2025-10-11 09:38:13.854950048 +0000 UTC m=+0.144935326 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:38:14 compute-0 nova_compute[260935]: 2025-10-11 09:38:14.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:38:14 compute-0 nova_compute[260935]: 2025-10-11 09:38:14.941 2 DEBUG nova.compute.manager [req-3fe15ade-a7bc-49ae-930a-234a6430574b req-c7253e32-24b9-4c06-8af2-c7b5f92918c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:38:14 compute-0 nova_compute[260935]: 2025-10-11 09:38:14.943 2 DEBUG oslo_concurrency.lockutils [req-3fe15ade-a7bc-49ae-930a-234a6430574b req-c7253e32-24b9-4c06-8af2-c7b5f92918c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:38:14 compute-0 nova_compute[260935]: 2025-10-11 09:38:14.944 2 DEBUG oslo_concurrency.lockutils [req-3fe15ade-a7bc-49ae-930a-234a6430574b req-c7253e32-24b9-4c06-8af2-c7b5f92918c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:38:14 compute-0 nova_compute[260935]: 2025-10-11 09:38:14.944 2 DEBUG oslo_concurrency.lockutils [req-3fe15ade-a7bc-49ae-930a-234a6430574b req-c7253e32-24b9-4c06-8af2-c7b5f92918c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:38:14 compute-0 nova_compute[260935]: 2025-10-11 09:38:14.945 2 DEBUG nova.compute.manager [req-3fe15ade-a7bc-49ae-930a-234a6430574b req-c7253e32-24b9-4c06-8af2-c7b5f92918c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] No waiting events found dispatching network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:38:14 compute-0 nova_compute[260935]: 2025-10-11 09:38:14.946 2 WARNING nova.compute.manager [req-3fe15ade-a7bc-49ae-930a-234a6430574b req-c7253e32-24b9-4c06-8af2-c7b5f92918c9 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received unexpected event network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 for instance with vm_state active and task_state None.
Oct 11 09:38:15 compute-0 sshd-session[421154]: Invalid user njzf from 155.4.244.179 port 33414
Oct 11 09:38:15 compute-0 sshd-session[421154]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:38:15 compute-0 sshd-session[421154]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:38:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:15.234 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:38:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:15.235 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:38:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:15.236 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:38:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2898: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:38:15 compute-0 ceph-mon[74313]: pgmap v2898: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:38:15 compute-0 nova_compute[260935]: 2025-10-11 09:38:15.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:16 compute-0 ovn_controller[152945]: 2025-10-11T09:38:16Z|01616|binding|INFO|Releasing lport 42365f39-7cdb-44f9-ba26-d9bbfa355140 from this chassis (sb_readonly=0)
Oct 11 09:38:16 compute-0 ovn_controller[152945]: 2025-10-11T09:38:16Z|01617|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:38:16 compute-0 ovn_controller[152945]: 2025-10-11T09:38:16Z|01618|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:38:16 compute-0 NetworkManager[44960]: <info>  [1760175496.8446] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/618)
Oct 11 09:38:16 compute-0 NetworkManager[44960]: <info>  [1760175496.8457] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/619)
Oct 11 09:38:16 compute-0 nova_compute[260935]: 2025-10-11 09:38:16.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:16 compute-0 ovn_controller[152945]: 2025-10-11T09:38:16Z|01619|binding|INFO|Releasing lport 42365f39-7cdb-44f9-ba26-d9bbfa355140 from this chassis (sb_readonly=0)
Oct 11 09:38:16 compute-0 ovn_controller[152945]: 2025-10-11T09:38:16Z|01620|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:38:16 compute-0 ovn_controller[152945]: 2025-10-11T09:38:16Z|01621|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:38:16 compute-0 nova_compute[260935]: 2025-10-11 09:38:16.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:16 compute-0 nova_compute[260935]: 2025-10-11 09:38:16.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2899: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:38:17 compute-0 nova_compute[260935]: 2025-10-11 09:38:17.287 2 DEBUG nova.compute.manager [req-a1a6c7bc-0c3e-4b71-abda-e7ef9266678b req-020dcead-bdae-46e3-bf68-f319f660d275 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-changed-927e4c78-95c0-407e-ab92-2c7457c30f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:38:17 compute-0 nova_compute[260935]: 2025-10-11 09:38:17.288 2 DEBUG nova.compute.manager [req-a1a6c7bc-0c3e-4b71-abda-e7ef9266678b req-020dcead-bdae-46e3-bf68-f319f660d275 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Refreshing instance network info cache due to event network-changed-927e4c78-95c0-407e-ab92-2c7457c30f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:38:17 compute-0 nova_compute[260935]: 2025-10-11 09:38:17.288 2 DEBUG oslo_concurrency.lockutils [req-a1a6c7bc-0c3e-4b71-abda-e7ef9266678b req-020dcead-bdae-46e3-bf68-f319f660d275 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:38:17 compute-0 nova_compute[260935]: 2025-10-11 09:38:17.289 2 DEBUG oslo_concurrency.lockutils [req-a1a6c7bc-0c3e-4b71-abda-e7ef9266678b req-020dcead-bdae-46e3-bf68-f319f660d275 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:38:17 compute-0 nova_compute[260935]: 2025-10-11 09:38:17.289 2 DEBUG nova.network.neutron [req-a1a6c7bc-0c3e-4b71-abda-e7ef9266678b req-020dcead-bdae-46e3-bf68-f319f660d275 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Refreshing network info cache for port 927e4c78-95c0-407e-ab92-2c7457c30f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:38:17 compute-0 ceph-mon[74313]: pgmap v2899: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:38:17 compute-0 sshd-session[421154]: Failed password for invalid user njzf from 155.4.244.179 port 33414 ssh2
Oct 11 09:38:18 compute-0 nova_compute[260935]: 2025-10-11 09:38:18.673 2 DEBUG nova.network.neutron [req-a1a6c7bc-0c3e-4b71-abda-e7ef9266678b req-020dcead-bdae-46e3-bf68-f319f660d275 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Updated VIF entry in instance network info cache for port 927e4c78-95c0-407e-ab92-2c7457c30f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:38:18 compute-0 nova_compute[260935]: 2025-10-11 09:38:18.674 2 DEBUG nova.network.neutron [req-a1a6c7bc-0c3e-4b71-abda-e7ef9266678b req-020dcead-bdae-46e3-bf68-f319f660d275 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Updating instance_info_cache with network_info: [{"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:38:18 compute-0 nova_compute[260935]: 2025-10-11 09:38:18.703 2 DEBUG oslo_concurrency.lockutils [req-a1a6c7bc-0c3e-4b71-abda-e7ef9266678b req-020dcead-bdae-46e3-bf68-f319f660d275 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:38:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2900: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:38:19 compute-0 nova_compute[260935]: 2025-10-11 09:38:19.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:19 compute-0 ceph-mon[74313]: pgmap v2900: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:38:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:38:19 compute-0 sshd-session[421154]: Received disconnect from 155.4.244.179 port 33414:11: Bye Bye [preauth]
Oct 11 09:38:19 compute-0 sshd-session[421154]: Disconnected from invalid user njzf 155.4.244.179 port 33414 [preauth]
Oct 11 09:38:20 compute-0 nova_compute[260935]: 2025-10-11 09:38:20.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2901: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 11 09:38:21 compute-0 ceph-mon[74313]: pgmap v2901: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 11 09:38:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2902: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 11 09:38:23 compute-0 ceph-mon[74313]: pgmap v2902: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 11 09:38:23 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct 11 09:38:23 compute-0 sshd-session[421157]: Invalid user mysql from 165.232.82.252 port 59792
Oct 11 09:38:23 compute-0 sshd-session[421157]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:38:23 compute-0 sshd-session[421157]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:38:24 compute-0 nova_compute[260935]: 2025-10-11 09:38:24.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:38:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:38:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:38:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:38:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:38:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:38:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:38:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2903: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:38:25 compute-0 ceph-mon[74313]: pgmap v2903: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:38:25 compute-0 ovn_controller[152945]: 2025-10-11T09:38:25Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ff:f3:13 10.100.0.5
Oct 11 09:38:25 compute-0 ovn_controller[152945]: 2025-10-11T09:38:25Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:f3:13 10.100.0.5
Oct 11 09:38:25 compute-0 nova_compute[260935]: 2025-10-11 09:38:25.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:25 compute-0 sshd-session[421157]: Failed password for invalid user mysql from 165.232.82.252 port 59792 ssh2
Oct 11 09:38:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:38:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1863885401' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:38:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:38:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1863885401' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:38:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1863885401' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:38:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1863885401' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:38:26 compute-0 sshd-session[421157]: Connection closed by invalid user mysql 165.232.82.252 port 59792 [preauth]
Oct 11 09:38:26 compute-0 sudo[421160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:38:26 compute-0 sudo[421160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:26 compute-0 sudo[421160]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:26 compute-0 sudo[421185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:38:26 compute-0 sudo[421185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:26 compute-0 sudo[421185]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:27 compute-0 sudo[421210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:38:27 compute-0 sudo[421210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:27 compute-0 sudo[421210]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:27 compute-0 sudo[421235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:38:27 compute-0 sudo[421235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2904: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:38:27 compute-0 sudo[421235]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:27 compute-0 ceph-mon[74313]: pgmap v2904: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:38:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:38:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:38:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:38:27 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:38:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:38:27 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:38:27 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev f0500dc0-9c92-4582-966b-e21f96bee81c does not exist
Oct 11 09:38:27 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 845a3f9e-a596-467c-a57d-984347e8c60a does not exist
Oct 11 09:38:27 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev fc9ae470-784c-4f56-8889-fd06a2f9fd3f does not exist
Oct 11 09:38:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:38:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:38:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:38:27 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:38:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:38:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:38:27 compute-0 sudo[421293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:38:27 compute-0 sudo[421293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:27 compute-0 sudo[421293]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:27 compute-0 sudo[421318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:38:27 compute-0 sudo[421318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:27 compute-0 sudo[421318]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:28 compute-0 sudo[421343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:38:28 compute-0 sudo[421343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:28 compute-0 sudo[421343]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:28 compute-0 sudo[421368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:38:28 compute-0 sudo[421368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:28 compute-0 podman[421436]: 2025-10-11 09:38:28.415759263 +0000 UTC m=+0.052671388 container create d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:38:28 compute-0 systemd[1]: Started libpod-conmon-d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a.scope.
Oct 11 09:38:28 compute-0 podman[421436]: 2025-10-11 09:38:28.386717759 +0000 UTC m=+0.023629924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:38:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:38:28 compute-0 podman[421436]: 2025-10-11 09:38:28.525401239 +0000 UTC m=+0.162313344 container init d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 09:38:28 compute-0 podman[421436]: 2025-10-11 09:38:28.531832509 +0000 UTC m=+0.168744594 container start d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:38:28 compute-0 podman[421436]: 2025-10-11 09:38:28.535608375 +0000 UTC m=+0.172520460 container attach d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 09:38:28 compute-0 vigorous_faraday[421452]: 167 167
Oct 11 09:38:28 compute-0 systemd[1]: libpod-d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a.scope: Deactivated successfully.
Oct 11 09:38:28 compute-0 conmon[421452]: conmon d232ecf4abb5b9323cf7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a.scope/container/memory.events
Oct 11 09:38:28 compute-0 podman[421436]: 2025-10-11 09:38:28.540228165 +0000 UTC m=+0.177140350 container died d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-99c45faafe4b6b63ee2e9cfa0bbc2a578e0994c3bd71be86cda5c59d2d40c702-merged.mount: Deactivated successfully.
Oct 11 09:38:28 compute-0 podman[421436]: 2025-10-11 09:38:28.582231353 +0000 UTC m=+0.219143448 container remove d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 09:38:28 compute-0 systemd[1]: libpod-conmon-d232ecf4abb5b9323cf7cb65d28f5a02fb6fc7276c9c595a1a660b6eb148d06a.scope: Deactivated successfully.
Oct 11 09:38:28 compute-0 nova_compute[260935]: 2025-10-11 09:38:28.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:38:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:38:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:38:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:38:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:38:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:38:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:38:28 compute-0 podman[421475]: 2025-10-11 09:38:28.788171639 +0000 UTC m=+0.046959938 container create 1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 09:38:28 compute-0 systemd[1]: Started libpod-conmon-1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759.scope.
Oct 11 09:38:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7b348d223a1a1cbe4cb2df9e2dcbb0d8ccea8c087d59d5bab84b0dcce19bbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7b348d223a1a1cbe4cb2df9e2dcbb0d8ccea8c087d59d5bab84b0dcce19bbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7b348d223a1a1cbe4cb2df9e2dcbb0d8ccea8c087d59d5bab84b0dcce19bbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7b348d223a1a1cbe4cb2df9e2dcbb0d8ccea8c087d59d5bab84b0dcce19bbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7b348d223a1a1cbe4cb2df9e2dcbb0d8ccea8c087d59d5bab84b0dcce19bbb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:38:28 compute-0 podman[421475]: 2025-10-11 09:38:28.771871432 +0000 UTC m=+0.030659741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:38:28 compute-0 podman[421475]: 2025-10-11 09:38:28.878394599 +0000 UTC m=+0.137182888 container init 1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 09:38:28 compute-0 podman[421475]: 2025-10-11 09:38:28.884317845 +0000 UTC m=+0.143106134 container start 1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:38:28 compute-0 podman[421475]: 2025-10-11 09:38:28.888023509 +0000 UTC m=+0.146811828 container attach 1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:38:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2905: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 11 09:38:29 compute-0 nova_compute[260935]: 2025-10-11 09:38:29.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:38:29 compute-0 ceph-mon[74313]: pgmap v2905: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 11 09:38:30 compute-0 eloquent_thompson[421492]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:38:30 compute-0 eloquent_thompson[421492]: --> relative data size: 1.0
Oct 11 09:38:30 compute-0 eloquent_thompson[421492]: --> All data devices are unavailable
Oct 11 09:38:30 compute-0 systemd[1]: libpod-1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759.scope: Deactivated successfully.
Oct 11 09:38:30 compute-0 systemd[1]: libpod-1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759.scope: Consumed 1.094s CPU time.
Oct 11 09:38:30 compute-0 podman[421475]: 2025-10-11 09:38:30.053431546 +0000 UTC m=+1.312219915 container died 1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 09:38:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b7b348d223a1a1cbe4cb2df9e2dcbb0d8ccea8c087d59d5bab84b0dcce19bbb-merged.mount: Deactivated successfully.
Oct 11 09:38:30 compute-0 podman[421475]: 2025-10-11 09:38:30.134393557 +0000 UTC m=+1.393181856 container remove 1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:38:30 compute-0 systemd[1]: libpod-conmon-1feccc7810243549367a36941a49e1ce90fcba5923c0342216789520c45c3759.scope: Deactivated successfully.
Oct 11 09:38:30 compute-0 sudo[421368]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:30 compute-0 sudo[421534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:38:30 compute-0 sudo[421534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:30 compute-0 sudo[421534]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:30 compute-0 sudo[421559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:38:30 compute-0 sudo[421559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:30 compute-0 sudo[421559]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:30 compute-0 sudo[421584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:38:30 compute-0 sudo[421584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:30 compute-0 sudo[421584]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:30 compute-0 sudo[421609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:38:30 compute-0 sudo[421609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:30 compute-0 nova_compute[260935]: 2025-10-11 09:38:30.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:31 compute-0 podman[421676]: 2025-10-11 09:38:31.04903511 +0000 UTC m=+0.066890447 container create 767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:38:31 compute-0 systemd[1]: Started libpod-conmon-767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a.scope.
Oct 11 09:38:31 compute-0 podman[421676]: 2025-10-11 09:38:31.021978011 +0000 UTC m=+0.039833388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:38:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:38:31 compute-0 podman[421676]: 2025-10-11 09:38:31.152374368 +0000 UTC m=+0.170229695 container init 767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:38:31 compute-0 podman[421676]: 2025-10-11 09:38:31.167208964 +0000 UTC m=+0.185064301 container start 767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 09:38:31 compute-0 podman[421676]: 2025-10-11 09:38:31.172278906 +0000 UTC m=+0.190134213 container attach 767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:38:31 compute-0 confident_goldwasser[421693]: 167 167
Oct 11 09:38:31 compute-0 systemd[1]: libpod-767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a.scope: Deactivated successfully.
Oct 11 09:38:31 compute-0 podman[421676]: 2025-10-11 09:38:31.176786153 +0000 UTC m=+0.194641480 container died 767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:38:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c820a7028d3f2d9d93b59a4d3901cb356c03161f549994d142a0338f4bebc2a4-merged.mount: Deactivated successfully.
Oct 11 09:38:31 compute-0 podman[421676]: 2025-10-11 09:38:31.236989061 +0000 UTC m=+0.254844408 container remove 767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 09:38:31 compute-0 systemd[1]: libpod-conmon-767c7a118d00fd395b1ca9b52bb1f0ae114c09acbadcc0a2c8e5d9f60efa139a.scope: Deactivated successfully.
Oct 11 09:38:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2906: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:38:31 compute-0 ceph-mon[74313]: pgmap v2906: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:38:31 compute-0 podman[421718]: 2025-10-11 09:38:31.533388915 +0000 UTC m=+0.055327933 container create e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 09:38:31 compute-0 systemd[1]: Started libpod-conmon-e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5.scope.
Oct 11 09:38:31 compute-0 podman[421718]: 2025-10-11 09:38:31.509361681 +0000 UTC m=+0.031300729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:38:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514eb36ec30afb7434542db6c6887913db66d3c2492b2de91f25e814eb03528e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514eb36ec30afb7434542db6c6887913db66d3c2492b2de91f25e814eb03528e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514eb36ec30afb7434542db6c6887913db66d3c2492b2de91f25e814eb03528e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514eb36ec30afb7434542db6c6887913db66d3c2492b2de91f25e814eb03528e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:38:31 compute-0 podman[421718]: 2025-10-11 09:38:31.643790141 +0000 UTC m=+0.165729159 container init e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 11 09:38:31 compute-0 podman[421718]: 2025-10-11 09:38:31.659882942 +0000 UTC m=+0.181821930 container start e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:38:31 compute-0 podman[421718]: 2025-10-11 09:38:31.664482491 +0000 UTC m=+0.186421509 container attach e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 09:38:32 compute-0 clever_beaver[421737]: {
Oct 11 09:38:32 compute-0 clever_beaver[421737]:     "0": [
Oct 11 09:38:32 compute-0 clever_beaver[421737]:         {
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "devices": [
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "/dev/loop3"
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             ],
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "lv_name": "ceph_lv0",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "lv_size": "21470642176",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "name": "ceph_lv0",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "tags": {
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.cluster_name": "ceph",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.crush_device_class": "",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.encrypted": "0",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.osd_id": "0",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.type": "block",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.vdo": "0"
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             },
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "type": "block",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "vg_name": "ceph_vg0"
Oct 11 09:38:32 compute-0 clever_beaver[421737]:         }
Oct 11 09:38:32 compute-0 clever_beaver[421737]:     ],
Oct 11 09:38:32 compute-0 clever_beaver[421737]:     "1": [
Oct 11 09:38:32 compute-0 clever_beaver[421737]:         {
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "devices": [
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "/dev/loop4"
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             ],
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "lv_name": "ceph_lv1",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "lv_size": "21470642176",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "name": "ceph_lv1",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "tags": {
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.cluster_name": "ceph",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.crush_device_class": "",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.encrypted": "0",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.osd_id": "1",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.type": "block",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.vdo": "0"
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             },
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "type": "block",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "vg_name": "ceph_vg1"
Oct 11 09:38:32 compute-0 clever_beaver[421737]:         }
Oct 11 09:38:32 compute-0 clever_beaver[421737]:     ],
Oct 11 09:38:32 compute-0 clever_beaver[421737]:     "2": [
Oct 11 09:38:32 compute-0 clever_beaver[421737]:         {
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "devices": [
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "/dev/loop5"
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             ],
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "lv_name": "ceph_lv2",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "lv_size": "21470642176",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "name": "ceph_lv2",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "tags": {
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.cluster_name": "ceph",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.crush_device_class": "",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.encrypted": "0",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.osd_id": "2",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.type": "block",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:                 "ceph.vdo": "0"
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             },
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "type": "block",
Oct 11 09:38:32 compute-0 clever_beaver[421737]:             "vg_name": "ceph_vg2"
Oct 11 09:38:32 compute-0 clever_beaver[421737]:         }
Oct 11 09:38:32 compute-0 clever_beaver[421737]:     ]
Oct 11 09:38:32 compute-0 clever_beaver[421737]: }
Oct 11 09:38:32 compute-0 systemd[1]: libpod-e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5.scope: Deactivated successfully.
Oct 11 09:38:32 compute-0 podman[421718]: 2025-10-11 09:38:32.504972365 +0000 UTC m=+1.026911363 container died e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 09:38:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-514eb36ec30afb7434542db6c6887913db66d3c2492b2de91f25e814eb03528e-merged.mount: Deactivated successfully.
Oct 11 09:38:32 compute-0 podman[421718]: 2025-10-11 09:38:32.585212786 +0000 UTC m=+1.107151774 container remove e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:38:32 compute-0 systemd[1]: libpod-conmon-e2fa2fc09f08fbc2955e3dfa9622496019c99547cf882fcced71cc47864296a5.scope: Deactivated successfully.
Oct 11 09:38:32 compute-0 sudo[421609]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:32 compute-0 podman[421747]: 2025-10-11 09:38:32.637898354 +0000 UTC m=+0.089890113 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 09:38:32 compute-0 sudo[421776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:38:32 compute-0 sudo[421776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:32 compute-0 sudo[421776]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:32 compute-0 sudo[421801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:38:32 compute-0 sudo[421801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:32 compute-0 sudo[421801]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:32 compute-0 sudo[421826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:38:32 compute-0 sudo[421826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:32 compute-0 sudo[421826]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:32 compute-0 sudo[421851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:38:32 compute-0 sudo[421851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2907: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:38:33 compute-0 ceph-mon[74313]: pgmap v2907: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:38:33 compute-0 podman[421915]: 2025-10-11 09:38:33.423650652 +0000 UTC m=+0.056492096 container create 0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:38:33 compute-0 systemd[1]: Started libpod-conmon-0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432.scope.
Oct 11 09:38:33 compute-0 podman[421915]: 2025-10-11 09:38:33.393724553 +0000 UTC m=+0.026566087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:38:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:38:33 compute-0 podman[421915]: 2025-10-11 09:38:33.517727511 +0000 UTC m=+0.150568985 container init 0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 09:38:33 compute-0 podman[421915]: 2025-10-11 09:38:33.526495266 +0000 UTC m=+0.159336700 container start 0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_poitras, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 11 09:38:33 compute-0 podman[421915]: 2025-10-11 09:38:33.530078057 +0000 UTC m=+0.162919531 container attach 0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_poitras, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:38:33 compute-0 stupefied_poitras[421931]: 167 167
Oct 11 09:38:33 compute-0 systemd[1]: libpod-0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432.scope: Deactivated successfully.
Oct 11 09:38:33 compute-0 podman[421915]: 2025-10-11 09:38:33.53622884 +0000 UTC m=+0.169070274 container died 0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_poitras, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 09:38:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-bda4bd7722d054a4b2028e6f117706736c6ad75d5fe14a0d79543938e230d584-merged.mount: Deactivated successfully.
Oct 11 09:38:33 compute-0 podman[421915]: 2025-10-11 09:38:33.578380432 +0000 UTC m=+0.211221866 container remove 0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_poitras, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 09:38:33 compute-0 systemd[1]: libpod-conmon-0d1f4c0004ad48911c025b9cc849fee1e9ab6507752883454c89ca1bbd692432.scope: Deactivated successfully.
Oct 11 09:38:33 compute-0 podman[421954]: 2025-10-11 09:38:33.789226256 +0000 UTC m=+0.049650774 container create 1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 09:38:33 compute-0 systemd[1]: Started libpod-conmon-1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2.scope.
Oct 11 09:38:33 compute-0 podman[421954]: 2025-10-11 09:38:33.771528759 +0000 UTC m=+0.031953277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:38:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c77aade803147110c789c16801407fca6d5477d3d3693c9967c77eb808d421/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c77aade803147110c789c16801407fca6d5477d3d3693c9967c77eb808d421/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c77aade803147110c789c16801407fca6d5477d3d3693c9967c77eb808d421/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87c77aade803147110c789c16801407fca6d5477d3d3693c9967c77eb808d421/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:38:33 compute-0 podman[421954]: 2025-10-11 09:38:33.905067965 +0000 UTC m=+0.165492463 container init 1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:38:33 compute-0 podman[421954]: 2025-10-11 09:38:33.918943474 +0000 UTC m=+0.179367982 container start 1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:38:33 compute-0 podman[421954]: 2025-10-11 09:38:33.942761682 +0000 UTC m=+0.203186220 container attach 1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 09:38:34 compute-0 nova_compute[260935]: 2025-10-11 09:38:34.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:38:34 compute-0 busy_bartik[421970]: {
Oct 11 09:38:34 compute-0 busy_bartik[421970]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:38:34 compute-0 busy_bartik[421970]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:38:34 compute-0 busy_bartik[421970]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:38:34 compute-0 busy_bartik[421970]:         "osd_id": 2,
Oct 11 09:38:34 compute-0 busy_bartik[421970]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:38:34 compute-0 busy_bartik[421970]:         "type": "bluestore"
Oct 11 09:38:34 compute-0 busy_bartik[421970]:     },
Oct 11 09:38:34 compute-0 busy_bartik[421970]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:38:34 compute-0 busy_bartik[421970]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:38:34 compute-0 busy_bartik[421970]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:38:34 compute-0 busy_bartik[421970]:         "osd_id": 0,
Oct 11 09:38:34 compute-0 busy_bartik[421970]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:38:34 compute-0 busy_bartik[421970]:         "type": "bluestore"
Oct 11 09:38:34 compute-0 busy_bartik[421970]:     },
Oct 11 09:38:34 compute-0 busy_bartik[421970]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:38:34 compute-0 busy_bartik[421970]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:38:34 compute-0 busy_bartik[421970]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:38:34 compute-0 busy_bartik[421970]:         "osd_id": 1,
Oct 11 09:38:34 compute-0 busy_bartik[421970]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:38:34 compute-0 busy_bartik[421970]:         "type": "bluestore"
Oct 11 09:38:34 compute-0 busy_bartik[421970]:     }
Oct 11 09:38:34 compute-0 busy_bartik[421970]: }
Oct 11 09:38:34 compute-0 systemd[1]: libpod-1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2.scope: Deactivated successfully.
Oct 11 09:38:34 compute-0 podman[421954]: 2025-10-11 09:38:34.905471403 +0000 UTC m=+1.165895941 container died 1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:38:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-87c77aade803147110c789c16801407fca6d5477d3d3693c9967c77eb808d421-merged.mount: Deactivated successfully.
Oct 11 09:38:34 compute-0 podman[421954]: 2025-10-11 09:38:34.969450087 +0000 UTC m=+1.229874585 container remove 1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 09:38:34 compute-0 systemd[1]: libpod-conmon-1680d5b3f7e1263595d679a0ce3ad23b2cd7b2c853caa280388970c429d776e2.scope: Deactivated successfully.
Oct 11 09:38:35 compute-0 sudo[421851]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:38:35 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:38:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:38:35 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:38:35 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9e5f9a70-6870-45f1-9abf-1ebc66165e38 does not exist
Oct 11 09:38:35 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 12591a8d-b54d-4d51-82fa-7f56d0050612 does not exist
Oct 11 09:38:35 compute-0 sudo[422017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:38:35 compute-0 sudo[422017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:35 compute-0 sudo[422017]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:35 compute-0 sudo[422042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:38:35 compute-0 sudo[422042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:38:35 compute-0 sudo[422042]: pam_unix(sudo:session): session closed for user root
Oct 11 09:38:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:35.216 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:38:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:35.218 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:38:35 compute-0 nova_compute[260935]: 2025-10-11 09:38:35.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2908: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:38:35 compute-0 nova_compute[260935]: 2025-10-11 09:38:35.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:35 compute-0 nova_compute[260935]: 2025-10-11 09:38:35.997 2 DEBUG nova.compute.manager [req-60ba0ff0-c067-4554-b5a9-0bfb7b14f6b9 req-f5dc0872-b655-4208-b8c5-b6b5f9011f56 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-changed-927e4c78-95c0-407e-ab92-2c7457c30f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:38:35 compute-0 nova_compute[260935]: 2025-10-11 09:38:35.998 2 DEBUG nova.compute.manager [req-60ba0ff0-c067-4554-b5a9-0bfb7b14f6b9 req-f5dc0872-b655-4208-b8c5-b6b5f9011f56 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Refreshing instance network info cache due to event network-changed-927e4c78-95c0-407e-ab92-2c7457c30f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:38:35 compute-0 nova_compute[260935]: 2025-10-11 09:38:35.998 2 DEBUG oslo_concurrency.lockutils [req-60ba0ff0-c067-4554-b5a9-0bfb7b14f6b9 req-f5dc0872-b655-4208-b8c5-b6b5f9011f56 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:38:35 compute-0 nova_compute[260935]: 2025-10-11 09:38:35.998 2 DEBUG oslo_concurrency.lockutils [req-60ba0ff0-c067-4554-b5a9-0bfb7b14f6b9 req-f5dc0872-b655-4208-b8c5-b6b5f9011f56 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:38:35 compute-0 nova_compute[260935]: 2025-10-11 09:38:35.999 2 DEBUG nova.network.neutron [req-60ba0ff0-c067-4554-b5a9-0bfb7b14f6b9 req-f5dc0872-b655-4208-b8c5-b6b5f9011f56 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Refreshing network info cache for port 927e4c78-95c0-407e-ab92-2c7457c30f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:38:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:38:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:38:36 compute-0 ceph-mon[74313]: pgmap v2908: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.042592) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175516042628, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2059, "num_deletes": 251, "total_data_size": 3362519, "memory_usage": 3422056, "flush_reason": "Manual Compaction"}
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175516065016, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3284383, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58952, "largest_seqno": 61010, "table_properties": {"data_size": 3275069, "index_size": 5871, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19008, "raw_average_key_size": 20, "raw_value_size": 3256530, "raw_average_value_size": 3453, "num_data_blocks": 261, "num_entries": 943, "num_filter_entries": 943, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175298, "oldest_key_time": 1760175298, "file_creation_time": 1760175516, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 22489 microseconds, and 12777 cpu microseconds.
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.065069) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3284383 bytes OK
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.065104) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.067444) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.067470) EVENT_LOG_v1 {"time_micros": 1760175516067462, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.067494) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3353879, prev total WAL file size 3380367, number of live WAL files 2.
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.068938) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3207KB)], [140(7797KB)]
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175516068981, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 11269469, "oldest_snapshot_seqno": -1}
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.125 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.125 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.126 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.126 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.126 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.128 2 INFO nova.compute.manager [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Terminating instance
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.129 2 DEBUG nova.compute.manager [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8002 keys, 9576876 bytes, temperature: kUnknown
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175516134490, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 9576876, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9526234, "index_size": 29537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20037, "raw_key_size": 208479, "raw_average_key_size": 26, "raw_value_size": 9386195, "raw_average_value_size": 1172, "num_data_blocks": 1147, "num_entries": 8002, "num_filter_entries": 8002, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175516, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.134759) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 9576876 bytes
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.135866) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.8 rd, 146.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.6 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 8516, records dropped: 514 output_compression: NoCompression
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.135899) EVENT_LOG_v1 {"time_micros": 1760175516135876, "job": 86, "event": "compaction_finished", "compaction_time_micros": 65579, "compaction_time_cpu_micros": 41617, "output_level": 6, "num_output_files": 1, "total_output_size": 9576876, "num_input_records": 8516, "num_output_records": 8002, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175516136735, "job": 86, "event": "table_file_deletion", "file_number": 142}
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175516138553, "job": 86, "event": "table_file_deletion", "file_number": 140}
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.068793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.138657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.138677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.138681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.138684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:38:36 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:36.138687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:38:36 compute-0 kernel: tap927e4c78-95 (unregistering): left promiscuous mode
Oct 11 09:38:36 compute-0 NetworkManager[44960]: <info>  [1760175516.1893] device (tap927e4c78-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:38:36 compute-0 ovn_controller[152945]: 2025-10-11T09:38:36Z|01622|binding|INFO|Releasing lport 927e4c78-95c0-407e-ab92-2c7457c30f72 from this chassis (sb_readonly=0)
Oct 11 09:38:36 compute-0 ovn_controller[152945]: 2025-10-11T09:38:36Z|01623|binding|INFO|Setting lport 927e4c78-95c0-407e-ab92-2c7457c30f72 down in Southbound
Oct 11 09:38:36 compute-0 ovn_controller[152945]: 2025-10-11T09:38:36Z|01624|binding|INFO|Removing iface tap927e4c78-95 ovn-installed in OVS
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.228 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:f3:13 10.100.0.5'], port_security=['fa:16:3e:ff:f3:13 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b488df-4f94-4347-9251-9c74a85cff63', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '007bd862-2a46-4a84-aea1-941ef61ceb34 a1c26107-dbf4-42a0-ae54-f39f52094af5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f689b1-ae42-4f44-bdc0-840884eacd12, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=927e4c78-95c0-407e-ab92-2c7457c30f72) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:38:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.231 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 927e4c78-95c0-407e-ab92-2c7457c30f72 in datapath d0b488df-4f94-4347-9251-9c74a85cff63 unbound from our chassis
Oct 11 09:38:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.234 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d0b488df-4f94-4347-9251-9c74a85cff63, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:38:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.237 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[24a816bc-023e-4b14-9ad0-e45f36628a6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.237 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63 namespace which is not needed anymore
Oct 11 09:38:36 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Oct 11 09:38:36 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d0000008f.scope: Consumed 12.824s CPU time.
Oct 11 09:38:36 compute-0 systemd-machined[215705]: Machine qemu-167-instance-0000008f terminated.
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.377 2 INFO nova.virt.libvirt.driver [-] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Instance destroyed successfully.
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.378 2 DEBUG nova.objects.instance [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'resources' on Instance uuid d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.395 2 DEBUG nova.virt.libvirt.vif [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:38:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-901075224',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-901075224',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=143,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDeAiXe5BxSlKhTWUA0h8drhicmm4lHT+GCh5IJOhzOrvcV2CoWAUh+e7G/x4nbCWm9Jvt6uJONBp2OY5zPdoAKrYlSLLfE4Vnt6Ll8at8odUFHqxgC501PsHRAdCCNJdw==',key_name='tempest-TestSecurityGroupsBasicOps-904423859',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:38:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-9nlmw0id',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:38:12Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.395 2 DEBUG nova.network.os_vif_util [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.396 2 DEBUG nova.network.os_vif_util [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:f3:13,bridge_name='br-int',has_traffic_filtering=True,id=927e4c78-95c0-407e-ab92-2c7457c30f72,network=Network(d0b488df-4f94-4347-9251-9c74a85cff63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap927e4c78-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.397 2 DEBUG os_vif [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:f3:13,bridge_name='br-int',has_traffic_filtering=True,id=927e4c78-95c0-407e-ab92-2c7457c30f72,network=Network(d0b488df-4f94-4347-9251-9c74a85cff63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap927e4c78-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.399 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap927e4c78-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.408 2 INFO os_vif [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:f3:13,bridge_name='br-int',has_traffic_filtering=True,id=927e4c78-95c0-407e-ab92-2c7457c30f72,network=Network(d0b488df-4f94-4347-9251-9c74a85cff63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap927e4c78-95')
Oct 11 09:38:36 compute-0 neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63[421095]: [NOTICE]   (421099) : haproxy version is 2.8.14-c23fe91
Oct 11 09:38:36 compute-0 neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63[421095]: [NOTICE]   (421099) : path to executable is /usr/sbin/haproxy
Oct 11 09:38:36 compute-0 neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63[421095]: [WARNING]  (421099) : Exiting Master process...
Oct 11 09:38:36 compute-0 neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63[421095]: [ALERT]    (421099) : Current worker (421101) exited with code 143 (Terminated)
Oct 11 09:38:36 compute-0 neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63[421095]: [WARNING]  (421099) : All workers exited. Exiting... (0)
Oct 11 09:38:36 compute-0 systemd[1]: libpod-e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6.scope: Deactivated successfully.
Oct 11 09:38:36 compute-0 podman[422096]: 2025-10-11 09:38:36.477603707 +0000 UTC m=+0.082300769 container died e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:38:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6-userdata-shm.mount: Deactivated successfully.
Oct 11 09:38:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f4cf93020fc98090af1b4c5a6be3fdf7b017ecf3820002b4c2da663e5a79eaa-merged.mount: Deactivated successfully.
Oct 11 09:38:36 compute-0 podman[422096]: 2025-10-11 09:38:36.540482161 +0000 UTC m=+0.145179183 container cleanup e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 09:38:36 compute-0 systemd[1]: libpod-conmon-e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6.scope: Deactivated successfully.
Oct 11 09:38:36 compute-0 podman[422150]: 2025-10-11 09:38:36.63599718 +0000 UTC m=+0.064188442 container remove e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:38:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.645 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[22bd0cca-5e0a-4cb9-8464-4d6ce40f2ead]: (4, ('Sat Oct 11 09:38:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63 (e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6)\ne8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6\nSat Oct 11 09:38:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63 (e8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6)\ne8a3f080b91898e90dce62a6bb187fd979d7d1ce21868500283dee71e6647be6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.648 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[996a1028-fa84-4f6b-8959-770233d4d5a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.649 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0b488df-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:36 compute-0 kernel: tapd0b488df-40: left promiscuous mode
Oct 11 09:38:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.674 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a38752ca-d60f-4d77-b04e-6945f26c4d92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.704 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0503b745-5123-4de3-ae2f-ef8c9e9eb04b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.706 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd0b3e1-d0f8-4ceb-b57b-700a2e1e0097]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.722 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ee14fea4-8d0b-457b-87a4-eacda58eaaf9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733664, 'reachable_time': 19416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422166, 'error': None, 'target': 'ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.727 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d0b488df-4f94-4347-9251-9c74a85cff63 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:38:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:36.727 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[274e655b-4b5a-4bca-a6d7-e0fbc1a27f04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:38:36 compute-0 systemd[1]: run-netns-ovnmeta\x2dd0b488df\x2d4f94\x2d4347\x2d9251\x2d9c74a85cff63.mount: Deactivated successfully.
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.850 2 INFO nova.virt.libvirt.driver [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Deleting instance files /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_del
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.851 2 INFO nova.virt.libvirt.driver [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Deletion of /var/lib/nova/instances/d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b_del complete
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.939 2 INFO nova.compute.manager [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.939 2 DEBUG oslo.service.loopingcall [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.940 2 DEBUG nova.compute.manager [-] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:38:36 compute-0 nova_compute[260935]: 2025-10-11 09:38:36.940 2 DEBUG nova.network.neutron [-] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:38:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2909: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:38:37 compute-0 ceph-mon[74313]: pgmap v2909: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.101 2 DEBUG nova.compute.manager [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-vif-unplugged-927e4c78-95c0-407e-ab92-2c7457c30f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.101 2 DEBUG oslo_concurrency.lockutils [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.102 2 DEBUG oslo_concurrency.lockutils [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.102 2 DEBUG oslo_concurrency.lockutils [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.102 2 DEBUG nova.compute.manager [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] No waiting events found dispatching network-vif-unplugged-927e4c78-95c0-407e-ab92-2c7457c30f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.103 2 DEBUG nova.compute.manager [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-vif-unplugged-927e4c78-95c0-407e-ab92-2c7457c30f72 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.103 2 DEBUG nova.compute.manager [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.103 2 DEBUG oslo_concurrency.lockutils [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.104 2 DEBUG oslo_concurrency.lockutils [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.104 2 DEBUG oslo_concurrency.lockutils [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.105 2 DEBUG nova.compute.manager [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] No waiting events found dispatching network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.105 2 WARNING nova.compute.manager [req-f01483a1-bedc-42f4-9040-2a904d1e4bb8 req-79d87f6d-7097-41c7-b8a3-9fdfd7d00f33 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received unexpected event network-vif-plugged-927e4c78-95c0-407e-ab92-2c7457c30f72 for instance with vm_state active and task_state deleting.
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.125 2 DEBUG nova.network.neutron [req-60ba0ff0-c067-4554-b5a9-0bfb7b14f6b9 req-f5dc0872-b655-4208-b8c5-b6b5f9011f56 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Updated VIF entry in instance network info cache for port 927e4c78-95c0-407e-ab92-2c7457c30f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.125 2 DEBUG nova.network.neutron [req-60ba0ff0-c067-4554-b5a9-0bfb7b14f6b9 req-f5dc0872-b655-4208-b8c5-b6b5f9011f56 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Updating instance_info_cache with network_info: [{"id": "927e4c78-95c0-407e-ab92-2c7457c30f72", "address": "fa:16:3e:ff:f3:13", "network": {"id": "d0b488df-4f94-4347-9251-9c74a85cff63", "bridge": "br-int", "label": "tempest-network-smoke--397807174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap927e4c78-95", "ovs_interfaceid": "927e4c78-95c0-407e-ab92-2c7457c30f72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.144 2 DEBUG oslo_concurrency.lockutils [req-60ba0ff0-c067-4554-b5a9-0bfb7b14f6b9 req-f5dc0872-b655-4208-b8c5-b6b5f9011f56 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.659 2 DEBUG nova.network.neutron [-] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.673 2 INFO nova.compute.manager [-] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Took 1.73 seconds to deallocate network for instance.
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.713 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.714 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:38:38 compute-0 nova_compute[260935]: 2025-10-11 09:38:38.835 2 DEBUG oslo_concurrency.processutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:38:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:38:39.220 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:38:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2910: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 09:38:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:38:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/846503678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:38:39 compute-0 nova_compute[260935]: 2025-10-11 09:38:39.331 2 DEBUG oslo_concurrency.processutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:38:39 compute-0 nova_compute[260935]: 2025-10-11 09:38:39.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:39 compute-0 nova_compute[260935]: 2025-10-11 09:38:39.343 2 DEBUG nova.compute.provider_tree [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:38:39 compute-0 ceph-mon[74313]: pgmap v2910: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 11 09:38:39 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/846503678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:38:39 compute-0 nova_compute[260935]: 2025-10-11 09:38:39.361 2 DEBUG nova.scheduler.client.report [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:38:39 compute-0 nova_compute[260935]: 2025-10-11 09:38:39.386 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:38:39 compute-0 nova_compute[260935]: 2025-10-11 09:38:39.410 2 INFO nova.scheduler.client.report [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Deleted allocations for instance d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b
Oct 11 09:38:39 compute-0 nova_compute[260935]: 2025-10-11 09:38:39.476 2 DEBUG oslo_concurrency.lockutils [None req-82f121cd-c1d2-45a9-bd52-8fccff66f235 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:38:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:38:40 compute-0 nova_compute[260935]: 2025-10-11 09:38:40.187 2 DEBUG nova.compute.manager [req-add12720-68d5-4734-a6a0-74775b88095c req-58dffc40-4633-4867-ab49-d8541a50c45c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Received event network-vif-deleted-927e4c78-95c0-407e-ab92-2c7457c30f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:38:40 compute-0 podman[422189]: 2025-10-11 09:38:40.812161941 +0000 UTC m=+0.095436708 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:38:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2911: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Oct 11 09:38:41 compute-0 nova_compute[260935]: 2025-10-11 09:38:41.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:41 compute-0 ceph-mon[74313]: pgmap v2911: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Oct 11 09:38:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2912: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Oct 11 09:38:43 compute-0 ceph-mon[74313]: pgmap v2912: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Oct 11 09:38:44 compute-0 nova_compute[260935]: 2025-10-11 09:38:44.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:38:44 compute-0 podman[422209]: 2025-10-11 09:38:44.760511361 +0000 UTC m=+0.068194383 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible)
Oct 11 09:38:44 compute-0 podman[422210]: 2025-10-11 09:38:44.800467622 +0000 UTC m=+0.101547729 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:38:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2913: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:38:45 compute-0 ceph-mon[74313]: pgmap v2913: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:38:46 compute-0 nova_compute[260935]: 2025-10-11 09:38:46.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:46 compute-0 ovn_controller[152945]: 2025-10-11T09:38:46Z|01625|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:38:46 compute-0 ovn_controller[152945]: 2025-10-11T09:38:46Z|01626|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:38:46 compute-0 nova_compute[260935]: 2025-10-11 09:38:46.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:46 compute-0 ovn_controller[152945]: 2025-10-11T09:38:46Z|01627|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:38:46 compute-0 ovn_controller[152945]: 2025-10-11T09:38:46Z|01628|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:38:46 compute-0 nova_compute[260935]: 2025-10-11 09:38:46.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2914: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:38:47 compute-0 ceph-mon[74313]: pgmap v2914: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:38:48 compute-0 nova_compute[260935]: 2025-10-11 09:38:48.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:38:48 compute-0 nova_compute[260935]: 2025-10-11 09:38:48.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:38:48 compute-0 nova_compute[260935]: 2025-10-11 09:38:48.945 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:38:48 compute-0 nova_compute[260935]: 2025-10-11 09:38:48.946 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:38:48 compute-0 nova_compute[260935]: 2025-10-11 09:38:48.946 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:38:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2915: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:38:49 compute-0 nova_compute[260935]: 2025-10-11 09:38:49.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:49 compute-0 ceph-mon[74313]: pgmap v2915: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:38:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:38:50 compute-0 nova_compute[260935]: 2025-10-11 09:38:50.335 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:38:50 compute-0 nova_compute[260935]: 2025-10-11 09:38:50.349 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:38:50 compute-0 nova_compute[260935]: 2025-10-11 09:38:50.350 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:38:50 compute-0 nova_compute[260935]: 2025-10-11 09:38:50.351 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:38:50 compute-0 nova_compute[260935]: 2025-10-11 09:38:50.351 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:38:50 compute-0 nova_compute[260935]: 2025-10-11 09:38:50.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:38:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2916: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:38:51 compute-0 ceph-mon[74313]: pgmap v2916: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:38:51 compute-0 nova_compute[260935]: 2025-10-11 09:38:51.374 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175516.370321, d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:38:51 compute-0 nova_compute[260935]: 2025-10-11 09:38:51.374 2 INFO nova.compute.manager [-] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] VM Stopped (Lifecycle Event)
Oct 11 09:38:51 compute-0 nova_compute[260935]: 2025-10-11 09:38:51.396 2 DEBUG nova.compute.manager [None req-e92e827e-33e5-495e-840e-a96c90755c09 - - - - - -] [instance: d61a0a4e-e1f6-4f2e-a1d7-d3f1f695a19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:38:51 compute-0 nova_compute[260935]: 2025-10-11 09:38:51.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:51 compute-0 nova_compute[260935]: 2025-10-11 09:38:51.697 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:38:52 compute-0 nova_compute[260935]: 2025-10-11 09:38:52.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:38:52 compute-0 nova_compute[260935]: 2025-10-11 09:38:52.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:38:52 compute-0 nova_compute[260935]: 2025-10-11 09:38:52.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:38:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2917: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:38:53 compute-0 ceph-mon[74313]: pgmap v2917: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:38:53 compute-0 nova_compute[260935]: 2025-10-11 09:38:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:38:53 compute-0 nova_compute[260935]: 2025-10-11 09:38:53.732 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:38:53 compute-0 nova_compute[260935]: 2025-10-11 09:38:53.732 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:38:53 compute-0 nova_compute[260935]: 2025-10-11 09:38:53.733 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:38:53 compute-0 nova_compute[260935]: 2025-10-11 09:38:53.733 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:38:53 compute-0 nova_compute[260935]: 2025-10-11 09:38:53.734 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:38:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:38:54 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/131630721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.221 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.321 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.323 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.323 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.328 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.328 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.334 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.334 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:54 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/131630721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.635 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.638 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2802MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.638 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.639 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:38:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.670300) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175534670337, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 402, "num_deletes": 256, "total_data_size": 292495, "memory_usage": 301640, "flush_reason": "Manual Compaction"}
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175534674891, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 290261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61011, "largest_seqno": 61412, "table_properties": {"data_size": 287824, "index_size": 536, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5622, "raw_average_key_size": 17, "raw_value_size": 283092, "raw_average_value_size": 898, "num_data_blocks": 24, "num_entries": 315, "num_filter_entries": 315, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175516, "oldest_key_time": 1760175516, "file_creation_time": 1760175534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 4634 microseconds, and 1956 cpu microseconds.
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.674934) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 290261 bytes OK
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.674952) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.676323) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.676341) EVENT_LOG_v1 {"time_micros": 1760175534676335, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.676360) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 289929, prev total WAL file size 289929, number of live WAL files 2.
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.677078) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353131' seq:72057594037927935, type:22 .. '6C6F676D0032373633' seq:0, type:0; will stop at (end)
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(283KB)], [143(9352KB)]
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175534677111, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 9867137, "oldest_snapshot_seqno": -1}
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 7798 keys, 9762706 bytes, temperature: kUnknown
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175534743476, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 9762706, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9712535, "index_size": 29578, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19525, "raw_key_size": 205165, "raw_average_key_size": 26, "raw_value_size": 9575233, "raw_average_value_size": 1227, "num_data_blocks": 1147, "num_entries": 7798, "num_filter_entries": 7798, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.743758) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 9762706 bytes
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.745449) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.5 rd, 146.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.1 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(67.6) write-amplify(33.6) OK, records in: 8317, records dropped: 519 output_compression: NoCompression
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.745470) EVENT_LOG_v1 {"time_micros": 1760175534745460, "job": 88, "event": "compaction_finished", "compaction_time_micros": 66452, "compaction_time_cpu_micros": 21913, "output_level": 6, "num_output_files": 1, "total_output_size": 9762706, "num_input_records": 8317, "num_output_records": 7798, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175534745651, "job": 88, "event": "table_file_deletion", "file_number": 145}
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175534747733, "job": 88, "event": "table_file_deletion", "file_number": 143}
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.676951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.747907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.747915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.747919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.747922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:38:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:38:54.747925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.789 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.790 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.790 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.790 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.791 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:38:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:38:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:38:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:38:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:38:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:38:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:38:54 compute-0 nova_compute[260935]: 2025-10-11 09:38:54.897 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:38:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:38:55
Oct 11 09:38:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:38:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:38:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'volumes', 'default.rgw.control', 'vms', 'images', 'backups']
Oct 11 09:38:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:38:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2918: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:38:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:38:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2305504440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:38:55 compute-0 nova_compute[260935]: 2025-10-11 09:38:55.356 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:38:55 compute-0 nova_compute[260935]: 2025-10-11 09:38:55.364 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:38:55 compute-0 nova_compute[260935]: 2025-10-11 09:38:55.417 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:38:55 compute-0 nova_compute[260935]: 2025-10-11 09:38:55.540 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:38:55 compute-0 nova_compute[260935]: 2025-10-11 09:38:55.541 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:38:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:38:55 compute-0 ceph-mon[74313]: pgmap v2918: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:38:55 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2305504440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:38:56 compute-0 nova_compute[260935]: 2025-10-11 09:38:56.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2919: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:38:57 compute-0 ceph-mon[74313]: pgmap v2919: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:38:58 compute-0 sshd-session[422302]: Invalid user mysql from 165.232.82.252 port 57532
Oct 11 09:38:59 compute-0 sshd-session[422302]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:38:59 compute-0 sshd-session[422302]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:38:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2920: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:38:59 compute-0 ceph-mon[74313]: pgmap v2920: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:38:59 compute-0 nova_compute[260935]: 2025-10-11 09:38:59.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:38:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:39:00 compute-0 sshd-session[422302]: Failed password for invalid user mysql from 165.232.82.252 port 57532 ssh2
Oct 11 09:39:00 compute-0 sshd-session[422302]: Connection closed by invalid user mysql 165.232.82.252 port 57532 [preauth]
Oct 11 09:39:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2921: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:39:01 compute-0 ceph-mon[74313]: pgmap v2921: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:39:01 compute-0 nova_compute[260935]: 2025-10-11 09:39:01.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:01 compute-0 nova_compute[260935]: 2025-10-11 09:39:01.613 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "82fac090-c427-485c-98cd-ad02e839be40" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:39:01 compute-0 nova_compute[260935]: 2025-10-11 09:39:01.613 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:39:01 compute-0 nova_compute[260935]: 2025-10-11 09:39:01.701 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:39:01 compute-0 nova_compute[260935]: 2025-10-11 09:39:01.864 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:39:01 compute-0 nova_compute[260935]: 2025-10-11 09:39:01.865 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:39:01 compute-0 nova_compute[260935]: 2025-10-11 09:39:01.875 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:39:01 compute-0 nova_compute[260935]: 2025-10-11 09:39:01.875 2 INFO nova.compute.claims [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:39:02 compute-0 nova_compute[260935]: 2025-10-11 09:39:02.025 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:39:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:39:02 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3851506032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:39:02 compute-0 nova_compute[260935]: 2025-10-11 09:39:02.557 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:39:02 compute-0 nova_compute[260935]: 2025-10-11 09:39:02.567 2 DEBUG nova.compute.provider_tree [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:39:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3851506032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:39:02 compute-0 nova_compute[260935]: 2025-10-11 09:39:02.602 2 DEBUG nova.scheduler.client.report [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:39:02 compute-0 nova_compute[260935]: 2025-10-11 09:39:02.675 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:39:02 compute-0 nova_compute[260935]: 2025-10-11 09:39:02.675 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:39:02 compute-0 podman[422326]: 2025-10-11 09:39:02.79908549 +0000 UTC m=+0.089368327 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:39:02 compute-0 nova_compute[260935]: 2025-10-11 09:39:02.805 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:39:02 compute-0 nova_compute[260935]: 2025-10-11 09:39:02.806 2 DEBUG nova.network.neutron [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:39:02 compute-0 nova_compute[260935]: 2025-10-11 09:39:02.851 2 INFO nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:39:02 compute-0 nova_compute[260935]: 2025-10-11 09:39:02.899 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.059 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.061 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.062 2 INFO nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Creating image(s)
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.088 2 DEBUG nova.storage.rbd_utils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 82fac090-c427-485c-98cd-ad02e839be40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.115 2 DEBUG nova.storage.rbd_utils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 82fac090-c427-485c-98cd-ad02e839be40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.141 2 DEBUG nova.storage.rbd_utils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 82fac090-c427-485c-98cd-ad02e839be40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.145 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.252 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.253 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.254 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.255 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.279 2 DEBUG nova.storage.rbd_utils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 82fac090-c427-485c-98cd-ad02e839be40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.283 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 82fac090-c427-485c-98cd-ad02e839be40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:39:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2922: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:39:03 compute-0 ceph-mon[74313]: pgmap v2922: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.624 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 82fac090-c427-485c-98cd-ad02e839be40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.682 2 DEBUG nova.storage.rbd_utils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] resizing rbd image 82fac090-c427-485c-98cd-ad02e839be40_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.709 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.710 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.728 2 DEBUG nova.policy [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '489c4d0457354f4684f8b9e53261224f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.775 2 DEBUG nova.objects.instance [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'migration_context' on Instance uuid 82fac090-c427-485c-98cd-ad02e839be40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.887 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.887 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Ensure instance console log exists: /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.888 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.888 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:39:03 compute-0 nova_compute[260935]: 2025-10-11 09:39:03.889 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:39:04 compute-0 nova_compute[260935]: 2025-10-11 09:39:04.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:39:05 compute-0 nova_compute[260935]: 2025-10-11 09:39:05.227 2 DEBUG nova.network.neutron [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Successfully created port: e8668336-ee38-49b1-97dd-f6c5fcfab2e5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2923: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:39:05 compute-0 ceph-mon[74313]: pgmap v2923: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002627377275021163 of space, bias 1.0, pg target 0.788213182506349 quantized to 32 (current 32)
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:39:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:39:06 compute-0 nova_compute[260935]: 2025-10-11 09:39:06.260 2 DEBUG nova.network.neutron [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Successfully updated port: e8668336-ee38-49b1-97dd-f6c5fcfab2e5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:39:06 compute-0 nova_compute[260935]: 2025-10-11 09:39:06.330 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:39:06 compute-0 nova_compute[260935]: 2025-10-11 09:39:06.330 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquired lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:39:06 compute-0 nova_compute[260935]: 2025-10-11 09:39:06.330 2 DEBUG nova.network.neutron [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:39:06 compute-0 nova_compute[260935]: 2025-10-11 09:39:06.434 2 DEBUG nova.compute.manager [req-96caaa46-41f0-45aa-9e32-828537bb45af req-da02b2bf-7c8f-4832-8c44-01da4113ed52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-changed-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:39:06 compute-0 nova_compute[260935]: 2025-10-11 09:39:06.435 2 DEBUG nova.compute.manager [req-96caaa46-41f0-45aa-9e32-828537bb45af req-da02b2bf-7c8f-4832-8c44-01da4113ed52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Refreshing instance network info cache due to event network-changed-e8668336-ee38-49b1-97dd-f6c5fcfab2e5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:39:06 compute-0 nova_compute[260935]: 2025-10-11 09:39:06.436 2 DEBUG oslo_concurrency.lockutils [req-96caaa46-41f0-45aa-9e32-828537bb45af req-da02b2bf-7c8f-4832-8c44-01da4113ed52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:39:06 compute-0 nova_compute[260935]: 2025-10-11 09:39:06.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:06 compute-0 nova_compute[260935]: 2025-10-11 09:39:06.522 2 DEBUG nova.network.neutron [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:39:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2924: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:39:07 compute-0 ceph-mon[74313]: pgmap v2924: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.826 2 DEBUG nova.network.neutron [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Updating instance_info_cache with network_info: [{"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.942 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Releasing lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.943 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Instance network_info: |[{"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.943 2 DEBUG oslo_concurrency.lockutils [req-96caaa46-41f0-45aa-9e32-828537bb45af req-da02b2bf-7c8f-4832-8c44-01da4113ed52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.943 2 DEBUG nova.network.neutron [req-96caaa46-41f0-45aa-9e32-828537bb45af req-da02b2bf-7c8f-4832-8c44-01da4113ed52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Refreshing network info cache for port e8668336-ee38-49b1-97dd-f6c5fcfab2e5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.947 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Start _get_guest_xml network_info=[{"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.952 2 WARNING nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.957 2 DEBUG nova.virt.libvirt.host [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.958 2 DEBUG nova.virt.libvirt.host [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.964 2 DEBUG nova.virt.libvirt.host [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.965 2 DEBUG nova.virt.libvirt.host [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.965 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.966 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.967 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.967 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.967 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.968 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.968 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.968 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.969 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.969 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.969 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.970 2 DEBUG nova.virt.hardware [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:39:07 compute-0 nova_compute[260935]: 2025-10-11 09:39:07.973 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:39:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:39:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/325145860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:39:08 compute-0 nova_compute[260935]: 2025-10-11 09:39:08.440 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:39:08 compute-0 nova_compute[260935]: 2025-10-11 09:39:08.473 2 DEBUG nova.storage.rbd_utils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 82fac090-c427-485c-98cd-ad02e839be40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:39:08 compute-0 nova_compute[260935]: 2025-10-11 09:39:08.477 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:39:08 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/325145860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:39:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:39:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3066451492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:39:08 compute-0 nova_compute[260935]: 2025-10-11 09:39:08.974 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:39:08 compute-0 nova_compute[260935]: 2025-10-11 09:39:08.977 2 DEBUG nova.virt.libvirt.vif [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:38:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-830501792',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-830501792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=144,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMgIDSEbqXKfaiPe9PEwj6SDspyzopAbfyJnz8+djsMK1KLgiJJqBYs9uE57WKf6jqalL3R3Kh83Cc9busMQoG7JknYjD9hSHphOlTqezk2QHYcvhxySyaS51yDxw6uubA==',key_name='tempest-TestSecurityGroupsBasicOps-1631871125',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-bjfbul0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:39:02Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=82fac090-c427-485c-98cd-ad02e839be40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:39:08 compute-0 nova_compute[260935]: 2025-10-11 09:39:08.978 2 DEBUG nova.network.os_vif_util [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:39:08 compute-0 nova_compute[260935]: 2025-10-11 09:39:08.980 2 DEBUG nova.network.os_vif_util [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:4e:40,bridge_name='br-int',has_traffic_filtering=True,id=e8668336-ee38-49b1-97dd-f6c5fcfab2e5,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8668336-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:39:08 compute-0 nova_compute[260935]: 2025-10-11 09:39:08.982 2 DEBUG nova.objects.instance [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 82fac090-c427-485c-98cd-ad02e839be40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.000 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:39:09 compute-0 nova_compute[260935]:   <uuid>82fac090-c427-485c-98cd-ad02e839be40</uuid>
Oct 11 09:39:09 compute-0 nova_compute[260935]:   <name>instance-00000090</name>
Oct 11 09:39:09 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:39:09 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:39:09 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-830501792</nova:name>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:39:07</nova:creationTime>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:39:09 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:39:09 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:39:09 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:39:09 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:39:09 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:39:09 compute-0 nova_compute[260935]:         <nova:user uuid="489c4d0457354f4684f8b9e53261224f">tempest-TestSecurityGroupsBasicOps-607770139-project-member</nova:user>
Oct 11 09:39:09 compute-0 nova_compute[260935]:         <nova:project uuid="81e7096f23df4e7d8782cf98d09d54e9">tempest-TestSecurityGroupsBasicOps-607770139</nova:project>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:39:09 compute-0 nova_compute[260935]:         <nova:port uuid="e8668336-ee38-49b1-97dd-f6c5fcfab2e5">
Oct 11 09:39:09 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:39:09 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:39:09 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <system>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <entry name="serial">82fac090-c427-485c-98cd-ad02e839be40</entry>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <entry name="uuid">82fac090-c427-485c-98cd-ad02e839be40</entry>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     </system>
Oct 11 09:39:09 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:39:09 compute-0 nova_compute[260935]:   <os>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:   </os>
Oct 11 09:39:09 compute-0 nova_compute[260935]:   <features>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:   </features>
Oct 11 09:39:09 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:39:09 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:39:09 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/82fac090-c427-485c-98cd-ad02e839be40_disk">
Oct 11 09:39:09 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       </source>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:39:09 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/82fac090-c427-485c-98cd-ad02e839be40_disk.config">
Oct 11 09:39:09 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       </source>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:39:09 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:ae:4e:40"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <target dev="tape8668336-ee"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40/console.log" append="off"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <video>
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     </video>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:39:09 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:39:09 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:39:09 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:39:09 compute-0 nova_compute[260935]: </domain>
Oct 11 09:39:09 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.002 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Preparing to wait for external event network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.004 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "82fac090-c427-485c-98cd-ad02e839be40-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.004 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.005 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.006 2 DEBUG nova.virt.libvirt.vif [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:38:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-830501792',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-830501792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=144,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMgIDSEbqXKfaiPe9PEwj6SDspyzopAbfyJnz8+djsMK1KLgiJJqBYs9uE57WKf6jqalL3R3Kh83Cc9busMQoG7JknYjD9hSHphOlTqezk2QHYcvhxySyaS51yDxw6uubA==',key_name='tempest-TestSecurityGroupsBasicOps-1631871125',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-bjfbul0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:39:02Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=82fac090-c427-485c-98cd-ad02e839be40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.006 2 DEBUG nova.network.os_vif_util [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.007 2 DEBUG nova.network.os_vif_util [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:4e:40,bridge_name='br-int',has_traffic_filtering=True,id=e8668336-ee38-49b1-97dd-f6c5fcfab2e5,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8668336-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.008 2 DEBUG os_vif [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:4e:40,bridge_name='br-int',has_traffic_filtering=True,id=e8668336-ee38-49b1-97dd-f6c5fcfab2e5,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8668336-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.010 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.010 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.019 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8668336-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.020 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape8668336-ee, col_values=(('external_ids', {'iface-id': 'e8668336-ee38-49b1-97dd-f6c5fcfab2e5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:4e:40', 'vm-uuid': '82fac090-c427-485c-98cd-ad02e839be40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:09 compute-0 NetworkManager[44960]: <info>  [1760175549.0233] manager: (tape8668336-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/620)
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.033 2 INFO os_vif [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:4e:40,bridge_name='br-int',has_traffic_filtering=True,id=e8668336-ee38-49b1-97dd-f6c5fcfab2e5,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8668336-ee')
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.094 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.095 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.095 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No VIF found with MAC fa:16:3e:ae:4e:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.096 2 INFO nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Using config drive
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.127 2 DEBUG nova.storage.rbd_utils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 82fac090-c427-485c-98cd-ad02e839be40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:39:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2925: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.489 2 INFO nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Creating config drive at /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40/disk.config
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.497 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp23gpdswd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.551 2 DEBUG nova.network.neutron [req-96caaa46-41f0-45aa-9e32-828537bb45af req-da02b2bf-7c8f-4832-8c44-01da4113ed52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Updated VIF entry in instance network info cache for port e8668336-ee38-49b1-97dd-f6c5fcfab2e5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.553 2 DEBUG nova.network.neutron [req-96caaa46-41f0-45aa-9e32-828537bb45af req-da02b2bf-7c8f-4832-8c44-01da4113ed52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Updating instance_info_cache with network_info: [{"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.596 2 DEBUG oslo_concurrency.lockutils [req-96caaa46-41f0-45aa-9e32-828537bb45af req-da02b2bf-7c8f-4832-8c44-01da4113ed52 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:39:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3066451492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:39:09 compute-0 ceph-mon[74313]: pgmap v2925: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.664 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp23gpdswd" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:39:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.690 2 DEBUG nova.storage.rbd_utils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 82fac090-c427-485c-98cd-ad02e839be40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.694 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40/disk.config 82fac090-c427-485c-98cd-ad02e839be40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.875 2 DEBUG oslo_concurrency.processutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40/disk.config 82fac090-c427-485c-98cd-ad02e839be40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.876 2 INFO nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Deleting local config drive /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40/disk.config because it was imported into RBD.
Oct 11 09:39:09 compute-0 kernel: tape8668336-ee: entered promiscuous mode
Oct 11 09:39:09 compute-0 NetworkManager[44960]: <info>  [1760175549.9464] manager: (tape8668336-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/621)
Oct 11 09:39:09 compute-0 ovn_controller[152945]: 2025-10-11T09:39:09Z|01629|binding|INFO|Claiming lport e8668336-ee38-49b1-97dd-f6c5fcfab2e5 for this chassis.
Oct 11 09:39:09 compute-0 ovn_controller[152945]: 2025-10-11T09:39:09Z|01630|binding|INFO|e8668336-ee38-49b1-97dd-f6c5fcfab2e5: Claiming fa:16:3e:ae:4e:40 10.100.0.4
Oct 11 09:39:09 compute-0 nova_compute[260935]: 2025-10-11 09:39:09.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:09.968 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:4e:40 10.100.0.4'], port_security=['fa:16:3e:ae:4e:40 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '82fac090-c427-485c-98cd-ad02e839be40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7b70b57d-37bb-4331-92b9-ae0d4d7e602c e4a50ca5-3bee-4cfb-9819-432e1e875b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f310564f-86ed-419d-996b-5fa9060df3fb, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e8668336-ee38-49b1-97dd-f6c5fcfab2e5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:39:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:09.971 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e8668336-ee38-49b1-97dd-f6c5fcfab2e5 in datapath 1aa39742-414b-41b6-bac5-b401ed01a1ec bound to our chassis
Oct 11 09:39:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:09.974 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1aa39742-414b-41b6-bac5-b401ed01a1ec
Oct 11 09:39:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:09.993 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3fdea2a3-3846-4b35-978d-f3aca8857322]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:09.994 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1aa39742-41 in ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:39:09 compute-0 systemd-udevd[422646]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:39:09 compute-0 systemd-machined[215705]: New machine qemu-168-instance-00000090.
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.001 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1aa39742-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.002 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6c78b8-9cd1-4955-9799-746db083762e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.003 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[363007cb-8201-4ec1-b790-c8483c4ddc05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.014 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c94c02-ad8e-46d4-abda-681c7960c902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:10 compute-0 NetworkManager[44960]: <info>  [1760175550.0272] device (tape8668336-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:39:10 compute-0 NetworkManager[44960]: <info>  [1760175550.0284] device (tape8668336-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:39:10 compute-0 systemd[1]: Started Virtual Machine qemu-168-instance-00000090.
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.045 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3416675a-0ec8-4653-a33b-259cba1efb37]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:10 compute-0 nova_compute[260935]: 2025-10-11 09:39:10.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:10 compute-0 ovn_controller[152945]: 2025-10-11T09:39:10Z|01631|binding|INFO|Setting lport e8668336-ee38-49b1-97dd-f6c5fcfab2e5 ovn-installed in OVS
Oct 11 09:39:10 compute-0 ovn_controller[152945]: 2025-10-11T09:39:10Z|01632|binding|INFO|Setting lport e8668336-ee38-49b1-97dd-f6c5fcfab2e5 up in Southbound
Oct 11 09:39:10 compute-0 nova_compute[260935]: 2025-10-11 09:39:10.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.084 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1fe385-7016-48f0-a936-2b170fbeff07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:10 compute-0 NetworkManager[44960]: <info>  [1760175550.0915] manager: (tap1aa39742-40): new Veth device (/org/freedesktop/NetworkManager/Devices/622)
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.091 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[737193aa-ffa6-421f-b7d3-b5c8eb9e2d16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:10 compute-0 systemd-udevd[422649]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.125 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4b56599c-2c9e-426d-9616-3589d851be4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.129 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f54e43b4-4c33-4136-bd97-80786602dc0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:10 compute-0 NetworkManager[44960]: <info>  [1760175550.1662] device (tap1aa39742-40): carrier: link connected
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.173 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc1f698-8c73-4d84-a61a-070f25e80740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.205 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e6e1c8-913c-4a1b-8805-30ff2f1839db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1aa39742-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:7f:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 429], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739486, 'reachable_time': 20665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422678, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.230 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[19c18478-c6cf-41a0-a7a6-dc16ee722800]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6d:7f8f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739486, 'tstamp': 739486}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422679, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.259 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1caf68-3209-4e44-98cd-650c90f531d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1aa39742-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:7f:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 429], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739486, 'reachable_time': 20665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 422680, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.305 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6705deb7-d405-4aa7-952f-a89a46a6619e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.405 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e30b1847-b30b-41e3-9c3b-6c1ccb0ef018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.408 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1aa39742-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.409 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.410 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1aa39742-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:39:10 compute-0 NetworkManager[44960]: <info>  [1760175550.4136] manager: (tap1aa39742-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/623)
Oct 11 09:39:10 compute-0 kernel: tap1aa39742-40: entered promiscuous mode
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.418 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1aa39742-40, col_values=(('external_ids', {'iface-id': 'c50fba63-8396-4845-aa84-03644ac4618d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:39:10 compute-0 nova_compute[260935]: 2025-10-11 09:39:10.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:10 compute-0 ovn_controller[152945]: 2025-10-11T09:39:10Z|01633|binding|INFO|Releasing lport c50fba63-8396-4845-aa84-03644ac4618d from this chassis (sb_readonly=0)
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.422 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1aa39742-414b-41b6-bac5-b401ed01a1ec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1aa39742-414b-41b6-bac5-b401ed01a1ec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.424 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ee7db2-bb3b-40ee-ab32-6eef6a3d008c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.425 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-1aa39742-414b-41b6-bac5-b401ed01a1ec
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/1aa39742-414b-41b6-bac5-b401ed01a1ec.pid.haproxy
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 1aa39742-414b-41b6-bac5-b401ed01a1ec
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:39:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:10.426 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'env', 'PROCESS_TAG=haproxy-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1aa39742-414b-41b6-bac5-b401ed01a1ec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:39:10 compute-0 nova_compute[260935]: 2025-10-11 09:39:10.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:10 compute-0 nova_compute[260935]: 2025-10-11 09:39:10.840 2 DEBUG nova.compute.manager [req-52a5a028-c2c2-498c-a1e7-a3635c35a40a req-4ecd5998-a901-49f2-a568-5130652c4287 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:39:10 compute-0 nova_compute[260935]: 2025-10-11 09:39:10.841 2 DEBUG oslo_concurrency.lockutils [req-52a5a028-c2c2-498c-a1e7-a3635c35a40a req-4ecd5998-a901-49f2-a568-5130652c4287 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "82fac090-c427-485c-98cd-ad02e839be40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:39:10 compute-0 nova_compute[260935]: 2025-10-11 09:39:10.841 2 DEBUG oslo_concurrency.lockutils [req-52a5a028-c2c2-498c-a1e7-a3635c35a40a req-4ecd5998-a901-49f2-a568-5130652c4287 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:39:10 compute-0 nova_compute[260935]: 2025-10-11 09:39:10.841 2 DEBUG oslo_concurrency.lockutils [req-52a5a028-c2c2-498c-a1e7-a3635c35a40a req-4ecd5998-a901-49f2-a568-5130652c4287 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:39:10 compute-0 nova_compute[260935]: 2025-10-11 09:39:10.841 2 DEBUG nova.compute.manager [req-52a5a028-c2c2-498c-a1e7-a3635c35a40a req-4ecd5998-a901-49f2-a568-5130652c4287 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Processing event network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:39:10 compute-0 podman[422755]: 2025-10-11 09:39:10.827713973 +0000 UTC m=+0.031472684 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:39:10 compute-0 podman[422755]: 2025-10-11 09:39:10.9370533 +0000 UTC m=+0.140812011 container create 038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 11 09:39:11 compute-0 systemd[1]: Started libpod-conmon-038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798.scope.
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.013 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175551.0129657, 82fac090-c427-485c-98cd-ad02e839be40 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.014 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] VM Started (Lifecycle Event)
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.019 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.024 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:39:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e06eeafc87d96a39c349d61f6972a13eca41eb9183cc378fdf1defab8f1ee2b5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.036 2 INFO nova.virt.libvirt.driver [-] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Instance spawned successfully.
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.036 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.064 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.069 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:39:11 compute-0 podman[422755]: 2025-10-11 09:39:11.077875539 +0000 UTC m=+0.281634230 container init 038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:39:11 compute-0 podman[422768]: 2025-10-11 09:39:11.082732015 +0000 UTC m=+0.100271963 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 09:39:11 compute-0 podman[422755]: 2025-10-11 09:39:11.085473872 +0000 UTC m=+0.289232553 container start 038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.108 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.109 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.109 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.110 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.111 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.111 2 DEBUG nova.virt.libvirt.driver [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:39:11 compute-0 neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec[422779]: [NOTICE]   (422792) : New worker (422794) forked
Oct 11 09:39:11 compute-0 neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec[422779]: [NOTICE]   (422792) : Loading success.
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.118 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.119 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175551.0143678, 82fac090-c427-485c-98cd-ad02e839be40 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.119 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] VM Paused (Lifecycle Event)
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.199 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.202 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175551.0223103, 82fac090-c427-485c-98cd-ad02e839be40 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.202 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] VM Resumed (Lifecycle Event)
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.252 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.255 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.258 2 INFO nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Took 8.20 seconds to spawn the instance on the hypervisor.
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.259 2 DEBUG nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:39:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2926: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.412 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.497 2 INFO nova.compute.manager [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Took 9.66 seconds to build instance.
Oct 11 09:39:11 compute-0 ceph-mon[74313]: pgmap v2926: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:39:11 compute-0 nova_compute[260935]: 2025-10-11 09:39:11.664 2 DEBUG oslo_concurrency.lockutils [None req-039a0314-eefc-4187-bb2e-e87aec59f217 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:39:12 compute-0 nova_compute[260935]: 2025-10-11 09:39:12.969 2 DEBUG nova.compute.manager [req-b63ab3ce-d5ff-4487-b811-7a34f3a6f196 req-7a3aae87-af0a-421a-a56f-fa5d8c983490 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:39:12 compute-0 nova_compute[260935]: 2025-10-11 09:39:12.969 2 DEBUG oslo_concurrency.lockutils [req-b63ab3ce-d5ff-4487-b811-7a34f3a6f196 req-7a3aae87-af0a-421a-a56f-fa5d8c983490 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "82fac090-c427-485c-98cd-ad02e839be40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:39:12 compute-0 nova_compute[260935]: 2025-10-11 09:39:12.970 2 DEBUG oslo_concurrency.lockutils [req-b63ab3ce-d5ff-4487-b811-7a34f3a6f196 req-7a3aae87-af0a-421a-a56f-fa5d8c983490 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:39:12 compute-0 nova_compute[260935]: 2025-10-11 09:39:12.970 2 DEBUG oslo_concurrency.lockutils [req-b63ab3ce-d5ff-4487-b811-7a34f3a6f196 req-7a3aae87-af0a-421a-a56f-fa5d8c983490 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:39:12 compute-0 nova_compute[260935]: 2025-10-11 09:39:12.971 2 DEBUG nova.compute.manager [req-b63ab3ce-d5ff-4487-b811-7a34f3a6f196 req-7a3aae87-af0a-421a-a56f-fa5d8c983490 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] No waiting events found dispatching network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:39:12 compute-0 nova_compute[260935]: 2025-10-11 09:39:12.971 2 WARNING nova.compute.manager [req-b63ab3ce-d5ff-4487-b811-7a34f3a6f196 req-7a3aae87-af0a-421a-a56f-fa5d8c983490 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received unexpected event network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 for instance with vm_state active and task_state None.
Oct 11 09:39:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2927: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:39:14 compute-0 nova_compute[260935]: 2025-10-11 09:39:14.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:14 compute-0 nova_compute[260935]: 2025-10-11 09:39:14.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:14 compute-0 ceph-mon[74313]: pgmap v2927: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:39:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:39:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:15.234 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:39:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:15.236 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:39:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:15.237 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:39:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2928: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:39:15 compute-0 podman[422803]: 2025-10-11 09:39:15.809630523 +0000 UTC m=+0.100013116 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 11 09:39:15 compute-0 podman[422804]: 2025-10-11 09:39:15.845621753 +0000 UTC m=+0.129217306 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 09:39:16 compute-0 ovn_controller[152945]: 2025-10-11T09:39:16Z|01634|binding|INFO|Releasing lport c50fba63-8396-4845-aa84-03644ac4618d from this chassis (sb_readonly=0)
Oct 11 09:39:16 compute-0 ovn_controller[152945]: 2025-10-11T09:39:16Z|01635|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:39:16 compute-0 ovn_controller[152945]: 2025-10-11T09:39:16Z|01636|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:39:16 compute-0 NetworkManager[44960]: <info>  [1760175556.2540] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/624)
Oct 11 09:39:16 compute-0 nova_compute[260935]: 2025-10-11 09:39:16.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:16 compute-0 NetworkManager[44960]: <info>  [1760175556.2566] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/625)
Oct 11 09:39:16 compute-0 ovn_controller[152945]: 2025-10-11T09:39:16Z|01637|binding|INFO|Releasing lport c50fba63-8396-4845-aa84-03644ac4618d from this chassis (sb_readonly=0)
Oct 11 09:39:16 compute-0 ovn_controller[152945]: 2025-10-11T09:39:16Z|01638|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:39:16 compute-0 nova_compute[260935]: 2025-10-11 09:39:16.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:16 compute-0 ovn_controller[152945]: 2025-10-11T09:39:16Z|01639|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:39:16 compute-0 nova_compute[260935]: 2025-10-11 09:39:16.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:16 compute-0 ceph-mon[74313]: pgmap v2928: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:39:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2929: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:39:17 compute-0 nova_compute[260935]: 2025-10-11 09:39:17.336 2 DEBUG nova.compute.manager [req-4ee9159c-9041-4b69-b802-1571e683ee28 req-efd2311e-5b55-4de6-a67f-437da0e542bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-changed-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:39:17 compute-0 nova_compute[260935]: 2025-10-11 09:39:17.337 2 DEBUG nova.compute.manager [req-4ee9159c-9041-4b69-b802-1571e683ee28 req-efd2311e-5b55-4de6-a67f-437da0e542bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Refreshing instance network info cache due to event network-changed-e8668336-ee38-49b1-97dd-f6c5fcfab2e5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:39:17 compute-0 nova_compute[260935]: 2025-10-11 09:39:17.337 2 DEBUG oslo_concurrency.lockutils [req-4ee9159c-9041-4b69-b802-1571e683ee28 req-efd2311e-5b55-4de6-a67f-437da0e542bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:39:17 compute-0 nova_compute[260935]: 2025-10-11 09:39:17.338 2 DEBUG oslo_concurrency.lockutils [req-4ee9159c-9041-4b69-b802-1571e683ee28 req-efd2311e-5b55-4de6-a67f-437da0e542bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:39:17 compute-0 nova_compute[260935]: 2025-10-11 09:39:17.339 2 DEBUG nova.network.neutron [req-4ee9159c-9041-4b69-b802-1571e683ee28 req-efd2311e-5b55-4de6-a67f-437da0e542bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Refreshing network info cache for port e8668336-ee38-49b1-97dd-f6c5fcfab2e5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:39:18 compute-0 nova_compute[260935]: 2025-10-11 09:39:18.876 2 DEBUG nova.network.neutron [req-4ee9159c-9041-4b69-b802-1571e683ee28 req-efd2311e-5b55-4de6-a67f-437da0e542bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Updated VIF entry in instance network info cache for port e8668336-ee38-49b1-97dd-f6c5fcfab2e5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:39:18 compute-0 nova_compute[260935]: 2025-10-11 09:39:18.877 2 DEBUG nova.network.neutron [req-4ee9159c-9041-4b69-b802-1571e683ee28 req-efd2311e-5b55-4de6-a67f-437da0e542bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Updating instance_info_cache with network_info: [{"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:39:18 compute-0 nova_compute[260935]: 2025-10-11 09:39:18.998 2 DEBUG oslo_concurrency.lockutils [req-4ee9159c-9041-4b69-b802-1571e683ee28 req-efd2311e-5b55-4de6-a67f-437da0e542bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:39:19 compute-0 nova_compute[260935]: 2025-10-11 09:39:19.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:19 compute-0 ceph-mon[74313]: pgmap v2929: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:39:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2930: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:39:19 compute-0 nova_compute[260935]: 2025-10-11 09:39:19.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:39:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2931: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:39:21 compute-0 ceph-mon[74313]: pgmap v2930: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:39:22 compute-0 nova_compute[260935]: 2025-10-11 09:39:22.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:39:22 compute-0 ceph-mon[74313]: pgmap v2931: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:39:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2932: 321 pgs: 321 active+clean; 382 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 817 KiB/s wr, 90 op/s
Oct 11 09:39:24 compute-0 nova_compute[260935]: 2025-10-11 09:39:24.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:24 compute-0 nova_compute[260935]: 2025-10-11 09:39:24.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:39:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:39:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:39:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:39:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:39:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:39:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:39:25 compute-0 ceph-mon[74313]: pgmap v2932: 321 pgs: 321 active+clean; 382 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 817 KiB/s wr, 90 op/s
Oct 11 09:39:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2933: 321 pgs: 321 active+clean; 382 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 804 KiB/s wr, 16 op/s
Oct 11 09:39:26 compute-0 ovn_controller[152945]: 2025-10-11T09:39:26Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:4e:40 10.100.0.4
Oct 11 09:39:26 compute-0 ovn_controller[152945]: 2025-10-11T09:39:26Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:4e:40 10.100.0.4
Oct 11 09:39:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:39:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/731782623' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:39:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:39:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/731782623' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:39:27 compute-0 ceph-mon[74313]: pgmap v2933: 321 pgs: 321 active+clean; 382 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 804 KiB/s wr, 16 op/s
Oct 11 09:39:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/731782623' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:39:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/731782623' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:39:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2934: 321 pgs: 321 active+clean; 382 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 805 KiB/s wr, 16 op/s
Oct 11 09:39:27 compute-0 nova_compute[260935]: 2025-10-11 09:39:27.759 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:39:27 compute-0 nova_compute[260935]: 2025-10-11 09:39:27.760 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 09:39:27 compute-0 nova_compute[260935]: 2025-10-11 09:39:27.844 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 09:39:28 compute-0 ceph-mon[74313]: pgmap v2934: 321 pgs: 321 active+clean; 382 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 805 KiB/s wr, 16 op/s
Oct 11 09:39:29 compute-0 nova_compute[260935]: 2025-10-11 09:39:29.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2935: 321 pgs: 321 active+clean; 400 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Oct 11 09:39:29 compute-0 nova_compute[260935]: 2025-10-11 09:39:29.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:39:30 compute-0 ceph-mon[74313]: pgmap v2935: 321 pgs: 321 active+clean; 400 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Oct 11 09:39:30 compute-0 nova_compute[260935]: 2025-10-11 09:39:30.789 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:39:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2936: 321 pgs: 321 active+clean; 400 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Oct 11 09:39:32 compute-0 ceph-mon[74313]: pgmap v2936: 321 pgs: 321 active+clean; 400 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Oct 11 09:39:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2937: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 361 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 11 09:39:33 compute-0 podman[422848]: 2025-10-11 09:39:33.791683416 +0000 UTC m=+0.087049792 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 11 09:39:34 compute-0 nova_compute[260935]: 2025-10-11 09:39:34.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:34 compute-0 ceph-mon[74313]: pgmap v2937: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 361 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 11 09:39:34 compute-0 nova_compute[260935]: 2025-10-11 09:39:34.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:39:34 compute-0 sshd-session[422867]: Invalid user mysql from 165.232.82.252 port 47852
Oct 11 09:39:35 compute-0 sshd-session[422867]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:39:35 compute-0 sshd-session[422867]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:39:35 compute-0 sudo[422869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:39:35 compute-0 sudo[422869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:35 compute-0 sudo[422869]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2938: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 1.3 MiB/s wr, 52 op/s
Oct 11 09:39:35 compute-0 sudo[422894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:39:35 compute-0 sudo[422894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:35 compute-0 sudo[422894]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:35 compute-0 sudo[422919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:39:35 compute-0 sudo[422919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:35 compute-0 sudo[422919]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:35 compute-0 sudo[422944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:39:35 compute-0 sudo[422944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:36 compute-0 sudo[422944]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:39:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:39:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:39:36 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:39:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:39:36 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:39:36 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 140f9669-be13-4e25-a249-6a8ff0748549 does not exist
Oct 11 09:39:36 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 2fd45056-7261-48a1-bd59-b24a1f8336bf does not exist
Oct 11 09:39:36 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 06c22ea0-2d30-40c2-9709-6326fa4ac7b8 does not exist
Oct 11 09:39:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:39:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:39:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:39:36 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:39:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:39:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:39:36 compute-0 sudo[423000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:39:36 compute-0 sudo[423000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:36 compute-0 sudo[423000]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:36 compute-0 sudo[423025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:39:36 compute-0 sudo[423025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:36 compute-0 sudo[423025]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:36 compute-0 ceph-mon[74313]: pgmap v2938: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 1.3 MiB/s wr, 52 op/s
Oct 11 09:39:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:39:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:39:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:39:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:39:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:39:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:39:36 compute-0 sudo[423050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:39:36 compute-0 sudo[423050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:36 compute-0 sudo[423050]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:36 compute-0 sudo[423075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:39:36 compute-0 sudo[423075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:36 compute-0 podman[423141]: 2025-10-11 09:39:36.949036252 +0000 UTC m=+0.045071245 container create 0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:39:36 compute-0 systemd[1]: Started libpod-conmon-0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216.scope.
Oct 11 09:39:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:39:37 compute-0 podman[423141]: 2025-10-11 09:39:36.927452236 +0000 UTC m=+0.023487239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:39:37 compute-0 podman[423141]: 2025-10-11 09:39:37.042913935 +0000 UTC m=+0.138948918 container init 0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 11 09:39:37 compute-0 podman[423141]: 2025-10-11 09:39:37.050892279 +0000 UTC m=+0.146927242 container start 0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 09:39:37 compute-0 podman[423141]: 2025-10-11 09:39:37.054733546 +0000 UTC m=+0.150768519 container attach 0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 09:39:37 compute-0 elegant_elbakyan[423157]: 167 167
Oct 11 09:39:37 compute-0 systemd[1]: libpod-0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216.scope: Deactivated successfully.
Oct 11 09:39:37 compute-0 podman[423141]: 2025-10-11 09:39:37.064288684 +0000 UTC m=+0.160323677 container died 0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 09:39:37 compute-0 sshd-session[422867]: Failed password for invalid user mysql from 165.232.82.252 port 47852 ssh2
Oct 11 09:39:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-cea3c683a30b8cd2af0798dd18983af782bb69ca2886b9fea4740d3e17b104cd-merged.mount: Deactivated successfully.
Oct 11 09:39:37 compute-0 podman[423141]: 2025-10-11 09:39:37.11689572 +0000 UTC m=+0.212930683 container remove 0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:39:37 compute-0 systemd[1]: libpod-conmon-0ae0ba9955340b2eae926b5b33ac836ec9da16afccc3970702a716f58dedb216.scope: Deactivated successfully.
Oct 11 09:39:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2939: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 1.3 MiB/s wr, 52 op/s
Oct 11 09:39:37 compute-0 podman[423182]: 2025-10-11 09:39:37.397703956 +0000 UTC m=+0.070935601 container create 01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 09:39:37 compute-0 systemd[1]: Started libpod-conmon-01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5.scope.
Oct 11 09:39:37 compute-0 podman[423182]: 2025-10-11 09:39:37.368879917 +0000 UTC m=+0.042111612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:39:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38785a173538c460538805941cd484f7a99bb2cb9b8d090825d08ceb1c66df40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38785a173538c460538805941cd484f7a99bb2cb9b8d090825d08ceb1c66df40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38785a173538c460538805941cd484f7a99bb2cb9b8d090825d08ceb1c66df40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38785a173538c460538805941cd484f7a99bb2cb9b8d090825d08ceb1c66df40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38785a173538c460538805941cd484f7a99bb2cb9b8d090825d08ceb1c66df40/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:39:37 compute-0 podman[423182]: 2025-10-11 09:39:37.502457004 +0000 UTC m=+0.175688679 container init 01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:39:37 compute-0 podman[423182]: 2025-10-11 09:39:37.519967425 +0000 UTC m=+0.193199040 container start 01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:39:37 compute-0 podman[423182]: 2025-10-11 09:39:37.523381961 +0000 UTC m=+0.196613666 container attach 01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:39:38 compute-0 sshd-session[422867]: Connection closed by invalid user mysql 165.232.82.252 port 47852 [preauth]
Oct 11 09:39:38 compute-0 nova_compute[260935]: 2025-10-11 09:39:38.209 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "cacbf863-eea2-4852-a3a4-7cc929ebacec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:39:38 compute-0 nova_compute[260935]: 2025-10-11 09:39:38.212 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:39:38 compute-0 nova_compute[260935]: 2025-10-11 09:39:38.285 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:39:38 compute-0 ceph-mon[74313]: pgmap v2939: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 1.3 MiB/s wr, 52 op/s
Oct 11 09:39:38 compute-0 nova_compute[260935]: 2025-10-11 09:39:38.491 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:39:38 compute-0 nova_compute[260935]: 2025-10-11 09:39:38.493 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:39:38 compute-0 nova_compute[260935]: 2025-10-11 09:39:38.502 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:39:38 compute-0 nova_compute[260935]: 2025-10-11 09:39:38.502 2 INFO nova.compute.claims [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:39:38 compute-0 tender_wozniak[423198]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:39:38 compute-0 tender_wozniak[423198]: --> relative data size: 1.0
Oct 11 09:39:38 compute-0 tender_wozniak[423198]: --> All data devices are unavailable
Oct 11 09:39:38 compute-0 systemd[1]: libpod-01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5.scope: Deactivated successfully.
Oct 11 09:39:38 compute-0 systemd[1]: libpod-01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5.scope: Consumed 1.123s CPU time.
Oct 11 09:39:38 compute-0 podman[423182]: 2025-10-11 09:39:38.730782956 +0000 UTC m=+1.404014611 container died 01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:39:38 compute-0 nova_compute[260935]: 2025-10-11 09:39:38.790 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:39:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-38785a173538c460538805941cd484f7a99bb2cb9b8d090825d08ceb1c66df40-merged.mount: Deactivated successfully.
Oct 11 09:39:39 compute-0 nova_compute[260935]: 2025-10-11 09:39:39.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:39 compute-0 podman[423182]: 2025-10-11 09:39:39.166056733 +0000 UTC m=+1.839288388 container remove 01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 09:39:39 compute-0 systemd[1]: libpod-conmon-01249dcd2db0f5f37d2e6c26690ea2644217bd79b5bd9d93a54a325781448ea5.scope: Deactivated successfully.
Oct 11 09:39:39 compute-0 sudo[423075]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:39:39 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/307893045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:39:39 compute-0 nova_compute[260935]: 2025-10-11 09:39:39.253 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:39:39 compute-0 nova_compute[260935]: 2025-10-11 09:39:39.268 2 DEBUG nova.compute.provider_tree [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:39:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2940: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 1.4 MiB/s wr, 53 op/s
Oct 11 09:39:39 compute-0 sudo[423257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:39:39 compute-0 sudo[423257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:39 compute-0 sudo[423257]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:39 compute-0 nova_compute[260935]: 2025-10-11 09:39:39.357 2 DEBUG nova.scheduler.client.report [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:39:39 compute-0 nova_compute[260935]: 2025-10-11 09:39:39.419 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:39:39 compute-0 sudo[423284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:39:39 compute-0 nova_compute[260935]: 2025-10-11 09:39:39.420 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:39:39 compute-0 sudo[423284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:39 compute-0 sudo[423284]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:39 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/307893045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:39:39 compute-0 sudo[423309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:39:39 compute-0 sudo[423309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:39 compute-0 sudo[423309]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:39 compute-0 nova_compute[260935]: 2025-10-11 09:39:39.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:39 compute-0 sudo[423334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:39:39 compute-0 sudo[423334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:39 compute-0 nova_compute[260935]: 2025-10-11 09:39:39.601 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:39:39 compute-0 nova_compute[260935]: 2025-10-11 09:39:39.602 2 DEBUG nova.network.neutron [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:39:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:39:39 compute-0 nova_compute[260935]: 2025-10-11 09:39:39.707 2 INFO nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:39:39 compute-0 nova_compute[260935]: 2025-10-11 09:39:39.798 2 DEBUG nova.policy [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '489c4d0457354f4684f8b9e53261224f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:39:39 compute-0 nova_compute[260935]: 2025-10-11 09:39:39.840 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:39:40 compute-0 podman[423400]: 2025-10-11 09:39:40.074911024 +0000 UTC m=+0.059171280 container create 1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 09:39:40 compute-0 systemd[1]: Started libpod-conmon-1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad.scope.
Oct 11 09:39:40 compute-0 podman[423400]: 2025-10-11 09:39:40.047434964 +0000 UTC m=+0.031695280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:39:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:39:40 compute-0 podman[423400]: 2025-10-11 09:39:40.196434583 +0000 UTC m=+0.180694889 container init 1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:39:40 compute-0 podman[423400]: 2025-10-11 09:39:40.20774537 +0000 UTC m=+0.192005626 container start 1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_northcutt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 09:39:40 compute-0 podman[423400]: 2025-10-11 09:39:40.212961416 +0000 UTC m=+0.197221672 container attach 1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:39:40 compute-0 charming_northcutt[423417]: 167 167
Oct 11 09:39:40 compute-0 podman[423400]: 2025-10-11 09:39:40.215812136 +0000 UTC m=+0.200072402 container died 1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_northcutt, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 09:39:40 compute-0 systemd[1]: libpod-1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad.scope: Deactivated successfully.
Oct 11 09:39:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-303543a215637fa685aaa0ad5123d5d7e6c7760d8cf4d6c682cd37f8da96cfd2-merged.mount: Deactivated successfully.
Oct 11 09:39:40 compute-0 podman[423400]: 2025-10-11 09:39:40.268912886 +0000 UTC m=+0.253173152 container remove 1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_northcutt, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 09:39:40 compute-0 systemd[1]: libpod-conmon-1c539a08aee25affcc1762dd54b16d24334cef3e977978e2c35d31a56d7ae7ad.scope: Deactivated successfully.
Oct 11 09:39:40 compute-0 nova_compute[260935]: 2025-10-11 09:39:40.312 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:39:40 compute-0 nova_compute[260935]: 2025-10-11 09:39:40.314 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:39:40 compute-0 nova_compute[260935]: 2025-10-11 09:39:40.315 2 INFO nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Creating image(s)
Oct 11 09:39:40 compute-0 nova_compute[260935]: 2025-10-11 09:39:40.351 2 DEBUG nova.storage.rbd_utils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cacbf863-eea2-4852-a3a4-7cc929ebacec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:39:40 compute-0 nova_compute[260935]: 2025-10-11 09:39:40.378 2 DEBUG nova.storage.rbd_utils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cacbf863-eea2-4852-a3a4-7cc929ebacec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:39:40 compute-0 nova_compute[260935]: 2025-10-11 09:39:40.407 2 DEBUG nova.storage.rbd_utils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cacbf863-eea2-4852-a3a4-7cc929ebacec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:39:40 compute-0 nova_compute[260935]: 2025-10-11 09:39:40.410 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:39:40 compute-0 ceph-mon[74313]: pgmap v2940: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 1.4 MiB/s wr, 53 op/s
Oct 11 09:39:40 compute-0 nova_compute[260935]: 2025-10-11 09:39:40.519 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:39:40 compute-0 nova_compute[260935]: 2025-10-11 09:39:40.520 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:39:40 compute-0 nova_compute[260935]: 2025-10-11 09:39:40.521 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:39:40 compute-0 nova_compute[260935]: 2025-10-11 09:39:40.521 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:39:40 compute-0 nova_compute[260935]: 2025-10-11 09:39:40.552 2 DEBUG nova.storage.rbd_utils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cacbf863-eea2-4852-a3a4-7cc929ebacec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:39:40 compute-0 nova_compute[260935]: 2025-10-11 09:39:40.558 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 cacbf863-eea2-4852-a3a4-7cc929ebacec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:39:40 compute-0 podman[423494]: 2025-10-11 09:39:40.571125442 +0000 UTC m=+0.058178263 container create a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:39:40 compute-0 systemd[1]: Started libpod-conmon-a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c.scope.
Oct 11 09:39:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea591ea10bfd9ca77ea8906a91c25cd4f3e8ca662b6ddac2224093f2d19825b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea591ea10bfd9ca77ea8906a91c25cd4f3e8ca662b6ddac2224093f2d19825b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea591ea10bfd9ca77ea8906a91c25cd4f3e8ca662b6ddac2224093f2d19825b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea591ea10bfd9ca77ea8906a91c25cd4f3e8ca662b6ddac2224093f2d19825b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:39:40 compute-0 podman[423494]: 2025-10-11 09:39:40.552411297 +0000 UTC m=+0.039464118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:39:40 compute-0 podman[423494]: 2025-10-11 09:39:40.653588765 +0000 UTC m=+0.140641656 container init a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 09:39:40 compute-0 podman[423494]: 2025-10-11 09:39:40.664402498 +0000 UTC m=+0.151455309 container start a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 09:39:40 compute-0 podman[423494]: 2025-10-11 09:39:40.668487343 +0000 UTC m=+0.155540184 container attach a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 09:39:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2941: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 71 KiB/s wr, 14 op/s
Oct 11 09:39:41 compute-0 objective_wescoff[423530]: {
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:     "0": [
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:         {
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "devices": [
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "/dev/loop3"
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             ],
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "lv_name": "ceph_lv0",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "lv_size": "21470642176",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "name": "ceph_lv0",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "tags": {
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.cluster_name": "ceph",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.crush_device_class": "",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.encrypted": "0",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.osd_id": "0",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.type": "block",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.vdo": "0"
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             },
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "type": "block",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "vg_name": "ceph_vg0"
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:         }
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:     ],
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:     "1": [
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:         {
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "devices": [
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "/dev/loop4"
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             ],
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "lv_name": "ceph_lv1",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "lv_size": "21470642176",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "name": "ceph_lv1",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "tags": {
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.cluster_name": "ceph",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.crush_device_class": "",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.encrypted": "0",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.osd_id": "1",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.type": "block",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.vdo": "0"
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             },
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "type": "block",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "vg_name": "ceph_vg1"
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:         }
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:     ],
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:     "2": [
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:         {
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "devices": [
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "/dev/loop5"
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             ],
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "lv_name": "ceph_lv2",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "lv_size": "21470642176",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "name": "ceph_lv2",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "tags": {
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.cluster_name": "ceph",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.crush_device_class": "",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.encrypted": "0",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.osd_id": "2",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.type": "block",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:                 "ceph.vdo": "0"
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             },
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "type": "block",
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:             "vg_name": "ceph_vg2"
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:         }
Oct 11 09:39:41 compute-0 objective_wescoff[423530]:     ]
Oct 11 09:39:41 compute-0 objective_wescoff[423530]: }
Oct 11 09:39:41 compute-0 systemd[1]: libpod-a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c.scope: Deactivated successfully.
Oct 11 09:39:41 compute-0 nova_compute[260935]: 2025-10-11 09:39:41.569 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 cacbf863-eea2-4852-a3a4-7cc929ebacec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:39:41 compute-0 podman[423558]: 2025-10-11 09:39:41.583390624 +0000 UTC m=+0.061448425 container died a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 09:39:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ea591ea10bfd9ca77ea8906a91c25cd4f3e8ca662b6ddac2224093f2d19825b-merged.mount: Deactivated successfully.
Oct 11 09:39:41 compute-0 podman[423559]: 2025-10-11 09:39:41.626616746 +0000 UTC m=+0.085102348 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:39:41 compute-0 podman[423558]: 2025-10-11 09:39:41.640423493 +0000 UTC m=+0.118481284 container remove a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 09:39:41 compute-0 systemd[1]: libpod-conmon-a4a7c55ee7c141d0eb3a7e220a358034c3eddcde754d8a8983f15733ca2a621c.scope: Deactivated successfully.
Oct 11 09:39:41 compute-0 nova_compute[260935]: 2025-10-11 09:39:41.650 2 DEBUG nova.storage.rbd_utils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] resizing rbd image cacbf863-eea2-4852-a3a4-7cc929ebacec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:39:41 compute-0 sudo[423334]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:41 compute-0 nova_compute[260935]: 2025-10-11 09:39:41.745 2 DEBUG nova.objects.instance [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'migration_context' on Instance uuid cacbf863-eea2-4852-a3a4-7cc929ebacec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:39:41 compute-0 sudo[423643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:39:41 compute-0 sudo[423643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:41 compute-0 sudo[423643]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:41 compute-0 sudo[423686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:39:41 compute-0 sudo[423686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:41 compute-0 sudo[423686]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:41 compute-0 sudo[423711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:39:41 compute-0 sudo[423711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:41 compute-0 sudo[423711]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:41 compute-0 sudo[423736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:39:41 compute-0 sudo[423736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:42 compute-0 nova_compute[260935]: 2025-10-11 09:39:42.042 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:39:42 compute-0 nova_compute[260935]: 2025-10-11 09:39:42.043 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Ensure instance console log exists: /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:39:42 compute-0 nova_compute[260935]: 2025-10-11 09:39:42.043 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:39:42 compute-0 nova_compute[260935]: 2025-10-11 09:39:42.044 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:39:42 compute-0 nova_compute[260935]: 2025-10-11 09:39:42.044 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:39:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:42.373 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:39:42 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:42.375 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:39:42 compute-0 nova_compute[260935]: 2025-10-11 09:39:42.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:42 compute-0 podman[423802]: 2025-10-11 09:39:42.493812538 +0000 UTC m=+0.074377857 container create a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 09:39:42 compute-0 ceph-mon[74313]: pgmap v2941: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 71 KiB/s wr, 14 op/s
Oct 11 09:39:42 compute-0 systemd[1]: Started libpod-conmon-a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b.scope.
Oct 11 09:39:42 compute-0 podman[423802]: 2025-10-11 09:39:42.456950624 +0000 UTC m=+0.037515983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:39:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:39:42 compute-0 podman[423802]: 2025-10-11 09:39:42.621197761 +0000 UTC m=+0.201763080 container init a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:39:42 compute-0 podman[423802]: 2025-10-11 09:39:42.634051741 +0000 UTC m=+0.214617070 container start a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:39:42 compute-0 podman[423802]: 2025-10-11 09:39:42.63792596 +0000 UTC m=+0.218491339 container attach a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 09:39:42 compute-0 gallant_chaum[423818]: 167 167
Oct 11 09:39:42 compute-0 systemd[1]: libpod-a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b.scope: Deactivated successfully.
Oct 11 09:39:42 compute-0 podman[423802]: 2025-10-11 09:39:42.641889471 +0000 UTC m=+0.222454760 container died a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:39:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-500409dc348cb3fb22d19be19b408231199d972da7a8172b6a8b61e0a95052d1-merged.mount: Deactivated successfully.
Oct 11 09:39:42 compute-0 podman[423802]: 2025-10-11 09:39:42.692350356 +0000 UTC m=+0.272915665 container remove a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:39:42 compute-0 systemd[1]: libpod-conmon-a26406f1e5bf177ccb68e64ac3accf8ecd8def4ea9e753e1b4c8969fafd0112b.scope: Deactivated successfully.
Oct 11 09:39:42 compute-0 podman[423841]: 2025-10-11 09:39:42.973306716 +0000 UTC m=+0.064642844 container create cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 09:39:43 compute-0 systemd[1]: Started libpod-conmon-cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229.scope.
Oct 11 09:39:43 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:39:43 compute-0 podman[423841]: 2025-10-11 09:39:42.95096904 +0000 UTC m=+0.042305198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:39:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72184d0a7ec2ac2e7179f234ddcb1ca2f49a93d05ab6b6c031b047032002ce7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:39:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72184d0a7ec2ac2e7179f234ddcb1ca2f49a93d05ab6b6c031b047032002ce7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:39:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72184d0a7ec2ac2e7179f234ddcb1ca2f49a93d05ab6b6c031b047032002ce7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:39:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f72184d0a7ec2ac2e7179f234ddcb1ca2f49a93d05ab6b6c031b047032002ce7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:39:43 compute-0 podman[423841]: 2025-10-11 09:39:43.057266831 +0000 UTC m=+0.148602969 container init cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:39:43 compute-0 podman[423841]: 2025-10-11 09:39:43.063118495 +0000 UTC m=+0.154454623 container start cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wu, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:39:43 compute-0 podman[423841]: 2025-10-11 09:39:43.066892191 +0000 UTC m=+0.158228319 container attach cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:39:43 compute-0 nova_compute[260935]: 2025-10-11 09:39:43.069 2 DEBUG nova.network.neutron [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Successfully created port: 79c195e7-9605-49f3-b866-70093b232242 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:39:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2942: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Oct 11 09:39:44 compute-0 nova_compute[260935]: 2025-10-11 09:39:44.059 2 DEBUG nova.network.neutron [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Successfully updated port: 79c195e7-9605-49f3-b866-70093b232242 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:39:44 compute-0 nova_compute[260935]: 2025-10-11 09:39:44.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:44 compute-0 nova_compute[260935]: 2025-10-11 09:39:44.115 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:39:44 compute-0 nova_compute[260935]: 2025-10-11 09:39:44.115 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquired lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:39:44 compute-0 nova_compute[260935]: 2025-10-11 09:39:44.115 2 DEBUG nova.network.neutron [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:39:44 compute-0 clever_wu[423858]: {
Oct 11 09:39:44 compute-0 clever_wu[423858]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:39:44 compute-0 clever_wu[423858]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:39:44 compute-0 clever_wu[423858]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:39:44 compute-0 clever_wu[423858]:         "osd_id": 2,
Oct 11 09:39:44 compute-0 clever_wu[423858]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:39:44 compute-0 clever_wu[423858]:         "type": "bluestore"
Oct 11 09:39:44 compute-0 clever_wu[423858]:     },
Oct 11 09:39:44 compute-0 clever_wu[423858]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:39:44 compute-0 clever_wu[423858]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:39:44 compute-0 clever_wu[423858]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:39:44 compute-0 clever_wu[423858]:         "osd_id": 0,
Oct 11 09:39:44 compute-0 clever_wu[423858]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:39:44 compute-0 clever_wu[423858]:         "type": "bluestore"
Oct 11 09:39:44 compute-0 clever_wu[423858]:     },
Oct 11 09:39:44 compute-0 clever_wu[423858]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:39:44 compute-0 clever_wu[423858]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:39:44 compute-0 clever_wu[423858]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:39:44 compute-0 clever_wu[423858]:         "osd_id": 1,
Oct 11 09:39:44 compute-0 clever_wu[423858]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:39:44 compute-0 clever_wu[423858]:         "type": "bluestore"
Oct 11 09:39:44 compute-0 clever_wu[423858]:     }
Oct 11 09:39:44 compute-0 clever_wu[423858]: }
Oct 11 09:39:44 compute-0 systemd[1]: libpod-cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229.scope: Deactivated successfully.
Oct 11 09:39:44 compute-0 systemd[1]: libpod-cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229.scope: Consumed 1.078s CPU time.
Oct 11 09:39:44 compute-0 podman[423841]: 2025-10-11 09:39:44.144251829 +0000 UTC m=+1.235587957 container died cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wu, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 09:39:44 compute-0 nova_compute[260935]: 2025-10-11 09:39:44.229 2 DEBUG nova.compute.manager [req-8f46a015-bd7a-4b67-8949-c165a5f311f4 req-da8ba57d-f9cc-462a-8eab-1b63c59e1029 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-changed-79c195e7-9605-49f3-b866-70093b232242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:39:44 compute-0 nova_compute[260935]: 2025-10-11 09:39:44.230 2 DEBUG nova.compute.manager [req-8f46a015-bd7a-4b67-8949-c165a5f311f4 req-da8ba57d-f9cc-462a-8eab-1b63c59e1029 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Refreshing instance network info cache due to event network-changed-79c195e7-9605-49f3-b866-70093b232242. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:39:44 compute-0 nova_compute[260935]: 2025-10-11 09:39:44.230 2 DEBUG oslo_concurrency.lockutils [req-8f46a015-bd7a-4b67-8949-c165a5f311f4 req-da8ba57d-f9cc-462a-8eab-1b63c59e1029 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:39:44 compute-0 nova_compute[260935]: 2025-10-11 09:39:44.350 2 DEBUG nova.network.neutron [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:39:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:44.379 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:39:44 compute-0 nova_compute[260935]: 2025-10-11 09:39:44.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f72184d0a7ec2ac2e7179f234ddcb1ca2f49a93d05ab6b6c031b047032002ce7-merged.mount: Deactivated successfully.
Oct 11 09:39:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:39:44 compute-0 ceph-mon[74313]: pgmap v2942: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Oct 11 09:39:44 compute-0 podman[423841]: 2025-10-11 09:39:44.795482324 +0000 UTC m=+1.886818482 container remove cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:39:44 compute-0 systemd[1]: libpod-conmon-cd9fd497c09bcf4bb75533455bd6f33ba36b49525b8a08b097dd8b49b3928229.scope: Deactivated successfully.
Oct 11 09:39:44 compute-0 sudo[423736]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:39:44 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:39:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:39:44 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:39:44 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 25b7aaa0-17e7-4a8c-bd94-21ee68c95857 does not exist
Oct 11 09:39:44 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 680876e7-911d-4608-bccf-493acacda419 does not exist
Oct 11 09:39:44 compute-0 sudo[423903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:39:44 compute-0 sudo[423903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:44 compute-0 sudo[423903]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:45 compute-0 sudo[423928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:39:45 compute-0 sudo[423928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:39:45 compute-0 sudo[423928]: pam_unix(sudo:session): session closed for user root
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.277 2 DEBUG nova.network.neutron [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Updating instance_info_cache with network_info: [{"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.308 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Releasing lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.308 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Instance network_info: |[{"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.309 2 DEBUG oslo_concurrency.lockutils [req-8f46a015-bd7a-4b67-8949-c165a5f311f4 req-da8ba57d-f9cc-462a-8eab-1b63c59e1029 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.310 2 DEBUG nova.network.neutron [req-8f46a015-bd7a-4b67-8949-c165a5f311f4 req-da8ba57d-f9cc-462a-8eab-1b63c59e1029 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Refreshing network info cache for port 79c195e7-9605-49f3-b866-70093b232242 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.315 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Start _get_guest_xml network_info=[{"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:39:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2943: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.323 2 WARNING nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.331 2 DEBUG nova.virt.libvirt.host [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.332 2 DEBUG nova.virt.libvirt.host [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.336 2 DEBUG nova.virt.libvirt.host [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.336 2 DEBUG nova.virt.libvirt.host [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.337 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.338 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.338 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.339 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.339 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.340 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.340 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.341 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.341 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.342 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.342 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.343 2 DEBUG nova.virt.hardware [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.347 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:39:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:39:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1524207635' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.823 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.848 2 DEBUG nova.storage.rbd_utils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cacbf863-eea2-4852-a3a4-7cc929ebacec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:39:45 compute-0 nova_compute[260935]: 2025-10-11 09:39:45.852 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:39:45 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:39:45 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:39:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1524207635' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:39:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:39:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2912784440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.294 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.296 2 DEBUG nova.virt.libvirt.vif [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:39:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1926377608',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1926377608',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=145,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMgIDSEbqXKfaiPe9PEwj6SDspyzopAbfyJnz8+djsMK1KLgiJJqBYs9uE57WKf6jqalL3R3Kh83Cc9busMQoG7JknYjD9hSHphOlTqezk2QHYcvhxySyaS51yDxw6uubA==',key_name='tempest-TestSecurityGroupsBasicOps-1631871125',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-dza5ihjy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:39:39Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=cacbf863-eea2-4852-a3a4-7cc929ebacec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.297 2 DEBUG nova.network.os_vif_util [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.297 2 DEBUG nova.network.os_vif_util [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:d1:72,bridge_name='br-int',has_traffic_filtering=True,id=79c195e7-9605-49f3-b866-70093b232242,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c195e7-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.299 2 DEBUG nova.objects.instance [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid cacbf863-eea2-4852-a3a4-7cc929ebacec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.322 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:39:46 compute-0 nova_compute[260935]:   <uuid>cacbf863-eea2-4852-a3a4-7cc929ebacec</uuid>
Oct 11 09:39:46 compute-0 nova_compute[260935]:   <name>instance-00000091</name>
Oct 11 09:39:46 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:39:46 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:39:46 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1926377608</nova:name>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:39:45</nova:creationTime>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:39:46 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:39:46 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:39:46 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:39:46 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:39:46 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:39:46 compute-0 nova_compute[260935]:         <nova:user uuid="489c4d0457354f4684f8b9e53261224f">tempest-TestSecurityGroupsBasicOps-607770139-project-member</nova:user>
Oct 11 09:39:46 compute-0 nova_compute[260935]:         <nova:project uuid="81e7096f23df4e7d8782cf98d09d54e9">tempest-TestSecurityGroupsBasicOps-607770139</nova:project>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:39:46 compute-0 nova_compute[260935]:         <nova:port uuid="79c195e7-9605-49f3-b866-70093b232242">
Oct 11 09:39:46 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:39:46 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:39:46 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <system>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <entry name="serial">cacbf863-eea2-4852-a3a4-7cc929ebacec</entry>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <entry name="uuid">cacbf863-eea2-4852-a3a4-7cc929ebacec</entry>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     </system>
Oct 11 09:39:46 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:39:46 compute-0 nova_compute[260935]:   <os>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:   </os>
Oct 11 09:39:46 compute-0 nova_compute[260935]:   <features>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:   </features>
Oct 11 09:39:46 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:39:46 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:39:46 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/cacbf863-eea2-4852-a3a4-7cc929ebacec_disk">
Oct 11 09:39:46 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       </source>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:39:46 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/cacbf863-eea2-4852-a3a4-7cc929ebacec_disk.config">
Oct 11 09:39:46 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       </source>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:39:46 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:61:d1:72"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <target dev="tap79c195e7-96"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec/console.log" append="off"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <video>
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     </video>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:39:46 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:39:46 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:39:46 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:39:46 compute-0 nova_compute[260935]: </domain>
Oct 11 09:39:46 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.323 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Preparing to wait for external event network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.324 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.324 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.324 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.325 2 DEBUG nova.virt.libvirt.vif [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:39:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1926377608',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1926377608',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=145,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMgIDSEbqXKfaiPe9PEwj6SDspyzopAbfyJnz8+djsMK1KLgiJJqBYs9uE57WKf6jqalL3R3Kh83Cc9busMQoG7JknYjD9hSHphOlTqezk2QHYcvhxySyaS51yDxw6uubA==',key_name='tempest-TestSecurityGroupsBasicOps-1631871125',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-dza5ihjy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:39:39Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=cacbf863-eea2-4852-a3a4-7cc929ebacec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.325 2 DEBUG nova.network.os_vif_util [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.326 2 DEBUG nova.network.os_vif_util [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:d1:72,bridge_name='br-int',has_traffic_filtering=True,id=79c195e7-9605-49f3-b866-70093b232242,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c195e7-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.326 2 DEBUG os_vif [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:d1:72,bridge_name='br-int',has_traffic_filtering=True,id=79c195e7-9605-49f3-b866-70093b232242,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c195e7-96') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.330 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.331 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.334 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79c195e7-96, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.334 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79c195e7-96, col_values=(('external_ids', {'iface-id': '79c195e7-9605-49f3-b866-70093b232242', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:d1:72', 'vm-uuid': 'cacbf863-eea2-4852-a3a4-7cc929ebacec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:39:46 compute-0 NetworkManager[44960]: <info>  [1760175586.3372] manager: (tap79c195e7-96): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/626)
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.346 2 INFO os_vif [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:d1:72,bridge_name='br-int',has_traffic_filtering=True,id=79c195e7-9605-49f3-b866-70093b232242,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c195e7-96')
Oct 11 09:39:46 compute-0 podman[424018]: 2025-10-11 09:39:46.478739705 +0000 UTC m=+0.090409827 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd)
Oct 11 09:39:46 compute-0 podman[424019]: 2025-10-11 09:39:46.478751885 +0000 UTC m=+0.088338219 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.582 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.584 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.584 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No VIF found with MAC fa:16:3e:61:d1:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.586 2 INFO nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Using config drive
Oct 11 09:39:46 compute-0 nova_compute[260935]: 2025-10-11 09:39:46.627 2 DEBUG nova.storage.rbd_utils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cacbf863-eea2-4852-a3a4-7cc929ebacec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:39:46 compute-0 ceph-mon[74313]: pgmap v2943: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:39:46 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2912784440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:39:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2944: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:39:47 compute-0 nova_compute[260935]: 2025-10-11 09:39:47.780 2 DEBUG nova.network.neutron [req-8f46a015-bd7a-4b67-8949-c165a5f311f4 req-da8ba57d-f9cc-462a-8eab-1b63c59e1029 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Updated VIF entry in instance network info cache for port 79c195e7-9605-49f3-b866-70093b232242. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:39:47 compute-0 nova_compute[260935]: 2025-10-11 09:39:47.780 2 DEBUG nova.network.neutron [req-8f46a015-bd7a-4b67-8949-c165a5f311f4 req-da8ba57d-f9cc-462a-8eab-1b63c59e1029 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Updating instance_info_cache with network_info: [{"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:39:47 compute-0 nova_compute[260935]: 2025-10-11 09:39:47.800 2 DEBUG oslo_concurrency.lockutils [req-8f46a015-bd7a-4b67-8949-c165a5f311f4 req-da8ba57d-f9cc-462a-8eab-1b63c59e1029 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.109 2 INFO nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Creating config drive at /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec/disk.config
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.116 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw47xy07s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.263 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw47xy07s" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.297 2 DEBUG nova.storage.rbd_utils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cacbf863-eea2-4852-a3a4-7cc929ebacec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.301 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec/disk.config cacbf863-eea2-4852-a3a4-7cc929ebacec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.480 2 DEBUG oslo_concurrency.processutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec/disk.config cacbf863-eea2-4852-a3a4-7cc929ebacec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.481 2 INFO nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Deleting local config drive /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec/disk.config because it was imported into RBD.
Oct 11 09:39:48 compute-0 NetworkManager[44960]: <info>  [1760175588.5561] manager: (tap79c195e7-96): new Tun device (/org/freedesktop/NetworkManager/Devices/627)
Oct 11 09:39:48 compute-0 kernel: tap79c195e7-96: entered promiscuous mode
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:48 compute-0 ovn_controller[152945]: 2025-10-11T09:39:48Z|01640|binding|INFO|Claiming lport 79c195e7-9605-49f3-b866-70093b232242 for this chassis.
Oct 11 09:39:48 compute-0 ovn_controller[152945]: 2025-10-11T09:39:48Z|01641|binding|INFO|79c195e7-9605-49f3-b866-70093b232242: Claiming fa:16:3e:61:d1:72 10.100.0.14
Oct 11 09:39:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.575 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:d1:72 10.100.0.14'], port_security=['fa:16:3e:61:d1:72 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'cacbf863-eea2-4852-a3a4-7cc929ebacec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e4a50ca5-3bee-4cfb-9819-432e1e875b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f310564f-86ed-419d-996b-5fa9060df3fb, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=79c195e7-9605-49f3-b866-70093b232242) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:39:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.579 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 79c195e7-9605-49f3-b866-70093b232242 in datapath 1aa39742-414b-41b6-bac5-b401ed01a1ec bound to our chassis
Oct 11 09:39:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.583 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1aa39742-414b-41b6-bac5-b401ed01a1ec
Oct 11 09:39:48 compute-0 systemd-udevd[424135]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:39:48 compute-0 ovn_controller[152945]: 2025-10-11T09:39:48Z|01642|binding|INFO|Setting lport 79c195e7-9605-49f3-b866-70093b232242 ovn-installed in OVS
Oct 11 09:39:48 compute-0 ovn_controller[152945]: 2025-10-11T09:39:48Z|01643|binding|INFO|Setting lport 79c195e7-9605-49f3-b866-70093b232242 up in Southbound
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.610 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dc4c61bf-51f4-42a7-9b09-c6b401bc1e3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:48 compute-0 systemd-machined[215705]: New machine qemu-169-instance-00000091.
Oct 11 09:39:48 compute-0 NetworkManager[44960]: <info>  [1760175588.6295] device (tap79c195e7-96): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:39:48 compute-0 NetworkManager[44960]: <info>  [1760175588.6333] device (tap79c195e7-96): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:39:48 compute-0 systemd[1]: Started Virtual Machine qemu-169-instance-00000091.
Oct 11 09:39:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.666 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f056b2d3-38a8-4bfd-ab00-20d077069b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.670 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa40f7f-f9ff-434f-93b2-e6ccdf9129a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:39:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.717 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[18e2813e-6198-476c-9836-ed227395e219]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.744 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ff806e41-aefb-4a08-9617-eb3ec5a3ab0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1aa39742-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:7f:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 429], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739486, 'reachable_time': 20665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424148, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.772 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d80c92a8-a938-4215-a5bb-e56a8a9d2bb4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1aa39742-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739504, 'tstamp': 739504}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424150, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1aa39742-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739509, 'tstamp': 739509}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424150, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:39:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.775 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1aa39742-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.780 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1aa39742-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:39:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.780 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:39:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.781 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1aa39742-40, col_values=(('external_ids', {'iface-id': 'c50fba63-8396-4845-aa84-03644ac4618d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:39:48 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:39:48.782 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.834 2 DEBUG nova.compute.manager [req-199b2cac-a64c-4944-805c-7a4028ab490e req-f06330ca-668a-4bf8-893a-c338abd88d17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.835 2 DEBUG oslo_concurrency.lockutils [req-199b2cac-a64c-4944-805c-7a4028ab490e req-f06330ca-668a-4bf8-893a-c338abd88d17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.835 2 DEBUG oslo_concurrency.lockutils [req-199b2cac-a64c-4944-805c-7a4028ab490e req-f06330ca-668a-4bf8-893a-c338abd88d17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.836 2 DEBUG oslo_concurrency.lockutils [req-199b2cac-a64c-4944-805c-7a4028ab490e req-f06330ca-668a-4bf8-893a-c338abd88d17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:39:48 compute-0 nova_compute[260935]: 2025-10-11 09:39:48.837 2 DEBUG nova.compute.manager [req-199b2cac-a64c-4944-805c-7a4028ab490e req-f06330ca-668a-4bf8-893a-c338abd88d17 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Processing event network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:39:48 compute-0 ceph-mon[74313]: pgmap v2944: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:39:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2945: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.616 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175589.616395, cacbf863-eea2-4852-a3a4-7cc929ebacec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.617 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] VM Started (Lifecycle Event)
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.620 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.623 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.627 2 INFO nova.virt.libvirt.driver [-] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Instance spawned successfully.
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.627 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.639 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.641 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.649 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.650 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.650 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.650 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.650 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.651 2 DEBUG nova.virt.libvirt.driver [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.659 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.659 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175589.619448, cacbf863-eea2-4852-a3a4-7cc929ebacec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.660 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] VM Paused (Lifecycle Event)
Oct 11 09:39:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.684 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.687 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175589.6222608, cacbf863-eea2-4852-a3a4-7cc929ebacec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.687 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] VM Resumed (Lifecycle Event)
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.708 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.711 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.717 2 INFO nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Took 9.40 seconds to spawn the instance on the hypervisor.
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.717 2 DEBUG nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.728 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.772 2 INFO nova.compute.manager [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Took 11.32 seconds to build instance.
Oct 11 09:39:49 compute-0 nova_compute[260935]: 2025-10-11 09:39:49.798 2 DEBUG oslo_concurrency.lockutils [None req-7b919d74-9860-497a-b49b-c97f88fedcf2 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:39:50 compute-0 nova_compute[260935]: 2025-10-11 09:39:50.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:39:50 compute-0 nova_compute[260935]: 2025-10-11 09:39:50.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:39:50 compute-0 nova_compute[260935]: 2025-10-11 09:39:50.869 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:39:50 compute-0 nova_compute[260935]: 2025-10-11 09:39:50.869 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:39:50 compute-0 nova_compute[260935]: 2025-10-11 09:39:50.869 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:39:50 compute-0 ceph-mon[74313]: pgmap v2945: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:39:50 compute-0 nova_compute[260935]: 2025-10-11 09:39:50.908 2 DEBUG nova.compute.manager [req-1d3f245f-ba12-4fcb-bff8-6b1edd1e9107 req-3eb87703-b457-45c5-8e78-bf3d23440ab3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:39:50 compute-0 nova_compute[260935]: 2025-10-11 09:39:50.908 2 DEBUG oslo_concurrency.lockutils [req-1d3f245f-ba12-4fcb-bff8-6b1edd1e9107 req-3eb87703-b457-45c5-8e78-bf3d23440ab3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:39:50 compute-0 nova_compute[260935]: 2025-10-11 09:39:50.909 2 DEBUG oslo_concurrency.lockutils [req-1d3f245f-ba12-4fcb-bff8-6b1edd1e9107 req-3eb87703-b457-45c5-8e78-bf3d23440ab3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:39:50 compute-0 nova_compute[260935]: 2025-10-11 09:39:50.909 2 DEBUG oslo_concurrency.lockutils [req-1d3f245f-ba12-4fcb-bff8-6b1edd1e9107 req-3eb87703-b457-45c5-8e78-bf3d23440ab3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:39:50 compute-0 nova_compute[260935]: 2025-10-11 09:39:50.909 2 DEBUG nova.compute.manager [req-1d3f245f-ba12-4fcb-bff8-6b1edd1e9107 req-3eb87703-b457-45c5-8e78-bf3d23440ab3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] No waiting events found dispatching network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:39:50 compute-0 nova_compute[260935]: 2025-10-11 09:39:50.910 2 WARNING nova.compute.manager [req-1d3f245f-ba12-4fcb-bff8-6b1edd1e9107 req-3eb87703-b457-45c5-8e78-bf3d23440ab3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received unexpected event network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 for instance with vm_state active and task_state None.
Oct 11 09:39:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2946: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:39:51 compute-0 nova_compute[260935]: 2025-10-11 09:39:51.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:52 compute-0 ceph-mon[74313]: pgmap v2946: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:39:52 compute-0 nova_compute[260935]: 2025-10-11 09:39:52.908 2 DEBUG nova.compute.manager [req-013a23b8-79db-4536-9f44-b9b2757ec46a req-d2030c1a-76e5-4e0f-a5d6-12b60d1d06c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-changed-79c195e7-9605-49f3-b866-70093b232242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:39:52 compute-0 nova_compute[260935]: 2025-10-11 09:39:52.908 2 DEBUG nova.compute.manager [req-013a23b8-79db-4536-9f44-b9b2757ec46a req-d2030c1a-76e5-4e0f-a5d6-12b60d1d06c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Refreshing instance network info cache due to event network-changed-79c195e7-9605-49f3-b866-70093b232242. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:39:52 compute-0 nova_compute[260935]: 2025-10-11 09:39:52.909 2 DEBUG oslo_concurrency.lockutils [req-013a23b8-79db-4536-9f44-b9b2757ec46a req-d2030c1a-76e5-4e0f-a5d6-12b60d1d06c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:39:52 compute-0 nova_compute[260935]: 2025-10-11 09:39:52.909 2 DEBUG oslo_concurrency.lockutils [req-013a23b8-79db-4536-9f44-b9b2757ec46a req-d2030c1a-76e5-4e0f-a5d6-12b60d1d06c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:39:52 compute-0 nova_compute[260935]: 2025-10-11 09:39:52.909 2 DEBUG nova.network.neutron [req-013a23b8-79db-4536-9f44-b9b2757ec46a req-d2030c1a-76e5-4e0f-a5d6-12b60d1d06c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Refreshing network info cache for port 79c195e7-9605-49f3-b866-70093b232242 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:39:52 compute-0 nova_compute[260935]: 2025-10-11 09:39:52.926 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:39:52 compute-0 nova_compute[260935]: 2025-10-11 09:39:52.942 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:39:52 compute-0 nova_compute[260935]: 2025-10-11 09:39:52.943 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:39:52 compute-0 nova_compute[260935]: 2025-10-11 09:39:52.943 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:39:52 compute-0 nova_compute[260935]: 2025-10-11 09:39:52.943 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:39:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2947: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:39:53 compute-0 nova_compute[260935]: 2025-10-11 09:39:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:39:53 compute-0 nova_compute[260935]: 2025-10-11 09:39:53.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:39:53 compute-0 nova_compute[260935]: 2025-10-11 09:39:53.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:39:53 compute-0 nova_compute[260935]: 2025-10-11 09:39:53.748 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:39:53 compute-0 nova_compute[260935]: 2025-10-11 09:39:53.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:39:53 compute-0 nova_compute[260935]: 2025-10-11 09:39:53.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:39:53 compute-0 nova_compute[260935]: 2025-10-11 09:39:53.749 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:39:53 compute-0 nova_compute[260935]: 2025-10-11 09:39:53.750 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:39:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:39:54 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1302718109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.252 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.376 2 DEBUG nova.network.neutron [req-013a23b8-79db-4536-9f44-b9b2757ec46a req-d2030c1a-76e5-4e0f-a5d6-12b60d1d06c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Updated VIF entry in instance network info cache for port 79c195e7-9605-49f3-b866-70093b232242. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.377 2 DEBUG nova.network.neutron [req-013a23b8-79db-4536-9f44-b9b2757ec46a req-d2030c1a-76e5-4e0f-a5d6-12b60d1d06c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Updating instance_info_cache with network_info: [{"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.383 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.384 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.390 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.391 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.391 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.398 2 DEBUG oslo_concurrency.lockutils [req-013a23b8-79db-4536-9f44-b9b2757ec46a req-d2030c1a-76e5-4e0f-a5d6-12b60d1d06c3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.400 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.400 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.407 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.407 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.414 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.415 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.753 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.754 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2445MB free_disk=59.76421356201172GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.755 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.755 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.844 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.844 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.844 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.845 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 82fac090-c427-485c-98cd-ad02e839be40 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.845 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance cacbf863-eea2-4852-a3a4-7cc929ebacec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.845 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.845 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:39:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:39:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:39:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:39:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:39:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:39:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.859 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.879 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.880 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.894 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 09:39:54 compute-0 ceph-mon[74313]: pgmap v2947: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:39:54 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1302718109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.916 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.988 2 DEBUG nova.compute.manager [req-c42889f6-5a2e-42d4-9879-8f34e73175a8 req-bf3ef614-f2d1-45bd-a0ec-59853e96e572 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-changed-79c195e7-9605-49f3-b866-70093b232242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.988 2 DEBUG nova.compute.manager [req-c42889f6-5a2e-42d4-9879-8f34e73175a8 req-bf3ef614-f2d1-45bd-a0ec-59853e96e572 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Refreshing instance network info cache due to event network-changed-79c195e7-9605-49f3-b866-70093b232242. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.989 2 DEBUG oslo_concurrency.lockutils [req-c42889f6-5a2e-42d4-9879-8f34e73175a8 req-bf3ef614-f2d1-45bd-a0ec-59853e96e572 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.989 2 DEBUG oslo_concurrency.lockutils [req-c42889f6-5a2e-42d4-9879-8f34e73175a8 req-bf3ef614-f2d1-45bd-a0ec-59853e96e572 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:39:54 compute-0 nova_compute[260935]: 2025-10-11 09:39:54.990 2 DEBUG nova.network.neutron [req-c42889f6-5a2e-42d4-9879-8f34e73175a8 req-bf3ef614-f2d1-45bd-a0ec-59853e96e572 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Refreshing network info cache for port 79c195e7-9605-49f3-b866-70093b232242 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:39:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:39:55
Oct 11 09:39:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:39:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:39:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'backups', 'default.rgw.log', 'volumes', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.data']
Oct 11 09:39:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:39:55 compute-0 nova_compute[260935]: 2025-10-11 09:39:55.056 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:39:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2948: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 11 09:39:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:39:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/95193417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:39:55 compute-0 nova_compute[260935]: 2025-10-11 09:39:55.531 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:39:55 compute-0 nova_compute[260935]: 2025-10-11 09:39:55.541 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:39:55 compute-0 nova_compute[260935]: 2025-10-11 09:39:55.560 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:39:55 compute-0 nova_compute[260935]: 2025-10-11 09:39:55.594 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:39:55 compute-0 nova_compute[260935]: 2025-10-11 09:39:55.595 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:39:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:39:55 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/95193417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:39:56 compute-0 nova_compute[260935]: 2025-10-11 09:39:56.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:56 compute-0 nova_compute[260935]: 2025-10-11 09:39:56.596 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:39:56 compute-0 ceph-mon[74313]: pgmap v2948: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 11 09:39:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2949: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 11 09:39:57 compute-0 nova_compute[260935]: 2025-10-11 09:39:57.716 2 DEBUG nova.network.neutron [req-c42889f6-5a2e-42d4-9879-8f34e73175a8 req-bf3ef614-f2d1-45bd-a0ec-59853e96e572 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Updated VIF entry in instance network info cache for port 79c195e7-9605-49f3-b866-70093b232242. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:39:57 compute-0 nova_compute[260935]: 2025-10-11 09:39:57.717 2 DEBUG nova.network.neutron [req-c42889f6-5a2e-42d4-9879-8f34e73175a8 req-bf3ef614-f2d1-45bd-a0ec-59853e96e572 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Updating instance_info_cache with network_info: [{"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:39:57 compute-0 nova_compute[260935]: 2025-10-11 09:39:57.736 2 DEBUG oslo_concurrency.lockutils [req-c42889f6-5a2e-42d4-9879-8f34e73175a8 req-bf3ef614-f2d1-45bd-a0ec-59853e96e572 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-cacbf863-eea2-4852-a3a4-7cc929ebacec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:39:58 compute-0 ceph-mon[74313]: pgmap v2949: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 11 09:39:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2950: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Oct 11 09:39:59 compute-0 nova_compute[260935]: 2025-10-11 09:39:59.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:39:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:40:00 compute-0 ceph-mon[74313]: pgmap v2950: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Oct 11 09:40:01 compute-0 ovn_controller[152945]: 2025-10-11T09:40:01Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:61:d1:72 10.100.0.14
Oct 11 09:40:01 compute-0 ovn_controller[152945]: 2025-10-11T09:40:01Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:d1:72 10.100.0.14
Oct 11 09:40:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2951: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 11 09:40:01 compute-0 nova_compute[260935]: 2025-10-11 09:40:01.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:02 compute-0 unix_chkpwd[424242]: password check failed for user (root)
Oct 11 09:40:02 compute-0 sshd-session[424240]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179  user=root
Oct 11 09:40:02 compute-0 ceph-mon[74313]: pgmap v2951: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 11 09:40:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2952: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 11 09:40:03 compute-0 nova_compute[260935]: 2025-10-11 09:40:03.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:40:04 compute-0 sshd-session[424240]: Failed password for root from 155.4.244.179 port 21801 ssh2
Oct 11 09:40:04 compute-0 sshd-session[424240]: Received disconnect from 155.4.244.179 port 21801:11: Bye Bye [preauth]
Oct 11 09:40:04 compute-0 sshd-session[424240]: Disconnected from authenticating user root 155.4.244.179 port 21801 [preauth]
Oct 11 09:40:04 compute-0 nova_compute[260935]: 2025-10-11 09:40:04.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:40:04 compute-0 podman[424243]: 2025-10-11 09:40:04.826036568 +0000 UTC m=+0.102729452 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 09:40:05 compute-0 ceph-mon[74313]: pgmap v2952: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2953: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004143952065833171 of space, bias 1.0, pg target 1.2431856197499511 quantized to 32 (current 32)
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:40:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:40:06 compute-0 nova_compute[260935]: 2025-10-11 09:40:06.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:07 compute-0 ceph-mon[74313]: pgmap v2953: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 09:40:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2954: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.467 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "cacbf863-eea2-4852-a3a4-7cc929ebacec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.467 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.468 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.469 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.469 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.471 2 INFO nova.compute.manager [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Terminating instance
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.472 2 DEBUG nova.compute.manager [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:40:07 compute-0 kernel: tap79c195e7-96 (unregistering): left promiscuous mode
Oct 11 09:40:07 compute-0 NetworkManager[44960]: <info>  [1760175607.5278] device (tap79c195e7-96): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:40:07 compute-0 ovn_controller[152945]: 2025-10-11T09:40:07Z|01644|binding|INFO|Releasing lport 79c195e7-9605-49f3-b866-70093b232242 from this chassis (sb_readonly=0)
Oct 11 09:40:07 compute-0 ovn_controller[152945]: 2025-10-11T09:40:07Z|01645|binding|INFO|Setting lport 79c195e7-9605-49f3-b866-70093b232242 down in Southbound
Oct 11 09:40:07 compute-0 ovn_controller[152945]: 2025-10-11T09:40:07Z|01646|binding|INFO|Removing iface tap79c195e7-96 ovn-installed in OVS
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.557 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:d1:72 10.100.0.14', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'cacbf863-eea2-4852-a3a4-7cc929ebacec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f310564f-86ed-419d-996b-5fa9060df3fb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=79c195e7-9605-49f3-b866-70093b232242) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:40:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.558 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 79c195e7-9605-49f3-b866-70093b232242 in datapath 1aa39742-414b-41b6-bac5-b401ed01a1ec unbound from our chassis
Oct 11 09:40:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.560 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1aa39742-414b-41b6-bac5-b401ed01a1ec
Oct 11 09:40:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.582 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7a5ddb-6a5a-46de-b85b-9c6c9e1f12fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:40:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 61K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1366 writes, 6414 keys, 1366 commit groups, 1.0 writes per commit group, ingest: 8.66 MB, 0.01 MB/s
                                           Interval WAL: 1366 writes, 1366 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     70.8      1.05              0.30        44    0.024       0      0       0.0       0.0
                                             L6      1/0    9.31 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.8    151.5    128.1      2.76              1.42        43    0.064    272K    23K       0.0       0.0
                                            Sum      1/0    9.31 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.8    109.9    112.3      3.80              1.72        87    0.044    272K    23K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.3     88.1     90.2      0.70              0.24        12    0.058     49K   3009       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    151.5    128.1      2.76              1.42        43    0.064    272K    23K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     71.0      1.04              0.30        43    0.024       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.072, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.42 GB write, 0.08 MB/s write, 0.41 GB read, 0.08 MB/s read, 3.8 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 304.00 MB usage: 47.71 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000298 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3112,45.75 MB,15.0506%) FilterBlock(88,762.23 KB,0.244858%) IndexBlock(88,1.21 MB,0.399574%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 09:40:07 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000091.scope: Deactivated successfully.
Oct 11 09:40:07 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000091.scope: Consumed 13.317s CPU time.
Oct 11 09:40:07 compute-0 systemd-machined[215705]: Machine qemu-169-instance-00000091 terminated.
Oct 11 09:40:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.622 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[6935d3d5-a4e3-4047-9a39-9fe3bfc6b212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.626 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[12f57cdd-4b9d-412e-a9a7-37fb39ace161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.655 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f0400178-fa9c-44eb-bc01-9d4b1a41f140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.686 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6637ba85-fa1a-47c1-839b-e253a67b117c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1aa39742-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:7f:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1670, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1670, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 429], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739486, 'reachable_time': 20665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 15, 'inoctets': 1208, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 15, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1208, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 15, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424274, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.707 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff647b4-7a86-43e9-ab63-d056b53a9ed1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1aa39742-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739504, 'tstamp': 739504}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424277, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1aa39742-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739509, 'tstamp': 739509}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424277, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.709 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1aa39742-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.722 2 INFO nova.virt.libvirt.driver [-] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Instance destroyed successfully.
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.723 2 DEBUG nova.objects.instance [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'resources' on Instance uuid cacbf863-eea2-4852-a3a4-7cc929ebacec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.727 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1aa39742-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:40:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.728 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:40:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.728 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1aa39742-40, col_values=(('external_ids', {'iface-id': 'c50fba63-8396-4845-aa84-03644ac4618d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:40:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:07.729 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.744 2 DEBUG nova.virt.libvirt.vif [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:39:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1926377608',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1926377608',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=145,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMgIDSEbqXKfaiPe9PEwj6SDspyzopAbfyJnz8+djsMK1KLgiJJqBYs9uE57WKf6jqalL3R3Kh83Cc9busMQoG7JknYjD9hSHphOlTqezk2QHYcvhxySyaS51yDxw6uubA==',key_name='tempest-TestSecurityGroupsBasicOps-1631871125',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:39:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-dza5ihjy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:39:49Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=cacbf863-eea2-4852-a3a4-7cc929ebacec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.744 2 DEBUG nova.network.os_vif_util [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "79c195e7-9605-49f3-b866-70093b232242", "address": "fa:16:3e:61:d1:72", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c195e7-96", "ovs_interfaceid": "79c195e7-9605-49f3-b866-70093b232242", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.746 2 DEBUG nova.network.os_vif_util [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:61:d1:72,bridge_name='br-int',has_traffic_filtering=True,id=79c195e7-9605-49f3-b866-70093b232242,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c195e7-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.747 2 DEBUG os_vif [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:d1:72,bridge_name='br-int',has_traffic_filtering=True,id=79c195e7-9605-49f3-b866-70093b232242,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c195e7-96') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.749 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79c195e7-96, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.757 2 INFO os_vif [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:d1:72,bridge_name='br-int',has_traffic_filtering=True,id=79c195e7-9605-49f3-b866-70093b232242,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c195e7-96')
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.855 2 DEBUG nova.compute.manager [req-c83ed9c1-40a8-4aca-a4c4-999f9b82a9a6 req-11bf5389-e06b-4944-ae55-fe19b8a04f67 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-vif-unplugged-79c195e7-9605-49f3-b866-70093b232242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.858 2 DEBUG oslo_concurrency.lockutils [req-c83ed9c1-40a8-4aca-a4c4-999f9b82a9a6 req-11bf5389-e06b-4944-ae55-fe19b8a04f67 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.859 2 DEBUG oslo_concurrency.lockutils [req-c83ed9c1-40a8-4aca-a4c4-999f9b82a9a6 req-11bf5389-e06b-4944-ae55-fe19b8a04f67 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.860 2 DEBUG oslo_concurrency.lockutils [req-c83ed9c1-40a8-4aca-a4c4-999f9b82a9a6 req-11bf5389-e06b-4944-ae55-fe19b8a04f67 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.861 2 DEBUG nova.compute.manager [req-c83ed9c1-40a8-4aca-a4c4-999f9b82a9a6 req-11bf5389-e06b-4944-ae55-fe19b8a04f67 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] No waiting events found dispatching network-vif-unplugged-79c195e7-9605-49f3-b866-70093b232242 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:40:07 compute-0 nova_compute[260935]: 2025-10-11 09:40:07.862 2 DEBUG nova.compute.manager [req-c83ed9c1-40a8-4aca-a4c4-999f9b82a9a6 req-11bf5389-e06b-4944-ae55-fe19b8a04f67 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-vif-unplugged-79c195e7-9605-49f3-b866-70093b232242 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:40:08 compute-0 nova_compute[260935]: 2025-10-11 09:40:08.132 2 INFO nova.virt.libvirt.driver [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Deleting instance files /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec_del
Oct 11 09:40:08 compute-0 nova_compute[260935]: 2025-10-11 09:40:08.134 2 INFO nova.virt.libvirt.driver [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Deletion of /var/lib/nova/instances/cacbf863-eea2-4852-a3a4-7cc929ebacec_del complete
Oct 11 09:40:08 compute-0 nova_compute[260935]: 2025-10-11 09:40:08.209 2 INFO nova.compute.manager [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct 11 09:40:08 compute-0 nova_compute[260935]: 2025-10-11 09:40:08.210 2 DEBUG oslo.service.loopingcall [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:40:08 compute-0 nova_compute[260935]: 2025-10-11 09:40:08.212 2 DEBUG nova.compute.manager [-] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:40:08 compute-0 nova_compute[260935]: 2025-10-11 09:40:08.213 2 DEBUG nova.network.neutron [-] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:40:08 compute-0 nova_compute[260935]: 2025-10-11 09:40:08.736 2 DEBUG nova.network.neutron [-] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:40:08 compute-0 nova_compute[260935]: 2025-10-11 09:40:08.778 2 INFO nova.compute.manager [-] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Took 0.57 seconds to deallocate network for instance.
Oct 11 09:40:08 compute-0 nova_compute[260935]: 2025-10-11 09:40:08.831 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:08 compute-0 nova_compute[260935]: 2025-10-11 09:40:08.831 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:08 compute-0 nova_compute[260935]: 2025-10-11 09:40:08.845 2 DEBUG nova.compute.manager [req-151c9b99-690d-4545-b169-102528c27861 req-14140b4b-d40c-44af-b243-81e66f5005c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-vif-deleted-79c195e7-9605-49f3-b866-70093b232242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:40:08 compute-0 nova_compute[260935]: 2025-10-11 09:40:08.965 2 DEBUG oslo_concurrency.processutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:40:09 compute-0 ceph-mon[74313]: pgmap v2954: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 09:40:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2955: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 11 09:40:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:40:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3117894820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:40:09 compute-0 nova_compute[260935]: 2025-10-11 09:40:09.410 2 DEBUG oslo_concurrency.processutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:40:09 compute-0 nova_compute[260935]: 2025-10-11 09:40:09.416 2 DEBUG nova.compute.provider_tree [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:40:09 compute-0 nova_compute[260935]: 2025-10-11 09:40:09.430 2 DEBUG nova.scheduler.client.report [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:40:09 compute-0 nova_compute[260935]: 2025-10-11 09:40:09.449 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:09 compute-0 nova_compute[260935]: 2025-10-11 09:40:09.473 2 INFO nova.scheduler.client.report [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Deleted allocations for instance cacbf863-eea2-4852-a3a4-7cc929ebacec
Oct 11 09:40:09 compute-0 nova_compute[260935]: 2025-10-11 09:40:09.521 2 DEBUG oslo_concurrency.lockutils [None req-13807304-469c-4d72-89ec-44b36ba0bc53 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:40:09 compute-0 nova_compute[260935]: 2025-10-11 09:40:09.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:09 compute-0 nova_compute[260935]: 2025-10-11 09:40:09.944 2 DEBUG nova.compute.manager [req-621f7e6b-c6a9-427a-9a72-f67fab378eb4 req-e3220cae-deaa-4b29-a35f-5611a2b1d0bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received event network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:40:09 compute-0 nova_compute[260935]: 2025-10-11 09:40:09.944 2 DEBUG oslo_concurrency.lockutils [req-621f7e6b-c6a9-427a-9a72-f67fab378eb4 req-e3220cae-deaa-4b29-a35f-5611a2b1d0bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:09 compute-0 nova_compute[260935]: 2025-10-11 09:40:09.944 2 DEBUG oslo_concurrency.lockutils [req-621f7e6b-c6a9-427a-9a72-f67fab378eb4 req-e3220cae-deaa-4b29-a35f-5611a2b1d0bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:09 compute-0 nova_compute[260935]: 2025-10-11 09:40:09.944 2 DEBUG oslo_concurrency.lockutils [req-621f7e6b-c6a9-427a-9a72-f67fab378eb4 req-e3220cae-deaa-4b29-a35f-5611a2b1d0bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cacbf863-eea2-4852-a3a4-7cc929ebacec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:09 compute-0 nova_compute[260935]: 2025-10-11 09:40:09.945 2 DEBUG nova.compute.manager [req-621f7e6b-c6a9-427a-9a72-f67fab378eb4 req-e3220cae-deaa-4b29-a35f-5611a2b1d0bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] No waiting events found dispatching network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:40:09 compute-0 nova_compute[260935]: 2025-10-11 09:40:09.945 2 WARNING nova.compute.manager [req-621f7e6b-c6a9-427a-9a72-f67fab378eb4 req-e3220cae-deaa-4b29-a35f-5611a2b1d0bf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Received unexpected event network-vif-plugged-79c195e7-9605-49f3-b866-70093b232242 for instance with vm_state deleted and task_state None.
Oct 11 09:40:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3117894820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:40:11 compute-0 ceph-mon[74313]: pgmap v2955: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 11 09:40:11 compute-0 sshd-session[424327]: Invalid user mysql from 165.232.82.252 port 59890
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.200 2 DEBUG nova.compute.manager [req-353880be-ba1d-436d-b91a-a5d8b44e1231 req-9e82cae5-d1d4-4af6-a1ad-f1584ed29072 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-changed-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.202 2 DEBUG nova.compute.manager [req-353880be-ba1d-436d-b91a-a5d8b44e1231 req-9e82cae5-d1d4-4af6-a1ad-f1584ed29072 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Refreshing instance network info cache due to event network-changed-e8668336-ee38-49b1-97dd-f6c5fcfab2e5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.202 2 DEBUG oslo_concurrency.lockutils [req-353880be-ba1d-436d-b91a-a5d8b44e1231 req-9e82cae5-d1d4-4af6-a1ad-f1584ed29072 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.202 2 DEBUG oslo_concurrency.lockutils [req-353880be-ba1d-436d-b91a-a5d8b44e1231 req-9e82cae5-d1d4-4af6-a1ad-f1584ed29072 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.203 2 DEBUG nova.network.neutron [req-353880be-ba1d-436d-b91a-a5d8b44e1231 req-9e82cae5-d1d4-4af6-a1ad-f1584ed29072 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Refreshing network info cache for port e8668336-ee38-49b1-97dd-f6c5fcfab2e5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:40:11 compute-0 sshd-session[424327]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:40:11 compute-0 sshd-session[424327]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:40:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2956: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.339 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "82fac090-c427-485c-98cd-ad02e839be40" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.339 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.340 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "82fac090-c427-485c-98cd-ad02e839be40-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.340 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.341 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.342 2 INFO nova.compute.manager [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Terminating instance
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.344 2 DEBUG nova.compute.manager [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:40:11 compute-0 kernel: tape8668336-ee (unregistering): left promiscuous mode
Oct 11 09:40:11 compute-0 NetworkManager[44960]: <info>  [1760175611.5305] device (tape8668336-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:11 compute-0 ovn_controller[152945]: 2025-10-11T09:40:11Z|01647|binding|INFO|Releasing lport e8668336-ee38-49b1-97dd-f6c5fcfab2e5 from this chassis (sb_readonly=0)
Oct 11 09:40:11 compute-0 ovn_controller[152945]: 2025-10-11T09:40:11Z|01648|binding|INFO|Setting lport e8668336-ee38-49b1-97dd-f6c5fcfab2e5 down in Southbound
Oct 11 09:40:11 compute-0 ovn_controller[152945]: 2025-10-11T09:40:11Z|01649|binding|INFO|Removing iface tape8668336-ee ovn-installed in OVS
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.545 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:4e:40 10.100.0.4'], port_security=['fa:16:3e:ae:4e:40 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '82fac090-c427-485c-98cd-ad02e839be40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7b70b57d-37bb-4331-92b9-ae0d4d7e602c e4a50ca5-3bee-4cfb-9819-432e1e875b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f310564f-86ed-419d-996b-5fa9060df3fb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e8668336-ee38-49b1-97dd-f6c5fcfab2e5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.546 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e8668336-ee38-49b1-97dd-f6c5fcfab2e5 in datapath 1aa39742-414b-41b6-bac5-b401ed01a1ec unbound from our chassis
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.548 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1aa39742-414b-41b6-bac5-b401ed01a1ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.549 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab2575e-271f-4d27-bcd5-6b3429e8e9e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.549 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec namespace which is not needed anymore
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:11 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000090.scope: Deactivated successfully.
Oct 11 09:40:11 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000090.scope: Consumed 15.372s CPU time.
Oct 11 09:40:11 compute-0 systemd-machined[215705]: Machine qemu-168-instance-00000090 terminated.
Oct 11 09:40:11 compute-0 neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec[422779]: [NOTICE]   (422792) : haproxy version is 2.8.14-c23fe91
Oct 11 09:40:11 compute-0 neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec[422779]: [NOTICE]   (422792) : path to executable is /usr/sbin/haproxy
Oct 11 09:40:11 compute-0 neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec[422779]: [WARNING]  (422792) : Exiting Master process...
Oct 11 09:40:11 compute-0 neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec[422779]: [WARNING]  (422792) : Exiting Master process...
Oct 11 09:40:11 compute-0 neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec[422779]: [ALERT]    (422792) : Current worker (422794) exited with code 143 (Terminated)
Oct 11 09:40:11 compute-0 neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec[422779]: [WARNING]  (422792) : All workers exited. Exiting... (0)
Oct 11 09:40:11 compute-0 systemd[1]: libpod-038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798.scope: Deactivated successfully.
Oct 11 09:40:11 compute-0 podman[424352]: 2025-10-11 09:40:11.706433231 +0000 UTC m=+0.055014004 container died 038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:40:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798-userdata-shm.mount: Deactivated successfully.
Oct 11 09:40:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e06eeafc87d96a39c349d61f6972a13eca41eb9183cc378fdf1defab8f1ee2b5-merged.mount: Deactivated successfully.
Oct 11 09:40:11 compute-0 podman[424352]: 2025-10-11 09:40:11.745104055 +0000 UTC m=+0.093684818 container cleanup 038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 09:40:11 compute-0 systemd[1]: libpod-conmon-038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798.scope: Deactivated successfully.
Oct 11 09:40:11 compute-0 kernel: tape8668336-ee: entered promiscuous mode
Oct 11 09:40:11 compute-0 systemd-udevd[424332]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:40:11 compute-0 NetworkManager[44960]: <info>  [1760175611.7693] manager: (tape8668336-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/628)
Oct 11 09:40:11 compute-0 kernel: tape8668336-ee (unregistering): left promiscuous mode
Oct 11 09:40:11 compute-0 ovn_controller[152945]: 2025-10-11T09:40:11Z|01650|binding|INFO|Claiming lport e8668336-ee38-49b1-97dd-f6c5fcfab2e5 for this chassis.
Oct 11 09:40:11 compute-0 ovn_controller[152945]: 2025-10-11T09:40:11Z|01651|binding|INFO|e8668336-ee38-49b1-97dd-f6c5fcfab2e5: Claiming fa:16:3e:ae:4e:40 10.100.0.4
Oct 11 09:40:11 compute-0 podman[424364]: 2025-10-11 09:40:11.77309092 +0000 UTC m=+0.074963803 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.779 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:4e:40 10.100.0.4'], port_security=['fa:16:3e:ae:4e:40 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '82fac090-c427-485c-98cd-ad02e839be40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7b70b57d-37bb-4331-92b9-ae0d4d7e602c e4a50ca5-3bee-4cfb-9819-432e1e875b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f310564f-86ed-419d-996b-5fa9060df3fb, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e8668336-ee38-49b1-97dd-f6c5fcfab2e5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:40:11 compute-0 ovn_controller[152945]: 2025-10-11T09:40:11Z|01652|binding|INFO|Releasing lport e8668336-ee38-49b1-97dd-f6c5fcfab2e5 from this chassis (sb_readonly=0)
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.802 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:4e:40 10.100.0.4'], port_security=['fa:16:3e:ae:4e:40 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '82fac090-c427-485c-98cd-ad02e839be40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7b70b57d-37bb-4331-92b9-ae0d4d7e602c e4a50ca5-3bee-4cfb-9819-432e1e875b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f310564f-86ed-419d-996b-5fa9060df3fb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e8668336-ee38-49b1-97dd-f6c5fcfab2e5) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.803 2 INFO nova.virt.libvirt.driver [-] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Instance destroyed successfully.
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.804 2 DEBUG nova.objects.instance [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'resources' on Instance uuid 82fac090-c427-485c-98cd-ad02e839be40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.821 2 DEBUG nova.virt.libvirt.vif [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:38:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-830501792',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-830501792',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=144,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMgIDSEbqXKfaiPe9PEwj6SDspyzopAbfyJnz8+djsMK1KLgiJJqBYs9uE57WKf6jqalL3R3Kh83Cc9busMQoG7JknYjD9hSHphOlTqezk2QHYcvhxySyaS51yDxw6uubA==',key_name='tempest-TestSecurityGroupsBasicOps-1631871125',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:39:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-bjfbul0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:39:11Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=82fac090-c427-485c-98cd-ad02e839be40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.822 2 DEBUG nova.network.os_vif_util [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.823 2 DEBUG nova.network.os_vif_util [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:4e:40,bridge_name='br-int',has_traffic_filtering=True,id=e8668336-ee38-49b1-97dd-f6c5fcfab2e5,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8668336-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.823 2 DEBUG os_vif [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:4e:40,bridge_name='br-int',has_traffic_filtering=True,id=e8668336-ee38-49b1-97dd-f6c5fcfab2e5,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8668336-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:40:11 compute-0 podman[424401]: 2025-10-11 09:40:11.823909766 +0000 UTC m=+0.054885781 container remove 038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.825 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8668336-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.830 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5923bbbf-2429-46dc-8ee3-4e03435ba210]: (4, ('Sat Oct 11 09:40:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec (038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798)\n038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798\nSat Oct 11 09:40:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec (038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798)\n038b982849d91ce8fc8acc43b62a7023ef78d8265bf04b4627973c6c00b5d798\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.830 2 INFO os_vif [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:4e:40,bridge_name='br-int',has_traffic_filtering=True,id=e8668336-ee38-49b1-97dd-f6c5fcfab2e5,network=Network(1aa39742-414b-41b6-bac5-b401ed01a1ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8668336-ee')
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.831 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ea719301-76c1-433e-8c75-5fde912ba966]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.831 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1aa39742-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:40:11 compute-0 kernel: tap1aa39742-40: left promiscuous mode
Oct 11 09:40:11 compute-0 nova_compute[260935]: 2025-10-11 09:40:11.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.855 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6a6056da-6927-44bb-8371-5cac7f76d817]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.874 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[513c3dd1-daac-431a-be04-2ca6c0f2ee1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.875 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[120016fa-3b9e-4b7e-af39-93e55d4f86ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.889 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf3287d-0c96-49c1-a85c-d6fb26c8029f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739478, 'reachable_time': 24974, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424438, 'error': None, 'target': 'ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.892 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1aa39742-414b-41b6-bac5-b401ed01a1ec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.892 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[0afcecde-377a-40c0-9459-4e5ae8bc5596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.893 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e8668336-ee38-49b1-97dd-f6c5fcfab2e5 in datapath 1aa39742-414b-41b6-bac5-b401ed01a1ec unbound from our chassis
Oct 11 09:40:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d1aa39742\x2d414b\x2d41b6\x2dbac5\x2db401ed01a1ec.mount: Deactivated successfully.
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.894 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1aa39742-414b-41b6-bac5-b401ed01a1ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.895 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[99b0368a-fc15-4d13-bc5b-54cb3b67d372]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.895 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e8668336-ee38-49b1-97dd-f6c5fcfab2e5 in datapath 1aa39742-414b-41b6-bac5-b401ed01a1ec unbound from our chassis
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.896 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1aa39742-414b-41b6-bac5-b401ed01a1ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:40:11 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:11.896 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bf90c03a-3dbb-463b-9c5a-121997bae0b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:12 compute-0 nova_compute[260935]: 2025-10-11 09:40:12.869 2 DEBUG nova.network.neutron [req-353880be-ba1d-436d-b91a-a5d8b44e1231 req-9e82cae5-d1d4-4af6-a1ad-f1584ed29072 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Updated VIF entry in instance network info cache for port e8668336-ee38-49b1-97dd-f6c5fcfab2e5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:40:12 compute-0 nova_compute[260935]: 2025-10-11 09:40:12.871 2 DEBUG nova.network.neutron [req-353880be-ba1d-436d-b91a-a5d8b44e1231 req-9e82cae5-d1d4-4af6-a1ad-f1584ed29072 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Updating instance_info_cache with network_info: [{"id": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "address": "fa:16:3e:ae:4e:40", "network": {"id": "1aa39742-414b-41b6-bac5-b401ed01a1ec", "bridge": "br-int", "label": "tempest-network-smoke--1567678547", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8668336-ee", "ovs_interfaceid": "e8668336-ee38-49b1-97dd-f6c5fcfab2e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:40:12 compute-0 nova_compute[260935]: 2025-10-11 09:40:12.895 2 DEBUG oslo_concurrency.lockutils [req-353880be-ba1d-436d-b91a-a5d8b44e1231 req-9e82cae5-d1d4-4af6-a1ad-f1584ed29072 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-82fac090-c427-485c-98cd-ad02e839be40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.023 2 INFO nova.virt.libvirt.driver [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Deleting instance files /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40_del
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.025 2 INFO nova.virt.libvirt.driver [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Deletion of /var/lib/nova/instances/82fac090-c427-485c-98cd-ad02e839be40_del complete
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.095 2 INFO nova.compute.manager [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Took 1.75 seconds to destroy the instance on the hypervisor.
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.096 2 DEBUG oslo.service.loopingcall [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.096 2 DEBUG nova.compute.manager [-] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.097 2 DEBUG nova.network.neutron [-] [instance: 82fac090-c427-485c-98cd-ad02e839be40] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:40:13 compute-0 ceph-mon[74313]: pgmap v2956: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.281 2 DEBUG nova.compute.manager [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-vif-unplugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.282 2 DEBUG oslo_concurrency.lockutils [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "82fac090-c427-485c-98cd-ad02e839be40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.282 2 DEBUG oslo_concurrency.lockutils [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.282 2 DEBUG oslo_concurrency.lockutils [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.283 2 DEBUG nova.compute.manager [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] No waiting events found dispatching network-vif-unplugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.283 2 DEBUG nova.compute.manager [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-vif-unplugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.283 2 DEBUG nova.compute.manager [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.284 2 DEBUG oslo_concurrency.lockutils [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "82fac090-c427-485c-98cd-ad02e839be40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.284 2 DEBUG oslo_concurrency.lockutils [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.284 2 DEBUG oslo_concurrency.lockutils [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.284 2 DEBUG nova.compute.manager [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] No waiting events found dispatching network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.285 2 WARNING nova.compute.manager [req-310f909e-3b5b-4b47-986c-4eeaa4b0804a req-84667300-8306-4472-9a2d-f89a00f1af8c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received unexpected event network-vif-plugged-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 for instance with vm_state active and task_state deleting.
Oct 11 09:40:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2957: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 11 09:40:13 compute-0 sshd-session[424327]: Failed password for invalid user mysql from 165.232.82.252 port 59890 ssh2
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.949 2 DEBUG nova.network.neutron [-] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:40:13 compute-0 nova_compute[260935]: 2025-10-11 09:40:13.969 2 INFO nova.compute.manager [-] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Took 0.87 seconds to deallocate network for instance.
Oct 11 09:40:14 compute-0 nova_compute[260935]: 2025-10-11 09:40:14.014 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:14 compute-0 nova_compute[260935]: 2025-10-11 09:40:14.015 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:14 compute-0 nova_compute[260935]: 2025-10-11 09:40:14.129 2 DEBUG oslo_concurrency.processutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:40:14 compute-0 sshd-session[424327]: Connection closed by invalid user mysql 165.232.82.252 port 59890 [preauth]
Oct 11 09:40:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:40:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/288191443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:40:14 compute-0 nova_compute[260935]: 2025-10-11 09:40:14.632 2 DEBUG oslo_concurrency.processutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:40:14 compute-0 nova_compute[260935]: 2025-10-11 09:40:14.642 2 DEBUG nova.compute.provider_tree [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:40:14 compute-0 nova_compute[260935]: 2025-10-11 09:40:14.659 2 DEBUG nova.scheduler.client.report [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:40:14 compute-0 nova_compute[260935]: 2025-10-11 09:40:14.676 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:40:14 compute-0 nova_compute[260935]: 2025-10-11 09:40:14.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:14 compute-0 nova_compute[260935]: 2025-10-11 09:40:14.708 2 INFO nova.scheduler.client.report [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Deleted allocations for instance 82fac090-c427-485c-98cd-ad02e839be40
Oct 11 09:40:14 compute-0 nova_compute[260935]: 2025-10-11 09:40:14.774 2 DEBUG oslo_concurrency.lockutils [None req-ebd233bc-240a-4544-adea-8404466f68f4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "82fac090-c427-485c-98cd-ad02e839be40" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:15.236 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:15.236 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:15.237 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:15 compute-0 ceph-mon[74313]: pgmap v2957: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 11 09:40:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/288191443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:40:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2958: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.5 KiB/s wr, 43 op/s
Oct 11 09:40:15 compute-0 nova_compute[260935]: 2025-10-11 09:40:15.390 2 DEBUG nova.compute.manager [req-8c8ec429-264a-4d22-9286-ed572f20b435 req-488133fe-ad0a-40ae-bdc1-15e1d66146e4 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Received event network-vif-deleted-e8668336-ee38-49b1-97dd-f6c5fcfab2e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:40:16 compute-0 ceph-mon[74313]: pgmap v2958: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.5 KiB/s wr, 43 op/s
Oct 11 09:40:16 compute-0 podman[424462]: 2025-10-11 09:40:16.798149753 +0000 UTC m=+0.095253833 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 09:40:16 compute-0 nova_compute[260935]: 2025-10-11 09:40:16.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:16 compute-0 podman[424463]: 2025-10-11 09:40:16.829306106 +0000 UTC m=+0.116749315 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:40:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2959: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.5 KiB/s wr, 43 op/s
Oct 11 09:40:18 compute-0 ovn_controller[152945]: 2025-10-11T09:40:18Z|01653|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:40:18 compute-0 ovn_controller[152945]: 2025-10-11T09:40:18Z|01654|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:40:18 compute-0 nova_compute[260935]: 2025-10-11 09:40:18.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:18 compute-0 ovn_controller[152945]: 2025-10-11T09:40:18Z|01655|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:40:18 compute-0 ovn_controller[152945]: 2025-10-11T09:40:18Z|01656|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:40:18 compute-0 nova_compute[260935]: 2025-10-11 09:40:18.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:18 compute-0 ceph-mon[74313]: pgmap v2959: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.5 KiB/s wr, 43 op/s
Oct 11 09:40:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2960: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 5.1 KiB/s wr, 56 op/s
Oct 11 09:40:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:40:19 compute-0 nova_compute[260935]: 2025-10-11 09:40:19.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:21 compute-0 ceph-mon[74313]: pgmap v2960: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 5.1 KiB/s wr, 56 op/s
Oct 11 09:40:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2961: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Oct 11 09:40:21 compute-0 nova_compute[260935]: 2025-10-11 09:40:21.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:22 compute-0 nova_compute[260935]: 2025-10-11 09:40:22.718 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175607.7165146, cacbf863-eea2-4852-a3a4-7cc929ebacec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:40:22 compute-0 nova_compute[260935]: 2025-10-11 09:40:22.719 2 INFO nova.compute.manager [-] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] VM Stopped (Lifecycle Event)
Oct 11 09:40:22 compute-0 nova_compute[260935]: 2025-10-11 09:40:22.742 2 DEBUG nova.compute.manager [None req-5b59a428-284f-47e1-b707-00db6326745b - - - - - -] [instance: cacbf863-eea2-4852-a3a4-7cc929ebacec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:40:23 compute-0 ceph-mon[74313]: pgmap v2961: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Oct 11 09:40:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2962: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Oct 11 09:40:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:40:24 compute-0 nova_compute[260935]: 2025-10-11 09:40:24.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:40:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:40:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:40:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:40:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:40:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:40:25 compute-0 ceph-mon[74313]: pgmap v2962: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Oct 11 09:40:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2963: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 09:40:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:40:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3860873842' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:40:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:40:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3860873842' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:40:26 compute-0 nova_compute[260935]: 2025-10-11 09:40:26.795 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175611.7924993, 82fac090-c427-485c-98cd-ad02e839be40 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:40:26 compute-0 nova_compute[260935]: 2025-10-11 09:40:26.796 2 INFO nova.compute.manager [-] [instance: 82fac090-c427-485c-98cd-ad02e839be40] VM Stopped (Lifecycle Event)
Oct 11 09:40:26 compute-0 nova_compute[260935]: 2025-10-11 09:40:26.817 2 DEBUG nova.compute.manager [None req-9b4817d1-cd55-498b-955b-42139685e066 - - - - - -] [instance: 82fac090-c427-485c-98cd-ad02e839be40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:40:26 compute-0 nova_compute[260935]: 2025-10-11 09:40:26.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:27 compute-0 ceph-mon[74313]: pgmap v2963: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 09:40:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3860873842' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:40:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3860873842' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:40:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2964: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 09:40:29 compute-0 ceph-mon[74313]: pgmap v2964: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 09:40:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2965: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 09:40:29 compute-0 nova_compute[260935]: 2025-10-11 09:40:29.553 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "cf42187a-059a-49d4-b9c8-96786042977f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:29 compute-0 nova_compute[260935]: 2025-10-11 09:40:29.554 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:29 compute-0 nova_compute[260935]: 2025-10-11 09:40:29.572 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:40:29 compute-0 nova_compute[260935]: 2025-10-11 09:40:29.661 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:29 compute-0 nova_compute[260935]: 2025-10-11 09:40:29.661 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:29 compute-0 nova_compute[260935]: 2025-10-11 09:40:29.671 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:40:29 compute-0 nova_compute[260935]: 2025-10-11 09:40:29.672 2 INFO nova.compute.claims [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:40:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:40:29 compute-0 nova_compute[260935]: 2025-10-11 09:40:29.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:29 compute-0 nova_compute[260935]: 2025-10-11 09:40:29.843 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:40:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:40:30 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2660461439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.335 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.344 2 DEBUG nova.compute.provider_tree [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.362 2 DEBUG nova.scheduler.client.report [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.390 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.392 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.441 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.442 2 DEBUG nova.network.neutron [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.467 2 INFO nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.487 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.569 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.571 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.572 2 INFO nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Creating image(s)
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.607 2 DEBUG nova.storage.rbd_utils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cf42187a-059a-49d4-b9c8-96786042977f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.646 2 DEBUG nova.storage.rbd_utils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cf42187a-059a-49d4-b9c8-96786042977f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.677 2 DEBUG nova.storage.rbd_utils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cf42187a-059a-49d4-b9c8-96786042977f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.681 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.722 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.768 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.769 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.770 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.771 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.801 2 DEBUG nova.storage.rbd_utils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cf42187a-059a-49d4-b9c8-96786042977f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.806 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 cf42187a-059a-49d4-b9c8-96786042977f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:40:30 compute-0 nova_compute[260935]: 2025-10-11 09:40:30.866 2 DEBUG nova.policy [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '489c4d0457354f4684f8b9e53261224f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:40:31 compute-0 ceph-mon[74313]: pgmap v2965: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 597 B/s wr, 12 op/s
Oct 11 09:40:31 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2660461439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:40:31 compute-0 nova_compute[260935]: 2025-10-11 09:40:31.207 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 cf42187a-059a-49d4-b9c8-96786042977f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:40:31 compute-0 nova_compute[260935]: 2025-10-11 09:40:31.299 2 DEBUG nova.storage.rbd_utils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] resizing rbd image cf42187a-059a-49d4-b9c8-96786042977f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:40:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2966: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:40:31 compute-0 nova_compute[260935]: 2025-10-11 09:40:31.431 2 DEBUG nova.objects.instance [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'migration_context' on Instance uuid cf42187a-059a-49d4-b9c8-96786042977f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:40:31 compute-0 nova_compute[260935]: 2025-10-11 09:40:31.451 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:40:31 compute-0 nova_compute[260935]: 2025-10-11 09:40:31.452 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Ensure instance console log exists: /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:40:31 compute-0 nova_compute[260935]: 2025-10-11 09:40:31.453 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:31 compute-0 nova_compute[260935]: 2025-10-11 09:40:31.453 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:31 compute-0 nova_compute[260935]: 2025-10-11 09:40:31.454 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:31 compute-0 nova_compute[260935]: 2025-10-11 09:40:31.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:32 compute-0 nova_compute[260935]: 2025-10-11 09:40:32.249 2 DEBUG nova.network.neutron [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Successfully created port: 9f09f897-79b2-4242-ba35-4ab26732b411 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:40:33 compute-0 ceph-mon[74313]: pgmap v2966: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:40:33 compute-0 nova_compute[260935]: 2025-10-11 09:40:33.254 2 DEBUG nova.network.neutron [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Successfully updated port: 9f09f897-79b2-4242-ba35-4ab26732b411 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:40:33 compute-0 nova_compute[260935]: 2025-10-11 09:40:33.278 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:40:33 compute-0 nova_compute[260935]: 2025-10-11 09:40:33.278 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquired lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:40:33 compute-0 nova_compute[260935]: 2025-10-11 09:40:33.278 2 DEBUG nova.network.neutron [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:40:33 compute-0 nova_compute[260935]: 2025-10-11 09:40:33.329 2 DEBUG nova.compute.manager [req-9693bee0-39c7-428f-9576-552cfbc406bc req-c2de1901-d119-4acd-8a82-3e920b6e015a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-changed-9f09f897-79b2-4242-ba35-4ab26732b411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:40:33 compute-0 nova_compute[260935]: 2025-10-11 09:40:33.329 2 DEBUG nova.compute.manager [req-9693bee0-39c7-428f-9576-552cfbc406bc req-c2de1901-d119-4acd-8a82-3e920b6e015a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Refreshing instance network info cache due to event network-changed-9f09f897-79b2-4242-ba35-4ab26732b411. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:40:33 compute-0 nova_compute[260935]: 2025-10-11 09:40:33.329 2 DEBUG oslo_concurrency.lockutils [req-9693bee0-39c7-428f-9576-552cfbc406bc req-c2de1901-d119-4acd-8a82-3e920b6e015a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:40:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2967: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:40:33 compute-0 nova_compute[260935]: 2025-10-11 09:40:33.710 2 DEBUG nova.network.neutron [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.375 2 DEBUG nova.network.neutron [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Updating instance_info_cache with network_info: [{"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.397 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Releasing lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.398 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Instance network_info: |[{"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.399 2 DEBUG oslo_concurrency.lockutils [req-9693bee0-39c7-428f-9576-552cfbc406bc req-c2de1901-d119-4acd-8a82-3e920b6e015a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.400 2 DEBUG nova.network.neutron [req-9693bee0-39c7-428f-9576-552cfbc406bc req-c2de1901-d119-4acd-8a82-3e920b6e015a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Refreshing network info cache for port 9f09f897-79b2-4242-ba35-4ab26732b411 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.406 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Start _get_guest_xml network_info=[{"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.420 2 WARNING nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.432 2 DEBUG nova.virt.libvirt.host [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.433 2 DEBUG nova.virt.libvirt.host [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.441 2 DEBUG nova.virt.libvirt.host [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.442 2 DEBUG nova.virt.libvirt.host [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.443 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.444 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.446 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.447 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.448 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.448 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.449 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.449 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.450 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.451 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.451 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.452 2 DEBUG nova.virt.hardware [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.458 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:40:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:40:34 compute-0 nova_compute[260935]: 2025-10-11 09:40:34.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:40:35 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1064095526' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.018 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.056 2 DEBUG nova.storage.rbd_utils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cf42187a-059a-49d4-b9c8-96786042977f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.062 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:40:35 compute-0 ceph-mon[74313]: pgmap v2967: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:40:35 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1064095526' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:40:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2968: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:40:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:40:35 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1853872656' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.557 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.561 2 DEBUG nova.virt.libvirt.vif [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:40:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1885053733',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1885053733',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=146,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJ2jJOMdFj9GKBYjHKBvmwlAaXSeJgKxd7Y5C0D5IYgR/hyyjjmJiPLE03H6XYiIWX3QUpUmhg6y0ka6+hRs35VkER6t8ZYYcl+MQfyMjAvRZwHFSUWgwuABpKNZ9OROA==',key_name='tempest-TestSecurityGroupsBasicOps-1246308253',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-8zvi2y9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:40:30Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=cf42187a-059a-49d4-b9c8-96786042977f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.561 2 DEBUG nova.network.os_vif_util [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.563 2 DEBUG nova.network.os_vif_util [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:42:d3,bridge_name='br-int',has_traffic_filtering=True,id=9f09f897-79b2-4242-ba35-4ab26732b411,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f09f897-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.566 2 DEBUG nova.objects.instance [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf42187a-059a-49d4-b9c8-96786042977f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.584 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:40:35 compute-0 nova_compute[260935]:   <uuid>cf42187a-059a-49d4-b9c8-96786042977f</uuid>
Oct 11 09:40:35 compute-0 nova_compute[260935]:   <name>instance-00000092</name>
Oct 11 09:40:35 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:40:35 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:40:35 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1885053733</nova:name>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:40:34</nova:creationTime>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:40:35 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:40:35 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:40:35 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:40:35 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:40:35 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:40:35 compute-0 nova_compute[260935]:         <nova:user uuid="489c4d0457354f4684f8b9e53261224f">tempest-TestSecurityGroupsBasicOps-607770139-project-member</nova:user>
Oct 11 09:40:35 compute-0 nova_compute[260935]:         <nova:project uuid="81e7096f23df4e7d8782cf98d09d54e9">tempest-TestSecurityGroupsBasicOps-607770139</nova:project>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:40:35 compute-0 nova_compute[260935]:         <nova:port uuid="9f09f897-79b2-4242-ba35-4ab26732b411">
Oct 11 09:40:35 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:40:35 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:40:35 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <system>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <entry name="serial">cf42187a-059a-49d4-b9c8-96786042977f</entry>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <entry name="uuid">cf42187a-059a-49d4-b9c8-96786042977f</entry>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     </system>
Oct 11 09:40:35 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:40:35 compute-0 nova_compute[260935]:   <os>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:   </os>
Oct 11 09:40:35 compute-0 nova_compute[260935]:   <features>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:   </features>
Oct 11 09:40:35 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:40:35 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:40:35 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/cf42187a-059a-49d4-b9c8-96786042977f_disk">
Oct 11 09:40:35 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       </source>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:40:35 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/cf42187a-059a-49d4-b9c8-96786042977f_disk.config">
Oct 11 09:40:35 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       </source>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:40:35 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:f9:42:d3"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <target dev="tap9f09f897-79"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f/console.log" append="off"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <video>
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     </video>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:40:35 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:40:35 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:40:35 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:40:35 compute-0 nova_compute[260935]: </domain>
Oct 11 09:40:35 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.587 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Preparing to wait for external event network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.588 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "cf42187a-059a-49d4-b9c8-96786042977f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.588 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.589 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.590 2 DEBUG nova.virt.libvirt.vif [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:40:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1885053733',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1885053733',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=146,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJ2jJOMdFj9GKBYjHKBvmwlAaXSeJgKxd7Y5C0D5IYgR/hyyjjmJiPLE03H6XYiIWX3QUpUmhg6y0ka6+hRs35VkER6t8ZYYcl+MQfyMjAvRZwHFSUWgwuABpKNZ9OROA==',key_name='tempest-TestSecurityGroupsBasicOps-1246308253',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-8zvi2y9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:40:30Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=cf42187a-059a-49d4-b9c8-96786042977f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.591 2 DEBUG nova.network.os_vif_util [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.592 2 DEBUG nova.network.os_vif_util [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:42:d3,bridge_name='br-int',has_traffic_filtering=True,id=9f09f897-79b2-4242-ba35-4ab26732b411,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f09f897-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.593 2 DEBUG os_vif [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:42:d3,bridge_name='br-int',has_traffic_filtering=True,id=9f09f897-79b2-4242-ba35-4ab26732b411,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f09f897-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.603 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.604 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.609 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f09f897-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.610 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9f09f897-79, col_values=(('external_ids', {'iface-id': '9f09f897-79b2-4242-ba35-4ab26732b411', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:42:d3', 'vm-uuid': 'cf42187a-059a-49d4-b9c8-96786042977f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:35 compute-0 NetworkManager[44960]: <info>  [1760175635.6143] manager: (tap9f09f897-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/629)
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.621 2 INFO os_vif [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:42:d3,bridge_name='br-int',has_traffic_filtering=True,id=9f09f897-79b2-4242-ba35-4ab26732b411,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f09f897-79')
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.681 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.682 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.682 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No VIF found with MAC fa:16:3e:f9:42:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.683 2 INFO nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Using config drive
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.721 2 DEBUG nova.storage.rbd_utils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cf42187a-059a-49d4-b9c8-96786042977f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:40:35 compute-0 podman[424762]: 2025-10-11 09:40:35.7309911 +0000 UTC m=+0.071904638 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.767 2 DEBUG nova.network.neutron [req-9693bee0-39c7-428f-9576-552cfbc406bc req-c2de1901-d119-4acd-8a82-3e920b6e015a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Updated VIF entry in instance network info cache for port 9f09f897-79b2-4242-ba35-4ab26732b411. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.768 2 DEBUG nova.network.neutron [req-9693bee0-39c7-428f-9576-552cfbc406bc req-c2de1901-d119-4acd-8a82-3e920b6e015a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Updating instance_info_cache with network_info: [{"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:40:35 compute-0 nova_compute[260935]: 2025-10-11 09:40:35.783 2 DEBUG oslo_concurrency.lockutils [req-9693bee0-39c7-428f-9576-552cfbc406bc req-c2de1901-d119-4acd-8a82-3e920b6e015a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:40:36 compute-0 nova_compute[260935]: 2025-10-11 09:40:36.119 2 INFO nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Creating config drive at /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f/disk.config
Oct 11 09:40:36 compute-0 nova_compute[260935]: 2025-10-11 09:40:36.127 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi97ndg0p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:40:36 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1853872656' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:40:36 compute-0 nova_compute[260935]: 2025-10-11 09:40:36.281 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi97ndg0p" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:40:36 compute-0 nova_compute[260935]: 2025-10-11 09:40:36.316 2 DEBUG nova.storage.rbd_utils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image cf42187a-059a-49d4-b9c8-96786042977f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:40:36 compute-0 nova_compute[260935]: 2025-10-11 09:40:36.320 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f/disk.config cf42187a-059a-49d4-b9c8-96786042977f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:40:36 compute-0 nova_compute[260935]: 2025-10-11 09:40:36.527 2 DEBUG oslo_concurrency.processutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f/disk.config cf42187a-059a-49d4-b9c8-96786042977f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:40:36 compute-0 nova_compute[260935]: 2025-10-11 09:40:36.529 2 INFO nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Deleting local config drive /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f/disk.config because it was imported into RBD.
Oct 11 09:40:36 compute-0 NetworkManager[44960]: <info>  [1760175636.6077] manager: (tap9f09f897-79): new Tun device (/org/freedesktop/NetworkManager/Devices/630)
Oct 11 09:40:36 compute-0 kernel: tap9f09f897-79: entered promiscuous mode
Oct 11 09:40:36 compute-0 ovn_controller[152945]: 2025-10-11T09:40:36Z|01657|binding|INFO|Claiming lport 9f09f897-79b2-4242-ba35-4ab26732b411 for this chassis.
Oct 11 09:40:36 compute-0 ovn_controller[152945]: 2025-10-11T09:40:36Z|01658|binding|INFO|9f09f897-79b2-4242-ba35-4ab26732b411: Claiming fa:16:3e:f9:42:d3 10.100.0.14
Oct 11 09:40:36 compute-0 nova_compute[260935]: 2025-10-11 09:40:36.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.638 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:42:d3 10.100.0.14'], port_security=['fa:16:3e:f9:42:d3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'cf42187a-059a-49d4-b9c8-96786042977f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09004472-a3cd-4396-b377-505fc9433156 ece05fde-852d-4083-a996-a2c46d1ad9c2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d19e995-d8fe-4294-b195-20de1c881c60, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=9f09f897-79b2-4242-ba35-4ab26732b411) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.640 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 9f09f897-79b2-4242-ba35-4ab26732b411 in datapath fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 bound to our chassis
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.643 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbb66f3b-0d2f-44df-a1ff-0fbd9c571596
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.660 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6a1cde3e-4270-4402-a3e9-f786b28b77b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.661 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfbb66f3b-01 in ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:40:36 compute-0 systemd-machined[215705]: New machine qemu-170-instance-00000092.
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.666 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfbb66f3b-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.666 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d149be25-ceb9-4c96-8913-0c30c4b97bb9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.669 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d1eab10b-f836-46e9-864d-a9b97e25b525]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.688 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1188c158-7213-428c-9afa-fcd155c91411]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:36 compute-0 systemd[1]: Started Virtual Machine qemu-170-instance-00000092.
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.712 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[03e66ba9-8b91-49c4-bdb9-16f9ef035d11]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:36 compute-0 systemd-udevd[424852]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:40:36 compute-0 nova_compute[260935]: 2025-10-11 09:40:36.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:36 compute-0 ovn_controller[152945]: 2025-10-11T09:40:36Z|01659|binding|INFO|Setting lport 9f09f897-79b2-4242-ba35-4ab26732b411 ovn-installed in OVS
Oct 11 09:40:36 compute-0 ovn_controller[152945]: 2025-10-11T09:40:36Z|01660|binding|INFO|Setting lport 9f09f897-79b2-4242-ba35-4ab26732b411 up in Southbound
Oct 11 09:40:36 compute-0 nova_compute[260935]: 2025-10-11 09:40:36.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:36 compute-0 NetworkManager[44960]: <info>  [1760175636.7357] device (tap9f09f897-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:40:36 compute-0 NetworkManager[44960]: <info>  [1760175636.7366] device (tap9f09f897-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.761 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[b4685214-8ad0-44ec-9865-1eeeb4498d17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:36 compute-0 NetworkManager[44960]: <info>  [1760175636.7705] manager: (tapfbb66f3b-00): new Veth device (/org/freedesktop/NetworkManager/Devices/631)
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.769 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[98b5f0de-4e5d-4254-81a7-930bf15382b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.818 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[100d498d-63fd-48cd-81d3-c7ecbc87787d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.822 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[4406d6da-f697-49cd-b3fc-c8282790853e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:36 compute-0 NetworkManager[44960]: <info>  [1760175636.8527] device (tapfbb66f3b-00): carrier: link connected
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.857 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[dc276785-6436-4d2b-9b8b-70549633c896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.880 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8cb72d-f355-40d7-ba91-1c421f094ce3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbb66f3b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:6d:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748155, 'reachable_time': 34340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424882, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.898 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[884e3daa-9d1f-455b-b011-c2192833aa8e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:6dd0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 748155, 'tstamp': 748155}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424883, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.919 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[23d53aca-4fed-44f1-b808-cd774d1fe7e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbb66f3b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:6d:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748155, 'reachable_time': 34340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 424884, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:36 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:36.957 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0228b9dc-ce6f-41dc-a5d4-e9658d9ef416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.041 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[89172be3-f17b-4a02-9dc5-34eeb83ea24d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.043 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbb66f3b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.044 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.045 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbb66f3b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:40:37 compute-0 nova_compute[260935]: 2025-10-11 09:40:37.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:37 compute-0 NetworkManager[44960]: <info>  [1760175637.0488] manager: (tapfbb66f3b-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/632)
Oct 11 09:40:37 compute-0 kernel: tapfbb66f3b-00: entered promiscuous mode
Oct 11 09:40:37 compute-0 nova_compute[260935]: 2025-10-11 09:40:37.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.053 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbb66f3b-00, col_values=(('external_ids', {'iface-id': '3bca8f57-1f39-42d4-90ee-6695770f4d39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:40:37 compute-0 nova_compute[260935]: 2025-10-11 09:40:37.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:37 compute-0 ovn_controller[152945]: 2025-10-11T09:40:37Z|01661|binding|INFO|Releasing lport 3bca8f57-1f39-42d4-90ee-6695770f4d39 from this chassis (sb_readonly=0)
Oct 11 09:40:37 compute-0 nova_compute[260935]: 2025-10-11 09:40:37.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.075 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fbb66f3b-0d2f-44df-a1ff-0fbd9c571596.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fbb66f3b-0d2f-44df-a1ff-0fbd9c571596.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.076 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7d252bd9-ee12-40f1-bc29-541036c8c9b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.077 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/fbb66f3b-0d2f-44df-a1ff-0fbd9c571596.pid.haproxy
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID fbb66f3b-0d2f-44df-a1ff-0fbd9c571596
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:40:37 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:40:37.079 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'env', 'PROCESS_TAG=haproxy-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fbb66f3b-0d2f-44df-a1ff-0fbd9c571596.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:40:37 compute-0 ceph-mon[74313]: pgmap v2968: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:40:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2969: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:40:37 compute-0 podman[424958]: 2025-10-11 09:40:37.535692677 +0000 UTC m=+0.072014081 container create 2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 11 09:40:37 compute-0 systemd[1]: Started libpod-conmon-2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d.scope.
Oct 11 09:40:37 compute-0 podman[424958]: 2025-10-11 09:40:37.502791594 +0000 UTC m=+0.039113098 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:40:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:40:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cdf0e2af4d95bd9ffa8a3118a9662388e3229f9a077ce124d6a46ddcadbaf59/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:40:37 compute-0 podman[424958]: 2025-10-11 09:40:37.651724631 +0000 UTC m=+0.188046045 container init 2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:40:37 compute-0 podman[424958]: 2025-10-11 09:40:37.656507765 +0000 UTC m=+0.192829169 container start 2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 11 09:40:37 compute-0 neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596[424973]: [NOTICE]   (424977) : New worker (424979) forked
Oct 11 09:40:37 compute-0 neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596[424973]: [NOTICE]   (424977) : Loading success.
Oct 11 09:40:37 compute-0 nova_compute[260935]: 2025-10-11 09:40:37.766 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175637.7664146, cf42187a-059a-49d4-b9c8-96786042977f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:40:37 compute-0 nova_compute[260935]: 2025-10-11 09:40:37.767 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] VM Started (Lifecycle Event)
Oct 11 09:40:37 compute-0 nova_compute[260935]: 2025-10-11 09:40:37.788 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:40:37 compute-0 nova_compute[260935]: 2025-10-11 09:40:37.794 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175637.7675128, cf42187a-059a-49d4-b9c8-96786042977f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:40:37 compute-0 nova_compute[260935]: 2025-10-11 09:40:37.794 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] VM Paused (Lifecycle Event)
Oct 11 09:40:37 compute-0 nova_compute[260935]: 2025-10-11 09:40:37.813 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:40:37 compute-0 nova_compute[260935]: 2025-10-11 09:40:37.817 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:40:37 compute-0 nova_compute[260935]: 2025-10-11 09:40:37.838 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.231 2 DEBUG nova.compute.manager [req-30e1b532-c2d3-4b11-87ed-3fcee473cf46 req-59d161fb-83c4-4765-83b5-b2b15d5b5bcf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.232 2 DEBUG oslo_concurrency.lockutils [req-30e1b532-c2d3-4b11-87ed-3fcee473cf46 req-59d161fb-83c4-4765-83b5-b2b15d5b5bcf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cf42187a-059a-49d4-b9c8-96786042977f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.232 2 DEBUG oslo_concurrency.lockutils [req-30e1b532-c2d3-4b11-87ed-3fcee473cf46 req-59d161fb-83c4-4765-83b5-b2b15d5b5bcf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.233 2 DEBUG oslo_concurrency.lockutils [req-30e1b532-c2d3-4b11-87ed-3fcee473cf46 req-59d161fb-83c4-4765-83b5-b2b15d5b5bcf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.233 2 DEBUG nova.compute.manager [req-30e1b532-c2d3-4b11-87ed-3fcee473cf46 req-59d161fb-83c4-4765-83b5-b2b15d5b5bcf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Processing event network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.235 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.240 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175638.239727, cf42187a-059a-49d4-b9c8-96786042977f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.241 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] VM Resumed (Lifecycle Event)
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.246 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.251 2 INFO nova.virt.libvirt.driver [-] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Instance spawned successfully.
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.252 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.263 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.273 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.281 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.282 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.283 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.283 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.284 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.284 2 DEBUG nova.virt.libvirt.driver [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.293 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.341 2 INFO nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Took 7.77 seconds to spawn the instance on the hypervisor.
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.341 2 DEBUG nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.408 2 INFO nova.compute.manager [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Took 8.78 seconds to build instance.
Oct 11 09:40:38 compute-0 nova_compute[260935]: 2025-10-11 09:40:38.432 2 DEBUG oslo_concurrency.lockutils [None req-a034e9f8-7335-4e06-b0a3-7f9268b121d8 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:39 compute-0 ceph-mon[74313]: pgmap v2969: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:40:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2970: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:40:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:40:39 compute-0 nova_compute[260935]: 2025-10-11 09:40:39.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:40 compute-0 nova_compute[260935]: 2025-10-11 09:40:40.316 2 DEBUG nova.compute.manager [req-5a558e1c-8cf8-4e32-b190-b4cfb53baf2b req-931ba1e5-f68c-4f7d-a1d2-bda2f59ef1ce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:40:40 compute-0 nova_compute[260935]: 2025-10-11 09:40:40.317 2 DEBUG oslo_concurrency.lockutils [req-5a558e1c-8cf8-4e32-b190-b4cfb53baf2b req-931ba1e5-f68c-4f7d-a1d2-bda2f59ef1ce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cf42187a-059a-49d4-b9c8-96786042977f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:40 compute-0 nova_compute[260935]: 2025-10-11 09:40:40.317 2 DEBUG oslo_concurrency.lockutils [req-5a558e1c-8cf8-4e32-b190-b4cfb53baf2b req-931ba1e5-f68c-4f7d-a1d2-bda2f59ef1ce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:40 compute-0 nova_compute[260935]: 2025-10-11 09:40:40.318 2 DEBUG oslo_concurrency.lockutils [req-5a558e1c-8cf8-4e32-b190-b4cfb53baf2b req-931ba1e5-f68c-4f7d-a1d2-bda2f59ef1ce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:40 compute-0 nova_compute[260935]: 2025-10-11 09:40:40.318 2 DEBUG nova.compute.manager [req-5a558e1c-8cf8-4e32-b190-b4cfb53baf2b req-931ba1e5-f68c-4f7d-a1d2-bda2f59ef1ce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] No waiting events found dispatching network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:40:40 compute-0 nova_compute[260935]: 2025-10-11 09:40:40.318 2 WARNING nova.compute.manager [req-5a558e1c-8cf8-4e32-b190-b4cfb53baf2b req-931ba1e5-f68c-4f7d-a1d2-bda2f59ef1ce e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received unexpected event network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 for instance with vm_state active and task_state None.
Oct 11 09:40:40 compute-0 nova_compute[260935]: 2025-10-11 09:40:40.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:41 compute-0 ceph-mon[74313]: pgmap v2970: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:40:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2971: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:40:41 compute-0 ovn_controller[152945]: 2025-10-11T09:40:41Z|01662|binding|INFO|Releasing lport 3bca8f57-1f39-42d4-90ee-6695770f4d39 from this chassis (sb_readonly=0)
Oct 11 09:40:41 compute-0 ovn_controller[152945]: 2025-10-11T09:40:41Z|01663|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:40:41 compute-0 ovn_controller[152945]: 2025-10-11T09:40:41Z|01664|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:40:41 compute-0 NetworkManager[44960]: <info>  [1760175641.8901] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/633)
Oct 11 09:40:41 compute-0 NetworkManager[44960]: <info>  [1760175641.8922] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/634)
Oct 11 09:40:41 compute-0 nova_compute[260935]: 2025-10-11 09:40:41.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:41 compute-0 ovn_controller[152945]: 2025-10-11T09:40:41Z|01665|binding|INFO|Releasing lport 3bca8f57-1f39-42d4-90ee-6695770f4d39 from this chassis (sb_readonly=0)
Oct 11 09:40:41 compute-0 ovn_controller[152945]: 2025-10-11T09:40:41Z|01666|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:40:41 compute-0 ovn_controller[152945]: 2025-10-11T09:40:41Z|01667|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:40:41 compute-0 nova_compute[260935]: 2025-10-11 09:40:41.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:41 compute-0 nova_compute[260935]: 2025-10-11 09:40:41.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:42 compute-0 nova_compute[260935]: 2025-10-11 09:40:42.216 2 DEBUG nova.compute.manager [req-20dd7588-773f-48dc-9ba0-f6755fa1b593 req-325fcfbb-2d35-46f5-93b1-6de270bbc839 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-changed-9f09f897-79b2-4242-ba35-4ab26732b411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:40:42 compute-0 nova_compute[260935]: 2025-10-11 09:40:42.217 2 DEBUG nova.compute.manager [req-20dd7588-773f-48dc-9ba0-f6755fa1b593 req-325fcfbb-2d35-46f5-93b1-6de270bbc839 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Refreshing instance network info cache due to event network-changed-9f09f897-79b2-4242-ba35-4ab26732b411. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:40:42 compute-0 nova_compute[260935]: 2025-10-11 09:40:42.217 2 DEBUG oslo_concurrency.lockutils [req-20dd7588-773f-48dc-9ba0-f6755fa1b593 req-325fcfbb-2d35-46f5-93b1-6de270bbc839 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:40:42 compute-0 nova_compute[260935]: 2025-10-11 09:40:42.217 2 DEBUG oslo_concurrency.lockutils [req-20dd7588-773f-48dc-9ba0-f6755fa1b593 req-325fcfbb-2d35-46f5-93b1-6de270bbc839 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:40:42 compute-0 nova_compute[260935]: 2025-10-11 09:40:42.217 2 DEBUG nova.network.neutron [req-20dd7588-773f-48dc-9ba0-f6755fa1b593 req-325fcfbb-2d35-46f5-93b1-6de270bbc839 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Refreshing network info cache for port 9f09f897-79b2-4242-ba35-4ab26732b411 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:40:42 compute-0 podman[424989]: 2025-10-11 09:40:42.788852696 +0000 UTC m=+0.082330730 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 11 09:40:43 compute-0 ceph-mon[74313]: pgmap v2971: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:40:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2972: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:40:43 compute-0 nova_compute[260935]: 2025-10-11 09:40:43.744 2 DEBUG nova.network.neutron [req-20dd7588-773f-48dc-9ba0-f6755fa1b593 req-325fcfbb-2d35-46f5-93b1-6de270bbc839 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Updated VIF entry in instance network info cache for port 9f09f897-79b2-4242-ba35-4ab26732b411. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:40:43 compute-0 nova_compute[260935]: 2025-10-11 09:40:43.745 2 DEBUG nova.network.neutron [req-20dd7588-773f-48dc-9ba0-f6755fa1b593 req-325fcfbb-2d35-46f5-93b1-6de270bbc839 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Updating instance_info_cache with network_info: [{"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:40:43 compute-0 nova_compute[260935]: 2025-10-11 09:40:43.765 2 DEBUG oslo_concurrency.lockutils [req-20dd7588-773f-48dc-9ba0-f6755fa1b593 req-325fcfbb-2d35-46f5-93b1-6de270bbc839 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:40:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:40:44 compute-0 nova_compute[260935]: 2025-10-11 09:40:44.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:45 compute-0 sudo[425009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:40:45 compute-0 sudo[425009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:45 compute-0 sudo[425009]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:45 compute-0 ceph-mon[74313]: pgmap v2972: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:40:45 compute-0 sudo[425034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:40:45 compute-0 sudo[425034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:45 compute-0 sudo[425034]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2973: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:40:45 compute-0 sudo[425059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:40:45 compute-0 sudo[425059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:45 compute-0 sudo[425059]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:45 compute-0 sudo[425084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:40:45 compute-0 sudo[425084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:45 compute-0 nova_compute[260935]: 2025-10-11 09:40:45.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:46 compute-0 sudo[425084]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:40:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:40:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:40:46 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:40:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:40:46 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:40:46 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 7e92c195-4a34-4e8f-ab89-4037c8511fae does not exist
Oct 11 09:40:46 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 1b637b65-91e5-450a-8f31-f539c0d0dbc9 does not exist
Oct 11 09:40:46 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ca33e817-cc8c-4916-834d-bc44af9609a3 does not exist
Oct 11 09:40:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:40:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:40:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:40:46 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:40:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:40:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:40:46 compute-0 sudo[425139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:40:46 compute-0 sudo[425139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:46 compute-0 sudo[425139]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:40:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:40:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:40:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:40:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:40:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:40:46 compute-0 sudo[425164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:40:46 compute-0 sudo[425164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:46 compute-0 sudo[425164]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:46 compute-0 sudo[425191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:40:46 compute-0 sudo[425191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:46 compute-0 sudo[425191]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:46 compute-0 sudo[425216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:40:46 compute-0 sudo[425216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:46 compute-0 sshd-session[425168]: Invalid user mysql from 165.232.82.252 port 50624
Oct 11 09:40:46 compute-0 sshd-session[425168]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:40:46 compute-0 sshd-session[425168]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:40:46 compute-0 podman[425281]: 2025-10-11 09:40:46.970946962 +0000 UTC m=+0.047831383 container create 9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:40:47 compute-0 podman[425281]: 2025-10-11 09:40:46.946864946 +0000 UTC m=+0.023749357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:40:47 compute-0 systemd[1]: Started libpod-conmon-9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4.scope.
Oct 11 09:40:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:40:47 compute-0 podman[425281]: 2025-10-11 09:40:47.115796685 +0000 UTC m=+0.192681106 container init 9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:40:47 compute-0 podman[425295]: 2025-10-11 09:40:47.120847906 +0000 UTC m=+0.094363027 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 09:40:47 compute-0 podman[425281]: 2025-10-11 09:40:47.126023111 +0000 UTC m=+0.202907512 container start 9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 09:40:47 compute-0 podman[425281]: 2025-10-11 09:40:47.12990629 +0000 UTC m=+0.206790691 container attach 9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 09:40:47 compute-0 systemd[1]: libpod-9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4.scope: Deactivated successfully.
Oct 11 09:40:47 compute-0 stupefied_curie[425318]: 167 167
Oct 11 09:40:47 compute-0 conmon[425318]: conmon 9626556a2acbbe2824ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4.scope/container/memory.events
Oct 11 09:40:47 compute-0 podman[425299]: 2025-10-11 09:40:47.153223374 +0000 UTC m=+0.126160889 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:40:47 compute-0 podman[425346]: 2025-10-11 09:40:47.17588399 +0000 UTC m=+0.025829856 container died 9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 09:40:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-19f17a25cbae9d6f8a3e97fe01d72cde6bedf5b20efc568634a1ab9f6f533842-merged.mount: Deactivated successfully.
Oct 11 09:40:47 compute-0 podman[425346]: 2025-10-11 09:40:47.2143991 +0000 UTC m=+0.064344956 container remove 9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:40:47 compute-0 systemd[1]: libpod-conmon-9626556a2acbbe2824aebdc8596726bff5661c3580e14e45e82877d352403fe4.scope: Deactivated successfully.
Oct 11 09:40:47 compute-0 ceph-mon[74313]: pgmap v2973: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:40:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2974: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:40:47 compute-0 podman[425370]: 2025-10-11 09:40:47.420904702 +0000 UTC m=+0.054001645 container create e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:40:47 compute-0 systemd[1]: Started libpod-conmon-e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f.scope.
Oct 11 09:40:47 compute-0 podman[425370]: 2025-10-11 09:40:47.396204449 +0000 UTC m=+0.029301482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:40:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910949cea1c5127ed6d2cdd94fdb240d5cf4a49db364b46fdbd0674828e87fec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910949cea1c5127ed6d2cdd94fdb240d5cf4a49db364b46fdbd0674828e87fec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910949cea1c5127ed6d2cdd94fdb240d5cf4a49db364b46fdbd0674828e87fec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910949cea1c5127ed6d2cdd94fdb240d5cf4a49db364b46fdbd0674828e87fec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910949cea1c5127ed6d2cdd94fdb240d5cf4a49db364b46fdbd0674828e87fec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:40:47 compute-0 podman[425370]: 2025-10-11 09:40:47.533901501 +0000 UTC m=+0.166998484 container init e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 09:40:47 compute-0 podman[425370]: 2025-10-11 09:40:47.545998481 +0000 UTC m=+0.179095424 container start e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 09:40:47 compute-0 podman[425370]: 2025-10-11 09:40:47.552881924 +0000 UTC m=+0.185978907 container attach e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:40:48 compute-0 optimistic_pare[425387]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:40:48 compute-0 optimistic_pare[425387]: --> relative data size: 1.0
Oct 11 09:40:48 compute-0 optimistic_pare[425387]: --> All data devices are unavailable
Oct 11 09:40:48 compute-0 nova_compute[260935]: 2025-10-11 09:40:48.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:40:48 compute-0 nova_compute[260935]: 2025-10-11 09:40:48.706 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:40:48 compute-0 systemd[1]: libpod-e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f.scope: Deactivated successfully.
Oct 11 09:40:48 compute-0 systemd[1]: libpod-e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f.scope: Consumed 1.121s CPU time.
Oct 11 09:40:48 compute-0 sshd-session[425168]: Failed password for invalid user mysql from 165.232.82.252 port 50624 ssh2
Oct 11 09:40:48 compute-0 podman[425417]: 2025-10-11 09:40:48.836525787 +0000 UTC m=+0.039397886 container died e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pare, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:40:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-910949cea1c5127ed6d2cdd94fdb240d5cf4a49db364b46fdbd0674828e87fec-merged.mount: Deactivated successfully.
Oct 11 09:40:48 compute-0 podman[425417]: 2025-10-11 09:40:48.924761482 +0000 UTC m=+0.127633541 container remove e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pare, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:40:48 compute-0 systemd[1]: libpod-conmon-e1f9e7b59ab78b0ef548f13bf7ae3f67dea5e799e584c3416138891cf31fe16f.scope: Deactivated successfully.
Oct 11 09:40:48 compute-0 sudo[425216]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:49 compute-0 sudo[425432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:40:49 compute-0 sudo[425432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:49 compute-0 sudo[425432]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:49 compute-0 sudo[425457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:40:49 compute-0 sudo[425457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:49 compute-0 sudo[425457]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:49 compute-0 sudo[425482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:40:49 compute-0 sudo[425482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:49 compute-0 sudo[425482]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:49 compute-0 sudo[425507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:40:49 compute-0 sudo[425507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:49 compute-0 ceph-mon[74313]: pgmap v2974: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:40:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2975: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:40:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:40:49 compute-0 ovn_controller[152945]: 2025-10-11T09:40:49Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f9:42:d3 10.100.0.14
Oct 11 09:40:49 compute-0 nova_compute[260935]: 2025-10-11 09:40:49.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:49 compute-0 sshd-session[425168]: Connection closed by invalid user mysql 165.232.82.252 port 50624 [preauth]
Oct 11 09:40:49 compute-0 ovn_controller[152945]: 2025-10-11T09:40:49Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f9:42:d3 10.100.0.14
Oct 11 09:40:49 compute-0 podman[425574]: 2025-10-11 09:40:49.800603717 +0000 UTC m=+0.114865323 container create f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:40:49 compute-0 podman[425574]: 2025-10-11 09:40:49.715075238 +0000 UTC m=+0.029336834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:40:49 compute-0 systemd[1]: Started libpod-conmon-f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40.scope.
Oct 11 09:40:49 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:40:49 compute-0 podman[425574]: 2025-10-11 09:40:49.927308461 +0000 UTC m=+0.241570067 container init f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 09:40:49 compute-0 podman[425574]: 2025-10-11 09:40:49.939771431 +0000 UTC m=+0.254033027 container start f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lichterman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:40:49 compute-0 laughing_lichterman[425590]: 167 167
Oct 11 09:40:49 compute-0 podman[425574]: 2025-10-11 09:40:49.946624683 +0000 UTC m=+0.260886269 container attach f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 09:40:49 compute-0 systemd[1]: libpod-f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40.scope: Deactivated successfully.
Oct 11 09:40:49 compute-0 podman[425574]: 2025-10-11 09:40:49.947892588 +0000 UTC m=+0.262154174 container died f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:40:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-005f8da06175772deb0485bce16cd28fb9a59916db499df094a53ab8453f696e-merged.mount: Deactivated successfully.
Oct 11 09:40:50 compute-0 podman[425574]: 2025-10-11 09:40:50.246139864 +0000 UTC m=+0.560401430 container remove f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lichterman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:40:50 compute-0 systemd[1]: libpod-conmon-f6e18345d3452229342959c1d17991edd3dbc699172b331e6700093b4b59fc40.scope: Deactivated successfully.
Oct 11 09:40:50 compute-0 podman[425615]: 2025-10-11 09:40:50.522065152 +0000 UTC m=+0.040955320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:40:50 compute-0 podman[425615]: 2025-10-11 09:40:50.614725411 +0000 UTC m=+0.133615569 container create 6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:40:50 compute-0 nova_compute[260935]: 2025-10-11 09:40:50.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:50 compute-0 systemd[1]: Started libpod-conmon-6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193.scope.
Oct 11 09:40:50 compute-0 nova_compute[260935]: 2025-10-11 09:40:50.706 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:40:50 compute-0 nova_compute[260935]: 2025-10-11 09:40:50.706 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:40:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c53e9148a5b249753869cc3dc2819875adb5344ebd66e2dada43832de9aafe19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c53e9148a5b249753869cc3dc2819875adb5344ebd66e2dada43832de9aafe19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c53e9148a5b249753869cc3dc2819875adb5344ebd66e2dada43832de9aafe19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c53e9148a5b249753869cc3dc2819875adb5344ebd66e2dada43832de9aafe19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:40:50 compute-0 nova_compute[260935]: 2025-10-11 09:40:50.729 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 11 09:40:50 compute-0 podman[425615]: 2025-10-11 09:40:50.747590267 +0000 UTC m=+0.266480495 container init 6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:40:50 compute-0 podman[425615]: 2025-10-11 09:40:50.762132645 +0000 UTC m=+0.281022803 container start 6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 11 09:40:50 compute-0 podman[425615]: 2025-10-11 09:40:50.849058783 +0000 UTC m=+0.367949021 container attach 6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 09:40:51 compute-0 ceph-mon[74313]: pgmap v2975: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:40:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2976: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:40:51 compute-0 sad_shtern[425632]: {
Oct 11 09:40:51 compute-0 sad_shtern[425632]:     "0": [
Oct 11 09:40:51 compute-0 sad_shtern[425632]:         {
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "devices": [
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "/dev/loop3"
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             ],
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "lv_name": "ceph_lv0",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "lv_size": "21470642176",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "name": "ceph_lv0",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "tags": {
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.cluster_name": "ceph",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.crush_device_class": "",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.encrypted": "0",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.osd_id": "0",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.type": "block",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.vdo": "0"
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             },
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "type": "block",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "vg_name": "ceph_vg0"
Oct 11 09:40:51 compute-0 sad_shtern[425632]:         }
Oct 11 09:40:51 compute-0 sad_shtern[425632]:     ],
Oct 11 09:40:51 compute-0 sad_shtern[425632]:     "1": [
Oct 11 09:40:51 compute-0 sad_shtern[425632]:         {
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "devices": [
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "/dev/loop4"
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             ],
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "lv_name": "ceph_lv1",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "lv_size": "21470642176",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "name": "ceph_lv1",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "tags": {
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.cluster_name": "ceph",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.crush_device_class": "",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.encrypted": "0",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.osd_id": "1",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.type": "block",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.vdo": "0"
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             },
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "type": "block",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "vg_name": "ceph_vg1"
Oct 11 09:40:51 compute-0 sad_shtern[425632]:         }
Oct 11 09:40:51 compute-0 sad_shtern[425632]:     ],
Oct 11 09:40:51 compute-0 sad_shtern[425632]:     "2": [
Oct 11 09:40:51 compute-0 sad_shtern[425632]:         {
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "devices": [
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "/dev/loop5"
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             ],
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "lv_name": "ceph_lv2",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "lv_size": "21470642176",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "name": "ceph_lv2",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "tags": {
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.cluster_name": "ceph",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.crush_device_class": "",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.encrypted": "0",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.osd_id": "2",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.type": "block",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:                 "ceph.vdo": "0"
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             },
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "type": "block",
Oct 11 09:40:51 compute-0 sad_shtern[425632]:             "vg_name": "ceph_vg2"
Oct 11 09:40:51 compute-0 sad_shtern[425632]:         }
Oct 11 09:40:51 compute-0 sad_shtern[425632]:     ]
Oct 11 09:40:51 compute-0 sad_shtern[425632]: }
Oct 11 09:40:51 compute-0 systemd[1]: libpod-6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193.scope: Deactivated successfully.
Oct 11 09:40:51 compute-0 podman[425615]: 2025-10-11 09:40:51.563408629 +0000 UTC m=+1.082298807 container died 6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 09:40:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c53e9148a5b249753869cc3dc2819875adb5344ebd66e2dada43832de9aafe19-merged.mount: Deactivated successfully.
Oct 11 09:40:51 compute-0 podman[425615]: 2025-10-11 09:40:51.638185076 +0000 UTC m=+1.157075264 container remove 6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 09:40:51 compute-0 systemd[1]: libpod-conmon-6a61ff2635e136c7494bed64d015b80c98f75bd900f82883c52f90a1e6cc1193.scope: Deactivated successfully.
Oct 11 09:40:51 compute-0 sudo[425507]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:51 compute-0 sudo[425652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:40:51 compute-0 sudo[425652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:51 compute-0 sudo[425652]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:51 compute-0 sudo[425678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:40:51 compute-0 sudo[425678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:51 compute-0 sudo[425678]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:51 compute-0 sudo[425703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:40:51 compute-0 sudo[425703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:51 compute-0 sudo[425703]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:51 compute-0 sudo[425728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:40:51 compute-0 sudo[425728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:52 compute-0 podman[425794]: 2025-10-11 09:40:52.336663407 +0000 UTC m=+0.040394894 container create 0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:40:52 compute-0 systemd[1]: Started libpod-conmon-0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463.scope.
Oct 11 09:40:52 compute-0 podman[425794]: 2025-10-11 09:40:52.319506476 +0000 UTC m=+0.023237963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:40:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:40:52 compute-0 podman[425794]: 2025-10-11 09:40:52.436322702 +0000 UTC m=+0.140054209 container init 0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:40:52 compute-0 podman[425794]: 2025-10-11 09:40:52.443985877 +0000 UTC m=+0.147717354 container start 0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:40:52 compute-0 nervous_brown[425810]: 167 167
Oct 11 09:40:52 compute-0 podman[425794]: 2025-10-11 09:40:52.446884878 +0000 UTC m=+0.150616355 container attach 0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 09:40:52 compute-0 systemd[1]: libpod-0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463.scope: Deactivated successfully.
Oct 11 09:40:52 compute-0 conmon[425810]: conmon 0d209383da76ef84b39e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463.scope/container/memory.events
Oct 11 09:40:52 compute-0 podman[425794]: 2025-10-11 09:40:52.451691153 +0000 UTC m=+0.155422650 container died 0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 09:40:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c54f5620345f3bb90eb59a3077115055f0aedebf034865721218a8b1760993a1-merged.mount: Deactivated successfully.
Oct 11 09:40:52 compute-0 podman[425794]: 2025-10-11 09:40:52.495765029 +0000 UTC m=+0.199496546 container remove 0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:40:52 compute-0 systemd[1]: libpod-conmon-0d209383da76ef84b39eb37023dfd6af5725992cd72484eb36aec6560e383463.scope: Deactivated successfully.
Oct 11 09:40:52 compute-0 nova_compute[260935]: 2025-10-11 09:40:52.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:40:52 compute-0 nova_compute[260935]: 2025-10-11 09:40:52.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:40:52 compute-0 podman[425834]: 2025-10-11 09:40:52.771310208 +0000 UTC m=+0.065542790 container create e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 09:40:52 compute-0 systemd[1]: Started libpod-conmon-e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c.scope.
Oct 11 09:40:52 compute-0 podman[425834]: 2025-10-11 09:40:52.749610739 +0000 UTC m=+0.043843341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:40:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:40:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b62f5b29c4b39dc0863e58fe1a5cc2990509b9c0507ede5281336241ed3193/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:40:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b62f5b29c4b39dc0863e58fe1a5cc2990509b9c0507ede5281336241ed3193/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:40:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b62f5b29c4b39dc0863e58fe1a5cc2990509b9c0507ede5281336241ed3193/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:40:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b62f5b29c4b39dc0863e58fe1a5cc2990509b9c0507ede5281336241ed3193/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:40:52 compute-0 podman[425834]: 2025-10-11 09:40:52.891865919 +0000 UTC m=+0.186098581 container init e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:40:52 compute-0 podman[425834]: 2025-10-11 09:40:52.90151665 +0000 UTC m=+0.195749242 container start e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:40:52 compute-0 podman[425834]: 2025-10-11 09:40:52.90544521 +0000 UTC m=+0.199677882 container attach e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:40:53 compute-0 ceph-mon[74313]: pgmap v2976: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Oct 11 09:40:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2977: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Oct 11 09:40:53 compute-0 nova_compute[260935]: 2025-10-11 09:40:53.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:40:53 compute-0 sad_golick[425851]: {
Oct 11 09:40:53 compute-0 sad_golick[425851]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:40:53 compute-0 sad_golick[425851]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:40:53 compute-0 sad_golick[425851]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:40:53 compute-0 sad_golick[425851]:         "osd_id": 2,
Oct 11 09:40:53 compute-0 sad_golick[425851]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:40:53 compute-0 sad_golick[425851]:         "type": "bluestore"
Oct 11 09:40:53 compute-0 sad_golick[425851]:     },
Oct 11 09:40:53 compute-0 sad_golick[425851]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:40:53 compute-0 sad_golick[425851]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:40:53 compute-0 sad_golick[425851]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:40:53 compute-0 sad_golick[425851]:         "osd_id": 0,
Oct 11 09:40:53 compute-0 sad_golick[425851]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:40:53 compute-0 sad_golick[425851]:         "type": "bluestore"
Oct 11 09:40:53 compute-0 sad_golick[425851]:     },
Oct 11 09:40:53 compute-0 sad_golick[425851]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:40:53 compute-0 sad_golick[425851]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:40:53 compute-0 sad_golick[425851]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:40:53 compute-0 sad_golick[425851]:         "osd_id": 1,
Oct 11 09:40:53 compute-0 sad_golick[425851]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:40:53 compute-0 sad_golick[425851]:         "type": "bluestore"
Oct 11 09:40:53 compute-0 sad_golick[425851]:     }
Oct 11 09:40:53 compute-0 sad_golick[425851]: }
Oct 11 09:40:53 compute-0 systemd[1]: libpod-e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c.scope: Deactivated successfully.
Oct 11 09:40:53 compute-0 systemd[1]: libpod-e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c.scope: Consumed 1.064s CPU time.
Oct 11 09:40:53 compute-0 podman[425834]: 2025-10-11 09:40:53.974055561 +0000 UTC m=+1.268288113 container died e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:40:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1b62f5b29c4b39dc0863e58fe1a5cc2990509b9c0507ede5281336241ed3193-merged.mount: Deactivated successfully.
Oct 11 09:40:54 compute-0 podman[425834]: 2025-10-11 09:40:54.051161163 +0000 UTC m=+1.345393755 container remove e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 09:40:54 compute-0 systemd[1]: libpod-conmon-e100d7a2c1d658aba90ba8c6e75aeb5c9ec4ecf66c545bc40b678d56953fb22c.scope: Deactivated successfully.
Oct 11 09:40:54 compute-0 sudo[425728]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:40:54 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:40:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:40:54 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:40:54 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 59e3d7d7-038f-45a6-837d-a22d1be3c94c does not exist
Oct 11 09:40:54 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev d342ac12-db4e-40b3-8554-516736395ca6 does not exist
Oct 11 09:40:54 compute-0 sudo[425897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:40:54 compute-0 sudo[425897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:54 compute-0 sudo[425897]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:54 compute-0 sudo[425922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:40:54 compute-0 sudo[425922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:40:54 compute-0 sudo[425922]: pam_unix(sudo:session): session closed for user root
Oct 11 09:40:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:40:54 compute-0 nova_compute[260935]: 2025-10-11 09:40:54.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:40:54 compute-0 nova_compute[260935]: 2025-10-11 09:40:54.757 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:54 compute-0 nova_compute[260935]: 2025-10-11 09:40:54.757 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:54 compute-0 nova_compute[260935]: 2025-10-11 09:40:54.757 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:54 compute-0 nova_compute[260935]: 2025-10-11 09:40:54.758 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:40:54 compute-0 nova_compute[260935]: 2025-10-11 09:40:54.758 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:40:54 compute-0 nova_compute[260935]: 2025-10-11 09:40:54.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:40:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:40:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:40:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:40:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:40:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:40:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:40:55
Oct 11 09:40:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:40:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:40:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'vms']
Oct 11 09:40:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:40:55 compute-0 ceph-mon[74313]: pgmap v2977: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Oct 11 09:40:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:40:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:40:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:40:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4031578983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:40:55 compute-0 nova_compute[260935]: 2025-10-11 09:40:55.234 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:40:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2978: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 09:40:55 compute-0 nova_compute[260935]: 2025-10-11 09:40:55.472 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:40:55 compute-0 nova_compute[260935]: 2025-10-11 09:40:55.473 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:40:55 compute-0 nova_compute[260935]: 2025-10-11 09:40:55.474 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:40:55 compute-0 nova_compute[260935]: 2025-10-11 09:40:55.481 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:40:55 compute-0 nova_compute[260935]: 2025-10-11 09:40:55.481 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:40:55 compute-0 nova_compute[260935]: 2025-10-11 09:40:55.488 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:40:55 compute-0 nova_compute[260935]: 2025-10-11 09:40:55.489 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:40:55 compute-0 nova_compute[260935]: 2025-10-11 09:40:55.495 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:40:55 compute-0 nova_compute[260935]: 2025-10-11 09:40:55.496 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:40:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:40:55 compute-0 nova_compute[260935]: 2025-10-11 09:40:55.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:40:55 compute-0 ceph-mgr[74605]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2898047278
Oct 11 09:40:55 compute-0 nova_compute[260935]: 2025-10-11 09:40:55.797 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:40:55 compute-0 nova_compute[260935]: 2025-10-11 09:40:55.798 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2601MB free_disk=59.785282135009766GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:40:55 compute-0 nova_compute[260935]: 2025-10-11 09:40:55.799 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:40:55 compute-0 nova_compute[260935]: 2025-10-11 09:40:55.799 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:40:56 compute-0 nova_compute[260935]: 2025-10-11 09:40:56.010 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:40:56 compute-0 nova_compute[260935]: 2025-10-11 09:40:56.010 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:40:56 compute-0 nova_compute[260935]: 2025-10-11 09:40:56.010 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:40:56 compute-0 nova_compute[260935]: 2025-10-11 09:40:56.010 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance cf42187a-059a-49d4-b9c8-96786042977f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:40:56 compute-0 nova_compute[260935]: 2025-10-11 09:40:56.011 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:40:56 compute-0 nova_compute[260935]: 2025-10-11 09:40:56.011 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:40:56 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4031578983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:40:56 compute-0 nova_compute[260935]: 2025-10-11 09:40:56.286 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:40:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:40:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/10474241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:40:56 compute-0 nova_compute[260935]: 2025-10-11 09:40:56.801 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:40:56 compute-0 nova_compute[260935]: 2025-10-11 09:40:56.807 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:40:56 compute-0 nova_compute[260935]: 2025-10-11 09:40:56.927 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:40:57 compute-0 nova_compute[260935]: 2025-10-11 09:40:57.056 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:40:57 compute-0 nova_compute[260935]: 2025-10-11 09:40:57.056 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:40:57 compute-0 ceph-mon[74313]: pgmap v2978: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 09:40:57 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/10474241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:40:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2979: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 09:40:58 compute-0 nova_compute[260935]: 2025-10-11 09:40:58.057 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:40:58 compute-0 nova_compute[260935]: 2025-10-11 09:40:58.057 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:40:59 compute-0 ceph-mon[74313]: pgmap v2979: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 11 09:40:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2980: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:40:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:40:59 compute-0 nova_compute[260935]: 2025-10-11 09:40:59.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:00 compute-0 nova_compute[260935]: 2025-10-11 09:41:00.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:00.052 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:41:00 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:00.054 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:41:00 compute-0 nova_compute[260935]: 2025-10-11 09:41:00.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:01 compute-0 ceph-mon[74313]: pgmap v2980: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:41:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2981: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:41:03 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:03.057 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:41:03 compute-0 ceph-mon[74313]: pgmap v2981: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:41:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2982: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:41:04 compute-0 nova_compute[260935]: 2025-10-11 09:41:04.119 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:04 compute-0 nova_compute[260935]: 2025-10-11 09:41:04.120 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:04 compute-0 nova_compute[260935]: 2025-10-11 09:41:04.138 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:41:04 compute-0 nova_compute[260935]: 2025-10-11 09:41:04.225 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:04 compute-0 nova_compute[260935]: 2025-10-11 09:41:04.226 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:04 compute-0 nova_compute[260935]: 2025-10-11 09:41:04.234 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:41:04 compute-0 nova_compute[260935]: 2025-10-11 09:41:04.235 2 INFO nova.compute.claims [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:41:04 compute-0 nova_compute[260935]: 2025-10-11 09:41:04.412 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:41:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:41:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:41:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2341111548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:41:04 compute-0 nova_compute[260935]: 2025-10-11 09:41:04.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:04 compute-0 nova_compute[260935]: 2025-10-11 09:41:04.860 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:41:04 compute-0 nova_compute[260935]: 2025-10-11 09:41:04.868 2 DEBUG nova.compute.provider_tree [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:41:04 compute-0 nova_compute[260935]: 2025-10-11 09:41:04.888 2 DEBUG nova.scheduler.client.report [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:41:04 compute-0 nova_compute[260935]: 2025-10-11 09:41:04.916 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:04 compute-0 nova_compute[260935]: 2025-10-11 09:41:04.917 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:41:04 compute-0 nova_compute[260935]: 2025-10-11 09:41:04.985 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:41:04 compute-0 nova_compute[260935]: 2025-10-11 09:41:04.986 2 DEBUG nova.network.neutron [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.011 2 INFO nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.032 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.162 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.164 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.165 2 INFO nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Creating image(s)
Oct 11 09:41:05 compute-0 ceph-mon[74313]: pgmap v2982: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 11 09:41:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2341111548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.200 2 DEBUG nova.storage.rbd_utils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.222 2 DEBUG nova.storage.rbd_utils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.241 2 DEBUG nova.storage.rbd_utils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.246 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.285 2 DEBUG nova.policy [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '489c4d0457354f4684f8b9e53261224f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.336 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.337 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.338 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.338 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2983: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.373 2 DEBUG nova.storage.rbd_utils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.379 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003386268782151462 of space, bias 1.0, pg target 1.0158806346454385 quantized to 32 (current 32)
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:41:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.736 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.803 2 DEBUG nova.storage.rbd_utils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] resizing rbd image 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.833 2 DEBUG nova.network.neutron [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Successfully created port: ea93226c-f47f-4213-9d19-8bbf98d9f70f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.908 2 DEBUG nova.objects.instance [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'migration_context' on Instance uuid 7478f2ca-99cf-4c64-917f-bcfee233cf4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.921 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.921 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Ensure instance console log exists: /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.922 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.922 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:05 compute-0 nova_compute[260935]: 2025-10-11 09:41:05.923 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:06 compute-0 nova_compute[260935]: 2025-10-11 09:41:06.658 2 DEBUG nova.network.neutron [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Successfully updated port: ea93226c-f47f-4213-9d19-8bbf98d9f70f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:41:06 compute-0 nova_compute[260935]: 2025-10-11 09:41:06.674 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:41:06 compute-0 nova_compute[260935]: 2025-10-11 09:41:06.675 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquired lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:41:06 compute-0 nova_compute[260935]: 2025-10-11 09:41:06.675 2 DEBUG nova.network.neutron [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:41:06 compute-0 nova_compute[260935]: 2025-10-11 09:41:06.772 2 DEBUG nova.compute.manager [req-f978723e-c646-487d-af34-e1f41b376a28 req-ec27a19c-6a2b-41c4-90a9-199e4609872f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received event network-changed-ea93226c-f47f-4213-9d19-8bbf98d9f70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:41:06 compute-0 nova_compute[260935]: 2025-10-11 09:41:06.774 2 DEBUG nova.compute.manager [req-f978723e-c646-487d-af34-e1f41b376a28 req-ec27a19c-6a2b-41c4-90a9-199e4609872f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Refreshing instance network info cache due to event network-changed-ea93226c-f47f-4213-9d19-8bbf98d9f70f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:41:06 compute-0 nova_compute[260935]: 2025-10-11 09:41:06.774 2 DEBUG oslo_concurrency.lockutils [req-f978723e-c646-487d-af34-e1f41b376a28 req-ec27a19c-6a2b-41c4-90a9-199e4609872f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:41:06 compute-0 podman[426181]: 2025-10-11 09:41:06.817876398 +0000 UTC m=+0.101798837 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:41:06 compute-0 nova_compute[260935]: 2025-10-11 09:41:06.845 2 DEBUG nova.network.neutron [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:41:07 compute-0 ceph-mon[74313]: pgmap v2983: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 11 09:41:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2984: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.354 2 DEBUG nova.network.neutron [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Updating instance_info_cache with network_info: [{"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.384 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Releasing lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.385 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Instance network_info: |[{"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.385 2 DEBUG oslo_concurrency.lockutils [req-f978723e-c646-487d-af34-e1f41b376a28 req-ec27a19c-6a2b-41c4-90a9-199e4609872f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.386 2 DEBUG nova.network.neutron [req-f978723e-c646-487d-af34-e1f41b376a28 req-ec27a19c-6a2b-41c4-90a9-199e4609872f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Refreshing network info cache for port ea93226c-f47f-4213-9d19-8bbf98d9f70f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.391 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Start _get_guest_xml network_info=[{"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.398 2 WARNING nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.408 2 DEBUG nova.virt.libvirt.host [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.409 2 DEBUG nova.virt.libvirt.host [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.413 2 DEBUG nova.virt.libvirt.host [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.414 2 DEBUG nova.virt.libvirt.host [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.415 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.416 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.417 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.417 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.418 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.418 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.419 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.420 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.420 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.421 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.421 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.422 2 DEBUG nova.virt.hardware [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.427 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:41:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:41:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286187677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.913 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.931 2 DEBUG nova.storage.rbd_utils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:41:08 compute-0 nova_compute[260935]: 2025-10-11 09:41:08.935 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:41:09 compute-0 ceph-mon[74313]: pgmap v2984: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct 11 09:41:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4286187677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:41:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2985: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:41:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:41:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478184439' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.377 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.380 2 DEBUG nova.virt.libvirt.vif [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1334732272',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1334732272',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=147,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJ2jJOMdFj9GKBYjHKBvmwlAaXSeJgKxd7Y5C0D5IYgR/hyyjjmJiPLE03H6XYiIWX3QUpUmhg6y0ka6+hRs35VkER6t8ZYYcl+MQfyMjAvRZwHFSUWgwuABpKNZ9OROA==',key_name='tempest-TestSecurityGroupsBasicOps-1246308253',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-n62joivm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:41:05Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=7478f2ca-99cf-4c64-917f-bcfee233cf4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.380 2 DEBUG nova.network.os_vif_util [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.382 2 DEBUG nova.network.os_vif_util [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:42:8b,bridge_name='br-int',has_traffic_filtering=True,id=ea93226c-f47f-4213-9d19-8bbf98d9f70f,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea93226c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.384 2 DEBUG nova.objects.instance [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7478f2ca-99cf-4c64-917f-bcfee233cf4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.401 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:41:09 compute-0 nova_compute[260935]:   <uuid>7478f2ca-99cf-4c64-917f-bcfee233cf4e</uuid>
Oct 11 09:41:09 compute-0 nova_compute[260935]:   <name>instance-00000093</name>
Oct 11 09:41:09 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:41:09 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:41:09 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1334732272</nova:name>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:41:08</nova:creationTime>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:41:09 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:41:09 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:41:09 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:41:09 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:41:09 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:41:09 compute-0 nova_compute[260935]:         <nova:user uuid="489c4d0457354f4684f8b9e53261224f">tempest-TestSecurityGroupsBasicOps-607770139-project-member</nova:user>
Oct 11 09:41:09 compute-0 nova_compute[260935]:         <nova:project uuid="81e7096f23df4e7d8782cf98d09d54e9">tempest-TestSecurityGroupsBasicOps-607770139</nova:project>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:41:09 compute-0 nova_compute[260935]:         <nova:port uuid="ea93226c-f47f-4213-9d19-8bbf98d9f70f">
Oct 11 09:41:09 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:41:09 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:41:09 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <system>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <entry name="serial">7478f2ca-99cf-4c64-917f-bcfee233cf4e</entry>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <entry name="uuid">7478f2ca-99cf-4c64-917f-bcfee233cf4e</entry>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     </system>
Oct 11 09:41:09 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:41:09 compute-0 nova_compute[260935]:   <os>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:   </os>
Oct 11 09:41:09 compute-0 nova_compute[260935]:   <features>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:   </features>
Oct 11 09:41:09 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:41:09 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:41:09 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk">
Oct 11 09:41:09 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       </source>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:41:09 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk.config">
Oct 11 09:41:09 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       </source>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:41:09 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:29:42:8b"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <target dev="tapea93226c-f4"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e/console.log" append="off"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <video>
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     </video>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:41:09 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:41:09 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:41:09 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:41:09 compute-0 nova_compute[260935]: </domain>
Oct 11 09:41:09 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.402 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Preparing to wait for external event network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.403 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.403 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.403 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.404 2 DEBUG nova.virt.libvirt.vif [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1334732272',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1334732272',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=147,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJ2jJOMdFj9GKBYjHKBvmwlAaXSeJgKxd7Y5C0D5IYgR/hyyjjmJiPLE03H6XYiIWX3QUpUmhg6y0ka6+hRs35VkER6t8ZYYcl+MQfyMjAvRZwHFSUWgwuABpKNZ9OROA==',key_name='tempest-TestSecurityGroupsBasicOps-1246308253',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-n62joivm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:41:05Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=7478f2ca-99cf-4c64-917f-bcfee233cf4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.404 2 DEBUG nova.network.os_vif_util [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.405 2 DEBUG nova.network.os_vif_util [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:42:8b,bridge_name='br-int',has_traffic_filtering=True,id=ea93226c-f47f-4213-9d19-8bbf98d9f70f,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea93226c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.405 2 DEBUG os_vif [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:42:8b,bridge_name='br-int',has_traffic_filtering=True,id=ea93226c-f47f-4213-9d19-8bbf98d9f70f,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea93226c-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.406 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.406 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.410 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea93226c-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea93226c-f4, col_values=(('external_ids', {'iface-id': 'ea93226c-f47f-4213-9d19-8bbf98d9f70f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:29:42:8b', 'vm-uuid': '7478f2ca-99cf-4c64-917f-bcfee233cf4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:09 compute-0 NetworkManager[44960]: <info>  [1760175669.4140] manager: (tapea93226c-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/635)
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.421 2 INFO os_vif [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:42:8b,bridge_name='br-int',has_traffic_filtering=True,id=ea93226c-f47f-4213-9d19-8bbf98d9f70f,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea93226c-f4')
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.462 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.463 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.463 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] No VIF found with MAC fa:16:3e:29:42:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.464 2 INFO nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Using config drive
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.488 2 DEBUG nova.storage.rbd_utils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.493 2 DEBUG nova.network.neutron [req-f978723e-c646-487d-af34-e1f41b376a28 req-ec27a19c-6a2b-41c4-90a9-199e4609872f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Updated VIF entry in instance network info cache for port ea93226c-f47f-4213-9d19-8bbf98d9f70f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.493 2 DEBUG nova.network.neutron [req-f978723e-c646-487d-af34-e1f41b376a28 req-ec27a19c-6a2b-41c4-90a9-199e4609872f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Updating instance_info_cache with network_info: [{"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.522 2 DEBUG oslo_concurrency.lockutils [req-f978723e-c646-487d-af34-e1f41b376a28 req-ec27a19c-6a2b-41c4-90a9-199e4609872f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:41:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.736 2 INFO nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Creating config drive at /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e/disk.config
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.746 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuzkj8rfu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.915 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuzkj8rfu" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.954 2 DEBUG nova.storage.rbd_utils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] rbd image 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:41:09 compute-0 nova_compute[260935]: 2025-10-11 09:41:09.960 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e/disk.config 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:41:10 compute-0 nova_compute[260935]: 2025-10-11 09:41:10.181 2 DEBUG oslo_concurrency.processutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e/disk.config 7478f2ca-99cf-4c64-917f-bcfee233cf4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:41:10 compute-0 nova_compute[260935]: 2025-10-11 09:41:10.182 2 INFO nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Deleting local config drive /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e/disk.config because it was imported into RBD.
Oct 11 09:41:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3478184439' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:41:10 compute-0 kernel: tapea93226c-f4: entered promiscuous mode
Oct 11 09:41:10 compute-0 NetworkManager[44960]: <info>  [1760175670.2462] manager: (tapea93226c-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/636)
Oct 11 09:41:10 compute-0 nova_compute[260935]: 2025-10-11 09:41:10.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:10 compute-0 ovn_controller[152945]: 2025-10-11T09:41:10Z|01668|binding|INFO|Claiming lport ea93226c-f47f-4213-9d19-8bbf98d9f70f for this chassis.
Oct 11 09:41:10 compute-0 ovn_controller[152945]: 2025-10-11T09:41:10Z|01669|binding|INFO|ea93226c-f47f-4213-9d19-8bbf98d9f70f: Claiming fa:16:3e:29:42:8b 10.100.0.5
Oct 11 09:41:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.260 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:42:8b 10.100.0.5'], port_security=['fa:16:3e:29:42:8b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7478f2ca-99cf-4c64-917f-bcfee233cf4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ece05fde-852d-4083-a996-a2c46d1ad9c2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d19e995-d8fe-4294-b195-20de1c881c60, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ea93226c-f47f-4213-9d19-8bbf98d9f70f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:41:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.262 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ea93226c-f47f-4213-9d19-8bbf98d9f70f in datapath fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 bound to our chassis
Oct 11 09:41:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.269 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbb66f3b-0d2f-44df-a1ff-0fbd9c571596
Oct 11 09:41:10 compute-0 ovn_controller[152945]: 2025-10-11T09:41:10Z|01670|binding|INFO|Setting lport ea93226c-f47f-4213-9d19-8bbf98d9f70f ovn-installed in OVS
Oct 11 09:41:10 compute-0 ovn_controller[152945]: 2025-10-11T09:41:10Z|01671|binding|INFO|Setting lport ea93226c-f47f-4213-9d19-8bbf98d9f70f up in Southbound
Oct 11 09:41:10 compute-0 nova_compute[260935]: 2025-10-11 09:41:10.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:10 compute-0 nova_compute[260935]: 2025-10-11 09:41:10.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:10 compute-0 systemd-machined[215705]: New machine qemu-171-instance-00000093.
Oct 11 09:41:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.297 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0f68ae-afa4-44b4-a47d-c31f56354091]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:10 compute-0 systemd[1]: Started Virtual Machine qemu-171-instance-00000093.
Oct 11 09:41:10 compute-0 systemd-udevd[426339]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:41:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.329 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[99d405a9-72cf-4d17-a3ca-bd99092bdff2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.333 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[cf481698-0839-4f8b-a76e-538bc2275fb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:10 compute-0 NetworkManager[44960]: <info>  [1760175670.3377] device (tapea93226c-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:41:10 compute-0 NetworkManager[44960]: <info>  [1760175670.3385] device (tapea93226c-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:41:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.360 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[94067496-4ccf-4c86-a00a-3470106e6636]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.381 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a87e7191-1bac-47f3-9e8a-75a1d256657e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbb66f3b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:6d:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748155, 'reachable_time': 34340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 426349, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.399 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a34a08aa-7c21-4107-87ba-29c7740b0c12]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbb66f3b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 748169, 'tstamp': 748169}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426351, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbb66f3b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 748173, 'tstamp': 748173}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426351, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.400 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbb66f3b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:41:10 compute-0 nova_compute[260935]: 2025-10-11 09:41:10.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.404 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbb66f3b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:41:10 compute-0 nova_compute[260935]: 2025-10-11 09:41:10.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.404 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:41:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.404 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbb66f3b-00, col_values=(('external_ids', {'iface-id': '3bca8f57-1f39-42d4-90ee-6695770f4d39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:41:10 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:10.405 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:41:10 compute-0 nova_compute[260935]: 2025-10-11 09:41:10.894 2 DEBUG nova.compute.manager [req-3a092c95-0b74-44bd-a958-3d4eafd404aa req-daac414b-f21e-4ed1-a22a-0a5ab96c7d6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received event network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:41:10 compute-0 nova_compute[260935]: 2025-10-11 09:41:10.895 2 DEBUG oslo_concurrency.lockutils [req-3a092c95-0b74-44bd-a958-3d4eafd404aa req-daac414b-f21e-4ed1-a22a-0a5ab96c7d6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:10 compute-0 nova_compute[260935]: 2025-10-11 09:41:10.895 2 DEBUG oslo_concurrency.lockutils [req-3a092c95-0b74-44bd-a958-3d4eafd404aa req-daac414b-f21e-4ed1-a22a-0a5ab96c7d6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:10 compute-0 nova_compute[260935]: 2025-10-11 09:41:10.896 2 DEBUG oslo_concurrency.lockutils [req-3a092c95-0b74-44bd-a958-3d4eafd404aa req-daac414b-f21e-4ed1-a22a-0a5ab96c7d6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:10 compute-0 nova_compute[260935]: 2025-10-11 09:41:10.897 2 DEBUG nova.compute.manager [req-3a092c95-0b74-44bd-a958-3d4eafd404aa req-daac414b-f21e-4ed1-a22a-0a5ab96c7d6f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Processing event network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:41:11 compute-0 ceph-mon[74313]: pgmap v2985: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:41:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2986: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:41:11 compute-0 nova_compute[260935]: 2025-10-11 09:41:11.885 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175671.8853352, 7478f2ca-99cf-4c64-917f-bcfee233cf4e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:41:11 compute-0 nova_compute[260935]: 2025-10-11 09:41:11.886 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] VM Started (Lifecycle Event)
Oct 11 09:41:11 compute-0 nova_compute[260935]: 2025-10-11 09:41:11.889 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:41:11 compute-0 nova_compute[260935]: 2025-10-11 09:41:11.892 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:41:11 compute-0 nova_compute[260935]: 2025-10-11 09:41:11.895 2 INFO nova.virt.libvirt.driver [-] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Instance spawned successfully.
Oct 11 09:41:11 compute-0 nova_compute[260935]: 2025-10-11 09:41:11.896 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:41:11 compute-0 nova_compute[260935]: 2025-10-11 09:41:11.922 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:41:11 compute-0 nova_compute[260935]: 2025-10-11 09:41:11.925 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:41:11 compute-0 nova_compute[260935]: 2025-10-11 09:41:11.932 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:41:11 compute-0 nova_compute[260935]: 2025-10-11 09:41:11.933 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:41:11 compute-0 nova_compute[260935]: 2025-10-11 09:41:11.933 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:41:11 compute-0 nova_compute[260935]: 2025-10-11 09:41:11.934 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:41:11 compute-0 nova_compute[260935]: 2025-10-11 09:41:11.934 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:41:11 compute-0 nova_compute[260935]: 2025-10-11 09:41:11.935 2 DEBUG nova.virt.libvirt.driver [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:41:11 compute-0 nova_compute[260935]: 2025-10-11 09:41:11.971 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:41:11 compute-0 nova_compute[260935]: 2025-10-11 09:41:11.971 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175671.886363, 7478f2ca-99cf-4c64-917f-bcfee233cf4e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:41:11 compute-0 nova_compute[260935]: 2025-10-11 09:41:11.972 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] VM Paused (Lifecycle Event)
Oct 11 09:41:12 compute-0 nova_compute[260935]: 2025-10-11 09:41:12.005 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:41:12 compute-0 nova_compute[260935]: 2025-10-11 09:41:12.008 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175671.8910682, 7478f2ca-99cf-4c64-917f-bcfee233cf4e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:41:12 compute-0 nova_compute[260935]: 2025-10-11 09:41:12.008 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] VM Resumed (Lifecycle Event)
Oct 11 09:41:12 compute-0 nova_compute[260935]: 2025-10-11 09:41:12.035 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:41:12 compute-0 nova_compute[260935]: 2025-10-11 09:41:12.039 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:41:12 compute-0 nova_compute[260935]: 2025-10-11 09:41:12.044 2 INFO nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Took 6.88 seconds to spawn the instance on the hypervisor.
Oct 11 09:41:12 compute-0 nova_compute[260935]: 2025-10-11 09:41:12.045 2 DEBUG nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:41:12 compute-0 nova_compute[260935]: 2025-10-11 09:41:12.086 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:41:12 compute-0 nova_compute[260935]: 2025-10-11 09:41:12.117 2 INFO nova.compute.manager [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Took 7.93 seconds to build instance.
Oct 11 09:41:12 compute-0 nova_compute[260935]: 2025-10-11 09:41:12.141 2 DEBUG oslo_concurrency.lockutils [None req-500c65b1-7962-482f-aabb-b0268b19e7b4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:12 compute-0 nova_compute[260935]: 2025-10-11 09:41:12.967 2 DEBUG nova.compute.manager [req-8d6bc5e4-ac86-4490-ad69-79ef958fca71 req-8781de71-8b5c-4395-8627-eafbb88acc29 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received event network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:41:12 compute-0 nova_compute[260935]: 2025-10-11 09:41:12.968 2 DEBUG oslo_concurrency.lockutils [req-8d6bc5e4-ac86-4490-ad69-79ef958fca71 req-8781de71-8b5c-4395-8627-eafbb88acc29 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:12 compute-0 nova_compute[260935]: 2025-10-11 09:41:12.969 2 DEBUG oslo_concurrency.lockutils [req-8d6bc5e4-ac86-4490-ad69-79ef958fca71 req-8781de71-8b5c-4395-8627-eafbb88acc29 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:12 compute-0 nova_compute[260935]: 2025-10-11 09:41:12.969 2 DEBUG oslo_concurrency.lockutils [req-8d6bc5e4-ac86-4490-ad69-79ef958fca71 req-8781de71-8b5c-4395-8627-eafbb88acc29 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:12 compute-0 nova_compute[260935]: 2025-10-11 09:41:12.970 2 DEBUG nova.compute.manager [req-8d6bc5e4-ac86-4490-ad69-79ef958fca71 req-8781de71-8b5c-4395-8627-eafbb88acc29 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] No waiting events found dispatching network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:41:12 compute-0 nova_compute[260935]: 2025-10-11 09:41:12.970 2 WARNING nova.compute.manager [req-8d6bc5e4-ac86-4490-ad69-79ef958fca71 req-8781de71-8b5c-4395-8627-eafbb88acc29 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received unexpected event network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f for instance with vm_state active and task_state None.
Oct 11 09:41:13 compute-0 ceph-mon[74313]: pgmap v2986: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:41:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2987: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Oct 11 09:41:13 compute-0 podman[426394]: 2025-10-11 09:41:13.776763046 +0000 UTC m=+0.079967564 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid)
Oct 11 09:41:14 compute-0 nova_compute[260935]: 2025-10-11 09:41:14.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:41:14 compute-0 nova_compute[260935]: 2025-10-11 09:41:14.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:15 compute-0 ceph-mon[74313]: pgmap v2987: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Oct 11 09:41:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:15.237 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:15.237 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:15.238 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2988: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Oct 11 09:41:16 compute-0 nova_compute[260935]: 2025-10-11 09:41:16.812 2 DEBUG nova.compute.manager [req-c976d1b7-a860-4431-a5f3-910c9c26a4b1 req-326acea8-da5f-4362-9d74-17056b00a9c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received event network-changed-ea93226c-f47f-4213-9d19-8bbf98d9f70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:41:16 compute-0 nova_compute[260935]: 2025-10-11 09:41:16.813 2 DEBUG nova.compute.manager [req-c976d1b7-a860-4431-a5f3-910c9c26a4b1 req-326acea8-da5f-4362-9d74-17056b00a9c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Refreshing instance network info cache due to event network-changed-ea93226c-f47f-4213-9d19-8bbf98d9f70f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:41:16 compute-0 nova_compute[260935]: 2025-10-11 09:41:16.814 2 DEBUG oslo_concurrency.lockutils [req-c976d1b7-a860-4431-a5f3-910c9c26a4b1 req-326acea8-da5f-4362-9d74-17056b00a9c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:41:16 compute-0 nova_compute[260935]: 2025-10-11 09:41:16.814 2 DEBUG oslo_concurrency.lockutils [req-c976d1b7-a860-4431-a5f3-910c9c26a4b1 req-326acea8-da5f-4362-9d74-17056b00a9c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:41:16 compute-0 nova_compute[260935]: 2025-10-11 09:41:16.815 2 DEBUG nova.network.neutron [req-c976d1b7-a860-4431-a5f3-910c9c26a4b1 req-326acea8-da5f-4362-9d74-17056b00a9c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Refreshing network info cache for port ea93226c-f47f-4213-9d19-8bbf98d9f70f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:41:17 compute-0 ceph-mon[74313]: pgmap v2988: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Oct 11 09:41:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2989: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Oct 11 09:41:17 compute-0 podman[426414]: 2025-10-11 09:41:17.814520246 +0000 UTC m=+0.104578014 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 11 09:41:17 compute-0 podman[426415]: 2025-10-11 09:41:17.869224411 +0000 UTC m=+0.159878116 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:41:18 compute-0 nova_compute[260935]: 2025-10-11 09:41:18.124 2 DEBUG nova.network.neutron [req-c976d1b7-a860-4431-a5f3-910c9c26a4b1 req-326acea8-da5f-4362-9d74-17056b00a9c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Updated VIF entry in instance network info cache for port ea93226c-f47f-4213-9d19-8bbf98d9f70f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:41:18 compute-0 nova_compute[260935]: 2025-10-11 09:41:18.125 2 DEBUG nova.network.neutron [req-c976d1b7-a860-4431-a5f3-910c9c26a4b1 req-326acea8-da5f-4362-9d74-17056b00a9c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Updating instance_info_cache with network_info: [{"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:41:18 compute-0 nova_compute[260935]: 2025-10-11 09:41:18.147 2 DEBUG oslo_concurrency.lockutils [req-c976d1b7-a860-4431-a5f3-910c9c26a4b1 req-326acea8-da5f-4362-9d74-17056b00a9c2 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-7478f2ca-99cf-4c64-917f-bcfee233cf4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:41:19 compute-0 ceph-mon[74313]: pgmap v2989: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Oct 11 09:41:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2990: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:41:19 compute-0 nova_compute[260935]: 2025-10-11 09:41:19.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:41:19 compute-0 nova_compute[260935]: 2025-10-11 09:41:19.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:21 compute-0 sshd-session[426458]: Invalid user mysql from 165.232.82.252 port 52322
Oct 11 09:41:21 compute-0 sshd-session[426458]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:41:21 compute-0 sshd-session[426458]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:41:21 compute-0 ceph-mon[74313]: pgmap v2990: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 11 09:41:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2991: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 11 09:41:23 compute-0 ovn_controller[152945]: 2025-10-11T09:41:23Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:29:42:8b 10.100.0.5
Oct 11 09:41:23 compute-0 ovn_controller[152945]: 2025-10-11T09:41:23Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:29:42:8b 10.100.0.5
Oct 11 09:41:23 compute-0 ceph-mon[74313]: pgmap v2991: 321 pgs: 321 active+clean; 453 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 11 09:41:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2992: 321 pgs: 321 active+clean; 466 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 95 op/s
Oct 11 09:41:23 compute-0 sshd-session[426458]: Failed password for invalid user mysql from 165.232.82.252 port 52322 ssh2
Oct 11 09:41:24 compute-0 sshd-session[426458]: Connection closed by invalid user mysql 165.232.82.252 port 52322 [preauth]
Oct 11 09:41:24 compute-0 nova_compute[260935]: 2025-10-11 09:41:24.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:41:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:41:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:41:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:41:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:41:24 compute-0 nova_compute[260935]: 2025-10-11 09:41:24.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:41:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:41:25 compute-0 ceph-mon[74313]: pgmap v2992: 321 pgs: 321 active+clean; 466 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 95 op/s
Oct 11 09:41:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2993: 321 pgs: 321 active+clean; 466 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 81 op/s
Oct 11 09:41:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:41:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2813472303' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:41:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:41:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2813472303' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:41:27 compute-0 ceph-mon[74313]: pgmap v2993: 321 pgs: 321 active+clean; 466 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 81 op/s
Oct 11 09:41:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2813472303' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:41:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2813472303' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:41:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2994: 321 pgs: 321 active+clean; 466 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 81 op/s
Oct 11 09:41:29 compute-0 ceph-mon[74313]: pgmap v2994: 321 pgs: 321 active+clean; 466 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 81 op/s
Oct 11 09:41:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2995: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Oct 11 09:41:29 compute-0 nova_compute[260935]: 2025-10-11 09:41:29.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:41:29 compute-0 nova_compute[260935]: 2025-10-11 09:41:29.820 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:29 compute-0 nova_compute[260935]: 2025-10-11 09:41:29.822 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:29 compute-0 nova_compute[260935]: 2025-10-11 09:41:29.822 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:29 compute-0 nova_compute[260935]: 2025-10-11 09:41:29.822 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:29 compute-0 nova_compute[260935]: 2025-10-11 09:41:29.822 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:29 compute-0 nova_compute[260935]: 2025-10-11 09:41:29.824 2 INFO nova.compute.manager [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Terminating instance
Oct 11 09:41:29 compute-0 nova_compute[260935]: 2025-10-11 09:41:29.825 2 DEBUG nova.compute.manager [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:41:29 compute-0 nova_compute[260935]: 2025-10-11 09:41:29.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:29 compute-0 kernel: tapea93226c-f4 (unregistering): left promiscuous mode
Oct 11 09:41:29 compute-0 NetworkManager[44960]: <info>  [1760175689.8999] device (tapea93226c-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:41:29 compute-0 ovn_controller[152945]: 2025-10-11T09:41:29Z|01672|binding|INFO|Releasing lport ea93226c-f47f-4213-9d19-8bbf98d9f70f from this chassis (sb_readonly=0)
Oct 11 09:41:29 compute-0 ovn_controller[152945]: 2025-10-11T09:41:29Z|01673|binding|INFO|Setting lport ea93226c-f47f-4213-9d19-8bbf98d9f70f down in Southbound
Oct 11 09:41:29 compute-0 nova_compute[260935]: 2025-10-11 09:41:29.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:29 compute-0 ovn_controller[152945]: 2025-10-11T09:41:29Z|01674|binding|INFO|Removing iface tapea93226c-f4 ovn-installed in OVS
Oct 11 09:41:29 compute-0 nova_compute[260935]: 2025-10-11 09:41:29.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:29.942 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:42:8b 10.100.0.5'], port_security=['fa:16:3e:29:42:8b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7478f2ca-99cf-4c64-917f-bcfee233cf4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '96e9c5f2-802e-4f1b-99f1-1853c36965bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d19e995-d8fe-4294-b195-20de1c881c60, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=ea93226c-f47f-4213-9d19-8bbf98d9f70f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:41:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:29.944 162815 INFO neutron.agent.ovn.metadata.agent [-] Port ea93226c-f47f-4213-9d19-8bbf98d9f70f in datapath fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 unbound from our chassis
Oct 11 09:41:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:29.947 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbb66f3b-0d2f-44df-a1ff-0fbd9c571596
Oct 11 09:41:29 compute-0 nova_compute[260935]: 2025-10-11 09:41:29.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:29 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000093.scope: Deactivated successfully.
Oct 11 09:41:29 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000093.scope: Consumed 13.983s CPU time.
Oct 11 09:41:29 compute-0 systemd-machined[215705]: Machine qemu-171-instance-00000093 terminated.
Oct 11 09:41:29 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:29.978 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3b76029d-3a2e-4e0a-8198-2224cdda29d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.025 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[f7523f2f-8bfb-4594-a034-88e7b0dc90a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.030 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8716474f-dda1-456d-ad04-f49abb3689d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.075 2 INFO nova.virt.libvirt.driver [-] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Instance destroyed successfully.
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.076 2 DEBUG nova.objects.instance [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'resources' on Instance uuid 7478f2ca-99cf-4c64-917f-bcfee233cf4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:41:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.080 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[5f667095-7169-47a2-ab15-320b01d64f08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.112 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[02efde1c-b31c-4603-8bd5-540edcd9f690]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbb66f3b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:6d:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748155, 'reachable_time': 34340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 426481, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.135 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1f48780d-99a8-4b01-8266-6950dcc53546]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbb66f3b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 748169, 'tstamp': 748169}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426482, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbb66f3b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 748173, 'tstamp': 748173}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426482, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.137 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbb66f3b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.146 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbb66f3b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:41:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.146 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:41:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.147 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbb66f3b-00, col_values=(('external_ids', {'iface-id': '3bca8f57-1f39-42d4-90ee-6695770f4d39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:41:30 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:30.148 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.187 2 DEBUG nova.virt.libvirt.vif [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1334732272',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-gen-1-1334732272',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-gen',id=147,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJ2jJOMdFj9GKBYjHKBvmwlAaXSeJgKxd7Y5C0D5IYgR/hyyjjmJiPLE03H6XYiIWX3QUpUmhg6y0ka6+hRs35VkER6t8ZYYcl+MQfyMjAvRZwHFSUWgwuABpKNZ9OROA==',key_name='tempest-TestSecurityGroupsBasicOps-1246308253',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:41:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-n62joivm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:41:12Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=7478f2ca-99cf-4c64-917f-bcfee233cf4e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.188 2 DEBUG nova.network.os_vif_util [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "address": "fa:16:3e:29:42:8b", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea93226c-f4", "ovs_interfaceid": "ea93226c-f47f-4213-9d19-8bbf98d9f70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.189 2 DEBUG nova.network.os_vif_util [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:29:42:8b,bridge_name='br-int',has_traffic_filtering=True,id=ea93226c-f47f-4213-9d19-8bbf98d9f70f,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea93226c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.190 2 DEBUG os_vif [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:29:42:8b,bridge_name='br-int',has_traffic_filtering=True,id=ea93226c-f47f-4213-9d19-8bbf98d9f70f,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea93226c-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.194 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea93226c-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.203 2 INFO os_vif [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:29:42:8b,bridge_name='br-int',has_traffic_filtering=True,id=ea93226c-f47f-4213-9d19-8bbf98d9f70f,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea93226c-f4')
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.653 2 INFO nova.virt.libvirt.driver [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Deleting instance files /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e_del
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.654 2 INFO nova.virt.libvirt.driver [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Deletion of /var/lib/nova/instances/7478f2ca-99cf-4c64-917f-bcfee233cf4e_del complete
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.727 2 INFO nova.compute.manager [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Took 0.90 seconds to destroy the instance on the hypervisor.
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.727 2 DEBUG oslo.service.loopingcall [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.728 2 DEBUG nova.compute.manager [-] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.728 2 DEBUG nova.network.neutron [-] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.929 2 DEBUG nova.compute.manager [req-5cd15e40-1b0e-4c8f-8d49-c1846b70a172 req-cb3478f5-532d-4cad-afe2-0ced00bd7bdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received event network-vif-unplugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.930 2 DEBUG oslo_concurrency.lockutils [req-5cd15e40-1b0e-4c8f-8d49-c1846b70a172 req-cb3478f5-532d-4cad-afe2-0ced00bd7bdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.930 2 DEBUG oslo_concurrency.lockutils [req-5cd15e40-1b0e-4c8f-8d49-c1846b70a172 req-cb3478f5-532d-4cad-afe2-0ced00bd7bdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.931 2 DEBUG oslo_concurrency.lockutils [req-5cd15e40-1b0e-4c8f-8d49-c1846b70a172 req-cb3478f5-532d-4cad-afe2-0ced00bd7bdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.931 2 DEBUG nova.compute.manager [req-5cd15e40-1b0e-4c8f-8d49-c1846b70a172 req-cb3478f5-532d-4cad-afe2-0ced00bd7bdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] No waiting events found dispatching network-vif-unplugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:41:30 compute-0 nova_compute[260935]: 2025-10-11 09:41:30.932 2 DEBUG nova.compute.manager [req-5cd15e40-1b0e-4c8f-8d49-c1846b70a172 req-cb3478f5-532d-4cad-afe2-0ced00bd7bdb e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received event network-vif-unplugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:41:31 compute-0 ceph-mon[74313]: pgmap v2995: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Oct 11 09:41:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2996: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:41:31 compute-0 nova_compute[260935]: 2025-10-11 09:41:31.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:41:32 compute-0 nova_compute[260935]: 2025-10-11 09:41:32.084 2 DEBUG nova.network.neutron [-] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:41:32 compute-0 nova_compute[260935]: 2025-10-11 09:41:32.109 2 INFO nova.compute.manager [-] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Took 1.38 seconds to deallocate network for instance.
Oct 11 09:41:32 compute-0 nova_compute[260935]: 2025-10-11 09:41:32.154 2 DEBUG nova.compute.manager [req-70621370-3dff-4291-a643-b2006a33e0bf req-99db5a06-a66b-4823-8270-75ea87120c91 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received event network-vif-deleted-ea93226c-f47f-4213-9d19-8bbf98d9f70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:41:32 compute-0 nova_compute[260935]: 2025-10-11 09:41:32.156 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:32 compute-0 nova_compute[260935]: 2025-10-11 09:41:32.156 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:32 compute-0 nova_compute[260935]: 2025-10-11 09:41:32.272 2 DEBUG oslo_concurrency.processutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:41:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:41:32 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3854747652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:41:32 compute-0 nova_compute[260935]: 2025-10-11 09:41:32.756 2 DEBUG oslo_concurrency.processutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:41:32 compute-0 nova_compute[260935]: 2025-10-11 09:41:32.765 2 DEBUG nova.compute.provider_tree [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:41:32 compute-0 nova_compute[260935]: 2025-10-11 09:41:32.783 2 DEBUG nova.scheduler.client.report [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:41:32 compute-0 nova_compute[260935]: 2025-10-11 09:41:32.804 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:32 compute-0 nova_compute[260935]: 2025-10-11 09:41:32.831 2 INFO nova.scheduler.client.report [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Deleted allocations for instance 7478f2ca-99cf-4c64-917f-bcfee233cf4e
Oct 11 09:41:32 compute-0 nova_compute[260935]: 2025-10-11 09:41:32.888 2 DEBUG oslo_concurrency.lockutils [None req-42e61385-09b9-45de-b168-08184d24b4f5 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:32 compute-0 nova_compute[260935]: 2025-10-11 09:41:32.999 2 DEBUG nova.compute.manager [req-8c8de571-56fc-4d27-996b-19eefb071682 req-979ab78a-23d8-4dc8-a64e-db4b5a27aeba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received event network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:41:33 compute-0 nova_compute[260935]: 2025-10-11 09:41:32.999 2 DEBUG oslo_concurrency.lockutils [req-8c8de571-56fc-4d27-996b-19eefb071682 req-979ab78a-23d8-4dc8-a64e-db4b5a27aeba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:33 compute-0 nova_compute[260935]: 2025-10-11 09:41:33.000 2 DEBUG oslo_concurrency.lockutils [req-8c8de571-56fc-4d27-996b-19eefb071682 req-979ab78a-23d8-4dc8-a64e-db4b5a27aeba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:33 compute-0 nova_compute[260935]: 2025-10-11 09:41:33.000 2 DEBUG oslo_concurrency.lockutils [req-8c8de571-56fc-4d27-996b-19eefb071682 req-979ab78a-23d8-4dc8-a64e-db4b5a27aeba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "7478f2ca-99cf-4c64-917f-bcfee233cf4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:33 compute-0 nova_compute[260935]: 2025-10-11 09:41:33.001 2 DEBUG nova.compute.manager [req-8c8de571-56fc-4d27-996b-19eefb071682 req-979ab78a-23d8-4dc8-a64e-db4b5a27aeba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] No waiting events found dispatching network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:41:33 compute-0 nova_compute[260935]: 2025-10-11 09:41:33.001 2 WARNING nova.compute.manager [req-8c8de571-56fc-4d27-996b-19eefb071682 req-979ab78a-23d8-4dc8-a64e-db4b5a27aeba e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Received unexpected event network-vif-plugged-ea93226c-f47f-4213-9d19-8bbf98d9f70f for instance with vm_state deleted and task_state None.
Oct 11 09:41:33 compute-0 ceph-mon[74313]: pgmap v2996: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:41:33 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3854747652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:41:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2997: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 11 09:41:34 compute-0 nova_compute[260935]: 2025-10-11 09:41:34.625 2 DEBUG nova.compute.manager [req-7c930276-1c50-4402-b0ca-8f1342bfa047 req-07c0a6d2-3434-4d26-97c4-e603cf7b242b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-changed-9f09f897-79b2-4242-ba35-4ab26732b411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:41:34 compute-0 nova_compute[260935]: 2025-10-11 09:41:34.626 2 DEBUG nova.compute.manager [req-7c930276-1c50-4402-b0ca-8f1342bfa047 req-07c0a6d2-3434-4d26-97c4-e603cf7b242b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Refreshing instance network info cache due to event network-changed-9f09f897-79b2-4242-ba35-4ab26732b411. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:41:34 compute-0 nova_compute[260935]: 2025-10-11 09:41:34.626 2 DEBUG oslo_concurrency.lockutils [req-7c930276-1c50-4402-b0ca-8f1342bfa047 req-07c0a6d2-3434-4d26-97c4-e603cf7b242b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:41:34 compute-0 nova_compute[260935]: 2025-10-11 09:41:34.626 2 DEBUG oslo_concurrency.lockutils [req-7c930276-1c50-4402-b0ca-8f1342bfa047 req-07c0a6d2-3434-4d26-97c4-e603cf7b242b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:41:34 compute-0 nova_compute[260935]: 2025-10-11 09:41:34.627 2 DEBUG nova.network.neutron [req-7c930276-1c50-4402-b0ca-8f1342bfa047 req-07c0a6d2-3434-4d26-97c4-e603cf7b242b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Refreshing network info cache for port 9f09f897-79b2-4242-ba35-4ab26732b411 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:41:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:41:34 compute-0 nova_compute[260935]: 2025-10-11 09:41:34.786 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "cf42187a-059a-49d4-b9c8-96786042977f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:34 compute-0 nova_compute[260935]: 2025-10-11 09:41:34.787 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:34 compute-0 nova_compute[260935]: 2025-10-11 09:41:34.788 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "cf42187a-059a-49d4-b9c8-96786042977f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:34 compute-0 nova_compute[260935]: 2025-10-11 09:41:34.788 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:34 compute-0 nova_compute[260935]: 2025-10-11 09:41:34.788 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:34 compute-0 nova_compute[260935]: 2025-10-11 09:41:34.790 2 INFO nova.compute.manager [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Terminating instance
Oct 11 09:41:34 compute-0 nova_compute[260935]: 2025-10-11 09:41:34.792 2 DEBUG nova.compute.manager [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:41:34 compute-0 kernel: tap9f09f897-79 (unregistering): left promiscuous mode
Oct 11 09:41:34 compute-0 NetworkManager[44960]: <info>  [1760175694.8526] device (tap9f09f897-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:41:34 compute-0 ovn_controller[152945]: 2025-10-11T09:41:34Z|01675|binding|INFO|Releasing lport 9f09f897-79b2-4242-ba35-4ab26732b411 from this chassis (sb_readonly=0)
Oct 11 09:41:34 compute-0 ovn_controller[152945]: 2025-10-11T09:41:34Z|01676|binding|INFO|Setting lport 9f09f897-79b2-4242-ba35-4ab26732b411 down in Southbound
Oct 11 09:41:34 compute-0 ovn_controller[152945]: 2025-10-11T09:41:34Z|01677|binding|INFO|Removing iface tap9f09f897-79 ovn-installed in OVS
Oct 11 09:41:34 compute-0 nova_compute[260935]: 2025-10-11 09:41:34.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:34.876 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:42:d3 10.100.0.14'], port_security=['fa:16:3e:f9:42:d3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'cf42187a-059a-49d4-b9c8-96786042977f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81e7096f23df4e7d8782cf98d09d54e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '09004472-a3cd-4396-b377-505fc9433156 ece05fde-852d-4083-a996-a2c46d1ad9c2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d19e995-d8fe-4294-b195-20de1c881c60, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=9f09f897-79b2-4242-ba35-4ab26732b411) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:41:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:34.878 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 9f09f897-79b2-4242-ba35-4ab26732b411 in datapath fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 unbound from our chassis
Oct 11 09:41:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:34.882 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbb66f3b-0d2f-44df-a1ff-0fbd9c571596, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:41:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:34.883 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[b19b0f4b-6c65-4133-b2f7-348061ad1a40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:34.884 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 namespace which is not needed anymore
Oct 11 09:41:34 compute-0 nova_compute[260935]: 2025-10-11 09:41:34.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:34 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000092.scope: Deactivated successfully.
Oct 11 09:41:34 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000092.scope: Consumed 14.969s CPU time.
Oct 11 09:41:34 compute-0 systemd-machined[215705]: Machine qemu-170-instance-00000092 terminated.
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.040 2 INFO nova.virt.libvirt.driver [-] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Instance destroyed successfully.
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.045 2 DEBUG nova.objects.instance [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lazy-loading 'resources' on Instance uuid cf42187a-059a-49d4-b9c8-96786042977f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.062 2 DEBUG nova.virt.libvirt.vif [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:40:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1885053733',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-607770139-access_point-1885053733',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-607770139-acc',id=146,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMJ2jJOMdFj9GKBYjHKBvmwlAaXSeJgKxd7Y5C0D5IYgR/hyyjjmJiPLE03H6XYiIWX3QUpUmhg6y0ka6+hRs35VkER6t8ZYYcl+MQfyMjAvRZwHFSUWgwuABpKNZ9OROA==',key_name='tempest-TestSecurityGroupsBasicOps-1246308253',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:40:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81e7096f23df4e7d8782cf98d09d54e9',ramdisk_id='',reservation_id='r-8zvi2y9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-607770139',owner_user_name='tempest-TestSecurityGroupsBasicOps-607770139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:40:38Z,user_data=None,user_id='489c4d0457354f4684f8b9e53261224f',uuid=cf42187a-059a-49d4-b9c8-96786042977f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.063 2 DEBUG nova.network.os_vif_util [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converting VIF {"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.064 2 DEBUG nova.network.os_vif_util [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:42:d3,bridge_name='br-int',has_traffic_filtering=True,id=9f09f897-79b2-4242-ba35-4ab26732b411,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f09f897-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.064 2 DEBUG os_vif [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:42:d3,bridge_name='br-int',has_traffic_filtering=True,id=9f09f897-79b2-4242-ba35-4ab26732b411,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f09f897-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.067 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f09f897-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:35 compute-0 neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596[424973]: [NOTICE]   (424977) : haproxy version is 2.8.14-c23fe91
Oct 11 09:41:35 compute-0 neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596[424973]: [NOTICE]   (424977) : path to executable is /usr/sbin/haproxy
Oct 11 09:41:35 compute-0 neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596[424973]: [WARNING]  (424977) : Exiting Master process...
Oct 11 09:41:35 compute-0 neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596[424973]: [ALERT]    (424977) : Current worker (424979) exited with code 143 (Terminated)
Oct 11 09:41:35 compute-0 neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596[424973]: [WARNING]  (424977) : All workers exited. Exiting... (0)
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.113 2 INFO os_vif [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:42:d3,bridge_name='br-int',has_traffic_filtering=True,id=9f09f897-79b2-4242-ba35-4ab26732b411,network=Network(fbb66f3b-0d2f-44df-a1ff-0fbd9c571596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f09f897-79')
Oct 11 09:41:35 compute-0 systemd[1]: libpod-2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d.scope: Deactivated successfully.
Oct 11 09:41:35 compute-0 podman[426552]: 2025-10-11 09:41:35.128363936 +0000 UTC m=+0.070615311 container died 2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.137 2 DEBUG nova.compute.manager [req-17b767a0-157f-4747-a1cb-103fa5847d24 req-2f1371aa-1da6-45b9-8067-f76928ef2689 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-vif-unplugged-9f09f897-79b2-4242-ba35-4ab26732b411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.137 2 DEBUG oslo_concurrency.lockutils [req-17b767a0-157f-4747-a1cb-103fa5847d24 req-2f1371aa-1da6-45b9-8067-f76928ef2689 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cf42187a-059a-49d4-b9c8-96786042977f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.138 2 DEBUG oslo_concurrency.lockutils [req-17b767a0-157f-4747-a1cb-103fa5847d24 req-2f1371aa-1da6-45b9-8067-f76928ef2689 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.138 2 DEBUG oslo_concurrency.lockutils [req-17b767a0-157f-4747-a1cb-103fa5847d24 req-2f1371aa-1da6-45b9-8067-f76928ef2689 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.139 2 DEBUG nova.compute.manager [req-17b767a0-157f-4747-a1cb-103fa5847d24 req-2f1371aa-1da6-45b9-8067-f76928ef2689 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] No waiting events found dispatching network-vif-unplugged-9f09f897-79b2-4242-ba35-4ab26732b411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.139 2 DEBUG nova.compute.manager [req-17b767a0-157f-4747-a1cb-103fa5847d24 req-2f1371aa-1da6-45b9-8067-f76928ef2689 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-vif-unplugged-9f09f897-79b2-4242-ba35-4ab26732b411 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:41:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d-userdata-shm.mount: Deactivated successfully.
Oct 11 09:41:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cdf0e2af4d95bd9ffa8a3118a9662388e3229f9a077ce124d6a46ddcadbaf59-merged.mount: Deactivated successfully.
Oct 11 09:41:35 compute-0 podman[426552]: 2025-10-11 09:41:35.175736925 +0000 UTC m=+0.117988200 container cleanup 2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:41:35 compute-0 systemd[1]: libpod-conmon-2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d.scope: Deactivated successfully.
Oct 11 09:41:35 compute-0 podman[426604]: 2025-10-11 09:41:35.242348753 +0000 UTC m=+0.045788255 container remove 2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 09:41:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.252 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4beaa8b1-02ff-4890-9f47-bc08940ee605]: (4, ('Sat Oct 11 09:41:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 (2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d)\n2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d\nSat Oct 11 09:41:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 (2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d)\n2a2f657487bc3f85e27c94fbbd87712ccd236f179ffec9e579ec1ad63a55a04d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.253 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f05479fb-ccf2-4638-8a13-9e13be0a7063]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.254 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbb66f3b-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:41:35 compute-0 kernel: tapfbb66f3b-00: left promiscuous mode
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.266 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[9e788872-423c-4875-9f6c-ebed03900c9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.291 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c031cc92-5e48-482a-b95e-03b5356055a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.293 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1785ec6b-0867-4d8b-904d-1b0b7cd2b298]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:35 compute-0 ceph-mon[74313]: pgmap v2997: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 11 09:41:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.313 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[22c440b3-4376-4f41-9e95-01aeaf019e10]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748145, 'reachable_time': 42330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 426620, 'error': None, 'target': 'ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.317 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fbb66f3b-0d2f-44df-a1ff-0fbd9c571596 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:41:35 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:41:35.317 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[1af595db-87be-4262-aed3-6c52c90df807]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:41:35 compute-0 systemd[1]: run-netns-ovnmeta\x2dfbb66f3b\x2d0d2f\x2d44df\x2da1ff\x2d0fbd9c571596.mount: Deactivated successfully.
Oct 11 09:41:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2998: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 300 KiB/s rd, 796 KiB/s wr, 70 op/s
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.557 2 INFO nova.virt.libvirt.driver [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Deleting instance files /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f_del
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.558 2 INFO nova.virt.libvirt.driver [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Deletion of /var/lib/nova/instances/cf42187a-059a-49d4-b9c8-96786042977f_del complete
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.603 2 INFO nova.compute.manager [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.604 2 DEBUG oslo.service.loopingcall [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.604 2 DEBUG nova.compute.manager [-] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.604 2 DEBUG nova.network.neutron [-] [instance: cf42187a-059a-49d4-b9c8-96786042977f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.796 2 DEBUG nova.network.neutron [req-7c930276-1c50-4402-b0ca-8f1342bfa047 req-07c0a6d2-3434-4d26-97c4-e603cf7b242b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Updated VIF entry in instance network info cache for port 9f09f897-79b2-4242-ba35-4ab26732b411. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.797 2 DEBUG nova.network.neutron [req-7c930276-1c50-4402-b0ca-8f1342bfa047 req-07c0a6d2-3434-4d26-97c4-e603cf7b242b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Updating instance_info_cache with network_info: [{"id": "9f09f897-79b2-4242-ba35-4ab26732b411", "address": "fa:16:3e:f9:42:d3", "network": {"id": "fbb66f3b-0d2f-44df-a1ff-0fbd9c571596", "bridge": "br-int", "label": "tempest-network-smoke--2076009485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81e7096f23df4e7d8782cf98d09d54e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f09f897-79", "ovs_interfaceid": "9f09f897-79b2-4242-ba35-4ab26732b411", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:41:35 compute-0 nova_compute[260935]: 2025-10-11 09:41:35.847 2 DEBUG oslo_concurrency.lockutils [req-7c930276-1c50-4402-b0ca-8f1342bfa047 req-07c0a6d2-3434-4d26-97c4-e603cf7b242b e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-cf42187a-059a-49d4-b9c8-96786042977f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:41:36 compute-0 nova_compute[260935]: 2025-10-11 09:41:36.111 2 DEBUG nova.network.neutron [-] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:41:36 compute-0 nova_compute[260935]: 2025-10-11 09:41:36.131 2 INFO nova.compute.manager [-] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Took 0.53 seconds to deallocate network for instance.
Oct 11 09:41:36 compute-0 nova_compute[260935]: 2025-10-11 09:41:36.175 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:36 compute-0 nova_compute[260935]: 2025-10-11 09:41:36.175 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:36 compute-0 nova_compute[260935]: 2025-10-11 09:41:36.287 2 DEBUG oslo_concurrency.processutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:41:36 compute-0 nova_compute[260935]: 2025-10-11 09:41:36.714 2 DEBUG nova.compute.manager [req-fb518a76-f90c-4ad5-9804-bcb7092300b3 req-963bdee2-04e9-4f60-ae7a-bfdedc58ae27 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-vif-deleted-9f09f897-79b2-4242-ba35-4ab26732b411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:41:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:41:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3224447743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:41:36 compute-0 nova_compute[260935]: 2025-10-11 09:41:36.774 2 DEBUG oslo_concurrency.processutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:41:36 compute-0 nova_compute[260935]: 2025-10-11 09:41:36.781 2 DEBUG nova.compute.provider_tree [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:41:36 compute-0 nova_compute[260935]: 2025-10-11 09:41:36.797 2 DEBUG nova.scheduler.client.report [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:41:36 compute-0 nova_compute[260935]: 2025-10-11 09:41:36.823 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:36 compute-0 nova_compute[260935]: 2025-10-11 09:41:36.864 2 INFO nova.scheduler.client.report [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Deleted allocations for instance cf42187a-059a-49d4-b9c8-96786042977f
Oct 11 09:41:36 compute-0 nova_compute[260935]: 2025-10-11 09:41:36.938 2 DEBUG oslo_concurrency.lockutils [None req-4861d1f6-0b93-49ce-a26a-2c004be74ac4 489c4d0457354f4684f8b9e53261224f 81e7096f23df4e7d8782cf98d09d54e9 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:37 compute-0 nova_compute[260935]: 2025-10-11 09:41:37.241 2 DEBUG nova.compute.manager [req-d115e42f-d3b3-4737-a6e6-8a8662152a15 req-cce8a67b-6cb2-44f2-abab-cb4bc198ce75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received event network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:41:37 compute-0 nova_compute[260935]: 2025-10-11 09:41:37.241 2 DEBUG oslo_concurrency.lockutils [req-d115e42f-d3b3-4737-a6e6-8a8662152a15 req-cce8a67b-6cb2-44f2-abab-cb4bc198ce75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "cf42187a-059a-49d4-b9c8-96786042977f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:37 compute-0 nova_compute[260935]: 2025-10-11 09:41:37.242 2 DEBUG oslo_concurrency.lockutils [req-d115e42f-d3b3-4737-a6e6-8a8662152a15 req-cce8a67b-6cb2-44f2-abab-cb4bc198ce75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:37 compute-0 nova_compute[260935]: 2025-10-11 09:41:37.242 2 DEBUG oslo_concurrency.lockutils [req-d115e42f-d3b3-4737-a6e6-8a8662152a15 req-cce8a67b-6cb2-44f2-abab-cb4bc198ce75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "cf42187a-059a-49d4-b9c8-96786042977f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:37 compute-0 nova_compute[260935]: 2025-10-11 09:41:37.242 2 DEBUG nova.compute.manager [req-d115e42f-d3b3-4737-a6e6-8a8662152a15 req-cce8a67b-6cb2-44f2-abab-cb4bc198ce75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] No waiting events found dispatching network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:41:37 compute-0 nova_compute[260935]: 2025-10-11 09:41:37.243 2 WARNING nova.compute.manager [req-d115e42f-d3b3-4737-a6e6-8a8662152a15 req-cce8a67b-6cb2-44f2-abab-cb4bc198ce75 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Received unexpected event network-vif-plugged-9f09f897-79b2-4242-ba35-4ab26732b411 for instance with vm_state deleted and task_state None.
Oct 11 09:41:37 compute-0 ceph-mon[74313]: pgmap v2998: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 300 KiB/s rd, 796 KiB/s wr, 70 op/s
Oct 11 09:41:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3224447743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:41:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v2999: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 300 KiB/s rd, 796 KiB/s wr, 70 op/s
Oct 11 09:41:37 compute-0 podman[426644]: 2025-10-11 09:41:37.776703465 +0000 UTC m=+0.081865228 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 11 09:41:39 compute-0 ceph-mon[74313]: pgmap v2999: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 300 KiB/s rd, 796 KiB/s wr, 70 op/s
Oct 11 09:41:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3000: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 797 KiB/s wr, 97 op/s
Oct 11 09:41:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:41:39 compute-0 ovn_controller[152945]: 2025-10-11T09:41:39Z|01678|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:41:39 compute-0 ovn_controller[152945]: 2025-10-11T09:41:39Z|01679|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:41:39 compute-0 nova_compute[260935]: 2025-10-11 09:41:39.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:39 compute-0 nova_compute[260935]: 2025-10-11 09:41:39.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:39 compute-0 ovn_controller[152945]: 2025-10-11T09:41:39Z|01680|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:41:39 compute-0 ovn_controller[152945]: 2025-10-11T09:41:39Z|01681|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:41:39 compute-0 nova_compute[260935]: 2025-10-11 09:41:39.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:40 compute-0 nova_compute[260935]: 2025-10-11 09:41:40.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:41 compute-0 ceph-mon[74313]: pgmap v3000: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 797 KiB/s wr, 97 op/s
Oct 11 09:41:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3001: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 56 op/s
Oct 11 09:41:42 compute-0 ceph-mon[74313]: pgmap v3001: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 56 op/s
Oct 11 09:41:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3002: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 56 op/s
Oct 11 09:41:44 compute-0 ceph-mon[74313]: pgmap v3002: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 56 op/s
Oct 11 09:41:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:41:44 compute-0 podman[426666]: 2025-10-11 09:41:44.805672549 +0000 UTC m=+0.104544113 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 11 09:41:44 compute-0 nova_compute[260935]: 2025-10-11 09:41:44.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:45 compute-0 nova_compute[260935]: 2025-10-11 09:41:45.070 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175690.068873, 7478f2ca-99cf-4c64-917f-bcfee233cf4e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:41:45 compute-0 nova_compute[260935]: 2025-10-11 09:41:45.071 2 INFO nova.compute.manager [-] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] VM Stopped (Lifecycle Event)
Oct 11 09:41:45 compute-0 nova_compute[260935]: 2025-10-11 09:41:45.111 2 DEBUG nova.compute.manager [None req-4a761097-9845-4f75-8a41-82a9d77f8230 - - - - - -] [instance: 7478f2ca-99cf-4c64-917f-bcfee233cf4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:41:45 compute-0 nova_compute[260935]: 2025-10-11 09:41:45.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:45 compute-0 sshd-session[426664]: Invalid user admini from 155.4.244.179 port 25229
Oct 11 09:41:45 compute-0 sshd-session[426664]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:41:45 compute-0 sshd-session[426664]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:41:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3003: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:41:46 compute-0 ceph-mon[74313]: pgmap v3003: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:41:47 compute-0 sshd-session[426664]: Failed password for invalid user admini from 155.4.244.179 port 25229 ssh2
Oct 11 09:41:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3004: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:41:48 compute-0 ceph-mon[74313]: pgmap v3004: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:41:48 compute-0 nova_compute[260935]: 2025-10-11 09:41:48.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:41:48 compute-0 nova_compute[260935]: 2025-10-11 09:41:48.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:41:48 compute-0 podman[426686]: 2025-10-11 09:41:48.772994872 +0000 UTC m=+0.073615146 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 11 09:41:48 compute-0 podman[426687]: 2025-10-11 09:41:48.80856434 +0000 UTC m=+0.102456975 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller)
Oct 11 09:41:48 compute-0 sshd-session[426664]: Received disconnect from 155.4.244.179 port 25229:11: Bye Bye [preauth]
Oct 11 09:41:48 compute-0 sshd-session[426664]: Disconnected from invalid user admini 155.4.244.179 port 25229 [preauth]
Oct 11 09:41:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3005: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:41:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:41:49 compute-0 nova_compute[260935]: 2025-10-11 09:41:49.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:50 compute-0 nova_compute[260935]: 2025-10-11 09:41:50.038 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175695.0369942, cf42187a-059a-49d4-b9c8-96786042977f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:41:50 compute-0 nova_compute[260935]: 2025-10-11 09:41:50.039 2 INFO nova.compute.manager [-] [instance: cf42187a-059a-49d4-b9c8-96786042977f] VM Stopped (Lifecycle Event)
Oct 11 09:41:50 compute-0 nova_compute[260935]: 2025-10-11 09:41:50.057 2 DEBUG nova.compute.manager [None req-7c12ff4f-1894-49d1-8823-46df4e261789 - - - - - -] [instance: cf42187a-059a-49d4-b9c8-96786042977f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:41:50 compute-0 nova_compute[260935]: 2025-10-11 09:41:50.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:50 compute-0 ceph-mon[74313]: pgmap v3005: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 11 09:41:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3006: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:41:51 compute-0 nova_compute[260935]: 2025-10-11 09:41:51.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:41:51 compute-0 nova_compute[260935]: 2025-10-11 09:41:51.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:41:51 compute-0 nova_compute[260935]: 2025-10-11 09:41:51.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 09:41:51 compute-0 nova_compute[260935]: 2025-10-11 09:41:51.875 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:41:51 compute-0 nova_compute[260935]: 2025-10-11 09:41:51.875 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:41:51 compute-0 nova_compute[260935]: 2025-10-11 09:41:51.876 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:41:51 compute-0 nova_compute[260935]: 2025-10-11 09:41:51.876 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:41:52 compute-0 ceph-mon[74313]: pgmap v3006: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:41:53 compute-0 nova_compute[260935]: 2025-10-11 09:41:53.157 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:41:53 compute-0 nova_compute[260935]: 2025-10-11 09:41:53.176 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:41:53 compute-0 nova_compute[260935]: 2025-10-11 09:41:53.177 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:41:53 compute-0 nova_compute[260935]: 2025-10-11 09:41:53.178 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:41:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3007: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:41:53 compute-0 nova_compute[260935]: 2025-10-11 09:41:53.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.044 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.045 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.075 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.196 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.197 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.207 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.207 2 INFO nova.compute.claims [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.391 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:41:54 compute-0 ceph-mon[74313]: pgmap v3007: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:41:54 compute-0 sudo[426733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:41:54 compute-0 sudo[426733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:41:54 compute-0 sudo[426733]: pam_unix(sudo:session): session closed for user root
Oct 11 09:41:54 compute-0 sudo[426759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:41:54 compute-0 sudo[426759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:41:54 compute-0 sudo[426759]: pam_unix(sudo:session): session closed for user root
Oct 11 09:41:54 compute-0 sudo[426784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:41:54 compute-0 sudo[426784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:41:54 compute-0 sudo[426784]: pam_unix(sudo:session): session closed for user root
Oct 11 09:41:54 compute-0 sudo[426828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:41:54 compute-0 sudo[426828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:41:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.731 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:41:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:41:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:41:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:41:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:41:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:41:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:41:54 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3688408711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.898 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.909 2 DEBUG nova.compute.provider_tree [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.939 2 DEBUG nova.scheduler.client.report [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.966 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.967 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.975 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.975 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.976 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:41:54 compute-0 nova_compute[260935]: 2025-10-11 09:41:54.976 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:41:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:41:55
Oct 11 09:41:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:41:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:41:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'default.rgw.meta', '.rgw.root']
Oct 11 09:41:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:41:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:41:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 44K writes, 177K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 44K writes, 16K syncs, 2.75 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2843 writes, 12K keys, 2843 commit groups, 1.0 writes per commit group, ingest: 13.29 MB, 0.02 MB/s
                                           Interval WAL: 2843 writes, 1055 syncs, 2.69 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.087 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.088 2 DEBUG nova.network.neutron [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.124 2 INFO nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.151 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.250 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.251 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.252 2 INFO nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Creating image(s)
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.288 2 DEBUG nova.storage.rbd_utils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] rbd image 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:41:55 compute-0 sudo[426828]: pam_unix(sudo:session): session closed for user root
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.327 2 DEBUG nova.storage.rbd_utils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] rbd image 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.365 2 DEBUG nova.storage.rbd_utils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] rbd image 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.370 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:41:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3008: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:41:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:41:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/656746381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:41:55 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3688408711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:41:55 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/656746381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.456 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.463 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.464 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.465 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.466 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:55 compute-0 sudo[426963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:41:55 compute-0 sudo[426963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:41:55 compute-0 sudo[426963]: pam_unix(sudo:session): session closed for user root
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.505 2 DEBUG nova.storage.rbd_utils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] rbd image 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.516 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:41:55 compute-0 sudo[427008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:41:55 compute-0 sudo[427008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:41:55 compute-0 sudo[427008]: pam_unix(sudo:session): session closed for user root
Oct 11 09:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:41:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:41:55 compute-0 sshd-session[426869]: Invalid user mysql from 165.232.82.252 port 39696
Oct 11 09:41:55 compute-0 sudo[427036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:41:55 compute-0 sudo[427036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.663 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.664 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.665 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:41:55 compute-0 sudo[427036]: pam_unix(sudo:session): session closed for user root
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.671 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.671 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.678 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.679 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.772 2 DEBUG nova.policy [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a45c6166242048a983479be416f90c52', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bd3d8446de604a3682ade55965824985', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:41:55 compute-0 sshd-session[426869]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:41:55 compute-0 sshd-session[426869]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:41:55 compute-0 sudo[427080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 11 09:41:55 compute-0 sudo[427080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.865 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:41:55 compute-0 nova_compute[260935]: 2025-10-11 09:41:55.944 2 DEBUG nova.storage.rbd_utils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] resizing rbd image 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.053 2 DEBUG nova.objects.instance [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'migration_context' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:41:56 compute-0 sudo[427080]: pam_unix(sudo:session): session closed for user root
Oct 11 09:41:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:41:56 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:41:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:41:56 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:41:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:41:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:41:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:41:56 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:41:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:41:56 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:41:56 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 36158715-1009-4fab-8772-b9abbaa429a2 does not exist
Oct 11 09:41:56 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 39659549-62df-4ccf-a1fa-ac4c1d3e3cd0 does not exist
Oct 11 09:41:56 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 022aa7c4-6c68-4680-a7b3-e80e7f9fcfca does not exist
Oct 11 09:41:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:41:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:41:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:41:56 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:41:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:41:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:41:56 compute-0 sudo[427195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:41:56 compute-0 sudo[427195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:41:56 compute-0 sudo[427195]: pam_unix(sudo:session): session closed for user root
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.213 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.213 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Ensure instance console log exists: /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.213 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.214 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.214 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:56 compute-0 sudo[427220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:41:56 compute-0 sudo[427220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:41:56 compute-0 sudo[427220]: pam_unix(sudo:session): session closed for user root
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.282 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.283 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2830MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.283 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.283 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.351 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.351 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.351 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.351 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.351 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.351 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:41:56 compute-0 sudo[427245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:41:56 compute-0 sudo[427245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:41:56 compute-0 sudo[427245]: pam_unix(sudo:session): session closed for user root
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.467 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:41:56 compute-0 sudo[427270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:41:56 compute-0 sudo[427270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:41:56 compute-0 nova_compute[260935]: 2025-10-11 09:41:56.831 2 DEBUG nova.network.neutron [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Successfully created port: a65667d8-0cc9-498a-9415-8fcecf5881f3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:41:56 compute-0 podman[427354]: 2025-10-11 09:41:56.965139455 +0000 UTC m=+0.071919738 container create db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:41:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:41:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3831463335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:41:57 compute-0 nova_compute[260935]: 2025-10-11 09:41:57.007 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:41:57 compute-0 nova_compute[260935]: 2025-10-11 09:41:57.016 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:41:57 compute-0 podman[427354]: 2025-10-11 09:41:56.934937038 +0000 UTC m=+0.041717351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:41:57 compute-0 systemd[1]: Started libpod-conmon-db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1.scope.
Oct 11 09:41:57 compute-0 nova_compute[260935]: 2025-10-11 09:41:57.042 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:41:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:41:57 compute-0 ceph-mon[74313]: pgmap v3008: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:41:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:41:57 compute-0 nova_compute[260935]: 2025-10-11 09:41:57.070 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:41:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:41:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:41:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:41:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:41:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:41:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:41:57 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:41:57 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3831463335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:41:57 compute-0 nova_compute[260935]: 2025-10-11 09:41:57.070 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:57 compute-0 podman[427354]: 2025-10-11 09:41:57.078539616 +0000 UTC m=+0.185319919 container init db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:41:57 compute-0 podman[427354]: 2025-10-11 09:41:57.088984449 +0000 UTC m=+0.195764752 container start db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 09:41:57 compute-0 objective_brahmagupta[427372]: 167 167
Oct 11 09:41:57 compute-0 systemd[1]: libpod-db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1.scope: Deactivated successfully.
Oct 11 09:41:57 compute-0 conmon[427372]: conmon db027bb006cee476d52a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1.scope/container/memory.events
Oct 11 09:41:57 compute-0 podman[427354]: 2025-10-11 09:41:57.095158812 +0000 UTC m=+0.201939065 container attach db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 09:41:57 compute-0 podman[427354]: 2025-10-11 09:41:57.097500647 +0000 UTC m=+0.204280900 container died db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 09:41:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4b78a022b04bb2f2ce4f335ae388a8cbbb5e91d5908112b9d8c6e5b5850aac8-merged.mount: Deactivated successfully.
Oct 11 09:41:57 compute-0 podman[427354]: 2025-10-11 09:41:57.140142073 +0000 UTC m=+0.246922326 container remove db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:41:57 compute-0 systemd[1]: libpod-conmon-db027bb006cee476d52a108cdd113d3310894c25627d5e8a2504776ece7bfbc1.scope: Deactivated successfully.
Oct 11 09:41:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3009: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:41:57 compute-0 podman[427398]: 2025-10-11 09:41:57.389101176 +0000 UTC m=+0.070710714 container create 5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 09:41:57 compute-0 systemd[1]: Started libpod-conmon-5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f.scope.
Oct 11 09:41:57 compute-0 podman[427398]: 2025-10-11 09:41:57.358983161 +0000 UTC m=+0.040592769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:41:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6996d0a4690b7c0704d0a231afa8510b0a46d2819b5629b647a01452244cc654/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6996d0a4690b7c0704d0a231afa8510b0a46d2819b5629b647a01452244cc654/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6996d0a4690b7c0704d0a231afa8510b0a46d2819b5629b647a01452244cc654/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6996d0a4690b7c0704d0a231afa8510b0a46d2819b5629b647a01452244cc654/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6996d0a4690b7c0704d0a231afa8510b0a46d2819b5629b647a01452244cc654/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:41:57 compute-0 nova_compute[260935]: 2025-10-11 09:41:57.496 2 DEBUG nova.network.neutron [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Successfully updated port: a65667d8-0cc9-498a-9415-8fcecf5881f3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:41:57 compute-0 podman[427398]: 2025-10-11 09:41:57.514077461 +0000 UTC m=+0.195686969 container init 5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 09:41:57 compute-0 nova_compute[260935]: 2025-10-11 09:41:57.517 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:41:57 compute-0 nova_compute[260935]: 2025-10-11 09:41:57.518 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquired lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:41:57 compute-0 nova_compute[260935]: 2025-10-11 09:41:57.518 2 DEBUG nova.network.neutron [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:41:57 compute-0 podman[427398]: 2025-10-11 09:41:57.521847529 +0000 UTC m=+0.203457067 container start 5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 09:41:57 compute-0 podman[427398]: 2025-10-11 09:41:57.52580308 +0000 UTC m=+0.207412618 container attach 5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 09:41:57 compute-0 nova_compute[260935]: 2025-10-11 09:41:57.600 2 DEBUG nova.compute.manager [req-52cef021-412f-4e8e-af9d-73722e4da40d req-02608588-cc5b-4682-941f-08da95deffac e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-changed-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:41:57 compute-0 nova_compute[260935]: 2025-10-11 09:41:57.600 2 DEBUG nova.compute.manager [req-52cef021-412f-4e8e-af9d-73722e4da40d req-02608588-cc5b-4682-941f-08da95deffac e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Refreshing instance network info cache due to event network-changed-a65667d8-0cc9-498a-9415-8fcecf5881f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:41:57 compute-0 nova_compute[260935]: 2025-10-11 09:41:57.601 2 DEBUG oslo_concurrency.lockutils [req-52cef021-412f-4e8e-af9d-73722e4da40d req-02608588-cc5b-4682-941f-08da95deffac e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:41:57 compute-0 sshd-session[426869]: Failed password for invalid user mysql from 165.232.82.252 port 39696 ssh2
Oct 11 09:41:57 compute-0 nova_compute[260935]: 2025-10-11 09:41:57.726 2 DEBUG nova.network.neutron [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.292 2 DEBUG nova.network.neutron [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Updating instance_info_cache with network_info: [{"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.311 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Releasing lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.311 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance network_info: |[{"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.312 2 DEBUG oslo_concurrency.lockutils [req-52cef021-412f-4e8e-af9d-73722e4da40d req-02608588-cc5b-4682-941f-08da95deffac e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.313 2 DEBUG nova.network.neutron [req-52cef021-412f-4e8e-af9d-73722e4da40d req-02608588-cc5b-4682-941f-08da95deffac e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Refreshing network info cache for port a65667d8-0cc9-498a-9415-8fcecf5881f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.317 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Start _get_guest_xml network_info=[{"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.324 2 WARNING nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.334 2 DEBUG nova.virt.libvirt.host [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.335 2 DEBUG nova.virt.libvirt.host [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.339 2 DEBUG nova.virt.libvirt.host [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.340 2 DEBUG nova.virt.libvirt.host [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.340 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.341 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.342 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.342 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.342 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.343 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.343 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.344 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.344 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.344 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.345 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.345 2 DEBUG nova.virt.hardware [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.351 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:41:58 compute-0 angry_shockley[427415]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:41:58 compute-0 angry_shockley[427415]: --> relative data size: 1.0
Oct 11 09:41:58 compute-0 angry_shockley[427415]: --> All data devices are unavailable
Oct 11 09:41:58 compute-0 systemd[1]: libpod-5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f.scope: Deactivated successfully.
Oct 11 09:41:58 compute-0 systemd[1]: libpod-5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f.scope: Consumed 1.105s CPU time.
Oct 11 09:41:58 compute-0 podman[427398]: 2025-10-11 09:41:58.702986287 +0000 UTC m=+1.384595785 container died 5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 09:41:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-6996d0a4690b7c0704d0a231afa8510b0a46d2819b5629b647a01452244cc654-merged.mount: Deactivated successfully.
Oct 11 09:41:58 compute-0 podman[427398]: 2025-10-11 09:41:58.780630094 +0000 UTC m=+1.462239622 container remove 5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:41:58 compute-0 systemd[1]: libpod-conmon-5442941559e201f06ba23932e50989d0e04c1d52e54721a2d0ed633d7160166f.scope: Deactivated successfully.
Oct 11 09:41:58 compute-0 sudo[427270]: pam_unix(sudo:session): session closed for user root
Oct 11 09:41:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:41:58 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3743662840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.842 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:41:58 compute-0 sshd-session[426869]: Connection closed by invalid user mysql 165.232.82.252 port 39696 [preauth]
Oct 11 09:41:58 compute-0 sudo[427476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:41:58 compute-0 sudo[427476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.877 2 DEBUG nova.storage.rbd_utils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] rbd image 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:41:58 compute-0 sudo[427476]: pam_unix(sudo:session): session closed for user root
Oct 11 09:41:58 compute-0 nova_compute[260935]: 2025-10-11 09:41:58.885 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:41:58 compute-0 sudo[427521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:41:58 compute-0 sudo[427521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:41:58 compute-0 sudo[427521]: pam_unix(sudo:session): session closed for user root
Oct 11 09:41:59 compute-0 sudo[427547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:41:59 compute-0 sudo[427547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:41:59 compute-0 sudo[427547]: pam_unix(sudo:session): session closed for user root
Oct 11 09:41:59 compute-0 ceph-mon[74313]: pgmap v3009: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:41:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3743662840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:41:59 compute-0 sudo[427591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:41:59 compute-0 sudo[427591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:41:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:41:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/796290833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.358 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.360 2 DEBUG nova.virt.libvirt.vif [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2061155047',display_name='tempest-TestServerAdvancedOps-server-2061155047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2061155047',id=148,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bd3d8446de604a3682ade55965824985',ramdisk_id='',reservation_id='r-q51mr6v9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-940341189',owner_user_name='tempest-TestServerAdvancedOps-940341189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:41:55Z,user_data=None,user_id='a45c6166242048a983479be416f90c52',uuid=4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.360 2 DEBUG nova.network.os_vif_util [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converting VIF {"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.361 2 DEBUG nova.network.os_vif_util [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.363 2 DEBUG nova.objects.instance [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:41:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3010: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.387 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:41:59 compute-0 nova_compute[260935]:   <uuid>4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1</uuid>
Oct 11 09:41:59 compute-0 nova_compute[260935]:   <name>instance-00000094</name>
Oct 11 09:41:59 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:41:59 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:41:59 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <nova:name>tempest-TestServerAdvancedOps-server-2061155047</nova:name>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:41:58</nova:creationTime>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:41:59 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:41:59 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:41:59 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:41:59 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:41:59 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:41:59 compute-0 nova_compute[260935]:         <nova:user uuid="a45c6166242048a983479be416f90c52">tempest-TestServerAdvancedOps-940341189-project-member</nova:user>
Oct 11 09:41:59 compute-0 nova_compute[260935]:         <nova:project uuid="bd3d8446de604a3682ade55965824985">tempest-TestServerAdvancedOps-940341189</nova:project>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:41:59 compute-0 nova_compute[260935]:         <nova:port uuid="a65667d8-0cc9-498a-9415-8fcecf5881f3">
Oct 11 09:41:59 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:41:59 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:41:59 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <system>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <entry name="serial">4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1</entry>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <entry name="uuid">4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1</entry>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     </system>
Oct 11 09:41:59 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:41:59 compute-0 nova_compute[260935]:   <os>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:   </os>
Oct 11 09:41:59 compute-0 nova_compute[260935]:   <features>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:   </features>
Oct 11 09:41:59 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:41:59 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:41:59 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk">
Oct 11 09:41:59 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       </source>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:41:59 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk.config">
Oct 11 09:41:59 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       </source>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:41:59 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:38:9b:b3"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <target dev="tapa65667d8-0c"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1/console.log" append="off"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <video>
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     </video>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:41:59 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:41:59 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:41:59 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:41:59 compute-0 nova_compute[260935]: </domain>
Oct 11 09:41:59 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.388 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Preparing to wait for external event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.389 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.389 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.389 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.390 2 DEBUG nova.virt.libvirt.vif [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2061155047',display_name='tempest-TestServerAdvancedOps-server-2061155047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2061155047',id=148,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bd3d8446de604a3682ade55965824985',ramdisk_id='',reservation_id='r-q51mr6v9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-940341189',owner_user_name='tempest-TestServerAdvancedOps-940341189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:41:55Z,user_data=None,user_id='a45c6166242048a983479be416f90c52',uuid=4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.390 2 DEBUG nova.network.os_vif_util [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converting VIF {"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.391 2 DEBUG nova.network.os_vif_util [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.391 2 DEBUG os_vif [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.396 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa65667d8-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.396 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa65667d8-0c, col_values=(('external_ids', {'iface-id': 'a65667d8-0cc9-498a-9415-8fcecf5881f3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:9b:b3', 'vm-uuid': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:59 compute-0 NetworkManager[44960]: <info>  [1760175719.4438] manager: (tapa65667d8-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/637)
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.450 2 INFO os_vif [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c')
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.505 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.505 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.505 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] No VIF found with MAC fa:16:3e:38:9b:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.506 2 INFO nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Using config drive
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.527 2 DEBUG nova.storage.rbd_utils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] rbd image 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:41:59 compute-0 podman[427659]: 2025-10-11 09:41:59.548526082 +0000 UTC m=+0.044066317 container create 3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 09:41:59 compute-0 systemd[1]: Started libpod-conmon-3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f.scope.
Oct 11 09:41:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:41:59 compute-0 podman[427659]: 2025-10-11 09:41:59.532949595 +0000 UTC m=+0.028489810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:41:59 compute-0 podman[427659]: 2025-10-11 09:41:59.630525472 +0000 UTC m=+0.126065707 container init 3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:41:59 compute-0 podman[427659]: 2025-10-11 09:41:59.642305322 +0000 UTC m=+0.137845577 container start 3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 09:41:59 compute-0 podman[427659]: 2025-10-11 09:41:59.647249811 +0000 UTC m=+0.142790026 container attach 3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 09:41:59 compute-0 wizardly_darwin[427693]: 167 167
Oct 11 09:41:59 compute-0 systemd[1]: libpod-3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f.scope: Deactivated successfully.
Oct 11 09:41:59 compute-0 podman[427659]: 2025-10-11 09:41:59.650144912 +0000 UTC m=+0.145685127 container died 3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 09:41:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-51405a9e87857bb08b086a81544336e8b453cc566823a7829a91c02385313093-merged.mount: Deactivated successfully.
Oct 11 09:41:59 compute-0 podman[427659]: 2025-10-11 09:41:59.69785269 +0000 UTC m=+0.193392895 container remove 3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_darwin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 09:41:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:41:59 compute-0 systemd[1]: libpod-conmon-3b201b19e598de738308caa3100af28fbaaf22c8b759a401e97a0dd9fb08918f.scope: Deactivated successfully.
Oct 11 09:41:59 compute-0 podman[427717]: 2025-10-11 09:41:59.920177466 +0000 UTC m=+0.056854166 container create a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:41:59 compute-0 systemd[1]: Started libpod-conmon-a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0.scope.
Oct 11 09:41:59 compute-0 nova_compute[260935]: 2025-10-11 09:41:59.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:41:59 compute-0 podman[427717]: 2025-10-11 09:41:59.899698432 +0000 UTC m=+0.036375122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:41:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/940d7b53bae276155cc65e78b4573b9f71751355c9d25fe194971ea0ca47a55c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/940d7b53bae276155cc65e78b4573b9f71751355c9d25fe194971ea0ca47a55c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/940d7b53bae276155cc65e78b4573b9f71751355c9d25fe194971ea0ca47a55c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/940d7b53bae276155cc65e78b4573b9f71751355c9d25fe194971ea0ca47a55c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:42:00 compute-0 podman[427717]: 2025-10-11 09:42:00.025208242 +0000 UTC m=+0.161884932 container init a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 09:42:00 compute-0 podman[427717]: 2025-10-11 09:42:00.04331591 +0000 UTC m=+0.179992620 container start a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kowalevski, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:42:00 compute-0 podman[427717]: 2025-10-11 09:42:00.047045914 +0000 UTC m=+0.183722614 container attach a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:42:00 compute-0 nova_compute[260935]: 2025-10-11 09:42:00.073 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:42:00 compute-0 nova_compute[260935]: 2025-10-11 09:42:00.073 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:42:00 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/796290833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:42:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:42:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 175K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.71 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3122 writes, 13K keys, 3122 commit groups, 1.0 writes per commit group, ingest: 16.94 MB, 0.03 MB/s
                                           Interval WAL: 3122 writes, 1172 syncs, 2.66 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 09:42:00 compute-0 nova_compute[260935]: 2025-10-11 09:42:00.730 2 INFO nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Creating config drive at /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1/disk.config
Oct 11 09:42:00 compute-0 nova_compute[260935]: 2025-10-11 09:42:00.737 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxdlik2tv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]: {
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:     "0": [
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:         {
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "devices": [
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "/dev/loop3"
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             ],
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "lv_name": "ceph_lv0",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "lv_size": "21470642176",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "name": "ceph_lv0",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "tags": {
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.cluster_name": "ceph",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.crush_device_class": "",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.encrypted": "0",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.osd_id": "0",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.type": "block",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.vdo": "0"
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             },
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "type": "block",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "vg_name": "ceph_vg0"
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:         }
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:     ],
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:     "1": [
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:         {
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "devices": [
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "/dev/loop4"
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             ],
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "lv_name": "ceph_lv1",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "lv_size": "21470642176",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "name": "ceph_lv1",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "tags": {
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.cluster_name": "ceph",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.crush_device_class": "",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.encrypted": "0",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.osd_id": "1",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.type": "block",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.vdo": "0"
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             },
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "type": "block",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "vg_name": "ceph_vg1"
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:         }
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:     ],
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:     "2": [
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:         {
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "devices": [
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "/dev/loop5"
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             ],
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "lv_name": "ceph_lv2",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "lv_size": "21470642176",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "name": "ceph_lv2",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "tags": {
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.cluster_name": "ceph",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.crush_device_class": "",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.encrypted": "0",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.osd_id": "2",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.type": "block",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:                 "ceph.vdo": "0"
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             },
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "type": "block",
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:             "vg_name": "ceph_vg2"
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:         }
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]:     ]
Oct 11 09:42:00 compute-0 nifty_kowalevski[427734]: }
Oct 11 09:42:00 compute-0 nova_compute[260935]: 2025-10-11 09:42:00.794 2 DEBUG nova.network.neutron [req-52cef021-412f-4e8e-af9d-73722e4da40d req-02608588-cc5b-4682-941f-08da95deffac e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Updated VIF entry in instance network info cache for port a65667d8-0cc9-498a-9415-8fcecf5881f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:42:00 compute-0 nova_compute[260935]: 2025-10-11 09:42:00.795 2 DEBUG nova.network.neutron [req-52cef021-412f-4e8e-af9d-73722e4da40d req-02608588-cc5b-4682-941f-08da95deffac e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Updating instance_info_cache with network_info: [{"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:42:00 compute-0 nova_compute[260935]: 2025-10-11 09:42:00.812 2 DEBUG oslo_concurrency.lockutils [req-52cef021-412f-4e8e-af9d-73722e4da40d req-02608588-cc5b-4682-941f-08da95deffac e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:42:00 compute-0 systemd[1]: libpod-a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0.scope: Deactivated successfully.
Oct 11 09:42:00 compute-0 podman[427717]: 2025-10-11 09:42:00.822474223 +0000 UTC m=+0.959150923 container died a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:42:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-940d7b53bae276155cc65e78b4573b9f71751355c9d25fe194971ea0ca47a55c-merged.mount: Deactivated successfully.
Oct 11 09:42:00 compute-0 podman[427717]: 2025-10-11 09:42:00.890944804 +0000 UTC m=+1.027621484 container remove a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:42:00 compute-0 nova_compute[260935]: 2025-10-11 09:42:00.893 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxdlik2tv" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:42:00 compute-0 systemd[1]: libpod-conmon-a89a47fd8d8d9eab53365029aa78cb0b1a8ac9f2e04e5ae783d8c295322a1df0.scope: Deactivated successfully.
Oct 11 09:42:00 compute-0 sudo[427591]: pam_unix(sudo:session): session closed for user root
Oct 11 09:42:00 compute-0 nova_compute[260935]: 2025-10-11 09:42:00.934 2 DEBUG nova.storage.rbd_utils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] rbd image 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:42:00 compute-0 nova_compute[260935]: 2025-10-11 09:42:00.940 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1/disk.config 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:42:01 compute-0 sudo[427779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:42:01 compute-0 sudo[427779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:42:01 compute-0 sudo[427779]: pam_unix(sudo:session): session closed for user root
Oct 11 09:42:01 compute-0 sudo[427812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:42:01 compute-0 sudo[427812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:42:01 compute-0 sudo[427812]: pam_unix(sudo:session): session closed for user root
Oct 11 09:42:01 compute-0 ceph-mon[74313]: pgmap v3010: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:42:01 compute-0 nova_compute[260935]: 2025-10-11 09:42:01.121 2 DEBUG oslo_concurrency.processutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1/disk.config 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:42:01 compute-0 nova_compute[260935]: 2025-10-11 09:42:01.124 2 INFO nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Deleting local config drive /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1/disk.config because it was imported into RBD.
Oct 11 09:42:01 compute-0 kernel: tapa65667d8-0c: entered promiscuous mode
Oct 11 09:42:01 compute-0 NetworkManager[44960]: <info>  [1760175721.1818] manager: (tapa65667d8-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/638)
Oct 11 09:42:01 compute-0 sudo[427848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:42:01 compute-0 ovn_controller[152945]: 2025-10-11T09:42:01Z|01682|binding|INFO|Claiming lport a65667d8-0cc9-498a-9415-8fcecf5881f3 for this chassis.
Oct 11 09:42:01 compute-0 ovn_controller[152945]: 2025-10-11T09:42:01Z|01683|binding|INFO|a65667d8-0cc9-498a-9415-8fcecf5881f3: Claiming fa:16:3e:38:9b:b3 10.100.0.10
Oct 11 09:42:01 compute-0 nova_compute[260935]: 2025-10-11 09:42:01.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:01.191 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:9b:b3 10.100.0.10'], port_security=['fa:16:3e:38:9b:b3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f1968f8-d23e-46ea-bffa-f1b3561e0cf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bd3d8446de604a3682ade55965824985', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0b05ff59-8aaa-4951-bb03-8ac33de90ac6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61cb3534-5fbd-4916-a755-03a295e9752c, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a65667d8-0cc9-498a-9415-8fcecf5881f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:42:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:01.193 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a65667d8-0cc9-498a-9415-8fcecf5881f3 in datapath 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 bound to our chassis
Oct 11 09:42:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:01.194 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 11 09:42:01 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:01.195 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ef577416-86d1-42e5-a1b6-66921a9df3a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:42:01 compute-0 sudo[427848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:42:01 compute-0 sudo[427848]: pam_unix(sudo:session): session closed for user root
Oct 11 09:42:01 compute-0 systemd-udevd[427886]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:42:01 compute-0 systemd-machined[215705]: New machine qemu-172-instance-00000094.
Oct 11 09:42:01 compute-0 NetworkManager[44960]: <info>  [1760175721.2386] device (tapa65667d8-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:42:01 compute-0 NetworkManager[44960]: <info>  [1760175721.2397] device (tapa65667d8-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:42:01 compute-0 ovn_controller[152945]: 2025-10-11T09:42:01Z|01684|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 ovn-installed in OVS
Oct 11 09:42:01 compute-0 ovn_controller[152945]: 2025-10-11T09:42:01Z|01685|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 up in Southbound
Oct 11 09:42:01 compute-0 nova_compute[260935]: 2025-10-11 09:42:01.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:01 compute-0 nova_compute[260935]: 2025-10-11 09:42:01.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:01 compute-0 systemd[1]: Started Virtual Machine qemu-172-instance-00000094.
Oct 11 09:42:01 compute-0 sudo[427885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:42:01 compute-0 sudo[427885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:42:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3011: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:42:01 compute-0 podman[427958]: 2025-10-11 09:42:01.662602567 +0000 UTC m=+0.057086272 container create 2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 09:42:01 compute-0 systemd[1]: Started libpod-conmon-2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60.scope.
Oct 11 09:42:01 compute-0 podman[427958]: 2025-10-11 09:42:01.642184494 +0000 UTC m=+0.036668189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:42:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:42:01 compute-0 podman[427958]: 2025-10-11 09:42:01.769442814 +0000 UTC m=+0.163926489 container init 2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 09:42:01 compute-0 podman[427958]: 2025-10-11 09:42:01.782933142 +0000 UTC m=+0.177416847 container start 2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Oct 11 09:42:01 compute-0 podman[427958]: 2025-10-11 09:42:01.789452235 +0000 UTC m=+0.183935910 container attach 2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 09:42:01 compute-0 dreamy_tharp[427974]: 167 167
Oct 11 09:42:01 compute-0 systemd[1]: libpod-2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60.scope: Deactivated successfully.
Oct 11 09:42:01 compute-0 podman[427958]: 2025-10-11 09:42:01.792708526 +0000 UTC m=+0.187192221 container died 2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:42:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceee0bdfbaa1b4efabc20aab84948bcb36412fa9f8414095e00553d4e1ddb9d7-merged.mount: Deactivated successfully.
Oct 11 09:42:01 compute-0 podman[427958]: 2025-10-11 09:42:01.839754796 +0000 UTC m=+0.234238461 container remove 2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_tharp, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 09:42:01 compute-0 systemd[1]: libpod-conmon-2f96c944f22af246792492701ffdf855ecf749b52627df4edef29a20d36fde60.scope: Deactivated successfully.
Oct 11 09:42:01 compute-0 nova_compute[260935]: 2025-10-11 09:42:01.887 2 DEBUG nova.compute.manager [req-cd15cdab-6dc7-461d-9360-8aee9d4257e2 req-317114c0-6309-4804-96c3-a40bc32d4559 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:42:01 compute-0 nova_compute[260935]: 2025-10-11 09:42:01.889 2 DEBUG oslo_concurrency.lockutils [req-cd15cdab-6dc7-461d-9360-8aee9d4257e2 req-317114c0-6309-4804-96c3-a40bc32d4559 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:01 compute-0 nova_compute[260935]: 2025-10-11 09:42:01.889 2 DEBUG oslo_concurrency.lockutils [req-cd15cdab-6dc7-461d-9360-8aee9d4257e2 req-317114c0-6309-4804-96c3-a40bc32d4559 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:01 compute-0 nova_compute[260935]: 2025-10-11 09:42:01.890 2 DEBUG oslo_concurrency.lockutils [req-cd15cdab-6dc7-461d-9360-8aee9d4257e2 req-317114c0-6309-4804-96c3-a40bc32d4559 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:01 compute-0 nova_compute[260935]: 2025-10-11 09:42:01.890 2 DEBUG nova.compute.manager [req-cd15cdab-6dc7-461d-9360-8aee9d4257e2 req-317114c0-6309-4804-96c3-a40bc32d4559 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Processing event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:42:02 compute-0 podman[428039]: 2025-10-11 09:42:02.130682554 +0000 UTC m=+0.078277296 container create 44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:42:02 compute-0 systemd[1]: Started libpod-conmon-44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4.scope.
Oct 11 09:42:02 compute-0 podman[428039]: 2025-10-11 09:42:02.097949456 +0000 UTC m=+0.045544258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:42:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e473843e00e29bcc81d91f7f7d6a2fd8f0c3b09897eb2d05159de73e927d2d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e473843e00e29bcc81d91f7f7d6a2fd8f0c3b09897eb2d05159de73e927d2d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e473843e00e29bcc81d91f7f7d6a2fd8f0c3b09897eb2d05159de73e927d2d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e473843e00e29bcc81d91f7f7d6a2fd8f0c3b09897eb2d05159de73e927d2d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:42:02 compute-0 podman[428039]: 2025-10-11 09:42:02.225899335 +0000 UTC m=+0.173494047 container init 44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 09:42:02 compute-0 podman[428039]: 2025-10-11 09:42:02.23785743 +0000 UTC m=+0.185452152 container start 44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tharp, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 09:42:02 compute-0 podman[428039]: 2025-10-11 09:42:02.242809919 +0000 UTC m=+0.190417602 container attach 44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tharp, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.439 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.441 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175722.439016, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.442 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Started (Lifecycle Event)
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.447 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.453 2 INFO nova.virt.libvirt.driver [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance spawned successfully.
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.454 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.463 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.468 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.482 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.483 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.484 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.484 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.485 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.486 2 DEBUG nova.virt.libvirt.driver [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.494 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.494 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175722.4393353, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.495 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Paused (Lifecycle Event)
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.526 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.531 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175722.446989, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.532 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Resumed (Lifecycle Event)
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.557 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.564 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.570 2 INFO nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Took 7.32 seconds to spawn the instance on the hypervisor.
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.571 2 DEBUG nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.589 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.639 2 INFO nova.compute.manager [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Took 8.47 seconds to build instance.
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.707 2 DEBUG oslo_concurrency.lockutils [None req-cd857be1-1604-4a66-9869-2cbea25e1f23 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:02.773 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:42:02 compute-0 nova_compute[260935]: 2025-10-11 09:42:02.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:02.775 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:42:03 compute-0 ceph-mon[74313]: pgmap v3011: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]: {
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:         "osd_id": 2,
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:         "type": "bluestore"
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:     },
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:         "osd_id": 0,
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:         "type": "bluestore"
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:     },
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:         "osd_id": 1,
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:         "type": "bluestore"
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]:     }
Oct 11 09:42:03 compute-0 pedantic_tharp[428056]: }
Oct 11 09:42:03 compute-0 systemd[1]: libpod-44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4.scope: Deactivated successfully.
Oct 11 09:42:03 compute-0 systemd[1]: libpod-44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4.scope: Consumed 1.000s CPU time.
Oct 11 09:42:03 compute-0 conmon[428056]: conmon 44623b47d2b5fdb018a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4.scope/container/memory.events
Oct 11 09:42:03 compute-0 podman[428039]: 2025-10-11 09:42:03.25218555 +0000 UTC m=+1.199780262 container died 44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tharp, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:42:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e473843e00e29bcc81d91f7f7d6a2fd8f0c3b09897eb2d05159de73e927d2d9-merged.mount: Deactivated successfully.
Oct 11 09:42:03 compute-0 podman[428039]: 2025-10-11 09:42:03.310090124 +0000 UTC m=+1.257684836 container remove 44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 09:42:03 compute-0 systemd[1]: libpod-conmon-44623b47d2b5fdb018a3bc8d72e7e12191461c56163a724cfcff733e9e4a64f4.scope: Deactivated successfully.
Oct 11 09:42:03 compute-0 sudo[427885]: pam_unix(sudo:session): session closed for user root
Oct 11 09:42:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:42:03 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:42:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:42:03 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:42:03 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev e256c990-9bc8-458f-9fde-17aef5a01017 does not exist
Oct 11 09:42:03 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev afa43252-7147-415e-9dd9-c0371a91c89e does not exist
Oct 11 09:42:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3012: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:42:03 compute-0 sudo[428101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:42:03 compute-0 sudo[428101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:42:03 compute-0 sudo[428101]: pam_unix(sudo:session): session closed for user root
Oct 11 09:42:03 compute-0 sudo[428126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:42:03 compute-0 sudo[428126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:42:03 compute-0 sudo[428126]: pam_unix(sudo:session): session closed for user root
Oct 11 09:42:03 compute-0 nova_compute[260935]: 2025-10-11 09:42:03.972 2 DEBUG nova.compute.manager [req-40c50636-2a14-4749-bbc6-ac93f57cf557 req-c15d3149-369f-41a5-8eca-a6006a9701d5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:42:03 compute-0 nova_compute[260935]: 2025-10-11 09:42:03.973 2 DEBUG oslo_concurrency.lockutils [req-40c50636-2a14-4749-bbc6-ac93f57cf557 req-c15d3149-369f-41a5-8eca-a6006a9701d5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:03 compute-0 nova_compute[260935]: 2025-10-11 09:42:03.973 2 DEBUG oslo_concurrency.lockutils [req-40c50636-2a14-4749-bbc6-ac93f57cf557 req-c15d3149-369f-41a5-8eca-a6006a9701d5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:03 compute-0 nova_compute[260935]: 2025-10-11 09:42:03.974 2 DEBUG oslo_concurrency.lockutils [req-40c50636-2a14-4749-bbc6-ac93f57cf557 req-c15d3149-369f-41a5-8eca-a6006a9701d5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:03 compute-0 nova_compute[260935]: 2025-10-11 09:42:03.974 2 DEBUG nova.compute.manager [req-40c50636-2a14-4749-bbc6-ac93f57cf557 req-c15d3149-369f-41a5-8eca-a6006a9701d5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:42:03 compute-0 nova_compute[260935]: 2025-10-11 09:42:03.974 2 WARNING nova.compute.manager [req-40c50636-2a14-4749-bbc6-ac93f57cf557 req-c15d3149-369f-41a5-8eca-a6006a9701d5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state active and task_state None.
Oct 11 09:42:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:42:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:42:04 compute-0 ceph-mon[74313]: pgmap v3012: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:42:04 compute-0 nova_compute[260935]: 2025-10-11 09:42:04.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:04 compute-0 nova_compute[260935]: 2025-10-11 09:42:04.671 2 DEBUG nova.objects.instance [None req-5aa95985-9947-4d14-b787-44d2c9d016d5 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:42:04 compute-0 nova_compute[260935]: 2025-10-11 09:42:04.693 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175724.6935637, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:42:04 compute-0 nova_compute[260935]: 2025-10-11 09:42:04.694 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Paused (Lifecycle Event)
Oct 11 09:42:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:42:04 compute-0 nova_compute[260935]: 2025-10-11 09:42:04.715 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:42:04 compute-0 nova_compute[260935]: 2025-10-11 09:42:04.723 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:42:04 compute-0 nova_compute[260935]: 2025-10-11 09:42:04.752 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 11 09:42:04 compute-0 nova_compute[260935]: 2025-10-11 09:42:04.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:05 compute-0 kernel: tapa65667d8-0c (unregistering): left promiscuous mode
Oct 11 09:42:05 compute-0 NetworkManager[44960]: <info>  [1760175725.2450] device (tapa65667d8-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:42:05 compute-0 ovn_controller[152945]: 2025-10-11T09:42:05Z|01686|binding|INFO|Releasing lport a65667d8-0cc9-498a-9415-8fcecf5881f3 from this chassis (sb_readonly=0)
Oct 11 09:42:05 compute-0 ovn_controller[152945]: 2025-10-11T09:42:05Z|01687|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 down in Southbound
Oct 11 09:42:05 compute-0 nova_compute[260935]: 2025-10-11 09:42:05.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:05 compute-0 ovn_controller[152945]: 2025-10-11T09:42:05Z|01688|binding|INFO|Removing iface tapa65667d8-0c ovn-installed in OVS
Oct 11 09:42:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:05.269 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:9b:b3 10.100.0.10'], port_security=['fa:16:3e:38:9b:b3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f1968f8-d23e-46ea-bffa-f1b3561e0cf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bd3d8446de604a3682ade55965824985', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0b05ff59-8aaa-4951-bb03-8ac33de90ac6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61cb3534-5fbd-4916-a755-03a295e9752c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a65667d8-0cc9-498a-9415-8fcecf5881f3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:42:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:05.271 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a65667d8-0cc9-498a-9415-8fcecf5881f3 in datapath 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 unbound from our chassis
Oct 11 09:42:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:05.273 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 11 09:42:05 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:05.274 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[305e2b54-9248-4397-a529-cc457f42ff49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:42:05 compute-0 nova_compute[260935]: 2025-10-11 09:42:05.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:05 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d00000094.scope: Deactivated successfully.
Oct 11 09:42:05 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d00000094.scope: Consumed 3.522s CPU time.
Oct 11 09:42:05 compute-0 systemd-machined[215705]: Machine qemu-172-instance-00000094 terminated.
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3013: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:42:05 compute-0 nova_compute[260935]: 2025-10-11 09:42:05.452 2 DEBUG nova.compute.manager [None req-5aa95985-9947-4d14-b787-44d2c9d016d5 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029757271724620694 of space, bias 1.0, pg target 0.8927181517386208 quantized to 32 (current 32)
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:42:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:42:06 compute-0 nova_compute[260935]: 2025-10-11 09:42:06.066 2 DEBUG nova.compute.manager [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:42:06 compute-0 nova_compute[260935]: 2025-10-11 09:42:06.066 2 DEBUG oslo_concurrency.lockutils [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:06 compute-0 nova_compute[260935]: 2025-10-11 09:42:06.067 2 DEBUG oslo_concurrency.lockutils [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:06 compute-0 nova_compute[260935]: 2025-10-11 09:42:06.067 2 DEBUG oslo_concurrency.lockutils [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:06 compute-0 nova_compute[260935]: 2025-10-11 09:42:06.068 2 DEBUG nova.compute.manager [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:42:06 compute-0 nova_compute[260935]: 2025-10-11 09:42:06.068 2 WARNING nova.compute.manager [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state suspended and task_state None.
Oct 11 09:42:06 compute-0 nova_compute[260935]: 2025-10-11 09:42:06.069 2 DEBUG nova.compute.manager [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:42:06 compute-0 nova_compute[260935]: 2025-10-11 09:42:06.069 2 DEBUG oslo_concurrency.lockutils [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:06 compute-0 nova_compute[260935]: 2025-10-11 09:42:06.070 2 DEBUG oslo_concurrency.lockutils [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:06 compute-0 nova_compute[260935]: 2025-10-11 09:42:06.070 2 DEBUG oslo_concurrency.lockutils [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:06 compute-0 nova_compute[260935]: 2025-10-11 09:42:06.070 2 DEBUG nova.compute.manager [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:42:06 compute-0 nova_compute[260935]: 2025-10-11 09:42:06.071 2 WARNING nova.compute.manager [req-c2466a66-a43f-46ab-8649-c51c1913a56e req-92bec300-8673-4fa1-b059-944613e01403 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state suspended and task_state None.
Oct 11 09:42:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:42:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.3 total, 600.0 interval
                                           Cumulative writes: 35K writes, 140K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.75 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2305 writes, 8726 keys, 2305 commit groups, 1.0 writes per commit group, ingest: 10.50 MB, 0.02 MB/s
                                           Interval WAL: 2305 writes, 962 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 09:42:06 compute-0 ceph-mon[74313]: pgmap v3013: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:42:07 compute-0 nova_compute[260935]: 2025-10-11 09:42:07.071 2 INFO nova.compute.manager [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Resuming
Oct 11 09:42:07 compute-0 nova_compute[260935]: 2025-10-11 09:42:07.073 2 DEBUG nova.objects.instance [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'flavor' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:42:07 compute-0 nova_compute[260935]: 2025-10-11 09:42:07.117 2 DEBUG oslo_concurrency.lockutils [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:42:07 compute-0 nova_compute[260935]: 2025-10-11 09:42:07.118 2 DEBUG oslo_concurrency.lockutils [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquired lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:42:07 compute-0 nova_compute[260935]: 2025-10-11 09:42:07.118 2 DEBUG nova.network.neutron [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:42:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3014: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Oct 11 09:42:07 compute-0 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 09:42:08 compute-0 ceph-mon[74313]: pgmap v3014: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Oct 11 09:42:08 compute-0 nova_compute[260935]: 2025-10-11 09:42:08.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:42:08 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:08.778 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:42:08 compute-0 podman[428171]: 2025-10-11 09:42:08.823831221 +0000 UTC m=+0.108348459 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Oct 11 09:42:08 compute-0 nova_compute[260935]: 2025-10-11 09:42:08.943 2 DEBUG nova.network.neutron [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Updating instance_info_cache with network_info: [{"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:42:08 compute-0 nova_compute[260935]: 2025-10-11 09:42:08.966 2 DEBUG oslo_concurrency.lockutils [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Releasing lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:42:08 compute-0 nova_compute[260935]: 2025-10-11 09:42:08.973 2 DEBUG nova.virt.libvirt.vif [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2061155047',display_name='tempest-TestServerAdvancedOps-server-2061155047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2061155047',id=148,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:42:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='bd3d8446de604a3682ade55965824985',ramdisk_id='',reservation_id='r-q51mr6v9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-940341189',owner_user_name='tempest-TestServerAdvancedOps-940341189-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:42:05Z,user_data=None,user_id='a45c6166242048a983479be416f90c52',uuid=4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:42:08 compute-0 nova_compute[260935]: 2025-10-11 09:42:08.974 2 DEBUG nova.network.os_vif_util [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converting VIF {"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:42:08 compute-0 nova_compute[260935]: 2025-10-11 09:42:08.975 2 DEBUG nova.network.os_vif_util [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:42:08 compute-0 nova_compute[260935]: 2025-10-11 09:42:08.976 2 DEBUG os_vif [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:42:08 compute-0 nova_compute[260935]: 2025-10-11 09:42:08.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:08 compute-0 nova_compute[260935]: 2025-10-11 09:42:08.977 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:42:08 compute-0 nova_compute[260935]: 2025-10-11 09:42:08.977 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:42:08 compute-0 nova_compute[260935]: 2025-10-11 09:42:08.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:08 compute-0 nova_compute[260935]: 2025-10-11 09:42:08.983 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa65667d8-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:42:08 compute-0 nova_compute[260935]: 2025-10-11 09:42:08.984 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa65667d8-0c, col_values=(('external_ids', {'iface-id': 'a65667d8-0cc9-498a-9415-8fcecf5881f3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:9b:b3', 'vm-uuid': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:42:08 compute-0 nova_compute[260935]: 2025-10-11 09:42:08.985 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:42:08 compute-0 nova_compute[260935]: 2025-10-11 09:42:08.985 2 INFO os_vif [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c')
Oct 11 09:42:09 compute-0 nova_compute[260935]: 2025-10-11 09:42:09.011 2 DEBUG nova.objects.instance [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:42:09 compute-0 kernel: tapa65667d8-0c: entered promiscuous mode
Oct 11 09:42:09 compute-0 nova_compute[260935]: 2025-10-11 09:42:09.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:09 compute-0 ovn_controller[152945]: 2025-10-11T09:42:09Z|01689|binding|INFO|Claiming lport a65667d8-0cc9-498a-9415-8fcecf5881f3 for this chassis.
Oct 11 09:42:09 compute-0 ovn_controller[152945]: 2025-10-11T09:42:09Z|01690|binding|INFO|a65667d8-0cc9-498a-9415-8fcecf5881f3: Claiming fa:16:3e:38:9b:b3 10.100.0.10
Oct 11 09:42:09 compute-0 NetworkManager[44960]: <info>  [1760175729.1180] manager: (tapa65667d8-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/639)
Oct 11 09:42:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:09.122 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:9b:b3 10.100.0.10'], port_security=['fa:16:3e:38:9b:b3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f1968f8-d23e-46ea-bffa-f1b3561e0cf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bd3d8446de604a3682ade55965824985', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0b05ff59-8aaa-4951-bb03-8ac33de90ac6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61cb3534-5fbd-4916-a755-03a295e9752c, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a65667d8-0cc9-498a-9415-8fcecf5881f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:42:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:09.124 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a65667d8-0cc9-498a-9415-8fcecf5881f3 in datapath 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 bound to our chassis
Oct 11 09:42:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:09.125 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 11 09:42:09 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:09.126 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[626243bd-71b5-4257-a429-86dcacc85739]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:42:09 compute-0 ovn_controller[152945]: 2025-10-11T09:42:09Z|01691|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 up in Southbound
Oct 11 09:42:09 compute-0 ovn_controller[152945]: 2025-10-11T09:42:09Z|01692|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 ovn-installed in OVS
Oct 11 09:42:09 compute-0 nova_compute[260935]: 2025-10-11 09:42:09.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:09 compute-0 nova_compute[260935]: 2025-10-11 09:42:09.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:09 compute-0 systemd-machined[215705]: New machine qemu-173-instance-00000094.
Oct 11 09:42:09 compute-0 systemd[1]: Started Virtual Machine qemu-173-instance-00000094.
Oct 11 09:42:09 compute-0 systemd-udevd[428205]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:42:09 compute-0 NetworkManager[44960]: <info>  [1760175729.2089] device (tapa65667d8-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:42:09 compute-0 NetworkManager[44960]: <info>  [1760175729.2112] device (tapa65667d8-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:42:09 compute-0 nova_compute[260935]: 2025-10-11 09:42:09.292 2 DEBUG nova.compute.manager [req-c00e7a6a-c0e0-4b76-b06f-9bce4f253c59 req-93ce15a0-dc9f-4e0a-beea-872740918fe0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:42:09 compute-0 nova_compute[260935]: 2025-10-11 09:42:09.293 2 DEBUG oslo_concurrency.lockutils [req-c00e7a6a-c0e0-4b76-b06f-9bce4f253c59 req-93ce15a0-dc9f-4e0a-beea-872740918fe0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:09 compute-0 nova_compute[260935]: 2025-10-11 09:42:09.294 2 DEBUG oslo_concurrency.lockutils [req-c00e7a6a-c0e0-4b76-b06f-9bce4f253c59 req-93ce15a0-dc9f-4e0a-beea-872740918fe0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:09 compute-0 nova_compute[260935]: 2025-10-11 09:42:09.294 2 DEBUG oslo_concurrency.lockutils [req-c00e7a6a-c0e0-4b76-b06f-9bce4f253c59 req-93ce15a0-dc9f-4e0a-beea-872740918fe0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:09 compute-0 nova_compute[260935]: 2025-10-11 09:42:09.294 2 DEBUG nova.compute.manager [req-c00e7a6a-c0e0-4b76-b06f-9bce4f253c59 req-93ce15a0-dc9f-4e0a-beea-872740918fe0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:42:09 compute-0 nova_compute[260935]: 2025-10-11 09:42:09.294 2 WARNING nova.compute.manager [req-c00e7a6a-c0e0-4b76-b06f-9bce4f253c59 req-93ce15a0-dc9f-4e0a-beea-872740918fe0 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state suspended and task_state resuming.
Oct 11 09:42:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3015: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:42:09 compute-0 nova_compute[260935]: 2025-10-11 09:42:09.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:42:09 compute-0 nova_compute[260935]: 2025-10-11 09:42:09.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:10 compute-0 ceph-mon[74313]: pgmap v3015: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:42:10 compute-0 nova_compute[260935]: 2025-10-11 09:42:10.824 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 09:42:10 compute-0 nova_compute[260935]: 2025-10-11 09:42:10.824 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175730.8237205, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:42:10 compute-0 nova_compute[260935]: 2025-10-11 09:42:10.824 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Started (Lifecycle Event)
Oct 11 09:42:10 compute-0 nova_compute[260935]: 2025-10-11 09:42:10.848 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:42:10 compute-0 nova_compute[260935]: 2025-10-11 09:42:10.861 2 DEBUG nova.compute.manager [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:42:10 compute-0 nova_compute[260935]: 2025-10-11 09:42:10.861 2 DEBUG nova.objects.instance [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:42:10 compute-0 nova_compute[260935]: 2025-10-11 09:42:10.866 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:42:10 compute-0 nova_compute[260935]: 2025-10-11 09:42:10.888 2 INFO nova.virt.libvirt.driver [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance running successfully.
Oct 11 09:42:10 compute-0 virtqemud[260524]: argument unsupported: QEMU guest agent is not configured
Oct 11 09:42:10 compute-0 nova_compute[260935]: 2025-10-11 09:42:10.890 2 DEBUG nova.virt.libvirt.guest [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 11 09:42:10 compute-0 nova_compute[260935]: 2025-10-11 09:42:10.891 2 DEBUG nova.compute.manager [None req-d6a7ad79-e7cc-4d9e-a0a6-406d4e7e1e8b a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:42:10 compute-0 nova_compute[260935]: 2025-10-11 09:42:10.891 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 11 09:42:10 compute-0 nova_compute[260935]: 2025-10-11 09:42:10.892 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175730.8285172, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:42:10 compute-0 nova_compute[260935]: 2025-10-11 09:42:10.892 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Resumed (Lifecycle Event)
Oct 11 09:42:10 compute-0 nova_compute[260935]: 2025-10-11 09:42:10.920 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:42:10 compute-0 nova_compute[260935]: 2025-10-11 09:42:10.923 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:42:10 compute-0 nova_compute[260935]: 2025-10-11 09:42:10.953 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 11 09:42:11 compute-0 nova_compute[260935]: 2025-10-11 09:42:11.374 2 DEBUG nova.compute.manager [req-fa9159cf-fcf2-45b8-ba2e-2f3d3c3b5354 req-cbae1766-b588-437f-bce1-abfbd4b71ca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:42:11 compute-0 nova_compute[260935]: 2025-10-11 09:42:11.374 2 DEBUG oslo_concurrency.lockutils [req-fa9159cf-fcf2-45b8-ba2e-2f3d3c3b5354 req-cbae1766-b588-437f-bce1-abfbd4b71ca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:11 compute-0 nova_compute[260935]: 2025-10-11 09:42:11.375 2 DEBUG oslo_concurrency.lockutils [req-fa9159cf-fcf2-45b8-ba2e-2f3d3c3b5354 req-cbae1766-b588-437f-bce1-abfbd4b71ca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:11 compute-0 nova_compute[260935]: 2025-10-11 09:42:11.375 2 DEBUG oslo_concurrency.lockutils [req-fa9159cf-fcf2-45b8-ba2e-2f3d3c3b5354 req-cbae1766-b588-437f-bce1-abfbd4b71ca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:11 compute-0 nova_compute[260935]: 2025-10-11 09:42:11.375 2 DEBUG nova.compute.manager [req-fa9159cf-fcf2-45b8-ba2e-2f3d3c3b5354 req-cbae1766-b588-437f-bce1-abfbd4b71ca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:42:11 compute-0 nova_compute[260935]: 2025-10-11 09:42:11.375 2 WARNING nova.compute.manager [req-fa9159cf-fcf2-45b8-ba2e-2f3d3c3b5354 req-cbae1766-b588-437f-bce1-abfbd4b71ca3 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state active and task_state None.
Oct 11 09:42:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3016: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:42:12 compute-0 ceph-mon[74313]: pgmap v3016: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:42:13 compute-0 nova_compute[260935]: 2025-10-11 09:42:13.086 2 DEBUG nova.objects.instance [None req-afc008e0-ff13-4fae-805f-b002ec8810ad a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:42:13 compute-0 nova_compute[260935]: 2025-10-11 09:42:13.105 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175733.104935, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:42:13 compute-0 nova_compute[260935]: 2025-10-11 09:42:13.105 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Paused (Lifecycle Event)
Oct 11 09:42:13 compute-0 nova_compute[260935]: 2025-10-11 09:42:13.131 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:42:13 compute-0 nova_compute[260935]: 2025-10-11 09:42:13.135 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:42:13 compute-0 nova_compute[260935]: 2025-10-11 09:42:13.156 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 11 09:42:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3017: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Oct 11 09:42:13 compute-0 kernel: tapa65667d8-0c (unregistering): left promiscuous mode
Oct 11 09:42:13 compute-0 NetworkManager[44960]: <info>  [1760175733.6167] device (tapa65667d8-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:42:13 compute-0 ovn_controller[152945]: 2025-10-11T09:42:13Z|01693|binding|INFO|Releasing lport a65667d8-0cc9-498a-9415-8fcecf5881f3 from this chassis (sb_readonly=0)
Oct 11 09:42:13 compute-0 ovn_controller[152945]: 2025-10-11T09:42:13Z|01694|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 down in Southbound
Oct 11 09:42:13 compute-0 nova_compute[260935]: 2025-10-11 09:42:13.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:13 compute-0 ovn_controller[152945]: 2025-10-11T09:42:13Z|01695|binding|INFO|Removing iface tapa65667d8-0c ovn-installed in OVS
Oct 11 09:42:13 compute-0 nova_compute[260935]: 2025-10-11 09:42:13.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:13 compute-0 nova_compute[260935]: 2025-10-11 09:42:13.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:13 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d00000094.scope: Deactivated successfully.
Oct 11 09:42:13 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d00000094.scope: Consumed 3.918s CPU time.
Oct 11 09:42:13 compute-0 systemd-machined[215705]: Machine qemu-173-instance-00000094 terminated.
Oct 11 09:42:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:13.714 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:9b:b3 10.100.0.10'], port_security=['fa:16:3e:38:9b:b3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f1968f8-d23e-46ea-bffa-f1b3561e0cf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bd3d8446de604a3682ade55965824985', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0b05ff59-8aaa-4951-bb03-8ac33de90ac6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61cb3534-5fbd-4916-a755-03a295e9752c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a65667d8-0cc9-498a-9415-8fcecf5881f3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:42:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:13.715 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a65667d8-0cc9-498a-9415-8fcecf5881f3 in datapath 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 unbound from our chassis
Oct 11 09:42:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:13.716 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 11 09:42:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:13.717 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5f19fe65-7e50-4e7a-be67-64ae6e5f4ce0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:42:13 compute-0 nova_compute[260935]: 2025-10-11 09:42:13.796 2 DEBUG nova.compute.manager [None req-afc008e0-ff13-4fae-805f-b002ec8810ad a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:42:13 compute-0 nova_compute[260935]: 2025-10-11 09:42:13.949 2 DEBUG nova.compute.manager [req-a77b8350-78c7-42a4-8b79-f3d95a4ca8b5 req-0e41b591-0887-47d1-b11f-6b80bd7ebf39 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:42:13 compute-0 nova_compute[260935]: 2025-10-11 09:42:13.949 2 DEBUG oslo_concurrency.lockutils [req-a77b8350-78c7-42a4-8b79-f3d95a4ca8b5 req-0e41b591-0887-47d1-b11f-6b80bd7ebf39 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:13 compute-0 nova_compute[260935]: 2025-10-11 09:42:13.949 2 DEBUG oslo_concurrency.lockutils [req-a77b8350-78c7-42a4-8b79-f3d95a4ca8b5 req-0e41b591-0887-47d1-b11f-6b80bd7ebf39 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:13 compute-0 nova_compute[260935]: 2025-10-11 09:42:13.950 2 DEBUG oslo_concurrency.lockutils [req-a77b8350-78c7-42a4-8b79-f3d95a4ca8b5 req-0e41b591-0887-47d1-b11f-6b80bd7ebf39 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:13 compute-0 nova_compute[260935]: 2025-10-11 09:42:13.950 2 DEBUG nova.compute.manager [req-a77b8350-78c7-42a4-8b79-f3d95a4ca8b5 req-0e41b591-0887-47d1-b11f-6b80bd7ebf39 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:42:13 compute-0 nova_compute[260935]: 2025-10-11 09:42:13.950 2 WARNING nova.compute.manager [req-a77b8350-78c7-42a4-8b79-f3d95a4ca8b5 req-0e41b591-0887-47d1-b11f-6b80bd7ebf39 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state active and task_state suspending.
Oct 11 09:42:14 compute-0 nova_compute[260935]: 2025-10-11 09:42:14.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:14 compute-0 ceph-mon[74313]: pgmap v3017: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Oct 11 09:42:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:42:14 compute-0 nova_compute[260935]: 2025-10-11 09:42:14.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:15.238 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:15.239 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:15.240 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3018: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 09:42:15 compute-0 podman[428276]: 2025-10-11 09:42:15.804876191 +0000 UTC m=+0.099876612 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 11 09:42:16 compute-0 nova_compute[260935]: 2025-10-11 09:42:16.049 2 DEBUG nova.compute.manager [req-0125c421-863d-47fd-b899-a380169f7048 req-6b5d95ba-7b5e-49f7-a1fc-5373b5ad9440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:42:16 compute-0 nova_compute[260935]: 2025-10-11 09:42:16.050 2 DEBUG oslo_concurrency.lockutils [req-0125c421-863d-47fd-b899-a380169f7048 req-6b5d95ba-7b5e-49f7-a1fc-5373b5ad9440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:16 compute-0 nova_compute[260935]: 2025-10-11 09:42:16.050 2 DEBUG oslo_concurrency.lockutils [req-0125c421-863d-47fd-b899-a380169f7048 req-6b5d95ba-7b5e-49f7-a1fc-5373b5ad9440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:16 compute-0 nova_compute[260935]: 2025-10-11 09:42:16.051 2 DEBUG oslo_concurrency.lockutils [req-0125c421-863d-47fd-b899-a380169f7048 req-6b5d95ba-7b5e-49f7-a1fc-5373b5ad9440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:16 compute-0 nova_compute[260935]: 2025-10-11 09:42:16.051 2 DEBUG nova.compute.manager [req-0125c421-863d-47fd-b899-a380169f7048 req-6b5d95ba-7b5e-49f7-a1fc-5373b5ad9440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:42:16 compute-0 nova_compute[260935]: 2025-10-11 09:42:16.052 2 WARNING nova.compute.manager [req-0125c421-863d-47fd-b899-a380169f7048 req-6b5d95ba-7b5e-49f7-a1fc-5373b5ad9440 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state suspended and task_state None.
Oct 11 09:42:16 compute-0 nova_compute[260935]: 2025-10-11 09:42:16.138 2 INFO nova.compute.manager [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Resuming
Oct 11 09:42:16 compute-0 nova_compute[260935]: 2025-10-11 09:42:16.139 2 DEBUG nova.objects.instance [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'flavor' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:42:16 compute-0 nova_compute[260935]: 2025-10-11 09:42:16.183 2 DEBUG oslo_concurrency.lockutils [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:42:16 compute-0 nova_compute[260935]: 2025-10-11 09:42:16.183 2 DEBUG oslo_concurrency.lockutils [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquired lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:42:16 compute-0 nova_compute[260935]: 2025-10-11 09:42:16.184 2 DEBUG nova.network.neutron [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:42:16 compute-0 ceph-mon[74313]: pgmap v3018: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 09:42:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3019: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.070 2 DEBUG nova.network.neutron [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Updating instance_info_cache with network_info: [{"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.088 2 DEBUG oslo_concurrency.lockutils [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Releasing lock "refresh_cache-4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.096 2 DEBUG nova.virt.libvirt.vif [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2061155047',display_name='tempest-TestServerAdvancedOps-server-2061155047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2061155047',id=148,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:42:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='bd3d8446de604a3682ade55965824985',ramdisk_id='',reservation_id='r-q51mr6v9',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-940341189',owner_user_name='tempest-TestServerAdvancedOps-940341189-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:42:13Z,user_data=None,user_id='a45c6166242048a983479be416f90c52',uuid=4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.098 2 DEBUG nova.network.os_vif_util [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converting VIF {"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.099 2 DEBUG nova.network.os_vif_util [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.099 2 DEBUG os_vif [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.101 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.101 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.105 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa65667d8-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.106 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa65667d8-0c, col_values=(('external_ids', {'iface-id': 'a65667d8-0cc9-498a-9415-8fcecf5881f3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:9b:b3', 'vm-uuid': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.108 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.108 2 INFO os_vif [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c')
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.132 2 DEBUG nova.objects.instance [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:42:18 compute-0 kernel: tapa65667d8-0c: entered promiscuous mode
Oct 11 09:42:18 compute-0 NetworkManager[44960]: <info>  [1760175738.2144] manager: (tapa65667d8-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/640)
Oct 11 09:42:18 compute-0 systemd-udevd[428307]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:42:18 compute-0 ovn_controller[152945]: 2025-10-11T09:42:18Z|01696|binding|INFO|Claiming lport a65667d8-0cc9-498a-9415-8fcecf5881f3 for this chassis.
Oct 11 09:42:18 compute-0 ovn_controller[152945]: 2025-10-11T09:42:18Z|01697|binding|INFO|a65667d8-0cc9-498a-9415-8fcecf5881f3: Claiming fa:16:3e:38:9b:b3 10.100.0.10
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:18.256 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:9b:b3 10.100.0.10'], port_security=['fa:16:3e:38:9b:b3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f1968f8-d23e-46ea-bffa-f1b3561e0cf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bd3d8446de604a3682ade55965824985', 'neutron:revision_number': '7', 'neutron:security_group_ids': '0b05ff59-8aaa-4951-bb03-8ac33de90ac6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61cb3534-5fbd-4916-a755-03a295e9752c, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a65667d8-0cc9-498a-9415-8fcecf5881f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:42:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:18.258 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a65667d8-0cc9-498a-9415-8fcecf5881f3 in datapath 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 bound to our chassis
Oct 11 09:42:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:18.259 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 11 09:42:18 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:18.260 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[61b85472-6ac2-4b91-907a-15b6e3f2df99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:42:18 compute-0 NetworkManager[44960]: <info>  [1760175738.2736] device (tapa65667d8-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:42:18 compute-0 NetworkManager[44960]: <info>  [1760175738.2762] device (tapa65667d8-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:42:18 compute-0 systemd-machined[215705]: New machine qemu-174-instance-00000094.
Oct 11 09:42:18 compute-0 ovn_controller[152945]: 2025-10-11T09:42:18Z|01698|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 ovn-installed in OVS
Oct 11 09:42:18 compute-0 ovn_controller[152945]: 2025-10-11T09:42:18Z|01699|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 up in Southbound
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:18 compute-0 systemd[1]: Started Virtual Machine qemu-174-instance-00000094.
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:18 compute-0 ceph-mon[74313]: pgmap v3019: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.914 2 DEBUG nova.compute.manager [req-4d47a79b-edca-4dde-8943-54f1730a02e9 req-3f8aea58-87ad-47df-a943-2cf5e2ced308 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.915 2 DEBUG oslo_concurrency.lockutils [req-4d47a79b-edca-4dde-8943-54f1730a02e9 req-3f8aea58-87ad-47df-a943-2cf5e2ced308 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.916 2 DEBUG oslo_concurrency.lockutils [req-4d47a79b-edca-4dde-8943-54f1730a02e9 req-3f8aea58-87ad-47df-a943-2cf5e2ced308 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.916 2 DEBUG oslo_concurrency.lockutils [req-4d47a79b-edca-4dde-8943-54f1730a02e9 req-3f8aea58-87ad-47df-a943-2cf5e2ced308 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.916 2 DEBUG nova.compute.manager [req-4d47a79b-edca-4dde-8943-54f1730a02e9 req-3f8aea58-87ad-47df-a943-2cf5e2ced308 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:42:18 compute-0 nova_compute[260935]: 2025-10-11 09:42:18.917 2 WARNING nova.compute.manager [req-4d47a79b-edca-4dde-8943-54f1730a02e9 req-3f8aea58-87ad-47df-a943-2cf5e2ced308 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state suspended and task_state resuming.
Oct 11 09:42:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3020: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 526 KiB/s rd, 22 op/s
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.645 2 DEBUG nova.virt.libvirt.host [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Removed pending event for 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.647 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175739.6445706, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.647 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Started (Lifecycle Event)
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.673 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.680 2 DEBUG nova.compute.manager [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.681 2 DEBUG nova.objects.instance [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.691 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:42:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.705 2 INFO nova.virt.libvirt.driver [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance running successfully.
Oct 11 09:42:19 compute-0 virtqemud[260524]: argument unsupported: QEMU guest agent is not configured
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.711 2 DEBUG nova.virt.libvirt.guest [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.712 2 DEBUG nova.compute.manager [None req-fdad2070-38bd-4784-869b-5419abf20991 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.714 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.715 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175739.6523895, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.715 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Resumed (Lifecycle Event)
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.748 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.752 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.783 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] During sync_power_state the instance has a pending task (resuming). Skip.
Oct 11 09:42:19 compute-0 podman[428361]: 2025-10-11 09:42:19.795382205 +0000 UTC m=+0.090512609 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3)
Oct 11 09:42:19 compute-0 podman[428362]: 2025-10-11 09:42:19.798970036 +0000 UTC m=+0.087544087 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct 11 09:42:19 compute-0 nova_compute[260935]: 2025-10-11 09:42:19.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:20 compute-0 ceph-mon[74313]: pgmap v3020: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 526 KiB/s rd, 22 op/s
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.607530) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175740607563, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1911, "num_deletes": 251, "total_data_size": 3139384, "memory_usage": 3193424, "flush_reason": "Manual Compaction"}
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175740625710, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 3075291, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61413, "largest_seqno": 63323, "table_properties": {"data_size": 3066526, "index_size": 5450, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17837, "raw_average_key_size": 20, "raw_value_size": 3049132, "raw_average_value_size": 3441, "num_data_blocks": 242, "num_entries": 886, "num_filter_entries": 886, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175535, "oldest_key_time": 1760175535, "file_creation_time": 1760175740, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 18218 microseconds, and 5878 cpu microseconds.
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.625749) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 3075291 bytes OK
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.625768) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.627821) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.627845) EVENT_LOG_v1 {"time_micros": 1760175740627842, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.627862) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 3131276, prev total WAL file size 3131276, number of live WAL files 2.
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.628612) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(3003KB)], [146(9533KB)]
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175740628641, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 12837997, "oldest_snapshot_seqno": -1}
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8170 keys, 11163371 bytes, temperature: kUnknown
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175740679372, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11163371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11109275, "index_size": 32543, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 213575, "raw_average_key_size": 26, "raw_value_size": 10964101, "raw_average_value_size": 1341, "num_data_blocks": 1268, "num_entries": 8170, "num_filter_entries": 8170, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175740, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.679592) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11163371 bytes
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.680811) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 252.7 rd, 219.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 9.3 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 8684, records dropped: 514 output_compression: NoCompression
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.680840) EVENT_LOG_v1 {"time_micros": 1760175740680822, "job": 90, "event": "compaction_finished", "compaction_time_micros": 50798, "compaction_time_cpu_micros": 23533, "output_level": 6, "num_output_files": 1, "total_output_size": 11163371, "num_input_records": 8684, "num_output_records": 8170, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175740681334, "job": 90, "event": "table_file_deletion", "file_number": 148}
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175740682949, "job": 90, "event": "table_file_deletion", "file_number": 146}
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.628537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.683010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.683015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.683017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.683019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:42:20 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:42:20.683021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:42:21 compute-0 nova_compute[260935]: 2025-10-11 09:42:21.009 2 DEBUG nova.compute.manager [req-28d866e6-319f-4e66-a7f3-9ad4f2814d41 req-7b592e36-886d-4daf-9ee8-376e17fc3593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:42:21 compute-0 nova_compute[260935]: 2025-10-11 09:42:21.010 2 DEBUG oslo_concurrency.lockutils [req-28d866e6-319f-4e66-a7f3-9ad4f2814d41 req-7b592e36-886d-4daf-9ee8-376e17fc3593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:21 compute-0 nova_compute[260935]: 2025-10-11 09:42:21.010 2 DEBUG oslo_concurrency.lockutils [req-28d866e6-319f-4e66-a7f3-9ad4f2814d41 req-7b592e36-886d-4daf-9ee8-376e17fc3593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:21 compute-0 nova_compute[260935]: 2025-10-11 09:42:21.011 2 DEBUG oslo_concurrency.lockutils [req-28d866e6-319f-4e66-a7f3-9ad4f2814d41 req-7b592e36-886d-4daf-9ee8-376e17fc3593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:21 compute-0 nova_compute[260935]: 2025-10-11 09:42:21.011 2 DEBUG nova.compute.manager [req-28d866e6-319f-4e66-a7f3-9ad4f2814d41 req-7b592e36-886d-4daf-9ee8-376e17fc3593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:42:21 compute-0 nova_compute[260935]: 2025-10-11 09:42:21.012 2 WARNING nova.compute.manager [req-28d866e6-319f-4e66-a7f3-9ad4f2814d41 req-7b592e36-886d-4daf-9ee8-376e17fc3593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state active and task_state None.
Oct 11 09:42:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3021: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.315 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.316 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.316 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.316 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.317 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.318 2 INFO nova.compute.manager [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Terminating instance
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.319 2 DEBUG nova.compute.manager [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:42:22 compute-0 kernel: tapa65667d8-0c (unregistering): left promiscuous mode
Oct 11 09:42:22 compute-0 NetworkManager[44960]: <info>  [1760175742.3632] device (tapa65667d8-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:22 compute-0 ovn_controller[152945]: 2025-10-11T09:42:22Z|01700|binding|INFO|Releasing lport a65667d8-0cc9-498a-9415-8fcecf5881f3 from this chassis (sb_readonly=0)
Oct 11 09:42:22 compute-0 ovn_controller[152945]: 2025-10-11T09:42:22Z|01701|binding|INFO|Setting lport a65667d8-0cc9-498a-9415-8fcecf5881f3 down in Southbound
Oct 11 09:42:22 compute-0 ovn_controller[152945]: 2025-10-11T09:42:22Z|01702|binding|INFO|Removing iface tapa65667d8-0c ovn-installed in OVS
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:22.385 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:9b:b3 10.100.0.10'], port_security=['fa:16:3e:38:9b:b3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f1968f8-d23e-46ea-bffa-f1b3561e0cf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bd3d8446de604a3682ade55965824985', 'neutron:revision_number': '8', 'neutron:security_group_ids': '0b05ff59-8aaa-4951-bb03-8ac33de90ac6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61cb3534-5fbd-4916-a755-03a295e9752c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=a65667d8-0cc9-498a-9415-8fcecf5881f3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:42:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:22.386 162815 INFO neutron.agent.ovn.metadata.agent [-] Port a65667d8-0cc9-498a-9415-8fcecf5881f3 in datapath 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 unbound from our chassis
Oct 11 09:42:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:22.386 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7f1968f8-d23e-46ea-bffa-f1b3561e0cf9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 11 09:42:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:42:22.387 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc723e8-d568-47bb-9ef4-200b29c3488f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:22 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d00000094.scope: Deactivated successfully.
Oct 11 09:42:22 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d00000094.scope: Consumed 3.912s CPU time.
Oct 11 09:42:22 compute-0 systemd-machined[215705]: Machine qemu-174-instance-00000094 terminated.
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.562 2 INFO nova.virt.libvirt.driver [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Instance destroyed successfully.
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.563 2 DEBUG nova.objects.instance [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lazy-loading 'resources' on Instance uuid 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.579 2 DEBUG nova.virt.libvirt.vif [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2061155047',display_name='tempest-TestServerAdvancedOps-server-2061155047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2061155047',id=148,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:42:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bd3d8446de604a3682ade55965824985',ramdisk_id='',reservation_id='r-q51mr6v9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-940341189',owner_user_name='tempest-TestServerAdvancedOps-940341189-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:42:19Z,user_data=None,user_id='a45c6166242048a983479be416f90c52',uuid=4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.579 2 DEBUG nova.network.os_vif_util [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converting VIF {"id": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "address": "fa:16:3e:38:9b:b3", "network": {"id": "7f1968f8-d23e-46ea-bffa-f1b3561e0cf9", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-351533677-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "bd3d8446de604a3682ade55965824985", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa65667d8-0c", "ovs_interfaceid": "a65667d8-0cc9-498a-9415-8fcecf5881f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.580 2 DEBUG nova.network.os_vif_util [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.580 2 DEBUG os_vif [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.582 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa65667d8-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:22 compute-0 nova_compute[260935]: 2025-10-11 09:42:22.587 2 INFO os_vif [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:9b:b3,bridge_name='br-int',has_traffic_filtering=True,id=a65667d8-0cc9-498a-9415-8fcecf5881f3,network=Network(7f1968f8-d23e-46ea-bffa-f1b3561e0cf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa65667d8-0c')
Oct 11 09:42:22 compute-0 ceph-mon[74313]: pgmap v3021: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.057 2 INFO nova.virt.libvirt.driver [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Deleting instance files /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_del
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.058 2 INFO nova.virt.libvirt.driver [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Deletion of /var/lib/nova/instances/4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1_del complete
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.366 2 DEBUG nova.compute.manager [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.367 2 DEBUG oslo_concurrency.lockutils [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.367 2 DEBUG oslo_concurrency.lockutils [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.367 2 DEBUG oslo_concurrency.lockutils [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.368 2 DEBUG nova.compute.manager [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.368 2 DEBUG nova.compute.manager [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-unplugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.369 2 DEBUG nova.compute.manager [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.369 2 DEBUG oslo_concurrency.lockutils [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.369 2 DEBUG oslo_concurrency.lockutils [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.370 2 DEBUG oslo_concurrency.lockutils [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.370 2 DEBUG nova.compute.manager [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] No waiting events found dispatching network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.370 2 WARNING nova.compute.manager [req-53b1b8f9-8217-4154-8f21-5d6411eeadaa req-28234a51-a3a5-41f7-a287-b8beb6cfdfc5 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received unexpected event network-vif-plugged-a65667d8-0cc9-498a-9415-8fcecf5881f3 for instance with vm_state active and task_state deleting.
Oct 11 09:42:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3022: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.6 KiB/s rd, 9 op/s
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.444 2 INFO nova.compute.manager [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Took 1.12 seconds to destroy the instance on the hypervisor.
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.445 2 DEBUG oslo.service.loopingcall [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.445 2 DEBUG nova.compute.manager [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:42:23 compute-0 nova_compute[260935]: 2025-10-11 09:42:23.446 2 DEBUG nova.network.neutron [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:42:24 compute-0 nova_compute[260935]: 2025-10-11 09:42:24.294 2 DEBUG nova.network.neutron [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:42:24 compute-0 nova_compute[260935]: 2025-10-11 09:42:24.312 2 INFO nova.compute.manager [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Took 0.87 seconds to deallocate network for instance.
Oct 11 09:42:24 compute-0 nova_compute[260935]: 2025-10-11 09:42:24.361 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:24 compute-0 nova_compute[260935]: 2025-10-11 09:42:24.362 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:24 compute-0 nova_compute[260935]: 2025-10-11 09:42:24.469 2 DEBUG oslo_concurrency.processutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:42:24 compute-0 ceph-mon[74313]: pgmap v3022: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.6 KiB/s rd, 9 op/s
Oct 11 09:42:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:42:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:42:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:42:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:42:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:42:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:42:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:42:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:42:24 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2217349222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:42:24 compute-0 nova_compute[260935]: 2025-10-11 09:42:24.950 2 DEBUG oslo_concurrency.processutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:42:24 compute-0 nova_compute[260935]: 2025-10-11 09:42:24.959 2 DEBUG nova.compute.provider_tree [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:42:24 compute-0 nova_compute[260935]: 2025-10-11 09:42:24.980 2 DEBUG nova.scheduler.client.report [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:42:24 compute-0 nova_compute[260935]: 2025-10-11 09:42:24.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:25 compute-0 nova_compute[260935]: 2025-10-11 09:42:25.009 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:25 compute-0 nova_compute[260935]: 2025-10-11 09:42:25.035 2 INFO nova.scheduler.client.report [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Deleted allocations for instance 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1
Oct 11 09:42:25 compute-0 nova_compute[260935]: 2025-10-11 09:42:25.108 2 DEBUG oslo_concurrency.lockutils [None req-8947ba06-cc32-43ed-bd05-5aa754ba3953 a45c6166242048a983479be416f90c52 bd3d8446de604a3682ade55965824985 - - default default] Lock "4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3023: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s rd, 4 op/s
Oct 11 09:42:25 compute-0 nova_compute[260935]: 2025-10-11 09:42:25.478 2 DEBUG nova.compute.manager [req-a54cf32c-c35c-4524-a603-479e7211bf62 req-b6184f53-2926-4a77-9632-389b80798845 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Received event network-vif-deleted-a65667d8-0cc9-498a-9415-8fcecf5881f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:42:25 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2217349222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:42:26 compute-0 ovn_controller[152945]: 2025-10-11T09:42:26Z|01703|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:42:26 compute-0 ovn_controller[152945]: 2025-10-11T09:42:26Z|01704|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:42:26 compute-0 nova_compute[260935]: 2025-10-11 09:42:26.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:26 compute-0 ceph-mon[74313]: pgmap v3023: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s rd, 4 op/s
Oct 11 09:42:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:42:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/393553716' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:42:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:42:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/393553716' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:42:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3024: 321 pgs: 321 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 22 op/s
Oct 11 09:42:27 compute-0 nova_compute[260935]: 2025-10-11 09:42:27.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/393553716' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:42:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/393553716' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:42:28 compute-0 ceph-mon[74313]: pgmap v3024: 321 pgs: 321 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 22 op/s
Oct 11 09:42:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3025: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 11 09:42:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:42:30 compute-0 nova_compute[260935]: 2025-10-11 09:42:30.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:30 compute-0 sshd-session[428464]: Invalid user mysql from 165.232.82.252 port 33500
Oct 11 09:42:30 compute-0 sshd-session[428464]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:42:30 compute-0 sshd-session[428464]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:42:30 compute-0 ceph-mon[74313]: pgmap v3025: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 11 09:42:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3026: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 11 09:42:31 compute-0 nova_compute[260935]: 2025-10-11 09:42:31.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:42:32 compute-0 sshd-session[428464]: Failed password for invalid user mysql from 165.232.82.252 port 33500 ssh2
Oct 11 09:42:32 compute-0 nova_compute[260935]: 2025-10-11 09:42:32.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:32 compute-0 ceph-mon[74313]: pgmap v3026: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 11 09:42:33 compute-0 sshd-session[428464]: Connection closed by invalid user mysql 165.232.82.252 port 33500 [preauth]
Oct 11 09:42:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3027: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 11 09:42:34 compute-0 ceph-mon[74313]: pgmap v3027: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 11 09:42:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:42:35 compute-0 nova_compute[260935]: 2025-10-11 09:42:35.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3028: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 11 09:42:36 compute-0 ceph-mon[74313]: pgmap v3028: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 11 09:42:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3029: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 11 09:42:37 compute-0 nova_compute[260935]: 2025-10-11 09:42:37.562 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175742.5586293, 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:42:37 compute-0 nova_compute[260935]: 2025-10-11 09:42:37.564 2 INFO nova.compute.manager [-] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] VM Stopped (Lifecycle Event)
Oct 11 09:42:37 compute-0 nova_compute[260935]: 2025-10-11 09:42:37.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:37 compute-0 nova_compute[260935]: 2025-10-11 09:42:37.599 2 DEBUG nova.compute.manager [None req-bbd4cac3-062e-4082-b035-f84bca38eadd - - - - - -] [instance: 4f7e8b8e-b0c8-4618-ab1b-e82bb4fdc7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:42:38 compute-0 ceph-mon[74313]: pgmap v3029: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct 11 09:42:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3030: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 8 op/s
Oct 11 09:42:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:42:39 compute-0 podman[428466]: 2025-10-11 09:42:39.841455001 +0000 UTC m=+0.128955238 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 09:42:40 compute-0 nova_compute[260935]: 2025-10-11 09:42:40.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:40 compute-0 ceph-mon[74313]: pgmap v3030: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 8 op/s
Oct 11 09:42:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3031: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:42 compute-0 nova_compute[260935]: 2025-10-11 09:42:42.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:42 compute-0 ceph-mon[74313]: pgmap v3031: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3032: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:42:44 compute-0 ceph-mon[74313]: pgmap v3032: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:45 compute-0 nova_compute[260935]: 2025-10-11 09:42:45.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3033: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:46 compute-0 ceph-mon[74313]: pgmap v3033: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:46 compute-0 podman[428485]: 2025-10-11 09:42:46.813297264 +0000 UTC m=+0.112308741 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:42:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3034: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:47 compute-0 nova_compute[260935]: 2025-10-11 09:42:47.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:48 compute-0 ceph-mon[74313]: pgmap v3034: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3035: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:42:50 compute-0 nova_compute[260935]: 2025-10-11 09:42:50.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:50 compute-0 nova_compute[260935]: 2025-10-11 09:42:50.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:42:50 compute-0 nova_compute[260935]: 2025-10-11 09:42:50.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:42:50 compute-0 ceph-mon[74313]: pgmap v3035: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:50 compute-0 podman[428506]: 2025-10-11 09:42:50.779810324 +0000 UTC m=+0.084009697 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 09:42:50 compute-0 podman[428507]: 2025-10-11 09:42:50.818965203 +0000 UTC m=+0.110077029 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:42:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3036: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:52 compute-0 nova_compute[260935]: 2025-10-11 09:42:52.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:52 compute-0 nova_compute[260935]: 2025-10-11 09:42:52.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:42:52 compute-0 nova_compute[260935]: 2025-10-11 09:42:52.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:42:52 compute-0 ceph-mon[74313]: pgmap v3036: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3037: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:53 compute-0 nova_compute[260935]: 2025-10-11 09:42:53.819 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:42:53 compute-0 nova_compute[260935]: 2025-10-11 09:42:53.820 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:42:53 compute-0 nova_compute[260935]: 2025-10-11 09:42:53.820 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:42:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:42:54 compute-0 ceph-mon[74313]: pgmap v3037: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:42:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:42:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:42:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:42:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:42:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:42:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:42:55
Oct 11 09:42:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:42:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:42:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'images', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.log', 'backups']
Oct 11 09:42:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.076 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.094 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.095 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.095 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.096 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.122 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.124 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.124 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.124 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.125 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:42:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3038: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:42:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:42:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:42:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1257212598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.668 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.771 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.772 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.772 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.775 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.775 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.779 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:42:55 compute-0 nova_compute[260935]: 2025-10-11 09:42:55.779 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:42:55 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1257212598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:42:56 compute-0 nova_compute[260935]: 2025-10-11 09:42:56.048 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:42:56 compute-0 nova_compute[260935]: 2025-10-11 09:42:56.049 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2766MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:42:56 compute-0 nova_compute[260935]: 2025-10-11 09:42:56.049 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:56 compute-0 nova_compute[260935]: 2025-10-11 09:42:56.050 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:56 compute-0 nova_compute[260935]: 2025-10-11 09:42:56.135 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:42:56 compute-0 nova_compute[260935]: 2025-10-11 09:42:56.136 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:42:56 compute-0 nova_compute[260935]: 2025-10-11 09:42:56.136 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:42:56 compute-0 nova_compute[260935]: 2025-10-11 09:42:56.136 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:42:56 compute-0 nova_compute[260935]: 2025-10-11 09:42:56.136 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:42:56 compute-0 nova_compute[260935]: 2025-10-11 09:42:56.233 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:42:56 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:42:56 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3089073776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:42:56 compute-0 nova_compute[260935]: 2025-10-11 09:42:56.746 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:42:56 compute-0 nova_compute[260935]: 2025-10-11 09:42:56.752 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:42:56 compute-0 nova_compute[260935]: 2025-10-11 09:42:56.770 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:42:56 compute-0 nova_compute[260935]: 2025-10-11 09:42:56.804 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:42:56 compute-0 nova_compute[260935]: 2025-10-11 09:42:56.805 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:42:56 compute-0 ceph-mon[74313]: pgmap v3038: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:56 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3089073776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:42:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3039: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:57 compute-0 nova_compute[260935]: 2025-10-11 09:42:57.412 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:42:57 compute-0 nova_compute[260935]: 2025-10-11 09:42:57.413 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:42:57 compute-0 nova_compute[260935]: 2025-10-11 09:42:57.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:42:58 compute-0 ceph-mon[74313]: pgmap v3039: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3040: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:42:59 compute-0 nova_compute[260935]: 2025-10-11 09:42:59.494 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "1f03604e-0a25-4f22-807b-11f2e654be90" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:59 compute-0 nova_compute[260935]: 2025-10-11 09:42:59.495 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:59 compute-0 nova_compute[260935]: 2025-10-11 09:42:59.516 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:42:59 compute-0 nova_compute[260935]: 2025-10-11 09:42:59.581 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:42:59 compute-0 nova_compute[260935]: 2025-10-11 09:42:59.582 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:42:59 compute-0 nova_compute[260935]: 2025-10-11 09:42:59.591 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:42:59 compute-0 nova_compute[260935]: 2025-10-11 09:42:59.592 2 INFO nova.compute.claims [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:42:59 compute-0 nova_compute[260935]: 2025-10-11 09:42:59.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:42:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:42:59 compute-0 nova_compute[260935]: 2025-10-11 09:42:59.737 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:43:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/853917308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.238 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.248 2 DEBUG nova.compute.provider_tree [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.267 2 DEBUG nova.scheduler.client.report [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.299 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.300 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.365 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.368 2 DEBUG nova.network.neutron [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.394 2 INFO nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.415 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.508 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.510 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.511 2 INFO nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Creating image(s)
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.546 2 DEBUG nova.storage.rbd_utils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 1f03604e-0a25-4f22-807b-11f2e654be90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.580 2 DEBUG nova.storage.rbd_utils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 1f03604e-0a25-4f22-807b-11f2e654be90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.609 2 DEBUG nova.storage.rbd_utils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 1f03604e-0a25-4f22-807b-11f2e654be90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.614 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.706 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.706 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.707 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.707 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.734 2 DEBUG nova.storage.rbd_utils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 1f03604e-0a25-4f22-807b-11f2e654be90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.739 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1f03604e-0a25-4f22-807b-11f2e654be90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:43:00 compute-0 ceph-mon[74313]: pgmap v3040: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:43:00 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/853917308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:43:00 compute-0 nova_compute[260935]: 2025-10-11 09:43:00.845 2 DEBUG nova.policy [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2b3a6de4bc924868b4e73b0a23a89090', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd599eac75aee4193a971c2c157a326a8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:43:01 compute-0 nova_compute[260935]: 2025-10-11 09:43:01.033 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 1f03604e-0a25-4f22-807b-11f2e654be90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:43:01 compute-0 nova_compute[260935]: 2025-10-11 09:43:01.116 2 DEBUG nova.storage.rbd_utils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] resizing rbd image 1f03604e-0a25-4f22-807b-11f2e654be90_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:43:01 compute-0 nova_compute[260935]: 2025-10-11 09:43:01.208 2 DEBUG nova.objects.instance [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lazy-loading 'migration_context' on Instance uuid 1f03604e-0a25-4f22-807b-11f2e654be90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:43:01 compute-0 nova_compute[260935]: 2025-10-11 09:43:01.225 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:43:01 compute-0 nova_compute[260935]: 2025-10-11 09:43:01.225 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Ensure instance console log exists: /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:43:01 compute-0 nova_compute[260935]: 2025-10-11 09:43:01.226 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:43:01 compute-0 nova_compute[260935]: 2025-10-11 09:43:01.226 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:43:01 compute-0 nova_compute[260935]: 2025-10-11 09:43:01.227 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:43:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3041: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:43:01 compute-0 nova_compute[260935]: 2025-10-11 09:43:01.953 2 DEBUG nova.network.neutron [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Successfully created port: 0079b745-efa5-4641-a180-fee85fb30a5d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:43:02 compute-0 nova_compute[260935]: 2025-10-11 09:43:02.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:02 compute-0 nova_compute[260935]: 2025-10-11 09:43:02.671 2 DEBUG nova.network.neutron [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Successfully updated port: 0079b745-efa5-4641-a180-fee85fb30a5d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:43:02 compute-0 nova_compute[260935]: 2025-10-11 09:43:02.693 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:43:02 compute-0 nova_compute[260935]: 2025-10-11 09:43:02.693 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquired lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:43:02 compute-0 nova_compute[260935]: 2025-10-11 09:43:02.693 2 DEBUG nova.network.neutron [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:43:02 compute-0 ceph-mon[74313]: pgmap v3041: 321 pgs: 321 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:43:03 compute-0 nova_compute[260935]: 2025-10-11 09:43:03.057 2 DEBUG nova.compute.manager [req-b809192d-1ef9-406f-841e-229d53aa9ca4 req-bca78657-e6e3-4dcc-bdd6-92fbd1951939 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-changed-0079b745-efa5-4641-a180-fee85fb30a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:43:03 compute-0 nova_compute[260935]: 2025-10-11 09:43:03.058 2 DEBUG nova.compute.manager [req-b809192d-1ef9-406f-841e-229d53aa9ca4 req-bca78657-e6e3-4dcc-bdd6-92fbd1951939 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Refreshing instance network info cache due to event network-changed-0079b745-efa5-4641-a180-fee85fb30a5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:43:03 compute-0 nova_compute[260935]: 2025-10-11 09:43:03.059 2 DEBUG oslo_concurrency.lockutils [req-b809192d-1ef9-406f-841e-229d53aa9ca4 req-bca78657-e6e3-4dcc-bdd6-92fbd1951939 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:43:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3042: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:43:03 compute-0 sudo[428786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:43:03 compute-0 sudo[428786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:03 compute-0 sudo[428786]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:03 compute-0 sudo[428811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:43:03 compute-0 sudo[428811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:03 compute-0 sudo[428811]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:03 compute-0 sudo[428836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:43:03 compute-0 sudo[428836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:03 compute-0 sudo[428836]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:03 compute-0 nova_compute[260935]: 2025-10-11 09:43:03.822 2 DEBUG nova.network.neutron [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:43:03 compute-0 sudo[428861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:43:03 compute-0 sudo[428861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:04 compute-0 sudo[428861]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:04 compute-0 sudo[428919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:43:04 compute-0 sudo[428919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:04 compute-0 sudo[428919]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:04 compute-0 sudo[428944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:43:04 compute-0 sudo[428944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:04 compute-0 sudo[428944]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:04 compute-0 sshd-session[428903]: Invalid user mysql from 165.232.82.252 port 53426
Oct 11 09:43:04 compute-0 sudo[428969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:43:04 compute-0 sudo[428969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:04 compute-0 sudo[428969]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:43:04 compute-0 sshd-session[428903]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:43:04 compute-0 sshd-session[428903]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:43:04 compute-0 sudo[428994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- inventory --format=json-pretty --filter-for-batch
Oct 11 09:43:04 compute-0 sudo[428994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:04 compute-0 ceph-mon[74313]: pgmap v3042: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.116 2 DEBUG nova.network.neutron [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Updating instance_info_cache with network_info: [{"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.142 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Releasing lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.142 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Instance network_info: |[{"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.143 2 DEBUG oslo_concurrency.lockutils [req-b809192d-1ef9-406f-841e-229d53aa9ca4 req-bca78657-e6e3-4dcc-bdd6-92fbd1951939 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.143 2 DEBUG nova.network.neutron [req-b809192d-1ef9-406f-841e-229d53aa9ca4 req-bca78657-e6e3-4dcc-bdd6-92fbd1951939 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Refreshing network info cache for port 0079b745-efa5-4641-a180-fee85fb30a5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.148 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Start _get_guest_xml network_info=[{"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.156 2 WARNING nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.167 2 DEBUG nova.virt.libvirt.host [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.168 2 DEBUG nova.virt.libvirt.host [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.174 2 DEBUG nova.virt.libvirt.host [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.175 2 DEBUG nova.virt.libvirt.host [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.176 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.176 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.177 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.177 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.178 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.178 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.178 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.179 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.179 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.179 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.180 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.180 2 DEBUG nova.virt.hardware [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.185 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:43:05 compute-0 podman[429060]: 2025-10-11 09:43:05.241695463 +0000 UTC m=+0.073916294 container create 1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:43:05 compute-0 podman[429060]: 2025-10-11 09:43:05.208503143 +0000 UTC m=+0.040724064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:43:05 compute-0 systemd[1]: Started libpod-conmon-1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48.scope.
Oct 11 09:43:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:43:05 compute-0 podman[429060]: 2025-10-11 09:43:05.381116434 +0000 UTC m=+0.213337335 container init 1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:43:05 compute-0 podman[429060]: 2025-10-11 09:43:05.392591366 +0000 UTC m=+0.224812237 container start 1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:43:05 compute-0 podman[429060]: 2025-10-11 09:43:05.396785443 +0000 UTC m=+0.229006374 container attach 1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 11 09:43:05 compute-0 goofy_bardeen[429078]: 167 167
Oct 11 09:43:05 compute-0 systemd[1]: libpod-1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48.scope: Deactivated successfully.
Oct 11 09:43:05 compute-0 conmon[429078]: conmon 1a092983c130eef7c454 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48.scope/container/memory.events
Oct 11 09:43:05 compute-0 podman[429060]: 2025-10-11 09:43:05.403868982 +0000 UTC m=+0.236089883 container died 1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3043: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:43:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3993dc1e7829b57b12dbac5a3e95fbd1c455d0de97d3381740ffb53c7bee289c-merged.mount: Deactivated successfully.
Oct 11 09:43:05 compute-0 podman[429060]: 2025-10-11 09:43:05.463614728 +0000 UTC m=+0.295835579 container remove 1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:43:05 compute-0 systemd[1]: libpod-conmon-1a092983c130eef7c4541c55811be68b775bf8366c775e612c5a2895ff0a4b48.scope: Deactivated successfully.
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002973310725564889 of space, bias 1.0, pg target 0.8919932176694666 quantized to 32 (current 32)
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:43:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:43:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:43:05 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1733831700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.662 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.694 2 DEBUG nova.storage.rbd_utils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 1f03604e-0a25-4f22-807b-11f2e654be90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:43:05 compute-0 nova_compute[260935]: 2025-10-11 09:43:05.699 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:43:05 compute-0 podman[429119]: 2025-10-11 09:43:05.700238525 +0000 UTC m=+0.063015219 container create bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:43:05 compute-0 systemd[1]: Started libpod-conmon-bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f.scope.
Oct 11 09:43:05 compute-0 podman[429119]: 2025-10-11 09:43:05.670764718 +0000 UTC m=+0.033541502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:43:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f0a717cb682ed8212f3ac948020242a6f4e7385c72fba36d642cfc64baed44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f0a717cb682ed8212f3ac948020242a6f4e7385c72fba36d642cfc64baed44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f0a717cb682ed8212f3ac948020242a6f4e7385c72fba36d642cfc64baed44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f0a717cb682ed8212f3ac948020242a6f4e7385c72fba36d642cfc64baed44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:05 compute-0 podman[429119]: 2025-10-11 09:43:05.791268968 +0000 UTC m=+0.154045662 container init bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:43:05 compute-0 podman[429119]: 2025-10-11 09:43:05.798876141 +0000 UTC m=+0.161652835 container start bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 09:43:05 compute-0 podman[429119]: 2025-10-11 09:43:05.802187824 +0000 UTC m=+0.164964548 container attach bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:43:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1733831700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:43:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:43:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1707526989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:43:06 compute-0 sshd-session[428903]: Failed password for invalid user mysql from 165.232.82.252 port 53426 ssh2
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.143 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.145 2 DEBUG nova.virt.libvirt.vif [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:42:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1990672178',display_name='tempest-TestSnapshotPattern-server-1990672178',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1990672178',id=149,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP2ECx0jj3Rt10/VBX40dpY/aktTrRIITXR7+kOE+gKATXjMYBpecra2wtbKgkthLAz8Iw/nS+1WioBHxe0IF9i4guGae1Gh4XxA1njuMr4iSWg7N0/eaZRKp5xLMkN7hg==',key_name='tempest-TestSnapshotPattern-1719255089',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d599eac75aee4193a971c2c157a326a8',ramdisk_id='',reservation_id='r-eoa49vly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1818755654',owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:43:00Z,user_data=None,user_id='2b3a6de4bc924868b4e73b0a23a89090',uuid=1f03604e-0a25-4f22-807b-11f2e654be90,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.145 2 DEBUG nova.network.os_vif_util [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converting VIF {"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.146 2 DEBUG nova.network.os_vif_util [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:b5:6b,bridge_name='br-int',has_traffic_filtering=True,id=0079b745-efa5-4641-a180-fee85fb30a5d,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0079b745-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.147 2 DEBUG nova.objects.instance [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f03604e-0a25-4f22-807b-11f2e654be90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.162 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:43:06 compute-0 nova_compute[260935]:   <uuid>1f03604e-0a25-4f22-807b-11f2e654be90</uuid>
Oct 11 09:43:06 compute-0 nova_compute[260935]:   <name>instance-00000095</name>
Oct 11 09:43:06 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:43:06 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:43:06 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <nova:name>tempest-TestSnapshotPattern-server-1990672178</nova:name>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:43:05</nova:creationTime>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:43:06 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:43:06 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:43:06 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:43:06 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:43:06 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:43:06 compute-0 nova_compute[260935]:         <nova:user uuid="2b3a6de4bc924868b4e73b0a23a89090">tempest-TestSnapshotPattern-1818755654-project-member</nova:user>
Oct 11 09:43:06 compute-0 nova_compute[260935]:         <nova:project uuid="d599eac75aee4193a971c2c157a326a8">tempest-TestSnapshotPattern-1818755654</nova:project>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:43:06 compute-0 nova_compute[260935]:         <nova:port uuid="0079b745-efa5-4641-a180-fee85fb30a5d">
Oct 11 09:43:06 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:43:06 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:43:06 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <system>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <entry name="serial">1f03604e-0a25-4f22-807b-11f2e654be90</entry>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <entry name="uuid">1f03604e-0a25-4f22-807b-11f2e654be90</entry>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     </system>
Oct 11 09:43:06 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:43:06 compute-0 nova_compute[260935]:   <os>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:   </os>
Oct 11 09:43:06 compute-0 nova_compute[260935]:   <features>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:   </features>
Oct 11 09:43:06 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:43:06 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:43:06 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/1f03604e-0a25-4f22-807b-11f2e654be90_disk">
Oct 11 09:43:06 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       </source>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:43:06 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/1f03604e-0a25-4f22-807b-11f2e654be90_disk.config">
Oct 11 09:43:06 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       </source>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:43:06 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:88:b5:6b"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <target dev="tap0079b745-ef"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90/console.log" append="off"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <video>
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     </video>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:43:06 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:43:06 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:43:06 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:43:06 compute-0 nova_compute[260935]: </domain>
Oct 11 09:43:06 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.162 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Preparing to wait for external event network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.162 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.163 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.163 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.164 2 DEBUG nova.virt.libvirt.vif [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:42:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1990672178',display_name='tempest-TestSnapshotPattern-server-1990672178',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1990672178',id=149,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP2ECx0jj3Rt10/VBX40dpY/aktTrRIITXR7+kOE+gKATXjMYBpecra2wtbKgkthLAz8Iw/nS+1WioBHxe0IF9i4guGae1Gh4XxA1njuMr4iSWg7N0/eaZRKp5xLMkN7hg==',key_name='tempest-TestSnapshotPattern-1719255089',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d599eac75aee4193a971c2c157a326a8',ramdisk_id='',reservation_id='r-eoa49vly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1818755654',owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:43:00Z,user_data=None,user_id='2b3a6de4bc924868b4e73b0a23a89090',uuid=1f03604e-0a25-4f22-807b-11f2e654be90,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.164 2 DEBUG nova.network.os_vif_util [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converting VIF {"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.165 2 DEBUG nova.network.os_vif_util [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:b5:6b,bridge_name='br-int',has_traffic_filtering=True,id=0079b745-efa5-4641-a180-fee85fb30a5d,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0079b745-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.166 2 DEBUG os_vif [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:b5:6b,bridge_name='br-int',has_traffic_filtering=True,id=0079b745-efa5-4641-a180-fee85fb30a5d,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0079b745-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.168 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.168 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.173 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0079b745-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.174 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0079b745-ef, col_values=(('external_ids', {'iface-id': '0079b745-efa5-4641-a180-fee85fb30a5d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:b5:6b', 'vm-uuid': '1f03604e-0a25-4f22-807b-11f2e654be90'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:06 compute-0 NetworkManager[44960]: <info>  [1760175786.1769] manager: (tap0079b745-ef): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.186 2 INFO os_vif [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:b5:6b,bridge_name='br-int',has_traffic_filtering=True,id=0079b745-efa5-4641-a180-fee85fb30a5d,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0079b745-ef')
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.243 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.244 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.244 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] No VIF found with MAC fa:16:3e:88:b5:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.245 2 INFO nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Using config drive
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.274 2 DEBUG nova.storage.rbd_utils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 1f03604e-0a25-4f22-807b-11f2e654be90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:43:06 compute-0 sshd-session[428903]: Connection closed by invalid user mysql 165.232.82.252 port 53426 [preauth]
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.513 2 DEBUG nova.network.neutron [req-b809192d-1ef9-406f-841e-229d53aa9ca4 req-bca78657-e6e3-4dcc-bdd6-92fbd1951939 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Updated VIF entry in instance network info cache for port 0079b745-efa5-4641-a180-fee85fb30a5d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.515 2 DEBUG nova.network.neutron [req-b809192d-1ef9-406f-841e-229d53aa9ca4 req-bca78657-e6e3-4dcc-bdd6-92fbd1951939 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Updating instance_info_cache with network_info: [{"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.543 2 DEBUG oslo_concurrency.lockutils [req-b809192d-1ef9-406f-841e-229d53aa9ca4 req-bca78657-e6e3-4dcc-bdd6-92fbd1951939 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.642 2 INFO nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Creating config drive at /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90/disk.config
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.653 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp47vqskgy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.820 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp47vqskgy" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.868 2 DEBUG nova.storage.rbd_utils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 1f03604e-0a25-4f22-807b-11f2e654be90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:43:06 compute-0 nova_compute[260935]: 2025-10-11 09:43:06.873 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90/disk.config 1f03604e-0a25-4f22-807b-11f2e654be90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:43:06 compute-0 ceph-mon[74313]: pgmap v3043: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 11 09:43:06 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1707526989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:43:07 compute-0 nova_compute[260935]: 2025-10-11 09:43:07.083 2 DEBUG oslo_concurrency.processutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90/disk.config 1f03604e-0a25-4f22-807b-11f2e654be90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:43:07 compute-0 nova_compute[260935]: 2025-10-11 09:43:07.085 2 INFO nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Deleting local config drive /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90/disk.config because it was imported into RBD.
Oct 11 09:43:07 compute-0 kernel: tap0079b745-ef: entered promiscuous mode
Oct 11 09:43:07 compute-0 NetworkManager[44960]: <info>  [1760175787.1631] manager: (tap0079b745-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/642)
Oct 11 09:43:07 compute-0 ovn_controller[152945]: 2025-10-11T09:43:07Z|01705|binding|INFO|Claiming lport 0079b745-efa5-4641-a180-fee85fb30a5d for this chassis.
Oct 11 09:43:07 compute-0 ovn_controller[152945]: 2025-10-11T09:43:07Z|01706|binding|INFO|0079b745-efa5-4641-a180-fee85fb30a5d: Claiming fa:16:3e:88:b5:6b 10.100.0.3
Oct 11 09:43:07 compute-0 nova_compute[260935]: 2025-10-11 09:43:07.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.187 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:b5:6b 10.100.0.3'], port_security=['fa:16:3e:88:b5:6b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1f03604e-0a25-4f22-807b-11f2e654be90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-424246b3-55aa-428f-9446-55ed3d626b5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd599eac75aee4193a971c2c157a326a8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d823c59-8c61-45f8-94bd-622699d4ae58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8021c16-4de3-44ff-b5a5-7602224b9734, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0079b745-efa5-4641-a180-fee85fb30a5d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.190 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0079b745-efa5-4641-a180-fee85fb30a5d in datapath 424246b3-55aa-428f-9446-55ed3d626b5d bound to our chassis
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.195 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 424246b3-55aa-428f-9446-55ed3d626b5d
Oct 11 09:43:07 compute-0 systemd-udevd[429973]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.216 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6963919c-7c66-4325-ba66-f0fbd2d5fc4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.217 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap424246b3-51 in ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.221 276945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap424246b3-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.222 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c928771b-2295-4196-b783-9d27e1aaa233]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.223 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[70c32c64-beea-4c21-b844-170eedf12942]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:07 compute-0 systemd-machined[215705]: New machine qemu-175-instance-00000095.
Oct 11 09:43:07 compute-0 NetworkManager[44960]: <info>  [1760175787.2352] device (tap0079b745-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:43:07 compute-0 NetworkManager[44960]: <info>  [1760175787.2366] device (tap0079b745-ef): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:43:07 compute-0 systemd[1]: Started Virtual Machine qemu-175-instance-00000095.
Oct 11 09:43:07 compute-0 nova_compute[260935]: 2025-10-11 09:43:07.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.252 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[42c78a17-afcf-4582-9b8c-77dbf16147c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:07 compute-0 ovn_controller[152945]: 2025-10-11T09:43:07Z|01707|binding|INFO|Setting lport 0079b745-efa5-4641-a180-fee85fb30a5d ovn-installed in OVS
Oct 11 09:43:07 compute-0 ovn_controller[152945]: 2025-10-11T09:43:07Z|01708|binding|INFO|Setting lport 0079b745-efa5-4641-a180-fee85fb30a5d up in Southbound
Oct 11 09:43:07 compute-0 nova_compute[260935]: 2025-10-11 09:43:07.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.283 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[4cedb6cf-e0f8-467b-bd15-8780791537bd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.316 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[885109f9-9601-498e-ab90-110d82570d65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:07 compute-0 systemd-udevd[430004]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:43:07 compute-0 NetworkManager[44960]: <info>  [1760175787.3249] manager: (tap424246b3-50): new Veth device (/org/freedesktop/NetworkManager/Devices/643)
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.324 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5bfd30c1-c493-468e-928c-01113f9e9586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.365 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd6d0ad-cbac-46df-b522-cde97882a3d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.369 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[8bccf3ff-b7e6-4b90-8f0f-bfd0c25fa0c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:07 compute-0 NetworkManager[44960]: <info>  [1760175787.3977] device (tap424246b3-50): carrier: link connected
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.408 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3d05cff4-1253-4895-8fef-5f69d198690f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3044: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.433 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9a4072-55a6-48eb-a109-679309355bb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap424246b3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:28:e0:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763210, 'reachable_time': 21658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 430551, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.458 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[18538dc6-d6ad-45e4-a636-c7d88d1c73d0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe28:e02b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763210, 'tstamp': 763210}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430652, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.481 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[92c39f9a-2abd-4dcc-a4a0-e7051c53ef55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap424246b3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:28:e0:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763210, 'reachable_time': 21658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 430753, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.520 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[37c305d5-e431-439a-9768-e792c449625b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.594 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[bc490561-7ebd-451b-83e7-f98b5408c4d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.596 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap424246b3-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.596 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.597 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap424246b3-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:43:07 compute-0 nova_compute[260935]: 2025-10-11 09:43:07.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:07 compute-0 NetworkManager[44960]: <info>  [1760175787.6007] manager: (tap424246b3-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/644)
Oct 11 09:43:07 compute-0 kernel: tap424246b3-50: entered promiscuous mode
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.604 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap424246b3-50, col_values=(('external_ids', {'iface-id': 'e23ac8d5-df18-43b3-bb12-29f5cd9ce802'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:43:07 compute-0 nova_compute[260935]: 2025-10-11 09:43:07.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:07 compute-0 ovn_controller[152945]: 2025-10-11T09:43:07Z|01709|binding|INFO|Releasing lport e23ac8d5-df18-43b3-bb12-29f5cd9ce802 from this chassis (sb_readonly=0)
Oct 11 09:43:07 compute-0 nova_compute[260935]: 2025-10-11 09:43:07.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.629 162815 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/424246b3-55aa-428f-9446-55ed3d626b5d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/424246b3-55aa-428f-9446-55ed3d626b5d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.630 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[f6812a6f-c315-41d5-9694-687166900cf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.631 162815 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: global
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     log         /dev/log local0 debug
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     log-tag     haproxy-metadata-proxy-424246b3-55aa-428f-9446-55ed3d626b5d
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     user        root
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     group       root
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     maxconn     1024
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     pidfile     /var/lib/neutron/external/pids/424246b3-55aa-428f-9446-55ed3d626b5d.pid.haproxy
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     daemon
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: defaults
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     log global
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     mode http
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     option httplog
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     option dontlognull
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     option http-server-close
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     option forwardfor
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     retries                 3
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     timeout http-request    30s
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     timeout connect         30s
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     timeout client          32s
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     timeout server          32s
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     timeout http-keep-alive 30s
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: listen listener
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     bind 169.254.169.254:80
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     server metadata /var/lib/neutron/metadata_proxy
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:     http-request add-header X-OVN-Network-ID 424246b3-55aa-428f-9446-55ed3d626b5d
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 11 09:43:07 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:07.633 162815 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'env', 'PROCESS_TAG=haproxy-424246b3-55aa-428f-9446-55ed3d626b5d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/424246b3-55aa-428f-9446-55ed3d626b5d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 11 09:43:07 compute-0 bold_lovelace[429157]: [
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:     {
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:         "available": false,
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:         "ceph_device": false,
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:         "lsm_data": {},
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:         "lvs": [],
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:         "path": "/dev/sr0",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:         "rejected_reasons": [
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "Has a FileSystem",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "Insufficient space (<5GB)"
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:         ],
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:         "sys_api": {
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "actuators": null,
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "device_nodes": "sr0",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "devname": "sr0",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "human_readable_size": "482.00 KB",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "id_bus": "ata",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "model": "QEMU DVD-ROM",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "nr_requests": "2",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "parent": "/dev/sr0",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "partitions": {},
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "path": "/dev/sr0",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "removable": "1",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "rev": "2.5+",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "ro": "0",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "rotational": "0",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "sas_address": "",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "sas_device_handle": "",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "scheduler_mode": "mq-deadline",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "sectors": 0,
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "sectorsize": "2048",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "size": 493568.0,
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "support_discard": "2048",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "type": "disk",
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:             "vendor": "QEMU"
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:         }
Oct 11 09:43:07 compute-0 bold_lovelace[429157]:     }
Oct 11 09:43:07 compute-0 bold_lovelace[429157]: ]
Oct 11 09:43:07 compute-0 systemd[1]: libpod-bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f.scope: Deactivated successfully.
Oct 11 09:43:07 compute-0 systemd[1]: libpod-bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f.scope: Consumed 1.927s CPU time.
Oct 11 09:43:07 compute-0 podman[431664]: 2025-10-11 09:43:07.798463514 +0000 UTC m=+0.046495525 container died bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:43:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8f0a717cb682ed8212f3ac948020242a6f4e7385c72fba36d642cfc64baed44-merged.mount: Deactivated successfully.
Oct 11 09:43:07 compute-0 podman[431664]: 2025-10-11 09:43:07.873437997 +0000 UTC m=+0.121469918 container remove bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:43:07 compute-0 systemd[1]: libpod-conmon-bb3ace29aaac1f2148cc3940986c4c84a247b3d7c72dfec165b64c7e4db2360f.scope: Deactivated successfully.
Oct 11 09:43:07 compute-0 sudo[428994]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:43:07 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:43:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:43:07 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:43:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:43:07 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:43:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:43:07 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:43:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:43:07 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:43:07 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 10b8bcda-9811-49d5-9e54-b1bc8282ec99 does not exist
Oct 11 09:43:07 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev f599ce44-41e0-4c9e-901e-80f6b44f82a5 does not exist
Oct 11 09:43:07 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9028ebff-1ccd-437f-9946-491a52ce5396 does not exist
Oct 11 09:43:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:43:07 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:43:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:43:07 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:43:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:43:07 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.010 2 DEBUG nova.compute.manager [req-cd5b286d-81bd-4376-8e0a-cdce4859dac3 req-b0accb5e-bb99-4ecc-819d-9a2a41e16548 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.010 2 DEBUG oslo_concurrency.lockutils [req-cd5b286d-81bd-4376-8e0a-cdce4859dac3 req-b0accb5e-bb99-4ecc-819d-9a2a41e16548 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.011 2 DEBUG oslo_concurrency.lockutils [req-cd5b286d-81bd-4376-8e0a-cdce4859dac3 req-b0accb5e-bb99-4ecc-819d-9a2a41e16548 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.011 2 DEBUG oslo_concurrency.lockutils [req-cd5b286d-81bd-4376-8e0a-cdce4859dac3 req-b0accb5e-bb99-4ecc-819d-9a2a41e16548 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.012 2 DEBUG nova.compute.manager [req-cd5b286d-81bd-4376-8e0a-cdce4859dac3 req-b0accb5e-bb99-4ecc-819d-9a2a41e16548 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Processing event network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:43:08 compute-0 sudo[431702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:43:08 compute-0 sudo[431702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:08 compute-0 sudo[431702]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:08 compute-0 podman[431719]: 2025-10-11 09:43:08.073953331 +0000 UTC m=+0.073864853 container create f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 09:43:08 compute-0 systemd[1]: Started libpod-conmon-f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157.scope.
Oct 11 09:43:08 compute-0 podman[431719]: 2025-10-11 09:43:08.043315202 +0000 UTC m=+0.043226734 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 11 09:43:08 compute-0 sudo[431774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:43:08 compute-0 sudo[431774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:08 compute-0 sudo[431774]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86c02cc2cbc8761b2102a73fcb1e50eef8c8d1227adb1efe533f1256a4c9e7f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:08 compute-0 podman[431719]: 2025-10-11 09:43:08.187523506 +0000 UTC m=+0.187435098 container init f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:43:08 compute-0 podman[431719]: 2025-10-11 09:43:08.195350616 +0000 UTC m=+0.195262168 container start f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:43:08 compute-0 neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d[431804]: [NOTICE]   (431828) : New worker (431836) forked
Oct 11 09:43:08 compute-0 neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d[431804]: [NOTICE]   (431828) : Loading success.
Oct 11 09:43:08 compute-0 sudo[431809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:43:08 compute-0 sudo[431809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:08 compute-0 sudo[431809]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:08 compute-0 sudo[431847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:43:08 compute-0 sudo[431847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.699 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175788.698853, 1f03604e-0a25-4f22-807b-11f2e654be90 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.701 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] VM Started (Lifecycle Event)
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.704 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.710 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.715 2 INFO nova.virt.libvirt.driver [-] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Instance spawned successfully.
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.716 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.727 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.734 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:43:08 compute-0 podman[431912]: 2025-10-11 09:43:08.745789454 +0000 UTC m=+0.073442641 container create 2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lumiere, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.750 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.750 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.751 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.751 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.752 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.753 2 DEBUG nova.virt.libvirt.driver [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.761 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.761 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175788.7011256, 1f03604e-0a25-4f22-807b-11f2e654be90 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.762 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] VM Paused (Lifecycle Event)
Oct 11 09:43:08 compute-0 systemd[1]: Started libpod-conmon-2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360.scope.
Oct 11 09:43:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:43:08 compute-0 podman[431912]: 2025-10-11 09:43:08.726831433 +0000 UTC m=+0.054484640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.826 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.831 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175788.7081282, 1f03604e-0a25-4f22-807b-11f2e654be90 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.831 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] VM Resumed (Lifecycle Event)
Oct 11 09:43:08 compute-0 podman[431912]: 2025-10-11 09:43:08.836644173 +0000 UTC m=+0.164297350 container init 2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lumiere, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 09:43:08 compute-0 podman[431912]: 2025-10-11 09:43:08.842950539 +0000 UTC m=+0.170603746 container start 2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lumiere, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 09:43:08 compute-0 podman[431912]: 2025-10-11 09:43:08.846749116 +0000 UTC m=+0.174402313 container attach 2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lumiere, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct 11 09:43:08 compute-0 systemd[1]: libpod-2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360.scope: Deactivated successfully.
Oct 11 09:43:08 compute-0 inspiring_lumiere[431928]: 167 167
Oct 11 09:43:08 compute-0 conmon[431928]: conmon 2e956781cf9e473375f7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360.scope/container/memory.events
Oct 11 09:43:08 compute-0 podman[431912]: 2025-10-11 09:43:08.852880778 +0000 UTC m=+0.180533995 container died 2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lumiere, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.874 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.878 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.885 2 INFO nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Took 8.38 seconds to spawn the instance on the hypervisor.
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.885 2 DEBUG nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:43:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bbfe8e7f00aa1f6d64a8783ddcc5e6fa727c90ce97881b2165f3f3e3b82bb0a-merged.mount: Deactivated successfully.
Oct 11 09:43:08 compute-0 podman[431912]: 2025-10-11 09:43:08.90108796 +0000 UTC m=+0.228741147 container remove 2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.902 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:43:08 compute-0 systemd[1]: libpod-conmon-2e956781cf9e473375f76efb6c98b8e440a0ca0f493749d8fad231eb8fa0a360.scope: Deactivated successfully.
Oct 11 09:43:08 compute-0 ceph-mon[74313]: pgmap v3044: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 11 09:43:08 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:43:08 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:43:08 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:43:08 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:43:08 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:43:08 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:43:08 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:43:08 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:43:08 compute-0 nova_compute[260935]: 2025-10-11 09:43:08.978 2 INFO nova.compute.manager [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Took 9.42 seconds to build instance.
Oct 11 09:43:09 compute-0 nova_compute[260935]: 2025-10-11 09:43:09.003 2 DEBUG oslo_concurrency.lockutils [None req-0457b85d-0cc0-477b-b645-cef4c4bd5a46 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:43:09 compute-0 podman[431954]: 2025-10-11 09:43:09.16634835 +0000 UTC m=+0.057300508 container create 1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shamir, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:43:09 compute-0 podman[431954]: 2025-10-11 09:43:09.139655941 +0000 UTC m=+0.030608129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:43:09 compute-0 systemd[1]: Started libpod-conmon-1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8.scope.
Oct 11 09:43:09 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:43:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c2179f1ee0ad2d9c40e7b03c6fc2483a2bc73caaf0a61f5fd4959880199b6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c2179f1ee0ad2d9c40e7b03c6fc2483a2bc73caaf0a61f5fd4959880199b6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c2179f1ee0ad2d9c40e7b03c6fc2483a2bc73caaf0a61f5fd4959880199b6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c2179f1ee0ad2d9c40e7b03c6fc2483a2bc73caaf0a61f5fd4959880199b6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c2179f1ee0ad2d9c40e7b03c6fc2483a2bc73caaf0a61f5fd4959880199b6f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:09 compute-0 podman[431954]: 2025-10-11 09:43:09.297147309 +0000 UTC m=+0.188099477 container init 1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:43:09 compute-0 podman[431954]: 2025-10-11 09:43:09.3053991 +0000 UTC m=+0.196351258 container start 1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shamir, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:43:09 compute-0 podman[431954]: 2025-10-11 09:43:09.308999201 +0000 UTC m=+0.199951349 container attach 1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:43:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3045: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 09:43:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:43:10 compute-0 nova_compute[260935]: 2025-10-11 09:43:10.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:10 compute-0 nova_compute[260935]: 2025-10-11 09:43:10.083 2 DEBUG nova.compute.manager [req-a208f417-83df-4d5b-9b65-2d22714c8d45 req-cb6e1f20-98dd-4f8b-ac19-d73cff918f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:43:10 compute-0 nova_compute[260935]: 2025-10-11 09:43:10.084 2 DEBUG oslo_concurrency.lockutils [req-a208f417-83df-4d5b-9b65-2d22714c8d45 req-cb6e1f20-98dd-4f8b-ac19-d73cff918f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:43:10 compute-0 nova_compute[260935]: 2025-10-11 09:43:10.084 2 DEBUG oslo_concurrency.lockutils [req-a208f417-83df-4d5b-9b65-2d22714c8d45 req-cb6e1f20-98dd-4f8b-ac19-d73cff918f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:43:10 compute-0 nova_compute[260935]: 2025-10-11 09:43:10.084 2 DEBUG oslo_concurrency.lockutils [req-a208f417-83df-4d5b-9b65-2d22714c8d45 req-cb6e1f20-98dd-4f8b-ac19-d73cff918f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:43:10 compute-0 nova_compute[260935]: 2025-10-11 09:43:10.084 2 DEBUG nova.compute.manager [req-a208f417-83df-4d5b-9b65-2d22714c8d45 req-cb6e1f20-98dd-4f8b-ac19-d73cff918f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] No waiting events found dispatching network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:43:10 compute-0 nova_compute[260935]: 2025-10-11 09:43:10.085 2 WARNING nova.compute.manager [req-a208f417-83df-4d5b-9b65-2d22714c8d45 req-cb6e1f20-98dd-4f8b-ac19-d73cff918f3f e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received unexpected event network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d for instance with vm_state active and task_state None.
Oct 11 09:43:10 compute-0 vigilant_shamir[431971]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:43:10 compute-0 vigilant_shamir[431971]: --> relative data size: 1.0
Oct 11 09:43:10 compute-0 vigilant_shamir[431971]: --> All data devices are unavailable
Oct 11 09:43:10 compute-0 systemd[1]: libpod-1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8.scope: Deactivated successfully.
Oct 11 09:43:10 compute-0 systemd[1]: libpod-1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8.scope: Consumed 1.123s CPU time.
Oct 11 09:43:10 compute-0 podman[432001]: 2025-10-11 09:43:10.636546615 +0000 UTC m=+0.049948272 container died 1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shamir, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:43:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5c2179f1ee0ad2d9c40e7b03c6fc2483a2bc73caaf0a61f5fd4959880199b6f-merged.mount: Deactivated successfully.
Oct 11 09:43:10 compute-0 podman[432000]: 2025-10-11 09:43:10.687457983 +0000 UTC m=+0.089996286 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 11 09:43:10 compute-0 podman[432001]: 2025-10-11 09:43:10.718487753 +0000 UTC m=+0.131889370 container remove 1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shamir, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 09:43:10 compute-0 systemd[1]: libpod-conmon-1f53bf1e17932a138ecf2922f56188a39b18e7057164724457f4c8c269021dc8.scope: Deactivated successfully.
Oct 11 09:43:10 compute-0 sudo[431847]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:10 compute-0 sudo[432033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:43:10 compute-0 sudo[432033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:10 compute-0 sudo[432033]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:10 compute-0 sudo[432058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:43:10 compute-0 ceph-mon[74313]: pgmap v3045: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 09:43:10 compute-0 sudo[432058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:10 compute-0 sudo[432058]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:11 compute-0 sudo[432083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:43:11 compute-0 sudo[432083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:11 compute-0 sudo[432083]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:11 compute-0 sudo[432108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:43:11 compute-0 sudo[432108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:11 compute-0 nova_compute[260935]: 2025-10-11 09:43:11.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3046: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 09:43:11 compute-0 podman[432174]: 2025-10-11 09:43:11.650704469 +0000 UTC m=+0.065260141 container create ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:43:11 compute-0 systemd[1]: Started libpod-conmon-ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000.scope.
Oct 11 09:43:11 compute-0 podman[432174]: 2025-10-11 09:43:11.617785596 +0000 UTC m=+0.032341258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:43:11 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:43:11 compute-0 podman[432174]: 2025-10-11 09:43:11.760433277 +0000 UTC m=+0.174988999 container init ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:43:11 compute-0 podman[432174]: 2025-10-11 09:43:11.77480471 +0000 UTC m=+0.189360382 container start ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 09:43:11 compute-0 podman[432174]: 2025-10-11 09:43:11.779880352 +0000 UTC m=+0.194436114 container attach ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:43:11 compute-0 hardcore_poitras[432189]: 167 167
Oct 11 09:43:11 compute-0 systemd[1]: libpod-ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000.scope: Deactivated successfully.
Oct 11 09:43:11 compute-0 podman[432174]: 2025-10-11 09:43:11.785573052 +0000 UTC m=+0.200128724 container died ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:43:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-99c4704daf49d622146bffd0c72f808c2245a73f001cb4f011b0c8139cb6dc79-merged.mount: Deactivated successfully.
Oct 11 09:43:11 compute-0 podman[432174]: 2025-10-11 09:43:11.83648287 +0000 UTC m=+0.251038542 container remove ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poitras, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:43:11 compute-0 systemd[1]: libpod-conmon-ad9080768b607930d81d193c7c7d513db5d7e8cef5a20674c49dfffa3926d000.scope: Deactivated successfully.
Oct 11 09:43:12 compute-0 podman[432213]: 2025-10-11 09:43:12.116753761 +0000 UTC m=+0.074554352 container create 1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:43:12 compute-0 podman[432213]: 2025-10-11 09:43:12.086468192 +0000 UTC m=+0.044268843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:43:12 compute-0 systemd[1]: Started libpod-conmon-1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274.scope.
Oct 11 09:43:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:43:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa092e71b1770e0d8a0b1aefc42f0b6ec71c6864da0d70694a723f7f2ab0213b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa092e71b1770e0d8a0b1aefc42f0b6ec71c6864da0d70694a723f7f2ab0213b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa092e71b1770e0d8a0b1aefc42f0b6ec71c6864da0d70694a723f7f2ab0213b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa092e71b1770e0d8a0b1aefc42f0b6ec71c6864da0d70694a723f7f2ab0213b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:12 compute-0 podman[432213]: 2025-10-11 09:43:12.254020621 +0000 UTC m=+0.211821272 container init 1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:43:12 compute-0 podman[432213]: 2025-10-11 09:43:12.268374684 +0000 UTC m=+0.226175275 container start 1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 09:43:12 compute-0 podman[432213]: 2025-10-11 09:43:12.27324811 +0000 UTC m=+0.231048701 container attach 1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 09:43:12 compute-0 NetworkManager[44960]: <info>  [1760175792.3549] manager: (patch-br-int-to-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/645)
Oct 11 09:43:12 compute-0 nova_compute[260935]: 2025-10-11 09:43:12.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:12 compute-0 NetworkManager[44960]: <info>  [1760175792.3576] manager: (patch-provnet-02213cae-d648-4a8f-b18b-9bdf9573f1be-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/646)
Oct 11 09:43:12 compute-0 nova_compute[260935]: 2025-10-11 09:43:12.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:12 compute-0 ovn_controller[152945]: 2025-10-11T09:43:12Z|01710|binding|INFO|Releasing lport e23ac8d5-df18-43b3-bb12-29f5cd9ce802 from this chassis (sb_readonly=0)
Oct 11 09:43:12 compute-0 ovn_controller[152945]: 2025-10-11T09:43:12Z|01711|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:43:12 compute-0 ovn_controller[152945]: 2025-10-11T09:43:12Z|01712|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:43:12 compute-0 nova_compute[260935]: 2025-10-11 09:43:12.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:12 compute-0 ceph-mon[74313]: pgmap v3046: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]: {
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:     "0": [
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:         {
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "devices": [
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "/dev/loop3"
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             ],
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "lv_name": "ceph_lv0",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "lv_size": "21470642176",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "name": "ceph_lv0",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "tags": {
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.cluster_name": "ceph",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.crush_device_class": "",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.encrypted": "0",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.osd_id": "0",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.type": "block",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.vdo": "0"
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             },
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "type": "block",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "vg_name": "ceph_vg0"
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:         }
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:     ],
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:     "1": [
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:         {
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "devices": [
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "/dev/loop4"
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             ],
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "lv_name": "ceph_lv1",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "lv_size": "21470642176",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "name": "ceph_lv1",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "tags": {
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.cluster_name": "ceph",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.crush_device_class": "",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.encrypted": "0",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.osd_id": "1",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.type": "block",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.vdo": "0"
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             },
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "type": "block",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "vg_name": "ceph_vg1"
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:         }
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:     ],
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:     "2": [
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:         {
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "devices": [
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "/dev/loop5"
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             ],
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "lv_name": "ceph_lv2",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "lv_size": "21470642176",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "name": "ceph_lv2",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "tags": {
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.cluster_name": "ceph",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.crush_device_class": "",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.encrypted": "0",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.osd_id": "2",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.type": "block",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:                 "ceph.vdo": "0"
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             },
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "type": "block",
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:             "vg_name": "ceph_vg2"
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:         }
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]:     ]
Oct 11 09:43:13 compute-0 admiring_mcnulty[432230]: }
Oct 11 09:43:13 compute-0 nova_compute[260935]: 2025-10-11 09:43:13.036 2 DEBUG nova.compute.manager [req-c0a05136-1ab6-47af-8820-1513e5ee3484 req-914b9789-4a5b-4c3a-949c-34b877341476 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-changed-0079b745-efa5-4641-a180-fee85fb30a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:43:13 compute-0 nova_compute[260935]: 2025-10-11 09:43:13.036 2 DEBUG nova.compute.manager [req-c0a05136-1ab6-47af-8820-1513e5ee3484 req-914b9789-4a5b-4c3a-949c-34b877341476 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Refreshing instance network info cache due to event network-changed-0079b745-efa5-4641-a180-fee85fb30a5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:43:13 compute-0 nova_compute[260935]: 2025-10-11 09:43:13.036 2 DEBUG oslo_concurrency.lockutils [req-c0a05136-1ab6-47af-8820-1513e5ee3484 req-914b9789-4a5b-4c3a-949c-34b877341476 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:43:13 compute-0 nova_compute[260935]: 2025-10-11 09:43:13.037 2 DEBUG oslo_concurrency.lockutils [req-c0a05136-1ab6-47af-8820-1513e5ee3484 req-914b9789-4a5b-4c3a-949c-34b877341476 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:43:13 compute-0 nova_compute[260935]: 2025-10-11 09:43:13.037 2 DEBUG nova.network.neutron [req-c0a05136-1ab6-47af-8820-1513e5ee3484 req-914b9789-4a5b-4c3a-949c-34b877341476 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Refreshing network info cache for port 0079b745-efa5-4641-a180-fee85fb30a5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:43:13 compute-0 systemd[1]: libpod-1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274.scope: Deactivated successfully.
Oct 11 09:43:13 compute-0 podman[432213]: 2025-10-11 09:43:13.060302625 +0000 UTC m=+1.018103186 container died 1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 09:43:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:13.092 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:43:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:13.094 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:43:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa092e71b1770e0d8a0b1aefc42f0b6ec71c6864da0d70694a723f7f2ab0213b-merged.mount: Deactivated successfully.
Oct 11 09:43:13 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:13.095 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:43:13 compute-0 nova_compute[260935]: 2025-10-11 09:43:13.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:13 compute-0 podman[432213]: 2025-10-11 09:43:13.129057654 +0000 UTC m=+1.086858205 container remove 1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 09:43:13 compute-0 sudo[432108]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:13 compute-0 systemd[1]: libpod-conmon-1fb14a8902dd17c4d24b6cfdc742f2180c70686186d2456f4aa7d58b7f9ba274.scope: Deactivated successfully.
Oct 11 09:43:13 compute-0 sudo[432252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:43:13 compute-0 sudo[432252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:13 compute-0 sudo[432252]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:13 compute-0 sudo[432277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:43:13 compute-0 sudo[432277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:13 compute-0 sudo[432277]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:13 compute-0 sudo[432302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:43:13 compute-0 sudo[432302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:13 compute-0 sudo[432302]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3047: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:43:13 compute-0 sudo[432327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:43:13 compute-0 sudo[432327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:13 compute-0 podman[432392]: 2025-10-11 09:43:13.912387403 +0000 UTC m=+0.063367638 container create 2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:43:13 compute-0 systemd[1]: Started libpod-conmon-2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c.scope.
Oct 11 09:43:13 compute-0 podman[432392]: 2025-10-11 09:43:13.889361167 +0000 UTC m=+0.040341482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:43:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:43:14 compute-0 podman[432392]: 2025-10-11 09:43:14.015251268 +0000 UTC m=+0.166231553 container init 2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:43:14 compute-0 podman[432392]: 2025-10-11 09:43:14.026385001 +0000 UTC m=+0.177365276 container start 2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 09:43:14 compute-0 podman[432392]: 2025-10-11 09:43:14.03099767 +0000 UTC m=+0.181977915 container attach 2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 09:43:14 compute-0 elegant_thompson[432408]: 167 167
Oct 11 09:43:14 compute-0 systemd[1]: libpod-2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c.scope: Deactivated successfully.
Oct 11 09:43:14 compute-0 podman[432392]: 2025-10-11 09:43:14.035181117 +0000 UTC m=+0.186161392 container died 2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 11 09:43:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a82d96b790c37660197fd8b534c81a85ea3e1934b77b4d529cbc19a196aa98a-merged.mount: Deactivated successfully.
Oct 11 09:43:14 compute-0 podman[432392]: 2025-10-11 09:43:14.094642575 +0000 UTC m=+0.245622840 container remove 2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 09:43:14 compute-0 systemd[1]: libpod-conmon-2897252657a8fe6ea1ce9b8cb93532342f033984969cf330cd6b5318275f532c.scope: Deactivated successfully.
Oct 11 09:43:14 compute-0 podman[432431]: 2025-10-11 09:43:14.430756072 +0000 UTC m=+0.080299383 container create 1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:43:14 compute-0 podman[432431]: 2025-10-11 09:43:14.399273249 +0000 UTC m=+0.048816570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:43:14 compute-0 systemd[1]: Started libpod-conmon-1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c.scope.
Oct 11 09:43:14 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c08a52a284e4b028e5ee1a137f00e3a778c90125ce25b82728c73a18e07eaff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c08a52a284e4b028e5ee1a137f00e3a778c90125ce25b82728c73a18e07eaff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c08a52a284e4b028e5ee1a137f00e3a778c90125ce25b82728c73a18e07eaff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c08a52a284e4b028e5ee1a137f00e3a778c90125ce25b82728c73a18e07eaff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:43:14 compute-0 podman[432431]: 2025-10-11 09:43:14.552845917 +0000 UTC m=+0.202389258 container init 1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 09:43:14 compute-0 podman[432431]: 2025-10-11 09:43:14.567846097 +0000 UTC m=+0.217389368 container start 1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:43:14 compute-0 podman[432431]: 2025-10-11 09:43:14.571596903 +0000 UTC m=+0.221140254 container attach 1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:43:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.723738) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175794723810, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 687, "num_deletes": 250, "total_data_size": 828326, "memory_usage": 841472, "flush_reason": "Manual Compaction"}
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175794730320, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 544444, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63324, "largest_seqno": 64010, "table_properties": {"data_size": 541336, "index_size": 1015, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8325, "raw_average_key_size": 20, "raw_value_size": 534815, "raw_average_value_size": 1327, "num_data_blocks": 45, "num_entries": 403, "num_filter_entries": 403, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175741, "oldest_key_time": 1760175741, "file_creation_time": 1760175794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 6611 microseconds, and 2374 cpu microseconds.
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.730360) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 544444 bytes OK
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.730374) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.732108) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.732120) EVENT_LOG_v1 {"time_micros": 1760175794732116, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.732136) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 824731, prev total WAL file size 824731, number of live WAL files 2.
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.732678) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353034' seq:72057594037927935, type:22 .. '6D6772737461740032373535' seq:0, type:0; will stop at (end)
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(531KB)], [149(10MB)]
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175794732785, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 11707815, "oldest_snapshot_seqno": -1}
Oct 11 09:43:14 compute-0 nova_compute[260935]: 2025-10-11 09:43:14.782 2 DEBUG nova.network.neutron [req-c0a05136-1ab6-47af-8820-1513e5ee3484 req-914b9789-4a5b-4c3a-949c-34b877341476 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Updated VIF entry in instance network info cache for port 0079b745-efa5-4641-a180-fee85fb30a5d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:43:14 compute-0 nova_compute[260935]: 2025-10-11 09:43:14.784 2 DEBUG nova.network.neutron [req-c0a05136-1ab6-47af-8820-1513e5ee3484 req-914b9789-4a5b-4c3a-949c-34b877341476 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Updating instance_info_cache with network_info: [{"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8083 keys, 8684197 bytes, temperature: kUnknown
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175794793235, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 8684197, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8634871, "index_size": 28029, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20229, "raw_key_size": 211951, "raw_average_key_size": 26, "raw_value_size": 8495318, "raw_average_value_size": 1051, "num_data_blocks": 1081, "num_entries": 8083, "num_filter_entries": 8083, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.793505) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 8684197 bytes
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.795649) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.5 rd, 143.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.6 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(37.5) write-amplify(16.0) OK, records in: 8573, records dropped: 490 output_compression: NoCompression
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.795670) EVENT_LOG_v1 {"time_micros": 1760175794795660, "job": 92, "event": "compaction_finished", "compaction_time_micros": 60512, "compaction_time_cpu_micros": 25503, "output_level": 6, "num_output_files": 1, "total_output_size": 8684197, "num_input_records": 8573, "num_output_records": 8083, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175794796059, "job": 92, "event": "table_file_deletion", "file_number": 151}
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175794798241, "job": 92, "event": "table_file_deletion", "file_number": 149}
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.732568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.798392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.798401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.798406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.798409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:43:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:43:14.798412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:43:14 compute-0 nova_compute[260935]: 2025-10-11 09:43:14.810 2 DEBUG oslo_concurrency.lockutils [req-c0a05136-1ab6-47af-8820-1513e5ee3484 req-914b9789-4a5b-4c3a-949c-34b877341476 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:43:14 compute-0 ceph-mon[74313]: pgmap v3047: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 11 09:43:15 compute-0 nova_compute[260935]: 2025-10-11 09:43:15.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:15.239 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:43:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:15.240 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:43:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:15.240 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:43:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3048: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:43:15 compute-0 tender_saha[432448]: {
Oct 11 09:43:15 compute-0 tender_saha[432448]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:43:15 compute-0 tender_saha[432448]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:43:15 compute-0 tender_saha[432448]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:43:15 compute-0 tender_saha[432448]:         "osd_id": 2,
Oct 11 09:43:15 compute-0 tender_saha[432448]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:43:15 compute-0 tender_saha[432448]:         "type": "bluestore"
Oct 11 09:43:15 compute-0 tender_saha[432448]:     },
Oct 11 09:43:15 compute-0 tender_saha[432448]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:43:15 compute-0 tender_saha[432448]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:43:15 compute-0 tender_saha[432448]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:43:15 compute-0 tender_saha[432448]:         "osd_id": 0,
Oct 11 09:43:15 compute-0 tender_saha[432448]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:43:15 compute-0 tender_saha[432448]:         "type": "bluestore"
Oct 11 09:43:15 compute-0 tender_saha[432448]:     },
Oct 11 09:43:15 compute-0 tender_saha[432448]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:43:15 compute-0 tender_saha[432448]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:43:15 compute-0 tender_saha[432448]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:43:15 compute-0 tender_saha[432448]:         "osd_id": 1,
Oct 11 09:43:15 compute-0 tender_saha[432448]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:43:15 compute-0 tender_saha[432448]:         "type": "bluestore"
Oct 11 09:43:15 compute-0 tender_saha[432448]:     }
Oct 11 09:43:15 compute-0 tender_saha[432448]: }
Oct 11 09:43:15 compute-0 systemd[1]: libpod-1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c.scope: Deactivated successfully.
Oct 11 09:43:15 compute-0 systemd[1]: libpod-1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c.scope: Consumed 1.134s CPU time.
Oct 11 09:43:15 compute-0 podman[432481]: 2025-10-11 09:43:15.764689766 +0000 UTC m=+0.043500521 container died 1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:43:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c08a52a284e4b028e5ee1a137f00e3a778c90125ce25b82728c73a18e07eaff-merged.mount: Deactivated successfully.
Oct 11 09:43:15 compute-0 podman[432481]: 2025-10-11 09:43:15.846326286 +0000 UTC m=+0.125137001 container remove 1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:43:15 compute-0 systemd[1]: libpod-conmon-1c14042b59e0c1bc43131aff64ee2856172186ee8805f2419b26c8e7acabd75c.scope: Deactivated successfully.
Oct 11 09:43:15 compute-0 sudo[432327]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:43:15 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:43:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:43:15 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:43:15 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev f0365db6-4b73-4cb9-83a0-a06a18263a26 does not exist
Oct 11 09:43:15 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev d8ba6715-907b-4456-982f-218a246e2468 does not exist
Oct 11 09:43:16 compute-0 sudo[432497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:43:16 compute-0 sudo[432497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:16 compute-0 sudo[432497]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:16 compute-0 sudo[432522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:43:16 compute-0 sudo[432522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:43:16 compute-0 sudo[432522]: pam_unix(sudo:session): session closed for user root
Oct 11 09:43:16 compute-0 nova_compute[260935]: 2025-10-11 09:43:16.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:16 compute-0 ceph-mon[74313]: pgmap v3048: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:43:16 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:43:16 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:43:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3049: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:43:17 compute-0 podman[432547]: 2025-10-11 09:43:17.812983025 +0000 UTC m=+0.102621929 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:43:18 compute-0 ceph-mon[74313]: pgmap v3049: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 11 09:43:19 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct 11 09:43:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3050: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Oct 11 09:43:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:43:20 compute-0 nova_compute[260935]: 2025-10-11 09:43:20.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:20 compute-0 ceph-mon[74313]: pgmap v3050: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Oct 11 09:43:21 compute-0 ovn_controller[152945]: 2025-10-11T09:43:21Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:b5:6b 10.100.0.3
Oct 11 09:43:21 compute-0 ovn_controller[152945]: 2025-10-11T09:43:21Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:b5:6b 10.100.0.3
Oct 11 09:43:21 compute-0 nova_compute[260935]: 2025-10-11 09:43:21.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:21 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Oct 11 09:43:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3051: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 09:43:21 compute-0 podman[432567]: 2025-10-11 09:43:21.81139762 +0000 UTC m=+0.110262123 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd)
Oct 11 09:43:21 compute-0 podman[432568]: 2025-10-11 09:43:21.846516285 +0000 UTC m=+0.134136363 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:43:22 compute-0 ceph-mon[74313]: pgmap v3051: 321 pgs: 321 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Oct 11 09:43:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3052: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 11 09:43:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:43:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:43:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:43:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:43:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:43:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:43:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:43:24 compute-0 ceph-mon[74313]: pgmap v3052: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 11 09:43:25 compute-0 nova_compute[260935]: 2025-10-11 09:43:25.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3053: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:43:26 compute-0 nova_compute[260935]: 2025-10-11 09:43:26.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:43:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2699164002' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:43:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:43:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2699164002' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:43:26 compute-0 ceph-mon[74313]: pgmap v3053: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 11 09:43:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2699164002' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:43:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2699164002' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:43:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3054: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:43:28 compute-0 ceph-mon[74313]: pgmap v3054: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:43:29 compute-0 nova_compute[260935]: 2025-10-11 09:43:29.367 2 DEBUG nova.compute.manager [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:43:29 compute-0 nova_compute[260935]: 2025-10-11 09:43:29.420 2 INFO nova.compute.manager [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] instance snapshotting
Oct 11 09:43:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3055: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:43:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:43:30 compute-0 nova_compute[260935]: 2025-10-11 09:43:30.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:30 compute-0 nova_compute[260935]: 2025-10-11 09:43:30.096 2 INFO nova.virt.libvirt.driver [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Beginning live snapshot process
Oct 11 09:43:30 compute-0 nova_compute[260935]: 2025-10-11 09:43:30.320 2 DEBUG nova.virt.libvirt.imagebackend [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] No parent info for 03f2fef0-11c0-48e1-b3a0-3e02d898739e; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 11 09:43:30 compute-0 nova_compute[260935]: 2025-10-11 09:43:30.539 2 DEBUG nova.storage.rbd_utils [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] creating snapshot(32aeeb857bab45b484f9af7c8d20feb0) on rbd image(1f03604e-0a25-4f22-807b-11f2e654be90_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 09:43:30 compute-0 sshd-session[428505]: Invalid user jerri from 2.57.121.112 port 63436
Oct 11 09:43:30 compute-0 sshd-session[428505]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:43:30 compute-0 sshd-session[428505]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=2.57.121.112
Oct 11 09:43:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Oct 11 09:43:30 compute-0 ceph-mon[74313]: pgmap v3055: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 11 09:43:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Oct 11 09:43:31 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Oct 11 09:43:31 compute-0 nova_compute[260935]: 2025-10-11 09:43:31.079 2 DEBUG nova.storage.rbd_utils [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] cloning vms/1f03604e-0a25-4f22-807b-11f2e654be90_disk@32aeeb857bab45b484f9af7c8d20feb0 to images/72225c6c-cee7-4404-9d0d-8915e5edba0a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 09:43:31 compute-0 nova_compute[260935]: 2025-10-11 09:43:31.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:31 compute-0 nova_compute[260935]: 2025-10-11 09:43:31.259 2 DEBUG nova.storage.rbd_utils [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] flattening images/72225c6c-cee7-4404-9d0d-8915e5edba0a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 09:43:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3057: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Oct 11 09:43:31 compute-0 nova_compute[260935]: 2025-10-11 09:43:31.846 2 DEBUG nova.storage.rbd_utils [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] removing snapshot(32aeeb857bab45b484f9af7c8d20feb0) on rbd image(1f03604e-0a25-4f22-807b-11f2e654be90_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 09:43:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Oct 11 09:43:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Oct 11 09:43:32 compute-0 ceph-mon[74313]: osdmap e289: 3 total, 3 up, 3 in
Oct 11 09:43:32 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Oct 11 09:43:32 compute-0 nova_compute[260935]: 2025-10-11 09:43:32.057 2 DEBUG nova.storage.rbd_utils [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] creating snapshot(snap) on rbd image(72225c6c-cee7-4404-9d0d-8915e5edba0a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 09:43:32 compute-0 sshd-session[432667]: Invalid user manu from 155.4.244.179 port 4746
Oct 11 09:43:32 compute-0 sshd-session[432667]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:43:32 compute-0 sshd-session[432667]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:43:32 compute-0 sshd-session[428505]: Failed password for invalid user jerri from 2.57.121.112 port 63436 ssh2
Oct 11 09:43:32 compute-0 nova_compute[260935]: 2025-10-11 09:43:32.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:43:32 compute-0 sshd-session[428505]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:43:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Oct 11 09:43:33 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Oct 11 09:43:33 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Oct 11 09:43:33 compute-0 ceph-mon[74313]: pgmap v3057: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Oct 11 09:43:33 compute-0 ceph-mon[74313]: osdmap e290: 3 total, 3 up, 3 in
Oct 11 09:43:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3060: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 227 op/s
Oct 11 09:43:34 compute-0 ceph-mon[74313]: osdmap e291: 3 total, 3 up, 3 in
Oct 11 09:43:34 compute-0 sshd-session[432667]: Failed password for invalid user manu from 155.4.244.179 port 4746 ssh2
Oct 11 09:43:34 compute-0 nova_compute[260935]: 2025-10-11 09:43:34.589 2 INFO nova.virt.libvirt.driver [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Snapshot image upload complete
Oct 11 09:43:34 compute-0 nova_compute[260935]: 2025-10-11 09:43:34.589 2 INFO nova.compute.manager [None req-51e2322f-f017-4674-9b11-4d8873a16a30 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Took 5.16 seconds to snapshot the instance on the hypervisor.
Oct 11 09:43:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:43:34 compute-0 sshd-session[428505]: Failed password for invalid user jerri from 2.57.121.112 port 63436 ssh2
Oct 11 09:43:35 compute-0 ceph-mon[74313]: pgmap v3060: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 227 op/s
Oct 11 09:43:35 compute-0 nova_compute[260935]: 2025-10-11 09:43:35.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:35 compute-0 sshd-session[432667]: Received disconnect from 155.4.244.179 port 4746:11: Bye Bye [preauth]
Oct 11 09:43:35 compute-0 sshd-session[432667]: Disconnected from invalid user manu 155.4.244.179 port 4746 [preauth]
Oct 11 09:43:35 compute-0 sshd-session[428505]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:43:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3061: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 226 op/s
Oct 11 09:43:36 compute-0 nova_compute[260935]: 2025-10-11 09:43:36.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:37 compute-0 ceph-mon[74313]: pgmap v3061: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 226 op/s
Oct 11 09:43:37 compute-0 sshd-session[428505]: Failed password for invalid user jerri from 2.57.121.112 port 63436 ssh2
Oct 11 09:43:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3062: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 7.3 MiB/s wr, 255 op/s
Oct 11 09:43:37 compute-0 sshd-session[428505]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:43:39 compute-0 ceph-mon[74313]: pgmap v3062: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 7.3 MiB/s wr, 255 op/s
Oct 11 09:43:39 compute-0 nova_compute[260935]: 2025-10-11 09:43:39.127 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:43:39 compute-0 nova_compute[260935]: 2025-10-11 09:43:39.127 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:43:39 compute-0 nova_compute[260935]: 2025-10-11 09:43:39.157 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:43:39 compute-0 nova_compute[260935]: 2025-10-11 09:43:39.353 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:43:39 compute-0 nova_compute[260935]: 2025-10-11 09:43:39.354 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:43:39 compute-0 nova_compute[260935]: 2025-10-11 09:43:39.364 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:43:39 compute-0 nova_compute[260935]: 2025-10-11 09:43:39.365 2 INFO nova.compute.claims [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:43:39 compute-0 sshd-session[432760]: Invalid user mysql from 165.232.82.252 port 37678
Oct 11 09:43:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3063: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.9 MiB/s wr, 226 op/s
Oct 11 09:43:39 compute-0 nova_compute[260935]: 2025-10-11 09:43:39.524 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:43:39 compute-0 sshd-session[432760]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:43:39 compute-0 sshd-session[432760]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:43:39 compute-0 sshd-session[428505]: Failed password for invalid user jerri from 2.57.121.112 port 63436 ssh2
Oct 11 09:43:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:43:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Oct 11 09:43:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Oct 11 09:43:39 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Oct 11 09:43:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:43:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1983641431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.040 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.049 2 DEBUG nova.compute.provider_tree [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.073 2 DEBUG nova.scheduler.client.report [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.103 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.104 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:43:40 compute-0 sshd-session[428505]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.174 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.175 2 DEBUG nova.network.neutron [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.203 2 INFO nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.234 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.376 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.378 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.379 2 INFO nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Creating image(s)
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.416 2 DEBUG nova.storage.rbd_utils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.457 2 DEBUG nova.storage.rbd_utils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.492 2 DEBUG nova.storage.rbd_utils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.496 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "76189c210ef45b7e7189903b8d72d52e9a464e92" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.497 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "76189c210ef45b7e7189903b8d72d52e9a464e92" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:43:40 compute-0 ceph-mon[74313]: pgmap v3063: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.9 MiB/s wr, 226 op/s
Oct 11 09:43:40 compute-0 ceph-mon[74313]: osdmap e292: 3 total, 3 up, 3 in
Oct 11 09:43:40 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1983641431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.764 2 DEBUG nova.virt.libvirt.imagebackend [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Image locations are: [{'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/72225c6c-cee7-4404-9d0d-8915e5edba0a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/72225c6c-cee7-4404-9d0d-8915e5edba0a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.816 2 DEBUG nova.virt.libvirt.imagebackend [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Selected location: {'url': 'rbd://33219f8b-dc38-5a8f-a577-8ccc4b37190a/images/72225c6c-cee7-4404-9d0d-8915e5edba0a/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.817 2 DEBUG nova.storage.rbd_utils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] cloning images/72225c6c-cee7-4404-9d0d-8915e5edba0a@snap to None/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.933 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "76189c210ef45b7e7189903b8d72d52e9a464e92" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:43:40 compute-0 nova_compute[260935]: 2025-10-11 09:43:40.989 2 DEBUG nova.policy [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2b3a6de4bc924868b4e73b0a23a89090', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd599eac75aee4193a971c2c157a326a8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 11 09:43:41 compute-0 nova_compute[260935]: 2025-10-11 09:43:41.117 2 DEBUG nova.objects.instance [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lazy-loading 'migration_context' on Instance uuid 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:43:41 compute-0 nova_compute[260935]: 2025-10-11 09:43:41.144 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:43:41 compute-0 nova_compute[260935]: 2025-10-11 09:43:41.144 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Ensure instance console log exists: /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:43:41 compute-0 nova_compute[260935]: 2025-10-11 09:43:41.145 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:43:41 compute-0 nova_compute[260935]: 2025-10-11 09:43:41.146 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:43:41 compute-0 nova_compute[260935]: 2025-10-11 09:43:41.146 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:43:41 compute-0 nova_compute[260935]: 2025-10-11 09:43:41.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3065: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.5 KiB/s wr, 53 op/s
Oct 11 09:43:41 compute-0 podman[432961]: 2025-10-11 09:43:41.794866448 +0000 UTC m=+0.084687956 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 09:43:41 compute-0 sshd-session[428505]: Failed password for invalid user jerri from 2.57.121.112 port 63436 ssh2
Oct 11 09:43:42 compute-0 sshd-session[432760]: Failed password for invalid user mysql from 165.232.82.252 port 37678 ssh2
Oct 11 09:43:42 compute-0 ovn_controller[152945]: 2025-10-11T09:43:42Z|01713|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Oct 11 09:43:42 compute-0 sshd-session[428505]: Received disconnect from 2.57.121.112 port 63436:11: Bye [preauth]
Oct 11 09:43:42 compute-0 sshd-session[428505]: Disconnected from invalid user jerri 2.57.121.112 port 63436 [preauth]
Oct 11 09:43:42 compute-0 sshd-session[428505]: PAM 4 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=2.57.121.112
Oct 11 09:43:42 compute-0 sshd-session[428505]: PAM service(sshd) ignoring max retries; 5 > 3
Oct 11 09:43:42 compute-0 sshd-session[432760]: Connection closed by invalid user mysql 165.232.82.252 port 37678 [preauth]
Oct 11 09:43:42 compute-0 ceph-mon[74313]: pgmap v3065: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.5 KiB/s wr, 53 op/s
Oct 11 09:43:42 compute-0 nova_compute[260935]: 2025-10-11 09:43:42.950 2 DEBUG nova.network.neutron [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Successfully created port: 007ca8a8-8ce1-4661-9b04-c8764bd3b523 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 11 09:43:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3066: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.2 KiB/s wr, 77 op/s
Oct 11 09:43:43 compute-0 nova_compute[260935]: 2025-10-11 09:43:43.705 2 DEBUG nova.network.neutron [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Successfully updated port: 007ca8a8-8ce1-4661-9b04-c8764bd3b523 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 11 09:43:43 compute-0 nova_compute[260935]: 2025-10-11 09:43:43.720 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:43:43 compute-0 nova_compute[260935]: 2025-10-11 09:43:43.720 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquired lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:43:43 compute-0 nova_compute[260935]: 2025-10-11 09:43:43.720 2 DEBUG nova.network.neutron [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:43:43 compute-0 nova_compute[260935]: 2025-10-11 09:43:43.834 2 DEBUG nova.compute.manager [req-4eb9e5a6-41f9-4d62-9981-8a536b7c9d08 req-6a0fdad2-f9d7-4183-9fce-a507d3a3674a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-changed-007ca8a8-8ce1-4661-9b04-c8764bd3b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:43:43 compute-0 nova_compute[260935]: 2025-10-11 09:43:43.835 2 DEBUG nova.compute.manager [req-4eb9e5a6-41f9-4d62-9981-8a536b7c9d08 req-6a0fdad2-f9d7-4183-9fce-a507d3a3674a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Refreshing instance network info cache due to event network-changed-007ca8a8-8ce1-4661-9b04-c8764bd3b523. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:43:43 compute-0 nova_compute[260935]: 2025-10-11 09:43:43.836 2 DEBUG oslo_concurrency.lockutils [req-4eb9e5a6-41f9-4d62-9981-8a536b7c9d08 req-6a0fdad2-f9d7-4183-9fce-a507d3a3674a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:43:43 compute-0 nova_compute[260935]: 2025-10-11 09:43:43.855 2 DEBUG nova.network.neutron [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.644 2 DEBUG nova.network.neutron [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Updating instance_info_cache with network_info: [{"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.668 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Releasing lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.668 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Instance network_info: |[{"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.669 2 DEBUG oslo_concurrency.lockutils [req-4eb9e5a6-41f9-4d62-9981-8a536b7c9d08 req-6a0fdad2-f9d7-4183-9fce-a507d3a3674a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.670 2 DEBUG nova.network.neutron [req-4eb9e5a6-41f9-4d62-9981-8a536b7c9d08 req-6a0fdad2-f9d7-4183-9fce-a507d3a3674a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Refreshing network info cache for port 007ca8a8-8ce1-4661-9b04-c8764bd3b523 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.675 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Start _get_guest_xml network_info=[{"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T09:43:29Z,direct_url=<?>,disk_format='raw',id=72225c6c-cee7-4404-9d0d-8915e5edba0a,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-322378170',owner='d599eac75aee4193a971c2c157a326a8',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T09:43:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '72225c6c-cee7-4404-9d0d-8915e5edba0a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.681 2 WARNING nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.691 2 DEBUG nova.virt.libvirt.host [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.692 2 DEBUG nova.virt.libvirt.host [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.707 2 DEBUG nova.virt.libvirt.host [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.708 2 DEBUG nova.virt.libvirt.host [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.709 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.709 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-11T09:43:29Z,direct_url=<?>,disk_format='raw',id=72225c6c-cee7-4404-9d0d-8915e5edba0a,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-322378170',owner='d599eac75aee4193a971c2c157a326a8',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-11T09:43:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.710 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.710 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.711 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.711 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.711 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.711 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.712 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.712 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.712 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.713 2 DEBUG nova.virt.hardware [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:43:44 compute-0 nova_compute[260935]: 2025-10-11 09:43:44.717 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:43:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:43:44 compute-0 ceph-mon[74313]: pgmap v3066: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.2 KiB/s wr, 77 op/s
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:43:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3748329194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.217 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.249 2 DEBUG nova.storage.rbd_utils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.254 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:43:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3067: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.2 KiB/s wr, 77 op/s
Oct 11 09:43:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:43:45 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/703938094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.702 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.704 2 DEBUG nova.virt.libvirt.vif [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:43:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-937149156',display_name='tempest-TestSnapshotPattern-server-937149156',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-937149156',id=150,image_ref='72225c6c-cee7-4404-9d0d-8915e5edba0a',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP2ECx0jj3Rt10/VBX40dpY/aktTrRIITXR7+kOE+gKATXjMYBpecra2wtbKgkthLAz8Iw/nS+1WioBHxe0IF9i4guGae1Gh4XxA1njuMr4iSWg7N0/eaZRKp5xLMkN7hg==',key_name='tempest-TestSnapshotPattern-1719255089',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d599eac75aee4193a971c2c157a326a8',ramdisk_id='',reservation_id='r-v3mdlfxx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='1f03604e-0a25-4f22-807b-11f2e654be90',image_min_disk='1',image_min_ram='0',image_owner_id='d599eac75aee4193a971c2c157a326a8',image_owner_project_name='tempest-TestSnapshotPattern-1818755654',image_owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member',image_user_id='2b3a6de4bc924868b4e73b0a23a89090',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1818755654',owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:43:40Z,user_data=None,user_id='2b3a6de4bc924868b4e73b0a23a89090',uuid=84e65bfe-6918-4c8f-8b2a-3b5262c6e44e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.705 2 DEBUG nova.network.os_vif_util [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converting VIF {"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.706 2 DEBUG nova.network.os_vif_util [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:4b:38,bridge_name='br-int',has_traffic_filtering=True,id=007ca8a8-8ce1-4661-9b04-c8764bd3b523,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007ca8a8-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.707 2 DEBUG nova.objects.instance [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.730 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:43:45 compute-0 nova_compute[260935]:   <uuid>84e65bfe-6918-4c8f-8b2a-3b5262c6e44e</uuid>
Oct 11 09:43:45 compute-0 nova_compute[260935]:   <name>instance-00000096</name>
Oct 11 09:43:45 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:43:45 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:43:45 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <nova:name>tempest-TestSnapshotPattern-server-937149156</nova:name>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:43:44</nova:creationTime>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:43:45 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:43:45 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:43:45 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:43:45 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:43:45 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:43:45 compute-0 nova_compute[260935]:         <nova:user uuid="2b3a6de4bc924868b4e73b0a23a89090">tempest-TestSnapshotPattern-1818755654-project-member</nova:user>
Oct 11 09:43:45 compute-0 nova_compute[260935]:         <nova:project uuid="d599eac75aee4193a971c2c157a326a8">tempest-TestSnapshotPattern-1818755654</nova:project>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="72225c6c-cee7-4404-9d0d-8915e5edba0a"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <nova:ports>
Oct 11 09:43:45 compute-0 nova_compute[260935]:         <nova:port uuid="007ca8a8-8ce1-4661-9b04-c8764bd3b523">
Oct 11 09:43:45 compute-0 nova_compute[260935]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:         </nova:port>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       </nova:ports>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:43:45 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:43:45 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <system>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <entry name="serial">84e65bfe-6918-4c8f-8b2a-3b5262c6e44e</entry>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <entry name="uuid">84e65bfe-6918-4c8f-8b2a-3b5262c6e44e</entry>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     </system>
Oct 11 09:43:45 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:43:45 compute-0 nova_compute[260935]:   <os>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:   </os>
Oct 11 09:43:45 compute-0 nova_compute[260935]:   <features>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:   </features>
Oct 11 09:43:45 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:43:45 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:43:45 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk">
Oct 11 09:43:45 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       </source>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:43:45 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk.config">
Oct 11 09:43:45 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       </source>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:43:45 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <interface type="ethernet">
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <mac address="fa:16:3e:c1:4b:38"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <mtu size="1442"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <target dev="tap007ca8a8-8c"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     </interface>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e/console.log" append="off"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <video>
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     </video>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <input type="keyboard" bus="usb"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:43:45 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:43:45 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:43:45 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:43:45 compute-0 nova_compute[260935]: </domain>
Oct 11 09:43:45 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.731 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Preparing to wait for external event network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.731 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.731 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.732 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.733 2 DEBUG nova.virt.libvirt.vif [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-11T09:43:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-937149156',display_name='tempest-TestSnapshotPattern-server-937149156',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-937149156',id=150,image_ref='72225c6c-cee7-4404-9d0d-8915e5edba0a',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP2ECx0jj3Rt10/VBX40dpY/aktTrRIITXR7+kOE+gKATXjMYBpecra2wtbKgkthLAz8Iw/nS+1WioBHxe0IF9i4guGae1Gh4XxA1njuMr4iSWg7N0/eaZRKp5xLMkN7hg==',key_name='tempest-TestSnapshotPattern-1719255089',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d599eac75aee4193a971c2c157a326a8',ramdisk_id='',reservation_id='r-v3mdlfxx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='1f03604e-0a25-4f22-807b-11f2e654be90',image_min_disk='1',image_min_ram='0',image_owner_id='d599eac75aee4193a971c2c157a326a8',image_owner_project_name='tempest-TestSnapshotPattern-1818755654',image_owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member',image_user_id='2b3a6de4bc924868b4e73b0a23a89090',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1818755654',owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-11T09:43:40Z,user_data=None,user_id='2b3a6de4bc924868b4e73b0a23a89090',uu
id=84e65bfe-6918-4c8f-8b2a-3b5262c6e44e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.733 2 DEBUG nova.network.os_vif_util [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converting VIF {"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.734 2 DEBUG nova.network.os_vif_util [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:4b:38,bridge_name='br-int',has_traffic_filtering=True,id=007ca8a8-8ce1-4661-9b04-c8764bd3b523,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007ca8a8-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.734 2 DEBUG os_vif [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:4b:38,bridge_name='br-int',has_traffic_filtering=True,id=007ca8a8-8ce1-4661-9b04-c8764bd3b523,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007ca8a8-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap007ca8a8-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap007ca8a8-8c, col_values=(('external_ids', {'iface-id': '007ca8a8-8ce1-4661-9b04-c8764bd3b523', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:4b:38', 'vm-uuid': '84e65bfe-6918-4c8f-8b2a-3b5262c6e44e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:45 compute-0 NetworkManager[44960]: <info>  [1760175825.7444] manager: (tap007ca8a8-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/647)
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.757 2 INFO os_vif [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:4b:38,bridge_name='br-int',has_traffic_filtering=True,id=007ca8a8-8ce1-4661-9b04-c8764bd3b523,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007ca8a8-8c')
Oct 11 09:43:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3748329194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:43:45 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/703938094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.825 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.825 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.825 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] No VIF found with MAC fa:16:3e:c1:4b:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.825 2 INFO nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Using config drive
Oct 11 09:43:45 compute-0 nova_compute[260935]: 2025-10-11 09:43:45.846 2 DEBUG nova.storage.rbd_utils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:43:46 compute-0 nova_compute[260935]: 2025-10-11 09:43:46.238 2 INFO nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Creating config drive at /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e/disk.config
Oct 11 09:43:46 compute-0 nova_compute[260935]: 2025-10-11 09:43:46.249 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp48vo7y9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:43:46 compute-0 nova_compute[260935]: 2025-10-11 09:43:46.378 2 DEBUG nova.network.neutron [req-4eb9e5a6-41f9-4d62-9981-8a536b7c9d08 req-6a0fdad2-f9d7-4183-9fce-a507d3a3674a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Updated VIF entry in instance network info cache for port 007ca8a8-8ce1-4661-9b04-c8764bd3b523. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:43:46 compute-0 nova_compute[260935]: 2025-10-11 09:43:46.380 2 DEBUG nova.network.neutron [req-4eb9e5a6-41f9-4d62-9981-8a536b7c9d08 req-6a0fdad2-f9d7-4183-9fce-a507d3a3674a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Updating instance_info_cache with network_info: [{"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:43:46 compute-0 nova_compute[260935]: 2025-10-11 09:43:46.404 2 DEBUG oslo_concurrency.lockutils [req-4eb9e5a6-41f9-4d62-9981-8a536b7c9d08 req-6a0fdad2-f9d7-4183-9fce-a507d3a3674a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:43:46 compute-0 nova_compute[260935]: 2025-10-11 09:43:46.423 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp48vo7y9" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:43:46 compute-0 nova_compute[260935]: 2025-10-11 09:43:46.466 2 DEBUG nova.storage.rbd_utils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] rbd image 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:43:46 compute-0 nova_compute[260935]: 2025-10-11 09:43:46.471 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e/disk.config 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:43:46 compute-0 nova_compute[260935]: 2025-10-11 09:43:46.678 2 DEBUG oslo_concurrency.processutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e/disk.config 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:43:46 compute-0 nova_compute[260935]: 2025-10-11 09:43:46.679 2 INFO nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Deleting local config drive /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e/disk.config because it was imported into RBD.
Oct 11 09:43:46 compute-0 kernel: tap007ca8a8-8c: entered promiscuous mode
Oct 11 09:43:46 compute-0 NetworkManager[44960]: <info>  [1760175826.7462] manager: (tap007ca8a8-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/648)
Oct 11 09:43:46 compute-0 ovn_controller[152945]: 2025-10-11T09:43:46Z|01714|binding|INFO|Claiming lport 007ca8a8-8ce1-4661-9b04-c8764bd3b523 for this chassis.
Oct 11 09:43:46 compute-0 ovn_controller[152945]: 2025-10-11T09:43:46Z|01715|binding|INFO|007ca8a8-8ce1-4661-9b04-c8764bd3b523: Claiming fa:16:3e:c1:4b:38 10.100.0.7
Oct 11 09:43:46 compute-0 nova_compute[260935]: 2025-10-11 09:43:46.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.757 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:4b:38 10.100.0.7'], port_security=['fa:16:3e:c1:4b:38 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '84e65bfe-6918-4c8f-8b2a-3b5262c6e44e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-424246b3-55aa-428f-9446-55ed3d626b5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd599eac75aee4193a971c2c157a326a8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d823c59-8c61-45f8-94bd-622699d4ae58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8021c16-4de3-44ff-b5a5-7602224b9734, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=007ca8a8-8ce1-4661-9b04-c8764bd3b523) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:43:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.759 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 007ca8a8-8ce1-4661-9b04-c8764bd3b523 in datapath 424246b3-55aa-428f-9446-55ed3d626b5d bound to our chassis
Oct 11 09:43:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.761 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 424246b3-55aa-428f-9446-55ed3d626b5d
Oct 11 09:43:46 compute-0 ceph-mon[74313]: pgmap v3067: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.2 KiB/s wr, 77 op/s
Oct 11 09:43:46 compute-0 ovn_controller[152945]: 2025-10-11T09:43:46Z|01716|binding|INFO|Setting lport 007ca8a8-8ce1-4661-9b04-c8764bd3b523 ovn-installed in OVS
Oct 11 09:43:46 compute-0 ovn_controller[152945]: 2025-10-11T09:43:46Z|01717|binding|INFO|Setting lport 007ca8a8-8ce1-4661-9b04-c8764bd3b523 up in Southbound
Oct 11 09:43:46 compute-0 nova_compute[260935]: 2025-10-11 09:43:46.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:46 compute-0 nova_compute[260935]: 2025-10-11 09:43:46.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.794 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[06df2a90-988d-4ac5-b2b6-7afd37b59b7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:46 compute-0 systemd-machined[215705]: New machine qemu-176-instance-00000096.
Oct 11 09:43:46 compute-0 systemd[1]: Started Virtual Machine qemu-176-instance-00000096.
Oct 11 09:43:46 compute-0 systemd-udevd[433118]: Network interface NamePolicy= disabled on kernel command line.
Oct 11 09:43:46 compute-0 NetworkManager[44960]: <info>  [1760175826.8551] device (tap007ca8a8-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 11 09:43:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.852 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[694b253b-8fa6-46d0-a0e1-e2547b7b4102]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:46 compute-0 NetworkManager[44960]: <info>  [1760175826.8582] device (tap007ca8a8-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 11 09:43:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.859 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[ce885850-6fd8-45c4-9f20-af04c95a9d06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.909 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3d43a426-42c5-44eb-b862-bb440e893eed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.931 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[27fcfd95-f363-430b-92bd-fd6361bd333e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap424246b3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:28:e0:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763210, 'reachable_time': 32262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 433128, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.961 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5e90a6f2-4a20-4440-b453-2b317f3e6010]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap424246b3-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763225, 'tstamp': 763225}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 433130, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap424246b3-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763229, 'tstamp': 763229}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 433130, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:43:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.964 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap424246b3-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:43:46 compute-0 nova_compute[260935]: 2025-10-11 09:43:46.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.972 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap424246b3-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:43:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.973 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:43:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.974 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap424246b3-50, col_values=(('external_ids', {'iface-id': 'e23ac8d5-df18-43b3-bb12-29f5cd9ce802'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:43:46 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:43:46.975 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.035 2 DEBUG nova.compute.manager [req-12c5ffd2-f6d1-415e-a4b9-daf4b5936937 req-e30cfdc2-b1e0-41a6-9fd6-b1b40e01b813 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.036 2 DEBUG oslo_concurrency.lockutils [req-12c5ffd2-f6d1-415e-a4b9-daf4b5936937 req-e30cfdc2-b1e0-41a6-9fd6-b1b40e01b813 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.036 2 DEBUG oslo_concurrency.lockutils [req-12c5ffd2-f6d1-415e-a4b9-daf4b5936937 req-e30cfdc2-b1e0-41a6-9fd6-b1b40e01b813 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.037 2 DEBUG oslo_concurrency.lockutils [req-12c5ffd2-f6d1-415e-a4b9-daf4b5936937 req-e30cfdc2-b1e0-41a6-9fd6-b1b40e01b813 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.038 2 DEBUG nova.compute.manager [req-12c5ffd2-f6d1-415e-a4b9-daf4b5936937 req-e30cfdc2-b1e0-41a6-9fd6-b1b40e01b813 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Processing event network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 11 09:43:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3068: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 18 KiB/s wr, 53 op/s
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.814 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.815 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175827.8142624, 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.815 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] VM Started (Lifecycle Event)
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.821 2 DEBUG nova.virt.libvirt.driver [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.824 2 INFO nova.virt.libvirt.driver [-] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Instance spawned successfully.
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.825 2 INFO nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Took 7.45 seconds to spawn the instance on the hypervisor.
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.825 2 DEBUG nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.834 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.837 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.873 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.873 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175827.8175523, 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.874 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] VM Paused (Lifecycle Event)
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.904 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.906 2 INFO nova.compute.manager [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Took 8.58 seconds to build instance.
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.910 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175827.8197222, 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.910 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] VM Resumed (Lifecycle Event)
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.931 2 DEBUG oslo_concurrency.lockutils [None req-16027092-f3c9-46ca-9a46-de5eb6f82914 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.937 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:43:47 compute-0 nova_compute[260935]: 2025-10-11 09:43:47.943 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:43:48 compute-0 podman[433173]: 2025-10-11 09:43:48.768797092 +0000 UTC m=+0.071364452 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:43:48 compute-0 ceph-mon[74313]: pgmap v3068: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 18 KiB/s wr, 53 op/s
Oct 11 09:43:49 compute-0 nova_compute[260935]: 2025-10-11 09:43:49.144 2 DEBUG nova.compute.manager [req-d1d7c0af-a367-4b5b-8f8d-a3abcc404929 req-8e59af5f-e597-4212-8ac4-7c93ad311718 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:43:49 compute-0 nova_compute[260935]: 2025-10-11 09:43:49.145 2 DEBUG oslo_concurrency.lockutils [req-d1d7c0af-a367-4b5b-8f8d-a3abcc404929 req-8e59af5f-e597-4212-8ac4-7c93ad311718 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:43:49 compute-0 nova_compute[260935]: 2025-10-11 09:43:49.146 2 DEBUG oslo_concurrency.lockutils [req-d1d7c0af-a367-4b5b-8f8d-a3abcc404929 req-8e59af5f-e597-4212-8ac4-7c93ad311718 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:43:49 compute-0 nova_compute[260935]: 2025-10-11 09:43:49.146 2 DEBUG oslo_concurrency.lockutils [req-d1d7c0af-a367-4b5b-8f8d-a3abcc404929 req-8e59af5f-e597-4212-8ac4-7c93ad311718 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:43:49 compute-0 nova_compute[260935]: 2025-10-11 09:43:49.147 2 DEBUG nova.compute.manager [req-d1d7c0af-a367-4b5b-8f8d-a3abcc404929 req-8e59af5f-e597-4212-8ac4-7c93ad311718 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] No waiting events found dispatching network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:43:49 compute-0 nova_compute[260935]: 2025-10-11 09:43:49.147 2 WARNING nova.compute.manager [req-d1d7c0af-a367-4b5b-8f8d-a3abcc404929 req-8e59af5f-e597-4212-8ac4-7c93ad311718 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received unexpected event network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 for instance with vm_state active and task_state None.
Oct 11 09:43:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3069: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 16 KiB/s wr, 46 op/s
Oct 11 09:43:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:43:50 compute-0 nova_compute[260935]: 2025-10-11 09:43:50.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:50 compute-0 nova_compute[260935]: 2025-10-11 09:43:50.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:43:50 compute-0 nova_compute[260935]: 2025-10-11 09:43:50.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:43:50 compute-0 nova_compute[260935]: 2025-10-11 09:43:50.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:50 compute-0 ceph-mon[74313]: pgmap v3069: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 16 KiB/s wr, 46 op/s
Oct 11 09:43:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3070: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 14 KiB/s wr, 39 op/s
Oct 11 09:43:52 compute-0 nova_compute[260935]: 2025-10-11 09:43:52.308 2 DEBUG nova.compute.manager [req-a90ebb07-f741-4cda-857b-0e35a45f5f9a req-e24391f6-1da4-42b8-8599-dbd2fbfc257c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-changed-007ca8a8-8ce1-4661-9b04-c8764bd3b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:43:52 compute-0 nova_compute[260935]: 2025-10-11 09:43:52.310 2 DEBUG nova.compute.manager [req-a90ebb07-f741-4cda-857b-0e35a45f5f9a req-e24391f6-1da4-42b8-8599-dbd2fbfc257c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Refreshing instance network info cache due to event network-changed-007ca8a8-8ce1-4661-9b04-c8764bd3b523. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:43:52 compute-0 nova_compute[260935]: 2025-10-11 09:43:52.311 2 DEBUG oslo_concurrency.lockutils [req-a90ebb07-f741-4cda-857b-0e35a45f5f9a req-e24391f6-1da4-42b8-8599-dbd2fbfc257c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:43:52 compute-0 nova_compute[260935]: 2025-10-11 09:43:52.311 2 DEBUG oslo_concurrency.lockutils [req-a90ebb07-f741-4cda-857b-0e35a45f5f9a req-e24391f6-1da4-42b8-8599-dbd2fbfc257c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:43:52 compute-0 nova_compute[260935]: 2025-10-11 09:43:52.312 2 DEBUG nova.network.neutron [req-a90ebb07-f741-4cda-857b-0e35a45f5f9a req-e24391f6-1da4-42b8-8599-dbd2fbfc257c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Refreshing network info cache for port 007ca8a8-8ce1-4661-9b04-c8764bd3b523 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:43:52 compute-0 ceph-mon[74313]: pgmap v3070: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 14 KiB/s wr, 39 op/s
Oct 11 09:43:52 compute-0 podman[433194]: 2025-10-11 09:43:52.838751004 +0000 UTC m=+0.129010359 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251001)
Oct 11 09:43:52 compute-0 podman[433195]: 2025-10-11 09:43:52.846108951 +0000 UTC m=+0.131864190 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:43:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3071: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 103 op/s
Oct 11 09:43:53 compute-0 nova_compute[260935]: 2025-10-11 09:43:53.596 2 DEBUG nova.network.neutron [req-a90ebb07-f741-4cda-857b-0e35a45f5f9a req-e24391f6-1da4-42b8-8599-dbd2fbfc257c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Updated VIF entry in instance network info cache for port 007ca8a8-8ce1-4661-9b04-c8764bd3b523. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:43:53 compute-0 nova_compute[260935]: 2025-10-11 09:43:53.597 2 DEBUG nova.network.neutron [req-a90ebb07-f741-4cda-857b-0e35a45f5f9a req-e24391f6-1da4-42b8-8599-dbd2fbfc257c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Updating instance_info_cache with network_info: [{"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:43:53 compute-0 nova_compute[260935]: 2025-10-11 09:43:53.618 2 DEBUG oslo_concurrency.lockutils [req-a90ebb07-f741-4cda-857b-0e35a45f5f9a req-e24391f6-1da4-42b8-8599-dbd2fbfc257c e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:43:53 compute-0 nova_compute[260935]: 2025-10-11 09:43:53.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:43:53 compute-0 nova_compute[260935]: 2025-10-11 09:43:53.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:43:54 compute-0 nova_compute[260935]: 2025-10-11 09:43:54.438 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:43:54 compute-0 nova_compute[260935]: 2025-10-11 09:43:54.439 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:43:54 compute-0 nova_compute[260935]: 2025-10-11 09:43:54.439 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:43:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:43:54 compute-0 ceph-mon[74313]: pgmap v3071: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 103 op/s
Oct 11 09:43:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:43:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:43:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:43:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:43:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:43:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:43:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:43:55
Oct 11 09:43:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:43:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:43:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.control', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta']
Oct 11 09:43:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:43:55 compute-0 nova_compute[260935]: 2025-10-11 09:43:55.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3072: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 11 09:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:43:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:43:55 compute-0 nova_compute[260935]: 2025-10-11 09:43:55.657 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [{"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:43:55 compute-0 nova_compute[260935]: 2025-10-11 09:43:55.683 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:43:55 compute-0 nova_compute[260935]: 2025-10-11 09:43:55.684 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:43:55 compute-0 nova_compute[260935]: 2025-10-11 09:43:55.685 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:43:55 compute-0 nova_compute[260935]: 2025-10-11 09:43:55.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:43:55 compute-0 nova_compute[260935]: 2025-10-11 09:43:55.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:43:56 compute-0 nova_compute[260935]: 2025-10-11 09:43:56.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:43:56 compute-0 nova_compute[260935]: 2025-10-11 09:43:56.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:43:56 compute-0 nova_compute[260935]: 2025-10-11 09:43:56.728 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:43:56 compute-0 nova_compute[260935]: 2025-10-11 09:43:56.729 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:43:56 compute-0 nova_compute[260935]: 2025-10-11 09:43:56.729 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:43:56 compute-0 nova_compute[260935]: 2025-10-11 09:43:56.730 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:43:56 compute-0 nova_compute[260935]: 2025-10-11 09:43:56.730 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:43:56 compute-0 ceph-mon[74313]: pgmap v3072: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 11 09:43:57 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:43:57 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890214284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.249 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.359 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.360 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.360 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.366 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.366 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.372 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.373 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.379 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.379 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.386 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.387 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:43:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3073: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.627 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.628 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2429MB free_disk=59.7849006652832GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.629 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.629 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.715 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.716 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.716 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.716 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 1f03604e-0a25-4f22-807b-11f2e654be90 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.716 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.716 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.716 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:43:57 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3890214284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:43:57 compute-0 nova_compute[260935]: 2025-10-11 09:43:57.825 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:43:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:43:58 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3517727397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:43:58 compute-0 nova_compute[260935]: 2025-10-11 09:43:58.308 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:43:58 compute-0 nova_compute[260935]: 2025-10-11 09:43:58.318 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:43:58 compute-0 nova_compute[260935]: 2025-10-11 09:43:58.352 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:43:58 compute-0 nova_compute[260935]: 2025-10-11 09:43:58.380 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:43:58 compute-0 nova_compute[260935]: 2025-10-11 09:43:58.380 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:43:58 compute-0 ceph-mon[74313]: pgmap v3073: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 11 09:43:58 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3517727397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:43:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3074: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 73 op/s
Oct 11 09:43:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:44:00 compute-0 nova_compute[260935]: 2025-10-11 09:44:00.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:00 compute-0 nova_compute[260935]: 2025-10-11 09:44:00.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:00 compute-0 ceph-mon[74313]: pgmap v3074: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 73 op/s
Oct 11 09:44:01 compute-0 ovn_controller[152945]: 2025-10-11T09:44:01Z|00204|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.7
Oct 11 09:44:01 compute-0 ovn_controller[152945]: 2025-10-11T09:44:01Z|00205|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c1:4b:38 10.100.0.7
Oct 11 09:44:01 compute-0 nova_compute[260935]: 2025-10-11 09:44:01.381 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:44:01 compute-0 nova_compute[260935]: 2025-10-11 09:44:01.383 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:44:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3075: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Oct 11 09:44:02 compute-0 ceph-mon[74313]: pgmap v3075: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Oct 11 09:44:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3076: 321 pgs: 321 active+clean; 501 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 499 KiB/s wr, 117 op/s
Oct 11 09:44:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:44:04 compute-0 ceph-mon[74313]: pgmap v3076: 321 pgs: 321 active+clean; 501 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 499 KiB/s wr, 117 op/s
Oct 11 09:44:05 compute-0 nova_compute[260935]: 2025-10-11 09:44:05.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3077: 321 pgs: 321 active+clean; 501 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 52 op/s
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003485025151396757 of space, bias 1.0, pg target 1.0455075454190272 quantized to 32 (current 32)
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001424050310933125 of space, bias 1.0, pg target 0.4257910429690044 quantized to 32 (current 32)
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:44:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct 11 09:44:05 compute-0 nova_compute[260935]: 2025-10-11 09:44:05.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:05 compute-0 ovn_controller[152945]: 2025-10-11T09:44:05Z|00206|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.7
Oct 11 09:44:05 compute-0 ovn_controller[152945]: 2025-10-11T09:44:05Z|00207|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c1:4b:38 10.100.0.7
Oct 11 09:44:06 compute-0 ovn_controller[152945]: 2025-10-11T09:44:06Z|00208|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c1:4b:38 10.100.0.7
Oct 11 09:44:06 compute-0 ovn_controller[152945]: 2025-10-11T09:44:06Z|00209|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c1:4b:38 10.100.0.7
Oct 11 09:44:06 compute-0 ceph-mon[74313]: pgmap v3077: 321 pgs: 321 active+clean; 501 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 52 op/s
Oct 11 09:44:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3078: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Oct 11 09:44:08 compute-0 ceph-mon[74313]: pgmap v3078: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Oct 11 09:44:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3079: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 55 op/s
Oct 11 09:44:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:44:10 compute-0 nova_compute[260935]: 2025-10-11 09:44:10.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:10 compute-0 nova_compute[260935]: 2025-10-11 09:44:10.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:10 compute-0 ceph-mon[74313]: pgmap v3079: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 55 op/s
Oct 11 09:44:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3080: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 55 op/s
Oct 11 09:44:11 compute-0 nova_compute[260935]: 2025-10-11 09:44:11.700 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:44:12 compute-0 podman[433285]: 2025-10-11 09:44:12.797951866 +0000 UTC m=+0.089045209 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:44:12 compute-0 ceph-mon[74313]: pgmap v3080: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 55 op/s
Oct 11 09:44:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3081: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 549 KiB/s wr, 55 op/s
Oct 11 09:44:14 compute-0 sshd-session[433304]: Invalid user postgres from 165.232.82.252 port 35640
Oct 11 09:44:14 compute-0 sshd-session[433304]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:44:14 compute-0 sshd-session[433304]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:44:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:44:14 compute-0 ceph-mon[74313]: pgmap v3081: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 549 KiB/s wr, 55 op/s
Oct 11 09:44:15 compute-0 nova_compute[260935]: 2025-10-11 09:44:15.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:15.240 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:44:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:15.240 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:44:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:15.242 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:44:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3082: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 50 KiB/s wr, 2 op/s
Oct 11 09:44:15 compute-0 nova_compute[260935]: 2025-10-11 09:44:15.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:16 compute-0 sudo[433306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:44:16 compute-0 sudo[433306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:16 compute-0 sudo[433306]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:16 compute-0 sudo[433331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:44:16 compute-0 sudo[433331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:16 compute-0 sudo[433331]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:16 compute-0 sudo[433356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:44:16 compute-0 sudo[433356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:16 compute-0 sudo[433356]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:16 compute-0 sudo[433381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:44:16 compute-0 sudo[433381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:16 compute-0 nova_compute[260935]: 2025-10-11 09:44:16.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:44:16 compute-0 nova_compute[260935]: 2025-10-11 09:44:16.705 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 09:44:16 compute-0 ceph-mon[74313]: pgmap v3082: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 50 KiB/s wr, 2 op/s
Oct 11 09:44:16 compute-0 sshd-session[433304]: Failed password for invalid user postgres from 165.232.82.252 port 35640 ssh2
Oct 11 09:44:17 compute-0 sudo[433381]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:44:17 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:44:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:44:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:44:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:44:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:44:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev b385e893-bbb3-4e28-8ace-dbefe2ceb71e does not exist
Oct 11 09:44:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 7e0dc3d6-eacf-4458-bbed-4d0dc25fd143 does not exist
Oct 11 09:44:17 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 29df011c-cf12-4ff4-873c-1bcdcf97f111 does not exist
Oct 11 09:44:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:44:17 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:44:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:44:17 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:44:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:44:17 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:44:17 compute-0 sudo[433438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:44:17 compute-0 sudo[433438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:17 compute-0 sshd-session[433304]: Connection closed by invalid user postgres 165.232.82.252 port 35640 [preauth]
Oct 11 09:44:17 compute-0 sudo[433438]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:17 compute-0 sudo[433463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:44:17 compute-0 sudo[433463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:17 compute-0 sudo[433463]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3083: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 50 KiB/s wr, 2 op/s
Oct 11 09:44:17 compute-0 sudo[433488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:44:17 compute-0 sudo[433488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:17 compute-0 sudo[433488]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:17 compute-0 sudo[433513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:44:17 compute-0 sudo[433513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:44:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:44:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:44:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:44:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:44:17 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:44:18 compute-0 podman[433578]: 2025-10-11 09:44:18.150229163 +0000 UTC m=+0.079205512 container create b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 09:44:18 compute-0 podman[433578]: 2025-10-11 09:44:18.117065023 +0000 UTC m=+0.046041432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:44:18 compute-0 systemd[1]: Started libpod-conmon-b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604.scope.
Oct 11 09:44:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:44:18 compute-0 podman[433578]: 2025-10-11 09:44:18.27203766 +0000 UTC m=+0.201014039 container init b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:44:18 compute-0 podman[433578]: 2025-10-11 09:44:18.28772901 +0000 UTC m=+0.216705339 container start b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:44:18 compute-0 podman[433578]: 2025-10-11 09:44:18.291612169 +0000 UTC m=+0.220588578 container attach b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:44:18 compute-0 funny_meninsky[433595]: 167 167
Oct 11 09:44:18 compute-0 systemd[1]: libpod-b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604.scope: Deactivated successfully.
Oct 11 09:44:18 compute-0 conmon[433595]: conmon b4c3768bf963a05c86a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604.scope/container/memory.events
Oct 11 09:44:18 compute-0 podman[433578]: 2025-10-11 09:44:18.300286532 +0000 UTC m=+0.229262861 container died b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 09:44:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-08ac0945cea309b18c0a599d8a794f60a3f100ed34ff9fbf993c19b98cea9030-merged.mount: Deactivated successfully.
Oct 11 09:44:18 compute-0 podman[433578]: 2025-10-11 09:44:18.356445957 +0000 UTC m=+0.285422296 container remove b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Oct 11 09:44:18 compute-0 systemd[1]: libpod-conmon-b4c3768bf963a05c86a4103da3aed5015283feeaf4674d16472216077eb6f604.scope: Deactivated successfully.
Oct 11 09:44:18 compute-0 podman[433618]: 2025-10-11 09:44:18.610993297 +0000 UTC m=+0.072902366 container create eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:44:18 compute-0 systemd[1]: Started libpod-conmon-eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb.scope.
Oct 11 09:44:18 compute-0 podman[433618]: 2025-10-11 09:44:18.581746596 +0000 UTC m=+0.043655715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:44:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b225d2e7bf4e30ac8354a5b633a9165fc61b233fe21133c95a698a5995d749/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b225d2e7bf4e30ac8354a5b633a9165fc61b233fe21133c95a698a5995d749/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b225d2e7bf4e30ac8354a5b633a9165fc61b233fe21133c95a698a5995d749/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b225d2e7bf4e30ac8354a5b633a9165fc61b233fe21133c95a698a5995d749/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b225d2e7bf4e30ac8354a5b633a9165fc61b233fe21133c95a698a5995d749/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:44:18 compute-0 podman[433618]: 2025-10-11 09:44:18.725266372 +0000 UTC m=+0.187175481 container init eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 09:44:18 compute-0 podman[433618]: 2025-10-11 09:44:18.740438247 +0000 UTC m=+0.202347316 container start eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:44:18 compute-0 podman[433618]: 2025-10-11 09:44:18.745234922 +0000 UTC m=+0.207143951 container attach eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 09:44:19 compute-0 ceph-mon[74313]: pgmap v3083: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 50 KiB/s wr, 2 op/s
Oct 11 09:44:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3084: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Oct 11 09:44:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:44:19 compute-0 podman[433654]: 2025-10-11 09:44:19.778797841 +0000 UTC m=+0.083291957 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct 11 09:44:19 compute-0 funny_greider[433635]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:44:19 compute-0 funny_greider[433635]: --> relative data size: 1.0
Oct 11 09:44:19 compute-0 funny_greider[433635]: --> All data devices are unavailable
Oct 11 09:44:19 compute-0 systemd[1]: libpod-eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb.scope: Deactivated successfully.
Oct 11 09:44:19 compute-0 systemd[1]: libpod-eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb.scope: Consumed 1.097s CPU time.
Oct 11 09:44:19 compute-0 podman[433683]: 2025-10-11 09:44:19.968291956 +0000 UTC m=+0.029975112 container died eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 09:44:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0b225d2e7bf4e30ac8354a5b633a9165fc61b233fe21133c95a698a5995d749-merged.mount: Deactivated successfully.
Oct 11 09:44:20 compute-0 podman[433683]: 2025-10-11 09:44:20.040028378 +0000 UTC m=+0.101711464 container remove eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:44:20 compute-0 systemd[1]: libpod-conmon-eb54926b34328acf3f91dfc92c37761da0460d754383b3ec156c337568fff0eb.scope: Deactivated successfully.
Oct 11 09:44:20 compute-0 nova_compute[260935]: 2025-10-11 09:44:20.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:20 compute-0 sudo[433513]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:20 compute-0 sudo[433698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:44:20 compute-0 sudo[433698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:20 compute-0 sudo[433698]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:20 compute-0 sudo[433723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:44:20 compute-0 sudo[433723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:20 compute-0 sudo[433723]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:20 compute-0 sudo[433748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:44:20 compute-0 sudo[433748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:20 compute-0 sudo[433748]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:20 compute-0 sudo[433773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:44:20 compute-0 sudo[433773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:20 compute-0 nova_compute[260935]: 2025-10-11 09:44:20.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:20 compute-0 podman[433838]: 2025-10-11 09:44:20.951135672 +0000 UTC m=+0.103003100 container create 603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:44:20 compute-0 podman[433838]: 2025-10-11 09:44:20.893841825 +0000 UTC m=+0.045709343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:44:21 compute-0 systemd[1]: Started libpod-conmon-603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477.scope.
Oct 11 09:44:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:44:21 compute-0 podman[433838]: 2025-10-11 09:44:21.065798368 +0000 UTC m=+0.217665816 container init 603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:44:21 compute-0 podman[433838]: 2025-10-11 09:44:21.079985906 +0000 UTC m=+0.231853374 container start 603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 09:44:21 compute-0 sweet_visvesvaraya[433855]: 167 167
Oct 11 09:44:21 compute-0 systemd[1]: libpod-603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477.scope: Deactivated successfully.
Oct 11 09:44:21 compute-0 podman[433838]: 2025-10-11 09:44:21.089309208 +0000 UTC m=+0.241176666 container attach 603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 09:44:21 compute-0 conmon[433855]: conmon 603c7b9a69b630301901 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477.scope/container/memory.events
Oct 11 09:44:21 compute-0 podman[433838]: 2025-10-11 09:44:21.091323494 +0000 UTC m=+0.243190932 container died 603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:44:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c8aa8c0a6b958d11df2df4a76d972747c16c5d719052a0aa5d15945d1ea6d40-merged.mount: Deactivated successfully.
Oct 11 09:44:21 compute-0 podman[433838]: 2025-10-11 09:44:21.146412519 +0000 UTC m=+0.298279957 container remove 603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 11 09:44:21 compute-0 systemd[1]: libpod-conmon-603c7b9a69b6303019011b34b9cb8c58a10fbaddbdb1029c23817f575cb57477.scope: Deactivated successfully.
Oct 11 09:44:21 compute-0 ceph-mon[74313]: pgmap v3084: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Oct 11 09:44:21 compute-0 podman[433880]: 2025-10-11 09:44:21.356027678 +0000 UTC m=+0.044609672 container create deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:44:21 compute-0 systemd[1]: Started libpod-conmon-deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67.scope.
Oct 11 09:44:21 compute-0 podman[433880]: 2025-10-11 09:44:21.33898897 +0000 UTC m=+0.027570984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:44:21 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71d1fdc44915113904c902b546730cd24c08293c11d92c223e66b526fe0ed615/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71d1fdc44915113904c902b546730cd24c08293c11d92c223e66b526fe0ed615/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71d1fdc44915113904c902b546730cd24c08293c11d92c223e66b526fe0ed615/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71d1fdc44915113904c902b546730cd24c08293c11d92c223e66b526fe0ed615/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:44:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3085: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 11 09:44:21 compute-0 podman[433880]: 2025-10-11 09:44:21.457523605 +0000 UTC m=+0.146105689 container init deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 11 09:44:21 compute-0 podman[433880]: 2025-10-11 09:44:21.467250088 +0000 UTC m=+0.155832112 container start deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 09:44:21 compute-0 podman[433880]: 2025-10-11 09:44:21.471725083 +0000 UTC m=+0.160307157 container attach deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:44:22 compute-0 ovn_controller[152945]: 2025-10-11T09:44:22Z|01718|memory_trim|INFO|Detected inactivity (last active 30021 ms ago): trimming memory
Oct 11 09:44:22 compute-0 elastic_yalow[433897]: {
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:     "0": [
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:         {
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "devices": [
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "/dev/loop3"
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             ],
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "lv_name": "ceph_lv0",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "lv_size": "21470642176",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "name": "ceph_lv0",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "tags": {
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.cluster_name": "ceph",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.crush_device_class": "",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.encrypted": "0",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.osd_id": "0",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.type": "block",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.vdo": "0"
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             },
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "type": "block",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "vg_name": "ceph_vg0"
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:         }
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:     ],
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:     "1": [
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:         {
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "devices": [
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "/dev/loop4"
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             ],
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "lv_name": "ceph_lv1",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "lv_size": "21470642176",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "name": "ceph_lv1",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "tags": {
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.cluster_name": "ceph",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.crush_device_class": "",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.encrypted": "0",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.osd_id": "1",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.type": "block",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.vdo": "0"
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             },
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "type": "block",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "vg_name": "ceph_vg1"
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:         }
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:     ],
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:     "2": [
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:         {
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "devices": [
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "/dev/loop5"
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             ],
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "lv_name": "ceph_lv2",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "lv_size": "21470642176",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "name": "ceph_lv2",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "tags": {
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.cluster_name": "ceph",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.crush_device_class": "",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.encrypted": "0",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.osd_id": "2",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.type": "block",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:                 "ceph.vdo": "0"
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             },
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "type": "block",
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:             "vg_name": "ceph_vg2"
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:         }
Oct 11 09:44:22 compute-0 elastic_yalow[433897]:     ]
Oct 11 09:44:22 compute-0 elastic_yalow[433897]: }
Oct 11 09:44:22 compute-0 systemd[1]: libpod-deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67.scope: Deactivated successfully.
Oct 11 09:44:22 compute-0 conmon[433897]: conmon deba285630076e9f2921 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67.scope/container/memory.events
Oct 11 09:44:22 compute-0 podman[433880]: 2025-10-11 09:44:22.347722502 +0000 UTC m=+1.036304526 container died deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:44:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-71d1fdc44915113904c902b546730cd24c08293c11d92c223e66b526fe0ed615-merged.mount: Deactivated successfully.
Oct 11 09:44:22 compute-0 podman[433880]: 2025-10-11 09:44:22.412805118 +0000 UTC m=+1.101387112 container remove deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 09:44:22 compute-0 systemd[1]: libpod-conmon-deba285630076e9f29218b6204f2c33874212c8dfbef286a8a746360e826ee67.scope: Deactivated successfully.
Oct 11 09:44:22 compute-0 sudo[433773]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:22 compute-0 sudo[433917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:44:22 compute-0 sudo[433917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:22 compute-0 sudo[433917]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:22 compute-0 sudo[433942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:44:22 compute-0 sudo[433942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:22 compute-0 sudo[433942]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:22 compute-0 sudo[433967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:44:22 compute-0 sudo[433967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:22 compute-0 sudo[433967]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:22 compute-0 sudo[433992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:44:22 compute-0 sudo[433992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:23 compute-0 podman[434057]: 2025-10-11 09:44:23.217694563 +0000 UTC m=+0.057411471 container create 9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_roentgen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:44:23 compute-0 ceph-mon[74313]: pgmap v3085: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 11 09:44:23 compute-0 systemd[1]: Started libpod-conmon-9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0.scope.
Oct 11 09:44:23 compute-0 podman[434057]: 2025-10-11 09:44:23.189955055 +0000 UTC m=+0.029672043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:44:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:44:23 compute-0 podman[434057]: 2025-10-11 09:44:23.307989436 +0000 UTC m=+0.147706434 container init 9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 09:44:23 compute-0 podman[434057]: 2025-10-11 09:44:23.320089425 +0000 UTC m=+0.159806333 container start 9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_roentgen, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:44:23 compute-0 podman[434057]: 2025-10-11 09:44:23.323782618 +0000 UTC m=+0.163499606 container attach 9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:44:23 compute-0 happy_roentgen[434075]: 167 167
Oct 11 09:44:23 compute-0 systemd[1]: libpod-9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0.scope: Deactivated successfully.
Oct 11 09:44:23 compute-0 podman[434057]: 2025-10-11 09:44:23.328985124 +0000 UTC m=+0.168702032 container died 9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_roentgen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 09:44:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec6ca4389312919d5f5aa80856929b6bb2c45e45b9423241066a7b854f22549e-merged.mount: Deactivated successfully.
Oct 11 09:44:23 compute-0 podman[434057]: 2025-10-11 09:44:23.368955745 +0000 UTC m=+0.208672653 container remove 9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_roentgen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 09:44:23 compute-0 podman[434071]: 2025-10-11 09:44:23.374532522 +0000 UTC m=+0.108392291 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 11 09:44:23 compute-0 systemd[1]: libpod-conmon-9a16953235bdf0a26b547d425f1d8f754646237901d77f8a4dc1b83588287dd0.scope: Deactivated successfully.
Oct 11 09:44:23 compute-0 podman[434074]: 2025-10-11 09:44:23.396390085 +0000 UTC m=+0.128417143 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 09:44:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3086: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 0 op/s
Oct 11 09:44:23 compute-0 podman[434140]: 2025-10-11 09:44:23.664075043 +0000 UTC m=+0.057022870 container create 92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wu, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:44:23 compute-0 systemd[1]: Started libpod-conmon-92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5.scope.
Oct 11 09:44:23 compute-0 podman[434140]: 2025-10-11 09:44:23.64757398 +0000 UTC m=+0.040521837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:44:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:44:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b2135b0e3e290e2f0121f730ab9a6ad97132337306745d9d155d75bb123dcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:44:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b2135b0e3e290e2f0121f730ab9a6ad97132337306745d9d155d75bb123dcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:44:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b2135b0e3e290e2f0121f730ab9a6ad97132337306745d9d155d75bb123dcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:44:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b2135b0e3e290e2f0121f730ab9a6ad97132337306745d9d155d75bb123dcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:44:23 compute-0 podman[434140]: 2025-10-11 09:44:23.766880046 +0000 UTC m=+0.159827893 container init 92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 11 09:44:23 compute-0 podman[434140]: 2025-10-11 09:44:23.780118428 +0000 UTC m=+0.173066265 container start 92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 09:44:23 compute-0 podman[434140]: 2025-10-11 09:44:23.784234073 +0000 UTC m=+0.177181900 container attach 92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wu, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:44:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:44:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:44:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:44:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:44:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:44:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:44:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:44:24 compute-0 dreamy_wu[434156]: {
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:         "osd_id": 2,
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:         "type": "bluestore"
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:     },
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:         "osd_id": 0,
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:         "type": "bluestore"
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:     },
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:         "osd_id": 1,
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:         "type": "bluestore"
Oct 11 09:44:24 compute-0 dreamy_wu[434156]:     }
Oct 11 09:44:24 compute-0 dreamy_wu[434156]: }
Oct 11 09:44:25 compute-0 systemd[1]: libpod-92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5.scope: Deactivated successfully.
Oct 11 09:44:25 compute-0 podman[434140]: 2025-10-11 09:44:25.017464372 +0000 UTC m=+1.410412239 container died 92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 09:44:25 compute-0 systemd[1]: libpod-92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5.scope: Consumed 1.217s CPU time.
Oct 11 09:44:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7b2135b0e3e290e2f0121f730ab9a6ad97132337306745d9d155d75bb123dcf-merged.mount: Deactivated successfully.
Oct 11 09:44:25 compute-0 nova_compute[260935]: 2025-10-11 09:44:25.059 2 DEBUG nova.compute.manager [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:44:25 compute-0 nova_compute[260935]: 2025-10-11 09:44:25.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:25 compute-0 podman[434140]: 2025-10-11 09:44:25.10298493 +0000 UTC m=+1.495932777 container remove 92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:44:25 compute-0 nova_compute[260935]: 2025-10-11 09:44:25.121 2 INFO nova.compute.manager [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] instance snapshotting
Oct 11 09:44:25 compute-0 systemd[1]: libpod-conmon-92673d2f769fa1afc5fff6a3fdd5d7483f17543ebce00a06c3d4e9ecb291a2a5.scope: Deactivated successfully.
Oct 11 09:44:25 compute-0 sudo[433992]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:44:25 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:44:25 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:44:25 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:44:25 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev edf3ebd4-10c5-4493-b22b-18c72411344b does not exist
Oct 11 09:44:25 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a70ba05e-bac3-4525-9b63-faa6577857ca does not exist
Oct 11 09:44:25 compute-0 ceph-mon[74313]: pgmap v3086: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 0 op/s
Oct 11 09:44:25 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:44:25 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:44:25 compute-0 sudo[434204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:44:25 compute-0 sudo[434204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:25 compute-0 sudo[434204]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:25 compute-0 sudo[434229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:44:25 compute-0 sudo[434229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:44:25 compute-0 sudo[434229]: pam_unix(sudo:session): session closed for user root
Oct 11 09:44:25 compute-0 nova_compute[260935]: 2025-10-11 09:44:25.402 2 INFO nova.virt.libvirt.driver [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Beginning live snapshot process
Oct 11 09:44:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3087: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 11 09:44:25 compute-0 nova_compute[260935]: 2025-10-11 09:44:25.670 2 DEBUG nova.storage.rbd_utils [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] creating snapshot(1cfadc1d5df44930ad898c4bbb06e9cc) on rbd image(84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 09:44:25 compute-0 nova_compute[260935]: 2025-10-11 09:44:25.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Oct 11 09:44:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Oct 11 09:44:26 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Oct 11 09:44:26 compute-0 nova_compute[260935]: 2025-10-11 09:44:26.354 2 DEBUG nova.storage.rbd_utils [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] cloning vms/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk@1cfadc1d5df44930ad898c4bbb06e9cc to images/661f6683-31d1-46ed-ae81-1e493e8ff006 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 11 09:44:26 compute-0 nova_compute[260935]: 2025-10-11 09:44:26.462 2 DEBUG nova.storage.rbd_utils [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] flattening images/661f6683-31d1-46ed-ae81-1e493e8ff006 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 11 09:44:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:44:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3629907740' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:44:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:44:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3629907740' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:44:27 compute-0 nova_compute[260935]: 2025-10-11 09:44:27.231 2 DEBUG nova.storage.rbd_utils [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] removing snapshot(1cfadc1d5df44930ad898c4bbb06e9cc) on rbd image(84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 11 09:44:27 compute-0 ceph-mon[74313]: pgmap v3087: 321 pgs: 321 active+clean; 504 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct 11 09:44:27 compute-0 ceph-mon[74313]: osdmap e293: 3 total, 3 up, 3 in
Oct 11 09:44:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3629907740' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:44:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3629907740' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:44:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Oct 11 09:44:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Oct 11 09:44:27 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Oct 11 09:44:27 compute-0 nova_compute[260935]: 2025-10-11 09:44:27.330 2 DEBUG nova.storage.rbd_utils [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] creating snapshot(snap) on rbd image(661f6683-31d1-46ed-ae81-1e493e8ff006) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 11 09:44:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3090: 321 pgs: 321 active+clean; 577 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 7.8 MiB/s wr, 77 op/s
Oct 11 09:44:27 compute-0 nova_compute[260935]: 2025-10-11 09:44:27.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:44:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Oct 11 09:44:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Oct 11 09:44:28 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Oct 11 09:44:28 compute-0 ceph-mon[74313]: osdmap e294: 3 total, 3 up, 3 in
Oct 11 09:44:29 compute-0 ceph-mon[74313]: pgmap v3090: 321 pgs: 321 active+clean; 577 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 7.8 MiB/s wr, 77 op/s
Oct 11 09:44:29 compute-0 ceph-mon[74313]: osdmap e295: 3 total, 3 up, 3 in
Oct 11 09:44:29 compute-0 nova_compute[260935]: 2025-10-11 09:44:29.409 2 INFO nova.virt.libvirt.driver [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Snapshot image upload complete
Oct 11 09:44:29 compute-0 nova_compute[260935]: 2025-10-11 09:44:29.409 2 INFO nova.compute.manager [None req-57ec2092-f1ad-40af-a08e-500cc8dc8747 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Took 4.28 seconds to snapshot the instance on the hypervisor.
Oct 11 09:44:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3092: 321 pgs: 321 active+clean; 605 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 145 op/s
Oct 11 09:44:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:44:30 compute-0 nova_compute[260935]: 2025-10-11 09:44:30.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Oct 11 09:44:30 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Oct 11 09:44:30 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Oct 11 09:44:30 compute-0 nova_compute[260935]: 2025-10-11 09:44:30.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:31 compute-0 ceph-mon[74313]: pgmap v3092: 321 pgs: 321 active+clean; 605 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 145 op/s
Oct 11 09:44:31 compute-0 ceph-mon[74313]: osdmap e296: 3 total, 3 up, 3 in
Oct 11 09:44:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3094: 321 pgs: 321 active+clean; 605 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 17 MiB/s wr, 168 op/s
Oct 11 09:44:31 compute-0 nova_compute[260935]: 2025-10-11 09:44:31.719 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:44:31 compute-0 nova_compute[260935]: 2025-10-11 09:44:31.720 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 09:44:31 compute-0 nova_compute[260935]: 2025-10-11 09:44:31.739 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 09:44:32 compute-0 ceph-mon[74313]: pgmap v3094: 321 pgs: 321 active+clean; 605 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 17 MiB/s wr, 168 op/s
Oct 11 09:44:32 compute-0 nova_compute[260935]: 2025-10-11 09:44:32.723 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.063 2 DEBUG nova.compute.manager [req-8bdc2c52-d0a8-441c-bc90-2f0944771fd8 req-93f42f6b-c906-413e-8e13-9916c737fc00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-changed-007ca8a8-8ce1-4661-9b04-c8764bd3b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.064 2 DEBUG nova.compute.manager [req-8bdc2c52-d0a8-441c-bc90-2f0944771fd8 req-93f42f6b-c906-413e-8e13-9916c737fc00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Refreshing instance network info cache due to event network-changed-007ca8a8-8ce1-4661-9b04-c8764bd3b523. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.065 2 DEBUG oslo_concurrency.lockutils [req-8bdc2c52-d0a8-441c-bc90-2f0944771fd8 req-93f42f6b-c906-413e-8e13-9916c737fc00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.065 2 DEBUG oslo_concurrency.lockutils [req-8bdc2c52-d0a8-441c-bc90-2f0944771fd8 req-93f42f6b-c906-413e-8e13-9916c737fc00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.066 2 DEBUG nova.network.neutron [req-8bdc2c52-d0a8-441c-bc90-2f0944771fd8 req-93f42f6b-c906-413e-8e13-9916c737fc00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Refreshing network info cache for port 007ca8a8-8ce1-4661-9b04-c8764bd3b523 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.148 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.149 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.149 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.150 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.150 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.152 2 INFO nova.compute.manager [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Terminating instance
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.154 2 DEBUG nova.compute.manager [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:44:33 compute-0 kernel: tap007ca8a8-8c (unregistering): left promiscuous mode
Oct 11 09:44:33 compute-0 NetworkManager[44960]: <info>  [1760175873.4209] device (tap007ca8a8-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:44:33 compute-0 ovn_controller[152945]: 2025-10-11T09:44:33Z|01719|binding|INFO|Releasing lport 007ca8a8-8ce1-4661-9b04-c8764bd3b523 from this chassis (sb_readonly=0)
Oct 11 09:44:33 compute-0 ovn_controller[152945]: 2025-10-11T09:44:33Z|01720|binding|INFO|Setting lport 007ca8a8-8ce1-4661-9b04-c8764bd3b523 down in Southbound
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:33 compute-0 ovn_controller[152945]: 2025-10-11T09:44:33Z|01721|binding|INFO|Removing iface tap007ca8a8-8c ovn-installed in OVS
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.444 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:4b:38 10.100.0.7'], port_security=['fa:16:3e:c1:4b:38 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '84e65bfe-6918-4c8f-8b2a-3b5262c6e44e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-424246b3-55aa-428f-9446-55ed3d626b5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd599eac75aee4193a971c2c157a326a8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d823c59-8c61-45f8-94bd-622699d4ae58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8021c16-4de3-44ff-b5a5-7602224b9734, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=007ca8a8-8ce1-4661-9b04-c8764bd3b523) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:44:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.446 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 007ca8a8-8ce1-4661-9b04-c8764bd3b523 in datapath 424246b3-55aa-428f-9446-55ed3d626b5d unbound from our chassis
Oct 11 09:44:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.448 162815 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 424246b3-55aa-428f-9446-55ed3d626b5d
Oct 11 09:44:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3095: 321 pgs: 321 active+clean; 508 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.9 MiB/s wr, 146 op/s
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.479 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[6fdb9598-36dd-4b1c-9400-4481aff46bbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:44:33 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000096.scope: Deactivated successfully.
Oct 11 09:44:33 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000096.scope: Consumed 15.382s CPU time.
Oct 11 09:44:33 compute-0 systemd-machined[215705]: Machine qemu-176-instance-00000096 terminated.
Oct 11 09:44:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.529 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[3395237e-deeb-4f42-a01f-90823a94c094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:44:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.532 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[44c49996-1fef-4ec5-8462-5daa87e0c994]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:44:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.577 277356 DEBUG oslo.privsep.daemon [-] privsep: reply[86c3e8e4-11c3-4597-80d0-f13e446ff23b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.601 2 INFO nova.virt.libvirt.driver [-] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Instance destroyed successfully.
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.601 2 DEBUG nova.objects.instance [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lazy-loading 'resources' on Instance uuid 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:44:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.601 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[69983b83-e782-4af9-b014-3c3cafb811e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap424246b3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:28:e0:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763210, 'reachable_time': 32262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 434411, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.614 2 DEBUG nova.virt.libvirt.vif [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:43:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-937149156',display_name='tempest-TestSnapshotPattern-server-937149156',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-937149156',id=150,image_ref='72225c6c-cee7-4404-9d0d-8915e5edba0a',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP2ECx0jj3Rt10/VBX40dpY/aktTrRIITXR7+kOE+gKATXjMYBpecra2wtbKgkthLAz8Iw/nS+1WioBHxe0IF9i4guGae1Gh4XxA1njuMr4iSWg7N0/eaZRKp5xLMkN7hg==',key_name='tempest-TestSnapshotPattern-1719255089',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:43:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d599eac75aee4193a971c2c157a326a8',ramdisk_id='',reservation_id='r-v3mdlfxx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='1f03604e-0a25-4f22-807b-11f2e654be90',image_min_disk='1',image_min_ram='0',image_owner_id='d599eac75aee4193a971c2c157a326a8',image_owner_project_name='tempest-TestSnapshotPattern-1818755654',image_owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member',image_user_id='2b3a6de4bc924868b4e73b0a23a89090',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-1818755654',owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:44:29Z,user_data=None,user_id='2b3a6de4bc924868b4e73b0a23a89090',uuid=84e65bfe-6918-4c8f-8b2a-3b5262c6e44e,vcpu_mode
l=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.615 2 DEBUG nova.network.os_vif_util [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converting VIF {"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.615 2 DEBUG nova.network.os_vif_util [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:4b:38,bridge_name='br-int',has_traffic_filtering=True,id=007ca8a8-8ce1-4661-9b04-c8764bd3b523,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007ca8a8-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.616 2 DEBUG os_vif [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:4b:38,bridge_name='br-int',has_traffic_filtering=True,id=007ca8a8-8ce1-4661-9b04-c8764bd3b523,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007ca8a8-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.617 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap007ca8a8-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.663 2 INFO os_vif [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:4b:38,bridge_name='br-int',has_traffic_filtering=True,id=007ca8a8-8ce1-4661-9b04-c8764bd3b523,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007ca8a8-8c')
Oct 11 09:44:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.665 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[3877039d-b3cc-455d-8ede-cba97691a4e3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap424246b3-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763225, 'tstamp': 763225}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 434419, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap424246b3-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763229, 'tstamp': 763229}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 434419, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:44:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.666 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap424246b3-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:44:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.671 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap424246b3-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:44:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.671 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:44:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.671 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap424246b3-50, col_values=(('external_ids', {'iface-id': 'e23ac8d5-df18-43b3-bb12-29f5cd9ce802'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:44:33 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:33.672 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 11 09:44:33 compute-0 nova_compute[260935]: 2025-10-11 09:44:33.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:34 compute-0 ceph-mon[74313]: pgmap v3095: 321 pgs: 321 active+clean; 508 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.9 MiB/s wr, 146 op/s
Oct 11 09:44:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:44:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Oct 11 09:44:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Oct 11 09:44:34 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Oct 11 09:44:34 compute-0 nova_compute[260935]: 2025-10-11 09:44:34.809 2 INFO nova.virt.libvirt.driver [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Deleting instance files /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_del
Oct 11 09:44:34 compute-0 nova_compute[260935]: 2025-10-11 09:44:34.809 2 INFO nova.virt.libvirt.driver [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Deletion of /var/lib/nova/instances/84e65bfe-6918-4c8f-8b2a-3b5262c6e44e_del complete
Oct 11 09:44:34 compute-0 nova_compute[260935]: 2025-10-11 09:44:34.872 2 INFO nova.compute.manager [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Took 1.72 seconds to destroy the instance on the hypervisor.
Oct 11 09:44:34 compute-0 nova_compute[260935]: 2025-10-11 09:44:34.873 2 DEBUG oslo.service.loopingcall [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:44:34 compute-0 nova_compute[260935]: 2025-10-11 09:44:34.873 2 DEBUG nova.compute.manager [-] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:44:34 compute-0 nova_compute[260935]: 2025-10-11 09:44:34.873 2 DEBUG nova.network.neutron [-] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:44:35 compute-0 nova_compute[260935]: 2025-10-11 09:44:35.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:35 compute-0 nova_compute[260935]: 2025-10-11 09:44:35.162 2 DEBUG nova.network.neutron [req-8bdc2c52-d0a8-441c-bc90-2f0944771fd8 req-93f42f6b-c906-413e-8e13-9916c737fc00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Updated VIF entry in instance network info cache for port 007ca8a8-8ce1-4661-9b04-c8764bd3b523. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:44:35 compute-0 nova_compute[260935]: 2025-10-11 09:44:35.163 2 DEBUG nova.network.neutron [req-8bdc2c52-d0a8-441c-bc90-2f0944771fd8 req-93f42f6b-c906-413e-8e13-9916c737fc00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Updating instance_info_cache with network_info: [{"id": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "address": "fa:16:3e:c1:4b:38", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007ca8a8-8c", "ovs_interfaceid": "007ca8a8-8ce1-4661-9b04-c8764bd3b523", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:44:35 compute-0 nova_compute[260935]: 2025-10-11 09:44:35.183 2 DEBUG nova.compute.manager [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-vif-unplugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:44:35 compute-0 nova_compute[260935]: 2025-10-11 09:44:35.184 2 DEBUG oslo_concurrency.lockutils [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:44:35 compute-0 nova_compute[260935]: 2025-10-11 09:44:35.185 2 DEBUG oslo_concurrency.lockutils [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:44:35 compute-0 nova_compute[260935]: 2025-10-11 09:44:35.185 2 DEBUG oslo_concurrency.lockutils [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:44:35 compute-0 nova_compute[260935]: 2025-10-11 09:44:35.186 2 DEBUG nova.compute.manager [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] No waiting events found dispatching network-vif-unplugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:44:35 compute-0 nova_compute[260935]: 2025-10-11 09:44:35.186 2 DEBUG nova.compute.manager [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-vif-unplugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:44:35 compute-0 nova_compute[260935]: 2025-10-11 09:44:35.186 2 DEBUG nova.compute.manager [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:44:35 compute-0 nova_compute[260935]: 2025-10-11 09:44:35.187 2 DEBUG oslo_concurrency.lockutils [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:44:35 compute-0 nova_compute[260935]: 2025-10-11 09:44:35.187 2 DEBUG oslo_concurrency.lockutils [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:44:35 compute-0 nova_compute[260935]: 2025-10-11 09:44:35.187 2 DEBUG oslo_concurrency.lockutils [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:44:35 compute-0 nova_compute[260935]: 2025-10-11 09:44:35.187 2 DEBUG nova.compute.manager [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] No waiting events found dispatching network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:44:35 compute-0 nova_compute[260935]: 2025-10-11 09:44:35.188 2 WARNING nova.compute.manager [req-95cff39c-729b-46f4-b9c9-f584f8db33b6 req-bd522e15-e1c2-4bfb-b033-3570d5924fde e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received unexpected event network-vif-plugged-007ca8a8-8ce1-4661-9b04-c8764bd3b523 for instance with vm_state active and task_state deleting.
Oct 11 09:44:35 compute-0 nova_compute[260935]: 2025-10-11 09:44:35.190 2 DEBUG oslo_concurrency.lockutils [req-8bdc2c52-d0a8-441c-bc90-2f0944771fd8 req-93f42f6b-c906-413e-8e13-9916c737fc00 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:44:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3097: 321 pgs: 321 active+clean; 508 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 321 KiB/s wr, 90 op/s
Oct 11 09:44:35 compute-0 ceph-mon[74313]: osdmap e297: 3 total, 3 up, 3 in
Oct 11 09:44:36 compute-0 nova_compute[260935]: 2025-10-11 09:44:36.179 2 DEBUG nova.network.neutron [-] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:44:36 compute-0 nova_compute[260935]: 2025-10-11 09:44:36.203 2 INFO nova.compute.manager [-] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Took 1.33 seconds to deallocate network for instance.
Oct 11 09:44:36 compute-0 nova_compute[260935]: 2025-10-11 09:44:36.258 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:44:36 compute-0 nova_compute[260935]: 2025-10-11 09:44:36.258 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:44:36 compute-0 nova_compute[260935]: 2025-10-11 09:44:36.417 2 DEBUG oslo_concurrency.processutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:44:36 compute-0 ceph-mon[74313]: pgmap v3097: 321 pgs: 321 active+clean; 508 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 321 KiB/s wr, 90 op/s
Oct 11 09:44:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:44:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/571979900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:44:36 compute-0 nova_compute[260935]: 2025-10-11 09:44:36.923 2 DEBUG oslo_concurrency.processutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:44:36 compute-0 nova_compute[260935]: 2025-10-11 09:44:36.932 2 DEBUG nova.compute.provider_tree [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:44:36 compute-0 nova_compute[260935]: 2025-10-11 09:44:36.961 2 DEBUG nova.scheduler.client.report [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:44:36 compute-0 nova_compute[260935]: 2025-10-11 09:44:36.988 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:44:37 compute-0 nova_compute[260935]: 2025-10-11 09:44:37.027 2 INFO nova.scheduler.client.report [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Deleted allocations for instance 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e
Oct 11 09:44:37 compute-0 nova_compute[260935]: 2025-10-11 09:44:37.119 2 DEBUG oslo_concurrency.lockutils [None req-356cdc09-84f0-400e-865f-67dee43f8a74 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "84e65bfe-6918-4c8f-8b2a-3b5262c6e44e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:44:37 compute-0 nova_compute[260935]: 2025-10-11 09:44:37.328 2 DEBUG nova.compute.manager [req-f2358d54-0136-4fb5-b3d2-145b305533ae req-3fb464ba-2809-4640-a0f8-0dbc3040beaf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Received event network-vif-deleted-007ca8a8-8ce1-4661-9b04-c8764bd3b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:44:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3098: 321 pgs: 321 active+clean; 491 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 289 KiB/s wr, 107 op/s
Oct 11 09:44:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Oct 11 09:44:37 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/571979900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:44:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Oct 11 09:44:37 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Oct 11 09:44:38 compute-0 nova_compute[260935]: 2025-10-11 09:44:38.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:38 compute-0 ceph-mon[74313]: pgmap v3098: 321 pgs: 321 active+clean; 491 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 289 KiB/s wr, 107 op/s
Oct 11 09:44:38 compute-0 ceph-mon[74313]: osdmap e298: 3 total, 3 up, 3 in
Oct 11 09:44:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3100: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 290 KiB/s wr, 130 op/s
Oct 11 09:44:39 compute-0 nova_compute[260935]: 2025-10-11 09:44:39.507 2 DEBUG nova.compute.manager [req-2918d78e-ff83-480e-87cb-30b3898e0bba req-c195240e-583b-4aa1-9eaf-eb85d7b00c4a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-changed-0079b745-efa5-4641-a180-fee85fb30a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:44:39 compute-0 nova_compute[260935]: 2025-10-11 09:44:39.507 2 DEBUG nova.compute.manager [req-2918d78e-ff83-480e-87cb-30b3898e0bba req-c195240e-583b-4aa1-9eaf-eb85d7b00c4a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Refreshing instance network info cache due to event network-changed-0079b745-efa5-4641-a180-fee85fb30a5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 11 09:44:39 compute-0 nova_compute[260935]: 2025-10-11 09:44:39.508 2 DEBUG oslo_concurrency.lockutils [req-2918d78e-ff83-480e-87cb-30b3898e0bba req-c195240e-583b-4aa1-9eaf-eb85d7b00c4a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:44:39 compute-0 nova_compute[260935]: 2025-10-11 09:44:39.509 2 DEBUG oslo_concurrency.lockutils [req-2918d78e-ff83-480e-87cb-30b3898e0bba req-c195240e-583b-4aa1-9eaf-eb85d7b00c4a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquired lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:44:39 compute-0 nova_compute[260935]: 2025-10-11 09:44:39.509 2 DEBUG nova.network.neutron [req-2918d78e-ff83-480e-87cb-30b3898e0bba req-c195240e-583b-4aa1-9eaf-eb85d7b00c4a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Refreshing network info cache for port 0079b745-efa5-4641-a180-fee85fb30a5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 11 09:44:39 compute-0 nova_compute[260935]: 2025-10-11 09:44:39.570 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "1f03604e-0a25-4f22-807b-11f2e654be90" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:44:39 compute-0 nova_compute[260935]: 2025-10-11 09:44:39.571 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:44:39 compute-0 nova_compute[260935]: 2025-10-11 09:44:39.572 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:44:39 compute-0 nova_compute[260935]: 2025-10-11 09:44:39.572 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:44:39 compute-0 nova_compute[260935]: 2025-10-11 09:44:39.573 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:44:39 compute-0 nova_compute[260935]: 2025-10-11 09:44:39.575 2 INFO nova.compute.manager [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Terminating instance
Oct 11 09:44:39 compute-0 nova_compute[260935]: 2025-10-11 09:44:39.577 2 DEBUG nova.compute.manager [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:44:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:39.591 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:44:39 compute-0 nova_compute[260935]: 2025-10-11 09:44:39.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:39.593 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:44:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:44:39 compute-0 kernel: tap0079b745-ef (unregistering): left promiscuous mode
Oct 11 09:44:39 compute-0 NetworkManager[44960]: <info>  [1760175879.9166] device (tap0079b745-ef): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:44:39 compute-0 ovn_controller[152945]: 2025-10-11T09:44:39Z|01722|binding|INFO|Releasing lport 0079b745-efa5-4641-a180-fee85fb30a5d from this chassis (sb_readonly=0)
Oct 11 09:44:39 compute-0 ovn_controller[152945]: 2025-10-11T09:44:39Z|01723|binding|INFO|Setting lport 0079b745-efa5-4641-a180-fee85fb30a5d down in Southbound
Oct 11 09:44:39 compute-0 ovn_controller[152945]: 2025-10-11T09:44:39Z|01724|binding|INFO|Removing iface tap0079b745-ef ovn-installed in OVS
Oct 11 09:44:39 compute-0 nova_compute[260935]: 2025-10-11 09:44:39.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:39 compute-0 nova_compute[260935]: 2025-10-11 09:44:39.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:39.931 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:b5:6b 10.100.0.3'], port_security=['fa:16:3e:88:b5:6b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1f03604e-0a25-4f22-807b-11f2e654be90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-424246b3-55aa-428f-9446-55ed3d626b5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd599eac75aee4193a971c2c157a326a8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d823c59-8c61-45f8-94bd-622699d4ae58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8021c16-4de3-44ff-b5a5-7602224b9734, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=0079b745-efa5-4641-a180-fee85fb30a5d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:44:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:39.933 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 0079b745-efa5-4641-a180-fee85fb30a5d in datapath 424246b3-55aa-428f-9446-55ed3d626b5d unbound from our chassis
Oct 11 09:44:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:39.935 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 424246b3-55aa-428f-9446-55ed3d626b5d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:44:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:39.935 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[14399662-6246-4ff2-a2d3-71241878046e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:44:39 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:39.936 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d namespace which is not needed anymore
Oct 11 09:44:39 compute-0 nova_compute[260935]: 2025-10-11 09:44:39.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:39 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000095.scope: Deactivated successfully.
Oct 11 09:44:39 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000095.scope: Consumed 17.652s CPU time.
Oct 11 09:44:39 compute-0 systemd-machined[215705]: Machine qemu-175-instance-00000095 terminated.
Oct 11 09:44:40 compute-0 nova_compute[260935]: 2025-10-11 09:44:40.035 2 INFO nova.virt.libvirt.driver [-] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Instance destroyed successfully.
Oct 11 09:44:40 compute-0 nova_compute[260935]: 2025-10-11 09:44:40.035 2 DEBUG nova.objects.instance [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lazy-loading 'resources' on Instance uuid 1f03604e-0a25-4f22-807b-11f2e654be90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:44:40 compute-0 nova_compute[260935]: 2025-10-11 09:44:40.054 2 DEBUG nova.virt.libvirt.vif [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:42:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1990672178',display_name='tempest-TestSnapshotPattern-server-1990672178',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1990672178',id=149,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP2ECx0jj3Rt10/VBX40dpY/aktTrRIITXR7+kOE+gKATXjMYBpecra2wtbKgkthLAz8Iw/nS+1WioBHxe0IF9i4guGae1Gh4XxA1njuMr4iSWg7N0/eaZRKp5xLMkN7hg==',key_name='tempest-TestSnapshotPattern-1719255089',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:43:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d599eac75aee4193a971c2c157a326a8',ramdisk_id='',reservation_id='r-eoa49vly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-1818755654',owner_user_name='tempest-TestSnapshotPattern-1818755654-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:43:34Z,user_data=None,user_id='2b3a6de4bc924868b4e73b0a23a89090',uuid=1f03604e-0a25-4f22-807b-11f2e654be90,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:44:40 compute-0 nova_compute[260935]: 2025-10-11 09:44:40.055 2 DEBUG nova.network.os_vif_util [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converting VIF {"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:44:40 compute-0 nova_compute[260935]: 2025-10-11 09:44:40.055 2 DEBUG nova.network.os_vif_util [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:b5:6b,bridge_name='br-int',has_traffic_filtering=True,id=0079b745-efa5-4641-a180-fee85fb30a5d,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0079b745-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:44:40 compute-0 nova_compute[260935]: 2025-10-11 09:44:40.056 2 DEBUG os_vif [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:b5:6b,bridge_name='br-int',has_traffic_filtering=True,id=0079b745-efa5-4641-a180-fee85fb30a5d,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0079b745-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:44:40 compute-0 nova_compute[260935]: 2025-10-11 09:44:40.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:40 compute-0 nova_compute[260935]: 2025-10-11 09:44:40.057 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0079b745-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:44:40 compute-0 nova_compute[260935]: 2025-10-11 09:44:40.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:40 compute-0 nova_compute[260935]: 2025-10-11 09:44:40.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:40 compute-0 nova_compute[260935]: 2025-10-11 09:44:40.102 2 INFO os_vif [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:b5:6b,bridge_name='br-int',has_traffic_filtering=True,id=0079b745-efa5-4641-a180-fee85fb30a5d,network=Network(424246b3-55aa-428f-9446-55ed3d626b5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0079b745-ef')
Oct 11 09:44:40 compute-0 nova_compute[260935]: 2025-10-11 09:44:40.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:40 compute-0 neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d[431804]: [NOTICE]   (431828) : haproxy version is 2.8.14-c23fe91
Oct 11 09:44:40 compute-0 neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d[431804]: [NOTICE]   (431828) : path to executable is /usr/sbin/haproxy
Oct 11 09:44:40 compute-0 neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d[431804]: [WARNING]  (431828) : Exiting Master process...
Oct 11 09:44:40 compute-0 neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d[431804]: [ALERT]    (431828) : Current worker (431836) exited with code 143 (Terminated)
Oct 11 09:44:40 compute-0 neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d[431804]: [WARNING]  (431828) : All workers exited. Exiting... (0)
Oct 11 09:44:40 compute-0 systemd[1]: libpod-f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157.scope: Deactivated successfully.
Oct 11 09:44:40 compute-0 podman[434493]: 2025-10-11 09:44:40.289103803 +0000 UTC m=+0.187148111 container died f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 09:44:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157-userdata-shm.mount: Deactivated successfully.
Oct 11 09:44:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-a86c02cc2cbc8761b2102a73fcb1e50eef8c8d1227adb1efe533f1256a4c9e7f-merged.mount: Deactivated successfully.
Oct 11 09:44:40 compute-0 podman[434493]: 2025-10-11 09:44:40.642034152 +0000 UTC m=+0.540078470 container cleanup f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 09:44:40 compute-0 systemd[1]: libpod-conmon-f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157.scope: Deactivated successfully.
Oct 11 09:44:40 compute-0 podman[434542]: 2025-10-11 09:44:40.765039312 +0000 UTC m=+0.082127665 container remove f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 09:44:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.775 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[880fd4d5-b801-4564-9e01-69c297662d3f]: (4, ('Sat Oct 11 09:44:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d (f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157)\nf3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157\nSat Oct 11 09:44:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d (f3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157)\nf3c157f84cd53093e9b7de536de6a8417e4e5d8b42621d118b292c613e249157\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:44:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.779 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a35a966e-7b17-4929-b4df-f5afca654491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:44:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.781 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap424246b3-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:44:40 compute-0 nova_compute[260935]: 2025-10-11 09:44:40.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:40 compute-0 kernel: tap424246b3-50: left promiscuous mode
Oct 11 09:44:40 compute-0 nova_compute[260935]: 2025-10-11 09:44:40.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.808 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5f5379-ed00-475d-ab84-12402c204340]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:44:40 compute-0 nova_compute[260935]: 2025-10-11 09:44:40.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.834 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[e205d993-2aef-405e-9cb7-f51152d1fd4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:44:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.836 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c5cb009d-8a3b-47bf-bdbf-2de5cf3e9479]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:44:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.863 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[d04ffa96-74c0-4ab8-b04c-b5f08a87e862]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763201, 'reachable_time': 27525, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 434558, 'error': None, 'target': 'ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:44:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d424246b3\x2d55aa\x2d428f\x2d9446\x2d55ed3d626b5d.mount: Deactivated successfully.
Oct 11 09:44:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.872 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-424246b3-55aa-428f-9446-55ed3d626b5d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:44:40 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:40.872 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[023f4c90-69a3-424a-b95e-58791ff68222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:44:40 compute-0 ceph-mon[74313]: pgmap v3100: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 290 KiB/s wr, 130 op/s
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.036 2 INFO nova.virt.libvirt.driver [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Deleting instance files /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90_del
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.037 2 INFO nova.virt.libvirt.driver [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Deletion of /var/lib/nova/instances/1f03604e-0a25-4f22-807b-11f2e654be90_del complete
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.144 2 DEBUG nova.network.neutron [req-2918d78e-ff83-480e-87cb-30b3898e0bba req-c195240e-583b-4aa1-9eaf-eb85d7b00c4a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Updated VIF entry in instance network info cache for port 0079b745-efa5-4641-a180-fee85fb30a5d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.145 2 DEBUG nova.network.neutron [req-2918d78e-ff83-480e-87cb-30b3898e0bba req-c195240e-583b-4aa1-9eaf-eb85d7b00c4a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Updating instance_info_cache with network_info: [{"id": "0079b745-efa5-4641-a180-fee85fb30a5d", "address": "fa:16:3e:88:b5:6b", "network": {"id": "424246b3-55aa-428f-9446-55ed3d626b5d", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1806329114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d599eac75aee4193a971c2c157a326a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0079b745-ef", "ovs_interfaceid": "0079b745-efa5-4641-a180-fee85fb30a5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.147 2 INFO nova.compute.manager [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Took 1.57 seconds to destroy the instance on the hypervisor.
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.148 2 DEBUG oslo.service.loopingcall [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.148 2 DEBUG nova.compute.manager [-] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.149 2 DEBUG nova.network.neutron [-] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.172 2 DEBUG oslo_concurrency.lockutils [req-2918d78e-ff83-480e-87cb-30b3898e0bba req-c195240e-583b-4aa1-9eaf-eb85d7b00c4a e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Releasing lock "refresh_cache-1f03604e-0a25-4f22-807b-11f2e654be90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:44:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3101: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.6 KiB/s wr, 50 op/s
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.607 2 DEBUG nova.compute.manager [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-vif-unplugged-0079b745-efa5-4641-a180-fee85fb30a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.608 2 DEBUG oslo_concurrency.lockutils [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.608 2 DEBUG oslo_concurrency.lockutils [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.609 2 DEBUG oslo_concurrency.lockutils [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.609 2 DEBUG nova.compute.manager [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] No waiting events found dispatching network-vif-unplugged-0079b745-efa5-4641-a180-fee85fb30a5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.609 2 DEBUG nova.compute.manager [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-vif-unplugged-0079b745-efa5-4641-a180-fee85fb30a5d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.610 2 DEBUG nova.compute.manager [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.610 2 DEBUG oslo_concurrency.lockutils [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Acquiring lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.610 2 DEBUG oslo_concurrency.lockutils [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.611 2 DEBUG oslo_concurrency.lockutils [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.611 2 DEBUG nova.compute.manager [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] No waiting events found dispatching network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.611 2 WARNING nova.compute.manager [req-009a7719-9899-49aa-8cc5-93c3e4ca62d8 req-527a943f-28db-4d2e-b923-3de31b2b4593 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received unexpected event network-vif-plugged-0079b745-efa5-4641-a180-fee85fb30a5d for instance with vm_state active and task_state deleting.
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.873 2 DEBUG nova.network.neutron [-] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.900 2 INFO nova.compute.manager [-] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Took 0.75 seconds to deallocate network for instance.
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.963 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:44:41 compute-0 nova_compute[260935]: 2025-10-11 09:44:41.964 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:44:42 compute-0 nova_compute[260935]: 2025-10-11 09:44:42.105 2 DEBUG oslo_concurrency.processutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:44:42 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:44:42 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3107757616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:44:42 compute-0 nova_compute[260935]: 2025-10-11 09:44:42.577 2 DEBUG oslo_concurrency.processutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:44:42 compute-0 nova_compute[260935]: 2025-10-11 09:44:42.587 2 DEBUG nova.compute.provider_tree [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:44:42 compute-0 nova_compute[260935]: 2025-10-11 09:44:42.628 2 DEBUG nova.scheduler.client.report [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:44:42 compute-0 nova_compute[260935]: 2025-10-11 09:44:42.661 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:44:42 compute-0 nova_compute[260935]: 2025-10-11 09:44:42.711 2 INFO nova.scheduler.client.report [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Deleted allocations for instance 1f03604e-0a25-4f22-807b-11f2e654be90
Oct 11 09:44:42 compute-0 nova_compute[260935]: 2025-10-11 09:44:42.787 2 DEBUG oslo_concurrency.lockutils [None req-fec77767-140b-4bc4-9bbc-6c88db813392 2b3a6de4bc924868b4e73b0a23a89090 d599eac75aee4193a971c2c157a326a8 - - default default] Lock "1f03604e-0a25-4f22-807b-11f2e654be90" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:44:42 compute-0 ceph-mon[74313]: pgmap v3101: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.6 KiB/s wr, 50 op/s
Oct 11 09:44:42 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3107757616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:44:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3102: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 5.1 KiB/s wr, 107 op/s
Oct 11 09:44:43 compute-0 nova_compute[260935]: 2025-10-11 09:44:43.722 2 DEBUG nova.compute.manager [req-48c59ed9-cb11-40ee-85d8-2a9673f6a9e4 req-1a311ce8-c300-46cf-81ee-c73f1c333969 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Received event network-vif-deleted-0079b745-efa5-4641-a180-fee85fb30a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:44:43 compute-0 podman[434581]: 2025-10-11 09:44:43.807774832 +0000 UTC m=+0.097400293 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:44:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:44:44.596 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:44:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:44:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Oct 11 09:44:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Oct 11 09:44:44 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Oct 11 09:44:44 compute-0 ceph-mon[74313]: pgmap v3102: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 5.1 KiB/s wr, 107 op/s
Oct 11 09:44:44 compute-0 ceph-mon[74313]: osdmap e299: 3 total, 3 up, 3 in
Oct 11 09:44:45 compute-0 nova_compute[260935]: 2025-10-11 09:44:45.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:45 compute-0 nova_compute[260935]: 2025-10-11 09:44:45.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3104: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 3.7 KiB/s wr, 90 op/s
Oct 11 09:44:46 compute-0 ceph-mon[74313]: pgmap v3104: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 3.7 KiB/s wr, 90 op/s
Oct 11 09:44:47 compute-0 ovn_controller[152945]: 2025-10-11T09:44:47Z|01725|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:44:47 compute-0 ovn_controller[152945]: 2025-10-11T09:44:47Z|01726|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:44:47 compute-0 nova_compute[260935]: 2025-10-11 09:44:47.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:47 compute-0 ovn_controller[152945]: 2025-10-11T09:44:47Z|01727|binding|INFO|Releasing lport e23cd806-8523-4e59-ba27-db15cee52548 from this chassis (sb_readonly=0)
Oct 11 09:44:47 compute-0 ovn_controller[152945]: 2025-10-11T09:44:47Z|01728|binding|INFO|Releasing lport f45dd889-4db0-488b-b6d3-356bc191844e from this chassis (sb_readonly=0)
Oct 11 09:44:47 compute-0 nova_compute[260935]: 2025-10-11 09:44:47.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3105: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.4 KiB/s wr, 55 op/s
Oct 11 09:44:48 compute-0 nova_compute[260935]: 2025-10-11 09:44:48.600 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175873.5994537, 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:44:48 compute-0 nova_compute[260935]: 2025-10-11 09:44:48.601 2 INFO nova.compute.manager [-] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] VM Stopped (Lifecycle Event)
Oct 11 09:44:48 compute-0 nova_compute[260935]: 2025-10-11 09:44:48.631 2 DEBUG nova.compute.manager [None req-abe8ca32-68d9-44d6-a57f-633c053de132 - - - - - -] [instance: 84e65bfe-6918-4c8f-8b2a-3b5262c6e44e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:44:48 compute-0 ceph-mon[74313]: pgmap v3105: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.4 KiB/s wr, 55 op/s
Oct 11 09:44:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3106: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Oct 11 09:44:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:44:50 compute-0 nova_compute[260935]: 2025-10-11 09:44:50.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:50 compute-0 nova_compute[260935]: 2025-10-11 09:44:50.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:50 compute-0 sshd-session[434602]: Invalid user postgres from 165.232.82.252 port 60576
Oct 11 09:44:50 compute-0 sshd-session[434602]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:44:50 compute-0 sshd-session[434602]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:44:50 compute-0 podman[434604]: 2025-10-11 09:44:50.558675749 +0000 UTC m=+0.089225754 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:44:50 compute-0 ceph-mon[74313]: pgmap v3106: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Oct 11 09:44:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3107: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Oct 11 09:44:51 compute-0 nova_compute[260935]: 2025-10-11 09:44:51.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:44:51 compute-0 nova_compute[260935]: 2025-10-11 09:44:51.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:44:52 compute-0 sshd-session[434602]: Failed password for invalid user postgres from 165.232.82.252 port 60576 ssh2
Oct 11 09:44:52 compute-0 ceph-mon[74313]: pgmap v3107: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Oct 11 09:44:53 compute-0 sshd-session[434602]: Connection closed by invalid user postgres 165.232.82.252 port 60576 [preauth]
Oct 11 09:44:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3108: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:44:53 compute-0 podman[434625]: 2025-10-11 09:44:53.803299484 +0000 UTC m=+0.099777028 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd)
Oct 11 09:44:53 compute-0 podman[434626]: 2025-10-11 09:44:53.914212285 +0000 UTC m=+0.205204255 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Oct 11 09:44:54 compute-0 nova_compute[260935]: 2025-10-11 09:44:54.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:44:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:44:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:44:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:44:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:44:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:44:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:44:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:44:54 compute-0 ceph-mon[74313]: pgmap v3108: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:44:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:44:55
Oct 11 09:44:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:44:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:44:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.mgr', '.rgw.root', 'volumes', 'images', 'default.rgw.log', 'backups', 'default.rgw.control']
Oct 11 09:44:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:44:55 compute-0 nova_compute[260935]: 2025-10-11 09:44:55.034 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175880.0333538, 1f03604e-0a25-4f22-807b-11f2e654be90 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:44:55 compute-0 nova_compute[260935]: 2025-10-11 09:44:55.035 2 INFO nova.compute.manager [-] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] VM Stopped (Lifecycle Event)
Oct 11 09:44:55 compute-0 nova_compute[260935]: 2025-10-11 09:44:55.062 2 DEBUG nova.compute.manager [None req-94cac424-3afb-46e6-ad13-620ae14841c7 - - - - - -] [instance: 1f03604e-0a25-4f22-807b-11f2e654be90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:44:55 compute-0 nova_compute[260935]: 2025-10-11 09:44:55.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:55 compute-0 nova_compute[260935]: 2025-10-11 09:44:55.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:44:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3109: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:44:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:44:55 compute-0 nova_compute[260935]: 2025-10-11 09:44:55.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:44:55 compute-0 nova_compute[260935]: 2025-10-11 09:44:55.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:44:55 compute-0 nova_compute[260935]: 2025-10-11 09:44:55.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 09:44:56 compute-0 nova_compute[260935]: 2025-10-11 09:44:56.464 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:44:56 compute-0 nova_compute[260935]: 2025-10-11 09:44:56.465 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:44:56 compute-0 nova_compute[260935]: 2025-10-11 09:44:56.466 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:44:56 compute-0 nova_compute[260935]: 2025-10-11 09:44:56.467 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:44:56 compute-0 ceph-mon[74313]: pgmap v3109: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:44:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3110: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.188 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [{"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.216 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.216 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.217 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.217 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.237 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.237 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.237 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.238 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.238 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:44:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:44:58 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/976988539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.660 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.768 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.769 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.770 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.776 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.777 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.783 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:44:58 compute-0 nova_compute[260935]: 2025-10-11 09:44:58.783 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:44:58 compute-0 ceph-mon[74313]: pgmap v3110: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:44:58 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/976988539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.092 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.094 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2735MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.095 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.095 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.200 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.200 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.201 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.201 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.202 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.231 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.251 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.251 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.275 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: 9222eec4-0f6c-4910-b4e6-f067acc46485 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.276 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating resource provider ead2f521-4d5d-46d9-864c-1aac19134114 generation from 162 to 163 during operation: update_aggregates _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.305 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.416 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:44:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3111: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:44:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:44:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:44:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1714427866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.892 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.900 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.919 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.952 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:44:59 compute-0 nova_compute[260935]: 2025-10-11 09:44:59.953 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:44:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1714427866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:45:00 compute-0 nova_compute[260935]: 2025-10-11 09:45:00.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:45:00 compute-0 nova_compute[260935]: 2025-10-11 09:45:00.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:45:00 compute-0 nova_compute[260935]: 2025-10-11 09:45:00.949 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:45:00 compute-0 ceph-mon[74313]: pgmap v3111: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3112: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:01 compute-0 nova_compute[260935]: 2025-10-11 09:45:01.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:45:02 compute-0 nova_compute[260935]: 2025-10-11 09:45:02.395 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquiring lock "dba9c194-a688-42ad-8b48-789e10f168fd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:45:02 compute-0 nova_compute[260935]: 2025-10-11 09:45:02.396 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "dba9c194-a688-42ad-8b48-789e10f168fd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:45:02 compute-0 nova_compute[260935]: 2025-10-11 09:45:02.433 2 DEBUG nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 11 09:45:02 compute-0 nova_compute[260935]: 2025-10-11 09:45:02.498 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:45:02 compute-0 nova_compute[260935]: 2025-10-11 09:45:02.499 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:45:02 compute-0 nova_compute[260935]: 2025-10-11 09:45:02.508 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 11 09:45:02 compute-0 nova_compute[260935]: 2025-10-11 09:45:02.509 2 INFO nova.compute.claims [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Claim successful on node compute-0.ctlplane.example.com
Oct 11 09:45:02 compute-0 nova_compute[260935]: 2025-10-11 09:45:02.684 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:45:02 compute-0 nova_compute[260935]: 2025-10-11 09:45:02.737 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:45:02 compute-0 ceph-mon[74313]: pgmap v3112: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:45:03 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/614766696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.147 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.155 2 DEBUG nova.compute.provider_tree [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.186 2 DEBUG nova.scheduler.client.report [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.223 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.224 2 DEBUG nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.307 2 DEBUG nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.308 2 DEBUG nova.network.neutron [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.338 2 INFO nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.365 2 DEBUG nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.458 2 DEBUG nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.459 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.460 2 INFO nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Creating image(s)
Oct 11 09:45:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3113: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.483 2 DEBUG nova.storage.rbd_utils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] rbd image dba9c194-a688-42ad-8b48-789e10f168fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.507 2 DEBUG nova.storage.rbd_utils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] rbd image dba9c194-a688-42ad-8b48-789e10f168fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.533 2 DEBUG nova.storage.rbd_utils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] rbd image dba9c194-a688-42ad-8b48-789e10f168fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.538 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.641 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.643 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquiring lock "9811042c7d73cc51997f7c966840f4b7728169a1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.644 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.645 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "9811042c7d73cc51997f7c966840f4b7728169a1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.681 2 DEBUG nova.storage.rbd_utils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] rbd image dba9c194-a688-42ad-8b48-789e10f168fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.686 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dba9c194-a688-42ad-8b48-789e10f168fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.997 2 DEBUG nova.network.neutron [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 11 09:45:03 compute-0 nova_compute[260935]: 2025-10-11 09:45:03.998 2 DEBUG nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 11 09:45:04 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/614766696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.052 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1 dba9c194-a688-42ad-8b48-789e10f168fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.145 2 DEBUG nova.storage.rbd_utils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] resizing rbd image dba9c194-a688-42ad-8b48-789e10f168fd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.305 2 DEBUG nova.objects.instance [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lazy-loading 'migration_context' on Instance uuid dba9c194-a688-42ad-8b48-789e10f168fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.325 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.325 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Ensure instance console log exists: /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.326 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.327 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.327 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.330 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'encrypted': False, 'encryption_format': None, 'encryption_secret_uuid': None, 'image_id': '03f2fef0-11c0-48e1-b3a0-3e02d898739e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.337 2 WARNING nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.351 2 DEBUG nova.virt.libvirt.host [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.352 2 DEBUG nova.virt.libvirt.host [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.354 2 DEBUG nova.virt.libvirt.host [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.355 2 DEBUG nova.virt.libvirt.host [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.355 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.355 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-11T08:43:26Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7902c9bf-4f3b-46f1-b809-1259f28644a3',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-11T08:43:27Z,direct_url=<?>,disk_format='qcow2',id=03f2fef0-11c0-48e1-b3a0-3e02d898739e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='0e82dde68cf645ae91e590a41f8cb249',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-11T08:43:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.356 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.356 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.356 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.356 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.356 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.357 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.357 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.357 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.357 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.357 2 DEBUG nova.virt.hardware [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.360 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:45:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:45:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:45:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3511972821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.805 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.829 2 DEBUG nova.storage.rbd_utils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] rbd image dba9c194-a688-42ad-8b48-789e10f168fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:45:04 compute-0 nova_compute[260935]: 2025-10-11 09:45:04.833 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:45:05 compute-0 ceph-mon[74313]: pgmap v3113: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3511972821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:45:05 compute-0 nova_compute[260935]: 2025-10-11 09:45:05.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:45:05 compute-0 nova_compute[260935]: 2025-10-11 09:45:05.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:45:05 compute-0 nova_compute[260935]: 2025-10-11 09:45:05.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5005 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:45:05 compute-0 nova_compute[260935]: 2025-10-11 09:45:05.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:45:05 compute-0 nova_compute[260935]: 2025-10-11 09:45:05.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:45:05 compute-0 nova_compute[260935]: 2025-10-11 09:45:05.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:45:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 11 09:45:05 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/502829476' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:45:05 compute-0 nova_compute[260935]: 2025-10-11 09:45:05.322 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:45:05 compute-0 nova_compute[260935]: 2025-10-11 09:45:05.325 2 DEBUG nova.objects.instance [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lazy-loading 'pci_devices' on Instance uuid dba9c194-a688-42ad-8b48-789e10f168fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:45:05 compute-0 nova_compute[260935]: 2025-10-11 09:45:05.351 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] End _get_guest_xml xml=<domain type="kvm">
Oct 11 09:45:05 compute-0 nova_compute[260935]:   <uuid>dba9c194-a688-42ad-8b48-789e10f168fd</uuid>
Oct 11 09:45:05 compute-0 nova_compute[260935]:   <name>instance-00000097</name>
Oct 11 09:45:05 compute-0 nova_compute[260935]:   <memory>131072</memory>
Oct 11 09:45:05 compute-0 nova_compute[260935]:   <vcpu>1</vcpu>
Oct 11 09:45:05 compute-0 nova_compute[260935]:   <metadata>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <nova:name>tempest-AggregatesAdminTestJSON-server-1352653575</nova:name>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <nova:creationTime>2025-10-11 09:45:04</nova:creationTime>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <nova:flavor name="m1.nano">
Oct 11 09:45:05 compute-0 nova_compute[260935]:         <nova:memory>128</nova:memory>
Oct 11 09:45:05 compute-0 nova_compute[260935]:         <nova:disk>1</nova:disk>
Oct 11 09:45:05 compute-0 nova_compute[260935]:         <nova:swap>0</nova:swap>
Oct 11 09:45:05 compute-0 nova_compute[260935]:         <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:45:05 compute-0 nova_compute[260935]:         <nova:vcpus>1</nova:vcpus>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       </nova:flavor>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <nova:owner>
Oct 11 09:45:05 compute-0 nova_compute[260935]:         <nova:user uuid="bee3284f19da4fb389bd50d7ec71d51c">tempest-AggregatesAdminTestJSON-938594948-project-member</nova:user>
Oct 11 09:45:05 compute-0 nova_compute[260935]:         <nova:project uuid="9b754c6818c940e8b5e1beeba6733fe1">tempest-AggregatesAdminTestJSON-938594948</nova:project>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       </nova:owner>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <nova:ports/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     </nova:instance>
Oct 11 09:45:05 compute-0 nova_compute[260935]:   </metadata>
Oct 11 09:45:05 compute-0 nova_compute[260935]:   <sysinfo type="smbios">
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <system>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <entry name="manufacturer">RDO</entry>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <entry name="product">OpenStack Compute</entry>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <entry name="serial">dba9c194-a688-42ad-8b48-789e10f168fd</entry>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <entry name="uuid">dba9c194-a688-42ad-8b48-789e10f168fd</entry>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <entry name="family">Virtual Machine</entry>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     </system>
Oct 11 09:45:05 compute-0 nova_compute[260935]:   </sysinfo>
Oct 11 09:45:05 compute-0 nova_compute[260935]:   <os>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <boot dev="hd"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <smbios mode="sysinfo"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:   </os>
Oct 11 09:45:05 compute-0 nova_compute[260935]:   <features>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <acpi/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <apic/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <vmcoreinfo/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:   </features>
Oct 11 09:45:05 compute-0 nova_compute[260935]:   <clock offset="utc">
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <timer name="pit" tickpolicy="delay"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <timer name="hpet" present="no"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:   </clock>
Oct 11 09:45:05 compute-0 nova_compute[260935]:   <cpu mode="host-model" match="exact">
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <topology sockets="1" cores="1" threads="1"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:   </cpu>
Oct 11 09:45:05 compute-0 nova_compute[260935]:   <devices>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <disk type="network" device="disk">
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/dba9c194-a688-42ad-8b48-789e10f168fd_disk">
Oct 11 09:45:05 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       </source>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:45:05 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <target dev="vda" bus="virtio"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <disk type="network" device="cdrom">
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <driver type="raw" cache="none"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <source protocol="rbd" name="vms/dba9c194-a688-42ad-8b48-789e10f168fd_disk.config">
Oct 11 09:45:05 compute-0 nova_compute[260935]:         <host name="192.168.122.100" port="6789"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       </source>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <auth username="openstack">
Oct 11 09:45:05 compute-0 nova_compute[260935]:         <secret type="ceph" uuid="33219f8b-dc38-5a8f-a577-8ccc4b37190a"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       </auth>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <target dev="sda" bus="sata"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     </disk>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <serial type="pty">
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <log file="/var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd/console.log" append="off"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     </serial>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <video>
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <model type="virtio"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     </video>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <input type="tablet" bus="usb"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <rng model="virtio">
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <backend model="random">/dev/urandom</backend>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     </rng>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="pci" model="pcie-root-port"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <controller type="usb" index="0"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     <memballoon model="virtio">
Oct 11 09:45:05 compute-0 nova_compute[260935]:       <stats period="10"/>
Oct 11 09:45:05 compute-0 nova_compute[260935]:     </memballoon>
Oct 11 09:45:05 compute-0 nova_compute[260935]:   </devices>
Oct 11 09:45:05 compute-0 nova_compute[260935]: </domain>
Oct 11 09:45:05 compute-0 nova_compute[260935]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 11 09:45:05 compute-0 nova_compute[260935]: 2025-10-11 09:45:05.409 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:45:05 compute-0 nova_compute[260935]: 2025-10-11 09:45:05.410 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 11 09:45:05 compute-0 nova_compute[260935]: 2025-10-11 09:45:05.410 2 INFO nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Using config drive
Oct 11 09:45:05 compute-0 nova_compute[260935]: 2025-10-11 09:45:05.434 2 DEBUG nova.storage.rbd_utils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] rbd image dba9c194-a688-42ad-8b48-789e10f168fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3114: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002627377275021163 of space, bias 1.0, pg target 0.788213182506349 quantized to 32 (current 32)
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:45:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:45:05 compute-0 nova_compute[260935]: 2025-10-11 09:45:05.879 2 INFO nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Creating config drive at /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd/disk.config
Oct 11 09:45:05 compute-0 nova_compute[260935]: 2025-10-11 09:45:05.888 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplq24gx5u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:45:06 compute-0 nova_compute[260935]: 2025-10-11 09:45:06.065 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplq24gx5u" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:45:06 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/502829476' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 11 09:45:06 compute-0 nova_compute[260935]: 2025-10-11 09:45:06.101 2 DEBUG nova.storage.rbd_utils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] rbd image dba9c194-a688-42ad-8b48-789e10f168fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 11 09:45:06 compute-0 nova_compute[260935]: 2025-10-11 09:45:06.105 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd/disk.config dba9c194-a688-42ad-8b48-789e10f168fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:45:06 compute-0 nova_compute[260935]: 2025-10-11 09:45:06.337 2 DEBUG oslo_concurrency.processutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd/disk.config dba9c194-a688-42ad-8b48-789e10f168fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:45:06 compute-0 nova_compute[260935]: 2025-10-11 09:45:06.339 2 INFO nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Deleting local config drive /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd/disk.config because it was imported into RBD.
Oct 11 09:45:06 compute-0 systemd-machined[215705]: New machine qemu-177-instance-00000097.
Oct 11 09:45:06 compute-0 systemd[1]: Started Virtual Machine qemu-177-instance-00000097.
Oct 11 09:45:07 compute-0 ceph-mon[74313]: pgmap v3114: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.325 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175907.3255875, dba9c194-a688-42ad-8b48-789e10f168fd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.326 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] VM Resumed (Lifecycle Event)
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.330 2 DEBUG nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.330 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.335 2 INFO nova.virt.libvirt.driver [-] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Instance spawned successfully.
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.335 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.368 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.375 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.376 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.376 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.377 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.377 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.377 2 DEBUG nova.virt.libvirt.driver [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.381 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.430 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.431 2 DEBUG nova.virt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Emitting event <LifecycleEvent: 1760175907.3263695, dba9c194-a688-42ad-8b48-789e10f168fd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.431 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] VM Started (Lifecycle Event)
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.456 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.459 2 DEBUG nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.466 2 INFO nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Took 4.01 seconds to spawn the instance on the hypervisor.
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.466 2 DEBUG nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:45:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3115: 321 pgs: 321 active+clean; 366 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.3 MiB/s wr, 32 op/s
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.474 2 INFO nova.compute.manager [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.516 2 INFO nova.compute.manager [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Took 5.04 seconds to build instance.
Oct 11 09:45:07 compute-0 nova_compute[260935]: 2025-10-11 09:45:07.530 2 DEBUG oslo_concurrency.lockutils [None req-5d1ab53d-f1ec-46f4-80a5-4660dcc2006e bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "dba9c194-a688-42ad-8b48-789e10f168fd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:45:09 compute-0 ceph-mon[74313]: pgmap v3115: 321 pgs: 321 active+clean; 366 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.3 MiB/s wr, 32 op/s
Oct 11 09:45:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3116: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:45:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:45:09 compute-0 nova_compute[260935]: 2025-10-11 09:45:09.972 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquiring lock "dba9c194-a688-42ad-8b48-789e10f168fd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:45:09 compute-0 nova_compute[260935]: 2025-10-11 09:45:09.973 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "dba9c194-a688-42ad-8b48-789e10f168fd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:45:09 compute-0 nova_compute[260935]: 2025-10-11 09:45:09.973 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquiring lock "dba9c194-a688-42ad-8b48-789e10f168fd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:45:09 compute-0 nova_compute[260935]: 2025-10-11 09:45:09.973 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "dba9c194-a688-42ad-8b48-789e10f168fd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:45:09 compute-0 nova_compute[260935]: 2025-10-11 09:45:09.974 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "dba9c194-a688-42ad-8b48-789e10f168fd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:45:09 compute-0 nova_compute[260935]: 2025-10-11 09:45:09.976 2 INFO nova.compute.manager [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Terminating instance
Oct 11 09:45:09 compute-0 nova_compute[260935]: 2025-10-11 09:45:09.977 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquiring lock "refresh_cache-dba9c194-a688-42ad-8b48-789e10f168fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:45:09 compute-0 nova_compute[260935]: 2025-10-11 09:45:09.978 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquired lock "refresh_cache-dba9c194-a688-42ad-8b48-789e10f168fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:45:09 compute-0 nova_compute[260935]: 2025-10-11 09:45:09.978 2 DEBUG nova.network.neutron [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 11 09:45:10 compute-0 nova_compute[260935]: 2025-10-11 09:45:10.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:45:10 compute-0 nova_compute[260935]: 2025-10-11 09:45:10.171 2 DEBUG nova.network.neutron [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:45:11 compute-0 nova_compute[260935]: 2025-10-11 09:45:11.032 2 DEBUG nova.network.neutron [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:45:11 compute-0 nova_compute[260935]: 2025-10-11 09:45:11.053 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Releasing lock "refresh_cache-dba9c194-a688-42ad-8b48-789e10f168fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:45:11 compute-0 nova_compute[260935]: 2025-10-11 09:45:11.054 2 DEBUG nova.compute.manager [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 11 09:45:11 compute-0 ceph-mon[74313]: pgmap v3116: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:45:11 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000097.scope: Deactivated successfully.
Oct 11 09:45:11 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000097.scope: Consumed 4.652s CPU time.
Oct 11 09:45:11 compute-0 systemd-machined[215705]: Machine qemu-177-instance-00000097 terminated.
Oct 11 09:45:11 compute-0 nova_compute[260935]: 2025-10-11 09:45:11.284 2 INFO nova.virt.libvirt.driver [-] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Instance destroyed successfully.
Oct 11 09:45:11 compute-0 nova_compute[260935]: 2025-10-11 09:45:11.285 2 DEBUG nova.objects.instance [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lazy-loading 'resources' on Instance uuid dba9c194-a688-42ad-8b48-789e10f168fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:45:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3117: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:45:11 compute-0 nova_compute[260935]: 2025-10-11 09:45:11.713 2 INFO nova.virt.libvirt.driver [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Deleting instance files /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd_del
Oct 11 09:45:11 compute-0 nova_compute[260935]: 2025-10-11 09:45:11.715 2 INFO nova.virt.libvirt.driver [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Deletion of /var/lib/nova/instances/dba9c194-a688-42ad-8b48-789e10f168fd_del complete
Oct 11 09:45:11 compute-0 nova_compute[260935]: 2025-10-11 09:45:11.772 2 INFO nova.compute.manager [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct 11 09:45:11 compute-0 nova_compute[260935]: 2025-10-11 09:45:11.773 2 DEBUG oslo.service.loopingcall [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 11 09:45:11 compute-0 nova_compute[260935]: 2025-10-11 09:45:11.774 2 DEBUG nova.compute.manager [-] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 11 09:45:11 compute-0 nova_compute[260935]: 2025-10-11 09:45:11.774 2 DEBUG nova.network.neutron [-] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 11 09:45:12 compute-0 nova_compute[260935]: 2025-10-11 09:45:12.885 2 DEBUG nova.network.neutron [-] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:45:12 compute-0 nova_compute[260935]: 2025-10-11 09:45:12.906 2 DEBUG nova.network.neutron [-] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:45:12 compute-0 nova_compute[260935]: 2025-10-11 09:45:12.923 2 INFO nova.compute.manager [-] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Took 1.15 seconds to deallocate network for instance.
Oct 11 09:45:12 compute-0 nova_compute[260935]: 2025-10-11 09:45:12.988 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:45:12 compute-0 nova_compute[260935]: 2025-10-11 09:45:12.989 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:45:13 compute-0 ceph-mon[74313]: pgmap v3117: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 11 09:45:13 compute-0 nova_compute[260935]: 2025-10-11 09:45:13.155 2 DEBUG oslo_concurrency.processutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:45:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3118: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 11 09:45:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:45:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3430155061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:45:13 compute-0 nova_compute[260935]: 2025-10-11 09:45:13.641 2 DEBUG oslo_concurrency.processutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:45:13 compute-0 nova_compute[260935]: 2025-10-11 09:45:13.647 2 DEBUG nova.compute.provider_tree [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:45:13 compute-0 nova_compute[260935]: 2025-10-11 09:45:13.668 2 DEBUG nova.scheduler.client.report [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:45:13 compute-0 nova_compute[260935]: 2025-10-11 09:45:13.703 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:45:13 compute-0 nova_compute[260935]: 2025-10-11 09:45:13.757 2 INFO nova.scheduler.client.report [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Deleted allocations for instance dba9c194-a688-42ad-8b48-789e10f168fd
Oct 11 09:45:13 compute-0 nova_compute[260935]: 2025-10-11 09:45:13.827 2 DEBUG oslo_concurrency.lockutils [None req-52c8e5d0-9803-4e78-ad21-65fb1c7fae27 bee3284f19da4fb389bd50d7ec71d51c 9b754c6818c940e8b5e1beeba6733fe1 - - default default] Lock "dba9c194-a688-42ad-8b48-789e10f168fd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:45:14 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3430155061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:45:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:45:14 compute-0 podman[435128]: 2025-10-11 09:45:14.798267942 +0000 UTC m=+0.096622741 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 09:45:15 compute-0 ceph-mon[74313]: pgmap v3118: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 11 09:45:15 compute-0 nova_compute[260935]: 2025-10-11 09:45:15.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:45:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:45:15.241 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:45:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:45:15.241 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:45:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:45:15.242 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:45:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3119: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 11 09:45:17 compute-0 ceph-mon[74313]: pgmap v3119: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 11 09:45:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3120: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 11 09:45:19 compute-0 ceph-mon[74313]: pgmap v3120: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 11 09:45:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3121: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 513 KiB/s wr, 94 op/s
Oct 11 09:45:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:45:20 compute-0 nova_compute[260935]: 2025-10-11 09:45:20.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:45:20 compute-0 nova_compute[260935]: 2025-10-11 09:45:20.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:45:20 compute-0 nova_compute[260935]: 2025-10-11 09:45:20.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:45:20 compute-0 nova_compute[260935]: 2025-10-11 09:45:20.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:45:20 compute-0 nova_compute[260935]: 2025-10-11 09:45:20.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:45:20 compute-0 nova_compute[260935]: 2025-10-11 09:45:20.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:45:20 compute-0 podman[435149]: 2025-10-11 09:45:20.798107881 +0000 UTC m=+0.090429807 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 11 09:45:21 compute-0 ceph-mon[74313]: pgmap v3121: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 513 KiB/s wr, 94 op/s
Oct 11 09:45:21 compute-0 ovn_controller[152945]: 2025-10-11T09:45:21Z|01729|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 11 09:45:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3122: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Oct 11 09:45:23 compute-0 ceph-mon[74313]: pgmap v3122: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Oct 11 09:45:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3123: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Oct 11 09:45:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:45:24 compute-0 podman[435170]: 2025-10-11 09:45:24.799416037 +0000 UTC m=+0.096435606 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:45:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:45:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:45:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:45:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:45:24 compute-0 podman[435171]: 2025-10-11 09:45:24.859318367 +0000 UTC m=+0.149867094 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 09:45:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:45:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:45:25 compute-0 ceph-mon[74313]: pgmap v3123: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:45:25 compute-0 sudo[435216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:45:25 compute-0 sudo[435216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:25 compute-0 sudo[435216]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3124: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:25 compute-0 sudo[435241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:45:25 compute-0 sudo[435241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:25 compute-0 sudo[435241]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:25 compute-0 sudo[435266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:45:25 compute-0 sudo[435266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:25 compute-0 sudo[435266]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.661 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.709 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.710 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid b75d8ded-515b-48ff-a6b6-28df88878996 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.710 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 52be16b4-343a-4fd4-9041-39069a1fde2a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.711 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.711 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.711 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.712 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.712 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.713 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:45:25 compute-0 sudo[435291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 09:45:25 compute-0 sudo[435291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.769 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.775 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:45:25 compute-0 nova_compute[260935]: 2025-10-11 09:45:25.778 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:45:26 compute-0 sudo[435291]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:45:26 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:45:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:45:26 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:45:26 compute-0 sudo[435336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:45:26 compute-0 sudo[435336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:26 compute-0 sudo[435336]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:26 compute-0 nova_compute[260935]: 2025-10-11 09:45:26.281 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760175911.2804635, dba9c194-a688-42ad-8b48-789e10f168fd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 11 09:45:26 compute-0 nova_compute[260935]: 2025-10-11 09:45:26.282 2 INFO nova.compute.manager [-] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] VM Stopped (Lifecycle Event)
Oct 11 09:45:26 compute-0 nova_compute[260935]: 2025-10-11 09:45:26.303 2 DEBUG nova.compute.manager [None req-f211f4ea-1e7b-436b-bcc0-a00269ce03dd - - - - - -] [instance: dba9c194-a688-42ad-8b48-789e10f168fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 11 09:45:26 compute-0 sudo[435361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:45:26 compute-0 sudo[435361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:26 compute-0 sudo[435361]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:26 compute-0 sudo[435386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:45:26 compute-0 sudo[435386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:26 compute-0 sudo[435386]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:26 compute-0 sudo[435411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:45:26 compute-0 sudo[435411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:45:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1176825860' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:45:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:45:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1176825860' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:45:27 compute-0 ceph-mon[74313]: pgmap v3124: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:27 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:45:27 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:45:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1176825860' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:45:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1176825860' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:45:27 compute-0 sudo[435411]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:45:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:45:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:45:27 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:45:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:45:27 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:45:27 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 5fe9012b-8e34-4e37-a0a7-de35e1a225ab does not exist
Oct 11 09:45:27 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 75d961f7-5425-41e8-86ec-dcd742ee631e does not exist
Oct 11 09:45:27 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev eb31a03d-d951-4863-88b2-a7617d3a355c does not exist
Oct 11 09:45:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:45:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:45:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:45:27 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:45:27 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:45:27 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:45:27 compute-0 sudo[435471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:45:27 compute-0 sudo[435471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:27 compute-0 sudo[435471]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:27 compute-0 sshd-session[435455]: Invalid user postgres from 165.232.82.252 port 51858
Oct 11 09:45:27 compute-0 sudo[435496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:45:27 compute-0 sudo[435496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:27 compute-0 sudo[435496]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:27 compute-0 sshd-session[435455]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:45:27 compute-0 sshd-session[435455]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:45:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3125: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:27 compute-0 sudo[435521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:45:27 compute-0 sudo[435521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:27 compute-0 sudo[435521]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:27 compute-0 sudo[435546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:45:27 compute-0 sudo[435546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:28 compute-0 sshd-session[435451]: Invalid user mgeweb from 155.4.244.179 port 4507
Oct 11 09:45:28 compute-0 sshd-session[435451]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:45:28 compute-0 sshd-session[435451]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:45:28 compute-0 podman[435614]: 2025-10-11 09:45:28.120472824 +0000 UTC m=+0.079696796 container create 1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 09:45:28 compute-0 podman[435614]: 2025-10-11 09:45:28.070234655 +0000 UTC m=+0.029458687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:45:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:45:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:45:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:45:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:45:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:45:28 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:45:28 compute-0 systemd[1]: Started libpod-conmon-1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03.scope.
Oct 11 09:45:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:45:28 compute-0 podman[435614]: 2025-10-11 09:45:28.352978026 +0000 UTC m=+0.312201988 container init 1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 09:45:28 compute-0 podman[435614]: 2025-10-11 09:45:28.36704924 +0000 UTC m=+0.326273202 container start 1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 09:45:28 compute-0 keen_goldberg[435630]: 167 167
Oct 11 09:45:28 compute-0 systemd[1]: libpod-1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03.scope: Deactivated successfully.
Oct 11 09:45:28 compute-0 podman[435614]: 2025-10-11 09:45:28.381501916 +0000 UTC m=+0.340725878 container attach 1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldberg, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 09:45:28 compute-0 podman[435614]: 2025-10-11 09:45:28.381867956 +0000 UTC m=+0.341091948 container died 1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldberg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:45:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc56555f16783b858c05fa30e1936e197ef1f9dad9abb02c971fc6cf09b37574-merged.mount: Deactivated successfully.
Oct 11 09:45:28 compute-0 podman[435614]: 2025-10-11 09:45:28.541413391 +0000 UTC m=+0.500637363 container remove 1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldberg, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 09:45:28 compute-0 systemd[1]: libpod-conmon-1e92f9766ec644e4b7f0d634a67525681730fab158422c999ad33121106b9e03.scope: Deactivated successfully.
Oct 11 09:45:28 compute-0 podman[435657]: 2025-10-11 09:45:28.829306706 +0000 UTC m=+0.076526678 container create 85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 09:45:28 compute-0 systemd[1]: Started libpod-conmon-85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9.scope.
Oct 11 09:45:28 compute-0 podman[435657]: 2025-10-11 09:45:28.796399313 +0000 UTC m=+0.043619305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:45:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6074ebf12a33dd2f911d441d825f55fc9b044f80d65bbd904537ff354f253dd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6074ebf12a33dd2f911d441d825f55fc9b044f80d65bbd904537ff354f253dd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6074ebf12a33dd2f911d441d825f55fc9b044f80d65bbd904537ff354f253dd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6074ebf12a33dd2f911d441d825f55fc9b044f80d65bbd904537ff354f253dd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6074ebf12a33dd2f911d441d825f55fc9b044f80d65bbd904537ff354f253dd7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:45:28 compute-0 podman[435657]: 2025-10-11 09:45:28.944412544 +0000 UTC m=+0.191632566 container init 85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 09:45:28 compute-0 podman[435657]: 2025-10-11 09:45:28.957875102 +0000 UTC m=+0.205095114 container start 85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:45:28 compute-0 podman[435657]: 2025-10-11 09:45:28.977812011 +0000 UTC m=+0.225032033 container attach 85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 09:45:29 compute-0 ceph-mon[74313]: pgmap v3125: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3126: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:45:30 compute-0 laughing_chaplygin[435673]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:45:30 compute-0 laughing_chaplygin[435673]: --> relative data size: 1.0
Oct 11 09:45:30 compute-0 laughing_chaplygin[435673]: --> All data devices are unavailable
Oct 11 09:45:30 compute-0 systemd[1]: libpod-85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9.scope: Deactivated successfully.
Oct 11 09:45:30 compute-0 systemd[1]: libpod-85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9.scope: Consumed 1.103s CPU time.
Oct 11 09:45:30 compute-0 podman[435657]: 2025-10-11 09:45:30.104191442 +0000 UTC m=+1.351411454 container died 85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 09:45:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6074ebf12a33dd2f911d441d825f55fc9b044f80d65bbd904537ff354f253dd7-merged.mount: Deactivated successfully.
Oct 11 09:45:30 compute-0 sshd-session[435455]: Failed password for invalid user postgres from 165.232.82.252 port 51858 ssh2
Oct 11 09:45:30 compute-0 podman[435657]: 2025-10-11 09:45:30.184775993 +0000 UTC m=+1.431996005 container remove 85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:45:30 compute-0 systemd[1]: libpod-conmon-85d649b21f7a8cda2beefe5084890a53e6ecfd6851b0c6539cbfc399aa7f34d9.scope: Deactivated successfully.
Oct 11 09:45:30 compute-0 nova_compute[260935]: 2025-10-11 09:45:30.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:45:30 compute-0 sudo[435546]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:30 compute-0 sudo[435714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:45:30 compute-0 sudo[435714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:30 compute-0 sudo[435714]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:30 compute-0 sudo[435739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:45:30 compute-0 sudo[435739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:30 compute-0 sudo[435739]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:30 compute-0 sudo[435764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:45:30 compute-0 sshd-session[435451]: Failed password for invalid user mgeweb from 155.4.244.179 port 4507 ssh2
Oct 11 09:45:30 compute-0 sudo[435764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:30 compute-0 sudo[435764]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:30 compute-0 sudo[435789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:45:30 compute-0 sudo[435789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:31 compute-0 podman[435853]: 2025-10-11 09:45:31.016771578 +0000 UTC m=+0.056536777 container create 40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nightingale, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:45:31 compute-0 systemd[1]: Started libpod-conmon-40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64.scope.
Oct 11 09:45:31 compute-0 podman[435853]: 2025-10-11 09:45:30.989982727 +0000 UTC m=+0.029747986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:45:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:45:31 compute-0 podman[435853]: 2025-10-11 09:45:31.11060243 +0000 UTC m=+0.150367679 container init 40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:45:31 compute-0 podman[435853]: 2025-10-11 09:45:31.120800756 +0000 UTC m=+0.160565925 container start 40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 09:45:31 compute-0 wonderful_nightingale[435869]: 167 167
Oct 11 09:45:31 compute-0 systemd[1]: libpod-40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64.scope: Deactivated successfully.
Oct 11 09:45:31 compute-0 podman[435853]: 2025-10-11 09:45:31.127977897 +0000 UTC m=+0.167743146 container attach 40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nightingale, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 09:45:31 compute-0 podman[435853]: 2025-10-11 09:45:31.128379839 +0000 UTC m=+0.168145048 container died 40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e85a5f805cc43300e1fac85dd1645fe9d11d258e82d3af7869d649d91d8382c-merged.mount: Deactivated successfully.
Oct 11 09:45:31 compute-0 podman[435853]: 2025-10-11 09:45:31.175287694 +0000 UTC m=+0.215052903 container remove 40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_nightingale, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:45:31 compute-0 systemd[1]: libpod-conmon-40eb5b80a3190af639a4871f51c9c2f2c123b2db64ef1b51bc9a2e94c9c32f64.scope: Deactivated successfully.
Oct 11 09:45:31 compute-0 ceph-mon[74313]: pgmap v3126: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:31 compute-0 podman[435893]: 2025-10-11 09:45:31.430217224 +0000 UTC m=+0.064640374 container create 80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:45:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3127: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:31 compute-0 systemd[1]: Started libpod-conmon-80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578.scope.
Oct 11 09:45:31 compute-0 podman[435893]: 2025-10-11 09:45:31.405527332 +0000 UTC m=+0.039950522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:45:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db2da31b09cc0c93e01f4eb7fbece128c1eae7f89b46fcd72e483f57324009/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db2da31b09cc0c93e01f4eb7fbece128c1eae7f89b46fcd72e483f57324009/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db2da31b09cc0c93e01f4eb7fbece128c1eae7f89b46fcd72e483f57324009/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99db2da31b09cc0c93e01f4eb7fbece128c1eae7f89b46fcd72e483f57324009/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:45:31 compute-0 podman[435893]: 2025-10-11 09:45:31.562550076 +0000 UTC m=+0.196973276 container init 80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 09:45:31 compute-0 podman[435893]: 2025-10-11 09:45:31.57658241 +0000 UTC m=+0.211005580 container start 80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 09:45:31 compute-0 podman[435893]: 2025-10-11 09:45:31.580420887 +0000 UTC m=+0.214844107 container attach 80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]: {
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:     "0": [
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:         {
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "devices": [
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "/dev/loop3"
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             ],
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "lv_name": "ceph_lv0",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "lv_size": "21470642176",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "name": "ceph_lv0",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "tags": {
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.cluster_name": "ceph",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.crush_device_class": "",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.encrypted": "0",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.osd_id": "0",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.type": "block",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.vdo": "0"
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             },
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "type": "block",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "vg_name": "ceph_vg0"
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:         }
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:     ],
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:     "1": [
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:         {
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "devices": [
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "/dev/loop4"
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             ],
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "lv_name": "ceph_lv1",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "lv_size": "21470642176",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "name": "ceph_lv1",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "tags": {
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.cluster_name": "ceph",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.crush_device_class": "",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.encrypted": "0",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.osd_id": "1",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.type": "block",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.vdo": "0"
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             },
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "type": "block",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "vg_name": "ceph_vg1"
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:         }
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:     ],
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:     "2": [
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:         {
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "devices": [
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "/dev/loop5"
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             ],
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "lv_name": "ceph_lv2",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "lv_size": "21470642176",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "name": "ceph_lv2",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "tags": {
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.cluster_name": "ceph",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.crush_device_class": "",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.encrypted": "0",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.osd_id": "2",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.type": "block",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:                 "ceph.vdo": "0"
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             },
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "type": "block",
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:             "vg_name": "ceph_vg2"
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:         }
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]:     ]
Oct 11 09:45:32 compute-0 nostalgic_kare[435910]: }
Oct 11 09:45:32 compute-0 systemd[1]: libpod-80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578.scope: Deactivated successfully.
Oct 11 09:45:32 compute-0 podman[435893]: 2025-10-11 09:45:32.398091751 +0000 UTC m=+1.032514921 container died 80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 09:45:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-99db2da31b09cc0c93e01f4eb7fbece128c1eae7f89b46fcd72e483f57324009-merged.mount: Deactivated successfully.
Oct 11 09:45:32 compute-0 podman[435893]: 2025-10-11 09:45:32.48469531 +0000 UTC m=+1.119118480 container remove 80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:45:32 compute-0 systemd[1]: libpod-conmon-80173a483183f7b6b4d586fb5ce65c0ca814fa4c34c7bf6f5a6e845740477578.scope: Deactivated successfully.
Oct 11 09:45:32 compute-0 sudo[435789]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:32 compute-0 sudo[435933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:45:32 compute-0 sudo[435933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:32 compute-0 sudo[435933]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:32 compute-0 sudo[435958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:45:32 compute-0 sudo[435958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:32 compute-0 sudo[435958]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:32 compute-0 nova_compute[260935]: 2025-10-11 09:45:32.754 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:45:32 compute-0 sudo[435983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:45:32 compute-0 sudo[435983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:32 compute-0 sudo[435983]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:32 compute-0 sshd-session[435455]: Connection closed by invalid user postgres 165.232.82.252 port 51858 [preauth]
Oct 11 09:45:32 compute-0 sshd-session[435451]: Received disconnect from 155.4.244.179 port 4507:11: Bye Bye [preauth]
Oct 11 09:45:32 compute-0 sshd-session[435451]: Disconnected from invalid user mgeweb 155.4.244.179 port 4507 [preauth]
Oct 11 09:45:32 compute-0 sudo[436008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:45:32 compute-0 sudo[436008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:33 compute-0 ceph-mon[74313]: pgmap v3127: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:33 compute-0 podman[436072]: 2025-10-11 09:45:33.380647658 +0000 UTC m=+0.054246872 container create b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:45:33 compute-0 systemd[1]: Started libpod-conmon-b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772.scope.
Oct 11 09:45:33 compute-0 podman[436072]: 2025-10-11 09:45:33.355510913 +0000 UTC m=+0.029110197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:45:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:45:33 compute-0 podman[436072]: 2025-10-11 09:45:33.470579691 +0000 UTC m=+0.144178935 container init b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 11 09:45:33 compute-0 podman[436072]: 2025-10-11 09:45:33.480002595 +0000 UTC m=+0.153601809 container start b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct 11 09:45:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3128: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:33 compute-0 podman[436072]: 2025-10-11 09:45:33.48517106 +0000 UTC m=+0.158770314 container attach b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 09:45:33 compute-0 charming_gagarin[436089]: 167 167
Oct 11 09:45:33 compute-0 systemd[1]: libpod-b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772.scope: Deactivated successfully.
Oct 11 09:45:33 compute-0 podman[436072]: 2025-10-11 09:45:33.489691827 +0000 UTC m=+0.163291101 container died b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 09:45:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8246606b99c2a9721351544cee96f3c6e33c7849defa6889295c2266df451d3-merged.mount: Deactivated successfully.
Oct 11 09:45:33 compute-0 podman[436072]: 2025-10-11 09:45:33.53971426 +0000 UTC m=+0.213313494 container remove b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 09:45:33 compute-0 systemd[1]: libpod-conmon-b0b6f21bb01e90854e2b69f65db58517b109092407b3caf39abde4590fefa772.scope: Deactivated successfully.
Oct 11 09:45:33 compute-0 podman[436114]: 2025-10-11 09:45:33.772864079 +0000 UTC m=+0.073129952 container create 4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:45:33 compute-0 systemd[1]: Started libpod-conmon-4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05.scope.
Oct 11 09:45:33 compute-0 podman[436114]: 2025-10-11 09:45:33.745383698 +0000 UTC m=+0.045649631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:45:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3087fbf324805acd7113ea9df5ae173f1cb8dc82114773e8ecc97632df0ef52c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3087fbf324805acd7113ea9df5ae173f1cb8dc82114773e8ecc97632df0ef52c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3087fbf324805acd7113ea9df5ae173f1cb8dc82114773e8ecc97632df0ef52c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3087fbf324805acd7113ea9df5ae173f1cb8dc82114773e8ecc97632df0ef52c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:45:33 compute-0 podman[436114]: 2025-10-11 09:45:33.886089235 +0000 UTC m=+0.186355128 container init 4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 09:45:33 compute-0 podman[436114]: 2025-10-11 09:45:33.899538712 +0000 UTC m=+0.199804585 container start 4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 09:45:33 compute-0 podman[436114]: 2025-10-11 09:45:33.903694489 +0000 UTC m=+0.203960392 container attach 4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 09:45:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:45:34 compute-0 great_grothendieck[436131]: {
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:         "osd_id": 2,
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:         "type": "bluestore"
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:     },
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:         "osd_id": 0,
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:         "type": "bluestore"
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:     },
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:         "osd_id": 1,
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:         "type": "bluestore"
Oct 11 09:45:34 compute-0 great_grothendieck[436131]:     }
Oct 11 09:45:34 compute-0 great_grothendieck[436131]: }
Oct 11 09:45:35 compute-0 systemd[1]: libpod-4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05.scope: Deactivated successfully.
Oct 11 09:45:35 compute-0 podman[436114]: 2025-10-11 09:45:35.013550538 +0000 UTC m=+1.313816391 container died 4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:45:35 compute-0 systemd[1]: libpod-4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05.scope: Consumed 1.117s CPU time.
Oct 11 09:45:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-3087fbf324805acd7113ea9df5ae173f1cb8dc82114773e8ecc97632df0ef52c-merged.mount: Deactivated successfully.
Oct 11 09:45:35 compute-0 podman[436114]: 2025-10-11 09:45:35.082881112 +0000 UTC m=+1.383146995 container remove 4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 09:45:35 compute-0 systemd[1]: libpod-conmon-4faf2a597fc1d67b1b336adda8f558aa60e590843767a333a282e04d51784e05.scope: Deactivated successfully.
Oct 11 09:45:35 compute-0 sudo[436008]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:45:35 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:45:35 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:45:35 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:45:35 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 89a4c598-f210-444e-a306-a898644536d9 does not exist
Oct 11 09:45:35 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 04d70ce9-3795-45ff-99be-5f842e78401e does not exist
Oct 11 09:45:35 compute-0 nova_compute[260935]: 2025-10-11 09:45:35.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:45:35 compute-0 nova_compute[260935]: 2025-10-11 09:45:35.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:45:35 compute-0 nova_compute[260935]: 2025-10-11 09:45:35.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5029 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:45:35 compute-0 nova_compute[260935]: 2025-10-11 09:45:35.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:45:35 compute-0 nova_compute[260935]: 2025-10-11 09:45:35.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:45:35 compute-0 sudo[436177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:45:35 compute-0 nova_compute[260935]: 2025-10-11 09:45:35.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:45:35 compute-0 sudo[436177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:35 compute-0 sudo[436177]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:35 compute-0 ceph-mon[74313]: pgmap v3128: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:35 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:45:35 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:45:35 compute-0 sudo[436202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:45:35 compute-0 sudo[436202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:45:35 compute-0 sudo[436202]: pam_unix(sudo:session): session closed for user root
Oct 11 09:45:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3129: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:37 compute-0 ceph-mon[74313]: pgmap v3129: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3130: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:39 compute-0 ceph-mon[74313]: pgmap v3130: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3131: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:45:40 compute-0 nova_compute[260935]: 2025-10-11 09:45:40.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:45:40 compute-0 nova_compute[260935]: 2025-10-11 09:45:40.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:45:40 compute-0 nova_compute[260935]: 2025-10-11 09:45:40.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:45:40 compute-0 nova_compute[260935]: 2025-10-11 09:45:40.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:45:40 compute-0 nova_compute[260935]: 2025-10-11 09:45:40.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:45:40 compute-0 nova_compute[260935]: 2025-10-11 09:45:40.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:45:41 compute-0 ceph-mon[74313]: pgmap v3131: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3132: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:43 compute-0 ceph-mon[74313]: pgmap v3132: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3133: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:45:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:45:44.907 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:45:44 compute-0 nova_compute[260935]: 2025-10-11 09:45:44.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:45:44 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:45:44.910 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:45:45 compute-0 nova_compute[260935]: 2025-10-11 09:45:45.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:45:45 compute-0 nova_compute[260935]: 2025-10-11 09:45:45.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:45:45 compute-0 ceph-mon[74313]: pgmap v3133: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3134: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:45 compute-0 podman[436227]: 2025-10-11 09:45:45.798733723 +0000 UTC m=+0.094064299 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 09:45:47 compute-0 ceph-mon[74313]: pgmap v3134: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3135: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:47 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:45:47.913 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:45:48 compute-0 ceph-mon[74313]: pgmap v3135: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3136: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:45:50 compute-0 nova_compute[260935]: 2025-10-11 09:45:50.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:45:50 compute-0 nova_compute[260935]: 2025-10-11 09:45:50.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:45:50 compute-0 nova_compute[260935]: 2025-10-11 09:45:50.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:45:50 compute-0 nova_compute[260935]: 2025-10-11 09:45:50.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:45:50 compute-0 nova_compute[260935]: 2025-10-11 09:45:50.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:45:50 compute-0 nova_compute[260935]: 2025-10-11 09:45:50.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:45:50 compute-0 ceph-mon[74313]: pgmap v3136: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3137: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.560651) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175951560696, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1677, "num_deletes": 255, "total_data_size": 2619277, "memory_usage": 2655152, "flush_reason": "Manual Compaction"}
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175951579056, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 2558437, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64011, "largest_seqno": 65687, "table_properties": {"data_size": 2550590, "index_size": 4725, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16275, "raw_average_key_size": 20, "raw_value_size": 2534873, "raw_average_value_size": 3168, "num_data_blocks": 210, "num_entries": 800, "num_filter_entries": 800, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175794, "oldest_key_time": 1760175794, "file_creation_time": 1760175951, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 18463 microseconds, and 11667 cpu microseconds.
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.579112) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 2558437 bytes OK
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.579138) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.581267) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.581288) EVENT_LOG_v1 {"time_micros": 1760175951581281, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.581312) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2611975, prev total WAL file size 2611975, number of live WAL files 2.
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.582720) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(2498KB)], [152(8480KB)]
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175951582764, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 11242634, "oldest_snapshot_seqno": -1}
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8359 keys, 9541598 bytes, temperature: kUnknown
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175951647583, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 9541598, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9489370, "index_size": 30227, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20933, "raw_key_size": 218409, "raw_average_key_size": 26, "raw_value_size": 9343990, "raw_average_value_size": 1117, "num_data_blocks": 1168, "num_entries": 8359, "num_filter_entries": 8359, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175951, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.647790) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 9541598 bytes
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.649187) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.3 rd, 147.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 8.3 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(8.1) write-amplify(3.7) OK, records in: 8883, records dropped: 524 output_compression: NoCompression
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.649207) EVENT_LOG_v1 {"time_micros": 1760175951649198, "job": 94, "event": "compaction_finished", "compaction_time_micros": 64878, "compaction_time_cpu_micros": 41619, "output_level": 6, "num_output_files": 1, "total_output_size": 9541598, "num_input_records": 8883, "num_output_records": 8359, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175951649914, "job": 94, "event": "table_file_deletion", "file_number": 154}
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175951651952, "job": 94, "event": "table_file_deletion", "file_number": 152}
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.582594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.652103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.652116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.652119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.652123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:45:51 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:45:51.652127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:45:51 compute-0 podman[436247]: 2025-10-11 09:45:51.813356729 +0000 UTC m=+0.101965561 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001)
Oct 11 09:45:52 compute-0 ceph-mon[74313]: pgmap v3137: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:52 compute-0 nova_compute[260935]: 2025-10-11 09:45:52.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:45:52 compute-0 nova_compute[260935]: 2025-10-11 09:45:52.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:45:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3138: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:54 compute-0 ceph-mon[74313]: pgmap v3138: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:54 compute-0 nova_compute[260935]: 2025-10-11 09:45:54.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:45:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:45:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:45:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:45:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:45:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:45:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:45:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:45:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:45:55
Oct 11 09:45:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:45:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:45:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.meta']
Oct 11 09:45:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:45:55 compute-0 nova_compute[260935]: 2025-10-11 09:45:55.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:45:55 compute-0 nova_compute[260935]: 2025-10-11 09:45:55.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:45:55 compute-0 nova_compute[260935]: 2025-10-11 09:45:55.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:45:55 compute-0 nova_compute[260935]: 2025-10-11 09:45:55.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:45:55 compute-0 nova_compute[260935]: 2025-10-11 09:45:55.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:45:55 compute-0 nova_compute[260935]: 2025-10-11 09:45:55.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:45:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3139: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:45:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:45:55 compute-0 nova_compute[260935]: 2025-10-11 09:45:55.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:45:55 compute-0 nova_compute[260935]: 2025-10-11 09:45:55.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:45:55 compute-0 podman[436267]: 2025-10-11 09:45:55.821036873 +0000 UTC m=+0.117909178 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 09:45:55 compute-0 podman[436268]: 2025-10-11 09:45:55.856514998 +0000 UTC m=+0.146629703 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 09:45:56 compute-0 nova_compute[260935]: 2025-10-11 09:45:56.190 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:45:56 compute-0 nova_compute[260935]: 2025-10-11 09:45:56.190 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:45:56 compute-0 nova_compute[260935]: 2025-10-11 09:45:56.190 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:45:56 compute-0 ceph-mon[74313]: pgmap v3139: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3140: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:57 compute-0 nova_compute[260935]: 2025-10-11 09:45:57.612 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [{"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:45:57 compute-0 nova_compute[260935]: 2025-10-11 09:45:57.635 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:45:57 compute-0 nova_compute[260935]: 2025-10-11 09:45:57.636 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:45:57 compute-0 nova_compute[260935]: 2025-10-11 09:45:57.637 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:45:58 compute-0 ceph-mon[74313]: pgmap v3140: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3141: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:45:59 compute-0 nova_compute[260935]: 2025-10-11 09:45:59.632 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:45:59 compute-0 nova_compute[260935]: 2025-10-11 09:45:59.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:45:59 compute-0 nova_compute[260935]: 2025-10-11 09:45:59.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:45:59 compute-0 nova_compute[260935]: 2025-10-11 09:45:59.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:45:59 compute-0 nova_compute[260935]: 2025-10-11 09:45:59.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:45:59 compute-0 nova_compute[260935]: 2025-10-11 09:45:59.750 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:45:59 compute-0 nova_compute[260935]: 2025-10-11 09:45:59.750 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:45:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:46:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:46:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1163292549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.262 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.378 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.379 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.379 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.385 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.385 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.434 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.434 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:46:00 compute-0 ceph-mon[74313]: pgmap v3141: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:00 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1163292549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.748 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.751 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2774MB free_disk=59.83066940307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.752 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:46:00 compute-0 nova_compute[260935]: 2025-10-11 09:46:00.753 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:46:01 compute-0 nova_compute[260935]: 2025-10-11 09:46:01.113 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:46:01 compute-0 nova_compute[260935]: 2025-10-11 09:46:01.113 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:46:01 compute-0 nova_compute[260935]: 2025-10-11 09:46:01.113 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:46:01 compute-0 nova_compute[260935]: 2025-10-11 09:46:01.114 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:46:01 compute-0 nova_compute[260935]: 2025-10-11 09:46:01.114 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:46:01 compute-0 nova_compute[260935]: 2025-10-11 09:46:01.427 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:46:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3142: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:46:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3322287583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:46:01 compute-0 nova_compute[260935]: 2025-10-11 09:46:01.923 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:46:01 compute-0 nova_compute[260935]: 2025-10-11 09:46:01.933 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:46:01 compute-0 nova_compute[260935]: 2025-10-11 09:46:01.960 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:46:01 compute-0 nova_compute[260935]: 2025-10-11 09:46:01.996 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:46:01 compute-0 nova_compute[260935]: 2025-10-11 09:46:01.997 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.244s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:46:02 compute-0 ceph-mon[74313]: pgmap v3142: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3322287583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:46:02 compute-0 sshd-session[436355]: Invalid user postgres from 165.232.82.252 port 41548
Oct 11 09:46:02 compute-0 sshd-session[436355]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:46:02 compute-0 sshd-session[436355]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:46:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3143: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:04 compute-0 ceph-mon[74313]: pgmap v3143: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:46:05 compute-0 nova_compute[260935]: 2025-10-11 09:46:05.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:46:05 compute-0 sshd-session[436355]: Failed password for invalid user postgres from 165.232.82.252 port 41548 ssh2
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3144: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002627377275021163 of space, bias 1.0, pg target 0.788213182506349 quantized to 32 (current 32)
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:46:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:46:05 compute-0 sshd-session[436355]: Connection closed by invalid user postgres 165.232.82.252 port 41548 [preauth]
Oct 11 09:46:05 compute-0 nova_compute[260935]: 2025-10-11 09:46:05.998 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:46:06 compute-0 nova_compute[260935]: 2025-10-11 09:46:05.999 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:46:06 compute-0 ceph-mon[74313]: pgmap v3144: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3145: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:08 compute-0 ceph-mon[74313]: pgmap v3145: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3146: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:46:10 compute-0 nova_compute[260935]: 2025-10-11 09:46:10.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:46:10 compute-0 ceph-mon[74313]: pgmap v3146: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3147: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:11 compute-0 nova_compute[260935]: 2025-10-11 09:46:11.700 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:46:12 compute-0 ceph-mon[74313]: pgmap v3147: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3148: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:14 compute-0 ceph-mon[74313]: pgmap v3148: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:46:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:15.241 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:46:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:15.242 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:46:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:15.243 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:46:15 compute-0 nova_compute[260935]: 2025-10-11 09:46:15.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3149: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:16 compute-0 ceph-mon[74313]: pgmap v3149: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:16 compute-0 podman[436357]: 2025-10-11 09:46:16.792445598 +0000 UTC m=+0.088547795 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 11 09:46:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3150: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Oct 11 09:46:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Oct 11 09:46:17 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Oct 11 09:46:18 compute-0 ceph-mon[74313]: pgmap v3150: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:18 compute-0 ceph-mon[74313]: osdmap e300: 3 total, 3 up, 3 in
Oct 11 09:46:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3152: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s
Oct 11 09:46:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.768808) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175979768911, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 472, "num_deletes": 257, "total_data_size": 419207, "memory_usage": 428192, "flush_reason": "Manual Compaction"}
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175979775439, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 415796, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65688, "largest_seqno": 66159, "table_properties": {"data_size": 413074, "index_size": 757, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6349, "raw_average_key_size": 18, "raw_value_size": 407601, "raw_average_value_size": 1178, "num_data_blocks": 34, "num_entries": 346, "num_filter_entries": 346, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175952, "oldest_key_time": 1760175952, "file_creation_time": 1760175979, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 6668 microseconds, and 3269 cpu microseconds.
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.775507) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 415796 bytes OK
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.775534) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.777439) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.777468) EVENT_LOG_v1 {"time_micros": 1760175979777458, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.777494) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 416378, prev total WAL file size 416378, number of live WAL files 2.
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.778348) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373632' seq:72057594037927935, type:22 .. '6C6F676D0033303134' seq:0, type:0; will stop at (end)
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(406KB)], [155(9317KB)]
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175979778411, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 9957394, "oldest_snapshot_seqno": -1}
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8180 keys, 9855243 bytes, temperature: kUnknown
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175979840059, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 9855243, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9803172, "index_size": 30518, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 215620, "raw_average_key_size": 26, "raw_value_size": 9659760, "raw_average_value_size": 1180, "num_data_blocks": 1178, "num_entries": 8180, "num_filter_entries": 8180, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760175979, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.840457) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 9855243 bytes
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.841792) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.3 rd, 159.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.1 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(47.6) write-amplify(23.7) OK, records in: 8705, records dropped: 525 output_compression: NoCompression
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.841872) EVENT_LOG_v1 {"time_micros": 1760175979841854, "job": 96, "event": "compaction_finished", "compaction_time_micros": 61751, "compaction_time_cpu_micros": 45330, "output_level": 6, "num_output_files": 1, "total_output_size": 9855243, "num_input_records": 8705, "num_output_records": 8180, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175979842237, "job": 96, "event": "table_file_deletion", "file_number": 157}
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760175979845736, "job": 96, "event": "table_file_deletion", "file_number": 155}
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.778224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.845801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.845807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.845808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.845810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:46:19 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:46:19.845811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:46:20 compute-0 nova_compute[260935]: 2025-10-11 09:46:20.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:20 compute-0 ceph-mon[74313]: pgmap v3152: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s
Oct 11 09:46:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3153: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s
Oct 11 09:46:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:22.216 162815 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 533d8cc8-a3c8-4aab-9b38-4b2a82418dcd with type ""
Oct 11 09:46:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:22.217 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:9b:26 10.100.0.3'], port_security=['fa:16:3e:ab:9b:26 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b75d8ded-515b-48ff-a6b6-28df88878996', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4686205-cbf0-4221-bc49-ebb890c4a59f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '11b44ad9193e4e43838d52056ccf413e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4b0cfb76-aebd-4cfc-96f5-00bacc345d65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9601b6e8-d9bc-46ca-99e8-33ec15f713e5, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=99e74dca-1d94-446c-ac4b-bc16dc028d2b) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:46:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:22.218 162815 INFO neutron.agent.ovn.metadata.agent [-] Port 99e74dca-1d94-446c-ac4b-bc16dc028d2b in datapath e4686205-cbf0-4221-bc49-ebb890c4a59f unbound from our chassis
Oct 11 09:46:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:22.219 162815 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e4686205-cbf0-4221-bc49-ebb890c4a59f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 11 09:46:22 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:22.220 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[14c3df98-3c96-4db9-8916-ba740a9c57a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:46:22 compute-0 ovn_controller[152945]: 2025-10-11T09:46:22Z|01730|binding|INFO|Removing iface tap99e74dca-1d ovn-installed in OVS
Oct 11 09:46:22 compute-0 ovn_controller[152945]: 2025-10-11T09:46:22Z|01731|binding|INFO|Removing lport 99e74dca-1d94-446c-ac4b-bc16dc028d2b ovn-installed in OVS
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.360 2 DEBUG nova.compute.manager [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Received event network-vif-deleted-99e74dca-1d94-446c-ac4b-bc16dc028d2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.361 2 INFO nova.compute.manager [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Neutron deleted interface 99e74dca-1d94-446c-ac4b-bc16dc028d2b; detaching it from the instance and deleting it from the info cache
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.361 2 DEBUG nova.network.neutron [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.398 2 DEBUG nova.objects.instance [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'system_metadata' on Instance uuid b75d8ded-515b-48ff-a6b6-28df88878996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.442 2 DEBUG nova.objects.instance [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'flavor' on Instance uuid b75d8ded-515b-48ff-a6b6-28df88878996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.472 2 DEBUG nova.virt.libvirt.vif [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:00:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-217285829',display_name='tempest-ServerRescueTestJSON-server-217285829',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-217285829',id=85,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:00:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-i0umpniu',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-1667208638-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:00:45Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=b75d8ded-515b-48ff-a6b6-28df88878996,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.473 2 DEBUG nova.network.os_vif_util [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.475 2 DEBUG nova.network.os_vif_util [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ab:9b:26,bridge_name='br-int',has_traffic_filtering=True,id=99e74dca-1d94-446c-ac4b-bc16dc028d2b,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99e74dca-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.483 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ab:9b:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap99e74dca-1d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.489 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ab:9b:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap99e74dca-1d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.495 2 DEBUG nova.virt.libvirt.driver [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Attempting to detach device tap99e74dca-1d from instance b75d8ded-515b-48ff-a6b6-28df88878996 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.496 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] detach device xml: <interface type="ethernet">
Oct 11 09:46:22 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:ab:9b:26"/>
Oct 11 09:46:22 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 09:46:22 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:46:22 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 09:46:22 compute-0 nova_compute[260935]:   <target dev="tap99e74dca-1d"/>
Oct 11 09:46:22 compute-0 nova_compute[260935]: </interface>
Oct 11 09:46:22 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.507 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ab:9b:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap99e74dca-1d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.512 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] No interface of type: <class 'nova.virt.libvirt.config.LibvirtConfigGuestInterface'> found in domain get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:261
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.513 2 INFO nova.virt.libvirt.driver [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully detached device tap99e74dca-1d from instance b75d8ded-515b-48ff-a6b6-28df88878996 from the persistent domain config.
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.513 2 DEBUG nova.virt.libvirt.driver [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] (1/8): Attempting to detach device tap99e74dca-1d with device alias net0 from instance b75d8ded-515b-48ff-a6b6-28df88878996 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.514 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] detach device xml: <interface type="ethernet">
Oct 11 09:46:22 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:ab:9b:26"/>
Oct 11 09:46:22 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 09:46:22 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:46:22 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 09:46:22 compute-0 nova_compute[260935]:   <target dev="tap99e74dca-1d"/>
Oct 11 09:46:22 compute-0 nova_compute[260935]: </interface>
Oct 11 09:46:22 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 09:46:22 compute-0 kernel: tap99e74dca-1d (unregistering): left promiscuous mode
Oct 11 09:46:22 compute-0 NetworkManager[44960]: <info>  [1760175982.5913] device (tap99e74dca-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.604 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Received event <DeviceRemovedEvent: 1760175982.6040714, b75d8ded-515b-48ff-a6b6-28df88878996 => net0> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.611 2 DEBUG nova.virt.libvirt.driver [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Start waiting for the detach event from libvirt for device tap99e74dca-1d with device alias net0 for instance b75d8ded-515b-48ff-a6b6-28df88878996 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.611 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ab:9b:26"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap99e74dca-1d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.616 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] No interface of type: <class 'nova.virt.libvirt.config.LibvirtConfigGuestInterface'> found in domain get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:261
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.616 2 INFO nova.virt.libvirt.driver [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully detached device tap99e74dca-1d from instance b75d8ded-515b-48ff-a6b6-28df88878996 from the live domain config.
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.617 2 DEBUG nova.virt.libvirt.vif [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:00:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-217285829',display_name='tempest-ServerRescueTestJSON-server-217285829',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-217285829',id=85,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:00:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='11b44ad9193e4e43838d52056ccf413e',ramdisk_id='',reservation_id='r-i0umpniu',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1667208638',owner_user_name='tempest-ServerRescueTestJSON-1667208638-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:00:45Z,user_data=None,user_id='df5a3c3a5d68473aa2e2950de45ebce1',uuid=b75d8ded-515b-48ff-a6b6-28df88878996,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.618 2 DEBUG nova.network.os_vif_util [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "address": "fa:16:3e:ab:9b:26", "network": {"id": "e4686205-cbf0-4221-bc49-ebb890c4a59f", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1553544744-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "11b44ad9193e4e43838d52056ccf413e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99e74dca-1d", "ovs_interfaceid": "99e74dca-1d94-446c-ac4b-bc16dc028d2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.619 2 DEBUG nova.network.os_vif_util [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ab:9b:26,bridge_name='br-int',has_traffic_filtering=True,id=99e74dca-1d94-446c-ac4b-bc16dc028d2b,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99e74dca-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.620 2 DEBUG os_vif [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:9b:26,bridge_name='br-int',has_traffic_filtering=True,id=99e74dca-1d94-446c-ac4b-bc16dc028d2b,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99e74dca-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.624 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99e74dca-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.633 2 INFO os_vif [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:9b:26,bridge_name='br-int',has_traffic_filtering=True,id=99e74dca-1d94-446c-ac4b-bc16dc028d2b,network=Network(e4686205-cbf0-4221-bc49-ebb890c4a59f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99e74dca-1d')
Oct 11 09:46:22 compute-0 nova_compute[260935]: 2025-10-11 09:46:22.634 2 DEBUG nova.virt.libvirt.guest [req-6716311a-3b5c-4fd2-819c-bb56b2544039 req-b82d8d70-5488-4154-8877-93a2e1550fbf e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:46:22 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:46:22 compute-0 nova_compute[260935]:   <nova:name>tempest-ServerRescueTestJSON-server-217285829</nova:name>
Oct 11 09:46:22 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 09:46:22</nova:creationTime>
Oct 11 09:46:22 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 09:46:22 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 09:46:22 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 09:46:22 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 09:46:22 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:46:22 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 09:46:22 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 09:46:22 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 09:46:22 compute-0 nova_compute[260935]:     <nova:user uuid="df5a3c3a5d68473aa2e2950de45ebce1">tempest-ServerRescueTestJSON-1667208638-project-member</nova:user>
Oct 11 09:46:22 compute-0 nova_compute[260935]:     <nova:project uuid="11b44ad9193e4e43838d52056ccf413e">tempest-ServerRescueTestJSON-1667208638</nova:project>
Oct 11 09:46:22 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 09:46:22 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:46:22 compute-0 nova_compute[260935]:   <nova:ports/>
Oct 11 09:46:22 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 09:46:22 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 09:46:22 compute-0 podman[436375]: 2025-10-11 09:46:22.734406906 +0000 UTC m=+0.108108593 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:46:22 compute-0 ceph-mon[74313]: pgmap v3153: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s
Oct 11 09:46:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.186 162815 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 95a75fdd-515e-49c3-9289-acf68824d04a with type ""
Oct 11 09:46:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.187 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:b5:ce 10.100.0.14'], port_security=['fa:16:3e:d3:b5:ce 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '52be16b4-343a-4fd4-9041-39069a1fde2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ba95f2514ce4fe4b00f245335eaeb01', 'neutron:revision_number': '4', 'neutron:security_group_ids': '462a25ad-d94b-4af1-ba28-eaa1a993c459', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22e9d786-1ab0-4026-8b17-f42f91b9280f, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=c992d6e3-ef59-42a0-80c5-109fe0c056cd) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:46:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.189 162815 INFO neutron.agent.ovn.metadata.agent [-] Port c992d6e3-ef59-42a0-80c5-109fe0c056cd in datapath 7c40ad6c-6e2c-4d8e-a70f-72c8786fa745 unbound from our chassis
Oct 11 09:46:23 compute-0 ovn_controller[152945]: 2025-10-11T09:46:23Z|01732|binding|INFO|Removing iface tapc992d6e3-ef ovn-installed in OVS
Oct 11 09:46:23 compute-0 ovn_controller[152945]: 2025-10-11T09:46:23Z|01733|binding|INFO|Removing lport c992d6e3-ef59-42a0-80c5-109fe0c056cd ovn-installed in OVS
Oct 11 09:46:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.191 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c40ad6c-6e2c-4d8e-a70f-72c8786fa745, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:46:23 compute-0 nova_compute[260935]: 2025-10-11 09:46:23.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.192 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[1869c0d0-2afe-411d-aea6-f86a4e16abcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:46:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.192 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745 namespace which is not needed anymore
Oct 11 09:46:23 compute-0 nova_compute[260935]: 2025-10-11 09:46:23.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:23 compute-0 neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745[344017]: [NOTICE]   (344035) : haproxy version is 2.8.14-c23fe91
Oct 11 09:46:23 compute-0 neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745[344017]: [NOTICE]   (344035) : path to executable is /usr/sbin/haproxy
Oct 11 09:46:23 compute-0 neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745[344017]: [WARNING]  (344035) : Exiting Master process...
Oct 11 09:46:23 compute-0 neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745[344017]: [ALERT]    (344035) : Current worker (344056) exited with code 143 (Terminated)
Oct 11 09:46:23 compute-0 neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745[344017]: [WARNING]  (344035) : All workers exited. Exiting... (0)
Oct 11 09:46:23 compute-0 systemd[1]: libpod-b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b.scope: Deactivated successfully.
Oct 11 09:46:23 compute-0 podman[436419]: 2025-10-11 09:46:23.389703255 +0000 UTC m=+0.066233059 container died b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 11 09:46:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d388700494a60a977f53213e9adaae75b3b656a3610a624cd88ae77b82267d55-merged.mount: Deactivated successfully.
Oct 11 09:46:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b-userdata-shm.mount: Deactivated successfully.
Oct 11 09:46:23 compute-0 podman[436419]: 2025-10-11 09:46:23.444131121 +0000 UTC m=+0.120660925 container cleanup b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 09:46:23 compute-0 systemd[1]: libpod-conmon-b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b.scope: Deactivated successfully.
Oct 11 09:46:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3154: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 11 09:46:23 compute-0 podman[436447]: 2025-10-11 09:46:23.549529297 +0000 UTC m=+0.069167291 container remove b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:46:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.559 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a03e63-eb9c-40f1-93a8-c8478135964f]: (4, ('Sat Oct 11 09:46:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745 (b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b)\nb5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b\nSat Oct 11 09:46:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745 (b5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b)\nb5511f41d5dad8ddef6656dea0a0356c4dbed9d9c7c06c1948c618f051a18d4b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:46:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.562 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[eaeb63a9-bdac-4664-91e2-1ea85335d9c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:46:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.564 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c40ad6c-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:46:23 compute-0 nova_compute[260935]: 2025-10-11 09:46:23.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:23 compute-0 kernel: tap7c40ad6c-60: left promiscuous mode
Oct 11 09:46:23 compute-0 nova_compute[260935]: 2025-10-11 09:46:23.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.600 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5f8caf6d-f180-41e2-b33f-fc07afa44364]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:46:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.636 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[612ae595-2d10-44e3-95eb-999b70860292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:46:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.639 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc6f881-76a1-4f5e-82d4-c8e5915b2c4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:46:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.660 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[40f9de06-20ad-4585-9611-397c5516652c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509451, 'reachable_time': 33449, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 436460, 'error': None, 'target': 'ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:46:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d7c40ad6c\x2d6e2c\x2d4d8e\x2da70f\x2d72c8786fa745.mount: Deactivated successfully.
Oct 11 09:46:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.666 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c40ad6c-6e2c-4d8e-a70f-72c8786fa745 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:46:23 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:23.666 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3f3bc2-25fa-496e-b816-731c807d727d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:46:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.167 162815 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port db04ea8e-872a-415b-b3b7-56f400f73ae3 with type ""
Oct 11 09:46:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.168 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:82:58 10.100.0.14'], port_security=['fa:16:3e:1e:82:58 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c176845c-89c0-4038-ba22-4ee79bd3ebfe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd33b48586acf4e6c8254f2a1213b001c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd2a92dd4-4e3a-46e0-b2c1-347b2512c6a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68f3e6c4-f574-4830-9133-912bb9cd6132, chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f6582ad0970>], logical_port=e61ae661-47c6-4317-a2c2-6e7a5b567441) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:46:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.169 162815 INFO neutron.agent.ovn.metadata.agent [-] Port e61ae661-47c6-4317-a2c2-6e7a5b567441 in datapath 164a664d-5e52-48b9-8b00-f73d0851a4cc unbound from our chassis
Oct 11 09:46:24 compute-0 ovn_controller[152945]: 2025-10-11T09:46:24Z|01734|binding|INFO|Removing iface tape61ae661-47 ovn-installed in OVS
Oct 11 09:46:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.171 162815 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 164a664d-5e52-48b9-8b00-f73d0851a4cc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 11 09:46:24 compute-0 ovn_controller[152945]: 2025-10-11T09:46:24Z|01735|binding|INFO|Removing lport e61ae661-47c6-4317-a2c2-6e7a5b567441 ovn-installed in OVS
Oct 11 09:46:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.172 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[5245a6f2-eb72-4ebc-bf4a-b26a3e493115]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.173 162815 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc namespace which is not needed anymore
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:24 compute-0 neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc[340003]: [NOTICE]   (340042) : haproxy version is 2.8.14-c23fe91
Oct 11 09:46:24 compute-0 neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc[340003]: [NOTICE]   (340042) : path to executable is /usr/sbin/haproxy
Oct 11 09:46:24 compute-0 neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc[340003]: [WARNING]  (340042) : Exiting Master process...
Oct 11 09:46:24 compute-0 neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc[340003]: [ALERT]    (340042) : Current worker (340059) exited with code 143 (Terminated)
Oct 11 09:46:24 compute-0 neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc[340003]: [WARNING]  (340042) : All workers exited. Exiting... (0)
Oct 11 09:46:24 compute-0 systemd[1]: libpod-398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829.scope: Deactivated successfully.
Oct 11 09:46:24 compute-0 podman[436478]: 2025-10-11 09:46:24.3757281 +0000 UTC m=+0.065233060 container died 398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:46:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829-userdata-shm.mount: Deactivated successfully.
Oct 11 09:46:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-88ff3d2b1d366c767d0b7303eead81accb675a1c28ccb112da1864efc018408e-merged.mount: Deactivated successfully.
Oct 11 09:46:24 compute-0 podman[436478]: 2025-10-11 09:46:24.429728495 +0000 UTC m=+0.119233465 container cleanup 398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:46:24 compute-0 systemd[1]: libpod-conmon-398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829.scope: Deactivated successfully.
Oct 11 09:46:24 compute-0 podman[436507]: 2025-10-11 09:46:24.537258911 +0000 UTC m=+0.071220449 container remove 398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:46:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.545 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[12d0cf05-3d34-4d4d-a2f1-408c22e25b23]: (4, ('Sat Oct 11 09:46:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc (398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829)\n398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829\nSat Oct 11 09:46:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc (398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829)\n398747a8394e8bb5e06e9a7a905ca846a31a89812749ecde06f10d3769819829\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:46:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.547 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[ba3ff83b-dae9-4418-b03e-e9967b3fbadc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:46:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.549 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap164a664d-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:24 compute-0 kernel: tap164a664d-50: left promiscuous mode
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.583 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[288ef417-1d70-4400-af9c-a25dd66cf52d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:46:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.610 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[a592c1c3-9377-452d-ae3e-0bb4f5abc26e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:46:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.612 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[7259a7d5-eabe-458d-b480-c82d4a80f558]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:46:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.637 276945 DEBUG oslo.privsep.daemon [-] privsep: reply[2388920a-bea2-4440-98df-7926654df254]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500896, 'reachable_time': 31085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 436524, 'error': None, 'target': 'ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:46:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d164a664d\x2d5e52\x2d48b9\x2d8b00\x2df73d0851a4cc.mount: Deactivated successfully.
Oct 11 09:46:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.643 162929 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-164a664d-5e52-48b9-8b00-f73d0851a4cc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 11 09:46:24 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:46:24.643 162929 DEBUG oslo.privsep.daemon [-] privsep: reply[9c98e3df-b89e-4ff8-83c5-aaa6d6284819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.703 2 DEBUG nova.compute.manager [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Received event network-vif-deleted-c992d6e3-ef59-42a0-80c5-109fe0c056cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.705 2 INFO nova.compute.manager [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Neutron deleted interface c992d6e3-ef59-42a0-80c5-109fe0c056cd; detaching it from the instance and deleting it from the info cache
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.705 2 DEBUG nova.network.neutron [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.738 2 DEBUG nova.objects.instance [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'system_metadata' on Instance uuid 52be16b4-343a-4fd4-9041-39069a1fde2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:46:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:46:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.768 2 DEBUG nova.objects.instance [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'flavor' on Instance uuid 52be16b4-343a-4fd4-9041-39069a1fde2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:46:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Oct 11 09:46:24 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.788 2 DEBUG nova.virt.libvirt.vif [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:00:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1975294956',display_name='tempest-ServerActionsTestJSON-server-1975294956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1975294956',id=86,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCldfXJbbZ7d23yZCI3wgMskc62RJ3W+h+Bujyoq+l99HIouQoz2ogsrxnyNOy7JQrwu2S23uZGGnM/6kJAmk9ewWoiaLMeddrGku0Zod7LFIlcm/esb5hA9IKL9pBW3cA==',key_name='tempest-keypair-1522598924',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:00:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0ba95f2514ce4fe4b00f245335eaeb01',ramdisk_id='',reservation_id='r-4yee06pj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-332398676',owner_user_name='tempest-ServerActionsTestJSON-332398676-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:00:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2e604a7f01ba42f8a2f2a90bf14cafba',uuid=52be16b4-343a-4fd4-9041-39069a1fde2a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.788 2 DEBUG nova.network.os_vif_util [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.789 2 DEBUG nova.network.os_vif_util [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:b5:ce,bridge_name='br-int',has_traffic_filtering=True,id=c992d6e3-ef59-42a0-80c5-109fe0c056cd,network=Network(7c40ad6c-6e2c-4d8e-a70f-72c8786fa745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc992d6e3-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.793 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d3:b5:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc992d6e3-ef"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.797 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d3:b5:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc992d6e3-ef"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.801 2 DEBUG nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Attempting to detach device tapc992d6e3-ef from instance 52be16b4-343a-4fd4-9041-39069a1fde2a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.802 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] detach device xml: <interface type="ethernet">
Oct 11 09:46:24 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:d3:b5:ce"/>
Oct 11 09:46:24 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 09:46:24 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:46:24 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 09:46:24 compute-0 nova_compute[260935]:   <target dev="tapc992d6e3-ef"/>
Oct 11 09:46:24 compute-0 nova_compute[260935]: </interface>
Oct 11 09:46:24 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 09:46:24 compute-0 ceph-mon[74313]: pgmap v3154: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 11 09:46:24 compute-0 ceph-mon[74313]: osdmap e301: 3 total, 3 up, 3 in
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.822 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d3:b5:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc992d6e3-ef"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.826 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] No interface of type: <class 'nova.virt.libvirt.config.LibvirtConfigGuestInterface'> found in domain get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:261
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.826 2 INFO nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully detached device tapc992d6e3-ef from instance 52be16b4-343a-4fd4-9041-39069a1fde2a from the persistent domain config.
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.827 2 DEBUG nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] (1/8): Attempting to detach device tapc992d6e3-ef with device alias net0 from instance 52be16b4-343a-4fd4-9041-39069a1fde2a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.828 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] detach device xml: <interface type="ethernet">
Oct 11 09:46:24 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:d3:b5:ce"/>
Oct 11 09:46:24 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 09:46:24 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:46:24 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 09:46:24 compute-0 nova_compute[260935]:   <target dev="tapc992d6e3-ef"/>
Oct 11 09:46:24 compute-0 nova_compute[260935]: </interface>
Oct 11 09:46:24 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 09:46:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:46:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:46:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:46:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:46:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:46:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:46:24 compute-0 kernel: tapc992d6e3-ef (unregistering): left promiscuous mode
Oct 11 09:46:24 compute-0 NetworkManager[44960]: <info>  [1760175984.9593] device (tapc992d6e3-ef): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.979 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Received event <DeviceRemovedEvent: 1760175984.9790182, 52be16b4-343a-4fd4-9041-39069a1fde2a => net0> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.982 2 DEBUG nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Start waiting for the detach event from libvirt for device tapc992d6e3-ef with device alias net0 for instance 52be16b4-343a-4fd4-9041-39069a1fde2a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.983 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d3:b5:ce"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc992d6e3-ef"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.986 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] No interface of type: <class 'nova.virt.libvirt.config.LibvirtConfigGuestInterface'> found in domain get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:261
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.987 2 INFO nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully detached device tapc992d6e3-ef from instance 52be16b4-343a-4fd4-9041-39069a1fde2a from the live domain config.
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.988 2 DEBUG nova.virt.libvirt.vif [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T09:00:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1975294956',display_name='tempest-ServerActionsTestJSON-server-1975294956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1975294956',id=86,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCldfXJbbZ7d23yZCI3wgMskc62RJ3W+h+Bujyoq+l99HIouQoz2ogsrxnyNOy7JQrwu2S23uZGGnM/6kJAmk9ewWoiaLMeddrGku0Zod7LFIlcm/esb5hA9IKL9pBW3cA==',key_name='tempest-keypair-1522598924',keypairs=<?>,launch_index=0,launched_at=2025-10-11T09:00:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0ba95f2514ce4fe4b00f245335eaeb01',ramdisk_id='',reservation_id='r-4yee06pj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-332398676',owner_user_name='tempest-ServerActionsTestJSON-332398676-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T09:00:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2e604a7f01ba42f8a2f2a90bf14cafba',uuid=52be16b4-343a-4fd4-9041-39069a1fde2a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.989 2 DEBUG nova.network.os_vif_util [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "address": "fa:16:3e:d3:b5:ce", "network": {"id": "7c40ad6c-6e2c-4d8e-a70f-72c8786fa745", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1855455514-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ba95f2514ce4fe4b00f245335eaeb01", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc992d6e3-ef", "ovs_interfaceid": "c992d6e3-ef59-42a0-80c5-109fe0c056cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.990 2 DEBUG nova.network.os_vif_util [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:b5:ce,bridge_name='br-int',has_traffic_filtering=True,id=c992d6e3-ef59-42a0-80c5-109fe0c056cd,network=Network(7c40ad6c-6e2c-4d8e-a70f-72c8786fa745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc992d6e3-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.990 2 DEBUG os_vif [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:b5:ce,bridge_name='br-int',has_traffic_filtering=True,id=c992d6e3-ef59-42a0-80c5-109fe0c056cd,network=Network(7c40ad6c-6e2c-4d8e-a70f-72c8786fa745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc992d6e3-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.992 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc992d6e3-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:46:24 compute-0 nova_compute[260935]: 2025-10-11 09:46:24.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:25 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.004 2 INFO os_vif [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:b5:ce,bridge_name='br-int',has_traffic_filtering=True,id=c992d6e3-ef59-42a0-80c5-109fe0c056cd,network=Network(7c40ad6c-6e2c-4d8e-a70f-72c8786fa745),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc992d6e3-ef')
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.005 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <nova:name>tempest-ServerActionsTestJSON-server-1975294956</nova:name>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 09:46:25</nova:creationTime>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 09:46:25 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 09:46:25 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 09:46:25 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 09:46:25 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:46:25 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 09:46:25 compute-0 nova_compute[260935]:     <nova:user uuid="2e604a7f01ba42f8a2f2a90bf14cafba">tempest-ServerActionsTestJSON-332398676-project-member</nova:user>
Oct 11 09:46:25 compute-0 nova_compute[260935]:     <nova:project uuid="0ba95f2514ce4fe4b00f245335eaeb01">tempest-ServerActionsTestJSON-332398676</nova:project>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <nova:ports/>
Oct 11 09:46:25 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 09:46:25 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.008 2 DEBUG nova.compute.manager [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Received event network-vif-deleted-e61ae661-47c6-4317-a2c2-6e7a5b567441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.008 2 INFO nova.compute.manager [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Neutron deleted interface e61ae661-47c6-4317-a2c2-6e7a5b567441; detaching it from the instance and deleting it from the info cache
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.009 2 DEBUG nova.network.neutron [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.030 2 DEBUG nova.objects.instance [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'system_metadata' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.075 2 DEBUG nova.objects.instance [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Lazy-loading 'flavor' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.111 2 DEBUG nova.virt.libvirt.vif [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:59:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1251385331',display_name='tempest-ServerActionsTestOtherA-server-1251385331',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1251385331',id=82,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0zd9vOn1MoClsAHXDeWFPP5kO+VNofuvu89K7qYloOUWW4N93cF9QhhUyaB1pmFmHZjCIiPEyZ5cYnUAirQfqKcPyMnKcAjeovnFjGTt2K03Doe1dtzSfmqbvVdrCpkQ==',key_name='tempest-keypair-168857508',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:59:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d33b48586acf4e6c8254f2a1213b001c',ramdisk_id='',reservation_id='r-vuepvjtl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-620268335',owner_user_name='tempest-ServerActionsTestOtherA-620268335-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:59:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d211063ed874837bead2e13898b31d4',uuid=c176845c-89c0-4038-ba22-4ee79bd3ebfe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.112 2 DEBUG nova.network.os_vif_util [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.113 2 DEBUG nova.network.os_vif_util [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1e:82:58,bridge_name='br-int',has_traffic_filtering=True,id=e61ae661-47c6-4317-a2c2-6e7a5b567441,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ae661-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.117 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:82:58"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape61ae661-47"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.120 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:82:58"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape61ae661-47"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.124 2 DEBUG nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Attempting to detach device tape61ae661-47 from instance c176845c-89c0-4038-ba22-4ee79bd3ebfe from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.125 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] detach device xml: <interface type="ethernet">
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:1e:82:58"/>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <target dev="tape61ae661-47"/>
Oct 11 09:46:25 compute-0 nova_compute[260935]: </interface>
Oct 11 09:46:25 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.133 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:82:58"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape61ae661-47"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.136 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] No interface of type: <class 'nova.virt.libvirt.config.LibvirtConfigGuestInterface'> found in domain get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:261
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.137 2 INFO nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully detached device tape61ae661-47 from instance c176845c-89c0-4038-ba22-4ee79bd3ebfe from the persistent domain config.
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.137 2 DEBUG nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] (1/8): Attempting to detach device tape61ae661-47 with device alias net0 from instance c176845c-89c0-4038-ba22-4ee79bd3ebfe from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.138 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] detach device xml: <interface type="ethernet">
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <mac address="fa:16:3e:1e:82:58"/>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <model type="virtio"/>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <driver name="vhost" rx_queue_size="512"/>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <mtu size="1442"/>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <target dev="tape61ae661-47"/>
Oct 11 09:46:25 compute-0 nova_compute[260935]: </interface>
Oct 11 09:46:25 compute-0 nova_compute[260935]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 11 09:46:25 compute-0 kernel: tape61ae661-47 (unregistering): left promiscuous mode
Oct 11 09:46:25 compute-0 NetworkManager[44960]: <info>  [1760175985.2465] device (tape61ae661-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.302 2 DEBUG nova.virt.libvirt.driver [None req-b478f727-26ad-499f-b583-9ffa401a6306 - - - - - -] Received event <DeviceRemovedEvent: 1760175985.30203, c176845c-89c0-4038-ba22-4ee79bd3ebfe => net0> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.305 2 DEBUG nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Start waiting for the detach event from libvirt for device tape61ae661-47 with device alias net0 for instance c176845c-89c0-4038-ba22-4ee79bd3ebfe _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.305 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:82:58"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape61ae661-47"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.309 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] No interface of type: <class 'nova.virt.libvirt.config.LibvirtConfigGuestInterface'> found in domain get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:261
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.309 2 INFO nova.virt.libvirt.driver [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully detached device tape61ae661-47 from instance c176845c-89c0-4038-ba22-4ee79bd3ebfe from the live domain config.
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.311 2 DEBUG nova.virt.libvirt.vif [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-11T08:59:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1251385331',display_name='tempest-ServerActionsTestOtherA-server-1251385331',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1251385331',id=82,image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0zd9vOn1MoClsAHXDeWFPP5kO+VNofuvu89K7qYloOUWW4N93cF9QhhUyaB1pmFmHZjCIiPEyZ5cYnUAirQfqKcPyMnKcAjeovnFjGTt2K03Doe1dtzSfmqbvVdrCpkQ==',key_name='tempest-keypair-168857508',keypairs=<?>,launch_index=0,launched_at=2025-10-11T08:59:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d33b48586acf4e6c8254f2a1213b001c',ramdisk_id='',reservation_id='r-vuepvjtl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='03f2fef0-11c0-48e1-b3a0-3e02d898739e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-620268335',owner_user_name='tempest-ServerActionsTestOtherA-620268335-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-11T08:59:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d211063ed874837bead2e13898b31d4',uuid=c176845c-89c0-4038-ba22-4ee79bd3ebfe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.311 2 DEBUG nova.network.os_vif_util [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converting VIF {"id": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "address": "fa:16:3e:1e:82:58", "network": {"id": "164a664d-5e52-48b9-8b00-f73d0851a4cc", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-311778958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d33b48586acf4e6c8254f2a1213b001c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ae661-47", "ovs_interfaceid": "e61ae661-47c6-4317-a2c2-6e7a5b567441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.312 2 DEBUG nova.network.os_vif_util [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1e:82:58,bridge_name='br-int',has_traffic_filtering=True,id=e61ae661-47c6-4317-a2c2-6e7a5b567441,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ae661-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.313 2 DEBUG os_vif [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:82:58,bridge_name='br-int',has_traffic_filtering=True,id=e61ae661-47c6-4317-a2c2-6e7a5b567441,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ae661-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.315 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape61ae661-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.324 2 INFO os_vif [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:82:58,bridge_name='br-int',has_traffic_filtering=True,id=e61ae661-47c6-4317-a2c2-6e7a5b567441,network=Network(164a664d-5e52-48b9-8b00-f73d0851a4cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ae661-47')
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.325 2 DEBUG nova.virt.libvirt.guest [req-dc883725-baae-4810-bd6b-b038fdfe7004 req-b4f07a4c-37fa-4a07-9bef-470a23537838 e994ece710834cd2b9436af53f799f5c 46ab2d9d7bea4bd2bc26f026c0541277 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <nova:name>tempest-ServerActionsTestOtherA-server-1251385331</nova:name>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <nova:creationTime>2025-10-11 09:46:25</nova:creationTime>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <nova:flavor name="m1.nano">
Oct 11 09:46:25 compute-0 nova_compute[260935]:     <nova:memory>128</nova:memory>
Oct 11 09:46:25 compute-0 nova_compute[260935]:     <nova:disk>1</nova:disk>
Oct 11 09:46:25 compute-0 nova_compute[260935]:     <nova:swap>0</nova:swap>
Oct 11 09:46:25 compute-0 nova_compute[260935]:     <nova:ephemeral>0</nova:ephemeral>
Oct 11 09:46:25 compute-0 nova_compute[260935]:     <nova:vcpus>1</nova:vcpus>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   </nova:flavor>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <nova:owner>
Oct 11 09:46:25 compute-0 nova_compute[260935]:     <nova:user uuid="8d211063ed874837bead2e13898b31d4">tempest-ServerActionsTestOtherA-620268335-project-member</nova:user>
Oct 11 09:46:25 compute-0 nova_compute[260935]:     <nova:project uuid="d33b48586acf4e6c8254f2a1213b001c">tempest-ServerActionsTestOtherA-620268335</nova:project>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   </nova:owner>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <nova:root type="image" uuid="03f2fef0-11c0-48e1-b3a0-3e02d898739e"/>
Oct 11 09:46:25 compute-0 nova_compute[260935]:   <nova:ports/>
Oct 11 09:46:25 compute-0 nova_compute[260935]: </nova:instance>
Oct 11 09:46:25 compute-0 nova_compute[260935]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 11 09:46:25 compute-0 nova_compute[260935]: 2025-10-11 09:46:25.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3156: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 11 09:46:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:46:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1058496683' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:46:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:46:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1058496683' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:46:26 compute-0 podman[436533]: 2025-10-11 09:46:26.791691602 +0000 UTC m=+0.090226742 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 11 09:46:26 compute-0 ceph-mon[74313]: pgmap v3156: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 11 09:46:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1058496683' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:46:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1058496683' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:46:26 compute-0 podman[436534]: 2025-10-11 09:46:26.82123191 +0000 UTC m=+0.121980242 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 09:46:27 compute-0 nova_compute[260935]: 2025-10-11 09:46:27.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3157: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Oct 11 09:46:28 compute-0 ceph-mon[74313]: pgmap v3157: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Oct 11 09:46:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3158: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 23 op/s
Oct 11 09:46:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:46:30 compute-0 nova_compute[260935]: 2025-10-11 09:46:30.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:30 compute-0 nova_compute[260935]: 2025-10-11 09:46:30.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:30 compute-0 ceph-mon[74313]: pgmap v3158: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 23 op/s
Oct 11 09:46:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3159: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 23 op/s
Oct 11 09:46:32 compute-0 nova_compute[260935]: 2025-10-11 09:46:32.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:46:32 compute-0 ceph-mon[74313]: pgmap v3159: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 23 op/s
Oct 11 09:46:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3160: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 5.1 KiB/s wr, 1 op/s
Oct 11 09:46:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:46:34 compute-0 ceph-mon[74313]: pgmap v3160: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 5.1 KiB/s wr, 1 op/s
Oct 11 09:46:35 compute-0 nova_compute[260935]: 2025-10-11 09:46:35.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:35 compute-0 sudo[436579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:46:35 compute-0 nova_compute[260935]: 2025-10-11 09:46:35.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:35 compute-0 sudo[436579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:35 compute-0 sudo[436579]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3161: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.7 KiB/s wr, 1 op/s
Oct 11 09:46:35 compute-0 sudo[436604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:46:35 compute-0 sudo[436604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:35 compute-0 sudo[436604]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:35 compute-0 sudo[436629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:46:35 compute-0 sudo[436629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:35 compute-0 sudo[436629]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:35 compute-0 sudo[436654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:46:35 compute-0 sudo[436654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:36 compute-0 sudo[436654]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 09:46:36 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 09:46:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:46:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:46:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:46:36 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:46:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:46:36 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:46:36 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 2bf8c8b5-cece-4da6-b498-8e9750631a51 does not exist
Oct 11 09:46:36 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a7174808-c809-4e36-85b8-e380f1b44930 does not exist
Oct 11 09:46:36 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 277b4765-0a57-41eb-bbd6-de53a8b84957 does not exist
Oct 11 09:46:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:46:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:46:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:46:36 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:46:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:46:36 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:46:36 compute-0 sudo[436712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:46:36 compute-0 sudo[436712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:36 compute-0 sudo[436712]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:36 compute-0 sudo[436737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:46:36 compute-0 sudo[436737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:36 compute-0 sudo[436737]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:36 compute-0 sudo[436762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:46:36 compute-0 sudo[436762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:36 compute-0 sudo[436762]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:36 compute-0 sudo[436787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:46:36 compute-0 sudo[436787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:36 compute-0 ceph-mon[74313]: pgmap v3161: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.7 KiB/s wr, 1 op/s
Oct 11 09:46:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 09:46:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:46:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:46:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:46:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:46:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:46:36 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:46:36 compute-0 podman[436854]: 2025-10-11 09:46:36.952065244 +0000 UTC m=+0.043281105 container create b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 09:46:36 compute-0 systemd[1]: Started libpod-conmon-b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9.scope.
Oct 11 09:46:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:46:37 compute-0 podman[436854]: 2025-10-11 09:46:36.931100985 +0000 UTC m=+0.022316856 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:46:37 compute-0 podman[436854]: 2025-10-11 09:46:37.034553977 +0000 UTC m=+0.125769858 container init b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:46:37 compute-0 podman[436854]: 2025-10-11 09:46:37.049216279 +0000 UTC m=+0.140432170 container start b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 09:46:37 compute-0 podman[436854]: 2025-10-11 09:46:37.053641673 +0000 UTC m=+0.144857524 container attach b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 09:46:37 compute-0 sharp_diffie[436871]: 167 167
Oct 11 09:46:37 compute-0 podman[436854]: 2025-10-11 09:46:37.058207051 +0000 UTC m=+0.149422942 container died b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:46:37 compute-0 systemd[1]: libpod-b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9.scope: Deactivated successfully.
Oct 11 09:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-67695c25e16eeddc482418ebc79b3529d7f9d41032d57f06d5445143d22075a7-merged.mount: Deactivated successfully.
Oct 11 09:46:37 compute-0 podman[436854]: 2025-10-11 09:46:37.120962991 +0000 UTC m=+0.212178872 container remove b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:46:37 compute-0 systemd[1]: libpod-conmon-b587ab3c7b5e1bdfe334fe91454e2797acf807010cdee055eaa8f55c066c16b9.scope: Deactivated successfully.
Oct 11 09:46:37 compute-0 podman[436896]: 2025-10-11 09:46:37.390264185 +0000 UTC m=+0.070943911 container create 42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 09:46:37 compute-0 systemd[1]: Started libpod-conmon-42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d.scope.
Oct 11 09:46:37 compute-0 podman[436896]: 2025-10-11 09:46:37.362082675 +0000 UTC m=+0.042762471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:46:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18461993abf1215889938bae614f13ec82aa9b777e9d1cccab6d63a36bb2cf0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18461993abf1215889938bae614f13ec82aa9b777e9d1cccab6d63a36bb2cf0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18461993abf1215889938bae614f13ec82aa9b777e9d1cccab6d63a36bb2cf0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18461993abf1215889938bae614f13ec82aa9b777e9d1cccab6d63a36bb2cf0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18461993abf1215889938bae614f13ec82aa9b777e9d1cccab6d63a36bb2cf0d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:46:37 compute-0 podman[436896]: 2025-10-11 09:46:37.508321327 +0000 UTC m=+0.189001043 container init 42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:46:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3162: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.2 KiB/s wr, 1 op/s
Oct 11 09:46:37 compute-0 podman[436896]: 2025-10-11 09:46:37.516943939 +0000 UTC m=+0.197623645 container start 42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 09:46:37 compute-0 podman[436896]: 2025-10-11 09:46:37.521043764 +0000 UTC m=+0.201723480 container attach 42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:46:38 compute-0 hardcore_mclean[436913]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:46:38 compute-0 hardcore_mclean[436913]: --> relative data size: 1.0
Oct 11 09:46:38 compute-0 hardcore_mclean[436913]: --> All data devices are unavailable
Oct 11 09:46:38 compute-0 systemd[1]: libpod-42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d.scope: Deactivated successfully.
Oct 11 09:46:38 compute-0 systemd[1]: libpod-42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d.scope: Consumed 1.088s CPU time.
Oct 11 09:46:38 compute-0 podman[436896]: 2025-10-11 09:46:38.662357964 +0000 UTC m=+1.343037730 container died 42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:46:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-18461993abf1215889938bae614f13ec82aa9b777e9d1cccab6d63a36bb2cf0d-merged.mount: Deactivated successfully.
Oct 11 09:46:38 compute-0 podman[436896]: 2025-10-11 09:46:38.738954703 +0000 UTC m=+1.419634399 container remove 42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:46:38 compute-0 systemd[1]: libpod-conmon-42aeb7065c9995a6cd0f5e2f5e89b7af8344e5a12cdee6a1ba6a2c1b6293998d.scope: Deactivated successfully.
Oct 11 09:46:38 compute-0 sudo[436787]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:38 compute-0 ceph-mon[74313]: pgmap v3162: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.2 KiB/s wr, 1 op/s
Oct 11 09:46:38 compute-0 sudo[436958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:46:38 compute-0 sudo[436958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:38 compute-0 sudo[436958]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:38 compute-0 sudo[436983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:46:38 compute-0 sudo[436983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:38 compute-0 sudo[436983]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:38 compute-0 sshd-session[436942]: Invalid user postgres from 165.232.82.252 port 43716
Oct 11 09:46:39 compute-0 sudo[437008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:46:39 compute-0 sudo[437008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:39 compute-0 sudo[437008]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:39 compute-0 sshd-session[436942]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:46:39 compute-0 sshd-session[436942]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:46:39 compute-0 sudo[437033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:46:39 compute-0 sudo[437033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3163: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.2 KiB/s wr, 1 op/s
Oct 11 09:46:39 compute-0 podman[437099]: 2025-10-11 09:46:39.51897011 +0000 UTC m=+0.049198521 container create 2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:46:39 compute-0 systemd[1]: Started libpod-conmon-2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a.scope.
Oct 11 09:46:39 compute-0 podman[437099]: 2025-10-11 09:46:39.494428242 +0000 UTC m=+0.024656683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:46:39 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:46:39 compute-0 podman[437099]: 2025-10-11 09:46:39.609689565 +0000 UTC m=+0.139918056 container init 2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:46:39 compute-0 podman[437099]: 2025-10-11 09:46:39.619044337 +0000 UTC m=+0.149272768 container start 2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:46:39 compute-0 podman[437099]: 2025-10-11 09:46:39.623037709 +0000 UTC m=+0.153266190 container attach 2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 09:46:39 compute-0 great_shannon[437115]: 167 167
Oct 11 09:46:39 compute-0 systemd[1]: libpod-2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a.scope: Deactivated successfully.
Oct 11 09:46:39 compute-0 podman[437099]: 2025-10-11 09:46:39.627778882 +0000 UTC m=+0.158007323 container died 2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:46:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-49f3ac8a2182a5e38f1dd3582a3caae88b7d7f02f7eb5f65f71dc65e85912bf0-merged.mount: Deactivated successfully.
Oct 11 09:46:39 compute-0 podman[437099]: 2025-10-11 09:46:39.684996077 +0000 UTC m=+0.215224508 container remove 2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shannon, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 09:46:39 compute-0 systemd[1]: libpod-conmon-2d61e9cadf08424f922757b06a8ac361be8e2e92625144fa26b336276852155a.scope: Deactivated successfully.
Oct 11 09:46:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:46:39 compute-0 podman[437139]: 2025-10-11 09:46:39.938858268 +0000 UTC m=+0.058133272 container create 19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 09:46:39 compute-0 systemd[1]: Started libpod-conmon-19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092.scope.
Oct 11 09:46:40 compute-0 podman[437139]: 2025-10-11 09:46:39.917116148 +0000 UTC m=+0.036391142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:46:40 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df826b0d6e11cb2cfeadf0856aa676036df58f953968c08d8421a4499862de69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df826b0d6e11cb2cfeadf0856aa676036df58f953968c08d8421a4499862de69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df826b0d6e11cb2cfeadf0856aa676036df58f953968c08d8421a4499862de69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df826b0d6e11cb2cfeadf0856aa676036df58f953968c08d8421a4499862de69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:46:40 compute-0 podman[437139]: 2025-10-11 09:46:40.078809683 +0000 UTC m=+0.198084717 container init 19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 09:46:40 compute-0 podman[437139]: 2025-10-11 09:46:40.091275563 +0000 UTC m=+0.210550537 container start 19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:46:40 compute-0 podman[437139]: 2025-10-11 09:46:40.095239974 +0000 UTC m=+0.214515028 container attach 19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 11 09:46:40 compute-0 nova_compute[260935]: 2025-10-11 09:46:40.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:40 compute-0 nova_compute[260935]: 2025-10-11 09:46:40.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:40 compute-0 ceph-mon[74313]: pgmap v3163: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.2 KiB/s wr, 1 op/s
Oct 11 09:46:40 compute-0 sad_cori[437156]: {
Oct 11 09:46:40 compute-0 sad_cori[437156]:     "0": [
Oct 11 09:46:40 compute-0 sad_cori[437156]:         {
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "devices": [
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "/dev/loop3"
Oct 11 09:46:40 compute-0 sad_cori[437156]:             ],
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "lv_name": "ceph_lv0",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "lv_size": "21470642176",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "name": "ceph_lv0",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "tags": {
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.cluster_name": "ceph",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.crush_device_class": "",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.encrypted": "0",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.osd_id": "0",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.type": "block",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.vdo": "0"
Oct 11 09:46:40 compute-0 sad_cori[437156]:             },
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "type": "block",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "vg_name": "ceph_vg0"
Oct 11 09:46:40 compute-0 sad_cori[437156]:         }
Oct 11 09:46:40 compute-0 sad_cori[437156]:     ],
Oct 11 09:46:40 compute-0 sad_cori[437156]:     "1": [
Oct 11 09:46:40 compute-0 sad_cori[437156]:         {
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "devices": [
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "/dev/loop4"
Oct 11 09:46:40 compute-0 sad_cori[437156]:             ],
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "lv_name": "ceph_lv1",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "lv_size": "21470642176",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "name": "ceph_lv1",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "tags": {
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.cluster_name": "ceph",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.crush_device_class": "",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.encrypted": "0",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.osd_id": "1",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.type": "block",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.vdo": "0"
Oct 11 09:46:40 compute-0 sad_cori[437156]:             },
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "type": "block",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "vg_name": "ceph_vg1"
Oct 11 09:46:40 compute-0 sad_cori[437156]:         }
Oct 11 09:46:40 compute-0 sad_cori[437156]:     ],
Oct 11 09:46:40 compute-0 sad_cori[437156]:     "2": [
Oct 11 09:46:40 compute-0 sad_cori[437156]:         {
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "devices": [
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "/dev/loop5"
Oct 11 09:46:40 compute-0 sad_cori[437156]:             ],
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "lv_name": "ceph_lv2",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "lv_size": "21470642176",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "name": "ceph_lv2",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "tags": {
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.cluster_name": "ceph",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.crush_device_class": "",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.encrypted": "0",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.osd_id": "2",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.type": "block",
Oct 11 09:46:40 compute-0 sad_cori[437156]:                 "ceph.vdo": "0"
Oct 11 09:46:40 compute-0 sad_cori[437156]:             },
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "type": "block",
Oct 11 09:46:40 compute-0 sad_cori[437156]:             "vg_name": "ceph_vg2"
Oct 11 09:46:40 compute-0 sad_cori[437156]:         }
Oct 11 09:46:40 compute-0 sad_cori[437156]:     ]
Oct 11 09:46:40 compute-0 sad_cori[437156]: }
Oct 11 09:46:40 compute-0 systemd[1]: libpod-19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092.scope: Deactivated successfully.
Oct 11 09:46:40 compute-0 podman[437139]: 2025-10-11 09:46:40.941019036 +0000 UTC m=+1.060294030 container died 19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:46:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-df826b0d6e11cb2cfeadf0856aa676036df58f953968c08d8421a4499862de69-merged.mount: Deactivated successfully.
Oct 11 09:46:41 compute-0 podman[437139]: 2025-10-11 09:46:41.015142135 +0000 UTC m=+1.134417119 container remove 19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:46:41 compute-0 systemd[1]: libpod-conmon-19ee91ac38d9c46dbfdb5ada7add135b45dafad5dca85326bcf6b9813d7da092.scope: Deactivated successfully.
Oct 11 09:46:41 compute-0 sudo[437033]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:41 compute-0 sudo[437177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:46:41 compute-0 sudo[437177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:41 compute-0 sudo[437177]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:41 compute-0 sudo[437202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:46:41 compute-0 sudo[437202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:41 compute-0 sudo[437202]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:41 compute-0 sudo[437227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:46:41 compute-0 sudo[437227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:41 compute-0 sudo[437227]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:41 compute-0 sudo[437252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:46:41 compute-0 sudo[437252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3164: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.2 KiB/s wr, 1 op/s
Oct 11 09:46:41 compute-0 sshd-session[436942]: Failed password for invalid user postgres from 165.232.82.252 port 43716 ssh2
Oct 11 09:46:41 compute-0 sshd-session[436942]: Connection closed by invalid user postgres 165.232.82.252 port 43716 [preauth]
Oct 11 09:46:41 compute-0 podman[437317]: 2025-10-11 09:46:41.915277662 +0000 UTC m=+0.066770654 container create d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:46:41 compute-0 systemd[1]: Started libpod-conmon-d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d.scope.
Oct 11 09:46:41 compute-0 podman[437317]: 2025-10-11 09:46:41.890778064 +0000 UTC m=+0.042271126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:46:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:46:42 compute-0 podman[437317]: 2025-10-11 09:46:42.018662981 +0000 UTC m=+0.170156003 container init d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 09:46:42 compute-0 podman[437317]: 2025-10-11 09:46:42.02859677 +0000 UTC m=+0.180089802 container start d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:46:42 compute-0 podman[437317]: 2025-10-11 09:46:42.033307292 +0000 UTC m=+0.184800314 container attach d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:46:42 compute-0 loving_heyrovsky[437334]: 167 167
Oct 11 09:46:42 compute-0 systemd[1]: libpod-d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d.scope: Deactivated successfully.
Oct 11 09:46:42 compute-0 podman[437317]: 2025-10-11 09:46:42.037279903 +0000 UTC m=+0.188772935 container died d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:46:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9471d5348d0606aa7f8b8e6a3058dc0c7bc98801064754eb2b649a19b829a70-merged.mount: Deactivated successfully.
Oct 11 09:46:42 compute-0 podman[437317]: 2025-10-11 09:46:42.095449645 +0000 UTC m=+0.246942667 container remove d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Oct 11 09:46:42 compute-0 systemd[1]: libpod-conmon-d389ccda85d594948a9b097b0d0bd0410c60344b1f2c9f4cdc37ad679301667d.scope: Deactivated successfully.
Oct 11 09:46:42 compute-0 podman[437359]: 2025-10-11 09:46:42.348939195 +0000 UTC m=+0.059944723 container create 15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_antonelli, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 09:46:42 compute-0 systemd[1]: Started libpod-conmon-15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe.scope.
Oct 11 09:46:42 compute-0 podman[437359]: 2025-10-11 09:46:42.316047952 +0000 UTC m=+0.027053550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:46:42 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f6eb4423bf45bc2248f00d73b9e291128657eb5dc26be1f6d6b00a07e70c7ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f6eb4423bf45bc2248f00d73b9e291128657eb5dc26be1f6d6b00a07e70c7ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f6eb4423bf45bc2248f00d73b9e291128657eb5dc26be1f6d6b00a07e70c7ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:46:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f6eb4423bf45bc2248f00d73b9e291128657eb5dc26be1f6d6b00a07e70c7ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:46:42 compute-0 podman[437359]: 2025-10-11 09:46:42.459321321 +0000 UTC m=+0.170326909 container init 15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_antonelli, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:46:42 compute-0 podman[437359]: 2025-10-11 09:46:42.472365666 +0000 UTC m=+0.183371224 container start 15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_antonelli, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:46:42 compute-0 podman[437359]: 2025-10-11 09:46:42.477091929 +0000 UTC m=+0.188097527 container attach 15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_antonelli, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 09:46:42 compute-0 ceph-mon[74313]: pgmap v3164: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.2 KiB/s wr, 1 op/s
Oct 11 09:46:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3165: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.2 KiB/s wr, 1 op/s
Oct 11 09:46:43 compute-0 silly_antonelli[437376]: {
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:         "osd_id": 2,
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:         "type": "bluestore"
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:     },
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:         "osd_id": 0,
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:         "type": "bluestore"
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:     },
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:         "osd_id": 1,
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:         "type": "bluestore"
Oct 11 09:46:43 compute-0 silly_antonelli[437376]:     }
Oct 11 09:46:43 compute-0 silly_antonelli[437376]: }
Oct 11 09:46:43 compute-0 podman[437359]: 2025-10-11 09:46:43.586448684 +0000 UTC m=+1.297454202 container died 15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_antonelli, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:46:43 compute-0 systemd[1]: libpod-15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe.scope: Deactivated successfully.
Oct 11 09:46:43 compute-0 systemd[1]: libpod-15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe.scope: Consumed 1.125s CPU time.
Oct 11 09:46:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f6eb4423bf45bc2248f00d73b9e291128657eb5dc26be1f6d6b00a07e70c7ad-merged.mount: Deactivated successfully.
Oct 11 09:46:43 compute-0 podman[437359]: 2025-10-11 09:46:43.648974418 +0000 UTC m=+1.359979936 container remove 15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_antonelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:46:43 compute-0 systemd[1]: libpod-conmon-15c531d5f74240493d63aa5e3c0ac520c7dfecf8b66d27128280bf1676ce0ebe.scope: Deactivated successfully.
Oct 11 09:46:43 compute-0 sudo[437252]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:46:43 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:46:43 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:46:43 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:46:43 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev b71dc2a2-8a7a-46a3-b4c7-770aa84c0218 does not exist
Oct 11 09:46:43 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 6b41411a-8f62-4364-8ae2-f05c247bd3ce does not exist
Oct 11 09:46:43 compute-0 sudo[437420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:46:43 compute-0 sudo[437420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:43 compute-0 sudo[437420]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:43 compute-0 sudo[437445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:46:43 compute-0 sudo[437445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:46:43 compute-0 sudo[437445]: pam_unix(sudo:session): session closed for user root
Oct 11 09:46:44 compute-0 ceph-mon[74313]: pgmap v3165: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 4.2 KiB/s wr, 1 op/s
Oct 11 09:46:44 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:46:44 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:46:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:46:45 compute-0 nova_compute[260935]: 2025-10-11 09:46:45.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:45 compute-0 nova_compute[260935]: 2025-10-11 09:46:45.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3166: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:46 compute-0 ceph-mon[74313]: pgmap v3166: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3167: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:47 compute-0 podman[437470]: 2025-10-11 09:46:47.849540683 +0000 UTC m=+0.131330056 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:46:48 compute-0 ceph-mon[74313]: pgmap v3167: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3168: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:46:50 compute-0 nova_compute[260935]: 2025-10-11 09:46:50.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:50 compute-0 nova_compute[260935]: 2025-10-11 09:46:50.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:50 compute-0 ceph-mon[74313]: pgmap v3168: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3169: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:52 compute-0 ceph-mon[74313]: pgmap v3169: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3170: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:53 compute-0 podman[437489]: 2025-10-11 09:46:53.800137312 +0000 UTC m=+0.097597499 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:46:54 compute-0 nova_compute[260935]: 2025-10-11 09:46:54.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:46:54 compute-0 nova_compute[260935]: 2025-10-11 09:46:54.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:46:54 compute-0 ceph-mon[74313]: pgmap v3170: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:46:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:46:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:46:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:46:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:46:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:46:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:46:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:46:55
Oct 11 09:46:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:46:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:46:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['volumes', '.rgw.root', '.mgr', 'default.rgw.control', 'images', 'vms', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct 11 09:46:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:46:55 compute-0 nova_compute[260935]: 2025-10-11 09:46:55.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:46:55 compute-0 nova_compute[260935]: 2025-10-11 09:46:55.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:46:55 compute-0 nova_compute[260935]: 2025-10-11 09:46:55.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:46:55 compute-0 nova_compute[260935]: 2025-10-11 09:46:55.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:46:55 compute-0 nova_compute[260935]: 2025-10-11 09:46:55.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:46:55 compute-0 nova_compute[260935]: 2025-10-11 09:46:55.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:46:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3171: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:46:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:46:55 compute-0 nova_compute[260935]: 2025-10-11 09:46:55.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:46:55 compute-0 nova_compute[260935]: 2025-10-11 09:46:55.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:46:56 compute-0 ceph-mon[74313]: pgmap v3171: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:46:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3172: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Oct 11 09:46:57 compute-0 nova_compute[260935]: 2025-10-11 09:46:57.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:46:57 compute-0 nova_compute[260935]: 2025-10-11 09:46:57.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:46:57 compute-0 nova_compute[260935]: 2025-10-11 09:46:57.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:46:57 compute-0 podman[437509]: 2025-10-11 09:46:57.81215967 +0000 UTC m=+0.110406228 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 11 09:46:57 compute-0 podman[437510]: 2025-10-11 09:46:57.855896987 +0000 UTC m=+0.147945691 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 11 09:46:58 compute-0 nova_compute[260935]: 2025-10-11 09:46:58.211 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:46:58 compute-0 nova_compute[260935]: 2025-10-11 09:46:58.212 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:46:58 compute-0 nova_compute[260935]: 2025-10-11 09:46:58.213 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:46:58 compute-0 nova_compute[260935]: 2025-10-11 09:46:58.338 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:46:58 compute-0 ovn_controller[152945]: 2025-10-11T09:46:58Z|01736|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 11 09:46:58 compute-0 nova_compute[260935]: 2025-10-11 09:46:58.618 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:46:58 compute-0 nova_compute[260935]: 2025-10-11 09:46:58.638 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:46:58 compute-0 nova_compute[260935]: 2025-10-11 09:46:58.638 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:46:58 compute-0 ceph-mon[74313]: pgmap v3172: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Oct 11 09:46:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3173: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 11 09:46:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:47:00 compute-0 nova_compute[260935]: 2025-10-11 09:47:00.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:47:00 compute-0 nova_compute[260935]: 2025-10-11 09:47:00.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:47:00 compute-0 nova_compute[260935]: 2025-10-11 09:47:00.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5028 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:47:00 compute-0 nova_compute[260935]: 2025-10-11 09:47:00.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:47:00 compute-0 nova_compute[260935]: 2025-10-11 09:47:00.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:47:00 compute-0 nova_compute[260935]: 2025-10-11 09:47:00.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:47:00 compute-0 ceph-mon[74313]: pgmap v3173: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 11 09:47:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3174: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 11 09:47:01 compute-0 nova_compute[260935]: 2025-10-11 09:47:01.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:47:01 compute-0 nova_compute[260935]: 2025-10-11 09:47:01.743 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:47:01 compute-0 nova_compute[260935]: 2025-10-11 09:47:01.743 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:47:01 compute-0 nova_compute[260935]: 2025-10-11 09:47:01.744 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:47:01 compute-0 nova_compute[260935]: 2025-10-11 09:47:01.744 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:47:01 compute-0 nova_compute[260935]: 2025-10-11 09:47:01.745 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:47:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:47:02 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2662046581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.214 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.285 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.285 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.286 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.290 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.290 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.294 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.294 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.543 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.544 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2765MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.544 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.545 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.642 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.642 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.643 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.643 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.643 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:47:02 compute-0 nova_compute[260935]: 2025-10-11 09:47:02.736 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:47:02 compute-0 ceph-mon[74313]: pgmap v3174: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 11 09:47:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2662046581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:47:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:47:03 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4045184369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:47:03 compute-0 nova_compute[260935]: 2025-10-11 09:47:03.245 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:47:03 compute-0 nova_compute[260935]: 2025-10-11 09:47:03.256 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:47:03 compute-0 nova_compute[260935]: 2025-10-11 09:47:03.300 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:47:03 compute-0 nova_compute[260935]: 2025-10-11 09:47:03.302 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:47:03 compute-0 nova_compute[260935]: 2025-10-11 09:47:03.303 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:47:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3175: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 11 09:47:03 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4045184369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:47:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:47:04 compute-0 ceph-mon[74313]: pgmap v3175: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 11 09:47:05 compute-0 nova_compute[260935]: 2025-10-11 09:47:05.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3176: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:47:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:47:06 compute-0 nova_compute[260935]: 2025-10-11 09:47:06.303 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:47:06 compute-0 nova_compute[260935]: 2025-10-11 09:47:06.304 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:47:06 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 11 09:47:06 compute-0 systemd[1]: virtsecretd.service: Consumed 1.386s CPU time.
Oct 11 09:47:06 compute-0 ceph-mon[74313]: pgmap v3176: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 11 09:47:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3177: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Oct 11 09:47:08 compute-0 ceph-mon[74313]: pgmap v3177: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Oct 11 09:47:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3178: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:47:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:47:10 compute-0 nova_compute[260935]: 2025-10-11 09:47:10.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:47:10 compute-0 nova_compute[260935]: 2025-10-11 09:47:10.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:47:10 compute-0 nova_compute[260935]: 2025-10-11 09:47:10.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:47:10 compute-0 nova_compute[260935]: 2025-10-11 09:47:10.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:47:10 compute-0 nova_compute[260935]: 2025-10-11 09:47:10.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:47:10 compute-0 nova_compute[260935]: 2025-10-11 09:47:10.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:47:10 compute-0 ceph-mon[74313]: pgmap v3178: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Oct 11 09:47:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3179: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 11 09:47:12 compute-0 ceph-mon[74313]: pgmap v3179: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 11 09:47:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3180: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 11 09:47:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:47:14 compute-0 ceph-mon[74313]: pgmap v3180: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 11 09:47:14 compute-0 sshd-session[437599]: Invalid user postgres from 165.232.82.252 port 54902
Oct 11 09:47:14 compute-0 sshd-session[437599]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:47:14 compute-0 sshd-session[437599]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:47:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:47:15.242 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:47:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:47:15.243 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:47:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:47:15.243 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:47:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3181: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 11 09:47:15 compute-0 nova_compute[260935]: 2025-10-11 09:47:15.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:47:15 compute-0 nova_compute[260935]: 2025-10-11 09:47:15.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:47:16 compute-0 ceph-mon[74313]: pgmap v3181: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 11 09:47:17 compute-0 sshd-session[437599]: Failed password for invalid user postgres from 165.232.82.252 port 54902 ssh2
Oct 11 09:47:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3182: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 11 09:47:17 compute-0 sshd-session[437599]: Connection closed by invalid user postgres 165.232.82.252 port 54902 [preauth]
Oct 11 09:47:18 compute-0 podman[437601]: 2025-10-11 09:47:18.780234064 +0000 UTC m=+0.081310651 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 11 09:47:18 compute-0 ceph-mon[74313]: pgmap v3182: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct 11 09:47:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3183: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:47:20 compute-0 nova_compute[260935]: 2025-10-11 09:47:20.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:47:20 compute-0 ceph-mon[74313]: pgmap v3183: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3184: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:22 compute-0 ceph-mon[74313]: pgmap v3184: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3185: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:47:24 compute-0 podman[437620]: 2025-10-11 09:47:24.783090991 +0000 UTC m=+0.081773165 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3)
Oct 11 09:47:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:47:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:47:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:47:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:47:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:47:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:47:24 compute-0 ceph-mon[74313]: pgmap v3185: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3186: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:25 compute-0 nova_compute[260935]: 2025-10-11 09:47:25.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:47:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:47:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2384124883' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:47:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:47:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2384124883' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:47:26 compute-0 ceph-mon[74313]: pgmap v3186: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2384124883' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:47:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2384124883' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:47:27 compute-0 sshd-session[437643]: Invalid user ambari from 155.4.244.179 port 61407
Oct 11 09:47:27 compute-0 sshd-session[437643]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:47:27 compute-0 sshd-session[437643]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:47:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3187: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:28 compute-0 podman[437645]: 2025-10-11 09:47:28.812312581 +0000 UTC m=+0.094919503 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:47:28 compute-0 podman[437646]: 2025-10-11 09:47:28.835624595 +0000 UTC m=+0.120422829 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 09:47:28 compute-0 ceph-mon[74313]: pgmap v3187: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:29 compute-0 sshd-session[437643]: Failed password for invalid user ambari from 155.4.244.179 port 61407 ssh2
Oct 11 09:47:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3188: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:47:30 compute-0 nova_compute[260935]: 2025-10-11 09:47:30.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:47:30 compute-0 sshd-session[437643]: Received disconnect from 155.4.244.179 port 61407:11: Bye Bye [preauth]
Oct 11 09:47:30 compute-0 sshd-session[437643]: Disconnected from invalid user ambari 155.4.244.179 port 61407 [preauth]
Oct 11 09:47:30 compute-0 ceph-mon[74313]: pgmap v3188: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3189: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:32 compute-0 ceph-mon[74313]: pgmap v3189: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3190: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:33 compute-0 nova_compute[260935]: 2025-10-11 09:47:33.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:47:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:47:34 compute-0 ceph-mon[74313]: pgmap v3190: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3191: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:35 compute-0 nova_compute[260935]: 2025-10-11 09:47:35.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:47:35 compute-0 nova_compute[260935]: 2025-10-11 09:47:35.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:47:35 compute-0 nova_compute[260935]: 2025-10-11 09:47:35.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:47:35 compute-0 nova_compute[260935]: 2025-10-11 09:47:35.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:47:35 compute-0 nova_compute[260935]: 2025-10-11 09:47:35.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:47:35 compute-0 nova_compute[260935]: 2025-10-11 09:47:35.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:47:36 compute-0 ceph-mon[74313]: pgmap v3191: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3192: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:38 compute-0 ceph-mon[74313]: pgmap v3192: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3193: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:47:40 compute-0 nova_compute[260935]: 2025-10-11 09:47:40.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:47:40 compute-0 ceph-mon[74313]: pgmap v3193: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3194: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:43 compute-0 ceph-mon[74313]: pgmap v3194: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3195: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:43 compute-0 sudo[437691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:47:43 compute-0 sudo[437691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:43 compute-0 sudo[437691]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:44 compute-0 sudo[437716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:47:44 compute-0 sudo[437716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:44 compute-0 sudo[437716]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:44 compute-0 sudo[437741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:47:44 compute-0 sudo[437741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:44 compute-0 sudo[437741]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:44 compute-0 sudo[437766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 09:47:44 compute-0 sudo[437766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:47:44 compute-0 podman[437863]: 2025-10-11 09:47:44.84874492 +0000 UTC m=+0.097658610 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:47:44 compute-0 podman[437863]: 2025-10-11 09:47:44.964045644 +0000 UTC m=+0.212959304 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:47:45 compute-0 ceph-mon[74313]: pgmap v3195: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3196: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:45 compute-0 nova_compute[260935]: 2025-10-11 09:47:45.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:47:45 compute-0 sudo[437766]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:47:45 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:47:45 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:47:45 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:47:46 compute-0 sudo[438025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:47:46 compute-0 sudo[438025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:46 compute-0 sudo[438025]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:46 compute-0 sudo[438050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:47:46 compute-0 sudo[438050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:46 compute-0 sudo[438050]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:46 compute-0 sudo[438075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:47:46 compute-0 sudo[438075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:46 compute-0 sudo[438075]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:46 compute-0 sudo[438100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:47:46 compute-0 sudo[438100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:46 compute-0 sudo[438100]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:47:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:47:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:47:46 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:47:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:47:46 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:47:46 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev fe7e7c9e-0970-4619-8359-5ac2e777196b does not exist
Oct 11 09:47:46 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 5cfb91e5-022b-43dc-adfc-7f3b941bcbfb does not exist
Oct 11 09:47:46 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 4138dc19-54b7-4e8c-9809-6b162e203465 does not exist
Oct 11 09:47:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:47:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:47:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:47:46 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:47:46 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:47:46 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:47:46 compute-0 ceph-mon[74313]: pgmap v3196: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:47:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:47:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:47:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:47:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:47:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:47:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:47:46 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:47:46 compute-0 sudo[438158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:47:46 compute-0 sudo[438158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:46 compute-0 sudo[438158]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:47 compute-0 sudo[438183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:47:47 compute-0 sudo[438183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:47 compute-0 sudo[438183]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:47 compute-0 sudo[438208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:47:47 compute-0 sudo[438208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:47 compute-0 sudo[438208]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:47 compute-0 sudo[438233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:47:47 compute-0 sudo[438233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3197: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:47 compute-0 podman[438299]: 2025-10-11 09:47:47.805055296 +0000 UTC m=+0.075828157 container create 4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:47:47 compute-0 podman[438299]: 2025-10-11 09:47:47.771292069 +0000 UTC m=+0.042064980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:47:47 compute-0 systemd[1]: Started libpod-conmon-4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737.scope.
Oct 11 09:47:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:47:47 compute-0 podman[438299]: 2025-10-11 09:47:47.918317663 +0000 UTC m=+0.189090524 container init 4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khorana, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 11 09:47:47 compute-0 podman[438299]: 2025-10-11 09:47:47.932420819 +0000 UTC m=+0.203193670 container start 4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:47:47 compute-0 podman[438299]: 2025-10-11 09:47:47.938650743 +0000 UTC m=+0.209423614 container attach 4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:47:47 compute-0 angry_khorana[438316]: 167 167
Oct 11 09:47:47 compute-0 systemd[1]: libpod-4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737.scope: Deactivated successfully.
Oct 11 09:47:47 compute-0 conmon[438316]: conmon 4bf15292fa01a7e67cbd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737.scope/container/memory.events
Oct 11 09:47:47 compute-0 podman[438299]: 2025-10-11 09:47:47.943053767 +0000 UTC m=+0.213826658 container died 4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khorana, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 09:47:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f3279222ff3b7df6fc3a3bc660079d759f09165b6559b9369e912f1ad678f46-merged.mount: Deactivated successfully.
Oct 11 09:47:47 compute-0 podman[438299]: 2025-10-11 09:47:47.996400713 +0000 UTC m=+0.267173574 container remove 4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_khorana, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 09:47:48 compute-0 systemd[1]: libpod-conmon-4bf15292fa01a7e67cbde71a37aa585b9f97f38ffdeb642062daeaaf2e094737.scope: Deactivated successfully.
Oct 11 09:47:48 compute-0 podman[438340]: 2025-10-11 09:47:48.24372157 +0000 UTC m=+0.060163888 container create a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:47:48 compute-0 systemd[1]: Started libpod-conmon-a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e.scope.
Oct 11 09:47:48 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:47:48 compute-0 podman[438340]: 2025-10-11 09:47:48.219566853 +0000 UTC m=+0.036009231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfc80109a18163fc4ada85637cf5afa94d8505122b642f73693adfdc6e629f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfc80109a18163fc4ada85637cf5afa94d8505122b642f73693adfdc6e629f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfc80109a18163fc4ada85637cf5afa94d8505122b642f73693adfdc6e629f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfc80109a18163fc4ada85637cf5afa94d8505122b642f73693adfdc6e629f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:47:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfc80109a18163fc4ada85637cf5afa94d8505122b642f73693adfdc6e629f2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:47:48 compute-0 podman[438340]: 2025-10-11 09:47:48.33142366 +0000 UTC m=+0.147866018 container init a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gagarin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:47:48 compute-0 podman[438340]: 2025-10-11 09:47:48.343673243 +0000 UTC m=+0.160115801 container start a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gagarin, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:47:48 compute-0 podman[438340]: 2025-10-11 09:47:48.347082989 +0000 UTC m=+0.163525397 container attach a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gagarin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:47:48 compute-0 ceph-mon[74313]: pgmap v3197: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:49 compute-0 clever_gagarin[438356]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:47:49 compute-0 clever_gagarin[438356]: --> relative data size: 1.0
Oct 11 09:47:49 compute-0 clever_gagarin[438356]: --> All data devices are unavailable
Oct 11 09:47:49 compute-0 systemd[1]: libpod-a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e.scope: Deactivated successfully.
Oct 11 09:47:49 compute-0 systemd[1]: libpod-a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e.scope: Consumed 1.109s CPU time.
Oct 11 09:47:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3198: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:49 compute-0 podman[438388]: 2025-10-11 09:47:49.604093075 +0000 UTC m=+0.043710408 container died a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gagarin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:47:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dfc80109a18163fc4ada85637cf5afa94d8505122b642f73693adfdc6e629f2-merged.mount: Deactivated successfully.
Oct 11 09:47:49 compute-0 podman[438388]: 2025-10-11 09:47:49.661782182 +0000 UTC m=+0.101399515 container remove a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 09:47:49 compute-0 systemd[1]: libpod-conmon-a2085501b714cf9293fc44f4cd1ff58cae5938053d30616b170bb097556cc77e.scope: Deactivated successfully.
Oct 11 09:47:49 compute-0 podman[438387]: 2025-10-11 09:47:49.68095894 +0000 UTC m=+0.108858504 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 11 09:47:49 compute-0 sudo[438233]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:47:49 compute-0 sudo[438423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:47:49 compute-0 sudo[438423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:49 compute-0 sudo[438423]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:49 compute-0 sshd-session[438385]: Invalid user postgres from 165.232.82.252 port 51946
Oct 11 09:47:49 compute-0 sudo[438448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:47:49 compute-0 sudo[438448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:49 compute-0 sudo[438448]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:49 compute-0 sshd-session[438385]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:47:49 compute-0 sshd-session[438385]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:47:49 compute-0 sudo[438473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:47:49 compute-0 sudo[438473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:49 compute-0 sudo[438473]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:50 compute-0 sudo[438498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:47:50 compute-0 sudo[438498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:50 compute-0 podman[438564]: 2025-10-11 09:47:50.412335944 +0000 UTC m=+0.037673628 container create a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:47:50 compute-0 systemd[1]: Started libpod-conmon-a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b.scope.
Oct 11 09:47:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:47:50 compute-0 podman[438564]: 2025-10-11 09:47:50.394856694 +0000 UTC m=+0.020194378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:47:50 compute-0 podman[438564]: 2025-10-11 09:47:50.493617524 +0000 UTC m=+0.118955218 container init a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 09:47:50 compute-0 podman[438564]: 2025-10-11 09:47:50.504724125 +0000 UTC m=+0.130061789 container start a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:47:50 compute-0 podman[438564]: 2025-10-11 09:47:50.507553894 +0000 UTC m=+0.132891638 container attach a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 09:47:50 compute-0 gallant_wilson[438581]: 167 167
Oct 11 09:47:50 compute-0 systemd[1]: libpod-a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b.scope: Deactivated successfully.
Oct 11 09:47:50 compute-0 conmon[438581]: conmon a95f60e8fe5ea0903027 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b.scope/container/memory.events
Oct 11 09:47:50 compute-0 podman[438564]: 2025-10-11 09:47:50.510510287 +0000 UTC m=+0.135847991 container died a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:47:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a7ad59697e0a71c537b0aa0ad721c54e1ee4b1616a8b85f869a16d10fd2c2b6-merged.mount: Deactivated successfully.
Oct 11 09:47:50 compute-0 nova_compute[260935]: 2025-10-11 09:47:50.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:47:50 compute-0 podman[438564]: 2025-10-11 09:47:50.565656944 +0000 UTC m=+0.190994648 container remove a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:47:50 compute-0 systemd[1]: libpod-conmon-a95f60e8fe5ea09030279cee26905d0a928c9aad98d8594e21e884b13681518b.scope: Deactivated successfully.
Oct 11 09:47:50 compute-0 podman[438605]: 2025-10-11 09:47:50.785433908 +0000 UTC m=+0.066143396 container create 360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_snyder, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:47:50 compute-0 systemd[1]: Started libpod-conmon-360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea.scope.
Oct 11 09:47:50 compute-0 podman[438605]: 2025-10-11 09:47:50.759028148 +0000 UTC m=+0.039737686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:47:50 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc82382b365b870397d0f80b608d6303875f464b8399e1840f7462f753eea19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc82382b365b870397d0f80b608d6303875f464b8399e1840f7462f753eea19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc82382b365b870397d0f80b608d6303875f464b8399e1840f7462f753eea19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:47:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc82382b365b870397d0f80b608d6303875f464b8399e1840f7462f753eea19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:47:50 compute-0 podman[438605]: 2025-10-11 09:47:50.888711625 +0000 UTC m=+0.169421143 container init 360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 09:47:50 compute-0 podman[438605]: 2025-10-11 09:47:50.901922136 +0000 UTC m=+0.182631624 container start 360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_snyder, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 11 09:47:50 compute-0 podman[438605]: 2025-10-11 09:47:50.905786044 +0000 UTC m=+0.186495532 container attach 360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_snyder, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:47:50 compute-0 ceph-mon[74313]: pgmap v3198: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3199: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:51 compute-0 clever_snyder[438623]: {
Oct 11 09:47:51 compute-0 clever_snyder[438623]:     "0": [
Oct 11 09:47:51 compute-0 clever_snyder[438623]:         {
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "devices": [
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "/dev/loop3"
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             ],
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "lv_name": "ceph_lv0",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "lv_size": "21470642176",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "name": "ceph_lv0",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "tags": {
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.cluster_name": "ceph",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.crush_device_class": "",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.encrypted": "0",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.osd_id": "0",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.type": "block",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.vdo": "0"
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             },
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "type": "block",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "vg_name": "ceph_vg0"
Oct 11 09:47:51 compute-0 clever_snyder[438623]:         }
Oct 11 09:47:51 compute-0 clever_snyder[438623]:     ],
Oct 11 09:47:51 compute-0 clever_snyder[438623]:     "1": [
Oct 11 09:47:51 compute-0 clever_snyder[438623]:         {
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "devices": [
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "/dev/loop4"
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             ],
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "lv_name": "ceph_lv1",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "lv_size": "21470642176",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "name": "ceph_lv1",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "tags": {
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.cluster_name": "ceph",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.crush_device_class": "",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.encrypted": "0",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.osd_id": "1",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.type": "block",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.vdo": "0"
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             },
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "type": "block",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "vg_name": "ceph_vg1"
Oct 11 09:47:51 compute-0 clever_snyder[438623]:         }
Oct 11 09:47:51 compute-0 clever_snyder[438623]:     ],
Oct 11 09:47:51 compute-0 clever_snyder[438623]:     "2": [
Oct 11 09:47:51 compute-0 clever_snyder[438623]:         {
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "devices": [
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "/dev/loop5"
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             ],
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "lv_name": "ceph_lv2",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "lv_size": "21470642176",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "name": "ceph_lv2",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "tags": {
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.cluster_name": "ceph",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.crush_device_class": "",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.encrypted": "0",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.osd_id": "2",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.type": "block",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:                 "ceph.vdo": "0"
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             },
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "type": "block",
Oct 11 09:47:51 compute-0 clever_snyder[438623]:             "vg_name": "ceph_vg2"
Oct 11 09:47:51 compute-0 clever_snyder[438623]:         }
Oct 11 09:47:51 compute-0 clever_snyder[438623]:     ]
Oct 11 09:47:51 compute-0 clever_snyder[438623]: }
Oct 11 09:47:51 compute-0 systemd[1]: libpod-360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea.scope: Deactivated successfully.
Oct 11 09:47:51 compute-0 podman[438632]: 2025-10-11 09:47:51.742231414 +0000 UTC m=+0.032886623 container died 360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_snyder, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:47:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dc82382b365b870397d0f80b608d6303875f464b8399e1840f7462f753eea19-merged.mount: Deactivated successfully.
Oct 11 09:47:51 compute-0 podman[438632]: 2025-10-11 09:47:51.818264847 +0000 UTC m=+0.108920026 container remove 360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_snyder, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:47:51 compute-0 sshd-session[438385]: Failed password for invalid user postgres from 165.232.82.252 port 51946 ssh2
Oct 11 09:47:51 compute-0 systemd[1]: libpod-conmon-360805405fb44d1ff116d0ce2f9dc27182803cb8cfc897a0df86ceb99d7495ea.scope: Deactivated successfully.
Oct 11 09:47:51 compute-0 sudo[438498]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:51 compute-0 sudo[438648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:47:51 compute-0 sudo[438648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:52 compute-0 sudo[438648]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:52 compute-0 sudo[438673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:47:52 compute-0 sudo[438673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:52 compute-0 sudo[438673]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:52 compute-0 sudo[438698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:47:52 compute-0 sudo[438698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:52 compute-0 sudo[438698]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:52 compute-0 sudo[438723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:47:52 compute-0 sudo[438723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:52 compute-0 sshd-session[438385]: Connection closed by invalid user postgres 165.232.82.252 port 51946 [preauth]
Oct 11 09:47:52 compute-0 podman[438788]: 2025-10-11 09:47:52.817004868 +0000 UTC m=+0.066491246 container create 0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 09:47:52 compute-0 systemd[1]: Started libpod-conmon-0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4.scope.
Oct 11 09:47:52 compute-0 podman[438788]: 2025-10-11 09:47:52.790042752 +0000 UTC m=+0.039529170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:47:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:47:52 compute-0 podman[438788]: 2025-10-11 09:47:52.926769567 +0000 UTC m=+0.176255985 container init 0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:47:52 compute-0 podman[438788]: 2025-10-11 09:47:52.93402044 +0000 UTC m=+0.183506818 container start 0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:47:52 compute-0 podman[438788]: 2025-10-11 09:47:52.937983351 +0000 UTC m=+0.187469719 container attach 0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:47:52 compute-0 recursing_jepsen[438805]: 167 167
Oct 11 09:47:52 compute-0 systemd[1]: libpod-0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4.scope: Deactivated successfully.
Oct 11 09:47:52 compute-0 conmon[438805]: conmon 0d56f72f7f03ff68af78 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4.scope/container/memory.events
Oct 11 09:47:52 compute-0 podman[438788]: 2025-10-11 09:47:52.943725482 +0000 UTC m=+0.193211860 container died 0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 09:47:52 compute-0 ceph-mon[74313]: pgmap v3199: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-79fe385ba3167c9400915b7711839e5960e1f42f8ba0a9bdd00746dbe582d8f6-merged.mount: Deactivated successfully.
Oct 11 09:47:52 compute-0 podman[438788]: 2025-10-11 09:47:52.995359721 +0000 UTC m=+0.244846099 container remove 0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:47:53 compute-0 systemd[1]: libpod-conmon-0d56f72f7f03ff68af789a66cd8bd9e51f8ffec4c76b97c4c873510bd68f9ed4.scope: Deactivated successfully.
Oct 11 09:47:53 compute-0 podman[438829]: 2025-10-11 09:47:53.238073608 +0000 UTC m=+0.064297764 container create 64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noyce, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:47:53 compute-0 systemd[1]: Started libpod-conmon-64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2.scope.
Oct 11 09:47:53 compute-0 podman[438829]: 2025-10-11 09:47:53.211874313 +0000 UTC m=+0.038098479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:47:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9aae32bd3fefc23b3d1d2ebbadad3cad0954e6c4f3cb04f7f24ad074c14914/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9aae32bd3fefc23b3d1d2ebbadad3cad0954e6c4f3cb04f7f24ad074c14914/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9aae32bd3fefc23b3d1d2ebbadad3cad0954e6c4f3cb04f7f24ad074c14914/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9aae32bd3fefc23b3d1d2ebbadad3cad0954e6c4f3cb04f7f24ad074c14914/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:47:53 compute-0 podman[438829]: 2025-10-11 09:47:53.354582746 +0000 UTC m=+0.180806932 container init 64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noyce, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:47:53 compute-0 podman[438829]: 2025-10-11 09:47:53.363164467 +0000 UTC m=+0.189388583 container start 64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noyce, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:47:53 compute-0 podman[438829]: 2025-10-11 09:47:53.365893953 +0000 UTC m=+0.192118109 container attach 64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 11 09:47:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3200: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:54 compute-0 funny_noyce[438846]: {
Oct 11 09:47:54 compute-0 funny_noyce[438846]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:47:54 compute-0 funny_noyce[438846]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:47:54 compute-0 funny_noyce[438846]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:47:54 compute-0 funny_noyce[438846]:         "osd_id": 2,
Oct 11 09:47:54 compute-0 funny_noyce[438846]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:47:54 compute-0 funny_noyce[438846]:         "type": "bluestore"
Oct 11 09:47:54 compute-0 funny_noyce[438846]:     },
Oct 11 09:47:54 compute-0 funny_noyce[438846]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:47:54 compute-0 funny_noyce[438846]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:47:54 compute-0 funny_noyce[438846]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:47:54 compute-0 funny_noyce[438846]:         "osd_id": 0,
Oct 11 09:47:54 compute-0 funny_noyce[438846]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:47:54 compute-0 funny_noyce[438846]:         "type": "bluestore"
Oct 11 09:47:54 compute-0 funny_noyce[438846]:     },
Oct 11 09:47:54 compute-0 funny_noyce[438846]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:47:54 compute-0 funny_noyce[438846]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:47:54 compute-0 funny_noyce[438846]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:47:54 compute-0 funny_noyce[438846]:         "osd_id": 1,
Oct 11 09:47:54 compute-0 funny_noyce[438846]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:47:54 compute-0 funny_noyce[438846]:         "type": "bluestore"
Oct 11 09:47:54 compute-0 funny_noyce[438846]:     }
Oct 11 09:47:54 compute-0 funny_noyce[438846]: }
Oct 11 09:47:54 compute-0 systemd[1]: libpod-64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2.scope: Deactivated successfully.
Oct 11 09:47:54 compute-0 systemd[1]: libpod-64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2.scope: Consumed 1.006s CPU time.
Oct 11 09:47:54 compute-0 podman[438829]: 2025-10-11 09:47:54.365250023 +0000 UTC m=+1.191474159 container died 64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noyce, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 09:47:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca9aae32bd3fefc23b3d1d2ebbadad3cad0954e6c4f3cb04f7f24ad074c14914-merged.mount: Deactivated successfully.
Oct 11 09:47:54 compute-0 podman[438829]: 2025-10-11 09:47:54.451209724 +0000 UTC m=+1.277433880 container remove 64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_noyce, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:47:54 compute-0 systemd[1]: libpod-conmon-64ee16bb54c49a7189f395cdcdabd265b9b024764a3bc84f352fb11b907e69e2.scope: Deactivated successfully.
Oct 11 09:47:54 compute-0 sudo[438723]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:47:54 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:47:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:47:54 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:47:54 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev d8478b80-f34a-4e77-9c89-b771c8fbedb0 does not exist
Oct 11 09:47:54 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 886d8e6f-3129-4ceb-b6e7-2ca19fec5e6b does not exist
Oct 11 09:47:54 compute-0 sudo[438893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:47:54 compute-0 sudo[438893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:54 compute-0 sudo[438893]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:54 compute-0 sudo[438918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:47:54 compute-0 sudo[438918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:47:54 compute-0 sudo[438918]: pam_unix(sudo:session): session closed for user root
Oct 11 09:47:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:47:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:47:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:47:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:47:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:47:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:47:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:47:54 compute-0 ceph-mon[74313]: pgmap v3200: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:54 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:47:54 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:47:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:47:55
Oct 11 09:47:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:47:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:47:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'images', 'volumes', 'default.rgw.log', 'default.rgw.control', '.mgr']
Oct 11 09:47:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:47:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3201: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:55 compute-0 nova_compute[260935]: 2025-10-11 09:47:55.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:47:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:47:55 compute-0 nova_compute[260935]: 2025-10-11 09:47:55.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:47:55 compute-0 nova_compute[260935]: 2025-10-11 09:47:55.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:47:55 compute-0 nova_compute[260935]: 2025-10-11 09:47:55.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:47:55 compute-0 podman[438943]: 2025-10-11 09:47:55.815505709 +0000 UTC m=+0.105955693 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 11 09:47:56 compute-0 ceph-mon[74313]: pgmap v3201: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3202: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:57 compute-0 nova_compute[260935]: 2025-10-11 09:47:57.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:47:57 compute-0 nova_compute[260935]: 2025-10-11 09:47:57.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:47:57 compute-0 nova_compute[260935]: 2025-10-11 09:47:57.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:47:57 compute-0 nova_compute[260935]: 2025-10-11 09:47:57.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 09:47:58 compute-0 nova_compute[260935]: 2025-10-11 09:47:58.186 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:47:58 compute-0 nova_compute[260935]: 2025-10-11 09:47:58.187 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:47:58 compute-0 nova_compute[260935]: 2025-10-11 09:47:58.187 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:47:58 compute-0 nova_compute[260935]: 2025-10-11 09:47:58.187 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:47:58 compute-0 nova_compute[260935]: 2025-10-11 09:47:58.434 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:47:58 compute-0 nova_compute[260935]: 2025-10-11 09:47:58.815 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:47:58 compute-0 nova_compute[260935]: 2025-10-11 09:47:58.836 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:47:58 compute-0 nova_compute[260935]: 2025-10-11 09:47:58.837 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:47:58 compute-0 nova_compute[260935]: 2025-10-11 09:47:58.837 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:47:58 compute-0 ceph-mon[74313]: pgmap v3202: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3203: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:47:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:47:59 compute-0 podman[438963]: 2025-10-11 09:47:59.814918933 +0000 UTC m=+0.100717286 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 09:47:59 compute-0 podman[438964]: 2025-10-11 09:47:59.846388856 +0000 UTC m=+0.130930434 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 09:48:00 compute-0 nova_compute[260935]: 2025-10-11 09:48:00.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:48:00 compute-0 nova_compute[260935]: 2025-10-11 09:48:00.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:48:00 compute-0 nova_compute[260935]: 2025-10-11 09:48:00.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:48:00 compute-0 nova_compute[260935]: 2025-10-11 09:48:00.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:48:00 compute-0 nova_compute[260935]: 2025-10-11 09:48:00.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:48:00 compute-0 nova_compute[260935]: 2025-10-11 09:48:00.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:48:01 compute-0 ceph-mon[74313]: pgmap v3203: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3204: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:01 compute-0 nova_compute[260935]: 2025-10-11 09:48:01.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:48:01 compute-0 nova_compute[260935]: 2025-10-11 09:48:01.749 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:48:01 compute-0 nova_compute[260935]: 2025-10-11 09:48:01.750 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:48:01 compute-0 nova_compute[260935]: 2025-10-11 09:48:01.750 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:48:01 compute-0 nova_compute[260935]: 2025-10-11 09:48:01.751 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:48:01 compute-0 nova_compute[260935]: 2025-10-11 09:48:01.751 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:48:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:48:02 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3572227953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.277 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.395 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.395 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.396 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.404 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.404 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.411 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.412 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.646 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.648 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2742MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.649 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.649 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.767 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.768 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.768 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.769 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.769 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:48:02 compute-0 nova_compute[260935]: 2025-10-11 09:48:02.855 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:48:03 compute-0 ceph-mon[74313]: pgmap v3204: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:03 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3572227953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:48:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:48:03 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1017514983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:48:03 compute-0 nova_compute[260935]: 2025-10-11 09:48:03.332 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:48:03 compute-0 nova_compute[260935]: 2025-10-11 09:48:03.343 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:48:03 compute-0 nova_compute[260935]: 2025-10-11 09:48:03.369 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:48:03 compute-0 nova_compute[260935]: 2025-10-11 09:48:03.372 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:48:03 compute-0 nova_compute[260935]: 2025-10-11 09:48:03.372 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:48:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3205: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:04 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1017514983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:48:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:48:05 compute-0 ceph-mon[74313]: pgmap v3205: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3206: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:05 compute-0 nova_compute[260935]: 2025-10-11 09:48:05.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:48:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:48:06 compute-0 nova_compute[260935]: 2025-10-11 09:48:06.374 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:48:06 compute-0 nova_compute[260935]: 2025-10-11 09:48:06.375 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:48:07 compute-0 ceph-mon[74313]: pgmap v3206: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3207: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:09 compute-0 ceph-mon[74313]: pgmap v3207: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3208: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:48:10 compute-0 nova_compute[260935]: 2025-10-11 09:48:10.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:48:10 compute-0 nova_compute[260935]: 2025-10-11 09:48:10.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:48:10 compute-0 nova_compute[260935]: 2025-10-11 09:48:10.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:48:10 compute-0 nova_compute[260935]: 2025-10-11 09:48:10.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:48:10 compute-0 nova_compute[260935]: 2025-10-11 09:48:10.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:48:10 compute-0 nova_compute[260935]: 2025-10-11 09:48:10.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:48:11 compute-0 ceph-mon[74313]: pgmap v3208: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3209: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:13 compute-0 ceph-mon[74313]: pgmap v3209: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3210: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:48:15 compute-0 ceph-mon[74313]: pgmap v3210: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:48:15.244 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:48:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:48:15.244 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:48:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:48:15.245 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:48:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3211: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:15 compute-0 nova_compute[260935]: 2025-10-11 09:48:15.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:48:15 compute-0 nova_compute[260935]: 2025-10-11 09:48:15.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:48:15 compute-0 nova_compute[260935]: 2025-10-11 09:48:15.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:48:17 compute-0 ceph-mon[74313]: pgmap v3211: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3212: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:19 compute-0 ceph-mon[74313]: pgmap v3212: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3213: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:48:20 compute-0 nova_compute[260935]: 2025-10-11 09:48:20.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:48:20 compute-0 podman[439051]: 2025-10-11 09:48:20.783759496 +0000 UTC m=+0.080722665 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 
9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:48:21 compute-0 ceph-mon[74313]: pgmap v3213: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3214: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:23 compute-0 ceph-mon[74313]: pgmap v3214: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3215: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 09:48:24 compute-0 sshd-session[439071]: Invalid user postgres from 165.232.82.252 port 57512
Oct 11 09:48:24 compute-0 sshd-session[439071]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:48:24 compute-0 sshd-session[439071]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=165.232.82.252
Oct 11 09:48:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:48:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:48:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:48:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:48:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:48:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:48:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:48:25 compute-0 ceph-mon[74313]: pgmap v3215: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 09:48:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3216: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 09:48:25 compute-0 nova_compute[260935]: 2025-10-11 09:48:25.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:48:26 compute-0 sshd-session[439071]: Failed password for invalid user postgres from 165.232.82.252 port 57512 ssh2
Oct 11 09:48:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:48:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1332784152' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:48:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:48:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1332784152' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:48:26 compute-0 podman[439073]: 2025-10-11 09:48:26.786056256 +0000 UTC m=+0.075308923 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:48:27 compute-0 sshd-session[439071]: Connection closed by invalid user postgres 165.232.82.252 port 57512 [preauth]
Oct 11 09:48:27 compute-0 ceph-mon[74313]: pgmap v3216: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 09:48:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1332784152' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:48:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1332784152' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:48:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3217: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 09:48:29 compute-0 ceph-mon[74313]: pgmap v3217: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 09:48:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3218: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 09:48:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:48:30 compute-0 nova_compute[260935]: 2025-10-11 09:48:30.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:48:30 compute-0 podman[439091]: 2025-10-11 09:48:30.792087658 +0000 UTC m=+0.086031644 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct 11 09:48:30 compute-0 podman[439092]: 2025-10-11 09:48:30.863762029 +0000 UTC m=+0.145678347 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 09:48:31 compute-0 ceph-mon[74313]: pgmap v3218: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 09:48:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3219: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 09:48:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Oct 11 09:48:32 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Oct 11 09:48:32 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Oct 11 09:48:33 compute-0 ceph-mon[74313]: pgmap v3219: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 11 09:48:33 compute-0 ceph-mon[74313]: osdmap e302: 3 total, 3 up, 3 in
Oct 11 09:48:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3221: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 315 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Oct 11 09:48:34 compute-0 nova_compute[260935]: 2025-10-11 09:48:34.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:48:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:48:35 compute-0 ceph-mon[74313]: pgmap v3221: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 315 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Oct 11 09:48:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3222: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 315 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Oct 11 09:48:35 compute-0 nova_compute[260935]: 2025-10-11 09:48:35.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:48:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Oct 11 09:48:36 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Oct 11 09:48:36 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Oct 11 09:48:37 compute-0 ceph-mon[74313]: pgmap v3222: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 315 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Oct 11 09:48:37 compute-0 ceph-mon[74313]: osdmap e303: 3 total, 3 up, 3 in
Oct 11 09:48:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3224: 321 pgs: 321 active+clean; 295 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.9 KiB/s wr, 55 op/s
Oct 11 09:48:39 compute-0 ceph-mon[74313]: pgmap v3224: 321 pgs: 321 active+clean; 295 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.9 KiB/s wr, 55 op/s
Oct 11 09:48:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3225: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 11 09:48:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:48:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Oct 11 09:48:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Oct 11 09:48:39 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Oct 11 09:48:40 compute-0 nova_compute[260935]: 2025-10-11 09:48:40.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:48:40 compute-0 ceph-mon[74313]: pgmap v3225: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct 11 09:48:40 compute-0 ceph-mon[74313]: osdmap e304: 3 total, 3 up, 3 in
Oct 11 09:48:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3227: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 33 op/s
Oct 11 09:48:42 compute-0 ceph-mon[74313]: pgmap v3227: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 33 op/s
Oct 11 09:48:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3228: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 33 op/s
Oct 11 09:48:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:48:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Oct 11 09:48:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Oct 11 09:48:44 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Oct 11 09:48:44 compute-0 ceph-mon[74313]: pgmap v3228: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 33 op/s
Oct 11 09:48:44 compute-0 ceph-mon[74313]: osdmap e305: 3 total, 3 up, 3 in
Oct 11 09:48:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3230: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 638 B/s wr, 7 op/s
Oct 11 09:48:45 compute-0 nova_compute[260935]: 2025-10-11 09:48:45.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:48:46 compute-0 ceph-mon[74313]: pgmap v3230: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 638 B/s wr, 7 op/s
Oct 11 09:48:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3231: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:48 compute-0 ceph-mon[74313]: pgmap v3231: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3232: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:48:50 compute-0 nova_compute[260935]: 2025-10-11 09:48:50.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:48:50 compute-0 ceph-mon[74313]: pgmap v3232: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3233: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:51 compute-0 podman[439136]: 2025-10-11 09:48:51.811398379 +0000 UTC m=+0.094104160 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 09:48:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Oct 11 09:48:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Oct 11 09:48:51 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Oct 11 09:48:52 compute-0 ceph-mon[74313]: pgmap v3233: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:48:52 compute-0 ceph-mon[74313]: osdmap e306: 3 total, 3 up, 3 in
Oct 11 09:48:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3235: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 KiB/s wr, 17 op/s
Oct 11 09:48:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:48:54 compute-0 sudo[439156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:48:54 compute-0 sudo[439156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:48:54 compute-0 sudo[439156]: pam_unix(sudo:session): session closed for user root
Oct 11 09:48:54 compute-0 ceph-mon[74313]: pgmap v3235: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 KiB/s wr, 17 op/s
Oct 11 09:48:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:48:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:48:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:48:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:48:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:48:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:48:54 compute-0 sudo[439181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:48:54 compute-0 sudo[439181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:48:54 compute-0 sudo[439181]: pam_unix(sudo:session): session closed for user root
Oct 11 09:48:54 compute-0 sudo[439206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:48:54 compute-0 sudo[439206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:48:54 compute-0 sudo[439206]: pam_unix(sudo:session): session closed for user root
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:48:55
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'backups']
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:48:55 compute-0 sudo[439231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:48:55 compute-0 sudo[439231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3236: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 09:48:55 compute-0 nova_compute[260935]: 2025-10-11 09:48:55.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:48:55 compute-0 sudo[439231]: pam_unix(sudo:session): session closed for user root
Oct 11 09:48:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:48:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:48:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:48:55 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:48:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:48:55 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 994e123e-deaa-4124-9cd5-1d62f4091a52 does not exist
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ebd2b4d8-54d3-4b1a-af0a-24a9e0e69fb8 does not exist
Oct 11 09:48:55 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 19dbc0f7-65d7-401a-87d2-d0225e2f712a does not exist
Oct 11 09:48:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:48:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:48:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:48:55 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:48:55 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:48:55 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:48:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:48:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:48:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:48:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:48:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:48:55 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:48:55 compute-0 sudo[439287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:48:55 compute-0 sudo[439287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:48:55 compute-0 sudo[439287]: pam_unix(sudo:session): session closed for user root
Oct 11 09:48:56 compute-0 sudo[439312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:48:56 compute-0 sudo[439312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:48:56 compute-0 sudo[439312]: pam_unix(sudo:session): session closed for user root
Oct 11 09:48:56 compute-0 sudo[439337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:48:56 compute-0 sudo[439337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:48:56 compute-0 sudo[439337]: pam_unix(sudo:session): session closed for user root
Oct 11 09:48:56 compute-0 sudo[439362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:48:56 compute-0 sudo[439362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:48:56 compute-0 podman[439428]: 2025-10-11 09:48:56.655463413 +0000 UTC m=+0.075545860 container create e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carson, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 09:48:56 compute-0 nova_compute[260935]: 2025-10-11 09:48:56.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:48:56 compute-0 systemd[1]: Started libpod-conmon-e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4.scope.
Oct 11 09:48:56 compute-0 podman[439428]: 2025-10-11 09:48:56.623566268 +0000 UTC m=+0.043648725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:48:56 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:48:56 compute-0 podman[439428]: 2025-10-11 09:48:56.780663764 +0000 UTC m=+0.200746271 container init e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 09:48:56 compute-0 podman[439428]: 2025-10-11 09:48:56.79372016 +0000 UTC m=+0.213802607 container start e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:48:56 compute-0 podman[439428]: 2025-10-11 09:48:56.798193076 +0000 UTC m=+0.218275593 container attach e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 11 09:48:56 compute-0 serene_carson[439444]: 167 167
Oct 11 09:48:56 compute-0 systemd[1]: libpod-e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4.scope: Deactivated successfully.
Oct 11 09:48:56 compute-0 podman[439428]: 2025-10-11 09:48:56.803140535 +0000 UTC m=+0.223222952 container died e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carson, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 09:48:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6228eb02266966157ab3f44384072aa03a79e3fac4f746e0818473eaa277baef-merged.mount: Deactivated successfully.
Oct 11 09:48:56 compute-0 podman[439428]: 2025-10-11 09:48:56.862006636 +0000 UTC m=+0.282089053 container remove e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 09:48:56 compute-0 ceph-mon[74313]: pgmap v3236: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 09:48:56 compute-0 systemd[1]: libpod-conmon-e263c908c4b7d2dd8740ed622268b2e70563eb59fa4378f71c5adc267b292ce4.scope: Deactivated successfully.
Oct 11 09:48:56 compute-0 podman[439449]: 2025-10-11 09:48:56.935757284 +0000 UTC m=+0.092806544 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 09:48:57 compute-0 podman[439484]: 2025-10-11 09:48:57.115372082 +0000 UTC m=+0.073567064 container create daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:48:57 compute-0 systemd[1]: Started libpod-conmon-daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907.scope.
Oct 11 09:48:57 compute-0 podman[439484]: 2025-10-11 09:48:57.082887191 +0000 UTC m=+0.041082253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:48:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c991e904f226cc81de047f0ac2b1491fc266e9d0ab5e9c580634d76ebfa33e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c991e904f226cc81de047f0ac2b1491fc266e9d0ab5e9c580634d76ebfa33e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c991e904f226cc81de047f0ac2b1491fc266e9d0ab5e9c580634d76ebfa33e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c991e904f226cc81de047f0ac2b1491fc266e9d0ab5e9c580634d76ebfa33e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c991e904f226cc81de047f0ac2b1491fc266e9d0ab5e9c580634d76ebfa33e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:48:57 compute-0 podman[439484]: 2025-10-11 09:48:57.235746777 +0000 UTC m=+0.193941819 container init daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 09:48:57 compute-0 podman[439484]: 2025-10-11 09:48:57.251584221 +0000 UTC m=+0.209779223 container start daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:48:57 compute-0 podman[439484]: 2025-10-11 09:48:57.255855821 +0000 UTC m=+0.214050833 container attach daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nash, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:48:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3237: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 09:48:57 compute-0 nova_compute[260935]: 2025-10-11 09:48:57.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:48:57 compute-0 nova_compute[260935]: 2025-10-11 09:48:57.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:48:58 compute-0 recursing_nash[439500]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:48:58 compute-0 recursing_nash[439500]: --> relative data size: 1.0
Oct 11 09:48:58 compute-0 recursing_nash[439500]: --> All data devices are unavailable
Oct 11 09:48:58 compute-0 systemd[1]: libpod-daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907.scope: Deactivated successfully.
Oct 11 09:48:58 compute-0 podman[439529]: 2025-10-11 09:48:58.358300212 +0000 UTC m=+0.042225875 container died daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 09:48:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-17c991e904f226cc81de047f0ac2b1491fc266e9d0ab5e9c580634d76ebfa33e-merged.mount: Deactivated successfully.
Oct 11 09:48:58 compute-0 podman[439529]: 2025-10-11 09:48:58.404306123 +0000 UTC m=+0.088231776 container remove daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 09:48:58 compute-0 systemd[1]: libpod-conmon-daff9667fd675943495171c9a6ac207136a7f3eb9f0f7fb794f8563da8b33907.scope: Deactivated successfully.
Oct 11 09:48:58 compute-0 sudo[439362]: pam_unix(sudo:session): session closed for user root
Oct 11 09:48:58 compute-0 sudo[439544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:48:58 compute-0 sudo[439544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:48:58 compute-0 sudo[439544]: pam_unix(sudo:session): session closed for user root
Oct 11 09:48:58 compute-0 sudo[439569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:48:58 compute-0 sudo[439569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:48:58 compute-0 sudo[439569]: pam_unix(sudo:session): session closed for user root
Oct 11 09:48:58 compute-0 nova_compute[260935]: 2025-10-11 09:48:58.700 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:48:58 compute-0 nova_compute[260935]: 2025-10-11 09:48:58.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:48:58 compute-0 nova_compute[260935]: 2025-10-11 09:48:58.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:48:58 compute-0 sudo[439594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:48:58 compute-0 sudo[439594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:48:58 compute-0 sudo[439594]: pam_unix(sudo:session): session closed for user root
Oct 11 09:48:58 compute-0 sudo[439619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:48:58 compute-0 sudo[439619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:48:58 compute-0 ceph-mon[74313]: pgmap v3237: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 09:48:59 compute-0 nova_compute[260935]: 2025-10-11 09:48:59.201 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:48:59 compute-0 nova_compute[260935]: 2025-10-11 09:48:59.202 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:48:59 compute-0 nova_compute[260935]: 2025-10-11 09:48:59.202 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:48:59 compute-0 podman[439687]: 2025-10-11 09:48:59.235983319 +0000 UTC m=+0.072326850 container create 14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:48:59 compute-0 systemd[1]: Started libpod-conmon-14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11.scope.
Oct 11 09:48:59 compute-0 podman[439687]: 2025-10-11 09:48:59.206374239 +0000 UTC m=+0.042717760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:48:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:48:59 compute-0 podman[439687]: 2025-10-11 09:48:59.346368576 +0000 UTC m=+0.182712147 container init 14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:48:59 compute-0 podman[439687]: 2025-10-11 09:48:59.353588608 +0000 UTC m=+0.189932129 container start 14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 09:48:59 compute-0 podman[439687]: 2025-10-11 09:48:59.35757077 +0000 UTC m=+0.193914301 container attach 14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:48:59 compute-0 dreamy_noyce[439703]: 167 167
Oct 11 09:48:59 compute-0 systemd[1]: libpod-14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11.scope: Deactivated successfully.
Oct 11 09:48:59 compute-0 podman[439687]: 2025-10-11 09:48:59.363083664 +0000 UTC m=+0.199427155 container died 14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:48:59 compute-0 nova_compute[260935]: 2025-10-11 09:48:59.390 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:48:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd827c8ed89b4c273506e054a78c2819a832e41d7419d39c7cb265c0637b3772-merged.mount: Deactivated successfully.
Oct 11 09:48:59 compute-0 podman[439687]: 2025-10-11 09:48:59.415763292 +0000 UTC m=+0.252106803 container remove 14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 11 09:48:59 compute-0 systemd[1]: libpod-conmon-14be6388101e3f234699bd30cbb908931a168616c5375712d947b435f9b6ce11.scope: Deactivated successfully.
Oct 11 09:48:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3238: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 09:48:59 compute-0 podman[439726]: 2025-10-11 09:48:59.687980038 +0000 UTC m=+0.076247920 container create d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_albattani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:48:59 compute-0 nova_compute[260935]: 2025-10-11 09:48:59.700 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:48:59 compute-0 nova_compute[260935]: 2025-10-11 09:48:59.726 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:48:59 compute-0 nova_compute[260935]: 2025-10-11 09:48:59.727 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:48:59 compute-0 nova_compute[260935]: 2025-10-11 09:48:59.728 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:48:59 compute-0 systemd[1]: Started libpod-conmon-d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4.scope.
Oct 11 09:48:59 compute-0 podman[439726]: 2025-10-11 09:48:59.662086051 +0000 UTC m=+0.050353923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:48:59 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46bace8b7b0cfa8ecea2650ef5e694c34da25821a2402f7e4d1ab14d543a081/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46bace8b7b0cfa8ecea2650ef5e694c34da25821a2402f7e4d1ab14d543a081/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46bace8b7b0cfa8ecea2650ef5e694c34da25821a2402f7e4d1ab14d543a081/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:48:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46bace8b7b0cfa8ecea2650ef5e694c34da25821a2402f7e4d1ab14d543a081/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:48:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:48:59 compute-0 podman[439726]: 2025-10-11 09:48:59.816081071 +0000 UTC m=+0.204349003 container init d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 09:48:59 compute-0 podman[439726]: 2025-10-11 09:48:59.82603808 +0000 UTC m=+0.214305962 container start d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_albattani, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:48:59 compute-0 podman[439726]: 2025-10-11 09:48:59.830066473 +0000 UTC m=+0.218334355 container attach d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_albattani, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 09:49:00 compute-0 nova_compute[260935]: 2025-10-11 09:49:00.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:49:00 compute-0 exciting_albattani[439742]: {
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:     "0": [
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:         {
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "devices": [
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "/dev/loop3"
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             ],
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "lv_name": "ceph_lv0",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "lv_size": "21470642176",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "name": "ceph_lv0",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "tags": {
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.cluster_name": "ceph",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.crush_device_class": "",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.encrypted": "0",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.osd_id": "0",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.type": "block",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.vdo": "0"
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             },
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "type": "block",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "vg_name": "ceph_vg0"
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:         }
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:     ],
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:     "1": [
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:         {
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "devices": [
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "/dev/loop4"
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             ],
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "lv_name": "ceph_lv1",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "lv_size": "21470642176",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "name": "ceph_lv1",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "tags": {
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.cluster_name": "ceph",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.crush_device_class": "",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.encrypted": "0",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.osd_id": "1",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.type": "block",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.vdo": "0"
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             },
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "type": "block",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "vg_name": "ceph_vg1"
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:         }
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:     ],
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:     "2": [
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:         {
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "devices": [
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "/dev/loop5"
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             ],
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "lv_name": "ceph_lv2",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "lv_size": "21470642176",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "name": "ceph_lv2",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "tags": {
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.cluster_name": "ceph",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.crush_device_class": "",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.encrypted": "0",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.osd_id": "2",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.type": "block",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:                 "ceph.vdo": "0"
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             },
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "type": "block",
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:             "vg_name": "ceph_vg2"
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:         }
Oct 11 09:49:00 compute-0 exciting_albattani[439742]:     ]
Oct 11 09:49:00 compute-0 exciting_albattani[439742]: }
Oct 11 09:49:00 compute-0 systemd[1]: libpod-d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4.scope: Deactivated successfully.
Oct 11 09:49:00 compute-0 podman[439726]: 2025-10-11 09:49:00.731804084 +0000 UTC m=+1.120071976 container died d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 09:49:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a46bace8b7b0cfa8ecea2650ef5e694c34da25821a2402f7e4d1ab14d543a081-merged.mount: Deactivated successfully.
Oct 11 09:49:00 compute-0 podman[439726]: 2025-10-11 09:49:00.821391086 +0000 UTC m=+1.209658948 container remove d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_albattani, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:49:00 compute-0 systemd[1]: libpod-conmon-d2b6e20bd793782022793ad74fa0c883585de4bba31d89196e1a5d41863effc4.scope: Deactivated successfully.
Oct 11 09:49:00 compute-0 sudo[439619]: pam_unix(sudo:session): session closed for user root
Oct 11 09:49:00 compute-0 ceph-mon[74313]: pgmap v3238: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 09:49:00 compute-0 sudo[439766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:49:00 compute-0 sudo[439766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:49:00 compute-0 sudo[439766]: pam_unix(sudo:session): session closed for user root
Oct 11 09:49:00 compute-0 podman[439765]: 2025-10-11 09:49:00.981654251 +0000 UTC m=+0.103801162 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 11 09:49:01 compute-0 sudo[439809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:49:01 compute-0 sudo[439809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:49:01 compute-0 sudo[439809]: pam_unix(sudo:session): session closed for user root
Oct 11 09:49:01 compute-0 sudo[439855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:49:01 compute-0 podman[439805]: 2025-10-11 09:49:01.155514098 +0000 UTC m=+0.162053167 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:49:01 compute-0 sudo[439855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:49:01 compute-0 sudo[439855]: pam_unix(sudo:session): session closed for user root
Oct 11 09:49:01 compute-0 sudo[439886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:49:01 compute-0 sudo[439886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:49:01 compute-0 podman[439953]: 2025-10-11 09:49:01.550422184 +0000 UTC m=+0.049048637 container create 661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:49:01 compute-0 systemd[1]: Started libpod-conmon-661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042.scope.
Oct 11 09:49:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3239: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 09:49:01 compute-0 podman[439953]: 2025-10-11 09:49:01.530243058 +0000 UTC m=+0.028869601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:49:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:49:01 compute-0 podman[439953]: 2025-10-11 09:49:01.667620011 +0000 UTC m=+0.166246504 container init 661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 09:49:01 compute-0 podman[439953]: 2025-10-11 09:49:01.677473797 +0000 UTC m=+0.176100260 container start 661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 11 09:49:01 compute-0 podman[439953]: 2025-10-11 09:49:01.682020395 +0000 UTC m=+0.180646918 container attach 661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:49:01 compute-0 reverent_ritchie[439969]: 167 167
Oct 11 09:49:01 compute-0 systemd[1]: libpod-661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042.scope: Deactivated successfully.
Oct 11 09:49:01 compute-0 podman[439953]: 2025-10-11 09:49:01.689912956 +0000 UTC m=+0.188539449 container died 661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 11 09:49:01 compute-0 sshd-session[439863]: Invalid user nginx from 43.157.67.116 port 48414
Oct 11 09:49:01 compute-0 sshd-session[439863]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:49:01 compute-0 sshd-session[439863]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.157.67.116
Oct 11 09:49:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-50fb790f2b6eddbb76614e4e1b91f53a6b528e7c7c0f3dc651fdaa1fcbb071e2-merged.mount: Deactivated successfully.
Oct 11 09:49:01 compute-0 podman[439953]: 2025-10-11 09:49:01.727951493 +0000 UTC m=+0.226577956 container remove 661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ritchie, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 09:49:01 compute-0 systemd[1]: libpod-conmon-661fe68c3ae8585b6eca0094862e35e8edc596a26d65521fbbe5cfd6e9332042.scope: Deactivated successfully.
Oct 11 09:49:01 compute-0 podman[439992]: 2025-10-11 09:49:01.950381722 +0000 UTC m=+0.058813131 container create 0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:49:02 compute-0 systemd[1]: Started libpod-conmon-0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a.scope.
Oct 11 09:49:02 compute-0 podman[439992]: 2025-10-11 09:49:01.920338009 +0000 UTC m=+0.028769478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:49:02 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae628d6b684057036b030a98f8cf488f061fd16277022324e0cefd12558e788/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae628d6b684057036b030a98f8cf488f061fd16277022324e0cefd12558e788/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae628d6b684057036b030a98f8cf488f061fd16277022324e0cefd12558e788/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae628d6b684057036b030a98f8cf488f061fd16277022324e0cefd12558e788/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:49:02 compute-0 podman[439992]: 2025-10-11 09:49:02.068766522 +0000 UTC m=+0.177197931 container init 0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 09:49:02 compute-0 podman[439992]: 2025-10-11 09:49:02.082737454 +0000 UTC m=+0.191168833 container start 0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:49:02 compute-0 podman[439992]: 2025-10-11 09:49:02.086380306 +0000 UTC m=+0.194811755 container attach 0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 09:49:02 compute-0 nova_compute[260935]: 2025-10-11 09:49:02.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:49:02 compute-0 nova_compute[260935]: 2025-10-11 09:49:02.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:49:02 compute-0 nova_compute[260935]: 2025-10-11 09:49:02.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:49:02 compute-0 nova_compute[260935]: 2025-10-11 09:49:02.739 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:49:02 compute-0 nova_compute[260935]: 2025-10-11 09:49:02.739 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:49:02 compute-0 nova_compute[260935]: 2025-10-11 09:49:02.740 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:49:02 compute-0 ceph-mon[74313]: pgmap v3239: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 11 09:49:03 compute-0 epic_meitner[440008]: {
Oct 11 09:49:03 compute-0 epic_meitner[440008]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:49:03 compute-0 epic_meitner[440008]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:49:03 compute-0 epic_meitner[440008]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:49:03 compute-0 epic_meitner[440008]:         "osd_id": 2,
Oct 11 09:49:03 compute-0 epic_meitner[440008]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:49:03 compute-0 epic_meitner[440008]:         "type": "bluestore"
Oct 11 09:49:03 compute-0 epic_meitner[440008]:     },
Oct 11 09:49:03 compute-0 epic_meitner[440008]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:49:03 compute-0 epic_meitner[440008]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:49:03 compute-0 epic_meitner[440008]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:49:03 compute-0 epic_meitner[440008]:         "osd_id": 0,
Oct 11 09:49:03 compute-0 epic_meitner[440008]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:49:03 compute-0 epic_meitner[440008]:         "type": "bluestore"
Oct 11 09:49:03 compute-0 epic_meitner[440008]:     },
Oct 11 09:49:03 compute-0 epic_meitner[440008]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:49:03 compute-0 epic_meitner[440008]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:49:03 compute-0 epic_meitner[440008]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:49:03 compute-0 epic_meitner[440008]:         "osd_id": 1,
Oct 11 09:49:03 compute-0 epic_meitner[440008]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:49:03 compute-0 epic_meitner[440008]:         "type": "bluestore"
Oct 11 09:49:03 compute-0 epic_meitner[440008]:     }
Oct 11 09:49:03 compute-0 epic_meitner[440008]: }
Oct 11 09:49:03 compute-0 systemd[1]: libpod-0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a.scope: Deactivated successfully.
Oct 11 09:49:03 compute-0 conmon[440008]: conmon 0ba36d1101f17ab2385a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a.scope/container/memory.events
Oct 11 09:49:03 compute-0 podman[439992]: 2025-10-11 09:49:03.056109265 +0000 UTC m=+1.164540684 container died 0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:49:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-cae628d6b684057036b030a98f8cf488f061fd16277022324e0cefd12558e788-merged.mount: Deactivated successfully.
Oct 11 09:49:03 compute-0 podman[439992]: 2025-10-11 09:49:03.137289522 +0000 UTC m=+1.245720941 container remove 0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 09:49:03 compute-0 systemd[1]: libpod-conmon-0ba36d1101f17ab2385a2012b44afadf274045a8d0d3b15c2576f0a12f2d648a.scope: Deactivated successfully.
Oct 11 09:49:03 compute-0 sudo[439886]: pam_unix(sudo:session): session closed for user root
Oct 11 09:49:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:49:03 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:49:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:49:03 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:49:03 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev e1e7815e-7ad2-4242-99b0-242c0b0228c9 does not exist
Oct 11 09:49:03 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a65e1d47-7648-4214-b9e3-d5d97094ff42 does not exist
Oct 11 09:49:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:49:03 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/727297108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.282 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:49:03 compute-0 sudo[440074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:49:03 compute-0 sudo[440074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:49:03 compute-0 sudo[440074]: pam_unix(sudo:session): session closed for user root
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.395 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.397 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.397 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.401 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.401 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.405 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.405 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:49:03 compute-0 sudo[440101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:49:03 compute-0 sudo[440101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:49:03 compute-0 sudo[440101]: pam_unix(sudo:session): session closed for user root
Oct 11 09:49:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3240: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.8 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.675 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.677 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2757MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.678 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.678 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:49:03 compute-0 sshd-session[439863]: Failed password for invalid user nginx from 43.157.67.116 port 48414 ssh2
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.816 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.817 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.817 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.818 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.818 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:49:03 compute-0 nova_compute[260935]: 2025-10-11 09:49:03.918 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:49:04 compute-0 sshd-session[439863]: Received disconnect from 43.157.67.116 port 48414:11: Bye Bye [preauth]
Oct 11 09:49:04 compute-0 sshd-session[439863]: Disconnected from invalid user nginx 43.157.67.116 port 48414 [preauth]
Oct 11 09:49:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:49:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:49:04 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/727297108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:49:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:49:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/738161503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:49:04 compute-0 nova_compute[260935]: 2025-10-11 09:49:04.395 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:49:04 compute-0 nova_compute[260935]: 2025-10-11 09:49:04.402 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:49:04 compute-0 nova_compute[260935]: 2025-10-11 09:49:04.429 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:49:04 compute-0 nova_compute[260935]: 2025-10-11 09:49:04.432 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:49:04 compute-0 nova_compute[260935]: 2025-10-11 09:49:04.433 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:49:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:49:05 compute-0 ceph-mon[74313]: pgmap v3240: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.8 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Oct 11 09:49:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/738161503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3241: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:05 compute-0 nova_compute[260935]: 2025-10-11 09:49:05.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:49:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:49:07 compute-0 ceph-mon[74313]: pgmap v3241: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3242: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:08 compute-0 nova_compute[260935]: 2025-10-11 09:49:08.435 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:49:08 compute-0 nova_compute[260935]: 2025-10-11 09:49:08.435 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:49:09 compute-0 ceph-mon[74313]: pgmap v3242: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3243: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:49:10 compute-0 nova_compute[260935]: 2025-10-11 09:49:10.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:49:10 compute-0 nova_compute[260935]: 2025-10-11 09:49:10.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:49:10 compute-0 nova_compute[260935]: 2025-10-11 09:49:10.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:49:10 compute-0 nova_compute[260935]: 2025-10-11 09:49:10.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:49:10 compute-0 nova_compute[260935]: 2025-10-11 09:49:10.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:49:10 compute-0 nova_compute[260935]: 2025-10-11 09:49:10.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:49:11 compute-0 ceph-mon[74313]: pgmap v3243: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3244: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:13 compute-0 ceph-mon[74313]: pgmap v3244: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3245: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:49:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:49:15.245 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:49:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:49:15.246 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:49:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:49:15.246 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:49:15 compute-0 ceph-mon[74313]: pgmap v3245: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3246: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:15 compute-0 nova_compute[260935]: 2025-10-11 09:49:15.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:49:15 compute-0 nova_compute[260935]: 2025-10-11 09:49:15.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:49:17 compute-0 ceph-mon[74313]: pgmap v3246: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3247: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:17 compute-0 nova_compute[260935]: 2025-10-11 09:49:17.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:49:17 compute-0 nova_compute[260935]: 2025-10-11 09:49:17.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 09:49:19 compute-0 ceph-mon[74313]: pgmap v3247: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3248: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:49:20 compute-0 nova_compute[260935]: 2025-10-11 09:49:20.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:49:21 compute-0 ceph-mon[74313]: pgmap v3248: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3249: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:22 compute-0 podman[440148]: 2025-10-11 09:49:22.830706094 +0000 UTC m=+0.123423383 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 09:49:23 compute-0 ceph-mon[74313]: pgmap v3249: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3250: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:24 compute-0 sshd-session[440167]: Invalid user tianyu from 155.4.244.179 port 17482
Oct 11 09:49:24 compute-0 sshd-session[440167]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:49:24 compute-0 sshd-session[440167]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=155.4.244.179
Oct 11 09:49:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:49:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:49:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:49:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:49:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:49:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:49:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:49:25 compute-0 ceph-mon[74313]: pgmap v3250: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3251: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:25 compute-0 nova_compute[260935]: 2025-10-11 09:49:25.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:49:25 compute-0 nova_compute[260935]: 2025-10-11 09:49:25.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:49:26 compute-0 sshd-session[440167]: Failed password for invalid user tianyu from 155.4.244.179 port 17482 ssh2
Oct 11 09:49:26 compute-0 sshd-session[440167]: Received disconnect from 155.4.244.179 port 17482:11: Bye Bye [preauth]
Oct 11 09:49:26 compute-0 sshd-session[440167]: Disconnected from invalid user tianyu 155.4.244.179 port 17482 [preauth]
Oct 11 09:49:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:49:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3362886063' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:49:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:49:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3362886063' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:49:27 compute-0 ceph-mon[74313]: pgmap v3251: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3362886063' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:49:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3362886063' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:49:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3252: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:27 compute-0 nova_compute[260935]: 2025-10-11 09:49:27.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:49:27 compute-0 podman[440169]: 2025-10-11 09:49:27.805619307 +0000 UTC m=+0.098344250 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:49:29 compute-0 ceph-mon[74313]: pgmap v3252: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3253: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:49:30 compute-0 nova_compute[260935]: 2025-10-11 09:49:30.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:49:30 compute-0 nova_compute[260935]: 2025-10-11 09:49:30.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:49:30 compute-0 nova_compute[260935]: 2025-10-11 09:49:30.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:49:30 compute-0 nova_compute[260935]: 2025-10-11 09:49:30.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:49:30 compute-0 nova_compute[260935]: 2025-10-11 09:49:30.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:49:30 compute-0 nova_compute[260935]: 2025-10-11 09:49:30.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:49:31 compute-0 ceph-mon[74313]: pgmap v3253: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:31 compute-0 podman[440189]: 2025-10-11 09:49:31.456205777 +0000 UTC m=+0.083154154 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 09:49:31 compute-0 podman[440190]: 2025-10-11 09:49:31.483349638 +0000 UTC m=+0.104287886 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 09:49:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3254: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:33 compute-0 ceph-mon[74313]: pgmap v3254: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3255: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:34 compute-0 nova_compute[260935]: 2025-10-11 09:49:34.721 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:49:34 compute-0 nova_compute[260935]: 2025-10-11 09:49:34.722 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:49:34 compute-0 nova_compute[260935]: 2025-10-11 09:49:34.722 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 09:49:34 compute-0 nova_compute[260935]: 2025-10-11 09:49:34.751 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 09:49:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:49:35 compute-0 ceph-mon[74313]: pgmap v3255: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3256: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:35 compute-0 nova_compute[260935]: 2025-10-11 09:49:35.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:49:35 compute-0 nova_compute[260935]: 2025-10-11 09:49:35.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:49:37 compute-0 ceph-mon[74313]: pgmap v3256: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3257: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:39 compute-0 ceph-mon[74313]: pgmap v3257: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.379088) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176179379153, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1874, "num_deletes": 256, "total_data_size": 3092374, "memory_usage": 3139008, "flush_reason": "Manual Compaction"}
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176179399957, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 3028674, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66160, "largest_seqno": 68033, "table_properties": {"data_size": 3019861, "index_size": 5498, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17811, "raw_average_key_size": 20, "raw_value_size": 3002412, "raw_average_value_size": 3439, "num_data_blocks": 244, "num_entries": 873, "num_filter_entries": 873, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760175980, "oldest_key_time": 1760175980, "file_creation_time": 1760176179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 20953 microseconds, and 12078 cpu microseconds.
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.400036) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 3028674 bytes OK
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.400074) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.402184) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.402208) EVENT_LOG_v1 {"time_micros": 1760176179402199, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.402240) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 3084376, prev total WAL file size 3084376, number of live WAL files 2.
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.404130) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2957KB)], [158(9624KB)]
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176179404185, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 12883917, "oldest_snapshot_seqno": -1}
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8528 keys, 11146509 bytes, temperature: kUnknown
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176179464424, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11146509, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11090651, "index_size": 33409, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21381, "raw_key_size": 223567, "raw_average_key_size": 26, "raw_value_size": 10939723, "raw_average_value_size": 1282, "num_data_blocks": 1297, "num_entries": 8528, "num_filter_entries": 8528, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760176179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.464871) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11146509 bytes
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.466775) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.4 rd, 184.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 9.4 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(7.9) write-amplify(3.7) OK, records in: 9053, records dropped: 525 output_compression: NoCompression
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.466805) EVENT_LOG_v1 {"time_micros": 1760176179466791, "job": 98, "event": "compaction_finished", "compaction_time_micros": 60362, "compaction_time_cpu_micros": 40385, "output_level": 6, "num_output_files": 1, "total_output_size": 11146509, "num_input_records": 9053, "num_output_records": 8528, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176179467847, "job": 98, "event": "table_file_deletion", "file_number": 160}
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176179471214, "job": 98, "event": "table_file_deletion", "file_number": 158}
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.403992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.471516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.471529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.471533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.471538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:49:39 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:49:39.471542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:49:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3258: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:49:40 compute-0 nova_compute[260935]: 2025-10-11 09:49:40.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:49:41 compute-0 ceph-mon[74313]: pgmap v3258: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3259: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:43 compute-0 ceph-mon[74313]: pgmap v3259: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3260: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:49:45 compute-0 ceph-mon[74313]: pgmap v3260: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3261: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:45 compute-0 nova_compute[260935]: 2025-10-11 09:49:45.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:49:45 compute-0 nova_compute[260935]: 2025-10-11 09:49:45.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:49:45 compute-0 nova_compute[260935]: 2025-10-11 09:49:45.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:49:45 compute-0 nova_compute[260935]: 2025-10-11 09:49:45.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:49:45 compute-0 nova_compute[260935]: 2025-10-11 09:49:45.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:49:45 compute-0 nova_compute[260935]: 2025-10-11 09:49:45.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:49:47 compute-0 ceph-mon[74313]: pgmap v3261: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3262: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:49 compute-0 ceph-mon[74313]: pgmap v3262: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3263: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:49:50 compute-0 nova_compute[260935]: 2025-10-11 09:49:50.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:49:51 compute-0 ceph-mon[74313]: pgmap v3263: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3264: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:53 compute-0 ceph-mon[74313]: pgmap v3264: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3265: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:53 compute-0 podman[440237]: 2025-10-11 09:49:53.809733365 +0000 UTC m=+0.096725944 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 11 09:49:54 compute-0 ceph-mon[74313]: pgmap v3265: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:49:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:49:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:49:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:49:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:49:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:49:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:49:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:49:55
Oct 11 09:49:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:49:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:49:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'volumes', 'images', 'default.rgw.log', '.mgr']
Oct 11 09:49:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:49:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3266: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:49:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:49:55 compute-0 nova_compute[260935]: 2025-10-11 09:49:55.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:49:56 compute-0 ceph-mon[74313]: pgmap v3266: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3267: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:58 compute-0 nova_compute[260935]: 2025-10-11 09:49:58.732 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:49:58 compute-0 nova_compute[260935]: 2025-10-11 09:49:58.732 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:49:58 compute-0 nova_compute[260935]: 2025-10-11 09:49:58.733 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:49:58 compute-0 ceph-mon[74313]: pgmap v3267: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:58 compute-0 podman[440257]: 2025-10-11 09:49:58.783204918 +0000 UTC m=+0.084560623 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:49:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3268: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:49:59 compute-0 nova_compute[260935]: 2025-10-11 09:49:59.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:49:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:50:00 compute-0 nova_compute[260935]: 2025-10-11 09:50:00.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:50:00 compute-0 nova_compute[260935]: 2025-10-11 09:50:00.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:50:00 compute-0 nova_compute[260935]: 2025-10-11 09:50:00.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:50:00 compute-0 nova_compute[260935]: 2025-10-11 09:50:00.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:50:00 compute-0 ceph-mon[74313]: pgmap v3268: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3269: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:01 compute-0 podman[440279]: 2025-10-11 09:50:01.796121903 +0000 UTC m=+0.083832992 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 09:50:01 compute-0 podman[440280]: 2025-10-11 09:50:01.838065519 +0000 UTC m=+0.125901572 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 11 09:50:01 compute-0 nova_compute[260935]: 2025-10-11 09:50:01.984 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:50:01 compute-0 nova_compute[260935]: 2025-10-11 09:50:01.984 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:50:01 compute-0 nova_compute[260935]: 2025-10-11 09:50:01.985 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:50:02 compute-0 ceph-mon[74313]: pgmap v3269: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:02 compute-0 nova_compute[260935]: 2025-10-11 09:50:02.975 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:50:03 compute-0 sudo[440324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:50:03 compute-0 sudo[440324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:03 compute-0 sudo[440324]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:03 compute-0 sudo[440349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:50:03 compute-0 sudo[440349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:03 compute-0 sudo[440349]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3270: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:03 compute-0 sudo[440374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:50:03 compute-0 sudo[440374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:03 compute-0 sudo[440374]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:03 compute-0 sudo[440399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:50:03 compute-0 sudo[440399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:04 compute-0 nova_compute[260935]: 2025-10-11 09:50:04.068 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:50:04 compute-0 nova_compute[260935]: 2025-10-11 09:50:04.091 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:50:04 compute-0 nova_compute[260935]: 2025-10-11 09:50:04.091 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:50:04 compute-0 sudo[440399]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:50:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:50:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:50:04 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:50:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:50:04 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:50:04 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev c0d807ee-375b-4ffc-aad9-0a8ad0116b00 does not exist
Oct 11 09:50:04 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 7b95ad51-aef0-499b-a0fc-9226c87d09b5 does not exist
Oct 11 09:50:04 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev cb1e8f90-a617-41ec-842a-2eb2eda4d23b does not exist
Oct 11 09:50:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:50:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:50:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:50:04 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:50:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:50:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:50:04 compute-0 sudo[440454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:50:04 compute-0 sudo[440454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:04 compute-0 sudo[440454]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:04 compute-0 sudo[440479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:50:04 compute-0 sudo[440479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:04 compute-0 sudo[440479]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:04 compute-0 nova_compute[260935]: 2025-10-11 09:50:04.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:50:04 compute-0 nova_compute[260935]: 2025-10-11 09:50:04.725 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:50:04 compute-0 nova_compute[260935]: 2025-10-11 09:50:04.726 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:50:04 compute-0 nova_compute[260935]: 2025-10-11 09:50:04.726 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:50:04 compute-0 nova_compute[260935]: 2025-10-11 09:50:04.726 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:50:04 compute-0 nova_compute[260935]: 2025-10-11 09:50:04.727 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:50:04 compute-0 sudo[440504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:50:04 compute-0 sudo[440504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:04 compute-0 ceph-mon[74313]: pgmap v3270: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:50:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:50:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:50:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:50:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:50:04 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:50:04 compute-0 sudo[440504]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:50:04 compute-0 sudo[440530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:50:04 compute-0 sudo[440530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:50:05 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3962598057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.210 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.305 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.306 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.306 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.310 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.311 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.316 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.317 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:50:05 compute-0 podman[440617]: 2025-10-11 09:50:05.323352173 +0000 UTC m=+0.067929377 container create ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_beaver, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 09:50:05 compute-0 systemd[1]: Started libpod-conmon-ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb.scope.
Oct 11 09:50:05 compute-0 podman[440617]: 2025-10-11 09:50:05.293502645 +0000 UTC m=+0.038079919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:50:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:50:05 compute-0 podman[440617]: 2025-10-11 09:50:05.436001142 +0000 UTC m=+0.180578356 container init ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_beaver, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:50:05 compute-0 podman[440617]: 2025-10-11 09:50:05.447714721 +0000 UTC m=+0.192291915 container start ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_beaver, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 09:50:05 compute-0 podman[440617]: 2025-10-11 09:50:05.454198473 +0000 UTC m=+0.198775667 container attach ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 09:50:05 compute-0 condescending_beaver[440634]: 167 167
Oct 11 09:50:05 compute-0 systemd[1]: libpod-ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb.scope: Deactivated successfully.
Oct 11 09:50:05 compute-0 podman[440617]: 2025-10-11 09:50:05.45658528 +0000 UTC m=+0.201162524 container died ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_beaver, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 09:50:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e19b3c04977b61bd9f7ef083a7db9fde3c07fb6bf96dcb18f2036f75cd303e4-merged.mount: Deactivated successfully.
Oct 11 09:50:05 compute-0 podman[440617]: 2025-10-11 09:50:05.532874169 +0000 UTC m=+0.277451363 container remove ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_beaver, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:50:05 compute-0 systemd[1]: libpod-conmon-ed84ce5ca7714f154b7426698b99f14f2631dc27188fe39d46c07ca5537170fb.scope: Deactivated successfully.
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.573 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.576 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2798MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.576 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.577 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.662 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.663 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.663 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.664 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.664 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3271: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.683 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.705 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.705 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.721 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 09:50:05 compute-0 podman[440657]: 2025-10-11 09:50:05.733774264 +0000 UTC m=+0.059080468 container create fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.741 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:50:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:50:05 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3962598057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:50:05 compute-0 systemd[1]: Started libpod-conmon-fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e.scope.
Oct 11 09:50:05 compute-0 podman[440657]: 2025-10-11 09:50:05.701702934 +0000 UTC m=+0.027009058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:50:05 compute-0 nova_compute[260935]: 2025-10-11 09:50:05.804 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:50:05 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95c64881e8865f811e67f9c4eb9d428857c359f54fec2d8dfd4092eab7ba85a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95c64881e8865f811e67f9c4eb9d428857c359f54fec2d8dfd4092eab7ba85a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95c64881e8865f811e67f9c4eb9d428857c359f54fec2d8dfd4092eab7ba85a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95c64881e8865f811e67f9c4eb9d428857c359f54fec2d8dfd4092eab7ba85a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95c64881e8865f811e67f9c4eb9d428857c359f54fec2d8dfd4092eab7ba85a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:50:05 compute-0 podman[440657]: 2025-10-11 09:50:05.836220067 +0000 UTC m=+0.161526241 container init fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:50:05 compute-0 podman[440657]: 2025-10-11 09:50:05.85556175 +0000 UTC m=+0.180867834 container start fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct 11 09:50:05 compute-0 podman[440657]: 2025-10-11 09:50:05.859191892 +0000 UTC m=+0.184498066 container attach fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 09:50:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:50:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2231980031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:50:06 compute-0 nova_compute[260935]: 2025-10-11 09:50:06.289 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:50:06 compute-0 nova_compute[260935]: 2025-10-11 09:50:06.296 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:50:06 compute-0 nova_compute[260935]: 2025-10-11 09:50:06.312 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:50:06 compute-0 nova_compute[260935]: 2025-10-11 09:50:06.314 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:50:06 compute-0 nova_compute[260935]: 2025-10-11 09:50:06.314 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:50:06 compute-0 ceph-mon[74313]: pgmap v3271: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:06 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2231980031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:50:06 compute-0 magical_bouman[440674]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:50:06 compute-0 magical_bouman[440674]: --> relative data size: 1.0
Oct 11 09:50:06 compute-0 magical_bouman[440674]: --> All data devices are unavailable
Oct 11 09:50:07 compute-0 systemd[1]: libpod-fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e.scope: Deactivated successfully.
Oct 11 09:50:07 compute-0 podman[440657]: 2025-10-11 09:50:07.010676248 +0000 UTC m=+1.335982352 container died fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:50:07 compute-0 systemd[1]: libpod-fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e.scope: Consumed 1.103s CPU time.
Oct 11 09:50:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c95c64881e8865f811e67f9c4eb9d428857c359f54fec2d8dfd4092eab7ba85a-merged.mount: Deactivated successfully.
Oct 11 09:50:07 compute-0 podman[440657]: 2025-10-11 09:50:07.081897706 +0000 UTC m=+1.407203780 container remove fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 09:50:07 compute-0 systemd[1]: libpod-conmon-fa32d7723b1be13ac00239e2bacc503c68ab80e36f59161a153ebb415331da4e.scope: Deactivated successfully.
Oct 11 09:50:07 compute-0 sudo[440530]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:07 compute-0 sudo[440737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:50:07 compute-0 sudo[440737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:07 compute-0 sudo[440737]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:07 compute-0 sudo[440762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:50:07 compute-0 sudo[440762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:07 compute-0 sudo[440762]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:07 compute-0 sudo[440787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:50:07 compute-0 sudo[440787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:07 compute-0 sudo[440787]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:07 compute-0 sudo[440812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:50:07 compute-0 sudo[440812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:50:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 14K writes, 68K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1382 writes, 6264 keys, 1382 commit groups, 1.0 writes per commit group, ingest: 8.96 MB, 0.01 MB/s
                                           Interval WAL: 1382 writes, 1382 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     74.5      1.12              0.34        49    0.023       0      0       0.0       0.0
                                             L6      1/0   10.63 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.8    155.0    131.3      3.06              1.60        48    0.064    316K    25K       0.0       0.0
                                            Sum      1/0   10.63 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.8    113.6    116.1      4.17              1.93        97    0.043    316K    25K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.2    151.4    155.0      0.37              0.21        10    0.037     43K   2578       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    155.0    131.3      3.06              1.60        48    0.064    316K    25K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     74.8      1.11              0.34        48    0.023       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.081, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.47 GB write, 0.08 MB/s write, 0.46 GB read, 0.08 MB/s read, 4.2 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 304.00 MB usage: 54.56 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000506 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3547,52.30 MB,17.2031%) FilterBlock(98,887.42 KB,0.285073%) IndexBlock(98,1.40 MB,0.459817%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 09:50:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3272: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:07 compute-0 podman[440875]: 2025-10-11 09:50:07.841054518 +0000 UTC m=+0.050139767 container create 9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 09:50:07 compute-0 systemd[1]: Started libpod-conmon-9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036.scope.
Oct 11 09:50:07 compute-0 podman[440875]: 2025-10-11 09:50:07.822642432 +0000 UTC m=+0.031727721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:50:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:50:07 compute-0 podman[440875]: 2025-10-11 09:50:07.952551895 +0000 UTC m=+0.161637234 container init 9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:50:07 compute-0 podman[440875]: 2025-10-11 09:50:07.964887531 +0000 UTC m=+0.173972790 container start 9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 09:50:07 compute-0 podman[440875]: 2025-10-11 09:50:07.968352119 +0000 UTC m=+0.177437458 container attach 9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:50:07 compute-0 silly_merkle[440891]: 167 167
Oct 11 09:50:07 compute-0 systemd[1]: libpod-9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036.scope: Deactivated successfully.
Oct 11 09:50:07 compute-0 podman[440875]: 2025-10-11 09:50:07.970003255 +0000 UTC m=+0.179088514 container died 9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 09:50:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ff9747bc6ffbe31b7265b2c5253b742bc222170d7a8209896fbb82abac5700d-merged.mount: Deactivated successfully.
Oct 11 09:50:08 compute-0 podman[440875]: 2025-10-11 09:50:08.005791999 +0000 UTC m=+0.214877248 container remove 9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:50:08 compute-0 systemd[1]: libpod-conmon-9f737dd6249a6a6d6aefe67ccff30dc50278d4004bc210a6f3fd8803b50d6036.scope: Deactivated successfully.
Oct 11 09:50:08 compute-0 podman[440915]: 2025-10-11 09:50:08.22796325 +0000 UTC m=+0.070735225 container create 6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:50:08 compute-0 systemd[1]: Started libpod-conmon-6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e.scope.
Oct 11 09:50:08 compute-0 podman[440915]: 2025-10-11 09:50:08.200777788 +0000 UTC m=+0.043549803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:50:08 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7626d36f622ac4b129a8ec022e88ae1e99ada5e519ee6ca6db1641417edaec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7626d36f622ac4b129a8ec022e88ae1e99ada5e519ee6ca6db1641417edaec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7626d36f622ac4b129a8ec022e88ae1e99ada5e519ee6ca6db1641417edaec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7626d36f622ac4b129a8ec022e88ae1e99ada5e519ee6ca6db1641417edaec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:50:08 compute-0 podman[440915]: 2025-10-11 09:50:08.345794715 +0000 UTC m=+0.188566750 container init 6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 09:50:08 compute-0 podman[440915]: 2025-10-11 09:50:08.361724982 +0000 UTC m=+0.204496977 container start 6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:50:08 compute-0 podman[440915]: 2025-10-11 09:50:08.365523788 +0000 UTC m=+0.208295843 container attach 6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:50:08 compute-0 ceph-mon[74313]: pgmap v3272: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]: {
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:     "0": [
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:         {
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "devices": [
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "/dev/loop3"
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             ],
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "lv_name": "ceph_lv0",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "lv_size": "21470642176",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "name": "ceph_lv0",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "tags": {
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.cluster_name": "ceph",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.crush_device_class": "",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.encrypted": "0",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.osd_id": "0",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.type": "block",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.vdo": "0"
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             },
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "type": "block",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "vg_name": "ceph_vg0"
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:         }
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:     ],
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:     "1": [
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:         {
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "devices": [
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "/dev/loop4"
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             ],
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "lv_name": "ceph_lv1",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "lv_size": "21470642176",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "name": "ceph_lv1",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "tags": {
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.cluster_name": "ceph",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.crush_device_class": "",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.encrypted": "0",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.osd_id": "1",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.type": "block",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.vdo": "0"
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             },
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "type": "block",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "vg_name": "ceph_vg1"
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:         }
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:     ],
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:     "2": [
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:         {
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "devices": [
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "/dev/loop5"
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             ],
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "lv_name": "ceph_lv2",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "lv_size": "21470642176",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "name": "ceph_lv2",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "tags": {
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.cluster_name": "ceph",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.crush_device_class": "",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.encrypted": "0",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.osd_id": "2",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.type": "block",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:                 "ceph.vdo": "0"
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             },
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "type": "block",
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:             "vg_name": "ceph_vg2"
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:         }
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]:     ]
Oct 11 09:50:09 compute-0 distracted_antonelli[440932]: }
Oct 11 09:50:09 compute-0 systemd[1]: libpod-6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e.scope: Deactivated successfully.
Oct 11 09:50:09 compute-0 podman[440915]: 2025-10-11 09:50:09.166583505 +0000 UTC m=+1.009355540 container died 6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 09:50:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b7626d36f622ac4b129a8ec022e88ae1e99ada5e519ee6ca6db1641417edaec-merged.mount: Deactivated successfully.
Oct 11 09:50:09 compute-0 podman[440915]: 2025-10-11 09:50:09.241484656 +0000 UTC m=+1.084256631 container remove 6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 09:50:09 compute-0 systemd[1]: libpod-conmon-6b4327d559d18386800c57c3d46a34662db97f883b05bd8e0814ed55eed97e3e.scope: Deactivated successfully.
Oct 11 09:50:09 compute-0 sudo[440812]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:09 compute-0 nova_compute[260935]: 2025-10-11 09:50:09.315 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:50:09 compute-0 nova_compute[260935]: 2025-10-11 09:50:09.316 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:50:09 compute-0 sudo[440955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:50:09 compute-0 sudo[440955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:09 compute-0 sudo[440955]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:09 compute-0 sudo[440980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:50:09 compute-0 sudo[440980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:09 compute-0 sudo[440980]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:09 compute-0 sudo[441005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:50:09 compute-0 sudo[441005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:09 compute-0 sudo[441005]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:09 compute-0 sudo[441030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:50:09 compute-0 sudo[441030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3273: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:50:10 compute-0 podman[441097]: 2025-10-11 09:50:10.169950277 +0000 UTC m=+0.069870900 container create d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:50:10 compute-0 systemd[1]: Started libpod-conmon-d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b.scope.
Oct 11 09:50:10 compute-0 podman[441097]: 2025-10-11 09:50:10.139079311 +0000 UTC m=+0.039000034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:50:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:50:10 compute-0 podman[441097]: 2025-10-11 09:50:10.280878479 +0000 UTC m=+0.180799192 container init d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:50:10 compute-0 podman[441097]: 2025-10-11 09:50:10.292102974 +0000 UTC m=+0.192023617 container start d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 11 09:50:10 compute-0 podman[441097]: 2025-10-11 09:50:10.296535808 +0000 UTC m=+0.196456511 container attach d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 09:50:10 compute-0 beautiful_brahmagupta[441114]: 167 167
Oct 11 09:50:10 compute-0 systemd[1]: libpod-d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b.scope: Deactivated successfully.
Oct 11 09:50:10 compute-0 podman[441097]: 2025-10-11 09:50:10.299998645 +0000 UTC m=+0.199919338 container died d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:50:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-50fb37e7684ed608799ead6a500d547c67d88b8c7dedb8b5fcd37e5d4a9d38c6-merged.mount: Deactivated successfully.
Oct 11 09:50:10 compute-0 podman[441097]: 2025-10-11 09:50:10.348059133 +0000 UTC m=+0.247979786 container remove d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:50:10 compute-0 systemd[1]: libpod-conmon-d868fc7dc8a3c1218f5cab57e11f719a2092dccc278075f8d76f2b99bd4d1c2b.scope: Deactivated successfully.
Oct 11 09:50:10 compute-0 podman[441140]: 2025-10-11 09:50:10.579299519 +0000 UTC m=+0.074371047 container create d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:50:10 compute-0 systemd[1]: Started libpod-conmon-d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b.scope.
Oct 11 09:50:10 compute-0 podman[441140]: 2025-10-11 09:50:10.550662846 +0000 UTC m=+0.045734454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:50:10 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bcc0333411f3ef93b3d5906c09f44e4b1f25ffbd7e70bd45f233466f2eb8f7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bcc0333411f3ef93b3d5906c09f44e4b1f25ffbd7e70bd45f233466f2eb8f7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bcc0333411f3ef93b3d5906c09f44e4b1f25ffbd7e70bd45f233466f2eb8f7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bcc0333411f3ef93b3d5906c09f44e4b1f25ffbd7e70bd45f233466f2eb8f7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:50:10 compute-0 podman[441140]: 2025-10-11 09:50:10.685032085 +0000 UTC m=+0.180103653 container init d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:50:10 compute-0 podman[441140]: 2025-10-11 09:50:10.701430635 +0000 UTC m=+0.196502173 container start d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 09:50:10 compute-0 podman[441140]: 2025-10-11 09:50:10.705248252 +0000 UTC m=+0.200319790 container attach d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 09:50:10 compute-0 nova_compute[260935]: 2025-10-11 09:50:10.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:50:10 compute-0 nova_compute[260935]: 2025-10-11 09:50:10.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:50:10 compute-0 nova_compute[260935]: 2025-10-11 09:50:10.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:50:10 compute-0 nova_compute[260935]: 2025-10-11 09:50:10.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:50:10 compute-0 nova_compute[260935]: 2025-10-11 09:50:10.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:50:10 compute-0 nova_compute[260935]: 2025-10-11 09:50:10.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:50:10 compute-0 ceph-mon[74313]: pgmap v3273: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3274: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]: {
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:         "osd_id": 2,
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:         "type": "bluestore"
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:     },
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:         "osd_id": 0,
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:         "type": "bluestore"
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:     },
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:         "osd_id": 1,
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:         "type": "bluestore"
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]:     }
Oct 11 09:50:11 compute-0 sleepy_hermann[441156]: }
Oct 11 09:50:11 compute-0 systemd[1]: libpod-d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b.scope: Deactivated successfully.
Oct 11 09:50:11 compute-0 podman[441140]: 2025-10-11 09:50:11.857408637 +0000 UTC m=+1.352480155 container died d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:50:11 compute-0 systemd[1]: libpod-d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b.scope: Consumed 1.163s CPU time.
Oct 11 09:50:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bcc0333411f3ef93b3d5906c09f44e4b1f25ffbd7e70bd45f233466f2eb8f7d-merged.mount: Deactivated successfully.
Oct 11 09:50:11 compute-0 podman[441140]: 2025-10-11 09:50:11.934075448 +0000 UTC m=+1.429146996 container remove d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:50:11 compute-0 systemd[1]: libpod-conmon-d5c0c04515e82adae3c023eda151a83d0faa9b147ae17490a1ea9940d34ca45b.scope: Deactivated successfully.
Oct 11 09:50:11 compute-0 sudo[441030]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:50:11 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:50:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:50:11 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:50:12 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev abf931b1-e0a8-4202-a71d-a9ca3bf90ed8 does not exist
Oct 11 09:50:12 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 86cd3a40-2c7e-4d3b-b99e-32c28761d510 does not exist
Oct 11 09:50:12 compute-0 sudo[441200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:50:12 compute-0 sudo[441200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:12 compute-0 sudo[441200]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:12 compute-0 sudo[441225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:50:12 compute-0 sudo[441225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:50:12 compute-0 sudo[441225]: pam_unix(sudo:session): session closed for user root
Oct 11 09:50:12 compute-0 ceph-mon[74313]: pgmap v3274: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:12 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:50:12 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:50:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3275: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:50:14 compute-0 ceph-mon[74313]: pgmap v3275: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:50:15.246 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:50:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:50:15.249 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:50:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:50:15.249 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:50:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3276: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:15 compute-0 nova_compute[260935]: 2025-10-11 09:50:15.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:50:15 compute-0 nova_compute[260935]: 2025-10-11 09:50:15.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:50:17 compute-0 ceph-mon[74313]: pgmap v3276: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3277: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:17 compute-0 nova_compute[260935]: 2025-10-11 09:50:17.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:50:19 compute-0 ceph-mon[74313]: pgmap v3277: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3278: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:50:20 compute-0 nova_compute[260935]: 2025-10-11 09:50:20.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:50:21 compute-0 ceph-mon[74313]: pgmap v3278: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3279: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:23 compute-0 ceph-mon[74313]: pgmap v3279: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3280: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:24 compute-0 podman[441251]: 2025-10-11 09:50:24.795803986 +0000 UTC m=+0.093056441 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent)
Oct 11 09:50:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:50:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:50:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:50:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:50:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:50:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:50:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:50:25 compute-0 ceph-mon[74313]: pgmap v3280: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3281: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:25 compute-0 nova_compute[260935]: 2025-10-11 09:50:25.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:50:25 compute-0 nova_compute[260935]: 2025-10-11 09:50:25.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:50:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:50:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974663407' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:50:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:50:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974663407' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:50:27 compute-0 ceph-mon[74313]: pgmap v3281: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/974663407' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:50:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/974663407' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:50:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3282: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:29 compute-0 ceph-mon[74313]: pgmap v3282: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3283: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:29 compute-0 podman[441270]: 2025-10-11 09:50:29.765879134 +0000 UTC m=+0.073025310 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:50:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:50:30 compute-0 nova_compute[260935]: 2025-10-11 09:50:30.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:50:31 compute-0 ceph-mon[74313]: pgmap v3283: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3284: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:32 compute-0 podman[441290]: 2025-10-11 09:50:32.805323043 +0000 UTC m=+0.101079196 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 09:50:32 compute-0 podman[441291]: 2025-10-11 09:50:32.844922883 +0000 UTC m=+0.139884934 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 09:50:33 compute-0 ceph-mon[74313]: pgmap v3284: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3285: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:50:35 compute-0 ceph-mon[74313]: pgmap v3285: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3286: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:35 compute-0 nova_compute[260935]: 2025-10-11 09:50:35.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:50:35 compute-0 nova_compute[260935]: 2025-10-11 09:50:35.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:50:37 compute-0 ceph-mon[74313]: pgmap v3286: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3287: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.7 KiB/s rd, 255 B/s wr, 5 op/s
Oct 11 09:50:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Oct 11 09:50:38 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Oct 11 09:50:38 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Oct 11 09:50:39 compute-0 ceph-mon[74313]: pgmap v3287: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.7 KiB/s rd, 255 B/s wr, 5 op/s
Oct 11 09:50:39 compute-0 ceph-mon[74313]: osdmap e307: 3 total, 3 up, 3 in
Oct 11 09:50:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3289: 321 pgs: 321 active+clean; 303 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.6 MiB/s wr, 12 op/s
Oct 11 09:50:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:50:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Oct 11 09:50:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Oct 11 09:50:40 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Oct 11 09:50:40 compute-0 nova_compute[260935]: 2025-10-11 09:50:40.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:50:41 compute-0 ceph-mon[74313]: pgmap v3289: 321 pgs: 321 active+clean; 303 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.6 MiB/s wr, 12 op/s
Oct 11 09:50:41 compute-0 ceph-mon[74313]: osdmap e308: 3 total, 3 up, 3 in
Oct 11 09:50:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3291: 321 pgs: 321 active+clean; 303 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Oct 11 09:50:43 compute-0 ceph-mon[74313]: pgmap v3291: 321 pgs: 321 active+clean; 303 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Oct 11 09:50:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3292: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 11 09:50:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:50:45 compute-0 ceph-mon[74313]: pgmap v3292: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 11 09:50:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3293: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 5.1 MiB/s wr, 38 op/s
Oct 11 09:50:45 compute-0 nova_compute[260935]: 2025-10-11 09:50:45.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:50:47 compute-0 ceph-mon[74313]: pgmap v3293: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 5.1 MiB/s wr, 38 op/s
Oct 11 09:50:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3294: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.6 MiB/s wr, 27 op/s
Oct 11 09:50:49 compute-0 ceph-mon[74313]: pgmap v3294: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.6 MiB/s wr, 27 op/s
Oct 11 09:50:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3295: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.5 MiB/s wr, 25 op/s
Oct 11 09:50:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:50:50 compute-0 nova_compute[260935]: 2025-10-11 09:50:50.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:50:51 compute-0 ceph-mon[74313]: pgmap v3295: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.5 MiB/s wr, 25 op/s
Oct 11 09:50:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3296: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.2 MiB/s wr, 22 op/s
Oct 11 09:50:53 compute-0 ceph-mon[74313]: pgmap v3296: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.2 MiB/s wr, 22 op/s
Oct 11 09:50:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3297: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 2.1 MiB/s wr, 21 op/s
Oct 11 09:50:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:50:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:50:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:50:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:50:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:50:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:50:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:50:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:50:55
Oct 11 09:50:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:50:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:50:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'images', 'backups', 'default.rgw.meta']
Oct 11 09:50:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:50:55 compute-0 ceph-mon[74313]: pgmap v3297: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 2.1 MiB/s wr, 21 op/s
Oct 11 09:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:50:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3298: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:50:55 compute-0 nova_compute[260935]: 2025-10-11 09:50:55.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:50:55 compute-0 nova_compute[260935]: 2025-10-11 09:50:55.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:50:55 compute-0 podman[441336]: 2025-10-11 09:50:55.79979438 +0000 UTC m=+0.091526908 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 11 09:50:57 compute-0 ceph-mon[74313]: pgmap v3298: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3299: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:59 compute-0 ceph-mon[74313]: pgmap v3299: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3300: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:50:59 compute-0 nova_compute[260935]: 2025-10-11 09:50:59.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:50:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:51:00 compute-0 nova_compute[260935]: 2025-10-11 09:51:00.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:51:00 compute-0 nova_compute[260935]: 2025-10-11 09:51:00.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:51:00 compute-0 nova_compute[260935]: 2025-10-11 09:51:00.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:51:00 compute-0 nova_compute[260935]: 2025-10-11 09:51:00.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:51:00 compute-0 nova_compute[260935]: 2025-10-11 09:51:00.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:00 compute-0 podman[441356]: 2025-10-11 09:51:00.843474454 +0000 UTC m=+0.126414207 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 09:51:01 compute-0 ceph-mon[74313]: pgmap v3300: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3301: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:51:02.206 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:51:02 compute-0 nova_compute[260935]: 2025-10-11 09:51:02.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:02 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:51:02.208 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:51:02 compute-0 nova_compute[260935]: 2025-10-11 09:51:02.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:51:02 compute-0 nova_compute[260935]: 2025-10-11 09:51:02.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:51:02 compute-0 nova_compute[260935]: 2025-10-11 09:51:02.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 09:51:02 compute-0 nova_compute[260935]: 2025-10-11 09:51:02.993 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:51:02 compute-0 nova_compute[260935]: 2025-10-11 09:51:02.994 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:51:02 compute-0 nova_compute[260935]: 2025-10-11 09:51:02.994 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:51:02 compute-0 nova_compute[260935]: 2025-10-11 09:51:02.995 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:51:03 compute-0 nova_compute[260935]: 2025-10-11 09:51:03.190 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:51:03 compute-0 ceph-mon[74313]: pgmap v3301: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:03 compute-0 nova_compute[260935]: 2025-10-11 09:51:03.594 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:51:03 compute-0 nova_compute[260935]: 2025-10-11 09:51:03.608 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:51:03 compute-0 nova_compute[260935]: 2025-10-11 09:51:03.608 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:51:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3302: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:03 compute-0 podman[441377]: 2025-10-11 09:51:03.757247888 +0000 UTC m=+0.061485525 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:51:03 compute-0 podman[441378]: 2025-10-11 09:51:03.869074365 +0000 UTC m=+0.164505196 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 09:51:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:51:05 compute-0 sshd-session[441421]: Invalid user botuser from 43.157.67.116 port 41278
Oct 11 09:51:05 compute-0 sshd-session[441421]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:51:05 compute-0 sshd-session[441421]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.157.67.116
Oct 11 09:51:05 compute-0 ceph-mon[74313]: pgmap v3302: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3303: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:05 compute-0 nova_compute[260935]: 2025-10-11 09:51:05.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:51:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:51:05 compute-0 nova_compute[260935]: 2025-10-11 09:51:05.786 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:51:05 compute-0 nova_compute[260935]: 2025-10-11 09:51:05.787 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:51:05 compute-0 nova_compute[260935]: 2025-10-11 09:51:05.788 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:51:05 compute-0 nova_compute[260935]: 2025-10-11 09:51:05.788 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:51:05 compute-0 nova_compute[260935]: 2025-10-11 09:51:05.789 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:51:05 compute-0 nova_compute[260935]: 2025-10-11 09:51:05.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:51:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3433148840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:51:06 compute-0 nova_compute[260935]: 2025-10-11 09:51:06.327 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:51:06 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3433148840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:51:07 compute-0 sshd-session[441421]: Failed password for invalid user botuser from 43.157.67.116 port 41278 ssh2
Oct 11 09:51:07 compute-0 ceph-mon[74313]: pgmap v3303: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3304: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:07 compute-0 nova_compute[260935]: 2025-10-11 09:51:07.708 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:51:07 compute-0 nova_compute[260935]: 2025-10-11 09:51:07.709 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:51:07 compute-0 nova_compute[260935]: 2025-10-11 09:51:07.709 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:51:07 compute-0 nova_compute[260935]: 2025-10-11 09:51:07.715 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:51:07 compute-0 nova_compute[260935]: 2025-10-11 09:51:07.716 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:51:07 compute-0 nova_compute[260935]: 2025-10-11 09:51:07.722 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:51:07 compute-0 nova_compute[260935]: 2025-10-11 09:51:07.723 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:51:07 compute-0 nova_compute[260935]: 2025-10-11 09:51:07.941 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:51:07 compute-0 nova_compute[260935]: 2025-10-11 09:51:07.942 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2805MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:51:07 compute-0 nova_compute[260935]: 2025-10-11 09:51:07.943 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:51:07 compute-0 nova_compute[260935]: 2025-10-11 09:51:07.943 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:51:08 compute-0 nova_compute[260935]: 2025-10-11 09:51:08.182 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:51:08 compute-0 nova_compute[260935]: 2025-10-11 09:51:08.183 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:51:08 compute-0 nova_compute[260935]: 2025-10-11 09:51:08.184 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:51:08 compute-0 nova_compute[260935]: 2025-10-11 09:51:08.184 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:51:08 compute-0 nova_compute[260935]: 2025-10-11 09:51:08.185 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:51:08 compute-0 nova_compute[260935]: 2025-10-11 09:51:08.410 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:51:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:51:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1495361142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:51:08 compute-0 nova_compute[260935]: 2025-10-11 09:51:08.890 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:51:08 compute-0 nova_compute[260935]: 2025-10-11 09:51:08.899 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:51:08 compute-0 nova_compute[260935]: 2025-10-11 09:51:08.923 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:51:08 compute-0 nova_compute[260935]: 2025-10-11 09:51:08.927 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:51:08 compute-0 nova_compute[260935]: 2025-10-11 09:51:08.928 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:51:08 compute-0 sshd-session[441421]: Received disconnect from 43.157.67.116 port 41278:11: Bye Bye [preauth]
Oct 11 09:51:08 compute-0 sshd-session[441421]: Disconnected from invalid user botuser 43.157.67.116 port 41278 [preauth]
Oct 11 09:51:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Oct 11 09:51:09 compute-0 ceph-mon[74313]: pgmap v3304: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1495361142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:51:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Oct 11 09:51:09 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Oct 11 09:51:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3306: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:51:10 compute-0 ceph-mon[74313]: osdmap e309: 3 total, 3 up, 3 in
Oct 11 09:51:10 compute-0 nova_compute[260935]: 2025-10-11 09:51:10.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:10 compute-0 nova_compute[260935]: 2025-10-11 09:51:10.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:11 compute-0 ceph-mon[74313]: pgmap v3306: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3307: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:11 compute-0 nova_compute[260935]: 2025-10-11 09:51:11.931 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:51:11 compute-0 nova_compute[260935]: 2025-10-11 09:51:11.932 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:51:12 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:51:12.211 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:51:12 compute-0 sudo[441469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:51:12 compute-0 sudo[441469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:12 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 09:51:12 compute-0 sudo[441469]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:12 compute-0 sudo[441495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:51:12 compute-0 sudo[441495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:12 compute-0 sudo[441495]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:12 compute-0 sudo[441520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:51:12 compute-0 sudo[441520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:12 compute-0 sudo[441520]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:12 compute-0 sudo[441545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:51:12 compute-0 sudo[441545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:13 compute-0 sudo[441545]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:51:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:51:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:51:13 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:51:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:51:13 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:51:13 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 7ddfaec9-6f6a-4fd1-b719-56bdeb8606aa does not exist
Oct 11 09:51:13 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 58bbc594-93ed-4a2a-bef5-145b3ce3bba2 does not exist
Oct 11 09:51:13 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev f8e39f37-7eb7-47b5-b4d5-5fb82a7cbf62 does not exist
Oct 11 09:51:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:51:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:51:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:51:13 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:51:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:51:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:51:13 compute-0 sudo[441603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:51:13 compute-0 sudo[441603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:13 compute-0 sudo[441603]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:13 compute-0 sudo[441628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:51:13 compute-0 sudo[441628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:13 compute-0 sudo[441628]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:13 compute-0 ceph-mon[74313]: pgmap v3307: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:13 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:51:13 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:51:13 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:51:13 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:51:13 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:51:13 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:51:13 compute-0 sudo[441653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:51:13 compute-0 sudo[441653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:13 compute-0 sudo[441653]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:13 compute-0 sudo[441678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:51:13 compute-0 sudo[441678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3308: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 11 09:51:14 compute-0 podman[441744]: 2025-10-11 09:51:14.077213816 +0000 UTC m=+0.079287115 container create 089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:51:14 compute-0 podman[441744]: 2025-10-11 09:51:14.041183356 +0000 UTC m=+0.043256745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:51:14 compute-0 systemd[1]: Started libpod-conmon-089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97.scope.
Oct 11 09:51:14 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:51:14 compute-0 podman[441744]: 2025-10-11 09:51:14.207742157 +0000 UTC m=+0.209815516 container init 089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:51:14 compute-0 podman[441744]: 2025-10-11 09:51:14.221090242 +0000 UTC m=+0.223163551 container start 089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_clarke, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:51:14 compute-0 podman[441744]: 2025-10-11 09:51:14.226358309 +0000 UTC m=+0.228431658 container attach 089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_clarke, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:51:14 compute-0 systemd[1]: libpod-089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97.scope: Deactivated successfully.
Oct 11 09:51:14 compute-0 gifted_clarke[441760]: 167 167
Oct 11 09:51:14 compute-0 conmon[441760]: conmon 089a7947feb3c42a2bad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97.scope/container/memory.events
Oct 11 09:51:14 compute-0 podman[441744]: 2025-10-11 09:51:14.233710486 +0000 UTC m=+0.235783795 container died 089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_clarke, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 11 09:51:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-47c5e9d0cd1b7ad68ceeac58c7257492ffabc369e7eceb98cb873064cb9c0194-merged.mount: Deactivated successfully.
Oct 11 09:51:14 compute-0 podman[441744]: 2025-10-11 09:51:14.300382936 +0000 UTC m=+0.302456235 container remove 089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_clarke, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:51:14 compute-0 systemd[1]: libpod-conmon-089a7947feb3c42a2bad486a87c9e2576de9c2da7d5e780e5db9fa25042e4c97.scope: Deactivated successfully.
Oct 11 09:51:14 compute-0 podman[441784]: 2025-10-11 09:51:14.586980634 +0000 UTC m=+0.075711375 container create 9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 09:51:14 compute-0 podman[441784]: 2025-10-11 09:51:14.555144811 +0000 UTC m=+0.043875602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:51:14 compute-0 systemd[1]: Started libpod-conmon-9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba.scope.
Oct 11 09:51:14 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a350947bd30e5883e9ad3b30781df6d8914047e0a80e3176926b8609cea8f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a350947bd30e5883e9ad3b30781df6d8914047e0a80e3176926b8609cea8f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a350947bd30e5883e9ad3b30781df6d8914047e0a80e3176926b8609cea8f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a350947bd30e5883e9ad3b30781df6d8914047e0a80e3176926b8609cea8f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a350947bd30e5883e9ad3b30781df6d8914047e0a80e3176926b8609cea8f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:51:14 compute-0 podman[441784]: 2025-10-11 09:51:14.720567811 +0000 UTC m=+0.209298602 container init 9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 09:51:14 compute-0 podman[441784]: 2025-10-11 09:51:14.736897829 +0000 UTC m=+0.225628560 container start 9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 09:51:14 compute-0 podman[441784]: 2025-10-11 09:51:14.741781956 +0000 UTC m=+0.230512757 container attach 9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:51:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:51:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Oct 11 09:51:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Oct 11 09:51:14 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Oct 11 09:51:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:51:15.247 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:51:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:51:15.253 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:51:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:51:15.253 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:51:15 compute-0 ceph-mon[74313]: pgmap v3308: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 11 09:51:15 compute-0 ceph-mon[74313]: osdmap e310: 3 total, 3 up, 3 in
Oct 11 09:51:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3310: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 11 09:51:15 compute-0 nova_compute[260935]: 2025-10-11 09:51:15.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:15 compute-0 nova_compute[260935]: 2025-10-11 09:51:15.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:15 compute-0 pedantic_roentgen[441800]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:51:15 compute-0 pedantic_roentgen[441800]: --> relative data size: 1.0
Oct 11 09:51:15 compute-0 pedantic_roentgen[441800]: --> All data devices are unavailable
Oct 11 09:51:16 compute-0 systemd[1]: libpod-9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba.scope: Deactivated successfully.
Oct 11 09:51:16 compute-0 systemd[1]: libpod-9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba.scope: Consumed 1.238s CPU time.
Oct 11 09:51:16 compute-0 podman[441784]: 2025-10-11 09:51:16.036987013 +0000 UTC m=+1.525717734 container died 9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 09:51:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0a350947bd30e5883e9ad3b30781df6d8914047e0a80e3176926b8609cea8f9-merged.mount: Deactivated successfully.
Oct 11 09:51:16 compute-0 podman[441784]: 2025-10-11 09:51:16.112591924 +0000 UTC m=+1.601322635 container remove 9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 11 09:51:16 compute-0 systemd[1]: libpod-conmon-9ee1a0ee47cc76e0a1d24a3828f92cde9e74e3461db5e51d751d3cb550d9b3ba.scope: Deactivated successfully.
Oct 11 09:51:16 compute-0 sudo[441678]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:16 compute-0 sudo[441840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:51:16 compute-0 sudo[441840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:16 compute-0 sudo[441840]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:16 compute-0 sudo[441865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:51:16 compute-0 sudo[441865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:16 compute-0 sudo[441865]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:16 compute-0 sudo[441890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:51:16 compute-0 sudo[441890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:16 compute-0 sudo[441890]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:16 compute-0 sudo[441915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:51:16 compute-0 sudo[441915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:16 compute-0 ceph-mon[74313]: pgmap v3310: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 11 09:51:16 compute-0 podman[441981]: 2025-10-11 09:51:16.956349968 +0000 UTC m=+0.049017116 container create 4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_burnell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:51:16 compute-0 systemd[1]: Started libpod-conmon-4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e.scope.
Oct 11 09:51:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:51:17 compute-0 podman[441981]: 2025-10-11 09:51:16.938966341 +0000 UTC m=+0.031633489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:51:17 compute-0 podman[441981]: 2025-10-11 09:51:17.04946767 +0000 UTC m=+0.142134848 container init 4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_burnell, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:51:17 compute-0 podman[441981]: 2025-10-11 09:51:17.058665408 +0000 UTC m=+0.151332566 container start 4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_burnell, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 09:51:17 compute-0 podman[441981]: 2025-10-11 09:51:17.06267275 +0000 UTC m=+0.155339978 container attach 4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_burnell, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 09:51:17 compute-0 clever_burnell[441997]: 167 167
Oct 11 09:51:17 compute-0 systemd[1]: libpod-4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e.scope: Deactivated successfully.
Oct 11 09:51:17 compute-0 podman[441981]: 2025-10-11 09:51:17.06550292 +0000 UTC m=+0.158170078 container died 4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_burnell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 09:51:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-da54ef9b4989cfd379d393d7410ead6418b7747a9bea79422cc6e41a2dbb44c1-merged.mount: Deactivated successfully.
Oct 11 09:51:17 compute-0 podman[441981]: 2025-10-11 09:51:17.117354254 +0000 UTC m=+0.210021412 container remove 4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:51:17 compute-0 systemd[1]: libpod-conmon-4a251049c766f932f5f37d800a07309bc23e9d28a58704d4f0463f5429da507e.scope: Deactivated successfully.
Oct 11 09:51:17 compute-0 podman[442021]: 2025-10-11 09:51:17.350125943 +0000 UTC m=+0.086429415 container create 76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 09:51:17 compute-0 systemd[1]: Started libpod-conmon-76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56.scope.
Oct 11 09:51:17 compute-0 podman[442021]: 2025-10-11 09:51:17.320937884 +0000 UTC m=+0.057241436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:51:17 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76da696b280cbd6a162fd10001dd9fe9553fcd7e5baf3038579aa08423468595/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76da696b280cbd6a162fd10001dd9fe9553fcd7e5baf3038579aa08423468595/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76da696b280cbd6a162fd10001dd9fe9553fcd7e5baf3038579aa08423468595/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76da696b280cbd6a162fd10001dd9fe9553fcd7e5baf3038579aa08423468595/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:51:17 compute-0 podman[442021]: 2025-10-11 09:51:17.466576579 +0000 UTC m=+0.202880071 container init 76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 11 09:51:17 compute-0 podman[442021]: 2025-10-11 09:51:17.479119941 +0000 UTC m=+0.215423433 container start 76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 11 09:51:17 compute-0 podman[442021]: 2025-10-11 09:51:17.484869362 +0000 UTC m=+0.221172864 container attach 76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct 11 09:51:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3311: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]: {
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:     "0": [
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:         {
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "devices": [
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "/dev/loop3"
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             ],
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "lv_name": "ceph_lv0",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "lv_size": "21470642176",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "name": "ceph_lv0",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "tags": {
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.cluster_name": "ceph",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.crush_device_class": "",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.encrypted": "0",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.osd_id": "0",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.type": "block",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.vdo": "0"
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             },
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "type": "block",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "vg_name": "ceph_vg0"
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:         }
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:     ],
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:     "1": [
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:         {
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "devices": [
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "/dev/loop4"
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             ],
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "lv_name": "ceph_lv1",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "lv_size": "21470642176",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "name": "ceph_lv1",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "tags": {
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.cluster_name": "ceph",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.crush_device_class": "",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.encrypted": "0",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.osd_id": "1",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.type": "block",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.vdo": "0"
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             },
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "type": "block",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "vg_name": "ceph_vg1"
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:         }
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:     ],
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:     "2": [
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:         {
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "devices": [
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "/dev/loop5"
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             ],
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "lv_name": "ceph_lv2",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "lv_size": "21470642176",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "name": "ceph_lv2",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "tags": {
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.cluster_name": "ceph",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.crush_device_class": "",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.encrypted": "0",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.osd_id": "2",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.type": "block",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:                 "ceph.vdo": "0"
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             },
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "type": "block",
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:             "vg_name": "ceph_vg2"
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:         }
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]:     ]
Oct 11 09:51:18 compute-0 vigorous_lewin[442037]: }
Oct 11 09:51:18 compute-0 systemd[1]: libpod-76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56.scope: Deactivated successfully.
Oct 11 09:51:18 compute-0 podman[442021]: 2025-10-11 09:51:18.263705797 +0000 UTC m=+1.000009289 container died 76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 09:51:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-76da696b280cbd6a162fd10001dd9fe9553fcd7e5baf3038579aa08423468595-merged.mount: Deactivated successfully.
Oct 11 09:51:18 compute-0 podman[442021]: 2025-10-11 09:51:18.345377737 +0000 UTC m=+1.081681209 container remove 76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lewin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:51:18 compute-0 systemd[1]: libpod-conmon-76b69cd32191d2a5e81506050f0c5cc760971d4997a329b94d728d1751c81e56.scope: Deactivated successfully.
Oct 11 09:51:18 compute-0 sudo[441915]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:18 compute-0 sudo[442057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:51:18 compute-0 sudo[442057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:18 compute-0 sudo[442057]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:18 compute-0 sudo[442082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:51:18 compute-0 sudo[442082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:18 compute-0 sudo[442082]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:18 compute-0 sudo[442107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:51:18 compute-0 sudo[442107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:18 compute-0 sudo[442107]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:18 compute-0 ceph-mon[74313]: pgmap v3311: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Oct 11 09:51:18 compute-0 sudo[442132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:51:18 compute-0 sudo[442132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:19 compute-0 podman[442198]: 2025-10-11 09:51:19.284866428 +0000 UTC m=+0.072740401 container create 6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:51:19 compute-0 systemd[1]: Started libpod-conmon-6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf.scope.
Oct 11 09:51:19 compute-0 podman[442198]: 2025-10-11 09:51:19.254567858 +0000 UTC m=+0.042441881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:51:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:51:19 compute-0 podman[442198]: 2025-10-11 09:51:19.387567218 +0000 UTC m=+0.175441231 container init 6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 09:51:19 compute-0 podman[442198]: 2025-10-11 09:51:19.398415223 +0000 UTC m=+0.186289166 container start 6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 09:51:19 compute-0 podman[442198]: 2025-10-11 09:51:19.402200579 +0000 UTC m=+0.190074612 container attach 6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 09:51:19 compute-0 elegant_elgamal[442214]: 167 167
Oct 11 09:51:19 compute-0 systemd[1]: libpod-6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf.scope: Deactivated successfully.
Oct 11 09:51:19 compute-0 conmon[442214]: conmon 6775b865b91b457560d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf.scope/container/memory.events
Oct 11 09:51:19 compute-0 podman[442198]: 2025-10-11 09:51:19.409439372 +0000 UTC m=+0.197313335 container died 6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:51:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f801ad58c07835a47a79cbdef0410f597165a08d450765ef8d96dd62d7f741d6-merged.mount: Deactivated successfully.
Oct 11 09:51:19 compute-0 podman[442198]: 2025-10-11 09:51:19.461620735 +0000 UTC m=+0.249494708 container remove 6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:51:19 compute-0 systemd[1]: libpod-conmon-6775b865b91b457560d716f7a0a69cc6b7ac2423a7729cde111b204524f58ddf.scope: Deactivated successfully.
Oct 11 09:51:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3312: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 11 09:51:19 compute-0 podman[442238]: 2025-10-11 09:51:19.710549977 +0000 UTC m=+0.076900158 container create bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:51:19 compute-0 systemd[1]: Started libpod-conmon-bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a.scope.
Oct 11 09:51:19 compute-0 podman[442238]: 2025-10-11 09:51:19.682082349 +0000 UTC m=+0.048432580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:51:19 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89082ed7c6d48f965ac8f6f4cf6e2d9051f9d0577165e5c3cd44e28fc754e92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89082ed7c6d48f965ac8f6f4cf6e2d9051f9d0577165e5c3cd44e28fc754e92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89082ed7c6d48f965ac8f6f4cf6e2d9051f9d0577165e5c3cd44e28fc754e92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89082ed7c6d48f965ac8f6f4cf6e2d9051f9d0577165e5c3cd44e28fc754e92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:51:19 compute-0 podman[442238]: 2025-10-11 09:51:19.842610332 +0000 UTC m=+0.208960553 container init bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 09:51:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:51:19 compute-0 podman[442238]: 2025-10-11 09:51:19.862930961 +0000 UTC m=+0.229281142 container start bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:51:19 compute-0 podman[442238]: 2025-10-11 09:51:19.866985185 +0000 UTC m=+0.233335426 container attach bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 09:51:20 compute-0 ceph-mon[74313]: pgmap v3312: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 11 09:51:20 compute-0 nova_compute[260935]: 2025-10-11 09:51:20.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:20 compute-0 nova_compute[260935]: 2025-10-11 09:51:20.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:20 compute-0 friendly_boyd[442255]: {
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:         "osd_id": 2,
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:         "type": "bluestore"
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:     },
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:         "osd_id": 0,
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:         "type": "bluestore"
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:     },
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:         "osd_id": 1,
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:         "type": "bluestore"
Oct 11 09:51:20 compute-0 friendly_boyd[442255]:     }
Oct 11 09:51:20 compute-0 friendly_boyd[442255]: }
Oct 11 09:51:20 compute-0 systemd[1]: libpod-bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a.scope: Deactivated successfully.
Oct 11 09:51:20 compute-0 podman[442238]: 2025-10-11 09:51:20.929887146 +0000 UTC m=+1.296237337 container died bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:51:20 compute-0 systemd[1]: libpod-bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a.scope: Consumed 1.075s CPU time.
Oct 11 09:51:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a89082ed7c6d48f965ac8f6f4cf6e2d9051f9d0577165e5c3cd44e28fc754e92-merged.mount: Deactivated successfully.
Oct 11 09:51:20 compute-0 podman[442238]: 2025-10-11 09:51:20.993900881 +0000 UTC m=+1.360251032 container remove bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_boyd, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:51:21 compute-0 systemd[1]: libpod-conmon-bd264118aebc337469de2b11dab2b81ab87180733dfd494358c7c60fd896860a.scope: Deactivated successfully.
Oct 11 09:51:21 compute-0 sudo[442132]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:51:21 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:51:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:51:21 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:51:21 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 6bc55b3f-a1e0-4f26-833a-8d832366089e does not exist
Oct 11 09:51:21 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 8002064a-8e50-4344-86be-de662b5b0769 does not exist
Oct 11 09:51:21 compute-0 sudo[442301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:51:21 compute-0 sudo[442301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:21 compute-0 sudo[442301]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:21 compute-0 sudo[442326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:51:21 compute-0 sudo[442326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:51:21 compute-0 sudo[442326]: pam_unix(sudo:session): session closed for user root
Oct 11 09:51:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3313: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 11 09:51:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:51:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:51:23 compute-0 ceph-mon[74313]: pgmap v3313: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 11 09:51:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3314: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:51:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:51:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:51:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:51:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:51:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:51:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:51:25 compute-0 ceph-mon[74313]: pgmap v3314: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3315: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:25 compute-0 nova_compute[260935]: 2025-10-11 09:51:25.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:25 compute-0 nova_compute[260935]: 2025-10-11 09:51:25.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:51:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2914793518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:51:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:51:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2914793518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:51:26 compute-0 podman[442351]: 2025-10-11 09:51:26.810189875 +0000 UTC m=+0.101117007 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 09:51:27 compute-0 ceph-mon[74313]: pgmap v3315: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2914793518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:51:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2914793518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:51:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3316: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:29 compute-0 ceph-mon[74313]: pgmap v3316: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3317: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:51:30 compute-0 nova_compute[260935]: 2025-10-11 09:51:30.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:30 compute-0 nova_compute[260935]: 2025-10-11 09:51:30.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:31 compute-0 ceph-mon[74313]: pgmap v3317: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3318: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:31 compute-0 podman[442372]: 2025-10-11 09:51:31.796655984 +0000 UTC m=+0.086883248 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:51:33 compute-0 ceph-mon[74313]: pgmap v3318: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3319: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:34 compute-0 podman[442393]: 2025-10-11 09:51:34.808945992 +0000 UTC m=+0.102456935 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible)
Oct 11 09:51:34 compute-0 podman[442394]: 2025-10-11 09:51:34.841351201 +0000 UTC m=+0.137635062 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:51:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.862247) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176294862290, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1202, "num_deletes": 251, "total_data_size": 1809770, "memory_usage": 1835480, "flush_reason": "Manual Compaction"}
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176294874785, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 1098064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68034, "largest_seqno": 69235, "table_properties": {"data_size": 1093478, "index_size": 2045, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11886, "raw_average_key_size": 20, "raw_value_size": 1083559, "raw_average_value_size": 1907, "num_data_blocks": 92, "num_entries": 568, "num_filter_entries": 568, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760176180, "oldest_key_time": 1760176180, "file_creation_time": 1760176294, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 12606 microseconds, and 5120 cpu microseconds.
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.874852) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 1098064 bytes OK
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.874872) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.876242) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.876256) EVENT_LOG_v1 {"time_micros": 1760176294876251, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.876274) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1804273, prev total WAL file size 1804273, number of live WAL files 2.
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.877158) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373534' seq:72057594037927935, type:22 .. '6D6772737461740033303035' seq:0, type:0; will stop at (end)
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(1072KB)], [161(10MB)]
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176294877240, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12244573, "oldest_snapshot_seqno": -1}
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8630 keys, 9627335 bytes, temperature: kUnknown
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176294958604, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 9627335, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9573871, "index_size": 30759, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21637, "raw_key_size": 225898, "raw_average_key_size": 26, "raw_value_size": 9423984, "raw_average_value_size": 1092, "num_data_blocks": 1190, "num_entries": 8630, "num_filter_entries": 8630, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760176294, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.959001) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 9627335 bytes
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.960513) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.3 rd, 118.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.6 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(19.9) write-amplify(8.8) OK, records in: 9096, records dropped: 466 output_compression: NoCompression
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.960541) EVENT_LOG_v1 {"time_micros": 1760176294960529, "job": 100, "event": "compaction_finished", "compaction_time_micros": 81451, "compaction_time_cpu_micros": 47469, "output_level": 6, "num_output_files": 1, "total_output_size": 9627335, "num_input_records": 9096, "num_output_records": 8630, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176294961093, "job": 100, "event": "table_file_deletion", "file_number": 163}
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176294964763, "job": 100, "event": "table_file_deletion", "file_number": 161}
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.877018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.964852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.964860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.964865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.964869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:51:34 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:51:34.964873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:51:35 compute-0 ceph-mon[74313]: pgmap v3319: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3320: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:35 compute-0 nova_compute[260935]: 2025-10-11 09:51:35.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:35 compute-0 nova_compute[260935]: 2025-10-11 09:51:35.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:37 compute-0 ceph-mon[74313]: pgmap v3320: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:37 compute-0 nova_compute[260935]: 2025-10-11 09:51:37.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:51:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3321: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:39 compute-0 ceph-mon[74313]: pgmap v3321: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3322: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:51:40 compute-0 nova_compute[260935]: 2025-10-11 09:51:40.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:40 compute-0 nova_compute[260935]: 2025-10-11 09:51:40.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:41 compute-0 ceph-mon[74313]: pgmap v3322: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3323: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:43 compute-0 ceph-mon[74313]: pgmap v3323: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3324: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:51:45 compute-0 ceph-mon[74313]: pgmap v3324: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3325: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:45 compute-0 nova_compute[260935]: 2025-10-11 09:51:45.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:45 compute-0 nova_compute[260935]: 2025-10-11 09:51:45.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:47 compute-0 ceph-mon[74313]: pgmap v3325: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3326: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:49 compute-0 ceph-mon[74313]: pgmap v3326: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3327: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:51:50 compute-0 nova_compute[260935]: 2025-10-11 09:51:50.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:50 compute-0 nova_compute[260935]: 2025-10-11 09:51:50.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:51 compute-0 ceph-mon[74313]: pgmap v3327: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3328: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:53 compute-0 ceph-mon[74313]: pgmap v3328: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3329: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:51:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:51:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:51:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:51:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:51:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:51:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:51:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:51:55 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.73 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1586 writes, 4965 keys, 1586 commit groups, 1.0 writes per commit group, ingest: 3.76 MB, 0.01 MB/s
                                           Interval WAL: 1586 writes, 675 syncs, 2.35 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f926711090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f926711090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f926711090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 09:51:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:51:55
Oct 11 09:51:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:51:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:51:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'volumes', '.mgr', 'images']
Oct 11 09:51:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:51:55 compute-0 ceph-mon[74313]: pgmap v3329: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:51:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:51:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3330: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:55 compute-0 nova_compute[260935]: 2025-10-11 09:51:55.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:55 compute-0 nova_compute[260935]: 2025-10-11 09:51:55.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:51:57 compute-0 ceph-mon[74313]: pgmap v3330: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3331: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:57 compute-0 podman[442439]: 2025-10-11 09:51:57.795659078 +0000 UTC m=+0.093484693 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 11 09:51:59 compute-0 ceph-mon[74313]: pgmap v3331: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:59 compute-0 nova_compute[260935]: 2025-10-11 09:51:59.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:51:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3332: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:51:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:52:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:52:00 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 47K writes, 180K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.70 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1324 writes, 4495 keys, 1324 commit groups, 1.0 writes per commit group, ingest: 2.98 MB, 0.00 MB/s
                                           Interval WAL: 1324 writes, 573 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab3103090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab3103090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab3103090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 09:52:00 compute-0 nova_compute[260935]: 2025-10-11 09:52:00.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:52:00 compute-0 nova_compute[260935]: 2025-10-11 09:52:00.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:00 compute-0 nova_compute[260935]: 2025-10-11 09:52:00.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:01 compute-0 ceph-mon[74313]: pgmap v3332: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3333: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:02 compute-0 nova_compute[260935]: 2025-10-11 09:52:02.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:52:02 compute-0 nova_compute[260935]: 2025-10-11 09:52:02.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:52:02 compute-0 nova_compute[260935]: 2025-10-11 09:52:02.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:52:02 compute-0 podman[442458]: 2025-10-11 09:52:02.782417339 +0000 UTC m=+0.075399195 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 09:52:03 compute-0 ceph-mon[74313]: pgmap v3333: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:03 compute-0 nova_compute[260935]: 2025-10-11 09:52:03.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:52:03 compute-0 nova_compute[260935]: 2025-10-11 09:52:03.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:52:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3334: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:04 compute-0 nova_compute[260935]: 2025-10-11 09:52:04.504 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:52:04 compute-0 nova_compute[260935]: 2025-10-11 09:52:04.505 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:52:04 compute-0 nova_compute[260935]: 2025-10-11 09:52:04.505 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:52:04 compute-0 nova_compute[260935]: 2025-10-11 09:52:04.719 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:52:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:52:05 compute-0 nova_compute[260935]: 2025-10-11 09:52:05.072 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:52:05 compute-0 nova_compute[260935]: 2025-10-11 09:52:05.097 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:52:05 compute-0 nova_compute[260935]: 2025-10-11 09:52:05.097 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:52:05 compute-0 ceph-mon[74313]: pgmap v3334: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3335: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:52:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:52:05 compute-0 nova_compute[260935]: 2025-10-11 09:52:05.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:05 compute-0 podman[442480]: 2025-10-11 09:52:05.815556242 +0000 UTC m=+0.105438258 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 09:52:05 compute-0 podman[442481]: 2025-10-11 09:52:05.840699257 +0000 UTC m=+0.129057910 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 09:52:05 compute-0 nova_compute[260935]: 2025-10-11 09:52:05.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 09:52:06 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.3 total, 600.0 interval
                                           Cumulative writes: 36K writes, 145K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 36K writes, 13K syncs, 2.73 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1359 writes, 4745 keys, 1359 commit groups, 1.0 writes per commit group, ingest: 4.17 MB, 0.01 MB/s
                                           Interval WAL: 1359 writes, 574 syncs, 2.37 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.062       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.062       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.062       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a77090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a77090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a77090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 09:52:06 compute-0 nova_compute[260935]: 2025-10-11 09:52:06.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:52:06 compute-0 nova_compute[260935]: 2025-10-11 09:52:06.767 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:52:06 compute-0 nova_compute[260935]: 2025-10-11 09:52:06.767 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:52:06 compute-0 nova_compute[260935]: 2025-10-11 09:52:06.768 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:52:06 compute-0 nova_compute[260935]: 2025-10-11 09:52:06.768 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:52:06 compute-0 nova_compute[260935]: 2025-10-11 09:52:06.769 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:52:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:52:07 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3493803474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:52:07 compute-0 ceph-mon[74313]: pgmap v3335: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:07 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3493803474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:52:07 compute-0 nova_compute[260935]: 2025-10-11 09:52:07.275 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:52:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3336: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:07 compute-0 nova_compute[260935]: 2025-10-11 09:52:07.931 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:52:07 compute-0 nova_compute[260935]: 2025-10-11 09:52:07.932 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:52:07 compute-0 nova_compute[260935]: 2025-10-11 09:52:07.932 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:52:07 compute-0 nova_compute[260935]: 2025-10-11 09:52:07.937 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:52:07 compute-0 nova_compute[260935]: 2025-10-11 09:52:07.938 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:52:07 compute-0 nova_compute[260935]: 2025-10-11 09:52:07.943 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:52:07 compute-0 nova_compute[260935]: 2025-10-11 09:52:07.944 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:52:08 compute-0 ceph-mgr[74605]: [devicehealth INFO root] Check health
Oct 11 09:52:08 compute-0 nova_compute[260935]: 2025-10-11 09:52:08.150 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:52:08 compute-0 nova_compute[260935]: 2025-10-11 09:52:08.151 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2796MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:52:08 compute-0 nova_compute[260935]: 2025-10-11 09:52:08.151 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:52:08 compute-0 nova_compute[260935]: 2025-10-11 09:52:08.151 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:52:08 compute-0 nova_compute[260935]: 2025-10-11 09:52:08.252 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:52:08 compute-0 nova_compute[260935]: 2025-10-11 09:52:08.253 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:52:08 compute-0 nova_compute[260935]: 2025-10-11 09:52:08.253 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:52:08 compute-0 nova_compute[260935]: 2025-10-11 09:52:08.253 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:52:08 compute-0 nova_compute[260935]: 2025-10-11 09:52:08.254 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:52:08 compute-0 nova_compute[260935]: 2025-10-11 09:52:08.326 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:52:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:52:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2449278572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:52:08 compute-0 nova_compute[260935]: 2025-10-11 09:52:08.824 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:52:08 compute-0 nova_compute[260935]: 2025-10-11 09:52:08.832 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:52:08 compute-0 nova_compute[260935]: 2025-10-11 09:52:08.866 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:52:08 compute-0 nova_compute[260935]: 2025-10-11 09:52:08.869 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:52:08 compute-0 nova_compute[260935]: 2025-10-11 09:52:08.869 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:52:09 compute-0 ceph-mon[74313]: pgmap v3336: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2449278572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:52:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3337: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:52:10 compute-0 nova_compute[260935]: 2025-10-11 09:52:10.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:10 compute-0 nova_compute[260935]: 2025-10-11 09:52:10.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:11 compute-0 ceph-mon[74313]: pgmap v3337: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3338: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:11 compute-0 nova_compute[260935]: 2025-10-11 09:52:11.870 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:52:11 compute-0 nova_compute[260935]: 2025-10-11 09:52:11.871 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:52:13 compute-0 ceph-mon[74313]: pgmap v3338: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3339: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:52:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:52:15.248 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:52:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:52:15.249 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:52:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:52:15.249 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:52:15 compute-0 ceph-mon[74313]: pgmap v3339: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3340: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:15 compute-0 nova_compute[260935]: 2025-10-11 09:52:15.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:15 compute-0 nova_compute[260935]: 2025-10-11 09:52:15.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:17 compute-0 ceph-mon[74313]: pgmap v3340: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3341: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:19 compute-0 ceph-mon[74313]: pgmap v3341: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3342: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:52:20 compute-0 nova_compute[260935]: 2025-10-11 09:52:20.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:52:20 compute-0 nova_compute[260935]: 2025-10-11 09:52:20.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:20 compute-0 nova_compute[260935]: 2025-10-11 09:52:20.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:21 compute-0 sudo[442571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:52:21 compute-0 sudo[442571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:21 compute-0 sudo[442571]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:21 compute-0 ceph-mon[74313]: pgmap v3342: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:21 compute-0 sudo[442596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:52:21 compute-0 sudo[442596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:21 compute-0 sudo[442596]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:21 compute-0 sudo[442621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:52:21 compute-0 sudo[442621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:21 compute-0 sudo[442621]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:21 compute-0 sudo[442646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:52:21 compute-0 sudo[442646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3343: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:22 compute-0 sudo[442646]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:52:22 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:52:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:52:22 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:52:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:52:22 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:52:22 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 1d2c3ceb-f25e-4cb1-8dcc-ea2a9cdabf73 does not exist
Oct 11 09:52:22 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 036b607d-623d-4273-a2bf-3df42f46c21e does not exist
Oct 11 09:52:22 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 778ebda6-096c-4bde-af35-60e6264389bf does not exist
Oct 11 09:52:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:52:22 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:52:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:52:22 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:52:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:52:22 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:52:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:52:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:52:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:52:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:52:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:52:22 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:52:22 compute-0 sudo[442701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:52:22 compute-0 sudo[442701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:22 compute-0 sudo[442701]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:22 compute-0 sudo[442726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:52:22 compute-0 sudo[442726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:22 compute-0 sudo[442726]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:22 compute-0 sudo[442751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:52:22 compute-0 sudo[442751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:22 compute-0 sudo[442751]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:22 compute-0 sudo[442776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:52:22 compute-0 sudo[442776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:23 compute-0 podman[442843]: 2025-10-11 09:52:23.15610402 +0000 UTC m=+0.073319177 container create ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_payne, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 11 09:52:23 compute-0 systemd[1]: Started libpod-conmon-ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8.scope.
Oct 11 09:52:23 compute-0 podman[442843]: 2025-10-11 09:52:23.123417153 +0000 UTC m=+0.040632350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:52:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:52:23 compute-0 podman[442843]: 2025-10-11 09:52:23.269787149 +0000 UTC m=+0.187002356 container init ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:52:23 compute-0 podman[442843]: 2025-10-11 09:52:23.281726084 +0000 UTC m=+0.198941241 container start ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_payne, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:52:23 compute-0 podman[442843]: 2025-10-11 09:52:23.287764243 +0000 UTC m=+0.204979400 container attach ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_payne, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 09:52:23 compute-0 exciting_payne[442860]: 167 167
Oct 11 09:52:23 compute-0 systemd[1]: libpod-ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8.scope: Deactivated successfully.
Oct 11 09:52:23 compute-0 podman[442843]: 2025-10-11 09:52:23.292515466 +0000 UTC m=+0.209730623 container died ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_payne, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:52:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-74588cbe3ce958cec3621ce2299e05471194417a59ad23d4cdf9229ab3e9f2e7-merged.mount: Deactivated successfully.
Oct 11 09:52:23 compute-0 podman[442843]: 2025-10-11 09:52:23.347348444 +0000 UTC m=+0.264563601 container remove ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:52:23 compute-0 ceph-mon[74313]: pgmap v3343: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:23 compute-0 systemd[1]: libpod-conmon-ac7b95d600782c8123b4a5884e4c21cf37c222d19d925dfbba96ee47df5053d8.scope: Deactivated successfully.
Oct 11 09:52:23 compute-0 podman[442884]: 2025-10-11 09:52:23.623359956 +0000 UTC m=+0.081030854 container create 2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nightingale, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 09:52:23 compute-0 podman[442884]: 2025-10-11 09:52:23.593007694 +0000 UTC m=+0.050678652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:52:23 compute-0 systemd[1]: Started libpod-conmon-2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0.scope.
Oct 11 09:52:23 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:52:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/502bf17f5b99edf7170471aaf9fc97fd91bcfe520795ac8b81f5f1a0deed15a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:52:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/502bf17f5b99edf7170471aaf9fc97fd91bcfe520795ac8b81f5f1a0deed15a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:52:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/502bf17f5b99edf7170471aaf9fc97fd91bcfe520795ac8b81f5f1a0deed15a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:52:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/502bf17f5b99edf7170471aaf9fc97fd91bcfe520795ac8b81f5f1a0deed15a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:52:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3344: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/502bf17f5b99edf7170471aaf9fc97fd91bcfe520795ac8b81f5f1a0deed15a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:52:23 compute-0 podman[442884]: 2025-10-11 09:52:23.759390911 +0000 UTC m=+0.217061849 container init 2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nightingale, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:52:23 compute-0 podman[442884]: 2025-10-11 09:52:23.773067495 +0000 UTC m=+0.230738363 container start 2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nightingale, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 09:52:23 compute-0 podman[442884]: 2025-10-11 09:52:23.777977372 +0000 UTC m=+0.235648330 container attach 2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 09:52:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:52:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:52:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:52:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:52:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:52:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:52:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:52:24 compute-0 xenodochial_nightingale[442900]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:52:24 compute-0 xenodochial_nightingale[442900]: --> relative data size: 1.0
Oct 11 09:52:24 compute-0 xenodochial_nightingale[442900]: --> All data devices are unavailable
Oct 11 09:52:24 compute-0 systemd[1]: libpod-2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0.scope: Deactivated successfully.
Oct 11 09:52:24 compute-0 systemd[1]: libpod-2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0.scope: Consumed 1.133s CPU time.
Oct 11 09:52:24 compute-0 podman[442884]: 2025-10-11 09:52:24.983811112 +0000 UTC m=+1.441482020 container died 2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 11 09:52:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-502bf17f5b99edf7170471aaf9fc97fd91bcfe520795ac8b81f5f1a0deed15a3-merged.mount: Deactivated successfully.
Oct 11 09:52:25 compute-0 podman[442884]: 2025-10-11 09:52:25.051566823 +0000 UTC m=+1.509237681 container remove 2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nightingale, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 11 09:52:25 compute-0 systemd[1]: libpod-conmon-2b3fd22d62300aacbb8a24491b5c7f83de1dec65cf50cc010d23d13a1529a6e0.scope: Deactivated successfully.
Oct 11 09:52:25 compute-0 sudo[442776]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:25 compute-0 sudo[442944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:52:25 compute-0 sudo[442944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:25 compute-0 sudo[442944]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:25 compute-0 sudo[442969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:52:25 compute-0 sudo[442969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:25 compute-0 sudo[442969]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:25 compute-0 sudo[442994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:52:25 compute-0 sudo[442994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:25 compute-0 sudo[442994]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:25 compute-0 ceph-mon[74313]: pgmap v3344: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:25 compute-0 sudo[443019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:52:25 compute-0 sudo[443019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3345: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:25 compute-0 nova_compute[260935]: 2025-10-11 09:52:25.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:25 compute-0 podman[443084]: 2025-10-11 09:52:25.821634921 +0000 UTC m=+0.062416031 container create 4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 09:52:25 compute-0 systemd[1]: Started libpod-conmon-4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974.scope.
Oct 11 09:52:25 compute-0 podman[443084]: 2025-10-11 09:52:25.791411923 +0000 UTC m=+0.032193073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:52:25 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:52:25 compute-0 nova_compute[260935]: 2025-10-11 09:52:25.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:25 compute-0 podman[443084]: 2025-10-11 09:52:25.917954563 +0000 UTC m=+0.158735723 container init 4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 09:52:25 compute-0 podman[443084]: 2025-10-11 09:52:25.930411592 +0000 UTC m=+0.171192702 container start 4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:52:25 compute-0 podman[443084]: 2025-10-11 09:52:25.935027492 +0000 UTC m=+0.175808652 container attach 4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 09:52:25 compute-0 hopeful_wright[443100]: 167 167
Oct 11 09:52:25 compute-0 systemd[1]: libpod-4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974.scope: Deactivated successfully.
Oct 11 09:52:25 compute-0 podman[443084]: 2025-10-11 09:52:25.939711663 +0000 UTC m=+0.180492823 container died 4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:52:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-6437c1743ae006fb478b640167a9cd92f9d63443d18dc9965d6a243df5ef3815-merged.mount: Deactivated successfully.
Oct 11 09:52:25 compute-0 podman[443084]: 2025-10-11 09:52:25.992519504 +0000 UTC m=+0.233300604 container remove 4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:52:26 compute-0 systemd[1]: libpod-conmon-4aff7f179fe7f2a380656a584e79e9063fafb148b490ca0937aa0bfe93db7974.scope: Deactivated successfully.
Oct 11 09:52:26 compute-0 podman[443124]: 2025-10-11 09:52:26.231782345 +0000 UTC m=+0.057156184 container create 5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct 11 09:52:26 compute-0 systemd[1]: Started libpod-conmon-5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a.scope.
Oct 11 09:52:26 compute-0 podman[443124]: 2025-10-11 09:52:26.208295066 +0000 UTC m=+0.033668935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:52:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:52:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0569c9bb76635c135bdf41cb7d16f62afd8b2219bb1fbecf3fa7b64ef1dd84b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:52:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0569c9bb76635c135bdf41cb7d16f62afd8b2219bb1fbecf3fa7b64ef1dd84b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:52:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0569c9bb76635c135bdf41cb7d16f62afd8b2219bb1fbecf3fa7b64ef1dd84b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:52:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0569c9bb76635c135bdf41cb7d16f62afd8b2219bb1fbecf3fa7b64ef1dd84b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:52:26 compute-0 podman[443124]: 2025-10-11 09:52:26.34106332 +0000 UTC m=+0.166437199 container init 5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 11 09:52:26 compute-0 podman[443124]: 2025-10-11 09:52:26.356185164 +0000 UTC m=+0.181559023 container start 5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:52:26 compute-0 podman[443124]: 2025-10-11 09:52:26.360559857 +0000 UTC m=+0.185933726 container attach 5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:52:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:52:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3888739876' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:52:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:52:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3888739876' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:52:26 compute-0 sshd-session[443141]: Invalid user ali from 43.157.67.116 port 40800
Oct 11 09:52:26 compute-0 sshd-session[443141]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:52:26 compute-0 sshd-session[443141]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.157.67.116
Oct 11 09:52:27 compute-0 boring_benz[443142]: {
Oct 11 09:52:27 compute-0 boring_benz[443142]:     "0": [
Oct 11 09:52:27 compute-0 boring_benz[443142]:         {
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "devices": [
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "/dev/loop3"
Oct 11 09:52:27 compute-0 boring_benz[443142]:             ],
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "lv_name": "ceph_lv0",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "lv_size": "21470642176",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "name": "ceph_lv0",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "tags": {
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.cluster_name": "ceph",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.crush_device_class": "",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.encrypted": "0",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.osd_id": "0",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.type": "block",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.vdo": "0"
Oct 11 09:52:27 compute-0 boring_benz[443142]:             },
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "type": "block",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "vg_name": "ceph_vg0"
Oct 11 09:52:27 compute-0 boring_benz[443142]:         }
Oct 11 09:52:27 compute-0 boring_benz[443142]:     ],
Oct 11 09:52:27 compute-0 boring_benz[443142]:     "1": [
Oct 11 09:52:27 compute-0 boring_benz[443142]:         {
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "devices": [
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "/dev/loop4"
Oct 11 09:52:27 compute-0 boring_benz[443142]:             ],
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "lv_name": "ceph_lv1",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "lv_size": "21470642176",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "name": "ceph_lv1",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "tags": {
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.cluster_name": "ceph",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.crush_device_class": "",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.encrypted": "0",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.osd_id": "1",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.type": "block",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.vdo": "0"
Oct 11 09:52:27 compute-0 boring_benz[443142]:             },
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "type": "block",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "vg_name": "ceph_vg1"
Oct 11 09:52:27 compute-0 boring_benz[443142]:         }
Oct 11 09:52:27 compute-0 boring_benz[443142]:     ],
Oct 11 09:52:27 compute-0 boring_benz[443142]:     "2": [
Oct 11 09:52:27 compute-0 boring_benz[443142]:         {
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "devices": [
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "/dev/loop5"
Oct 11 09:52:27 compute-0 boring_benz[443142]:             ],
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "lv_name": "ceph_lv2",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "lv_size": "21470642176",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "name": "ceph_lv2",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "tags": {
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.cluster_name": "ceph",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.crush_device_class": "",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.encrypted": "0",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.osd_id": "2",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.type": "block",
Oct 11 09:52:27 compute-0 boring_benz[443142]:                 "ceph.vdo": "0"
Oct 11 09:52:27 compute-0 boring_benz[443142]:             },
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "type": "block",
Oct 11 09:52:27 compute-0 boring_benz[443142]:             "vg_name": "ceph_vg2"
Oct 11 09:52:27 compute-0 boring_benz[443142]:         }
Oct 11 09:52:27 compute-0 boring_benz[443142]:     ]
Oct 11 09:52:27 compute-0 boring_benz[443142]: }
Oct 11 09:52:27 compute-0 systemd[1]: libpod-5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a.scope: Deactivated successfully.
Oct 11 09:52:27 compute-0 podman[443152]: 2025-10-11 09:52:27.252491763 +0000 UTC m=+0.047780151 container died 5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct 11 09:52:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0569c9bb76635c135bdf41cb7d16f62afd8b2219bb1fbecf3fa7b64ef1dd84b3-merged.mount: Deactivated successfully.
Oct 11 09:52:27 compute-0 podman[443152]: 2025-10-11 09:52:27.334620007 +0000 UTC m=+0.129908345 container remove 5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 09:52:27 compute-0 systemd[1]: libpod-conmon-5217a001c971e6b69be8b4810e19005cfa228dd6e7b3f456e5efff0ab897da0a.scope: Deactivated successfully.
Oct 11 09:52:27 compute-0 ceph-mon[74313]: pgmap v3345: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3888739876' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:52:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3888739876' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:52:27 compute-0 sudo[443019]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:27 compute-0 sudo[443167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:52:27 compute-0 sudo[443167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:27 compute-0 sudo[443167]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:27 compute-0 sudo[443192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:52:27 compute-0 sudo[443192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:27 compute-0 sudo[443192]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:27 compute-0 sudo[443217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:52:27 compute-0 sudo[443217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:27 compute-0 sudo[443217]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3346: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:27 compute-0 sshd-session[442570]: error: kex_exchange_identification: read: Connection timed out
Oct 11 09:52:27 compute-0 sshd-session[442570]: banner exchange: Connection from 180.76.96.64 port 56976: Connection timed out
Oct 11 09:52:27 compute-0 sudo[443242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:52:27 compute-0 sudo[443242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:28 compute-0 podman[443309]: 2025-10-11 09:52:28.211529652 +0000 UTC m=+0.061033793 container create 4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 11 09:52:28 compute-0 systemd[1]: Started libpod-conmon-4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac.scope.
Oct 11 09:52:28 compute-0 podman[443309]: 2025-10-11 09:52:28.180898603 +0000 UTC m=+0.030402804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:52:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:52:28 compute-0 podman[443309]: 2025-10-11 09:52:28.314075747 +0000 UTC m=+0.163579858 container init 4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:52:28 compute-0 podman[443309]: 2025-10-11 09:52:28.322690539 +0000 UTC m=+0.172194680 container start 4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 11 09:52:28 compute-0 podman[443309]: 2025-10-11 09:52:28.326560358 +0000 UTC m=+0.176064499 container attach 4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:52:28 compute-0 strange_banzai[443327]: 167 167
Oct 11 09:52:28 compute-0 systemd[1]: libpod-4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac.scope: Deactivated successfully.
Oct 11 09:52:28 compute-0 podman[443309]: 2025-10-11 09:52:28.32843083 +0000 UTC m=+0.177934961 container died 4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 09:52:28 compute-0 podman[443324]: 2025-10-11 09:52:28.352999449 +0000 UTC m=+0.091485376 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:52:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d63e15f5405a0e9164ba08e01eaab037e9845803369b2687b1be9462edfe13c0-merged.mount: Deactivated successfully.
Oct 11 09:52:28 compute-0 podman[443309]: 2025-10-11 09:52:28.375249333 +0000 UTC m=+0.224753444 container remove 4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 09:52:28 compute-0 systemd[1]: libpod-conmon-4034d87be5b69bccc424fb774a0f7aa0c394cbd3a67a52bb3e1274cbfed1e0ac.scope: Deactivated successfully.
Oct 11 09:52:28 compute-0 podman[443368]: 2025-10-11 09:52:28.618410143 +0000 UTC m=+0.073367898 container create ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:52:28 compute-0 systemd[1]: Started libpod-conmon-ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db.scope.
Oct 11 09:52:28 compute-0 podman[443368]: 2025-10-11 09:52:28.58868106 +0000 UTC m=+0.043638905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:52:28 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0609ac1a6278c1e0c2e8e55db2fec1996a04eb3f75e3aeda92a49f9baa0dd142/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0609ac1a6278c1e0c2e8e55db2fec1996a04eb3f75e3aeda92a49f9baa0dd142/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0609ac1a6278c1e0c2e8e55db2fec1996a04eb3f75e3aeda92a49f9baa0dd142/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0609ac1a6278c1e0c2e8e55db2fec1996a04eb3f75e3aeda92a49f9baa0dd142/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:52:28 compute-0 podman[443368]: 2025-10-11 09:52:28.732194285 +0000 UTC m=+0.187152130 container init ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:52:28 compute-0 podman[443368]: 2025-10-11 09:52:28.749336076 +0000 UTC m=+0.204293871 container start ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 09:52:28 compute-0 podman[443368]: 2025-10-11 09:52:28.754609673 +0000 UTC m=+0.209567538 container attach ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:52:28 compute-0 sshd-session[443141]: Failed password for invalid user ali from 43.157.67.116 port 40800 ssh2
Oct 11 09:52:29 compute-0 ceph-mon[74313]: pgmap v3346: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:29 compute-0 friendly_pike[443384]: {
Oct 11 09:52:29 compute-0 friendly_pike[443384]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:52:29 compute-0 friendly_pike[443384]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:52:29 compute-0 friendly_pike[443384]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:52:29 compute-0 friendly_pike[443384]:         "osd_id": 2,
Oct 11 09:52:29 compute-0 friendly_pike[443384]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:52:29 compute-0 friendly_pike[443384]:         "type": "bluestore"
Oct 11 09:52:29 compute-0 friendly_pike[443384]:     },
Oct 11 09:52:29 compute-0 friendly_pike[443384]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:52:29 compute-0 friendly_pike[443384]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:52:29 compute-0 friendly_pike[443384]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:52:29 compute-0 friendly_pike[443384]:         "osd_id": 0,
Oct 11 09:52:29 compute-0 friendly_pike[443384]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:52:29 compute-0 friendly_pike[443384]:         "type": "bluestore"
Oct 11 09:52:29 compute-0 friendly_pike[443384]:     },
Oct 11 09:52:29 compute-0 friendly_pike[443384]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:52:29 compute-0 friendly_pike[443384]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:52:29 compute-0 friendly_pike[443384]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:52:29 compute-0 friendly_pike[443384]:         "osd_id": 1,
Oct 11 09:52:29 compute-0 friendly_pike[443384]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:52:29 compute-0 friendly_pike[443384]:         "type": "bluestore"
Oct 11 09:52:29 compute-0 friendly_pike[443384]:     }
Oct 11 09:52:29 compute-0 friendly_pike[443384]: }
Oct 11 09:52:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3347: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:29 compute-0 systemd[1]: libpod-ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db.scope: Deactivated successfully.
Oct 11 09:52:29 compute-0 podman[443368]: 2025-10-11 09:52:29.762182004 +0000 UTC m=+1.217139789 container died ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:52:29 compute-0 systemd[1]: libpod-ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db.scope: Consumed 1.021s CPU time.
Oct 11 09:52:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-0609ac1a6278c1e0c2e8e55db2fec1996a04eb3f75e3aeda92a49f9baa0dd142-merged.mount: Deactivated successfully.
Oct 11 09:52:29 compute-0 podman[443368]: 2025-10-11 09:52:29.842331252 +0000 UTC m=+1.297289047 container remove ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 09:52:29 compute-0 systemd[1]: libpod-conmon-ff0369e4ff79bd49bb7acc649c5e7cf295f2529c4f6a5001c3181f1e6b9fc3db.scope: Deactivated successfully.
Oct 11 09:52:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:52:29 compute-0 sudo[443242]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:52:29 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:52:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:52:29 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:52:29 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev d0f18d7f-e7ef-46bc-afb0-ca36b4e450e6 does not exist
Oct 11 09:52:29 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9cf034ad-69fd-4b13-9dfa-ee7fb61a1209 does not exist
Oct 11 09:52:30 compute-0 sudo[443431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:52:30 compute-0 sudo[443431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:30 compute-0 sudo[443431]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:30 compute-0 sudo[443456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:52:30 compute-0 sudo[443456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:52:30 compute-0 sudo[443456]: pam_unix(sudo:session): session closed for user root
Oct 11 09:52:30 compute-0 sshd-session[443141]: Received disconnect from 43.157.67.116 port 40800:11: Bye Bye [preauth]
Oct 11 09:52:30 compute-0 sshd-session[443141]: Disconnected from invalid user ali 43.157.67.116 port 40800 [preauth]
Oct 11 09:52:30 compute-0 nova_compute[260935]: 2025-10-11 09:52:30.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:30 compute-0 ceph-mon[74313]: pgmap v3347: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:52:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:52:30 compute-0 nova_compute[260935]: 2025-10-11 09:52:30.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3348: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:32 compute-0 ceph-mon[74313]: pgmap v3348: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3349: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:33 compute-0 podman[443481]: 2025-10-11 09:52:33.805123878 +0000 UTC m=+0.091068745 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 09:52:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:52:34 compute-0 ceph-mon[74313]: pgmap v3349: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3350: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:35 compute-0 nova_compute[260935]: 2025-10-11 09:52:35.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:35 compute-0 nova_compute[260935]: 2025-10-11 09:52:35.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:36 compute-0 podman[443505]: 2025-10-11 09:52:36.847008598 +0000 UTC m=+0.133488705 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:52:36 compute-0 podman[443506]: 2025-10-11 09:52:36.860534987 +0000 UTC m=+0.140314516 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:52:36 compute-0 ceph-mon[74313]: pgmap v3350: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:37 compute-0 nova_compute[260935]: 2025-10-11 09:52:37.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:52:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3351: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:38 compute-0 ceph-mon[74313]: pgmap v3351: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3352: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:52:40 compute-0 nova_compute[260935]: 2025-10-11 09:52:40.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:40 compute-0 nova_compute[260935]: 2025-10-11 09:52:40.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:40 compute-0 ceph-mon[74313]: pgmap v3352: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3353: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:42 compute-0 ceph-mon[74313]: pgmap v3353: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3354: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:44 compute-0 sshd-session[443551]: Invalid user manageruser from 13.126.15.214 port 44400
Oct 11 09:52:44 compute-0 sshd-session[443551]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:52:44 compute-0 sshd-session[443551]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=13.126.15.214
Oct 11 09:52:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:52:44 compute-0 ceph-mon[74313]: pgmap v3354: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3355: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:45 compute-0 nova_compute[260935]: 2025-10-11 09:52:45.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:45 compute-0 nova_compute[260935]: 2025-10-11 09:52:45.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:46 compute-0 ceph-mon[74313]: pgmap v3355: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:46 compute-0 sshd-session[443551]: Failed password for invalid user manageruser from 13.126.15.214 port 44400 ssh2
Oct 11 09:52:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3356: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:48 compute-0 sshd-session[443551]: Received disconnect from 13.126.15.214 port 44400:11: Bye Bye [preauth]
Oct 11 09:52:48 compute-0 sshd-session[443551]: Disconnected from invalid user manageruser 13.126.15.214 port 44400 [preauth]
Oct 11 09:52:48 compute-0 ceph-mon[74313]: pgmap v3356: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3357: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:52:50 compute-0 nova_compute[260935]: 2025-10-11 09:52:50.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:50 compute-0 nova_compute[260935]: 2025-10-11 09:52:50.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:50 compute-0 ceph-mon[74313]: pgmap v3357: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3358: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:53 compute-0 ceph-mon[74313]: pgmap v3358: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3359: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:52:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:52:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:52:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:52:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:52:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:52:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:52:55 compute-0 ceph-mon[74313]: pgmap v3359: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:52:55
Oct 11 09:52:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:52:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:52:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.meta', 'default.rgw.control', 'vms', '.rgw.root']
Oct 11 09:52:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:52:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:52:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3360: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:55 compute-0 nova_compute[260935]: 2025-10-11 09:52:55.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:55 compute-0 nova_compute[260935]: 2025-10-11 09:52:55.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:52:57 compute-0 ceph-mon[74313]: pgmap v3360: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3361: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:58 compute-0 podman[443553]: 2025-10-11 09:52:58.80391887 +0000 UTC m=+0.095154080 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 09:52:59 compute-0 ceph-mon[74313]: pgmap v3361: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3362: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:52:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:53:00 compute-0 nova_compute[260935]: 2025-10-11 09:53:00.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:53:00 compute-0 nova_compute[260935]: 2025-10-11 09:53:00.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:53:00 compute-0 nova_compute[260935]: 2025-10-11 09:53:00.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:00 compute-0 nova_compute[260935]: 2025-10-11 09:53:00.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:01 compute-0 ceph-mon[74313]: pgmap v3362: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3363: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:03 compute-0 ceph-mon[74313]: pgmap v3363: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3364: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:04 compute-0 nova_compute[260935]: 2025-10-11 09:53:04.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:53:04 compute-0 nova_compute[260935]: 2025-10-11 09:53:04.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:53:04 compute-0 podman[443573]: 2025-10-11 09:53:04.803388866 +0000 UTC m=+0.095226629 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid)
Oct 11 09:53:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:53:05 compute-0 nova_compute[260935]: 2025-10-11 09:53:05.030 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:53:05 compute-0 nova_compute[260935]: 2025-10-11 09:53:05.031 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:53:05 compute-0 nova_compute[260935]: 2025-10-11 09:53:05.031 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:53:05 compute-0 ceph-mon[74313]: pgmap v3364: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:05 compute-0 nova_compute[260935]: 2025-10-11 09:53:05.272 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:53:05 compute-0 nova_compute[260935]: 2025-10-11 09:53:05.543 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:53:05 compute-0 nova_compute[260935]: 2025-10-11 09:53:05.567 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:53:05 compute-0 nova_compute[260935]: 2025-10-11 09:53:05.568 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:53:05 compute-0 nova_compute[260935]: 2025-10-11 09:53:05.569 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:53:05 compute-0 nova_compute[260935]: 2025-10-11 09:53:05.569 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:53:05 compute-0 nova_compute[260935]: 2025-10-11 09:53:05.569 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3365: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:53:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:53:05 compute-0 nova_compute[260935]: 2025-10-11 09:53:05.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:05 compute-0 nova_compute[260935]: 2025-10-11 09:53:05.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:06 compute-0 nova_compute[260935]: 2025-10-11 09:53:06.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:53:06 compute-0 nova_compute[260935]: 2025-10-11 09:53:06.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:53:06 compute-0 nova_compute[260935]: 2025-10-11 09:53:06.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:53:06 compute-0 nova_compute[260935]: 2025-10-11 09:53:06.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:53:06 compute-0 nova_compute[260935]: 2025-10-11 09:53:06.737 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:53:06 compute-0 nova_compute[260935]: 2025-10-11 09:53:06.737 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:53:07 compute-0 ceph-mon[74313]: pgmap v3365: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:53:07 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/146095201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.191 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.274 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.274 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.275 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.280 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.281 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.285 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.286 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.472 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.473 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2808MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.474 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.474 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.579 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.579 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.579 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.580 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.580 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:53:07 compute-0 nova_compute[260935]: 2025-10-11 09:53:07.693 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:53:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3366: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:07 compute-0 podman[443617]: 2025-10-11 09:53:07.791867773 +0000 UTC m=+0.085772481 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 11 09:53:07 compute-0 podman[443618]: 2025-10-11 09:53:07.825254369 +0000 UTC m=+0.120124004 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Oct 11 09:53:08 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/146095201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:53:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:53:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2438806809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:53:08 compute-0 nova_compute[260935]: 2025-10-11 09:53:08.206 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:53:08 compute-0 nova_compute[260935]: 2025-10-11 09:53:08.215 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:53:08 compute-0 nova_compute[260935]: 2025-10-11 09:53:08.241 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:53:08 compute-0 nova_compute[260935]: 2025-10-11 09:53:08.245 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:53:08 compute-0 nova_compute[260935]: 2025-10-11 09:53:08.246 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:53:09 compute-0 ceph-mon[74313]: pgmap v3366: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2438806809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:53:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3367: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:53:10 compute-0 nova_compute[260935]: 2025-10-11 09:53:10.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:10 compute-0 nova_compute[260935]: 2025-10-11 09:53:10.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:11 compute-0 ceph-mon[74313]: pgmap v3367: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:11 compute-0 nova_compute[260935]: 2025-10-11 09:53:11.246 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:53:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3368: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:12 compute-0 sshd-session[443684]: Invalid user es_user from 113.194.203.31 port 50070
Oct 11 09:53:12 compute-0 sshd-session[443684]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:53:12 compute-0 sshd-session[443684]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=113.194.203.31
Oct 11 09:53:12 compute-0 nova_compute[260935]: 2025-10-11 09:53:12.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:53:13 compute-0 ceph-mon[74313]: pgmap v3368: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:13 compute-0 sshd-session[443684]: Failed password for invalid user es_user from 113.194.203.31 port 50070 ssh2
Oct 11 09:53:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3369: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:53:15 compute-0 ceph-mon[74313]: pgmap v3369: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:53:15.250 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:53:15.251 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:53:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:53:15.251 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:53:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3370: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:15 compute-0 nova_compute[260935]: 2025-10-11 09:53:15.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:15 compute-0 nova_compute[260935]: 2025-10-11 09:53:15.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:17 compute-0 ceph-mon[74313]: pgmap v3370: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3371: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:19 compute-0 ceph-mon[74313]: pgmap v3371: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3372: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:53:20 compute-0 nova_compute[260935]: 2025-10-11 09:53:20.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:20 compute-0 nova_compute[260935]: 2025-10-11 09:53:20.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:21 compute-0 ceph-mon[74313]: pgmap v3372: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3373: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:23 compute-0 ceph-mon[74313]: pgmap v3373: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3374: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:53:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:53:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:53:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:53:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:53:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:53:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:53:25 compute-0 ceph-mon[74313]: pgmap v3374: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3375: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:25 compute-0 nova_compute[260935]: 2025-10-11 09:53:25.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:25 compute-0 nova_compute[260935]: 2025-10-11 09:53:25.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:53:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1420280469' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:53:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:53:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1420280469' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:53:27 compute-0 ceph-mon[74313]: pgmap v3375: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1420280469' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:53:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/1420280469' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:53:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3376: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:29 compute-0 ceph-mon[74313]: pgmap v3376: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.189074) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176409189142, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1112, "num_deletes": 251, "total_data_size": 1660603, "memory_usage": 1690832, "flush_reason": "Manual Compaction"}
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176409203190, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 1644955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69236, "largest_seqno": 70347, "table_properties": {"data_size": 1639527, "index_size": 2886, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11425, "raw_average_key_size": 19, "raw_value_size": 1628705, "raw_average_value_size": 2812, "num_data_blocks": 130, "num_entries": 579, "num_filter_entries": 579, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760176295, "oldest_key_time": 1760176295, "file_creation_time": 1760176409, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 14184 microseconds, and 8778 cpu microseconds.
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.203253) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 1644955 bytes OK
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.203292) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.204748) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.204774) EVENT_LOG_v1 {"time_micros": 1760176409204765, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.204800) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 1655472, prev total WAL file size 1655472, number of live WAL files 2.
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.205952) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(1606KB)], [164(9401KB)]
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176409206012, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 11272290, "oldest_snapshot_seqno": -1}
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8695 keys, 9527171 bytes, temperature: kUnknown
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176409252388, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 9527171, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9473441, "index_size": 30897, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 227896, "raw_average_key_size": 26, "raw_value_size": 9322517, "raw_average_value_size": 1072, "num_data_blocks": 1191, "num_entries": 8695, "num_filter_entries": 8695, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760176409, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.252709) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 9527171 bytes
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.254361) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 242.6 rd, 205.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 9.2 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(12.6) write-amplify(5.8) OK, records in: 9209, records dropped: 514 output_compression: NoCompression
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.254390) EVENT_LOG_v1 {"time_micros": 1760176409254376, "job": 102, "event": "compaction_finished", "compaction_time_micros": 46471, "compaction_time_cpu_micros": 28440, "output_level": 6, "num_output_files": 1, "total_output_size": 9527171, "num_input_records": 9209, "num_output_records": 8695, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176409255059, "job": 102, "event": "table_file_deletion", "file_number": 166}
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176409258351, "job": 102, "event": "table_file_deletion", "file_number": 164}
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.205803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.258455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.258463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.258465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.258467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:53:29 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:53:29.258469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:53:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3377: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:29 compute-0 podman[443686]: 2025-10-11 09:53:29.800412387 +0000 UTC m=+0.082403866 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 09:53:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:53:30 compute-0 sudo[443705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:53:30 compute-0 sudo[443705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:30 compute-0 sudo[443705]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:30 compute-0 sudo[443730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:53:30 compute-0 sudo[443730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:30 compute-0 sudo[443730]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:30 compute-0 sudo[443755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:53:30 compute-0 sudo[443755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:30 compute-0 sudo[443755]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:30 compute-0 sudo[443780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:53:30 compute-0 sudo[443780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:30 compute-0 nova_compute[260935]: 2025-10-11 09:53:30.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:30 compute-0 nova_compute[260935]: 2025-10-11 09:53:30.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:31 compute-0 ceph-mon[74313]: pgmap v3377: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:31 compute-0 sudo[443780]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:53:31 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:53:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:53:31 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:53:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:53:31 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:53:31 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 8436e64b-8bba-4f6f-b6de-cb996a041f87 does not exist
Oct 11 09:53:31 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ad507034-ffdb-4e60-b112-14a727af4b12 does not exist
Oct 11 09:53:31 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 236745c2-3e63-4b6c-adb4-ce1cae1186d9 does not exist
Oct 11 09:53:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:53:31 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:53:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:53:31 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:53:31 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:53:31 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:53:31 compute-0 sudo[443837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:53:31 compute-0 sudo[443837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:31 compute-0 sudo[443837]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:31 compute-0 sudo[443862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:53:31 compute-0 sudo[443862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:31 compute-0 sudo[443862]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:31 compute-0 sudo[443887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:53:31 compute-0 sudo[443887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:31 compute-0 sudo[443887]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:31 compute-0 sudo[443912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:53:31 compute-0 sudo[443912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3378: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:32 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:53:32 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:53:32 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:53:32 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:53:32 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:53:32 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:53:32 compute-0 podman[443978]: 2025-10-11 09:53:32.256047008 +0000 UTC m=+0.104829832 container create c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:53:32 compute-0 podman[443978]: 2025-10-11 09:53:32.202412828 +0000 UTC m=+0.051195722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:53:32 compute-0 systemd[1]: Started libpod-conmon-c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0.scope.
Oct 11 09:53:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:53:32 compute-0 podman[443978]: 2025-10-11 09:53:32.385264149 +0000 UTC m=+0.234047033 container init c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 09:53:32 compute-0 podman[443978]: 2025-10-11 09:53:32.404340089 +0000 UTC m=+0.253122923 container start c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 09:53:32 compute-0 vigorous_boyd[443994]: 167 167
Oct 11 09:53:32 compute-0 systemd[1]: libpod-c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0.scope: Deactivated successfully.
Oct 11 09:53:32 compute-0 podman[443978]: 2025-10-11 09:53:32.41637378 +0000 UTC m=+0.265156694 container attach c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 09:53:32 compute-0 podman[443978]: 2025-10-11 09:53:32.417156793 +0000 UTC m=+0.265939627 container died c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 11 09:53:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a787b26cb92ca8ef8e91cba474b1e2170bcc35526cf0162f3dcf3c4d6aeb0bda-merged.mount: Deactivated successfully.
Oct 11 09:53:32 compute-0 podman[443978]: 2025-10-11 09:53:32.491384376 +0000 UTC m=+0.340167180 container remove c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_boyd, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 09:53:32 compute-0 systemd[1]: libpod-conmon-c6f82c3f876aaf91fc0c8741440731aa3d5702ef628e9245b2535d479662d0b0.scope: Deactivated successfully.
Oct 11 09:53:32 compute-0 sshd-session[443999]: Accepted publickey for zuul from 192.168.122.30 port 45338 ssh2: ECDSA SHA256:KKxgUhG08XzjYMLOyvbR+tXItyOnGoLl6Fn32NV5afE
Oct 11 09:53:32 compute-0 systemd-logind[819]: New session 54 of user zuul.
Oct 11 09:53:32 compute-0 systemd[1]: Started Session 54 of User zuul.
Oct 11 09:53:32 compute-0 sshd-session[443999]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 09:53:32 compute-0 podman[444045]: 2025-10-11 09:53:32.753956045 +0000 UTC m=+0.063765207 container create a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:53:32 compute-0 systemd[1]: Started libpod-conmon-a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa.scope.
Oct 11 09:53:32 compute-0 podman[444045]: 2025-10-11 09:53:32.732402804 +0000 UTC m=+0.042211956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:53:32 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:53:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45b1ba0015bd1ffe54b10c5bd6ad7e6eaaaf04799b9d3cc3acdc66a5e6838206/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:53:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45b1ba0015bd1ffe54b10c5bd6ad7e6eaaaf04799b9d3cc3acdc66a5e6838206/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:53:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45b1ba0015bd1ffe54b10c5bd6ad7e6eaaaf04799b9d3cc3acdc66a5e6838206/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:53:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45b1ba0015bd1ffe54b10c5bd6ad7e6eaaaf04799b9d3cc3acdc66a5e6838206/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:53:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45b1ba0015bd1ffe54b10c5bd6ad7e6eaaaf04799b9d3cc3acdc66a5e6838206/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:53:32 compute-0 podman[444045]: 2025-10-11 09:53:32.8737926 +0000 UTC m=+0.183601742 container init a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 11 09:53:32 compute-0 podman[444045]: 2025-10-11 09:53:32.887162309 +0000 UTC m=+0.196971451 container start a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 09:53:32 compute-0 podman[444045]: 2025-10-11 09:53:32.892202842 +0000 UTC m=+0.202011984 container attach a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:53:33 compute-0 ceph-mon[74313]: pgmap v3378: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3379: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 11 09:53:33 compute-0 agitated_bhaskara[444085]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:53:33 compute-0 agitated_bhaskara[444085]: --> relative data size: 1.0
Oct 11 09:53:33 compute-0 agitated_bhaskara[444085]: --> All data devices are unavailable
Oct 11 09:53:34 compute-0 systemd[1]: libpod-a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa.scope: Deactivated successfully.
Oct 11 09:53:34 compute-0 podman[444045]: 2025-10-11 09:53:34.001330269 +0000 UTC m=+1.311139431 container died a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 11 09:53:34 compute-0 systemd[1]: libpod-a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa.scope: Consumed 1.054s CPU time.
Oct 11 09:53:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-45b1ba0015bd1ffe54b10c5bd6ad7e6eaaaf04799b9d3cc3acdc66a5e6838206-merged.mount: Deactivated successfully.
Oct 11 09:53:34 compute-0 podman[444045]: 2025-10-11 09:53:34.092752729 +0000 UTC m=+1.402561881 container remove a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:53:34 compute-0 systemd[1]: libpod-conmon-a6d42cc1575d2fba891c5aee98c768788f518430903ff6225c8f172dcd1cc8fa.scope: Deactivated successfully.
Oct 11 09:53:34 compute-0 sudo[443912]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:34 compute-0 sudo[444128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:53:34 compute-0 sudo[444128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:34 compute-0 sudo[444128]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:34 compute-0 sudo[444153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:53:34 compute-0 sudo[444153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:34 compute-0 sudo[444153]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:34 compute-0 sudo[444178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:53:34 compute-0 sudo[444178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:34 compute-0 sudo[444178]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:34 compute-0 sudo[444203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:53:34 compute-0 sudo[444203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:53:34.947 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:53:34 compute-0 nova_compute[260935]: 2025-10-11 09:53:34.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:34 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:53:34.949 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:53:34 compute-0 podman[444268]: 2025-10-11 09:53:34.962387371 +0000 UTC m=+0.060884677 container create b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 11 09:53:35 compute-0 systemd[1]: Started libpod-conmon-b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca.scope.
Oct 11 09:53:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:53:35 compute-0 podman[444268]: 2025-10-11 09:53:34.936455156 +0000 UTC m=+0.034952492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:53:35 compute-0 podman[444268]: 2025-10-11 09:53:35.05413548 +0000 UTC m=+0.152632806 container init b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 09:53:35 compute-0 podman[444268]: 2025-10-11 09:53:35.064296348 +0000 UTC m=+0.162793654 container start b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:53:35 compute-0 podman[444268]: 2025-10-11 09:53:35.067615502 +0000 UTC m=+0.166112798 container attach b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct 11 09:53:35 compute-0 stupefied_keldysh[444285]: 167 167
Oct 11 09:53:35 compute-0 systemd[1]: libpod-b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca.scope: Deactivated successfully.
Oct 11 09:53:35 compute-0 conmon[444285]: conmon b50ffd1ae993c0581264 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca.scope/container/memory.events
Oct 11 09:53:35 compute-0 podman[444268]: 2025-10-11 09:53:35.07458505 +0000 UTC m=+0.173082356 container died b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 09:53:35 compute-0 podman[444282]: 2025-10-11 09:53:35.075509546 +0000 UTC m=+0.070507269 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 09:53:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-99224cadd617baa3331a404824366928daac2a44d4398c8963bdc2fc70f569d8-merged.mount: Deactivated successfully.
Oct 11 09:53:35 compute-0 podman[444268]: 2025-10-11 09:53:35.126236663 +0000 UTC m=+0.224733999 container remove b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 09:53:35 compute-0 systemd[1]: libpod-conmon-b50ffd1ae993c05812643e7dc4fd7d6f2a9e9c0102d6eb07d43d078423a07aca.scope: Deactivated successfully.
Oct 11 09:53:35 compute-0 ceph-mon[74313]: pgmap v3379: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 11 09:53:35 compute-0 podman[444329]: 2025-10-11 09:53:35.377726649 +0000 UTC m=+0.069263663 container create b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:53:35 compute-0 systemd[1]: Started libpod-conmon-b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054.scope.
Oct 11 09:53:35 compute-0 podman[444329]: 2025-10-11 09:53:35.357141696 +0000 UTC m=+0.048678710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:53:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:53:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650507ce9c18210302f59edcecb10af5746b542287b0ec85b9288d5797981f81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:53:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650507ce9c18210302f59edcecb10af5746b542287b0ec85b9288d5797981f81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:53:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650507ce9c18210302f59edcecb10af5746b542287b0ec85b9288d5797981f81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:53:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650507ce9c18210302f59edcecb10af5746b542287b0ec85b9288d5797981f81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:53:35 compute-0 podman[444329]: 2025-10-11 09:53:35.509601106 +0000 UTC m=+0.201138130 container init b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 09:53:35 compute-0 podman[444329]: 2025-10-11 09:53:35.521990407 +0000 UTC m=+0.213527411 container start b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 09:53:35 compute-0 podman[444329]: 2025-10-11 09:53:35.525896198 +0000 UTC m=+0.217433232 container attach b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 11 09:53:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3380: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 11 09:53:35 compute-0 nova_compute[260935]: 2025-10-11 09:53:35.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]: {
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:     "0": [
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:         {
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "devices": [
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "/dev/loop3"
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             ],
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "lv_name": "ceph_lv0",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "lv_size": "21470642176",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "name": "ceph_lv0",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "tags": {
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.cluster_name": "ceph",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.crush_device_class": "",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.encrypted": "0",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.osd_id": "0",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.type": "block",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.vdo": "0"
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             },
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "type": "block",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "vg_name": "ceph_vg0"
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:         }
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:     ],
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:     "1": [
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:         {
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "devices": [
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "/dev/loop4"
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             ],
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "lv_name": "ceph_lv1",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "lv_size": "21470642176",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "name": "ceph_lv1",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "tags": {
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.cluster_name": "ceph",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.crush_device_class": "",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.encrypted": "0",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.osd_id": "1",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.type": "block",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.vdo": "0"
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             },
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "type": "block",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "vg_name": "ceph_vg1"
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:         }
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:     ],
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:     "2": [
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:         {
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "devices": [
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "/dev/loop5"
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             ],
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "lv_name": "ceph_lv2",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "lv_size": "21470642176",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "name": "ceph_lv2",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "tags": {
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.cluster_name": "ceph",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.crush_device_class": "",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.encrypted": "0",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.osd_id": "2",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.type": "block",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:                 "ceph.vdo": "0"
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             },
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "type": "block",
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:             "vg_name": "ceph_vg2"
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:         }
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]:     ]
Oct 11 09:53:36 compute-0 adoring_jepsen[444368]: }
Oct 11 09:53:36 compute-0 systemd[1]: libpod-b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054.scope: Deactivated successfully.
Oct 11 09:53:36 compute-0 podman[444538]: 2025-10-11 09:53:36.337550885 +0000 UTC m=+0.033701506 container died b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:53:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-650507ce9c18210302f59edcecb10af5746b542287b0ec85b9288d5797981f81-merged.mount: Deactivated successfully.
Oct 11 09:53:36 compute-0 podman[444538]: 2025-10-11 09:53:36.41289518 +0000 UTC m=+0.109045751 container remove b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_jepsen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 09:53:36 compute-0 systemd[1]: libpod-conmon-b989d571166610c7092f98466dc42729127d03a1d522750b108d691b8841b054.scope: Deactivated successfully.
Oct 11 09:53:36 compute-0 sudo[444203]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:36 compute-0 sudo[444553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:53:36 compute-0 sudo[444553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:36 compute-0 sudo[444553]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:36 compute-0 sudo[444578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:53:36 compute-0 sudo[444578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:36 compute-0 sudo[444578]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:36 compute-0 sudo[444603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:53:36 compute-0 sudo[444603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:36 compute-0 sudo[444603]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:36 compute-0 sudo[444628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:53:36 compute-0 sudo[444628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:37 compute-0 podman[444694]: 2025-10-11 09:53:37.356021643 +0000 UTC m=+0.078839185 container create d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:53:37 compute-0 ceph-mon[74313]: pgmap v3380: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct 11 09:53:37 compute-0 systemd[1]: Started libpod-conmon-d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96.scope.
Oct 11 09:53:37 compute-0 podman[444694]: 2025-10-11 09:53:37.323922903 +0000 UTC m=+0.046740485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:53:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:53:37 compute-0 podman[444694]: 2025-10-11 09:53:37.459573037 +0000 UTC m=+0.182390629 container init d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:53:37 compute-0 podman[444694]: 2025-10-11 09:53:37.471508405 +0000 UTC m=+0.194325957 container start d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 11 09:53:37 compute-0 podman[444694]: 2025-10-11 09:53:37.477136235 +0000 UTC m=+0.199953787 container attach d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 09:53:37 compute-0 gracious_wright[444710]: 167 167
Oct 11 09:53:37 compute-0 systemd[1]: libpod-d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96.scope: Deactivated successfully.
Oct 11 09:53:37 compute-0 podman[444694]: 2025-10-11 09:53:37.480212302 +0000 UTC m=+0.203029874 container died d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 11 09:53:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-eedddab0199d0ff8c1e8e2ad36f3253ee39358167f89e927f51d6bd0a3fd9e12-merged.mount: Deactivated successfully.
Oct 11 09:53:37 compute-0 podman[444694]: 2025-10-11 09:53:37.523832488 +0000 UTC m=+0.246650000 container remove d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_wright, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:53:37 compute-0 systemd[1]: libpod-conmon-d01db16ebe657e2be5b499014cfb26cf0a2c52d5cba62f43f470922c33da7d96.scope: Deactivated successfully.
Oct 11 09:53:37 compute-0 nova_compute[260935]: 2025-10-11 09:53:37.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:53:37 compute-0 podman[444733]: 2025-10-11 09:53:37.771757083 +0000 UTC m=+0.061476923 container create 7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldberg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:53:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3381: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Oct 11 09:53:37 compute-0 systemd[1]: Started libpod-conmon-7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a.scope.
Oct 11 09:53:37 compute-0 podman[444733]: 2025-10-11 09:53:37.744082639 +0000 UTC m=+0.033802579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:53:37 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:53:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c91b56e59b16fed25e3f65a3378b9a679206c892e40d874d8c7581a608c84a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:53:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c91b56e59b16fed25e3f65a3378b9a679206c892e40d874d8c7581a608c84a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:53:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c91b56e59b16fed25e3f65a3378b9a679206c892e40d874d8c7581a608c84a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:53:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c91b56e59b16fed25e3f65a3378b9a679206c892e40d874d8c7581a608c84a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:53:37 compute-0 podman[444733]: 2025-10-11 09:53:37.908310322 +0000 UTC m=+0.198030262 container init 7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 09:53:37 compute-0 podman[444733]: 2025-10-11 09:53:37.921128975 +0000 UTC m=+0.210848815 container start 7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 09:53:37 compute-0 podman[444733]: 2025-10-11 09:53:37.924663356 +0000 UTC m=+0.214383286 container attach 7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 09:53:37 compute-0 podman[444750]: 2025-10-11 09:53:37.975947219 +0000 UTC m=+0.110928465 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:53:38 compute-0 podman[444752]: 2025-10-11 09:53:38.012453873 +0000 UTC m=+0.143096586 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 11 09:53:38 compute-0 sshd-session[444016]: Connection closed by 192.168.122.30 port 45338
Oct 11 09:53:38 compute-0 sshd-session[443999]: pam_unix(sshd:session): session closed for user zuul
Oct 11 09:53:38 compute-0 systemd-logind[819]: Session 54 logged out. Waiting for processes to exit.
Oct 11 09:53:38 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Oct 11 09:53:38 compute-0 systemd-logind[819]: Removed session 54.
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]: {
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:         "osd_id": 2,
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:         "type": "bluestore"
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:     },
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:         "osd_id": 0,
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:         "type": "bluestore"
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:     },
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:         "osd_id": 1,
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:         "type": "bluestore"
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]:     }
Oct 11 09:53:39 compute-0 distracted_goldberg[444749]: }
Oct 11 09:53:39 compute-0 systemd[1]: libpod-7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a.scope: Deactivated successfully.
Oct 11 09:53:39 compute-0 systemd[1]: libpod-7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a.scope: Consumed 1.130s CPU time.
Oct 11 09:53:39 compute-0 podman[444733]: 2025-10-11 09:53:39.051984608 +0000 UTC m=+1.341704468 container died 7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldberg, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:53:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-70c91b56e59b16fed25e3f65a3378b9a679206c892e40d874d8c7581a608c84a-merged.mount: Deactivated successfully.
Oct 11 09:53:39 compute-0 podman[444733]: 2025-10-11 09:53:39.140925068 +0000 UTC m=+1.430644908 container remove 7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 09:53:39 compute-0 systemd[1]: libpod-conmon-7b834237c3029ce9addbd891abb2e7ce2c0d33d137a11b6f78b069dbf74cb31a.scope: Deactivated successfully.
Oct 11 09:53:39 compute-0 sudo[444628]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:53:39 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:53:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:53:39 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:53:39 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ec3620a2-a5c5-4098-be03-5fdd88240048 does not exist
Oct 11 09:53:39 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev e838f8f3-9993-428c-a794-9286809ea82b does not exist
Oct 11 09:53:39 compute-0 sudo[444864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:53:39 compute-0 sudo[444864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:39 compute-0 sudo[444864]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:39 compute-0 ceph-mon[74313]: pgmap v3381: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Oct 11 09:53:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:53:39 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:53:39 compute-0 sudo[444889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:53:39 compute-0 sudo[444889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:53:39 compute-0 sudo[444889]: pam_unix(sudo:session): session closed for user root
Oct 11 09:53:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3382: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 09:53:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:53:40 compute-0 nova_compute[260935]: 2025-10-11 09:53:40.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:53:40 compute-0 nova_compute[260935]: 2025-10-11 09:53:40.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:53:40 compute-0 nova_compute[260935]: 2025-10-11 09:53:40.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:53:40 compute-0 nova_compute[260935]: 2025-10-11 09:53:40.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:53:40 compute-0 nova_compute[260935]: 2025-10-11 09:53:40.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:41 compute-0 nova_compute[260935]: 2025-10-11 09:53:40.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:53:41 compute-0 ceph-mon[74313]: pgmap v3382: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 09:53:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3383: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 09:53:43 compute-0 ceph-mon[74313]: pgmap v3383: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 09:53:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3384: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 09:53:43 compute-0 sshd-session[444914]: Invalid user ansible from 43.157.67.116 port 56146
Oct 11 09:53:43 compute-0 sshd-session[444914]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:53:43 compute-0 sshd-session[444914]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.157.67.116
Oct 11 09:53:43 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:53:43.953 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.704 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.705 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.706 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.707 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.708 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.708 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:53:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.911 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.929 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.930 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Image id 03f2fef0-11c0-48e1-b3a0-3e02d898739e yields fingerprint 9811042c7d73cc51997f7c966840f4b7728169a1 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.931 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] image 03f2fef0-11c0-48e1-b3a0-3e02d898739e at (/var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1): checking
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.931 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] image 03f2fef0-11c0-48e1-b3a0-3e02d898739e at (/var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.935 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.935 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] c176845c-89c0-4038-ba22-4ee79bd3ebfe is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.936 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] b75d8ded-515b-48ff-a6b6-28df88878996 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.936 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] 52be16b4-343a-4fd4-9041-39069a1fde2a is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.937 2 WARNING nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.937 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Active base files: /var/lib/nova/instances/_base/9811042c7d73cc51997f7c966840f4b7728169a1
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.938 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Removable base files: /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.939 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d427ed36e4acfaf36d5cf36bd49361b1db4ee571
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.939 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.940 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.940 2 DEBUG nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Oct 11 09:53:44 compute-0 nova_compute[260935]: 2025-10-11 09:53:44.941 2 INFO nova.virt.libvirt.imagecache [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Oct 11 09:53:45 compute-0 ceph-mon[74313]: pgmap v3384: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 11 09:53:45 compute-0 sshd-session[444914]: Failed password for invalid user ansible from 43.157.67.116 port 56146 ssh2
Oct 11 09:53:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3385: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Oct 11 09:53:46 compute-0 nova_compute[260935]: 2025-10-11 09:53:46.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:53:46 compute-0 nova_compute[260935]: 2025-10-11 09:53:46.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:47 compute-0 ceph-mon[74313]: pgmap v3385: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Oct 11 09:53:47 compute-0 sshd-session[444914]: Received disconnect from 43.157.67.116 port 56146:11: Bye Bye [preauth]
Oct 11 09:53:47 compute-0 sshd-session[444914]: Disconnected from invalid user ansible 43.157.67.116 port 56146 [preauth]
Oct 11 09:53:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3386: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Oct 11 09:53:49 compute-0 ceph-mon[74313]: pgmap v3386: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Oct 11 09:53:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3387: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Oct 11 09:53:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:53:51 compute-0 nova_compute[260935]: 2025-10-11 09:53:51.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:53:51 compute-0 nova_compute[260935]: 2025-10-11 09:53:51.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:51 compute-0 ceph-mon[74313]: pgmap v3387: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Oct 11 09:53:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3388: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:53 compute-0 ceph-mon[74313]: pgmap v3388: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3389: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:53:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:53:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:53:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:53:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:53:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:53:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:53:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:53:55
Oct 11 09:53:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:53:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:53:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'volumes', '.mgr', 'default.rgw.control', 'backups', 'default.rgw.log']
Oct 11 09:53:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:53:55 compute-0 ceph-mon[74313]: pgmap v3389: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:53:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:53:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3390: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:56 compute-0 nova_compute[260935]: 2025-10-11 09:53:56.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:53:56 compute-0 nova_compute[260935]: 2025-10-11 09:53:56.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:56 compute-0 nova_compute[260935]: 2025-10-11 09:53:56.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:53:56 compute-0 nova_compute[260935]: 2025-10-11 09:53:56.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:53:56 compute-0 nova_compute[260935]: 2025-10-11 09:53:56.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:53:56 compute-0 nova_compute[260935]: 2025-10-11 09:53:56.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:53:57 compute-0 ceph-mon[74313]: pgmap v3390: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3391: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:59 compute-0 ceph-mon[74313]: pgmap v3391: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3392: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:53:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:54:00 compute-0 podman[444916]: 2025-10-11 09:54:00.824144687 +0000 UTC m=+0.106639005 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:54:00 compute-0 nova_compute[260935]: 2025-10-11 09:54:00.937 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:54:00 compute-0 nova_compute[260935]: 2025-10-11 09:54:00.937 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:54:01 compute-0 nova_compute[260935]: 2025-10-11 09:54:01.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:54:01 compute-0 nova_compute[260935]: 2025-10-11 09:54:01.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:54:01 compute-0 nova_compute[260935]: 2025-10-11 09:54:01.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5030 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:54:01 compute-0 nova_compute[260935]: 2025-10-11 09:54:01.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:54:01 compute-0 nova_compute[260935]: 2025-10-11 09:54:01.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:54:01 compute-0 nova_compute[260935]: 2025-10-11 09:54:01.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:54:01 compute-0 ceph-mon[74313]: pgmap v3392: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3393: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:03 compute-0 ceph-mon[74313]: pgmap v3393: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3394: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:54:05 compute-0 ceph-mon[74313]: pgmap v3394: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:05 compute-0 nova_compute[260935]: 2025-10-11 09:54:05.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:54:05 compute-0 nova_compute[260935]: 2025-10-11 09:54:05.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3395: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:05 compute-0 podman[444937]: 2025-10-11 09:54:05.801944592 +0000 UTC m=+0.095321465 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.license=GPLv2)
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:54:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:54:06 compute-0 nova_compute[260935]: 2025-10-11 09:54:06.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:54:06 compute-0 nova_compute[260935]: 2025-10-11 09:54:06.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:54:06 compute-0 nova_compute[260935]: 2025-10-11 09:54:06.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:54:06 compute-0 nova_compute[260935]: 2025-10-11 09:54:06.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:54:06 compute-0 nova_compute[260935]: 2025-10-11 09:54:06.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:54:06 compute-0 nova_compute[260935]: 2025-10-11 09:54:06.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:54:06 compute-0 nova_compute[260935]: 2025-10-11 09:54:06.705 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:54:06 compute-0 nova_compute[260935]: 2025-10-11 09:54:06.706 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:54:06 compute-0 nova_compute[260935]: 2025-10-11 09:54:06.706 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 09:54:07 compute-0 nova_compute[260935]: 2025-10-11 09:54:07.048 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:54:07 compute-0 nova_compute[260935]: 2025-10-11 09:54:07.049 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:54:07 compute-0 nova_compute[260935]: 2025-10-11 09:54:07.049 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:54:07 compute-0 nova_compute[260935]: 2025-10-11 09:54:07.050 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:54:07 compute-0 nova_compute[260935]: 2025-10-11 09:54:07.263 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:54:07 compute-0 ceph-mon[74313]: pgmap v3395: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:07 compute-0 nova_compute[260935]: 2025-10-11 09:54:07.594 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:54:07 compute-0 nova_compute[260935]: 2025-10-11 09:54:07.617 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:54:07 compute-0 nova_compute[260935]: 2025-10-11 09:54:07.618 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:54:07 compute-0 nova_compute[260935]: 2025-10-11 09:54:07.618 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:54:07 compute-0 nova_compute[260935]: 2025-10-11 09:54:07.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:54:07 compute-0 nova_compute[260935]: 2025-10-11 09:54:07.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:54:07 compute-0 nova_compute[260935]: 2025-10-11 09:54:07.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:54:07 compute-0 nova_compute[260935]: 2025-10-11 09:54:07.738 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:54:07 compute-0 nova_compute[260935]: 2025-10-11 09:54:07.739 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:54:07 compute-0 nova_compute[260935]: 2025-10-11 09:54:07.739 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:54:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3396: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:54:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/744241938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.233 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.313 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.313 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.315 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.320 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.320 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.324 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.324 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:54:08 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/744241938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.577 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.579 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2769MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.579 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.580 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.678 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.679 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.679 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.679 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.680 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:54:08 compute-0 nova_compute[260935]: 2025-10-11 09:54:08.790 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:54:08 compute-0 podman[444980]: 2025-10-11 09:54:08.801262205 +0000 UTC m=+0.099343749 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:54:08 compute-0 podman[444981]: 2025-10-11 09:54:08.854056697 +0000 UTC m=+0.144001281 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:54:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:54:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/517822258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:54:09 compute-0 nova_compute[260935]: 2025-10-11 09:54:09.283 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:54:09 compute-0 nova_compute[260935]: 2025-10-11 09:54:09.292 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:54:09 compute-0 nova_compute[260935]: 2025-10-11 09:54:09.314 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:54:09 compute-0 nova_compute[260935]: 2025-10-11 09:54:09.317 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:54:09 compute-0 nova_compute[260935]: 2025-10-11 09:54:09.318 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:54:09 compute-0 ceph-mon[74313]: pgmap v3396: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/517822258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:54:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3397: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:54:10 compute-0 ceph-mon[74313]: pgmap v3397: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:11 compute-0 nova_compute[260935]: 2025-10-11 09:54:11.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:54:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3398: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:12 compute-0 nova_compute[260935]: 2025-10-11 09:54:12.320 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:54:12 compute-0 ceph-mon[74313]: pgmap v3398: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3399: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:14 compute-0 nova_compute[260935]: 2025-10-11 09:54:14.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:54:14 compute-0 ceph-mon[74313]: pgmap v3399: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.897124) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176454897190, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 624, "num_deletes": 254, "total_data_size": 708776, "memory_usage": 721360, "flush_reason": "Manual Compaction"}
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176454904785, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 702447, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70348, "largest_seqno": 70971, "table_properties": {"data_size": 699091, "index_size": 1263, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7504, "raw_average_key_size": 18, "raw_value_size": 692391, "raw_average_value_size": 1726, "num_data_blocks": 56, "num_entries": 401, "num_filter_entries": 401, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760176410, "oldest_key_time": 1760176410, "file_creation_time": 1760176454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 7722 microseconds, and 3634 cpu microseconds.
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.904852) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 702447 bytes OK
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.904873) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.906589) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.906603) EVENT_LOG_v1 {"time_micros": 1760176454906599, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.906623) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 705402, prev total WAL file size 705402, number of live WAL files 2.
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.907188) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303133' seq:72057594037927935, type:22 .. '6C6F676D0033323634' seq:0, type:0; will stop at (end)
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(685KB)], [167(9303KB)]
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176454907213, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 10229618, "oldest_snapshot_seqno": -1}
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 8577 keys, 10122223 bytes, temperature: kUnknown
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176454961053, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 10122223, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10067947, "index_size": 31699, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 226358, "raw_average_key_size": 26, "raw_value_size": 9917842, "raw_average_value_size": 1156, "num_data_blocks": 1223, "num_entries": 8577, "num_filter_entries": 8577, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760176454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.961379) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 10122223 bytes
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.962761) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.7 rd, 187.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.1 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(29.0) write-amplify(14.4) OK, records in: 9096, records dropped: 519 output_compression: NoCompression
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.962793) EVENT_LOG_v1 {"time_micros": 1760176454962780, "job": 104, "event": "compaction_finished", "compaction_time_micros": 53939, "compaction_time_cpu_micros": 33907, "output_level": 6, "num_output_files": 1, "total_output_size": 10122223, "num_input_records": 9096, "num_output_records": 8577, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176454963162, "job": 104, "event": "table_file_deletion", "file_number": 169}
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176454966469, "job": 104, "event": "table_file_deletion", "file_number": 167}
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.907101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.966892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.966899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.966902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.966905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:54:14 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:54:14.966908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:54:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:54:15.251 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:54:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:54:15.252 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:54:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:54:15.252 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:54:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3400: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:16 compute-0 nova_compute[260935]: 2025-10-11 09:54:16.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:54:16 compute-0 nova_compute[260935]: 2025-10-11 09:54:16.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:54:16 compute-0 ceph-mon[74313]: pgmap v3400: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3401: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:18 compute-0 ceph-mon[74313]: pgmap v3401: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3402: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:54:20 compute-0 nova_compute[260935]: 2025-10-11 09:54:20.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:54:20 compute-0 ceph-mon[74313]: pgmap v3402: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:21 compute-0 nova_compute[260935]: 2025-10-11 09:54:21.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:54:21 compute-0 nova_compute[260935]: 2025-10-11 09:54:21.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:54:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3403: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:22 compute-0 nova_compute[260935]: 2025-10-11 09:54:22.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:54:22 compute-0 nova_compute[260935]: 2025-10-11 09:54:22.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 09:54:22 compute-0 ceph-mon[74313]: pgmap v3403: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3404: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:54:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:54:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:54:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:54:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:54:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:54:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:54:24 compute-0 ceph-mon[74313]: pgmap v3404: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3405: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:26 compute-0 nova_compute[260935]: 2025-10-11 09:54:26.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:54:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:54:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3195713945' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:54:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:54:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3195713945' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:54:26 compute-0 ceph-mon[74313]: pgmap v3405: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3195713945' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:54:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3195713945' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:54:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3406: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:28 compute-0 ceph-mon[74313]: pgmap v3406: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3407: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:54:30 compute-0 ceph-mon[74313]: pgmap v3407: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:31 compute-0 nova_compute[260935]: 2025-10-11 09:54:31.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:54:31 compute-0 nova_compute[260935]: 2025-10-11 09:54:31.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:54:31 compute-0 nova_compute[260935]: 2025-10-11 09:54:31.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:54:31 compute-0 nova_compute[260935]: 2025-10-11 09:54:31.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:54:31 compute-0 nova_compute[260935]: 2025-10-11 09:54:31.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:54:31 compute-0 nova_compute[260935]: 2025-10-11 09:54:31.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:54:31 compute-0 podman[445047]: 2025-10-11 09:54:31.785751852 +0000 UTC m=+0.077877992 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 11 09:54:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3408: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:32 compute-0 nova_compute[260935]: 2025-10-11 09:54:32.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:54:32 compute-0 ceph-mon[74313]: pgmap v3408: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3409: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:34 compute-0 nova_compute[260935]: 2025-10-11 09:54:34.797 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:54:34 compute-0 nova_compute[260935]: 2025-10-11 09:54:34.798 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 09:54:34 compute-0 nova_compute[260935]: 2025-10-11 09:54:34.819 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 09:54:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:54:35 compute-0 ceph-mon[74313]: pgmap v3409: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3410: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:36 compute-0 nova_compute[260935]: 2025-10-11 09:54:36.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:54:36 compute-0 nova_compute[260935]: 2025-10-11 09:54:36.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:54:36 compute-0 podman[445069]: 2025-10-11 09:54:36.809349253 +0000 UTC m=+0.098703650 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 09:54:37 compute-0 ceph-mon[74313]: pgmap v3410: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:37 compute-0 nova_compute[260935]: 2025-10-11 09:54:37.724 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:54:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3411: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:39 compute-0 ceph-mon[74313]: pgmap v3411: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:39 compute-0 sudo[445089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:54:39 compute-0 sudo[445089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:39 compute-0 sudo[445089]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:39 compute-0 sudo[445126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:54:39 compute-0 sudo[445126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:39 compute-0 sudo[445126]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:39 compute-0 podman[445113]: 2025-10-11 09:54:39.698157733 +0000 UTC m=+0.107209341 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 09:54:39 compute-0 sudo[445172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:54:39 compute-0 sudo[445172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:39 compute-0 sudo[445172]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:39 compute-0 podman[445114]: 2025-10-11 09:54:39.765104865 +0000 UTC m=+0.166966500 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 09:54:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3412: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:39 compute-0 sudo[445204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:54:39 compute-0 sudo[445204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:54:40 compute-0 sudo[445204]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:54:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:54:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:54:40 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:54:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:54:40 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:54:40 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 058ee411-74a3-4dcc-b11e-2cb9c01c2dd6 does not exist
Oct 11 09:54:40 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 7d173903-09f0-4ce4-a6f9-1ad9753ead58 does not exist
Oct 11 09:54:40 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 06a9b743-4e5f-4a2b-af11-98c9e9cb248f does not exist
Oct 11 09:54:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:54:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:54:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:54:40 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:54:40 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:54:40 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:54:40 compute-0 sudo[445265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:54:40 compute-0 sudo[445265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:40 compute-0 sudo[445265]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:40 compute-0 sudo[445290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:54:40 compute-0 sudo[445290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:40 compute-0 sudo[445290]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:40 compute-0 sudo[445315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:54:40 compute-0 sudo[445315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:40 compute-0 sudo[445315]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:40 compute-0 sudo[445340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:54:40 compute-0 sudo[445340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:41 compute-0 ceph-mon[74313]: pgmap v3412: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:41 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:54:41 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:54:41 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:54:41 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:54:41 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:54:41 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:54:41 compute-0 nova_compute[260935]: 2025-10-11 09:54:41.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:54:41 compute-0 podman[445406]: 2025-10-11 09:54:41.183873779 +0000 UTC m=+0.067954611 container create 76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 09:54:41 compute-0 systemd[1]: Started libpod-conmon-76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405.scope.
Oct 11 09:54:41 compute-0 podman[445406]: 2025-10-11 09:54:41.153378508 +0000 UTC m=+0.037459390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:54:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:54:41 compute-0 podman[445406]: 2025-10-11 09:54:41.289840244 +0000 UTC m=+0.173921096 container init 76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:54:41 compute-0 podman[445406]: 2025-10-11 09:54:41.301630507 +0000 UTC m=+0.185711359 container start 76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:54:41 compute-0 podman[445406]: 2025-10-11 09:54:41.305574749 +0000 UTC m=+0.189655591 container attach 76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 09:54:41 compute-0 stoic_colden[445423]: 167 167
Oct 11 09:54:41 compute-0 podman[445406]: 2025-10-11 09:54:41.312033371 +0000 UTC m=+0.196114233 container died 76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 11 09:54:41 compute-0 systemd[1]: libpod-76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405.scope: Deactivated successfully.
Oct 11 09:54:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c000f282592af049031c01930cc337027efd62208cc9c5d6c7e41f4ed9d63b6-merged.mount: Deactivated successfully.
Oct 11 09:54:41 compute-0 podman[445406]: 2025-10-11 09:54:41.366947753 +0000 UTC m=+0.251028605 container remove 76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:54:41 compute-0 systemd[1]: libpod-conmon-76b509e67f9ae36d386282df29dbd4ba86a4160fc3b2d786befa56aa3f428405.scope: Deactivated successfully.
Oct 11 09:54:41 compute-0 podman[445447]: 2025-10-11 09:54:41.597730625 +0000 UTC m=+0.067359904 container create 972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hopper, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:54:41 compute-0 systemd[1]: Started libpod-conmon-972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b.scope.
Oct 11 09:54:41 compute-0 podman[445447]: 2025-10-11 09:54:41.569355694 +0000 UTC m=+0.038985033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:54:41 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456b17fcb3fbbebc9544127ed8f73d98bd635ad1a8ca164a23677f5fe89f7675/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456b17fcb3fbbebc9544127ed8f73d98bd635ad1a8ca164a23677f5fe89f7675/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456b17fcb3fbbebc9544127ed8f73d98bd635ad1a8ca164a23677f5fe89f7675/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456b17fcb3fbbebc9544127ed8f73d98bd635ad1a8ca164a23677f5fe89f7675/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456b17fcb3fbbebc9544127ed8f73d98bd635ad1a8ca164a23677f5fe89f7675/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:54:41 compute-0 podman[445447]: 2025-10-11 09:54:41.703292359 +0000 UTC m=+0.172921698 container init 972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 09:54:41 compute-0 podman[445447]: 2025-10-11 09:54:41.713080995 +0000 UTC m=+0.182710274 container start 972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hopper, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:54:41 compute-0 podman[445447]: 2025-10-11 09:54:41.717748357 +0000 UTC m=+0.187377636 container attach 972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hopper, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:54:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3413: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:42 compute-0 hopeful_hopper[445464]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:54:42 compute-0 hopeful_hopper[445464]: --> relative data size: 1.0
Oct 11 09:54:42 compute-0 hopeful_hopper[445464]: --> All data devices are unavailable
Oct 11 09:54:42 compute-0 systemd[1]: libpod-972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b.scope: Deactivated successfully.
Oct 11 09:54:42 compute-0 systemd[1]: libpod-972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b.scope: Consumed 1.207s CPU time.
Oct 11 09:54:42 compute-0 conmon[445464]: conmon 972f4a9cd140589d3038 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b.scope/container/memory.events
Oct 11 09:54:42 compute-0 podman[445447]: 2025-10-11 09:54:42.991186216 +0000 UTC m=+1.460815525 container died 972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 11 09:54:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-456b17fcb3fbbebc9544127ed8f73d98bd635ad1a8ca164a23677f5fe89f7675-merged.mount: Deactivated successfully.
Oct 11 09:54:43 compute-0 ceph-mon[74313]: pgmap v3413: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:43 compute-0 podman[445447]: 2025-10-11 09:54:43.08194032 +0000 UTC m=+1.551569589 container remove 972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hopper, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:54:43 compute-0 systemd[1]: libpod-conmon-972f4a9cd140589d30384d61754673ed4cfcef5c718817e513b7ddaed064bc6b.scope: Deactivated successfully.
Oct 11 09:54:43 compute-0 sudo[445340]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:43 compute-0 sudo[445508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:54:43 compute-0 sudo[445508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:43 compute-0 sudo[445508]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:43 compute-0 sudo[445533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:54:43 compute-0 sudo[445533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:43 compute-0 sudo[445533]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:43 compute-0 sudo[445558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:54:43 compute-0 sudo[445558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:43 compute-0 sudo[445558]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:43 compute-0 sudo[445583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:54:43 compute-0 sudo[445583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3414: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:43 compute-0 podman[445649]: 2025-10-11 09:54:43.884419999 +0000 UTC m=+0.076991137 container create cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_herschel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:54:43 compute-0 podman[445649]: 2025-10-11 09:54:43.842641908 +0000 UTC m=+0.035213116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:54:43 compute-0 systemd[1]: Started libpod-conmon-cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78.scope.
Oct 11 09:54:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:54:44 compute-0 podman[445649]: 2025-10-11 09:54:44.084260077 +0000 UTC m=+0.276831285 container init cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_herschel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 11 09:54:44 compute-0 podman[445649]: 2025-10-11 09:54:44.096026249 +0000 UTC m=+0.288597417 container start cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 09:54:44 compute-0 busy_herschel[445666]: 167 167
Oct 11 09:54:44 compute-0 systemd[1]: libpod-cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78.scope: Deactivated successfully.
Oct 11 09:54:44 compute-0 podman[445649]: 2025-10-11 09:54:44.17532434 +0000 UTC m=+0.367895488 container attach cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:54:44 compute-0 podman[445649]: 2025-10-11 09:54:44.175854865 +0000 UTC m=+0.368426033 container died cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_herschel, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 11 09:54:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-81cf336ae8756f514782a61cc356a8295e24fdc2d2045673e7cbed21e196ecd5-merged.mount: Deactivated successfully.
Oct 11 09:54:44 compute-0 podman[445649]: 2025-10-11 09:54:44.535034015 +0000 UTC m=+0.727605143 container remove cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 09:54:44 compute-0 systemd[1]: libpod-conmon-cce83428f8b741eac52f3c63c60a2f699fb98041d5b5e5a594c17cd522fffb78.scope: Deactivated successfully.
Oct 11 09:54:44 compute-0 podman[445689]: 2025-10-11 09:54:44.80979595 +0000 UTC m=+0.088135182 container create b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:54:44 compute-0 podman[445689]: 2025-10-11 09:54:44.75461352 +0000 UTC m=+0.032952802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:54:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:54:44 compute-0 systemd[1]: Started libpod-conmon-b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1.scope.
Oct 11 09:54:44 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab76f86dd41e0f43c4e28f70ac1efb6511b697b9edd52d0a096542398eed1486/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab76f86dd41e0f43c4e28f70ac1efb6511b697b9edd52d0a096542398eed1486/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab76f86dd41e0f43c4e28f70ac1efb6511b697b9edd52d0a096542398eed1486/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab76f86dd41e0f43c4e28f70ac1efb6511b697b9edd52d0a096542398eed1486/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:54:45 compute-0 podman[445689]: 2025-10-11 09:54:45.060709451 +0000 UTC m=+0.339048703 container init b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 09:54:45 compute-0 podman[445689]: 2025-10-11 09:54:45.073290216 +0000 UTC m=+0.351629438 container start b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 09:54:45 compute-0 podman[445689]: 2025-10-11 09:54:45.148034739 +0000 UTC m=+0.426374031 container attach b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 11 09:54:45 compute-0 ceph-mon[74313]: pgmap v3414: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3415: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]: {
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:     "0": [
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:         {
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "devices": [
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "/dev/loop3"
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             ],
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "lv_name": "ceph_lv0",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "lv_size": "21470642176",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "name": "ceph_lv0",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "tags": {
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.cluster_name": "ceph",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.crush_device_class": "",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.encrypted": "0",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.osd_id": "0",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.type": "block",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.vdo": "0"
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             },
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "type": "block",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "vg_name": "ceph_vg0"
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:         }
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:     ],
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:     "1": [
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:         {
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "devices": [
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "/dev/loop4"
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             ],
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "lv_name": "ceph_lv1",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "lv_size": "21470642176",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "name": "ceph_lv1",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "tags": {
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.cluster_name": "ceph",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.crush_device_class": "",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.encrypted": "0",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.osd_id": "1",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.type": "block",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.vdo": "0"
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             },
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "type": "block",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "vg_name": "ceph_vg1"
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:         }
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:     ],
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:     "2": [
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:         {
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "devices": [
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "/dev/loop5"
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             ],
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "lv_name": "ceph_lv2",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "lv_size": "21470642176",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "name": "ceph_lv2",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "tags": {
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.cluster_name": "ceph",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.crush_device_class": "",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.encrypted": "0",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.osd_id": "2",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.type": "block",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:                 "ceph.vdo": "0"
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             },
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "type": "block",
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:             "vg_name": "ceph_vg2"
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:         }
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]:     ]
Oct 11 09:54:45 compute-0 happy_visvesvaraya[445705]: }
Oct 11 09:54:45 compute-0 systemd[1]: libpod-b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1.scope: Deactivated successfully.
Oct 11 09:54:45 compute-0 podman[445714]: 2025-10-11 09:54:45.949000515 +0000 UTC m=+0.029821314 container died b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:54:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab76f86dd41e0f43c4e28f70ac1efb6511b697b9edd52d0a096542398eed1486-merged.mount: Deactivated successfully.
Oct 11 09:54:46 compute-0 nova_compute[260935]: 2025-10-11 09:54:46.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:54:46 compute-0 nova_compute[260935]: 2025-10-11 09:54:46.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:54:46 compute-0 nova_compute[260935]: 2025-10-11 09:54:46.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5006 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:54:46 compute-0 nova_compute[260935]: 2025-10-11 09:54:46.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:54:46 compute-0 podman[445714]: 2025-10-11 09:54:46.163727833 +0000 UTC m=+0.244548622 container remove b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 09:54:46 compute-0 nova_compute[260935]: 2025-10-11 09:54:46.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:54:46 compute-0 nova_compute[260935]: 2025-10-11 09:54:46.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:54:46 compute-0 systemd[1]: libpod-conmon-b22bdd10a13125447f2c892405d035d4be2273a67d84e1e6ca5cd0a38b8e9ae1.scope: Deactivated successfully.
Oct 11 09:54:46 compute-0 sudo[445583]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:46 compute-0 sudo[445729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:54:46 compute-0 sudo[445729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:46 compute-0 sudo[445729]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:46 compute-0 sudo[445754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:54:46 compute-0 sudo[445754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:46 compute-0 sudo[445754]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:46 compute-0 sudo[445779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:54:46 compute-0 sudo[445779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:46 compute-0 sudo[445779]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:46 compute-0 sudo[445804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:54:46 compute-0 sudo[445804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:47 compute-0 podman[445869]: 2025-10-11 09:54:47.005860722 +0000 UTC m=+0.038351705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:54:47 compute-0 podman[445869]: 2025-10-11 09:54:47.223510623 +0000 UTC m=+0.256001576 container create 4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:54:47 compute-0 ceph-mon[74313]: pgmap v3415: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:47 compute-0 systemd[1]: Started libpod-conmon-4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce.scope.
Oct 11 09:54:47 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:54:47 compute-0 podman[445869]: 2025-10-11 09:54:47.464499924 +0000 UTC m=+0.496990867 container init 4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:54:47 compute-0 podman[445869]: 2025-10-11 09:54:47.475625788 +0000 UTC m=+0.508116721 container start 4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 09:54:47 compute-0 upbeat_kapitsa[445885]: 167 167
Oct 11 09:54:47 compute-0 systemd[1]: libpod-4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce.scope: Deactivated successfully.
Oct 11 09:54:47 compute-0 conmon[445885]: conmon 4198cef30b96600c7f7d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce.scope/container/memory.events
Oct 11 09:54:47 compute-0 podman[445869]: 2025-10-11 09:54:47.616376056 +0000 UTC m=+0.648866979 container attach 4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 09:54:47 compute-0 podman[445869]: 2025-10-11 09:54:47.618026673 +0000 UTC m=+0.650517596 container died 4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 11 09:54:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-866efabd6da637e34bf1b02a691972aabb90deb1332d378372c7fac9a20c3fb0-merged.mount: Deactivated successfully.
Oct 11 09:54:47 compute-0 podman[445869]: 2025-10-11 09:54:47.791220087 +0000 UTC m=+0.823711010 container remove 4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:54:47 compute-0 systemd[1]: libpod-conmon-4198cef30b96600c7f7d8aa9e686c0505213d2bb627b5c4d128bf670d4ed76ce.scope: Deactivated successfully.
Oct 11 09:54:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3416: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:47 compute-0 podman[445911]: 2025-10-11 09:54:47.958742001 +0000 UTC m=+0.044615322 container create e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 11 09:54:48 compute-0 systemd[1]: Started libpod-conmon-e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e.scope.
Oct 11 09:54:48 compute-0 podman[445911]: 2025-10-11 09:54:47.937357036 +0000 UTC m=+0.023230357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:54:48 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1e8dba4ef2eed259d0b98dae8e03c62f76ff50d7ba7367dd1249c2eff53a15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1e8dba4ef2eed259d0b98dae8e03c62f76ff50d7ba7367dd1249c2eff53a15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1e8dba4ef2eed259d0b98dae8e03c62f76ff50d7ba7367dd1249c2eff53a15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1e8dba4ef2eed259d0b98dae8e03c62f76ff50d7ba7367dd1249c2eff53a15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:54:48 compute-0 podman[445911]: 2025-10-11 09:54:48.10555297 +0000 UTC m=+0.191426311 container init e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 09:54:48 compute-0 podman[445911]: 2025-10-11 09:54:48.117871868 +0000 UTC m=+0.203745199 container start e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 09:54:48 compute-0 podman[445911]: 2025-10-11 09:54:48.135162306 +0000 UTC m=+0.221035657 container attach e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]: {
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:         "osd_id": 2,
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:         "type": "bluestore"
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:     },
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:         "osd_id": 0,
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:         "type": "bluestore"
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:     },
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:         "osd_id": 1,
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:         "type": "bluestore"
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]:     }
Oct 11 09:54:49 compute-0 quizzical_jennings[445928]: }
Oct 11 09:54:49 compute-0 systemd[1]: libpod-e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e.scope: Deactivated successfully.
Oct 11 09:54:49 compute-0 podman[445911]: 2025-10-11 09:54:49.18938818 +0000 UTC m=+1.275261521 container died e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 09:54:49 compute-0 systemd[1]: libpod-e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e.scope: Consumed 1.070s CPU time.
Oct 11 09:54:49 compute-0 ceph-mon[74313]: pgmap v3416: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f1e8dba4ef2eed259d0b98dae8e03c62f76ff50d7ba7367dd1249c2eff53a15-merged.mount: Deactivated successfully.
Oct 11 09:54:49 compute-0 podman[445911]: 2025-10-11 09:54:49.569289366 +0000 UTC m=+1.655162707 container remove e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:54:49 compute-0 systemd[1]: libpod-conmon-e0f7cf8fdf12b9972f5cd89b7def9e5f976d6623ed0937845b8e035977f3065e.scope: Deactivated successfully.
Oct 11 09:54:49 compute-0 sudo[445804]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:54:49 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:54:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:54:49 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:54:49 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 5e316f52-1a7a-4a23-868d-fb462409b3a2 does not exist
Oct 11 09:54:49 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 62ab72a1-854a-47ce-9e8b-bd124d113089 does not exist
Oct 11 09:54:49 compute-0 sudo[445973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:54:49 compute-0 sudo[445973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:49 compute-0 sudo[445973]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3417: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:49 compute-0 sudo[445998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:54:49 compute-0 sudo[445998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:54:49 compute-0 sudo[445998]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:54:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:54:50 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:54:50 compute-0 ceph-mon[74313]: pgmap v3417: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:51 compute-0 nova_compute[260935]: 2025-10-11 09:54:51.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:54:51 compute-0 nova_compute[260935]: 2025-10-11 09:54:51.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:54:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3418: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:52 compute-0 sshd-session[446023]: Accepted publickey for zuul from 192.168.122.30 port 39412 ssh2: ECDSA SHA256:KKxgUhG08XzjYMLOyvbR+tXItyOnGoLl6Fn32NV5afE
Oct 11 09:54:52 compute-0 systemd-logind[819]: New session 55 of user zuul.
Oct 11 09:54:52 compute-0 systemd[1]: Started Session 55 of User zuul.
Oct 11 09:54:52 compute-0 sshd-session[446023]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 09:54:52 compute-0 nova_compute[260935]: 2025-10-11 09:54:52.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:54:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:54:52.160 162815 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:d1:d9', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:ab:1e:b7:4b:7f'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 11 09:54:52 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:54:52.163 162815 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 11 09:54:52 compute-0 sudo[446096]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/test -f /var/podman_client_access_setup
Oct 11 09:54:52 compute-0 sudo[446096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:52 compute-0 sudo[446096]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:52 compute-0 sudo[446122]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/groupadd -f podman
Oct 11 09:54:52 compute-0 sudo[446122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:52 compute-0 groupadd[446124]: group added to /etc/group: name=podman, GID=42479
Oct 11 09:54:52 compute-0 groupadd[446124]: group added to /etc/gshadow: name=podman
Oct 11 09:54:52 compute-0 groupadd[446124]: new group: name=podman, GID=42479
Oct 11 09:54:52 compute-0 sudo[446122]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:52 compute-0 sudo[446130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/usermod -a -G podman zuul
Oct 11 09:54:52 compute-0 sudo[446130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:52 compute-0 usermod[446132]: add 'zuul' to group 'podman'
Oct 11 09:54:52 compute-0 usermod[446132]: add 'zuul' to shadow group 'podman'
Oct 11 09:54:52 compute-0 sudo[446130]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:52 compute-0 sudo[446139]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod -R o=wxr /etc/tmpfiles.d
Oct 11 09:54:52 compute-0 sudo[446139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:52 compute-0 sudo[446139]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:52 compute-0 sudo[446142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/echo 'd /run/podman 0770 root zuul'
Oct 11 09:54:52 compute-0 sudo[446142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:52 compute-0 sudo[446142]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:52 compute-0 ceph-mon[74313]: pgmap v3418: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:52 compute-0 sudo[446145]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cp /lib/systemd/system/podman.socket /etc/systemd/system/podman.socket
Oct 11 09:54:52 compute-0 sudo[446145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:52 compute-0 sudo[446145]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:52 compute-0 sudo[446148]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketMode 0660
Oct 11 09:54:52 compute-0 sudo[446148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:53 compute-0 sudo[446148]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:53 compute-0 sudo[446151]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketGroup podman
Oct 11 09:54:53 compute-0 sudo[446151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:53 compute-0 sudo[446151]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:53 compute-0 sudo[446154]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Oct 11 09:54:53 compute-0 sudo[446154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:53 compute-0 systemd[1]: Reloading.
Oct 11 09:54:53 compute-0 systemd-rc-local-generator[446177]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 09:54:53 compute-0 systemd-sysv-generator[446181]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 09:54:53 compute-0 sudo[446154]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:53 compute-0 sudo[446192]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemd-tmpfiles --create
Oct 11 09:54:53 compute-0 sudo[446192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3419: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:53 compute-0 sudo[446192]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:53 compute-0 sudo[446195]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl enable --now podman.socket
Oct 11 09:54:53 compute-0 sudo[446195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:53 compute-0 sshd-session[446190]: Invalid user nexus from 43.157.67.116 port 35550
Oct 11 09:54:53 compute-0 sshd-session[446190]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:54:53 compute-0 sshd-session[446190]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.157.67.116
Oct 11 09:54:53 compute-0 systemd[1]: Reloading.
Oct 11 09:54:54 compute-0 systemd-rc-local-generator[446224]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 11 09:54:54 compute-0 systemd-sysv-generator[446228]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 11 09:54:54 compute-0 systemd[1]: Starting Podman API Socket...
Oct 11 09:54:54 compute-0 systemd[1]: Listening on Podman API Socket.
Oct 11 09:54:54 compute-0 sudo[446195]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:54 compute-0 sudo[446232]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman
Oct 11 09:54:54 compute-0 sudo[446232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:54 compute-0 sudo[446232]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:54 compute-0 sudo[446235]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chown -R root: /run/podman
Oct 11 09:54:54 compute-0 sudo[446235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:54 compute-0 sudo[446235]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:54 compute-0 sudo[446238]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod g+rw /run/podman/podman.sock
Oct 11 09:54:54 compute-0 sudo[446238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:54 compute-0 sudo[446238]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:54 compute-0 sudo[446241]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman/podman.sock
Oct 11 09:54:54 compute-0 sudo[446241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:54 compute-0 sudo[446241]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:54 compute-0 sudo[446244]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/setenforce 0
Oct 11 09:54:54 compute-0 sudo[446244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:54 compute-0 sudo[446244]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:54 compute-0 sudo[446247]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl restart podman.socket
Oct 11 09:54:54 compute-0 dbus-broker-launch[809]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Oct 11 09:54:54 compute-0 sudo[446247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:54 compute-0 systemd[1]: podman.socket: Deactivated successfully.
Oct 11 09:54:54 compute-0 systemd[1]: Closed Podman API Socket.
Oct 11 09:54:54 compute-0 systemd[1]: Stopping Podman API Socket...
Oct 11 09:54:54 compute-0 systemd[1]: Starting Podman API Socket...
Oct 11 09:54:54 compute-0 systemd[1]: Listening on Podman API Socket.
Oct 11 09:54:54 compute-0 sudo[446247]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:54 compute-0 sudo[446099]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/touch /var/podman_client_access_setup
Oct 11 09:54:54 compute-0 sudo[446099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:54:54 compute-0 sudo[446099]: pam_unix(sudo:session): session closed for user root
Oct 11 09:54:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:54:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:54:54 compute-0 ceph-mon[74313]: pgmap v3419: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:54:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:54:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:54:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:54:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:54:55 compute-0 sshd-session[446253]: Accepted publickey for zuul from 192.168.122.30 port 39426 ssh2: ECDSA SHA256:KKxgUhG08XzjYMLOyvbR+tXItyOnGoLl6Fn32NV5afE
Oct 11 09:54:55 compute-0 systemd-logind[819]: New session 56 of user zuul.
Oct 11 09:54:55 compute-0 systemd[1]: Started Session 56 of User zuul.
Oct 11 09:54:55 compute-0 sshd-session[446253]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 09:54:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:54:55
Oct 11 09:54:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:54:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:54:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'vms', 'backups']
Oct 11 09:54:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:54:55 compute-0 systemd[1]: Starting Podman API Service...
Oct 11 09:54:55 compute-0 systemd[1]: Started Podman API Service.
Oct 11 09:54:55 compute-0 podman[446257]: time="2025-10-11T09:54:55Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct 11 09:54:55 compute-0 podman[446257]: time="2025-10-11T09:54:55Z" level=info msg="Setting parallel job count to 25"
Oct 11 09:54:55 compute-0 podman[446257]: time="2025-10-11T09:54:55Z" level=info msg="Using sqlite as database backend"
Oct 11 09:54:55 compute-0 podman[446257]: time="2025-10-11T09:54:55Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct 11 09:54:55 compute-0 podman[446257]: time="2025-10-11T09:54:55Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct 11 09:54:55 compute-0 podman[446257]: time="2025-10-11T09:54:55Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Oct 11 09:54:55 compute-0 podman[446257]: @ - - [11/Oct/2025:09:54:55 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct 11 09:54:55 compute-0 podman[446257]: @ - - [11/Oct/2025:09:54:55 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 27463 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct 11 09:54:55 compute-0 sshd-session[446190]: Failed password for invalid user nexus from 43.157.67.116 port 35550 ssh2
Oct 11 09:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:54:55 compute-0 sshd-session[446190]: Received disconnect from 43.157.67.116 port 35550:11: Bye Bye [preauth]
Oct 11 09:54:55 compute-0 sshd-session[446190]: Disconnected from invalid user nexus 43.157.67.116 port 35550 [preauth]
Oct 11 09:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:54:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:54:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3420: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:56 compute-0 nova_compute[260935]: 2025-10-11 09:54:56.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:54:57 compute-0 ceph-mon[74313]: pgmap v3420: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3421: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:58 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:54:58.165 162815 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bec9a5e2-82b0-42f0-811d-08d245f1dc66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 11 09:54:59 compute-0 ceph-mon[74313]: pgmap v3421: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3422: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:54:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:55:01 compute-0 ceph-mon[74313]: pgmap v3422: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:01 compute-0 nova_compute[260935]: 2025-10-11 09:55:01.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:55:01 compute-0 nova_compute[260935]: 2025-10-11 09:55:01.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:55:01 compute-0 nova_compute[260935]: 2025-10-11 09:55:01.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:55:01 compute-0 nova_compute[260935]: 2025-10-11 09:55:01.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:55:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3423: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:02 compute-0 podman[446294]: 2025-10-11 09:55:02.763214643 +0000 UTC m=+0.073242921 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 11 09:55:03 compute-0 ceph-mon[74313]: pgmap v3423: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3424: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:55:05 compute-0 ceph-mon[74313]: pgmap v3424: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3425: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:55:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:55:06 compute-0 nova_compute[260935]: 2025-10-11 09:55:06.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:55:06 compute-0 nova_compute[260935]: 2025-10-11 09:55:06.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:55:06 compute-0 nova_compute[260935]: 2025-10-11 09:55:06.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5046 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:55:06 compute-0 nova_compute[260935]: 2025-10-11 09:55:06.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:55:06 compute-0 nova_compute[260935]: 2025-10-11 09:55:06.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:55:06 compute-0 nova_compute[260935]: 2025-10-11 09:55:06.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:55:06 compute-0 nova_compute[260935]: 2025-10-11 09:55:06.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:55:07 compute-0 ceph-mon[74313]: pgmap v3425: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:07 compute-0 nova_compute[260935]: 2025-10-11 09:55:07.107 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:55:07 compute-0 nova_compute[260935]: 2025-10-11 09:55:07.107 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:55:07 compute-0 nova_compute[260935]: 2025-10-11 09:55:07.108 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:55:07 compute-0 nova_compute[260935]: 2025-10-11 09:55:07.305 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:55:07 compute-0 podman[446315]: 2025-10-11 09:55:07.797232328 +0000 UTC m=+0.095284814 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 11 09:55:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3426: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:07 compute-0 nova_compute[260935]: 2025-10-11 09:55:07.972 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:55:07 compute-0 nova_compute[260935]: 2025-10-11 09:55:07.993 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:55:07 compute-0 nova_compute[260935]: 2025-10-11 09:55:07.994 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:55:07 compute-0 nova_compute[260935]: 2025-10-11 09:55:07.994 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:55:07 compute-0 nova_compute[260935]: 2025-10-11 09:55:07.994 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:55:07 compute-0 nova_compute[260935]: 2025-10-11 09:55:07.994 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:55:07 compute-0 nova_compute[260935]: 2025-10-11 09:55:07.995 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.020 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.021 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.021 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.021 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.021 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:55:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:55:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1643149614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.482 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.589 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.589 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.590 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.596 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.596 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.602 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.602 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.861 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.863 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2782MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.864 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.865 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.993 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.994 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.994 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.995 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:55:08 compute-0 nova_compute[260935]: 2025-10-11 09:55:08.995 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:55:09 compute-0 nova_compute[260935]: 2025-10-11 09:55:09.017 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 09:55:09 compute-0 nova_compute[260935]: 2025-10-11 09:55:09.036 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 09:55:09 compute-0 nova_compute[260935]: 2025-10-11 09:55:09.036 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 09:55:09 compute-0 sshd-session[446337]: Invalid user esuser from 13.126.15.214 port 36462
Oct 11 09:55:09 compute-0 sshd-session[446337]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:55:09 compute-0 sshd-session[446337]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=13.126.15.214
Oct 11 09:55:09 compute-0 nova_compute[260935]: 2025-10-11 09:55:09.061 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 09:55:09 compute-0 ceph-mon[74313]: pgmap v3426: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1643149614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:55:09 compute-0 nova_compute[260935]: 2025-10-11 09:55:09.091 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 09:55:09 compute-0 nova_compute[260935]: 2025-10-11 09:55:09.175 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:55:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:55:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2618290930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:55:09 compute-0 nova_compute[260935]: 2025-10-11 09:55:09.659 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:55:09 compute-0 nova_compute[260935]: 2025-10-11 09:55:09.668 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:55:09 compute-0 nova_compute[260935]: 2025-10-11 09:55:09.703 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:55:09 compute-0 nova_compute[260935]: 2025-10-11 09:55:09.706 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:55:09 compute-0 nova_compute[260935]: 2025-10-11 09:55:09.706 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:55:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3427: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:55:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2618290930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:55:10 compute-0 podman[446257]: time="2025-10-11T09:55:10Z" level=info msg="Received shutdown.Stop(), terminating!" PID=446257
Oct 11 09:55:10 compute-0 systemd[1]: podman.service: Deactivated successfully.
Oct 11 09:55:10 compute-0 podman[446384]: 2025-10-11 09:55:10.358000606 +0000 UTC m=+0.117204113 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 11 09:55:10 compute-0 podman[446385]: 2025-10-11 09:55:10.393544101 +0000 UTC m=+0.155915548 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Oct 11 09:55:10 compute-0 sshd-session[446337]: Failed password for invalid user esuser from 13.126.15.214 port 36462 ssh2
Oct 11 09:55:11 compute-0 ceph-mon[74313]: pgmap v3427: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:11 compute-0 nova_compute[260935]: 2025-10-11 09:55:11.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:55:11 compute-0 sshd-session[446337]: Received disconnect from 13.126.15.214 port 36462:11: Bye Bye [preauth]
Oct 11 09:55:11 compute-0 sshd-session[446337]: Disconnected from invalid user esuser 13.126.15.214 port 36462 [preauth]
Oct 11 09:55:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3428: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:12 compute-0 sshd[189332]: Timeout before authentication for connection from 113.194.203.31 to 38.102.83.193, pid = 443684
Oct 11 09:55:13 compute-0 ceph-mon[74313]: pgmap v3428: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:13 compute-0 nova_compute[260935]: 2025-10-11 09:55:13.416 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:55:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3429: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:55:15 compute-0 ceph-mon[74313]: pgmap v3429: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:55:15.252 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:55:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:55:15.253 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:55:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:55:15.253 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:55:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3430: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:16 compute-0 nova_compute[260935]: 2025-10-11 09:55:16.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:55:16 compute-0 nova_compute[260935]: 2025-10-11 09:55:16.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:55:16 compute-0 nova_compute[260935]: 2025-10-11 09:55:16.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:55:16 compute-0 nova_compute[260935]: 2025-10-11 09:55:16.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:55:16 compute-0 nova_compute[260935]: 2025-10-11 09:55:16.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:55:16 compute-0 nova_compute[260935]: 2025-10-11 09:55:16.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:55:16 compute-0 nova_compute[260935]: 2025-10-11 09:55:16.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:55:17 compute-0 ceph-mon[74313]: pgmap v3430: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3431: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:19 compute-0 sudo[446430]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip --brief address list
Oct 11 09:55:19 compute-0 sudo[446430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:55:19 compute-0 sudo[446430]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:19 compute-0 ceph-mon[74313]: pgmap v3431: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:19 compute-0 sudo[446455]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip -o netns list
Oct 11 09:55:19 compute-0 sudo[446455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 09:55:19 compute-0 sudo[446455]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3432: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:55:21 compute-0 ceph-mon[74313]: pgmap v3432: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:21 compute-0 nova_compute[260935]: 2025-10-11 09:55:21.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:55:21 compute-0 nova_compute[260935]: 2025-10-11 09:55:21.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:55:21 compute-0 nova_compute[260935]: 2025-10-11 09:55:21.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:55:21 compute-0 nova_compute[260935]: 2025-10-11 09:55:21.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:55:21 compute-0 nova_compute[260935]: 2025-10-11 09:55:21.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:55:21 compute-0 nova_compute[260935]: 2025-10-11 09:55:21.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:55:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3433: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:22 compute-0 ovn_controller[152945]: 2025-10-11T09:55:22Z|01737|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 11 09:55:23 compute-0 ceph-mon[74313]: pgmap v3433: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3434: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:24 compute-0 sshd-session[446026]: Connection closed by 192.168.122.30 port 39412
Oct 11 09:55:24 compute-0 sshd-session[446023]: pam_unix(sshd:session): session closed for user zuul
Oct 11 09:55:24 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Oct 11 09:55:24 compute-0 systemd[1]: session-55.scope: Consumed 1.604s CPU time.
Oct 11 09:55:24 compute-0 systemd-logind[819]: Session 55 logged out. Waiting for processes to exit.
Oct 11 09:55:24 compute-0 systemd-logind[819]: Removed session 55.
Oct 11 09:55:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:55:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:55:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:55:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:55:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:55:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:55:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:55:25 compute-0 ceph-mon[74313]: pgmap v3434: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3435: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:26 compute-0 sshd-session[446256]: Connection closed by 192.168.122.30 port 39426
Oct 11 09:55:26 compute-0 sshd-session[446253]: pam_unix(sshd:session): session closed for user zuul
Oct 11 09:55:26 compute-0 systemd-logind[819]: Session 56 logged out. Waiting for processes to exit.
Oct 11 09:55:26 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Oct 11 09:55:26 compute-0 systemd-logind[819]: Removed session 56.
Oct 11 09:55:26 compute-0 nova_compute[260935]: 2025-10-11 09:55:26.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:55:26 compute-0 nova_compute[260935]: 2025-10-11 09:55:26.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:55:26 compute-0 nova_compute[260935]: 2025-10-11 09:55:26.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:55:26 compute-0 nova_compute[260935]: 2025-10-11 09:55:26.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:55:26 compute-0 nova_compute[260935]: 2025-10-11 09:55:26.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:55:26 compute-0 nova_compute[260935]: 2025-10-11 09:55:26.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:55:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:55:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2675494968' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:55:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:55:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2675494968' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:55:27 compute-0 ceph-mon[74313]: pgmap v3435: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2675494968' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:55:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2675494968' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:55:27 compute-0 nova_compute[260935]: 2025-10-11 09:55:27.661 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:55:27 compute-0 nova_compute[260935]: 2025-10-11 09:55:27.720 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:55:27 compute-0 nova_compute[260935]: 2025-10-11 09:55:27.720 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid b75d8ded-515b-48ff-a6b6-28df88878996 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:55:27 compute-0 nova_compute[260935]: 2025-10-11 09:55:27.721 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Triggering sync for uuid 52be16b4-343a-4fd4-9041-39069a1fde2a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 11 09:55:27 compute-0 nova_compute[260935]: 2025-10-11 09:55:27.721 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:55:27 compute-0 nova_compute[260935]: 2025-10-11 09:55:27.722 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:55:27 compute-0 nova_compute[260935]: 2025-10-11 09:55:27.722 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "b75d8ded-515b-48ff-a6b6-28df88878996" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:55:27 compute-0 nova_compute[260935]: 2025-10-11 09:55:27.722 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:55:27 compute-0 nova_compute[260935]: 2025-10-11 09:55:27.722 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "52be16b4-343a-4fd4-9041-39069a1fde2a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:55:27 compute-0 nova_compute[260935]: 2025-10-11 09:55:27.723 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:55:27 compute-0 nova_compute[260935]: 2025-10-11 09:55:27.759 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "b75d8ded-515b-48ff-a6b6-28df88878996" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:55:27 compute-0 nova_compute[260935]: 2025-10-11 09:55:27.760 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "c176845c-89c0-4038-ba22-4ee79bd3ebfe" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:55:27 compute-0 nova_compute[260935]: 2025-10-11 09:55:27.763 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "52be16b4-343a-4fd4-9041-39069a1fde2a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:55:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3436: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:29 compute-0 ceph-mon[74313]: pgmap v3436: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3437: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:55:31 compute-0 ceph-mon[74313]: pgmap v3437: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:31 compute-0 nova_compute[260935]: 2025-10-11 09:55:31.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:55:31 compute-0 nova_compute[260935]: 2025-10-11 09:55:31.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:55:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3438: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:33 compute-0 ceph-mon[74313]: pgmap v3438: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:33 compute-0 podman[446480]: 2025-10-11 09:55:33.762194995 +0000 UTC m=+0.068253910 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 11 09:55:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3439: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:55:35 compute-0 ceph-mon[74313]: pgmap v3439: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3440: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:36 compute-0 nova_compute[260935]: 2025-10-11 09:55:36.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4995-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:55:37 compute-0 ceph-mon[74313]: pgmap v3440: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3441: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:38 compute-0 nova_compute[260935]: 2025-10-11 09:55:38.764 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:55:38 compute-0 podman[446499]: 2025-10-11 09:55:38.782351017 +0000 UTC m=+0.079484728 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 11 09:55:39 compute-0 ceph-mon[74313]: pgmap v3441: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3442: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:55:40 compute-0 podman[446520]: 2025-10-11 09:55:40.813741365 +0000 UTC m=+0.099033529 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:55:40 compute-0 podman[446521]: 2025-10-11 09:55:40.892641905 +0000 UTC m=+0.173600397 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:55:41 compute-0 ceph-mon[74313]: pgmap v3442: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:41 compute-0 nova_compute[260935]: 2025-10-11 09:55:41.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:55:41 compute-0 nova_compute[260935]: 2025-10-11 09:55:41.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:55:41 compute-0 nova_compute[260935]: 2025-10-11 09:55:41.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:55:41 compute-0 nova_compute[260935]: 2025-10-11 09:55:41.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:55:41 compute-0 nova_compute[260935]: 2025-10-11 09:55:41.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:55:41 compute-0 nova_compute[260935]: 2025-10-11 09:55:41.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:55:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3443: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:43 compute-0 ceph-mon[74313]: pgmap v3443: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3444: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:55:45 compute-0 ceph-mon[74313]: pgmap v3444: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3445: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:46 compute-0 nova_compute[260935]: 2025-10-11 09:55:46.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:55:47 compute-0 ceph-mon[74313]: pgmap v3445: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3446: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:49 compute-0 ceph-mon[74313]: pgmap v3446: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3447: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:55:49 compute-0 sudo[446564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:55:49 compute-0 sudo[446564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:49 compute-0 sudo[446564]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:50 compute-0 sudo[446589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:55:50 compute-0 sudo[446589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:50 compute-0 sudo[446589]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:50 compute-0 sudo[446614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:55:50 compute-0 sudo[446614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:50 compute-0 sudo[446614]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:50 compute-0 sudo[446639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 11 09:55:50 compute-0 sudo[446639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:50 compute-0 sudo[446639]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:55:50 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:55:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:55:50 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:55:50 compute-0 sudo[446684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:55:50 compute-0 sudo[446684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:50 compute-0 sudo[446684]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:50 compute-0 sudo[446709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:55:50 compute-0 sudo[446709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:50 compute-0 sudo[446709]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:51 compute-0 sudo[446734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:55:51 compute-0 sudo[446734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:51 compute-0 sudo[446734]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:51 compute-0 sudo[446759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:55:51 compute-0 sudo[446759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:51 compute-0 ceph-mon[74313]: pgmap v3447: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:55:51 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:55:51 compute-0 nova_compute[260935]: 2025-10-11 09:55:51.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:55:51 compute-0 nova_compute[260935]: 2025-10-11 09:55:51.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:55:51 compute-0 nova_compute[260935]: 2025-10-11 09:55:51.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:55:51 compute-0 nova_compute[260935]: 2025-10-11 09:55:51.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:55:51 compute-0 nova_compute[260935]: 2025-10-11 09:55:51.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:55:51 compute-0 nova_compute[260935]: 2025-10-11 09:55:51.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:55:51 compute-0 sudo[446759]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:55:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:55:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:55:51 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:55:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:55:51 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:55:51 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 1842cf83-cee6-4524-a080-bd56bc1e7981 does not exist
Oct 11 09:55:51 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 963d5731-8610-4049-972c-31a2151881a8 does not exist
Oct 11 09:55:51 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 9a42180e-6ee5-4f70-9311-4f4bc411279b does not exist
Oct 11 09:55:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:55:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:55:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:55:51 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:55:51 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:55:51 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:55:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3448: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:51 compute-0 sudo[446814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:55:51 compute-0 sudo[446814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:51 compute-0 sudo[446814]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:51 compute-0 sudo[446839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:55:51 compute-0 sudo[446839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:51 compute-0 sudo[446839]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:52 compute-0 sudo[446864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:55:52 compute-0 sudo[446864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:52 compute-0 sudo[446864]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:52 compute-0 sudo[446889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:55:52 compute-0 sudo[446889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:55:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:55:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:55:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:55:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:55:52 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:55:52 compute-0 podman[446955]: 2025-10-11 09:55:52.62508064 +0000 UTC m=+0.072012736 container create 9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:55:52 compute-0 systemd[1]: Started libpod-conmon-9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f.scope.
Oct 11 09:55:52 compute-0 podman[446955]: 2025-10-11 09:55:52.592790998 +0000 UTC m=+0.039723164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:55:52 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:55:52 compute-0 podman[446955]: 2025-10-11 09:55:52.777328503 +0000 UTC m=+0.224260649 container init 9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:55:52 compute-0 podman[446955]: 2025-10-11 09:55:52.786571544 +0000 UTC m=+0.233503610 container start 9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 09:55:52 compute-0 podman[446955]: 2025-10-11 09:55:52.790388372 +0000 UTC m=+0.237320538 container attach 9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_borg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:55:52 compute-0 exciting_borg[446972]: 167 167
Oct 11 09:55:52 compute-0 systemd[1]: libpod-9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f.scope: Deactivated successfully.
Oct 11 09:55:52 compute-0 podman[446955]: 2025-10-11 09:55:52.798963454 +0000 UTC m=+0.245895560 container died 9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_borg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 11 09:55:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-db4c22380c2c1cad4f7fc5564b33c7bcb35ba9b219dcd6f449f08af01e5ee8da-merged.mount: Deactivated successfully.
Oct 11 09:55:52 compute-0 podman[446955]: 2025-10-11 09:55:52.852002073 +0000 UTC m=+0.298934139 container remove 9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_borg, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:55:52 compute-0 systemd[1]: libpod-conmon-9de2e6db4ec825c0641a607024f02bbb15c211d0cebb1643b3fc30798e7cfb1f.scope: Deactivated successfully.
Oct 11 09:55:53 compute-0 podman[446996]: 2025-10-11 09:55:53.02775846 +0000 UTC m=+0.045019803 container create a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 09:55:53 compute-0 systemd[1]: Started libpod-conmon-a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42.scope.
Oct 11 09:55:53 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97530adc22cb6b8e34ff3af726f4a7a72798ea664e1e7c88abf939b17a6cdc5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:55:53 compute-0 podman[446996]: 2025-10-11 09:55:53.006137929 +0000 UTC m=+0.023399362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97530adc22cb6b8e34ff3af726f4a7a72798ea664e1e7c88abf939b17a6cdc5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97530adc22cb6b8e34ff3af726f4a7a72798ea664e1e7c88abf939b17a6cdc5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97530adc22cb6b8e34ff3af726f4a7a72798ea664e1e7c88abf939b17a6cdc5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97530adc22cb6b8e34ff3af726f4a7a72798ea664e1e7c88abf939b17a6cdc5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:55:53 compute-0 podman[446996]: 2025-10-11 09:55:53.124536925 +0000 UTC m=+0.141798268 container init a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 09:55:53 compute-0 podman[446996]: 2025-10-11 09:55:53.133721145 +0000 UTC m=+0.150982488 container start a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:55:53 compute-0 podman[446996]: 2025-10-11 09:55:53.137244214 +0000 UTC m=+0.154505647 container attach a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 11 09:55:53 compute-0 ceph-mon[74313]: pgmap v3448: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3449: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:54 compute-0 hungry_chebyshev[447013]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:55:54 compute-0 hungry_chebyshev[447013]: --> relative data size: 1.0
Oct 11 09:55:54 compute-0 hungry_chebyshev[447013]: --> All data devices are unavailable
Oct 11 09:55:54 compute-0 systemd[1]: libpod-a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42.scope: Deactivated successfully.
Oct 11 09:55:54 compute-0 podman[446996]: 2025-10-11 09:55:54.253039998 +0000 UTC m=+1.270301361 container died a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 11 09:55:54 compute-0 systemd[1]: libpod-a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42.scope: Consumed 1.062s CPU time.
Oct 11 09:55:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-97530adc22cb6b8e34ff3af726f4a7a72798ea664e1e7c88abf939b17a6cdc5e-merged.mount: Deactivated successfully.
Oct 11 09:55:54 compute-0 podman[446996]: 2025-10-11 09:55:54.329523389 +0000 UTC m=+1.346784742 container remove a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:55:54 compute-0 systemd[1]: libpod-conmon-a39ed9ff5096f945d8a324352ce24f58fc2ca7a5013c00c0618ae8913b79ef42.scope: Deactivated successfully.
Oct 11 09:55:54 compute-0 sudo[446889]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:54 compute-0 sudo[447056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:55:54 compute-0 sudo[447056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:54 compute-0 sudo[447056]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:54 compute-0 sudo[447081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:55:54 compute-0 sudo[447081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:54 compute-0 sudo[447081]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:54 compute-0 sudo[447106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:55:54 compute-0 sudo[447106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:54 compute-0 sudo[447106]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:54 compute-0 sudo[447131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:55:54 compute-0 sudo[447131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:55:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:55:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:55:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:55:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:55:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:55:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:55:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:55:55
Oct 11 09:55:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:55:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:55:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'default.rgw.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'images', 'volumes']
Oct 11 09:55:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:55:55 compute-0 podman[447197]: 2025-10-11 09:55:55.201958335 +0000 UTC m=+0.075219167 container create ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:55:55 compute-0 systemd[1]: Started libpod-conmon-ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65.scope.
Oct 11 09:55:55 compute-0 podman[447197]: 2025-10-11 09:55:55.173549202 +0000 UTC m=+0.046810064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:55:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:55:55 compute-0 ceph-mon[74313]: pgmap v3449: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:55 compute-0 podman[447197]: 2025-10-11 09:55:55.303071922 +0000 UTC m=+0.176332734 container init ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:55:55 compute-0 podman[447197]: 2025-10-11 09:55:55.315581586 +0000 UTC m=+0.188842388 container start ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:55:55 compute-0 podman[447197]: 2025-10-11 09:55:55.319225769 +0000 UTC m=+0.192486591 container attach ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:55:55 compute-0 practical_merkle[447214]: 167 167
Oct 11 09:55:55 compute-0 systemd[1]: libpod-ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65.scope: Deactivated successfully.
Oct 11 09:55:55 compute-0 podman[447197]: 2025-10-11 09:55:55.325533177 +0000 UTC m=+0.198794009 container died ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:55:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-efbee594591c9c92e902307ae96032c3089a43d748ac403a1ee524a02c49d955-merged.mount: Deactivated successfully.
Oct 11 09:55:55 compute-0 podman[447197]: 2025-10-11 09:55:55.372781982 +0000 UTC m=+0.246042794 container remove ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 11 09:55:55 compute-0 systemd[1]: libpod-conmon-ccd1552c1c40a2e2fdcc3183b9632985dfa3c766fdcc307de67153d59e20da65.scope: Deactivated successfully.
Oct 11 09:55:55 compute-0 podman[447237]: 2025-10-11 09:55:55.614585826 +0000 UTC m=+0.056676363 container create 1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 11 09:55:55 compute-0 systemd[1]: Started libpod-conmon-1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047.scope.
Oct 11 09:55:55 compute-0 ceph-mgr[74605]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2898047278
Oct 11 09:55:55 compute-0 podman[447237]: 2025-10-11 09:55:55.59491303 +0000 UTC m=+0.037003607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:55:55 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652143c9bca1181fb983e066942ee8bf221f66df02750a7ee3b9bfde38df833c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652143c9bca1181fb983e066942ee8bf221f66df02750a7ee3b9bfde38df833c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652143c9bca1181fb983e066942ee8bf221f66df02750a7ee3b9bfde38df833c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/652143c9bca1181fb983e066942ee8bf221f66df02750a7ee3b9bfde38df833c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:55:55 compute-0 podman[447237]: 2025-10-11 09:55:55.726988143 +0000 UTC m=+0.169078730 container init 1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:55:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:55:55 compute-0 podman[447237]: 2025-10-11 09:55:55.74425056 +0000 UTC m=+0.186341137 container start 1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:55:55 compute-0 podman[447237]: 2025-10-11 09:55:55.748718107 +0000 UTC m=+0.190808744 container attach 1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 11 09:55:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3450: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:56 compute-0 nova_compute[260935]: 2025-10-11 09:55:56.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:55:56 compute-0 nova_compute[260935]: 2025-10-11 09:55:56.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:55:56 compute-0 gifted_moore[447254]: {
Oct 11 09:55:56 compute-0 gifted_moore[447254]:     "0": [
Oct 11 09:55:56 compute-0 gifted_moore[447254]:         {
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "devices": [
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "/dev/loop3"
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             ],
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "lv_name": "ceph_lv0",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "lv_size": "21470642176",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "name": "ceph_lv0",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "tags": {
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.cluster_name": "ceph",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.crush_device_class": "",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.encrypted": "0",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.osd_id": "0",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.type": "block",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.vdo": "0"
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             },
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "type": "block",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "vg_name": "ceph_vg0"
Oct 11 09:55:56 compute-0 gifted_moore[447254]:         }
Oct 11 09:55:56 compute-0 gifted_moore[447254]:     ],
Oct 11 09:55:56 compute-0 gifted_moore[447254]:     "1": [
Oct 11 09:55:56 compute-0 gifted_moore[447254]:         {
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "devices": [
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "/dev/loop4"
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             ],
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "lv_name": "ceph_lv1",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "lv_size": "21470642176",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "name": "ceph_lv1",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "tags": {
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.cluster_name": "ceph",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.crush_device_class": "",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.encrypted": "0",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.osd_id": "1",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.type": "block",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.vdo": "0"
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             },
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "type": "block",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "vg_name": "ceph_vg1"
Oct 11 09:55:56 compute-0 gifted_moore[447254]:         }
Oct 11 09:55:56 compute-0 gifted_moore[447254]:     ],
Oct 11 09:55:56 compute-0 gifted_moore[447254]:     "2": [
Oct 11 09:55:56 compute-0 gifted_moore[447254]:         {
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "devices": [
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "/dev/loop5"
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             ],
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "lv_name": "ceph_lv2",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "lv_size": "21470642176",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "name": "ceph_lv2",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "tags": {
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.cluster_name": "ceph",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.crush_device_class": "",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.encrypted": "0",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.osd_id": "2",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.type": "block",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:                 "ceph.vdo": "0"
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             },
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "type": "block",
Oct 11 09:55:56 compute-0 gifted_moore[447254]:             "vg_name": "ceph_vg2"
Oct 11 09:55:56 compute-0 gifted_moore[447254]:         }
Oct 11 09:55:56 compute-0 gifted_moore[447254]:     ]
Oct 11 09:55:56 compute-0 gifted_moore[447254]: }
Oct 11 09:55:56 compute-0 systemd[1]: libpod-1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047.scope: Deactivated successfully.
Oct 11 09:55:56 compute-0 podman[447263]: 2025-10-11 09:55:56.633452699 +0000 UTC m=+0.034199927 container died 1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:55:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-652143c9bca1181fb983e066942ee8bf221f66df02750a7ee3b9bfde38df833c-merged.mount: Deactivated successfully.
Oct 11 09:55:56 compute-0 podman[447263]: 2025-10-11 09:55:56.716429194 +0000 UTC m=+0.117176442 container remove 1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:55:56 compute-0 systemd[1]: libpod-conmon-1ac1ecad6a28d0449f5d09336ea51d15585b882a1f0ed142563e89e2dc546047.scope: Deactivated successfully.
Oct 11 09:55:56 compute-0 sudo[447131]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:56 compute-0 sudo[447278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:55:56 compute-0 sudo[447278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:56 compute-0 sudo[447278]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:56 compute-0 sudo[447303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:55:56 compute-0 sudo[447303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:56 compute-0 sudo[447303]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:57 compute-0 sudo[447328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:55:57 compute-0 sudo[447328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:57 compute-0 sudo[447328]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:57 compute-0 sudo[447353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:55:57 compute-0 sudo[447353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:57 compute-0 ceph-mon[74313]: pgmap v3450: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:57 compute-0 podman[447417]: 2025-10-11 09:55:57.60092002 +0000 UTC m=+0.051294090 container create 4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 09:55:57 compute-0 systemd[1]: Started libpod-conmon-4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b.scope.
Oct 11 09:55:57 compute-0 podman[447417]: 2025-10-11 09:55:57.581933314 +0000 UTC m=+0.032307344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:55:57 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:55:57 compute-0 podman[447417]: 2025-10-11 09:55:57.712046851 +0000 UTC m=+0.162420931 container init 4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:55:57 compute-0 podman[447417]: 2025-10-11 09:55:57.72086928 +0000 UTC m=+0.171243350 container start 4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:55:57 compute-0 podman[447417]: 2025-10-11 09:55:57.724918285 +0000 UTC m=+0.175292415 container attach 4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sutherland, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:55:57 compute-0 gifted_sutherland[447433]: 167 167
Oct 11 09:55:57 compute-0 systemd[1]: libpod-4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b.scope: Deactivated successfully.
Oct 11 09:55:57 compute-0 conmon[447433]: conmon 4e6264ec3cd5e2ba8821 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b.scope/container/memory.events
Oct 11 09:55:57 compute-0 podman[447417]: 2025-10-11 09:55:57.730468442 +0000 UTC m=+0.180842542 container died 4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 09:55:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-86d05ae28cef3c1bef652d6fbc61f65dcf3db892e4bee7aeefc59346da1aaf4c-merged.mount: Deactivated successfully.
Oct 11 09:55:57 compute-0 podman[447417]: 2025-10-11 09:55:57.781885485 +0000 UTC m=+0.232259525 container remove 4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 11 09:55:57 compute-0 systemd[1]: libpod-conmon-4e6264ec3cd5e2ba88210de1eb1c85a9dbd7b8c7c220e712df97a5634c6d340b.scope: Deactivated successfully.
Oct 11 09:55:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3451: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:57 compute-0 podman[447457]: 2025-10-11 09:55:57.983618226 +0000 UTC m=+0.045908509 container create bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swanson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Oct 11 09:55:58 compute-0 systemd[1]: Started libpod-conmon-bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9.scope.
Oct 11 09:55:58 compute-0 podman[447457]: 2025-10-11 09:55:57.963193159 +0000 UTC m=+0.025483522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:55:58 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b883c7800eed4586e6588266624815260e8190dfb1a08b025d337bfcfeeb8a98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b883c7800eed4586e6588266624815260e8190dfb1a08b025d337bfcfeeb8a98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b883c7800eed4586e6588266624815260e8190dfb1a08b025d337bfcfeeb8a98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b883c7800eed4586e6588266624815260e8190dfb1a08b025d337bfcfeeb8a98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:55:58 compute-0 podman[447457]: 2025-10-11 09:55:58.082410428 +0000 UTC m=+0.144700791 container init bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 09:55:58 compute-0 podman[447457]: 2025-10-11 09:55:58.094004035 +0000 UTC m=+0.156294308 container start bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swanson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 11 09:55:58 compute-0 podman[447457]: 2025-10-11 09:55:58.098623956 +0000 UTC m=+0.160914239 container attach bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swanson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:55:59 compute-0 pensive_swanson[447473]: {
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:         "osd_id": 2,
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:         "type": "bluestore"
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:     },
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:         "osd_id": 0,
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:         "type": "bluestore"
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:     },
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:         "osd_id": 1,
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:         "type": "bluestore"
Oct 11 09:55:59 compute-0 pensive_swanson[447473]:     }
Oct 11 09:55:59 compute-0 pensive_swanson[447473]: }
Oct 11 09:55:59 compute-0 systemd[1]: libpod-bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9.scope: Deactivated successfully.
Oct 11 09:55:59 compute-0 systemd[1]: libpod-bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9.scope: Consumed 1.097s CPU time.
Oct 11 09:55:59 compute-0 podman[447457]: 2025-10-11 09:55:59.190884344 +0000 UTC m=+1.253174667 container died bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swanson, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 11 09:55:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b883c7800eed4586e6588266624815260e8190dfb1a08b025d337bfcfeeb8a98-merged.mount: Deactivated successfully.
Oct 11 09:55:59 compute-0 podman[447457]: 2025-10-11 09:55:59.271663987 +0000 UTC m=+1.333954300 container remove bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swanson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 11 09:55:59 compute-0 systemd[1]: libpod-conmon-bb60f89cfcdec46801f9f95ef29779b45c68e63540db2a67ec10701291464bb9.scope: Deactivated successfully.
Oct 11 09:55:59 compute-0 sudo[447353]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:59 compute-0 ceph-mon[74313]: pgmap v3451: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:55:59 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:55:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:55:59 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:55:59 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev b491fff0-8335-4ab7-8b5b-be3c4b6bd17c does not exist
Oct 11 09:55:59 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 00bf86e8-61a9-4b5a-bfd1-34afbae5d73d does not exist
Oct 11 09:55:59 compute-0 sudo[447520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:55:59 compute-0 sudo[447520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:59 compute-0 sudo[447520]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:59 compute-0 sudo[447545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:55:59 compute-0 sudo[447545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:55:59 compute-0 sudo[447545]: pam_unix(sudo:session): session closed for user root
Oct 11 09:55:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3452: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:55:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:56:00 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:56:00 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:56:01 compute-0 ceph-mon[74313]: pgmap v3452: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:01 compute-0 nova_compute[260935]: 2025-10-11 09:56:01.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:56:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3453: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:02 compute-0 nova_compute[260935]: 2025-10-11 09:56:02.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:56:03 compute-0 ceph-mon[74313]: pgmap v3453: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:03 compute-0 nova_compute[260935]: 2025-10-11 09:56:03.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:56:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3454: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:04 compute-0 podman[447572]: 2025-10-11 09:56:04.822303521 +0000 UTC m=+0.102613541 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 11 09:56:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:56:05 compute-0 sshd-session[447570]: Invalid user lab from 43.157.67.116 port 45590
Oct 11 09:56:05 compute-0 sshd-session[447570]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:56:05 compute-0 sshd-session[447570]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.157.67.116
Oct 11 09:56:05 compute-0 ceph-mon[74313]: pgmap v3454: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3455: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:56:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:56:06 compute-0 nova_compute[260935]: 2025-10-11 09:56:06.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:56:06 compute-0 nova_compute[260935]: 2025-10-11 09:56:06.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:56:07 compute-0 sshd-session[447570]: Failed password for invalid user lab from 43.157.67.116 port 45590 ssh2
Oct 11 09:56:07 compute-0 ceph-mon[74313]: pgmap v3455: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:07 compute-0 nova_compute[260935]: 2025-10-11 09:56:07.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:56:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3456: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:08 compute-0 nova_compute[260935]: 2025-10-11 09:56:08.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:56:08 compute-0 nova_compute[260935]: 2025-10-11 09:56:08.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:56:09 compute-0 sshd-session[447570]: Received disconnect from 43.157.67.116 port 45590:11: Bye Bye [preauth]
Oct 11 09:56:09 compute-0 sshd-session[447570]: Disconnected from invalid user lab 43.157.67.116 port 45590 [preauth]
Oct 11 09:56:09 compute-0 ceph-mon[74313]: pgmap v3456: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:09 compute-0 nova_compute[260935]: 2025-10-11 09:56:09.500 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:56:09 compute-0 nova_compute[260935]: 2025-10-11 09:56:09.501 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:56:09 compute-0 nova_compute[260935]: 2025-10-11 09:56:09.501 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:56:09 compute-0 nova_compute[260935]: 2025-10-11 09:56:09.770 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:56:09 compute-0 podman[447592]: 2025-10-11 09:56:09.799840219 +0000 UTC m=+0.092742882 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 11 09:56:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3457: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:56:10 compute-0 nova_compute[260935]: 2025-10-11 09:56:10.362 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:56:10 compute-0 nova_compute[260935]: 2025-10-11 09:56:10.386 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:56:10 compute-0 nova_compute[260935]: 2025-10-11 09:56:10.386 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:56:10 compute-0 nova_compute[260935]: 2025-10-11 09:56:10.386 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:56:10 compute-0 nova_compute[260935]: 2025-10-11 09:56:10.387 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:56:10 compute-0 nova_compute[260935]: 2025-10-11 09:56:10.387 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:56:10 compute-0 nova_compute[260935]: 2025-10-11 09:56:10.425 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:56:10 compute-0 nova_compute[260935]: 2025-10-11 09:56:10.426 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:56:10 compute-0 nova_compute[260935]: 2025-10-11 09:56:10.427 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:56:10 compute-0 nova_compute[260935]: 2025-10-11 09:56:10.427 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:56:10 compute-0 nova_compute[260935]: 2025-10-11 09:56:10.428 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:56:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:56:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/809046280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:56:10 compute-0 nova_compute[260935]: 2025-10-11 09:56:10.920 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.106 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.107 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.107 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.111 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.111 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.115 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.115 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.307 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.309 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2773MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.309 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.309 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:56:11 compute-0 ceph-mon[74313]: pgmap v3457: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:11 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/809046280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.520 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.520 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.521 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.521 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.522 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:56:11 compute-0 nova_compute[260935]: 2025-10-11 09:56:11.737 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:56:11 compute-0 podman[447635]: 2025-10-11 09:56:11.809103182 +0000 UTC m=+0.100705447 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct 11 09:56:11 compute-0 podman[447634]: 2025-10-11 09:56:11.822771059 +0000 UTC m=+0.110272468 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:56:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3458: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:56:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1596980447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:56:12 compute-0 nova_compute[260935]: 2025-10-11 09:56:12.238 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:56:12 compute-0 nova_compute[260935]: 2025-10-11 09:56:12.248 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:56:12 compute-0 nova_compute[260935]: 2025-10-11 09:56:12.353 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:56:12 compute-0 nova_compute[260935]: 2025-10-11 09:56:12.357 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:56:12 compute-0 nova_compute[260935]: 2025-10-11 09:56:12.357 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:56:12 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1596980447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:56:13 compute-0 ceph-mon[74313]: pgmap v3458: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3459: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:14 compute-0 nova_compute[260935]: 2025-10-11 09:56:14.674 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:56:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:56:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:56:15.253 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:56:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:56:15.254 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:56:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:56:15.254 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:56:15 compute-0 ceph-mon[74313]: pgmap v3459: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3460: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:16 compute-0 nova_compute[260935]: 2025-10-11 09:56:16.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:56:17 compute-0 ceph-mon[74313]: pgmap v3460: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3461: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:18 compute-0 nova_compute[260935]: 2025-10-11 09:56:18.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:56:19 compute-0 ceph-mon[74313]: pgmap v3461: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3462: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:56:21 compute-0 nova_compute[260935]: 2025-10-11 09:56:21.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:56:21 compute-0 ceph-mon[74313]: pgmap v3462: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3463: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:23 compute-0 ceph-mon[74313]: pgmap v3463: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3464: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:24 compute-0 nova_compute[260935]: 2025-10-11 09:56:24.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:56:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:56:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:56:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:56:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:56:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:56:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:56:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:56:25 compute-0 ceph-mon[74313]: pgmap v3464: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3465: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:26 compute-0 nova_compute[260935]: 2025-10-11 09:56:26.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:56:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:56:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/495599010' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:56:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:56:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/495599010' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:56:27 compute-0 ceph-mon[74313]: pgmap v3465: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/495599010' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:56:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/495599010' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:56:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3466: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:29 compute-0 ceph-mon[74313]: pgmap v3466: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3467: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:56:31 compute-0 nova_compute[260935]: 2025-10-11 09:56:31.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:56:31 compute-0 nova_compute[260935]: 2025-10-11 09:56:31.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:56:31 compute-0 nova_compute[260935]: 2025-10-11 09:56:31.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5005 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:56:31 compute-0 nova_compute[260935]: 2025-10-11 09:56:31.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:56:31 compute-0 ceph-mon[74313]: pgmap v3467: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:31 compute-0 nova_compute[260935]: 2025-10-11 09:56:31.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:56:31 compute-0 nova_compute[260935]: 2025-10-11 09:56:31.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:56:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3468: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:33 compute-0 ceph-mon[74313]: pgmap v3468: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3469: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:56:35 compute-0 ceph-mon[74313]: pgmap v3469: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:35 compute-0 podman[447702]: 2025-10-11 09:56:35.799098383 +0000 UTC m=+0.085182478 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 11 09:56:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3470: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:36 compute-0 nova_compute[260935]: 2025-10-11 09:56:36.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:56:36 compute-0 nova_compute[260935]: 2025-10-11 09:56:36.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:56:36 compute-0 nova_compute[260935]: 2025-10-11 09:56:36.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5036 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:56:36 compute-0 nova_compute[260935]: 2025-10-11 09:56:36.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:56:36 compute-0 nova_compute[260935]: 2025-10-11 09:56:36.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:56:36 compute-0 nova_compute[260935]: 2025-10-11 09:56:36.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:56:37 compute-0 ceph-mon[74313]: pgmap v3470: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3471: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:38 compute-0 ceph-mon[74313]: pgmap v3471: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:38 compute-0 nova_compute[260935]: 2025-10-11 09:56:38.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:56:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3472: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:56:40 compute-0 sshd-session[447722]: Invalid user devuser from 13.126.15.214 port 45488
Oct 11 09:56:40 compute-0 sshd-session[447722]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:56:40 compute-0 sshd-session[447722]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=13.126.15.214
Oct 11 09:56:40 compute-0 podman[447724]: 2025-10-11 09:56:40.477656632 +0000 UTC m=+0.076391359 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 11 09:56:40 compute-0 ceph-mon[74313]: pgmap v3472: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:41 compute-0 nova_compute[260935]: 2025-10-11 09:56:41.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:56:41 compute-0 nova_compute[260935]: 2025-10-11 09:56:41.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:56:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3473: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:42 compute-0 podman[447745]: 2025-10-11 09:56:42.795353901 +0000 UTC m=+0.090894500 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 09:56:42 compute-0 sshd-session[447722]: Failed password for invalid user devuser from 13.126.15.214 port 45488 ssh2
Oct 11 09:56:42 compute-0 podman[447746]: 2025-10-11 09:56:42.832033667 +0000 UTC m=+0.123471130 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 11 09:56:42 compute-0 ceph-mon[74313]: pgmap v3473: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:43 compute-0 sshd[189332]: drop connection #1 from [113.194.203.31]:59998 on [38.102.83.193]:22 penalty: exceeded LoginGraceTime
Oct 11 09:56:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3474: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:56:44 compute-0 ceph-mon[74313]: pgmap v3474: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:44 compute-0 sshd-session[447722]: Received disconnect from 13.126.15.214 port 45488:11: Bye Bye [preauth]
Oct 11 09:56:44 compute-0 sshd-session[447722]: Disconnected from invalid user devuser 13.126.15.214 port 45488 [preauth]
Oct 11 09:56:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3475: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:46 compute-0 nova_compute[260935]: 2025-10-11 09:56:46.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:56:46 compute-0 ceph-mon[74313]: pgmap v3475: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3476: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:48 compute-0 ceph-mon[74313]: pgmap v3476: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3477: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:56:50 compute-0 ceph-mon[74313]: pgmap v3477: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:51 compute-0 nova_compute[260935]: 2025-10-11 09:56:51.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:56:51 compute-0 nova_compute[260935]: 2025-10-11 09:56:51.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:56:51 compute-0 nova_compute[260935]: 2025-10-11 09:56:51.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:56:51 compute-0 nova_compute[260935]: 2025-10-11 09:56:51.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:56:51 compute-0 nova_compute[260935]: 2025-10-11 09:56:51.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:56:51 compute-0 nova_compute[260935]: 2025-10-11 09:56:51.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:56:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3478: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:52 compute-0 ceph-mon[74313]: pgmap v3478: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3479: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:56:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:56:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:56:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:56:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:56:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:56:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:56:54 compute-0 ceph-mon[74313]: pgmap v3479: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:56:55
Oct 11 09:56:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:56:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:56:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'vms']
Oct 11 09:56:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:56:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:56:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3480: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:56 compute-0 nova_compute[260935]: 2025-10-11 09:56:56.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:56:56 compute-0 nova_compute[260935]: 2025-10-11 09:56:56.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:56:56 compute-0 nova_compute[260935]: 2025-10-11 09:56:56.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:56:56 compute-0 nova_compute[260935]: 2025-10-11 09:56:56.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:56:56 compute-0 nova_compute[260935]: 2025-10-11 09:56:56.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:56:56 compute-0 nova_compute[260935]: 2025-10-11 09:56:56.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:56:57 compute-0 ceph-mon[74313]: pgmap v3480: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3481: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:59 compute-0 ceph-mon[74313]: pgmap v3481: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:59 compute-0 sudo[447791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:56:59 compute-0 sudo[447791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:56:59 compute-0 sudo[447791]: pam_unix(sudo:session): session closed for user root
Oct 11 09:56:59 compute-0 sudo[447817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:56:59 compute-0 sudo[447817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:56:59 compute-0 sudo[447817]: pam_unix(sudo:session): session closed for user root
Oct 11 09:56:59 compute-0 sudo[447842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:56:59 compute-0 sudo[447842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:56:59 compute-0 sudo[447842]: pam_unix(sudo:session): session closed for user root
Oct 11 09:56:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3482: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:56:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:56:59 compute-0 sudo[447867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:56:59 compute-0 sudo[447867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:57:00 compute-0 sudo[447867]: pam_unix(sudo:session): session closed for user root
Oct 11 09:57:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 11 09:57:00 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 09:57:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:57:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:57:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:57:00 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:57:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:57:00 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:57:00 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 52a27e58-ec03-4fe2-ba7a-6e8a6a627987 does not exist
Oct 11 09:57:00 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 098387c3-8f4a-4ed3-8224-092a198fcd35 does not exist
Oct 11 09:57:00 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 390df583-e6af-4048-8a55-63168625a4d9 does not exist
Oct 11 09:57:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:57:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:57:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:57:00 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:57:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:57:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:57:00 compute-0 sudo[447926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:57:00 compute-0 sudo[447926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:57:00 compute-0 sudo[447926]: pam_unix(sudo:session): session closed for user root
Oct 11 09:57:00 compute-0 sudo[447951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:57:00 compute-0 sudo[447951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:57:00 compute-0 sudo[447951]: pam_unix(sudo:session): session closed for user root
Oct 11 09:57:00 compute-0 sudo[447976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:57:00 compute-0 sudo[447976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:57:00 compute-0 sudo[447976]: pam_unix(sudo:session): session closed for user root
Oct 11 09:57:00 compute-0 sudo[448001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:57:00 compute-0 sudo[448001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:57:01 compute-0 ceph-mon[74313]: pgmap v3482: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 11 09:57:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:57:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:57:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:57:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:57:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:57:01 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:57:01 compute-0 podman[448067]: 2025-10-11 09:57:01.381153408 +0000 UTC m=+0.077435019 container create aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shockley, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 11 09:57:01 compute-0 systemd[1]: Started libpod-conmon-aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc.scope.
Oct 11 09:57:01 compute-0 podman[448067]: 2025-10-11 09:57:01.3486864 +0000 UTC m=+0.044968061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:57:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:57:01 compute-0 podman[448067]: 2025-10-11 09:57:01.501325334 +0000 UTC m=+0.197606975 container init aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shockley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:57:01 compute-0 podman[448067]: 2025-10-11 09:57:01.515280499 +0000 UTC m=+0.211562110 container start aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shockley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 11 09:57:01 compute-0 podman[448067]: 2025-10-11 09:57:01.520944899 +0000 UTC m=+0.217226510 container attach aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shockley, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 09:57:01 compute-0 vigilant_shockley[448083]: 167 167
Oct 11 09:57:01 compute-0 systemd[1]: libpod-aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc.scope: Deactivated successfully.
Oct 11 09:57:01 compute-0 conmon[448083]: conmon aa4861981e8112b1cc41 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc.scope/container/memory.events
Oct 11 09:57:01 compute-0 podman[448067]: 2025-10-11 09:57:01.526341131 +0000 UTC m=+0.222622742 container died aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 09:57:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd91e17c17cada7245855b2243a1fc12620ebaf5f5296a3d46bb739383c467f1-merged.mount: Deactivated successfully.
Oct 11 09:57:01 compute-0 podman[448067]: 2025-10-11 09:57:01.584064792 +0000 UTC m=+0.280346403 container remove aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shockley, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 11 09:57:01 compute-0 systemd[1]: libpod-conmon-aa4861981e8112b1cc4173affdcfdf7cd6b31ee7fce2d6e9297dc40e3edf5acc.scope: Deactivated successfully.
Oct 11 09:57:01 compute-0 nova_compute[260935]: 2025-10-11 09:57:01.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:57:01 compute-0 nova_compute[260935]: 2025-10-11 09:57:01.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:57:01 compute-0 nova_compute[260935]: 2025-10-11 09:57:01.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:57:01 compute-0 nova_compute[260935]: 2025-10-11 09:57:01.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:57:01 compute-0 nova_compute[260935]: 2025-10-11 09:57:01.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:57:01 compute-0 nova_compute[260935]: 2025-10-11 09:57:01.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:57:01 compute-0 podman[448107]: 2025-10-11 09:57:01.868847671 +0000 UTC m=+0.074684302 container create 07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 09:57:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3483: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:01 compute-0 systemd[1]: Started libpod-conmon-07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac.scope.
Oct 11 09:57:01 compute-0 podman[448107]: 2025-10-11 09:57:01.840653884 +0000 UTC m=+0.046490565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:57:01 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f30b70774f00f60633c522e52d10d02a346ca863997984b524056c98e48a8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f30b70774f00f60633c522e52d10d02a346ca863997984b524056c98e48a8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f30b70774f00f60633c522e52d10d02a346ca863997984b524056c98e48a8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f30b70774f00f60633c522e52d10d02a346ca863997984b524056c98e48a8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:57:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f30b70774f00f60633c522e52d10d02a346ca863997984b524056c98e48a8f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:57:02 compute-0 podman[448107]: 2025-10-11 09:57:02.009295 +0000 UTC m=+0.215131661 container init 07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:57:02 compute-0 podman[448107]: 2025-10-11 09:57:02.019546009 +0000 UTC m=+0.225382640 container start 07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 11 09:57:02 compute-0 podman[448107]: 2025-10-11 09:57:02.023942284 +0000 UTC m=+0.229778925 container attach 07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 11 09:57:03 compute-0 ceph-mon[74313]: pgmap v3483: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:03 compute-0 cranky_hellman[448123]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:57:03 compute-0 cranky_hellman[448123]: --> relative data size: 1.0
Oct 11 09:57:03 compute-0 cranky_hellman[448123]: --> All data devices are unavailable
Oct 11 09:57:03 compute-0 systemd[1]: libpod-07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac.scope: Deactivated successfully.
Oct 11 09:57:03 compute-0 systemd[1]: libpod-07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac.scope: Consumed 1.115s CPU time.
Oct 11 09:57:03 compute-0 podman[448107]: 2025-10-11 09:57:03.224212464 +0000 UTC m=+1.430049145 container died 07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:57:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-72f30b70774f00f60633c522e52d10d02a346ca863997984b524056c98e48a8f-merged.mount: Deactivated successfully.
Oct 11 09:57:03 compute-0 podman[448107]: 2025-10-11 09:57:03.286066022 +0000 UTC m=+1.491902653 container remove 07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 09:57:03 compute-0 systemd[1]: libpod-conmon-07c0bbf460b721593030c55ee1525d84d37c82b1727cece565bb91036ee360ac.scope: Deactivated successfully.
Oct 11 09:57:03 compute-0 sudo[448001]: pam_unix(sudo:session): session closed for user root
Oct 11 09:57:03 compute-0 sudo[448166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:57:03 compute-0 sudo[448166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:57:03 compute-0 sudo[448166]: pam_unix(sudo:session): session closed for user root
Oct 11 09:57:03 compute-0 sudo[448191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:57:03 compute-0 sudo[448191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:57:03 compute-0 sudo[448191]: pam_unix(sudo:session): session closed for user root
Oct 11 09:57:03 compute-0 sudo[448216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:57:03 compute-0 sudo[448216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:57:03 compute-0 sudo[448216]: pam_unix(sudo:session): session closed for user root
Oct 11 09:57:03 compute-0 sudo[448241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:57:03 compute-0 sudo[448241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:57:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3484: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:04 compute-0 podman[448306]: 2025-10-11 09:57:04.142221997 +0000 UTC m=+0.058861204 container create 7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:57:04 compute-0 systemd[1]: Started libpod-conmon-7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6.scope.
Oct 11 09:57:04 compute-0 podman[448306]: 2025-10-11 09:57:04.117713484 +0000 UTC m=+0.034352691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:57:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:57:04 compute-0 podman[448306]: 2025-10-11 09:57:04.25769376 +0000 UTC m=+0.174333017 container init 7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:57:04 compute-0 podman[448306]: 2025-10-11 09:57:04.270246105 +0000 UTC m=+0.186885272 container start 7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 09:57:04 compute-0 podman[448306]: 2025-10-11 09:57:04.273681882 +0000 UTC m=+0.190321139 container attach 7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:57:04 compute-0 xenodochial_mclaren[448323]: 167 167
Oct 11 09:57:04 compute-0 systemd[1]: libpod-7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6.scope: Deactivated successfully.
Oct 11 09:57:04 compute-0 podman[448306]: 2025-10-11 09:57:04.28210501 +0000 UTC m=+0.198744217 container died 7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct 11 09:57:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc48afd4ed0b7fc998dc1b3ef864b297d93b3d660e29042ec440f47a5cf10259-merged.mount: Deactivated successfully.
Oct 11 09:57:04 compute-0 podman[448306]: 2025-10-11 09:57:04.33516015 +0000 UTC m=+0.251799327 container remove 7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 09:57:04 compute-0 systemd[1]: libpod-conmon-7369e8088c6e5dcddaea597d4cc7aafa16633b34ba4d6993b2d15c17e592a7f6.scope: Deactivated successfully.
Oct 11 09:57:04 compute-0 podman[448347]: 2025-10-11 09:57:04.573593548 +0000 UTC m=+0.064870324 container create 76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 11 09:57:04 compute-0 podman[448347]: 2025-10-11 09:57:04.545055721 +0000 UTC m=+0.036332567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:57:04 compute-0 systemd[1]: Started libpod-conmon-76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4.scope.
Oct 11 09:57:04 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a71f067376b08816b41a96d2da04e8d53c29f1037a7fb92cb58a36b95343a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a71f067376b08816b41a96d2da04e8d53c29f1037a7fb92cb58a36b95343a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a71f067376b08816b41a96d2da04e8d53c29f1037a7fb92cb58a36b95343a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a71f067376b08816b41a96d2da04e8d53c29f1037a7fb92cb58a36b95343a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:57:04 compute-0 podman[448347]: 2025-10-11 09:57:04.703903791 +0000 UTC m=+0.195180597 container init 76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 11 09:57:04 compute-0 nova_compute[260935]: 2025-10-11 09:57:04.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:57:04 compute-0 podman[448347]: 2025-10-11 09:57:04.720842229 +0000 UTC m=+0.212119025 container start 76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:57:04 compute-0 podman[448347]: 2025-10-11 09:57:04.724353968 +0000 UTC m=+0.215630815 container attach 76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 11 09:57:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:57:05 compute-0 ceph-mon[74313]: pgmap v3484: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]: {
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:     "0": [
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:         {
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "devices": [
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "/dev/loop3"
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             ],
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "lv_name": "ceph_lv0",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "lv_size": "21470642176",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "name": "ceph_lv0",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "tags": {
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.cluster_name": "ceph",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.crush_device_class": "",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.encrypted": "0",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.osd_id": "0",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.type": "block",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.vdo": "0"
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             },
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "type": "block",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "vg_name": "ceph_vg0"
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:         }
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:     ],
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:     "1": [
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:         {
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "devices": [
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "/dev/loop4"
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             ],
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "lv_name": "ceph_lv1",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "lv_size": "21470642176",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "name": "ceph_lv1",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "tags": {
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.cluster_name": "ceph",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.crush_device_class": "",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.encrypted": "0",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.osd_id": "1",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.type": "block",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.vdo": "0"
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             },
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "type": "block",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "vg_name": "ceph_vg1"
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:         }
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:     ],
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:     "2": [
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:         {
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "devices": [
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "/dev/loop5"
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             ],
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "lv_name": "ceph_lv2",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "lv_size": "21470642176",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "name": "ceph_lv2",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "tags": {
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.cluster_name": "ceph",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.crush_device_class": "",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.encrypted": "0",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.osd_id": "2",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.type": "block",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:                 "ceph.vdo": "0"
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             },
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "type": "block",
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:             "vg_name": "ceph_vg2"
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:         }
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]:     ]
Oct 11 09:57:05 compute-0 nostalgic_pascal[448364]: }
Oct 11 09:57:05 compute-0 systemd[1]: libpod-76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4.scope: Deactivated successfully.
Oct 11 09:57:05 compute-0 podman[448347]: 2025-10-11 09:57:05.4802181 +0000 UTC m=+0.971494886 container died 76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:57:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-49a71f067376b08816b41a96d2da04e8d53c29f1037a7fb92cb58a36b95343a9-merged.mount: Deactivated successfully.
Oct 11 09:57:05 compute-0 podman[448347]: 2025-10-11 09:57:05.571737886 +0000 UTC m=+1.063014652 container remove 76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 11 09:57:05 compute-0 systemd[1]: libpod-conmon-76edd06c690e754d2a52ecc94128078cc614b3414a8cdb1e6bb149124af48bd4.scope: Deactivated successfully.
Oct 11 09:57:05 compute-0 sudo[448241]: pam_unix(sudo:session): session closed for user root
Oct 11 09:57:05 compute-0 sudo[448386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:57:05 compute-0 nova_compute[260935]: 2025-10-11 09:57:05.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:57:05 compute-0 sudo[448386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:57:05 compute-0 sudo[448386]: pam_unix(sudo:session): session closed for user root
Oct 11 09:57:05 compute-0 sudo[448411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:57:05 compute-0 sudo[448411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:57:05 compute-0 sudo[448411]: pam_unix(sudo:session): session closed for user root
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:57:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3485: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:05 compute-0 sudo[448436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:57:05 compute-0 sudo[448436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:57:05 compute-0 sudo[448436]: pam_unix(sudo:session): session closed for user root
Oct 11 09:57:05 compute-0 sudo[448462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:57:05 compute-0 sudo[448462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:57:05 compute-0 podman[448460]: 2025-10-11 09:57:05.994773282 +0000 UTC m=+0.082354299 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 11 09:57:06 compute-0 podman[448547]: 2025-10-11 09:57:06.469001174 +0000 UTC m=+0.070364870 container create be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 11 09:57:06 compute-0 systemd[1]: Started libpod-conmon-be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723.scope.
Oct 11 09:57:06 compute-0 podman[448547]: 2025-10-11 09:57:06.439086318 +0000 UTC m=+0.040450084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:57:06 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:57:06 compute-0 podman[448547]: 2025-10-11 09:57:06.578599481 +0000 UTC m=+0.179963237 container init be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 09:57:06 compute-0 podman[448547]: 2025-10-11 09:57:06.591681031 +0000 UTC m=+0.193044707 container start be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 11 09:57:06 compute-0 podman[448547]: 2025-10-11 09:57:06.595973492 +0000 UTC m=+0.197337248 container attach be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 09:57:06 compute-0 laughing_kowalevski[448564]: 167 167
Oct 11 09:57:06 compute-0 systemd[1]: libpod-be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723.scope: Deactivated successfully.
Oct 11 09:57:06 compute-0 podman[448547]: 2025-10-11 09:57:06.602583639 +0000 UTC m=+0.203947335 container died be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 09:57:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-07e2fbfcfff693d1f1bc62d58298a2a78b6d5a5aaeda7127d07727154e9ac21e-merged.mount: Deactivated successfully.
Oct 11 09:57:06 compute-0 podman[448547]: 2025-10-11 09:57:06.660845125 +0000 UTC m=+0.262208831 container remove be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 09:57:06 compute-0 systemd[1]: libpod-conmon-be5e7434cc81833b609dea47c5c55a91958ebd0c9196f822bc150b20d1bbc723.scope: Deactivated successfully.
Oct 11 09:57:06 compute-0 nova_compute[260935]: 2025-10-11 09:57:06.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:57:06 compute-0 nova_compute[260935]: 2025-10-11 09:57:06.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:57:06 compute-0 podman[448587]: 2025-10-11 09:57:06.926946745 +0000 UTC m=+0.074880477 container create 06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 11 09:57:06 compute-0 podman[448587]: 2025-10-11 09:57:06.897428811 +0000 UTC m=+0.045362593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:57:06 compute-0 systemd[1]: Started libpod-conmon-06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f.scope.
Oct 11 09:57:07 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa9d8b224c2e4eadd8792923eb840a4edf3753066f434cedaee9f26104eac45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa9d8b224c2e4eadd8792923eb840a4edf3753066f434cedaee9f26104eac45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa9d8b224c2e4eadd8792923eb840a4edf3753066f434cedaee9f26104eac45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa9d8b224c2e4eadd8792923eb840a4edf3753066f434cedaee9f26104eac45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:57:07 compute-0 ceph-mon[74313]: pgmap v3485: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:07 compute-0 podman[448587]: 2025-10-11 09:57:07.071454309 +0000 UTC m=+0.219388101 container init 06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 11 09:57:07 compute-0 podman[448587]: 2025-10-11 09:57:07.082466001 +0000 UTC m=+0.230399733 container start 06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 11 09:57:07 compute-0 podman[448587]: 2025-10-11 09:57:07.086559406 +0000 UTC m=+0.234493198 container attach 06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct 11 09:57:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3486: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]: {
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:         "osd_id": 2,
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:         "type": "bluestore"
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:     },
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:         "osd_id": 0,
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:         "type": "bluestore"
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:     },
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:         "osd_id": 1,
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:         "type": "bluestore"
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]:     }
Oct 11 09:57:08 compute-0 blissful_wozniak[448604]: }
Oct 11 09:57:08 compute-0 systemd[1]: libpod-06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f.scope: Deactivated successfully.
Oct 11 09:57:08 compute-0 podman[448587]: 2025-10-11 09:57:08.218518865 +0000 UTC m=+1.366452597 container died 06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:57:08 compute-0 systemd[1]: libpod-06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f.scope: Consumed 1.141s CPU time.
Oct 11 09:57:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-afa9d8b224c2e4eadd8792923eb840a4edf3753066f434cedaee9f26104eac45-merged.mount: Deactivated successfully.
Oct 11 09:57:08 compute-0 podman[448587]: 2025-10-11 09:57:08.298490556 +0000 UTC m=+1.446424268 container remove 06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:57:08 compute-0 systemd[1]: libpod-conmon-06822f48e0b4c0e734d7504e5e1a644c9d4efce7953aaec9cd010b13e699101f.scope: Deactivated successfully.
Oct 11 09:57:08 compute-0 sudo[448462]: pam_unix(sudo:session): session closed for user root
Oct 11 09:57:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:57:08 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:57:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:57:08 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:57:08 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev d3960a72-9b0f-4e7b-bb06-3e6036b536f1 does not exist
Oct 11 09:57:08 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 852a18a7-f8d8-489a-9ee1-b67d45c0f4ad does not exist
Oct 11 09:57:08 compute-0 sudo[448650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:57:08 compute-0 sudo[448650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:57:08 compute-0 sudo[448650]: pam_unix(sudo:session): session closed for user root
Oct 11 09:57:08 compute-0 sudo[448675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:57:08 compute-0 sudo[448675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:57:08 compute-0 sudo[448675]: pam_unix(sudo:session): session closed for user root
Oct 11 09:57:08 compute-0 nova_compute[260935]: 2025-10-11 09:57:08.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:57:08 compute-0 nova_compute[260935]: 2025-10-11 09:57:08.704 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:57:08 compute-0 nova_compute[260935]: 2025-10-11 09:57:08.705 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 09:57:09 compute-0 nova_compute[260935]: 2025-10-11 09:57:09.239 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:57:09 compute-0 nova_compute[260935]: 2025-10-11 09:57:09.239 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:57:09 compute-0 nova_compute[260935]: 2025-10-11 09:57:09.239 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:57:09 compute-0 nova_compute[260935]: 2025-10-11 09:57:09.240 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 09:57:09 compute-0 ceph-mon[74313]: pgmap v3486: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:57:09 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:57:09 compute-0 nova_compute[260935]: 2025-10-11 09:57:09.413 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:57:09 compute-0 nova_compute[260935]: 2025-10-11 09:57:09.638 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:57:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3487: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:57:10 compute-0 nova_compute[260935]: 2025-10-11 09:57:10.181 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:57:10 compute-0 nova_compute[260935]: 2025-10-11 09:57:10.181 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:57:10 compute-0 nova_compute[260935]: 2025-10-11 09:57:10.182 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:57:10 compute-0 nova_compute[260935]: 2025-10-11 09:57:10.183 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:57:10 compute-0 nova_compute[260935]: 2025-10-11 09:57:10.183 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:57:10 compute-0 nova_compute[260935]: 2025-10-11 09:57:10.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:57:10 compute-0 nova_compute[260935]: 2025-10-11 09:57:10.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:57:10 compute-0 nova_compute[260935]: 2025-10-11 09:57:10.736 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:57:10 compute-0 nova_compute[260935]: 2025-10-11 09:57:10.737 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:57:10 compute-0 nova_compute[260935]: 2025-10-11 09:57:10.737 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:57:10 compute-0 nova_compute[260935]: 2025-10-11 09:57:10.738 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:57:10 compute-0 podman[448700]: 2025-10-11 09:57:10.817376501 +0000 UTC m=+0.103152996 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct 11 09:57:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:57:11 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3521329931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.283 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:57:11 compute-0 ceph-mon[74313]: pgmap v3487: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:11 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3521329931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.376 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.377 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.377 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.382 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.382 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.388 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.388 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.592 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.593 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2753MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.594 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.594 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.677 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.678 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.679 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.679 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.679 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:57:11 compute-0 nova_compute[260935]: 2025-10-11 09:57:11.807 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:57:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3488: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:57:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2046483490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:57:12 compute-0 nova_compute[260935]: 2025-10-11 09:57:12.293 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:57:12 compute-0 nova_compute[260935]: 2025-10-11 09:57:12.303 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:57:12 compute-0 nova_compute[260935]: 2025-10-11 09:57:12.328 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:57:12 compute-0 nova_compute[260935]: 2025-10-11 09:57:12.331 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:57:12 compute-0 nova_compute[260935]: 2025-10-11 09:57:12.332 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:57:12 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2046483490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:57:13 compute-0 ceph-mon[74313]: pgmap v3488: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:13 compute-0 podman[448765]: 2025-10-11 09:57:13.798511649 +0000 UTC m=+0.097088305 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0)
Oct 11 09:57:13 compute-0 podman[448766]: 2025-10-11 09:57:13.857568598 +0000 UTC m=+0.147657434 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 09:57:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3489: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:57:15.255 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:57:15.255 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:57:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:57:15.256 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:57:15 compute-0 nova_compute[260935]: 2025-10-11 09:57:15.333 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:57:15 compute-0 ceph-mon[74313]: pgmap v3489: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3490: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:16 compute-0 nova_compute[260935]: 2025-10-11 09:57:16.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:57:16 compute-0 nova_compute[260935]: 2025-10-11 09:57:16.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:57:17 compute-0 ceph-mon[74313]: pgmap v3490: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.405885) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176637405951, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1680, "num_deletes": 251, "total_data_size": 2767455, "memory_usage": 2819056, "flush_reason": "Manual Compaction"}
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176637429518, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 2707750, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70972, "largest_seqno": 72651, "table_properties": {"data_size": 2699934, "index_size": 4758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15833, "raw_average_key_size": 19, "raw_value_size": 2684369, "raw_average_value_size": 3389, "num_data_blocks": 213, "num_entries": 792, "num_filter_entries": 792, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760176455, "oldest_key_time": 1760176455, "file_creation_time": 1760176637, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 23702 microseconds, and 14226 cpu microseconds.
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.429586) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 2707750 bytes OK
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.429615) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.431304) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.431332) EVENT_LOG_v1 {"time_micros": 1760176637431322, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.431360) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2760220, prev total WAL file size 2760220, number of live WAL files 2.
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.432741) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(2644KB)], [170(9884KB)]
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176637432803, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 12829973, "oldest_snapshot_seqno": -1}
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 8855 keys, 11052848 bytes, temperature: kUnknown
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176637504530, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 11052848, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10995923, "index_size": 33708, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22149, "raw_key_size": 232764, "raw_average_key_size": 26, "raw_value_size": 10840056, "raw_average_value_size": 1224, "num_data_blocks": 1303, "num_entries": 8855, "num_filter_entries": 8855, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760176637, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.505019) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 11052848 bytes
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.506583) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.5 rd, 153.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 9.7 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(8.8) write-amplify(4.1) OK, records in: 9369, records dropped: 514 output_compression: NoCompression
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.506613) EVENT_LOG_v1 {"time_micros": 1760176637506600, "job": 106, "event": "compaction_finished", "compaction_time_micros": 71877, "compaction_time_cpu_micros": 50474, "output_level": 6, "num_output_files": 1, "total_output_size": 11052848, "num_input_records": 9369, "num_output_records": 8855, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176637508110, "job": 106, "event": "table_file_deletion", "file_number": 172}
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176637512207, "job": 106, "event": "table_file_deletion", "file_number": 170}
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.432640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.512465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.512476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.512480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.512484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:57:17 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:57:17.512488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:57:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3491: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:18 compute-0 sshd-session[448812]: Invalid user monitor from 43.157.67.116 port 57992
Oct 11 09:57:18 compute-0 sshd-session[448812]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:57:18 compute-0 sshd-session[448812]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.157.67.116
Oct 11 09:57:18 compute-0 nova_compute[260935]: 2025-10-11 09:57:18.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:57:19 compute-0 ceph-mon[74313]: pgmap v3491: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3492: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:57:20 compute-0 sshd-session[448812]: Failed password for invalid user monitor from 43.157.67.116 port 57992 ssh2
Oct 11 09:57:21 compute-0 ceph-mon[74313]: pgmap v3492: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:21 compute-0 nova_compute[260935]: 2025-10-11 09:57:21.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:57:21 compute-0 nova_compute[260935]: 2025-10-11 09:57:21.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:57:21 compute-0 nova_compute[260935]: 2025-10-11 09:57:21.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:57:21 compute-0 nova_compute[260935]: 2025-10-11 09:57:21.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:57:21 compute-0 nova_compute[260935]: 2025-10-11 09:57:21.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:57:21 compute-0 nova_compute[260935]: 2025-10-11 09:57:21.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:57:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3493: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:21 compute-0 sshd-session[448812]: Received disconnect from 43.157.67.116 port 57992:11: Bye Bye [preauth]
Oct 11 09:57:21 compute-0 sshd-session[448812]: Disconnected from invalid user monitor 43.157.67.116 port 57992 [preauth]
Oct 11 09:57:23 compute-0 ceph-mon[74313]: pgmap v3493: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3494: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:57:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:57:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:57:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:57:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:57:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:57:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:57:25 compute-0 ceph-mon[74313]: pgmap v3494: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3495: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:57:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3280523202' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:57:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:57:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3280523202' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:57:26 compute-0 nova_compute[260935]: 2025-10-11 09:57:26.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:57:27 compute-0 ceph-mon[74313]: pgmap v3495: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3280523202' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:57:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/3280523202' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:57:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3496: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:29 compute-0 ceph-mon[74313]: pgmap v3496: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3497: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:57:31 compute-0 ceph-mon[74313]: pgmap v3497: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:31 compute-0 nova_compute[260935]: 2025-10-11 09:57:31.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:57:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3498: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:33 compute-0 ceph-mon[74313]: pgmap v3498: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3499: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:57:35 compute-0 ceph-mon[74313]: pgmap v3499: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3500: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:36 compute-0 podman[448814]: 2025-10-11 09:57:36.822322747 +0000 UTC m=+0.103901988 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 11 09:57:36 compute-0 nova_compute[260935]: 2025-10-11 09:57:36.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:57:36 compute-0 nova_compute[260935]: 2025-10-11 09:57:36.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:57:37 compute-0 ceph-mon[74313]: pgmap v3500: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3501: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:39 compute-0 ceph-mon[74313]: pgmap v3501: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3502: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:57:40 compute-0 nova_compute[260935]: 2025-10-11 09:57:40.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:57:41 compute-0 ceph-mon[74313]: pgmap v3502: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:41 compute-0 podman[448833]: 2025-10-11 09:57:41.795490181 +0000 UTC m=+0.094417369 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 09:57:41 compute-0 nova_compute[260935]: 2025-10-11 09:57:41.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:57:41 compute-0 nova_compute[260935]: 2025-10-11 09:57:41.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:57:41 compute-0 nova_compute[260935]: 2025-10-11 09:57:41.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:57:41 compute-0 nova_compute[260935]: 2025-10-11 09:57:41.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:57:41 compute-0 nova_compute[260935]: 2025-10-11 09:57:41.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:57:41 compute-0 nova_compute[260935]: 2025-10-11 09:57:41.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:57:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3503: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:43 compute-0 ceph-mon[74313]: pgmap v3503: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3504: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:44 compute-0 podman[448851]: 2025-10-11 09:57:44.803082407 +0000 UTC m=+0.099832472 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd)
Oct 11 09:57:44 compute-0 podman[448852]: 2025-10-11 09:57:44.844060455 +0000 UTC m=+0.134831522 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 11 09:57:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:57:45 compute-0 ceph-mon[74313]: pgmap v3504: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3505: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:46 compute-0 nova_compute[260935]: 2025-10-11 09:57:46.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:57:47 compute-0 ceph-mon[74313]: pgmap v3505: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3506: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:49 compute-0 ceph-mon[74313]: pgmap v3506: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3507: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:57:51 compute-0 ceph-mon[74313]: pgmap v3507: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:51 compute-0 nova_compute[260935]: 2025-10-11 09:57:51.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:57:51 compute-0 nova_compute[260935]: 2025-10-11 09:57:51.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:57:51 compute-0 nova_compute[260935]: 2025-10-11 09:57:51.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:57:51 compute-0 nova_compute[260935]: 2025-10-11 09:57:51.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:57:51 compute-0 nova_compute[260935]: 2025-10-11 09:57:51.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:57:51 compute-0 nova_compute[260935]: 2025-10-11 09:57:51.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:57:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3508: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:53 compute-0 ceph-mon[74313]: pgmap v3508: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3509: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:57:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:57:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:57:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:57:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:57:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:57:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:57:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:57:55
Oct 11 09:57:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:57:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:57:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'backups', 'images', 'default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta']
Oct 11 09:57:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:57:55 compute-0 ceph-mon[74313]: pgmap v3509: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:57:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:57:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3510: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:56 compute-0 nova_compute[260935]: 2025-10-11 09:57:56.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:57:56 compute-0 nova_compute[260935]: 2025-10-11 09:57:56.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:57:56 compute-0 nova_compute[260935]: 2025-10-11 09:57:56.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:57:56 compute-0 nova_compute[260935]: 2025-10-11 09:57:56.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:57:56 compute-0 nova_compute[260935]: 2025-10-11 09:57:56.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:57:56 compute-0 nova_compute[260935]: 2025-10-11 09:57:56.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:57:57 compute-0 ceph-mon[74313]: pgmap v3510: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3511: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:59 compute-0 ceph-mon[74313]: pgmap v3511: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3512: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:57:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:58:01 compute-0 ceph-mon[74313]: pgmap v3512: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3513: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:01 compute-0 nova_compute[260935]: 2025-10-11 09:58:01.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:58:03 compute-0 ceph-mon[74313]: pgmap v3513: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3514: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:58:05 compute-0 ceph-mon[74313]: pgmap v3514: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:58:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3515: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:06 compute-0 nova_compute[260935]: 2025-10-11 09:58:06.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:58:06 compute-0 nova_compute[260935]: 2025-10-11 09:58:06.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:58:06 compute-0 nova_compute[260935]: 2025-10-11 09:58:06.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:58:07 compute-0 ceph-mon[74313]: pgmap v3515: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:07 compute-0 podman[448897]: 2025-10-11 09:58:07.788935782 +0000 UTC m=+0.083910572 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 11 09:58:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3516: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:08 compute-0 sudo[448917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:58:08 compute-0 sudo[448917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:08 compute-0 sudo[448917]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:08 compute-0 nova_compute[260935]: 2025-10-11 09:58:08.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:58:08 compute-0 nova_compute[260935]: 2025-10-11 09:58:08.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:58:08 compute-0 sudo[448942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:58:08 compute-0 sudo[448942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:08 compute-0 sudo[448942]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:08 compute-0 sudo[448967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:58:08 compute-0 sudo[448967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:08 compute-0 sudo[448967]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:08 compute-0 sudo[448992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 11 09:58:08 compute-0 sudo[448992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:09 compute-0 ceph-mon[74313]: pgmap v3516: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:09 compute-0 nova_compute[260935]: 2025-10-11 09:58:09.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:58:09 compute-0 nova_compute[260935]: 2025-10-11 09:58:09.705 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:58:09 compute-0 podman[449091]: 2025-10-11 09:58:09.736376927 +0000 UTC m=+0.095611013 container exec ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:58:09 compute-0 podman[449091]: 2025-10-11 09:58:09.863238932 +0000 UTC m=+0.222473018 container exec_died ef4d743dbf6b626090e433b260dff1359de31ba4682290cbdab8727911345729 (image=quay.io/ceph/ceph:v18, name=ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:58:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3517: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:58:10 compute-0 nova_compute[260935]: 2025-10-11 09:58:10.262 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:58:10 compute-0 nova_compute[260935]: 2025-10-11 09:58:10.262 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:58:10 compute-0 nova_compute[260935]: 2025-10-11 09:58:10.263 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:58:10 compute-0 nova_compute[260935]: 2025-10-11 09:58:10.494 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:58:10 compute-0 ceph-mon[74313]: pgmap v3517: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:10 compute-0 sudo[448992]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:58:10 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:58:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:58:10 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:58:10 compute-0 sudo[449252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:58:10 compute-0 sudo[449252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:10 compute-0 sudo[449252]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:11 compute-0 sudo[449277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:58:11 compute-0 sudo[449277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:11 compute-0 sudo[449277]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:11 compute-0 sudo[449302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:58:11 compute-0 sudo[449302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:11 compute-0 sudo[449302]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:11 compute-0 nova_compute[260935]: 2025-10-11 09:58:11.252 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:58:11 compute-0 sudo[449327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:58:11 compute-0 sudo[449327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:11 compute-0 nova_compute[260935]: 2025-10-11 09:58:11.351 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:58:11 compute-0 nova_compute[260935]: 2025-10-11 09:58:11.352 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:58:11 compute-0 nova_compute[260935]: 2025-10-11 09:58:11.353 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:58:11 compute-0 nova_compute[260935]: 2025-10-11 09:58:11.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:58:11 compute-0 nova_compute[260935]: 2025-10-11 09:58:11.733 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:58:11 compute-0 nova_compute[260935]: 2025-10-11 09:58:11.734 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:58:11 compute-0 nova_compute[260935]: 2025-10-11 09:58:11.734 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:58:11 compute-0 nova_compute[260935]: 2025-10-11 09:58:11.735 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:58:11 compute-0 nova_compute[260935]: 2025-10-11 09:58:11.736 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:58:11 compute-0 sudo[449327]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:11 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:58:11 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:58:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3518: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:58:11 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:58:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:58:11 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:58:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:58:11 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:58:11 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev f27b023a-b34f-412d-b7d9-4cdbf9496c0e does not exist
Oct 11 09:58:11 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 5cb2eb0b-42bc-45e3-9b27-a1ed5cd0f2cd does not exist
Oct 11 09:58:11 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev db082280-b58a-4c2c-9600-8994fb5b7418 does not exist
Oct 11 09:58:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:58:11 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:58:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:58:11 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:58:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:58:11 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:58:11 compute-0 nova_compute[260935]: 2025-10-11 09:58:11.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:58:11 compute-0 nova_compute[260935]: 2025-10-11 09:58:11.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:58:11 compute-0 nova_compute[260935]: 2025-10-11 09:58:11.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:58:11 compute-0 nova_compute[260935]: 2025-10-11 09:58:11.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:58:11 compute-0 nova_compute[260935]: 2025-10-11 09:58:11.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:58:11 compute-0 nova_compute[260935]: 2025-10-11 09:58:11.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:58:12 compute-0 sudo[449406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:58:12 compute-0 sudo[449406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:12 compute-0 sudo[449406]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:12 compute-0 podman[449430]: 2025-10-11 09:58:12.140696554 +0000 UTC m=+0.058458053 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.license=GPLv2)
Oct 11 09:58:12 compute-0 sudo[449437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:58:12 compute-0 sudo[449437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:12 compute-0 sudo[449437]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:12 compute-0 sudo[449475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:58:12 compute-0 sudo[449475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:12 compute-0 sudo[449475]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:58:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2794627343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.258 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:58:12 compute-0 sudo[449502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:58:12 compute-0 sudo[449502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.359 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.360 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.360 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.364 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.365 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.369 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.370 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:58:12 compute-0 sshd-session[449352]: Invalid user backup-user from 13.126.15.214 port 54504
Oct 11 09:58:12 compute-0 sshd-session[449352]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:58:12 compute-0 sshd-session[449352]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=13.126.15.214
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.574 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.576 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2789MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.576 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.576 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.701 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.702 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.702 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.702 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.703 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:58:12 compute-0 podman[449568]: 2025-10-11 09:58:12.744485117 +0000 UTC m=+0.049442798 container create 0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leakey, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 11 09:58:12 compute-0 systemd[1]: Started libpod-conmon-0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0.scope.
Oct 11 09:58:12 compute-0 podman[449568]: 2025-10-11 09:58:12.718667768 +0000 UTC m=+0.023625499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:58:12 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:58:12 compute-0 podman[449568]: 2025-10-11 09:58:12.854090695 +0000 UTC m=+0.159048386 container init 0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:58:12 compute-0 podman[449568]: 2025-10-11 09:58:12.867644898 +0000 UTC m=+0.172602579 container start 0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leakey, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 11 09:58:12 compute-0 nova_compute[260935]: 2025-10-11 09:58:12.867 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:58:12 compute-0 podman[449568]: 2025-10-11 09:58:12.871857517 +0000 UTC m=+0.176815208 container attach 0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 09:58:12 compute-0 vigorous_leakey[449585]: 167 167
Oct 11 09:58:12 compute-0 systemd[1]: libpod-0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0.scope: Deactivated successfully.
Oct 11 09:58:12 compute-0 podman[449568]: 2025-10-11 09:58:12.877525257 +0000 UTC m=+0.182482938 container died 0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 11 09:58:12 compute-0 ceph-mon[74313]: pgmap v3518: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:12 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:58:12 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:58:12 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:58:12 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:58:12 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:58:12 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:58:12 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2794627343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:58:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b944457e05b903761e14f4e278578ffcae66232272c726b614a78bd4ead40b30-merged.mount: Deactivated successfully.
Oct 11 09:58:12 compute-0 podman[449568]: 2025-10-11 09:58:12.943343487 +0000 UTC m=+0.248301168 container remove 0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_leakey, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:58:12 compute-0 systemd[1]: libpod-conmon-0f07a3f23d1b9364f8675f909b9456181e43da2d54b82e2b6ea4ddd1d0ed48b0.scope: Deactivated successfully.
Oct 11 09:58:13 compute-0 podman[449630]: 2025-10-11 09:58:13.214071538 +0000 UTC m=+0.073981902 container create d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:58:13 compute-0 systemd[1]: Started libpod-conmon-d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3.scope.
Oct 11 09:58:13 compute-0 podman[449630]: 2025-10-11 09:58:13.187894068 +0000 UTC m=+0.047804542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:58:13 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56d9c6f51f4cf9f16d76a103a11708cb81ccac0cfdd61e688ecd6e12347cf6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56d9c6f51f4cf9f16d76a103a11708cb81ccac0cfdd61e688ecd6e12347cf6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56d9c6f51f4cf9f16d76a103a11708cb81ccac0cfdd61e688ecd6e12347cf6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56d9c6f51f4cf9f16d76a103a11708cb81ccac0cfdd61e688ecd6e12347cf6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56d9c6f51f4cf9f16d76a103a11708cb81ccac0cfdd61e688ecd6e12347cf6f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:58:13 compute-0 podman[449630]: 2025-10-11 09:58:13.334392448 +0000 UTC m=+0.194302872 container init d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 11 09:58:13 compute-0 podman[449630]: 2025-10-11 09:58:13.349422653 +0000 UTC m=+0.209333057 container start d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 11 09:58:13 compute-0 podman[449630]: 2025-10-11 09:58:13.354684482 +0000 UTC m=+0.214594886 container attach d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:58:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:58:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2416026574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:58:13 compute-0 nova_compute[260935]: 2025-10-11 09:58:13.382 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:58:13 compute-0 nova_compute[260935]: 2025-10-11 09:58:13.391 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:58:13 compute-0 nova_compute[260935]: 2025-10-11 09:58:13.415 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:58:13 compute-0 nova_compute[260935]: 2025-10-11 09:58:13.417 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:58:13 compute-0 nova_compute[260935]: 2025-10-11 09:58:13.417 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:58:13 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2416026574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:58:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3519: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:14 compute-0 sshd-session[449352]: Failed password for invalid user backup-user from 13.126.15.214 port 54504 ssh2
Oct 11 09:58:14 compute-0 lucid_jones[449647]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:58:14 compute-0 lucid_jones[449647]: --> relative data size: 1.0
Oct 11 09:58:14 compute-0 lucid_jones[449647]: --> All data devices are unavailable
Oct 11 09:58:14 compute-0 systemd[1]: libpod-d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3.scope: Deactivated successfully.
Oct 11 09:58:14 compute-0 podman[449630]: 2025-10-11 09:58:14.642345312 +0000 UTC m=+1.502255706 container died d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:58:14 compute-0 systemd[1]: libpod-d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3.scope: Consumed 1.222s CPU time.
Oct 11 09:58:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d56d9c6f51f4cf9f16d76a103a11708cb81ccac0cfdd61e688ecd6e12347cf6f-merged.mount: Deactivated successfully.
Oct 11 09:58:14 compute-0 podman[449630]: 2025-10-11 09:58:14.730880224 +0000 UTC m=+1.590790628 container remove d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 09:58:14 compute-0 systemd[1]: libpod-conmon-d4fe924a54a472c799d6f85d3fb236aaed38ac64ca4578273c6d912eef975fb3.scope: Deactivated successfully.
Oct 11 09:58:14 compute-0 sudo[449502]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:14 compute-0 ceph-mon[74313]: pgmap v3519: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:14 compute-0 sudo[449691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:58:14 compute-0 sudo[449691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:58:14 compute-0 sudo[449691]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:14 compute-0 sshd-session[449352]: Received disconnect from 13.126.15.214 port 54504:11: Bye Bye [preauth]
Oct 11 09:58:14 compute-0 sshd-session[449352]: Disconnected from invalid user backup-user 13.126.15.214 port 54504 [preauth]
Oct 11 09:58:15 compute-0 sudo[449728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:58:15 compute-0 sudo[449728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:15 compute-0 sudo[449728]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:15 compute-0 podman[449715]: 2025-10-11 09:58:15.05306554 +0000 UTC m=+0.105675848 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 09:58:15 compute-0 podman[449716]: 2025-10-11 09:58:15.100637584 +0000 UTC m=+0.154493287 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct 11 09:58:15 compute-0 sudo[449780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:58:15 compute-0 sudo[449780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:15 compute-0 sudo[449780]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:15 compute-0 sudo[449810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:58:15 compute-0 sudo[449810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:58:15.256 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:58:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:58:15.257 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:58:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:58:15.257 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:58:15 compute-0 podman[449876]: 2025-10-11 09:58:15.692266883 +0000 UTC m=+0.072000646 container create 81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mestorf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 09:58:15 compute-0 podman[449876]: 2025-10-11 09:58:15.664341264 +0000 UTC m=+0.044075067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:58:15 compute-0 systemd[1]: Started libpod-conmon-81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8.scope.
Oct 11 09:58:15 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:58:15 compute-0 podman[449876]: 2025-10-11 09:58:15.866757634 +0000 UTC m=+0.246491447 container init 81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mestorf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 09:58:15 compute-0 podman[449876]: 2025-10-11 09:58:15.8807554 +0000 UTC m=+0.260489163 container start 81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mestorf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 09:58:15 compute-0 podman[449876]: 2025-10-11 09:58:15.884853596 +0000 UTC m=+0.264587369 container attach 81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 09:58:15 compute-0 amazing_mestorf[449893]: 167 167
Oct 11 09:58:15 compute-0 systemd[1]: libpod-81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8.scope: Deactivated successfully.
Oct 11 09:58:15 compute-0 podman[449876]: 2025-10-11 09:58:15.889985551 +0000 UTC m=+0.269719324 container died 81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:58:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Oct 11 09:58:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3520: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Oct 11 09:58:15 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Oct 11 09:58:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-60f33e46cb80cd284e8563fd6931ed5f2efefa213fe63d8000afeb679abd4519-merged.mount: Deactivated successfully.
Oct 11 09:58:15 compute-0 podman[449876]: 2025-10-11 09:58:15.96001944 +0000 UTC m=+0.339753213 container remove 81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 11 09:58:15 compute-0 systemd[1]: libpod-conmon-81c73bb1a3402aa0dc4a8d78d8122f734e7b911fcabfcbfd7407a3022cdd46d8.scope: Deactivated successfully.
Oct 11 09:58:16 compute-0 podman[449916]: 2025-10-11 09:58:16.23602711 +0000 UTC m=+0.083297615 container create 7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 11 09:58:16 compute-0 podman[449916]: 2025-10-11 09:58:16.203927663 +0000 UTC m=+0.051198178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:58:16 compute-0 systemd[1]: Started libpod-conmon-7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2.scope.
Oct 11 09:58:16 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:58:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac83156b499b1ed41f1e810735524d3ba1ffd3ee56d4a9e660014dff299aafb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:58:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac83156b499b1ed41f1e810735524d3ba1ffd3ee56d4a9e660014dff299aafb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:58:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac83156b499b1ed41f1e810735524d3ba1ffd3ee56d4a9e660014dff299aafb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:58:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac83156b499b1ed41f1e810735524d3ba1ffd3ee56d4a9e660014dff299aafb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:58:16 compute-0 podman[449916]: 2025-10-11 09:58:16.376037827 +0000 UTC m=+0.223308382 container init 7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 09:58:16 compute-0 podman[449916]: 2025-10-11 09:58:16.392009618 +0000 UTC m=+0.239280143 container start 7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:58:16 compute-0 podman[449916]: 2025-10-11 09:58:16.397277317 +0000 UTC m=+0.244547872 container attach 7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 11 09:58:16 compute-0 ceph-mon[74313]: pgmap v3520: 321 pgs: 321 active+clean; 328 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:16 compute-0 ceph-mon[74313]: osdmap e311: 3 total, 3 up, 3 in
Oct 11 09:58:16 compute-0 nova_compute[260935]: 2025-10-11 09:58:16.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]: {
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:     "0": [
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:         {
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "devices": [
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "/dev/loop3"
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             ],
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "lv_name": "ceph_lv0",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "lv_size": "21470642176",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "name": "ceph_lv0",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "tags": {
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.cluster_name": "ceph",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.crush_device_class": "",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.encrypted": "0",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.osd_id": "0",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.type": "block",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.vdo": "0"
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             },
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "type": "block",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "vg_name": "ceph_vg0"
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:         }
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:     ],
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:     "1": [
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:         {
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "devices": [
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "/dev/loop4"
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             ],
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "lv_name": "ceph_lv1",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "lv_size": "21470642176",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "name": "ceph_lv1",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "tags": {
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.cluster_name": "ceph",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.crush_device_class": "",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.encrypted": "0",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.osd_id": "1",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.type": "block",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.vdo": "0"
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             },
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "type": "block",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "vg_name": "ceph_vg1"
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:         }
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:     ],
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:     "2": [
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:         {
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "devices": [
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "/dev/loop5"
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             ],
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "lv_name": "ceph_lv2",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "lv_size": "21470642176",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "name": "ceph_lv2",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "tags": {
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.cluster_name": "ceph",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.crush_device_class": "",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.encrypted": "0",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.osd_id": "2",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.type": "block",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:                 "ceph.vdo": "0"
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             },
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "type": "block",
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:             "vg_name": "ceph_vg2"
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:         }
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]:     ]
Oct 11 09:58:17 compute-0 gallant_heyrovsky[449933]: }
Oct 11 09:58:17 compute-0 systemd[1]: libpod-7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2.scope: Deactivated successfully.
Oct 11 09:58:17 compute-0 podman[449942]: 2025-10-11 09:58:17.244964203 +0000 UTC m=+0.027759355 container died 7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:58:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-aac83156b499b1ed41f1e810735524d3ba1ffd3ee56d4a9e660014dff299aafb-merged.mount: Deactivated successfully.
Oct 11 09:58:17 compute-0 podman[449942]: 2025-10-11 09:58:17.302455368 +0000 UTC m=+0.085250490 container remove 7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:58:17 compute-0 systemd[1]: libpod-conmon-7e82aba17eec9dfa5fc09f168e565d9fdab429d18b38586148dcf126ba90d0a2.scope: Deactivated successfully.
Oct 11 09:58:17 compute-0 sudo[449810]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:17 compute-0 nova_compute[260935]: 2025-10-11 09:58:17.417 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:58:17 compute-0 sudo[449957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:58:17 compute-0 sudo[449957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:17 compute-0 sudo[449957]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:17 compute-0 sudo[449982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:58:17 compute-0 sudo[449982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:17 compute-0 sudo[449982]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:17 compute-0 sudo[450007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:58:17 compute-0 sudo[450007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:17 compute-0 sudo[450007]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:17 compute-0 sudo[450032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:58:17 compute-0 sudo[450032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3522: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 11 09:58:18 compute-0 podman[450098]: 2025-10-11 09:58:18.191082612 +0000 UTC m=+0.073542520 container create 176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:58:18 compute-0 systemd[1]: Started libpod-conmon-176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a.scope.
Oct 11 09:58:18 compute-0 podman[450098]: 2025-10-11 09:58:18.161548657 +0000 UTC m=+0.044008655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:58:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:58:18 compute-0 podman[450098]: 2025-10-11 09:58:18.341476792 +0000 UTC m=+0.223936730 container init 176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:58:18 compute-0 podman[450098]: 2025-10-11 09:58:18.354290084 +0000 UTC m=+0.236750002 container start 176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 11 09:58:18 compute-0 podman[450098]: 2025-10-11 09:58:18.357122434 +0000 UTC m=+0.239582342 container attach 176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:58:18 compute-0 interesting_edison[450114]: 167 167
Oct 11 09:58:18 compute-0 systemd[1]: libpod-176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a.scope: Deactivated successfully.
Oct 11 09:58:18 compute-0 podman[450098]: 2025-10-11 09:58:18.363779452 +0000 UTC m=+0.246239420 container died 176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:58:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9f615b1db19b770ea28e079725710d4aa7b52ee76f317208f6d9129d401ee43-merged.mount: Deactivated successfully.
Oct 11 09:58:18 compute-0 podman[450098]: 2025-10-11 09:58:18.419225769 +0000 UTC m=+0.301685687 container remove 176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:58:18 compute-0 systemd[1]: libpod-conmon-176146b73960fd0442be348b045b4df935b48dae8a14ea16c0dfa87b3b34e96a.scope: Deactivated successfully.
Oct 11 09:58:18 compute-0 podman[450137]: 2025-10-11 09:58:18.605638597 +0000 UTC m=+0.057092664 container create 215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_grothendieck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 11 09:58:18 compute-0 systemd[1]: Started libpod-conmon-215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802.scope.
Oct 11 09:58:18 compute-0 podman[450137]: 2025-10-11 09:58:18.582711519 +0000 UTC m=+0.034165596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:58:18 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:58:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fbd365d6177e6146d6fea6e18a6ea48c97c7e6e36c20a7e3a3da36d10ae8c55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:58:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fbd365d6177e6146d6fea6e18a6ea48c97c7e6e36c20a7e3a3da36d10ae8c55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:58:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fbd365d6177e6146d6fea6e18a6ea48c97c7e6e36c20a7e3a3da36d10ae8c55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:58:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fbd365d6177e6146d6fea6e18a6ea48c97c7e6e36c20a7e3a3da36d10ae8c55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:58:18 compute-0 podman[450137]: 2025-10-11 09:58:18.71507043 +0000 UTC m=+0.166524497 container init 215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 11 09:58:18 compute-0 podman[450137]: 2025-10-11 09:58:18.733058728 +0000 UTC m=+0.184512785 container start 215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 11 09:58:18 compute-0 podman[450137]: 2025-10-11 09:58:18.736421303 +0000 UTC m=+0.187875440 container attach 215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:58:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Oct 11 09:58:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Oct 11 09:58:18 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Oct 11 09:58:18 compute-0 ceph-mon[74313]: pgmap v3522: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 11 09:58:19 compute-0 nova_compute[260935]: 2025-10-11 09:58:19.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]: {
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:         "osd_id": 2,
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:         "type": "bluestore"
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:     },
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:         "osd_id": 0,
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:         "type": "bluestore"
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:     },
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:         "osd_id": 1,
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:         "type": "bluestore"
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]:     }
Oct 11 09:58:19 compute-0 thirsty_grothendieck[450154]: }
Oct 11 09:58:19 compute-0 systemd[1]: libpod-215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802.scope: Deactivated successfully.
Oct 11 09:58:19 compute-0 podman[450137]: 2025-10-11 09:58:19.777924466 +0000 UTC m=+1.229378523 container died 215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 11 09:58:19 compute-0 systemd[1]: libpod-215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802.scope: Consumed 1.038s CPU time.
Oct 11 09:58:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fbd365d6177e6146d6fea6e18a6ea48c97c7e6e36c20a7e3a3da36d10ae8c55-merged.mount: Deactivated successfully.
Oct 11 09:58:19 compute-0 podman[450137]: 2025-10-11 09:58:19.855592251 +0000 UTC m=+1.307046278 container remove 215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:58:19 compute-0 systemd[1]: libpod-conmon-215ddaaef2646b9cabd5a8193ea3b91b47050f003a58d0c8747ed00864587802.scope: Deactivated successfully.
Oct 11 09:58:19 compute-0 sudo[450032]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:58:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3524: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 11 09:58:19 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:58:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:58:19 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:58:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:58:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Oct 11 09:58:19 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 4f70e8eb-2026-4b36-8718-87aa66cea957 does not exist
Oct 11 09:58:19 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ae41963f-5489-46a5-9a9b-b2077457664f does not exist
Oct 11 09:58:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Oct 11 09:58:19 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Oct 11 09:58:19 compute-0 ceph-mon[74313]: osdmap e312: 3 total, 3 up, 3 in
Oct 11 09:58:19 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:58:19 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:58:19 compute-0 ceph-mon[74313]: osdmap e313: 3 total, 3 up, 3 in
Oct 11 09:58:20 compute-0 sudo[450201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:58:20 compute-0 sudo[450201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:20 compute-0 sudo[450201]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:20 compute-0 sudo[450226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:58:20 compute-0 sudo[450226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:58:20 compute-0 sudo[450226]: pam_unix(sudo:session): session closed for user root
Oct 11 09:58:20 compute-0 ceph-mon[74313]: pgmap v3524: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 11 09:58:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3526: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.3 KiB/s wr, 41 op/s
Oct 11 09:58:21 compute-0 nova_compute[260935]: 2025-10-11 09:58:21.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:58:21 compute-0 nova_compute[260935]: 2025-10-11 09:58:21.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:58:22 compute-0 ceph-mon[74313]: pgmap v3526: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.3 KiB/s wr, 41 op/s
Oct 11 09:58:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3527: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.5 KiB/s wr, 63 op/s
Oct 11 09:58:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Oct 11 09:58:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Oct 11 09:58:23 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Oct 11 09:58:24 compute-0 nova_compute[260935]: 2025-10-11 09:58:24.701 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:58:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:58:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:58:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:58:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:58:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:58:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:58:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:58:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Oct 11 09:58:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Oct 11 09:58:24 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Oct 11 09:58:24 compute-0 ceph-mon[74313]: pgmap v3527: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.5 KiB/s wr, 63 op/s
Oct 11 09:58:24 compute-0 ceph-mon[74313]: osdmap e314: 3 total, 3 up, 3 in
Oct 11 09:58:24 compute-0 ceph-mon[74313]: osdmap e315: 3 total, 3 up, 3 in
Oct 11 09:58:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3530: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.3 KiB/s wr, 41 op/s
Oct 11 09:58:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:58:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/671261316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:58:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:58:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/671261316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:58:26 compute-0 ceph-mon[74313]: pgmap v3530: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.3 KiB/s wr, 41 op/s
Oct 11 09:58:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/671261316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:58:26 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/671261316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:58:27 compute-0 nova_compute[260935]: 2025-10-11 09:58:26.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:58:27 compute-0 nova_compute[260935]: 2025-10-11 09:58:27.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:58:27 compute-0 nova_compute[260935]: 2025-10-11 09:58:27.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:58:27 compute-0 nova_compute[260935]: 2025-10-11 09:58:27.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:58:27 compute-0 nova_compute[260935]: 2025-10-11 09:58:27.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:58:27 compute-0 nova_compute[260935]: 2025-10-11 09:58:27.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:58:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3531: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.6 MiB/s wr, 46 op/s
Oct 11 09:58:28 compute-0 ceph-mon[74313]: pgmap v3531: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.6 MiB/s wr, 46 op/s
Oct 11 09:58:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3532: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.6 MiB/s wr, 46 op/s
Oct 11 09:58:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:58:30 compute-0 ceph-mon[74313]: pgmap v3532: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.6 MiB/s wr, 46 op/s
Oct 11 09:58:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3533: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Oct 11 09:58:32 compute-0 nova_compute[260935]: 2025-10-11 09:58:32.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:58:33 compute-0 ceph-mon[74313]: pgmap v3533: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Oct 11 09:58:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3534: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.1 MiB/s wr, 12 op/s
Oct 11 09:58:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:58:35 compute-0 ceph-mon[74313]: pgmap v3534: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.1 MiB/s wr, 12 op/s
Oct 11 09:58:35 compute-0 sshd-session[450251]: Invalid user roman from 43.157.67.116 port 39160
Oct 11 09:58:35 compute-0 sshd-session[450251]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:58:35 compute-0 sshd-session[450251]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.157.67.116
Oct 11 09:58:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3535: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 1.9 MiB/s wr, 10 op/s
Oct 11 09:58:37 compute-0 ceph-mon[74313]: pgmap v3535: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 1.9 MiB/s wr, 10 op/s
Oct 11 09:58:37 compute-0 nova_compute[260935]: 2025-10-11 09:58:37.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:58:37 compute-0 nova_compute[260935]: 2025-10-11 09:58:37.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:58:37 compute-0 nova_compute[260935]: 2025-10-11 09:58:37.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:58:37 compute-0 nova_compute[260935]: 2025-10-11 09:58:37.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:58:37 compute-0 nova_compute[260935]: 2025-10-11 09:58:37.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:58:37 compute-0 nova_compute[260935]: 2025-10-11 09:58:37.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:58:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3536: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.7 MiB/s wr, 9 op/s
Oct 11 09:58:38 compute-0 sshd-session[450251]: Failed password for invalid user roman from 43.157.67.116 port 39160 ssh2
Oct 11 09:58:38 compute-0 podman[450253]: 2025-10-11 09:58:38.815387147 +0000 UTC m=+0.103032482 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 11 09:58:39 compute-0 ceph-mon[74313]: pgmap v3536: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.7 MiB/s wr, 9 op/s
Oct 11 09:58:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3537: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:58:40 compute-0 sshd-session[450251]: Received disconnect from 43.157.67.116 port 39160:11: Bye Bye [preauth]
Oct 11 09:58:40 compute-0 sshd-session[450251]: Disconnected from invalid user roman 43.157.67.116 port 39160 [preauth]
Oct 11 09:58:41 compute-0 ceph-mon[74313]: pgmap v3537: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:41 compute-0 nova_compute[260935]: 2025-10-11 09:58:41.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:58:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3538: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:42 compute-0 nova_compute[260935]: 2025-10-11 09:58:42.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:58:42 compute-0 podman[450273]: 2025-10-11 09:58:42.811652134 +0000 UTC m=+0.110774011 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true)
Oct 11 09:58:43 compute-0 ceph-mon[74313]: pgmap v3538: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3539: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:58:45 compute-0 ceph-mon[74313]: pgmap v3539: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:45 compute-0 podman[450293]: 2025-10-11 09:58:45.788083529 +0000 UTC m=+0.082100461 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Oct 11 09:58:45 compute-0 podman[450294]: 2025-10-11 09:58:45.819560429 +0000 UTC m=+0.110485064 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 09:58:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3540: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:47 compute-0 ceph-mon[74313]: pgmap v3540: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:47 compute-0 nova_compute[260935]: 2025-10-11 09:58:47.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:58:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3541: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:49 compute-0 ceph-mon[74313]: pgmap v3541: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3542: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:58:51 compute-0 ceph-mon[74313]: pgmap v3542: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3543: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:52 compute-0 nova_compute[260935]: 2025-10-11 09:58:52.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:58:52 compute-0 nova_compute[260935]: 2025-10-11 09:58:52.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:58:52 compute-0 nova_compute[260935]: 2025-10-11 09:58:52.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:58:52 compute-0 nova_compute[260935]: 2025-10-11 09:58:52.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:58:52 compute-0 nova_compute[260935]: 2025-10-11 09:58:52.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:58:52 compute-0 nova_compute[260935]: 2025-10-11 09:58:52.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:58:53 compute-0 ceph-mon[74313]: pgmap v3543: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3544: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:58:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:58:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:58:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:58:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:58:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:58:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:58:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:58:55
Oct 11 09:58:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:58:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:58:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['images', 'volumes', 'backups', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'cephfs.cephfs.data']
Oct 11 09:58:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:58:55 compute-0 ceph-mon[74313]: pgmap v3544: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:58:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:58:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3545: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:57 compute-0 ceph-mon[74313]: pgmap v3545: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:57 compute-0 nova_compute[260935]: 2025-10-11 09:58:57.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:58:57 compute-0 nova_compute[260935]: 2025-10-11 09:58:57.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:58:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3546: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:59 compute-0 ceph-mon[74313]: pgmap v3546: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3547: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:58:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:59:01 compute-0 ceph-mon[74313]: pgmap v3547: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3548: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:02 compute-0 nova_compute[260935]: 2025-10-11 09:59:02.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:02 compute-0 nova_compute[260935]: 2025-10-11 09:59:02.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:02 compute-0 nova_compute[260935]: 2025-10-11 09:59:02.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:59:02 compute-0 nova_compute[260935]: 2025-10-11 09:59:02.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:59:02 compute-0 nova_compute[260935]: 2025-10-11 09:59:02.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:59:02 compute-0 nova_compute[260935]: 2025-10-11 09:59:02.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:59:03 compute-0 ceph-mon[74313]: pgmap v3548: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3549: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:59:05 compute-0 ceph-mon[74313]: pgmap v3549: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033308812756397733 of space, bias 1.0, pg target 0.0999264382691932 quantized to 32 (current 32)
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 09:59:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3550: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:07 compute-0 ceph-mon[74313]: pgmap v3550: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:07 compute-0 nova_compute[260935]: 2025-10-11 09:59:07.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:07 compute-0 nova_compute[260935]: 2025-10-11 09:59:07.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3551: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:08 compute-0 nova_compute[260935]: 2025-10-11 09:59:08.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:59:08 compute-0 nova_compute[260935]: 2025-10-11 09:59:08.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:59:09 compute-0 ceph-mon[74313]: pgmap v3551: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:09 compute-0 nova_compute[260935]: 2025-10-11 09:59:09.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:59:09 compute-0 nova_compute[260935]: 2025-10-11 09:59:09.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 09:59:09 compute-0 podman[450338]: 2025-10-11 09:59:09.789246944 +0000 UTC m=+0.076223246 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:59:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:59:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3552: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:10 compute-0 nova_compute[260935]: 2025-10-11 09:59:10.283 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 09:59:10 compute-0 nova_compute[260935]: 2025-10-11 09:59:10.284 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 09:59:10 compute-0 nova_compute[260935]: 2025-10-11 09:59:10.284 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 09:59:10 compute-0 nova_compute[260935]: 2025-10-11 09:59:10.671 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 09:59:11 compute-0 ceph-mon[74313]: pgmap v3552: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:11 compute-0 nova_compute[260935]: 2025-10-11 09:59:11.269 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 09:59:11 compute-0 nova_compute[260935]: 2025-10-11 09:59:11.294 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-52be16b4-343a-4fd4-9041-39069a1fde2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 09:59:11 compute-0 nova_compute[260935]: 2025-10-11 09:59:11.294 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: 52be16b4-343a-4fd4-9041-39069a1fde2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 09:59:11 compute-0 nova_compute[260935]: 2025-10-11 09:59:11.295 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:59:11 compute-0 nova_compute[260935]: 2025-10-11 09:59:11.295 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:59:11 compute-0 nova_compute[260935]: 2025-10-11 09:59:11.296 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 09:59:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3553: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:12 compute-0 nova_compute[260935]: 2025-10-11 09:59:12.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:12 compute-0 nova_compute[260935]: 2025-10-11 09:59:12.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:59:12 compute-0 nova_compute[260935]: 2025-10-11 09:59:12.730 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:59:12 compute-0 nova_compute[260935]: 2025-10-11 09:59:12.730 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:59:12 compute-0 nova_compute[260935]: 2025-10-11 09:59:12.730 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:59:12 compute-0 nova_compute[260935]: 2025-10-11 09:59:12.731 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 09:59:12 compute-0 nova_compute[260935]: 2025-10-11 09:59:12.731 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:59:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:59:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1494995148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.165 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:59:13 compute-0 ceph-mon[74313]: pgmap v3553: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:13 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1494995148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.271 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.272 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.273 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.278 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.278 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.284 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.284 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.482 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.484 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2798MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.484 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.484 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.621 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.621 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.621 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.621 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.622 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 09:59:13 compute-0 nova_compute[260935]: 2025-10-11 09:59:13.696 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 09:59:13 compute-0 podman[450380]: 2025-10-11 09:59:13.794056192 +0000 UTC m=+0.095424068 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 09:59:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3554: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 09:59:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2112852372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:59:14 compute-0 nova_compute[260935]: 2025-10-11 09:59:14.227 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 09:59:14 compute-0 nova_compute[260935]: 2025-10-11 09:59:14.235 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 09:59:14 compute-0 nova_compute[260935]: 2025-10-11 09:59:14.257 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 09:59:14 compute-0 nova_compute[260935]: 2025-10-11 09:59:14.260 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 09:59:14 compute-0 nova_compute[260935]: 2025-10-11 09:59:14.260 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:59:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:59:15 compute-0 ceph-mon[74313]: pgmap v3554: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2112852372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 09:59:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:59:15.258 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 09:59:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:59:15.258 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 09:59:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 09:59:15.259 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 09:59:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3555: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:16 compute-0 podman[450423]: 2025-10-11 09:59:16.773779831 +0000 UTC m=+0.068308412 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 11 09:59:16 compute-0 podman[450424]: 2025-10-11 09:59:16.828050074 +0000 UTC m=+0.107382895 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 11 09:59:17 compute-0 nova_compute[260935]: 2025-10-11 09:59:17.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:17 compute-0 nova_compute[260935]: 2025-10-11 09:59:17.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:17 compute-0 nova_compute[260935]: 2025-10-11 09:59:17.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:59:17 compute-0 nova_compute[260935]: 2025-10-11 09:59:17.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:59:17 compute-0 nova_compute[260935]: 2025-10-11 09:59:17.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:59:17 compute-0 nova_compute[260935]: 2025-10-11 09:59:17.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:59:17 compute-0 nova_compute[260935]: 2025-10-11 09:59:17.263 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:59:17 compute-0 ceph-mon[74313]: pgmap v3555: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3556: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:19 compute-0 ceph-mon[74313]: pgmap v3556: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:59:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3557: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:20 compute-0 sudo[450469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:59:20 compute-0 sudo[450469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:20 compute-0 sudo[450469]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:20 compute-0 sudo[450494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:59:20 compute-0 sudo[450494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:20 compute-0 sudo[450494]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:20 compute-0 sudo[450519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:59:20 compute-0 sudo[450519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:20 compute-0 sudo[450519]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:20 compute-0 sudo[450544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 09:59:20 compute-0 sudo[450544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:21 compute-0 sudo[450544]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:59:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:59:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 09:59:21 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:59:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 09:59:21 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:59:21 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 10d52c31-aa51-4ef7-863c-25122cb66f49 does not exist
Oct 11 09:59:21 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a4acc30b-e38f-435c-aa84-681d7808e14a does not exist
Oct 11 09:59:21 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev ddc13413-9c33-45ef-90a4-5628a069fa3f does not exist
Oct 11 09:59:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 09:59:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:59:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 09:59:21 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:59:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 09:59:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:59:21 compute-0 sudo[450600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:59:21 compute-0 sudo[450600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:21 compute-0 sudo[450600]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:21 compute-0 sudo[450625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:59:21 compute-0 sudo[450625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:21 compute-0 sudo[450625]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:21 compute-0 ceph-mon[74313]: pgmap v3557: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:21 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:59:21 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 09:59:21 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:59:21 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 09:59:21 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 09:59:21 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 09:59:21 compute-0 sudo[450650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:59:21 compute-0 sudo[450650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:21 compute-0 sudo[450650]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:21 compute-0 sudo[450675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 09:59:21 compute-0 sudo[450675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:21 compute-0 nova_compute[260935]: 2025-10-11 09:59:21.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:59:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3558: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:22 compute-0 podman[450742]: 2025-10-11 09:59:22.060364943 +0000 UTC m=+0.064206485 container create 0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_liskov, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:59:22 compute-0 systemd[1]: Started libpod-conmon-0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d.scope.
Oct 11 09:59:22 compute-0 podman[450742]: 2025-10-11 09:59:22.034307637 +0000 UTC m=+0.038149249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:59:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:59:22 compute-0 podman[450742]: 2025-10-11 09:59:22.181238449 +0000 UTC m=+0.185080071 container init 0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct 11 09:59:22 compute-0 podman[450742]: 2025-10-11 09:59:22.194181525 +0000 UTC m=+0.198023087 container start 0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 11 09:59:22 compute-0 podman[450742]: 2025-10-11 09:59:22.198917119 +0000 UTC m=+0.202758721 container attach 0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 11 09:59:22 compute-0 boring_liskov[450759]: 167 167
Oct 11 09:59:22 compute-0 systemd[1]: libpod-0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d.scope: Deactivated successfully.
Oct 11 09:59:22 compute-0 podman[450742]: 2025-10-11 09:59:22.2021267 +0000 UTC m=+0.205968222 container died 0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 09:59:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-afaecf35206d19c044ae2cbffc9537dd40dc8d1944fcb9a42324d2fbb687f3f9-merged.mount: Deactivated successfully.
Oct 11 09:59:22 compute-0 nova_compute[260935]: 2025-10-11 09:59:22.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:22 compute-0 nova_compute[260935]: 2025-10-11 09:59:22.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:22 compute-0 nova_compute[260935]: 2025-10-11 09:59:22.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:59:22 compute-0 nova_compute[260935]: 2025-10-11 09:59:22.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:59:22 compute-0 podman[450742]: 2025-10-11 09:59:22.247844792 +0000 UTC m=+0.251686314 container remove 0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 09:59:22 compute-0 nova_compute[260935]: 2025-10-11 09:59:22.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:59:22 compute-0 nova_compute[260935]: 2025-10-11 09:59:22.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:59:22 compute-0 systemd[1]: libpod-conmon-0f322816bafae446539d2547aedc3d8f660fc4a05c09dbed5fc02ea7910e501d.scope: Deactivated successfully.
Oct 11 09:59:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Oct 11 09:59:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Oct 11 09:59:22 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Oct 11 09:59:22 compute-0 podman[450781]: 2025-10-11 09:59:22.513665644 +0000 UTC m=+0.074241109 container create c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 09:59:22 compute-0 systemd[1]: Started libpod-conmon-c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e.scope.
Oct 11 09:59:22 compute-0 podman[450781]: 2025-10-11 09:59:22.483072449 +0000 UTC m=+0.043648004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:59:22 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3730568f09ac0f80f304ad4c019b99f3559515d100386aaf92a3007752b87c94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3730568f09ac0f80f304ad4c019b99f3559515d100386aaf92a3007752b87c94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3730568f09ac0f80f304ad4c019b99f3559515d100386aaf92a3007752b87c94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3730568f09ac0f80f304ad4c019b99f3559515d100386aaf92a3007752b87c94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:59:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3730568f09ac0f80f304ad4c019b99f3559515d100386aaf92a3007752b87c94/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 09:59:22 compute-0 podman[450781]: 2025-10-11 09:59:22.63665044 +0000 UTC m=+0.197225975 container init c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:59:22 compute-0 podman[450781]: 2025-10-11 09:59:22.64834856 +0000 UTC m=+0.208924035 container start c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 11 09:59:22 compute-0 podman[450781]: 2025-10-11 09:59:22.653736083 +0000 UTC m=+0.214311558 container attach c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 09:59:23 compute-0 ceph-mon[74313]: pgmap v3558: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:23 compute-0 ceph-mon[74313]: osdmap e316: 3 total, 3 up, 3 in
Oct 11 09:59:23 compute-0 modest_keldysh[450797]: --> passed data devices: 0 physical, 3 LVM
Oct 11 09:59:23 compute-0 modest_keldysh[450797]: --> relative data size: 1.0
Oct 11 09:59:23 compute-0 modest_keldysh[450797]: --> All data devices are unavailable
Oct 11 09:59:23 compute-0 systemd[1]: libpod-c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e.scope: Deactivated successfully.
Oct 11 09:59:23 compute-0 systemd[1]: libpod-c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e.scope: Consumed 1.055s CPU time.
Oct 11 09:59:23 compute-0 podman[450827]: 2025-10-11 09:59:23.846343266 +0000 UTC m=+0.037933913 container died c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 11 09:59:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-3730568f09ac0f80f304ad4c019b99f3559515d100386aaf92a3007752b87c94-merged.mount: Deactivated successfully.
Oct 11 09:59:23 compute-0 podman[450827]: 2025-10-11 09:59:23.917687802 +0000 UTC m=+0.109278439 container remove c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:59:23 compute-0 systemd[1]: libpod-conmon-c6e0166c40f5ff9791c3b186635d6ca266b618b4319198ae22a620a032cc7a1e.scope: Deactivated successfully.
Oct 11 09:59:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3560: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 3 op/s
Oct 11 09:59:23 compute-0 sudo[450675]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:24 compute-0 sudo[450842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:59:24 compute-0 sudo[450842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:24 compute-0 sudo[450842]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:24 compute-0 sudo[450867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:59:24 compute-0 sudo[450867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:24 compute-0 sudo[450867]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:24 compute-0 sudo[450892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:59:24 compute-0 sudo[450892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:24 compute-0 sudo[450892]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:24 compute-0 sudo[450917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 09:59:24 compute-0 sudo[450917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:24 compute-0 podman[450983]: 2025-10-11 09:59:24.738456857 +0000 UTC m=+0.053008719 container create abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:59:24 compute-0 systemd[1]: Started libpod-conmon-abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53.scope.
Oct 11 09:59:24 compute-0 podman[450983]: 2025-10-11 09:59:24.71058358 +0000 UTC m=+0.025135452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:59:24 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:59:24 compute-0 podman[450983]: 2025-10-11 09:59:24.857257655 +0000 UTC m=+0.171809537 container init abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 11 09:59:24 compute-0 podman[450983]: 2025-10-11 09:59:24.864695175 +0000 UTC m=+0.179247017 container start abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:59:24 compute-0 podman[450983]: 2025-10-11 09:59:24.868238395 +0000 UTC m=+0.182790267 container attach abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 09:59:24 compute-0 strange_allen[450999]: 167 167
Oct 11 09:59:24 compute-0 systemd[1]: libpod-abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53.scope: Deactivated successfully.
Oct 11 09:59:24 compute-0 conmon[450999]: conmon abb29db944f37377aaeb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53.scope/container/memory.events
Oct 11 09:59:24 compute-0 podman[450983]: 2025-10-11 09:59:24.873580956 +0000 UTC m=+0.188132788 container died abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 11 09:59:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:59:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:59:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:59:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:59:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:59:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:59:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-58fa6a392f9c5c8153333b31cb8a2122579d85edf674470fbc4b5d41df8f2bcc-merged.mount: Deactivated successfully.
Oct 11 09:59:24 compute-0 podman[450983]: 2025-10-11 09:59:24.921152081 +0000 UTC m=+0.235703943 container remove abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 11 09:59:24 compute-0 systemd[1]: libpod-conmon-abb29db944f37377aaeb80801c044eca1371844ed2570e6bc4eb4cdbaf7a4f53.scope: Deactivated successfully.
Oct 11 09:59:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:59:25 compute-0 podman[451024]: 2025-10-11 09:59:25.154429633 +0000 UTC m=+0.049489340 container create 1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 09:59:25 compute-0 systemd[1]: Started libpod-conmon-1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0.scope.
Oct 11 09:59:25 compute-0 podman[451024]: 2025-10-11 09:59:25.13343884 +0000 UTC m=+0.028498537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:59:25 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7fd535a8e2b1ebe46e5ff41e066eb6f827f79afb4709d15c202abaee13a1697/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7fd535a8e2b1ebe46e5ff41e066eb6f827f79afb4709d15c202abaee13a1697/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7fd535a8e2b1ebe46e5ff41e066eb6f827f79afb4709d15c202abaee13a1697/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7fd535a8e2b1ebe46e5ff41e066eb6f827f79afb4709d15c202abaee13a1697/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:59:25 compute-0 podman[451024]: 2025-10-11 09:59:25.258274498 +0000 UTC m=+0.153334195 container init 1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 09:59:25 compute-0 podman[451024]: 2025-10-11 09:59:25.273755375 +0000 UTC m=+0.168815062 container start 1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 09:59:25 compute-0 podman[451024]: 2025-10-11 09:59:25.277265334 +0000 UTC m=+0.172325021 container attach 1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 11 09:59:25 compute-0 ceph-mon[74313]: pgmap v3560: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 3 op/s
Oct 11 09:59:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3561: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 3 op/s
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]: {
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:     "0": [
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:         {
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "devices": [
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "/dev/loop3"
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             ],
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "lv_name": "ceph_lv0",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "lv_size": "21470642176",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "name": "ceph_lv0",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "tags": {
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.cluster_name": "ceph",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.crush_device_class": "",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.encrypted": "0",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.osd_id": "0",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.type": "block",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.vdo": "0"
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             },
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "type": "block",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "vg_name": "ceph_vg0"
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:         }
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:     ],
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:     "1": [
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:         {
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "devices": [
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "/dev/loop4"
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             ],
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "lv_name": "ceph_lv1",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "lv_size": "21470642176",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "name": "ceph_lv1",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "tags": {
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.cluster_name": "ceph",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.crush_device_class": "",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.encrypted": "0",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.osd_id": "1",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.type": "block",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.vdo": "0"
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             },
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "type": "block",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "vg_name": "ceph_vg1"
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:         }
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:     ],
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:     "2": [
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:         {
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "devices": [
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "/dev/loop5"
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             ],
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "lv_name": "ceph_lv2",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "lv_size": "21470642176",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "name": "ceph_lv2",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "tags": {
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.cluster_name": "ceph",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.crush_device_class": "",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.encrypted": "0",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.osd_id": "2",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.type": "block",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:                 "ceph.vdo": "0"
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             },
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "type": "block",
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:             "vg_name": "ceph_vg2"
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:         }
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]:     ]
Oct 11 09:59:26 compute-0 ecstatic_rhodes[451041]: }
Oct 11 09:59:26 compute-0 systemd[1]: libpod-1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0.scope: Deactivated successfully.
Oct 11 09:59:26 compute-0 podman[451024]: 2025-10-11 09:59:26.050475266 +0000 UTC m=+0.945534983 container died 1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:59:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7fd535a8e2b1ebe46e5ff41e066eb6f827f79afb4709d15c202abaee13a1697-merged.mount: Deactivated successfully.
Oct 11 09:59:26 compute-0 podman[451024]: 2025-10-11 09:59:26.121970317 +0000 UTC m=+1.017029994 container remove 1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 09:59:26 compute-0 systemd[1]: libpod-conmon-1df06869fea5fa746ac50342be7030a6f8c0d15c71beea34bb0672ee3b661bf0.scope: Deactivated successfully.
Oct 11 09:59:26 compute-0 sudo[450917]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:26 compute-0 sudo[451062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:59:26 compute-0 sudo[451062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:26 compute-0 sudo[451062]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:26 compute-0 sudo[451087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 09:59:26 compute-0 sudo[451087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:26 compute-0 sudo[451087]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:26 compute-0 sudo[451112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:59:26 compute-0 sudo[451112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:26 compute-0 sudo[451112]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:26 compute-0 sudo[451137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 09:59:26 compute-0 sudo[451137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 09:59:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2581551438' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:59:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 09:59:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2581551438' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:59:26 compute-0 podman[451205]: 2025-10-11 09:59:26.867683741 +0000 UTC m=+0.046620959 container create 499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:59:26 compute-0 systemd[1]: Started libpod-conmon-499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9.scope.
Oct 11 09:59:26 compute-0 podman[451205]: 2025-10-11 09:59:26.847880591 +0000 UTC m=+0.026817839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:59:26 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:59:26 compute-0 podman[451205]: 2025-10-11 09:59:26.968114869 +0000 UTC m=+0.147052097 container init 499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 09:59:26 compute-0 podman[451205]: 2025-10-11 09:59:26.975942051 +0000 UTC m=+0.154879289 container start 499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 11 09:59:26 compute-0 podman[451205]: 2025-10-11 09:59:26.980095458 +0000 UTC m=+0.159032686 container attach 499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:59:26 compute-0 suspicious_dhawan[451222]: 167 167
Oct 11 09:59:26 compute-0 systemd[1]: libpod-499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9.scope: Deactivated successfully.
Oct 11 09:59:26 compute-0 conmon[451222]: conmon 499da56bf67a5650a3bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9.scope/container/memory.events
Oct 11 09:59:26 compute-0 podman[451205]: 2025-10-11 09:59:26.986296173 +0000 UTC m=+0.165233401 container died 499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 09:59:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-18d2c06357c007da8c6dc4793ea4e36b88fea493aab99ab5baba56b8ab98baf2-merged.mount: Deactivated successfully.
Oct 11 09:59:27 compute-0 podman[451205]: 2025-10-11 09:59:27.024196264 +0000 UTC m=+0.203133512 container remove 499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 09:59:27 compute-0 systemd[1]: libpod-conmon-499da56bf67a5650a3bc894b12ff847bcda5a7dea8e7f47cfb4df58ba061f4d9.scope: Deactivated successfully.
Oct 11 09:59:27 compute-0 podman[451245]: 2025-10-11 09:59:27.254464011 +0000 UTC m=+0.065877613 container create 3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 11 09:59:27 compute-0 nova_compute[260935]: 2025-10-11 09:59:27.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:59:27 compute-0 nova_compute[260935]: 2025-10-11 09:59:27.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:27 compute-0 systemd[1]: Started libpod-conmon-3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362.scope.
Oct 11 09:59:27 compute-0 podman[451245]: 2025-10-11 09:59:27.226962164 +0000 UTC m=+0.038375816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 09:59:27 compute-0 systemd[1]: Started libcrun container.
Oct 11 09:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11fde3f592c424e035d54ac98446d60cc0f4644b141ed5f2cf5e72b0e7feed2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 09:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11fde3f592c424e035d54ac98446d60cc0f4644b141ed5f2cf5e72b0e7feed2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 09:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11fde3f592c424e035d54ac98446d60cc0f4644b141ed5f2cf5e72b0e7feed2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 09:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11fde3f592c424e035d54ac98446d60cc0f4644b141ed5f2cf5e72b0e7feed2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 09:59:27 compute-0 podman[451245]: 2025-10-11 09:59:27.363386139 +0000 UTC m=+0.174799771 container init 3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct 11 09:59:27 compute-0 podman[451245]: 2025-10-11 09:59:27.378530437 +0000 UTC m=+0.189943989 container start 3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_darwin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 11 09:59:27 compute-0 podman[451245]: 2025-10-11 09:59:27.382255752 +0000 UTC m=+0.193669404 container attach 3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 11 09:59:27 compute-0 ceph-mon[74313]: pgmap v3561: 321 pgs: 321 active+clean; 307 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 3 op/s
Oct 11 09:59:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2581551438' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 09:59:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/2581551438' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 09:59:27 compute-0 nova_compute[260935]: 2025-10-11 09:59:27.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:59:27 compute-0 nova_compute[260935]: 2025-10-11 09:59:27.705 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 11 09:59:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3562: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 11 09:59:28 compute-0 busy_darwin[451262]: {
Oct 11 09:59:28 compute-0 busy_darwin[451262]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 09:59:28 compute-0 busy_darwin[451262]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:59:28 compute-0 busy_darwin[451262]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 09:59:28 compute-0 busy_darwin[451262]:         "osd_id": 2,
Oct 11 09:59:28 compute-0 busy_darwin[451262]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 09:59:28 compute-0 busy_darwin[451262]:         "type": "bluestore"
Oct 11 09:59:28 compute-0 busy_darwin[451262]:     },
Oct 11 09:59:28 compute-0 busy_darwin[451262]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 09:59:28 compute-0 busy_darwin[451262]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:59:28 compute-0 busy_darwin[451262]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 09:59:28 compute-0 busy_darwin[451262]:         "osd_id": 0,
Oct 11 09:59:28 compute-0 busy_darwin[451262]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 09:59:28 compute-0 busy_darwin[451262]:         "type": "bluestore"
Oct 11 09:59:28 compute-0 busy_darwin[451262]:     },
Oct 11 09:59:28 compute-0 busy_darwin[451262]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 09:59:28 compute-0 busy_darwin[451262]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 09:59:28 compute-0 busy_darwin[451262]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 09:59:28 compute-0 busy_darwin[451262]:         "osd_id": 1,
Oct 11 09:59:28 compute-0 busy_darwin[451262]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 09:59:28 compute-0 busy_darwin[451262]:         "type": "bluestore"
Oct 11 09:59:28 compute-0 busy_darwin[451262]:     }
Oct 11 09:59:28 compute-0 busy_darwin[451262]: }
Oct 11 09:59:28 compute-0 systemd[1]: libpod-3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362.scope: Deactivated successfully.
Oct 11 09:59:28 compute-0 podman[451245]: 2025-10-11 09:59:28.346230215 +0000 UTC m=+1.157643777 container died 3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_darwin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 11 09:59:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c11fde3f592c424e035d54ac98446d60cc0f4644b141ed5f2cf5e72b0e7feed2-merged.mount: Deactivated successfully.
Oct 11 09:59:28 compute-0 podman[451245]: 2025-10-11 09:59:28.400982802 +0000 UTC m=+1.212396354 container remove 3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_darwin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct 11 09:59:28 compute-0 systemd[1]: libpod-conmon-3d97cefd74355056e3b4c9d2117fa40a3fc608f02f0af56ae9d2fc3931e25362.scope: Deactivated successfully.
Oct 11 09:59:28 compute-0 sudo[451137]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 09:59:28 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:59:28 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 09:59:28 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:59:28 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 43102d20-3b13-4811-a52b-f033bd2cb073 does not exist
Oct 11 09:59:28 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev a4f476f5-e61a-4900-bc94-16e2262c1b07 does not exist
Oct 11 09:59:28 compute-0 sudo[451310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 09:59:28 compute-0 sudo[451310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:28 compute-0 sudo[451310]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:28 compute-0 sudo[451335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 09:59:28 compute-0 sudo[451335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 09:59:28 compute-0 sudo[451335]: pam_unix(sudo:session): session closed for user root
Oct 11 09:59:29 compute-0 ceph-mon[74313]: pgmap v3562: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 11 09:59:29 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:59:29 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 09:59:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:59:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Oct 11 09:59:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Oct 11 09:59:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3563: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 11 09:59:29 compute-0 ceph-mon[74313]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Oct 11 09:59:30 compute-0 ceph-mon[74313]: pgmap v3563: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 11 09:59:30 compute-0 ceph-mon[74313]: osdmap e317: 3 total, 3 up, 3 in
Oct 11 09:59:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3565: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.5 KiB/s wr, 23 op/s
Oct 11 09:59:32 compute-0 nova_compute[260935]: 2025-10-11 09:59:32.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:32 compute-0 nova_compute[260935]: 2025-10-11 09:59:32.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:59:32 compute-0 nova_compute[260935]: 2025-10-11 09:59:32.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5047 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:59:32 compute-0 nova_compute[260935]: 2025-10-11 09:59:32.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:59:32 compute-0 nova_compute[260935]: 2025-10-11 09:59:32.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:59:32 compute-0 ceph-mon[74313]: pgmap v3565: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.5 KiB/s wr, 23 op/s
Oct 11 09:59:33 compute-0 nova_compute[260935]: 2025-10-11 09:59:33.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:59:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3566: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Oct 11 09:59:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:59:34 compute-0 ceph-mon[74313]: pgmap v3566: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Oct 11 09:59:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3567: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Oct 11 09:59:36 compute-0 ceph-mon[74313]: pgmap v3567: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Oct 11 09:59:37 compute-0 nova_compute[260935]: 2025-10-11 09:59:37.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:59:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3568: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:39 compute-0 ceph-mon[74313]: pgmap v3568: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:59:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3569: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:40 compute-0 sshd-session[451360]: Invalid user es_user from 13.126.15.214 port 35300
Oct 11 09:59:40 compute-0 sshd-session[451360]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:59:40 compute-0 sshd-session[451360]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=13.126.15.214
Oct 11 09:59:40 compute-0 podman[451362]: 2025-10-11 09:59:40.760150069 +0000 UTC m=+0.098610528 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 11 09:59:41 compute-0 ceph-mon[74313]: pgmap v3569: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3570: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:42 compute-0 nova_compute[260935]: 2025-10-11 09:59:42.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:42 compute-0 nova_compute[260935]: 2025-10-11 09:59:42.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:59:42 compute-0 nova_compute[260935]: 2025-10-11 09:59:42.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:59:42 compute-0 nova_compute[260935]: 2025-10-11 09:59:42.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:59:42 compute-0 nova_compute[260935]: 2025-10-11 09:59:42.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:59:42 compute-0 nova_compute[260935]: 2025-10-11 09:59:42.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:59:42 compute-0 nova_compute[260935]: 2025-10-11 09:59:42.762 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:59:43 compute-0 ceph-mon[74313]: pgmap v3570: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:43 compute-0 sshd-session[451360]: Failed password for invalid user es_user from 13.126.15.214 port 35300 ssh2
Oct 11 09:59:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3571: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:44 compute-0 sshd-session[451360]: Received disconnect from 13.126.15.214 port 35300:11: Bye Bye [preauth]
Oct 11 09:59:44 compute-0 sshd-session[451360]: Disconnected from invalid user es_user 13.126.15.214 port 35300 [preauth]
Oct 11 09:59:44 compute-0 podman[451381]: 2025-10-11 09:59:44.802140277 +0000 UTC m=+0.093477992 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid)
Oct 11 09:59:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:59:45 compute-0 ceph-mon[74313]: pgmap v3571: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3572: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:47 compute-0 ceph-mon[74313]: pgmap v3572: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:47 compute-0 nova_compute[260935]: 2025-10-11 09:59:47.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:47 compute-0 nova_compute[260935]: 2025-10-11 09:59:47.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:47 compute-0 nova_compute[260935]: 2025-10-11 09:59:47.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:59:47 compute-0 nova_compute[260935]: 2025-10-11 09:59:47.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:59:47 compute-0 nova_compute[260935]: 2025-10-11 09:59:47.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 09:59:47 compute-0 nova_compute[260935]: 2025-10-11 09:59:47.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:59:47 compute-0 nova_compute[260935]: 2025-10-11 09:59:47.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 09:59:47 compute-0 nova_compute[260935]: 2025-10-11 09:59:47.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 11 09:59:47 compute-0 nova_compute[260935]: 2025-10-11 09:59:47.728 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 11 09:59:47 compute-0 podman[451401]: 2025-10-11 09:59:47.801187413 +0000 UTC m=+0.104753082 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 11 09:59:47 compute-0 podman[451402]: 2025-10-11 09:59:47.848416867 +0000 UTC m=+0.132511155 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 11 09:59:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3573: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:49 compute-0 ceph-mon[74313]: pgmap v3573: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:49 compute-0 sshd-session[451446]: Invalid user reza from 43.157.67.116 port 43420
Oct 11 09:59:49 compute-0 sshd-session[451446]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 09:59:49 compute-0 sshd-session[451446]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.157.67.116
Oct 11 09:59:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:59:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3574: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:51 compute-0 ceph-mon[74313]: pgmap v3574: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:51 compute-0 sshd-session[451446]: Failed password for invalid user reza from 43.157.67.116 port 43420 ssh2
Oct 11 09:59:51 compute-0 sshd-session[451446]: Received disconnect from 43.157.67.116 port 43420:11: Bye Bye [preauth]
Oct 11 09:59:51 compute-0 sshd-session[451446]: Disconnected from invalid user reza 43.157.67.116 port 43420 [preauth]
Oct 11 09:59:51 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3575: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:52 compute-0 nova_compute[260935]: 2025-10-11 09:59:52.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:52 compute-0 nova_compute[260935]: 2025-10-11 09:59:52.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:52 compute-0 nova_compute[260935]: 2025-10-11 09:59:52.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5052 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 09:59:52 compute-0 nova_compute[260935]: 2025-10-11 09:59:52.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:59:52 compute-0 nova_compute[260935]: 2025-10-11 09:59:52.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 09:59:52 compute-0 nova_compute[260935]: 2025-10-11 09:59:52.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:53 compute-0 ceph-mon[74313]: pgmap v3575: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:53 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3576: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:59:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:59:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:59:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:59:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 09:59:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 09:59:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:54.968251) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176794968355, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1551, "num_deletes": 255, "total_data_size": 2489588, "memory_usage": 2536064, "flush_reason": "Manual Compaction"}
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176794984727, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 1483622, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72652, "largest_seqno": 74202, "table_properties": {"data_size": 1478055, "index_size": 2770, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14226, "raw_average_key_size": 21, "raw_value_size": 1465909, "raw_average_value_size": 2171, "num_data_blocks": 126, "num_entries": 675, "num_filter_entries": 675, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760176638, "oldest_key_time": 1760176638, "file_creation_time": 1760176794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 16681 microseconds, and 8703 cpu microseconds.
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:54.984942) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 1483622 bytes OK
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:54.984968) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:54.987042) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:54.987065) EVENT_LOG_v1 {"time_micros": 1760176794987058, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:54.987090) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 2482784, prev total WAL file size 2482784, number of live WAL files 2.
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:54.988445) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303034' seq:72057594037927935, type:22 .. '6D6772737461740033323535' seq:0, type:0; will stop at (end)
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(1448KB)], [173(10MB)]
Oct 11 09:59:54 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176794988498, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 12536470, "oldest_snapshot_seqno": -1}
Oct 11 09:59:55 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 9080 keys, 10195503 bytes, temperature: kUnknown
Oct 11 09:59:55 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176795056154, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 10195503, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10139409, "index_size": 32301, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 237620, "raw_average_key_size": 26, "raw_value_size": 9981919, "raw_average_value_size": 1099, "num_data_blocks": 1251, "num_entries": 9080, "num_filter_entries": 9080, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760176794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Oct 11 09:59:55 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 09:59:55 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:55.056477) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10195503 bytes
Oct 11 09:59:55 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:55.057997) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.1 rd, 150.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.5 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(15.3) write-amplify(6.9) OK, records in: 9530, records dropped: 450 output_compression: NoCompression
Oct 11 09:59:55 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:55.058055) EVENT_LOG_v1 {"time_micros": 1760176795058041, "job": 108, "event": "compaction_finished", "compaction_time_micros": 67746, "compaction_time_cpu_micros": 49445, "output_level": 6, "num_output_files": 1, "total_output_size": 10195503, "num_input_records": 9530, "num_output_records": 9080, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 09:59:55 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:59:55 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176795058746, "job": 108, "event": "table_file_deletion", "file_number": 175}
Oct 11 09:59:55 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 09:59:55 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176795062604, "job": 108, "event": "table_file_deletion", "file_number": 173}
Oct 11 09:59:55 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:54.988310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:59:55 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:55.062664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:59:55 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:55.062671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:59:55 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:55.062674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:59:55 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:55.062677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:59:55 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-09:59:55.062680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 09:59:55 compute-0 ceph-mon[74313]: pgmap v3576: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_09:59:55
Oct 11 09:59:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 09:59:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 09:59:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.mgr', 'default.rgw.log', 'vms', 'default.rgw.control']
Oct 11 09:59:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 09:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 09:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 09:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 09:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 09:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 09:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:59:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 09:59:55 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3577: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:57 compute-0 ceph-mon[74313]: pgmap v3577: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:57 compute-0 nova_compute[260935]: 2025-10-11 09:59:57.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 09:59:57 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3578: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:59 compute-0 ceph-mon[74313]: pgmap v3578: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 09:59:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 09:59:59 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3579: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:01 compute-0 ceph-mon[74313]: pgmap v3579: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:01 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3580: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:02 compute-0 nova_compute[260935]: 2025-10-11 10:00:02.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:00:03 compute-0 ceph-mon[74313]: pgmap v3580: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:03 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3581: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 10:00:05 compute-0 ceph-mon[74313]: pgmap v3581: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 10:00:05 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3582: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:07 compute-0 ceph-mon[74313]: pgmap v3582: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:07 compute-0 nova_compute[260935]: 2025-10-11 10:00:07.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 10:00:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 10:00:07 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 16K writes, 74K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
                                           Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1333 writes, 6041 keys, 1333 commit groups, 1.0 writes per commit group, ingest: 8.75 MB, 0.01 MB/s
                                           Interval WAL: 1333 writes, 1333 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     75.9      1.19              0.38        54    0.022       0      0       0.0       0.0
                                             L6      1/0    9.72 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0    157.0    133.1      3.38              1.81        53    0.064    362K    28K       0.0       0.0
                                            Sum      1/0    9.72 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0    116.0    118.2      4.57              2.18       107    0.043    362K    28K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.6    142.2    139.9      0.40              0.25        10    0.040     46K   2463       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    157.0    133.1      3.38              1.81        53    0.064    362K    28K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     76.2      1.19              0.38        53    0.022       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.088, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.53 GB write, 0.08 MB/s write, 0.52 GB read, 0.08 MB/s read, 4.6 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.06 GB read, 0.09 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558f0ab3b1f0#2 capacity: 304.00 MB usage: 59.69 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.00057 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3875,57.12 MB,18.791%) FilterBlock(108,1015.61 KB,0.326252%) IndexBlock(108,1.58 MB,0.518533%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 11 10:00:07 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3583: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:09 compute-0 ceph-mon[74313]: pgmap v3583: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:09 compute-0 nova_compute[260935]: 2025-10-11 10:00:09.728 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:00:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 10:00:09 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3584: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:10 compute-0 nova_compute[260935]: 2025-10-11 10:00:10.698 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:00:10 compute-0 nova_compute[260935]: 2025-10-11 10:00:10.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:00:10 compute-0 nova_compute[260935]: 2025-10-11 10:00:10.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 10:00:10 compute-0 nova_compute[260935]: 2025-10-11 10:00:10.703 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 11 10:00:11 compute-0 ceph-mon[74313]: pgmap v3584: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:11 compute-0 nova_compute[260935]: 2025-10-11 10:00:11.301 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 10:00:11 compute-0 nova_compute[260935]: 2025-10-11 10:00:11.303 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 10:00:11 compute-0 nova_compute[260935]: 2025-10-11 10:00:11.303 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 10:00:11 compute-0 nova_compute[260935]: 2025-10-11 10:00:11.304 2 DEBUG nova.objects.instance [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c176845c-89c0-4038-ba22-4ee79bd3ebfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 11 10:00:11 compute-0 nova_compute[260935]: 2025-10-11 10:00:11.534 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 10:00:11 compute-0 podman[451448]: 2025-10-11 10:00:11.778070912 +0000 UTC m=+0.069911426 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent)
Oct 11 10:00:11 compute-0 nova_compute[260935]: 2025-10-11 10:00:11.851 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 10:00:11 compute-0 nova_compute[260935]: 2025-10-11 10:00:11.876 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-c176845c-89c0-4038-ba22-4ee79bd3ebfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 10:00:11 compute-0 nova_compute[260935]: 2025-10-11 10:00:11.876 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: c176845c-89c0-4038-ba22-4ee79bd3ebfe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 10:00:11 compute-0 nova_compute[260935]: 2025-10-11 10:00:11.877 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:00:11 compute-0 nova_compute[260935]: 2025-10-11 10:00:11.877 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:00:11 compute-0 nova_compute[260935]: 2025-10-11 10:00:11.877 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 10:00:11 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3585: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:12 compute-0 nova_compute[260935]: 2025-10-11 10:00:12.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:00:12 compute-0 nova_compute[260935]: 2025-10-11 10:00:12.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 10:00:12 compute-0 nova_compute[260935]: 2025-10-11 10:00:12.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 10:00:12 compute-0 nova_compute[260935]: 2025-10-11 10:00:12.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:00:12 compute-0 nova_compute[260935]: 2025-10-11 10:00:12.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:00:12 compute-0 nova_compute[260935]: 2025-10-11 10:00:12.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 10:00:13 compute-0 ceph-mon[74313]: pgmap v3585: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:13 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3586: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:14 compute-0 nova_compute[260935]: 2025-10-11 10:00:14.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:00:14 compute-0 nova_compute[260935]: 2025-10-11 10:00:14.734 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 10:00:14 compute-0 nova_compute[260935]: 2025-10-11 10:00:14.735 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 10:00:14 compute-0 nova_compute[260935]: 2025-10-11 10:00:14.735 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 10:00:14 compute-0 nova_compute[260935]: 2025-10-11 10:00:14.736 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 10:00:14 compute-0 nova_compute[260935]: 2025-10-11 10:00:14.736 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 10:00:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 10:00:15 compute-0 ceph-mon[74313]: pgmap v3586: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 10:00:15.259 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 10:00:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 10:00:15.260 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 10:00:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 10:00:15.260 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 10:00:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 10:00:15 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2439239632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.304 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.427 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.427 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.428 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.433 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.434 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.439 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.440 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.680 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.681 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2828MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.682 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.682 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 10:00:15 compute-0 podman[451489]: 2025-10-11 10:00:15.80196687 +0000 UTC m=+0.094516712 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.863 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.864 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.864 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.865 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.865 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.891 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing inventories for resource provider ead2f521-4d5d-46d9-864c-1aac19134114 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.920 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating ProviderTree inventory for provider ead2f521-4d5d-46d9-864c-1aac19134114 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.921 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Updating inventory in ProviderTree for provider ead2f521-4d5d-46d9-864c-1aac19134114 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.945 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing aggregate associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 11 10:00:15 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3587: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:15 compute-0 nova_compute[260935]: 2025-10-11 10:00:15.988 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Refreshing trait associations for resource provider ead2f521-4d5d-46d9-864c-1aac19134114, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_USB,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 11 10:00:16 compute-0 nova_compute[260935]: 2025-10-11 10:00:16.104 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 10:00:16 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2439239632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 10:00:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 10:00:16 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3759781213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 10:00:16 compute-0 nova_compute[260935]: 2025-10-11 10:00:16.544 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 10:00:16 compute-0 nova_compute[260935]: 2025-10-11 10:00:16.550 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 10:00:16 compute-0 nova_compute[260935]: 2025-10-11 10:00:16.570 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 10:00:16 compute-0 nova_compute[260935]: 2025-10-11 10:00:16.574 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 10:00:16 compute-0 nova_compute[260935]: 2025-10-11 10:00:16.574 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 10:00:17 compute-0 ceph-mon[74313]: pgmap v3587: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:17 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3759781213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 10:00:17 compute-0 nova_compute[260935]: 2025-10-11 10:00:17.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 10:00:17 compute-0 nova_compute[260935]: 2025-10-11 10:00:17.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:00:17 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3588: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:18 compute-0 podman[451532]: 2025-10-11 10:00:18.852393167 +0000 UTC m=+0.143462046 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 11 10:00:18 compute-0 podman[451533]: 2025-10-11 10:00:18.877253549 +0000 UTC m=+0.165118677 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 10:00:19 compute-0 ceph-mon[74313]: pgmap v3588: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:19 compute-0 nova_compute[260935]: 2025-10-11 10:00:19.575 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:00:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 10:00:19 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3589: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:21 compute-0 ceph-mon[74313]: pgmap v3589: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:21 compute-0 nova_compute[260935]: 2025-10-11 10:00:21.704 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:00:21 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3590: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:22 compute-0 nova_compute[260935]: 2025-10-11 10:00:22.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4995-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:00:22 compute-0 nova_compute[260935]: 2025-10-11 10:00:22.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:00:22 compute-0 nova_compute[260935]: 2025-10-11 10:00:22.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 10:00:22 compute-0 nova_compute[260935]: 2025-10-11 10:00:22.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:00:22 compute-0 nova_compute[260935]: 2025-10-11 10:00:22.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 10:00:22 compute-0 nova_compute[260935]: 2025-10-11 10:00:22.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:00:23 compute-0 ceph-mon[74313]: pgmap v3590: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:23 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3591: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 10:00:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 10:00:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 10:00:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 10:00:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 10:00:24 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 10:00:24 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 10:00:25 compute-0 ceph-mon[74313]: pgmap v3591: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:25 compute-0 nova_compute[260935]: 2025-10-11 10:00:25.699 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:00:25 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3592: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 11 10:00:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/572454341' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 10:00:26 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 11 10:00:26 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/572454341' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 10:00:27 compute-0 ceph-mon[74313]: pgmap v3592: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/572454341' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 11 10:00:27 compute-0 ceph-mon[74313]: from='client.? 192.168.122.10:0/572454341' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 11 10:00:27 compute-0 nova_compute[260935]: 2025-10-11 10:00:27.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:00:27 compute-0 nova_compute[260935]: 2025-10-11 10:00:27.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:00:27 compute-0 nova_compute[260935]: 2025-10-11 10:00:27.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 10:00:27 compute-0 nova_compute[260935]: 2025-10-11 10:00:27.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:00:27 compute-0 nova_compute[260935]: 2025-10-11 10:00:27.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 10:00:27 compute-0 nova_compute[260935]: 2025-10-11 10:00:27.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:00:27 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3593: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:28 compute-0 sudo[451577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 10:00:28 compute-0 sudo[451577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:28 compute-0 sudo[451577]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:28 compute-0 sudo[451602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 10:00:28 compute-0 sudo[451602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:28 compute-0 sudo[451602]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:28 compute-0 sudo[451627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 10:00:28 compute-0 sudo[451627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:28 compute-0 sudo[451627]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:29 compute-0 sudo[451652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 11 10:00:29 compute-0 sudo[451652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:29 compute-0 ceph-mon[74313]: pgmap v3593: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:29 compute-0 sudo[451652]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 10:00:29 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 10:00:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 11 10:00:29 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 10:00:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 11 10:00:29 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 10:00:29 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev b374de8c-6e31-484b-91cb-7241b045d8d7 does not exist
Oct 11 10:00:29 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 4780e8d8-6cf0-4acc-88a1-2c162708eb8a does not exist
Oct 11 10:00:29 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev cf456b66-d9c8-4603-9d7c-def0b56b2318 does not exist
Oct 11 10:00:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 11 10:00:29 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 10:00:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 11 10:00:29 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 10:00:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 10:00:29 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 10:00:29 compute-0 sudo[451710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 10:00:29 compute-0 sudo[451710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:29 compute-0 sudo[451710]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:29 compute-0 sudo[451735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 10:00:29 compute-0 sudo[451735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:29 compute-0 sudo[451735]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:29 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 10:00:29 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3594: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:30 compute-0 sudo[451760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 10:00:30 compute-0 sudo[451760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:30 compute-0 sudo[451760]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:30 compute-0 sudo[451785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 11 10:00:30 compute-0 sudo[451785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 10:00:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 11 10:00:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 10:00:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 11 10:00:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 11 10:00:30 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 10:00:30 compute-0 podman[451853]: 2025-10-11 10:00:30.567241698 +0000 UTC m=+0.065670927 container create 533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 10:00:30 compute-0 podman[451853]: 2025-10-11 10:00:30.537203139 +0000 UTC m=+0.035632388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 10:00:30 compute-0 systemd[1]: Started libpod-conmon-533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca.scope.
Oct 11 10:00:30 compute-0 systemd[1]: Started libcrun container.
Oct 11 10:00:30 compute-0 podman[451853]: 2025-10-11 10:00:30.690481531 +0000 UTC m=+0.188910820 container init 533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 10:00:30 compute-0 podman[451853]: 2025-10-11 10:00:30.70176544 +0000 UTC m=+0.200194679 container start 533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_volhard, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 11 10:00:30 compute-0 podman[451853]: 2025-10-11 10:00:30.706860424 +0000 UTC m=+0.205289653 container attach 533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 10:00:30 compute-0 systemd[1]: libpod-533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca.scope: Deactivated successfully.
Oct 11 10:00:30 compute-0 kind_volhard[451870]: 167 167
Oct 11 10:00:30 compute-0 conmon[451870]: conmon 533ac78e3ccf1abd62e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca.scope/container/memory.events
Oct 11 10:00:30 compute-0 podman[451853]: 2025-10-11 10:00:30.712523684 +0000 UTC m=+0.210952913 container died 533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 10:00:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-07e8904a61dd89ec574ad0412c1ab326d73ff16052c212a8ee89baefb8c9c76b-merged.mount: Deactivated successfully.
Oct 11 10:00:30 compute-0 podman[451853]: 2025-10-11 10:00:30.772168889 +0000 UTC m=+0.270598128 container remove 533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 11 10:00:30 compute-0 systemd[1]: libpod-conmon-533ac78e3ccf1abd62e86d1fa94d811799351ebf44ef18909a5758c85f9d45ca.scope: Deactivated successfully.
Oct 11 10:00:31 compute-0 podman[451894]: 2025-10-11 10:00:31.049443375 +0000 UTC m=+0.077028928 container create b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 11 10:00:31 compute-0 systemd[1]: Started libpod-conmon-b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617.scope.
Oct 11 10:00:31 compute-0 podman[451894]: 2025-10-11 10:00:31.016561986 +0000 UTC m=+0.044147609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 10:00:31 compute-0 systemd[1]: Started libcrun container.
Oct 11 10:00:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92663d9eb13351009195367f5b238b3a2cf420cc6d0c1adaf001ea3629b9881/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 10:00:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92663d9eb13351009195367f5b238b3a2cf420cc6d0c1adaf001ea3629b9881/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 10:00:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92663d9eb13351009195367f5b238b3a2cf420cc6d0c1adaf001ea3629b9881/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 10:00:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92663d9eb13351009195367f5b238b3a2cf420cc6d0c1adaf001ea3629b9881/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 10:00:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92663d9eb13351009195367f5b238b3a2cf420cc6d0c1adaf001ea3629b9881/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 11 10:00:31 compute-0 podman[451894]: 2025-10-11 10:00:31.153555498 +0000 UTC m=+0.181141091 container init b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 10:00:31 compute-0 podman[451894]: 2025-10-11 10:00:31.170173827 +0000 UTC m=+0.197759340 container start b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_antonelli, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 11 10:00:31 compute-0 podman[451894]: 2025-10-11 10:00:31.173744618 +0000 UTC m=+0.201330181 container attach b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_antonelli, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 10:00:31 compute-0 ceph-mon[74313]: pgmap v3594: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:31 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3595: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:32 compute-0 compassionate_antonelli[451911]: --> passed data devices: 0 physical, 3 LVM
Oct 11 10:00:32 compute-0 compassionate_antonelli[451911]: --> relative data size: 1.0
Oct 11 10:00:32 compute-0 compassionate_antonelli[451911]: --> All data devices are unavailable
Oct 11 10:00:32 compute-0 systemd[1]: libpod-b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617.scope: Deactivated successfully.
Oct 11 10:00:32 compute-0 systemd[1]: libpod-b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617.scope: Consumed 1.215s CPU time.
Oct 11 10:00:32 compute-0 podman[451894]: 2025-10-11 10:00:32.480628091 +0000 UTC m=+1.508213674 container died b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_antonelli, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 11 10:00:32 compute-0 nova_compute[260935]: 2025-10-11 10:00:32.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:00:32 compute-0 nova_compute[260935]: 2025-10-11 10:00:32.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:00:32 compute-0 nova_compute[260935]: 2025-10-11 10:00:32.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5007 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 10:00:32 compute-0 nova_compute[260935]: 2025-10-11 10:00:32.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:00:32 compute-0 nova_compute[260935]: 2025-10-11 10:00:32.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 10:00:32 compute-0 nova_compute[260935]: 2025-10-11 10:00:32.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:00:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b92663d9eb13351009195367f5b238b3a2cf420cc6d0c1adaf001ea3629b9881-merged.mount: Deactivated successfully.
Oct 11 10:00:32 compute-0 podman[451894]: 2025-10-11 10:00:32.612889828 +0000 UTC m=+1.640475381 container remove b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 10:00:32 compute-0 systemd[1]: libpod-conmon-b7222408acec4b2cf1787acb9f58ceaf234da51b5bd473e6e2259936ea72e617.scope: Deactivated successfully.
Oct 11 10:00:32 compute-0 sudo[451785]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:32 compute-0 sudo[451954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 10:00:32 compute-0 sudo[451954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:32 compute-0 sudo[451954]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:32 compute-0 sudo[451979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 10:00:32 compute-0 sudo[451979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:32 compute-0 sudo[451979]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:32 compute-0 sudo[452004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 10:00:32 compute-0 sudo[452004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:32 compute-0 sudo[452004]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:33 compute-0 sudo[452029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- lvm list --format json
Oct 11 10:00:33 compute-0 sudo[452029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:33 compute-0 ceph-mon[74313]: pgmap v3595: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:33 compute-0 podman[452096]: 2025-10-11 10:00:33.476364031 +0000 UTC m=+0.044733225 container create 0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct 11 10:00:33 compute-0 systemd[1]: Started libpod-conmon-0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398.scope.
Oct 11 10:00:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 10:00:33 compute-0 podman[452096]: 2025-10-11 10:00:33.460193364 +0000 UTC m=+0.028562578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 10:00:33 compute-0 podman[452096]: 2025-10-11 10:00:33.564288736 +0000 UTC m=+0.132658030 container init 0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 11 10:00:33 compute-0 podman[452096]: 2025-10-11 10:00:33.572691813 +0000 UTC m=+0.141060997 container start 0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct 11 10:00:33 compute-0 podman[452096]: 2025-10-11 10:00:33.576183502 +0000 UTC m=+0.144552736 container attach 0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 10:00:33 compute-0 kind_vaughan[452112]: 167 167
Oct 11 10:00:33 compute-0 systemd[1]: libpod-0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398.scope: Deactivated successfully.
Oct 11 10:00:33 compute-0 podman[452096]: 2025-10-11 10:00:33.581560964 +0000 UTC m=+0.149930208 container died 0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 11 10:00:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3259699e5d84f4b4f8f9dcf79f4f179c0a72e328e418a4cafd108fafdf36161-merged.mount: Deactivated successfully.
Oct 11 10:00:33 compute-0 podman[452096]: 2025-10-11 10:00:33.628317965 +0000 UTC m=+0.196687159 container remove 0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 10:00:33 compute-0 systemd[1]: libpod-conmon-0047748de172bcb352c8e2a9be8c50976a429c7873c56993b558452c7d876398.scope: Deactivated successfully.
Oct 11 10:00:33 compute-0 podman[452138]: 2025-10-11 10:00:33.908671598 +0000 UTC m=+0.080491545 container create 97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 11 10:00:33 compute-0 systemd[1]: Started libpod-conmon-97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04.scope.
Oct 11 10:00:33 compute-0 podman[452138]: 2025-10-11 10:00:33.886620935 +0000 UTC m=+0.058440932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 10:00:33 compute-0 systemd[1]: Started libcrun container.
Oct 11 10:00:33 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3596: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3866d1751874a357b5fb6f05591920b09b484968d1fda8890a9bf2b40aeba0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 10:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3866d1751874a357b5fb6f05591920b09b484968d1fda8890a9bf2b40aeba0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 10:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3866d1751874a357b5fb6f05591920b09b484968d1fda8890a9bf2b40aeba0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 10:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3866d1751874a357b5fb6f05591920b09b484968d1fda8890a9bf2b40aeba0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 10:00:34 compute-0 podman[452138]: 2025-10-11 10:00:34.01168826 +0000 UTC m=+0.183508247 container init 97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 11 10:00:34 compute-0 podman[452138]: 2025-10-11 10:00:34.021505747 +0000 UTC m=+0.193325724 container start 97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 10:00:34 compute-0 podman[452138]: 2025-10-11 10:00:34.025947593 +0000 UTC m=+0.197767630 container attach 97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]: {
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:     "0": [
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:         {
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "devices": [
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "/dev/loop3"
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             ],
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "lv_name": "ceph_lv0",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "lv_size": "21470642176",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bd31cfe9-266a-4479-a7f9-659fff14b3e1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "lv_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "name": "ceph_lv0",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "tags": {
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.block_uuid": "18uQNQ-bPne-9ZK0-p18M-dKY0-4gzC-KYgIOw",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.cluster_name": "ceph",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.crush_device_class": "",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.encrypted": "0",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.osd_fsid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.osd_id": "0",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.type": "block",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.vdo": "0"
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             },
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "type": "block",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "vg_name": "ceph_vg0"
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:         }
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:     ],
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:     "1": [
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:         {
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "devices": [
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "/dev/loop4"
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             ],
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "lv_name": "ceph_lv1",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "lv_size": "21470642176",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c6e730c8-7075-45e5-bf72-061b73088f5b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "lv_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "name": "ceph_lv1",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "tags": {
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.block_uuid": "v3uM0B-A0d9-EBah-km4e-qdJQ-ADDq-JRTHj1",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.cluster_name": "ceph",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.crush_device_class": "",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.encrypted": "0",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.osd_fsid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.osd_id": "1",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.type": "block",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.vdo": "0"
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             },
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "type": "block",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "vg_name": "ceph_vg1"
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:         }
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:     ],
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:     "2": [
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:         {
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "devices": [
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "/dev/loop5"
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             ],
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "lv_name": "ceph_lv2",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "lv_size": "21470642176",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=33219f8b-dc38-5a8f-a577-8ccc4b37190a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9a8d87fe-940e-4960-94fb-405e22e6e9ee,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "lv_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "name": "ceph_lv2",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "tags": {
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.block_uuid": "aqdSt5-prs7-RfbX-cHNW-lb9U-bv4l-31Rqzo",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.cephx_lockbox_secret": "",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.cluster_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.cluster_name": "ceph",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.crush_device_class": "",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.encrypted": "0",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.osd_fsid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.osd_id": "2",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.type": "block",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:                 "ceph.vdo": "0"
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             },
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "type": "block",
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:             "vg_name": "ceph_vg2"
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:         }
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]:     ]
Oct 11 10:00:34 compute-0 charming_ptolemy[452154]: }
Oct 11 10:00:34 compute-0 systemd[1]: libpod-97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04.scope: Deactivated successfully.
Oct 11 10:00:34 compute-0 podman[452138]: 2025-10-11 10:00:34.899909172 +0000 UTC m=+1.071729149 container died 97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 11 10:00:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3866d1751874a357b5fb6f05591920b09b484968d1fda8890a9bf2b40aeba0a-merged.mount: Deactivated successfully.
Oct 11 10:00:34 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 10:00:34 compute-0 podman[452138]: 2025-10-11 10:00:34.975402365 +0000 UTC m=+1.147222322 container remove 97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 10:00:34 compute-0 systemd[1]: libpod-conmon-97f0ec6e3435f2358a3d46ffeb39940f3d7ca2d9b649512bc9de50d88189bc04.scope: Deactivated successfully.
Oct 11 10:00:35 compute-0 sudo[452029]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:35 compute-0 sudo[452176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 10:00:35 compute-0 sudo[452176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:35 compute-0 sudo[452176]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:35 compute-0 sudo[452201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 11 10:00:35 compute-0 sudo[452201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:35 compute-0 sudo[452201]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:35 compute-0 ceph-mon[74313]: pgmap v3596: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:35 compute-0 sudo[452226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 10:00:35 compute-0 sudo[452226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:35 compute-0 sudo[452226]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:35 compute-0 sudo[452251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/33219f8b-dc38-5a8f-a577-8ccc4b37190a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 33219f8b-dc38-5a8f-a577-8ccc4b37190a -- raw list --format json
Oct 11 10:00:35 compute-0 sudo[452251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:35 compute-0 podman[452317]: 2025-10-11 10:00:35.816278048 +0000 UTC m=+0.053538764 container create 9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 11 10:00:35 compute-0 systemd[1]: Started libpod-conmon-9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8.scope.
Oct 11 10:00:35 compute-0 podman[452317]: 2025-10-11 10:00:35.79265476 +0000 UTC m=+0.029915546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 10:00:35 compute-0 systemd[1]: Started libcrun container.
Oct 11 10:00:35 compute-0 podman[452317]: 2025-10-11 10:00:35.931751501 +0000 UTC m=+0.169012267 container init 9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 10:00:35 compute-0 podman[452317]: 2025-10-11 10:00:35.944588864 +0000 UTC m=+0.181849590 container start 9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 10:00:35 compute-0 podman[452317]: 2025-10-11 10:00:35.949957986 +0000 UTC m=+0.187218732 container attach 9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 11 10:00:35 compute-0 keen_galileo[452334]: 167 167
Oct 11 10:00:35 compute-0 systemd[1]: libpod-9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8.scope: Deactivated successfully.
Oct 11 10:00:35 compute-0 podman[452317]: 2025-10-11 10:00:35.953668031 +0000 UTC m=+0.190928757 container died 9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 10:00:35 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3597: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-414e65d311db1f6e4229d2ba50d1b74184f343bf2aa850e5f5fa3f894dc3bcbc-merged.mount: Deactivated successfully.
Oct 11 10:00:36 compute-0 podman[452317]: 2025-10-11 10:00:36.010109096 +0000 UTC m=+0.247369822 container remove 9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 11 10:00:36 compute-0 systemd[1]: libpod-conmon-9dcf7099a1d2a8ff55b3bf08aad414d826d56486695943c9dae9667d2858d1f8.scope: Deactivated successfully.
Oct 11 10:00:36 compute-0 podman[452358]: 2025-10-11 10:00:36.259345129 +0000 UTC m=+0.063603718 container create 961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 11 10:00:36 compute-0 systemd[1]: Started libpod-conmon-961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb.scope.
Oct 11 10:00:36 compute-0 podman[452358]: 2025-10-11 10:00:36.233715085 +0000 UTC m=+0.037973734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 11 10:00:36 compute-0 systemd[1]: Started libcrun container.
Oct 11 10:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4da4efee25d6261c82c039d02792d711c1eda2b670f9e6434bc1e86d96071f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 11 10:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4da4efee25d6261c82c039d02792d711c1eda2b670f9e6434bc1e86d96071f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 11 10:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4da4efee25d6261c82c039d02792d711c1eda2b670f9e6434bc1e86d96071f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 11 10:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4da4efee25d6261c82c039d02792d711c1eda2b670f9e6434bc1e86d96071f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 11 10:00:36 compute-0 podman[452358]: 2025-10-11 10:00:36.368358 +0000 UTC m=+0.172616639 container init 961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 11 10:00:36 compute-0 podman[452358]: 2025-10-11 10:00:36.391143684 +0000 UTC m=+0.195402283 container start 961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 11 10:00:36 compute-0 podman[452358]: 2025-10-11 10:00:36.395368203 +0000 UTC m=+0.199626842 container attach 961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 11 10:00:37 compute-0 ceph-mon[74313]: pgmap v3597: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]: {
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:     "9a8d87fe-940e-4960-94fb-405e22e6e9ee": {
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:         "osd_id": 2,
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:         "osd_uuid": "9a8d87fe-940e-4960-94fb-405e22e6e9ee",
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:         "type": "bluestore"
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:     },
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:     "bd31cfe9-266a-4479-a7f9-659fff14b3e1": {
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:         "osd_id": 0,
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:         "osd_uuid": "bd31cfe9-266a-4479-a7f9-659fff14b3e1",
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:         "type": "bluestore"
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:     },
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:     "c6e730c8-7075-45e5-bf72-061b73088f5b": {
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:         "ceph_fsid": "33219f8b-dc38-5a8f-a577-8ccc4b37190a",
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:         "osd_id": 1,
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:         "osd_uuid": "c6e730c8-7075-45e5-bf72-061b73088f5b",
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:         "type": "bluestore"
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]:     }
Oct 11 10:00:37 compute-0 hardcore_rubin[452374]: }
Oct 11 10:00:37 compute-0 systemd[1]: libpod-961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb.scope: Deactivated successfully.
Oct 11 10:00:37 compute-0 podman[452358]: 2025-10-11 10:00:37.44086717 +0000 UTC m=+1.245125739 container died 961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 11 10:00:37 compute-0 systemd[1]: libpod-961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb.scope: Consumed 1.058s CPU time.
Oct 11 10:00:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4da4efee25d6261c82c039d02792d711c1eda2b670f9e6434bc1e86d96071f5-merged.mount: Deactivated successfully.
Oct 11 10:00:37 compute-0 podman[452358]: 2025-10-11 10:00:37.489408982 +0000 UTC m=+1.293667551 container remove 961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 11 10:00:37 compute-0 systemd[1]: libpod-conmon-961e4bb01e1d9b967252031839fa27291b3f6c7320fd2ac2877cfd1423cc18eb.scope: Deactivated successfully.
Oct 11 10:00:37 compute-0 sudo[452251]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 11 10:00:37 compute-0 nova_compute[260935]: 2025-10-11 10:00:37.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:00:37 compute-0 nova_compute[260935]: 2025-10-11 10:00:37.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:00:37 compute-0 nova_compute[260935]: 2025-10-11 10:00:37.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 10:00:37 compute-0 nova_compute[260935]: 2025-10-11 10:00:37.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:00:37 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 10:00:37 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 11 10:00:37 compute-0 nova_compute[260935]: 2025-10-11 10:00:37.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 10:00:37 compute-0 nova_compute[260935]: 2025-10-11 10:00:37.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:00:37 compute-0 ceph-mon[74313]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 10:00:37 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev cd3a34a6-e558-4f51-a62a-998ef42d2266 does not exist
Oct 11 10:00:37 compute-0 ceph-mgr[74605]: [progress WARNING root] complete: ev 5ff6a686-128a-4a7a-8a26-67a92610d644 does not exist
Oct 11 10:00:37 compute-0 sudo[452420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 11 10:00:37 compute-0 sudo[452420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:37 compute-0 sudo[452420]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:37 compute-0 sudo[452445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 11 10:00:37 compute-0 sudo[452445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 11 10:00:37 compute-0 sudo[452445]: pam_unix(sudo:session): session closed for user root
Oct 11 10:00:37 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3598: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:38 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 10:00:38 compute-0 ceph-mon[74313]: from='mgr.14132 192.168.122.100:0/3729028394' entity='mgr.compute-0.hcsgrm' 
Oct 11 10:00:39 compute-0 ceph-mon[74313]: pgmap v3598: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:39 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 10:00:39 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3599: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:41 compute-0 ceph-mon[74313]: pgmap v3599: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:41 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3600: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:42 compute-0 nova_compute[260935]: 2025-10-11 10:00:42.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 10:00:42 compute-0 podman[452470]: 2025-10-11 10:00:42.816363364 +0000 UTC m=+0.092942918 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 11 10:00:43 compute-0 ceph-mon[74313]: pgmap v3600: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:43 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3601: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:44 compute-0 nova_compute[260935]: 2025-10-11 10:00:44.703 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:00:44 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 10:00:45 compute-0 sshd-session[452489]: Accepted publickey for zuul from 192.168.122.10 port 44008 ssh2: ECDSA SHA256:KKxgUhG08XzjYMLOyvbR+tXItyOnGoLl6Fn32NV5afE
Oct 11 10:00:45 compute-0 systemd-logind[819]: New session 57 of user zuul.
Oct 11 10:00:45 compute-0 systemd[1]: Started Session 57 of User zuul.
Oct 11 10:00:45 compute-0 sshd-session[452489]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 11 10:00:45 compute-0 ceph-mon[74313]: pgmap v3601: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:45 compute-0 sudo[452493]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 11 10:00:45 compute-0 sudo[452493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 11 10:00:45 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3602: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:46 compute-0 podman[452527]: 2025-10-11 10:00:46.176236486 +0000 UTC m=+0.096908560 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 10:00:47 compute-0 nova_compute[260935]: 2025-10-11 10:00:47.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:00:47 compute-0 nova_compute[260935]: 2025-10-11 10:00:47.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:00:47 compute-0 nova_compute[260935]: 2025-10-11 10:00:47.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 10:00:47 compute-0 nova_compute[260935]: 2025-10-11 10:00:47.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:00:47 compute-0 nova_compute[260935]: 2025-10-11 10:00:47.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 10:00:47 compute-0 nova_compute[260935]: 2025-10-11 10:00:47.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:00:47 compute-0 ceph-mon[74313]: pgmap v3602: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:47 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3603: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:49 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22893 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:00:49 compute-0 ceph-mon[74313]: pgmap v3603: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:49 compute-0 podman[452695]: 2025-10-11 10:00:49.811590033 +0000 UTC m=+0.102052675 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 10:00:49 compute-0 podman[452696]: 2025-10-11 10:00:49.846452678 +0000 UTC m=+0.134473321 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 11 10:00:49 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 10:00:49 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3604: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:50 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22895 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:00:50 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 11 10:00:50 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1069736701' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 10:00:50 compute-0 ceph-mon[74313]: from='client.22893 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:00:50 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1069736701' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 10:00:51 compute-0 ceph-mon[74313]: pgmap v3604: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:51 compute-0 ceph-mon[74313]: from='client.22895 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:00:52 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3605: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:52 compute-0 nova_compute[260935]: 2025-10-11 10:00:52.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:00:52 compute-0 nova_compute[260935]: 2025-10-11 10:00:52.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:00:52 compute-0 nova_compute[260935]: 2025-10-11 10:00:52.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 10:00:52 compute-0 nova_compute[260935]: 2025-10-11 10:00:52.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:00:52 compute-0 nova_compute[260935]: 2025-10-11 10:00:52.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 10:00:52 compute-0 nova_compute[260935]: 2025-10-11 10:00:52.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:00:53 compute-0 ceph-mon[74313]: pgmap v3605: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:54 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3606: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:54 compute-0 ovs-vsctl[452846]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 11 10:00:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 10:00:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 10:00:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 10:00:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 10:00:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] scanning for idle connections..
Oct 11 10:00:54 compute-0 ceph-mgr[74605]: [volumes INFO mgr_util] cleaning up connections: []
Oct 11 10:00:54 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 10:00:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Optimize plan auto_2025-10-11_10:00:55
Oct 11 10:00:55 compute-0 ceph-mgr[74605]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 11 10:00:55 compute-0 ceph-mgr[74605]: [balancer INFO root] do_upmap
Oct 11 10:00:55 compute-0 ceph-mgr[74605]: [balancer INFO root] pools ['.rgw.root', 'backups', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta']
Oct 11 10:00:55 compute-0 ceph-mgr[74605]: [balancer INFO root] prepared 0/10 changes
Oct 11 10:00:55 compute-0 ceph-mon[74313]: pgmap v3606: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 11 10:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 10:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 11 10:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 11 10:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 10:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 10:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 11 10:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 10:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 11 10:00:55 compute-0 ceph-mgr[74605]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 11 10:00:55 compute-0 virtqemud[260524]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 11 10:00:56 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3607: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:56 compute-0 virtqemud[260524]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 11 10:00:56 compute-0 virtqemud[260524]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 11 10:00:56 compute-0 ceph-mon[74313]: pgmap v3607: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:56 compute-0 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: cache status {prefix=cache status} (starting...)
Oct 11 10:00:56 compute-0 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: client ls {prefix=client ls} (starting...)
Oct 11 10:00:57 compute-0 lvm[453199]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 11 10:00:57 compute-0 lvm[453199]: VG ceph_vg2 finished
Oct 11 10:00:57 compute-0 lvm[453207]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 11 10:00:57 compute-0 lvm[453207]: VG ceph_vg0 finished
Oct 11 10:00:57 compute-0 lvm[453242]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 11 10:00:57 compute-0 lvm[453242]: VG ceph_vg1 finished
Oct 11 10:00:57 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22899 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:00:57 compute-0 kernel: block loop5: the capability attribute has been deprecated.
Oct 11 10:00:57 compute-0 nova_compute[260935]: 2025-10-11 10:00:57.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 10:00:57 compute-0 nova_compute[260935]: 2025-10-11 10:00:57.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:00:57 compute-0 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: damage ls {prefix=damage ls} (starting...)
Oct 11 10:00:57 compute-0 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: dump loads {prefix=dump loads} (starting...)
Oct 11 10:00:57 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22901 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:00:57 compute-0 ceph-mon[74313]: from='client.22899 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:00:57 compute-0 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 11 10:00:58 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3608: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:58 compute-0 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 11 10:00:58 compute-0 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 11 10:00:58 compute-0 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 11 10:00:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 11 10:00:58 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/667685264' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 11 10:00:58 compute-0 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 11 10:00:58 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22907 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:00:58 compute-0 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T10:00:58.668+0000 7f7f1b2f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 10:00:58 compute-0 ceph-mgr[74605]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 10:00:58 compute-0 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 11 10:00:58 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 11 10:00:58 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3999565990' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 10:00:58 compute-0 ceph-mon[74313]: from='client.22901 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:00:58 compute-0 ceph-mon[74313]: pgmap v3608: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:00:58 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/667685264' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 11 10:00:58 compute-0 ceph-mon[74313]: from='client.22907 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:00:58 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3999565990' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 11 10:00:58 compute-0 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: ops {prefix=ops} (starting...)
Oct 11 10:00:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 11 10:00:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/650596535' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 11 10:00:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 11 10:00:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/141713683' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 11 10:00:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 11 10:00:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3915773846' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 10:00:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 11 10:00:59 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2054933797' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 11 10:00:59 compute-0 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: session ls {prefix=session ls} (starting...)
Oct 11 10:00:59 compute-0 ceph-mds[100846]: mds.cephfs.compute-0.aywjqo asok_command: status {prefix=status} (starting...)
Oct 11 10:00:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/650596535' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 11 10:00:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/141713683' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 11 10:00:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3915773846' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 10:00:59 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2054933797' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:00:59.894648) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176859894682, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 794, "num_deletes": 251, "total_data_size": 977136, "memory_usage": 991304, "flush_reason": "Manual Compaction"}
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176859904077, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 967455, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74203, "largest_seqno": 74996, "table_properties": {"data_size": 963399, "index_size": 1771, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9231, "raw_average_key_size": 19, "raw_value_size": 955216, "raw_average_value_size": 2032, "num_data_blocks": 79, "num_entries": 470, "num_filter_entries": 470, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760176795, "oldest_key_time": 1760176795, "file_creation_time": 1760176859, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 9456 microseconds, and 3010 cpu microseconds.
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:00:59.904105) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 967455 bytes OK
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:00:59.904118) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:00:59.906059) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:00:59.906069) EVENT_LOG_v1 {"time_micros": 1760176859906066, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:00:59.906081) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 973127, prev total WAL file size 973127, number of live WAL files 2.
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:00:59.906509) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(944KB)], [176(9956KB)]
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176859906598, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 11162958, "oldest_snapshot_seqno": -1}
Oct 11 10:00:59 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 9036 keys, 9406681 bytes, temperature: kUnknown
Oct 11 10:00:59 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176859983546, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 9406681, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9351760, "index_size": 31256, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22597, "raw_key_size": 237424, "raw_average_key_size": 26, "raw_value_size": 9195696, "raw_average_value_size": 1017, "num_data_blocks": 1199, "num_entries": 9036, "num_filter_entries": 9036, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760170204, "oldest_key_time": 0, "file_creation_time": 1760176859, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "25d4b2da-549a-4320-988a-8e4ed36a341b", "db_session_id": "HBQ1LYMNE0LLZGR7AHC6", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Oct 11 10:01:00 compute-0 ceph-mon[74313]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 11 10:01:00 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:01:00.000021) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 9406681 bytes
Oct 11 10:01:00 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:01:00.001980) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.8 rd, 122.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.7 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(21.3) write-amplify(9.7) OK, records in: 9550, records dropped: 514 output_compression: NoCompression
Oct 11 10:01:00 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:01:00.002017) EVENT_LOG_v1 {"time_micros": 1760176860002001, "job": 110, "event": "compaction_finished", "compaction_time_micros": 77101, "compaction_time_cpu_micros": 44227, "output_level": 6, "num_output_files": 1, "total_output_size": 9406681, "num_input_records": 9550, "num_output_records": 9036, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 11 10:01:00 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 10:01:00 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176860002729, "job": 110, "event": "table_file_deletion", "file_number": 178}
Oct 11 10:01:00 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3609: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:00 compute-0 ceph-mon[74313]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 11 10:01:00 compute-0 ceph-mon[74313]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760176860007315, "job": 110, "event": "table_file_deletion", "file_number": 176}
Oct 11 10:01:00 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:00:59.906408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 10:01:00 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:01:00.007459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 10:01:00 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:01:00.007465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 10:01:00 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:01:00.007468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 10:01:00 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:01:00.007470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 10:01:00 compute-0 ceph-mon[74313]: rocksdb: (Original Log Time 2025/10/11-10:01:00.007472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 11 10:01:00 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22921 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 11 10:01:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/800392077' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 10:01:00 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22923 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 11 10:01:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2161272959' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 10:01:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 11 10:01:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/172235930' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 11 10:01:00 compute-0 ceph-mon[74313]: pgmap v3609: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:00 compute-0 ceph-mon[74313]: from='client.22921 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:00 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/800392077' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 10:01:00 compute-0 ceph-mon[74313]: from='client.22923 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:00 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2161272959' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 10:01:00 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/172235930' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 11 10:01:00 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 10:01:00 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2639512722' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 10:01:01 compute-0 unix_chkpwd[453758]: password check failed for user (root)
Oct 11 10:01:01 compute-0 sshd-session[453679]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.157.67.116  user=root
Oct 11 10:01:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 11 10:01:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3476374198' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 11 10:01:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 11 10:01:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/316147005' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 11 10:01:01 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22935 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:01 compute-0 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T10:01:01.647+0000 7f7f1b2f5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 11 10:01:01 compute-0 ceph-mgr[74605]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 11 10:01:01 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 11 10:01:01 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1387189187' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 10:01:01 compute-0 CROND[453852]: (root) CMD (run-parts /etc/cron.hourly)
Oct 11 10:01:01 compute-0 run-parts[453866]: (/etc/cron.hourly) starting 0anacron
Oct 11 10:01:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2639512722' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 10:01:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3476374198' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 11 10:01:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/316147005' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 11 10:01:01 compute-0 ceph-mon[74313]: from='client.22935 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:01 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1387189187' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 10:01:01 compute-0 run-parts[453872]: (/etc/cron.hourly) finished 0anacron
Oct 11 10:01:01 compute-0 CROND[453844]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 11 10:01:02 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3610: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:02 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22941 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 11 10:01:02 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/475225399' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 11 10:01:02 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22943 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:02 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 11 10:01:02 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/412012730' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 11 10:01:02 compute-0 nova_compute[260935]: 2025-10-11 10:01:02.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:01:02 compute-0 nova_compute[260935]: 2025-10-11 10:01:02.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:01:02 compute-0 nova_compute[260935]: 2025-10-11 10:01:02.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 10:01:02 compute-0 nova_compute[260935]: 2025-10-11 10:01:02.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:01:02 compute-0 nova_compute[260935]: 2025-10-11 10:01:02.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 10:01:02 compute-0 nova_compute[260935]: 2025-10-11 10:01:02.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:01:02 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22947 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:02 compute-0 ceph-mon[74313]: pgmap v3610: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:02 compute-0 ceph-mon[74313]: from='client.22941 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/475225399' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 11 10:01:02 compute-0 ceph-mon[74313]: from='client.22943 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:02 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/412012730' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 11 10:01:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 11 10:01:03 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4227035867' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 10:01:03 compute-0 sshd-session[453679]: Failed password for root from 43.157.67.116 port 56932 ssh2
Oct 11 10:01:03 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22951 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:03 compute-0 sshd-session[453679]: Received disconnect from 43.157.67.116 port 56932:11: Bye Bye [preauth]
Oct 11 10:01:03 compute-0 sshd-session[453679]: Disconnected from authenticating user root 43.157.67.116 port 56932 [preauth]
Oct 11 10:01:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 11 10:01:03 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647786049' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:54.461254+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4df1d6d20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb8ec00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 40697856 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb3c3000/0x0/0x4ffc00000, data 0x4c992a5/0x4e3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:55.461386+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3529643 data_alloc: 218103808 data_used: 33161216
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 40697856 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:56.461538+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 36585472 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:57.461763+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292601856 unmapped: 36470784 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:58.461859+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 36446208 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:59.462007+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea9bc000/0x0/0x4ffc00000, data 0x56a02a5/0x5842000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 36446208 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:00.462217+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3619357 data_alloc: 234881024 data_used: 34316288
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 36446208 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:01.462502+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 36446208 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:02.462699+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.258290291s of 10.057239532s, submitted: 151
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292642816 unmapped: 36429824 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:03.462918+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292651008 unmapped: 36421632 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:04.463105+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea9b9000/0x0/0x4ffc00000, data 0x56a32a5/0x5845000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292651008 unmapped: 36421632 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db02ec00 session 0x55c4dc1fb0e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dc1b3e00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:05.465406+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea9b9000/0x0/0x4ffc00000, data 0x56a32a5/0x5845000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3617337 data_alloc: 234881024 data_used: 34308096
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292659200 unmapped: 36413440 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:06.465616+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292659200 unmapped: 36413440 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea9b9000/0x0/0x4ffc00000, data 0x56a32a5/0x5845000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:07.465881+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292700160 unmapped: 36372480 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dbe98d20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:08.466084+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292700160 unmapped: 36372480 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:09.466242+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb091000/0x0/0x4ffc00000, data 0x4d7c2a5/0x4f1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292700160 unmapped: 36372480 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:10.466424+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3513165 data_alloc: 218103808 data_used: 29384704
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292700160 unmapped: 36372480 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:11.466610+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292700160 unmapped: 36372480 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:12.466763+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e6920c00 session 0x55c4dfbab0e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11000 session 0x55c4db568000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e6920c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.251020432s of 10.060697556s, submitted: 29
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:13.466886+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e6920c00 session 0x55c4dc11a5a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb5b4000/0x0/0x4ffc00000, data 0x4765233/0x4905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:14.467052+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:15.467218+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb5b4000/0x0/0x4ffc00000, data 0x4765210/0x4904000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3453472 data_alloc: 218103808 data_used: 28397568
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:16.467368+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:17.467482+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb5b4000/0x0/0x4ffc00000, data 0x4765210/0x4904000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:18.467619+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb5b4000/0x0/0x4ffc00000, data 0x4765210/0x4904000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:19.467801+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb5b4000/0x0/0x4ffc00000, data 0x4765210/0x4904000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:20.468014+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3453472 data_alloc: 218103808 data_used: 28397568
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:21.468203+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb5b4000/0x0/0x4ffc00000, data 0x4765210/0x4904000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:22.468361+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 36331520 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:23.468497+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292749312 unmapped: 36323328 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:24.469136+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292749312 unmapped: 36323328 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:25.469497+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb5b4000/0x0/0x4ffc00000, data 0x4765210/0x4904000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3453472 data_alloc: 218103808 data_used: 28397568
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292749312 unmapped: 36323328 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:26.469709+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292749312 unmapped: 36323328 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:27.469961+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.098580360s of 15.238180161s, submitted: 38
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292749312 unmapped: 36323328 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:28.470200+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb9dc00 session 0x55c4daffd4a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df132c00 session 0x55c4ddbaa5a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4db02ec00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292749312 unmapped: 36323328 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:29.470451+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292749312 unmapped: 36323328 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:30.470628+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3452562 data_alloc: 218103808 data_used: 28401664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292749312 unmapped: 36323328 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:31.470961+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec78e000/0x0/0x4ffc00000, data 0x38cc200/0x3a6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292757504 unmapped: 36315136 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:32.471108+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292757504 unmapped: 36315136 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:33.471334+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292757504 unmapped: 36315136 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:34.471573+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db02ec00 session 0x55c4dd81f680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292757504 unmapped: 36315136 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:35.471847+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306668 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292757504 unmapped: 36315136 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:36.472017+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:37.472196+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:38.472340+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:39.472476+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:40.472584+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306668 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:41.473336+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:42.473494+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:43.473650+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:44.473859+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:45.474008+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306668 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:46.474173+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:47.474325+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:48.474545+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292765696 unmapped: 36306944 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:49.474711+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292773888 unmapped: 36298752 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:50.474892+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306668 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292773888 unmapped: 36298752 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:51.475123+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292773888 unmapped: 36298752 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:52.475283+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292782080 unmapped: 36290560 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:53.475484+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292782080 unmapped: 36290560 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:54.476491+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292782080 unmapped: 36290560 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:55.476633+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306668 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292782080 unmapped: 36290560 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:56.476794+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292782080 unmapped: 36290560 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:57.477088+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292782080 unmapped: 36290560 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:58.477237+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292782080 unmapped: 36290560 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:59.477394+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 36921344 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:00.477521+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306668 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 36921344 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:01.477810+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 36921344 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:02.477991+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292151296 unmapped: 36921344 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:03.478121+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:04.478268+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 36913152 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:05.478362+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 36913152 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306668 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:06.478522+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 36913152 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:07.478653+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 36913152 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:08.478918+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292159488 unmapped: 36913152 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:09.479282+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292167680 unmapped: 36904960 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4db02ec00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:10.479467+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292167680 unmapped: 36904960 heap: 329072640 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 40.671962738s of 42.242687225s, submitted: 18
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db02ec00 session 0x55c4ddbc4d20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11000 session 0x55c4db580780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb9dc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb9dc00 session 0x55c4ddb485a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4df132c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df132c00 session 0x55c4ddbb8d20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e6920c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e6920c00 session 0x55c4dd17f680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390262 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:11.479919+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292716544 unmapped: 41615360 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:12.480230+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292716544 unmapped: 41615360 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebc95000/0x0/0x4ffc00000, data 0x43cc19e/0x4569000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:13.480370+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292716544 unmapped: 41615360 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:14.480538+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292716544 unmapped: 41615360 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc1cb000 session 0x55c4ddaf6f00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e6920c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd00d000 session 0x55c4dfbaa960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4db02ec00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb8b000 session 0x55c4df1d7860
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:15.480714+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292716544 unmapped: 41615360 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390262 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:16.480868+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292724736 unmapped: 41607168 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:17.480985+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebc95000/0x0/0x4ffc00000, data 0x43cc19e/0x4569000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292724736 unmapped: 41607168 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:18.481129+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292724736 unmapped: 41607168 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:19.481426+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292724736 unmapped: 41607168 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:20.481608+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292732928 unmapped: 41598976 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d800 session 0x55c4dd1534a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb8b000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb9dc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.103104591s of 10.239775658s, submitted: 11
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb9dc00 session 0x55c4dfbaab40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebc95000/0x0/0x4ffc00000, data 0x43cc19e/0x4569000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3391561 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4df132c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:21.481798+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 41590784 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:22.481942+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 292741120 unmapped: 41590784 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:23.482085+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:24.482235+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebc94000/0x0/0x4ffc00000, data 0x43cc1c1/0x456a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:25.482386+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3471561 data_alloc: 234881024 data_used: 33812480
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:26.482532+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebc94000/0x0/0x4ffc00000, data 0x43cc1c1/0x456a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:27.482622+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebc94000/0x0/0x4ffc00000, data 0x43cc1c1/0x456a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebc94000/0x0/0x4ffc00000, data 0x43cc1c1/0x456a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:28.482755+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:29.482932+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:30.483087+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3471561 data_alloc: 234881024 data_used: 33812480
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:31.483333+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebc94000/0x0/0x4ffc00000, data 0x43cc1c1/0x456a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:32.483488+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 293060608 unmapped: 41271296 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.165242195s of 12.180531502s, submitted: 4
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:33.483726+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297926656 unmapped: 36405248 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:34.483992+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297959424 unmapped: 36372480 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:35.484200+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3584047 data_alloc: 234881024 data_used: 35147776
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:36.484393+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb005000/0x0/0x4ffc00000, data 0x505b1c1/0x51f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:37.484600+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:38.484794+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb005000/0x0/0x4ffc00000, data 0x505b1c1/0x51f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:39.484987+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:40.485136+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3582539 data_alloc: 234881024 data_used: 35147776
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:41.485371+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eafe6000/0x0/0x4ffc00000, data 0x507a1c1/0x5218000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:42.485555+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:43.485858+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:44.486117+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eafe6000/0x0/0x4ffc00000, data 0x507a1c1/0x5218000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:45.486282+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.867265701s of 13.200298309s, submitted: 98
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3582939 data_alloc: 234881024 data_used: 35155968
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:46.486447+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:47.486622+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 35553280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4ddbc4000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcebdc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4ddb49a40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4db37b800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4ddbab860
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcec0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcec0800 session 0x55c4dbe710e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4db37b800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4df73b2c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:48.487021+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299114496 unmapped: 35217408 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:49.487183+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299114496 unmapped: 35217408 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eaa33000/0x0/0x4ffc00000, data 0x562d1c1/0x57cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:50.487382+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299114496 unmapped: 35217408 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3630133 data_alloc: 234881024 data_used: 35155968
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:51.487559+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299114496 unmapped: 35217408 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eaa33000/0x0/0x4ffc00000, data 0x562d1c1/0x57cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:52.487717+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299114496 unmapped: 35217408 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:53.487872+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299114496 unmapped: 35217408 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dd2e2d20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:54.488005+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299122688 unmapped: 35209216 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcebdc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcec0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:55.488149+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299130880 unmapped: 35201024 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3632322 data_alloc: 234881024 data_used: 35307520
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:56.488272+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 299302912 unmapped: 35028992 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:57.488460+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 33513472 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eaa33000/0x0/0x4ffc00000, data 0x562d1c1/0x57cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:58.488609+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 33513472 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:59.488773+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 33513472 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:00.488886+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 33513472 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3671522 data_alloc: 234881024 data_used: 40796160
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:01.489061+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 33513472 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:02.489190+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 33513472 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eaa33000/0x0/0x4ffc00000, data 0x562d1c1/0x57cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:03.489311+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 33513472 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:04.489447+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 33513472 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:05.492863+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.157880783s of 19.457237244s, submitted: 21
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 33505280 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3671874 data_alloc: 234881024 data_used: 40796160
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:06.492997+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 30973952 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea6c8000/0x0/0x4ffc00000, data 0x59981c1/0x5b36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,14,28])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:07.493127+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 31129600 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:08.493299+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 31129600 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:09.493435+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305004544 unmapped: 29327360 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:10.493582+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305012736 unmapped: 29319168 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3708812 data_alloc: 234881024 data_used: 40869888
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:11.493703+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305029120 unmapped: 29302784 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:12.493887+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305029120 unmapped: 29302784 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea455000/0x0/0x4ffc00000, data 0x5c0b1c1/0x5da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:13.494009+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:14.494179+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea455000/0x0/0x4ffc00000, data 0x5c0b1c1/0x5da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:15.494286+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3723514 data_alloc: 234881024 data_used: 41152512
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:16.494420+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.852592468s of 11.093073845s, submitted: 89
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:17.494577+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea455000/0x0/0x4ffc00000, data 0x5c0b1c1/0x5da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:18.494717+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:19.494879+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:20.495015+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3721486 data_alloc: 234881024 data_used: 41213952
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:21.495212+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305037312 unmapped: 29294592 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4ddbbe780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcec0800 session 0x55c4dc11ab40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea434000/0x0/0x4ffc00000, data 0x5c2c1c1/0x5dca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb9dc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:22.495339+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb9dc00 session 0x55c4ddbc54a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 29925376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:23.495532+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 29925376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:24.496279+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 29925376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:25.497006+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eafd3000/0x0/0x4ffc00000, data 0x508d1c1/0x522b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 29925376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df132c00 session 0x55c4ddacfc20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4df73a000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3592800 data_alloc: 234881024 data_used: 35217408
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:26.497174+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4db37b800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 29925376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.106317520s of 10.223801613s, submitted: 27
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4dd97ef00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:27.497341+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:28.497450+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:29.497613+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:30.497773+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:31.498003+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:32.498175+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:33.498545+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:34.498704+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:35.498862+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.3 total, 600.0 interval
                                           Cumulative writes: 33K writes, 131K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 33K writes, 11K syncs, 2.78 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3725 writes, 14K keys, 3725 commit groups, 1.0 writes per commit group, ingest: 17.79 MB, 0.03 MB/s
                                           Interval WAL: 3725 writes, 1461 syncs, 2.55 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:36.499068+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:37.499281+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:38.499454+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:39.499622+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:40.499778+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:41.500104+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:42.500250+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:43.500476+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:44.500643+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:45.500919+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:46.501164+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:47.501338+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:48.501508+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:49.501698+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:50.501855+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:51.502113+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 40206336 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:52.502341+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 40198144 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:53.502614+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 40198144 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:54.502862+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 40198144 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:55.503053+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 40198144 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:56.503185+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 40198144 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:57.503460+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 40198144 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:58.503641+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294141952 unmapped: 40189952 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:59.503795+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294141952 unmapped: 40189952 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:00.503979+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294141952 unmapped: 40189952 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:01.504185+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:02.504429+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:03.504587+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:04.505107+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:05.505292+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:06.507078+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:07.508137+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:08.508294+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:09.508442+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 40181760 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:10.508568+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 40173568 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:11.508766+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 40173568 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:12.508962+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 40165376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:13.509122+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 40165376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:14.509250+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 40165376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:15.509373+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 40165376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:16.509551+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327481 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 40165376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:17.509682+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 40165376 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec5cc000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf9ff9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 51.215015411s of 51.293136597s, submitted: 20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4ddaf7c20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcebdc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4dc11b860
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcebdc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4ddaf74a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:18.509850+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4db37b800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4dbe70960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dbe71a40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 39952384 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:19.509986+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 39952384 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:20.510287+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 39944192 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:21.510888+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361388 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 39944192 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:22.511090+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 39944192 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:23.511637+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 39944192 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:24.511890+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 39936000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:25.512032+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 39936000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:26.512280+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361388 data_alloc: 218103808 data_used: 22568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4df1d6000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 39936000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:27.512474+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4df132c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcec0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 39927808 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:28.512615+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 39919616 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:29.512768+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:30.512857+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:31.513069+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3389388 data_alloc: 218103808 data_used: 26505216
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:32.513220+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:33.513367+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:34.513515+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:35.513672+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:36.513875+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3389388 data_alloc: 218103808 data_used: 26505216
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:37.513976+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:38.514115+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebf72000/0x0/0x4ffc00000, data 0x3cde200/0x3e7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.580312729s of 20.828023911s, submitted: 27
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 39911424 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:39.514238+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 39559168 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:40.514392+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:41.514573+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3404442 data_alloc: 218103808 data_used: 26533888
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:42.514720+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:43.514902+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebe1f000/0x0/0x4ffc00000, data 0x3e31200/0x3fcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebe1f000/0x0/0x4ffc00000, data 0x3e31200/0x3fcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:44.515082+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:45.515251+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:46.515456+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3403254 data_alloc: 218103808 data_used: 26537984
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:47.515597+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:48.515745+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebe00000/0x0/0x4ffc00000, data 0x3e50200/0x3fee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:49.515893+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:50.516094+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebe00000/0x0/0x4ffc00000, data 0x3e50200/0x3fee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:51.516337+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3403254 data_alloc: 218103808 data_used: 26537984
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296353792 unmapped: 37978112 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.226534843s of 13.379148483s, submitted: 41
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:52.516493+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 37969920 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebdf7000/0x0/0x4ffc00000, data 0x3e59200/0x3ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:53.516668+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 37969920 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:54.516880+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 37969920 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:55.517065+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 37969920 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:56.517188+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebdf7000/0x0/0x4ffc00000, data 0x3e59200/0x3ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3403498 data_alloc: 218103808 data_used: 26537984
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 37961728 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:57.517313+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 37961728 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:58.517425+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 37961728 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:59.517575+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebdf7000/0x0/0x4ffc00000, data 0x3e59200/0x3ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 37961728 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:00.517697+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296419328 unmapped: 37912576 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:01.517853+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3403146 data_alloc: 218103808 data_used: 26537984
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 37888000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:02.518039+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebdf7000/0x0/0x4ffc00000, data 0x3e59200/0x3ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 37888000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:03.518198+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 37888000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:04.518364+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 37888000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebdf7000/0x0/0x4ffc00000, data 0x3e59200/0x3ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:05.518508+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 37888000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:06.518647+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3403146 data_alloc: 218103808 data_used: 26537984
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 37888000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:07.518705+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 37888000 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.515163422s of 15.872191429s, submitted: 92
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:08.518874+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebdf7000/0x0/0x4ffc00000, data 0x3e59200/0x3ff7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 37879808 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:09.519488+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb9a800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 37879808 heap: 334331904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb9a800 session 0x55c4db430780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4db37b800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4dbe9b2c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4ddbb9860
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:10.519666+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dd17f4a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcebdc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4dc11e780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 41459712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:11.520097+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb109000/0x0/0x4ffc00000, data 0x4b47200/0x4ce5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3502292 data_alloc: 218103808 data_used: 26537984
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 41459712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:12.520272+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 41459712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:13.520744+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 41459712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:14.520892+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 41459712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:15.521047+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 41451520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb109000/0x0/0x4ffc00000, data 0x4b47200/0x4ce5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:16.521189+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddb494a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3504562 data_alloc: 218103808 data_used: 26537984
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4db37b800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 41426944 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:17.521340+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 41426944 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:18.521509+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297754624 unmapped: 40255488 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:19.521639+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:20.521775+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:21.522250+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3591574 data_alloc: 234881024 data_used: 36249600
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb108000/0x0/0x4ffc00000, data 0x4b47223/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:22.522391+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:23.522511+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:24.522722+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:25.522898+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb108000/0x0/0x4ffc00000, data 0x4b47223/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:26.523054+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3591574 data_alloc: 234881024 data_used: 36249600
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:27.523284+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 36954112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:28.523394+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb108000/0x0/0x4ffc00000, data 0x4b47223/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xfe0f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.463996887s of 20.706624985s, submitted: 28
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301973504 unmapped: 36036608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:29.523537+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301989888 unmapped: 36020224 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:30.523699+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:31.523865+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3676272 data_alloc: 234881024 data_used: 37085184
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:32.523987+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb865000/0x0/0x4ffc00000, data 0x5429223/0x55c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:33.524167+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:34.524302+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb865000/0x0/0x4ffc00000, data 0x5429223/0x55c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:35.524458+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:36.524604+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3676272 data_alloc: 234881024 data_used: 37085184
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:37.524761+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb865000/0x0/0x4ffc00000, data 0x5429223/0x55c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:38.524931+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:39.525116+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:40.525275+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:41.525445+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3677232 data_alloc: 234881024 data_used: 37154816
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb865000/0x0/0x4ffc00000, data 0x5429223/0x55c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:42.525560+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:43.525702+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 35987456 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb865000/0x0/0x4ffc00000, data 0x5429223/0x55c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:44.525918+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4dc1d43c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.669697762s of 15.914948463s, submitted: 63
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4df1d6b40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 43892736 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:45.526059+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dd92d4a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 43859968 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:46.526201+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416876 data_alloc: 218103808 data_used: 24150016
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:47.526357+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ece29000/0x0/0x4ffc00000, data 0x3e66200/0x4004000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:48.526511+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ece29000/0x0/0x4ffc00000, data 0x3e66200/0x4004000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:49.526694+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ece29000/0x0/0x4ffc00000, data 0x3e66200/0x4004000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:50.526898+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df132c00 session 0x55c4ddbaba40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcec0800 session 0x55c4ddaceb40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:51.527077+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4db37b800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345708 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4dd97fa40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:52.527196+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:53.527328+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:54.527484+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 42811392 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:55.527685+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:56.527806+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345708 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:57.527991+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:58.528151+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:59.528270+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:00.528442+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:01.528658+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345708 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:02.528889+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:03.529002+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:04.529131+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:05.529279+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:06.529440+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345708 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:07.529592+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:08.529735+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:09.529907+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:10.530001+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 42803200 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:11.530159+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345708 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:12.530290+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 42795008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:13.531904+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 42795008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:14.532073+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 42795008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:15.532193+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 42795008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:16.532790+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 42795008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345708 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:17.533022+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 42795008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:18.533162+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 42795008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ed3c4000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:19.533300+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295215104 unmapped: 42795008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:20.533478+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 42786816 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.936973572s of 36.086498260s, submitted: 52
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4ddbc5a40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dc42c780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4df132c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df132c00 session 0x55c4dfbaba40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcebdc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4ddbab680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcebdc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4df73a5a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:21.533647+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379140 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:22.533800+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:23.534035+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:24.534399+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ecfd2000/0x0/0x4ffc00000, data 0x3cbf19e/0x3e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:25.534600+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ecfd2000/0x0/0x4ffc00000, data 0x3cbf19e/0x3e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:26.534761+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3379140 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ecfd2000/0x0/0x4ffc00000, data 0x3cbf19e/0x3e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:27.534902+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4db37b800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4dc1e14a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:28.535048+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dd97f4a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4db430000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4df132c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df132c00 session 0x55c4dd92c960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:29.535197+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4db37b800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:30.535399+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 43933696 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:31.535590+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294084608 unmapped: 43925504 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ecfd0000/0x0/0x4ffc00000, data 0x3cbf1d1/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411287 data_alloc: 218103808 data_used: 24174592
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:32.535897+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:33.536021+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:34.536163+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:35.536304+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:36.536478+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411287 data_alloc: 218103808 data_used: 24174592
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ecfd0000/0x0/0x4ffc00000, data 0x3cbf1d1/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:37.536634+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ecfd0000/0x0/0x4ffc00000, data 0x3cbf1d1/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:38.536773+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:39.536987+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:40.537124+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 43917312 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ecfd0000/0x0/0x4ffc00000, data 0x3cbf1d1/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ecfd0000/0x0/0x4ffc00000, data 0x3cbf1d1/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xedcf9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.252729416s of 20.389076233s, submitted: 12
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:41.537290+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297803776 unmapped: 40206336 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3481653 data_alloc: 218103808 data_used: 25280512
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:42.537452+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 40452096 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:43.537659+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:44.537903+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:45.538048+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:46.538217+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53f000/0x0/0x4ffc00000, data 0x45b01d1/0x474f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3492885 data_alloc: 218103808 data_used: 25366528
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:47.538384+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:48.538610+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:49.538852+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:50.539043+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:51.539210+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3488117 data_alloc: 218103808 data_used: 25366528
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:52.539326+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53c000/0x0/0x4ffc00000, data 0x45b31d1/0x4752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:53.539472+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297566208 unmapped: 40443904 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:54.539621+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 40435712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:55.539783+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 40435712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:56.539960+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 40435712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3488117 data_alloc: 218103808 data_used: 25366528
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:57.540078+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 40435712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53c000/0x0/0x4ffc00000, data 0x45b31d1/0x4752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:58.540229+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 40435712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53c000/0x0/0x4ffc00000, data 0x45b31d1/0x4752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:59.540395+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 40435712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:00.540535+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 40435712 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53c000/0x0/0x4ffc00000, data 0x45b31d1/0x4752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:01.540712+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 40427520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3488117 data_alloc: 218103808 data_used: 25366528
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:02.540949+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 40427520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:03.541098+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 40427520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:04.541226+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 40427520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:05.541364+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 40427520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53c000/0x0/0x4ffc00000, data 0x45b31d1/0x4752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:06.541524+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 40427520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3488117 data_alloc: 218103808 data_used: 25366528
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:07.541773+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 40427520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:08.541855+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 40427520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53c000/0x0/0x4ffc00000, data 0x45b31d1/0x4752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:09.541995+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 40419328 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53c000/0x0/0x4ffc00000, data 0x45b31d1/0x4752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:10.542149+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 40419328 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:11.542331+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 40419328 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3488117 data_alloc: 218103808 data_used: 25366528
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:12.542450+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 40419328 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.730146408s of 32.096771240s, submitted: 93
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:13.542552+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 40411136 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb53c000/0x0/0x4ffc00000, data 0x45b31d1/0x4752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dd137680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcebdc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4dbe9b2c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddaceb40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb9cc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb9cc00 session 0x55c4dd414960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4ddaf6000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:14.542706+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 40148992 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:15.542857+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 40148992 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb4d6000/0x0/0x4ffc00000, data 0x46191d1/0x47b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:16.542988+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 40148992 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3497288 data_alloc: 218103808 data_used: 25366528
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:17.543106+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 40148992 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb4d6000/0x0/0x4ffc00000, data 0x46191d1/0x47b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:18.543218+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 40148992 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:19.543342+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 40148992 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:20.543550+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297861120 unmapped: 40148992 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb4d6000/0x0/0x4ffc00000, data 0x46191d1/0x47b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4ddaf7680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcebdc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:21.543688+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297869312 unmapped: 40140800 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3497420 data_alloc: 218103808 data_used: 25366528
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:22.543863+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297877504 unmapped: 40132608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:23.544040+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297877504 unmapped: 40132608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb4d6000/0x0/0x4ffc00000, data 0x46191d1/0x47b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:24.544198+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297877504 unmapped: 40132608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:25.544354+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297877504 unmapped: 40132608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:26.544496+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297877504 unmapped: 40132608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3497740 data_alloc: 218103808 data_used: 25399296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:27.544592+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297877504 unmapped: 40132608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:28.544700+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297877504 unmapped: 40132608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:29.544947+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb4d6000/0x0/0x4ffc00000, data 0x46191d1/0x47b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297885696 unmapped: 40124416 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:30.545090+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297885696 unmapped: 40124416 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:31.545247+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297885696 unmapped: 40124416 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb4d6000/0x0/0x4ffc00000, data 0x46191d1/0x47b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3497740 data_alloc: 218103808 data_used: 25399296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:32.545378+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.066957474s of 19.134149551s, submitted: 19
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297885696 unmapped: 40124416 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:33.545508+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298737664 unmapped: 39272448 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:34.545698+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 39231488 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb302000/0x0/0x4ffc00000, data 0x47e51d1/0x4984000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:35.545832+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 39231488 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:36.545962+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298778624 unmapped: 39231488 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3525892 data_alloc: 218103808 data_used: 25448448
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:37.546101+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb286000/0x0/0x4ffc00000, data 0x48611d1/0x4a00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 39223296 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:38.546346+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 39223296 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:39.546488+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298786816 unmapped: 39223296 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:40.546655+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298057728 unmapped: 39952384 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb26f000/0x0/0x4ffc00000, data 0x48801d1/0x4a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:41.547054+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298057728 unmapped: 39952384 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3522512 data_alloc: 218103808 data_used: 25452544
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:42.547217+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298057728 unmapped: 39952384 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:43.547350+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298057728 unmapped: 39952384 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:44.547554+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 39944192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:45.547766+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.252151489s of 13.026142120s, submitted: 40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 39944192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:46.547966+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb26f000/0x0/0x4ffc00000, data 0x48801d1/0x4a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dc1d4780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4ddbc4000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 39944192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb9cc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb9cc00 session 0x55c4dbe71c20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:47.548123+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3494746 data_alloc: 218103808 data_used: 25370624
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:48.548263+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:49.548463+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:50.548643+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb37c000/0x0/0x4ffc00000, data 0x45b41d1/0x4753000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:51.548802+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb37c000/0x0/0x4ffc00000, data 0x45b41d1/0x4753000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:52.548981+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3494746 data_alloc: 218103808 data_used: 25370624
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:53.549155+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:54.549276+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:55.549419+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb37c000/0x0/0x4ffc00000, data 0x45b41d1/0x4753000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298090496 unmapped: 39919616 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:56.549598+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37b800 session 0x55c4dc11ad20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4db430b40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 298098688 unmapped: 39911424 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.455710411s of 11.519099236s, submitted: 24
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dd40b0e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:57.549745+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362051 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:58.549920+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:59.550100+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:00.550293+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec224000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:01.550601+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:02.550769+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362051 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:03.550917+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:04.551046+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec224000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:05.551225+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:06.551398+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:07.551572+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362051 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec224000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:08.551740+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:09.551894+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:10.552096+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec224000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:11.552373+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:12.552561+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362051 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:13.552730+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:14.552889+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:15.553061+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ec224000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:16.553235+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 43065344 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dd97f0e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcebdc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4dd81f680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb9cc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb9cc00 session 0x55c4db5812c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4daffcb40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.055343628s of 20.130304337s, submitted: 18
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dfbab4a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dd415a40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:17.553415+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcebdc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4dd1923c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385622 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddb483c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddbc4f00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 42926080 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:18.553567+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 42926080 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:19.553727+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebff0000/0x0/0x4ffc00000, data 0x3b001ae/0x3c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 42926080 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:20.553921+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 42926080 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:21.554121+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 42926080 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:22.554225+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dc1b30e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385490 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 42926080 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc1e1860
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:23.554429+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4ddbc50e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 42926080 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcebdc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4dd92c000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:24.554641+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcebdc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebfca000/0x0/0x4ffc00000, data 0x3b241e1/0x3cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:25.554841+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:26.554983+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:27.555107+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3409626 data_alloc: 218103808 data_used: 22437888
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:28.555246+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:29.555375+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebfca000/0x0/0x4ffc00000, data 0x3b241e1/0x3cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:30.555508+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:31.555660+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:32.555797+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3409626 data_alloc: 218103808 data_used: 22437888
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebfca000/0x0/0x4ffc00000, data 0x3b241e1/0x3cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:33.555943+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ebfca000/0x0/0x4ffc00000, data 0x3b241e1/0x3cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xff6f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:34.556083+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 42606592 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:35.556223+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.175285339s of 18.298599243s, submitted: 28
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302612480 unmapped: 35397632 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:36.556331+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302612480 unmapped: 35397632 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:37.556455+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3484970 data_alloc: 218103808 data_used: 22724608
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 35930112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:38.556628+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 35930112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:39.556799+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea66e000/0x0/0x4ffc00000, data 0x42d81e1/0x4478000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 35930112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:40.556983+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 35930112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea66e000/0x0/0x4ffc00000, data 0x42d81e1/0x4478000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:41.557181+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 35930112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:42.557320+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3484970 data_alloc: 218103808 data_used: 22724608
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 35930112 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:43.557467+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 35962880 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:44.557607+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 35962880 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:45.557769+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 35962880 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:46.557953+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea673000/0x0/0x4ffc00000, data 0x42db1e1/0x447b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 35962880 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:47.558104+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479898 data_alloc: 218103808 data_used: 22724608
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 35962880 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea673000/0x0/0x4ffc00000, data 0x42db1e1/0x447b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:48.558220+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea673000/0x0/0x4ffc00000, data 0x42db1e1/0x447b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 35962880 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:49.558363+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 35962880 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:50.558514+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 35962880 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:51.558674+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.904396057s of 16.287437439s, submitted: 89
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306438144 unmapped: 31571968 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dd40b860
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4db581e00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4dc1b25a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddba6800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea673000/0x0/0x4ffc00000, data 0x42db1e1/0x447b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:52.558841+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dbe70f00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4de70f400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4de70f400 session 0x55c4dfbabc20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3516464 data_alloc: 218103808 data_used: 22724608
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301973504 unmapped: 36036608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:53.559213+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301973504 unmapped: 36036608 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea208000/0x0/0x4ffc00000, data 0x47461e1/0x48e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:54.559373+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301981696 unmapped: 36028416 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea208000/0x0/0x4ffc00000, data 0x47461e1/0x48e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:55.559545+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301989888 unmapped: 36020224 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:56.559691+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4ddbbe1e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301998080 unmapped: 36012032 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddba6800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:57.559865+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517765 data_alloc: 218103808 data_used: 22724608
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301998080 unmapped: 36012032 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:58.560037+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea207000/0x0/0x4ffc00000, data 0x4746204/0x48e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302006272 unmapped: 36003840 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:59.560172+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302006272 unmapped: 36003840 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:00.560339+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302006272 unmapped: 36003840 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:01.560552+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302006272 unmapped: 36003840 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:02.560738+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3549765 data_alloc: 218103808 data_used: 27254784
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302014464 unmapped: 35995648 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:03.561570+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302014464 unmapped: 35995648 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:04.561851+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea207000/0x0/0x4ffc00000, data 0x4746204/0x48e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.904636383s of 13.092563629s, submitted: 11
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302014464 unmapped: 35995648 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:05.561999+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302014464 unmapped: 35995648 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:06.562184+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302014464 unmapped: 35995648 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:07.562368+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea206000/0x0/0x4ffc00000, data 0x4747204/0x48e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3550473 data_alloc: 218103808 data_used: 27267072
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 302014464 unmapped: 35995648 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:08.562498+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305487872 unmapped: 32522240 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:09.562617+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 33185792 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:10.562778+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e9b2d000/0x0/0x4ffc00000, data 0x4e20204/0x4fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 33185792 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:11.563040+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 33185792 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:12.563205+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e9b2d000/0x0/0x4ffc00000, data 0x4e20204/0x4fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3610903 data_alloc: 218103808 data_used: 27291648
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 33185792 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:13.563380+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e9b2d000/0x0/0x4ffc00000, data 0x4e20204/0x4fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 33185792 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:14.563540+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4daffd4a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dc42de00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 33185792 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:15.563669+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.393812180s of 10.628395081s, submitted: 47
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4db568960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 33873920 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:16.563801+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 33873920 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:17.563971+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489091 data_alloc: 218103808 data_used: 22724608
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea671000/0x0/0x4ffc00000, data 0x42dc1e1/0x447c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 33873920 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:18.564145+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4dc1e05a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4df73a960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 33873920 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcebdc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:19.564258+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4dd81f860
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:20.564373+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:21.564576+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb084000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:22.564711+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384810 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:23.564898+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb084000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:24.565049+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:25.565795+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:26.566072+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:27.567280+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384810 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb084000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:28.567391+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:29.567648+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:30.568158+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:31.568368+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb084000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:32.568522+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384810 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:33.568864+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb084000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 33841152 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:34.569384+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 33832960 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:35.570269+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 33832960 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:36.570678+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.278112411s of 21.508995056s, submitted: 69
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc11ad20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 33832960 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4ddace1e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddba6800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4ddace780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddba6800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dd40ab40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4ddb494a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:37.570859+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3407475 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 33832960 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:38.571097+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ead85000/0x0/0x4ffc00000, data 0x3bcc19e/0x3d69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 33832960 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:39.571239+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 33832960 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:40.571374+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 33824768 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:41.571561+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc1d54a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcebdc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 33816576 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:42.571698+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3407607 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 33808384 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:43.571938+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:44.572131+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ead85000/0x0/0x4ffc00000, data 0x3bcc19e/0x3d69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:45.572333+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:46.572519+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:47.572736+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429687 data_alloc: 218103808 data_used: 23355392
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:48.572943+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:49.573182+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ead85000/0x0/0x4ffc00000, data 0x3bcc19e/0x3d69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:50.573338+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:51.573543+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 33800192 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:52.573794+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429687 data_alloc: 218103808 data_used: 23355392
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.078292847s of 16.170019150s, submitted: 12
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 33193984 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:53.573947+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ead85000/0x0/0x4ffc00000, data 0x3bcc19e/0x3d69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:54.574073+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:55.574210+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:56.574372+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:57.574577+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435355 data_alloc: 218103808 data_used: 23355392
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:58.574701+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eacd8000/0x0/0x4ffc00000, data 0x3c7919e/0x3e16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:59.574890+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:00.575056+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 33521664 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:01.575236+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 33521664 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:02.575403+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435355 data_alloc: 218103808 data_used: 23355392
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eacd8000/0x0/0x4ffc00000, data 0x3c7919e/0x3e16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 33521664 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:03.575540+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eacd8000/0x0/0x4ffc00000, data 0x3c7919e/0x3e16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 33521664 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:04.575703+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 33521664 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:05.575896+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.828106880s of 12.854273796s, submitted: 6
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4ddbb94a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4ddbc4000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 33570816 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:06.576013+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dc1d4b40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 33570816 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:07.576150+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386683 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 33570816 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:08.576333+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 33570816 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:09.576495+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 33570816 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:10.576654+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 33570816 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:11.576932+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 33562624 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:12.577075+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386683 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 33562624 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:13.577280+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 33562624 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:14.577476+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 33562624 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:15.577715+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 33562624 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:16.577897+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 33554432 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:17.578096+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386683 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 33554432 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:18.578244+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 33554432 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:19.578432+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:20.578639+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 33554432 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:21.578902+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 33554432 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:22.579051+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 33554432 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386683 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:23.579209+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 33554432 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:24.579337+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 33546240 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:25.579518+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 33538048 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:26.579701+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 33538048 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:27.579872+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 33538048 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386683 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:28.580004+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:29.580381+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:30.580550+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:31.582389+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:32.582584+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 33529856 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4ddacfc20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dc11e960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcebdc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4ddb483c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386683 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddba6800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dc1b3a40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddba6800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.174865723s of 27.243019104s, submitted: 17
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:33.582763+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dd92c780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304930816 unmapped: 33079296 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4ddacf2c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4db5e8b40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4db04cb40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcebdc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcebdc00 session 0x55c4db47ab40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:34.582915+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 33144832 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:35.584053+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 33144832 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ead80000/0x0/0x4ffc00000, data 0x3bd01ae/0x3d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:36.584316+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 33144832 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:37.584482+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 33144832 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3419330 data_alloc: 218103808 data_used: 20213760
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dd81f2c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:38.585037+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 33144832 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dd136f00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:39.585268+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 33144832 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4ddaf65a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddba6800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ead80000/0x0/0x4ffc00000, data 0x3bd01ae/0x3d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4db5e9a40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd4e0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:40.585468+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:41.585652+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:42.585976+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435338 data_alloc: 218103808 data_used: 21962752
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:43.586145+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ead5c000/0x0/0x4ffc00000, data 0x3bf41ae/0x3d92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:44.586312+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:45.586549+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:46.586880+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:47.587061+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435338 data_alloc: 218103808 data_used: 21962752
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:48.587251+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 33136640 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ead5c000/0x0/0x4ffc00000, data 0x3bf41ae/0x3d92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:49.587470+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304881664 unmapped: 33128448 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:50.587609+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304881664 unmapped: 33128448 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:51.587778+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.143718719s of 18.263084412s, submitted: 24
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304889856 unmapped: 33120256 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:52.587919+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305659904 unmapped: 32350208 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eaacc000/0x0/0x4ffc00000, data 0x3e7c1ae/0x401a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461194 data_alloc: 218103808 data_used: 22016000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:53.588091+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 29163520 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:54.588700+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305594368 unmapped: 32415744 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:55.588882+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 33071104 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea7ee000/0x0/0x4ffc00000, data 0x41621ae/0x4300000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:56.589063+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 33071104 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:57.589229+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305201152 unmapped: 32808960 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497974 data_alloc: 218103808 data_used: 23076864
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea770000/0x0/0x4ffc00000, data 0x41e01ae/0x437e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:58.589370+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:59.589589+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:00.589780+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea770000/0x0/0x4ffc00000, data 0x41e01ae/0x437e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:01.590019+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:02.590126+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500086 data_alloc: 218103808 data_used: 23224320
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:03.590303+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.937407017s of 11.942021370s, submitted: 61
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:04.590419+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:05.590689+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:06.590924+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea74f000/0x0/0x4ffc00000, data 0x42011ae/0x439f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:07.591106+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea74f000/0x0/0x4ffc00000, data 0x42011ae/0x439f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500170 data_alloc: 218103808 data_used: 23232512
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:08.591249+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 305430528 unmapped: 32579584 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:09.591405+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306479104 unmapped: 31531008 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb8d000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddb8d000 session 0x55c4dbe98d20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4db001a40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dd81eb40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4dc1fb680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:10.591568+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddba6800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306495488 unmapped: 31514624 heap: 338010112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dd92d860
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:11.591748+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcec0400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dcec0400 session 0x55c4df73a780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4ddbc5e00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4ddb48000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4ddacf860
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea748000/0x0/0x4ffc00000, data 0x42071be/0x43a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:12.591909+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557977 data_alloc: 218103808 data_used: 23232512
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:13.592087+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea129000/0x0/0x4ffc00000, data 0x48261be/0x49c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:14.592243+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.991683960s of 11.550959587s, submitted: 29
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:15.592417+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddba6800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4db0621e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:16.592597+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc29400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc29400 session 0x55c4ddace960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea126000/0x0/0x4ffc00000, data 0x48291be/0x49c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:17.592767+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dd97fc20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559363 data_alloc: 218103808 data_used: 23232512
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc11ab40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:18.592953+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddba6800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea125000/0x0/0x4ffc00000, data 0x48291ce/0x49c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:19.593099+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307003392 unmapped: 35209216 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:20.593235+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:21.593363+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea125000/0x0/0x4ffc00000, data 0x48291ce/0x49c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:22.593527+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3604743 data_alloc: 218103808 data_used: 29429760
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:23.593701+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:24.593914+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:25.594107+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea125000/0x0/0x4ffc00000, data 0x48291ce/0x49c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22955 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:26.594290+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.017693520s of 12.032156944s, submitted: 4
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:27.594408+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3605423 data_alloc: 218103808 data_used: 29429760
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:28.594724+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:29.594924+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306552832 unmapped: 35659776 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e98e3000/0x0/0x4ffc00000, data 0x506b1ce/0x520b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:30.595086+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 310689792 unmapped: 31522816 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:31.595328+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 310689792 unmapped: 31522816 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:32.595506+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311140352 unmapped: 31072256 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e93d0000/0x0/0x4ffc00000, data 0x557e1ce/0x571e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3712157 data_alloc: 234881024 data_used: 31072256
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:33.595670+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311140352 unmapped: 31072256 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:34.595945+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311140352 unmapped: 31072256 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:35.596116+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311140352 unmapped: 31072256 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:36.596271+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311140352 unmapped: 31072256 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.915276527s of 10.309535980s, submitted: 113
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4df73ad20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dc11eb40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e93d0000/0x0/0x4ffc00000, data 0x557e1ce/0x571e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:37.596431+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311156736 unmapped: 31055872 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e93d0000/0x0/0x4ffc00000, data 0x557e1ce/0x571e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd228800 session 0x55c4dc42c960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513130 data_alloc: 218103808 data_used: 23293952
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:38.596623+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311156736 unmapped: 31055872 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:39.596775+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311156736 unmapped: 31055872 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea73a000/0x0/0x4ffc00000, data 0x42161ae/0x43b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:40.597154+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311156736 unmapped: 31055872 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddbc43c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4dc1d4780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:41.597326+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311173120 unmapped: 31039488 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:42.597778+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4ddbb81e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408058 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:43.598030+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:44.598505+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:45.599128+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:46.599290+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:47.599854+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408058 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:48.600022+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:49.600316+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:50.600549+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:51.600870+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:52.600997+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:53.601165+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408058 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:54.601474+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:55.601784+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:56.602046+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:57.602181+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:58.602430+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408058 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:59.602712+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307732480 unmapped: 34480128 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:00.602886+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4db581e00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dc236000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dc236000 session 0x55c4ddb49860
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dc1d4b40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4db5e9680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd4e0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.569824219s of 23.457214355s, submitted: 54
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307757056 unmapped: 34455552 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4ddacfc20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4df73be00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd228800 session 0x55c4db5685a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd228800 session 0x55c4dfbaad20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dd81e000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:01.603120+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 34439168 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eab4d000/0x0/0x4ffc00000, data 0x3e031ae/0x3fa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:02.603490+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 34439168 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:03.603743+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456531 data_alloc: 218103808 data_used: 20213760
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 34439168 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:04.603892+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 34439168 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:05.604092+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 34439168 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dd97f680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eab4d000/0x0/0x4ffc00000, data 0x3e031ae/0x3fa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:06.604227+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eab4d000/0x0/0x4ffc00000, data 0x3e031ae/0x3fa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 34439168 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd4e0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:07.604357+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 34439168 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:08.604533+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456823 data_alloc: 218103808 data_used: 20217856
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:09.604686+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:10.604898+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eab4d000/0x0/0x4ffc00000, data 0x3e031ae/0x3fa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:11.605096+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:12.605227+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:13.605499+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3487223 data_alloc: 218103808 data_used: 24543232
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:14.605692+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eab4d000/0x0/0x4ffc00000, data 0x3e031ae/0x3fa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:15.605882+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eab4d000/0x0/0x4ffc00000, data 0x3e031ae/0x3fa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:16.606053+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:17.606385+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307789824 unmapped: 34422784 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:18.606536+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.570680618s of 17.714221954s, submitted: 24
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3497427 data_alloc: 218103808 data_used: 24576000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 310853632 unmapped: 31358976 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:19.606661+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 310796288 unmapped: 31416320 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:20.606807+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea571000/0x0/0x4ffc00000, data 0x43d61ae/0x4574000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:21.607026+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea571000/0x0/0x4ffc00000, data 0x43d61ae/0x4574000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:22.607999+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:23.608163+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3547101 data_alloc: 218103808 data_used: 25014272
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:24.608327+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:25.608477+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea571000/0x0/0x4ffc00000, data 0x43d61ae/0x4574000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:26.608655+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:27.608791+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea577000/0x0/0x4ffc00000, data 0x43d91ae/0x4577000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:28.608949+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3542061 data_alloc: 218103808 data_used: 25014272
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:29.609115+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:30.609269+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:31.609464+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:32.609612+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:33.609753+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3542061 data_alloc: 218103808 data_used: 25014272
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea577000/0x0/0x4ffc00000, data 0x43d91ae/0x4577000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:34.609925+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 311861248 unmapped: 30351360 heap: 342212608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddba6800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4ddbb8960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddba3c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba3c00 session 0x55c4db04c780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4db5805a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc1b2000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.402957916s of 16.838378906s, submitted: 77
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea577000/0x0/0x4ffc00000, data 0x43d91ae/0x4577000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [1,0,1,7])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:35.611189+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd4154a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddba6800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dc11f680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4db37ac00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4db37ac00 session 0x55c4dbe994a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4df73a5a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4db04c000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 312197120 unmapped: 37896192 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:36.611346+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 312197120 unmapped: 37896192 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:37.611497+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 312197120 unmapped: 37896192 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:38.611682+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3631838 data_alloc: 218103808 data_used: 25014272
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 312197120 unmapped: 37896192 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:39.611958+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd228800 session 0x55c4ddacf2c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 312352768 unmapped: 37740544 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:40.612279+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e99b2000/0x0/0x4ffc00000, data 0x4f9c220/0x513c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 312352768 unmapped: 37740544 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddba6800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4df107400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:41.612466+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 312369152 unmapped: 37724160 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:42.612580+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314908672 unmapped: 35184640 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:43.612708+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3722111 data_alloc: 234881024 data_used: 37122048
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314908672 unmapped: 35184640 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:44.612890+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314925056 unmapped: 35168256 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e99b2000/0x0/0x4ffc00000, data 0x4f9c220/0x513c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:45.613110+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314925056 unmapped: 35168256 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:46.613281+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314925056 unmapped: 35168256 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:47.613437+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314925056 unmapped: 35168256 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:48.613605+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3722751 data_alloc: 234881024 data_used: 37183488
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314933248 unmapped: 35160064 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:49.613749+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.363443375s of 14.558417320s, submitted: 40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314933248 unmapped: 35160064 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:50.613886+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e99b1000/0x0/0x4ffc00000, data 0x4f9d220/0x513d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314933248 unmapped: 35160064 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:51.614060+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314933248 unmapped: 35160064 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e99b0000/0x0/0x4ffc00000, data 0x4f9d220/0x513d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,1])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:52.614222+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313647104 unmapped: 36446208 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:53.614414+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3749519 data_alloc: 234881024 data_used: 37748736
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 36413440 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:54.614566+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e96de000/0x0/0x4ffc00000, data 0x5270220/0x5410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313696256 unmapped: 36397056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:55.614791+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313696256 unmapped: 36397056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e96de000/0x0/0x4ffc00000, data 0x5270220/0x5410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:56.615030+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313704448 unmapped: 36388864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:57.615167+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313737216 unmapped: 36356096 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:58.615321+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3755591 data_alloc: 234881024 data_used: 38092800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313737216 unmapped: 36356096 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:59.615477+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4ddba6800 session 0x55c4dd97ef00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df107400 session 0x55c4dfbaaf00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4df107400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.041719437s of 10.220036507s, submitted: 58
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313737216 unmapped: 36356096 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:00.615607+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df107400 session 0x55c4dc1fa780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea175000/0x0/0x4ffc00000, data 0x43da1ae/0x4578000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306487296 unmapped: 43606016 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:01.615784+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306487296 unmapped: 43606016 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea175000/0x0/0x4ffc00000, data 0x43da1ae/0x4578000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:02.615936+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306487296 unmapped: 43606016 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:03.616099+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3551337 data_alloc: 218103808 data_used: 25075712
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306487296 unmapped: 43606016 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:04.616267+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4ddace780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4dd97fa40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dd17f680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:05.616414+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:06.616597+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:07.616758+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:08.616955+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3427429 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:09.617135+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:10.617297+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:11.617510+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:12.617654+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:13.617738+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3427429 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:14.617903+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:15.618058+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:16.618237+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:17.618430+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:18.618583+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3427429 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:19.618763+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:20.618889+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:21.619065+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:22.619201+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:23.619413+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3427429 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:24.619529+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc1b2b40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dd137680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dc11e780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 45277184 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd4e0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4daffde00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb085000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4df107400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:25.619662+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.166007996s of 25.358474731s, submitted: 57
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df107400 session 0x55c4df73b680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4dd40af00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddbb8d20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dd81f680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dd40ba40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 45129728 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:26.619787+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 45129728 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:27.619930+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 45129728 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:28.620097+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd4e0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4db5e8000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3470469 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 45129728 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4df107400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df107400 session 0x55c4df73bc20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:29.620281+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 45129728 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:30.620427+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4df107400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4df107400 session 0x55c4dd192780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4ddacf680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 45129728 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:31.620609+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd4e0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:32.620735+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:33.620874+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3509954 data_alloc: 218103808 data_used: 25616384
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:34.621007+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4ddbb8780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4dfbaad20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:35.621176+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.3 total, 600.0 interval
                                           Cumulative writes: 35K writes, 140K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.75 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2305 writes, 8726 keys, 2305 commit groups, 1.0 writes per commit group, ingest: 10.50 MB, 0.02 MB/s
                                           Interval WAL: 2305 writes, 962 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:36.621352+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets getting new tickets!
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:37.621538+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _finish_auth 0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:37.622474+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:38.621676+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3509822 data_alloc: 218103808 data_used: 25616384
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:39.621840+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.446069717s of 14.544829369s, submitted: 14
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:40.621956+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:41.622164+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:42.622348+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddaceb40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd228800 session 0x55c4df73be00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:43.622537+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3509822 data_alloc: 218103808 data_used: 25616384
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:44.622971+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:45.623124+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:46.623289+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:47.623587+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:48.623734+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3509954 data_alloc: 218103808 data_used: 25616384
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:49.623911+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:50.624149+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 45121536 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea71c000/0x0/0x4ffc00000, data 0x3e241ad/0x3fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:51.624361+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4dc1fb4a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.145725250s of 12.158161163s, submitted: 3
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4db5e9680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 304979968 unmapped: 45113344 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:52.624488+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc1b23c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:53.624678+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432245 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:54.624867+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:55.625139+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:56.625381+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:57.625589+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:58.625912+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432245 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:59.626111+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:00.626395+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:01.626725+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:02.626929+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:03.627200+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432245 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:04.627373+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:05.627649+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:06.627927+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:07.628694+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:08.628918+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432245 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:09.629138+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:10.629357+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:11.629653+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:12.629866+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:13.630037+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432245 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:14.630163+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 48685056 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:15.630309+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 48676864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:16.630519+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 48676864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:17.630673+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 48676864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:18.630872+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432245 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 48676864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:19.631018+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 48676864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:20.631174+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 48676864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:21.631391+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 48676864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:22.631550+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 48676864 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:23.631666+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432245 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:24.631720+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 48668672 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:25.631857+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 48668672 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:26.632013+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 48668672 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:27.632184+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 48668672 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:28.632396+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 48668672 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eac75000/0x0/0x4ffc00000, data 0x38cc19e/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432245 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:29.632551+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 48668672 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd4e0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4dc11a5a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd4e0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4dd1521e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dd81fa40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:30.632704+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301432832 unmapped: 48660480 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4ddbb8780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.439521790s of 38.507686615s, submitted: 19
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd192780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddbb8d20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4e8e19800 session 0x55c4dd40af00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4daffde00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc11e780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea6a3000/0x0/0x4ffc00000, data 0x3e9d1ae/0x403b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:31.632886+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301727744 unmapped: 48365568 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:32.633069+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301727744 unmapped: 48365568 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:33.633254+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301727744 unmapped: 48365568 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487397 data_alloc: 218103808 data_used: 20209664
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:34.633372+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301727744 unmapped: 48365568 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea6a3000/0x0/0x4ffc00000, data 0x3e9d1ae/0x403b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:35.633523+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301727744 unmapped: 48365568 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:36.633693+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301727744 unmapped: 48365568 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd17f680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:37.633827+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 48414720 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd4e0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4df107400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:38.633987+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 48414720 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea67f000/0x0/0x4ffc00000, data 0x3ec11ae/0x405f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494489 data_alloc: 218103808 data_used: 20856832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:39.634137+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 49405952 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:40.634309+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea67f000/0x0/0x4ffc00000, data 0x3ec11ae/0x405f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:41.634487+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:42.634642+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea67f000/0x0/0x4ffc00000, data 0x3ec11ae/0x405f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:43.634761+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea67f000/0x0/0x4ffc00000, data 0x3ec11ae/0x405f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3523289 data_alloc: 218103808 data_used: 24903680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:44.634916+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:45.635012+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:46.635166+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:47.635321+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:48.635502+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4ea67f000/0x0/0x4ffc00000, data 0x3ec11ae/0x405f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 48586752 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.172748566s of 18.245769501s, submitted: 20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3588867 data_alloc: 218103808 data_used: 24956928
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:49.635754+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306823168 unmapped: 43270144 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:50.635991+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 306823168 unmapped: 43270144 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e9ce4000/0x0/0x4ffc00000, data 0x48561ae/0x49f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:51.636300+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307101696 unmapped: 42991616 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:52.636456+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc10000 session 0x55c4db5814a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc10000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e9c47000/0x0/0x4ffc00000, data 0x48f01ae/0x4a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307101696 unmapped: 42991616 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:53.636729+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307101696 unmapped: 42991616 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3628643 data_alloc: 218103808 data_used: 27017216
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:54.636875+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307101696 unmapped: 42991616 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:55.637039+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307101696 unmapped: 42991616 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:56.637152+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307101696 unmapped: 42991616 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x49111ae/0x4aaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:57.637375+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307101696 unmapped: 42991616 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:58.637579+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 42975232 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3628379 data_alloc: 218103808 data_used: 27037696
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:59.637724+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 42975232 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 ms_handle_reset con 0x55c4dbc11400 session 0x55c4df73ba40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:00.637954+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307118080 unmapped: 42975232 heap: 350093312 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _renew_subs
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.606063843s of 11.941136360s, submitted: 149
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 289 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4df1d7860
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 289 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddaf72c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 289 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd40ab40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e0327800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:01.638213+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 321380352 unmapped: 37822464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 289 ms_handle_reset con 0x55c4e0327800 session 0x55c4dc11b4a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 289 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dc1b30e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _renew_subs
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e9c19000/0x0/0x4ffc00000, data 0x4923d8d/0x4ac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [1,0,0,1])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:02.638359+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 321421312 unmapped: 37781504 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _renew_subs
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 291 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dd193680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:03.638616+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 321445888 unmapped: 37756928 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 291 ms_handle_reset con 0x55c4dd228800 session 0x55c4ddbaba40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: get_auth_request con 0x55c4dab7bc00 auth_method 0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 291 ms_handle_reset con 0x55c4e8e19800 session 0x55c4db430d20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcf90800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 291 ms_handle_reset con 0x55c4dcf90800 session 0x55c4db47af00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3788274 data_alloc: 234881024 data_used: 37064704
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:04.638777+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 321454080 unmapped: 37748736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:05.638970+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 321454080 unmapped: 37748736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:06.639111+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 321454080 unmapped: 37748736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 291 heartbeat osd_stat(store_statfs(0x4e8e34000/0x0/0x4ffc00000, data 0x57054f7/0x58a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:07.639361+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 321454080 unmapped: 37748736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:08.639561+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 321454080 unmapped: 37748736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3788674 data_alloc: 234881024 data_used: 37064704
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:09.639721+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314769408 unmapped: 44433408 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: get_auth_request con 0x55c4da83b400 auth_method 0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dd92d2c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4ddace000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4dd228800 session 0x55c4dbe701e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:10.639944+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314769408 unmapped: 44433408 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4e8e19800 session 0x55c4df73b0e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb9e000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc29c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4dbc29c00 session 0x55c4db5e8780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4ddb9e000 session 0x55c4df1d7680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc29c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4dbc29c00 session 0x55c4dbe701e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4dbc11400 session 0x55c4db47af00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4dc1b30e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:11.640260+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314793984 unmapped: 44408832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:12.640445+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8e32000/0x0/0x4ffc00000, data 0x5706f6a/0x58ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314793984 unmapped: 44408832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:13.640599+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314793984 unmapped: 44408832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3774000 data_alloc: 234881024 data_used: 37068800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:14.640876+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314793984 unmapped: 44408832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8e32000/0x0/0x4ffc00000, data 0x5706f6a/0x58ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:15.641104+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314793984 unmapped: 44408832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.858778954s of 15.576435089s, submitted: 154
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:16.641270+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4dd228800 session 0x55c4ddaf72c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314810368 unmapped: 44392448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc29c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:17.641458+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314810368 unmapped: 44392448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8e32000/0x0/0x4ffc00000, data 0x5706f6a/0x58ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:18.641611+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 314834944 unmapped: 44367872 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3798493 data_alloc: 234881024 data_used: 40214528
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8e32000/0x0/0x4ffc00000, data 0x5706f6a/0x58ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:19.641905+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:20.642051+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8e32000/0x0/0x4ffc00000, data 0x5706f6a/0x58ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:21.642210+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8e32000/0x0/0x4ffc00000, data 0x5706f6a/0x58ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:22.642432+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:23.642589+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3808253 data_alloc: 234881024 data_used: 41611264
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:24.642788+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:25.642940+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:26.643094+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8e32000/0x0/0x4ffc00000, data 0x5706f6a/0x58ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8e32000/0x0/0x4ffc00000, data 0x5706f6a/0x58ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:27.643258+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316227584 unmapped: 42975232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:28.643396+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.218788147s of 12.240081787s, submitted: 6
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 315981824 unmapped: 43220992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3823569 data_alloc: 234881024 data_used: 42450944
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:29.643510+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 315981824 unmapped: 43220992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:30.643655+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:31.643889+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:32.644013+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8dec000/0x0/0x4ffc00000, data 0x574cf6a/0x58f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:33.644160+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3830161 data_alloc: 234881024 data_used: 43458560
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:34.644287+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:35.644414+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8dec000/0x0/0x4ffc00000, data 0x574cf6a/0x58f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:36.644545+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:37.644675+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:38.644885+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3830161 data_alloc: 234881024 data_used: 43458560
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:39.645139+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8dec000/0x0/0x4ffc00000, data 0x574cf6a/0x58f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:40.645307+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:41.645590+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:42.645862+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8dec000/0x0/0x4ffc00000, data 0x574cf6a/0x58f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:43.646066+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3830161 data_alloc: 234881024 data_used: 43458560
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:44.646259+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:45.646500+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:46.646716+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:47.646911+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 43048960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:48.647406+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 43040768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8dec000/0x0/0x4ffc00000, data 0x574cf6a/0x58f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3830161 data_alloc: 234881024 data_used: 43458560
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:49.647582+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 43040768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8dec000/0x0/0x4ffc00000, data 0x574cf6a/0x58f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:50.647739+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 43040768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:51.647882+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 43040768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:52.648025+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8dec000/0x0/0x4ffc00000, data 0x574cf6a/0x58f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 43040768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:53.648211+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 43040768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 heartbeat osd_stat(store_statfs(0x4e8dec000/0x0/0x4ffc00000, data 0x574cf6a/0x58f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4ddb8ec00 session 0x55c4db47a1e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3837521 data_alloc: 234881024 data_used: 44781568
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:54.648338+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316702720 unmapped: 42500096 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb8ec00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:55.648467+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 ms_handle_reset con 0x55c4ddb8ec00 session 0x55c4df1d72c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 316710912 unmapped: 42491904 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _renew_subs
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.459056854s of 27.480937958s, submitted: 5
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 293 ms_handle_reset con 0x55c4dd228800 session 0x55c4dbe710e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb9e000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 293 ms_handle_reset con 0x55c4e8e19800 session 0x55c4ddbbf4a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 293 ms_handle_reset con 0x55c4ddb9e000 session 0x55c4ddaf6f00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dcf91c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:56.648590+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 335519744 unmapped: 23683072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 293 ms_handle_reset con 0x55c4dcf91c00 session 0x55c4daffcb40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 293 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd137860
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:57.648738+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 324026368 unmapped: 35176448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:58.648916+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 293 handle_osd_map epochs [294,295], i have 293, src has [1,295]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 324042752 unmapped: 35160064 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091715 data_alloc: 251658240 data_used: 51884032
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb8ec00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:59.649031+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 295 heartbeat osd_stat(store_statfs(0x4e71e1000/0x0/0x4ffc00000, data 0x7352251/0x74fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 327540736 unmapped: 31662080 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:00.649166+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 296 ms_handle_reset con 0x55c4ddb8ec00 session 0x55c4dd2d6b40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 327606272 unmapped: 31596544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:01.649370+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 327606272 unmapped: 31596544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:02.649505+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 327639040 unmapped: 31563776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 296 heartbeat osd_stat(store_statfs(0x4e8bb3000/0x0/0x4ffc00000, data 0x597fe3e/0x5b2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 296 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dfbaa780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 296 ms_handle_reset con 0x55c4dbc29c00 session 0x55c4dd2e25a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:03.656929+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb9e000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 327639040 unmapped: 31563776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3875547 data_alloc: 251658240 data_used: 51609600
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:04.657016+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 297 ms_handle_reset con 0x55c4ddb9e000 session 0x55c4dd40a780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 325410816 unmapped: 33792000 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:05.657191+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 325410816 unmapped: 33792000 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 297 heartbeat osd_stat(store_statfs(0x4e8e23000/0x0/0x4ffc00000, data 0x570f8c9/0x58ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:06.657341+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 325410816 unmapped: 33792000 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.498741150s of 11.202224731s, submitted: 88
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:07.657499+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _renew_subs
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 298 heartbeat osd_stat(store_statfs(0x4e8e23000/0x0/0x4ffc00000, data 0x570f8c9/0x58ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 325484544 unmapped: 33718272 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 298 ms_handle_reset con 0x55c4dbc11400 session 0x55c4db430000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:08.657671+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 322666496 unmapped: 36536320 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3699447 data_alloc: 234881024 data_used: 37097472
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:09.657877+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 298 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4dd97fa40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 298 ms_handle_reset con 0x55c4df107400 session 0x55c4ddb49860
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 322666496 unmapped: 36536320 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc29c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:10.658024+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 298 ms_handle_reset con 0x55c4dbc29c00 session 0x55c4ddbbef00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eac33000/0x0/0x4ffc00000, data 0x3901428/0x3aab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:11.658231+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:12.658384+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:13.658534+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:14.658686+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3485595 data_alloc: 218103808 data_used: 20008960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:15.658860+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:16.659069+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eac57000/0x0/0x4ffc00000, data 0x38dd428/0x3a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 298 handle_osd_map epochs [299,299], i have 299, src has [1,299]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:17.659237+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:18.659393+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:19.659581+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489769 data_alloc: 218103808 data_used: 20017152
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 313188352 unmapped: 46014464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:20.659799+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:21.660071+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:22.660244+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:23.660463+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:24.660622+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489769 data_alloc: 218103808 data_used: 20017152
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:25.660778+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:26.660966+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:27.661118+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:28.661268+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:29.661401+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489769 data_alloc: 218103808 data_used: 20017152
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:30.661604+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:31.661803+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:32.661992+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 309018624 unmapped: 50184192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dd228800 session 0x55c4ddbbeb40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dd228800 session 0x55c4db568b40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dbc11400 session 0x55c4db568000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:33.662176+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc29c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dbc29c00 session 0x55c4ddbc4d20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd4e0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.086532593s of 26.284181595s, submitted: 72
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4ddbc5860
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4df107400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4df107400 session 0x55c4dc1e0780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4df107400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 51806208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4df107400 session 0x55c4dc1e1a40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dc1d4960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc29c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dbc29c00 session 0x55c4dd17f0e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:34.662308+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533337 data_alloc: 218103808 data_used: 20017152
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd17e5a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 51806208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd4e0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4ddb49a40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:35.662927+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd4e0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4ddb490e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 51806208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dbc11400 session 0x55c4dc11ba40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:36.663014+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc29c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 51806208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ea652000/0x0/0x4ffc00000, data 0x3edeebe/0x408c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:37.663132+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 51806208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:38.663256+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:39.663395+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3579470 data_alloc: 218103808 data_used: 26312704
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ea652000/0x0/0x4ffc00000, data 0x3edeebe/0x408c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:40.663571+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dbc29c00 session 0x55c4dd40a960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dd228800 session 0x55c4db580960
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4df107400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:41.663811+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4df107400 session 0x55c4ddbbef00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:42.664025+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:43.664156+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:44.664273+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:45.664451+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:46.664577+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:47.664793+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:48.664976+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:49.665137+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:50.665287+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:51.665512+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:52.665685+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:53.665901+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:54.666067+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:55.666224+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:56.666349+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:57.666513+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:58.666667+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:59.666852+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:00.667025+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:01.667206+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:02.667402+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:03.667535+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:04.667689+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:05.667865+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:06.668033+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:07.668202+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:08.668355+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:09.668508+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:10.668672+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:11.668912+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:12.669075+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:13.669246+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:14.669413+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4e6920c00 session 0x55c4df73b860
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4df107400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4db02ec00 session 0x55c4ddacf0e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4dbc11000 session 0x55c4df1d61e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc29c00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:15.669648+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:16.670013+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:17.670214+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:18.670399+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:19.670642+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:20.670907+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 ms_handle_reset con 0x55c4ddb8b000 session 0x55c4dd97f2c0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbc11000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:21.671158+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:22.671333+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:23.671551+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:24.671749+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:25.672065+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:26.672280+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:27.672470+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:28.672699+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:29.673117+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:30.673454+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:31.673663+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:32.674109+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:33.674469+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:34.674737+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:35.674882+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:36.675306+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:37.675879+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:38.676057+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:39.676290+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:40.676550+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:41.676797+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:42.677152+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eac53000/0x0/0x4ffc00000, data 0x38dee8b/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:43.677390+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:44.677537+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492936 data_alloc: 218103808 data_used: 20017152
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:45.677760+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:46.677875+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308183040 unmapped: 51019776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 73.629302979s of 73.781417847s, submitted: 21
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:47.678041+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _renew_subs
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 300 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd81ed20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eac51000/0x0/0x4ffc00000, data 0x38e093b/0x3a8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:48.678235+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:49.678457+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494993 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:50.678615+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:51.678875+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 300 heartbeat osd_stat(store_statfs(0x4eac52000/0x0/0x4ffc00000, data 0x38e0918/0x3a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:52.679031+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:53.679180+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:54.679333+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3494993 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _renew_subs
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:55.679451+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:56.679616+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e237b/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:57.679730+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.939418793s of 11.038199425s, submitted: 45
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:58.679882+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:59.680025+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498043 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308191232 unmapped: 51011584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:00.680192+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308215808 unmapped: 50987008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:01.680586+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308215808 unmapped: 50987008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:02.680919+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308215808 unmapped: 50987008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:03.681097+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308215808 unmapped: 50987008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:04.681394+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497163 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308215808 unmapped: 50987008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:05.681565+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308215808 unmapped: 50987008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:06.681921+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308224000 unmapped: 50978816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:07.682097+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308224000 unmapped: 50978816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:08.683022+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308224000 unmapped: 50978816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:09.683176+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497163 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308224000 unmapped: 50978816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:10.683494+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308224000 unmapped: 50978816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:11.683697+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308232192 unmapped: 50970624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:12.683873+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308232192 unmapped: 50970624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:13.684015+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308232192 unmapped: 50970624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:14.684204+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497163 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308232192 unmapped: 50970624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:15.684337+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308232192 unmapped: 50970624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:16.684484+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308240384 unmapped: 50962432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:17.684637+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:18.684792+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308240384 unmapped: 50962432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:19.684978+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308240384 unmapped: 50962432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497163 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:20.685150+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308240384 unmapped: 50962432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:21.685435+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308240384 unmapped: 50962432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:22.685598+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308240384 unmapped: 50962432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:23.685796+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308240384 unmapped: 50962432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:24.685990+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308240384 unmapped: 50962432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497163 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:25.686174+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308248576 unmapped: 50954240 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.330192566s of 27.339185715s, submitted: 2
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:26.686331+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308256768 unmapped: 50946048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:27.686535+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308256768 unmapped: 50946048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:28.686723+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308256768 unmapped: 50946048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:29.686943+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308256768 unmapped: 50946048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497163 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:30.687129+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308297728 unmapped: 50905088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:31.687307+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308297728 unmapped: 50905088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:32.687494+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308297728 unmapped: 50905088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:33.687629+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308305920 unmapped: 50896896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:34.687799+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308305920 unmapped: 50896896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:35.688054+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308305920 unmapped: 50896896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:36.688242+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308305920 unmapped: 50896896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:37.688469+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308305920 unmapped: 50896896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:38.688686+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308305920 unmapped: 50896896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:39.688936+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308305920 unmapped: 50896896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:40.689130+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 50880512 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:41.689348+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 50880512 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:42.689514+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 50880512 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:43.689656+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308322304 unmapped: 50880512 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:44.689880+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 50872320 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:45.690181+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 50864128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:46.690408+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 50864128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:47.690597+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 50864128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:48.690775+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 50864128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:49.690999+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 50864128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:50.691181+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 50864128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:51.691432+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 50864128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:52.691642+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308346880 unmapped: 50855936 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:53.691810+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308346880 unmapped: 50855936 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:54.692054+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308346880 unmapped: 50855936 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:55.692212+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308346880 unmapped: 50855936 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:56.692403+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308346880 unmapped: 50855936 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:57.692607+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308355072 unmapped: 50847744 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:58.692891+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308355072 unmapped: 50847744 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:59.693165+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308355072 unmapped: 50847744 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:00.693408+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308355072 unmapped: 50847744 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:01.693698+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:02.693939+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:03.694117+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:04.694294+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:05.694481+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:06.694687+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:07.694881+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:08.695057+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:09.695205+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:10.695392+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:11.695666+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308363264 unmapped: 50839552 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:12.695880+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308371456 unmapped: 50831360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:13.696059+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308371456 unmapped: 50831360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:14.717803+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308371456 unmapped: 50831360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:15.718001+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308371456 unmapped: 50831360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:16.718144+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308371456 unmapped: 50831360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:17.718370+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308371456 unmapped: 50831360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:18.718524+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308379648 unmapped: 50823168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:19.718682+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308379648 unmapped: 50823168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:20.718840+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308379648 unmapped: 50823168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:21.719006+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308379648 unmapped: 50823168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:22.719116+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308379648 unmapped: 50823168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:23.719317+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308379648 unmapped: 50823168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:24.719491+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:25.719669+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:26.719905+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:27.720104+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:28.720338+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:29.720525+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:30.720716+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:31.720945+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:32.721116+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:33.721283+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:34.721468+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:35.721638+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:36.721769+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308404224 unmapped: 50798592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:37.721925+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308404224 unmapped: 50798592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:38.722081+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308404224 unmapped: 50798592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:39.722242+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:40.722389+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:41.722585+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:42.722767+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:43.722973+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308387840 unmapped: 50814976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:44.723124+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:45.723258+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:46.723464+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:47.723621+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308396032 unmapped: 50806784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:48.723784+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308404224 unmapped: 50798592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:49.724030+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308404224 unmapped: 50798592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497339 data_alloc: 218103808 data_used: 20025344
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:50.724199+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308404224 unmapped: 50798592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:51.724383+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd4e0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 85.935012817s of 85.945709229s, submitted: 2
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4ddbc4d20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb8ec00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 308404224 unmapped: 50798592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 ms_handle_reset con 0x55c4ddb8ec00 session 0x55c4dc1b25a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:52.724481+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 52011008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:53.724607+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 52011008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:54.725019+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 52011008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3496987 data_alloc: 234881024 data_used: 20811776
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:55.725177+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 52011008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:56.725386+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307191808 unmapped: 52011008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:57.725575+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 52002816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:58.725726+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 52002816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:59.725900+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 52002816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3496987 data_alloc: 234881024 data_used: 20811776
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:00.726018+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 52002816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb86000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 heartbeat osd_stat(store_statfs(0x4eac50000/0x0/0x4ffc00000, data 0x38e337b/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:01.726242+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 307200000 unmapped: 52002816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _renew_subs
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.209140778s of 10.256829262s, submitted: 13
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 302 ms_handle_reset con 0x55c4ddb86000 session 0x55c4dd2e2b40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:02.726389+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 302 heartbeat osd_stat(store_statfs(0x4eac51000/0x0/0x4ffc00000, data 0x38e3358/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 58499072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:03.726584+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 58499072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:04.726751+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 58499072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e36b6000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 302 heartbeat osd_stat(store_statfs(0x4eb8bd000/0x0/0x4ffc00000, data 0x2c74f06/0x2e1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3383071 data_alloc: 218103808 data_used: 9347072
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:05.726930+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 58499072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 303 heartbeat osd_stat(store_statfs(0x4eb8bf000/0x0/0x4ffc00000, data 0x2c74f06/0x2e1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 303 ms_handle_reset con 0x55c4e36b6000 session 0x55c4ddbb90e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:06.727127+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:07.727277+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 303 heartbeat osd_stat(store_statfs(0x4ec0bc000/0x0/0x4ffc00000, data 0x2476aa4/0x2620000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:08.727529+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:09.727736+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3314330 data_alloc: 218103808 data_used: 2863104
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:10.727931+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:11.728115+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 304 heartbeat osd_stat(store_statfs(0x4ec0ba000/0x0/0x4ffc00000, data 0x2478523/0x2623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:12.728266+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:13.728414+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:14.728593+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.375181198s of 12.629664421s, submitted: 88
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3317304 data_alloc: 218103808 data_used: 2863104
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:15.728775+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:16.728948+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec0b7000/0x0/0x4ffc00000, data 0x2479f86/0x2626000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:17.729198+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec0b7000/0x0/0x4ffc00000, data 0x2479f86/0x2626000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:18.729408+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:19.729575+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec0b7000/0x0/0x4ffc00000, data 0x2479f86/0x2626000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3317304 data_alloc: 218103808 data_used: 2863104
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:20.729774+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec0b7000/0x0/0x4ffc00000, data 0x2479f86/0x2626000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 63348736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 305 heartbeat osd_stat(store_statfs(0x4ec0b7000/0x0/0x4ffc00000, data 0x2479f86/0x2626000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:21.730047+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd192b40
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:22.730208+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:23.730358+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:24.730512+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:25.730666+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:26.730865+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:27.731026+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:28.731176+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:29.731377+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:30.731633+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:31.731840+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:32.731991+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:33.732142+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:34.732292+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:35.732516+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:36.732730+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:37.732877+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:38.733043+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:39.733283+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:40.733459+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:41.733678+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:42.733854+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:43.734019+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:44.734191+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:45.734482+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:46.734746+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:47.734905+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:48.735165+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:49.735312+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295886848 unmapped: 63315968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:50.735527+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:51.735749+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:52.735886+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:53.736088+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:54.736259+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:55.736475+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:56.736676+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:57.736846+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:58.737087+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 63307776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:59.737272+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:00.737456+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:01.737639+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:02.737877+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:03.738055+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:04.738214+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:05.738402+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:06.738547+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:07.738746+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:08.738961+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:09.739156+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:10.739313+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:11.739514+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:12.739671+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:13.739876+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:14.740080+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:15.740267+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:16.740478+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:17.740930+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:18.741099+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:19.741215+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:20.741554+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:21.741751+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:22.741946+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:23.742106+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:24.742271+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:25.742637+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:26.742906+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295903232 unmapped: 63299584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:27.743162+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 63291392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:28.743347+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 63291392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:29.743669+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 63291392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:30.743947+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 63291392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:31.744136+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 63291392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:32.744867+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 63291392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:33.745016+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 63291392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:34.745173+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295911424 unmapped: 63291392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:35.745319+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 63283200 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:36.745467+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 63283200 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:37.745624+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 63283200 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:38.745883+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 63283200 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:39.746171+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 63275008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:40.746315+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 63275008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:41.746523+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 63275008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:42.746654+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 63275008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:43.746925+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 63275008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:44.747117+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:45.747273+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:46.747436+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:47.747599+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:48.747750+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:49.747901+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:50.748061+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:51.748316+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:52.748519+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:53.748749+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:54.748919+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 63266816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:55.749143+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 63258624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:56.749349+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 63258624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:57.749572+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 63258624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:58.749792+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 63258624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:59.750090+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 63258624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:00.750433+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 63258624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:01.750786+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 63258624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:02.750897+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 63258624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:03.751167+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 63250432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:04.751528+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 63250432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:05.751919+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324291 data_alloc: 218103808 data_used: 2871296
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 63250432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:06.752088+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd4e0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 111.956047058s of 112.013458252s, submitted: 25
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ec0b2000/0x0/0x4ffc00000, data 0x247bb49/0x262b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 63234048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:07.752329+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 307 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4dd92cd20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 63225856 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb86000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:08.752489+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb0b1000/0x0/0x4ffc00000, data 0x347bb7c/0x362d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 307 heartbeat osd_stat(store_statfs(0x4eb0ac000/0x0/0x4ffc00000, data 0x347d709/0x3631000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 63217664 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:09.752660+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _renew_subs
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 308 ms_handle_reset con 0x55c4ddb86000 session 0x55c4dd92d0e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:10.752911+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296001536 unmapped: 63201280 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532041 data_alloc: 218103808 data_used: 2879488
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:11.753168+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 63193088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:12.753357+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 63193088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:13.753595+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 63193088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 308 heartbeat osd_stat(store_statfs(0x4ea439000/0x0/0x4ffc00000, data 0x40ef286/0x42a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:14.753766+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 63193088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:15.754056+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 63193088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532041 data_alloc: 218103808 data_used: 2879488
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:16.754225+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 63184896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:17.754483+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 63184896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 308 heartbeat osd_stat(store_statfs(0x4ea439000/0x0/0x4ffc00000, data 0x40ef286/0x42a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:18.754711+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 63184896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 308 heartbeat osd_stat(store_statfs(0x4ea439000/0x0/0x4ffc00000, data 0x40ef286/0x42a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:19.754963+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 308 heartbeat osd_stat(store_statfs(0x4ea439000/0x0/0x4ffc00000, data 0x40ef286/0x42a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:20.755217+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532041 data_alloc: 218103808 data_used: 2879488
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:21.755462+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 308 heartbeat osd_stat(store_statfs(0x4ea439000/0x0/0x4ffc00000, data 0x40ef286/0x42a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:22.755667+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:23.755885+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:24.756103+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:25.756282+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532041 data_alloc: 218103808 data_used: 2879488
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 308 heartbeat osd_stat(store_statfs(0x4ea439000/0x0/0x4ffc00000, data 0x40ef286/0x42a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:26.756532+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:27.756770+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:28.756931+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 63176704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:29.757114+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 63168512 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:30.757252+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 308 heartbeat osd_stat(store_statfs(0x4ea439000/0x0/0x4ffc00000, data 0x40ef286/0x42a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 63168512 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532041 data_alloc: 218103808 data_used: 2879488
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:31.757488+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 63168512 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:32.757659+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 63168512 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:33.757875+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 63160320 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:34.758089+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 63160320 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 308 heartbeat osd_stat(store_statfs(0x4ea439000/0x0/0x4ffc00000, data 0x40ef286/0x42a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:35.758263+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 63160320 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532041 data_alloc: 218103808 data_used: 2879488
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:36.758422+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 63152128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:37.758556+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 63152128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:38.758690+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 63152128 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb8ec00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.948614120s of 32.148563385s, submitted: 31
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _renew_subs
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 309 ms_handle_reset con 0x55c4ddb8ec00 session 0x55c4dd40a780
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:39.758897+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:40.759106+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3534101 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 309 heartbeat osd_stat(store_statfs(0x4ea437000/0x0/0x4ffc00000, data 0x40f0e11/0x42a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:41.759333+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297123840 unmapped: 62078976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:42.759496+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297123840 unmapped: 62078976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:43.759662+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297123840 unmapped: 62078976 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:44.759858+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:45.760031+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:46.760167+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:47.760335+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:48.760511+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:49.760681+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:50.760877+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:51.761080+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:52.761254+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:53.761390+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:54.761559+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:55.761786+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:56.761980+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:57.762134+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:58.762339+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:59.762595+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:00.762788+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 62029824 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:01.763021+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:02.763158+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:03.763311+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:04.763445+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:05.763586+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:06.763752+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:07.763956+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:08.764122+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:09.764254+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:10.764507+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:11.764884+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:12.765068+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:13.765233+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:14.765484+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:15.765723+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:16.765899+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296435712 unmapped: 62767104 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:17.766113+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 62758912 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:18.766309+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 62758912 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:19.766496+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296443904 unmapped: 62758912 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:20.766719+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 62750720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:21.767055+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 62750720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:22.767300+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 62750720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:23.767477+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 62750720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:24.767624+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 62750720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:25.767866+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296452096 unmapped: 62750720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:26.768051+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 62742528 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:27.768212+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 62742528 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:28.768401+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296460288 unmapped: 62742528 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:29.768615+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 62734336 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:30.768874+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 62734336 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:31.769146+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 62734336 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:32.769339+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 62734336 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:33.769483+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 62734336 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:34.769654+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 62726144 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:35.769807+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 62726144 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.3 total, 600.0 interval
                                           Cumulative writes: 36K writes, 145K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 36K writes, 13K syncs, 2.73 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1359 writes, 4745 keys, 1359 commit groups, 1.0 writes per commit group, ingest: 4.17 MB, 0.01 MB/s
                                           Interval WAL: 1359 writes, 574 syncs, 2.37 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.048       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.062       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.062       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.062       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a77090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a77090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.08              0.00         1    0.082       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a77090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.3 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4d9a771f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:36.769967+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 62726144 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:37.770102+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 62726144 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:38.770253+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 62726144 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:39.770404+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 62726144 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:40.770545+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 62717952 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:41.770683+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 62717952 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:42.770811+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 62717952 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:43.770989+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296484864 unmapped: 62717952 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:44.771129+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 62709760 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:45.771328+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 62709760 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:46.771499+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 62709760 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:47.771692+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 62709760 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:48.771892+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 62709760 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:49.772109+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 62709760 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:50.772392+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 62709760 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:51.772659+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 62709760 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:52.772919+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 62701568 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:53.773156+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 62701568 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:54.773417+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 62701568 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:55.773645+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 62693376 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:56.773948+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 62693376 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:57.774122+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296517632 unmapped: 62685184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:58.774339+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296517632 unmapped: 62685184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:59.774561+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296517632 unmapped: 62685184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:00.774809+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296517632 unmapped: 62685184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:01.775146+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296517632 unmapped: 62685184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:02.775361+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296517632 unmapped: 62685184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:03.775619+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296517632 unmapped: 62685184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:04.775893+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 62676992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:05.776094+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 62676992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:06.776341+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 62676992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:07.776635+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296525824 unmapped: 62676992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:08.776905+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 62668800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:09.777091+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 62668800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:10.777300+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 62668800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:11.777557+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 62668800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:12.777869+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 62668800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:13.778121+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296534016 unmapped: 62668800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:14.778350+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296542208 unmapped: 62660608 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:15.778539+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296542208 unmapped: 62660608 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:16.778779+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 62652416 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:17.779058+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 62652416 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:18.779304+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296550400 unmapped: 62652416 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:19.779537+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 62644224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:20.779778+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 62644224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:21.780092+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 62644224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:22.780333+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 62644224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:23.780607+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296558592 unmapped: 62644224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:24.780952+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:25.781207+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:26.781494+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:27.781713+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:28.781988+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:29.782292+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:30.782515+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:31.782748+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:32.783059+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:33.783361+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:34.783619+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296566784 unmapped: 62636032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:35.783884+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 62627840 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:36.784143+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 62627840 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:37.784940+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 62619648 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:38.785091+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 62619648 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:39.785235+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 62619648 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:40.795793+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 62611456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:41.796040+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 62611456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:42.796212+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 62611456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:43.796389+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 62611456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:44.796577+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 62611456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:45.796713+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 62611456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:46.796902+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 62611456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:47.797060+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296591360 unmapped: 62611456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:48.797192+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 62595072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:49.797344+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 62595072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:50.797506+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 62595072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:51.797753+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 62595072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:52.797931+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 62595072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:53.798105+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296607744 unmapped: 62595072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:54.798241+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 62586880 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:55.798427+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 62586880 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536883 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:56.798604+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 62586880 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:57.798870+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 62586880 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:58.799057+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 62586880 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:59.799230+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 62586880 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea435000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:00.799581+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 142.185562134s of 142.275115967s, submitted: 42
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 62586880 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:01.799758+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 62554112 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:02.799905+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:03.800030+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:04.800155+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:05.800328+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:06.800494+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:07.800699+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:08.800903+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:09.801053+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:10.801234+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:11.801422+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:12.801596+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:13.801720+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:14.801877+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296665088 unmapped: 62537728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:15.802041+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 62529536 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:16.802220+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 62529536 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:17.802434+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 62529536 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:18.802578+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 62529536 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:19.802758+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296673280 unmapped: 62529536 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:20.802933+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296681472 unmapped: 62521344 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:21.803140+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296681472 unmapped: 62521344 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:22.803368+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296681472 unmapped: 62521344 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:23.803516+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296681472 unmapped: 62521344 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:24.803696+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296681472 unmapped: 62521344 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:25.803958+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296689664 unmapped: 62513152 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:26.804167+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296689664 unmapped: 62513152 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:27.804322+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296689664 unmapped: 62513152 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:28.804545+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 62504960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:29.804804+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 62504960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:30.805072+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 62504960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:31.805377+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 62504960 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:32.805754+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 62496768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:33.805991+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 62496768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:34.806212+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 62496768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:35.806450+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296706048 unmapped: 62496768 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:36.806737+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 62488576 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:37.806964+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 62488576 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:38.807175+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 62488576 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:39.807361+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 62488576 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:40.807663+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 62488576 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:41.807924+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 62488576 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:42.808195+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 62488576 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:43.808462+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296714240 unmapped: 62488576 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:44.808716+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 62472192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:45.809000+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 62472192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:46.809310+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 62472192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:47.809587+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 62472192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:48.809936+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 62472192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:49.810233+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 62472192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:50.810445+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 62472192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:51.810734+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296730624 unmapped: 62472192 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:52.811013+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 62464000 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:53.811219+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 62464000 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:54.811394+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 62464000 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:55.811613+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 62464000 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:56.811800+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296738816 unmapped: 62464000 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:57.811995+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296747008 unmapped: 62455808 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:58.812130+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296747008 unmapped: 62455808 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:59.812282+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296747008 unmapped: 62455808 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:00.812436+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296747008 unmapped: 62455808 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:01.812644+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 62447616 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:02.812809+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 62447616 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:03.813010+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 62447616 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:04.813178+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 62447616 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:05.813330+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 62447616 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:06.813517+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 62447616 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:07.813675+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296755200 unmapped: 62447616 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:08.813925+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 62439424 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:09.814057+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 62439424 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:10.814278+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 62439424 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:11.814591+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 62439424 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:12.814788+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 62439424 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:13.814880+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296763392 unmapped: 62439424 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:14.814957+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 62431232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:15.815161+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 62431232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:16.815275+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 62431232 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:17.815457+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 62423040 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:18.815714+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 62423040 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:19.815944+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 62423040 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:20.816123+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 62423040 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:21.816279+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 62423040 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:22.816447+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 62423040 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:23.816593+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296779776 unmapped: 62423040 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:24.816742+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 62414848 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:25.816885+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 62414848 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:26.817061+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 62414848 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:27.817241+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 62414848 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:28.817466+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 62414848 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:29.817621+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 62414848 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:30.817810+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296787968 unmapped: 62414848 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:31.818037+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296796160 unmapped: 62406656 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:32.818212+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296804352 unmapped: 62398464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:33.818354+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296804352 unmapped: 62398464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:34.818502+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296804352 unmapped: 62398464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:35.818634+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296804352 unmapped: 62398464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:36.819286+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296804352 unmapped: 62398464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:37.819503+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296804352 unmapped: 62398464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:38.819717+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296804352 unmapped: 62398464 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:39.820018+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 62390272 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:40.820415+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 62390272 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:41.820990+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 62390272 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:42.821716+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 62390272 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:43.821894+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296812544 unmapped: 62390272 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:44.822088+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296820736 unmapped: 62382080 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:45.822416+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296820736 unmapped: 62382080 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:46.822641+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296820736 unmapped: 62382080 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:47.823001+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296820736 unmapped: 62382080 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:48.823462+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296828928 unmapped: 62373888 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:49.823685+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296828928 unmapped: 62373888 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:50.823984+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:51.824586+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296828928 unmapped: 62373888 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:52.824863+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296828928 unmapped: 62373888 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:53.825298+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296837120 unmapped: 62365696 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:54.825464+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296837120 unmapped: 62365696 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:55.825629+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296837120 unmapped: 62365696 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:56.825883+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296845312 unmapped: 62357504 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:57.826042+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:58.826191+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:59.826350+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:00.826573+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:01.826764+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:02.827000+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:03.827166+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:04.827338+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:05.827525+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296861696 unmapped: 62341120 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:06.827673+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296861696 unmapped: 62341120 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:07.827933+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296861696 unmapped: 62341120 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:08.828098+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296861696 unmapped: 62341120 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:09.828254+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296861696 unmapped: 62341120 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:10.828366+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296869888 unmapped: 62332928 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:11.828523+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296869888 unmapped: 62332928 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:12.828667+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296869888 unmapped: 62332928 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:13.828780+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296878080 unmapped: 62324736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:14.828927+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296878080 unmapped: 62324736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:15.829082+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296878080 unmapped: 62324736 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:16.829257+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 62316544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:17.829422+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 62316544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:18.829533+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 62316544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:19.829643+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 62316544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:20.829894+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 62316544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:21.830158+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 62316544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:22.830364+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 62316544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:23.830593+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296886272 unmapped: 62316544 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:24.830799+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296894464 unmapped: 62308352 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:25.831050+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296894464 unmapped: 62308352 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:26.831279+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296894464 unmapped: 62308352 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:27.831500+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296894464 unmapped: 62308352 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:28.831720+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296894464 unmapped: 62308352 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:29.831896+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296910848 unmapped: 62291968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:30.832047+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296910848 unmapped: 62291968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:31.832225+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296910848 unmapped: 62291968 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:32.832349+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 62283776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:33.832519+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 62283776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:34.832672+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 62283776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:35.832857+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 62283776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:36.833026+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 62283776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:37.833207+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 62283776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:38.833340+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 62283776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:39.833593+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296919040 unmapped: 62283776 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:40.833781+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 62275584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:41.833997+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 62275584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:42.834213+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 62275584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:43.834357+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296927232 unmapped: 62275584 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:44.834548+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 62267392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:45.834701+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 62267392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:46.834894+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 62267392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:47.835092+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 62267392 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:48.835559+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 62259200 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:49.835797+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 62259200 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:50.836018+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 62259200 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:51.836271+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296943616 unmapped: 62259200 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:52.836528+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 62251008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:53.836683+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 62251008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:54.836859+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 62251008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:55.836995+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296951808 unmapped: 62251008 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:56.837236+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 62242816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:57.837449+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 62242816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:58.837629+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 62242816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:59.837762+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296960000 unmapped: 62242816 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:00.838006+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296968192 unmapped: 62234624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:01.838425+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296968192 unmapped: 62234624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:02.838596+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296968192 unmapped: 62234624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:03.838807+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296968192 unmapped: 62234624 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:04.839015+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 62226432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:05.839237+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 62226432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:06.839393+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 62226432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:07.839685+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296976384 unmapped: 62226432 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:08.839904+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 62218240 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:09.840134+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 62218240 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:10.840292+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 62218240 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:11.840595+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296984576 unmapped: 62218240 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:12.840749+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 62210048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:13.840859+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 62210048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:14.840991+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 62210048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:15.841138+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 62210048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:16.841282+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 62210048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:17.841423+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 62210048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:18.841597+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 62210048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:19.841777+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296992768 unmapped: 62210048 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:20.841914+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 62193664 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:21.842132+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 62193664 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:22.842287+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 62193664 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:23.842437+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 62193664 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:24.842636+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 62185472 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:25.842879+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 62185472 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:26.843075+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 62185472 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:27.843244+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 62185472 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:28.843399+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 62185472 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:29.843568+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297017344 unmapped: 62185472 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:30.843725+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 62177280 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:31.843933+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 62177280 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:32.844089+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 62177280 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:33.844244+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 62177280 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:34.844425+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 62177280 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:35.844604+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 62177280 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:36.844851+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297033728 unmapped: 62169088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:37.844996+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297033728 unmapped: 62169088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:38.845135+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297033728 unmapped: 62169088 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:39.845324+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 62160896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:40.845504+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 62160896 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:41.845722+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297050112 unmapped: 62152704 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: mgrc ms_handle_reset ms_handle_reset con 0x55c4ddb9fc00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2898047278
Oct 11 10:01:03 compute-0 ceph-osd[90364]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2898047278,v1:192.168.122.100:6801/2898047278]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: get_auth_request con 0x55c4e36b6000 auth_method 0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: mgrc handle_mgr_configure stats_period=5
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:42.845877+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 62095360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:43.846027+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 62095360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:44.846235+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 62095360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:45.846771+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 62095360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:46.847094+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 62095360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:47.847444+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 62095360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:48.847631+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:49.847978+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:50.848081+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:51.848286+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:52.848634+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:53.848865+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:54.849036+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:55.849211+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297115648 unmapped: 62087168 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:56.849420+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 62070784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:57.849618+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 62070784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:58.849861+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 62070784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:59.850030+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 62070784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:00.850248+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 62070784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:01.850485+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 62070784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:02.850674+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297132032 unmapped: 62070784 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:03.850943+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297140224 unmapped: 62062592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:04.851064+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297140224 unmapped: 62062592 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:05.851221+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:06.851389+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:07.851553+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:08.851720+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:09.851877+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:10.852020+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:11.852193+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 62054400 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:12.852386+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:13.852517+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:14.852657+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:15.852797+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:16.853010+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:17.853183+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 62046208 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:18.853357+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:19.853898+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:20.854181+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:21.854462+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:22.854743+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297164800 unmapped: 62038016 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:23.855093+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 62029824 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:24.855441+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 62029824 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:25.855612+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:26.855783+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:27.856000+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:28.856314+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:29.856632+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:30.856907+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:31.857125+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:32.857382+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 62021632 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:33.857541+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:34.857725+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:35.857952+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 62013440 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:36.858108+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 62005248 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:37.858328+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 62005248 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:38.858548+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 62005248 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:39.858705+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 62005248 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:40.858922+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 62005248 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:41.859106+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 62005248 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536003 data_alloc: 218103808 data_used: 2887680
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:42.859228+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 62005248 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:43.859394+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297197568 unmapped: 62005248 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:44.859583+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 283.956665039s of 284.302581787s, submitted: 90
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297205760 unmapped: 61997056 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:45.859767+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _renew_subs
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 311 ms_handle_reset con 0x55c4dd228800 session 0x55c4dd2e21e0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ea436000/0x0/0x4ffc00000, data 0x40f2874/0x42a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 61964288 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:46.860068+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 61964288 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431792 data_alloc: 218103808 data_used: 2895872
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:47.860210+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd4e0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297238528 unmapped: 61964288 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 311 heartbeat osd_stat(store_statfs(0x4eb435000/0x0/0x4ffc00000, data 0x30f4412/0x32a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:48.860378+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _renew_subs
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 312 ms_handle_reset con 0x55c4dd4e0800 session 0x55c4df1d74a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297246720 unmapped: 61956096 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:49.860533+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297254912 unmapped: 61947904 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:50.860698+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 61939712 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:51.860944+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 61939712 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 313 ms_handle_reset con 0x55c4dbc10000 session 0x55c4dc11af00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb86000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3353795 data_alloc: 218103808 data_used: 2904064
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:52.861079+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4ddb8ec00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ec09e000/0x0/0x4ffc00000, data 0x2487a52/0x263e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 61906944 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:53.861255+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 313 ms_handle_reset con 0x55c4ddb8ec00 session 0x55c4db0625a0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 61890560 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:54.861428+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.847590446s of 10.088174820s, submitted: 90
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ec09b000/0x0/0x4ffc00000, data 0x24895fb/0x2642000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 314 handle_osd_map epochs [315,315], i have 315, src has [1,315]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 61865984 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:55.861573+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec09b000/0x0/0x4ffc00000, data 0x24895fb/0x2642000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:56.861729+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361701 data_alloc: 218103808 data_used: 2912256
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:57.861899+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:58.862079+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:59.862322+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:00.862518+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:01.862710+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361701 data_alloc: 218103808 data_used: 2912256
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:02.862886+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:03.863060+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:04.863236+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:05.863372+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297345024 unmapped: 61857792 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:06.863548+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297353216 unmapped: 61849600 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361701 data_alloc: 218103808 data_used: 2912256
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:07.863727+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297353216 unmapped: 61849600 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:08.863858+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297361408 unmapped: 61841408 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:09.864005+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297361408 unmapped: 61841408 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:10.864198+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297361408 unmapped: 61841408 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:11.864375+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 61833216 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:12.864539+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361701 data_alloc: 218103808 data_used: 2912256
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 61833216 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:13.864743+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 61833216 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:14.864941+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 61833216 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:15.865080+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297369600 unmapped: 61833216 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:16.865255+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 61825024 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:17.865525+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361701 data_alloc: 218103808 data_used: 2912256
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 61825024 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:18.865809+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 61825024 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:19.866130+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297377792 unmapped: 61825024 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:20.866369+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 61816832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:21.866610+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 61816832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:22.866772+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361701 data_alloc: 218103808 data_used: 2912256
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 61816832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:23.866953+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 61816832 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:24.867123+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 61800448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:25.867286+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 61800448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:26.867503+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 61800448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:27.867676+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361861 data_alloc: 218103808 data_used: 2916352
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 61800448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:28.867893+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 61800448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:29.868062+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 61800448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:30.868230+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 61800448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:31.868437+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297402368 unmapped: 61800448 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:32.868597+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361861 data_alloc: 218103808 data_used: 2916352
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297410560 unmapped: 61792256 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:33.868762+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297410560 unmapped: 61792256 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:34.868909+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297410560 unmapped: 61792256 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:35.869095+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297410560 unmapped: 61792256 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:36.869265+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297410560 unmapped: 61792256 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:37.869424+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361861 data_alloc: 218103808 data_used: 2916352
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297418752 unmapped: 61784064 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:38.869576+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297418752 unmapped: 61784064 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:39.869749+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297418752 unmapped: 61784064 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:40.869932+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 61775872 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:41.870093+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 61775872 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:42.870224+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361861 data_alloc: 218103808 data_used: 2916352
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 61775872 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:43.870368+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297435136 unmapped: 61767680 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:44.870522+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297435136 unmapped: 61767680 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:45.870689+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297435136 unmapped: 61767680 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:46.870855+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297435136 unmapped: 61767680 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:47.871052+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361861 data_alloc: 218103808 data_used: 2916352
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297435136 unmapped: 61767680 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:48.871219+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297443328 unmapped: 61759488 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:49.871451+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297443328 unmapped: 61759488 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:50.871627+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ec097000/0x0/0x4ffc00000, data 0x248b05e/0x2645000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297443328 unmapped: 61759488 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:51.871874+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4e8e19000
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _renew_subs
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 56.906375885s of 56.922370911s, submitted: 15
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 316 ms_handle_reset con 0x55c4e8e19000 session 0x55c4df73bc20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297467904 unmapped: 61734912 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:52.872018+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363450 data_alloc: 218103808 data_used: 2916352
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297467904 unmapped: 61734912 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:53.872164+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 316 ms_handle_reset con 0x55c4dbe5d400 session 0x55c4db000d20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dbe5d400
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297467904 unmapped: 61734912 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:54.872314+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:55.872633+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297467904 unmapped: 61734912 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:56.876121+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297467904 unmapped: 61734912 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec096000/0x0/0x4ffc00000, data 0x248cc1f/0x2647000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:57.876239+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297476096 unmapped: 61726720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3363450 data_alloc: 218103808 data_used: 2916352
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:58.876584+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297476096 unmapped: 61726720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec096000/0x0/0x4ffc00000, data 0x248cc1f/0x2647000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:59.878194+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297476096 unmapped: 61726720 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _renew_subs
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec096000/0x0/0x4ffc00000, data 0x248cc1f/0x2647000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:00.878353+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 61669376 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:01.879239+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 61669376 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:02.879384+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297533440 unmapped: 61669376 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366424 data_alloc: 218103808 data_used: 2916352
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:03.879691+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 61661184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:04.879874+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 61661184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:05.880203+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 61661184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:06.881006+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 61661184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:07.881186+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297541632 unmapped: 61661184 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366424 data_alloc: 218103808 data_used: 2916352
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:08.881398+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 61652992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:09.881544+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 61652992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:10.881807+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 61652992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:11.882288+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 61652992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:12.882553+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 61652992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366424 data_alloc: 218103808 data_used: 2916352
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:13.882801+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 61652992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:14.883031+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 61652992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:15.883172+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 61652992 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:16.883308+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 61644800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:17.883454+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 61644800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366424 data_alloc: 218103808 data_used: 2916352
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:18.883613+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 61644800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:19.883754+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 61644800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:20.883898+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297558016 unmapped: 61644800 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:21.884106+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 61628416 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:22.884425+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 61628416 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366424 data_alloc: 218103808 data_used: 2916352
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:23.884689+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 61628416 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:24.884889+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 61628416 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:25.885020+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297574400 unmapped: 61628416 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:26.885193+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 61620224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:27.885319+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 61620224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:28.885505+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 61620224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:29.885642+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 61620224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:30.885805+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 61620224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:31.886039+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 61620224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:32.886191+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 61620224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:33.886353+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297582592 unmapped: 61620224 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:34.886515+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 61612032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:35.886701+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297590784 unmapped: 61612032 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:36.886944+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 61603840 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:37.887118+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 61603840 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:38.887412+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 61603840 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:39.887565+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297598976 unmapped: 61603840 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:40.887717+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61595648 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:41.887895+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61595648 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:42.888050+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 61587456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:43.888219+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 61587456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:44.888340+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 61587456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:45.888503+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 61587456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:46.888649+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 61587456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:47.888793+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 61587456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:48.889774+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297615360 unmapped: 61587456 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:49.889928+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 61579264 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:50.890069+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 61579264 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:51.890241+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297623552 unmapped: 61579264 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:52.890392+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 61571072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:53.890567+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 61571072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:54.890746+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 61571072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:55.890935+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 61571072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:56.891127+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 61571072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:57.891301+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 61571072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:58.892184+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 61571072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:59.892371+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297631744 unmapped: 61571072 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:00.893021+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297648128 unmapped: 61554688 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:01.893219+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297648128 unmapped: 61554688 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:02.893361+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297648128 unmapped: 61554688 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:03.893517+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297648128 unmapped: 61554688 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:04.893658+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297648128 unmapped: 61554688 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:05.893850+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297648128 unmapped: 61554688 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:06.894019+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297648128 unmapped: 61554688 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:07.894173+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297648128 unmapped: 61554688 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:08.894333+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 61546496 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:09.894579+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 61546496 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:10.894781+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 61546496 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:11.895114+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 61546496 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:12.895320+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 61546496 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:13.895512+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297656320 unmapped: 61546496 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 ms_handle_reset con 0x55c4df107400 session 0x55c4dd81fe00
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd228800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 ms_handle_reset con 0x55c4dbc29c00 session 0x55c4df73ad20
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: handle_auth_request added challenge on 0x55c4dd4e0800
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:14.895658+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 61530112 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:15.895958+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297672704 unmapped: 61530112 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:16.896109+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 61521920 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:17.896252+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 61521920 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:18.896475+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 61521920 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:19.896725+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297680896 unmapped: 61521920 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:20.896933+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 61513728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:21.897227+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 61513728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:22.897405+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 61513728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:23.897576+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 61513728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:24.897667+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 61513728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:25.897909+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 61513728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:26.898034+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 61513728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:27.898578+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297689088 unmapped: 61513728 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:28.898704+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297705472 unmapped: 61497344 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:29.898840+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297705472 unmapped: 61497344 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:30.899003+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: do_command 'config diff' '{prefix=config diff}'
Oct 11 10:01:03 compute-0 ceph-osd[90364]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 11 10:01:03 compute-0 ceph-osd[90364]: do_command 'config show' '{prefix=config show}'
Oct 11 10:01:03 compute-0 ceph-osd[90364]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 11 10:01:03 compute-0 ceph-osd[90364]: do_command 'counter dump' '{prefix=counter dump}'
Oct 11 10:01:03 compute-0 ceph-osd[90364]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 61939712 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: do_command 'counter schema' '{prefix=counter schema}'
Oct 11 10:01:03 compute-0 ceph-osd[90364]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:31.899152+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 296853504 unmapped: 62349312 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ec093000/0x0/0x4ffc00000, data 0x248e682/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: tick
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_tickets
Oct 11 10:01:03 compute-0 ceph-osd[90364]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:32.899273+0000)
Oct 11 10:01:03 compute-0 ceph-osd[90364]: prioritycache tune_memory target: 4294967296 mapped: 297107456 unmapped: 62095360 heap: 359202816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:03 compute-0 ceph-osd[90364]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:03 compute-0 ceph-osd[90364]: bluestore.MempoolThread(0x55c4d9b55b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366584 data_alloc: 218103808 data_used: 2920448
Oct 11 10:01:03 compute-0 ceph-osd[90364]: do_command 'log dump' '{prefix=log dump}'
Oct 11 10:01:03 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 11 10:01:03 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2166521335' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 10:01:03 compute-0 ceph-mon[74313]: from='client.22947 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:03 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4227035867' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 11 10:01:03 compute-0 ceph-mon[74313]: from='client.22951 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:03 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3647786049' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 11 10:01:03 compute-0 ceph-mon[74313]: from='client.22955 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:03 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2166521335' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 11 10:01:04 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3611: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:04 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22959 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:04 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 10:01:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 11 10:01:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1496692667' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 10:01:04 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22963 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 11 10:01:04 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2153358173' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 10:01:04 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22967 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:04 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 10:01:04 compute-0 ceph-mon[74313]: pgmap v3611: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:04 compute-0 ceph-mon[74313]: from='client.22959 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:04 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1496692667' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 11 10:01:04 compute-0 ceph-mon[74313]: from='client.22963 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:04 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2153358173' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 11 10:01:05 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 11 10:01:05 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/178570993' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22971 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] _maybe_adjust
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:05 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 10:01:06 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3612: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:06 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.22979 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:06 compute-0 ceph-33219f8b-dc38-5a8f-a577-8ccc4b37190a-mgr-compute-0-hcsgrm[74601]: 2025-10-11T10:01:06.010+0000 7f7f1b2f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 10:01:06 compute-0 ceph-mgr[74605]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 11 10:01:06 compute-0 ceph-mon[74313]: from='client.22967 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:06 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/178570993' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 11 10:01:06 compute-0 ceph-mon[74313]: from='client.22971 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 11 10:01:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/142142937' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 11 10:01:06 compute-0 sshd-session[454489]: Invalid user newuser from 13.126.15.214 port 44308
Oct 11 10:01:06 compute-0 sshd-session[454489]: pam_unix(sshd:auth): check pass; user unknown
Oct 11 10:01:06 compute-0 sshd-session[454489]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=13.126.15.214
Oct 11 10:01:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 11 10:01:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982567174' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 11 10:01:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 11 10:01:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/530945270' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 11 10:01:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 11 10:01:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/675396220' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 11 10:01:06 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 11 10:01:06 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1141493182' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 11 10:01:07 compute-0 ceph-mon[74313]: pgmap v3612: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:07 compute-0 ceph-mon[74313]: from='client.22979 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:07 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/142142937' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 11 10:01:07 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3982567174' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 11 10:01:07 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/530945270' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 11 10:01:07 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/675396220' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 11 10:01:07 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1141493182' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 11 10:01:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 11 10:01:07 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2832554885' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 11 10:01:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 11 10:01:07 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3038655888' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 11 10:01:07 compute-0 crontab[454825]: (root) LIST (root)
Oct 11 10:01:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 11 10:01:07 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3992329415' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 11 10:01:07 compute-0 nova_compute[260935]: 2025-10-11 10:01:07.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:01:07 compute-0 nova_compute[260935]: 2025-10-11 10:01:07.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:01:07 compute-0 nova_compute[260935]: 2025-10-11 10:01:07.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 10:01:07 compute-0 nova_compute[260935]: 2025-10-11 10:01:07.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:01:07 compute-0 nova_compute[260935]: 2025-10-11 10:01:07.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 10:01:07 compute-0 nova_compute[260935]: 2025-10-11 10:01:07.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:01:07 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 11 10:01:07 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2907153542' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 11 10:01:08 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3613: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 11 10:01:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1853826303' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 11 10:01:08 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2832554885' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 11 10:01:08 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3038655888' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 11 10:01:08 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3992329415' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 11 10:01:08 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2907153542' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 11 10:01:08 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1853826303' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 11 10:01:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 11 10:01:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1346839265' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 11 10:01:08 compute-0 sshd-session[454489]: Failed password for invalid user newuser from 13.126.15.214 port 44308 ssh2
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab9c2fc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:45.546352+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330858496 unmapped: 51183616 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:46.546482+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333545472 unmapped: 48496640 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:47.546613+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 335257600 unmapped: 46784512 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:48.546749+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 335257600 unmapped: 46784512 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:49.546917+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e72bd000/0x0/0x4ffc00000, data 0x6a682be/0x6c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 335257600 unmapped: 46784512 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4233672 data_alloc: 251658240 data_used: 53624832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:50.547068+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 335265792 unmapped: 46776320 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:51.547196+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e72bd000/0x0/0x4ffc00000, data 0x6a682be/0x6c00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 335298560 unmapped: 46743552 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:52.547289+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 335314944 unmapped: 46727168 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:53.547467+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338542592 unmapped: 43499520 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4956000 session 0x557ab5bfbe00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6684800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab72a4f00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:54.547647+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338001920 unmapped: 44040192 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5a83000/0x0/0x4ffc00000, data 0x70fe2be/0x7296000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4298296 data_alloc: 251658240 data_used: 54517760
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:55.547842+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.904383659s of 10.933584213s, submitted: 71
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338206720 unmapped: 43835392 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:56.547992+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 39952384 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:57.548138+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e547f000/0x0/0x4ffc00000, data 0x76ff2be/0x7897000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 39944192 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:58.548297+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e541d000/0x0/0x4ffc00000, data 0x77612be/0x78f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 39944192 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:28:59.548418+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 39944192 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4362664 data_alloc: 251658240 data_used: 54804480
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:00.548578+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342097920 unmapped: 39944192 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:01.548760+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 341434368 unmapped: 40607744 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:02.548915+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342482944 unmapped: 39559168 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:03.549037+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e425e000/0x0/0x4ffc00000, data 0x77882be/0x7920000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343531520 unmapped: 38510592 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:04.549259+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e425e000/0x0/0x4ffc00000, data 0x77882be/0x7920000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 38502400 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7f88400 session 0x557ab791f680
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab791eb40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4359640 data_alloc: 251658240 data_used: 54870016
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:05.549416+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 38502400 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:06.549537+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.994635582s of 11.460409164s, submitted: 86
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 38502400 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e425e000/0x0/0x4ffc00000, data 0x77882be/0x7920000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:07.549662+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340787200 unmapped: 41254912 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab6a93e00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:08.549900+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340803584 unmapped: 41238528 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:09.550163+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340803584 unmapped: 41238528 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5339000/0x0/0x4ffc00000, data 0x66ad29b/0x6844000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4149256 data_alloc: 234881024 data_used: 44314624
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:10.550316+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340803584 unmapped: 41238528 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5339000/0x0/0x4ffc00000, data 0x66ad29b/0x6844000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:11.550499+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340803584 unmapped: 41238528 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:12.550689+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6696800 session 0x557ab7295a40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab5bfb0e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4c39c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336035840 unmapped: 46006272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4c39c00 session 0x557ab55203c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:13.550861+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e69de000/0x0/0x4ffc00000, data 0x500929b/0x51a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:14.551019+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865366 data_alloc: 218103808 data_used: 27451392
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e69de000/0x0/0x4ffc00000, data 0x500929b/0x51a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:15.551224+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:16.551388+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:17.551523+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:18.551679+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e69de000/0x0/0x4ffc00000, data 0x500929b/0x51a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:19.551855+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865366 data_alloc: 218103808 data_used: 27451392
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:20.551978+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:21.552141+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:22.552288+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:23.552452+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e69de000/0x0/0x4ffc00000, data 0x500929b/0x51a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:24.552601+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865366 data_alloc: 218103808 data_used: 27451392
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:25.552805+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:26.553105+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:27.553559+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e69de000/0x0/0x4ffc00000, data 0x500929b/0x51a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.566871643s of 21.283866882s, submitted: 80
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:28.553752+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7f89800 session 0x557ab552c960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2fc00 session 0x557ab7951680
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e69de000/0x0/0x4ffc00000, data 0x500929b/0x51a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4c39c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:29.553927+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336052224 unmapped: 45989888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865142 data_alloc: 218103808 data_used: 27451392
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:30.554076+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324763648 unmapped: 57278464 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:31.554230+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324763648 unmapped: 57278464 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:32.554459+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324763648 unmapped: 57278464 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:33.554619+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e786b000/0x0/0x4ffc00000, data 0x417c29b/0x4313000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324763648 unmapped: 57278464 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:34.554801+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4c39c00 session 0x557ab46d30e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:35.555110+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686476 data_alloc: 218103808 data_used: 18169856
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:36.555276+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:37.555421+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:38.555543+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:39.555751+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:40.555920+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686476 data_alloc: 218103808 data_used: 18169856
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:41.556173+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:42.556360+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:43.556499+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:44.556661+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:45.556876+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686476 data_alloc: 218103808 data_used: 18169856
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:46.557041+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:47.557229+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:48.557387+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:49.557534+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324771840 unmapped: 57270272 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:50.557682+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686476 data_alloc: 218103808 data_used: 18169856
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324780032 unmapped: 57262080 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:51.557885+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324780032 unmapped: 57262080 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:52.558014+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324788224 unmapped: 57253888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:53.558165+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324788224 unmapped: 57253888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:54.558317+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324788224 unmapped: 57253888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:55.558514+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686476 data_alloc: 218103808 data_used: 18169856
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324788224 unmapped: 57253888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:56.558746+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324788224 unmapped: 57253888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:57.558894+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:58.559054+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:59.559195+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:00.559361+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686476 data_alloc: 218103808 data_used: 18169856
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:01.559545+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:02.559711+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324804608 unmapped: 57237504 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:03.559867+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324804608 unmapped: 57237504 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:04.560526+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324804608 unmapped: 57237504 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:05.560762+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686476 data_alloc: 218103808 data_used: 18169856
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324804608 unmapped: 57237504 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:06.561004+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324804608 unmapped: 57237504 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:07.561161+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324804608 unmapped: 57237504 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:08.561305+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324812800 unmapped: 57229312 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:09.561436+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6696800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.966651917s of 41.397136688s, submitted: 35
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324812800 unmapped: 57229312 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:10.561604+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6696800 session 0x557ab682a5a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693691 data_alloc: 218103808 data_used: 18169856
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab72a52c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab3ee6b40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab55214a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4c39c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4c39c00 session 0x557ab72a4b40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324788224 unmapped: 57253888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:11.561730+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324788224 unmapped: 57253888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:12.561874+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324788224 unmapped: 57253888 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:13.562135+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:14.562278+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x41d229b/0x4369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:15.562466+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3693691 data_alloc: 218103808 data_used: 18169856
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714f400 session 0x557ab5521860
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6696800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:16.562717+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:17.562885+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714f400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714f400 session 0x557ab791fa40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324796416 unmapped: 57245696 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:18.563032+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x41d229b/0x4369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab7950f00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324804608 unmapped: 57237504 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:19.563171+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x41d229b/0x4369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x41d229b/0x4369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 324812800 unmapped: 57229312 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:20.563312+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab9c2fc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2fc00 session 0x557ab52921e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4c39c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.392966270s of 10.930088997s, submitted: 11
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696569 data_alloc: 218103808 data_used: 18169856
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4c39c00 session 0x557ab57cbe00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325124096 unmapped: 56918016 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:21.563416+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:22.563549+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325124096 unmapped: 56918016 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:23.563692+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325124096 unmapped: 56918016 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e77f0000/0x0/0x4ffc00000, data 0x41f62aa/0x438e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:24.563853+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325124096 unmapped: 56918016 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:25.564021+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325124096 unmapped: 56918016 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703615 data_alloc: 218103808 data_used: 18698240
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e77f0000/0x0/0x4ffc00000, data 0x41f62aa/0x438e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:26.564152+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325124096 unmapped: 56918016 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:27.564288+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325124096 unmapped: 56918016 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e77f0000/0x0/0x4ffc00000, data 0x41f62aa/0x438e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:28.564410+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325132288 unmapped: 56909824 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:29.564548+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325140480 unmapped: 56901632 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e77f0000/0x0/0x4ffc00000, data 0x41f62aa/0x438e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:30.564661+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325140480 unmapped: 56901632 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703615 data_alloc: 218103808 data_used: 18698240
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:31.564847+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325140480 unmapped: 56901632 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:32.564994+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 325140480 unmapped: 56901632 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.047631264s of 12.067139626s, submitted: 3
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:33.565114+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327548928 unmapped: 54493184 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e77ba000/0x0/0x4ffc00000, data 0x422c2aa/0x43c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:34.565311+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327548928 unmapped: 54493184 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:35.565526+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328695808 unmapped: 53346304 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749267 data_alloc: 218103808 data_used: 19013632
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:36.565691+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328695808 unmapped: 53346304 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:37.565851+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328695808 unmapped: 53346304 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7389000/0x0/0x4ffc00000, data 0x465d2aa/0x47f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:38.566015+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328695808 unmapped: 53346304 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:39.566156+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328695808 unmapped: 53346304 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:40.566301+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 53370880 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3748063 data_alloc: 218103808 data_used: 19013632
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:41.566442+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7387000/0x0/0x4ffc00000, data 0x465f2aa/0x47f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 53370880 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7387000/0x0/0x4ffc00000, data 0x465f2aa/0x47f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:42.566599+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 53370880 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7387000/0x0/0x4ffc00000, data 0x465f2aa/0x47f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:43.566735+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 53370880 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7387000/0x0/0x4ffc00000, data 0x465f2aa/0x47f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:44.566945+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 53370880 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:45.567151+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 53370880 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3748063 data_alloc: 218103808 data_used: 19013632
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.078260422s of 13.326107025s, submitted: 38
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:46.567316+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 53370880 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714f400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714f400 session 0x557ab79503c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7f88400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7f88400 session 0x557ab6df9a40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab8b9d800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab8b9d800 session 0x557ab7951c20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab9c2e400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2e400 session 0x557ab46d0d20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4c39c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:47.567473+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 328671232 unmapped: 53370880 heap: 382042112 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4c39c00 session 0x557ab6df6960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714f400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714f400 session 0x557ab46d3680
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7f88400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7f88400 session 0x557ab727a1e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab8b9d800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab8b9d800 session 0x557ab5521c20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dda400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab5520b40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e69ee000/0x0/0x4ffc00000, data 0x4ff82aa/0x5190000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:48.567619+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327286784 unmapped: 58957824 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:49.567752+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327286784 unmapped: 58957824 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:50.567893+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327286784 unmapped: 58957824 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3828738 data_alloc: 218103808 data_used: 19013632
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:51.568048+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 58949632 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dda400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab5bfa000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:52.568195+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 58949632 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4c39c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4c39c00 session 0x557ab5521e00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:53.568384+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 58949632 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714f400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714f400 session 0x557ab727ab40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7f88400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e696e000/0x0/0x4ffc00000, data 0x50782aa/0x5210000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7f88400 session 0x557ab686c000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:54.568512+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327294976 unmapped: 58949632 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab8b9d800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7348400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:55.568701+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327303168 unmapped: 58941440 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3831833 data_alloc: 218103808 data_used: 19013632
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:56.568912+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327180288 unmapped: 59064320 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:57.569045+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e696c000/0x0/0x4ffc00000, data 0x50782dd/0x5212000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:58.569184+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:59.569350+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:00.569453+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3894073 data_alloc: 234881024 data_used: 26898432
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:01.569608+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e696c000/0x0/0x4ffc00000, data 0x50782dd/0x5212000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:02.569742+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:03.569903+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e696c000/0x0/0x4ffc00000, data 0x50782dd/0x5212000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:04.570045+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e696c000/0x0/0x4ffc00000, data 0x50782dd/0x5212000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:05.570216+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.143640518s of 19.456829071s, submitted: 25
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 327221248 unmapped: 59023360 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3894333 data_alloc: 234881024 data_used: 26902528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:06.570357+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 332275712 unmapped: 53968896 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:07.570566+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334086144 unmapped: 52158464 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:08.570697+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 336658432 unmapped: 49586176 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5987000/0x0/0x4ffc00000, data 0x605d2dd/0x61f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:09.570853+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334856192 unmapped: 51388416 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e594d000/0x0/0x4ffc00000, data 0x60912dd/0x622b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:10.570991+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334553088 unmapped: 51691520 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4015763 data_alloc: 234881024 data_used: 27406336
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:11.571163+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:12.571311+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:13.571455+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:14.571609+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:15.571811+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4025403 data_alloc: 234881024 data_used: 27701248
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e593d000/0x0/0x4ffc00000, data 0x609f2dd/0x6239000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:16.572026+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:17.572153+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:18.572298+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:19.572521+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.048258781s of 14.340513229s, submitted: 126
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:20.572812+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334569472 unmapped: 51675136 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4025403 data_alloc: 234881024 data_used: 27701248
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:21.573007+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334503936 unmapped: 51740672 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab8b9d800 session 0x557ab6815e00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7348400 session 0x557ab5bfbe00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab8b9d800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5944000/0x0/0x4ffc00000, data 0x60a02dd/0x623a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:22.573135+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab8b9d800 session 0x557ab5292b40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 329523200 unmapped: 56721408 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:23.573269+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 329523200 unmapped: 56721408 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:24.573427+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 329523200 unmapped: 56721408 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7383000/0x0/0x4ffc00000, data 0x46622aa/0x47fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:25.573678+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 329523200 unmapped: 56721408 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749794 data_alloc: 218103808 data_used: 16392192
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab59c7860
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab54272c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:26.573841+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4c39c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 329531392 unmapped: 56713216 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7384000/0x0/0x4ffc00000, data 0x46622aa/0x47fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4c39c00 session 0x557ab4a7cf00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:27.574030+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:28.574144+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:29.574896+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 43K writes, 162K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 43K writes, 15K syncs, 2.71 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4545 writes, 19K keys, 4545 commit groups, 1.0 writes per commit group, ingest: 21.41 MB, 0.04 MB/s
                                           Interval WAL: 4545 writes, 1749 syncs, 2.60 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:30.575641+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:31.575796+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:32.575991+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:33.576176+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:34.576372+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:35.576593+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:36.576748+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:37.576916+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:38.577080+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:39.577205+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:40.577379+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:41.577635+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330612736 unmapped: 55631872 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:42.577780+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330620928 unmapped: 55623680 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:43.577949+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330620928 unmapped: 55623680 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:44.578135+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330620928 unmapped: 55623680 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:45.578360+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330620928 unmapped: 55623680 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:46.578588+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330620928 unmapped: 55623680 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:47.578755+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330620928 unmapped: 55623680 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:48.578917+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330629120 unmapped: 55615488 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:49.579077+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330629120 unmapped: 55615488 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:50.579238+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330629120 unmapped: 55615488 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:51.579386+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330629120 unmapped: 55615488 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:52.579598+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330637312 unmapped: 55607296 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:53.579749+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330637312 unmapped: 55607296 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:54.579913+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330637312 unmapped: 55607296 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:55.580092+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330637312 unmapped: 55607296 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:56.580288+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330637312 unmapped: 55607296 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:57.580464+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 55599104 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:58.580627+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 55599104 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:59.580859+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 55599104 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:00.581006+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 55574528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:01.581213+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 55574528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:02.581371+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330670080 unmapped: 55574528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:03.582082+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330678272 unmapped: 55566336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:04.582699+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330678272 unmapped: 55566336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:05.583015+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330678272 unmapped: 55566336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:06.583183+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330678272 unmapped: 55566336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:07.583414+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330678272 unmapped: 55566336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:08.583873+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330686464 unmapped: 55558144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:09.584571+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330686464 unmapped: 55558144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:10.584951+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330686464 unmapped: 55558144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:11.585256+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330686464 unmapped: 55558144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:12.585450+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330686464 unmapped: 55558144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:13.585607+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330686464 unmapped: 55558144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:14.585731+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330694656 unmapped: 55549952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:15.586001+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330694656 unmapped: 55549952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696804 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:16.586191+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330702848 unmapped: 55541760 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:17.586363+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7895000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330702848 unmapped: 55541760 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 57.869426727s of 58.322135925s, submitted: 84
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab7951e00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:18.586475+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 55369728 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:19.586623+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 55369728 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:20.586782+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 55369728 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734466 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:21.586980+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330874880 unmapped: 55369728 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:22.587120+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330883072 unmapped: 55361536 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:23.587303+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e743d000/0x0/0x4ffc00000, data 0x45aa29b/0x4741000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330883072 unmapped: 55361536 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:24.587434+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330883072 unmapped: 55361536 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:25.587612+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 330883072 unmapped: 55361536 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734466 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:26.587792+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e743d000/0x0/0x4ffc00000, data 0x45aa29b/0x4741000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab7295a40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331202560 unmapped: 55042048 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:27.587871+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7348400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7418000/0x0/0x4ffc00000, data 0x45ce2be/0x4766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331202560 unmapped: 55042048 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:28.588061+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab8b9d800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331202560 unmapped: 55042048 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:29.588197+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331218944 unmapped: 55025664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:30.588352+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331218944 unmapped: 55025664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3772168 data_alloc: 218103808 data_used: 20008960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:31.588466+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7418000/0x0/0x4ffc00000, data 0x45ce2be/0x4766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331218944 unmapped: 55025664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:32.588592+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331218944 unmapped: 55025664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:33.588718+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331218944 unmapped: 55025664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:34.588881+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331218944 unmapped: 55025664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:35.589046+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331218944 unmapped: 55025664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3772168 data_alloc: 218103808 data_used: 20008960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:36.589210+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7418000/0x0/0x4ffc00000, data 0x45ce2be/0x4766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331227136 unmapped: 55017472 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:37.589331+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331227136 unmapped: 55017472 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:38.589465+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.643857956s of 20.801364899s, submitted: 20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 331931648 unmapped: 54312960 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7418000/0x0/0x4ffc00000, data 0x45ce2be/0x4766000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:39.589588+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334323712 unmapped: 51920896 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:40.589744+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334102528 unmapped: 52142080 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826782 data_alloc: 218103808 data_used: 21155840
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:41.589844+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334102528 unmapped: 52142080 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:42.589990+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334184448 unmapped: 52060160 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:43.590176+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334184448 unmapped: 52060160 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:44.590322+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334184448 unmapped: 52060160 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:45.590491+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334184448 unmapped: 52060160 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826782 data_alloc: 218103808 data_used: 21155840
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:46.590701+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334184448 unmapped: 52060160 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:47.590876+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334184448 unmapped: 52060160 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:48.591041+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 52051968 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:49.591192+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 52051968 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:50.591419+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 52051968 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826782 data_alloc: 218103808 data_used: 21155840
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:51.591603+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 52051968 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:52.591769+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 52051968 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:53.591930+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334192640 unmapped: 52051968 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:54.592092+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:55.592259+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826782 data_alloc: 218103808 data_used: 21155840
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:56.592398+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:57.592532+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334209024 unmapped: 52035584 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:58.592683+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334209024 unmapped: 52035584 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:59.592811+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334209024 unmapped: 52035584 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:00.592998+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.530765533s of 21.778186798s, submitted: 53
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334233600 unmapped: 52011008 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824862 data_alloc: 218103808 data_used: 21155840
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:01.593179+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:02.593346+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:03.593491+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:04.593632+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:05.593869+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824862 data_alloc: 218103808 data_used: 21155840
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:06.594034+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:07.594182+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e98000/0x0/0x4ffc00000, data 0x4b4e2be/0x4ce6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:08.594364+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334200832 unmapped: 52043776 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:09.594588+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dda400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334209024 unmapped: 52035584 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab7951680
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714f400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714f400 session 0x557ab6fcd0e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7f88400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7f88400 session 0x557ab58cfe00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7f88400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7f88400 session 0x557ab79505a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:10.594737+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab6df8d20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 51658752 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.958397865s of 10.519848824s, submitted: 129
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3847780 data_alloc: 218103808 data_used: 21155840
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e692a000/0x0/0x4ffc00000, data 0x4cab320/0x4e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:11.594894+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 51658752 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:12.595063+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 51658752 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:13.595290+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 51658752 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6928000/0x0/0x4ffc00000, data 0x4cac320/0x4e45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:14.595867+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 51658752 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:15.596344+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334585856 unmapped: 51658752 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3847780 data_alloc: 218103808 data_used: 21155840
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:16.596471+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dda400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 51650560 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:17.596632+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334594048 unmapped: 51650560 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:18.596744+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:19.596901+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6928000/0x0/0x4ffc00000, data 0x4cac320/0x4e45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:20.597031+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857992 data_alloc: 218103808 data_used: 22470656
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:21.597164+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6928000/0x0/0x4ffc00000, data 0x4cac320/0x4e45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:22.597375+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:23.597646+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:24.597920+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:25.598115+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6928000/0x0/0x4ffc00000, data 0x4cac320/0x4e45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857992 data_alloc: 218103808 data_used: 22470656
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:26.598260+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:27.598402+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334364672 unmapped: 51879936 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:28.598564+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.543865204s of 17.608095169s, submitted: 4
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333447168 unmapped: 52797440 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:29.598691+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333578240 unmapped: 52666368 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:30.598837+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6252000/0x0/0x4ffc00000, data 0x5375320/0x550e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3922318 data_alloc: 218103808 data_used: 23736320
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:31.611265+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:32.611431+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:33.611670+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:34.611888+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:35.612066+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3922318 data_alloc: 218103808 data_used: 23736320
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:36.612207+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6252000/0x0/0x4ffc00000, data 0x5375320/0x550e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:37.612374+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6252000/0x0/0x4ffc00000, data 0x5375320/0x550e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:38.612584+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:39.612753+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:40.612919+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3922318 data_alloc: 218103808 data_used: 23736320
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:41.613066+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:42.613284+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6252000/0x0/0x4ffc00000, data 0x5375320/0x550e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:43.613424+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333914112 unmapped: 52330496 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:44.613597+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.640145302s of 15.920056343s, submitted: 94
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab46d14a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333570048 unmapped: 52674560 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:45.613779+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab6bfa3c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333578240 unmapped: 52666368 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6a86000/0x0/0x4ffc00000, data 0x4b4f2be/0x4ce7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3832760 data_alloc: 218103808 data_used: 21155840
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:46.613985+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333578240 unmapped: 52666368 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:47.614138+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333578240 unmapped: 52666368 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:48.614338+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333578240 unmapped: 52666368 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:49.614458+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6a86000/0x0/0x4ffc00000, data 0x4b4f2be/0x4ce7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333578240 unmapped: 52666368 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:50.614620+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333578240 unmapped: 52666368 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab8b9d800 session 0x557ab6df74a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7348400 session 0x557ab791fc20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3832628 data_alloc: 218103808 data_used: 21155840
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:51.614766+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab792ef00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333742080 unmapped: 52502528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:52.614912+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7484000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333742080 unmapped: 52502528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:53.615068+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333742080 unmapped: 52502528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:54.615158+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333742080 unmapped: 52502528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:55.615353+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333742080 unmapped: 52502528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3714128 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:56.615497+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7484000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333742080 unmapped: 52502528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:57.615678+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333742080 unmapped: 52502528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:58.615862+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7484000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333742080 unmapped: 52502528 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:59.616016+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 52494336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:00.616179+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 52494336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3714128 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:01.616324+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 52494336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:02.616494+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 52494336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:03.616619+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7484000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333750272 unmapped: 52494336 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:04.616779+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333758464 unmapped: 52486144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:05.616997+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333758464 unmapped: 52486144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3714128 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:06.617680+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333758464 unmapped: 52486144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7484000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:07.618341+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333758464 unmapped: 52486144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:08.618500+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333758464 unmapped: 52486144 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:09.618661+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333766656 unmapped: 52477952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:10.618883+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333766656 unmapped: 52477952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:11.619064+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3714128 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7484000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333766656 unmapped: 52477952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:12.619232+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333766656 unmapped: 52477952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:13.619405+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7484000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333766656 unmapped: 52477952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:14.619661+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333766656 unmapped: 52477952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:15.620383+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333766656 unmapped: 52477952 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:16.620707+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3714128 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333774848 unmapped: 52469760 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:17.620894+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333774848 unmapped: 52469760 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:18.621063+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7484000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333774848 unmapped: 52469760 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:19.621186+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333774848 unmapped: 52469760 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dda400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:20.621312+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab6df61e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab6bfbc20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7f88400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7f88400 session 0x557ab685a3c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab5427a40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dda400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.755138397s of 36.050605774s, submitted: 84
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab5427c20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab7294d20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7348400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7348400 session 0x557ab686d680
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab8b9d800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab8b9d800 session 0x557ab791e960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab4aab0e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333873152 unmapped: 52371456 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:21.621607+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3783807 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c9f000/0x0/0x4ffc00000, data 0x49372ab/0x4acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333873152 unmapped: 52371456 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:22.621767+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333873152 unmapped: 52371456 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:23.621967+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333873152 unmapped: 52371456 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:24.622207+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333873152 unmapped: 52371456 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:25.622426+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c9f000/0x0/0x4ffc00000, data 0x49372ab/0x4acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333873152 unmapped: 52371456 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:26.622570+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3783807 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333881344 unmapped: 52363264 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:27.622889+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dda400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab6821a40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c9f000/0x0/0x4ffc00000, data 0x49372ab/0x4acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 333881344 unmapped: 52363264 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:28.623034+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab696f0e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c9f000/0x0/0x4ffc00000, data 0x49372ab/0x4acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7348400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7348400 session 0x557ab4aaa3c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714f400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714f400 session 0x557ab72a5680
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c9f000/0x0/0x4ffc00000, data 0x49372ab/0x4acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334036992 unmapped: 52207616 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:29.623204+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dda400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334036992 unmapped: 52207616 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:30.623335+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c7b000/0x0/0x4ffc00000, data 0x495b2ab/0x4af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:31.623458+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3810788 data_alloc: 218103808 data_used: 18796544
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:32.623594+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c7b000/0x0/0x4ffc00000, data 0x495b2ab/0x4af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:33.623778+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c7b000/0x0/0x4ffc00000, data 0x495b2ab/0x4af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:34.623911+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:35.624180+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:36.624430+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3844868 data_alloc: 218103808 data_used: 23621632
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:37.624552+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:38.624733+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c7b000/0x0/0x4ffc00000, data 0x495b2ab/0x4af3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:39.624885+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 334045184 unmapped: 52199424 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:40.625039+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.186073303s of 20.419456482s, submitted: 32
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 337428480 unmapped: 48816128 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:41.625193+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3919034 data_alloc: 218103808 data_used: 23654400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 337944576 unmapped: 48300032 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:42.625303+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 47865856 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:43.625451+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 47865856 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:44.625608+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7288000/0x0/0x4ffc00000, data 0x538e2ab/0x5526000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 47865856 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:45.625791+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7288000/0x0/0x4ffc00000, data 0x538e2ab/0x5526000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 47865856 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:46.625938+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3942374 data_alloc: 218103808 data_used: 24608768
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:47.626140+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 47865856 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:48.626312+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338378752 unmapped: 47865856 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:49.626506+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7267000/0x0/0x4ffc00000, data 0x53af2ab/0x5547000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:50.626675+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:51.626910+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3942170 data_alloc: 218103808 data_used: 24608768
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:52.627024+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:53.627147+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:54.627313+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:55.627544+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7267000/0x0/0x4ffc00000, data 0x53af2ab/0x5547000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:56.627699+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3942490 data_alloc: 218103808 data_used: 24616960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:57.627834+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.147550583s of 16.505861282s, submitted: 81
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:58.627954+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:59.628121+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:00.628303+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x53b52ab/0x554d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:01.628425+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3942722 data_alloc: 218103808 data_used: 24616960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:02.628565+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:03.628729+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338386944 unmapped: 47857664 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:04.628899+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338395136 unmapped: 47849472 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x53b52ab/0x554d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:05.629105+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338395136 unmapped: 47849472 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x53b52ab/0x554d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:06.629229+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338395136 unmapped: 47849472 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x53b52ab/0x554d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3942722 data_alloc: 218103808 data_used: 24616960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:07.629371+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 47841280 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x53b52ab/0x554d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:08.629511+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 47841280 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x53b52ab/0x554d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:09.629677+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 47841280 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:10.631008+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 47841280 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7261000/0x0/0x4ffc00000, data 0x53b52ab/0x554d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:11.632258+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 47841280 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3942722 data_alloc: 218103808 data_used: 24616960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:12.632399+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338403328 unmapped: 47841280 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.610344887s of 15.617400169s, submitted: 2
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:13.632848+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab6a93860
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338411520 unmapped: 47833088 heap: 386244608 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7348400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7348400 session 0x557ab7295680
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab72a4960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6693800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab6815c20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557abb03dc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557abb03dc00 session 0x557ab4a7dc20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557abb03dc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557abb03dc00 session 0x557ab72a4b40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:14.633011+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 54984704 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:15.633203+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 54984704 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6670000/0x0/0x4ffc00000, data 0x5fa530d/0x613e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:16.633349+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 54984704 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4036021 data_alloc: 218103808 data_used: 24616960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:17.633475+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 54984704 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:18.633665+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 54984704 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6670000/0x0/0x4ffc00000, data 0x5fa530d/0x613e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:19.633806+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 54984704 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:20.634009+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338616320 unmapped: 54984704 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:21.634156+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 54976512 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4036021 data_alloc: 218103808 data_used: 24616960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6693800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:22.634309+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338624512 unmapped: 54976512 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6670000/0x0/0x4ffc00000, data 0x5fa530d/0x613e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:23.634491+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 52740096 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:24.634664+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6670000/0x0/0x4ffc00000, data 0x5fa530d/0x613e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 52740096 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:25.634960+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6670000/0x0/0x4ffc00000, data 0x5fa530d/0x613e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 52740096 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:26.635124+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 52731904 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4117141 data_alloc: 234881024 data_used: 35983360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6670000/0x0/0x4ffc00000, data 0x5fa530d/0x613e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:27.635288+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 52731904 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6670000/0x0/0x4ffc00000, data 0x5fa530d/0x613e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:28.635452+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 52731904 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:29.635766+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 52731904 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:30.635905+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 52731904 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:31.636072+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 52731904 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4117141 data_alloc: 234881024 data_used: 35983360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:32.636202+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.115795135s of 19.247415543s, submitted: 33
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 52731904 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6670000/0x0/0x4ffc00000, data 0x5fa530d/0x613e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,58])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:33.636352+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e59b4000/0x0/0x4ffc00000, data 0x6c6130d/0x6dfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348758016 unmapped: 44843008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:34.636492+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 44515328 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:35.636743+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 44515328 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:36.636906+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 44515328 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4224165 data_alloc: 234881024 data_used: 36777984
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:37.637070+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 44515328 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:38.637216+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 44515328 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:39.637387+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e58c6000/0x0/0x4ffc00000, data 0x6d4f30d/0x6ee8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 44515328 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:40.637537+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 44515328 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:41.637699+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 44507136 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4224657 data_alloc: 234881024 data_used: 36839424
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:42.637875+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 44507136 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:43.638011+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 44507136 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.643822670s of 11.472805023s, submitted: 144
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:44.638147+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 44507136 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e58c4000/0x0/0x4ffc00000, data 0x6d5130d/0x6eea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:45.638328+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 44507136 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:46.638457+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab696fc20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 44507136 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4224901 data_alloc: 234881024 data_used: 36839424
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab4a7c960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:47.638612+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7259000/0x0/0x4ffc00000, data 0x53bc2ab/0x5554000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:48.638749+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:49.638925+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:50.639042+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7259000/0x0/0x4ffc00000, data 0x53bc2ab/0x5554000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:51.639210+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3948147 data_alloc: 218103808 data_used: 24141824
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:52.639331+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:53.639467+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:54.639661+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7259000/0x0/0x4ffc00000, data 0x53bc2ab/0x5554000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:55.639877+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7259000/0x0/0x4ffc00000, data 0x53bc2ab/0x5554000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:56.640036+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.474429131s of 12.555706024s, submitted: 35
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab6bfa3c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dda400 session 0x557ab6814d20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3947615 data_alloc: 218103808 data_used: 24141824
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6693800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab792fc20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:57.640213+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:58.640369+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:59.640540+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:00.640666+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:01.640897+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734153 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:02.640986+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:03.641167+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:04.641359+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:05.641578+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:06.641760+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734153 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:07.641874+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:08.642015+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:09.642166+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:10.642337+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:11.642487+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734153 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:12.642645+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:13.642905+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:14.643085+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:15.643279+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:16.643413+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 54657024 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3734153 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.411876678s of 20.594398499s, submitted: 54
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab46ada40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:17.643553+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x49f629b/0x4b8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:18.643718+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:19.643900+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:20.644048+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x49f629b/0x4b8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:21.644276+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x49f629b/0x4b8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3805837 data_alloc: 218103808 data_used: 15548416
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:22.644404+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:23.644560+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:24.644682+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:25.644860+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339263488 unmapped: 54337536 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:26.645002+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3869517 data_alloc: 218103808 data_used: 23527424
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:27.645141+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x49f629b/0x4b8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:28.645279+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:29.646801+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:30.647021+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:31.647234+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3869517 data_alloc: 218103808 data_used: 23527424
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:32.647412+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7c21000/0x0/0x4ffc00000, data 0x49f629b/0x4b8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:33.647581+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:34.647787+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 339271680 unmapped: 54329344 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.176839828s of 18.275016785s, submitted: 16
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:35.648068+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342966272 unmapped: 50634752 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:36.648244+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e711e000/0x0/0x4ffc00000, data 0x54f929b/0x5690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3958529 data_alloc: 218103808 data_used: 24690688
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:37.648414+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7107000/0x0/0x4ffc00000, data 0x551029b/0x56a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:38.648641+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7107000/0x0/0x4ffc00000, data 0x551029b/0x56a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:39.648801+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:40.648995+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:41.649150+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7107000/0x0/0x4ffc00000, data 0x551029b/0x56a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7107000/0x0/0x4ffc00000, data 0x551029b/0x56a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3969485 data_alloc: 218103808 data_used: 24776704
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:42.649288+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:43.649421+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:44.649546+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:45.649689+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:46.649879+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7107000/0x0/0x4ffc00000, data 0x551029b/0x56a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3969485 data_alloc: 218103808 data_used: 24776704
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:47.650023+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:48.650172+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:49.650310+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:50.650492+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343031808 unmapped: 50569216 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7107000/0x0/0x4ffc00000, data 0x551029b/0x56a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:51.650627+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557abb03dc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.833803177s of 16.100864410s, submitted: 84
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 40574976 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4074802 data_alloc: 218103808 data_used: 24776704
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557abb03dc00 session 0x557ab792f860
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab5292960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7348400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7348400 session 0x557ab6bfa000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6693800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab46d2b40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:52.650779+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab57cbe00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 344006656 unmapped: 49594368 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:53.650906+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 344006656 unmapped: 49594368 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e664a000/0x0/0x4ffc00000, data 0x5fcd29b/0x6164000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:54.651027+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 344006656 unmapped: 49594368 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab6fcc780
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:55.651724+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557abb03dc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557abb03dc00 session 0x557ab6fcde00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 344006656 unmapped: 49594368 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab89b6400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab89b6400 session 0x557ab792ed20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6693800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:56.651872+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab792d0e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e664a000/0x0/0x4ffc00000, data 0x5fcd29b/0x6164000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 344014848 unmapped: 49586176 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4052393 data_alloc: 218103808 data_used: 24776704
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:57.652051+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6649000/0x0/0x4ffc00000, data 0x5fcd2ab/0x6165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 344014848 unmapped: 49586176 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:58.652180+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345096192 unmapped: 48504832 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:59.652335+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:00.652476+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6649000/0x0/0x4ffc00000, data 0x5fcd2ab/0x6165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6649000/0x0/0x4ffc00000, data 0x5fcd2ab/0x6165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:01.652652+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4131217 data_alloc: 234881024 data_used: 34672640
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:02.652878+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:03.653020+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6649000/0x0/0x4ffc00000, data 0x5fcd2ab/0x6165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:04.653147+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:05.653345+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6649000/0x0/0x4ffc00000, data 0x5fcd2ab/0x6165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:06.653489+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4131217 data_alloc: 234881024 data_used: 34672640
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:07.653675+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.667927742s of 16.314342499s, submitted: 35
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345686016 unmapped: 47915008 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:08.653882+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 43425792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5eac000/0x0/0x4ffc00000, data 0x67642ab/0x68fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:09.654058+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 41623552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:10.654223+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 41615360 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:11.654389+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 41615360 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4217607 data_alloc: 234881024 data_used: 36577280
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:12.654568+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 41517056 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:13.654734+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 41517056 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:14.654955+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5e16000/0x0/0x4ffc00000, data 0x67f22ab/0x698a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab7951e00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab552c960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 41517056 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557abb03dc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:15.655122+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557abb03dc00 session 0x557ab72a5680
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348905472 unmapped: 44695552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5e24000/0x0/0x4ffc00000, data 0x67f22ab/0x698a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:16.655281+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348905472 unmapped: 44695552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3973628 data_alloc: 218103808 data_used: 23699456
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:17.655404+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348905472 unmapped: 44695552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:18.655580+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab7295a40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348905472 unmapped: 44695552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6693800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.908845901s of 11.402800560s, submitted: 168
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:19.655707+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab727be00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:20.655850+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:21.657005+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3757716 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:22.657156+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:23.657323+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:24.657466+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:25.658088+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:26.658250+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3757716 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:27.658411+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:28.658646+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:29.658801+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:30.659677+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:31.660398+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3757716 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:32.660518+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e84c5000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:33.660895+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:34.661152+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:35.661318+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 51560448 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:36.661470+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab6df7e00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab6bfab40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab5521860
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557abb03dc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557abb03dc00 session 0x557ab46d2f00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6693800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.600893021s of 17.668704987s, submitted: 22
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 50839552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab792d4a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3838253 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab46d3a40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:37.661613+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 50839552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7c02000/0x0/0x4ffc00000, data 0x4a142fd/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:38.661765+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7c02000/0x0/0x4ffc00000, data 0x4a142fd/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 50839552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:39.661983+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7c02000/0x0/0x4ffc00000, data 0x4a142fd/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 50839552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab6bfad20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:40.662139+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab696e780
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 50839552 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557abb03dc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557abb03dc00 session 0x557ab6a92f00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6693800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:41.662328+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab58ce780
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342908928 unmapped: 50692096 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842804 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:42.662449+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342908928 unmapped: 50692096 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7bdc000/0x0/0x4ffc00000, data 0x4a38330/0x4bd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:43.662578+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:44.662694+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:45.662805+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:46.663037+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3906804 data_alloc: 218103808 data_used: 23220224
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:47.663219+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:48.663357+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7bdc000/0x0/0x4ffc00000, data 0x4a38330/0x4bd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:49.663463+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7bdc000/0x0/0x4ffc00000, data 0x4a38330/0x4bd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:50.663585+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:51.663718+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 50716672 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3906804 data_alloc: 218103808 data_used: 23220224
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:52.663892+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.982942581s of 16.169082642s, submitted: 47
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 344268800 unmapped: 49332224 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:53.664031+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348372992 unmapped: 45228032 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:54.664170+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5b16000/0x0/0x4ffc00000, data 0x5958330/0x5af2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:55.664330+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:56.664460+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4046962 data_alloc: 234881024 data_used: 25493504
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:57.664608+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:58.664748+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5a88000/0x0/0x4ffc00000, data 0x59e6330/0x5b80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:59.664878+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:00.665036+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:01.665203+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4044926 data_alloc: 234881024 data_used: 25497600
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:02.665357+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5a6f000/0x0/0x4ffc00000, data 0x5a05330/0x5b9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:03.665462+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:04.665588+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:05.665801+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.468229294s of 12.845428467s, submitted: 168
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab72a41e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab5293a40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 44449792 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:06.665949+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab72a5e00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773583 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:07.666102+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:08.666285+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:09.666482+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:10.666633+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:11.667044+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773583 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:12.667204+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:13.667393+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:14.667573+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:15.667797+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:16.668094+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773583 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:17.668273+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:18.668445+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:19.668605+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:20.668790+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:21.669011+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773583 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:22.669226+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:23.669377+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:24.669534+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:25.669726+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:26.669920+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773583 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:27.670058+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:28.670228+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:29.670364+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:30.670552+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:31.671249+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343818240 unmapped: 49782784 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773583 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:32.671427+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab9c2c400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.126367569s of 27.255813599s, submitted: 43
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343842816 unmapped: 49758208 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2c400 session 0x557ab58cfe00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:33.671578+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6693800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab686c000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343851008 unmapped: 49750016 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:34.671727+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343851008 unmapped: 49750016 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:35.672423+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343851008 unmapped: 49750016 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:36.673255+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343851008 unmapped: 49750016 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:37.673369+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803540 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343851008 unmapped: 49750016 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:38.673707+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f24000/0x0/0x4ffc00000, data 0x45522fd/0x46ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343851008 unmapped: 49750016 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:39.673876+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab72a4b40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 49741824 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:40.674002+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f23000/0x0/0x4ffc00000, data 0x4552320/0x46eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 49733632 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:41.674185+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:42.674486+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3836118 data_alloc: 218103808 data_used: 18526208
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f23000/0x0/0x4ffc00000, data 0x4552320/0x46eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:43.674685+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f23000/0x0/0x4ffc00000, data 0x4552320/0x46eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:44.674882+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:45.678570+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:46.678799+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f23000/0x0/0x4ffc00000, data 0x4552320/0x46eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:47.678982+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3836118 data_alloc: 218103808 data_used: 18526208
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:48.679150+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f23000/0x0/0x4ffc00000, data 0x4552320/0x46eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:49.679329+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:50.679512+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 343875584 unmapped: 49725440 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:51.679650+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.160448074s of 18.311054230s, submitted: 31
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:52.679798+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 345620480 unmapped: 47980544 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3866682 data_alloc: 218103808 data_used: 18530304
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6c98000/0x0/0x4ffc00000, data 0x47dd320/0x4976000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,1,0,0,0,0,6])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:53.680054+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347807744 unmapped: 45793280 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:54.680275+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 47079424 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:55.680484+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 47079424 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6835000/0x0/0x4ffc00000, data 0x4c40320/0x4dd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:56.680618+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 47079424 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:57.680784+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 47079424 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3888466 data_alloc: 218103808 data_used: 18575360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:58.680948+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 47071232 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:59.681127+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 47071232 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:00.681309+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 47071232 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6824000/0x0/0x4ffc00000, data 0x4c51320/0x4dea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:01.681474+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 47063040 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:02.681641+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 47063040 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3897630 data_alloc: 218103808 data_used: 18673664
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:03.681839+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 47063040 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:04.681965+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 47063040 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:05.682152+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 47054848 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:06.682353+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 47054848 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6824000/0x0/0x4ffc00000, data 0x4c51320/0x4dea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:07.682511+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 47054848 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3897630 data_alloc: 218103808 data_used: 18673664
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:08.682654+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 47054848 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:09.682838+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 47054848 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:10.683170+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 47054848 heap: 393601024 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab9c2e800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.543237686s of 19.606155396s, submitted: 71
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2e800 session 0x557ab46d0f00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:11.683299+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 48537600 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:12.683443+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 48537600 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3966194 data_alloc: 218103808 data_used: 18677760
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5fcf000/0x0/0x4ffc00000, data 0x54a6320/0x563f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:13.683612+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 48529408 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:14.683735+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 48529408 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:15.683948+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 48529408 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528cc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528cc00 session 0x557ab7950f00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5fcf000/0x0/0x4ffc00000, data 0x54a6320/0x563f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4688800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4688800 session 0x557ab551e960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:16.684122+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346701824 unmapped: 48521216 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:17.684296+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5fcf000/0x0/0x4ffc00000, data 0x54a6320/0x563f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346701824 unmapped: 48521216 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3966194 data_alloc: 218103808 data_used: 18677760
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4688800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4688800 session 0x557ab6bfa5a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528cc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528cc00 session 0x557ab6814d20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:18.684428+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346701824 unmapped: 48521216 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6693800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:19.684552+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346701824 unmapped: 48521216 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:20.684686+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:21.684795+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:22.684983+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4022787 data_alloc: 234881024 data_used: 26189824
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5fab000/0x0/0x4ffc00000, data 0x54ca320/0x5663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:23.685167+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:24.685289+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:25.685482+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:26.685623+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5fab000/0x0/0x4ffc00000, data 0x54ca320/0x5663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:27.685765+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4022787 data_alloc: 234881024 data_used: 26189824
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:28.685925+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.141372681s of 17.398494720s, submitted: 23
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 48635904 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:29.686133+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 48627712 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:30.686290+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 45678592 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:31.686454+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5a77000/0x0/0x4ffc00000, data 0x59f0320/0x5b89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [1])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 45670400 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:32.686622+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348889088 unmapped: 46333952 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4093051 data_alloc: 234881024 data_used: 27561984
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:33.686776+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348889088 unmapped: 46333952 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:34.686925+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348889088 unmapped: 46333952 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5a65000/0x0/0x4ffc00000, data 0x5a08320/0x5ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:35.687077+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348889088 unmapped: 46333952 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:36.687281+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348889088 unmapped: 46333952 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5a65000/0x0/0x4ffc00000, data 0x5a08320/0x5ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab46d32c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab72a54a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:37.687678+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab9c2e800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 348889088 unmapped: 46333952 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3915399 data_alloc: 218103808 data_used: 18788352
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2e800 session 0x557ab791f2c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:38.688045+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:39.688316+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:40.688616+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.202589035s of 12.640776634s, submitted: 111
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab6bfa960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab46d2000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:41.689113+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab9c2e800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6823000/0x0/0x4ffc00000, data 0x4c52320/0x4deb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,1])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:42.689356+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3797219 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2e800 session 0x557ab3ee7c20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:43.689610+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:44.689791+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:45.690054+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:46.690256+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:47.690431+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796459 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:48.690602+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:49.690770+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:50.690980+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:51.691253+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:52.691544+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796459 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:53.691897+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:54.692100+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:55.692525+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:56.692914+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:57.693222+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796459 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:58.693441+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e7323000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:59.693588+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4688800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4688800 session 0x557ab686c000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528cc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528cc00 session 0x557ab72a5e00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4688800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4688800 session 0x557ab5293a40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:00.693750+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab72a41e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347545600 unmapped: 47677440 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.335086823s of 19.687835693s, submitted: 52
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab58ce780
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:01.693929+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 47505408 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:02.694074+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e4e000/0x0/0x4ffc00000, data 0x462929b/0x47c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 47505408 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3839331 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:03.694340+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e4e000/0x0/0x4ffc00000, data 0x462929b/0x47c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 47505408 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:04.737948+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 47505408 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab9c2e800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2e800 session 0x557ab6bfad20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6693800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6693800 session 0x557ab46d3a40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:05.738122+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 47505408 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4688800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4688800 session 0x557ab792d4a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab46d2f00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:06.738260+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347734016 unmapped: 47489024 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:07.738432+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347734016 unmapped: 47489024 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3844451 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:08.738629+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab9c2e800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e4c000/0x0/0x4ffc00000, data 0x46292ce/0x47c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:09.738781+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:10.738940+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e4c000/0x0/0x4ffc00000, data 0x46292ce/0x47c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:11.739123+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:12.739238+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3880611 data_alloc: 218103808 data_used: 19320832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:13.739381+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:14.739539+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:15.739794+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:16.740643+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e4c000/0x0/0x4ffc00000, data 0x46292ce/0x47c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:17.740774+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347742208 unmapped: 47480832 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3880611 data_alloc: 218103808 data_used: 19320832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6e4c000/0x0/0x4ffc00000, data 0x46292ce/0x47c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.534839630s of 17.652015686s, submitted: 20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:18.740927+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 43999232 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:19.741082+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 43925504 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:20.741220+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 44318720 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:21.741403+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 44318720 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:22.741610+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 44236800 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3967763 data_alloc: 218103808 data_used: 20377600
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e63e9000/0x0/0x4ffc00000, data 0x508c2ce/0x5225000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:23.741788+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 44236800 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:24.741977+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e63e9000/0x0/0x4ffc00000, data 0x508c2ce/0x5225000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 44236800 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:25.742199+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e63e9000/0x0/0x4ffc00000, data 0x508c2ce/0x5225000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 44236800 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:26.742320+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e63e9000/0x0/0x4ffc00000, data 0x508c2ce/0x5225000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 44236800 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:27.742502+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350986240 unmapped: 44236800 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3967763 data_alloc: 218103808 data_used: 20377600
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:28.742643+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 44228608 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:29.742801+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 44228608 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:30.743013+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 44228608 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:31.743166+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e63e9000/0x0/0x4ffc00000, data 0x508c2ce/0x5225000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 44228608 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:32.743334+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350994432 unmapped: 44228608 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3967763 data_alloc: 218103808 data_used: 20377600
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:33.743500+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 44220416 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:34.743667+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351002624 unmapped: 44220416 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.585489273s of 16.809347153s, submitted: 83
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab685ba40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:35.743833+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4689000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4689000 session 0x557ab4aaab40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528c400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528c400 session 0x557ab6fcc3c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528c400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528c400 session 0x557ab770ef00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4688800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4688800 session 0x557ab552c000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:36.744000+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6086000/0x0/0x4ffc00000, data 0x53ef2ce/0x5588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:37.744154+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4001852 data_alloc: 218103808 data_used: 20377600
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:38.744320+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6086000/0x0/0x4ffc00000, data 0x53ef2ce/0x5588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4689000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4689000 session 0x557ab46ac960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6086000/0x0/0x4ffc00000, data 0x53ef2ce/0x5588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab6df9c20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:39.744467+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab685b860
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab551ed20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:40.744622+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4688800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4689000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:41.744736+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:42.744982+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6085000/0x0/0x4ffc00000, data 0x53ef2de/0x5589000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4026574 data_alloc: 218103808 data_used: 23715840
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:43.745092+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:44.745236+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:45.745404+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:46.745575+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6085000/0x0/0x4ffc00000, data 0x53ef2de/0x5589000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:47.745731+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4026734 data_alloc: 218103808 data_used: 23719936
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:48.746060+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6085000/0x0/0x4ffc00000, data 0x53ef2de/0x5589000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:49.746231+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:50.746444+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.436676979s of 15.575437546s, submitted: 24
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:51.746642+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 44269568 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5939000/0x0/0x4ffc00000, data 0x5b392de/0x5cd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:52.746854+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 41713664 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4088814 data_alloc: 218103808 data_used: 23982080
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:53.747020+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 41713664 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:54.747177+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 41484288 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:55.747352+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 41484288 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:56.747539+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 41484288 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:57.747760+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 41484288 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4102462 data_alloc: 218103808 data_used: 23887872
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5467000/0x0/0x4ffc00000, data 0x5bdd2de/0x5d77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:58.747918+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 41484288 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:59.748035+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4688800 session 0x557ab770f860
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4689000 session 0x557ab792ed20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353746944 unmapped: 41476096 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528c400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528c400 session 0x557ab6e74000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:00.748155+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 44007424 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:01.748356+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 44007424 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:02.748512+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 44007424 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3976137 data_alloc: 218103808 data_used: 20381696
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:03.748716+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5fd8000/0x0/0x4ffc00000, data 0x508d2ce/0x5226000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 44007424 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:04.748875+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2e800 session 0x557ab792e960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.293063164s of 13.741784096s, submitted: 135
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab72a5680
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4688800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 44007424 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4688800 session 0x557ab58cf4a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:05.749070+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:06.749266+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:07.749415+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f14000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818412 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:08.749656+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:09.749856+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:10.750037+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f14000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:11.750199+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:12.750320+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818412 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:13.750505+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:14.750674+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:15.750897+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:16.751068+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f14000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:17.751210+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818412 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:18.751331+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:19.751517+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:20.751732+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:21.751918+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f14000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:22.752073+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350126080 unmapped: 45096960 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818412 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:23.752244+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 45088768 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:24.752360+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350134272 unmapped: 45088768 heap: 395223040 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4689000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.724020004s of 20.886013031s, submitted: 43
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:25.752517+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab4689000 session 0x557ab6fcd4a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f14000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 52944896 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:26.752653+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 52944896 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:27.752851+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 52944896 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3888416 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:28.752983+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 52936704 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:29.753090+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350158848 unmapped: 52936704 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 175K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.71 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3122 writes, 13K keys, 3122 commit groups, 1.0 writes per commit group, ingest: 16.94 MB, 0.03 MB/s
                                           Interval WAL: 3122 writes, 1172 syncs, 2.66 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:30.753262+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528c400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528c400 session 0x557ab79503c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets getting new tickets!
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:31.753496+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _finish_auth 0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:31.754446+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e661f000/0x0/0x4ffc00000, data 0x4a4829b/0x4bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:32.753635+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab9c2e800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3899108 data_alloc: 218103808 data_used: 15417344
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:33.753773+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:34.753914+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab9c2e800 session 0x557ab792d0e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab727a1e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:35.754104+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab5467800 session 0x557ab792e5a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4688800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:36.754301+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: mgrc ms_handle_reset ms_handle_reset con 0x557ab92f9000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2898047278
Oct 11 10:01:08 compute-0 ceph-osd[89278]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2898047278,v1:192.168.122.100:6801/2898047278]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: get_auth_request con 0x557ab6693800 auth_method 0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: mgrc handle_mgr_configure stats_period=5
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e661f000/0x0/0x4ffc00000, data 0x4a4829b/0x4bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557abb03c400 session 0x557ab682b2c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557aba78ac00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:37.754433+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab85ec800 session 0x557ab551eb40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557abb03c400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3957188 data_alloc: 218103808 data_used: 23584768
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:38.754612+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:39.754767+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528c400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e661f000/0x0/0x4ffc00000, data 0x4a4829b/0x4bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:40.754929+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:41.755096+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:42.755238+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3957188 data_alloc: 218103808 data_used: 23584768
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528c400 session 0x557ab6a93680
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:43.755376+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e661f000/0x0/0x4ffc00000, data 0x4a4829b/0x4bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:44.755553+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:45.755732+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:46.755937+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:47.756106+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3957188 data_alloc: 218103808 data_used: 23584768
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e661f000/0x0/0x4ffc00000, data 0x4a4829b/0x4bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:48.756252+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:49.756387+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:50.756579+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e661f000/0x0/0x4ffc00000, data 0x4a4829b/0x4bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:51.756903+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab57cad20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 52854784 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.894992828s of 27.005804062s, submitted: 10
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:52.757054+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab6a93c20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824103 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:53.757381+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:54.757564+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:55.757861+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:56.758094+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:57.758346+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824103 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:58.758467+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:59.758652+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:00.758850+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:01.758997+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 56172544 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:02.759333+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 56164352 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824103 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:03.759618+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 56164352 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:04.759811+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 56164352 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:05.760054+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 56164352 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:06.760260+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 56164352 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:07.760489+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 56164352 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824103 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:08.760725+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 56156160 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:09.785531+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 56156160 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:10.785926+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 56156160 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:11.786206+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 56156160 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:12.786510+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 56156160 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824103 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:13.786749+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346947584 unmapped: 56147968 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:14.787020+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346947584 unmapped: 56147968 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:15.787298+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346947584 unmapped: 56147968 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:16.787482+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 56139776 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:17.787675+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 56139776 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824103 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:18.788034+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 56139776 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:19.788284+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 56139776 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4246144449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:20.788620+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 56139776 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:21.788808+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 56139776 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:22.788968+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 56139776 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824103 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:23.789181+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 56139776 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:24.789337+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346963968 unmapped: 56131584 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:25.789536+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346963968 unmapped: 56131584 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:26.789679+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346963968 unmapped: 56131584 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:27.789894+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346963968 unmapped: 56131584 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3824103 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:28.790037+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:29.790184+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6f15000/0x0/0x4ffc00000, data 0x415229b/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:30.790339+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab77a0000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.082767487s of 38.139495850s, submitted: 16
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab77a0000 session 0x557ab685ab40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528c400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab528c400 session 0x557ab5520960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab6dd9800 session 0x557ab46d14a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab714bc00 session 0x557ab72a4d20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab7347000 session 0x557ab552c3c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:31.790502+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:32.790620+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3880678 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:33.790790+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:34.790956+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:35.791065+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6795000/0x0/0x4ffc00000, data 0x48d229b/0x4a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:36.791236+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab92fb400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab92fb400 session 0x557ab5462f00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6795000/0x0/0x4ffc00000, data 0x48d229b/0x4a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 56123392 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:37.791382+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528c400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 346988544 unmapped: 56107008 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3882639 data_alloc: 218103808 data_used: 14331904
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:38.791507+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6794000/0x0/0x4ffc00000, data 0x48d22be/0x4a6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 55222272 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:39.791627+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:40.791828+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:41.791962+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6794000/0x0/0x4ffc00000, data 0x48d22be/0x4a6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:42.792113+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3938479 data_alloc: 218103808 data_used: 21680128
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:43.792298+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:44.792444+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:45.792664+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:46.792861+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6794000/0x0/0x4ffc00000, data 0x48d22be/0x4a6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:47.793802+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 350306304 unmapped: 52789248 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3938479 data_alloc: 218103808 data_used: 21680128
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:48.793965+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.121250153s of 18.225706100s, submitted: 20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:49.794095+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:50.794238+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:51.794378+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5324000/0x0/0x4ffc00000, data 0x4ba22be/0x4d3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab5468c00 session 0x557ab696e3c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:52.794537+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3959195 data_alloc: 218103808 data_used: 21684224
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:53.794858+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:54.795032+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:55.795286+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:56.795386+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5324000/0x0/0x4ffc00000, data 0x4ba22be/0x4d3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 49848320 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:57.795558+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 49840128 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3959195 data_alloc: 218103808 data_used: 21684224
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:58.795796+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e5324000/0x0/0x4ffc00000, data 0x4ba22be/0x4d3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 49840128 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:59.796075+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab5468c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 ms_handle_reset con 0x557ab5468c00 session 0x557ab770f860
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.425669670s of 11.516405106s, submitted: 17
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 49840128 heap: 403095552 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:00.796350+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _renew_subs
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 289 ms_handle_reset con 0x557ab714bc00 session 0x557ab685b860
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4c45000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 289 ms_handle_reset con 0x557ab7347000 session 0x557ab685ba40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 289 ms_handle_reset con 0x557ab4c45000 session 0x557ab52923c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6693c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 53772288 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 289 ms_handle_reset con 0x557ab6693c00 session 0x557ab6bfad20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:01.796609+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4c45000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 290 ms_handle_reset con 0x557ab4c45000 session 0x557ab685ad20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab5468c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 53698560 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:02.796756+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 291 ms_handle_reset con 0x557ab5468c00 session 0x557ab6845860
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 53690368 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4149632 data_alloc: 234881024 data_used: 24653824
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:03.797042+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 291 ms_handle_reset con 0x557ab714bc00 session 0x557ab79501e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: get_auth_request con 0x557ab58a5000 auth_method 0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 291 ms_handle_reset con 0x557ab7347000 session 0x557ab6df7c20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6680800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 291 ms_handle_reset con 0x557ab6680800 session 0x557ab68443c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e3f90000/0x0/0x4ffc00000, data 0x5f2e617/0x60cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 53690368 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:04.797238+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 53510144 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:05.797451+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 53510144 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:06.797680+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 53510144 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:07.797885+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 53510144 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:08.798031+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4149632 data_alloc: 234881024 data_used: 24653824
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 53510144 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:09.798220+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e3f90000/0x0/0x4ffc00000, data 0x5f2e617/0x60cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4c45000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 291 ms_handle_reset con 0x557ab4c45000 session 0x557ab6df7e00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 291 handle_osd_map epochs [292,292], i have 292, src has [1,292]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e3f90000/0x0/0x4ffc00000, data 0x5f2e617/0x60cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 354713600 unmapped: 59793408 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:10.798359+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab5468c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab5468c00 session 0x557ab55214a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab714bc00 session 0x557ab5bfbe00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab7347000 session 0x557ab54272c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab65a0c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab65a0c00 session 0x557ab6e74780
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714f000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab714f000 session 0x557ab5c034a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 354713600 unmapped: 59793408 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:11.798516+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 354721792 unmapped: 59785216 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:12.798670+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e3f8e000/0x0/0x4ffc00000, data 0x5f3007a/0x60cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 354721792 unmapped: 59785216 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:13.798935+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4135806 data_alloc: 234881024 data_used: 24653824
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 354721792 unmapped: 59785216 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:14.799131+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab5468c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab5468c00 session 0x557ab79510e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 354729984 unmapped: 59777024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:15.799355+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab65a0c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab65a0c00 session 0x557ab6815680
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab714bc00 session 0x557ab685af00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.129091263s of 16.006256104s, submitted: 187
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab7347000 session 0x557ab79510e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 59465728 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:16.799518+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e3f6a000/0x0/0x4ffc00000, data 0x5f5408a/0x60f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528e800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab68a1c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 59465728 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:17.799627+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 59392000 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:18.799744+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4170080 data_alloc: 234881024 data_used: 28872704
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:19.799873+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 58294272 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:20.800018+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 58294272 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:21.800203+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356220928 unmapped: 58286080 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e3f6a000/0x0/0x4ffc00000, data 0x5f5408a/0x60f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:22.800338+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356220928 unmapped: 58286080 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:23.801054+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356220928 unmapped: 58286080 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4215680 data_alloc: 234881024 data_used: 35340288
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:24.801178+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356220928 unmapped: 58286080 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:25.801322+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356220928 unmapped: 58286080 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:26.801453+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356220928 unmapped: 58286080 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e3f6a000/0x0/0x4ffc00000, data 0x5f5408a/0x60f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:27.801575+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356220928 unmapped: 58286080 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:28.801678+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 58277888 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.251224518s of 12.262004852s, submitted: 2
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4264664 data_alloc: 234881024 data_used: 35364864
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:29.801856+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360923136 unmapped: 53583872 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:30.802030+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 53518336 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:31.802208+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 53518336 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:32.802345+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e39e4000/0x0/0x4ffc00000, data 0x64da08a/0x667a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 53518336 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:33.802478+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 53518336 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4291066 data_alloc: 234881024 data_used: 37998592
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:34.802645+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e39e4000/0x0/0x4ffc00000, data 0x64da08a/0x667a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:35.802889+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:36.803196+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e39e4000/0x0/0x4ffc00000, data 0x64da08a/0x667a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:37.803357+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:38.803553+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4283402 data_alloc: 234881024 data_used: 37998592
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:39.803864+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:40.804011+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.373227119s of 12.451822281s, submitted: 9
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:41.804190+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:42.804383+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e39e4000/0x0/0x4ffc00000, data 0x64da08a/0x667a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:43.804569+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e39e4000/0x0/0x4ffc00000, data 0x64da08a/0x667a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4283050 data_alloc: 234881024 data_used: 37998592
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:44.804714+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:45.804910+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:46.805085+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:47.805227+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e39e4000/0x0/0x4ffc00000, data 0x64da08a/0x667a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:48.805372+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4282874 data_alloc: 234881024 data_used: 37998592
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:49.805513+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 56705024 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:50.805649+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 56696832 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:51.805792+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 56696832 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:52.805992+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 56688640 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:53.806146+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 56688640 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4282874 data_alloc: 234881024 data_used: 37998592
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab6684800 session 0x557ab685a960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab5468c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e39e4000/0x0/0x4ffc00000, data 0x64da08a/0x667a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab7347400 session 0x557ab792e1e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab65a0c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:54.806282+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 56197120 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 ms_handle_reset con 0x557ab7347400 session 0x557ab57cbe00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:55.806459+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 56197120 heap: 414507008 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.957171440s of 14.975679398s, submitted: 4
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 293 ms_handle_reset con 0x557ab714bc00 session 0x557ab6a93a40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7f88000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 293 ms_handle_reset con 0x557ab7347000 session 0x557ab46ad680
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 293 ms_handle_reset con 0x557ab7f88000 session 0x557ab6df7680
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528d800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:56.806600+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 394436608 unmapped: 32677888 heap: 427114496 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 293 ms_handle_reset con 0x557ab528d800 session 0x557ab792c780
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528d800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 293 ms_handle_reset con 0x557ab528d800 session 0x557ab6fccd20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _renew_subs
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:57.806702+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370409472 unmapped: 60907520 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _renew_subs
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 294 handle_osd_map epochs [295,295], i have 295, src has [1,295]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 295 ms_handle_reset con 0x557ab714bc00 session 0x557ab696e960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:58.806878+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370417664 unmapped: 60899328 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4639141 data_alloc: 251658240 data_used: 49020928
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 295 ms_handle_reset con 0x557ab7347000 session 0x557ab6fcc3c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e10d4000/0x0/0x4ffc00000, data 0x8de03e3/0x8f86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,1])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 295 ms_handle_reset con 0x557ab7347400 session 0x557ab685a5a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7f88000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 295 ms_handle_reset con 0x557ab7f88000 session 0x557ab6df74a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528d800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:59.807039+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370483200 unmapped: 60833792 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _renew_subs
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 296 ms_handle_reset con 0x557ab528d800 session 0x557ab7951e00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:00.807197+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370499584 unmapped: 60817408 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:01.807352+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370499584 unmapped: 60817408 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:02.807515+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370499584 unmapped: 60817408 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 296 ms_handle_reset con 0x557ab528e800 session 0x557ab6e74780
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 296 ms_handle_reset con 0x557ab68a1c00 session 0x557ab54272c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab714bc00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:03.807667+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e39d7000/0x0/0x4ffc00000, data 0x64e0f5e/0x6686000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370499584 unmapped: 60817408 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4357326 data_alloc: 251658240 data_used: 49016832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _renew_subs
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 297 ms_handle_reset con 0x557ab714bc00 session 0x557ab682a5a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:04.807787+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370835456 unmapped: 60481536 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e3f5f000/0x0/0x4ffc00000, data 0x5f5af4e/0x60ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:05.807982+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370835456 unmapped: 60481536 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:06.808126+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370843648 unmapped: 60473344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.327554703s of 11.279193878s, submitted: 148
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:07.808286+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 370843648 unmapped: 60473344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 298 ms_handle_reset con 0x557ab7347000 session 0x557ab791e000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e3f80000/0x0/0x4ffc00000, data 0x5f389e9/0x60de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:08.808442+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e5305000/0x0/0x4ffc00000, data 0x4bb3548/0x4d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 359546880 unmapped: 71770112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4032294 data_alloc: 234881024 data_used: 24264704
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:09.808617+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 298 ms_handle_reset con 0x557ab528c400 session 0x557ab7294f00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 359546880 unmapped: 71770112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e5305000/0x0/0x4ffc00000, data 0x4bb3548/0x4d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:10.808994+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 298 ms_handle_reset con 0x557ab7347000 session 0x557ab4a7c960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:11.809172+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:12.809396+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e5d57000/0x0/0x4ffc00000, data 0x4163525/0x4307000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:13.809539+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3895224 data_alloc: 218103808 data_used: 13447168
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:14.809702+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:15.809931+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e5d57000/0x0/0x4ffc00000, data 0x4163525/0x4307000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:16.810113+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:17.810334+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d53000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:18.810520+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3899398 data_alloc: 218103808 data_used: 13455360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:19.810708+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:20.810893+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d53000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:21.811050+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:22.811209+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:23.811369+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3899398 data_alloc: 218103808 data_used: 13455360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:24.811519+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:25.811697+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:26.811958+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d53000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:27.812190+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:28.812357+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3899398 data_alloc: 218103808 data_used: 13455360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:29.812508+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:30.812636+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d53000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:31.812798+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:32.813001+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 355950592 unmapped: 75366400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:33.813126+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528d800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.629646301s of 26.253942490s, submitted: 88
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab528d800 session 0x557ab46ad0e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528e800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab528e800 session 0x557ab46d1c20
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356433920 unmapped: 74883072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab68a1c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab68a1c00 session 0x557ab792c960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3973225 data_alloc: 218103808 data_used: 13455360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab68a1c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab68a1c00 session 0x557ab686c1e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528c400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab528c400 session 0x557ab57cab40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:34.813303+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356433920 unmapped: 74883072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5510000/0x0/0x4ffc00000, data 0x49a8f88/0x4b4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:35.813543+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356433920 unmapped: 74883072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5510000/0x0/0x4ffc00000, data 0x49a8f88/0x4b4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528d800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab528d800 session 0x557ab46d30e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528e800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7347000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:36.813661+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356671488 unmapped: 74645504 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:37.813762+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 356687872 unmapped: 74629120 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:38.813929+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357310464 unmapped: 74006528 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4028302 data_alloc: 218103808 data_used: 20701184
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:39.814134+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357310464 unmapped: 74006528 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:40.814321+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab528e800 session 0x557ab5292960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab7347000 session 0x557ab685ab40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 357310464 unmapped: 74006528 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528c400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab528c400 session 0x557ab682b2c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:41.814466+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:42.814654+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:43.814871+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:44.815033+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:45.815240+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:46.815414+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:47.815598+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:48.815841+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:49.816028+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:50.816200+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:51.816338+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:52.816516+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:53.816651+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:54.816796+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:55.816968+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:56.817081+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:57.817331+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:58.817468+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:59.817609+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:00.817777+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:01.817913+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:02.818076+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:03.818212+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:04.818403+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:05.818610+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:06.818776+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:07.819003+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:08.819169+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:09.819330+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:10.819488+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:11.819656+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:12.819908+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:13.820070+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:14.820196+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:15.820370+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 ms_handle_reset con 0x557ab6696800 session 0x557ab551f680
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528d800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:16.820587+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:17.820757+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:18.820960+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:19.821154+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:20.821324+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:21.821465+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:22.821620+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:23.821777+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:24.821970+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:25.822485+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:26.822705+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:27.823053+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:28.823361+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:29.823530+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:30.823874+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:31.824041+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:32.824294+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:33.824466+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:34.824603+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:35.824776+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:36.825013+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:37.825221+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:38.825412+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:39.825599+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:40.825740+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:41.825896+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:42.826090+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:43.826232+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907992 data_alloc: 218103808 data_used: 13455360
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e5d54000/0x0/0x4ffc00000, data 0x4164f88/0x430a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:44.826419+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:45.826600+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:46.826747+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 79192064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6696800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 73.412239075s of 73.797233582s, submitted: 87
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _renew_subs
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:47.826873+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 300 ms_handle_reset con 0x557ab6696800 session 0x557ab685ab40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 79183872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e5d51000/0x0/0x4ffc00000, data 0x4166b36/0x430c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:48.827054+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 79183872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911452 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:49.827224+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 79183872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:50.827376+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 79183872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:51.827561+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 79183872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:52.827733+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 79183872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:53.827908+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 79183872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911452 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e5d51000/0x0/0x4ffc00000, data 0x4166b36/0x430c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:54.828268+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4e000/0x0/0x4ffc00000, data 0x4168599/0x430f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:55.828444+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:56.828622+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:57.828780+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.917225838s of 11.012996674s, submitted: 35
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:58.828933+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3913898 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4f000/0x0/0x4ffc00000, data 0x4168599/0x430f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:59.829074+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:00.829262+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:01.829552+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:02.829708+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:03.829949+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 79175680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914198 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:04.830112+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 79167488 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:05.830283+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 79167488 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:06.830412+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 79167488 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:07.830563+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 79167488 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:08.830710+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914198 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:09.830875+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:10.830991+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:11.831102+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:12.831238+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:13.831369+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914198 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:14.831523+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:15.831752+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:16.831961+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:17.843646+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 79159296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:18.843853+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 79151104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914198 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:19.844052+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 79151104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:20.844238+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 79151104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:21.844417+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 79151104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:22.844610+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.548757553s of 24.578418732s, submitted: 6
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 79151104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:23.844764+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 79151104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:24.844949+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 79142912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:25.845181+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 79142912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:26.845336+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 79142912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:27.845493+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 79142912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:28.845674+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 79142912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:29.845983+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 79134720 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:30.846147+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 79134720 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:31.846337+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 79134720 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:32.846510+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.233197212s of 10.238834381s, submitted: 1
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 79110144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:33.846687+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 79110144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:34.846904+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 79110144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:35.847099+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 79110144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:36.847247+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 79110144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:37.847398+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 79110144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:38.847570+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 79110144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:39.847765+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 79110144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:40.847926+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 79101952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:41.848092+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 79101952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:42.848281+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 79101952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:43.848433+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 79101952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:44.848610+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 79101952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:45.848898+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 79093760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:46.849050+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 79093760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:47.849229+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 79093760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:48.849400+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 79093760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:49.849588+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 79093760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:50.849745+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 79093760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:51.849926+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 79093760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:52.850088+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 79093760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:53.850249+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 79085568 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:54.850448+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 79085568 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:55.850677+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 79085568 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:56.850925+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 79077376 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:57.851069+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 79077376 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:58.851229+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 79077376 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:59.851387+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 79077376 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:00.851591+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 79077376 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:01.851791+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 79069184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:02.851979+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 79069184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:03.852161+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 79069184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:04.852287+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 79069184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:05.852440+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 79069184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:06.852608+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 79069184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:07.852758+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 79069184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:08.852943+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352256000 unmapped: 79060992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:09.853122+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352256000 unmapped: 79060992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:10.853297+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352256000 unmapped: 79060992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:11.853442+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352256000 unmapped: 79060992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:12.853608+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 79052800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:13.853763+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 79052800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:14.853936+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 79052800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:15.854152+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 79044608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:16.854339+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 79044608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:17.854498+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 79044608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:18.854715+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 79044608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:19.854865+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 79044608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:20.855005+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 79044608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:21.855127+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 79044608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:22.855292+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 79044608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:23.855431+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352280576 unmapped: 79036416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:24.855619+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352280576 unmapped: 79036416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:25.855795+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352280576 unmapped: 79036416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:26.855987+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352280576 unmapped: 79036416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:27.856147+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352280576 unmapped: 79036416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:28.856304+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:29.856446+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 79028224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:30.856603+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 79028224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:31.856756+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 79028224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:32.856918+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 79028224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:33.857072+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 79028224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:34.857234+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 79020032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:35.857407+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 79020032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:36.857541+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 79020032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:37.857678+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 79020032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:38.857869+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 79020032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:39.858027+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 79011840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:40.858188+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 79011840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:41.858395+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 79011840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:42.858577+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 79011840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:43.858746+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 79011840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:44.858896+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 79003648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:45.859089+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 79003648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:46.859280+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 79003648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:47.859437+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 79003648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:48.859609+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 79003648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:49.859902+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 79003648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914374 data_alloc: 218103808 data_used: 13463552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:50.860007+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 79003648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:51.860154+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528e800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 79003648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 ms_handle_reset con 0x557ab528e800 session 0x557ab57cab40
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab68a1c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 ms_handle_reset con 0x557ab68a1c00 session 0x557ab686c1e0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:52.860313+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 73261056 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:53.860670+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 73261056 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:54.884446+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 73261056 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3922694 data_alloc: 218103808 data_used: 17199104
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:55.884750+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 73261056 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:56.884944+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 73261056 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e5d4c000/0x0/0x4ffc00000, data 0x416b599/0x4312000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:57.885123+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 73261056 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:58.885285+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 73252864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:59.885425+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 73252864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3922694 data_alloc: 218103808 data_used: 17199104
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:00.885575+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 73252864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7f89c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:01.885771+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 73252864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _renew_subs
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 88.754707336s of 88.770294189s, submitted: 3
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e5d48000/0x0/0x4ffc00000, data 0x416d16a/0x4315000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 302 ms_handle_reset con 0x557ab7f89c00 session 0x557ab696e960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:02.885948+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353927168 unmapped: 77389824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e6548000/0x0/0x4ffc00000, data 0x396d16a/0x3b15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:03.886158+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353927168 unmapped: 77389824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:04.886324+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353927168 unmapped: 77389824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3852284 data_alloc: 218103808 data_used: 10391552
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7f89c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:05.886579+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353927168 unmapped: 77389824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _renew_subs
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 303 ms_handle_reset con 0x557ab7f89c00 session 0x557ab6df7680
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:06.887865+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:07.888002+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 303 heartbeat osd_stat(store_statfs(0x4e71b6000/0x0/0x4ffc00000, data 0x2cfed18/0x2ea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:08.888135+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:09.888263+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752526 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e71b3000/0x0/0x4ffc00000, data 0x2d00797/0x2eaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:10.888418+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:11.888589+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e71b3000/0x0/0x4ffc00000, data 0x2d00797/0x2eaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:12.888775+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e71b3000/0x0/0x4ffc00000, data 0x2d00797/0x2eaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:13.888957+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:14.889115+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752526 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:15.889295+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e71b3000/0x0/0x4ffc00000, data 0x2d00797/0x2eaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.719218254s of 13.948991776s, submitted: 68
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:16.889448+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:17.889640+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:18.889890+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:19.890041+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3755500 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:20.890236+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e71b0000/0x0/0x4ffc00000, data 0x2d021fa/0x2ead000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528c400
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:21.890382+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _renew_subs
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 ms_handle_reset con 0x557ab528c400 session 0x557ab72a5680
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:22.890533+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:23.890682+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:24.890864+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:25.891035+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:26.891169+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:27.891315+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:28.891476+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:29.891797+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:30.891985+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:31.892137+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 80019456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:32.892287+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:33.892415+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:34.892608+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:35.892809+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:36.893049+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:37.893228+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:38.893412+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:39.893649+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:40.893851+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 80003072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:41.894190+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 80003072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:42.894375+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 80003072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:43.894568+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 80003072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:44.894771+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 79994880 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:45.895033+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 79994880 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:46.895352+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 79994880 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:47.895761+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 79994880 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:48.896092+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 79978496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:49.896300+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 79978496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:50.896556+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 79978496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:51.896878+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 79978496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:52.897003+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 79970304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:53.897231+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 79970304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:54.897457+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 79970304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:55.897685+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:56.897881+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:57.898051+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:58.898239+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:59.898467+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:00.898656+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:01.898881+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:02.899073+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 79945728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:03.899263+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 79945728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:04.899394+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 79945728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:05.899568+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 79945728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:06.899715+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 79945728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:07.899883+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 79945728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:08.900040+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 79945728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:09.900247+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 79937536 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:10.900442+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 79937536 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:11.900683+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 79937536 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:12.900891+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 79929344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:13.901285+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 79929344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:14.902497+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 79929344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:15.903103+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 79929344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:16.903914+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 79929344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:17.904090+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 79912960 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:18.904691+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 79912960 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:19.904961+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 79912960 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:20.905254+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:21.905410+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:22.905865+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:23.906145+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:24.906271+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:25.906593+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:26.906989+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 79896576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:27.907207+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 79896576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:28.907425+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 79888384 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:29.907868+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 79888384 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:30.908132+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 79888384 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:31.908415+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351428608 unmapped: 79888384 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:32.908921+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:33.909056+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:34.909170+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:35.909332+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:36.909499+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:37.909626+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:38.909750+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:39.909914+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 79863808 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:40.910082+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 79863808 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:41.910372+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 79863808 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:42.910539+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 79863808 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:43.910741+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 79863808 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:44.910939+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 79855616 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:45.911116+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 79855616 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:46.911344+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 79855616 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:47.911628+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 79847424 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:48.911967+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 79847424 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:49.912220+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:50.912404+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:51.912564+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:52.912775+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:53.912935+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:54.913122+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 79831040 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:55.913949+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 79831040 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:56.914080+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351485952 unmapped: 79831040 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:57.914296+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 79822848 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:58.914488+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 79822848 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:59.914647+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 79822848 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:00.914879+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351502336 unmapped: 79814656 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:01.915118+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351502336 unmapped: 79814656 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:02.915313+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351510528 unmapped: 79806464 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:03.915500+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 79798272 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:04.915707+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 79798272 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760288 data_alloc: 218103808 data_used: 7446528
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e71ac000/0x0/0x4ffc00000, data 0x2d03d87/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:05.915971+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351518720 unmapped: 79798272 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab528e800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 110.632034302s of 110.660163879s, submitted: 18
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:06.916154+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 79781888 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:07.916337+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _renew_subs
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 307 ms_handle_reset con 0x557ab528e800 session 0x557ab791f860
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 79749120 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6696800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:08.916503+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 79749120 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:09.917050+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _renew_subs
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 308 ms_handle_reset con 0x557ab6696800 session 0x557ab46d32c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351584256 unmapped: 79732736 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769757 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:10.917203+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e71a5000/0x0/0x4ffc00000, data 0x2d074a4/0x2eb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351584256 unmapped: 79732736 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:11.917418+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351584256 unmapped: 79732736 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:12.917588+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351584256 unmapped: 79732736 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:13.917763+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 79724544 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:14.917913+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351592448 unmapped: 79724544 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e71a5000/0x0/0x4ffc00000, data 0x2d074a4/0x2eb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769757 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:15.918074+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 79716352 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:16.918216+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 79716352 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:17.918373+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 79716352 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:18.918542+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 79716352 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:19.918688+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 79716352 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769757 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e71a5000/0x0/0x4ffc00000, data 0x2d074a4/0x2eb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:20.918840+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 79708160 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:21.918997+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 79708160 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:22.919214+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 79708160 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:23.919345+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351608832 unmapped: 79708160 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:24.919504+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 79699968 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769757 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:25.919757+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 79699968 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e71a5000/0x0/0x4ffc00000, data 0x2d074a4/0x2eb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:26.919970+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 79699968 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:27.920144+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 79691776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:28.920316+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 79691776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:29.920467+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 79691776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769757 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:30.920613+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e71a5000/0x0/0x4ffc00000, data 0x2d074a4/0x2eb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 79691776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:31.920774+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351625216 unmapped: 79691776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:32.921026+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 79683584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:33.921193+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 79683584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:34.921357+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 79683584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769757 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:35.921561+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351633408 unmapped: 79683584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e71a5000/0x0/0x4ffc00000, data 0x2d074a4/0x2eb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:36.921758+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351641600 unmapped: 79675392 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:37.921978+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351649792 unmapped: 79667200 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab68a1c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:38.922130+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 308 handle_osd_map epochs [308,309], i have 308, src has [1,309]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.523567200s of 32.588008881s, submitted: 14
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 79634432 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 309 ms_handle_reset con 0x557ab68a1c00 session 0x557ab79505a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:39.922359+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e71a3000/0x0/0x4ffc00000, data 0x2d09065/0x2eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351682560 unmapped: 79634432 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3772018 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:40.922566+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 79626240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:41.922757+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 79626240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:42.922951+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 79626240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:43.923089+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351690752 unmapped: 79626240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:44.923314+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e71a3000/0x0/0x4ffc00000, data 0x2d09065/0x2eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 79585280 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:45.923560+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 79577088 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:46.923734+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 79577088 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:47.923873+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 79577088 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:48.923976+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 79577088 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:49.924137+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 79577088 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:50.924287+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 79577088 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:51.924463+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 79568896 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:52.924624+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 79568896 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:53.924776+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 79568896 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:54.924916+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 79568896 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:55.925118+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 79568896 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:56.925245+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 79560704 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:57.925433+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 79560704 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:58.925615+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 79560704 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:59.925767+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 79560704 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:00.925948+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 79552512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:01.926117+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 79552512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:02.926253+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 79552512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:03.926403+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 79552512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:04.926562+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 79544320 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:05.926747+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 79544320 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:06.926895+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351772672 unmapped: 79544320 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:07.927092+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351780864 unmapped: 79536128 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:08.927264+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351780864 unmapped: 79536128 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:09.927434+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351780864 unmapped: 79536128 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:10.927562+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351780864 unmapped: 79536128 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:11.927754+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351780864 unmapped: 79536128 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:12.927953+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 79527936 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:13.928143+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 79527936 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:14.928325+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 79527936 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:15.928512+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 79527936 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:16.928665+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 79527936 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:17.928881+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 79519744 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:18.929033+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 79519744 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:19.929174+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 79511552 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:20.929312+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351805440 unmapped: 79511552 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:21.929507+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 79503360 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:22.929689+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 79503360 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:23.929899+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 79503360 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:24.930099+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 79503360 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:25.930318+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 79503360 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:26.930488+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 79503360 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:27.930669+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 79503360 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:28.930862+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 79495168 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:29.931026+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 47K writes, 180K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.70 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1324 writes, 4495 keys, 1324 commit groups, 1.0 writes per commit group, ingest: 2.98 MB, 0.00 MB/s
                                           Interval WAL: 1324 writes, 573 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab3103090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab3103090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab3103090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557ab31031f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 79495168 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:30.931172+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 79495168 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:31.931365+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 79486976 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:32.931596+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 79478784 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:33.931725+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 79478784 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:34.931909+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 79478784 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:35.932089+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 79478784 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:36.932224+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 79470592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:37.932388+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 79470592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:38.932581+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 79470592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:39.932726+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 79470592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:40.932873+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:41.933022+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 79470592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:42.933214+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 79470592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:43.933408+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 79470592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:44.933791+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 79462400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:45.934040+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 79462400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:46.934239+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 79462400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:47.934441+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351862784 unmapped: 79454208 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:48.934616+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 79446016 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:49.934810+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 79446016 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:50.935021+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 79446016 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:51.935172+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 79446016 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:52.935299+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 79437824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:53.935447+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 79437824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:54.935631+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 79437824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:55.935830+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 79437824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:56.935971+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 79437824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:57.936154+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 79437824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:58.936308+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 79429632 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:59.936488+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 79429632 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:00.936670+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 79421440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:01.936871+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 79421440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:02.937020+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 79421440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:03.937182+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 79421440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:04.937352+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 79421440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:05.937556+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 79421440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:06.937710+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 79405056 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:07.937884+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 79405056 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:08.938048+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 79396864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:09.938217+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 79396864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:10.938376+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 79396864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:11.938547+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 79396864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:12.938707+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 79396864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:13.938898+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 79396864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:14.939048+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 79396864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:15.939229+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 79396864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:16.939395+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 79388672 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:17.939555+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 79388672 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:18.939732+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 79388672 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:19.939896+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 79388672 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:20.940063+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 79388672 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:21.940235+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 79388672 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:22.940399+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 79380480 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:23.940574+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 79380480 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:24.940738+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 79372288 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:25.940917+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 79372288 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:26.941110+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 79372288 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:27.941313+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 79364096 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:28.941468+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 79355904 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:29.941592+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 79355904 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:30.941747+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 79355904 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:31.941918+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 79355904 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:32.942069+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 79347712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:33.942208+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 79347712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:34.942399+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 79347712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:35.942593+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 79347712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:36.942750+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 79347712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:37.942925+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 79347712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:38.943068+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 79347712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:39.943220+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 79347712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:40.943380+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 79331328 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:41.943572+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 79331328 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:42.943760+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 79331328 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:43.943959+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 79331328 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:44.944158+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 79323136 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:45.944388+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 79323136 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:46.944571+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 79323136 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:47.944734+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 79323136 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:48.944919+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 79314944 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:49.945061+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 79314944 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:50.945226+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 79314944 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:51.945368+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 79314944 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:52.945538+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 79306752 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:53.945634+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 79306752 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:54.945792+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 79298560 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:55.946027+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 79298560 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:56.946176+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 79290368 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:57.946361+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 79290368 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:58.946510+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 79290368 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a0000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:59.946672+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 79290368 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:00.946802+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 79282176 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774992 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 141.775878906s of 141.837173462s, submitted: 27
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:01.946953+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 78225408 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:02.947071+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 78184448 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:03.947198+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 78184448 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:04.947280+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 78176256 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:05.947431+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 78176256 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:06.947605+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 78176256 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:07.947733+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 78176256 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:08.947915+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 78176256 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:09.948071+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 78168064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:10.948234+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 78168064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:11.948387+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 78168064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:12.948535+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 78168064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:13.948752+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 78168064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:14.948960+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 78168064 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:15.949160+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 78159872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:16.949311+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 78159872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:17.949476+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 78159872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:18.949687+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 78151680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:19.949927+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 78151680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:20.950058+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 78151680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:21.950217+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 78151680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:22.950421+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 78151680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:23.950582+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 78151680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:24.950774+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 78151680 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:25.951135+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 78143488 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:26.951311+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 78143488 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:27.951464+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 78143488 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:28.951645+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 78143488 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:29.951898+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 78135296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:30.952080+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 78135296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:31.952250+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 78135296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:32.952483+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 78135296 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:33.952646+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353189888 unmapped: 78127104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:34.952763+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353189888 unmapped: 78127104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:35.952923+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353189888 unmapped: 78127104 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:36.953038+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353198080 unmapped: 78118912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:37.953184+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353198080 unmapped: 78118912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:38.953370+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353198080 unmapped: 78118912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:39.953623+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353198080 unmapped: 78118912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:40.953812+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353198080 unmapped: 78118912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:41.954032+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353198080 unmapped: 78118912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:42.954222+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353198080 unmapped: 78118912 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:43.954398+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 78110720 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:44.954622+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 78110720 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:45.954866+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 78110720 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:46.955082+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 78110720 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:47.955245+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 78094336 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:48.955450+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 78094336 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:49.955640+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 78094336 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:50.955787+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 78094336 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:51.955989+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 78094336 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:52.956193+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 78086144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:53.956328+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 78086144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:54.956517+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 78086144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:55.956686+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 78086144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:56.956894+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 78086144 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:57.957050+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 78077952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:58.957272+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 78077952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:59.957398+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 78077952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:00.957570+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 78077952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:01.957813+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 78077952 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:02.958025+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 78069760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:03.958226+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 78069760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:04.958304+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 78069760 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:05.958484+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 78061568 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:06.958646+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 78061568 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:07.958879+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 78061568 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:08.959036+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353263616 unmapped: 78053376 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:09.959708+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353263616 unmapped: 78053376 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:10.959926+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 78045184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:11.960094+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 78045184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:12.960313+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 78045184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:13.960505+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 78045184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:14.960668+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 78045184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:15.960995+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 78045184 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:16.961172+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 78036992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:17.961308+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 78036992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:18.961442+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 78036992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:19.961596+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 78036992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:20.961734+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 78036992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:21.961879+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 78036992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:22.962031+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 78036992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:23.962185+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 78036992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:24.962311+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 78012416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:25.962497+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 78012416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:26.962656+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 78012416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:27.962788+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 78012416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:28.962959+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 78004224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:29.963177+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 78004224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:30.963340+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 78004224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:31.963491+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 78004224 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:32.963626+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 77996032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:33.963806+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 77996032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:34.964035+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 77996032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:35.964298+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 77996032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:36.964452+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 77996032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:37.964665+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353329152 unmapped: 77987840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:38.964856+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353329152 unmapped: 77987840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:39.965011+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353329152 unmapped: 77987840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:40.965283+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 77979648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:41.965978+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 77979648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:42.966481+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 77979648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:43.966731+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 77979648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:44.966880+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 77979648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:45.967065+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 77979648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:46.967406+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 77979648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:47.967590+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 77979648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:48.967958+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 77971456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:49.968098+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 77971456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:50.968319+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 77971456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:51.968454+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 77971456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:52.968611+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 77971456 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:53.968772+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353353728 unmapped: 77963264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:54.968946+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353353728 unmapped: 77963264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:55.969153+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353353728 unmapped: 77963264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:56.969288+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 77955072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:57.969446+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 77955072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:58.969656+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 77955072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:59.969931+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353378304 unmapped: 77938688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:00.970098+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353378304 unmapped: 77938688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:01.970265+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353378304 unmapped: 77938688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:02.970417+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353378304 unmapped: 77938688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:03.970665+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353378304 unmapped: 77938688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:04.970791+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353386496 unmapped: 77930496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:05.971008+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353386496 unmapped: 77930496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:06.971262+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353386496 unmapped: 77930496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:07.971457+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 77922304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:08.971611+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 77922304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:09.971796+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 77922304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:10.971980+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 77922304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:11.972133+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 77922304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:12.972292+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353402880 unmapped: 77914112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:13.972455+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353402880 unmapped: 77914112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:14.972639+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 77905920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:15.972864+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 77905920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:16.973038+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 77905920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:17.973197+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 77905920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:18.973364+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 77905920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:19.973518+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 77897728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:20.973679+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 77897728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:21.973899+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 77897728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:22.974076+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 77889536 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:23.974244+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 77881344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:24.974393+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 77881344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:25.974555+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 77881344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:26.974696+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 77881344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:27.974936+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 77881344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:28.975108+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 77873152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:29.975252+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 77873152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:30.975381+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 77873152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:31.975543+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 77873152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:32.975718+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 77873152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:33.975865+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 77873152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:34.975992+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 77873152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:35.976172+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353443840 unmapped: 77873152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:36.976328+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 77856768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:37.976519+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 77856768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:38.976665+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 77856768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:39.976894+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 77848576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:40.977393+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 77848576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:41.977654+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353476608 unmapped: 77840384 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:42.977782+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353476608 unmapped: 77840384 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:43.978429+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353476608 unmapped: 77840384 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:44.978670+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 80084992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:45.979078+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 80084992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:46.979232+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 80084992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:47.979375+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 80084992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:48.980145+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 80084992 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:49.980464+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 80076800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:50.980657+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 80076800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:51.980807+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 80076800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:52.980979+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 80076800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:53.981377+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 80076800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:54.981521+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 80076800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:55.981693+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 80076800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:56.981896+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351240192 unmapped: 80076800 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:57.982188+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 80068608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:58.982328+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 80068608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:59.982494+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 80068608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:00.982647+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:01.982870+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 80068608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:02.983054+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 80068608 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:03.983239+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351256576 unmapped: 80060416 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:04.983393+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 80044032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:05.983581+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 80044032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:06.983770+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 80044032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:07.983931+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 80044032 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:08.984102+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351281152 unmapped: 80035840 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:09.984332+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:10.984494+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:11.984662+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:12.984912+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:13.985094+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:14.985279+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:15.985494+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 80027648 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:16.985597+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:17.985736+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:18.985952+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:19.986129+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 80011264 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:20.986296+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 80003072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:21.986427+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 80003072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:22.986570+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 80003072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:23.986721+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351313920 unmapped: 80003072 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:24.986875+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 79994880 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:25.987071+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351330304 unmapped: 79986688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:26.987251+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351330304 unmapped: 79986688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:27.987410+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351330304 unmapped: 79986688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:28.987625+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351330304 unmapped: 79986688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:29.987795+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351330304 unmapped: 79986688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:30.987962+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351330304 unmapped: 79986688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:31.988118+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351330304 unmapped: 79986688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:32.988284+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 79978496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:33.988461+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 79978496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:34.988616+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 79978496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:35.988836+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 79978496 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 ms_handle_reset con 0x557ab4688800 session 0x557ab6bfaf00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab68a1c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:36.989009+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: mgrc ms_handle_reset ms_handle_reset con 0x557ab6693800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2898047278
Oct 11 10:01:08 compute-0 ceph-osd[89278]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2898047278,v1:192.168.122.100:6801/2898047278]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: get_auth_request con 0x557ab7f88000 auth_method 0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: mgrc handle_mgr_configure stats_period=5
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 79970304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 ms_handle_reset con 0x557aba78ac00 session 0x557ab6a92f00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab4689000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 ms_handle_reset con 0x557abb03c400 session 0x557ab68214a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557aba78ac00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:37.989201+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 79970304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:38.989360+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 79970304 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:39.989506+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:40.989679+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:41.989884+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:42.990047+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:43.990218+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:44.990440+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:45.990661+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:46.991003+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:47.991148+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 79962112 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:48.991353+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 79953920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:49.991535+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 79953920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:50.991738+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 79953920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:51.991924+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 79953920 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:52.992087+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 79945728 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:53.992246+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 79937536 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:54.992403+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 79937536 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:55.992607+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 79929344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:56.992768+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 79929344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:57.992915+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351387648 unmapped: 79929344 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:58.993129+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 79921152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:59.993300+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 79921152 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:00.993479+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 79912960 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:01.993669+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 79912960 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:02.993917+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 79912960 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:03.994210+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 79912960 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:04.994379+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:05.994610+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:06.994784+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351412224 unmapped: 79904768 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:07.994950+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 79896576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:08.995117+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 79896576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:09.995281+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 79896576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:10.995420+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 79896576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:11.995561+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 79896576 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:12.995682+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:13.995881+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:14.996043+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:15.996233+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:16.996382+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:17.996551+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:18.996729+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:19.996905+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 79880192 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:20.997037+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 79872000 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:21.997207+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 79872000 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:22.997376+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 79872000 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:23.997545+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 79872000 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:24.997752+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351444992 unmapped: 79872000 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:25.998081+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 79863808 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:26.998247+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 79863808 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:27.998408+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 79863808 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:28.998568+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 79855616 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:29.998772+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 79855616 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:30.998941+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351461376 unmapped: 79855616 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:31.999136+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 79847424 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:32.999344+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 79847424 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:33.999507+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 79847424 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:34.999646+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 79847424 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:35.999888+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351469568 unmapped: 79847424 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:37.000081+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:38.000259+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:39.000422+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:40.000565+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351477760 unmapped: 79839232 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:41.000715+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351494144 unmapped: 79822848 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:42.000925+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774112 data_alloc: 218103808 data_used: 7454720
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351502336 unmapped: 79814656 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:43.001110+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351502336 unmapped: 79814656 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:44.001262+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351502336 unmapped: 79814656 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e71a1000/0x0/0x4ffc00000, data 0x2d0aac8/0x2ebd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:45.001438+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351502336 unmapped: 79814656 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6696800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 283.834289551s of 284.246185303s, submitted: 90
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _renew_subs
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:46.001625+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 311 ms_handle_reset con 0x557ab6696800 session 0x557ab6df85a0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 79773696 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e719e000/0x0/0x4ffc00000, data 0x2d0c676/0x2ebf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:47.001869+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3776980 data_alloc: 218103808 data_used: 7462912
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 79773696 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:48.002167+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab7f89c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351543296 unmapped: 79773696 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _renew_subs
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:49.002291+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 312 ms_handle_reset con 0x557ab7f89c00 session 0x557ab4a7c960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 79757312 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:50.002460+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351559680 unmapped: 79757312 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:51.002635+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e719b000/0x0/0x4ffc00000, data 0x2d0e247/0x2ec2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 79749120 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:52.002786+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3782928 data_alloc: 218103808 data_used: 7462912
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351576064 unmapped: 79740928 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 313 ms_handle_reset con 0x557ab6dd9800 session 0x557ab6df8960
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557abb03d800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:53.002903+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6dd9800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 351576064 unmapped: 79740928 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 313 handle_osd_map epochs [313,314], i have 313, src has [1,314]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:54.003062+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 314 ms_handle_reset con 0x557ab6dd9800 session 0x557ab685b680
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 78684160 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:55.003239+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 314 heartbeat osd_stat(store_statfs(0x4e6523000/0x0/0x4ffc00000, data 0x39818a5/0x3b3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 78667776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:56.003479+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 78667776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:57.003609+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 78667776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:58.003760+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 78667776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:59.003989+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 78667776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:00.004117+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 78667776 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:01.004286+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 78659584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:02.004454+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 78659584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:03.004668+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 78659584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:04.004852+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 78659584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:05.005042+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 78659584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:06.005273+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 78659584 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:07.005486+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 78651392 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:08.005624+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 78643200 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:09.005763+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 78643200 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:10.005917+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 78643200 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:11.006077+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 78635008 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:12.006252+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 78635008 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:13.006399+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 78635008 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:14.006588+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 78635008 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:15.006796+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 78635008 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:16.007019+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 78635008 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:17.007211+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 78618624 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:18.007375+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 78618624 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:19.007566+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 78618624 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:20.007802+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 78618624 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:21.008064+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 78618624 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:22.008220+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 78618624 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:23.008405+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 78618624 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:24.008581+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 78618624 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:25.008740+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 78610432 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:26.008954+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 78610432 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:27.009090+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 78610432 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:28.009261+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 78610432 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:29.009406+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 78610432 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:30.009591+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 78602240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:31.009733+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 78602240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:32.009913+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 78602240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:33.010056+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 78602240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:34.010215+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 78602240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:35.010379+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 78602240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:36.010618+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 78602240 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:37.010907+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 78594048 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:38.011076+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 78585856 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:39.011246+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 78585856 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:40.011429+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 78585856 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:41.011588+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 78585856 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:42.011786+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 78585856 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:43.011959+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 78577664 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:44.012127+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 78577664 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:45.012285+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 78577664 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:46.012489+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 78577664 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:47.012679+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879369 data_alloc: 218103808 data_used: 7471104
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 78577664 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:48.012955+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 78569472 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:49.013172+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 78561280 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:50.014055+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 heartbeat osd_stat(store_statfs(0x4e651f000/0x0/0x4ffc00000, data 0x3983308/0x3b3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352763904 unmapped: 78553088 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:51.014260+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352763904 unmapped: 78553088 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:52.014449+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6686c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 66.548286438s of 66.841957092s, submitted: 93
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _renew_subs
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3881037 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 78528512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 316 ms_handle_reset con 0x557ab6686c00 session 0x557ab727af00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:53.014627+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e651e000/0x0/0x4ffc00000, data 0x3984eb6/0x3b3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 78528512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:54.014847+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 316 ms_handle_reset con 0x557ab5468c00 session 0x557ab5bfbe00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6686c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 316 ms_handle_reset con 0x557ab65a0c00 session 0x557ab5292000
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab6696800
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 78528512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:55.015010+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 78528512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:56.015243+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 78528512 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e718e000/0x0/0x4ffc00000, data 0x2d14e93/0x2ece000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:57.015382+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796901 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 78520320 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:58.016215+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 78520320 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:59.016662+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 78520320 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 316 handle_osd_map epochs [316,317], i have 316, src has [1,317]
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:00.016879+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 78503936 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:01.017276+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 78503936 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:02.017399+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 78495744 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:03.017527+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 78495744 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:04.017707+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 78495744 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:05.018008+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 78495744 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:06.018362+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 78495744 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:07.018506+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 78487552 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:08.018659+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 78487552 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:09.018949+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 78487552 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:10.019097+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 78487552 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:11.019301+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 78487552 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:12.019496+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 78479360 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:13.019725+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 78462976 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:14.019887+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 78462976 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:15.020059+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 78462976 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:16.020350+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 78462976 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:17.020612+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 78462976 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:18.020811+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 78454784 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:19.021057+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 78454784 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:20.021266+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 78454784 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:21.021450+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 78446592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:22.021637+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 78446592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:23.021949+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 78446592 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:24.022173+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 78438400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:25.022340+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 78438400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:26.022549+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 78438400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:27.022699+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 78438400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:28.022862+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 78438400 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:29.023010+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 78422016 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:30.023224+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 78413824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:31.023442+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 78413824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:32.023638+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 78413824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:33.023849+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 78413824 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:34.024042+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 78405632 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:35.024233+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 78405632 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:36.024530+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 78405632 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:37.024727+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 78397440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:38.024918+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 78397440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:39.025098+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 78397440 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:40.025268+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352927744 unmapped: 78389248 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:41.025448+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352927744 unmapped: 78389248 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:42.025630+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352927744 unmapped: 78389248 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:43.025902+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352927744 unmapped: 78389248 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:44.026063+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352927744 unmapped: 78389248 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:45.026221+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 78372864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:46.026430+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 78372864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:47.026600+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 78372864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:48.026762+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 78372864 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:49.026907+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 78364672 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:50.027057+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 78364672 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:51.027237+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 78364672 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:52.027427+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 78364672 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:53.027547+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 78348288 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:54.027685+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 78348288 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:55.027863+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 78348288 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:56.028150+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 78348288 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:57.028342+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 78348288 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:58.028547+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 78348288 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:59.028712+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 78348288 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:00.028912+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 78340096 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:01.029120+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 78331904 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:02.029294+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 78331904 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:03.029452+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 78331904 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:04.029637+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 78331904 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:05.029877+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 78331904 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:06.030101+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352993280 unmapped: 78323712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:07.030298+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352993280 unmapped: 78323712 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:08.030414+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353001472 unmapped: 78315520 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:09.030577+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353001472 unmapped: 78315520 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:10.030768+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353001472 unmapped: 78315520 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:11.030915+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353001472 unmapped: 78315520 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:12.031081+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353009664 unmapped: 78307328 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:13.031347+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353009664 unmapped: 78307328 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:14.031513+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353009664 unmapped: 78307328 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:15.031747+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353009664 unmapped: 78307328 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:16.032009+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 ms_handle_reset con 0x557ab528d800 session 0x557ab682b2c0
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: handle_auth_request added challenge on 0x557ab65a0c00
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 78299136 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:17.033289+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 78299136 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:18.033559+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:19.033729+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 78299136 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:20.033877+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 78299136 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:21.034000+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 78299136 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:22.034171+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 78299136 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:23.034316+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353034240 unmapped: 78282752 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:24.034514+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353034240 unmapped: 78282752 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:25.034661+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 78274560 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:26.034884+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 78274560 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:27.035048+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 78274560 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:28.035177+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 78274560 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:29.035337+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353050624 unmapped: 78266368 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:30.035472+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353050624 unmapped: 78266368 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:31.035678+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353050624 unmapped: 78266368 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:32.035832+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353058816 unmapped: 78258176 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:33.036034+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 78249984 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:34.036274+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 78249984 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: osd.1 317 heartbeat osd_stat(store_statfs(0x4e718c000/0x0/0x4ffc00000, data 0x2d168f6/0x2ed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:35.036390+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 78249984 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: do_command 'config diff' '{prefix=config diff}'
Oct 11 10:01:08 compute-0 ceph-osd[89278]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 11 10:01:08 compute-0 ceph-osd[89278]: do_command 'config show' '{prefix=config show}'
Oct 11 10:01:08 compute-0 ceph-osd[89278]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 11 10:01:08 compute-0 ceph-osd[89278]: do_command 'counter dump' '{prefix=counter dump}'
Oct 11 10:01:08 compute-0 ceph-osd[89278]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:36.036550+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: do_command 'counter schema' '{prefix=counter schema}'
Oct 11 10:01:08 compute-0 ceph-osd[89278]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352600064 unmapped: 78716928 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:37.036679+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352354304 unmapped: 78962688 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:08 compute-0 ceph-osd[89278]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:08 compute-0 ceph-osd[89278]: bluestore.MempoolThread(0x557ab31e1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3798774 data_alloc: 218103808 data_used: 7479296
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: tick
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_tickets
Oct 11 10:01:08 compute-0 ceph-osd[89278]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:38.036807+0000)
Oct 11 10:01:08 compute-0 ceph-osd[89278]: prioritycache tune_memory target: 4294967296 mapped: 352133120 unmapped: 79183872 heap: 431316992 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:08 compute-0 ceph-osd[89278]: do_command 'log dump' '{prefix=log dump}'
Oct 11 10:01:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 11 10:01:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1708756672' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 11 10:01:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 11 10:01:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640817390' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 10:01:08 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 11 10:01:08 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2019071381' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 11 10:01:09 compute-0 rsyslogd[1003]: imjournal from <np0005481065:ceph-osd>: begin to drop messages due to rate-limiting
Oct 11 10:01:09 compute-0 ceph-mon[74313]: pgmap v3613: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1346839265' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 11 10:01:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4246144449' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 11 10:01:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1708756672' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 11 10:01:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/640817390' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 11 10:01:09 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2019071381' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 11 10:01:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 11 10:01:09 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2214292972' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 11 10:01:09 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23011 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:09 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23013 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:09 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23015 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:09 compute-0 nova_compute[260935]: 2025-10-11 10:01:09.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:01:09 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 10:01:10 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3614: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:10 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23019 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:10 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2214292972' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 11 10:01:10 compute-0 ceph-mon[74313]: from='client.23011 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:10 compute-0 ceph-mon[74313]: from='client.23013 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:10 compute-0 ceph-mon[74313]: from='client.23015 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:10 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23017 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:10 compute-0 sshd-session[454489]: Received disconnect from 13.126.15.214 port 44308:11: Bye Bye [preauth]
Oct 11 10:01:10 compute-0 sshd-session[454489]: Disconnected from invalid user newuser 13.126.15.214 port 44308 [preauth]
Oct 11 10:01:10 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23021 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:10 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23025 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:10 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 11 10:01:10 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3465235369' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 11 10:01:11 compute-0 ceph-mon[74313]: pgmap v3614: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:11 compute-0 ceph-mon[74313]: from='client.23019 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:11 compute-0 ceph-mon[74313]: from='client.23017 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:11 compute-0 ceph-mon[74313]: from='client.23021 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:11 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3465235369' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 11 10:01:11 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23029 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 11 10:01:11 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1525373663' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 11 10:01:11 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23033 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:11 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 11 10:01:11 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1735143824' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 11 10:01:12 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3615: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:12 compute-0 ceph-mon[74313]: from='client.23025 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:12 compute-0 ceph-mon[74313]: from='client.23029 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:12 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1525373663' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 11 10:01:12 compute-0 ceph-mon[74313]: from='client.23033 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:12 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1735143824' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 11 10:01:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 11 10:01:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3815625780' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 11 10:01:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 11 10:01:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 11 10:01:12 compute-0 nova_compute[260935]: 2025-10-11 10:01:12.697 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:01:12 compute-0 nova_compute[260935]: 2025-10-11 10:01:12.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:01:12 compute-0 nova_compute[260935]: 2025-10-11 10:01:12.702 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 11 10:01:12 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 11 10:01:12 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1335953903' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 11 10:01:12 compute-0 nova_compute[260935]: 2025-10-11 10:01:12.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:01:12 compute-0 nova_compute[260935]: 2025-10-11 10:01:12.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 10:01:12 compute-0 nova_compute[260935]: 2025-10-11 10:01:12.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 10:01:12 compute-0 nova_compute[260935]: 2025-10-11 10:01:12.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:01:12 compute-0 nova_compute[260935]: 2025-10-11 10:01:12.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:01:12 compute-0 nova_compute[260935]: 2025-10-11 10:01:12.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e3056000/0x0/0x4ffc00000, data 0x8d9c90d/0x8f38000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 332128256 unmapped: 43163648 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:04.211197+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 332128256 unmapped: 43163648 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4526639 data_alloc: 234881024 data_used: 21364736
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92a884b40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:05.211338+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a331000 session 0x55f928d6f4a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a60f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 332136448 unmapped: 43155456 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.749621391s of 11.027926445s, submitted: 124
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:06.211482+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e3034000/0x0/0x4ffc00000, data 0x8dbe90d/0x8f5a000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 332136448 unmapped: 43155456 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:07.211648+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 332152832 unmapped: 43139072 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:08.211845+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a60f000 session 0x55f92a9ce3c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 332152832 unmapped: 43139072 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:09.212032+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 332152832 unmapped: 43139072 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4319574 data_alloc: 234881024 data_used: 14745600
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:10.212256+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 332152832 unmapped: 43139072 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:11.212633+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e44c4000/0x0/0x4ffc00000, data 0x792e88b/0x7ac7000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 332152832 unmapped: 43139072 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:12.212903+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a338000 session 0x55f92823c5a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92eba1800 session 0x55f928d8a000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 332152832 unmapped: 43139072 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:13.213070+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92bf06f00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329129984 unmapped: 46161920 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:14.213229+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329129984 unmapped: 46161920 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4146896 data_alloc: 218103808 data_used: 10620928
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:15.213371+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e56b7000/0x0/0x4ffc00000, data 0x673f87b/0x68d7000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329129984 unmapped: 46161920 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:16.213522+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e56b4000/0x0/0x4ffc00000, data 0x674287b/0x68da000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329129984 unmapped: 46161920 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:17.213688+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329129984 unmapped: 46161920 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:18.213921+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329129984 unmapped: 46161920 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:19.214124+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329129984 unmapped: 46161920 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4147292 data_alloc: 218103808 data_used: 10620928
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:20.214380+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329129984 unmapped: 46161920 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:21.214709+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e56b4000/0x0/0x4ffc00000, data 0x674287b/0x68da000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329129984 unmapped: 46161920 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:22.214876+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e56b4000/0x0/0x4ffc00000, data 0x674287b/0x68da000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329129984 unmapped: 46161920 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:23.215010+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329129984 unmapped: 46161920 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:24.215144+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329138176 unmapped: 46153728 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4147612 data_alloc: 218103808 data_used: 10629120
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:25.215314+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.848310471s of 19.658231735s, submitted: 42
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329138176 unmapped: 46153728 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:26.215429+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e56af000/0x0/0x4ffc00000, data 0x674787b/0x68df000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329138176 unmapped: 46153728 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:27.215673+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e56af000/0x0/0x4ffc00000, data 0x674787b/0x68df000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329138176 unmapped: 46153728 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:28.215856+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a9df4a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a601800 session 0x55f92a9de5a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329138176 unmapped: 46153728 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:29.215992+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a331000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329138176 unmapped: 46153728 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4148524 data_alloc: 218103808 data_used: 10653696
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:30.216155+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e56af000/0x0/0x4ffc00000, data 0x674787b/0x68df000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329146368 unmapped: 46145536 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:31.216307+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322600960 unmapped: 52690944 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:32.216552+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322609152 unmapped: 52682752 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:33.216731+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322617344 unmapped: 52674560 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:34.216942+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a331000 session 0x55f92a9cfc20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322625536 unmapped: 52666368 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3950335 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:35.217064+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322625536 unmapped: 52666368 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e680b000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:36.217268+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322625536 unmapped: 52666368 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:37.217456+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322625536 unmapped: 52666368 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:38.217590+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322625536 unmapped: 52666368 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:39.217898+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322625536 unmapped: 52666368 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3950335 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:40.218098+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e680b000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322625536 unmapped: 52666368 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:41.218275+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322625536 unmapped: 52666368 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:42.218503+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322625536 unmapped: 52666368 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:43.218647+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322625536 unmapped: 52666368 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:44.218805+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e680b000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322625536 unmapped: 52666368 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3950335 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:45.218990+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322625536 unmapped: 52666368 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e680b000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:46.219177+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322625536 unmapped: 52666368 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:47.219386+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322625536 unmapped: 52666368 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:48.219570+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e680b000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322633728 unmapped: 52658176 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:49.219730+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e680b000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322633728 unmapped: 52658176 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3950335 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:50.219894+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322633728 unmapped: 52658176 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:51.220040+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322633728 unmapped: 52658176 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:52.220182+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322633728 unmapped: 52658176 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:53.220336+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e680b000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322633728 unmapped: 52658176 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:54.220720+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322633728 unmapped: 52658176 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3950335 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:55.220916+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322633728 unmapped: 52658176 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:56.221145+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322641920 unmapped: 52649984 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:57.221527+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322641920 unmapped: 52649984 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:58.221686+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e680b000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322641920 unmapped: 52649984 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:29:59.221977+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322641920 unmapped: 52649984 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3950335 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:00.222293+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322641920 unmapped: 52649984 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:01.222539+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322641920 unmapped: 52649984 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:02.222924+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322641920 unmapped: 52649984 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:03.223111+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e680b000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322641920 unmapped: 52649984 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:04.223337+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322650112 unmapped: 52641792 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:05.223472+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3950335 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e680b000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322650112 unmapped: 52641792 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:06.223631+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322650112 unmapped: 52641792 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:07.223866+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322650112 unmapped: 52641792 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:08.224035+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a338000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a338000 session 0x55f928d8b0e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a541c20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f928d8b680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 52633600 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:09.224165+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a331000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a331000 session 0x55f928abfa40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a601800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.678367615s of 43.662948608s, submitted: 41
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322658304 unmapped: 52633600 heap: 375291904 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:10.224344+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3953765 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a601800 session 0x55f929e5a960
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a60f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a60f000 session 0x55f92a4d10e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92aa383c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f927c01680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a331000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a331000 session 0x55f92a884d20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324182016 unmapped: 61612032 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e4c000/0x0/0x4ffc00000, data 0x5faa8ba/0x6142000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:11.224522+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e4c000/0x0/0x4ffc00000, data 0x5faa8ba/0x6142000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324182016 unmapped: 61612032 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:12.224719+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324182016 unmapped: 61612032 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:13.224885+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324182016 unmapped: 61612032 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:14.225052+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929655c00 session 0x55f92aa374a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a601800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324182016 unmapped: 61612032 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:15.225197+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4038302 data_alloc: 218103808 data_used: 2613248
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324182016 unmapped: 61612032 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:16.225350+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324190208 unmapped: 61603840 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:17.225518+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e4c000/0x0/0x4ffc00000, data 0x5faa8ba/0x6142000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324190208 unmapped: 61603840 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:18.225684+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324190208 unmapped: 61603840 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:19.225901+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324190208 unmapped: 61603840 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:20.226099+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4038302 data_alloc: 218103808 data_used: 2613248
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929655c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.586797714s of 11.219021797s, submitted: 43
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a332400 session 0x55f92a98d860
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92b158800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929655c00 session 0x55f92aa38f00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e4c000/0x0/0x4ffc00000, data 0x5faa8ba/0x6142000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324198400 unmapped: 61595648 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:21.226253+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92c8cc400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92caabc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929c83800 session 0x55f929e44000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c83800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324206592 unmapped: 61587456 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:22.226495+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324206592 unmapped: 61587456 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:23.226624+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e4c000/0x0/0x4ffc00000, data 0x5faa8ba/0x6142000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324247552 unmapped: 61546496 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:24.226774+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324247552 unmapped: 61546496 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:25.226918+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4102201 data_alloc: 218103808 data_used: 11603968
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324247552 unmapped: 61546496 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:26.227080+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e4c000/0x0/0x4ffc00000, data 0x5faa8ba/0x6142000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324247552 unmapped: 61546496 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:27.227213+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324247552 unmapped: 61546496 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:28.227378+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324247552 unmapped: 61546496 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:29.227511+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324247552 unmapped: 61546496 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:30.227666+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4102201 data_alloc: 218103808 data_used: 11603968
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e4c000/0x0/0x4ffc00000, data 0x5faa8ba/0x6142000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e4c000/0x0/0x4ffc00000, data 0x5faa8ba/0x6142000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324247552 unmapped: 61546496 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:31.227805+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e4c000/0x0/0x4ffc00000, data 0x5faa8ba/0x6142000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x13c6f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324247552 unmapped: 61546496 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:32.227951+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.028764725s of 12.117904663s, submitted: 7
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:33.228097+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325533696 unmapped: 60260352 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:34.228802+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325984256 unmapped: 59809792 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:35.228940+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327057408 unmapped: 58736640 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4178119 data_alloc: 234881024 data_used: 12967936
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:36.229134+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327057408 unmapped: 58736640 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5215000/0x0/0x4ffc00000, data 0x67c88ba/0x6960000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:37.229301+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327057408 unmapped: 58736640 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:38.229493+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327057408 unmapped: 58736640 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:39.229647+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327057408 unmapped: 58736640 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:40.229909+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327057408 unmapped: 58736640 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4178119 data_alloc: 234881024 data_used: 12967936
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:41.230083+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327057408 unmapped: 58736640 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5215000/0x0/0x4ffc00000, data 0x67c88ba/0x6960000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:42.230281+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327057408 unmapped: 58736640 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:43.230420+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327057408 unmapped: 58736640 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:44.230570+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327057408 unmapped: 58736640 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5215000/0x0/0x4ffc00000, data 0x67c88ba/0x6960000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:45.230737+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327057408 unmapped: 58736640 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4178119 data_alloc: 234881024 data_used: 12967936
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:46.230887+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327057408 unmapped: 58736640 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f929e454a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:47.231023+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92bf06960
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929655c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929655c00 session 0x55f92a4d1a40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327057408 unmapped: 58736640 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a331000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a331000 session 0x55f92aa381e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a330000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.325871468s of 14.666234970s, submitted: 106
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a330000 session 0x55f928dab680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a330000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a330000 session 0x55f9280cbe00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f928d8c780
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f9280012c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929655c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929655c00 session 0x55f92a4d0f00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:48.231155+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327081984 unmapped: 58712064 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4c9d000/0x0/0x4ffc00000, data 0x6d4792c/0x6ee1000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:49.231273+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327081984 unmapped: 58712064 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:50.231451+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327081984 unmapped: 58712064 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4223462 data_alloc: 234881024 data_used: 12972032
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:51.231613+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327081984 unmapped: 58712064 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:52.231768+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327081984 unmapped: 58712064 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a331000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a331000 session 0x55f92a4d12c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a331000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a331000 session 0x55f92bf072c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:53.232021+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327081984 unmapped: 58712064 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a884780
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:54.232185+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4c9d000/0x0/0x4ffc00000, data 0x6d4792c/0x6ee1000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [0,0,0,1])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327393280 unmapped: 58400768 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92a4d01e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929655c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a330000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:55.232345+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327393280 unmapped: 58400768 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4224826 data_alloc: 234881024 data_used: 12980224
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:56.232469+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327393280 unmapped: 58400768 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4c79000/0x0/0x4ffc00000, data 0x6d6b92c/0x6f05000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:57.232618+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 57942016 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:58.232748+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 57942016 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4c79000/0x0/0x4ffc00000, data 0x6d6b92c/0x6f05000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:30:59.232875+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 57942016 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:00.233043+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 57942016 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4256986 data_alloc: 234881024 data_used: 17514496
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:01.233208+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 57942016 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:02.233354+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 57942016 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:03.233482+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 57942016 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4c79000/0x0/0x4ffc00000, data 0x6d6b92c/0x6f05000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:04.233655+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 57942016 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:05.233874+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4c79000/0x0/0x4ffc00000, data 0x6d6b92c/0x6f05000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327860224 unmapped: 57933824 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4256986 data_alloc: 234881024 data_used: 17514496
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.718629837s of 18.073154449s, submitted: 36
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:06.234202+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327860224 unmapped: 57933824 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:07.234372+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327860224 unmapped: 57933824 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4c35000/0x0/0x4ffc00000, data 0x6daf92c/0x6f49000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [0,0,0,0,0,0,0,2,1])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:08.234510+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329089024 unmapped: 56705024 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:09.234639+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328384512 unmapped: 57409536 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:10.234808+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328400896 unmapped: 57393152 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4290106 data_alloc: 234881024 data_used: 17661952
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:11.235037+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327737344 unmapped: 58056704 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e498c000/0x0/0x4ffc00000, data 0x705092c/0x71ea000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:12.235186+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327745536 unmapped: 58048512 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:13.235305+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327745536 unmapped: 58048512 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:14.235439+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327745536 unmapped: 58048512 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:15.235583+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327745536 unmapped: 58048512 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296368 data_alloc: 234881024 data_used: 17657856
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4968000/0x0/0x4ffc00000, data 0x707b92c/0x7215000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:16.235730+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327745536 unmapped: 58048512 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.628540039s of 11.086887360s, submitted: 58
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:17.235904+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328794112 unmapped: 56999936 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4966000/0x0/0x4ffc00000, data 0x707e92c/0x7218000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:18.236082+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328794112 unmapped: 56999936 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:19.236290+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328794112 unmapped: 56999936 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:20.236476+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328794112 unmapped: 56999936 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294832 data_alloc: 234881024 data_used: 17666048
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:21.236625+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 56991744 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929655c00 session 0x55f92a9df2c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a330000 session 0x55f929e45680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:22.236763+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 56975360 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a4d0d20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4966000/0x0/0x4ffc00000, data 0x707e92c/0x7218000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:23.236888+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 56975360 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:24.237045+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 56975360 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 41K writes, 164K keys, 41K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 41K writes, 15K syncs, 2.75 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4946 writes, 21K keys, 4946 commit groups, 1.0 writes per commit group, ingest: 24.70 MB, 0.04 MB/s
                                           Interval WAL: 4946 writes, 1807 syncs, 2.74 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:25.237209+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 56975360 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4182894 data_alloc: 234881024 data_used: 12963840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:26.237387+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e521e000/0x0/0x4ffc00000, data 0x67c88ba/0x6960000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92c8cc400 session 0x55f927bb9680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92caabc00 session 0x55f928dae1e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 56975360 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.806859970s of 10.071356773s, submitted: 40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92a763860
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:27.237522+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608c000/0x0/0x4ffc00000, data 0x55ec8aa/0x5783000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608c000/0x0/0x4ffc00000, data 0x55ec8aa/0x5783000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:28.237748+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:29.237941+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:30.238214+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3974373 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:31.238426+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:32.238588+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:33.238746+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:34.238932+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:35.239104+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3974373 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:36.239240+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:37.239352+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:38.239527+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:39.239678+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:40.239888+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3974373 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:41.240031+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:42.240219+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:43.240379+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:44.240529+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:45.240683+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3974373 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:46.240877+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:47.241011+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:48.241202+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:49.241359+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:50.241512+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3974373 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:51.241662+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:52.241859+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:53.242069+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:54.242215+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:55.242393+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3974373 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:56.242550+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:57.242686+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:58.242904+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:31:59.243101+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:00.243277+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322306048 unmapped: 63488000 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3974373 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:01.243438+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322314240 unmapped: 63479808 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:02.243644+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322314240 unmapped: 63479808 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:03.243939+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322314240 unmapped: 63479808 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:04.244215+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322314240 unmapped: 63479808 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:05.244437+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322314240 unmapped: 63479808 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3974373 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:06.244609+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322314240 unmapped: 63479808 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:07.244872+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322322432 unmapped: 63471616 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:08.245025+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322322432 unmapped: 63471616 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:09.245175+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322322432 unmapped: 63471616 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:10.245337+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322322432 unmapped: 63471616 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3974373 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:11.245557+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322322432 unmapped: 63471616 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:12.245715+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322330624 unmapped: 63463424 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:13.245878+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322330624 unmapped: 63463424 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:14.246017+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322330624 unmapped: 63463424 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:15.246158+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322338816 unmapped: 63455232 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3974373 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e608e000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:16.246320+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322338816 unmapped: 63455232 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:17.246491+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322355200 unmapped: 63438848 heap: 385794048 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929655c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929655c00 session 0x55f92a4d0780
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a9de960
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f928ab0d20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92c8cc400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92c8cc400 session 0x55f92a4d0960
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92caabc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 51.292716980s of 51.436576843s, submitted: 37
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:18.246622+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92caabc00 session 0x55f92aa39c20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a330000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a330000 session 0x55f92aea21e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f9280ca000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322666496 unmapped: 64184320 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f928daab40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92c8cc400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92c8cc400 session 0x55f92a996960
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:19.246800+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322666496 unmapped: 64184320 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5727000/0x0/0x4ffc00000, data 0x62c0858/0x6457000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:20.247044+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322674688 unmapped: 64176128 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4081340 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:21.247189+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5727000/0x0/0x4ffc00000, data 0x62c0858/0x6457000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322674688 unmapped: 64176128 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:22.247345+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322674688 unmapped: 64176128 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:23.247477+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322674688 unmapped: 64176128 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:24.247629+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322674688 unmapped: 64176128 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5727000/0x0/0x4ffc00000, data 0x62c0858/0x6457000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92caabc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92caabc00 session 0x55f927c01680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:25.247869+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322682880 unmapped: 64167936 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4081340 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a331000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a331000 session 0x55f92a9cfa40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:26.248028+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a4d1e00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322682880 unmapped: 64167936 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5727000/0x0/0x4ffc00000, data 0x62c0858/0x6457000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92a9de000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:27.248887+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92c8cc400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92caabc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322682880 unmapped: 64167936 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:28.249031+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 322682880 unmapped: 64167936 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:29.249196+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5726000/0x0/0x4ffc00000, data 0x62c0868/0x6458000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325189632 unmapped: 61661184 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:30.249398+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325189632 unmapped: 61661184 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4169758 data_alloc: 234881024 data_used: 14848000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:31.249544+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325189632 unmapped: 61661184 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:32.249724+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5726000/0x0/0x4ffc00000, data 0x62c0868/0x6458000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325189632 unmapped: 61661184 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:33.249904+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325189632 unmapped: 61661184 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:34.250090+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325189632 unmapped: 61661184 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:35.250246+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325189632 unmapped: 61661184 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4169758 data_alloc: 234881024 data_used: 14848000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:36.250379+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325189632 unmapped: 61661184 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:37.250496+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5726000/0x0/0x4ffc00000, data 0x62c0868/0x6458000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325189632 unmapped: 61661184 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:38.250635+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325189632 unmapped: 61661184 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.437330246s of 20.838657379s, submitted: 32
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:39.250767+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5726000/0x0/0x4ffc00000, data 0x62c0868/0x6458000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331317248 unmapped: 55533568 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:40.250929+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 55451648 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4331954 data_alloc: 234881024 data_used: 16011264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:41.251078+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e44ce000/0x0/0x4ffc00000, data 0x7501868/0x7699000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 55451648 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:42.251206+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 55451648 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:43.251369+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 55451648 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e44ce000/0x0/0x4ffc00000, data 0x7501868/0x7699000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:44.251533+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 55451648 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:45.251691+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331399168 unmapped: 55451648 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4332274 data_alloc: 234881024 data_used: 16019456
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:46.251904+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 56123392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e44e3000/0x0/0x4ffc00000, data 0x7503868/0x769b000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:47.252070+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 56123392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:48.252243+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 56123392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:49.252438+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 56123392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:50.252679+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 56123392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323898 data_alloc: 234881024 data_used: 16019456
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e44e3000/0x0/0x4ffc00000, data 0x7503868/0x769b000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:51.252835+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 56123392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:52.253026+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.971134186s of 13.384089470s, submitted: 150
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 56123392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:53.253185+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 56123392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e44e2000/0x0/0x4ffc00000, data 0x7504868/0x769c000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:54.253329+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 56123392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:55.253485+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e44e2000/0x0/0x4ffc00000, data 0x7504868/0x769c000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 56123392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4324114 data_alloc: 234881024 data_used: 16019456
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:56.253614+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 56123392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:57.253775+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 56123392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:58.253871+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 56123392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:32:59.254014+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 56123392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:00.254199+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330727424 unmapped: 56123392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323497 data_alloc: 234881024 data_used: 16019456
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:01.254340+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e44e2000/0x0/0x4ffc00000, data 0x7504868/0x769c000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330801152 unmapped: 56049664 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:02.254508+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330817536 unmapped: 56033280 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:03.254669+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330817536 unmapped: 56033280 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:04.255024+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330817536 unmapped: 56033280 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:05.255126+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330817536 unmapped: 56033280 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323730 data_alloc: 234881024 data_used: 16027648
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:06.255308+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e44e2000/0x0/0x4ffc00000, data 0x7504868/0x769c000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330817536 unmapped: 56033280 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:07.255506+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330817536 unmapped: 56033280 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:08.255695+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330817536 unmapped: 56033280 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:09.255881+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818d800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818d800 session 0x55f9281452c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92a541e00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330817536 unmapped: 56033280 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a60a000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a60a000 session 0x55f928909860
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a60a000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a60a000 session 0x55f927c010e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.328674316s of 17.745838165s, submitted: 107
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:10.256122+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a885860
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f9281b4f00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818d800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818d800 session 0x55f92a98b680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f9274eb4a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f927fcdc20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331153408 unmapped: 55697408 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4383270 data_alloc: 234881024 data_used: 16027648
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e3e6b000/0x0/0x4ffc00000, data 0x7b7a878/0x7d13000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:11.256422+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331153408 unmapped: 55697408 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:12.256698+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331153408 unmapped: 55697408 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:13.256954+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331153408 unmapped: 55697408 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:14.257142+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f928814b40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331153408 unmapped: 55697408 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92aa365a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e3de8000/0x0/0x4ffc00000, data 0x7bfd878/0x7d96000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:15.257291+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331153408 unmapped: 55697408 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4383446 data_alloc: 234881024 data_used: 16027648
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:16.257528+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818d800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818d800 session 0x55f928dabc20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a60a000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a60a000 session 0x55f9280ca1e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 55959552 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a60a000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:17.257702+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 55959552 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:18.257837+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e3dc3000/0x0/0x4ffc00000, data 0x7c21888/0x7dbb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331087872 unmapped: 55762944 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:19.257937+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331915264 unmapped: 54935552 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:20.258068+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331915264 unmapped: 54935552 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4437048 data_alloc: 234881024 data_used: 23044096
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:21.258214+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331915264 unmapped: 54935552 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:22.258361+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331915264 unmapped: 54935552 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:23.258508+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e3dc3000/0x0/0x4ffc00000, data 0x7c21888/0x7dbb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331915264 unmapped: 54935552 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:24.258738+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e3dc3000/0x0/0x4ffc00000, data 0x7c21888/0x7dbb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331915264 unmapped: 54935552 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:25.258910+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331915264 unmapped: 54935552 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4437048 data_alloc: 234881024 data_used: 23044096
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:26.259027+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.411468506s of 16.560039520s, submitted: 16
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331931648 unmapped: 54919168 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:27.259137+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e3dc3000/0x0/0x4ffc00000, data 0x7c21888/0x7dbb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331931648 unmapped: 54919168 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:28.259251+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331939840 unmapped: 54910976 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:29.259343+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335568896 unmapped: 51281920 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:30.259489+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 51003392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525200 data_alloc: 234881024 data_used: 23302144
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:31.259669+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 51003392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e3452000/0x0/0x4ffc00000, data 0x858a888/0x8724000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:32.259847+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 51003392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:33.260017+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 51003392 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:34.260166+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e3452000/0x0/0x4ffc00000, data 0x858a888/0x8724000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e3452000/0x0/0x4ffc00000, data 0x858a888/0x8724000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335855616 unmapped: 50995200 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:35.260318+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335208448 unmapped: 51642368 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4519972 data_alloc: 234881024 data_used: 23302144
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:36.260527+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335216640 unmapped: 51634176 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:37.260674+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335216640 unmapped: 51634176 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:38.260894+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e3436000/0x0/0x4ffc00000, data 0x85ae888/0x8748000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335216640 unmapped: 51634176 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:39.261057+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335216640 unmapped: 51634176 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:40.261259+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.878240585s of 14.240298271s, submitted: 106
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335216640 unmapped: 51634176 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4520612 data_alloc: 234881024 data_used: 23363584
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:41.261390+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335216640 unmapped: 51634176 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:42.261562+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e3436000/0x0/0x4ffc00000, data 0x85ae888/0x8748000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335216640 unmapped: 51634176 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:43.261732+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335216640 unmapped: 51634176 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:44.261937+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a60a000 session 0x55f928001e00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a4d1680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335224832 unmapped: 51625984 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:45.262077+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92a762f00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329048064 unmapped: 57802752 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4336965 data_alloc: 234881024 data_used: 16076800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:46.262247+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329048064 unmapped: 57802752 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e44e1000/0x0/0x4ffc00000, data 0x7505868/0x769d000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:47.262419+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329048064 unmapped: 57802752 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:48.262710+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329048064 unmapped: 57802752 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:49.262908+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329048064 unmapped: 57802752 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:50.263063+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329048064 unmapped: 57802752 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4337145 data_alloc: 234881024 data_used: 16076800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:51.263202+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.433815956s of 10.483644485s, submitted: 17
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92c8cc400 session 0x55f928145860
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92caabc00 session 0x55f92a541c20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324993024 unmapped: 61857792 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92bf06d20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:52.263345+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324993024 unmapped: 61857792 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:53.263516+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324993024 unmapped: 61857792 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:54.263676+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324993024 unmapped: 61857792 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:55.263905+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324993024 unmapped: 61857792 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4005476 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:56.264073+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324993024 unmapped: 61857792 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:57.264254+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324993024 unmapped: 61857792 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:58.264403+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324993024 unmapped: 61857792 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:33:59.264546+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324993024 unmapped: 61857792 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:00.264767+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324993024 unmapped: 61857792 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4005476 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:01.264909+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324993024 unmapped: 61857792 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:02.265058+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324993024 unmapped: 61857792 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:03.265206+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324993024 unmapped: 61857792 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:04.265358+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324993024 unmapped: 61857792 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:05.265510+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324993024 unmapped: 61857792 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4005476 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:06.265674+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324993024 unmapped: 61857792 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:07.265808+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324993024 unmapped: 61857792 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:08.266017+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325001216 unmapped: 61849600 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:09.266211+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325001216 unmapped: 61849600 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:10.266447+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325001216 unmapped: 61849600 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4005476 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:11.266610+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325001216 unmapped: 61849600 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:12.266795+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325001216 unmapped: 61849600 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:13.266976+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325001216 unmapped: 61849600 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:14.267118+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325001216 unmapped: 61849600 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:15.267598+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325001216 unmapped: 61849600 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4005476 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:16.267771+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325009408 unmapped: 61841408 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:17.268202+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325009408 unmapped: 61841408 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:18.268346+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325009408 unmapped: 61841408 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:19.268656+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325009408 unmapped: 61841408 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:20.268922+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92a4d12c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a60a000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a60a000 session 0x55f928ab0d20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92c8cc400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92c8cc400 session 0x55f92a9def00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818d800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818d800 session 0x55f92a997c20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.407154083s of 29.489196777s, submitted: 35
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a9de5a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325009408 unmapped: 61841408 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4082229 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:21.269206+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92aa385a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325017600 unmapped: 61833216 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:22.269496+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325025792 unmapped: 61825024 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:23.269874+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5a95000/0x0/0x4ffc00000, data 0x5f528aa/0x60e9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325025792 unmapped: 61825024 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:24.270147+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325025792 unmapped: 61825024 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:25.270539+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325025792 unmapped: 61825024 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4082245 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:26.270709+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5a95000/0x0/0x4ffc00000, data 0x5f528aa/0x60e9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325025792 unmapped: 61825024 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:27.271004+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325033984 unmapped: 61816832 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:28.271149+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5a95000/0x0/0x4ffc00000, data 0x5f528aa/0x60e9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325033984 unmapped: 61816832 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:29.271435+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a60a000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a60a000 session 0x55f92a9cf2c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92c8cc400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325033984 unmapped: 61816832 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:30.271590+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325033984 unmapped: 61816832 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4082245 data_alloc: 218103808 data_used: 2609152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:31.271755+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325533696 unmapped: 61317120 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:32.271874+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325533696 unmapped: 61317120 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:33.272016+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5a95000/0x0/0x4ffc00000, data 0x5f528aa/0x60e9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:34.272135+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325533696 unmapped: 61317120 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5a95000/0x0/0x4ffc00000, data 0x5f528aa/0x60e9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:35.272324+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325541888 unmapped: 61308928 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:36.272918+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325541888 unmapped: 61308928 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4144005 data_alloc: 218103808 data_used: 9936896
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:37.273059+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325541888 unmapped: 61308928 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:38.273193+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325541888 unmapped: 61308928 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5a95000/0x0/0x4ffc00000, data 0x5f528aa/0x60e9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325541888 unmapped: 61308928 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:39.298964+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:40.299170+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325541888 unmapped: 61308928 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5a95000/0x0/0x4ffc00000, data 0x5f528aa/0x60e9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:41.299336+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325541888 unmapped: 61308928 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4144005 data_alloc: 218103808 data_used: 9936896
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.233539581s of 20.428031921s, submitted: 35
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:42.299767+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325853184 unmapped: 60997632 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e54b3000/0x0/0x4ffc00000, data 0x652e8aa/0x66c5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:43.299906+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326901760 unmapped: 59949056 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5496000/0x0/0x4ffc00000, data 0x65428aa/0x66d9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:44.300072+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326901760 unmapped: 59949056 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:45.300199+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 59940864 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:46.300334+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 59940864 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4196493 data_alloc: 218103808 data_used: 10219520
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:47.300486+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 59940864 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5496000/0x0/0x4ffc00000, data 0x65428aa/0x66d9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:48.300695+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 59940864 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:49.300873+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 59940864 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:50.301132+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 59940864 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:51.301264+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4196493 data_alloc: 218103808 data_used: 10219520
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 59940864 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:52.301392+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 59940864 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5496000/0x0/0x4ffc00000, data 0x65428aa/0x66d9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:53.301533+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 59940864 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:54.301668+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 59940864 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5496000/0x0/0x4ffc00000, data 0x65428aa/0x66d9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:55.301849+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 59940864 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5496000/0x0/0x4ffc00000, data 0x65428aa/0x66d9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:56.302029+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4196493 data_alloc: 218103808 data_used: 10219520
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 59940864 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:57.302153+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 59940864 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:58.302378+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 59940864 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:34:59.302530+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 59940864 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:00.302689+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326918144 unmapped: 59932672 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5496000/0x0/0x4ffc00000, data 0x65428aa/0x66d9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:01.302853+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4196493 data_alloc: 218103808 data_used: 10219520
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326918144 unmapped: 59932672 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:02.303052+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326918144 unmapped: 59932672 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:03.303170+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326918144 unmapped: 59932672 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:04.303319+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326918144 unmapped: 59932672 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:05.303430+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326926336 unmapped: 59924480 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:06.303581+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4196493 data_alloc: 218103808 data_used: 10219520
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326926336 unmapped: 59924480 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5496000/0x0/0x4ffc00000, data 0x65428aa/0x66d9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:07.303766+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326926336 unmapped: 59924480 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:08.303903+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326926336 unmapped: 59924480 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:09.304017+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326926336 unmapped: 59924480 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:10.304207+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326926336 unmapped: 59924480 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5496000/0x0/0x4ffc00000, data 0x65428aa/0x66d9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5496000/0x0/0x4ffc00000, data 0x65428aa/0x66d9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:11.304350+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4196493 data_alloc: 218103808 data_used: 10219520
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326934528 unmapped: 59916288 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:12.304500+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326934528 unmapped: 59916288 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:13.304646+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326934528 unmapped: 59916288 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f927c00780
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb400 session 0x55f92a540b40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f928d503c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5496000/0x0/0x4ffc00000, data 0x65428aa/0x66d9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f928153860
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.409175873s of 32.669803619s, submitted: 69
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92a9ded20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a60a000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a60a000 session 0x55f928153860
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:14.304850+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92caaa800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92caaa800 session 0x55f928d503c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a540b40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326926336 unmapped: 59924480 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92a9cf2c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:15.304994+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326926336 unmapped: 59924480 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:16.305181+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4266531 data_alloc: 218103808 data_used: 10223616
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326926336 unmapped: 59924480 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4bbb000/0x0/0x4ffc00000, data 0x6e2b8ba/0x6fc3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:17.305326+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326926336 unmapped: 59924480 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:18.305488+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326926336 unmapped: 59924480 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92aa385a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:19.305636+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326926336 unmapped: 59924480 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a60a000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a60a000 session 0x55f92a997c20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:20.305810+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326926336 unmapped: 59924480 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a600800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a600800 session 0x55f92a9def00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4bbb000/0x0/0x4ffc00000, data 0x6e2b8ba/0x6fc3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a4d12c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:21.306007+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4272851 data_alloc: 218103808 data_used: 10223616
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326942720 unmapped: 59908096 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:22.306183+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326959104 unmapped: 59891712 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:23.306338+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327770112 unmapped: 59080704 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:24.306546+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327770112 unmapped: 59080704 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:25.306738+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327770112 unmapped: 59080704 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4b95000/0x0/0x4ffc00000, data 0x6e4f8ed/0x6fe9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:26.306944+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4331863 data_alloc: 234881024 data_used: 18436096
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327770112 unmapped: 59080704 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4b95000/0x0/0x4ffc00000, data 0x6e4f8ed/0x6fe9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:27.307096+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327770112 unmapped: 59080704 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:28.307224+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4b95000/0x0/0x4ffc00000, data 0x6e4f8ed/0x6fe9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327770112 unmapped: 59080704 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:29.307396+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327770112 unmapped: 59080704 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:30.307588+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327778304 unmapped: 59072512 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:31.307737+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4331863 data_alloc: 234881024 data_used: 18436096
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327778304 unmapped: 59072512 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4b95000/0x0/0x4ffc00000, data 0x6e4f8ed/0x6fe9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:32.307877+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327778304 unmapped: 59072512 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.587835312s of 18.729412079s, submitted: 33
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:33.308020+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330080256 unmapped: 56770560 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:34.308162+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330661888 unmapped: 56188928 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:35.308299+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331792384 unmapped: 55058432 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e426b000/0x0/0x4ffc00000, data 0x77718ed/0x790b000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:36.308529+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416231 data_alloc: 234881024 data_used: 19189760
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331792384 unmapped: 55058432 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:37.308688+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331792384 unmapped: 55058432 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:38.308904+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e426b000/0x0/0x4ffc00000, data 0x77718ed/0x790b000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331792384 unmapped: 55058432 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:39.309056+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e426b000/0x0/0x4ffc00000, data 0x77718ed/0x790b000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331792384 unmapped: 55058432 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:40.309242+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331792384 unmapped: 55058432 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:41.309421+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416231 data_alloc: 234881024 data_used: 19189760
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331792384 unmapped: 55058432 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:42.309608+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331792384 unmapped: 55058432 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:43.309769+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331792384 unmapped: 55058432 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e426b000/0x0/0x4ffc00000, data 0x77718ed/0x790b000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.781734467s of 11.440135002s, submitted: 86
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:44.309925+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331792384 unmapped: 55058432 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:45.310067+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331792384 unmapped: 55058432 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:46.310209+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4414167 data_alloc: 234881024 data_used: 19177472
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331792384 unmapped: 55058432 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f928001e00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92a885a40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a60a000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:47.310340+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a60a000 session 0x55f929e454a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328851456 unmapped: 57999360 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:48.310522+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328851456 unmapped: 57999360 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:49.310695+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328851456 unmapped: 57999360 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e54a4000/0x0/0x4ffc00000, data 0x65428aa/0x66d9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:50.311125+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 57991168 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e54a4000/0x0/0x4ffc00000, data 0x65428aa/0x66d9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:51.311267+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4205614 data_alloc: 218103808 data_used: 10211328
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 57991168 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:52.311438+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 57991168 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:53.311560+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 57991168 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:54.311755+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 57991168 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:55.311917+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 57991168 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e54a4000/0x0/0x4ffc00000, data 0x65428aa/0x66d9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:56.312072+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4205614 data_alloc: 218103808 data_used: 10211328
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 57991168 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.439859390s of 12.549081802s, submitted: 53
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92c8cc400 session 0x55f92a763860
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e54a4000/0x0/0x4ffc00000, data 0x65428aa/0x66d9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:57.312225+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a884780
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:58.312368+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:35:59.312543+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:00.312775+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:01.312939+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4029906 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:02.313102+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fb000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:03.313221+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fb000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:04.313369+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:05.313533+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:06.313697+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4029906 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:07.313859+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:08.314001+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fb000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:09.314131+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:10.314321+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:11.314510+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4029906 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:12.314665+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:13.314862+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:14.315044+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fb000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:15.315233+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fb000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:16.315920+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4029906 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326270976 unmapped: 60579840 heap: 386850816 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92a28e780
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92a9974a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a60a000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a60a000 session 0x55f928152960
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f933248800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f933248800 session 0x55f92a4d0960
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.457277298s of 20.534765244s, submitted: 20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:17.316030+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a9d1a40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326303744 unmapped: 68943872 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92a28e1e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:18.316151+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326311936 unmapped: 68935680 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:19.316335+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326311936 unmapped: 68935680 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:20.316583+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5995000/0x0/0x4ffc00000, data 0x60528aa/0x61e9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326311936 unmapped: 68935680 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:21.316749+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5995000/0x0/0x4ffc00000, data 0x60528aa/0x61e9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4113607 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326311936 unmapped: 68935680 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:22.316872+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326311936 unmapped: 68935680 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:23.317033+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326311936 unmapped: 68935680 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f928d8c960
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:24.321733+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a60a000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326320128 unmapped: 68927488 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:25.321866+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5995000/0x0/0x4ffc00000, data 0x60528aa/0x61e9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326320128 unmapped: 68927488 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:26.322014+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4180647 data_alloc: 234881024 data_used: 12083200
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326336512 unmapped: 68911104 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:27.322153+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326336512 unmapped: 68911104 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:28.322259+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326336512 unmapped: 68911104 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:29.322397+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326336512 unmapped: 68911104 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:30.322587+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326336512 unmapped: 68911104 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5995000/0x0/0x4ffc00000, data 0x60528aa/0x61e9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:31.322714+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4180647 data_alloc: 234881024 data_used: 12083200
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326336512 unmapped: 68911104 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:32.322885+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326336512 unmapped: 68911104 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:33.323109+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326336512 unmapped: 68911104 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:34.323374+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326336512 unmapped: 68911104 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:35.323518+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5995000/0x0/0x4ffc00000, data 0x60528aa/0x61e9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.162900925s of 18.310272217s, submitted: 35
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 327262208 unmapped: 67985408 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5995000/0x0/0x4ffc00000, data 0x60528aa/0x61e9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:36.323683+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4233787 data_alloc: 234881024 data_used: 12664832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329007104 unmapped: 66240512 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:37.323865+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329154560 unmapped: 66093056 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:38.324002+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329154560 unmapped: 66093056 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:39.324169+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329154560 unmapped: 66093056 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:40.324407+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329154560 unmapped: 66093056 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e534e000/0x0/0x4ffc00000, data 0x66988aa/0x682f000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e534e000/0x0/0x4ffc00000, data 0x66988aa/0x682f000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:41.324569+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4241423 data_alloc: 234881024 data_used: 12873728
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329154560 unmapped: 66093056 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:42.324710+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329154560 unmapped: 66093056 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:43.324862+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329424896 unmapped: 65822720 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:44.325016+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e532e000/0x0/0x4ffc00000, data 0x66b98aa/0x6850000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329424896 unmapped: 65822720 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:45.325154+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329424896 unmapped: 65822720 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:46.325295+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e532e000/0x0/0x4ffc00000, data 0x66b98aa/0x6850000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4237779 data_alloc: 234881024 data_used: 12877824
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329424896 unmapped: 65822720 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:47.325406+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329424896 unmapped: 65822720 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:48.325526+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329424896 unmapped: 65822720 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:49.325762+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 65814528 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:50.326037+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 65814528 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:51.326178+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a331400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a331400 session 0x55f92bf061e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a330800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a330800 session 0x55f92a540780
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a763a40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92a7632c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.667339325s of 16.049318314s, submitted: 70
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4239923 data_alloc: 234881024 data_used: 12886016
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329433088 unmapped: 65814528 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92a9df860
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:52.326329+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a331400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5196000/0x0/0x4ffc00000, data 0x684f8e3/0x69e8000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a331400 session 0x55f928ab2960
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c7f400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929c7f400 session 0x55f927bb8780
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92bf074a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f929e5b0e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 65806336 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:53.326687+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 65806336 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4d10000/0x0/0x4ffc00000, data 0x6cd591c/0x6e6e000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:54.326890+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4d10000/0x0/0x4ffc00000, data 0x6cd591c/0x6e6e000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 65806336 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:55.327039+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92bf06780
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 65806336 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a331400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a331400 session 0x55f927bbe5a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:56.327192+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a335400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a335400 session 0x55f92aea3e00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a4d1680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294012 data_alloc: 234881024 data_used: 12886016
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329449472 unmapped: 65798144 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:57.327463+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329449472 unmapped: 65798144 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:58.327616+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4cec000/0x0/0x4ffc00000, data 0x6cf991c/0x6e92000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329457664 unmapped: 65789952 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:36:59.327788+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330244096 unmapped: 65003520 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:00.328014+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330260480 unmapped: 64987136 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:01.328197+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4332892 data_alloc: 234881024 data_used: 18210816
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330260480 unmapped: 64987136 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:02.328346+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330260480 unmapped: 64987136 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:03.328509+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4cec000/0x0/0x4ffc00000, data 0x6cf991c/0x6e92000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330260480 unmapped: 64987136 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4cec000/0x0/0x4ffc00000, data 0x6cf991c/0x6e92000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:04.328657+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330260480 unmapped: 64987136 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.587512970s of 13.324399948s, submitted: 35
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:05.328849+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4cec000/0x0/0x4ffc00000, data 0x6cf991c/0x6e92000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330268672 unmapped: 64978944 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:06.328965+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4333332 data_alloc: 234881024 data_used: 18210816
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330268672 unmapped: 64978944 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:07.329091+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330268672 unmapped: 64978944 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:08.329210+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331694080 unmapped: 63553536 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:09.329376+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 63537152 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:10.329538+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e42cb000/0x0/0x4ffc00000, data 0x771991c/0x78b2000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 63537152 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:11.329721+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4424204 data_alloc: 234881024 data_used: 18546688
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 63537152 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:12.329912+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 63537152 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:13.330098+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 63537152 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:14.330295+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331710464 unmapped: 63537152 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.867523193s of 10.164697647s, submitted: 87
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92a98b0e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92aea3e00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:15.330488+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a331400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e42cb000/0x0/0x4ffc00000, data 0x771991c/0x78b2000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [0,0,1])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a331400 session 0x55f928d8c1e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331743232 unmapped: 63504384 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:16.330664+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4252242 data_alloc: 234881024 data_used: 12947456
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 332791808 unmapped: 62455808 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:17.330769+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 332791808 unmapped: 62455808 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:18.330926+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 332791808 unmapped: 62455808 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a60a000 session 0x55f9281b4960
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:19.331008+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a9df4a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325206016 unmapped: 70041600 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:20.331189+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e08000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325206016 unmapped: 70041600 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:21.331326+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4050686 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325206016 unmapped: 70041600 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:22.331448+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325206016 unmapped: 70041600 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:23.331600+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325206016 unmapped: 70041600 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:24.331784+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e08000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325206016 unmapped: 70041600 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:25.332518+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e08000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325206016 unmapped: 70041600 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:26.332742+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4050686 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325206016 unmapped: 70041600 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:27.332949+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325206016 unmapped: 70041600 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:28.333415+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325206016 unmapped: 70041600 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:29.333907+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325206016 unmapped: 70041600 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:30.334375+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e08000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325214208 unmapped: 70033408 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:31.334700+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4050686 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325214208 unmapped: 70033408 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:32.334990+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325214208 unmapped: 70033408 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:33.335304+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325214208 unmapped: 70033408 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:34.335443+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325222400 unmapped: 70025216 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:35.335667+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325222400 unmapped: 70025216 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:36.335882+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e08000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92a9d1a40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92a98c5a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a331400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a331400 session 0x55f928daa3c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92caab400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92caab400 session 0x55f92a9df860
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4050686 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325230592 unmapped: 70017024 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.811437607s of 21.961263657s, submitted: 45
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a28eb40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:37.336004+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92a8850e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f929e44b40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a331400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a331400 session 0x55f9280cbe00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a601400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a601400 session 0x55f927bbe5a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325435392 unmapped: 69812224 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:38.336228+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325435392 unmapped: 69812224 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:39.336380+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325435392 unmapped: 69812224 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:40.336588+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f9281454a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92a9ced20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325435392 unmapped: 69812224 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:41.336729+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92a541860
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a331400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a331400 session 0x55f927c01680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4126909 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325435392 unmapped: 69812224 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92d869400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927c6bc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:42.336885+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5a7f000/0x0/0x4ffc00000, data 0x5f68858/0x60ff000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325435392 unmapped: 69812224 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:43.337031+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325984256 unmapped: 69263360 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:44.337210+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328015872 unmapped: 67231744 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:45.337340+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5a7f000/0x0/0x4ffc00000, data 0x5f68858/0x60ff000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328015872 unmapped: 67231744 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:46.337457+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4189601 data_alloc: 218103808 data_used: 11337728
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328015872 unmapped: 67231744 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:47.337613+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328015872 unmapped: 67231744 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:48.337751+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5a7f000/0x0/0x4ffc00000, data 0x5f68858/0x60ff000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328015872 unmapped: 67231744 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:49.337897+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328015872 unmapped: 67231744 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:50.338069+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328015872 unmapped: 67231744 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:51.338229+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4189601 data_alloc: 218103808 data_used: 11337728
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328015872 unmapped: 67231744 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:52.338377+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5a7f000/0x0/0x4ffc00000, data 0x5f68858/0x60ff000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 328015872 unmapped: 67231744 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:53.338522+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.143310547s of 16.282493591s, submitted: 20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330121216 unmapped: 65126400 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:54.338687+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330129408 unmapped: 65118208 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:55.338839+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e51bd000/0x0/0x4ffc00000, data 0x6821858/0x69b8000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330129408 unmapped: 65118208 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:56.338977+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4273223 data_alloc: 218103808 data_used: 11362304
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330129408 unmapped: 65118208 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:57.339096+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330129408 unmapped: 65118208 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:58.339213+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330129408 unmapped: 65118208 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:37:59.339354+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e51bd000/0x0/0x4ffc00000, data 0x6821858/0x69b8000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329457664 unmapped: 65789952 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:00.339549+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329457664 unmapped: 65789952 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:01.339682+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4267107 data_alloc: 218103808 data_used: 11362304
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329457664 unmapped: 65789952 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:02.339873+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329457664 unmapped: 65789952 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:03.340010+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329457664 unmapped: 65789952 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:04.340133+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329457664 unmapped: 65789952 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:05.340280+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e51c4000/0x0/0x4ffc00000, data 0x6823858/0x69ba000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329457664 unmapped: 65789952 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92d869400 session 0x55f92aa36b40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.530291557s of 12.769023895s, submitted: 71
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927c6bc00 session 0x55f92a996000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:06.340469+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a98d4a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4065977 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324599808 unmapped: 70647808 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:07.340608+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324599808 unmapped: 70647808 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:08.340767+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324599808 unmapped: 70647808 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:09.340869+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324599808 unmapped: 70647808 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:10.341064+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324599808 unmapped: 70647808 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:11.341255+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4065977 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324599808 unmapped: 70647808 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:12.341401+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324599808 unmapped: 70647808 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:13.341537+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324599808 unmapped: 70647808 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:14.341723+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324599808 unmapped: 70647808 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:15.341861+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324599808 unmapped: 70647808 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:16.341951+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4065977 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324608000 unmapped: 70639616 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:17.342094+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324608000 unmapped: 70639616 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:18.342250+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324608000 unmapped: 70639616 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:19.342399+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324616192 unmapped: 70631424 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:20.342601+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324616192 unmapped: 70631424 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:21.342749+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4065977 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324616192 unmapped: 70631424 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:22.342921+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324616192 unmapped: 70631424 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:23.343062+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324616192 unmapped: 70631424 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:24.343236+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324624384 unmapped: 70623232 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:25.343373+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324624384 unmapped: 70623232 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:26.343551+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4065977 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324624384 unmapped: 70623232 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:27.343986+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324624384 unmapped: 70623232 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:28.344153+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324624384 unmapped: 70623232 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:29.344343+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324624384 unmapped: 70623232 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:30.344960+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324624384 unmapped: 70623232 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:31.345153+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4065977 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324624384 unmapped: 70623232 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:32.345304+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 324632576 unmapped: 70615040 heap: 395247616 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:33.345544+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.135959625s of 27.221839905s, submitted: 32
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f928dab860
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325025792 unmapped: 73900032 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:34.345711+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325025792 unmapped: 73900032 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:35.345868+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e55c2000/0x0/0x4ffc00000, data 0x6426848/0x65bc000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325025792 unmapped: 73900032 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:36.346023+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4182149 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325025792 unmapped: 73900032 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:37.346187+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325025792 unmapped: 73900032 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:38.346341+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92a4d0f00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a331400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a331400 session 0x55f92a9972c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325025792 unmapped: 73900032 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:39.346516+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927c6bc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927c6bc00 session 0x55f928067c20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a9ce5a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325025792 unmapped: 73900032 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:40.346695+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e55c1000/0x0/0x4ffc00000, data 0x6426858/0x65bd000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325042176 unmapped: 73883648 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:41.346842+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4264531 data_alloc: 234881024 data_used: 13987840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 325435392 unmapped: 73490432 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:42.346976+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326811648 unmapped: 72114176 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:43.347111+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326811648 unmapped: 72114176 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:44.347240+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326811648 unmapped: 72114176 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:45.347684+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e55c1000/0x0/0x4ffc00000, data 0x6426858/0x65bd000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326811648 unmapped: 72114176 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:46.347875+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e55c1000/0x0/0x4ffc00000, data 0x6426858/0x65bd000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4288691 data_alloc: 234881024 data_used: 17424384
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326811648 unmapped: 72114176 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:47.348202+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326811648 unmapped: 72114176 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:48.348381+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326811648 unmapped: 72114176 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:49.348680+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e55c1000/0x0/0x4ffc00000, data 0x6426858/0x65bd000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:50.348992+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326811648 unmapped: 72114176 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e55c1000/0x0/0x4ffc00000, data 0x6426858/0x65bd000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:51.349186+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 326811648 unmapped: 72114176 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.322921753s of 18.454061508s, submitted: 24
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4337977 data_alloc: 234881024 data_used: 17432576
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:52.349376+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333193216 unmapped: 65732608 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:53.349532+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331882496 unmapped: 67043328 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:54.349676+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 332693504 unmapped: 66232320 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e49cb000/0x0/0x4ffc00000, data 0x7016858/0x71ad000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:55.349917+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333324288 unmapped: 65601536 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:56.350114+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333381632 unmapped: 65544192 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4383431 data_alloc: 234881024 data_used: 18649088
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:57.350258+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333496320 unmapped: 65429504 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:58.350415+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 63119360 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:38:59.350569+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 63119360 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:00.350761+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 63119360 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4990000/0x0/0x4ffc00000, data 0x704f858/0x71e6000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:01.350924+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 63119360 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4396007 data_alloc: 234881024 data_used: 18890752
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:02.351056+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 63119360 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:03.351196+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 63119360 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.644370079s of 11.762716293s, submitted: 141
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:04.351346+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 63119360 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4995000/0x0/0x4ffc00000, data 0x7052858/0x71e9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:05.351477+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 63119360 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:06.351649+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 63119360 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:07.351843+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4390295 data_alloc: 234881024 data_used: 18890752
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 63119360 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:08.352008+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 63119360 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:09.353058+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 63119360 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:10.353220+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 63119360 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4995000/0x0/0x4ffc00000, data 0x7052858/0x71e9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92aa33800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:11.353349+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 55992320 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92aa33800 session 0x55f928001e00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a5fc400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a5fc400 session 0x55f92a9cf2c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:12.353510+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4440372 data_alloc: 234881024 data_used: 18890752
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 63078400 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e42ca000/0x0/0x4ffc00000, data 0x771c8ba/0x78b4000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:13.353698+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 63078400 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:14.353882+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 63078400 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:15.354013+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 63078400 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e42ca000/0x0/0x4ffc00000, data 0x771c8ba/0x78b4000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:16.354133+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 63078400 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:17.354305+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4440372 data_alloc: 234881024 data_used: 18890752
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 63078400 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:18.354488+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a5f5c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.316019058s of 14.806444168s, submitted: 27
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a5f5c00 session 0x55f928d503c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 63078400 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e42c9000/0x0/0x4ffc00000, data 0x771c8dd/0x78b5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:19.354602+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335847424 unmapped: 63078400 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:20.354753+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a5f5c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335978496 unmapped: 62947328 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:21.354870+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337403904 unmapped: 61521920 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:22.354985+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4481689 data_alloc: 234881024 data_used: 22646784
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337403904 unmapped: 61521920 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e42c9000/0x0/0x4ffc00000, data 0x771c8dd/0x78b5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:23.355119+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337403904 unmapped: 61521920 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:24.355284+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337403904 unmapped: 61521920 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e42c9000/0x0/0x4ffc00000, data 0x771c8dd/0x78b5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:25.355420+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337403904 unmapped: 61521920 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e42c9000/0x0/0x4ffc00000, data 0x771c8dd/0x78b5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:26.355540+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337403904 unmapped: 61521920 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:27.355659+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4482077 data_alloc: 234881024 data_used: 22650880
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337403904 unmapped: 61521920 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:28.355785+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337403904 unmapped: 61521920 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.288042068s of 10.306856155s, submitted: 6
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e42c8000/0x0/0x4ffc00000, data 0x771d8dd/0x78b6000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:29.355949+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337412096 unmapped: 61513728 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:30.356117+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337838080 unmapped: 61087744 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:31.356250+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337838080 unmapped: 61087744 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:32.356395+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4540271 data_alloc: 234881024 data_used: 22720512
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61079552 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e3c3d000/0x0/0x4ffc00000, data 0x7da88dd/0x7f41000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:33.356550+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61079552 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:34.356688+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61079552 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:35.356871+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61079552 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:36.357011+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 61079552 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e3c3c000/0x0/0x4ffc00000, data 0x7da88dd/0x7f41000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:37.357229+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a5f5c00 session 0x55f92a9de1e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4537287 data_alloc: 234881024 data_used: 22724608
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337854464 unmapped: 61071360 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927c6bc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927c6bc00 session 0x55f928abf4a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:38.357465+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334659584 unmapped: 64266240 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:39.357649+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334659584 unmapped: 64266240 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4992000/0x0/0x4ffc00000, data 0x7053858/0x71ea000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:40.357832+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334659584 unmapped: 64266240 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:41.358056+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92a9de5a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92a9cf4a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334659584 unmapped: 64266240 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927fdb800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.282309532s of 13.575644493s, submitted: 86
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:42.358216+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4078929 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927fdb800 session 0x55f92a9ce000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:43.358404+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:44.358623+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:45.358959+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:46.359118+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:47.359249+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4078204 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:48.359366+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:49.359613+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:50.359909+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:51.360132+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:52.360264+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4078204 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:53.360409+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:54.360550+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:55.360700+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:56.360879+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:57.361029+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4078204 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:58.361179+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:39:59.361363+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:00.361565+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 69099520 heap: 398925824 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927c6bc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.562551498s of 18.648386002s, submitted: 16
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927c6bc00 session 0x55f92a9961e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:01.361698+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f9281452c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e63fc000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 71712768 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:02.361868+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4164787 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 71712768 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:03.361987+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e58cb000/0x0/0x4ffc00000, data 0x611c8aa/0x62b3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 71712768 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:04.362167+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 71712768 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92a28f680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:05.362330+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 330891264 unmapped: 71712768 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a5f5c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a5f5c00 session 0x55f927bb94a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a5fc400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a5fc400 session 0x55f928d6f2c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:06.362525+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927c6bc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927c6bc00 session 0x55f928814780
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331046912 unmapped: 71557120 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:07.362701+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4166359 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331046912 unmapped: 71557120 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:08.362915+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 331046912 unmapped: 71557120 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e58a7000/0x0/0x4ffc00000, data 0x61408aa/0x62d7000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:09.363027+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334127104 unmapped: 68476928 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:10.363169+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334127104 unmapped: 68476928 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:11.363358+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e58a7000/0x0/0x4ffc00000, data 0x61408aa/0x62d7000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334127104 unmapped: 68476928 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:12.363530+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4248119 data_alloc: 234881024 data_used: 14049280
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334127104 unmapped: 68476928 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:13.363698+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334127104 unmapped: 68476928 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e58a7000/0x0/0x4ffc00000, data 0x61408aa/0x62d7000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:14.363923+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334127104 unmapped: 68476928 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:15.364081+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334127104 unmapped: 68476928 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:16.364255+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334127104 unmapped: 68476928 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:17.364405+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4248119 data_alloc: 234881024 data_used: 14049280
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334127104 unmapped: 68476928 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:18.364525+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.531391144s of 17.726591110s, submitted: 35
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334127104 unmapped: 68476928 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5098000/0x0/0x4ffc00000, data 0x69478aa/0x6ade000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1407f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:19.364642+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 65830912 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:20.364802+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337092608 unmapped: 65511424 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:21.364983+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337092608 unmapped: 65511424 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:22.365117+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4331465 data_alloc: 234881024 data_used: 14491648
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337092608 unmapped: 65511424 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:23.365287+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337092608 unmapped: 65511424 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:24.365459+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337092608 unmapped: 65511424 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4bb1000/0x0/0x4ffc00000, data 0x6a1e8aa/0x6bb5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1448f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:25.365651+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337092608 unmapped: 65511424 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:26.365782+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4bb1000/0x0/0x4ffc00000, data 0x6a1e8aa/0x6bb5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1448f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 66183168 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:27.365975+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4325581 data_alloc: 234881024 data_used: 14495744
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 66183168 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:28.366108+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 66183168 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4b98000/0x0/0x4ffc00000, data 0x6a3f8aa/0x6bd6000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1448f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:29.366258+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 66183168 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:30.366448+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 66183168 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:31.366593+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 66183168 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:32.366737+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4325581 data_alloc: 234881024 data_used: 14495744
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 66183168 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4b98000/0x0/0x4ffc00000, data 0x6a3f8aa/0x6bd6000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1448f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:33.366891+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4b98000/0x0/0x4ffc00000, data 0x6a3f8aa/0x6bd6000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1448f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 66183168 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4b98000/0x0/0x4ffc00000, data 0x6a3f8aa/0x6bd6000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1448f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.059427261s of 15.518669128s, submitted: 85
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:34.367077+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336420864 unmapped: 66183168 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a5f5c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a5f5c00 session 0x55f92aa39680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92aa33800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92aa33800 session 0x55f927bbf680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c7e400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929c7e400 session 0x55f927c010e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:35.367211+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927c6a400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927c6a400 session 0x55f92a884d20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927c6bc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927c6bc00 session 0x55f92aa39a40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 65953792 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:36.367386+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4556000/0x0/0x4ffc00000, data 0x70818aa/0x7218000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1448f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 65953792 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:37.367528+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4377251 data_alloc: 234881024 data_used: 14495744
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 65953792 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:38.367697+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 65953792 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:39.367843+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 65953792 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c7e400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929c7e400 session 0x55f92a9dfe00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:40.367983+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 65953792 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4555000/0x0/0x4ffc00000, data 0x70818cd/0x7219000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1448f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a5f5c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:41.368119+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 65953792 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:42.368248+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404600 data_alloc: 234881024 data_used: 17530880
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 65904640 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:43.368366+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 65904640 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:44.368502+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 65904640 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4555000/0x0/0x4ffc00000, data 0x70818cd/0x7219000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1448f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:45.368802+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 65904640 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:46.369096+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 65904640 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:47.369227+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415960 data_alloc: 234881024 data_used: 19144704
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 65904640 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:48.369352+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 65904640 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:49.369468+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 65904640 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.745984077s of 15.872829437s, submitted: 17
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:50.369612+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e454f000/0x0/0x4ffc00000, data 0x70878cd/0x721f000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1448f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336748544 unmapped: 65855488 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:51.369731+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336748544 unmapped: 65855488 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:52.369885+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4526096 data_alloc: 234881024 data_used: 19308544
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341270528 unmapped: 61333504 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:53.370042+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341270528 unmapped: 61333504 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:54.370281+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341311488 unmapped: 61292544 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:55.370490+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341311488 unmapped: 61292544 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:56.370667+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e254f000/0x0/0x4ffc00000, data 0x7ede8cd/0x8076000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1562f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341319680 unmapped: 61284352 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:57.370838+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4544108 data_alloc: 234881024 data_used: 19693568
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341319680 unmapped: 61284352 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:58.370987+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341319680 unmapped: 61284352 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:40:59.371101+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a5f5c00 session 0x55f928daf680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341319680 unmapped: 61284352 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92aa33800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.973115921s of 10.293352127s, submitted: 110
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:00.371251+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e39e8000/0x0/0x4ffc00000, data 0x6a4e8cd/0x6be6000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1562f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92aa33800 session 0x55f928e043c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 339697664 unmapped: 62906368 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:01.371428+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 339697664 unmapped: 62906368 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:02.371571+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e39e5000/0x0/0x4ffc00000, data 0x6a518aa/0x6be8000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1562f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4338562 data_alloc: 234881024 data_used: 13213696
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 339697664 unmapped: 62906368 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:03.371704+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 339697664 unmapped: 62906368 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:04.371863+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f929e5be00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92a997680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 62898176 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927c6bc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:05.372032+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927c6bc00 session 0x55f9281534a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:06.372223+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:07.372395+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4d95000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1562f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4102430 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:08.372592+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:09.372776+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4d95000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1562f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:10.372999+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:11.373141+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4d95000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1562f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:12.373306+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4102430 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:13.373454+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:14.373551+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:15.373645+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4d95000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1562f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:16.373779+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:17.373974+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4102430 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:18.374150+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:19.374356+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:20.374588+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:21.374748+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4d95000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1562f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:22.374915+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4102430 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:23.375116+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335790080 unmapped: 66813952 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:24.375272+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4d95000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1562f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 44K writes, 177K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 44K writes, 16K syncs, 2.75 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2843 writes, 12K keys, 2843 commit groups, 1.0 writes per commit group, ingest: 13.29 MB, 0.02 MB/s
                                           Interval WAL: 2843 writes, 1055 syncs, 2.69 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335806464 unmapped: 66797568 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92a4d05a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c7e400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929c7e400 session 0x55f92bf070e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a5f5c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a5f5c00 session 0x55f92a9df0e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:25.375394+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927c6bc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927c6bc00 session 0x55f928dab680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.134710312s of 25.277557373s, submitted: 48
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92a885e00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f927fcd4a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c7e400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929c7e400 session 0x55f92aa385a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92aa33800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92aa33800 session 0x55f92a9970e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927c6bc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927c6bc00 session 0x55f92a762780
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337051648 unmapped: 65552384 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets getting new tickets!
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:26.375633+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _finish_auth 0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:26.377396+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337059840 unmapped: 65544192 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4737000/0x0/0x4ffc00000, data 0x5d008aa/0x5e97000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1562f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:27.375740+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4168178 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337059840 unmapped: 65544192 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:28.375876+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92a4d10e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4737000/0x0/0x4ffc00000, data 0x5d008aa/0x5e97000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1562f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337068032 unmapped: 65536000 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:29.376001+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92882f000 session 0x55f92a5403c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337068032 unmapped: 65536000 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:30.376171+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c7e400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929c7e400 session 0x55f92a4d12c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a60c400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337068032 unmapped: 65536000 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a60c400 session 0x55f92aa381e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:31.376305+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927c6bc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: mgrc ms_handle_reset ms_handle_reset con 0x55f92a333c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2898047278
Oct 11 10:01:12 compute-0 ceph-osd[88249]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2898047278,v1:192.168.122.100:6801/2898047278]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: get_auth_request con 0x55f927c6a400 auth_method 0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: mgrc handle_mgr_configure stats_period=5
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 337068032 unmapped: 65536000 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:32.376434+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4735000/0x0/0x4ffc00000, data 0x5d008dd/0x5e99000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1562f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4175897 data_alloc: 218103808 data_used: 3260416
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 65732608 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:33.376544+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 65732608 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:34.376708+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 65732608 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927c6bc00 session 0x55f92aa374a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92bf07860
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:35.376898+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4735000/0x0/0x4ffc00000, data 0x5d008dd/0x5e99000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1562f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 65732608 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:36.377097+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 65732608 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:37.377224+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a5fa000 session 0x55f9288152c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92882f000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213845 data_alloc: 218103808 data_used: 8712192
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 65732608 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:38.377362+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 65732608 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:39.377541+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4735000/0x0/0x4ffc00000, data 0x5d008dd/0x5e99000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1562f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c7e400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.291606903s of 14.514014244s, submitted: 57
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 65732608 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92d869c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:40.377739+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 65732608 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:41.377891+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 65732608 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:42.378011+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213913 data_alloc: 218103808 data_used: 8720384
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 65732608 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:43.378193+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929c7e400 session 0x55f928daf680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92d869c00 session 0x55f92a9d12c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4735000/0x0/0x4ffc00000, data 0x5d008dd/0x5e99000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1562f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 65732608 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:44.378535+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 65724416 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:45.378696+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 65724416 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:46.379127+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 65724416 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:47.379274+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213781 data_alloc: 218103808 data_used: 8720384
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 65724416 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:48.379416+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92c8ce800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929d99400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 65724416 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:49.379528+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e4735000/0x0/0x4ffc00000, data 0x5d008dd/0x5e99000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1562f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 65724416 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:50.379964+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 65724416 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:51.380385+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 65724416 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.173917770s of 12.186678886s, submitted: 3
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92c8ce800 session 0x55f92a9ced20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929d99400 session 0x55f928000d20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:52.380544+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927c6bc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927c6bc00 session 0x55f928d8a3c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4112002 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 68820992 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:53.380726+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 68820992 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:54.380925+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333783040 unmapped: 68820992 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:55.381101+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e8a000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333791232 unmapped: 68812800 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:56.381480+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e8a000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e8a000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333791232 unmapped: 68812800 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:57.381627+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4112002 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333791232 unmapped: 68812800 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:58.381784+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333791232 unmapped: 68812800 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:41:59.381939+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333791232 unmapped: 68812800 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:00.382142+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e8a000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333791232 unmapped: 68812800 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:01.382486+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e8a000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e8a000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333791232 unmapped: 68812800 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:02.382682+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4112002 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333791232 unmapped: 68812800 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:03.382973+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333791232 unmapped: 68812800 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:04.383138+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333791232 unmapped: 68812800 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:05.383338+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333791232 unmapped: 68812800 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:06.383495+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333799424 unmapped: 68804608 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:07.383690+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e8a000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4112002 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333799424 unmapped: 68804608 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:08.383872+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e8a000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333807616 unmapped: 68796416 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:09.384036+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333807616 unmapped: 68796416 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:10.384267+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333807616 unmapped: 68796416 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:11.384422+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333807616 unmapped: 68796416 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:12.384603+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4112002 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333807616 unmapped: 68796416 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:13.384804+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333807616 unmapped: 68796416 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:14.385125+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e8a000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333807616 unmapped: 68796416 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:15.385302+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333807616 unmapped: 68796416 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:16.385480+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e8a000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333815808 unmapped: 68788224 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:17.385619+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4112002 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333815808 unmapped: 68788224 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:18.385776+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333815808 unmapped: 68788224 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:19.385932+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e8a000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333815808 unmapped: 68788224 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:20.386106+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333815808 unmapped: 68788224 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e8a000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:21.386253+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333815808 unmapped: 68788224 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:22.386302+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4112002 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333815808 unmapped: 68788224 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:23.386414+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333815808 unmapped: 68788224 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:24.386556+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333824000 unmapped: 68780032 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:25.386703+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e8a000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333824000 unmapped: 68780032 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:26.386879+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333824000 unmapped: 68780032 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:27.387022+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4112002 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333824000 unmapped: 68780032 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:28.387171+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333824000 unmapped: 68780032 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:29.387336+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 333824000 unmapped: 68780032 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:30.387531+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f927c605a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c7e400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929c7e400 session 0x55f928d8cd20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92d869c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92d869c00 session 0x55f929e5b2c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927c6bc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927c6bc00 session 0x55f9280ca960
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e8a000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.356048584s of 38.511108398s, submitted: 42
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92a28ef00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c7e400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929c7e400 session 0x55f9280ca000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e5e8a000/0x0/0x4ffc00000, data 0x55ec848/0x5782000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334888960 unmapped: 67715072 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:31.387656+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334888960 unmapped: 67715072 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:32.387894+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179115 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 67706880 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:33.388039+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 67706880 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:34.388173+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 67706880 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:35.388333+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929d99400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f929d99400 session 0x55f9281eb680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e569e000/0x0/0x4ffc00000, data 0x5dd98aa/0x5f70000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92aa35800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92aa35800 session 0x55f927c003c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:36.388460+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 67706880 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927c6bc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f927c6bc00 session 0x55f92a9dfc20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92818cc00 session 0x55f92a997e00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:37.388573+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 67698688 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c7e400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e569d000/0x0/0x4ffc00000, data 0x5dd98ba/0x5f71000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4181238 data_alloc: 218103808 data_used: 2605056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:38.388707+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 67690496 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929d99400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:39.388854+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 67690496 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:40.389043+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335347712 unmapped: 67256320 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:41.389166+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335347712 unmapped: 67256320 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e569d000/0x0/0x4ffc00000, data 0x5dd98ba/0x5f71000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:42.389277+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335347712 unmapped: 67256320 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4239478 data_alloc: 218103808 data_used: 10813440
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:43.389397+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335347712 unmapped: 67256320 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:44.389566+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335347712 unmapped: 67256320 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:45.389751+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335347712 unmapped: 67256320 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:46.389906+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335347712 unmapped: 67256320 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:47.390084+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335347712 unmapped: 67256320 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e569d000/0x0/0x4ffc00000, data 0x5dd98ba/0x5f71000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x145ef9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4239478 data_alloc: 218103808 data_used: 10813440
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:48.390240+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 335347712 unmapped: 67256320 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.084468842s of 18.245582581s, submitted: 39
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:49.390362+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338812928 unmapped: 63791104 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:50.391165+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338812928 unmapped: 63791104 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:51.391303+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 62349312 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:52.391605+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 62349312 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 ms_handle_reset con 0x55f92a607c00 session 0x55f92a8841e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a332400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4343324 data_alloc: 218103808 data_used: 11014144
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:53.392191+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 62349312 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e38e7000/0x0/0x4ffc00000, data 0x69ef8ba/0x6b87000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:54.392325+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 62349312 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:55.392899+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 62341120 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:56.393042+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e38e7000/0x0/0x4ffc00000, data 0x69ef8ba/0x6b87000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 62341120 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:57.393182+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 62341120 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4341468 data_alloc: 218103808 data_used: 11018240
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:58.393316+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 62341120 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:42:59.393478+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 62341120 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:00.393796+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 62324736 heap: 402604032 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.641731262s of 11.925760269s, submitted: 109
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a607c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f928883400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 289 heartbeat osd_stat(store_statfs(0x4e38e1000/0x0/0x4ffc00000, data 0x69f48ba/0x6b8c000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [0,0,0,1])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 289 ms_handle_reset con 0x55f928883400 session 0x55f928d503c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 289 ms_handle_reset con 0x55f92a607c00 session 0x55f9280003c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c85000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:01.394034+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 355221504 unmapped: 54779904 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 289 ms_handle_reset con 0x55f929c85000 session 0x55f927c00d20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c85000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 289 ms_handle_reset con 0x55f929c85000 session 0x55f92a763680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _renew_subs
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:02.394187+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 357310464 unmapped: 52690944 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:03.394380+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4477552 data_alloc: 234881024 data_used: 23134208
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 60039168 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e2bb3000/0x0/0x4ffc00000, data 0x7721008/0x78bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f927c6bc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92818cc00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 290 ms_handle_reset con 0x55f927c6bc00 session 0x55f927c01680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 290 ms_handle_reset con 0x55f92818cc00 session 0x55f92a541860
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 290 handle_osd_map epochs [291,291], i have 291, src has [1,291]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:04.394572+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 60014592 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:05.394717+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 60014592 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:06.394872+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 60014592 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:07.395049+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 60014592 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:08.395197+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4481726 data_alloc: 234881024 data_used: 23142400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 349995008 unmapped: 60006400 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:09.395352+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 349995008 unmapped: 60006400 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e2baf000/0x0/0x4ffc00000, data 0x7722ba1/0x78be000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:10.395547+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f928883400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 ms_handle_reset con 0x55f928883400 session 0x55f9281534a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a607c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 ms_handle_reset con 0x55f92a607c00 session 0x55f92bf070e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92b158c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 ms_handle_reset con 0x55f92b158c00 session 0x55f92a9970e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 349995008 unmapped: 60006400 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92caaa400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 ms_handle_reset con 0x55f92caaa400 session 0x55f92a5403c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f928883400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 ms_handle_reset con 0x55f928883400 session 0x55f928ab30e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c85000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 ms_handle_reset con 0x55f929c85000 session 0x55f92a9cf680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a607c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 ms_handle_reset con 0x55f92a607c00 session 0x55f92a8850e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:11.395745+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 350003200 unmapped: 59998208 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:12.395929+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 350003200 unmapped: 59998208 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:13.396088+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4487956 data_alloc: 234881024 data_used: 23142400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 350003200 unmapped: 59998208 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:14.396219+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 350003200 unmapped: 59998208 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92b158c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 ms_handle_reset con 0x55f92b158c00 session 0x55f928ab30e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:15.396400+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 350003200 unmapped: 59998208 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a5fd000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 ms_handle_reset con 0x55f92a5fd000 session 0x55f92a5403c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e2bab000/0x0/0x4ffc00000, data 0x7724666/0x78c2000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:16.396631+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f928883400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 ms_handle_reset con 0x55f928883400 session 0x55f92a9970e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c85000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.748740196s of 15.585949898s, submitted: 163
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 350011392 unmapped: 59990016 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 ms_handle_reset con 0x55f929c85000 session 0x55f92bf070e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e2bab000/0x0/0x4ffc00000, data 0x7724689/0x78c3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a5fd000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:17.396795+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59973632 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:18.396998+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4490781 data_alloc: 234881024 data_used: 23384064
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 350027776 unmapped: 59973632 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:19.397151+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 57835520 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:20.397349+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 57835520 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:21.397507+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 57835520 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e2bab000/0x0/0x4ffc00000, data 0x7724689/0x78c3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e2bab000/0x0/0x4ffc00000, data 0x7724689/0x78c3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:22.397645+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 57835520 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:23.397794+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4529981 data_alloc: 234881024 data_used: 28205056
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 57835520 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:24.397947+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 57835520 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:25.398092+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 57835520 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:26.398199+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 57835520 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:27.398326+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 57835520 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e2bab000/0x0/0x4ffc00000, data 0x7724689/0x78c3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:28.398475+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4530141 data_alloc: 234881024 data_used: 28209152
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 352165888 unmapped: 57835520 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.201756477s of 12.221487045s, submitted: 6
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e2bab000/0x0/0x4ffc00000, data 0x7724689/0x78c3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:29.398645+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 54910976 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e2ba6000/0x0/0x4ffc00000, data 0x7729689/0x78c8000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:30.398785+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 355205120 unmapped: 54796288 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:31.398900+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 355205120 unmapped: 54796288 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:32.399056+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 355205120 unmapped: 54796288 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:33.399203+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4551533 data_alloc: 251658240 data_used: 32849920
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 355213312 unmapped: 54788096 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e2ba6000/0x0/0x4ffc00000, data 0x7729689/0x78c8000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e2ba6000/0x0/0x4ffc00000, data 0x7729689/0x78c8000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:34.399373+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 355999744 unmapped: 54001664 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:35.399532+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 355999744 unmapped: 54001664 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:36.399719+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e2b38000/0x0/0x4ffc00000, data 0x7797689/0x7936000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 355999744 unmapped: 54001664 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:37.399886+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356007936 unmapped: 53993472 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:38.400028+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4560437 data_alloc: 251658240 data_used: 32849920
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356007936 unmapped: 53993472 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e2b38000/0x0/0x4ffc00000, data 0x7797689/0x7936000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:39.400204+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356007936 unmapped: 53993472 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:40.400392+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356007936 unmapped: 53993472 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:41.400535+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356007936 unmapped: 53993472 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:42.400649+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356007936 unmapped: 53993472 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:43.400766+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4560437 data_alloc: 251658240 data_used: 32849920
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e2b38000/0x0/0x4ffc00000, data 0x7797689/0x7936000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356007936 unmapped: 53993472 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:44.400961+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356007936 unmapped: 53993472 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:45.401083+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356007936 unmapped: 53993472 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:46.401225+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356007936 unmapped: 53993472 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.314476013s of 18.385412216s, submitted: 16
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:47.401405+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356007936 unmapped: 53993472 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e2b37000/0x0/0x4ffc00000, data 0x7798689/0x7937000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:48.401578+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4560713 data_alloc: 251658240 data_used: 32849920
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356016128 unmapped: 53985280 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e2b37000/0x0/0x4ffc00000, data 0x7798689/0x7937000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:49.401716+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356016128 unmapped: 53985280 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:50.401878+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356016128 unmapped: 53985280 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:51.402028+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356016128 unmapped: 53985280 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:52.402154+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356016128 unmapped: 53985280 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e2b37000/0x0/0x4ffc00000, data 0x7798689/0x7937000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:53.402318+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4560713 data_alloc: 251658240 data_used: 32849920
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356016128 unmapped: 53985280 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:54.407399+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 ms_handle_reset con 0x55f929c80c00 session 0x55f929e5bc20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a607c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356065280 unmapped: 53936128 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:55.407495+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c80c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 ms_handle_reset con 0x55f929c80c00 session 0x55f92a540b40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92b158c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 356065280 unmapped: 53936128 heap: 410001408 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _renew_subs
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 293 ms_handle_reset con 0x55f92b158c00 session 0x55f92aa383c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a335400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a607400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:56.407644+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 293 ms_handle_reset con 0x55f92a607400 session 0x55f9288152c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 293 ms_handle_reset con 0x55f92a335400 session 0x55f92a8854a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f928883400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 51552256 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 293 heartbeat osd_stat(store_statfs(0x4e1662000/0x0/0x4ffc00000, data 0x8c6a268/0x8e0b000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [0,0,0,0,0,1,1,1])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 293 ms_handle_reset con 0x55f928883400 session 0x55f928abf4a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c80c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.488361359s of 10.013427734s, submitted: 49
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 293 handle_osd_map epochs [293,294], i have 293, src has [1,294]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _renew_subs
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 293 handle_osd_map epochs [294,294], i have 294, src has [1,294]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 294 ms_handle_reset con 0x55f929c80c00 session 0x55f929e44960
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c85000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:57.407783+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 55140352 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _renew_subs
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 295 ms_handle_reset con 0x55f929c85000 session 0x55f927bbf2c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:58.407940+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4749452 data_alloc: 251658240 data_used: 36716544
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e15f0000/0x0/0x4ffc00000, data 0x8cdae39/0x8e7d000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 55074816 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:43:59.408082+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92b158c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 54992896 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _renew_subs
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:00.408299+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 296 ms_handle_reset con 0x55f92b158c00 session 0x55f927fccd20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 57589760 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:01.408444+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 57581568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e2ab5000/0x0/0x4ffc00000, data 0x77a455d/0x7948000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:02.408616+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 57581568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 296 ms_handle_reset con 0x55f92a5fd000 session 0x55f927c00d20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e2ab5000/0x0/0x4ffc00000, data 0x77a455d/0x7948000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:03.408768+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4591970 data_alloc: 251658240 data_used: 36712448
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f928883400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 57581568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:04.408907+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 360841216 unmapped: 57565184 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _renew_subs
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 297 ms_handle_reset con 0x55f928883400 session 0x55f9281ead20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:05.409046+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 360841216 unmapped: 57565184 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e2b22000/0x0/0x4ffc00000, data 0x772cf73/0x78d0000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:06.409229+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 360841216 unmapped: 57565184 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:07.409404+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c80c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 57556992 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.551476479s of 10.607522011s, submitted: 121
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 298 ms_handle_reset con 0x55f929c80c00 session 0x55f928909e00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:08.409549+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419992 data_alloc: 234881024 data_used: 22446080
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 360538112 unmapped: 57868288 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:09.409705+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 360538112 unmapped: 57868288 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 298 ms_handle_reset con 0x55f929d99400 session 0x55f92a884d20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 298 ms_handle_reset con 0x55f929c7e400 session 0x55f928e054a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f928883400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:10.409883+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340844544 unmapped: 77561856 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e38c4000/0x0/0x4ffc00000, data 0x6a05b44/0x6baa000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 298 ms_handle_reset con 0x55f928883400 session 0x55f927fcc5a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:11.410089+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:12.410267+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:13.410444+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4169933 data_alloc: 218103808 data_used: 2166784
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e46d1000/0x0/0x4ffc00000, data 0x55fdad2/0x57a0000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:14.410643+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:15.410872+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:16.411087+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4cca000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:17.411300+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:18.411463+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4173931 data_alloc: 218103808 data_used: 2174976
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4cca000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:19.411609+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:20.411859+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:21.412039+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:22.412173+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:23.412343+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4173931 data_alloc: 218103808 data_used: 2174976
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4cca000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:24.412499+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4cca000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:25.412708+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:26.412914+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:27.413112+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:28.413281+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4173931 data_alloc: 218103808 data_used: 2174976
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:29.413400+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340860928 unmapped: 77545472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:30.413565+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4cca000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 77537280 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:31.413672+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 77537280 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:32.413856+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 77537280 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:33.414050+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4173931 data_alloc: 218103808 data_used: 2174976
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c80c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.309310913s of 25.986526489s, submitted: 72
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 77537280 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 ms_handle_reset con 0x55f929c80c00 session 0x55f928dafe00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929d99400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 ms_handle_reset con 0x55f929d99400 session 0x55f92a9ce1e0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:34.414182+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a5fd000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 ms_handle_reset con 0x55f92a5fd000 session 0x55f92a885860
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340983808 unmapped: 77422592 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e45d0000/0x0/0x4ffc00000, data 0x5cf9597/0x5e9e000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:35.414311+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c85000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 ms_handle_reset con 0x55f929c85000 session 0x55f92a28f2c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340983808 unmapped: 77422592 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f928883400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 ms_handle_reset con 0x55f928883400 session 0x55f92a9df860
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c80c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 ms_handle_reset con 0x55f929c80c00 session 0x55f928ab32c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:36.414448+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929d99400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340983808 unmapped: 77422592 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e45cf000/0x0/0x4ffc00000, data 0x5cf95a7/0x5e9f000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:37.414583+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a5fd000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340983808 unmapped: 77422592 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e45cf000/0x0/0x4ffc00000, data 0x5cf95a7/0x5e9f000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:38.414725+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4285886 data_alloc: 218103808 data_used: 9400320
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341532672 unmapped: 76873728 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:39.414887+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341532672 unmapped: 76873728 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:40.415057+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341532672 unmapped: 76873728 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 ms_handle_reset con 0x55f92a5fd000 session 0x55f928d8a3c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 ms_handle_reset con 0x55f929d99400 session 0x55f92a541c20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a335400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:41.415198+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 ms_handle_reset con 0x55f92a335400 session 0x55f92a541a40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338329600 unmapped: 80076800 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:42.415288+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338329600 unmapped: 80076800 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:43.415456+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179945 data_alloc: 218103808 data_used: 2174976
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338329600 unmapped: 80076800 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:44.415624+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338329600 unmapped: 80076800 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:45.415806+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:46.416051+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:47.416196+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:48.416341+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179945 data_alloc: 218103808 data_used: 2174976
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:49.416518+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:50.416702+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:51.416894+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:52.417061+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:53.417242+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179945 data_alloc: 218103808 data_used: 2174976
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:54.417390+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:55.417537+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:56.417692+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:57.417865+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:58.418029+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179945 data_alloc: 218103808 data_used: 2174976
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:44:59.418185+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:00.418374+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:01.418507+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:02.418651+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:03.418863+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179945 data_alloc: 218103808 data_used: 2174976
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:04.418992+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:05.419183+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:06.419387+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:07.419520+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:08.419674+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179945 data_alloc: 218103808 data_used: 2174976
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:09.419883+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:10.420090+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:11.420261+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:12.420456+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:13.420851+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179945 data_alloc: 218103808 data_used: 2174976
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:14.421050+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 ms_handle_reset con 0x55f92a601800 session 0x55f92a9ce780
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f928883400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:15.421232+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:16.421405+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:17.421546+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:18.421701+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179945 data_alloc: 218103808 data_used: 2174976
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:19.421901+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:20.422090+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 ms_handle_reset con 0x55f92b158800 session 0x55f92a8843c0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c80c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:21.422253+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 ms_handle_reset con 0x55f929c83800 session 0x55f928abed20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929d99400
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:22.422433+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:23.422592+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179945 data_alloc: 218103808 data_used: 2174976
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:24.422786+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:25.422908+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:26.423250+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:27.423522+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:28.423727+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179945 data_alloc: 218103808 data_used: 2174976
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:29.424377+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:30.432969+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:31.433155+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:32.433300+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:33.433458+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179945 data_alloc: 218103808 data_used: 2174976
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:34.433612+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:35.433756+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:36.434004+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:37.434465+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:38.434742+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179945 data_alloc: 218103808 data_used: 2174976
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:39.435001+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:40.435279+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:41.435518+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:42.435668+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:43.435883+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179945 data_alloc: 218103808 data_used: 2174976
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338345984 unmapped: 80060416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:44.436138+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338354176 unmapped: 80052224 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:45.436308+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e4c48000/0x0/0x4ffc00000, data 0x55ff535/0x57a3000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338354176 unmapped: 80052224 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:46.436465+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338354176 unmapped: 80052224 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:47.436659+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c83800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 73.500244141s of 73.750411987s, submitted: 48
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _renew_subs
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 300 handle_osd_map epochs [300,300], i have 300, src has [1,300]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 300 ms_handle_reset con 0x55f929c83800 session 0x55f92a9deb40
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338362368 unmapped: 80044032 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:48.436876+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183351 data_alloc: 218103808 data_used: 2183168
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338362368 unmapped: 80044032 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e4cc7000/0x0/0x4ffc00000, data 0x5601106/0x57a6000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:49.437082+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338362368 unmapped: 80044032 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:50.437253+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338362368 unmapped: 80044032 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:51.437418+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e4cc7000/0x0/0x4ffc00000, data 0x5601106/0x57a6000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338362368 unmapped: 80044032 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:52.437602+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 80035840 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:53.441137+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183351 data_alloc: 218103808 data_used: 2183168
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 80035840 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:54.441287+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e4cc7000/0x0/0x4ffc00000, data 0x5601106/0x57a6000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1578f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _renew_subs
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340467712 unmapped: 77938688 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:55.441474+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340467712 unmapped: 77938688 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:56.441637+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 77930496 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:57.441764+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 77930496 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:58.441871+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b24000/0x0/0x4ffc00000, data 0x5602b69/0x57a9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186485 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 77930496 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:45:59.441998+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 77930496 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b24000/0x0/0x4ffc00000, data 0x5602b69/0x57a9000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:00.442258+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.944168091s of 13.069328308s, submitted: 58
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 77922304 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:01.442890+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 77922304 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:02.443687+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 77922304 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:03.443875+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 77922304 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:04.444077+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340492288 unmapped: 77914112 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:05.444457+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340492288 unmapped: 77914112 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:06.444640+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340492288 unmapped: 77914112 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:07.444908+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340492288 unmapped: 77914112 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:08.445062+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 77905920 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:09.445213+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 77905920 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:10.445406+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 77897728 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:11.445551+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 77897728 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:12.445709+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 77897728 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:13.445930+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 77897728 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:14.446302+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 77897728 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:15.446414+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 77897728 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:16.446625+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 77889536 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:17.446900+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 77889536 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:18.447044+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:19.447226+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 77889536 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:20.447440+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 77889536 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:21.447688+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 77889536 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:22.447913+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340525056 unmapped: 77881344 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:23.448140+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340525056 unmapped: 77881344 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:24.448325+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340525056 unmapped: 77881344 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:25.448500+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 77873152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.982421875s of 24.990554810s, submitted: 2
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:26.448716+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 77873152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:27.448862+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 77873152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:28.448946+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 77873152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186145 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:29.449110+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 77873152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:30.449306+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 77873152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:31.449508+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 77873152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:32.449660+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 77873152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:33.449877+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340541440 unmapped: 77864960 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:34.450033+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340541440 unmapped: 77864960 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:35.450216+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 77856768 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:36.450457+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 77856768 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:37.450605+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 77856768 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:38.450810+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 77856768 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:39.451016+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 77856768 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:40.451240+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 77856768 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:41.451395+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340557824 unmapped: 77848576 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:42.451683+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340557824 unmapped: 77848576 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:43.451945+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340557824 unmapped: 77848576 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:44.452110+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340557824 unmapped: 77848576 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:45.452247+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340557824 unmapped: 77848576 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:46.452383+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340566016 unmapped: 77840384 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:47.452524+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340566016 unmapped: 77840384 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:48.452629+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340566016 unmapped: 77840384 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:49.452736+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340574208 unmapped: 77832192 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:50.452939+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340574208 unmapped: 77832192 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:51.453143+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340574208 unmapped: 77832192 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:52.453291+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340574208 unmapped: 77832192 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:53.453432+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340582400 unmapped: 77824000 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:54.453586+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340582400 unmapped: 77824000 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:55.453768+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340582400 unmapped: 77824000 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:56.453956+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340582400 unmapped: 77824000 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:57.454105+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340590592 unmapped: 77815808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:58.454264+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340590592 unmapped: 77815808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:46:59.454433+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340590592 unmapped: 77815808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:00.454639+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340590592 unmapped: 77815808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:01.454879+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340590592 unmapped: 77815808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:02.455065+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340590592 unmapped: 77815808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:03.455204+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340590592 unmapped: 77815808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:04.455373+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340590592 unmapped: 77815808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:05.455561+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340598784 unmapped: 77807616 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:06.455745+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340598784 unmapped: 77807616 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:07.455918+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340598784 unmapped: 77807616 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:08.456096+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340606976 unmapped: 77799424 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:09.456267+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340606976 unmapped: 77799424 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:10.456435+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340606976 unmapped: 77799424 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:11.456559+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340606976 unmapped: 77799424 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:12.456694+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340606976 unmapped: 77799424 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:13.456951+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340615168 unmapped: 77791232 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:14.457087+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340615168 unmapped: 77791232 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:15.457253+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340615168 unmapped: 77791232 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:16.457408+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340615168 unmapped: 77791232 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:17.457530+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340623360 unmapped: 77783040 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:18.457676+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340623360 unmapped: 77783040 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:19.457804+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340631552 unmapped: 77774848 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:20.457960+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340631552 unmapped: 77774848 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:21.458111+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340631552 unmapped: 77774848 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:22.458253+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340631552 unmapped: 77774848 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:23.458378+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340631552 unmapped: 77774848 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:24.458519+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340639744 unmapped: 77766656 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:25.458769+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340639744 unmapped: 77766656 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:26.458935+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340639744 unmapped: 77766656 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:27.459157+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340647936 unmapped: 77758464 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:28.459310+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340647936 unmapped: 77758464 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:29.459516+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 77750272 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:30.459763+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 77750272 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:31.459948+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 77750272 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:32.460092+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 77750272 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:33.460217+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 77750272 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:34.460387+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 77750272 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:35.460537+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340664320 unmapped: 77742080 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:36.460736+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340664320 unmapped: 77742080 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:37.460934+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340672512 unmapped: 77733888 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:38.461134+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340672512 unmapped: 77733888 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:39.461305+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340672512 unmapped: 77733888 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:40.461556+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340672512 unmapped: 77733888 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:41.461771+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340672512 unmapped: 77733888 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:42.461939+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 77725696 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:43.462081+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 77725696 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:44.462237+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 77725696 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:45.462405+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 77717504 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:46.462685+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 77717504 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:47.462888+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340688896 unmapped: 77717504 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:48.463071+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 77701120 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:49.463270+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 77701120 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:50.463522+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 77701120 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:51.463671+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340705280 unmapped: 77701120 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a5fd000
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 ms_handle_reset con 0x55f92a5fd000 session 0x55f92a9cf680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a606800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 ms_handle_reset con 0x55f92a606800 session 0x55f92a28f4a0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:52.463869+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340713472 unmapped: 77692928 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:53.464055+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340713472 unmapped: 77692928 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:54.464201+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340713472 unmapped: 77692928 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:55.464346+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340713472 unmapped: 77692928 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:56.464515+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340713472 unmapped: 77692928 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:57.464668+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340713472 unmapped: 77692928 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:58.464879+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340721664 unmapped: 77684736 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:47:59.465039+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4185969 data_alloc: 218103808 data_used: 2187264
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340721664 unmapped: 77684736 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:00.465242+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340721664 unmapped: 77684736 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a333800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:01.465396+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340729856 unmapped: 77676544 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 96.535545349s of 96.545166016s, submitted: 2
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e3b22000/0x0/0x4ffc00000, data 0x5605b69/0x57ac000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:02.465557+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 302 ms_handle_reset con 0x55f92a333800 session 0x55f92a9cfc20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340754432 unmapped: 77651968 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:03.465737+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340754432 unmapped: 77651968 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:04.465913+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4188918 data_alloc: 218103808 data_used: 2191360
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340754432 unmapped: 77651968 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e3b1f000/0x0/0x4ffc00000, data 0x560772a/0x57ae000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92cb23c00
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:05.466081+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e3b1f000/0x0/0x4ffc00000, data 0x560772a/0x57ae000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340762624 unmapped: 77643776 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _renew_subs
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 302 handle_osd_map epochs [303,303], i have 303, src has [1,303]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:06.466240+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 303 ms_handle_reset con 0x55f92cb23c00 session 0x55f927bbf680
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340770816 unmapped: 77635584 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:07.466385+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340770816 unmapped: 77635584 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:08.466555+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340779008 unmapped: 77627392 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:09.466714+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4191892 data_alloc: 218103808 data_used: 2191360
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 303 heartbeat osd_stat(store_statfs(0x4e3b1c000/0x0/0x4ffc00000, data 0x56092fb/0x57b1000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340811776 unmapped: 77594624 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:10.466910+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340811776 unmapped: 77594624 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:11.467074+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340811776 unmapped: 77594624 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:12.467243+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340819968 unmapped: 77586432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:13.467399+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340819968 unmapped: 77586432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:14.467595+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e3b19000/0x0/0x4ffc00000, data 0x560ad7a/0x57b4000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4194866 data_alloc: 218103808 data_used: 2191360
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340819968 unmapped: 77586432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:15.467751+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340819968 unmapped: 77586432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:16.467918+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e3b19000/0x0/0x4ffc00000, data 0x560ad7a/0x57b4000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 304 handle_osd_map epochs [305,305], i have 305, src has [1,305]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.562905312s of 14.665749550s, submitted: 38
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340836352 unmapped: 77570048 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:17.468063+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340836352 unmapped: 77570048 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:18.468229+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340836352 unmapped: 77570048 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:19.468386+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4197840 data_alloc: 218103808 data_used: 2191360
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340844544 unmapped: 77561856 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:20.468597+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340852736 unmapped: 77553664 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:21.468761+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c83800
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _renew_subs
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 ms_handle_reset con 0x55f929c83800 session 0x55f928000d20
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 77537280 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:22.468900+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 77537280 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:23.469051+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340869120 unmapped: 77537280 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:24.469244+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204435 data_alloc: 218103808 data_used: 2199552
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340877312 unmapped: 77529088 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:25.469370+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340893696 unmapped: 77512704 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:26.469510+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340893696 unmapped: 77512704 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:27.469634+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340893696 unmapped: 77512704 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:28.469763+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340901888 unmapped: 77504512 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:29.469907+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340901888 unmapped: 77504512 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:30.470136+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340901888 unmapped: 77504512 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:31.470282+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340901888 unmapped: 77504512 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:32.470450+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 77496320 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:33.470701+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 77496320 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:34.470878+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340910080 unmapped: 77496320 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:35.471073+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340918272 unmapped: 77488128 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:36.471226+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340918272 unmapped: 77488128 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:37.471398+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340918272 unmapped: 77488128 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:38.471552+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340918272 unmapped: 77488128 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:39.471709+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340918272 unmapped: 77488128 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:40.471907+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340926464 unmapped: 77479936 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:41.472060+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340926464 unmapped: 77479936 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:42.472203+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340926464 unmapped: 77479936 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:43.472404+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340926464 unmapped: 77479936 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:44.472557+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 77471744 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:45.472749+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340934656 unmapped: 77471744 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:46.472914+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340942848 unmapped: 77463552 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:47.473082+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340942848 unmapped: 77463552 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:48.473229+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340951040 unmapped: 77455360 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:49.473388+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340951040 unmapped: 77455360 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:50.473553+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340951040 unmapped: 77455360 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:51.473698+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340951040 unmapped: 77455360 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:52.473874+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340959232 unmapped: 77447168 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:53.474023+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340959232 unmapped: 77447168 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:54.474165+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340959232 unmapped: 77447168 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:55.474322+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340967424 unmapped: 77438976 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:56.474463+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340975616 unmapped: 77430784 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:57.474618+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340975616 unmapped: 77430784 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:58.474760+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340975616 unmapped: 77430784 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:48:59.474867+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340975616 unmapped: 77430784 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:00.475045+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340975616 unmapped: 77430784 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:01.475497+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340975616 unmapped: 77430784 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:02.475623+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340975616 unmapped: 77430784 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:03.475802+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340975616 unmapped: 77430784 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:04.475963+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340983808 unmapped: 77422592 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:05.476340+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340983808 unmapped: 77422592 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:06.477008+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340983808 unmapped: 77422592 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:07.477176+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340992000 unmapped: 77414400 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:08.477399+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340992000 unmapped: 77414400 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:09.477555+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 340992000 unmapped: 77414400 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:10.477734+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341000192 unmapped: 77406208 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:11.477866+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341008384 unmapped: 77398016 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:12.478013+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341008384 unmapped: 77398016 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:13.478172+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341008384 unmapped: 77398016 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:14.478364+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341008384 unmapped: 77398016 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:15.478523+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341016576 unmapped: 77389824 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:16.478765+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341024768 unmapped: 77381632 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:17.479180+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341024768 unmapped: 77381632 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:18.479336+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341032960 unmapped: 77373440 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:19.479511+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341032960 unmapped: 77373440 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:20.479693+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341041152 unmapped: 77365248 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:21.479915+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341041152 unmapped: 77365248 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:22.480089+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341041152 unmapped: 77365248 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:23.480463+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341041152 unmapped: 77365248 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:24.480727+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341041152 unmapped: 77365248 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:25.481031+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341041152 unmapped: 77365248 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:26.481197+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341041152 unmapped: 77365248 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:27.481295+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341041152 unmapped: 77365248 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:28.481460+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341049344 unmapped: 77357056 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:29.481702+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341049344 unmapped: 77357056 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:30.481932+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341057536 unmapped: 77348864 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:31.482109+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341057536 unmapped: 77348864 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:32.482314+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341057536 unmapped: 77348864 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:33.482477+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341057536 unmapped: 77348864 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:34.482622+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341065728 unmapped: 77340672 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:35.497667+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341065728 unmapped: 77340672 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:36.497806+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341090304 unmapped: 77316096 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:37.497982+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341090304 unmapped: 77316096 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:38.498122+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341090304 unmapped: 77316096 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:39.498283+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341090304 unmapped: 77316096 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:40.498484+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341098496 unmapped: 77307904 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:41.498647+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341098496 unmapped: 77307904 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:42.498877+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341098496 unmapped: 77307904 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:43.499037+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341098496 unmapped: 77307904 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:44.499238+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341106688 unmapped: 77299712 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:45.499334+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341106688 unmapped: 77299712 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:46.499475+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341106688 unmapped: 77299712 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:47.499677+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341106688 unmapped: 77299712 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:48.499868+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341114880 unmapped: 77291520 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:49.500069+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341114880 unmapped: 77291520 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:50.500287+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341114880 unmapped: 77291520 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:51.500454+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341131264 unmapped: 77275136 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:52.500613+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341131264 unmapped: 77275136 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:53.500771+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341131264 unmapped: 77275136 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:54.500965+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341131264 unmapped: 77275136 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:55.501122+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341131264 unmapped: 77275136 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:56.501334+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341147648 unmapped: 77258752 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:57.501492+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341147648 unmapped: 77258752 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:58.501621+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341147648 unmapped: 77258752 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:49:59.501943+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:12 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:12 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341147648 unmapped: 77258752 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:00.502182+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341164032 unmapped: 77242368 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:12 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:01.502372+0000)
Oct 11 10:01:12 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341164032 unmapped: 77242368 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:02.502580+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341172224 unmapped: 77234176 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:03.502771+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e3b12000/0x0/0x4ffc00000, data 0x560e46b/0x57bb000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341172224 unmapped: 77234176 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:04.502960+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4204915 data_alloc: 218103808 data_used: 2211840
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341172224 unmapped: 77234176 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:05.503143+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341172224 unmapped: 77234176 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:06.503314+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a333800
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 110.427284241s of 110.483917236s, submitted: 22
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341196800 unmapped: 77209600 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:07.503462+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 306 ms_handle_reset con 0x55f92a333800 session 0x55f927bb8780
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341204992 unmapped: 77201408 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:08.503617+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a5fd000
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _renew_subs
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341221376 unmapped: 77185024 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:09.503888+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e369e000/0x0/0x4ffc00000, data 0x5a8000b/0x5c2f000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4295870 data_alloc: 218103808 data_used: 2220032
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 307 handle_osd_map epochs [307,308], i have 307, src has [1,308]
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 308 ms_handle_reset con 0x55f92a5fd000 session 0x55f92a885e00
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341229568 unmapped: 77176832 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:10.504390+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341229568 unmapped: 77176832 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:11.504526+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341237760 unmapped: 77168640 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:12.504683+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e2e99000/0x0/0x4ffc00000, data 0x6281bab/0x6433000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341237760 unmapped: 77168640 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:13.504813+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341237760 unmapped: 77168640 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:14.505031+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4301276 data_alloc: 218103808 data_used: 2220032
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341245952 unmapped: 77160448 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:15.505255+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341245952 unmapped: 77160448 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:16.505439+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e2e99000/0x0/0x4ffc00000, data 0x6281bab/0x6433000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341262336 unmapped: 77144064 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:17.505560+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341262336 unmapped: 77144064 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:18.505703+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341262336 unmapped: 77144064 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:19.505903+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e2e99000/0x0/0x4ffc00000, data 0x6281bab/0x6433000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4301276 data_alloc: 218103808 data_used: 2220032
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341262336 unmapped: 77144064 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:20.506116+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341262336 unmapped: 77144064 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:21.506263+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341262336 unmapped: 77144064 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:22.506389+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341262336 unmapped: 77144064 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:23.506552+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341262336 unmapped: 77144064 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:24.506700+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4301276 data_alloc: 218103808 data_used: 2220032
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341278720 unmapped: 77127680 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:25.506871+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e2e99000/0x0/0x4ffc00000, data 0x6281bab/0x6433000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341278720 unmapped: 77127680 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:26.506993+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341278720 unmapped: 77127680 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:27.507160+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341278720 unmapped: 77127680 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:28.507294+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341278720 unmapped: 77127680 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:29.507466+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e2e99000/0x0/0x4ffc00000, data 0x6281bab/0x6433000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4301276 data_alloc: 218103808 data_used: 2220032
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341278720 unmapped: 77127680 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:30.507664+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:31.507916+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341286912 unmapped: 77119488 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:32.508090+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341286912 unmapped: 77119488 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:33.508310+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341286912 unmapped: 77119488 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:34.508477+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341286912 unmapped: 77119488 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4301276 data_alloc: 218103808 data_used: 2220032
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:35.508619+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341286912 unmapped: 77119488 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e2e99000/0x0/0x4ffc00000, data 0x6281bab/0x6433000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:36.508773+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 77111296 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:37.508934+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341295104 unmapped: 77111296 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e2e99000/0x0/0x4ffc00000, data 0x6281bab/0x6433000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:38.509085+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341303296 unmapped: 77103104 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a606800
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.519218445s of 31.611387253s, submitted: 19
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 308 heartbeat osd_stat(store_statfs(0x4e2e99000/0x0/0x4ffc00000, data 0x6281bab/0x6433000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [0,0,1])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _renew_subs
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:39.509283+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341311488 unmapped: 77094912 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 309 ms_handle_reset con 0x55f92a606800 session 0x55f92a9def00
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 309 heartbeat osd_stat(store_statfs(0x4e2e9b000/0x0/0x4ffc00000, data 0x6281bab/0x6433000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4302086 data_alloc: 218103808 data_used: 2228224
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:40.509517+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341311488 unmapped: 77094912 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:41.509688+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341311488 unmapped: 77094912 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:42.509827+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341311488 unmapped: 77094912 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:43.509992+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341311488 unmapped: 77094912 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:44.510164+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341311488 unmapped: 77094912 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _renew_subs
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305060 data_alloc: 218103808 data_used: 2228224
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:45.510349+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341327872 unmapped: 77078528 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e94000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:46.510502+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341327872 unmapped: 77078528 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:47.510687+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341336064 unmapped: 77070336 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:48.510869+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341336064 unmapped: 77070336 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:49.511039+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341344256 unmapped: 77062144 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305060 data_alloc: 218103808 data_used: 2228224
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:50.511202+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341344256 unmapped: 77062144 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:51.511350+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341344256 unmapped: 77062144 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:52.511517+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341344256 unmapped: 77062144 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:53.511660+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341344256 unmapped: 77062144 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:54.511869+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341344256 unmapped: 77062144 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305060 data_alloc: 218103808 data_used: 2228224
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:55.512069+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341352448 unmapped: 77053952 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:56.512197+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341352448 unmapped: 77053952 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:57.512365+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341368832 unmapped: 77037568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:58.512520+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341368832 unmapped: 77037568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:50:59.512679+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341368832 unmapped: 77037568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:00.512903+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341368832 unmapped: 77037568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:01.513080+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341368832 unmapped: 77037568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:02.513242+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341368832 unmapped: 77037568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:03.513444+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341368832 unmapped: 77037568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:04.513610+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341368832 unmapped: 77037568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:05.513788+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341377024 unmapped: 77029376 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:06.513965+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341377024 unmapped: 77029376 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:07.514140+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341377024 unmapped: 77029376 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:08.514396+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341377024 unmapped: 77029376 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:09.514601+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341377024 unmapped: 77029376 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:10.514870+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341377024 unmapped: 77029376 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:11.515077+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341377024 unmapped: 77029376 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:12.515243+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341377024 unmapped: 77029376 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:13.515433+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341385216 unmapped: 77021184 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:14.515597+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341385216 unmapped: 77021184 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:15.515732+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341385216 unmapped: 77021184 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:16.515905+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341401600 unmapped: 77004800 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:17.516050+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341401600 unmapped: 77004800 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:18.516224+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341401600 unmapped: 77004800 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:19.516383+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341401600 unmapped: 77004800 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:20.516625+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341401600 unmapped: 77004800 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:21.516875+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341409792 unmapped: 76996608 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:22.517052+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341409792 unmapped: 76996608 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:23.517252+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341417984 unmapped: 76988416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:24.517432+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341426176 unmapped: 76980224 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.73 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1586 writes, 4965 keys, 1586 commit groups, 1.0 writes per commit group, ingest: 3.76 MB, 0.01 MB/s
                                           Interval WAL: 1586 writes, 675 syncs, 2.35 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f926711090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f926711090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f926711090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9267111f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:25.517638+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341426176 unmapped: 76980224 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:26.517909+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341426176 unmapped: 76980224 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:27.518139+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341434368 unmapped: 76972032 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:28.518346+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341434368 unmapped: 76972032 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:29.518537+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 76963840 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:30.518960+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 76963840 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:31.519184+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 76963840 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:32.519358+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 76963840 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:33.519604+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 76963840 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:34.519914+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 76963840 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:35.520106+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 76963840 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:36.520265+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341442560 unmapped: 76963840 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:37.520430+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341458944 unmapped: 76947456 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:38.520602+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 76939264 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:39.520942+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341467136 unmapped: 76939264 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:40.521203+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 76931072 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:41.521626+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 76931072 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:42.521957+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 76931072 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:43.522123+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 76931072 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:44.522278+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 76931072 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:45.522385+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 76914688 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:46.522552+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 76914688 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:47.522733+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 76914688 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:48.522862+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341499904 unmapped: 76906496 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:49.523055+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341499904 unmapped: 76906496 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:50.523320+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341499904 unmapped: 76906496 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:51.523488+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 76898304 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:52.523671+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 76898304 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:53.523871+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341524480 unmapped: 76881920 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:54.524029+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341524480 unmapped: 76881920 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:55.524201+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341524480 unmapped: 76881920 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:56.524363+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341524480 unmapped: 76881920 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:57.524530+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341524480 unmapped: 76881920 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:58.524659+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341524480 unmapped: 76881920 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:51:59.524798+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341524480 unmapped: 76881920 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:00.525020+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341524480 unmapped: 76881920 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:01.525170+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 76857344 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:02.525359+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 76857344 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:03.525556+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 76857344 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:04.525723+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 76857344 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:05.525958+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341549056 unmapped: 76857344 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:06.526133+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341557248 unmapped: 76849152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:07.526327+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341557248 unmapped: 76849152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:08.526488+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341565440 unmapped: 76840960 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:09.526615+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 76832768 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:10.526802+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 76832768 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:11.527072+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 76832768 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:12.527245+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 76832768 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:13.527416+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 76832768 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:14.527597+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 76832768 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:15.527747+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341581824 unmapped: 76824576 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:16.527900+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 76816384 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:17.528090+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 76816384 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:18.528245+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 76816384 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:19.528395+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 76816384 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:20.528620+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341606400 unmapped: 76800000 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:21.528768+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 76791808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:22.528922+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 76791808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:23.529072+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 76791808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:24.529234+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 76791808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:25.529372+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 76783616 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:26.529517+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 76783616 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:27.529672+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 76783616 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:28.529858+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 76783616 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:29.530032+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 76783616 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:30.530286+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 76783616 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:31.530434+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 76783616 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:32.530645+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 76783616 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:33.530894+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341639168 unmapped: 76767232 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:34.531083+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341639168 unmapped: 76767232 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:35.531298+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341647360 unmapped: 76759040 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:36.531479+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341647360 unmapped: 76759040 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:37.531610+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341647360 unmapped: 76759040 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:38.531778+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341647360 unmapped: 76759040 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:39.531996+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341647360 unmapped: 76759040 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:40.532183+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341655552 unmapped: 76750848 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:41.532323+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341655552 unmapped: 76750848 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:42.532501+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341655552 unmapped: 76750848 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:43.532702+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341655552 unmapped: 76750848 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:44.532909+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341663744 unmapped: 76742656 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:45.533140+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 76734464 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:46.533299+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 76734464 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:47.533496+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 76734464 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:48.533680+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 76718080 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:49.533934+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 76718080 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:50.534134+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 76718080 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:51.534299+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 76718080 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:52.534443+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 76718080 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:53.534605+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 76718080 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:54.534810+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 76718080 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:55.535059+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341688320 unmapped: 76718080 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:56.535221+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 76709888 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:57.535496+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 76709888 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:58.535690+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341696512 unmapped: 76709888 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:52:59.535894+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e95000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 76693504 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:00.536075+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305220 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 76693504 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 142.217010498s of 142.342285156s, submitted: 45
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:01.536234+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341712896 unmapped: 76693504 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:02.536384+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341762048 unmapped: 76644352 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:03.536575+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341778432 unmapped: 76627968 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:04.536732+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341778432 unmapped: 76627968 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:05.536886+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341778432 unmapped: 76627968 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:06.537039+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341778432 unmapped: 76627968 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:07.537214+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341778432 unmapped: 76627968 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:08.537371+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341778432 unmapped: 76627968 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:09.537535+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341778432 unmapped: 76627968 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:10.537790+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341786624 unmapped: 76619776 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:11.538066+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341786624 unmapped: 76619776 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:12.538258+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341786624 unmapped: 76619776 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:13.538379+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341786624 unmapped: 76619776 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:14.538558+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341786624 unmapped: 76619776 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:15.538788+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341786624 unmapped: 76619776 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:16.538919+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 76611584 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:17.539067+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 76611584 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:18.539205+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 76611584 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:19.539385+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 76611584 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:20.539614+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 76611584 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:21.539810+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341794816 unmapped: 76611584 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:22.540035+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341803008 unmapped: 76603392 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:23.540227+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341803008 unmapped: 76603392 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:24.540402+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341803008 unmapped: 76603392 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:25.540554+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341803008 unmapped: 76603392 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:26.540712+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341803008 unmapped: 76603392 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:27.540918+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341803008 unmapped: 76603392 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:28.541113+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 76595200 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:29.541435+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 76595200 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:30.541778+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 76595200 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:31.542368+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 76595200 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:32.542641+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 76595200 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:33.543218+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 76595200 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:34.543390+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341811200 unmapped: 76595200 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:35.543901+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341819392 unmapped: 76587008 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:36.544160+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341819392 unmapped: 76587008 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:37.544440+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341819392 unmapped: 76587008 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:38.544615+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341819392 unmapped: 76587008 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:39.544923+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341819392 unmapped: 76587008 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:40.545212+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341827584 unmapped: 76578816 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:41.545375+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341827584 unmapped: 76578816 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:42.545576+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341835776 unmapped: 76570624 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:43.546014+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341835776 unmapped: 76570624 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:44.546222+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341843968 unmapped: 76562432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:45.546494+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341843968 unmapped: 76562432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:46.546720+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341843968 unmapped: 76562432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:47.546950+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341843968 unmapped: 76562432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:48.547127+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341843968 unmapped: 76562432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:49.547322+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341843968 unmapped: 76562432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:50.547543+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341843968 unmapped: 76562432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:51.547656+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341843968 unmapped: 76562432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:52.547884+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341852160 unmapped: 76554240 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:53.548190+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341852160 unmapped: 76554240 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:54.548433+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341852160 unmapped: 76554240 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:55.548643+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341860352 unmapped: 76546048 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:56.548902+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 76537856 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:57.549066+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341868544 unmapped: 76537856 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:58.549229+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341876736 unmapped: 76529664 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:53:59.549392+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341876736 unmapped: 76529664 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:00.549591+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 76521472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:01.549883+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 76521472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:02.550071+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 76521472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:03.550248+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 76521472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:04.550654+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341884928 unmapped: 76521472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:05.551448+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 76513280 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:06.551633+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341893120 unmapped: 76513280 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:07.552303+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 76505088 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:08.552525+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 76505088 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:09.552689+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 76505088 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:10.552901+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 76505088 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:11.553054+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341901312 unmapped: 76505088 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:12.553203+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341909504 unmapped: 76496896 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:13.553346+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341909504 unmapped: 76496896 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:14.553985+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341909504 unmapped: 76496896 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:15.554077+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341909504 unmapped: 76496896 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:16.554231+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 76480512 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:17.554369+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 76480512 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:18.554522+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 76480512 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:19.554657+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341925888 unmapped: 76480512 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:20.554916+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 76472320 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:21.555061+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 76472320 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:22.555215+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 76472320 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:23.555348+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 76472320 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:24.555482+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 76455936 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:25.555627+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 76455936 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:26.555755+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 76455936 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:27.555904+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 76455936 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:28.556044+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 76455936 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:29.556180+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 76455936 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:30.556354+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 76455936 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:31.556491+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341950464 unmapped: 76455936 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:32.556675+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 76447744 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:33.556893+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 76447744 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:34.557061+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 76447744 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:35.557294+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341958656 unmapped: 76447744 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:36.557932+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 76439552 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:37.558171+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 76439552 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:38.558694+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 76439552 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:39.559212+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341966848 unmapped: 76439552 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:40.559710+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 76431360 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:41.560052+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341975040 unmapped: 76431360 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:42.560470+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 76423168 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:43.560628+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 76423168 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:44.560956+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 76423168 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:45.561221+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 76414976 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:46.561535+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 341999616 unmapped: 76406784 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:47.561796+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 76398592 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:48.561990+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342007808 unmapped: 76398592 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:49.562234+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 76390400 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:50.562543+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 76390400 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:51.562770+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 76390400 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:52.562907+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 76390400 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:53.563122+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 76390400 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:54.563327+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342016000 unmapped: 76390400 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:55.563506+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 76382208 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:56.563718+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 76382208 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:57.563947+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 76382208 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:58.564136+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342024192 unmapped: 76382208 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:54:59.564323+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342032384 unmapped: 76374016 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:00.564554+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 76365824 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:01.564703+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 76365824 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:02.564910+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 76365824 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:03.565083+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342040576 unmapped: 76365824 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:04.565244+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 76357632 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:05.565464+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342048768 unmapped: 76357632 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:06.565637+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 76349440 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:07.565794+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 76349440 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:08.565912+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 76349440 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:09.566059+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 76349440 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:10.566256+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:11.566376+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 76349440 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:12.566541+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342056960 unmapped: 76349440 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:13.566669+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 76341248 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:14.566863+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 76341248 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:15.566983+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342065152 unmapped: 76341248 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:16.567136+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 76324864 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:17.567309+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 76324864 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:18.567466+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 76324864 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:19.567638+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 76324864 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:20.567878+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 76324864 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:21.568053+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 76324864 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:22.568200+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 76324864 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:23.568361+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342081536 unmapped: 76324864 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:24.568522+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 76316672 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:25.568645+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 76316672 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:26.568882+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 76316672 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:27.569011+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 76316672 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:28.569182+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342089728 unmapped: 76316672 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:29.569336+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 76300288 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:30.569543+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 76300288 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:31.569730+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 76300288 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:32.569918+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 76300288 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:33.570041+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 76300288 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:34.570188+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 76300288 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:35.570354+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 76300288 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:36.570545+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342106112 unmapped: 76300288 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:37.570732+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342114304 unmapped: 76292096 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:38.570897+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342114304 unmapped: 76292096 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:39.571057+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342114304 unmapped: 76292096 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:40.571295+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342114304 unmapped: 76292096 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:41.572081+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342114304 unmapped: 76292096 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:42.572798+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342114304 unmapped: 76292096 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:43.573083+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 76283904 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:44.573528+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342122496 unmapped: 76283904 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:45.573736+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 76275712 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:46.574023+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 76275712 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:47.574399+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 76275712 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:48.574623+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342138880 unmapped: 76267520 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:49.574792+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342138880 unmapped: 76267520 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:50.575041+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342138880 unmapped: 76267520 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:51.575246+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342138880 unmapped: 76267520 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:52.575592+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342138880 unmapped: 76267520 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:53.575790+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 76259328 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:54.576064+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 76259328 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:55.576342+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 76259328 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:56.576602+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342147072 unmapped: 76259328 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:57.576988+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342155264 unmapped: 76251136 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:58.577136+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342155264 unmapped: 76251136 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:55:59.577391+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 76234752 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:00.577670+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342171648 unmapped: 76234752 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:01.577803+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 76226560 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:02.577977+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 76226560 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:03.578145+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 76226560 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:04.578380+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 76226560 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:05.578570+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 76226560 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:06.604105+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 76226560 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:07.604303+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342179840 unmapped: 76226560 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:08.604436+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342188032 unmapped: 76218368 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:09.604647+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 76210176 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:10.604892+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 76210176 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:11.605066+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 76210176 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:12.605220+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 76210176 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:13.605368+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342196224 unmapped: 76210176 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:14.605540+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342204416 unmapped: 76201984 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:15.605653+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 76193792 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:16.605807+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342212608 unmapped: 76193792 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:17.606013+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342220800 unmapped: 76185600 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:18.606177+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342220800 unmapped: 76185600 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:19.606328+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342220800 unmapped: 76185600 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:20.606547+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342220800 unmapped: 76185600 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:21.606707+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342220800 unmapped: 76185600 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:22.606898+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342220800 unmapped: 76185600 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:23.607051+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342220800 unmapped: 76185600 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:24.607216+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342220800 unmapped: 76185600 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:25.607348+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342237184 unmapped: 76169216 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:26.607560+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342237184 unmapped: 76169216 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:27.607719+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342245376 unmapped: 76161024 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:28.607861+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342253568 unmapped: 76152832 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:29.608025+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342253568 unmapped: 76152832 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:30.608176+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342253568 unmapped: 76152832 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:31.608317+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342253568 unmapped: 76152832 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:32.608471+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 76144640 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:33.608608+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 76144640 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:34.608765+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 76144640 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:35.608914+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342261760 unmapped: 76144640 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:36.609051+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 76136448 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:37.609283+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 ms_handle_reset con 0x55f92882f000 session 0x55f92a9965a0
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f929c83800
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 76136448 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:38.609475+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 76128256 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:39.609706+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 76128256 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:40.609966+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342286336 unmapped: 76120064 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:41.610149+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342294528 unmapped: 76111872 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:42.610307+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342294528 unmapped: 76111872 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:43.610479+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342294528 unmapped: 76111872 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:44.610636+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342294528 unmapped: 76111872 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:45.611298+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342294528 unmapped: 76111872 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:46.611555+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 76103680 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:47.611715+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 76103680 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:48.611976+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 76103680 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:49.612441+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 76103680 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:50.612749+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 76103680 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:51.613208+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342302720 unmapped: 76103680 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:52.613398+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 76095488 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:53.613998+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342310912 unmapped: 76095488 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:54.614172+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 76087296 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:55.614477+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342319104 unmapped: 76087296 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:56.614684+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 76079104 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:57.614891+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 76079104 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:58.615059+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 76079104 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:56:59.615198+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 76079104 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:00.615430+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 76079104 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:01.615765+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342327296 unmapped: 76079104 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:02.615921+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342343680 unmapped: 76062720 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:03.616148+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342343680 unmapped: 76062720 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:04.616360+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342343680 unmapped: 76062720 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:05.616610+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342343680 unmapped: 76062720 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:06.616789+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342343680 unmapped: 76062720 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:07.617026+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342343680 unmapped: 76062720 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:08.617243+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342351872 unmapped: 76054528 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:09.617513+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342351872 unmapped: 76054528 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:10.617763+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342351872 unmapped: 76054528 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:11.617998+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342351872 unmapped: 76054528 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:12.618179+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342351872 unmapped: 76054528 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:13.618320+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342351872 unmapped: 76054528 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:14.618471+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342351872 unmapped: 76054528 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:15.618626+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342351872 unmapped: 76054528 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:16.618807+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342368256 unmapped: 76038144 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:17.619201+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342368256 unmapped: 76038144 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:18.619391+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342368256 unmapped: 76038144 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:19.619989+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342368256 unmapped: 76038144 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:20.620338+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 76029952 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:21.620523+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 76029952 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:22.620731+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 76029952 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:23.621013+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 76029952 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:24.621161+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 76029952 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:25.621319+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 76029952 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:26.621529+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342384640 unmapped: 76021760 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:27.621726+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342384640 unmapped: 76021760 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:28.621935+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 76013568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:29.622122+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 76013568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:30.622368+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 76013568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:31.622543+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 76013568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:32.622740+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 76013568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:33.622902+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 76013568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:34.623106+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 76013568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:35.623267+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342392832 unmapped: 76013568 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:36.623520+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342409216 unmapped: 75997184 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:37.623743+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342409216 unmapped: 75997184 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:38.623874+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342409216 unmapped: 75997184 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:39.624049+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342409216 unmapped: 75997184 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:40.624322+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342425600 unmapped: 75980800 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:41.624544+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304340 data_alloc: 218103808 data_used: 2232320
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342425600 unmapped: 75980800 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:42.624717+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342425600 unmapped: 75980800 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:43.624894+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342425600 unmapped: 75980800 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:44.625086+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342441984 unmapped: 75964416 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a333800
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:45.625359+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 heartbeat osd_stat(store_statfs(0x4e2e96000/0x0/0x4ffc00000, data 0x62850ce/0x6438000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 310 handle_osd_map epochs [310,311], i have 310, src has [1,311]
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 284.289855957s of 284.663665771s, submitted: 106
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 75956224 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 311 ms_handle_reset con 0x55f92a333800 session 0x55f9281eb4a0
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:46.625558+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4278650 data_alloc: 218103808 data_used: 2240512
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342450176 unmapped: 75956224 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:47.625732+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a5fd000
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342458368 unmapped: 75948032 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:48.625910+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 311 handle_osd_map epochs [311,312], i have 311, src has [1,312]
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 312 ms_handle_reset con 0x55f92a5fd000 session 0x55f92a4d0d20
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342499328 unmapped: 75907072 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:49.626052+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 312 handle_osd_map epochs [312,313], i have 312, src has [1,313]
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342499328 unmapped: 75907072 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e3b00000/0x0/0x4ffc00000, data 0x561882a/0x57cc000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:50.626287+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342507520 unmapped: 75898880 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:51.626752+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4230444 data_alloc: 218103808 data_used: 2240512
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 313 heartbeat osd_stat(store_statfs(0x4e3afc000/0x0/0x4ffc00000, data 0x561a2a9/0x57cf000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x1692f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342507520 unmapped: 75898880 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:52.626896+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 313 ms_handle_reset con 0x55f92a332400 session 0x55f928066960
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a606800
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342515712 unmapped: 75890688 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a332400
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:53.627049+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _renew_subs
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 314 ms_handle_reset con 0x55f92a332400 session 0x55f928ab1e00
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342523904 unmapped: 75882496 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 314 heartbeat osd_stat(store_statfs(0x4e2eef000/0x0/0x4ffc00000, data 0x5e1a2a9/0x5fcf000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:54.627230+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 314 handle_osd_map epochs [314,315], i have 314, src has [1,315]
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 75866112 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:55.627390+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2eeb000/0x0/0x4ffc00000, data 0x5e1be42/0x5fd2000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 75866112 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:56.627580+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292209 data_alloc: 218103808 data_used: 2248704
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 75866112 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:57.627763+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 75866112 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:58.627937+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 75866112 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:57:59.628105+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342540288 unmapped: 75866112 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:00.628289+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342548480 unmapped: 75857920 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:01.628496+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292209 data_alloc: 218103808 data_used: 2248704
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342548480 unmapped: 75857920 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:02.628642+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342548480 unmapped: 75857920 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:03.628869+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342548480 unmapped: 75857920 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:04.629044+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342548480 unmapped: 75857920 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:05.629185+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342548480 unmapped: 75857920 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:06.629346+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292369 data_alloc: 218103808 data_used: 2252800
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342556672 unmapped: 75849728 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:07.629499+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342556672 unmapped: 75849728 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:08.629689+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 75841536 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:09.629899+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 75841536 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:10.630120+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342564864 unmapped: 75841536 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:11.630343+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292369 data_alloc: 218103808 data_used: 2252800
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 75833344 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:12.630494+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 75825152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:13.630619+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 75825152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:14.630895+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 75825152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:15.631053+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 75825152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:16.631210+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292369 data_alloc: 218103808 data_used: 2252800
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 75825152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:17.631394+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 75825152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:18.631533+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 75825152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:19.631742+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342589440 unmapped: 75816960 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:20.631940+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 75808768 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:21.632130+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292369 data_alloc: 218103808 data_used: 2252800
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 75808768 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:22.632286+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 75808768 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:23.632452+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342597632 unmapped: 75808768 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:24.632609+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 75800576 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:25.632754+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 75800576 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:26.632886+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292369 data_alloc: 218103808 data_used: 2252800
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 75800576 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:27.633047+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342605824 unmapped: 75800576 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:28.633190+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 75792384 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:29.633386+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 75792384 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:30.633595+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 75792384 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:31.633763+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292369 data_alloc: 218103808 data_used: 2252800
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 75792384 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:32.633943+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 75776000 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:33.634097+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 75776000 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:34.634242+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 75776000 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:35.634452+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 75776000 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:36.634636+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292369 data_alloc: 218103808 data_used: 2252800
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 75776000 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:37.634800+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 75776000 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:38.634994+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 75776000 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:39.635201+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342630400 unmapped: 75776000 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:40.635397+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 75767808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:41.635554+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292369 data_alloc: 218103808 data_used: 2252800
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 75767808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:42.635692+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 75767808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:43.635868+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 75767808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:44.636056+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 75767808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:45.636222+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342638592 unmapped: 75767808 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:46.636416+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292369 data_alloc: 218103808 data_used: 2252800
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342646784 unmapped: 75759616 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:47.636630+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342646784 unmapped: 75759616 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:48.636799+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 75751424 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:49.636989+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 75751424 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:50.637208+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342654976 unmapped: 75751424 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:51.637367+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292369 data_alloc: 218103808 data_used: 2252800
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92cb23c00
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 66.153221130s of 66.380546570s, submitted: 71
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342671360 unmapped: 75735040 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 handle_osd_map epochs [315,316], i have 315, src has [1,316]
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _renew_subs
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 315 handle_osd_map epochs [316,316], i have 316, src has [1,316]
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:52.637489+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e2ee8000/0x0/0x4ffc00000, data 0x5e1d8a5/0x5fd5000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [0,0,1])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 316 ms_handle_reset con 0x55f92cb23c00 session 0x55f9280cb0e0
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e36e5000/0x0/0x4ffc00000, data 0x561f476/0x57d8000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 75677696 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:53.637628+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e36e5000/0x0/0x4ffc00000, data 0x561f476/0x57d8000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 75677696 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:54.637784+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 316 heartbeat osd_stat(store_statfs(0x4e36e5000/0x0/0x4ffc00000, data 0x561f476/0x57d8000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 75677696 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:55.637926+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342728704 unmapped: 75677696 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:56.638076+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4241495 data_alloc: 218103808 data_used: 2260992
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342736896 unmapped: 75669504 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:57.638325+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342736896 unmapped: 75669504 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:58.638471+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342736896 unmapped: 75669504 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:58:59.638859+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _renew_subs
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 75653120 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:00.639290+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e5000/0x0/0x4ffc00000, data 0x561f476/0x57d8000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 75653120 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:01.639447+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244469 data_alloc: 218103808 data_used: 2260992
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 75653120 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:02.639614+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 75653120 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:03.639779+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342753280 unmapped: 75653120 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:04.640051+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 75644928 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:05.640276+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 75644928 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:06.640556+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342761472 unmapped: 75644928 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:07.640711+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342769664 unmapped: 75636736 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:08.640886+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342777856 unmapped: 75628544 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:09.641053+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 75620352 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:10.641908+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 75612160 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:11.642112+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342794240 unmapped: 75612160 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:12.642329+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 75603968 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:13.642500+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 75603968 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:14.642657+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342802432 unmapped: 75603968 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:15.642889+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342810624 unmapped: 75595776 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:16.643033+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342810624 unmapped: 75595776 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:17.643260+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342810624 unmapped: 75595776 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:18.643435+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342810624 unmapped: 75595776 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:19.643620+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342810624 unmapped: 75595776 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:20.643799+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 75587584 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:21.644008+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 75587584 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:22.644231+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 75579392 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:23.644506+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 75579392 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:24.644697+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 75579392 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:25.644879+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 75579392 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:26.645077+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 75563008 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:27.645243+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 75563008 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:28.645435+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 75563008 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:29.645606+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 75563008 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:30.645788+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342843392 unmapped: 75563008 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:31.645951+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342851584 unmapped: 75554816 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:32.646091+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342851584 unmapped: 75554816 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:33.646297+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342851584 unmapped: 75554816 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:34.646436+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342851584 unmapped: 75554816 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:35.646604+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342851584 unmapped: 75554816 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:36.646806+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 75538432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:37.646994+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 75538432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:38.647168+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 75538432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:39.647356+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 75538432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:40.647601+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 75538432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:41.647741+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342867968 unmapped: 75538432 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:42.647925+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342876160 unmapped: 75530240 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:43.648078+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:44.648273+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342876160 unmapped: 75530240 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:45.648410+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342876160 unmapped: 75530240 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:46.648564+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342876160 unmapped: 75530240 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:47.648730+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342876160 unmapped: 75530240 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:48.648885+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 75522048 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:49.649075+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 75522048 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:50.649451+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 75522048 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:51.649645+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 75522048 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:52.649852+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 75522048 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:53.650020+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342900736 unmapped: 75505664 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:54.650154+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342900736 unmapped: 75505664 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:55.650322+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342900736 unmapped: 75505664 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:56.650515+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342908928 unmapped: 75497472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:57.650682+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342908928 unmapped: 75497472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:58.650853+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342908928 unmapped: 75497472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T09:59:59.650970+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342908928 unmapped: 75497472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:00.651143+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342908928 unmapped: 75497472 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:01.651323+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342917120 unmapped: 75489280 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:02.651450+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342917120 unmapped: 75489280 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:03.651598+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342917120 unmapped: 75489280 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:04.651730+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342917120 unmapped: 75489280 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:05.651887+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342917120 unmapped: 75489280 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:06.652096+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342917120 unmapped: 75489280 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:07.652249+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342925312 unmapped: 75481088 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:08.652463+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342925312 unmapped: 75481088 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:09.652625+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 75472896 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:10.652872+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 75472896 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:11.653045+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 75464704 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:12.653213+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 75464704 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:13.653381+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 75464704 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:14.653599+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 75464704 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 ms_handle_reset con 0x55f928883400 session 0x55f92a9961e0
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a332400
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:15.653759+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342949888 unmapped: 75456512 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:16.653881+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342949888 unmapped: 75456512 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:17.654026+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342958080 unmapped: 75448320 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:18.654159+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342958080 unmapped: 75448320 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:19.654379+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342958080 unmapped: 75448320 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:20.654581+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 ms_handle_reset con 0x55f929c80c00 session 0x55f92a28e780
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: handle_auth_request added challenge on 0x55f92a333800
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342958080 unmapped: 75448320 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:21.654734+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342958080 unmapped: 75448320 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:22.654920+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342958080 unmapped: 75448320 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:23.655071+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342966272 unmapped: 75440128 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:24.655248+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 75431936 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:25.655404+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342982656 unmapped: 75423744 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:26.655708+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342982656 unmapped: 75423744 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:27.655850+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342982656 unmapped: 75423744 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:28.655973+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342990848 unmapped: 75415552 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:29.656111+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342990848 unmapped: 75415552 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:30.656276+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342990848 unmapped: 75415552 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:31.656422+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342990848 unmapped: 75415552 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:32.656598+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342990848 unmapped: 75415552 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:33.656800+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342999040 unmapped: 75407360 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:34.656963+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342999040 unmapped: 75407360 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:35.657108+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342999040 unmapped: 75407360 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:36.657227+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 343007232 unmapped: 75399168 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:37.657355+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 343007232 unmapped: 75399168 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:38.657477+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 343007232 unmapped: 75399168 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: osd.0 317 heartbeat osd_stat(store_statfs(0x4e36e2000/0x0/0x4ffc00000, data 0x5620ed9/0x57db000, compress 0x0/0x0/0x0, omap 0x63e, meta 0x16d3f9c2), peers [1,2] op hist [])
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:39.657592+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 343015424 unmapped: 75390976 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: do_command 'config diff' '{prefix=config diff}'
Oct 11 10:01:13 compute-0 ceph-osd[88249]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 11 10:01:13 compute-0 ceph-osd[88249]: do_command 'config show' '{prefix=config show}'
Oct 11 10:01:13 compute-0 ceph-osd[88249]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 11 10:01:13 compute-0 ceph-osd[88249]: do_command 'counter dump' '{prefix=counter dump}'
Oct 11 10:01:13 compute-0 ceph-osd[88249]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:40.657745+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: do_command 'counter schema' '{prefix=counter schema}'
Oct 11 10:01:13 compute-0 ceph-osd[88249]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342474752 unmapped: 75931648 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:41.657862+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: prioritycache tune_memory target: 4294967296 mapped: 342581248 unmapped: 75825152 heap: 418406400 old mem: 2845415832 new mem: 2845415832
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 11 10:01:13 compute-0 ceph-osd[88249]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 11 10:01:13 compute-0 ceph-osd[88249]: bluestore.MempoolThread(0x55f9267efb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244629 data_alloc: 218103808 data_used: 2265088
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: tick
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_tickets
Oct 11 10:01:13 compute-0 ceph-osd[88249]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-11T10:00:42.658027+0000)
Oct 11 10:01:13 compute-0 ceph-osd[88249]: do_command 'log dump' '{prefix=log dump}'
Oct 11 10:01:13 compute-0 ceph-mon[74313]: pgmap v3615: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:13 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3815625780' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 11 10:01:13 compute-0 ceph-mon[74313]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 11 10:01:13 compute-0 ceph-mon[74313]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 11 10:01:13 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1335953903' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 11 10:01:13 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23045 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:13 compute-0 nova_compute[260935]: 2025-10-11 10:01:13.423 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 11 10:01:13 compute-0 nova_compute[260935]: 2025-10-11 10:01:13.424 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquired lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 11 10:01:13 compute-0 nova_compute[260935]: 2025-10-11 10:01:13.424 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 11 10:01:13 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 11 10:01:13 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4121564305' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 11 10:01:13 compute-0 nova_compute[260935]: 2025-10-11 10:01:13.642 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 11 10:01:13 compute-0 podman[455602]: 2025-10-11 10:01:13.75971748 +0000 UTC m=+0.068249890 container health_status 96c41c34a8595488a05ff0ec66d19e9d3179daee319e0810a903fd848db56f5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 11 10:01:14 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3616: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 11 10:01:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3265515020' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 11 10:01:14 compute-0 ceph-mon[74313]: from='client.23045 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:14 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4121564305' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 11 10:01:14 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3265515020' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 11 10:01:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 11 10:01:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2356458944' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 11 10:01:14 compute-0 nova_compute[260935]: 2025-10-11 10:01:14.731 2 DEBUG nova.network.neutron [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 11 10:01:14 compute-0 nova_compute[260935]: 2025-10-11 10:01:14.763 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Releasing lock "refresh_cache-b75d8ded-515b-48ff-a6b6-28df88878996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 11 10:01:14 compute-0 nova_compute[260935]: 2025-10-11 10:01:14.764 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] [instance: b75d8ded-515b-48ff-a6b6-28df88878996] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 11 10:01:14 compute-0 nova_compute[260935]: 2025-10-11 10:01:14.765 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:01:14 compute-0 nova_compute[260935]: 2025-10-11 10:01:14.766 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:01:14 compute-0 nova_compute[260935]: 2025-10-11 10:01:14.766 2 DEBUG nova.compute.manager [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 11 10:01:14 compute-0 nova_compute[260935]: 2025-10-11 10:01:14.767 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:01:14 compute-0 nova_compute[260935]: 2025-10-11 10:01:14.793 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 10:01:14 compute-0 nova_compute[260935]: 2025-10-11 10:01:14.794 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 10:01:14 compute-0 nova_compute[260935]: 2025-10-11 10:01:14.794 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 10:01:14 compute-0 nova_compute[260935]: 2025-10-11 10:01:14.795 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 11 10:01:14 compute-0 nova_compute[260935]: 2025-10-11 10:01:14.796 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 10:01:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 11 10:01:14 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2390139054' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 11 10:01:14 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 10:01:15 compute-0 ceph-mon[74313]: pgmap v3616: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2356458944' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 11 10:01:15 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2390139054' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 11 10:01:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 10:01:15 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2143317368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 10:01:15 compute-0 nova_compute[260935]: 2025-10-11 10:01:15.259 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 10:01:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 10:01:15.261 162815 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 10:01:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 10:01:15.261 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 10:01:15 compute-0 ovn_metadata_agent[162810]: 2025-10-11 10:01:15.261 162815 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 10:01:15 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23057 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:15 compute-0 nova_compute[260935]: 2025-10-11 10:01:15.364 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 10:01:15 compute-0 nova_compute[260935]: 2025-10-11 10:01:15.365 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 10:01:15 compute-0 nova_compute[260935]: 2025-10-11 10:01:15.365 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 10:01:15 compute-0 nova_compute[260935]: 2025-10-11 10:01:15.368 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 10:01:15 compute-0 nova_compute[260935]: 2025-10-11 10:01:15.368 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 10:01:15 compute-0 nova_compute[260935]: 2025-10-11 10:01:15.372 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 10:01:15 compute-0 nova_compute[260935]: 2025-10-11 10:01:15.372 2 DEBUG nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 11 10:01:15 compute-0 nova_compute[260935]: 2025-10-11 10:01:15.531 2 WARNING nova.virt.libvirt.driver [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 11 10:01:15 compute-0 nova_compute[260935]: 2025-10-11 10:01:15.533 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2591MB free_disk=59.83064270019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 11 10:01:15 compute-0 nova_compute[260935]: 2025-10-11 10:01:15.533 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 11 10:01:15 compute-0 nova_compute[260935]: 2025-10-11 10:01:15.533 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 11 10:01:15 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 11 10:01:15 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1543097171' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 11 10:01:15 compute-0 systemd[1]: Starting Hostname Service...
Oct 11 10:01:15 compute-0 nova_compute[260935]: 2025-10-11 10:01:15.872 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance c176845c-89c0-4038-ba22-4ee79bd3ebfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 10:01:15 compute-0 nova_compute[260935]: 2025-10-11 10:01:15.873 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance b75d8ded-515b-48ff-a6b6-28df88878996 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 10:01:15 compute-0 nova_compute[260935]: 2025-10-11 10:01:15.874 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Instance 52be16b4-343a-4fd4-9041-39069a1fde2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 11 10:01:15 compute-0 nova_compute[260935]: 2025-10-11 10:01:15.874 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 11 10:01:15 compute-0 nova_compute[260935]: 2025-10-11 10:01:15.875 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 11 10:01:15 compute-0 systemd[1]: Started Hostname Service.
Oct 11 10:01:16 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3617: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:16 compute-0 nova_compute[260935]: 2025-10-11 10:01:16.092 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 11 10:01:16 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2143317368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 10:01:16 compute-0 ceph-mon[74313]: from='client.23057 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:16 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1543097171' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 11 10:01:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 11 10:01:16 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2709831987' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 11 10:01:16 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 11 10:01:16 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1285388538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 10:01:16 compute-0 nova_compute[260935]: 2025-10-11 10:01:16.585 2 DEBUG oslo_concurrency.processutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 11 10:01:16 compute-0 nova_compute[260935]: 2025-10-11 10:01:16.592 2 DEBUG nova.compute.provider_tree [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed in ProviderTree for provider: ead2f521-4d5d-46d9-864c-1aac19134114 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 11 10:01:16 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23065 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:17 compute-0 nova_compute[260935]: 2025-10-11 10:01:17.030 2 DEBUG nova.scheduler.client.report [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Inventory has not changed for provider ead2f521-4d5d-46d9-864c-1aac19134114 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 11 10:01:17 compute-0 nova_compute[260935]: 2025-10-11 10:01:17.032 2 DEBUG nova.compute.resource_tracker [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 11 10:01:17 compute-0 nova_compute[260935]: 2025-10-11 10:01:17.032 2 DEBUG oslo_concurrency.lockutils [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 11 10:01:17 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 11 10:01:17 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/766307108' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 11 10:01:17 compute-0 ceph-mon[74313]: pgmap v3617: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:17 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2709831987' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 11 10:01:17 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/1285388538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 11 10:01:17 compute-0 ceph-mon[74313]: from='client.23065 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:17 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/766307108' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 11 10:01:17 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23069 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:17 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23071 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:17 compute-0 nova_compute[260935]: 2025-10-11 10:01:17.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 10:01:17 compute-0 podman[456043]: 2025-10-11 10:01:17.80634716 +0000 UTC m=+0.099601556 container health_status 36b89670d37b6e4d100128573f74ed59bc5bd659891af070a10aca5bfd3737c9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 11 10:01:18 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3618: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 11 10:01:18 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/894828857' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 11 10:01:18 compute-0 ceph-mon[74313]: from='client.23069 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:18 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/894828857' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 11 10:01:18 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 11 10:01:18 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3601667168' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 11 10:01:18 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23077 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23079 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026278224099759067 of space, bias 1.0, pg target 0.788346722992772 quantized to 32 (current 32)
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 11 10:01:19 compute-0 ceph-mgr[74605]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 11 10:01:19 compute-0 ceph-mon[74313]: from='client.23071 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:19 compute-0 ceph-mon[74313]: pgmap v3618: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:19 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3601667168' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 11 10:01:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 11 10:01:19 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3268892904' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 11 10:01:19 compute-0 podman[456246]: 2025-10-11 10:01:19.950431792 +0000 UTC m=+0.093310458 container health_status 37554785d4534555102496262b9d87a5da2aa90eb27775651170e757960e2863 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible)
Oct 11 10:01:19 compute-0 nova_compute[260935]: 2025-10-11 10:01:19.970 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:01:19 compute-0 ceph-mon[74313]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 11 10:01:20 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3619: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:20 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 11 10:01:20 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3672572574' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 11 10:01:20 compute-0 podman[456284]: 2025-10-11 10:01:20.101763659 +0000 UTC m=+0.134036949 container health_status afb00892ff2397324b40b411131dd0cc158ab000b68a141243fa4095f9682143 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 11 10:01:20 compute-0 ceph-mon[74313]: from='client.23077 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:20 compute-0 ceph-mon[74313]: from='client.23079 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:20 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3268892904' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 11 10:01:20 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3672572574' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 11 10:01:20 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23085 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:20 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23087 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:21 compute-0 ceph-mon[74313]: pgmap v3619: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:21 compute-0 ceph-mon[74313]: from='client.23085 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 11 10:01:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3364814184' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 10:01:21 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct 11 10:01:21 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4063042410' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 11 10:01:22 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3620: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:22 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct 11 10:01:22 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2669852566' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 11 10:01:22 compute-0 ceph-mon[74313]: from='client.23087 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 11 10:01:22 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/3364814184' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 11 10:01:22 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4063042410' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 11 10:01:22 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/2669852566' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 11 10:01:22 compute-0 ceph-mgr[74605]: log_channel(audit) log [DBG] : from='client.23095 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:22 compute-0 nova_compute[260935]: 2025-10-11 10:01:22.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:01:22 compute-0 nova_compute[260935]: 2025-10-11 10:01:22.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 11 10:01:22 compute-0 nova_compute[260935]: 2025-10-11 10:01:22.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct 11 10:01:22 compute-0 nova_compute[260935]: 2025-10-11 10:01:22.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:01:22 compute-0 nova_compute[260935]: 2025-10-11 10:01:22.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 11 10:01:22 compute-0 nova_compute[260935]: 2025-10-11 10:01:22.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 11 10:01:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 11 10:01:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4038793769' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 11 10:01:23 compute-0 ceph-mon[74313]: pgmap v3620: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Oct 11 10:01:23 compute-0 ceph-mon[74313]: from='client.23095 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 11 10:01:23 compute-0 ceph-mon[74313]: from='client.? 192.168.122.100:0/4038793769' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 11 10:01:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Oct 11 10:01:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1113948363' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 11 10:01:23 compute-0 nova_compute[260935]: 2025-10-11 10:01:23.702 2 DEBUG oslo_service.periodic_task [None req-a87ac9a0-f2fd-4bd2-b60b-a9aeb875ad09 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 11 10:01:23 compute-0 ceph-mon[74313]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Oct 11 10:01:23 compute-0 ceph-mon[74313]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/935237358' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 11 10:01:24 compute-0 ceph-mgr[74605]: log_channel(cluster) log [DBG] : pgmap v3621: 321 pgs: 321 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
